Chelsea707 committed
Commit 1aed340 · verified · 1 Parent(s): 6f8fc69

Add Batch 7a75b12f-4d29-42c1-b964-1ed8de091c33 data

This view is limited to 50 files because it contains too many changes.   See raw diff
Files changed (50)
  1. .gitattributes +64 -0
  2. 2023/A Primal-Dual Framework for Transformers and Neural Networks/1e68b69c-bd65-4a45-b94a-7f6538c7708f_content_list.json +0 -0
  3. 2023/A Primal-Dual Framework for Transformers and Neural Networks/1e68b69c-bd65-4a45-b94a-7f6538c7708f_model.json +0 -0
  4. 2023/A Primal-Dual Framework for Transformers and Neural Networks/1e68b69c-bd65-4a45-b94a-7f6538c7708f_origin.pdf +3 -0
  5. 2023/A Primal-Dual Framework for Transformers and Neural Networks/full.md +0 -0
  6. 2023/A Primal-Dual Framework for Transformers and Neural Networks/images.zip +3 -0
  7. 2023/A Primal-Dual Framework for Transformers and Neural Networks/layout.json +0 -0
  8. 2023/A System for Morphology-Task Generalization via Unified Representation and Behavior Distillation/2f899a2b-cc61-4033-8a1e-bab8902575cc_content_list.json +0 -0
  9. 2023/A System for Morphology-Task Generalization via Unified Representation and Behavior Distillation/2f899a2b-cc61-4033-8a1e-bab8902575cc_model.json +0 -0
  10. 2023/A System for Morphology-Task Generalization via Unified Representation and Behavior Distillation/2f899a2b-cc61-4033-8a1e-bab8902575cc_origin.pdf +3 -0
  11. 2023/A System for Morphology-Task Generalization via Unified Representation and Behavior Distillation/full.md +0 -0
  12. 2023/A System for Morphology-Task Generalization via Unified Representation and Behavior Distillation/images.zip +3 -0
  13. 2023/A System for Morphology-Task Generalization via Unified Representation and Behavior Distillation/layout.json +0 -0
  14. 2023/A Unified Algebraic Perspective on Lipschitz Neural Networks/0ddbb3fc-a700-465f-9a2b-cad5bec5a26e_content_list.json +0 -0
  15. 2023/A Unified Algebraic Perspective on Lipschitz Neural Networks/0ddbb3fc-a700-465f-9a2b-cad5bec5a26e_model.json +0 -0
  16. 2023/A Unified Algebraic Perspective on Lipschitz Neural Networks/0ddbb3fc-a700-465f-9a2b-cad5bec5a26e_origin.pdf +3 -0
  17. 2023/A Unified Algebraic Perspective on Lipschitz Neural Networks/full.md +435 -0
  18. 2023/A Unified Algebraic Perspective on Lipschitz Neural Networks/images.zip +3 -0
  19. 2023/A Unified Algebraic Perspective on Lipschitz Neural Networks/layout.json +0 -0
  20. 2023/A framework for benchmarking Class-out-of-distribution detection and its application to ImageNet/014e40f0-866e-4844-b9d6-b3de43291df6_content_list.json +1984 -0
  21. 2023/A framework for benchmarking Class-out-of-distribution detection and its application to ImageNet/014e40f0-866e-4844-b9d6-b3de43291df6_model.json +0 -0
  22. 2023/A framework for benchmarking Class-out-of-distribution detection and its application to ImageNet/014e40f0-866e-4844-b9d6-b3de43291df6_origin.pdf +3 -0
  23. 2023/A framework for benchmarking Class-out-of-distribution detection and its application to ImageNet/full.md +322 -0
  24. 2023/A framework for benchmarking Class-out-of-distribution detection and its application to ImageNet/images.zip +3 -0
  25. 2023/A framework for benchmarking Class-out-of-distribution detection and its application to ImageNet/layout.json +0 -0
  26. 2023/A probabilistic framework for task-aligned intra- and inter-area neural manifold estimation/e87cdc5c-ebb8-4276-85ac-98eed9409534_content_list.json +0 -0
  27. 2023/A probabilistic framework for task-aligned intra- and inter-area neural manifold estimation/e87cdc5c-ebb8-4276-85ac-98eed9409534_model.json +0 -0
  28. 2023/A probabilistic framework for task-aligned intra- and inter-area neural manifold estimation/e87cdc5c-ebb8-4276-85ac-98eed9409534_origin.pdf +3 -0
  29. 2023/A probabilistic framework for task-aligned intra- and inter-area neural manifold estimation/full.md +674 -0
  30. 2023/A probabilistic framework for task-aligned intra- and inter-area neural manifold estimation/images.zip +3 -0
  31. 2023/A probabilistic framework for task-aligned intra- and inter-area neural manifold estimation/layout.json +0 -0
  32. 2023/AANG _ Automating Auxiliary Learning/9218a6ea-0c66-424b-aec1-82d0a79e86c3_content_list.json +0 -0
  33. 2023/AANG _ Automating Auxiliary Learning/9218a6ea-0c66-424b-aec1-82d0a79e86c3_model.json +0 -0
  34. 2023/AANG _ Automating Auxiliary Learning/9218a6ea-0c66-424b-aec1-82d0a79e86c3_origin.pdf +3 -0
  35. 2023/AANG _ Automating Auxiliary Learning/full.md +700 -0
  36. 2023/AANG _ Automating Auxiliary Learning/images.zip +3 -0
  37. 2023/AANG _ Automating Auxiliary Learning/layout.json +0 -0
  38. 2023/ACMP_ Allen-Cahn Message Passing with Attractive and Repulsive Forces for Graph Neural Networks/d1c48036-5cc9-44ce-aeae-bb7b2deba30a_content_list.json +0 -0
  39. 2023/ACMP_ Allen-Cahn Message Passing with Attractive and Repulsive Forces for Graph Neural Networks/d1c48036-5cc9-44ce-aeae-bb7b2deba30a_model.json +0 -0
  40. 2023/ACMP_ Allen-Cahn Message Passing with Attractive and Repulsive Forces for Graph Neural Networks/d1c48036-5cc9-44ce-aeae-bb7b2deba30a_origin.pdf +3 -0
  41. 2023/ACMP_ Allen-Cahn Message Passing with Attractive and Repulsive Forces for Graph Neural Networks/full.md +0 -0
  42. 2023/ACMP_ Allen-Cahn Message Passing with Attractive and Repulsive Forces for Graph Neural Networks/images.zip +3 -0
  43. 2023/ACMP_ Allen-Cahn Message Passing with Attractive and Repulsive Forces for Graph Neural Networks/layout.json +0 -0
  44. 2023/Accurate Image Restoration with Attention Retractable Transformer/c75ec409-a9b0-4c65-a0dd-0defc55f9eb5_content_list.json +0 -0
  45. 2023/Accurate Image Restoration with Attention Retractable Transformer/c75ec409-a9b0-4c65-a0dd-0defc55f9eb5_model.json +0 -0
  46. 2023/Accurate Image Restoration with Attention Retractable Transformer/c75ec409-a9b0-4c65-a0dd-0defc55f9eb5_origin.pdf +3 -0
  47. 2023/Accurate Image Restoration with Attention Retractable Transformer/full.md +412 -0
  48. 2023/Accurate Image Restoration with Attention Retractable Transformer/images.zip +3 -0
  49. 2023/Accurate Image Restoration with Attention Retractable Transformer/layout.json +0 -0
  50. 2023/Active Learning in Bayesian Neural Networks with Balanced Entropy Learning Principle/5719f378-5a7e-413f-8da4-0390dd715659_content_list.json +0 -0
.gitattributes CHANGED
@@ -6008,3 +6008,67 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  2023/A[[:space:]]Laplace-inspired[[:space:]]Distribution[[:space:]]on[[:space:]]SO(3)[[:space:]]for[[:space:]]Probabilistic[[:space:]]Rotation[[:space:]]Estimation/35030d36-496a-4051-9f1b-c6eb641c8ab4_origin.pdf filter=lfs diff=lfs merge=lfs -text
  2023/A[[:space:]]Minimalist[[:space:]]Dataset[[:space:]]for[[:space:]]Systematic[[:space:]]Generalization[[:space:]]of[[:space:]]Perception,[[:space:]]Syntax,[[:space:]]and[[:space:]]Semantics/99bfc8d6-393c-4dd5-956f-bfee4aa8668a_origin.pdf filter=lfs diff=lfs merge=lfs -text
  2023/A[[:space:]]Model[[:space:]]or[[:space:]]603[[:space:]]Exemplars_[[:space:]]Towards[[:space:]]Memory-Efficient[[:space:]]Class-Incremental[[:space:]]Learning/4e97ac8a-8041-408b-8c69-b9855a34c746_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Primal-Dual[[:space:]]Framework[[:space:]]for[[:space:]]Transformers[[:space:]]and[[:space:]]Neural[[:space:]]Networks/1e68b69c-bd65-4a45-b94a-7f6538c7708f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]System[[:space:]]for[[:space:]]Morphology-Task[[:space:]]Generalization[[:space:]]via[[:space:]]Unified[[:space:]]Representation[[:space:]]and[[:space:]]Behavior[[:space:]]Distillation/2f899a2b-cc61-4033-8a1e-bab8902575cc_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Unified[[:space:]]Algebraic[[:space:]]Perspective[[:space:]]on[[:space:]]Lipschitz[[:space:]]Neural[[:space:]]Networks/0ddbb3fc-a700-465f-9a2b-cad5bec5a26e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]framework[[:space:]]for[[:space:]]benchmarking[[:space:]]Class-out-of-distribution[[:space:]]detection[[:space:]]and[[:space:]]its[[:space:]]application[[:space:]]to[[:space:]]ImageNet/014e40f0-866e-4844-b9d6-b3de43291df6_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]probabilistic[[:space:]]framework[[:space:]]for[[:space:]]task-aligned[[:space:]]intra-[[:space:]]and[[:space:]]inter-area[[:space:]]neural[[:space:]]manifold[[:space:]]estimation/e87cdc5c-ebb8-4276-85ac-98eed9409534_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/AANG[[:space:]]_[[:space:]]Automating[[:space:]]Auxiliary[[:space:]]Learning/9218a6ea-0c66-424b-aec1-82d0a79e86c3_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/ACMP_[[:space:]]Allen-Cahn[[:space:]]Message[[:space:]]Passing[[:space:]]with[[:space:]]Attractive[[:space:]]and[[:space:]]Repulsive[[:space:]]Forces[[:space:]]for[[:space:]]Graph[[:space:]]Neural[[:space:]]Networks/d1c48036-5cc9-44ce-aeae-bb7b2deba30a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Accurate[[:space:]]Image[[:space:]]Restoration[[:space:]]with[[:space:]]Attention[[:space:]]Retractable[[:space:]]Transformer/c75ec409-a9b0-4c65-a0dd-0defc55f9eb5_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Active[[:space:]]Learning[[:space:]]in[[:space:]]Bayesian[[:space:]]Neural[[:space:]]Networks[[:space:]]with[[:space:]]Balanced[[:space:]]Entropy[[:space:]]Learning[[:space:]]Principle/5719f378-5a7e-413f-8da4-0390dd715659_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Adversarial[[:space:]]Attacks[[:space:]]on[[:space:]]Adversarial[[:space:]]Bandits/0683a752-2d83-4558-a9d0-29d6eb8c1609_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Adversarial[[:space:]]Diversity[[:space:]]in[[:space:]]Hanabi/ad22260a-a0c4-4d93-9321-d4e978149f86_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Adversarial[[:space:]]Training[[:space:]]of[[:space:]]Self-supervised[[:space:]]Monocular[[:space:]]Depth[[:space:]]Estimation[[:space:]]against[[:space:]]Physical-World[[:space:]]Attacks/16fd338f-57c1-4efd-8f30-249270c6808f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/An[[:space:]]Image[[:space:]]is[[:space:]]Worth[[:space:]]One[[:space:]]Word_[[:space:]]Personalizing[[:space:]]Text-to-Image[[:space:]]Generation[[:space:]]using[[:space:]]Textual[[:space:]]Inversion/5dad06ee-a509-444c-9893-145cfabdf5d7_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Ask[[:space:]]Me[[:space:]]Anything_[[:space:]]A[[:space:]]simple[[:space:]]strategy[[:space:]]for[[:space:]]prompting[[:space:]]language[[:space:]]models/60853006-36ae-4407-8606-90564a79ae93_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Associative[[:space:]]Memory[[:space:]]Augmented[[:space:]]Asynchronous[[:space:]]Spatiotemporal[[:space:]]Representation[[:space:]]Learning[[:space:]]for[[:space:]]Event-based[[:space:]]Perception/a09d3d90-eb2c-49c3-980a-d8641d731e8d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/BC-IRL_[[:space:]]Learning[[:space:]]Generalizable[[:space:]]Reward[[:space:]]Functions[[:space:]]from[[:space:]]Demonstrations/7fc57236-41ff-422c-bb89-a11bafea916d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Benchmarking[[:space:]]Offline[[:space:]]Reinforcement[[:space:]]Learning[[:space:]]on[[:space:]]Real-Robot[[:space:]]Hardware/4001598d-d73d-4ad5-9ef5-b3c9fbcd3eb2_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Binding[[:space:]]Language[[:space:]]Models[[:space:]]in[[:space:]]Symbolic[[:space:]]Languages/76b36eac-c559-46bb-8e53-cac978aa10eb_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Building[[:space:]]a[[:space:]]Subspace[[:space:]]of[[:space:]]Policies[[:space:]]for[[:space:]]Scalable[[:space:]]Continual[[:space:]]Learning/2e8736a0-4f38-4902-8e4c-f21e818d521a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/CLIP-Dissect_[[:space:]]Automatic[[:space:]]Description[[:space:]]of[[:space:]]Neuron[[:space:]]Representations[[:space:]]in[[:space:]]Deep[[:space:]]Vision[[:space:]]Networks/67acd45c-cafb-4789-a9b6-b935fb6f3970_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/CROM_[[:space:]]Continuous[[:space:]]Reduced-Order[[:space:]]Modeling[[:space:]]of[[:space:]]PDEs[[:space:]]Using[[:space:]]Implicit[[:space:]]Neural[[:space:]]Representations/e3d8a59e-fa54-4e4e-bda0-6727b9857dfa_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/CUDA_[[:space:]]Curriculum[[:space:]]of[[:space:]]Data[[:space:]]Augmentation[[:space:]]for[[:space:]]Long-tailed[[:space:]]Recognition/9c4616fb-c356-40fa-8c1a-ac11ec133e75_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Can[[:space:]]We[[:space:]]Find[[:space:]]Nash[[:space:]]Equilibria[[:space:]]at[[:space:]]a[[:space:]]Linear[[:space:]]Rate[[:space:]]in[[:space:]]Markov[[:space:]]Games_/5ae022cb-f0ba-4387-8829-73d47083e418_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Canary[[:space:]]in[[:space:]]a[[:space:]]Coalmine_[[:space:]]Better[[:space:]]Membership[[:space:]]Inference[[:space:]]with[[:space:]]Ensembled[[:space:]]Adversarial[[:space:]]Queries/235119eb-41ff-4adf-8796-9e916c20277b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Capturing[[:space:]]the[[:space:]]Motion[[:space:]]of[[:space:]]Every[[:space:]]Joint_[[:space:]]3D[[:space:]]Human[[:space:]]Pose[[:space:]]and[[:space:]]Shape[[:space:]]Estimation[[:space:]]with[[:space:]]Independent[[:space:]]Tokens/b9e6b80b-bab0-4a3b-8127-df5f7baa9b52_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Certified[[:space:]]Training_[[:space:]]Small[[:space:]]Boxes[[:space:]]are[[:space:]]All[[:space:]]You[[:space:]]Need/dc453895-84bc-49f6-92b6-72b5b92f7230_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Choreographer_[[:space:]]Learning[[:space:]]and[[:space:]]Adapting[[:space:]]Skills[[:space:]]in[[:space:]]Imagination/292b174f-1730-4078-be2b-d676bd8912d9_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Code[[:space:]]Translation[[:space:]]with[[:space:]]Compiler[[:space:]]Representations/420ba35d-cb83-467b-8801-4badb6669cd2_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/CodeGen_[[:space:]]An[[:space:]]Open[[:space:]]Large[[:space:]]Language[[:space:]]Model[[:space:]]for[[:space:]]Code[[:space:]]with[[:space:]]Multi-Turn[[:space:]]Program[[:space:]]Synthesis/f866b374-165f-4d77-aaa5-7618cd7953af_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Combinatorial-Probabilistic[[:space:]]Trade-Off_[[:space:]]P-Values[[:space:]]of[[:space:]]Community[[:space:]]Properties[[:space:]]Test[[:space:]]in[[:space:]]the[[:space:]]Stochastic[[:space:]]Block[[:space:]]Models/f4967b29-a1aa-4d61-9689-147faf2d6530_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Concept-level[[:space:]]Debugging[[:space:]]of[[:space:]]Part-Prototype[[:space:]]Networks/71b0f4c9-db36-4710-a950-f065dcd7c0ba_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Continual[[:space:]]Unsupervised[[:space:]]Disentangling[[:space:]]of[[:space:]]Self-Organizing[[:space:]]Representations/ef9c83a6-7358-4cb7-9c93-a05a709c9956_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Continual[[:space:]]evaluation[[:space:]]for[[:space:]]lifelong[[:space:]]learning_[[:space:]]Identifying[[:space:]]the[[:space:]]stability[[:space:]]gap/b140d6bb-b81b-4b0b-922b-f5de9a550964_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Continuized[[:space:]]Acceleration[[:space:]]for[[:space:]]Quasar[[:space:]]Convex[[:space:]]Functions[[:space:]]in[[:space:]]Non-Convex[[:space:]]Optimization/00ed299a-ea2a-4474-96cf-fe10f9dfcecc_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Continuous[[:space:]]PDE[[:space:]]Dynamics[[:space:]]Forecasting[[:space:]]with[[:space:]]Implicit[[:space:]]Neural[[:space:]]Representations/ff54bfb5-024d-44d7-b7ef-9fb64bb4d7c9_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Contrastive[[:space:]]Audio-Visual[[:space:]]Masked[[:space:]]Autoencoder/62ed2688-85dd-48a9-bbbc-e9dd2433618c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Corrupted[[:space:]]Image[[:space:]]Modeling[[:space:]]for[[:space:]]Self-Supervised[[:space:]]Visual[[:space:]]Pre-Training/d394fe00-1e64-4931-b9ad-783a4c757bec_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/D4FT_[[:space:]]A[[:space:]]Deep[[:space:]]Learning[[:space:]]Approach[[:space:]]to[[:space:]]Kohn-Sham[[:space:]]Density[[:space:]]Functional[[:space:]]Theory/3457142b-c9ce-47eb-98d3-76cee2916371_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/DASHA_[[:space:]]Distributed[[:space:]]Nonconvex[[:space:]]Optimization[[:space:]]with[[:space:]]Communication[[:space:]]Compression[[:space:]]and[[:space:]]Optimal[[:space:]]Oracle[[:space:]]Complexity/27124c60-d448-43c0-a49c-24ff74a37654_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/DEP-RL_[[:space:]]Embodied[[:space:]]Exploration[[:space:]]for[[:space:]]Reinforcement[[:space:]]Learning[[:space:]]in[[:space:]]Overactuated[[:space:]]and[[:space:]]Musculoskeletal[[:space:]]Systems/eda2d2ee-8115-4f6b-9e02-f00751004e17_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/DIFFormer_[[:space:]]Scalable[[:space:]](Graph)[[:space:]]Transformers[[:space:]]Induced[[:space:]]by[[:space:]]Energy[[:space:]]Constrained[[:space:]]Diffusion/34a4be05-9684-4a63-87b4-a3e4da9bfb81_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/DINO[[:space:]]as[[:space:]]a[[:space:]]von[[:space:]]Mises-Fisher[[:space:]]mixture[[:space:]]model/70822f69-41e8-4ee4-a92b-61dc1c53a1ea_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Data[[:space:]]Continuity[[:space:]]Matters_[[:space:]]Improving[[:space:]]Sequence[[:space:]]Modeling[[:space:]]with[[:space:]]Lipschitz[[:space:]]Regularizer/f2640baf-2b0e-44b9-b255-07540e15a39f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Decompositional[[:space:]]Generation[[:space:]]Process[[:space:]]for[[:space:]]Instance-Dependent[[:space:]]Partial[[:space:]]Label[[:space:]]Learning/d675d99e-73f0-4310-a347-7c38f37af6d7_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Denoising[[:space:]]Diffusion[[:space:]]Error[[:space:]]Correction[[:space:]]Codes/081f9f57-440f-4ede-85aa-7430cc493f50_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Depth[[:space:]]Separation[[:space:]]with[[:space:]]Multilayer[[:space:]]Mean-Field[[:space:]]Networks/6d55d074-841e-439e-89d7-4ead8bb8cd4b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Designing[[:space:]]BERT[[:space:]]for[[:space:]]Convolutional[[:space:]]Networks_[[:space:]]Sparse[[:space:]]and[[:space:]]Hierarchical[[:space:]]Masked[[:space:]]Modeling/bba8f955-9dd9-4755-b02c-b28dbfb556e0_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Deterministic[[:space:]]training[[:space:]]of[[:space:]]generative[[:space:]]autoencoders[[:space:]]using[[:space:]]invertible[[:space:]]layers/f0e3d27a-ac73-489c-85c6-716d10e7896a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/DiffEdit_[[:space:]]Diffusion-based[[:space:]]semantic[[:space:]]image[[:space:]]editing[[:space:]]with[[:space:]]mask[[:space:]]guidance/13b49b6d-cb67-4ebc-a7ab-6056580b1877_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Differentially[[:space:]]Private[[:space:]]$L_2$-Heavy[[:space:]]Hitters[[:space:]]in[[:space:]]the[[:space:]]Sliding[[:space:]]Window[[:space:]]Model/076c40fe-6c68-48d2-8c84-0db4344dea09_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Diffusion[[:space:]]Models[[:space:]]Already[[:space:]]Have[[:space:]]A[[:space:]]Semantic[[:space:]]Latent[[:space:]]Space/852c6ae9-3f26-4d91-a08d-8209e42d0481_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Diffusion[[:space:]]Posterior[[:space:]]Sampling[[:space:]]for[[:space:]]General[[:space:]]Noisy[[:space:]]Inverse[[:space:]]Problems/d8724a0e-2b55-4196-a229-a8d259a21a18_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Dirichlet-based[[:space:]]Uncertainty[[:space:]]Calibration[[:space:]]for[[:space:]]Active[[:space:]]Domain[[:space:]]Adaptation/f68fcc73-ddc2-497e-9f61-cd272f2f1df9_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Disentanglement[[:space:]]with[[:space:]]Biological[[:space:]]Constraints_[[:space:]]A[[:space:]]Theory[[:space:]]of[[:space:]]Functional[[:space:]]Cell[[:space:]]Types/8b19e51f-d053-4150-b5d3-c9ecb88ecae6_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Disparate[[:space:]]Impact[[:space:]]in[[:space:]]Differential[[:space:]]Privacy[[:space:]]from[[:space:]]Gradient[[:space:]]Misalignment/14999f45-172e-4f10-ab40-aeae4b36d82e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Distilling[[:space:]]Model[[:space:]]Failures[[:space:]]as[[:space:]]Directions[[:space:]]in[[:space:]]Latent[[:space:]]Space/5457e321-ca1d-4fc2-b80f-a7755d9b9b82_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Divide[[:space:]]to[[:space:]]Adapt_[[:space:]]Mitigating[[:space:]]Confirmation[[:space:]]Bias[[:space:]]for[[:space:]]Domain[[:space:]]Adaptation[[:space:]]of[[:space:]]Black-Box[[:space:]]Predictors/0a0db2e4-c7a7-4ae3-ad37-1fe90210d267_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/DocPrompting_[[:space:]]Generating[[:space:]]Code[[:space:]]by[[:space:]]Retrieving[[:space:]]the[[:space:]]Docs/bb9d7fce-e4d7-45b9-98ec-a91613b99637_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Does[[:space:]]Zero-Shot[[:space:]]Reinforcement[[:space:]]Learning[[:space:]]Exist_/571f2337-bf00-4be0-b280-65d4f0a033d6_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Domain[[:space:]]Generalization[[:space:]]via[[:space:]]Heckman-type[[:space:]]Selection[[:space:]]Models/5bf22757-b32b-402e-9ec7-afa1309bd593_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Domain-Indexing[[:space:]]Variational[[:space:]]Bayes_[[:space:]]Interpretable[[:space:]]Domain[[:space:]]Index[[:space:]]for[[:space:]]Domain[[:space:]]Adaptation/74d7ed10-26c9-4cba-8729-58d5bcd46230_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Dual[[:space:]]Algorithmic[[:space:]]Reasoning/aa5e90d2-cb24-4180-94a5-a356836f7ee6_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/EA-HAS-Bench_[[:space:]]Energy-aware[[:space:]]Hyperparameter[[:space:]]and[[:space:]]Architecture[[:space:]]Search[[:space:]]Benchmark/8879fc5c-f7ad-46fd-b220-61abe509bc30_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/EVA3D_[[:space:]]Compositional[[:space:]]3D[[:space:]]Human[[:space:]]Generation[[:space:]]from[[:space:]]2D[[:space:]]Image[[:space:]]Collections/8ec08eff-40aa-4383-9546-92878d0487f7_origin.pdf filter=lfs diff=lfs merge=lfs -text
2023/A Primal-Dual Framework for Transformers and Neural Networks/1e68b69c-bd65-4a45-b94a-7f6538c7708f_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Primal-Dual Framework for Transformers and Neural Networks/1e68b69c-bd65-4a45-b94a-7f6538c7708f_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Primal-Dual Framework for Transformers and Neural Networks/1e68b69c-bd65-4a45-b94a-7f6538c7708f_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:31df18019db913f9f0ad7a28aadf58cc31fb0705702bcf5df390a7598b6b1e9c
+ size 1609645
2023/A Primal-Dual Framework for Transformers and Neural Networks/full.md ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Primal-Dual Framework for Transformers and Neural Networks/images.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ca8e58215e5980d0360fc14e6764b4d2a064e17e211761bdfb338dd52b4268ea
+ size 1686405
2023/A Primal-Dual Framework for Transformers and Neural Networks/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A System for Morphology-Task Generalization via Unified Representation and Behavior Distillation/2f899a2b-cc61-4033-8a1e-bab8902575cc_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A System for Morphology-Task Generalization via Unified Representation and Behavior Distillation/2f899a2b-cc61-4033-8a1e-bab8902575cc_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A System for Morphology-Task Generalization via Unified Representation and Behavior Distillation/2f899a2b-cc61-4033-8a1e-bab8902575cc_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cda8e78e71afed628f5dd8c88927d123e3b6fc8587cef006f104866b91eee955
+ size 4257307
2023/A System for Morphology-Task Generalization via Unified Representation and Behavior Distillation/full.md ADDED
The diff for this file is too large to render. See raw diff
 
2023/A System for Morphology-Task Generalization via Unified Representation and Behavior Distillation/images.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bc5ee1c2553734b07fb129d268a0d9b7881d1f4dd2fdf5299e47676e250a4646
+ size 1827999
2023/A System for Morphology-Task Generalization via Unified Representation and Behavior Distillation/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Unified Algebraic Perspective on Lipschitz Neural Networks/0ddbb3fc-a700-465f-9a2b-cad5bec5a26e_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Unified Algebraic Perspective on Lipschitz Neural Networks/0ddbb3fc-a700-465f-9a2b-cad5bec5a26e_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Unified Algebraic Perspective on Lipschitz Neural Networks/0ddbb3fc-a700-465f-9a2b-cad5bec5a26e_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2f9306462385f39ddd90c34d3c9471df4365d522fd3c5ac49c8199e30c5db0e0
+ size 379208
2023/A Unified Algebraic Perspective on Lipschitz Neural Networks/full.md ADDED
@@ -0,0 +1,435 @@
# A UNIFIED ALGEBRAIC PERSPECTIVE ON LIPSCHITZ NEURAL NETWORKS

Alexandre Araujo\*<sup>1</sup>, Aaron Havens\*<sup>2</sup>, Blaise Delattre<sup>3,4</sup>, Alexandre Allauzen<sup>3,5</sup> and Bin Hu<sup>2</sup>

<sup>1</sup> INRIA, Ecole Normale Supérieure, CNRS, PSL University, Paris, France
<sup>2</sup> CSL & ECE, University of Illinois Urbana-Champaign, IL, USA
<sup>3</sup> Miles Team, LAMSADE, Université Paris-Dauphine, PSL University, Paris, France
<sup>4</sup> Foxstream, Vaulx-en-Velin, France
<sup>5</sup> ESPCI PSL, Paris, France

# ABSTRACT

Important research efforts have focused on the design and training of neural networks with a controlled Lipschitz constant. The goal is to increase and sometimes guarantee the robustness against adversarial attacks. Recent promising techniques draw inspiration from different backgrounds to design 1-Lipschitz neural networks, to name a few: convex potential layers derive from the discretization of continuous dynamical systems, while Almost-Orthogonal-Layers rely on a tailored method for matrix rescaling. However, it is now important to consider these recent and promising contributions under a common theoretical lens in order to design new and improved layers. This paper introduces a novel algebraic perspective unifying various types of 1-Lipschitz neural networks, including the ones previously mentioned, along with methods based on orthogonality and spectral methods. Interestingly, we show that many existing techniques can be derived and generalized via finding analytical solutions of a common semidefinite programming (SDP) condition. We also prove that AOL biases the scaled weight towards the set of orthogonal matrices in a precise mathematical sense. Moreover, our algebraic condition, combined with the Gershgorin circle theorem, readily leads to new and diverse parameterizations for 1-Lipschitz network layers. Our approach, called SDP-based Lipschitz Layers (SLL), allows us to design non-trivial yet efficient generalizations of convex potential layers. Finally, a comprehensive set of experiments on image classification shows that SLLs outperform previous approaches on certified robust accuracy. Code is available at github.com/araujoaalexandre/Lipschitz-SLL-Networks.
# 1 INTRODUCTION

The robustness of deep neural networks is nowadays a great challenge for establishing confidence in their decisions in real-life applications. Addressing this challenge requires guarantees on the stability of the prediction with respect to adversarial attacks. In this context, the Lipschitz constant of neural networks is a key property at the core of many recent advances. Along with the margin of the classifier, this property allows us to certify the robustness against worst-case adversarial perturbations. This certification is based on a sphere of stability within which the decision remains unchanged for any perturbation (Tsuzuku et al., 2018).

The design of 1-Lipschitz layers provides a successful approach to enforce this property for the whole neural network. For this purpose, many different techniques have been devised, such as spectral normalization (Miyato et al., 2018; Farnia et al., 2019), orthogonal parameterization (Trockman et al., 2021; Li et al., 2019; Singla et al., 2021; Yu et al., 2022; Xu et al., 2022), Convex Potential Layers (CPL) (Meunier et al., 2022), and Almost-Orthogonal-Layers (AOL) (Prach et al., 2022). While all these techniques share the same goal, their motivations and derivations can differ greatly, delivering different solutions. Nevertheless, raw experimental comparison fails to yield real insight into their respective performance, soundness, and, in the end, their possible complementarity. A question therefore stands as a barrier to in-depth analysis and future development:

# Are there common principles underlying the development of 1-Lipschitz Layers?
In this paper, we propose a novel perspective to answer this question based on a unified Semidefinite Programming (SDP) approach. We introduce a common algebraic condition underlying various types of methods like spectral normalization, orthogonality-based methods, AOL, and CPL. Our key insight is that this condition can be formulated as a unifying and simple SDP problem, and that the development of 1-Lipschitz architectures systematically arises by finding "analytical solutions" of this SDP. Our main contributions are summarized as follows.

- We provide a unifying algebraic perspective for 1-Lipschitz network layers by showing that existing techniques such as spectral normalization, orthogonal parameterization, AOL, and CPL can all be recast as solutions of the same simple SDP condition (Theorem 1 and related discussions). Consequently, any new analytical solution of our proposed SDP condition will immediately lead to new 1-Lipschitz network structures.
- Built upon the above algebraic viewpoint, we give a rigorous mathematical interpretation of AOL explaining how this method promotes "almost orthogonality" in training (Theorem 2).
- Based on our SDPs, a new family of 1-Lipschitz network structures termed SDP-based Lipschitz layers (SLL) is developed. Specifically, we apply the Gershgorin circle theorem to obtain new SDP solutions, leading to non-trivial extensions of CPL (Theorem 3). We also derive new SDP conditions to characterize SLL in a very general form (Theorem 4).
- Finally, we show, through a comprehensive set of experiments, that our new SDP-based Lipschitz layers outperform previous approaches on certified robust accuracy.

Our work is inspired by Fazlyab et al. (2019), which develops SDP conditions for the numerical estimation of Lipschitz constants of given neural networks. A main difference is that we focus on "analytical SDP solutions" which can be used to characterize 1-Lipschitz network structures.
# 2 RELATED WORK

In recent years, certified methods have been central to the development of trustworthy machine learning, especially for deep learning. Randomized Smoothing (Cohen et al., 2019; Salman et al., 2019) is one of the first defenses to offer provable robustness guarantees. The method extends a given classifier by a careful introduction of random noise to enhance its robustness. Although this method offers an interesting level of certified robustness, it suffers from important downsides such as the high computational cost of inference and some impossibility results from an information-theoretic perspective (Yang et al., 2020; Kumar et al., 2020).

Another approach to certify the robustness of a classifier is to control its Lipschitz constant (Hein et al., 2017; Tsuzuku et al., 2018). The main idea is to derive a certified radius in the feature space by upper bounding the margin of the classifier; see Proposition 1 of Tsuzuku et al. (2018) for more details. This radius, along with the Lipschitz constant of the network, can certify the robustness. In order to reduce the Lipschitz constant and obtain a non-trivial certified accuracy, Tsuzuku et al. (2018) and Leino et al. (2021) both upper bound the margin via a bound on the global Lipschitz constant; however, these bounds have proved to be loose. Instead of upper bounding the global Lipschitz constant, Huang et al. (2021b) leverages local information to get a tighter bound on the Lipschitz constant. Other works, rather than upper bounding the local or global Lipschitz constant, devised neural network architectures that are provably 1-Lipschitz. One of the first approaches in this direction consists of normalizing each layer by its spectral norm (Miyato et al., 2018; Farnia et al., 2019); each layer is then, by construction, 1-Lipschitz. Later, a body of research replaced the normalized weight matrix by an orthogonal matrix, improving upon the spectral normalization method by also preserving gradients (Li et al., 2019; Trockman et al., 2021; Singla et al., 2021; Yu et al., 2022; Xu et al., 2022). These methods constrain the parameters to be orthogonal during training. Specifically, the Cayley transform can be used to constrain the weights (Trockman et al., 2021) and, in a similar fashion, SOC (Singla et al., 2021) parameterizes its layers with the exponential of a skew-symmetric matrix, making them orthogonal. To reduce cost, Trockman et al. (2021), Yu et al. (2022), and Xu et al. (2022) orthogonalize their convolutional kernels in the Fourier domain.
More recently, a work by Meunier et al. (2022) has studied Lipschitz networks from a dynamical system perspective. Starting from the continuous view of a residual network, they showed that the parameterizations with the Cayley transform (Trockman et al., 2021) and SOC (Singla et al., 2021) correspond respectively to two specific discretization schemes of the continuous flow. Furthermore, a new layer is derived from convex potential flows to ensure the 1-Lipschitz property<sup>1</sup>:

$$
z = x - \frac{2}{\|W\|_2^2} W \sigma\left(W^\top x + b\right), \tag{1}
$$

where $\|W\|_2$ is the spectral norm of the weight matrix $W$ and $\sigma$ is the ReLU activation function. In general, the training of orthogonal layers can be expensive. The Cayley approach involves a matrix inversion, and the implementation of SOC requires either an SVD or an iterative Taylor expansion. The CPL approach can be more efficient, although the computation of $\|W\|_2$ is still needed.

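To make Eq. (1) concrete, here is a minimal NumPy sketch of a dense CPL layer with an empirical Lipschitz check; the matrix `W`, bias `b`, and the dimension are illustrative stand-ins, not values from the paper.

```python
# Minimal CPL sketch (Eq. (1)); W, b and the dimension are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 8
W = rng.standard_normal((n, n))
b = rng.standard_normal(n)
relu = lambda v: np.maximum(v, 0.0)
lip2 = np.linalg.norm(W, 2) ** 2        # ||W||_2^2 (squared spectral norm)

def cpl(x):
    # z = x - (2 / ||W||_2^2) * W sigma(W^T x + b)
    return x - (2.0 / lip2) * W @ relu(W.T @ x + b)

# Empirical check: the layer never expands distances.
for _ in range(1000):
    x, y = rng.standard_normal(n), rng.standard_normal(n)
    assert np.linalg.norm(cpl(x) - cpl(y)) <= np.linalg.norm(x - y) + 1e-9
```
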
A recent work, Almost-Orthogonal-Layers (AOL) (Prach et al., 2022), came up with a middle ground: a new normalization which makes the layer 1-Lipschitz by favoring orthogonality. The fully-connected AOL layer is defined as $z = WDx + b$, where $D$ is a diagonal matrix given by:

$$
D = \operatorname{diag}\left(\sum_{j} |W^\top W|_{ij}\right)^{-\frac{1}{2}} \tag{2}
$$

They demonstrated that this layer is 1-Lipschitz and showed empirically that, after training, the Jacobian of the layer (with respect to $x$) is almost orthogonal, which facilitates training.

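The AOL rescaling takes only a few lines; the following NumPy sketch (with an illustrative random `W`) builds $D$ from Eq. (2) and checks that the rescaled weight $WD$ has spectral norm at most 1.

```python
# AOL rescaling sketch (Eq. (2)); W is an illustrative random matrix.
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((8, 8))

# D = diag(sum_j |W^T W|_ij)^(-1/2), well defined when W has no zero column.
row_sums = np.abs(W.T @ W).sum(axis=1)
D = np.diag(row_sums ** -0.5)

# The rescaled weight WD is 1-Lipschitz: its spectral norm is at most 1.
assert np.linalg.norm(W @ D, 2) <= 1.0 + 1e-9
```
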
Another source of inspiration is the application of convex programs for robustness certification of neural networks (Wong et al., 2018; Raghunathan et al., 2018; Fazlyab et al., 2019; Revay et al., 2020; Fazlyab et al., 2020; Wang et al., 2022). The most relevant work is Fazlyab et al. (2019), which leverages the quadratic constraint approach from control theory (Megretski et al., 1997) to formulate SDPs for numerically estimating the global Lipschitz constant of neural networks. It is possible to solve such SDPs numerically for training relatively small Lipschitz networks (Pauli et al., 2021). However, due to the restrictions of existing SDP solvers, scalability has been an issue when deploying such approaches to deep learning problems with large datasets. Our focus is on the design of Lipschitz network structures, and we avoid the scalability issue by solving SDPs analytically.
# 3 BACKGROUND

Notation. The $n \times n$ identity matrix and the $n \times n$ zero matrix are denoted as $I_{n}$ and $0_{n}$, respectively. The subscripts will be omitted when the dimension is clear from the context. When a matrix $P$ is negative semidefinite (definite), we will use the notation $P \preceq (\prec) 0$. When a matrix $P$ is positive semidefinite (definite), we will use the notation $P \succeq (\succ) 0$. Let $e_{i}$ denote the vector whose $i$-th entry is 1 and all other entries are 0. Given a collection of scalars $\{a_{i}\}_{i=1}^{n}$, we use the notation $\mathrm{diag}(a_{i})$ to denote the $n \times n$ diagonal matrix whose $(i, i)$-th entry is $a_{i}$. For a matrix $A$, the notations $A^{\mathsf{T}}$, $\|A\|_{2}$, $\mathrm{tr}(A)$, $\sigma_{\min}(A)$, $\|A\|_{F}$, and $\rho(A)$ stand for its transpose, largest singular value, trace, smallest singular value, Frobenius norm, and spectral radius, respectively.

Lipschitz functions. A function $f: \mathbb{R}^n \to \mathbb{R}^m$ is $L$-Lipschitz with respect to the $\ell_2$ norm iff it satisfies $\| f(x) - f(y) \| \leq L \| x - y \|$ for all $x, y \in \mathbb{R}^n$, where $\| \cdot \|$ stands for the $\ell_2$ norm. An important fact is that the robustness of a neural network can be certified based on its Lipschitz constant (Tsuzuku et al., 2018). In this paper, we are interested in the case where $L = 1$. Specifically, we consider the training of 1-Lipschitz neural networks. If each layer of a neural network is 1-Lipschitz, then the entire neural network is also 1-Lipschitz. The Lipschitz constant also satisfies the triangle inequality, hence a convex combination of 1-Lipschitz functions is again 1-Lipschitz.

Matrix cones: Positive semidefiniteness and diagonal dominance. Let $\mathbf{S}^n$ denote the set of all $n\times n$ real symmetric matrices. Let $\mathbf{S}_+^n\subset \mathbf{S}^n$ be the set of all $n\times n$ symmetric positive semidefinite matrices. It is well known that $\mathbf{S}_+^n$ is a closed, pointed convex cone in $\mathbf{S}^n$. With the trace inner product, $\mathbf{S}_+^n$ is also self-dual. Consider two symmetric matrices $A, B \in \mathbf{S}^n$ with $A \succeq B$; then $A - B \in \mathbf{S}_+^n$, and $\mathrm{tr}(A - B)$ provides a distance measure between $A$ and $B$. In addition, we have $\|A - B\|_F \leq \mathrm{tr}(A - B)$. Finally, the set of all $n \times n$ real symmetric diagonally dominant matrices with non-negative diagonal entries is denoted by $\mathbf{D}^n$. It is known that $\mathbf{D}^n$ forms a closed, pointed, full cone (Barker et al., 1975). Based on the Gershgorin circle theorem (Horn et al., 2012), we know $\mathbf{D}^n \subset \mathbf{S}_+^n$; it is also known that $\mathbf{D}^n$ is strictly smaller than $\mathbf{S}_+^n$ (Barker et al., 1975). For any $A \in \mathbf{D}^n$, we have $A_{ii} \geq \sum_{j:j \neq i} |A_{ij}|$. The requirement $A_{ii} \geq 0$ matters: without it, the set of real symmetric diagonally dominant matrices would not be a cone.
# 4 AN ALGEBRAIC UNIFICATION OF 1-LIPSCHITZ LAYERS

In this section, we present a unified algebraic perspective for various 1-Lipschitz layers (Spectral Normalization, Orthogonalization, AOL, and CPL) by developing a common SDP condition characterizing the Lipschitz property. Built upon our algebraic viewpoint, we also present a new mathematical interpretation explaining how AOL promotes orthogonality in training.

# 4.1 THE UNIFYING ALGEBRAIC CONDITION

First, we present an algebraic condition which can be used to unify the developments of existing techniques such as SN, AOL, and CPL. Our main theorem is formalized below.
Theorem 1. For any weight matrix $W \in \mathbb{R}^{m \times n}$, if there exists a nonsingular diagonal matrix $T$ such that $W^{\top}W - T \preceq 0$, then the following two statements hold.

1. The mapping $g(x) = W T^{-\frac{1}{2}} x + b$ is 1-Lipschitz.
2. The mapping $h(x) = x - 2WT^{-1}\sigma (W^{\top}x + b)$ is 1-Lipschitz if $\sigma$ is ReLU, tanh or sigmoid.
The proof of the above theorem and some related control-theoretic interpretations are provided in the appendix. This theorem allows us to design different 1-Lipschitz layers simply through various choices of $T$, in two important cases: a linear transformation with Statement 1, and a residual, non-linear block with Statement 2. Moreover, for any given weight matrix $W$, the condition $W^{\mathsf{T}}W \preceq T$ is linear in $T$, and hence can be viewed as an SDP condition with decision variable $T$. To emphasize the significance of this theorem, we now derive existing methods for designing 1-Lipschitz layers by choosing a specific $T$ in the SDP condition $W^{\mathsf{T}}W \preceq T$; the 1-Lipschitz property is then automatically obtained.

- Spectral Normalization (SN) corresponds to an almost trivial choice once we notice that $W^{\mathsf{T}}W \preceq \| W^{\mathsf{T}}W\|_{2}I \preceq \| W\|_{2}^{2}I$. Hence with $T = \| W\|_{2}^{2}I$, we build the SN layer $g(x) = WT^{-\frac{1}{2}}x + b = \frac{1}{\|W\|_2} Wx + b$.
- The orthogonality-based parameterization is obtained by setting $T = I$ and enforcing the equality $W^{\top}W = T = I$. Then obviously $g(x) = Wx + b$ is 1-Lipschitz.
- The AOL formula can be derived by letting $T = \mathrm{diag}(\sum_{j=1}^{n} |W^{\top}W|_{ij})$. With this choice, we have $T - W^{\top}W \in \mathbf{D}^{n} \subset \mathbf{S}_{+}^{n}$, hence $W^{\top}W \preceq T$. Then Statement 1 in Theorem 1 implies that the AOL layer, written as $g(x) = WT^{-\frac{1}{2}}x + b$, is 1-Lipschitz.
- CPL follows the same SN choice $T = \| W\| _2^2 I$, but with Statement 2 of Theorem 1. Hence we derive a different function $h(x) = x - \frac{2}{\|W\|_2^2} W\sigma (W^\top x + b)$ which is also 1-Lipschitz.
The above discussion illustrates the benefit of expressing all these methods within the same theoretical framework, offering a new tool to characterize the similarity between different methods. For instance, SN and CPL share the same choice $T = \| W\| _2^2 I$; the difference between them is which statement is used. Hence CPL can be viewed as the "residual version" of SN. Clearly, the residual network structure allows CPL to address the gradient vanishing issue more efficiently than SN. With the same approach, we can readily infer from our unified algebraic condition the "residual" counterparts of orthogonality-based parameterization and AOL. For orthogonality-based parameterization, if we enforce $W^{\mathsf{T}}W = T = I$ via methods such as SOC and ECO, then the function $h(x) = x - 2W\sigma (W^{\mathsf{T}}x + b)$ is 1-Lipschitz (by Statement 2 in Theorem 1). Similarly, if we choose $T = \mathrm{diag}\left(\sum_{j = 1}^{n}|W^{\mathsf{T}}W|_{ij}\right)$, then the function $h(x) = x - 2W\mathrm{diag}\left(\sum_{j = 1}^{n}|W^{\mathsf{T}}W|_{ij}\right)^{-1}\sigma (W^{\mathsf{T}}x + b)$ is also 1-Lipschitz. Therefore it is straightforward to create new classes of 1-Lipschitz network structures from existing ones.

Another important consequence of Theorem 1 concerns new layer development. Any new nonsingular diagonal solution $T$ of the SDP condition $W^{\mathsf{T}}W - T \preceq 0$ immediately leads to new 1-Lipschitz network structures of the form $g(x) = WT^{-\frac{1}{2}}x + b$ or $h(x) = x - 2WT^{-1}\sigma (W^{\mathsf{T}}x + b)$. Therefore, the development of 1-Lipschitz network structures can be reformulated as finding analytical solutions of the matrix inequality $W^{\mathsf{T}}W \preceq T$ with nonsingular diagonal $T$; as shown in the sketch below, candidate solutions are also easy to check numerically. As a matter of fact, the Gershgorin circle theorem can help to improve the existing choices of $T$ in a systematic way. In Section 5, we will discuss such new choices of $T$ and related applications to improve CPL. At this point, it is worth noting that, to develop deep Lipschitz networks, it is important to have an analytical formula for $T$, which enables fast computation of $WT^{-\frac{1}{2}}$ or $WT^{-1}$.

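As an illustration (ours, not from the paper), the following NumPy sketch treats Theorem 1 as a recipe: it takes the SN and AOL choices of $T$ from the list above, verifies the SDP condition $W^{\mathsf{T}}W \preceq T$ numerically, and confirms that the resulting scaled weight $WT^{-\frac{1}{2}}$ is a contraction. `W` and the dimension are arbitrary.

```python
# Theorem 1 as a recipe: any diagonal T with W^T W <= T gives a 1-Lipschitz
# g(x) = W T^(-1/2) x + b. W and the dimension are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n = 8
W = rng.standard_normal((n, n))

T_sn = np.linalg.norm(W, 2) ** 2 * np.eye(n)      # SN/CPL: T = ||W||_2^2 I
T_aol = np.diag(np.abs(W.T @ W).sum(axis=1))      # AOL: T = diag(sum_j |W^T W|_ij)

for T in (T_sn, T_aol):
    # SDP condition: T - W^T W is positive semidefinite.
    assert np.linalg.eigvalsh(T - W.T @ W).min() >= -1e-9
    # Statement 1: the scaled weight W T^(-1/2) has spectral norm <= 1.
    assert np.linalg.norm(W @ np.diag(np.diag(T) ** -0.5), 2) <= 1.0 + 1e-9
```
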
Theorem 1 is powerful in building a connection between 1-Lipschitz network layers and the algebraic condition $W^{\top}W \preceq T$. Next, we will take a closer look at this algebraic condition and provide a new mathematical interpretation explaining how AOL generates "almost orthogonal" weights.

Remark 1. The proof of Statement 2 in Theorem 1 relies on (Fazlyab et al., 2019, Lemma 1), which requires the activation function $\sigma$ to be slope-restricted on $[0,1]$. Therefore, Statement 2 cannot be applied to the case where $\sigma$ is the GroupSort activation function (Anil et al., 2019). In contrast, Statement 1 can be used to build neural networks with any 1-Lipschitz activation function.

# 4.2 A NEW MATHEMATICAL INTERPRETATION FOR AOL

In Prach et al. (2022), it is observed that AOL can learn "almost orthogonal" weights and hence overcome the gradient vanishing issue. As a matter of fact, the choice of $T$ used in AOL is optimal in a specific mathematical sense, as formalized in the next theorem.
Theorem 2. Given any $W \in \mathbb{R}^{m \times n}$ which does not have zero columns, define the set $\mathbf{T} = \{T : T \text{ is nonsingular diagonal, and } T - W^{\top}W \in \mathbf{D}^{n}\}$. Then the choice of $T$ for the AOL method actually satisfies

$$
T = \operatorname{diag}\Bigl(\sum_{j = 1}^{n}|W^{\mathsf{T}}W|_{ij}\Bigr) = \operatorname*{arg\,min}_{T\in \mathbf{T}}\operatorname{tr}\bigl(I - T^{-\frac{1}{2}}W^{\mathsf{T}}WT^{-\frac{1}{2}}\bigr) = \operatorname*{arg\,min}_{T\in \mathbf{T}}\bigl\| T^{-\frac{1}{2}}W^{\mathsf{T}}WT^{-\frac{1}{2}} - I\bigr\|_{F}.
$$
We defer the proof of the above result to the appendix and provide some interpretation here. Obviously, the quantity $\| T^{-\frac{1}{2}}W^{\mathsf{T}}WT^{-\frac{1}{2}} - I\|_{F}$ provides a measure of the distance between the scaled weight matrix $WT^{-\frac{1}{2}}$ and the set of $n\times n$ orthogonal matrices. If $\| T^{-\frac{1}{2}}W^{\mathsf{T}}WT^{-\frac{1}{2}} - I\|_{F} = 0$, then the scaled weight $WT^{-\frac{1}{2}}$ is orthogonal. If $\| T^{-\frac{1}{2}}W^{\mathsf{T}}WT^{-\frac{1}{2}} - I\|_{F}$ is small, it means that $WT^{-\frac{1}{2}}$ is "almost orthogonal" and close to the set of orthogonal matrices. Since we require $W^{\mathsf{T}}W - T \preceq 0$, we know that $I - T^{-\frac{1}{2}}W^{\mathsf{T}}WT^{-\frac{1}{2}}$ is a positive semidefinite matrix, and its trace provides an alternative metric quantifying the distance between $WT^{-\frac{1}{2}}$ and the set of orthogonal matrices. Importantly, we have the following inequality:

$$
\bigl\| T^{-\frac{1}{2}} W^{\mathsf{T}} W T^{-\frac{1}{2}} - I \bigr\|_{F} \leq \operatorname{tr}\bigl(I - T^{-\frac{1}{2}} W^{\mathsf{T}} W T^{-\frac{1}{2}}\bigr).
$$

If $\mathrm{tr}(I - T^{-\frac{1}{2}}W^{\mathsf{T}}WT^{-\frac{1}{2}})$ is small, then $\|T^{-\frac{1}{2}}W^{\mathsf{T}}WT^{-\frac{1}{2}} - I\|_{F}$ is also small, and $WT^{-\frac{1}{2}}$ is close to the set of orthogonal matrices. Therefore, one interpretation of Theorem 2 is that among all the nonsingular diagonal scaling matrices $T$ satisfying $T - W^{\mathsf{T}}W \in \mathbf{D}^{n}$, the choice of $T$ used in AOL makes the scaled weight matrix $WT^{-\frac{1}{2}}$ the closest to the set of orthogonal matrices. This provides a new mathematical explanation of how AOL can generate "almost orthogonal" weights.

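A quick numerical illustration of the trace bound (our own sketch, with a random `W` and the AOL choice of $T$):

```python
# Check ||T^(-1/2) W^T W T^(-1/2) - I||_F <= tr(I - T^(-1/2) W^T W T^(-1/2));
# this holds because I - M is PSD, so its Frobenius norm is at most its trace.
import numpy as np

rng = np.random.default_rng(3)
n = 8
W = rng.standard_normal((n, n))
T_inv_sqrt = np.diag(np.abs(W.T @ W).sum(axis=1) ** -0.5)  # T^(-1/2), AOL choice
M = T_inv_sqrt @ W.T @ W @ T_inv_sqrt

fro = np.linalg.norm(M - np.eye(n), "fro")
tr = np.trace(np.eye(n) - M)
assert fro <= tr + 1e-9
```
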
One potential issue for AOL is that $\mathbf{D}^n$ is typically much smaller than $\mathbf{S}_+^n$, and the condition $T - W^{\top}W\in \mathbf{D}^{n}$ may be too conservative compared to the original condition $T - W^{\top}W\in \mathbf{S}_{+}^{n}$ in Theorem 1. If we denote the set $\hat{\mathbf{T}} = \{T : T \text{ is nonsingular diagonal, and } T - W^{\top}W \in \mathbf{S}_{+}^{n}\}$, then, since $\mathbf{T} \subseteq \hat{\mathbf{T}}$, we have $\min_{T\in \hat{\mathbf{T}}}\mathrm{tr}(I - T^{-\frac{1}{2}}W^{\top}WT^{-\frac{1}{2}})\leq \min_{T\in \mathbf{T}}\mathrm{tr}(I - T^{-\frac{1}{2}}W^{\top}WT^{-\frac{1}{2}})$ and $\min_{T\in \hat{\mathbf{T}}}\| T^{-\frac{1}{2}}W^{\top}WT^{-\frac{1}{2}} - I\| _F\leq \min_{T\in \mathbf{T}}\| T^{-\frac{1}{2}}W^{\top}WT^{-\frac{1}{2}} - I\| _F$. This leads to interesting alternative choices of $T$ which can further promote orthogonality:

$$
T = \underset{T \in \hat{\mathbf{T}}}{\arg\min} \| T^{-\frac{1}{2}} W^{\mathsf{T}} W T^{-\frac{1}{2}} - I \|_{F} \quad \text{or} \quad T = \underset{T \in \hat{\mathbf{T}}}{\arg\min} \operatorname{tr}\bigl(I - T^{-\frac{1}{2}} W^{\mathsf{T}} W T^{-\frac{1}{2}}\bigr) \tag{3}
$$

Although (3) may be solved as a convex program on small toy examples, it is not practical to use such a choice of $T$ for large-scale problems. It is our hope that our theoretical discussion above will inspire more future research on developing new practical choices of $T$ for promoting orthogonality.
# 5 EXTENSIONS OF CPL: THE POWER OF GERSHGORIN CIRCLE THEOREM

In this section, we extend the original CPL layer (1) to a new family of 1-Lipschitz network structures by providing new analytical solutions to our condition $W^{\mathsf{T}}W \preceq T$. We term this general family of layers SDP-based Lipschitz layers (SLL), since the condition $W^{\mathsf{T}}W \preceq T$ can be viewed as an SDP with decision variable $T$. First of all, we extend the existing CPL (Eq. (1)) by applying more general choices of $T$ with Theorem 1. From the discussion after Theorem 1, we already know that we can use the choice $T = \mathrm{diag}(\sum_{j=1}^{n} |W^{\mathsf{T}}W|_{ij})$ to replace the original choice $T = \|W\|_2^2 I$. In this section, we will strengthen CPL with an even more general choice of $T$, based on a special version of the Gershgorin circle theorem. Specifically, we will apply (Horn et al., 2012, Corollary 6.1.6) to show the following result.
Theorem 3. Let $W$ be the weight matrix and suppose $T$ is a nonsingular diagonal matrix. If there exists some diagonal matrix $Q$ with all positive diagonal entries such that $(T - QW^{\mathsf{T}}WQ^{-1})$ is a real diagonally dominant matrix whose diagonal entries are all positive, then $T \succeq W^{\mathsf{T}}W$, and the function $h(x) = x - 2WT^{-1}\sigma (W^{\mathsf{T}}x + b)$ is 1-Lipschitz for $\sigma$ being ReLU, tanh or sigmoid.

We defer the proof of this result to the appendix. If we choose $Q = I$, the above theorem just recovers the choice of $T$ used in AOL, i.e. $T = \mathrm{diag}(\sum_{j=1}^{n}|W^{\mathsf{T}}W|_{ij})$. However, it is expected that the use of a more general $Q$ will allow us to train a less conservative 1-Lipschitz neural network due to the increased expressivity brought by these extra variables. We will present numerical results to demonstrate this. We also emphasize that $(T - QW^{\mathsf{T}}WQ^{-1})$ is typically not a symmetric matrix and hence is not in $\mathbf{D}^n$ even when it only has non-negative eigenvalues. However, this does not affect our proof of the positive semidefiniteness of $(T - W^{\mathsf{T}}W)$.
Application of Theorem 3. We can parameterize $Q^{-1} = \mathrm{diag}(q_i)$ with $q_{i} > 0$. Then the $(i,j)$-th entry of $QW^{\top}WQ^{-1}$ is equal to $(W^{\top}W)_{ij} q_j / q_i$. Hence we can just set the diagonal entries of $T$ as

$$
T_{ii} = \sum_{j = 1}^{n} |(W^{\top}W)_{ij} \, q_{j}/q_{i}| = \sum_{j = 1}^{n} |W^{\top}W|_{ij} \frac{q_{j}}{q_{i}}. \tag{4}
$$

This leads to our new choice $T = \mathrm{diag}(\sum_{j=1}^{n} |W^{\mathsf{T}} W|_{ij} q_j / q_i)$. Notice that the layer function $h(x) = x - 2WT^{-1}\sigma(W^{\mathsf{T}}x + b)$ has a residual network structure, so vanishing gradients are not expected to be an issue. Therefore, we can simultaneously optimize the training loss over $W$ and $\{q_i\}$. We will present a numerical study to demonstrate that such a training approach allows us to generate competitive results on training certifiably robust classifiers.
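
Concretely, Eq. (4) gives $T$ in closed form; the NumPy sketch below (ours, with random `W` and `q` as stand-ins for trained parameters) computes it and verifies $T \succeq W^{\mathsf{T}}W$ for an arbitrary positive $q$.

```python
# Gershgorin-based choice of T from Eq. (4); W and q are illustrative.
import numpy as np

rng = np.random.default_rng(4)
n = 8
W = rng.standard_normal((n, n))
q = rng.uniform(0.5, 2.0, size=n)       # stand-in for the trainable {q_i} > 0

# T_ii = sum_j |W^T W|_ij * q_j / q_i
A = np.abs(W.T @ W)
T = np.diag((A * q[None, :] / q[:, None]).sum(axis=1))

# Theorem 3: T - W^T W is positive semidefinite for any positive q.
assert np.linalg.eigvalsh(T - W.T @ W).min() >= -1e-9
```
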
SDP conditions for more general network structures. It is also worth mentioning that the SDP condition in Theorem 1 can be generalized to address the following more general structure:

$$
h(x) = Hx + G\sigma\left(W^{\mathsf{T}}x + b\right), \tag{5}
$$

where $H$ and $G$ will be determined by the weight $W$ in some manner, and the matrix dimensions are assumed to be compatible. If we choose $H = I$ and $G = -2WT^{-1}$, then (5) reduces to the residual network structure considered in Theorem 1. There are many other choices of $(H,G)$ which can also ensure (5) to be 1-Lipschitz. Our last theoretical result is a new SDP condition which generalizes Theorem 1 and provides a more comprehensive characterization of such choices of $(H,G)$.
Table 1: Natural and provable accuracy, number of parameters, and training time of several concurrent works and our SLL networks on the CIFAR10 dataset. All results for SLL networks are averaged over 3 training runs.

<table><tr><td rowspan="2">Models</td><td rowspan="2">Natural Accuracy</td><td colspan="4">Provable Accuracy (ε)</td><td rowspan="2">Number of Parameters</td><td rowspan="2">Time per Epoch (s)</td></tr><tr><td>36/255</td><td>72/255</td><td>108/255</td><td>1</td></tr><tr><td>GloRo (Leino et al., 2021)</td><td>77.0</td><td>58.4</td><td>-</td><td>-</td><td>-</td><td>8M</td><td>6</td></tr><tr><td>Local-Lip-B (Huang et al., 2021b)</td><td>77.4</td><td>60.7</td><td>39.0</td><td>20.4</td><td>-</td><td>2.3M</td><td>8</td></tr><tr><td>Cayley Large (Trockman et al., 2021)</td><td>74.6</td><td>61.4</td><td>46.4</td><td>32.1</td><td>-</td><td>21M</td><td>30</td></tr><tr><td>SOC 20 (Singla et al., 2021)</td><td>78.0</td><td>62.7</td><td>46.0</td><td>30.3</td><td>-</td><td>27M</td><td>52</td></tr><tr><td>SOC+ 20 (Singla et al., 2022b)</td><td>76.3</td><td>62.6</td><td>48.7</td><td>36.0</td><td>-</td><td>27M</td><td>52</td></tr><tr><td>CPL XL (Meunier et al., 2022)</td><td>78.5</td><td>64.4</td><td>48.0</td><td>33.0</td><td>-</td><td>236M</td><td>163</td></tr><tr><td>AOL Large (Prach et al., 2022)</td><td>71.6</td><td>64.0</td><td>56.4</td><td>49.0</td><td>23.7</td><td>136M</td><td>64</td></tr><tr><td>SLL Small</td><td>71.2</td><td>62.6</td><td>53.8</td><td>45.3</td><td>20.4</td><td>41M</td><td>20</td></tr><tr><td>SLL Medium</td><td>72.2</td><td>64.3</td><td>56.0</td><td>48.3</td><td>23.9</td><td>78M</td><td>35</td></tr><tr><td>SLL Large</td><td>72.7</td><td>65.0</td><td>57.3</td><td>49.7</td><td>25.4</td><td>118M</td><td>55</td></tr><tr><td>SLL X-Large</td><td>73.3</td><td>65.8</td><td>58.4</td><td>51.3</td><td>27.3</td><td>236M</td><td>105</td></tr></table>
153
+
+ Theorem 4. Let $n$ be the number of neurons. For any non-negative scalars $\{\lambda_i\}_{i=1}^n$, define
+
+ $$
+ \Lambda = \operatorname{diag}\left(\lambda_1, \lambda_2, \dots, \lambda_n\right). \tag{6}
+ $$
+
+ Suppose the activation function $\sigma$ is ReLU, tanh, or sigmoid. If there exist non-negative scalars $\{\lambda_i\}_{i=1}^n$ such that the following matrix inequality holds:
+
+ $$
+ \left[\begin{array}{cc} I - H^{\mathsf{T}}H & -H^{\mathsf{T}}G - W\Lambda \\ -G^{\mathsf{T}}H - \Lambda W^{\mathsf{T}} & 2\Lambda - G^{\mathsf{T}}G \end{array}\right] \succeq 0, \tag{7}
+ $$
+
+ then the network layer (5) is 1-Lipschitz, i.e., $\|h(x) - h(y)\| \leq \|x - y\|$ for all $(x,y)$.
+
+ The above theorem can be proved by modifying the argument used in Fazlyab et al. (2019, Theorem 1); we defer the detailed proof to the appendix. On one hand, if we choose $H = 0$, then our condition (7) reduces to a variant of Theorem 1 in Fazlyab et al. (2019). On the other hand, for the residual structure with $H = I$, we can choose $T = 2\Lambda^{-1}$ and $G = -W\Lambda = -2WT^{-1}$ to reduce (7) to our original algebraic condition $T \succeq W^{\mathsf{T}}W$. Therefore, Theorem 4 connects the SDP condition in Fazlyab et al. (2019) with our simple algebraic condition in Theorem 1. New 1-Lipschitz network layers can be obtained by providing new analytical solutions to (7), and we hope that (7) will lead to many more 1-Lipschitz network structures in the future.
+
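+ As a quick numerical sanity check of this connection (our own illustration, not part of the paper), the following NumPy snippet instantiates the residual choice $H = I$, $\Lambda = 2T^{-1}$, $G = -W\Lambda$ with the AOL choice of $T$, and verifies that the block matrix in (7) is positive semidefinite:
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ n, m = 8, 6
+ W = rng.standard_normal((n, m))
+
+ # AOL choice of T (q_i = 1), which guarantees T >= W^T W.
+ T = np.diag(np.abs(W.T @ W).sum(axis=1))
+ Lam = 2 * np.linalg.inv(T)          # Lambda = 2 T^{-1}, diagonal and non-negative
+ H = np.eye(n)
+ G = -W @ Lam                        # equals -2 W T^{-1}
+
+ M = np.block([
+     [np.eye(n) - H.T @ H, -H.T @ G - W @ Lam],
+     [-G.T @ H - Lam @ W.T, 2 * Lam - G.T @ G],
+ ])
+ print(np.linalg.eigvalsh(M).min() >= -1e-9)   # condition (7) holds: prints True
+ ```
+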
+ # 6 EXPERIMENTS
+
+ In this section, we present a comprehensive set of experiments with 1-Lipschitz neural networks based on our proposed SDP-based Lipschitz layer. More specifically, we build 1-Lipschitz neural networks based on the following layer:
+
+ $$
+ h(x) = x - 2W \operatorname{diag}\left(\sum_{j=1}^{n} \left|W^{\mathsf{T}} W\right|_{ij} q_j / q_i\right)^{-1} \sigma\left(W^{\mathsf{T}} x + b\right), \tag{8}
+ $$
+
+ where $W$ is a parameter matrix, either dense or convolutional, $\{q_i\}$ forms a diagonal scaling matrix as described in Theorem 3, and $\sigma(\cdot)$ is the ReLU nonlinearity. We use the same architectures proposed by Meunier et al. (2022) in small, medium, large, and x-large sizes. The architecture consists of several Conv-SLL and Linear-SLL blocks. For CIFAR-100, we use the Last Layer Normalization proposed by Singla et al. (2022b), which improves certified accuracy when the number of classes becomes large. Note that the layer in Equation (8) can be easily implemented with convolutions following the same scaling as in Prach et al. (2022). Our experiments focus on the impact of the Lipschitz layer structure on certified robustness; this complements a recent study on other aspects (e.g., projection pooling) of robust networks (Singla et al., 2022a).
+
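+ As an illustrative empirical check (ours, relying on the hypothetical `SLLDense` sketch given earlier), one can verify on random inputs that the layer in (8) never expands pairwise distances:
+
+ ```python
+ import torch
+
+ # Empirical sanity check: a 1-Lipschitz layer should contract pairwise distances.
+ torch.manual_seed(0)
+ layer = SLLDense(n_features=16, n_hidden=32)
+ x, y = torch.randn(1024, 16), torch.randn(1024, 16)
+ with torch.no_grad():
+     ratio = (layer(x) - layer(y)).norm(dim=1) / (x - y).norm(dim=1)
+ print(float(ratio.max()) <= 1.0 + 1e-5)  # prints True
+ ```
+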
+ Table 2: Natural and provable accuracy of several concurrent works and our SLL networks on the CIFAR-100 and TinyImageNet datasets. SLL results are averaged over 3 training runs.
+
+ <table><tr><td rowspan="2">Datasets</td><td rowspan="2">Models</td><td rowspan="2">Natural Accuracy</td><td colspan="4">Provable Accuracy (ε)</td></tr><tr><td>36/255</td><td>72/255</td><td>108/255</td><td>1</td></tr><tr><td rowspan="9">CIFAR-100</td><td>Cayley Large (Trockman et al., 2021)</td><td>43.3</td><td>29.2</td><td>18.8</td><td>11.0</td><td>-</td></tr><tr><td>SOC 20 (Singla et al., 2021)</td><td>48.3</td><td>34.4</td><td>22.7</td><td>14.2</td><td>-</td></tr><tr><td>SOC+ 20 (Singla et al., 2022b)</td><td>47.8</td><td>34.8</td><td>23.7</td><td>15.8</td><td>-</td></tr><tr><td>CPL XL (Meunier et al., 2022)</td><td>47.8</td><td>33.4</td><td>20.9</td><td>12.6</td><td>-</td></tr><tr><td>AOL Large (Prach et al., 2022)</td><td>43.7</td><td>33.7</td><td>26.3</td><td>20.7</td><td>7.8</td></tr><tr><td>SLL Small</td><td>44.9</td><td>34.7</td><td>26.8</td><td>20.9</td><td>8.1</td></tr><tr><td>SLL Medium</td><td>46.0</td><td>35.5</td><td>27.9</td><td>22.2</td><td>9.1</td></tr><tr><td>SLL Large</td><td>46.4</td><td>36.2</td><td>28.4</td><td>22.7</td><td>9.6</td></tr><tr><td>SLL X-Large</td><td>46.5</td><td>36.5</td><td>29.0</td><td>23.3</td><td>10.4</td></tr><tr><td rowspan="6">TinyImageNet</td><td>GloRo (Leino et al., 2021)</td><td>35.5</td><td>22.4</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Local-Lip-B (+MaxMin) (Huang et al., 2021b)</td><td>36.9</td><td>23.4</td><td>12.7</td><td>6.1</td><td>0.0</td></tr><tr><td>SLL Small</td><td>26.6</td><td>19.5</td><td>14.2</td><td>10.4</td><td>2.9</td></tr><tr><td>SLL Medium</td><td>30.4</td><td>22.3</td><td>15.9</td><td>11.6</td><td>3.0</td></tr><tr><td>SLL Large</td><td>31.3</td><td>23.0</td><td>16.9</td><td>12.3</td><td>3.3</td></tr><tr><td>SLL X-Large</td><td>32.1</td><td>23.2</td><td>16.8</td><td>12.0</td><td>3.2</td></tr></table>
+
+ Details on the architectures & hyper-parameters. Table 3 describes the details of our Small, Medium, Large, and X-Large architectures. We trained our networks with a batch size of 256 over 1000 epochs, with the data augmentation used by. We use the Adam optimizer (Kingma et al., 2014) with a learning rate of 0.01, parameters $\beta_1$ and $\beta_2$ equal to 0.5 and 0.9 respectively, and no weight decay. We use a piecewise triangular
+
+ Table 3: The SLL architectures used for the experiments, inspired by Meunier et al. (2022).
+
+ <table><tr><td></td><td>S</td><td>M</td><td>L</td><td>XL</td></tr><tr><td>Conv-SLL</td><td>20</td><td>30</td><td>90</td><td>120</td></tr><tr><td>Channels</td><td>45</td><td>60</td><td>60</td><td>70</td></tr><tr><td>Linear-SLL</td><td>7</td><td>10</td><td>15</td><td>15</td></tr><tr><td>Linear Features</td><td>2048</td><td>2048</td><td>4096</td><td>4096</td></tr></table>
+
+ learning rate scheduler to decay the learning rate during training. We use the cross-entropy loss as in Prach et al. (2022), with a temperature of 0.25 and an offset value of $\frac{3}{2}\sqrt{2}$.
+
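+ For reference, here is a sketch of this loss as we understand it from Prach et al. (2022): the offset is subtracted from the true-class logit as a margin, and a temperature-scaled cross-entropy is applied. The function name and exact form are our assumptions, with defaults matching the hyper-parameters above.
+
+ ```python
+ import torch.nn.functional as F
+
+ def offset_cross_entropy(logits, target, offset=1.5 * 2 ** 0.5, temperature=0.25):
+     # Subtract the offset (a margin) from the true-class logit, then apply a
+     # temperature-scaled cross-entropy.
+     z = logits - offset * F.one_hot(target, logits.shape[-1]).to(logits.dtype)
+     return F.cross_entropy(z / temperature, target)
+ ```
+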
+ Results in terms of Natural and Certified Accuracy on CIFAR-10/100. First, we evaluate our networks (SLL) on CIFAR-10 and CIFAR-100 and compare the results against recent 1-Lipschitz neural network structures: Cayley, SOC, SOC+, CPL, and AOL. We also compare SLL with two other Lipschitz training approaches (Leino et al., 2021; Huang et al., 2021b), which do not guarantee prescribed global Lipschitz bounds during training. Table 1 presents the natural and certified accuracy at different certification radii on CIFAR-10. For a fair comparison, the parameter count and training time per epoch of each method are also included in Table 1. Results on CIFAR-100 are reported in Table 2. Our approach outperforms existing 1-Lipschitz architectures, including AOL and CPL, on certified accuracy for all values of $\varepsilon$. We also observe that SLL-based 1-Lipschitz networks offer a good trade-off between natural and certified accuracy relative to previous approaches. A detailed comparison is given below.
+
+ Advantages of SLL over Cayley/SOC. In general, it is difficult to compare the expressive power of non-residual and residual networks. Hence we do not claim that, at the same model size, SLL is more expressive than Cayley or SOC, which are not residual networks in the first place. However, the current choice of $T$ in SLL is very cheap to compute and hence leads to a scalable approach that allows us to train very large models in a reasonable amount of time. For illustrative purposes, consider the comparison between SLL and Cayley in Table 1. SLL Small has more parameters than Cayley Large (41M vs. 21M) while being faster to train. Indeed, the Cayley approach involves computing an expensive orthogonal projection (with a matrix inverse), while SOC requires the computation of several convolutions at training and inference (from 6 to 12) to approximate the exponential of a convolution to a desired precision. Hence the training time per epoch of Cayley Large and SOC is actually longer than that of SLL Small. While being faster to train, SLL Small still outperforms Cayley Large and SOC for all three values of $\varepsilon$. In general, we think it is fair to claim that our approach is more scalable than previous approaches based on orthogonal layers, and that it allows the use of larger networks, which leads to improvements in certified robustness.
+
+ Table 5: Empirical robustness of our SLL-based classifiers on the CIFAR-10 and CIFAR-100 datasets, measured with AutoAttack. All results are averaged over 3 models.
+
+ <table><tr><td rowspan="2">Models</td><td colspan="4">CIFAR-10 - AutoAttack (ε)</td><td colspan="4">CIFAR-100 - AutoAttack (ε)</td></tr><tr><td>36/255</td><td>72/255</td><td>108/255</td><td>1</td><td>36/255</td><td>72/255</td><td>108/255</td><td>1</td></tr><tr><td>SLL Small</td><td>68.1</td><td>62.5</td><td>56.8</td><td>35.0</td><td>40.7</td><td>35.2</td><td>30.4</td><td>17.0</td></tr><tr><td>SLL Medium</td><td>69.1</td><td>63.8</td><td>58.4</td><td>37.0</td><td>41.5</td><td>36.4</td><td>31.5</td><td>17.9</td></tr><tr><td>SLL Large</td><td>69.8</td><td>64.5</td><td>59.1</td><td>37.9</td><td>42.1</td><td>37.1</td><td>32.6</td><td>18.7</td></tr><tr><td>SLL X-Large</td><td>70.3</td><td>65.4</td><td>60.2</td><td>39.4</td><td>42.7</td><td>37.8</td><td>33.2</td><td>19.5</td></tr></table>
+
+ Advantages of SLL over AOL/CPL. With careful tuning of the offset value, SLL outperforms AOL for all values of $\varepsilon$. We experiment with several offset values: $\sqrt{2}$, $\frac{3}{2}\sqrt{2}$, and $2\sqrt{2}$. Detailed results for these offset values are deferred to Table 6 in the appendix. In general, the offset value offers a trade-off between natural accuracy and robustness; thus, by choosing the offset value properly, SLL Large already achieves better results than AOL Large (notice that the training time per epoch for these two is roughly the same), and SLL X-Large improves further. We can also see that SLL Large outperforms CPL XL for all values of $\varepsilon$ while being faster to train. For larger values of $\varepsilon$, the gain of SLL over CPL is remarkable (over $10\%$).
+
+ Results on TinyImageNet. We have also implemented SLL on TinyImageNet (see Table 2). Other 1-Lipschitz network structures, including SOC, Cayley, AOL, and CPL, have not previously been tested on TinyImageNet; the state-of-the-art approach on this dataset is the local Lipschitz bound approach (Huang et al., 2021a). We can see that SLL significantly outperforms this local Lipschitz approach for larger values of $\varepsilon$, while generating similar results for small $\varepsilon$. Notice that the local Lipschitz approach is quite different from other 1-Lipschitz network methods in the sense that it provides no guarantee on the Lipschitz constant of the resulting network and hence does not produce 1-Lipschitz networks in the first place. Furthermore, because this approach does not guarantee a Lipschitz bound during training, much more computation must be performed during inference, making the certification process very time consuming. Table 4 reports the inference time on TinyImageNet for this local Lipschitz approach and SLL X-Large.
+
+ Table 4: Inference time for Local-Lip-B and SLL X-Large on the full TinyImageNet validation set with 4 GPUs.
+
+ <table><tr><td>Models</td><td>Inference Time</td></tr><tr><td>Local-Lip-B</td><td>41 min</td></tr><tr><td>SLL X-Large</td><td>8 sec</td></tr></table>
+
+ Results on Empirical Robustness. We also evaluate the empirical robustness of our approach against AutoAttack, the ensemble of diverse parameter-free attacks developed by Croce et al. (2020b). Table 5 reports the empirical robust accuracy for different levels of perturbation. AutoAttack is a strong empirical attack consisting of an ensemble of several known attacks: $\mathrm{APGD}_{\mathrm{CE}}$, $\mathrm{APGD}_{\mathrm{DLR}}$, FAB (Croce et al., 2020a), and Square (Andriushchenko et al., 2020). Nevertheless, we observe that the measured robustness is high and well above the certified accuracy. Indeed, on CIFAR-10, we observe a robustness "gain" of up to $4.5\%$, $9.6\%$, $14.1\%$, and $21.7\%$ for $\varepsilon = 36/255$, $72/255$, $108/255$, and $1$, respectively.
+
+ # 7 CONCLUSION
+
+ In this paper, we present a unifying framework for designing Lipschitz layers. Based on a novel algebraic perspective, we identify a common SDP condition underlying the development of spectral normalization, orthogonality-based methods, AOL, and CPL. Furthermore, we show that AOL and CPL can be re-derived and generalized using our theoretical framework. From this analysis, we introduce a family of SDP-based Lipschitz layers (SLL) that outperforms previous work. In the future, it will be interesting to investigate more expressive structures for $T$ and to extend our contributions to multi-layer neural networks.
+
+ # ACKNOWLEDGMENTS
+
+ This work was performed using HPC resources from GENCI-IDRIS (Grant 2021-AD011013259) and funded by the French National Research Agency (ANR SPEEDD-20-CE23-0025). A. Havens and B. Hu are generously supported by the NSF award CAREER-2048168.
+
+ # REFERENCES
+
+ Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion, and Matthias Hein. Square attack: a query-efficient black-box adversarial attack via random search. In European Conference on Computer Vision, 2020.
+ Cem Anil, James Lucas, and Roger Grosse. Sorting out Lipschitz function approximation. In International Conference on Machine Learning, 2019.
+ George Barker and David Carlson. Cones of diagonally dominant matrices. Pacific Journal of Mathematics, 57(1):15-32, 1975.
+ Jeremy Cohen, Elan Rosenfeld, and Zico Kolter. Certified adversarial robustness via randomized smoothing. In International Conference on Machine Learning, 2019.
+ Francesco Croce and Matthias Hein. Minimally distorted adversarial examples with a fast adaptive boundary attack. In International Conference on Machine Learning, 2020a.
+ Francesco Croce and Matthias Hein. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In International Conference on Machine Learning, 2020b.
+ Farzan Farnia, Jesse Zhang, and David Tse. Generalizable adversarial training via spectral normalization. In International Conference on Learning Representations, 2019.
+ Mahyar Fazlyab, Alexander Robey, Hamed Hassani, Manfred Morari, and George Pappas. Efficient and accurate estimation of Lipschitz constants for deep neural networks. In Advances in Neural Information Processing Systems, 2019.
+ Mahyar Fazlyab, Manfred Morari, and George J. Pappas. Safety verification and robustness analysis of neural networks via quadratic constraints and semidefinite programming. IEEE Transactions on Automatic Control, 2020.
+ Matthias Hein and Maksym Andriushchenko. Formal guarantees on the robustness of a classifier against adversarial manipulation. In Advances in Neural Information Processing Systems, 2017.
+ R. A. Horn and C. R. Johnson. Matrix Analysis. Cambridge University Press, 2012. ISBN 9781139788885.
+ Yujia Huang, Huan Zhang, Yuanyuan Shi, J. Zico Kolter, and Anima Anandkumar. Training certifiably robust neural networks with efficient local Lipschitz bounds. In Advances in Neural Information Processing Systems, 2021a.
+ Yujia Huang, Huan Zhang, Yuanyuan Shi, J. Zico Kolter, and Anima Anandkumar. Training certifiably robust neural networks with efficient local Lipschitz bounds. Advances in Neural Information Processing Systems, 34:22745-22757, 2021b.
+ Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2014.
+ Aounon Kumar, Alexander Levine, Tom Goldstein, and Soheil Feizi. Curse of dimensionality on randomized smoothing for certifiable robustness. In International Conference on Machine Learning, 2020.
+ Klas Leino, Zifan Wang, and Matt Fredrikson. Globally-robust neural networks. In International Conference on Machine Learning, 2021.
+ Qiyang Li, Saminul Haque, Cem Anil, James Lucas, Roger B. Grosse, and Joern-Henrik Jacobsen. Preventing gradient attenuation in Lipschitz constrained convolutional networks. In Advances in Neural Information Processing Systems, 2019.
+ A. Lur'e and V. Postnikov. On the theory of stability of control systems. Applied Mathematics and Mechanics, 8(3):246-248, 1944.
+ A. Megretski and A. Rantzer. System analysis via integral quadratic constraints. IEEE Transactions on Automatic Control, 42:819-830, 1997.
+ Laurent Meunier, Blaise Delattre, Alexandre Araujo, and Alexandre Allauzen. A dynamical system perspective for Lipschitz neural networks. In International Conference on Machine Learning, 2022.
+ Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. In International Conference on Learning Representations, 2018.
+ Patricia Pauli, Anne Koch, Julian Berberich, Paul Kohler, and Frank Allgower. Training robust neural networks using Lipschitz bounds. IEEE Control Systems Letters, 6:121-126, 2021.
+ Bernd Prach and Christoph H. Lampert. Almost-orthogonal layers for efficient general-purpose Lipschitz networks. In European Conference on Computer Vision, 2022.
+ Aditi Raghunathan, Jacob Steinhardt, and Percy S. Liang. Semidefinite relaxations for certifying robustness to adversarial examples. In Advances in Neural Information Processing Systems, 2018.
+ Max Revay, Ruigang Wang, and Ian R. Manchester. Lipschitz bounded equilibrium networks. arXiv preprint arXiv:2010.01732, 2020.
+ Hadi Salman, Jerry Li, Ilya Razenshteyn, Pengchuan Zhang, Huan Zhang, Sebastien Bubeck, and Greg Yang. Provably robust deep learning via adversarially trained smoothed classifiers. In Advances in Neural Information Processing Systems, 2019.
+ Sahil Singla and Soheil Feizi. Skew orthogonal convolutions. In International Conference on Machine Learning, 2021.
+ Sahil Singla and Soheil Feizi. Improved techniques for deterministic $\ell_2$ robustness. In Advances in Neural Information Processing Systems, 2022a.
+ Sahil Singla, Surbhi Singla, and Soheil Feizi. Improved deterministic $\ell_2$ robustness on CIFAR-10 and CIFAR-100. In International Conference on Learning Representations, 2022b.
+ Asher Trockman and J. Zico Kolter. Orthogonalizing convolutional layers with the Cayley transform. In International Conference on Learning Representations, 2021.
+ Yusuke Tsuzuku, Issei Sato, and Masashi Sugiyama. Lipschitz-margin training: Scalable certification of perturbation invariance for deep neural networks. In Advances in Neural Information Processing Systems, 2018.
+ Zi Wang, Gautam Prakriya, and Somesh Jha. A quantitative geometric approach to neural-network smoothness. In Advances in Neural Information Processing Systems, 2022.
+ Eric Wong and Zico Kolter. Provable defenses against adversarial examples via the convex outer adversarial polytope. In International Conference on Machine Learning, 2018.
+ Xiaojun Xu, Linyi Li, and Bo Li. LOT: Layer-wise orthogonal training on improving $\ell_2$ certified robustness. In Advances in Neural Information Processing Systems, 2022.
+ Greg Yang, Tony Duan, J. Edward Hu, Hadi Salman, Ilya Razenshteyn, and Jerry Li. Randomized smoothing of all shapes and sizes. In International Conference on Machine Learning, 2020.
+ Tan Yu, Jun Li, Yunfeng Cai, and Ping Li. Constructing orthogonal convolutions in an explicit manner. In International Conference on Learning Representations, 2022.
+
+ # A PROOFS
+
+ In this section, we present the proofs of the theorems stated in the paper.
+
+ # A.1 PROOF OF THEOREM 1
+
+ To prove the first statement in Theorem 1, notice that
+
+ $$
+ \left\| g(x) - g(y) \right\|^2 = \left\| W T^{-\frac{1}{2}} (x - y) \right\|^2 = (x - y)^{\mathsf{T}} T^{-\frac{1}{2}} W^{\mathsf{T}} W T^{-\frac{1}{2}} (x - y).
+ $$
+
+ Based on our algebraic condition $W^{\mathsf{T}}W \preceq T$, we immediately have
+
+ $$
+ \left\| g(x) - g(y) \right\|^2 \leq (x - y)^{\mathsf{T}} T^{-\frac{1}{2}} T T^{-\frac{1}{2}} (x - y) = \| x - y \|^2.
+ $$
+
+ Therefore, Statement 1 is true.
+
+ To prove Statement 2 in Theorem 1, we need the properties of the nonlinear activation function $\sigma$. Notice that the condition $W^{\mathsf{T}}W \preceq T$ ensures that all the diagonal entries of the nonsingular matrix $T$ are positive, so $T^{-1}$ is also a diagonal matrix with positive diagonal entries. All three activation functions listed in the theorem are slope-restricted on $[0,1]$, and hence the following inequality holds for any $(x', y')$ (Fazlyab et al., 2019, Lemma 1):
+
+ $$
+ \left[ \begin{array}{c} x' - y' \\ \sigma(x') - \sigma(y') \end{array} \right]^{\mathsf{T}} \left[ \begin{array}{cc} 0 & -T^{-1} \\ -T^{-1} & 2T^{-1} \end{array} \right] \left[ \begin{array}{c} x' - y' \\ \sigma(x') - \sigma(y') \end{array} \right] \leq 0.
+ $$
+
+ Setting $x' = W^{\mathsf{T}}x + b$ and $y' = W^{\mathsf{T}}y + b$, the above inequality becomes
+
+ $$
+ \left[ \begin{array}{c} W^{\mathsf{T}}(x - y) \\ \sigma(W^{\mathsf{T}}x + b) - \sigma(W^{\mathsf{T}}y + b) \end{array} \right]^{\mathsf{T}} \left[ \begin{array}{cc} 0 & -T^{-1} \\ -T^{-1} & 2T^{-1} \end{array} \right] \left[ \begin{array}{c} W^{\mathsf{T}}(x - y) \\ \sigma(W^{\mathsf{T}}x + b) - \sigma(W^{\mathsf{T}}y + b) \end{array} \right] \leq 0.
+ $$
+
+ We can rewrite the above inequality as
+
+ $$
+ \left[ \begin{array}{c} x - y \\ \sigma(W^{\mathsf{T}}x + b) - \sigma(W^{\mathsf{T}}y + b) \end{array} \right]^{\mathsf{T}} \left[ \begin{array}{cc} 0 & -WT^{-1} \\ -T^{-1}W^{\mathsf{T}} & 2T^{-1} \end{array} \right] \left[ \begin{array}{c} x - y \\ \sigma(W^{\mathsf{T}}x + b) - \sigma(W^{\mathsf{T}}y + b) \end{array} \right] \leq 0. \tag{9}
+ $$
+
+ Now we can apply the following argument:
+
+ $$
+ \begin{array}{l}
+ \left\| h(x) - h(y) \right\|^2 = \left\| x - y - 2WT^{-1}\left(\sigma(W^{\mathsf{T}}x + b) - \sigma(W^{\mathsf{T}}y + b)\right) \right\|^2 \\
+ \quad = \left[ \begin{array}{c} x - y \\ 2WT^{-1}\left(\sigma(W^{\mathsf{T}}x + b) - \sigma(W^{\mathsf{T}}y + b)\right) \end{array} \right]^{\mathsf{T}} \left[ \begin{array}{cc} I & -I \\ -I & I \end{array} \right] \left[ \begin{array}{c} x - y \\ 2WT^{-1}\left(\sigma(W^{\mathsf{T}}x + b) - \sigma(W^{\mathsf{T}}y + b)\right) \end{array} \right] \\
+ \quad = \left[ \begin{array}{c} x - y \\ \sigma(W^{\mathsf{T}}x + b) - \sigma(W^{\mathsf{T}}y + b) \end{array} \right]^{\mathsf{T}} \left[ \begin{array}{cc} I & -2WT^{-1} \\ -2T^{-1}W^{\mathsf{T}} & 4T^{-1}W^{\mathsf{T}}WT^{-1} \end{array} \right] \left[ \begin{array}{c} x - y \\ \sigma(W^{\mathsf{T}}x + b) - \sigma(W^{\mathsf{T}}y + b) \end{array} \right] \\
+ \quad \leq \left[ \begin{array}{c} x - y \\ \sigma(W^{\mathsf{T}}x + b) - \sigma(W^{\mathsf{T}}y + b) \end{array} \right]^{\mathsf{T}} \left[ \begin{array}{cc} I & -2WT^{-1} \\ -2T^{-1}W^{\mathsf{T}} & 4T^{-1} \end{array} \right] \left[ \begin{array}{c} x - y \\ \sigma(W^{\mathsf{T}}x + b) - \sigma(W^{\mathsf{T}}y + b) \end{array} \right],
+ \end{array}
+ $$
+
+ where the last step follows from the fact that our condition $W^{\mathsf{T}}W \preceq T$ implies $T^{-1}W^{\mathsf{T}}WT^{-1} \preceq T^{-1}$. Finally, adding twice the left-hand side of (9), which is non-positive, to the above bound shows
+
+ $$
+ \begin{array}{l}
+ \| h(x) - h(y) \|^2 \leq \left[ \begin{array}{c} x - y \\ \sigma(W^{\mathsf{T}}x + b) - \sigma(W^{\mathsf{T}}y + b) \end{array} \right]^{\mathsf{T}} \left[ \begin{array}{cc} I & 0 \\ 0 & 0 \end{array} \right] \left[ \begin{array}{c} x - y \\ \sigma(W^{\mathsf{T}}x + b) - \sigma(W^{\mathsf{T}}y + b) \end{array} \right] \\
+ \quad = \| x - y \|^2,
+ \end{array}
+ $$
+
+ which is the desired conclusion. Our proof is complete.
+
+ # A.2 PROOF OF THEOREM 2
+
+ Since $T$ is nonsingular diagonal and $T - W^{\mathsf{T}}W \in \mathbf{D}^{n}$, we must have $T_{ii} \geq \sum_{j} |W^{\mathsf{T}}W|_{ij}$. Given the following key relation:
+
+ $$
+ \mathrm{tr}\left(I - T^{-\frac{1}{2}} W^{\mathsf{T}} W T^{-\frac{1}{2}}\right) = \sum_{i} \left(1 - \frac{|W^{\mathsf{T}}W|_{ii}}{T_{ii}}\right),
+ $$
+
+ it is clear that we should choose the smallest feasible value of $T_{ii}$ for every $i$ in order to minimize $\mathrm{tr}(I - T^{-\frac{1}{2}}W^{\mathsf{T}}WT^{-\frac{1}{2}})$. Therefore, the AOL choice of $T$ minimizes $\mathrm{tr}(I - T^{-\frac{1}{2}}W^{\mathsf{T}}WT^{-\frac{1}{2}})$ over $T \in \mathbf{T}$. The proof of the last equality in Theorem 2 is similar. Denote $X = I - T^{-\frac{1}{2}}W^{\mathsf{T}}WT^{-\frac{1}{2}}$. For any $(i,j)$, the quantity $X_{ij}^{2}$ is always monotone non-decreasing in $T_{ii}$ and $T_{jj}$. To minimize $\|X\|_F$, we again choose the smallest value of every $T_{ii}$ under the constraint $T_{ii} \geq \sum_j |W^{\mathsf{T}}W|_{ij}$. This completes the proof.
+
+ # A.3 THE GERSHGORIN CIRCLE THEOREM AND PROOF OF THEOREM 3
+
+ Before proving Theorem 3, we state the Gershgorin circle theorem, a useful result from matrix analysis that locates the eigenvalues of a real (or complex) matrix (Horn et al., 2012, Theorem 6.1.1).
+
+ Theorem 5 (Gershgorin). Let $A \in \mathbb{R}^{n \times n}$ and define the $n$ Gershgorin discs of $A$ by
+
+ $$
+ \left\{ z \in \mathbb{C} : |z - A_{ii}| \leq \sum_{j \neq i} |A_{ij}| \right\}, \quad i \in \{1, \dots, n\}.
+ $$
+
+ Then the eigenvalues of $A$ are contained in the union of the Gershgorin discs:
+
+ $$
+ \bigcup_{i=1}^{n} \left\{ z \in \mathbb{C} : |z - A_{ii}| \leq \sum_{j \neq i} |A_{ij}| \right\}.
+ $$
+
+ A useful consequence of this theorem is that whenever $A$ is diagonally dominant (i.e., $|A_{ii}| \geq \sum_{j \neq i} |A_{ij}|$) with positive diagonal entries, the eigenvalues of $A$ must be non-negative. With this fact, we now proceed to the proof of Theorem 3.
+
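+ To make the mechanics of this argument concrete, here is a small NumPy check (an illustrative example of ours, with a random $W$ and arbitrary positive scalings $q$): the similarity-transformed matrix obtained from the choice (4) is diagonally dominant with non-negative diagonal, and $T - W^{\mathsf{T}}W$ is indeed positive semidefinite.
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(1)
+ n, m = 6, 4
+ W = rng.standard_normal((n, m))
+ q = np.exp(rng.standard_normal(m))       # arbitrary positive scalings q_i
+
+ A = np.abs(W.T @ W)
+ T = np.diag((A * (q[None, :] / q[:, None])).sum(axis=1))   # choice (4)
+
+ # Q^{-1} = diag(q), so Q (T - W^T W) Q^{-1} = diag(1/q) (T - W^T W) diag(q).
+ M = np.diag(1 / q) @ (T - W.T @ W) @ np.diag(q)
+ offdiag = np.abs(M).sum(axis=1) - np.abs(np.diag(M))
+ print((np.diag(M) >= offdiag - 1e-12).all())               # diagonal dominance
+ print(np.linalg.eigvalsh(T - W.T @ W).min() >= -1e-9)      # T >= W^T W
+ ```
+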
+ Proof of Theorem 3. Given a nonsingular matrix $Q$, the eigenvalues of $Q(T - W^{\mathsf{T}}W)Q^{-1}$ and $T - W^{\mathsf{T}}W$ are clearly the same. Since $Q$ and $T$ are both diagonal, $Q(T - W^{\mathsf{T}}W)Q^{-1} = T - QW^{\mathsf{T}}WQ^{-1}$. If this matrix is diagonally dominant with positive diagonal entries, the Gershgorin circle theorem (Horn et al., 2012, Corollary 6.1.6) shows that all of its eigenvalues are non-negative, and therefore all the eigenvalues of $T - W^{\mathsf{T}}W$ are non-negative. Since $T - W^{\mathsf{T}}W$ is symmetric, we conclude $T \succeq W^{\mathsf{T}}W$, and we can apply Theorem 1 to reach the desired conclusion.
+
+ # A.4 PROOF OF THEOREM 4
+
+ We present a detailed proof of Theorem 4 here. Our proof is based on modifying the arguments used in (Fazlyab et al., 2019, Theorem 1), and mainly relies on the quadratic constraint technique developed in the control field (Megretski et al., 1997).
+
+ First, notice that (7) is equivalent to the following condition:
+
+ $$
+ \left[ \begin{array}{cc} H^{\mathsf{T}}H & H^{\mathsf{T}}G \\ G^{\mathsf{T}}H & G^{\mathsf{T}}G \end{array} \right] \preceq \left[ \begin{array}{cc} I & -W\Lambda \\ -\Lambda W^{\mathsf{T}} & 2\Lambda \end{array} \right]. \tag{10}
+ $$
+
+ Suppose (10) holds. Next, we show that $h(x) = Hx + G\sigma(W^{\mathsf{T}}x + b)$ is 1-Lipschitz.
+
+ All three activation functions listed in the theorem are slope-restricted on $[0,1]$, and hence the following inequality holds for any $(x', y')$ (Fazlyab et al., 2019, Lemma 1):
+
+ $$
+ \left[ \begin{array}{c} x' - y' \\ \sigma(x') - \sigma(y') \end{array} \right]^{\mathsf{T}} \left[ \begin{array}{cc} 0 & -\Lambda \\ -\Lambda & 2\Lambda \end{array} \right] \left[ \begin{array}{c} x' - y' \\ \sigma(x') - \sigma(y') \end{array} \right] \leq 0.
+ $$
+
+ Table 6: Additional results on the CIFAR-10 and CIFAR-100 datasets with different offset values.
+
+ <table><tr><td rowspan="3">Offset</td><td rowspan="3">Models</td><td colspan="5">CIFAR-10</td><td colspan="5">CIFAR-100</td></tr><tr><td rowspan="2">Natural Accuracy</td><td colspan="4">Provable Accuracy (ε)</td><td rowspan="2">Natural Accuracy</td><td colspan="4">Provable Accuracy (ε)</td></tr><tr><td>36/255</td><td>72/255</td><td>108/255</td><td>1</td><td>36/255</td><td>72/255</td><td>108/255</td><td>1</td></tr><tr><td rowspan="4">√2</td><td>SLL small</td><td>73.3</td><td>63.7</td><td>53.8</td><td>44.5</td><td>15.3</td><td>46.7</td><td>35.2</td><td>26.4</td><td>20.1</td><td>5.9</td></tr><tr><td>SLL medium</td><td>74.0</td><td>64.7</td><td>54.9</td><td>45.3</td><td>16.0</td><td>47.2</td><td>36.1</td><td>27.1</td><td>20.7</td><td>6.5</td></tr><tr><td>SLL large</td><td>74.6</td><td>65.3</td><td>55.2</td><td>45.8</td><td>16.2</td><td>47.9</td><td>36.7</td><td>27.9</td><td>21.3</td><td>6.7</td></tr><tr><td>SLL xlarge</td><td>75.3</td><td>65.7</td><td>55.8</td><td>46.1</td><td>16.3</td><td>48.3</td><td>37.2</td><td>28.3</td><td>21.8</td><td>6.9</td></tr><tr><td rowspan="4">3/2√2</td><td>SLL small</td><td>71.2</td><td>62.6</td><td>53.8</td><td>45.3</td><td>20.4</td><td>44.9</td><td>34.7</td><td>26.8</td><td>20.9</td><td>8.1</td></tr><tr><td>SLL medium</td><td>72.2</td><td>64.3</td><td>56.0</td><td>48.3</td><td>23.9</td><td>46.0</td><td>35.5</td><td>27.9</td><td>22.2</td><td>9.1</td></tr><tr><td>SLL large</td><td>72.7</td><td>65.0</td><td>57.3</td><td>49.7</td><td>25.4</td><td>46.4</td><td>36.2</td><td>28.4</td><td>22.7</td><td>9.6</td></tr><tr><td>SLL xlarge</td><td>73.3</td><td>65.8</td><td>58.4</td><td>51.3</td><td>27.3</td><td>46.5</td><td>36.5</td><td>29.0</td><td>23.3</td><td>10.4</td></tr><tr><td rowspan="4">2√2</td><td>SLL small</td><td>70.0</td><td>61.5</td><td>53.4</td><td>45.7</td><td>22.7</td><td>44.6</td><td>34.5</td><td>26.5</td><td>21.0</td><td>8.6</td></tr><tr><td>SLL medium</td><td>70.8</td><td>63.1</td><td>55.4</td><td>48.3</td><td>25.8</td><td>45.4</td><td>35.5</td><td>27.9</td><td>22.1</td><td>9.8</td></tr><tr><td>SLL large</td><td>71.4</td><td>63.9</td><td>56.7</td><td>49.8</td><td>27.8</td><td>45.9</td><td>36.0</td><td>28.2</td><td>22.7</td><td>10.3</td></tr><tr><td>SLL xlarge</td><td>71.6</td><td>64.6</td><td>57.7</td><td>50.8</td><td>29.6</td><td>46.1</td><td>36.3</td><td>29.0</td><td>23.6</td><td>11.0</td></tr></table>
+
+ Setting $x' = W^{\mathsf{T}}x + b$ and $y' = W^{\mathsf{T}}y + b$, the above inequality becomes
+
+ $$
+ \left[ \begin{array}{c} W^{\mathsf{T}}(x - y) \\ \sigma(W^{\mathsf{T}}x + b) - \sigma(W^{\mathsf{T}}y + b) \end{array} \right]^{\mathsf{T}} \left[ \begin{array}{cc} 0 & -\Lambda \\ -\Lambda & 2\Lambda \end{array} \right] \left[ \begin{array}{c} W^{\mathsf{T}}(x - y) \\ \sigma(W^{\mathsf{T}}x + b) - \sigma(W^{\mathsf{T}}y + b) \end{array} \right] \leq 0.
+ $$
+
+ We can rewrite the above inequality as
+
+ $$
+ \left[ \begin{array}{c} x - y \\ \sigma(W^{\mathsf{T}}x + b) - \sigma(W^{\mathsf{T}}y + b) \end{array} \right]^{\mathsf{T}} \left[ \begin{array}{cc} 0 & -W\Lambda \\ -\Lambda W^{\mathsf{T}} & 2\Lambda \end{array} \right] \left[ \begin{array}{c} x - y \\ \sigma(W^{\mathsf{T}}x + b) - \sigma(W^{\mathsf{T}}y + b) \end{array} \right] \leq 0. \tag{11}
+ $$
+
+ Now we can apply the following argument:
+
+ $$
+ \begin{array}{l}
+ \left\| h(x) - h(y) \right\|^2 = \left\| H(x - y) + G\left(\sigma(W^{\mathsf{T}}x + b) - \sigma(W^{\mathsf{T}}y + b)\right) \right\|^2 \\
+ \quad = \left[ \begin{array}{c} H(x - y) \\ G\left(\sigma(W^{\mathsf{T}}x + b) - \sigma(W^{\mathsf{T}}y + b)\right) \end{array} \right]^{\mathsf{T}} \left[ \begin{array}{cc} I & I \\ I & I \end{array} \right] \left[ \begin{array}{c} H(x - y) \\ G\left(\sigma(W^{\mathsf{T}}x + b) - \sigma(W^{\mathsf{T}}y + b)\right) \end{array} \right] \\
+ \quad = \left[ \begin{array}{c} x - y \\ \sigma(W^{\mathsf{T}}x + b) - \sigma(W^{\mathsf{T}}y + b) \end{array} \right]^{\mathsf{T}} \left[ \begin{array}{cc} H^{\mathsf{T}}H & H^{\mathsf{T}}G \\ G^{\mathsf{T}}H & G^{\mathsf{T}}G \end{array} \right] \left[ \begin{array}{c} x - y \\ \sigma(W^{\mathsf{T}}x + b) - \sigma(W^{\mathsf{T}}y + b) \end{array} \right] \\
+ \quad \leq \left[ \begin{array}{c} x - y \\ \sigma(W^{\mathsf{T}}x + b) - \sigma(W^{\mathsf{T}}y + b) \end{array} \right]^{\mathsf{T}} \left[ \begin{array}{cc} I & -W\Lambda \\ -\Lambda W^{\mathsf{T}} & 2\Lambda \end{array} \right] \left[ \begin{array}{c} x - y \\ \sigma(W^{\mathsf{T}}x + b) - \sigma(W^{\mathsf{T}}y + b) \end{array} \right],
+ \end{array}
+ $$
+
+ where the last step follows from condition (10). Finally, adding the left-hand side of (11), which is non-positive, to the above bound shows
+
+ $$
+ \begin{array}{l}
+ \| h(x) - h(y) \|^2 \leq \left[ \begin{array}{c} x - y \\ \sigma(W^{\mathsf{T}}x + b) - \sigma(W^{\mathsf{T}}y + b) \end{array} \right]^{\mathsf{T}} \left[ \begin{array}{cc} I & 0 \\ 0 & 0 \end{array} \right] \left[ \begin{array}{c} x - y \\ \sigma(W^{\mathsf{T}}x + b) - \sigma(W^{\mathsf{T}}y + b) \end{array} \right] \\
+ \quad = \| x - y \|^2,
+ \end{array}
+ $$
+
+ which is the desired conclusion.
+
+ # B ADDITIONAL RESULTS
+
+ In this section, we present additional results and discuss the effect of the offset value on training. The choice of the offset value affects the performance of SLL significantly: larger offset values lead to a decrease in natural accuracy and an increase in certified robust accuracy. The details are documented in Table 6.
+
+ # C FURTHER DISCUSSIONS
+
+ In this section, we provide additional discussion of the control-theoretic interpretations and possible extensions of our main results.
+
+ # C.1 CONTROL-THEORETIC INTERPRETATIONS FOR OUR MAIN RESULTS
+
+ Our work is inspired by the quadratic constraint approach (Megretski et al., 1997) and the Lur'e system theory (Lur'e et al., 1944) developed in the control community. Specifically, the general network layer structure (5) can be viewed as a Lur'e system, i.e., a feedback interconnection of a linear dynamical system and a static nonlinearity. In this section, we make this connection more transparent.
+
+ Specifically, denoting $x' = h(x)$, we can rewrite (5) as
+
+ $$
+ \begin{aligned}
+ x' &= Hx + Gw, \\
+ v &= W^{\mathsf{T}} x + b, \\
+ w &= \sigma(v),
+ \end{aligned}
+ $$
+
+ which is exactly a shifted version of a Lur'e system. It is therefore not surprising that Lur'e system theory can be tailored to study the properties of (5). As a matter of fact, the previous developments in Fazlyab et al. (2019) and Revay et al. (2020) were based on similar ideas. The main difference is that our paper solves the resulting SDPs analytically, whereas in the controls literature the formulated SDP conditions are typically solved numerically.
+
+ # C.2 A VARIANT OF THEOREM 1
+
+ When discussing AOL and SLL, the main paper assumes that every column of $W$ has at least one non-zero entry, so that (2) is well defined. To drop this assumption, we can use the following variant of Theorem 1.
+
+ Theorem 6. For any weight matrix $W \in \mathbb{R}^{m \times n}$, if there exists a diagonal matrix $\Gamma \in \mathbf{S}^n$ such that $\Gamma W^{\mathsf{T}} W \Gamma \preceq \Gamma$, then the following two statements hold.
+
+ 1. The mapping $g(x) = W\Gamma^{\frac{1}{2}}x + b$ is 1-Lipschitz.
+ 2. The mapping $h(x) = x - 2W\Gamma\sigma(W^{\mathsf{T}}x + b)$ is 1-Lipschitz if $\sigma$ is ReLU, tanh, or sigmoid.
+
+ The proof is omitted, since it uses exactly the same argument as before. If $\Gamma$ happens to be nonsingular, we can set $T = \Gamma^{-1}$, and the above theorem reduces exactly to Theorem 1. However, the above result allows $\Gamma$ to be singular, which is useful for designing AOL and SLL layers when $W$ has zero columns. Suppose the $(i_0,j)$-th entry of $W^{\mathsf{T}}W$ is equal to $0$ for all $j$. Then we can set the $(i_0,i_0)$-th entry of $\Gamma$ to $0$ and still use (2) or (4) for the other entries. It is straightforward to verify that the resulting $\Gamma$ is still a feasible solution to $\Gamma W^{\mathsf{T}}W\Gamma \preceq \Gamma$, and we can then implement AOL or SLL accordingly.
+
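+ As a small numerical illustration of this construction (ours, not from the paper), we zero out one column of $W$, set the corresponding diagonal entry of $\Gamma$ to zero, use the AOL rule for the remaining entries, and check that the feasibility condition still holds:
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(2)
+ n, m = 6, 5
+ W = rng.standard_normal((n, m))
+ W[:, 2] = 0.0                                  # one zero column in W
+
+ row = np.abs(W.T @ W).sum(axis=1)              # AOL row sums
+ gamma = np.where(row > 0, 1.0 / np.where(row > 0, row, 1.0), 0.0)
+ G = np.diag(gamma)                             # Gamma, singular at index 2
+
+ # Feasibility for Theorem 6: Gamma - Gamma W^T W Gamma is PSD.
+ M = G - G @ (W.T @ W) @ G
+ print(np.linalg.eigvalsh(M).min() >= -1e-9)    # prints True
+ ```
+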
+ # C.3 A VARIANT OF THEOREM 3
+
+ We can also modify Theorem 3 for the non-residual layer case. The following variant of Theorem 3 is useful.
+
+ Theorem 7. Let $W$ be the weight matrix and suppose $T$ is a nonsingular diagonal matrix. If there exists a diagonal matrix $Q$ with all positive diagonal entries such that $T - QW^{\mathsf{T}}WQ^{-1}$ is a real diagonally dominant matrix whose diagonal entries are all positive, then $T \succeq W^{\mathsf{T}}W$, and the function $g(x) = WT^{-\frac{1}{2}}x + b$ is 1-Lipschitz.
+
+ The proof follows immediately from the argument in Appendix A.3 and is hence omitted. Based on the above result, it is possible to use (4) to construct a non-residual layer that can still improve upon AOL.
2023/A Unified Algebraic Perspective on Lipschitz Neural Networks/images.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e3d4378afd539ce2c599ee5aa4e7b3a2cc443e1876e64a72dc7a79b148f38e06
+ size 551726
2023/A Unified Algebraic Perspective on Lipschitz Neural Networks/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A framework for benchmarking Class-out-of-distribution detection and its application to ImageNet/014e40f0-866e-4844-b9d6-b3de43291df6_content_list.json ADDED
@@ -0,0 +1,1984 @@
1
+ [
2
+ {
3
+ "type": "text",
4
+ "text": "A FRAMEWORK FOR BENCHMARKING CLASS-OUT-OF-DISTRIBUTION DETECTION AND ITS APPLICATION TO IMAGENET",
5
+ "text_level": 1,
6
+ "bbox": [
7
+ 171,
8
+ 98,
9
+ 828,
10
+ 167
11
+ ],
12
+ "page_idx": 0
13
+ },
14
+ {
15
+ "type": "text",
16
+ "text": "Ido Galil*",
17
+ "bbox": [
18
+ 184,
19
+ 191,
20
+ 261,
21
+ 205
22
+ ],
23
+ "page_idx": 0
24
+ },
25
+ {
26
+ "type": "text",
27
+ "text": "Technion",
28
+ "bbox": [
29
+ 184,
30
+ 205,
31
+ 248,
32
+ 219
33
+ ],
34
+ "page_idx": 0
35
+ },
36
+ {
37
+ "type": "text",
38
+ "text": "idogalil.ig@gmail.com",
39
+ "bbox": [
40
+ 184,
41
+ 220,
42
+ 372,
43
+ 234
44
+ ],
45
+ "page_idx": 0
46
+ },
47
+ {
48
+ "type": "text",
49
+ "text": "Mohammed Dabbah*",
50
+ "bbox": [
51
+ 397,
52
+ 191,
53
+ 553,
54
+ 205
55
+ ],
56
+ "page_idx": 0
57
+ },
58
+ {
59
+ "type": "text",
60
+ "text": "Amazon",
61
+ "bbox": [
62
+ 398,
63
+ 207,
64
+ 457,
65
+ 219
66
+ ],
67
+ "page_idx": 0
68
+ },
69
+ {
70
+ "type": "text",
71
+ "text": "m.m.dabbah@gmail.com",
72
+ "bbox": [
73
+ 398,
74
+ 220,
75
+ 578,
76
+ 234
77
+ ],
78
+ "page_idx": 0
79
+ },
80
+ {
81
+ "type": "text",
82
+ "text": "Ran El-Yaniv",
83
+ "bbox": [
84
+ 604,
85
+ 191,
86
+ 702,
87
+ 205
88
+ ],
89
+ "page_idx": 0
90
+ },
91
+ {
92
+ "type": "text",
93
+ "text": "Technion, Deci.AI",
94
+ "bbox": [
95
+ 606,
96
+ 207,
97
+ 746,
98
+ 219
99
+ ],
100
+ "page_idx": 0
101
+ },
102
+ {
103
+ "type": "text",
104
+ "text": "rani@cs.technion.ac.il",
105
+ "bbox": [
106
+ 606,
107
+ 220,
108
+ 800,
109
+ 234
110
+ ],
111
+ "page_idx": 0
112
+ },
113
+ {
114
+ "type": "text",
115
+ "text": "ABSTRACT",
116
+ "text_level": 1,
117
+ "bbox": [
118
+ 450,
119
+ 271,
120
+ 545,
121
+ 285
122
+ ],
123
+ "page_idx": 0
124
+ },
125
+ {
126
+ "type": "text",
127
+ "text": "When deployed for risk-sensitive tasks, deep neural networks must be able to detect instances with labels from outside the distribution for which they were trained. In this paper we present a novel framework to benchmark the ability of image classifiers to detect class-out-of-distribution instances (i.e., instances whose true labels do not appear in the training distribution) at various levels of detection difficulty. We apply this technique to ImageNet, and benchmark 525 pretrained, publicly available, ImageNet-1k classifiers. The code for generating a benchmark for any ImageNet-1k classifier, along with the benchmarks prepared for the above-mentioned 525 models is available at https://github.com/mdabbah/COOD_benchmarking.",
128
+ "bbox": [
129
+ 228,
130
+ 303,
131
+ 767,
132
+ 443
133
+ ],
134
+ "page_idx": 0
135
+ },
136
+ {
137
+ "type": "text",
138
+ "text": "The usefulness of the proposed framework and its advantage over alternative existing benchmarks is demonstrated by analyzing the results obtained for these models, which reveals numerous novel observations including: (1) knowledge distillation consistently improves class-out-of-distribution (C-OOD) detection performance; (2) a subset of ViTs performs better C-OOD detection than any other model; (3) the language—vision CLIP model achieves good zero-shot detection performance, with its best instance outperforming $96\\%$ of all other models evaluated; (4) accuracy and in-distribution ranking are positively correlated to C-OOD detection; and (5) we compare various confidence functions for C-OOD detection. Our companion paper, also published in ICLR 2023 (Galil et al., 2023), examines the uncertainty estimation performance (ranking, calibration, and selective prediction performance) of these classifiers in an in-distribution setting.",
139
+ "bbox": [
140
+ 228,
141
+ 444,
142
+ 769,
143
+ 612
144
+ ],
145
+ "page_idx": 0
146
+ },
147
+ {
148
+ "type": "text",
149
+ "text": "1 INTRODUCTION",
150
+ "text_level": 1,
151
+ "bbox": [
152
+ 173,
153
+ 638,
154
+ 336,
155
+ 652
156
+ ],
157
+ "page_idx": 0
158
+ },
159
+ {
160
+ "type": "text",
161
+ "text": "Deep neural networks (DNNs) show great performance in a wide variety of application domains including computer vision, natural language understanding and audio processing. These models are trained on data coming from a certain distribution $P(X,Y)$ , usually with the assumption that test points will be sampled from the same distribution. When the underlying distribution $P(X,Y)$ of test points is different from the one used to train a model, we may no longer expect the same performance from the model. The difference in distribution may be the result of many processes such as natural deviation in the input space $\\mathcal{X}$ , noisy sensor readings of inputs, abrupt changes due to random events, newly arrived or refined input classes, etc. Here we distinguish between input distributional changes in $P_{X|Y}$ and changes in the label distribution. We focus on the latter case and consider the class-out-of-distribution (C-OOD) scenario, AKA open-set recognition (Scheirer et al., 2013), where the label support set $\\mathcal{V}$ changes to a different set that includes the set $\\mathcal{V}_{\\mathrm{OOD}}$ , containing new classes not observed in training.",
162
+ "bbox": [
163
+ 169,
164
+ 670,
165
+ 826,
166
+ 838
167
+ ],
168
+ "page_idx": 0
169
+ },
170
+ {
171
+ "type": "text",
172
+ "text": "Consider the detection task in which our model is required to distinguish between samples belonging to classes it has seen in training, where $x \\sim P(x|y \\in \\mathcal{Y}_{\\mathrm{ID}})$ , and samples belonging to novel classes, i.e., $x \\sim P(x|y \\in \\mathcal{Y}_{\\mathrm{OOD}})$ . The question we now ask is: how should models be evaluated to most accurately reflect their detection performance? We aim to benchmark the detection performance",
173
+ "bbox": [
174
+ 169,
175
+ 844,
176
+ 826,
177
+ 902
178
+ ],
179
+ "page_idx": 0
180
+ },
181
+ {
182
+ "type": "header",
183
+ "text": "Published as a conference paper at ICLR 2023",
184
+ "bbox": [
185
+ 171,
186
+ 32,
187
+ 478,
188
+ 47
189
+ ],
190
+ "page_idx": 0
191
+ },
192
+ {
193
+ "type": "page_footnote",
194
+ "text": "*The first two authors have equal contribution.",
195
+ "bbox": [
196
+ 199,
197
+ 910,
198
+ 477,
199
+ 924
200
+ ],
201
+ "page_idx": 0
202
+ },
203
+ {
204
+ "type": "page_number",
205
+ "text": "1",
206
+ "bbox": [
207
+ 493,
208
+ 948,
209
+ 503,
210
+ 959
211
+ ],
212
+ "page_idx": 0
213
+ },
214
+ {
215
+ "type": "text",
216
+ "text": "of DNN classification models that use their confidence rate function $\\kappa$ (e.g., softmax response; see Section 2) to detect OOD labels, where the basic premise is that instances whose labels are in $\\mathcal{V}_{\\mathrm{OOD}}$ are assigned lower $\\kappa$ values.",
217
+ "bbox": [
218
+ 169,
219
+ 103,
220
+ 823,
221
+ 147
222
+ ],
223
+ "page_idx": 1
224
+ },
225
+ {
226
+ "type": "text",
227
+ "text": "Most works on OOD detection use small-scale datasets that generally do not resemble the training distribution and, therefore, are easy to detect. The use of such sets often causes C-OOD detectors to appear better than they truly are when faced with realistic, yet harder tasks. Motivated by this deficiency, Hendrycks et al. (2021) introduced the ImageNet-O dataset as a solution. ImageNet-O, however, has two limitations. First, it benchmarks models with a single difficulty level exclusively, having only hard C-OOD instances, which might not be relevant for every task's requirements (Section 3 explains how to define different difficulty levels). Second, the original intent in the creation of ImageNet-O was to include only hard C-OOD instances. Its definition of \"OOD hardness\", however, was carried out with respect to ResNet-50's difficulty in detecting C-OOD classes, specifically when using softmax as its confidence function. This property makes ImageNet-O strongly biased. Indeed, consider the right-most box in Figure 1, which corresponds to the performance of 525 models over ImageNet-O. The orange dot in that box corresponds to ResNet-50, whose OOD detection performance is severely harmed by these ImageNet-O data. Nevertheless, it is evident that numerous models perform quite well, and all other models perform better than ResNet-50. The lack of an objective benchmark for C-OOD is the main motivation for our work.",
228
+ "bbox": [
229
+ 169,
230
+ 152,
231
+ 826,
232
+ 361
233
+ ],
234
+ "page_idx": 1
235
+ },
236
+ {
237
+ "type": "image",
238
+ "img_path": "images/478d7d4767df88a14fff07c4847c20e33dd15f299749fd39f37879afcd8b665b.jpg",
239
+ "image_caption": [
240
+ "Figure 1: OOD performance across severity (difficulty) levels, using the benchmarks produced by our framework. The detection performance decreases for all models as we increase the difficulty until it reaches near chance detection performance at the highest severity $(s_{10})$ . The top curve belongs to ViT-L/32-384, which surpasses all models at every severity level. We also observe how success or failure with regard to the previous C-OOD benchmark, ImageNet-O, does not reflect the models' true OOD detection performance since it was designed to specifically fool ResNet-50. At the bottom we provide visual examples for OOD classes from ImageNet-21k that may populate each severity level due to their similarity to ID classes from ImageNet-1k, and in this example, to a Monarch butterfly."
241
+ ],
242
+ "image_footnote": [],
243
+ "bbox": [
244
+ 171,
245
+ 371,
246
+ 823,
247
+ 661
248
+ ],
249
+ "page_idx": 1
250
+ },
251
+ {
252
+ "type": "text",
253
+ "text": "Our contributions. We propose a novel technique to generate a C-OOD benchmark that covers a variety of difficulty levels. Unlike other existing benchmarks (e.g., ImageNet-O), our technique is not biased towards an arbitrary model such as Resnet50 and/or a specific confidence function such as the softmax response. This useful property is obtained by tailoring the benchmark to the model being evaluated, including its confidence function, and not seeking to determine a single objective criterion for hardness of C-OOD samples (see Section 3).",
254
+ "bbox": [
255
+ 169,
256
+ 805,
257
+ 823,
258
+ 888
259
+ ],
260
+ "page_idx": 1
261
+ },
262
+ {
263
+ "type": "text",
264
+ "text": "Second, we show and explain how we filter ImageNet-21k to use it for the purpose of generating C-OOD benchmarks for ImageNet-1k (Deng et al., 2009) classifiers (see Section 4). We will provide",
265
+ "bbox": [
266
+ 169,
267
+ 895,
268
+ 823,
269
+ 925
270
+ ],
271
+ "page_idx": 1
272
+ },
273
+ {
274
+ "type": "header",
275
+ "text": "Published as a conference paper at ICLR 2023",
276
+ "bbox": [
277
+ 171,
278
+ 32,
279
+ 478,
280
+ 47
281
+ ],
282
+ "page_idx": 1
283
+ },
284
+ {
285
+ "type": "page_number",
286
+ "text": "2",
287
+ "bbox": [
288
+ 493,
289
+ 948,
290
+ 503,
291
+ 959
292
+ ],
293
+ "page_idx": 1
294
+ },
295
+ {
296
+ "type": "text",
297
+ "text": "a simple code to choose the filtering parameters most suitable for the specific aim for which the benchmark is meant (e.g., what is classes are considered OOD).",
298
+ "bbox": [
299
+ 169,
300
+ 103,
301
+ 823,
302
+ 132
303
+ ],
304
+ "page_idx": 2
305
+ },
306
+ {
307
+ "type": "text",
308
+ "text": "Third, we demonstrate the power and usability of our method by applying our C-OOD framework to generate benchmarks for 525 ImageNet-1k classifiers available from popular repositories. We provide a benchmark for each of these classifiers, which will be available for use from our code.",
309
+ "bbox": [
310
+ 169,
311
+ 138,
312
+ 823,
313
+ 181
314
+ ],
315
+ "page_idx": 2
316
+ },
317
+ {
318
+ "type": "text",
319
+ "text": "We then analyze the results of these benchmarks to make numerous novel observations concerning C-OOD detection such as: (1) training regimes using knowledge distillation (Hinton et al., 2015) consistently yield models with better C-OOD detection performance than the same models trained identically, but without distillation; (2) a subset of ViTs performs better C-OOD detection than any other model; (3) the language-vision model CLIP achieves good zero-shot detection performance for low difficulty (severity) levels; (4) accuracy and in-distribution (ID) ranking are positively correlated with C-OOD detection; (5) we compare the performance of various confidence functions for C-OOD detection; (6) A number of other observations (see Section 5).",
320
+ "bbox": [
321
+ 169,
322
+ 186,
323
+ 826,
324
+ 299
325
+ ],
326
+ "page_idx": 2
327
+ },
328
+ {
329
+ "type": "text",
330
+ "text": "Lastly, we emphasize that the resulting difficulty levels of our framework allow benchmarking with respect to the difficulty levels most relevant to the task. For example, for a task with a high tolerance for risk (e.g., a task for an entertainment application), the performance of a model on a median difficulty level might be more important than on the hardest difficulty level (severity 10). The opposite might be true for some applications with a low tolerance for risk (e.g., medical applications), for which one requires the best performance to be attained even if the OOD is very hard to detect (severity 10). Furthermore, in Section 5 we show that detection algorithms do not always improve performance on all inputs equally, and could even hurt performance for specific difficulty levels and models (see Figure 7 for a striking example). Choosing the combination of (model, detection algorithm) based only on the detection performance on all data may yield sub-optimal results for our specific desired level of difficulty.",
331
+ "bbox": [
332
+ 169,
333
+ 306,
334
+ 826,
335
+ 460
336
+ ],
337
+ "page_idx": 2
338
+ },
339
+ {
340
+ "type": "text",
341
+ "text": "2 PROBLEM SETUP",
342
+ "text_level": 1,
343
+ "bbox": [
344
+ 171,
345
+ 479,
346
+ 349,
347
+ 494
348
+ ],
349
+ "page_idx": 2
350
+ },
351
+ {
352
+ "type": "text",
353
+ "text": "Let $\\mathcal{X}$ be the input space and $\\mathcal{Y} = \\mathcal{Y}_{\\mathrm{ID}} \\cup \\mathcal{Y}_{\\mathrm{OOD}}$ be the label space. Let $P(\\mathcal{X}, \\mathcal{Y})$ be an unknown distribution over $\\mathcal{X} \\times \\mathcal{Y}$ . A model $f$ is a prediction function $f: \\mathcal{X} \\to \\mathcal{Y}_{\\mathrm{ID}}$ , and its predicted label for an image $x$ is denoted by $\\hat{y}_f(x)$ . The model $f$ is produced by training on a labeled set $T_m = \\{(x_i, y_i)\\}_{i=1}^m \\subseteq (\\mathcal{X} \\times \\mathcal{Y}_{\\mathrm{ID}})$ , sampled i.i.d. from $P(\\mathcal{X}, \\mathcal{Y}_{\\mathrm{ID}})$ , with the objective of minimizing its empirical risk, defined by $\\hat{r}(f|T_m) \\triangleq \\frac{1}{m} \\sum_{i=1}^{m} \\ell(f(x_i), y_i)$ , where $\\ell: \\mathcal{Y}_{\\mathrm{ID}} \\times \\mathcal{Y}_{\\mathrm{ID}} \\to \\mathbb{R}^+$ is a given loss function (e.g., cross-entropy loss for classification). Note that by this definition, the model $f$ will always misclassify any $x \\sim P(\\mathcal{X}, \\mathcal{Y}_{\\mathrm{OOD}})$ .",
354
+ "bbox": [
355
+ 169,
356
+ 508,
357
+ 823,
358
+ 611
359
+ ],
360
+ "page_idx": 2
361
+ },
362
+ {
363
+ "type": "text",
364
+ "text": "We define a confidence score function $\\kappa (x,\\hat{y} |f)$ , where $x\\in \\mathcal{X}$ , and $\\hat{y}\\in \\mathcal{Y}_{\\mathrm{ID}}$ is the model's prediction for $x$ , as follows. The function $\\kappa$ should quantify confidence in the prediction of $\\hat{y}$ for the input $x$ , based on signals from model $f$ . This function should induce a partial order over instances in $\\mathcal{X}$ .",
365
+ "bbox": [
366
+ 169,
367
+ 616,
368
+ 826,
369
+ 659
370
+ ],
371
+ "page_idx": 2
372
+ },
373
+ {
374
+ "type": "text",
375
+ "text": "The most common and well-known $\\kappa$ function for a classification model $f$ (with softmax at its last layer) is its softmax response values $-\\kappa(x, \\hat{y} | f) \\triangleq f(x)_{\\hat{y}}$ (Cordella et al., 1995; De Stefano et al., 2000) - which is also widely accepted as a baseline in the OOD literature (Hendrycks & Gimpel, 2017; Hendrycks et al., 2021; Berger et al., 2021; Shalev et al., 2018). While this is the primary $\\kappa$ we evaluate for the sake of simplicity, various other $\\kappa$ functions, which are also utilized for OOD detection, exist. To name a few: Out-of-distribution detector for neural networks (ODIN) (Liang et al., 2018), Monte-Carlo dropout (MC dropout) (Gal & Ghahramani, 2016), Mahalanobis distance (Lee et al., 2018), and more. Although many of these methods use the direct output from $f$ , $\\kappa$ could be a different model unrelated to $f$ and unable to affect its predictions.",
376
+ "bbox": [
377
+ 169,
378
+ 666,
379
+ 826,
380
+ 792
381
+ ],
382
+ "page_idx": 2
383
+ },
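+ {
+ "type": "text",
+ "text": "The following minimal Python sketch is illustrative only (it is not part of the original paper or its released code, and all names in it are assumptions); it computes the softmax-response confidence $\\kappa(x, \\hat{y} | f) = f(x)_{\\hat{y}}$ directly from a classifier's logits."
+ },
+ {
+ "type": "code",
+ "sub_type": "code",
+ "code_caption": [
+ "Sketch (illustrative): softmax-response confidence"
+ ],
+ "code_body": "# Illustrative sketch (not from the paper): the softmax-response confidence.\nimport torch\nimport torch.nn.functional as F\n\ndef softmax_response(logits):\n    # logits: (batch, num_classes) raw outputs of the classifier f\n    probs = F.softmax(logits, dim=-1)   # f(x) as a distribution over the ID labels\n    kappa, y_hat = probs.max(dim=-1)    # confidence in, and index of, the prediction\n    return kappa, y_hat\n\n# usage: kappa, y_hat = softmax_response(model(images))"
+ },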
384
+ {
385
+ "type": "text",
386
+ "text": "$\\kappa$ functions can be evaluated by the quality of the partial order they induce over instances in $\\mathcal{X}$ . For every two random samples $(x_{1},y_{1}),(x_{2},y_{2})\\sim P(\\mathcal{X},\\mathcal{Y})$ , and given that $x_{1}$ belongs to an OOD label and that $x_{2}$ belongs to an ID label, the detection (or ranking) performance of $\\kappa$ is defined as the probability that $\\kappa$ ranks $x_{2}$ higher than $x_{1}$ :",
387
+ "bbox": [
388
+ 169,
389
+ 799,
390
+ 823,
391
+ 856
392
+ ],
393
+ "page_idx": 2
394
+ },
395
+ {
396
+ "type": "equation",
397
+ "text": "\n$$\n\\Pr \\left[ \\kappa \\left(x _ {1}, \\hat {y} _ {1} | f\\right) < \\kappa \\left(x _ {2}, \\hat {y} _ {2} | f\\right) \\mid x _ {1} \\sim P (\\mathcal {X}, \\mathcal {Y} _ {\\mathrm {O O D}}) \\wedge x _ {2} \\sim P (\\mathcal {X}, \\mathcal {Y} _ {\\mathrm {I D}}) \\right] \\tag {1}\n$$\n",
398
+ "text_format": "latex",
399
+ "bbox": [
400
+ 261,
401
+ 859,
402
+ 823,
403
+ 878
404
+ ],
405
+ "page_idx": 2
406
+ },
407
+ {
408
+ "type": "text",
409
+ "text": "The Area Under the Receiver Operating Characteristic (AUROC or AUC) metric is often used to measure the performance of OOD detection. When ID samples are counted as true positives and OOD samples are counted as false positives, AUROC, in fact, equals the probability in Equation (1) (Fawcett,",
410
+ "bbox": [
411
+ 169,
412
+ 881,
413
+ 826,
414
+ 925
415
+ ],
416
+ "page_idx": 2
417
+ },
418
+ {
419
+ "type": "header",
420
+ "text": "Published as a conference paper at ICLR 2023",
421
+ "bbox": [
422
+ 171,
423
+ 32,
424
+ 478,
425
+ 47
426
+ ],
427
+ "page_idx": 2
428
+ },
429
+ {
430
+ "type": "page_number",
431
+ "text": "3",
432
+ "bbox": [
433
+ 493,
434
+ 948,
435
+ 503,
436
+ 959
437
+ ],
438
+ "page_idx": 2
439
+ },
440
+ {
441
+ "type": "text",
442
+ "text": "2006) and thus is a proper metric to measure OOD detection in classification. See Appendix A for evaluating $\\kappa$ functions in an ID setting.",
443
+ "bbox": [
444
+ 169,
445
+ 103,
446
+ 823,
447
+ 133
448
+ ],
449
+ "page_idx": 3
450
+ },
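+ {
+ "type": "text",
+ "text": "As a concrete (and purely illustrative) rendering of this metric, the probability in Equation (1) can be estimated with a standard AUROC routine by treating ID confidences as positives and OOD confidences as negatives; the function below is an editorial sketch, not code from the paper."
+ },
+ {
+ "type": "code",
+ "sub_type": "code",
+ "code_caption": [
+ "Sketch (illustrative): detection AUROC for Equation (1)"
+ ],
+ "code_body": "# Illustrative sketch: detection AUROC, estimating the probability in Equation (1).\nimport numpy as np\nfrom sklearn.metrics import roc_auc_score\n\ndef detection_auroc(kappa_id, kappa_ood):\n    # kappa_id / kappa_ood: confidence scores of ID (positive) and OOD (negative) samples\n    labels = np.concatenate([np.ones(len(kappa_id)), np.zeros(len(kappa_ood))])\n    scores = np.concatenate([kappa_id, kappa_ood])\n    return roc_auc_score(labels, scores)  # Pr[kappa(x_ood) < kappa(x_id)], up to ties"
+ },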
451
+ {
452
+ "type": "text",
453
+ "text": "3 CONSTRUCTING A MODEL-SPECIFIC CLASS-OUT-OF-DISTRIBUTION BENCHMARK",
454
+ "text_level": 1,
455
+ "bbox": [
456
+ 171,
457
+ 152,
458
+ 759,
459
+ 186
460
+ ],
461
+ "page_idx": 3
462
+ },
463
+ {
464
+ "type": "text",
465
+ "text": "We first choose a dataset that contains samples from a large set of OOD labels (e.g., labels from ImageNet-21k that are not included in ImageNet-1k). Ideally, this OOD dataset should consist of OOD labels representing labels the model may encounter when deployed. Any large dataset could be used for the purpose of benchmarking performance on C-OOD by splitting it according to labels into an ID component, i.e., the labels on which the model trains, and into an OOD component, i.e., the labels on which the model is exclusively tested.",
466
+ "bbox": [
467
+ 169,
468
+ 202,
469
+ 823,
470
+ 287
471
+ ],
472
+ "page_idx": 3
473
+ },
474
+ {
475
+ "type": "text",
476
+ "text": "We now introduce a novel framework for generating C-OOD benchmarks with a controllable degree of severity, which could be thought of as the difficulty level of the data. Algorithm 1 summarizes our proposed technique. Let $\\mathcal{Y}_{\\mathrm{OOD}}$ be a large set of OOD classes (e.g., labels from ImageNet-21k that are",
477
+ "bbox": [
478
+ 169,
479
+ 292,
480
+ 825,
481
+ 335
482
+ ],
483
+ "page_idx": 3
484
+ },
485
+ {
486
+ "type": "code",
487
+ "sub_type": "algorithm",
488
+ "code_caption": [
489
+ "Algorithm 1 Generating C-OOD benchmarks"
490
+ ],
491
+ "code_body": "1: function GENERATE_BENCHMARK(f, $\\kappa ,\\mathcal{V}_{\\mathrm{OOD}},$ group_size $= |\\mathcal{V}_{\\mathrm{ID}}|$ \n2: for $\\bar{y}\\in \\mathcal{V}_{\\mathrm{OOD}}$ do \n3: Split all samples of class $\\bar{y}$ into two sets: $c_{est}^{\\bar{y}}$ and $c_{test}^{\\bar{y}}$ \n4: Set the severity score of class $\\bar{y}$ to be: $s(\\bar{y}|f,\\kappa) = \\frac{1}{|c_{est}^{\\bar{y}}|}\\sum_{x\\in c_{est}^{\\bar{y}}}\\kappa (x|f).$ \n5: Insert the class and its score $(\\bar{y},s(\\bar{y}|f,\\kappa))$ into classes_array \n6: Sort classes_array in ascending order by each OOD class' score $s(\\bar{y}|f,\\kappa)$ \n7: for $i < |\\mathcal{V}_{\\mathrm{OOD}}| -$ group_size do Sliding window of size group_size \n8: grp_array[i] $=$ classes_array[i:i+group_size] \n9: for $i < 11$ do Select groups in different percentiles to serve as benchmarks \n10: sev_benchmark[i] $= \\{x\\mid x\\in c_{test}^{\\bar{y}}$ s.t. $\\bar{y}\\in$ grp_array[j] and $j = [\\frac{i}{10}\\cdot |grp\\_array||]\\}$ \n11: return sev_benchmark",
492
+ "bbox": [
493
+ 173,
494
+ 369,
495
+ 823,
496
+ 540
497
+ ],
498
+ "page_idx": 3
499
+ },
500
+ {
501
+ "type": "text",
502
+ "text": "not included in ImageNet-1k), and let $s_{f,\\kappa}(\\bar{y})$ be a severity score, defined as the average confidence given by $\\kappa$ to samples from class $\\bar{y} \\in \\mathcal{Y}_{\\mathrm{OOD}}$ . This score reflects the level of difficulty faced by the model $f$ and its $\\kappa$ function when detecting instances from class $\\bar{y}$ . When considering ID instances we expect $\\kappa$ to give high values for highly confident predictions. Therefore, the larger $s(\\bar{y} | f, \\kappa)$ is, the harder it is for $\\kappa$ to detect the OOD class $\\bar{y}$ among ID classes. We estimate $s(\\bar{y} | f, \\kappa)$ for each class in the OOD dataset (e.g., ImageNet-21K) using a set of samples from the class (denoted by $c_{est}^{\\bar{y}}$ ), while keeping a disjoint set of samples from the same class to be used for testing (denoted by $c_{test}^{\\bar{y}}$ ). Using $s$ we sub-sample groups of classes (severity levels) from $\\mathcal{Y}_{\\mathrm{OOD}}$ , with increasing severity such that severity level $i \\in [0, 10]$ is the $i^{th}$ percentile of all severity levels.",
503
+ "bbox": [
504
+ 169,
505
+ 556,
506
+ 823,
507
+ 686
508
+ ],
509
+ "page_idx": 3
510
+ },
511
+ {
512
+ "type": "text",
513
+ "text": "To achieve this, we first estimate the severity score for each class $\\bar{y}$ in our OOD dataset for our model and its confidence function $(f,\\kappa)$ , as follows:",
514
+ "bbox": [
515
+ 169,
516
+ 691,
517
+ 823,
518
+ 720
519
+ ],
520
+ "page_idx": 3
521
+ },
522
+ {
523
+ "type": "equation",
524
+ "text": "\n$$\ns (\\bar {y} | f, \\kappa) = \\frac {1}{| c _ {e s t} ^ {\\bar {y}} |} \\sum_ {x \\in c _ {e s t} ^ {\\bar {y}}} \\kappa (x | f).\n$$\n",
525
+ "text_format": "latex",
526
+ "bbox": [
527
+ 388,
528
+ 728,
529
+ 607,
530
+ 768
531
+ ],
532
+ "page_idx": 3
533
+ },
534
+ {
535
+ "type": "text",
536
+ "text": "We group the OOD classes into different groups, and choose the size of each group $G$ to be the same as $|\\mathcal{V}_{\\mathrm{ID}}|$ , the number of labels in the ID dataset (e.g., in ImageNet we choose it to be 1000 classes). The number of possible groups of labels from $\\mathcal{V}_{\\mathrm{OOD}}$ could be huge (in ImageNet, for example, the number of possible groups of size 1000 from the 20,000 OOD classes is about $\\binom{20,000}{1000} = 2.5 \\times 10^{1722}$ ), so instead of going over every possible group of classes, we sort the classes by their severity scores and then use a sliding window of size $|\\mathcal{V}_{\\mathrm{ID}}|$ to define $|\\mathcal{V}_{\\mathrm{OOD}}| - |\\mathcal{V}_{\\mathrm{ID}}| + 1$ groups of classes with increasing severity (see Figure 2). This method for reducing the number of considered groups of classes was chosen because it groups OOD classes with similar severity scores together.",
537
+ "bbox": [
538
+ 169,
539
+ 775,
540
+ 823,
541
+ 888
542
+ ],
543
+ "page_idx": 3
544
+ },
545
+ {
546
+ "type": "text",
547
+ "text": "Next, we choose the groups that correspond to the percentiles $\\{10 \\cdot i\\}_{i=0}^{i=10}$ in the array of sorted groups. Finally, we construct the C-OOD benchmark for each severity level $i$ from the set of test",
548
+ "bbox": [
549
+ 169,
550
+ 895,
551
+ 823,
552
+ 925
553
+ ],
554
+ "page_idx": 3
555
+ },
556
+ {
557
+ "type": "header",
558
+ "text": "Published as a conference paper at ICLR 2023",
559
+ "bbox": [
560
+ 171,
561
+ 32,
562
+ 478,
563
+ 47
564
+ ],
565
+ "page_idx": 3
566
+ },
567
+ {
568
+ "type": "page_number",
569
+ "text": "4",
570
+ "bbox": [
571
+ 493,
572
+ 948,
573
+ 503,
574
+ 959
575
+ ],
576
+ "page_idx": 3
577
+ },
578
+ {
579
+ "type": "image",
580
+ "img_path": "images/84009a8503acef2f059bcd05c36c4a5f3f69d6b46eccca99249f28c7c14f02cf.jpg",
581
+ "image_caption": [
582
+ "Figure 2: We define $|\\mathcal{V}_{\\mathrm{OOD}}| - |\\mathcal{V}_{\\mathrm{ID}}| + 1$ groups of classes with increasing severity by sorting all OOD classes $\\bar{y}_i \\in \\mathcal{V}_{\\mathrm{OOD}}$ by their severity scores $s(\\bar{y} | f, \\kappa)$ , and then use a sliding window of size $|\\mathcal{V}_{\\mathrm{ID}}|$ to choose the considered groups."
583
+ ],
584
+ "image_footnote": [],
585
+ "bbox": [
586
+ 236,
587
+ 99,
588
+ 759,
589
+ 150
590
+ ],
591
+ "page_idx": 4
592
+ },
593
+ {
594
+ "type": "text",
595
+ "text": "samples $c_{test}^{\\bar{y}}$ of all classes in group $i$ . This procedure for choosing groups allows us to interpret the severity levels using percentiles. For example, severity level 5 contains classes that match the median severity among the considered groups. Thus, the performance evaluated on the benchmark for severity 5 corresponds to the performance of the model on samples with a median detection difficulty.",
596
+ "bbox": [
597
+ 169,
598
+ 227,
599
+ 826,
600
+ 287
601
+ ],
602
+ "page_idx": 4
603
+ },
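+ {
+ "type": "text",
+ "text": "The construction above can be rendered compactly in Python. The sketch below is an editorial illustration of Algorithm 1 under the definitions in this section (the dictionaries and names are assumptions, not the authors' released implementation)."
+ },
+ {
+ "type": "code",
+ "sub_type": "code",
+ "code_caption": [
+ "Sketch (illustrative): Algorithm 1 in Python"
+ ],
+ "code_body": "# Illustrative sketch of Algorithm 1 (not the authors' released implementation).\nimport numpy as np\n\ndef generate_benchmark(conf_est, test_sets, group_size):\n    # conf_est: {ood_class: kappa scores on its estimation set c_est}\n    # test_sets: {ood_class: held-out test samples c_test}\n    scored = sorted(conf_est, key=lambda y: np.mean(conf_est[y]))  # ascending s(y|f,kappa)\n    n_groups = len(scored) - group_size + 1      # sliding window over sorted classes\n    groups = [scored[i:i + group_size] for i in range(n_groups)]\n    benchmarks = []                              # severity levels 0..10 (percentiles)\n    for i in range(11):\n        j = min(int(i / 10 * n_groups), n_groups - 1)\n        benchmarks.append([x for y in groups[j] for x in test_sets[y]])\n    return benchmarks"
+ },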
604
+ {
605
+ "type": "text",
606
+ "text": "The resulting benchmark is tailored to the evaluated model, since the latter was used to generate it and, therefore, can be used to measure its specific performance. In Appendix B we further argue why our framework can be used to compare C-OOD detection performance of different models.",
607
+ "bbox": [
608
+ 169,
609
+ 291,
610
+ 826,
611
+ 335
612
+ ],
613
+ "page_idx": 4
614
+ },
615
+ {
616
+ "type": "text",
617
+ "text": "4 CONSTRUCTING BENCHMARKS FOR IMAGENET CLASSIFIERS",
618
+ "text_level": 1,
619
+ "bbox": [
620
+ 171,
621
+ 353,
622
+ 712,
623
+ 369
624
+ ],
625
+ "page_idx": 4
626
+ },
627
+ {
628
+ "type": "text",
629
+ "text": "To use ImageNet-21k as an OOD dataset, we first filter out undesired labels. Since ImageNet-21K contains the ID dataset (ImageNet-1K), the first step is to remove the ID classes from the OOD dataset. Next, we remove all classes that are hypernyms or hyponyms of classes in ImageNet-1K because it might be inaccurate to include them as an OOD class. For example, ImageNet-1K contains the class \"brown bear\" and ImageNet-21K has the class \"bear\", which is a hypernym for \"brown bear\" so it would not be accurate to include \"bear\" in a C-OOD detection test. We furthermore filter OOD classes that, together with an ID class, either comprise the same object or are a component of the other one. This is due to most images in the dataset containing both components as parts of the whole object (e.g., \"pool ball\" from ImageNet-1k and \"pool table\" from ImageNet-21k). We also filter out classes that are practically identical, even though they possess WordNet id numbers that are different (e.g., \"hen\" is found twice as two distinct classes, with id n01514859 in ImageNet-1k and id n01792640 in ImageNet-21k). Since each class in the ImageNet-1k validation set has 50 samples, we set the number of testing samples for each C-OOD class to be 50 as well $|c_{test}^{\\bar{y}}| = 50$ . In addition, We set the estimation set for each class to be $150 |c_{est}^{\\bar{y}}| = 150$ . Overall, this means that each OOD class must have at least 200 samples. Accordingly, we remove classes with less than 200 samples. For classes with more than 200 samples we randomly select 200 samples and remove the rest.",
630
+ "bbox": [
631
+ 169,
632
+ 383,
633
+ 826,
634
+ 608
635
+ ],
636
+ "page_idx": 4
637
+ },
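+ {
+ "type": "text",
+ "text": "The hypernym/hyponym step can be automated over WordNet ids. The sketch below is an editorial illustration using NLTK's WordNet interface; it is not the paper's released filtering code, and the helper names are assumptions."
+ },
+ {
+ "type": "code",
+ "sub_type": "code",
+ "code_caption": [
+ "Sketch (illustrative): WordNet-based filtering of OOD candidates"
+ ],
+ "code_body": "# Illustrative sketch: WordNet hypernym/hyponym filtering with NLTK.\nfrom nltk.corpus import wordnet as wn  # requires nltk.download('wordnet')\n\ndef synset_of(wnid):\n    # map an ImageNet WordNet id such as 'n01514859' to an NLTK synset\n    return wn.synset_from_pos_and_offset(wnid[0], int(wnid[1:]))\n\ndef related_to_id(candidate_wnid, id_wnids):\n    # True if the candidate class is a hypernym or hyponym of any ID class\n    cand = synset_of(candidate_wnid)\n    relatives = set(cand.closure(lambda s: s.hypernyms()))\n    relatives |= set(cand.closure(lambda s: s.hyponyms()))\n    return any(synset_of(w) in relatives for w in id_wnids)\n\n# keep = [w for w in ood_wnids if w not in id_wnids and not related_to_id(w, id_wnids)]"
+ },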
638
+ {
639
+ "type": "text",
640
+ "text": "While the above filtering choices are trivial and suitable for most tasks, two additional filtering options are dependent on the task and its definition of two objects being considered identical. The first option concerns animal classes that might appear to be very similar but have a biological difference such that an expert could distinguish between the two. A good example of this can be observed in Figure 3, depicting the ImageNet-1k class of Monarch butterflies and the ImageNet-21k class of Viceroy butterflies, which are both distinct species of butterflies. The similarity is so remarkable that scientists believe they have evolved to mimic one another to repel common predators (Ritland & Brower, 1991). This mimicry does not only fool predators and the untrained eye: all models studied in this paper classified more than $50\\%$ of Viceroy samples as a Monarch butterfly. The fact that such",
641
+ "bbox": [
642
+ 169,
643
+ 614,
644
+ 826,
645
+ 741
646
+ ],
647
+ "page_idx": 4
648
+ },
649
+ {
650
+ "type": "image",
651
+ "img_path": "images/25aac83398d7ec66850416682274a5508b4e3975509fa1041978801ba03b8eeb.jpg",
652
+ "image_caption": [
653
+ "Figure 3: While both butterflies appear very similar, a Viceroy can be distinguished from a Monarch by a black line crossing its postmedian hindwing. The red arrow on the Viceroy image indicates this black line."
654
+ ],
655
+ "image_footnote": [],
656
+ "bbox": [
657
+ 303,
658
+ 750,
659
+ 694,
660
+ 843
661
+ ],
662
+ "page_idx": 4
663
+ },
664
+ {
665
+ "type": "text",
666
+ "text": "classes are biologically different led us to keep them in the test set by default and serve as extremely",
667
+ "bbox": [
668
+ 171,
669
+ 909,
670
+ 823,
671
+ 925
672
+ ],
673
+ "page_idx": 4
674
+ },
675
+ {
676
+ "type": "header",
677
+ "text": "Published as a conference paper at ICLR 2023",
678
+ "bbox": [
679
+ 171,
680
+ 32,
681
+ 478,
682
+ 47
683
+ ],
684
+ "page_idx": 4
685
+ },
686
+ {
687
+ "type": "page_number",
688
+ "text": "5",
689
+ "bbox": [
690
+ 493,
691
+ 948,
692
+ 504,
693
+ 959
694
+ ],
695
+ "page_idx": 4
696
+ },
697
+ {
698
+ "type": "text",
699
+ "text": "hard OOD classes. Our code, however, allows users to disable such classes easily, since some tasks might permit such similar classes to be classified as the same.",
700
+ "bbox": [
701
+ 169,
702
+ 103,
703
+ 823,
704
+ 132
705
+ ],
706
+ "page_idx": 5
707
+ },
708
+ {
709
+ "type": "text",
710
+ "text": "The second option concerns inanimate objects created by humans that might appear very similar but are, by definition, distinct from one another and are used differently. An example of two such classes",
711
+ "bbox": [
712
+ 169,
713
+ 138,
714
+ 825,
715
+ 167
716
+ ],
717
+ "page_idx": 5
718
+ },
719
+ {
720
+ "type": "image",
721
+ "img_path": "images/a7ee1436df4e9f66ac905436b89f1fe853af35c5af8066d2883462da5e87e4d8.jpg",
722
+ "image_caption": [
723
+ "Figure 4: While both balls appear similar, they are distinguished by their different uses."
724
+ ],
725
+ "image_footnote": [],
726
+ "bbox": [
727
+ 367,
728
+ 178,
729
+ 630,
730
+ 252
731
+ ],
732
+ "page_idx": 5
733
+ },
734
+ {
735
+ "type": "text",
736
+ "text": "is shown in Figure 4, depicting a cue ball used for billiard games and a ping pong ball. Both are strikingly similar, and we believe a person completely unfamiliar with one of the games might easily confuse the two, if all they had were the images. Our code can be configured easily to either exclude or include such classes.",
737
+ "bbox": [
738
+ 169,
739
+ 290,
740
+ 823,
741
+ 347
742
+ ],
743
+ "page_idx": 5
744
+ },
745
+ {
746
+ "type": "text",
747
+ "text": "After completing the filtering as described above, the remaining classes were used in the process described in Section 3 as the set of OOD classes $\\mathcal{V}_{\\mathrm{OOD}}$ , with ImageNet's validation set being the set of ID classes $\\mathcal{V}_{\\mathrm{ID}}$ . Our code allows the generation of C-OD benchmarks for any ImageNet classification model and its $\\kappa$ confidence scoring function. Moreover, we ran the process ourselves for 525 models pretrained on ImageNet, taken from the torchvision (0.10) and \"timm\" (0.4.12) repositories (Paszke et al., 2019; Wightman, 2019), with softmax as $\\kappa$ . For these models, the benchmarks are ready to be used by the community without further preparations being necessary.",
748
+ "bbox": [
749
+ 169,
750
+ 353,
751
+ 826,
752
+ 454
753
+ ],
754
+ "page_idx": 5
755
+ },
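+ {
+ "type": "text",
+ "text": "For reference, scoring one of these pretrained models with the softmax $\\kappa$ can look as follows (an editorial sketch; the model name is just one example from \"timm\", and preprocessing is omitted)."
+ },
+ {
+ "type": "code",
+ "sub_type": "code",
+ "code_caption": [
+ "Sketch (illustrative): softmax kappa for a pretrained timm model"
+ ],
+ "code_body": "# Illustrative sketch: scoring one pretrained 'timm' model with the softmax kappa.\nimport timm\nimport torch\nimport torch.nn.functional as F\n\nmodel = timm.create_model('vit_large_patch32_384', pretrained=True).eval()\n\n@torch.no_grad()\ndef kappa_softmax(images):\n    # images: a batch already preprocessed for this model\n    return F.softmax(model(images), dim=-1).max(dim=-1).values"
+ },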
756
+ {
757
+ "type": "text",
758
+ "text": "5 PERFORMANCE ANALYSIS",
759
+ "text_level": 1,
760
+ "bbox": [
761
+ 171,
762
+ 470,
763
+ 423,
764
+ 486
765
+ ],
766
+ "page_idx": 5
767
+ },
768
+ {
769
+ "type": "text",
770
+ "text": "Having generated C-OOD benchmarks using the above technique for 525 different models, in this section we analyze the results. We first focus on results obtained when setting the confidence function $\\kappa$ to be the softmax response, as it is widely accepted as a baseline in the OOD literature (Hendrycks & Gimpel, 2017; Berger et al., 2021). We then evaluate additional $\\kappa$ functions such as ODIN, entropy and MC dropout. Our analysis leads to several interesting insights.",
771
+ "bbox": [
772
+ 169,
773
+ 501,
774
+ 823,
775
+ 571
776
+ ],
777
+ "page_idx": 5
778
+ },
779
+ {
780
+ "type": "image",
781
+ "img_path": "images/82d054cfe43423aa4cfe02ce743eb4c22e3e5654b4b891dc06cff6b2a0932b0d.jpg",
782
+ "image_caption": [
783
+ "Figure 5: The mean relative improvement when using different training regimes (distillation, pretraining etc.). The shaded green area indicates the area of positive improvement."
784
+ ],
785
+ "image_footnote": [],
786
+ "bbox": [
787
+ 302,
788
+ 583,
789
+ 697,
790
+ 718
791
+ ],
792
+ "page_idx": 5
793
+ },
794
+ {
795
+ "type": "text",
796
+ "text": "1) Knowledge distillation improves C-OOD detection. We measured C-OOD detection improvement (measured in AUROC) when using different training regimes to explore whether a certain method consistently contributes to detection performance. Results are depicted in Figure 5. To make a fair comparison, we only compare pairs of models such that both models have identical architecture and training regimes, with the exception of the method itself being evaluated (e.g., training with or without knowledge distillation). Of all training regimes (knowledge distillation, adversarial training (Goodfellow et al., 2015), pretraining on ImageNet-21k, see below), knowledge distillation had the most significant impact in most severity levels $s > 3$ . In Galil et al. (2023) we also find that among these training regimes, knowledge distillation is the best booster of uncertainty estimation performance in an in-distribution setting. Next, we find that ImageNet21k pretraining also improves performance, and is more beneficial to performance than knowledge distillation in low levels of",
797
+ "bbox": [
798
+ 169,
799
+ 771,
800
+ 826,
801
+ 925
802
+ ],
803
+ "page_idx": 5
804
+ },
805
+ {
806
+ "type": "header",
807
+ "text": "Published as a conference paper at ICLR 2023",
808
+ "bbox": [
809
+ 171,
810
+ 32,
811
+ 478,
812
+ 47
813
+ ],
814
+ "page_idx": 5
815
+ },
816
+ {
817
+ "type": "page_number",
818
+ "text": "6",
819
+ "bbox": [
820
+ 493,
821
+ 948,
822
+ 504,
823
+ 959
824
+ ],
825
+ "page_idx": 5
826
+ },
827
+ {
828
+ "type": "text",
829
+ "text": "severity $s \\leq 3$ . Note that this observation could not have been achieved with simplified benchmarks (e.g., ImageNet-O). Our new framework allows for such observations thanks to the division of the benchmarks into different levels of severity. Finally, it is not surprising that adversarial training is irrelevant to C-OOD detection.",
830
+ "bbox": [
831
+ 169,
832
+ 103,
833
+ 823,
834
+ 160
835
+ ],
836
+ "page_idx": 6
837
+ },
838
+ {
839
+ "type": "list",
840
+ "sub_type": "text",
841
+ "list_items": [
842
+ "2) A subset of ViTs achieves the best C-OD detection performance, both in absolute terms and per-model size (# parameters, see Figure 9 in Appendix C). Several training regimes (including the original regime from the paper introducing ViT) result in ViTs that outperform all other architectures and training regimes in terms of C-OD detection, e.g., Dosovitskiy et al. (2021); Steiner et al. (2022); Chen et al. (2022); Ridnik et al. (2021). Further research into other training regimes, however, reveals that not all training regimes result in superb performance (Touvron et al., 2021; 2022; Singh et al., 2022; Paszke et al., 2019), even when a similar amount of data is introduced into the training. We also find that the same successful subset of ViTs outperforms any other model in terms of uncertainty estimation performance in an in-distribution setting in Galil et al. (2023). These observations warrant additional research with the hope of either training more robust ViTs or transferring the unidentified ingredient of the successful subset of ViTs into other models.",
843
+ "3) The language-vision CLIP model achieves good zero-shot C-OOD detection performance for low severity levels. CLIP (Radford et al., 2021) enables zero-shot classification and produces an impressive performance. We find it is also good at C-OOD detection (especially in severity levels lower than 6), without needing any training or fine-tuning with regard to the dataset. This observation is significant because it means CLIP could be used as a zero-shot C-OOD detection algorithm without the need to train on the ID classes. This also allows the user to change the definition of which classes are considered ID in a flexible manner without the need to retrain the detector. To the best of our knowledge, we are the first to make the observation that CLIP can serve as a capable zero-shot detector on its own, without further training, additional components, or knowledge of the possible OOD classes in advance. For more details, see Appendix D."
844
+ ],
845
+ "bbox": [
846
+ 169,
847
+ 166,
848
+ 826,
849
+ 467
850
+ ],
851
+ "page_idx": 6
852
+ },
853
+ {
854
+ "type": "image",
855
+ "img_path": "images/c280fc2ce06238537b2d6a14151512ea289217e57ca6e473170a5a6f80eb2ed6.jpg",
856
+ "image_caption": [
857
+ "Figure 6: Architecture accuracy vs. mean C-OD AUROC performance. In the legend, the pair of numbers next to each architecture name corresponds to the Spearman correlation and the number of networks tested from that architecture family (most samples are too small to draw any specific conclusions). Accuracy appears to have a high correlation with the C-OD detection performance, with a Spearman correlation of 0.65."
858
+ ],
859
+ "image_footnote": [],
860
+ "bbox": [
861
+ 292,
862
+ 484,
863
+ 699,
864
+ 656
865
+ ],
866
+ "page_idx": 6
867
+ },
868
+ {
869
+ "type": "list",
870
+ "sub_type": "text",
871
+ "list_items": [
872
+ "4) Accuracy is the factor most correlated with C-OOD detection. We observe that accuracy is typically a good indicator of the model's performance in C-OOD detection at most severity levels $[s_0 - s_8]$ , with Spearman correlation values in the range of [0.6, 0.73] at those levels (see Figure 12 in Appendix E). The scatter plot in Figure 6 shows the relationship between the architecture accuracy and its C-OOD detection performance. When grouping the networks by architecture, we notice that most architectures also follow this trend. When measuring the correlation between AUROC and accuracy among only the $20\\%$ most accurate models, however, the Spearman correlation drops to a range of [0.34, 0.43] (see Figure 13 in Appendix E).",
873
+ "5) In-distribution ranking performance is positively correlated with C-OD detection. The next best indicative factor correlated with C-OD detection performance after accuracy is the model's in-distribution ranking performance (\"ID AUROC\", see Appendix A), with Spearman correlation values in the range of [0.4, 0.5]. When measuring the correlation between AUROC and ID AUROC"
874
+ ],
875
+ "bbox": [
876
+ 169,
877
+ 750,
878
+ 826,
879
+ 925
880
+ ],
881
+ "page_idx": 6
882
+ },
883
+ {
884
+ "type": "header",
885
+ "text": "Published as a conference paper at ICLR 2023",
886
+ "bbox": [
887
+ 171,
888
+ 32,
889
+ 478,
890
+ 47
891
+ ],
892
+ "page_idx": 6
893
+ },
894
+ {
895
+ "type": "page_number",
896
+ "text": "7",
897
+ "bbox": [
898
+ 493,
899
+ 948,
900
+ 503,
901
+ 959
902
+ ],
903
+ "page_idx": 6
904
+ },
905
+ {
906
+ "type": "text",
907
+ "text": "among only the $20\\%$ most accurate models, however, the Spearman correlation increases to a range of [0.54, 0.77]; see Appendix E for more details.",
908
+ "bbox": [
909
+ 169,
910
+ 103,
911
+ 823,
912
+ 133
913
+ ],
914
+ "page_idx": 7
915
+ },
916
+ {
917
+ "type": "list",
918
+ "sub_type": "text",
919
+ "list_items": [
920
+ "6) Most OOD classes appear in every severity level $i \\in [0, 10]$ for at least one model, with the exception of some classes that appear to reach severity level 10 for most or even all models (e.g., Viceroy Butterfly, depicted in Figure 3 in Section 4). This observation suggests that \"OOD hardness\" is usually subjective, and changes greatly across different models.",
921
+ "7) The ranking of the best C-OD detection models tends to remain similar across severity levels. This means that when selecting the best model for deployment, it is usually enough to observe its performance on only a few severity levels; see Appendix F. Note that this conclusion is only true when leaving the $\\kappa$ confidence function fixed (see below).",
922
+ "8) ODIN offers significant improvements over softmax for most models. In addition to evaluating with softmax as the $\\kappa$ confidence function, we evaluate a few additional methods to serve as $\\kappa$ functions: ODIN, entropy, MC dropout and \"max-logit\" (not applying softmax). For each model $f$ and $\\kappa$ we re-ran the algorithm described in Section 3 to benchmark $(f,\\kappa)$ (we do this because using the same C-OOD groups produced when using softmax might give an unfair advantage to other $\\kappa$ functions); see Appendix G for more technical details."
923
+ ],
924
+ "bbox": [
925
+ 169,
926
+ 138,
927
+ 828,
928
+ 349
929
+ ],
930
+ "page_idx": 7
931
+ },
932
+ {
933
+ "type": "image",
934
+ "img_path": "images/9b7ae244eb283dcf78044fca5652329abc68badd52b06aa339db9ef9f6a14667.jpg",
935
+ "image_caption": [
936
+ "Figure 7: Relative improvement gain in C-OD detection performance when using ODIN instead of softmax. Each point represents an evaluated model. The green shaded area indicates the area of positive improvement."
937
+ ],
938
+ "image_footnote": [],
939
+ "bbox": [
940
+ 299,
941
+ 358,
942
+ 699,
943
+ 532
944
+ ],
945
+ "page_idx": 7
946
+ },
947
+ {
948
+ "type": "text",
949
+ "text": "Figure 7 shows each model's improvement when using ODIN rather than softmax, from which it is visible that the improvement has a high variance: some models benefit significantly from using ODIN, while it is detrimental to other models. Furthermore, whether or not a model benefits from ODIN changes across different levels of severity. For example, applying ODIN instead of softmax to ViT-L/32-384 barely improves detection when at severity level 0 (AUROC improves by $0.4\\%$ ), but it significantly improves its detection as the severity level increases (for severity level 10, AUROC improves by $9\\%$ ). Other models' detection performance, on the other hand, may decrease as severity increases (see Figure 7 for examples). These facts suggest that the pair of (model, $\\kappa$ ) needs to be considered with respect to the task and severity level relevant to it. Moreover, it may be that the $\\kappa$ function hyperparameters need to be optimized specifically for the desired severity level.",
950
+ "bbox": [
951
+ 169,
952
+ 597,
953
+ 825,
954
+ 737
955
+ ],
956
+ "page_idx": 7
957
+ },
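+ {
+ "type": "text",
+ "text": "For context, a minimal ODIN scoring function is sketched below. This is an editorial illustration of the method of Liang et al. (2018); the temperature and perturbation magnitude shown are common defaults from that paper, not values tuned for any model here."
+ },
+ {
+ "type": "code",
+ "sub_type": "code",
+ "code_caption": [
+ "Sketch (illustrative): ODIN confidence score"
+ ],
+ "code_body": "# Illustrative sketch of ODIN (Liang et al., 2018): temperature scaling plus a\n# small input perturbation; the hyperparameters shown are common defaults.\nimport torch\nimport torch.nn.functional as F\n\ndef kappa_odin(model, images, temperature=1000.0, epsilon=0.0014):\n    images = images.clone().detach().requires_grad_(True)\n    log_probs = F.log_softmax(model(images) / temperature, dim=-1)\n    nll = -log_probs.max(dim=-1).values.sum()    # NLL of the predicted class\n    nll.backward()\n    perturbed = images - epsilon * images.grad.sign()  # push toward higher confidence\n    with torch.no_grad():\n        probs = F.softmax(model(perturbed) / temperature, dim=-1)\n    return probs.max(dim=-1).values"
+ },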
958
+ {
959
+ "type": "text",
960
+ "text": "9) Not applying softmax can improve some models significantly, although most are harmed by it. Figure 16 in Appendix G depicts the effect of not applying softmax, which we dub \"max-logit\". While most models are harmed by using max-logit instead of softmax, some models are significantly benefited. ViTs, which already outperform all other models, perform significantly better when softmax is not applied, with ViT-L/32-384 improving by $10.6\\%$ . It is worth mentioning that of all the (model, $\\kappa$ ) pairs evaluated in this paper, ViT-L/32-384 applied with max-logit achieve the best detection performance. Interestingly, regardless of the $\\kappa$ function evaluated, ViT-L/32-384 demonstrated the best detection performance. In Figure 8, we plot its performance across all severity levels using each of the $\\kappa$ functions we consider. Also, as noted in Appendix G, the hyperparameters used for ODIN when applied to ViT were not optimized specifically to it. Performance by using ODIN may improve beyond max-logit with model-specific optimization. Observing that max-logit could be so beneficial for a subset of models while being harmful to most other models was made possible thanks to the scale of our study.",
961
+ "bbox": [
962
+ 169,
963
+ 743,
964
+ 828,
965
+ 925
966
+ ],
967
+ "page_idx": 7
968
+ },
969
+ {
970
+ "type": "header",
971
+ "text": "Published as a conference paper at ICLR 2023",
972
+ "bbox": [
973
+ 171,
974
+ 32,
975
+ 478,
976
+ 47
977
+ ],
978
+ "page_idx": 7
979
+ },
980
+ {
981
+ "type": "page_number",
982
+ "text": "8",
983
+ "bbox": [
984
+ 493,
985
+ 948,
986
+ 504,
987
+ 959
988
+ ],
989
+ "page_idx": 7
990
+ },
991
+ {
992
+ "type": "image",
993
+ "img_path": "images/d5a953ace1ab772b0d8303e09ca3dbbd80918b3de26f1523f155fb7d8ff542a9.jpg",
994
+ "image_caption": [
995
+ "Figure 8: OOD detection performance of ViT-L/32-384, the best model evaluated using each of the $\\kappa$ functions we consider."
996
+ ],
997
+ "image_footnote": [],
998
+ "bbox": [
999
+ 282,
1000
+ 99,
1001
+ 715,
1002
+ 287
1003
+ ],
1004
+ "page_idx": 8
1005
+ },
1006
+ {
1007
+ "type": "text",
1008
+ "text": "10) Using entropy as a confidence function $\\kappa$ improves C-OD detection performance in most cases. We compare the performance gain from switching to using entropy instead of the softmax score. The results are depicted in Figure 17 in Appendix G. We note that, in most cases, using entropy improves the detection performance.",
1009
+ "bbox": [
1010
+ 169,
1011
+ 349,
1012
+ 823,
1013
+ 407
1014
+ ],
1015
+ "page_idx": 8
1016
+ },
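+ {
+ "type": "text",
+ "text": "The two logit-based confidence functions discussed in observations 9 and 10 can be written compactly, as in the editorial sketch below (negative entropy follows footnote 1; names are illustrative)."
+ },
+ {
+ "type": "code",
+ "sub_type": "code",
+ "code_caption": [
+ "Sketch (illustrative): negative-entropy and max-logit kappa functions"
+ ],
+ "code_body": "# Illustrative sketch: negative-entropy and max-logit confidence functions.\nimport torch\nimport torch.nn.functional as F\n\ndef kappa_neg_entropy(logits):\n    probs = F.softmax(logits, dim=-1)\n    return (probs * torch.log(probs.clamp_min(1e-12))).sum(dim=-1)  # equals -H(p)\n\ndef kappa_max_logit(logits):\n    return logits.max(dim=-1).values  # no softmax applied"
+ },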
1017
+ {
1018
+ "type": "text",
1019
+ "text": "11) MC dropout improves detection, especially for low levels of severity. We evaluate MC dropout Gal & Ghahramani (2016) in the context of C-OOD detection. We use 30 dropout-enabled forward passes. The mean softmax score of these passes is calculated and then a predictive entropy score is used as the final uncertainty estimate. The improvements when using MC dropout instead of softmax across all severity levels are depicted in Figure 18 in Appendix G using box plots. We find that MC dropout improves performance, especially so at lower levels of severity. The improvement becomes less significant as severity increases. Similar to ODIN, MC dropout seems to improve some models more significantly at lower severity levels (e.g., MobileNets (Howard et al., 2019)), while other models are improved more significantly by MC dropout at higher severity levels (e.g., ViTs). We further analyze MC dropout and recall that it comprises two main components: (a) dropout-enabled forward passes and (b) entropy of the mean probability vector from the forward passes. To test which component contributes the most to the perceived gains, we compare the C-OOD detection performance when using MC dropout to the C-OOD detection performance when using just entropy (with no multiple dropout-enabled forward passes). The results of this comparison are plotted in Figure 19 in Appendix G. We find that MC dropout slightly improves upon entropy at most severity levels, especially at lower ones, with few outliers being either significantly improved or harmed.",
1020
+ "bbox": [
1021
+ 169,
1022
+ 412,
1023
+ 826,
1024
+ 636
1025
+ ],
1026
+ "page_idx": 8
1027
+ },
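+ {
+ "type": "text",
+ "text": "The MC-dropout estimate described above corresponds to the following editorial sketch (not the exact evaluation code): dropout-enabled forward passes, followed by the negative predictive entropy of the mean softmax vector."
+ },
+ {
+ "type": "code",
+ "sub_type": "code",
+ "code_caption": [
+ "Sketch (illustrative): MC-dropout kappa"
+ ],
+ "code_body": "# Illustrative sketch of the MC-dropout kappa: 30 dropout-enabled passes, then\n# the negative predictive entropy of the mean softmax vector.\nimport torch\nimport torch.nn.functional as F\n\ndef kappa_mc_dropout(model, images, passes=30):\n    model.eval()\n    for m in model.modules():            # re-enable only the dropout layers\n        if isinstance(m, torch.nn.Dropout):\n            m.train()\n    with torch.no_grad():\n        probs = torch.stack([F.softmax(model(images), dim=-1) for _ in range(passes)])\n    mean = probs.mean(dim=0)\n    return (mean * torch.log(mean.clamp_min(1e-12))).sum(dim=-1)  # negative entropy"
+ },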
1028
+ {
1029
+ "type": "text",
1030
+ "text": "6 CONCLUDING REMARKS",
1031
+ "text_level": 1,
1032
+ "bbox": [
1033
+ 171,
1034
+ 656,
1035
+ 410,
1036
+ 672
1037
+ ],
1038
+ "page_idx": 8
1039
+ },
1040
+ {
1041
+ "type": "text",
1042
+ "text": "We introduced a novel approach to benchmarking the performance of classifiers in detecting C-OODs. In contrast to existing techniques, the proposed method allows for unbiased measurements against specific models or confidence functions. A key feature of the proposed benchmarking procedure is that it allows for graded measurements of class out-of-distribution levels of severity. Using this property, we can identify trends in detection robustness that are otherwise impossible to detect. In addition to opening new avenues for future research, the proposed method can be used to draw more precise conclusions about the performance of various models and detection techniques.",
1043
+ "bbox": [
1044
+ 169,
1045
+ 686,
1046
+ 823,
1047
+ 787
1048
+ ],
1049
+ "page_idx": 8
1050
+ },
1051
+ {
1052
+ "type": "text",
1053
+ "text": "Using our new benchmarking procedure, we offered numerous interesting observations that merit further investigation into how to improve C-OOD detection. Among the interesting questions raised is why is knowledge distillation beneficial to boosting detection performance, and how can we enhance its robustness to C-OODs? What can we learn from the architectures that were inclined to perform well in C-OOD detection, such as ViT and CLIP? Finally, could detection methods be crafted and optimized for specific severity levels, or can they be modified to be so by changing a hyperparameter?",
1054
+ "bbox": [
1055
+ 169,
1056
+ 791,
1057
+ 823,
1058
+ 878
1059
+ ],
1060
+ "page_idx": 8
1061
+ },
1062
+ {
1063
+ "type": "header",
1064
+ "text": "Published as a conference paper at ICLR 2023",
1065
+ "bbox": [
1066
+ 171,
1067
+ 32,
1068
+ 478,
1069
+ 47
1070
+ ],
1071
+ "page_idx": 8
1072
+ },
1073
+ {
1074
+ "type": "page_footnote",
1075
+ "text": "<sup>1</sup>Entropy is maximal when the distribution given by the model for $P(y|x)$ is uniform, which implies high uncertainty. To convert entropy into a confidence signal, which should increase as the uncertainty decreases, we use negative entropy.",
1076
+ "bbox": [
1077
+ 169,
1078
+ 883,
1079
+ 823,
1080
+ 925
1081
+ ],
1082
+ "page_idx": 8
1083
+ },
1084
+ {
1085
+ "type": "page_number",
1086
+ "text": "9",
1087
+ "bbox": [
1088
+ 493,
1089
+ 948,
1090
+ 503,
1091
+ 959
1092
+ ],
1093
+ "page_idx": 8
1094
+ },
1095
+ {
1096
+ "type": "text",
1097
+ "text": "ACKNOWLEDGMENTS",
1098
+ "text_level": 1,
1099
+ "bbox": [
1100
+ 171,
1101
+ 103,
1102
+ 356,
1103
+ 118
1104
+ ],
1105
+ "page_idx": 9
1106
+ },
1107
+ {
1108
+ "type": "text",
1109
+ "text": "This research was partially supported by the Israel Science Foundation, grant No. 710/18.",
1110
+ "bbox": [
1111
+ 171,
1112
+ 133,
1113
+ 759,
1114
+ 148
1115
+ ],
1116
+ "page_idx": 9
1117
+ },
1118
+ {
1119
+ "type": "text",
1120
+ "text": "REFERENCES",
1121
+ "text_level": 1,
1122
+ "bbox": [
1123
+ 173,
1124
+ 170,
1125
+ 287,
1126
+ 184
1127
+ ],
1128
+ "page_idx": 9
1129
+ },
1130
+ {
1131
+ "type": "list",
1132
+ "sub_type": "ref_text",
1133
+ "list_items": [
1134
+ "Christoph Berger, Magdalini Paschali, Ben Glocker, and Konstantinos Kamnitsas. Confidence-based out-of-distribution detection: A comparative study and analysis. In Carole H. Sudre, Roxane Licandro, Christian F. Baumgartner, Andrew Melbourne, Adrian V. Dalca, Jana Hutter, Ryutaro Tanno, Esra Abaci Turk, Koen Van Leemput, Jordina Torrents-Barrena, William M. Wells III, and Christopher K. Macgowan (eds.), Uncertainty for Safe Utilization of Machine Learning in Medical Imaging, and Perinatal Imaging, Placental and Preterm Image Analysis - 3rd International Workshop, UNSURE 2021, and 6th International Workshop, PIPPI 2021 Held in Conjunction with MICCAI 2021, Strasbourg, France, October 1, 2021, Proceedings, volume 12959 of Lecture Notes in Computer Science, pp. 122-132. Springer, 2021. doi: 10.1007/978-3-030-87735-4\\_.12. URL https://doi.org/10.1007/978-3-030-87735-4_12.",
1135
+ "Xiangning Chen, Cho-Jui Hsieh, and Boqing Gong. When vision transformers outperform resnets without pre-training or strong data augmentations. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022. URL https://openreview.net/forum?id=LtKcMgGOeLt.",
1136
+ "Xiangxiang Chu, Zhi Tian, Yuqing Wang, Bo Zhang, Haibing Ren, Xiaolin Wei, Huaxia Xia, and Chunhua Shen. Twins: Revisiting the design of spatial attention in vision transformers, 2021.",
1137
+ "L. P. Cordella, C. De Stefano, F. Tortorella, and M. Vento. A method for improving classification reliability of multilayer perceptrons. IEEE Transactions on Neural Networks, 6(5):1140-1147, 1995. doi: 10.1109/72.410358.",
1138
+ "C. De Stefano, C. Sansone, and M. Vento. To reject or not to reject: that is the question-an answer in case of neural classifiers. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 30(1):84-94, 2000. doi: 10.1109/5326.827457.",
1139
+ "Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248-255, 2009. doi: 10.1109/CVPR.2009.5206848.",
1140
+ "Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. URL https://openreview.net/forum?id=YicbFdNTTy.",
1141
+ "Sepideh Esmaeilpour, Bing Liu, Eric Robertson, and Lei Shu. Zero-shot out-of-distribution detection based on the pre-trained model CLIP. In Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelveth Symposium on Educational Advances in Artificial Intelligence, EAAI 2022 Virtual Event, February 22 - March 1, 2022, pp. 6568-6576. AAAI Press, 2022. URL https://ojs.aaai.org/index.php/AAAI/article/view/20610.",
1142
+ "Tom Fawcett. An introduction to roc analysis. Pattern Recognition Letters, 27(8):861-874, 2006. ISSN 0167-8655. doi: https://doi.org/10.1016/j.patrec.2005.10.010. URL https://www.sciencedirect.com/science/article/pii/S016786550500303X. ROC Analysis in Pattern Recognition.",
1143
+ "Stanislav Fort, Jie Ren, and Balaji Lakshminarayanan. Exploring the limits of out-of-distribution detection. In Marc'Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan (eds.), Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pp. 7068-7081, 2021. URL https://proceedings.neurips.cc/paper/2021/bit/3941c4358616274ac2436eacf67fae05-Abstract.html."
1144
+ ],
1145
+ "bbox": [
1146
+ 171,
1147
+ 194,
1148
+ 826,
1149
+ 922
1150
+ ],
1151
+ "page_idx": 9
1152
+ },
1153
+ {
1154
+ "type": "header",
1155
+ "text": "Published as a conference paper at ICLR 2023",
1156
+ "bbox": [
1157
+ 171,
1158
+ 32,
1159
+ 478,
1160
+ 47
1161
+ ],
1162
+ "page_idx": 9
1163
+ },
1164
+ {
1165
+ "type": "page_number",
1166
+ "text": "10",
1167
+ "bbox": [
1168
+ 490,
1169
+ 948,
1170
+ 506,
1171
+ 959
1172
+ ],
1173
+ "page_idx": 9
1174
+ },
1175
+ {
1176
+ "type": "list",
1177
+ "sub_type": "ref_text",
1178
+ "list_items": [
1179
+ "Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. 2016.",
1180
+ "Ido Galil, Mohammed Dabbah, and Ran El-Yaniv. What can we learn from the selective prediction and uncertainty estimation performance of 523 imagenet classifiers? In International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=p66AzKi6Xim.",
1181
+ "Shang-Hua Gao, Ming-Ming Cheng, Kai Zhao, Xin-Yu Zhang, Ming-Hsuan Yang, and Philip Torr. Res2net: A new multi-scale backbone architecture. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(2):652-662, Feb 2021. ISSN 1939-3539. doi: 10.1109/tpami.2019.2938758. URL http://dx.doi.org/10.1109/TPAMI.2019.2938758.",
1182
+ "Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015. URL http://arxiv.org/abs/1412.6572.",
1183
+ "Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulouse, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net, 2017. URL https://openreview.net/forum?id=Hkg4TI9xl.",
1184
+ "Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, and Dawn Song. Natural adversarial examples. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, virtual, June 19-25, 2021, pp. 15262-15271. Computer Vision Foundation / IEEE, 2021. URL https://openaccess.thecvf.com/content/CVPR2021/html/Hendrycks_Natural_Adversarial_Examples_CVPR_2021_paper.html.",
1185
+ "Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network, 2015.",
1186
+ "Andrew Howard, Ruoming Pang, Hartwig Adam, Quoc V. Le, Mark Sandler, Bo Chen, Weijun Wang, Liang-Chieh Chen, Mingxing Tan, Grace Chu, Vijay Vasudevan, and Yukun Zhu. Searching for mobilenetv3. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019, pp. 1314-1324. IEEE, 2019. doi: 10.1109/ICCV.2019.00140. URL https://doi.org/10.1109/ICCV.2019.00140.",
1187
+ "Kimin Lee, Kibok Lee, Honglak Lee, and Jinwoo Shin. A simple unified framework for detecting out-of-distribution samples and adversarial attacks. In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolò Cesa-Bianchi, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pp. 7167-7177, 2018. URL https://proceedings.neurips.cc/paper/2018/hash/abdeb6f575ac5c6676b747bca8d09cc2-Abstract.html.",
1188
+ "Shiyu Liang, Yixuan Li, and R. Srikant. Enhancing the reliability of out-of-distribution image detection in neural networks. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=H1VGkIxRZ.",
1189
+ "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems 32, pp. 8024-8035. Curran Associates, Inc., 2019. URL http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf.",
1190
+ "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In Marina Meila and Tong Zhang (eds.), Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pp. 8748-8763. PMLR, 2021. URL http://proceedings.mlr.press/v139/radford21a.html."
1191
+ ],
1192
+ "bbox": [
1193
+ 171,
1194
+ 103,
1195
+ 826,
1196
+ 922
1197
+ ],
1198
+ "page_idx": 10
1199
+ },
1200
+ {
1201
+ "type": "header",
1202
+ "text": "Published as a conference paper at ICLR 2023",
1203
+ "bbox": [
1204
+ 171,
1205
+ 32,
1206
+ 478,
1207
+ 47
1208
+ ],
1209
+ "page_idx": 10
1210
+ },
1211
+ {
1212
+ "type": "page_number",
1213
+ "text": "11",
1214
+ "bbox": [
1215
+ 488,
1216
+ 948,
1217
+ 506,
1218
+ 959
1219
+ ],
1220
+ "page_idx": 10
1221
+ },
1222
+ {
1223
+ "type": "list",
1224
+ "sub_type": "ref_text",
1225
+ "list_items": [
1226
+ "Tal Ridnik, Emanuel Ben Baruch, Asaf Noy, and Lihi Zelnik. Imagenet-21k pretraining for the masses. In Joaquin Vanschoren and Sai-Kit Yeung (eds.), Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks 2021, December 2021, virtual, 2021. URL https://datasets-benchmarks-proceedings.neurips.cc/paper/2021混沌/98f13708210194c475687be6106a3b84-AAbstract-round1.html.",
1227
+ "David B. Ritland and Lincoln P. Brower. The viceroy butterfly is not a batesian mimic. Nature, 350 (6318):497-498, Apr 1991. ISSN 1476-4687. doi: 10.1038/350497a0. URL https://doi.org/10.1038/350497a0.",
1228
+ "Walter J. Scheirer, Anderson de Rezende Rocha, Archana Sapkota, and Terrance E. Boult. Toward open set recognition. IEEE Trans. Pattern Anal. Mach. Intell., 35(7):1757-1772, 2013. doi: 10.1109/TPAMI.2012.256. URL https://doi.org/10.1109/TPAMI.2012.256.",
1229
+ "Gabi Shalev, Yossi Adi, and Joseph Keshet. Out-of-distribution detection using multiple semantic label representations. In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolò Cesa-Bianchi, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pp. 7386-7396, 2018. URL https://proceedings.neurips.cc/paper/2018hash/2151b4c76b4dcb048d06a5c32942b6f6-AAbstract.html.",
1230
+ "Mannat Singh, Laura Gustafson, Aaron Adcock, Vinicius de Freitas Reis, Bugra Gedik, Raj Prateek Kosaraju, Dhruv Mahajan, Ross B. Girshick, Piotr Dólar, and Laurens van der Maaten. Revisiting weakly supervised pre-training of visual perception models. CoRR, abs/2201.08371, 2022. URL https://arxiv.org/abs/2201.08371.",
1231
+ "Andreas Peter Steiner, Alexander Kolesnikov, Xiaohua Zhai, Ross Wightman, Jakob Uszkoreit, and Lucas Beyer. How to train your vit? data, augmentation, and regularization in vision transformers. Transactions on Machine Learning Research, 2022. URL https://openreview.net/forum?id=4nPswr1KcP.",
1232
+ "Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. Training data-efficient image transformers & distillation through attention. In Marina Meila and Tong Zhang (eds.), Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pp. 10347-10357. PMLR, 2021. URL http://proceedings.mlr.press/v139/touvron21a.html.",
1233
+ "Hugo Touvron, Matthieu Cord, and Herve Jégou. Deit III: revenge of the vit. CoRR, abs/2204.07118, 2022. doi: 10.48550/arXiv.2204.07118. URL https://doi.org/10.48550/arXiv.2204.07118.",
1234
+ "Ross Wightman. Pytorch image models. https://github.com/rwrightman/pytorch-image-models, 2019.",
1235
+ "I. Zeki Yalniz, Hervé Jégou, Kan Chen, Manohar Paluri, and Dhruv Mahajan. Billion-scale semi-supervised learning for image classification. CoRR, abs/1905.00546, 2019. URL http://arxiv.org/abs/1905.00546.",
1236
+ "Hang Zhang, Chongruo Wu, Zhongyue Zhang, Yi Zhu, Haibin Lin, Zhi Zhang, Yue Sun, Tong He, Jonas Mueller, R. Manmatha, Mu Li, and Alexander Smola. Resnest: Split-attention networks, 2020."
1237
+ ],
1238
+ "bbox": [
1239
+ 171,
1240
+ 102,
1241
+ 826,
1242
+ 760
1243
+ ],
1244
+ "page_idx": 11
1245
+ },
1246
+ {
1247
+ "type": "text",
1248
+ "text": "A DEFINING IN-DISTRIBUTION AUROC",
1249
+ "text_level": 1,
1250
+ "bbox": [
1251
+ 171,
1252
+ 786,
1253
+ 522,
1254
+ 803
1255
+ ],
1256
+ "page_idx": 11
1257
+ },
1258
+ {
1259
+ "type": "text",
1260
+ "text": "We follow Galil et al. (2023) in defining in-distribution AUROC (\"ID AUROC\"). ID AUROC is defined similarly to Equation 1, but discriminating between correct and incorrect predictions instead of discriminating between ID and OOD instances.",
1261
+ "bbox": [
1262
+ 169,
1263
+ 818,
1264
+ 823,
1265
+ 861
1266
+ ],
1267
+ "page_idx": 11
1268
+ },
1269
+ {
1270
+ "type": "text",
1271
+ "text": "For every two random samples $(x_{1},y_{1}),(x_{2},y_{2})\\sim P(\\mathcal{X},\\mathcal{Y})$ and given that $\\ell (f(x_1),y_1) > \\ell (f(x_2),y_2)$ , the ranking performance of $\\kappa$ is defined as the probability that $\\kappa$ ranks $x_{2}$ higher than $x_{1}$ :",
1272
+ "bbox": [
1273
+ 169,
1274
+ 867,
1275
+ 825,
1276
+ 909
1277
+ ],
1278
+ "page_idx": 11
1279
+ },
1280
+ {
1281
+ "type": "equation",
1282
+ "text": "\n$$\n\\Pr \\left[ \\kappa \\left(x _ {1}, \\hat {y} \\mid f\\right) < \\kappa \\left(x _ {2}, \\hat {y} \\mid f\\right) \\mid \\ell \\left(f \\left(x _ {1}\\right), y _ {1}\\right) > \\ell \\left(f \\left(x _ {2}\\right), y _ {2}\\right) \\right] \\tag {2}\n$$\n",
1283
+ "text_format": "latex",
1284
+ "bbox": [
1285
+ 305,
1286
+ 909,
1287
+ 823,
1288
+ 926
1289
+ ],
1290
+ "page_idx": 11
1291
+ },
1292
+ {
1293
+ "type": "header",
1294
+ "text": "Published as a conference paper at ICLR 2023",
1295
+ "bbox": [
1296
+ 171,
1297
+ 32,
1298
+ 478,
1299
+ 47
1300
+ ],
1301
+ "page_idx": 11
1302
+ },
1303
+ {
1304
+ "type": "page_number",
1305
+ "text": "12",
1306
+ "bbox": [
1307
+ 488,
1308
+ 946,
1309
+ 508,
1310
+ 960
1311
+ ],
1312
+ "page_idx": 11
1313
+ },
1314
+ {
1315
+ "type": "text",
1316
+ "text": "When the 0/1 loss is in play, it is known that AUROC in fact equals the probability in Equation (2) (Fawcett, 2006) and thus is a proper metric to measure ranking in classification (AKA ID AUROC or discrimination).",
1317
+ "bbox": [
1318
+ 169,
1319
+ 103,
1320
+ 826,
1321
+ 147
1322
+ ],
1323
+ "page_idx": 12
1324
+ },
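+ {
+ "type": "text",
+ "text": "Concretely, ID AUROC under the 0/1 loss reduces to scoring how well $\\kappa$ separates correct from incorrect predictions, as in the editorial sketch below (names are illustrative)."
+ },
+ {
+ "type": "code",
+ "sub_type": "code",
+ "code_caption": [
+ "Sketch (illustrative): ID AUROC"
+ ],
+ "code_body": "# Illustrative sketch: ID AUROC under the 0/1 loss (Equation (2)).\nimport numpy as np\nfrom sklearn.metrics import roc_auc_score\n\ndef id_auroc(kappa, is_correct):\n    # kappa: confidence per ID sample; is_correct: 1 if the prediction was correct\n    return roc_auc_score(np.asarray(is_correct), np.asarray(kappa))"
+ },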
1325
+ {
1326
+ "type": "text",
1327
+ "text": "B COMPARING MODELS' PERFORMANCE USING OUR FRAMEWORK",
1328
+ "text_level": 1,
1329
+ "bbox": [
1330
+ 171,
1331
+ 166,
1332
+ 756,
1333
+ 181
1334
+ ],
1335
+ "page_idx": 12
1336
+ },
1337
+ {
1338
+ "type": "text",
1339
+ "text": "The proposed framework allows for a fair comparison of models in terms of model-specific difficulty, rather than a fixed set of OOD classes chosen according to some (possibly arbitrary) criterion. This is because the framework evaluates each model's performance on tailored benchmarks. This approach provides a more accurate representation of the model's own performance. As the famous quote goes, \"You can't judge a fish by its ability to climb a tree\". Rephrasing this quote to adapt it to our discussion: if we want to compare a fish with a monkey on what is hardest for each of them, we should judge the fish by its ability to climb a tree and the monkey's ability to swim (although we are aware that some monkeys can swim). Our framework constructs specialized tests for both.",
1340
+ "bbox": [
1341
+ 169,
1342
+ 196,
1343
+ 826,
1344
+ 309
1345
+ ],
1346
+ "page_idx": 12
1347
+ },
1348
+ {
1349
+ "type": "text",
1350
+ "text": "That being said, by considering the construction of severity levels (per model), it is possible (neglecting estimation error of the estimation sets $c_{est}^{y}$ ) to compare the performance of two models specifically for the classes populating their maximal severity (severity 10):",
1351
+ "bbox": [
1352
+ 169,
1353
+ 316,
1354
+ 826,
1355
+ 359
1356
+ ],
1357
+ "page_idx": 12
1358
+ },
1359
+ {
1360
+ "type": "list",
1361
+ "sub_type": "text",
1362
+ "list_items": [
1363
+ "(1) Suppose that model $\\mathcal{A}$ has better performance (AUROC) on its own group $Z$ of hardest classes (severity 10) than model $\\mathcal{B}$ 's performance on its own severity 10 classes, denoted $K$ . Assume that $K$ does not equal $Z$ (otherwise we are done). Thus, $\\mathrm{AUROC}(\\mathcal{A}, Z) > \\mathrm{AUROC}(\\mathcal{B}, K)$ .",
1364
+ "(2) By construction of severity groups, for every set of classes $R \\neq Z$ , AUROC(A,R) ≥ AUROC(A,Z) (since $Z$ is the set of hardest classes for model A). This holds true for any set of classes $R$ , including the set $K$ . Therefore, AUROC(A,K) ≥ AUROC(A,Z)."
1365
+ ],
1366
+ "bbox": [
1367
+ 169,
1368
+ 364,
1369
+ 826,
1370
+ 455
1371
+ ],
1372
+ "page_idx": 12
1373
+ },
1374
+ {
1375
+ "type": "text",
1376
+ "text": "By combining (1) and (2) we get that $\\mathrm{AUROC}(\\mathcal{A}, K) \\geq \\mathrm{AUROC}(\\mathcal{A}, Z) > \\mathrm{AUROC}(\\mathcal{B}, K) \\Rightarrow \\mathrm{AUROC}(\\mathcal{A}, \\bar{K}) > \\mathrm{AUROC}(\\mathcal{B}, \\bar{K})$ , meaning that for the same set of classes $K$ , model $\\mathcal{A}$ performs better than model $\\mathcal{B}$ .",
1377
+ "bbox": [
1378
+ 169,
1379
+ 462,
1380
+ 823,
1381
+ 505
1382
+ ],
1383
+ "page_idx": 12
1384
+ },
1385
+ {
1386
+ "type": "text",
1387
+ "text": "A \"mirror\" argument could be crafted to compare the models' performance on the classes populating their minimal severity (severity 0).",
1388
+ "bbox": [
1389
+ 169,
1390
+ 512,
1391
+ 823,
1392
+ 542
1393
+ ],
1394
+ "page_idx": 12
1395
+ },
1396
+ {
1397
+ "type": "text",
1398
+ "text": "C PER-SIZE PERFORMANCE COMPARISON",
1399
+ "text_level": 1,
1400
+ "bbox": [
1401
+ 171,
1402
+ 561,
1403
+ 540,
1404
+ 575
1405
+ ],
1406
+ "page_idx": 12
1407
+ },
1408
+ {
1409
+ "type": "text",
1410
+ "text": "The scatter plot in Figure 9 shows the relationship between the # of architecture parameters and its C-OOD AUROC performance. Overall, there is a moderate Spearman correlation of 0.45 between #parameters and the C-OOD performance when considering all tested networks. When grouping the networks by architecture families, however, we see that some architectures have high correlation between their model size and their C-OOD AUROC. Architecture families that exhibit this behavior are, for example, ViTs, Swins, EffecientNetV2 and ResNets whose correlations are 0.91, 0.94, 0.89, and 0.79, respectively. Other families exhibit moderate correlations, e.g., EffecientNet(V1) with a 0.47 Spearman correlation. Some architectures, on the other hand, have strong negative correlation, e.g., Twins Chu et al. (2021), NesT Zhang et al. (2020) and Res2Net Gao et al. (2021), whose correlations are -0.94,-1.0, and -0.85, respectively.",
1411
+ "bbox": [
1412
+ 169,
1413
+ 592,
1414
+ 826,
1415
+ 733
1416
+ ],
1417
+ "page_idx": 12
1418
+ },
1419
+ {
1420
+ "type": "text",
1421
+ "text": "Additionally, we note that the subset of ViT models mentioned in Section 5 are also the best even when considering a model size limitation.",
1422
+ "bbox": [
1423
+ 169,
1424
+ 738,
1425
+ 823,
1426
+ 767
1427
+ ],
1428
+ "page_idx": 12
1429
+ },
1430
+ {
1431
+ "type": "text",
1432
+ "text": "D ZERO-SHOT C-OOD DETECTION WITH CLIP",
1433
+ "text_level": 1,
1434
+ "bbox": [
1435
+ 171,
1436
+ 787,
1437
+ 584,
1438
+ 801
1439
+ ],
1440
+ "page_idx": 12
1441
+ },
1442
+ {
1443
+ "type": "text",
1444
+ "text": "To evaluate CLIP on ImageNet, we first prepare it following the code provided by its authors (https://github.com/openai/CLIP): The labels of ImageNet-1k are encoded into normalized embedding vectors. At inference time, the incoming image is encoded into another normalized embedding vector. A cosine similarity is then calculated between each label-embedding vector and the image-embedding vector. The highest similarity score is then taken as the confidence score for that prediction.",
1445
+ "bbox": [
1446
+ 169,
1447
+ 818,
1448
+ 826,
1449
+ 888
1450
+ ],
1451
+ "page_idx": 12
1452
+ },
1453
+ {
1454
+ "type": "text",
1455
+ "text": "To evaluate CLIP's C-OOD performance, we re-run the algorithm described in Section 3 to benchmark (CLIP, $\\kappa_{\\text{cosine similarity}}$ ). The best-performing instance of CLIP (ResNet-50x64) outperforms $96\\%$",
1456
+ "bbox": [
1457
+ 169,
1458
+ 895,
1459
+ 826,
1460
+ 926
1461
+ ],
1462
+ "page_idx": 12
1463
+ },
1464
+ {
1465
+ "type": "header",
1466
+ "text": "Published as a conference paper at ICLR 2023",
1467
+ "bbox": [
1468
+ 171,
1469
+ 32,
1470
+ 478,
1471
+ 47
1472
+ ],
1473
+ "page_idx": 12
1474
+ },
1475
+ {
1476
+ "type": "page_number",
1477
+ "text": "13",
1478
+ "bbox": [
1479
+ 488,
1480
+ 946,
1481
+ 508,
1482
+ 959
1483
+ ],
1484
+ "page_idx": 12
1485
+ },
1486
+ {
1487
+ "type": "image",
1488
+ "img_path": "images/cd5adc826d7e052daedd5c608de757618ccf406d05ccd821f03d5fb3e4592b74.jpg",
1489
+ "image_caption": [
1490
+ "Figure 9: Number of architecture parameters vs. C-OOD AUROC performance at severity level 5 (median severity). The pair of numbers next to each architecture name in the legend corresponds to its Spearman correlation and the number of models tested from that architecture (family), respectively. Note that specific ViT transformers are also the best when considering a model size limitation. Vertical lines indicate the sizes of ResNet-50 (left vertical line) and ResNet-101 (right vertical line)."
1491
+ ],
1492
+ "image_footnote": [],
1493
+ "bbox": [
1494
+ 171,
1495
+ 114,
1496
+ 823,
1497
+ 385
1498
+ ],
1499
+ "page_idx": 13
1500
+ },
1501
+ {
1502
+ "type": "image",
1503
+ "img_path": "images/ec3ded2e761fdf3f66d680f17612c10eac6135e943f0f7a37bd4d6c283d460f4.jpg",
1504
+ "image_caption": [
1505
+ "Figure 10: The same graph as in Figure 1, but with an additional lime-colored curve for CLIP ResNet-50x64. Note that as severity levels increase, CLIP's detection advantage is greatly reduced."
1506
+ ],
1507
+ "image_footnote": [],
1508
+ "bbox": [
1509
+ 171,
1510
+ 484,
1511
+ 826,
1512
+ 770
1513
+ ],
1514
+ "page_idx": 13
1515
+ },
1516
+ {
1517
+ "type": "text",
1518
+ "text": "of all other models (measured by its mean AUROC over all severity levels). In Figure 10 we visualize this CLIP's performance across all severity levels, in comparison to all other models. Interestingly, CLIP's relative advantage over other models decreases as the severity increases, and at severity 10, it is even lower than the median. The same is observed in Figure 11 which depicts a comparison between three identical ResNet-50 models that were trained with three different training regimes, one of them being CLIP. CLIP outperforms its competition up to severity 6 (with a significant margin in lower",
1519
+ "bbox": [
1520
+ 169,
1521
+ 839,
1522
+ 826,
1523
+ 925
1524
+ ],
1525
+ "page_idx": 13
1526
+ },
1527
+ {
1528
+ "type": "header",
1529
+ "text": "Published as a conference paper at ICLR 2023",
1530
+ "bbox": [
1531
+ 173,
1532
+ 32,
1533
+ 478,
1534
+ 47
1535
+ ],
1536
+ "page_idx": 13
1537
+ },
1538
+ {
1539
+ "type": "page_number",
1540
+ "text": "14",
1541
+ "bbox": [
1542
+ 490,
1543
+ 948,
1544
+ 508,
1545
+ 959
1546
+ ],
1547
+ "page_idx": 13
1548
+ },
1549
+ {
1550
+ "type": "image",
1551
+ "img_path": "images/fb8c357d56fee483f897d4d2a76377331e617b9db824a458d8b7be76f82f844b.jpg",
1552
+ "image_caption": [
1553
+ "Figure 11: A comparison of three identical ResNet-50 models trained with different training regimes: (1) The orange-colored curve represents a ResNet-50 model trained on ImageNet-1k with Torchvision's recipe; (2) the purple-colored curve represents a ResNet-50 model trained with a semi-supervised regime (Yaliniz et al., 2019); and (3) the lime-colored curve represents a ResNet-50 trained with CLIP."
1554
+ ],
1555
+ "image_footnote": [],
1556
+ "bbox": [
1557
+ 220,
1558
+ 98,
1559
+ 777,
1560
+ 339
1561
+ ],
1562
+ "page_idx": 14
1563
+ },
1564
+ {
1565
+ "type": "text",
1566
+ "text": "severity levels), and then underperforms. We hypothesize the degradation in CLIP's performance for higher severity levels happens due to an increase in the number of OOD classes that are descriptively similar to ID classes at higher levels of severity. For example, when examining different types of butterflies from Figure 3, the string text of \"monarch butterfly\" is very similar to the string text of \"viceroy butterfly\", simply due to both sharing the word \"butterfly\". Other butterflies that are less visually similar might be \"confused\" by CLIP and classified as monarch butterflies, simply because they are also defined as butterflies, making their cosine similarity with the text \"monarch butterfly\" higher. Common image classifiers, on the other hand, may confuse different butterflies if they appear visually similar and share many distinguishable features, but are not affected by the fact both classes are defined as \"butterflies\".",
1567
+ "bbox": [
1568
+ 169,
1569
+ 448,
1570
+ 826,
1571
+ 587
1572
+ ],
1573
+ "page_idx": 14
1574
+ },
1575
+ {
1576
+ "type": "text",
1577
+ "text": "We also observe that while CLIPs with a confidence function $\\kappa_{\\text{cosine similarity}}$ perform very well at C-OOD detection, their ID ranking is worse than other models. Using softmax and/or adding a linear-probe (as described in Radford et al. (2021)) improves ID ranking significantly, but results in mediocre C-OOD detection performance. We believe that this suggests the multimodal nature of CLIP is a crucial component of its C-OOD detection performance, and that the scaling effect of softmax hinders the partial order induced on OOD and ID instances.",
1578
+ "bbox": [
1579
+ 169,
1580
+ 594,
1581
+ 823,
1582
+ 679
1583
+ ],
1584
+ "page_idx": 14
1585
+ },
1586
+ {
1587
+ "type": "text",
1588
+ "text": "In Fort et al. (2021), it was suggested that CLIP be used as a zero-shot OOD detection algorithm. Their suggested method, however, requires knowledge of the possible OOD classes in advance. The authors of Esmaeilpour et al. (2022) suggested to use an additional captioning model, which is fine-tuned on some large dataset (which hopefully contains knowledge of the OOD classes that might emerge during inference), instead. Our suggested approach, in contrast, requires no knowledge, no fine-tuning and no models other than CLIP itself.",
1589
+ "bbox": [
1590
+ 169,
1591
+ 685,
1592
+ 826,
1593
+ 768
1594
+ ],
1595
+ "page_idx": 14
1596
+ },
1597
+ {
1598
+ "type": "text",
1599
+ "text": "E CORRELATIONS OF VARIOUS FACTORS WITH C-OOD DETECTION PERFORMANCE",
1600
+ "text_level": 1,
1601
+ "bbox": [
1602
+ 171,
1603
+ 789,
1604
+ 756,
1605
+ 823
1606
+ ],
1607
+ "page_idx": 14
1608
+ },
1609
+ {
1610
+ "type": "text",
1611
+ "text": "We searched for factors that could be indicative of or correlated with good performance in C-OOD detection. To this end, we measure the correlations of various factors with the C-OOD detection AUROC performance across all levels of severity. The results can be seen in the graphs in Figure 12. We observe that accuracy is typically a good indicator of the model's performance in C-OOD detection at most severity levels $(s_0 - s_8)$ , with Spearman correlation values in [0.6, 0.73] at those levels (see Figure 12). When measuring the correlation between AUROC and accuracy among only",
1612
+ "bbox": [
1613
+ 169,
1614
+ 839,
1615
+ 826,
1616
+ 925
1617
+ ],
1618
+ "page_idx": 14
1619
+ },
1620
+ {
1621
+ "type": "header",
1622
+ "text": "Published as a conference paper at ICLR 2023",
1623
+ "bbox": [
1624
+ 173,
1625
+ 32,
1626
+ 478,
1627
+ 47
1628
+ ],
1629
+ "page_idx": 14
1630
+ },
1631
+ {
1632
+ "type": "page_number",
1633
+ "text": "15",
1634
+ "bbox": [
1635
+ 490,
1636
+ 946,
1637
+ 506,
1638
+ 959
1639
+ ],
1640
+ "page_idx": 14
1641
+ },
1642
+ {
1643
+ "type": "image",
1644
+ "img_path": "images/abdf25b239343f9f1b3bbaf7170ff5723fc3f7ae9ff7426c073364675a230247.jpg",
1645
+ "image_caption": [
1646
+ "Figure 12: Spearman correlations between C-OOD detection AUROC and Accuracy, ID-AUROC, #parameters, input size, and embedding size across all severity levels."
1647
+ ],
1648
+ "image_footnote": [],
1649
+ "bbox": [
1650
+ 236,
1651
+ 99,
1652
+ 759,
1653
+ 349
1654
+ ],
1655
+ "page_idx": 15
1656
+ },
1657
+ {
1658
+ "type": "image",
1659
+ "img_path": "images/8a1664e912fa6734c083c5966e95b0ad55f95d9c5e30a5f4babc77dcc224a308.jpg",
1660
+ "image_caption": [
1661
+ "Figure 13: Spearman correlations between C-OD detection AUROC and Accuracy, ID-AUROC, #parameters, input size, and embedding size across all severity levels, among only the $20\\%$ most accurate models."
1662
+ ],
1663
+ "image_footnote": [],
1664
+ "bbox": [
1665
+ 236,
1666
+ 405,
1667
+ 759,
1668
+ 659
1669
+ ],
1670
+ "page_idx": 15
1671
+ },
1672
+ {
1673
+ "type": "text",
1674
+ "text": "the $20\\%$ most accurate models, however, the Spearman correlation drops to a range of [0.34, 0.43] (see Figure 13).",
1675
+ "bbox": [
1676
+ 169,
1677
+ 741,
1678
+ 823,
1679
+ 771
1680
+ ],
1681
+ "page_idx": 15
1682
+ },
1683
+ {
1684
+ "type": "text",
1685
+ "text": "The next best indicative factors are the ID ranking performance (\"ID AUROC\"), number of parameters, and the input image size (moderate correlations). Finally, the embedding size is only weakly correlated.",
1686
+ "bbox": [
1687
+ 169,
1688
+ 777,
1689
+ 826,
1690
+ 819
1691
+ ],
1692
+ "page_idx": 15
1693
+ },
1694
+ {
1695
+ "type": "text",
1696
+ "text": "Figure 14 shows a scatter plot of in-distribution ranking performance and C-OOD detection performance of all evaluated models. The overall Spearman correlation is 0.43. The legend indicates correlations obtained by specific architecture families. Interestingly, ID AUROC exhibits slightly increasing correlation up to severity $s_9$ , and at $s_{10}$ becomes the most indicative factor for C-OD detection performance. In contrast, all other investigated factors lose their indicative power at the highest severity levels ( $s_9$ , $s_{10}$ ). Moreover, when measuring the correlation between AUROC and ID AUROC among only the 20% most accurate models, the Spearman correlation increases to a range",
1697
+ "bbox": [
1698
+ 169,
1699
+ 825,
1700
+ 826,
1701
+ 925
1702
+ ],
1703
+ "page_idx": 15
1704
+ },
1705
+ {
1706
+ "type": "header",
1707
+ "text": "Published as a conference paper at ICLR 2023",
1708
+ "bbox": [
1709
+ 173,
1710
+ 32,
1711
+ 478,
1712
+ 47
1713
+ ],
1714
+ "page_idx": 15
1715
+ },
1716
+ {
1717
+ "type": "page_number",
1718
+ "text": "16",
1719
+ "bbox": [
1720
+ 490,
1721
+ 948,
1722
+ 508,
1723
+ 959
1724
+ ],
1725
+ "page_idx": 15
1726
+ },
1727
+ {
1728
+ "type": "image",
1729
+ "img_path": "images/410e047ff65b2e1678292cc2501093363b8d34a4c22330c196afe25860bb1806.jpg",
1730
+ "image_caption": [
1731
+ "Figure 14: The x-axis represents ID ranking performance (measured by AUROC), and the y-axis represents C-OOD detection performance in severity 5 (higher is better). The legend indicates correlations, by specific architecture families, with the number on the right representing sample size, and the one on the left representing the correlation between ID ranking and detection."
1732
+ ],
1733
+ "image_footnote": [],
1734
+ "bbox": [
1735
+ 240,
1736
+ 99,
1737
+ 759,
1738
+ 378
1739
+ ],
1740
+ "page_idx": 16
1741
+ },
1742
+ {
1743
+ "type": "text",
1744
+ "text": "of [0.54, 0.77], making it the most indicative factor for C-OOD detection among such models (see Figure 13).",
1745
+ "bbox": [
1746
+ 169,
1747
+ 470,
1748
+ 823,
1749
+ 500
1750
+ ],
1751
+ "page_idx": 16
1752
+ },
1753
+ {
1754
+ "type": "text",
1755
+ "text": "F CORRELATION BETWEEN RANKINGS OF MULTIPLE SEVERITY LEVELS",
1756
+ "text_level": 1,
1757
+ "bbox": [
1758
+ 171,
1759
+ 518,
1760
+ 792,
1761
+ 535
1762
+ ],
1763
+ "page_idx": 16
1764
+ },
1765
+ {
1766
+ "type": "image",
1767
+ "img_path": "images/2897ddb9111f6635d2ca605fafe24c2997145716483934b783841697741b0685.jpg",
1768
+ "image_caption": [
1769
+ "Figure 15: Spearman correlation between the rankings of the models given by different severity levels."
1770
+ ],
1771
+ "image_footnote": [],
1772
+ "bbox": [
1773
+ 238,
1774
+ 551,
1775
+ 759,
1776
+ 758
1777
+ ],
1778
+ "page_idx": 16
1779
+ },
1780
+ {
1781
+ "type": "text",
1782
+ "text": "Since we use multiple benchmarks for C-OOD detection (i.e., the 11 severity levels), to test the performance models in C-OOD detection, and each severity level may rank the models differently (i.e. the best performers for each severity level may vary), we now consider the question of how these rankings change across severity levels. To this end we calculated the correlations between the rankings obtained at different severity levels. The resulting correlation matrix can be seen in Figure 15. Overall, we observe high correlations, which means that different severity levels generally yield similar rankings of the models. This means that when selecting the best model for deployment, it is usually enough to observe its performance on only a few severity levels.",
1783
+ "bbox": [
1784
+ 169,
1785
+ 811,
1786
+ 826,
1787
+ 925
1788
+ ],
1789
+ "page_idx": 16
1790
+ },
1791
+ {
1792
+ "type": "header",
1793
+ "text": "Published as a conference paper at ICLR 2023",
1794
+ "bbox": [
1795
+ 173,
1796
+ 32,
1797
+ 478,
1798
+ 47
1799
+ ],
1800
+ "page_idx": 16
1801
+ },
1802
+ {
1803
+ "type": "page_number",
1804
+ "text": "17",
1805
+ "bbox": [
1806
+ 488,
1807
+ 946,
1808
+ 508,
1809
+ 959
1810
+ ],
1811
+ "page_idx": 16
1812
+ },
1813
+ {
1814
+ "type": "text",
1815
+ "text": "We also notice that for each severity level $s_i$ , the correlation with $s_j$ is higher the closer $j$ is to $i$ . This is not surprising and might be anticipated because adjacent severity levels have close severity scores by design.",
1816
+ "bbox": [
1817
+ 169,
1818
+ 103,
1819
+ 823,
1820
+ 147
1821
+ ],
1822
+ "page_idx": 17
1823
+ },
1824
+ {
1825
+ "type": "text",
1826
+ "text": "G COMPARISON OF DIFFERENT CONFIDENCE FUNCTIONS",
1827
+ "text_level": 1,
1828
+ "bbox": [
1829
+ 171,
1830
+ 167,
1831
+ 663,
1832
+ 183
1833
+ ],
1834
+ "page_idx": 17
1835
+ },
1836
+ {
1837
+ "type": "text",
1838
+ "text": "This section contains additional technical details and figures related to our comparison of ODIN, max-logit, entropy and MC dropout. Our main conclusions are presented in Section 5 of the main text.",
1839
+ "bbox": [
1840
+ 169,
1841
+ 198,
1842
+ 826,
1843
+ 241
1844
+ ],
1845
+ "page_idx": 17
1846
+ },
1847
+ {
1848
+ "type": "image",
1849
+ "img_path": "images/d84e1b1f81f123af6ec8ec866077ba50620bc78191ecc4b647de7b796f1d3bd5.jpg",
1850
+ "image_caption": [
1851
+ "Figure 16: Relative improvement gain in C-OOD detection performance when using max-logit instead of softmax (i.e., not applying softmax). In median terms, using max-logit harms performance over softmax for most evaluated models. However, some models (e.g., ViTs) greatly benefit from not applying softmax. The green shaded area indicates the area of positive improvement."
1852
+ ],
1853
+ "image_footnote": [],
1854
+ "bbox": [
1855
+ 236,
1856
+ 253,
1857
+ 759,
1858
+ 477
1859
+ ],
1860
+ "page_idx": 17
1861
+ },
1862
+ {
1863
+ "type": "image",
1864
+ "img_path": "images/bc43dee02398069b83b9de90ecdbc0989814d8e1854163b8dfef9522c0f8bb11.jpg",
1865
+ "image_caption": [
1866
+ "Figure 17: Relative improvement gain in C-OOD detection performance when using entropy instead of softmax. In median terms, entropy offers positive improvement over softmax for most levels of severity except $s \\in \\{7,8,9\\}$ . The green shaded area indicates the area of positive improvement."
1867
+ ],
1868
+ "image_footnote": [],
1869
+ "bbox": [
1870
+ 236,
1871
+ 566,
1872
+ 759,
1873
+ 790
1874
+ ],
1875
+ "page_idx": 17
1876
+ },
1877
+ {
1878
+ "type": "text",
1879
+ "text": "To use MC dropout, we first use 30 dropout-enabled forward passes. The mean softmax score of these passes is calculated and then a predictive entropy score is used as the final uncertainty estimate.",
1880
+ "bbox": [
1881
+ 169,
1882
+ 859,
1883
+ 826,
1884
+ 888
1885
+ ],
1886
+ "page_idx": 17
1887
+ },
1888
+ {
1889
+ "type": "text",
1890
+ "text": "When using ODIN, we use a temperature of 2 and set $\\epsilon$ to be $1\\cdot 10^{-5}$ . We obtained these hyperparameters by using a simple grid search over a validation set, and using seven models of different",
1891
+ "bbox": [
1892
+ 169,
1893
+ 895,
1894
+ 826,
1895
+ 925
1896
+ ],
1897
+ "page_idx": 17
1898
+ },
1899
+ {
1900
+ "type": "header",
1901
+ "text": "Published as a conference paper at ICLR 2023",
1902
+ "bbox": [
1903
+ 171,
1904
+ 32,
1905
+ 478,
1906
+ 47
1907
+ ],
1908
+ "page_idx": 17
1909
+ },
1910
+ {
1911
+ "type": "page_number",
1912
+ "text": "18",
1913
+ "bbox": [
1914
+ 488,
1915
+ 946,
1916
+ 508,
1917
+ 959
1918
+ ],
1919
+ "page_idx": 17
1920
+ },
1921
+ {
1922
+ "type": "image",
1923
+ "img_path": "images/b599427bfe35dbb59748a156e6d94ab7623dd5f3d442fd1e20321537c3cb5ddd.jpg",
1924
+ "image_caption": [
1925
+ "Figure 18: Relative improvement gain in C-OOD detection performance when using MC dropout instead of softmax. We find that MC dropout improves performance, especially at lower levels of severity. The improvement becomes less significant as severity increases."
1926
+ ],
1927
+ "image_footnote": [],
1928
+ "bbox": [
1929
+ 236,
1930
+ 99,
1931
+ 759,
1932
+ 324
1933
+ ],
1934
+ "page_idx": 18
1935
+ },
1936
+ {
1937
+ "type": "image",
1938
+ "img_path": "images/aadb06effe614ab9d8ddc2603b66797506865451e21cdfb06ad6de11389a3f5b.jpg",
1939
+ "image_caption": [
1940
+ "Figure 19: Relative improvement gain in C-OOD detection performance when using MC dropout instead of entropy."
1941
+ ],
1942
+ "image_footnote": [],
1943
+ "bbox": [
1944
+ 236,
1945
+ 391,
1946
+ 759,
1947
+ 616
1948
+ ],
1949
+ "page_idx": 18
1950
+ },
1951
+ {
1952
+ "type": "text",
1953
+ "text": "architectures of the entire sample of models evaluated. Our objective was to find the hyperparameters that improve the mean AUROC across all severity levels the most. We believe that fine-tuning the hyperparameters with the specific model and severity levels in mind may allow for better results.",
1954
+ "bbox": [
1955
+ 169,
1956
+ 681,
1957
+ 823,
1958
+ 724
1959
+ ],
1960
+ "page_idx": 18
1961
+ },
1962
+ {
1963
+ "type": "header",
1964
+ "text": "Published as a conference paper at ICLR 2023",
1965
+ "bbox": [
1966
+ 173,
1967
+ 32,
1968
+ 478,
1969
+ 47
1970
+ ],
1971
+ "page_idx": 18
1972
+ },
1973
+ {
1974
+ "type": "page_number",
1975
+ "text": "19",
1976
+ "bbox": [
1977
+ 490,
1978
+ 946,
1979
+ 508,
1980
+ 959
1981
+ ],
1982
+ "page_idx": 18
1983
+ }
1984
+ ]
2023/A framework for benchmarking Class-out-of-distribution detection and its application to ImageNet/014e40f0-866e-4844-b9d6-b3de43291df6_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A framework for benchmarking Class-out-of-distribution detection and its application to ImageNet/014e40f0-866e-4844-b9d6-b3de43291df6_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b81ec2a497bfbde518ed64b4a80c56a243ee98e1ff65678cb2f4cf0de4054616
3
+ size 3868629
2023/A framework for benchmarking Class-out-of-distribution detection and its application to ImageNet/full.md ADDED
@@ -0,0 +1,322 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # A FRAMEWORK FOR BENCHMARKING CLASS-OUT-OF-DISTRIBUTION DETECTION AND ITS APPLICATION TO IMAGENET
2
+
3
+ Ido Galil*
4
+
5
+ Technion
6
+
7
+ idogalil.ig@gmail.com
8
+
9
+ Mohammed Dabbah*
10
+
11
+ Amazon
12
+
13
+ m.m.dabbah@gmail.com
14
+
15
+ Ran El-Yaniv
16
+
17
+ Technion, Deci.AI
18
+
19
+ rani@cs.technion.ac.il
20
+
21
+ # ABSTRACT
22
+
23
+ When deployed for risk-sensitive tasks, deep neural networks must be able to detect instances with labels from outside the distribution for which they were trained. In this paper we present a novel framework to benchmark the ability of image classifiers to detect class-out-of-distribution instances (i.e., instances whose true labels do not appear in the training distribution) at various levels of detection difficulty. We apply this technique to ImageNet, and benchmark 525 pretrained, publicly available, ImageNet-1k classifiers. The code for generating a benchmark for any ImageNet-1k classifier, along with the benchmarks prepared for the above-mentioned 525 models is available at https://github.com/mdabbah/COOD_benchmarking.
24
+
25
+ The usefulness of the proposed framework and its advantage over alternative existing benchmarks is demonstrated by analyzing the results obtained for these models, which reveals numerous novel observations including: (1) knowledge distillation consistently improves class-out-of-distribution (C-OOD) detection performance; (2) a subset of ViTs performs better C-OOD detection than any other model; (3) the language-vision CLIP model achieves good zero-shot detection performance, with its best instance outperforming $96\%$ of all other models evaluated; (4) accuracy and in-distribution ranking are positively correlated to C-OOD detection; and (5) we compare various confidence functions for C-OOD detection. Our companion paper, also published in ICLR 2023 (Galil et al., 2023), examines the uncertainty estimation performance (ranking, calibration, and selective prediction performance) of these classifiers in an in-distribution setting.
26
+
27
+ # 1 INTRODUCTION
28
+
29
+ Deep neural networks (DNNs) show great performance in a wide variety of application domains including computer vision, natural language understanding and audio processing. These models are trained on data coming from a certain distribution $P(X,Y)$ , usually with the assumption that test points will be sampled from the same distribution. When the underlying distribution $P(X,Y)$ of test points is different from the one used to train a model, we may no longer expect the same performance from the model. The difference in distribution may be the result of many processes such as natural deviation in the input space $\mathcal{X}$ , noisy sensor readings of inputs, abrupt changes due to random events, newly arrived or refined input classes, etc. Here we distinguish between input distributional changes in $P_{X|Y}$ and changes in the label distribution. We focus on the latter case and consider the class-out-of-distribution (C-OOD) scenario, AKA open-set recognition (Scheirer et al., 2013), where the label support set $\mathcal{V}$ changes to a different set that includes the set $\mathcal{V}_{\mathrm{OOD}}$ , containing new classes not observed in training.
30
+
31
+ Consider the detection task in which our model is required to distinguish between samples belonging to classes it has seen in training, where $x \sim P(x|y \in \mathcal{Y}_{\mathrm{ID}})$ , and samples belonging to novel classes, i.e., $x \sim P(x|y \in \mathcal{Y}_{\mathrm{OOD}})$ . The question we now ask is: how should models be evaluated to most accurately reflect their detection performance? We aim to benchmark the detection performance
32
+
33
+ of DNN classification models that use their confidence rate function $\kappa$ (e.g., softmax response; see Section 2) to detect OOD labels, where the basic premise is that instances whose labels are in $\mathcal{V}_{\mathrm{OOD}}$ are assigned lower $\kappa$ values.
34
+
35
+ Most works on OOD detection use small-scale datasets that generally do not resemble the training distribution and, therefore, are easy to detect. The use of such sets often causes C-OOD detectors to appear better than they truly are when faced with realistic, yet harder tasks. Motivated by this deficiency, Hendrycks et al. (2021) introduced the ImageNet-O dataset as a solution. ImageNet-O, however, has two limitations. First, it benchmarks models with a single difficulty level exclusively, having only hard C-OOD instances, which might not be relevant for every task's requirements (Section 3 explains how to define different difficulty levels). Second, the original intent in the creation of ImageNet-O was to include only hard C-OOD instances. Its definition of "OOD hardness", however, was carried out with respect to ResNet-50's difficulty in detecting C-OOD classes, specifically when using softmax as its confidence function. This property makes ImageNet-O strongly biased. Indeed, consider the right-most box in Figure 1, which corresponds to the performance of 525 models over ImageNet-O. The orange dot in that box corresponds to ResNet-50, whose OOD detection performance is severely harmed by these ImageNet-O data. Nevertheless, it is evident that numerous models perform quite well, and all other models perform better than ResNet-50. The lack of an objective benchmark for C-OOD is the main motivation for our work.
36
+
37
+ ![](images/478d7d4767df88a14fff07c4847c20e33dd15f299749fd39f37879afcd8b665b.jpg)
38
+ Figure 1: OOD performance across severity (difficulty) levels, using the benchmarks produced by our framework. The detection performance decreases for all models as we increase the difficulty until it reaches near chance detection performance at the highest severity $(s_{10})$ . The top curve belongs to ViT-L/32-384, which surpasses all models at every severity level. We also observe how success or failure with regard to the previous C-OOD benchmark, ImageNet-O, does not reflect the models' true OOD detection performance since it was designed to specifically fool ResNet-50. At the bottom we provide visual examples for OOD classes from ImageNet-21k that may populate each severity level due to their similarity to ID classes from ImageNet-1k, and in this example, to a Monarch butterfly.
39
+
40
+ Our contributions. We propose a novel technique to generate a C-OOD benchmark that covers a variety of difficulty levels. Unlike other existing benchmarks (e.g., ImageNet-O), our technique is not biased towards an arbitrary model such as Resnet50 and/or a specific confidence function such as the softmax response. This useful property is obtained by tailoring the benchmark to the model being evaluated, including its confidence function, and not seeking to determine a single objective criterion for hardness of C-OOD samples (see Section 3).
41
+
42
+ Second, we show and explain how we filter ImageNet-21k to use it for the purpose of generating C-OOD benchmarks for ImageNet-1k (Deng et al., 2009) classifiers (see Section 4). We will provide
43
+
44
+ simple code to choose the filtering parameters most suitable for the specific aim for which the benchmark is meant (e.g., which classes are considered OOD).
45
+
46
+ Third, we demonstrate the power and usability of our method by applying our C-OOD framework to generate benchmarks for 525 ImageNet-1k classifiers available from popular repositories. We provide a benchmark for each of these classifiers, which will be available for use from our code.
47
+
48
+ We then analyze the results of these benchmarks to make numerous novel observations concerning C-OOD detection such as: (1) training regimes using knowledge distillation (Hinton et al., 2015) consistently yield models with better C-OOD detection performance than the same models trained identically, but without distillation; (2) a subset of ViTs performs better C-OOD detection than any other model; (3) the language-vision model CLIP achieves good zero-shot detection performance for low difficulty (severity) levels; (4) accuracy and in-distribution (ID) ranking are positively correlated with C-OOD detection; (5) we compare the performance of various confidence functions for C-OOD detection; (6) A number of other observations (see Section 5).
49
+
50
+ Lastly, we emphasize that the resulting difficulty levels of our framework allow benchmarking with respect to the difficulty levels most relevant to the task. For example, for a task with a high tolerance for risk (e.g., a task for an entertainment application), the performance of a model on a median difficulty level might be more important than on the hardest difficulty level (severity 10). The opposite might be true for some applications with a low tolerance for risk (e.g., medical applications), for which one requires the best performance to be attained even if the OOD is very hard to detect (severity 10). Furthermore, in Section 5 we show that detection algorithms do not always improve performance on all inputs equally, and could even hurt performance for specific difficulty levels and models (see Figure 7 for a striking example). Choosing the combination of (model, detection algorithm) based only on the detection performance on all data may yield sub-optimal results for our specific desired level of difficulty.
51
+
52
+ # 2 PROBLEM SETUP
53
+
54
+ Let $\mathcal{X}$ be the input space and $\mathcal{Y} = \mathcal{Y}_{\mathrm{ID}} \cup \mathcal{Y}_{\mathrm{OOD}}$ be the label space. Let $P(\mathcal{X}, \mathcal{Y})$ be an unknown distribution over $\mathcal{X} \times \mathcal{Y}$ . A model $f$ is a prediction function $f: \mathcal{X} \to \mathcal{Y}_{\mathrm{ID}}$ , and its predicted label for an image $x$ is denoted by $\hat{y}_f(x)$ . The model $f$ is produced by training on a labeled set $T_m = \{(x_i, y_i)\}_{i=1}^m \subseteq (\mathcal{X} \times \mathcal{Y}_{\mathrm{ID}})$ , sampled i.i.d. from $P(\mathcal{X}, \mathcal{Y}_{\mathrm{ID}})$ , with the objective of minimizing its empirical risk, defined by $\hat{r}(f|T_m) \triangleq \frac{1}{m} \sum_{i=1}^{m} \ell(f(x_i), y_i)$ , where $\ell: \mathcal{Y}_{\mathrm{ID}} \times \mathcal{Y}_{\mathrm{ID}} \to \mathbb{R}^+$ is a given loss function (e.g., cross-entropy loss for classification). Note that by this definition, the model $f$ will always misclassify any $x \sim P(\mathcal{X}, \mathcal{Y}_{\mathrm{OOD}})$ .
55
+
56
+ We define a confidence score function $\kappa (x,\hat{y} |f)$ , where $x\in \mathcal{X}$ , and $\hat{y}\in \mathcal{Y}_{\mathrm{ID}}$ is the model's prediction for $x$ , as follows. The function $\kappa$ should quantify confidence in the prediction of $\hat{y}$ for the input $x$ , based on signals from model $f$ . This function should induce a partial order over instances in $\mathcal{X}$ .
57
+
58
+ The most common and well-known $\kappa$ function for a classification model $f$ (with softmax at its last layer) is its softmax response, $\kappa(x, \hat{y} | f) \triangleq f(x)_{\hat{y}}$ (Cordella et al., 1995; De Stefano et al., 2000), which is also widely accepted as a baseline in the OOD literature (Hendrycks & Gimpel, 2017; Hendrycks et al., 2021; Berger et al., 2021; Shalev et al., 2018). While this is the primary $\kappa$ we evaluate for the sake of simplicity, various other $\kappa$ functions, which are also utilized for OOD detection, exist. To name a few: the out-of-distribution detector for neural networks (ODIN) (Liang et al., 2018), Monte-Carlo dropout (MC dropout) (Gal & Ghahramani, 2016), the Mahalanobis distance (Lee et al., 2018), and more. Although many of these methods use the direct output from $f$, $\kappa$ could be a different model unrelated to $f$ and unable to affect its predictions.
59
+
60
+ $\kappa$ functions can be evaluated by the quality of the partial order they induce over instances in $\mathcal{X}$ . For every two random samples $(x_{1},y_{1}),(x_{2},y_{2})\sim P(\mathcal{X},\mathcal{Y})$ , and given that $x_{1}$ belongs to an OOD label and that $x_{2}$ belongs to an ID label, the detection (or ranking) performance of $\kappa$ is defined as the probability that $\kappa$ ranks $x_{2}$ higher than $x_{1}$ :
61
+
62
+ $$
63
+ \Pr \left[ \kappa(x_1, \hat{y}_1 | f) < \kappa(x_2, \hat{y}_2 | f) \mid x_1 \sim P(\mathcal{X}, \mathcal{Y}_{\mathrm{OOD}}) \wedge x_2 \sim P(\mathcal{X}, \mathcal{Y}_{\mathrm{ID}}) \right] \tag{1}
64
+ $$
65
+
66
+ The Area Under the Receiver Operating Characteristic (AUROC or AUC) metric is often used to measure the performance of OOD detection. When ID samples are counted as true positives and OOD samples are counted as false positives, AUROC, in fact, equals the probability in Equation (1) (Fawcett,
67
+
68
+ 2006) and thus is a proper metric to measure OOD detection in classification. See Appendix A for evaluating $\kappa$ functions in an ID setting.
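+
+ To make the connection between Equation (1) and AUROC concrete, the following minimal sketch (with hypothetical confidence values; `roc_auc_score` is scikit-learn's standard routine) verifies that the AUROC computed with ID samples as positives equals the empirical pairwise ranking probability:
+
+ ```python
+ import numpy as np
+ from sklearn.metrics import roc_auc_score
+
+ # Hypothetical kappa values; in practice these come from the evaluated model f.
+ kappa_id = np.array([0.95, 0.88, 0.72, 0.99, 0.60])   # ID test samples
+ kappa_ood = np.array([0.55, 0.70, 0.40, 0.65])        # OOD test samples
+
+ # AUROC with ID samples as true positives and OOD samples as false positives.
+ labels = np.concatenate([np.ones_like(kappa_id), np.zeros_like(kappa_ood)])
+ scores = np.concatenate([kappa_id, kappa_ood])
+ auroc = roc_auc_score(labels, scores)
+
+ # Equation (1) estimated directly: the fraction of (OOD, ID) pairs that
+ # kappa ranks correctly (ties count half, matching the AUROC convention).
+ correct = kappa_ood[:, None] < kappa_id[None, :]
+ ties = kappa_ood[:, None] == kappa_id[None, :]
+ pair_prob = (correct + 0.5 * ties).mean()
+
+ assert np.isclose(auroc, pair_prob)  # both equal 0.9 for these values
+ ```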
69
+
70
+ # 3 CONSTRUCTING A MODEL-SPECIFIC CLASS-OUT-OF-DISTRIBUTION BENCHMARK
71
+
72
+ We first choose a dataset that contains samples from a large set of OOD labels (e.g., labels from ImageNet-21k that are not included in ImageNet-1k). Ideally, this OOD dataset should consist of OOD labels representing labels the model may encounter when deployed. Any large dataset could be used for the purpose of benchmarking performance on C-OOD by splitting it according to labels into an ID component, i.e., the labels on which the model trains, and into an OOD component, i.e., the labels on which the model is exclusively tested.
73
+
74
+ We now introduce a novel framework for generating C-OOD benchmarks with a controllable degree of severity, which could be thought of as the difficulty level of the data. Algorithm 1 summarizes our proposed technique. Let $\mathcal{Y}_{\mathrm{OOD}}$ be a large set of OOD classes (e.g., labels from ImageNet-21k that are
75
+
76
+ Algorithm 1 Generating C-OOD benchmarks
77
+ 1: function GENERATE_BENCHMARK($f$, $\kappa$, $\mathcal{Y}_{\mathrm{OOD}}$, group_size $= |\mathcal{Y}_{\mathrm{ID}}|$)
78
+ 2: for $\bar{y}\in \mathcal{Y}_{\mathrm{OOD}}$ do
79
+ 3: Split all samples of class $\bar{y}$ into two sets: $c_{est}^{\bar{y}}$ and $c_{test}^{\bar{y}}$
80
+ 4: Set the severity score of class $\bar{y}$ to be: $s(\bar{y}|f,\kappa) = \frac{1}{|c_{est}^{\bar{y}}|}\sum_{x\in c_{est}^{\bar{y}}}\kappa (x|f).$
81
+ 5: Insert the class and its score $(\bar{y},s(\bar{y}|f,\kappa))$ into classes_array
82
+ 6: Sort classes_array in ascending order by each OOD class' score $s(\bar{y}|f,\kappa)$
83
+ 7: for $i \leq |\mathcal{Y}_{\mathrm{OOD}}| -$ group_size do ▷ Sliding window of size group_size
84
+ 8: grp_array[i] $=$ classes_array[i:i+group_size]
85
+ 9: for $i < 11$ do ▷ Select groups in different percentiles to serve as benchmarks
86
+ 10: sev_benchmark[i] $= \{x\mid x\in c_{test}^{\bar{y}}$ s.t. $\bar{y}\in$ grp_array[j] and $j = [\frac{i}{10}\cdot |grp\_array|]\}$
87
+ 11: return sev_benchmark
88
+
89
+ not included in ImageNet-1k), and let $s(\bar{y} | f, \kappa)$ be a severity score, defined as the average confidence given by $\kappa$ to samples from class $\bar{y} \in \mathcal{Y}_{\mathrm{OOD}}$ . This score reflects the level of difficulty faced by the model $f$ and its $\kappa$ function when detecting instances from class $\bar{y}$ . When considering ID instances we expect $\kappa$ to give high values for highly confident predictions. Therefore, the larger $s(\bar{y} | f, \kappa)$ is, the harder it is for $\kappa$ to detect the OOD class $\bar{y}$ among ID classes. We estimate $s(\bar{y} | f, \kappa)$ for each class in the OOD dataset (e.g., ImageNet-21K) using a set of samples from the class (denoted by $c_{est}^{\bar{y}}$ ), while keeping a disjoint set of samples from the same class to be used for testing (denoted by $c_{test}^{\bar{y}}$ ). Using $s$ we sub-sample groups of classes (severity levels) from $\mathcal{Y}_{\mathrm{OOD}}$ , with increasing severity such that severity level $i \in [0, 10]$ is the $i^{th}$ percentile of all severity levels.
90
+
91
+ To achieve this, we first estimate the severity score for each class $\bar{y}$ in our OOD dataset for our model and its confidence function $(f,\kappa)$ , as follows:
92
+
93
+ $$
94
+ s(\bar{y} | f, \kappa) = \frac{1}{|c_{est}^{\bar{y}}|} \sum_{x \in c_{est}^{\bar{y}}} \kappa(x | f).
95
+ $$
96
+
97
+ We group the OOD classes into different groups, and choose the size of each group $G$ to be the same as $|\mathcal{Y}_{\mathrm{ID}}|$ , the number of labels in the ID dataset (e.g., in ImageNet we choose it to be 1000 classes). The number of possible groups of labels from $\mathcal{Y}_{\mathrm{OOD}}$ could be huge (in ImageNet, for example, the number of possible groups of size 1000 from the 20,000 OOD classes is about $\binom{20,000}{1000} \approx 2.5 \times 10^{1722}$ ), so instead of going over every possible group of classes, we sort the classes by their severity scores and then use a sliding window of size $|\mathcal{Y}_{\mathrm{ID}}|$ to define $|\mathcal{Y}_{\mathrm{OOD}}| - |\mathcal{Y}_{\mathrm{ID}}| + 1$ groups of classes with increasing severity (see Figure 2). This method for reducing the number of considered groups of classes was chosen because it groups OOD classes with similar severity scores together.
98
+
99
+ Next, we choose the groups that correspond to the percentiles $\{10 \cdot i\}_{i=0}^{i=10}$ in the array of sorted groups. Finally, we construct the C-OOD benchmark for each severity level $i$ from the set of test
100
+
101
+ ![](images/84009a8503acef2f059bcd05c36c4a5f3f69d6b46eccca99249f28c7c14f02cf.jpg)
102
+ Figure 2: We define $|\mathcal{Y}_{\mathrm{OOD}}| - |\mathcal{Y}_{\mathrm{ID}}| + 1$ groups of classes with increasing severity by sorting all OOD classes $\bar{y}_i \in \mathcal{Y}_{\mathrm{OOD}}$ by their severity scores $s(\bar{y} | f, \kappa)$ , and then using a sliding window of size $|\mathcal{Y}_{\mathrm{ID}}|$ to choose the considered groups.
103
+
104
+ samples $c_{test}^{\bar{y}}$ of all classes in group $i$ . This procedure for choosing groups allows us to interpret the severity levels using percentiles. For example, severity level 5 contains classes that match the median severity among the considered groups. Thus, the performance evaluated on the benchmark for severity 5 corresponds to the performance of the model on samples with a median detection difficulty.
105
+
106
+ The resulting benchmark is tailored to the evaluated model, since the latter was used to generate it and, therefore, can be used to measure its specific performance. In Appendix B we further argue why our framework can be used to compare C-OOD detection performance of different models.
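+
+ For readers who prefer code, the following Python sketch mirrors Algorithm 1 under two simplifying assumptions: the confidences $\kappa(x|f)$ of all samples of every OOD class are precomputed (the hypothetical `conf_by_class` mapping), and each severity-level benchmark is returned as an array of test-set confidences rather than raw images. It is a sketch, not the released implementation:
+
+ ```python
+ import numpy as np
+
+ def generate_benchmark(conf_by_class, group_size, n_est=150, seed=0):
+     """Sketch of Algorithm 1. `conf_by_class` maps each OOD class to a 1-D
+     array of kappa confidences for all of its samples (hypothetical format)."""
+     rng = np.random.default_rng(seed)
+     scored, test_split = [], {}
+     for y_bar, conf in conf_by_class.items():
+         conf = rng.permutation(conf)
+         est, test = conf[:n_est], conf[n_est:]     # the c_est / c_test split
+         scored.append((float(est.mean()), y_bar))  # severity score s(y_bar|f,kappa)
+         test_split[y_bar] = test
+     scored.sort()                                  # ascending severity
+     n_groups = len(scored) - group_size + 1        # number of sliding windows
+     benchmarks = []
+     for i in range(11):                            # severity levels 0..10
+         j = round(i / 10 * (n_groups - 1))         # one rounding of the percentile index
+         window = scored[j:j + group_size]
+         benchmarks.append(np.concatenate([test_split[y] for _, y in window]))
+     return benchmarks
+ ```
+
+ Severity level 5 then corresponds to the window of classes at the median severity score, matching the percentile interpretation given above.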
107
+
108
+ # 4 CONSTRUCTING BENCHMARKS FOR IMAGENET CLASSIFIERS
109
+
110
+ To use ImageNet-21k as an OOD dataset, we first filter out undesired labels. Since ImageNet-21K contains the ID dataset (ImageNet-1K), the first step is to remove the ID classes from the OOD dataset. Next, we remove all classes that are hypernyms or hyponyms of classes in ImageNet-1K because it might be inaccurate to include them as an OOD class. For example, ImageNet-1K contains the class "brown bear" and ImageNet-21K has the class "bear", which is a hypernym for "brown bear", so it would not be accurate to include "bear" in a C-OOD detection test. We furthermore filter OOD classes that, together with an ID class, either comprise the same object or are a component of the other one. This is due to most images in the dataset containing both components as parts of the whole object (e.g., "pool ball" from ImageNet-1k and "pool table" from ImageNet-21k). We also filter out classes that are practically identical, even though they possess WordNet id numbers that are different (e.g., "hen" is found twice as two distinct classes, with id n01514859 in ImageNet-1k and id n01792640 in ImageNet-21k). Since each class in the ImageNet-1k validation set has 50 samples, we set the number of testing samples for each C-OOD class to be 50 as well ($|c_{test}^{\bar{y}}| = 50$). In addition, we set the size of the estimation set for each class to be 150 ($|c_{est}^{\bar{y}}| = 150$). Overall, this means that each OOD class must have at least 200 samples. Accordingly, we remove classes with fewer than 200 samples. For classes with more than 200 samples, we randomly select 200 samples and remove the rest.
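+
+ A minimal sketch of the hypernym/hyponym filtering step using NLTK's WordNet interface might look as follows; it assumes that NLTK's WordNet offsets match ImageNet's wnids (true for WordNet 3.0, but version-dependent), and `is_related` is a hypothetical helper name:
+
+ ```python
+ from nltk.corpus import wordnet as wn  # requires: nltk.download("wordnet")
+
+ def to_synset(wnid):
+     # Map an ImageNet wnid such as "n01514859" to a WordNet synset.
+     return wn.synset_from_pos_and_offset(wnid[0], int(wnid[1:]))
+
+ def is_related(ood_wnid, id_wnids):
+     """True if the OOD class coincides with, or is a hypernym/hyponym of,
+     any ID class, in which case it is removed from the OOD pool."""
+     ood = to_synset(ood_wnid)
+     related = {ood}
+     related.update(ood.closure(lambda s: s.hypernyms()))
+     related.update(ood.closure(lambda s: s.hyponyms()))
+     return any(to_synset(w) in related for w in id_wnids)
+ ```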
111
+
112
+ While the above filtering choices are trivial and suitable for most tasks, two additional filtering options are dependent on the task and its definition of two objects being considered identical. The first option concerns animal classes that might appear to be very similar but have a biological difference such that an expert could distinguish between the two. A good example of this can be observed in Figure 3, depicting the ImageNet-1k class of Monarch butterflies and the ImageNet-21k class of Viceroy butterflies, which are both distinct species of butterflies. The similarity is so remarkable that scientists believe they have evolved to mimic one another to repel common predators (Ritland & Brower, 1991). This mimicry does not only fool predators and the untrained eye: all models studied in this paper classified more than $50\%$ of Viceroy samples as a Monarch butterfly. The fact that such
113
+
114
+ ![](images/25aac83398d7ec66850416682274a5508b4e3975509fa1041978801ba03b8eeb.jpg)
115
+ Figure 3: While both butterflies appear very similar, a Viceroy can be distinguished from a Monarch by a black line crossing its postmedian hindwing. The red arrow on the Viceroy image indicates this black line.
116
+
117
+ classes are biologically different led us to keep them in the test set by default, letting them serve as extremely
118
+
119
+ hard OOD classes. Our code, however, allows users to disable such classes easily, since some tasks might permit such similar classes to be classified as the same.
120
+
121
+ The second option concerns inanimate objects created by humans that might appear very similar but are, by definition, distinct from one another and are used differently. An example of two such classes
122
+
123
+ ![](images/a7ee1436df4e9f66ac905436b89f1fe853af35c5af8066d2883462da5e87e4d8.jpg)
124
+ Figure 4: While both balls appear similar, they are distinguished by their different uses.
125
+
126
+ is shown in Figure 4, depicting a cue ball used for billiard games and a ping pong ball. Both are strikingly similar, and we believe a person completely unfamiliar with one of the games might easily confuse the two, if all they had were the images. Our code can be configured easily to either exclude or include such classes.
127
+
128
+ After completing the filtering as described above, the remaining classes were used in the process described in Section 3 as the set of OOD classes $\mathcal{Y}_{\mathrm{OOD}}$ , with ImageNet's validation set providing the set of ID classes $\mathcal{Y}_{\mathrm{ID}}$ . Our code allows the generation of C-OOD benchmarks for any ImageNet classification model and its $\kappa$ confidence scoring function. Moreover, we ran the process ourselves for 525 models pretrained on ImageNet, taken from the torchvision (0.10) and "timm" (0.4.12) repositories (Paszke et al., 2019; Wightman, 2019), with softmax as $\kappa$ . For these models, the benchmarks are ready to be used by the community without further preparations being necessary.
129
+
130
+ # 5 PERFORMANCE ANALYSIS
131
+
132
+ Having generated C-OOD benchmarks using the above technique for 525 different models, in this section we analyze the results. We first focus on results obtained when setting the confidence function $\kappa$ to be the softmax response, as it is widely accepted as a baseline in the OOD literature (Hendrycks & Gimpel, 2017; Berger et al., 2021). We then evaluate additional $\kappa$ functions such as ODIN, entropy and MC dropout. Our analysis leads to several interesting insights.
133
+
134
+ ![](images/82d054cfe43423aa4cfe02ce743eb4c22e3e5654b4b891dc06cff6b2a0932b0d.jpg)
135
+ Figure 5: The mean relative improvement when using different training regimes (distillation, pretraining etc.). The shaded green area indicates the area of positive improvement.
136
+
137
+ 1) Knowledge distillation improves C-OOD detection. We measured the C-OOD detection improvement (in AUROC) when using different training regimes to explore whether a certain method consistently contributes to detection performance. Results are depicted in Figure 5. To make a fair comparison, we only compare pairs of models such that both models have identical architecture and training regimes, with the exception of the method itself being evaluated (e.g., training with or without knowledge distillation). Of all training regimes (knowledge distillation, adversarial training (Goodfellow et al., 2015), pretraining on ImageNet-21k, see below), knowledge distillation had the most significant impact in most severity levels $s > 3$ . In Galil et al. (2023) we also find that among these training regimes, knowledge distillation is the best booster of uncertainty estimation performance in an in-distribution setting. Next, we find that ImageNet-21k pretraining also improves performance, and is more beneficial than knowledge distillation at low levels of
138
+
139
+ severity $s \leq 3$ . Note that this observation could not have been achieved with simplified benchmarks (e.g., ImageNet-O). Our new framework allows for such observations thanks to the division of the benchmarks into different levels of severity. Finally, it is not surprising that adversarial training is irrelevant to C-OOD detection.
140
+
141
+ 2) A subset of ViTs achieves the best C-OOD detection performance, both in absolute terms and per model size (# parameters, see Figure 9 in Appendix C). Several training regimes (including the original regime from the paper introducing ViT) result in ViTs that outperform all other architectures and training regimes in terms of C-OOD detection, e.g., Dosovitskiy et al. (2021); Steiner et al. (2022); Chen et al. (2022); Ridnik et al. (2021). Further research into other training regimes, however, reveals that not all training regimes result in superb performance (Touvron et al., 2021; 2022; Singh et al., 2022; Paszke et al., 2019), even when a similar amount of data is introduced into the training. We also find that the same successful subset of ViTs outperforms any other model in terms of uncertainty estimation performance in an in-distribution setting in Galil et al. (2023). These observations warrant additional research with the hope of either training more robust ViTs or transferring the unidentified ingredient of the successful subset of ViTs into other models.
142
+ 3) The language-vision CLIP model achieves good zero-shot C-OOD detection performance for low severity levels. CLIP (Radford et al., 2021) enables zero-shot classification and produces an impressive performance. We find it is also good at C-OOD detection (especially in severity levels lower than 6), without needing any training or fine-tuning with regard to the dataset. This observation is significant because it means CLIP could be used as a zero-shot C-OOD detection algorithm without the need to train on the ID classes. This also allows the user to change the definition of which classes are considered ID in a flexible manner without the need to retrain the detector. To the best of our knowledge, we are the first to make the observation that CLIP can serve as a capable zero-shot detector on its own, without further training, additional components, or knowledge of the possible OOD classes in advance. For more details, see Appendix D.
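+
+ Appendix D details the zero-shot confidence function; a minimal sketch with the openai/CLIP package might look as follows (the prompt template and the two example class names are illustrative stand-ins, not the exact prompts used in our evaluation):
+
+ ```python
+ import clip  # https://github.com/openai/CLIP
+ import torch
+ from PIL import Image
+
+ device = "cuda" if torch.cuda.is_available() else "cpu"
+ model, preprocess = clip.load("RN50x64", device=device)  # best CLIP instance we tested
+
+ id_classes = ["monarch butterfly", "brown bear"]  # stand-in for the 1k ID labels
+ text = clip.tokenize([f"a photo of a {c}" for c in id_classes]).to(device)
+ with torch.no_grad():
+     text_emb = model.encode_text(text)
+     text_emb /= text_emb.norm(dim=-1, keepdim=True)
+
+ def kappa_cosine(image_path):
+     """Confidence = highest cosine similarity between the image embedding and
+     any ID label embedding; low values flag likely C-OOD inputs."""
+     image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
+     with torch.no_grad():
+         img_emb = model.encode_image(image)
+         img_emb /= img_emb.norm(dim=-1, keepdim=True)
+     return (img_emb @ text_emb.T).max().item()
+ ```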
143
+
144
+ ![](images/c280fc2ce06238537b2d6a14151512ea289217e57ca6e473170a5a6f80eb2ed6.jpg)
145
+ Figure 6: Architecture accuracy vs. mean C-OOD AUROC performance. In the legend, the pair of numbers next to each architecture name corresponds to the Spearman correlation and the number of networks tested from that architecture family (most samples are too small to draw any specific conclusions). Accuracy appears to have a high correlation with the C-OOD detection performance, with a Spearman correlation of 0.65.
146
+
147
+ 4) Accuracy is the factor most correlated with C-OOD detection. We observe that accuracy is typically a good indicator of the model's performance in C-OOD detection at most severity levels $[s_0 - s_8]$ , with Spearman correlation values in the range of [0.6, 0.73] at those levels (see Figure 12 in Appendix E). The scatter plot in Figure 6 shows the relationship between the architecture accuracy and its C-OOD detection performance. When grouping the networks by architecture, we notice that most architectures also follow this trend. When measuring the correlation between AUROC and accuracy among only the $20\%$ most accurate models, however, the Spearman correlation drops to a range of [0.34, 0.43] (see Figure 13 in Appendix E).
148
+ 5) In-distribution ranking performance is positively correlated with C-OOD detection. The next best indicative factor correlated with C-OOD detection performance after accuracy is the model's in-distribution ranking performance ("ID AUROC", see Appendix A), with Spearman correlation values in the range of [0.4, 0.5]. When measuring the correlation between AUROC and ID AUROC
149
+
150
+ among only the $20\%$ most accurate models, however, the Spearman correlation increases to a range of [0.54, 0.77]; see Appendix E for more details.
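+
+ The correlations reported here and in Appendix E are plain Spearman rank correlations; as a sketch of the computation, with made-up per-model values standing in for the measured metrics:
+
+ ```python
+ from scipy.stats import spearmanr
+
+ # Hypothetical per-model metrics; in the study these are measured for each
+ # of the 525 models at a given severity level.
+ accuracy = [0.761, 0.803, 0.852, 0.881, 0.795]
+ cood_auroc = [0.78, 0.81, 0.86, 0.88, 0.80]
+
+ rho, pval = spearmanr(accuracy, cood_auroc)
+ print(f"Spearman correlation: {rho:.2f} (p={pval:.3f})")
+ ```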
151
+
152
+ 6) Most OOD classes appear in every severity level $i \in [0, 10]$ for at least one model, with the exception of some classes that appear to reach severity level 10 for most or even all models (e.g., Viceroy Butterfly, depicted in Figure 3 in Section 4). This observation suggests that "OOD hardness" is usually subjective, and changes greatly across different models.
153
+ 7) The ranking of the best C-OOD detection models tends to remain similar across severity levels. This means that when selecting the best model for deployment, it is usually enough to observe its performance on only a few severity levels; see Appendix F. Note that this conclusion is only true when leaving the $\kappa$ confidence function fixed (see below).
154
+ 8) ODIN offers significant improvements over softmax for most models. In addition to evaluating with softmax as the $\kappa$ confidence function, we evaluate a few additional methods to serve as $\kappa$ functions: ODIN, entropy, MC dropout and "max-logit" (not applying softmax). For each model $f$ and $\kappa$ we re-ran the algorithm described in Section 3 to benchmark $(f,\kappa)$ (we do this because using the same C-OOD groups produced when using softmax might give an unfair advantage to other $\kappa$ functions); see Appendix G for more technical details.
155
+
156
+ ![](images/9b7ae244eb283dcf78044fca5652329abc68badd52b06aa339db9ef9f6a14667.jpg)
157
+ Figure 7: Relative improvement gain in C-OOD detection performance when using ODIN instead of softmax. Each point represents an evaluated model. The green shaded area indicates the area of positive improvement.
158
+
159
+ Figure 7 shows each model's improvement when using ODIN rather than softmax, from which it is visible that the improvement has a high variance: some models benefit significantly from using ODIN, while it is detrimental to other models. Furthermore, whether or not a model benefits from ODIN changes across different levels of severity. For example, applying ODIN instead of softmax to ViT-L/32-384 barely improves detection at severity level 0 (AUROC improves by $0.4\%$ ), but it significantly improves its detection as the severity level increases (for severity level 10, AUROC improves by $9\%$ ). Other models' detection performance, on the other hand, may decrease as severity increases (see Figure 7 for examples). These facts suggest that the pair of (model, $\kappa$ ) needs to be considered with respect to the task and severity level relevant to it. Moreover, it may be that the $\kappa$ function hyperparameters need to be optimized specifically for the desired severity level.
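+
+ For reference, a PyTorch sketch of ODIN with the hyperparameters we used (a temperature of 2 and $\epsilon = 1 \cdot 10^{-5}$; see Appendix G); it follows Liang et al. (2018) but is not the exact evaluation code:
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def kappa_odin(model, x, temperature=2.0, eps=1e-5):
+     """Sketch of ODIN: temperature scaling plus a small input perturbation
+     that increases the softmax score of the predicted class."""
+     x = x.detach().clone().requires_grad_(True)
+     log_probs = F.log_softmax(model(x) / temperature, dim=-1)
+     loss = -log_probs.max(dim=-1).values.sum()  # -log S_yhat(x; T)
+     loss.backward()
+     x_pert = x - eps * x.grad.sign()            # perturb against the gradient
+     with torch.no_grad():
+         probs = F.softmax(model(x_pert) / temperature, dim=-1)
+     return probs.max(dim=-1).values             # higher => more likely ID
+ ```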
160
+
161
+ 9) Not applying softmax can improve some models significantly, although most are harmed by it. Figure 16 in Appendix G depicts the effect of not applying softmax, which we dub "max-logit". While most models are harmed by using max-logit instead of softmax, some benefit significantly. ViTs, which already outperform all other models, perform significantly better when softmax is not applied, with ViT-L/32-384 improving by $10.6\%$ . It is worth mentioning that of all the (model, $\kappa$ ) pairs evaluated in this paper, ViT-L/32-384 applied with max-logit achieves the best detection performance. Interestingly, regardless of the $\kappa$ function evaluated, ViT-L/32-384 demonstrated the best detection performance. In Figure 8, we plot its performance across all severity levels using each of the $\kappa$ functions we consider. Also, as noted in Appendix G, the hyperparameters used for ODIN when applied to ViT were not optimized specifically for it. ODIN's performance may improve beyond max-logit with model-specific optimization. Observing that max-logit could be so beneficial for a subset of models while being harmful to most other models was made possible thanks to the scale of our study.
162
+
163
+ ![](images/d5a953ace1ab772b0d8303e09ca3dbbd80918b3de26f1523f155fb7d8ff542a9.jpg)
164
+ Figure 8: OOD detection performance of ViT-L/32-384, the best model evaluated using each of the $\kappa$ functions we consider.
165
+
166
+ 10) Using entropy as a confidence function $\kappa$ improves C-OOD detection performance in most cases. We compare the performance gain from using entropy instead of the softmax score. The results are depicted in Figure 17 in Appendix G. We note that, in most cases, using entropy improves the detection performance.
167
+
168
+ 11) MC dropout improves detection, especially for low levels of severity. We evaluate MC dropout Gal & Ghahramani (2016) in the context of C-OOD detection. We use 30 dropout-enabled forward passes. The mean softmax score of these passes is calculated and then a predictive entropy score is used as the final uncertainty estimate. The improvements when using MC dropout instead of softmax across all severity levels are depicted in Figure 18 in Appendix G using box plots. We find that MC dropout improves performance, especially so at lower levels of severity. The improvement becomes less significant as severity increases. Similar to ODIN, MC dropout seems to improve some models more significantly at lower severity levels (e.g., MobileNets (Howard et al., 2019)), while other models are improved more significantly by MC dropout at higher severity levels (e.g., ViTs). We further analyze MC dropout and recall that it comprises two main components: (a) dropout-enabled forward passes and (b) entropy of the mean probability vector from the forward passes. To test which component contributes the most to the perceived gains, we compare the C-OOD detection performance when using MC dropout to the C-OOD detection performance when using just entropy (with no multiple dropout-enabled forward passes). The results of this comparison are plotted in Figure 19 in Appendix G. We find that MC dropout slightly improves upon entropy at most severity levels, especially at lower ones, with few outliers being either significantly improved or harmed.
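+
+ To make the compared $\kappa$ functions concrete, here is a PyTorch sketch of max-logit, entropy, and the MC dropout recipe described above; each score is oriented so that higher means more confident. The `model.train()` call for enabling dropout is a simplification (in practice only the dropout layers, not batch-norm, should be put in train mode):
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def kappa_max_logit(logits):
+     # "max-logit": skip softmax entirely and use the largest logit.
+     return logits.max(dim=-1).values
+
+ def kappa_neg_entropy(logits):
+     # Negative predictive entropy: low entropy => high confidence.
+     probs = F.softmax(logits, dim=-1)
+     return (probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
+
+ def kappa_mc_dropout(model, x, passes=30):
+     # 30 dropout-enabled passes, then the negative entropy of the mean
+     # softmax vector (the predictive entropy described above).
+     model.train()   # simplification: enables dropout everywhere
+     with torch.no_grad():
+         probs = torch.stack(
+             [F.softmax(model(x), dim=-1) for _ in range(passes)]
+         ).mean(dim=0)
+     model.eval()
+     return (probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
+ ```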
169
+
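+ The scoring procedure above is straightforward to reproduce. A minimal PyTorch sketch, assuming a hypothetical classifier `model` that contains dropout layers (only the dropout modules are switched to train mode, so batch-norm statistics remain fixed):
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ @torch.no_grad()
+ def mc_dropout_confidence(model: torch.nn.Module, x: torch.Tensor,
+                           n_passes: int = 30) -> torch.Tensor:
+     """Predictive-entropy confidence from dropout-enabled forward passes."""
+     model.eval()
+     for m in model.modules():
+         if isinstance(m, torch.nn.Dropout):
+             m.train()  # enable stochastic dropout; keep everything else in eval mode
+     probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(n_passes)])
+     mean_probs = probs.mean(dim=0)  # mean softmax vector over the passes
+     entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
+     return -entropy  # negated so that higher values indicate higher confidence
+ ```
+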
170
+ # 6 CONCLUDING REMARKS
171
+
172
+ We introduced a novel approach to benchmarking the performance of classifiers in detecting C-OODs. In contrast to existing techniques, the proposed method allows for measurements that are not biased toward specific models or confidence functions. A key feature of the proposed benchmarking procedure is that it allows for graded measurement of class-out-of-distribution severity. Using this property, we can identify trends in detection robustness that are otherwise impossible to detect. In addition to opening new avenues for future research, the proposed method can be used to draw more precise conclusions about the performance of various models and detection techniques.
173
+
174
+ Using our new benchmarking procedure, we offered numerous interesting observations that merit further investigation into how to improve C-OOD detection. Among the interesting questions raised: why is knowledge distillation beneficial to detection performance, and how can we enhance its robustness to C-OODs? What can we learn from the architectures that were inclined to perform well in C-OOD detection, such as ViT and CLIP? Finally, could detection methods be crafted and optimized for specific severity levels, or can they be modified to be so by changing a hyperparameter?
175
+
176
+ # ACKNOWLEDGMENTS
177
+
178
+ This research was partially supported by the Israel Science Foundation, grant No. 710/18.
179
+
180
+ # REFERENCES
181
+
182
+ Christoph Berger, Magdalini Paschali, Ben Glocker, and Konstantinos Kamnitsas. Confidence-based out-of-distribution detection: A comparative study and analysis. In Carole H. Sudre, Roxane Licandro, Christian F. Baumgartner, Andrew Melbourne, Adrian V. Dalca, Jana Hutter, Ryutaro Tanno, Esra Abaci Turk, Koen Van Leemput, Jordina Torrents-Barrena, William M. Wells III, and Christopher K. Macgowan (eds.), Uncertainty for Safe Utilization of Machine Learning in Medical Imaging, and Perinatal Imaging, Placental and Preterm Image Analysis - 3rd International Workshop, UNSURE 2021, and 6th International Workshop, PIPPI 2021 Held in Conjunction with MICCAI 2021, Strasbourg, France, October 1, 2021, Proceedings, volume 12959 of Lecture Notes in Computer Science, pp. 122-132. Springer, 2021. doi: 10.1007/978-3-030-87735-4\_12. URL https://doi.org/10.1007/978-3-030-87735-4_12.
183
+ Xiangning Chen, Cho-Jui Hsieh, and Boqing Gong. When vision transformers outperform resnets without pre-training or strong data augmentations. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022. URL https://openreview.net/forum?id=LtKcMgGOeLt.
184
+ Xiangxiang Chu, Zhi Tian, Yuqing Wang, Bo Zhang, Haibing Ren, Xiaolin Wei, Huaxia Xia, and Chunhua Shen. Twins: Revisiting the design of spatial attention in vision transformers, 2021.
185
+ L. P. Cordella, C. De Stefano, F. Tortorella, and M. Vento. A method for improving classification reliability of multilayer perceptrons. IEEE Transactions on Neural Networks, 6(5):1140-1147, 1995. doi: 10.1109/72.410358.
186
+ C. De Stefano, C. Sansone, and M. Vento. To reject or not to reject: that is the question-an answer in case of neural classifiers. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 30(1):84-94, 2000. doi: 10.1109/5326.827457.
187
+ Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248-255, 2009. doi: 10.1109/CVPR.2009.5206848.
188
+ Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. URL https://openreview.net/forum?id=YicbFdNTTy.
189
+ Sepideh Esmaeilpour, Bing Liu, Eric Robertson, and Lei Shu. Zero-shot out-of-distribution detection based on the pre-trained model CLIP. In Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelfth Symposium on Educational Advances in Artificial Intelligence, EAAI 2022 Virtual Event, February 22 - March 1, 2022, pp. 6568-6576. AAAI Press, 2022. URL https://ojs.aaai.org/index.php/AAAI/article/view/20610.
190
+ Tom Fawcett. An introduction to ROC analysis. Pattern Recognition Letters, 27(8):861-874, 2006. ISSN 0167-8655. doi: https://doi.org/10.1016/j.patrec.2005.10.010. URL https://www.sciencedirect.com/science/article/pii/S016786550500303X.
191
+ Stanislav Fort, Jie Ren, and Balaji Lakshminarayanan. Exploring the limits of out-of-distribution detection. In Marc'Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan (eds.), Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pp. 7068-7081, 2021. URL https://proceedings.neurips.cc/paper/2021/hash/3941c4358616274ac2436eacf67fae05-Abstract.html.
192
+
193
+ Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, volume 48 of Proceedings of Machine Learning Research, pp. 1050-1059. PMLR, 2016.
194
+ Ido Galil, Mohammed Dabbah, and Ran El-Yaniv. What can we learn from the selective prediction and uncertainty estimation performance of 523 imagenet classifiers? In International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=p66AzKi6Xim.
195
+ Shang-Hua Gao, Ming-Ming Cheng, Kai Zhao, Xin-Yu Zhang, Ming-Hsuan Yang, and Philip Torr. Res2net: A new multi-scale backbone architecture. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(2):652-662, Feb 2021. ISSN 1939-3539. doi: 10.1109/tpami.2019.2938758. URL http://dx.doi.org/10.1109/TPAMI.2019.2938758.
196
+ Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015. URL http://arxiv.org/abs/1412.6572.
197
+ Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net, 2017. URL https://openreview.net/forum?id=Hkg4TI9xl.
198
+ Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, and Dawn Song. Natural adversarial examples. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, virtual, June 19-25, 2021, pp. 15262-15271. Computer Vision Foundation / IEEE, 2021. URL https://openaccess.thecvf.com/content/CVPR2021/html/Hendrycks_Natural_Adversarial_Examples_CVPR_2021_paper.html.
199
+ Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. CoRR, abs/1503.02531, 2015. URL http://arxiv.org/abs/1503.02531.
200
+ Andrew Howard, Ruoming Pang, Hartwig Adam, Quoc V. Le, Mark Sandler, Bo Chen, Weijun Wang, Liang-Chieh Chen, Mingxing Tan, Grace Chu, Vijay Vasudevan, and Yukun Zhu. Searching for mobilenetv3. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019, pp. 1314-1324. IEEE, 2019. doi: 10.1109/ICCV.2019.00140. URL https://doi.org/10.1109/ICCV.2019.00140.
201
+ Kimin Lee, Kibok Lee, Honglak Lee, and Jinwoo Shin. A simple unified framework for detecting out-of-distribution samples and adversarial attacks. In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolò Cesa-Bianchi, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pp. 7167-7177, 2018. URL https://proceedings.neurips.cc/paper/2018/hash/abdeb6f575ac5c6676b747bca8d09cc2-Abstract.html.
202
+ Shiyu Liang, Yixuan Li, and R. Srikant. Enhancing the reliability of out-of-distribution image detection in neural networks. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=H1VGkIxRZ.
203
+ Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems 32, pp. 8024-8035. Curran Associates, Inc., 2019. URL http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf.
204
+ Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In Marina Meila and Tong Zhang (eds.), Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pp. 8748-8763. PMLR, 2021. URL http://proceedings.mlr.press/v139/radford21a.html.
205
+
206
+ Tal Ridnik, Emanuel Ben Baruch, Asaf Noy, and Lihi Zelnik. Imagenet-21k pretraining for the masses. In Joaquin Vanschoren and Sai-Kit Yeung (eds.), Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks 2021, December 2021, virtual, 2021. URL https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/98f13708210194c475687be6106a3b84-Abstract-round1.html.
207
+ David B. Ritland and Lincoln P. Brower. The viceroy butterfly is not a batesian mimic. Nature, 350 (6318):497-498, Apr 1991. ISSN 1476-4687. doi: 10.1038/350497a0. URL https://doi.org/10.1038/350497a0.
208
+ Walter J. Scheirer, Anderson de Rezende Rocha, Archana Sapkota, and Terrance E. Boult. Toward open set recognition. IEEE Trans. Pattern Anal. Mach. Intell., 35(7):1757-1772, 2013. doi: 10.1109/TPAMI.2012.256. URL https://doi.org/10.1109/TPAMI.2012.256.
209
+ Gabi Shalev, Yossi Adi, and Joseph Keshet. Out-of-distribution detection using multiple semantic label representations. In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolò Cesa-Bianchi, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pp. 7386-7396, 2018. URL https://proceedings.neurips.cc/paper/2018/hash/2151b4c76b4dcb048d06a5c32942b6f6-Abstract.html.
210
+ Mannat Singh, Laura Gustafson, Aaron Adcock, Vinicius de Freitas Reis, Bugra Gedik, Raj Prateek Kosaraju, Dhruv Mahajan, Ross B. Girshick, Piotr Dollár, and Laurens van der Maaten. Revisiting weakly supervised pre-training of visual perception models. CoRR, abs/2201.08371, 2022. URL https://arxiv.org/abs/2201.08371.
211
+ Andreas Peter Steiner, Alexander Kolesnikov, Xiaohua Zhai, Ross Wightman, Jakob Uszkoreit, and Lucas Beyer. How to train your ViT? Data, augmentation, and regularization in vision transformers. Transactions on Machine Learning Research, 2022. URL https://openreview.net/forum?id=4nPswr1KcP.
212
+ Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. Training data-efficient image transformers & distillation through attention. In Marina Meila and Tong Zhang (eds.), Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pp. 10347-10357. PMLR, 2021. URL http://proceedings.mlr.press/v139/touvron21a.html.
213
+ Hugo Touvron, Matthieu Cord, and Hervé Jégou. DeiT III: Revenge of the ViT. CoRR, abs/2204.07118, 2022. doi: 10.48550/arXiv.2204.07118. URL https://doi.org/10.48550/arXiv.2204.07118.
214
+ Ross Wightman. Pytorch image models. https://github.com/rwightman/pytorch-image-models, 2019.
215
+ I. Zeki Yalniz, Hervé Jégou, Kan Chen, Manohar Paluri, and Dhruv Mahajan. Billion-scale semi-supervised learning for image classification. CoRR, abs/1905.00546, 2019. URL http://arxiv.org/abs/1905.00546.
216
+ Hang Zhang, Chongruo Wu, Zhongyue Zhang, Yi Zhu, Haibin Lin, Zhi Zhang, Yue Sun, Tong He, Jonas Mueller, R. Manmatha, Mu Li, and Alexander Smola. Resnest: Split-attention networks. CoRR, abs/2004.08955, 2020. URL https://arxiv.org/abs/2004.08955.
217
+
218
+ # A DEFINING IN-DISTRIBUTION AUROC
219
+
220
+ We follow Galil et al. (2023) in defining in-distribution AUROC ("ID AUROC"). ID AUROC is defined similarly to Equation 1, but discriminating between correct and incorrect predictions instead of discriminating between ID and OOD instances.
221
+
222
+ For every two random samples $(x_{1},y_{1}),(x_{2},y_{2})\sim P(\mathcal{X},\mathcal{Y})$ and given that $\ell (f(x_1),y_1) > \ell (f(x_2),y_2)$ , the ranking performance of $\kappa$ is defined as the probability that $\kappa$ ranks $x_{2}$ higher than $x_{1}$ :
223
+
224
+ $$
225
+ \Pr \left[ \kappa \left(x _ {1}, \hat {y} \mid f\right) < \kappa \left(x _ {2}, \hat {y} \mid f\right) \mid \ell \left(f \left(x _ {1}\right), y _ {1}\right) > \ell \left(f \left(x _ {2}\right), y _ {2}\right) \right] \tag {2}
226
+ $$
227
+
228
+ When the 0/1 loss is used, it is known that the AUROC in fact equals the probability in Equation (2) (Fawcett, 2006), and it is thus a proper metric for measuring ranking in classification (also known as ID AUROC, or discrimination); a numerical check follows.
229
+
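+ This equivalence is easy to verify numerically. The sketch below uses synthetic confidence scores whose correctness probability follows a logistic curve; all names and the data-generating choice are illustrative:
+
+ ```python
+ import numpy as np
+ from sklearn.metrics import roc_auc_score
+
+ rng = np.random.default_rng(0)
+ kappa = rng.normal(size=1000)                          # confidence scores
+ correct = rng.random(1000) < 1 / (1 + np.exp(-kappa))  # correctness tied to kappa
+
+ # ID AUROC: discriminate correct (positive) from incorrect predictions via kappa
+ id_auroc = roc_auc_score(correct, kappa)
+
+ # Pairwise ranking probability of Equation (2) under the 0/1 loss, by enumeration
+ pos, neg = kappa[correct], kappa[~correct]
+ ranking_prob = (pos[:, None] > neg[None, :]).mean()    # ties have measure zero here
+ assert np.isclose(id_auroc, ranking_prob)
+ ```
+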
230
+ # B COMPARING MODELS' PERFORMANCE USING OUR FRAMEWORK
231
+
232
+ The proposed framework allows for a fair comparison of models in terms of model-specific difficulty, rather than on a fixed set of OOD classes chosen according to some (possibly arbitrary) criterion, because it evaluates each model's performance on tailored benchmarks. This approach provides a more accurate representation of each model's own performance. As the famous quote goes, "You can't judge a fish by its ability to climb a tree". Rephrasing this quote to adapt it to our discussion: if we want to compare a fish with a monkey on what is hardest for each of them, we should judge the fish by its ability to climb a tree and the monkey by its ability to swim (although we are aware that some monkeys can swim). Our framework constructs specialized tests for both.
233
+
234
+ That being said, by considering the construction of severity levels (per model), it is possible (neglecting estimation error of the estimation sets $c_{est}^{y}$ ) to compare the performance of two models specifically for the classes populating their maximal severity (severity 10):
235
+
236
+ (1) Suppose that model $\mathcal{A}$ has better performance (AUROC) on its own group $Z$ of hardest classes (severity 10) than model $\mathcal{B}$ 's performance on its own severity 10 classes, denoted $K$ . Assume that $K$ does not equal $Z$ (otherwise we are done). Thus, $\mathrm{AUROC}(\mathcal{A}, Z) > \mathrm{AUROC}(\mathcal{B}, K)$ .
237
+ (2) By construction of the severity groups, for every set of classes $R \neq Z$ , $\mathrm{AUROC}(\mathcal{A}, R) \geq \mathrm{AUROC}(\mathcal{A}, Z)$ (since $Z$ is the set of hardest classes for model $\mathcal{A}$ ). This holds for any set of classes $R$ , including the set $K$ . Therefore, $\mathrm{AUROC}(\mathcal{A}, K) \geq \mathrm{AUROC}(\mathcal{A}, Z)$ .
238
+
239
+ By combining (1) and (2) we get that $\mathrm{AUROC}(\mathcal{A}, K) \geq \mathrm{AUROC}(\mathcal{A}, Z) > \mathrm{AUROC}(\mathcal{B}, K)$ , i.e., $\mathrm{AUROC}(\mathcal{A}, K) > \mathrm{AUROC}(\mathcal{B}, K)$ , meaning that for the same set of classes $K$ , model $\mathcal{A}$ performs better than model $\mathcal{B}$ .
240
+
241
+ A "mirror" argument could be crafted to compare the models' performance on the classes populating their minimal severity (severity 0).
242
+
243
+ # C PER-SIZE PERFORMANCE COMPARISON
244
+
245
+ The scatter plot in Figure 9 shows the relationship between the number of parameters in an architecture and its C-OOD AUROC performance. Overall, there is a moderate Spearman correlation of 0.45 between the number of parameters and C-OOD performance when considering all tested networks. When grouping the networks by architecture families, however, we see that some architectures exhibit a high correlation between model size and C-OOD AUROC. Architecture families that exhibit this behavior include ViTs, Swins, EfficientNetV2, and ResNets, whose correlations are 0.91, 0.94, 0.89, and 0.79, respectively. Other families exhibit moderate correlations, e.g., EfficientNet (V1) with a 0.47 Spearman correlation. Some architectures, on the other hand, have a strong negative correlation, e.g., Twins (Chu et al., 2021), NesT (Zhang et al., 2020) and Res2Net (Gao et al., 2021), whose correlations are -0.94, -1.0, and -0.85, respectively.
246
+
247
+ Additionally, we note that the subset of ViT models mentioned in Section 5 remains the best even when a model size limitation is considered.
248
+
249
+ # D ZERO-SHOT C-OOD DETECTION WITH CLIP
250
+
251
+ To evaluate CLIP on ImageNet, we first prepare it following the code provided by its authors (https://github.com/openai/CLIP): The labels of ImageNet-1k are encoded into normalized embedding vectors. At inference time, the incoming image is encoded into another normalized embedding vector. A cosine similarity is then calculated between each label-embedding vector and the image-embedding vector. The highest similarity score is then taken as the confidence score for that prediction.
252
+
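+ For concreteness, a minimal sketch of this pipeline using the openai/CLIP package; the backbone, prompt template, label list, and the `pil_image` input below are illustrative placeholders:
+
+ ```python
+ import clip
+ import torch
+
+ device = "cuda" if torch.cuda.is_available() else "cpu"
+ model, preprocess = clip.load("RN50x64", device=device)  # any CLIP backbone works
+
+ labels = ["monarch butterfly", "viceroy butterfly"]  # stand-ins for the ID classes
+ text = clip.tokenize([f"a photo of a {c}" for c in labels]).to(device)
+
+ with torch.no_grad():
+     text_emb = model.encode_text(text)
+     text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)  # normalized labels
+
+     image = preprocess(pil_image).unsqueeze(0).to(device)  # pil_image: a PIL image
+     img_emb = model.encode_image(image)
+     img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
+
+     cos_sim = img_emb @ text_emb.T          # cosine similarity to each label embedding
+     confidence, pred = cos_sim.max(dim=-1)  # kappa and the predicted class index
+ ```
+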
253
+ To evaluate CLIP's C-OOD performance, we re-run the algorithm described in Section 3 to benchmark (CLIP, $\kappa_{\text{cosine similarity}}$ ). The best-performing instance of CLIP (ResNet-50x64) outperforms $96\%$
254
+
255
+ ![](images/cd5adc826d7e052daedd5c608de757618ccf406d05ccd821f03d5fb3e4592b74.jpg)
256
+ Figure 9: Number of architecture parameters vs. C-OOD AUROC performance at severity level 5 (median severity). The pair of numbers next to each architecture name in the legend corresponds to its Spearman correlation and the number of models tested from that architecture (family), respectively. Note that specific ViT transformers are also the best when considering a model size limitation. Vertical lines indicate the sizes of ResNet-50 (left vertical line) and ResNet-101 (right vertical line).
257
+
258
+ ![](images/ec3ded2e761fdf3f66d680f17612c10eac6135e943f0f7a37bd4d6c283d460f4.jpg)
259
+ Figure 10: The same graph as in Figure 1, but with an additional lime-colored curve for CLIP ResNet-50x64. Note that as severity levels increase, CLIP's detection advantage is greatly reduced.
260
+
261
+ of all other models (measured by its mean AUROC over all severity levels). In Figure 10 we visualize this CLIP model's performance across all severity levels, in comparison to all other models. Interestingly, CLIP's relative advantage over other models decreases as severity increases, and at severity 10 it is even lower than the median. The same is observed in Figure 11, which depicts a comparison between three identical ResNet-50 models trained with three different training regimes, one of them being CLIP. CLIP outperforms its competition up to severity 6 (with a significant margin in lower
262
+
263
+ ![](images/fb8c357d56fee483f897d4d2a76377331e617b9db824a458d8b7be76f82f844b.jpg)
264
+ Figure 11: A comparison of three identical ResNet-50 models trained with different training regimes: (1) The orange-colored curve represents a ResNet-50 model trained on ImageNet-1k with Torchvision's recipe; (2) the purple-colored curve represents a ResNet-50 model trained with a semi-supervised regime (Yalniz et al., 2019); and (3) the lime-colored curve represents a ResNet-50 trained with CLIP.
265
+
266
+ severity levels), and then underperforms. We hypothesize that CLIP's performance degrades at higher severity levels because more of the OOD classes at those levels are descriptively similar to ID classes. For example, when examining the different types of butterflies from Figure 3, the text string "monarch butterfly" is very similar to the text string "viceroy butterfly", simply because both share the word "butterfly". Other butterflies that are less visually similar might be "confused" by CLIP and classified as monarch butterflies simply because they are also defined as butterflies, which raises their cosine similarity with the text "monarch butterfly". Common image classifiers, on the other hand, may confuse different butterflies if they appear visually similar and share many distinguishable features, but are not affected by the fact that both classes are defined as "butterflies".
267
+
268
+ We also observe that while CLIP models with the confidence function $\kappa_{\text{cosine similarity}}$ perform very well at C-OOD detection, their ID ranking is worse than that of other models. Using softmax and/or adding a linear probe (as described in Radford et al. (2021)) improves ID ranking significantly, but results in mediocre C-OOD detection performance. We believe this suggests that the multimodal nature of CLIP is a crucial component of its C-OOD detection performance, and that the scaling effect of softmax hinders the partial order induced on OOD and ID instances.
269
+
270
+ In Fort et al. (2021), it was suggested that CLIP be used as a zero-shot OOD detection algorithm. Their method, however, requires knowledge of the possible OOD classes in advance. Esmaeilpour et al. (2022) instead suggested using an additional captioning model, fine-tuned on some large dataset (which hopefully contains knowledge of the OOD classes that might emerge during inference). Our suggested approach, in contrast, requires no prior knowledge, no fine-tuning, and no models other than CLIP itself.
271
+
272
+ # E CORRELATIONS OF VARIOUS FACTORS WITH C-OOD DETECTION PERFORMANCE
273
+
274
+ We searched for factors that could be indicative of or correlated with good performance in C-OOD detection. To this end, we measure the correlations of various factors with the C-OOD detection AUROC performance across all levels of severity. The results can be seen in the graphs in Figure 12. We observe that accuracy is typically a good indicator of the model's performance in C-OOD detection at most severity levels $(s_0 - s_8)$ , with Spearman correlation values in [0.6, 0.73] at those levels (see Figure 12). When measuring the correlation between AUROC and accuracy among only
275
+
276
+ ![](images/abdf25b239343f9f1b3bbaf7170ff5723fc3f7ae9ff7426c073364675a230247.jpg)
277
+ Figure 12: Spearman correlations between C-OOD detection AUROC and Accuracy, ID-AUROC, #parameters, input size, and embedding size across all severity levels.
278
+
279
+ ![](images/8a1664e912fa6734c083c5966e95b0ad55f95d9c5e30a5f4babc77dcc224a308.jpg)
280
+ Figure 13: Spearman correlations between C-OOD detection AUROC and Accuracy, ID-AUROC, #parameters, input size, and embedding size across all severity levels, among only the $20\%$ most accurate models.
281
+
282
+ the $20\%$ most accurate models, however, the Spearman correlation drops to a range of [0.34, 0.43] (see Figure 13).
283
+
284
+ The next best indicative factors are the ID ranking performance ("ID AUROC"), number of parameters, and the input image size (moderate correlations). Finally, the embedding size is only weakly correlated.
285
+
286
+ Figure 14 shows a scatter plot of in-distribution ranking performance and C-OOD detection performance of all evaluated models. The overall Spearman correlation is 0.43. The legend indicates correlations obtained by specific architecture families. Interestingly, the correlation with ID AUROC increases slightly up to severity $s_9$ , and at $s_{10}$ ID AUROC becomes the most indicative factor for C-OOD detection performance. In contrast, all other investigated factors lose their indicative power at the highest severity levels ( $s_9$ , $s_{10}$ ). Moreover, when measuring the correlation between AUROC and ID AUROC among only the 20% most accurate models, the Spearman correlation increases to a range
287
+
288
+ ![](images/410e047ff65b2e1678292cc2501093363b8d34a4c22330c196afe25860bb1806.jpg)
289
+ Figure 14: The x-axis represents ID ranking performance (measured by AUROC), and the y-axis represents C-OOD detection performance in severity 5 (higher is better). The legend indicates correlations, by specific architecture families, with the number on the right representing sample size, and the one on the left representing the correlation between ID ranking and detection.
290
+
291
+ of [0.54, 0.77], making it the most indicative factor for C-OOD detection among such models (see Figure 13).
292
+
293
+ # F CORRELATION BETWEEN RANKINGS OF MULTIPLE SEVERITY LEVELS
294
+
295
+ ![](images/2897ddb9111f6635d2ca605fafe24c2997145716483934b783841697741b0685.jpg)
296
+ Figure 15: Spearman correlation between the rankings of the models given by different severity levels.
297
+
298
+ Since we use multiple benchmarks for C-OOD detection (i.e., the 11 severity levels) to test models' detection performance, and each severity level may rank the models differently (i.e., the best performers may vary across levels), we now consider how these rankings change across severity levels. To this end, we calculated the correlations between the rankings obtained at different severity levels. The resulting correlation matrix can be seen in Figure 15. Overall, we observe high correlations, which means that different severity levels generally yield similar rankings of the models. Consequently, when selecting the best model for deployment, it is usually enough to observe its performance on only a few severity levels.
299
+
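+ The correlation matrix of Figure 15 can be reproduced in a few lines given a per-model, per-severity AUROC table; the array below is a random placeholder for such a table:
+
+ ```python
+ import numpy as np
+ from scipy.stats import spearmanr
+
+ # Hypothetical (n_models, 11) table: one AUROC column per severity level s_0..s_10
+ auroc = np.random.default_rng(0).random((500, 11))
+
+ # Spearman correlations between the model rankings induced by each severity level
+ corr_matrix, _ = spearmanr(auroc)  # (11, 11); entry (i, j) compares s_i and s_j
+ ```
+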
300
+ We also notice that for each severity level $s_i$ , the correlation with $s_j$ increases the closer $j$ is to $i$ . This is to be expected, since adjacent severity levels have close severity scores by design.
301
+
302
+ # G COMPARISON OF DIFFERENT CONFIDENCE FUNCTIONS
303
+
304
+ This section contains additional technical details and figures related to our comparison of ODIN, max-logit, entropy and MC dropout. Our main conclusions are presented in Section 5 of the main text.
305
+
306
+ ![](images/d84e1b1f81f123af6ec8ec866077ba50620bc78191ecc4b647de7b796f1d3bd5.jpg)
307
+ Figure 16: Relative improvement gain in C-OOD detection performance when using max-logit instead of softmax (i.e., not applying softmax). In median terms, using max-logit harms performance over softmax for most evaluated models. However, some models (e.g., ViTs) greatly benefit from not applying softmax. The green shaded area indicates the area of positive improvement.
308
+
309
+ ![](images/bc43dee02398069b83b9de90ecdbc0989814d8e1854163b8dfef9522c0f8bb11.jpg)
310
+ Figure 17: Relative improvement gain in C-OOD detection performance when using entropy instead of softmax. In median terms, entropy offers positive improvement over softmax for most levels of severity except $s \in \{7,8,9\}$ . The green shaded area indicates the area of positive improvement.
311
+
312
+ To use MC dropout, we run 30 dropout-enabled forward passes. The mean softmax score of these passes is calculated, and a predictive entropy score is then used as the final uncertainty estimate.
313
+
314
+ When using ODIN, we use a temperature of 2 and set $\epsilon$ to $1\cdot 10^{-5}$ . We obtained these hyperparameters through a simple grid search over a validation set, using seven models of different
315
+
316
+ ![](images/b599427bfe35dbb59748a156e6d94ab7623dd5f3d442fd1e20321537c3cb5ddd.jpg)
317
+ Figure 18: Relative improvement gain in C-OOD detection performance when using MC dropout instead of softmax. We find that MC dropout improves performance, especially at lower levels of severity. The improvement becomes less significant as severity increases.
318
+
319
+ ![](images/aadb06effe614ab9d8ddc2603b66797506865451e21cdfb06ad6de11389a3f5b.jpg)
320
+ Figure 19: Relative improvement gain in C-OOD detection performance when using MC dropout instead of entropy.
321
+
322
+ architectures drawn from the entire sample of models evaluated. Our objective was to find the hyperparameters that most improve the mean AUROC across all severity levels. We believe that tuning the hyperparameters with a specific model and severity level in mind may allow for better results.
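+
+ For reference, a sketch of ODIN scoring with these hyperparameters, following the method of Liang et al. (2018); `model` is a placeholder for any classifier returning logits:
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def odin_score(model: torch.nn.Module, x: torch.Tensor,
+                temperature: float = 2.0, eps: float = 1e-5) -> torch.Tensor:
+     """ODIN: temperature scaling plus a small input perturbation that nudges
+     each input toward a higher temperature-scaled softmax confidence."""
+     x = x.clone().requires_grad_(True)
+     logits = model(x) / temperature
+     # Gradient of the log max-softmax score with respect to the input
+     F.log_softmax(logits, dim=-1).max(dim=-1).values.sum().backward()
+     x_perturbed = x + eps * x.grad.sign()  # move toward higher confidence
+     with torch.no_grad():
+         probs = F.softmax(model(x_perturbed) / temperature, dim=-1)
+     return probs.max(dim=-1).values
+ ```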
2023/A framework for benchmarking Class-out-of-distribution detection and its application to ImageNet/images.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:81616c21d1d268ef084fa97bc60d69ec82fd33f276a14b4e89102d8b29891629
3
+ size 755654
2023/A framework for benchmarking Class-out-of-distribution detection and its application to ImageNet/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A probabilistic framework for task-aligned intra- and inter-area neural manifold estimation/e87cdc5c-ebb8-4276-85ac-98eed9409534_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A probabilistic framework for task-aligned intra- and inter-area neural manifold estimation/e87cdc5c-ebb8-4276-85ac-98eed9409534_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A probabilistic framework for task-aligned intra- and inter-area neural manifold estimation/e87cdc5c-ebb8-4276-85ac-98eed9409534_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7af8ca13efa0960d99014621a49f630570552b4b29c9d074ebe1f541ddfffb72
3
+ size 5358355
2023/A probabilistic framework for task-aligned intra- and inter-area neural manifold estimation/full.md ADDED
@@ -0,0 +1,674 @@
1
+ # A PROBABILISTIC FRAMEWORK FOR TASK-ALIGNED INTRA- AND INTER-AREA NEURAL MANIFOLD ESTIMATION
2
+
3
+ Edoardo Balzani, Jean Paul Noel, Pedro Herrero-Vidal, & Dora E. Angelaki*
4
+
5
+ Center for Neural Science
6
+
7
+ New York University
8
+
9
+ New York, NY, 10003
10
+
11
+ {eb162,jpn5,pmh314,da93}@nyu.edu
12
+
13
+ Cristina Savin
14
+
15
+ Center for Neural Science
16
+
17
+ Center for Data Science
18
+
19
+ New York University
20
+
21
+ New York, NY, 10003
22
+
23
+ cs5360@nyu.edu
24
+
25
+ # ABSTRACT
26
+
27
+ Latent manifolds provide a compact characterization of neural population activity and of shared co-variability across brain areas. Nonetheless, existing statistical tools for extracting neural manifolds face limitations in terms of interpretability of latents with respect to task variables, and can be hard to apply to datasets with no trial repeats. Here we propose a novel probabilistic framework that allows for interpretable partitioning of population variability within and across areas in the context of naturalistic behavior. Our approach for task aligned manifold estimation (TAME-GP) explicitly partitions variability into private and shared sources which can themselves be subdivided in task-relevant and task irrelevant components, uses a realistic Poisson noise model, and introduces temporal smoothing of latent trajectories in the form of a Gaussian Process prior. This TAME-GP graphical model allows for robust estimation of task-relevant variability in local population responses, and of shared co-variability between brain areas. We demonstrate the efficiency of our estimator on within model and biologically motivated simulated data. We also apply it to several datasets of neural population recordings during behavior. Overall, our results demonstrate the capacity of TAME-GP to capture meaningful intra- and inter-area neural variability with single trial resolution.
28
+
29
+ # 1 INTRODUCTION
30
+
31
+ Systems neuroscience is gradually shifting from relatively simple and controlled tasks, to studying naturalistic closed-loop behaviors where no two observations (i.e., "trials") are alike (Michaiel et al., 2020; Noel et al., 2021). Concurrently, neurophysiological techniques are advancing rapidly (Stevenson & Kording, 2011; Angotzi et al., 2019; Boi et al., 2020) to allow recording from an ever-increasing number of simultaneous neurons (i.e., "neural populations") and across multiple brain areas. These trends lead to a pressing need for statistical tools that compactly characterize the statistics of neural activity within and across brain regions. Dimensionality reduction techniques are a popular tool for interrogating the structure of neural responses (Cunningham & Byron, 2014). However, as neural responses are driven by increasingly complex task features, the main axes of variability extracted using these techniques often intermix task and nuisance variables, making them hard to interpret. Alternatively, dimensionality reduction techniques that do allow for estimating task-aligned axes of variability (Brendel et al., 2011; Semedo et al., 2019; Keeley et al., 2020; Glaser et al., 2020; Hurwitz et al., 2021), do not apply to communication between brain areas, and/or necessitate trial repeat structure that does not occur in natural behavior.
32
+
33
+ Here, we introduce a probabilistic approach for learning interpretable task-relevant neural manifolds that capture both intra- and inter-area neural variability with single trial resolution. Task Aligned Manifold Estimation with Gaussian Process priors (TAME-GP) incorporates elements of demixed
34
+
35
+ PCA (dPCA; Machens (2010); Kobak et al. (2016)) and probabilistic canonical correlation analysis (pCCA; Bach & Jordan (2005)) into a graphical model that additionally includes biologically relevant Poisson noise. The model uses a Gaussian Process (GP) prior to enforce temporal smoothness, which allows for robust reconstruction of single-trial latent dynamics (see Damianou et al. (2016) for a similar approach using Gaussian observation noise). We demonstrate the robustness and flexibility of TAME-GP in comparison to alternative approaches using synthetic data and neural recordings from rodents and primates during naturalistic tasks. This reveals TAME-GP as a valuable tool for dissecting sources of variability within and across brain areas during behavior.
36
+
37
+ Related work. Dimensionality reduction is usually achieved by unsupervised methods that identify axes of maximal variability in the data, such as PCA. In neuroscience, this is often accompanied by additional smoothing over time reflecting the underlying neural dynamics (e.g., Gaussian process factor analysis (GPFA) (Yu et al., 2008); see GP-LVM (Ek & Lawrence, 2009) for similar approaches outside of neuroscience). This low dimensional projection is followed by a post hoc interpretation of latents in the context of behavioral variables, often by visualization. Alternative approaches such as dPCA (Machens, 2010; Kobak et al., 2016) explicitly look for axes of neural variability that correlate with task variables of interest (see also Zhou & Wei (2020) for a nonlinear version). However, these require partitioning trials into relatively few categories, based on experimental conditions or behavioral choices and averaging within conditions. This makes them unusable in naturalistic tasks where a single trial treatment is needed. Similarly, SNP-GPFA (Keeley et al., 2020) can partition (multi-region) neural activity into 'shared signal' and 'private noise' components, but only using data with stimulus repeats. Under 'no-repeat' conditions, pCCA (Bach & Jordan, 2005) can find subspaces of maximal cross-correlation between linear projections of task variables and neural responses (under gaussian noise assumptions), without the need for a priori grouping of trials by experimental condition or choice. This approach can also be applied for determining shared axes of co-variability across areas, an analog for communication subspaces (Semedo et al., 2019). Nonetheless, its noise model assumptions are mismatched to neural data. More fundamentally, pCCA only considers pairwise relationships, preventing a joint multi-area and task variables analysis. Overall, existing approaches come with practical limitations and do not directly address the routing of task-relevant information across brain areas.
38
+
39
+ # 2 TASK-ALIGNED MANIFOLD ESTIMATION WITH GP PRIORS (TAME-GP)
40
+
41
+ In its most general form, the graphical model of TAME-GP describes a set of spike-count population responses $\mathbf{x}^{(j)}$ from up to $n$ different areas, together with a task variable of interest $\mathbf{y}$ (Fig. 1A). The neural responses are driven by a set of $n + 1$ low-dimensional latent variables $\mathbf{z}^{(j)}$ . Specifically, the responses of neuron $i$ in area $j$ arise as a linear combination of private latent variability $\mathbf{z}^{(j)}$ and shared latents $\mathbf{z}^{(0)}$ , which reflect task-interpretable aspects of the underlying dynamics, with Poisson noise and an exponential link function:
42
+
43
+ $$
44
+ p\left(\mathbf{x}_{i}^{(j)} \mid \mathbf{z}^{(0:n)}\right) = \mathrm{Poisson}\left(\exp\left(W_{i}^{(0,j)} \mathbf{z}^{(0)} + W_{i}^{(j,j)} \mathbf{z}^{(j)} + h_{i}^{(j)}\right)\right), \tag{1}
45
+ $$
46
+
47
+ with parameters $\mathbf{W}^{(0 / j,j)}$ and $\mathbf{h}^{(j)}$ .
48
+
49
+ To make the latents interpretable with respect to task variables $\mathbf{y}$ , we adapt a probabilistic framing of CCA (Bach & Jordan, 2005) to introduce dependencies between $\mathbf{y}$ and any of the latents $\mathbf{z}^{(k)}$ , which could be private or shared across areas:
50
+
51
+ $$
52
+ p\left(\mathbf{y} \mid \mathbf{z}^{(0)}\right) = \mathcal{N}\left(\mathbf{y};\, \mathbf{C}\mathbf{z}^{(0)} + \mathbf{d},\, \boldsymbol{\Psi}\right), \quad \text{with parameters } \mathbf{C}, \mathbf{d}, \boldsymbol{\Psi}. \tag{2}
53
+ $$
54
+
55
+ ![](images/0df6331dbeb45e05962e4c8f5f34de2b3affe74c465d6e1d47aa2873b2511bcc.jpg)
56
+
57
+ ![](images/d50873e4265948f17edcf6cc968bb102af6912290971ef0660bf92ee4a4e178a.jpg)
58
+
59
+ ![](images/00bf034620df3024af61f110091ae7434b555fc0663d19825a4b3244766082cf.jpg)
60
+ Figure 1: A. TAME-GP generative model. $\mathbf{z}^{(0)}$ denotes shared latent dimensions while $\mathbf{z}^{(i)}$ denote private latents of the corresponding area $i$ ; $\mathbf{y}$ denotes the task variables. B. Example draws of spiking activity and a task variable from the TAME-GP graphical model. C. Model log-likelihood as a function of the EM iteration (left) and cross-validated leave-one-neuron-out marginal likelihood as a function of $\mathbf{z}^{(0)}$ dimension (right). D-F. Latent variables estimation for within model simulated data: ground truth latent factors and model posterior mean $\pm 95\%$ CI for three latent dimensions.
61
+
62
+ ![](images/99a36b29f3da5199e7743883d48cb5b7bedc24096b29a0070cde0d0dcc197fcb.jpg)
63
+
64
+ Finally, we regularize all latents to be smooth over time, through the introduction of a Gaussian Process prior, as in GPFA (Yu et al., 2008),
65
+
66
+ $$
67
+ \mathbf{z}^{(j)} \sim \mathrm{GP}\left(\mathbf{0}, k_{j}(\cdot, \cdot)\right), \tag{3}
68
+ $$
69
+
70
+ $$
71
+ k_{j}\left(z_{t,i}^{(j)}, z_{t',i'}^{(j)}\right) = \delta_{ii'} \exp\left(-\frac{(t - t')^{2}}{2\tau_{i}^{(j)}}\right), \tag{4}
72
+ $$
73
+
74
+ with area- and dimension-specific hyperparameters $\tau_i^{(j)}$ ; here $z_{t,i}^{(j)}$ is the $i$ -th component of the $j$ -th latent at time $t$ , and $\delta_{ii'}$ is the Kronecker delta.
75
+
76
+ Putting these elements together results in a factorization of the joint distribution of the form, $p\left(\mathbf{x}^{(1:n)}, \mathbf{y}, \mathbf{z}^{(0:n)}\right) = \prod_{j=0}^{n} p\left(\mathbf{z}^{(j)}\right) p\left(\mathbf{y}|\mathbf{z}^{(0)}\right) \prod_{i,j} p\left(x_i^{(j)}|\mathbf{z}^{(0)}, \mathbf{z}^{(j)}\right)$ . This general form allows for a unified mathematical treatment of several estimation tasks of interest. We will detail key instances of this class that have practical relevance for neuroscience when presenting our numerical results below.
77
+
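+ To make the generative process concrete, below is a minimal NumPy sketch of Eqs. (1)-(4) for a single area and a single trial; the dimensions, time constants, and weight scales are illustrative choices, not the ones used in our experiments:
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ T, dt = 50, 0.05                 # one 2.5 s trial at 50 ms resolution
+ d0, d1, n = 1, 2, 50             # shared dim, private dim, number of neurons
+ t = np.arange(T) * dt
+
+ def gp_draw(dim, tau):
+     """Independent latent GP draws with the RBF kernel of Eqs. (3)-(4)."""
+     K = np.exp(-(t[:, None] - t[None, :]) ** 2 / (2 * tau))
+     return rng.multivariate_normal(np.zeros(T), K + 1e-6 * np.eye(T), size=dim).T
+
+ z0, z1 = gp_draw(d0, 0.1), gp_draw(d1, 0.2)         # shared and private latents
+
+ C, d = rng.normal(size=(1, d0)), np.zeros(1)
+ y = z0 @ C.T + d + 0.1 * rng.normal(size=(T, 1))    # task variable, Eq. (2)
+
+ W0, W1, h = rng.normal(size=(n, d0)) / 3, rng.normal(size=(n, d1)) / 3, 0.0
+ x = rng.poisson(np.exp(z0 @ W0.T + z1 @ W1.T + h))  # spike counts, Eq. (1)
+ ```
+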
78
+ # 3 EM-BASED PARAMETER LEARNING
79
+
80
+ E-step Since a closed-form solution of the posterior is not available (due to the Poisson noise), we construct a Laplace approximation of the posterior, $p(\mathbf{z}|\mathbf{x},\mathbf{y},\boldsymbol {\theta})\approx q(\mathbf{z}|\mathbf{x},\mathbf{y},\boldsymbol {\theta}) = \mathcal{N}\left(\mathbf{z};\hat{\mathbf{z}}, - \mathbf{H}^{-1}\right)$ , where $\hat{\mathbf{z}}$ is the MAP of the joint log-likelihood and $\mathbf{H}$ is its corresponding Hessian. Both quantities are estimated numerically.
81
+
82
+ The MAP estimate is obtained by gradient descent on the joint log likelihood. The gradient of the joint log likelihood w.r.t. the latents can be written as
83
+
84
+ $$
85
+ \nabla_{\mathbf{z}^{(j)}} \log p(\mathbf{z}, \mathbf{x}, \mathbf{y}) = \sum_{l}\Bigg(\sum_{j \geq 0} \nabla_{\mathbf{z}^{(j)}} \log p\left(\mathbf{z}^{(j)}\right) + \sum_{t > 0} \nabla_{\mathbf{z}^{(j)}} \log p\left(\mathbf{y}_{t} \mid \mathbf{z}_{t}^{(0)}\right) + \sum_{t > 0}\sum_{j > 0} \nabla_{\mathbf{z}^{(j)}} \log p\left(\mathbf{x}_{t}^{(j)} \mid \mathbf{z}_{t}^{(0)}, \mathbf{z}_{t}^{(j)}\right)\Bigg),
86
+ $$
87
+
88
+ where $l \in (1:M)$ refers to the trial number (explicit index omitted for brevity). For a given trial, expanding one term at a time, we have
89
+
90
+ $$
91
+ \nabla_ {\mathbf {z} ^ {(j)}} \log p (\mathbf {z} ^ {(j)}) = - \mathbf {K} ^ {(j)} \mathbf {z} ^ {(j)}
92
+ $$
93
+
94
+ $$
95
+ \nabla_ {\mathbf {z} _ {t} ^ {(0)}} \log p (\mathbf {y} | \mathbf {z} _ {t} ^ {(0)}) = \mathbf {C} ^ {\top} \boldsymbol {\Psi} ^ {- 1} (\mathbf {y} _ {t} - \mathbf {C} \mathbf {z} _ {t} ^ {(0)} - \mathbf {d})
96
+ $$
97
+
98
+ $$
99
+ \nabla_ {\mathbf {z} _ {t} ^ {(k)}} \log p \left(\mathbf {x} _ {t} ^ {(j)} | \mathbf {z} _ {t} ^ {(0)}, \mathbf {z} _ {t} ^ {(j)}\right) = \mathbf {W} ^ {(k, j) \top} \left(\mathbf {x} _ {t} - \exp \left(\mathbf {W} ^ {(0, j)} \mathbf {z} _ {t} ^ {(0)} + \mathbf {W} ^ {(j, j)} \mathbf {z} _ {t} ^ {(j)} + \mathbf {h} ^ {(j)}\right)\right),
100
+ $$
101
+
102
+ where $j > 0$ , $k \in \{0, j\}$ and $\mathbf{K}^{(j)}$ the GP-prior covariance matrix (Eq. 3). The corresponding second moments are
103
+
104
+ $$
105
+ \nabla_{\mathbf{z}^{(j)}}^{2} \log p\left(\mathbf{z}^{(j)}\right) = -\mathbf{K}^{(j)}, \quad j \in (0:n)
106
+ $$
107
+
108
+ $$
109
+ \nabla_ {\mathbf {z} _ {t} ^ {(0)}} ^ {2} \log p (\mathbf {y} | \mathbf {z} _ {t} ^ {(0)}) = - \mathbf {C} ^ {\top} \boldsymbol {\Psi} ^ {- 1} \mathbf {C}
110
+ $$
111
+
112
+ $$
113
+ \nabla_{\mathbf{z}_{t}^{(h)}} \nabla_{\mathbf{z}_{t}^{(k)}} \log p\left(\mathbf{x}_{t}^{(j)} \mid \mathbf{z}_{t}^{(0)}, \mathbf{z}_{t}^{(j)}\right) = -\mathbf{W}^{(k,j)\top} \operatorname{diag}\left(\exp\left(\mathbf{W}^{(0,j)} \mathbf{z}_{t}^{(0)} + \mathbf{W}^{(j,j)} \mathbf{z}_{t}^{(j)} + \mathbf{h}^{(j)}\right)\right) \mathbf{W}^{(h,j)}.
114
+ $$
115
+
116
+ with $h, k \in \{0, j\}$ . Inverting the $D \times D$ dimensional Hessian matrix is cubic in $D = T \sum_{j} d_{j}$ , where $T$ is the trial length and $d_{j}$ denotes the dimensionality of latent $\mathbf{z}^{(j)}$ , which restricts the number and dimensionality of latents in practice. The Hessian of the log likelihood is sparse but does not have a factorized structure. Nonetheless, we can take advantage of the block matrix inversion theorem, to speed up the computation to $\mathcal{O}(T^3 \sum_{j} d_{j}^3)$ (see Appendix A.2), with additional improvements based on sparse GP methods (Wilson & Nickisch, 2015) left for future work.
117
+
118
+ M-step Given the approximate posterior $q$ found in the E-step, the updates for a few parameters can be derived analytically, while the rest must be computed numerically (see Suppl. Info. A.3 for details). The remaining observation model parameters are computed numerically by optimizing the expected log-likelihood under the posterior. In particular, for neuron $i$ in population $j$ we have
119
+
120
+ $$
121
+ \mathcal{L}\left(W_{i}^{(0,j)}, W_{i}^{(j,j)}, h_{i}\right) = \sum_{t,l} x_{ti}\left(h_{i} + \begin{bmatrix} W_{i}^{(0,j)} & W_{i}^{(j,j)} \end{bmatrix}\begin{bmatrix} \boldsymbol{\mu}_{t}^{(0)} \\ \boldsymbol{\mu}_{t}^{(j)} \end{bmatrix}\right) - \exp\left(h_{i} + \begin{bmatrix} W_{i}^{(0,j)} & W_{i}^{(j,j)} \end{bmatrix}\begin{bmatrix} \boldsymbol{\mu}_{t}^{(0)} \\ \boldsymbol{\mu}_{t}^{(j)} \end{bmatrix} + \frac{1}{2}\begin{bmatrix} W_{i}^{(0,j)} & W_{i}^{(j,j)} \end{bmatrix}\begin{bmatrix} \boldsymbol{\Sigma}_{t}^{(0,0)} & \boldsymbol{\Sigma}_{t}^{(0,j)} \\ \boldsymbol{\Sigma}_{t}^{(0,j)\top} & \boldsymbol{\Sigma}_{t}^{(j,j)} \end{bmatrix}\begin{bmatrix} W_{i}^{(0,j)\top} \\ W_{i}^{(j,j)\top} \end{bmatrix}\right). \tag{5}
122
+ $$
123
+
124
+ For each neural population, we jointly optimized the projection weights and the intercept of all neurons with a full Newton scheme by storing the inverse Hessian in compressed sparse row (CSR) format (see Appendix A.4 for the gradient and Hessian of $\mathcal{L}$ ).
125
+
126
+ The GP-prior parameters were also learned from the data by gradient-based optimization (using the limited-memory Broyden-Fletcher-Goldfarb-Shanno scheme (Virtanen et al., 2020)). First, we set $\lambda_i^{(j)} = -\log (2\tau_i^{(j)})$ and optimize over $\lambda_i^{(j)}$ , which enforces a positive time constant. We define $\mathbf{K}_i^{(j)} \in \mathbb{R}^{T \times T}$ such that $\left[\mathbf{K}_i^{(j)}\right]_{ts} = \exp \left(-e^{\lambda_i^{(j)}}(t - s)^2\right)$ . The resulting objective function takes the form $\mathcal{L}\left(\lambda_i^{(j)}\right) = -\mathrm{trace}\left(\mathbf{K}_i^{(j)-1}\mathbb{E}_q[\mathbf{z}_i^{(j)}\mathbf{z}_i^{(j)\top}]\right) - \log |\mathbf{K}_i^{(j)}|$ . Gradients are provided in Appendix A.5, together with the procedure for parameter initialization (Appendix A.6).
127
+
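+ A sketch of this kernel parameterization and hyperparameter objective (with the sign flipped for minimization); `Ezz` stands for the posterior second moment $\mathbb{E}_q[\mathbf{z}_i^{(j)}\mathbf{z}_i^{(j)\top}]$ and is a placeholder here:
+
+ ```python
+ import numpy as np
+ from scipy.optimize import minimize
+
+ def kernel(lam, t):
+     """RBF kernel K_ts = exp(-e^lam (t - s)^2), i.e. tau = exp(-lam) / 2 > 0."""
+     return np.exp(-np.exp(lam) * (t[:, None] - t[None, :]) ** 2)
+
+ def neg_objective(lam, t, Ezz):
+     """-L(lam) = trace(K^{-1} E_q[z z^T]) + log|K|, to be minimized over lam."""
+     K = kernel(lam, t) + 1e-6 * np.eye(len(t))  # jitter for numerical stability
+     _, logdet = np.linalg.slogdet(K)
+     return np.trace(np.linalg.solve(K, Ezz)) + logdet
+
+ # Fit the time constant of one latent dimension by quasi-Newton descent
+ t = np.arange(50) * 0.05
+ Ezz = kernel(0.0, t) + 1e-6 * np.eye(50)        # placeholder second moment
+ res = minimize(neg_objective, x0=np.array([0.5]), args=(t, Ezz), method="L-BFGS-B")
+ ```
+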
128
+ # 4 RESULTS
129
+
130
+ Latent reconstruction for within-model data. To validate the estimation procedure, we first used a simulated dataset sampled from the TAME-GP graphical model with predefined parameters. Specifically, we simulated two neural populations $\mathbf{x}^{(1)}$ and $\mathbf{x}^{(2)}$ , each with 50 units, and a one-dimensional task-relevant variable $y$ . We fixed the private latent factors $\mathbf{z}^{(1)}$ and $\mathbf{z}^{(2)}$ to two dimensions each, and the shared factor $\mathbf{z}^{(0)}$ to one. The projection weights $\mathbf{W}^{(j)}$ and $\mathbf{C}$ , the intercept terms $\mathbf{d}$ and $\mathbf{h}^{(j)}$ , the observation variance matrix $\boldsymbol{\Psi}$ , and the GP time constants of the factors were randomly assigned. The parameters were chosen such that the overall mean firing rate was about $20\mathrm{Hz}$ in both
131
+
132
+ areas. We simulated spike counts at 50 ms resolution for 200 draws from the process (which we will refer to as 'trials' in analogy to experiments), each lasting 2.5 seconds (see example trial in Fig. 1B). Given this data, we assessed the ability of our EM-based estimator to recover its true latent structure. The marginal log likelihood saturated after a relatively small number of EM iterations (Fig. 1C). As a basic test of our ability to determine the dimensionality of latents, we systematically varied the dimensionality of the shared latent, while fixing the dimensions of $\mathbf{z}^{(1)}$ and $\mathbf{z}^{(2)}$ to their ground truth value of 2. We found that the best model fit was achieved at the ground truth task dimension 1, demonstrating that we are able to infer the true latent dimensionality from data (Fig. 1D-F).
133
+
134
+ Finally, we assessed the quality of the recovered latents in individual test trials. Due to known degeneracies, originally documented in linear Gaussian latent variable models (Roweis & Ghahramani, 1999), the latent factors in TAME-GP are identifiable only up to an affine transformation of the latent space. To address this, we used Procrustes alignment (Schönemann, 1966) to map the latent axes back to the original space. The resulting posterior mean estimates of the latents show excellent agreement with the ground truth factors (cross-validated linear regression $R^2$ of 0.99 between the MAP estimate of the latents and the ground truth, Fig. 1D-F), while the model-predicted rates explained $98\%$ of the ground truth firing rate variance. The ability to reconstruct ground truth structure for within-model data persists when considering more than two areas with shared covariability (Suppl. Fig. S1). Overall, these numerical tests confirm that EM provides a veridical estimation of ground truth latent structure for within-distribution data.
135
+
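+ A sketch of the alignment step, restricted here for brevity to the rotational part of the degeneracy (the full affine case would also fit a scale and offset):
+
+ ```python
+ import numpy as np
+ from scipy.linalg import orthogonal_procrustes
+
+ rng = np.random.default_rng(0)
+ z_true = rng.normal(size=(100, 2))            # ground truth latents, shape (T, d)
+ R = np.linalg.qr(rng.normal(size=(2, 2)))[0]  # an arbitrary orthogonal map
+ z_hat = z_true @ R                            # latents recovered up to that map
+
+ Q, _ = orthogonal_procrustes(z_hat, z_true)   # best orthogonal map z_hat -> z_true
+ z_aligned = z_hat @ Q
+ assert np.allclose(z_aligned, z_true)
+ ```
+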
136
+ Task-aligned latent reconstruction for simulated latent dynamical systems models. The simple graphical model of TAME-GP captures axes of neural variability of scientific interest, but is far from an accurate generative model for neural dynamics during behavior. To assess the ability of TAME-GP to extract underlying structure from complex and out-of-distribution neural data, we used latent dynamical systems models in which we can explicitly define the flow of information from external stimuli and between areas, in several scenarios of practical interest.
137
+
138
+ The first in silico experiment focuses on identifying axes of task-relevant variability in neural responses. As a simple test case, we modeled a single neural population with a 6d latent structure (Fig. 2A). Two of the latent dimensions were task-relevant, driven by an observed temporally smooth external input $\mathbf{y}_t$ , while the other four dimensions were intrinsic to the circuit. The key distinction between this process and the TAME-GP model assumptions is that the observed task variable acts as an input drive to the underlying latent dynamics rather than mapping to the latents directly. The latent dynamics take the form of a multivariate AR(1),
139
+
140
+ $$
141
+ \left\{ \begin{array}{ll} \mathbf{z}_{\mathrm{pr}, t+1} &= A_{\mathrm{pr}}\left(\mathbf{z}_{\mathrm{pr}, t} - \mu_{t}\right)\Delta t + \sqrt{2\Delta t}\,\mathrm{d}\mathbf{w}_{t}^{(0)} \\ \mathbf{z}_{\mathrm{tr}, t+1} &= A_{\mathrm{tr}}\left(\mathbf{z}_{\mathrm{tr}, t} - \mathbf{y}_{t}\right)\Delta t + \sqrt{2\Delta t}\,\mathrm{d}\mathbf{w}_{t}^{(1)}, \end{array} \right. \tag{6}
142
+ $$
143
+
144
+ where $A_{\mathrm{pr}} \in \mathbb{R}^{4 \times 4}$ and $A_{\mathrm{tr}} \in \mathbb{R}^{2 \times 2}$ are the private and task-relevant dynamics, $\mathbf{y}_t \in \mathbb{R}^2$ and $\mu_t \in \mathbb{R}^4$ are inputs drawn from a factorized RBF kernel, and $w_t^{(i)}$ is independent white noise for $i = 0,1$ (see the simulation sketch below). Given these latent dynamics, spikes are generated as described by the TAME-GP observation model with $\mathbf{W} \in \mathbb{R}^{100 \times 6}$ and $\mathbf{d} \in \mathbb{R}^{100}$ . We adjusted the parameters so as to cover several average population firing rates by regulating $\mathbf{d}$ , for a fixed number of trials (200) and a fixed trial duration (5 seconds). For simplicity, we circumvent the hyperparameter selection step by assuming that all estimators have access to the ground truth latent dimensionality: TAME-GP assumed 2 shared and 4 private latents. Unsupervised methods (pPCA, P-GPFA) were tasked with extracting the main two axes of neural variability in the data, while the supervised method (pCCA) estimated 2d latents that correlate with the task variable $\mathbf{y}$ ; the same alignment procedure was used in all cases.
145
+
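+ For illustration, the task-relevant block of Eq. (6) can be simulated as the Euler discretization of a mean-reverting process; the dynamics matrix and the smooth input below are illustrative stand-ins:
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ T, dt = 100, 0.05
+ A_tr = -2.0 * np.eye(2)            # stable dynamics pulling z_tr toward the input
+
+ t = np.arange(T) * dt              # smooth 2d input drawn from an RBF-kernel GP
+ K = np.exp(-(t[:, None] - t[None, :]) ** 2 / (2 * 0.3))
+ y = rng.multivariate_normal(np.zeros(T), K + 1e-6 * np.eye(T), size=2).T
+
+ z_tr = np.zeros((T, 2))
+ for k in range(T - 1):             # Euler step of the task-relevant dynamics
+     drift = A_tr @ (z_tr[k] - y[k]) * dt
+     z_tr[k + 1] = z_tr[k] + drift + np.sqrt(2 * dt) * rng.normal(size=2)
+ ```
+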
146
+ Fig. 2B illustrates the latent dynamics as estimated by TAME-GP, pPCA (Tipping & Bishop, 1999), P-GPFA (Hooram, 2015), and pCCA (Bach & Jordan, 2005). We quantify the latent space estimation accuracy by mean squared error, demonstrating that TAME-GP captured the stimulus-driven dynamics better than the other methods (Fig. 2C and Suppl. Fig. S2). P-GPFA showed a tendency to over-smooth, which obscured most of the underlying fine-timescale latent structure. pPCA failed by focusing on the main axes of variability irrespective of task relevance, while pCCA estimates were visually less interpretable. Only pCCA and TAME-GP found projections that selectively encoded $\mathbf{z}_{\mathrm{tr}}$ , with TAME-GP outperforming pCCA across conditions. Finally, TAME-GP maintained its ability to
147
+
148
+ ![](images/d70775c9054c7bbace242f0878349532b03b8253dc56439ed1e858504a492c8b.jpg)
149
150
+
151
+ ![](images/a46f40f9e133888798a7f37d548bc93e3f18494b078d7de8045fb287182f65e4.jpg)
152
153
+
154
+ ![](images/f133ebdd182c102a0a4638ed3a04933017f902366c7f65bf807a34818e0fe40b.jpg)
155
+
156
+ ![](images/7de1fdadf399e8de1c9ef1446251198d060b9725b936dfd8d66bf9a903c369dd.jpg)
157
+
158
+ ![](images/2936249e1dbdbc6377875b4e6bc2bb61ad480088d5df877a9d025986cb9e5158.jpg)
159
+ Figure 2: Methods comparison for single area task manifold alignment. A. TAME-GP graphical model for a single area (top) and schematic of the data generating process (bottom). $\mathbf{z}_{\mathrm{tr}}$ denotes task-relevant shared latent dimensions, while $\mathbf{z}_{\mathrm{pr}}$ denotes private task-irrelevant variability. B. Ground truth task-relevant dynamics (green) and estimated low dimensional projection for TAME-GP (purple), P-GPFA (blue), pPCA (dark gray) and pCCA (light gray). C. Mean squared error between the true shared dynamics and the model reconstruction, mean $\pm$ s.d. over 10-fold cross-validation. D. Example single trial firing rate reconstruction. E. Mean squared error between the true and reconstructed firing rate across conditions, mean $\pm$ s.d. over 10 folds of the data.
160
+
161
+ ![](images/a9fc00f501fcdf38f1b65402096c909e7f4a6fdd364ca1422aa6c3754a34c278.jpg)
162
+
163
+ ![](images/05d4cd86e9c4d2600181bf0147c586d024d98a984c5eb71f6f847fbdda18f4e3.jpg)
164
+
165
+ ![](images/dbb7a3054e2653f3a89e8940c2894eeec11a3810ccbb4166ffeb185af0bbce3e.jpg)
166
+
167
+ recover the underlying structure even when the model assumptions do not match the data exactly, in particular when the effect of the latents were modeled to be approximately additive (Suppl. Fig. S3).
168
+
169
+ We also compared these methods in terms of their ability to predict the ground truth firing rate generating the observed spiking responses (total dimensions matching the ground truth of 6). Both TAME-GP and P-GPFA showed stable and accurate firing rate reconstruction across conditions (Fig. 2D,E), while the factorized linear Gaussian methods (pPCA, pCCA) performed poorly. This is likely due to their larger model mismatch, compounded by the lack of temporal smoothing, which hurts especially at low firing rates. Overall, TAME-GP was the only procedure that both captured the overall data statistics well and extracted accurate task-interpretable latents.
170
+
171
+ Assessing inter-area communication in simulated latent dynamical systems. In the second set of numerical experiments, we focused on estimating low-dimensional communication subspaces across neural populations (Fig. 3A). The ground truth data was again constructed using latent dynamical systems models, which now included two populations (Fig. 3B), where a low dimensional projection of the dynamics in one area, the sender, drives the dynamics of the other area, the receiver:
172
+
173
+ $$
174
+ \left\{ \begin{array}{l} \mathbf{z}_{\mathrm{S}, t+1} = A_{\mathrm{S}} \left(\mathbf{z}_{\mathrm{S}, t} - \mathbf{y}_t\right) \Delta t + \sqrt{2 \Delta t}\, \mathbf{w}_t^{(0)} \\ \mathbf{z}_{\mathrm{sh}} = P \cdot \mathbf{z}_{\mathrm{S}} \\ \mathbf{z}_{\mathrm{R}, t+1} = A_{\mathrm{R}} \left(\mathbf{z}_{\mathrm{R}, t} - \boldsymbol{\lambda}_t - \mathbf{z}_{\mathrm{sh}, t}\right) \Delta t + \sqrt{2 \Delta t}\, \mathbf{w}_t^{(1)}, \end{array} \right. \tag{7}
175
+ $$
176
+
177
+ where $A_{S} \in \mathbb{R}^{4 \times 4}$ and $A_{R} \in \mathbb{R}^{4 \times 4}$ are the sender and receiver dynamics, $\mathbf{y}_{t}$ and $\lambda_{t}$ are temporally smooth inputs drawn from independent GPs with factorized RBF kernels, $P \in \mathbb{R}^{2 \times 4}$ defines the shared submanifold projection, and $w_{t}^{(i)}$ is independent white noise. These latents map into spikes as above. We simulated three average firing rate conditions and varied the ground truth number of shared dimensions from one to three. We compared our method with the two most commonly used approaches to communication subspace estimation, pCCA and the reduced-rank regression procedure of Semedo et al. (2019) (Fig. 3C), as well as with SNP-GPFA (Keeley et al., 2020) (both with and without trial repeats, see Appendix A.7 and Suppl. Fig. S4).
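+
+ The sender/receiver system in Eq. (7) can be sketched analogously, reusing `rbf_draw` from the previous snippet; $A_S$, $A_R$, and $P$ are placeholders here, and since $\mathbf{z}_{\mathrm{sh}}$ is 2d while $\mathbf{z}_{\mathrm{R}}$ is 4d, the shared drive is zero-padded, one assumption among several possible ways to embed it.
+
+ ```python
+ # Sketch of the two-area (sender -> receiver) ground truth of Eq. (7).
+ import numpy as np
+
+ rng = np.random.default_rng(1)
+ T, dt = 250, 0.02
+ A_S, A_R = -2.0 * np.eye(4), -2.0 * np.eye(4)
+ P = rng.standard_normal((2, 4)) / 2          # shared-subspace projection
+
+ y, lam = rbf_draw(4, T, dt), rbf_draw(4, T, dt)   # smooth GP inputs
+
+ z_S, z_R = np.zeros((T, 4)), np.zeros((T, 4))
+ for t in range(T - 1):
+     z_sh = P @ z_S[t]                        # low-d projection of the sender
+     z_S[t + 1] = z_S[t] + A_S @ (z_S[t] - y[t]) * dt \
+         + np.sqrt(2 * dt) * rng.standard_normal(4)
+     drive = lam[t] + np.concatenate([z_sh, np.zeros(2)])  # padded shared drive
+     z_R[t + 1] = z_R[t] + A_R @ (z_R[t] - drive) * dt \
+         + np.sqrt(2 * dt) * rng.standard_normal(4)
+ ```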
178
+
179
+ TAME-GP (without task alignment) outperformed the alternative approaches in terms of the reconstruction error of both the ground truth firing rates (Fig. 3D, F) and the shared latent dynamics (Fig. 3E). Furthermore, when testing the ability of different approaches to infer the dimensionality of the shared manifold through model comparison, the leave-one-out likelihood saturated at the ground truth dimension for all simulations (Fig. 3I), and peaked at the correct dimension $75\%$ of the time (Fig. 3G, H). In contrast, the Semedo estimator tended to systematically overestimate the dimensionality of the shared manifold in this dataset.
180
+
181
+ Finally, we tested the general case in which we search for a communication subspace that aligns to task variable $\mathbf{y}$ . To do so, we fit TAME-GP to the same dataset but assuming that $\mathbf{y}_t$ is observed.
182
+
183
+ ![](images/f7de735cbf688497bb847acc82afb50558afd32263b35c57796cbb42710c4373.jpg)
184
+ Figure 3: A. Schematic of communication subspace (left) and associated TAME-GP graphical model versions (right). B. Ground truth spike count generation process. C. Example shared latent reconstruction for TAME-GP (purple), pCCA (light grey), and reduced-rank regression (dark grey); ground truth in orange. D. Statistics for firing rate prediction quality. E. Statistics of shared dynamics reconstruction. F. Example reconstructions of the receiver firing rates compared to the ground truth (green). G. TAME-GP leave-one-neuron-out log-likelihood for different ground truth shared manifold dimensionality $(d = 1,2,3)$ ; line styles denote increasing average population rates of 5.1, 10.7, and $15.9\,\mathrm{Hz}$ (dashed, dashed-dotted, and continuous lines, respectively). H. Difference between estimated and true $\mathbf{z}_{\mathrm{sh}}$ dimensionality for TAME-GP (purple) and reduced-rank regression (grey). I. Model fit quality as a function of latent dimensionality for all estimators. Ground truth dimension $d = 2$ (dashed line). Error bars show mean $\pm$ s.d. over 10 folds of cross-validation.
185
+
186
+ We again found that TAME-GP had the best reconstruction accuracy, saturating at the ground truth dimensionality $(d = 2)$ . These observations are consistent across firing rate levels (see Suppl. Fig. S5). When fitting SNP-GPFA to simulated data in the case of precise stimulus repetitions and comparing it to TAME-GP, we found that both models were able to capture the latent space factorization. However, only TAME-GP worked well when the latent dynamics varied across episodes, as would be the case during natural behavior (i.e. without stimulus repeats, see Suppl. Fig. S4, Table S1 and Appendix A.7 for details). Overall, these results suggest that TAME-GP can robustly recover meaningful sources of co-variability across areas in a range of experimentally relevant setups.
187
+
188
+ Mouse neural recordings during open-field exploration. As a first validation of the method, we estimated the manifold formed by simultaneously recorded head direction cells $(n = 33)$ in the anterodorsal thalamic nuclei (ADN; Taube, 1995) of a mouse exploring a circular open field.
189
+
190
+ These neurons are known to form a circular manifold representing heading direction (Chaudhuri et al., 2019). Thus, they provide the opportunity to examine the ability of TAME-GP to recover the underlying structure of data for which we know the biological ground truth. Recorded responses were segmented into 10 sec time series, discretized in 20 ms bins, and fit either with a head-direction aligned 2d latent manifold (Fig. 4A; private noise dimension $d = 5$ ), or with two unsupervised methods, pPCA and P-GPFA, each with latent dimensionality $d = 2$ . All methods recovered the underlying circular structure of the heading representation to some degree (Fig. 4B). We decoded head direction from the extracted 2d latents $^6$ and confirmed that TAME-GP preserved more information than pPCA, and a comparable amount to P-GPFA (Fig. 4C), with an overall superior data fit quality relative to pPCA (Fig. 4D), as assessed by the $R^2$ between model leave-one-neuron-out firing rate predictions and the raw spike
191
+
192
+ ![](images/a84df0ac91653c2ffb30d61854de8b336b27a43a5a067ba683be0571dc686128.jpg)
193
195
+
196
+ ![](images/f2a30bd936fb4d97f2c13bb4581b198595559a7925af728e5fd4f2021ff3624d.jpg)
197
198
+
199
+ ![](images/4b4eb382259d41561c599e455fc5f1cec1401fbba87bd126daca4d417c74b332.jpg)
200
+
201
+
202
204
+
205
+ ![](images/03bb93af437fd088da3990455b9efa51435278d4a2d0f5771cea32f84c5bd336.jpg)
206
+
207
+ ![](images/852e860510d0c4224f8c5acb3f39c34d1328f41523c23358b48fc5ee28760218.jpg)
208
209
+
210
+ ![](images/911b6d0c2297f57aa5bd8cc183bbb6b9df497910ed3932e01e07358ecd3f7019.jpg)
211
212
+
213
+ ![](images/5e708b186612c0d43cb7f1c24c1d35afefb56e2119f692aae433d867172a3bb4.jpg)
214
216
+
217
+ ![](images/b52c432b3f6a2a4b7299038f5019584bfd8b79b546d0b2a4335cc75553e964ca.jpg)
218
+
219
+ ![](images/59ffab9a7065c569df4e18bad2e65e054bda72ffb54e522f33ba6bcf32fa9a41.jpg)
220
+
221
+ ![](images/99a22efd538ae6cc462243894bfcee754d21d5cbadf0e041659e17de748c6127.jpg)
222
+
223
+ ![](images/6c83cdea0148e27daa3e4b9dc6242ec73284adb9c26bca583b3849d383a2304b.jpg)
224
225
+
226
+ ![](images/314700080a491a28875eec8d71702abec8896d13b8394ed12371fb6540c08c69.jpg)
227
228
+
229
+ ![](images/f420a1a783d3c9286189901b69759e943434c50e134c611bd41817d72442c158.jpg)
230
231
+
232
+ ![](images/38ae17ad0f78c52e6fce3593fbe3e651e19f1d5a0e278a546b6764b1e6e06876.jpg)
233
234
+ Figure 4: Fitting TAME-GP to neural data. A. Graphical model for heading aligned mouse population responses in area ADN. $\mathbf{z}_{\mathrm{tr}}$ denotes heading related shared latent dimensions while $\mathbf{z}_{\mathrm{pr}}$ denotes private task-irrelevant variability. B. Latent population dynamics, colored by time-varying heading, for various manifold estimators. C. Head direction decoding from 2d latents extracted with each method (by Lasso regression). Mean $\pm$ standard deviation over 5 folds. D. Scatter plot of leave-one-neuron-out spike count variance explained for dimension matched TAME-GP and pPCA. Dots represent individual neurons. E. Schematic of the firefly task. The initial target location is randomized and remains visible for $300\,\mathrm{ms}$ . The monkey has to use the joystick to navigate to the internally maintained target position. F. Top view of example monkey trajectories; increasing contrast marks the initial location of the target (right, center, left). G. Within-area TAME-GP estimation aligned to a latent task variable: the distance travelled. H. Scatter plot of leave-one-neuron-out spike count variance explained for dimension-matched TAME-GP and pPCA. Dots represent individual neurons. I, J. Single trial estimates of the task relevant dynamics for TAME-GP (I) and P-GPFA (J). Trajectories are color-graded according to the initial angular target location (as in B). K, L. Lasso regression decoding of the angular (K) and linear (L) distance travelled. TAME-GP decoding $R^2$ (purple) is based on a 2d task relevant latent; P-GPFA $R^2$ (blue) estimates were obtained for a range of latent dimensions (1-10). M. Communication subspace estimation between MSTd and dlPFC. N. As H, for the shared latent space. O. Lasso regression decoding of task relevant variables (sorted by their shared subspace information content) from the shared (orange) and private latents (green, red) estimated by TAME-GP. Mean $R^2 \pm$ s.e.m. estimated across 10 folds of the data.
235
+
236
+ ![](images/06851bca12a0ad9b32f740cd5b72b4167dbd3eeb98b117e72f185ace316ca013.jpg)
237
238
+
239
+ ![](images/f76c5d7e3dce6af0a4c56d97d61d4ceac32d4fd5f5e462ba3be4516a2aa69136.jpg)
240
241
+
242
+ ![](images/30efe0a2e46ab8344e16390a9392974c48297edf7c1520c5784f529e63d323d0.jpg)
243
244
+
245
+ counts (Yu et al., 2008). Overall, these results confirm that the TAME-GP estimator can extract sensible coding structure from real data that does not exactly match the assumptions of the model.
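+
+ As a hedged illustration of the decoding step used above: one simple option (assumed here for concreteness; the exact decoder configuration follows the footnoted procedure) is to regress the sine and cosine of heading from the 2d latents with Lasso and read the angle back out, which handles the circularity of the variable.
+
+ ```python
+ # Decode a circular variable (heading) from 2d latents via its sin/cos.
+ import numpy as np
+ from sklearn.linear_model import Lasso
+ from sklearn.model_selection import cross_val_predict
+
+ rng = np.random.default_rng(2)
+ z = rng.standard_normal((1000, 2))              # stand-in (T, 2) latents
+ theta = np.arctan2(z[:, 1], z[:, 0])            # stand-in heading (radians)
+
+ targets = np.column_stack([np.cos(theta), np.sin(theta)])
+ pred = cross_val_predict(Lasso(alpha=1e-3), z, targets, cv=5)
+ theta_hat = np.arctan2(pred[:, 1], pred[:, 0])  # decoded heading
+ ```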
246
+
247
+ Multi-area neural recordings in monkeys during VR spatial navigation. Finally, we tested the ability of TAME-GP to find task aligned neural manifolds in a challenging dataset characterized by a high-dimensional task space and a lack of trial repeats. Specifically, monkeys navigate in virtual reality to "catch fireflies", using a joystick that controls their linear and angular velocity (Fig. 4E, F) (Lakshminarasimhan et al., 2018). Spiking activity was measured (binned in 6 ms windows, sessions lasting over $90\,\mathrm{min}$ ) and neurons in the two recorded brain areas (MSTd and dlPFC) showed mixed selectivity, encoding a multitude of task relevant variables (Noel et al., 2021). As a result, responses are high dimensional, and unsupervised dimensionality reduction methods capture a hard-to-interpret mixture of task relevant signals in their first few latent dimensions.
248
+
249
+ We used TAME-GP to extract latent projections that align with the ongoing distance from the origin, decomposed into an angular and a radial component (Fig. 4G). We set the dimensionality of the task relevant latent $\mathbf{z}^{(0)}$ to two, matching the number of task variables. We verified the accuracy of the model by computing leave-one-neuron-out firing rate predictions and calculating the $R^2$ between model predictions and raw spike counts. The TAME-GP estimator systematically outperformed pPCA with a matched number of latents by this metric (Fig. 4H). We also compared the latent factors found
250
+
251
+ by TAME-GP to those obtained by P-GPFA (Fig. 4I, J). We found that both task variables were better accounted for by the two-dimensional TAME-GP estimated latent than by latent spaces of up to 10 dimensions extracted with P-GPFA (Fig. 4K, L). A similar compression of the manifold was achieved in a separate dataset of monkey (pre-)motor responses during sequential reaches (see A.9 and Suppl. Fig. S7). This confirms that TAME-GP provides a compact low dimensional account of neural variability with respect to task variables of interest.
252
+
253
+ Lastly, we probed the model's ability to learn a communication subspace (Fig. 4M) between MSTd and dlPFC, brain areas that are known to interact during this task (Noel et al., 2021). In this instance, we selected the number of shared and private latent dimensions by maximizing the leave-one-neuron-out spike-count variance explained over a grid of candidate values (see Suppl. Fig. S6 and A.8). As before, we found that the TAME-GP reconstruction accuracy surpassed that of dimensionality-matched pPCA, for both MSTd and dlPFC (Fig. 4N). Since the shared manifold estimation was agnostic to task variables in this case, we used decoding from latent spaces to ask whether the shared variability between these areas carried information about task variables known to drive single neuron responses in these areas. We found that the monkey's horizontal eye position, as well as latent task variables such as the travelled distance or the distance still remaining to target, were mostly accounted for by shared, as opposed to private, axes of variability (Fig. 4O). This recapitulates prior observations made at the single-cell level (Noel et al., 2021). Overall, the results demonstrate that TAME-GP can extract interpretable low-dimensional latents and shared neural subspaces from complex and high-dimensional datasets.
254
+
255
+ # 5 DISCUSSION
256
+
257
+ Technological advances in systems neuroscience place an ever-increasing premium on the ability to concisely describe high-dimensional task-relevant neural responses. While sophisticated methods based on recurrent neural networks are increasingly used for fitting neural responses (Pandarinath et al., 2018), the extracted dynamics are not necessarily easy to interpret. Here we introduce TAME-GP, a flexible statistical framework for partitioning neural variability in terms of private or shared (i.e., inter-area) sources, aligned to task variables of interest, and with single trial resolution. We show that our method provides compact latent manifold descriptions that better capture neural variability than any of the standard approaches we compared it against.
258
+
259
+ An important nuance that distinguishes various neural dimensionality reduction methods is whether the covariability being modeled is that of trial-averaged responses (i.e. stimulus correlations), residual fluctuations around mean responses (i.e. noise correlations), or a combination of the two (total correlations). Since isolating either the signal or the noise correlations alone would require across-trial averages, our approach models total correlations, time resolved within individual trials. This differentiates our shared variability estimates from the traditional definition of a communication subspace (Semedo et al., 2019), which uses noise correlations alone, while keeping some of its spirit. It also makes the approach applicable to datasets without trial repeats.
260
+
261
+ The model adapts the approach of pCCA as a way of ensuring that the extracted latents reflect axes of neural variability that carry specific task relevant information. This choice has appealing mathematical properties in terms of unifying the problems of finding interpretable axes and communication subspaces, but it is not the most natural one in terms of the true generative process of the data. While behavioral outputs can be thought of as outcomes of neural activity, as described by the TAME-GP graphical model, sensory variables act as drivers for the neural responses and should affect the latent dynamics, not the other way around. Hence a natural next step will be to incorporate explicit stimulus responses into the framework, perhaps by taking advantage of recent advances in estimating complex tuning functions during naturalistic behavior (Balzani et al., 2020).
262
+
263
+ It would be interesting to explore the use of temporal priors with richer structure, for instance spectral mixture kernels (Wilson & Adams, 2013), introducing prior dependencies across latent dimensions (de Wolff et al., 2021), or using non-reversible GP priors that better capture the causal structure of neural dynamics (Rutten et al., 2020). More generally, the probabilistic formulation allows the ideas formalized by TAME-GP to be combined with other probabilistic approaches for describing stimulus tuning and explicit latent neural dynamics (Duncker et al., 2019; Glaser et al., 2020; Duncker & Sahani, 2021). Hence, this work adds yet another building block to our statistical arsenal for tackling questions about neural population activity as a substrate for brain computation.
264
+
265
+ Broader impact We do not foresee any negative consequences to society from our work. Task aligned manifold extraction may prove useful in clinical applications, specifically for increasing robustness of BMI decoders by exploiting the intrinsic structure of the neural responses. Code implementing the TAME-GP estimator and associated demos is available at https://github.com/BalzaniEdoardo/TAME-GP
266
+
267
+ Acknowledgements. This work was supported by the National Institute of Health under the U19 research program (grant agreement number NIH U19NS118246).
268
+
269
+ # REFERENCES
270
+
271
+ Gian Nicola Angotzi, Fabio Boi, Aziliz Lecomte, Ermanno Miele, Mario Malerba, Stefano Zucca, Antonino Casile, and Luca Berdondini. Sinaps: An implantable active pixel sensor cmos-probe for simultaneous large-scale neural recordings. *Biosensors and Bioelectronics*, 126:355-364, 2019.
272
+ Francis R Bach and Michael I Jordan. A probabilistic interpretation of canonical correlation analysis. Technical report, 2005.
273
+ Edoardo Balzani, Kaushik Lakshminarasimhan, Dora Angelaki, and Cristina Savin. Efficient estimation of neural tuning during naturalistic behavior. Advances in Neural Information Processing Systems, 33:12604-12614, 2020.
274
+ Christopher M Bishop and Nasser M Nasrabadi. Pattern recognition and machine learning, volume 4. Springer, 2006.
275
+ Fabio Boi, Nikolas Perentos, Aziliz Lecomte, Gerrit Schwesig, Stefano Zordan, Anton Sirota, Luca Berdondini, and Gian Nicola Angotzi. Multi-shanks sinaps active pixel sensor cmos probe: 1024 simultaneously recording channels for high-density intracortical brain mapping. bioRxiv, pp. 749911, 2020.
276
+ Wieland Brendel, Ranulfo Romo, and Christian K Machens. Demixed principal component analysis. Advances in neural information processing systems, 24, 2011.
277
+ Rishidev Chaudhuri, Berk Gerçek, Biraj Pandey, Adrien Peyrache, and Ila Fiete. The intrinsic attractor manifold and population dynamics of a canonical cognitive circuit across waking and sleep. Nature neuroscience, 22(9):1512-1520, 2019.
278
+ John P Cunningham and M Yu Byron. Dimensionality reduction for large-scale neural recordings. Nature neuroscience, 17(11):1500-1509, 2014.
279
+ Andreas Damianou, Neil D Lawrence, and Carl Henrik Ek. Multi-view learning as a nonparametric nonlinear inter-battery factor analysis. arXiv preprint arXiv:1604.04939, 2016.
280
+ Taco de Wolff, Alejandro Cuevas, and Felipe Tobar. Mogptk: The multi-output gaussian process toolkit. Neurocomputing, 424:49-53, 2021.
281
+ Lea Duncker and Maneesh Sahani. Dynamics on the manifold: Identifying computational dynamical activity from neural population recordings. Current opinion in neurobiology, 70:163-170, 2021.
282
+ Lea Duncker, Gergo Bohner, Julien Boussard, and Maneesh Sahani. Learning interpretable continuous-time models of latent stochastic dynamical systems. In International Conference on Machine Learning, pp. 1726-1734. PMLR, 2019.
283
+ Carl Henrik Ek and PHTND Lawrence. Shared Gaussian process latent variable models. PhD thesis, CiteSeer, 2009.
284
+ Joshua Glaser, Matthew Whiteway, John P Cunningham, Liam Paninski, and Scott Linderman. Recurrent switching dynamical systems models for multiple interacting neural populations. Advances in neural information processing systems, 33:14867-14878, 2020.
285
+ Nam Hooram. Poisson extension of Gaussian process factor analysis for modeling spiking neural populations. Master's thesis, Department of Neural Computation and Behaviour, Max Planck Institute for Biological Cybernetics, Tübingen, 2015.
286
+
287
+ Cole Hurwitz, Akash Srivastava, Kai Xu, Justin Jude, Matthew Perich, Lee Miller, and Matthias Hennig. Targeted neural dynamical modeling. Advances in Neural Information Processing Systems, 34:29379-29392, 2021.
288
+ S.L. Keeley, M.C. Aoi, Y. Yu, S.L. Smith, and J.W. Pillow. Identifying signal and noise structure in neural population activity with Gaussian process factor models. Advances in Neural Information Processing Systems, 33, 2020.
289
+ Dmitry Kobak, Wieland Brendel, Christos Constantinidis, Claudia E Feierstein, Adam Kepecs, Zachary F Mainen, Xue-Lian Qi, Ranulfo Romo, Naoshige Uchida, and Christian K Machens. Demixed principal component analysis of neural population data. *Elife*, 5:e10989, 2016.
290
+ Kaushik J Lakshminarasimhan, Marina Petsalis, Hyeshin Park, Gregory C DeAngelis, Xaq Pitkow, and Dora E Angelaki. A dynamic bayesian observer model reveals origins of bias in visual path integration. Neuron, 99(1):194-206, 2018.
291
+ Christian K Machens. Demixing population activity in higher cortical areas. Frontiers in computational neuroscience, 4:126, 2010.
292
+ Angie M Michaiel, Elliott TT Abe, and Cristopher M Niell. Dynamics of gaze control during prey capture in freely moving mice. *Elife*, 9:e57458, 2020.
293
+ Jean-Paul Noel, Edoardo Balzani, Eric Avila, Kaushik Lakshminarasimhan, Stefania Bruni, Panos Alefantis, Cristina Savin, and Dora E Angelaki. Flexible neural coding in sensory, parietal, and frontal cortices during goal-directed virtual navigation. bioRxiv, 2021.
294
+ Chethan Pandarinath, Daniel J O'Shea, Jasmine Collins, Rafal Jozefowicz, Sergey D Stavisky, Jonathan C Kao, Eric M Trautmann, Matthew T Kaufman, Stephen I Ryu, Leigh R Hochberg, et al. Inferring single-trial neural population dynamics using sequential auto-encoders. Nature methods, 15(10):805–815, 2018.
295
+ Matthew G Perich, Patrick N Lawlor, Konrad P Kording, and Lee E Miller. Extracellular neural recordings from macaque primary and dorsal premotor motor cortex during a sequential reaching task. https://crcns.org/, 2018.
296
+ Sam Roweis and Zoubin Ghahramani. A unifying review of linear gaussian models. Neural computation, 11(2):305-345, 1999.
297
+ Virginia Rutten, Alberto Bernacchia, Maneesh Sahani, and Guillaume Hennequin. Non-reversible gaussian processes for identifying latent dynamical structure in neural data. Advances in neural information processing systems, 33:9622-9632, 2020.
298
+ Peter H Schonemann. A generalized solution of the orthogonal procrustes problem. Psychometrika, 31(1):1-10, 1966.
299
+ João D Semedo, Amin Zandvakili, Christian K Machens, M Yu Byron, and Adam Kohn. Cortical areas interact through a communication subspace. Neuron, 102(1):249-259, 2019.
300
+ Ian H Stevenson and Konrad P Kording. How advances in neural recording affect data analysis. Nature neuroscience, 14(2):139-142, 2011.
301
+ JS Taube. Head direction cells recorded in the anterior thalamic nuclei of freely moving rats. Journal of Neuroscience, 15(1):70-86, 1995. doi: 10.1523/JNEUROSCI.15-01-00070.1995.
302
+ Michael E Tipping and Christopher M Bishop. Probabilistic principal component analysis. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 61(3):611-622, 1999.
303
+ Pauli Virtanen, Ralf Gommers, Travis E. Oliphant, Matt Haberland, Tyler Reddy, David Cournapeau, Evgeni Burovski, Pearu Peterson, Warren Weckesser, Jonathan Bright, Stéfan J. van der Walt, Matthew Brett, Joshua Wilson, K. Jarrod Millman, Nikolay Mayorov, Andrew R. J. Nelson, Eric Jones, Robert Kern, Eric Larson, C J Carey, İlhan Polat, Yu Feng, Eric W. Moore, Jake VanderPlas, Denis Laxalde, Josef Perktold, Robert Cimrman, Ian Henriksen, E. A. Quintero, Charles R. Harris, Anne M. Archibald, Antônio H. Ribeiro, Fabian Pedregosa, Paul van Mulbregt, and SciPy 1.0 Contributors. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nature Methods, 17:261-272, 2020. doi: 10.1038/s41592-019-0686-2.
304
+
305
+ Andrew Wilson and Ryan Adams. Gaussian process kernels for pattern discovery and extrapolation. In International conference on machine learning, pp. 1067-1075. PMLR, 2013.
306
+
307
+ Andrew Wilson and Hannes Nickisch. Kernel interpolation for scalable structured gaussian processes (kiss-gp). In International conference on machine learning, pp. 1775-1784. PMLR, 2015.
308
+
309
+ Byron M Yu, John P Cunningham, Gopal Santhanam, Stephen Ryu, Krishna V Shenoy, and Maneesh Sahani. Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity. Advances in neural information processing systems, 21, 2008.
310
+
311
+ Ding Zhou and Xue-Xin Wei. Learning identifiable and interpretable latent models of high-dimensional neural activity using pi-vae. Advances in Neural Information Processing Systems, 33: 7234-7247, 2020.
312
+
313
+ # A APPENDIX
314
+
315
+ # A.1 BACKGROUND ON PPCA, PCCA, AND THEIR RELATION TO TAME-GP
316
+
317
+ # A.1.1 CANONICAL CORRELATION ANALYSIS
318
+
319
+ Given a random vector $\pmb{x}$ , PCA aims to find a linear transformation such that the components of the transformed vector are uncorrelated. In other words, it tries to find a linear transformation that diagonalizes the co-variance matrix of the random vector. Similarly, CCA starts from two random vectors $\pmb{x}_1$ and $\pmb{x}_2$ of dimensions $m_1$ and $m_2$ , and tries to find two linear transformations $U \in \mathbb{R}^{m_1 \times m_1}$ and $V \in \mathbb{R}^{m_2 \times m_2}$ such that each component of $U \cdot x_1$ is correlated with a single component of $V \cdot x_2$ . In terms of the correlation matrix, this corresponds to,
320
+
321
+ $$
322
+ \operatorname{corr}\left(U \cdot \boldsymbol{x}_1, V \cdot \boldsymbol{x}_2\right)_{ij} = \left\{ \begin{array}{ll} \rho_i & \text{if } i = j \\ 0 & \text{otherwise} \end{array} \right. \tag{8}
323
+ $$
324
+
325
+ where $\rho_{i}$ are called canonical correlations.
326
+
327
+ Letting the joint empirical co-variance be $\hat{\Sigma} = \begin{bmatrix} \hat{\Sigma}_{11} & \hat{\Sigma}_{12}\\ \hat{\Sigma}_{21} & \hat{\Sigma}_{22} \end{bmatrix}$
328
+
329
+ it turns out that the CCA projections are the singular vectors of the correlation matrix re-scaled by the inverse square-root of the individual co-variances. Namely, if $\tilde{u}_i,\tilde{v}_i$ are the i-th singular vectors of the correlation matrix corr $(\pmb {x}_1,\pmb {x}_2) = \hat{\Sigma}_{11}^{-1 / 2}\hat{\Sigma}_{12}\hat{\Sigma}_{22}^{-1 / 2}$ , then the canonical vectors are $(U_{i},V_{i}) = (\hat{\Sigma}_{11}^{-1 / 2}\tilde{u}_{i},\hat{\Sigma}_{22}^{-1 / 2}\tilde{v}_{i})$ . The two projection matrices $U$ and $V$ are obtained by stacking the canonical vectors; it is immediate to verify that $I_{m_1} = U^\top \hat{\Sigma}_{11}U$ , $I_{m_2} = V^\top \hat{\Sigma}_{22}V$ and $P = U^{\top}\hat{\Sigma}_{12}V$ , where $P$ is an $m_{1}\times m_{2}$ diagonal matrix whose diagonal entries are the canonical correlations.
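+
+ A compact sketch of this classical construction (whiten each block covariance, SVD the whitened cross-covariance, un-whiten the singular vectors) could read as follows; it assumes well-conditioned sample covariances.
+
+ ```python
+ # Classical CCA via the SVD of the whitened cross-covariance.
+ import numpy as np
+ from scipy.linalg import sqrtm
+
+ def cca_directions(x1, x2):
+     """x1: (n, m1), x2: (n, m2); rows are samples. Returns U, V, rho."""
+     x1, x2 = x1 - x1.mean(0), x2 - x2.mean(0)
+     n = x1.shape[0]
+     S11, S22, S12 = x1.T @ x1 / n, x2.T @ x2 / n, x1.T @ x2 / n
+     W1 = np.real(np.linalg.inv(sqrtm(S11)))   # Sigma_11^{-1/2}
+     W2 = np.real(np.linalg.inv(sqrtm(S22)))   # Sigma_22^{-1/2}
+     u, rho, vt = np.linalg.svd(W1 @ S12 @ W2)
+     return W1 @ u, W2 @ vt.T, rho             # canonical directions, correlations
+ ```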
330
+
331
+ Again, making the parallel with PCA, we know that the first PCA vector is the eigenvector of the empirical co-variance corresponding to the largest eigenvalue and satisfies $\pmb{w}_1 = \operatorname{argmax}_{\| \pmb{w} \| = 1} \pmb{w}^\top \operatorname{cov}(\pmb{x}) \pmb{w}$ .
332
+
333
+ Similarly, it can be shown that the canonical vector corresponding to the largest singular value of the correlation matrix satisfies,
334
+
335
+ $$
336
+ \left(U _ {1}, V _ {1}\right) = \underset {\| \boldsymbol {u} \| = 1, \| \boldsymbol {v} \| = 1} {\operatorname {a r g m a x}} \operatorname {c o r r} \left(\boldsymbol {u} ^ {\top} \cdot \boldsymbol {x} _ {1}, \boldsymbol {v} ^ {\top} \cdot \boldsymbol {x} _ {2}\right). \tag {9}
337
+ $$
338
+
339
+ Finally, the $n$-th canonical vector satisfies,
340
+
341
+ $$
342
+ \left(U _ {n}, V _ {n}\right) = \underset {\boldsymbol {u} \in \mathcal {U} ^ {\perp}, \boldsymbol {v} \in \mathcal {V} ^ {\perp}} {\operatorname {a r g m a x}} \operatorname {c o r r} \left(\boldsymbol {u} ^ {\top} \cdot \boldsymbol {x} _ {1}, \boldsymbol {v} ^ {\top} \cdot \boldsymbol {x} _ {2}\right). \tag {10}
343
+ $$
344
+
345
+ with $\mathcal{U}^{\perp} = \{\boldsymbol{u} : \| \boldsymbol{u} \| = 1, \boldsymbol{u} \in \langle U_1, \dots, U_{n-1} \rangle^{\perp}\}$ and $\mathcal{V}^{\perp} = \{\boldsymbol{v} : \| \boldsymbol{v} \| = 1, \boldsymbol{v} \in \langle V_1, \dots, V_{n-1} \rangle^{\perp}\}$ .
346
+
347
+ # A.1.2 THE PROBABILISTIC INTERPRETATION OF PCA AND CCA
348
+
349
+ As first shown by Tipping & Bishop (1999), PCA can be expressed in terms of the maximum likelihood solution of the following probabilistic latent variable model,
350
+
351
+ $$
352
+ p (\boldsymbol {z}) \sim \mathcal {N} (0, I) \tag {11}
353
+ $$
354
+
355
+ $$
356
+ p(\boldsymbol{x} | \boldsymbol{z}) \sim \mathcal{N}(W \boldsymbol{z} + \mu, \sigma^{2} I), \tag{12}
357
+ $$
358
+
359
+ where $I$ denotes the identity matrix (of dimension $D$ in the prior and $N$ in the likelihood), $W$ is a $N \times D$ projection matrix, $\mu \in \mathbb{R}^N$ is an intercept term, and $\sigma^2$ a positive constant.
360
+
361
+ Similarly, Bach & Jordan (2005) showed that the canonical directions emerge from the maximum likelihood estimates of a simple probabilistic model,
362
+
363
+ $$
364
+ \boldsymbol {z} \sim \mathcal {N} \left(0, I _ {D}\right) \quad D = \min \left(m _ {1}, m _ {2}\right) \tag {13}
365
+ $$
366
+
367
+ $$
368
+ \boldsymbol {x} _ {1} | \boldsymbol {z} \sim \mathcal {N} \left(W _ {1} \boldsymbol {z} + \mu_ {1}, \Psi_ {1}\right) \quad \Psi_ {1} \succeq 0 \tag {14}
369
+ $$
370
+
371
+ $$
372
+ \boldsymbol {x} _ {2} | \boldsymbol {z} \sim \mathcal {N} \left(W _ {2} \boldsymbol {z} + \mu_ {2}, \Psi_ {2}\right) \quad \Psi_ {2} \succeq 0. \tag {15}
373
+ $$
374
+
375
+ where we use a notation similar to that of equations (11, 12) for the projection weights, the intercepts and the identity matrix, while $\Psi_{1}$ and $\Psi_{2}$ are generic positive semi-definite matrices (of dimensions $m_1 \times m_1$ and $m_2 \times m_2$ , respectively).
376
+
377
+ We will refer to these models as the probabilistic PCA and probabilistic CCA, or pPCA and pCCA.
378
+
379
+ To better highlight the link between CCA and pCCA, we report the ML estimates of the pCCA model parameters,
380
+
381
+ $$
382
+ \hat {W} _ {1} = \hat {\Sigma} _ {1 1} U M _ {1} \tag {16}
383
+ $$
384
+
385
+ $$
386
+ \hat{W}_2 = \hat{\Sigma}_{22} V M_2 \tag{17}
387
+ $$
388
+
389
+ $$
390
+ \hat {\Psi} _ {1} = \hat {\Sigma} _ {1 1} - \hat {W} _ {1} \hat {W} _ {1} ^ {\top} \tag {18}
391
+ $$
392
+
393
+ $$
394
+ \hat {\Psi} _ {2} = \hat {\Sigma} _ {2 2} - \hat {W} _ {2} \hat {W} _ {2} ^ {\top} \tag {19}
395
+ $$
396
+
397
+ $$
398
+ \hat {\mu} _ {1} = \frac {1}{N} \sum_ {j} x _ {1 j} \tag {20}
399
+ $$
400
+
401
+ $$
402
+ \hat {\mu} _ {2} = \frac {1}{N} \sum_ {j} x _ {2 j}, \tag {21}
403
+ $$
404
+
405
+ where the $M_{i}$ are arbitrary $D\times D$ matrices such that $M_1M_2^\top = P$ , the diagonal matrix of the canonical correlations, and $U$ and $V$ are the canonical directions.
406
+
407
+ The posterior means and co-variances are given by,
408
+
409
+ $$
410
+ \mathbb {E} [ \boldsymbol {z} | \boldsymbol {x} _ {1} ] = M _ {1} ^ {\top} U ^ {\top} \left(\boldsymbol {x} _ {1} - \hat {\mu} _ {1}\right) \tag {22}
411
+ $$
412
+
413
+ $$
414
+ \mathbb {E} [ \boldsymbol {z} | \boldsymbol {x} _ {2} ] = M _ {2} ^ {\top} V ^ {\top} \left(\boldsymbol {x} _ {2} - \hat {\mu} _ {2}\right) \tag {23}
415
+ $$
416
+
417
+ $$
418
+ \operatorname {c o v} \left(\boldsymbol {z} \mid \boldsymbol {x} _ {1}\right) = I - M _ {1} M _ {1} ^ {\top} \tag {24}
419
+ $$
420
+
421
+ $$
422
+ \operatorname {c o v} \left(\boldsymbol {z} \mid \boldsymbol {x} _ {2}\right) = I - M _ {2} M _ {2} ^ {\top} \tag {25}
423
+ $$
424
+
425
+ $$
426
+ \mathbb{E}[\boldsymbol{z} | \boldsymbol{x}_1, \boldsymbol{x}_2] = \left[ \begin{array}{l} M_1 \\ M_2 \end{array} \right]^{\top} \left[ \begin{array}{cc} (I - P^2)^{-1} & (I - P^2)^{-1} P \\ (I - P^2)^{-1} P & (I - P^2)^{-1} \end{array} \right] \left[ \begin{array}{l} U^{\top}(\boldsymbol{x}_1 - \hat{\mu}_1) \\ V^{\top}(\boldsymbol{x}_2 - \hat{\mu}_2) \end{array} \right] \tag{26}
427
+ $$
428
+
429
+ $$
430
+ \operatorname{cov}\left(\boldsymbol{z} \mid \boldsymbol{x}_1, \boldsymbol{x}_2\right) = I - \left[ \begin{array}{l} M_1 \\ M_2 \end{array} \right]^{\top} \left[ \begin{array}{cc} (I - P^2)^{-1} & (I - P^2)^{-1} P \\ (I - P^2)^{-1} P & (I - P^2)^{-1} \end{array} \right] \left[ \begin{array}{l} M_1 \\ M_2 \end{array} \right]. \tag{27}
431
+ $$
432
+
433
+ It is important to notice that, independently of the $M_{1}$ and $M_{2}$ matrices, the observations get projected onto the $D$ -dimensional subspace of the canonical directions. See Bishop & Nasrabadi (2006) for a similar argument bridging PCA and pPCA.
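+
+ A sketch that recovers the pCCA posterior numerically, by plain Gaussian conditioning on the joint model of Eqs. (13-15), can sidestep the closed forms in Eqs. (22-27) and is useful as a cross-check:
+
+ ```python
+ # pCCA posterior E[z | x1, x2] and cov(z | x1, x2) by Gaussian conditioning.
+ import numpy as np
+
+ def pcca_posterior(W1, W2, Psi1, Psi2, mu1, mu2, x1, x2):
+     D, n1, n2 = W1.shape[1], W1.shape[0], W2.shape[0]
+     W = np.vstack([W1, W2])                       # joint loading matrix
+     Psi = np.block([[Psi1, np.zeros((n1, n2))],
+                     [np.zeros((n2, n1)), Psi2]])
+     Sigma_xx = W @ W.T + Psi                      # marginal covariance of (x1, x2)
+     K = W.T @ np.linalg.inv(Sigma_xx)             # gain, since cov(z, x) = W^T
+     x_cent = np.concatenate([x1 - mu1, x2 - mu2])
+     return K @ x_cent, np.eye(D) - K @ W          # posterior mean and covariance
+ ```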
434
+
435
+ # A.1.3 TAME-GP COMBINES AND EXTENDS THE PPCA AND PCCA GENERATIVE MODELS
436
+
437
+ The probabilistic interpretation of PCA and CCA (Eqs. 11-15) allows (1) extending the model to non-Gaussian observation noise, (2) replacing the normal prior over the latents with a smoothing GP-prior, and (3) combining the two graphical models in a more general framework.
438
+
439
+ In particular, TAME-GP assumes a shared latent factor $z^{(0)}$ with a GP prior that captures fine time scale correlations between some continuous task variables of interest (modelled as conditionally Gaussian) and the spike counts from multiple brain regions (modelled as conditionally Poisson). This approach extends the ideas of pCCA to the analysis of spike trains driven by smooth temporal dynamics. Further, we extended our graphical model by including additional area-specific latent factors $z^{(j)}$ (GP-distributed). The projections associated with those factors aim specifically to capture the residual within-area co-fluctuations, in close resemblance to the role of the pPCA projection weights.
440
+
441
+ The general formulation of the TAME-GP generative model is given by Eqs. 1-3 in the main text.
442
+
443
+ # A.2 INVERTING THE HESSIAN OF THE JOINT LOG-LIKELIHOOD
444
+
445
+ The dimensionality of the individual latents and the trial duration pose computational challenges for TAME-GP approximate inference. For each trial, evaluating the posterior covariance requires inverting the Hessian of the joint log-likelihood, of dimensionality $D \times D$ , where $D = T\sum_{j}d_{j}$ , with $d_{j}$ the dimension of $z^{(j)}$ and $T$ the number of time points of the trial (for simplicity, we assume all trials are the same length here, but the implementation allows for variability in trial duration). Hence, a naive implementation of the posterior estimation would require $O(D^3)$ operations (the cost of inverting a $D$ -dimensional matrix). Nonetheless, the specific conditional independence assumptions of our model allow us to speed up this computation by using the block matrix inversion theorem. In particular, if we define
446
+
447
+ $$
448
+ \nabla_ {\boldsymbol {z} ^ {(h)}} \nabla_ {\boldsymbol {z} ^ {(k)}} \log p (\boldsymbol {z}, \boldsymbol {x}, \boldsymbol {y}) \equiv H _ {h k},
449
+ $$
450
+
451
+ $\pmb{H}$ has the following structure,
452
+
453
+ $$
454
+ H = \left[ \begin{array}{c c c c c} H _ {0 0} & H _ {0 1} & H _ {0 2} & \dots & H _ {0 n} \\ H _ {0 1} ^ {\top} & H _ {1 1} & \mathbf {0} & \dots & \mathbf {0} \\ H _ {0 2} ^ {\top} & \mathbf {0} & H _ {2 2} & \dots & \mathbf {0} \\ & & \ddots & & \\ H _ {0 n} ^ {\top} & \mathbf {0} & \mathbf {0} & \dots & H _ {n n} \end{array} \right],
455
+ $$
456
+
457
+ therefore, it can be inverted according to,
458
+
459
+ $$
460
+ \left[ \begin{array}{cc} A & C^{\top} \\ C & B \end{array} \right]^{-1} = \left[ \begin{array}{cc} (A - C^{\top} B^{-1} C)^{-1} & -(A - C^{\top} B^{-1} C)^{-1} C^{\top} B^{-1} \\ -B^{-1} C (A - C^{\top} B^{-1} C)^{-1} & B^{-1} + B^{-1} C (A - C^{\top} B^{-1} C)^{-1} C^{\top} B^{-1} \end{array} \right],
461
+ $$
462
+
463
+ by setting $A = H_{00}$ and $B = \left[ \begin{array}{cccc}H_{11} & \mathbf{0} & \dots & \mathbf{0}\\ \mathbf{0} & H_{22} & \dots & \mathbf{0}\\ & & \ddots & \\ \mathbf{0} & \mathbf{0} & \dots & H_{nn} \end{array} \right]$ , and $C = \left[ \begin{array}{c}H_{01}^{\top}\\ \vdots \\ H_{0n}^{\top} \end{array} \right]$ ; computing $B^{-1}$
464
+
465
+ requires only inverting the block-diagonal elements, while $(A - C^{\top}B^{-1}C)$ has the same size as $H_{00}$ , achieving an inversion of $\pmb{H}$ in $O(T^{3}\sum_{j}d_{j}^{3})$ operations.
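+
+ A sketch of this structured inversion is below; block sizes are generic, and for TAME-GP each $H_{jj}$ has size $T d_j \times T d_j$.
+
+ ```python
+ # Invert an arrowhead block matrix H = [[H00, C^T], [C, B]], B block-diagonal.
+ import numpy as np
+ from scipy.linalg import block_diag
+
+ def invert_arrowhead(H00, H0js, Hjjs):
+     """H0js: list of H_{0j} blocks; Hjjs: list of diagonal blocks H_{jj}."""
+     Binv = block_diag(*[np.linalg.inv(Hjj) for Hjj in Hjjs])
+     C = np.vstack([H0j.T for H0j in H0js])       # stacked H_{0j}^T blocks
+     Sinv = np.linalg.inv(H00 - C.T @ Binv @ C)   # Schur complement inverse
+     top_right = -Sinv @ C.T @ Binv
+     bottom_right = Binv + Binv @ C @ Sinv @ C.T @ Binv
+     return np.block([[Sinv, top_right],
+                      [top_right.T, bottom_right]])
+ ```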
466
+
467
+ # A.3 PARAMETER UPDATE DETAILS
468
+
469
+ Introducing the notation $\pmb{\mu}_t^{(k)} = \mathbb{E}_q[\pmb{z}_t^k]$ and $\pmb{\Sigma}_t^{(k,h)} = \mathbb{E}_q[\pmb{z}_t^{(k)}\pmb{z}_t^{(h)\top}] - \pmb{\mu}_t^{(k)}\pmb{\mu}_t^{(h)\top}$ , we have
470
+
471
+ $$
472
+ \bar {\mathbf {C}} = \left[ \sum_ {l, t} \mathbf {y} _ {t} \boldsymbol {\mu} _ {t} ^ {(0) \top} - \frac {1}{T M} \sum_ {l, t} \mathbf {y} _ {t} \sum_ {l, t} \boldsymbol {\mu} _ {t} ^ {(0) \top} \right] \left[ \sum_ {l, t} \boldsymbol {\Sigma} _ {t} ^ {(0, 0)} + \sum_ {l, t} \boldsymbol {\mu} _ {t} ^ {(0)} \boldsymbol {\mu} _ {t} ^ {(0) \top} - \frac {1}{T M} \sum_ {l, t} \boldsymbol {\mu} _ {t} ^ {(0)} \sum_ {l, t} \boldsymbol {\mu} _ {t} ^ {(0) \top} \right] ^ {- 1}
473
+ $$
474
+
475
+ $$
476
+ \bar {\mathbf {d}} = \frac {1}{T M} \left(\sum_ {l, t} \mathbf {y} _ {t} - \bar {\mathbf {C}} \sum_ {l, t} \boldsymbol {\mu} _ {t} ^ {(0)}\right)
477
+ $$
478
+
479
+ $$
480
+ \begin{array}{l} \bar{\Psi} = \frac{1}{TM} \left[ \sum_{l,t} \mathbf{y}_t \mathbf{y}_t^{\top} - \left(\sum_{l,t} \mathbf{y}_t \boldsymbol{\mu}_t^{(0)\top} \bar{\mathbf{C}}^{\top} + \bar{\mathbf{C}} \sum_{l,t} \boldsymbol{\mu}_t^{(0)} \mathbf{y}_t^{\top}\right) - \left(\sum_{l,t} \mathbf{y}_t \bar{\mathbf{d}}^{\top} + \bar{\mathbf{d}} \sum_{l,t} \mathbf{y}_t^{\top}\right) \right. \\ \left. \quad + \bar{\mathbf{C}} \left(\sum_{l,t} \left(\boldsymbol{\Sigma}_t^{(0,0)} + \boldsymbol{\mu}_t^{(0)} \boldsymbol{\mu}_t^{(0)\top}\right)\right) \bar{\mathbf{C}}^{\top} + \left(\bar{\mathbf{C}} \sum_{l,t} \boldsymbol{\mu}_t^{(0)} \bar{\mathbf{d}}^{\top} + \bar{\mathbf{d}} \sum_{l,t} \boldsymbol{\mu}_t^{(0)\top} \bar{\mathbf{C}}^{\top}\right) + TM\, \bar{\mathbf{d}} \bar{\mathbf{d}}^{\top} \right] \end{array}
481
+ $$
482
+
483
+ where $l = 1,\dots,M$ and $t = 1,\dots,T$ index trials and time points within trials.
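+
+ A numpy sketch of the $\bar{\mathbf{C}}$ and $\bar{\mathbf{d}}$ updates above, with the posterior moments stacked over trials and time, might look as follows:
+
+ ```python
+ # Closed-form M-step updates for C and d from stacked posterior moments.
+ import numpy as np
+
+ def update_C_d(y, mu0, S00):
+     """y: (T*M, dy) task variables; mu0: (T*M, d0) posterior means;
+     S00: (T*M, d0, d0) posterior covariances."""
+     n = y.shape[0]                                # n = T * M
+     Sy_mu = y.T @ mu0 - np.outer(y.sum(0), mu0.sum(0)) / n
+     Smu_mu = (S00.sum(0) + mu0.T @ mu0
+               - np.outer(mu0.sum(0), mu0.sum(0)) / n)
+     C = Sy_mu @ np.linalg.inv(Smu_mu)
+     d = (y.sum(0) - C @ mu0.sum(0)) / n
+     return C, d
+ ```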
484
+
485
+ # A.4 LEARNING THE POISSON OBSERVATION PARAMETERS
486
+
487
+ In order to learn the Poisson observation parameters, we numerically maximize $\mathbb{E}_q[\log p(\mathbf{x},\mathbf{y},\mathbf{z}|\boldsymbol {\theta})]$ as a function of $W^{(0,j)}$ , $W^{(j,j)}$ , and $\mathbf{h}^{(j)}$ <sup>7</sup>. Our implementation follows a Newton scheme, which requires both the gradient and the Hessian of the optimization objective.
488
+
489
+ In order to simplify notation, we fix a unit $i$ from population $j$ and we set
490
+
491
+ $$
492
+ \boldsymbol {\mu} _ {t} = \left[ \begin{array}{c} \boldsymbol {\mu} _ {t} ^ {(0)} \\ \boldsymbol {\mu} _ {t} ^ {(j)} \end{array} \right]
493
+ $$
494
+
495
+ $$
496
+ \Sigma_ {t} = \left[ \begin{array}{c c} \Sigma_ {t} ^ {(0, 0)} & \Sigma_ {t} ^ {(0, j)} \\ \Sigma_ {t} ^ {(0, j) \top} & \Sigma_ {t} ^ {(j, j)} \end{array} \right]
497
+ $$
498
+
499
+ $$
500
+ W = \left[ \begin{array}{c} W _ {i} ^ {(0, j) \top} \\ W _ {i} ^ {(j, j) \top} \end{array} \right]
501
+ $$
502
+
503
+ $$
504
+ x _ {t} = x _ {i t} ^ {(j)}
505
+ $$
506
+
507
+ $$
508
+ h = h _ {i} ^ {(j)},
509
+ $$
510
+
511
+ where $W \in \mathbb{R}^{d_0 + d_j}$ and $h \in \mathbb{R}$ . The corresponding gradients and second derivatives are,
512
+
513
+ $$
514
+ \frac {\partial \mathbb {E} _ {q} [ \log (p (\mathbf {x} , \mathbf {y} , \mathbf {z} | \boldsymbol {\theta}) ]}{\partial W} = \sum_ {l, t} x _ {t} \boldsymbol {\mu} _ {t} - \mathrm {e} ^ {h + W ^ {\top} \boldsymbol {\mu} _ {t} + \frac {1}{2} W ^ {\top} \Sigma_ {t} W} (\boldsymbol {\mu} _ {t} + \Sigma_ {t} W) \tag {28}
515
+ $$
516
+
517
+ $$
518
+ \frac {\partial \mathbb {E} _ {q} [ \log (p (\mathbf {x} , \mathbf {y} , \mathbf {z} | \boldsymbol {\theta}) ]}{\partial h} = \sum_ {l, t} x _ {t} - \mathrm {e} ^ {h + W ^ {\top} \boldsymbol {\mu} _ {t} + \frac {1}{2} W ^ {\top} \Sigma_ {t} W} \tag {29}
519
+ $$
520
+
521
+ $$
522
+ \frac {\partial^ {2} \mathbb {E} _ {q} [ \log (p (\mathbf {x} , \mathbf {y} , \mathbf {z} | \boldsymbol {\theta}) ]}{\partial W ^ {2}} = - \mathrm {e} ^ {h + W ^ {\top} \boldsymbol {\mu} _ {t} + \frac {1}{2} W ^ {\top} \Sigma_ {t} W} \left[ \left(\boldsymbol {\mu} _ {t} + \Sigma_ {t} W\right) \left(\boldsymbol {\mu} _ {t} + \Sigma_ {t} W\right) ^ {\top} + \Sigma_ {t} \right] \tag {30}
523
+ $$
524
+
525
+ $$
526
+ \frac {\partial^ {2} \mathbb {E} _ {q} [ \log (p (\mathbf {x} , \mathbf {y} , \mathbf {z} | \boldsymbol {\theta}) ]}{\partial h \partial W} = - \mathrm {e} ^ {h + W ^ {\top} \boldsymbol {\mu} _ {t} + \frac {1}{2} W ^ {\top} \Sigma_ {t} W} (\boldsymbol {\mu} _ {t} + \Sigma_ {t} W) \tag {31}
527
+ $$
528
+
529
+ $$
530
+ \frac {\partial^ {2} \mathbb {E} _ {q} [ \log (p (\mathbf {x} , \mathbf {y} , \mathbf {z} | \boldsymbol {\theta}) ]}{\partial h ^ {2}} = - \mathrm {e} ^ {h + W ^ {\top} \boldsymbol {\mu} _ {t} + \frac {1}{2} W ^ {\top} \Sigma_ {t} W} \tag {32}
531
+ $$
532
+
533
+ where $l = 1, \ldots, M$ and $t = 1, \ldots, T$ are the trial and time indices, respectively.
534
+
535
+ <sup>7</sup> $\boldsymbol{\theta} = \{\mathbf{W}^{(0 / j, j)}, \mathbf{h}^{(j)}, \mathbf{C}, \mathbf{d}, \boldsymbol{\Psi}, \tau^{(j)}\}$
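+
+ A sketch of one Newton step for a single neuron's parameters $(W, h)$, assembling the gradient and Hessian from Eqs. (28-32), could read:
+
+ ```python
+ # One Newton step for the Poisson observation parameters of one neuron.
+ import numpy as np
+
+ def newton_step_poisson(x, mus, Sigmas, W, h):
+     """x: (T,) counts; mus: (T, d) posterior means; Sigmas: (T, d, d)."""
+     d = W.shape[0]
+     grad_W, grad_h = np.zeros(d), 0.0
+     H_WW, H_Wh, H_hh = np.zeros((d, d)), np.zeros(d), 0.0
+     for xt, mu, Sig in zip(x, mus, Sigmas):
+         lam = np.exp(h + W @ mu + 0.5 * W @ Sig @ W)   # E_q[exp(h + W^T z)]
+         v = mu + Sig @ W
+         grad_W += xt * mu - lam * v
+         grad_h += xt - lam
+         H_WW -= lam * (np.outer(v, v) + Sig)
+         H_Wh -= lam * v
+         H_hh -= lam
+     g = np.concatenate([grad_W, [grad_h]])
+     H = np.block([[H_WW, H_Wh[:, None]],
+                   [H_Wh[None, :], np.array([[H_hh]])]])
+     step = np.linalg.solve(H, g)                       # Newton ascent step
+     return W - step[:-1], h - step[-1]
+ ```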
538
+
539
+ # A.5 LEARNING THE GP TIME CONSTANTS
540
+
541
+ The GP hyperparameters (time constants) are learned by gradient-based numerical optimization of the joint log-likelihood. Following the notation of the main text, we set $\lambda_i^{(j)} = -\log (2\tau_i^{(j)})$ , and we define a kernel $\mathbf{K}_i^{(j)}:\mathbb{R}\longrightarrow \mathbb{R}^{T\times T}$ such that $\left[\mathbf{K}_i^{(j)}(\lambda)\right]_{ts} = \exp \left(-e^{\lambda}(t - s)^2\right)$ .
542
+
543
+ The objective function takes the form,
544
+
545
+ $$
546
+ \mathbb{E}_q\left[\log p(\mathbf{x}, \mathbf{y}, \mathbf{z} | \boldsymbol{\theta})\right] = \sum_{l, j, i} -\operatorname{trace}\left(\boldsymbol{K}_i^{(j)-1}\left(\lambda_i^{(j)}\right) \mathbb{E}_q\left[\boldsymbol{z}_i^{(j)} \boldsymbol{z}_i^{(j)\top}\right]\right) - \log\left|\boldsymbol{K}_i^{(j)}\left(\lambda_i^{(j)}\right)\right| + \text{const},
547
+ $$
548
+
549
+ where $j = 0, \dots, n$ is the latent factor, $l = 1, \dots, M$ is the trial number and $i = 1, \dots, d_j$ is the component of $z^{(j)}$ . Using the chain rule we obtain,
550
+
551
+ $$
552
+ \frac {\partial \mathbb {E} _ {q} [ \log (p (\mathbf {x} , \mathbf {y} , \mathbf {z} | \boldsymbol {\theta}) ]}{\partial \lambda_ {i} ^ {(j)}} = \operatorname {t r a c e} \left(\frac {\partial \mathbb {E} _ {q} [ \log (p (\mathbf {x} , \mathbf {y} , \mathbf {z} | \boldsymbol {\theta}) ]}{\partial \boldsymbol {K} _ {i} ^ {(j)}} ^ {\top} \cdot \frac {\partial \boldsymbol {K} _ {i} ^ {(j)}}{\partial \lambda_ {i} ^ {(j)}}\right),
553
+ $$
554
+
555
+ with
556
+
557
+ $$
558
+ \begin{array}{l} \frac{\partial \mathbb{E}_q[\log p(\mathbf{x}, \mathbf{y}, \mathbf{z} | \boldsymbol{\theta})]}{\partial \boldsymbol{K}_i^{(j)}} = \frac{1}{2} \sum_l \left(- K_i^{(j)-1} + K_i^{(j)-1} \mathbb{E}_q[\boldsymbol{z}_i^{(j)} \boldsymbol{z}_i^{(j)\top}] K_i^{(j)-1}\right) \\ \frac{\partial \left[ K_i^{(j)} \right]_{ts}}{\partial \lambda} = - \mathrm{e}^{\lambda} (t - s)^2 \exp\left(- \mathrm{e}^{\lambda} (t - s)^2\right). \end{array}
559
+ $$
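+
+ The kernel derivative and the trace chain rule above translate directly into code; a single-trial, single-dimension sketch:
+
+ ```python
+ # Gradient of the expected joint log-likelihood w.r.t. lambda (one latent).
+ import numpy as np
+
+ def kernel_and_grad(lam, T):
+     t = np.arange(T)
+     D2 = (t[:, None] - t[None, :]) ** 2            # squared time differences
+     K = np.exp(-np.exp(lam) * D2)
+     return K, -np.exp(lam) * D2 * K                # K and dK/dlambda
+
+ def loglik_grad_lam(lam, Ezz, T, jitter=1e-6):
+     K, dK = kernel_and_grad(lam, T)
+     Kinv = np.linalg.inv(K + jitter * np.eye(T))
+     dL_dK = 0.5 * (-Kinv + Kinv @ Ezz @ Kinv)      # dE_q[log p]/dK (one trial)
+     return np.trace(dL_dK.T @ dK)
+ ```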
560
+
561
+ # A.6 PARAMETER INITIALIZATION
562
+
563
+ Factorized TAME. Before running EM on the full TAME-GP model, we obtain initial conditions for the model parameters (all except the GP kernel hyperparameters) by running five iterations of EM for the temporally factorized version of the model. In particular, we replace the GP-prior over the latents with a product of standard normal distributions, i.e. $p(\boldsymbol{z}_i^{(j)}) = \prod_t p(z_{it}^{(j)})$ , with $p(z_{it}^{(j)}) \sim \mathcal{N}(0,1)$ .
564
+
565
+ Under this prior assumption, the joint likelihood as a whole factorizes over the temporal axis (i.e. the observations are temporally independent given the latents). As a consequence, the Hessian matrix of the joint pdf is sparse, and can be stored and inverted efficiently, allowing for the implementation of a full Newton scheme to numerically optimize for the MAP estimate of the posterior over the latents $z$ .
566
+
567
+ The EM-based optimization of the factorized TAME also needs an initial choice of parameters. We found empirically that a CCA-based heuristic works well for this purpose (a code sketch follows the list below). Specifically, we set:
568
+
569
+ - $W^{(0,j)}$ to the first $d_0$ canonical directions $V$ between the square-rooted, mean-centered spike counts of population $j$ , $s^{(j)} = \sqrt{\pmb{x}^{(j)}} - \mu_j$ and the task variables $\pmb{y}$ ( $\mu_j$ is the empirical mean of the square-rooted spikes).
570
+ - $W^{(j,j)}$ as the first $d_j$ principal directions of the orthogonal complement of the counts w.r.t. the canonical directions, $\pmb{s}_{\mathrm{ort},t}^{(j)} = \pmb{s}_t^{(j)} - V^\top V\pmb{s}_t^{(j)}$ . This will initially enforce orthogonality between the task relevant and private latent subspaces.
571
+ - $\pmb{h}^{(j)}$ was set to the log of the empirical mean of the counts.
572
+ - $C$ was set to the first $d_0$ canonical directions $U$ between the task variables $\mathbf{y}$ and the square-rooted counts from all the neural populations stacked together, $\mathbf{s} = [\mathbf{s}^{(1)};\dots ;\mathbf{s}^{(m)}]$ .
573
+ - $\pmb{d}$ was set to the empirical mean of $s$ , and $\Psi$ to the empirical covariance.
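+
+ The sketch below spells out this initialization for one neural population, reusing `cca_directions` from A.1.1; note that the canonical directions are not orthonormal in general, so the projection step is the heuristic described above rather than an exact orthogonal complement.
+
+ ```python
+ # CCA-based parameter initialization for population j (heuristic sketch).
+ import numpy as np
+
+ def init_population(x, y, d0, dj):
+     """x: (T, N) spike counts of population j; y: (T, dy) task variables."""
+     s = np.sqrt(x)
+     s_c = s - s.mean(0)                            # square-rooted, centered counts
+     dirs_s, dirs_y, rho = cca_directions(s_c, y)   # from the A.1.1 sketch
+     V = dirs_s[:, :d0]                             # count-side canonical directions
+     W0j = V                                        # task-relevant loading init
+     s_ort = s_c - s_c @ V @ V.T                    # remove the canonical subspace
+     _, _, vt = np.linalg.svd(s_ort, full_matrices=False)
+     Wjj = vt[:dj].T                                # private loading init
+     hj = np.log(x.mean(0) + 1e-6)                  # log mean counts as baseline
+     return W0j, Wjj, hj
+ ```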
574
+
575
+ GP time constants. The initial GP time constants were drawn uniformly at random, $\tau_{i}^{(j)}\sim \mathrm{U}[0,0.5]$ .
576
+
577
+ # A.7 COMPARISON OF TAME-GP AND SNP-GPFA
578
+
579
+ We compared our framework to that of SNP-GPFA (Keeley et al., 2020), which identifies shared fluctuations between two neural populations, under the assumption of trial repeats with a common
580
+
581
+ stimulus-driven mean (corresponding to a dimensionality reduced peristimulus time histogram, or PSTH).
582
+
583
+ Briefly, the multi-area SNP-GPFA assumes that the spike counts of two areas, area A and area B, are generated according to
584
+
585
+ $$
586
+ \left[ \begin{array}{l} \mathbf{Y}_j^A \\ \mathbf{Y}_j^B \end{array} \right] = \operatorname{Poisson}\left(f\left(\boldsymbol{W}_s \boldsymbol{X}^s + \left[ \begin{array}{cc} \boldsymbol{W}_{AA} & \boldsymbol{0} \\ \boldsymbol{0} & \boldsymbol{W}_{BB} \end{array} \right] \left[ \begin{array}{l} \boldsymbol{X}_j^{A,n} \\ \boldsymbol{X}_j^{B,n} \end{array} \right]\right)\right), \tag{33}
587
+ $$
588
+
589
+ with $\mathbf{Y}_j^{A / B}$ the spike counts of populations $A$ and $B$ for trial $j$ , and $f$ the soft-max non-linearity; $\mathbf{X}_j^{A / B,n}$ are drawn from a GP with factorized RBF covariance, which captures within-area co-fluctuations for trial $j$ ; $\mathbf{X}^s$ corresponds to draws from another GP, which is shared across trials and populations, thus capturing the shared across-area co-fluctuations.
590
+
591
+ We generated spike counts from the graphical model in figure S4A assuming a fixed trial duration (necessary for SNP-GPFA), under two conditions: 1) fixing the shared dynamics across trials (as in SNP-GPFA, figure S4B, top), or 2) varying the shared dynamics across trials (figure S4B, bottom).
592
+
593
+ Specifically, for the first condition the counts followed Eq. (33), but with the non-linearity replaced by an exponential. For the second case, the counts follow Poisson statistics of the form
594
+
595
+ $$
596
+ \left[ \begin{array}{l} \mathbf{Y}_j^A \\ \mathbf{Y}_j^B \end{array} \right] = \operatorname{Poisson}\left(\exp\left(\boldsymbol{W}_s \boldsymbol{X}_j^s + \left[ \begin{array}{cc} \boldsymbol{W}_{AA} & \boldsymbol{0} \\ \boldsymbol{0} & \boldsymbol{W}_{BB} \end{array} \right] \left[ \begin{array}{l} \boldsymbol{X}_j^{A,n} \\ \boldsymbol{X}_j^{B,n} \end{array} \right]\right)\right), \tag{34}
597
+ $$
598
+
599
+ where we added a trial dependency to the shared Gaussian process factor.
600
+
601
+ We set the dimensionality of the shared factor to 2, and that of each private factor to 3. We simulated spike counts from two populations of 30 neurons for 50 trials, each having 100 time points with a 0.05 second resolution. The average firing rate of both populations was set to $10\,\mathrm{Hz}$ . We fit the simulated spike counts with TAME-GP and SNP-GPFA for both conditions. The results show that TAME-GP captures the between-area co-fluctuations in both scenarios, while SNP-GPFA fails when the shared dynamics vary between trials, as expected from the model assumptions (Fig. S4C,E). We assessed the accuracy of the factorization of the spike-count variance by means of Lasso regression. In particular, we regressed the ground truth latents from the estimated latents of the different models, and quantified regression goodness-of-fit in terms of cross-validated $R^2$ (Fig. S4D,F). We quantified the contribution of each latent factor to the regression in terms of the magnitude of the associated coefficients. The results (reported in Table S1) show that 1) both models can factorize the variance when the shared dynamics are fixed across trials, with SNP-GPFA achieving a cleaner decomposition (expected, given that it is a closer model of the true data generating process in this case); and 2) TAME-GP achieves a near optimal factorization when the shared latents vary across trials (as assumed by its generative model), while SNP-GPFA is unable to find the appropriate decomposition. Overall, the TAME-GP estimator proves more robust to deviations from its underlying model assumptions.
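+
+ A sketch of the trial-varying condition of Eq. (34), with the dimensions reported above (2 shared and 3 private factors per area, 30 neurons per area, 50 trials of 100 bins) and reusing `rbf_draw` from the earlier simulation sketch; the loading scales are placeholders.
+
+ ```python
+ # Generate counts with trial-varying shared dynamics, as in Eq. (34).
+ import numpy as np
+ from scipy.linalg import block_diag
+
+ rng = np.random.default_rng(3)
+ n_trials, T, dt = 50, 100, 0.05
+ N, d_sh, d_pr = 30, 2, 3
+
+ Ws = 0.3 * rng.standard_normal((2 * N, d_sh))      # shared loading, both areas
+ Wn = block_diag(0.3 * rng.standard_normal((N, d_pr)),
+                 0.3 * rng.standard_normal((N, d_pr)))
+
+ counts = []
+ for j in range(n_trials):
+     Xs = rbf_draw(d_sh, T, dt)                     # shared factor, redrawn per trial
+     Xn = rbf_draw(2 * d_pr, T, dt)                 # private factors for areas A, B
+     log_rate = Xs @ Ws.T + Xn @ Wn.T + np.log(10 * dt)   # ~10 Hz baseline
+     counts.append(rng.poisson(np.exp(log_rate)))   # (T, 2N) spike counts
+ ```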
602
+
603
+ # A.8 SELECTING THE NUMBER OF PRIVATE AND SHARED DIMENSIONS IN REAL DATA
604
+
605
+ We select the number of private and shared dimensions to fit in real data by optimizing these hyperparameters via a grid search. A priori, we set the maximum number of dimensions to be evaluated as the number of PCs needed to account for $80\%$ of the population variance (in this case, 5 dimensions). Fig. S6 shows estimates of model fit quality as a function of the number of dimensions included in the private and shared latents for the multi-area TAME-GP presented in Fig. 4I-K. The results show a well-behaved cross-validated $R^2$ landscape, with optimal dimensionalities $(5,5)$ .
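+
+ The grid search itself is straightforward; in the sketch below, `fit_tame_gp` and `lono_r2` are hypothetical stand-ins for the fitting and leave-one-neuron-out scoring routines, to be swapped for the released package's entry points.
+
+ ```python
+ # Grid search over shared/private dimensionality (hypothetical model API).
+ import itertools
+ import numpy as np
+
+ max_dim = 5                      # PCs needed for 80% variance in this dataset
+ scores = np.full((max_dim, max_dim), -np.inf)
+ for d_sh, d_pr in itertools.product(range(1, max_dim + 1), repeat=2):
+     model = fit_tame_gp(spikes, n_shared=d_sh, n_private=d_pr)   # hypothetical
+     scores[d_sh - 1, d_pr - 1] = lono_r2(model, spikes)          # hypothetical
+ best_sh, best_pr = 1 + np.array(np.unravel_index(scores.argmax(), scores.shape))
+ ```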
606
+
607
+ # A.9 FITTING TAME-GP TO MONKEY PREMOTOR AND MOTOR RESPONSES DURING REACHES
608
+
609
+ We also tested our estimator on a publicly available dataset (Perich et al., 2018) of neural activity recorded in the premotor cortex (PMd) and primary motor cortex (M1) of macaques during sequential
610
+
611
+ reaches (binned at 10 ms resolution). Specifically, the monkey controls an on-screen cursor and is rewarded for moving that cursor to an indicated reach target, with multiple targets presented in a trial. Since there are minimal kinematic requirements for the reaching movements (e.g., very brief hold times), the monkey typically makes relatively smooth series of reaches. As the pPCA latent structure was of very poor quality in this dataset, we restricted our comparison to TAME-GP, with the task manifold aligned to screen position, and P-GPFA, with latent dimensionality $d = 2$ (Fig. S7). Visually, the latent structure extracted by TAME-GP seems to better capture the animal behavior, so we asked (in $R^2$ terms)<sup>8</sup> how much information about the task variables can be linearly decoded from the respective latents of TAME-GP and of P-GPFA with variable latent dimensionality (Fig. S7C, F). These results confirm that in this dataset as well, the TAME-GP task aligned manifold provides a compact account of neural variability with respect to task variables of interest, one that does not align with the overall axes of neural variability of the data. P-GPFA needs substantially higher dimensional latent spaces (10d vs. 2d) to capture the same amount of task-relevant neural variability.
612
+
613
+ # A.10 SUPPLEMENTARY FIGURES
614
+
615
+ ![](images/8d370501bea6bcf0c478f5bcfa5585f440a0e8b6a92592e364554f002c61d5c7.jpg)
616
+
617
+ ![](images/c1b1bf9c7c5ef91a4b2c466196c2f70270a566fb48b2889b3b740ba0039d0faf.jpg)
618
+ Figure S1: Multi-area parameter reconstruction. A. TAME-GP generative model for three brain areas with shared interactions. B-D. Latent variable estimation for within-model simulated data: ground truth latent factors and model posterior mean $\pm 95\%$ CI for all latent dimensions.
619
+
620
+ ![](images/4610d3838dc65d55d86cd4f2ee2d8404c5480a48acd2ca8a45f58f0bd68347a6.jpg)
621
+
622
+ ![](images/ade22ad3e51ddcbe221a0da533c3849fb6f4b6fa6c13d79d43bd71c874c0e7ef.jpg)
623
+
624
+ ![](images/e1a5cf5246d000f819b9c0d438ddc42e4e326d0eddecdb110e3c4dae939a9a42.jpg)
625
+
626
+ ![](images/9d27f4f32eeac6bcd3e7c60cd5e83fc5f05da20b1a22efe30b947a35a3bace38.jpg)
627
+ Figure S2: Task-aligned latent dynamics reconstruction (extends Fig. 2C). Mean squared error between the true task relevant dynamics and the model reconstruction, based on the 2 dimensional task relevant latent factor for pCCA and TAME-GP, and on the full 6 dimensional latent space for P-GPFA and pPCA. In contrast, figure 2C shows the MSE based only on the first 2 principal latents for pCCA and P-GPFA. Error bars represent the mean $\pm$ s.d. over 10-fold cross-validation.
628
+
629
+ ![](images/23efcd6b20ad2c75a445dfee9998d1f77d78fd0eb0351f03848908a856e56980.jpg)
630
+ Figure S3: Effects of model mismatch on latent structure estimation. A. TAME-GP graphical model for a single area (top) and schematic of the data generating process (bottom). LDS dynamics were transformed into firing rates through a ReLU nonlinearity (replacing the exponential used in Fig. 2), so that the latents now have an additive instead of a multiplicative effect on the observed neural activity. B. Ground truth task relevant dynamics (green) and estimated low dimensional projection for TAME-GP (purple), pPCA (dark gray) and pCCA (light gray). C. Mean squared error between the true shared dynamics and the model reconstruction, mean $\pm$ s.d. over 10-fold cross-validation. D. Example single trial firing rate reconstruction.
631
+
632
+ ![](images/0c42cd9bbe9d7a5a84cf2481e3b4013139242a06966b334851e8a26b2f78c149.jpg)
633
+
634
+ ![](images/f0188611243beb37d97bbbefd68146efad681acbad9ef14d29a1415836aef6c1.jpg)
635
+
636
+ ![](images/cd21cc25948ffd812743d4cbf37b7e9a5f33f2a251026af0b0325bd2f0c47c41.jpg)
637
+
638
+ ![](images/108f655c70b087753c5693a1f8376082bac9f67f1724722029cb3b566021cc3f.jpg)
639
+
640
+ ![](images/bca647248c6c1d2766c78faa24024f19946399b546070b2bd7b6b2ba4b4f208f.jpg)
641
+
642
+ ![](images/ffe7fd4b9359d4bfc28d8720822b019d5c1cc383ae2d843fbc3c9360c1e2af18.jpg)
643
+
644
+ ![](images/1a876d0fcb6df0e07ad20d7532b50c1cfd541c6cf0c28a757bce4df84d830b01.jpg)
645
+
646
+ ![](images/1103aa27f2fd98a7007330a267f2c204a1dab13fb206f9c58656fb7781eb58d2.jpg)
647
+ Figure S4: Communication subspace estimation with SNP-GPFA and TAME-GP (extends Fig. 3). A. Scheme of the spike count generative model for trial repeated (top right) and trial varying (bottom right) shared dynamics. B,C. Ground truth shared dynamics (black lines) and model reconstructions (colored lines) for the trial repeated (B) and trial varying (C) conditions. D,E. Ground truth shared and private dynamics variance explained by model predictions for the trial repeated (D) and trial varying (E) conditions; error bars represent mean $\pm$ standard deviation over a 5-fold cross validation.
648
+
649
+ ![](images/db2556063045a1e430d5c8f122735813349a4082db4ffcc60296ba3af1daf18a.jpg)
650
+
651
+ ![](images/85feecefb4b18184280c4478cf2303541d561b49f4e3ecb533a01624439db41f.jpg)
652
+ Figure S5: Model fit of shared and task aligned dynamics. $R^2$ of the linear regression between the ground truth task aligned latent dynamics and the model MAP estimate for TAME-GP (purple), pCCA (light grey) and reduced-rank regression (dark grey). Extends Fig. 3I in the main text to multiple average firing rates.
653
+
654
+ ![](images/a70305d2b3e8521eed4535904c71e8db37c36b22377bdb4e138774d540ce88b3.jpg)
655
+ Figure S6: Latent dimensionality selection for the MSTd and dlPFC communication manifold analysis (extends Fig. 4I-K). Heat-map of the leave-one-neuron-out $R^2$ of the spike count variance explained by TAME-GP for different combinations of shared and private latent dimensions. The upper bound on dimensionality was set to the number of principal components needed to explain $80\%$ of the population spike count variance.
656
+
657
+ ![](images/1e4f2aace7a3e9bb7f9ff473c2b232fea9336bb4b92f31611ba821baa649637c.jpg)
658
+
659
+ ![](images/44d21e8791b7471e7e914fe0ffbf7baa9e39836ee0f4bf98000bb2f3709e7fe3.jpg)
660
+
661
+ ![](images/3bc24bf77226b9e8cf38df430170a66d7674fde2634c939ff02bd16e56615cb0.jpg)
662
+
663
+ ![](images/293c1efbb68ef2648aa0bef3e9780f822bb46445e07848e6085e0702942ca9d6.jpg)
664
+ Figure S7: TAME-GP manifold estimation for monkey premotor (PMd) and motor (M1) neural responses during reaching. A. Graphical model for the PMd neural manifold aligned to the 2d coordinates of the hand on screen. B. Behavior and corresponding PMd latent population trajectories for four example individual reaches (colors), extracted with TAME-GP and P-GPFA. C. Lasso regression decoding of position; the TAME-GP $R^2$ (purple) is based on a 2d task-relevant latent. P-GPFA $R^2$ (blue) estimates were obtained for a range of latent dimensions (1-10). D, E, F. Same as A, B, C for M1.
665
+
666
+ ![](images/58666e5606e6baaf23068bd74a98bf1346761b22adb78fd620552a42016278a5.jpg)
667
+
668
+ ![](images/410dbd4a7525abc4db01da58251f87252bac3a2787ee9eadf0a2e6fb49c174be.jpg)
669
+
670
+ A.11 SUPPLEMENTARY TABLE
671
+
672
+ <table><tr><td colspan="5">Lasso results</td></tr><tr><td>model</td><td>sim type</td><td>ground truth latent</td><td>model latent</td><td>||β||</td></tr><tr><td rowspan="18">SNP-GPFA</td><td rowspan="9">fixed</td><td rowspan="3">private A</td><td>private A</td><td>0.252458</td></tr><tr><td>private B</td><td>0.008377</td></tr><tr><td>shared</td><td>0.065025</td></tr><tr><td rowspan="3">private B</td><td>private A</td><td>0.001043</td></tr><tr><td>private B</td><td>0.373415</td></tr><tr><td>shared</td><td>0.012722</td></tr><tr><td rowspan="3">shared</td><td>private A</td><td>0.014438</td></tr><tr><td>private B</td><td>0.026413</td></tr><tr><td>shared</td><td>0.665547</td></tr><tr><td rowspan="9">variable</td><td rowspan="3">private A</td><td>private A</td><td>0.128838</td></tr><tr><td>private B</td><td>0.256987</td></tr><tr><td>shared</td><td>0.044492</td></tr><tr><td rowspan="3">private B</td><td>private A</td><td>0.046482</td></tr><tr><td>private B</td><td>0.332019</td></tr><tr><td>shared</td><td>0.114382</td></tr><tr><td rowspan="3">shared</td><td>private A</td><td>0.058657</td></tr><tr><td>private B</td><td>0.386729</td></tr><tr><td>shared</td><td>0.010308</td></tr><tr><td rowspan="18">TAME-GP</td><td rowspan="9">fixed</td><td rowspan="3">private A</td><td>private A</td><td>0.222231</td></tr><tr><td>private B</td><td>0.00472</td></tr><tr><td>shared</td><td>0.16166</td></tr><tr><td rowspan="3">private B</td><td>private A</td><td>0.016308</td></tr><tr><td>private B</td><td>0.419728</td></tr><tr><td>shared</td><td>0.02365</td></tr><tr><td rowspan="3">shared</td><td>private A</td><td>0.1016</td></tr><tr><td>private B</td><td>0.006841</td></tr><tr><td>shared</td><td>0.476519</td></tr><tr><td rowspan="9">variable</td><td rowspan="3">private A</td><td>private A</td><td>0.268177</td></tr><tr><td>private B</td><td>0.011153</td></tr><tr><td>shared</td><td>0.032323</td></tr><tr><td rowspan="3">private B</td><td>private A</td><td>0.003969</td></tr><tr><td>private B</td><td>0.411102</td></tr><tr><td>shared</td><td>0.020695</td></tr><tr><td rowspan="3">shared</td><td>private A</td><td>0.019493</td></tr><tr><td>private B</td><td>0.005826</td></tr><tr><td>shared</td><td>0.658865</td></tr></table>
673
+
674
+ Table S1: Lasso regression coefficients, related to Section A.7. Norm of the coefficients of the Lasso regression between the ground truth latent dynamics and the SNP-GPFA / TAME-GP predicted latents. Lasso hyperparameters are set by grid search with a 5-fold cross-validation procedure.
2023/A probabilistic framework for task-aligned intra- and inter-area neural manifold estimation/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:95d017ff762c369a602e759edee3371c7a13e86cafcf160a0728705897893b1c
3
+ size 1074604
2023/A probabilistic framework for task-aligned intra- and inter-area neural manifold estimation/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/AANG _ Automating Auxiliary Learning/9218a6ea-0c66-424b-aec1-82d0a79e86c3_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/AANG _ Automating Auxiliary Learning/9218a6ea-0c66-424b-aec1-82d0a79e86c3_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/AANG _ Automating Auxiliary Learning/9218a6ea-0c66-424b-aec1-82d0a79e86c3_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:de1b5d7244b93b07121a600f93f86018421fb7d767ec016181517dd8ddf28a9e
3
+ size 2357387
2023/AANG _ Automating Auxiliary Learning/full.md ADDED
@@ -0,0 +1,700 @@
1
+ # AANG: AUTOMATING AUXILIARY LEARNING
2
+
3
+ Lucio M. Dery $^{1*}$ Paul Michel $^{2}$ Mikhail Khodak $^{1}$ Graham Neubig $^{1}$ Ameet Talwalkar $^{1,3}$ $^{1}$ Carnegie Mellon University $^{2}$ ENS PSL University $^{3}$ Hewlett Packard Enterprise
4
+
5
+ # ABSTRACT
6
+
7
+ Auxiliary objectives, supplementary learning signals that are introduced to help aid learning on data-starved or highly complex end-tasks, are commonplace in machine learning. Whilst much work has been done to formulate useful auxiliary objectives, their construction is still an art which proceeds by slow and tedious hand-design. Intuition for how and when these objectives improve end-task performance has also had limited theoretical backing. In this work, we present an approach for automatically generating a suite of auxiliary objectives. We achieve this by deconstructing existing objectives within a novel unified taxonomy, identifying connections between them, and generating new ones based on the uncovered structure. Next, we theoretically formalize widely-held intuitions about how auxiliary learning improves generalization on the end-task. This leads us to a principled and efficient algorithm for searching the space of generated objectives to find those most useful to a specified end-task. With natural language processing (NLP) as our domain of study, we demonstrate that our automated auxiliary learning pipeline leads to strong improvements over competitive baselines across continued training experiments on a pre-trained model on 5 NLP tasks<sup>1</sup>.
8
+
9
+ # 1 INTRODUCTION
10
+
11
+ The auxiliary learning paradigm, where we augment a primary objective with extra learning signals to boost end-task performance, is a staple of many machine learning (ML) domains. In natural language processing (NLP), well known models like SpanBERT (Joshi et al., 2020) and RoBERTa (Liu et al., 2019b) are trained on masked language modelling (MLM) auxiliary objectives (Devlin et al., 2018) before fine-tuning on the end-task.
12
+
13
+ <table><tr><td>Objective</td><td>Data (D)</td><td>Transform (T)</td><td>Representation (R)</td><td>Output (O)</td></tr><tr><td>BERT</td><td>Out-of-domain</td><td>BERT-Op</td><td>Bidirectional</td><td>Denoise Token</td></tr><tr><td>TAPT</td><td>Task data</td><td>BERT-Op</td><td>Bidirectional</td><td>Denoise Token</td></tr><tr><td>DAPT</td><td>In-domain</td><td>BERT-Op</td><td>Bidirectional</td><td>Denoise Token</td></tr><tr><td>ELMO</td><td>Out-of-domain</td><td>No-Op</td><td>Left-to-Right and Right-to-Left</td><td>Next Token</td></tr><tr><td>GPT</td><td>Out-of-domain</td><td>No-Op</td><td>Left-To-Right</td><td>Next Token</td></tr><tr><td>XLNet</td><td>Out-of-domain</td><td>No-Op</td><td>Random factorized</td><td>Next Token</td></tr><tr><td>Electra</td><td>Neural LM Data</td><td>Replace</td><td>Bidirectional</td><td>Real / Synthetic</td></tr><tr><td>...</td><td>...</td><td>...</td><td>...</td><td>...</td></tr></table>
14
+
15
+ Figure 1: We present the decomposition of some auxiliary objectives in NLP within our framework.
16
+
17
+ And for speech processing and reinforcement learning (RL), Oord et al. (2018) introduced the popular contrastive predictive coding objective which achieved state of the art performance in many settings when multi-tasked with the end-task. Despite these successes and many more, research into devising such objectives has progressed in a very local, objective-by-objective manner (Raffel et al., 2019; Clark et al., 2020; Grill et al., 2020; Chen et al., 2020). Auxiliary objectives are constructed by hand-design and without much overarching structure, relying on the experience and intuition of a select group of researchers versed at making appropriate design choices. Unfortunately, this status-quo not only creates a technical barrier of entry for exploring auxiliary objectives in new domains but also, by virtue of its incremental nature, limits the rate at which new objectives are discovered and investigated.
18
+
19
+ To address the above challenges, this paper presents a framework for automatically generating and utilizing a large set of candidate auxiliary objectives. Our framework is seeded by the following key observation: leading auxiliary objectives across multiple domains can be viewed as making different design decisions within a 4-stage pipeline: Input Data $(\mathcal{D}) \to$ Input Transformation $(\mathcal{T}) \to$ Model Representation $(\mathcal{R}) \to$ Output $(\mathcal{O})$. For instance, in RL, a common auxiliary objective is to predict the environment's forward dynamics (Agrawal et al., 2016; Hafner et al., 2019). To construct this objective, the current task state-action pair $(\mathcal{D})$ is corrupted $(\mathcal{T})$ and then passed through the model to produce a latent representation $(\mathcal{R})$ which is finally used to predict the next state $(\mathcal{O})$. Similarly, in NLP, the XLNet (Yang et al., 2019) objective, which performs language modelling on a randomly factorized permutation of the input, can be written within our taxonomy as $\{\mathcal{D} = \text{Out-of-Domain}, \mathcal{T} = \text{No-op}, \mathcal{R} = \text{Random-Factorized}, \mathcal{O} = \text{Next Token}\}$. These two examples (along with others listed in Figure 1) fall within a class we term named objectives: objectives that have been previously proposed in the auxiliary learning literature.
22
+
23
+ Decomposing named objectives within our taxonomy provides a unified view of the auxiliary learning landscape. From this vantage point, it becomes clear that there are many unexplored combinations of the various primitives used across named objectives. This presents a simple formula for automatically generating a large set of candidate objectives: take the cartesian product of the design decisions across given stages (Figure 2). Using this compositional process, not only can we reconstruct existing named objectives, we can also generate new combinations. This overcomes the tedium of implementing each objective independently since we can just reuse a small set of simple stage-wise primitives.
26
+
27
+ ![](images/e434352a7df16472c91778b5696a0cb7194e42229c727f7ce7619e349fd9e863.jpg)
28
+ Figure 2: Our framework in the context of NLP. We decompose named objectives within our four staged taxonomy: $\{\mathcal{D},\mathcal{T},\mathcal{R},\mathcal{O}\}$ . By taking the cartesian product of choices across stages, we reproduce named objectives and discover new ones.
29
+
30
+ Generating a large set of objectives raises the natural question of how to efficiently select the most helpful ones for a given end task. Instead of leaving this to practitioner intuition, we develop principled guidelines to address this question by theoretically studying the impact of auxiliary learning on a particular end-task. Specifically, using arguments based on algorithmic stability (Hardt et al., 2016; Bousquet & Elisseeff, 2002), we derive end-task generalization error bounds that are dependent on the choice of auxiliary task. This contributes to existing theory (Saunshi et al., 2020; Xie et al., 2021) on how auxiliary learning impacts the end-task by suggesting a new candidate mechanism: auxiliary learning results in more stable optimization end-points in the sense of Bousquet & Elisseeff (2002), which in theory improves generalization of the final model.
31
+
32
+ Guided by our theory, we introduce AANG (Automating Auxiliary LearNinG), an efficient, structure-aware algorithm for adaptively combining a set of related objectives to improve generalization on a specific end-task. AANG incorporates the following prescriptions from our theory: (i) auxiliary tasks that are more similar to the end-task are desirable. Given a set of objectives, AANG learns adaptive weights to bring the composite objective closer to the end-task; (ii) in general, more auxiliary data is better. AANG maximizes the effective amount of data used in training by using all the generated objectives instead of taking task-specific subsets.
33
+
34
+ To empirically validate our method for automatically generating and utilizing auxiliary objectives, we experiment on five NLP tasks. We do so in the widely-used setting of continued pretraining (Gururangan et al., 2020; Aghajanyan et al., 2021; Dery et al., 2021b; Zhang et al., 2022), where a model trained with a single auxiliary objective on large-scale data is further trained on end-task related data. Without introducing any external data or architectural modifications, variants of AANG outperform strong and widely used baselines in 4 out of 5 tasks. AANG achieves an average improvement of $4.2\%$ over standard fine-tuning of RoBERTa across our chosen tasks. We believe our results will spur further research into exploring automating auxiliary learning across a variety of settings. Notably, while we focus on NLP when discussing the space of auxiliary objectives (Section 3) and in our empirical evaluation (Section 6), our theoretical results (Section 4) and AANG itself are domain-agnostic<sup>2</sup>.
35
+
36
+ # 2 RELATED WORK
37
+
38
+ To properly scope this work, we define auxiliary learning as training a model on alternative objectives with the goal of improving performance on some primary end-task. Auxiliary learning is an instantiation of transfer learning (Caruana, 1997; Baxter, 2000; Ruder et al., 2019). It covers the pretrain-then-finetune paradigm (Huh et al., 2016; Devlin et al., 2018; Schneider et al., 2019; Gururangan et al., 2020) as well as end-task aware multitasking approaches (Lin et al., 2019; Dery et al., 2021a;b). Whilst auxiliary objectives may be meta-learned (Liu et al., 2019a; Navon et al., 2020), for simplicity - since incorporating these would require further complication of our design space - such objectives are out of the scope of this paper.
39
+
40
+ This work bears many parallels to the area of neural architecture search (NAS) (Stanley & Miikkulainen, 2002; Zoph & Le, 2016; Roberts et al., 2021). Whilst we seek to automate auxiliary learning, the objective of NAS is to automate the discovery of the right neural architecture given a specific end-task. Search spaces of candidate architectures are created by taking the cartesian product of architecture design choices across the depth of the network. The design of suitable architectural search spaces for a variety of settings has been an active area of research (Tan & Le, 2019; Howard et al., 2019; Dao et al., 2020; Roberts et al., 2021). To develop AANG, we borrow ideas from the NAS literature on efficient algorithms for sifting through spaces of architectures. Mirroring the popular differentiable NAS method DARTS (Liu et al., 2018), we perform a continuous relaxation over the search space of objectives, allowing for efficient search by gradient descent. We also use a factored approach to model relationships between objectives that share primitives. This is inspired by recent work on stochastic-relaxation weight sharing (Dong & Yang, 2019; Li et al., 2020).
41
+
42
+ As a theoretical contribution, this work derives an end-task aware generalization error bound for auxiliary learning. Our bound is built on that of Hardt et al. (2016), who derive generalization bounds for parametric models trained with stochastic gradient descent (SGD). To derive their bounds, they leverage the concept of algorithmic stability introduced by Bousquet & Elisseeff (2002). Informally, a randomized algorithm is uniformly stable if changing a single training data point in the given samples does not change its end-point too much. Said change is characterized as the average difference in predictions between the two learned models. Stability implies generalization in expectation (Hardt et al., 2016; Kuzborskij & Lampert, 2018).
43
+
44
+ # 3 AUTOMATICALLY GENERATING AUXILIARY OBJECTIVES
45
+
46
+ To begin, we take a high-level view of the landscape of named objectives. Using running examples from NLP, we propose the following coarse structure for the sequence of choices made in the hand-design of auxiliary objectives:
47
+
48
+ 1. Data, $\mathcal{D}$ : Auxiliary objective pipelines begin with a choice of input data. Here, options can range from heterogeneous out-of-domain data (Radford et al., 2019), in-domain data with respect to the final end-task (Beltagy et al., 2019) or the task data itself (Gururangan et al., 2020). It may even include data outside the modality of the end-task.
49
+ 2. Input-Transformation, $\mathcal{T}$ : Many auxiliary objectives are self-supervised with respect to their input data. They corrupt or transform the input and then reconstruct it in whole or part. For example, input text tokens can be masked, replaced or deleted. Operations can also be aggregated as in BERT-Op: mask $80\%$ of selected tokens and randomly replace $50\%$ of the remaining (Devlin et al., 2018; Liu et al., 2019b).
50
+ 3. Representation, $\mathcal{R}$ : After transformation, representations of the input data can be computed from a given model in different ways. A chosen token's representation can depend on only its left context (Left-to-Right) (Radford et al., 2018) or its right context (Right-to-Left) (Peters et al., 2018). It could also depend on the representations of a randomly selected permutation of other tokens (Random Factorized) Yang et al. (2019).
51
+ 4. Output, $\mathcal{O}$ : Finally, representations obtained from the previous stage are fed into a loss function producing a final output. The choice of output loss is usually coupled with the choice of transformation made in stage 2. Choices include but are not restricted to denoising tokens, predicting the next token or predicting the TF-IDF (Term Frequency-Inverse Document Frequency) of a token.
52
+
53
+ The above taxonomy $\{\mathcal{D}\to \mathcal{T}\to \mathcal{R}\to \mathcal{O}\}$ is expansive enough to cover a range of named auxiliary objectives of interest in NLP (Figure 1) $^3$ . For example, we can write any member of the GPT series (Radford et al., 2018; 2019; Brown et al., 2020) which perform left-to-right language modelling on out-of-domain data as $\{\mathcal{D} = \text{Out-of-Domain}, \mathcal{T} = \text{No-op}, \mathcal{R} = \text{Left-To-Right}, \mathcal{O} = \text{Next Token}\}$ . We can summarize the pre-existing choices within each design stage to obtain a unique set of options. For example, we can reduce the set of model representation types used by the objectives enumerated in Figure 1 to the unique set $\mathcal{R} = \{\text{Bi-directional}, \text{Left-To-Right}, \text{Right-To-Left}, \text{Random-Factorized}\}$ . Having summarized the list of primitives within each stage, a simple formula for generating a space of auxiliary objectives becomes apparent: take the cartesian product of the design choices at each stage (see Figure 2). In general, given an instance of our taxonomy, we can construct a space of objectives $\mathcal{A} = \mathcal{D} \times \mathcal{T} \times \mathcal{R} \times \mathcal{O}$ of size $|\mathcal{A}| \leq |\mathcal{D}| \times |\mathcal{T}| \times |\mathcal{R}| \times |\mathcal{O}|$ . Consider New\_Obj $_1$ from Figure 2. This previously unexplored objective can be obtained by combining the special masking operation from BERT (BERT-Op) with computing model representations based on left-to-right causal masking as in GPT. In fact, this objective proved one of the most useful ones in our experiments below (see Figure 5).
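+
+ To make this compositional recipe concrete, here is a minimal Python sketch that enumerates such a product space. The stage inventories below are illustrative stand-ins, not the paper's exact option lists, so the resulting count is only indicative:
+
+ ```python
+ from itertools import product
+
+ # Illustrative stage-wise primitives, following the {D, T, R, O} taxonomy.
+ DATA = ["out-of-domain", "in-domain", "task-data"]
+ TRANSFORM = ["no-op", "mask", "replace", "bert-op"]
+ REPRESENTATION = ["bidirectional", "left-to-right", "right-to-left", "random-factorized"]
+ OUTPUT = ["denoise-token", "next-token", "tf-idf"]
+
+ # The space of objectives A = D x T x R x O is the cartesian product of the stages.
+ search_space = [
+     {"data": d, "transform": t, "representation": r, "output": o}
+     for d, t, r, o in product(DATA, TRANSFORM, REPRESENTATION, OUTPUT)
+ ]
+ print(len(search_space))  # 3 * 4 * 4 * 3 = 144 candidate objectives
+
+ # Named objectives fall out as particular tuples, e.g. a GPT-style objective:
+ gpt_style = {"data": "out-of-domain", "transform": "no-op",
+              "representation": "left-to-right", "output": "next-token"}
+ assert gpt_style in search_space
+ ```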
54
+
55
+ Our framework also allows us to reason about whole families of objectives, $\mathcal{F}$ , by thinking in terms of design stages and choices. For example, given a particular end-task $\mathbf{E}$ with input text $\mathbf{E}_{\mathcal{D}}$ , we can create a family of objectives based solely on task data by fixing to that option in our input data stage; we call this family $\mathcal{F}_{\mathcal{D} = \mathbf{E}_{\mathcal{D}}}$ . $\mathcal{F}_{\mathcal{D} = \mathbf{E}_{\mathcal{D}}}$ not only includes pre-existing TAPT Gururangan et al. (2020) but also unexplored objectives like task-data dependent variants of XLNET, ELMO etc. Auxiliary learning with $\mathcal{F}_{\mathcal{D} = \mathbf{E}_{\mathcal{D}}}$ can be seen as a relaxed form of data augmentation which we dub task augmentation. Whilst data augmentation requires applying transformations that preserve the data-point's label, task augmentation has no such restriction and thus offers greater flexibility in terms of specifying $\{\mathcal{T},\mathcal{R},\mathcal{O}\}$ . We can also reason about expanding particular stages to include new primitives. Any supervised loss can be added to the output stage, $\mathcal{O}$ , allowing us to potentially explore auxiliary objectives based on supervised signals like NER or POS tagging (Carreras et al., 2003; Charniak, 1997). A special example is setting $\mathcal{O}$ to the end-task supervised output $\mathbf{E}_{\mathcal{O}}$ . This leads to $\mathcal{F}_{\mathcal{D} = \mathbf{E}_{\mathcal{D}}}^{\mathcal{O} = \mathbf{E}_{\mathcal{O}}}$ which is a subset of $\mathcal{F}_{\mathcal{D} = \mathbf{E}_{\mathcal{D}}}$ . $\mathcal{F}_{\mathcal{D} = \mathbf{E}_{\mathcal{D}}}^{\mathcal{O} = \mathbf{E}_{\mathcal{O}}}$ includes many objectives like predicting the end-task signal from corrupted input data. In Section 6, we will introduce a search space of objectives that leverages task augmentation.
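+
+ Under the same illustrative sketch, a family like $\mathcal{F}_{\mathcal{D} = \mathbf{E}_{\mathcal{D}}}$ is simply a filter over the generated space that pins the data stage to the end-task's own text:
+
+ ```python
+ # Task augmentation: fix the data stage to end-task text and leave {T, R, O} free.
+ # TAPT is one member; task-data variants of XLNET- or ELMO-style objectives are others.
+ family_task_data = [obj for obj in search_space if obj["data"] == "task-data"]
+ print(len(family_task_data))  # 4 * 4 * 3 = 48 objectives in this family
+ ```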
56
+
57
+ # 4 THE IMPACT OF AUXILIARY LEARNING ON END-TASK GENERALIZATION
58
+
59
+ In this section, we reduce reliance on practitioner intuition by deriving a set of guiding principles for effectively utilizing the automatically generated objectives from Section 3.
60
+
61
+ Auxiliary learning influences the end-task through both training and generalization error. Previous theory has largely focused on characterizing the impact on end-task training error. Liu et al. (2021), for example, show that end-task agnostic pre-training can create a performance gap in training error compared to training with the end-task alone. The size of this gap depends on how dissimilar the pre-training auxiliary objective is from the end-task. They introduce the following assumption (which we will borrow) to formalize their notion of task similarity:
62
+
63
+ Assumption A.1: Let $f_{e}$ represent the end-task objective and $f_{a}$ be the auxiliary objective. There exists $\Delta \geq 0$ such that $\| \nabla f_{a}(\theta) - \nabla f_{e}(\theta)\| \leq \Delta \forall \theta$ .
64
+
65
+ Note that $\theta$ represents all the parameters of the model. Smaller $\Delta$ implies $f_{a}$ is more similar to the primary task $f_{e}$ . Liu et al. (2021) bound the end-task agnostic training error gap to be logarithmic in $\Delta$ .
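+
+ Concretely, the quantity that Assumption A.1 bounds is the norm of the gradient mismatch between the two losses at a shared parameter vector. A minimal PyTorch sketch of a single-batch estimate (the loss closures are hypothetical helpers, not code from the paper):
+
+ ```python
+ import torch
+
+ def gradient_gap(model, end_task_loss, aux_loss):
+     """Estimate ||grad f_a(theta) - grad f_e(theta)|| at the current theta.
+
+     `end_task_loss` and `aux_loss` are zero-argument closures returning scalar
+     losses on a batch of end-task / auxiliary data. Assumption A.1 posits that
+     this quantity is bounded by Delta uniformly over theta.
+     """
+     params = [p for p in model.parameters() if p.requires_grad]
+     g_e = torch.autograd.grad(end_task_loss(), params)
+     g_a = torch.autograd.grad(aux_loss(), params)
+     diff = torch.cat([(a - e).flatten() for a, e in zip(g_a, g_e)])
+     return diff.norm().item()  # single-batch estimate of the gap at theta
+ ```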
66
+
67
+ Unlike training error, end-task generalization error has gone unstudied in the auxiliary learning setting. Bounding the generalization error not only adds to our theoretical understanding of the impact of auxiliary learning but also provides insights to guide algorithm design. To arrive at a bound, we adapt the technique of Hardt et al. (2016) who derive a generalization bound on training with only the end-task via stochastic gradient descent. We consider the end-task aware setting where the end-task is multi-tasked with the auxiliary objective. This setting has recently been shown to improve end-task performance over the pretrain-then-finetune paradigm (Dery et al., 2021a;b; Yao et al., 2021).
68
+
69
+ Auxiliary learning with Dynamic Sampling: We are given an auxiliary objective $f_{a}(\cdot ;z) \in [0,1]$ with $N_{a}$ samples $S_{a} = (z_{1},\dots ,z_{N_{a}})$ from the distribution $\mathcal{D}_a$. $f_{a}$ can either be a single objective or a weighted linear combination of objectives: $f_{a} = \sum_{k} w^{k} f_{a}^{k}$. At any iteration of SGD, we sample a choice of the end-task function $f_{e}$ or the auxiliary objective $f_{a}$ according to the probabilities $\lambda_{e}, \lambda_{a} \in [0,1] \mid \lambda_{e} + \lambda_{a} = 1$. Given the chosen objective, we sample a data-point and perform a stochastic gradient step on the sampled data-point. We now present our bound in the setting described.
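+
+ A minimal sketch of this dynamic sampling scheme follows; the batch iterators and loss functions are hypothetical placeholders, and the loop is schematic rather than the procedure used in our experiments:
+
+ ```python
+ import random
+ import torch
+
+ def dynamic_sampling_sgd(model, end_batches, aux_batches, lam_e, steps, c=0.1):
+     """Alternating SGD of the kind analyzed in Theorem 4.1.
+
+     Each step samples the end-task with probability lam_e (the auxiliary
+     objective otherwise), then takes one SGD step with step size c / t.
+     `end_batches` / `aux_batches` are iterators yielding (loss_fn, batch).
+     """
+     for t in range(1, steps + 1):
+         source = end_batches if random.random() < lam_e else aux_batches
+         loss_fn, batch = next(source)
+         model.zero_grad()
+         loss_fn(model, batch).backward()
+         alpha_t = c / t  # monotonically non-increasing step sizes
+         with torch.no_grad():
+             for p in model.parameters():
+                 if p.grad is not None:
+                     p -= alpha_t * p.grad
+     return model
+ ```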
72
+
73
+ Theorem 4.1 (Auxiliary learning with Dynamic Sampling). Assume that $f_{e}(\cdot ;z_{e}), f_{a}(\cdot ;z_{a}) \in [0,1]$ are both $L$-Lipschitz and are $\beta_{e}$- and $\beta_{a}$-smooth respectively. Consider that we have $N^{\prime} = N_{e} + N_{a}$ total samples, where $f_{e}$ and $f_{a}$ have $N_{e}$ and $N_{a}$ samples respectively. $r_e = \frac{N_e}{N'}$ is the fraction of the available data represented by the end-task. Suppose that we run stochastic gradient descent for $T$ steps with monotonically non-increasing step sizes $\alpha_{t} \leq \frac{c}{t}$, dynamically sampling the tasks according to $\lambda_{e}$ and $\lambda_{a}$. Then, with respect to $f_{e}$, the generalization error is bounded by:
74
+
75
+ $$
76
+ \epsilon_{\text{gen}} \lesssim \Delta^{\frac{1}{1 + c\lambda^{*}\beta^{*}}} \left(\frac{\gamma T}{N^{\prime}}\right)^{1 - \frac{1}{c\lambda^{*}\beta^{*} + 1}} \quad \text{where} \quad \gamma = \frac{\lambda_{e}}{r_{e}} \tag{1}
77
+ $$
78
+
79
+ Here $\beta^{*} = \min \{\beta_{e},\beta_{a}\}$ and $\lambda^{*}$ is the weight placed on the objective with the smaller smoothness constant.
80
+
81
+ Proof. See Appendix E for the full proof and Appendix F for further discussion.
82
+
83
+ ![](images/a06ee60f3aab0b56dbca4bf53650f6d90103e67f467adf9906a35161e4d126bb.jpg)
84
+
85
+ As a detailed inspection of the proof will show, we derive Equation 1 by appealing to algorithmic stability (Bousquet & Elisseeff, 2002; Hardt et al., 2016; Kuzborskij & Lampert, 2018) (Section 2). To our knowledge, ours is the first work to present an algorithmic stability view to formally explain how auxiliary learning influences end-task performance. Equation 1 surfaces the following prescriptions about learning with auxiliary tasks:
86
+
87
+ P1 Smaller $\Delta$ improves $\epsilon_{\mathrm{gen}}$ . This implies that the more similar the auxiliary objective is to the end-task (under Assumption A.1), the lower the generalization error.
88
+ P2 Larger $N'$ leads to smaller $\epsilon_{\mathrm{gen}}$<sup>4</sup>. Since we usually have a fixed amount of task data $N_e$, we can increase $N'$ by adding more auxiliary data $N_a$ (see the toy sketch below).
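+
+ A toy evaluation of the right-hand side of Equation 1 illustrates both prescriptions. The constants below ($c$, $\beta^{*}$, $T$) are arbitrary, the bound only holds up to constants, and we sample tasks in proportion to their data shares so that $\gamma = 1$:
+
+ ```python
+ def gen_bound(delta, n_total, n_end, T=10_000, c=0.1, beta_star=1.0):
+     """Right-hand side of Equation 1 (up to constants); a toy illustration."""
+     lam_e = n_end / n_total           # proportional sampling, so gamma = 1
+     lam_star = 1.0 - lam_e            # weight on the (assumed smoother) auxiliary loss
+     q = 1.0 / (1.0 + c * lam_star * beta_star)
+     return delta ** q * (T / n_total) ** (1.0 - q)
+
+ base = gen_bound(delta=1.0, n_total=11_000, n_end=1_000)
+ # P1: an auxiliary objective more similar to the end-task (smaller Delta) tightens the bound.
+ print(gen_bound(delta=0.5, n_total=11_000, n_end=1_000) < base)  # True
+ # P2: adding auxiliary data (larger N' at fixed N_e) also tightens it.
+ print(gen_bound(delta=1.0, n_total=21_000, n_end=1_000) < base)  # True
+ ```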
89
+
90
+ # 5 END-TASK AWARE SEARCH OF STRUCTURED OBJECTIVE SPACES
91
+
92
+ # Algorithm 1 AANG
93
+
94
+ Input: Search Space - $\mathcal{A}$
95
+ Factor vectors - $\{W^{\mathrm{All}}, W^{\mathcal{D}}, W^{\mathcal{T}}, W^{\mathcal{R}}, W^{\mathcal{O}}\}$
96
+ End-task - E, End-task weight - $\lambda_{e}$
97
+ Initial Model Params - $\theta_0 \in \mathbf{R}^D$
98
+ repeat
99
+ Sample a batch of $n$ objectives
100
+ $\mathcal{K}^n \sim \mathcal{A}$
101
+ Weighting of objectives in $\mathcal{K}^n$
102
+ Construct $\mathbf{w}^n$
103
+ for $k = 1$ to $n$ do
104
+ $(d, t, r, o) = [\mathcal{K}_k^n].\mathrm{stages}$
105
+ $w^k \propto \exp(W_{(d, t, r, o)}^{\mathrm{All}} + W_d^{\mathcal{D}} + W_t^{\mathcal{T}} + W_r^{\mathcal{R}} + W_o^{\mathcal{O}})$
+ $\mathbf{w}_k^n \gets w^k$
106
+ end for
107
+ Get losses from batches of data
108
+ $\hat{\mathcal{L}}_{\mathcal{A}}(\mathcal{K}^n, \mathbf{w}^n) = \sum_{k=1}^{n} w^k \mathcal{L}_k$
+ $\mathcal{L}_{\mathrm{total}} = \lambda_e \mathcal{L}_E + (1 - \lambda_e) \hat{\mathcal{L}}_{\mathcal{A}}$
109
+ Get gradients and update factors
110
+ $\theta_{t+1}, \{\nabla_{\mathbf{w}^n}, \nabla_{\lambda_e}\} \gets \mathrm{META-TARTAN}(\theta_t, E, \mathcal{L}_{\mathrm{total}})$
111
+ Update $\{W^{\mathrm{All}}, W^{\mathcal{D}}, W^{\mathcal{T}}, W^{\mathcal{R}}, W^{\mathcal{O}}\}$ using $\nabla_{\mathbf{w}^n}$
112
+ Update $\lambda_e$ using $\nabla_{\lambda_e}$
113
+ until done
114
+ Return: $\theta_T$
115
+
116
+ Guided by Section 4, we build a practical method for exploring a set of objectives, $\mathcal{A}$ .
117
+
118
+ Whilst the dynamic sampling setting described in Section 4 is amenable to theoretical consideration, we make a few practical changes to it. First, instead of performing alternating gradient descent by sampling $f_{a}, f_{e}$ according to $\lambda_{e}, \lambda_{a}$, we use them as multitask weights and perform joint training. Joint training has been found to produce superior results compared to alternating optimization when leveraging auxiliary objectives (Aghajanyan et al., 2021). We perform gradient descent on the following total loss, which interpolates between the end-task and the auxiliary loss: $\mathcal{L}_{\mathrm{total}} = \lambda_e \mathcal{L}_E + (1 - \lambda_e) \mathcal{L}_{\mathcal{K}}$. Here, $\mathcal{K}$ is a chosen subset of $\mathcal{A}$.
119
+
120
+ Second, as indicated in Section 4, given $\mathcal{K}$ we can write the set as a single objective $f_{a} = \sum_{k\in \mathcal{K}}w^{k}f_{a}^{k}$. By Prescription P1, we want to choose $\{w^k\}$ such that $f_{a}$ has a small $\Delta$ with the end-task $f_{e}$. We would also like to set $\lambda_{e}$ such that the bound on $\epsilon_{\mathrm{gen}}$ is minimized. Whilst a closed form exists for the optimal weightings $\lambda_{e}, \{w^{k}\}$, it depends on variables like $\{\Delta^k\}$, $\{\beta_a^k\}$, $L$ that are hard to estimate.
123
+
124
+ We therefore propose to learn $\lambda_{e},\{w^{k}\}$ in an online, data-driven way. To do this, we build on top of the META-TARTAN algorithm proposed by Dery et al. (2021b). META-TARTAN is a meta-learning algorithm that learns adaptive weights for different auxiliary tasks in a way that prioritizes end-task generalization. It learns $\{w^{k}\}$ by minimizing the loss on the end-task validation set: $\frac{\partial\mathcal{L}_{\mathbf{E}}^{val}}{\partial w^{k}}\approx -\left(\nabla_{\theta}\mathcal{L}_{f_{a}^{k}}\right)^{T}\left(\nabla_{\theta}\mathcal{L}_{\mathbf{E}}^{val}\right)$ . This corresponds to learning $\{w^{k}\}$ such that $\left(\nabla_{\theta}f_{a}\right)^{T}\left(\nabla_{\theta}f_{e}\right)$ is maximized. This minimizes one of the terms that contributes to $\Delta$ and thus attempts to fulfil Prescription P1. We can similarly learn $\lambda_{e}$ to minimize the end-task validation loss. For a more detailed discussion of META-TARTAN, please see Appendix B.
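+
+ The displayed approximation reduces to per-task gradient dot products, as in the following schematic (our own sketch, not META-TARTAN's actual implementation):
+
+ ```python
+ import torch
+
+ def task_weight_meta_grads(model, aux_losses, end_val_loss):
+     """Approximate dL_E^val / dw^k as -(grad L_k)^T (grad L_E^val) for each k.
+
+     `aux_losses` is a list of scalar auxiliary losses and `end_val_loss` is a
+     scalar end-task validation loss, all computed at the current parameters.
+     """
+     params = [p for p in model.parameters() if p.requires_grad]
+     g_val = torch.autograd.grad(end_val_loss, params, retain_graph=True)
+     meta_grads = []
+     for loss_k in aux_losses:
+         g_k = torch.autograd.grad(loss_k, params, retain_graph=True)
+         dot = sum((gk * gv).sum() for gk, gv in zip(g_k, g_val))
+         # Negative when the auxiliary gradient aligns with the validation
+         # gradient, so a descent step on w^k up-weights aligned objectives.
+         meta_grads.append(-dot.item())
+     return meta_grads
+ ```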
125
+
126
+ So far, we have introduced independent weights, $\{w^k\}$, for each objective. This is sufficient in the case of unrelated objectives. However, the objectives in $\mathcal{A}$ share an underlying structure. We recognize this by using a factored approach to model each $w^k$. We introduce a factor vector for each of the 4 stages introduced in Section 3: $W^{\mathcal{D}} \in \mathbf{R}^{|\mathcal{D}|}$, $W^{\mathcal{T}} \in \mathbf{R}^{|\mathcal{T}|}$, $W^{\mathcal{R}} \in \mathbf{R}^{|\mathcal{R}|}$ and $W^{\mathcal{O}} \in \mathbf{R}^{|\mathcal{O}|}$. This ties together the weights of objectives that share primitives in common. To capture the fact that an objective can be more than the sum of its parts, we also introduce an independent weight for each objective: $W^{\mathrm{All}} \in \mathbf{R}^{|\mathcal{D}| \times |\mathcal{T}| \times |\mathcal{R}| \times |\mathcal{O}|}$. For an objective $k$ generated by the composition of the operations $\{d \in \mathcal{D}, t \in \mathcal{T}, r \in \mathcal{R}, o \in \mathcal{O}\}$, its weighting is computed as: $w^k \propto \exp\left(W_{(d,t,r,o)}^{\mathrm{All}} + W_d^{\mathcal{D}} + W_t^{\mathcal{T}} + W_r^{\mathcal{R}} + W_o^{\mathcal{O}}\right)$. Our factored approach not only allows us to share information between objectives but it also allows us to analyze which stages and primitives are most important to a particular end-task after training is completed (Section 7).
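+
+ A sketch of this factored parameterization (stage sizes are illustrative):
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class FactoredTaskWeights(nn.Module):
+     """Objective weights factored into shared per-primitive logits plus an
+     independent per-objective logit, so that objectives sharing a primitive
+     also share statistical strength."""
+
+     def __init__(self, n_d=2, n_t=4, n_r=4, n_o=2):
+         super().__init__()
+         self.w_all = nn.Parameter(torch.zeros(n_d, n_t, n_r, n_o))
+         self.w_d = nn.Parameter(torch.zeros(n_d))
+         self.w_t = nn.Parameter(torch.zeros(n_t))
+         self.w_r = nn.Parameter(torch.zeros(n_r))
+         self.w_o = nn.Parameter(torch.zeros(n_o))
+
+     def forward(self, sampled):
+         """Normalized weights for a sampled batch of (d, t, r, o) index tuples."""
+         logits = torch.stack([
+             self.w_all[d, t, r, o]
+             + self.w_d[d] + self.w_t[t] + self.w_r[r] + self.w_o[o]
+             for (d, t, r, o) in sampled
+         ])
+         return torch.softmax(logits, dim=0)
+ ```
+
+ Because every sampled objective's logit touches the shared stage factors, a gradient step on one sampled batch implicitly moves the weights of unsampled objectives that share primitives with it.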
127
+
128
+ Prescription P2 from Section 4 advocates for introducing as much auxiliary data as possible. As such, instead of fixing to a specific subset throughout training for a particular end-task, we propose to utilize all the objectives in $\mathcal{A}$. This also avoids the combinatorial explosion that comes with exploring subsets of $\mathcal{A}$ one at a time. $|\mathcal{A}|$ can be large and descending on all of $\mathcal{A}$ at once can be computationally prohibitive. As an efficient workaround, at each training step, we sample a subset of $\mathcal{A}$ for execution with META-TARTAN. Our samples are drawn from all of $\mathcal{A}$ so any objective can get used at any timestep. Because we model each $w^k$ via a factored approach, even if an objective is not sampled, its weight is implicitly updated. Our approach is reminiscent of stochastic-relaxation weight sharing (Pham et al., 2018; Dong & Yang, 2019; Li et al., 2020) where sampled architectural primitives result in updates to shared model weights which can be used by other primitives that are not sampled.
129
+
130
+ We coalesce all the ideas we have introduced so far into Algorithm 1, which we dub AANG (Automating Auxiliary LearNinG). At a high level, given an end-task $\mathbf{E}$:
131
+
132
+ 1. We generate a space of auxiliary objectives $\mathcal{A}$ by leveraging the taxonomy discussed in Section 3. $\mathcal{A}$ may contain auxiliary tasks that can improve our performance on $\mathbf{E}$ .
133
+ 2. We leverage MAML-style (Finn et al., 2017) meta-learning to adaptively weight the objectives in $\mathcal{A}$ based on measuring each objective's influence on $\mathbf{E}$ 's validation set loss.
134
+ 3. We make our algorithm scalable by sub-sampling the tasks in $\mathcal{A}$. By exploiting the underlying structure of the objectives in $\mathcal{A}$ via a factored approach to modeling task weights, we reduce the impact of the inexact sub-sampling (see the sketch below).
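+
+ Putting the pieces together, one AANG-style training step might look as follows. This is schematic: it reuses the FactoredTaskWeights sketch above, and `objectives` is a hypothetical mapping from $(d, t, r, o)$ index tuples to loss closures:
+
+ ```python
+ import random
+
+ def aang_step(model, task_weights, objectives, end_task_loss, lam_e, n=4):
+     """One schematic AANG step: sample n objectives, mix their losses with the
+     learned factored weights, and interpolate with the end-task loss."""
+     sampled = random.sample(sorted(objectives), n)
+     w = task_weights(sampled)  # FactoredTaskWeights from the sketch above
+     aux_loss = sum(w_k * objectives[key](model) for w_k, key in zip(w, sampled))
+     total = lam_e * end_task_loss(model) + (1 - lam_e) * aux_loss
+     total.backward()  # meta-gradient updates to the factors and lam_e omitted
+ ```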
135
+
136
+ # 6 EXPERIMENTAL SETTING
137
+
138
+ Our exploration of auxiliary learning has made the following transitions from the status-quo: manual to automated, single task to multitask, end-task agnostic to end-task aware. In this section, we set up experiments to validate these deviations from the standard.
139
+
140
+ We focus on continued pre-training (Gururangan et al., 2020; Aghajanyan et al., 2021). In this setting, we perform further auxiliary learning on an already pre-trained model. We favor this setting over pre-training from scratch (Liu et al., 2019b; Yang et al., 2019) not only because it is a more computationally feasible arena for experimentation but also because it is more relevant to modern ML systems where building upon pre-trained models is the norm (Qiu et al., 2020; Du et al., 2020).
+
+ Model Details and Datasets: We use a pre-trained RoBERTa<sub>base</sub> (Liu et al., 2019b) as the shared model base. We implement each auxiliary objective as a separate head on top of this shared base. For classification-based objectives, the output head is a 2-layer multi-layer perceptron (MLP) that receives representations for the special classification token [CLS] (Devlin et al., 2018) from RoBERTa<sub>base</sub>. For sequence generation objectives, we make a copy of the pre-trained output layer of RoBERTa<sub>base</sub> for each task. Table 4 in Appendix C provides details of the 5 datasets used.
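+
+ A sketch of this shared-base, multi-head layout for the classification-style objectives (using the Hugging Face `transformers` API; head names and sizes are illustrative):
+
+ ```python
+ import torch.nn as nn
+ from transformers import RobertaModel
+
+ class MultiHeadAuxModel(nn.Module):
+     """Shared RoBERTa base with one output head per auxiliary objective."""
+
+     def __init__(self, head_names, n_classes=2, hidden=768):
+         super().__init__()
+         self.base = RobertaModel.from_pretrained("roberta-base")  # shared base
+         # One 2-layer MLP head per classification-style objective, fed the
+         # representation of the first ([CLS]-equivalent <s>) token.
+         self.heads = nn.ModuleDict({
+             name: nn.Sequential(nn.Linear(hidden, hidden), nn.Tanh(),
+                                 nn.Linear(hidden, n_classes))
+             for name in head_names
+         })
+
+     def forward(self, input_ids, attention_mask, head):
+         out = self.base(input_ids=input_ids, attention_mask=attention_mask)
+         cls_repr = out.last_hidden_state[:, 0]  # first-token representation
+         return self.heads[head](cls_repr)
+ ```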
141
+
142
+ All datasets are low-resource classification tasks. Not only are these datasets more amenable to meta-learning from a computational standpoint, but low-resource tasks also benefit the most from auxiliary learning. We also choose these tasks because they feature in previous work which we use as baselines (Gururangan et al., 2020; Dery et al., 2021b).
143
+
144
+ Baselines and Search Spaces: The following methods are end-task agnostic baselines. By end-task agnostic, we mean that these do not multitask with the end-task. Finetuning on the end-task occurs after training on the auxiliary objective.
145
+
146
+ 1. RoBERTa (Liu et al., 2019b): We simply finetune a pre-trained RoBERTa<sub>base</sub> on the end-task.
147
+ 2. TAPT (Gururangan et al., 2020): Continue training RoBERTa $_{\text{base}}$ on masked language modelling on end-task data itself before finetuning on the end-task.
148
+
149
+ The following named objectives are end-task aware baselines that use META-TARTAN (Dery et al., 2021b) but utilize only 1 auxiliary task. Each auxiliary objective is multi-tasked with the end-task.
150
+
151
+ 1. GPT-style: We perform end-task aware training with a denoising auxiliary objective based on left-to-right causal masking for computing representations. $\{\mathcal{D} =$ End-task data, $\mathcal{T} =$ No-op, $\mathcal{R} =$ Left-To-Right, $\mathcal{O} =$ Denoise Token$\}$.
152
+ 2. XLNET-style: This is a denoising auxiliary objective that uses randomized masking for computing representations. $\{\mathcal{D} =$ End-task data, $\mathcal{T} =$ No-op, $\mathcal{R} =$ Random-Factorized, $\mathcal{O} =$ Denoise Token$\}$.
153
+ 3. BERT-style / TAPT: Denoising inputs corrupted via BERT-Op: $80\%$ masking and $10\%$ random replacement. $\{\mathcal{D} =$ End-task data, $\mathcal{T} =$ BERT-Op, $\mathcal{R} =$ Bi-directional, $\mathcal{O} =$ Denoise Token$\}$. Please note that this baseline is equivalent to META-TARTAN as introduced in Dery et al. (2021b).
154
+
155
+ Table 1 details the search spaces that we evaluate against the above baselines. This is by no means the most encompassing search space but we leave more expansive space design to future work. Please note that all tasks within AANG-TD, and those with $\{\mathcal{D} =$ End-task$\}$ in AANG-TD+ED, are instantiations of task augmentation as introduced in Section 3.
156
+
157
+ Table 1: AANG-TD (task data) has 24 objectives and is based on only end-task data. AANG-TD+ED (task data + external data) has 40 objectives and uses both end-task and in-domain data.
158
+
159
+ <table><tr><td></td><td>D</td><td>T</td><td>R</td><td>O</td></tr><tr><td rowspan="2">TD</td><td rowspan="2">End-task</td><td>BERT-op</td><td>Bi-directional</td><td>Denoise Token</td></tr><tr><td>Mask</td><td>Left-to-Right</td><td>End-task</td></tr><tr><td rowspan="2">TD+ED</td><td>End-task</td><td>Replace</td><td>Right-to-Left</td><td></td></tr><tr><td>In-Domain data</td><td>No-op</td><td>Random-Factorized</td><td></td></tr></table>
160
+
161
162
+
163
+ Training Details: Please see Appendix D for more details about hyper-parameter configurations.
164
+
165
+ # 7 RESULTS AND DISCUSSION
166
+
167
+ In this section, we experimentally validate our case for automating the creation of auxiliary objectives and using them in an end-task aware multitask fashion.
168
+
169
+ # 7.1 GOING A LONG WAY WITHOUT EXTERNAL DATA
170
+
171
+ We first consider the setting where we rely solely on end-task data (task augmentation), and work with the AANG-TD search space. This search space has 24 objectives. Table 2 shows that automatically generating auxiliary objectives from only task data and using them appropriately is productive.
172
+
173
+ End-task awareness is key: From Table 2, methods that are end-task aware result in over $1.12\%$ average improvement over those that are end-task agnostic even under the most generous comparison (GPT-style $79.84\%$ vs task-agnostic TAPT $78.72\%$ ). Knowing the end-task means that at each iteration, AANG can make informed gradient updates by adapting task weights so the resulting auxiliary task better aligns with the end-task (Prescription P1). Amongst the single task objectives, BERT-style performs best. We posit that this is because RoBERTa was trained from scratch on a similar objective and so this objective represents minimal shift in training distributions.
174
+
175
+ Adaptive multi-task auxiliary learning improves performance: We compare single-task end-task aware auxiliary learning to its multitask variant. Table 2 shows that multitasking our 3 different types of language modelling tasks results in improved average performance over using the tasks individually (81.12% for the BERT-style and 81.55% for combining the three single task objectives). We get our best performance when we multitask 24 auxiliary objectives automatically generated with our framework using AANG-TD. Boosting the number of objectives from 3 to 24 resulted in a 0.66% improvement in average performance across tasks. This is in line with Prescription P2 from Section 4 since we are increasing the effective amount of auxiliary data. We further posit that introducing more auxiliary objectives also serves to implicitly regularize the end-task during training.
176
+
177
+ Table 2: Our framework and AANG on tasks using only task data. Without using any external data, we are able to get significant average performance improvement over baselines. Superscripts are p-values from paired t-tests (best multitask versus best single-task).
178
+
179
+ <table><tr><td rowspan="2">Task Adaptive</td><td rowspan="2">Method</td><td rowspan="2">#</td><td colspan="2">CS</td><td>BIOMED</td><td>NEWS</td><td>STANCE</td><td rowspan="2">AVG</td></tr><tr><td>ACL-ARC</td><td>SCIERC</td><td>CHEMPROT</td><td>H.PARTISAN</td><td>SE-2016-6</td></tr><tr><td rowspan="3">No</td><td>RoBERTa</td><td>1</td><td>66.03<sub>3.55</sub></td><td>77.96<sub>2.96</sub></td><td>82.10<sub>0.98</sub></td><td>93.39<sub>2.26</sub></td><td>70.37<sub>1.51</sub></td><td>77.97</td></tr><tr><td>TAPT</td><td>1</td><td>67.74<sub>3.68</sub></td><td>79.53<sub>1.93</sub></td><td>82.17<sub>0.65</sub></td><td>93.42<sub>2.87</sub></td><td>70.74<sub>1.21</sub></td><td>78.72</td></tr><tr><td>[OURS] Static Multitask-TD</td><td>24</td><td>69.60<sub>3.80</sub></td><td>83.37<sub>0.58</sub></td><td>83.42<sub>0.26</sub></td><td>97.95<sub>0.73</sub></td><td>71.02<sub>0.43</sub></td><td>81.07</td></tr><tr><td rowspan="5">Yes</td><td>X. GPT-style</td><td>1</td><td>67.22<sub>0.44</sub></td><td>81.62<sub>0.84</sub></td><td>83.29<sub>1.21</sub></td><td>96.41<sub>0.73</sub></td><td>70.67<sub>1.46</sub></td><td>79.84</td></tr><tr><td>Y. XLNET-style</td><td>1</td><td>69.76<sub>2.42</sub></td><td>81.81<sub>0.42</sub></td><td>83.39<sub>0.31</sub></td><td>96.41<sub>1.92</sub></td><td>71.18<sub>0.58</sub></td><td>80.51</td></tr><tr><td>Z. BERT-style (Dery et al., 2021b)</td><td>1</td><td>70.08<sub>4.70</sub></td><td>81.48<sub>0.82</sub></td><td>84.49<sup>(0.09)</sup><sub>0.50</sub></td><td>96.84<sub>1.72</sub></td><td>72.70<sub>0.60</sub></td><td>81.12</td></tr><tr><td>[OURS] AANG-[X+Y+Z]</td><td>3</td><td>71.51<sub>3.19</sub></td><td>82.89<sub>0.78</sub></td><td>83.68<sub>0.45</sub></td><td>96.92<sub>1.26</sub></td><td>72.75<sup>(0.94)</sup><sub>0.82</sub></td><td>81.55</td></tr><tr><td>[OURS] AANG-TD</td><td>24</td><td>73.26<sup>(0.28)</sup><sub>1.32</sub></td><td>82.98<sup>(0.27)</sup><sub>1.52</sub></td><td>83.91<sub>0.32</sub></td><td>98.46<sup>(0.14)</sup><sub>0.0</sub></td><td>72.46<sub>1.65</sub></td><td>82.21</td></tr></table>
180
+
181
+ # 7.2 INTRODUCING EXTERNAL DATA
182
+
183
+ For the ACL-ARC task, we experiment with introducing auxiliary tasks based on external data. AANG-TD+ED has 40 tasks, 16 of which are based on domain data. We introduce CS domain data (from the S2ORC dataset (Lo et al., 2019)) that is $n = 10\times$ the size of the task data. From Figure 3 we see that AANG-TD+ED makes better use of domain-data than doing end-task aware training using only the BERT-style objective with task (TAPT) and domain-data (DAPT) jointly as in Dery et al. (2021b). However, AANG-TD+ED (73.70) does not significantly improve over AANG-TD (73.26) on the ACL-ARC task (Figure 3). This might seem at odds with Prescription P2 since the TD+ED search space introduces more data. However, note that the AANG search algorithm is approximate and as such, with a larger search space, it can be harder to find composite tasks with a small $\Delta$ as suggested by Prescription P1. We posit that we need more external data than $n = 10\times$ in order to see marked improvements to offset our inexact search of the space of composite functions. However, such scales are outside our computational budget.
186
+
187
+ ![](images/e6e5c15f5a2b8bfe8e8b2ca95131bbc83e11b56683868bd942675cf9147c468b.jpg)
188
+ Figure 3: AANG effectively leverages out-of-task data. P-values (in brackets) are comparisons to (Dery et al., 2021b).
189
+
190
+ # 7.3 WHY DOES AANG WORK?
191
+
192
+ To better understand why our auxiliary learning pipeline improves end-task performance, we perform multiple ablations under AANG-TD.
193
+
194
+ Static versus Dynamic Weighting: We ablate the impact of using static task weights throughout training, as against adaptive task weights. Just as with AANG, we sub-sample $n$ tasks from the search space at every iteration ($n$ is cross-validated exactly as for AANG; see Appendix D). Each sampled task's weight is initialized to $\frac{1}{n}$ and remains unchanged throughout training. This is the Static Multitask-TD baseline in Table 2. AANG-TD improves upon the static multitask baseline by over 1.1% on average. With adaptive weighting, AANG down-weights objectives that are harmful to the end-task whilst up-weighting relevant ones (Prescription P1). However, using static weightings is more compute-friendly since we do not have to calculate task-weight meta-gradients. This compute-performance trade-off is left for practitioners to resolve based on their available resources.
195
+
196
+ Impact of the number of sampled objectives: Due to computational constraints, AANG sub-samples the set of generated objectives. Whilst this sampling can result in approximation error when inferring task weightings, it can also introduce stochasticity which can help regularize the learned model. From Table 3 (Appendix A) we find that for some tasks (ACL-ARC and SCIERC), sampling a larger number of tasks helps. SE-2016-6 and CHEMPROT, on the other hand, benefit from a smaller number of sampled tasks. Our recommendation is that the number of sampled tasks be cross-validated on a per-task basis.
197
+
198
+ Learned task weight trajectories: AANG learns interesting trajectories for weighting design stage primitives. From Table 2, the fact that AANG-TD roughly matches the best single task performance (72.46<sub>1.65</sub> versus 72.70<sub>0.60</sub> for BERT-style) on the SE-2016-6 task suggests that it may be learning to mostly up-weight this task. Figure 4 provides evidence of this. For the SE-2016-6 task (row 1), composing the highest weighted primitive from each stage [BERT $\circ$ None $\circ$ DENOISE] results in BERT-style, the best single task objective. Figure 4 also shows that AANG can adapt to overfitting.
199
+
200
+ ![](images/089656fa678cf65bbccab1e0b4ad2d16dfb2dcd5730b92ad84a9080505366f65.jpg)
201
+
202
+ ![](images/08a166263325f8f45f021e800da3286420503b406749a4323df76d64732ad2e6.jpg)
203
+
204
+ ![](images/8b8823231e54b061e0e0892e7a8ec4fd76bffc84778160332b332d4e1e64da92.jpg)
205
+
206
+ ![](images/435b6757d4335f4b36dc3d614b58f1579ff8d7a1a2eb76f7a997f3d850006852.jpg)
207
+
208
+ ![](images/ef22f5021583a587443e9d491312ebc508b1b7a3edab426ffe3e8048d3130a7e.jpg)
209
+ Figure 4: Learned trajectories for AANG-TD for run instances of SE-2016-6 and SCIERC tasks.
210
+
211
+ ![](images/f7ea144ad3c9442b82c415f6a722814d2f38d5fe52b59ce4d747e18e5698c7fa.jpg)
212
+
213
+ ![](images/9c16826c27134b521ca46827b93517cf29852351342929c3b15270836a7e4493.jpg)
214
+
215
+ ![](images/8c4dce324cd1faa7f681e8c404fb9cdd49b8e3f6cb0082500f24c7da1498de25.jpg)
216
+
217
+ The vertical black lines indicate the point of best validation set performance. AANG responds to over-fitting by down-weighting objectives whose output loss is being over-fit to. Thus, after several iterations, the objective that dominates when the validation performance is at its highest (black vertical line) gets down-weighted in response to becoming saturated.
218
+
219
+ Which tasks are important, and when are they important? We study which tasks are most highly weighted early in training (first $10\%$ of the learning trajectory) and later in training (last $50\%$). We aggregate statistics across 3 datasets. Note that early in training, objectives based on the self-supervised output $\mathcal{O} = \{\mathrm{DENOISE}\}$ are highly weighted but later, objectives based on the supervised signal, $\mathcal{O} = \{\mathrm{Task}\}$, play a larger role. AANG rediscovers the common practice of training on self-supervised objectives before introducing supervised ones. It is also interesting to note that many newly generated objectives (outside of the 3 named single task baselines in Table 2), such as simple input reconstruction, were discovered to have relevant impact on the end-tasks. This means AANG can automatically surface new, previously unexplored objectives relevant to the end-task.
220
+
221
+ ![](images/15e8c397960a72b951e22d1d674200db894d093b6816ed1db1d129780102d4fd.jpg)
222
+ Tasks with highest average weight during first $10\%$ of training
223
+ Figure 5: Top ranked objectives (averaged weight) early in training (left) and later in training (right).
224
+
225
+ ![](images/acd920987c976d030e8c58473b4492b6aeb31f7cfe4ffead0baf780a15182bbf.jpg)
226
+ Tasks with highest average weight during later half of training
227
+
228
+ # 8 LIMITATIONS AND CONCLUSION
229
+
230
+ Our work has some limitations that we leave for future work. First, because AANG relies on meta-learning, it presents an extra compute burden over simple multitasking. This is because we have to independently compute meta-gradients for each auxiliary task, thus requiring $\mathcal{O}(n)$ forward-backward operations for $n$ sampled tasks compared to $\mathcal{O}(1)$ for static multitasking. In Table 2, we show that our static Multitask-TD method outperforms all other non-task-adaptive methods by $\approx 2.4\%$ and is thus a viable alternative when runtime is a significant constraint. Secondly, AANG as presented is an approximate algorithm, primarily due to sub-sampling the space of tasks. Thus, as mentioned in Section 7.2, we do not get as much gain as desired when our search space becomes larger. We leave finding an efficient exact search algorithm for future exploration.
231
+
232
+ This paper presents a procedure for automating the creation of auxiliary objectives. We showed, theoretically, how auxiliary learning impacts end-task generalization. This resulted in prescriptions that informed the design of AANG, an algorithm to search the space of generated objectives in an end-task aware multitask fashion. Our experiments show that AANG is a promising first step in automating auxiliary learning.
233
+
234
+ # 9 ACKNOWLEDGEMENTS
235
+
236
+ This work was supported in part by DSO National Laboratories, an ENS-CFM Data Science Chair, DARPA FA875017C0141, the National Science Foundation grants IIS1705121, IIS1838017, IIS2046613 and IIS-2112471, an Amazon Web Services Award, a Facebook Faculty Research Award, funding from Booz Allen Hamilton Inc., and a Block Center Grant. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of any of these funding agencies. We are grateful for helpful feedback from Uri Alon, Patrick Fernandes, Joon Sik Kim, Han Guo, Victor Akinwande and Clara Na.
237
+
238
+ # REFERENCES
239
+
240
+ Armen Aghajanyan, Anchit Gupta, Akshit Shrivastava, Xilun Chen, Luke Zettlemoyer, and Sonal Gupta. Muppet: Massive multi-task representations with pre-finetuning. arXiv preprint arXiv:2101.11038, 2021.
241
+ Pulkit Agrawal, Ashvin V Nair, Pieter Abbeel, Jitendra Malik, and Sergey Levine. Learning to poke by poking: Experiential learning of intuitive physics. Advances in neural information processing systems, 29, 2016.
242
+ Jonathan Baxter. A model of inductive bias learning. Journal of artificial intelligence research, 12: 149-198, 2000.
243
+ Iz Beltagy, Arman Cohan, and Kyle Lo. Scibert: Pretrained contextualized embeddings for scientific text. CoRR, abs/1903.10676, 2019. URL http://arxiv.org/abs/1903.10676.
244
+ Olivier Bousquet and André Elisseeff. Stability and generalization. The Journal of Machine Learning Research, 2:499-526, 2002.
245
+ Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. CoRR, abs/2005.14165, 2020. URL https://arxiv.org/abs/2005.14165.
246
+ Xavier Carreras, Lluis Marquez, and Lluis Padró. A simple named entity extractor using adaboost. In Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003, pp. 152-155, 2003.
247
+ Rich Caruana. Multitask learning. Machine learning, 28(1):41-75, 1997.
248
+ Eugene Charniak. Statistical techniques for natural language parsing. AI magazine, 18(4):33-33, 1997.
249
+ Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International conference on machine learning, pp. 1597-1607. PMLR, 2020.
250
+ Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555, 2020.
251
+ Tri Dao, Nimit S Sohoni, Albert Gu, Matthew Eichhorn, Amit Blonder, Megan Leszczynski, Atri Rudra, and Christopher Ré. Kaleidoscope: An efficient, learnable representation for all structured linear maps. arXiv preprint arXiv:2012.14966, 2020.
252
+ Lucio M Dery, Yann Dauphin, and David Grangier. Auxiliary task update decomposition: The good, the bad and the neutral. arXiv preprint arXiv:2108.11346, 2021a.
253
+ Lucio M Dery, Paul Michel, Ameet Talwalkar, and Graham Neubig. Should we be pre-training? an argument for end-task aware training as an alternative. arXiv preprint arXiv:2109.07437, 2021b.
254
+
255
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
256
+ Xuanyi Dong and Yi Yang. Searching for a robust neural architecture in four GPU hours. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1761-1770, 2019.
257
+ Jingfei Du, Edouard Grave, Beliz Gunel, Vishrav Chaudhary, Onur Celebi, Michael Auli, Ves Stoyanov, and Alexis Conneau. Self-training improves pre-training for natural language understanding. arXiv preprint arXiv:2010.02194, 2020.
258
+ Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks, 2017. URL https://arxiv.org/abs/1703.03400.
259
+ Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems, 33:21271-21284, 2020.
260
+ Suchin Gururangan, Ana Marasovic, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A Smith. Don't stop pretraining: adapt language models to domains and tasks. arXiv preprint arXiv:2004.10964, 2020.
261
+ Danijar Hafner, Timothy Lillicrap, Ian Fischer, Ruben Villegas, David Ha, Honglak Lee, and James Davidson. Learning latent dynamics for planning from pixels. In International conference on machine learning, pp. 2555-2565. PMLR, 2019.
262
+ Moritz Hardt, Ben Recht, and Yoram Singer. Train faster, generalize better: Stability of stochastic gradient descent. In International Conference on Machine Learning, pp. 1225-1234. PMLR, 2016.
263
+ Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, et al. Searching for mobilenetv3. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 1314-1324, 2019.
264
+ Minyoung Huh, Pulkit Agrawal, and Alexei A Efros. What makes imagenet good for transfer learning? arXiv preprint arXiv:1608.08614, 2016.
265
+ Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. Spanbert: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64-77, 2020.
266
+ David Jurgens, Srijan Kumar, Raine Hoover, Dan McFarland, and Dan Jurafsky. Measuring the evolution of a scientific field through citation frames. Transactions of the Association for Computational Linguistics, 6:391-406, 2018.
267
+ Johannes Kiesel, Maria Mestre, Rishabh Shukla, Emmanuel Vincent, Payam Adineh, David Corney, Benno Stein, and Martin Potthast. SemEval-2019 task 4: Hyperpartisan news detection. In Proceedings of the 13th International Workshop on Semantic Evaluation, pp. 829-839, Minneapolis, Minnesota, USA, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/S19-2145. URL https://aclanthology.org/S19-2145.
268
+ Jens Kringelum, Sonny Kim Kjaerulff, Søren Brunak, Ole Lund, Tudor I Oprea, and Olivier Taboureau. Chemprot-3.0: a global chemical biology diseases mapping. Database, 2016, 2016.
269
+ Ilja Kuzborskij and Christoph Lampert. Data-dependent stability of stochastic gradient descent. In International Conference on Machine Learning, pp. 2815-2824. PMLR, 2018.
270
+ Liam Li, Mikhail Khodak, Maria-Florina Balcan, and Ameet Talwalkar. Geometry-aware gradient algorithms for neural architecture search. arXiv preprint arXiv:2004.07802, 2020.
271
+ Xingyu Lin, Harjatin Baweja, George Kantor, and David Held. Adaptive auxiliary task weighting for reinforcement learning. Advances in neural information processing systems, 32, 2019.
272
+
273
+ Hanxiao Liu, Karen Simonyan, and Yiming Yang. Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055, 2018.
274
+ Shikun Liu, Andrew J Davison, and Edward Johns. Self-supervised generalisation with meta auxiliary learning. arXiv preprint arXiv:1901.08933, 2019a.
275
+ Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019b.
276
+ Ziquan Liu, Yi Xu, Yuanhong Xu, Qi Qian, Hao Li, Antoni B. Chan, and Rong Jin. Improved fine-tuning by leveraging pre-training data: Theory and practice. CoRR, abs/2111.12292, 2021. URL https://arxiv.org/abs/2111.12292.
277
+ Kyle Lo, Lucy Lu Wang, Mark Neumann, Rodney Kinney, and Dan S Weld. S2orc: The semantic scholar open research corpus. arXiv preprint arXiv:1911.02782, 2019.
278
+ Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization, 2017. URL https://arxiv.org/abs/1711.05101.
279
+ Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. arXiv preprint arXiv:1808.09602, 2018.
280
+ Saif Mohammad, Svetlana Kiritchenko, Parinaz Sobhani, Xiaodan Zhu, and Colin Cherry. SemEval-2016 task 6: Detecting stance in tweets. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pp. 31-41, San Diego, California, June 2016. Association for Computational Linguistics. doi: 10.18653/v1/S16-1003. URL https://aclanthology.org/S16-1003.
281
+ Aviv Navon, Idan Achituve, Haggai Maron, Gal Chechik, and Ethan Fetaya. Auxiliary learning by implicit differentiation. arXiv preprint arXiv:2007.02693, 2020.
282
+ Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.
283
+ Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations. CoRR, abs/1802.05365, 2018. URL http://arxiv.org/abs/1802.05365.
284
+ Hieu Pham, Melody Guan, Barret Zoph, Quoc Le, and Jeff Dean. Efficient neural architecture search via parameters sharing. In International Conference on Machine Learning, pp. 4095-4104. PMLR, 2018.
285
+ Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, and Xuanjing Huang. Pre-trained models for natural language processing: A survey. Science China Technological Sciences, pp. 1-26, 2020.
286
+ Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. 2018.
287
+ Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
288
+ Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683, 2019.
289
+ Nicholas Roberts, Mikhail Khodak, Tri Dao, Liam Li, Christopher Ré, and Ameet Talwalkar. Rethinking neural operations for diverse tasks. arXiv preprint arXiv:2103.15798, 2021.
290
+ Sebastian Ruder, Matthew E Peters, Swabha Swayamdipta, and Thomas Wolf. Transfer learning in natural language processing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Tutorials, pp. 15-18, 2019.
291
+
292
+ Nikunj Saunshi, Sadhika Malladi, and Sanjeev Arora. A mathematical exploration of why language models help solve downstream tasks. arXiv preprint arXiv:2010.03648, 2020.
293
+ Steffen Schneider, Alexei Baevski, Ronan Collobert, and Michael Auli. wav2vec: Unsupervised pre-training for speech recognition. arXiv preprint arXiv:1904.05862, 2019.
294
+ Kenneth O Stanley and Risto Miikkulainen. Evolving neural networks through augmenting topologies. Evolutionary computation, 10(2):99-127, 2002.
295
+ Mingxing Tan and Quoc Le. Efficientnet: Rethinking model scaling for convolutional neural networks. In International Conference on Machine Learning, pp. 6105-6114. PMLR, 2019.
296
+ Sang Michael Xie, Aditi Raghunathan, Percy Liang, and Tengyu Ma. An explanation of in-context learning as implicit bayesian inference. arXiv preprint arXiv:2111.02080, 2021.
297
+ Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems, 32, 2019.
298
+ Xingcheng Yao, Yanan Zheng, Xiaocong Yang, and Zhilin Yang. Nlp from scratch without large-scale pretraining: A simple and efficient framework. arXiv preprint arXiv:2111.04130, 2021.
299
+ Tong Zhang, Peng Gao, Hao Dong, Yin Zhuang, Guanqun Wang, Wei Zhang, and He Chen. Consecutive pretraining: A knowledge transfer learning strategy with relevant unlabeled data for remote sensing domain. arXiv preprint arXiv:2207.03860, 2022.
300
+ Barret Zoph and Quoc V Le. Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578, 2016.
301
+
302
+ # A MORE ABLATION TABLES
303
+
304
+ Table 3: Varying number of sampled objectives per-iteration.
305
+
306
+ <table><tr><td>Task</td><td>3/24 tasks</td><td>6/24 tasks</td></tr><tr><td>ACL-ARC</td><td>72.112.12</td><td>73.261.32</td></tr><tr><td>SCIERC</td><td>82.351.76</td><td>82.981.52</td></tr><tr><td>SE-2016-6</td><td>72.461.65</td><td>72.460.90</td></tr><tr><td>CHEMPROT</td><td>83.910.32</td><td>83.690.98</td></tr><tr><td>H.PARTISAN</td><td>98.460.0</td><td>97.950.73</td></tr></table>
307
+
308
+ # B DISCUSSION OF META-TARTAN (DERY ET AL., 2021B)
309
+
310
+ META-TARTAN (Dery et al., 2021b) is a MAML-style (Finn et al., 2017) meta-learning algorithm that learns to adaptively weight a given set of tasks based on their influence on end-task validation performance. META-TARTAN achieves this by formulating the following bi-level optimization problem:
311
+
312
+ $$
313
+ \theta^{*}, \mathbf{w}^{*} = \operatorname{argmin}_{\left\{\theta \in g\left(\theta_{0}\right), \mathbf{w}\right\}} \mathcal{L}_{\mathbf{E}}(\theta) \tag{2}
314
+ $$
315
+
316
+ where
317
+
318
+ $$
319
+ \theta_{0} = \operatorname{argmin}_{\theta} \mathcal{L}_{\text{total}}(\theta, \mathbf{w}) = \operatorname{argmin}_{\theta} \left(w^{*} \mathcal{L}_{\mathbf{E}}(\theta) + \sum_{T_{i} \in \mathcal{A}} w_{i} \mathcal{L}_{T_{i}}(\theta)\right) \tag{3}
320
+ $$
321
+
322
+ Note that $\mathbf{E}$ is the end-task and $\mathcal{A}$ is the set of auxiliary tasks.
323
+
324
+ Since the above bi-level problem is difficult to solve directly, Dery et al. (2021b) relax it into an alternating optimization problem in which task weights are updated based on the 1-step improvement to the validation performance of the end-task:
325
+
326
+ $$
327
+ \frac {\partial \mathcal {L} _ {\mathbf {E}} ^ {v a l} \left(\theta_ {t + 1} (\mathbf {w})\right)}{\partial w _ {i}} \approx - \beta \left(\nabla \mathcal {L} _ {T _ {i}}\right) ^ {T} \left(\nabla \mathcal {L} _ {\mathbf {E}} ^ {v a l} \left(\theta_ {t}\right)\right) \tag {4}
328
+ $$
329
+
330
+ To prevent the above relaxation from finding the trivial solution of upweighting only the end-task, Dery et al. (2021b) introduce a special dev-head, which they use to estimate the meta-gradient:
331
+
332
+ $$
333
+ \frac {\partial \mathcal {L} _ {T ^ {*}} ^ {v a l} \left(\theta^ {*} (\mathbf {w})\right)}{\partial w _ {i}} \approx - \beta \left(\nabla_ {\theta} \mathcal {L} _ {T _ {i}}\right) ^ {T} \left(\nabla_ {\theta} \mathcal {L} _ {\mathbf {E}} ^ {v a l} \left(\left[ \theta_ {\text {b o d y}}; \phi^ {*} \right] _ {t}\right)\right) \tag {5}
334
+ $$
335
+
336
+ where $\phi_t^*$ is the special dev-head and $\theta_{\mathrm{body}}$ is the body of the model. For more details on META-TARTAN, please see Section 3 of Dery et al. (2021b).
337
+
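+ To make Eq. 4 concrete, the following is a minimal PyTorch sketch of the resulting task-weight update. It assumes a shared-body `model`, a list of already-computed auxiliary losses, and an end-task validation loss; it omits the dev-head of Eq. 5, and all names are illustrative rather than META-TARTAN's actual code.
+
+ ```python
+ import torch
+
+ def update_task_weights(model, aux_losses, end_task_val_loss, w, beta=0.1):
+     """Move each w_i along -dL_E^val/dw_i ~= beta * <grad L_Ti, grad L_E^val> (Eq. 4)."""
+     params = [p for p in model.parameters() if p.requires_grad]
+     # Gradient of the end-task validation loss w.r.t. the shared parameters.
+     val_grads = torch.autograd.grad(end_task_val_loss, params, retain_graph=True)
+     for i, loss_i in enumerate(aux_losses):
+         task_grads = torch.autograd.grad(loss_i, params, retain_graph=True)
+         # Meta-gradient of Eq. 4: inner product of the two gradient vectors.
+         dot = sum((g1 * g2).sum() for g1, g2 in zip(task_grads, val_grads))
+         w[i] = w[i] + beta * float(dot)  # upweight tasks aligned with the end-task
+     # Renormalize so the task weights remain a valid mixture.
+     return torch.softmax(torch.tensor(w), dim=0).tolist()
+ ```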
338
+ Though we leverage META-TARTAN, we make three distinct contributions to the field of auxiliary learning relative to Dery et al. (2021b). We list them below:
339
+
340
+ 1. Novel Problem Formulation: As far as we are aware, we are the first to formulate the problem of automated auxiliary learning. Specifically, we present an approach for automatically constructing a suite of auxiliary objectives from existing objectives. Note that Dery et al. (2021b) perform auxiliary learning with only the DAPT/TAPT variants of the BERT objective; they effectively assume that the search space of objectives (the 2 they explore) is given beforehand. Our approach automatically creates the search space.
341
+ 2. Theoretical Novelty: To the best of our knowledge, ours is the first work to explore, via algorithmic stability, why auxiliary learning improves primary-task performance. Dery et al. (2021b), in introducing META-TARTAN, do not attempt to give a theoretical characterization of why the algorithm improves end-task performance.
342
+ 3. Algorithmic Improvements to META-TARTAN: Note that META-TARTAN as presented in Dery et al. (2021b) was used with only 2 auxiliary tasks. When scaling to more tasks, using META-TARTAN naively becomes computationally prohibitive: on a search space of N tasks, META-TARTAN requires $O(N)$ computation per step.
343
+
344
+ We improve upon this by sub-sampling $k \ll N$ tasks per step, which reduces the compute overhead to $O(k)$. To account for the impact of sub-sampling as an approximation, we introduce factorised modelling of task weights, which allows sharing of information between auxiliary tasks that are themselves related (a sketch of both follows below).
345
+
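+ Below is a minimal sketch, under assumed axis sizes and task specifications, of how factorised task weights and $k$-of-$N$ sub-sampling might fit together. The factor names mirror the {WAll, WI, WT, WR, WO} vectors of Tables 5 and 6, but the decomposition here is purely illustrative.
+
+ ```python
+ import torch
+
+ N, k = 24, 6  # search-space size and number of sub-sampled objectives
+ # Hypothetical factor vectors: a per-task weight plus shared axis factors.
+ w_all = torch.zeros(N, requires_grad=True)
+ w_inp = torch.zeros(2, requires_grad=True)   # input axis
+ w_tfm = torch.zeros(3, requires_grad=True)   # transformation axis
+ w_rep = torch.zeros(2, requires_grad=True)   # representation axis
+ w_out = torch.zeros(2, requires_grad=True)   # output axis
+ # task_spec[n] = (input_id, transform_id, repr_id, output_id) for objective n.
+ task_spec = [(n % 2, n % 3, (n // 3) % 2, (n // 6) % 2) for n in range(N)]
+
+ def task_logits():
+     # Related tasks share factors, so updates from sampled tasks inform the rest.
+     return torch.stack([w_all[n] + w_inp[a] + w_tfm[b] + w_rep[c] + w_out[d]
+                         for n, (a, b, c, d) in enumerate(task_spec)])
+
+ probs = torch.softmax(task_logits(), dim=0)
+ sampled = torch.multinomial(probs.detach(), k, replacement=False)  # O(k) per step
+ ```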
346
+ # C DATASET DETAILS
347
+
348
+ Table 4: Specifications of datasets used to evaluate our methods.
349
+
350
+ <table><tr><td>Domain</td><td>Task</td><td>Label Type</td><td>Train Size</td><td>Dev Size</td><td>Test Size</td><td>Classes</td><td>Metric</td></tr><tr><td>BIOMED</td><td>CHEMPROT Kringelum et al. (2016)</td><td>relation classification</td><td>4169</td><td>2427</td><td>3469</td><td>13</td><td>Accuracy</td></tr><tr><td>CS</td><td>SCIERC Luan et al. (2018)</td><td>relation classification</td><td>3219</td><td>455</td><td>974</td><td>7</td><td>F1</td></tr><tr><td>STANCE</td><td>SE-2016-6 Mohammad et al. (2016)</td><td>stance detection</td><td>2497</td><td>417</td><td>1249</td><td>3</td><td>Accuracy</td></tr><tr><td>CS</td><td>ACL-ARC Jurgens et al. (2018)</td><td>citation intent</td><td>1688</td><td>114</td><td>139</td><td>6</td><td>F1</td></tr><tr><td>NEWS</td><td>H.PARTISAN Kiesel et al. (2019)</td><td>partisanship</td><td>515</td><td>65</td><td>65</td><td>2</td><td>Accuracy</td></tr></table>
351
+
352
+ # D MORE TRAINING DETAILS
353
+
354
+ We run each hyper-parameter configuration across 3 seeds $\{0,1,2\}$. We use a batch size of 128 for all end-tasks except H.PARTISAN, where we use a batch size of 64. The auxiliary-task batch size, aux_bsz, is shared across all $n$ sub-sampled auxiliary objectives in proportion to each objective's weight, as sketched below.
355
+
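+ A small sketch of this proportional allocation (names illustrative):
+
+ ```python
+ def split_batch(aux_bsz, weights):
+     """Allocate an integer batch size to each objective, proportional to its weight."""
+     total = sum(weights)
+     sizes = [int(aux_bsz * w / total) for w in weights]
+     sizes[0] += aux_bsz - sum(sizes)  # give the rounding remainder to the first task
+     return sizes
+
+ # e.g. aux_bsz=256 shared over 3 objectives weighted 0.5/0.3/0.2 -> [129, 76, 51]
+ print(split_batch(256, [0.5, 0.3, 0.2]))
+ ```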
356
+ We use the AdamW optimizer (Loshchilov & Hutter, 2017), with weight decay of 0.01 for all experiments.
357
+
358
+ Table 5: AANG-TD specific Hyper-parameters
359
+
360
+ <table><tr><td>Hyper-parameter</td><td>Values</td><td>Description</td></tr><tr><td>aux_lr</td><td>1.0, 0.1</td><td>Learning rate for factor vectors - {WAll, WI, WT, WR, WO}</td></tr><tr><td>sopt_lr</td><td>0.1, 0.01</td><td>Learning rate for primary task weighting λe</td></tr><tr><td>nconf_subsamp</td><td>3, 6</td><td>Number of sub-sampled auxiliary tasks.</td></tr><tr><td>learning rate</td><td>1e-3, 1e-4</td><td>Learning rate used for further training of RoBERTa_base</td></tr><tr><td>aux_bsz</td><td>256</td><td>Batch size for auxiliary objectives</td></tr></table>
361
+
362
+ Table 6: AANG-TD+ED specific Hyper-parameters
363
+
364
+ <table><tr><td>Hyper-parameter</td><td>Values</td><td>Description</td></tr><tr><td>aux_lr</td><td>1.0, 0.5, 0.1</td><td>Learning rate for factor vectors - {WAll, WI, WT, WR, WO}</td></tr><tr><td>sopt_lr</td><td>0.1</td><td>Learning rate for primary task weighting λe</td></tr><tr><td>nconf_subsamp</td><td>6, 12, 24</td><td>Number of sub-sampled auxiliary tasks.</td></tr><tr><td>learning rate</td><td>1e-4</td><td>Learning rate used for further training of RoBERTa_base</td></tr><tr><td>aux_bsz</td><td>1024</td><td>Batch size for auxiliary objectives</td></tr></table>
365
+
366
+ Table 7: META-TARTAN Hyper-parameters for single task auxiliary tasks
367
+
368
+ <table><tr><td>Hyper-parameter</td><td>Values</td><td>Description</td></tr><tr><td>sopt_lr</td><td>1.0, 0.1, 0.01</td><td>Learning rate for primary task weighting λe</td></tr><tr><td>learning rate</td><td>1e-3, 1e-4, 5e-5</td><td>Learning rate used for further training of RoBERTa_base</td></tr></table>
369
+
370
+ META-TARTAN introduces a dev-head which is trained sporadically during training for estimating the meta-gradients. We use the following hyper-parameters for training this dev-head: we sample 32 examples (8 examples in the case of H.PARTISAN) and perform full batch gradient descent with
371
+
372
+ a learning rate of 1e-2 for 10 iterations. The dev-head is trained with the AdamW optimizer with weight decay set to 0.1.
373
+
374
+ We copy the end-task-agnostic baseline results from Dery et al. (2021b) when available. We use the hyper-parameters specified for TAPT in Gururangan et al. (2020) to train for the SE-2016-6 task.
375
+
376
+ All models were trained on one of two types of GPUs: NVIDIA A100 or NVIDIA A6000. All models fit within a single GPU. We used gradient accumulation to reach the effective batch sizes used in our experiments.
377
+
378
+ # E GENERALIZATION ERROR BOUND FOR END-TASK AWARE TRAINING
379
+
380
+ # E.1 DEFINITIONS
381
+
382
+ Definition E.1. A function, $f: \Omega \to \mathbb{R}$ is L-Lipschitz if $\forall u, v \in \operatorname{dom}(f)$ :
383
+
384
+ $$
385
+ \| f (u) - f (v) \| \leq L \| u - v \|
386
+ $$
387
+
388
+ Note that being $L$-Lipschitz implies bounded gradients:
389
+
390
+ $$
391
+ \| \nabla f (w) \| \leq L \quad \forall w
392
+ $$
393
+
394
+ Definition E.2. A function, $f: \Omega \to \mathbb{R}$ is $\beta$ -smooth if $\forall u, v \in \Omega$ :
395
+
396
+ $$
397
+ \| \nabla f (u) - \nabla f (v) \| \leq \beta \| u - v \|
398
+ $$
399
+
400
+ Definition E.3. An update rule, $G$ is $\sigma$ -bounded if:
401
+
402
+ $$
403
+ \sup _ {w \in \Omega} \| w - G (w) \| \leq \sigma
404
+ $$
405
+
406
+ Consider the following general setting. There is an unknown distribution $\mathcal{D}_e$ over examples from some space $\mathcal{Z}$. We receive a sample $S = (z_1, \ldots, z_{N_e})$ of $N_e$ examples drawn i.i.d. from $\mathcal{D}_e$. Our goal is to find a model $w$ that parameterizes the function $f_e$ and has small population risk, defined as:
407
+
408
+ Definition E.4. Population Risk
409
+
410
+ $$
411
+ R [ w ] = \mathbf {E} _ {z \sim \mathcal {D} _ {e}} f _ {e} (w; z)
412
+ $$
413
+
414
+ Definition E.5. Empirical Risk
415
+
416
+ Since we have a finite number of samples, we can only compute the empirical risk, which is:
417
+
418
+ $$
419
+ R _ {S} [ w ] = \frac {1}{N _ {e}} \sum_ {i} f _ {e} (w; z _ {i}).
420
+ $$
421
+
422
+ Let $A$ be a potentially randomized algorithm (such as stochastic gradient descent) that is a function of the sample $S$, such that $w = A(S)$.
423
+
424
+ Definition E.6. Generalization Error $\epsilon_{gen}(A, N_e)$
425
+
426
+ $$
427
+ \epsilon_ {g e n} (A, N _ {e}) = \mathbf {E} _ {S, A} \left[ R _ {S} [ A (S) ] - R [ A (S) ] \right]
428
+ $$
429
+
430
+ Definition E.7. Uniform Stability
431
+
432
+ A randomized algorithm $A$ is $\epsilon$-uniformly stable if, for all datasets $S, S' \in \mathcal{Z}^{N_e}$ that differ in at most one example, we have
433
+
434
+ $$
435
+ \sup _ {z} \mathbf {E} _ {A} \left[ f _ {e} (A (S); z) - f _ {e} (A (S ^ {\prime}); z) \right] \leq \epsilon
436
+ $$
437
+
438
+ Here, the expectation is taken only over the internal randomness of $A$ . We will denote by $\epsilon_{\mathrm{stab}}(A, N_e)$ the infimum over all $\epsilon$ for which the above holds.
439
+
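+ For intuition, the toy sketch below (least-squares regression with made-up constants) estimates the loss gap between SGM runs on two datasets that differ in one example, sharing the same internal randomness as in Definition E.7.
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ N_e, T, c = 100, 500, 0.5
+ X = rng.normal(size=(N_e, 5)); y = X @ rng.normal(size=5)
+ X2, y2 = X.copy(), y.copy()
+ X2[0], y2[0] = rng.normal(size=5), rng.normal()   # S' differs in one example
+
+ def run_sgm(Xs, ys, order):
+     w = np.zeros(5)
+     for t, i in enumerate(order, start=1):
+         alpha = c / t                              # step sizes alpha_t <= c / t
+         w -= alpha * (Xs[i] @ w - ys[i]) * Xs[i]   # grad of 0.5 * (x.w - y)^2
+     return w
+
+ order = rng.integers(0, N_e, size=T)               # shared randomness of A
+ w1, w2 = run_sgm(X, y, order), run_sgm(X2, y2, order)
+ z_x, z_y = rng.normal(size=5), 0.0                 # a fixed test example z
+ gap = abs(0.5 * (z_x @ w1 - z_y) ** 2 - 0.5 * (z_x @ w2 - z_y) ** 2)
+ print(f"loss gap on z: {gap:.4f}")                 # a small gap suggests stability
+ ```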
440
+ # E.2 RELEVANT THEOREMS
441
+
442
+ Theorem E.1 (Uniform Stability implies Generalization in expectation). Let Algorithm A be $\epsilon$ -uniformly stable. Then,
443
+
444
+ $$
445
+ \epsilon_ {g e n} (A, N _ {e}) = \left| \mathbf {E} _ {S, A} \big [ R _ {S} [ A (S) ] - R [ A (S) ] \big ] \right| \leq \epsilon_ {s t a b} (A, N _ {e})
446
+ $$
447
+
448
+ For the full proof, see Theorem 2.2 of Hardt et al. (2016).
449
+
450
+ Theorem E.2 (Stochastic Gradient Method is stable). Assume that $f_{e}(\cdot ;z)\in [0,1]$ is an $L$ -Lipschitz and $\beta_{e}$ -smooth loss function for every $z$ . Suppose that we run SGM for $T$ steps with monotonically non-increasing step sizes $\alpha_{t}\leq \frac{c}{t}$ . Then, SGM has uniform stability with:
451
+
452
+ $$
453
+ \epsilon_ {s g m} \leq \frac {1 + \frac {1}{q}}{N _ {e} - 1} \big (2 c L ^ {2} \big) ^ {\frac {1}{q + 1}} T ^ {\frac {q}{q + 1}}
454
+ $$
455
+
456
+ where $q = \beta_{e}c$.
457
+
458
+ We can simplify this to keep only the terms involving $T$ and $N_{e}$:
459
+
460
+ $$
461
+ \epsilon_ {s g m} \lesssim \frac {T ^ {1 - \frac {1}{c \beta_ {e} + 1}}}{N _ {e}} \tag {6}
462
+ $$
463
+
464
+ Proof. For the full proof, see Theorem 3.12 of Hardt et al. (2016).
465
+
466
+ ![](images/ad632c2d0541001fa476c823eb168ae485d762f7bf12f5a0c2efdcebfcfe9a83.jpg)
467
+
468
+ # E.3 GROWTH FUNCTIONS
469
+
470
+ Lemma E.3 (Growth Recursion Under Dynamic Sampling). We consider the Stochastic Gradient update rule $G: \Omega \to \Omega$ :
471
+
472
+ $$
473
+ G _ {f} (w) = w - \alpha \nabla f (w)
474
+ $$
475
+
476
+ Fix an arbitrary sequence of updates $G_{f_1}, \ldots, G_{f_T}$ and another $G_{f_1}', \ldots, G_{f_T}'$. Let $w_0 = w_0'$ be a starting point in $\Omega$, where each $f: \Omega \to \mathbb{R}$, and define
477
+
478
+ $$
479
+ \delta_ {t} = \mathbb {E} _ {f _ {1} \dots f _ {t} \sim \mathcal {P} _ {\lambda}} \left[ \| w _ {t} - w _ {t} ^ {\prime} \| \right]
480
+ $$
481
+
482
+ where $w_{t}, w_{t}^{\prime}$ are defined recursively through:
483
+
484
+ $$
485
+ w _ {t} = G _ {f _ {t}} (w _ {t - 1}) \quad w _ {t} ^ {\prime} = G _ {f _ {t}} ^ {\prime} (w _ {t - 1} ^ {\prime}) \quad t \geq 0
486
+ $$
487
+
488
+ Then we have the recurrence relation:
489
+
490
+ $$
491
+ \delta_ {0} = 0
492
+ $$
493
+
494
+ $$
495
+ \delta_{t+1} \leq \begin{cases} \min \left\{\left(1 + \alpha \lambda_{1} \beta_{1}\right) \delta_{t} + \alpha \lambda_{2} \left(\Delta + 2 L\right),\ \left(1 + \alpha \left(\lambda_{1} \beta_{1} + \lambda_{2} \beta_{2}\right)\right) \delta_{t} \right\} & G_{f_{t}} = G_{f_{t}}^{\prime} \\ \delta_{t} + 2 \sigma_{t} & G_{f_{t}}, G_{f_{t}}^{\prime} \text{ are } \sigma\text{-bounded} \end{cases}
496
+ $$
497
+
498
+ Note that $\mathcal{P}_\lambda$ is a distribution over the support $\{f^1, f^2\}$ with probabilities $\{\lambda_1, \lambda_2 \mid \lambda_1 + \lambda_2 = 1\}$, and $f^1, f^2$ have smoothness $\beta_1, \beta_2$ respectively.
499
+
500
+ Proof. The second bound on $\delta_t$ is taken directly from Lemma 2.5 of Hardt et al. (2016). We now derive the first half of the first bound.
501
+
502
+ $$
503
+ \begin{array}{l} \delta_ {t + 1} = \mathbb {E} _ {f _ {1} \dots f _ {t + 1} \sim \mathcal {P} _ {\lambda}} \left[ \| w _ {t + 1} - w _ {t + 1} ^ {\prime} \| \right] \\ = \mathbb {E} _ {f _ {1} \dots f _ {t} \sim \mathcal {P} _ {\lambda}} \left[ \lambda_ {1} \| G _ {f ^ {1}} (w _ {t}) - G _ {f ^ {1}} ^ {\prime} (w _ {t} ^ {\prime}) \| + \lambda_ {2} \| G _ {f ^ {2}} (w _ {t}) - G _ {f ^ {2}} ^ {\prime} (w _ {t} ^ {\prime}) \| \right] \\ = \mathbb {E} _ {f _ {1} \dots f _ {t} \sim \mathcal {P} _ {\lambda}} \left[ \lambda_ {1} \| w _ {t} - \alpha \nabla f ^ {1} (w _ {t}) - w _ {t} ^ {\prime} + \alpha \nabla f ^ {1} (w _ {t} ^ {\prime}) \| + \lambda_ {2} \| w _ {t} - \alpha \nabla f ^ {2} (w _ {t}) - w _ {t} ^ {\prime} + \alpha \nabla f ^ {2} (w _ {t} ^ {\prime}) \| \right] \\ \leq \mathbb {E} _ {f _ {1} \dots f _ {t} \sim \mathcal {P} _ {\lambda}} \left[ \| w _ {t} - w _ {t} ^ {\prime} \| \right] + \alpha \mathbb {E} _ {f _ {1} \dots f _ {t} \sim \mathcal {P} _ {\lambda}} \left(\lambda_ {1} \| \nabla f ^ {1} \left(w _ {t} ^ {\prime}\right) - \nabla f ^ {1} \left(w _ {t}\right) \| + \lambda_ {2} \| \nabla f ^ {2} \left(w _ {t} ^ {\prime}\right) - \nabla f ^ {2} \left(w _ {t}\right) \|\right) \\ \end{array}
504
+ $$
505
+
506
+ (Triangle Inequality used for above step)
507
+
508
+ $$
509
+ = \delta_ {t} + \alpha \mathbb {E} _ {f _ {1} \dots f _ {t} \sim \mathcal {P} _ {\lambda}} \left(\lambda_ {1} \| \nabla f ^ {1} \left(w _ {t} ^ {\prime}\right) - \nabla f ^ {1} \left(w _ {t}\right) \| + \lambda_ {2} \| \nabla f ^ {2} \left(w _ {t} ^ {\prime}\right) - \nabla f ^ {2} \left(w _ {t}\right) \|\right)
510
+ $$
511
+
512
+ (Without Loss of Generality, let $\beta_{1} \leq \beta_{2}$ )
513
+
514
+ $$
515
+ \begin{array}{l} \leq \delta_{t} + \alpha \mathbb{E}_{f_{1} \dots f_{t} \sim \mathcal{P}_{\lambda}} \left[ \lambda_{1} \beta_{1} \| w_{t} - w_{t}^{\prime} \| + \lambda_{2} \| \nabla f^{2}\left(w_{t}^{\prime}\right) - \nabla f^{2}\left(w_{t}\right) \| \right] \quad (\text{smoothness}) \\ = \delta_{t} + \alpha \lambda_{1} \beta_{1} \delta_{t} + \alpha \lambda_{2} \mathbb{E}_{f_{1} \dots f_{t} \sim \mathcal{P}_{\lambda}} \left[ \| \nabla f^{2}\left(w_{t}^{\prime}\right) - \nabla f^{2}\left(w_{t}\right) \| \right] \\ = \left(1 + \alpha \lambda_{1} \beta_{1}\right) \delta_{t} + \alpha \lambda_{2} \left\| \nabla f^{2}\left(w_{t}^{\prime}\right) - \nabla f^{1}\left(w_{t}^{\prime}\right) + \nabla f^{1}\left(w_{t}^{\prime}\right) - \nabla f^{2}\left(w_{t}\right) \right\| \quad (\text{add zero}) \\ \leq \left(1 + \alpha \lambda_{1} \beta_{1}\right) \delta_{t} + \alpha \lambda_{2} \left(\| \nabla f^{2}\left(w_{t}^{\prime}\right) - \nabla f^{1}\left(w_{t}^{\prime}\right) \| + \| \nabla f^{1}\left(w_{t}^{\prime}\right) - \nabla f^{2}\left(w_{t}\right) \|\right) \quad (\text{triangle inequality}) \\ \leq \left(1 + \alpha \lambda_{1} \beta_{1}\right) \delta_{t} + \alpha \lambda_{2} \left(\Delta + \| \nabla f^{1}\left(w_{t}^{\prime}\right) - \nabla f^{2}\left(w_{t}\right) \|\right) \quad (\text{using Assumption A.1}) \\ \leq \left(1 + \alpha \lambda_{1} \beta_{1}\right) \delta_{t} + \alpha \lambda_{2} \left(\Delta + \| \nabla f^{1}\left(w_{t}^{\prime}\right) \| + \| \nabla f^{2}\left(w_{t}\right) \|\right) \quad (\text{triangle inequality}) \\ \leq \left(1 + \alpha \lambda_{1} \beta_{1}\right) \delta_{t} + \alpha \lambda_{2} (\Delta + 2 L) \quad (L\text{-Lipschitz}) \end{array}
516
+ $$
517
+
518
+ To obtain the second half of the first bound :
519
+
520
+ $$
521
+ \begin{array}{l} \delta_ {t + 1} = \mathbb {E} _ {f _ {1} \dots f _ {t + 1} \sim \mathcal {P} _ {\lambda}} \left[ \| w _ {t + 1} - w _ {t + 1} ^ {\prime} \| \right] \\ = \mathbb {E} _ {f _ {1} \dots f _ {t} \sim \mathcal {P} _ {\lambda}} \left[ \lambda_ {1} \| G _ {f ^ {1}} (w _ {t}) - G _ {f ^ {1}} ^ {\prime} \left(w _ {t} ^ {\prime}\right) \| + \lambda_ {2} \| G _ {f ^ {2}} (w _ {t}) - G _ {f ^ {2}} ^ {\prime} \left(w _ {t} ^ {\prime}\right) \| \right] \\ = \mathbb {E} _ {f _ {1} \dots f _ {t} \sim \mathcal {P} _ {\lambda}} \left[ \lambda_ {1} \| w _ {t} - \alpha \nabla f ^ {1} (w _ {t}) - w _ {t} ^ {\prime} + \alpha \nabla f ^ {1} (w _ {t} ^ {\prime}) \| + \lambda_ {2} \| w _ {t} - \alpha \nabla f ^ {2} (w _ {t}) - w _ {t} ^ {\prime} + \alpha \nabla f ^ {2} (w _ {t} ^ {\prime}) \| \right] \\ \leq \mathbb {E} _ {f _ {1} \dots f _ {t} \sim \mathcal {P} _ {\lambda}} \left[ \| w _ {t} - w _ {t} ^ {\prime} \| \right] + \alpha \mathbb {E} _ {f _ {1} \dots f _ {t} \sim \mathcal {P} _ {\lambda}} \left(\lambda_ {1} \| \nabla f ^ {1} (w _ {t} ^ {\prime}) - \nabla f ^ {1} (w _ {t}) \| + \lambda_ {2} \| \nabla f ^ {2} (w _ {t} ^ {\prime}) - \nabla f ^ {2} (w _ {t}) \|\right) \\ \end{array}
522
+ $$
523
+
524
+ (Triangle Inequality used for above step)
525
+
526
+ $$
527
+ \begin{array}{l} \leq \delta_{t} + \alpha \mathbb{E}_{f_{1} \dots f_{t} \sim \mathcal{P}_{\lambda}} \left[ \lambda_{1} \beta_{1} \| w_{t} - w_{t}^{\prime} \| + \lambda_{2} \beta_{2} \| w_{t} - w_{t}^{\prime} \| \right] \quad (\text{smoothness}) \\ = \delta_{t} + \alpha \lambda_{1} \beta_{1} \mathbb{E}_{f_{1} \dots f_{t} \sim \mathcal{P}_{\lambda}} \left[ \| w_{t} - w_{t}^{\prime} \| \right] + \alpha \lambda_{2} \beta_{2} \mathbb{E}_{f_{1} \dots f_{t} \sim \mathcal{P}_{\lambda}} \left[ \| w_{t} - w_{t}^{\prime} \| \right] \\ = \delta_{t} + \alpha \left(\lambda_{1} \beta_{1} + \lambda_{2} \beta_{2}\right) \delta_{t} \\ = \left(1 + \alpha \left(\lambda_{1} \beta_{1} + \lambda_{2} \beta_{2}\right)\right) \delta_{t} \end{array}
528
+ $$
529
+
530
+ ![](images/fdb462f1f85f6ffb0b3829b7fcf80d1a63392317f7a71724cd768937bcb5c6ff.jpg)
531
+
532
+ # E.4 STABILITY OF DYNAMIC SAMPLING
533
+
534
+ We repeat the description of our auxiliary learning with dynamic sampling setting here for ease of reference.
535
+
536
+ Setting: We are given an auxiliary objective $f_{a}(\cdot; z) \in [0,1]$ with $N_{a}$ samples $S_{a} = (z_{1},\dots ,z_{N_{a}})$ from the distribution $\mathcal{D}_a$. At any iteration of SGD, we sample either the end-task function $f_{e}$ or the auxiliary objective $f_{a}$ according to the probabilities $\lambda_{e}, \lambda_{a}$ with $\lambda_{e} + \lambda_{a} = 1$. Given the chosen objective, we sample a data-point and perform a stochastic gradient descent (SGD) step on it, as sketched below.
537
+
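+ A minimal sketch of this dynamic-sampling loop, where `grad_e` and `grad_a` are hypothetical per-example gradient oracles and `w` is a NumPy-style parameter vector:
+
+ ```python
+ import random
+
+ def dynamic_sampling_sgm(w, grad_e, grad_a, S_e, S_a, lam_e, T, c):
+     """Run T SGM steps, sampling f_e with prob. lam_e and f_a otherwise."""
+     for t in range(1, T + 1):
+         alpha_t = c / t                  # monotonically non-increasing step size
+         if random.random() < lam_e:      # choose the end-task objective f_e
+             z = random.choice(S_e)
+             w = w - alpha_t * grad_e(w, z)
+         else:                            # otherwise the auxiliary objective f_a
+             z = random.choice(S_a)
+             w = w - alpha_t * grad_a(w, z)
+     return w
+ ```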
538
+ An equivalent way to instantiate this procedure is to create $S_A$ by drawing $N' = N_e + N_a$ total samples from the end-task and auxiliary task according to $\mathcal{P}_{\lambda}$. $S_A'$ is then created by replacing one end-task sample in $S_A$. At each step, a sample is drawn, $z_i \sim P_{S_A}$ and $z_i' \sim P_{S_A'}$, and a gradient step is taken on the function corresponding to the set the sample was drawn from.
539
+
540
+ Lemma E.4 (Stability of dynamic sampling). We denote the outputs of $T$ steps of SGM on $S_A$ and $S_A'$ with the dynamically sampled functions, as $w_T$ and $w_T'$ respectively. Then, for every $z_e \in Z_e$ and every $t_0 > 0$ , under both the random update rule and the random permutation rule, we have:
541
+
542
+ $$
543
+ \mathbb {E} \left| f _ {e} \left(w _ {T}; z\right) - f _ {e} \left(w _ {T} ^ {\prime}; z\right) \right| \leq \frac {\gamma t _ {0}}{N ^ {\prime}} \sup _ {w, z _ {e}} f _ {e} \left(w; z _ {e}\right) + L \mathbb {E} [ \delta_ {T} | \delta_ {t _ {0}} = 0 ]
544
+ $$
545
+
546
+ where $N^{\prime} = N_{e} + N_{a}$ and $\gamma = \frac{\lambda_e \cdot N'}{N_e} = \frac{\lambda_e}{\lambda^r}$, with $\lambda^r = N_e / N'$ the fraction of the samples that come from the end-task.
547
+
548
+ Proof. Let $\mathcal{E} = \mathbf{1}[\delta_{t_0} = 0]$ denote the event that $\delta_{t_0} = 0$ . We have
549
+
550
+ $$
551
+ \begin{array}{l} \mathbb {E} \left| f _ {e} \left(w _ {T}; z\right) - f _ {e} \left(w _ {T} ^ {\prime}; z\right) \right| = P \{\mathcal {E} \} \mathbb {E} \left[ \left| f _ {e} \left(w _ {T}; z\right) - f _ {e} \left(w _ {T} ^ {\prime}; z\right) \right| | \mathcal {E} \right] \\ + P \left\{\mathcal {E} ^ {c} \right\} \mathbb {E} \left[ \left| f _ {e} \left(w _ {T}; z\right) - f _ {e} \left(w _ {T} ^ {\prime}; z\right) \right| \mid \mathcal {E} ^ {c} \right] \\ \leq \mathbb {E} \left[ \left| f _ {e} \left(w _ {T}; z\right) - f _ {e} \left(w _ {T} ^ {\prime}; z\right) \right| | \mathcal {E} \right] + P \left\{\mathcal {E} ^ {c} \right\} \cdot \sup _ {w, z _ {e}} f _ {e} (w; z _ {e}) \\ \end{array}
552
+ $$
553
+
554
+ because $f_{e}$ is non-negative
555
+
556
+ $$
557
+ \leq L \mathbb {E} \left[ \| w _ {T} - w _ {T} ^ {\prime} \| | \mathcal {E} \right] + P \left\{\mathcal {E} ^ {c} \right\} \cdot \sup _ {w, z _ {e}} f _ {e} (w; z _ {e})
558
+ $$
559
+
560
+ because $f_{e}$ is $L$ -Lipschitz
561
+
562
+ We now proceed to bound $P\{\mathcal{E}^c\}$. Let $i_* \in [N']$ denote the position in which $S_A$ and $S'_A$ differ, and consider the random variable $I$ denoting the index of the first time step in which SGM uses the example $z_e^{i_*}$. Note that when $I > t_0$, we must have $\delta_{t_0} = 0$, since the two samples are identical up until this point.
563
+
564
+ $$
565
+ P \left\{\mathcal{E}^{c} \right\} = P \left\{\delta_{t_0} \neq 0 \right\} \leq P \left\{I \leq t_{0} \right\}
566
+ $$
567
+
568
+ Using the selection rule specified above (sample either $f_{e}$ or $f_{a}$ according to the probabilities $\lambda_{e}, \lambda_{a}$, and then sample uniformly from the selected task's data), we have:
569
+
570
+ $$
571
+ P \{I \leq t_{0} \} = \sum_{t = 1}^{t_{0}} P \{I = t \} = \sum_{t = 1}^{t_{0}} \left(\lambda_{e} \cdot \frac{1}{N_{e}}\right) = \frac{\lambda_{e} t_{0}}{N_{e}} = \frac{\gamma t_{0}}{N^{\prime}}
572
+ $$
573
+
574
+ Theorem E.5 (Stability Bound on Dynamic Sampling). Assume that $f_{e}(\cdot ;z_{e}),f_{a}(\cdot ;z_{a})\in [0,1]$ are $L$ -Lipschitz and $\beta_{e}$ and $\beta_{a}$ -smooth loss functions. Consider that we have $N^{\prime} = N_{e} + N_{a}$ total samples where $f_{e}$ and $f_{a}$ have $N_{e}$ and $N_{a}$ samples respectively. Suppose that we run SGM for $T$ steps with monotonically non-increasing step sizes $\alpha_{t}\leq \frac{c}{t}$ by dynamically sampling the tasks according to $\lambda_{e}$ and $\lambda_{a}$ . Then, with respect to $f_{e}$ , SGM has uniform stability with:
575
+
576
+ $$
577
+ \epsilon_{\mathrm{stab}} \leq \left(1 + \frac{1}{c \bar{\beta}}\right) \left(\frac{2 \gamma L^{2} c}{N^{\prime} - \gamma} + \rho L c\right)^{\frac{1}{c \bar{\beta} + 1}} \left(\frac{\gamma T}{N^{\prime}}\right)^{\frac{c \bar{\beta}}{1 + c \bar{\beta}}}
578
+ $$
579
+
580
+ $$
581
+ \text{where} \quad \gamma = \frac{\lambda_{e} N^{\prime}}{N_{e}}
582
+ $$
583
+
584
+ Here $\beta^{*} = \min \{\beta_{e},\beta_{a}\}$ and $\lambda^{*}$ is the corresponding weighting of the function with the smaller smoothness.
585
+
586
+ Depending on which gives the tighter bound, the pair $(\bar{\beta},\rho)$ can be:
587
+
588
+ $$
589
+ (\bar {\beta}, \rho) _ {1} = (\lambda^ {*} \beta^ {*}, (1 - \lambda^ {*}) (\Delta + 2 L))
590
+ $$
591
+
592
+ or
593
+
594
+ $$
595
+ (\bar {\beta}, \rho) _ {2} = (\lambda_ {e} \beta_ {e} + \lambda_ {a} \beta_ {a}, 0)
596
+ $$
597
+
598
+ When $(\bar{\beta},\rho)_1$ gives the tighter bound, we can simplify to:
599
+
600
+ $$
601
+ \epsilon_ {\mathrm {g e n}} \lesssim (\Delta) ^ {\frac {1}{1 + c \lambda^ {*} \beta^ {*}}} \left(\frac {\gamma T}{N ^ {\prime}}\right) ^ {1 - \frac {1}{c \lambda^ {*} \beta^ {*} + 1}}
602
+ $$
603
+
604
+ This matches the bound presented in Section 4.
605
+
606
+ Proof. Let $S_A, S_A'$ be two samples of size $N' = N_e + N_a$ as described in Lemma E.4. Consider the gradient updates $G_{f_1}, \ldots, G_{f_T}$ and $G_{f_1}', \ldots, G_{f_T}'$ induced by running SGM on samples $S_A$ and $S_A'$ respectively. Let $w_T$ and $w_T'$ denote the corresponding outputs of SGM. By Lemma E.4 we have:
607
+
608
+ $$
609
+ \mathbb {E} \left| f _ {e} \left(w _ {T}; z\right) - f _ {e} \left(w _ {T} ^ {\prime}; z\right) \right| \leq \frac {\gamma t _ {0}}{N ^ {\prime}} \sup _ {w, z _ {e}} f _ {e} (w; z _ {e}) + L \mathbb {E} [ \delta_ {T} | \delta_ {t _ {0}} = 0 ] \tag {8}
610
+ $$
611
+
612
+ Let $\Psi_T = \mathbb{E}[\delta_T|\delta_{t_0} = 0]$. We will bound $\Psi_T$ as a function of $t_0$ and then minimize over $t_0$. Note the following:
613
+
614
+ - At any step $t$, with probability $\left(1 - \frac{\gamma}{N'}\right)$, the selected sample is the same in both $S_A$ and $S_A'$. In this case $G_{f_t} = G_{f_t}'$ and we use the corresponding expansivity rule from Lemma E.3. This gives:
615
+
616
+ $$
617
+ \delta_ {t + 1} \leq \min \left\{\left(1 + \alpha_ {t} \lambda^ {*} \beta^ {*}\right) \delta_ {t} + \alpha_ {t} \left(1 - \lambda^ {*}\right) (\Delta + 2 L), \left(1 + \alpha_ {t} \left(\lambda_ {e} \beta_ {e} + \lambda_ {a} \beta_ {a}\right)\right) \delta_ {t} \right\}
618
+ $$
619
+
620
+ where $\beta^{*} = \min \{\beta_{e},\beta_{a}\}$ and $\lambda^*$ is the corresponding weighting of the function with the smaller smoothness. To avoid deriving the bound independently for each case, we perform a variable substitution that captures the two cases:
621
+
622
+ $$
623
+ \delta_ {t + 1} \leq \left(1 + \alpha_ {t} \bar {\beta}\right) \delta_ {t} + \alpha_ {t} \rho
624
+ $$
625
+
626
+ Here $\bar{\beta} \in \left\{\lambda^{*}\beta^{*},\ \lambda_{e}\beta_{e} + \lambda_{a}\beta_{a}\right\}$ and $\rho \in \{(1 - \lambda^{*})(\Delta +2L),\ 0\}$. We can present the final bound in terms of these variables, which can be substituted depending on the minimizer.
627
+
628
+ - With probability $\frac{\gamma}{N'}$, the selected example is different. Note that in this case we know we are evaluating the end-task function $f_{e}$. We use that both $G_{f_t}$ and $G_{f_t}'$ are $(\sigma_t = \alpha_t L)$-bounded according to Lemma E.3, since $f_{e}$ is $L$-Lipschitz.
629
+
630
+ Combining the above, we have:
631
+
632
+ $$
633
+ \begin{array}{l} \Psi_{t + 1} \leq \left(1 - \frac{\gamma}{N^{\prime}}\right) \left(\left(1 + \alpha_{t} \bar{\beta}\right) \Psi_{t} + \alpha_{t} \rho\right) + \frac{\gamma}{N^{\prime}} \left(\Psi_{t} + 2 \alpha_{t} L\right) \\ = \left(\frac{\gamma}{N^{\prime}} + \left(1 - \frac{\gamma}{N^{\prime}}\right) \left(1 + \alpha_{t} \bar{\beta}\right)\right) \Psi_{t} + \frac{2 \gamma \alpha_{t} L}{N^{\prime}} + \alpha_{t} \left(1 - \frac{\gamma}{N^{\prime}}\right) \rho \\ = \left(1 + \left(1 - \frac{\gamma}{N^{\prime}}\right) \alpha_{t} \bar{\beta}\right) \Psi_{t} + \frac{\alpha_{t} \left(2 \gamma L + \left(N^{\prime} - \gamma\right) \rho\right)}{N^{\prime}} \\ \leq \left(1 + \left(1 - \frac{\gamma}{N^{\prime}}\right) \frac{c}{t} \bar{\beta}\right) \Psi_{t} + \frac{c \left(2 \gamma L + \left(N^{\prime} - \gamma\right) \rho\right)}{t N^{\prime}} \tag{9} \\ \leq \exp \left(\left(1 - \frac{\gamma}{N^{\prime}}\right) \frac{c}{t} \bar{\beta}\right) \Psi_{t} + \frac{c \bar{\rho}}{t N^{\prime}} \quad \left(\text{using } 1 + x \leq \exp(x)\ \forall x\text{, where } \bar{\rho} = 2 \gamma L + (N^{\prime} - \gamma) \rho\right) \end{array}
634
+ $$
635
+
636
+ We can unwind the recurrence until $\Psi_{t_0} = 0$ .
637
+
638
+ $$
639
+ \begin{array}{l} \Psi_ {T} \leq \sum_ {t = t _ {0} + 1} ^ {T} \left(\prod_ {k = t + 1} ^ {T} \exp \left((1 - \frac {\gamma}{N ^ {\prime}}) \frac {c \bar {\beta}}{k}\right)\right) \left(\frac {c \bar {\rho}}{t N ^ {\prime}}\right) \\ = \sum_ {t = t _ {0} + 1} ^ {T} \left(\frac {c \bar {\rho}}{t N ^ {\prime}}\right) \exp \left((1 - \frac {\gamma}{N ^ {\prime}}) c \bar {\beta} \sum_ {k = t + 1} ^ {T} \frac {1}{k}\right) \\ \leq \sum_ {t = t _ {0} + 1} ^ {T} \left(\frac {c \bar {\rho}}{t N ^ {\prime}}\right) \exp \left((1 - \frac {\gamma}{N ^ {\prime}}) c \bar {\beta} \log \left(\frac {T}{t}\right)\right) \\ = \frac {c \bar {\rho} T ^ {c \bar {\beta} \left(1 - \frac {\gamma}{N ^ {\prime}}\right)}}{N ^ {\prime}} \sum_ {t = t _ {0} + 1} ^ {T} t ^ {- c \bar {\beta} \left(1 - \frac {\gamma}{N ^ {\prime}}\right) - 1} \tag {10} \\ \end{array}
640
+ $$
641
+
642
+ We can upper-bound the sum over $t$ with an integral and drop negative terms:
643
+
644
+ $$
645
+ \begin{array}{l} \leq \frac {c \bar {\rho}}{N ^ {\prime} c \bar {\beta} (1 - \frac {\gamma}{N ^ {\prime}})} \left(\frac {T}{t _ {0}}\right) ^ {c \bar {\beta} (1 - \frac {\gamma}{N ^ {\prime}})} \\ = \frac {\bar {\rho}}{\bar {\beta} \left(N ^ {\prime} - \gamma\right)} \left(\frac {T}{t _ {0}}\right) ^ {c \bar {\beta} \left(1 - \frac {\gamma}{N ^ {\prime}}\right)} \\ \leq \frac {\bar {\rho}}{\bar {\beta} (N ^ {\prime} - \gamma)} \left(\frac {T}{t _ {0}}\right) ^ {c \bar {\beta}} \\ \end{array}
646
+ $$
647
+
648
+ Plugging this bound back into Equation 8 and using the fact that $f_{e} \in [0,1]$:
649
+
650
+ $$
651
+ \mathbb {E} \left| f _ {e} \left(w _ {T}; z\right) - f _ {e} \left(w _ {T} ^ {\prime}; z\right) \right| \leq \frac {\gamma t _ {0}}{N ^ {\prime}} + \frac {L \bar {\rho}}{\bar {\beta} \left(N ^ {\prime} - \gamma\right)} \left(\frac {T}{t _ {0}}\right) ^ {c \bar {\beta}} \tag {11}
652
+ $$
653
+
654
+ Letting $q^{*} = c\bar{\beta}$, we can minimize the R.H.S. by setting:
655
+
656
+ $$
657
+ t _ {0} = \left(\frac {N ^ {\prime} L c \bar {\rho}}{\gamma (N ^ {\prime} - \gamma)}\right) ^ {\frac {1}{q ^ {*} + 1}} T ^ {\frac {q ^ {*}}{q ^ {*} + 1}}
658
+ $$
659
+
660
+ Plugging this in gives us:
661
+
662
+ $$
663
+ \begin{array}{l} \mathbb{E} \left| f_{e}\left(w_{T}; z\right) - f_{e}\left(w_{T}^{\prime}; z\right) \right| \leq \left(\frac{1 + \frac{1}{c \bar{\beta}}}{N^{\prime}}\right) \left(\frac{N^{\prime} L c (2 \gamma L + (N^{\prime} - \gamma) \rho)}{(N^{\prime} - \gamma)}\right)^{\frac{1}{c \bar{\beta} + 1}} (\gamma T)^{\frac{c \bar{\beta}}{1 + c \bar{\beta}}} \tag{12} \\ = \left(1 + \frac{1}{c \bar{\beta}}\right) \left(\frac{2 \gamma L^{2} c}{N^{\prime} - \gamma} + \rho L c\right)^{\frac{1}{c \bar{\beta} + 1}} \left(\frac{\gamma T}{N^{\prime}}\right)^{\frac{c \bar{\beta}}{1 + c \bar{\beta}}} \end{array}
664
+ $$
665
+
666
+ Recall that:
667
+
668
+ $$
669
+ \bar {\beta} = \left\{\lambda^ {*} \beta^ {*}, \lambda_ {e} \beta_ {e} + \lambda_ {a} \beta_ {a} \right\}
670
+ $$
671
+
672
+ $$
673
+ \rho = \left\{\left(1 - \lambda^ {*}\right) \left(\Delta + 2 L\right), 0 \right\}
674
+ $$
675
+
676
+ We choose whichever of the pairs $(\bar{\beta},\rho)$ minimizes the bound.
677
+
678
+ # F DISCUSSION OF GENERALIZATION ERROR BOUNDS
679
+
680
+ # F.1 WHAT DOES THEOREM E.5 SAY?
681
+
682
+ We consider the setting where
683
+
684
+ $$
685
+ \begin{array}{l} \bar {\beta} = \lambda^ {*} \beta^ {*} \\ \rho = (1 - \lambda^ {*}) (\Delta + 2 L) \\ \end{array}
686
+ $$
687
+
688
+ Assuming the $\rho$ term dominates, Equation 12 in this setting becomes:
689
+
690
+ $$
691
+ \begin{array}{l} \epsilon_{\mathrm{gen}}^{\mathrm{auxdyn}} \leq \epsilon_{\mathrm{stab}}^{\mathrm{auxdyn}} \big|_{(\bar{\beta}, \rho)_{1}} \lesssim \sqrt[1 + c \bar{\beta}]{(1 - \lambda^{*}) (\Delta + 2 L)} \left(\frac{\gamma T}{N^{\prime}}\right)^{\frac{c \bar{\beta}}{1 + c \bar{\beta}}} \\ \lesssim (\Delta)^{\frac{1}{1 + c \lambda^{*} \beta^{*}}} \left(\frac{\gamma T}{N^{\prime}}\right)^{1 - \frac{1}{c \lambda^{*} \beta^{*} + 1}} \quad \text{(this is Equation 1 from Section 4)} \tag{13} \end{array}
692
+ $$
693
+
694
+ In going from the first line to the second we consider the setting where $\Delta \gg 2L$ . This is a case where the auxiliary task is sufficiently different from the primary task. Some observations about this setting:
695
+
696
+ 1. A smaller $\Delta$ implies the auxiliary task is more similar to the main task, which tightens the bound.
697
+ 2. The dependence of the bound on $N'$ is a bit more nuanced. Note that increasing $N'$ increases $\gamma$ unless we reduce $\lambda_e$ appropriately. Remember that $\lambda_e$ is the rate at which we sample the primary task. Thus, if we add more auxiliary data but still sample the primary task at the original rate, we are effectively ignoring the extra auxiliary data.
698
+ 3. It might be tempting to assume that we can get arbitrary improvements in this setting by setting $\lambda_{e} = 0$. However, whilst this might reduce the generalization error, it means we see none of the end-task, which would result in a large increase in the training error.
699
+ 4. Note that $\bar{\beta} = \lambda^{*}\beta^{*} \leq \beta_{e}$ always, so we improve the dependence on $T$ compared to Theorem E.2.
700
+ 5. We can optimize $\lambda_{e},\lambda_{a}$ to minimize $\epsilon_{\mathrm{stab}}^{\mathrm{auxdyn}}$; a numeric sketch of the bound's dependence on $\lambda_e$ follows this list.
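+ The sketch below numerically evaluates the simplified bound of Equation 13 for a few values of $\lambda_e$, assuming (for illustration only) that the end-task is the smoother objective, so that $\lambda^* = \lambda_e$ and $\beta^* = \beta_e$; all constants are made up.
+
+ ```python
+ def gen_bound(lam_e, Delta=5.0, c=0.1, beta=2.0, T=10_000, N_e=2_000, N_a=18_000):
+     """Evaluate Delta^(1/(1+q)) * (gamma*T/N')^(q/(1+q)) with q = c*lam_e*beta."""
+     N = N_e + N_a
+     gamma = lam_e * N / N_e              # gamma = lambda_e * N' / N_e
+     q = c * lam_e * beta                 # exponent q = c * lambda* * beta*
+     return Delta ** (1.0 / (1.0 + q)) * (gamma * T / N) ** (q / (1.0 + q))
+
+ for lam_e in (0.1, 0.3, 0.5, 0.9):
+     print(f"lambda_e={lam_e:.1f}  bound ~ {gen_bound(lam_e):.3f}")
+ ```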
2023/AANG _ Automating Auxiliary Learning/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:30c538a47469b129dd22d16489ce42cbaca03a7bc9069f5b31dcd97ff326bd09
3
+ size 962148
2023/AANG _ Automating Auxiliary Learning/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/ACMP_ Allen-Cahn Message Passing with Attractive and Repulsive Forces for Graph Neural Networks/d1c48036-5cc9-44ce-aeae-bb7b2deba30a_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/ACMP_ Allen-Cahn Message Passing with Attractive and Repulsive Forces for Graph Neural Networks/d1c48036-5cc9-44ce-aeae-bb7b2deba30a_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/ACMP_ Allen-Cahn Message Passing with Attractive and Repulsive Forces for Graph Neural Networks/d1c48036-5cc9-44ce-aeae-bb7b2deba30a_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:66b9cd8408e78dc8a6f4578eca866668fd730a649ab619f6efdf814b2fb32160
3
+ size 2741560
2023/ACMP_ Allen-Cahn Message Passing with Attractive and Repulsive Forces for Graph Neural Networks/full.md ADDED
The diff for this file is too large to render. See raw diff
 
2023/ACMP_ Allen-Cahn Message Passing with Attractive and Repulsive Forces for Graph Neural Networks/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8d3fd8b1d86e4d26f7218924c98fff3aa38558b41749cb99a4c8aa19d4ae81de
3
+ size 1656885
2023/ACMP_ Allen-Cahn Message Passing with Attractive and Repulsive Forces for Graph Neural Networks/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/Accurate Image Restoration with Attention Retractable Transformer/c75ec409-a9b0-4c65-a0dd-0defc55f9eb5_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/Accurate Image Restoration with Attention Retractable Transformer/c75ec409-a9b0-4c65-a0dd-0defc55f9eb5_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/Accurate Image Restoration with Attention Retractable Transformer/c75ec409-a9b0-4c65-a0dd-0defc55f9eb5_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0d692964f400b2d7e436bd5a5a1f35c2af6e0e518a61e3fa929d593317533957
3
+ size 2245369
2023/Accurate Image Restoration with Attention Retractable Transformer/full.md ADDED
@@ -0,0 +1,412 @@
1
+ # ACCURATE IMAGE RESTORATION WITH ATTENTION RETRACTABLE TRANSFORMER
2
+
3
+ Jiale Zhang $^{1}$ , Yulun Zhang $^{2*}$ , Jinjin Gu $^{3,4}$ , Yongbing Zhang $^{5}$ , Linghe Kong $^{1*}$ , Xin Yuan $^{6}$
4
+
5
+ $^{1}$ Shanghai Jiao Tong University, $^{2}$ ETH Zürich, $^{3}$ Shanghai AI Laboratory,
6
+
7
+ $^{4}$ The University of Sydney, $^{5}$ Harbin Institute of Technology (Shenzhen), $^{6}$ Westlake University
8
+
9
+ # ABSTRACT
10
+
11
+ Recently, Transformer-based image restoration networks have achieved promising improvements over convolutional neural networks due to parameter-independent global interactions. To lower computational cost, existing works generally limit self-attention computation within non-overlapping windows. However, each group of tokens is always drawn from a dense area of the image. This is considered a dense attention strategy, since token interactions are restricted to dense regions. Obviously, this strategy can result in restricted receptive fields. To address this issue, we propose Attention Retractable Transformer (ART) for image restoration, which presents both dense and sparse attention modules in the network. The sparse attention module allows tokens from sparse areas to interact and thus provides a wider receptive field. Furthermore, the alternating application of dense and sparse attention modules greatly enhances the representation ability of the Transformer while providing retractable attention on the input image. We conduct extensive experiments on image super-resolution, denoising, and JPEG compression artifact reduction tasks. Experimental results validate that our proposed ART outperforms state-of-the-art methods on various benchmark datasets both quantitatively and visually. We also provide code and models at https://github.com/gladzhang/ART.
12
+
13
+ # 1 INTRODUCTION
14
+
15
+ Image restoration aims to recover the high-quality image from its low-quality counterpart and includes a series of computer vision applications, such as image super-resolution (SR) and denoising. It is an ill-posed inverse problem, since there is a huge number of candidates for any original input. Recently, deep convolutional neural networks (CNNs) have been used to design various models Kim et al. (2016b); Zhang et al. (2020; 2021b) for image restoration. SRCNN Dong et al. (2014) first introduced deep CNNs into image SR. Several representative works then utilized residual learning (e.g., EDSR Lim et al. (2017)) and attention mechanisms (e.g., RCAN Zhang et al. (2018b)) to train very deep networks for image SR. Meanwhile, a number of methods were also proposed for image denoising, such as DnCNN Zhang et al. (2017a), RPCNN Xia & Chakrabarti (2020), and BRDNet Tian et al. (2020). These CNN-based networks have achieved remarkable performance.
16
+
17
+ However, due to parameter-dependent receptive field scaling and content-independent local interactions of convolutions, CNNs have a limited ability to model long-range dependencies. To overcome this limitation, recent works have begun to introduce self-attention into computer vision systems Hu et al. (2019); Ramachandran et al. (2019); Wang et al. (2020); Zhao et al. (2020). Since Transformer has been shown to achieve state-of-the-art performance in natural language processing Vaswani et al. (2017) and high-level vision tasks Dosovitskiy et al. (2021); Touvron et al. (2021); Wang et al. (2021); Zheng et al. (2021); Chu et al. (2021), researchers have been investigating Transformer-based image restoration networks Yang et al. (2020); Wang et al. (2022b). Chen et al. proposed a pre-trained image processing Transformer named IPT Chen et al. (2021a). Liang et al. proposed a strong baseline model named SwinIR Liang et al. (2021), based on Swin Transformer Liu et al. (2021), for image restoration. Zamir et al. proposed an efficient Transformer model with a U-Net structure named Restormer Zamir et al. (2022) and achieved state-of-the-art results on several image restoration tasks. Compared with CNN-based models, higher performance can be achieved when using Transformers.
18
+
19
+ Despite showing outstanding performance, existing Transformer backbones for image restoration still suffer from serious defects. As we know, SwinIR Liang et al. (2021) takes advantage of a shifted-window scheme to limit self-attention computation within non-overlapping windows. On the other hand, IPT Chen et al. (2021a) directly splits features into $P \times P$ patches to shrink the original feature map $P^2$ times, treating each patch as a token. In short, these methods compute self-attention over shorter token sequences, and the tokens in each group are always from a dense area of the image. This is a dense attention strategy, which causes a restricted receptive field. To address this issue, we employ a sparse attention strategy: we extract each group of tokens from a sparse area of the image to provide interactions. This is similar in spirit to previous studies (e.g., GG-Transformer Yu et al. (2021), MaxViT Tu et al. (2022b), CrossFormer Wang et al. (2022a)), but different from them: our proposed sparse attention module
20
+
21
+ focuses on equal-scale features. Besides, we pay more attention to pixel-level information than semantic-level information. Since sparse attention has not been well explored for low-level vision problems, our proposed method bridges this gap.
22
+
23
+ ![](images/0d63d2a8f2a47ade3e4503cd9d781ee44e0e9ed9a7a3b1439c9b460c4d8d953e.jpg)
24
+ Figure 1: (a) Dense attention and sparse attention strategies of our ART. (b) Dense attention strategy with shifted window of SwinIR.
25
+
26
+ We further propose the Attention Retractable Transformer, named ART, for image restoration. Following RCAN Zhang et al. (2018b) and SwinIR Liang et al. (2021), we retain the residual-in-residual structure Zhang et al. (2018b) for the model architecture. Based on joint dense and sparse attention strategies, we design two types of self-attention blocks. We utilize fixed non-overlapping local windows to obtain tokens for the first block, named dense attention block (DAB), and sparse grids to obtain tokens for the second block, named sparse attention block (SAB). To better understand the difference between our work and SwinIR, we show a visual comparison in Fig. 1. As we can see, the image is divided into four groups and tokens in each group interact with each other. Visibly, a token in our sparse attention block can learn relationships from farther tokens, while one in the dense attention block of SwinIR cannot. At the same computational cost, the sparse attention block has a stronger ability to compensate for the lack of global information. We treat our dense and sparse attention blocks as successive ones and apply them to extract deep features. In practice, the alternating application of DAB and SAB provides retractable attention that lets the model capture both local and global receptive fields. Our main contributions can be summarized as follows:
27
+
28
+ - We propose the sparse attention to compensate the defect of mainly using dense attention in existing Transformer-based image restoration networks. The interactions among tokens extracted from a sparse area of an image can bring a wider receptive field to the module.
29
+ - We further propose Attention Retractable Transformer (ART) for image restoration. Our ART offers two types of self-attention blocks to obtain retractable attention on the input feature. With the alternating application of dense and sparse attention blocks, the Transformer model can capture local and global receptive field simultaneously.
30
+ - We employ ART to train an effective Transformer-based network. We conduct extensive experiments on three image restoration tasks: image super-resolution, denoising, and JPEG compression artifact reduction. Our method achieves state-of-the-art performance.
31
+
32
+ # 2 RELATED WORK
33
+
34
+ Image Restoration. With the rapid development of CNN, numerous works based on CNN have been proposed to solve image restoration problems Anwar & Barnes (2020); Dudhane et al. (2022); Zamir et al. (2020; 2021); Li et al. (2022); Chen et al. (2021b) and achieved superior performance over conventional restoration approaches Timofte et al. (2013); Michaeli & Irani (2013); He et al. (2010). The pioneering work SRCNN Dong et al. (2014) was firstly proposed for image SR. DnCNN Zhang et al. (2017a) was a representative image denoising method. Following these works, various model
35
+
36
+ ![](images/122ffc54d975f487d07d7e9bf312102ae0eae471a1ca2fb2edffbc2e678aeb11.jpg)
37
+ Figure 2: (a) The architecture of our proposed ART for image restoration. (b) The inner structure of two successive attention blocks DAB and SAB with two attention modules D-MSA and S-MSA.
+
+ designs and improved techniques have been introduced into the basic CNN frameworks. These techniques include but are not limited to the residual structure Kim et al. (2016a); Zhang et al. (2021a), skip connection Zhang et al. (2018b; 2020), dropout Kong et al. (2022), and attention mechanisms Dai et al. (2019); Niu et al. (2020). Recently, due to the limited ability of CNNs to model long-range dependencies, researchers have started to replace the convolution operator with pure self-attention modules for image restoration Yang et al. (2020); Liang et al. (2021); Zamir et al. (2022); Chen et al. (2021a).
38
+
39
+ ![](images/2bc1949ee5d0f7498c6e8e6a85e379fd917ab63e1fe683d07e71cd717a563dba.jpg)
40
+
41
+ Vision Transformer. Transformer has achieved impressive performance in machine translation tasks Vaswani et al. (2017). Due to its content-dependent global receptive field, it has been introduced to improve computer vision systems in recent years. Dosovitskiy et al. Dosovitskiy et al. (2021) proposed ViT and introduced Transformer into image recognition by projecting large image patches into token sequences. Tu et al. proposed MaxViT Tu et al. (2022b) as an efficient Vision Transformer introducing multi-axis attention. Wang et al. proposed CrossFormer Wang et al. (2022a) to build interactions among long- and short-distance tokens. Yu et al. proposed GG-Transformer Yu et al. (2021), which performs self-attention on adaptively-dilated partitions of the input. Inspired by this strong ability to learn long-range dependencies, researchers have also investigated the usage of Transformer for low-level vision tasks Yang et al. (2020); Chen et al. (2021a); Liang et al. (2021); Zamir et al. (2022); Wang et al. (2022b). However, existing works still suffer from restricted receptive fields due to mainly using a dense attention strategy. Very recently, Tu et al. proposed an MLP-based network named MAXIM Tu et al. (2022a) to introduce dilated spatial communications into image processing. It further demonstrates that sparse interactions of visual elements are important for solving low-level problems. In our proposed method, we use both dense and sparse attention strategies to build the network, which can capture wider global interactions. As sparse attention has not been well explored for low-level vision problems, our proposed method bridges this gap.
42
+
43
+ # 3 PROPOSED METHOD
44
+
45
+ # 3.1 OVERALL ARCHITECTURE
46
+
47
+ The overall architecture of our ART is shown in Fig. 2. Following RCAN Zhang et al. (2018b), ART employs the residual-in-residual structure to construct a deep feature extraction module. Given a degraded image $I_{LQ} \in \mathbb{R}^{H \times D \times C_{in}}$ ($H, D,$ and $C_{in}$ are the height, width, and input channels of the input), ART first applies a $3 \times 3$ convolutional layer (Conv) to obtain the shallow feature $F_0 \in \mathbb{R}^{H \times D \times C}$, where $C$ is the dimension of the new feature embedding. Next, the shallow feature is normalized and fed into the residual groups, which consist of the core Transformer attention blocks. The deep feature is extracted and then passes through another $3 \times 3$ Conv to get further feature embeddings $F_1$. Then we use an element-wise sum to obtain the final feature map $F_R = F_0 + F_1$. Finally, we employ the restoration module to generate the high-quality image $I_{HQ}$ from the feature map $F_R$.
48
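+ As a rough illustration of this pipeline (a sketch under our own naming, not the released code; the stub convolutions stand in for the Transformer residual groups described next), the data flow can be written in PyTorch as:
+
+ ```python
+ import torch.nn as nn
+
+ class ARTSketch(nn.Module):
+     def __init__(self, c_in=3, c=180, n_groups=6):
+         super().__init__()
+         self.shallow = nn.Conv2d(c_in, c, 3, padding=1)        # produces F_0
+         # Stub: each residual group actually holds N_B pairs of DAB/SAB blocks.
+         self.groups = nn.Sequential(
+             *[nn.Conv2d(c, c, 3, padding=1) for _ in range(n_groups)])
+         self.refine = nn.Conv2d(c, c, 3, padding=1)            # produces F_1
+         self.reconstruct = nn.Conv2d(c, c_in, 3, padding=1)    # restoration module
+
+     def forward(self, i_lq):
+         f0 = self.shallow(i_lq)              # shallow feature F_0
+         f1 = self.refine(self.groups(f0))    # deep feature F_1
+         f_r = f0 + f1                        # F_R = F_0 + F_1
+         # Global skip used for tasks without upsampling (see Eq. (3) below).
+         return self.reconstruct(f_r) + i_lq
+ ```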
+
49
+ Residual Group. We use $N_{G}$ successive residual groups to extract the deep feature. Each residual group consists of $N_{B}$ pairs of attention blocks. We design two successive attention blocks as shown in Fig. 2(b). The input feature $x_{l-1}$ passes through layer normalization (LN) and multi-head self-attention (MSA). After adding the shortcut, the output $x'_l$ is fed into the multi-layer perceptron (MLP). $x_l$ is the final output at the $l$-th block. The process is formulated as
50
+
51
+ $$
52
+ x'_{l} = \operatorname{MSA}\left(\operatorname{LN}\left(x_{l-1}\right)\right) + x_{l-1}, \tag{1}
53
+ $$
54
+
55
+ $$
56
+ x_{l} = \operatorname{MLP}\left(\operatorname{LN}\left(x'_{l}\right)\right) + x'_{l}.
57
+ $$
58
+
59
+ Lastly, we also apply a $3 \times 3$ convolutional layer to refine the feature embeddings. As shown in Fig. 2(a), a residual connection is employed to obtain the final output of each residual group module.
60
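+ A minimal sketch of one such attention block, implementing Eqs. (1)-(2) with PyTorch's stock `nn.MultiheadAttention` as a placeholder for D-MSA/S-MSA (the actual modules differ in how tokens are grouped, as described in Sec. 3.2; all names and sizes here are illustrative):
+
+ ```python
+ import torch.nn as nn
+
+ class AttentionBlock(nn.Module):
+     def __init__(self, dim=180, heads=6, mlp_ratio=4):
+         super().__init__()
+         self.norm1 = nn.LayerNorm(dim)
+         self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
+         self.norm2 = nn.LayerNorm(dim)
+         self.mlp = nn.Sequential(
+             nn.Linear(dim, dim * mlp_ratio), nn.GELU(),
+             nn.Linear(dim * mlp_ratio, dim))
+
+     def forward(self, x):                    # x: (B, N, C) token sequence
+         y = self.norm1(x)
+         x = self.attn(y, y, y, need_weights=False)[0] + x  # Eq. (1)
+         x = self.mlp(self.norm2(x)) + x                    # Eq. (2)
+         return x
+ ```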
+
61
+ ![](images/dd6b2d52b79805caf5f86f4a0dac58b2287b5bfa1f392e471fe057765f80d7e8.jpg)
62
+ (a) Dense Attention
63
+
64
+ ![](images/782f606a5cd7f2293bd9c7d7251449abc277e1e3de3aba1ead23fb5f8792c5f8.jpg)
65
+ (b) Sparse Attention
66
+ Figure 3: (a) Dense attention strategy. Tokens of each group are from a dense area of the image. (b) Sparse attention strategy. Tokens of each group are from a sparse area of the image.
67
+
68
+ Restoration Module. The restoration module is applied as the last stage of the framework to obtain the reconstructed image. Image restoration tasks can be divided into two categories according to whether upsampling is used. For image super-resolution, we take advantage of the sub-pixel convolutional layer Shi et al. (2016) to upsample the final feature map $F_{R}$. Next, we use a convolutional layer to get the final reconstructed image $I_{HQ}$. The whole process is formulated as
69
+
70
+ $$
71
+ I_{HQ} = \operatorname{Conv}\left(\operatorname{Upsample}\left(F_{R}\right)\right). \tag{2}
72
+ $$
73
+
74
+ For tasks without upsampling, such as image denoising, we directly use a convolutional layer to reconstruct the high-quality image. Besides, we add the original image to the last output of the restoration module for better performance. We formulate the whole process as
75
+
76
+ $$
77
+ I_{HQ} = \operatorname{Conv}\left(F_{R}\right) + I_{LQ}. \tag{3}
78
+ $$
79
+
80
+ Loss Function. We optimize our ART with two types of loss functions. There are various well-studied loss functions, such as the $L_{2}$ loss Dong et al. (2016); Sajjadi et al. (2017); Tai et al. (2017), $L_{1}$ loss Lai et al. (2017); Zhang et al. (2020), and Charbonnier loss Charbonnier et al. (1994). Following previous works Zhang et al. (2018b); Liang et al. (2021), we utilize the $L_{1}$ loss for image super-resolution (SR) and the Charbonnier loss for image denoising and compression artifact reduction. For image SR, the goal of training ART is to minimize the $L_{1}$ loss function, which is formulated as
81
+
82
+ $$
83
+ \mathcal{L} = \left\| I_{HQ} - I_{G} \right\|_{1}, \tag{4}
84
+ $$
85
+
86
+ where $I_{HQ}$ is the output of ART and $I_{G}$ is the ground-truth image. For image denoising and JPEG compression artifact reduction, we utilize the Charbonnier loss with hyper-parameter $\varepsilon$ set to $10^{-3}$, which is
87
+
88
+ $$
89
+ \mathcal{L} = \sqrt{\left\| I_{HQ} - I_{G} \right\|^{2} + \varepsilon^{2}}. \tag{5}
90
+ $$
91
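+ A one-line sketch of this Charbonnier loss; averaging the per-pixel terms over the batch is our reduction choice, not something specified above:
+
+ ```python
+ import torch
+
+ def charbonnier_loss(pred, target, eps=1e-3):
+     # sqrt((x - y)^2 + eps^2) per pixel, then averaged.
+     return torch.sqrt((pred - target) ** 2 + eps ** 2).mean()
+ ```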
+
92
+ # 3.2 ATTENTION RETRACTABLE TRANSFORMER
93
+
94
+ We elaborate on the details of our two proposed types of self-attention blocks in this section. As plotted in Fig. 2(b), the interactions of tokens are concentrated in the multi-head self-attention (MSA) module. We formulate the calculation process of MSA as
95
+
96
+ $$
97
+ \operatorname{MSA}(X) = \operatorname{Softmax}\left(\frac{QK^{T}}{\sqrt{C}}\right)V, \tag{6}
98
+ $$
99
+
100
+ where $Q, K, V \in \mathbb{R}^{N \times C}$ are respectively the query, key, and value obtained from linear projections of the input $X \in \mathbb{R}^{N \times C}$. $N$ is the length of the token sequence, and $C$ is the dimension of each token. Here we assume that the number of heads is 1, reducing MSA to single-head self-attention for simplicity.
101
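+ For concreteness, Eq. (6) with a single head can be sketched as follows, assuming `q`, `k`, and `v` are the already-projected $(N, C)$ tensors:
+
+ ```python
+ import torch
+
+ def single_head_attention(q, k, v):
+     # Softmax(Q K^T / sqrt(C)) V, as in Eq. (6).
+     scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)  # (N, N)
+     return torch.softmax(scores, dim=-1) @ v                 # (N, C)
+ ```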
+
102
+ Multi-head Self Attention. Given an image of size $H \times D$, a vision Transformer first splits the raw image into numerous patches. These patches are projected by convolutions with stride size $P$. The projected feature map $\hat{X} \in \mathbb{R}^{h \times w \times C}$ is obtained with $h = \frac{H}{P}$ and $w = \frac{D}{P}$. Common MSA uses all the tokens extracted from the whole feature map and sends them to the self-attention module to learn the relationships between them. It suffers from a high computational cost, which is
103
+
104
+ $$
105
+ \Omega(\mathrm{MSA}) = 4hwC^{2} + 2(hw)^{2}C. \tag{7}
106
+ $$
107
+
108
+ To lower the computational cost, existing works generally utilize non-overlapping windows to obtain shorter token sequences. However, they mainly consider tokens from a dense area of an image. Different from them, we propose retractable attention strategies, which provide interactions of tokens not only from dense areas but also from sparse areas of an image to obtain a wider receptive field.
109
+
110
+ **Dense Attention.** As shown in Fig. 3(a), dense attention allows each token to interact with a small number of tokens from neighboring positions within a non-overlapping $W \times W$ window. All tokens
111
+
112
+ <table><tr><td>Methods</td><td>Solving problems</td><td>Structure</td><td>Interval of extracted tokens</td><td>Representation of tokens</td><td>Using long-distance residual connection</td></tr><tr><td>GG-Transformer Yu et al. (2021)</td><td>High-level</td><td>Pyramid</td><td>Changed</td><td>Semantic-level</td><td>No</td></tr><tr><td>MaxViT Tu et al. (2022b)</td><td>High-level</td><td>Pyramid</td><td>Changed</td><td>Semantic-level</td><td>No</td></tr><tr><td>CrossFormer Wang et al. (2022a)</td><td>High-level</td><td>Pyramid</td><td>Changed</td><td>Semantic-level</td><td>No</td></tr><tr><td>ART (Ours)</td><td>Low-level</td><td>Isotropic</td><td>Unchanged</td><td>Pixel-level</td><td>Yes</td></tr></table>
113
+
114
+ Table 1: Comparison to related works. The differences between our ART and other works.
115
+
116
+ are split into several groups, and each group has $W \times W$ tokens. We apply these groups to compute self-attention $\frac{h}{W} \times \frac{w}{W}$ times, and the computational cost of the new module, named D-MSA, is
117
+
118
+ $$
119
+ \Omega(\text{D-MSA}) = \left(4W^{2}C^{2} + 2W^{4}C\right) \times \frac{h}{W} \times \frac{w}{W} = 4hwC^{2} + 2W^{2}hwC. \tag{8}
120
+ $$
121
+
122
+ Sparse Attention. Meanwhile, as shown in Fig. 3(b), we propose sparse attention to allow each token to interact with a small number of tokens drawn from sparse positions with interval size $I$. After that, all tokens are again split into several groups, and each group has $\frac{h}{I} \times \frac{w}{I}$ tokens. We further utilize these groups to compute self-attention $I \times I$ times. We name the new multi-head self-attention module S-MSA, and the corresponding computational cost is
123
+
124
+ $$
125
+ \Omega(\text{S-MSA}) = \left(4\frac{h}{I} \times \frac{w}{I}C^{2} + 2\left(\frac{h}{I} \times \frac{w}{I}\right)^{2}C\right) \times I \times I = 4hwC^{2} + 2\frac{h}{I}\frac{w}{I}hwC. \tag{9}
126
+ $$
127
+
128
+ By contrast, our proposed D-MSA and S-MSA modules have lower computational cost since $W^2 \ll hw$ and $\frac{h}{I} \frac{w}{I} < hw$. After computing all groups, the outputs are merged to form the original-size feature map. In practice, we apply these two attention strategies to design two types of self-attention blocks, named the dense attention block (DAB) and the sparse attention block (SAB), as plotted in Fig. 2.
129
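+ The difference between the two token groupings can be sketched with plain reshapes of a `torch.Tensor` (our own illustration; it assumes $h$ and $w$ are divisible by $W$ and $I$, which the padding and masking in Sec. 3.4 guarantee):
+
+ ```python
+ def dense_groups(x, W):                  # x: (B, h, w, C) torch.Tensor
+     B, h, w, C = x.shape
+     x = x.view(B, h // W, W, w // W, W, C)
+     x = x.permute(0, 1, 3, 2, 4, 5)      # gather each W x W window
+     return x.reshape(-1, W * W, C)       # one group per window
+
+ def sparse_groups(x, I):                 # x: (B, h, w, C) torch.Tensor
+     B, h, w, C = x.shape
+     x = x.view(B, h // I, I, w // I, I, C)
+     x = x.permute(0, 2, 4, 1, 3, 5)      # gather tokens sharing an (i, j) offset
+     return x.reshape(-1, (h // I) * (w // I), C)  # one group per grid offset
+ ```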
+
130
+ Successive Attention Blocks. We propose the alternating application of these two blocks. As local interactions have higher priority, we fix the order so that DAB comes before SAB. Besides, we add a long-distance residual connection between every three pairs of blocks. We show the effectiveness of this joint application with residual connections in the supplementary material.
131
+
132
+ Attention Retractable Transformer. We demonstrate that the application of these two blocks enables our model to capture local and global receptive fields simultaneously. Treating the successive attention blocks as a whole, we obtain a new type of Transformer, named the Attention Retractable Transformer, which provides interactions for both local dense tokens and global sparse tokens.
133
+
134
+ # 3.3 DIFFERENCES TO RELATED WORKS
135
+
136
+ We summarize the differences between our proposed approach, ART, and closely related works in Tab. 1, concluding with three points. (1) Different tasks. GG-Transformer Yu et al. (2021), MaxViT Tu et al. (2022b), and CrossFormer Wang et al. (2022a) are proposed to solve high-level vision problems. Our ART is the only one to employ sparse attention in the low-level vision field. (2) Different designs of sparse attention. In the attention part, GG-Transformer utilizes adaptively-dilated partitions, MaxViT utilizes fixed-size grid attention, and CrossFormer utilizes cross-scale long-distance attention. As their layers get deeper, the interval of tokens in sparse attention becomes smaller and the channels of tokens become larger, so each token learns more semantic-level information. In contrast, the interval and the channel dimension of tokens in our ART stay unchanged, and each token represents accurate pixel-level information. (3) Different model structures. Different from these works, which use a pyramid model structure, our proposed ART enjoys an isotropic structure. Besides, we provide long-distance residual connections between several Transformer encoders, which enables the features of deep layers to retain more low-frequency information from shallow layers. More discussion can be found in the supplementary material.
137
+
138
+ # 3.4 IMPLEMENTATION DETAILS
139
+
140
+ We introduce here some details about how to apply ART to construct image restoration models. Firstly, the residual group number, DAB number, and SAB number in each group are set to 6, 3, and 3, respectively. Secondly, all the convolutional layers are equipped with a $3 \times 3$ kernel, stride 1, and padding 1, so the height and width of the feature map remain unchanged. In practice, we treat each $1 \times 1$ patch as a token. Besides, we set the channel dimension to 180 for most layers, except for the shallow feature extraction and the image reconstruction process. Thirdly, the window size in DAB is set to 8, and the interval size in SAB is adjusted according to the task, as discussed in Sec. 4.2. Lastly, to handle the division into windows and sparse grids, we apply padding and masking strategies to the input feature map of self-attention, so that the number of divisions is always an integer.
141
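+ Such a padding step might look like the following sketch (our own illustration; the attention mask over padded tokens is omitted):
+
+ ```python
+ import torch.nn.functional as F
+
+ def pad_to_multiple(x, size):
+     # x: (B, C, H, W); pad bottom/right so H and W divide evenly by `size`.
+     _, _, H, W = x.shape
+     pad_h = (size - H % size) % size
+     pad_w = (size - W % size) % size
+     # F.pad takes (left, right, top, bottom) for the last two dimensions.
+     return F.pad(x, (0, pad_w, 0, pad_h)), (H, W)  # keep (H, W) to crop back
+ ```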
+
142
+ ![](images/a769accf9e8058d6a6977bca42f9b42f21aee09343d005ff6da4fe00589e8ebb.jpg)
143
+ Figure 4: Left: PSNR (dB) comparison of our ART using all dense attention blocks (DAB), all sparse attention blocks (SAB), and alternating DAB and SAB. Middle: PSNR (dB) comparison of our ART using a large interval size in the sparse attention blocks, i.e., $(8,8,8,8,8,8)$ for the six residual groups, a medium interval size, i.e., $(8,8,6,6,4,4)$, and a small interval size, i.e., $(4,4,4,4,4,4)$. Right: PSNR (dB) comparison of SwinIR, ART-S, and ART.
144
+
145
+ ![](images/9145ce30e0bdc3d33693b5e101fa01eed70d8034c25272ee27a17a29f13a0832.jpg)
146
+
147
+ ![](images/0b7eccae34b30567512fcb134e615128bd39853a6947852d1ebe16e95deb7aaf.jpg)
148
+
149
+ # 4 EXPERIMENTAL RESULTS
150
+
151
+ # 4.1 EXPERIMENTAL SETTINGS
152
+
153
+ Data and Evaluation. We conduct experiments on three image restoration tasks: image SR, denoising, and JPEG compression artifact reduction (CAR). For image SR, following previous works Zhang et al. (2018b); Haris et al. (2018), we use DIV2K Timofte et al. (2017) and Flickr2K Lim et al. (2017) as training data, and Set5 Bevilacqua et al. (2012), Set14 Zeyde et al. (2010), B100 Martin et al. (2001), Urban100 Huang et al. (2015), and Manga109 Matsui et al. (2017) as test data. For image denoising and JPEG CAR, same as SwinIR Liang et al. (2021), we use DIV2K, Flickr2K, BSD500 Arbelaez et al. (2010), and WED Ma et al. (2016) as training data. We use BSD68 Martin et al. (2001), Kodak24 Franzen (1999), McMaster Zhang et al. (2011), and Urban100 as test data for image denoising; Classic5 Foi et al. (2007) and LIVE1 Sheikh et al. (2006) are the test data for JPEG CAR. Note that we crop large input images into $200 \times 200$ partitions with overlapping pixels during inference. Following Lim et al. (2017), we adopt the self-ensemble strategy to further improve the performance of our ART and name the result ART+. We evaluate experimental results with PSNR and SSIM Wang et al. (2004) values on the Y channel of images transformed to YCbCr space.
154
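+ As a sketch of this evaluation protocol (assuming the standard BT.601 luma conversion used by common SR benchmarks):
+
+ ```python
+ import numpy as np
+
+ def rgb_to_y(img):
+     # img: float RGB in [0, 1], shape (H, W, 3); BT.601 luma, rescaled to [0, 1].
+     r, g, b = img[..., 0], img[..., 1], img[..., 2]
+     return (16.0 + 65.481 * r + 128.553 * g + 24.966 * b) / 255.0
+
+ def psnr_y(restored, reference):
+     mse = np.mean((rgb_to_y(restored) - rgb_to_y(reference)) ** 2)
+     return 10.0 * np.log10(1.0 / mse)
+ ```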
+
155
+ Training Settings. Data augmentation is performed on the training data through horizontal flips and random rotations of $90^{\circ}$, $180^{\circ}$, and $270^{\circ}$. Besides, we crop the original images into $64 \times 64$ patches as the basic training inputs for image SR, $128 \times 128$ patches for image denoising, and $126 \times 126$ patches for JPEG CAR. We set the training batch size to 32 for image SR, and 8 for image denoising and JPEG CAR, in order to make a fair comparison. We choose ADAM Kingma & Ba (2015) to optimize our ART model with $\beta_{1} = 0.9$, $\beta_{2} = 0.999$, and zero weight decay. The initial learning rate is set to $2 \times 10^{-4}$ and is halved as training reaches certain iteration counts. Taking image SR as an example, we train ART for a total of 500k iterations and halve the learning rate at 250k, 400k, 450k, and 475k iterations, where 1k means one thousand. Our ART is implemented in PyTorch Paszke et al. (2017) with 4 NVIDIA RTX8000 GPUs.
156
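+ These settings map directly onto standard PyTorch utilities; a sketch for the image SR schedule (the `Conv2d` is only a stand-in for the full model):
+
+ ```python
+ import torch
+
+ model = torch.nn.Conv2d(3, 3, 3, padding=1)   # stand-in for the ART model
+ opt = torch.optim.Adam(model.parameters(), lr=2e-4,
+                        betas=(0.9, 0.999), weight_decay=0)
+ sched = torch.optim.lr_scheduler.MultiStepLR(
+     opt, milestones=[250_000, 400_000, 450_000, 475_000], gamma=0.5)
+ # Per iteration: loss.backward(); opt.step(); opt.zero_grad(); sched.step()
+ ```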
+
157
+ # 4.2 ABLATION STUDY
158
+
159
+ For ablation experiments, we train our models for image super-resolution $(\times 2)$ on the DIV2K and Flickr2K datasets. The results are evaluated on the Urban100 benchmark dataset.
160
+
161
+ Design Choices for DAB and SAB. We demonstrate the necessity of the simultaneous usage of the dense attention block (DAB) and sparse attention block (SAB) through an ablation study. We set three different experimental conditions: using 6 DABs, using 6 SABs, and using 3 pairs of alternating DAB and SAB. We keep the rest of the experimental setup the same and train all models for 100k iterations. The experimental results are shown in Fig. 4(Left). As we can see, using only DAB or only SAB suffers from poor performance, because each lacks either the global or the local receptive field. On the other hand, the structure of SAB following DAB brings higher performance. This validates that both local contextual interactions and global sparse interactions, obtained through retractable attention on the input feature, are important for the strong representation ability of the Transformer.
162
+
163
+ Impact of Interval Size. The interval size in the sparse attention block has a vital impact on the performance of our ART. In fact, if the interval size is set to 1, sparse attention degenerates to full attention. Generally, a smaller interval means wider receptive fields but higher computational cost. We compare the experimental results under different interval settings in Fig. 4(Middle). As we can see, smaller
164
+
165
+ <table><tr><td rowspan="2">Method</td><td rowspan="2">Scale</td><td colspan="2">Set5</td><td colspan="2">Set14</td><td colspan="2">B100</td><td colspan="2">Urban100</td><td colspan="2">Manga109</td></tr><tr><td>PSNR</td><td>SSIM</td><td>PSNR</td><td>SSIM</td><td>PSNR</td><td>SSIM</td><td>PSNR</td><td>SSIM</td><td>PSNR</td><td>SSIM</td></tr><tr><td>EDSR Lim et al. (2017)</td><td>×2</td><td>38.11</td><td>0.9602</td><td>33.92</td><td>0.9195</td><td>32.32</td><td>0.9013</td><td>32.93</td><td>0.9351</td><td>39.10</td><td>0.9773</td></tr><tr><td>RCAN Zhang et al. (2018b)</td><td>×2</td><td>38.27</td><td>0.9614</td><td>34.12</td><td>0.9216</td><td>32.41</td><td>0.9027</td><td>33.34</td><td>0.9384</td><td>39.44</td><td>0.9786</td></tr><tr><td>SAN Dai et al. (2019)</td><td>×2</td><td>38.31</td><td>0.9620</td><td>34.07</td><td>0.9213</td><td>32.42</td><td>0.9028</td><td>33.10</td><td>0.9370</td><td>39.32</td><td>0.9792</td></tr><tr><td>SRFBN Li et al. (2019)</td><td>×2</td><td>38.11</td><td>0.9609</td><td>33.82</td><td>0.9196</td><td>32.29</td><td>0.9010</td><td>32.62</td><td>0.9328</td><td>39.08</td><td>0.9779</td></tr><tr><td>HAN Niu et al. (2020)</td><td>×2</td><td>38.27</td><td>0.9614</td><td>34.16</td><td>0.9217</td><td>32.41</td><td>0.9027</td><td>33.35</td><td>0.9385</td><td>39.46</td><td>0.9785</td></tr><tr><td>IGNN Zhou et al. (2020)</td><td>×2</td><td>38.24</td><td>0.9613</td><td>34.07</td><td>0.9217</td><td>32.41</td><td>0.9025</td><td>33.23</td><td>0.9383</td><td>39.35</td><td>0.9786</td></tr><tr><td>CSNLN Mei et al. (2020)</td><td>×2</td><td>38.28</td><td>0.9616</td><td>34.12</td><td>0.9223</td><td>32.40</td><td>0.9024</td><td>33.25</td><td>0.9386</td><td>39.37</td><td>0.9785</td></tr><tr><td>RFANet Liu et al. (2020)</td><td>×2</td><td>38.26</td><td>0.9615</td><td>34.16</td><td>0.9220</td><td>32.41</td><td>0.9026</td><td>33.33</td><td>0.9389</td><td>39.44</td><td>0.9783</td></tr><tr><td>NLSA Mei et al. (2021)</td><td>×2</td><td>38.34</td><td>0.9618</td><td>34.08</td><td>0.9231</td><td>32.43</td><td>0.9027</td><td>33.42</td><td>0.9394</td><td>39.59</td><td>0.9789</td></tr><tr><td>IPT Chen et al. (2021a)</td><td>×2</td><td>38.37</td><td>N/A</td><td>34.43</td><td>N/A</td><td>32.48</td><td>N/A</td><td>33.76</td><td>N/A</td><td>N/A</td><td>N/A</td></tr><tr><td>SwinIR Liang et al. (2021)</td><td>×2</td><td>38.42</td><td>0.9623</td><td>34.46</td><td>0.9250</td><td>32.53</td><td>0.9041</td><td>33.81</td><td>0.9427</td><td>39.92</td><td>0.9797</td></tr><tr><td>ART-S (ours)</td><td>×2</td><td>38.48</td><td>0.9625</td><td>34.50</td><td>0.9258</td><td>32.53</td><td>0.9043</td><td>34.02</td><td>0.9437</td><td>40.11</td><td>0.9804</td></tr><tr><td>ART (ours)</td><td>×2</td><td>38.56</td><td>0.9629</td><td>34.59</td><td>0.9267</td><td>32.58</td><td>0.9048</td><td>34.30</td><td>0.9452</td><td>40.24</td><td>0.9808</td></tr><tr><td>ART+ (ours)</td><td>×2</td><td>38.59</td><td>0.9630</td><td>34.68</td><td>0.9269</td><td>32.60</td><td>0.9050</td><td>34.41</td><td>0.9457</td><td>40.33</td><td>0.9810</td></tr><tr><td>EDSR Lim et al. (2017)</td><td>×3</td><td>34.65</td><td>0.9280</td><td>30.52</td><td>0.8462</td><td>29.25</td><td>0.8093</td><td>28.80</td><td>0.8653</td><td>34.17</td><td>0.9476</td></tr><tr><td>RCAN Zhang et al. (2018b)</td><td>×3</td><td>34.74</td><td>0.9299</td><td>30.65</td><td>0.8482</td><td>29.32</td><td>0.8111</td><td>29.09</td><td>0.8702</td><td>34.44</td><td>0.9499</td></tr><tr><td>SAN Dai et al. 
(2019)</td><td>×3</td><td>34.75</td><td>0.9300</td><td>30.59</td><td>0.8476</td><td>29.33</td><td>0.8112</td><td>28.93</td><td>0.8671</td><td>34.30</td><td>0.9494</td></tr><tr><td>SRFBN Li et al. (2019)</td><td>×3</td><td>34.70</td><td>0.9292</td><td>30.51</td><td>0.8461</td><td>29.24</td><td>0.8084</td><td>28.73</td><td>0.8641</td><td>34.18</td><td>0.9481</td></tr><tr><td>HAN Niu et al. (2020)</td><td>×3</td><td>34.75</td><td>0.9299</td><td>30.67</td><td>0.8483</td><td>29.32</td><td>0.8110</td><td>29.10</td><td>0.8705</td><td>34.48</td><td>0.9500</td></tr><tr><td>IGNN Zhou et al. (2020)</td><td>×3</td><td>34.72</td><td>0.9298</td><td>30.66</td><td>0.8484</td><td>29.31</td><td>0.8105</td><td>29.03</td><td>0.8696</td><td>34.39</td><td>0.9496</td></tr><tr><td>CSNLN Mei et al. (2020)</td><td>×3</td><td>34.74</td><td>0.9300</td><td>30.66</td><td>0.8482</td><td>29.33</td><td>0.8105</td><td>29.13</td><td>0.8712</td><td>34.45</td><td>0.9502</td></tr><tr><td>RFANet Liu et al. (2020)</td><td>×3</td><td>34.79</td><td>0.9300</td><td>30.67</td><td>0.8487</td><td>29.34</td><td>0.8115</td><td>29.15</td><td>0.8720</td><td>34.59</td><td>0.9506</td></tr><tr><td>NLSA Mei et al. (2021)</td><td>×3</td><td>34.85</td><td>0.9306</td><td>30.70</td><td>0.8485</td><td>29.34</td><td>0.8117</td><td>29.25</td><td>0.8726</td><td>34.57</td><td>0.9508</td></tr><tr><td>IPT Chen et al. (2021a)</td><td>×3</td><td>34.81</td><td>N/A</td><td>30.85</td><td>N/A</td><td>29.38</td><td>N/A</td><td>29.49</td><td>N/A</td><td>N/A</td><td>N/A</td></tr><tr><td>SwinIR Liang et al. (2021)</td><td>×3</td><td>34.97</td><td>0.9318</td><td>30.93</td><td>0.8534</td><td>29.46</td><td>0.8145</td><td>29.75</td><td>0.8826</td><td>35.12</td><td>0.9537</td></tr><tr><td>ART-S (ours)</td><td>×3</td><td>34.98</td><td>0.9318</td><td>30.94</td><td>0.8530</td><td>29.45</td><td>0.8146</td><td>29.86</td><td>0.8830</td><td>35.22</td><td>0.9539</td></tr><tr><td>ART (ours)</td><td>×3</td><td>35.07</td><td>0.9325</td><td>31.02</td><td>0.8541</td><td>29.51</td><td>0.8159</td><td>30.10</td><td>0.8871</td><td>35.39</td><td>0.9548</td></tr><tr><td>ART+ (ours)</td><td>×3</td><td>35.11</td><td>0.9327</td><td>31.05</td><td>0.8545</td><td>29.53</td><td>0.8162</td><td>30.22</td><td>0.8883</td><td>35.51</td><td>0.9552</td></tr><tr><td>EDSR Lim et al. (2017)</td><td>×4</td><td>32.46</td><td>0.8968</td><td>28.80</td><td>0.7876</td><td>27.71</td><td>0.7420</td><td>26.64</td><td>0.8033</td><td>31.02</td><td>0.9148</td></tr><tr><td>RCAN Zhang et al. (2018b)</td><td>×4</td><td>32.63</td><td>0.9002</td><td>28.87</td><td>0.7889</td><td>27.77</td><td>0.7436</td><td>26.82</td><td>0.8087</td><td>31.22</td><td>0.9173</td></tr><tr><td>SAN Dai et al. (2019)</td><td>×4</td><td>32.64</td><td>0.9003</td><td>28.92</td><td>0.7888</td><td>27.78</td><td>0.7436</td><td>26.79</td><td>0.8068</td><td>31.18</td><td>0.9169</td></tr><tr><td>SRFBN Li et al. (2019)</td><td>×4</td><td>32.47</td><td>0.8983</td><td>28.81</td><td>0.7868</td><td>27.72</td><td>0.7409</td><td>26.60</td><td>0.8015</td><td>31.15</td><td>0.9160</td></tr><tr><td>HAN Niu et al. (2020)</td><td>×4</td><td>32.64</td><td>0.9002</td><td>28.90</td><td>0.7890</td><td>27.80</td><td>0.7442</td><td>26.85</td><td>0.8094</td><td>31.42</td><td>0.9177</td></tr><tr><td>IGNN Zhou et al. (2020)</td><td>×4</td><td>32.57</td><td>0.8998</td><td>28.85</td><td>0.7891</td><td>27.77</td><td>0.7434</td><td>26.84</td><td>0.8090</td><td>31.28</td><td>0.9182</td></tr><tr><td>CSNLN Mei et al. 
(2020)</td><td>×4</td><td>32.68</td><td>0.9004</td><td>28.95</td><td>0.7888</td><td>27.80</td><td>0.7439</td><td>27.22</td><td>0.8168</td><td>31.43</td><td>0.9201</td></tr><tr><td>RFANet Liu et al. (2020)</td><td>×4</td><td>32.66</td><td>0.9004</td><td>28.88</td><td>0.7894</td><td>27.79</td><td>0.7442</td><td>26.92</td><td>0.8112</td><td>31.41</td><td>0.9187</td></tr><tr><td>NLSA Mei et al. (2021)</td><td>×4</td><td>32.59</td><td>0.9000</td><td>28.87</td><td>0.7891</td><td>27.78</td><td>0.7444</td><td>26.96</td><td>0.8109</td><td>31.27</td><td>0.9184</td></tr><tr><td>IPT Chen et al. (2021a)</td><td>×4</td><td>32.64</td><td>N/A</td><td>29.01</td><td>N/A</td><td>27.82</td><td>N/A</td><td>27.26</td><td>N/A</td><td>N/A</td><td>N/A</td></tr><tr><td>SwinIR Liang et al. (2021)</td><td>×4</td><td>32.92</td><td>0.9044</td><td>29.09</td><td>0.7950</td><td>27.92</td><td>0.7489</td><td>27.45</td><td>0.8254</td><td>32.03</td><td>0.9260</td></tr><tr><td>ART-S (ours)</td><td>×4</td><td>32.86</td><td>0.9029</td><td>29.09</td><td>0.7942</td><td>27.91</td><td>0.7489</td><td>27.54</td><td>0.8261</td><td>32.13</td><td>0.9263</td></tr><tr><td>ART (ours)</td><td>×4</td><td>33.04</td><td>0.9051</td><td>29.16</td><td>0.7958</td><td>27.97</td><td>0.7510</td><td>27.77</td><td>0.8321</td><td>32.31</td><td>0.9283</td></tr><tr><td>ART+ (ours)</td><td>×4</td><td>33.07</td><td>0.9055</td><td>29.20</td><td>0.7964</td><td>27.99</td><td>0.7513</td><td>27.89</td><td>0.8339</td><td>32.45</td><td>0.9291</td></tr></table>
166
+
167
+ Table 2: PSNR (dB)/SSIM comparisons for image super-resolution on five benchmark datasets. We color best and second best results in red and blue.
168
+
169
+ <table><tr><td>Method</td><td>EDSR</td><td>RCAN</td><td>SRFBN</td><td>HAN</td><td>CSNLN</td><td>SwinIR</td><td>ART-S (ours)</td><td>ART (ours)</td></tr><tr><td>Params (M)</td><td>43.09</td><td>15.59</td><td>3.63</td><td>16.07</td><td>7.16</td><td>11.90</td><td>11.87</td><td>16.55</td></tr><tr><td>Mult-Adds (G)</td><td>1,286</td><td>407</td><td>498</td><td>420</td><td>103,640</td><td>336</td><td>392</td><td>782</td></tr><tr><td>PSNR on Urban100 (dB)</td><td>26.64</td><td>26.82</td><td>26.60</td><td>26.85</td><td>27.22</td><td>27.45</td><td>27.54</td><td>27.77</td></tr><tr><td>PSNR on Manga109 (dB)</td><td>31.02</td><td>31.22</td><td>31.15</td><td>31.42</td><td>31.43</td><td>32.03</td><td>32.13</td><td>32.31</td></tr></table>
170
+
171
+ Table 3: Model size comparisons (×4 SR). Output size is $3 \times 640 \times 640$ for Mult-Adds calculation.
172
+
173
+ intervals bring more performance gains. To balance accuracy and complexity, we set the interval sizes of the 6 residual groups to $(4,4,4,4,4,4)$ for image SR, $(16,16,12,12,8,8)$ for image denoising, and $(18,18,13,13,7,7)$ for JPEG CAR in the following comparative experiments.
174
+
175
+ Comparison of Variant Models. We provide a smaller version of our model for fair comparison and name it ART-S. Different from ART, the MLP ratio in ART-S is set to 2 (4 in ART) and the interval size is set to 8. ART-S has a model size comparable to SwinIR. We provide the PSNR comparison results in Fig. 4(Right). As we can see, our ART-S achieves better performance than SwinIR. More comparative results can be found in the following experiments.
176
+
177
+ # 4.3 IMAGE SUPER-RESOLUTION
178
+
179
+ We provide comparisons of our proposed ART with representative image SR methods, including CNN-based networks: EDSR Lim et al. (2017), RCAN Zhang et al. (2018b), SAN Dai et al. (2019), SRFBN Li et al. (2019), HAN Niu et al. (2020), IGNN Zhou et al. (2020), CSNLN Mei et al. (2020), RFANet Liu et al. (2020), NLSA Mei et al. (2021), and Transformer-based networks: IPT Chen et al. (2021a) and SwinIR Liang et al. (2021). Note that IPT is a pre-trained model, trained on the ImageNet benchmark dataset. All results are obtained from publicly available code and data. Quantitative and visual comparisons are provided in Tab. 2 and Fig. 5.
180
+
181
+ Quantitative Comparisons. We present PSNR/SSIM comparison results for $\times 2$, $\times 3$, and $\times 4$ image SR in Tab. 2. As we can see, our ART achieves the best PSNR/SSIM performance on all five benchmark datasets. Using self-ensemble, ART+ gains even better results. Compared with the existing state-of-the-art method SwinIR, our ART obtains better gains across all scale factors, indicating that
182
+
183
+ ![](images/3041c37d5b84db58c4b855b3b2c4d072d0b76c29268e945be79b045ab7f3b83a.jpg)
184
+ Urban100: img_092 $(\times 4)$
185
+
186
+ ![](images/137df0c26d0664b97539026fb4533dc7849ec4fe122ec5295aa9b17e17f75690.jpg)
187
+ HQ/PSNR(dB)
188
+
189
+ ![](images/29ba7a5a8f80f3f4e2d586a6fe78661e3042ccf50c422ed6f6e9bc938318451f.jpg)
190
+
191
+ ![](images/b16cbe98f5de00e8a782dd4bb66699237c551305103a8d511d446f29a83f711b.jpg)
192
+
193
+ ![](images/9ac87ae3f384dfe68a9fcfaf86b8c7e0a5ffd02c87b24137ff2a841c9c252a77.jpg)
194
+
195
+ ![](images/0deab0dfa190fd3e26626e499a9b8b8e61ebd32e8ae3d5829d90ac7993a4cc6b.jpg)
196
+
197
+ ![](images/e7dd0af8c4ccc3816aebe899600d70cfb8490d3f91b76939709bab70d2d749da.jpg)
198
+ Urban100: img_098 $(\times 4)$
199
+ Figure 5: Visual comparison with challenging examples on image super-resolution $(\times 4)$ .
200
+
201
+ ![](images/dec82fb741e05938b08439727f487848906a19209979d6621c2cdb39d2356a7a.jpg)
202
+
203
+ ![](images/1c2a79b0f93c840ec9d9c3b027200e309031380945a3e3c1e5176894100d7109.jpg)
204
+ Bicubic / 15.31
205
+ CSNLN / 18.69
206
+
207
+ ![](images/d4e51dc12f50dffe786c93052390c1a4ebf66a24d62540bccaa032982ee383a0.jpg)
208
+ RCAN/18.36
209
+ RFANet / 18.49
210
+
211
+ ![](images/dd6b6adb14e77ce43d24b557c5e3a84a4484cc2fe7b87043cb7652f21da567dc.jpg)
212
+ SRFBN / 18.26
213
+ SwinIR / 18.59
214
+
215
+ ![](images/3ce7c871f4fcca9d70e13f5e2b06e68fbcb1ca32a4bf0a42b40984ec058c2b73.jpg)
216
+ SAN/18.26
217
+ ART/19.56
218
+
219
+ ![](images/66062b036c909890925c8cea547122d86c135147ebe256fcfdfa193ba0c8c3f6.jpg)
220
+ IGNN/18.51
221
+ HQ/PSNR(dB)
222
+
223
+ ![](images/5da3d57e11976faa321c534250d681680de0ee42050e3ac4518f8b596ef805a2.jpg)
224
+ Bicubic / 18.28
225
+
226
+ ![](images/73e0da567164a77f34dc77c4eda4e4b554cb56414160456534d5e81b710433d7.jpg)
227
+ RCAN/19.70
228
+
229
+ ![](images/33c45e450f6610a90b82a15b37c8c015f1654287976b2de8367272879bc8a275.jpg)
230
+ SRFBN / 19.55
231
+
232
+ ![](images/f4de0d1967eca4f0df5f5227b43ecd143a00c0daecdc20a8c85148bbc4163ec9.jpg)
233
+ SAN/19.66
234
+
235
+ ![](images/0758055b5836a514bb27bcf39a612107f4747261629ea9247b3ed15ba49a2340.jpg)
236
+
237
+ ![](images/d4436af1116b06962976944742775fcc7d5b8bb462513939425054fbf01f0c72.jpg)
238
+ IGNN/19.70
239
+
240
+ ![](images/ce376fa8fdf00888b519585a443a4d4315a06421ef6f9d63303142c47873da42.jpg)
241
+ CSNLN / 19.82
242
+
243
+ ![](images/2809d1dd2769e69ee44491abf7c67494c8622e8fa58077b374f4b96f8c0096f3.jpg)
244
+ RFANet / 19.72
245
+
246
+ ![](images/7d5a13256db17bfc06913f6674e31774c33b7b37f9b359ddb4c455a1c85650dd.jpg)
247
+ SwinIR / 20.00
248
+
249
+ ![](images/ab3a0d68f97bdee497353764619f400bf13d3f3f5f4b359b606b5bb2407c471e.jpg)
250
+ ART/20.10
251
+
252
+ our proposed joint dense and sparse attention blocks give the Transformer stronger representation ability. Despite showing better performance than CNN-based networks, the other Transformer-based network, IPT, is not as good as ours. This validates that our proposed ART is a new promising Transformer-based network for image SR.
253
+
254
+ Retractable vs. Dense Attention. We further show a typical visual comparison with SwinIR in Fig. 6. As SwinIR mainly utilizes a dense attention strategy, it restores wrong texture structures under the influence of close patches with mainly vertical lines. In contrast, our ART reconstructs the correct texture, thanks to the wider receptive field provided by the sparse attention strategy. Visibly, the patch is able to interact with farther patches containing similar horizontal lines, so it can be reconstructed clearly. This comparison demonstrates the advantage of retractable attention and its strong ability to restore high-quality outputs.
257
+
258
+ ![](images/a25ca2365181ad87957571cd00ff876ffa9cb1b4eea75b63a4e578e024aff0ee.jpg)
259
+ Dense Attention
260
+ Figure 6: Visual comparison $(\times 4)$ of SwinIR and Ours.
261
+
262
+ ![](images/68b8ed1fe5344df23f631619557e01c2d545cc21aa3bd3f14a72267e67955387.jpg)
263
+ SwinIR
264
+
265
+ ![](images/de8a9e7986f344ec97fb7ba8e15a01ec207fad44b9315502776bb6ae07655070.jpg)
266
+ Ours
267
+
268
+ ![](images/14f50634f14c3e53c846fc0dbadb9997b28f1b5d88b4ffbbf0b98565492f75e9.jpg)
269
+ HQ
270
+
271
+
273
+ ![](images/bab9eb78418815035dc010f75f4455933254da751f68571719ae8e36c5dba1cd.jpg)
274
+ Sparse Attention
275
+
276
+ Model Size Comparisons. Table 3 compares the parameter numbers and Mult-Adds of different networks, including existing state-of-the-art methods. We calculate the Mult-Adds assuming that the output size is $3 \times 640 \times 640$ for $\times 4$ image SR. Compared with previous CNN-based networks, our ART has a comparable number of parameters and Mult-Adds but achieves higher performance. Besides, our ART-S has fewer parameters and Mult-Adds than most of the compared methods. The model size of ART-S is similar to that of SwinIR. However, ART-S still achieves better performance than all compared methods except our ART. This indicates that our method achieves promising performance at an acceptable computational and memory cost.
277
+
278
+ Visual Comparisons. We also provide some challenging examples for visual comparison $(\times 4)$ in Fig. 5. We can see that our ART is able to alleviate heavy blurring artifacts while restoring detailed edges and textures. Compared with other methods, ART obtains visually pleasing results by recovering more high-frequency details. It indicates that ART performs better for image SR.
279
+
280
+ # 4.4 IMAGE DENOISING
281
+
282
+ We show color image denoising results to compare our ART with representative methods in Tab. 4. These methods are CBM3D Dabov et al. (2007), IRCNN Zhang et al. (2017b), FFDNet Zhang et al. (2018a), DnCNN Zhang et al. (2017a), RNAN Zhang et al. (2019), RDN Zhang et al. (2020), IPT Chen et al. (2021a), DRUNet Zhang et al. (2021a), P3AN Hu et al. (2021), SwinIR Liang et al. (2021), and Restormer Zamir et al. (2022). Following most recent works, we set the noise level to 15, 25, and 50. We also show visual comparisons of challenging examples in Fig. 7.
283
+
284
+ Quantitative Comparisons. Table 4 shows the PSNR results of color image denoising. As we can see, our ART achieves the highest performance among all compared methods on all datasets except Kodak24. Even better results are obtained by ART+ using self-ensemble. In particular, it outperforms the state-of-the-art model Restormer Zamir et al. (2022) by up to $0.25\mathrm{dB}$ on Urban100. Restormer also has restricted receptive fields and thus has difficulty in some challenging cases. In conclusion, these comparisons indicate that our ART also has a strong ability in image denoising.
285
+
286
+ <table><tr><td rowspan="2">Method</td><td colspan="3">BSD68</td><td colspan="3">Kodak24</td><td colspan="3">McMaster</td><td colspan="3">Urban100</td></tr><tr><td>σ=15</td><td>σ=25</td><td>σ=50</td><td>σ=15</td><td>σ=25</td><td>σ=50</td><td>σ=15</td><td>σ=25</td><td>σ=50</td><td>σ=15</td><td>σ=25</td><td>σ=50</td></tr><tr><td>CBM3D Dabov et al. (2007)</td><td>N/A</td><td>N/A</td><td>27.38</td><td>N/A</td><td>N/A</td><td>28.63</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td><td>27.94</td></tr><tr><td>IRCNN Zhang et al. (2017b)</td><td>33.86</td><td>31.16</td><td>27.86</td><td>34.69</td><td>32.18</td><td>28.93</td><td>34.58</td><td>32.18</td><td>28.91</td><td>33.78</td><td>31.20</td><td>27.70</td></tr><tr><td>FFDNet Zhang et al. (2018a)</td><td>33.87</td><td>31.21</td><td>27.96</td><td>34.63</td><td>32.13</td><td>28.98</td><td>34.66</td><td>32.35</td><td>29.18</td><td>33.83</td><td>31.40</td><td>28.05</td></tr><tr><td>DnCNN Zhang et al. (2017a)</td><td>33.90</td><td>31.24</td><td>27.95</td><td>34.60</td><td>32.14</td><td>28.95</td><td>33.45</td><td>31.52</td><td>28.62</td><td>32.98</td><td>30.81</td><td>27.59</td></tr><tr><td>RNAN Zhang et al. (2019)</td><td>N/A</td><td>N/A</td><td>28.27</td><td>N/A</td><td>N/A</td><td>29.58</td><td>N/A</td><td>N/A</td><td>29.72</td><td>N/A</td><td>N/A</td><td>29.08</td></tr><tr><td>RDN Zhang et al. (2020)</td><td>N/A</td><td>N/A</td><td>28.31</td><td>N/A</td><td>N/A</td><td>29.66</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td><td>29.38</td></tr><tr><td>IPT Chen et al. (2021a)</td><td>N/A</td><td>N/A</td><td>28.39</td><td>N/A</td><td>N/A</td><td>29.64</td><td>N/A</td><td>N/A</td><td>29.98</td><td>N/A</td><td>N/A</td><td>29.71</td></tr><tr><td>DRUNet Zhang et al. (2021a)</td><td>34.30</td><td>31.69</td><td>28.51</td><td>35.31</td><td>32.89</td><td>29.86</td><td>35.40</td><td>33.14</td><td>30.08</td><td>34.81</td><td>32.60</td><td>29.61</td></tr><tr><td>P3AN Hu et al. (2021)</td><td>N/A</td><td>N/A</td><td>28.37</td><td>N/A</td><td>N/A</td><td>29.69</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td><td>29.51</td></tr><tr><td>SwinIR Liang et al. (2021)</td><td>34.42</td><td>31.78</td><td>28.56</td><td>35.34</td><td>32.89</td><td>29.79</td><td>35.61</td><td>33.20</td><td>30.22</td><td>35.13</td><td>32.90</td><td>29.82</td></tr><tr><td>Restormer Zamir et al. (2022)</td><td>34.40</td><td>31.79</td><td>28.60</td><td>35.47</td><td>33.04</td><td>30.01</td><td>35.61</td><td>33.34</td><td>30.30</td><td>35.13</td><td>32.96</td><td>30.02</td></tr><tr><td>ART (ours)</td><td>34.46</td><td>31.84</td><td>28.63</td><td>35.39</td><td>32.95</td><td>29.87</td><td>35.68</td><td>33.41</td><td>30.31</td><td>35.29</td><td>33.14</td><td>30.19</td></tr><tr><td>ART+ (ours)</td><td>34.47</td><td>31.85</td><td>28.65</td><td>35.41</td><td>32.98</td><td>29.89</td><td>35.71</td><td>33.44</td><td>30.35</td><td>35.34</td><td>33.20</td><td>30.27</td></tr></table>
287
+
288
+ Table 4: PSNR (dB) comparisons. The best and second best results are in red and blue.
289
+
290
+ <table><tr><td rowspan="2">Dataset</td><td rowspan="2">q</td><td colspan="2">RNAN</td><td colspan="2">RDN</td><td colspan="2">DRUNet</td><td colspan="2">SwinIR</td><td colspan="2">ART (ours)</td><td colspan="2">ART+ (ours)</td></tr><tr><td>PSNR</td><td>SSIM</td><td>PSNR</td><td>SSIM</td><td>PSNR</td><td>SSIM</td><td>PSNR</td><td>SSIM</td><td>PSNR</td><td>SSIM</td><td>PSNR</td><td>SSIM</td></tr><tr><td rowspan="3">Classic5</td><td>10</td><td>29.96</td><td>0.8178</td><td>30.00</td><td>0.8188</td><td>30.16</td><td>0.8234</td><td>30.27</td><td>0.8249</td><td>30.27</td><td>0.8258</td><td>30.32</td><td>0.8263</td></tr><tr><td>30</td><td>33.38</td><td>0.8924</td><td>33.43</td><td>0.8930</td><td>33.59</td><td>0.8949</td><td>33.73</td><td>0.8961</td><td>33.74</td><td>0.8964</td><td>33.78</td><td>0.8967</td></tr><tr><td>40</td><td>34.27</td><td>0.9061</td><td>34.27</td><td>0.9061</td><td>34.41</td><td>0.9075</td><td>34.52</td><td>0.9082</td><td>34.55</td><td>0.9086</td><td>34.58</td><td>0.9089</td></tr><tr><td rowspan="3">LIVE1</td><td>10</td><td>29.63</td><td>0.8239</td><td>29.67</td><td>0.8247</td><td>29.79</td><td>0.8278</td><td>29.86</td><td>0.8287</td><td>29.89</td><td>0.8300</td><td>29.92</td><td>0.8305</td></tr><tr><td>30</td><td>33.45</td><td>0.9149</td><td>33.51</td><td>0.9153</td><td>33.59</td><td>0.9166</td><td>33.69</td><td>0.9174</td><td>33.71</td><td>0.9178</td><td>33.74</td><td>0.9181</td></tr><tr><td>40</td><td>34.47</td><td>0.9299</td><td>34.51</td><td>0.9302</td><td>34.58</td><td>0.9312</td><td>34.67</td><td>0.9317</td><td>34.70</td><td>0.9322</td><td>34.73</td><td>0.9324</td></tr></table>
291
+
292
+ Table 5: PSNR (dB)/SSIM comparisons. The best and second best results are in red and blue.
293
+
294
+ Visual Comparisons. The visual comparison of different methods for color image denoising is shown in Fig. 7. Our ART can preserve detailed textures and high-frequency components while removing heavy noise corruption. Compared with other methods, it performs better at restoring clean and crisp images. This demonstrates that our ART is also well suited for image denoising.
295
+
296
+ ![](images/f50281a90c9eac4f42e6dffe7e3e41415ca32639a854dbaa7c165896901dd685.jpg)
297
+ Urban100: img_033
298
+ Figure 7: Visual comparison with challenging examples on color image denoising $(\sigma = 50)$ .
299
+
300
+ ![](images/2833d270947a8fa3d7c13ad05319ff9c839fd98f53eaf326f1f46610459083cc.jpg)
301
+ HQ/PSNR (dB)
302
+
303
+ ![](images/98bb8de50a9a74254b3b478e111087a889c68aa70a7b6b3b4d01a0225d1229d0.jpg)
304
+ Noisy / 15.15
305
+
306
+ ![](images/66a755da8f6613c45666ef06a93116ae95be95fe5c9546cc5ebc6e7351c45ee7.jpg)
307
+ CBM3D / 28.72
308
+
309
+ ![](images/8690487e73b670f57a3845e1a97a9e57f3659e73d6c4c9485366f727d5ab14e1.jpg)
310
+ IRCNN / 28.57
311
+
312
+ ![](images/30fda8dbe8dc2c8c7953b8830cb839acf8aa7c510aa071fb2e5127a1e3c61718.jpg)
313
+ DnCNN / 29.13
314
+
315
+ # 4.5 JPEG COMPRESSION ARTIFACT REDUCTION
316
+
317
+ We compare our ART with state-of-the-art JPEG CAR methods: RNAN Zhang et al. (2019), RDN Zhang et al. (2020), DRUNet Zhang et al. (2021a), and SwinIR Liang et al. (2021). Following most recent works, we set the compression quality factors of the original images to 40, 30, and 10. We provide the PSNR and SSIM comparison results in Table 5.
318
+
319
+ Quantitative Comparisons. Table 5 shows the PSNR/SSIM comparisons of our ART with existing state-of-the-art methods. We can see that our proposed method achieves the best performance, and even better results are achieved by ART+ using self-ensemble. These results indicate that our ART also performs outstandingly on image compression artifact reduction.
320
+
321
+ # 5 CONCLUSION
322
+
323
+ In this work, we propose the Attention Retractable Transformer (ART) for image restoration, which offers two types of self-attention blocks to enhance the Transformer's representation ability. Most previous image restoration Transformer backbones mainly utilize dense attention modules to reduce the cost of self-attention computation within non-overlapping regions, and thus suffer from restricted receptive fields. Without introducing additional computational cost, we employ a sparse attention mechanism to enable tokens from sparse areas of the image to interact with each other. In practice, the alternating application of dense and sparse attention modules provides retractable attention for the model and brings promising improvements. Experiments on image SR, denoising, and JPEG CAR tasks validate that our method achieves state-of-the-art results on various benchmark datasets, both quantitatively and visually. In future work, we will apply our proposed method to more image restoration tasks, such as image deraining, deblurring, and dehazing, and will further explore the potential of sparse attention in solving low-level vision problems.
324
+
325
+ # ACKNOWLEDGMENTS
326
+
327
+ This work was supported in part by NSFC grant 62141220, 61972253, U1908212, 62172276, 61972254, the Program for Professor of Special Appointment (Eastern Scholar) at Shanghai Institutions of Higher Learning, the National Natural Science Foundation of China under Grant No. 62271414, Zhejiang Provincial Natural Science Foundation of China under Grant No. LR23F010001. This work was also supported by the Shenzhen Science and Technology Project (JCYJ20200109142808034), and in part by Guangdong Special Support (2019TX05X187). Xin Yuan would like to thank Research Center for Industries of the Future (RCIF) at Westlake University for supporting this work.
328
+
329
+ # REPRODUCIBILITY STATEMENT
330
+
331
+ We provide the reproducibility statement of our proposed method in this section. We introduce the model architecture and the core dense and sparse attention modules in Sec. 3, where we also give the implementation details. In Sec. 4.1, we provide the detailed experimental settings. To ensure reproducibility, we provide the source code and pre-trained models at the website<sup>1</sup>. Everyone can run our code to check the training and testing process according to the given instructions. The pre-trained models provided at the website can be used to verify the validity of the corresponding results. For more details, please refer to the website or the submitted supplementary materials.
332
+
333
+ # REFERENCES
334
+
335
+ Saeed Anwar and Nick Barnes. Densely residual laplacian super-resolution. TPAMI, 2020. 2
336
+ Pablo Arbelaez, Michael Maire, Charless Fowlkes, and Jitendra Malik. Contour detection and hierarchical image segmentation. TPAMI, 2010. 6
337
+ Marco Bevilacqua, Aline Roumy, Christine Guillemot, and Marie Line Alberi-Morel. Low-complexity single-image super-resolution based on nonnegative neighbor embedding. In BMVC, 2012. 6
338
+ Pierre Charbonnier, Laure Blanc-Feraud, Gilles Aubert, and Michel Barlaud. Two deterministic half-quadratic regularization algorithms for computed imaging. In ICIP, 1994. 4
339
+ Hanting Chen, Yunhe Wang, Tianyu Guo, Chang Xu, Yiping Deng, Zhenhua Liu, Siwei Ma, Chunjing Xu, Chao Xu, and Wen Gao. Pre-trained image processing transformer. In CVPR, 2021a. 1, 2, 3, 7, 8, 9
340
+ Haoyu Chen, Jinjin Gu, and Zhi Zhang. Attention in attention network for image super-resolution. arXiv preprint arXiv:2104.09497, 2021b. 2
341
+ Xiangxiang Chu, Zhi Tian, Yuqing Wang, Bo Zhang, Haibing Ren, Xiaolin Wei, Huaxia Xia, and Chunhua Shen. Twins: Revisiting the design of spatial attention in vision transformers. In NeurIPS, 2021. 1
342
+ Kostadin Dabov, Alessandro Foi, Vladimir Katkovnik, and Karen O. Egiazarian. Color image denoising via sparse 3d collaborative filtering with grouping constraint in luminance-chrominance space. In ICIP, 2007. 8, 9
343
+ Tao Dai, Jianrui Cai, Yongbing Zhang, Shu-Tao Xia, and Lei Zhang. Second-order attention network for single image super-resolution. In CVPR, 2019. 3, 7
344
+ Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang. Learning a deep convolutional network for image super-resolution. In ECCV, 2014. 1, 2
345
+ Chao Dong, Chen Change Loy, and Xiaoou Tang. Accelerating the super-resolution convolutional neural network. In ECCV, 2016. 4
346
+ Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2021. 1, 3
347
+ Akshay Dudhane, Syed Waqas Zamir, Salman Khan, Fahad Khan, and Ming-Hsuan Yang. Burst image restoration and enhancement. In CVPR, 2022. 2
348
+
349
+ Alessandro Foi, Vladimir Katkovnik, and Karen Egiazarian. Pointwise shape-adaptive dct for high-quality denoising and deblocking of grayscale and color images. TIP, May 2007. 6
350
+ Rich Franzen. Kodak lossless true color image suite. source: http://r0k.us/graphics/kodak, 1999. 6
351
+ Muhammad Haris, Greg Shakhnarovich, and Norimichi Ukita. Deep back-projection networks for superresolution. In CVPR, 2018. 6
352
+ Kaiming He, Jian Sun, and Xiaoou Tang. Single image haze removal using dark channel prior. TPAMI, 2010. 2
353
+ Han Hu, Zheng Zhang, Zhenda Xie, and Stephen Lin. Local relation networks for image recognition. In ICCV, 2019. 1
354
+ Xiaowan Hu, Ruijun Ma, Zhihong Liu, Yuanhao Cai, Xiaole Zhao, Yulun Zhang, and Haoqian Wang. Pseudo 3d auto-correlation network for real image denoising. In CVPR, 2021. 8, 9
355
+ Jia-Bin Huang, Abhishek Singh, and Narendra Ahuja. Single image super-resolution from transformed selfexemplars. In CVPR, 2015. 6
356
+ Jiwon Kim, Jung Kwon Lee, and Kyoung Mu Lee. Accurate image super-resolution using very deep convolutional networks. In CVPR, 2016a. 3
357
+ Jiwon Kim, Jung Kwon Lee, and Kyoung Mu Lee. Deeply-recursive convolutional network for image superresolution. In CVPR, 2016b. 1
358
+ Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015. 6
359
+ Xiangtao Kong, Xina Liu, Jinjin Gu, Yu Qiao, and Chao Dong. Reflash dropout in image super-resolution. In CVPR, pp. 6002-6012, 2022. 3
360
+ Wei-Sheng Lai, Jia-Bin Huang, Narendra Ahuja, and Ming-Hsuan Yang. Deep laplacian pyramid networks for fast and accurate super-resolution. In CVPR, 2017. 4
361
+ Zhen Li, Jinglei Yang, Zheng Liu, Xiaomin Yang, Gwanggil Jeon, and Wei Wu. Feedback network for image super-resolution. In CVPR, 2019. 7
362
+ Zheyuan Li, Yingqi Liu, Xiangyu Chen, Haoming Cai, Jinjin Gu, Yu Qiao, and Chao Dong. Blueprint separable residual network for efficient image super-resolution. In CVPR, 2022. 2
363
+ Jingyun Liang, Jiezhang Cao, Guolei Sun, Kai Zhang, Luc Van Gool, and Radu Timofte. Swinir: Image restoration using swin transformer. In ICCVW, 2021. 1, 2, 3, 4, 6, 7, 8, 9
364
+ Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee. Enhanced deep residual networks for single image super-resolution. In CVPRW, 2017. 1, 6, 7
365
+ Jie Liu, Wenjie Zhang, Yuting Tang, Jie Tang, and Gangshan Wu. Residual feature aggregation network for image super-resolution. In CVPR, 2020. 7
366
+ Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In ICCV, 2021. 1
367
+ Kede Ma, Zhengfang Duanmu, Qingbo Wu, Zhou Wang, Hongwei Yong, Hongliang Li, and Lei Zhang. Waterloo exploration database: New challenges for image quality assessment models. TIP, 2016. 6
368
+ David Martin, Charless Fowlkes, Doron Tal, and Jitendra Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In ICCV, 2001. 6
369
+ Yusuke Matsui, Kota Ito, Yuji Aramaki, Azuma Fujimoto, Toru Ogawa, Toshihiko Yamasaki, and Kiyoharu Aizawa. Sketch-based manga retrieval using manga109 dataset. Multimedia Tools and Applications, 2017. 6
370
+ Yiqun Mei, Yuchen Fan, Yuqian Zhou, Lichao Huang, Thomas S Huang, and Humphrey Shi. Image superresolution with cross-scale non-local attention and exhaustive self-exemplars mining. In CVPR, 2020. 7
371
+ Yiqun Mei, Yuchen Fan, and Yuqian Zhou. Image super-resolution with non-local sparse attention. In CVPR, 2021. 7
372
+ Tomer Michaeli and Michal Irani. Nonparametric blind super-resolution. In ICCV, 2013. 2
373
+ Ben Niu, Weilei Wen, Wenqi Ren, Xiangde Zhang, Lianping Yang, Shuzhen Wang, Kaihao Zhang, Xiaochun Cao, and Haifeng Shen. Single image super-resolution via a holistic attention network. In ECCV, 2020. 3, 7
374
+
375
+ Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. 2017. 6
376
+ Prajit Ramachandran, Niki Parmar, Ashish Vaswani, Irwan Bello, Anselm Levskaya, and Jon Shlens. Stand-alone self-attention in vision models. In NeurIPS, 2019. 1
377
+ Mehdi SM Sajjadi, Bernhard Scholkopf, and Michael Hirsch. Enhancenet: Single image super-resolution through automated texture synthesis. In ICCV, 2017. 4
378
+ Hamid R Sheikh, Muhammad F Sabir, and Alan C Bovik. A statistical evaluation of recent full reference image quality assessment algorithms. TIP, 2006. 6
379
+ Wenzhe Shi, Jose Caballero, Ferenc Huszar, Johannes Totz, Andrew P Aitken, Rob Bishop, Daniel Rueckert, and Zehan Wang. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In CVPR, 2016. 4
380
+ Ying Tai, Jian Yang, Xiaoming Liu, and Chunyan Xu. Memnet: A persistent memory network for image restoration. In ICCV, 2017. 4
381
+ Chunwei Tian, Yong Xu, and Wangmeng Zuo. Image denoising using deep cnn with batch renormalization. Neural Networks, 2020. 1
382
+ Radu Timofte, Vincent De, and Luc Van Gool. Anchored neighborhood regression for fast example-based super-resolution. In ICCV, 2013. 2
383
+ Radu Timofte, Eirikur Agustsson, Luc Van Gool, Ming-Hsuan Yang, Lei Zhang, Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, Kyoung Mu Lee, et al. Ntire 2017 challenge on single image super-resolution: Methods and results. In CVPRW, 2017. 6
384
+ Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. Training data-efficient image transformers & distillation through attention. In ICML, 2021. 1
385
+ Zhengzhong Tu, Hossein Talebi, Han Zhang, Feng Yang, Peyman Milanfar, Alan Bovik, and Yinxiao Li. Maxim: Multi-axis mlp for image processing. In CVPR, 2022a. 3
386
+ Zhengzhong Tu, Hossein Talebi, Han Zhang, Feng Yang, Peyman Milanfar, Alan Bovik, and Yinxiao Li. Maxvit: Multi-axis vision transformer. In ECCV, 2022b. 2, 3, 5
387
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NeurIPS, 2017. 1, 3
388
+ Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, and Liang-Chieh Chen. Axial-deeplab: Stand-alone axial-attention for panoptic segmentation. In ECCV, 2020. 1
389
+ Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, and Ling Shao. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. In ICCV, 2021. 1
390
+ Wenxiao Wang, Lu Yao, Long Chen, Binbin Lin, Deng Cai, Xiaofei He, and Wei Liu. Crossformer: A versatile vision transformer hinging on cross-scale attention. In ICLR, 2022a. 2, 3, 5
391
+ Zhendong Wang, Xiaodong Cun, Jianmin Bao, Wengang Zhou, Jianzhuang Liu, and Houqiang Li. Uformer: A general u-shaped transformer for image restoration. In CVPR, 2022b. 1, 3
392
+ Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. TIP, 2004. 6
393
+ Zhihao Xia and Ayan Chakrabarti. Identifying recurring patterns with deep neural networks for natural image denoising. In WACV, 2020. 1
394
+ Fuzhi Yang, Huan Yang, Jianlong Fu, Hongtao Lu, and Baining Guo. Learning texture transformer network for image super-resolution. In CVPR, 2020. 1, 3
395
+ Qihang Yu, Yingda Xia, Yutong Bai, Yongyi Lu, Alan L Yuille, and Wei Shen. Glance-and-gaze vision transformer. In NeurIPS, 2021. 2, 3, 5
396
+ Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Ming-Hsuan Yang, and Ling Shao. Learning enriched features for real image restoration and enhancement. In ECCV, 2020. 2
397
+
398
+ Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Ming-Hsuan Yang, and Ling Shao. Multi-stage progressive image restoration. In CVPR, 2021. 2
399
+ Syed Waqas Zamir, Aditya Arora, Salman H. Khan, Munawar Hayat, Fahad Shahbaz Khan, and Ming-Hsuan Yang. Restormer: Efficient transformer for high-resolution image restoration. In CVPR, 2022. 1, 3, 8, 9
400
+ Roman Zeyde, Michael Elad, and Matan Protter. On single image scale-up using sparse-representations. In Proc. 7th Int. Conf. Curves Surf., 2010. 6
401
+ Kai Zhang, Wangmeng Zuo, Yunjin Chen, Deyu Meng, and Lei Zhang. Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising. TIP, 2017a. 1, 2, 8, 9
402
+ Kai Zhang, Wangmeng Zuo, Shuhang Gu, and Lei Zhang. Learning deep cnn denoiser prior for image restoration. In CVPR, 2017b. 8, 9
403
+ Kai Zhang, Wangmeng Zuo, and Lei Zhang. Ffdnet: Toward a fast and flexible solution for cnn-based image denoising. TIP, 2018a. 8, 9
404
+ Kai Zhang, Yawei Li, Wangmeng Zuo, Lei Zhang, Luc Van Gool, and Radu Timofte. Plug-and-play image restoration with deep denoiser prior. TPAMI, 2021a. 3, 8, 9
405
+ Lei Zhang, Xiaolin Wu, Antoni Buades, and Xin Li. Color demosaicking by local directional interpolation and nonlocal adaptive thresholding. J Electron Imaging, 2011. 6
406
+ Yulun Zhang, Kunpeng Li, Kai Li, Lichen Wang, Bineng Zhong, and Yun Fu. Image super-resolution using very deep residual channel attention networks. In ECCV, 2018b. 1, 2, 3, 4, 6, 7
407
+ Yulun Zhang, Kunpeng Li, Kai Li, Bineng Zhong, and Yun Fu. Residual non-local attention networks for image restoration. In ICLR, 2019. 8, 9
408
+ Yulun Zhang, Yapeng Tian, Yu Kong, Bineng Zhong, and Yun Fu. Residual dense network for image restoration. TPAMI, 2020. 1, 3, 4, 8, 9
409
+ Yulun Zhang, Huan Wang, Can Qin, and Yun Fu. Aligned structured sparsity learning for efficient image super-resolution. In NeurIPS, 2021b. 1
410
+ Hengshuang Zhao, Jiaya Jia, and Vladlen Koltun. Exploring self-attention for image recognition. In CVPR, 2020. 1
411
+ Sixiao Zheng, Jiachen Lu, Hengshuang Zhao, Xiatian Zhu, Zekun Luo, Yabiao Wang, Yanwei Fu, Jianfeng Feng, Tao Xiang, Philip HS Torr, et al. Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers. In CVPR, 2021. 1
412
+ Shangchen Zhou, Jiawei Zhang, Wangmeng Zuo, and Chen Change Loy. Cross-scale internal graph neural network for image super-resolution. In NeurIPS, 2020. 7
2023/Accurate Image Restoration with Attention Retractable Transformer/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:16093a328a2e04acabad6868b3fb80bce905bb71bc4903ba68b1903c2fcd78c6
3
+ size 897504
2023/Accurate Image Restoration with Attention Retractable Transformer/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/Active Learning in Bayesian Neural Networks with Balanced Entropy Learning Principle/5719f378-5a7e-413f-8da4-0390dd715659_content_list.json ADDED
The diff for this file is too large to render. See raw diff