Chelsea707 committed on
Commit 4bbee30 (verified) · 1 Parent(s): 0343d9a

Add Batch 03e820ec-64c2-4aba-a07f-e0a4273287bd data

This view is limited to 50 files because it contains too many changes. See raw diff.
Files changed (50)
  1. .gitattributes +64 -0
  2. 2023/$O(T^{-1})$ Convergence of Optimistic-Follow-the-Regularized-Leader in Two-Player Zero-Sum Markov Games/d86aa5c3-1e83-4f15-989d-e117952024f6_content_list.json +0 -0
  3. 2023/$O(T^{-1})$ Convergence of Optimistic-Follow-the-Regularized-Leader in Two-Player Zero-Sum Markov Games/d86aa5c3-1e83-4f15-989d-e117952024f6_model.json +0 -0
  4. 2023/$O(T^{-1})$ Convergence of Optimistic-Follow-the-Regularized-Leader in Two-Player Zero-Sum Markov Games/d86aa5c3-1e83-4f15-989d-e117952024f6_origin.pdf +3 -0
  5. 2023/$O(T^{-1})$ Convergence of Optimistic-Follow-the-Regularized-Leader in Two-Player Zero-Sum Markov Games/full.md +620 -0
  6. 2023/$O(T^{-1})$ Convergence of Optimistic-Follow-the-Regularized-Leader in Two-Player Zero-Sum Markov Games/images.zip +3 -0
  7. 2023/$O(T^{-1})$ Convergence of Optimistic-Follow-the-Regularized-Leader in Two-Player Zero-Sum Markov Games/layout.json +0 -0
  8. 2023/$_Lambda$-DARTS_ Mitigating Performance Collapse by Harmonizing Operation Selection among Cells/a1590ce8-2443-45f9-b768-74da670097a7_content_list.json +0 -0
  9. 2023/$_Lambda$-DARTS_ Mitigating Performance Collapse by Harmonizing Operation Selection among Cells/a1590ce8-2443-45f9-b768-74da670097a7_model.json +0 -0
  10. 2023/$_Lambda$-DARTS_ Mitigating Performance Collapse by Harmonizing Operation Selection among Cells/a1590ce8-2443-45f9-b768-74da670097a7_origin.pdf +3 -0
  11. 2023/$_Lambda$-DARTS_ Mitigating Performance Collapse by Harmonizing Operation Selection among Cells/full.md +0 -0
  12. 2023/$_Lambda$-DARTS_ Mitigating Performance Collapse by Harmonizing Operation Selection among Cells/images.zip +3 -0
  13. 2023/$_Lambda$-DARTS_ Mitigating Performance Collapse by Harmonizing Operation Selection among Cells/layout.json +0 -0
  14. 2023/$_mathcal{O}$-GNN_ incorporating ring priors into molecular modeling/ab188adf-da52-465b-96c7-6599648843de_content_list.json +0 -0
  15. 2023/$_mathcal{O}$-GNN_ incorporating ring priors into molecular modeling/ab188adf-da52-465b-96c7-6599648843de_model.json +0 -0
  16. 2023/$_mathcal{O}$-GNN_ incorporating ring priors into molecular modeling/ab188adf-da52-465b-96c7-6599648843de_origin.pdf +3 -0
  17. 2023/$_mathcal{O}$-GNN_ incorporating ring priors into molecular modeling/full.md +527 -0
  18. 2023/$_mathcal{O}$-GNN_ incorporating ring priors into molecular modeling/images.zip +3 -0
  19. 2023/$_mathcal{O}$-GNN_ incorporating ring priors into molecular modeling/layout.json +0 -0
  20. 2023/$_mathrm{SE}(3)$-Equivariant Attention Networks for Shape Reconstruction in Function Space/b7caaaa1-c4b0-4639-a849-27d74bf6d2bd_content_list.json +0 -0
  21. 2023/$_mathrm{SE}(3)$-Equivariant Attention Networks for Shape Reconstruction in Function Space/b7caaaa1-c4b0-4639-a849-27d74bf6d2bd_model.json +0 -0
  22. 2023/$_mathrm{SE}(3)$-Equivariant Attention Networks for Shape Reconstruction in Function Space/b7caaaa1-c4b0-4639-a849-27d74bf6d2bd_origin.pdf +3 -0
  23. 2023/$_mathrm{SE}(3)$-Equivariant Attention Networks for Shape Reconstruction in Function Space/full.md +0 -0
  24. 2023/$_mathrm{SE}(3)$-Equivariant Attention Networks for Shape Reconstruction in Function Space/images.zip +3 -0
  25. 2023/$_mathrm{SE}(3)$-Equivariant Attention Networks for Shape Reconstruction in Function Space/layout.json +0 -0
  26. 2023/$_mathscr{N}$-WL_ A New Hierarchy of Expressivity for Graph Neural Networks/f335c9f2-7e4c-49ea-9b9b-7e536ba967bd_content_list.json +0 -0
  27. 2023/$_mathscr{N}$-WL_ A New Hierarchy of Expressivity for Graph Neural Networks/f335c9f2-7e4c-49ea-9b9b-7e536ba967bd_model.json +0 -0
  28. 2023/$_mathscr{N}$-WL_ A New Hierarchy of Expressivity for Graph Neural Networks/f335c9f2-7e4c-49ea-9b9b-7e536ba967bd_origin.pdf +3 -0
  29. 2023/$_mathscr{N}$-WL_ A New Hierarchy of Expressivity for Graph Neural Networks/full.md +0 -0
  30. 2023/$_mathscr{N}$-WL_ A New Hierarchy of Expressivity for Graph Neural Networks/images.zip +3 -0
  31. 2023/$_mathscr{N}$-WL_ A New Hierarchy of Expressivity for Graph Neural Networks/layout.json +0 -0
  32. 2023/$_rm A^2Q$_ Aggregation-Aware Quantization for Graph Neural Networks/86d84cce-8a0d-4627-8174-5af26f79cb15_content_list.json +0 -0
  33. 2023/$_rm A^2Q$_ Aggregation-Aware Quantization for Graph Neural Networks/86d84cce-8a0d-4627-8174-5af26f79cb15_model.json +0 -0
  34. 2023/$_rm A^2Q$_ Aggregation-Aware Quantization for Graph Neural Networks/86d84cce-8a0d-4627-8174-5af26f79cb15_origin.pdf +3 -0
  35. 2023/$_rm A^2Q$_ Aggregation-Aware Quantization for Graph Neural Networks/full.md +0 -0
  36. 2023/$_rm A^2Q$_ Aggregation-Aware Quantization for Graph Neural Networks/images.zip +3 -0
  37. 2023/$_rm A^2Q$_ Aggregation-Aware Quantization for Graph Neural Networks/layout.json +0 -0
  38. 2023/$k$NN Prompting_ Beyond-Context Learning with Calibration-Free Nearest Neighbor Inference/88ef98b2-6c9b-4b6f-a634-2b2692f94bcc_content_list.json +0 -0
  39. 2023/$k$NN Prompting_ Beyond-Context Learning with Calibration-Free Nearest Neighbor Inference/88ef98b2-6c9b-4b6f-a634-2b2692f94bcc_model.json +0 -0
  40. 2023/$k$NN Prompting_ Beyond-Context Learning with Calibration-Free Nearest Neighbor Inference/88ef98b2-6c9b-4b6f-a634-2b2692f94bcc_origin.pdf +3 -0
  41. 2023/$k$NN Prompting_ Beyond-Context Learning with Calibration-Free Nearest Neighbor Inference/full.md +0 -0
  42. 2023/$k$NN Prompting_ Beyond-Context Learning with Calibration-Free Nearest Neighbor Inference/images.zip +3 -0
  43. 2023/$k$NN Prompting_ Beyond-Context Learning with Calibration-Free Nearest Neighbor Inference/layout.json +0 -0
  44. 2023/(Certified!!) Adversarial Robustness for Free!/7064d377-0583-493a-b0c4-c47e08357804_content_list.json +1747 -0
  45. 2023/(Certified!!) Adversarial Robustness for Free!/7064d377-0583-493a-b0c4-c47e08357804_model.json +2230 -0
  46. 2023/(Certified!!) Adversarial Robustness for Free!/7064d377-0583-493a-b0c4-c47e08357804_origin.pdf +3 -0
  47. 2023/(Certified!!) Adversarial Robustness for Free!/full.md +309 -0
  48. 2023/(Certified!!) Adversarial Robustness for Free!/images.zip +3 -0
  49. 2023/(Certified!!) Adversarial Robustness for Free!/layout.json +0 -0
  50. 2023/3D Equivariant Diffusion for Target-Aware Molecule Generation and Affinity Prediction/087f4cc3-a9bc-404f-906f-af1ead9b2e7c_content_list.json +0 -0
.gitattributes CHANGED
@@ -6328,3 +6328,67 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 2023/Zero-Shot[[:space:]]Image[[:space:]]Restoration[[:space:]]Using[[:space:]]Denoising[[:space:]]Diffusion[[:space:]]Null-Space[[:space:]]Model/450bbabd-f7bc-4849-a6aa-19a630ac135c_origin.pdf filter=lfs diff=lfs merge=lfs -text
 2023/ZiCo_[[:space:]]Zero-shot[[:space:]]NAS[[:space:]]via[[:space:]]inverse[[:space:]]Coefficient[[:space:]]of[[:space:]]Variation[[:space:]]on[[:space:]]Gradients/a4ccb233-5f5c-41d3-ae47-bd22cdcb6231_origin.pdf filter=lfs diff=lfs merge=lfs -text
 2023/gDDIM_[[:space:]]Generalized[[:space:]]denoising[[:space:]]diffusion[[:space:]]implicit[[:space:]]models/4d3ee4cc-20bf-410f-b0d4-2719f75a6431_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/$O(T^{-1})$[[:space:]]Convergence[[:space:]]of[[:space:]]Optimistic-Follow-the-Regularized-Leader[[:space:]]in[[:space:]]Two-Player[[:space:]]Zero-Sum[[:space:]]Markov[[:space:]]Games/d86aa5c3-1e83-4f15-989d-e117952024f6_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/$_Lambda$-DARTS_[[:space:]]Mitigating[[:space:]]Performance[[:space:]]Collapse[[:space:]]by[[:space:]]Harmonizing[[:space:]]Operation[[:space:]]Selection[[:space:]]among[[:space:]]Cells/a1590ce8-2443-45f9-b768-74da670097a7_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/$_mathcal{O}$-GNN_[[:space:]]incorporating[[:space:]]ring[[:space:]]priors[[:space:]]into[[:space:]]molecular[[:space:]]modeling/ab188adf-da52-465b-96c7-6599648843de_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/$_mathrm{SE}(3)$-Equivariant[[:space:]]Attention[[:space:]]Networks[[:space:]]for[[:space:]]Shape[[:space:]]Reconstruction[[:space:]]in[[:space:]]Function[[:space:]]Space/b7caaaa1-c4b0-4639-a849-27d74bf6d2bd_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/$_mathscr{N}$-WL_[[:space:]]A[[:space:]]New[[:space:]]Hierarchy[[:space:]]of[[:space:]]Expressivity[[:space:]]for[[:space:]]Graph[[:space:]]Neural[[:space:]]Networks/f335c9f2-7e4c-49ea-9b9b-7e536ba967bd_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/$_rm[[:space:]]A^2Q$_[[:space:]]Aggregation-Aware[[:space:]]Quantization[[:space:]]for[[:space:]]Graph[[:space:]]Neural[[:space:]]Networks/86d84cce-8a0d-4627-8174-5af26f79cb15_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/$k$NN[[:space:]]Prompting_[[:space:]]Beyond-Context[[:space:]]Learning[[:space:]]with[[:space:]]Calibration-Free[[:space:]]Nearest[[:space:]]Neighbor[[:space:]]Inference/88ef98b2-6c9b-4b6f-a634-2b2692f94bcc_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/(Certified!!)[[:space:]]Adversarial[[:space:]]Robustness[[:space:]]for[[:space:]]Free!/7064d377-0583-493a-b0c4-c47e08357804_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/3D[[:space:]]Equivariant[[:space:]]Diffusion[[:space:]]for[[:space:]]Target-Aware[[:space:]]Molecule[[:space:]]Generation[[:space:]]and[[:space:]]Affinity[[:space:]]Prediction/087f4cc3-a9bc-404f-906f-af1ead9b2e7c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/3D[[:space:]]Segmenter_[[:space:]]3D[[:space:]]Transformer[[:space:]]based[[:space:]]Semantic[[:space:]]Segmentation[[:space:]]via[[:space:]]2D[[:space:]]Panoramic[[:space:]]Distillation/71f12a2c-edb4-403d-bda7-89406898a087_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/3D[[:space:]]UX-Net_[[:space:]]A[[:space:]]Large[[:space:]]Kernel[[:space:]]Volumetric[[:space:]]ConvNet[[:space:]]Modernizing[[:space:]]Hierarchical[[:space:]]Transformer[[:space:]]for[[:space:]]Medical[[:space:]]Image[[:space:]]Segmentation/16d9f05b-d812-45cf-945b-75a3044c5a00_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Control-Centric[[:space:]]Benchmark[[:space:]]for[[:space:]]Video[[:space:]]Prediction/1223e8e5-87c2-4f7a-a330-39ea15ef7ce7_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Convergent[[:space:]]Single-Loop[[:space:]]Algorithm[[:space:]]for[[:space:]]Relaxation[[:space:]]of[[:space:]]Gromov-Wasserstein[[:space:]]in[[:space:]]Graph[[:space:]]Data/2e003d4a-14fd-4416-a8fa-bd7ef5556c1e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Differential[[:space:]]Geometric[[:space:]]View[[:space:]]and[[:space:]]Explainability[[:space:]]of[[:space:]]GNN[[:space:]]on[[:space:]]Evolving[[:space:]]Graphs/3e5a0d78-852c-4b29-a6de-e6d56ccd8a68_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]GNN-Guided[[:space:]]Predict-and-Search[[:space:]]Framework[[:space:]]for[[:space:]]Mixed-Integer[[:space:]]Linear[[:space:]]Programming/f712a1a1-aab6-4e9b-996f-a531cc7a4d10_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]General[[:space:]]Framework[[:space:]]For[[:space:]]Proving[[:space:]]The[[:space:]]Equivariant[[:space:]]Strong[[:space:]]Lottery[[:space:]]Ticket[[:space:]]Hypothesis/4dc1ce39-bed5-4826-a0a9-05e0759bf57c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]General[[:space:]]Rank[[:space:]]Preserving[[:space:]]Framework[[:space:]]for[[:space:]]Asymmetric[[:space:]]Image[[:space:]]Retrieval/eb5457c2-c96c-44f3-83a1-f8054ff0850e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Graph[[:space:]]Neural[[:space:]]Network[[:space:]]Approach[[:space:]]to[[:space:]]Automated[[:space:]]Model[[:space:]]Building[[:space:]]in[[:space:]]Cryo-EM[[:space:]]Maps/f6c2c58d-482b-4ee6-80ca-89c8c6b8d3a7_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Learning[[:space:]]Based[[:space:]]Hypothesis[[:space:]]Test[[:space:]]for[[:space:]]Harmful[[:space:]]Covariate[[:space:]]Shift/00c19d14-8d7f-4d90-9435-97a80c7dbf97_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Message[[:space:]]Passing[[:space:]]Perspective[[:space:]]on[[:space:]]Learning[[:space:]]Dynamics[[:space:]]of[[:space:]]Contrastive[[:space:]]Learning/50a180c7-307e-4bd4-96a8-e59113d64b35_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Mixture-of-Expert[[:space:]]Approach[[:space:]]to[[:space:]]RL-based[[:space:]]Dialogue[[:space:]]Management/778d7125-dcb2-4d66-9ec6-d11611614f84_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Near-optimal[[:space:]]Coresets[[:space:]]for[[:space:]]Robust[[:space:]]Clustering/4e8635ff-5ab7-479c-8f64-91600131d956_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Near-optimal[[:space:]]Policy[[:space:]]Identification[[:space:]]in[[:space:]]Active[[:space:]]Reinforcement[[:space:]]Learning/62134f5a-e665-4a0e-b574-1cc5c0ab7bde_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Offline[[:space:]]Q-learning[[:space:]]on[[:space:]]Diverse[[:space:]]Multi-Task[[:space:]]Data[[:space:]]Both[[:space:]]Scales[[:space:]]And[[:space:]]Generalizes/33f40ab4-7693-4498-9bae-dfc64447eef4_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Offline[[:space:]]RL[[:space:]]with[[:space:]]No[[:space:]]OOD[[:space:]]Actions_[[:space:]]In-Sample[[:space:]]Learning[[:space:]]via[[:space:]]Implicit[[:space:]]Value[[:space:]]Regularization/12c78af3-659f-46fe-bca3-72707f0e1a8c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/On[[:space:]]the[[:space:]]Sensitivity[[:space:]]of[[:space:]]Reward[[:space:]]Inference[[:space:]]to[[:space:]]Misspecified[[:space:]]Human[[:space:]]Models/e5ed6789-712c-4143-ba70-b5ca52bce4c3_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/On[[:space:]]the[[:space:]]duality[[:space:]]between[[:space:]]contrastive[[:space:]]and[[:space:]]non-contrastive[[:space:]]self-supervised[[:space:]]learning/d33be262-3b56-4ee0-ac2a-c81ee5331b7f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/PaLI_[[:space:]]A[[:space:]]Jointly-Scaled[[:space:]]Multilingual[[:space:]]Language-Image[[:space:]]Model/c396a68a-ce80-4c62-9eb5-c438c030a65f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Personalized[[:space:]]Federated[[:space:]]Learning[[:space:]]with[[:space:]]Feature[[:space:]]Alignment[[:space:]]and[[:space:]]Classifier[[:space:]]Collaboration/7be375f1-38f0-4690-b2b2-546649adad51_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/REVISITING[[:space:]]PRUNING[[:space:]]AT[[:space:]]INITIALIZATION[[:space:]]THROUGH[[:space:]]THE[[:space:]]LENS[[:space:]]OF[[:space:]]RAMANUJAN[[:space:]]GRAPH/0a7b851c-9114-43d6-ab6a-b6a042c1880b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/ReAct_[[:space:]]Synergizing[[:space:]]Reasoning[[:space:]]and[[:space:]]Acting[[:space:]]in[[:space:]]Language[[:space:]]Models/3178f963-e3b9-4f25-a629-0719e1d6f47f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Relative[[:space:]]representations[[:space:]]enable[[:space:]]zero-shot[[:space:]]latent[[:space:]]space[[:space:]]communication/061f3d74-b645-4ec7-90c3-5d9eb6953177_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Rethinking[[:space:]]the[[:space:]]Expressive[[:space:]]Power[[:space:]]of[[:space:]]GNNs[[:space:]]via[[:space:]]Graph[[:space:]]Biconnectivity/af8e15c4-47fa-4aca-8b23-aab08f314cbb_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/SAM[[:space:]]as[[:space:]]an[[:space:]]Optimal[[:space:]]Relaxation[[:space:]]of[[:space:]]Bayes/17c87e9b-224f-4c18-b46c-84700978db83_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Sample-Efficient[[:space:]]Reinforcement[[:space:]]Learning[[:space:]]by[[:space:]]Breaking[[:space:]]the[[:space:]]Replay[[:space:]]Ratio[[:space:]]Barrier/24a3d5e5-385e-4435-a14b-ea9d95599377_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Sampling[[:space:]]is[[:space:]]as[[:space:]]easy[[:space:]]as[[:space:]]learning[[:space:]]the[[:space:]]score_[[:space:]]theory[[:space:]]for[[:space:]]diffusion[[:space:]]models[[:space:]]with[[:space:]]minimal[[:space:]]data[[:space:]]assumptions/86063b59-73ca-47cc-8f69-4b41cd64094f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Scaling[[:space:]]Up[[:space:]]Probabilistic[[:space:]]Circuits[[:space:]]by[[:space:]]Latent[[:space:]]Variable[[:space:]]Distillation/74ad7dd2-45fb-438c-9eb9-d98050e51417_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Selection-Inference_[[:space:]]Exploiting[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]for[[:space:]]Interpretable[[:space:]]Logical[[:space:]]Reasoning/59fbe371-c6b6-4826-a510-446e5ca7c60f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/SimPer_[[:space:]]Simple[[:space:]]Self-Supervised[[:space:]]Learning[[:space:]]of[[:space:]]Periodic[[:space:]]Targets/e2456840-21fa-4162-af95-f5d78c99bab4_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Simplified[[:space:]]State[[:space:]]Space[[:space:]]Layers[[:space:]]for[[:space:]]Sequence[[:space:]]Modeling/aee8befd-f517-4bd7-9d60-39985722a38e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Sparse[[:space:]]Mixture-of-Experts[[:space:]]are[[:space:]]Domain[[:space:]]Generalizable[[:space:]]Learners/ebef3566-b10b-4af7-9e1d-770788792ee8_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Statistical[[:space:]]Efficiency[[:space:]]of[[:space:]]Score[[:space:]]Matching_[[:space:]]The[[:space:]]View[[:space:]]from[[:space:]]Isoperimetry/6e5187bb-90fe-46c3-b769-712ec43db9cd_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Symbolic[[:space:]]Physics[[:space:]]Learner_[[:space:]]Discovering[[:space:]]governing[[:space:]]equations[[:space:]]via[[:space:]]Monte[[:space:]]Carlo[[:space:]]tree[[:space:]]search/267f6157-e2ae-4a72-955f-d848eb7e119d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Tailoring[[:space:]]Language[[:space:]]Generation[[:space:]]Models[[:space:]]under[[:space:]]Total[[:space:]]Variation[[:space:]]Distance/aa619861-4161-431c-9684-e60873fb4d74_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Targeted[[:space:]]Hyperparameter[[:space:]]Optimization[[:space:]]with[[:space:]]Lexicographic[[:space:]]Preferences[[:space:]]Over[[:space:]]Multiple[[:space:]]Objectives/65ae405c-9d84-46e7-8c33-d85f7defdb00_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Temporal[[:space:]]Domain[[:space:]]Generalization[[:space:]]with[[:space:]]Drift-Aware[[:space:]]Dynamic[[:space:]]Neural[[:space:]]Networks/58b57e16-915d-4730-ac6f-919da3b1ff47_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/The[[:space:]]Lie[[:space:]]Derivative[[:space:]]for[[:space:]]Measuring[[:space:]]Learned[[:space:]]Equivariance/b6099d84-6f1d-4bc7-bb59-43199d9ffb88_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/The[[:space:]]Modality[[:space:]]Focusing[[:space:]]Hypothesis_[[:space:]]Towards[[:space:]]Understanding[[:space:]]Crossmodal[[:space:]]Knowledge[[:space:]]Distillation/5c0ad5d6-6c15-4b83-addb-27a3d0231a4b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/The[[:space:]]Role[[:space:]]of[[:space:]]Coverage[[:space:]]in[[:space:]]Online[[:space:]]Reinforcement[[:space:]]Learning/fbdcbb1b-1df9-47b5-8d3e-73a54bf4993d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Time[[:space:]]Will[[:space:]]Tell_[[:space:]]New[[:space:]]Outlooks[[:space:]]and[[:space:]]A[[:space:]]Baseline[[:space:]]for[[:space:]]Temporal[[:space:]]Multi-View[[:space:]]3D[[:space:]]Object[[:space:]]Detection/da87490d-1cf0-4959-a593-1e1c66a1b14c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Token[[:space:]]Merging_[[:space:]]Your[[:space:]]ViT[[:space:]]But[[:space:]]Faster/be91082d-cd16-4db0-b0a9-d55399eaa059_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Towards[[:space:]]Open[[:space:]]Temporal[[:space:]]Graph[[:space:]]Neural[[:space:]]Networks/7786b487-42f4-4b10-95ab-92ac0484e6ce_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Towards[[:space:]]Stable[[:space:]]Test-time[[:space:]]Adaptation[[:space:]]in[[:space:]]Dynamic[[:space:]]Wild[[:space:]]World/7894e70c-5647-4358-9fed-c01703555efb_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Towards[[:space:]]Understanding[[:space:]]Ensemble,[[:space:]]Knowledge[[:space:]]Distillation[[:space:]]and[[:space:]]Self-Distillation[[:space:]]in[[:space:]]Deep[[:space:]]Learning/2678a033-457c-4c7f-8bd1-934a0de29613_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Transfer[[:space:]]NAS[[:space:]]with[[:space:]]Meta-learned[[:space:]]Bayesian[[:space:]]Surrogates/e62160c3-13d8-422a-bc21-2a96bd6a79b7_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Transformers[[:space:]]Learn[[:space:]]Shortcuts[[:space:]]to[[:space:]]Automata/c9b209cb-9afd-435b-a91b-d757cb520370_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Transformers[[:space:]]are[[:space:]]Sample-Efficient[[:space:]]World[[:space:]]Models/90088939-cfe6-4bc8-9ca1-4d86290f25da_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Universal[[:space:]]Few-shot[[:space:]]Learning[[:space:]]of[[:space:]]Dense[[:space:]]Prediction[[:space:]]Tasks[[:space:]]with[[:space:]]Visual[[:space:]]Token[[:space:]]Matching/1c7ab587-e995-4afc-93c5-ceda220f4357_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/View[[:space:]]Synthesis[[:space:]]with[[:space:]]Sculpted[[:space:]]Neural[[:space:]]Points/22cc757d-7a28-4d80-8903-b693a5ffc4f5_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Visual[[:space:]]Classification[[:space:]]via[[:space:]]Description[[:space:]]from[[:space:]]Large[[:space:]]Language[[:space:]]Models/02364761-2532-4599-9452-506a79cee297_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/When[[:space:]]and[[:space:]]Why[[:space:]]Vision-Language[[:space:]]Models[[:space:]]Behave[[:space:]]like[[:space:]]Bags-Of-Words,[[:space:]]and[[:space:]]What[[:space:]]to[[:space:]]Do[[:space:]]About[[:space:]]It_/c145cbd6-7963-4148-b875-a0f2b6488fc2_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/WikiWhy_[[:space:]]Answering[[:space:]]and[[:space:]]Explaining[[:space:]]Cause-and-Effect[[:space:]]Questions/00a9ac2c-3e1f-428f-a34f-c88d229e2f98_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Win_[[:space:]]Weight-Decay-Integrated[[:space:]]Nesterov[[:space:]]Acceleration[[:space:]]for[[:space:]]Adaptive[[:space:]]Gradient[[:space:]]Algorithms/522f8823-4bd1-4ba3-8db5-7699f023cbbd_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/​​What[[:space:]]learning[[:space:]]algorithm[[:space:]]is[[:space:]]in-context[[:space:]]learning_[[:space:]]Investigations[[:space:]]with[[:space:]]linear[[:space:]]models/6c5cb82c-4892-4f3e-858f-8a83309456e6_origin.pdf filter=lfs diff=lfs merge=lfs -text
2023/$O(T^{-1})$ Convergence of Optimistic-Follow-the-Regularized-Leader in Two-Player Zero-Sum Markov Games/d86aa5c3-1e83-4f15-989d-e117952024f6_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/$O(T^{-1})$ Convergence of Optimistic-Follow-the-Regularized-Leader in Two-Player Zero-Sum Markov Games/d86aa5c3-1e83-4f15-989d-e117952024f6_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/$O(T^{-1})$ Convergence of Optimistic-Follow-the-Regularized-Leader in Two-Player Zero-Sum Markov Games/d86aa5c3-1e83-4f15-989d-e117952024f6_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8ed0beb968ac62483d911273fd3e80b9ad989a4c29823778ea34e92328117769
+ size 359212
2023/$O(T^{-1})$ Convergence of Optimistic-Follow-the-Regularized-Leader in Two-Player Zero-Sum Markov Games/full.md ADDED
@@ -0,0 +1,620 @@
+ # $O(T^{-1})$ CONVERGENCE OF OPTIMISTIC-FOLLOW-THE-REGULARIZED-LEADER IN TWO-PLAYER ZERO-SUM MARKOV GAMES
+
+ Yuepeng Yang*
+
+ Cong Ma*
+
+ # ABSTRACT
+
+ We prove that optimistic-follow-the-regularized-leader (OFTRL), together with smooth value updates, finds an $O(T^{-1})$ -approximate Nash equilibrium in $T$ iterations for two-player zero-sum Markov games with full information. This improves the $\tilde{O}(T^{-5/6})$ convergence rate recently shown in the paper by Zhang et al. (2022b). The refined analysis hinges on two essential ingredients. First, the sum of the regrets of the two players, though not necessarily non-negative as in normal-form games, is approximately non-negative in Markov games. This property allows us to bound the second-order path lengths of the learning dynamics. Second, we prove a tighter algebraic inequality regarding the weights deployed by OFTRL that shaves an extra $\log T$ factor. This crucial improvement enables the inductive analysis that leads to the final $O(T^{-1})$ rate.
+
+ # 1 INTRODUCTION
+
+ Multi-agent reinforcement learning (MARL) (Busoniu et al., 2008; Zhang et al., 2021) models sequential decision-making problems in which multiple agents/players interact with each other in a shared environment. MARL has recently achieved tremendous success in playing games (Vinyals et al., 2019; Berner et al., 2019; Brown & Sandholm, 2019), which, consequently, has spurred a growing body of work on MARL; see Yang & Wang (2020) for a recent overview.
+
+ A widely adopted mathematical model for MARL is the so-called Markov games (Shapley, 1953; Littman, 1994), which combines normal-form games (Nash, 1951) with Markov decision processes (Puterman, 2014). In a nutshell, a Markov game starts with a certain state, followed by actions taken by the players. The players then receive their respective payoffs, as in a normal-form game, and at the same time the system transits to a new state as in a Markov decision process. The whole process repeats. As in normal-form games, the goal for each player is to maximize her own cumulative payoffs. We defer the precise descriptions of Markov games to Section 2.
+
+ In the simpler normal-form games, no-regret learning (Cesa-Bianchi & Lugosi, 2006) has long been used as an effective method to achieve competence in the multi-agent environment. Take the two-player zero-sum normal-form game as an example. It is easy to show that standard no-regret algorithms such as follow-the-regularized-leader (FTRL) reach an $O(T^{-1/2})$-approximate Nash equilibrium (Nash, 1951) in $T$ iterations. Surprisingly, the seminal paper Daskalakis et al. (2011) demonstrates that a special no-regret algorithm, built upon Nesterov's excessive gap technique (Nesterov, 2005), achieves a faster and optimal $\tilde{O}(T^{-1})$ rate of convergence to the Nash equilibrium. This fast convergence was later established for optimistic variants of mirror descent (Rakhlin & Sridharan, 2013) and FTRL (Syrgkanis et al., 2015). Since then, a flurry of research (Chen & Peng, 2020; Daskalakis et al., 2021; Anagnostides et al., 2022a;b; Farina et al., 2022) has been conducted around optimistic no-regret learning algorithms to obtain faster rates of convergence in normal-form games.
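As a concrete illustration of the dynamics discussed above, the following is a minimal sketch (not taken from the paper) of optimistic hedge, i.e., OFTRL with an entropy regularizer, on a two-player zero-sum normal-form game; the function names and step size are our own illustrative choices.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def optimistic_hedge_zero_sum(A, T=2000, eta=0.1):
    """Optimistic hedge (entropy-regularized OFTRL) on the zero-sum game
    min_x max_y x^T A y. Returns the duality gap of the averaged strategies.
    """
    m, n = A.shape
    Lx, Ly = np.zeros(m), np.zeros(n)   # cumulative loss vectors
    gx, gy = np.zeros(m), np.zeros(n)   # most recent losses (optimistic guesses)
    xbar, ybar = np.zeros(m), np.zeros(n)
    for _ in range(T):
        # OFTRL step: follow the regularized leader on the cumulative loss
        # plus one extra copy of the last loss (the "optimistic" prediction).
        x = softmax(-eta * (Lx + gx))
        y = softmax(-eta * (Ly + gy))
        gx, gy = A @ y, -A.T @ x        # row player minimizes, column maximizes
        Lx += gx
        Ly += gy
        xbar += x
        ybar += y
    xbar, ybar = xbar / T, ybar / T
    # Nash/duality gap: max_y xbar^T A y - min_x x^T A ybar
    return np.max(xbar @ A) - np.min(A @ ybar)
```

On a small game such as matching pennies, the duality gap of the averaged strategies decays roughly like $1/T$ under these dynamics, in contrast to the $O(T^{-1/2})$ rate of vanilla hedge.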
+
+ In contrast, research on the fast convergence of optimistic no-regret learning in Markov games has been scarce. In this paper, we focus on two-player zero-sum Markov games—arguably the simplest Markov game. Zhang et al. (2022b) recently initiated the study of the optimistic-follow-the-regularized-leader (OFTRL) algorithm in such a setting and proved that OFTRL converges to an $\tilde{O}(T^{-5/6})$ -approximate Nash equilibrium after $T$ iterations. In light of the faster $O(T^{-1})$ convergence of optimistic algorithms in normal-form games, it is natural to ask
+
+ After $T$ iterations, can OFTRL find an $O(T^{-1})$ -approximate Nash equilibrium in two-player zero-sum Markov games?
+
+ In fact, this question has also been raised by Zhang et al. (2022b) in the Discussion section. More promisingly, they have verified the fast convergence (i.e., $O(T^{-1})$ ) of OFTRL in a simple two-stage Markov game; see Fig. 1 therein.
+
+ Our main contribution in this work is to answer this question affirmatively, through improving the $\tilde{O}(T^{-5/6})$ rate demonstrated in Zhang et al. (2022b) to the optimal $O(T^{-1})$ rate. The improved rate for OFTRL arises from two technical contributions. The first is the approximate non-negativity of the sum of the regrets of the two players in Markov games. In particular, the sum is lower bounded by the negative estimation error of the optimal $Q$ -function; see Lemma 6 for the precise statement. This is in stark contrast to the two-player zero-sum normal-form game (Anagnostides et al., 2022c) and the multi-player general-sum normal-form game (Anagnostides et al., 2022b), in which by definition, the sum of the external/swap regrets are non-negative. This approximate non-negativity proves crucial for us to control the second-order path length of the learning dynamics induced by OFTRL. In a different context—time-varying zero-sum normal-form games, Zhang et al. (2022a) also utilizes a sort of approximate non-negativity of the sum of the regrets. However, the source of this gap from non-negativity is different: in Zhang et al. (2022a) it arises from the time-varying nature of the zero-sum game, while in our case with Markov games, it comes from the estimation error of the equilibrium pay-off matrix by the algorithm itself.
+
+ Secondly, central to the analysis in finite-horizon Markov decision processes (and also Markov games) is the induction across the horizon. In our case, in order to carry out the induction step, we prove a tighter algebraic inequality related to the weights deployed by OFTRL; see Lemma 4. In particular, we shave an extra $\log T$ factor. Surprisingly, this seemingly harmless $\log T$ factor is the key to enabling the above-mentioned induction analysis, and as a by-product, removes the extra log factor in the performance guarantee of OFTRL.
+
+ Note that as an imperfect remedy, Zhang et al. (2022b) proposed a modified OFTRL algorithm that achieves $\tilde{O}(T^{-1})$ convergence to Nash equilibrium. However, compared to the vanilla OFTRL algorithm considered herein, the modified version tracks two $Q$ -functions, adopts a different $Q$ -function update procedure that can be more costly in certain scenarios, and more importantly diverges from the general policy optimization framework proposed in Zhang et al. (2022b). Our work bridges these gaps by establishing the fast convergence for the vanilla OFTRL.
+
+ Another line of algorithms used for solving Nash equilibrium is based on dynamic programming (Perolat et al., 2015; Zhang et al., 2022b; Cen et al., 2021). Unlike the single-loop structure of OFTRL, the dynamic programming approach requires a nested loop, with the outer loop iterating over the horizons and the inner loops solving a sub-game through iterations. This requires more tuning parameters, one set for each subproblem/layer; such extra tuning was documented in Cen et al. (2021). The nested nature of dynamic programming also demands one to predetermine a precision $\epsilon$ and estimate the sub-game at each horizon to precision $\epsilon / H$ . This is less convenient in practice compared to a single-loop algorithm like the OFTRL we study, where such predetermined precision is not necessary. Another recent paper, Cen et al. (2022), also discusses the advantages of single-loop algorithms over those with nested loops.
+
+ # 1.1 RELATED WORK
+
+ Optimistic no-regret learning in games. Our work is mostly related to the line of work on proving fast convergence of optimistic no-regret algorithms in various forms of games. Daskalakis et al. (2011) provide the first fast algorithm that reaches a Nash equilibrium at an $\tilde{O}(T^{-1})$ rate in two-player zero-sum normal-form games. Later, with the same setup, Rakhlin & Sridharan (2013) prove a similar fast convergence for optimistic mirror descent (OMD). Syrgkanis et al. (2015) extend the results to multi-player general-sum normal-form games. In addition, Syrgkanis et al. show that when all the players adopt optimistic algorithms, their individual regret is at most $O(T^{-3/4})$ . This is further improved to $O(T^{-5/6})$ in the special two-player zero-sum case (Chen & Peng, 2020). More recently, via a detailed analysis of higher-order smoothness, Daskalakis et al. (2021); Anagnostides et al. (2022a) manage to improve the individual regret guarantee of optimistic hedge to $\tilde{O}(T^{-1})$ in multi-player general-sum normal-form games, matching the result in the two-player case. A similar result is shown by Anagnostides et al. (2022b) with a different analysis using self-concordant barriers as the regularizer.
+
+ Several attempts have been made to extend the results on optimistic no-regret learning in normal-form games to Markov games. Wei et al. (2021) design a decentralized algorithm based on optimistic gradient descent / ascent that converges to a Nash equilibrium at an $\tilde{O}(T^{-1/2})$ rate. Closest to us is the work by Zhang et al. (2022b) which shows an $\tilde{O}(T^{-5/6})$ convergence of OFTRL to the Nash equilibrium in two-player zero-sum Markov games and an $\tilde{O}(T^{-3/4})$ convergence to a coarse correlated equilibrium in multi-player general-sum Markov games. Most recently, Erez et al. (2022) prove an $O(T^{-1/4})$ individual regret for OMD in multi-player general-sum Markov games.
+
+ Two-player zero-sum Markov games. Our work also fits into the study of two-player zero-sum Markov games (Shapley, 1953; Littman, 1994). Various algorithms (Hu & Wellman, 2003; Littman, 1994; Zhao et al., 2021; Cen et al., 2021) have been proposed in the full information setting, where one assumes the players have access to the exact state-action value functions. In particular, Zhao et al. (2021); Cen et al. (2021) use optimistic approaches for normal-form games as subroutines to extend the $\tilde{O}(T^{-1})$ convergence rates to two-player zero-sum Markov games; notably, they provide last-iterate convergence guarantees as well. However, in doing so, their algorithms require one to approximately solve a normal-form game in each iteration.
+
+ In the bandit setting, Bai & Jin (2020); Xie et al. (2020); Bai et al. (2020); Liu et al. (2021); Zhang et al. (2020) study the sample complexity of two-player zero-sum Markov games. In addition, Sidford et al. (2020); Jia et al. (2019); Zhang et al. (2020); Li et al. (2022) investigate the sample complexity under a generative model where one can query the Markov game at arbitrary states and actions. Last but not least, recently two-player zero-sum Markov games have been studied in the offline setting (Cui & Du, 2022; Yan et al., 2022), where the learner is given a set of historical data, and cannot interact with Markov games further.
+
+ # 2 PRELIMINARIES
+
+ This section provides the necessary background on Markov games and optimistic-follow-the-regularized-leader (OFTRL).
+
+ Two-player zero-sum Markov games. Denote by $\mathcal{MG}(H, \mathcal{S}, \mathcal{A}, \mathcal{B}, \mathbb{P}, r)$ a finite-horizon time-inhomogeneous two-player zero-sum Markov game, with $H$ the horizon, $\mathcal{S}$ the state space, $\mathcal{A}$ (resp. $\mathcal{B}$ ) the action space for the max-player (resp. min-player), $\mathbb{P} = \{\mathbb{P}_h\}_{h \in [H]}$ the transition probabilities, and $r = \{r_h\}_{h \in [H]}$ the reward function. We assume the state space $\mathcal{S}$ and the action spaces $\mathcal{A}, \mathcal{B}$ to be finite with sizes $S, A, B$ , respectively, and each $r_h$ takes values in $[0, 1]$ . Without loss of generality, we assume that the game starts at a fixed state $s_1 \in \mathcal{S}$ . At each step $h$ , both players observe the current state $s_h \in \mathcal{S}$ ; the max-player picks an action $a_h \in \mathcal{A}$ and the min-player picks an action $b_h \in \mathcal{B}$ simultaneously. The max-player (resp. min-player) then receives the reward $r_h(s_h, a_h, b_h)$ (resp. $-r_h(s_h, a_h, b_h)$ ), and the game transitions to step $h + 1$ with the next state $s_{h+1}$ sampled from $\mathbb{P}_h(\cdot | s_h, a_h, b_h)$ . The game ends after $H$ steps. The goal of the max-player is to maximize her total reward, while the min-player seeks to minimize the total reward obtained by the max-player.
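To make this setup concrete, the tuple $(\mathbb{P}, r)$ can be stored as plain arrays. Below is a minimal sketch in Python/NumPy (the function name and array layout are our own illustration, not part of the paper) that generates a random instance of such a game.

```python
import numpy as np

def random_markov_game(H, S, A, B, seed=0):
    """Generate a random finite-horizon two-player zero-sum Markov game.

    Returns (P, r) where
      P[h, s, a, b] is a probability vector over the S next states,
      r[h, s, a, b] is the max-player's reward, taking values in [0, 1].
    """
    rng = np.random.default_rng(seed)
    P = rng.random((H, S, A, B, S))
    P /= P.sum(axis=-1, keepdims=True)  # normalize to transition probabilities
    r = rng.random((H, S, A, B))
    return P, r
```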
+
+ Markov policies and value functions. Let $\mu = \{\mu_h\}_{h\in [H]}$ be the Markov policy for the max-player, where $\mu_h(\cdot |s)\in \Delta_{\mathcal{A}}$ is the distribution of actions the max-player picks when seeing state $s$ at step $h$ . Here, $\Delta_{\mathcal{X}}$ denotes the set of all probability distributions on the space $\mathcal{X}$ . Similarly, the min-player is equipped with a Markov policy $\nu = \{\nu_h\}_{h\in [H]}$ . We define the value function of the policy pair $(\mu ,\nu)$ at step $h$ to be
+
+ $$
+ V_{h}^{\mu, \nu}(s) := \mathbb{E}_{\mu, \nu} \left[ \sum_{i = h}^{H} r_{i}\left(s_{i}, a_{i}, b_{i}\right) \mid s_{h} = s \right],
+ $$
+
+ where the expectation is taken w.r.t. the policies $\{\mu_i,\nu_i\}_{i\geq h}$ and the state transitions $\{\mathbb{P}_i\}_{i\geq h}$ . Similarly, one can define the $Q$ -function as
+
+ $$
+ Q_{h}^{\mu, \nu}(s, a, b) := \mathbb{E}_{\mu, \nu} \left[ \sum_{i = h}^{H} r_{i}\left(s_{i}, a_{i}, b_{i}\right) \mid s_{h} = s, a_{h} = a, b_{h} = b \right].
+ $$
+
+ In words, both functions represent the expected future rewards received by the max-player given the current state or state-action pair.
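Both functions can be computed exactly by sweeping backward over the horizon, via the Bellman recursion recalled later in this section. A minimal NumPy sketch of this policy evaluation (the array layout and names are our own assumptions, not the paper's code):

```python
import numpy as np

def evaluate(P, r, mu, nu):
    """Compute V[h, s] and Q[h, s, a, b] for a Markov policy pair (mu, nu).

    P[h, s, a, b] is a distribution over next states, r[h, s, a, b] the reward;
    mu[h, s] is a distribution over A actions, nu[h, s] over B actions.
    """
    H, S, A, B = r.shape
    V = np.zeros((H + 1, S))                 # V_{H+1} = 0: the game has ended
    Q = np.zeros((H, S, A, B))
    for h in reversed(range(H)):
        Q[h] = r[h] + P[h] @ V[h + 1]        # Q_h = r_h + [P_h V_{h+1}]
        V[h] = np.einsum('sa,sab,sb->s', mu[h], Q[h], nu[h])  # V_h = mu^T Q_h nu
    return V[:H], Q
```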
+
+ Best responses and Nash equilibria. Fix a Markov policy $\nu$ for the min-player. There exists a Markov policy $\mu^{\dagger}(\nu)$ (a.k.a. the best response) such that for any $s \in \mathcal{S}$ and $h \in [H]$ ,
+
+ $$
+ V_{h}^{\mu^{\dagger}(\nu), \nu}(s) = \sup_{\mu} V_{h}^{\mu, \nu}(s),
+ $$
+
+ where the supremum is taken over all Markov policies. To simplify the notation, we denote $V_h^{\dagger, \nu}(s) := V_h^{\mu^{\dagger}(\nu), \nu}(s)$ . Similarly, we can define $V_h^{\mu, \dagger}(s)$ . It is known that there exists a pair $(\mu^\star, \nu^\star)$ of Markov policies such that $\mu^\star$ and $\nu^\star$ are best responses to each other, i.e., $V_h^{\mu^\star, \nu^\star}(s) = V_h^{\dagger, \nu^\star}(s) = V_h^{\mu^\star, \dagger}(s)$ for all $s \in \mathcal{S}$ and $h \in [H]$ . Such a pair $(\mu^\star, \nu^\star)$ is called a Nash equilibrium (NE). We may denote the value function and $Q$ -function under any Nash equilibrium $(\mu^\star, \nu^\star)$ as
+
+ $$
+ V _ {h} ^ {\star} := V _ {h} ^ {\mu^ {\star}, \nu^ {\star}}, \qquad Q _ {h} ^ {\star} := Q _ {h} ^ {\mu^ {\star}, \nu^ {\star}},
+ $$
+
+ which are known to be unique even if there are multiple Nash equilibria (Shapley, 1953). The goal of learning in two-player zero-sum Markov games is to find an $\varepsilon$ -approximation to the NE defined as follows.
+
+ Definition 1 ( $\varepsilon$ -approximate Nash equilibrium). Fix any approximation accuracy $\varepsilon > 0$ . A pair $(\mu, \nu)$ of Markov policies is an $\varepsilon$ -approximate Nash equilibrium if
+
+ $$
+ \mathrm{NE\text{-}gap}(\mu, \nu) := V_{1}^{\dagger, \nu}(s_{1}) - V_{1}^{\mu, \dagger}(s_{1}) \leq \varepsilon. \tag{1}
+ $$
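The NE-gap of Definition 1 can be computed exactly with two best-response backward inductions, one per player. A minimal sketch (our own illustration; the array layout and names are assumptions, and state $0$ plays the role of $s_1$):

```python
import numpy as np

def ne_gap(P, r, mu, nu):
    """NE-gap(mu, nu) = V_1^{dagger,nu}(s_1) - V_1^{mu,dagger}(s_1).

    P[h, s, a, b] is a distribution over next states, r[h, s, a, b] in [0, 1];
    mu[h, s] and nu[h, s] are action distributions. State 0 plays s_1.
    """
    H, S, A, B = r.shape
    v_hi = np.zeros(S)   # value when the max-player best-responds to nu
    v_lo = np.zeros(S)   # value when the min-player best-responds to mu
    for h in reversed(range(H)):
        q_hi = r[h] + P[h] @ v_hi            # (S, A, B)
        q_lo = r[h] + P[h] @ v_lo
        # fix nu_h, let the max-player pick her best action in every state
        v_hi = np.einsum('sab,sb->sa', q_hi, nu[h]).max(axis=1)
        # fix mu_h, let the min-player pick the action worst for the max-player
        v_lo = np.einsum('sab,sa->sb', q_lo, mu[h]).min(axis=1)
    return v_hi[0] - v_lo[0]
```

By construction the gap is non-negative for every policy pair, and it vanishes exactly at a Nash equilibrium.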
+
+ Algorithm 1 Optimistic-follow-the-regularized-leader for solving two-player zero-sum Markov games
+
+ Input: Stepsize $\eta$ , reward function $r$ , probability transition function $\mathbb{P}$ .
+
+ Initialization: $Q_h^0 \equiv 0$ for all $h \in [H]$ .
+
+ For iteration 1 to $T$ , do
+
+ - Policy Update: for all states $s \in \mathcal{S}$ and steps $h \in [H]$ ,
+
+ $$
+ \mu_ {h} ^ {t} (a \mid s) \propto \exp \left(\frac {\eta}{w _ {t}} \left[ \sum_ {i = 1} ^ {t - 1} w _ {i} \left[ Q _ {h} ^ {i} \nu_ {h} ^ {i} \right] (s, a) + w _ {t} \left[ Q _ {h} ^ {t - 1} \nu_ {h} ^ {t - 1} \right] (s, a) \right]\right), \tag {2a}
+ $$
+
+ $$
+ \nu_ {h} ^ {t} (b \mid s) \propto \exp \left(- \frac {\eta}{w _ {t}} \left[ \sum_ {i = 1} ^ {t - 1} w _ {i} \left[ \left(Q _ {h} ^ {i}\right) ^ {\top} \mu_ {h} ^ {i} \right] (s, b) + w _ {t} \left[ \left(Q _ {h} ^ {t - 1}\right) ^ {\top} \mu_ {h} ^ {t - 1} \right] (s, b) \right]\right). \tag {2b}
+ $$
+
+ - Value Update: for all $s \in \mathcal{S}, a \in \mathcal{A}, b \in \mathcal{B}$ , from $h = H$ to 1,
+
+ $$
+ Q _ {h} ^ {t} (s, a, b) = \left(1 - \alpha_ {t}\right) Q _ {h} ^ {t - 1} (s, a, b) + \alpha_ {t} \left(r _ {h} + \mathbb {P} _ {h} \left[ \left(\mu_ {h + 1} ^ {t}\right) ^ {\top} Q _ {h + 1} ^ {t} \nu_ {h + 1} ^ {t} \right]\right) (s, a, b). \tag {3}
+ $$
+
+ Output average policy: for all $s \in \mathcal{S}, h \in [H]$ ,
+
+ $$
+ \hat {\mu} _ {h} (\cdot \mid s) := \sum_ {t = 1} ^ {T} \alpha_ {T} ^ {t} \mu_ {h} ^ {t} (\cdot \mid s), \quad \hat {\nu} _ {h} (\cdot \mid s) := \sum_ {t = 1} ^ {T} \alpha_ {T} ^ {t} \nu_ {h} ^ {t} (\cdot \mid s). \tag {4}
+ $$
+
+ An interlude: additional notations. Before explaining OFTRL, we introduce some additional notation to simplify the exposition hereafter. Fix any $h \in [H]$ and $s \in \mathcal{S}$ . For any function $Q: \mathcal{S} \times \mathcal{A} \times \mathcal{B} \to \mathbb{R}$ , we may view $Q(s, \cdot, \cdot)$ as an $A \times B$ matrix and $\mu_h(\cdot | s), \nu_h(\cdot | s)$ as vectors of length $A$ and $B$ , respectively. Then for any policy pair $(\mu_h, \nu_h)$ at horizon $h$ we may define
+
+ $$
+ \left[ \mu_ {h} ^ {\top} Q \nu_ {h} \right] (s) := \mathbb {E} _ {a \sim \mu_ {h} (\cdot | s), b \sim \nu_ {h} (\cdot | s)} [ Q (s, a, b) ],
+ $$
+
+ $$
+ \left[ \mu_ {h} ^ {\top} Q \right] (s, \cdot) := \mathbb {E} _ {a \sim \mu_ {h} (\cdot | s)} [ Q (s, a, \cdot) ],
+ $$
+
+ $$
+ [ Q \nu_ {h} ] (s, \cdot) := \mathbb {E} _ {b \sim \nu_ {h} (\cdot | s)} [ Q (s, \cdot , b) ].
+ $$
+
+ The term $\left[\mu_h^\top Q\nu_h\right](s)$ can also be written in the inner-product form $\langle \mu_h, Q\nu_h \rangle(s)$ or $\langle \nu_h, Q^\top \mu_h \rangle(s)$ . It is easy to check that for fixed $s$ and $h$ , the left-hand sides of these definitions reduce to standard matrix operations. In addition, for any $V: \mathcal{S} \to \mathbb{R}$ , we define the shorthand
+
+ $$
+ \left[ \mathbb {P} _ {h} V \right] (s, a, b) := \mathbb {E} _ {s ^ {\prime} \sim \mathbb {P} _ {h} (\cdot | s, a, b)} [ V (s ^ {\prime}) ],
+ $$
+
+ which allows us to rewrite Bellman updates of $V$ and $Q$ as
+
+ $$
+ V _ {h} ^ {\mu , \nu} (s) = \left[ \mu_ {h} ^ {\top} Q _ {h} ^ {\mu , \nu} \nu_ {h} \right] (s),
+ $$
+
+ $$
+ Q _ {h} ^ {\mu , \nu} (s, a, b) = r _ {h} (s, a, b) + \left[ \mathbb {P} _ {h} V _ {h + 1} ^ {\mu , \nu} \right] (s, a, b).
+ $$
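For a fixed state, all of these shorthands reduce to ordinary matrix-vector products. A quick NumPy illustration (all names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, S = 3, 4, 5
Q = rng.random((A, B))                 # Q(s, ., .) for one fixed state s
mu = rng.random(A); mu /= mu.sum()     # mu_h(. | s)
nu = rng.random(B); nu /= nu.sum()     # nu_h(. | s)

val = mu @ Q @ nu                      # [mu^T Q nu](s): a scalar
row = Q @ nu                           # [Q nu](s, .): a length-A vector
col = Q.T @ mu                         # [mu^T Q](s, .): a length-B vector
assert np.isclose(val, mu @ row) and np.isclose(val, nu @ col)  # inner-product forms

P = rng.random((A, B, S)); P /= P.sum(-1, keepdims=True)  # P_h(. | s, a, b)
V = rng.random(S)
PV = P @ V                             # [P_h V](s, a, b) at the fixed state s
assert PV.shape == (A, B)
```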
+
+ Optimistic-follow-the-regularized-leader. Now we are ready to introduce the optimistic-follow-the-regularized-leader (OFTRL) algorithm for solving two-player zero-sum Markov games, which first appeared in Zhang et al. (2022b). See Algorithm 1 for the full specification.
+
+ In a nutshell, the algorithm has three main components. The first is the policy update (2) using weighted OFTRL for both the max- and min-players. Compared to the standard follow-the-regularized-leader algorithm, the weighted OFTRL adds a loss predictor $[Q^{t - 1}\nu^{t - 1}](s,a)$ and deploys a weighted update according to the weights $\{w_{i}\}_{1\leq i\leq t}$ , which we shall define momentarily. The second component is the backward value update (3) using a weighted average of the previous estimates and the Bellman updates. The last essential part is outputting a weighted policy (4) over all the historical policies. As these updates suggest, the weights play a big role in specifying the OFTRL algorithm. In particular, we set
+
+ $$
+ \alpha_ {t} := \frac {H + 1}{H + t}, \quad \alpha_ {t} ^ {t} := \alpha_ {t}, \quad \alpha_ {t} ^ {i} := \alpha_ {i} \prod_ {j = i + 1} ^ {t} (1 - \alpha_ {j}), \quad w _ {i} := \frac {\alpha_ {t} ^ {i}}{\alpha_ {t} ^ {1}} = \frac {\alpha_ {i}}{\alpha_ {1} \prod_ {j = 2} ^ {i} (1 - \alpha_ {j})}, \tag {5}
+ $$
+
+ which are the same choices as in Zhang et al. (2022b).
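Putting the pieces together, Algorithm 1 can be sketched in a few dozen lines of NumPy. This is our own illustrative implementation of the updates (2)-(5), not the authors' code; the array layout and helper names are assumptions.

```python
import numpy as np

def softmax(z):
    """Softmax over the last axis, stabilized against overflow."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def oftrl(P, r, T):
    """Sketch of Algorithm 1. P[h,s,a,b] is a distribution over next
    states, r[h,s,a,b] in [0,1]. Returns the output average policies (4)."""
    H, S, A, B = r.shape
    eta = (1.0 / 8.0) / H**2                 # eta = C_eta * H^{-2}, C_eta = 1/8
    Q = np.zeros((H, S, A, B))               # Q_h^0 = 0
    cum_mu = np.zeros((H, S, A))             # sum_i w_i [Q^i nu^i](s,a)
    cum_nu = np.zeros((H, S, B))             # sum_i w_i [(Q^i)^T mu^i](s,b)
    last_mu = np.zeros((H, S, A))            # loss predictor [Q^{t-1} nu^{t-1}]
    last_nu = np.zeros((H, S, B))
    mu_sum = np.zeros((H, S, A)); nu_sum = np.zeros((H, S, B)); w_sum = 0.0
    w = 1.0                                  # w_1 = 1
    for t in range(1, T + 1):
        if t > 1:
            w *= (H + t - 1) / (t - 1)       # w_t / w_{t-1} = (H+t-1)/(t-1)
        # policy update (2a)-(2b)
        mu = softmax(eta / w * (cum_mu + w * last_mu))
        nu = softmax(-eta / w * (cum_nu + w * last_nu))
        # value update (3), backward over h, with V_{H+1} = 0
        alpha = (H + 1) / (H + t)
        for h in reversed(range(H)):
            if h == H - 1:
                target = r[h]
            else:
                v_next = np.einsum('sa,sab,sb->s', mu[h + 1], Q[h + 1], nu[h + 1])
                target = r[h] + P[h] @ v_next
            Q[h] = (1 - alpha) * Q[h] + alpha * target
        # accumulate the weighted losses and the output average (4)
        loss_mu = np.einsum('hsab,hsb->hsa', Q, nu)
        loss_nu = np.einsum('hsab,hsa->hsb', Q, mu)
        cum_mu += w * loss_mu; cum_nu += w * loss_nu
        last_mu, last_nu = loss_mu, loss_nu
        mu_sum += w * mu; nu_sum += w * nu; w_sum += w
    return mu_sum / w_sum, nu_sum / w_sum    # alpha_T^t is proportional to w_t
```

For $H = 1$ this reduces to weighted optimistic hedge on a matrix game, where the averaged policy pair approaches the Nash equilibrium.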
+
+ # 3 MAIN RESULT AND OVERVIEW OF THE PROOF
+
+ With the preliminaries in place, we are in a position to state our main result for OFTRL in two-player zero-sum Markov games.
+
+ Theorem 1. Consider Algorithm 1 with $\eta = C_{\eta}H^{-2}$ for some constant $C_{\eta} \leq 1/8$ . The output policy pair $(\hat{\mu}, \hat{\nu})$ satisfies
+
+ $$
+ \mathrm{NE\text{-}gap}(\hat{\mu}, \hat{\nu}) \leq \frac{320 C_{\eta}^{-1} H^{5} \cdot \log(A B)}{T}.
+ $$
+
+ Several remarks on Theorem 1 are in order. First, Theorem 1 demonstrates that OFTRL can find an $O(T^{-1})$ -approximate Nash equilibrium in $T$ iterations. This improves upon the $\tilde{O}(T^{-5/6})$ rate proved in the prior work (Zhang et al., 2022b), and also matches the empirical evidence provided therein. While Zhang et al. (2022b) also provide a modified OFTRL algorithm that achieves an $\tilde{O}(T^{-1})$ rate by maintaining two separate value estimators (one for the max-player and the other for the min-player), the OFTRL algorithm studied herein is more natural and also computationally simpler. Second, this rate is nearly unimprovable even in the simpler two-player zero-sum normal-form games (Daskalakis et al., 2011). It is also worth pointing out that algorithms with an $\tilde{O}(T^{-1})$ rate have been proposed in the literature (Cen et al., 2021; Zhao et al., 2021); however, compared to those algorithms, OFTRL does not require one to approximately solve a normal-form game in each iteration. Lastly, Theorem 1 allows any $C_{\eta} \in (0, 1/8]$ , with $C_{\eta} = 1/8$ minimizing the resulting bound on the NE-gap.
+
+ Before embarking on the formal proof, we would like to immediately provide an overview of our proof techniques.
+
+ Step 1: controlling NE-gap using the sum of regrets and estimation error. In the simpler normal-form game (i.e., without any state transition dynamics as in Markov games), it is well known that NE-gap is controlled by the sum of the regrets of the two players. This would also be the case for Markov games if in the policy update (2) by OFTRL, we use the true $Q$ -function $Q_h^\star$ instead of the estimate $Q_h^t$ . As a result, intuitively, the NE-gap in Markov games should be controlled by both the sum of the regrets of the two players and also the estimation error $\| Q_h^t - Q_h^\star \|_\infty$ ; see Lemma 1.
+
+ Step 2: bounding the sum of regrets. Given the extensive literature on regret guarantees for optimistic algorithms (Anagnostides et al., 2022c;b; Zhang et al., 2022b), it is relatively easy to control the sum of the regrets to obtain the desired $O(T^{-1})$ rate; see Lemma 2. The key is to exploit the stability in the loss vectors.
+
+ Step 3: bounding estimation error. It then boils down to controlling the estimation error $\| Q_h^t -Q_h^\star \|_\infty$ , in which our main technical contributions lie. Due to the nature of the Bellman update (3), it is not hard to obtain a recursive relation for the estimation error; see the recursion (17). However, the undesirable part is that the estimation error depends on the maximal regret between the two players, instead of the sum of the regrets. This calls for technical innovation. Inspired by the work of Anagnostides et al. (2022c;b) in normal-form games, we make an important observation that the sum of the regrets is approximately non-negative. In particular, the sum is lower bounded by the negative estimation error $\| Q_h^t -Q_h^\star \|_\infty$ ; see Lemma 6. This lower bound together with the upper bound in Step 2 allows us to control the maximal regret via the estimation error (19), which further yields a recursive relation (20) involving estimation errors only. Solving the recursion leads to the desired result.
+
+ # 4 PROOF OF THEOREM 1
+
+ In this section, we present the proof of our main result, i.e., Theorem 1. We first define a few useful notations. For each step $h \in [H]$ , each state $s \in S$ , and each iteration $t \in [T]$ , we define the state-wise weighted individual regret as
+
+ $$
+ \operatorname{reg}_{h, 1}^{t}(s) := \max_{\mu^{\dagger} \in \Delta_{\mathcal{A}}} \sum_{i = 1}^{t} \alpha_{t}^{i} \left\langle \mu^{\dagger} - \mu_{h}^{i}, Q_{h}^{i} \nu_{h}^{i} \right\rangle (s), \tag{6a}
+ $$
+
+ $$
+ \operatorname{reg}_{h, 2}^{t}(s) := \max_{\nu^{\dagger} \in \Delta_{\mathcal{B}}} \sum_{i = 1}^{t} \alpha_{t}^{i} \left\langle \nu_{h}^{i} - \nu^{\dagger}, \left(Q_{h}^{i}\right)^{\top} \mu_{h}^{i} \right\rangle (s). \tag{6b}
+ $$
+
+ We also define the maximal regret as
+
+ $$
+ \operatorname{reg}_{h}^{t} := \max_{s \in \mathcal{S}} \max_{i = 1, 2} \left\{ \operatorname{reg}_{h, i}^{t}(s) \right\},
+ $$
+
+ that maximizes over the players and the states. In addition, for each step $h \in [H]$ , and each iteration $t \in [T]$ , we define the estimation error of the $Q$ -function as
+
+ $$
+ \delta_ {h} ^ {t} := \left\| Q _ {h} ^ {t} - Q _ {h} ^ {\star} \right\| _ {\infty}.
+ $$
+
+ With these notations in place, we first connect the NE-gap with the sum of regrets $\mathrm{reg}_{h,1}^T(s) + \mathrm{reg}_{h,2}^T(s)$ as well as the estimation error $\delta_h^t$ .
+
+ Lemma 1. One has
+
+ $$
+ \mathrm{NE\text{-}gap}(\hat{\mu}, \hat{\nu}) \leq 2 \sum_{h = 1}^{H} \left\{ \max_{s} \left\{ \operatorname{reg}_{h, 1}^{T}(s) + \operatorname{reg}_{h, 2}^{T}(s) \right\} + 2 \sum_{t = 1}^{T} \alpha_{T}^{t} \delta_{h}^{t} \right\}.
+ $$
+
+ See Section B.1 for the proof of this lemma.
+
+ It then boils down to controlling $\max_s\left\{\mathrm{reg}_{h,1}^T (s) + \mathrm{reg}_{h,2}^T (s)\right\}$ and $\sum_{t = 1}^{T}\alpha_T^t\delta_h^t$ . The following two lemmas provide such control.
+
+ Lemma 2. For every $h \in [H]$ , every $s \in S$ , and every iteration $t \in [T]$ , one has
+
+ $$
+ \operatorname{reg}_{h, 1}^{t}(s) \leq \frac{2 H \cdot \log A}{\eta t} + \frac{16 \eta H^{3}}{t} + 2 \eta H^{2} \sum_{i = 2}^{t} \alpha_{t}^{i} \left\| \nu_{h}^{i}(\cdot \mid s) - \nu_{h}^{i - 1}(\cdot \mid s) \right\|_{1}^{2} - \frac{1}{8 \eta} \sum_{i = 2}^{t} \alpha_{t}^{i - 1} \left\| \mu_{h}^{i}(\cdot \mid s) - \mu_{h}^{i - 1}(\cdot \mid s) \right\|_{1}^{2} ; \tag{7a}
+ $$
+
+ $$
+ \operatorname{reg}_{h, 2}^{t}(s) \leq \frac{2 H \cdot \log B}{\eta t} + \frac{16 \eta H^{3}}{t} + 2 \eta H^{2} \sum_{i = 2}^{t} \alpha_{t}^{i} \left\| \mu_{h}^{i}(\cdot \mid s) - \mu_{h}^{i - 1}(\cdot \mid s) \right\|_{1}^{2} - \frac{1}{8 \eta} \sum_{i = 2}^{t} \alpha_{t}^{i - 1} \left\| \nu_{h}^{i}(\cdot \mid s) - \nu_{h}^{i - 1}(\cdot \mid s) \right\|_{1}^{2}. \tag{7b}
+ $$
+
+ As a result, when $\eta = C_{\eta}H^{-2}$ for some constant $C_{\eta}\leq 1 / 8$ , one has
+
+ $$
+ \max_{s} \left\{ \operatorname{reg}_{h, 1}^{t}(s) + \operatorname{reg}_{h, 2}^{t}(s) \right\} \leq \frac{3 C_{\eta}^{-1} H^{3} \cdot \log(A B)}{t} - 4 \eta H^{3} \sum_{i = 2}^{t} \alpha_{t}^{i} \left( \left\| \mu_{h}^{i}(\cdot \mid s) - \mu_{h}^{i - 1}(\cdot \mid s) \right\|_{1}^{2} + \left\| \nu_{h}^{i}(\cdot \mid s) - \nu_{h}^{i - 1}(\cdot \mid s) \right\|_{1}^{2} \right). \tag{8}
+ $$
+
+ See Section B.2 for the proof of this lemma.
+
+ Lemma 3. Choosing $\eta = C_{\eta}H^{-2}$ for some constant $C_{\eta}\leq 1 / 8$ , for all $h\in [H]$ and $t\in [T]$ , we have that
+
+ $$
+ \delta_ {h} ^ {t} \leq \frac {5 e ^ {2} C _ {\eta} ^ {- 1} H ^ {4} \cdot \log (A B)}{t}.
+ $$
+
+ See Section B.3 for the proof of this lemma.
+
+ Combining Lemmas 2-3 with Lemma 1, we arrive at the desired conclusion that when $\eta = C_{\eta}H^{-2}$ for some constant $C_{\eta} \leq 1/8$ ,
+
+ $$
+ \begin{aligned} \mathrm{NE\text{-}gap}(\hat{\mu}, \hat{\nu}) &\leq 2 \sum_{h = 1}^{H} \left\{ \max_{s} \left\{ \operatorname{reg}_{h, 1}^{T}(s) + \operatorname{reg}_{h, 2}^{T}(s) \right\} + 2 \sum_{t = 1}^{T} \alpha_{T}^{t} \delta_{h}^{t} \right\} \\ &\leq 2 \sum_{h = 1}^{H} \left\{ \frac{3 C_{\eta}^{-1} H^{3} \cdot \log(A B)}{T} + 2 \sum_{t = 1}^{T} \alpha_{T}^{t} \, \frac{5 e^{2} C_{\eta}^{-1} H^{4} \cdot \log(A B)}{t} \right\} \\ &\leq 2 H \cdot \left\{ \frac{3 C_{\eta}^{-1} H^{3} \cdot \log(A B)}{T} + \frac{20 e^{2} C_{\eta}^{-1} H^{4} \cdot \log(A B)}{T} \right\} \\ &\leq \frac{320 C_{\eta}^{-1} H^{5} \cdot \log(A B)}{T}, \end{aligned}
+ $$
+
+ where the penultimate inequality uses the following important lemma we have alluded to before.
+
+ Lemma 4. For all $t \geq 1$ , one has
+
+ $$
+ \sum_ {i = 1} ^ {t} \alpha_ {t} ^ {i} \cdot \frac {1}{i} \leq \left(1 + \frac {1}{H}\right) \frac {1}{t}. \tag {9}
+ $$
+
+ On the surface, this lemma shaves an extra $\log t$ factor from a simple average of the sequence $\{1 / i\}_{i\leq t}$ (cf. Lemma A.3 in the paper by Zhang et al. (2022b)). But more importantly, it shines in the ensuing proof of Lemma 3 by enabling the induction step. See Section B.4 for the proof of Lemma 4, and see the end of Section B.3 for the comment on the benefit of this improved result.
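Lemma 4 and the weight identities above are easy to check numerically. The following sketch (our own; `alphas` is an assumed helper name) verifies inequality (9), together with the normalization $\sum_i \alpha_t^i = 1$, for several values of $t$:

```python
import numpy as np

def alphas(t, H):
    """Return the vector (alpha_t^1, ..., alpha_t^t) with alpha_i = (H+1)/(H+i)."""
    a = (H + 1) / (H + np.arange(1, t + 1))
    # tail[i-1] = prod_{j=i+1}^{t} (1 - alpha_j); the empty product equals 1
    b = 1.0 - a
    tail = np.append(np.cumprod(b[::-1])[::-1][1:], 1.0)
    return a * tail

H = 4
for t in [1, 2, 5, 20, 100]:
    al = alphas(t, H)
    assert np.isclose(al.sum(), 1.0)                # the weights sum to one
    lhs = np.sum(al / np.arange(1, t + 1))
    assert lhs <= (1 + 1.0 / H) / t + 1e-12         # Lemma 4, inequality (9)
```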
+
+ # 5 DISCUSSION
+
+ In this paper, we prove that the optimistic-follow-the-regularized-leader algorithm, together with smooth value updates, converges to an $O(T^{-1})$ -approximate Nash equilibrium in two-player zero-sum Markov games. This improves upon the $\tilde{O}(T^{-5/6})$ rate proved by Zhang et al. (2022b). Quite a few interesting directions remain open; below we single out a few of them. First, although our rate is unimprovable in its dependence on $T$ , it is likely sub-optimal in its dependence on the horizon $H$ . Improving this dependence and proving lower bounds on it are both interesting and important for finite-horizon Markov games. Second, we focus on the simple two-player zero-sum setting. It is an important open question whether one can generalize the proof technique herein to multi-player general-sum Markov games and to other solution concepts in games (e.g., coarse correlated equilibria and correlated equilibria).
+
+ # REFERENCES
+
+ Ioannis Anagnostides, Constantinos Daskalakis, Gabriele Farina, Maxwell Fishelson, Noah Golowich, and Tuomas Sandholm. Near-optimal no-regret learning for correlated equilibria in multi-player general-sum games. In Proceedings of the 54th Annual ACM SIGACT Symposium on Theory of Computing, pp. 736-749, 2022a.
+ Ioannis Anagnostides, Gabriele Farina, Christian Kroer, Chung-Wei Lee, Haipeng Luo, and Tuomas Sandholm. Uncoupled learning dynamics with $O(\log T)$ swap regret in multiplayer games. arXiv preprint arXiv:2204.11417, 2022b.
+ Ioannis Anagnostides, Ioannis Panageas, Gabriele Farina, and Tuomas Sandholm. On last-iterate convergence beyond zero-sum games. arXiv preprint arXiv:2203.12056, 2022c.
+ Yu Bai and Chi Jin. Provable self-play algorithms for competitive reinforcement learning. In International Conference on Machine Learning, pp. 551-560. PMLR, 2020.
+ Yu Bai, Chi Jin, and Tiancheng Yu. Near-optimal reinforcement learning with self-play. Advances in Neural Information Processing Systems, 33:2159-2170, 2020.
+ Christopher Berner, Greg Brockman, Brooke Chan, Vicki Cheung, Przemyslaw Debiak, Christy Dennison, David Farhi, Quirin Fischer, Shariq Hashme, Chris Hesse, et al. Dota 2 with large scale deep reinforcement learning. arXiv preprint arXiv:1912.06680, 2019.
+ Noam Brown and Tuomas Sandholm. Superhuman AI for multiplayer poker. Science, 365(6456):885-890, 2019.
+ Lucian Busoniu, Robert Babuska, and Bart De Schutter. A comprehensive survey of multiagent reinforcement learning. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 38(2):156-172, 2008.
+ Shicong Cen, Yuting Wei, and Yuejie Chi. Fast policy extragradient methods for competitive games with entropy regularization. Advances in Neural Information Processing Systems, 34, 2021.
+
+ Shicong Cen, Yuejie Chi, Simon S Du, and Lin Xiao. Faster last-iterate convergence of policy optimization in zero-sum Markov games. arXiv preprint arXiv:2210.01050, 2022.
+ Nicolo Cesa-Bianchi and Gabor Lugosi. Prediction, learning, and games. Cambridge University Press, 2006.
+ Xi Chen and Binghui Peng. Hedging in games: Faster convergence of external and swap regrets. Advances in Neural Information Processing Systems, 33:18990-18999, 2020.
+ Qiwen Cui and Simon S Du. When is offline two-player zero-sum Markov game solvable? arXiv preprint arXiv:2201.03522, 2022.
+ Constantinos Daskalakis, Alan Deckelbaum, and Anthony Kim. Near-optimal no-regret algorithms for zero-sum games. In Proceedings of the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 235-254. SIAM, 2011.
+ Constantinos Daskalakis, Maxwell Fishelson, and Noah Golowich. Near-optimal no-regret learning in general games. Advances in Neural Information Processing Systems, 34:27604-27616, 2021.
+ Liad Erez, Tal Lancewicki, Uri Sherman, Tomer Koren, and Yishay Mansour. Regret minimization and convergence to equilibria in general-sum Markov games. arXiv preprint arXiv:2207.14211, 2022.
+ Gabriele Farina, Ioannis Anagnostides, Haipeng Luo, Chung-Wei Lee, Christian Kroer, and Tuomas Sandholm. Near-optimal no-regret learning for general convex games. arXiv preprint arXiv:2206.08742, 2022.
+ Junling Hu and Michael P Wellman. Nash Q-learning for general-sum stochastic games. Journal of Machine Learning Research, 4(Nov):1039-1069, 2003.
+ Zeyu Jia, Lin F Yang, and Mengdi Wang. Feature-based Q-learning for two-player stochastic games. arXiv preprint arXiv:1906.00423, 2019.
+ Chi Jin, Zeyuan Allen-Zhu, Sebastien Bubeck, and Michael I Jordan. Is Q-learning provably efficient? Advances in Neural Information Processing Systems, 31, 2018.
+ Gen Li, Yuejie Chi, Yuting Wei, and Yuxin Chen. Minimax-optimal multi-agent RL in zero-sum Markov games with a generative model. arXiv preprint arXiv:2208.10458, 2022.
+ Michael L Littman. Markov games as a framework for multi-agent reinforcement learning. In Machine Learning Proceedings 1994, pp. 157-163. Elsevier, 1994.
+ Qinghua Liu, Tiancheng Yu, Yu Bai, and Chi Jin. A sharp analysis of model-based reinforcement learning with self-play. In International Conference on Machine Learning, pp. 7001-7010. PMLR, 2021.
+ John Nash. Non-cooperative games. Annals of Mathematics, pp. 286-295, 1951.
+ Yu Nesterov. Excessive gap technique in nonsmooth convex minimization. SIAM Journal on Optimization, 16(1):235-249, 2005.
+ Julien Perolat, Bruno Scherrer, Bilal Piot, and Olivier Pietquin. Approximate dynamic programming for two-player zero-sum Markov games. In International Conference on Machine Learning, pp. 1321-1329. PMLR, 2015.
+ Martin L Puterman. Markov decision processes: Discrete stochastic dynamic programming. John Wiley & Sons, 2014.
+
+ Sasha Rakhlin and Karthik Sridharan. Optimization, learning, and games with predictable sequences. volume 26, 2013.
295
+ Lloyd S Shapley. Stochastic games. Proceedings of the national academy of sciences, 39(10):1095-1100, 1953.
296
+ Aaron Sidford, Mengdi Wang, Lin Yang, and Yinyu Ye. Solving discounted stochastic two-player games with near-optimal time and sample complexity. In International Conference on Artificial Intelligence and Statistics, pp. 2992-3002. PMLR, 2020.
297
+ Vasilis Syrgkanis, Alekh Agarwal, Haipeng Luo, and Robert E Schapire. Fast convergence of regularized learning in games. Advances in Neural Information Processing Systems, 28, 2015.
298
+ Oriol Vinyals, Igor Babuschkin, Wojciech M Czarnecki, Michael Mathieu, Andrew Dudzik, Junyoung Chung, David H Choi, Richard Powell, Timo Ewalds, Petko Georgiev, et al. Grandmaster level in starcraft ii using multi-agent reinforcement learning. Nature, 575(7782):350-354, 2019.
299
+ Chen-Yu Wei, Chung-Wei Lee, Mengxiao Zhang, and Haipeng Luo. Last-iterate convergence of decentralized optimistic gradient descent/ascent in infinite-horizon competitive markov games. In Conference on learning theory, pp. 4259–4299. PMLR, 2021.
300
+ Qiaomin Xie, Yudong Chen, Zhaoran Wang, and Zhuoran Yang. Learning zero-sum simultaneous-move markov games using function approximation and correlated equilibrium. In Conference on learning theory, pp. 3674-3682. PMLR, 2020.
301
+ Yuling Yan, Gen Li, Yuxin Chen, and Jianqing Fan. Model-based reinforcement learning is minimax-optimal for offline zero-sum markov games. arXiv preprint arXiv:2206.04044, 2022.
302
+ Yaodong Yang and Jun Wang. An overview of multi-agent reinforcement learning from game theoretical perspective. arXiv preprint arXiv:2011.00583, 2020.
303
+ Kaiqing Zhang, Sham Kakade, Tamer Basar, and Lin Yang. Model-based multi-agent rl in zero-sum markov games with near-optimal sample complexity. Advances in Neural Information Processing Systems, 33: 1166-1178, 2020.
304
+ Kaiqing Zhang, Zhuoran Yang, and Tamer Başar. Multi-agent reinforcement learning: A selective overview of theories and algorithms. Handbook of Reinforcement Learning and Control, pp. 321-384, 2021.
305
+ Mengxiao Zhang, Peng Zhao, Haipeng Luo, and Zhi-Hua Zhou. No-regret learning in time-varying zero-sum games. arXiv preprint arXiv:2201.12736, 2022a.
306
+ Runyu Zhang, Qinghua Liu, Huan Wang, Caiming Xiong, Na Li, and Yu Bai. Policy optimization for Markov games: Unified framework and faster convergence. arXiv preprint arXiv:2206.02640, 2022b.
307
+ Yulai Zhao, Yuandong Tian, Jason D Lee, and Simon S Du. Provably efficient policy gradient methods for two-player zero-sum Markov games. arXiv preprint arXiv:2102.08903, 2021.
308
+
309
+ # A PROPERTIES OF $\alpha_{t}^{i}$
310
+
311
+ This section collects a few useful properties of the sequences $\{\alpha_{t}\}_{t\geq 1}$ and $\{\alpha_t^i\}_{t\geq 1,1\leq i\leq t}$ . Some of these results have appeared in prior work (Jin et al., 2018; Zhang et al., 2022b). For completeness, we include all the proofs here.
312
+
313
+ For ease of reference, we repeat the definitions below: for each $t \geq 1$ and $1 \leq i \leq t$ , we define
314
+
315
+ $$
316
+ \alpha_{t} = \alpha_{t}^{t} = \frac{H + 1}{H + t}, \quad \text{and} \tag{10a}
317
+ $$
318
+
319
+ $$
320
+ \alpha_ {t} ^ {i} = \alpha_ {i} \prod_ {j = i + 1} ^ {t} (1 - \alpha_ {j}). \tag {10b}
321
+ $$
322
+
323
+ Lemma 5. Fix any $t \geq 1$ . The following properties are true:
324
+
325
+ 1. The sequence $\{\alpha_t^i\}_{1\leq i\leq t}$ sums to 1, i.e., $\sum_{i = 1}^{t}\alpha_{t}^{i} = 1$ .
326
+ 2. For all $1 \leq i \leq t$ , one has $\alpha_t^i \leq i / t$ .
327
+ 3. For the relative weight defined by $w_{i} = \alpha_{t}^{i} / \alpha_{t}^{1}$ (note that this is the same for every $t \geq i$ ), we have
328
+
329
+ $$
330
+ \frac {w _ {i}}{w _ {i - 1}} = \frac {\alpha_ {t} ^ {i}}{\alpha_ {t} ^ {i - 1}} = \frac {H + i - 1}{i - 1} \leq H.
331
+ $$
332
+
333
+ 4. The sequence $\{\alpha_t^i\}_{1\leq i\leq t}$ is increasing in $i$ .
334
+ 5. On the sum of squares of the weights, we have
335
+
336
+ $$
337
+ \sum_ {i = 1} ^ {t} (\alpha_ {t} ^ {i}) ^ {2} \leq \sum_ {i = 1} ^ {t} \alpha_ {i} ^ {2} \leq H + 2.
338
+ $$
339
+
340
+ 6. For any non-increasing sequence $\{b_i\}_{1\leq i\leq t}$ , one has
341
+
342
+ $$
343
+ \sum_ {i = 1} ^ {t} \alpha_ {t} ^ {i} b _ {i} \leq \frac {1}{t} \sum_ {i = 1} ^ {t} b _ {i}.
344
+ $$
345
+
346
+ Proof. Property 1 follows directly from the definitions of $\{\alpha_t^i\}_{1\leq i\leq t}$ : since $\alpha_1 = 1$ , the sum $\sum_{i=1}^{t}\alpha_t^i$ telescopes to $1 - \prod_{j=1}^{t}(1-\alpha_j) = 1$ .
347
+
348
+ Now we move on to Property 2. It trivially holds for $i = t$ . Therefore we focus on the case when $1 \leq i \leq t - 1$ . By definition, we have
349
+
350
+ $$
351
+ \alpha_ {t} ^ {i} = \alpha_ {i} \prod_ {j = i + 1} ^ {t} (1 - \alpha_ {j}) \leq \prod_ {j = i + 1} ^ {t} (1 - \alpha_ {j}) = \prod_ {j = i + 1} ^ {t} \frac {j - 1}{H + j}. \tag {11}
352
+ $$
353
+
354
+ where the inequality holds since $\alpha_{i} \leq 1$ for all $1 \leq i \leq t$ , and the last relation is the definition of $\alpha_{j}$ . Expanding the right hand side of (11), we have
355
+
356
+ $$
357
+ \alpha_ {t} ^ {i} \leq \frac {i}{H + i + 1} \times \frac {i + 1}{H + i + 2} \times \dots \times \frac {t - 1}{H + t} \leq \frac {i}{H + t},
358
+ $$
359
+
360
+ where we only keep the first numerator and the last denominator. Property 2 then follows.
361
+
362
+ Property 3 is trivial. Hence we omit the proof. In addition, Property 3 implies Property 4 since $\frac{\alpha_t^i}{\alpha_t^{i - 1}} = \frac{H + i - 1}{i - 1} \geq 1$ .
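For completeness, the computation behind Property 3 can be spelled out directly from the definitions (10a)-(10b), inserting $\alpha_i = \frac{H+1}{H+i}$ and $1-\alpha_i = \frac{i-1}{H+i}$ in the last step:

```latex
\frac{w_i}{w_{i-1}}
= \frac{\alpha_t^i}{\alpha_t^{i-1}}
= \frac{\alpha_i \prod_{j=i+1}^{t}(1-\alpha_j)}{\alpha_{i-1} \prod_{j=i}^{t}(1-\alpha_j)}
= \frac{\alpha_i}{\alpha_{i-1}\,(1-\alpha_i)}
= \frac{H+1}{H+i} \cdot \frac{H+i-1}{H+1} \cdot \frac{H+i}{i-1}
= \frac{H+i-1}{i-1}.
```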
363
+
364
+ For Property 5, the first inequality holds since $0 \leq \alpha_{i} \leq 1$ for all $1 \leq i \leq t$ . For the second inequality, one has
365
+
366
+ $$
367
+ \sum_ {i = 1} ^ {t} \alpha_ {i} ^ {2} = 1 + \sum_ {i = 2} ^ {t} \left(\frac {H + 1}{H + i}\right) ^ {2} \leq 1 + (H + 1) ^ {2} \sum_ {i = 2} ^ {t} \left(\frac {1}{(H + i - 1) (H + i)}\right).
368
+ $$
369
+
370
+ Expanding this as a telescoping sum, we see that
371
+
372
+ $$
373
+ \begin{array}{l} \sum_ {i = 1} ^ {t} \alpha_ {i} ^ {2} \leq 1 + (H + 1) ^ {2} \sum_ {i = 2} ^ {t} \left(\frac {1}{H + i - 1} - \frac {1}{H + i}\right) \\ \leq 1 + (H + 1) ^ {2} \frac {1}{H + 1} \\ = H + 2. \\ \end{array}
374
+ $$
375
+
376
+ Lastly, for Property 6, we have
377
+
378
+ $$
379
+ \sum_ {i = 1} ^ {t} \alpha_ {t} ^ {i} b _ {i} - \frac {1}{t} \sum_ {i = 1} ^ {t} b _ {i} = \sum_ {i = 1} ^ {t} (\alpha_ {t} ^ {i} - \frac {1}{t}) b _ {i}.
380
+ $$
381
+
382
+ Let $i_0 \coloneqq \max \left\{ 1 \leq i \leq t : \alpha_t^i \leq 1 / t \right\}$ . Since $\{\alpha_t^i\}$ is increasing in $i$ (cf. Property 4) and $\sum_{i=1}^{t} \alpha_t^i = 1$ (cf. Property 1), we know that $i_0$ is well defined, i.e., $1 \leq i_0 \leq t$ . Since $\left\{ \alpha_t^i \right\}_{i \leq t}$ (resp. $\{b_i\}_{i \leq t}$ ) is increasing (resp. non-increasing), we have $\alpha_t^i \leq 1 / t$ and $b_i \geq b_{i_0}$ for all $i \leq i_0$ . As a result, we obtain $(\alpha_t^i - 1 / t)b_i \leq (\alpha_t^i - 1 / t)b_{i_0}$ for all $i \leq i_0$ . Similarly, one has $\alpha_t^i > 1 / t$ and $b_i \leq b_{i_0}$ for all $i > i_0$ , which implies $(\alpha_t^i - 1 / t)b_i \leq (\alpha_t^i - 1 / t)b_{i_0}$ for all $i > i_0$ . Taking these two relations together, we see that
383
+
384
+ $$
385
+ \sum_ {i = 1} ^ {t} \left(\alpha_ {t} ^ {i} - 1 / t\right) b _ {i} \leq \sum_ {i = 1} ^ {t} \left(\alpha_ {t} ^ {i} - 1 / t\right) b _ {i _ {0}} = 0,
386
+ $$
387
+
388
+ where the last equality uses the fact from Property 1, namely $\sum_{i=1}^{t} \alpha_{t}^{i} = 1$ .
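As a quick numeric sanity check of Lemma 5 (an illustration only, not part of the proof; the helper name `alphas` and the sample values of $H$ and $t$ are ours), the properties can be verified directly from the definitions (10a)-(10b):

```python
def alphas(H, t):
    """Return (a, w): step sizes a[i] = alpha_i and weights w[i] = alpha_t^i, 1-indexed."""
    a = [None] + [(H + 1) / (H + i) for i in range(1, t + 1)]
    w = [None] * (t + 1)
    for i in range(1, t + 1):
        prod = 1.0
        for j in range(i + 1, t + 1):
            prod *= 1.0 - a[j]
        w[i] = a[i] * prod  # alpha_t^i = alpha_i * prod_{j=i+1}^t (1 - alpha_j), cf. (10b)
    return a, w

H, t = 5, 40
a, w = alphas(H, t)
assert abs(sum(w[1:]) - 1.0) < 1e-9                                     # Property 1
assert all(w[i] <= i / t + 1e-12 for i in range(1, t + 1))              # Property 2
assert all(w[i] >= w[i - 1] for i in range(2, t + 1))                   # Property 4
assert sum(x * x for x in w[1:]) <= sum(x * x for x in a[1:]) <= H + 2  # Property 5
b = [None] + [1.0 / i for i in range(1, t + 1)]                         # a non-increasing sequence
assert sum(w[i] * b[i] for i in range(1, t + 1)) <= sum(b[1:]) / t      # Property 6
print("Lemma 5 properties verified for H=5, t=40")
```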
389
+
390
+ # B PROOF OF SUPPORTING LEMMAS IN SECTION 4
391
+
392
+ # B.1 PROOF OF LEMMA 1
393
+
394
+ Invoke Lemma C.1 in the paper by Zhang et al. (2022b) to obtain
395
+
396
+ $$
397
+ \begin{array}{l} \operatorname{NE\text{-}gap}(\hat{\mu}, \hat{\nu}) = V_{1}^{\dagger, \hat{\nu}}(s_{1}) - V_{1}^{\star}(s_{1}) + V_{1}^{\star}(s_{1}) - V_{1}^{\hat{\mu}, \dagger}(s_{1}) \\ \leq 2 \sum_{h = 1}^{H} \max_{s} \left\{ \max_{\mu^{\dagger}, \nu^{\dagger}} \left[ \left\langle \mu^{\dagger}, Q_{h}^{\star} \hat{\nu}_{h} \right\rangle - \left\langle \nu^{\dagger}, Q_{h}^{\star \top} \hat{\mu}_{h} \right\rangle \right] (s) \right\}. \\ \end{array}
398
+ $$
399
+
400
+ By the definition of the output policy $(\hat{\mu},\hat{\nu})$ , one has
401
+
402
+ $$
403
+ \max_{\mu^{\dagger}, \nu^{\dagger}} \left[ \left\langle \mu^{\dagger}, Q_{h}^{\star} \hat{\nu}_{h} \right\rangle - \left\langle \nu^{\dagger}, Q_{h}^{\star \top} \hat{\mu}_{h} \right\rangle \right] (s) = \max_{\mu^{\dagger}, \nu^{\dagger}} \sum_{t = 1}^{T} \alpha_{T}^{t} \left[ \left\langle \mu^{\dagger}, Q_{h}^{\star} \nu_{h}^{t} \right\rangle - \left\langle \nu^{\dagger}, Q_{h}^{\star \top} \mu_{h}^{t} \right\rangle \right] (s).
404
+ $$
405
+
406
+ Replacing the true value function $Q_h^\star$ with the value estimate $Q_h^t$ yields
407
+
408
+ $$
409
+ \max_{\mu^{\dagger}, \nu^{\dagger}} \left[ \left\langle \mu^{\dagger}, Q_{h}^{\star} \hat{\nu}_{h} \right\rangle - \left\langle \nu^{\dagger}, (Q_{h}^{\star})^{\top} \hat{\mu}_{h} \right\rangle \right] (s) \leq \max_{\mu^{\dagger}, \nu^{\dagger}} \sum_{t = 1}^{T} \alpha_{T}^{t} \left[ \left\langle \mu^{\dagger}, Q_{h}^{t} \nu_{h}^{t} \right\rangle - \left\langle \nu^{\dagger}, (Q_{h}^{t})^{\top} \mu_{h}^{t} \right\rangle \right] (s) + 2 \sum_{t = 1}^{T} \alpha_{T}^{t} \delta_{h}^{t},
410
+ $$
411
+
412
+ where we recall $\delta_h^t = \| Q_h^t -Q_h^\star \|_\infty$ . The proof is finished by taking the above three relations together with the observation that
413
+
414
+ $$
415
+ \operatorname {r e g} _ {h, 1} ^ {T} (s) + \operatorname {r e g} _ {h, 2} ^ {T} (s) = \max _ {\mu^ {\dagger}, \nu^ {\dagger}} \sum_ {t = 1} ^ {T} \alpha_ {T} ^ {t} \left[ \left\langle \mu^ {\dagger}, Q _ {h} ^ {t} \nu_ {h} ^ {t} \right\rangle - \left\langle \nu^ {\dagger}, \left(Q _ {h} ^ {t}\right) ^ {\top} \mu_ {h} ^ {t} \right\rangle \right] (s).
416
+ $$
417
+
418
+ # B.2 PROOF OF LEMMA 2
419
+
420
+ We prove the regret bound for the max-player (i.e., bound (7a)). The bound (7b) for the min-player can be obtained via symmetry.
421
+
422
+ First, we observe that the policy update in Algorithm 1 for the max-player is exactly the OFTRL algorithm (i.e., Algorithm 4 in the paper by Zhang et al. (2022b)) with the loss vector $g_{t} = w_{t}[Q_{h}^{t}\nu_{h}^{t}](s,\cdot)$ , the recency bias $M_{t} = w_{t}[Q_{h}^{t - 1}\nu_{h}^{t - 1}](s,\cdot)$ , and the learning rate $\eta_t = \eta /w_t$ . Therefore, we can apply Lemma B.3 from Zhang et al. (2022b) to obtain
423
+
424
+ $$
425
+ \begin{array}{l} \operatorname{reg}_{h, 1}^{t}(s) = \max_{\mu^{\dagger}} \sum_{i = 1}^{t} \alpha_{t}^{i} \left\langle \mu^{\dagger} - \mu_{h}^{i}, Q_{h}^{i} \nu_{h}^{i} \right\rangle (s) \\ = \alpha_{t}^{1} \max_{\mu^{\dagger}} \sum_{i = 1}^{t} w_{i} \left\langle \mu^{\dagger} - \mu_{h}^{i}, Q_{h}^{i} \nu_{h}^{i} \right\rangle (s) \\ \leq \frac{\alpha_{t} \cdot (\log A)}{\eta} + \underbrace{\alpha_{t}^{1} \sum_{i = 1}^{t} \frac{\eta}{w_{i}} \left\| \left[ w_{i} Q_{h}^{i} \nu_{h}^{i} - w_{i} Q_{h}^{i - 1} \nu_{h}^{i - 1} \right] (s, \cdot) \right\|_{\infty}^{2}}_{=: \operatorname{Err}_{1}} \qquad (12) \\ \quad - \underbrace{\alpha_{t}^{1} \sum_{i = 2}^{t} \frac{w_{i - 1}}{8 \eta} \left\| \mu_{h}^{i}(\cdot | s) - \mu_{h}^{i - 1}(\cdot | s) \right\|_{1}^{2}}_{=: \operatorname{Err}_{2}}, \qquad (13) \\ \end{array}
426
+ $$
427
+
428
+ where we have used the fact that $w_{i} = \alpha_{t}^{i} / \alpha_{t}^{1}$ . We now move on to bound the term $\mathrm{Err}_1$ . Use $(a + b)^2 \leq 2a^2 + 2b^2$ to see that
429
+
430
+ $$
431
+ \begin{array}{l} \left\| \left[ Q _ {h} ^ {i} \nu_ {h} ^ {i} - Q _ {h} ^ {i - 1} \nu_ {h} ^ {i - 1} \right] (s, \cdot) \right\| _ {\infty} ^ {2} \leq 2 \left\| \left[ Q _ {h} ^ {i} \nu_ {h} ^ {i} - Q _ {h} ^ {i - 1} \nu_ {h} ^ {i} \right] (s, \cdot) \right\| _ {\infty} ^ {2} + 2 \left\| \left[ Q _ {h} ^ {i - 1} \nu_ {h} ^ {i} - Q _ {h} ^ {i - 1} \nu_ {h} ^ {i - 1} \right] (s, \cdot) \right\| _ {\infty} ^ {2} \\ \leq 2 \| Q _ {h} ^ {i} - Q _ {h} ^ {i - 1} \| _ {\infty} ^ {2} + 2 H ^ {2} \| \nu_ {h} ^ {i} (\cdot | s) - \nu_ {h} ^ {i - 1} (\cdot | s) \| _ {1} ^ {2}, \\ \end{array}
432
+ $$
433
+
434
+ where the second line uses Hölder's inequality and the fact that $\| Q_h^{i - 1}\|_{\infty}\leq H$ . In view of the update rule (3) for the $Q$ -function, we further have
435
+
436
+ $$
437
+ \begin{array}{l} \left\| Q _ {h} ^ {i} - Q _ {h} ^ {i - 1} \right\| _ {\infty} = \left\| - \alpha_ {i} Q _ {h} ^ {i - 1} + \alpha_ {i} \left(r _ {h} + \mathbb {P} _ {h} \left[ \left(\mu_ {h + 1} ^ {i}\right) ^ {\top} Q _ {h + 1} ^ {i} \nu_ {h + 1} ^ {i} \right]\right) \right\| _ {\infty} \\ \leq \alpha_ {i} \max \left\{\left\| Q _ {h} ^ {i - 1} \right\| _ {\infty}, \left\| r _ {h} + \mathbb {P} _ {h} \left[ \left(\mu_ {h + 1} ^ {i}\right) ^ {\top} Q _ {h + 1} ^ {i} \nu_ {h + 1} ^ {i} \right] \right\| _ {\infty} \right\} \\ \leq \alpha_ {i} H. \\ \end{array}
438
+ $$
439
+
440
+ As a result, we arrive at the bound
441
+
442
+ $$
443
+ \begin{array}{l} \operatorname {E r r} _ {1} \leq 2 \eta \alpha_ {t} ^ {1} \sum_ {i = 1} ^ {t} w _ {i} \left(\alpha_ {i} ^ {2} H ^ {2} + H ^ {2} \| \nu_ {h} ^ {i} (\cdot | s) - \nu_ {h} ^ {i - 1} (\cdot | s) \| _ {1} ^ {2}\right) \\ = 2 \eta H ^ {2} \sum_ {i = 1} ^ {t} \alpha_ {t} ^ {i} \alpha_ {i} ^ {2} + 2 \eta H ^ {2} \sum_ {i = 1} ^ {t} \alpha_ {t} ^ {i} \| \nu_ {h} ^ {i} (\cdot | s) - \nu_ {h} ^ {i - 1} (\cdot | s) \| _ {1} ^ {2}, \\ \end{array}
444
+ $$
445
+
446
+ where we again use the relation $w_{i} = \alpha_{t}^{i} / \alpha_{t}^{1}$ . Since $\{\alpha_{i}\}_{i\leq t}$ is decreasing in $i$ , we can apply Property 6 in Lemma 5 to obtain
447
+
448
+ $$
449
+ \sum_ {i = 1} ^ {t} \alpha_ {t} ^ {i} \alpha_ {i} ^ {2} \leq \frac {1}{t} \sum_ {i = 1} ^ {t} \alpha_ {i} ^ {2} \leq \frac {H + 2}{t} \leq \frac {3 H}{t},
450
+ $$
451
+
452
+ where the second inequality follows from Property 5 in Lemma 5. In all, we see that
453
+
454
+ $$
455
+ \operatorname {E r r} _ {1} \leq \frac {6 \eta H ^ {3}}{t} + 2 \eta H ^ {2} \sum_ {i = 1} ^ {t} \alpha_ {t} ^ {i} \left\| \nu_ {h} ^ {i} (\cdot | s) - \nu_ {h} ^ {i - 1} (\cdot | s) \right\| _ {1} ^ {2}. \tag {14}
456
+ $$
457
+
458
+ Substitute the upper bound (14) for $\mathrm{Err}_1$ into the master bound (12) to obtain
459
+
460
+ $$
461
+ \begin{array}{l} \operatorname{reg}_{h, 1}^{t}(s) \leq \frac{\alpha_{t} \cdot (\log A)}{\eta} + \operatorname{Err}_{1} - \operatorname{Err}_{2} \\ \leq \frac{2 H \cdot (\log A)}{\eta t} + \frac{6 \eta H^{3}}{t} + 2 \eta H^{2} \sum_{i = 1}^{t} \alpha_{t}^{i} \| \nu_{h}^{i}(\cdot | s) - \nu_{h}^{i - 1}(\cdot | s) \|_{1}^{2} \\ \quad - \frac{1}{8 \eta} \sum_{i = 2}^{t} \alpha_{t}^{i - 1} \| \mu_{h}^{i}(\cdot | s) - \mu_{h}^{i - 1}(\cdot | s) \|_{1}^{2}, \\ \end{array}
462
+ $$
463
+
464
+ where in the first inequality we use $\alpha_{t} = (H + 1) / (H + t)\leq 2H / t$ . Since $\| \nu_h^i (\cdot |s) - \nu_h^{i - 1}(\cdot |s)\| _1\leq 2$ and $\alpha_{t}^{1}\leq 1 / t$ (see Property 2 of Lemma 5), we can take the term $i = 1$ out and reach
465
+
466
+ $$
467
+ \begin{array}{l} \operatorname {r e g} _ {h, 1} ^ {t} (s) \leq \frac {2 H \cdot (\log A)}{\eta t} + \frac {1 6 \eta H ^ {3}}{t} + 2 \eta H ^ {2} \sum_ {i = 2} ^ {t} \alpha_ {t} ^ {i} \| \nu_ {h} ^ {i} (\cdot | s) - \nu_ {h} ^ {i - 1} (\cdot | s) \| _ {1} ^ {2} \\ - \frac {1}{8 \eta} \sum_ {i = 2} ^ {t} \alpha_ {t} ^ {i - 1} \| \mu_ {h} ^ {i} (\cdot | s) - \mu_ {h} ^ {i - 1} (\cdot | s) \| _ {1} ^ {2}. \\ \end{array}
468
+ $$
469
+
470
+ This finishes the proof of the regret bound (7a) for the max-player. The bound (7b) for the min-player can be obtained via symmetry.
471
+
472
+ Combining the two bounds (7a) and (7b), we see that
473
+
474
+ $$
475
+ \begin{array}{l} \operatorname {r e g} _ {h, 1} ^ {t} (s) + \operatorname {r e g} _ {h, 2} ^ {t} (s) \leq \frac {2 H \cdot \log (A B)}{\eta t} + \frac {3 2 \eta H ^ {3}}{t} \\ + \sum_ {i = 2} ^ {t} \left(2 \eta H ^ {2} \alpha_ {t} ^ {i} - \frac {\alpha_ {t} ^ {i - 1}}{8 \eta}\right) \left(\| \mu_ {h} ^ {i} (\cdot | s) - \mu_ {h} ^ {i - 1} (\cdot | s) \| _ {1} ^ {2} \right. (15) \\ \left. + \left\| \nu_ {h} ^ {i} (\cdot | s) - \nu_ {h} ^ {i - 1} (\cdot | s) \right\| _ {1} ^ {2}\right). (16) \\ \end{array}
476
+ $$
477
+
478
+ When $\eta \leq 1 / (8H^2)$ , one has
479
+
480
+ $$
481
+ 2 \eta H ^ {2} \alpha_ {t} ^ {i} - \frac {\alpha_ {t} ^ {i - 1}}{8 \eta} \leq 2 \eta H ^ {3} \alpha_ {t} ^ {i} - \frac {\alpha_ {t} ^ {i - 1}}{8 \eta} \leq - 4 \eta H ^ {3} \alpha_ {t} ^ {i},
482
+ $$
483
+
484
+ where we have used Property 3 of Lemma 5, i.e., $\alpha_{t}^{i - 1} / \alpha_{t}^{i}\geq 1 / H$ . Consequently, with $\eta = C_{\eta}H^{-2}$ for some constant $C_\eta \leq 1 / 8$ , the bound (16) reads
485
+
486
+ $$
487
+ \begin{array}{l} \operatorname{reg}_{h, 1}^{t}(s) + \operatorname{reg}_{h, 2}^{t}(s) \leq \frac{3 C_{\eta}^{-1} H^{3} \cdot \log(A B)}{t} - 4 \eta H^{3} \sum_{i = 2}^{t} \alpha_{t}^{i} \Big( \| \mu_{h}^{i}(\cdot | s) - \mu_{h}^{i - 1}(\cdot | s) \|_{1}^{2} \\ \quad + \left\| \nu_{h}^{i}(\cdot \mid s) - \nu_{h}^{i - 1}(\cdot \mid s) \right\|_{1}^{2} \Big), \\ \end{array}
488
+ $$
489
+
490
+ where we assume that the players' action sets are non-trivial, i.e., $AB \geq 2$ , so that $\log(AB) \geq \log 2$ .
491
+
492
+ # B.3 PROOF OF LEMMA 3
493
+
494
+ By Lemma C.2 in the paper by Zhang et al. (2022b), for any $h \in [H - 1]$ , we have the recursive relation
495
+
496
+ $$
497
+ \delta_ {h} ^ {t} \leq \sum_ {i = 1} ^ {t} \alpha_ {t} ^ {i} \delta_ {h + 1} ^ {i} + \operatorname {r e g} _ {h + 1} ^ {t}, \tag {17}
498
+ $$
499
+
500
+ where we recall $\mathrm{reg}_{h + 1}^t = \max_s\max_{i = 1,2}\{\mathrm{reg}_{h + 1,i}^t (s)\}$ .
501
+
502
+ Step 1: Bounding $\mathrm{reg}_{h + 1}^t$ . In view of this recursion (17), one needs to control the maximal regret $\mathrm{reg}_{h + 1}^t$ over the two players. Lemma 2 provides us with precise control of the individual regrets $\mathrm{reg}_{h,1}^t(s)$ and $\mathrm{reg}_{h,2}^t(s)$ :
503
+
504
+ $$
505
+ \operatorname {r e g} _ {h, 1} ^ {t} (s) \leq \frac {3 C _ {\eta} ^ {- 1} H ^ {3} \cdot (\log A B)}{t} + 2 \eta H ^ {2} \sum_ {i = 2} ^ {t} \alpha_ {t} ^ {i} \| \nu_ {h} ^ {i} (\cdot | s) - \nu_ {h} ^ {i - 1} (\cdot | s) \| _ {1} ^ {2}, \tag {18a}
506
+ $$
507
+
508
+ $$
509
+ \operatorname {r e g} _ {h, 2} ^ {t} (s) \leq \frac {3 C _ {\eta} ^ {- 1} H ^ {3} \cdot (\log A B)}{t} + 2 \eta H ^ {2} \sum_ {i = 2} ^ {t} \alpha_ {t} ^ {i} \| \mu_ {h} ^ {i} (\cdot | s) - \mu_ {h} ^ {i - 1} (\cdot | s) \| _ {1} ^ {2}, \tag {18b}
510
+ $$
511
+
512
+ where we have substituted $\eta = C_{\eta}H^{-2}$ for $C_{\eta}\leq 1 / 8$ and $AB\geq 2$ . We have also dropped the negative terms on the right-hand sides of (7a) and (7b). Therefore, to control the individual regrets, it suffices to bound the second-order path lengths $2\eta H^{2}\sum_{i = 2}^{t}\alpha_{t}^{i}\| \mu_{h}^{i}(\cdot |s) - \mu_{h}^{i - 1}(\cdot |s)\|_{1}^{2}$ and $2\eta H^{2}\sum_{i = 2}^{t}\alpha_{t}^{i}\| \nu_{h}^{i}(\cdot |s) - \nu_{h}^{i - 1}(\cdot |s)\|_{1}^{2}$ . To this end, the following lemma, whose proof is deferred to the end of this section, proves crucial.
513
+
514
+ Lemma 6. For each $t, h$ and $s$ , one has
515
+
516
+ $$
517
+ \operatorname {r e g} _ {h, 1} ^ {t} (s) + \operatorname {r e g} _ {h, 2} ^ {t} (s) \geq - 2 \sum_ {i = 1} ^ {t} \alpha_ {t} ^ {i} \delta_ {h} ^ {i}.
518
+ $$
519
+
520
+ In words, Lemma 6 reveals the approximate non-negativity of the sum of the regrets. This together with the upper bound (8) in Lemma 2 implies
521
+
522
+ $$
523
+ \begin{array}{l} 2 \eta H ^ {2} \sum_ {i = 2} ^ {t} \left(\alpha_ {t} ^ {i} \| \mu_ {h} ^ {i} (\cdot | s) - \mu_ {h} ^ {i - 1} (\cdot | s) \| _ {1} ^ {2} + \| \nu_ {h} ^ {i} (\cdot | s) - \nu_ {h} ^ {i - 1} (\cdot | s) \| _ {1} ^ {2}\right) \\ \leq \frac {3 C _ {\eta} ^ {- 1} H ^ {2} \cdot \log (A B)}{2 t} + \frac {1}{H} \sum_ {i = 1} ^ {t} \alpha_ {t} ^ {i} \delta_ {h} ^ {i}. \\ \end{array}
524
+ $$
525
+
526
+ Feeding this back to (18a) and (18b), we obtain
527
+
528
+ $$
529
+ \operatorname {r e g} _ {h} ^ {t} = \max _ {s} \max _ {i = 1, 2} \left\{\operatorname {r e g} _ {h, i} ^ {t} (s) \right\} \leq \frac {5 C _ {\eta} ^ {- 1} H ^ {3} \cdot \log (A B)}{t} + \frac {1}{H} \sum_ {i = 1} ^ {t} \alpha_ {t} ^ {i} \delta_ {h} ^ {i}. \tag {19}
530
+ $$
531
+
532
+ Step 2: Bounding $\delta_h^t$ . Substituting the maximal regret bound (19) into the recursion (17), we arrive at
533
+
534
+ $$
535
+ \delta_ {h} ^ {t} \leq \left(1 + \frac {1}{H}\right) \sum_ {i = 1} ^ {t} \alpha_ {t} ^ {i} \delta_ {h + 1} ^ {i} + \frac {5 C _ {\eta} ^ {- 1} H ^ {3} \cdot \log (A B)}{t}. \tag {20}
536
+ $$
537
+
538
+ We continue the proof of Lemma 3 via induction on $h$ . More precisely, we aim to inductively establish the claim
539
+
540
+ $$
541
+ \delta_ {h} ^ {t} \leq \sum_ {h ^ {\prime} = h} ^ {H} \left(1 + \frac {1}{H}\right) ^ {2 \left(H - h ^ {\prime}\right)} \cdot \frac {5 C _ {\eta} ^ {- 1} H ^ {3} \cdot \log (A B)}{t}. \tag {21}
542
+ $$
543
+
544
+ First note that the induction hypothesis holds naturally for $h = H$ as $\delta_H^t = 0$ for all $1 \leq t \leq T$ . Now assume that the induction hypothesis is true for some $2 \leq h + 1 \leq H$ and for all $1 \leq t \leq T$ . Our goal is to show that (21) continues to hold for the previous step $h$ and for all $1 \leq t \leq T$ . By the recursion (20) and the induction hypothesis, one has for any $1 \leq t \leq T$ :
545
+
546
+ $$
547
+ \begin{array}{l} \delta_{h}^{t} \leq \left(1 + \frac{1}{H}\right) \sum_{i = 1}^{t} \alpha_{t}^{i} \delta_{h + 1}^{i} + \frac{5 C_{\eta}^{-1} H^{3} \cdot \log(A B)}{t} \\ \leq \left(1 + \frac{1}{H}\right) \sum_{i = 1}^{t} \alpha_{t}^{i} \left( \sum_{h^{\prime} = h + 1}^{H} \left(1 + \frac{1}{H}\right)^{2(H - h^{\prime})} \cdot \frac{5 C_{\eta}^{-1} H^{3} \cdot \log(A B)}{i} \right) + \frac{5 C_{\eta}^{-1} H^{3} \cdot \log(A B)}{t}. \\ \end{array}
548
+ $$
549
+
550
+ Apply Lemma 4 to obtain
551
+
552
+ $$
553
+ \sum_ {i = 1} ^ {t} \alpha_ {t} ^ {i} \cdot \frac {5 C _ {\eta} ^ {- 1} H ^ {3} \cdot \log (A B)}{i} \leq \left(1 + \frac {1}{H}\right) \frac {5 C _ {\eta} ^ {- 1} H ^ {3} \cdot \log (A B)}{t}.
554
+ $$
555
+
556
+ This leads to the conclusion that
557
+
558
+ $$
559
+ \begin{array}{l} \delta_{h}^{t} \leq \left(1 + \frac{1}{H}\right) \sum_{h^{\prime} = h + 1}^{H} \left(1 + \frac{1}{H}\right)^{2(H - h^{\prime})} \left(1 + \frac{1}{H}\right) \frac{5 C_{\eta}^{-1} H^{3} \cdot \log(A B)}{t} + \frac{5 C_{\eta}^{-1} H^{3} \cdot \log(A B)}{t} \\ = \sum_{h^{\prime} = h + 1}^{H} \left(1 + \frac{1}{H}\right)^{2(H - h^{\prime} + 1)} \frac{5 C_{\eta}^{-1} H^{3} \cdot \log(A B)}{t} + \frac{5 C_{\eta}^{-1} H^{3} \cdot \log(A B)}{t} \\ = \sum_{h^{\prime} = h}^{H} \left(1 + \frac{1}{H}\right)^{2(H - h^{\prime})} \frac{5 C_{\eta}^{-1} H^{3} \cdot \log(A B)}{t}. \\ \end{array}
560
+ $$
561
+
562
+ This finishes the induction.
563
+
564
+ This bound on $\delta_h^t$ can be further simplified as follows:
565
+
566
+ $$
567
+ \begin{array}{l} \delta_ {h} ^ {t} \leq \sum_ {h ^ {\prime} = h} ^ {H} \left(1 + \frac {1}{H}\right) ^ {2 (H - h ^ {\prime})} \cdot \frac {5 C _ {\eta} ^ {- 1} H ^ {3} \cdot \log (A B)}{t} \\ \leq H \left(1 + \frac {1}{H}\right) ^ {2 H} \cdot \frac {5 C _ {\eta} ^ {- 1} H ^ {3} \cdot \log (A B)}{t} \\ \leq \frac {5 e ^ {2} C _ {\eta} ^ {- 1} H ^ {4} \cdot \log (A B)}{t}. \\ \end{array}
568
+ $$
569
+
570
+ This finishes the proof, and we are left with proving Lemma 6.
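The last step of the simplification above uses $(1+1/H)^{2H} \leq e^2$ ; as a quick numeric check (an illustration only, not part of the proof):

```python
import math

# The elementary fact (1 + 1/H)^H <= e implies (1 + 1/H)^(2H) <= e^2 for all H >= 1.
for H in range(1, 1001):
    assert (1.0 + 1.0 / H) ** (2 * H) <= math.e ** 2
print("(1 + 1/H)^(2H) <= e^2 holds for H = 1, ..., 1000")
```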
571
+
572
+ Proof of Lemma 6. Recall that
573
+
574
+ $$
575
+ \mathrm{reg}_{h, 1}^{t}(s) + \mathrm{reg}_{h, 2}^{t}(s) = \max_{\mu^{\dagger}, \nu^{\dagger}} \sum_{i = 1}^{t} \alpha_{t}^{i} \left[ \left\langle \mu^{\dagger}, Q_{h}^{i} \nu_{h}^{i} \right\rangle - \left\langle \nu^{\dagger}, (Q_{h}^{i})^{\top} \mu_{h}^{i} \right\rangle \right] (s).
576
+ $$
577
+
578
+ Replacing the estimate $Q_h^i$ with $Q_h^\star$ , we obtain
579
+
580
+ $$
581
+ \begin{array}{l} \mathrm{reg}_{h, 1}^{t}(s) + \mathrm{reg}_{h, 2}^{t}(s) \geq \max_{\mu^{\dagger}, \nu^{\dagger}} \left[ \sum_{i = 1}^{t} \alpha_{t}^{i} \left[ \left\langle \mu^{\dagger}, Q_{h}^{\star} \nu_{h}^{i} \right\rangle - \left\langle \nu^{\dagger}, (Q_{h}^{\star})^{\top} \mu_{h}^{i} \right\rangle \right] (s) \right. \\ \quad \left. + \sum_{i = 1}^{t} \alpha_{t}^{i} \left[ \left\langle \mu^{\dagger}, \left(Q_{h}^{i} - Q_{h}^{\star}\right) \nu_{h}^{i} \right\rangle - \left\langle \nu^{\dagger}, \left(Q_{h}^{i} - Q_{h}^{\star}\right)^{\top} \mu_{h}^{i} \right\rangle \right] (s) \right]. \\ \end{array}
582
+ $$
583
+
584
+ Lower bounding the term involving $Q_h^i - Q_h^\star$ yields
585
+
586
+ $$
587
+ \mathrm {r e g} _ {h, 1} ^ {t} (s) + \mathrm {r e g} _ {h, 2} ^ {t} (s) \geq \max _ {\mu^ {\dagger}, \nu^ {\dagger}} \left[ \sum_ {i = 1} ^ {t} \alpha_ {t} ^ {i} \left[ \left\langle \mu^ {\dagger}, Q _ {h} ^ {\star} \nu_ {h} ^ {i} \right\rangle - \left\langle \nu^ {\dagger}, (Q _ {h} ^ {\star}) ^ {\top} \mu_ {h} ^ {i} \right\rangle \right] (s) \right] - 2 \sum_ {i = 1} ^ {t} \alpha_ {t} ^ {i} \delta_ {h} ^ {i}.
588
+ $$
589
+
590
+ where recall $\delta_h^i = \| Q_h^i -Q_h^\star \|_\infty$ . Now observe that $\sum_{i = 1}^{t}\alpha_{t}^{i}\mu_{h}^{i}(\cdot \mid s)$ and $\sum_{i = 1}^{t}\alpha_{t}^{i}\nu_{h}^{i}(\cdot \mid s)$ are valid policies, which implies
591
+
592
+ $$
593
+ \begin{array}{l} \max _ {\mu^ {\dagger}, \nu^ {\dagger}} \left[ \sum_ {i = 1} ^ {t} \alpha_ {t} ^ {i} \left[ \left\langle \mu^ {\dagger}, Q _ {h} ^ {\star} \nu_ {h} ^ {i} \right\rangle - \left\langle \nu^ {\dagger}, (Q _ {h} ^ {\star}) ^ {\top} \mu_ {h} ^ {i} \right\rangle \right] (s) \right] \\ = \max _ {\mu^ {\dagger}, \nu^ {\dagger}} \left[ \left\langle \mu^ {\dagger}, Q _ {h} ^ {\star} \left(\sum_ {i = 1} ^ {t} \alpha_ {t} ^ {i} \nu_ {h} ^ {i}\right) \right\rangle (s) - \left\langle \nu^ {\dagger}, Q _ {h} ^ {\star \top} \left(\sum_ {i = 1} ^ {t} \alpha_ {t} ^ {i} \mu_ {h} ^ {i}\right) \right\rangle (s) \right] \\ \geq \left\langle \left(\sum_ {i = 1} ^ {t} \alpha_ {t} ^ {i} \mu_ {h} ^ {i}\right), Q _ {h} ^ {\star} \left(\sum_ {i = 1} ^ {t} \alpha_ {t} ^ {i} \nu_ {h} ^ {i}\right) \right\rangle (s) - \left\langle \left(\sum_ {i = 1} ^ {t} \alpha_ {t} ^ {i} \nu_ {h} ^ {i}\right), Q _ {h} ^ {\star^ {\top}} \left(\sum_ {i = 1} ^ {t} \alpha_ {t} ^ {i} \mu_ {h} ^ {i}\right) \right\rangle (s) \\ = 0. \\ \end{array}
594
+ $$
595
+
596
+ Combine the above two inequalities to finish the proof.
597
+
598
+ In the end, it is worth pointing out that without the improved inequality in Lemma 4, one would necessarily incur an extra $\log T$ factor in each induction step. Consequently, the recursion would break down, as the accumulated factor blows up at a rate of $(\log T)^H$ .
599
+
600
+ # B.4 PROOF OF LEMMA 4
601
+
602
+ We prove the claim via induction. The base case $t = 1$ is true since $\alpha_1^1 \cdot 1 = 1 \leq 1 + 1 / H$ . Now assume that the inequality (9) holds for some $t \geq 1$ ; we aim to prove that it continues to hold at $t + 1$ . We first observe that for all $i \leq t$ ,
603
+
604
+ $$
605
+ \alpha_ {t + 1} ^ {i} = \alpha_ {i} \prod_ {j = i + 1} ^ {t + 1} (1 - \alpha_ {j}) = (1 - \alpha_ {t + 1}) \alpha_ {i} \prod_ {j = i + 1} ^ {t} (1 - \alpha_ {j}) = (1 - \alpha_ {t + 1}) \alpha_ {t} ^ {i}.
606
+ $$
607
+
608
+ This allows us to rewrite $\sum_{i=1}^{t+1} \alpha_{t+1}^i \cdot \frac{1}{i}$ as
609
+
610
+ $$
611
+ \begin{array}{l} \sum_ {i = 1} ^ {t + 1} \alpha_ {t + 1} ^ {i} \cdot \frac {1}{i} = (1 - \alpha_ {t + 1}) \left(\sum_ {i = 1} ^ {t} \alpha_ {t} ^ {i} \cdot \frac {1}{i}\right) + \alpha_ {t + 1} \cdot \frac {1}{t + 1} \\ \leq \left(1 - \alpha_ {t + 1}\right) \left(1 + \frac {1}{H}\right) \frac {1}{t} + \frac {\alpha_ {t + 1}}{t + 1}, \\ \end{array}
612
+ $$
613
+
614
+ where the second line follows from the induction hypothesis. Note that $\alpha_{t + 1} = \frac{H + 1}{H + t + 1}$ . We can continue the derivation as
615
+
616
+ $$
617
+ \begin{array}{l} \sum_ {i = 1} ^ {t + 1} \alpha_ {t + 1} ^ {i} \cdot \frac {1}{i} \leq \left(1 + \frac {1}{H}\right) \frac {t}{H + t + 1} \cdot \frac {1}{t} + \frac {H + 1}{H + t + 1} \cdot \frac {1}{t + 1} \\ = \left(1 + \frac {1}{H}\right) \frac {t + 1}{H + t + 1} \cdot \frac {1}{t + 1} + \left(1 + \frac {1}{H}\right) \frac {H}{H + t + 1} \cdot \frac {1}{t + 1} \\ = \left(1 + \frac {1}{H}\right) \frac {1}{t + 1}. \\ \end{array}
618
+ $$
619
+
620
+ This finishes the proof.
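As a numeric sanity check of the inequality (9) just proved (an illustration only, not part of the proof; the sample value of $H$ is ours):

```python
H = 4

def alpha(j):
    """Step size alpha_j = (H + 1) / (H + j)."""
    return (H + 1) / (H + j)

for t in range(1, 200):
    lhs = 0.0
    for i in range(1, t + 1):
        w = alpha(i)  # alpha_t^i = alpha_i * prod_{j=i+1}^t (1 - alpha_j), cf. (10b)
        for j in range(i + 1, t + 1):
            w *= 1.0 - alpha(j)
        lhs += w / i
    assert lhs <= (1.0 + 1.0 / H) / t + 1e-12
print("sum_i alpha_t^i / i <= (1 + 1/H) / t holds for t = 1, ..., 199")
```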
2023/$O(T^{-1})$ Convergence of Optimistic-Follow-the-Regularized-Leader in Two-Player Zero-Sum Markov Games/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d0c5088e35b4ebef92e7fe2b377d44a99d016c903d93a719d6087fcc03f3f19d
3
+ size 787226
2023/$O(T^{-1})$ Convergence of Optimistic-Follow-the-Regularized-Leader in Two-Player Zero-Sum Markov Games/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/$_Lambda$-DARTS_ Mitigating Performance Collapse by Harmonizing Operation Selection among Cells/a1590ce8-2443-45f9-b768-74da670097a7_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/$_Lambda$-DARTS_ Mitigating Performance Collapse by Harmonizing Operation Selection among Cells/a1590ce8-2443-45f9-b768-74da670097a7_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/$_Lambda$-DARTS_ Mitigating Performance Collapse by Harmonizing Operation Selection among Cells/a1590ce8-2443-45f9-b768-74da670097a7_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cc851bf28a55cac27cbd2d3a5b487ac5a9d16db4f567ed66eb33689d03c04c56
3
+ size 3824028
2023/$_Lambda$-DARTS_ Mitigating Performance Collapse by Harmonizing Operation Selection among Cells/full.md ADDED
The diff for this file is too large to render. See raw diff
 
2023/$_Lambda$-DARTS_ Mitigating Performance Collapse by Harmonizing Operation Selection among Cells/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2e70d35a26ea448732f245ece6171fc0cc582b4b095b0674a09113bb001abf20
3
+ size 1492257
2023/$_Lambda$-DARTS_ Mitigating Performance Collapse by Harmonizing Operation Selection among Cells/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/$_mathcal{O}$-GNN_ incorporating ring priors into molecular modeling/ab188adf-da52-465b-96c7-6599648843de_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/$_mathcal{O}$-GNN_ incorporating ring priors into molecular modeling/ab188adf-da52-465b-96c7-6599648843de_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/$_mathcal{O}$-GNN_ incorporating ring priors into molecular modeling/ab188adf-da52-465b-96c7-6599648843de_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:915edb76878f6f8d721fa9fba09bce023ac7cc6d1083234c1fd5f027105756a2
3
+ size 1127816
2023/$_mathcal{O}$-GNN_ incorporating ring priors into molecular modeling/full.md ADDED
@@ -0,0 +1,527 @@
 
 
 
 
1
+ # O-GNN: INCORPORATING RING PRIORS INTO MOLECULAR MODELING*
2
+
3
+ $^{1}$ Jinhua Zhu, $^{1}$ Kehan Wu, $^{1}$ Bohan Wang, $^{2}$ Yingce Xia, $^{3}$ Shufang Xie, $^{2}$ Qi Meng,
4
+ $^{2}$ Lijun Wu, $^{2}$ Tao Qin, $^{1}$ Wengang Zhou, $^{1}$ Houqiang Li, $^{2}$ Tie-Yan Liu
5
+ <sup>1</sup>University of Science and Technology of China, <sup>2</sup>Microsoft Research AI4Science
6
+ $^{3}$ Gaoling School of Artificial Intelligence, Renmin University of China
7
+ $^{1}$ {teslazhu, wu_2018}@mail.ustc.edu.cn, $^{1}$ bhwangfy@gmail.com,
8
+ $^{1}$ {zhwg, lihq}@ustc.edu.cn, $^{3}$ shufangxie@ruc.edu.cn,
9
+ $^{2}$ {yingce.xia, meq, lijuwu, taoqin, tyliu}@microsoft.com
10
+
11
+ # ABSTRACT
12
+
13
+ Cyclic compounds that contain at least one ring play an important role in drug design. Despite the recent success of molecular modeling with graph neural networks (GNNs), few models explicitly take rings in compounds into consideration, consequently limiting the expressiveness of the models. In this work, we design a new variant of GNN, ring-enhanced GNN ( $\mathcal{O}$ -GNN), that explicitly models rings in addition to atoms and bonds in compounds. In $\mathcal{O}$ -GNN, each ring is represented by a latent vector, which contributes to and is iteratively updated by atom and bond representations. Theoretical analysis shows that $\mathcal{O}$ -GNN is able to distinguish two isomorphic subgraphs lying on different rings using only one layer while conventional graph convolutional neural networks require multiple layers to distinguish, demonstrating that $\mathcal{O}$ -GNN is more expressive. Through experiments, $\mathcal{O}$ -GNN shows good performance on 11 public datasets. In particular, it achieves state-of-the-art validation result on the PCQM4Mv1 benchmark (outperforming the previous KDDCup champion solution) and the drug-drug interaction prediction task on DrugBank. Furthermore, $\mathcal{O}$ -GNN outperforms strong baselines (without modeling rings) on the molecular property prediction and retrosynthesis prediction tasks. The code is released at https://github.com/O-GNN/O-GNN.
14
+
15
+ # 1 INTRODUCTION
16
+
17
+ Cyclic compounds, i.e., molecules that have at least one ring in their structure, naturally exist in the chemical space. According to our statistics on $109M$ compounds from PubChem (Kim et al., 2019), a widely used chemical library, more than $90\%$ of compounds have at least one ring. The rings can be small/simple (e.g., benzene is a six-membered carbon ring, and pentazole is a five-membered nitrogen ring) or large/complex (e.g., the molecule shown in Figure 1). Rings are important in drug discovery, for example: (1) Rings can potentially reduce the flexibility of molecules, reduce the uncertainty when interacting with target proteins, and lock the molecules into their bioactive conformations (Sun et al., 2012). (2) Macrocyclic compounds, which usually have a ring with more than 12 atoms, play important roles in antibiotics design (Venugopal & Johnson, 2011) and peptide drug design (Bhardwaj et al., 2022).
18
+
19
+ Recently, deep neural networks, especially graph neural networks (denoted as GNNs) (Kipf & Welling, 2017; Hamilton et al., 2017a), have been widely used in molecular modeling. A GNN takes a graph as input, and messages of different nodes are passed along edges. GNNs have achieved great success in scientific discovery: (1) Stokes et al. (2020) train a GNN to predict growth inhibition of Escherichia coli and find that Halicin is a broad-spectrum bactericidal antibiotic. (2) Shan et al. (2022) leverage GNNs to model the interactions between proteins, and they eventually obtain possible antibodies for SARS-CoV-2. In addition, GNNs are widely used in drug property prediction (Rong et al., 2020), drug-target interaction modeling (Torng & Altman, 2019), retrosynthesis
20
+
21
+ (Chen & Jung, 2021), etc. However, none of the above work explicitly models the ring information in GNNs. From the application's perspective, they miss an important feature for their tasks. From the machine learning perspective, Loukas (2020) points out that existing message-passing-based GNNs cannot properly capture the ring information when the product of network width and depth is not large enough (see Table 1 in Loukas (2020)). Therefore, with classic GNNs, the ring information in compounds is not well leveraged.
22
+
23
+ To tackle this issue, in this work, we propose a new model, ring-enhanced GNN (denoted as $\mathcal{O}$ -GNN), that explicitly models the ring information in a compound. The $\mathcal{O}$ stands for the rings in molecules and is pronounced as "O". Generally speaking, $\mathcal{O}$ -GNN stacks $L$ layers, and each layer sequentially updates edge representations, node representations and ring representations by aggregating their neighbourhood information. We mainly use a self-attention layer for adaptive message passing, and use a feed-forward layer to introduce non-linearity to representations.
24
+
25
+ ![](images/9479a9bc6266e1e302c19ad5dfc87425b0c5d112e9e3a4b6e6a45ab4059ad8f6.jpg)
26
+ Figure 1: Paclitaxel, a compound with 7 simple rings. Kampan et al. (2015) summarized that the intact taxane ring (i.e., $r_4$ , $r_5$ , $r_6$ ) and a four-membered oxetane side ring (i.e., $r_7$ ) are essential to induce cytotoxic activity.
27
+
28
+ We first demonstrate the advantage of $\mathcal{O}$ -GNN through theoretical analysis. $\mathcal{O}$ -GNN is able to distinguish two isomorphic sub-graphs lying on different rings using only one layer (see Figure 2 for an example). On the contrary, if we remove the ring-modeling components from $\mathcal{O}$ -GNN, such distinguishability would require multiple layers (see Section 2.3 for detailed analysis). These results demonstrate that $\mathcal{O}$ -GNN is more expressive than conventional graph convolutional networks that do not model rings.
29
+
30
+ We then conduct experiments on 11 datasets from three tasks, including molecular property prediction, drug-drug interaction prediction and retrosynthesis:
31
+
32
+ (1) For molecular property prediction, we first conduct experiments on PCQM4Mv1, which is to predict the HOMO-LUMO gap of molecules. Our method outperforms the champion solution of KDDCup on the validation set (Shi et al., 2022) (note that test set labels are not available). Next, we verify $\mathcal{O}$ -GNN on six datasets from MoleculeNet (Wu et al., 2018), which is to predict several pharmaceutical-related properties of molecules. $\mathcal{O}$ -GNN outperforms the corresponding GNN baselines without rings. Finally, we conduct experiments on FS-Mol (Stanley et al., 2021), a few-shot property prediction task, and show that modeling rings can also improve the prediction accuracy. (2) For drug-drug interaction prediction, which is to predict whether two drugs interact with each other, we test $\mathcal{O}$ -GNN on DrugBank following the previous settings (Nyamabo et al., 2021; Li et al., 2022), and achieve state-of-the-art results. (3) For retrosynthesis, we apply $\mathcal{O}$ -GNN to LocalRetro (Chen & Jung, 2021), a strong GNN-based method for retrosynthesis. On USPTO-50k, our method significantly boosts the accuracy.
35
+
36
+ ![](images/40f8b04ffa69a27f0f16ccf7d0e198c0572d854657d443ca44b4f9a67026cc96.jpg)
37
+ Figure 2: An illustrative example of the theoretical results. The three substructures in the red circles are isomorphic. The second and third substructures lie on different rings (a cyclooctane and an azocine). A regular GNN requires multiple layers to distinguish the three substructures while $\mathcal{O}$ -GNN requires only one layer due to the ring representations.
38
+
39
+ # 2 METHOD
40
+
41
+ # 2.1 NOTATION AND PRELIMINARIES
42
+
43
+ Let $G = (V, E)$ denote a molecular graph, where $V$ and $E$ are the collections of nodes/atoms and edges/bonds<sup>1</sup>. Let $R$ denote the collection of rings in $G$ . Define $V = \{v_1, v_2, \dots, v_{|V|}\}$ and
44
+
45
+ $E = \{e_{ij}\}$ , where $v_{i}$ is the $i$ -th atom and $e_{ij}$ is the bond connecting $v_{i}$ and $v_{j}$ . When the context is clear, we use $i$ to denote atom $v_{i}$ , and use $e(v_{i}, v_{j})$ to denote edge $e_{ij}$ . Let $\mathcal{N}(i)$ denote the neighbors of atom $i$ , i.e., $\mathcal{N}(i) = \{v_{j} \mid e_{ij} \in E\}$ . Define $R = \{r_{1}, r_{2}, \dots, r_{|R|}\}$ , where each $r_{i}$ is a simple ring, i.e., a ring that does not contain any smaller ring within it. For example, the molecule in Figure 3 has two simple rings as marked ( $r_{1}$ and $r_{2}$ ), while the ring $(1, 2, 3, 4, 5, 6, 7, 8, 9, 1)$ is not a simple ring. Let $R(v_{i})$ and $R(e_{ij})$ denote the rings that the atom $v_{i}$ or the bond $e_{ij}$ lies on, and let $V(r)$ and $E(r)$ denote all the atoms and the bonds lying on ring $r$ . For example, in Figure 3, $R(v_{4}) = \{r_{1}, r_{2}\}$ while $R(v_{3}) = \{r_{2}\}$ ; $R(e_{49}) = \{r_{1}, r_{2}\}$ while $R(e_{78}) = \{r_{1}\}$ . $V(r_{1}) = \{v_{4}, v_{5}, v_{6}, v_{7}, v_{8}, v_{9}\}$ and $E(r_{1}) = \{e_{45}, e_{56}, e_{67}, e_{78}, e_{89}, e_{94}\}$ .
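The ring-membership maps $R(v)$, $R(e)$, $V(r)$ and $E(r)$ can be built mechanically once the simple rings are known. In practice the rings would come from a ring-perception routine in a cheminformatics toolkit; in the sketch below the two rings of Figure 3 are written down by hand, which is an assumption for illustration:

```python
# Simple rings of the molecule in Figure 3, given a priori (atom indices as in the figure).
# Bonds are stored as frozensets so that e(4, 9) and e(9, 4) denote the same bond.
rings = {
    "r1": [frozenset(e) for e in [(4, 5), (5, 6), (6, 7), (7, 8), (8, 9), (9, 4)]],
    "r2": [frozenset(e) for e in [(1, 2), (2, 3), (3, 4), (4, 9), (9, 1)]],
}

# V(r): atoms lying on ring r; E(r): bonds lying on ring r.
V = {r: sorted({v for e in edges for v in e}) for r, edges in rings.items()}
E = dict(rings)

def R_atom(v):
    # R(v): the set of rings that atom v lies on.
    return {r for r, atoms in V.items() if v in atoms}

def R_bond(i, j):
    # R(e): the set of rings that bond e(i, j) lies on.
    return {r for r, edges in E.items() if frozenset((i, j)) in edges}

shared_atom = R_atom(4)   # atom 4 is shared by both rings
fused_bond = R_bond(4, 9)  # bond (4, 9) is the fused bond of the two rings
```

Running this reproduces the examples in the text: atom 4 and bond (4, 9) belong to both rings, atom 3 and bond (7, 8) to only one.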
46
+
47
+ A graph neural network (GNN) is usually stacked by several identical GNN layers. Each GNN layer is composed of an Aggregate function and an Update function,
48
+
49
+ $$
50
+ h_i^{\prime} = \mathrm{Update}\left(h_i, \mathrm{Aggregate}\left(h_j \mid j \in \mathcal{N}(i)\right)\right), \tag{1}
51
+ $$
52
+
53
+ where $h_i$ is the representation of atom $i$ and $h_i'$ is its updated representation. Different GNNs have different Aggregate functions and Update functions. Details are summarized in Appendix D.
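To make Eqn. (1) concrete, here is a minimal sketch of one message-passing layer with mean aggregation and a linear-plus-ReLU update; these particular Aggregate/Update choices are our own illustration, not a specific GNN from the literature:

```python
import numpy as np

def gnn_layer(h, neighbors, W_self, W_agg):
    """One generic message-passing layer in the form of Eqn. (1):
    Aggregate = mean over neighbour representations, Update = linear + ReLU."""
    h_new = np.empty_like(h)
    for i, nbrs in neighbors.items():
        agg = np.mean(h[nbrs], axis=0)                           # Aggregate(h_j | j in N(i))
        h_new[i] = np.maximum(W_self @ h[i] + W_agg @ agg, 0.0)  # Update(h_i, agg)
    return h_new

rng = np.random.default_rng(0)
h = rng.normal(size=(3, 4))                    # 3 atoms, hidden dimension 4 (toy values)
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}  # a triangle graph
W_self, W_agg = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
h1 = gnn_layer(h, neighbors, W_self, W_agg)
```

Different GNNs are recovered by swapping in different Aggregate and Update functions (sum, max, attention-weighted sums, gated updates, and so on).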
54
+
55
+ ![](images/94ec1715055077adc4f7c497150546194f7435e092ed36ccf824f6006784f893.jpg)
56
+ Figure 3: The workflow of our method. $H_{E}^{(l)}$ , $H_{V}^{(l)}$ , $H_{R}^{(l)}$ and $U^{(l)}$ denote the representation collections of bond, atom, ring and the global compound at the $l$ -th layer.
57
+
58
+ # 2.2 MODEL
59
+
60
+ Our model consists of $L$ identical layers with different parameters. The architecture of each layer is shown in Figure 3. Let $h_i^{(l)}$ , $h_{ij}^{(l)}$ and $h_r^{(l)}$ denote the output representations of atom $v_i$ , bond $e_{ij}$ and ring $r$ at the $l$ -th layer, respectively. Let $U^{(l)}$ denote the compound representation at the $l$ -th layer. We initialize $h_i^{(0)}$ via a learnable embedding layer which indicates its atomic type, chirality, degree number, formal charge, hybridization type, and so on. Similarly, we initialize $h_{ij}^{(0)}$ with a learnable embedding which indicates its bond type, stereoisomerism type and whether the bond is conjugated. Then we initialize $h_r^{(0)}$ by concatenating the node and edge embeddings and transforming the result with a non-linear layer. Last, we initialize the compound representation with a learnable embedding. In each layer, we update the representations of nodes, bonds, rings and the compound sequentially. We will frequently use $\mathrm{MLP}(\dots)$ , a multi-layer perceptron network with one hidden layer, to build our model. The inputs of MLP are concatenated into a long vector and processed by the network.
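The atom initialization above can be sketched as lookups into per-feature embedding tables that are summed into one vector. The feature names and vocabulary sizes below are illustrative placeholders, not the exact feature set used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # hidden dimension (toy value; the paper uses larger dimensions such as 256)

# One learnable embedding table per categorical atom feature.
# Vocabulary sizes are placeholders (e.g., 119 atomic types).
atom_tables = {"atomic_type": rng.normal(size=(119, D)),
               "degree": rng.normal(size=(11, D)),
               "formal_charge": rng.normal(size=(11, D))}

def init_atom(features):
    # h_i^(0): sum of the embeddings of the atom's categorical features.
    return sum(atom_tables[name][idx] for name, idx in features.items())

# e.g., a carbon atom with degree index 4 and formal-charge index 5
h0 = init_atom({"atomic_type": 6, "degree": 4, "formal_charge": 5})
```

Bond embeddings $h_{ij}^{(0)}$ follow the same pattern with bond-level feature tables.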
61
+
62
+ (1) Update bond representations: The representation of a bond is updated via the connected atoms, the rings that the bond belongs to and the compound representation from the last layer:
63
+
64
+ $$
65
+ h_{ij}^{(l)} = h_{ij}^{(l-1)} + \mathrm{MLP}\left(h_i^{(l-1)}, h_j^{(l-1)}, h_{ij}^{(l-1)}, \frac{\sum_{r \in R(e_{ij})} h_r^{(l-1)}}{|R(e_{ij})|}, U^{(l-1)}\right). \tag{2}
66
+ $$
67
+
68
+ (2) Update atom representations: We use an attention model to adaptively aggregate bond representations into the centralized atoms. Mathematically,
69
+
70
+ $$
71
+ \bar{h}_i^{(l)} = \sum_{j \in \mathcal{N}(i)} \alpha_j W_v \operatorname{concat}\left(h_{ij}^{(l)}, h_j^{(l-1)}\right);
72
+ $$
73
+
74
+ $$
75
+ \alpha_j \propto \exp\left(\boldsymbol{a}^{\top} \operatorname{LeakyReLU}\left(W_q h_i^{(l-1)} + W_k \operatorname{concat}\left(h_j^{(l-1)}, h_{ij}^{(l)}\right)\right)\right); \tag{3}
76
+ $$
77
+
78
+ $$
79
+ h_i^{(l)} = h_i^{(l-1)} + \mathrm{MLP}\left(h_i^{(l-1)}, \bar{h}_i^{(l)}, \frac{1}{|R(v_i)|} \sum_{r \in R(v_i)} h_r^{(l-1)}, U^{(l-1)}\right).
80
+ $$
81
+
82
+ In Eqn.(3), the $W$ 's are the parameters to be learned, and concat denotes concatenating the input vectors as a long one.
83
+
84
+ (3) Update ring representations: The ring representations are updated using MLP networks:
85
+
86
+ $$
87
+ h_r^{(l)} = h_r^{(l-1)} + \mathrm{MLP}\left(h_r^{(l-1)}, \sum_{v_i \in V(r)} h_i^{(l)}, \sum_{e_{ij} \in E(r)} h_{ij}^{(l)}, U^{(l-1)}\right). \tag{4}
88
+ $$
89
+
90
+ (4) Update the compound representation:
91
+
92
+ $$
93
+ U^{(l)} = U^{(l-1)} + \mathrm{MLP}\left(\frac{1}{|V|} \sum_{i=1}^{|V|} h_i^{(l)}, \frac{1}{|E|} \sum_{i,j} h_{ij}^{(l)}, \frac{1}{|R|} \sum_{r \in R} h_r^{(l)}, U^{(l-1)}\right). \tag{5}
94
+ $$
95
+
96
+ After stacking $L$ $\mathcal{O}$ -GNN layers, we obtain the graph representation by a simple average pooling layer, i.e., $h_{\mathcal{G}} = \frac{1}{|V|}\sum_{i = 1}^{|V|}h_i^{(L)}$ , which can be utilized by graph classification tasks. For node classification tasks, we can add a classification head on top of $h_i^{(L)}$ .
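Putting Eqns. (2)-(5) together, a single $\mathcal{O}$-GNN layer can be sketched as below. For brevity the sketch substitutes uniform averaging for the attention weights $\alpha_j$ of Eqn. (3) and uses toy dimensions; it illustrates the update order (bonds, then atoms, then rings, then the compound vector), not our exact implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # hidden dimension (toy value; the paper uses 256)

def make_mlp(n_inputs):
    # One-hidden-layer MLP over n_inputs concatenated D-dim vectors.
    return (rng.normal(0, 0.1, size=(D, n_inputs * D)), np.zeros(D),
            rng.normal(0, 0.1, size=(D, D)), np.zeros(D))

def mlp(p, *xs):
    W1, b1, W2, b2 = p
    return W2 @ np.maximum(W1 @ np.concatenate(xs) + b1, 0.0) + b2

def ognn_layer(h_v, h_e, h_r, U, neighbors, ring_edges, p):
    # (2) bond update: endpoint atoms, mean of incident ring vectors, compound vector.
    for (i, j) in list(h_e):
        rs = [k for k, re in enumerate(ring_edges) if (i, j) in re]
        rmean = np.mean([h_r[k] for k in rs], axis=0) if rs else np.zeros(D)
        h_e[(i, j)] = h_e[(i, j)] + mlp(p["e"], h_v[i], h_v[j], h_e[(i, j)], rmean, U)
    # (3) atom update; uniform averaging stands in for the attention weights alpha_j.
    for i in range(len(h_v)):
        agg = np.mean([h_e[tuple(sorted((i, j)))] for j in neighbors[i]], axis=0)
        rs = [k for k, re in enumerate(ring_edges) if any(i in e for e in re)]
        rmean = np.mean([h_r[k] for k in rs], axis=0) if rs else np.zeros(D)
        h_v[i] = h_v[i] + mlp(p["v"], h_v[i], agg, rmean, U)
    # (4) ring update: sums of member atom and bond vectors.
    for k, re in enumerate(ring_edges):
        atoms = sorted({i for e in re for i in e})
        h_r[k] = h_r[k] + mlp(p["r"], h_r[k],
                              np.sum([h_v[i] for i in atoms], axis=0),
                              np.sum([h_e[e] for e in re], axis=0), U)
    # (5) compound update: means over atoms, bonds and rings.
    U = U + mlp(p["u"], np.mean(h_v, axis=0),
                np.mean(list(h_e.values()), axis=0), np.mean(h_r, axis=0), U)
    return h_v, h_e, h_r, U

# Toy input: a single three-membered ring (triangle).
h_v = [rng.normal(size=D) for _ in range(3)]
h_e = {(0, 1): rng.normal(size=D), (1, 2): rng.normal(size=D), (0, 2): rng.normal(size=D)}
h_r = [rng.normal(size=D)]
U = rng.normal(size=D)
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
ring_edges = [[(0, 1), (1, 2), (0, 2)]]
p = {"e": make_mlp(5), "v": make_mlp(4), "r": make_mlp(4), "u": make_mlp(4)}
h_v, h_e, h_r, U = ognn_layer(h_v, h_e, h_r, U, neighbors, ring_edges, p)
h_G = np.mean(h_v, axis=0)  # average-pooling readout over atom representations
```

Note the residual form of every update (each representation adds an MLP output to its previous value), matching the right-hand sides of Eqns. (2)-(5).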
97
+
98
+ # 2.3 THEORETICAL ANALYSIS
99
+
100
+ In this section, we compare the distinguishability of a standard GNN (without ring representations) and $\mathcal{O}$ -GNN. In addition to the notations defined in Section 2.1, we define the valued version of a graph $G = (V,E)$ as a triplet $\mathrm{VALUE}_f(G) = (V,E,f)$ , where $f$ is a mapping storing feature information that maps a node or an edge to its corresponding input feature (e.g., a 256-dimensional representation). We call $f$ a feature mapping on $G$ .
101
+
102
+ Definition 1 ( $k$ -neighbourhood node). For a molecular graph $G = (V, E)$ and two nodes $u, v \in V$ , we say $u$ is a $k$ -neighbourhood node of $v$ if there exists a path in $G$ connecting $u$ and $v$ with length no larger than $k$ . More formally, $u$ is a $k$ -neighbourhood node of $v$ if and only if there exists a set of nodes $\{v_0, v_1, \dots, v_t\} \subset V$ such that $t \leq k$ , $v_0 = v$ , $v_t = u$ , and for any $i \in \{0, \dots, t-1\}$ , $v_{i+1} \in \mathcal{N}(v_i)$ .
103
+
104
+ We highlight here that $v$ is a 0-neighbourhood node (and thus a $k$ -neighbourhood node with any $k \geq 0$ ) of itself.
105
+
106
+ Definition 2 ( $k$ -neighbourhood sub-graph). For a molecular graph $G = (V, E)$ and a node $v$ in $G$ , we define the $k$ -neighbourhood sub-graph of $v$ as the sub-graph composed of all of $v$ 's $k$ -neighbourhood nodes. More formally, we slightly abuse the notations and denote the $k$ -neighbourhood sub-graph of $v$ as $G(v, k) \triangleq (V(v, k), E(v, k))$ , where
107
+
108
+ $$
109
+ V(v, k) \triangleq \{u \in V : u \text{ is a } k\text{-neighbourhood node of } v\},
110
+ $$
111
+
112
+ $$
113
+ E(v, k) \triangleq \left\{e(v_1, v_2) \in E : v_1, v_2 \in V(v, k)\right\}.
114
+ $$
115
+
116
+ Definition 3 (Equivalent valued graph). For two valued graphs $\mathrm{VALUE}_{f_1}(G_1) = (V_1, E_1, f_1)$ and $\mathrm{VALUE}_{f_2}(G_2) = (V_2, E_2, f_2)$ , we say that they are equivalent if (i). $G_1$ and $G_2$ are isomorphic, i.e., there exists a one-to-one mapping $\mathcal{P}: V_1 \to V_2$ such that the edges are preserved; (ii). $\mathcal{P}$ also preserves the value of edges and the value of nodes, i.e., $\forall u, v \in G_1$ ,
117
+
118
+ $$
119
+ e(u, v) \in E_1 \Leftrightarrow e(\mathcal{P}(u), \mathcal{P}(v)) \in E_2,
120
+ $$
121
+
122
+ $$
123
+ f_1(u) = f_2(\mathcal{P}(u)), \quad f_1(v) = f_2(\mathcal{P}(v)), \quad f_1(e(u, v)) = f_2(e(\mathcal{P}(u), \mathcal{P}(v))).
124
+ $$
125
+
126
+ With all the preparations above, we are now ready to define the graph feature extractor and its discriminatory ability.
127
+
128
+ Definition 4 (Graph feature extractor and its discriminatory ability). We say a mapping $\Phi$ is a graph feature extractor if it maps a valued graph $\mathrm{VALUE}_{f}(G)$ to a new feature mapping $\tilde{f}$ on $G$ . We further allow $\Phi$ to be parameterized as $\Phi_{\theta}$ , and call $\Phi_{\theta}$ a parameterized graph feature extractor.
129
+
130
+ For a parameterized graph feature extractor $\Phi_{\theta}$ , we say $\Phi_{\theta}$ has the discriminatory ability for $k$ -neighbourhood sub-graphs if, for any valued graph $(G,f)$ and any two nodes $u,v$ in $G$ whose valued $k$ -neighbourhood sub-graphs (i.e., $(G(u,k),f)$ and $(G(v,k),f)$ ) are equivalent, there exists $\theta^{\star}$ such that $\Phi_{\theta^{\star}}((G,f))(u)\neq \Phi_{\theta^{\star}}((G,f))(v)$ . In this case, we also say that $\Phi_{\theta^{\star}}$ can distinguish $u$ and $v$ .
131
+
132
+ We point out that $\{h_i^{(l)}\}_i \cup \{h_{i,j}^{(l)}\}_{i,j}$ defined by Eqns. (2)-(5) is a parameterized graph feature extractor, and thus the above provides a formal definition of $\mathcal{O}$ -GNN's discriminatory ability.
133
+
134
+ The next proposition shows that without the ring representation, $\mathcal{O}$ -GNN needs at least $k + 1$ layers to have the discriminatory ability for $k$ -neighbourhood sub-graphs.
135
+
136
+ Proposition 1. Without the ring representation, $\mathcal{O}$ -GNN with no more than $k$ layers does not have the discriminatory ability for $k$ -neighbourhood sub-graphs.
137
+
138
+ Note that Proposition 1 can be easily extended to the conventional graph convolutional neural networks, which only aggregate information from 1-neighborhood nodes. We then show that with the ring representations, $\mathcal{O}$ -GNN with only one layer has the discriminatory power.
139
+
140
+ Proposition 2. If $u$ and $v$ lie on different rings, $\mathcal{O}$ -GNN with only one layer can distinguish them.
141
+
142
+ The proofs are deferred to Appendix B due to space limitations. From Propositions 1 and 2, we can see that $\mathcal{O}$ -GNN is more expressive than a regular GNN that does not model rings. The regular GNN requires more than $k$ layers to distinguish two isomorphic $k$ -neighbourhood sub-graphs on different rings, while $\mathcal{O}$ -GNN only requires one layer for this purpose (see the example in Figure 2). Comparing $\mathcal{O}$ -GNN to a regular GNN with the same number of layers, modeling ring representations increases the parameter count by a constant fraction (independent of $k$ ). However, a regular GNN may require more than $k$ layers to achieve the discriminatory power for $k$ -neighbourhood sub-graphs. When $k$ is large, $\mathcal{O}$ -GNN will be much more parameter efficient. More discussions are in Appendix C.5.
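The intuition behind Propositions 1 and 2 can be illustrated with a toy color-refinement experiment: on two unlabeled carbon rings of different sizes, anonymous message passing never separates the corresponding nodes, while a single ring-derived feature (here, ring size) separates them immediately. This is an illustration in the spirit of the propositions, not a reproduction of their proofs:

```python
def cycle_neighbors(n):
    # Neighbour lists of an n-cycle (all atoms identical, all bonds identical).
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def refine(colors, neighbors, rounds):
    # WL-style color refinement: each round hashes (own color, sorted neighbour colors),
    # mimicking a message-passing layer without ring representations.
    for _ in range(rounds):
        colors = {i: hash((colors[i], tuple(sorted(colors[j] for j in neighbors[i]))))
                  for i in neighbors}
    return colors

six, eight = cycle_neighbors(6), cycle_neighbors(8)

# Without ring information: all nodes start with the same color, and by symmetry
# they keep identical colors in both rings, no matter how many rounds are run.
c6 = refine({i: 0 for i in six}, six, rounds=10)
c8 = refine({i: 0 for i in eight}, eight, rounds=10)
plain_equal = c6[0] == c8[0]

# With a ring feature (the size of the ring each node lies on), a single round
# already separates a 6-ring node from an 8-ring node, as in Proposition 2.
r6 = refine({i: 6 for i in six}, six, rounds=1)
r8 = refine({i: 8 for i in eight}, eight, rounds=1)
ring_equal = r6[0] == r8[0]
```

`plain_equal` stays `True` while `ring_equal` is `False`: injecting ring information breaks the symmetry that anonymous message passing cannot.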
143
+
144
+ # 3 EXPERIMENTS
145
+
146
+ To validate the effectiveness of our method, we test $\mathcal{O}$ -GNN on the following three tasks: molecular property prediction, drug-drug interaction prediction and retrosynthesis. The first two tasks are graph classification tasks, and the third one is a node/link prediction task.
147
+
148
+ # 3.1 APPLICATION TO MOLECULAR PROPERTY PREDICTION
149
+
150
+ Datasets. We work on three datasets for this application:
151
+
152
+ (1) The HOMO-LUMO energy gap prediction of the PCQM4Mv1 dataset (Hu et al., 2021). The input is a 2D molecular graph, and the target is its HOMO-LUMO energy gap, which is an essential molecular property in quantum chemistry. PCQM4Mv1 has 3,045,360 training and 380,670 validation molecules (test labels are not available). The properties are obtained via density functional theory.
153
+ (2) Molecular property prediction on the MoleculeNet dataset (Wu et al., 2018). This benchmark concerns the prediction of pharmaceutical properties of small molecules. We choose six molecular property prediction tasks (BBBP, Tox21, ClinTox, HIV, BACE and SIDER), whose sizes range from 1.5k to 41k molecules.
154
+ (3) Few-shot molecular property prediction of the FS-Mol dataset (Stanley et al., 2021). FS-Mol is made up of 5120 separate assays extracted from ChEMBL27 (https://www.ebi.ac.uk/chembl/). Each assay has 94 molecular-property pairs on average.
155
+
156
+ Training configuration. For PCQM4Mv1, we set the number of layers to 12 and the hidden dimension to 256, selected by cross-validation on the training set. For FS-Mol, the number of layers is 6 and the hidden dimension is 256. The candidate numbers of layers and hidden dimensions for MoleculeNet are \{4,6,8,12\} and \{128,256\}. On FS-Mol and MoleculeNet, the hyper-parameters are selected according to validation performance. We train all these tasks on one GPU. The optimizer is AdamW (Loshchilov & Hutter, 2019). More detailed parameters are summarized in Table 5 of Appendix A.
157
+
158
+ Results on PCQM4Mv1 The results of PCQM4Mv1 are reported in Table 1. We compare $\mathcal{O}$ -GNN with the following baselines: (1) Conventional GCN/GIN with/without virtual node (marked by "vn"). The results are from Hu et al. (2021); (2) ConfDSS (Liu et al., 2021), which predicts quantum properties conditioned on low-cost conformer sets; (3) Two-branch Transformer (Xia et al., 2021), which has a regression head and a classification head that learn from each other; (4) Graphormer (Ying et al., 2021; Shi et al., 2022), the champion solution of PCQM4Mv1. Since
159
+
160
+ <table><tr><td>Method</td><td>MAE (↓)</td></tr><tr><td>GCN</td><td>0.1684</td></tr><tr><td>GCN + vn</td><td>0.1510</td></tr><tr><td>GIN</td><td>0.1536</td></tr><tr><td>GIN + vn</td><td>0.1396</td></tr><tr><td>ConfDSS</td><td>0.1278</td></tr><tr><td>Two-branch Transformer</td><td>0.1237</td></tr><tr><td>Graphormer (base)</td><td>0.1193</td></tr><tr><td>Graphormer (large)</td><td>0.1231</td></tr><tr><td>O-GNN (ours)</td><td>0.1148</td></tr></table>
161
+
162
+ Table 1: Validation MAE on PCQM4Mv1.
163
+
164
+ ![](images/faddf259e6f1fd445acb109ce566d7163b8c8c3b8b7c4cd02cb6f213e797cb41.jpg)
165
+ Figure 4: MAE w.r.t. numbers of layers.
166
+
167
+ ![](images/5ce915917e6a0ae7d24d8b092bdab312dcf57fc81b5d01b3a4d00d3fb6a57023.jpg)
168
+ (a) Number of rings.
169
+
170
+ ![](images/51b0ac8b433152b13f4861ba88049479624f452568ba61c9cbfa848f82d03496.jpg)
171
+ (b) Number of atoms lying on rings.
172
+
173
+ ![](images/867173c6db62a5425936c932642c7957f4fcbf1def50f4e776d6bec189058546.jpg)
174
+ (c) Maximum ring size.
175
+ Figure 5: Performance improvement over several ring properties.
176
+
177
+ the owners of PCQM4Mv1 did not release labels of the test set, we can only compare the results on the validation set. The evaluation metric is the mean absolute error (MAE). From Table 1, we can see that $\mathcal{O}$ -GNN achieves the best result among the strong baseline models, which shows the effectiveness of our method. In addition, GIN, ConfDSS and Graphormer do not explicitly use the ring information, and we will combine $\mathcal{O}$ -GNN with these strong methods in the future.
178
+
179
+ To investigate the significance of the ring information, we study a variant of $\mathcal{O}$ -GNN that removes the ring-modeling component, denoted as “ $\mathcal{O}$ -GNN w/o ring”. Specifically, it is implemented by removing Eqn.(4) and all the $h_r$ 's in Eqns.(2), (3) and (5). We conduct experiments for $\mathcal{O}$ -GNN and “ $\mathcal{O}$ -GNN w/o ring” from 2 to 14 layers. The results are in Figure 4. We can see that by utilizing ring information, the performance is boosted regardless of the number of layers. In addition, we find that a 6-layer $\mathcal{O}$ -GNN is comparable with the 12-layer “ $\mathcal{O}$ -GNN w/o ring”, which shows the great power of modeling rings in GNNs. $\mathcal{O}$ -GNN also outperforms “ $\mathcal{O}$ -GNN w/o ring” under matched numbers of parameters (see Figure 10). It is worth pointing out that the validation MAE of the 14-layer $\mathcal{O}$ -GNN is slightly worse than that of the 12-layer $\mathcal{O}$ -GNN. This phenomenon is also observed in Graphormer (Shi et al., 2022): larger models do not always lead to better validation results. We will explore how to train deeper models in the future.
180
+
181
+ On PCQM4Mv1, we also study the average performance improvement w.r.t. several ring properties. The performance improvement is defined as $\epsilon_{1} - \epsilon_{2}$ , where $\epsilon_{1}$ and $\epsilon_{2}$ denote the validation MAE of “ $\mathcal{O}$ -GNN w/o ring” and $\mathcal{O}$ -GNN, respectively. The ring properties include: (i) the number of rings in a molecule; (ii) the number of atoms lying on rings; (iii) the number of atoms in the largest ring. We conduct experiments for networks with different numbers of layers ( $L = 2, 6, 12$ ). Results are reported in Figure 5. Overall, as the number of rings, the maximum ring size and the number of atoms lying on rings increase, $\mathcal{O}$ -GNN achieves larger improvements over the variant without ring modeling. More analyses are in Appendix C.4.
182
+
183
+ Results on MoleculeNet For MoleculeNet, we compare with both pretraining and non-pretraining methods. For non-pretraining methods, we compare with the following baselines: (i) GCN (Kipf & Welling, 2017) with virtual node; (ii) GIN (Xu et al., 2018) with virtual node; (iii) $\mathcal{O}$ -GNN without using ring information (denoted as “ $\mathcal{O}$ -GNN w/o ring”). For pre-training methods, we select
184
+
185
+ <table><tr><td>Dataset<br># Molecules</td><td>BBBP<br>2039</td><td>Tox21<br>7831</td><td>ClinTox<br>1478</td><td>HIV<br>41127</td><td>BACE<br>1513</td><td>SIDER<br>1478</td></tr><tr><td>(Hu et al., 2020)</td><td>71.2 ± 0.9</td><td>74.2 ± 0.8</td><td>73.7 ± 4.0</td><td>75.8 ± 1.1</td><td>78.6 ± 1.4</td><td>60.4 ± 0.6</td></tr><tr><td>G-Contextual (Liu et al., 2022)</td><td>70.3 ± 1.6</td><td>75.2 ± 0.3</td><td>59.9 ± 8.2</td><td>75.9 ± 0.9</td><td>79.2 ± 0.3</td><td>58.4 ± 0.6</td></tr><tr><td>G-Motif (Liu et al., 2022)</td><td>66.4 ± 3.4</td><td>73.2 ± 0.8</td><td>77.8 ± 2.0</td><td>73.8 ± 1.4</td><td>73.4 ± 4.0</td><td>60.6 ± 1.1</td></tr><tr><td>GraphMVP (Liu et al., 2022)</td><td>72.4 ± 1.6</td><td>75.9 ± 0.5</td><td>79.1 ± 2.8</td><td>77.0 ± 1.2</td><td>81.2 ± 0.9</td><td>63.9 ± 1.2</td></tr><tr><td>GCN + vn</td><td>72.7 ± 1.3</td><td>75.0 ± 0.4</td><td>92.0 ± 1.1</td><td>78.8 ± 1.1</td><td>80.0 ± 0.8</td><td>62.9 ± 1.3</td></tr><tr><td>GIN + vn</td><td>71.7 ± 0.6</td><td>74.8 ± 0.6</td><td>89.4 ± 3.2</td><td>79.3 ± 1.0</td><td>82.0 ± 1.0</td><td>60.8 ± 0.8</td></tr><tr><td>O-GNN w/o ring</td><td>74.5 ± 1.4</td><td>75.2 ± 0.9</td><td>90.2 ± 2.1</td><td>80.5 ± 1.0</td><td>84.2 ± 1.5</td><td>65.5 ± 1.6</td></tr><tr><td>O-GNN (ours)</td><td>76.4 ± 0.4</td><td>75.7 ± 0.7</td><td>94.3 ± 1.6</td><td>81.3 ± 1.2</td><td>85.8 ± 1.0</td><td>66.2 ± 1.2</td></tr></table>
193
+
194
+ Table 2: Test ROC-AUC (%) performance of different methods on 6 binary classification tasks from the MoleculeNet benchmark. The training, validation and test sets are provided by DeepChem. Each experiment is independently run three times. The mean and standard deviation are reported.
195
+
196
+ several representative graph-based methods: (i) Hu et al. (2020) propose to predict the masked attributes on graphs as well as maintaining the consistency between a subgraph and its neighbors; (ii) G-{Contextual, Motif} are variants of Rong et al. (2020), as provided in Liu et al. (2022); (iii) GraphMVP (Liu et al., 2022), which jointly pre-trains on 2D molecules and their 3D conformations. The results of Hu et al. (2020), G-{Contextual, Motif} and GraphMVP are all extracted from Liu et al. (2022), since Liu et al. (2022) use the same scaffold-based splitting as we do.
197
+
198
+ The results are reported in Table 2. We can see that: (i) $\mathcal{O}$ -GNN outperforms conventional network architectures like GIN and GCN with virtual nodes, which demonstrates the effectiveness of our new architecture; (ii) $\mathcal{O}$ -GNN also outperforms G-{Contextual, Motif}, GraphMVP (Liu et al., 2022) and Hu et al. (2020), which are all pre-training methods (more discussion about pre-training methods is left in Table 11 of Appendix C.5). This shows the great potential of $\mathcal{O}$ -GNN, and we will combine it with pre-training in the future. (iii) Comparing $\mathcal{O}$ -GNN and “ $\mathcal{O}$ -GNN w/o ring”, the average improvement over the six tasks is 1.6 points. This shows the advantage of using ring information in molecular property prediction.
199
+
200
+ Results on FS-Mol Stanley et al. (2021) verify that prototypical networks (PN) perform the best on FS-Mol compared with other methods like MAML (Finn et al., 2017), multi-task learning (MT) and random forest (RF). Stanley et al. (2021) use a Transformer-like residual network for few-shot classification. We replace that backbone with our $\mathcal{O}$ -GNN and “ $\mathcal{O}$ -GNN w/o ring”, and the other parts remain unchanged. Following Stanley et al. (2021), the results with different support set sizes (denoted as $|\mathcal{T}_{u, \text{support}}|$ ) are reported. A support set consists of a few examples with input-label pairs used to train models. The evaluation metric is $\Delta$ AUPRC, which is the difference between the AUPRC (area under the precision-recall curve) and the ratio of active compounds in the query set. A higher $\Delta$ AUPRC score indicates better classification performance of the model.
201
+
202
+ ![](images/12b3aa803e51af8a05ab6914c9347000e7a475ef74bfc4f1204a88ff5153e951.jpg)
203
+ Figure 6: Results on FS-Mol.
204
+
205
+ The results are in Figure 6. We report the mean and the standard deviation over tasks for each support set size. We have the following observations: (i) By using $\mathcal{O}$ -GNN as the backbone model for the prototypical network, the results are boosted for all support set sizes. (ii) The improvement is more significant when the support set size is large. When $|\mathcal{T}_{u,\text{support}}| = 128$ and 256, the improvements are 0.014 and 0.016, respectively. When reducing the size to 16/32/64, the improvements are all around 0.008. We will further improve the results on limited data sizes in the future.
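The $\Delta$AUPRC metric described above can be computed from a ranked list of predictions. The sketch below estimates AUPRC as precision averaged at the rank of each positive (one standard estimator of the area under the precision-recall curve) and subtracts the active ratio; it is illustrative, not the FS-Mol evaluation code:

```python
def average_precision(labels, scores):
    # Precision evaluated at the rank of each positive example, averaged (AUPRC estimate).
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp, ap = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if labels[i]:
            tp += 1
            ap += tp / rank
    return ap / sum(labels)

def delta_auprc(labels, scores):
    # Delta-AUPRC: AUPRC minus the ratio of active compounds in the query set.
    return average_precision(labels, scores) - sum(labels) / len(labels)

# A perfect ranking on a balanced query set gives AP = 1.0 and Delta-AUPRC = 0.5.
d = delta_auprc([1, 1, 0, 0], [0.9, 0.8, 0.2, 0.1])
```

The subtraction makes scores comparable across assays with different active-compound ratios, which is why FS-Mol reports $\Delta$AUPRC rather than raw AUPRC.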
206
+
207
+ # 3.2 APPLICATION TO DDI PREDICTION
208
+
209
+ Drug-drug interaction (DDI) prediction is to predict the therapeutic outcome of taking two drugs together, e.g., an increased risk of some side effects, or an enhanced effect when the two drugs are combined. We focus on the classification task, where the inputs are two drug molecules and one interaction (e.g., inhibition), and the output is 0 or 1 indicating whether the two drugs have this specific interaction. Following Nyamabo et al. (2022) and Li et al. (2022), we work on the inductive setting of the DrugBank dataset (Wishart et al., 2018), which has 1,706 drugs, 86 interaction types, and 191,808 triplets. To test the generalization ability of the model, we conduct experiments on two settings w.r.t. the drugs: the S1 setting, where neither of the two drugs in the test set appears in the training set; and the S2 setting, where one drug is seen in the training set and the other is not. Note that the drug pairs in the test set do not appear in the training set. Hence, the DrugBank data is split into training and test sets by the visibility of the drugs, and the negative samples are generated offline. We directly use the data provided by Nyamabo et al. (2021; 2022), where $20\%$ of the drugs are first held out as unseen drugs to form the test set and the remaining $80\%$ of the drugs are used to create the training set.
212
+
213
+ <table><tr><td rowspan="2">Method</td><td colspan="4">S2 setting (1 known drug + 1 unknown drug)</td><td colspan="4">S1 setting (2 unknown drugs)</td></tr><tr><td>ACC</td><td>AUROC</td><td>AP</td><td>F1</td><td>ACC</td><td>AUROC</td><td>AP</td><td>F1</td></tr><tr><td>GAT-DDI (Nyamabo et al., 2021)</td><td>69.83</td><td>77.29</td><td>75.79</td><td>73.01</td><td>62.63</td><td>70.92</td><td>73.01</td><td>45.81</td></tr><tr><td>MHCADDI (Deac et al., 2019)</td><td>70.58</td><td>77.84</td><td>76.16</td><td>72.74</td><td>65.40</td><td>73.43</td><td>75.03</td><td>54.12</td></tr><tr><td>MR-GNN (Xu et al., 2019)</td><td>74.67</td><td>83.15</td><td>83.81</td><td>69.88</td><td>66.50</td><td>72.53</td><td>71.06</td><td>67.21</td></tr><tr><td>SSI-DDI (Nyamabo et al., 2021)</td><td>76.38</td><td>84.23</td><td>84.94</td><td>73.54</td><td>66.31</td><td>72.75</td><td>71.61</td><td>68.68</td></tr><tr><td>GMPNN (Nyamabo et al., 2022)</td><td>77.72</td><td>84.84</td><td>84.87</td><td>78.29</td><td>68.57</td><td>74.96</td><td>75.44</td><td>65.32</td></tr><tr><td>MSAN-GCN (Zhu et al., 2022)</td><td>77.81</td><td>85.74</td><td>-</td><td>76.48</td><td>69.17</td><td>76.12</td><td>-</td><td>67.10</td></tr><tr><td>MSN-DDI (Li et al., 2022)</td><td>81.92</td><td>91.01</td><td>91.09</td><td>80.18</td><td>73.42</td><td>81.79</td><td>81.82</td><td>70.34</td></tr><tr><td>O-GNN w/o ring</td><td>87.72</td><td>94.51</td><td>95.28</td><td>85.91</td><td>75.47</td><td>83.83</td><td>85.58</td><td>65.59</td></tr><tr><td>O-GNN (ours)</td><td>88.47</td><td>95.87</td><td>96.51</td><td>86.91</td><td>76.81</td><td>87.64</td><td>88.70</td><td>70.81</td></tr></table>
214
+
215
+ Table 3: Results of drug-drug interaction prediction on DrugBank.
216
+
217
+ To predict the interaction between two drugs, we use one 6-layer $\mathcal{O}$ -GNN to extract features for the two drugs. Specifically, for each drug, we average the node representations output by the last layer as the drug feature. We concatenate the two drug features, and then multiply the result with the interaction embedding to make the prediction. The detailed parameters are left in Table 6 of Appendix A.
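The prediction head described above can be sketched as follows. The dot product between the concatenated drug features and a learnable per-interaction embedding is one plausible reading of "multiply the interaction embedding", so treat the exact scoring function as an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
D, n_interactions = 8, 86  # toy hidden dim; 86 interaction types as in DrugBank

# One learnable embedding per interaction type, matching the concatenated feature size.
interaction_emb = rng.normal(0, 0.1, size=(n_interactions, 2 * D))

def ddi_score(node_reps_a, node_reps_b, interaction_id):
    # Drug feature: average of last-layer node representations (as in the text);
    # the two drug features are concatenated and scored against the interaction embedding.
    fa, fb = node_reps_a.mean(axis=0), node_reps_b.mean(axis=0)
    logit = np.concatenate([fa, fb]) @ interaction_emb[interaction_id]
    return 1.0 / (1.0 + np.exp(-logit))  # probability that the interaction holds

prob = ddi_score(rng.normal(size=(5, D)), rng.normal(size=(7, D)), interaction_id=3)
```

Note that the head is deliberately simple: the comparison in Table 3 attributes the gains to the backbone rather than to an elaborate interaction module.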
218
+
219
+ The results are reported in Table 3. $\mathcal{O}$ -GNN significantly outperforms previous baselines in terms of accuracy (denoted as ACC), the area under the receiver operating characteristic curve (AUROC), the average precision (AP), and the F1 score. Most previous works use GCN, GIN or GAT backbones and focus on designing comprehensive interaction modules (Nyamabo et al., 2021; Li et al., 2022). By using the advanced $\mathcal{O}$ -GNN backbone, we can significantly improve the results without designing complex interaction modules. This shows the effectiveness of our method.
220
+
221
# 3.3 APPLICATION TO RETROSYNTHESIS

Retrosynthesis aims to predict the reactants of a given product. Various GNNs have been applied to this task. For example, GLN (Dai et al., 2019) uses GNNs to predict the distributions of candidate reaction templates and reactants. GraphRetro (Somnath et al., 2021) and G2G (Shi et al., 2020) use GNNs to predict where to break a bond and how to add fragments to complete the synthons. To demonstrate the ability of our $\mathcal{O}$-GNN, we combine our method with LocalRetro (Chen & Jung, 2021), the current best graph-based model for retrosynthesis (without pre-training). LocalRetro uses a GNN to predict the possible templates for each atom and each bond, and sorts the predicted templates by their probabilities. The top templates are applied to the corresponding atoms or bonds via RDKit (Landrum et al., 2016) to generate the reactants. Chen & Jung (2021) use MPNN (Gilmer et al., 2017a) for prediction, and we replace the MPNN with $\mathcal{O}$-GNN. We conduct experiments on the USPTO-50k dataset (Coley et al., 2017), which contains 50,016 reactions. Following Chen & Jung (2021), we partition the dataset into a $45k$ training set, $5k$ validation set, and $5k$ test set. The evaluation metric is the top-$k$ accuracy with $k = 1, 3, 5, 10, 50$. The results are summarized in Table 4. We observe that $\mathcal{O}$-GNN predicts reactions more accurately than the baselines without ring information. In particular, when the reaction type is known, we improve the top-1 accuracy by 1.8 points and the top-3 accuracy by 1.6 points. These results show the importance of modeling ring structures and the effectiveness of our method.

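The top-$k$ metric can be computed as below; `topk_accuracy` and the toy data are illustrative names, not part of LocalRetro.

```python
def topk_accuracy(ranked_preds, targets, k):
    """Fraction of test reactions whose ground-truth reactant set appears
    among the k highest-probability predictions."""
    hits = sum(gold in preds[:k] for preds, gold in zip(ranked_preds, targets))
    return hits / len(targets)

# Toy example: three reactions, each with three ranked candidate reactant sets.
ranked = [["A", "B", "C"], ["X", "Y", "Z"], ["P", "Q", "R"]]
gold = ["B", "Z", "M"]
print(topk_accuracy(ranked, gold, 1))  # → 0.0 (no hit at rank 1)
print(topk_accuracy(ranked, gold, 3))  # → 2/3 (two hits within top-3)
```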
The performance for different numbers of rings. To study the prediction performance on molecules with different numbers of rings, we group the USPTO-50k test set by the number of rings in the product molecules and compute the top-1 accuracy for each group. More specifically, we divide

<table><tr><td rowspan="2">Method</td><td colspan="5">Reaction type unknown</td><td colspan="5">Reaction type known</td></tr><tr><td>Top-1</td><td>Top-3</td><td>Top-5</td><td>Top-10</td><td>Top-50</td><td>Top-1</td><td>Top-3</td><td>Top-5</td><td>Top-10</td><td>Top-50</td></tr><tr><td>G2G</td><td>48.9</td><td>67.6</td><td>72.5</td><td>75.5</td><td>-</td><td>61.0</td><td>81.3</td><td>86.0</td><td>88.7</td><td>-</td></tr><tr><td>GLN</td><td>52.5</td><td>69.0</td><td>75.6</td><td>83.7</td><td>92.4</td><td>64.2</td><td>79.1</td><td>85.2</td><td>90.0</td><td>93.2</td></tr><tr><td>GraphRetro</td><td>53.7</td><td>68.3</td><td>72.2</td><td>75.5</td><td>-</td><td>63.9</td><td>81.5</td><td>85.2</td><td>88.1</td><td>-</td></tr><tr><td>LocalRetro</td><td>53.4</td><td>77.5</td><td>85.9</td><td>92.4</td><td>97.7</td><td>63.9</td><td>86.1</td><td>92.4</td><td>96.3</td><td>97.9</td></tr><tr><td>O-GNN (ours)</td><td>54.1</td><td>77.7</td><td>86.0</td><td>92.5</td><td>98.2</td><td>65.7</td><td>87.7</td><td>93.4</td><td>96.9</td><td>98.3</td></tr></table>

Table 4: Results on the USPTO-50k dataset with reaction type known/unknown.

![](images/74f4f42fb01299a364f435d713da9508f5e1d5c6fdd9b591ae5ba2746def4e28.jpg)
(a) Accuracy w.r.t. the number of rings.

![](images/a234f660affaba7ba788d303a49e4e55a0008bddf8bddc84b54944f271e09e6f.jpg)

![](images/16dcd080615170907dd72eea89ff2a5ad0e65b9401bae96d78da984655088289.jpg)

![](images/a1766b9dfb1ee3a482853bf73463297f4aa0a5db424f893cfcfb98f82ecb74fc.jpg)

Figure 7: Study of $\mathcal{O}$-GNN on the retrosynthesis task. (a) The top-1 accuracy w.r.t. the number of rings in product molecules. (b) The one-step retrosynthesis prediction of a product molecule with five rings. The first $\mathcal{O}$-GNN output is the same as the ground truth (marked in green).

![](images/908e505eb917a020fd84d544c511f48fcd8e222438e8b235004b569cff3bbca3.jpg)

![](images/38249db06c529a6015ebd7ea3883d7bf552ebecefc39d32c3b607ed04a0ceff5.jpg)

![](images/3a05a346202db8673a3920ae10cd7558465ca1f1bceae4532527fd62f03c9c05.jpg)
(b) Case study on a molecule with complex rings.

the test set into four groups with ring numbers [0, 2), [2, 4), [4, 6), [6, 12]; these groups contain 808, 2347, 1617, and 235 reactions, respectively. The results are shown in Figure 7(a), where the blue bars represent the LocalRetro baseline and the green bars represent $\mathcal{O}$-GNN. $\mathcal{O}$-GNN achieves better accuracy on all groups, with improvements of 0.99, 0.85, 1.30, and 5.96 points. Overall, the improvement is larger when a molecule contains more rings. In particular, when there are at least 6 rings (i.e., the last column), $\mathcal{O}$-GNN increases the accuracy by 5.96 points, demonstrating that our method can better leverage ring structures.

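The grouping step can be sketched as follows, using the bin edges from the text. In practice the ring counts would come from RDKit (e.g., `mol.GetRingInfo().NumRings()`); the names below are illustrative.

```python
from bisect import bisect_right

# Bin edges for the four groups [0, 2), [2, 4), [4, 6), [6, 12].
EDGES = [2, 4, 6]
LABELS = ["[0,2)", "[2,4)", "[4,6)", "[6,12]"]

def group_by_ring_count(ring_counts):
    """Map each molecule's ring count to its group label and tally the groups."""
    groups = {label: 0 for label in LABELS}
    for n in ring_counts:
        groups[LABELS[bisect_right(EDGES, n)]] += 1
    return groups

print(group_by_ring_count([0, 1, 2, 3, 5, 6, 7, 11]))
# → {'[0,2)': 2, '[2,4)': 2, '[4,6)': 1, '[6,12]': 3}
```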
Case study. In Figure 7(b), we show an example prediction for a product molecule with 5 rings. The reactions in the left panel are the top-3 predictions from the LocalRetro baseline, and those on the right are from $\mathcal{O}$-GNN. Our method successfully predicts the correct reactants in its first output (marked in green), while the baseline fails to give a correct prediction; it even fails to identify the correct bond to change. These results suggest that modeling ring structures is crucial for accurate reaction prediction, and that $\mathcal{O}$-GNN is an effective algorithm for retrosynthesis.

# 4 CONCLUSIONS AND FUTURE WORK

In this work, we propose a new model, the ring-enhanced GNN ($\mathcal{O}$-GNN for short), for molecular modeling. We explicitly incorporate ring representations into the GNN and jointly update them with atom and bond representations. We provide a theoretical analysis of $\mathcal{O}$-GNN and prove that its node representations are more distinguishable than those of the variant without ring representations. We conduct experiments on molecular property prediction, drug-drug interaction (DDI) prediction, and retrosynthesis. $\mathcal{O}$-GNN outperforms strong baselines on these tasks and achieves state-of-the-art results on the validation set of PCQM4Mv1 and on DDI prediction. For future work, first, we will combine $\mathcal{O}$-GNN with pre-training to obtain a stronger model. Second, we will further improve our model when the training data is very limited (e.g., when the support set size is 16 or fewer). Third, how to efficiently identify and incorporate representations of more complex substructures is another interesting direction to explore. Fourth, we will apply our model to more real-world scenarios, such as the synthesis and generation of natural products with large rings.

# ACKNOWLEDGMENTS

This work was supported in part by NSFC under Contract 61836011, and in part by the Fundamental Research Funds for the Central Universities under contract WK3490000007.

# REFERENCES

Ravichandra Addanki, Peter Battaglia, David Budden, Andreea Deac, Jonathan Godwin, Thomas Keck, Wai Lok Sibon Li, Alvaro Sanchez-Gonzalez, Jacklynn Stott, Shantanu Thakoor, and Petar Velicković. Large-scale graph representation learning with very deep gnns and self-supervision. arXiv preprint arXiv:2107.09422, 2021.
Gaurav Bhardwaj, Jacob O'Connor, Stephen Rettie, Yen-Hua Huang, Theresa A. Ramelot, Vikram Khipple Mulligan, Gizem Gokce Alpkilic, Jonathan Palmer, Asim K. Bera, Matthew J. Bick, Maddalena Di Piazza, Xinting Li, Parisa Hosseinzadeh, Timothy W. Craven, Roberto Tejero, Anna Lauko, Ryan Choi, Calina Glynn, Linlin Dong, Robert Griffin, Wesley C. van Voorhis, Jose Rodriguez, Lance Stewart, Gaetano T. Montelione, David Craik, and David Baker. Accurate de novo design of membrane-traversing macrocycles. Cell, 185(19):3520-3532.e26, 2022. ISSN 0092-8674. doi: https://doi.org/10.1016/j.cell.2022.07.019. URL https://www.sciencedirect.com/science/article/pii/S0092867422009229.
Shaked Brody, Uri Alon, and Eran Yahav. How attentive are graph attention networks? arXiv preprint arXiv:2105.14491, 2021.
Shuan Chen and Yousung Jung. Deep retrosynthetic reaction prediction using local reactivity and global attention. JACS Au, 1(10):1612-1620, 2021. doi: 10.1021/jacsau.1c00246.
Connor W. Coley, Luke Rogers, William H. Green, and Klavs F. Jensen. Computer-Assisted Retrosynthesis Based on Molecular Similarity. ACS Central Science, 3(12):1237-1245, December 2017. ISSN 2374-7943, 2374-7951. doi: 10.1021/acscentsci.7b00355. URL https://pubs.acs.org/doi/10.1021/acscentsci.7b00355.
Weilin Cong, Morteza Ramezani, and Mehrdad Mahdavi. On provable benefits of depth in training graph convolutional networks. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan (eds.), Advances in Neural Information Processing Systems, 2021. URL https://openreview.net/forum?id=r-oRRT-E1X.
Hanjun Dai, Chengtao Li, Connor Coley, Bo Dai, and Le Song. Retrosynthesis prediction with conditional graph logic network. In Advances in Neural Information Processing Systems, pp. 8870-8880, 2019.
Andreea Deac, Yu-Hsiang Huang, Petar Velicković, Pietro Lio, and Jian Tang. Drug-drug adverse effect prediction with graph co-attention. arXiv preprint arXiv:1905.00534, 2019.
Jörg Degen, Christof Wegscheid-Gerlach, Andrea Zaliani, and Matthias Rarey. On the art of compiling and using 'drug-like' chemical fragment spaces. ChemMedChem: Chemistry Enabling Drug Discovery, 3(10):1503-1507, 2008.
Xiaomin Fang, Lihang Liu, Jieqiong Lei, Donglong He, Shanzhuo Zhang, Jingbo Zhou, Fan Wang, Hua Wu, and Haifeng Wang. Geometry-enhanced molecular representation learning for property prediction. Nature Machine Intelligence, 4(2):127-134, 2022.
Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Doina Precup and Yee Whye Teh (eds.), Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pp. 1126-1135. PMLR, 06-11 Aug 2017.
Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry, 2017a. URL https://arxiv.org/abs/1704.01212.
Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In International conference on machine learning, pp. 1263-1272. PMLR, 2017b.
Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017a. URL https://proceedings.neurips.cc/paper/2017/file/5dd9db5e033da9c6fb5ba83c7a7ebea9-Paper.pdf.
Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. Advances in neural information processing systems, 30, 2017b.
Weihua Hu, Bowen Liu, Joseph Gomes, Marinka Zitnik, Percy Liang, Vijay Pande, and Jure Leskovec. Strategies for pre-training graph neural networks. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=HJ1WWJSFDH.
Weihua Hu, Matthias Fey, Hongyu Ren, Maho Nakata, Yuxiao Dong, and Jure Leskovec. Ogb-lsc: A large-scale challenge for machine learning on graphs. arXiv preprint arXiv:2103.09430, 2021.
Nirmala Chandralega Kampan, Mutsa Tatenda Madondo, Orla M McNally, Michael Quinn, and Magdalena Plebanski. Paclitaxel and its evolving role in the management of ovarian cancer. Biomed Res. Int., 2015:413076, June 2015.
Sunghwan Kim, Jie Chen, Tiejun Cheng, Asta Gindulyte, Jia He, Siqian He, Qingliang Li, Benjamin A Shoemaker, Paul A Thiessen, Bo Yu, Leonid Zaslavsky, Jian Zhang, and Evan E Bolton. PubChem 2019 update: improved access to chemical data. Nucleic Acids Res., 47(D1):D1102-D1109, January 2019.
Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. ICLR, 2017.
Greg Landrum et al. RDKit: Open-source cheminformatics software, 2016. URL http://www.rdkit.org/, https://github.com/rdkit/rdkit.
Guohao Li, Chenxin Xiong, Ali Thabet, and Bernard Ghanem. Deepergcn: All you need to train deeper gcns, 2020.
Junying Li, Deng Cai, and Xiaofei He. Learning graph-level representation for drug discovery. arXiv preprint arXiv:1709.03741, 2017.
Zimeng Li, Shichao Zhu, Bin Shao, Tie-Yan Liu, Xiangxiang Zeng, and Tong Wang. Multi-view substructure learning for drug-drug interaction prediction, 2022. URL https://arxiv.org/abs/2203.14513.
Meng Liu, Cong Fu, Xuan Zhang, Limei Wang, Yaochen Xie, Hao Yuan, Youzhi Luo, Zhao Xu, Shenglong Xu, and Shuiwang Ji. Fast quantum property prediction via deeper 2d and 3d graph networks, 2021. URL https://ogb.stanford.edu/paper/kddcup2021/pcqm4m_DIVE.pdf.
Shengchao Liu, Hanchen Wang, Weiyang Liu, Joan Lasenby, Hongyu Guo, and Jian Tang. Pre-training molecular graph representation with 3d geometry. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=xQUelpOKPam.
Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=Bkg6RiCqY7.
Andreas Loukas. What graph neural networks cannot learn: depth vs width. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=B112bp4YwS.
Arnold K Nyamabo, Hui Yu, and Jian-Yu Shi. Ssi-ddi: substructure-substructure interactions for drug-drug interaction prediction. Briefings in Bioinformatics, 2021.
Arnold K Nyamabo, Hui Yu, Zun Liu, and Jian-Yu Shi. Drug-drug interaction prediction with learnable size-adaptive molecular substructures. Briefings in Bioinformatics, 2022.
Trang Pham, Truyen Tran, Hoa Dam, and Svetha Venkatesh. Graph classification via deep learning with virtual nodes. arXiv preprint arXiv:1708.04357, 2017.
Yu Rong, Yatao Bian, Tingyang Xu, Weiyang Xie, Ying Wei, Wenbing Huang, and Junzhou Huang. Self-supervised graph transformer on large-scale molecular data. Advances in Neural Information Processing Systems, 33:12559-12571, 2020.
Sisi Shan, Shitong Luo, Ziqing Yang, Junxian Hong, Yufeng Su, Fan Ding, Lili Fu, Chenyu Li, Peng Chen, Jianzhu Ma, Xuanling Shi, Qi Zhang, Bonnie Berger, Linqi Zhang, and Jian Peng. Deep learning guided optimization of human antibody against sars-cov-2 variants with broad neutralization. Proceedings of the National Academy of Sciences, 119(11), 2022. doi: 10.1073/pnas.2122954119.
Chence Shi, Minkai Xu, Hongyu Guo, Ming Zhang, and Jian Tang. A graph to graphs framework for retrosynthesis prediction. In Hal Daume III and Aarti Singh (eds.), Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pp. 8818-8827. PMLR, 13-18 Jul 2020.
Yu Shi, Shuxin Zheng, Guolin Ke, Yifei Shen, Jiacheng You, Jiyan He, Shengjie Luo, Chang Liu, Di He, and Tie-Yan Liu. Benchmarking graphormer on large-scale molecular modeling datasets, 2022. URL https://arxiv.org/abs/2203.04810.
Vignesh Ram Somnath, Charlotte Bunne, Connor Coley, Andreas Krause, and Regina Barzilay. Learning graph models for retrosynthesis prediction. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan (eds.), Advances in Neural Information Processing Systems, volume 34, pp. 9405-9415. Curran Associates, Inc., 2021.
Megan Stanley, John F Bronskill, Krzysztof Maziarz, Hubert Misztela, Jessica Lanini, Marwin Segler, Nadine Schneider, and Marc Brockschmidt. FS-mol: A few-shot learning dataset of molecules. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021. URL https://openreview.net/forum?id=701FtuyLlAd.
Jonathan M. Stokes, Kevin Yang, Kyle Swanson, Wengong Jin, Andres Cubillos-Ruiz, Nina M. Donghia, Craig R. MacNair, Shawn French, Lindsey A. Carfrae, Zohar Bloom-Ackermann, Victoria M. Tran, Anush Chiappino-Pepe, Ahmed H. Badran, Ian W. Andrews, Emma J. Chory, George M. Church, Eric D. Brown, Tommi S. Jaakkola, Regina Barzilay, and James J. Collins. A deep learning approach to antibiotic discovery. Cell, 180(4):688-702.e13, 2020. ISSN 0092-8674. doi: https://doi.org/10.1016/j.cell.2020.01.021. URL https://www.sciencedirect.com/science/article/pii/S0092867420301021.
Hongmao Sun, Gregory Tawa, and Anders Wallqvist. Classification of scaffold-hopping approaches. Drug Discov. Today, 17(7-8):310-324, April 2012.
Ruoxi Sun, Hanjun Dai, and Adams Wei Yu. Does GNN pretraining help molecular representation? In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), Advances in Neural Information Processing Systems, 2022. URL https://openreview.net/forum?id=uytgM9N0v1R.
Wen Torng and Russ B. Altman. Graph convolutional neural networks for predicting drug-target interactions. Journal of Chemical Information and Modeling, 59(10):4131-4149, 2019. doi: 10.1021/acs.jcim.9b00628.
Petar Velicković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. arXiv preprint arXiv:1710.10903, 2017.
Anilrudh A. Venugopal and Stuart Johnson. Fidaxomicin: A novel macrocyclic antibiotic approved for treatment of clostridium difficile infection. Clinical Infectious Diseases, 54(4):568-574, 12 2011. ISSN 1058-4838. doi: 10.1093/cid/cir830. URL https://doi.org/10.1093/cid/cir830.
David S Wishart, Yannick D Feunang, An C Guo, Elvis J Lo, Ana Marcu, Jason R Grant, Tanvir Sajed, Daniel Johnson, Carin Li, Zinat Sayeeda, et al. Drugbank 5.0: a major update to the drugbank database for 2018. Nucleic acids research, 46(D1):D1074-D1082, 2018.
Zhenqin Wu, Bharath Ramsundar, Evan N. Feinberg, Joseph Gomes, Caleb Geniesse, Aneesh S. Pappu, Karl Leswing, and Vijay Pande. Moleculenet: a benchmark for molecular machine learning. Chemical Science, 9:513-530, 2018. doi: 10.1039/C7SC02664A. URL http://dx.doi.org/10.1039/C7SC02664A.
Yingce Xia, Jinhua Zhu, Lijun Wu, Yang Fan, Shufang Xie, Yutai Hou, and Tao Qin. When transformer meets graph neural networks. 2021. URL https://ogb.stanford.edu/paper/kddcup2021/pcqm4m_GNNLearner.pdf.
Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826, 2018.
Nuo Xu, Pinghui Wang, Long Chen, Jing Tao, and Junzhou Zhao. Mr-gnn: Multi-resolution and dual graph neural network for predicting structured entity interactions. arXiv preprint arXiv:1905.09558, 2019.
Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, and Tie-Yan Liu. Do transformers really perform badly for graph representation? In Advances in Neural Information Processing Systems, volume 34, pp. 28877-28888, 2021.
Zaixi Zhang, Qi Liu, Hao Wang, Chengqiang Lu, and Chee-Kong Lee. Motif-based graph self-supervised learning for molecular property prediction. Advances in Neural Information Processing Systems, 34:15870-15882, 2021.
Xinyu Zhu, Yongliang Shen, and Weiming Lu. Molecular substructure-aware network for drug-drug interaction prediction. CIKM, 2022.

# A DETAILED EXPERIMENT CONFIGURATIONS

The hyper-parameters for molecular property prediction, drug-drug interaction prediction, and retrosynthesis are summarized in Tables 5, 6, and 7, respectively.

<table><tr><td></td><td>PCQM4Mv1</td><td>FS-Mol</td><td>MoleculeNet</td></tr><tr><td>Number of Layers</td><td>12</td><td>6</td><td>{4,6,8,12}</td></tr><tr><td>Hidden dimension</td><td>256</td><td>256</td><td>{128,256}</td></tr><tr><td>Optimizer</td><td>AdamW</td><td>AdamW</td><td>AdamW</td></tr><tr><td>Dropout</td><td>0.0</td><td>{0.0,0.1,0.2}</td><td>{0.0,0.1,0.2,0.3,0.5}</td></tr><tr><td>Learning rate</td><td>0.0003</td><td>{0.0001,0.0002,0.0003}</td><td>{0.00005,0.0001,0.0002,0.0005}</td></tr><tr><td>Training steps</td><td>300 epochs</td><td>10000 iterations</td><td>{50,100} epochs</td></tr><tr><td>Batch size</td><td>512</td><td>16</td><td>{32,64}</td></tr><tr><td>Weight Decay</td><td>0.1</td><td>{0.01,0.1}</td><td>0.01</td></tr><tr><td>Learning Rate Decay</td><td>Cosine</td><td>Cosine</td><td>Linear</td></tr></table>

Table 5: Detailed hyper-parameters for molecular property prediction tasks.

<table><tr><td>Number of Layers</td><td>6</td></tr><tr><td>Hidden dimension</td><td>512</td></tr><tr><td>Optimizer</td><td>AdamW</td></tr><tr><td>Dropout</td><td>{0.2, 0.5}</td></tr><tr><td>Learning rate</td><td>0.0003</td></tr><tr><td>Training steps</td><td>100 epochs</td></tr><tr><td>Batch size</td><td>128</td></tr><tr><td>Weight Decay</td><td>0.01</td></tr><tr><td>Learning Rate Decay</td><td>Cosine</td></tr></table>

Table 6: Detailed hyper-parameters for drug-drug interaction prediction.

<table><tr><td>Number of Layers</td><td>6</td></tr><tr><td>Hidden dimension</td><td>512</td></tr><tr><td>Optimizer</td><td>AdamW</td></tr><tr><td>Dropout</td><td>0.1</td></tr><tr><td>Learning rate</td><td>0.0003</td></tr><tr><td>Training steps</td><td>200 epochs</td></tr><tr><td>Batch size</td><td>64</td></tr><tr><td>Weight Decay</td><td>0.1</td></tr><tr><td>Learning Rate Decay</td><td>Cosine</td></tr></table>

Table 7: Detailed hyper-parameters for retrosynthesis.

# B PROOFS OF THE TWO PROPOSITIONS

Proof of Proposition 1. We start the proof by explicitly writing down the ring-free variant of $\mathcal{O}$-GNN.

Specifically, the bond representations are given by

$$
h_{ij}^{(l)} = h_{ij}^{(l-1)} + \mathrm{MLP}\left(h_i^{(l-1)}, h_j^{(l-1)}, h_{ij}^{(l-1)}, U^{(l-1)}\right).
$$

The atom representations are given by

$$
\bar{h}_i^{(l)} = \sum_{j \in \mathcal{N}(i)} \alpha_j W_v \,\mathrm{concat}\left(h_{ij}^{(l)}, h_j^{(l-1)}\right);
$$

$$
\alpha_j \propto \exp\left(\boldsymbol{a}^{\top} \mathrm{LeakyReLU}\left(W_q h_i^{(l-1)} + W_k \,\mathrm{concat}\left(h_j^{(l-1)}, h_{ij}^{(l)}\right)\right)\right); \tag{6}
$$

$$
h_i^{(l)} = h_i^{(l-1)} + \mathrm{MLP}\left(h_i^{(l-1)}, \bar{h}_i^{(l)}, U^{(l-1)}\right).
$$

The compound representations are given by

$$
U^{(l)} = U^{(l-1)} + \mathrm{MLP}\left(\frac{1}{|V|}\sum_{i=1}^{|V|} h_i^{(l)}, \frac{1}{|E|}\sum_{i,j} h_{ij}^{(l)}, U^{(l-1)}\right). \tag{7}
$$

Given the above notations, Proposition 1 can then be translated to the following claim:

Claim. For any valued graph $(G, f)$ and any two nodes $v_a, v_b$ in $G$, if the valued $k$-neighbourhood sub-graphs of $v_a$ and $v_b$ (i.e., $(G(v_a, k), f)$ and $(G(v_b, k), f)$) are equivalent, then $h_a^l = h_b^l$ holds for any $l \in \{1, \dots, k\}$, regardless of the parameters.

We denote the equivalence mapping between $(G(v_{a},k),f)$ and $(G(v_{b},k),f)$ as $\mathcal{P}$. We slightly abuse notation by writing $\mathcal{P}(i) = j$ if $\mathcal{P}(v_i) = v_j$.

We prove the claim by induction. Specifically, we prove that for any $l \in \{0,1,\dots,k\}$, we have $h_{c_1}^l = h_{\mathcal{P}(c_1)}^l$ for any $v_{c_1} \in V(v_a, k - l)$, and $h_{c_1 c_2}^l = h_{\mathcal{P}(c_1)\mathcal{P}(c_2)}^l$ for any $v_{c_1}, v_{c_2} \in V(v_a, k - l)$.

Base case: for $l = 0$, by the definition of $f$, we have $f(v_{c_1}) = f(\mathcal{P}(v_{c_1}))$ for every $v_{c_1} \in V(v_a, k)$ and $f(v_{c_1}, v_{c_2}) = f(\mathcal{P}(v_{c_1}), \mathcal{P}(v_{c_2}))$ for every $v_{c_1}, v_{c_2} \in V(v_a, k)$. The claim immediately follows since $h_{c_1}^0 = f(v_{c_1})$, $h_{\mathcal{P}(c_1)}^0 = f(\mathcal{P}(v_{c_1}))$, $h_{c_1 c_2}^0 = f(v_{c_1}, v_{c_2})$, and $h_{\mathcal{P}(c_1)\mathcal{P}(c_2)}^0 = f(\mathcal{P}(v_{c_1}), \mathcal{P}(v_{c_2}))$.

Induction step: suppose the claim holds for $l = i \in \{0, \dots, k - 1\}$. Then, for $l = i + 1$, we have that for every $v_{c_1}, v_{c_2} \in V(v_a, k - l)$,

$$
\begin{aligned}
h_{c_1 c_2}^{(l)} &= h_{c_1 c_2}^{(l-1)} + \mathrm{MLP}\left(h_{c_1}^{(l-1)}, h_{c_2}^{(l-1)}, h_{c_1 c_2}^{(l-1)}, U^{(l-1)}\right) \\
&\stackrel{(\star)}{=} h_{\mathcal{P}(c_1)\mathcal{P}(c_2)}^{(l-1)} + \mathrm{MLP}\left(h_{\mathcal{P}(c_1)}^{(l-1)}, h_{\mathcal{P}(c_2)}^{(l-1)}, h_{\mathcal{P}(c_1)\mathcal{P}(c_2)}^{(l-1)}, U^{(l-1)}\right) \\
&= h_{\mathcal{P}(c_1)\mathcal{P}(c_2)}^{(l)},
\end{aligned} \tag{8}
$$

where Eq. $(\star)$ is due to the induction hypothesis, since $v_{c_1}, v_{c_2} \in V(v_a, k - l) \subset V(v_a, k - (l - 1))$.

Similarly, for every $v_{c_1} \in V(v_a, k - l)$,

$$
\begin{aligned}
\bar{h}_{c_1}^{(l)} &= \sum_{j \in \mathcal{N}(c_1)} \alpha_j W_v \,\mathrm{concat}\left(h_{c_1 j}^{(l)}, h_j^{(l-1)}\right) \\
&\stackrel{(\circ)}{=} \sum_{j \in \mathcal{N}(c_1)} \alpha_j W_v \,\mathrm{concat}\left(h_{\mathcal{P}(c_1)\mathcal{P}(j)}^{(l)}, h_{\mathcal{P}(j)}^{(l-1)}\right) \\
&\stackrel{(\diamond)}{=} \sum_{j \in \mathcal{N}(c_1)} \alpha_{\mathcal{P}(j)} W_v \,\mathrm{concat}\left(h_{\mathcal{P}(c_1)\mathcal{P}(j)}^{(l)}, h_{\mathcal{P}(j)}^{(l-1)}\right) \\
&= \sum_{j \in \mathcal{N}(\mathcal{P}(c_1))} \alpha_j W_v \,\mathrm{concat}\left(h_{\mathcal{P}(c_1) j}^{(l)}, h_j^{(l-1)}\right) \\
&= \bar{h}_{\mathcal{P}(c_1)}^{(l)},
\end{aligned}
$$

where Eq. $(\circ)$ is due to the induction hypothesis and Eq. (8). Eq. $(\diamond)$ is due to

$$
\begin{aligned}
\alpha_j &\propto \exp\left(\boldsymbol{a}^{\top} \mathrm{LeakyReLU}\left(W_q h_{c_1}^{(l-1)} + W_k \,\mathrm{concat}\left(h_j^{(l-1)}, h_{c_1 j}^{(l)}\right)\right)\right) \\
&= \exp\left(\boldsymbol{a}^{\top} \mathrm{LeakyReLU}\left(W_q h_{\mathcal{P}(c_1)}^{(l-1)} + W_k \,\mathrm{concat}\left(h_{\mathcal{P}(j)}^{(l-1)}, h_{\mathcal{P}(c_1)\mathcal{P}(j)}^{(l)}\right)\right)\right),
\end{aligned}
$$

and thus $\alpha_j = \alpha_{\mathcal{P}(j)}$ for any $j \in \mathcal{N}(c_1)$.

We then have

$$
\begin{aligned}
h_{c_1}^{(l)} &= h_{c_1}^{(l-1)} + \mathrm{MLP}\left(h_{c_1}^{(l-1)}, \bar{h}_{c_1}^{(l)}, U^{(l-1)}\right) \\
&= h_{\mathcal{P}(c_1)}^{(l-1)} + \mathrm{MLP}\left(h_{\mathcal{P}(c_1)}^{(l-1)}, \bar{h}_{\mathcal{P}(c_1)}^{(l)}, U^{(l-1)}\right) \\
&= h_{\mathcal{P}(c_1)}^{(l)}.
\end{aligned}
$$

Thus, the claim holds for $l = i + 1$, which completes the induction, and the claim is true for every $l \in \{0, \dots, k\}$.

For every $l \in \{0, \dots, k\}$, we have $v_a \in V(v_a, 0) \subset V(v_a, k - l)$. Therefore, $h_a^{(l)} = h_{\mathcal{P}(a)}^{(l)} = h_b^{(l)}$, and the proof is completed.

Proof of Proposition 2. For two equivalent valued sub-graphs $(G(v_{a},k),f)$ and $(G(v_{b},k),f)$, if $v_{a}$ and $v_{b}$ lie on different rings, we have

$$
\frac{1}{|R(v_a)|} \sum_{r \in R(v_a)} h_r^{(0)} \neq \frac{1}{|R(v_b)|} \sum_{r \in R(v_b)} h_r^{(0)}.
$$

Therefore, there exists a choice of MLP such that

$$
\begin{aligned}
h_a^{(1)} &= h_a^{(0)} + \mathrm{MLP}\left(h_a^{(0)}, \bar{h}_a^{(1)}, \frac{1}{|R(v_a)|} \sum_{r \in R(v_a)} h_r^{(0)}, U^{(0)}\right) \\
&\neq h_b^{(0)} + \mathrm{MLP}\left(h_b^{(0)}, \bar{h}_b^{(1)}, \frac{1}{|R(v_b)|} \sum_{r \in R(v_b)} h_r^{(0)}, U^{(0)}\right) \\
&= h_b^{(1)}.
\end{aligned}
$$

The proof is completed.

![](images/c2f9044ad49a107307cbf6457ee4bcebc38aaa8650730b33c0c48c41a0ad8a7d.jpg)

# C MORE ABLATION STUDY

# C.1 NODE REPRESENTATION POOLING VS. COMPOUND REPRESENTATIONS

We explore the difference between using average pooling $h_{\mathcal{G}} = \frac{1}{|V|}\sum_{i=1}^{|V|}h_i^{(L)}$ and the compound representation $U^{(L)}$ for classification. We try two networks with different numbers of layers ($L = 6$ and $12$) and conduct experiments on the PCQM4Mv1 dataset. The validation mean absolute errors (MAE) are reported in Table 8. We can see that average node pooling is better than the compound representation. This is consistent with the findings on using virtual nodes in GIN (Hu et al., 2021). A virtual node can be regarded as a compound representation that connects to all nodes in the graph. When using virtual nodes, it is common practice to represent a graph by the average or sum pooling of node representations. One can refer to https://github.com/snap-stanford/ogb/blob/1c875697fdb20ab452b2c11cf8bfa2c0e88b5ad3/examples/lsc/pcqm4m/gnn.py#L60 for the detailed implementation.

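The two readouts under comparison can be sketched as follows (a NumPy illustration; `readout` is our name, not the paper's code):

```python
import numpy as np

def readout(node_reprs, mode="mean"):
    """Graph-level readout from final-layer node representations.
    'mean' is the average-node-pooling readout used for classification;
    the alternative carries a separate compound (virtual-node) vector
    through the layers instead of pooling at the end."""
    if mode == "mean":
        return node_reprs.mean(axis=0)
    if mode == "sum":
        return node_reprs.sum(axis=0)
    raise ValueError(f"unknown mode: {mode}")

h = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])  # 3 nodes, 2-dim states
print(readout(h).tolist())         # → [3.0, 4.0]
print(readout(h, "sum").tolist())  # → [9.0, 12.0]
```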
<table><tr><td></td><td>L=6</td><td>L=12</td></tr><tr><td>Average node pooling</td><td>0.1171</td><td>0.1149</td></tr><tr><td>Compound representation</td><td>0.1196</td><td>0.1167</td></tr></table>

Table 8: Comparison between average node pooling and compound representations.

# C.2 ATTENTIVELY AGGREGATING THE INFORMATION FROM RINGS

In Eqn.(4), we concatenate the sum pooling of atom representations, the sum pooling of bond representations, and the compound representation to update ring representations. An alternative is to use attention to aggregate the atom and bond representations. We study a variant that updates the ring representations as follows:

$$
h_r^{(l)} = h_r^{(l-1)} + \mathrm{MLP}\left(h_r^{(l-1)}, \sum_{v_i \in V(r)} \alpha_i^{(l)} h_i^{(l)}, \sum_{e_{ij} \in E(r)} \beta_{ij}^{(l)} h_{ij}^{(l)}, U^{(l-1)}\right). \tag{9}
$$

In Eqn.(9),

$$
\alpha_i^{(l)} \propto \exp\left(W_{q1} h_r^{(l-1)} + W_{k1} h_i^{(l)}\right) \quad \text{and} \quad \beta_{ij}^{(l)} \propto \exp\left(W_{q2} h_r^{(l-1)} + W_{k2} h_{ij}^{(l)}\right), \tag{10}
$$

where the four $W$'s are learnable parameters. The results are reported in Table 9. Although our method is simple, it effectively leverages ring information and outperforms this attention-based variant.

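A sketch of the attention weights in Eqn.(10), assuming the $W$'s are reduced to row vectors so each score is a scalar; shapes and names are illustrative, not the paper's implementation:

```python
import numpy as np

def softmax(scores):
    """Normalize scores so the weights are positive and sum to one."""
    e = np.exp(scores - scores.max())
    return e / e.sum()

def attentive_pool(h_r, items, w_q, w_k):
    """alpha_i ∝ exp(w_q·h_r + w_k·h_i): score each atom (or bond)
    representation against the ring representation, then take the
    attention-weighted sum of the items."""
    scores = np.array([w_q @ h_r + w_k @ h_i for h_i in items])
    alpha = softmax(scores)
    return alpha, (alpha[:, None] * items).sum(axis=0)

rng = np.random.default_rng(1)
d = 4  # illustrative hidden dimension
alpha, pooled = attentive_pool(rng.normal(size=d), rng.normal(size=(6, d)),
                               rng.normal(size=d), rng.normal(size=d))
print(alpha.sum())   # weights sum to 1
print(pooled.shape)  # one pooled d-dim vector
```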
# C.3 O-GNN WITH BRICS

The ring representation used in our method can be considered a special kind of motif. One might wonder whether other types of motifs would be helpful. To study this, we use the BRICS model (Degen et al., 2008) to decompose molecules into fragments. BRICS defines 16 rules for breaking bonds that match a set of chemical reactions. The ring representations in Eqn.(2,3,4,5) are replaced by

+ <table><tr><td></td><td>L=6</td><td>L=12</td></tr><tr><td>O-GNN</td><td>0.1171</td><td>0.1149</td></tr><tr><td>O-GNN with attention models when updating ring representations</td><td>0.1179</td><td>0.1160</td></tr></table>
452
+
453
+ Table 9: Comparison between our method and using attention models when updating ring representations.
454
+
455
+ <table><tr><td></td><td>L=2</td><td>L=4</td><td>L=6</td><td>L=8</td><td>L=12</td></tr><tr><td>O-GNN</td><td>0.1247</td><td>0.1201</td><td>0.1181</td><td>0.1172</td><td>0.1155</td></tr><tr><td>O-GNN w/o rings</td><td>0.1325</td><td>0.1243</td><td>0.1222</td><td>0.1221</td><td>0.1204</td></tr><tr><td>BRICS</td><td>0.1294</td><td>0.1239</td><td>0.1219</td><td>0.1208</td><td>0.1193</td></tr></table>
456
+
457
+ Table 10: Comparison between using simple rings (i.e., our method) and using BRICS-based fragments.
458
+
459
+ these motif representations; all other components remain unchanged. We conduct the experiments on the PCQM4Mv1 dataset, and the results are shown in Table 10. Due to time and computational resource limitations, all models are trained for 200 epochs.
460
+
461
+ From Table 10, we can conclude that: (1) using simple ring representations achieves better results than using BRICS fragments; (2) in general, using BRICS is better than the variant without any ring or fragment information. We will keep exploring more decomposition methods.
462
+
463
+ # C.4 MORE COMPARISON BETWEEN $\mathcal{O}$ -GNN AND $\mathcal{O}$ -GNN W/O RINGS
464
+
465
+ As a complement to the analysis of the MAE w.r.t. the number of rings in molecules in Figure 5, we also report the prediction error (i.e., mean absolute error, MAE) of $\mathcal{O}$ -GNN and the variant "O-GNN w/o rings" in Figure 8. We can observe that when molecules have no rings, the two methods perform similarly. As the number of rings increases from 1 to 6, the MAE increases, and $\mathcal{O}$ -GNN consistently outperforms the "O-GNN w/o rings" variant.
466
+
467
+ # C.5 ADDITIONAL DISCUSSIONS
468
+
469
+ About over-smoothing One might wonder whether our 12-layer network suffers from over-smoothing. Actually, Cong et al. (2021) point out that "over-smoothing does not necessarily happen in practice, a deeper model is provably expressive, can converge to global optimum with linear convergence rate, and achieve very high training accuracy as long as properly trained" (quoted verbatim from Cong et al. (2021)). In addition, Li et al. (2020) and Addanki et al. (2021) both successfully trained networks with more than 50 layers. Our method follows
470
+
471
+ ![](images/b48d19150c1f204e959b3533c60f7bae9b41267d175f85834a4a49196f3d7868.jpg)
472
+ (a) $L = 2$
473
+
474
+ ![](images/412a7e65ed77cb5fdeb74ee1b9f075fb1929a1d4193b26f1a834fc84a06c1732.jpg)
475
+ (b) $L = 6$
476
+ Figure 8: Predicted MAE categorized by different properties. The $x$ -axis denotes the number of rings, and the $y$ -axis denotes the mean absolute error (MAE) on the validation set.
477
+
478
+ ![](images/46378a3453cb8ce2e4386a0a1cdb50e3f4c335146bb174e2f39c6b9ad7769ee7.jpg)
479
+ (c) $L = 12$
480
+
481
+ the architecture of Addanki et al. (2021); therefore, we do not expect our model to suffer from over-smoothing.
482
+
483
+ Modeling $k$ -neighborhood: If we want to explicitly use the $k$ -neighborhood information, we might need additional modules to process it, such as
484
+
485
+ $$
486
+ \mathrm{net}_1(\text{1-neighbor nodes}) + \mathrm{net}_2(\text{2-neighbor nodes}) + \dots + \mathrm{net}_k(\text{$k$-neighbor nodes}). \tag{11}
487
+ $$
488
+
489
+ To ensure expressiveness, we usually do not share parameters across these modules, so the parameter count is $k$ times that of a regular GNN. In contrast, $\mathcal{O}$ -GNN adds only a constant fraction of parameters, independent of $k$ ; when $k$ is large, $\mathcal{O}$ -GNN is therefore much more parameter-efficient. On the other hand, the optimal $k^*$ is not easy to determine: in DrugBank, for example, the maximum ring sizes range from 3 (e.g., DB00658) to 53 (e.g., DB05034), so which $k$ is best is hard to decide.
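The parameter-count argument above can be made concrete with a toy calculation (the per-module size is illustrative, not measured from the paper's models): the scheme in Eqn.(11) uses one unshared module per hop, so its cost grows linearly in $k$, while the ring module is a fixed overhead.

```python
# Toy parameter counts for Eqn.(11) vs. a single ring-update module.

def k_hop_params(k, params_per_net=1_000):
    # net_1 ... net_k with unshared parameters: linear growth in k.
    return k * params_per_net

def ring_module_params(params_per_net=1_000):
    # The ring-update module's size does not depend on k.
    return params_per_net

small = k_hop_params(3)    # smallest DrugBank ring size (e.g., DB00658)
large = k_hop_params(53)   # largest DrugBank ring size (e.g., DB05034)
```

Covering rings up to size 53 this way would cost roughly 53x the per-hop budget, whereas the ring module's cost stays constant.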
490
+
491
+ About invariant constraints In $\mathcal{O}$ -GNN, the features of atoms, bonds and rings are all invariant. Specifically, the features of atoms and bonds describe their types, number of correlated electrons, number of neighbors, etc. (please refer to https://github.com/O-GNN/O-GNN/blob/5b70a4f9dc9a5f87a0171eea1e9cecde30489eb8/ogb/util5/features.py#L2 for details). The ring representations are obtained from the atom and bond representations (please refer to Eqn.(4)), and are therefore also invariant. Variant features (such as coordinates) are not encoded.
492
+
493
+ Comparison of the convergence speed: The validation MAE curves on PCQM4Mv1 are shown in Figure 9. The results of the 6-layer and 12-layer $\mathcal{O}$ -GNN (with/without rings) are reported. We can see that:
494
+
495
+ (1) by training the 6-layer $\mathcal{O}$ -GNN for 175 epochs, the results are almost the same as training the 12-layer "O-GNN w/o ring" for 275 epochs;
496
+ (2) by training the 12-layer $\mathcal{O}$ -GNN for 75 epochs, the results are almost the same as training the 12-layer "O-GNN w/o ring" for 275 epochs.
497
+
498
+ These results demonstrate that $\mathcal{O}$ -GNN converges faster.
499
+
500
+ ![](images/26eb586e051be7be5e3bec23069b38d22628d0dab77a314ce8f52f9c1638d667.jpg)
501
+ Figure 9: Comparison of the convergence speed of $\mathcal{O}$ -GNN and "O-GNN w/o ring". The $x$ -axis denotes the training epoch and the $y$ -axis denotes the validation MAE.
502
+
503
+ Comparison between different numbers of parameters. Figure 4 shows the validation MAE of $\mathcal{O}$ -GNN and "O-GNN w/o ring" w.r.t. the number of layers. We also visualize the validation MAE w.r.t. the number of parameters in Figure 10. We can observe that, when matched in the number of parameters, $\mathcal{O}$ -GNN still outperforms the variant without modeling rings.
504
+
505
+ Pre-training baselines on MoleculeNet. We summarize the pre-training baselines on MoleculeNet in Table 11. Sun et al. (2022) have demonstrated that different data splitting methods can result in significantly different results. We follow the common practice of using scaffold-based splitting, and we cite the results of Rong et al. (2020) from Fang et al. (2022). Note that $\mathcal{O}$ -GNN is not pre-trained on unlabeled molecules. We can see that in terms of the average score, our method
506
+
507
+ ![](images/342b38aa4b108e7d203b61377b96f2512a47a353cdc764418d3ef6e553b8815a.jpg)
508
+ Figure 10: Validation MAE of PCQM4Mv1 w.r.t. the number of parameters.
509
+
510
+ is comparable with those strong baselines, which demonstrates the effectiveness of our method. We will combine our method with pre-training in the future.
511
+
512
+ <table><tr><td>Dataset<br># Molecules</td><td>BBBP<br>2039</td><td>Tox21<br>7831</td><td>ClinTox<br>1478</td><td>HIV<br>41127</td><td>BACE<br>1513</td><td>SIDER<br>1478</td><td>Avg</td></tr><tr><td>(Hu et al., 2020)</td><td>71.2 ± 0.9</td><td>74.2 ± 0.8</td><td>73.7 ± 4.0</td><td>75.8 ± 1.1</td><td>78.6 ± 1.4</td><td>60.4 ± 0.6</td><td>72.3</td></tr><tr><td>G-Contextual (Liu et al., 2022)</td><td>70.3 ± 1.6</td><td>75.2 ± 0.3</td><td>59.9 ± 8.2</td><td>75.9 ± 0.9</td><td>79.2 ± 0.3</td><td>58.4 ± 0.6</td><td>69.8</td></tr><tr><td>G-Motif (Liu et al., 2022)</td><td>66.4 ± 3.4</td><td>73.2 ± 0.8</td><td>77.8 ± 2.0</td><td>73.8 ± 1.4</td><td>73.4 ± 4.0</td><td>60.6 ± 1.1</td><td>70.9</td></tr><tr><td>GraphMVP (Liu et al., 2022)</td><td>72.4 ± 1.6</td><td>75.9 ± 0.5</td><td>79.1 ± 2.8</td><td>77.0 ± 1.2</td><td>81.2 ± 0.9</td><td>63.9 ± 1.2</td><td>74.9</td></tr><tr><td>MGSSL (Zhang et al., 2021)</td><td>70.5 ± 1.1</td><td>76.5 ± 0.3</td><td>80.7 ± 2.1</td><td>79.5 ± 1.1</td><td>79.7 ± 0.8</td><td>61.8 ± 0.8</td><td>74.8</td></tr><tr><td>GROVERbase (Rong et al., 2020)</td><td>70.0 ± 0.1</td><td>74.3 ± 0.1</td><td>81.2 ± 3.0</td><td>62.5 ± 0.9</td><td>82.6 ± 0.7</td><td>64.8 ± 0.6</td><td>72.6</td></tr><tr><td>GROVERlarge (Rong et al., 2020)</td><td>69.5 ± 0.1</td><td>73.5 ± 0.1</td><td>76.2 ± 3.7</td><td>68.2 ± 1.1</td><td>81.0 ± 1.4</td><td>65.4 ± 0.1</td><td>72.3</td></tr><tr><td>GEM (Fang et al., 2022)</td><td>72.4 ± 0.4</td><td>78.1 ± 0.1</td><td>90.1 ± 1.3</td><td>80.6 ± 0.9</td><td>85.6 ± 1.1</td><td>67.2 ± 0.4</td><td>79.0</td></tr><tr><td>O-GNN (ours)</td><td>76.4 ± 0.4</td><td>75.7 ± 0.7</td><td>94.3 ± 1.6</td><td>81.3 ± 1.2</td><td>85.8 ± 1.0</td><td>66.2 ± 1.2</td><td>80.0</td></tr></table>
520
+
521
+ Table 11: Pre-training baselines on MoleculeNet.
522
+
523
+ # D RELATED WORK SUMMARY
524
+
525
+ GCN (Kipf & Welling, 2017) aggregates neighbor information according to the adjacency and degree matrices, and then updates the aggregated information with a linear transformation and a non-linear activation layer. GraphSAGE (Hamilton et al., 2017b) aggregates neighbor information by element-wise averaging. GAT (Veličković et al., 2017) introduces the attention mechanism into GNNs, by which it can adaptively aggregate the representations of the neighbors. Brody et al. (2021) propose GATv2 to make the attention mechanism more expressive. Xu et al. (2018) develop a simple aggregation function, involving an $\epsilon$ parameter and multi-layer perceptrons (MLPs), that is provably as powerful as the Weisfeiler-Lehman graph isomorphism test. Besides, Gilmer et al. (2017b); Li et al. (2017); Pham et al. (2017) propose to augment the graph with a virtual node to capture the global information of the graph. A virtual node connects to all the other nodes in the graph and is jointly updated during training. Its effectiveness is validated in a series of graph classification tasks.
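The GIN-style update of Xu et al. (2018) described above can be sketched as follows; here the MLP is replaced by the identity and features are scalars, purely for illustration.

```python
# h_v' = MLP((1 + eps) * h_v + sum_{u in N(v)} h_u), with an identity "MLP".

def gin_update(h, adj, eps=0.0):
    return [(1 + eps) * h[v] + sum(h[u] for u in adj[v])
            for v in range(len(h))]

# Path graph 0 - 1 - 2 with scalar node features.
h = [1.0, 2.0, 3.0]
adj = {0: [1], 1: [0, 2], 2: [1]}
h_new = gin_update(h, adj)   # [3.0, 6.0, 5.0]
```

The learnable $\epsilon$ lets the model weight a node's own feature against its aggregated neighbors; with $\epsilon = 0$ the update reduces to self-plus-neighbor summation.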
526
+
527
+ However, these works do not explicitly use the ring $R$ in graph neural networks. Complementary to them, we consider how to incorporate the ring information, another important component on top of the node and edge information, into molecular modeling. These advanced Aggregate and Update functions are also applicable to our work.
2023/$_mathcal{O}$-GNN_ incorporating ring priors into molecular modeling/images.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:20bf696b87a402e3c5f74f2b34aafde9675fd9a71ec28fa1dd489bd23f0e74f7
3
+ size 986150
2023/$_mathcal{O}$-GNN_ incorporating ring priors into molecular modeling/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/$_mathrm{SE}(3)$-Equivariant Attention Networks for Shape Reconstruction in Function Space/b7caaaa1-c4b0-4639-a849-27d74bf6d2bd_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/$_mathrm{SE}(3)$-Equivariant Attention Networks for Shape Reconstruction in Function Space/b7caaaa1-c4b0-4639-a849-27d74bf6d2bd_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/$_mathrm{SE}(3)$-Equivariant Attention Networks for Shape Reconstruction in Function Space/b7caaaa1-c4b0-4639-a849-27d74bf6d2bd_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e4629645dc107cee6da78df93498071d7bc8396e4fa6c257b7705a11defd4ac8
3
+ size 10213820
2023/$_mathrm{SE}(3)$-Equivariant Attention Networks for Shape Reconstruction in Function Space/full.md ADDED
The diff for this file is too large to render. See raw diff
 
2023/$_mathrm{SE}(3)$-Equivariant Attention Networks for Shape Reconstruction in Function Space/images.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d1e1e9640e3fe4570186e8da0626f085c19283df49d81cc16924f911fcb8a0e2
3
+ size 1284937
2023/$_mathrm{SE}(3)$-Equivariant Attention Networks for Shape Reconstruction in Function Space/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/$_mathscr{N}$-WL_ A New Hierarchy of Expressivity for Graph Neural Networks/f335c9f2-7e4c-49ea-9b9b-7e536ba967bd_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/$_mathscr{N}$-WL_ A New Hierarchy of Expressivity for Graph Neural Networks/f335c9f2-7e4c-49ea-9b9b-7e536ba967bd_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/$_mathscr{N}$-WL_ A New Hierarchy of Expressivity for Graph Neural Networks/f335c9f2-7e4c-49ea-9b9b-7e536ba967bd_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:65841a21e05452f3019ec38bb42b25f7cc9186b472ac62d372ed2ce107c60954
3
+ size 1740133
2023/$_mathscr{N}$-WL_ A New Hierarchy of Expressivity for Graph Neural Networks/full.md ADDED
The diff for this file is too large to render. See raw diff
 
2023/$_mathscr{N}$-WL_ A New Hierarchy of Expressivity for Graph Neural Networks/images.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0ef14929b9cf33b0696c5c557906ed9c0308506d9f88db7dbaf08fb270c3db3a
3
+ size 1355929
2023/$_mathscr{N}$-WL_ A New Hierarchy of Expressivity for Graph Neural Networks/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/$_rm A^2Q$_ Aggregation-Aware Quantization for Graph Neural Networks/86d84cce-8a0d-4627-8174-5af26f79cb15_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/$_rm A^2Q$_ Aggregation-Aware Quantization for Graph Neural Networks/86d84cce-8a0d-4627-8174-5af26f79cb15_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/$_rm A^2Q$_ Aggregation-Aware Quantization for Graph Neural Networks/86d84cce-8a0d-4627-8174-5af26f79cb15_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c054ad95338d97c41888697db51a86ed73a55d0b4dd7be5af0ba80455a89a017
3
+ size 2546359
2023/$_rm A^2Q$_ Aggregation-Aware Quantization for Graph Neural Networks/full.md ADDED
The diff for this file is too large to render. See raw diff
 
2023/$_rm A^2Q$_ Aggregation-Aware Quantization for Graph Neural Networks/images.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fa0cbf3003c24962652c76cb0a2616f64eb309140324843b2aadeabac734be34
3
+ size 1513573
2023/$_rm A^2Q$_ Aggregation-Aware Quantization for Graph Neural Networks/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/$k$NN Prompting_ Beyond-Context Learning with Calibration-Free Nearest Neighbor Inference/88ef98b2-6c9b-4b6f-a634-2b2692f94bcc_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/$k$NN Prompting_ Beyond-Context Learning with Calibration-Free Nearest Neighbor Inference/88ef98b2-6c9b-4b6f-a634-2b2692f94bcc_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/$k$NN Prompting_ Beyond-Context Learning with Calibration-Free Nearest Neighbor Inference/88ef98b2-6c9b-4b6f-a634-2b2692f94bcc_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1d697383dfdfd597c0e1d7d04da30de555c592f8c72044f6ae05f4645c7764a9
3
+ size 11556816
2023/$k$NN Prompting_ Beyond-Context Learning with Calibration-Free Nearest Neighbor Inference/full.md ADDED
The diff for this file is too large to render. See raw diff
 
2023/$k$NN Prompting_ Beyond-Context Learning with Calibration-Free Nearest Neighbor Inference/images.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:acca627ac1d22ba61ca2533263bde47d73bfe89e6d534e69dc867244efa2b909
3
+ size 2181244
2023/$k$NN Prompting_ Beyond-Context Learning with Calibration-Free Nearest Neighbor Inference/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/(Certified!!) Adversarial Robustness for Free!/7064d377-0583-493a-b0c4-c47e08357804_content_list.json ADDED
@@ -0,0 +1,1747 @@
1
+ [
2
+ {
3
+ "type": "text",
4
+ "text": "(CERTIFIED!!) ADVERSARIAL ROBUSTNESS FOR FREE!",
5
+ "text_level": 1,
6
+ "bbox": [
7
+ 171,
8
+ 99,
9
+ 836,
10
+ 122
11
+ ],
12
+ "page_idx": 0
13
+ },
14
+ {
15
+ "type": "text",
16
+ "text": "Nicholas Carlini\\*1 Florian Tram'er\\*1 Krishnamurthy (Dj) Dvijotham\\* Leslie Rice2 Mingjie Sun2 J. Zico Kolter\\*2,3",
17
+ "bbox": [
18
+ 179,
19
+ 147,
20
+ 702,
21
+ 176
22
+ ],
23
+ "page_idx": 0
24
+ },
25
+ {
26
+ "type": "text",
27
+ "text": "$^{1}$ Google $^{2}$ Carnegie Mellon University $^{3}$ Bosch Center for AI",
28
+ "bbox": [
29
+ 183,
30
+ 176,
31
+ 643,
32
+ 191
33
+ ],
34
+ "page_idx": 0
35
+ },
36
+ {
37
+ "type": "text",
38
+ "text": "ABSTRACT",
39
+ "text_level": 1,
40
+ "bbox": [
41
+ 450,
42
+ 215,
43
+ 547,
44
+ 231
45
+ ],
46
+ "page_idx": 0
47
+ },
48
+ {
49
+ "type": "text",
50
+ "text": "In this paper we show how to achieve state-of-the-art certified adversarial robustness to $\\ell_2$ -norm bounded perturbations by relying exclusively on off-the-shelf pretrained models. To do so, we instantiate the denoised smoothing approach of Salman et al. (2020) by combining a pretrained denoising diffusion probabilistic model and a standard high-accuracy classifier. This allows us to certify $71\\%$ accuracy on ImageNet under adversarial perturbations constrained to be within an $\\ell_2$ norm of $\\varepsilon = 0.5$ , an improvement of 14 percentage points over the prior certified SoTA using any approach, or an improvement of 30 percentage points over denoised smoothing. We obtain these results using only pretrained diffusion models and image classifiers, without requiring any fine tuning or retraining of model parameters.",
51
+ "bbox": [
52
+ 228,
53
+ 247,
54
+ 767,
55
+ 400
56
+ ],
57
+ "page_idx": 0
58
+ },
59
+ {
60
+ "type": "text",
61
+ "text": "1 INTRODUCTION",
62
+ "text_level": 1,
63
+ "bbox": [
64
+ 173,
65
+ 426,
66
+ 336,
67
+ 440
68
+ ],
69
+ "page_idx": 0
70
+ },
71
+ {
72
+ "type": "text",
73
+ "text": "Evaluating the robustness of deep learning models to norm bounded adversarial perturbations has been shown to be difficult (Athalye et al., 2018; Uesato et al., 2018). Certified defenses—such as those based on bound propagation (Gowal et al., 2018; Mirman et al., 2018) or randomized smoothing (Lecuyer et al., 2019; Cohen et al., 2019)—offer provable guarantees that a model's predictions are robust to norm-bounded adversarial perturbations, for a large fraction of examples in the test set.",
74
+ "bbox": [
75
+ 169,
76
+ 455,
77
+ 823,
78
+ 527
79
+ ],
80
+ "page_idx": 0
81
+ },
82
+ {
83
+ "type": "text",
84
+ "text": "The current state-of-the-art approaches to certify robustness to adversarial perturbations bounded in the $\\ell_2$ norm rely on randomized smoothing (Lecuyer et al., 2019; Cohen et al., 2019). By taking a majority vote over the labels predicted by a \"base classifier\" under random Gaussian perturbations of the input, if the correct class is output sufficiently often, then the defense's output on the original un-noised input is guaranteed to be robust to $\\ell_2$ norm bounded adversarial perturbations.",
85
+ "bbox": [
86
+ 169,
87
+ 534,
88
+ 823,
89
+ 604
90
+ ],
91
+ "page_idx": 0
92
+ },
93
+ {
94
+ "type": "text",
95
+ "text": "Denoised smoothing (Salman et al., 2020) is a certified defense that splits this one-step process into two. After randomly perturbing an input, the defense first applies a denoiser model that aims to remove the added noise, followed by a standard classifier that guesses a label given this noisethen-denoised input. This enables applying randomized smoothing to pretrained black-box base classifiers, as long as the denoiser can produce clean images close to the base classifier's original training distribution.",
96
+ "bbox": [
97
+ 169,
98
+ 609,
99
+ 823,
100
+ 694
101
+ ],
102
+ "page_idx": 0
103
+ },
104
+ {
105
+ "type": "text",
106
+ "text": "We observe that the recent line of work on denoising diffusion probabilistic models (Sohl-Dickstein et al., 2015; Ho et al., 2020; Nichol & Dhariwal, 2021)—which achieve state-of-the-art results on image generation—are a perfect match for the denoising step in a denoised smoothing defense. A forward diffusion process takes a source data distribution (e.g., images from some data distribution) and then adds Gaussian noise until the distribution converges to a high-variance isotropic Gaussian. Denoising diffusion models are trained to invert this process. Thus, we can use a diffusion model as a denoiser that recovers high quality denoised inputs from inputs perturbed with Gaussian noise.",
107
+ "bbox": [
108
+ 169,
109
+ 700,
110
+ 823,
111
+ 799
112
+ ],
113
+ "page_idx": 0
114
+ },
115
+ {
116
+ "type": "text",
117
+ "text": "In this paper, we combine state-of-the-art, publicly available diffusion models as denoisers with standard pretrained state-of-the-art classifiers. We show that the resulting denoised smoothing defense obtains significantly better certified robustness results—for perturbations of $\\ell_2$ norm of $\\epsilon \\leq 2$ on ImageNet and $\\epsilon \\leq 0.5$ on CIFAR-10—compared to the \"custom\" denoisers trained in prior work (Salman et al., 2020), or in fact with any certifiably robust defense (even those that do not rely on denoised smoothing). Code to reproduce our experiments is available at: https://github.com/ethz-privsec/diffusion_denoised_smoothing.",
118
+ "bbox": [
119
+ 169,
120
+ 805,
121
+ 826,
122
+ 904
123
+ ],
124
+ "page_idx": 0
125
+ },
126
+ {
127
+ "type": "header",
128
+ "text": "Published as a conference paper at ICLR 2023",
129
+ "bbox": [
130
+ 171,
131
+ 32,
132
+ 478,
133
+ 47
134
+ ],
135
+ "page_idx": 0
136
+ },
137
+ {
138
+ "type": "page_footnote",
139
+ "text": "*Joint first authors",
140
+ "bbox": [
141
+ 189,
142
+ 910,
143
+ 305,
144
+ 922
145
+ ],
146
+ "page_idx": 0
147
+ },
148
+ {
149
+ "type": "page_number",
150
+ "text": "1",
151
+ "bbox": [
152
+ 493,
153
+ 948,
154
+ 504,
155
+ 959
156
+ ],
157
+ "page_idx": 0
158
+ },
159
+ {
160
+ "type": "text",
161
+ "text": "2 BACKGROUND",
162
+ "text_level": 1,
163
+ "bbox": [
164
+ 171,
165
+ 102,
166
+ 328,
167
+ 118
168
+ ],
169
+ "page_idx": 1
170
+ },
171
+ {
172
+ "type": "text",
173
+ "text": "Adversarial examples (Biggio et al., 2013; Szegedy et al., 2014) are inputs $x' = x + \\delta$ constructed by taking some input $x$ (with true label $y \\in \\mathcal{V}$ ) and adding a perturbation $\\delta$ (that is assumed to be imperceptible and hence label-preserving) so that a given classifier $f$ misclassifies the perturbed input, i.e., $f(x + \\delta) \\neq y$ . The \"smallness\" of $\\delta$ is quantified by its Euclidean norm, and we constrain $\\| \\delta \\|_2 \\leq \\varepsilon$ . Even when considering exceptionally small perturbation budgets (e.g., $\\varepsilon = 0.5$ ) modern classifiers often have near-0% accuracy (Carlini & Wagner, 2017).",
174
+ "bbox": [
175
+ 169,
176
+ 133,
177
+ 826,
178
+ 219
179
+ ],
180
+ "page_idx": 1
181
+ },
182
+ {
183
+ "type": "text",
184
+ "text": "Randomized smoothing (Lecuyer et al., 2019; Cohen et al., 2019) is a technique to certify the robustness of arbitrary classifiers against adversarial examples under the $\\ell_2$ norm. Given an input $x$ and base classifier $f$ , randomized smoothing considers a smooth version of $f$ defined as:",
185
+ "bbox": [
186
+ 169,
187
+ 234,
188
+ 823,
189
+ 277
190
+ ],
191
+ "page_idx": 1
192
+ },
193
+ {
194
+ "type": "equation",
195
+ "text": "\n$$\ng (x) := \\operatorname {a r g m a x} _ {c} \\Pr_ {\\delta \\sim \\mathcal {N} \\left(0, \\sigma^ {2} \\mathbf {I}\\right)} \\left(f (x + \\delta) = c\\right) \\tag {1}\n$$\n",
196
+ "text_format": "latex",
197
+ "bbox": [
198
+ 351,
199
+ 286,
200
+ 823,
201
+ 310
202
+ ],
203
+ "page_idx": 1
204
+ },
205
+ {
206
+ "type": "text",
207
+ "text": "Cohen et al. (2019) prove that the smooth classifier $g$ is robust to perturbations of $\\ell_2$ radius $R$ , where the radius $R$ grows with the classifier's \"margin\" (i.e., the difference in probabilities assigned to the most likely and second most-likely classes). As the probability in Equation 1 cannot be efficiently computed when the base classifier $f$ is a neural network, Cohen et al. (2019) instantiate this defense by sampling a small number $m$ of noise instances (e.g., $m = 10$ ) and taking a majority vote over the outputs of the base classifier $f$ on $m$ noisy versions of the input. To compute a lower-bound on this defense's robust radius $R$ , they estimate the probabilities $\\operatorname*{Pr}[f(x + \\delta) = c]$ for each class label $c$ by sampling a large number $N$ of noise instances $\\delta$ (e.g., $N = 100,000$ ). See Cohen et al. (2019) for details.",
208
+ "bbox": [
209
+ 169,
210
+ 325,
211
+ 823,
212
+ 454
213
+ ],
214
+ "page_idx": 1
215
+ },
216
+ {
217
+ "type": "text",
218
+ "text": "Denoised smoothing (Salman et al., 2020) is an instantiation of randomized smoothing, where the base classifier $f$ is composed of a denoiser denoise followed by a standard classifier $f_{\\mathrm{clf}}$ :",
219
+ "bbox": [
220
+ 169,
221
+ 469,
222
+ 823,
223
+ 498
224
+ ],
225
+ "page_idx": 1
226
+ },
227
+ {
228
+ "type": "equation",
229
+ "text": "\n$$\nf (x + \\delta) := f _ {\\mathrm {c l f}} (\\operatorname {d e n o i s e} (x + \\delta)). \\tag {2}\n$$\n",
230
+ "text_format": "latex",
231
+ "bbox": [
232
+ 367,
233
+ 506,
234
+ 823,
235
+ 523
236
+ ],
237
+ "page_idx": 1
238
+ },
239
+ {
240
+ "type": "text",
241
+ "text": "Given a very good denoiser (i.e., $\\mathrm{denoise}(x + \\delta) \\approx x$ with high probability for $\\delta \\sim \\mathcal{N}(0, \\sigma^2\\mathbf{I})$ ), we can expect the base classifier's accuracy on noisy images to be similar to the clean accuracy of the standard classifier $f_{\\mathrm{clf}}$ . Salman et al. (2020) instantiate their denoised smoothing technique by training custom denoiser models with Gaussian noise augmentation, combined with off-the-shelf pretrained classifiers.",
242
+ "bbox": [
243
+ 169,
244
+ 539,
245
+ 826,
246
+ 609
247
+ ],
248
+ "page_idx": 1
249
+ },
250
+ {
251
+ "type": "text",
252
+ "text": "Denoising Diffusion Probabilistic Models (Sohl-Dickstein et al., 2015; Ho et al., 2020; Nichol & Dhariwal, 2021) are a form of generative model that work by learning a model that can reverse time on a diffusion process of the form $x_{t} \\sim \\sqrt{1 - \\beta_{t}} \\cdot x_{t-1} + \\beta_{t} \\cdot \\omega_{t}, \\omega_{t} \\sim \\mathcal{N}(0, \\mathbf{I})$ with $x_{0}$ coming from the data distribution, and the $\\beta_{t}$ being fixed (or learned) variance parameters. The diffusion process transforms images from the target data distribution to purely random noise over time. The reverse process then synthesizes images from the data distribution starting with random Gaussian noise. In this paper we will not make use of diffusion models in the typical way; instead it suffices to understand just one single property about how they are trained.",
253
+ "bbox": [
254
+ 169,
255
+ 626,
256
+ 823,
257
+ 739
258
+ ],
259
+ "page_idx": 1
260
+ },
261
+ {
262
+ "type": "text",
263
+ "text": "Given a clean training image $x \\in [-1, 1]^{w \\cdot h \\cdot c}$ , a diffusion model selects a timestep $t \\in \\mathbb{N}^+$ from some fixed schedule and then samples a noisy image $x_{t}$ of the form",
264
+ "bbox": [
265
+ 169,
266
+ 744,
267
+ 823,
268
+ 773
269
+ ],
270
+ "page_idx": 1
271
+ },
272
+ {
273
+ "type": "equation",
274
+ "text": "\n$$\nx _ {t} := \\sqrt {\\alpha_ {t}} \\cdot x + \\sqrt {1 - \\alpha_ {t}} \\cdot \\mathcal {N} (0, \\mathbf {I}), \\tag {3}\n$$\n",
275
+ "text_format": "latex",
276
+ "bbox": [
277
+ 372,
278
+ 781,
279
+ 823,
280
+ 799
281
+ ],
282
+ "page_idx": 1
283
+ },
284
+ {
285
+ "type": "text",
286
+ "text": "where the factor $\\alpha_{t}$ is a constant derived from the timestamp $t$ that determines the amount of noise to be added to the image (the noise magnitude increases monotonically with $t$ ).",
287
+ "bbox": [
288
+ 169,
289
+ 806,
290
+ 823,
291
+ 837
292
+ ],
293
+ "page_idx": 1
294
+ },
295
+ {
+ "type": "text",
+ "text": "The diffusion model is then trained (loosely speaking) to minimize the discrepancy between $x$ and denoise $(x_{t}; t)$ ; that is, to predict what the original (un-noised) image should look like after applying the noising step at timestep $t$ .<sup>1</sup>",
+ "bbox": [
+ 169,
+ 842,
+ 823,
+ 886
+ ],
+ "page_idx": 1
+ },
+ {
+ "type": "header",
+ "text": "Published as a conference paper at ICLR 2023",
+ "bbox": [
+ 171,
+ 32,
+ 478,
+ 47
+ ],
+ "page_idx": 1
+ },
+ {
+ "type": "page_footnote",
+ "text": "${}^{1}$ State-of-the-art diffusion models are actually trained to predict the noise rather than the denoised image directly (Ho et al., 2020; Nichol & Dhariwal, 2021).",
+ "bbox": [
+ 169,
+ 897,
+ 823,
+ 925
+ ],
+ "page_idx": 1
+ },
+ {
+ "type": "page_number",
+ "text": "2",
+ "bbox": [
+ 493,
+ 948,
+ 504,
+ 959
+ ],
+ "page_idx": 1
+ },
+ {
+ "type": "code",
+ "sub_type": "algorithm",
+ "code_caption": [
+ "Algorithm 1 Noise, denoise, classify"
+ ],
+ "code_body": "NOISEANDCLASSIFY(x, σ):\n1: t*, α_{t*} ← GETTIMESTEP(σ)\n2: x_{t*} ← √α_{t*} · (x + N(0, σ²I))\n3: x̂ ← denoise(x_{t*}; t*)\n4: y ← f_clf(x̂)\n5: return y\n\nGETTIMESTEP(σ):\n1: t* ← t such that (1 − α_t)/α_t = σ²\n2: return t*, α_{t*}\n\nPREDICT(x, σ, N, η):\n1: counts ← 0\n2: for i ∈ {1, 2, ..., N} do\n3: y ← NOISEANDCLASSIFY(x, σ)\n4: counts[y] ← counts[y] + 1\n5: ŷ_A, ŷ_B ← top two labels in counts\n6: n_A, n_B ← counts[ŷ_A], counts[ŷ_B]\n7: if BINOMPTEST(n_A, n_A + n_B, 1/2) ≤ η then\n8: return ŷ_A\n9: else\n10: return Abstain",
+ "bbox": [
+ 173,
+ 138,
+ 823,
+ 296
+ ],
+ "page_idx": 2
+ },
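The two procedures of Algorithm 1 can be sketched as follows (an illustrative sketch, not the paper's implementation: the noisy classification step is passed in as a callable, and BINOMPTEST is approximated by an exact two-sided binomial test at p = 1/2 built from the standard library):

```python
import math
from collections import Counter

def binom_p_two_sided(k, n):
    """Two-sided binomial test p-value for k successes out of n trials with p = 1/2."""
    tail = min(k, n - k)
    p = 2.0 * sum(math.comb(n, i) for i in range(tail + 1)) / 2.0 ** n
    return min(1.0, p)

def predict(x, sigma, n_samples, eta, noise_and_classify):
    """PREDICT from Algorithm 1: majority vote over noisy classifications,
    abstaining unless the top label beats the runner-up at significance eta."""
    counts = Counter(noise_and_classify(x, sigma) for _ in range(n_samples))
    top_two = counts.most_common(2) + [(None, 0)]
    (y_a, n_a), (_, n_b) = top_two[0], top_two[1]
    return y_a if binom_p_two_sided(n_a, n_a + n_b) <= eta else "abstain"

# A classifier that is stable under noise is confidently predicted.
stable_clf = lambda x, sigma: "cat"
assert predict([0.0], 0.25, 20, 0.001, stable_clf) == "cat"
```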
+ {
+ "type": "text",
+ "text": "Figure 1: Our approach can be implemented in under 15 lines of code, given an off-the-shelf classifier $f_{\\mathrm{clf}}$ and an off-the-shelf diffusion model denoise. The PREDICT function is adapted from Cohen et al. (2019): it takes as input a number of noise samples $N$ and a statistical significance level $\\eta \\in (0,1)$ , and inherits the robustness certificate proved in that work.",
+ "bbox": [
+ 169,
+ 306,
+ 826,
+ 364
+ ],
+ "page_idx": 2
+ },
+ {
+ "type": "text",
+ "text": "3 DIFFUSION DENOISED SMOOTHING",
+ "text_level": 1,
+ "bbox": [
+ 171,
+ 388,
+ 503,
+ 404
+ ],
+ "page_idx": 2
+ },
+ {
+ "type": "text",
+ "text": "Our approach, Diffusion Denoised Smoothing (DDS), requires no new technical ideas on top of what was introduced in the section above.",
+ "bbox": [
+ 169,
+ 419,
+ 826,
+ 448
+ ],
+ "page_idx": 2
+ },
+ {
+ "type": "text",
+ "text": "Denoised smoothing via a diffusion model. The only minor technicality required for our method is to map between the noise model required by randomized smoothing and the noise model used within diffusion models. Specifically, randomized smoothing requires a data point augmented with additive Gaussian noise $x_{\\mathrm{rs}} \\sim \\mathcal{N}(x,\\sigma^2\\mathbf{I})$ , whereas diffusion models assume the noise model $x_{t} \\sim \\mathcal{N}(\\sqrt{\\alpha_{t}} x,(1 - \\alpha_{t})\\mathbf{I})$ . Scaling $x_{\\mathrm{rs}}$ by $\\sqrt{\\alpha_t}$ and equating the variances yields the relationship",
+ "bbox": [
+ 169,
+ 463,
+ 823,
+ 536
+ ],
+ "page_idx": 2
+ },
+ {
+ "type": "equation",
+ "text": "\n$$\n\\sigma^{2} = \\frac{1 - \\alpha_{t}}{\\alpha_{t}}. \\tag{4}\n$$\n",
+ "text_format": "latex",
+ "bbox": [
+ 444,
+ 541,
+ 823,
+ 571
+ ],
+ "page_idx": 2
+ },
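The mapping in Eq. (4) and its inverse can be written down directly (a sketch; the function names are illustrative):

```python
def sigma_from_alpha(alpha_t):
    # Eq. (4): sigma = sqrt((1 - alpha_t) / alpha_t)
    return ((1.0 - alpha_t) / alpha_t) ** 0.5

def alpha_from_sigma(sigma):
    # Rearranging Eq. (4): alpha_t = 1 / (1 + sigma^2)
    return 1.0 / (1.0 + sigma ** 2)

# The two directions are inverses of each other.
assert abs(sigma_from_alpha(alpha_from_sigma(0.5)) - 0.5) < 1e-12
```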
+ {
+ "type": "text",
+ "text": "Thus, in order to employ a diffusion model for randomized smoothing at a given noise level $\\sigma$ , we first find the timestep $t^{\\star}$ such that $\\sigma^2 = \\frac{1 - \\alpha_{t^\\star}}{\\alpha_{t^\\star}}$ ; the precise formula for this equation will depend on the schedule of the $\\alpha_{t}$ terms used by the diffusion model, but it can typically be computed in closed form, even for reasonably complex diffusion schedules. Next, we compute",
+ "bbox": [
+ 169,
+ 577,
+ 823,
+ 637
+ ],
+ "page_idx": 2
+ },
+ {
+ "type": "equation",
+ "text": "\n$$\nx_{t^{\\star}} = \\sqrt{\\alpha_{t^{\\star}}} (x + \\delta), \\quad \\delta \\sim \\mathcal{N}(0, \\sigma^{2} \\mathbf{I}) \\tag{6}\n$$\n",
+ "text_format": "latex",
+ "bbox": [
+ 372,
+ 642,
+ 823,
+ 660
+ ],
+ "page_idx": 2
+ },
+ {
+ "type": "text",
+ "text": "and apply the diffusion denoiser on $x_{t^{\\star}}$ to obtain an estimate of the denoised sample",
+ "bbox": [
+ 169,
+ 665,
+ 725,
+ 680
+ ],
+ "page_idx": 2
+ },
+ {
+ "type": "equation",
+ "text": "\n$$\n\\hat{x} = \\operatorname{denoise}\\left(x_{t^{\\star}}; t^{\\star}\\right). \\tag{7}\n$$\n",
+ "text_format": "latex",
+ "bbox": [
+ 411,
+ 686,
+ 823,
+ 703
+ ],
+ "page_idx": 2
+ },
+ {
+ "type": "text",
+ "text": "And finally, we classify the estimated denoised image with an off-the-shelf classifier",
+ "bbox": [
+ 171,
+ 709,
+ 727,
+ 724
+ ],
+ "page_idx": 2
+ },
+ {
+ "type": "equation",
+ "text": "\n$$\ny = f_{\\mathrm{clf}}(\\hat{x}). \\tag{8}\n$$\n",
+ "text_format": "latex",
+ "bbox": [
+ 452,
+ 729,
+ 823,
+ 747
+ ],
+ "page_idx": 2
+ },
+ {
+ "type": "text",
+ "text": "The entirety of this algorithmic approach is shown in Figure 1.",
+ "bbox": [
+ 171,
+ 752,
+ 584,
+ 768
+ ],
+ "page_idx": 2
+ },
+ {
+ "type": "equation",
+ "text": "\n$$\nt^{\\star} = T \\left(1 - \\frac{2(1 + s) \\csc^{-1}\\left(\\sqrt{1 + \\sigma^{2}} \\csc\\left(\\frac{\\pi}{2 + 2s}\\right)\\right)}{\\pi}\\right). \\tag{5}\n$$\n",
+ "text_format": "latex",
+ "bbox": [
+ 318,
+ 834,
+ 823,
+ 880
+ ],
+ "page_idx": 2
+ },
+ {
+ "type": "text",
+ "text": "The actual formula here is unimportant and is shown only as an illustration of how such a computation can look in practice. Even when a closed-form solution does not exist, because the schedules for $\\alpha_{t}$ are monotonically decreasing, one can always find a solution via 1D root-finding methods if necessary.",
+ "bbox": [
+ 169,
+ 885,
+ 823,
+ 925
+ ],
+ "page_idx": 2
+ },
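Under the cosine schedule described in the footnote, the closed form in Eq. (5) can be checked against generic 1D root-finding (a sketch; the values T = 1000 and s = 0.008 are assumed here for illustration, not taken from the paper):

```python
import math

T, S = 1000, 0.008  # illustrative schedule parameters (assumed)

def alpha(t):
    # Cosine schedule: alpha_t = f(t) / f(0), f(t) = cos((t/T + s)/(1 + s) * pi/2)^2.
    f = lambda u: math.cos((u / T + S) / (1 + S) * math.pi / 2) ** 2
    return f(t) / f(0)

def t_star_closed_form(sigma):
    # Eq. (5), using arccsc(z) = arcsin(1/z).
    z = math.sqrt(1 + sigma ** 2) / math.sin(math.pi / (2 + 2 * S))
    return T * (1 - 2 * (1 + S) * math.asin(1 / z) / math.pi)

def t_star_bisect(sigma):
    # (1 - alpha(t)) / alpha(t) increases in t, so plain bisection finds t*.
    lo, hi = 0.0, float(T)
    for _ in range(100):
        mid = (lo + hi) / 2
        a = alpha(mid)
        if (1 - a) / a < sigma ** 2:
            lo = mid
        else:
            hi = mid
    return lo

# The two solvers agree, and the recovered timestep satisfies Eq. (4).
t_cf = t_star_closed_form(0.5)
assert abs(t_cf - t_star_bisect(0.5)) < 1e-5
```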
+ {
+ "type": "header",
+ "text": "Published as a conference paper at ICLR 2023",
+ "bbox": [
+ 171,
+ 32,
+ 478,
+ 47
+ ],
+ "page_idx": 2
+ },
+ {
+ "type": "page_footnote",
+ "text": "For example, in Nichol & Dhariwal (2021), the authors advocate for the schedule $\\alpha_{t} = f(t) / f(0)$ , where $f(t) = \\cos \\left(\\frac{t / T + s}{1 + s} \\cdot \\frac{\\pi}{2}\\right)^{2}$ for various values of $T$ and $s$ discussed in this reference. In this case, for a given desired value of $\\sigma^{2}$ , some algebra yields the solution for $t$",
+ "bbox": [
+ 169,
+ 776,
+ 823,
+ 828
+ ],
+ "page_idx": 2
+ },
+ {
+ "type": "page_number",
+ "text": "3",
+ "bbox": [
+ 493,
+ 948,
+ 503,
+ 959
+ ],
+ "page_idx": 2
+ },
+ {
+ "type": "text",
+ "text": "To obtain a robustness certificate, we repeat the above denoising process many times (e.g., 100,000) and compute the certification radius using the approach of Cohen et al. (2019) (note that since our diffusion model expects inputs in $[-1,1]^d$ , we then divide the certified radius by 2 to obtain a certified radius for inputs in $[0,1]^d$ as assumed in all prior work).",
+ "bbox": [
+ 169,
+ 103,
+ 826,
+ 161
+ ],
+ "page_idx": 3
+ },
+ {
+ "type": "text",
+ "text": "One-shot denoising. Readers familiar with diffusion models may recall that the standard process repeatedly applies a \"single-step\" denoising operation $x_{t-1} = d(x_t; t)$ that aims to convert a noisy image at some timestep $t$ to a (slightly less) noisy image at the previous timestep $t-1$ . The full diffusion process would then be defined by the following iterative procedure:",
+ "bbox": [
+ 169,
+ 174,
+ 823,
+ 229
+ ],
+ "page_idx": 3
+ },
+ {
+ "type": "equation",
+ "text": "\n$$\n\\tilde{x} = \\operatorname{denoise}_{\\mathrm{iter}}(x + \\delta; t) := d\\left(d(\\dots d(d(x + \\delta; t); t - 1) \\dots; 2); 1\\right).\n$$\n",
+ "text_format": "latex",
+ "bbox": [
+ 261,
+ 232,
+ 733,
+ 250
+ ],
+ "page_idx": 3
+ },
+ {
+ "type": "text",
+ "text": "In fact, each application of the one-step denoiser $d$ consists of two steps: (1) an estimation of the fully denoised image $x$ from the current timestep $t$ , and (2) computing a (properly weighted, according to the diffusion model) average between this estimated denoised image and the noisy image at the previous timestep $t - 1$ . Thus, instead of performing the entire $t$ -step diffusion process to denoise an image, it is also possible to run the diffusion step $d$ once and simply output the best estimate for the denoised image $x$ in one shot.",
+ "bbox": [
+ 169,
+ 251,
+ 826,
+ 335
+ ],
+ "page_idx": 3
+ },
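Since state-of-the-art diffusion models are trained to predict the noise (see footnote 1), the one-shot estimate of the clean image amounts to inverting Eq. (3) given that noise estimate (a sketch; the function name and the perfect-noise-estimate check are illustrative assumptions):

```python
import math

def one_shot_denoise(x_t, alpha_t, eps_hat):
    """One-shot estimate of x from Eq. (3) given a noise prediction eps_hat:
    x ~= (x_t - sqrt(1 - alpha_t) * eps_hat) / sqrt(alpha_t)."""
    a = math.sqrt(alpha_t)
    b = math.sqrt(1.0 - alpha_t)
    return [(xi - b * ei) / a for xi, ei in zip(x_t, eps_hat)]

# Sanity check: if the noise estimate equals the true noise, x is recovered exactly.
x, eps, alpha_t = [0.5, -0.25], [0.3, -1.2], 0.8
x_t = [math.sqrt(alpha_t) * xi + math.sqrt(1 - alpha_t) * ei for xi, ei in zip(x, eps)]
recovered = one_shot_denoise(x_t, alpha_t, eps)
assert all(abs(r - xi) < 1e-12 for r, xi in zip(recovered, x))
```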
+ {
+ "type": "text",
+ "text": "When a diffusion model generates images from scratch (i.e., the denoiser is applied to pure noise), the iterative process gives higher fidelity outputs than this one-shot approach (Ho et al., 2020). But here, where we aim to denoise one particular image, a one-shot approach has two advantages:",
+ "bbox": [
+ 169,
+ 340,
+ 825,
+ 383
+ ],
+ "page_idx": 3
+ },
+ {
+ "type": "list",
+ "sub_type": "text",
+ "list_items": [
+ "1. High accuracy: it turns out that standard pretrained classifiers are more accurate on one-shot denoised images compared to images denoised with the full $t$ -steps of denoising. We hypothesize this is due to the fact that when we first apply the single-step denoiser $d$ at timestep $t$ , the denoiser already has all the available information about $x$ . By applying the denoiser multiple times, we can only destroy information about $x$ as each step adds new (slightly smaller) Gaussian noise. In fact, by using the iterative $t$ -step denoising strategy, we are in essence pushing part of the classification task onto the denoiser, in order to decide how to fill in the image. Section 5 experimentally validates this hypothesis.",
+ "2. Improved efficiency: instead of requiring several hundred (or thousand) forward passes to denoise any given image, we only require one single pass. This is especially important when we perform many thousand predictions as is required for randomized smoothing to obtain a robustness certificate."
+ ],
+ "bbox": [
+ 207,
+ 393,
+ 823,
+ 563
+ ],
+ "page_idx": 3
+ },
+ {
+ "type": "text",
+ "text": "Related work. We are not the first to observe a connection between randomized smoothing and diffusion models. The work of Lee (2021) first studied this problem; however, they do not obtain significant accuracy improvements, likely because the diffusion models available at the time were not yet powerful enough. Separately, Nie et al. (2022) suggest that diffusion models might be able to provide strong empirical robustness to adversarial examples, as evaluated by robustness under adversarial attacks computed using existing attack algorithms; this is orthogonal to our results.",
+ "bbox": [
+ 169,
+ 578,
+ 826,
+ 676
+ ],
+ "page_idx": 3
+ },
+ {
+ "type": "text",
+ "text": "4 EVALUATION",
+ "text_level": 1,
+ "bbox": [
+ 171,
+ 695,
+ 316,
+ 710
+ ],
+ "page_idx": 3
+ },
+ {
+ "type": "text",
+ "text": "We evaluate diffusion denoised smoothing on two standard datasets, CIFAR-10 and ImageNet, and find it gives state-of-the-art certified $\\ell_2$ robustness on both. On CIFAR-10, we draw $N = 100,000$ noise samples and on ImageNet we draw $N = 10,000$ samples to certify the robustness following Cohen et al. (2019).",
+ "bbox": [
+ 169,
+ 726,
+ 823,
+ 782
+ ],
+ "page_idx": 3
+ },
+ {
+ "type": "text",
+ "text": "As is standard in prior work, we perform randomized smoothing for three different noise magnitudes, $\\sigma \\in \\{0.25, 0.5, 1.0\\}$ . For a fair comparison to prior work in Table 2 and Table 1, we give the best results reported in each paper across these same three noise magnitudes. Note that prior work only uses three levels of noise due to the computational overhead; one benefit of using a diffusion model is we could have used other amounts of noise without training a new denoiser model.",
+ "bbox": [
+ 169,
+ 789,
+ 823,
+ 859
+ ],
+ "page_idx": 3
+ },
+ {
+ "type": "text",
+ "text": "CIFAR-10 configuration. We denoise CIFAR-10 images with the 50M-parameter diffusion model from Nichol & Dhariwal (2021).<sup>3</sup> The denoised images are classified with an 87M-parameter",
+ "bbox": [
+ 169,
+ 873,
+ 823,
+ 902
+ ],
+ "page_idx": 3
+ },
+ {
+ "type": "header",
+ "text": "Published as a conference paper at ICLR 2023",
+ "bbox": [
+ 171,
+ 32,
+ 478,
+ 47
+ ],
+ "page_idx": 3
+ },
+ {
+ "type": "page_footnote",
+ "text": "<sup>3</sup>https://github.com/openai/improved-diffusion",
+ "bbox": [
+ 189,
+ 909,
+ 591,
+ 922
+ ],
+ "page_idx": 3
+ },
+ {
+ "type": "page_number",
+ "text": "4",
+ "bbox": [
+ 493,
+ 948,
+ 504,
+ 959
+ ],
+ "page_idx": 3
+ },
+ {
+ "type": "table",
+ "img_path": "images/47d8058751cf0c0c7f9a06d9dec61d52eb51588e9f3631c88eb19fec47e0df2c.jpg",
+ "table_caption": [],
+ "table_footnote": [
+ "Table 1: ImageNet certified top-1 accuracy for prior defenses based on randomized smoothing and denoised smoothing. Randomized smoothing techniques rely on special-purpose models (indicated by an empty circle). The work of Horváth et al. (2022b) is an exception in that it selectively applies either a robust or an accurate off-the-shelf classifier (indicated by a half-full circle). Denoised smoothing (Salman et al., 2020) uses an off-the-shelf classifier but trains its own denoiser (indicated by a half-full circle). Our base approach uses an off-the-shelf classifier and an off-the-shelf denoiser (indicated by a full circle). Each entry lists the certified accuracy, with the clean accuracy for that model in parentheses, using numbers taken from the respective papers."
+ ],
+ "table_body": "<table><tr><td>Method</td><td>Off-the-shelf</td><td>Extra data</td><td>0.5</td><td>1.0</td><td>1.5</td><td>2.0</td><td>3.0</td></tr><tr><td>PixelDP (Lecuyer et al., 2019)</td><td>○</td><td>×</td><td>(33.0)16.0</td><td>-</td><td>-</td><td></td><td></td></tr><tr><td>RS (Cohen et al., 2019)</td><td>○</td><td>×</td><td>(67.0)49.0</td><td>(57.0)37.0</td><td>(57.0)29.0</td><td>(44.0)19.0</td><td>(44.0)12.0</td></tr><tr><td>SmoothAdv (Salman et al., 2019)</td><td>○</td><td>×</td><td>(65.0)56.0</td><td>(54.0)43.0</td><td>(54.0)37.0</td><td>(40.0)27.0</td><td>(40.0)20.0</td></tr><tr><td>Consistency (Jeong &amp; Shin, 2020)</td><td>○</td><td>×</td><td>(55.0)50.0</td><td>(55.0)44.0</td><td>(55.0)34.0</td><td>(41.0)24.0</td><td>(41.0)17.0</td></tr><tr><td>MACER (Zhai et al., 2020)</td><td>○</td><td>×</td><td>(68.0)57.0</td><td>(64.0)43.0</td><td>(64.0)31.0</td><td>(48.0)25.0</td><td>(48.0)14.0</td></tr><tr><td>Boosting (Horváth et al., 2022a)</td><td>○</td><td>×</td><td>(65.6)57.0</td><td>(57.0)44.6</td><td>(57.0)38.4</td><td>(44.6)28.6</td><td>(38.6)21.2</td></tr><tr><td>DRT (Yang et al., 2021)</td><td>○</td><td>×</td><td>(52.2)46.8</td><td>(55.2)44.4</td><td>(49.8)39.8</td><td>(49.8)30.4</td><td>(49.8)23.4</td></tr><tr><td>SmoothMix (Jeong et al., 2021)</td><td>○</td><td>×</td><td>(55.0)50.0</td><td>(55.0)43.0</td><td>(55.0)38.0</td><td>(40.0)26.0</td><td>(40.0)20.0</td></tr><tr><td>ACES (Horváth et al., 2022b)</td><td>◐</td><td>×</td><td>(63.8)54.0</td><td>(57.2)42.2</td><td>(55.6)35.6</td><td>(39.8)25.6</td><td>(44.0)19.8</td></tr><tr><td>Denoised (Salman et al., 2020)</td><td>◐</td><td>×</td><td>(60.0)33.0</td><td>(38.0)14.0</td><td>(38.0)6.0</td><td>-</td><td>-</td></tr><tr><td>Lee (Lee, 2021)</td><td>○</td><td>×</td><td>41.0</td><td>24.0</td><td>11.0</td><td>-</td><td>-</td></tr><tr><td>Ours</td><td>●</td><td>✓</td><td>(82.8)71.1</td><td>(77.1)54.3</td><td>(77.1)38.1</td><td>(60.0)29.5</td><td>(60.0)13.1</td></tr></table>",
+ "bbox": [
+ 173,
+ 114,
+ 823,
+ 291
+ ],
+ "page_idx": 4
+ },
+ {
+ "type": "text",
+ "text": "Certified Accuracy at $\\varepsilon$ (\\%)",
+ "bbox": [
+ 589,
+ 99,
+ 736,
+ 112
+ ],
+ "page_idx": 4
+ },
+ {
+ "type": "text",
+ "text": "ViT-B/16 model (Dosovitskiy et al., 2021) that was pretrained on ImageNet-21k (Deng et al., 2009) (in $224 \\times 224$ resolution) and finetuned on CIFAR-10. We use the implementation from Hugging-Face<sup>4</sup> which reaches $97.9\\%$ test accuracy on CIFAR-10. In addition, we also report results with a standard 36M parameter Wide-ResNet-28-10 model (Zagoruyko & Komodakis, 2016) trained on CIFAR-10 to $95.2\\%$ accuracy.",
+ "bbox": [
+ 169,
+ 441,
+ 823,
+ 513
+ ],
+ "page_idx": 4
+ },
+ {
+ "type": "text",
+ "text": "As is typical, we report results with images normalized to $[0,1]^{32\\times 32\\times 3}$ . We obtain a throughput of 825 images per second through the diffusion model and ViT classifier on an A100 GPU at a batch size of 1,000. We report robust accuracy results averaged over the entire CIFAR-10 test set.",
+ "bbox": [
+ 169,
+ 518,
+ 826,
+ 561
+ ],
+ "page_idx": 4
+ },
+ {
+ "type": "text",
+ "text": "ImageNet configuration. We denoise ImageNet images with the 552M-parameter class-unconditional diffusion model from Dhariwal & Nichol (2021), and classify images with the 305M-parameter BEiT large model (Bao et al., 2022) which reaches a $88.6\\%$ top-1 validation accuracy using the implementation from timm (Wightman, 2019). We report results for our images when normalized to $[0,1]^{224\\times 224\\times 3}$ to allow us to compare to prior work. The overall latency of this joint denoise-then-classify model is 1.5 seconds per image on an A100 GPU at a batch size of 32. We report results averaged over 1,000 images randomly selected from the ImageNet test set.",
+ "bbox": [
+ 169,
+ 578,
+ 823,
+ 676
+ ],
+ "page_idx": 4
+ },
+ {
+ "type": "text",
+ "text": "4.1 RESULTS",
+ "text_level": 1,
+ "bbox": [
+ 171,
+ 695,
+ 279,
+ 709
+ ],
+ "page_idx": 4
+ },
+ {
+ "type": "text",
+ "text": "On both CIFAR-10 and ImageNet we outperform the state-of-the-art denoised smoothing approaches (i.e., Salman et al. (2020) and Lee (2021)) in every setting; see Table 1 and Table 2, as well as Figure 2 for detailed results. Perhaps even more impressively, we also outperform models trained with randomized smoothing at low $\\varepsilon$ distortions ( $\\varepsilon \\leq 0.5$ on CIFAR-10, and $\\varepsilon \\leq 2$ on ImageNet), and nearly match them at high $\\varepsilon$ . Even though these randomized smoothing techniques train their models end-to-end and specifically design these models to have high accuracy on Gaussian noise, we find that our approach's use of off-the-shelf models yields superior robustness (and much higher clean accuracy as an added bonus).",
+ "bbox": [
+ 169,
+ 720,
+ 823,
+ 834
+ ],
+ "page_idx": 4
+ },
+ {
+ "type": "text",
+ "text": "Interestingly, we find that using a diffusion model to perform the denoising step gives its most significant benefits when $\\sigma$ and $\\varepsilon$ are small: for example, while we reach $71.1\\%$ top-1 accuracy at $\\varepsilon = 0.5$ on ImageNet, an improvement over prior work of $+14$ percentage points, when we reach $\\varepsilon = 3$ our scheme is 7 percentage points worse than state-of-the-art. Our hypothesis for this effect,",
+ "bbox": [
+ 169,
+ 839,
+ 823,
+ 897
+ ],
+ "page_idx": 4
+ },
+ {
+ "type": "header",
+ "text": "Published as a conference paper at ICLR 2023",
+ "bbox": [
+ 173,
+ 32,
+ 478,
+ 47
+ ],
+ "page_idx": 4
+ },
+ {
+ "type": "page_footnote",
+ "text": "<sup>4</sup>https://huggingface.co/aaraki/vit-base-patch16-224-in21k-finetuned-cifar10",
+ "bbox": [
+ 189,
+ 909,
+ 857,
+ 924
+ ],
+ "page_idx": 4
+ },
+ {
+ "type": "page_number",
+ "text": "5",
+ "bbox": [
+ 493,
+ 948,
+ 504,
+ 959
+ ],
+ "page_idx": 4
+ },
+ {
+ "type": "table",
+ "img_path": "images/f9fe8e9eafe15d3a6bf4b0992bfc05c9580f2949b96af89337decab10d0ab200.jpg",
+ "table_caption": [
+ "Certified Accuracy at $\\varepsilon$ (\\%)"
+ ],
+ "table_footnote": [],
+ "table_body": "<table><tr><td>Method</td><td>Off-the-shelf</td><td>Extra data</td><td>0.25</td><td>0.5</td><td>0.75</td><td>1.0</td></tr><tr><td>PixelDP (Lecuyer et al., 2019)</td><td>○</td><td>×</td><td>(71.0)22.0</td><td>(44.0)2.0</td><td>-</td><td>-</td></tr><tr><td>RS (Cohen et al., 2019)</td><td>○</td><td>×</td><td>(75.0)61.0</td><td>(75.0)43.0</td><td>(65.0)32.0</td><td>(66.0)22.0</td></tr><tr><td>SmoothAdv (Salman et al., 2019)</td><td>○</td><td>×</td><td>(75.6)67.4</td><td>(75.6)57.6</td><td>(74.8)47.8</td><td>(57.4)38.3</td></tr><tr><td>SmoothAdv (Salman et al., 2019)</td><td>○</td><td>✓</td><td>(84.3)74.9</td><td>(80.1)63.4</td><td>(80.1)51.9</td><td>(62.2)39.6</td></tr><tr><td>Consistency (Jeong &amp; Shin, 2020)</td><td>○</td><td>×</td><td>(77.8)68.8</td><td>(75.8)58.1</td><td>(72.9)48.5</td><td>(52.3)37.8</td></tr><tr><td>MACER (Zhai et al., 2020)</td><td>○</td><td>×</td><td>(81.0)71.0</td><td>(81.0)59.0</td><td>(66.0)46.0</td><td>(66.0)38.0</td></tr><tr><td>Boosting (Horváth et al., 2022a)</td><td>○</td><td>×</td><td>(83.4)70.6</td><td>(76.8)60.4</td><td>(71.6)52.4</td><td>(52.4)38.8</td></tr><tr><td>DRT (Yang et al., 2021)</td><td>○</td><td>×</td><td>(81.5)70.4</td><td>(72.6)60.2</td><td>(71.9)50.5</td><td>(56.1)39.8</td></tr><tr><td>SmoothMix (Jeong et al., 2021)</td><td>○</td><td>×</td><td>(77.1)67.9</td><td>(77.1)57.9</td><td>(74.2)47.7</td><td>(61.8)37.2</td></tr><tr><td>ACES (Horváth et al., 2022b)</td><td>◐</td><td>×</td><td>(79.0)69.0</td><td>(74.2)57.2</td><td>(74.2)47.0</td><td>(58.6)37.8</td></tr><tr><td>Denoised (Salman et al., 2020)</td><td>◐</td><td>×</td><td>(72.0)56.0</td><td>(62.0)41.0</td><td>(62.0)28.0</td><td>(44.0)19.0</td></tr><tr><td>Lee (Lee, 2021)</td><td>○</td><td>×</td><td>60.0</td><td>42.0</td><td>28.0</td><td>19.0</td></tr><tr><td>Ours</td><td>●</td><td>✓</td><td>(88.1)76.7</td><td>(88.1)63.0</td><td>(88.1)45.3</td><td>(77.0)32.1</td></tr><tr><td>Ours (+finetuning)</td><td>○</td><td>✓</td><td>(91.2)79.3</td><td>(91.2)65.5</td><td>(87.3)48.7</td><td>(81.5)35.5</td></tr></table>",
+ "bbox": [
+ 174,
+ 114,
+ 823,
+ 339
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "text",
+ "text": "Table 2: CIFAR-10 certified accuracy for prior defenses from the literature. The columns have the same meaning as in Table 1.",
+ "bbox": [
+ 169,
+ 348,
+ 823,
+ 378
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "image",
+ "img_path": "images/8a7631a871ae98d7aeb865d54437127a0582e37d248e721c1b10ac28b22e8671.jpg",
+ "image_caption": [
+ "(a) CIFAR-10"
+ ],
+ "image_footnote": [],
+ "bbox": [
+ 184,
+ 398,
+ 480,
+ 563
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "image",
+ "img_path": "images/1d579ea9c27a95dd5c8bdb39219a937a1e731f683214796c5a0388f98d4ce238.jpg",
+ "image_caption": [
+ "(b) ImageNet",
+ "Figure 2: Certified accuracy as a function of the $\\ell_2$ adversarial perturbation bound, when varying levels of Gaussian noise $\\sigma \\in \\{0.25, 0.5, 1.0\\}$ . Bounds are computed with 100,000 samples per run on CIFAR-10, and 10,000 on ImageNet."
+ ],
+ "image_footnote": [],
+ "bbox": [
+ 516,
+ 398,
+ 810,
+ 561
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "text",
+ "text": "which we explore further in Section 5, is that diffusion models are prone to \"hallucinate\" content when denoising extremely noisy images. Thus, instead of reinforcing the signal from the correct class, the diffusion model generates a signal from another class, thereby fooling the classifier.",
+ "bbox": [
+ 169,
+ 664,
+ 823,
+ 708
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "text",
+ "text": "CIFAR-10 ablation. The off-the-shelf classifiers we use were pretrained on datasets larger than CIFAR-10 and ImageNet, respectively. It is well known that the use of additional data can boost robustness, both for empirical (Schmidt et al., 2018) and certified (Salman et al., 2019) defenses. To investigate the role played by the pretrained model, we repeat our CIFAR-10 experiment using a standard Wide-ResNet-28-10 model (Zagoruyko & Komodakis, 2016) trained solely on CIFAR-10 to $95.2\\%$ accuracy. The results with this classifier (see Table 6a) outperform prior denoised smoothing approaches, and are competitive with prior randomized smoothing results up to $\\epsilon = 0.5$ .",
+ "bbox": [
+ 169,
+ 720,
+ 823,
+ 820
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "text",
+ "text": "The ViT classifier outperforms the ResNet because it is more robust to the distribution shift introduced by the noising-and-denoising procedure. To alleviate this, we can further finetune the classifier on denoised images denoise $(x + \\delta)$ from the CIFAR-10 training set. This defense is thus not strictly \"off-the-shelf\" anymore (although finetuning is negligible compared to the training time of the diffusion model and classifier). Table 6b shows that a finetuned Wide-ResNet achieves comparable-or-better results than a non-finetuned ViT. Thus, with a minimal amount of training, we also surpass prior randomized smoothing results without relying on any external data. If we finetune",
+ "bbox": [
+ 169,
+ 825,
+ 826,
+ 925
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "header",
+ "text": "Published as a conference paper at ICLR 2023",
+ "bbox": [
+ 173,
+ 32,
+ 478,
+ 47
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "page_number",
+ "text": "6",
+ "bbox": [
+ 493,
+ 948,
+ 504,
+ 959
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "table",
+ "img_path": "images/d10378e8ba3e181c9318f314bc722595ba05309a31f97f7be01cee3f05c76b12.jpg",
+ "table_caption": [
+ "Certified Accuracy at $\\varepsilon$ (\\%)"
+ ],
+ "table_footnote": [],
+ "table_body": "<table><tr><td>Method</td><td>Off-the-shelf</td><td>Extra data</td><td>0.25</td><td>0.5</td><td>0.75</td><td>1.0</td></tr><tr><td>Wide-ResNet</td><td>●</td><td>×</td><td>(83.8)70.6</td><td>(83.8)55.7</td><td>(83.8)40.0</td><td>(65.8)26.1</td></tr><tr><td>ViT</td><td>●</td><td>✓</td><td>(88.1)76.7</td><td>(88.1)63.0</td><td>(88.1)45.3</td><td>(77.0)32.1</td></tr><tr><td>Wide-ResNet +finetune</td><td>○</td><td>×</td><td>(85.9)76.7</td><td>(85.9)63.8</td><td>(85.9)49.5</td><td>(74.5)36.4</td></tr><tr><td>ViT +finetune</td><td>○</td><td>✓</td><td>(91.2)79.3</td><td>(91.2)65.5</td><td>(91.2)48.7</td><td>(81.5)35.5</td></tr></table>",
+ "bbox": [
+ 174,
+ 103,
+ 823,
+ 188
+ ],
+ "page_idx": 6
+ },
+ {
+ "type": "text",
+ "text": "Table 3: Summary of our ablation on CIFAR-10. The diffusion model and Wide-ResNet classifier are trained solely on CIFAR-10, while the ViT classifier is pretrained on a larger dataset. The finetuning results are obtained by taking an off-the-shelf diffusion model and classifier, and tuning the classifier on noised-then-denoised images from CIFAR-10.",
+ "bbox": [
+ 169,
+ 198,
+ 823,
+ 255
+ ],
+ "page_idx": 6
+ },
+ {
+ "type": "text",
+ "text": "the ViT model (Table 6d), we further improve our defense's clean accuracy and certified robustness at $\\epsilon \\leq 0.5$ by a couple of percentage points. Our ablation is summarized in Table 3.",
+ "bbox": [
+ 169,
+ 287,
+ 823,
+ 316
+ ],
+ "page_idx": 6
+ },
+ {
+ "type": "text",
+ "text": "5 ANALYSIS AND DISCUSSION",
+ "text_level": 1,
+ "bbox": [
+ 171,
+ 343,
+ 444,
+ 358
+ ],
+ "page_idx": 6
+ },
+ {
+ "type": "text",
+ "text": "We achieve state-of-the-art certified accuracy using diffusion models despite the fact that we are not using these models as diffusion models, but rather as trivial denoisers. That is, instead of leveraging the fact that diffusion models can iteratively refine images across a range of noise levels, we simply apply the diffusion model once, for a fixed noise level, to perform one-shot denoising.",
+ "bbox": [
+ 169,
+ 378,
+ 823,
+ 436
+ ],
+ "page_idx": 6
+ },
+ {
+ "type": "text",
+ "text": "In this section we study why this approach outperforms prior work that trained straightforward denoisers for denoised smoothing (Salman et al., 2020), and why using diffusion models for one-shot denoising performs better than the more involved iterative diffusion process. Lastly, we show promising results of multi-step diffusion using an advanced deterministic sampler.",
+ "bbox": [
+ 169,
+ 441,
+ 825,
+ 498
+ ],
+ "page_idx": 6
+ },
+ {
+ "type": "text",
+ "text": "5.1 FULL DIFFUSION VERSUS ONE-SHOT DENOISING",
+ "text_level": 1,
+ "bbox": [
+ 171,
+ 521,
+ 549,
+ 535
+ ],
+ "page_idx": 6
+ },
+ {
+ "type": "text",
+ "text": "When used as generative models, diffusion models perform denoising through an iterative process that repeatedly refines an estimate of the final denoised image. When given an image $x_{t}$ with noise of magnitude corresponding to some diffusion timestep $t$ , the model first predicts a one-shot estimate of the denoised image $x_{0}$ , and then constructs an estimate $x_{t-1}$ of the noised image at timestep $t-1$ by interpolating (with appropriate weights) between $x_{0}$ , $x_{t}$ and fresh isotropic Gaussian noise $\\mathcal{N}(0, I)$ . The diffusion process is then applied recursively at timestep $t-1$ .",
+ "bbox": [
+ 169,
+ 549,
+ 823,
+ 633
+ ],
+ "page_idx": 6
+ },
+ {
+ "type": "text",
+ "text": "Intuitively, it may be expected that when using a diffusion model as a denoiser, one-shot denoising will produce more faithful results than the full iterative reverse-diffusion process. Indeed, each step of the reverse-diffusion process destroys information about the original image, since each step adds fresh Gaussian noise to the image. Thus, information theoretically at least, it should be easier to denoise an image in one-shot than over multiple iterations.",
+ "bbox": [
+ 169,
+ 640,
+ 826,
+ 710
+ ],
+ "page_idx": 6
+ },
+ {
1054
+ "type": "image",
1055
+ "img_path": "images/d88e26b9ac48a22eb01255900bdfc2c96588d5052a1ddf777941c5b38d5d0040.jpg",
1056
+ "image_caption": [
1057
+ "Figure 3: Intuitive examples for why multi-step denoised images are less recognized by the classifier. From left to right: clean images, noisy images with $\\sigma = 1.0$ , one-step denoised images, multi-step denoised images. For the denoised images, we show the prediction by the pretrained BEiT model."
1058
+ ],
1059
+ "image_footnote": [],
1060
+ "bbox": [
1061
+ 178,
1062
+ 729,
1063
+ 336,
1064
+ 864
1065
+ ],
1066
+ "page_idx": 6
1067
+ },
1068
+ {
1069
+ "type": "image",
1070
+ "img_path": "images/bb435de1f6c0cc1430543bcec0a29b3ca9e54ab2aecd08d199465dcba31e87d6.jpg",
1071
+ "image_caption": [],
1072
+ "image_footnote": [],
1073
+ "bbox": [
1074
+ 339,
1075
+ 729,
1076
+ 496,
1077
+ 864
1078
+ ],
1079
+ "page_idx": 6
1080
+ },
1081
+ {
1082
+ "type": "image",
1083
+ "img_path": "images/8c71606c5aea8cf7882d56d83bc8a52cd37b6fbfc05219a0fd0ae00b0fc1478b.jpg",
1084
+ "image_caption": [],
1085
+ "image_footnote": [],
1086
+ "bbox": [
1087
+ 500,
1088
+ 729,
1089
+ 658,
1090
+ 864
1091
+ ],
1092
+ "page_idx": 6
1093
+ },
1094
+ {
1095
+ "type": "image",
1096
+ "img_path": "images/7fe87f3a7ad7de011ba76578568315d19b568e2f33a4827751282d8e790fbae5.jpg",
1097
+ "image_caption": [],
1098
+ "image_footnote": [],
1099
+ "bbox": [
1100
+ 661,
1101
+ 729,
1102
+ 821,
1103
+ 864
1104
+ ],
1105
+ "page_idx": 6
1106
+ },
1107
+ {
1108
+ "type": "header",
1109
+ "text": "Published as a conference paper at ICLR 2023",
1110
+ "bbox": [
1111
+ 171,
1112
+ 32,
1113
+ 478,
1114
+ 47
1115
+ ],
1116
+ "page_idx": 6
1117
+ },
1118
+ {
1119
+ "type": "page_number",
1120
+ "text": "7",
1121
+ "bbox": [
1122
+ 493,
1123
+ 948,
1124
+ 503,
1125
+ 959
1126
+ ],
1127
+ "page_idx": 6
1128
+ },
1129
+ {
1130
+ "type": "text",
1131
+ "text": "We find that this is indeed the case. While the full reverse-diffusion process produces denoised images with more finegrained details (which is a good property for generating photorealistic images from scratch), these details are often not actually faithful to the original image we want to denoise. Instead, diffusion models are prone to \"hallucinate\" salient detailed features during the iterative denoise-and-noise process. We illustrate some examples of this hallucination phenomenon in Figure 3. Here, we noise an original image (on the left) with large Gaussian noise $(\\sigma = 1)$ and then apply either the full reverse-diffusion process (rightmost image) or a one-shot denoising at the appropriate timestep (2nd image to the right). As we can see, one-shot denoising produces mostly faithful, but blurry, reconstructions of the original image, with finegrained details lost due to noise. In contrast, iterative denoising \"invents\" new details that result in images that are ultimately more photorealistic but semantically different from the starting image. Additional examples (with multiple random seeds) are in Figure 4 and Figure 5 in the Appendix.",
1132
+ "bbox": [
1133
+ 169,
1134
+ 103,
1135
+ 826,
1136
+ 272
1137
+ ],
1138
+ "page_idx": 7
1139
+ },
1140
+ {
1141
+ "type": "text",
1142
+ "text": "5.2 TRAINING ON RESTRICTED NOISE LEVELS",
1143
+ "text_level": 1,
1144
+ "bbox": [
1145
+ 171,
1146
+ 286,
1147
+ 504,
1148
+ 301
1149
+ ],
1150
+ "page_idx": 7
1151
+ },
1152
+ {
1153
+ "type": "text",
1154
+ "text": "Given that one-shot denoising performs better than full multi-shot denoising, we now turn to understanding our next question: if we are just using diffusion models as one-shot denoisers, then why do diffusion models perform better compared to the straightforward denoisers trained in prior work (Salman et al., 2020)? To investigate this, we train seven new diffusion models on CIFAR-10 with varying levels of Gaussian noise—all the way towards a model trained on a single noise level, i.e., a straightforward denoiser.",
1155
+ "bbox": [
1156
+ 169,
1157
+ 311,
1158
+ 823,
1159
+ 397
1160
+ ],
1161
+ "page_idx": 7
1162
+ },
1163
+ {
1164
+ "type": "text",
1165
+ "text": "Recall that during standard training of a diffusion model, we sample a timestep $T$ uniformly from some range, add noise according to this timestep, and then train the model to predict the noise that has been added. The only difference between this process and the standard denoised smoothing training process (Salman et al., 2020) is the fact that here we are training on multiple levels of Gaussian noise simultaneously. Therefore we now perform a comparative analysis of models trained on more restrictive noise levels. We select seven different levels of noise:",
1166
+ "bbox": [
1167
+ 169,
1168
+ 402,
1169
+ 826,
1170
+ 488
1171
+ ],
1172
+ "page_idx": 7
1173
+ },
1174
+ {
1175
+ "type": "list",
1176
+ "sub_type": "text",
1177
+ "list_items": [
1178
+ "- Three models are trained exclusively on Gaussian noise of fixed standard deviation of respectively $\\sigma = 0.25$ , $\\sigma = 0.5$ , or $\\sigma = 1.0$ . This is identical to training a \"straightforward\" denoiser on noise of a fixed magnitude.",
1179
+ "- One model is trained on all three noise levels at the same time.",
1180
+ "- Two models are trained on noise uniformly selected from $\\sigma \\in [0, 0.25]$ , and $\\sigma \\in [0, 1.0]$ .",
1181
+ "- One model is trained using the full range of noise, from $\\sigma \\in [0,S]$ for some $S\\gg 1$ (the exact value of $S$ depends on the chosen noise schedule for the diffusion model)."
1182
+ ],
1183
+ "bbox": [
1184
+ 215,
1185
+ 496,
1186
+ 823,
1187
+ 604
1188
+ ],
1189
+ "page_idx": 7
1190
+ },
1191
+ {
1192
+ "type": "text",
1193
+ "text": "We then evaluate the clean accuracy of an off-the-shelf ViT model on each image when denoised (in one shot) with each of these diffusion models, where the images are noised with a standard deviation of either $\\sigma = 0.25$ , $\\sigma = 0.5$ , or $\\sigma = 1.0$ . The results are summarized in Table 4.",
1194
+ "bbox": [
1195
+ 169,
1196
+ 614,
1197
+ 823,
1198
+ 657
1199
+ ],
1200
+ "page_idx": 7
1201
+ },
1202
+ {
1203
+ "type": "table",
1204
+ "img_path": "images/efea2d7111e9fbdf1778d3e591ddac9da0bbeb87f936a55a8acd23c9767e5243.jpg",
1205
+ "table_caption": [],
1206
+ "table_footnote": [],
1207
+ "table_body": "<table><tr><td rowspan=\"2\">Training noise</td><td colspan=\"3\">Noise at evaluation</td></tr><tr><td>σ = 0.25</td><td>σ = 0.5</td><td>σ = 1.0</td></tr><tr><td>σ ∈ {0.25}</td><td>79.0</td><td>16.2</td><td>9.8</td></tr><tr><td>σ ∈ {0.5}</td><td>14.5</td><td>60.1</td><td>15.4</td></tr><tr><td>σ ∈ {1.0}</td><td>13.9</td><td>13.5</td><td>35.5</td></tr><tr><td>σ ∈ {0.25, 0.5, 1.0}</td><td>81.6</td><td>68.1</td><td>43.0</td></tr><tr><td>σ ∈ [0, 0.25]</td><td>84.5</td><td>14.5</td><td>9.9</td></tr><tr><td>σ ∈ [0, 1.0]</td><td>84.0</td><td>71.6</td><td>46.0</td></tr><tr><td>σ ∈ [0, S ≫ 1] (standard)</td><td>85.5</td><td>72.3</td><td>44.8</td></tr></table>",
1208
+ "bbox": [
1209
+ 299,
1210
+ 667,
1211
+ 699,
1212
+ 816
1213
+ ],
1214
+ "page_idx": 7
1215
+ },
1216
+ {
1217
+ "type": "text",
1218
+ "text": "Table 4: Clean accuracy of an off-the-shelf ViT classifier on images denoised with a diffusion model trained on restricted levels of Gaussian noise. Diffusion models trained on more diverse noise ranges yield higher accuracy on one-shot denoised images, even compared to models trained on the specific noise level used at evaluation time.",
1219
+ "bbox": [
1220
+ 169,
1221
+ 824,
1222
+ 823,
1223
+ 881
1224
+ ],
1225
+ "page_idx": 7
1226
+ },
1227
+ {
1228
+ "type": "text",
1229
+ "text": "As expected, training a new model on any one individual noise level, and then using that model to denoise images at that noise level, gives high downstream accuracy: for example, training a diffusion",
1230
+ "bbox": [
1231
+ 169,
1232
+ 895,
1233
+ 823,
1234
+ 926
1235
+ ],
1236
+ "page_idx": 7
1237
+ },
1238
+ {
1239
+ "type": "header",
1240
+ "text": "Published as a conference paper at ICLR 2023",
1241
+ "bbox": [
1242
+ 171,
1243
+ 32,
1244
+ 478,
1245
+ 47
1246
+ ],
1247
+ "page_idx": 7
1248
+ },
1249
+ {
1250
+ "type": "page_number",
1251
+ "text": "8",
1252
+ "bbox": [
1253
+ 493,
1254
+ 948,
1255
+ 504,
1256
+ 959
1257
+ ],
1258
+ "page_idx": 7
1259
+ },
1260
+ {
1261
+ "type": "text",
1262
+ "text": "model using $\\sigma = 0.25$ noise and then evaluating at this same noise level gives $79\\%$ accuracy. However if we then try and use this model to denoise images at a different noise level—say $\\sigma = 0.5$ the accuracy of the classifier drops to just $16\\%$ . If we train the diffusion model directly on $\\sigma = 0.5$ noise, we instead get a much better classification accuracy of $60.1\\%$ , but without good generalization to lower or higher noise levels. Similarly, training on noise of $\\sigma = 1.0$ only gives good results when denoising images with the same noise level.",
1263
+ "bbox": [
1264
+ 169,
1265
+ 103,
1266
+ 823,
1267
+ 188
1268
+ ],
1269
+ "page_idx": 8
1270
+ },
1271
+ {
1272
+ "type": "text",
1273
+ "text": "More surprisingly, however, is that training on all three noise levels simultaneously gives better accuracy for denoising images at each noise level, compared to a diffusion model trained specifically and solely for that noise level. For example, when denoising images with $\\sigma = 0.5$ Gaussian noise, we get a classification accuracy of $68.1\\%$ when the diffusion model is trained on that noise level and additional lower and higher noise levels—a value $8\\%$ higher than the accuracy of $60.1\\%$ we get when training the diffusion model solely on $\\sigma = 0.5$ noise.",
1274
+ "bbox": [
1275
+ 169,
1276
+ 194,
1277
+ 826,
1278
+ 279
1279
+ ],
1280
+ "page_idx": 8
1281
+ },
1282
+ {
1283
+ "type": "text",
1284
+ "text": "If we train on more granular noise levels, either in $[0, 0.25]$ or in the full interval $[0, 1]$ , the classification accuracy on denoised images at the three individual noise levels further increases by a few percentage points. Quite surprisingly, the standard training regime which trains the diffusion model on noise from a larger range $[0, S]$ for some $S \\gg 1$ further improves the denoising capabilities at low noise levels ( $\\sigma = 0.25$ and $\\sigma = 0.5$ ), but slightly harms the accuracy for larger noise ( $\\sigma = 1.0$ ).",
1285
+ "bbox": [
1286
+ 169,
1287
+ 285,
1288
+ 823,
1289
+ 356
1290
+ ],
1291
+ "page_idx": 8
1292
+ },
1293
+ {
1294
+ "type": "text",
1295
+ "text": "From this experiment, we can conclude that the (full) training process of diffusion models leads to much better, and more generalizable, one-shot denoising capabilities than when training a standalone denoiser on a single noise level as in prior work.",
1296
+ "bbox": [
1297
+ 169,
1298
+ 361,
1299
+ 823,
1300
+ 405
1301
+ ],
1302
+ "page_idx": 8
1303
+ },
1304
+ {
1305
+ "type": "text",
1306
+ "text": "5.3 ADVANCED DETERMINISTIC MULTI-STEP SAMPLER",
1307
+ "text_level": 1,
1308
+ "bbox": [
1309
+ 171,
1310
+ 420,
1311
+ 568,
1312
+ 434
1313
+ ],
1314
+ "page_idx": 8
1315
+ },
1316
+ {
1317
+ "type": "text",
1318
+ "text": "In section 5.1, we found that the denoised images from full multi-step diffusion have a tendency to deviate from the original clean image. This could be due to the stochastic nature of the full reverse-diffusion process, since at each step a random noise is added. We notice a line of work (Song et al., 2021; Karras et al., 2022) on fast deterministic sampling of diffusion models. We show that with such an advanced sampler, multi-step diffusion is able to beat one-shot denoising.",
1319
+ "bbox": [
1320
+ 169,
1321
+ 446,
1322
+ 823,
1323
+ 516
1324
+ ],
1325
+ "page_idx": 8
1326
+ },
1327
+ {
1328
+ "type": "text",
1329
+ "text": "We consider the deterministic EDM sampler proposed by Karras et al. (2022). We compare the recognizability of images denoised by EDM sampler and one-shot denoising. We adapt EDM sampler for image denoising by setting the maximum noise sigma of the sampling noise schedule to be the noise level found by Equation 4. We use the suggested sampler setting from Karras et al. (2022) on CIFAR-10, where 18 reverse steps with 35 evaluations of the diffusion model are performed for each example. The result is summarized in Table 5. We can see that the deterministic EDM sampler is superior over one-shot denoising.",
1330
+ "bbox": [
1331
+ 169,
1332
+ 522,
1333
+ 826,
1334
+ 621
1335
+ ],
1336
+ "page_idx": 8
1337
+ },
1338
+ {
1339
+ "type": "table",
1340
+ "img_path": "images/14f7b7a1e97c79d188628dc73640a3099659a9ed186727bc2409ab2277140fe9.jpg",
1341
+ "table_caption": [],
1342
+ "table_footnote": [],
1343
+ "table_body": "<table><tr><td>Classifier</td><td>Method</td><td>σ = 0.25</td><td>σ = 0.5</td><td>σ = 1.0</td></tr><tr><td rowspan=\"2\">Wide-ResNet</td><td>One-shot denoising</td><td>81.3</td><td>64.0</td><td>35.8</td></tr><tr><td>EDM sampler</td><td>85.0</td><td>73.0</td><td>53.8</td></tr><tr><td rowspan=\"2\">VIT</td><td>One-shot denoising</td><td>84.9</td><td>71.6</td><td>50.8</td></tr><tr><td>EDM sampler</td><td>86.1</td><td>73.1</td><td>54.0</td></tr></table>",
1344
+ "bbox": [
1345
+ 264,
1346
+ 630,
1347
+ 733,
1348
+ 724
1349
+ ],
1350
+ "page_idx": 8
1351
+ },
1352
+ {
1353
+ "type": "text",
1354
+ "text": "Table 5: Clean accuracy (average over 5 runs) of off-the-shelf CIFAR-10 classifiers evaluated on images denoised by one-shot denoising and EDM sampler (Karras et al., 2022).",
1355
+ "bbox": [
1356
+ 169,
1357
+ 733,
1358
+ 823,
1359
+ 762
1360
+ ],
1361
+ "page_idx": 8
1362
+ },
1363
+ {
1364
+ "type": "text",
1365
+ "text": "6 CONCLUSION",
1366
+ "text_level": 1,
1367
+ "bbox": [
1368
+ 171,
1369
+ 787,
1370
+ 320,
1371
+ 803
1372
+ ],
1373
+ "page_idx": 8
1374
+ },
1375
+ {
1376
+ "type": "text",
1377
+ "text": "At present, training certified adversarially robust deep learning models requires specialized techniques explicitly designed for the purpose of performing provably robust classification (Cohen et al., 2019). While this has proven effective, these models are extremely difficult to train to high accuracy, and degrade clean accuracy significantly.",
1378
+ "bbox": [
1379
+ 169,
1380
+ 818,
1381
+ 823,
1382
+ 876
1383
+ ],
1384
+ "page_idx": 8
1385
+ },
1386
+ {
1387
+ "type": "text",
1388
+ "text": "We suggest an alternative approach is possible. By exclusively making use of off-the-shelf models designed to be state-of-the-art at classification and image denoising, we can leverage the vast resources dedicated to training highly capable models for the new purpose of robust classification.",
1389
+ "bbox": [
1390
+ 169,
1391
+ 881,
1392
+ 823,
1393
+ 925
1394
+ ],
1395
+ "page_idx": 8
1396
+ },
1397
+ {
1398
+ "type": "header",
1399
+ "text": "Published as a conference paper at ICLR 2023",
1400
+ "bbox": [
1401
+ 171,
1402
+ 32,
1403
+ 478,
1404
+ 47
1405
+ ],
1406
+ "page_idx": 8
1407
+ },
1408
+ {
1409
+ "type": "page_number",
1410
+ "text": "9",
1411
+ "bbox": [
1412
+ 493,
1413
+ 948,
1414
+ 504,
1415
+ 959
1416
+ ],
1417
+ "page_idx": 8
1418
+ },
1419
+ {
1420
+ "type": "text",
1421
+ "text": "REFERENCES",
1422
+ "text_level": 1,
1423
+ "bbox": [
1424
+ 174,
1425
+ 102,
1426
+ 287,
1427
+ 117
1428
+ ],
1429
+ "page_idx": 9
1430
+ },
1431
+ {
1432
+ "type": "list",
1433
+ "sub_type": "ref_text",
1434
+ "list_items": [
1435
+ "Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In International Conference on Machine Learning, pp. 274-283. PMLR, 2018.",
1436
+ "Hangbo Bao, Li Dong, and Furu Wei. BEiT: BERT pre-training of image transformers. In International Conference on Learning Representations, 2022.",
1437
+ "Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Šrndić, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. Evasion attacks against machine learning at test time. In Joint European conference on machine learning and knowledge discovery in databases, pp. 387-402. Springer, 2013.",
1438
+ "Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE symposium on security and privacy, pp. 39-57. IEEE, 2017.",
1439
+ "Jeremy Cohen, Elan Rosenfeld, and Zico Kolter. Certified adversarial robustness via randomized smoothing. In International Conference on Machine Learning, pp. 1310-1320. PMLR, 2019.",
1440
+ "Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248-255. IEEE, 2009.",
1441
+ "Prafulla Dhariwal and Alexander Nichol. Diffusion models beat GANs on image synthesis. Advances in Neural Information Processing Systems, 34, 2021.",
1442
+ "Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021.",
1443
+ "Sven Gowal, Krishnamurthy Dvijotham, Robert Stanforth, Rudy Bunel, Chongli Qin, Jonathan Uesato, Relja Arandjelovic, Timothy Mann, and Pushmeet Kohli. On the effectiveness of interval bound propagation for training verifiably robust models. arXiv preprint arXiv:1810.12715, 2018.",
1444
+ "Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840-6851, 2020.",
1445
+ "Miklos Z Horváth, Mark Niklas Müller, Marc Fischer, and Martin Vechev. Boosting randomized smoothing with variance reduced classifiers. In International Conference on Learning Representations, 2022a.",
1446
+ "Miklos Z Horváth, Mark Niklas Müller, Marc Fischer, and Martin Vechev. Robust and accurate-compositional architectures for randomized smoothing. arXiv preprint arXiv:2204.00487, 2022b.",
1447
+ "Jongheon Jeong and Jinwoo Shin. Consistency regularization for certified robustness of smoothed classifiers. Advances in Neural Information Processing Systems, 33:10558-10570, 2020.",
1448
+ "Jongheon Jeong, Sejun Park, Minkyu Kim, Heung-Chang Lee, Do-Guk Kim, and Jinwoo Shin. Smoothmix: Training confidence-calibrated smoothed classifiers for certified robustness. Advances in Neural Information Processing Systems, 34:30153-30168, 2021.",
1449
+ "Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine. Elucidating the design space of diffusion-based generative models. Advances in Neural Information Processing Systems, 2022.",
1450
+ "Mathias Lecuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel Hsu, and Suman Jana. Certified robustness to adversarial examples with differential privacy. In 2019 IEEE Symposium on Security and Privacy, pp. 656-672. IEEE, 2019.",
1451
+ "Kyungmin Lee. Provable defense by denoised smoothing with learned score function. ICLR Workshop on Security and Safety in Machine Learning Systems, 2021."
1452
+ ],
1453
+ "bbox": [
1454
+ 171,
1455
+ 125,
1456
+ 825,
1457
+ 924
1458
+ ],
1459
+ "page_idx": 9
1460
+ },
1461
+ {
1462
+ "type": "header",
1463
+ "text": "Published as a conference paper at ICLR 2023",
1464
+ "bbox": [
1465
+ 171,
1466
+ 32,
1467
+ 478,
1468
+ 47
1469
+ ],
1470
+ "page_idx": 9
1471
+ },
1472
+ {
1473
+ "type": "page_number",
1474
+ "text": "10",
1475
+ "bbox": [
1476
+ 490,
1477
+ 946,
1478
+ 509,
1479
+ 960
1480
+ ],
1481
+ "page_idx": 9
1482
+ },
1483
+ {
1484
+ "type": "list",
1485
+ "sub_type": "ref_text",
1486
+ "list_items": [
1487
+ "Matthew Mirman, Timon Gehr, and Martin Vechev. Differentiable abstract interpretation for provably robust neural networks. In International Conference on Machine Learning, pp. 3578-3586. PMLR, 2018.",
1488
+ "Alexander Quinn Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. In International Conference on Machine Learning, pp. 8162-8171. PMLR, 2021.",
1489
+ "Weili Nie, Brandon Guo, Yujia Huang, Chaowei Xiao, Arash Vahdat, and Anima Anandkumar. Diffusion models for adversarial purification. arXiv preprint arXiv:2205.07460, 2022.",
1490
+ "Hadi Salman, Jerry Li, Ilya Razenshteyn, Pengchuan Zhang, Huan Zhang, Sebastien Bubeck, and Greg Yang. Provably robust deep learning via adversarially trained smoothed classifiers. Advances in Neural Information Processing Systems, 32, 2019.",
1491
+ "Hadi Salman, Mingjie Sun, Greg Yang, Ashish Kapoor, and J Zico Kolter. Denoised smoothing: A provable defense for pretrained classifiers. Advances in Neural Information Processing Systems, 33:21945-21957, 2020.",
1492
+ "Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, and Aleksander Madry. Adversarily robust generalization requires more data. Advances in Neural Information Processing Systems, 31, 2018.",
1493
+ "Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learning, pp. 2256-2265. PMLR, 2015.",
1494
+ "Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. In International Conference on Learning Representations, 2021.",
1495
+ "Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In International Conference on Learning Representations, 2014.",
1496
+ "Jonathan Uesato, Brendan O'donoghue, Pushmeet Kohli, and Aaron Oord. Adversarial risk and the dangers of evaluating against weak attacks. In International Conference on Machine Learning, pp. 5025-5034. PMLR, 2018.",
1497
+ "Ross Wightman. Pytorch image models. https://github.com/rwrightman/pytorch-image-models, 2019.",
1498
+ "Zhuolin Yang, Linyi Li, Xiaojun Xu, Bhavya Kailkhura, Tao Xie, and Bo Li. On the certified robustness for ensemble models and beyond. arXiv preprint arXiv:2107.10873, 2021.",
1499
+ "Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In British Machine Vision Conference, 2016.",
1500
+ "Runtian Zhai, Chen Dan, Di He, Huan Zhang, Boqing Gong, Pradeep Ravikumar, Cho-Jui Hsieh, and Liwei Wang. MACER: Attack-free and scalable robust training via maximizing certified radius. In International Conference on Learning Representations, 2020."
1501
+ ],
1502
+ "bbox": [
1503
+ 171,
1504
+ 102,
1505
+ 825,
1506
+ 734
1507
+ ],
1508
+ "page_idx": 10
1509
+ },
1510
+ {
1511
+ "type": "header",
1512
+ "text": "Published as a conference paper at ICLR 2023",
1513
+ "bbox": [
1514
+ 171,
1515
+ 32,
1516
+ 478,
1517
+ 47
1518
+ ],
1519
+ "page_idx": 10
1520
+ },
1521
+ {
1522
+ "type": "page_number",
1523
+ "text": "11",
1524
+ "bbox": [
1525
+ 490,
1526
+ 946,
1527
+ 506,
1528
+ 959
1529
+ ],
1530
+ "page_idx": 10
1531
+ },
1532
+ {
1533
+ "type": "text",
1534
+ "text": "A APPENDIX",
1535
+ "text_level": 1,
1536
+ "bbox": [
1537
+ 171,
1538
+ 102,
1539
+ 299,
1540
+ 118
1541
+ ],
1542
+ "page_idx": 11
1543
+ },
1544
+ {
1545
+ "type": "table",
1546
+ "img_path": "images/ae980f721a00d0946d4c929344dc90669ced1dbb43c1ee19744df32e1b00adfe.jpg",
1547
+ "table_caption": [],
1548
+ "table_footnote": [],
1549
+ "table_body": "<table><tr><td rowspan=\"2\">Noise</td><td colspan=\"5\">Certified Accuracy at ε (%)</td></tr><tr><td>0.0</td><td>0.25</td><td>0.5</td><td>0.75</td><td>1.0</td></tr><tr><td>σ = 0.25</td><td>83.8</td><td>70.6</td><td>55.7</td><td>40.0</td><td>0.0</td></tr><tr><td>σ = 0.5</td><td>65.8</td><td>54.7</td><td>43.7</td><td>34.2</td><td>26.1</td></tr><tr><td>σ = 1.0</td><td>33.2</td><td>28.0</td><td>22.8</td><td>18.0</td><td>13.6</td></tr></table>",
1550
+ "bbox": [
1551
+ 171,
1552
+ 136,
1553
+ 488,
1554
+ 224
1555
+ ],
1556
+ "page_idx": 11
1557
+ },
1558
+ {
1559
+ "type": "table",
1560
+ "img_path": "images/8ff339466d350b5973783f1d6fe06c50e5ce5ef93c6f59a845119c4dc36c5237.jpg",
1561
+ "table_caption": [
1562
+ "(a) Wide-ResNet"
1563
+ ],
1564
+ "table_footnote": [],
1565
+ "table_body": "<table><tr><td rowspan=\"2\">Noise</td><td colspan=\"5\">Certified Accuracy at ε (%)</td></tr><tr><td>0.0</td><td>0.25</td><td>0.5</td><td>0.75</td><td>1.0</td></tr><tr><td>σ = 0.25</td><td>85.9</td><td>76.7</td><td>63.8</td><td>49.5</td><td>0.0</td></tr><tr><td>σ = 0.5</td><td>74.5</td><td>66.0</td><td>56.1</td><td>45.7</td><td>36.4</td></tr><tr><td>σ = 1.0</td><td>55.1</td><td>48.7</td><td>42.3</td><td>35.8</td><td>29.9</td></tr></table>",
1566
+ "bbox": [
1567
+ 506,
1568
+ 137,
1569
+ 826,
1570
+ 224
1571
+ ],
1572
+ "page_idx": 11
1573
+ },
1574
+ {
1575
+ "type": "table",
1576
+ "img_path": "images/33e98ea59a00ab3ae5173f3af7ae3260deb54a0eb7e3e2d50fc67ca6bb1b9eb1.jpg",
1577
+ "table_caption": [
1578
+ "(b) Finetuned Wide-ResNet"
1579
+ ],
1580
+ "table_footnote": [],
1581
+ "table_body": "<table><tr><td rowspan=\"2\">Noise</td><td colspan=\"5\">Certified Accuracy at ε (%)</td></tr><tr><td>0.0</td><td>0.25</td><td>0.5</td><td>0.75</td><td>1.0</td></tr><tr><td>σ = 0.25</td><td>88.1</td><td>76.7</td><td>63.0</td><td>45.3</td><td>0.0</td></tr><tr><td>σ = 0.5</td><td>77.0</td><td>65.8</td><td>53.4</td><td>41.8</td><td>32.1</td></tr><tr><td>σ = 1.0</td><td>49.5</td><td>40.3</td><td>33.3</td><td>26.1</td><td>20.2</td></tr></table>",
1582
+ "bbox": [
1583
+ 171,
1584
+ 257,
1585
+ 488,
1586
+ 345
1587
+ ],
1588
+ "page_idx": 11
1589
+ },
1590
+ {
1591
+ "type": "table",
1592
+ "img_path": "images/70889ccb362e0caee30d285e4289ab48ae1b97ff0c6a2862856c7763f4bc2ac0.jpg",
1593
+ "table_caption": [
1594
+ "(c) ViT"
1595
+ ],
1596
+ "table_footnote": [],
1597
+ "table_body": "<table><tr><td rowspan=\"2\">Noise</td><td colspan=\"5\">Certified Accuracy at ε (%)</td></tr><tr><td>0.0</td><td>0.25</td><td>0.5</td><td>0.75</td><td>1.0</td></tr><tr><td>σ = 0.25</td><td>91.2</td><td>79.3</td><td>65.5</td><td>48.7</td><td>0.0</td></tr><tr><td>σ = 0.5</td><td>81.5</td><td>67.0</td><td>56.1</td><td>45.3</td><td>35.5</td></tr><tr><td>σ = 1.0</td><td>65.1</td><td>48.4</td><td>41.7</td><td>35.2</td><td>29.0</td></tr></table>",
1598
+ "bbox": [
1599
+ 506,
1600
+ 257,
1601
+ 826,
1602
+ 345
1603
+ ],
1604
+ "page_idx": 11
1605
+ },
1606
+ {
1607
+ "type": "table",
1608
+ "img_path": "images/e1dddbcecd92a268962f563e232d8e30b5cb83e86dc5fa86a6b84fc52dc5641c.jpg",
1609
+ "table_caption": [
1610
+ "(d) Finetuned ViT",
1611
+ "Table 6: Certified accuracy of four different classifiers on CIFAR-10 at varying levels of Gaussian noise $\\sigma$ ,all using the same diffusion model.",
1612
+ "Certified Accuracy at $\\varepsilon$ (\\%)"
1613
+ ],
1614
+ "table_footnote": [],
1615
+ "table_body": "<table><tr><td>Noise</td><td>0.0</td><td>0.5</td><td>1.0</td><td>1.5</td><td>2.0</td><td>3.0</td></tr><tr><td>σ = 0.25</td><td>82.8</td><td>71.1</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td></tr><tr><td>σ = 0.5</td><td>77.1</td><td>67.8</td><td>54.3</td><td>38.1</td><td>0.0</td><td>0.0</td></tr><tr><td>σ = 1.0</td><td>60.0</td><td>50.0</td><td>42.0</td><td>35.5</td><td>29.5</td><td>13.1</td></tr></table>",
1616
+ "bbox": [
1617
+ 315,
1618
+ 428,
1619
+ 681,
1620
+ 500
1621
+ ],
1622
+ "page_idx": 11
1623
+ },
1624
+ {
1625
+ "type": "text",
1626
+ "text": "Table 7: Certified accuracy on ImageNet for varying levels of Gaussian noise $\\sigma$ .",
1627
+ "bbox": [
1628
+ 233,
1629
+ 508,
1630
+ 759,
1631
+ 525
1632
+ ],
1633
+ "page_idx": 11
1634
+ },
1635
+ {
1636
+ "type": "header",
1637
+ "text": "Published as a conference paper at ICLR 2023",
1638
+ "bbox": [
1639
+ 171,
1640
+ 32,
1641
+ 478,
1642
+ 47
1643
+ ],
1644
+ "page_idx": 11
1645
+ },
1646
+ {
1647
+ "type": "page_number",
1648
+ "text": "12",
1649
+ "bbox": [
1650
+ 490,
1651
+ 946,
1652
+ 509,
1653
+ 959
1654
+ ],
1655
+ "page_idx": 11
1656
+ },
1657
+ {
1658
+ "type": "image",
1659
+ "img_path": "images/847dea4a7b6eb768732f24bbe5b4a11472b85fbf65c655bd2b9b2975206ccaa0.jpg",
1660
+ "image_caption": [
1661
+ "(a) One-shot denoised images $(\\sigma = 1.00)$"
1662
+ ],
1663
+ "image_footnote": [],
1664
+ "bbox": [
1665
+ 253,
1666
+ 123,
1667
+ 715,
1668
+ 476
1669
+ ],
1670
+ "page_idx": 12
1671
+ },
1672
+ {
1673
+ "type": "image",
1674
+ "img_path": "images/ebaef54e3c3570e242332e48842b612d6b67fab489382519c3b50c32f8b812db.jpg",
1675
+ "image_caption": [
1676
+ "(b) Multi-step denoised images $(\\sigma = 1.00)$",
1677
+ "Figure 4: Qualitative comparison of one-shot denoising and multi-step denoising. We show denoised images under random Gaussian noise ( $\\sigma = 1.00$ ). A green border is applied when the denoised images are correctly classified while a red border means that the classifier misclassifies the image."
1678
+ ],
1679
+ "image_footnote": [],
1680
+ "bbox": [
1681
+ 251,
1682
+ 498,
1683
+ 712,
1684
+ 849
1685
+ ],
1686
+ "page_idx": 12
1687
+ },
1688
+ {
1689
+ "type": "header",
1690
+ "text": "Published as a conference paper at ICLR 2023",
1691
+ "bbox": [
1692
+ 173,
1693
+ 32,
1694
+ 478,
1695
+ 47
1696
+ ],
1697
+ "page_idx": 12
1698
+ },
1699
+ {
1700
+ "type": "page_number",
1701
+ "text": "13",
1702
+ "bbox": [
1703
+ 490,
1704
+ 948,
1705
+ 508,
1706
+ 959
1707
+ ],
1708
+ "page_idx": 12
1709
+ },
1710
+ {
1711
+ "type": "image",
1712
+ "img_path": "images/858c09ae7f926f08f6a9f17ee32618fc0b5bc8bbf981218b2b7468ccb1d00d6e.jpg",
1713
+ "image_caption": [
1714
+ "Figure 5: Additional intuitive examples for why multi-step denoised images are less recognized by the classifier. From left to right: clean images, noisy images with $\\sigma = 1.0$ , one-step denoised images, multi-step denoised images. For the denoised images, we show the prediction by the pretrained BEiT model."
1715
+ ],
1716
+ "image_footnote": [],
1717
+ "bbox": [
1718
+ 176,
1719
+ 340,
1720
+ 823,
1721
+ 612
1722
+ ],
1723
+ "page_idx": 13
1724
+ },
1725
+ {
1726
+ "type": "header",
1727
+ "text": "Published as a conference paper at ICLR 2023",
1728
+ "bbox": [
1729
+ 173,
1730
+ 32,
1731
+ 478,
1732
+ 47
1733
+ ],
1734
+ "page_idx": 13
1735
+ },
1736
+ {
1737
+ "type": "page_number",
1738
+ "text": "14",
1739
+ "bbox": [
1740
+ 490,
1741
+ 946,
1742
+ 509,
1743
+ 959
1744
+ ],
1745
+ "page_idx": 13
1746
+ }
1747
+ ]
2023/(Certified!!) Adversarial Robustness for Free!/7064d377-0583-493a-b0c4-c47e08357804_model.json ADDED
@@ -0,0 +1,2230 @@
+ [
+ [
+ {
+ "type": "header",
+ "bbox": [
+ 0.173,
+ 0.033,
+ 0.48,
+ 0.049
+ ],
+ "angle": 0,
+ "content": "Published as a conference paper at ICLR 2023"
+ },
+ {
+ "type": "title",
+ "bbox": [
+ 0.172,
+ 0.1,
+ 0.838,
+ 0.123
+ ],
+ "angle": 0,
+ "content": "(CERTIFIED!!) ADVERSARIAL ROBUSTNESS FOR FREE!"
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.18,
+ 0.148,
+ 0.704,
+ 0.178
+ ],
+ "angle": 0,
+ "content": "Nicholas Carlini\\*1 Florian Tramèr\\*1 Krishnamurthy (Dj) Dvijotham\\* Leslie Rice2 Mingjie Sun2 J. Zico Kolter\\*2,3"
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.184,
+ 0.178,
+ 0.645,
+ 0.193
+ ],
+ "angle": 0,
+ "content": "\\(^{1}\\)Google \\(^{2}\\)Carnegie Mellon University \\(^{3}\\)Bosch Center for AI"
+ },
+ {
+ "type": "title",
+ "bbox": [
+ 0.451,
+ 0.217,
+ 0.548,
+ 0.232
+ ],
+ "angle": 0,
+ "content": "ABSTRACT"
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.23,
+ 0.248,
+ 0.768,
+ 0.401
+ ],
+ "angle": 0,
+ "content": "In this paper we show how to achieve state-of-the-art certified adversarial robustness to \\(\\ell_2\\)-norm bounded perturbations by relying exclusively on off-the-shelf pretrained models. To do so, we instantiate the denoised smoothing approach of Salman et al. (2020) by combining a pretrained denoising diffusion probabilistic model and a standard high-accuracy classifier. This allows us to certify \\(71\\%\\) accuracy on ImageNet under adversarial perturbations constrained to be within an \\(\\ell_2\\) norm of \\(\\varepsilon = 0.5\\), an improvement of 14 percentage points over the prior certified SoTA using any approach, or an improvement of 30 percentage points over denoised smoothing. We obtain these results using only pretrained diffusion models and image classifiers, without requiring any fine tuning or retraining of model parameters."
+ },
+ {
+ "type": "title",
+ "bbox": [
+ 0.174,
+ 0.427,
+ 0.338,
+ 0.441
+ ],
+ "angle": 0,
+ "content": "1 INTRODUCTION"
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.17,
+ 0.457,
+ 0.825,
+ 0.529
+ ],
+ "angle": 0,
+ "content": "Evaluating the robustness of deep learning models to norm bounded adversarial perturbations has been shown to be difficult (Athalye et al., 2018; Uesato et al., 2018). Certified defenses—such as those based on bound propagation (Gowal et al., 2018; Mirman et al., 2018) or randomized smoothing (Lecuyer et al., 2019; Cohen et al., 2019)—offer provable guarantees that a model's predictions are robust to norm-bounded adversarial perturbations, for a large fraction of examples in the test set."
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.17,
+ 0.535,
+ 0.825,
+ 0.605
+ ],
+ "angle": 0,
+ "content": "The current state-of-the-art approaches to certify robustness to adversarial perturbations bounded in the \\(\\ell_2\\) norm rely on randomized smoothing (Lecuyer et al., 2019; Cohen et al., 2019). By taking a majority vote over the labels predicted by a \"base classifier\" under random Gaussian perturbations of the input, if the correct class is output sufficiently often, then the defense's output on the original un-noised input is guaranteed to be robust to \\(\\ell_2\\) norm bounded adversarial perturbations."
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.17,
+ 0.61,
+ 0.825,
+ 0.695
+ ],
+ "angle": 0,
+ "content": "Denoised smoothing (Salman et al., 2020) is a certified defense that splits this one-step process into two. After randomly perturbing an input, the defense first applies a denoiser model that aims to remove the added noise, followed by a standard classifier that guesses a label given this noise-then-denoised input. This enables applying randomized smoothing to pretrained black-box base classifiers, as long as the denoiser can produce clean images close to the base classifier's original training distribution."
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.17,
+ 0.701,
+ 0.825,
+ 0.8
+ ],
+ "angle": 0,
+ "content": "We observe that the recent line of work on denoising diffusion probabilistic models (Sohl-Dickstein et al., 2015; Ho et al., 2020; Nichol & Dhariwal, 2021)—which achieve state-of-the-art results on image generation—are a perfect match for the denoising step in a denoised smoothing defense. A forward diffusion process takes a source data distribution (e.g., images from some data distribution) and then adds Gaussian noise until the distribution converges to a high-variance isotropic Gaussian. Denoising diffusion models are trained to invert this process. Thus, we can use a diffusion model as a denoiser that recovers high quality denoised inputs from inputs perturbed with Gaussian noise."
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.17,
+ 0.806,
+ 0.827,
+ 0.905
+ ],
+ "angle": 0,
+ "content": "In this paper, we combine state-of-the-art, publicly available diffusion models as denoisers with standard pretrained state-of-the-art classifiers. We show that the resulting denoised smoothing defense obtains significantly better certified robustness results—for perturbations of \\(\\ell_2\\) norm of \\(\\epsilon \\leq 2\\) on ImageNet and \\(\\epsilon \\leq 0.5\\) on CIFAR-10—compared to the \"custom\" denoisers trained in prior work (Salman et al., 2020), or in fact with any certifiably robust defense (even those that do not rely on denoised smoothing). Code to reproduce our experiments is available at: https://github.com/ethz-privsec/diffusion_denoised_smoothing."
+ },
+ {
+ "type": "page_footnote",
+ "bbox": [
+ 0.191,
+ 0.911,
+ 0.306,
+ 0.924
+ ],
+ "angle": 0,
+ "content": "*Joint first authors"
+ },
+ {
+ "type": "page_number",
+ "bbox": [
+ 0.495,
+ 0.949,
+ 0.505,
+ 0.96
+ ],
+ "angle": 0,
+ "content": "1"
+ }
+ ],
+ [
+ {
+ "type": "header",
+ "bbox": [
+ 0.173,
+ 0.033,
+ 0.48,
+ 0.049
+ ],
+ "angle": 0,
+ "content": "Published as a conference paper at ICLR 2023"
+ },
+ {
+ "type": "title",
+ "bbox": [
+ 0.172,
+ 0.103,
+ 0.33,
+ 0.119
+ ],
+ "angle": 0,
+ "content": "2 BACKGROUND"
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.171,
+ 0.135,
+ 0.827,
+ 0.22
+ ],
+ "angle": 0,
+ "content": "Adversarial examples (Biggio et al., 2013; Szegedy et al., 2014) are inputs \\( x' = x + \\delta \\) constructed by taking some input \\( x \\) (with true label \\( y \\in \\mathcal{Y} \\)) and adding a perturbation \\( \\delta \\) (that is assumed to be imperceptible and hence label-preserving) so that a given classifier \\( f \\) misclassifies the perturbed input, i.e., \\( f(x + \\delta) \\neq y \\). The \"smallness\" of \\( \\delta \\) is quantified by its Euclidean norm, and we constrain \\( \\| \\delta \\|_2 \\leq \\varepsilon \\). Even when considering exceptionally small perturbation budgets (e.g., \\( \\varepsilon = 0.5 \\)) modern classifiers often have near-0% accuracy (Carlini & Wagner, 2017)."
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.171,
+ 0.235,
+ 0.825,
+ 0.279
+ ],
+ "angle": 0,
+ "content": "Randomized smoothing (Lecuyer et al., 2019; Cohen et al., 2019) is a technique to certify the robustness of arbitrary classifiers against adversarial examples under the \\(\\ell_2\\) norm. Given an input \\(x\\) and base classifier \\(f\\), randomized smoothing considers a smooth version of \\(f\\) defined as:"
+ },
+ {
+ "type": "equation",
+ "bbox": [
+ 0.352,
+ 0.287,
+ 0.825,
+ 0.311
+ ],
+ "angle": 0,
+ "content": "\\[\ng(x) := \\operatorname{argmax}_{c} \\Pr_{\\delta \\sim \\mathcal{N}\\left(0, \\sigma^{2}\\mathbf{I}\\right)} \\left(f(x + \\delta) = c\\right) \\tag{1}\n\\]"
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.171,
+ 0.327,
+ 0.825,
+ 0.455
+ ],
+ "angle": 0,
+ "content": "Cohen et al. (2019) prove that the smooth classifier \\( g \\) is robust to perturbations of \\( \\ell_2 \\) radius \\( R \\), where the radius \\( R \\) grows with the classifier's \"margin\" (i.e., the difference in probabilities assigned to the most likely and second most-likely classes). As the probability in Equation 1 cannot be efficiently computed when the base classifier \\( f \\) is a neural network, Cohen et al. (2019) instantiate this defense by sampling a small number \\( m \\) of noise instances (e.g., \\( m = 10 \\)) and taking a majority vote over the outputs of the base classifier \\( f \\) on \\( m \\) noisy versions of the input. To compute a lower-bound on this defense's robust radius \\( R \\), they estimate the probabilities \\( \\operatorname*{Pr}[f(x + \\delta) = c] \\) for each class label \\( c \\) by sampling a large number \\( N \\) of noise instances \\( \\delta \\) (e.g., \\( N = 100,000 \\)). See Cohen et al. (2019) for details."
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.171,
+ 0.47,
+ 0.825,
+ 0.499
+ ],
+ "angle": 0,
+ "content": "Denoised smoothing (Salman et al., 2020) is an instantiation of randomized smoothing, where the base classifier \\( f \\) is composed of a denoiser denoise followed by a standard classifier \\( f_{\\mathrm{clf}} \\):"
+ },
+ {
+ "type": "equation",
+ "bbox": [
+ 0.368,
+ 0.507,
+ 0.825,
+ 0.524
+ ],
+ "angle": 0,
+ "content": "\\[\nf(x + \\delta) := f_{\\mathrm{clf}}(\\operatorname{denoise}(x + \\delta)). \\tag{2}\n\\]"
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.171,
+ 0.54,
+ 0.827,
+ 0.61
+ ],
+ "angle": 0,
+ "content": "Given a very good denoiser (i.e., \\( \\mathrm{denoise}(x + \\delta) \\approx x \\) with high probability for \\( \\delta \\sim \\mathcal{N}(0, \\sigma^2\\mathbf{I}) \\)), we can expect the base classifier's accuracy on noisy images to be similar to the clean accuracy of the standard classifier \\( f_{\\mathrm{clf}} \\). Salman et al. (2020) instantiate their denoised smoothing technique by training custom denoiser models with Gaussian noise augmentation, combined with off-the-shelf pretrained classifiers."
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.171,
+ 0.627,
+ 0.825,
+ 0.74
+ ],
+ "angle": 0,
+ "content": "Denoising Diffusion Probabilistic Models (Sohl-Dickstein et al., 2015; Ho et al., 2020; Nichol & Dhariwal, 2021) are a form of generative model that work by learning a model that can reverse time on a diffusion process of the form \\( x_{t} \\sim \\sqrt{1 - \\beta_{t}} \\cdot x_{t-1} + \\beta_{t} \\cdot \\omega_{t}, \\omega_{t} \\sim \\mathcal{N}(0, \\mathbf{I}) \\) with \\( x_{0} \\) coming from the data distribution, and the \\( \\beta_{t} \\) being fixed (or learned) variance parameters. The diffusion process transforms images from the target data distribution to purely random noise over time. The reverse process then synthesizes images from the data distribution starting with random Gaussian noise. In this paper we will not make use of diffusion models in the typical way; instead it suffices to understand just one single property about how they are trained."
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.171,
+ 0.745,
+ 0.825,
+ 0.775
+ ],
+ "angle": 0,
+ "content": "Given a clean training image \\( x \\in [-1, 1]^{w \\cdot h \\cdot c} \\), a diffusion model selects a timestep \\( t \\in \\mathbb{N}^+ \\) from some fixed schedule and then samples a noisy image \\( x_{t} \\) of the form"
+ },
+ {
+ "type": "equation",
+ "bbox": [
+ 0.374,
+ 0.782,
+ 0.825,
+ 0.8
+ ],
+ "angle": 0,
+ "content": "\\[\nx_{t} := \\sqrt{\\alpha_{t}} \\cdot x + \\sqrt{1 - \\alpha_{t}} \\cdot \\mathcal{N}(0, \\mathbf{I}), \\tag{3}\n\\]"
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.171,
+ 0.808,
+ 0.825,
+ 0.838
+ ],
+ "angle": 0,
+ "content": "where the factor \\(\\alpha_{t}\\) is a constant derived from the timestamp \\(t\\) that determines the amount of noise to be added to the image (the noise magnitude increases monotonically with \\(t\\))."
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.171,
+ 0.843,
+ 0.825,
+ 0.887
+ ],
+ "angle": 0,
+ "content": "The diffusion model is then trained (loosely speaking) to minimize the discrepancy between \\( x \\) and \\( \\mathrm{denoise}(x_{t}; t) \\); that is, to predict what the original (un-noised) image should look like after applying the noising step at timestep \\( t \\).<sup>1</sup>"
+ },
+ {
+ "type": "page_footnote",
+ "bbox": [
+ 0.171,
+ 0.898,
+ 0.825,
+ 0.926
+ ],
+ "angle": 0,
+ "content": "\\( {}^{1} \\) State-of-the-art diffusion models are actually trained to predict the noise rather than the denoised image directly (Ho et al., 2020; Nichol & Dhariwal, 2021)."
+ },
+ {
+ "type": "page_number",
+ "bbox": [
+ 0.494,
+ 0.949,
+ 0.505,
+ 0.96
+ ],
+ "angle": 0,
+ "content": "2"
+ }
+ ],
+ [
+ {
+ "type": "header",
+ "bbox": [
+ 0.173,
+ 0.033,
+ 0.48,
+ 0.049
+ ],
+ "angle": 0,
+ "content": "Published as a conference paper at ICLR 2023"
+ },
+ {
+ "type": "code_caption",
+ "bbox": [
+ 0.174,
+ 0.117,
+ 0.421,
+ 0.134
+ ],
+ "angle": 0,
+ "content": "Algorithm 1 Noise, denoise, classify"
+ },
+ {
+ "type": "algorithm",
+ "bbox": [
+ 0.174,
+ 0.139,
+ 0.825,
+ 0.297
+ ],
+ "angle": 0,
+ "content": "1: NOISEANDCLASSIFY(x,σ): 1: PREDICT(x,σ,N,η): \n2: \\(t^{\\star},\\alpha_{t^{\\star}}\\gets \\text{GETTIMESTEP}(\\sigma)\\) 2: counts←0 \n3: \\(x_{t^{\\star}}\\gets \\sqrt{\\alpha_{t^{\\star}}}(x + \\mathcal{N}(0,\\sigma^{2}\\mathbf{I}))\\) 3: for i ∈ {1,2,...,N} do \n4: \\(\\hat{x}\\gets \\text{denoise}(x_{t^{\\star}};t^{\\star})\\) 4: y ← NOISEANDCLASSIFY(x,σ) \n5: \\(y\\gets f_{\\mathrm{clf}}(\\hat{x})\\) 5: counts[y] ← counts[y] + 1 \n6: return y 6: \\(\\hat{y}_A,\\hat{y}_B\\gets\\) top two labels in counts \n7: \\(n_A,n_B\\gets\\) counts[\\(\\hat{y}_A\\)], counts[\\(\\hat{y}_B\\)] \n8: GETTIMESTEP(σ): 8: if BINOMPTEST(\\(n_A,n_A+n_B,1/2\\)) ≤ η then \n9: \\(t^{\\star}\\gets\\) find \\(t\\) s.t. \\(\\frac{1-\\alpha_{t}}{\\alpha_{t}} = \\sigma^{2}\\) 9: return \\(\\hat{y}_A\\) \n10: return \\(t^{\\star},\\alpha_{t^{\\star}}\\) 10: else \n11: return Abstain"
+ },
+ {
+ "type": "image_caption",
+ "bbox": [
+ 0.17,
+ 0.308,
+ 0.827,
+ 0.366
+ ],
+ "angle": 0,
+ "content": "Figure 1: Our approach can be implemented in under 15 lines of code, given an off-the-shelf classifier \\( f_{\\mathrm{clf}} \\) and an off-the-shelf diffusion model denoise. The PREDICT function is adapted from Cohen et al. (2019) and takes as input a number of noise samples \\( N \\) and a statistical significance level \\( \\eta \\in (0,1) \\) and inherits the same robustness certificate proved in Cohen et al. (2019)."
+ },
+ {
+ "type": "title",
+ "bbox": [
+ 0.172,
+ 0.389,
+ 0.504,
+ 0.405
+ ],
+ "angle": 0,
+ "content": "3 DIFFUSION DENOISED SMOOTHING"
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.171,
+ 0.42,
+ 0.827,
+ 0.449
+ ],
+ "angle": 0,
+ "content": "Our approach, Diffusion Denoised Smoothing (DDS), requires no new technical ideas on top of what was introduced in the section above."
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.17,
+ 0.464,
+ 0.825,
+ 0.537
+ ],
+ "angle": 0,
+ "content": "Denoised smoothing via a diffusion model. The only minor technicality required for our method is to map between the noise model required by randomized smoothing and the noise model used within diffusion models. Specifically, randomized smoothing requires a data point augmented with additive Gaussian noise \\( x_{\\mathrm{rs}} \\sim \\mathcal{N}(x,\\sigma^2\\mathbf{I}) \\), whereas diffusion models assume the noise model \\( x_{t} \\sim \\mathcal{N}(\\sqrt{\\alpha_{t}} x,(1 - \\alpha_{t})\\mathbf{I}) \\). Scaling \\( x_{\\mathrm{rs}} \\) by \\( \\sqrt{\\alpha_t} \\) and equating the variances yields the relationship"
+ },
+ {
+ "type": "equation",
+ "bbox": [
+ 0.446,
+ 0.542,
+ 0.825,
+ 0.573
+ ],
+ "angle": 0,
+ "content": "\\[\n\\sigma^{2} = \\frac{1 - \\alpha_{t}}{\\alpha_{t}}. \\tag{4}\n\\]"
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.171,
+ 0.578,
+ 0.825,
+ 0.638
+ ],
+ "angle": 0,
+ "content": "Thus, in order to employ a diffusion model for randomized smoothing at a given noise level \\(\\sigma\\), we first find the timestep \\(t^{\\star}\\) such that \\(\\sigma^2 = \\frac{1 - \\alpha_{t^\\star}}{\\alpha_{t^\\star}}\\); the precise formula for this equation will depend on the schedule of the \\(\\alpha_{t}\\) terms used by the diffusion model, but this can typically be computed in closed form, even for reasonably complex diffusion schedules. Next, we compute"
+ },
+ {
+ "type": "equation",
+ "bbox": [
+ 0.373,
+ 0.643,
+ 0.825,
+ 0.661
+ ],
+ "angle": 0,
+ "content": "\\[\nx_{t^{\\star}} = \\sqrt{\\alpha_{t^{\\star}}}(x + \\delta), \\quad \\delta \\sim \\mathcal{N}(0, \\sigma^{2}\\mathbf{I}) \\tag{6}\n\\]"
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.171,
+ 0.666,
+ 0.726,
+ 0.681
+ ],
+ "angle": 0,
+ "content": "and apply the diffusion denoiser on \\( x_{t^{\\star}} \\) to obtain an estimate of the denoised sample"
+ },
+ {
+ "type": "equation",
+ "bbox": [
+ 0.413,
+ 0.688,
+ 0.825,
+ 0.704
+ ],
+ "angle": 0,
+ "content": "\\[\n\\hat{x} = \\operatorname{denoise}\\left(x_{t^{\\star}}; t^{\\star}\\right). \\tag{7}\n\\]"
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.172,
+ 0.71,
+ 0.728,
+ 0.725
+ ],
+ "angle": 0,
+ "content": "And finally, we classify the estimated denoised image with an off-the-shelf classifier"
+ },
+ {
+ "type": "equation",
+ "bbox": [
+ 0.454,
+ 0.731,
+ 0.825,
+ 0.748
+ ],
+ "angle": 0,
+ "content": "\\[\ny = f_{\\mathrm{clf}}(\\hat{x}). \\tag{8}\n\\]"
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.172,
+ 0.753,
+ 0.585,
+ 0.769
+ ],
+ "angle": 0,
+ "content": "The entirety of this algorithmic approach is shown in Figure 1."
+ },
+ {
+ "type": "page_footnote",
+ "bbox": [
+ 0.171,
+ 0.777,
+ 0.825,
+ 0.829
+ ],
+ "angle": 0,
+ "content": "For example, in Nichol & Dhariwal (2021), the authors advocate for the schedule \\(\\alpha_{t} = f(t) / f(0)\\), where \\(f(t) = \\cos \\left(\\frac{t / T + s}{1 + s} \\cdot \\frac{\\pi}{2}\\right)^{2}\\) for various values of \\(T\\), and \\(s\\) discussed in this reference. In this case, for a given desired value of \\(\\sigma^{2}\\), some algebra yields the solution for \\(t\\)"
+ },
+ {
+ "type": "equation",
+ "bbox": [
+ 0.32,
+ 0.835,
+ 0.825,
+ 0.881
+ ],
+ "angle": 0,
+ "content": "\\[\nt^{\\star} = T\\left(1 - \\frac{2(1 + s)\\csc^{-1}\\left(\\sqrt{1 + \\sigma^{2}}\\csc\\left(\\frac{\\pi}{2 + 2s}\\right)\\right)}{\\pi}\\right). \\tag{5}\n\\]"
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.171,
+ 0.886,
+ 0.825,
+ 0.926
+ ],
+ "angle": 0,
+ "content": "The actual formula here is unimportant and only shown as an illustration of how such computation can look in practice. Even when such a closed form solution does not exist, because the schedules for \\(\\alpha_{t}\\) are monotonic decreasing, one can always find a solution via 1D root-finding methods if necessary."
+ },
+ {
+ "type": "page_number",
+ "bbox": [
+ 0.494,
+ 0.949,
+ 0.504,
+ 0.96
+ ],
+ "angle": 0,
+ "content": "3"
+ }
+ ],
+ [
+ {
+ "type": "header",
+ "bbox": [
+ 0.173,
+ 0.033,
+ 0.48,
+ 0.049
+ ],
+ "angle": 0,
+ "content": "Published as a conference paper at ICLR 2023"
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.17,
+ 0.104,
+ 0.827,
+ 0.162
+ ],
+ "angle": 0,
+ "content": "To obtain a robustness certificate, we repeat the above denoising process many times (e.g., 100,000) and compute the certification radius using the approach of Cohen et al. (2019) (note that since our diffusion model expects inputs in \\([-1,1]^d\\), we then divide the certified radius by 2 to obtain a certified radius for inputs in \\([0,1]\\) as assumed in all prior work)."
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.17,
+ 0.175,
+ 0.825,
+ 0.231
+ ],
+ "angle": 0,
+ "content": "One-shot denoising. Readers familiar with diffusion models may recall that the standard process repeatedly applies a \"single-step\" denoising operation \\( x_{t-1} = d(x_t; t) \\) that aims to convert a noisy image at some timestep \\( t \\) to a (slightly less) noisy image at the previous timestep \\( t-1 \\). The full diffusion process would then be defined by the following iterative procedure:"
+ },
+ {
+ "type": "equation",
+ "bbox": [
+ 0.262,
+ 0.233,
+ 0.735,
+ 0.25
+ ],
+ "angle": 0,
+ "content": "\\[\n\\tilde{x} = \\operatorname{denoise}_{\\text{iter}}(x + \\delta; t) := d\\left(d(\\dots d(d(x + \\delta; t); t - 1)\\dots; 2); 1\\right).\n\\]"
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.17,
+ 0.252,
+ 0.827,
+ 0.336
+ ],
+ "angle": 0,
+ "content": "In fact, each application of the one-step denoiser \\( d \\) consists of two steps: (1) an estimation of the fully denoised image \\( x \\) from the current timestep \\( t \\), and (2) computing a (properly weighted, according to the diffusion model) average between this estimated denoised image and the noisy image at the previous timestep \\( t - 1 \\). Thus, instead of performing the entire \\( t \\)-step diffusion process to denoise an image, it is also possible to run the diffusion step \\( d \\) once and simply output the best estimate for the denoised image \\( x \\) in one shot."
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.17,
+ 0.342,
+ 0.826,
+ 0.385
+ ],
+ "angle": 0,
+ "content": "When a diffusion model generates images from scratch (i.e., the denoiser is applied to pure noise), the iterative process gives higher fidelity outputs than this one-shot approach (Ho et al., 2020). But here, where we aim to denoise one particular image, a one-shot approach has two advantages:"
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.211,
+ 0.394,
+ 0.825,
+ 0.506
+ ],
+ "angle": 0,
+ "content": "1. High accuracy: it turns out that standard pretrained classifiers are more accurate on one-shot denoised images compared to images denoised with the full \\( t \\)-steps of denoising. We hypothesize this is due to the fact that when we first apply the single-step denoiser \\( d \\) at timestep \\( t \\), the denoiser already has all the available information about \\( x \\). By applying the denoiser multiple times, we can only destroy information about \\( x \\) as each step adds new (slightly smaller) Gaussian noise. In fact, by using the iterative \\( t \\)-step denoising strategy, we are in essence pushing part of the classification task onto the denoiser, in order to decide how to fill in the image. Section 5 experimentally validates this hypothesis."
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.209,
+ 0.509,
+ 0.825,
+ 0.564
+ ],
+ "angle": 0,
+ "content": "2. Improved efficiency: instead of requiring several hundred (or thousand) forward passes to denoise any given image, we only require one single pass. This is especially important when we perform many thousand predictions as is required for randomized smoothing to obtain a robustness certificate."
+ },
+ {
+ "type": "list",
+ "bbox": [
+ 0.209,
+ 0.394,
+ 0.825,
+ 0.564
+ ],
+ "angle": 0,
+ "content": null
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.17,
+ 0.579,
+ 0.827,
+ 0.678
+ ],
+ "angle": 0,
+ "content": "Related work. We are not the first to observe a connection between randomized smoothing and diffusion models. The work of Lee (2021) first studied this problem—however they do not obtain significant accuracy improvements, likely due to the fact that diffusion models available at the time that work was done were not good enough. Separately, Nie et al. (2022) suggest that diffusion models might be able to provide strong empirical robustness to adversarial examples, as evaluated by robustness under adversarial attacks computed using existing attack algorithms; this is orthogonal to our results."
+ },
+ {
+ "type": "title",
+ "bbox": [
+ 0.172,
+ 0.696,
+ 0.318,
+ 0.712
+ ],
+ "angle": 0,
+ "content": "4 EVALUATION"
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.17,
+ 0.727,
+ 0.825,
+ 0.784
+ ],
+ "angle": 0,
+ "content": "We evaluate diffusion denoised smoothing on two standard datasets, CIFAR-10 and ImageNet, and find it gives state-of-the-art certified \\(\\ell_2\\) robustness on both. On CIFAR-10, we draw \\(N = 100,000\\) noise samples and on ImageNet we draw \\(N = 10,000\\) samples to certify the robustness following Cohen et al. (2019)."
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.17,
+ 0.79,
+ 0.825,
+ 0.86
+ ],
+ "angle": 0,
+ "content": "As is standard in prior work, we perform randomized smoothing for three different noise magnitudes, \\(\\sigma \\in \\{0.25, 0.5, 1.0\\}\\). For a fair comparison to prior work in Table 2 and Table 1, we give the best results reported in each paper across these same three noise magnitudes. Note that prior work only uses three levels of noise due to the computational overhead; one benefit of using a diffusion model is we could have used other amounts of noise without training a new denoiser model."
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.171,
+ 0.874,
+ 0.825,
+ 0.903
+ ],
+ "angle": 0,
+ "content": "CIFAR-10 configuration. We denoise CIFAR-10 images with the 50M-parameter diffusion model from Nichol & Dhariwal (2021). The denoised images are classified with an 87M-parameter"
+ },
+ {
+ "type": "page_footnote",
+ "bbox": [
+ 0.191,
+ 0.91,
+ 0.592,
+ 0.924
+ ],
+ "angle": 0,
+ "content": "<sup>3</sup>https://github.com/openai/improved-diffusion"
+ },
+ {
+ "type": "page_number",
+ "bbox": [
+ 0.494,
+ 0.949,
+ 0.505,
+ 0.96
+ ],
+ "angle": 0,
+ "content": "4"
+ }
+ ],
+ [
+ {
+ "type": "header",
+ "bbox": [
+ 0.174,
+ 0.033,
+ 0.48,
+ 0.049
+ ],
+ "angle": 0,
+ "content": "Published as a conference paper at ICLR 2023"
+ },
+ {
+ "type": "table",
+ "bbox": [
+ 0.174,
+ 0.115,
+ 0.825,
+ 0.292
+ ],
+ "angle": 0,
+ "content": "<table><tr><td>Method</td><td>Off-the-shelf</td><td>Extra data</td><td>0.5</td><td>1.0</td><td>1.5</td><td>2.0</td><td>3.0</td></tr><tr><td>PixelDP (Lecuyer et al., 2019)</td><td>○</td><td>×</td><td>(33.0)16.0</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>RS (Cohen et al., 2019)</td><td>○</td><td>×</td><td>(67.0)49.0</td><td>(57.0)37.0</td><td>(57.0)29.0</td><td>(44.0)19.0</td><td>(44.0)12.0</td></tr><tr><td>SmoothAdv (Salman et al., 2019)</td><td>○</td><td>×</td><td>(65.0)56.0</td><td>(54.0)43.0</td><td>(54.0)37.0</td><td>(40.0)27.0</td><td>(40.0)20.0</td></tr><tr><td>Consistency (Jeong &amp; Shin, 2020)</td><td>○</td><td>×</td><td>(55.0)50.0</td><td>(55.0)44.0</td><td>(55.0)34.0</td><td>(41.0)24.0</td><td>(41.0)17.0</td></tr><tr><td>MACER (Zhai et al., 2020)</td><td>○</td><td>×</td><td>(68.0)57.0</td><td>(64.0)43.0</td><td>(64.0)31.0</td><td>(48.0)25.0</td><td>(48.0)14.0</td></tr><tr><td>Boosting (Horváth et al., 2022a)</td><td>○</td><td>×</td><td>(65.6)57.0</td><td>(57.0)44.6</td><td>(57.0)38.4</td><td>(44.6)28.6</td><td>(38.6)21.2</td></tr><tr><td>DRT (Yang et al., 2021)</td><td>○</td><td>×</td><td>(52.2)46.8</td><td>(55.2)44.4</td><td>(49.8)39.8</td><td>(49.8)30.4</td><td>(49.8)23.4</td></tr><tr><td>SmoothMix (Jeong et al., 2021)</td><td>○</td><td>×</td><td>(55.0)50.0</td><td>(55.0)43.0</td><td>(55.0)38.0</td><td>(40.0)26.0</td><td>(40.0)20.0</td></tr><tr><td>ACES (Horváth et al., 2022b)</td><td>○</td><td>×</td><td>(63.8)54.0</td><td>(57.2)42.2</td><td>(55.6)35.6</td><td>(39.8)25.6</td><td>(44.0)19.8</td></tr><tr><td>Denoised (Salman et al., 2020)</td><td>○</td><td>×</td><td>(60.0)33.0</td><td>(38.0)14.0</td><td>(38.0)6.0</td><td>-</td><td>-</td></tr><tr><td>Lee (Lee, 2021)</td><td>○</td><td>×</td><td>41.0</td><td>24.0</td><td>11.0</td><td>-</td><td>-</td></tr><tr><td>Ours</td><td>○</td><td>✓</td><td>(82.8)71.1</td><td>(77.1)54.3</td><td>(77.1)38.1</td><td>(60.0)29.5</td><td>(60.0)13.1</td></tr></table>"
+ },
+ {
+ "type": "table_caption",
+ "bbox": [
+ 0.591,
+ 0.101,
+ 0.738,
+ 0.113
+ ],
+ "angle": 0,
+ "content": "Certified Accuracy at \\(\\varepsilon\\) (\\%)"
+ },
+ {
+ "type": "table_footnote",
+ "bbox": [
+ 0.171,
+ 0.302,
+ 0.825,
+ 0.415
+ ],
+ "angle": 0,
+ "content": "Table 1: ImageNet certified top-1 accuracy for prior defenses on randomized smoothing and denoised smoothing. Randomized smoothing techniques rely on special-purpose models (indicated by an empty circle). The work of Horváth et al. (2022b) is an exception in that it selectively applies either a robust or accurate off-the-shelf classifier (indicated by a half full circle). Denoised smoothing (Salman et al., 2020) uses an off-the-shelf classifier but trains its own denoiser (indicated by a half full circle). Our base approach uses an off-the-shelf classifier and off-the-shelf denoiser (indicated by a full circle). Each entry lists the certified accuracy, with the clean accuracy for that model in parentheses, using numbers taken from respective papers."
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.171,
+ 0.442,
+ 0.825,
+ 0.514
+ ],
+ "angle": 0,
+ "content": "ViT-B/16 model (Dosovitskiy et al., 2021) that was pretrained on ImageNet-21k (Deng et al., 2009) (in \\(224 \\times 224\\) resolution) and finetuned on CIFAR-10. We use the implementation from HuggingFace<sup>4</sup> which reaches \\(97.9\\%\\) test accuracy on CIFAR-10. In addition, we also report results with a standard 36M parameter Wide-ResNet-28-10 model (Zagoruyko & Komodakis, 2016) trained on CIFAR-10 to \\(95.2\\%\\) accuracy."
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.171,
+ 0.519,
+ 0.827,
+ 0.562
+ ],
+ "angle": 0,
+ "content": "As is typical, we report results with images normalized to \\([0,1]^{32\\times 32\\times 3}\\). We obtain a throughput of 825 images per second through the diffusion model and ViT classifier on an A100 GPU at a batch size of 1,000. We report robust accuracy results averaged over the entire CIFAR-10 test set."
+ "content": "As is typical, we report results with images normalized to \\([0,1]^{32\\times 32\\times 3}\\). We obtain a throughput of 825 images per second through the diffusion model and ViT classifier on an A100 GPU at a batch size of 1,000. We report robust accuracy results averaged over the entire CIFAR-10 test set."
791
+ },
792
+ {
793
+ "type": "text",
794
+ "bbox": [
795
+ 0.171,
796
+ 0.579,
797
+ 0.825,
798
+ 0.678
799
+ ],
800
+ "angle": 0,
801
+ "content": "ImageNet configuration. We denoise ImageNet images with the 552M-parameter class-unconditional diffusion model from Dhariwal & Nichol (2021), and classify images with the 305M-parameter BEiT large model (Bao et al., 2022) which reaches a \\(88.6\\%\\) top-1 validation accuracy using the implementation from timm (Wightman, 2019). We report results for our images when normalized to \\([0,1]^{224\\times 224\\times 3}\\) to allow us to compare to prior work. The overall latency of this joint denoise-then-classify model is 1.5 seconds per image on an A100 GPU at a batch size of 32. We report results averaged over 1,000 images randomly selected from the ImageNet test set."
802
+ },
803
+ {
804
+ "type": "title",
805
+ "bbox": [
806
+ 0.172,
807
+ 0.696,
808
+ 0.28,
809
+ 0.71
810
+ ],
811
+ "angle": 0,
812
+ "content": "4.1 RESULTS"
813
+ },
814
+ {
815
+ "type": "text",
816
+ "bbox": [
817
+ 0.171,
818
+ 0.722,
819
+ 0.825,
820
+ 0.835
821
+ ],
822
+ "angle": 0,
823
+ "content": "On both CIFAR-10 and ImageNet we outperform the state-of-the-art denoised smoothing approaches (i.e., Salman et al. (2020) and Lee (2021)) in every setting; see Table 1 and Table 2, as well as Figure 2 for detailed results. Perhaps even more impressively, we also outperform models trained with randomized smoothing at low \\(\\varepsilon\\) distortions (\\(\\epsilon \\leq 0.5\\) on CIFAR-10, and \\(\\epsilon \\leq 2\\) on ImageNet), and nearly match them at high \\(\\varepsilon\\). Even though these randomized smoothing techniques train their models end-to-end and specifically design these models to have high accuracy on Gaussian noise, we find that our approach's use of off-the-shelf models yields superior robustness (and much higher clean accuracy as an added bonus)."
824
+ },
825
+ {
826
+ "type": "text",
827
+ "bbox": [
828
+ 0.171,
829
+ 0.84,
830
+ 0.825,
831
+ 0.898
832
+ ],
833
+ "angle": 0,
834
+ "content": "Interestingly, we find that using a diffusion model to perform the denoising step gives its most significant benefits when \\(\\sigma\\) and \\(\\varepsilon\\) are small: for example, while we reach \\(71.1\\%\\) top-1 accuracy at \\(\\varepsilon = 0.5\\) on ImageNet, an improvement over prior work of \\(+14\\) percentage points, when we reach \\(\\varepsilon = 3\\) our scheme is 7 percentage points worse than state-of-the-art. Our hypothesis for this effect,"
835
+ },
836
+ {
837
+ "type": "page_footnote",
838
+ "bbox": [
839
+ 0.191,
840
+ 0.91,
841
+ 0.859,
842
+ 0.925
843
+ ],
844
+ "angle": 0,
845
+ "content": "4https://huggingface.co/aaraki/vit-base-patch16-224-in21k-finetuned-cifar10"
846
+ },
847
+ {
848
+ "type": "page_number",
849
+ "bbox": [
850
+ 0.494,
851
+ 0.949,
852
+ 0.505,
853
+ 0.96
854
+ ],
855
+ "angle": 0,
856
+ "content": "5"
857
+ }
858
+ ],
859
+ [
860
+ {
861
+ "type": "header",
862
+ "bbox": [
863
+ 0.174,
864
+ 0.033,
865
+ 0.48,
866
+ 0.049
867
+ ],
868
+ "angle": 0,
869
+ "content": "Published as a conference paper at ICLR 2023"
870
+ },
871
+ {
872
+ "type": "table_caption",
873
+ "bbox": [
874
+ 0.603,
875
+ 0.102,
876
+ 0.761,
877
+ 0.114
878
+ ],
879
+ "angle": 0,
880
+ "content": "Certified Accuracy at \\(\\varepsilon\\) (\\%)"
881
+ },
882
+ {
883
+ "type": "table",
884
+ "bbox": [
885
+ 0.175,
886
+ 0.115,
887
+ 0.825,
888
+ 0.34
889
+ ],
890
+ "angle": 0,
891
+ "content": "<table><tr><td>Method</td><td>Off-the-shelf</td><td>Extra data</td><td>0.25</td><td>0.5</td><td>0.75</td><td>1.0</td></tr><tr><td>PixelDP (Lecuyer et al., 2019)</td><td>○</td><td>X</td><td>(71.0)22.0</td><td>(44.0)2.0</td><td>-</td><td>-</td></tr><tr><td>RS (Cohen et al., 2019)</td><td>○</td><td>X</td><td>(75.0)61.0</td><td>(75.0)43.0</td><td>(65.0)32.0</td><td>(66.0)22.0</td></tr><tr><td>SmoothAdv (Salman et al., 2019)</td><td>○</td><td>X</td><td>(75.6)67.4</td><td>(75.6)57.6</td><td>(74.8)47.8</td><td>(57.4)38.3</td></tr><tr><td>SmoothAdv (Salman et al., 2019)</td><td>○</td><td>✓</td><td>(84.3)74.9</td><td>(80.1)63.4</td><td>(80.1)51.9</td><td>(62.2)39.6</td></tr><tr><td>Consistency (Jeong &amp; Shin, 2020)</td><td>○</td><td>X</td><td>(77.8)68.8</td><td>(75.8)58.1</td><td>(72.9)48.5</td><td>(52.3)37.8</td></tr><tr><td>MACER (Zhai et al., 2020)</td><td>○</td><td>X</td><td>(81.0)71.0</td><td>(81.0)59.0</td><td>(66.0)46.0</td><td>(66.0)38.0</td></tr><tr><td>Boosting (Horváth et al., 2022a)</td><td>○</td><td>X</td><td>(83.4)70.6</td><td>(76.8)60.4</td><td>(71.6)52.4</td><td>(52.4)38.8</td></tr><tr><td>DRT (Yang et al., 2021)</td><td>○</td><td>X</td><td>(81.5)70.4</td><td>(72.6)60.2</td><td>(71.9)50.5</td><td>(56.1)39.8</td></tr><tr><td>SmoothMix (Jeong et al., 2021)</td><td>○</td><td>X</td><td>(77.1)67.9</td><td>(77.1)57.9</td><td>(74.2)47.7</td><td>(61.8)37.2</td></tr><tr><td>ACES (Horváth et al., 2022b)</td><td>○</td><td>X</td><td>(79.0)69.0</td><td>(74.2)57.2</td><td>(74.2)47.0</td><td>(58.6)37.8</td></tr><tr><td>Denoised (Salman et al., 2020)</td><td>○</td><td>X</td><td>(72.0)56.0</td><td>(62.0)41.0</td><td>(62.0)28.0</td><td>(44.0)19.0</td></tr><tr><td>Lee (Lee, 2021)</td><td>○</td><td>X</td><td>60.0</td><td>42.0</td><td>28.0</td><td>19.0</td></tr><tr><td>Ours</td><td>○</td><td>✓</td><td>(88.1)76.7</td><td>(88.1)63.0</td><td>(88.1)45.3</td><td>(77.0)32.1</td></tr><tr><td>Ours 
(+finetuning)</td><td>○</td><td>✓</td><td>(91.2)79.3</td><td>(91.2)65.5</td><td>(87.3)48.7</td><td>(81.5)35.5</td></tr></table>"
892
+ },
893
+ {
894
+ "type": "table_caption",
895
+ "bbox": [
896
+ 0.171,
897
+ 0.349,
898
+ 0.825,
899
+ 0.379
900
+ ],
901
+ "angle": 0,
902
+ "content": "Table 2: CIFAR-10 certified accuracy for prior defenses from the literature. The columns have the same meaning as in Table 1."
903
+ },
904
+ {
905
+ "type": "image",
906
+ "bbox": [
907
+ 0.185,
908
+ 0.399,
909
+ 0.481,
910
+ 0.564
911
+ ],
912
+ "angle": 0,
913
+ "content": null
914
+ },
915
+ {
916
+ "type": "image_caption",
917
+ "bbox": [
918
+ 0.288,
919
+ 0.576,
920
+ 0.374,
921
+ 0.589
922
+ ],
923
+ "angle": 0,
924
+ "content": "(a) CIFAR-10"
925
+ },
926
+ {
927
+ "type": "image",
928
+ "bbox": [
929
+ 0.517,
930
+ 0.399,
931
+ 0.812,
932
+ 0.563
933
+ ],
934
+ "angle": 0,
935
+ "content": null
936
+ },
937
+ {
938
+ "type": "image_caption",
939
+ "bbox": [
940
+ 0.623,
941
+ 0.576,
942
+ 0.704,
943
+ 0.59
944
+ ],
945
+ "angle": 0,
946
+ "content": "(b) ImageNet"
947
+ },
948
+ {
949
+ "type": "image_caption",
950
+ "bbox": [
951
+ 0.17,
952
+ 0.596,
953
+ 0.825,
954
+ 0.639
955
+ ],
956
+ "angle": 0,
957
+ "content": "Figure 2: Certified accuracy as a function of the \\(\\ell_2\\) adversarial perturbation bound, when varying levels of Gaussian noise \\(\\sigma \\in \\{0.25, 0.5, 1.0\\}\\). Bounds are computed with 100,000 samples per run on CIFAR-10, and 10,000 on ImageNet."
958
+ },
959
+ {
960
+ "type": "text",
961
+ "bbox": [
962
+ 0.17,
963
+ 0.665,
964
+ 0.825,
965
+ 0.709
966
+ ],
967
+ "angle": 0,
968
+ "content": "which we explore further in Section 5, is that diffusion models are prone to \"hallucinate\" content when denoising extremely noisy images. Thus, instead of reinforcing the signal from the correct class, the diffusion model generates a signal from another class, thereby fooling the classifier."
969
+ },
970
+ {
971
+ "type": "text",
972
+ "bbox": [
973
+ 0.17,
974
+ 0.722,
975
+ 0.825,
976
+ 0.821
977
+ ],
978
+ "angle": 0,
979
+ "content": "CIFAR-10 ablation. The off-the-shelf classifiers we use were pretrained on larger datasets than respectively CIFAR-10 and ImageNet. It is well known that the use of additional data can boost robustness, both for empirical (Schmidt et al., 2018) and certified (Salman et al., 2019) defenses. To investigate the role played by the pretrained model, we repeat our CIFAR-10 experiment using a standard Wide-ResNet-28-10 model (Zagoruyko & Komodakis, 2016) trained solely on CIFAR-10 to \\(95.2\\%\\) accuracy. The results with this classifier (see Table 6a) outperform prior denoised smoothing approaches, and are competitive with prior randomized smoothing results up to \\(\\epsilon = 0.5\\)."
980
+ },
981
+ {
982
+ "type": "text",
983
+ "bbox": [
984
+ 0.17,
985
+ 0.827,
986
+ 0.827,
987
+ 0.926
988
+ ],
989
+ "angle": 0,
990
+ "content": "The ViT classifier outperforms the ResNet because it is more robust to the distribution shift introduced by the noisig-and-denoising procedure. To alleviate this, we can further finetune the classifier on denoised images denoise \\((x + \\delta)\\) from the CIFAR-10 training set. This defense is thus not strictly \"off-the-shelf\" anymore (although finetuning is negligible compared to the training time of the diffusion model and classifier). Table 6b shows that a finetuned Wide-ResNet achieves comparable-or-better results than a non-finetuned ViT. Thus, with a minimal amount of training, we also surpass prior randomized smoothing results without relying on any external data. If we finetune"
991
+ },
992
+ {
993
+ "type": "page_number",
994
+ "bbox": [
995
+ 0.494,
996
+ 0.949,
997
+ 0.506,
998
+ 0.96
999
+ ],
1000
+ "angle": 0,
1001
+ "content": "6"
1002
+ }
1003
+ ],
1004
+ [
1005
+ {
1006
+ "type": "header",
1007
+ "bbox": [
1008
+ 0.173,
1009
+ 0.033,
1010
+ 0.48,
1011
+ 0.049
1012
+ ],
1013
+ "angle": 0,
1014
+ "content": "Published as a conference paper at ICLR 2023"
1015
+ },
1016
+ {
1017
+ "type": "table_caption",
1018
+ "bbox": [
1019
+ 0.579,
1020
+ 0.089,
1021
+ 0.754,
1022
+ 0.103
1023
+ ],
1024
+ "angle": 0,
1025
+ "content": "Certified Accuracy at \\(\\varepsilon\\) (\\%)"
1026
+ },
1027
+ {
1028
+ "type": "table",
1029
+ "bbox": [
1030
+ 0.175,
1031
+ 0.104,
1032
+ 0.825,
1033
+ 0.189
1034
+ ],
1035
+ "angle": 0,
1036
+ "content": "<table><tr><td>Method</td><td>Off-the-shelf</td><td>Extra data</td><td>0.25</td><td>0.5</td><td>0.75</td><td>1.0</td></tr><tr><td>Wide-ResNet</td><td>●</td><td>×</td><td>(83.8)70.6</td><td>(83.8)55.7</td><td>(83.8)40.0</td><td>(65.8)26.1</td></tr><tr><td>ViT</td><td>●</td><td>✓</td><td>(88.1)76.7</td><td>(88.1)63.0</td><td>(88.1)45.3</td><td>(77.0)32.1</td></tr><tr><td>Wide-ResNet +finetune</td><td>○</td><td>×</td><td>(85.9)76.7</td><td>(85.9)63.8</td><td>(85.9)49.5</td><td>(74.5)36.4</td></tr><tr><td>ViT +finetune</td><td>○</td><td>✓</td><td>(91.2)79.3</td><td>(91.2)65.5</td><td>(91.2)48.7</td><td>(81.5)35.5</td></tr></table>"
1037
+ },
1038
+ {
1039
+ "type": "table_caption",
1040
+ "bbox": [
1041
+ 0.171,
1042
+ 0.199,
1043
+ 0.825,
1044
+ 0.256
1045
+ ],
1046
+ "angle": 0,
1047
+ "content": "Table 3: Summary of our ablation on CIFAR-10. The diffusion model and Wide-ResNet classifier are trained solely on CIFAR-10, while the ViT classifier is pretrained on a larger dataset. The finetuning results are obtained by taking an off-the-shelf diffusion model and classifier, and tuning the classifier on noised-then-denoised images from CIFAR-10."
1048
+ },
1049
+ {
1050
+ "type": "text",
1051
+ "bbox": [
1052
+ 0.171,
1053
+ 0.288,
1054
+ 0.825,
1055
+ 0.318
1056
+ ],
1057
+ "angle": 0,
1058
+ "content": "the ViT model (Table 6d), we further improve our defense's clean accuracy and certified robustness at \\(\\epsilon \\leq 0.5\\) by a couple of percentage points. Our ablation is summarized in Table 3."
1059
+ },
1060
+ {
1061
+ "type": "title",
1062
+ "bbox": [
1063
+ 0.173,
1064
+ 0.344,
1065
+ 0.445,
1066
+ 0.359
1067
+ ],
1068
+ "angle": 0,
1069
+ "content": "5 ANALYSIS AND DISCUSSION"
1070
+ },
1071
+ {
1072
+ "type": "text",
1073
+ "bbox": [
1074
+ 0.171,
1075
+ 0.379,
1076
+ 0.825,
1077
+ 0.437
1078
+ ],
1079
+ "angle": 0,
1080
+ "content": "We achieve state-of-the-art certified accuracy using diffusion models despite the fact that we are not using these models as diffusion models but rather trivial denoisers. That is, instead of leveraging the fact that diffusion models can iteratively refine images across a range of noise levels, we simply apply the diffusion model once for a fixed noise level, to perform one-shot denoising."
1081
+ },
1082
+ {
1083
+ "type": "text",
1084
+ "bbox": [
1085
+ 0.171,
1086
+ 0.442,
1087
+ 0.826,
1088
+ 0.499
1089
+ ],
1090
+ "angle": 0,
1091
+ "content": "In this section we study why this approach outperforms prior work that trained straightforward denoisiers for denoised smoothing (Salman et al., 2020), and why using diffusion models for one-shot denoising performs better than the more involved iterative diffusion process. Last we show promising results of multi-step diffusion using an advanced deterministic sampler."
1092
+ },
1093
+ {
1094
+ "type": "title",
1095
+ "bbox": [
1096
+ 0.172,
1097
+ 0.522,
1098
+ 0.55,
1099
+ 0.536
1100
+ ],
1101
+ "angle": 0,
1102
+ "content": "5.1 FULL DIFFUSION VERSUS ONE-SHOT DENOISING"
1103
+ },
1104
+ {
1105
+ "type": "text",
1106
+ "bbox": [
1107
+ 0.171,
1108
+ 0.55,
1109
+ 0.825,
1110
+ 0.635
1111
+ ],
1112
+ "angle": 0,
1113
+ "content": "When used as generative models, diffusion models perform denoising through an iterative process that repeatedly refines an estimate of the final denoised image. When given an image \\( x_{t} \\) with noise of magnitude corresponding to some diffusion timestep \\( t \\), the model first predicts a one-shot estimate of the denoised image \\( x_{0} \\), and then constructs an estimate \\( x_{t-1} \\) of the noised image at timestep \\( t-1 \\) by interpolating (with appropriate weights) between \\( x_{0} \\), \\( x_{t} \\) and fresh isotropic Gaussian noise \\( \\mathcal{N}(0, I) \\). The diffusion process is then applied recursively at timestep \\( t-1 \\)."
1114
+ },
1115
+ {
1116
+ "type": "text",
1117
+ "bbox": [
1118
+ 0.171,
1119
+ 0.641,
1120
+ 0.827,
1121
+ 0.711
1122
+ ],
1123
+ "angle": 0,
1124
+ "content": "Intuitively, it may be expected that when using a diffusion model as a denoiser, one-shot denoising will produce more faithful results than the full iterative reverse-diffusion process. Indeed, each step of the reverse-diffusion process destroys information about the original image, since each step adds fresh Gaussian noise to the image. Thus, information theoretically at least, it should be easier to denoise an image in one-shot than over multiple iterations."
1125
+ },
1126
+ {
1127
+ "type": "image",
1128
+ "bbox": [
1129
+ 0.179,
1130
+ 0.731,
1131
+ 0.338,
1132
+ 0.865
1133
+ ],
1134
+ "angle": 0,
1135
+ "content": null
1136
+ },
1137
+ {
1138
+ "type": "image",
1139
+ "bbox": [
1140
+ 0.34,
1141
+ 0.731,
1142
+ 0.498,
1143
+ 0.865
1144
+ ],
1145
+ "angle": 0,
1146
+ "content": null
1147
+ },
1148
+ {
1149
+ "type": "image",
1150
+ "bbox": [
1151
+ 0.501,
1152
+ 0.731,
1153
+ 0.66,
1154
+ 0.865
1155
+ ],
1156
+ "angle": 0,
1157
+ "content": null
1158
+ },
1159
+ {
1160
+ "type": "image",
1161
+ "bbox": [
1162
+ 0.662,
1163
+ 0.731,
1164
+ 0.822,
1165
+ 0.865
1166
+ ],
1167
+ "angle": 0,
1168
+ "content": null
1169
+ },
1170
+ {
1171
+ "type": "image_caption",
1172
+ "bbox": [
1173
+ 0.171,
1174
+ 0.878,
1175
+ 0.825,
1176
+ 0.922
1177
+ ],
1178
+ "angle": 0,
1179
+ "content": "Figure 3: Intuitive examples for why multi-step denoised images are less recognized by the classifier. From left to right: clean images, noisy images with \\(\\sigma = 1.0\\), one-step denoised images, multi-step denoised images. For the denoised images, we show the prediction by the pretrained BEiT model."
1180
+ },
1181
+ {
1182
+ "type": "page_number",
1183
+ "bbox": [
1184
+ 0.494,
1185
+ 0.949,
1186
+ 0.504,
1187
+ 0.96
1188
+ ],
1189
+ "angle": 0,
1190
+ "content": "7"
1191
+ }
1192
+ ],
1193
+ [
1194
+ {
1195
+ "type": "header",
1196
+ "bbox": [
1197
+ 0.173,
1198
+ 0.033,
1199
+ 0.48,
1200
+ 0.049
1201
+ ],
1202
+ "angle": 0,
1203
+ "content": "Published as a conference paper at ICLR 2023"
1204
+ },
1205
+ {
1206
+ "type": "text",
1207
+ "bbox": [
1208
+ 0.17,
1209
+ 0.104,
1210
+ 0.827,
1211
+ 0.273
1212
+ ],
1213
+ "angle": 0,
1214
+ "content": "We find that this is indeed the case. While the full reverse-diffusion process produces denoised images with more finegrained details (which is a good property for generating photorealistic images from scratch), these details are often not actually faithful to the original image we want to denoise. Instead, diffusion models are prone to \"hallucinate\" salient detailed features during the iterative denoise-and-noise process. We illustrate some examples of this hallucination phenomenon in Figure 3. Here, we noise an original image (on the left) with large Gaussian noise \\((\\sigma = 1)\\) and then apply either the full reverse-diffusion process (rightmost image) or a one-shot denoising at the appropriate timestep (2nd image to the right). As we can see, one-shot denoising produces mostly faithful, but blurry, reconstructions of the original image, with finegrained details lost due to noise. In contrast, iterative denoising \"invents\" new details that result in images that are ultimately more photorealistic but semantically different from the starting image. Additional examples (with multiple random seeds) are in Figure 4 and Figure 5 in the Appendix."
1215
+ },
1216
+ {
1217
+ "type": "title",
1218
+ "bbox": [
1219
+ 0.172,
1220
+ 0.287,
1221
+ 0.506,
1222
+ 0.302
1223
+ ],
1224
+ "angle": 0,
1225
+ "content": "5.2 TRAINING ON RESTRICTED NOISE LEVELS"
1226
+ },
1227
+ {
1228
+ "type": "text",
1229
+ "bbox": [
1230
+ 0.17,
1231
+ 0.313,
1232
+ 0.825,
1233
+ 0.398
1234
+ ],
1235
+ "angle": 0,
1236
+ "content": "Given that one-shot denoising performs better than full multi-shot denoising, we now turn to understanding our next question: if we are just using diffusion models as one-shot denoisers, then why do diffusion models perform better compared to the straightforward denoisers trained in prior work (Salman et al., 2020)? To investigate this, we train seven new diffusion models on CIFAR-10 with varying levels of Gaussian noise—all the way towards a model trained on a single noise level, i.e., a straightforward denoiser."
1237
+ },
1238
+ {
1239
+ "type": "text",
1240
+ "bbox": [
1241
+ 0.17,
1242
+ 0.403,
1243
+ 0.827,
1244
+ 0.489
1245
+ ],
1246
+ "angle": 0,
1247
+ "content": "Recall that during standard training of a diffusion model, we sample a timestep \\( T \\) uniformly from some range, add noise according to this timestep, and then train the model to predict the noise that has been added. The only difference between this process and the standard denoised smoothing training process (Salman et al., 2020) is the fact that here we are training on multiple levels of Gaussian noise simultaneously. Therefore we now perform a comparative analysis of models trained on more restrictive noise levels. We select seven different levels of noise:"
1248
+ },
1249
+ {
1250
+ "type": "text",
1251
+ "bbox": [
1252
+ 0.217,
1253
+ 0.497,
1254
+ 0.825,
1255
+ 0.54
1256
+ ],
1257
+ "angle": 0,
1258
+ "content": "- Three models are trained exclusively on Gaussian noise of fixed standard deviation of respectively \\(\\sigma = 0.25\\), \\(\\sigma = 0.5\\), or \\(\\sigma = 1.0\\). This is identical to training a \"straightforward\" denoiser on noise of a fixed magnitude."
1259
+ },
1260
+ {
1261
+ "type": "text",
1262
+ "bbox": [
1263
+ 0.217,
1264
+ 0.543,
1265
+ 0.645,
1266
+ 0.556
1267
+ ],
1268
+ "angle": 0,
1269
+ "content": "- One model is trained on all three noise levels at the same time."
1270
+ },
1271
+ {
1272
+ "type": "text",
1273
+ "bbox": [
1274
+ 0.217,
1275
+ 0.56,
1276
+ 0.811,
1277
+ 0.575
1278
+ ],
1279
+ "angle": 0,
1280
+ "content": "- Two models are trained on noise uniformly selected from \\(\\sigma \\in [0, 0.25]\\), and \\(\\sigma \\in [0, 1.0]\\)."
1281
+ },
1282
+ {
1283
+ "type": "text",
1284
+ "bbox": [
1285
+ 0.217,
1286
+ 0.578,
1287
+ 0.825,
1288
+ 0.606
1289
+ ],
1290
+ "angle": 0,
1291
+ "content": "- One model is trained using the full range of noise, from \\(\\sigma \\in [0,S]\\) for some \\(S\\gg 1\\) (the exact value of \\(S\\) depends on the chosen noise schedule for the diffusion model)."
1292
+ },
1293
+ {
1294
+ "type": "list",
1295
+ "bbox": [
1296
+ 0.217,
1297
+ 0.497,
1298
+ 0.825,
1299
+ 0.606
1300
+ ],
1301
+ "angle": 0,
1302
+ "content": null
1303
+ },
1304
+ {
1305
+ "type": "text",
1306
+ "bbox": [
1307
+ 0.17,
1308
+ 0.616,
1309
+ 0.825,
1310
+ 0.659
1311
+ ],
1312
+ "angle": 0,
1313
+ "content": "We then evaluate the clean accuracy of an off-the-shelf ViT model on each image when denoised (in one shot) with each of these diffusion models, where the images are noised with a standard deviation of either \\(\\sigma = 0.25\\), \\(\\sigma = 0.5\\), or \\(\\sigma = 1.0\\). The results are summarized in Table 4."
1314
+ },
1315
+ {
1316
+ "type": "table",
1317
+ "bbox": [
1318
+ 0.3,
1319
+ 0.668,
1320
+ 0.7,
1321
+ 0.817
1322
+ ],
1323
+ "angle": 0,
1324
+ "content": "<table><tr><td rowspan=\"2\">Training noise</td><td colspan=\"3\">Noise at evaluation</td></tr><tr><td>σ = 0.25</td><td>σ = 0.5</td><td>σ = 1.0</td></tr><tr><td>σ ∈ {0.25}</td><td>79.0</td><td>16.2</td><td>9.8</td></tr><tr><td>σ ∈ {0.5}</td><td>14.5</td><td>60.1</td><td>15.4</td></tr><tr><td>σ ∈ {1.0}</td><td>13.9</td><td>13.5</td><td>35.5</td></tr><tr><td>σ ∈ {0.25, 0.5, 1.0}</td><td>81.6</td><td>68.1</td><td>43.0</td></tr><tr><td>σ ∈ [0, 0.25]</td><td>84.5</td><td>14.5</td><td>9.9</td></tr><tr><td>σ ∈ [0, 1.0]</td><td>84.0</td><td>71.6</td><td>46.0</td></tr><tr><td>σ ∈ [0, S ≫ 1] (standard)</td><td>85.5</td><td>72.3</td><td>44.8</td></tr></table>"
1325
+ },
1326
+ {
1327
+ "type": "table_caption",
1328
+ "bbox": [
1329
+ 0.17,
1330
+ 0.825,
1331
+ 0.825,
1332
+ 0.882
1333
+ ],
1334
+ "angle": 0,
1335
+ "content": "Table 4: Clean accuracy of an off-the-shelf ViT classifier on images denoised with a diffusion model trained on restricted levels of Gaussian noise. Diffusion models trained on more diverse noise ranges yield higher accuracy on one-shot denoised images, even compared to models trained on the specific noise level used at evaluation time."
1336
+ },
1337
+ {
1338
+ "type": "text",
1339
+ "bbox": [
1340
+ 0.171,
1341
+ 0.896,
1342
+ 0.825,
1343
+ 0.927
1344
+ ],
1345
+ "angle": 0,
1346
+ "content": "As expected, training a new model on any one individual noise level, and then using that model to denoise images at that noise level, gives high downstream accuracy: for example, training a diffusion"
1347
+ },
1348
+ {
1349
+ "type": "page_number",
1350
+ "bbox": [
1351
+ 0.494,
1352
+ 0.949,
1353
+ 0.505,
1354
+ 0.96
1355
+ ],
1356
+ "angle": 0,
1357
+ "content": "8"
1358
+ }
1359
+ ],
1360
+ [
1361
+ {
1362
+ "type": "header",
1363
+ "bbox": [
1364
+ 0.173,
1365
+ 0.033,
1366
+ 0.48,
1367
+ 0.049
1368
+ ],
1369
+ "angle": 0,
1370
+ "content": "Published as a conference paper at ICLR 2023"
1371
+ },
1372
+ {
1373
+ "type": "text",
1374
+ "bbox": [
1375
+ 0.17,
1376
+ 0.104,
1377
+ 0.825,
1378
+ 0.189
1379
+ ],
1380
+ "angle": 0,
1381
+ "content": "model using \\(\\sigma = 0.25\\) noise and then evaluating at this same noise level gives \\(79\\%\\) accuracy. However if we then try and use this model to denoise images at a different noise level—say \\(\\sigma = 0.5\\) the accuracy of the classifier drops to just \\(16\\%\\). If we train the diffusion model directly on \\(\\sigma = 0.5\\) noise, we instead get a much better classification accuracy of \\(60.1\\%\\), but without good generalization to lower or higher noise levels. Similarly, training on noise of \\(\\sigma = 1.0\\) only gives good results when denoising images with the same noise level."
1382
+ },
1383
+ {
1384
+ "type": "text",
1385
+ "bbox": [
1386
+ 0.17,
1387
+ 0.195,
1388
+ 0.827,
1389
+ 0.28
1390
+ ],
1391
+ "angle": 0,
1392
+ "content": "More surprisingly, however, is that training on all three noise levels simultaneously gives better accuracy for denoising images at each noise level, compared to a diffusion model trained specifically and solely for that noise level. For example, when denoising images with \\(\\sigma = 0.5\\) Gaussian noise, we get a classification accuracy of \\(68.1\\%\\) when the diffusion model is trained on that noise level and additional lower and higher noise levels—a value \\(8\\%\\) higher than the accuracy of \\(60.1\\%\\) we get when training the diffusion model solely on \\(\\sigma = 0.5\\) noise."
1393
+ },
1394
+ {
1395
+ "type": "text",
1396
+ "bbox": [
1397
+ 0.17,
1398
+ 0.286,
1399
+ 0.825,
1400
+ 0.357
1401
+ ],
1402
+ "angle": 0,
1403
+ "content": "If we train on more granular noise levels, either in \\([0, 0.25]\\) or in the full interval \\([0, 1]\\), the classification accuracy on denoised images at the three individual noise levels further increases by a few percentage points. Quite surprisingly, the standard training regime which trains the diffusion model on noise from a larger range \\([0, S]\\) for some \\(S \\gg 1\\) further improves the denoising capabilities at low noise levels (\\(\\sigma = 0.25\\) and \\(\\sigma = 0.5\\)), but slightly harms the accuracy for larger noise (\\(\\sigma = 1.0\\))."
1404
+ },
1405
+ {
1406
+ "type": "text",
1407
+ "bbox": [
1408
+ 0.17,
1409
+ 0.362,
1410
+ 0.825,
1411
+ 0.406
1412
+ ],
1413
+ "angle": 0,
1414
+ "content": "From this experiment, we can conclude that the (full) training process of diffusion models leads to much better, and more generalizable, one-shot denoising capabilities than when training a standalone denoiser on a single noise level as in prior work."
1415
+ },
1416
+ {
1417
+ "type": "title",
1418
+ "bbox": [
1419
+ 0.172,
1420
+ 0.421,
1421
+ 0.569,
1422
+ 0.435
1423
+ ],
1424
+ "angle": 0,
1425
+ "content": "5.3 ADVANCED DETERMINISTIC MULTI-STEP SAMPLER"
1426
+ },
1427
+ {
1428
+ "type": "text",
1429
+ "bbox": [
1430
+ 0.17,
1431
+ 0.447,
1432
+ 0.825,
1433
+ 0.517
1434
+ ],
1435
+ "angle": 0,
1436
+ "content": "In section 5.1, we found that the denoised images from full multi-step diffusion have a tendency to deviate from the original clean image. This could be due to the stochastic nature of the full reverse-diffusion process, since at each step a random noise is added. We notice a line of work (Song et al., 2021; Karras et al., 2022) on fast deterministic sampling of diffusion models. We show that with such an advanced sampler, multi-step diffusion is able to beat one-shot denoising."
1437
+ },
1438
+ {
1439
+ "type": "text",
1440
+ "bbox": [
1441
+ 0.17,
1442
+ 0.523,
1443
+ 0.827,
1444
+ 0.622
1445
+ ],
1446
+ "angle": 0,
1447
+ "content": "We consider the deterministic EDM sampler proposed by Karras et al. (2022). We compare the recognizability of images denoised by EDM sampler and one-shot denoising. We adapt EDM sampler for image denoising by setting the maximum noise sigma of the sampling noise schedule to be the noise level found by Equation 4. We use the suggested sampler setting from Karras et al. (2022) on CIFAR-10, where 18 reverse steps with 35 evaluations of the diffusion model are performed for each example. The result is summarized in Table 5. We can see that the deterministic EDM sampler is superior over one-shot denoising."
1448
+ },
1449
+ {
1450
+ "type": "table",
1451
+ "bbox": [
1452
+ 0.266,
1453
+ 0.631,
1454
+ 0.734,
1455
+ 0.726
1456
+ ],
1457
+ "angle": 0,
1458
+ "content": "<table><tr><td>Classifier</td><td>Method</td><td>σ = 0.25</td><td>σ = 0.5</td><td>σ = 1.0</td></tr><tr><td rowspan=\"2\">Wide-ResNet</td><td>One-shot denoising</td><td>81.3</td><td>64.0</td><td>35.8</td></tr><tr><td>EDM sampler</td><td>85.0</td><td>73.0</td><td>53.8</td></tr><tr><td rowspan=\"2\">VIT</td><td>One-shot denoising</td><td>84.9</td><td>71.6</td><td>50.8</td></tr><tr><td>EDM sampler</td><td>86.1</td><td>73.1</td><td>54.0</td></tr></table>"
1459
+ },
1460
+ {
1461
+ "type": "table_caption",
1462
+ "bbox": [
1463
+ 0.17,
1464
+ 0.734,
1465
+ 0.825,
1466
+ 0.763
1467
+ ],
1468
+ "angle": 0,
1469
+ "content": "Table 5: Clean accuracy (average over 5 runs) of off-the-shelf CIFAR-10 classifiers evaluated on images denoised by one-shot denoising and EDM sampler (Karras et al., 2022)."
1470
+ },
1471
+ {
1472
+ "type": "title",
1473
+ "bbox": [
1474
+ 0.172,
1475
+ 0.789,
1476
+ 0.321,
1477
+ 0.804
1478
+ ],
1479
+ "angle": 0,
1480
+ "content": "6 CONCLUSION"
1481
+ },
1482
+ {
1483
+ "type": "text",
1484
+ "bbox": [
1485
+ 0.17,
1486
+ 0.819,
1487
+ 0.825,
1488
+ 0.877
1489
+ ],
1490
+ "angle": 0,
1491
+ "content": "At present, training certified adversarially robust deep learning models requires specialized techniques explicitly designed for the purpose of performing provably robust classification (Cohen et al., 2019). While this has proven effective, these models are extremely difficult to train to high accuracy, and degrade clean accuracy significantly."
1492
+ },
1493
+ {
1494
+ "type": "text",
1495
+ "bbox": [
1496
+ 0.17,
1497
+ 0.882,
1498
+ 0.825,
1499
+ 0.926
1500
+ ],
1501
+ "angle": 0,
1502
+ "content": "We suggest an alternative approach is possible. By exclusively making use of off-the-shelf models designed to be state-of-the-art at classification and image denoising, we can leverage the vast resources dedicated to training highly capable models for the new purpose of robust classification."
1503
+ },
1504
+ {
1505
+ "type": "page_number",
1506
+ "bbox": [
1507
+ 0.494,
1508
+ 0.949,
1509
+ 0.506,
1510
+ 0.96
1511
+ ],
1512
+ "angle": 0,
1513
+ "content": "9"
1514
+ }
1515
+ ],
1516
+ [
1517
+ {
1518
+ "type": "header",
1519
+ "bbox": [
1520
+ 0.173,
1521
+ 0.033,
1522
+ 0.48,
1523
+ 0.049
1524
+ ],
1525
+ "angle": 0,
1526
+ "content": "Published as a conference paper at ICLR 2023"
1527
+ },
1528
+ {
1529
+ "type": "title",
1530
+ "bbox": [
1531
+ 0.175,
1532
+ 0.103,
1533
+ 0.289,
1534
+ 0.118
1535
+ ],
1536
+ "angle": 0,
1537
+ "content": "REFERENCES"
1538
+ },
1539
+ {
1540
+ "type": "ref_text",
1541
+ "bbox": [
1542
+ 0.173,
1543
+ 0.126,
1544
+ 0.826,
1545
+ 0.17
1546
+ ],
1547
+ "angle": 0,
1548
+ "content": "Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In International Conference on Machine Learning, pp. 274-283. PMLR, 2018."
1549
+ },
1550
+ {
1551
+ "type": "ref_text",
1552
+ "bbox": [
1553
+ 0.175,
1554
+ 0.18,
1555
+ 0.825,
1556
+ 0.21
1557
+ ],
1558
+ "angle": 0,
1559
+ "content": "Hangbo Bao, Li Dong, and Furu Wei. BEiT: BERT pre-training of image transformers. In International Conference on Learning Representations, 2022."
1560
+ },
1561
+ {
1562
+ "type": "ref_text",
1563
+ "bbox": [
1564
+ 0.174,
1565
+ 0.221,
1566
+ 0.826,
1567
+ 0.278
1568
+ ],
1569
+ "angle": 0,
1570
+ "content": "Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Šrndić, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. Evasion attacks against machine learning at test time. In Joint European conference on machine learning and knowledge discovery in databases, pp. 387-402. Springer, 2013."
1571
+ },
1572
+ {
1573
+ "type": "ref_text",
1574
+ "bbox": [
1575
+ 0.175,
1576
+ 0.288,
1577
+ 0.826,
1578
+ 0.318
1579
+ ],
1580
+ "angle": 0,
1581
+ "content": "Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE symposium on security and privacy, pp. 39-57. IEEE, 2017."
1582
+ },
1583
+ {
1584
+ "type": "ref_text",
1585
+ "bbox": [
1586
+ 0.175,
1587
+ 0.327,
1588
+ 0.824,
1589
+ 0.357
1590
+ ],
1591
+ "angle": 0,
1592
+ "content": "Jeremy Cohen, Elan Rosenfeld, and Zico Kolter. Certified adversarial robustness via randomized smoothing. In International Conference on Machine Learning, pp. 1310-1320. PMLR, 2019."
1593
+ },
1594
+ {
1595
+ "type": "ref_text",
1596
+ "bbox": [
1597
+ 0.173,
1598
+ 0.366,
1599
+ 0.825,
1600
+ 0.41
1601
+ ],
1602
+ "angle": 0,
1603
+ "content": "Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248-255. IEEE, 2009."
1604
+ },
1605
+ {
1606
+ "type": "ref_text",
1607
+ "bbox": [
1608
+ 0.175,
1609
+ 0.42,
1610
+ 0.824,
1611
+ 0.449
1612
+ ],
1613
+ "angle": 0,
1614
+ "content": "Prafulla Dhariwal and Alexander Nichol. Diffusion models beat GANs on image synthesis. Advances in Neural Information Processing Systems, 34, 2021."
1615
+ },
1616
+ {
1617
+ "type": "ref_text",
1618
+ "bbox": [
1619
+ 0.175,
1620
+ 0.459,
1621
+ 0.825,
1622
+ 0.515
1623
+ ],
1624
+ "angle": 0,
1625
+ "content": "Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021."
1626
+ },
1627
+ {
1628
+ "type": "ref_text",
1629
+ "bbox": [
1630
+ 0.175,
1631
+ 0.526,
1632
+ 0.825,
1633
+ 0.569
1634
+ ],
1635
+ "angle": 0,
1636
+ "content": "Sven Gowal, Krishnamurthy Dvijotham, Robert Stanforth, Rudy Bunel, Chongli Qin, Jonathan Uesato, Relja Arandjelovic, Timothy Mann, and Pushmeet Kohli. On the effectiveness of interval bound propagation for training verifiably robust models. arXiv preprint arXiv:1810.12715, 2018."
1637
+ },
1638
+ {
1639
+ "type": "ref_text",
1640
+ "bbox": [
1641
+ 0.175,
1642
+ 0.579,
1643
+ 0.825,
1644
+ 0.608
1645
+ ],
1646
+ "angle": 0,
1647
+ "content": "Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840-6851, 2020."
1648
+ },
1649
+ {
1650
+ "type": "ref_text",
1651
+ "bbox": [
1652
+ 0.175,
1653
+ 0.618,
1654
+ 0.825,
1655
+ 0.66
1656
+ ],
1657
+ "angle": 0,
1658
+ "content": "Miklos Z Horváth, Mark Niklas Müller, Marc Fischer, and Martin Vechev. Boosting randomized smoothing with variance reduced classifiers. In International Conference on Learning Representations, 2022a."
1659
+ },
1660
+ {
1661
+ "type": "ref_text",
1662
+ "bbox": [
1663
+ 0.175,
1664
+ 0.671,
1665
+ 0.825,
1666
+ 0.701
1667
+ ],
1668
+ "angle": 0,
1669
+ "content": "Miklos Z Horváth, Mark Niklas Müller, Marc Fischer, and Martin Vechev. Robust and accurate - compositional architectures for randomized smoothing. arXiv preprint arXiv:2204.00487, 2022b."
1670
+ },
1671
+ {
1672
+ "type": "ref_text",
1673
+ "bbox": [
1674
+ 0.175,
1675
+ 0.711,
1676
+ 0.825,
1677
+ 0.74
1678
+ ],
1679
+ "angle": 0,
1680
+ "content": "Jongheon Jeong and Jinwoo Shin. Consistency regularization for certified robustness of smoothed classifiers. Advances in Neural Information Processing Systems, 33:10558-10570, 2020."
1681
+ },
1682
+ {
1683
+ "type": "ref_text",
1684
+ "bbox": [
1685
+ 0.175,
1686
+ 0.75,
1687
+ 0.825,
1688
+ 0.793
1689
+ ],
1690
+ "angle": 0,
1691
+ "content": "Jongheon Jeong, Sejun Park, Minkyu Kim, Heung-Chang Lee, Do-Guk Kim, and Jinwoo Shin. Smoothmix: Training confidence-calibrated smoothed classifiers for certified robustness. Advances in Neural Information Processing Systems, 34:30153-30168, 2021."
1692
+ },
1693
+ {
1694
+ "type": "ref_text",
1695
+ "bbox": [
1696
+ 0.175,
1697
+ 0.803,
1698
+ 0.825,
1699
+ 0.833
1700
+ ],
1701
+ "angle": 0,
1702
+ "content": "Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine. Elucidating the design space of diffusion-based generative models. Advances in Neural Information Processing Systems, 2022."
1703
+ },
1704
+ {
1705
+ "type": "ref_text",
1706
+ "bbox": [
1707
+ 0.175,
1708
+ 0.843,
1709
+ 0.825,
1710
+ 0.885
1711
+ ],
1712
+ "angle": 0,
1713
+ "content": "Mathias Lecuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel Hsu, and Suman Jana. Certified robustness to adversarial examples with differential privacy. In 2019 IEEE Symposium on Security and Privacy, pp. 656-672. IEEE, 2019."
1714
+ },
1715
+ {
1716
+ "type": "ref_text",
1717
+ "bbox": [
1718
+ 0.175,
1719
+ 0.896,
1720
+ 0.825,
1721
+ 0.925
1722
+ ],
1723
+ "angle": 0,
1724
+ "content": "Kyungmin Lee. Provable defense by denoised smoothing with learned score function. ICLR Workshop on Security and Safety in Machine Learning Systems, 2021."
1725
+ },
1726
+ {
1727
+ "type": "list",
1728
+ "bbox": [
1729
+ 0.173,
1730
+ 0.126,
1731
+ 0.826,
1732
+ 0.925
1733
+ ],
1734
+ "angle": 0,
1735
+ "content": null
1736
+ },
1737
+ {
1738
+ "type": "page_number",
1739
+ "bbox": [
1740
+ 0.491,
1741
+ 0.948,
1742
+ 0.511,
1743
+ 0.961
1744
+ ],
1745
+ "angle": 0,
1746
+ "content": "10"
1747
+ }
1748
+ ],
1749
+ [
1750
+ {
1751
+ "type": "header",
1752
+ "bbox": [
1753
+ 0.173,
1754
+ 0.033,
1755
+ 0.48,
1756
+ 0.049
1757
+ ],
1758
+ "angle": 0,
1759
+ "content": "Published as a conference paper at ICLR 2023"
1760
+ },
1761
+ {
1762
+ "type": "ref_text",
1763
+ "bbox": [
1764
+ 0.173,
1765
+ 0.103,
1766
+ 0.826,
1767
+ 0.147
1768
+ ],
1769
+ "angle": 0,
1770
+ "content": "Matthew Mirman, Timon Gehr, and Martin Vechev. Differentiable abstract interpretation for provably robust neural networks. In International Conference on Machine Learning, pp. 3578-3586. PMLR, 2018."
1771
+ },
1772
+ {
1773
+ "type": "ref_text",
1774
+ "bbox": [
1775
+ 0.173,
1776
+ 0.155,
1777
+ 0.825,
1778
+ 0.187
1779
+ ],
1780
+ "angle": 0,
1781
+ "content": "Alexander Quinn Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. In International Conference on Machine Learning, pp. 8162-8171. PMLR, 2021."
1782
+ },
1783
+ {
1784
+ "type": "ref_text",
1785
+ "bbox": [
1786
+ 0.173,
1787
+ 0.193,
1788
+ 0.825,
1789
+ 0.224
1790
+ ],
1791
+ "angle": 0,
1792
+ "content": "Weili Nie, Brandon Guo, Yujia Huang, Chaowei Xiao, Arash Vahdat, and Anima Anandkumar. Diffusion models for adversarial purification. arXiv preprint arXiv:2205.07460, 2022."
1793
+ },
1794
+ {
1795
+ "type": "ref_text",
1796
+ "bbox": [
1797
+ 0.173,
1798
+ 0.23,
1799
+ 0.825,
1800
+ 0.275
1801
+ ],
1802
+ "angle": 0,
1803
+ "content": "Hadi Salman, Jerry Li, Ilya Razenshteyn, Pengchuan Zhang, Huan Zhang, Sebastien Bubeck, and Greg Yang. Provably robust deep learning via adversarially trained smoothed classifiers. Advances in Neural Information Processing Systems, 32, 2019."
1804
+ },
1805
+ {
1806
+ "type": "ref_text",
1807
+ "bbox": [
1808
+ 0.173,
1809
+ 0.282,
1810
+ 0.825,
1811
+ 0.325
1812
+ ],
1813
+ "angle": 0,
1814
+ "content": "Hadi Salman, Mingjie Sun, Greg Yang, Ashish Kapoor, and J Zico Kolter. Denoised smoothing: A provable defense for pretrained classifiers. Advances in Neural Information Processing Systems, 33:21945-21957, 2020."
1815
+ },
1816
+ {
1817
+ "type": "ref_text",
1818
+ "bbox": [
1819
+ 0.173,
1820
+ 0.334,
1821
+ 0.825,
1822
+ 0.377
1823
+ ],
1824
+ "angle": 0,
1825
+ "content": "Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, and Aleksander Madry. Adversarially robust generalization requires more data. Advances in Neural Information Processing Systems, 31, 2018."
1826
+ },
1827
+ {
1828
+ "type": "ref_text",
1829
+ "bbox": [
1830
+ 0.173,
1831
+ 0.385,
1832
+ 0.825,
1833
+ 0.43
1834
+ ],
1835
+ "angle": 0,
1836
+ "content": "Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learning, pp. 2256-2265. PMLR, 2015."
1837
+ },
1838
+ {
1839
+ "type": "ref_text",
1840
+ "bbox": [
1841
+ 0.173,
1842
+ 0.437,
1843
+ 0.825,
1844
+ 0.468
1845
+ ],
1846
+ "angle": 0,
1847
+ "content": "Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. In International Conference on Learning Representations, 2021."
1848
+ },
1849
+ {
1850
+ "type": "ref_text",
1851
+ "bbox": [
1852
+ 0.173,
1853
+ 0.475,
1854
+ 0.825,
1855
+ 0.518
1856
+ ],
1857
+ "angle": 0,
1858
+ "content": "Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In International Conference on Learning Representations, 2014."
1859
+ },
1860
+ {
1861
+ "type": "ref_text",
1862
+ "bbox": [
1863
+ 0.173,
1864
+ 0.526,
1865
+ 0.825,
1866
+ 0.571
1867
+ ],
1868
+ "angle": 0,
1869
+ "content": "Jonathan Uesato, Brendan O'donoghue, Pushmeet Kohli, and Aaron Oord. Adversarial risk and the dangers of evaluating against weak attacks. In International Conference on Machine Learning, pp. 5025-5034. PMLR, 2018."
1870
+ },
1871
+ {
1872
+ "type": "ref_text",
1873
+ "bbox": [
1874
+ 0.173,
1875
+ 0.578,
1876
+ 0.825,
1877
+ 0.609
1878
+ ],
1879
+ "angle": 0,
1880
+ "content": "Ross Wightman. Pytorch image models. https://github.com/rwightman/pytorch-image-models, 2019."
1881
+ },
1882
+ {
1883
+ "type": "ref_text",
1884
+ "bbox": [
1885
+ 0.173,
1886
+ 0.616,
1887
+ 0.825,
1888
+ 0.647
1889
+ ],
1890
+ "angle": 0,
1891
+ "content": "Zhuolin Yang, Linyi Li, Xiaojun Xu, Bhavya Kailkhura, Tao Xie, and Bo Li. On the certified robustness for ensemble models and beyond. arXiv preprint arXiv:2107.10873, 2021."
1892
+ },
1893
+ {
1894
+ "type": "ref_text",
1895
+ "bbox": [
1896
+ 0.173,
1897
+ 0.653,
1898
+ 0.825,
1899
+ 0.684
1900
+ ],
1901
+ "angle": 0,
1902
+ "content": "Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In British Machine Vision Conference, 2016."
1903
+ },
1904
+ {
1905
+ "type": "ref_text",
1906
+ "bbox": [
1907
+ 0.173,
1908
+ 0.691,
1909
+ 0.825,
1910
+ 0.735
1911
+ ],
1912
+ "angle": 0,
1913
+ "content": "Runtian Zhai, Chen Dan, Di He, Huan Zhang, Boqing Gong, Pradeep Ravikumar, Cho-Jui Hsieh, and Liwei Wang. MACER: Attack-free and scalable robust training via maximizing certified radius. In International Conference on Learning Representations, 2020."
1914
+ },
1915
+ {
1916
+ "type": "list",
1917
+ "bbox": [
1918
+ 0.173,
1919
+ 0.103,
1920
+ 0.826,
1921
+ 0.735
1922
+ ],
1923
+ "angle": 0,
1924
+ "content": null
1925
+ },
1926
+ {
1927
+ "type": "page_number",
1928
+ "bbox": [
1929
+ 0.491,
1930
+ 0.948,
1931
+ 0.508,
1932
+ 0.96
1933
+ ],
1934
+ "angle": 0,
1935
+ "content": "11"
1936
+ }
1937
+ ],
1938
+ [
1939
+ {
1940
+ "type": "header",
1941
+ "bbox": [
1942
+ 0.173,
1943
+ 0.033,
1944
+ 0.48,
1945
+ 0.049
1946
+ ],
1947
+ "angle": 0,
1948
+ "content": "Published as a conference paper at ICLR 2023"
1949
+ },
1950
+ {
1951
+ "type": "title",
1952
+ "bbox": [
1953
+ 0.173,
1954
+ 0.103,
1955
+ 0.3,
1956
+ 0.119
1957
+ ],
1958
+ "angle": 0,
1959
+ "content": "A APPENDIX"
1960
+ },
1961
+ {
1962
+ "type": "table",
1963
+ "bbox": [
1964
+ 0.172,
1965
+ 0.137,
1966
+ 0.49,
1967
+ 0.226
1968
+ ],
1969
+ "angle": 0,
1970
+ "content": "<table><tr><td rowspan=\"2\">Noise</td><td colspan=\"5\">Certified Accuracy at ε (%)</td></tr><tr><td>0.0</td><td>0.25</td><td>0.5</td><td>0.75</td><td>1.0</td></tr><tr><td>σ = 0.25</td><td>83.8</td><td>70.6</td><td>55.7</td><td>40.0</td><td>0.0</td></tr><tr><td>σ = 0.5</td><td>65.8</td><td>54.7</td><td>43.7</td><td>34.2</td><td>26.1</td></tr><tr><td>σ = 1.0</td><td>33.2</td><td>28.0</td><td>22.8</td><td>18.0</td><td>13.6</td></tr></table>"
1971
+ },
1972
+ {
1973
+ "type": "table_caption",
1974
+ "bbox": [
1975
+ 0.279,
1976
+ 0.23,
1977
+ 0.383,
1978
+ 0.243
1979
+ ],
1980
+ "angle": 0,
1981
+ "content": "(a) Wide-ResNet"
1982
+ },
1983
+ {
1984
+ "type": "table",
1985
+ "bbox": [
1986
+ 0.508,
1987
+ 0.138,
1988
+ 0.828,
1989
+ 0.226
1990
+ ],
1991
+ "angle": 0,
1992
+ "content": "<table><tr><td rowspan=\"2\">Noise</td><td colspan=\"5\">Certified Accuracy at ε (%)</td></tr><tr><td>0.0</td><td>0.25</td><td>0.5</td><td>0.75</td><td>1.0</td></tr><tr><td>σ = 0.25</td><td>85.9</td><td>76.7</td><td>63.8</td><td>49.5</td><td>0.0</td></tr><tr><td>σ = 0.5</td><td>74.5</td><td>66.0</td><td>56.1</td><td>45.7</td><td>36.4</td></tr><tr><td>σ = 1.0</td><td>55.1</td><td>48.7</td><td>42.3</td><td>35.8</td><td>29.9</td></tr></table>"
1993
+ },
1994
+ {
1995
+ "type": "table_caption",
1996
+ "bbox": [
1997
+ 0.585,
1998
+ 0.23,
1999
+ 0.752,
2000
+ 0.243
2001
+ ],
2002
+ "angle": 0,
2003
+ "content": "(b) Finetuned Wide-ResNet"
2004
+ },
2005
+ {
2006
+ "type": "table",
2007
+ "bbox": [
2008
+ 0.172,
2009
+ 0.258,
2010
+ 0.49,
2011
+ 0.346
2012
+ ],
2013
+ "angle": 0,
2014
+ "content": "<table><tr><td rowspan=\"2\">Noise</td><td colspan=\"5\">Certified Accuracy at ε (%)</td></tr><tr><td>0.0</td><td>0.25</td><td>0.5</td><td>0.75</td><td>1.0</td></tr><tr><td>σ = 0.25</td><td>88.1</td><td>76.7</td><td>63.0</td><td>45.3</td><td>0.0</td></tr><tr><td>σ = 0.5</td><td>77.0</td><td>65.8</td><td>53.4</td><td>41.8</td><td>32.1</td></tr><tr><td>σ = 1.0</td><td>49.5</td><td>40.3</td><td>33.3</td><td>26.1</td><td>20.2</td></tr></table>"
2015
+ },
2016
+ {
2017
+ "type": "table_caption",
2018
+ "bbox": [
2019
+ 0.308,
2020
+ 0.35,
2021
+ 0.355,
2022
+ 0.363
2023
+ ],
2024
+ "angle": 0,
2025
+ "content": "(c) ViT"
2026
+ },
2027
+ {
2028
+ "type": "table",
2029
+ "bbox": [
2030
+ 0.508,
2031
+ 0.258,
2032
+ 0.828,
2033
+ 0.346
2034
+ ],
2035
+ "angle": 0,
2036
+ "content": "<table><tr><td rowspan=\"2\">Noise</td><td colspan=\"5\">Certified Accuracy at ε (%)</td></tr><tr><td>0.0</td><td>0.25</td><td>0.5</td><td>0.75</td><td>1.0</td></tr><tr><td>σ = 0.25</td><td>91.2</td><td>79.3</td><td>65.5</td><td>48.7</td><td>0.0</td></tr><tr><td>σ = 0.5</td><td>81.5</td><td>67.0</td><td>56.1</td><td>45.3</td><td>35.5</td></tr><tr><td>σ = 1.0</td><td>65.1</td><td>48.4</td><td>41.7</td><td>35.2</td><td>29.0</td></tr></table>"
2037
+ },
2038
+ {
2039
+ "type": "table_caption",
2040
+ "bbox": [
2041
+ 0.613,
2042
+ 0.35,
2043
+ 0.722,
2044
+ 0.363
2045
+ ],
2046
+ "angle": 0,
2047
+ "content": "(d) Finetuned ViT"
2048
+ },
2049
+ {
2050
+ "type": "table_caption",
2051
+ "bbox": [
2052
+ 0.171,
2053
+ 0.375,
2054
+ 0.825,
2055
+ 0.404
2056
+ ],
2057
+ "angle": 0,
2058
+ "content": "Table 6: Certified accuracy of four different classifiers on CIFAR-10 at varying levels of Gaussian noise \\( \\sigma \\) ,all using the same diffusion model."
2059
+ },
2060
+ {
2061
+ "type": "table_caption",
2062
+ "bbox": [
2063
+ 0.442,
2064
+ 0.413,
2065
+ 0.626,
2066
+ 0.428
2067
+ ],
2068
+ "angle": 0,
2069
+ "content": "Certified Accuracy at \\(\\varepsilon\\) (\\%)"
2070
+ },
2071
+ {
2072
+ "type": "table",
2073
+ "bbox": [
2074
+ 0.316,
2075
+ 0.429,
2076
+ 0.682,
2077
+ 0.501
2078
+ ],
2079
+ "angle": 0,
2080
+ "content": "<table><tr><td>Noise</td><td>0.0</td><td>0.5</td><td>1.0</td><td>1.5</td><td>2.0</td><td>3.0</td></tr><tr><td>σ = 0.25</td><td>82.8</td><td>71.1</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td></tr><tr><td>σ = 0.5</td><td>77.1</td><td>67.8</td><td>54.3</td><td>38.1</td><td>0.0</td><td>0.0</td></tr><tr><td>σ = 1.0</td><td>60.0</td><td>50.0</td><td>42.0</td><td>35.5</td><td>29.5</td><td>13.1</td></tr></table>"
2081
+ },
2082
+ {
2083
+ "type": "table_caption",
2084
+ "bbox": [
2085
+ 0.235,
2086
+ 0.51,
2087
+ 0.761,
2088
+ 0.526
2089
+ ],
2090
+ "angle": 0,
2091
+ "content": "Table 7: Certified accuracy on ImageNet for varying levels of Gaussian noise \\( \\sigma \\) ."
2092
+ },
2093
+ {
2094
+ "type": "page_number",
2095
+ "bbox": [
2096
+ 0.491,
2097
+ 0.948,
2098
+ 0.51,
2099
+ 0.96
2100
+ ],
2101
+ "angle": 0,
2102
+ "content": "12"
2103
+ }
2104
+ ],
2105
+ [
2106
+ {
2107
+ "type": "header",
2108
+ "bbox": [
2109
+ 0.174,
2110
+ 0.033,
2111
+ 0.48,
2112
+ 0.048
2113
+ ],
2114
+ "angle": 0,
2115
+ "content": "Published as a conference paper at ICLR 2023"
2116
+ },
2117
+ {
2118
+ "type": "image",
2119
+ "bbox": [
2120
+ 0.254,
2121
+ 0.124,
2122
+ 0.716,
2123
+ 0.477
2124
+ ],
2125
+ "angle": 0,
2126
+ "content": null
2127
+ },
2128
+ {
2129
+ "type": "image_caption",
2130
+ "bbox": [
2131
+ 0.361,
2132
+ 0.483,
2133
+ 0.606,
2134
+ 0.496
2135
+ ],
2136
+ "angle": 0,
2137
+ "content": "(a) One-shot denoised images \\((\\sigma = 1.00)\\)"
2138
+ },
2139
+ {
2140
+ "type": "image",
2141
+ "bbox": [
2142
+ 0.253,
2143
+ 0.499,
2144
+ 0.714,
2145
+ 0.851
2146
+ ],
2147
+ "angle": 0,
2148
+ "content": null
2149
+ },
2150
+ {
2151
+ "type": "image_caption",
2152
+ "bbox": [
2153
+ 0.355,
2154
+ 0.856,
2155
+ 0.611,
2156
+ 0.871
2157
+ ],
2158
+ "angle": 0,
2159
+ "content": "(b) Multi-step denoised images \\((\\sigma = 1.00)\\)"
2160
+ },
2161
+ {
2162
+ "type": "image_caption",
2163
+ "bbox": [
2164
+ 0.171,
2165
+ 0.882,
2166
+ 0.825,
2167
+ 0.926
2168
+ ],
2169
+ "angle": 0,
2170
+ "content": "Figure 4: Qualitative comparison of one-shot denoising and multi-step denoising. We show denoised images under random Gaussian noise (\\(\\sigma = 1.00\\)). A green border is applied when the denoised images are correctly classified while a red border means that the classifier misclassifies the image."
2171
+ },
2172
+ {
2173
+ "type": "page_number",
2174
+ "bbox": [
2175
+ 0.491,
2176
+ 0.949,
2177
+ 0.509,
2178
+ 0.96
2179
+ ],
2180
+ "angle": 0,
2181
+ "content": "13"
2182
+ }
2183
+ ],
2184
+ [
2185
+ {
2186
+ "type": "header",
2187
+ "bbox": [
2188
+ 0.174,
2189
+ 0.033,
2190
+ 0.48,
2191
+ 0.049
2192
+ ],
2193
+ "angle": 0,
2194
+ "content": "Published as a conference paper at ICLR 2023"
2195
+ },
2196
+ {
2197
+ "type": "image",
2198
+ "bbox": [
2199
+ 0.178,
2200
+ 0.341,
2201
+ 0.825,
2202
+ 0.613
2203
+ ],
2204
+ "angle": 0,
2205
+ "content": null
2206
+ },
2207
+ {
2208
+ "type": "image_caption",
2209
+ "bbox": [
2210
+ 0.171,
2211
+ 0.626,
2212
+ 0.828,
2213
+ 0.684
2214
+ ],
2215
+ "angle": 0,
2216
+ "content": "Figure 5: Additional intuitive examples for why multi-step denoised images are less recognized by the classifier. From left to right: clean images, noisy images with \\(\\sigma = 1.0\\), one-step denoised images, multi-step denoised images. For the denoised images, we show the prediction by the pretrained BEiT model."
2217
+ },
2218
+ {
2219
+ "type": "page_number",
2220
+ "bbox": [
2221
+ 0.491,
2222
+ 0.948,
2223
+ 0.51,
2224
+ 0.96
2225
+ ],
2226
+ "angle": 0,
2227
+ "content": "14"
2228
+ }
2229
+ ]
2230
+ ]
2023/(Certified!!) Adversarial Robustness for Free!/7064d377-0583-493a-b0c4-c47e08357804_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:31873c5b80e8f8d5bbd1cf577e0a061a7d7b3d98da4e6b900861dfe0d19c8a8a
3
+ size 3211235
2023/(Certified!!) Adversarial Robustness for Free!/full.md ADDED
@@ -0,0 +1,309 @@
1
+ # (CERTIFIED!!) ADVERSARIAL ROBUSTNESS FOR FREE!
2
+
3
+ Nicholas Carlini\*$^{1}$ Florian Tramèr\*$^{1}$ Krishnamurthy (Dj) Dvijotham\* Leslie Rice$^{2}$ Mingjie Sun$^{2}$ J. Zico Kolter\*$^{2,3}$
4
+
5
+ $^{1}$ Google $^{2}$ Carnegie Mellon University $^{3}$ Bosch Center for AI
6
+
7
+ # ABSTRACT
8
+
9
+ In this paper we show how to achieve state-of-the-art certified adversarial robustness to $\ell_2$ -norm bounded perturbations by relying exclusively on off-the-shelf pretrained models. To do so, we instantiate the denoised smoothing approach of Salman et al. (2020) by combining a pretrained denoising diffusion probabilistic model and a standard high-accuracy classifier. This allows us to certify $71\%$ accuracy on ImageNet under adversarial perturbations constrained to be within an $\ell_2$ norm of $\varepsilon = 0.5$ , an improvement of 14 percentage points over the prior certified SoTA using any approach, or an improvement of 30 percentage points over denoised smoothing. We obtain these results using only pretrained diffusion models and image classifiers, without requiring any fine tuning or retraining of model parameters.
10
+
11
+ # 1 INTRODUCTION
12
+
13
+ Evaluating the robustness of deep learning models to norm bounded adversarial perturbations has been shown to be difficult (Athalye et al., 2018; Uesato et al., 2018). Certified defenses—such as those based on bound propagation (Gowal et al., 2018; Mirman et al., 2018) or randomized smoothing (Lecuyer et al., 2019; Cohen et al., 2019)—offer provable guarantees that a model's predictions are robust to norm-bounded adversarial perturbations, for a large fraction of examples in the test set.
14
+
15
+ The current state-of-the-art approaches to certify robustness to adversarial perturbations bounded in the $\ell_2$ norm rely on randomized smoothing (Lecuyer et al., 2019; Cohen et al., 2019). By taking a majority vote over the labels predicted by a "base classifier" under random Gaussian perturbations of the input, if the correct class is output sufficiently often, then the defense's output on the original un-noised input is guaranteed to be robust to $\ell_2$ norm bounded adversarial perturbations.
16
+
17
+ Denoised smoothing (Salman et al., 2020) is a certified defense that splits this one-step process into two. After randomly perturbing an input, the defense first applies a denoiser model that aims to remove the added noise, followed by a standard classifier that guesses a label given this noise-then-denoised input. This enables applying randomized smoothing to pretrained black-box base classifiers, as long as the denoiser can produce clean images close to the base classifier's original training distribution.
18
+
19
+ We observe that the recent line of work on denoising diffusion probabilistic models (Sohl-Dickstein et al., 2015; Ho et al., 2020; Nichol & Dhariwal, 2021)—which achieve state-of-the-art results on image generation—are a perfect match for the denoising step in a denoised smoothing defense. A forward diffusion process takes a source data distribution (e.g., images from some data distribution) and then adds Gaussian noise until the distribution converges to a high-variance isotropic Gaussian. Denoising diffusion models are trained to invert this process. Thus, we can use a diffusion model as a denoiser that recovers high quality denoised inputs from inputs perturbed with Gaussian noise.
20
+
21
+ In this paper, we combine state-of-the-art, publicly available diffusion models as denoisers with standard pretrained state-of-the-art classifiers. We show that the resulting denoised smoothing defense obtains significantly better certified robustness results—for perturbations of $\ell_2$ norm of $\epsilon \leq 2$ on ImageNet and $\epsilon \leq 0.5$ on CIFAR-10—compared to the "custom" denoisers trained in prior work (Salman et al., 2020), or in fact with any certifiably robust defense (even those that do not rely on denoised smoothing). Code to reproduce our experiments is available at: https://github.com/ethz-privsec/diffusion_denoised_smoothing.
22
+
23
+ # 2 BACKGROUND
24
+
25
+ Adversarial examples (Biggio et al., 2013; Szegedy et al., 2014) are inputs $x' = x + \delta$ constructed by taking some input $x$ (with true label $y \in \mathcal{V}$ ) and adding a perturbation $\delta$ (that is assumed to be imperceptible and hence label-preserving) so that a given classifier $f$ misclassifies the perturbed input, i.e., $f(x + \delta) \neq y$ . The "smallness" of $\delta$ is quantified by its Euclidean norm, and we constrain $\| \delta \|_2 \leq \varepsilon$ . Even when considering exceptionally small perturbation budgets (e.g., $\varepsilon = 0.5$ ) modern classifiers often have near-0% accuracy (Carlini & Wagner, 2017).
26
+
27
+ Randomized smoothing (Lecuyer et al., 2019; Cohen et al., 2019) is a technique to certify the robustness of arbitrary classifiers against adversarial examples under the $\ell_2$ norm. Given an input $x$ and base classifier $f$ , randomized smoothing considers a smooth version of $f$ defined as:
28
+
29
+ $$
30
+ g(x) := \operatorname{argmax}_{c} \; \Pr_{\delta \sim \mathcal{N}(0, \sigma^{2}\mathbf{I})}\left(f(x + \delta) = c\right) \tag{1}
31
+ $$
32
+
33
+ Cohen et al. (2019) prove that the smooth classifier $g$ is robust to perturbations of $\ell_2$ radius $R$ , where the radius $R$ grows with the classifier's "margin" (i.e., the difference in probabilities assigned to the most likely and second most-likely classes). As the probability in Equation 1 cannot be efficiently computed when the base classifier $f$ is a neural network, Cohen et al. (2019) instantiate this defense by sampling a small number $m$ of noise instances (e.g., $m = 10$ ) and taking a majority vote over the outputs of the base classifier $f$ on $m$ noisy versions of the input. To compute a lower-bound on this defense's robust radius $R$ , they estimate the probabilities $\operatorname*{Pr}[f(x + \delta) = c]$ for each class label $c$ by sampling a large number $N$ of noise instances $\delta$ (e.g., $N = 100,000$ ). See Cohen et al. (2019) for details.
34
+
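The majority-vote estimate of the smoothed classifier $g$ from Equation 1 can be sketched in a few lines. The base classifier `f_toy` below is a hypothetical stand-in for illustration, not any model from the paper.

```python
import random
from collections import Counter

def smoothed_predict(f, x, sigma, n_samples=1000, seed=0):
    """Monte Carlo estimate of g(x) = argmax_c Pr[f(x + delta) = c],
    with delta ~ N(0, sigma^2 I), as in Equation 1."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(n_samples):
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        counts[f(noisy)] += 1
    return counts.most_common(1)[0][0]

# Hypothetical base classifier: label is the sign of the mean coordinate.
def f_toy(x):
    return 1 if sum(x) / len(x) >= 0 else 0

print(smoothed_predict(f_toy, [0.3, 0.5, 0.2], sigma=0.25))
```

In practice a small sample count suffices for prediction, while certification re-estimates the vote probabilities with a much larger sample size and a confidence bound.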
35
+ Denoised smoothing (Salman et al., 2020) is an instantiation of randomized smoothing, where the base classifier $f$ is composed of a denoiser denoise followed by a standard classifier $f_{\mathrm{clf}}$ :
36
+
37
+ $$
38
+ f(x + \delta) := f_{\mathrm{clf}}(\operatorname{denoise}(x + \delta)). \tag{2}
39
+ $$
40
+
41
+ Given a very good denoiser (i.e., $\mathrm{denoise}(x + \delta) \approx x$ with high probability for $\delta \sim \mathcal{N}(0, \sigma^2\mathbf{I})$ ), we can expect the base classifier's accuracy on noisy images to be similar to the clean accuracy of the standard classifier $f_{\mathrm{clf}}$ . Salman et al. (2020) instantiate their denoised smoothing technique by training custom denoiser models with Gaussian noise augmentation, combined with off-the-shelf pretrained classifiers.
42
+
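Equation 2 is plain function composition. A minimal sketch, with hypothetical stand-ins for both the denoiser and the classifier (neither corresponds to the paper's trained models):

```python
def make_base_classifier(denoise, f_clf):
    """Denoised-smoothing base classifier of Equation 2:
    f(x + delta) := f_clf(denoise(x + delta))."""
    def f(x_noisy):
        return f_clf(denoise(x_noisy))
    return f

# Hypothetical stand-ins:
def denoise_toy(x):
    return [min(max(xi, 0.0), 1.0) for xi in x]   # just clamp to the image range

def clf_toy(x):
    return int(sum(x) / len(x) > 0.5)             # threshold on the mean value

f_base = make_base_classifier(denoise_toy, clf_toy)
print(f_base([1.4, 0.9, -0.2]))  # clamps to [1.0, 0.9, 0.0], mean ≈ 0.633, so label 1
```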
43
+ Denoising Diffusion Probabilistic Models (Sohl-Dickstein et al., 2015; Ho et al., 2020; Nichol & Dhariwal, 2021) are a form of generative model that works by learning to reverse time on a diffusion process of the form $x_{t} \sim \sqrt{1 - \beta_{t}} \cdot x_{t-1} + \sqrt{\beta_{t}} \cdot \omega_{t}, \omega_{t} \sim \mathcal{N}(0, \mathbf{I})$ with $x_{0}$ coming from the data distribution, and the $\beta_{t}$ being fixed (or learned) variance parameters. The diffusion process transforms images from the target data distribution to purely random noise over time. The reverse process then synthesizes images from the data distribution starting with random Gaussian noise. In this paper we will not make use of diffusion models in the typical way; instead it suffices to understand just one single property about how they are trained.
44
+
45
+ Given a clean training image $x \in [-1, 1]^{w \cdot h \cdot c}$ , a diffusion model selects a timestep $t \in \mathbb{N}^+$ from some fixed schedule and then samples a noisy image $x_{t}$ of the form
46
+
47
+ $$
48
+ x_{t} := \sqrt{\alpha_{t}} \cdot x + \sqrt{1 - \alpha_{t}} \cdot \mathcal{N}(0, \mathbf{I}), \tag{3}
49
+ $$
50
+
51
+ where the factor $\alpha_{t}$ is a constant derived from the timestep $t$ that determines the amount of noise to be added to the image (the noise magnitude increases monotonically with $t$).
52
+
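A quick numerical check of Equation 3, under an assumed value of $\alpha_t$ (the schedule itself is model-specific): per coordinate, $x_t$ has mean $\sqrt{\alpha_t}\,x$ and variance $1 - \alpha_t$.

```python
import math
import random

def sample_xt(x, alpha_t, rng):
    """Forward noising of Equation 3:
    x_t = sqrt(alpha_t) * x + sqrt(1 - alpha_t) * eps, with eps ~ N(0, I)."""
    return [math.sqrt(alpha_t) * xi + math.sqrt(1.0 - alpha_t) * rng.gauss(0.0, 1.0)
            for xi in x]

rng = random.Random(0)
alpha_t = 0.25                     # hypothetical value from late in a schedule
samples = [sample_xt([0.8], alpha_t, rng)[0] for _ in range(20000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(round(mean, 2), round(var, 2))   # mean ≈ sqrt(0.25)*0.8 = 0.4, var ≈ 0.75
```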
53
+ The diffusion model is then trained (loosely speaking) to minimize the discrepancy between $x$ and denoise $(x_{t}; t)$ ; that is, to predict what the original (un-noised) image should look like after applying the noising step at timestep $t$ .<sup>1</sup>
54
+
55
+ Algorithm 1 Noise, denoise, classify
+
+ NOISEANDCLASSIFY(x, σ):
+ 1: $t^{\star},\alpha_{t^{\star}} \gets \text{GETTIMESTEP}(\sigma)$
+ 2: $x_{t^{\star}} \gets \sqrt{\alpha_{t^{\star}}}\,(x + \mathcal{N}(0,\sigma^{2}\mathbf{I}))$
+ 3: $\hat{x} \gets \text{denoise}(x_{t^{\star}}; t^{\star})$
+ 4: $y \gets f_{\mathrm{clf}}(\hat{x})$
+ 5: return $y$
+
+ GETTIMESTEP(σ):
+ 1: find $t^{\star}$ such that $\frac{1 - \alpha_{t^{\star}}}{\alpha_{t^{\star}}} = \sigma^{2}$
+ 2: return $t^{\star}, \alpha_{t^{\star}}$
+
+ PREDICT(x, σ, N, η):
+ 1: counts ← 0
+ 2: for $i \in \{1, 2, \ldots, N\}$ do
+ 3: $\quad y \gets \text{NOISEANDCLASSIFY}(x, \sigma)$
+ 4: $\quad$ counts[$y$] ← counts[$y$] + 1
+ 5: $\hat{y}_A, \hat{y}_B \gets$ top two labels in counts
+ 6: $n_A, n_B \gets$ counts[$\hat{y}_A$], counts[$\hat{y}_B$]
+ 7: if BINOMPTEST($n_A$, $n_A + n_B$, $1/2$) ≤ $\eta$ then return $\hat{y}_A$
+ 8: else return Abstain
67
+
68
+ Figure 1: Our approach can be implemented in under 15 lines of code, given an off-the-shelf classifier $f_{\mathrm{clf}}$ and an off-the-shelf diffusion model denoise. The PREDICT function is adapted from Cohen et al. (2019) and takes as input a number of noise samples $N$ and a statistical significance level $\eta \in (0,1)$ and inherits the same robustness certificate proved in Cohen et al. (2019).
69
+
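The PREDICT procedure of Figure 1 can be sketched in plain Python. The `noise_and_classify_toy` function below is a hypothetical stand-in for the diffusion denoise-then-classify pipeline, and the exact binomial test replaces `BINOMPTEST` (exact here because the null $p = 1/2$ is symmetric).

```python
import math
import random
from collections import Counter

def binom_two_sided(k, n):
    """Two-sided p-value for k successes out of n under Binomial(n, 1/2)."""
    k = max(k, n - k)
    tail = sum(math.comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2.0 * tail)

def predict(noise_and_classify, x, sigma, n, eta):
    """PREDICT from Figure 1: majority vote over n noisy samples, abstaining
    unless the top label wins a binomial test at significance level eta."""
    counts = Counter(noise_and_classify(x, sigma) for _ in range(n))
    (y_a, n_a), (_, n_b) = (counts.most_common(2) + [(None, 0)])[:2]
    return y_a if binom_two_sided(n_a, n_a + n_b) <= eta else "abstain"

# Hypothetical stand-in for noise -> denoise -> classify:
rng = random.Random(0)
def noise_and_classify_toy(x, sigma):
    noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
    x_hat = [min(max(v, 0.0), 1.0) for v in noisy]   # toy "denoiser"
    return int(sum(x_hat) / len(x_hat) > 0.5)        # toy classifier

print(predict(noise_and_classify_toy, [0.9, 0.8, 0.7], sigma=0.25, n=200, eta=0.001))
```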
70
+ # 3 DIFFUSION DENOISED SMOOTHING
71
+
72
+ Our approach, Diffusion Denoised Smoothing (DDS), requires no new technical ideas on top of what was introduced in the section above.
73
+
74
+ Denoised smoothing via a diffusion model. The only minor technicality required for our method is to map between the noise model required by randomized smoothing and the noise model used within diffusion models. Specifically, randomized smoothing requires a data point augmented with additive Gaussian noise $x_{\mathrm{rs}} \sim \mathcal{N}(x,\sigma^2\mathbf{I})$ , whereas diffusion models assume the noise model $x_{t} \sim \mathcal{N}(\sqrt{\alpha_{t}} x,(1 - \alpha_{t})\mathbf{I})$ . Scaling $x_{\mathrm{rs}}$ by $\sqrt{\alpha_t}$ and equating the variances yields the relationship
75
+
76
+ $$
77
+ \sigma^{2} = \frac{1 - \alpha_{t}}{\alpha_{t}}. \tag{4}
78
+ $$
79
+
80
+ Thus, in order to employ a diffusion model for randomized smoothing at a given noise level $\sigma$, we first find the timestep $t^{\star}$ such that $\sigma^2 = \frac{1 - \alpha_{t^{\star}}}{\alpha_{t^{\star}}}$; the precise formula for this equation will depend on the schedule of the $\alpha_{t}$ terms used by the diffusion model, but this can typically be computed in closed form, even for reasonably complex diffusion schedules. Next, we compute
81
+
82
$$
x_{t^\star} = \sqrt{\alpha_{t^\star}}\,(x + \delta), \qquad \delta \sim \mathcal{N}(0, \sigma^2 \mathbf{I}), \tag{6}
$$

and apply the diffusion denoiser on $x_{t^{\star}}$ to obtain an estimate of the denoised sample

$$
\hat{x} = \operatorname{denoise}(x_{t^\star}; t^\star). \tag{7}
$$

Finally, we classify the estimated denoised image with an off-the-shelf classifier

$$
y = f_{\mathrm{clf}}(\hat{x}). \tag{8}
$$

The entirety of this algorithmic approach is shown in Figure 1.

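The denoise-then-classify loop plus the abstention test can be sketched in a few lines of plain Python. The callable `denoise_and_classify` is a hypothetical stand-in for $f_{\mathrm{clf}}(\operatorname{denoise}(\cdot\,; t^\star))$, and `binom_p_value` plays the role of BINOMPTEST in Figure 1; this is an illustrative sketch, not the authors' implementation.

```python
import math
import random
from collections import Counter

def binom_p_value(k, n):
    """Exact two-sided binomial test p-value for k successes out of n under p = 1/2."""
    tail = sum(math.comb(n, i) for i in range(min(k, n - k) + 1)) / 2 ** n
    return min(1.0, 2.0 * tail)

def predict(x, sigma, num_samples, eta, denoise_and_classify, rng=random):
    """Smoothed prediction with abstention, mirroring PREDICT from Figure 1."""
    counts = Counter()
    for _ in range(num_samples):
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]  # x + delta
        counts[denoise_and_classify(noisy)] += 1
    top_two = counts.most_common(2) + [(None, 0)]  # pad in case only one label appears
    (y_a, n_a), (_, n_b) = top_two[0], top_two[1]
    if binom_p_value(n_a, n_a + n_b) <= eta:
        return y_a
    return "abstain"
```

Certification additionally needs a lower confidence bound (e.g., Clopper-Pearson) on the top-class probability, exactly as in Cohen et al. (2019).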
$$
t^{\star} = T \left( 1 - \frac{2(1+s)\,\csc^{-1}\!\left( \sqrt{1+\sigma^{2}}\,\csc\!\left( \frac{\pi}{2+2s} \right) \right)}{\pi} \right). \tag{5}
$$

The formula in Equation (5) is unimportant in itself and is shown only to illustrate what such a computation can look like in practice. Even when a closed-form solution does not exist, because the schedules for $\alpha_{t}$ are monotonically decreasing, one can always find a solution via 1D root-finding methods if necessary.

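For instance, under the cosine schedule of Nichol &amp; Dhariwal (2021) (an assumed choice here; substitute your model's actual $\alpha_t$ schedule, and likewise `T = 1000` is only a placeholder), $\sigma$ is monotonically increasing in $t$, so a simple bisection recovers $t^\star$ from Equation (4):

```python
import math

# Assumed schedule: alpha_bar(t) = cos(((t/T + s)/(1 + s)) * pi/2)^2 with s = 0.008,
# following Nichol & Dhariwal (2021).
S, T = 0.008, 1000

def alpha_bar(t):
    return math.cos((t / T + S) / (1 + S) * math.pi / 2) ** 2

def sigma_at(t):
    # Equation (4): sigma^2 = (1 - alpha_t) / alpha_t
    return math.sqrt((1 - alpha_bar(t)) / alpha_bar(t))

def find_timestep(sigma, lo=0.0, hi=T - 1e-6):
    """Bisection for t* with sigma_at(t*) = sigma; valid because sigma_at is increasing."""
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if sigma_at(mid) < sigma else (lo, mid)
    return (lo + hi) / 2
```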
To obtain a robustness certificate, we repeat the above denoising process many times (e.g., 100,000) and compute the certification radius using the approach of Cohen et al. (2019). (Note that since our diffusion model expects inputs in $[-1,1]^d$, we divide the certified radius by 2 to obtain a certified radius for inputs in $[0,1]^d$, as assumed in all prior work.)

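The radius computation itself is the standard one from Cohen et al. (2019), $R = \sigma\,\Phi^{-1}(\underline{p_A})$ for a lower confidence bound $\underline{p_A} > 1/2$ on the top-class probability, followed here by the division by 2 to rescale from $[-1,1]^d$ to $[0,1]^d$ inputs. A minimal sketch:

```python
from statistics import NormalDist

def certified_radius(p_a_lower, sigma):
    """l2 radius R = sigma * Phi^{-1}(p_A) from Cohen et al. (2019), where
    p_a_lower is a lower confidence bound (e.g., Clopper-Pearson), > 1/2."""
    return sigma * NormalDist().inv_cdf(p_a_lower)

def radius_in_unit_cube(p_a_lower, sigma):
    # The diffusion model works on [-1, 1]^d; halve the radius to certify in [0, 1]^d.
    return certified_radius(p_a_lower, sigma) / 2
```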
One-shot denoising. Readers familiar with diffusion models may recall that the standard process repeatedly applies a "single-step" denoising operation $x_{t-1} = d(x_t; t)$ that aims to convert a noisy image at some timestep $t$ into a (slightly less) noisy image at the previous timestep $t-1$. The full diffusion process is then defined by the following iterative procedure:

$$
\tilde{x} = \operatorname{denoise}_{\text{iter}}(x + \delta; t) := d\big(d(\dots d(d(x + \delta; t); t-1) \dots; 2); 1\big).
$$

In fact, each application of the one-step denoiser $d$ consists of two steps: (1) estimating the fully denoised image $x$ from the current timestep $t$, and (2) computing a (properly weighted, according to the diffusion model) average between this estimated denoised image and the noisy image at the previous timestep $t-1$. Thus, instead of performing the entire $t$-step diffusion process to denoise an image, it is also possible to run the diffusion step $d$ once and simply output the best estimate of the denoised image $x$ in one shot.

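For a noise-prediction network, step (1) has a closed form: inverting $x_t = \sqrt{\alpha_t}\,x_0 + \sqrt{1-\alpha_t}\,\varepsilon$ yields the one-shot estimate. A sketch under that assumption (the `eps_hat` array stands in for the diffusion model's predicted noise):

```python
import math

def one_shot_denoise(x_t, eps_hat, alpha_bar_t):
    """One-shot estimate of x0 from a noise-prediction model:
    x_t = sqrt(ab)*x0 + sqrt(1-ab)*eps  =>  x0_hat = (x_t - sqrt(1-ab)*eps_hat)/sqrt(ab)."""
    a, b = math.sqrt(alpha_bar_t), math.sqrt(1 - alpha_bar_t)
    return [(xt - b * e) / a for xt, e in zip(x_t, eps_hat)]
```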
When a diffusion model generates images from scratch (i.e., the denoiser is applied to pure noise), the iterative process gives higher-fidelity outputs than this one-shot approach (Ho et al., 2020). But here, where we aim to denoise one particular image, a one-shot approach has two advantages:

1. High accuracy: it turns out that standard pretrained classifiers are more accurate on one-shot denoised images than on images denoised with the full $t$ steps of denoising. We hypothesize that this is because when we first apply the single-step denoiser $d$ at timestep $t$, the denoiser already has all the available information about $x$. By applying the denoiser multiple times, we can only destroy information about $x$, as each step adds new (slightly smaller) Gaussian noise. In fact, by using the iterative $t$-step denoising strategy, we are in essence pushing part of the classification task onto the denoiser, in order to decide how to fill in the image. Section 5 experimentally validates this hypothesis.
2. Improved efficiency: instead of requiring several hundred (or thousand) forward passes to denoise any given image, we require only a single pass. This is especially important since randomized smoothing requires many thousands of predictions to obtain a robustness certificate.

Related work. We are not the first to observe a connection between randomized smoothing and diffusion models. The work of Lee (2021) first studied this problem; however, they do not obtain significant accuracy improvements, likely because the diffusion models available at the time were not good enough. Separately, Nie et al. (2022) suggest that diffusion models might be able to provide strong empirical robustness to adversarial examples, as evaluated by robustness under adversarial attacks computed with existing attack algorithms; this is orthogonal to our results.

# 4 EVALUATION

We evaluate diffusion denoised smoothing on two standard datasets, CIFAR-10 and ImageNet, and find that it gives state-of-the-art certified $\ell_2$ robustness on both. Following Cohen et al. (2019), we draw $N = 100,000$ noise samples on CIFAR-10 and $N = 10,000$ on ImageNet to certify robustness.

As is standard in prior work, we perform randomized smoothing at three noise magnitudes, $\sigma \in \{0.25, 0.5, 1.0\}$. For a fair comparison to prior work in Table 1 and Table 2, we give the best results reported in each paper across these same three noise magnitudes. Note that prior work uses only three noise levels due to the computational overhead; one benefit of using a diffusion model is that we could have used other amounts of noise without training a new denoiser model.

CIFAR-10 configuration. We denoise CIFAR-10 images with the 50M-parameter diffusion model from Nichol &amp; Dhariwal (2021). The denoised images are classified with an 87M-parameter

<table><tr><td>Method</td><td>Off-the-shelf</td><td>Extra data</td><td>0.5</td><td>1.0</td><td>1.5</td><td>2.0</td><td>3.0</td></tr><tr><td>PixelDP (Lecuyer et al., 2019)</td><td>○</td><td>×</td><td>(33.0)16.0</td><td>-</td><td>-</td><td></td><td></td></tr><tr><td>RS (Cohen et al., 2019)</td><td>○</td><td>×</td><td>(67.0)49.0</td><td>(57.0)37.0</td><td>(57.0)29.0</td><td>(44.0)19.0</td><td>(44.0)12.0</td></tr><tr><td>SmoothAdv (Salman et al., 2019)</td><td>○</td><td>×</td><td>(65.0)56.0</td><td>(54.0)43.0</td><td>(54.0)37.0</td><td>(40.0)27.0</td><td>(40.0)20.0</td></tr><tr><td>Consistency (Jeong &amp; Shin, 2020)</td><td>○</td><td>×</td><td>(55.0)50.0</td><td>(55.0)44.0</td><td>(55.0)34.0</td><td>(41.0)24.0</td><td>(41.0)17.0</td></tr><tr><td>MACER (Zhai et al., 2020)</td><td>○</td><td>×</td><td>(68.0)57.0</td><td>(64.0)43.0</td><td>(64.0)31.0</td><td>(48.0)25.0</td><td>(48.0)14.0</td></tr><tr><td>Boosting (Horváth et al., 2022a)</td><td>○</td><td>×</td><td>(65.6)57.0</td><td>(57.0)44.6</td><td>(57.0)38.4</td><td>(44.6)28.6</td><td>(38.6)21.2</td></tr><tr><td>DRT (Yang et al., 2021)</td><td>○</td><td>×</td><td>(52.2)46.8</td><td>(55.2)44.4</td><td>(49.8)39.8</td><td>(49.8)30.4</td><td>(49.8)23.4</td></tr><tr><td>SmoothMix (Jeong et al., 2021)</td><td>○</td><td>×</td><td>(55.0)50.0</td><td>(55.0)43.0</td><td>(55.0)38.0</td><td>(40.0)26.0</td><td>(40.0)20.0</td></tr><tr><td>ACES (Horváth et al., 2022b)</td><td>○</td><td>×</td><td>(63.8)54.0</td><td>(57.2)42.2</td><td>(55.6)35.6</td><td>(39.8)25.6</td><td>(44.0)19.8</td></tr><tr><td>Denoised (Salman et al., 2020)</td><td>○</td><td>×</td><td>(60.0)33.0</td><td>(38.0)14.0</td><td>(38.0)6.0</td><td>-</td><td>-</td></tr><tr><td>Lee (Lee, 2021)</td><td>○</td><td>×</td><td>41.0</td><td>24.0</td><td>11.0</td><td>-</td><td>-</td></tr><tr><td>Ours</td><td>○</td><td>✓</td><td>(82.8)71.1</td><td>(77.1)54.3</td><td>(77.1)38.1</td><td>(60.0)29.5</td><td>(60.0)13.1</td></tr></table>

Table 1: ImageNet certified top-1 accuracy for prior defenses based on randomized smoothing and denoised smoothing. Randomized smoothing techniques rely on special-purpose models (indicated by an empty circle). The work of Horváth et al. (2022b) is an exception in that it selectively applies either a robust or an accurate off-the-shelf classifier (indicated by a half-full circle). Denoised smoothing (Salman et al., 2020) uses an off-the-shelf classifier but trains its own denoiser (indicated by a half-full circle). Our base approach uses an off-the-shelf classifier and an off-the-shelf denoiser (indicated by a full circle). Each entry lists the certified accuracy at radius $\varepsilon$ (in %), with the clean accuracy for that model in parentheses, using numbers taken from the respective papers.

ViT-B/16 model (Dosovitskiy et al., 2021) that was pretrained on ImageNet-21k (Deng et al., 2009) (at $224 \times 224$ resolution) and finetuned on CIFAR-10. We use the implementation from Hugging Face<sup>4</sup>, which reaches $97.9\%$ test accuracy on CIFAR-10. In addition, we report results with a standard 36M-parameter Wide-ResNet-28-10 model (Zagoruyko &amp; Komodakis, 2016) trained on CIFAR-10 to $95.2\%$ accuracy.

As is typical, we report results with images normalized to $[0,1]^{32\times 32\times 3}$. We obtain a throughput of 825 images per second through the diffusion model and ViT classifier on an A100 GPU at a batch size of 1,000. We report robust accuracy results averaged over the entire CIFAR-10 test set.

ImageNet configuration. We denoise ImageNet images with the 552M-parameter class-unconditional diffusion model from Dhariwal &amp; Nichol (2021), and classify the denoised images with the 305M-parameter BEiT-large model (Bao et al., 2022), which reaches $88.6\%$ top-1 validation accuracy using the implementation from timm (Wightman, 2019). We report results with images normalized to $[0,1]^{224\times 224\times 3}$ to allow comparison to prior work. The overall latency of this joint denoise-then-classify model is 1.5 seconds per image on an A100 GPU at a batch size of 32. We report results averaged over 1,000 images randomly selected from the ImageNet test set.

# 4.1 RESULTS

On both CIFAR-10 and ImageNet we outperform the state-of-the-art denoised smoothing approaches (i.e., Salman et al. (2020) and Lee (2021)) in every setting; see Table 1 and Table 2, as well as Figure 2, for detailed results. Perhaps even more impressively, we also outperform models trained with randomized smoothing at low $\varepsilon$ distortions ($\varepsilon \leq 0.5$ on CIFAR-10 and $\varepsilon \leq 2$ on ImageNet), and nearly match them at high $\varepsilon$. Even though these randomized smoothing techniques train their models end-to-end and specifically design them to have high accuracy on Gaussian noise, we find that our approach's use of off-the-shelf models yields superior robustness (and much higher clean accuracy as an added bonus).

Interestingly, we find that using a diffusion model for the denoising step gives its most significant benefits when $\sigma$ and $\varepsilon$ are small: for example, while we reach $71.1\%$ top-1 accuracy at $\varepsilon = 0.5$ on ImageNet, an improvement of $+14$ percentage points over prior work, at $\varepsilon = 3$ our scheme is 7 percentage points worse than the state of the art. Our hypothesis for this effect,

Certified Accuracy at $\varepsilon$ (\%)

<table><tr><td>Method</td><td>Off-the-shelf</td><td>Extra data</td><td>0.25</td><td>0.5</td><td>0.75</td><td>1.0</td></tr><tr><td>PixelDP (Lecuyer et al., 2019)</td><td>○</td><td>×</td><td>(71.0)22.0</td><td>(44.0)2.0</td><td>-</td><td>-</td></tr><tr><td>RS (Cohen et al., 2019)</td><td>○</td><td>×</td><td>(75.0)61.0</td><td>(75.0)43.0</td><td>(65.0)32.0</td><td>(66.0)22.0</td></tr><tr><td>SmoothAdv (Salman et al., 2019)</td><td>○</td><td>×</td><td>(75.6)67.4</td><td>(75.6)57.6</td><td>(74.8)47.8</td><td>(57.4)38.3</td></tr><tr><td>SmoothAdv (Salman et al., 2019)</td><td>○</td><td>✓</td><td>(84.3)74.9</td><td>(80.1)63.4</td><td>(80.1)51.9</td><td>(62.2)39.6</td></tr><tr><td>Consistency (Jeong &amp; Shin, 2020)</td><td>○</td><td>×</td><td>(77.8)68.8</td><td>(75.8)58.1</td><td>(72.9)48.5</td><td>(52.3)37.8</td></tr><tr><td>MACER (Zhai et al., 2020)</td><td>○</td><td>×</td><td>(81.0)71.0</td><td>(81.0)59.0</td><td>(66.0)46.0</td><td>(66.0)38.0</td></tr><tr><td>Boosting (Horváth et al., 2022a)</td><td>○</td><td>×</td><td>(83.4)70.6</td><td>(76.8)60.4</td><td>(71.6)52.4</td><td>(52.4)38.8</td></tr><tr><td>DRT (Yang et al., 2021)</td><td>○</td><td>×</td><td>(81.5)70.4</td><td>(72.6)60.2</td><td>(71.9)50.5</td><td>(56.1)39.8</td></tr><tr><td>SmoothMix (Jeong et al., 2021)</td><td>○</td><td>×</td><td>(77.1)67.9</td><td>(77.1)57.9</td><td>(74.2)47.7</td><td>(61.8)37.2</td></tr><tr><td>ACES (Horváth et al., 2022b)</td><td>○</td><td>×</td><td>(79.0)69.0</td><td>(74.2)57.2</td><td>(74.2)47.0</td><td>(58.6)37.8</td></tr><tr><td>Denoised (Salman et al., 2020)</td><td>○</td><td>×</td><td>(72.0)56.0</td><td>(62.0)41.0</td><td>(62.0)28.0</td><td>(44.0)19.0</td></tr><tr><td>Lee (Lee, 2021)</td><td>○</td><td>×</td><td>60.0</td><td>42.0</td><td>28.0</td><td>19.0</td></tr><tr><td>Ours</td><td>○</td><td>✓</td><td>(88.1)76.7</td><td>(88.1)63.0</td><td>(88.1)45.3</td><td>(77.0)32.1</td></tr><tr><td>Ours (+finetuning)</td><td>○</td><td>✓</td><td>(91.2)79.3</td><td>(91.2)65.5</td><td>(87.3)48.7</td><td>(81.5)35.5</td></tr></table>

Table 2: CIFAR-10 certified accuracy for prior defenses from the literature. The columns have the same meaning as in Table 1.

![](images/8a7631a871ae98d7aeb865d54437127a0582e37d248e721c1b10ac28b22e8671.jpg)
(a) CIFAR-10

![](images/1d579ea9c27a95dd5c8bdb39219a937a1e731f683214796c5a0388f98d4ce238.jpg)
(b) ImageNet

Figure 2: Certified accuracy as a function of the $\ell_2$ adversarial perturbation bound, for varying levels of Gaussian noise $\sigma \in \{0.25, 0.5, 1.0\}$. Bounds are computed with 100,000 samples per run on CIFAR-10, and 10,000 on ImageNet.

which we explore further in Section 5, is that diffusion models are prone to "hallucinate" content when denoising extremely noisy images. Thus, instead of reinforcing the signal from the correct class, the diffusion model generates a signal from another class, thereby fooling the classifier.

CIFAR-10 ablation. The off-the-shelf classifiers we use were pretrained on datasets larger than CIFAR-10 and ImageNet, respectively. It is well known that the use of additional data can boost robustness, both for empirical (Schmidt et al., 2018) and certified (Salman et al., 2019) defenses. To investigate the role played by the pretrained model, we repeat our CIFAR-10 experiment using a standard Wide-ResNet-28-10 model (Zagoruyko &amp; Komodakis, 2016) trained solely on CIFAR-10 to $95.2\%$ accuracy. The results with this classifier (see Table 6a) outperform prior denoised smoothing approaches, and are competitive with prior randomized smoothing results up to $\varepsilon = 0.5$.

The ViT classifier outperforms the ResNet because it is more robust to the distribution shift introduced by the noising-and-denoising procedure. To alleviate this shift, we can further finetune the classifier on denoised images $\operatorname{denoise}(x + \delta)$ from the CIFAR-10 training set. The defense is then not strictly "off-the-shelf" anymore (although the cost of finetuning is negligible compared to the training time of the diffusion model and classifier). Table 6b shows that a finetuned Wide-ResNet achieves comparable or better results than a non-finetuned ViT. Thus, with a minimal amount of training, we also surpass prior randomized smoothing results without relying on any external data. If we finetune

Certified Accuracy at $\varepsilon$ (\%)

<table><tr><td>Method</td><td>Off-the-shelf</td><td>Extra data</td><td>0.25</td><td>0.5</td><td>0.75</td><td>1.0</td></tr><tr><td>Wide-ResNet</td><td>●</td><td>×</td><td>(83.8)70.6</td><td>(83.8)55.7</td><td>(83.8)40.0</td><td>(65.8)26.1</td></tr><tr><td>ViT</td><td>●</td><td>✓</td><td>(88.1)76.7</td><td>(88.1)63.0</td><td>(88.1)45.3</td><td>(77.0)32.1</td></tr><tr><td>Wide-ResNet +finetune</td><td>○</td><td>×</td><td>(85.9)76.7</td><td>(85.9)63.8</td><td>(85.9)49.5</td><td>(74.5)36.4</td></tr><tr><td>ViT +finetune</td><td>○</td><td>✓</td><td>(91.2)79.3</td><td>(91.2)65.5</td><td>(91.2)48.7</td><td>(81.5)35.5</td></tr></table>

Table 3: Summary of our ablation on CIFAR-10. The diffusion model and Wide-ResNet classifier are trained solely on CIFAR-10, while the ViT classifier is pretrained on a larger dataset. The finetuning results are obtained by taking an off-the-shelf diffusion model and classifier, and tuning the classifier on noised-then-denoised images from CIFAR-10.

the ViT model (Table 6d), we further improve our defense's clean accuracy and its certified robustness at $\varepsilon \leq 0.5$ by a couple of percentage points. Our ablation is summarized in Table 3.

# 5 ANALYSIS AND DISCUSSION

We achieve state-of-the-art certified accuracy using diffusion models despite the fact that we are not using these models as diffusion models, but rather as simple denoisers. That is, instead of leveraging the fact that diffusion models can iteratively refine images across a range of noise levels, we simply apply the diffusion model once, at a fixed noise level, to perform one-shot denoising.

In this section we study why this approach outperforms prior work that trained straightforward denoisers for denoised smoothing (Salman et al., 2020), and why using diffusion models for one-shot denoising performs better than the more involved iterative diffusion process. Lastly, we show promising results for multi-step diffusion with an advanced deterministic sampler.

# 5.1 FULL DIFFUSION VERSUS ONE-SHOT DENOISING

When used as generative models, diffusion models perform denoising through an iterative process that repeatedly refines an estimate of the final denoised image. Given an image $x_{t}$ with noise of magnitude corresponding to some diffusion timestep $t$, the model first predicts a one-shot estimate of the denoised image $x_{0}$, and then constructs an estimate $x_{t-1}$ of the noised image at timestep $t-1$ by interpolating (with appropriate weights) between $x_{0}$, $x_{t}$, and fresh isotropic Gaussian noise $\mathcal{N}(0, \mathbf{I})$. The diffusion process is then applied recursively at timestep $t-1$.

Intuitively, one may expect that when using a diffusion model as a denoiser, one-shot denoising will produce more faithful results than the full iterative reverse-diffusion process. Indeed, each step of the reverse-diffusion process destroys information about the original image, since fresh Gaussian noise is added at every step. Thus, information-theoretically at least, it should be easier to denoise an image in one shot than over multiple iterations.

![](images/d88e26b9ac48a22eb01255900bdfc2c96588d5052a1ddf777941c5b38d5d0040.jpg)
Figure 3: Intuitive examples of why multi-step denoised images are less recognizable to the classifier. From left to right: clean images, noisy images with $\sigma = 1.0$, one-shot denoised images, multi-step denoised images. For the denoised images, we show the prediction of the pretrained BEiT model.

![](images/bb435de1f6c0cc1430543bcec0a29b3ca9e54ab2aecd08d199465dcba31e87d6.jpg)

![](images/8c71606c5aea8cf7882d56d83bc8a52cd37b6fbfc05219a0fd0ae00b0fc1478b.jpg)

![](images/7fe87f3a7ad7de011ba76578568315d19b568e2f33a4827751282d8e790fbae5.jpg)

We find that this is indeed the case. While the full reverse-diffusion process produces denoised images with more fine-grained details (a good property when generating photorealistic images from scratch), these details are often not actually faithful to the original image we want to denoise. Instead, diffusion models are prone to "hallucinate" salient detailed features during the iterative denoise-and-noise process. We illustrate this hallucination phenomenon in Figure 3. Here, we noise an original image (on the left) with large Gaussian noise ($\sigma = 1$) and then apply either the full reverse-diffusion process (rightmost image) or one-shot denoising at the appropriate timestep (second image from the right). As we can see, one-shot denoising produces mostly faithful, but blurry, reconstructions of the original image, with fine-grained details lost to noise. In contrast, iterative denoising "invents" new details, resulting in images that are ultimately more photorealistic but semantically different from the starting image. Additional examples (with multiple random seeds) are shown in Figure 4 and Figure 5 in the Appendix.

# 5.2 TRAINING ON RESTRICTED NOISE LEVELS

Given that one-shot denoising performs better than full multi-step denoising, we now turn to our next question: if we are just using diffusion models as one-shot denoisers, why do they perform better than the straightforward denoisers trained in prior work (Salman et al., 2020)? To investigate this, we train seven new diffusion models on CIFAR-10 with varying levels of Gaussian noise, all the way down to a model trained on a single noise level, i.e., a straightforward denoiser.

Recall that during standard training of a diffusion model, we sample a timestep $t$ uniformly from some range, add noise according to this timestep, and then train the model to predict the noise that was added. The only difference between this process and the standard denoised-smoothing training process (Salman et al., 2020) is that here we train on multiple levels of Gaussian noise simultaneously. We therefore perform a comparative analysis of models trained on more restricted noise levels. We select seven different noise regimes:

- Three models are trained exclusively on Gaussian noise of fixed standard deviation $\sigma = 0.25$, $\sigma = 0.5$, or $\sigma = 1.0$, respectively. This is identical to training a "straightforward" denoiser on noise of a fixed magnitude.
- One model is trained on all three of these noise levels at the same time.
- Two models are trained on noise with $\sigma$ selected uniformly from $[0, 0.25]$ and from $[0, 1.0]$, respectively.
- One model is trained using the full range of noise, $\sigma \in [0, S]$ for some $S \gg 1$ (the exact value of $S$ depends on the chosen noise schedule of the diffusion model).

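The seven regimes amount to different ways of drawing the per-example noise level during training. A hypothetical sketch (the regime names are ours, and the value `S = 80.0` is an arbitrary placeholder for the schedule-dependent maximum):

```python
import random

def sample_training_sigma(regime, rng=random, S=80.0):
    """Draw one training noise level sigma for the regimes described above."""
    if regime.startswith("fixed-"):      # "fixed-0.25", "fixed-0.5", "fixed-1.0"
        return float(regime.split("-", 1)[1])
    if regime == "three-levels":         # sigma drawn from {0.25, 0.5, 1.0}
        return rng.choice([0.25, 0.5, 1.0])
    if regime.startswith("uniform-"):    # e.g. "uniform-0.25" -> sigma ~ U[0, 0.25]
        return rng.uniform(0.0, float(regime.split("-", 1)[1]))
    if regime == "standard":             # sigma ~ U[0, S], the usual diffusion training
        return rng.uniform(0.0, S)
    raise ValueError(f"unknown regime: {regime}")
```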
We then evaluate the clean accuracy of an off-the-shelf ViT model on each image when denoised (in one shot) with each of these diffusion models, where the images are noised with a standard deviation of either $\sigma = 0.25$, $\sigma = 0.5$, or $\sigma = 1.0$. The results are summarized in Table 4.

<table><tr><td rowspan="2">Training noise</td><td colspan="3">Noise at evaluation</td></tr><tr><td>σ = 0.25</td><td>σ = 0.5</td><td>σ = 1.0</td></tr><tr><td>σ ∈ {0.25}</td><td>79.0</td><td>16.2</td><td>9.8</td></tr><tr><td>σ ∈ {0.5}</td><td>14.5</td><td>60.1</td><td>15.4</td></tr><tr><td>σ ∈ {1.0}</td><td>13.9</td><td>13.5</td><td>35.5</td></tr><tr><td>σ ∈ {0.25, 0.5, 1.0}</td><td>81.6</td><td>68.1</td><td>43.0</td></tr><tr><td>σ ∈ [0, 0.25]</td><td>84.5</td><td>14.5</td><td>9.9</td></tr><tr><td>σ ∈ [0, 1.0]</td><td>84.0</td><td>71.6</td><td>46.0</td></tr><tr><td>σ ∈ [0, S ≫ 1] (standard)</td><td>85.5</td><td>72.3</td><td>44.8</td></tr></table>

Table 4: Clean accuracy of an off-the-shelf ViT classifier on images denoised with diffusion models trained on restricted levels of Gaussian noise. Diffusion models trained on more diverse noise ranges yield higher accuracy on one-shot denoised images, even compared to models trained on the specific noise level used at evaluation time.

As expected, training a new model on any one individual noise level, and then using that model to denoise images at that same noise level, gives high downstream accuracy: for example, training a diffusion model on $\sigma = 0.25$ noise and then evaluating at this same noise level gives $79\%$ accuracy. However, if we then use this model to denoise images at a different noise level, say $\sigma = 0.5$, the accuracy of the classifier drops to just $16\%$. If we instead train the diffusion model directly on $\sigma = 0.5$ noise, we get a much better classification accuracy of $60.1\%$, but without good generalization to lower or higher noise levels. Similarly, training on $\sigma = 1.0$ noise only gives good results when denoising images at that same noise level.

More surprising, however, is that training on all three noise levels simultaneously gives better accuracy for denoising images at each noise level than a diffusion model trained specifically and solely for that noise level. For example, when denoising images with $\sigma = 0.5$ Gaussian noise, we get a classification accuracy of $68.1\%$ when the diffusion model is trained on that noise level together with the lower and higher levels, 8 percentage points above the $60.1\%$ we get when training the diffusion model solely on $\sigma = 0.5$ noise.

If we train on more granular noise levels, either in $[0, 0.25]$ or in the full interval $[0, 1]$, the classification accuracy on denoised images at the covered noise levels further increases by a few percentage points. Quite surprisingly, the standard training regime, which trains the diffusion model on noise from a larger range $[0, S]$ for some $S \gg 1$, further improves the denoising capabilities at low noise levels ($\sigma = 0.25$ and $\sigma = 0.5$), but slightly harms accuracy at larger noise ($\sigma = 1.0$).

From this experiment, we conclude that the (full) training process of diffusion models leads to much better, and more generalizable, one-shot denoising capabilities than training a standalone denoiser on a single noise level as in prior work.

# 5.3 ADVANCED DETERMINISTIC MULTI-STEP SAMPLER

In Section 5.1, we found that images denoised with the full multi-step diffusion process tend to deviate from the original clean image. This could be due to the stochastic nature of the full reverse-diffusion process, since fresh random noise is added at each step. We note a line of work (Song et al., 2021; Karras et al., 2022) on fast deterministic sampling for diffusion models, and we show that with such an advanced sampler, multi-step diffusion can beat one-shot denoising.

We consider the deterministic EDM sampler proposed by Karras et al. (2022), and compare the recognizability of images denoised by the EDM sampler with that of one-shot denoised images. We adapt the EDM sampler to image denoising by setting the maximum noise level of its sampling schedule to the noise level $\sigma$ given by Equation (4). We use the sampler settings suggested by Karras et al. (2022) for CIFAR-10, where 18 reverse steps with 35 evaluations of the diffusion model are performed for each example. The results are summarized in Table 5: the deterministic EDM sampler is superior to one-shot denoising.

<table><tr><td>Classifier</td><td>Method</td><td>σ = 0.25</td><td>σ = 0.5</td><td>σ = 1.0</td></tr><tr><td rowspan="2">Wide-ResNet</td><td>One-shot denoising</td><td>81.3</td><td>64.0</td><td>35.8</td></tr><tr><td>EDM sampler</td><td>85.0</td><td>73.0</td><td>53.8</td></tr><tr><td rowspan="2">ViT</td><td>One-shot denoising</td><td>84.9</td><td>71.6</td><td>50.8</td></tr><tr><td>EDM sampler</td><td>86.1</td><td>73.1</td><td>54.0</td></tr></table>

Table 5: Clean accuracy (averaged over 5 runs) of off-the-shelf CIFAR-10 classifiers on images denoised with one-shot denoising and with the EDM sampler (Karras et al., 2022).

# 6 CONCLUSION

At present, training certifiably adversarially robust deep learning models requires specialized techniques explicitly designed for provably robust classification (Cohen et al., 2019). While this has proven effective, such models are extremely difficult to train to high accuracy, and they degrade clean accuracy significantly.

We suggest that an alternative approach is possible. By exclusively making use of off-the-shelf models designed to be state-of-the-art at classification and image denoising, we can leverage the vast resources dedicated to training highly capable models for the new purpose of robust classification.

# REFERENCES

+ Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In International Conference on Machine Learning, pp. 274-283. PMLR, 2018.
245
+ Hangbo Bao, Li Dong, and Furu Wei. BEiT: BERT pre-training of image transformers. In International Conference on Learning Representations, 2022.
246
+ Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Šrndić, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. Evasion attacks against machine learning at test time. In Joint European conference on machine learning and knowledge discovery in databases, pp. 387-402. Springer, 2013.
247
+ Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE symposium on security and privacy, pp. 39-57. IEEE, 2017.
248
+ Jeremy Cohen, Elan Rosenfeld, and Zico Kolter. Certified adversarial robustness via randomized smoothing. In International Conference on Machine Learning, pp. 1310-1320. PMLR, 2019.
249
+ Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248-255. IEEE, 2009.
250
+ Prafulla Dhariwal and Alexander Nichol. Diffusion models beat GANs on image synthesis. Advances in Neural Information Processing Systems, 34, 2021.
251
+ Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021.
252
+ Sven Gowal, Krishnamurthy Dvijotham, Robert Stanforth, Rudy Bunel, Chongli Qin, Jonathan Uesato, Relja Arandjelovic, Timothy Mann, and Pushmeet Kohli. On the effectiveness of interval bound propagation for training verifiably robust models. arXiv preprint arXiv:1810.12715, 2018.
253
+ Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840-6851, 2020.
254
+ Miklos Z Horváth, Mark Niklas Müller, Marc Fischer, and Martin Vechev. Boosting randomized smoothing with variance reduced classifiers. In International Conference on Learning Representations, 2022a.
255
+ Miklos Z Horváth, Mark Niklas Müller, Marc Fischer, and Martin Vechev. Robust and accurate-compositional architectures for randomized smoothing. arXiv preprint arXiv:2204.00487, 2022b.
256
+ Jongheon Jeong and Jinwoo Shin. Consistency regularization for certified robustness of smoothed classifiers. Advances in Neural Information Processing Systems, 33:10558-10570, 2020.
257
+ Jongheon Jeong, Sejun Park, Minkyu Kim, Heung-Chang Lee, Do-Guk Kim, and Jinwoo Shin. Smoothmix: Training confidence-calibrated smoothed classifiers for certified robustness. Advances in Neural Information Processing Systems, 34:30153-30168, 2021.
258
+ Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine. Elucidating the design space of diffusion-based generative models. Advances in Neural Information Processing Systems, 2022.
259
+ Mathias Lecuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel Hsu, and Suman Jana. Certified robustness to adversarial examples with differential privacy. In 2019 IEEE Symposium on Security and Privacy, pp. 656-672. IEEE, 2019.
260
+ Kyungmin Lee. Provable defense by denoised smoothing with learned score function. ICLR Workshop on Security and Safety in Machine Learning Systems, 2021.
261
+
262
+ Matthew Mirman, Timon Gehr, and Martin Vechev. Differentiable abstract interpretation for provably robust neural networks. In International Conference on Machine Learning, pp. 3578-3586. PMLR, 2018.
263
+ Alexander Quinn Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. In International Conference on Machine Learning, pp. 8162-8171. PMLR, 2021.
264
+ Weili Nie, Brandon Guo, Yujia Huang, Chaowei Xiao, Arash Vahdat, and Anima Anandkumar. Diffusion models for adversarial purification. arXiv preprint arXiv:2205.07460, 2022.
265
+ Hadi Salman, Jerry Li, Ilya Razenshteyn, Pengchuan Zhang, Huan Zhang, Sebastien Bubeck, and Greg Yang. Provably robust deep learning via adversarially trained smoothed classifiers. Advances in Neural Information Processing Systems, 32, 2019.
266
+ Hadi Salman, Mingjie Sun, Greg Yang, Ashish Kapoor, and J Zico Kolter. Denoised smoothing: A provable defense for pretrained classifiers. Advances in Neural Information Processing Systems, 33:21945-21957, 2020.
267
+ Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, and Aleksander Madry. Adversarially robust generalization requires more data. Advances in Neural Information Processing Systems, 31, 2018.
268
+ Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learning, pp. 2256-2265. PMLR, 2015.
269
+ Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. In International Conference on Learning Representations, 2021.
270
+ Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In International Conference on Learning Representations, 2014.
271
+ Jonathan Uesato, Brendan O'Donoghue, Pushmeet Kohli, and Aaron van den Oord. Adversarial risk and the dangers of evaluating against weak attacks. In International Conference on Machine Learning, pp. 5025-5034. PMLR, 2018.
272
+ Ross Wightman. PyTorch image models. https://github.com/rwightman/pytorch-image-models, 2019.
273
+ Zhuolin Yang, Linyi Li, Xiaojun Xu, Bhavya Kailkhura, Tao Xie, and Bo Li. On the certified robustness for ensemble models and beyond. arXiv preprint arXiv:2107.10873, 2021.
274
+ Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In British Machine Vision Conference, 2016.
275
+ Runtian Zhai, Chen Dan, Di He, Huan Zhang, Boqing Gong, Pradeep Ravikumar, Cho-Jui Hsieh, and Liwei Wang. MACER: Attack-free and scalable robust training via maximizing certified radius. In International Conference on Learning Representations, 2020.
276
+
277
+ # A APPENDIX
278
+
279
+ <table><tr><td rowspan="2">Noise</td><td colspan="5">Certified Accuracy at ε (%)</td></tr><tr><td>0.0</td><td>0.25</td><td>0.5</td><td>0.75</td><td>1.0</td></tr><tr><td>σ = 0.25</td><td>83.8</td><td>70.6</td><td>55.7</td><td>40.0</td><td>0.0</td></tr><tr><td>σ = 0.5</td><td>65.8</td><td>54.7</td><td>43.7</td><td>34.2</td><td>26.1</td></tr><tr><td>σ = 1.0</td><td>33.2</td><td>28.0</td><td>22.8</td><td>18.0</td><td>13.6</td></tr></table>
280
+
281
+ (a) Wide-ResNet
282
+
283
+ <table><tr><td rowspan="2">Noise</td><td colspan="5">Certified Accuracy at ε (%)</td></tr><tr><td>0.0</td><td>0.25</td><td>0.5</td><td>0.75</td><td>1.0</td></tr><tr><td>σ = 0.25</td><td>85.9</td><td>76.7</td><td>63.8</td><td>49.5</td><td>0.0</td></tr><tr><td>σ = 0.5</td><td>74.5</td><td>66.0</td><td>56.1</td><td>45.7</td><td>36.4</td></tr><tr><td>σ = 1.0</td><td>55.1</td><td>48.7</td><td>42.3</td><td>35.8</td><td>29.9</td></tr></table>
284
+
285
+ (b) Finetuned Wide-ResNet
286
+
287
+ <table><tr><td rowspan="2">Noise</td><td colspan="5">Certified Accuracy at ε (%)</td></tr><tr><td>0.0</td><td>0.25</td><td>0.5</td><td>0.75</td><td>1.0</td></tr><tr><td>σ = 0.25</td><td>88.1</td><td>76.7</td><td>63.0</td><td>45.3</td><td>0.0</td></tr><tr><td>σ = 0.5</td><td>77.0</td><td>65.8</td><td>53.4</td><td>41.8</td><td>32.1</td></tr><tr><td>σ = 1.0</td><td>49.5</td><td>40.3</td><td>33.3</td><td>26.1</td><td>20.2</td></tr></table>
288
+
289
+ (c) ViT
290
+
291
+ <table><tr><td rowspan="2">Noise</td><td colspan="5">Certified Accuracy at ε (%)</td></tr><tr><td>0.0</td><td>0.25</td><td>0.5</td><td>0.75</td><td>1.0</td></tr><tr><td>σ = 0.25</td><td>91.2</td><td>79.3</td><td>65.5</td><td>48.7</td><td>0.0</td></tr><tr><td>σ = 0.5</td><td>81.5</td><td>67.0</td><td>56.1</td><td>45.3</td><td>35.5</td></tr><tr><td>σ = 1.0</td><td>65.1</td><td>48.4</td><td>41.7</td><td>35.2</td><td>29.0</td></tr></table>
292
+
293
+ (d) Finetuned ViT
294
+ Table 6: Certified accuracy of four different classifiers on CIFAR-10 at varying levels of Gaussian noise $\sigma$, all using the same diffusion model.
295
+ <table><tr><td rowspan="2">Noise</td><td colspan="6">Certified Accuracy at ε (%)</td></tr><tr><td>0.0</td><td>0.5</td><td>1.0</td><td>1.5</td><td>2.0</td><td>3.0</td></tr><tr><td>σ = 0.25</td><td>82.8</td><td>71.1</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td></tr><tr><td>σ = 0.5</td><td>77.1</td><td>67.8</td><td>54.3</td><td>38.1</td><td>0.0</td><td>0.0</td></tr><tr><td>σ = 1.0</td><td>60.0</td><td>50.0</td><td>42.0</td><td>35.5</td><td>29.5</td><td>13.1</td></tr></table>
296
+
298
+
299
+ Table 7: Certified accuracy on ImageNet for varying levels of Gaussian noise $\sigma$.
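The 0.0 entries at large $\varepsilon$ follow directly from the randomized smoothing certificate of Cohen et al. (2019): the certified radius is $R = \sigma \cdot \Phi^{-1}(\underline{p_A})$, so with a finite-sample lower confidence bound $\underline{p_A} < 1$, a small $\sigma$ can never certify a large radius. A minimal sketch of this computation using only the Python standard library (the confidence bound value below is illustrative, not the paper's setting):

```python
from statistics import NormalDist

def certified_radius(sigma: float, p_a_lower: float) -> float:
    """Cohen et al. (2019) certificate: R = sigma * Phi^{-1}(p_A).

    p_a_lower is a lower confidence bound on the probability of the
    top class under Gaussian noise; the certificate abstains (R = 0)
    when p_a_lower <= 0.5.
    """
    if p_a_lower <= 0.5:
        return 0.0
    return sigma * NormalDist().inv_cdf(p_a_lower)

# Even a near-perfect smoothed classifier with p_A bounded at 0.999
# certifies only up to ~3.09 * sigma: for sigma = 0.25 that is ~0.77,
# consistent with the sigma = 0.25 rows dropping to 0.0 at eps = 1.0.
print(certified_radius(0.25, 0.999))
print(certified_radius(1.0, 0.999))
```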
300
+
301
+ ![](images/847dea4a7b6eb768732f24bbe5b4a11472b85fbf65c655bd2b9b2975206ccaa0.jpg)
302
+ (a) One-shot denoised images $(\sigma = 1.00)$
303
+
304
+ ![](images/ebaef54e3c3570e242332e48842b612d6b67fab489382519c3b50c32f8b812db.jpg)
305
+ (b) Multi-step denoised images $(\sigma = 1.00)$
306
+ Figure 4: Qualitative comparison of one-shot denoising and multi-step denoising. We show denoised images under random Gaussian noise ( $\sigma = 1.00$ ). A green border indicates that the classifier correctly classifies the denoised image, while a red border indicates a misclassification.
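One-shot denoising in (a) refers to estimating the clean image directly from the diffusion model's noise prediction at the matched timestep, rather than running the full reverse chain. A minimal NumPy sketch under standard DDPM notation (the `alpha_bar_t` value and the use of the true noise as the "prediction" are illustrative stand-ins, not the paper's actual model):

```python
import numpy as np

def one_shot_denoise(x_t, eps_pred, alpha_bar_t):
    """One-shot DDPM estimate of the clean image.

    The forward process gives
        x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps,
    so solving for x_0 with the predicted noise yields
        x0_hat = (x_t - sqrt(1 - alpha_bar_t) * eps_pred) / sqrt(alpha_bar_t).
    """
    return (x_t - np.sqrt(1.0 - alpha_bar_t) * eps_pred) / np.sqrt(alpha_bar_t)

# Sanity check: if the noise predictor returned the true injected noise,
# the one-shot formula would recover x_0 up to floating-point error.
rng = np.random.default_rng(0)
x0 = rng.uniform(-1.0, 1.0, size=(3, 32, 32))   # a stand-in "clean image"
eps = rng.standard_normal(x0.shape)             # the injected Gaussian noise
alpha_bar_t = 0.3                               # illustrative timestep value
x_t = np.sqrt(alpha_bar_t) * x0 + np.sqrt(1.0 - alpha_bar_t) * eps
x0_hat = one_shot_denoise(x_t, eps, alpha_bar_t)
print(np.max(np.abs(x0_hat - x0)))  # numerically ~0
```

With a real (imperfect) noise predictor, `x0_hat` is blurry but tends to stay close to the classifier's training distribution, which is the intuition behind preferring one-shot over multi-step denoising here.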
307
+
308
+ ![](images/858c09ae7f926f08f6a9f17ee32618fc0b5bc8bbf981218b2b7468ccb1d00d6e.jpg)
309
+ Figure 5: Additional examples illustrating why multi-step denoised images are less recognizable to the classifier. From left to right: clean images, noisy images with $\sigma = 1.0$, one-shot denoised images, multi-step denoised images. For the denoised images, we show the prediction of the pretrained BEiT model.
2023/(Certified!!) Adversarial Robustness for Free!/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:27377459d213eeb6e7ee89b27f3f257dfa4f6e07812ed811fec43bc3bc47cd1d
3
+ size 840378
2023/(Certified!!) Adversarial Robustness for Free!/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/3D Equivariant Diffusion for Target-Aware Molecule Generation and Affinity Prediction/087f4cc3-a9bc-404f-906f-af1ead9b2e7c_content_list.json ADDED
The diff for this file is too large to render. See raw diff