Add Batch ac6e2091-eedc-4466-812d-447a6352b6d4 data
This view is limited to 50 files because it contains too many changes. See raw diff.
- .gitattributes +64 -0
- 2023/3D generation on ImageNet/ab701ff5-eb72-4d71-8760-4e730c4bc8a9_content_list.json +0 -0
- 2023/3D generation on ImageNet/ab701ff5-eb72-4d71-8760-4e730c4bc8a9_model.json +0 -0
- 2023/3D generation on ImageNet/ab701ff5-eb72-4d71-8760-4e730c4bc8a9_origin.pdf +3 -0
- 2023/3D generation on ImageNet/full.md +512 -0
- 2023/3D generation on ImageNet/images.zip +3 -0
- 2023/3D generation on ImageNet/layout.json +0 -0
- 2023/A Call to Reflect on Evaluation Practices for Failure Detection in Image Classification/51332d35-240d-4fdc-831c-340986fdb152_content_list.json +0 -0
- 2023/A Call to Reflect on Evaluation Practices for Failure Detection in Image Classification/51332d35-240d-4fdc-831c-340986fdb152_model.json +0 -0
- 2023/A Call to Reflect on Evaluation Practices for Failure Detection in Image Classification/51332d35-240d-4fdc-831c-340986fdb152_origin.pdf +3 -0
- 2023/A Call to Reflect on Evaluation Practices for Failure Detection in Image Classification/full.md +0 -0
- 2023/A Call to Reflect on Evaluation Practices for Failure Detection in Image Classification/images.zip +3 -0
- 2023/A Call to Reflect on Evaluation Practices for Failure Detection in Image Classification/layout.json +0 -0
- 2023/A Kernel Perspective of Skip Connections in Convolutional Networks/74f46840-0d16-43b0-a684-c39a24459942_content_list.json +0 -0
- 2023/A Kernel Perspective of Skip Connections in Convolutional Networks/74f46840-0d16-43b0-a684-c39a24459942_model.json +0 -0
- 2023/A Kernel Perspective of Skip Connections in Convolutional Networks/74f46840-0d16-43b0-a684-c39a24459942_origin.pdf +3 -0
- 2023/A Kernel Perspective of Skip Connections in Convolutional Networks/full.md +0 -0
- 2023/A Kernel Perspective of Skip Connections in Convolutional Networks/images.zip +3 -0
- 2023/A Kernel Perspective of Skip Connections in Convolutional Networks/layout.json +0 -0
- 2023/Addressing Parameter Choice Issues in Unsupervised Domain Adaptation by Aggregation/e5d03060-3337-4f77-9e3e-da15fe7c86ba_content_list.json +0 -0
- 2023/Addressing Parameter Choice Issues in Unsupervised Domain Adaptation by Aggregation/e5d03060-3337-4f77-9e3e-da15fe7c86ba_model.json +0 -0
- 2023/Addressing Parameter Choice Issues in Unsupervised Domain Adaptation by Aggregation/e5d03060-3337-4f77-9e3e-da15fe7c86ba_origin.pdf +3 -0
- 2023/Addressing Parameter Choice Issues in Unsupervised Domain Adaptation by Aggregation/full.md +0 -0
- 2023/Addressing Parameter Choice Issues in Unsupervised Domain Adaptation by Aggregation/images.zip +3 -0
- 2023/Addressing Parameter Choice Issues in Unsupervised Domain Adaptation by Aggregation/layout.json +0 -0
- 2023/Agree to Disagree_ Diversity through Disagreement for Better Transferability/5382925d-b4d2-4e5f-bb7e-98e018f67e83_content_list.json +0 -0
- 2023/Agree to Disagree_ Diversity through Disagreement for Better Transferability/5382925d-b4d2-4e5f-bb7e-98e018f67e83_model.json +0 -0
- 2023/Agree to Disagree_ Diversity through Disagreement for Better Transferability/5382925d-b4d2-4e5f-bb7e-98e018f67e83_origin.pdf +3 -0
- 2023/Agree to Disagree_ Diversity through Disagreement for Better Transferability/full.md +565 -0
- 2023/Agree to Disagree_ Diversity through Disagreement for Better Transferability/images.zip +3 -0
- 2023/Agree to Disagree_ Diversity through Disagreement for Better Transferability/layout.json +0 -0
- 2023/Aligning Model and Macaque Inferior Temporal Cortex Representations Improves Model-to-Human Behavioral Alignment and Adversarial Robustness/7ac7f310-bee1-439e-801a-a6f6b2dfa988_content_list.json +1225 -0
- 2023/Aligning Model and Macaque Inferior Temporal Cortex Representations Improves Model-to-Human Behavioral Alignment and Adversarial Robustness/7ac7f310-bee1-439e-801a-a6f6b2dfa988_model.json +1997 -0
- 2023/Aligning Model and Macaque Inferior Temporal Cortex Representations Improves Model-to-Human Behavioral Alignment and Adversarial Robustness/7ac7f310-bee1-439e-801a-a6f6b2dfa988_origin.pdf +3 -0
- 2023/Aligning Model and Macaque Inferior Temporal Cortex Representations Improves Model-to-Human Behavioral Alignment and Adversarial Robustness/full.md +214 -0
- 2023/Aligning Model and Macaque Inferior Temporal Cortex Representations Improves Model-to-Human Behavioral Alignment and Adversarial Robustness/images.zip +3 -0
- 2023/Aligning Model and Macaque Inferior Temporal Cortex Representations Improves Model-to-Human Behavioral Alignment and Adversarial Robustness/layout.json +0 -0
- 2023/AutoGT_ Automated Graph Transformer Architecture Search/d4f0ae85-f4df-4a55-84ac-a8865afd4325_content_list.json +1968 -0
- 2023/AutoGT_ Automated Graph Transformer Architecture Search/d4f0ae85-f4df-4a55-84ac-a8865afd4325_model.json +0 -0
- 2023/AutoGT_ Automated Graph Transformer Architecture Search/d4f0ae85-f4df-4a55-84ac-a8865afd4325_origin.pdf +3 -0
- 2023/AutoGT_ Automated Graph Transformer Architecture Search/full.md +383 -0
- 2023/AutoGT_ Automated Graph Transformer Architecture Search/images.zip +3 -0
- 2023/AutoGT_ Automated Graph Transformer Architecture Search/layout.json +0 -0
- 2023/Betty_ An Automatic Differentiation Library for Multilevel Optimization/90d5f745-196f-40c1-8311-4d75a522d471_content_list.json +0 -0
- 2023/Betty_ An Automatic Differentiation Library for Multilevel Optimization/90d5f745-196f-40c1-8311-4d75a522d471_model.json +0 -0
- 2023/Betty_ An Automatic Differentiation Library for Multilevel Optimization/90d5f745-196f-40c1-8311-4d75a522d471_origin.pdf +3 -0
- 2023/Betty_ An Automatic Differentiation Library for Multilevel Optimization/full.md +696 -0
- 2023/Betty_ An Automatic Differentiation Library for Multilevel Optimization/images.zip +3 -0
- 2023/Betty_ An Automatic Differentiation Library for Multilevel Optimization/layout.json +0 -0
- 2023/Clean-image Backdoor_ Attacking Multi-label Models with Poisoned Labels Only/03aaef97-f1ed-4352-8545-c82384c714ab_content_list.json +0 -0
.gitattributes
CHANGED
@@ -6264,3 +6264,67 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 2023/UNICORN_[[:space:]]A[[:space:]]Unified[[:space:]]Backdoor[[:space:]]Trigger[[:space:]]Inversion[[:space:]]Framework/140224d9-f9bc-48f7-aaaa-fc1c2971f7e8_origin.pdf filter=lfs diff=lfs merge=lfs -text
 2023/UNIFIED-IO_[[:space:]]A[[:space:]]Unified[[:space:]]Model[[:space:]]for[[:space:]]Vision,[[:space:]]Language,[[:space:]]and[[:space:]]Multi-modal[[:space:]]Tasks/3166fdff-b1f6-4120-a666-b223482e0434_origin.pdf filter=lfs diff=lfs merge=lfs -text
 2023/Understanding[[:space:]]and[[:space:]]Adopting[[:space:]]Rational[[:space:]]Behavior[[:space:]]by[[:space:]]Bellman[[:space:]]Score[[:space:]]Estimation/e3ae32ca-e451-4b6b-8dcc-f991777dd59a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/3D[[:space:]]generation[[:space:]]on[[:space:]]ImageNet/ab701ff5-eb72-4d71-8760-4e730c4bc8a9_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/A[[:space:]]Call[[:space:]]to[[:space:]]Reflect[[:space:]]on[[:space:]]Evaluation[[:space:]]Practices[[:space:]]for[[:space:]]Failure[[:space:]]Detection[[:space:]]in[[:space:]]Image[[:space:]]Classification/51332d35-240d-4fdc-831c-340986fdb152_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/A[[:space:]]Kernel[[:space:]]Perspective[[:space:]]of[[:space:]]Skip[[:space:]]Connections[[:space:]]in[[:space:]]Convolutional[[:space:]]Networks/74f46840-0d16-43b0-a684-c39a24459942_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Addressing[[:space:]]Parameter[[:space:]]Choice[[:space:]]Issues[[:space:]]in[[:space:]]Unsupervised[[:space:]]Domain[[:space:]]Adaptation[[:space:]]by[[:space:]]Aggregation/e5d03060-3337-4f77-9e3e-da15fe7c86ba_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Agree[[:space:]]to[[:space:]]Disagree_[[:space:]]Diversity[[:space:]]through[[:space:]]Disagreement[[:space:]]for[[:space:]]Better[[:space:]]Transferability/5382925d-b4d2-4e5f-bb7e-98e018f67e83_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Aligning[[:space:]]Model[[:space:]]and[[:space:]]Macaque[[:space:]]Inferior[[:space:]]Temporal[[:space:]]Cortex[[:space:]]Representations[[:space:]]Improves[[:space:]]Model-to-Human[[:space:]]Behavioral[[:space:]]Alignment[[:space:]]and[[:space:]]Adversarial[[:space:]]Robustness/7ac7f310-bee1-439e-801a-a6f6b2dfa988_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/AutoGT_[[:space:]]Automated[[:space:]]Graph[[:space:]]Transformer[[:space:]]Architecture[[:space:]]Search/d4f0ae85-f4df-4a55-84ac-a8865afd4325_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Betty_[[:space:]]An[[:space:]]Automatic[[:space:]]Differentiation[[:space:]]Library[[:space:]]for[[:space:]]Multilevel[[:space:]]Optimization/90d5f745-196f-40c1-8311-4d75a522d471_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Clean-image[[:space:]]Backdoor_[[:space:]]Attacking[[:space:]]Multi-label[[:space:]]Models[[:space:]]with[[:space:]]Poisoned[[:space:]]Labels[[:space:]]Only/03aaef97-f1ed-4352-8545-c82384c714ab_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Compressing[[:space:]]multidimensional[[:space:]]weather[[:space:]]and[[:space:]]climate[[:space:]]data[[:space:]]into[[:space:]]neural[[:space:]]networks/8db07c7a-4e12-4fa8-9ae4-0e1944cf1aa3_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Conditional[[:space:]]Antibody[[:space:]]Design[[:space:]]as[[:space:]]3D[[:space:]]Equivariant[[:space:]]Graph[[:space:]]Translation/293789d9-2c48-4973-a3c9-5e899a92a025_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Confidence-Conditioned[[:space:]]Value[[:space:]]Functions[[:space:]]for[[:space:]]Offline[[:space:]]Reinforcement[[:space:]]Learning/acec2328-a0ad-45bb-b023-4b8b0b321a78_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Confidential-PROFITT_[[:space:]]Confidential[[:space:]]PROof[[:space:]]of[[:space:]]FaIr[[:space:]]Training[[:space:]]of[[:space:]]Trees/4b3aeb7b-cf8e-4058-810c-15873f2cb689_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Crossformer_[[:space:]]Transformer[[:space:]]Utilizing[[:space:]]Cross-Dimension[[:space:]]Dependency[[:space:]]for[[:space:]]Multivariate[[:space:]]Time[[:space:]]Series[[:space:]]Forecasting/1c2608d7-9142-411a-9f18-0e47b7d80c34_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/DaxBench_[[:space:]]Benchmarking[[:space:]]Deformable[[:space:]]Object[[:space:]]Manipulation[[:space:]]with[[:space:]]Differentiable[[:space:]]Physics/e8f44d35-0cca-44b8-a0a8-8527b6939edc_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Dichotomy[[:space:]]of[[:space:]]Control_[[:space:]]Separating[[:space:]]What[[:space:]]You[[:space:]]Can[[:space:]]Control[[:space:]]from[[:space:]]What[[:space:]]You[[:space:]]Cannot/80e2f184-6768-41ce-8f5c-3ed4875e2ae0_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Do[[:space:]]We[[:space:]]Really[[:space:]]Need[[:space:]]Complicated[[:space:]]Model[[:space:]]Architectures[[:space:]]For[[:space:]]Temporal[[:space:]]Networks_/d12d1dcf-ef4d-4398-b64f-60f4f09f77a4_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Dr.Spider_[[:space:]]A[[:space:]]Diagnostic[[:space:]]Evaluation[[:space:]]Benchmark[[:space:]]towards[[:space:]]Text-to-SQL[[:space:]]Robustness/4be8e09d-68c1-4065-8d29-3cff44d52fd5_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Draft,[[:space:]]Sketch,[[:space:]]and[[:space:]]Prove_[[:space:]]Guiding[[:space:]]Formal[[:space:]]Theorem[[:space:]]Provers[[:space:]]with[[:space:]]Informal[[:space:]]Proofs/fab5603c-efbb-4b8c-a107-76ddfbeffaa0_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/DreamFusion_[[:space:]]Text-to-3D[[:space:]]using[[:space:]]2D[[:space:]]Diffusion/0ed5f616-43ac-4d6d-8028-471bf8c7b56c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Efficient[[:space:]]Attention[[:space:]]via[[:space:]]Control[[:space:]]Variates/1708152c-d932-4fe7-b834-4197a4e64f5b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Efficient[[:space:]]Conditionally[[:space:]]Invariant[[:space:]]Representation[[:space:]]Learning/c687f102-a769-45f7-b339-63882653ffd3_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Efficiently[[:space:]]Computing[[:space:]]Nash[[:space:]]Equilibria[[:space:]]in[[:space:]]Adversarial[[:space:]]Team[[:space:]]Markov[[:space:]]Games/71f65fc9-1b28-4f0c-beb4-7ed097766279_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Embedding[[:space:]]Fourier[[:space:]]for[[:space:]]Ultra-High-Definition[[:space:]]Low-Light[[:space:]]Image[[:space:]]Enhancement/d867dfc0-749e-4c11-8e0f-87bbfce3be2f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Emergent[[:space:]]World[[:space:]]Representations_[[:space:]]Exploring[[:space:]]a[[:space:]]Sequence[[:space:]]Model[[:space:]]Trained[[:space:]]on[[:space:]]a[[:space:]]Synthetic[[:space:]]Task/be937266-91ae-44be-8ce4-46f0fee8b903_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Encoding[[:space:]]Recurrence[[:space:]]into[[:space:]]Transformers/e916a71b-e01b-4a8f-9402-f6b33160e26f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Extreme[[:space:]]Q-Learning_[[:space:]]MaxEnt[[:space:]]RL[[:space:]]without[[:space:]]Entropy/d4e33d90-44d0-4caf-8a9e-cf76316fec0b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Fast[[:space:]]and[[:space:]]Precise_[[:space:]]Adjusting[[:space:]]Planning[[:space:]]Horizon[[:space:]]with[[:space:]]Adaptive[[:space:]]Subgoal[[:space:]]Search/cb44528c-4891-44c1-9c33-907a83b8e88a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/From[[:space:]]Play[[:space:]]to[[:space:]]Policy_[[:space:]]Conditional[[:space:]]Behavior[[:space:]]Generation[[:space:]]from[[:space:]]Uncurated[[:space:]]Robot[[:space:]]Data/673985c8-cba7-48ed-802a-2ac41172434d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Git[[:space:]]Re-Basin_[[:space:]]Merging[[:space:]]Models[[:space:]]modulo[[:space:]]Permutation[[:space:]]Symmetries/c0003211-fdc2-4578-9a92-a35a388078f8_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Graph[[:space:]]Neural[[:space:]]Networks[[:space:]]for[[:space:]]Link[[:space:]]Prediction[[:space:]]with[[:space:]]Subgraph[[:space:]]Sketching/b764ea4c-44df-4433-82a3-3ffda6c93221_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Image[[:space:]]as[[:space:]]Set[[:space:]]of[[:space:]]Points/9a6b5120-0183-44aa-898c-cc70b0026a82_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Image[[:space:]]to[[:space:]]Sphere_[[:space:]]Learning[[:space:]]Equivariant[[:space:]]Features[[:space:]]for[[:space:]]Efficient[[:space:]]Pose[[:space:]]Prediction/7db5c492-0a7b-412c-b239-94dd39915d5d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/In-context[[:space:]]Reinforcement[[:space:]]Learning[[:space:]]with[[:space:]]Algorithm[[:space:]]Distillation/09e6bc85-569f-4ed9-8160-87aebeea2c7a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Is[[:space:]]Conditional[[:space:]]Generative[[:space:]]Modeling[[:space:]]all[[:space:]]you[[:space:]]need[[:space:]]for[[:space:]]Decision[[:space:]]Making_/a0c90c01-b1be-4aa3-8fe7-54f900ff36f9_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Is[[:space:]]the[[:space:]]Performance[[:space:]]of[[:space:]]My[[:space:]]Deep[[:space:]]Network[[:space:]]Too[[:space:]]Good[[:space:]]to[[:space:]]Be[[:space:]]True_[[:space:]]A[[:space:]]Direct[[:space:]]Approach[[:space:]]to[[:space:]]Estimating[[:space:]]the[[:space:]]Bayes[[:space:]]Error[[:space:]]in[[:space:]]Binary[[:space:]]Classification/7849a3ce-a04f-4ff7-9c1c-60b657223364_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Language[[:space:]]Modelling[[:space:]]with[[:space:]]Pixels/21f52c2c-2eb0-4c3f-84ba-7b0e7b18e7f9_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Learnable[[:space:]]Behavior[[:space:]]Control_[[:space:]]Breaking[[:space:]]Atari[[:space:]]Human[[:space:]]World[[:space:]]Records[[:space:]]via[[:space:]]Sample-Efficient[[:space:]]Behavior[[:space:]]Selection/df0bf6cc-6ef5-4f8d-aca0-d775482f4076_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Learning[[:space:]]on[[:space:]]Large-scale[[:space:]]Text-attributed[[:space:]]Graphs[[:space:]]via[[:space:]]Variational[[:space:]]Inference/0265e404-2d03-4e33-a15c-939821ada831_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Learning[[:space:]]where[[:space:]]and[[:space:]]when[[:space:]]to[[:space:]]reason[[:space:]]in[[:space:]]neuro-symbolic[[:space:]]inference/dc66d72f-5f66-4eb3-9fc0-940176088ef4_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/MICN_[[:space:]]Multi-scale[[:space:]]Local[[:space:]]and[[:space:]]Global[[:space:]]Context[[:space:]]Modeling[[:space:]]for[[:space:]]Long-term[[:space:]]Series[[:space:]]Forecasting/31eb2d55-ce61-48c3-9d66-bf372c2db673_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Mastering[[:space:]]the[[:space:]]Game[[:space:]]of[[:space:]]No-Press[[:space:]]Diplomacy[[:space:]]via[[:space:]]Human-Regularized[[:space:]]Reinforcement[[:space:]]Learning[[:space:]]and[[:space:]]Planning/58a0c832-0463-4f99-9889-db9b06d93aa9_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Mitigating[[:space:]]Gradient[[:space:]]Bias[[:space:]]in[[:space:]]Multi-objective[[:space:]]Learning_[[:space:]]A[[:space:]]Provably[[:space:]]Convergent[[:space:]]Approach/01519f56-f386-4124-9667-9a1fd9133075_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/MocoSFL_[[:space:]]enabling[[:space:]]cross-client[[:space:]]collaborative[[:space:]]self-supervised[[:space:]]learning/8e3e4b83-7417-4866-adb5-d012133c9223_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Modeling[[:space:]]content[[:space:]]creator[[:space:]]incentives[[:space:]]on[[:space:]]algorithm-curated[[:space:]]platforms/27a699b6-f019-4494-9105-d3682232d088_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Moving[[:space:]]Forward[[:space:]]by[[:space:]]Moving[[:space:]]Backward_[[:space:]]Embedding[[:space:]]Action[[:space:]]Impact[[:space:]]over[[:space:]]Action[[:space:]]Semantics/3544f99f-273e-4f79-a076-45b88dbeda39_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Multi-Rate[[:space:]]VAE_[[:space:]]Train[[:space:]]Once,[[:space:]]Get[[:space:]]the[[:space:]]Full[[:space:]]Rate-Distortion[[:space:]]Curve/fe738140-0d55-4fc9-ba9c-a1342c2c92a6_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Unmasking[[:space:]]the[[:space:]]Lottery[[:space:]]Ticket[[:space:]]Hypothesis_[[:space:]]What's[[:space:]]Encoded[[:space:]]in[[:space:]]a[[:space:]]Winning[[:space:]]Ticket's[[:space:]]Mask_/af8fd8a8-72f1-4dfb-bf16-19f1bc771ca4_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Unsupervised[[:space:]]Meta-learning[[:space:]]via[[:space:]]Few-shot[[:space:]]Pseudo-supervised[[:space:]]Contrastive[[:space:]]Learning/02e653e3-ad2b-4c33-b4c8-c0ec04aa63fe_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Unsupervised[[:space:]]Model[[:space:]]Selection[[:space:]]for[[:space:]]Time[[:space:]]Series[[:space:]]Anomaly[[:space:]]Detection/00b7767c-9735-4cf8-be58-18d4ac989b6f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Unsupervised[[:space:]]Semantic[[:space:]]Segmentation[[:space:]]with[[:space:]]Self-supervised[[:space:]]Object-centric[[:space:]]Representations/0926627e-06c9-4adc-8077-185f28083acd_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Using[[:space:]]Language[[:space:]]to[[:space:]]Extend[[:space:]]to[[:space:]]Unseen[[:space:]]Domains/5e981ec1-8c99-44a3-9670-31abe6ce2894_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/VA-DepthNet_[[:space:]]A[[:space:]]Variational[[:space:]]Approach[[:space:]]to[[:space:]]Single[[:space:]]Image[[:space:]]Depth[[:space:]]Prediction/69988eea-8095-496d-9c4f-7350119750a0_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/VIP_[[:space:]]Towards[[:space:]]Universal[[:space:]]Visual[[:space:]]Reward[[:space:]]and[[:space:]]Representation[[:space:]]via[[:space:]]Value-Implicit[[:space:]]Pre-Training/5d1ff90f-308d-438c-9827-9cf26e3b0d05_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/VIPeR_[[:space:]]Provably[[:space:]]Efficient[[:space:]]Algorithm[[:space:]]for[[:space:]]Offline[[:space:]]RL[[:space:]]with[[:space:]]Neural[[:space:]]Function[[:space:]]Approximation/835e0a57-d376-4ab1-97d3-e59fc98d6402_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Vision[[:space:]]Transformer[[:space:]]Adapter[[:space:]]for[[:space:]]Dense[[:space:]]Predictions/0d641953-2fde-40bb-81e2-eb072d98a55b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Visual[[:space:]]Recognition[[:space:]]with[[:space:]]Deep[[:space:]]Nearest[[:space:]]Centroids/cc7765fb-9415-4354-ae6c-2d21b8b05b27_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Voxurf_[[:space:]]Voxel-based[[:space:]]Efficient[[:space:]]and[[:space:]]Accurate[[:space:]]Neural[[:space:]]Surface[[:space:]]Reconstruction/4a6fba26-ec97-4ea8-82fb-486e90c6f24d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Warping[[:space:]]the[[:space:]]Space_[[:space:]]Weight[[:space:]]Space[[:space:]]Rotation[[:space:]]for[[:space:]]Class-Incremental[[:space:]]Few-Shot[[:space:]]Learning/ad18cce9-faa7-4207-aa90-3b58305cf7e3_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/When[[:space:]]Source-Free[[:space:]]Domain[[:space:]]Adaptation[[:space:]]Meets[[:space:]]Learning[[:space:]]with[[:space:]]Noisy[[:space:]]Labels/4468b483-9948-4223-b0cf-e2f3daa55af2_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Where[[:space:]]to[[:space:]]Begin_[[:space:]]On[[:space:]]the[[:space:]]Impact[[:space:]]of[[:space:]]Pre-Training[[:space:]]and[[:space:]]Initialization[[:space:]]in[[:space:]]Federated[[:space:]]Learning/b17465e6-56cf-4a56-83a3-d6c19f2df3bc_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Zero-Shot[[:space:]]Image[[:space:]]Restoration[[:space:]]Using[[:space:]]Denoising[[:space:]]Diffusion[[:space:]]Null-Space[[:space:]]Model/450bbabd-f7bc-4849-a6aa-19a630ac135c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/ZiCo_[[:space:]]Zero-shot[[:space:]]NAS[[:space:]]via[[:space:]]inverse[[:space:]]Coefficient[[:space:]]of[[:space:]]Variation[[:space:]]on[[:space:]]Gradients/a4ccb233-5f5c-41d3-ae47-bd22cdcb6231_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/gDDIM_[[:space:]]Generalized[[:space:]]denoising[[:space:]]diffusion[[:space:]]implicit[[:space:]]models/4d3ee4cc-20bf-410f-b0d4-2719f75a6431_origin.pdf filter=lfs diff=lfs merge=lfs -text
2023/3D generation on ImageNet/ab701ff5-eb72-4d71-8760-4e730c4bc8a9_content_list.json
ADDED
The diff for this file is too large to render. See raw diff.
2023/3D generation on ImageNet/ab701ff5-eb72-4d71-8760-4e730c4bc8a9_model.json
ADDED
The diff for this file is too large to render. See raw diff.
2023/3D generation on ImageNet/ab701ff5-eb72-4d71-8760-4e730c4bc8a9_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6e41362bfe7ca3e5287c095c6f975095a677a05f2fb845e9c6c719a3036f1975
+size 22006293
2023/3D generation on ImageNet/full.md
ADDED
@@ -0,0 +1,512 @@
# 3D GENERATION ON IMAGENET
Ivan Skorokhodov* (KAUST), Aliaksandr Siarohin (Snap Inc.), Yinghao Xu* (CUHK), Jian Ren (Snap Inc.), Hsin-Ying Lee (Snap Inc.), Peter Wonka (KAUST), Sergey Tulyakov (Snap Inc.)
# ABSTRACT
All existing 3D-from-2D generators are designed for well-curated single-category datasets, where all the objects have (approximately) the same scale, 3D location and orientation, and the camera always points to the center of the scene. This makes them inapplicable to diverse, in-the-wild datasets of non-alignable scenes rendered from arbitrary camera poses. In this work, we develop 3D generator with Generic Priors (3DGP): a 3D synthesis framework with more general assumptions about the training data, and show that it scales to very challenging datasets, like ImageNet. Our model is based on three new ideas. First, we incorporate an inaccurate off-the-shelf depth estimator into 3D GAN training via a special depth adaptation module to handle the imprecision. Then, we create a flexible camera model and a regularization strategy for it to learn its distribution parameters during training. Finally, we extend the recent ideas of transferring knowledge from pretrained classifiers into GANs for patch-wise trained models by employing a simple distillation-based technique on top of the discriminator. It achieves more stable training than the existing methods and speeds up the convergence by at least $40\%$ . We explore our model on four datasets: SDIP Dogs $256^{2}$ , SDIP Elephants $256^{2}$ , LSUN Horses $256^{2}$ , and ImageNet $256^{2}$ and demonstrate that 3DGP outperforms the recent state-of-the-art in terms of both texture and geometry quality.
Code and visualizations: https://snap-research.github.io/3dgp
# 1 INTRODUCTION
Figure 1: Selected samples from EG3D (Chan et al., 2022) and our generator trained on ImageNet $256^{2}$ (Deng et al., 2009). EG3D models the geometry in low resolution and renders either flat shapes (when trained with the default camera distribution) or repetitive "layered" ones (when trained with a wide camera distribution). In contrast, our model synthesizes the radiance field in the full dataset resolution and learns high-fidelity details during training. Zoom-in for a better view.


We witness remarkable progress in the domain of 3D-aware image synthesis. The community is developing new methods to improve the image quality, 3D consistency and efficiency of the generators
Figure 2: Model overview. Left: our tri-plane-based generator. To synthesize an image, we first sample camera parameters from a prior distribution and pass them to the camera generator. This gives the posterior camera parameters, used to render an image and its depth map. The depth adaptor mitigates the distribution gap between the rendered and the predicted depth. Right: our discriminator receives a 4-channel color-depth pair as an input. A fake sample consists of the RGB image and its (adapted) depth map. A real sample consists of a real image and its estimated depth. Our two-headed discriminator predicts adversarial scores and image features for knowledge distillation.
(e.g., Chan et al. (2022); Deng et al. (2022); Skorokhodov et al. (2022); Zhao et al. (2022); Schwarz et al. (2022)). However, all the existing frameworks are designed for well-curated and aligned datasets consisting of objects of the same category, scale and global scene structure, like human or cat faces (Chan et al., 2021). Such curation requires domain-specific 3D knowledge about the object category at hand, since one needs to infer the underlying 3D keypoints to properly crop, rotate and scale the images (Deng et al., 2022; Chan et al., 2022). This makes it infeasible to perform a similar alignment procedure for large-scale multi-category datasets that are inherently “non-alignable”: there does not exist a single canonical position which all the objects could be transformed into (e.g., it is impossible to align a landscape panorama with a spoon).
To extend 3D synthesis to in-the-wild datasets, one needs a framework which relies on more universal 3D priors. In this work, we make a step towards this direction and develop a 3D generator with Generic Priors (3DGP): a 3D synthesis model which is guided only by (imperfect) depth predictions from an off-the-shelf monocular depth estimator. Surprisingly, such 3D cues are enough to learn reasonable scenes from loosely curated, non-aligned datasets, such as ImageNet (Deng et al., 2009).
Training a 3D generator on in-the-wild datasets comes with three main challenges: 1) extrinsic camera parameters of real images are unknown and impossible to infer; 2) objects appear in different shapes, positions, rotations and scales, complicating the learning of the underlying geometry; and 3) the dataset typically contains a lot of variation in terms of texture and structure, and is difficult to fit even for 2D generators. As shown in Fig 1 (left), state-of-the-art 3D-aware generators, such as EG3D (Chan et al., 2022), struggle to learn the proper geometry in such a challenging scenario. In this work, we develop three novel techniques to address those problems.
Learnable "Ball-in-Sphere" camera distribution. Most existing methods utilize a restricted camera model (e.g., (Schwarz et al., 2020; Niemeyer & Geiger, 2021b; Chan et al., 2021)): the camera is positioned on a sphere with a constant radius, always points to the world center and has fixed intrinsics. But diverse, non-aligned datasets violate these assumptions: e.g., dogs datasets have images of both close-up photos of a snout and photos of full-body dogs, which implies the variability in the focal length and look-at positions. Thus, we introduce a novel camera model with 6 degrees of freedom to address this variability. We optimize its distribution parameters during training and develop an efficient gradient penalty for it to prevent its collapse to a delta distribution.
Adversarial depth supervision (ADS). A generic image dataset features a wide diversity of objects with different shapes and poses. That is why learning a meaningful 3D geometry together with the camera distribution is an ill-posed problem, as the incorrect scale can be well compensated by an incorrect camera model (Hartley & Zisserman, 2003), or flat geometry (Zhao et al., 2022; Chan et al., 2022). To instill the 3D bias, we provide the scene geometry information to the discriminator by concatenating the depth map of a scene as the 4-th channel of its RGB input. For real images, we use their (imperfect) estimates from a generic off-the-shelf monocular depth predictor (Miangoleh et al., 2021). For fake images, we render the depth from the synthesized radiance field, and process it with a shallow depth adaptor module, bridging the distribution gap between the estimated and rendered depth maps. This ultimately guides the generator to learn the proper 3D geometry.
Knowledge distillation into Discriminator. Prior works found it beneficial to transfer the knowledge from off-the-shelf 2D image encoders into a synthesis model (Sauer et al., 2022). They typically
utilize pre-trained image classifiers as the discriminator backbone with additional regularization strategies on top (Sauer et al., 2021; Kumari et al., 2022). Such techniques, however, are only applicable when the discriminator has an input distribution similar to what the encoder was trained on. This does not suit the setup of patch-wise training (Schwarz et al., 2020) or allows to provide depth maps via the 4-th channel to the discriminator. That is why we develop a more general and efficient knowledge transfer strategy based on knowledge distillation. It consists in forcing the discriminator to predict features of a pre-trained ResNet50 (He et al., 2016) model, effectively transferring the knowledge into our model. This technique has just $1\%$ of computational overhead compared to standard training, but allows to improve FID for both 2D and 3D generators by at least $40\%$ .
First, we explore our ideas on non-aligned single-category image datasets: SDIP Dogs $256^{2}$ (Mokady et al., 2022), SDIP Elephants $256^{2}$ (Mokady et al., 2022), and LSUN Horses $256^{2}$ (Yu et al., 2015). On these datasets, our generator achieves better image appearance (measured by FID Heusel et al. (2017)) and geometry quality than the modern state-of-the-art 3D-aware generators. Then, we train the model on all the 1,000 classes of ImageNet (Deng et al., 2009), showing that multi-categorical 3D synthesis is possible for non-alignable data (see Fig. 1).
# 2 RELATED WORK
3D-aware image synthesis. Mildenhall et al. (2020) introduced Neural Radiance Fields (NeRF): a neural network-based representation of 3D volumes which is learnable from RGB supervision only. It ignited many 3D-aware image/video generators (Schwarz et al., 2020; Niemeyer & Geiger, 2021b; Chan et al., 2021; Xue et al., 2022; Zhou et al., 2021; Wang et al., 2022; Gu et al., 2022; Or-El et al., 2021; Chan et al., 2022; Skorokhodov et al., 2022; Zhang et al., 2022; Xu et al., 2021; Bahmani et al., 2022), all of them being GAN-based (Goodfellow et al., 2014). Many of them explore the techniques to reduce the cost of 3D-aware generation for high-resolution data, like patch-wise training (e.g., Schwarz et al. (2020); Meng et al. (2021); Skorokhodov et al. (2022)), MPI-based rendering (e.g., Zhao et al. (2022)) or training a separate 2D upsampler (e.g., Gu et al. (2022)).
Learning the camera poses. NeRFs require known camera poses, obtained from multi-view stereo (Schönberger et al., 2016) or structure from motion (Schönberger & Frahm, 2016). Alternatively, a group of works has been introduced to either automatically estimate the camera poses (Wang et al., 2021) or finetune them during training (Lin et al., 2021; Kuang et al., 2022). The problem we are tackling in this work is fundamentally different as it requires learning not the camera poses from multi-view observations, but a distribution of poses, while having access to sparse, single-view data of diverse object categories. In this respect, the work closest to ours is CAMPARI (Niemeyer & Geiger, 2021a) as it also learns a camera distribution.
GANs with external knowledge. Several works observed improved convergence and fidelity of GANs when using existing, generic image-based models (Kumari et al., 2022; Sauer et al., 2021; 2022; Mo et al., 2020), the most notable being StyleGAN-XL (Sauer et al., 2022), which uses a pre-trained EfficientNet (Tan & Le, 2019) followed by a couple of discriminator layers. A similar technique is not suitable in our case as pre-training a generic RGB-D network on a large-scale RGB-D dataset is problematic due to the lack of data. Another notable example is FreezeD Mo et al. (2020), which proposes to distill discriminator features for GAN finetuning. Our work, on the other hand, relies on an existing model for image classification.
Off-the-shelf depth guidance. GSN (DeVries et al., 2021) also concatenates depth maps as the 4-th channel of the discriminator's input, but they utilize ground truth depths, which are not available for large-scale datasets. DepthGAN (Shi et al., 2022) uses predictions from a depth estimator to guide the training of a 2D GAN. Exploiting monocular depth estimators for improving neural rendering was also explored in concurrent work (Yu et al., 2022), however, their goal is just geometry reconstruction. The core characteristic of our approach is taking the depth estimator imprecision into account by training a depth adaptor module to mitigate it (see §3.2).
# 3 METHOD
We build our generator on top of EpiGRAF (Skorokhodov et al., 2022) since it is fast to train, achieves reasonable image quality and does not need a 2D upsampler, relying on patch-wise training instead (Schwarz et al., 2020). Given a random latent code $z$, our generator G produces a tri-plane representation for the scene. A shallow 2-layer MLP then predicts RGB color and density $\sigma$ values from an interpolated feature vector at a 3D coordinate, and images and depths are volumetrically rendered (Mildenhall et al., 2020) at any given camera position. Differently from prior works (Chan et al., 2021; Niemeyer & Geiger, 2021b) that utilize a fixed camera distribution, we sample the camera from a trainable camera generator C (see §3.1). We render depth and process it via the depth adaptor (see §3.2), bridging the domains of rendered and estimated depth maps (Miangoleh et al., 2021). Our discriminator D follows the architecture of StyleGAN2 (Karras et al., 2020a), additionally taking either the adapted or the estimated depth as the 4-th channel. To further improve the image fidelity, we propose a simple knowledge distillation technique that enriches D with external knowledge obtained from ResNet (He et al., 2016) (see §3.3). The overall model architecture is shown in Fig. 2.
# 3.1 LEARNABLE "BALL-IN-SPHERE" CAMERA DISTRIBUTION
Limitations of Existing Camera Parameterization. The camera parameterization of existing 3D generators follows an overly simplified distribution — its position is sampled on a fixed-radius sphere with fixed intrinsics, and the camera always points to $(0,0,0)$ . This parametrization has only two degrees of freedom: pitch and yaw ( $\varphi_{\mathrm{pos}}$ in Fig. 3 (a)), implicitly assuming that all the objects could be centered, rotated and scaled with respect to some canonical alignment. However, natural in-the-wild 3D scenes are inherently non-alignable: they could consist of multiple objects, objects might have drastically different shapes and articulation, or they could even be represented only as volumes (like smoke). This makes the traditional camera conventions ill-posed for such data.
Learnable "Ball-in-Sphere" Camera Distribution. We introduce a new camera parametrization which we call "Ball-in-Sphere". Contrary to the standard one, it has four additional degrees of freedom: the field of view $\varphi_{\mathrm{fov}}$ , and pitch, yaw, and radius of the inner sphere, specifying the look-at point within the outer sphere ( $\varphi_{\mathrm{lookat}}$ in Fig. 3 (b)). Combining with the standard parameters on the outer sphere, our camera parametrization has six degrees of freedom $\varphi = [\varphi_{\mathrm{pos}} \| \varphi_{\mathrm{fov}} \| \varphi_{\mathrm{lookat}}]$ , where $\|\cdot\|$ denotes concatenation.
Instead of manually defining camera distributions, we learn the camera distribution during training for each dataset. In particular, we train the camera generator network $C$ that takes camera parameters sampled from a sufficiently wide camera prior $\varphi'$ and produces new camera parameters $\varphi$ . For a class conditional dataset, such as ImageNet where scenes have significantly different geometry, we additionally condition this network on the class label $c$ and the latent code $z$ , i.e. $\varphi = C(\varphi', z, c)$ . For a single category dataset we use $\varphi = C(\varphi', z)$ .
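A minimal sketch of such a camera generator is shown below (the module name, MLP depth, and hidden sizes are assumptions; the paper does not specify the exact architecture of C):

```python
import torch
import torch.nn as nn

class CameraGenerator(nn.Module):
    """Sketch of the camera generator C for the class-conditional setup.

    Maps 6 prior camera parameters phi' = [pos (2), fov (1), lookat (3)] to
    posterior parameters phi, conditioned on the latent code z and class c.
    Hidden sizes and depth are assumptions, not values from the paper.
    """
    def __init__(self, z_dim=512, num_classes=1000, hidden=256):
        super().__init__()
        self.class_emb = nn.Embedding(num_classes, hidden)
        self.net = nn.Sequential(
            nn.Linear(6 + z_dim + hidden, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, 6),
        )

    def forward(self, phi_prior, z, c):
        h = torch.cat([phi_prior, z, self.class_emb(c)], dim=-1)
        return self.net(h)  # posterior camera parameters phi

# For a single-category dataset, one would drop the class embedding and
# condition only on (phi_prior, z).
```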
Camera Gradient Penalty. To the best of our knowledge, CAMPARI (Niemeyer & Geiger, 2021a) is the only work which also learns the camera distribution. It samples a set of camera parameters from a wide distribution and passes it to a shallow neural network, which produces a residual $\Delta \varphi = \varphi -\varphi^{\prime}$ for these parameters. However, we observed that such a regularization is too weak for the complex datasets we explore in this work, and leads to a distribution collapse (see Fig. 7). Note, that a similar observation was also made by Gu et al. (2022).

(a) Standard camera model


(b) Our camera model

Figure 3: Camera model. (a) Conventional camera model is designed for aligned datasets and uses just 2 degrees of freedom. (b) The proposed "Ball-in-Sphere" parametrization has 4 additional degrees of freedom: field of view and the look-at position. Variable parameters are shown in blue.


Figure 4: Depth adapter. Left: An example of a real image with its depth estimated by LeReS (Miangoleh et al., 2021). Note that the estimated depth has several artifacts. For example, the human legs are closer than the tail, eyes are spaced unrealistically, and far-away grass is predicted to be close. Middle: depth adapter meant to bridge the domains of predicted and NeRF-rendered depth. Right: a generated image with its adapted depth maps obtained from different layers of the adapter.




To prevent the camera generator from producing collapsed camera parameters, we seek a new regularization strategy. Ideally, it should prevent constant solutions, while at the same time, reducing the Lipschitz constant for C, which is shown to be crucial for stable training of generators (Odena et al., 2018). We can achieve both if we push the derivatives of the predicted camera parameters with respect to the prior camera parameters to either one or minus one, arriving at the following regularization term:

$$
\mathcal{L}_{\varphi_i} = \left| \frac{\partial \varphi_i}{\partial \varphi_i^{\prime}} \right| + \left| \frac{\partial \varphi_i}{\partial \varphi_i^{\prime}} \right|^{-1}, \tag{1}
$$
where $\varphi_i' \in \varphi'$ is the camera sampled from the prior distribution and $\varphi_i \in \varphi$ is produced by the camera generator. We refer to this loss as Camera Gradient Penalty. Its first part prevents rapid camera changes, facilitating stable optimization, while the second part avoids collapsed posteriors.
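A minimal autograd sketch of this penalty (the function name and the small eps stabilizer are ours; only the diagonal Jacobian entries are used, matching the per-parameter form of Eq. 1):

```python
import torch

def camera_gradient_penalty(camera_gen, phi_prior, z, c, eps=1e-6):
    """Camera Gradient Penalty (Eq. 1), evaluated per camera parameter.

    Penalizes |d phi_i / d phi'_i| + |d phi_i / d phi'_i|^{-1}, pushing the
    diagonal Jacobian entries of the camera generator towards +/-1.
    Returns a (6,) tensor with the batch-averaged penalty per parameter.
    """
    phi_prior = phi_prior.detach().requires_grad_(True)
    phi = camera_gen(phi_prior, z, c)                      # (B, 6)
    penalties = []
    for i in range(phi.shape[1]):
        grad = torch.autograd.grad(phi[:, i].sum(), phi_prior,
                                   create_graph=True)[0][:, i]
        jac = grad.abs() + eps                             # avoid division by zero
        penalties.append((jac + 1.0 / jac).mean())
    return torch.stack(penalties)
```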
# 3.2 ADVERSARIAL DEPTH SUPERVISION
To instill a 3D bias into our model, we develop a strategy of using depth maps predicted by an off-the-shelf estimator E (Miangoleh et al., 2021), for its advantages of being generic and readily applicable for many object categories. The main idea is concatenating a depth map as the 4-th channel of the RGB as the input of the discriminator. The fake depth maps in this case is obtained with the help of neural rendering, while the real depth maps are estimated using monocular depth estimator E. However, naively utilizing the depth from E leads to training divergence. This happens because E could only produce relative depth, not metric depth. Moreover, monocular depth estimators are still not perfect, they produce noisy artifacts, ignore high-frequency details, and make prediction mistakes. Thus, we devise a mechanism that allows utilization of the imperfect depth maps. The central part of this mechanism is a learnable depth adaptor A, that should transform and augment the depth map obtained with neural rendering to look like a depth map from E.
More specifically, we first render raw depths $d$ from NeRF via volumetric rendering:

$$
\boldsymbol{d} = \int_{t_n}^{t_f} T(t)\,\sigma(r(t))\,t\,dt, \tag{2}
$$
where $t_n, t_f \in \mathbb{R}$ are near/far planes, $T(t)$ is accumulated transmittance, and $r(t)$ is a ray. Raw depth is shifted and scaled from the range of $[t_n, t_f]$ into $[-1, 1]$ to obtain normalized depth $\bar{d}$ :
$$
\bar{\boldsymbol{d}} = 2 \cdot \frac{\boldsymbol{d} - \left(t_n + t_f + b\right)/2}{t_f - t_n - b}, \tag{3}
$$
where $b \in [0, (t_n + t_f) / 2]$ is an additional learnable shift needed to account for the empty space in the front of the camera. Real depths are normalized into the $[-1, 1]$ range directly.
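For reference, a sketch of how Eqs. (2)-(3) translate into the usual NeRF quadrature (variable names and the exact sampling scheme are assumptions):

```python
import torch

def render_and_normalize_depth(densities, ts, t_n, t_f, b):
    """Expected ray termination depth (Eq. 2) plus normalization (Eq. 3).

    densities: (B, R, S) volume densities sigma along each ray.
    ts: (S,) sample locations in [t_n, t_f]; b is the learnable shift.
    """
    deltas = ts[1:] - ts[:-1]
    deltas = torch.cat([deltas, deltas[-1:]])                      # (S,)
    alpha = 1.0 - torch.exp(-densities * deltas)                   # per-sample opacity
    T = torch.cumprod(1.0 - alpha + 1e-10, dim=-1)                 # accumulated transmittance
    T = torch.cat([torch.ones_like(T[..., :1]), T[..., :-1]], dim=-1)
    weights = T * alpha
    d = (weights * ts).sum(dim=-1)                                 # raw depth, Eq. (2)
    # Eq. (3): shift and scale into [-1, 1]; b accounts for empty space in front of the camera.
    d_bar = 2.0 * (d - (t_n + t_f + b) / 2.0) / (t_f - t_n - b)
    return d, d_bar
```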
Learnable Depth Adaptor. While $\bar{d}$ has the same range as $d_r$ , it is not suitable for adversarial supervision directly due to the imprecision of E: G would be trained to simulate all its prediction artifacts. To overcome this issue, we introduce a depth adaptor A to adapt the depth map $d_a = \mathsf{A}(\bar{d}) \in \mathbb{R}^{h \times w}$ , where $h \times w$ is a number of sampled pixels. This depth (fake $d_a$ or real $d_r$ ) is concatenated with the RGB input and passed to D.
The depth adaptor A models artifacts produced by E, so that the discriminator should focus only on the relevant high-level geometry. However, a too powerful A would be able to fake the depth completely, and G will not learn the geometry. This is why we structure A as just a 3-layer convolutional network (see Fig. 4). Each layer produces a separate depth map with a different level of adaptation: $\pmb{d}_a^{(1)}, \pmb{d}_a^{(2)}$ and $\pmb{d}_a^{(3)}$. The final adapted depth $\pmb{d}_a$ is randomly selected from the set of $\{\bar{d}, \pmb{d}_a^{(1)}, \pmb{d}_a^{(2)}, \pmb{d}_a^{(3)}\}$. Such a design can effectively learn good geometry while alleviating overfitting. For example, when D receives the original depth map $\bar{d}$ as input, it provides to G a strong signal for learning the geometry. And passing an adapted depth map $\pmb{d}_a^{(i)}$ to D allows G to simulate the imprecision artifacts of the depth estimator without degrading its original depth map $\bar{d}$.


Figure 5: Qualitative multi-view comparisons. Left: samples from the models trained on single-category datasets with articulated geometry. Two views are shown for each sample. Middle: ablations following Tab. 1, where we change the probability of using the normalized rendered depth $P(\bar{d})$. EG3D, EpiGRAF, and $P(\bar{d}) = 0$ do not render realistic side views, due to the underlying flat geometry. Our full model instead generates realistic high-quality views on all the datasets. Right: randomly sampled real images. Zoom-in for greater detail.
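A minimal sketch of the adaptor described above (layer widths and kernel sizes are assumptions; `p_raw` corresponds to $P(\bar{d})$, set to 0.5 in §3.4):

```python
import random
import torch
import torch.nn as nn

class DepthAdaptor(nn.Module):
    """Sketch of the 3-layer depth adaptor A (Sec. 3.2).

    Each layer yields an adapted depth map d_a^(i); the map passed to the
    discriminator is drawn from {d_bar, d_a^(1), d_a^(2), d_a^(3)}, with the
    raw rendered depth d_bar used with probability p_raw.
    """
    def __init__(self, hidden=32, p_raw=0.5):
        super().__init__()
        self.p_raw = p_raw
        self.conv1 = nn.Conv2d(1, hidden, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(hidden, hidden, kernel_size=3, padding=1)
        self.conv3 = nn.Conv2d(hidden, hidden, kernel_size=3, padding=1)
        self.heads = nn.ModuleList([nn.Conv2d(hidden, 1, kernel_size=1) for _ in range(3)])

    def forward(self, d_bar):                          # d_bar: (B, 1, H, W), in [-1, 1]
        h1 = torch.relu(self.conv1(d_bar))
        h2 = torch.relu(self.conv2(h1))
        h3 = torch.relu(self.conv3(h2))
        adapted = [head(h) for head, h in zip(self.heads, (h1, h2, h3))]
        if random.random() < self.p_raw:
            return d_bar                               # expose the raw geometry to D
        return adapted[random.randrange(3)]            # one of d_a^(1..3)
```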
# 3.3 KNOWLEDGE DISTILLATION FOR DISCRIMINATOR
Knowledge from pretrained classification networks was shown to improve training stability and generation quality in 2D GANs (Sauer et al., 2021; Kumari et al., 2022; Sauer et al., 2022; Casanova et al., 2021). A popular solution proposed by Sauer et al. (2021; 2022) is to use an off-the-shelf model as a discriminator while freezing most of its weights. Unfortunately, this technique is not applicable in our scenario since we modify the architecture of the discriminator by adding an additional depth input (see §3.2) and condition on the parameters of the patch similarly to EpiGRAF (Skorokhodov et al., 2022). Thus, we devise an alternative technique that can work with arbitrary architectures of the discriminator. Specifically, for each real sample, we obtain two feature representations: $e$ from the pretrained ResNet (He et al., 2016) network and $\hat{e}$ extracted from the final representation of our discriminator D. Our loss simply pushes $\hat{e}$ to $e$ as follows:
$$
\mathcal{L}_{\mathrm{dist}} = \left\| \boldsymbol{e} - \hat{\boldsymbol{e}} \right\|_2^2. \tag{4}
$$
$\mathcal{L}_{\mathrm{dist}}$ can effectively distill knowledge from the pretrained ResNet (He et al., 2016) into our D.
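A hedged sketch of this distillation loss with a frozen torchvision ResNet50 (the extra head projecting D's features to the ResNet feature size is our assumption; the `weights` argument assumes torchvision >= 0.13):

```python
import torch
import torchvision

# Frozen ImageNet-pretrained ResNet50, truncated before the classifier head.
resnet = torchvision.models.resnet50(weights="IMAGENET1K_V1")
feature_extractor = torch.nn.Sequential(*list(resnet.children())[:-1]).eval()
for p in feature_extractor.parameters():
    p.requires_grad_(False)

def distillation_loss(e_hat, real_images):
    """Eq. (4): squared L2 distance between the discriminator's feature
    prediction e_hat and the frozen ResNet features e of the real images."""
    with torch.no_grad():
        e = feature_extractor(real_images).flatten(1)   # (B, 2048)
    return ((e - e_hat) ** 2).sum(dim=1).mean()
```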
# 3.4 TRAINING
The overall loss for generator G consists of two parts: adversarial loss and Camera Gradient Penalty:
$$
\mathcal{L}_{\mathrm{G}} = \mathcal{L}_{\mathrm{adv}} + \sum_{\varphi_i \in \varphi} \lambda_{\varphi_i} \mathcal{L}_{\varphi_i}, \tag{5}
$$
where $\mathcal{L}_{\mathrm{adv}}$ is the non-saturating loss (Goodfellow et al., 2014). We observe that a diverse distribution for the camera origin is most important for learning meaningful geometry, but it is also most prone to degrade to a constant solution. Therefore, we set $\lambda_{\varphi_i} = 0.3$ for $\varphi_{\mathrm{pos}}$, while setting $\lambda_{\varphi_i} = 0.03$ for $\varphi_{\mathrm{fov}}$ and $\lambda_{\varphi_i} = 0.003$ for $\varphi_{\mathrm{lookat}}$. The loss for discriminator D, on the other hand, consists of three parts: adversarial loss, knowledge distillation, and $\mathcal{R}_1$ gradient penalty (Mescheder et al., 2018):
$$
\mathcal{L}_{\mathrm{D}} = \mathcal{L}_{\mathrm{adv}} + \lambda_{\mathrm{dist}} \mathcal{L}_{\mathrm{dist}} + \lambda_{r} \mathcal{R}_1. \tag{6}
$$
We use the same optimizer and hyper-parameters as EpiGRAF. We observe that, for the depth adaptor, sampling the adapted depth maps with equal probability is not always beneficial, and found that using $P(\bar{d}) = 0.5$ leads to better geometry. For additional details, see Appx B.
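As a concrete reference, a hedged sketch of how the two objectives could be assembled (the softplus form of the non-saturating loss is standard; the grouping of the 6 DoF into position/fov/look-at indices is an assumption, and $\lambda_{\mathrm{dist}}$, $\lambda_{r}$ are placeholders rather than values from the paper):

```python
import torch.nn.functional as F

def generator_loss(fake_logits, camera_penalty):
    """Eq. (5): non-saturating adversarial loss + weighted Camera Gradient Penalty.

    camera_penalty: (6,) tensor of per-parameter penalties; indices 0-1 are the
    camera position, 2 the field of view, 3-5 the look-at point (assumed order).
    """
    adv = F.softplus(-fake_logits).mean()
    reg = (0.3 * camera_penalty[0:2].sum()
           + 0.03 * camera_penalty[2]
           + 0.003 * camera_penalty[3:6].sum())
    return adv + reg

def discriminator_loss(real_logits, fake_logits, dist_loss, r1_penalty,
                       lambda_dist=1.0, lambda_r1=10.0):
    """Eq. (6): adversarial loss + knowledge distillation + R1 gradient penalty.

    lambda_dist and lambda_r1 are placeholders, not values from the paper.
    """
    adv = F.softplus(fake_logits).mean() + F.softplus(-real_logits).mean()
    return adv + lambda_dist * dist_loss + lambda_r1 * r1_penalty
```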
# 4 EXPERIMENTAL RESULTS
Datasets. In our experiments, we use 4 non-aligned datasets: SDIP Dogs (Mokady et al., 2022), SDIP Elephants (Mokady et al., 2022), LSUN Horses (Yu et al., 2015), and ImageNet (Deng et al., 2009). The first three are single-category datasets that contain objects with complex articulated geometry, making them challenging for standard 3D generators. We found it useful to remove outlier images from SDIP Dogs and LSUN Horses via instance selection (DeVries et al., 2020), reducing their size to $40\mathrm{K}$ samples each. We refer to the filtered versions of these datasets as SDIP $\mathrm{Dogs}_{40\mathrm{k}}$ and LSUN $\mathrm{Horses}_{40\mathrm{k}}$, respectively. We then validate our method on ImageNet, a real-world, multi-category dataset containing 1,000 diverse object classes, with more than 1,000 images per category. All the 3D generators (including the baselines) use the same filtering strategy for ImageNet, where $2/3$ of its images are filtered out. Note that all the metrics are always measured on the full ImageNet.
Evaluation. We rely on FID (Heusel et al., 2017) to measure the image quality and $\mathrm{FID}_{2\mathrm{k}}$ , which is computed on 2,048 images instead of $50\mathrm{k}$ (as for FID) for efficiency. For ImageNet, we additionally compute Inception Score (IS) (Salimans et al., 2016). Note that while we train on the filtered ImageNet, we always compute the metrics on the full one. There is no established protocol to evaluate the geometry quality of 3D generators in general, but the state-of-the-art ones are tri-plane (Chan et al., 2022; Sun et al., 2022) or MPI-based Zhao et al. (2022), and we observed that it is possible to quantify their most popular geometry failure case: flatness of the shapes. For this, we propose Non-Flatness Score (NFS) which is computed as the average entropy of the normalized depth maps histograms. We depict its intuition in Fig 6 and provide the
Figure 6: NFS for repetitive and diverse geometry with depth histograms.
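Since NFS is only described informally here, the following sketch shows one way to compute it as the mean entropy of per-image depth histograms (the bin count and the assumed [-1, 1] depth range are our choices, not the paper's):

```python
import torch

def non_flatness_score(depth_maps, num_bins=64, eps=1e-8):
    """Sketch of the Non-Flatness Score: average entropy of the normalized
    depth-map histograms. Flat or repetitive geometry concentrates depth in a
    few bins and therefore yields low entropy."""
    scores = []
    for d in depth_maps:                                   # d: (H, W), normalized depth
        hist = torch.histc(d.flatten(), bins=num_bins, min=-1.0, max=1.0)
        p = hist / (hist.sum() + eps)                      # empirical distribution over bins
        entropy = -(p * torch.log(p + eps)).sum()
        scores.append(entropy)
    return torch.stack(scores).mean()
```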
# 4.1 3D GENERATION FOR SINGLE CATEGORY DATASETS
We show quantitative results for single category datasets in Tab. 1a, where we compare with EG3D (Chan et al., 2022) and EpiGRAF (Skorokhodov et al., 2022). EG3D was mainly designed for FFHQ (Karras et al., 2019) and uses the true camera poses inferred from real images as the camera distribution for the generator. But in our scenario, we do not have any knowledge about the true camera distribution, that is why to stay as close as possible to the setup EG3D was designed for, we use normal distribution for rotation and elevation angles with the same standard deviations as in FFHQ, which are equal to $\sigma_{\mathrm{yaw}} = 0.3$ and $\sigma_{\mathrm{pitch}} = 0.155$ , respectively. Also, to learn better geometry, we additionally trained the baselines with a twice wider camera distribution: $\sigma_{\mathrm{yaw}} = 0.6$ and $\sigma_{\mathrm{pitch}} = 0.3$ . While it indeed helped to reduce flatness, it also worsened the image quality: up to $500\%$ as measured by $\mathrm{FID}_{2\mathrm{k}}$ . Our model shows substantially better (at least $2\times$ than EG3D, slightly worse than StyleGAN2) $\mathrm{FID}_{2\mathrm{k}}$ and greater NFS on all the datasets. Low NFS indicates flat or repetitive geometry impairing the ability of the model to generate realistic side views. Indeed, as shown in Fig. 5 (left), both EG3D (with the default camera distribution) and EpiGRAF struggle to generate side views, while our method (3DGP) renders realistic side views on all three datasets.
Adversarial Depth Supervision. We evaluate the proposed Adversarial Depth Supervision (ADS) and the depth adaptor A. Its only hyperparameter is the probability $P(\bar{d})$ of using the non-adapted depth (see §3.2), which we ablate in Tab. 1b. While the FID scores are only slightly affected by varying $P(\bar{d})$, we see a substantial difference in the Non-Flatness Score. We first verify that NFS is the worst without ADS, indicating the lack of a 3D bias. When $P(\bar{d}) = 0$, the discriminator is never presented with the rendered depth $\bar{d}$, while the adaptor learns to fake the depth,
Table 1: Comparisons on SDIP $\text{Dogs}_{40\mathrm{k}}$ , SDIP Elephants, and LSUN $\text{Horses}_{40\mathrm{k}}$ . (a) includes EG3D (with the standard and the wider camera range), EpiGRAF, and 3DGP. For completeness, we also provide 2D baseline StyleGAN2 (with KD). (b) includes the ablations of the proposed contributions. We report the total training cost for prior works, our model, and our proposed contributions.
(a) Comparison of EG3D, EpiGRAF and 3DGP.
<table><tr><td rowspan="2">Model</td><td colspan="2">SDIP \(Dogs_{40k}\)</td><td colspan="2">SDIP Elephants</td><td colspan="2">LSUN \(Horses_{40k}\)</td><td rowspan="2">Training cost (A100 days)</td></tr><tr><td>\(\text{FID}_{2k} \downarrow\)</td><td>\(\text{NFS} \uparrow\)</td><td>\(\text{FID}_{2k} \downarrow\)</td><td>\(\text{NFS} \uparrow\)</td><td>\(\text{FID}_{2k} \downarrow\)</td><td>\(\text{NFS} \uparrow\)</td></tr><tr><td>EG3D</td><td>16.2</td><td>11.91</td><td>4.78</td><td>2.59</td><td>3.12</td><td>13.34</td><td>3.7</td></tr><tr><td>+ wide camera</td><td>21.1</td><td>24.44</td><td>5.76</td><td>17.88</td><td>19.44</td><td>25.34</td><td>3.7</td></tr><tr><td>EpiGRAF</td><td>25.6</td><td>3.53</td><td>8.24</td><td>12.9</td><td>6.45</td><td>9.73</td><td>2.3</td></tr><tr><td>3DGP (ours)</td><td>8.74</td><td>34.35</td><td>5.79</td><td>32.8</td><td>4.86</td><td>30.4</td><td>2.6</td></tr><tr><td>StyleGAN2 (with KD)</td><td>6.24</td><td>N/A</td><td>3.94</td><td>N/A</td><td>2.57</td><td>N/A</td><td>1.5</td></tr></table>
(b) Impact of Adversarial Depth Supervision (ADS).
<table><tr><td rowspan="2">Model</td><td colspan="2">SDIP \(Dogs_{40k}\)</td><td colspan="2">SDIP Elephants</td><td colspan="2">LSUN \(Horses_{40k}\)</td><td rowspan="2">Training cost (A100 days)</td></tr><tr><td>\(\text{FID}_{2k} \downarrow\)</td><td>\(\text{NFS} \uparrow\)</td><td>\(\text{FID}_{2k} \downarrow\)</td><td>\(\text{NFS} \uparrow\)</td><td>\(\text{FID}_{2k} \downarrow\)</td><td>\(\text{NFS} \uparrow\)</td></tr><tr><td>Bare 3DGP (w/o ADS, w/o C)</td><td>8.59</td><td>1.42</td><td>7.46</td><td>9.52</td><td>3.29</td><td>8.04</td><td>2.3</td></tr><tr><td>+ ADS, P(d) = 0.0</td><td>8.13</td><td>3.14</td><td>5.69</td><td>1.97</td><td>3.41</td><td>1.24</td><td>2.6</td></tr><tr><td>+ ADS, P(d) = 0.25</td><td>9.57</td><td>33.21</td><td>6.26</td><td>33.5</td><td>4.33</td><td>32.68</td><td>2.6</td></tr><tr><td>+ ADS, P(d) = 0.5</td><td>9.25</td><td>36.9</td><td>7.60</td><td>30.7</td><td>5.27</td><td>32.24</td><td>2.6</td></tr><tr><td>+ ADS, P(d) = 1.0</td><td>12.2</td><td>27.2</td><td>12.1</td><td>26.0</td><td>8.24</td><td>27.7</td><td>2.5</td></tr></table>

Figure 7: Comparisons of regularization strategies. Left: selected generic and wide prior for each of the 6 DoFs of our camera model (top row). Without any regularization (no reg) or with the residual-based model (residual) as in Niemeyer & Geiger (2021a), the camera generator collapses to highly concentrated distributions. In contrast, the proposed regularization (grad penalty) leads to a wider posterior. Right: random samples for each strategy. For (no reg) and (residual), no meaningful geometry is learned; only our method (grad penalty) leads to good geometry.
leading to flat geometry. When $P(\bar{d}) = 1$, the adaptor is never used, allowing D to easily distinguish the generated depth from the estimated depth, since there is a large domain gap between them (see Fig. 4), which worsens the FID scores. The best results are achieved with $P(\bar{d}) = 0.5$, which can be visually verified in Fig. 5 (middle). The "bare" 3DGP and the variants with $P(\bar{d}) = 0$ and $P(\bar{d}) = 1$ are unable to render side views, while the model trained with $P(\bar{d}) = 0.5$ features the overall best geometry and side views.
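For illustration, a minimal sketch of the $P(\bar{d})$ switch; the `depth_adaptor` module and the per-batch sampling are placeholders rather than the released implementation.

```python
import torch

def depth_for_discriminator(rendered_depth, depth_adaptor, p_raw=0.5):
    """With probability p_raw, feed the raw rendered depth to D; otherwise feed the adapted depth."""
    if torch.rand(()).item() < p_raw:
        return rendered_depth              # non-adapted depth: forces G to produce plausible geometry
    return depth_adaptor(rendered_depth)   # adapted depth: bridges the domain gap to the estimated depth

# p_raw = 0.0: D never sees raw depth, so the adaptor can fake it (flat geometry).
# p_raw = 1.0: D always sees raw depth; the domain gap to estimated depth is large (worse FID).
# p_raw = 0.5: the trade-off used in the final model.
```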
Knowledge Distillation. Here we provide further insights into discriminator D with our knowledge distillation strategy (see §3.3). We find that knowledge distillation provides additional stability for adversarial training, along with a significant improvement in FID, which can be observed by comparing the results of EpiGRAF in Tab. 1a with those of Bare 3DGP in Tab. 1b. Additionally, we compare different knowledge distillation strategies in Appx C. However, as noted by Parmar et al. (2021), strategies that utilize additional classification networks may provide a significant boost in FID without a corresponding improvement in visual quality.
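For illustration, a minimal sketch of an $\mathcal{L}_2$ feature-distillation term of this kind; the choice of the penultimate ResNet50 layer and the assumption that D exposes a matching feature head are ours.

```python
import torch
import torch.nn.functional as F
import torchvision

# Frozen feature extractor F: ImageNet-pretrained ResNet50 without the classification head.
resnet = torchvision.models.resnet50(weights="IMAGENET1K_V1")
feature_extractor = torch.nn.Sequential(*list(resnet.children())[:-1]).eval()
for p in feature_extractor.parameters():
    p.requires_grad_(False)

def distillation_loss(disc_features, real_images, lambda_dist=1.0):
    """L2 distance between D's predicted features and frozen ResNet50 features on real images."""
    with torch.no_grad():
        target = feature_extractor(real_images).flatten(1)  # (B, 2048)
    return lambda_dist * F.mse_loss(disc_features, target)
```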
"Ball-in-Sphere" Camera Distribution. In Fig. 7, we analyze different strategies for learning the camera distribution: sampling the camera $\varphi$ from the prior $\varphi'$ without learning, predicting the residual $\varphi - \varphi'$ , and using our proposed camera generator C with Camera Gradient Penalty. For the first two cases, the learned distributions are nearly deterministic. Not surprisingly, no meaningful side views can be generated as geometry becomes flat. In contrast, our regularization provides
Table 2: Comparison between different generators on ImageNet ${256}^{2}$ (note that 3DGP relies on additional information in the form of depth supervision). Training cost is measured in A100 days.
<table><tr><td>Method</td><td>Synthesis type</td><td>FID ↓</td><td>IS ↑</td><td>NFS↑</td><td>A100 days ↓</td></tr><tr><td>BigGAN (Brock et al., 2018)</td><td>2D</td><td>8.7</td><td>142.3</td><td>N/A</td><td>60</td></tr><tr><td>StyleGAN-XL (Sauer et al., 2022)</td><td>2D</td><td>2.30</td><td>265.1</td><td>N/A</td><td>163+</td></tr><tr><td>ADM (Dhariwal & Nichol, 2021)</td><td>2D</td><td>4.59</td><td>186.7</td><td>N/A</td><td>458</td></tr><tr><td>EG3D (Chan et al., 2022)</td><td>3D-aware</td><td>26.7</td><td>61.4</td><td>3.70</td><td>18.7</td></tr><tr><td>+ wide camera</td><td>3D-aware</td><td>25.6</td><td>57.3</td><td>9.83</td><td>18.7</td></tr><tr><td>VolumeGAN (Xu et al., 2021)</td><td>3D-aware</td><td>77.68</td><td>19.56</td><td>22.69</td><td>15.17</td></tr><tr><td>StyleNeRF (Gu et al., 2022)</td><td>3D-aware</td><td>56.54</td><td>21.80</td><td>16.04</td><td>20.55</td></tr><tr><td>StyleGAN-XL + 3DPhoto (Shih et al., 2020)</td><td>3D-aware</td><td>116.9</td><td>9.47</td><td>N/A</td><td>165+</td></tr><tr><td>EpiGRAF (Skorokhodov et al., 2022)</td><td>3D</td><td>47.56</td><td>26.68</td><td>3.93</td><td>15.9</td></tr><tr><td>+ wide camera</td><td>3D</td><td>58.17</td><td>20.36</td><td>12.89</td><td>15.9</td></tr><tr><td>3DGP (ours)</td><td>3D</td><td>19.71</td><td>124.8</td><td>18.49</td><td>28</td></tr></table>
sufficient regularization to C, enabling it to converge to a sufficiently wide posterior, which results in valid geometry and realistic side views. After the submission, we found a simpler and more flexible camera regularization strategy based on entropy maximization, which we discuss in Appx J.
# 4.2 3D SYNTHESIS ON IMAGENET
ImageNet (Deng et al., 2009) is significantly more difficult than single-category datasets. Following prior work, we trained all the methods in the conditional generation setting (Brock et al., 2018). The quantitative results are presented in Tab. 2. We also report the results of state-of-the-art 2D generators for reference: as expected, they show better FID and IS than 3D generators, since they do not need to learn geometry, are trained with larger compute, and have been studied for much longer. Despite our best efforts to find a reasonable camera distribution, both EG3D and EpiGRAF produce flat or repetitive geometry, while 3DGP produces geometry with rich details (see Fig. 1). StyleNeRF (Gu et al., 2022) and VolumeGAN (Xu et al., 2021), trained for conditional ImageNet generation with the narrow camera distribution, substantially under-performed in terms of visual quality. We hypothesize that the reason lies in their MLP/voxel-based NeRF backbones: they have a better 3D prior than tri-planes, but are considerably more expensive, which, in turn, requires sacrifices in terms of the generator's expressivity.
Training a 3D generator from scratch is not the only way to achieve 3D-aware synthesis: one can lift an existing 2D generator into 3D using techniques from 3D photo "inpainting". To test this idea, we generate 10k images with StyleGAN-XL (the current SotA on ImageNet) and then run 3DPhoto (Shih et al., 2020) on them. This method first lifts a photo into 3D via a pre-trained depth estimator and then inpaints the holes with a separately trained GAN model to generate novel views. It works well for negligible camera variations, but starts diverging when the camera moves by more than $10^{\circ}$. In Tab. 2, we report its FID/IS scores with the narrow camera distribution: yaw $\sim \mathcal{N}(0, 0.3)$ and pitch $\sim \mathcal{N}(\pi /2, 0.15)$. The details are in Appx I.
# 5 CONCLUSION
In this work, we present the first 3D synthesis framework for in-the-wild, multi-category datasets, such as ImageNet. We demonstrate how to utilize generic priors in the form of (imprecise) monocular depth and latent feature representations to improve the visual quality and guide the geometry. Moreover, we propose the new "Ball-in-Sphere" camera model with a novel regularization scheme that enables learning a meaningful camera distribution. On the other hand, our work still has several limitations, such as background sticking, lower visual quality compared to 2D generators, and no reliable quantitative measure of the generated geometry. Additional discussion is provided in Appx A. This project consumed $\approx 12$ NVidia A100 GPU-years in total.
# 6 REPRODUCIBILITY STATEMENT
Most importantly, we will release 1) the source code and the checkpoints of our generator as a separate GitHub repository; and 2) the fully pre-processed datasets used in our work (with the corresponding extracted depth maps). In §3, §4, and Appx B, as well as in the figures throughout the text, we provide a complete list of the architecture and optimization details needed to reproduce our results. We are also open to providing any further details on our work in public and/or private correspondence.
# 7 ETHICS STATEMENT
There are two main ethical concerns around deep learning projects on synthesis: generation of fake content to mislead people (e.g., fake news or deepfakes) and potential licensing, authorship, and distribution abuse. This was recently discussed in the research community in the context of 2D image generation (by Stable Diffusion (Rombach et al., 2022) and DALL-E (Ramesh et al., 2022)) and code generation (by GitHub Copilot). While the current synthesis quality of our generator would still need to improve to fool an attentive human observer, we encourage the research community to discuss ideas and mechanisms for preventing abuse in the future.
# REFERENCES
Sherwin Bahmani, Jeong Joon Park, Despoina Paschalidou, Hao Tang, Gordon Wetzstein, Leonidas Guibas, Luc Van Gool, and Radu Timofte. 3d-aware video generation. arXiv preprint arXiv:2206.14797, 2022.
Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale gan training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096, 2018.
Arantxa Casanova, Marlene Careil, Jakob Verbeek, Michal Drozdal, and Adriana Romero-Soriano. Instance-conditioned gan. In Proceedings of the Neural Information Processing Systems Conference, 2021.
Eric R Chan, Marco Monteiro, Petr Kellnhofer, Jiajun Wu, and Gordon Wetzstein. pi-gan: Periodic implicit generative adversarial networks for 3d-aware image synthesis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2021.
Eric R. Chan, Connor Z. Lin, Matthew A. Chan, Koki Nagano, Boxiao Pan, Shalini De Mello, Orazio Gallo, Leonidas Guibas, Jonathan Tremblay, Sameh Khamis, Tero Karras, and Gordon Wetzstein. Efficient geometry-aware 3D generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2022.
Angel X Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, et al. Shapenet: An information-rich 3d model repository. arXiv preprint arXiv:1512.03012, 2015.
Dawson-Haggerty et al. trimesh. https://github.com/mikedh/trimesh, 2019. URL https://trimsh.org/.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, 2009.
Yu Deng, Jiaolong Yang, Jianfeng Xiang, and Xin Tong. Gram: Generative radiance manifolds for 3d-aware image generation. In IEEE Computer Vision and Pattern Recognition, 2022.
Maximilian Denninger, Martin Sundermeyer, Dominik Winkelbauer, Youssef Zidan, Dmitry Olefir, Mohammad Elbadrawy, Ahsan Lodhi, and Harinandan Katam. Blenderproc. arXiv preprint arXiv:1911.01911, 2019.
Terrance DeVries, Michal Drozdal, and Graham W Taylor. Instance selection for gans. Advances in Neural Information Processing Systems, 2020.
Terrance DeVries, Miguel Angel Bautista, Nitish Srivastava, Graham W. Taylor, and Joshua M. Susskind. Unconstrained scene generation with locally conditioned radiance fields. arXiv, 2021.
Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems, 34:8780-8794, 2021.
Ainaz Eftekhar, Alexander Sax, Jitendra Malik, and Amir Zamir. Omnidata: A scalable pipeline for making multi-task mid-level vision datasets from 3d scans. In ICCV, 2021.
Rémi Flamary, Nicolas Courty, Alexandre Gramfort, Mokhtar Z. Alaya, Aurélie Boisbunon, Stanislas Chambon, Laetitia Chapel, Adrien Corenflos, Kilian Fatras, Nemo Fournier, Léo Gautheron, Nathalie T.H. Gayraud, Hicham Janati, Alain Rakotomamonjy, Ievgen Redko, Antoine Rolet, Antony Schutz, Vivien Seguy, Danica J. Sutherland, Romain Tavenard, Alexander Tong, and Titouan Vayer. Pot: Python optimal transport. Journal of Machine Learning Research, 22(78): 1-8, 2021. URL http://jmlr.org/papers/v22/20-451.html.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Proceedings of the Neural Information Processing Systems Conference, 2014.
Jiatao Gu, Lingjie Liu, Peng Wang, and Christian Theobalt. Stylenerf: A style-based 3d aware generator for high-resolution image synthesis. In International Conference on Machine Learning, 2022.
Richard Hartley and Andrew Zisserman. Multiple view geometry in computer vision. Cambridge university press, 2003.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In Proceedings of the IEEE international conference on computer vision, pp. 2961-2969, 2017.
Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Proceedings of the Neural Information Processing Systems Conference, 2017.
Geoffrey Hinton, Oriol Vinyals, Jeff Dean, et al. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2(7), 2015.
Yiwen Hua, Puneet Kohli, Pritish Uplavikar, Anand Ravi, Saravana Gunaseelan, Jason Orozco, and Edward Li. Holopix50k: A large-scale in-the-wild stereo image dataset. In CVPR Workshop on Computer Vision for Augmented and Virtual Reality, Seattle, WA, 2020., June 2020.
Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019.
Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, and Timo Aila. Training generative adversarial networks with limited data. arXiv preprint arXiv:2006.06676, 2020a.
Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2020b.
Tero Karras, Miika Aittala, Samuli Laine, Erik Härkönen, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Alias-free generative adversarial networks. arXiv preprint arXiv:2106.12423, 2021.
Mijeong Kim, Seonguk Seo, and Bohyung Han. Infonerf: Ray entropy minimization for few-shot neural volume rendering. In CVPR, 2022.
Taehyeon Kim, Jaehoon Oh, NakYil Kim, Sangwook Cho, and Se-Young Yun. Comparing kullback-leibler divergence and mean squared error loss in knowledge distillation. arXiv preprint arXiv:2105.08919, 2021.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Zhengfei Kuang, Kyle Olszewski, Menglei Chai, Zeng Huang, Panos Achlioptas, and Sergey Tulyakov. Neroic: Neural rendering of objects from online image collections. In *Special Interest Group on Computer Graphics and Interactive Techniques*, 2022.
Nupur Kumari, Richard Zhang, Eli Shechtman, and Jun-Yan Zhu. Ensembling off-the-shelf models for gan training. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2022.
Chen-Hsuan Lin, Wei-Chiu Ma, Antonio Torralba, and Simon Lucey. Barf: Bundle-adjusting neural radiance fields. In Proceedings of the IEEE International Conference on Computer Vision, 2021.
Quan Meng, Anpei Chen, Haimin Luo, Minye Wu, Hao Su, Lan Xu, Xuming He, and Jingyi Yu. Gnerf: Gan-based neural radiance field without posed camera. In Proceedings of the IEEE International Conference on Computer Vision, 2021.
Lars Mescheder, Andreas Geiger, and Sebastian Nowozin. Which training methods for gans do actually converge? In International conference on machine learning, 2018.
S Mahdi H Miangoleh, Sebastian Dille, Long Mai, Sylvain Paris, and Yagiz Aksoy. Boosting monocular depth estimation models to high-resolution via content-adaptive multi-resolution merging. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9685-9694, 2021.
Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In European conference on computer vision, 2020.
Sangwoo Mo, Minsu Cho, and Jinwoo Shin. Freeze the discriminator: a simple baseline for fine-tuning gans. arXiv preprint arXiv:2002.10964, 2020.
Ron Mokady, Michal Yarom, Omer Tov, Oran Lang, Daniel Cohen-Or, Tali Dekel, Michal Irani, and Inbar Mosseri. Self-distilled stylegan: Towards generation from internet photos. arXiv preprint arXiv:2202.12211, 2022.
Michael Niemeyer and Andreas Geiger. Campari: Camera-aware decomposed generative neural radiance fields. In 2021 International Conference on 3D Vision (3DV), 2021a.
Michael Niemeyer and Andreas Geiger. Giraffe: Representing scenes as compositional generative neural feature fields. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2021b.
Augustus Odena, Jacob Buckman, Catherine Olsson, Tom Brown, Christopher Olah, Colin Raffel, and Ian Goodfellow. Is generator conditioning causally related to gan performance? In International conference on machine learning, 2018.
Roy Or-El, Xuan Luo, Mengyi Shan, Eli Shechtman, Jeong Joon Park, and Ira Kemelmacher-Shlizerman. StyleSDF: High-Resolution 3D-Consistent Image and Geometry Generation. arXiv preprint arXiv:2112.11427, 2021.
Gaurav Parmar, Richard Zhang, and Jun-Yan Zhu. On buggy resizing libraries and surprising subtleties in fid calculation. arXiv preprint arXiv:2104.11222, 2021.
pmneila. Pymcubes. https://github.com/pmneila/PyMCubes, 2015.
Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 652-660, 2017a.
Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. Advances in neural information processing systems, 30, 2017b.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pp. 8748-8763. PMLR, 2021.
Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125, 2022.
Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10684-10695, 2022.
Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In Proceedings of the Neural Information Processing Systems Conference, 2016.
Axel Sauer, Kashyap Chitta, Jens Müller, and Andreas Geiger. Projected gans converge faster. Advances in Neural Information Processing Systems, 34:17480-17492, 2021.
Axel Sauer, Katja Schwarz, and Andreas Geiger. Stylegan-xl: Scaling stylegan to large diverse datasets. In ACM SIGGRAPH 2022 Conference Proceedings, 2022.
Johannes Lutz Schonberger and Jan-Michael Frahm. Structure-from-motion revisited. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
Johannes Lutz Schonberger, Enliang Zheng, Marc Pollefeys, and Jan-Michael Frahm. Pixelwise view selection for unstructured multi-view stereo. In Proceedings of the European Conference on Computer Vision, 2016.
Katja Schwarz, Yiyi Liao, Michael Niemeyer, and Andreas Geiger. Graf: Generative radiance fields for 3d-aware image synthesis. In Proceedings of the Neural Information Processing Systems Conference, 2020.
Katja Schwarz, Axel Sauer, Michael Niemeyer, Yiyi Liao, and Andreas Geiger. Voxgraf: Fast 3d-aware image synthesis with sparse voxel grids. arXiv, 2022.
Zifan Shi, Yujun Shen, Jiapeng Zhu, Dit-Yan Yeung, and Qifeng Chen. 3d-aware indoor scene synthesis with depth priors. 2022.
Meng-Li Shih, Shih-Yang Su, Johannes Kopf, and Jia-Bin Huang. 3d photography using context-aware layered depth inpainting. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
Dong Wook Shu, Sung Woo Park, and Junseok Kwon. 3d point cloud generative adversarial network based on tree structured graph convolutions. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 3859-3868, 2019.
Ivan Skorokhodov, Sergey Tulyakov, Yiqun Wang, and Peter Wonka. Epigraf: Rethinking training of 3d gans. In Proceedings of the Neural Information Processing Systems Conference, 2022.
Jingxiang Sun, Xuan Wang, Yichun Shi, Lizhen Wang, Jue Wang, and Yebin Liu. Ide-3d: Interactive disentangled editing for high-resolution 3d-aware portrait synthesis. arXiv preprint arXiv:2205.15517, 2022.
Mingxing Tan and Quoc Le. Efficientnet: Rethinking model scaling for convolutional neural networks. In International Conference on Machine Learning, 2019.
Can Wang, Menglei Chai, Mingming He, Dongdong Chen, and Jing Liao. Clip-nerf: Text-and-image driven manipulation of neural radiance fields. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2022.
Zirui Wang, Shangzhe Wu, Weidi Xie, Min Chen, and Victor Adrian Prisacariu. NeRF—: Neural radiance fields without known camera parameters. arXiv preprint arXiv:2102.07064, 2021.
Ke Xian, Jianming Zhang, Oliver Wang, Long Mai, Zhe Lin, and Zhiguo Cao. Structure-guided ranking loss for single image depth prediction. In The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
Yinghao Xu, Sida Peng, Ceyuan Yang, Yujun Shen, and Bolei Zhou. 3d-aware image synthesis via learning structural and textural representations. arXiv preprint arXiv:2112.10759, 2021.
Yang Xue, Yuheng Li, Krishna Kumar Singh, and Yong Jae Lee. Giraffe hd: A high-resolution 3d-aware generative model. arXiv preprint arXiv:2203.14954, 2022.
Xu Yan. Pointnet/pointnet++ pytorch. https://github.com/yanx27/Pointnet_Pointnet2Pytorch, 2019.
Wei Yin, Yifan Liu, and Chunhua Shen. Virtual normal: Enforcing geometric constraints for accurate and robust depth prediction. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021.
Fisher Yu, Ari Seff, Yinda Zhang, Shuran Song, Thomas Funkhouser, and Jianxiong Xiao. Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015.
Zehao Yu, Songyou Peng, Michael Niemeyer, Torsten Sattler, and Andreas Geiger. Monosdf: Exploring monocular geometric cues for neural implicit surface reconstruction. arXiv preprint arXiv:2206.00665, 2022.
Amir R Zamir, Alexander Sax, William Shen, Leonidas J Guibas, Jitendra Malik, and Silvio Savarese. Taskonomy: Disentangling task transfer learning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3712-3722, 2018.
Xuanmeng Zhang, Zhedong Zheng, Daiheng Gao, Bang Zhang, Pan Pan, and Yi Yang. Multi-view consistent generative adversarial networks for 3d-aware image synthesis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2022.
Xiaoming Zhao, Fangchang Ma, David Guera, Zhile Ren, Alexander G. Schwing, and Alex Colburn. Generative multiplane images: Making a 2d gan 3d-aware. In Proc. ECCV, 2022.
Peng Zhou, Lingxi Xie, Bingbing Ni, and Qi Tian. Cips-3d: A 3d-aware generator of gans based on conditionally-independent pixel synthesis. arXiv preprint arXiv:2110.09788, 2021.
# A LIMITATIONS
Lower visual quality compared to 2D generators. Despite providing a more faithful representation of the underlying scene, 3D generators still have lower visual quality than 2D generators. Closing this gap is essential for wide adoption of 3D generators.
Background sticking. One common 3D artifact of 3DGP is the gluing of the foreground and the background: our model predicts a single shape for both, with no clear separation between the two. One potential cause of this artifact is dataset bias: most of the photos are frontal, so it is not beneficial for the model to explore backward views. Another reason is the bias of tri-planes toward flat geometry. However, all our attempts to replace tri-planes with an MLP-based NeRF led to much worse results (see Appx D). Inventing a different efficient parametrization may be an important direction.
No reliable quantitative measure of generated geometry. In this work, we introduce the Non-Flatness Score as a proxy metric for evaluating the quality of the underlying geometry. However, it can capture only a single failure case, specific to generators based on tri-planes: flatness of the geometry. Devising a reasonable metric applicable to a variety of scenarios could significantly speed up progress in this area.
Camera generator C does not learn fine-grained control. While our camera generator is conditioned on the class label $c$ and, in theory, should be able to perform fine-grained control over the per-class focal length distributions (which is natural, since landscape panoramas and a close-up view of a coffee mug typically have different focal lengths), it does not do so in practice, as shown in Fig. 8. We attribute this to the implicit bias of G toward producing large-FoV images due to the tri-plane parametrization. Tri-planes define a limited volume box in space, and close-up renderings with a large focal length would utilize fewer tri-plane features, hence using less of the generator's capacity. This is why 3DGP prefers to perform modeling with larger field-of-view values.

Figure 8: Focal length distribution on ImageNet $256^{2}$ learned by C. The blue solid line shows the mean value, while the lower/upper curves are the 0.05 and 0.95 quantiles, respectively.
# B IMPLEMENTATION DETAILS
This section provides additional architectural and training details; note also that we release the source code. Our generator $\mathsf{G}$ consists of M, S, and T. The mapping network $\mathsf{M}$ takes noise $z \in \mathbb{R}^{512}$ and a class label $c \in \{0, \dots, K - 1\}$, where $K$ is the number of classes, and produces the style code $w \in \mathbb{R}^{512}$. Similar to StyleGAN2-ADA (Karras et al., 2020a) and EpiGRAF (Skorokhodov et al., 2022), $\mathsf{M}$ is a 2-layer MLP with LeakyReLU activations and 512 neurons in each layer. The synthesis network S is a decoder identical to the one of StyleGAN2 (Karras et al., 2020b), except that it produces tri-plane features $\pmb{p} = (p^{xy}, p^{yz}, p^{xz}) \in \mathbb{R}^{3 \times (512 \times 512 \times 32)}$. A feature vector $f_{xyz} \in \mathbb{R}^{32}$ located at $(x, y, z) \in \mathbb{R}^3$ is computed by projecting the coordinate onto the tri-plane representation, bi-linearly interpolating the nearby features, and averaging the features from the three planes. Following EpiGRAF (Skorokhodov et al., 2022), the tri-plane decoder T is a two-layer MLP with LeakyReLU activations and 64 neurons in the hidden layer, which takes a tri-plane
Table 3: Module names glossary.
<table><tr><td>G</td><td>Generator</td></tr><tr><td>M</td><td>Mapping network</td></tr><tr><td>S</td><td>Synthesis network</td></tr><tr><td>T</td><td>Tri-plane decoder</td></tr><tr><td>C</td><td>Camera generator</td></tr><tr><td>A</td><td>Depth adaptor</td></tr><tr><td>D</td><td>Discriminator</td></tr><tr><td>E</td><td>Depth Estimator</td></tr><tr><td>F</td><td>Feature Extractor</td></tr></table>
feature $f_{xyz}$ as input and produces the color and density (RGB, $\sigma$) at that point. We use the same volume rendering procedure as EpiGRAF (Skorokhodov et al., 2022). We also found that increasing the half-life of the exponential moving average of our G improves both FID and Inception Score. We noticed this because the samples were changing too rapidly over the course of training: we log the samples every 400k seen images, and the generator could completely change the global structure of a sample during that period.
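To make the tri-plane lookup above concrete, here is a minimal PyTorch sketch; the axis-to-plane projection convention and the `align_corners` setting are our choices rather than values from the released code.

```python
import torch
import torch.nn.functional as F

def sample_triplane_features(planes, points):
    """planes: (3, C, H, W) features for the (xy, yz, xz) planes; points: (N, 3) in [-1, 1]^3.
    Returns (N, C): bilinearly interpolated features averaged over the three planes."""
    coords = torch.stack([
        points[:, [0, 1]],  # project onto the xy plane
        points[:, [1, 2]],  # project onto the yz plane
        points[:, [0, 2]],  # project onto the xz plane
    ], dim=0)                                   # (3, N, 2)
    grid = coords.unsqueeze(2)                  # (3, N, 1, 2): grid_sample expects (B, H_out, W_out, 2)
    feats = F.grid_sample(planes, grid, mode="bilinear", align_corners=False)  # (3, C, N, 1)
    return feats.squeeze(-1).mean(dim=0).t()    # average over planes -> (N, C)

planes = torch.randn(3, 32, 512, 512)                 # 32-channel 512x512 tri-planes, as in the text
points = torch.rand(4096, 3) * 2 - 1                  # query points along the rays
features = sample_triplane_features(planes, points)   # (4096, 32), fed to the tri-plane decoder T
```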
Camera generator. The camera generator consists of linear layers with SoftPlus activations; its architecture is depicted in Fig. 9. We found it crucial to use SoftPlus rather than LeakyReLU activations, since optimization of the Camera Gradient Penalty for non-smooth functions is unstable at small learning rates (below 0.02). Our gradient penalty minimizes the function $\mathcal{L} = |g| + 1 / |g|$, where $|g|$ is the input/output scalar derivative of the camera generator C. The motivation is to prevent the collapse of C into a delta distribution, and the intuition is the following. C can collapse into a delta distribution in two ways: 1) by producing a constant output for all input values, which is prevented by the first term $|g|$; and 2) by producing $\pm \infty$ for all inputs, which are then converted into constants since we apply sigmoid normalization on top of the outputs to map them into a proper range (e.g., pitch is bounded in $(0,\pi)$); this, in turn, is prevented by the second term $1 / |g|$. The minimum value of this function is 2 (achieved when the gradient norm equals 1), and the function itself is visualized in Fig. 9b.
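Below is a minimal sketch of one possible implementation of this penalty; how the scalar derivative $|g|$ is computed and aggregated over the camera parameters and the batch is our assumption and may differ from the released code.

```python
import torch

def camera_gradient_penalty(camera_generator, z, eps=1e-8):
    """Penalize |g| + 1/|g| for the derivative of each predicted camera parameter w.r.t. the input noise."""
    z = z.detach().requires_grad_(True)
    params = camera_generator(z)                      # (B, num_dof) camera parameters, e.g. num_dof = 6
    penalty = params.new_zeros(())
    for i in range(params.shape[1]):                  # one term per camera degree of freedom
        g, = torch.autograd.grad(params[:, i].sum(), z, create_graph=True, retain_graph=True)
        g_norm = g.flatten(1).norm(dim=1) + eps       # per-sample gradient magnitude
        penalty = penalty + (g_norm + 1.0 / g_norm).mean()
    return penalty / params.shape[1]                  # minimum of 2 when all gradient norms equal 1
```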
Adversarial depth supervision. For the depth adaptor, we use a three-layer convolutional neural network with $5 \times 5$ kernels (since it is shallow, we increase its receptive field by increasing the kernel size), LeakyReLU activations, and 64 filters in each layer. Additionally, we use one shared convolutional layer that converts the $64 \times h \times w$ features into depth maps. We use the same architecture for D as EpiGRAF (Skorokhodov et al., 2022), but additionally concatenate a 1-channel depth map to the 3-channel RGB input. Finally, E and F are the pre-trained LeReS (Miangoleh et al., 2021) and ResNet50 (He et al., 2016) networks without any modifications. We used the timm library to extract the features for real images.
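A minimal sketch of a depth adaptor with this layout; the padding and the LeakyReLU slope are our assumptions, not values taken from the released code.

```python
import torch.nn as nn

class DepthAdaptor(nn.Module):
    """Three 5x5 conv layers with 64 filters and LeakyReLU, plus a final conv projecting to a 1-channel depth map."""
    def __init__(self, channels=64, kernel_size=5):
        super().__init__()
        pad = kernel_size // 2
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size, padding=pad), nn.LeakyReLU(0.2),
            nn.Conv2d(channels, channels, kernel_size, padding=pad), nn.LeakyReLU(0.2),
            nn.Conv2d(channels, channels, kernel_size, padding=pad), nn.LeakyReLU(0.2),
        )
        self.to_depth = nn.Conv2d(channels, 1, kernel_size, padding=pad)

    def forward(self, depth):          # depth: (B, 1, h, w) rendered depth map
        return self.to_depth(self.body(depth))
```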
Similarly to EpiGRAF (Skorokhodov et al., 2022), we set the $\mathcal{R}_1$ regularization (Mescheder et al., 2018) weight to $\lambda_r = 0.1$ and the knowledge distillation weight to $\lambda_{\mathrm{dist}} = 1$. $\mathcal{R}_1$ regularization helps to stabilize GAN training and is formulated as a gradient penalty on the discriminator's real inputs:
$$
\mathcal{R}_1 = \frac{1}{2} \left\| \nabla_{\boldsymbol{x}} \mathrm{D}(\boldsymbol{x}) \right\|_2^2 \longrightarrow \min_{\mathrm{D}}. \tag{7}
$$
The (hand-wavy) intuition is that it makes the discriminator surface flatter in the vicinity of real points, making it easier for the generator to reach them. We train all the models with the Adam optimizer (Kingma & Ba, 2014), using a learning rate of $2\mathrm{e}{-3}$ and $\beta_{1} = 0.0$, $\beta_{2} = 0.99$. Following EpiGRAF (Skorokhodov et al., 2022), our model uses patch-wise training with $64 \times 64$-resolution patches and their proposed $\beta$-scale sampling strategy without any modifications. We use a batch size of 64 in all the experiments, since in early experiments we did not find any improvements from a larger batch size, neither for our model nor for StyleGAN2, in contrast to the observations of Brock et al. (2018) and Sauer et al. (2022).
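For reference, a minimal PyTorch sketch of Eq. 7 with the $\lambda_r = 0.1$ weighting mentioned above; the discriminator call signature is a placeholder.

```python
import torch

def r1_penalty(discriminator, real_images, lambda_r1=0.1):
    """R1 regularization (Eq. 7): squared gradient norm of D's output w.r.t. real inputs, scaled by lambda_r1."""
    real_images = real_images.detach().requires_grad_(True)
    scores = discriminator(real_images)
    grads, = torch.autograd.grad(scores.sum(), real_images, create_graph=True)
    return lambda_r1 * 0.5 * grads.flatten(1).pow(2).sum(dim=1).mean()
```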
As mentioned in Section 4, we use the instance selection technique of DeVries et al. (2020) to remove image outliers from the datasets, since they might negatively affect the geometry. This procedure works by first extracting a 2,048-dimensional feature vector for each image, then fitting a multivariate Gaussian distribution to the obtained features and removing the images whose features

(a) Camera generator C. We condition C on the class label $c$ when generating the camera position $\hat{\varphi}$, since it might differ across classes, and on $z$ when generating the look-at position and field of view, since these might depend on the object shape (e.g., a close-up view of a dog's snout is more likely than one of its tail). Each MLP consists of 3 layers with SoftPlus non-linearities.

(b) Camera gradient penalty. We structure the regularization term for $\mathbf{C}$ as the function $\mathcal{L}(|g|) = |g| + 1 / |g|$ (see (1)), which is visualized above. It prevents the collapse of $\mathbf{C}$ into a delta distribution via either producing constant values or producing very large/small values (which become constant after the sigmoid normalization).
Figure 9: Camera generator architecture and visualizing its corresponding regularization term.
have low probability density. For SDIP Dogs and LSUN Horses, we fit a single multivariate Gaussian distribution to the whole dataset. For ImageNet, we fit a separate model for each class, with additional diagonal regularization of the covariance matrix, which is needed due to its singularity: the feature vector has more dimensions than the number of images in a class. We refer to DeVries et al. (2020) for additional details.
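A minimal sketch of this density-based filtering, assuming the per-image features have already been extracted; the `keep_ratio` and `diag_reg` values are placeholders.

```python
import numpy as np

def instance_selection_mask(features, keep_ratio=1/3, diag_reg=1e-4):
    """Fit a multivariate Gaussian to per-image features and keep the highest-density fraction of the images."""
    mean = features.mean(axis=0)
    centered = features - mean
    cov = centered.T @ centered / len(features) + diag_reg * np.eye(features.shape[1])
    inv_cov = np.linalg.inv(cov)
    # Log-density up to a constant: negative half of the Mahalanobis distance.
    scores = -0.5 * np.einsum("nd,dk,nk->n", centered, inv_cov, centered)
    threshold = np.quantile(scores, 1.0 - keep_ratio)
    return scores >= threshold      # boolean mask over the dataset

# Toy example with 128-D features; in the paper's setting the features are 2,048-dimensional.
features = np.random.randn(5000, 128)
mask = instance_selection_mask(features, keep_ratio=1/3)
```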
Also note that we will release the source code and the checkpoints, which would additionally convey all other implementation details.
# C KNOWLEDGE DISTILLATION
In this section, we compare our knowledge distillation strategy with the projected discriminator (projected D) (Sauer et al., 2021; 2022), another popular technique for utilizing existing classification models in GAN training. Projected D relies on a pre-trained EfficientNet (Tan & Le, 2019); it freezes most of the weights and adds random convolutional layers that project intermediate outputs of EfficientNet into features that are later processed by the discriminator. Note that this strategy relies on a specific discriminator architecture, so in order to adapt it to 3DGP we have to rely on two discriminators: the first one is our discriminator D with the additional depth input and the conditioning on patch parameters, and the second one is projected D. In Tab. 4, we compare projected D and our knowledge distillation strategy on the ImageNet (Deng et al., 2009) and SDIP Dogs$_{40\mathrm{k}}$ (Mokady et al., 2022) datasets. First, we compare these strategies for 2D generator architectures: StyleGAN3-t (large) (Karras et al., 2021) and StyleGAN2 (Karras et al., 2020a). One can observe that projected D is not generic: its results vary heavily depending on the generator architecture. Moreover, the training cost of projected D is higher. Finally, we test this strategy for our 3D generator, and again observe that it leads to inferior results in terms of both visual quality and training cost.
Typically, knowledge distillation in classifiers is performed using the Kullback-Leibler divergence on top of logits (Hinton et al., 2015; Kim et al., 2021), rather than the $\mathcal{L}_2$ distance on top of hidden activations, as in our case. There are two design reasons why we use $\mathcal{L}_2$: 1) it is more general, and one can transfer knowledge from other models with the same design, like CLIP (Radford et al., 2021) or Mask R-CNN (He et al., 2017); and 2) our discriminator should not be able to perform classification, which is why guiding it with the KL classification loss is not natural. In Fig. 10, we provide an additional exploration of other knowledge distillation objectives on top of StyleGAN2 (Karras et al., 2020b), trained for conditional ImageNet generation at the $128^2$ resolution. One can observe that the KL and $\mathcal{L}_2$ objectives perform approximately the same. Also, guiding with CLIP initially underperforms ResNet guidance, but starts to outperform it as training proceeds: we hypothesize that this is because CLIP is a more general model. We do not use CLIP
Table 4: Comparing our knowledge transfer strategy with the one from StyleGAN-XL.
<table><tr><td>Method</td><td>ImageNet 1282 @ 10M FID↓</td><td>Training cost ↓</td><td>SDIP Dogs40k 2562 @ 5M FID ↓</td><td>Training cost ↓</td><td>Restricts D's architecture?</td></tr><tr><td>StyleGAN3-t (large)</td><td>28.1</td><td>11.9</td><td>6.42</td><td>7.3</td><td>No</td></tr><tr><td>- with Projected D</td><td>22.8</td><td>11.6</td><td>22.6</td><td>8.1</td><td>Yes</td></tr><tr><td>- with Knowledge Distillation</td><td>16.3</td><td>11.9</td><td>2.47</td><td>7.3</td><td>No</td></tr><tr><td>StyleGAN2</td><td>33.7</td><td>1.81</td><td>9.79</td><td>1.3</td><td>No</td></tr><tr><td>- with Projected D</td><td>160.5</td><td>3.83</td><td>4.10</td><td>2.1</td><td>Yes</td></tr><tr><td>- with Knowledge Distillation with L2</td><td>20.75</td><td>1.83</td><td>2.08</td><td>1.4</td><td>No</td></tr><tr><td>- with Knowledge Distillation with KL</td><td>25.92</td><td>1.83</td><td>2.54</td><td>1.4</td><td>No</td></tr><tr><td>3DGP</td><td>53.6</td><td>6.33</td><td>21.3</td><td>2.6</td><td>No</td></tr><tr><td>- with Projected D</td><td>105.9</td><td>8.0</td><td>10.5</td><td>4.2</td><td>Yes</td></tr><tr><td>- with Knowledge Distillation</td><td>27.8</td><td>6.83</td><td>4.51</td><td>2.6</td><td>No</td></tr></table>

Figure 10: Exploring different knowledge distillation objectives. Each model is trained for conditional ImageNet generation on the $128^{2}$ resolution with all other hyperparameters being the same.
guidance for our generators in this paper, so as not to give them an unfair advantage over other models.
# D FAILED EXPERIMENTS
In this section, we describe several ideas that we tried in this project but that did not work for us.
Gradient Penalty for Depth Adaptor. To prevent the depth adaptor A from completely faking the depth and to force it to still provide a useful signal to the generator, one could employ a gradient penalty that forces the gradient of A to be close to one. We tried it in two setups: window-to-pixel gradients and pixel-to-pixel gradients. Even with a large weight for this loss and a shallow adaptor with a $k = 3$ kernel size, it did not enforce good geometry.
3DGP with Projected D. Before committing to knowledge distillation, we spent significant resources trying to incorporate projected D into our setup. For the two-discriminator setting (see Appx C), we tested different weighting strategies, less frequent updates for projected D, and enabling projected D only after some geometry had already been learned. We also tested a single projected D, where we learned an additional depth encoder that can consume our depth inputs. In all these experiments, the learned geometry was significantly inferior to the setting without projected D; most of the time, the geometry became completely flat.
MLPs instead of tri-planes. We noticed that tri-planes are strongly biased toward flat generation. A natural idea is therefore to utilize a different representation, one possible candidate being MLPs (Chan et al., 2021). However, our experiments with MLPs always led to significantly inferior
visual quality. Our hypothesis is that they simply do not have enough capacity to model large-scale datasets.
Few-shot NeRF regularization. We also tried to improve the learned geometry by incorporating the ray entropy minimization loss from InfoNeRF (Kim et al., 2022) with different weights, but the model always diverged. We hypothesize that this is because it is significantly harder to find proper geometry if the model is stuck in a local minimum where the object densities are very sharp.
Normal supervision. Another prominent idea was utilizing normal supervision, since generic networks for normals (Eftekhar et al., 2021) are also widely available and provide good results on arbitrary data. Since computing normals requires the gradient of the density $\sigma$ with respect to $(x,y,z)$, and PyTorch does not provide second-order derivatives for F.grid_sample, we developed a custom CUDA kernel that computes the second derivative for F.grid_sample. Unfortunately, we found that normal supervision is inferior to depth supervision: it is much easier for the generator G to fake normals.
Preventing C collapse via variance/entropy/moments regularization. Before arriving at the current formulation, we experimented with other forms of regularization: maximizing the variance with a small loss coefficient, maximizing the entropy (under the assumption that the posterior distribution is Gaussian), or additionally pushing the mean/skewness/kurtosis toward those of a Gaussian distribution. But each time, the generator found ways to "cheat" the regularization and managed to produce either a delta distribution or a mixture of delta distributions (it "likes" doing so, since in this regime it can produce completely flat images and cheat the geometry).
# E ADDITIONAL SAMPLES
Since it is much easier to visualize the samples from NeRF-based generators as videos rather than RGB images, we provide all the additional visualizations on https://snap-research.github.io/3dgp.
# F NON-FLATNESS SCORE DETAILS
To detect and quantify the flatness of 3D generators, we propose the Non-Flatness Score (NFS). To compute NFS, we sample $N$ latent codes with their corresponding radiance fields. For each radiance field, we perform integration following Eq. 2 to obtain its depth map; however, we first set the $50\%$ lowest densities to zero, which is necessary to cull spurious density artifacts.
We then normalize each depth map according to the corresponding near and far planes and compute a histogram with $B$ bins, showing the distribution of the depth values. To analyze how much the depth values are concentrated versus spread across the volume, we compute the entropy of each histogram. Averaging the exponentiated entropies over the $N$ depth maps gives the sought score:
$$
\mathrm{NFS} = \frac{1}{N} \sum_{i=1}^{N} \exp\left[ -\sum_{j=1}^{B} b\left(\boldsymbol{d}^{(i)}\right)_{j} \log b\left(\boldsymbol{d}^{(i)}\right)_{j} \right], \tag{8}
$$
where $b(\boldsymbol{d}^{(i)})_j$ is the normalized number of depth values (i.e., the bin count divided by $h \cdot w$) in the $j$-th bin of the $i$-th depth map $\boldsymbol{d}^{(i)}$; we use $N = 256$ and $B = 64$. NFS does not directly evaluate the quality of the geometry. Instead, it helps to detect and quantify how flat the generated geometry is. In Fig. 6, we show examples of repetitive geometry generated by EG3D and of more diverse geometry produced by 3DGP, along with their depth histograms and NFS.
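A minimal NumPy sketch of Eq. 8 under this reading (bin counts normalized to sum to one before computing the entropy); the function name and the clipping to the near/far range are ours.

```python
import numpy as np

def non_flatness_score(depth_maps, near, far, num_bins=64):
    """Average exponentiated entropy of normalized depth histograms over N generated depth maps."""
    scores = []
    for d in depth_maps:                                      # d: (h, w) depth map of one sample
        d_norm = np.clip((d - near) / (far - near), 0.0, 1.0)
        hist, _ = np.histogram(d_norm, bins=num_bins, range=(0.0, 1.0))
        b = hist / hist.sum()                                 # normalized bin counts b(d)_j
        b = b[b > 0]
        scores.append(np.exp(-(b * np.log(b)).sum()))         # exp(entropy) of the depth histogram
    return float(np.mean(scores))

# A flat shape concentrates depth in a few bins -> low NFS; diverse geometry spreads it out -> high NFS.
```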
Intuitively, one might expect NFS to assign high values to EG3D with a wide camera distribution, since its repetitive shapes are supposed to span the whole ray uniformly. Surprisingly, it is able to reflect this repetitiveness artifact as well, yielding low scores, as can be observed from Tables 1 and 2.
# G INVESTIGATING THE POTENTIAL DATA LEAK FROM DEPTH ESTIMATOR
LeReS (Miangoleh et al., 2021) depth estimator (which was used in our work) was pre-trained on a different set of images compared to ImageNet (or other animals datasets, explored in our work),

EG3D (wide)
Figure 11: Random samples for random classes on ImageNet $256^{2}$ (random seed of 1 for the first ImageNet classes).

EpiGRAF (wide)

3DGP

Figure 12: Depth maps produced by 3DGP.

Figure 13: Depth maps on ImageNet $256^{2}$ , predicted by LeReS (Miangoleh et al., 2021) depth estimator. See the generated depth by 3DGP in Fig. 12.

Figure 14: Random images from each pre-training dataset of the LeReS (Miangoleh et al., 2021) depth estimator, used in our work. From top to bottom: HRWSI (Xian et al., 2020), DiverseDepth (Yin et al., 2021), Holopix50k (Hua et al., 2020), and Taskonomy (Zamir et al., 2018).
and this could theoretically help our generator achieve better results by implicitly giving it access to a broader image set rather than only texture-less geometric guidance. In this section, we investigate this and conclude that it is not the case: the pre-training data of LeReS is too different from the explored datasets and contains almost no animal images, while all the explored datasets are heavily animal-dominated, including ImageNet, which contains 482 animal classes. Hence, LeReS has no chance of helping our generator by giving it access to additional data.
LeReS (Miangoleh et al., 2021) trains on a combination of 4 datasets (in fact, parts of them): 1) HRWSI (Xian et al., 2020) contains outdoor city imagery (e.g., buildings, monuments, landscapes); 2) DiverseDepth (Yin et al., 2021) contains clips from in-the-wild movies and videos; 3)
Holopix50k (Hua et al., 2020) contains diverse, in-the-wild web images (it is the most similar to ImageNet in terms of the underlying data distribution); and 4) Taskonomy (Zamir et al., 2018) contains indoor scenes (bedrooms, stores, etc.). In Fig. 14, we provide 30 random images from each dataset. In Tab. 5, we rigorously explore how many animal images the pre-training data of LeReS contains. We do this in two ways: by directly counting the number of animals with a pre-trained Mask R-CNN model (He et al., 2017), and by computing FID scores between the pre-training datasets of LeReS and the ImageNet animal subset. The pre-trained Mask R-CNN model from TorchVision is able to detect 10 animal classes: birds, cats, dogs, horses, sheep, cows, elephants, bears, zebras, and giraffes. Following the official tutorial, we used a threshold of 0.8. We downloaded the datasets with the official download scripts of LeReS.
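For illustration, a sketch of how such counting can be done with TorchVision's pre-trained Mask R-CNN (recent TorchVision versions); the helper function is illustrative rather than the exact script used for the paper.

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn, MaskRCNN_ResNet50_FPN_Weights

# COCO category ids of the 10 animal classes: bird, cat, dog, horse, sheep, cow, elephant, bear, zebra, giraffe.
ANIMAL_IDS = {16, 17, 18, 19, 20, 21, 22, 23, 24, 25}

weights = MaskRCNN_ResNet50_FPN_Weights.DEFAULT
model = maskrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()

@torch.no_grad()
def contains_animal(pil_image, score_threshold=0.8):
    """Return True if the detector finds any animal instance with confidence >= score_threshold."""
    prediction = model([preprocess(pil_image)])[0]
    keep = prediction["scores"] >= score_threshold
    return bool(any(label.item() in ANIMAL_IDS for label in prediction["labels"][keep]))
```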
Table 5: Investigating the distribution overlap between LeReS Miangoleh et al. (2021) training data and the animals subset of ImageNet.
<table><tr><td rowspan="2">Dataset</td><td rowspan="2">#images</td><td colspan="2">Animal subset</td><td colspan="2">FID</td></tr><tr><td>#images</td><td>%images</td><td>ImageNet</td><td>ImageNet animals</td></tr><tr><td>HRWSI (Xian et al., 2020)</td><td>18.2k</td><td>1.05k</td><td>5.75%</td><td>71.48</td><td>97.79</td></tr><tr><td>DiverseDepth (Yin et al., 2021)</td><td>95.4k</td><td>7.9k</td><td>8.26%</td><td>90.99</td><td>122.6</td></tr><tr><td>Holopix50k (Hua et al., 2020)</td><td>42k</td><td>4k</td><td>9.56%</td><td>47.44</td><td>81.62</td></tr><tr><td>Taskonomy (Zamir et al., 2018)</td><td>134.7k</td><td>0.5k</td><td>0.36%</td><td>135.1</td><td>154.3</td></tr><tr><td>All pre-training data</td><td>290k</td><td>9.4k</td><td>3.79%</td><td>77.46</td><td>108.6</td></tr><tr><td>ImageNet</td><td>1281.2k</td><td>618.7k</td><td>48.3%</td><td>0.0</td><td>19.06</td></tr></table>
From Tab. 5, one can see that the pre-training data of LeReS contains just $3.79\%$ animal images, while ImageNet contains $48.3\%$; in terms of distribution, it is thus far from the animal subset of ImageNet. From this, one can conclude the following: the depth estimator has almost never seen animals during training, since its training datasets contain roughly $600\mathrm{k}$ fewer animal images than ImageNet. This means that our generator does not receive any unfair advantage in synthesizing animals from adversarial depth supervision by implicitly getting access to a larger set of images.
How does it affect our generator's performance in synthesizing animals? Will it have an unusually poor FID compared to non-animal data or compared to other methods? In Tab. 6, we report FID scores of each generator on animal vs non-animal subsets of ImageNet. From these scores, one can see that the trend is the same as for FID on full ImageNet. This implies that our generator performs equally well on data which constitutes the main part of our training dataset and is not a part of the depth estimator pre-training.
In this way, one can conclude that adversarial depth supervision helps to achieve better synthesis quality only through geometric guidance. It does not help the generator to do this by leaking the knowledge of pre-trained data from the pre-trained depth estimator.
Table 6: FID scores of different generators on 482 animal classes of ImageNet ${256}^{2}$ (each generator was trained on all the classes of ImageNet ${256}^{2}$ ). For this evaluation, we used the images only from the animal classes for both real and fake data.
<table><tr><td>Method</td><td>Synthesis type</td><td>Animal FID ↓</td><td>Non-Animal FID ↓</td><td>FID ↓</td></tr><tr><td>StyleGAN-XL</td><td>2D</td><td>4.53</td><td>4.81</td><td>2.30</td></tr><tr><td>EG3D (wide camera)</td><td>3D-aware</td><td>44.55</td><td>49.14</td><td>25.6</td></tr><tr><td>EpiGRAF</td><td>3D</td><td>78.65</td><td>81.38</td><td>47.56</td></tr><tr><td>3DGP (ours)</td><td>3D</td><td>47.32</td><td>50.94</td><td>26.47</td></tr></table>

Figure 15: Examples of real images for our rendered ShapeNet $256^{2}$ dataset.
# H GEOMETRY EVALUATION ON SHAPENET
In this section, we present the details and the results of an additional study, which rigorously shows that our proposed adversarial depth supervision improves the geometry quality.
# H.1 RENDERING DETAILS
We take the ShapeNet dataset (Chang et al., 2015), which consists of 50k models from 54 classes, and render it from random frontal camera positions. For this, we place the camera on a sphere of radius 2 and randomly choose its position by sampling the rotation and elevation angles from $N(0,\pi /8)$ and $N(\pi /2,\pi /8)$, respectively. We chose frontal views to be closer to the real-world scenario: the most popular image synthesis datasets are dominated by frontal views. Also, if the dataset is not frontal, there is less need for additional geometry supervision, since wide camera coverage is already a good enough 3D bias (which does not exist in modern in-the-wild datasets): for wide camera distributions, a generator can learn proper geometry on its own (see the experiments on synthetic datasets with 360-degree camera coverage by Skorokhodov et al. (2022)). We render just a single view per model, since this is also the scenario we typically have in real datasets (e.g., in all our explored datasets, there is just a single view per scene). Some of the models in ShapeNet are broken (they lack the corresponding mesh files), so we discard them. In total, this gives us 51,209 training images, which are visualized in Fig. 15. We used BlenderProc (Denninger et al., 2019) for rendering. During rendering, we also removed the transparent elements of the meshes, since they were causing aliasing issues in the depth maps.
# H.2 EXPERIMENTS
Experimental setup. After rendering, we trained our main baselines, EG3D (Chan et al., 2022) and EpiGRAF (Skorokhodov et al., 2022), on this dataset using the ground-truth camera parametrization and distribution when sampling the camera poses. We trained 3DGP in the following variants: 1) with the default adversarial depth supervision (i.e., using $P(\bar{d}) = 0.5$); and 2) with adversarial depth supervision but corrupted depth maps, where the corruption is simulated by blurring of increasing strength, similar to (DeVries et al., 2021). Also, we trained StyleGAN2 (Karras et al., 2020b) as a lower bound on FID. We disabled camera distribution learning and knowledge distillation in the 3DGP experiments to avoid unnecessary complications: we investigate only adversarial depth supervision in this study. For all the baselines, we use a white background in volumetric rendering.
Evaluation. We compare the methods in terms of FID (Heusel et al., 2017) and also Frechet Pointcloud Distance (FPD), proposed by Shu et al. (2019). This metric is an FID analog for point clouds: it uses a pre-trained PointNet (Qi et al., 2017a) to extract the features from point clouds and then computes Frechet distance between real and fake representations sets, approximating their underlying distribution as multi-variate normal one.

Table 7: Geometry evaluation for different models on ShapeNet ${256}^{2}$ (Chang et al., 2015).

<table><tr><td>Method</td><td>Synthesis type</td><td>FID ↓</td><td>FPD ↓</td></tr><tr><td>EG3D</td><td>3D-aware</td><td>17.22</td><td>2770.5</td></tr><tr><td>EpiGRAF</td><td>3D</td><td>21.58</td><td>424.0</td></tr><tr><td>3DGP (ours)</td><td>3D</td><td>14.38</td><td>80.67</td></tr><tr><td>with σblur = 1</td><td>3D</td><td>18.33</td><td>106.9</td></tr><tr><td>with σblur = 3</td><td>3D</td><td>30.37</td><td>139.5</td></tr><tr><td>with σblur = 10</td><td>3D</td><td>99.41</td><td>807.4</td></tr><tr><td>StyleGAN2</td><td>2D</td><td>5.54</td><td>N/A</td></tr></table>

We used the official codebase of Shu et al. (2019) to compute the metrics. In their original work, Shu et al. (2019) used a custom-trained PointNet (Qi et al., 2017a) to extract the features. But we observed an issue with it: it sometimes produced unusually high FPD scores on the order of $10^{6}$, even for real point clouds. This is why we extracted the point cloud features with a pre-trained PointNet++ (Qi et al., 2017b) model from a popular public implementation (Yan, 2019).

To extract point clouds from the models, we first extracted their surfaces via marching cubes. For this, we sampled density fields at $256^{3}$ resolution and then used the marching cubes implementation of PyMCubes (pmneila, 2015) to extract the surfaces. Following EG3D, we thresholded the surface for marching cubes at the density value of $\sigma = 10$. Our overall pipeline is identical to the original SDF extraction procedure in the EG3D repo, but simpler in terms of implementation. After that, we extracted point clouds by sampling 2,048 points on the surface for both real and fake meshes using uniform sampling from trimesh (Dawson-Haggerty et al., 2019).
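A minimal sketch of this density-to-point-cloud pipeline is shown below; the helper name is ours, and the toy example uses a reduced grid resolution for speed.

```python
import mcubes  # PyMCubes
import numpy as np
import trimesh

def density_to_pointcloud(density, threshold=10.0, num_points=2048):
    """Extract a surface from a sampled density grid and sample points on it.

    `density` is a 3D array of generator densities; the surface is thresholded
    at sigma = 10 and 2,048 points are sampled uniformly, matching the
    procedure described above.
    """
    vertices, faces = mcubes.marching_cubes(density, threshold)
    mesh = trimesh.Trimesh(vertices=vertices, faces=faces)
    points, _ = trimesh.sample.sample_surface(mesh, num_points)
    return np.asarray(points)

# Toy example: a sphere-like density blob instead of a generator's output,
# sampled at 64^3 instead of 256^3 to keep the demo fast.
grid = np.linspace(-1, 1, 64)
xx, yy, zz = np.meshgrid(grid, grid, grid, indexing="ij")
density = 20.0 * (np.sqrt(xx**2 + yy**2 + zz**2) < 0.5)
print(density_to_pointcloud(density).shape)  # (2048, 3)
```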

# H.3 RESULTS

The results of these experiments are presented in Tab. 7. We also provide random samples (seed 1, classes 1-8) in Fig. 16. EG3D (Chan et al., 2022) fails to recover the geometry and generates inverted shapes with hollow geometry. This is highlighted by its very high FPD score. EpiGRAF (Skorokhodov et al., 2022) is able to recover shapes, but it models the white background via an additional sphere, which also damages its FPD. Our method, in contrast, learns proper geometry and correctly drops the background. Corrupting the depth maps with blurring deteriorates both the geometry quality and the texture quality, which is confirmed by the metrics and the provided samples.

# I DETAILS ON THE EXPERIMENTS WITH 3D PHOTO INPAINTING

In this section, we provide the details of our experiments on combining 2D generation with 3D Photo Inpainting techniques.

For the 2D generator, we chose StyleGAN-XL (Sauer et al., 2022) since it achieves state-of-the-art visual quality on ImageNet, as measured by FID (Heusel et al., 2017). We took the original checkpoint for $256^{2}$ generation from the official repository. First, we generated 50,000 images with the model and computed their FID: this gave a value of 2.51 vs the 2.26 reported by the authors, which is a negligible difference.

After that, we used the original codebase of 3DPhoto (Shih et al., 2020) to synthesize 3D variations of $10\mathrm{k}$ random images. We kept all the hyperparameters the same. 3DPhoto has no spherical camera parametrization, so we simulated it in the following way. We assumed that the sphere center lies at the median depth value in front of the original camera position (which is $(0,0,0)$), and rotated the camera around that point. As discussed in §4, we used the narrow camera distribution to sample the camera poses. Namely, we used normal distributions with standard deviations of $\sigma_{\mathrm{yaw}} = 0.3$ and $\sigma_{\mathrm{pitch}} = 0.15$ for sampling rotation and elevation angles, respectively.
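A sketch of this simulated spherical camera is given below; the rotation conventions (camera looking down $+z$) and helper names are our own assumptions, not part of the 3DPhoto codebase.

```python
import numpy as np

def sample_camera_pose(depth_map, sigma_yaw=0.3, sigma_pitch=0.15, rng=None):
    """Simulate a spherical camera for 3D Photo Inpainting.

    The pivot is placed at the median depth in front of the original camera at
    the origin; the camera is then rotated around that pivot by yaw and pitch
    angles drawn from narrow normal distributions.
    """
    rng = rng or np.random.default_rng()
    yaw, pitch = rng.normal(0, sigma_yaw), rng.normal(0, sigma_pitch)
    pivot = np.array([0.0, 0.0, np.median(depth_map)])

    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    r_yaw = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    r_pitch = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    rotation = r_yaw @ r_pitch

    # Rotate the original camera position (the origin) around the pivot.
    cam_position = rotation @ (np.zeros(3) - pivot) + pivot
    return rotation, cam_position

rot, pos = sample_camera_pose(np.full((256, 256), 4.0))
print(pos)
```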

Figure 16: Random samples (seed 1, classes 1-8) for ShapeNet $256^{2}$ for our method and the baselines.

Figure 17: Random samples from StyleGAN-XL (Sauer et al., 2022) paired with 3D Photo Inpainting (Shih et al., 2020). For large camera movements, severe artifacts appear, such as gray areas and interpolation artifacts.

For negligible camera variations, this strategy produces excellent results. While it is not perfectly view-consistent, it inherits the state-of-the-art image quality of the 2D generator which was used to synthesize the original images. But for larger camera variations, the quality begins to deteriorate very quickly. Not only do noticeable inpainting artifacts start to appear, but an outpainting problem also emerges. The model was not trained to perform image extrapolation or to inpaint large regions, and it fails to fill the holes which appear under larger camera movements. We provide visualizations in Fig. 17.

# J ENTROPY REGULARIZATION

After the submission, we explored another strategy to regularize the camera generator and observed that it is more flexible and easier to tune. Intuitively, the strategy is to maximize the entropy of each predicted camera parameter $\varphi_{i}$:

$$
\mathcal{L}_{\varphi_i} = H(\varphi_i) = \underset{p_G(\varphi_i)}{\mathbb{E}}\left[-\log p_G(\varphi_i)\right], \tag{9}
$$

where $p_G(\varphi_i)$ is the camera generator's distribution over $\varphi_i$, and $H$ is the differential entropy.

To do this, we used the POT package (Flamary et al., 2021) to minimize the Earth Mover's Distance to the uniform distribution:

$$
\mathcal{L}_{\varphi_i} = \operatorname{EMD}\left(p_G(\varphi_i),\ U[m_{\varphi_i}, M_{\varphi_i}]\right), \tag{10}
$$

where $\operatorname{EMD}(P, Q)$ denotes the Earth Mover's Distance between the distributions $P$ and $Q$, $U[a, b]$ is the uniform distribution on $[a, b]$, and $m_{\varphi_i}, M_{\varphi_i}$ are the minimum and maximum values for a given camera parameter $\varphi_i$.

In practice, we used 64 samples to approximate the EMD and computed the non-regularized Wasserstein distance.
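A minimal sketch of this sample-based EMD regularizer with the POT package is shown below; it uses the exact (non-regularized) solver on 64 samples per side, while the function name and the toy camera sampler are our own illustrative choices.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def emd_to_uniform(samples, low, high, num_ref=64, rng=None):
    """EMD between sampled camera parameters and U[low, high], as in Eq. (10)."""
    rng = rng or np.random.default_rng()
    reference = rng.uniform(low, high, size=num_ref)

    a = np.full(len(samples), 1.0 / len(samples))  # empirical weights
    b = np.full(num_ref, 1.0 / num_ref)
    cost = np.abs(samples[:, None] - reference[None, :])  # |x - y| ground cost
    return ot.emd2(a, b, cost)

rng = np.random.default_rng(0)
rotations = rng.normal(0.0, 0.1, size=64)  # a peaky, non-uniform camera sampler
print(emd_to_uniform(rotations, -np.pi / 4, np.pi / 4, rng=rng))
```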

2023/3D generation on ImageNet/images.zip ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4433c5f5b013078b6afb45a429eb22835080418c818dfdad33f84ca0195af70c
+size 2150465

2023/3D generation on ImageNet/layout.json ADDED
The diff for this file is too large to render. See raw diff

2023/A Call to Reflect on Evaluation Practices for Failure Detection in Image Classification/51332d35-240d-4fdc-831c-340986fdb152_content_list.json ADDED
The diff for this file is too large to render. See raw diff

2023/A Call to Reflect on Evaluation Practices for Failure Detection in Image Classification/51332d35-240d-4fdc-831c-340986fdb152_model.json ADDED
The diff for this file is too large to render. See raw diff

2023/A Call to Reflect on Evaluation Practices for Failure Detection in Image Classification/51332d35-240d-4fdc-831c-340986fdb152_origin.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3509d21078d9d0d40ae98c81bfe93a822fa7a3602cff01eb05a6899204dad1cb
+size 8217535

2023/A Call to Reflect on Evaluation Practices for Failure Detection in Image Classification/full.md ADDED
The diff for this file is too large to render. See raw diff

2023/A Call to Reflect on Evaluation Practices for Failure Detection in Image Classification/images.zip ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ac97f65a094f816564d91987bf5b58d1e493dc93a74ec9a45a477a75edd24198
+size 2456540

2023/A Call to Reflect on Evaluation Practices for Failure Detection in Image Classification/layout.json ADDED
The diff for this file is too large to render. See raw diff

2023/A Kernel Perspective of Skip Connections in Convolutional Networks/74f46840-0d16-43b0-a684-c39a24459942_content_list.json ADDED
The diff for this file is too large to render. See raw diff

2023/A Kernel Perspective of Skip Connections in Convolutional Networks/74f46840-0d16-43b0-a684-c39a24459942_model.json ADDED
The diff for this file is too large to render. See raw diff

2023/A Kernel Perspective of Skip Connections in Convolutional Networks/74f46840-0d16-43b0-a684-c39a24459942_origin.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2ee3a72dfa31c48d87054cfa1ca1c9fd952920644c288f8fe1e8c1371b171540
+size 737199

2023/A Kernel Perspective of Skip Connections in Convolutional Networks/full.md ADDED
The diff for this file is too large to render. See raw diff

2023/A Kernel Perspective of Skip Connections in Convolutional Networks/images.zip ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ea8dc0b2635482f834cb2f155237e491c6ec64bf3115e244f6478859a5154eec
+size 2767897

2023/A Kernel Perspective of Skip Connections in Convolutional Networks/layout.json ADDED
The diff for this file is too large to render. See raw diff

2023/Addressing Parameter Choice Issues in Unsupervised Domain Adaptation by Aggregation/e5d03060-3337-4f77-9e3e-da15fe7c86ba_content_list.json ADDED
The diff for this file is too large to render. See raw diff

2023/Addressing Parameter Choice Issues in Unsupervised Domain Adaptation by Aggregation/e5d03060-3337-4f77-9e3e-da15fe7c86ba_model.json ADDED
The diff for this file is too large to render. See raw diff

2023/Addressing Parameter Choice Issues in Unsupervised Domain Adaptation by Aggregation/e5d03060-3337-4f77-9e3e-da15fe7c86ba_origin.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3ab021313316f67086c45b4a52d008c65cf200abdb15b1c55dbae7074043b4b7
+size 869149

2023/Addressing Parameter Choice Issues in Unsupervised Domain Adaptation by Aggregation/full.md ADDED
The diff for this file is too large to render. See raw diff

2023/Addressing Parameter Choice Issues in Unsupervised Domain Adaptation by Aggregation/images.zip ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5f6c1b01e5919202e5e9830bf2f1a6710daa0247e63fdf88a64e4c6ce450eb3c
+size 6428195

2023/Addressing Parameter Choice Issues in Unsupervised Domain Adaptation by Aggregation/layout.json ADDED
The diff for this file is too large to render. See raw diff

2023/Agree to Disagree_ Diversity through Disagreement for Better Transferability/5382925d-b4d2-4e5f-bb7e-98e018f67e83_content_list.json ADDED
The diff for this file is too large to render. See raw diff

2023/Agree to Disagree_ Diversity through Disagreement for Better Transferability/5382925d-b4d2-4e5f-bb7e-98e018f67e83_model.json ADDED
The diff for this file is too large to render. See raw diff

2023/Agree to Disagree_ Diversity through Disagreement for Better Transferability/5382925d-b4d2-4e5f-bb7e-98e018f67e83_origin.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4a968e007979c711e23c1429cebeeeefdf77955ea91463960bcc9ac51469b323
+size 2134668

2023/Agree to Disagree_ Diversity through Disagreement for Better Transferability/full.md ADDED
@@ -0,0 +1,565 @@

# AGREE TO DISAGREE: DIVERSITY THROUGH DISAGREEMENT FOR BETTER TRANSFERABILITY
|
| 2 |
+
|
| 3 |
+
Matteo Pagliardini EPFL
|
| 4 |
+
|
| 5 |
+
Martin Jaggi EPFL
|
| 6 |
+
|
| 7 |
+
François Fleuret EPFL
|
| 8 |
+
|
| 9 |
+
Sai Praneeth Karimireddy EPFL & UC Berkeley
|
| 10 |
+
|
| 11 |
+
# ABSTRACT
|
| 12 |
+
|
| 13 |
+
Gradient-based learning algorithms have an implicit simplicity bias which in effect can limit the diversity of predictors being sampled by the learning procedure. This behavior can hinder the transferability of trained models by (i) favoring the learning of simpler but spurious features — present in the training data but absent from the test data — and (ii) by only leveraging a small subset of predictive features. Such an effect is especially magnified when the test distribution does not exactly match the train distribution—referred to as the Out of Distribution (OOD) generalization problem. However, given only the training data, it is not always possible to apriori assess if a given feature is spurious or transferable. Instead, we advocate for learning an ensemble of models which capture a diverse set of predictive features. Towards this, we propose a new algorithm D-BAT (Diversity-By-disAgreement Training), which enforces agreement among the models on the training data, but disagreement on the OOD data. We show how D-BAT naturally emerges from the notion of generalized discrepancy, as well as demonstrate in multiple experiments how the proposed method can mitigate shortcut-learning, enhance uncertainty and OOD detection, as well as improve transferability.
|
| 14 |
+
|
| 15 |
+
# 1 INTRODUCTION
|
| 16 |
+
|
| 17 |
+
While gradient-based learning algorithms such as Stochastic Gradient Descent (SGD), are nowadays ubiquitous in the training of Deep Neural Networks (DNNs), it is well known that the resulting models are (i) brittle when exposed to small distribution shifts (Beery et al., 2018; Sun et al., 2016; Amodei et al., 2016), (ii) can easily be fooled by small adversarial perturbations (Szegedy et al., 2014), (iii) tend to pick up spurious correlations (McCoy et al., 2019; Oakden-Rayner et al., 2020; Geirhos et al., 2020) — present in the training data but absent from the downstream task —, as well as (iv) fail to provide adequate uncertainty estimates (Kim et al., 2016; van Amersfoort et al., 2020; Liu et al., 2021b). Recently those learning algorithms have been investigated for their implicit bias toward simplicity — known as Simplicity Bias (SB), seen as one of the reasons behind their superior generalization properties (Arpit et al., 2017; Dziugaite & Roy, 2017). While for deep neural networks, simpler decision boundaries are often seen as less likely to overfit, Shah et al. (2020), Pezeshki et al. (2021) demonstrated that the SB can still cause the aforementioned issues. In particular, they show how the SB can be extreme, compelling predictors to rely only on the simplest feature available, despite the presence of equally or even more predictive complex features.
|
| 18 |
+
|
| 19 |
+
Its effect is greatly increased when we consider the more realistic out of distribution (OOD) setting (Ben-Tal et al., 2009), in which the source and target distributions are different, known to be a challenging problem (Sagawa et al., 2020; Krueger et al., 2021). The difference between the two domains can be categorized into either a distribution shift — e.g. a lack of samples in certain parts of the data manifold due to limitations of the data collection pipeline —, or as simply having completely different distributions. In the first case, the SB in its extreme form would increase the chances of learning to rely on spurious features — shortcuts not generalizing to the target distribution. Classic manifestations of this in vision applications are when models learn to rely mostly on textures or backgrounds instead of more complex and likely more generalizable semantic features such as using shapes (Beery et al., 2018; Ilyas et al., 2019; Geirhos et al., 2020). In the second instance, by relying only on the simplest feature, and being invariant to more complex ones, the SB would cause confident predictions (low uncertainty) on completely OOD samples. This even if complex features
|
| 20 |
+
|
| 21 |
+

|
| 22 |
+
(a) training data $\hat{\mathcal{D}}$
|
| 23 |
+
|
| 24 |
+

|
| 25 |
+
(b) model 1
|
| 26 |
+
|
| 27 |
+

|
| 28 |
+
(c) model 2
|
| 29 |
+
Figure 1: Example of applying D-BAT on a simple 2D toy example similar to the LMS-5 dataset introduced by Shah et al. (2020). The two classes, red and blue, can easily be separated by a vertical decision boundary. Other ways to separate the two classes — with horizontal lines for instance — are more complex, i.e. they require more hyperplanes. The simplicity bias will push models to systematically learn the simpler feature, as in the second column (b). Using D-BAT, we are able to learn the model in column (c), which relies on a more complex decision boundary, effectively overcoming the simplicity bias. The ensemble $h_{\text{ens}}(x) = h_1(x) + h_2(x)$, in column (d), outputs a flat distribution at points where the two models disagree, effectively maximizing the uncertainty at those points. In this experiment, the samples from $\mathcal{D}_{\text{ood}}$ were obtained by computing adversarial perturbations, see App. D.2 for more details.
|
| 30 |
+
|
| 31 |
+

|
| 32 |
+
(d) ensemble
|
| 33 |
+
|
| 34 |
+
are contradicting simpler ones. This brings us to our goal of deriving a method which can (i) learn more transferable features, better suited to generalize under distribution shifts, and (ii) provide accurate uncertainty estimates also for OOD samples.
|
| 35 |
+
|
| 36 |
+
We aim to achieve those two objectives through learning an ensemble of diverse predictors $(h_1, \ldots, h_K)$ , with $h: \mathcal{X} \to \mathcal{Y}$ , and $K$ being the ensemble size. Suppose that our training data is drawn from the distribution $\mathcal{D}$ , and $\mathcal{D}_{\mathrm{ood}}$ is the distribution of OOD data on which we will be tested. Importantly, $\mathcal{D}$ and $\mathcal{D}_{\mathrm{ood}}$ may have non-overlapping support, and $\mathcal{D}_{\mathrm{ood}}$ is not known during training. Our proposed method, D-BAT (Diversity-By-disAgreement Training), relies on the following idea:
|
| 37 |
+
|
| 38 |
+
Diverse hypotheses should agree on the source distribution $\mathcal{D}$ while disagreeing on the OOD distribution $\mathcal{D}_{\mathrm{ood}}$.
|
| 39 |
+
|
| 40 |
+
Intuitively, a set of hypotheses should agree on what is known, i.e. on $\mathcal{D}$, while formulating different interpretations of what is not known, i.e. on $\mathcal{D}_{\mathrm{ood}}$. Even if each individual predictor might be wrongly confident on OOD samples, as long as they predict different outcomes, the resulting uncertainty of the ensemble on those samples will be increased. Disagreement on $\mathcal{D}_{\mathrm{ood}}$ can itself be enough to promote learning diverse representations of instances of $\mathcal{D}$. In the context of object detection, if one model $h_1$ relies on textures only, this model will generate predictions on $\mathcal{D}_{\mathrm{ood}}$ based on textures; when enforcing disagreement on $\mathcal{D}_{\mathrm{ood}}$, a second model $h_2$ would then be discouraged from using textures in order to disagree with $h_1$ — and would consequently look for a different hypothesis to classify instances of $\mathcal{D}$, e.g. using shapes. This process is illustrated in Fig. 2. A direct 2D application of our algorithm can be seen in Fig. 1. Once trained, the ensemble can either be used by forming a weighted average of the probability distribution from each hypothesis, or — if given some labeled data from the downstream task — by selecting one particular hypothesis.
|
| 41 |
+
|
| 42 |
+
# Contributions. Our results can be summarized as:
|
| 43 |
+
|
| 44 |
+
- We introduce D-BAT, a simple yet efficient novel diversity-inducing regularizer which enables training ensembles of diverse predictors.
|
| 45 |
+
- We provide a proof, in a simplified setting, that D-BAT promotes diversity, encouraging the models to utilize different predictive features.
|
| 46 |
+
- We show on several datasets of varying complexity how the induced diversity can help to (i) tackle shortcut learning, and (ii) improve uncertainty estimation and transferability.
|
| 47 |
+
|
| 48 |
+
# 2 RELATED WORK
|
| 49 |
+
|
| 50 |
+
Diversity in ensembles. It is intuitive that in order to gain from assembling several predictors $h_1, \ldots, h_K$ , those should be diverse. The bias-variance-covariance decomposition (Ueda & Nakano, 1996), which generalizes the bias variance decomposition to ensembles, shows how the error decreases with the covariance of the members of the ensemble. Despite its importance, there is still no well accepted definition and understanding of diversity, and it is often derived from prediction errors of members of the ensemble (Zhou, 2012). This creates a conflict between trying to increase accuracy of individual predictors $h$ , and trying to increase diversity. In this view, creating a good ensemble is seen as striking a good balance between individual performance and diversity. To promote diversity in ensembles, a classic approach is to add stochasticity into the training by using different subsets of the training data for each predictor (Breiman, 1996), or using different data augmentation methods (Stickland & Murray, 2020). Another approach is to add orthogonality constrains on the predictor's gradient (Ross et al., 2020; Kariyappa & Qureshi, 2019). Recently, the information bottleneck (Tishby et al., 2000) has been used to promote ensemble diversity (Rame & Cord, 2021; Sinha et al., 2021). Unlike the aforementioned methods, D-BAT can be trained on the full dataset, it importantly does not set constraints on the output of in-distribution samples, but on a separate OOD distribution. Moreover, as opposed to Sinha et al. (2021), our individual predictors do not share the same encoder.
|
| 51 |
+
|
| 52 |
+
Simplicity bias. While the simplicity bias, by promoting simpler decision boundaries, can act as an implicit regularizer and improve generalization (Arpit et al., 2017; Gunasekar et al., 2018), it also contributes to the brittleness of gradient-based machine learning (Shah et al., 2020). Recently, Teney et al. (2021) proposed to evade the simplicity bias by adding gradient orthogonality constraints, not at the output level, but at an intermediary hidden representation obtained after a shared and fixed encoder. While their results are promising, the reliance on a pre-trained encoder limits the type of features that can be used to the set of features extracted by the encoder; in particular, if a feature was already discarded by the encoder due to the SB, it is effectively lost. In contrast, our method does not rely on a pre-trained encoder and comparatively requires a very small ensemble size to counter the simplicity bias. A more detailed comparison with D-BAT is provided in App. F.1.
|
| 53 |
+
|
| 54 |
+
Shortcut learning. The failures of DNNs across application domains due to shortcut learning have been documented extensively in (Geirhos et al., 2020). They introduce a taxonomy of predictors distinguishing between (i) predictors which can be learnt from the training algorithms (ii) predictors performing well on in-distribution training data, (iii) predictors performing well on in-distribution test data, and finally (iv) predictors performing well on in-distribution and OOD test data. The last category being the intended solutions. In our experiments, by learning diverse predictors, D-BAT increases the chance of finding one solution generalizing to both in and out of distribution test data, see § 4.1 for more details.
|
| 55 |
+
|
| 56 |
+
OOD generalization. Generalizing to distributions not seen during training is accomplished by two approaches: robust training, and invariant learning. In the former, the test distribution is assumed to be within a set of known plausible distributions (say $\mathcal{U}$ ). Then, robust training minimizes the loss over the worst possible distribution in $\mathcal{U}$ (Ben-Tal et al., 2009). Numerous approaches exist to defining the set $\mathcal{U}$ - see survey by (Rahimian & Mehrotra, 2019). Most recently, Sagawa et al. (2020) model the set of plausible domains as the convex hull over predefined subgroups of datapoints and Krueger et al. (2021) extend this by taking affine combinations beyond the convex hull. Our approach also borrows from this philosophy - when we do not know the labels of the OOD data, we assume the worst case and try predict as diverse labels as possible. This is similar to the notion of discrepancy introduced in domain adaptation theory (Mansour et al., 2009; Cortes & Mohri, 2011; Cortes et al., 2019). A different line of work defines a set of environments and asks that our outputs be 'invariant' (i.e. indistinguishable) among the different environments (Bengio et al., 2013; Arjovsky et al., 2019; Koyama & Yamaguchi, 2020). When only a single training environment is present, like in our setting, this is akin to adversarial domain adaptation. Here, the data of one domain is modified to be indistinguishable to the other (Ganin et al., 2016; Long et al., 2017). However, this approach is fundamentally limited. E.g. in Fig. 2 a model which classifies both the crane and the porcupine as a crane is invariant, but incorrect. Furthermore, it is worth noting that prior work in OOD generalization are often considering datasets where the spurious feature is not fully predictive in the training distribution (Zhang et al., 2021; Saito et al., 2017; 2018; Nam et al., 2020; Liu et al., 2021a), and fail in our challenging settings of § 4.1 (see App. F for more in-depth comparisons). Lastly, parallel to our work, Lee et al. (2022) adopt a similar approach and improve OOD generalization by
|
| 57 |
+
|
| 58 |
+
Figure 2: Illustration of how D-BAT can promote learning diverse features. Consider the task of classifying bird pictures among several classes. The red color represents the attention of a first model $h_1$ . This model learnt to use some simple yet discriminative feature to recognise an African Crowned Crane on the left. Now suppose we use the top image $\mathcal{D}_{\mathrm{ood}}$ on which the models must disagree. $h_2$ cannot again use the same feature as $h_1$ since then it will not disagree on $\mathcal{D}_{\mathrm{ood}}$ . Instead, $h_2$ would look for other distinctive features of the crane which are not present on the right e.g. using its beak and red throat pouch.
|
| 59 |
+

|
| 60 |
+
$\in \mathcal{D}$
|
| 61 |
+
|
| 62 |
+

|
| 63 |
+
$\in \mathcal{D}_{\mathrm{ood}}$
|
| 64 |
+
|
| 65 |
+
minimizing the mutual information on unlabeled target data between pairs of predictors. However, their work does not investigate uncertainty estimation and is not motivated by domain adaptation theory as ours is (Mansour et al., 2009), see App. F.7 for a more in-depth comparison.
|
| 66 |
+
|
| 67 |
+
Uncertainty estimation. DNNs are notoriously unable to provide reliable confidence estimates, which is impeding the progress of the field in safety critical domains (Begoli et al., 2019), as well as hurting models interpretability (Kim et al., 2016). To improve the confidence estimates of DNNs, Gal & Ghahramani (2016) propose to use dropout at inference time, a method referred to as MC-Dropout. Other popular methods used for uncertainty estimation are Bayesian Neural Networks (BNNs) (Hernández-Lobato & Adams, 2015) and Gaussian Processes (Rasmussen & Williams, 2005). All those methods but gaussian processes, were recently shown to fail to adequately provide high uncertainty estimates on OOD samples away from the boundary decision (van Amersfoort et al., 2020; Liu et al., 2021b). We show in our experiments how D-BAT can help to associate high uncertainty to those samples by maximizing the disagreement outside of $\mathcal{D}$ (see § 4.2, as well as Fig.1).
|
| 68 |
+
|
| 69 |
+
# 3 DIVERSITY THROUGH DISAGREEMENT
|
| 70 |
+
|
| 71 |
+
# 3.1 MOTIVATING D-BAT
|
| 72 |
+
|
| 73 |
+

|
| 74 |
+
Figure 3: If $h_1$ is computed by minimizing the training loss on $\mathcal{D}$, its loss on the OOD task $\mathcal{D}_{\mathrm{ood}}$ may be very large, i.e. $h_1$ may be very far from the optimal OOD model $h_{\mathrm{ood}}$ as measured by $\mathcal{L}_{\mathcal{D}_{\mathrm{ood}}} (h_1, h_{\mathrm{ood}})$ (left). To mitigate this, we propose to learn a diverse ensemble $\{h_1, \dots, h_4\}$ which is maximally 'spread-out' (with distance measured using $\mathcal{L}_{\mathcal{D}_{\mathrm{ood}}} (\cdot, \cdot)$) and covers the entire space of possible solutions $\mathcal{H}_t^\star$. This minimizes the distance between the unknown $h_{\mathrm{ood}}$ and our learned ensemble, ensuring we learn transferable features with good performance on $\mathcal{D}_{\mathrm{ood}}$.
|
| 75 |
+
|
| 76 |
+

|
| 77 |
+
|
| 78 |
+

|
| 79 |
+
|
| 80 |
+

|
| 81 |
+
|
| 82 |
+
We will first define some notation and explain why standard training fails for OOD generalization. Then, we introduce the concept of discrepancy which will motivate our D-BAT algorithm.
|
| 83 |
+
|
| 84 |
+
Setup. Let us formally define the OOD problem. $\mathcal{X}$ is the input space and $\mathcal{Y}$ the output space; we define a domain as a pair of a distribution over $\mathcal{X}$ and a labeling function $h:\mathcal{X}\to \mathcal{Y}$. Given a distribution $\mathcal{D}$ over $\mathcal{X}$, two labeling functions $h_1$ and $h_2$, and a loss function $L:\mathcal{Y}\times \mathcal{Y}\rightarrow \mathbb{R}_{+}$, we define the expected loss as $\mathcal{L}_{\mathcal{D}}(h_1,h_2) = \mathbb{E}_{x\sim \mathcal{D}}[L(h_1(x),h_2(x))]$.
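As a concrete reading of this definition, the expected loss between two predictors can be estimated by Monte Carlo sampling; the short sketch below is our own illustration and uses the 0-1 loss.

```python
import numpy as np

def expected_loss(h1, h2, samples):
    """Monte Carlo estimate of L_D(h1, h2) = E_{x~D}[L(h1(x), h2(x))].

    Here L is the 0-1 loss, so the estimate is simply the disagreement rate
    of the two labeling functions on samples drawn from D.
    """
    preds1 = np.array([h1(x) for x in samples])
    preds2 = np.array([h2(x) for x in samples])
    return np.mean(preds1 != preds2)

# Toy example: two hypotheses on 1D inputs drawn from D = N(0, 1).
rng = np.random.default_rng(0)
x_samples = rng.normal(size=10_000)
h1 = lambda x: int(x > 0.0)
h2 = lambda x: int(x > 0.5)
print(expected_loss(h1, h2, x_samples))  # approx. P(0 < x < 0.5) = 0.19
```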
|
| 85 |
+
|
| 86 |
+
Now, suppose that the training data is drawn from the domain $(\mathcal{D}_t, h_t)$, but we will be tested on a different domain $(\mathcal{D}_{\mathrm{ood}}, h_{\mathrm{ood}})$. While the labelling function $h_{\mathrm{ood}}$ is unknown, we assume that we have access to unlabelled samples from $\mathcal{D}_{\mathrm{ood}}$.
|
| 87 |
+
|
| 88 |
+
Finally, let $\mathcal{H}$ be the set of all labelling functions, i.e. the set of all possible prediction models, and further define $\mathcal{H}_t^\star$ and $\mathcal{H}_{\mathrm{ood}}^{\star}$ to be the sets of optimal labelling functions on the train and the OOD domains:
|
| 89 |
+
|
| 90 |
+
$$
\mathcal{H}_{t}^{\star} := \operatorname*{arg\,min}_{h\in \mathcal{H}} \mathcal{L}_{\mathcal{D}_{t}}(h, h_{\mathrm{t}}), \qquad \mathcal{H}_{\mathrm{ood}}^{\star} := \operatorname*{arg\,min}_{h\in \mathcal{H}} \mathcal{L}_{\mathcal{D}_{\mathrm{ood}}}(h, h_{\mathrm{ood}}).
$$
|
| 93 |
+
|
| 94 |
+
We assume that there exists an ideal transferable function $h^{\star} \in \mathcal{H}_t^{\star} \cap \mathcal{H}_{\mathrm{ood}}^{\star}$. This assumption captures the reality that the training task and the OOD testing task are closely related to each other. Otherwise, we would not expect any OOD generalization.
|
| 95 |
+
|
| 96 |
+
Beyond standard training. Just using the training data, standard training would train a model $h_{\mathrm{ERM}} \in \mathcal{H}_t^\star$. However, as we discussed in the introduction, if we use gradient descent to find the ERM solution, then $h_{\mathrm{ERM}}$ will likely be the simplest model, i.e. it will likely pick up spurious correlations in $\mathcal{D}_t$ which are not present in $\mathcal{D}_{\mathrm{ood}}$. Thus, the error on OOD data might be very high:
|
| 97 |
+
|
| 98 |
+
$$
\mathcal{L}_{\mathcal{D}_{\mathrm{ood}}}\left(h_{\mathrm{ERM}}, h_{\mathrm{ood}}\right) \leq \max_{h \in \mathcal{H}_{t}^{\star}} \mathcal{L}_{\mathcal{D}_{\mathrm{ood}}}\left(h, h_{\mathrm{ood}}\right).
$$
|
| 101 |
+
|
| 102 |
+
Instead, we would ideally like to minimize the right-hand side in order to find $h^{\star}$. The main difficulty is that we do not have access to the OOD labels $h_{\mathrm{ood}}$. So we can instead use the following proxy:
|
| 103 |
+
|
| 104 |
+
$$
\mathcal{L}_{\mathcal{D}_{\mathrm{ood}}}\left(h_{1}, h_{\mathrm{ood}}\right) = \max_{h_{2} \in \mathcal{H}_{t}^{\star} \cap \mathcal{H}_{\mathrm{ood}}^{\star}} \mathcal{L}_{\mathcal{D}_{\mathrm{ood}}}\left(h_{1}, h_{2}\right) \leq \max_{h_{2} \in \mathcal{H}_{t}^{\star}} \mathcal{L}_{\mathcal{D}_{\mathrm{ood}}}\left(h_{1}, h_{2}\right)
$$
|
| 107 |
+
|
| 108 |
+
In the above we used the two following facts: (i) that $\forall h_2\in \mathcal{H}_{\mathrm{ood}}^\star$, $\mathcal{L}_{\mathcal{D}_{\mathrm{ood}}}(h_1,h_{\mathrm{ood}}) = \mathcal{L}_{\mathcal{D}_{\mathrm{ood}}}(h_1,h_2)$, and (ii) that $\mathcal{H}_t^\star \cap \mathcal{H}_{\mathrm{ood}}^\star$ is non-empty. Recall that $\mathcal{H}_t^\star = \arg \min_{h\in \mathcal{H}}\mathcal{L}_{\mathcal{D}_t}(h,h_{\mathrm{t}})$. This means that in order to minimize the upper bound, we want to pick $h_2$ so that it minimizes the risk on our training data (i.e. belongs to $\mathcal{H}_t^\star$), but otherwise maximally disagrees with $h_1$ on the OOD data. That way we minimize the worst-case expected loss $\min_{h\in \{h_1,h_2\}}\max_{h'\in \mathcal{H}_t^\star}\mathcal{L}_{\mathcal{D}_{\mathrm{ood}}}(h,h')$ — this process is illustrated in Fig. 3. The latter is closely related to the concept of discrepancy in domain adaptation (Mansour et al., 2009; Cortes et al., 2019). However, the main difference between the definitions is that we restrict the maximum to the set $\mathcal{H}_t^\star$, whereas the standard notions use an unrestricted maximum. Thus, our version is tighter when the train and OOD tasks are closely related.
|
| 109 |
+
|
| 110 |
+
Deriving D-BAT. We make two final changes to the discrepancy term above to derive D-BAT. First, if $\mathcal{L}_{\mathcal{D}}(h_1,h_2)$ is a loss function which quantifies disagreement, then suppose we have another loss function $\mathcal{A}_{\mathcal{D}}(h_1,h_2)$ which quantifies agreement. Then, we can minimize agreement instead of maximizing disagreement:
|
| 111 |
+
|
| 112 |
+
$$
\operatorname*{arg\,min}_{h_{2}\in \mathcal{H}_{t}^{\star}} \mathcal{A}_{\mathcal{D}}(h_{1},h_{2}) = \operatorname*{arg\,max}_{h_{2}\in \mathcal{H}_{t}^{\star}} \mathcal{L}_{\mathcal{D}}(h_{1},h_{2}).
$$
|
| 115 |
+
|
| 116 |
+
Secondly, we relax the constrained formulation $h_2 \in \mathcal{H}_t^\star$ by adding a penalty term with weight $\alpha$ as
|
| 117 |
+
|
| 118 |
+
$$
h_{\text{D-BAT}} \in \operatorname*{arg\,min}_{h_{2} \in \mathcal{H}} \underbrace{\mathcal{L}_{\mathcal{D}_{t}}(h_{2}, h_{t})}_{\text{fit train data}} + \alpha \underbrace{\mathcal{A}_{\mathcal{D}_{\mathrm{ood}}}(h_{1}, h_{2})}_{\text{disagree on OOD}}.
$$
|
| 121 |
+
|
| 122 |
+
The above is the core of our D-BAT procedure: given a first model $h_1$, we train a second model $h_2$ to fit the training data $\mathcal{D}$ while disagreeing with $h_1$ on $\mathcal{D}_{\mathrm{ood}}$. Thus, we have
|
| 123 |
+
|
| 124 |
+
$$
\mathcal{L}_{\mathcal{D}_{\mathrm{ood}}}(h_{1}, h_{\mathrm{ood}}) \leq \max_{h_{2} \in \mathcal{H}_{t}^{\star}} \mathcal{L}_{\mathcal{D}_{\mathrm{ood}}}(h_{1}, h_{2}) \approx \mathcal{L}_{\mathcal{D}_{\mathrm{ood}}}(h_{1}, h_{\text{D-BAT}}),
$$
|
| 127 |
+
|
| 128 |
+
implying that D-BAT gives us a good proxy for the unknown OOD loss, which can be used for uncertainty estimation. Following a similar argument for $h_1$, we arrive at the following training procedure:
|
| 129 |
+
|
| 130 |
+
$$
\min_{h_{1}, h_{2}} \frac{1}{2}\left(\mathcal{L}_{\mathcal{D}_{t}}\left(h_{1}, h_{t}\right) + \mathcal{L}_{\mathcal{D}_{t}}\left(h_{2}, h_{t}\right)\right) + \alpha\, \mathcal{A}_{\mathcal{D}_{\mathrm{ood}}}\left(h_{1}, h_{2}\right).
$$
|
| 133 |
+
|
| 134 |
+
However, we found the training dynamics for simultaneously learning $h_1$ and $h_2$ to be unstable. Hence, we propose a sequential variant which we describe next.
|
| 135 |
+
|
| 136 |
+
# 3.2 ALGORITHM DESCRIPTION
|
| 137 |
+
|
| 138 |
+
Binary classification formulation. Concretely, given a binary classification task with $\mathcal{Y} = \{0,1\}$, we train two models sequentially. The training of the first model $h_1$ is done in the classical way, minimizing its empirical classification loss $\mathcal{L}(h_1(\boldsymbol{x}),y)$ over samples $(\boldsymbol{x},y)$ from $\hat{\mathcal{D}}$. Once $h_1$ is trained, we train the second model $h_2$ adding a term $\mathcal{A}_{\tilde{\boldsymbol{x}}} (h_1,h_2)$ representing the agreement on samples $\tilde{\boldsymbol{x}}$ of $\hat{\mathcal{D}}_{\mathrm{ood}}$, with some weight $\alpha \geq 0$:
|
| 139 |
+
|
| 140 |
+
$$
h_{2}^{\star} \in \operatorname*{arg\,min}_{h_{2} \in \mathcal{H}} \frac{1}{N}\Big(\sum_{(\boldsymbol{x}, y) \in \hat{\mathcal{D}}} \mathcal{L}(h_{2}(\boldsymbol{x}), y) + \alpha \sum_{\tilde{\boldsymbol{x}} \in \hat{\mathcal{D}}_{\mathrm{ood}}} \mathcal{A}_{\tilde{\boldsymbol{x}}}(h_{1}, h_{2})\Big)
$$
|
| 143 |
+
|
| 144 |
+
Given $p_{h,\boldsymbol{x}}^{(y)}$ the probability of class $y$ predicted by $h$ given $\boldsymbol{x}$ , the agreement $\mathcal{A}_{\tilde{\boldsymbol{x}}} (h_1, h_2)$ is defined as:
|
| 145 |
+
|
| 146 |
+
$$
\mathcal{A}_{\tilde{\boldsymbol{x}}}\left(h_{1}, h_{2}\right) = -\log\left(p_{h_{1}, \tilde{\boldsymbol{x}}}^{(0)} \cdot p_{h_{2}, \tilde{\boldsymbol{x}}}^{(1)} + p_{h_{1}, \tilde{\boldsymbol{x}}}^{(1)} \cdot p_{h_{2}, \tilde{\boldsymbol{x}}}^{(0)}\right) \tag{AG}
$$
|
| 149 |
+
|
| 150 |
+
In the above formula, the term inside the log can be derived from the expected loss when $L$ is the 0-1 loss and $h_1, h_2$ are independent. See App. B for more details.
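For concreteness, a minimal PyTorch sketch of this binary agreement term (AG) could look as follows; the function name and the use of raw logits as inputs are our own assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def binary_agreement_loss(logits_1, logits_2, eps=1e-7):
    """Agreement term (AG) for binary classification.

    `logits_1` and `logits_2` are (batch, 2) outputs of h1 and h2 on OOD
    samples from D_ood. Minimizing this term pushes the two models to predict
    opposite classes on those samples.
    """
    p1 = F.softmax(logits_1, dim=1)
    p2 = F.softmax(logits_2, dim=1)
    # p(h1 says 0) * p(h2 says 1) + p(h1 says 1) * p(h2 says 0)
    disagree_prob = p1[:, 0] * p2[:, 1] + p1[:, 1] * p2[:, 0]
    return -torch.log(disagree_prob + eps).mean()

# Example: two models that agree strongly, so the penalty is large.
logits_1 = torch.tensor([[4.0, -4.0], [3.0, -3.0]])
logits_2 = torch.tensor([[5.0, -5.0], [2.0, -2.0]])
print(binary_agreement_loss(logits_1, logits_2))
```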
|
| 151 |
+
|
| 152 |
+
Multi-class classification formulation. The previous formulation requires a distribution over two labels in order to compute the agreement term (AG). We extend the agreement term $\mathcal{A}(h_1, h_2, \tilde{\pmb{x}})$ to the multi-class setting by binarizing the softmax distributions $h_1(\tilde{\pmb{x}})$ and $h_2(\tilde{\pmb{x}})$ . A simple way to do this is to take as positive class the predicted class of $h_1$ : $\tilde{y} = \mathrm{argmax}(h_1(\tilde{\pmb{x}}))$ with associated probability $p_{h_1, \tilde{\pmb{x}}}^{(\tilde{y})}$ , while grouping the remaining complementary class probabilities in a negative class $\neg \tilde{y}$ . We would then have $p_{h_1, \tilde{\pmb{x}}}^{(\neg \tilde{y})} = 1 - p_{h_1, \tilde{\pmb{x}}}^{(\tilde{y})}$ . We can then use the same bins to binarize the softmax distribution of the second model $h_2(\tilde{x})$ . Another similarly sound approach would be to do the opposite and use the predicted class of $h_2$ instead of $h_1$ . In our experiments both approaches performed well. In Alg.2 we show the second approach, which is a bit more computationally efficient in the case of ensembles of more than 2 predictors, as the binarization bins are built only once, instead of building them for each pair $(h_i, h_m)$ for $0 \leq i < m$ .
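The multi-class variant can then reuse the binary term after binarizing both softmax outputs around one model's predicted class. The sketch below follows the second approach described above (bins defined by the prediction of the model currently being trained); all names are illustrative and this is not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def multiclass_agreement_loss(logits_1, logits_2, eps=1e-7):
    """Binarized agreement term for the multi-class setting.

    The positive bin is the class predicted by h2 on each OOD sample; the
    remaining classes are grouped into a negative bin, and the binary
    agreement term (AG) is applied to the two binarized distributions.
    """
    p1 = F.softmax(logits_1, dim=1)
    p2 = F.softmax(logits_2, dim=1)
    y_tilde = p2.argmax(dim=1, keepdim=True)    # bin chosen from h2's prediction
    p1_pos = p1.gather(1, y_tilde).squeeze(1)   # p_{h1}(y~)
    p2_pos = p2.gather(1, y_tilde).squeeze(1)   # p_{h2}(y~)
    disagree_prob = p1_pos * (1 - p2_pos) + (1 - p1_pos) * p2_pos
    return -torch.log(disagree_prob + eps).mean()

def dbat_step(h2, h1, batch, ood_batch, alpha=0.2):
    """One training step for the second model h2 (h1 is kept frozen)."""
    x, y = batch
    with torch.no_grad():
        ood_logits_1 = h1(ood_batch)
    return F.cross_entropy(h2(x), y) + alpha * multiclass_agreement_loss(
        ood_logits_1, h2(ood_batch))
```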
|
| 153 |
+
|
| 154 |
+
# 3.3 LEARNING DIVERSE FEATURES
|
| 155 |
+
|
| 156 |
+
It is possible, under some simplifying assumptions, to rigorously prove that minimizing $\mathcal{L}_{\text{D-BAT}}$ results in learning predictors which use diverse features. We introduce the following theorem:
|
| 157 |
+
|
| 158 |
+
Theorem 3.1 (D-BAT favors diversity). Given a joint source distribution $\mathcal{D}$ of triplets of random variables $(C, S, Y)$ taking values in $\{0, 1\}^3$. Assume $\mathcal{D}$ has the following PMF: $\mathbb{P}_{\mathcal{D}}(C = c, S = s, Y = y) = 1/2$ if $c = s = y$, and 0 otherwise, which intuitively corresponds to the experiments of §4.1, in which two features (e.g. color and shape) are equally predictive of the label $y$. Assume a first model learnt the posterior distribution $\mathbb{P}_1(Y = 1 \mid C = c, S = s) = c$, meaning that it is invariant to the feature $s$. Given a distribution $\mathcal{D}_{\text{ood}}$ uniform over $\{0, 1\}^3$ outside of the support of $\mathcal{D}$, the posterior solving the D-BAT objective will be $\mathbb{P}_2(Y = 1 \mid C = c, S = s) = s$, invariant to the feature $c$.
|
| 159 |
+
|
| 160 |
+
The proof is provided in App. C. It crucially relies on the fact that $\mathcal{D}_{\mathrm{ood}}$ has positive weight on data points which only contain the alternative feature $s$, or only contain the feature $c$. Thus, as long as $\mathcal{D}_{\mathrm{ood}}$ is supported on a diverse enough dataset with features present in different combinations, we can expect D-BAT to learn models which utilize a variety of such features.
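As a quick sanity check of this statement (our own illustration, not the proof in App. C), one can enumerate the six points of $\mathcal{D}_{\text{ood}}$ and compare the expected agreement term for a second model that copies the feature $c$ against one that uses the feature $s$.

```python
import numpy as np

def expected_agreement(h1, h2, eps=0.01):
    """Expected agreement term over D_ood (uniform on {0,1}^3 minus supp(D)).

    h1, h2 map (c, s) to P(Y=1 | C=c, S=s); probabilities are clipped to
    [eps, 1-eps] so the log stays finite for (near-)deterministic models.
    """
    support_d = {(0, 0, 0), (1, 1, 1)}
    ood = [(c, s, y) for c in (0, 1) for s in (0, 1) for y in (0, 1)
           if (c, s, y) not in support_d]
    total = 0.0
    for c, s, _ in ood:
        p1 = np.clip(h1(c, s), eps, 1 - eps)
        p2 = np.clip(h2(c, s), eps, 1 - eps)
        disagree = (1 - p1) * p2 + p1 * (1 - p2)
        total += -np.log(disagree)
    return total / len(ood)

h1 = lambda c, s: float(c)                             # first model uses c only
print(expected_agreement(h1, lambda c, s: float(c)))   # h2 copies c: high penalty
print(expected_agreement(h1, lambda c, s: float(s)))   # h2 uses s: much lower
```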
|
| 161 |
+
|
| 162 |
+
# 4 EXPERIMENTS
|
| 163 |
+
|
| 164 |
+
We conduct two main types of experiments, (i) we evaluate how D-BAT can mitigate shortcut learning, bypassing simplicity bias, and generalize to OOD distributions, and (ii) we test the uncertainty estimation and OOD detection capabilities of D-BAT models.
|
| 165 |
+
|
| 166 |
+
# 4.1 OOD GENERALIZATION AND AVOIDING SHORTCUTS
|
| 167 |
+
|
| 168 |
+
We estimate our method's ability to avoid spurious correlations and learn more transferable features on 6 different datasets. In this setup, we use labelled training data $\mathcal{D}$ which might contain highly correlated spurious features, and an unlabelled perturbation dataset $\mathcal{D}_{\text{ood}}$. We then test the performance of the learnt model on a test dataset. This test dataset may be drawn from the same distribution as $\mathcal{D}_{\text{ood}}$ (which tests how well D-BAT avoids spurious features), or from a completely different distribution than $\mathcal{D}_{\text{ood}}$ (which tests if D-BAT generalizes to new domains). We compare D-BAT against ERM, both when used to obtain a single model and an ensemble.
|
| 169 |
+
|
| 170 |
+
Our results are summarized in Tab. 1. For each dataset, we report both the best-model accuracy and — when applicable — the best-ensemble accuracy. All experiments in Tab. 1 are with an ensemble of size 2. Among the two models of the ensemble, the best model is selected according to its validation accuracy. We show results for a larger ensemble size of 5 in Fig. 4. Finally in Fig. 4 C (right) we compare the performance of D-BAT against numerous other baseline methods. See Appendix D for additional details on the setup as well as numerous other results.
|
| 171 |
+
|
| 172 |
+
Table 1: Test accuracies on the six datasets described in § 4.1. For each dataset, we compare single model and ensemble test accuracies for D-BAT and ERM. In the left column we consider the scenario where $\mathcal{D}_{\mathrm{ood}}$ is also our test distribution (we can imagine we have access to unlabeled data from the test distribution). In the right column we consider $\mathcal{D}_{\mathrm{ood}}$ and our test distribution to be different, e.g. belonging to different domains. see § 4.1 for more details and a summary of our findings. In bold are the best scores along with any score within standard deviation reach. For datasets with completely spurious correlations, as we know ERM models would fail to learn anything generalizable, we are not interested in using them in an ensemble, hence the missing values for those datasets.
|
| 173 |
+
|
| 174 |
+
<table><tr><td rowspan="3">Dataset D</td><td colspan="4">D_ood = test data (unlabelled)</td><td colspan="4">D_ood ≠ test data</td></tr><tr><td colspan="2">Single Model</td><td colspan="2">Ensemble</td><td colspan="2">Single Model</td><td colspan="2">Ensemble</td></tr><tr><td>ERM</td><td>D-BAT</td><td>ERM</td><td>D-BAT</td><td>ERM</td><td>D-BAT</td><td>ERM</td><td>D-BAT</td></tr><tr><td>C-MNIST</td><td>12.3 ± 0.7</td><td>90.2 ± 3.7</td><td>-</td><td>-</td><td>27.1 ± 2.8</td><td>90.1 ± 1.9</td><td>-</td><td>-</td></tr><tr><td>M/F-D</td><td>52.9 ± 0.1</td><td>94.8 ± 0.3</td><td>-</td><td>-</td><td>52.9 ± 0.1</td><td>89.0 ± 0.6</td><td>-</td><td>-</td></tr><tr><td>M/C-D</td><td>50.0 ± 0.0</td><td>73.3 ± 1.2</td><td>-</td><td>-</td><td>50.0 ± 0.0</td><td>58.0 ± 0.6</td><td>-</td><td>-</td></tr><tr><td>Waterbirds</td><td>86.0 ± 0.5</td><td>88.7 ± 0.2</td><td>85.8 ± 0.4</td><td>87.5 ± 0.0</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Office-Home</td><td>50.4 ± 1.0</td><td>51.1 ± 0.7</td><td>52.0 ± 0.5</td><td>52.7 ± 0.2</td><td>51.7 ± 0.6</td><td>51.7 ± 0.3</td><td>53.9 ± 0.4</td><td>54.5 ± 0.5</td></tr><tr><td>Camelyon17</td><td>80.3 ± 0.4</td><td>93.1 ± 0.3</td><td>80.9 ± 1.5</td><td>91.9 ± 0.4</td><td>80.3 ± 0.4</td><td>88.8 ± 1.4</td><td>80.9 ± 1.5</td><td>85.9 ± 0.9</td></tr></table>
|
| 175 |
+
|
| 176 |
+
Training data $(\mathcal{D})$. We consider two kinds of training data: synthetic datasets with completely spurious correlations, and more real-world datasets over which we do not have any control and which may naturally contain some spurious features. We use the former to have a controlled setup, and the latter to judge our performance in the real world.
|
| 177 |
+
|
| 178 |
+
Datasets with completely spurious correlation: To know whether we learn a shortcut, and to estimate our method's ability to overcome the SB, we design three datasets of varying complexity with known shortcuts, in a similar fashion as Teney et al. (2021). The Colored-MNIST dataset, or C-MNIST for short, consists of MNIST (Lecun & Cortes, 1998) images for which the color and the shape of the digits are equally predictive, i.e. all the 1s are pink, all the 5s are orange, etc. The color being simpler to learn than the shape, the simplicity bias will cause models trained on this dataset to rely solely on the color information while being invariant to the shape information. This dataset is a multiclass dataset with 10 classes. The test distribution consists of images where the label is carried by the shape of the digit and the color is random. Following a similar idea, we build the M/F-Dominoes (M/F-D) dataset by concatenating MNIST images of 0s and 1s with Fashion-MNIST (Xiao et al., 2017) images of coats and dresses. The source distribution consists of images where the MNIST and F-MNIST parts are equally predictive of the label. In the test distribution, the label is carried by the F-MNIST part and the MNIST part is a 0 or 1 MNIST image picked at random. The M/C-Dominoes (M/C-D) dataset is built in the same way, concatenating MNIST digits 0s and 1s with CIFAR-10 (Krizhevsky, 2009) images of cars and trucks. See App. E for samples from those datasets.
|
| 179 |
+
|
| 180 |
+
Natural datasets: To test our method in this more general case we run further experiments on three well-known domain adaptation datasets. We use the Waterbirds (Sagawa et al., 2020) and Camelyon17 (Bandi et al., 2018) datasets from the WILDS collection (Koh et al., 2021). Camelyon17 is an image dataset for cancer detection, where different hospital each provide a unique data part. For those two binary classification datasets, the test distributions are taken to be the pre-defined test splits. We also use the Office-Home dataset from Venkateswara et al. (2017), which consists of images of 65 item categories across 4 domains: Art, Product, Clipart, and Real-World. In our experiments we merge the Product and Clipart domains to use as training, and test on the Real-World domain.
|
| 181 |
+
|
| 182 |
+
Perturbation data $(\mathcal{D}_{\mathrm{ood}})$. As mentioned previously, we consider two scenarios in which the test distribution is (i) drawn from the same distribution as $\mathcal{D}_{\mathrm{ood}}$, or (ii) drawn from a completely different distribution. In practice, in the latter case, we keep the test distribution unchanged and modify $\mathcal{D}_{\mathrm{ood}}$. For C-MNIST, we remove digits 5 to 9 from the training and test distributions and build $\mathcal{D}_{\mathrm{ood}}$ based on those digits associated with random colors. For the M/F-D and M/C-D datasets, we build $\mathcal{D}_{\mathrm{ood}}$ by concatenating MNIST images of 0 and 1 with F-MNIST — respectively CIFAR-10 — categories which are not used in the training distribution (i.e. anything but coats and dresses, resp. trucks and cars); samples from those distributions are shown in App. E. For the Camelyon17 medical imaging dataset, we use unlabeled validation data instead of unlabeled test data, both coming from different hospitals. For the Office-Home dataset, we use the left-out Art domain as $\mathcal{D}_{\mathrm{ood}}$.
|
| 183 |
+
|
| 184 |
+

|
| 185 |
+
(a) Waterbirds
|
| 186 |
+
|
| 187 |
+

|
| 188 |
+
(b) Office-Home
|
| 189 |
+
|
| 190 |
+

|
| 191 |
+
(c) Camelyon17
|
| 192 |
+
Figure 4: All results are in the "$\mathcal{D}_{\mathrm{ood}}$ = test data" setting. (a) and (b): Test accuracies as a function of the ensemble size for both D-BAT and Deep Ensembles (ERM ensembles). We observe a significant advantage of D-BAT on both the Waterbirds and the Office-Home datasets. The difference is especially visible on the Waterbirds dataset, which has a stronger spurious correlation. Results have been obtained by averaging over 3 seeds for the Waterbirds dataset and 6 seeds for the Office-Home dataset. (c): Comparison of D-BAT with several other methods on Camelyon17; results other than D-BAT are taken from Sagawa et al. (2022).
|
| 193 |
+
|
| 194 |
+
# Results and discussion.
|
| 195 |
+
|
| 196 |
+
- D-BAT can tackle extreme spurious correlations. This is unlike prior methods from domain adaptation (Zhang et al., 2021; Saito et al., 2017; 2018; Nam et al., 2020; Liu et al., 2021a) which all fail when the spurious feature is completely correlated with the label, see App. F for an extended discussion and comparison in which we show those methods cannot improve upon ERM in that scenario. First we look at results without D-BAT for the C-MNIST, M/F-D and M/C-D datasets in Tab. 1. Looking at the ERM column, we observe how the test accuracies are near random guessing. This is a verification that without D-BAT, due to the simplicity bias, only the simplest feature is leveraged to predict the label and the models fail to generalize to domains for which the simple feature is spurious. D-BAT however, is effectively promoting models to use diverse features. This is demonstrated by the test accuracies of the best D-BAT model being much higher than of ERM.
|
| 197 |
+
- D-BAT improves generalization to new domains. In Tab. 1, in the case $\mathcal{D}_{\mathrm{ood}} \neq$ test data, we observe that despite differences between $\mathcal{D}_{\mathrm{ood}}$ and the test distribution (e.g. the target distribution for M/C-D uses CIFAR-10 images of cars and trucks whereas $\mathcal{D}_{\mathrm{ood}}$ uses images of frogs, cats, etc., but no cars or trucks), D-BAT is still able to increase the generalization to the test domain.
|
| 198 |
+
- Improved generalization on natural datasets. We observe a significant improvement in test accuracy for all our natural datasets. While the improvement is limited for the Office home dataset when considering a single model, we observe D-BAT ensembles nonetheless outperform ERM ensembles. The improvement is especially evident on the Camelyon17 dataset where D-BAT outperforms many known methods as seen in Fig. 4.c.
|
| 199 |
+
- Ensembles built using D-BAT generalize better. In Fig. 4 we observe how D-BAT ensembles trained on the Waterbirds and Office-Home datasets generalize better.
|
| 200 |
+
|
| 201 |
+
# 4.2 BETTER UNCERTAINTY & OOD DETECTION
|
| 202 |
+
|
| 203 |
+
MNIST setup. We run two experiments to investigate D-BAT's ability to provide good uncertainty estimates. The first one is similar to the MNIST experiment in Liu et al. (2021b): it consists in learning to differentiate MNIST digits 0 from 1. The uncertainty of the model — computed as the entropy — is then estimated for fake interpolated images of the form $t \cdot 1 + (1 - t) \cdot 0$ for $t \in [-1,2]$. An ideal model would assign (i) low uncertainty values for $t$ near 0 and 1, corresponding to in-distribution samples, and (ii) high uncertainty values elsewhere. Liu et al. (2021b) showed that only Gaussian Processes are able to fulfill those two conditions, with most models failing to attribute high uncertainty away from the decision boundary (as can also be seen in Fig. 1 when looking at individual models). We train ensembles of size 2 and average over 20 seeds. For D-BAT, we use as $\mathcal{D}_{\mathrm{ood}}$ the remaining (OOD) digits 2 to 9, along with some random cropping. We use a LeNet.
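A sketch of this evaluation, where we interpolate between a 0 and a 1 image, average the ensemble's softmax outputs, and report the predictive entropy, could look as follows (with random tensors standing in for real MNIST digits and trained models).

```python
import torch
import torch.nn.functional as F

def ensemble_entropy(models, x):
    """Entropy of the averaged predictive distribution of an ensemble."""
    with torch.no_grad():
        probs = torch.stack([F.softmax(m(x), dim=1) for m in models]).mean(0)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)

def interpolation_curve(models, img_zero, img_one, ts):
    """Uncertainty along the line t * img_one + (1 - t) * img_zero."""
    xs = torch.stack([t * img_one + (1 - t) * img_zero for t in ts])
    return ensemble_entropy(models, xs)

# Stand-ins: two random linear "LeNets" and random 28x28 digits.
models = [torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 2))
          for _ in range(2)]
img_zero, img_one = torch.rand(1, 28, 28), torch.rand(1, 28, 28)
ts = torch.linspace(-1, 2, 31)
print(interpolation_curve(models, img_zero, img_one, ts).shape)  # torch.Size([31])
```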
|
| 204 |
+
|
| 205 |
+
MNIST results. The results in Fig. 5 suggest that D-BAT is able to give reliable uncertainty estimates for OOD datapoints, even when those samples are away from the decision boundary. This is in sharp contrast with deep ensembles, which only model uncertainty near the decision boundary.
|
| 206 |
+
|
| 207 |
+

|
| 208 |
+
Figure 5: Entropy of ensembles of two models trained with and without D-BAT (deep-ensemble), for inputs $x$ taken from along line $t \cdot 1 + (1 - t) \cdot 0$ for $t \in [-1,2]$ . In-distribution samples are obtained for $t \in \{0,1\}$ . All ensembles have a similar test accuracy of $99\%$ . Unlike deep ensembles, D-BAT ensembles are able to correctly give high uncertainty values for points far away from the decision boundary. The standard deviations have been omitted here for clarity, but can be seen in App. D.3.
|
| 209 |
+
|
| 210 |
+

|
| 211 |
+
|
| 212 |
+

|
| 213 |
+
Figure 6: Histogram of predicted probabilities on OOD data. See § 4.2 for more details on the setup. D-BAT ensembles are better calibrated with less confidence on OOD data than deep-ensembles or MC-Dropout models.
|
| 214 |
+
|
| 215 |
+
CIFAR-10 setup. We train ensembles of 4 models and benchmark three different methods on their ability to identify what they do not know. For this, we look at the histograms of the probability of their predicted classes on OOD samples. As training set we use the CIFAR-10 classes $\{0,1,2,3,4\}$. We use the CIFAR-100 (Krizhevsky, 2009) test set as OOD samples to compute the histograms. For D-BAT we use the remaining CIFAR-10 classes, $\{5,6,7,8,9\}$, as $\mathcal{D}_{\mathrm{ood}}$, and set $\alpha$ to 0.2. Histograms are averaged over 5 seeds. The three methods considered are simple deep-ensembles (Lakshminarayanan et al., 2017), MC-Dropout models (Gal & Ghahramani, 2016), and D-BAT ensembles. For all three methods we use a modified ResNet-18 (He et al., 2016) with added dropout to accommodate MC-Dropout, with a dropout probability of 0.2. For MC-Dropout, we compute uncertainty estimates by sampling 20 predictive distributions.
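The quantity shown in the histograms of Fig. 6 can be computed as sketched below; `ensemble` and `ood_loader` are placeholder names, and averaging the members' softmax outputs is one common way (our assumption) to form the ensemble prediction.

```python
# Sketch: collect the probability assigned to the predicted class on OOD data.
import torch

@torch.no_grad()
def predicted_class_probabilities(ensemble, ood_loader, device="cuda"):
    confidences = []
    for x, _ in ood_loader:                                  # e.g. the CIFAR-100 test set
        x = x.to(device)
        probs = torch.stack([torch.softmax(m(x), dim=1) for m in ensemble]).mean(0)
        confidences.append(probs.max(dim=1).values.cpu())    # probability of the predicted class
    return torch.cat(confidences)
```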
|
| 216 |
+
|
| 217 |
+
CIFAR-10 results. In Fig. 6, we observe for both deep-ensembles and MC-Dropout a large number of predicted probabilities larger than 0.9, which indicates that those methods are overly confident on OOD data. In contrast, most of the predicted probabilities of D-BAT ensembles are smaller than 0.7. The average ensemble accuracies are $92\%$ for deep-ensembles, $91.2\%$ for D-BAT ensembles, and $90.4\%$ for MC-Dropout.
|
| 218 |
+
|
| 219 |
+
# 5 LIMITATIONS
|
| 220 |
+
|
| 221 |
+
Is the simplicity bias gone? While we showed in § 4.1 that our approach can clearly mitigate shortcut learning, a bad choice of the $\mathcal{D}_{\mathrm{ood}}$ distribution can introduce an additional shortcut. In essence, our approach fails to promote diverse representations when differentiating $\mathcal{D}$ from $\mathcal{D}_{\mathrm{ood}}$ is easier than learning to utilize diverse features. Furthermore, we want to stress that learning complex features is not necessarily better than learning simple features, and is not our goal. Complex features are better only insofar as they better explain both the train distribution and OOD data. With our approach, we aim to get a diverse yet simple set of hypotheses. Intuitively, D-BAT tries to find the best hypothesis, which may lie somewhere within the top-k simplest hypotheses, and not necessarily the simplest one which the simplicity bias is pushing us towards.
|
| 222 |
+
|
| 223 |
+
# 6 CONCLUSION
|
| 224 |
+
|
| 225 |
+
Training deep neural networks often results in the models learning to rely on shortcuts present in the training data but absent from the test data. In this work we introduced D-BAT, a novel training method to promote diversity in ensembles of predictors. By encouraging disagreement on OOD data while agreeing on the training data, we effectively (i) give strong incentives to our predictors to rely on diverse features, which (ii) enhances the transferability of the ensemble and (iii) improves uncertainty estimation and OOD detection. Future directions include improving the selection of samples of the OOD distribution and developing stronger theory. D-BAT could also find applications beyond OOD generalization, e.g. (Tifrea et al., 2021) recently used disagreement for anomaly/novelty detection, or to test for biases in trained models (Stanczak & Augenstein, 2021).
|
| 226 |
+
|
| 227 |
+
# REFERENCES
|
| 228 |
+
|
| 229 |
+
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul F. Christiano, John Schulman, and Dan Mané. Concrete problems in AI safety. CoRR, abs/1606.06565, 2016.
|
| 230 |
+
Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. arXiv preprint arXiv:1907.02893, 2019.
|
| 231 |
+
Devansh Arpit, Stanisław Jastrzębski, Nicolas Ballas, David Krueger, Emmanuel Bengio, Maxinder S. Kanwal, Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, and Simon Lacoste-Julien. A closer look at memorization in deep networks. In ICML, pp. 233-242. JMLR, 2017.
|
| 232 |
+
Peter Bandi, Oscar Geessink, Quirine Manson, Marcory Van Dijk, Maschenka Balkenhol, Meyke Hermsen, Babak Ehteshami Bejnordi, Byungjae Lee, Kyunghyun Paeng, Aoxiao Zhong, et al. From detection of individual metastases to classification of lymph node status at the patient level: the camelyon17 challenge. IEEE Transactions on Medical Imaging, 2018.
|
| 233 |
+
Sara Beery, Grant Van Horn, and Pietro Perona. Recognition in terra incognita. In ECCV (16), volume 11220 of Lecture Notes in Computer Science, pp. 472-489. Springer, 2018.
|
| 234 |
+
Edmon Begoli, Tanmoy Bhattacharya, and Dimitri Kusnezov. The need for uncertainty quantification in machine-assisted medical decision making. Nat. Mach. Intell., 1(1):20-23, 2019.
|
| 235 |
+
Aharon Ben-Tal, Laurent El Ghaoui, and Arkadi Nemirovski. Robust Optimization, volume 28 of Princeton Series in Applied Mathematics. Princeton University Press, 2009.
|
| 236 |
+
Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE transactions on pattern analysis and machine intelligence, 35(8):1798-1828, 2013.
|
| 237 |
+
Leo Breiman. Bagging predictors. Mach. Learn., 24(2):123-140, 1996.
|
| 238 |
+
Corinna Cortes and Mehryar Mohri. Domain adaptation in regression. In International Conference on Algorithmic Learning Theory, pp. 308-323. Springer, 2011.
|
| 239 |
+
Corinna Cortes, Mehryar Mohri, and Andres Munoz Medina. Adaptation based on generalized discrepancy. The Journal of Machine Learning Research, 20(1):1-30, 2019.
|
| 240 |
+
Gintare Karolina Dziugaite and Daniel M. Roy. Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data. In Proceedings of the 33rd Annual Conference on Uncertainty in Artificial Intelligence (UAI), 2017.
|
| 241 |
+
Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In ICML, volume 48 of JMLR Workshop and Conference Proceedings, pp. 1050-1059. JMLR.org, 2016.
|
| 242 |
+
Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. The journal of machine learning research, 17(1):2096-2030, 2016.
|
| 243 |
+
Robert Geirhos, Jorn-Henrik Jacobsen, Claudio Michaelis, Richard S. Zemel, Wieland Brendel, Matthias Bethge, and Felix A. Wichmann. Shortcut learning in deep neural networks. Nat. Mach. Intell., 2(11):665-673, 2020.
|
| 244 |
+
Suriya Gunasekar, Jason D. Lee, Daniel Soudry, and Nati Srebro. Implicit bias of gradient descent on linear convolutional networks. In NeurIPS, pp. 9482-9491, 2018.
|
| 245 |
+
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
|
| 246 |
+
Jose Miguel Hernandez-Lobato and Ryan P. Adams. Probabilistic backpropagation for scalable learning of bayesian neural networks. In ICML, volume 37 of JMLR Workshop and Conference Proceedings, pp. 1861-1869. JMLR.org, 2015.
|
| 247 |
+
|
| 248 |
+
Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Adversarial examples are not bugs, they are features. arXiv preprint arXiv:1905.02175, 2019.
|
| 249 |
+
Sanjay Kariyappa and Moinuddin K. Qureshi. Improving adversarial robustness of ensembles with diversity training. CoRR, abs/1901.09981, 2019.
|
| 250 |
+
Been Kim, Oluwasanmi Koyejo, and Rajiv Khanna. Examples are not enough, learn to criticize! criticism for interpretability. In NIPS, pp. 2280-2288, 2016.
|
| 251 |
+
Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena Gao, Tony Lee, Etienne David, Ian Stavness, Wei Guo, Berton A. Earnshaw, Imran S. Haque, Sara Beery, Jure Leskovec, Anshul Kundaje, Emma Pierson, Sergey Levine, Chelsea Finn, and Percy Liang. WILDS: A benchmark of in-the-wild distribution shifts. In International Conference on Machine Learning (ICML), 2021.
|
| 252 |
+
Masanori Koyama and Shoichiro Yamaguchi. Out-of-distribution generalization with maximal invariant predictor. arXiv preprint arXiv:2008.01883, 2020.
|
| 253 |
+
Alex Krizhevsky. Learning Multiple Layers of Features from Tiny Images. Master's thesis, 2009.
|
| 254 |
+
David Krueger, Ethan Caballero, Jorn-Henrik Jacobsen, Amy Zhang, Jonathan Binas, Dinghuai Zhang, Rémi Le Priol, and Aaron C. Courville. Out-of-distribution generalization via risk extrapolation (rex). In ICML, volume 139 of Proceedings of Machine Learning Research, pp. 5815-5826. PMLR, 2021.
|
| 255 |
+
Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and Scalable Predictive Uncertainty Estimation Using Deep Ensembles. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pp. 6405-6416, Red Hook, NY, USA, 2017. Curran Associates Inc.
|
| 256 |
+
Yann Lecun and Corinna Cortes. The MNIST database of handwritten digits. 1998. URL http://yann.lecun.com/exdb/mnist/.
|
| 257 |
+
Yann Lecun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. In Proceedings of the IEEE, pp. 2278-2324, 1998.
|
| 258 |
+
Yoonho Lee, Huaxiu Yao, and Chelsea Finn. Diversify and disambiguate: Learning from underspecified data. CoRR, abs/2202.03418, 2022.
|
| 259 |
+
Evan Zheran Liu, Behzad Haghgoo, Annie S. Chen, Aditi Raghunathan, Pang Wei Koh, Shiori Sagawa, Percy Liang, and Chelsea Finn. Just train twice: Improving group robustness without training group information. In ICML, volume 139 of Proceedings of Machine Learning Research, pp. 6781-6792. PMLR, 2021a.
|
| 260 |
+
Yehao Liu, Matteo Pagliardini, Tatjana Chavdarova, and Sebastian U. Stich. The peril of popular deep learning uncertainty estimation methods. 2021b.
|
| 261 |
+
Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I Jordan. Conditional adversarial domain adaptation. arXiv preprint arXiv:1705.10667, 2017.
|
| 262 |
+
Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In ICLR (Poster). OpenReview.net, 2019.
|
| 263 |
+
Yishay Mansour, Mehryar Mohri, and Afshin Rostamizadeh. Domain adaptation: Learning bounds and algorithms. arXiv preprint arXiv:0902.3430, 2009.
|
| 264 |
+
Tom McCoy, Ellie Pavlick, and Tal Linzen. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In ACL (1), pp. 3428-3448. Association for Computational Linguistics, 2019.
|
| 265 |
+
Jun Hyun Nam, Hyuntak Cha, Sungsoo Ahn, Jaeho Lee, and Jinwoo Shin. Learning from failure: Training debiased classifier from biased classifier. CoRR, abs/2007.02561, 2020.
|
| 266 |
+
|
| 267 |
+
Luke Oakden-Rayner, Jared Dunnmon, Gustavo Carneiro, and Christopher Ré. Hidden stratification causes clinically meaningful failures in machine learning for medical imaging. In CHIL, pp. 151-159. ACM, 2020.
|
| 268 |
+
Mohammad Pezeshki, Sekou-Oumar Kaba, Yoshua Bengio, Aaron C. Courville, Doina Precup, and Guillaume Lajoie. Gradient starvation: A learning proclivity in neural networks. In NeurIPS, pp. 1256-1272, 2021.
|
| 269 |
+
Hamed Rahimian and Sanjay Mehrotra. Distributionally robust optimization: A review. arXiv preprint arXiv:1908.05659, 2019.
|
| 270 |
+
Alexandre Ramé and Matthieu Cord. DICE: diversity in deep ensembles via conditional redundancy adversarial estimation. In ICLR. OpenReview.net, 2021.
|
| 271 |
+
Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning). The MIT Press, 2005.
|
| 272 |
+
Andrew Slavin Ross, Weiwei Pan, Leo A. Celi, and Finale Doshi-Velez. Ensembles of locally independent prediction models. In AAAI, pp. 5527-5536. AAAI Press, 2020.
|
| 273 |
+
Shiori Sagawa, Pang Wei Koh, Tatsunori B. Hashimoto, and Percy Liang. Distributionally robust neural networks. In ICLR. OpenReview.net, 2020.
|
| 274 |
+
Shiori Sagawa, Pang Wei Koh, Tony Lee, Irena Gao, Sang Michael Xie, Kendrick Shen, Ananya Kumar, Weihua Hu, Michihiro Yasunaga, Henrik Marklund, Sara Beery, Etienne David, Ian Stavness, Wei Guo, Jure Leskovec, Kate Saenko, Tatsunori Hashimoto, Sergey Levine, Chelsea Finn, and Percy Liang. Extending the WILDS benchmark for unsupervised adaptation. In ICLR. OpenReview.net, 2022.
|
| 275 |
+
Kuniaki Saito, Yoshitaka Ushiku, and Tatsuya Harada. Asymmetric tri-training for unsupervised domain adaptation. In ICML, volume 70 of Proceedings of Machine Learning Research, pp. 2988-2997. PMLR, 2017.
|
| 276 |
+
Kuniaki Saito, Kohei Watanabe, Yoshitaka Ushiku, and Tatsuya Harada. Maximum classifier discrepancy for unsupervised domain adaptation. In CVPR, pp. 3723-3732. Computer Vision Foundation / IEEE Computer Society, 2018.
|
| 277 |
+
Harshay Shah, Kaustav Tamuly, Aditi Raghunathan, Prateek Jain, and Praneeth Netrapalli. The pitfalls of simplicity bias in neural networks. In Advances in Neural Information Processing Systems, volume 33, 2020.
|
| 278 |
+
Samarth Sinha, Homanga Bharadhwaj, Anirudh Goyal, Hugo Larochelle, Animesh Garg, and Florian Shkurti. DIBS: diversity inducing information bottleneck in model ensembles. In AAAI, pp. 9666-9674. AAAI Press, 2021.
|
| 279 |
+
Karolina Stanczak and Isabelle Augenstein. A survey on gender bias in natural language processing. arXiv preprint arXiv:2112.14168, 2021.
|
| 280 |
+
Asa Cooper Stickland and Iain Murray. Diverse ensembles improve calibration. CoRR, abs/2007.04206, 2020.
|
| 281 |
+
Baochen Sun, Jiashi Feng, and Kate Saenko. Return of frustratingly easy domain adaptation. In AAAI, pp. 2058-2065. AAAI Press, 2016.
|
| 282 |
+
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In ICLR (Poster), 2014.
|
| 283 |
+
Damien Teney, Ehsan Abbasnejad, Simon Lucey, and Anton van den Hengel. Evading the simplicity bias: Training a diverse set of models discovers solutions with superior OOD generalization. CoRR, abs/2105.05612, 2021.
|
| 284 |
+
Naftali Tishby, Fernando C. Pereira, and William Bialek. The information bottleneck method, 2000.
|
| 285 |
+
Naonori Ueda and Ryohei Nakano. Generalization error of ensemble estimators. In ICNN, pp. 90-95. IEEE, 1996.
|
| 286 |
+
|
| 287 |
+
Joost van Amersfoort, Lewis Smith, Yee Whye Teh, and Yarin Gal. Uncertainty estimation using a single deep deterministic neural network. In ICML, volume 119 of Proceedings of Machine Learning Research, pp. 9690-9700. PMLR, 2020.
|
| 288 |
+
Hemanth Venkateswara, Jose Eusebio, Shayok Chakraborty, and Sethuraman Panchanathan. Deep hashing network for unsupervised domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5018-5027, 2017.
|
| 289 |
+
Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv:1708.07747, 2017.
|
| 290 |
+
Dinghuai Zhang, Kartik Ahuja, Yilun Xu, Yisen Wang, and Aaron C. Courville. Can subnetwork structure be the key to out-of-distribution generalization? In ICML, volume 139 of Proceedings of Machine Learning Research, pp. 12356-12367. PMLR, 2021.
|
| 291 |
+
Zhi-Hua Zhou. Ensemble Methods: Foundations and Algorithms. Chapman & Hall/CRC, 2012. ISBN 1439830037.
|
| 292 |
+
Alexandru Tifrea, Eric Stavarache, and Fanny Yang. Novel disease detection using ensembles with regularized disagreement. pp. 133-144. Springer-Verlag, 2021. ISBN 978-3-030-87734-7. doi: 10.1007/978-3-030-87735-4_13.
|
| 293 |
+
|
| 294 |
+
# A SOURCE CODE
|
| 295 |
+
|
| 296 |
+
Link to the source code to reproduce our experiments: https://github.com/mpagli/Agree-to-Disagree
|
| 297 |
+
|
| 298 |
+
# B ALGORITHMS
|
| 299 |
+
|
| 300 |
+
The D-BAT training algorithm can be applied to both binary and multi-class classification problems. For our experiments on binary classification — as for the Camelyon17, Waterbirds, M/F-D, and M/C-D datasets (see § 4.1), and for our MNIST experiments in Fig. 5 — we used Alg. 1. This algorithm assumes a first model $h_1$ has already been trained, e.g. with empirical risk minimization, and trains a second model following the procedure described in § 3.2. For our multi-class experiments — as for the C-MNIST and Office-Home datasets (see § 4.1) and the CIFAR-10 uncertainty experiments (see § 4.2) — we used Alg. 2. This algorithm trains a full ensemble of size $M$ using D-BAT as described in § 3.2.
|
| 301 |
+
|
| 302 |
+
Algorithm 1 D-BAT for binary classification
|
| 303 |
+
```txt
|
| 304 |
+
Input: train data $\mathcal{D}$, OOD data $\mathcal{D}_{\mathrm{ood}}$, stopping time $T$, D-BAT coefficient $\alpha$, learning rate $\eta$, pre-trained model $h_1$, randomly initialized model $h_2$ with weights $\omega_0$, and its loss $\mathcal{L}$
|
| 305 |
+
for $t\in 0,\dots ,T - 1$ do
Sample $(x,y)\sim \mathcal{D}$
Sample $\tilde{\pmb{x}}\sim \mathcal{D}_{\mathrm{ood}}$
$\omega_{t + 1} = \omega_t - \eta \nabla_\omega \bigl (\mathcal{L}(h_2,\pmb {x},y) + \alpha \mathcal{A}(h_1,h_2,\tilde{\pmb{x}})\bigr)$
|
| 306 |
+
end for
|
| 307 |
+
```
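For concreteness, a minimal PyTorch sketch of one Alg. 1 update is given below. The mini-batching over OOD samples, the optimizer interface, and the small constant inside the log are our own choices; the exact implementation is in the repository linked in App. A.

```python
# Sketch of one D-BAT update for binary classification (Alg. 1).
import torch
import torch.nn.functional as F

def dbat_step(h1, h2, optimizer, x, y, x_ood, alpha):
    ce = F.cross_entropy(h2(x), y)                 # standard loss on the training batch
    with torch.no_grad():
        p1 = torch.softmax(h1(x_ood), dim=1)       # the pre-trained first model is kept frozen
    p2 = torch.softmax(h2(x_ood), dim=1)
    # adversarial term A: negative log-probability that the two models disagree on x_ood
    a_term = -torch.log(p1[:, 0] * p2[:, 1] + p1[:, 1] * p2[:, 0] + 1e-12).mean()
    loss = ce + alpha * a_term
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```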
|
| 308 |
+
|
| 309 |
+
Algorithm 2 D-BAT for multi-class classification
|
| 310 |
+
```txt
|
| 311 |
+
Input: ensemble size $M$, train data $\mathcal{D}$, OOD data $\mathcal{D}_{\mathrm{ood}}$, stopping time $T$, D-BAT coefficient $\alpha$, learning rate $\eta$, randomly initialized models $(h_0,\dots,h_{M - 1})$ with resp. weights $(\omega_0^{(0)},\ldots ,\omega_0^{(M - 1)})$, and a classification loss $\mathcal{L}$
|
| 312 |
+
for $m\in 0,\ldots ,M - 1$ do
|
| 313 |
+
for $t\in 0,\ldots ,T - 1$ do
Sample $(x,y)\sim \mathcal{D}$
Sample $\tilde{\pmb{x}}\sim \mathcal{D}_{\mathrm{ood}}$
$\mathcal{A} \leftarrow 0$
$\tilde{y}\gets \operatorname {argmax}h_m(\tilde{\pmb{x}})$
|
| 314 |
+
for $i\in 0,\dots,m - 1$ do
|
| 315 |
+
$\mathcal{A} \leftarrow \mathcal{A} - \frac{1}{m - 1}\log \left(p_{h_i,\tilde{\pmb{x}}}^{(\tilde{y})}\cdot p_{h_m,\tilde{\pmb{x}}}^{(\neg \tilde{y})} + p_{h_i,\tilde{\pmb{x}}}^{(\neg \tilde{y})}\cdot p_{h_m,\tilde{\pmb{x}}}^{(\tilde{y})}\right)$
|
| 316 |
+
end for
|
| 317 |
+
$\omega_{t + 1}^{(m)} = \omega_t^{(m)} - \eta \nabla_{\omega^{(m)}}\left(\mathcal{L}(h_m,x,y) + \alpha \mathcal{A}\right)$
|
| 318 |
+
end for
|
| 319 |
+
```
|
| 320 |
+
|
| 321 |
+
Sequential vs. simultaneous training. Nothing prevents the use of the D-BAT objective while training all the predictors of the ensemble simultaneously. While we had some success doing so, we advocate against it as this can discard the ERM solution. We found that the training dynamics of simultaneous training have a tendency to generate more complex solutions than sequential training. In our experiments on the 2D toy setting, sequential training gives two models which are both simple and diverse (see Fig. 1), whereas simultaneous training generates two relatively simple predictors of higher complexity (see Fig. 7); in particular, it deprives us of the simplest solution (Fig. 1.b). In general, as we do not know the spuriousness of the features, the simplest predictor is still of importance.
|
| 322 |
+
|
| 323 |
+

|
| 324 |
+
Figure 7: Simultaneous D-BAT training: two models trained simultaneously using D-BAT on our 2D toy task (see Fig. 1). We observe that we do not recover the ERM solution. The two obtained models are diverse but seemingly more complex (e.g. in terms of their decision boundaries) than models trained sequentially as in Fig. 1.
|
| 325 |
+
|
| 326 |
+

|
| 327 |
+
|
| 328 |
+
# C PROOF OF THM.3.1
|
| 329 |
+
|
| 330 |
+
We redefine here the setup for clarity:
|
| 331 |
+
|
| 332 |
+
- Given a joint source distribution $\mathcal{D}$ of triplets of random variables $(C, S, Y)$ taking values in $\{0, 1\}^3$ .
|
| 333 |
+
- Assuming $\mathcal{D}$ has the following pmf: $\mathbb{P}_{\mathcal{D}}(C = c, S = s, Y = y) = 1/2$ if $c = s = y$ , and 0 otherwise.
|
| 334 |
+
- Assuming a first model learnt the posterior distribution $\hat{\mathbb{P}}_1(Y = 1\mid C = c,S = s) = c$.
|
| 335 |
+
- Given a distribution $\mathcal{D}_{\mathrm{ood}}$ over pairs $(c,s)$, uniform over $\{0,1\}^2$ outside of the support of $\mathcal{D}$.
|
| 336 |
+
|
| 337 |
+
From there, training a second model $h_2$ following the D-BAT objective means minimizing the expected agreement on $\mathcal{D}_{\mathrm{ood}}$:
|
| 338 |
+
|
| 339 |
+
$$
|
| 340 |
+
\min_{\hat{\mathbb{P}}_2} \; \mathbb{E}_{(c, s) \sim \mathcal{D}_{\mathrm{ood}}} \left[ - \log \bigl(\hat{\mathbb{P}}_1(Y = 1 \mid c, s)\, \hat{\mathbb{P}}_2(Y = 0 \mid c, s) + \hat{\mathbb{P}}_1(Y = 0 \mid c, s)\, \hat{\mathbb{P}}_2(Y = 1 \mid c, s)\bigr) \right] \tag{1}
|
| 341 |
+
$$
|
| 342 |
+
|
| 343 |
+
While at the same time agreeing on the source distribution $\mathcal{D}$ :
|
| 344 |
+
|
| 345 |
+
$$
|
| 346 |
+
\underset{(c, s) \sim \mathcal{D}}{\mathbb{P}} \left(\hat{\mathbb{P}}_1(Y \mid c, s) = \hat{\mathbb{P}}_2(Y \mid c, s)\right) = 1
|
| 347 |
+
$$
|
| 348 |
+
|
| 349 |
+
The expectation in Eq. 1 becomes:
|
| 350 |
+
|
| 351 |
+
$$
|
| 352 |
+
(1) = \frac {1}{2} \Big (- \log (\hat {\mathbb {P}} _ {2} (Y = 0 | C = 1, S = 0)) - \log (\hat {\mathbb {P}} _ {2} (Y = 1 | C = 0, S = 1)) \Big)
|
| 353 |
+
$$
|
| 354 |
+
|
| 355 |
+
This is minimized for $\hat{\mathbb{P}}_2(Y = 1\mid C = 0,S = 1) = \hat{\mathbb{P}}_2(Y = 0\mid C = 1,S = 0) = 1$.
|
| 356 |
+
|
| 357 |
+
This means that the posterior of the second model, according to our disagreement constraint, will be:
|
| 358 |
+
|
| 359 |
+
$$
|
| 360 |
+
\hat {\mathbb {P}} _ {2} (Y = 1 \mid C = c, S = s) = s
|
| 361 |
+
$$
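As a quick sanity check (ours, not part of the paper), the minimizer above can be verified numerically by a grid search over candidate posteriors that agree with $\hat{\mathbb{P}}_1$ on the support of $\mathcal{D}$:

```python
# Brute-force check that the disagreement objective is minimized by P2(Y=1|c,s) = s.
import itertools
import math

grid = [i / 100 for i in range(101)]
best, best_loss = None, float("inf")
for q10, q01 in itertools.product(grid, grid):   # q10 = P2(Y=1|C=1,S=0), q01 = P2(Y=1|C=0,S=1)
    p2 = {(0, 0): 0.0, (1, 1): 1.0, (1, 0): q10, (0, 1): q01}   # diagonal fixed by agreement with P1
    loss = 0.0
    for c, s in [(1, 0), (0, 1)]:                # D_ood: uniform over the off-diagonal pairs
        p1 = c                                   # P1(Y=1|c,s) = c
        disagree = p1 * (1 - p2[(c, s)]) + (1 - p1) * p2[(c, s)]
        loss += -0.5 * math.log(max(disagree, 1e-12))
    if loss < best_loss:
        best, best_loss = p2, loss
print(best)   # {(0, 0): 0.0, (1, 1): 1.0, (1, 0): 0.0, (0, 1): 1.0}, i.e. P2(Y=1|c,s) = s
```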
|
| 362 |
+
|
| 363 |
+
# D OMITTED DETAILS ON EXPERIMENTS
|
| 364 |
+
|
| 365 |
+
# D.1 IMPLEMENTATION DETAILS FOR THE C-MNIST, M/M-D, M/F-D AND M/C-D EXPERIMENTS
|
| 366 |
+
|
| 367 |
+
In the experiments on C-MNIST, M/F-D and M/C-D, we used different versions of LeNet (Lecun et al., 1998):
|
| 368 |
+
|
| 369 |
+
- For the C-MNIST dataset, we used a standard LeNet, with 3 input channels instead of 1.
|
| 370 |
+
|
| 371 |
+
- For the MF-Dominoes dataset, we increase the input dimension of the first fully-connected layer to 960.
|
| 372 |
+
- For the MC-Dominoes dataset, we use 3 input channels, increase the number of output channels of the first convolution to 32, and of the second one to 56. We modify the fully-connected layers to be $2016 \rightarrow 512 \rightarrow 256 \rightarrow c$ with $c$ the number of classes.
|
| 373 |
+
|
| 374 |
+
In those experiments — for both cases $\mathcal{D}_{\mathrm{ood}} = \mathcal{D}_{\mathrm{test}}$ and $\mathcal{D}_{\mathrm{ood}} \neq \mathcal{D}_{\mathrm{test}}$ — the test and validation distributions are distributions in which the spurious feature is random, e.g. a random color for C-MNIST and a random 0 or 1 on the top part for MF-Dominoes and MC-Dominoes.
|
| 375 |
+
|
| 376 |
+
We use the AdamW optimizer (Loshchilov & Hutter, 2019) for all our experiments. For all the datasets in this section, we only train ensembles of 2 models, which we denote $\mathcal{M}_1$ and $\mathcal{M}_2$. When building the OOD datasets, we make sure the images used are not shared with the images used to build the training, test and validation sets. Our results are obtained by averaging over 5 seeds. For further details on the implementation, we invite the reader to check the source code, see § A.
|
| 377 |
+
|
| 378 |
+
# D.2 IMPLEMENTATION DETAILS FOR FIG.1
|
| 379 |
+
|
| 380 |
+
Instead of relying on an external OOD distribution, it is also possible to find, given some datapoint $\pmb{x}$, a perturbation $\delta^{\star}$ by directly minimizing the agreement in some neighborhood of $\pmb{x}$ (i.e. for $\| \delta^{\star} \| \leq \epsilon$):
|
| 381 |
+
|
| 382 |
+
$$
|
| 383 |
+
\delta^{\star} \in \underset{\delta \ \mathrm{s.t.}\ \|\delta\| \leq \epsilon}{\arg\min} \; - \log \left(p_{h_1, (\boldsymbol{x} + \delta)}^{(0)} \cdot p_{h_2, (\boldsymbol{x} + \delta)}^{(1)} + p_{h_1, (\boldsymbol{x} + \delta)}^{(1)} \cdot p_{h_2, (\boldsymbol{x} + \delta)}^{(0)}\right)
|
| 384 |
+
$$
|
| 385 |
+
|
| 386 |
+
This can be solved using several projected gradient descent steps, as is typically done in the adversarial training literature. While this approach works for the 2D example, it does not work for complex high-dimensional input spaces combined with deep networks, as those are notorious for their sensitivity to very small $l_{p}$-bounded perturbations: it would most of the time be easy to find a bounded perturbation maximizing the disagreement.
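A sketch of this PGD-style alternative is shown below; the $l_\infty$ projection, step size, and number of steps are illustrative choices on our side rather than the exact settings used for Fig. 1.

```python
# Sketch: find a bounded perturbation delta that maximizes disagreement between h1 and h2.
import torch

def disagreement_perturbation(h1, h2, x, eps=0.1, step=0.02, n_steps=20):
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(n_steps):
        p1 = torch.softmax(h1(x + delta), dim=1)
        p2 = torch.softmax(h2(x + delta), dim=1)
        # negative log-probability of disagreement (the quantity minimized above)
        loss = -torch.log(p1[:, 0] * p2[:, 1] + p1[:, 1] * p2[:, 0] + 1e-12).mean()
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()      # descend on the agreement term
            delta.clamp_(-eps, eps)                # project back into the l_inf ball
        delta.grad.zero_()
    return delta.detach()
```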
|
| 387 |
+
|
| 388 |
+
# D.3 STANDARD DEVIATIONS FOR MNIST UNCERTAINTY EXPERIMENTS
|
| 389 |
+
|
| 390 |
+
For clarity we omitted the standard deviations in Fig. 5. In Fig. 8 we show each individual curve with its associated standard deviation.
|
| 391 |
+
|
| 392 |
+

|
| 393 |
+
(a) Deep-Ensemble
|
| 394 |
+
|
| 395 |
+

|
| 396 |
+
(b) D-BAT with $\alpha = 5$
|
| 397 |
+
Figure 8: Entropy of ensembles of two models trained with ((b) and (c)) and without D-BAT (deep-ensemble, (a)), for inputs $x$ taken along the line $t \cdot 1 + (1 - t) \cdot 0$ for $t \in [-1,2]$. For deep-ensembles in (a), we notice how the standard deviation is near 0 for the OOD regions $t \in (-1,0] \cup [1,2)$, which indicates a lack of diversity between members of the ensemble. This is in sharp contrast with D-BAT ensembles in (b) and (c), which clearly show some variability in those regions. The high variability is explained by the fact that we are not optimizing specifically to be able to detect OOD samples in those regions; instead we gain this ability as a by-product of diversity, and diversity can be reached in many different configurations.
|
| 398 |
+
|
| 399 |
+

|
| 400 |
+
(c) D-BAT with $\alpha = 10$
|
| 401 |
+
|
| 402 |
+
# D.4 IMPLEMENTATION DETAILS FOR THE CAMELYON17 EXPERIMENTS
|
| 403 |
+
|
| 404 |
+
The Camelyon17 cancer detection dataset (Bandi et al., 2018) is taken from the WILDS collection (Koh et al., 2021). The dataset consists of training, validation, and test sets of images coming from
|
| 405 |
+
|
| 406 |
+

|
| 407 |
+
Figure 9: Test accuracy given $\alpha$ . We compare the best ERM model with the second model trained using D-BAT, for varying $\alpha$ hyperparameters.
|
| 408 |
+
|
| 409 |
+
different hospitals, each hospital being uniquely associated with a given split. The goal is to generalize to hospitals not necessarily present in the training set.
|
| 410 |
+
|
| 411 |
+
In the case where $\mathcal{D}_{\mathrm{ood}} = \mathcal{D}_{\mathrm{test}}$ , we use the unlabeled test data provided by WILDS as $\mathcal{D}_{\mathrm{ood}}$ . In our experiments with $\mathcal{D}_{\mathrm{ood}} \neq \mathcal{D}_{\mathrm{test}}$ , we use the unlabeled validation data provided by WILDS as $\mathcal{D}_{\mathrm{ood}}$ . In both cases we use the accuracy on the WILDS labeled validation set for model selection.
|
| 412 |
+
|
| 413 |
+
We use a ResNet-50 (He et al., 2016) as model. We train for 60 epochs with a fixed learning rate of 0.001, using SGD as optimizer. We use an $l_{2}$ penalty term of 0.0001 and a momentum term $\beta = 0.9$. For D-BAT, we tune $\alpha \in \{10^{-1}, 10^{-2}, 10^{-3}, 10^{-4}, 10^{-5}, 10^{-6}\}$ and found $\alpha = 10^{-6}$ to be best. For each set of hyperparameters, we train a deep-ensemble and a D-BAT ensemble of size 2, and select the parameters associated with the highest validation accuracy averaged over the two predictors of the ensemble. Our results are obtained by averaging over 3 seeds.
|
| 414 |
+
|
| 415 |
+
In Fig. 9, we plot the evolution of the test accuracy as a function of $\alpha$ for both setups discussed in § 4.1. In the first "ideal" setup, we have access to unlabeled target data to use as $\mathcal{D}_{\mathrm{ood}}$. In the second setup we do not; instead we use samples from different hospitals. In the case of the Camelyon17 dataset, we use the available unlabeled validation data. Despite this data belonging to a different domain, we still get a significant improvement in test accuracy.
|
| 416 |
+
|
| 417 |
+
# D.5 IMPLEMENTATION DETAILS FOR THE WATERBIRDS EXPERIMENTS
|
| 418 |
+
|
| 419 |
+
The Waterbirds dataset is built by combining images of birds with either a water or land background. It contains four categories:
|
| 420 |
+
|
| 421 |
+
- Waterbirds on water
|
| 422 |
+
- Waterbirds on land
|
| 423 |
+
- Land-birds on water
|
| 424 |
+
- Land-birds on land
|
| 425 |
+
|
| 426 |
+
In the official version released in the WILDS suite, the background is predictive of the label in $95\%$ of cases, i.e. $95\%$ of waterbirds (resp. land-birds) are seen on water (resp. land). Due to the simplicity bias, this means that ERM models tend to overuse the background information. The test
|
| 427 |
+
|
| 428 |
+

|
| 429 |
+
Figure 10: Comparing test accuracy for D-BAT and ERM (Deep-ensemble) for different ensemble sizes. For D-BAT, $\mathcal{D}_{\mathrm{ood}}$ is the "Art" domain, quite different from the "Real-world" domain. Despite the distribution shift we still see a noticeable improvement using D-BAT over plain ERM.
|
| 430 |
+
|
| 431 |
+
and validation sets are more balanced, with $50\%$ of waterbirds (resp. land-birds) being seen on water (resp. land). We use the train/validation/test splits provided by the WILDS library.
|
| 432 |
+
|
| 433 |
+
We use a ResNet-50 (He et al., 2016) as model. We train for 300 epochs with a fixed learning rate of 0.001, using SGD as optimizer. We use an $l_{2}$ penalty term of 0.0001 and a momentum term $\beta = 0.9$. For D-BAT, we tune $\alpha \in \{10^{0}, 10^{-1}, 10^{-2}, 10^{-3}, 10^{-4}, 10^{-5}\}$ and found $\alpha = 10^{-4}$ to be best. For each set of hyperparameters, we train a deep-ensemble and a D-BAT ensemble of size 2, and select the parameters associated with the highest validation accuracy averaged over the two predictors of the ensemble. Our results are obtained by averaging over 3 seeds.
|
| 434 |
+
|
| 435 |
+
For our D-BAT experiments we only consider the case where we have access to unlabeled target data. We use the validation split as it is from the same distribution as the target data.
|
| 436 |
+
|
| 437 |
+
# D.6 IMPLEMENTATION DETAILS FOR THE OFFICE-HOME EXPERIMENTS
|
| 438 |
+
|
| 439 |
+
The Office-Home dataset is made of four domains: Art, Clipart, Product, and Real-world. We train on the grouped Product and Clipart domains, and measure the generalization to the Real-world domain. This dataset has 65 classes.
|
| 440 |
+
|
| 441 |
+
We use a ResNet-18 and train for 600 epochs with a fixed learning rate of 0.001, using SGD as optimizer. We use an $l_{2}$ penalty term of 0.0001 and a momentum term $\beta = 0.9$. For D-BAT, we tune $\alpha \in \{10^{0}, 10^{-1}, 10^{-2}, 10^{-3}, 10^{-4}, 10^{-5}, 10^{-6}\}$ and found $\alpha = 10^{-5}$ to be best. For each set of hyperparameters, we train a deep-ensemble and a D-BAT ensemble of size 2, and select the parameters associated with the highest validation accuracy averaged over the two predictors of the ensemble. Our results are obtained by averaging over 6 seeds.
|
| 442 |
+
|
| 443 |
+
We experiment with both the "ideal" case in which some unlabeled target data is available to use as $\mathcal{D}_{\mathrm{ood}}$ ($\mathcal{D}_{\mathrm{ood}} = \mathcal{D}_{\mathrm{test}}$; see Fig. 4.b) and the case in which we use a different domain (Art) as $\mathcal{D}_{\mathrm{ood}}$ ($\mathcal{D}_{\mathrm{ood}} \neq \mathcal{D}_{\mathrm{test}}$). For this latter setup, the evolution of the test accuracy as a function of the ensemble size is shown in Fig. 10. In both cases, the validation split, just as the test split, comes from the Real-world domain.
|
| 444 |
+
|
| 445 |
+
# D.7 NOTE ON SELECTING $\alpha$
|
| 446 |
+
|
| 447 |
+
Depending on the experiment, the value of $\alpha$ used ranged from 1 to $10^{-6}$. We explain the variability in those values by (i) the capacity of the model used and (ii) the OOD distribution selected. If the model used has a large capacity, it can more easily overfit the OOD distribution and find shortcuts to disagree on $\mathcal{D}_{\mathrm{ood}}$ without relying on different features to classify the training samples, as discussed in § 5. For this reason, we observed that larger models such as the ResNet-18 and ResNet-50 used respectively on the CIFAR-10 and Camelyon17 datasets require a smaller $\alpha$ than the smaller LeNet architectures. Furthermore, when the OOD distribution is close to the training distribution, smaller $\alpha$ values are preferred, as in our Camelyon17 experiments. In this case, disagreeing too strongly on the OOD data might force the second model $\mathcal{M}_2$ to give erroneous predictions in order to disagree with $\mathcal{M}_1$, assuming that this first model generalizes well to the OOD set.
|
| 448 |
+
|
| 449 |
+
# D.8 COMPUTATIONAL RESOURCES
|
| 450 |
+
|
| 451 |
+
All of our experiments were run on single-GPU machines. Most of our experiments require little computational resources and can be entirely reproduced on e.g. Google Colab (see App. A). For the Camelyon17, Waterbirds and Office-Home datasets, which use ResNet-50 or ResNet-18 architectures, we used a V100 Nvidia GPU; the hyperparameter search and training took about two weeks.
|
| 452 |
+
|
| 453 |
+
# E TRAINING AND OOD DISTRIBUTION SAMPLES C-MNIST, M/F-D AND M/C-D
|
| 454 |
+
|
| 455 |
+
In Fig. 11, we show some samples from the training distributions used in § 4.1. We also introduce the MM-Dominoes dataset, similar in spirit to the other dominoes datasets but concatenating MNIST digits 0 and 1 with MNIST digits 7 and 9. In Figs. 12, 13, 14, we show samples from the OOD distributions used in § 4.1.
|
| 456 |
+
|
| 457 |
+

|
| 458 |
+
(a) C-MNIST
|
| 459 |
+
|
| 460 |
+

|
| 461 |
+
(b) MM-Dominoes
|
| 462 |
+
|
| 463 |
+

|
| 464 |
+
(c) MF-Dominoes
|
| 465 |
+
|
| 466 |
+

|
| 467 |
+
(d) MC-Dominoes
|
| 468 |
+
|
| 469 |
+

|
| 470 |
+
Figure 11: Samples from the training data distribution $\mathcal{D}$ for C-MNIST, MM-Dominoes, MF-Dominoes, and MC-Dominoes. Those datasets are used to evaluate D-BAT's aptitude to evade the simplicity bias. For C-MNIST, the simple feature is the color and the complex one is the shape. For all the Dominoes datasets, the simple feature is the top row, while the complex feature is the bottom one. One could indeed separate 0s from 1s by simply looking at the value of the middle pixels (if low value then 0 else 1).
|
| 471 |
+
(a) $\mathcal{D}_{\mathrm{ood}}^{(1)}$
|
| 472 |
+
|
| 473 |
+

|
| 474 |
+
(b) $\mathcal{D}_{\mathrm{ood}}^{(2)}$
|
| 475 |
+
Figure 12: OOD distributions used for the C-MNIST experiments. $\mathcal{D}_{\mathrm{ood}}^{(1)}$ is the distribution used to train D-BAT when we assume we have access to unlabeled target data. $\mathcal{D}_{\mathrm{ood}}^{(2)}$ is the distribution we used to show how D-BAT can work despite not having unlabeled target data. When experimenting with $\mathcal{D}_{\mathrm{ood}}^{(2)}$, we remove the shapes 5 to 9 from the training dataset, so that $\mathcal{D}_{\mathrm{ood}}^{(2)}$ is truly OOD.
|
| 476 |
+
|
| 477 |
+

|
| 478 |
+
(a) $\mathcal{D}_{\mathrm{ood}}^{(1)}$
|
| 479 |
+
|
| 480 |
+

|
| 481 |
+
(b) $\mathcal{D}_{\mathrm{ood}}^{(2)}$
|
| 482 |
+
|
| 483 |
+

|
| 484 |
+
Figure 13: OOD distributions used for the MF-Dominoes experiments. $\mathcal{D}_{\mathrm{ood}}^{(1)}$ corresponds to our experiments when we have access to unlabeled target data. $\mathcal{D}_{\mathrm{ood}}^{(2)}$ is very different from the target distribution as the second row is made only of images from categories not present in the training and test distributions.
|
| 485 |
+
(a) $\mathcal{D}_{\mathrm{ood}}^{(1)}$
|
| 486 |
+
Figure 14: OOD distributions used for the MC-Dominoes experiments. $\mathcal{D}_{\mathrm{ood}}^{(1)}$ corresponds to our experiments when we have access to unlabeled target data. $\mathcal{D}_{\mathrm{ood}}^{(2)}$ is very different from the target distribution as the second row is made only of images from categories not present in the training and test distributions.
|
| 487 |
+
|
| 488 |
+

|
| 489 |
+
(b) $\mathcal{D}_{\mathrm{ood}}^{(2)}$
|
| 490 |
+
|
| 491 |
+
# F ADDITIONAL DISCUSSIONS AND EXPERIMENTS
|
| 492 |
+
|
| 493 |
+
When two features are equally predictive but have different complexities, the more complex feature will be discarded due to the extreme simplicity bias. This happens despite the uncertainty over the potential spuriousness of the simpler feature. For this reason, it is important to be able to learn both features if we hope to improve our chances at OOD generalization. Recent methods such as Saito et al. (2017), Saito et al. (2018), Zhang et al. (2021), Nam et al. (2020) and Liu et al. (2021a) all fail in this challenging scenario; we explain why in the following subsections F.1 to F.6. In F.7, we add a comparison between D-BAT and the concurrent work of Lee et al. (2022).
|
| 494 |
+
|
| 495 |
+
# F.1 COMPARISON WITH TENEY ET AL. (2021)
|
| 496 |
+
|
| 497 |
+
In their work, Teney et al. (2021) add a regularization term $\delta_{g_{\varphi_1}, g_{\varphi_2}}$ which, given an input $x$, promotes orthogonality of the gradients with respect to the hidden representation $h = f_{\theta}(x)$ given by an encoder $f_{\theta}$ with parameters $\theta$, for pairs of classifiers $g_{\varphi_1}$ and $g_{\varphi_2}$ with parameters $\varphi_1$ and $\varphi_2$ respectively:
|
| 498 |
+
|
| 499 |
+
$$
|
| 500 |
+
\boldsymbol{\delta}_{g_{\varphi_1}, g_{\varphi_2}} = \nabla_{\boldsymbol{h}} g_{\varphi_1}^{\star}(x) \cdot \nabla_{\boldsymbol{h}} g_{\varphi_2}^{\star}(x) \tag{T}
|
| 501 |
+
$$
|
| 502 |
+
|
| 503 |
+
with $\nabla_{\boldsymbol{h}} g^{\star}$ the gradient of the classifier's top predicted score with respect to $\boldsymbol{h}$.
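A sketch of this regularizer, as we understand Eq. (T), is given below; the `encoder`/`head` names are placeholders, and penalizing the squared per-sample dot product is our assumption about the penalty shape.

```python
# Sketch of the input-gradient orthogonality penalty of Teney et al. (2021).
import torch

def gradient_orthogonality_penalty(encoder, head1, head2, x):
    h = encoder(x)
    if not h.requires_grad:                      # e.g. with the identity encoder
        h = h.detach().requires_grad_(True)
    s1 = head1(h).max(dim=1).values.sum()        # summed top predicted scores of head 1
    s2 = head2(h).max(dim=1).values.sum()
    g1 = torch.autograd.grad(s1, h, create_graph=True)[0]
    g2 = torch.autograd.grad(s2, h, create_graph=True)[0]
    dot = (g1.flatten(1) * g2.flatten(1)).sum(dim=1)
    return dot.pow(2).mean()                     # squared per-sample dot product, batch-averaged
```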
|
| 504 |
+
|
| 505 |
+
We implemented the objective of Teney et al. (2021) with two different encoders: $f_{\theta}(x) = x$ (identity) and a two-layer CNN. We tested it on our MM-Dominoes dataset (see App. E). The classification heads are trained simultaneously. Considering two classification heads, we find two sets of hyperparameters: one giving the best compromise between accuracy and randomized-accuracy, and one keeping the accuracy close to 1. In the first setup, in Fig. 15, we observe that none of the pairs of models trained with equation T as regularizer are particularly good at capturing either of the two features in the data. This is in contrast with D-BAT (with $\mathcal{D}_{\mathrm{ood}}^{(1)}$), which is able to learn a second model having both high accuracy and high randomized-accuracy, hence capturing, together with the first model, the two data modalities. For the second set of hyperparameters, in Fig. 16, we observe that the improvement in randomized-accuracy is only marginal if we do not want to sacrifice accuracy. We believe those results are explained by the many ways the gradients of a neural network can be orthogonal while still encoding identical information. Better results might require training more classification heads (up to 96 heads are used in Teney et al. (2021)).
|
| 506 |
+
|
| 507 |
+

|
| 508 |
+
(a) Identity encoder
|
| 509 |
+
|
| 510 |
+

|
| 511 |
+
(b) CNN encoder
|
| 512 |
+
Figure 15: Comparison between D-BAT and Teney et al. (2021) with hyperparameters favoring the compromise between accuracy (test-acc) and randomized-accuracy (r-acc). We run 5 different seeds for Teney et al. (2021), each run consisting of two classification heads and a shared encoder chosen to be the identity (a) or a CNN encoder (b). The acc and r-acc are displayed for the 10 resulting classification heads. We compare with two models obtained using D-BAT: the first model, learning the simplest feature, is in the bottom right corner, and the second model, trained with diversity, is in the top right corner. We observe that the method of Teney et al. (2021) fails to reach a good r-acc and sacrifices accuracy. D-BAT is able to retrieve both data modalities without sacrificing accuracy.
|
| 513 |
+
|
| 514 |
+

|
| 515 |
+
(a) Identity encoder
|
| 516 |
+
Figure 16: Comparison between D-BAT and Teney et al. (2021) with hyperparameters yielding an accuracy (test-acc) close to 1 while maximizing the randomized-accuracy (r-acc). We run 5 different seeds for Teney et al. (2021), each run consisting of two classification heads and a shared encoder chosen to be the identity (a) or a CNN encoder (b). The acc and r-acc are displayed for the 10 resulting classification heads. We compare with two models obtained using D-BAT: the first model, learning the simplest feature, is in the bottom right corner, and the second model, trained with diversity, is in the top right corner. We observe that the method of Teney et al. (2021) only marginally improves the randomized-accuracy.
|
| 517 |
+
|
| 518 |
+

|
| 519 |
+
(b) CNN encoder
|
| 520 |
+
|
| 521 |
+
# F.2 COMPARISON WITH ZHANG ET AL. (2021)
|
| 522 |
+
|
| 523 |
+
In their work, Zhang et al. (2021) argue that while a model can be biased, there exist unbiased functional subnetworks. They introduce Modular Risk Minimization (MRM) to find those subnetworks. We implemented the MRM method (Alg. 1 from their paper) and tested it on our MM-Dominoes dataset (§ 4.1). We observed that their approach cannot handle the extreme case we consider, where the spurious feature is fully predictive in the train distribution (but not OOD); they need it to be, say, only $90\%$ predictive. On our dataset, in the first phase of their Alg. 1, the model trained on the source task learns to completely ignore the bottom row due to the extreme simplicity bias, ensuring there is no useful subnetwork. We found the randomized-accuracy of subnetworks obtained with MRM to be no better than random. This is because, in extreme cases, the network which the simplicity bias pushes us to learn may completely ignore the actual feature and instead only focus on the spurious feature. In such a case, there is no unbiased subnetwork.
|
| 524 |
+
|
| 525 |
+
# F.3 COMPARISON WITH SAITO ET AL. (2017)
|
| 526 |
+
|
| 527 |
+
Contrary to Saito et al. (2017), we aim to train an ensemble of predictors able to generalize to unknown target tasks and do not assume access to the target data. In particular, the unlabelled OOD data we need can be different from the downstream transfer target data. We make this distinction clear in § 4.1, where $\mathcal{D}_{\mathrm{ood}}^{(3)}$ for the dominoes datasets is built using combinations of 1s and 0s with images from classes not present in the target and source tasks. Despite the lack of target data, the r-acc improves by resp. $28\%$ and $38\%$ for the MM-Dominoes and MF-Dominoes datasets. Further, we focus on mitigating extreme simplicity bias as described by Shah et al. (2020), where a spurious feature can have the same predictive power as a non-spurious one on the source task (but not on the unknown target task). While (Saito et al., 2017) uses the concept of diversity, their formulation measures diversity in terms of the inner-product between the weights. However, since neural networks are highly non-convex, it is possible for two networks to effectively learn the exact same function which relies on spurious features, while still having different parameterizations. Thus, our method can be viewed as a "functional" extension of the method in (Saito et al., 2017). Further, the encoder $F$ itself can learn a representation such that $F_{1}$ and $F_{2}$ rely on the same information while minimizing the regularizer.
|
| 528 |
+
|
| 529 |
+
To see this, we trained the method of Alg.1 from (Saito et al., 2017) on our MM-Dominoes dataset. Tuning $\lambda \in \{0.1, 1, 10, 100\}$ , we were unable to learn a model $F_{t}$ which transfers to the target task.
|
| 530 |
+
|
| 531 |
+
# F.4 COMPARISON WITH SAITO ET AL. (2018)
|
| 532 |
+
|
| 533 |
+
Contrary to Saito et al. (2018), we do not aim at training a domain-agnostic representation, but instead at overcoming the simplicity bias to generalize to OOD settings. E.g., in colored MNIST, a classifier which throws out the shape and simply uses color (or vice-versa) is domain agnostic. But to overcome spurious features, models in our ensemble would need to use both color and digit. Thus a domain-agnostic representation is insufficient for OOD generalization.
|
| 534 |
+
|
| 535 |
+
Furthermore, the training procedure of (Saito et al., 2018) consists of first training a shared feature extractor $G$ and two classification heads $F_{1}$ and $F_{2}$ to minimize the cross-entropy on the source task. In a second step, the classification heads $F_{1}$ and $F_{2}$ are trained to increase the discrepancy on samples from the target distribution while keeping the feature extractor $G$ fixed. However, in the case where a spurious feature is as predictive as the non-spurious one — as in our experiments of § 4.1 — the extreme simplicity bias forces the feature extractor to become invariant to the complex feature. The second and third steps of the algorithm then fail.
|
| 536 |
+
|
| 537 |
+
# F.5 COMPARISON WITH NAM ET AL. (2020)
|
| 538 |
+
|
| 539 |
+
In this work, two models are trained simultaneously, one being the biased model while the other is the debiased model. During training, the first model gives higher weights to training samples agreeing with the current bias of the model. The second model, on the other hand, learns by giving higher weights to training samples conflicting with the biased model. In order to work, the algorithm assumes that the ratio of bias-aligned samples is smaller than $100\%$, which is not the case for our datasets in § 4.1.
|
| 540 |
+
|
| 541 |
+
In these challenging datasets, where the biased feature is as predictive as the unbiased feature, the second model fails to find bias-conflicting samples, and hence fails to de-bias itself. For this reason, the work of Nam et al. (2020) fails to counter extreme simplicity bias.
|
| 542 |
+
|
| 543 |
+
# F.6 COMPARISON WITH LIU ET AL. (2021A)
|
| 544 |
+
|
| 545 |
+
The work of Liu et al. (2021a) is similar to the work of Nam et al. (2020) and shares the same limitation. A first model is trained through ERM before a second model is trained by upweighting the samples misclassified by the first model. This method, as for Nam et al. (2020), fails to induce diversity when all the samples are correctly classified by the first model, as is the case for our datasets in § 4.1.
|
| 546 |
+
|
| 547 |
+
We implemented the JTT method from Liu et al. (2021a) and report test accuracies on the Waterbirds, Camelyon17, and Office-Home datasets in Table 2. We tuned $T$, the number of epochs for the first model, in $\{1,2,5,10,20,60\}$, and the upsampling weight $\lambda$ in $\{6,50,100\}$. We pick the model with the best validation accuracy.
|
| 548 |
+
|
| 549 |
+
Table 2: Comparison between ERM, D-BAT, and JTT. For JTT, results are reported for a single seed. While JTT is efficient when small sub-groups are present in the data—as is the case in the Waterbirds dataset—the method fails to significantly improve upon ERM when the distribution shift is more severe, as in the Office-Home and Camelyon17 datasets.
|
| 550 |
+
|
| 551 |
+
| Method | Waterbirds | Office-Home | Camelyon17 |
|---|---|---|---|
| ERM | 86.0 | 50.4 | 80.3 |
| JTT | 91.6 | 49.3 | 81.0 |
| D-BAT (ours) | 88.7 | 51.1 | 93.1 |
|
| 552 |
+
|
| 553 |
+
# F.7 COMPARISON WITH LEE ET AL. (2022)
|
| 554 |
+
|
| 555 |
+
The concurrent work of Lee et al. (2022) proposes to measure diversity between two models using the mutual information (MI) between their predictions on the entire OOD distribution, whereas our loss is defined on the per-datapoint difference in the predictions. This means that our loss decomposes as a sum over the datapoints and is well defined on small mini-batches, while computing the mutual information requires processing the entirety (or at least a very large part) of the data. Besides such practical advantages, our notion of diversity naturally arises out of discrepancy-based domain adaptation theory, whereas the choice of using MI is ad-hoc and in fact may not give the expected results. Consider the toy problem in Fig. 3 of Lee et al. (2022): the predictions of the two models actually have maximum mutual information since they predict the exact opposite on all the unlabelled perturbation data. Thus, MI would say that the two models have zero diversity, whereas discrepancy would say they have very high diversity. Hence, MI is theoretically the wrong measure to use. We confirmed this intuition by running experiments on the same setup as in Lee et al. (2022): we compared, for the two notions of diversity (MI and discrepancy), which pairs of predictors are optimal. Results can be seen in Fig. 17.
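The qualitative behavior of Fig. 17 can be reproduced in a few lines; the sketch below (ours) parameterizes the hyperplanes $h_\theta$ directly and uses scikit-learn's mutual information estimate instead of the code of Lee et al. (2022).

```python
# Disagreement rate vs. mutual information between h1 (theta = 0) and h_theta on OOD points.
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)
q1 = rng.uniform(0, 1, size=(500, 2))            # OOD points in [0,1]^2
q2 = rng.uniform(-1, 0, size=(500, 2))           # OOD points in [-1,0]^2
x_ood = np.concatenate([q1, q2])

def predict(theta, x):
    # hyperplane through the origin: theta = 0 is the horizontal split, pi/2 the vertical one
    return (np.sin(theta) * x[:, 0] - np.cos(theta) * x[:, 1] > 0).astype(int)

y1 = predict(0.0, x_ood)
for theta in np.linspace(0, np.pi / 2, 7):
    y2 = predict(theta, x_ood)
    print(f"theta={theta:.2f}  disagreement={np.mean(y1 != y2):.2f}  "
          f"MI={mutual_info_score(y1, y2):.2f}")
# Disagreement is maximal only for the vertical classifier, while MI is maximal both for
# identical and for opposite predictions, so minimizing MI favors the diagonal classifier.
```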
|
| 556 |
+
|
| 557 |
+

|
| 558 |
+
(a) Experimental setup
|
| 559 |
+
|
| 560 |
+

|
| 561 |
+
(b) Disagreement
|
| 562 |
+
Figure 17: Disagreement and mutual information of potential second models $h_2$. In (a) we summarize the experimental setup, which is similar to Fig. 3 of Lee et al. (2022). The training data consists of the diagonal regions, with $[0,1] \times [-1,0]$ as class 1 (positive) and $[-1,0] \times [0,1]$ as class 2 (negative). OOD datapoints $\tilde{X}$ are sampled randomly in the off-diagonal $[-1,0]^2$ and $[0,1]^2$ regions. The set of hyperplanes $h_\theta$ with $\theta \in [0,\pi /2]$ all achieve perfect train accuracy. We fix the first classifier to be the horizontal classifier $h_1 = h_{\theta=0}$. Then, we measure the disagreement between $h_1$ and different choices of $h_2 = h_\theta$ (in b), as well as their mutual information (in c), using the code provided in (Lee et al., 2022). Maximizing the disagreement yields the correct vertical classifier $h_2 = h_{\theta=\frac{\pi}{2}}$, whereas minimizing mutual information would yield the wrong diagonal classifier. The disagreement scores match intuitive definitions of diversity, whereas mutual information does not.
|
| 563 |
+
|
| 564 |
+

|
| 565 |
+
(c) Mutual Information
|
2023/Agree to Disagree_ Diversity through Disagreement for Better Transferability/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:95b0b7bee9bb9c9786c70ea0ed18c40a65517360dc2f510aed1190e722f062a9
|
| 3 |
+
size 579065
|
2023/Agree to Disagree_ Diversity through Disagreement for Better Transferability/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2023/Aligning Model and Macaque Inferior Temporal Cortex Representations Improves Model-to-Human Behavioral Alignment and Adversarial Robustness/7ac7f310-bee1-439e-801a-a6f6b2dfa988_content_list.json
ADDED
|
@@ -0,0 +1,1225 @@
|
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "ALIGNING MODEL AND MACAQUE INFERIOR TEMPORAL CORTEX REPRESENTATIONS IMPROVES MODELTO-HUMAN BEHAVIORAL ALIGNMENT AND ADVERSARIAL ROBUSTNESS",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
171,
|
| 8 |
+
99,
|
| 9 |
+
831,
|
| 10 |
+
196
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Joel Dapello\\*,1,2,3, Kohitij Kar\\*,1,2,4,6, Martin Schrimpf\\*,1,2,4, Robert Geary\\*,1,2,3, Michael Ferguson\\*,1,2,4 David D. Cox\\*, James J. DiCarlo\\*,1,2,4",
|
| 17 |
+
"bbox": [
|
| 18 |
+
181,
|
| 19 |
+
218,
|
| 20 |
+
885,
|
| 21 |
+
250
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "list",
|
| 27 |
+
"sub_type": "text",
|
| 28 |
+
"list_items": [
|
| 29 |
+
"$^{1}$ Department of Brain and Cognitive Sciences, MIT, Cambridge, MA02139",
|
| 30 |
+
"$^{2}$ McGovern Institute for Brain Research, MIT, Cambridge, MA02139",
|
| 31 |
+
"$^{3}$ School of Engineering and Applied Sciences, Harvard University, Cambridge, MA02139",
|
| 32 |
+
"$^{4}$ Center for Brains, Minds and Machines, MIT, Cambridge, MA02139",
|
| 33 |
+
"$^{5}$ MIT-IBM Watson AI Lab",
|
| 34 |
+
"$^{6}$ Department of Biology, Centre for Vision Research at York University, Toronto, CA dapello@mit.edu kohitij@mit.edu"
|
| 35 |
+
],
|
| 36 |
+
"bbox": [
|
| 37 |
+
183,
|
| 38 |
+
250,
|
| 39 |
+
772,
|
| 40 |
+
349
|
| 41 |
+
],
|
| 42 |
+
"page_idx": 0
|
| 43 |
+
},
|
| 44 |
+
{
|
| 45 |
+
"type": "text",
|
| 46 |
+
"text": "ABSTRACT",
|
| 47 |
+
"text_level": 1,
|
| 48 |
+
"bbox": [
|
| 49 |
+
450,
|
| 50 |
+
386,
|
| 51 |
+
545,
|
| 52 |
+
400
|
| 53 |
+
],
|
| 54 |
+
"page_idx": 0
|
| 55 |
+
},
|
| 56 |
+
{
|
| 57 |
+
"type": "text",
|
| 58 |
+
"text": "While some state-of-the-art artificial neural network systems in computer vision are strikingly accurate models of the corresponding primate visual processing, there are still many discrepancies between these models and the behavior of primates on object recognition tasks. Many current models suffer from extreme sensitivity to adversarial attacks and often do not align well with the image-by-image behavioral error patterns observed in humans. Previous research has provided strong evidence that primate object recognition behavior can be very accurately predicted by neural population activity in the inferior temporal (IT) cortex, a brain area in the late stages of the visual processing hierarchy. Therefore, here we directly test whether making the late stage representations of models more similar to that of macaque IT produces new models that exhibit more robust, primate-like behavior. We collected a dataset of chronic, large-scale multi-electrode recordings across the IT cortex in six non-human primates (rhesus macaques). We then use these data to fine-tune (end-to-end) the model \"IT\" representations such that they are more aligned with the biological IT representations, while preserving accuracy on object recognition tasks. We generate a cohort of models with a range of IT similarity scores validated on held-out animals across two image sets with distinct statistics. Across a battery of optimization conditions, we observed a strong correlation between the models' IT-likeness and alignment with human behavior, as well as an increase in its adversarial robustness. We further assessed the limitations of this approach and find that the improvements in behavioral alignment and adversarial robustness generalize across different image statistics, but not to object categories outside of those covered in our IT training set. Taken together, our results demonstrate that building models that are more aligned with the primate brain leads to more robust and human-like behavior, and call for larger neural data-sets to further augment these gains. Code, models, and data are available at https://github.com/dapello/braintree.",
|
| 59 |
+
"bbox": [
|
| 60 |
+
228,
|
| 61 |
+
417,
|
| 62 |
+
767,
|
| 63 |
+
779
|
| 64 |
+
],
|
| 65 |
+
"page_idx": 0
|
| 66 |
+
},
|
| 67 |
+
{
|
| 68 |
+
"type": "text",
|
| 69 |
+
"text": "1 INTRODUCTION AND RELATED WORK",
|
| 70 |
+
"text_level": 1,
|
| 71 |
+
"bbox": [
|
| 72 |
+
171,
|
| 73 |
+
811,
|
| 74 |
+
522,
|
| 75 |
+
825
|
| 76 |
+
],
|
| 77 |
+
"page_idx": 0
|
| 78 |
+
},
|
| 79 |
+
{
|
| 80 |
+
"type": "text",
|
| 81 |
+
"text": "Object recognition models have made incredible strides in the last ten years, (Krizhevsky et al., 2012; Szegedy et al., 2014; Simonyan and Zisserman, 2014; He et al., 2015b; Dosovitskiy et al., 2020; Liu et al., 2022) even surpassing human performance in some benchmarks (He et al., 2015a). While some of these models bear remarkable resemblance to the primate visual system (Daniel L. Yamins, 2013;",
|
| 82 |
+
"bbox": [
|
| 83 |
+
169,
|
| 84 |
+
842,
|
| 85 |
+
826,
|
| 86 |
+
900
|
| 87 |
+
],
|
| 88 |
+
"page_idx": 0
|
| 89 |
+
},
|
| 90 |
+
{
|
| 91 |
+
"type": "header",
|
| 92 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 93 |
+
"bbox": [
|
| 94 |
+
171,
|
| 95 |
+
32,
|
| 96 |
+
478,
|
| 97 |
+
47
|
| 98 |
+
],
|
| 99 |
+
"page_idx": 0
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"type": "page_footnote",
|
| 103 |
+
"text": "*These authors contributed equally to this work.",
|
| 104 |
+
"bbox": [
|
| 105 |
+
192,
|
| 106 |
+
909,
|
| 107 |
+
478,
|
| 108 |
+
922
|
| 109 |
+
],
|
| 110 |
+
"page_idx": 0
|
| 111 |
+
},
|
| 112 |
+
{
|
| 113 |
+
"type": "page_number",
|
| 114 |
+
"text": "1",
|
| 115 |
+
"bbox": [
|
| 116 |
+
493,
|
| 117 |
+
948,
|
| 118 |
+
503,
|
| 119 |
+
959
|
| 120 |
+
],
|
| 121 |
+
"page_idx": 0
|
| 122 |
+
},
|
| 123 |
+
{
|
| 124 |
+
"type": "image",
|
| 125 |
+
"img_path": "images/ac4d91e8a42f884adf017e59567298b5dc19d2bcb96c61ad35a08538ca9c54a9.jpg",
|
| 126 |
+
"image_caption": [
|
| 127 |
+
"B"
|
| 128 |
+
],
|
| 129 |
+
"image_footnote": [],
|
| 130 |
+
"bbox": [
|
| 131 |
+
187,
|
| 132 |
+
102,
|
| 133 |
+
604,
|
| 134 |
+
238
|
| 135 |
+
],
|
| 136 |
+
"page_idx": 1
|
| 137 |
+
},
|
| 138 |
+
{
|
| 139 |
+
"type": "image",
|
| 140 |
+
"img_path": "images/e4f2bd1e38cefe1cd34c6f845e510bf89b33fb28816615b9a6bee68d4d96c001.jpg",
|
| 141 |
+
"image_caption": [
|
| 142 |
+
"C"
|
| 143 |
+
],
|
| 144 |
+
"image_footnote": [],
|
| 145 |
+
"bbox": [
|
| 146 |
+
612,
|
| 147 |
+
116,
|
| 148 |
+
797,
|
| 149 |
+
236
|
| 150 |
+
],
|
| 151 |
+
"page_idx": 1
|
| 152 |
+
},
|
| 153 |
+
{
|
| 154 |
+
"type": "image",
|
| 155 |
+
"img_path": "images/1c9a85bb66ea6c02ce90d38d7744be823658a67e7faa782d9ab8bac99b6b8907.jpg",
|
| 156 |
+
"image_caption": [
|
| 157 |
+
"Figure 1: Aligning model IT representations with primate IT representations improves behavioral alignment and improves adversarial robustness. A) A set of naturalistic images, each containing one of eight different object classes are shown to a CNN and also to three different primate subjects with implanted multi-electrode arrays recording from the Inferior Temporal (IT) cortex. (1) A Base model (ImageNet pre-trained CORnet-S) is fine-tuned using stochastic gradient descent to (2) minimize the classification loss with respect to the ground truth object in each image while also minimizing a representational similarity loss (CKA) that encourages the model's IT representation to be more like those measured in the (pooled) primate subjects. (3) The resultant IT aligned models are then frozen and each tested in three ways. First, model IT representations are evaluated for similarity to biological IT representation (CKA metric) using neural data obtained from new primate subjects - we refer to the split-trial reliabilityCeiled average across all held out macaques and both image sets as \"Validated IT neural similarity\". Second, model output behavioral error patterns are assessed for alignment with human behavioral error patterns at the resolution of individual images (i2n, see Methods). Third, model behavioral output is evaluated for its robustness to white box adversarial attacks using an $L_{\\infty}$ norm projected gradient descent attack. All three tests are carried out with: (i) new images within the IT-alignment training domain (held out HVM images; see Methods) and (ii) new images with novel image statistics (natural COCO images; see Methods), and those empirical results are tracked separately. B) We find that this IT-alignment procedure produced gains in validated IT neural similarity relative to base models on both data sets, and that these gains led to improvement in human behavioral alignment. $n = 30$ models are shown, resulting from training at six different relative weightings of the IT neural similarity loss, each from five base models that derived from five random seeds. C) We also find that these same IT-alignment gains resulted in increased adversarial accuracy $(\\mathrm{PGD} L_{\\infty}, \\epsilon = 1 / 1020)$ on the same model set as in B. Base models trained only for ImageNet and HVM image classification are circled in grey."
|
| 158 |
+
],
|
| 159 |
+
"image_footnote": [],
|
| 160 |
+
"bbox": [
|
| 161 |
+
189,
|
| 162 |
+
250,
|
| 163 |
+
607,
|
| 164 |
+
377
|
| 165 |
+
],
|
| 166 |
+
"page_idx": 1
|
| 167 |
+
},
|
| 168 |
+
{
|
| 169 |
+
"type": "image",
|
| 170 |
+
"img_path": "images/a8092623606e14c97e68bd48092283229cc802c474bab3ac7a53a77a20f95170.jpg",
|
| 171 |
+
"image_caption": [],
|
| 172 |
+
"image_footnote": [],
|
| 173 |
+
"bbox": [
|
| 174 |
+
611,
|
| 175 |
+
250,
|
| 176 |
+
795,
|
| 177 |
+
378
|
| 178 |
+
],
|
| 179 |
+
"page_idx": 1
|
| 180 |
+
},
|
| 181 |
+
{
|
| 182 |
+
"type": "text",
|
| 183 |
+
"text": "Khaligh-Razavi and Kriegeskorte, 2014; Schrimpf et al., 2018; 2020), there remain a number of important discrepancies. In particular, the output behavior of current models, while coarsely aligned with primate object confusion patterns, does not fully match primate error patterns on individual images (Rajalingham et al., 2018; Geirhos et al., 2021). In addition, these same models can be easily fooled by adversarial attacks – targeted pixel-level perturbations intentionally designed to cause the model to produce the wrong output(Szegedy et al., 2013; Carlini and Wagner, 2016; Chen et al., 2017; Rony et al., 2018; Brendel et al., 2019), whereas primate behavior is thought to be more robust to these kinds of attacks. This is an important unsolved problem in engineering artificial intelligence systems; the deviance between model and human behavior has been studied extensively in the machine learning community, often from the perspective of safety in real-world deployment of computer vision systems (Das et al., 2017; Liu et al., 2017; Xu et al., 2017; Madry et al., 2017; Song et al., 2017; Dhillon et al., 2018; Buckman et al., 2018; Guo et al., 2018; Michaelis et al., 2019). From a neuroscience perspective, behavioral differences like these point to different underlying mechanisms",
|
| 184 |
+
"bbox": [
|
| 185 |
+
169,
|
| 186 |
+
742,
|
| 187 |
+
826,
|
| 188 |
+
925
|
| 189 |
+
],
|
| 190 |
+
"page_idx": 1
|
| 191 |
+
},
|
| 192 |
+
{
|
| 193 |
+
"type": "header",
|
| 194 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 195 |
+
"bbox": [
|
| 196 |
+
173,
|
| 197 |
+
32,
|
| 198 |
+
478,
|
| 199 |
+
47
|
| 200 |
+
],
|
| 201 |
+
"page_idx": 1
|
| 202 |
+
},
|
| 203 |
+
{
|
| 204 |
+
"type": "page_number",
|
| 205 |
+
"text": "2",
|
| 206 |
+
"bbox": [
|
| 207 |
+
493,
|
| 208 |
+
948,
|
| 209 |
+
504,
|
| 210 |
+
959
|
| 211 |
+
],
|
| 212 |
+
"page_idx": 1
|
| 213 |
+
},
|
| 214 |
+
{
|
| 215 |
+
"type": "text",
|
| 216 |
+
"text": "and feature representations used for object recognition between the artificial and biological systems, meaning that our scientific understanding of the mechanisms of visual behavior remains incomplete.",
|
| 217 |
+
"bbox": [
|
| 218 |
+
169,
|
| 219 |
+
103,
|
| 220 |
+
826,
|
| 221 |
+
133
|
| 222 |
+
],
|
| 223 |
+
"page_idx": 2
|
| 224 |
+
},
|
| 225 |
+
{
|
| 226 |
+
"type": "text",
|
| 227 |
+
"text": "Incorporating neurophysiological constraints into models to make them behave more in line with primate visual behavior is an active field of research (Marblestone et al., 2016; Lotter et al., 2016; Nayebi and Ganguli, 2017; Guerguiev et al., 2017; Hassabis et al., 2017; Lindsay and Miller, 2018; Tang et al., 2018; Kar et al., 2019; Kubilius et al., 2019; Li et al., 2019; Hasani et al., 2019; Sinz et al., 2019; Zador, 2019; Geiger et al., 2022). Previously, Dapello et al. (2020) demonstrated that convolutional neural network (CNN) models with early visual representations that are more functionally aligned with the early representations of primate visual processing tended to be more robust to adversarial attacks. This correlational observation was turned into a causal test, by simulating a primary visual cortex at the front of CNNs, which was indeed found to improve performance across a range of white box adversarial attacks and common image corruptions. Likewise, several recent studies have demonstrated that training models to classify images while also predicting (Safarani et al., 2021) or having similar representations (Federer et al., 2020) to early visual processing regions of primates, or even mice (Li et al., 2019), has a positive effect on generalization and robustness to adversarial attacks and common image corruptions.",
|
| 228 |
+
"bbox": [
|
| 229 |
+
169,
|
| 230 |
+
138,
|
| 231 |
+
826,
|
| 232 |
+
335
|
| 233 |
+
],
|
| 234 |
+
"page_idx": 2
|
| 235 |
+
},
|
| 236 |
+
{
|
| 237 |
+
"type": "text",
|
| 238 |
+
"text": "However, no research to date has investigated the effects of incorporating biological knowledge of the neural representations in the IT cortex – a late stage visual processing region of the primate ventral stream, which critically supports primate visual object recognition (DiCarlo et al., 2012; Majaj et al., 2015). Here, we developed a method to align the late layer \"IT representations\" of a base object recognition model (CORnet-S (Kubilius et al., 2019) pre-trained on ImageNet (Deng et al., 2009) and naturalistic, grey-scale \"HVM\" images (Majaj et al., 2015)) to the biological IT representation while the model continues to be optimized to perform classification of the dominant object in each image. Using neural recordings performed across the IT cortex of six rhesus macaque monkeys divided into three training animals and three held-out testing animals for validation, we generate a suite of models under a variety of different optimization conditions and measure their IT alignment on held out animals, their alignment with human behavior, and their robustness to a range of adversarial attacks, in all cases on at least two image sets with distinct statistics as shown in figure 1.",
|
| 239 |
+
"bbox": [
|
| 240 |
+
169,
|
| 241 |
+
340,
|
| 242 |
+
826,
|
| 243 |
+
508
|
| 244 |
+
],
|
| 245 |
+
"page_idx": 2
|
| 246 |
+
},
|
| 247 |
+
{
|
| 248 |
+
"type": "text",
|
| 249 |
+
"text": "We report three novel findings:",
|
| 250 |
+
"bbox": [
|
| 251 |
+
171,
|
| 252 |
+
513,
|
| 253 |
+
377,
|
| 254 |
+
529
|
| 255 |
+
],
|
| 256 |
+
"page_idx": 2
|
| 257 |
+
},
|
| 258 |
+
{
|
| 259 |
+
"type": "list",
|
| 260 |
+
"sub_type": "text",
|
| 261 |
+
"list_items": [
|
| 262 |
+
"1. Our method robustly improves IT representational similarity of models to brains even when measured on new animals and new images.",
|
| 263 |
+
"2. We find that gains in model IT-likeness lead to gains in human behavioral alignment.",
|
| 264 |
+
"3. Likewise we find that improved IT-likeness leads to increased adversarial robustness."
|
| 265 |
+
],
|
| 266 |
+
"bbox": [
|
| 267 |
+
207,
|
| 268 |
+
541,
|
| 269 |
+
823,
|
| 270 |
+
606
|
| 271 |
+
],
|
| 272 |
+
"page_idx": 2
|
| 273 |
+
},
|
| 274 |
+
{
|
| 275 |
+
"type": "text",
|
| 276 |
+
"text": "Interestingly, we observe that adversarial training improves robustness but does not significantly increase IT similarity or human behavioral alignment. Finally, while probing the limits of our current IT-alignment procedure, we observed that the improvements in IT similarity, behavioral alignment, and adversarial robustness generalized to images with different image statistics than those in the IT training set (from naturalistic gray scale images to full color natural images) but only for object categories that were part of the original IT training set and not for held-out object categories.",
|
| 277 |
+
"bbox": [
|
| 278 |
+
169,
|
| 279 |
+
619,
|
| 280 |
+
826,
|
| 281 |
+
705
|
| 282 |
+
],
|
| 283 |
+
"page_idx": 2
|
| 284 |
+
},
|
| 285 |
+
{
|
| 286 |
+
"type": "text",
|
| 287 |
+
"text": "2 DATA AND METHODS",
|
| 288 |
+
"text_level": 1,
|
| 289 |
+
"bbox": [
|
| 290 |
+
171,
|
| 291 |
+
724,
|
| 292 |
+
385,
|
| 293 |
+
739
|
| 294 |
+
],
|
| 295 |
+
"page_idx": 2
|
| 296 |
+
},
|
| 297 |
+
{
|
| 298 |
+
"type": "text",
|
| 299 |
+
"text": "Here we describe the neural and behavioral data collection, the training and testing methods used for aligning model representations with IT representations, and the methods for assessing behavioral alignment and adversarial robustness.",
|
| 300 |
+
"bbox": [
|
| 301 |
+
169,
|
| 302 |
+
755,
|
| 303 |
+
826,
|
| 304 |
+
797
|
| 305 |
+
],
|
| 306 |
+
"page_idx": 2
|
| 307 |
+
},
|
| 308 |
+
{
|
| 309 |
+
"type": "text",
|
| 310 |
+
"text": "2.1 IMAGE SETS",
|
| 311 |
+
"text_level": 1,
|
| 312 |
+
"bbox": [
|
| 313 |
+
171,
|
| 314 |
+
814,
|
| 315 |
+
302,
|
| 316 |
+
828
|
| 317 |
+
],
|
| 318 |
+
"page_idx": 2
|
| 319 |
+
},
|
| 320 |
+
{
|
| 321 |
+
"type": "text",
|
| 322 |
+
"text": "High-quality synthetic \"naturalistic\" images of single objects (HVM images) were generated using free ray-tracing software (http://www.povray.org), similar to (Majaj et al., 2015). Each image consisted of a 2D projection of a 3D model (purchased from Dosch Design and TurboSquid) added to a random natural background. The ten objects chosen were bear, elephant, face, apple, car, dog, chair, plane, bird and zebra. By varying six viewing parameters, we explored three types of identity while preserving object variation, position (x and y), rotation (x, y, and z), and size. All images",
|
| 323 |
+
"bbox": [
|
| 324 |
+
169,
|
| 325 |
+
839,
|
| 326 |
+
826,
|
| 327 |
+
926
|
| 328 |
+
],
|
| 329 |
+
"page_idx": 2
|
| 330 |
+
},
|
| 331 |
+
{
|
| 332 |
+
"type": "header",
|
| 333 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 334 |
+
"bbox": [
|
| 335 |
+
171,
|
| 336 |
+
32,
|
| 337 |
+
478,
|
| 338 |
+
47
|
| 339 |
+
],
|
| 340 |
+
"page_idx": 2
|
| 341 |
+
},
|
| 342 |
+
{
|
| 343 |
+
"type": "page_number",
|
| 344 |
+
"text": "3",
|
| 345 |
+
"bbox": [
|
| 346 |
+
493,
|
| 347 |
+
948,
|
| 348 |
+
504,
|
| 349 |
+
959
|
| 350 |
+
],
|
| 351 |
+
"page_idx": 2
|
| 352 |
+
},
|
| 353 |
+
{
|
| 354 |
+
"type": "text",
|
| 355 |
+
"text": "were achromatic with a native resolution of $256 \\times 256$ pixels. Additionally, natural microsoft COCO images (photographs) pertaining to the 10 nouns, were download from http://cocodataset.org (Lin et al., 2014). Each image was resized (not cropped) to $256 \\times 256 \\times 3$ pixel size and presented within the central 8 deg.",
|
| 356 |
+
"bbox": [
|
| 357 |
+
169,
|
| 358 |
+
103,
|
| 359 |
+
823,
|
| 360 |
+
161
|
| 361 |
+
],
|
| 362 |
+
"page_idx": 3
|
| 363 |
+
},
|
| 364 |
+
{
|
| 365 |
+
"type": "text",
|
| 366 |
+
"text": "2.2 PRIMATE NEURAL DATA COLLECTION AND PROCESSING",
|
| 367 |
+
"text_level": 1,
|
| 368 |
+
"bbox": [
|
| 369 |
+
171,
|
| 370 |
+
186,
|
| 371 |
+
602,
|
| 372 |
+
200
|
| 373 |
+
],
|
| 374 |
+
"page_idx": 3
|
| 375 |
+
},
|
| 376 |
+
{
|
| 377 |
+
"type": "text",
|
| 378 |
+
"text": "We surgically implanted each monkey with a head post under aseptic conditions. We recorded neural activity using two or three micro-electrode arrays (Utah arrays; Blackrock Microsystems) implanted in IT cortex. A total of 96 electrodes were connected per array (grid arrangement, $400\\mathrm{um}$ spacing, $4\\mathrm{mm}\\times 4\\mathrm{mm}$ span of each array). Array placement was guided by the sulcus pattern, which was visible during the surgery. The electrodes were accessed through a percutaneous connector that allowed simultaneous recording from all 96 electrodes from each array. All surgical and animal procedures were performed in accordance with National Institutes of Health guidelines and the Massachusetts Institute of Technology Committee on Animal Care. For information on the neural recording quality metrics per site, see supplemental section A.1.",
|
| 379 |
+
"bbox": [
|
| 380 |
+
169,
|
| 381 |
+
215,
|
| 382 |
+
826,
|
| 383 |
+
340
|
| 384 |
+
],
|
| 385 |
+
"page_idx": 3
|
| 386 |
+
},
|
| 387 |
+
{
|
| 388 |
+
"type": "text",
|
| 389 |
+
"text": "During each daily recording session, band-pass filtered (0.1 Hz to 10 kHz) neural activity was recorded continuously at a sampling rate of 20 kHz using Intan Recording Controllers (Intan Technologies, LLC). The majority of the data presented here were based on multiunit activity. We detected the multiunit spikes after the raw voltage data were collected. A multiunit spike event was defined as the threshold crossing when voltage (falling edge) deviated by more than three times the standard deviation of the raw voltage values. Our array placements allowed us to sample neural sites from different parts of IT, along the posterior to anterior axis. However, for all the analyses, we did not consider the specific spatial location of the site, and treated each site as a random sample from a pooled IT population. For information on the neural recording quality metrics, see supplemental section A.1.",
|
| 390 |
+
"bbox": [
|
| 391 |
+
169,
|
| 392 |
+
347,
|
| 393 |
+
826,
|
| 394 |
+
486
|
| 395 |
+
],
|
| 396 |
+
"page_idx": 3
|
| 397 |
+
},
|
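As a rough illustration of the multiunit event definition described in the extracted text above (falling-edge crossings of a threshold set at three standard deviations of the raw voltage), the following Python sketch detects crossing times. It is a simplification under stated assumptions, not the recording pipeline used in the study, and all names are hypothetical.

```python
import numpy as np

def multiunit_spike_times(voltage, fs=20_000, n_std=3.0):
    """Falling-edge threshold crossings at -n_std times the standard deviation of
    the raw trace, returned in seconds (simplified sketch of the event definition
    described above; real pipelines band-pass filter and handle artifacts first)."""
    threshold = -n_std * np.std(voltage)
    below = voltage < threshold
    onsets = np.flatnonzero(~below[:-1] & below[1:]) + 1   # first sample below threshold
    return onsets / fs
```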
| 398 |
+
{
|
| 399 |
+
"type": "text",
|
| 400 |
+
"text": "Behavioral state during neural data collection All neural response data were obtained during a passive viewing task. In this task, monkeys fixated a white square dot $(0.2^{\\circ})$ for $300\\mathrm{ms}$ to initiate a trial. We then presented a sequence of 5 to 10 images, each ON for $100\\mathrm{ms}$ followed by a $100\\mathrm{ms}$ gray blank screen. This was followed by a water reward and an inter trial interval of $500\\mathrm{ms}$ , followed by the next sequence. Trials were aborted if gaze was not held within $\\pm 2^{\\circ}$ of the central fixation dot during any point. Each neural site's response to each image was taken as the mean rate during a time window of $70 - 170\\mathrm{ms}$ following image onset, a window that has been previously chosen to align with the visually-driven latency of IT neurons and their quantitative relationship to object classification behavior as in Majaj et al. (2015).",
|
| 401 |
+
"bbox": [
|
| 402 |
+
169,
|
| 403 |
+
493,
|
| 404 |
+
826,
|
| 405 |
+
619
|
| 406 |
+
],
|
| 407 |
+
"page_idx": 3
|
| 408 |
+
},
|
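The per-site response definition above (mean rate in a 70-170 ms window after image onset) could be computed along the following lines; this is a minimal sketch with hypothetical names, not the study's analysis code.

```python
import numpy as np

def windowed_rate(spike_times, image_onset, t_start=0.070, t_stop=0.170):
    """Mean firing rate (Hz) in the 70-170 ms window after image onset for one
    site and one presentation; all times are in seconds (illustrative helper)."""
    t = np.asarray(spike_times) - image_onset
    n_spikes = np.count_nonzero((t >= t_start) & (t < t_stop))
    return n_spikes / (t_stop - t_start)
```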
| 409 |
+
{
|
| 410 |
+
"type": "text",
|
| 411 |
+
"text": "2.3 HUMAN BEHAVIORAL DATA COLLECTION",
|
| 412 |
+
"text_level": 1,
|
| 413 |
+
"bbox": [
|
| 414 |
+
171,
|
| 415 |
+
643,
|
| 416 |
+
501,
|
| 417 |
+
657
|
| 418 |
+
],
|
| 419 |
+
"page_idx": 3
|
| 420 |
+
},
|
| 421 |
+
{
|
| 422 |
+
"type": "text",
|
| 423 |
+
"text": "We measured human behavior (from 88 subjects) using the online Amazon MTurk platform which enables efficient collection of large-scale psychophysical data from crowd-sourced \"human intelligence tasks\" (HITs). The reliability of the online MTurk platform has been validated by comparing results obtained from online and in-lab psychophysical experiments (Majaj et al., 2015; Rajalingham et al., 2015). Each trial started with a $100\\mathrm{ms}$ presentation of the sample image (one our of 1320 images). This was followed by a blank gray screen for $100\\mathrm{ms}$ ; followed by a choice screen with the target and distractor objects, similar to (Rajalingham et al., 2018). The subjects indicated their choice by touching the screen or clicking the mouse over the target object. Each subjects saw an image only once. We collected the data such that, there were 80 unique subject responses per image, with varied distractor objects. Prior work has shown that human and macaque behavioral patterns are nearly identical, even at the image grain (Rajalingham et al., 2018). For information on the human behavioral data collection, see supplemental section A.2.",
|
| 424 |
+
"bbox": [
|
| 425 |
+
169,
|
| 426 |
+
672,
|
| 427 |
+
828,
|
| 428 |
+
840
|
| 429 |
+
],
|
| 430 |
+
"page_idx": 3
|
| 431 |
+
},
|
| 432 |
+
{
|
| 433 |
+
"type": "text",
|
| 434 |
+
"text": "2.4 ALIGNING MODEL REPRESENTATIONS WITH MACAQUE IT REPRESENTATIONS",
|
| 435 |
+
"text_level": 1,
|
| 436 |
+
"bbox": [
|
| 437 |
+
171,
|
| 438 |
+
866,
|
| 439 |
+
750,
|
| 440 |
+
881
|
| 441 |
+
],
|
| 442 |
+
"page_idx": 3
|
| 443 |
+
},
|
| 444 |
+
{
|
| 445 |
+
"type": "text",
|
| 446 |
+
"text": "In order to align neural network model representations with primate IT representations while performing classification, we use a multi-loss formulation similar to that used in Li et al. (2019) and Federer",
|
| 447 |
+
"bbox": [
|
| 448 |
+
169,
|
| 449 |
+
895,
|
| 450 |
+
828,
|
| 451 |
+
925
|
| 452 |
+
],
|
| 453 |
+
"page_idx": 3
|
| 454 |
+
},
|
| 455 |
+
{
|
| 456 |
+
"type": "header",
|
| 457 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 458 |
+
"bbox": [
|
| 459 |
+
171,
|
| 460 |
+
32,
|
| 461 |
+
478,
|
| 462 |
+
47
|
| 463 |
+
],
|
| 464 |
+
"page_idx": 3
|
| 465 |
+
},
|
| 466 |
+
{
|
| 467 |
+
"type": "page_number",
|
| 468 |
+
"text": "4",
|
| 469 |
+
"bbox": [
|
| 470 |
+
491,
|
| 471 |
+
948,
|
| 472 |
+
504,
|
| 473 |
+
959
|
| 474 |
+
],
|
| 475 |
+
"page_idx": 3
|
| 476 |
+
},
|
| 477 |
+
{
|
| 478 |
+
"type": "text",
|
| 479 |
+
"text": "et al. (2020). Starting with an ImageNet (Deng et al., 2009) pre-trained CORnet-S model (Kubilius et al., 2019), we used stochastic gradient descent (SGD) on all model weights to jointly minimize a standard categorical cross entropy loss on model predictions of ImageNet labels (maintained from model pre-training, for stability), HVM image labels, and a centered kernel alignment (CKA) based loss penalizing the \"IT\" layer of CORnet-S for having representations not aligned with primate IT representations of the HVM images. CORnet-S was selected because it already has a clearly defined layer committed to region IT, close to the final linear readout of the network, but otherwise our procedure is compatible with any neural network architecture. Meanwhile CKA, a measure of linear subspace alignment, was selected as a representational similarity measure. CKA has ideal properties such as invariance to isotropic scaling and orthonormal transformations which do not matter from the perspective of a linear readout, but sensitivity to arbitrary linear transformations (Kornblith et al., 2019) which could lead to differences from a linear readout as well as allow the network to hide representations useful for image classification but not present within primate IT. CKA ranges from 0, indicating completely non-overlapping subspaces, to 1, indicating completely aligned subspaces. We found that our best neural alignment results came from minimizing the neural similarity loss function $\\log(1 - CKA(X,Y))$ , where $X \\in \\mathbb{R}^{n \\times p_1}$ and $Y \\in \\mathbb{R}^{n \\times p_2}$ denote two column centered activation matrices with generated by showing $n$ example images and recording $p_1$ and $p_2$ neurons from the IT layer of CORnet-S and macaque IT recordings respectively. The macaque neural activation matrices were generated by averaging over approximately 50 trials per image and over a 70-170 millisecond time window following image presentation. An illustration of our setup is shown in figure 1A.",
|
| 480 |
+
"bbox": [
|
| 481 |
+
169,
|
| 482 |
+
103,
|
| 483 |
+
826,
|
| 484 |
+
383
|
| 485 |
+
],
|
| 486 |
+
"page_idx": 4
|
| 487 |
+
},
|
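For readers of this content list, the linear CKA measure and the $\log(1 - CKA(X,Y))$ penalty described above can be sketched in a few lines of PyTorch. This is an illustrative snippet following the Kornblith et al. (2019) definition, not code taken from the braintree repository, and the function names are hypothetical.

```python
import torch

def linear_cka(x, y):
    """Linear centered kernel alignment between two activation matrices.

    x: (n_images, p1) model 'IT' activations; y: (n_images, p2) neural activations.
    Both are column-centered before computing the Frobenius-norm ratio."""
    x = x - x.mean(dim=0, keepdim=True)
    y = y - y.mean(dim=0, keepdim=True)
    cross = torch.linalg.norm(x.T @ y) ** 2    # ||X^T Y||_F^2
    self_x = torch.linalg.norm(x.T @ x)        # ||X^T X||_F
    self_y = torch.linalg.norm(y.T @ y)        # ||Y^T Y||_F
    return cross / (self_x * self_y)

def it_similarity_loss(model_acts, neural_acts):
    """The log(1 - CKA(X, Y)) penalty described in the text above."""
    return torch.log(1.0 - linear_cka(model_acts, neural_acts))
```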
| 488 |
+
{
|
| 489 |
+
"type": "text",
|
| 490 |
+
"text": "2.5 TRAINING AND TESTING CONDITIONS",
|
| 491 |
+
"text_level": 1,
|
| 492 |
+
"bbox": [
|
| 493 |
+
171,
|
| 494 |
+
402,
|
| 495 |
+
482,
|
| 496 |
+
416
|
| 497 |
+
],
|
| 498 |
+
"page_idx": 4
|
| 499 |
+
},
|
| 500 |
+
{
|
| 501 |
+
"type": "text",
|
| 502 |
+
"text": "In all reported experiments, model IT representational similarity training was performed on 2880 grey-scale naturalistic HVM image representations consisting of 188 active neural sites collated from the three training set macaques for 1200 epochs. We use a batch size of 128, meaning the CKA loss computed for a random set of 128 representations for each gradient step. In order to create models with a variety of different final neural alignment scores, we add random probability $1 - p$ of dropping the IT alignment gradients and create six different sets (5 random seeds for each set) of neurally aligned models with $p \\in [0, 1/32, 1/16, 1/8, 1/4, 1/2, 1]$ . For example, the set with $p = 0$ drops all of the IT alignment gradients and thus has no improved IT alignment over the base model, while the set with $p = 1$ always includes the IT alignment gradients and similarly achieves the highest IT alignment scores (see figure 2). We also introduce a small amount of data augmentation including the physical equivalent of 0.5 degrees of jitter the vertical and horizontal position of the images, 0.5 degrees of rotational jitter, and +/- 0.1 degrees of scaling jitter, assuming our model has an 8 degree field of view. These augmentations were selected to simulate natural viewing conditions.",
|
| 503 |
+
"bbox": [
|
| 504 |
+
169,
|
| 505 |
+
429,
|
| 506 |
+
826,
|
| 507 |
+
609
|
| 508 |
+
],
|
| 509 |
+
"page_idx": 4
|
| 510 |
+
},
|
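The gradient-dropping scheme described above (keep the IT alignment gradients with probability $p$, drop them otherwise, alongside the classification loss) might look like the following sketch. It assumes a model that returns both class logits and its committed "IT" layer, and reuses the `it_similarity_loss` helper sketched earlier; it is illustrative only, not the authors' training loop.

```python
import random
import torch
import torch.nn.functional as F

def joint_training_step(model, images, labels, it_targets, p_keep, optimizer):
    """One illustrative joint step: classification loss plus, with probability
    p_keep, the representational similarity penalty (hypothetical names)."""
    optimizer.zero_grad()
    logits, model_it = model(images)                 # assumed (logits, IT activations) output
    loss = F.cross_entropy(logits, labels)
    if random.random() < p_keep:
        # it_similarity_loss is the log(1 - CKA) penalty sketched above
        loss = loss + it_similarity_loss(model_it, it_targets)
    loss.backward()
    optimizer.step()
    return float(loss.detach())
```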
| 511 |
+
{
|
| 512 |
+
"type": "text",
|
| 513 |
+
"text": "Model IT representational similarity testing was performed on a total of three held out monkeys: Monkey 1 (280 neural sites) and monkey 2 (144 neural sites) on 320 held out HVM images with statistics similar to the training distribution, and monkey 1 (237 neural sites) and monkey 3 (106 active neural sites) on 200 full color natural COCO images with different statistics than those used during training. Additional model training information can be found in supplemental section B.",
|
| 514 |
+
"bbox": [
|
| 515 |
+
169,
|
| 516 |
+
616,
|
| 517 |
+
823,
|
| 518 |
+
686
|
| 519 |
+
],
|
| 520 |
+
"page_idx": 4
|
| 521 |
+
},
|
| 522 |
+
{
|
| 523 |
+
"type": "text",
|
| 524 |
+
"text": "For performing white box adversarial attacks, we used untargeted projected gradient descent (PGD) (Madry et al., 2017) with $L_{\\infty}$ and $L_{2}$ norm constraints. Further details are given in supplemental section B.",
|
| 525 |
+
"bbox": [
|
| 526 |
+
169,
|
| 527 |
+
693,
|
| 528 |
+
823,
|
| 529 |
+
736
|
| 530 |
+
],
|
| 531 |
+
"page_idx": 4
|
| 532 |
+
},
|
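A minimal untargeted PGD attack with an $L_{\infty}$ constraint, as referenced above, can be sketched as follows; the step size, step count, and pixel range are assumptions for illustration and do not reproduce the exact evaluation code.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps, alpha, steps):
    """Minimal untargeted PGD sketch under an L-infinity constraint."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()            # ascend the loss
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)   # project into the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)                           # keep a valid image range
    return x_adv.detach()
```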
| 533 |
+
{
|
| 534 |
+
"type": "text",
|
| 535 |
+
"text": "2.6 BEHAVIORAL BENCHMARKS",
|
| 536 |
+
"text_level": 1,
|
| 537 |
+
"bbox": [
|
| 538 |
+
171,
|
| 539 |
+
756,
|
| 540 |
+
415,
|
| 541 |
+
768
|
| 542 |
+
],
|
| 543 |
+
"page_idx": 4
|
| 544 |
+
},
|
| 545 |
+
{
|
| 546 |
+
"type": "text",
|
| 547 |
+
"text": "To characterize the behavior of the visual system, we have used an image-level behavioral metric, i2n (Rajalingham et al., 2018). The behavioral metric computes a pattern of unbiased behavioral performances, using a sensitivity index: $d' = Z(HitRate) - Z(FalseAlarmRate)$ , where $Z$ is the inverse of the cumulative Gaussian distribution. The HitRates for i2n are the accuracies of the subjects when a specific image is shown and the choices include the target object (i.e., the object present in the image) and one other specific distractor object. So for every distractor-target pair we get a different i2n entry. A detailed description of how to compute i2n can be also found at Rajalingham",
|
| 548 |
+
"bbox": [
|
| 549 |
+
169,
|
| 550 |
+
782,
|
| 551 |
+
826,
|
| 552 |
+
882
|
| 553 |
+
],
|
| 554 |
+
"page_idx": 4
|
| 555 |
+
},
|
| 556 |
+
{
|
| 557 |
+
"type": "header",
|
| 558 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 559 |
+
"bbox": [
|
| 560 |
+
171,
|
| 561 |
+
32,
|
| 562 |
+
478,
|
| 563 |
+
47
|
| 564 |
+
],
|
| 565 |
+
"page_idx": 4
|
| 566 |
+
},
|
| 567 |
+
{
|
| 568 |
+
"type": "page_footnote",
|
| 569 |
+
"text": "the pre-trained version was selected as a starting point because of the relatively small number of training samples in our dataset (Riedel, 2022).",
|
| 570 |
+
"bbox": [
|
| 571 |
+
169,
|
| 572 |
+
897,
|
| 573 |
+
823,
|
| 574 |
+
925
|
| 575 |
+
],
|
| 576 |
+
"page_idx": 4
|
| 577 |
+
},
|
| 578 |
+
{
|
| 579 |
+
"type": "page_number",
|
| 580 |
+
"text": "5",
|
| 581 |
+
"bbox": [
|
| 582 |
+
493,
|
| 583 |
+
948,
|
| 584 |
+
504,
|
| 585 |
+
959
|
| 586 |
+
],
|
| 587 |
+
"page_idx": 4
|
| 588 |
+
},
|
| 589 |
+
{
|
| 590 |
+
"type": "text",
|
| 591 |
+
"text": "et al. (2018). The i2n behavioral benchmark was computed using the Brain-Score implementation of the i2n metric (Schrimpf et al., 2018).",
|
| 592 |
+
"bbox": [
|
| 593 |
+
169,
|
| 594 |
+
103,
|
| 595 |
+
826,
|
| 596 |
+
132
|
| 597 |
+
],
|
| 598 |
+
"page_idx": 5
|
| 599 |
+
},
|
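The sensitivity index underlying i2n, $d' = Z(HitRate) - Z(FalseAlarmRate)$, can be computed as below. Clipping the rates away from 0 and 1 is a common convention assumed here so the inverse cumulative Gaussian stays finite; it is not necessarily the exact choice made in the Brain-Score implementation.

```python
import numpy as np
from scipy.stats import norm

def dprime(hit_rate, false_alarm_rate, clip=(0.01, 0.99)):
    """d' = Z(HitRate) - Z(FalseAlarmRate), with Z the inverse cumulative Gaussian."""
    hr = np.clip(hit_rate, *clip)
    fa = np.clip(false_alarm_rate, *clip)
    return norm.ppf(hr) - norm.ppf(fa)
```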
| 600 |
+
{
|
| 601 |
+
"type": "image",
|
| 602 |
+
"img_path": "images/91dcccfe6330077cdabe0bb71ed283679bad60cc7789746d4dfd69560a3c21ff.jpg",
|
| 603 |
+
"image_caption": [
|
| 604 |
+
"Figure 2: IT alignment training leads to improved IT representational similarity on held out animals and held out images across two image sets with different statistics. A) IT neural similarity scores (CKA, normalized by split-half trial reliability) for held out but within domain HVM images vs gradient steps is shown for two held out monkeys across seven different neural similarity loss gradient dropout rates (the darkest trace receives neural similarity loss gradients at $100\\%$ of gradient steps, while in the lightest trace neural similarity loss gradients are dropped at every step). Two control conditions are also shown: optimizing model IT toward a random Gaussian target IT matrix (random, blue) and toward an image-shuffled target IT matrix (shuffle, orange). B) Like A but for natural COCO images out of domain with respect to the training set. Grey dashed line on each plot shows the base model score for models pre-trained on ImageNet and HVM image labels with no IT representational similarity loss, which the model set with $0\\%$ of IT similarity loss gradients does not deviate significantly from. Error bars are bootstrapped confidence intervals for 5 training seeds."
|
| 605 |
+
],
|
| 606 |
+
"image_footnote": [],
|
| 607 |
+
"bbox": [
|
| 608 |
+
173,
|
| 609 |
+
147,
|
| 610 |
+
472,
|
| 611 |
+
281
|
| 612 |
+
],
|
| 613 |
+
"page_idx": 5
|
| 614 |
+
},
|
| 615 |
+
{
|
| 616 |
+
"type": "image",
|
| 617 |
+
"img_path": "images/894fe1e2473dd2e6703ac32a921d125e7473ba141c444b1618fb16ebbe39edfb.jpg",
|
| 618 |
+
"image_caption": [],
|
| 619 |
+
"image_footnote": [],
|
| 620 |
+
"bbox": [
|
| 621 |
+
483,
|
| 622 |
+
146,
|
| 623 |
+
825,
|
| 624 |
+
281
|
| 625 |
+
],
|
| 626 |
+
"page_idx": 5
|
| 627 |
+
},
|
| 628 |
+
{
|
| 629 |
+
"type": "text",
|
| 630 |
+
"text": "3 RESULTS",
|
| 631 |
+
"text_level": 1,
|
| 632 |
+
"bbox": [
|
| 633 |
+
171,
|
| 634 |
+
489,
|
| 635 |
+
284,
|
| 636 |
+
505
|
| 637 |
+
],
|
| 638 |
+
"page_idx": 5
|
| 639 |
+
},
|
| 640 |
+
{
|
| 641 |
+
"type": "text",
|
| 642 |
+
"text": "Does aligning late stage model representations with primate IT representations lead to improvements in alignment with image-by-image patterns of human behavior or improvements in white box adversarial robustness? We start by testing if our method can generate models that are truly more IT-like by validating on held out animals and images, as this has not been previously attempted and is not guaranteed to work given the sampling limitations of neural recording experiments. We then proceed to analyze how these IT-aligned models fair on several human behavioral alignment benchmarks and a diverse set of white box adversarial attacks.",
|
| 643 |
+
"bbox": [
|
| 644 |
+
169,
|
| 645 |
+
522,
|
| 646 |
+
823,
|
| 647 |
+
619
|
| 648 |
+
],
|
| 649 |
+
"page_idx": 5
|
| 650 |
+
},
|
| 651 |
+
{
|
| 652 |
+
"type": "text",
|
| 653 |
+
"text": "3.1 DIRECT FITTING TO IT NEURAL DATA IMPROVES IT-LIKENESS OF MODELS ACROSS HELD OUT ANIMALS AND IMAGE SETS",
|
| 654 |
+
"text_level": 1,
|
| 655 |
+
"bbox": [
|
| 656 |
+
171,
|
| 657 |
+
638,
|
| 658 |
+
823,
|
| 659 |
+
667
|
| 660 |
+
],
|
| 661 |
+
"page_idx": 5
|
| 662 |
+
},
|
| 663 |
+
{
|
| 664 |
+
"type": "text",
|
| 665 |
+
"text": "First, we investigated how well our IT alignment optimization procedure generalizes to IT neural similarity measurements (CKA) for two held out test monkeys on 320 held out HVM images (similar image statistics as the training set). Figure 2A shows theCeiled IT neural similarity scores for both test animals across different neural similarity loss gradient dropout rates $(p\\in [0,1 / 32,1 / 16,1 / 8,1 / 4,1 / 2,1])$ ; the model marked $100\\%$ sees IT similarity loss gradients at every step, where as the model marked $0\\%$ never sees IT similarity loss gradients) as well as models optimized to classify HVM images while fitting a random Gaussian target activation matrix, or an image shuffled target activation matrix which has the same first and second order statistics as the true IT activation matrix, but scrambled image information. For both animals, we see a significant positive shift from the unfitted model (neural loss weight of 0.0), with higher relative neural loss weights generally leading to higher IT neural similarity scores. Meanwhile, both of the control conditions cause models to become less IT like, to a significant degree.",
|
| 666 |
+
"bbox": [
|
| 667 |
+
169,
|
| 668 |
+
680,
|
| 669 |
+
826,
|
| 670 |
+
848
|
| 671 |
+
],
|
| 672 |
+
"page_idx": 5
|
| 673 |
+
},
|
| 674 |
+
{
|
| 675 |
+
"type": "text",
|
| 676 |
+
"text": "We next investigated how well our procedure generalizes from the grey-scale naturalistic HVM images to full color, natural images from COCO. Figure 2B shows the same model optimization conditions as before, but now on two unseen animal IT representations of COCO images. Like in 2A although to a lesser absolute degree, we see improvements relative to the baseline in IT neural similarity as function of the neural loss weight, and controls generally decreasing in IT neural similarity. From",
|
| 677 |
+
"bbox": [
|
| 678 |
+
169,
|
| 679 |
+
854,
|
| 680 |
+
823,
|
| 681 |
+
925
|
| 682 |
+
],
|
| 683 |
+
"page_idx": 5
|
| 684 |
+
},
|
| 685 |
+
{
|
| 686 |
+
"type": "header",
|
| 687 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 688 |
+
"bbox": [
|
| 689 |
+
171,
|
| 690 |
+
32,
|
| 691 |
+
478,
|
| 692 |
+
47
|
| 693 |
+
],
|
| 694 |
+
"page_idx": 5
|
| 695 |
+
},
|
| 696 |
+
{
|
| 697 |
+
"type": "page_number",
|
| 698 |
+
"text": "6",
|
| 699 |
+
"bbox": [
|
| 700 |
+
493,
|
| 701 |
+
948,
|
| 702 |
+
504,
|
| 703 |
+
959
|
| 704 |
+
],
|
| 705 |
+
"page_idx": 5
|
| 706 |
+
},
|
| 707 |
+
{
|
| 708 |
+
"type": "text",
|
| 709 |
+
"text": "this, we conclude that our IT alignment procedure is able to improve IT-likeness in our models even in held out animals and across two image sets with distinct statistics.",
|
| 710 |
+
"bbox": [
|
| 711 |
+
169,
|
| 712 |
+
103,
|
| 713 |
+
823,
|
| 714 |
+
132
|
| 715 |
+
],
|
| 716 |
+
"page_idx": 6
|
| 717 |
+
},
|
| 718 |
+
{
|
| 719 |
+
"type": "text",
|
| 720 |
+
"text": "3.2 INCREASED BEHAVIORAL ALIGNMENT IN MODELS THAT BETTER MATCH MACAQUE IT",
|
| 721 |
+
"text_level": 1,
|
| 722 |
+
"bbox": [
|
| 723 |
+
171,
|
| 724 |
+
157,
|
| 725 |
+
810,
|
| 726 |
+
172
|
| 727 |
+
],
|
| 728 |
+
"page_idx": 6
|
| 729 |
+
},
|
| 730 |
+
{
|
| 731 |
+
"type": "text",
|
| 732 |
+
"text": "Next, we investigated how single image level classification error patterns correlate between humans and IT aligned models. To get a big picture view, we take all of the optimization conditions and validation epochs generated in figure 2A while models are training and compare IT neural similarity on the HVM test set (averaged over held out animals) with human behavioral alignment on the HVM test set. As shown in figure 3A, this analysis reveals a broad, though not linear correlation between IT neural similarity and behavioral alignment. Interestingly, we observe that the slope is at its steepest when IT neural similarity is at the highest values, suggesting that if an even higher degree of IT-alignment might result in greater increases in behavioral alignment. We also investigated whether these trends persist when we exclude the optimization on object labels from the HVM images and only optimize for IT neural similarity. To do so, we train the models on all previous conditions but without the HVM object-label loss. As shown in 3, the overall shape of the trend remains quite similar, though the absolute behavioral alignment shifts downward, indicating that the label information during training helps on the behavioral task, but is not required for the trend to hold. In figure 3B, we perform the same set of measurements but now focusing on the COCO image set. Consistent with the observation on COCO IT neural similarity, the behavioral alignment trend transfers to the COCO image set although the absolute magnitude of the improvements are less.",
|
| 733 |
+
"bbox": [
|
| 734 |
+
169,
|
| 735 |
+
186,
|
| 736 |
+
826,
|
| 737 |
+
411
|
| 738 |
+
],
|
| 739 |
+
"page_idx": 6
|
| 740 |
+
},
|
| 741 |
+
{
|
| 742 |
+
"type": "text",
|
| 743 |
+
"text": "Finally, using the Brain-Score platform (Schrimpf et al., 2018), we benchmark our models against publicly available human behavioral data from the Objectome image set Rajalingham et al. (2018) which has similar image statistics to our HVM IT fitting set (with a total of 24 object categories, only four of which overlap with the training set). As demonstrated in figure 4C, when the Objectome data are filtered down to just the four overlapping categories, our most IT similar models are again the most behaviorally aligned, well above the unfit baseline and control conditions, which remain close to the floor for much of the plot. However, As shown in figure 3D, when considering all 24 object categories in the Objectome dataset, we see that the trend of increasing human behavioral alignment does not hold and our models actually begin to fair worse in terms of human behavioral alignment at higher levels of IT neural similarity. As shown in figure supp A.1, using a linear probe to assess image class information content (measured by classification accuracy on held out representations) reveals that these models are losing class information content for the Objectome image set, which drives the decrease in behavioral alignment, as the model makes more mistakes overall than a human. Similarly, a linear probe analysis reveals minimal loss in class information in the overlapping categories. Thus, we observe that while our method leads to increased human behavioral alignment across different image statistics, it does not currently lead to improved alignment on unseen object categories.",
|
| 744 |
+
"bbox": [
|
| 745 |
+
169,
|
| 746 |
+
416,
|
| 747 |
+
826,
|
| 748 |
+
640
|
| 749 |
+
],
|
| 750 |
+
"page_idx": 6
|
| 751 |
+
},
|
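The linear probe analysis mentioned above (classification accuracy of a linear readout on held-out representations as a proxy for class information content) could be approximated with a sketch like the following; the split and regularization settings are assumptions, not the paper's exact procedure.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def linear_probe_accuracy(features, labels, seed=0):
    """Accuracy of a linear readout fit on half of the representations and
    scored on the held-out half (illustrative proxy for class information)."""
    x_tr, x_te, y_tr, y_te = train_test_split(
        features, labels, test_size=0.5, random_state=seed, stratify=labels)
    probe = LogisticRegression(max_iter=5000).fit(x_tr, y_tr)
    return probe.score(x_te, y_te)
```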
| 752 |
+
{
|
| 753 |
+
"type": "text",
|
| 754 |
+
"text": "3.3 INCREASED ADVERSARIAL ROBUSTNESS IN MODELS THAT BETTER MATCH MACAQUE IT",
|
| 755 |
+
"text_level": 1,
|
| 756 |
+
"bbox": [
|
| 757 |
+
171,
|
| 758 |
+
664,
|
| 759 |
+
823,
|
| 760 |
+
679
|
| 761 |
+
],
|
| 762 |
+
"page_idx": 6
|
| 763 |
+
},
|
| 764 |
+
{
|
| 765 |
+
"type": "text",
|
| 766 |
+
"text": "Finally, we evaluate our models on an array of white box adversarial attacks, to assess if models with higher IT neural similarity scores also have increased adversarial robustness. Like before, we start with a big picture analysis where we consider every evaluation epoch for all optimization conditions considered in figure 2. Again, as demonstrated in figures 4A and 4B, for both HVM images and COCO images, there is a broad though not entirely linear correlation between IT neural similarity and adversarial robustness to PGD $L_{\\infty} \\epsilon = 1 / 1020$ attacks. Like in the analysis of behavioral alignment, we also see a higher slope on the right side of the plots, where IT neural similarity is the highest, suggesting further improvements could be had if models were pushed to be more IT aligned.",
|
| 767 |
+
"bbox": [
|
| 768 |
+
169,
|
| 769 |
+
694,
|
| 770 |
+
826,
|
| 771 |
+
806
|
| 772 |
+
],
|
| 773 |
+
"page_idx": 6
|
| 774 |
+
},
|
| 775 |
+
{
|
| 776 |
+
"type": "text",
|
| 777 |
+
"text": "In order to get a better sense of the gains in robustness, we measured the adversarial strength accuracy curves for models only trained with HVM image labels, models trained with HVM image labels and IT neural representations, and models adversially trained on HVM labels (PGD $L_{\\infty}$ , $\\epsilon = 4 / 255$ ). Figure 5A shows that on held-out HVM images, IT aligned models have increased accuracy across a range of $\\epsilon$ values for both $L_{\\infty}$ and $L_{2}$ norms, though less so than models with explicit adversarial training. However, as shown in figure 5 the same analysis on COCO images demonstrates that adversarial robustness in the IT aligned networks generalizes significantly better on unseen image statistics than the adversially trained models, which lose clean accuracy on COCO images.",
|
| 778 |
+
"bbox": [
|
| 779 |
+
169,
|
| 780 |
+
811,
|
| 781 |
+
828,
|
| 782 |
+
925
|
| 783 |
+
],
|
| 784 |
+
"page_idx": 6
|
| 785 |
+
},
|
| 786 |
+
{
|
| 787 |
+
"type": "header",
|
| 788 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 789 |
+
"bbox": [
|
| 790 |
+
171,
|
| 791 |
+
32,
|
| 792 |
+
478,
|
| 793 |
+
47
|
| 794 |
+
],
|
| 795 |
+
"page_idx": 6
|
| 796 |
+
},
|
| 797 |
+
{
|
| 798 |
+
"type": "page_number",
|
| 799 |
+
"text": "7",
|
| 800 |
+
"bbox": [
|
| 801 |
+
493,
|
| 802 |
+
948,
|
| 803 |
+
504,
|
| 804 |
+
959
|
| 805 |
+
],
|
| 806 |
+
"page_idx": 6
|
| 807 |
+
},
|
| 808 |
+
{
|
| 809 |
+
"type": "image",
|
| 810 |
+
"img_path": "images/d0d037d0f721943a09469877b23c1904ec0f739edebab15034f02eecdb837f97.jpg",
|
| 811 |
+
"image_caption": [
|
| 812 |
+
"Figure 3: IT neural similarity correlates with behavioral alignment across a variety of optimization conditions and unseen image statistics but not on unseen object categories. A) Held out animal and image IT neural similarity is plotted against human behavioral alignment on the HVM image set at every validation epoch for all neural loss weight conditions, random Gaussian IT target matrix conditions, and image shuffled IT target matrix conditions, in each case with or with and without image classification loss. B) and C) Like in A but for the COCO image set and the Objectome image set Rajalingham et al. (2018) filtered to overlapping categories with the IT training set. D) The behavioral alignment for the full Objectome image set with 20 categories not covered in the IT training is not improved by the IT-alignment procedure and data used here. In all plots, the black cross represents the average base model position, and the heavy blue line is a sliding X, Y average of all conditions merely to visually highlight trends. Five seeds for each condition are plotted."
|
| 813 |
+
],
|
| 814 |
+
"image_footnote": [],
|
| 815 |
+
"bbox": [
|
| 816 |
+
171,
|
| 817 |
+
99,
|
| 818 |
+
336,
|
| 819 |
+
246
|
| 820 |
+
],
|
| 821 |
+
"page_idx": 7
|
| 822 |
+
},
|
| 823 |
+
{
|
| 824 |
+
"type": "image",
|
| 825 |
+
"img_path": "images/442711f1144c39ffaadc798aa09a4ca110b8cbdfc4aa375f3be3c60be76ecfa4.jpg",
|
| 826 |
+
"image_caption": [],
|
| 827 |
+
"image_footnote": [],
|
| 828 |
+
"bbox": [
|
| 829 |
+
336,
|
| 830 |
+
99,
|
| 831 |
+
496,
|
| 832 |
+
246
|
| 833 |
+
],
|
| 834 |
+
"page_idx": 7
|
| 835 |
+
},
|
| 836 |
+
{
|
| 837 |
+
"type": "image",
|
| 838 |
+
"img_path": "images/5ed6c0a47e9a4ca0277dd10704c193f079795b431706082d68b628c0de83b97b.jpg",
|
| 839 |
+
"image_caption": [],
|
| 840 |
+
"image_footnote": [],
|
| 841 |
+
"bbox": [
|
| 842 |
+
498,
|
| 843 |
+
101,
|
| 844 |
+
658,
|
| 845 |
+
246
|
| 846 |
+
],
|
| 847 |
+
"page_idx": 7
|
| 848 |
+
},
|
| 849 |
+
{
|
| 850 |
+
"type": "image",
|
| 851 |
+
"img_path": "images/49871611ffa6033e1075edf15011deece2140158e48b7414930270864a8e2bb9.jpg",
|
| 852 |
+
"image_caption": [],
|
| 853 |
+
"image_footnote": [],
|
| 854 |
+
"bbox": [
|
| 855 |
+
658,
|
| 856 |
+
101,
|
| 857 |
+
821,
|
| 858 |
+
246
|
| 859 |
+
],
|
| 860 |
+
"page_idx": 7
|
| 861 |
+
},
|
| 862 |
+
{
|
| 863 |
+
"type": "image",
|
| 864 |
+
"img_path": "images/ea4774fa0e900b403b5f35bc8b0a02910ff20c30695e46844a85d966d8c7153c.jpg",
|
| 865 |
+
"image_caption": [
|
| 866 |
+
"Figure 4: IT neural similarity correlates with improved white box adversarial robustness. A) held out animal and image IT neural similarity is plotted against white box adversarial accuracy (PGD $L_{\\infty}\\epsilon = 1 / 1020$ ) on the HVM image set measured across multiple training time points for all neural loss ratio conditions, random Gaussian IT target matrix conditions, and image shuffled IT target matrix conditions. B) Like in A but for COCO images. In both plots, the black cross represents the average base model position, the black X marks a CORnet-S adversarially trained on HVM images, and the heavy blue line is a sliding X, Y average of all conditions merely to visually highlight trends. Five seeds for each condition are plotted."
|
| 867 |
+
],
|
| 868 |
+
"image_footnote": [
|
| 869 |
+
"Loss Function Components: IT + Classification Shuffled IT + Classification Random IT + Classification"
|
| 870 |
+
],
|
| 871 |
+
"bbox": [
|
| 872 |
+
176,
|
| 873 |
+
419,
|
| 874 |
+
480,
|
| 875 |
+
662
|
| 876 |
+
],
|
| 877 |
+
"page_idx": 7
|
| 878 |
+
},
|
| 879 |
+
{
|
| 880 |
+
"type": "image",
|
| 881 |
+
"img_path": "images/29edc992a61db32a636ae3f3a1c716188cacce8a4257ac9cd9a5b48f5f73c0dd.jpg",
|
| 882 |
+
"image_caption": [],
|
| 883 |
+
"image_footnote": [],
|
| 884 |
+
"bbox": [
|
| 885 |
+
504,
|
| 886 |
+
417,
|
| 887 |
+
810,
|
| 888 |
+
662
|
| 889 |
+
],
|
| 890 |
+
"page_idx": 7
|
| 891 |
+
},
|
| 892 |
+
{
|
| 893 |
+
"type": "text",
|
| 894 |
+
"text": "Last, we tested the IT neural similarity of our HVM image adversarially trained models and find that they do not follow the general correlation shown in 4 for IT aligned models vs adversarial accuracy. Interestingly, the adversarially trained models are slightly more similar to IT than standard models, but significantly higher than standard models on HVM adversarial accuracy and significantly lower on COCO adversarial accuracy. We take this to indicate that there are multiple possible ways to become robust to adversarial attacks, and that adversarial training does not in general induce the same representations as IT alignment.",
|
| 895 |
+
"bbox": [
|
| 896 |
+
169,
|
| 897 |
+
825,
|
| 898 |
+
826,
|
| 899 |
+
925
|
| 900 |
+
],
|
| 901 |
+
"page_idx": 7
|
| 902 |
+
},
|
| 903 |
+
{
|
| 904 |
+
"type": "header",
|
| 905 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 906 |
+
"bbox": [
|
| 907 |
+
171,
|
| 908 |
+
32,
|
| 909 |
+
478,
|
| 910 |
+
47
|
| 911 |
+
],
|
| 912 |
+
"page_idx": 7
|
| 913 |
+
},
|
| 914 |
+
{
|
| 915 |
+
"type": "page_number",
|
| 916 |
+
"text": "8",
|
| 917 |
+
"bbox": [
|
| 918 |
+
493,
|
| 919 |
+
948,
|
| 920 |
+
503,
|
| 921 |
+
959
|
| 922 |
+
],
|
| 923 |
+
"page_idx": 7
|
| 924 |
+
},
|
| 925 |
+
{
|
| 926 |
+
"type": "image",
|
| 927 |
+
"img_path": "images/4757182ce37834cdaabdf1f08b873093f932109b264be1301389cc8263042536.jpg",
|
| 928 |
+
"image_caption": [
|
| 929 |
+
"Figure 5: IT aligned models are more robust than standard models in and out of domain, and more robust than adversarially trained models in out of domain conditions. A) PGD $L_{\\infty}$ and $L_{2}$ strength accuracy curves on HVM images for standard trained networks (green) IT aligned networks (blue) and networks adversarially trained (PGD $L_{\\infty} \\epsilon = 4 / 255$ ) on the IT fitting image labels (orange). B) Like in A but for COCO images. Error shading represents bootstrapped $95\\%$ confidence intervals over five training seeds."
|
| 930 |
+
],
|
| 931 |
+
"image_footnote": [],
|
| 932 |
+
"bbox": [
|
| 933 |
+
187,
|
| 934 |
+
99,
|
| 935 |
+
493,
|
| 936 |
+
255
|
| 937 |
+
],
|
| 938 |
+
"page_idx": 8
|
| 939 |
+
},
|
| 940 |
+
{
|
| 941 |
+
"type": "image",
|
| 942 |
+
"img_path": "images/a05b20254b513bff31cad49863f319c92e6591f2d53797376e7bfbd2e6e84d59.jpg",
|
| 943 |
+
"image_caption": [],
|
| 944 |
+
"image_footnote": [],
|
| 945 |
+
"bbox": [
|
| 946 |
+
506,
|
| 947 |
+
99,
|
| 948 |
+
805,
|
| 949 |
+
255
|
| 950 |
+
],
|
| 951 |
+
"page_idx": 8
|
| 952 |
+
},
|
| 953 |
+
{
|
| 954 |
+
"type": "text",
|
| 955 |
+
"text": "4 DISCUSSION",
|
| 956 |
+
"text_level": 1,
|
| 957 |
+
"bbox": [
|
| 958 |
+
171,
|
| 959 |
+
386,
|
| 960 |
+
312,
|
| 961 |
+
402
|
| 962 |
+
],
|
| 963 |
+
"page_idx": 8
|
| 964 |
+
},
|
| 965 |
+
{
|
| 966 |
+
"type": "text",
|
| 967 |
+
"text": "Building on prior research in constraining visual object recognition models with early stage visual representations (Li et al., 2019; Dapello et al., 2020; Federer et al., 2020; Safarani et al., 2021), we report here that it is possible to better align the late stage \"IT representations\" of an object recognition model with the corresponding primate IT representations, and that this improved IT alignment leads to increased human level behavioral alignment and increased adversarial robustness. In particular, the results show that 1) the method used here is able to develop better neuroscientific models by improving IT alignment in object recognition models even on held out animals and image statistics not seen by the model during the IT neural alignment training procedure, 2) models that are more aligned with macaque IT also have better alignment with human behavioral error patterns across unseen (not shown during training) image statistics but not for unseen object categories, and 3) models more aligned with macaque IT are more robust to adversarial attacks even on unseen image statistics. Interestingly however, we observed that being more adversially robust (through adversarial training) does not lead to significantly more IT neural similarity.",
|
| 968 |
+
"bbox": [
|
| 969 |
+
169,
|
| 970 |
+
431,
|
| 971 |
+
826,
|
| 972 |
+
612
|
| 973 |
+
],
|
| 974 |
+
"page_idx": 8
|
| 975 |
+
},
|
| 976 |
+
{
|
| 977 |
+
"type": "text",
|
| 978 |
+
"text": "These empirical observations raise a number of important questions for future research. While there are clear gains in robustness from our procedure, we note that the overall magnitude is relatively small. How much adversarial robustness could we expect to gain, if we perfectly fit IT? This question hinges on how adversarily robust primate behavior really is, an active area of research (Guo et al., 2022; Elsayed et al., 2018; Yuan et al., 2020). Guo et al. (2022) is particularly interesting with respect to our work – while they find that individual neurons in IT are not particularly robust when compared to individual neurons in adversarily trained networks, our work here indicates that population geometry, not individual neuronal sensitivity, might play a critical role in robustness. We find it intriguing that aligning IT representations in our models to empirically measured macaque IT responses has no effect or even a negative effect on behavioral alignment for objects not present in the IT fitting image-set, a noteworthy limitation in our approach. We speculate that this is due to the small range of categories covered in our IT training set, which limits the span of neural representational space that those experiments were able to sample. In that regard, it would be informative to get a sense of the scaling laws (Kaplan et al., 2020) for how much neural data (in terms of images, neurons, trials, or object categories) needs to be absorbed into a model before it behaves in a truly general more human like fashion for any instance of image categories or statistics. Other avenues for further exploration include comparisons of behavioral alignment on a more diverse panel of benchmarks Bowers et al. (2022), different alignment metrics to optimize, such as deep canonical correlationPirlot et al. (2022), or including representation stochasticity as in Dapello et al. (2020). Overall, our results provide further support for the framework of constraining and optimizing models with empirical data from the primate brain to make them more robust and well aligned with human behavior (Sinz et al., 2019).",
|
| 979 |
+
"bbox": [
|
| 980 |
+
169,
|
| 981 |
+
618,
|
| 982 |
+
826,
|
| 983 |
+
924
|
| 984 |
+
],
|
| 985 |
+
"page_idx": 8
|
| 986 |
+
},
|
| 987 |
+
{
|
| 988 |
+
"type": "header",
|
| 989 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 990 |
+
"bbox": [
|
| 991 |
+
171,
|
| 992 |
+
32,
|
| 993 |
+
478,
|
| 994 |
+
47
|
| 995 |
+
],
|
| 996 |
+
"page_idx": 8
|
| 997 |
+
},
|
| 998 |
+
{
|
| 999 |
+
"type": "page_number",
|
| 1000 |
+
"text": "9",
|
| 1001 |
+
"bbox": [
|
| 1002 |
+
493,
|
| 1003 |
+
948,
|
| 1004 |
+
504,
|
| 1005 |
+
959
|
| 1006 |
+
],
|
| 1007 |
+
"page_idx": 8
|
| 1008 |
+
},
|
| 1009 |
+
{
|
| 1010 |
+
"type": "text",
|
| 1011 |
+
"text": "REFERENCES",
|
| 1012 |
+
"text_level": 1,
|
| 1013 |
+
"bbox": [
|
| 1014 |
+
173,
|
| 1015 |
+
102,
|
| 1016 |
+
287,
|
| 1017 |
+
117
|
| 1018 |
+
],
|
| 1019 |
+
"page_idx": 9
|
| 1020 |
+
},
|
| 1021 |
+
{
|
| 1022 |
+
"type": "list",
|
| 1023 |
+
"sub_type": "ref_text",
|
| 1024 |
+
"list_items": [
|
| 1025 |
+
"P. Bashivan, K. Kar, and J. J. DiCarlo. Neural population control via deep image synthesis. Science, 364(6439), May 2019.",
|
| 1026 |
+
"J. Bowers, G. Malhotra, M. Dujmović, M. Llera, C. Tsvetkov, V. Biscione, G. Puebla, F. Adolfi, J. Hummel, R. Heaton, B. Evans, J. Mitchell, and R. Blything. Deep problems with neural network models of human vision. 04 2022. doi: 10.31234/osf.io/5zf4s.",
|
| 1027 |
+
"W. Brendel, J. Rauber, M. Kümmerer, I. Ustyuzhaninov, and M. Bethge. Accurate, reliable and fast robustness evaluation. July 2019.",
|
| 1028 |
+
"J. Buckman, A. Roy, C. Raffel, and I. Goodfellow. Thermometer encoding: One hot way to resist adversarial examples. Feb. 2018.",
|
| 1029 |
+
"N. Carlini and D. Wagner. Towards evaluating the robustness of neural networks. Aug. 2016.",
|
| 1030 |
+
"P.-Y. Chen, Y. Sharma, H. Zhang, J. Yi, and C.-J. Hsieh. EAD: Elastic-Net attacks to deep neural networks via adversarial examples. Sept. 2017.",
|
| 1031 |
+
"C. C. J. J. D. Daniel L. Yamins, Ha Hong. Hierarchical modular optimization of convolutional networks achieves representations similar to macaque IT and human ventral stream. Advances in Neural Information Processing Systems 26 (NIPS 2013), 2013.",
|
| 1032 |
+
"J. Dapello, T. Marques, M. Schrimpf, F. Geiger, D. D. Cox, and J. J. DiCarlo. Simulating a primary visual cortex at the front of CNNs improves robustness to image perturbations. page Advances in Neural Information Processing Systems 33 (NeurIPS 2020). Neurips, June 2020.",
|
| 1033 |
+
"N. Das, M. Shanbhogue, S.-T. Chen, F. Hohman, L. Chen, M. E. Kounavis, and D. H. Chau. Keeping the bad guys out: Protecting and vaccinating deep learning with JPEG compression. May 2017.",
|
| 1034 |
+
"J. Deng, W. Dong, R. Socher, L. Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255, June 2009.",
|
| 1035 |
+
"G. S. Dhillon, K. Azizzadenesheli, Z. C. Lipton, J. D. Bernstein, J. Kossaifi, A. Khanna, and A. Anandkumar. Stochastic activation pruning for robust adversarial defense. Feb. 2018.",
|
| 1036 |
+
"J. J. DiCarlo, D. Zoccolan, and N. C. Rust. How does the brain solve visual object recognition? Neuron, 73(3):415-434, Feb. 2012.",
|
| 1037 |
+
"A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. Oct. 2020.",
|
| 1038 |
+
"G. Elsayed, S. Shankar, B. Cheung, N. Papernot, A. Kurakin, I. Goodfellow, and J. Sohl-Dickstein. Adversarial examples that fool both computer vision and Time-Limited humans. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 3910-3920. Curran Associates, Inc., 2018.",
|
| 1039 |
+
"W. Falcon et al. Pytorch lightning. *GitHub. Note: https://github.com/PyTorchLightning/pytorch-lightning*, 3, 2019.",
|
| 1040 |
+
"C. Federer, H. Xu, A. Fyshe, and J. Zylberberg. Improved object recognition using neural networks trained to mimic the brain's statistical properties. *Neural Netw.*, 2020.",
|
| 1041 |
+
"F. Geiger, M. Schrimpf, T. Marques, and J. J. DiCarlo. Wiring up vision: Minimizing supervised synaptic updates needed to produce a primate ventral stream. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=g1SzIRLQXMM.",
|
| 1042 |
+
"R. Geirhos, K. Narayanappa, B. Mitzkus, T. Thieringer, M. Bethge, F. A. Wichmann, and W. Brendel. Partial success in closing the gap between human and machine vision. Adv. Neural Inf. Process. Syst., 34, 2021."
|
| 1043 |
+
],
|
| 1044 |
+
"bbox": [
|
| 1045 |
+
171,
|
| 1046 |
+
125,
|
| 1047 |
+
826,
|
| 1048 |
+
924
|
| 1049 |
+
],
|
| 1050 |
+
"page_idx": 9
|
| 1051 |
+
},
|
| 1052 |
+
{
|
| 1053 |
+
"type": "header",
|
| 1054 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 1055 |
+
"bbox": [
|
| 1056 |
+
171,
|
| 1057 |
+
32,
|
| 1058 |
+
478,
|
| 1059 |
+
47
|
| 1060 |
+
],
|
| 1061 |
+
"page_idx": 9
|
| 1062 |
+
},
|
| 1063 |
+
{
|
| 1064 |
+
"type": "page_number",
|
| 1065 |
+
"text": "10",
|
| 1066 |
+
"bbox": [
|
| 1067 |
+
488,
|
| 1068 |
+
946,
|
| 1069 |
+
508,
|
| 1070 |
+
960
|
| 1071 |
+
],
|
| 1072 |
+
"page_idx": 9
|
| 1073 |
+
},
|
| 1074 |
+
{
|
| 1075 |
+
"type": "list",
|
| 1076 |
+
"sub_type": "ref_text",
|
| 1077 |
+
"list_items": [
|
| 1078 |
+
"J. Guerguiev, T. P. Lillicrap, and B. A. Richards. Towards deep learning with segregated dendrites. *Elite*, 6, Dec. 2017.",
|
| 1079 |
+
"C. Guo, M. Rana, M. Cisse, and L. van der Maaten. Countering adversarial images using input transformations. Feb. 2018.",
|
| 1080 |
+
"C. Guo, M. J. Lee, G. Leclerc, J. Dapello, Y. Rao, A. Madry, and J. J. DiCarlo. Adversarily trained neural representations may already be as robust as corresponding biological neural representations. June 2022.",
|
| 1081 |
+
"H. Hasani, M. Soleymani, and H. Aghajan. Surround Modulation: A Bio-inspired Connectivity Structure for Convolutional Neural Networks. NeurIPS, (NeurIPS):15877-15888, 2019.",
|
| 1082 |
+
"D. Hassabis, D. Kumaran, C. Summerfield, and M. Botvinick. Neuroscience-Inspired Artificial Intelligence. *Neuron*, 95(2):245–258, 2017. ISSN 10974199. doi: 10.1016/j.neuron.2017.06.011.",
|
| 1083 |
+
"K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. Proceedings of the IEEE International Conference on Computer Vision, 2015 Inter:1026-1034, 2015a. ISSN 15505499. doi: 10.1109/ICCV.2015.123.",
|
| 1084 |
+
"K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. Dec. 2015b.",
|
| 1085 |
+
"H. Hong, D. L. K. Yamins, N. J. Majaj, and J. J. DiCarlo. Explicit information for category-orthogonal object properties increases along the ventral stream. Nat. Neurosci., 19(4):613-622, Apr. 2016.",
|
| 1086 |
+
"J. Kaplan, S. McCandlish, T. Henighan, T. B. Brown, B. Chess, R. Child, S. Gray, A. Radford, J. Wu, and D. Amodei. Scaling laws for neural language models. Jan. 2020.",
|
| 1087 |
+
"K. Kar and J. J. DiCarlo. Fast recurrent processing via ventrolateral prefrontal cortex is needed by the primate ventral stream for robust core visual object recognition. Neuron, 109(1):164-176, 2021.",
|
| 1088 |
+
"K. Kar, J. Kubilius, K. Schmidt, E. B. Issa, and J. J. DiCarlo. Evidence that recurrent circuits are critical to the ventral stream's execution of core object recognition behavior. Nature neuroscience, 22(6):974-983, 2019.",
|
| 1089 |
+
"S.-M. Khaligh-Razavi and N. Kriegeskorte. Deep supervised, but not unsupervised, models may explain IT cortical representation. PLoS Comput. Biol., 10(11):e1003915, Nov. 2014.",
|
| 1090 |
+
"S. Kornblith, M. Norouzi, H. Lee, and G. Hinton. Similarity of neural network representations revisited. May 2019.",
|
| 1091 |
+
"A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 1097-1105. Curran Associates, Inc., 2012.",
|
| 1092 |
+
"J. Kubilius, M. Schrimpf, K. Kar, H. Hong, N. J. Majaj, R. Rajalingham, E. B. Issa, P. Bashivan, J. Prescott-Roy, K. Schmidt, A. Nayebi, D. Bear, D. L. K. Yamins, and J. J. DiCarlo. Brain-Like object recognition with High-Performing shallow recurrent ANNs. Sept. 2019.",
|
| 1093 |
+
"Z. Li, W. Brendel, E. Y. Walker, E. Cobos, T. Muhammad, J. Reimer, M. Bethge, F. H. Sinz, X. Pitkow, and A. S. Tolias. Learning from brains how to regularize machines. Nov. 2019.",
|
| 1094 |
+
"T.-Y. Lin, M. Maire, S. Belongie, L. Bourdev, R. Girshick, J. Hays, P. Perona, D. Ramanan, C. Lawrence Zitnick, and P. Dollar. Microsoft COCO: Common objects in context. May 2014.",
|
| 1095 |
+
"G. W. Lindsay and K. D. Miller. How biological attention mechanisms improve task performance in a large-scale visual system model. *Elite*, 7, Oct. 2018.",
|
| 1096 |
+
"X. Liu, M. Cheng, H. Zhang, and C.-J. Hsieh. Towards robust neural networks via random self-ensemble. Dec. 2017.",
|
| 1097 |
+
"Z. Liu, H. Mao, C.-Y. Wu, C. Feichtenhofer, T. Darrell, and S. Xie. A convnet for the 2020s. 2022.",
|
| 1098 |
+
"W. Lotter, G. Kreiman, and D. Cox. Deep predictive coding networks for video prediction and unsupervised learning. May 2016."
|
| 1099 |
+
],
|
| 1100 |
+
"bbox": [
|
| 1101 |
+
171,
|
| 1102 |
+
102,
|
| 1103 |
+
826,
|
| 1104 |
+
924
|
| 1105 |
+
],
|
| 1106 |
+
"page_idx": 10
|
| 1107 |
+
},
|
| 1108 |
+
{
|
| 1109 |
+
"type": "header",
|
| 1110 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 1111 |
+
"bbox": [
|
| 1112 |
+
171,
|
| 1113 |
+
32,
|
| 1114 |
+
478,
|
| 1115 |
+
47
|
| 1116 |
+
],
|
| 1117 |
+
"page_idx": 10
|
| 1118 |
+
},
|
| 1119 |
+
{
|
| 1120 |
+
"type": "page_number",
|
| 1121 |
+
"text": "11",
|
| 1122 |
+
"bbox": [
|
| 1123 |
+
488,
|
| 1124 |
+
946,
|
| 1125 |
+
506,
|
| 1126 |
+
959
|
| 1127 |
+
],
|
| 1128 |
+
"page_idx": 10
|
| 1129 |
+
},
|
| 1130 |
+
{
|
| 1131 |
+
"type": "list",
|
| 1132 |
+
"sub_type": "ref_text",
|
| 1133 |
+
"list_items": [
|
| 1134 |
+
"A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. Towards deep learning models resistant to adversarial attacks. June 2017.",
|
| 1135 |
+
"N. J. Majaj, H. Hong, E. A. Solomon, and J. J. DiCarlo. Simple learned weighted sums of inferior temporal neuronal firing rates accurately predict human core object recognition performance. *J. Neurosci.*, 35(39):13402-13418, Sept. 2015.",
|
| 1136 |
+
"A. H. Marblestone, G. Wayne, and K. P. Kording. Toward an integration of deep learning and neuroscience. Front. Comput. Neurosci., 10:94, Sept. 2016.",
|
| 1137 |
+
"C. Michaelis, B. Mitzkus, R. Geirhos, E. Rusak, O. Bringmann, A. S. Ecker, M. Bethge, and W. Brendel. Benchmarking Robustness in Object Detection: Autonomous Driving when Winter is Coming. pages 1-23, 2019. URL http://arxiv.org/abs/1907.07484.",
|
| 1138 |
+
"A. Nayebi and S. Ganguli. Biologically inspired protection of deep networks from adversarial attacks. Mar. 2017.",
|
| 1139 |
+
"M.-I. Nicolae, M. Sinn, M. N. Tran, B. Buesser, A. Rawat, M. Wistuba, V. Zantedeschi, N. Baracaldo, B. Chen, H. Ludwig, I. Molloy, and B. Edwards. Adversarial robustness toolbox v1.2.0. CoRR, 1807.01069, 2018. URL https://arxiv.org/pdf/1807.01069.",
|
| 1140 |
+
"A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer. Automatic differentiation in PyTorch. Oct. 2017.",
|
| 1141 |
+
"C. Pirlot, R. Gerum, C. Efird, J. Zylberberg, and A. Fyshe. Improving the accuracy and robustness of cnns using a deep cca neural data regularizer. 09 2022. doi: 10.48550/arXiv.2209.02582.",
|
| 1142 |
+
"R. Rajalingham, K. Schmidt, and J. J. DiCarlo. Comparison of Object Recognition Behavior in Human and Monkey. Journal of Neuroscience, 35(35):12127-12136, 2015. ISSN 0270-6474. doi: 10.1523/JNEUROSCI.0573-15.2015. URL http://www.jneurosci.org/cgi/doi/10. 1523/JNEUROSCI.0573-15.2015.",
|
| 1143 |
+
"R. Rajalingham, E. B. Issa, P. Bashivan, K. Kar, K. Schmidt, and J. J. DiCarlo. Large-Scale, High-Resolution comparison of the core visual object recognition behavior of humans, monkeys, and State-of-the-Art deep artificial neural networks. *J. Neurosci.*, 38(33):7255–7269, Aug. 2018.",
|
| 1144 |
+
"R. Rajalingham, K. Kar, S. Sanghavi, S. Dehaene, and J. J. DiCarlo. The inferior temporal cortex is a potential cortical precursor of orthographic processing in untrained monkeys. Nature communications, 11(1):1-13, 2020.",
|
| 1145 |
+
"A. Riedel. Bag of tricks for training brain-like deep neural networks. In *Brain-Score Workshop*, 2022. URL https://openreview.net/forum?id=Sudzh-vWQ-c.",
|
| 1146 |
+
"J. Rony, L. G. Hafemann, L. S. Oliveira, I. Ben Ayed, R. Sabourin, and E. Granger. Decoupling direction and norm for efficient Gradient-Based L2 adversarial attacks and defenses. Nov. 2018.",
|
| 1147 |
+
"S. Safarani, A. Nix, K. Willeke, S. A. Cadena, K. Restivo, G. Denfield, A. S. Tolias, and F. H. Sinz. Towards robust vision by multi-task learning on monkey visual cortex. July 2021.",
|
| 1148 |
+
"M. Schrimpf, J. Kubilius, H. Hong, N. J. Majaj, R. Rajalingham, E. B. Issa, K. Kar, P. Bashivan, J. Prescott-Roy, K. Schmidt, D. L. K. Yamins, and J. J. DiCarlo. Brain-Score: Which artificial neural network for object recognition is most Brain-Like? Sept. 2018.",
|
| 1149 |
+
"M. Schrimpf, J. Kubilius, M. J. Lee, N. A. R. Murty, R. Ajemian, and J. J. DiCarlo. Integrative benchmarking to advance neurally mechanistic models of human intelligence. Neuron, 2020. URL: https://www.cell.com/neuron/fulltext/S0896-6273(20)30605-x.",
|
| 1150 |
+
"K. Simonyan and A. Zisserman. Very deep convolutional networks for Large-Scale image recognition. Sept. 2014.",
|
| 1151 |
+
"F. H. Sinz, X. Pitkow, J. Reimer, M. Bethge, and A. S. Tolias. Engineering a less artificial intelligence. *Neuron*, 103(6):967–979, Sept. 2019.",
|
| 1152 |
+
"Y. Song, T. Kim, S. Nowozin, S. Ermon, and N. Kushman. PixelDefend: Leveraging generative models to understand and defend against adversarial examples. Oct. 2017."
|
| 1153 |
+
],
|
| 1154 |
+
"bbox": [
|
| 1155 |
+
171,
|
| 1156 |
+
102,
|
| 1157 |
+
828,
|
| 1158 |
+
925
|
| 1159 |
+
],
|
| 1160 |
+
"page_idx": 11
|
| 1161 |
+
},
|
| 1162 |
+
{
|
| 1163 |
+
"type": "header",
|
| 1164 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 1165 |
+
"bbox": [
|
| 1166 |
+
171,
|
| 1167 |
+
32,
|
| 1168 |
+
478,
|
| 1169 |
+
47
|
| 1170 |
+
],
|
| 1171 |
+
"page_idx": 11
|
| 1172 |
+
},
|
| 1173 |
+
{
|
| 1174 |
+
"type": "page_number",
|
| 1175 |
+
"text": "12",
|
| 1176 |
+
"bbox": [
|
| 1177 |
+
488,
|
| 1178 |
+
946,
|
| 1179 |
+
506,
|
| 1180 |
+
959
|
| 1181 |
+
],
|
| 1182 |
+
"page_idx": 11
|
| 1183 |
+
},
|
| 1184 |
+
{
|
| 1185 |
+
"type": "list",
|
| 1186 |
+
"sub_type": "ref_text",
|
| 1187 |
+
"list_items": [
|
| 1188 |
+
"C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. Dec. 2013.",
|
| 1189 |
+
"C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. Sept. 2014.",
|
| 1190 |
+
"H. Tang, M. Schrimpf, W. Lotter, C. Moerman, A. Paredes, J. Ortega Caro, W. Hardesty, D. Cox, and G. Kreiman. Recurrent computations for visual pattern completion. Proceedings of the National Academy of Sciences, 115(35):8835-8840, 2018. ISSN 0027-8424. doi: 10.1073/pnas.1719397115.",
|
| 1191 |
+
"W. Xu, D. Evans, and Y. Qi. Feature squeezing: Detecting adversarial examples in deep neural networks. arXiv [cs.CV], Apr. 2017.",
|
| 1192 |
+
"L. Yuan, W. Xiao, G. Dellafererra, G. Kreiman, F. E. H. Tay, J. Feng, and M. S. Livingstone. Fooling the primate brain with minimal, targeted image manipulation. Nov. 2020.",
|
| 1193 |
+
"A. M. Zador. A critique of pure learning and what artificial neural networks can learn from animal brains. Nat. Commun., 10(1):3770, Aug. 2019."
|
| 1194 |
+
],
|
| 1195 |
+
"bbox": [
|
| 1196 |
+
171,
|
| 1197 |
+
102,
|
| 1198 |
+
828,
|
| 1199 |
+
335
|
| 1200 |
+
],
|
| 1201 |
+
"page_idx": 12
|
| 1202 |
+
},
|
| 1203 |
+
{
|
| 1204 |
+
"type": "header",
|
| 1205 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 1206 |
+
"bbox": [
|
| 1207 |
+
171,
|
| 1208 |
+
32,
|
| 1209 |
+
478,
|
| 1210 |
+
47
|
| 1211 |
+
],
|
| 1212 |
+
"page_idx": 12
|
| 1213 |
+
},
|
| 1214 |
+
{
|
| 1215 |
+
"type": "page_number",
|
| 1216 |
+
"text": "13",
|
| 1217 |
+
"bbox": [
|
| 1218 |
+
490,
|
| 1219 |
+
946,
|
| 1220 |
+
506,
|
| 1221 |
+
959
|
| 1222 |
+
],
|
| 1223 |
+
"page_idx": 12
|
| 1224 |
+
}
|
| 1225 |
+
]
|
2023/Aligning Model and Macaque Inferior Temporal Cortex Representations Improves Model-to-Human Behavioral Alignment and Adversarial Robustness/7ac7f310-bee1-439e-801a-a6f6b2dfa988_model.json
ADDED
|
@@ -0,0 +1,1997 @@
| 1 |
+
[
|
| 2 |
+
[
|
| 3 |
+
{
|
| 4 |
+
"type": "header",
|
| 5 |
+
"bbox": [
|
| 6 |
+
0.173,
|
| 7 |
+
0.033,
|
| 8 |
+
0.48,
|
| 9 |
+
0.049
|
| 10 |
+
],
|
| 11 |
+
"angle": 0,
|
| 12 |
+
"content": "Published as a conference paper at ICLR 2023"
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "title",
|
| 16 |
+
"bbox": [
|
| 17 |
+
0.172,
|
| 18 |
+
0.1,
|
| 19 |
+
0.832,
|
| 20 |
+
0.198
|
| 21 |
+
],
|
| 22 |
+
"angle": 0,
|
| 23 |
+
"content": "ALIGNING MODEL AND MACAQUE INFERIOR TEMPORAL CORTEX REPRESENTATIONS IMPROVES MODELTO-HUMAN BEHAVIORAL ALIGNMENT AND ADVERSARIAL ROBUSTNESS"
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"bbox": [
|
| 28 |
+
0.182,
|
| 29 |
+
0.219,
|
| 30 |
+
0.887,
|
| 31 |
+
0.251
|
| 32 |
+
],
|
| 33 |
+
"angle": 0,
|
| 34 |
+
"content": "Joel Dapello\\*,1,2,3, Kohitij Kar\\*,1,2,4,6, Martin Schrimpf\\*,1,2,4, Robert Geary\\*,1,2,3, Michael Ferguson\\*,1,2,4 David D. Cox\\*, James J. DiCarlo\\*,1,2,4"
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"bbox": [
|
| 39 |
+
0.184,
|
| 40 |
+
0.251,
|
| 41 |
+
0.678,
|
| 42 |
+
0.264
|
| 43 |
+
],
|
| 44 |
+
"angle": 0,
|
| 45 |
+
"content": "\\(^{1}\\)Department of Brain and Cognitive Sciences, MIT, Cambridge, MA02139"
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "text",
|
| 49 |
+
"bbox": [
|
| 50 |
+
0.184,
|
| 51 |
+
0.264,
|
| 52 |
+
0.642,
|
| 53 |
+
0.278
|
| 54 |
+
],
|
| 55 |
+
"angle": 0,
|
| 56 |
+
"content": "\\(^{2}\\)McGovern Institute for Brain Research, MIT, Cambridge, MA02139"
|
| 57 |
+
},
|
| 58 |
+
{
|
| 59 |
+
"type": "text",
|
| 60 |
+
"bbox": [
|
| 61 |
+
0.184,
|
| 62 |
+
0.278,
|
| 63 |
+
0.773,
|
| 64 |
+
0.293
|
| 65 |
+
],
|
| 66 |
+
"angle": 0,
|
| 67 |
+
"content": "\\(^{3}\\)School of Engineering and Applied Sciences, Harvard University, Cambridge, MA02139"
|
| 68 |
+
},
|
| 69 |
+
{
|
| 70 |
+
"type": "text",
|
| 71 |
+
"bbox": [
|
| 72 |
+
0.184,
|
| 73 |
+
0.293,
|
| 74 |
+
0.645,
|
| 75 |
+
0.307
|
| 76 |
+
],
|
| 77 |
+
"angle": 0,
|
| 78 |
+
"content": "\\(^{4}\\)Center for Brains, Minds and Machines, MIT, Cambridge, MA02139"
|
| 79 |
+
},
|
| 80 |
+
{
|
| 81 |
+
"type": "text",
|
| 82 |
+
"bbox": [
|
| 83 |
+
0.184,
|
| 84 |
+
0.307,
|
| 85 |
+
0.362,
|
| 86 |
+
0.321
|
| 87 |
+
],
|
| 88 |
+
"angle": 0,
|
| 89 |
+
"content": "\\(^{5}\\)MIT-IBM Watson AI Lab"
|
| 90 |
+
},
|
| 91 |
+
{
|
| 92 |
+
"type": "text",
|
| 93 |
+
"bbox": [
|
| 94 |
+
0.184,
|
| 95 |
+
0.321,
|
| 96 |
+
0.746,
|
| 97 |
+
0.35
|
| 98 |
+
],
|
| 99 |
+
"angle": 0,
|
| 100 |
+
"content": "\\(^{6}\\) Department of Biology, Centre for Vision Research at York University, Toronto, CA dapello@mit.edu kohitij@mit.edu"
|
| 101 |
+
},
|
| 102 |
+
{
|
| 103 |
+
"type": "list",
|
| 104 |
+
"bbox": [
|
| 105 |
+
0.184,
|
| 106 |
+
0.251,
|
| 107 |
+
0.773,
|
| 108 |
+
0.35
|
| 109 |
+
],
|
| 110 |
+
"angle": 0,
|
| 111 |
+
"content": null
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"type": "title",
|
| 115 |
+
"bbox": [
|
| 116 |
+
0.451,
|
| 117 |
+
0.387,
|
| 118 |
+
0.547,
|
| 119 |
+
0.401
|
| 120 |
+
],
|
| 121 |
+
"angle": 0,
|
| 122 |
+
"content": "ABSTRACT"
|
| 123 |
+
},
|
| 124 |
+
{
|
| 125 |
+
"type": "text",
|
| 126 |
+
"bbox": [
|
| 127 |
+
0.23,
|
| 128 |
+
0.418,
|
| 129 |
+
0.769,
|
| 130 |
+
0.78
|
| 131 |
+
],
|
| 132 |
+
"angle": 0,
|
| 133 |
+
"content": "While some state-of-the-art artificial neural network systems in computer vision are strikingly accurate models of the corresponding primate visual processing, there are still many discrepancies between these models and the behavior of primates on object recognition tasks. Many current models suffer from extreme sensitivity to adversarial attacks and often do not align well with the image-by-image behavioral error patterns observed in humans. Previous research has provided strong evidence that primate object recognition behavior can be very accurately predicted by neural population activity in the inferior temporal (IT) cortex, a brain area in the late stages of the visual processing hierarchy. Therefore, here we directly test whether making the late stage representations of models more similar to that of macaque IT produces new models that exhibit more robust, primate-like behavior. We collected a dataset of chronic, large-scale multi-electrode recordings across the IT cortex in six non-human primates (rhesus macaques). We then use these data to fine-tune (end-to-end) the model \"IT\" representations such that they are more aligned with the biological IT representations, while preserving accuracy on object recognition tasks. We generate a cohort of models with a range of IT similarity scores validated on held-out animals across two image sets with distinct statistics. Across a battery of optimization conditions, we observed a strong correlation between the models' IT-likeness and alignment with human behavior, as well as an increase in its adversarial robustness. We further assessed the limitations of this approach and find that the improvements in behavioral alignment and adversarial robustness generalize across different image statistics, but not to object categories outside of those covered in our IT training set. Taken together, our results demonstrate that building models that are more aligned with the primate brain leads to more robust and human-like behavior, and call for larger neural data-sets to further augment these gains. Code, models, and data are available at https://github.com/dapello/braintree."
|
| 134 |
+
},
|
| 135 |
+
{
|
| 136 |
+
"type": "title",
|
| 137 |
+
"bbox": [
|
| 138 |
+
0.173,
|
| 139 |
+
0.812,
|
| 140 |
+
0.524,
|
| 141 |
+
0.827
|
| 142 |
+
],
|
| 143 |
+
"angle": 0,
|
| 144 |
+
"content": "1 INTRODUCTION AND RELATED WORK"
|
| 145 |
+
},
|
| 146 |
+
{
|
| 147 |
+
"type": "text",
|
| 148 |
+
"bbox": [
|
| 149 |
+
0.171,
|
| 150 |
+
0.843,
|
| 151 |
+
0.828,
|
| 152 |
+
0.901
|
| 153 |
+
],
|
| 154 |
+
"angle": 0,
|
| 155 |
+
"content": "Object recognition models have made incredible strides in the last ten years, (Krizhevsky et al., 2012; Szegedy et al., 2014; Simonyan and Zisserman, 2014; He et al., 2015b; Dosovitskiy et al., 2020; Liu et al., 2022) even surpassing human performance in some benchmarks (He et al., 2015a). While some of these models bear remarkable resemblance to the primate visual system (Daniel L. Yamins, 2013;"
|
| 156 |
+
},
|
| 157 |
+
{
|
| 158 |
+
"type": "page_footnote",
|
| 159 |
+
"bbox": [
|
| 160 |
+
0.193,
|
| 161 |
+
0.91,
|
| 162 |
+
0.48,
|
| 163 |
+
0.924
|
| 164 |
+
],
|
| 165 |
+
"angle": 0,
|
| 166 |
+
"content": "*These authors contributed equally to this work."
|
| 167 |
+
},
|
| 168 |
+
{
|
| 169 |
+
"type": "page_number",
|
| 170 |
+
"bbox": [
|
| 171 |
+
0.494,
|
| 172 |
+
0.949,
|
| 173 |
+
0.504,
|
| 174 |
+
0.96
|
| 175 |
+
],
|
| 176 |
+
"angle": 0,
|
| 177 |
+
"content": "1"
|
| 178 |
+
}
|
| 179 |
+
],
|
| 180 |
+
[
|
| 181 |
+
{
|
| 182 |
+
"type": "header",
|
| 183 |
+
"bbox": [
|
| 184 |
+
0.174,
|
| 185 |
+
0.033,
|
| 186 |
+
0.48,
|
| 187 |
+
0.049
|
| 188 |
+
],
|
| 189 |
+
"angle": 0,
|
| 190 |
+
"content": "Published as a conference paper at ICLR 2023"
|
| 191 |
+
},
|
| 192 |
+
{
|
| 193 |
+
"type": "image",
|
| 194 |
+
"bbox": [
|
| 195 |
+
0.189,
|
| 196 |
+
0.103,
|
| 197 |
+
0.605,
|
| 198 |
+
0.239
|
| 199 |
+
],
|
| 200 |
+
"angle": 0,
|
| 201 |
+
"content": null
|
| 202 |
+
},
|
| 203 |
+
{
|
| 204 |
+
"type": "image_caption",
|
| 205 |
+
"bbox": [
|
| 206 |
+
0.607,
|
| 207 |
+
0.103,
|
| 208 |
+
0.621,
|
| 209 |
+
0.114
|
| 210 |
+
],
|
| 211 |
+
"angle": 0,
|
| 212 |
+
"content": "B"
|
| 213 |
+
},
|
| 214 |
+
{
|
| 215 |
+
"type": "image",
|
| 216 |
+
"bbox": [
|
| 217 |
+
0.614,
|
| 218 |
+
0.117,
|
| 219 |
+
0.798,
|
| 220 |
+
0.237
|
| 221 |
+
],
|
| 222 |
+
"angle": 0,
|
| 223 |
+
"content": null
|
| 224 |
+
},
|
| 225 |
+
{
|
| 226 |
+
"type": "image_caption",
|
| 227 |
+
"bbox": [
|
| 228 |
+
0.608,
|
| 229 |
+
0.235,
|
| 230 |
+
0.621,
|
| 231 |
+
0.246
|
| 232 |
+
],
|
| 233 |
+
"angle": 0,
|
| 234 |
+
"content": "C"
|
| 235 |
+
},
|
| 236 |
+
{
|
| 237 |
+
"type": "image",
|
| 238 |
+
"bbox": [
|
| 239 |
+
0.191,
|
| 240 |
+
0.25,
|
| 241 |
+
0.609,
|
| 242 |
+
0.378
|
| 243 |
+
],
|
| 244 |
+
"angle": 0,
|
| 245 |
+
"content": null
|
| 246 |
+
},
|
| 247 |
+
{
|
| 248 |
+
"type": "image",
|
| 249 |
+
"bbox": [
|
| 250 |
+
0.612,
|
| 251 |
+
0.25,
|
| 252 |
+
0.797,
|
| 253 |
+
0.38
|
| 254 |
+
],
|
| 255 |
+
"angle": 0,
|
| 256 |
+
"content": null
|
| 257 |
+
},
|
| 258 |
+
{
|
| 259 |
+
"type": "image_caption",
|
| 260 |
+
"bbox": [
|
| 261 |
+
0.171,
|
| 262 |
+
0.381,
|
| 263 |
+
0.828,
|
| 264 |
+
0.715
|
| 265 |
+
],
|
| 266 |
+
"angle": 0,
|
| 267 |
+
"content": "Figure 1: Aligning model IT representations with primate IT representations improves behavioral alignment and improves adversarial robustness. A) A set of naturalistic images, each containing one of eight different object classes are shown to a CNN and also to three different primate subjects with implanted multi-electrode arrays recording from the Inferior Temporal (IT) cortex. (1) A Base model (ImageNet pre-trained CORnet-S) is fine-tuned using stochastic gradient descent to (2) minimize the classification loss with respect to the ground truth object in each image while also minimizing a representational similarity loss (CKA) that encourages the model's IT representation to be more like those measured in the (pooled) primate subjects. (3) The resultant IT aligned models are then frozen and each tested in three ways. First, model IT representations are evaluated for similarity to biological IT representation (CKA metric) using neural data obtained from new primate subjects - we refer to the split-trial reliabilityCeiled average across all held out macaques and both image sets as \"Validated IT neural similarity\". Second, model output behavioral error patterns are assessed for alignment with human behavioral error patterns at the resolution of individual images (i2n, see Methods). Third, model behavioral output is evaluated for its robustness to white box adversarial attacks using an \\(L_{\\infty}\\) norm projected gradient descent attack. All three tests are carried out with: (i) new images within the IT-alignment training domain (held out HVM images; see Methods) and (ii) new images with novel image statistics (natural COCO images; see Methods), and those empirical results are tracked separately. B) We find that this IT-alignment procedure produced gains in validated IT neural similarity relative to base models on both data sets, and that these gains led to improvement in human behavioral alignment. \\(n = 30\\) models are shown, resulting from training at six different relative weightings of the IT neural similarity loss, each from five base models that derived from five random seeds. C) We also find that these same IT-alignment gains resulted in increased adversarial accuracy \\((\\mathrm{PGD} L_{\\infty}, \\epsilon = 1 / 1020)\\) on the same model set as in B. Base models trained only for ImageNet and HVM image classification are circled in grey."
|
| 268 |
+
},
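The caption above evaluates robustness with a white box \(L_{\infty}\) projected gradient descent (PGD) attack at \(\epsilon = 1/1020\). Below is a minimal, generic PGD sketch in PyTorch for reference; the step size, iteration count, and function names are illustrative assumptions and not the paper's actual attack implementation.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, images, labels, eps=1/1020, steps=32, alpha=None):
    """Untargeted L-infinity PGD sketch (illustrative settings, not the paper's exact attack)."""
    alpha = alpha if alpha is not None else 2.5 * eps / steps
    delta = torch.zeros_like(images, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(images + delta), labels)
        loss.backward()
        with torch.no_grad():
            # Ascend the loss, then project back into the L-inf ball and the valid pixel range.
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.copy_((images + delta).clamp(0, 1) - images)
        delta.grad.zero_()
    return (images + delta).detach()
```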
|
| 269 |
+
{
|
| 270 |
+
"type": "text",
|
| 271 |
+
"bbox": [
|
| 272 |
+
0.17,
|
| 273 |
+
0.743,
|
| 274 |
+
0.828,
|
| 275 |
+
0.926
|
| 276 |
+
],
|
| 277 |
+
"angle": 0,
|
| 278 |
+
"content": "Khaligh-Razavi and Kriegeskorte, 2014; Schrimpf et al., 2018; 2020), there remain a number of important discrepancies. In particular, the output behavior of current models, while coarsely aligned with primate object confusion patterns, does not fully match primate error patterns on individual images (Rajalingham et al., 2018; Geirhos et al., 2021). In addition, these same models can be easily fooled by adversarial attacks – targeted pixel-level perturbations intentionally designed to cause the model to produce the wrong output(Szegedy et al., 2013; Carlini and Wagner, 2016; Chen et al., 2017; Rony et al., 2018; Brendel et al., 2019), whereas primate behavior is thought to be more robust to these kinds of attacks. This is an important unsolved problem in engineering artificial intelligence systems; the deviance between model and human behavior has been studied extensively in the machine learning community, often from the perspective of safety in real-world deployment of computer vision systems (Das et al., 2017; Liu et al., 2017; Xu et al., 2017; Madry et al., 2017; Song et al., 2017; Dhillon et al., 2018; Buckman et al., 2018; Guo et al., 2018; Michaelis et al., 2019). From a neuroscience perspective, behavioral differences like these point to different underlying mechanisms"
|
| 279 |
+
},
|
| 280 |
+
{
|
| 281 |
+
"type": "page_number",
|
| 282 |
+
"bbox": [
|
| 283 |
+
0.494,
|
| 284 |
+
0.949,
|
| 285 |
+
0.506,
|
| 286 |
+
0.96
|
| 287 |
+
],
|
| 288 |
+
"angle": 0,
|
| 289 |
+
"content": "2"
|
| 290 |
+
}
|
| 291 |
+
],
|
| 292 |
+
[
|
| 293 |
+
{
|
| 294 |
+
"type": "header",
|
| 295 |
+
"bbox": [
|
| 296 |
+
0.173,
|
| 297 |
+
0.033,
|
| 298 |
+
0.48,
|
| 299 |
+
0.049
|
| 300 |
+
],
|
| 301 |
+
"angle": 0,
|
| 302 |
+
"content": "Published as a conference paper at ICLR 2023"
|
| 303 |
+
},
|
| 304 |
+
{
|
| 305 |
+
"type": "text",
|
| 306 |
+
"bbox": [
|
| 307 |
+
0.171,
|
| 308 |
+
0.104,
|
| 309 |
+
0.827,
|
| 310 |
+
0.135
|
| 311 |
+
],
|
| 312 |
+
"angle": 0,
|
| 313 |
+
"content": "and feature representations used for object recognition between the artificial and biological systems, meaning that our scientific understanding of the mechanisms of visual behavior remains incomplete."
|
| 314 |
+
},
|
| 315 |
+
{
|
| 316 |
+
"type": "text",
|
| 317 |
+
"bbox": [
|
| 318 |
+
0.171,
|
| 319 |
+
0.14,
|
| 320 |
+
0.827,
|
| 321 |
+
0.336
|
| 322 |
+
],
|
| 323 |
+
"angle": 0,
|
| 324 |
+
"content": "Incorporating neurophysiological constraints into models to make them behave more in line with primate visual behavior is an active field of research (Marblestone et al., 2016; Lotter et al., 2016; Nayebi and Ganguli, 2017; Guerguiev et al., 2017; Hassabis et al., 2017; Lindsay and Miller, 2018; Tang et al., 2018; Kar et al., 2019; Kubilius et al., 2019; Li et al., 2019; Hasani et al., 2019; Sinz et al., 2019; Zador, 2019; Geiger et al., 2022). Previously, Dapello et al. (2020) demonstrated that convolutional neural network (CNN) models with early visual representations that are more functionally aligned with the early representations of primate visual processing tended to be more robust to adversarial attacks. This correlational observation was turned into a causal test, by simulating a primary visual cortex at the front of CNNs, which was indeed found to improve performance across a range of white box adversarial attacks and common image corruptions. Likewise, several recent studies have demonstrated that training models to classify images while also predicting (Safarani et al., 2021) or having similar representations (Federer et al., 2020) to early visual processing regions of primates, or even mice (Li et al., 2019), has a positive effect on generalization and robustness to adversarial attacks and common image corruptions."
|
| 325 |
+
},
|
| 326 |
+
{
|
| 327 |
+
"type": "text",
|
| 328 |
+
"bbox": [
|
| 329 |
+
0.171,
|
| 330 |
+
0.341,
|
| 331 |
+
0.828,
|
| 332 |
+
0.509
|
| 333 |
+
],
|
| 334 |
+
"angle": 0,
|
| 335 |
+
"content": "However, no research to date has investigated the effects of incorporating biological knowledge of the neural representations in the IT cortex – a late stage visual processing region of the primate ventral stream, which critically supports primate visual object recognition (DiCarlo et al., 2012; Majaj et al., 2015). Here, we developed a method to align the late layer \"IT representations\" of a base object recognition model (CORnet-S (Kubilius et al., 2019) pre-trained on ImageNet (Deng et al., 2009) and naturalistic, grey-scale \"HVM\" images (Majaj et al., 2015)) to the biological IT representation while the model continues to be optimized to perform classification of the dominant object in each image. Using neural recordings performed across the IT cortex of six rhesus macaque monkeys divided into three training animals and three held-out testing animals for validation, we generate a suite of models under a variety of different optimization conditions and measure their IT alignment on held out animals, their alignment with human behavior, and their robustness to a range of adversarial attacks, in all cases on at least two image sets with distinct statistics as shown in figure 1."
|
| 336 |
+
},
|
| 337 |
+
{
|
| 338 |
+
"type": "text",
|
| 339 |
+
"bbox": [
|
| 340 |
+
0.172,
|
| 341 |
+
0.515,
|
| 342 |
+
0.379,
|
| 343 |
+
0.53
|
| 344 |
+
],
|
| 345 |
+
"angle": 0,
|
| 346 |
+
"content": "We report three novel findings:"
|
| 347 |
+
},
|
| 348 |
+
{
|
| 349 |
+
"type": "text",
|
| 350 |
+
"bbox": [
|
| 351 |
+
0.211,
|
| 352 |
+
0.542,
|
| 353 |
+
0.825,
|
| 354 |
+
0.57
|
| 355 |
+
],
|
| 356 |
+
"angle": 0,
|
| 357 |
+
"content": "1. Our method robustly improves IT representational similarity of models to brains even when measured on new animals and new images."
|
| 358 |
+
},
|
| 359 |
+
{
|
| 360 |
+
"type": "text",
|
| 361 |
+
"bbox": [
|
| 362 |
+
0.209,
|
| 363 |
+
0.574,
|
| 364 |
+
0.786,
|
| 365 |
+
0.589
|
| 366 |
+
],
|
| 367 |
+
"angle": 0,
|
| 368 |
+
"content": "2. We find that gains in model IT-likeness lead to gains in human behavioral alignment."
|
| 369 |
+
},
|
| 370 |
+
{
|
| 371 |
+
"type": "text",
|
| 372 |
+
"bbox": [
|
| 373 |
+
0.209,
|
| 374 |
+
0.593,
|
| 375 |
+
0.787,
|
| 376 |
+
0.607
|
| 377 |
+
],
|
| 378 |
+
"angle": 0,
|
| 379 |
+
"content": "3. Likewise we find that improved IT-likeness leads to increased adversarial robustness."
|
| 380 |
+
},
|
| 381 |
+
{
|
| 382 |
+
"type": "list",
|
| 383 |
+
"bbox": [
|
| 384 |
+
0.209,
|
| 385 |
+
0.542,
|
| 386 |
+
0.825,
|
| 387 |
+
0.607
|
| 388 |
+
],
|
| 389 |
+
"angle": 0,
|
| 390 |
+
"content": null
|
| 391 |
+
},
|
| 392 |
+
{
|
| 393 |
+
"type": "text",
|
| 394 |
+
"bbox": [
|
| 395 |
+
0.171,
|
| 396 |
+
0.621,
|
| 397 |
+
0.827,
|
| 398 |
+
0.706
|
| 399 |
+
],
|
| 400 |
+
"angle": 0,
|
| 401 |
+
"content": "Interestingly, we observe that adversarial training improves robustness but does not significantly increase IT similarity or human behavioral alignment. Finally, while probing the limits of our current IT-alignment procedure, we observed that the improvements in IT similarity, behavioral alignment, and adversarial robustness generalized to images with different image statistics than those in the IT training set (from naturalistic gray scale images to full color natural images) but only for object categories that were part of the original IT training set and not for held-out object categories."
|
| 402 |
+
},
|
| 403 |
+
{
|
| 404 |
+
"type": "title",
|
| 405 |
+
"bbox": [
|
| 406 |
+
0.172,
|
| 407 |
+
0.725,
|
| 408 |
+
0.387,
|
| 409 |
+
0.74
|
| 410 |
+
],
|
| 411 |
+
"angle": 0,
|
| 412 |
+
"content": "2 DATA AND METHODS"
|
| 413 |
+
},
|
| 414 |
+
{
|
| 415 |
+
"type": "text",
|
| 416 |
+
"bbox": [
|
| 417 |
+
0.171,
|
| 418 |
+
0.756,
|
| 419 |
+
0.827,
|
| 420 |
+
0.799
|
| 421 |
+
],
|
| 422 |
+
"angle": 0,
|
| 423 |
+
"content": "Here we describe the neural and behavioral data collection, the training and testing methods used for aligning model representations with IT representations, and the methods for assessing behavioral alignment and adversarial robustness."
|
| 424 |
+
},
|
| 425 |
+
{
|
| 426 |
+
"type": "title",
|
| 427 |
+
"bbox": [
|
| 428 |
+
0.172,
|
| 429 |
+
0.815,
|
| 430 |
+
0.303,
|
| 431 |
+
0.829
|
| 432 |
+
],
|
| 433 |
+
"angle": 0,
|
| 434 |
+
"content": "2.1 IMAGE SETS"
|
| 435 |
+
},
|
| 436 |
+
{
|
| 437 |
+
"type": "text",
|
| 438 |
+
"bbox": [
|
| 439 |
+
0.171,
|
| 440 |
+
0.84,
|
| 441 |
+
0.827,
|
| 442 |
+
0.927
|
| 443 |
+
],
|
| 444 |
+
"angle": 0,
|
| 445 |
+
"content": "High-quality synthetic \"naturalistic\" images of single objects (HVM images) were generated using free ray-tracing software (http://www.povray.org), similar to (Majaj et al., 2015). Each image consisted of a 2D projection of a 3D model (purchased from Dosch Design and TurboSquid) added to a random natural background. The ten objects chosen were bear, elephant, face, apple, car, dog, chair, plane, bird and zebra. By varying six viewing parameters, we explored three types of identity while preserving object variation, position (x and y), rotation (x, y, and z), and size. All images"
|
| 446 |
+
},
|
| 447 |
+
{
|
| 448 |
+
"type": "page_number",
|
| 449 |
+
"bbox": [
|
| 450 |
+
0.494,
|
| 451 |
+
0.949,
|
| 452 |
+
0.506,
|
| 453 |
+
0.96
|
| 454 |
+
],
|
| 455 |
+
"angle": 0,
|
| 456 |
+
"content": "3"
|
| 457 |
+
}
|
| 458 |
+
],
|
| 459 |
+
[
|
| 460 |
+
{
|
| 461 |
+
"type": "header",
|
| 462 |
+
"bbox": [
|
| 463 |
+
0.173,
|
| 464 |
+
0.033,
|
| 465 |
+
0.48,
|
| 466 |
+
0.049
|
| 467 |
+
],
|
| 468 |
+
"angle": 0,
|
| 469 |
+
"content": "Published as a conference paper at ICLR 2023"
|
| 470 |
+
},
|
| 471 |
+
{
|
| 472 |
+
"type": "text",
|
| 473 |
+
"bbox": [
|
| 474 |
+
0.171,
|
| 475 |
+
0.104,
|
| 476 |
+
0.825,
|
| 477 |
+
0.162
|
| 478 |
+
],
|
| 479 |
+
"angle": 0,
|
| 480 |
+
"content": "were achromatic with a native resolution of \\(256 \\times 256\\) pixels. Additionally, natural microsoft COCO images (photographs) pertaining to the 10 nouns, were download from http://cocodataset.org (Lin et al., 2014). Each image was resized (not cropped) to \\(256 \\times 256 \\times 3\\) pixel size and presented within the central 8 deg."
|
| 481 |
+
},
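As a small illustration of the preprocessing described above (resizing, not cropping, each COCO photograph to \(256 \times 256 \times 3\)), a sketch using Pillow is shown below; the resampling filter is an assumption, since the text does not specify one.

```python
from PIL import Image

def prepare_coco_image(path: str) -> Image.Image:
    """Resize (not crop) a COCO photograph to 256 x 256 RGB, per Section 2.1."""
    # Bilinear resampling is an assumption; the paper does not state the filter used.
    return Image.open(path).convert("RGB").resize((256, 256), Image.BILINEAR)
```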
|
| 482 |
+
{
|
| 483 |
+
"type": "title",
|
| 484 |
+
"bbox": [
|
| 485 |
+
0.172,
|
| 486 |
+
0.187,
|
| 487 |
+
0.603,
|
| 488 |
+
0.202
|
| 489 |
+
],
|
| 490 |
+
"angle": 0,
|
| 491 |
+
"content": "2.2 PRIMATE NEURAL DATA COLLECTION AND PROCESSING"
|
| 492 |
+
},
|
| 493 |
+
{
|
| 494 |
+
"type": "text",
|
| 495 |
+
"bbox": [
|
| 496 |
+
0.17,
|
| 497 |
+
0.216,
|
| 498 |
+
0.827,
|
| 499 |
+
0.342
|
| 500 |
+
],
|
| 501 |
+
"angle": 0,
|
| 502 |
+
"content": "We surgically implanted each monkey with a head post under aseptic conditions. We recorded neural activity using two or three micro-electrode arrays (Utah arrays; Blackrock Microsystems) implanted in IT cortex. A total of 96 electrodes were connected per array (grid arrangement, \\(400\\mathrm{um}\\) spacing, \\(4\\mathrm{mm}\\times 4\\mathrm{mm}\\) span of each array). Array placement was guided by the sulcus pattern, which was visible during the surgery. The electrodes were accessed through a percutaneous connector that allowed simultaneous recording from all 96 electrodes from each array. All surgical and animal procedures were performed in accordance with National Institutes of Health guidelines and the Massachusetts Institute of Technology Committee on Animal Care. For information on the neural recording quality metrics per site, see supplemental section A.1."
|
| 503 |
+
},
|
| 504 |
+
{
|
| 505 |
+
"type": "text",
|
| 506 |
+
"bbox": [
|
| 507 |
+
0.17,
|
| 508 |
+
0.348,
|
| 509 |
+
0.828,
|
| 510 |
+
0.487
|
| 511 |
+
],
|
| 512 |
+
"angle": 0,
|
| 513 |
+
"content": "During each daily recording session, band-pass filtered (0.1 Hz to 10 kHz) neural activity was recorded continuously at a sampling rate of 20 kHz using Intan Recording Controllers (Intan Technologies, LLC). The majority of the data presented here were based on multiunit activity. We detected the multiunit spikes after the raw voltage data were collected. A multiunit spike event was defined as the threshold crossing when voltage (falling edge) deviated by more than three times the standard deviation of the raw voltage values. Our array placements allowed us to sample neural sites from different parts of IT, along the posterior to anterior axis. However, for all the analyses, we did not consider the specific spatial location of the site, and treated each site as a random sample from a pooled IT population. For information on the neural recording quality metrics, see supplemental section A.1."
|
| 514 |
+
},
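The paragraph above defines a multiunit spike event as a falling-edge threshold crossing at three times the standard deviation of the raw voltage. A simplified NumPy sketch of that event definition is given below; it omits the band-pass filtering step and uses illustrative names.

```python
import numpy as np

def detect_multiunit_spikes(voltage: np.ndarray, fs: float = 20_000.0) -> np.ndarray:
    """Return multiunit spike times (s) as falling-edge crossings of -3 x std(voltage).

    Simplified sketch of the event definition in Section 2.2; the actual pipeline
    band-pass filters the signal (0.1 Hz - 10 kHz) before thresholding.
    """
    threshold = -3.0 * np.std(voltage)
    below = voltage < threshold
    # A spike event is the sample where the trace first drops below the threshold.
    crossing_idx = np.flatnonzero(~below[:-1] & below[1:]) + 1
    return crossing_idx / fs
```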
|
| 515 |
+
{
|
| 516 |
+
"type": "text",
|
| 517 |
+
"bbox": [
|
| 518 |
+
0.17,
|
| 519 |
+
0.494,
|
| 520 |
+
0.827,
|
| 521 |
+
0.62
|
| 522 |
+
],
|
| 523 |
+
"angle": 0,
|
| 524 |
+
"content": "Behavioral state during neural data collection All neural response data were obtained during a passive viewing task. In this task, monkeys fixated a white square dot \\((0.2^{\\circ})\\) for \\(300\\mathrm{ms}\\) to initiate a trial. We then presented a sequence of 5 to 10 images, each ON for \\(100\\mathrm{ms}\\) followed by a \\(100\\mathrm{ms}\\) gray blank screen. This was followed by a water reward and an inter trial interval of \\(500\\mathrm{ms}\\), followed by the next sequence. Trials were aborted if gaze was not held within \\(\\pm 2^{\\circ}\\) of the central fixation dot during any point. Each neural site's response to each image was taken as the mean rate during a time window of \\(70 - 170\\mathrm{ms}\\) following image onset, a window that has been previously chosen to align with the visually-driven latency of IT neurons and their quantitative relationship to object classification behavior as in Majaj et al. (2015)."
|
| 525 |
+
},
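Each site's response is summarized above as the mean rate in a 70-170 ms window following image onset. A small sketch of that computation is shown below; the helper name and the assumption of spike times in seconds are illustrative.

```python
import numpy as np

def window_rate(spike_times_s: np.ndarray, image_onset_s: float,
                start_ms: float = 70.0, end_ms: float = 170.0) -> float:
    """Mean firing rate (spikes/s) in a post-onset window, as described in Section 2.2."""
    t0 = image_onset_s + start_ms / 1000.0
    t1 = image_onset_s + end_ms / 1000.0
    n_spikes = np.count_nonzero((spike_times_s >= t0) & (spike_times_s < t1))
    return n_spikes / (t1 - t0)
```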
|
| 526 |
+
{
|
| 527 |
+
"type": "title",
|
| 528 |
+
"bbox": [
|
| 529 |
+
0.172,
|
| 530 |
+
0.645,
|
| 531 |
+
0.503,
|
| 532 |
+
0.659
|
| 533 |
+
],
|
| 534 |
+
"angle": 0,
|
| 535 |
+
"content": "2.3 HUMAN BEHAVIORAL DATA COLLECTION"
|
| 536 |
+
},
|
| 537 |
+
{
|
| 538 |
+
"type": "text",
|
| 539 |
+
"bbox": [
|
| 540 |
+
0.17,
|
| 541 |
+
0.674,
|
| 542 |
+
0.829,
|
| 543 |
+
0.842
|
| 544 |
+
],
|
| 545 |
+
"angle": 0,
|
| 546 |
+
"content": "We measured human behavior (from 88 subjects) using the online Amazon MTurk platform which enables efficient collection of large-scale psychophysical data from crowd-sourced \"human intelligence tasks\" (HITs). The reliability of the online MTurk platform has been validated by comparing results obtained from online and in-lab psychophysical experiments (Majaj et al., 2015; Rajalingham et al., 2015). Each trial started with a \\(100\\mathrm{ms}\\) presentation of the sample image (one our of 1320 images). This was followed by a blank gray screen for \\(100\\mathrm{ms}\\); followed by a choice screen with the target and distractor objects, similar to (Rajalingham et al., 2018). The subjects indicated their choice by touching the screen or clicking the mouse over the target object. Each subjects saw an image only once. We collected the data such that, there were 80 unique subject responses per image, with varied distractor objects. Prior work has shown that human and macaque behavioral patterns are nearly identical, even at the image grain (Rajalingham et al., 2018). For information on the human behavioral data collection, see supplemental section A.2."
|
| 547 |
+
},
|
| 548 |
+
{
|
| 549 |
+
"type": "title",
|
| 550 |
+
"bbox": [
|
| 551 |
+
0.172,
|
| 552 |
+
0.867,
|
| 553 |
+
0.75,
|
| 554 |
+
0.882
|
| 555 |
+
],
|
| 556 |
+
"angle": 0,
|
| 557 |
+
"content": "2.4 ALIGNING MODEL REPRESENTATIONS WITH MACAQUE IT REPRESENTATIONS"
|
| 558 |
+
},
|
| 559 |
+
{
|
| 560 |
+
"type": "text",
|
| 561 |
+
"bbox": [
|
| 562 |
+
0.171,
|
| 563 |
+
0.896,
|
| 564 |
+
0.829,
|
| 565 |
+
0.926
|
| 566 |
+
],
|
| 567 |
+
"angle": 0,
|
| 568 |
+
"content": "In order to align neural network model representations with primate IT representations while performing classification, we use a multi-loss formulation similar to that used in Li et al. (2019) and Federer"
|
| 569 |
+
},
|
| 570 |
+
{
|
| 571 |
+
"type": "page_number",
|
| 572 |
+
"bbox": [
|
| 573 |
+
0.493,
|
| 574 |
+
0.949,
|
| 575 |
+
0.506,
|
| 576 |
+
0.96
|
| 577 |
+
],
|
| 578 |
+
"angle": 0,
|
| 579 |
+
"content": "4"
|
| 580 |
+
}
|
| 581 |
+
],
|
| 582 |
+
[
|
| 583 |
+
{
|
| 584 |
+
"type": "header",
|
| 585 |
+
"bbox": [
|
| 586 |
+
0.173,
|
| 587 |
+
0.033,
|
| 588 |
+
0.48,
|
| 589 |
+
0.049
|
| 590 |
+
],
|
| 591 |
+
"angle": 0,
|
| 592 |
+
"content": "Published as a conference paper at ICLR 2023"
|
| 593 |
+
},
|
| 594 |
+
{
|
| 595 |
+
"type": "text",
|
| 596 |
+
"bbox": [
|
| 597 |
+
0.171,
|
| 598 |
+
0.104,
|
| 599 |
+
0.827,
|
| 600 |
+
0.384
|
| 601 |
+
],
|
| 602 |
+
"angle": 0,
|
| 603 |
+
"content": "et al. (2020). Starting with an ImageNet (Deng et al., 2009) pre-trained CORnet-S model (Kubilius et al., 2019), we used stochastic gradient descent (SGD) on all model weights to jointly minimize a standard categorical cross entropy loss on model predictions of ImageNet labels (maintained from model pre-training, for stability), HVM image labels, and a centered kernel alignment (CKA) based loss penalizing the \"IT\" layer of CORnet-S for having representations not aligned with primate IT representations of the HVM images. CORnet-S was selected because it already has a clearly defined layer committed to region IT, close to the final linear readout of the network, but otherwise our procedure is compatible with any neural network architecture. Meanwhile CKA, a measure of linear subspace alignment, was selected as a representational similarity measure. CKA has ideal properties such as invariance to isotropic scaling and orthonormal transformations which do not matter from the perspective of a linear readout, but sensitivity to arbitrary linear transformations (Kornblith et al., 2019) which could lead to differences from a linear readout as well as allow the network to hide representations useful for image classification but not present within primate IT. CKA ranges from 0, indicating completely non-overlapping subspaces, to 1, indicating completely aligned subspaces. We found that our best neural alignment results came from minimizing the neural similarity loss function \\( \\log(1 - CKA(X,Y)) \\), where \\( X \\in \\mathbb{R}^{n \\times p_1} \\) and \\( Y \\in \\mathbb{R}^{n \\times p_2} \\) denote two column centered activation matrices with generated by showing \\( n \\) example images and recording \\( p_1 \\) and \\( p_2 \\) neurons from the IT layer of CORnet-S and macaque IT recordings respectively. The macaque neural activation matrices were generated by averaging over approximately 50 trials per image and over a 70-170 millisecond time window following image presentation. An illustration of our setup is shown in figure 1A."
|
| 604 |
+
},
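The entry above defines the neural similarity loss as \( \log(1 - CKA(X,Y)) \) over column-centered activation matrices. Below is a minimal PyTorch sketch of linear CKA and that loss; it is a sketch of the stated formula rather than the authors' released code, and the function names are placeholders.

```python
import torch

def linear_cka(x, y, eps=1e-8):
    """Linear CKA between two activation matrices (columns are units).

    x: (n_images, p1) model IT-layer activations
    y: (n_images, p2) trial-averaged macaque IT responses
    """
    x = x - x.mean(dim=0, keepdim=True)   # column centering
    y = y - y.mean(dim=0, keepdim=True)
    # Frobenius norms of the cross- and self-Gram matrices.
    cross = torch.norm(y.t() @ x, p="fro") ** 2
    self_x = torch.norm(x.t() @ x, p="fro")
    self_y = torch.norm(y.t() @ y, p="fro")
    return cross / (self_x * self_y + eps)

def neural_similarity_loss(model_it, monkey_it):
    """log(1 - CKA), as described in the text; lower means better aligned."""
    return torch.log(1.0 - linear_cka(model_it, monkey_it) + 1e-8)
```

Because linear CKA is invariant to isotropic scaling and orthonormal transformations of either matrix, this loss only penalizes misalignment of the linear subspaces, matching the motivation given in the text.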
|
| 605 |
+
{
|
| 606 |
+
"type": "title",
|
| 607 |
+
"bbox": [
|
| 608 |
+
0.172,
|
| 609 |
+
0.403,
|
| 610 |
+
0.483,
|
| 611 |
+
0.417
|
| 612 |
+
],
|
| 613 |
+
"angle": 0,
|
| 614 |
+
"content": "2.5 TRAINING AND TESTING CONDITIONS"
|
| 615 |
+
},
|
| 616 |
+
{
|
| 617 |
+
"type": "text",
|
| 618 |
+
"bbox": [
|
| 619 |
+
0.171,
|
| 620 |
+
0.43,
|
| 621 |
+
0.827,
|
| 622 |
+
0.611
|
| 623 |
+
],
|
| 624 |
+
"angle": 0,
|
| 625 |
+
"content": "In all reported experiments, model IT representational similarity training was performed on 2880 grey-scale naturalistic HVM image representations consisting of 188 active neural sites collated from the three training set macaques for 1200 epochs. We use a batch size of 128, meaning the CKA loss computed for a random set of 128 representations for each gradient step. In order to create models with a variety of different final neural alignment scores, we add random probability \\(1 - p\\) of dropping the IT alignment gradients and create six different sets (5 random seeds for each set) of neurally aligned models with \\(p \\in [0, 1/32, 1/16, 1/8, 1/4, 1/2, 1]\\). For example, the set with \\(p = 0\\) drops all of the IT alignment gradients and thus has no improved IT alignment over the base model, while the set with \\(p = 1\\) always includes the IT alignment gradients and similarly achieves the highest IT alignment scores (see figure 2). We also introduce a small amount of data augmentation including the physical equivalent of 0.5 degrees of jitter the vertical and horizontal position of the images, 0.5 degrees of rotational jitter, and +/- 0.1 degrees of scaling jitter, assuming our model has an 8 degree field of view. These augmentations were selected to simulate natural viewing conditions."
|
| 626 |
+
},
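The entry above describes jointly minimizing the ImageNet and HVM classification losses together with the CKA-based alignment loss, with the alignment gradients randomly dropped with probability \(1 - p\). A hedged sketch of one such training step is shown below; it reuses neural_similarity_loss from the earlier sketch, and the return_it flag and loss weighting are hypothetical, since the exact model interface and weights are not given here.

```python
import random
import torch.nn.functional as F

def training_step(model, imagenet_batch, hvm_batch, monkey_it,
                  p_align=0.5, w_align=1.0):
    """One combined-loss step; p_align is the probability of keeping the
    IT-alignment gradients on this step (the text's gradient 'dropout')."""
    images_in, labels_in = imagenet_batch          # ImageNet images and labels
    images_hvm, labels_hvm = hvm_batch             # HVM images and labels
    loss = F.cross_entropy(model(images_in), labels_in)
    # Hypothetical API: also return the model's "IT" layer activations.
    logits_hvm, model_it = model(images_hvm, return_it=True)
    loss = loss + F.cross_entropy(logits_hvm, labels_hvm)
    if random.random() < p_align:
        # Include the CKA alignment term only on a fraction p_align of steps.
        loss = loss + w_align * neural_similarity_loss(model_it, monkey_it)
    return loss
```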
|
| 627 |
+
{
|
| 628 |
+
"type": "text",
|
| 629 |
+
"bbox": [
|
| 630 |
+
0.171,
|
| 631 |
+
0.617,
|
| 632 |
+
0.825,
|
| 633 |
+
0.687
|
| 634 |
+
],
|
| 635 |
+
"angle": 0,
|
| 636 |
+
"content": "Model IT representational similarity testing was performed on a total of three held out monkeys: Monkey 1 (280 neural sites) and monkey 2 (144 neural sites) on 320 held out HVM images with statistics similar to the training distribution, and monkey 1 (237 neural sites) and monkey 3 (106 active neural sites) on 200 full color natural COCO images with different statistics than those used during training. Additional model training information can be found in supplemental section B."
|
| 637 |
+
},
|
| 638 |
+
{
|
| 639 |
+
"type": "text",
|
| 640 |
+
"bbox": [
|
| 641 |
+
0.171,
|
| 642 |
+
0.694,
|
| 643 |
+
0.825,
|
| 644 |
+
0.737
|
| 645 |
+
],
|
| 646 |
+
"angle": 0,
|
| 647 |
+
"content": "For performing white box adversarial attacks, we used untargeted projected gradient descent (PGD) (Madry et al., 2017) with \\( L_{\\infty} \\) and \\( L_{2} \\) norm constraints. Further details are given in supplemental section B."
|
| 648 |
+
},
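The entry above evaluates robustness with untargeted PGD under \( L_{\infty} \) and \( L_{2} \) constraints (Madry et al., 2017). The sketch below shows a generic \( L_{\infty} \) PGD attack for reference; the step size, step count, and random start are common defaults rather than the paper's exact settings, which are described in its supplemental section B.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, images, labels, eps=1/1020, alpha=None, steps=32):
    """Untargeted PGD under an L-infinity constraint (images assumed in [0, 1])."""
    if alpha is None:
        alpha = 2.5 * eps / steps                  # a common step-size heuristic
    x_adv = images.clone().detach()
    x_adv = (x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), labels)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                 # ascend the loss
            x_adv = images + (x_adv - images).clamp(-eps, eps)  # project to the eps-ball
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```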
|
| 649 |
+
{
|
| 650 |
+
"type": "title",
|
| 651 |
+
"bbox": [
|
| 652 |
+
0.172,
|
| 653 |
+
0.757,
|
| 654 |
+
0.416,
|
| 655 |
+
0.77
|
| 656 |
+
],
|
| 657 |
+
"angle": 0,
|
| 658 |
+
"content": "2.6 BEHAVIORAL BENCHMARKS"
|
| 659 |
+
},
|
| 660 |
+
{
|
| 661 |
+
"type": "text",
|
| 662 |
+
"bbox": [
|
| 663 |
+
0.171,
|
| 664 |
+
0.784,
|
| 665 |
+
0.827,
|
| 666 |
+
0.883
|
| 667 |
+
],
|
| 668 |
+
"angle": 0,
|
| 669 |
+
"content": "To characterize the behavior of the visual system, we have used an image-level behavioral metric, i2n (Rajalingham et al., 2018). The behavioral metric computes a pattern of unbiased behavioral performances, using a sensitivity index: \\( d' = Z(HitRate) - Z(FalseAlarmRate) \\), where \\( Z \\) is the inverse of the cumulative Gaussian distribution. The HitRates for i2n are the accuracies of the subjects when a specific image is shown and the choices include the target object (i.e., the object present in the image) and one other specific distractor object. So for every distractor-target pair we get a different i2n entry. A detailed description of how to compute i2n can be also found at Rajalingham"
|
| 670 |
+
},
|
| 671 |
+
{
|
| 672 |
+
"type": "page_footnote",
|
| 673 |
+
"bbox": [
|
| 674 |
+
0.171,
|
| 675 |
+
0.898,
|
| 676 |
+
0.825,
|
| 677 |
+
0.926
|
| 678 |
+
],
|
| 679 |
+
"angle": 0,
|
| 680 |
+
"content": "the pre-trained version was selected as a starting point because of the relatively small number of training samples in our dataset (Riedel, 2022)."
|
| 681 |
+
},
|
| 682 |
+
{
|
| 683 |
+
"type": "page_number",
|
| 684 |
+
"bbox": [
|
| 685 |
+
0.494,
|
| 686 |
+
0.949,
|
| 687 |
+
0.506,
|
| 688 |
+
0.96
|
| 689 |
+
],
|
| 690 |
+
"angle": 0,
|
| 691 |
+
"content": "5"
|
| 692 |
+
}
|
| 693 |
+
],
|
| 694 |
+
[
|
| 695 |
+
{
|
| 696 |
+
"type": "header",
|
| 697 |
+
"bbox": [
|
| 698 |
+
0.173,
|
| 699 |
+
0.033,
|
| 700 |
+
0.48,
|
| 701 |
+
0.049
|
| 702 |
+
],
|
| 703 |
+
"angle": 0,
|
| 704 |
+
"content": "Published as a conference paper at ICLR 2023"
|
| 705 |
+
},
|
| 706 |
+
{
|
| 707 |
+
"type": "text",
|
| 708 |
+
"bbox": [
|
| 709 |
+
0.171,
|
| 710 |
+
0.104,
|
| 711 |
+
0.827,
|
| 712 |
+
0.133
|
| 713 |
+
],
|
| 714 |
+
"angle": 0,
|
| 715 |
+
"content": "et al. (2018). The i2n behavioral benchmark was computed using the Brain-Score implementation of the i2n metric (Schrimpf et al., 2018)."
|
| 716 |
+
},
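The preceding entries define the i2n metric through the sensitivity index \( d' = Z(HitRate) - Z(FalseAlarmRate) \). The snippet below only illustrates that formula with a standard clipping of the rates; the full image-by-distractor i2n construction follows Rajalingham et al. (2018) and the Brain-Score implementation rather than this sketch.

```python
import numpy as np
from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate, clip=(0.01, 0.99)):
    """Sensitivity index d' = Z(hit rate) - Z(false alarm rate).

    Rates are clipped away from 0 and 1 so the inverse Gaussian CDF (Z)
    stays finite; the clipping bounds here are illustrative.
    """
    hr = np.clip(hit_rate, *clip)
    fa = np.clip(false_alarm_rate, *clip)
    return norm.ppf(hr) - norm.ppf(fa)

# Example: an image answered correctly 85% of the time against a distractor
# that draws false alarms 20% of the time.
print(d_prime(0.85, 0.20))   # ~1.88
```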
|
| 717 |
+
{
|
| 718 |
+
"type": "image",
|
| 719 |
+
"bbox": [
|
| 720 |
+
0.174,
|
| 721 |
+
0.148,
|
| 722 |
+
0.473,
|
| 723 |
+
0.282
|
| 724 |
+
],
|
| 725 |
+
"angle": 0,
|
| 726 |
+
"content": null
|
| 727 |
+
},
|
| 728 |
+
{
|
| 729 |
+
"type": "image",
|
| 730 |
+
"bbox": [
|
| 731 |
+
0.484,
|
| 732 |
+
0.147,
|
| 733 |
+
0.826,
|
| 734 |
+
0.282
|
| 735 |
+
],
|
| 736 |
+
"angle": 0,
|
| 737 |
+
"content": null
|
| 738 |
+
},
|
| 739 |
+
{
|
| 740 |
+
"type": "image_caption",
|
| 741 |
+
"bbox": [
|
| 742 |
+
0.171,
|
| 743 |
+
0.291,
|
| 744 |
+
0.828,
|
| 745 |
+
0.459
|
| 746 |
+
],
|
| 747 |
+
"angle": 0,
|
| 748 |
+
"content": "Figure 2: IT alignment training leads to improved IT representational similarity on held out animals and held out images across two image sets with different statistics. A) IT neural similarity scores (CKA, normalized by split-half trial reliability) for held out but within domain HVM images vs gradient steps is shown for two held out monkeys across seven different neural similarity loss gradient dropout rates (the darkest trace receives neural similarity loss gradients at \\(100\\%\\) of gradient steps, while in the lightest trace neural similarity loss gradients are dropped at every step). Two control conditions are also shown: optimizing model IT toward a random Gaussian target IT matrix (random, blue) and toward an image-shuffled target IT matrix (shuffle, orange). B) Like A but for natural COCO images out of domain with respect to the training set. Grey dashed line on each plot shows the base model score for models pre-trained on ImageNet and HVM image labels with no IT representational similarity loss, which the model set with \\(0\\%\\) of IT similarity loss gradients does not deviate significantly from. Error bars are bootstrapped confidence intervals for 5 training seeds."
|
| 749 |
+
},
|
| 750 |
+
{
|
| 751 |
+
"type": "title",
|
| 752 |
+
"bbox": [
|
| 753 |
+
0.173,
|
| 754 |
+
0.49,
|
| 755 |
+
0.285,
|
| 756 |
+
0.506
|
| 757 |
+
],
|
| 758 |
+
"angle": 0,
|
| 759 |
+
"content": "3 RESULTS"
|
| 760 |
+
},
|
| 761 |
+
{
|
| 762 |
+
"type": "text",
|
| 763 |
+
"bbox": [
|
| 764 |
+
0.171,
|
| 765 |
+
0.523,
|
| 766 |
+
0.825,
|
| 767 |
+
0.621
|
| 768 |
+
],
|
| 769 |
+
"angle": 0,
|
| 770 |
+
"content": "Does aligning late stage model representations with primate IT representations lead to improvements in alignment with image-by-image patterns of human behavior or improvements in white box adversarial robustness? We start by testing if our method can generate models that are truly more IT-like by validating on held out animals and images, as this has not been previously attempted and is not guaranteed to work given the sampling limitations of neural recording experiments. We then proceed to analyze how these IT-aligned models fair on several human behavioral alignment benchmarks and a diverse set of white box adversarial attacks."
|
| 771 |
+
},
|
| 772 |
+
{
|
| 773 |
+
"type": "title",
|
| 774 |
+
"bbox": [
|
| 775 |
+
0.172,
|
| 776 |
+
0.64,
|
| 777 |
+
0.825,
|
| 778 |
+
0.669
|
| 779 |
+
],
|
| 780 |
+
"angle": 0,
|
| 781 |
+
"content": "3.1 DIRECT FITTING TO IT NEURAL DATA IMPROVES IT-LIKENESS OF MODELS ACROSS HELD OUT ANIMALS AND IMAGE SETS"
|
| 782 |
+
},
|
| 783 |
+
{
|
| 784 |
+
"type": "text",
|
| 785 |
+
"bbox": [
|
| 786 |
+
0.171,
|
| 787 |
+
0.681,
|
| 788 |
+
0.827,
|
| 789 |
+
0.849
|
| 790 |
+
],
|
| 791 |
+
"angle": 0,
|
| 792 |
+
"content": "First, we investigated how well our IT alignment optimization procedure generalizes to IT neural similarity measurements (CKA) for two held out test monkeys on 320 held out HVM images (similar image statistics as the training set). Figure 2A shows theCeiled IT neural similarity scores for both test animals across different neural similarity loss gradient dropout rates \\((p\\in [0,1 / 32,1 / 16,1 / 8,1 / 4,1 / 2,1])\\) ; the model marked \\(100\\%\\) sees IT similarity loss gradients at every step, where as the model marked \\(0\\%\\) never sees IT similarity loss gradients) as well as models optimized to classify HVM images while fitting a random Gaussian target activation matrix, or an image shuffled target activation matrix which has the same first and second order statistics as the true IT activation matrix, but scrambled image information. For both animals, we see a significant positive shift from the unfitted model (neural loss weight of 0.0), with higher relative neural loss weights generally leading to higher IT neural similarity scores. Meanwhile, both of the control conditions cause models to become less IT like, to a significant degree."
|
| 793 |
+
},
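The entry above reports ceiled IT neural similarity scores, i.e. CKA normalized by split-half trial reliability (see the figure 2 caption). The sketch below shows one plausible way to compute such a ceiling from single-trial responses; the exact normalization the authors used is not spelled out in this section, so the split-half construction here is an assumption.

```python
import numpy as np

def linear_cka_np(x, y, eps=1e-8):
    """Linear CKA for two activation matrices (numpy counterpart of the earlier sketch)."""
    x = x - x.mean(axis=0, keepdims=True)
    y = y - y.mean(axis=0, keepdims=True)
    cross = np.linalg.norm(y.T @ x, "fro") ** 2
    return cross / (np.linalg.norm(x.T @ x, "fro") * np.linalg.norm(y.T @ y, "fro") + eps)

def ceiled_cka(model_acts, monkey_trials, rng=None):
    """Normalize model-to-monkey CKA by a split-half trial-reliability ceiling.

    monkey_trials: (n_trials, n_images, n_sites) single-trial responses.
    The split-half ceiling below is an illustrative assumption, not the
    authors' exact procedure.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    perm = rng.permutation(monkey_trials.shape[0])
    half = len(perm) // 2
    ceiling = linear_cka_np(monkey_trials[perm[:half]].mean(axis=0),
                            monkey_trials[perm[half:]].mean(axis=0))
    raw = linear_cka_np(model_acts, monkey_trials.mean(axis=0))
    return raw / ceiling
```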
|
| 794 |
+
{
|
| 795 |
+
"type": "text",
|
| 796 |
+
"bbox": [
|
| 797 |
+
0.171,
|
| 798 |
+
0.855,
|
| 799 |
+
0.825,
|
| 800 |
+
0.926
|
| 801 |
+
],
|
| 802 |
+
"angle": 0,
|
| 803 |
+
"content": "We next investigated how well our procedure generalizes from the grey-scale naturalistic HVM images to full color, natural images from COCO. Figure 2B shows the same model optimization conditions as before, but now on two unseen animal IT representations of COCO images. Like in 2A although to a lesser absolute degree, we see improvements relative to the baseline in IT neural similarity as function of the neural loss weight, and controls generally decreasing in IT neural similarity. From"
|
| 804 |
+
},
|
| 805 |
+
{
|
| 806 |
+
"type": "page_number",
|
| 807 |
+
"bbox": [
|
| 808 |
+
0.494,
|
| 809 |
+
0.949,
|
| 810 |
+
0.506,
|
| 811 |
+
0.96
|
| 812 |
+
],
|
| 813 |
+
"angle": 0,
|
| 814 |
+
"content": "6"
|
| 815 |
+
}
|
| 816 |
+
],
|
| 817 |
+
[
|
| 818 |
+
{
|
| 819 |
+
"type": "header",
|
| 820 |
+
"bbox": [
|
| 821 |
+
0.173,
|
| 822 |
+
0.033,
|
| 823 |
+
0.48,
|
| 824 |
+
0.049
|
| 825 |
+
],
|
| 826 |
+
"angle": 0,
|
| 827 |
+
"content": "Published as a conference paper at ICLR 2023"
|
| 828 |
+
},
|
| 829 |
+
{
|
| 830 |
+
"type": "text",
|
| 831 |
+
"bbox": [
|
| 832 |
+
0.171,
|
| 833 |
+
0.104,
|
| 834 |
+
0.825,
|
| 835 |
+
0.133
|
| 836 |
+
],
|
| 837 |
+
"angle": 0,
|
| 838 |
+
"content": "this, we conclude that our IT alignment procedure is able to improve IT-likeness in our models even in held out animals and across two image sets with distinct statistics."
|
| 839 |
+
},
|
| 840 |
+
{
|
| 841 |
+
"type": "title",
|
| 842 |
+
"bbox": [
|
| 843 |
+
0.172,
|
| 844 |
+
0.159,
|
| 845 |
+
0.812,
|
| 846 |
+
0.173
|
| 847 |
+
],
|
| 848 |
+
"angle": 0,
|
| 849 |
+
"content": "3.2 INCREASED BEHAVIORAL ALIGNMENT IN MODELS THAT BETTER MATCH MACAQUE IT"
|
| 850 |
+
},
|
| 851 |
+
{
|
| 852 |
+
"type": "text",
|
| 853 |
+
"bbox": [
|
| 854 |
+
0.171,
|
| 855 |
+
0.188,
|
| 856 |
+
0.827,
|
| 857 |
+
0.412
|
| 858 |
+
],
|
| 859 |
+
"angle": 0,
|
| 860 |
+
"content": "Next, we investigated how single image level classification error patterns correlate between humans and IT aligned models. To get a big picture view, we take all of the optimization conditions and validation epochs generated in figure 2A while models are training and compare IT neural similarity on the HVM test set (averaged over held out animals) with human behavioral alignment on the HVM test set. As shown in figure 3A, this analysis reveals a broad, though not linear correlation between IT neural similarity and behavioral alignment. Interestingly, we observe that the slope is at its steepest when IT neural similarity is at the highest values, suggesting that if an even higher degree of IT-alignment might result in greater increases in behavioral alignment. We also investigated whether these trends persist when we exclude the optimization on object labels from the HVM images and only optimize for IT neural similarity. To do so, we train the models on all previous conditions but without the HVM object-label loss. As shown in 3, the overall shape of the trend remains quite similar, though the absolute behavioral alignment shifts downward, indicating that the label information during training helps on the behavioral task, but is not required for the trend to hold. In figure 3B, we perform the same set of measurements but now focusing on the COCO image set. Consistent with the observation on COCO IT neural similarity, the behavioral alignment trend transfers to the COCO image set although the absolute magnitude of the improvements are less."
|
| 861 |
+
},
|
| 862 |
+
{
|
| 863 |
+
"type": "text",
|
| 864 |
+
"bbox": [
|
| 865 |
+
0.171,
|
| 866 |
+
0.417,
|
| 867 |
+
0.828,
|
| 868 |
+
0.641
|
| 869 |
+
],
|
| 870 |
+
"angle": 0,
|
| 871 |
+
"content": "Finally, using the Brain-Score platform (Schrimpf et al., 2018), we benchmark our models against publicly available human behavioral data from the Objectome image set Rajalingham et al. (2018) which has similar image statistics to our HVM IT fitting set (with a total of 24 object categories, only four of which overlap with the training set). As demonstrated in figure 4C, when the Objectome data are filtered down to just the four overlapping categories, our most IT similar models are again the most behaviorally aligned, well above the unfit baseline and control conditions, which remain close to the floor for much of the plot. However, As shown in figure 3D, when considering all 24 object categories in the Objectome dataset, we see that the trend of increasing human behavioral alignment does not hold and our models actually begin to fair worse in terms of human behavioral alignment at higher levels of IT neural similarity. As shown in figure supp A.1, using a linear probe to assess image class information content (measured by classification accuracy on held out representations) reveals that these models are losing class information content for the Objectome image set, which drives the decrease in behavioral alignment, as the model makes more mistakes overall than a human. Similarly, a linear probe analysis reveals minimal loss in class information in the overlapping categories. Thus, we observe that while our method leads to increased human behavioral alignment across different image statistics, it does not currently lead to improved alignment on unseen object categories."
|
| 872 |
+
},
|
| 873 |
+
{
|
| 874 |
+
"type": "title",
|
| 875 |
+
"bbox": [
|
| 876 |
+
0.172,
|
| 877 |
+
0.665,
|
| 878 |
+
0.825,
|
| 879 |
+
0.68
|
| 880 |
+
],
|
| 881 |
+
"angle": 0,
|
| 882 |
+
"content": "3.3 INCREASED ADVERSARIAL ROBUSTNESS IN MODELS THAT BETTER MATCH MACAQUE IT"
|
| 883 |
+
},
|
| 884 |
+
{
|
| 885 |
+
"type": "text",
|
| 886 |
+
"bbox": [
|
| 887 |
+
0.171,
|
| 888 |
+
0.695,
|
| 889 |
+
0.827,
|
| 890 |
+
0.807
|
| 891 |
+
],
|
| 892 |
+
"angle": 0,
|
| 893 |
+
"content": "Finally, we evaluate our models on an array of white box adversarial attacks, to assess if models with higher IT neural similarity scores also have increased adversarial robustness. Like before, we start with a big picture analysis where we consider every evaluation epoch for all optimization conditions considered in figure 2. Again, as demonstrated in figures 4A and 4B, for both HVM images and COCO images, there is a broad though not entirely linear correlation between IT neural similarity and adversarial robustness to PGD \\( L_{\\infty} \\epsilon = 1 / 1020 \\) attacks. Like in the analysis of behavioral alignment, we also see a higher slope on the right side of the plots, where IT neural similarity is the highest, suggesting further improvements could be had if models were pushed to be more IT aligned."
|
| 894 |
+
},
|
| 895 |
+
{
|
| 896 |
+
"type": "text",
|
| 897 |
+
"bbox": [
|
| 898 |
+
0.171,
|
| 899 |
+
0.813,
|
| 900 |
+
0.829,
|
| 901 |
+
0.926
|
| 902 |
+
],
|
| 903 |
+
"angle": 0,
|
| 904 |
+
"content": "In order to get a better sense of the gains in robustness, we measured the adversarial strength accuracy curves for models only trained with HVM image labels, models trained with HVM image labels and IT neural representations, and models adversially trained on HVM labels (PGD \\( L_{\\infty} \\), \\( \\epsilon = 4 / 255 \\)). Figure 5A shows that on held-out HVM images, IT aligned models have increased accuracy across a range of \\( \\epsilon \\) values for both \\( L_{\\infty} \\) and \\( L_{2} \\) norms, though less so than models with explicit adversarial training. However, as shown in figure 5 the same analysis on COCO images demonstrates that adversarial robustness in the IT aligned networks generalizes significantly better on unseen image statistics than the adversially trained models, which lose clean accuracy on COCO images."
|
| 905 |
+
},
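The entry above measures adversarial strength accuracy curves across a range of \( \epsilon \) values. A small evaluation-loop sketch is shown below, reusing pgd_linf from the earlier sketch; the loader interface and epsilon grid are illustrative rather than the paper's exact evaluation protocol.

```python
import torch

def robustness_curve(model, loader, eps_values, device="cpu"):
    """Accuracy vs. attack strength, using pgd_linf from the earlier sketch."""
    curve = []
    for eps in eps_values:
        correct, total = 0, 0
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            # eps == 0 corresponds to clean accuracy.
            x_adv = images if eps == 0 else pgd_linf(model, images, labels, eps=eps)
            preds = model(x_adv).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
        curve.append((eps, correct / total))
    return curve
```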
|
| 906 |
+
{
|
| 907 |
+
"type": "page_number",
|
| 908 |
+
"bbox": [
|
| 909 |
+
0.494,
|
| 910 |
+
0.949,
|
| 911 |
+
0.506,
|
| 912 |
+
0.96
|
| 913 |
+
],
|
| 914 |
+
"angle": 0,
|
| 915 |
+
"content": "7"
|
| 916 |
+
}
|
| 917 |
+
],
|
| 918 |
+
[
|
| 919 |
+
{
|
| 920 |
+
"type": "header",
|
| 921 |
+
"bbox": [
|
| 922 |
+
0.173,
|
| 923 |
+
0.033,
|
| 924 |
+
0.48,
|
| 925 |
+
0.049
|
| 926 |
+
],
|
| 927 |
+
"angle": 0,
|
| 928 |
+
"content": "Published as a conference paper at ICLR 2023"
|
| 929 |
+
},
|
| 930 |
+
{
|
| 931 |
+
"type": "image",
|
| 932 |
+
"bbox": [
|
| 933 |
+
0.172,
|
| 934 |
+
0.101,
|
| 935 |
+
0.337,
|
| 936 |
+
0.247
|
| 937 |
+
],
|
| 938 |
+
"angle": 0,
|
| 939 |
+
"content": null
|
| 940 |
+
},
|
| 941 |
+
{
|
| 942 |
+
"type": "image",
|
| 943 |
+
"bbox": [
|
| 944 |
+
0.338,
|
| 945 |
+
0.101,
|
| 946 |
+
0.498,
|
| 947 |
+
0.247
|
| 948 |
+
],
|
| 949 |
+
"angle": 0,
|
| 950 |
+
"content": null
|
| 951 |
+
},
|
| 952 |
+
{
|
| 953 |
+
"type": "image",
|
| 954 |
+
"bbox": [
|
| 955 |
+
0.499,
|
| 956 |
+
0.102,
|
| 957 |
+
0.659,
|
| 958 |
+
0.247
|
| 959 |
+
],
|
| 960 |
+
"angle": 0,
|
| 961 |
+
"content": null
|
| 962 |
+
},
|
| 963 |
+
{
|
| 964 |
+
"type": "image",
|
| 965 |
+
"bbox": [
|
| 966 |
+
0.66,
|
| 967 |
+
0.102,
|
| 968 |
+
0.822,
|
| 969 |
+
0.247
|
| 970 |
+
],
|
| 971 |
+
"angle": 0,
|
| 972 |
+
"content": null
|
| 973 |
+
},
|
| 974 |
+
{
|
| 975 |
+
"type": "image_caption",
|
| 976 |
+
"bbox": [
|
| 977 |
+
0.171,
|
| 978 |
+
0.25,
|
| 979 |
+
0.828,
|
| 980 |
+
0.404
|
| 981 |
+
],
|
| 982 |
+
"angle": 0,
|
| 983 |
+
"content": "Figure 3: IT neural similarity correlates with behavioral alignment across a variety of optimization conditions and unseen image statistics but not on unseen object categories. A) Held out animal and image IT neural similarity is plotted against human behavioral alignment on the HVM image set at every validation epoch for all neural loss weight conditions, random Gaussian IT target matrix conditions, and image shuffled IT target matrix conditions, in each case with or with and without image classification loss. B) and C) Like in A but for the COCO image set and the Objectome image set Rajalingham et al. (2018) filtered to overlapping categories with the IT training set. D) The behavioral alignment for the full Objectome image set with 20 categories not covered in the IT training is not improved by the IT-alignment procedure and data used here. In all plots, the black cross represents the average base model position, and the heavy blue line is a sliding X, Y average of all conditions merely to visually highlight trends. Five seeds for each condition are plotted."
|
| 984 |
+
},
|
| 985 |
+
{
|
| 986 |
+
"type": "image",
|
| 987 |
+
"bbox": [
|
| 988 |
+
0.177,
|
| 989 |
+
0.42,
|
| 990 |
+
0.481,
|
| 991 |
+
0.663
|
| 992 |
+
],
|
| 993 |
+
"angle": 0,
|
| 994 |
+
"content": null
|
| 995 |
+
},
|
| 996 |
+
{
|
| 997 |
+
"type": "image",
|
| 998 |
+
"bbox": [
|
| 999 |
+
0.506,
|
| 1000 |
+
0.419,
|
| 1001 |
+
0.811,
|
| 1002 |
+
0.663
|
| 1003 |
+
],
|
| 1004 |
+
"angle": 0,
|
| 1005 |
+
"content": null
|
| 1006 |
+
},
|
| 1007 |
+
{
|
| 1008 |
+
"type": "image_footnote",
|
| 1009 |
+
"bbox": [
|
| 1010 |
+
0.178,
|
| 1011 |
+
0.668,
|
| 1012 |
+
0.818,
|
| 1013 |
+
0.681
|
| 1014 |
+
],
|
| 1015 |
+
"angle": 0,
|
| 1016 |
+
"content": "Loss Function Components: IT + Classification Shuffled IT + Classification Random IT + Classification"
|
| 1017 |
+
},
|
| 1018 |
+
{
|
| 1019 |
+
"type": "image_caption",
|
| 1020 |
+
"bbox": [
|
| 1021 |
+
0.171,
|
| 1022 |
+
0.685,
|
| 1023 |
+
0.828,
|
| 1024 |
+
0.797
|
| 1025 |
+
],
|
| 1026 |
+
"angle": 0,
|
| 1027 |
+
"content": "Figure 4: IT neural similarity correlates with improved white box adversarial robustness. A) held out animal and image IT neural similarity is plotted against white box adversarial accuracy (PGD \\(L_{\\infty}\\epsilon = 1 / 1020\\)) on the HVM image set measured across multiple training time points for all neural loss ratio conditions, random Gaussian IT target matrix conditions, and image shuffled IT target matrix conditions. B) Like in A but for COCO images. In both plots, the black cross represents the average base model position, the black X marks a CORnet-S adversarially trained on HVM images, and the heavy blue line is a sliding X, Y average of all conditions merely to visually highlight trends. Five seeds for each condition are plotted."
|
| 1028 |
+
},
|
| 1029 |
+
{
|
| 1030 |
+
"type": "text",
|
| 1031 |
+
"bbox": [
|
| 1032 |
+
0.171,
|
| 1033 |
+
0.827,
|
| 1034 |
+
0.828,
|
| 1035 |
+
0.926
|
| 1036 |
+
],
|
| 1037 |
+
"angle": 0,
|
| 1038 |
+
"content": "Last, we tested the IT neural similarity of our HVM image adversarially trained models and find that they do not follow the general correlation shown in 4 for IT aligned models vs adversarial accuracy. Interestingly, the adversarially trained models are slightly more similar to IT than standard models, but significantly higher than standard models on HVM adversarial accuracy and significantly lower on COCO adversarial accuracy. We take this to indicate that there are multiple possible ways to become robust to adversarial attacks, and that adversarial training does not in general induce the same representations as IT alignment."
|
| 1039 |
+
},
|
| 1040 |
+
{
|
| 1041 |
+
"type": "page_number",
|
| 1042 |
+
"bbox": [
|
| 1043 |
+
0.494,
|
| 1044 |
+
0.949,
|
| 1045 |
+
0.504,
|
| 1046 |
+
0.96
|
| 1047 |
+
],
|
| 1048 |
+
"angle": 0,
|
| 1049 |
+
"content": "8"
|
| 1050 |
+
}
|
| 1051 |
+
],
|
| 1052 |
+
[
|
| 1053 |
+
{
|
| 1054 |
+
"type": "header",
|
| 1055 |
+
"bbox": [
|
| 1056 |
+
0.173,
|
| 1057 |
+
0.033,
|
| 1058 |
+
0.48,
|
| 1059 |
+
0.049
|
| 1060 |
+
],
|
| 1061 |
+
"angle": 0,
|
| 1062 |
+
"content": "Published as a conference paper at ICLR 2023"
|
| 1063 |
+
},
|
| 1064 |
+
{
|
| 1065 |
+
"type": "image",
|
| 1066 |
+
"bbox": [
|
| 1067 |
+
0.189,
|
| 1068 |
+
0.101,
|
| 1069 |
+
0.495,
|
| 1070 |
+
0.256
|
| 1071 |
+
],
|
| 1072 |
+
"angle": 0,
|
| 1073 |
+
"content": null
|
| 1074 |
+
},
|
| 1075 |
+
{
|
| 1076 |
+
"type": "image",
|
| 1077 |
+
"bbox": [
|
| 1078 |
+
0.508,
|
| 1079 |
+
0.1,
|
| 1080 |
+
0.807,
|
| 1081 |
+
0.256
|
| 1082 |
+
],
|
| 1083 |
+
"angle": 0,
|
| 1084 |
+
"content": null
|
| 1085 |
+
},
|
| 1086 |
+
{
|
| 1087 |
+
"type": "image_caption",
|
| 1088 |
+
"bbox": [
|
| 1089 |
+
0.171,
|
| 1090 |
+
0.259,
|
| 1091 |
+
0.828,
|
| 1092 |
+
0.344
|
| 1093 |
+
],
|
| 1094 |
+
"angle": 0,
|
| 1095 |
+
"content": "Figure 5: IT aligned models are more robust than standard models in and out of domain, and more robust than adversarially trained models in out of domain conditions. A) PGD \\( L_{\\infty} \\) and \\( L_{2} \\) strength accuracy curves on HVM images for standard trained networks (green) IT aligned networks (blue) and networks adversarially trained (PGD \\( L_{\\infty} \\epsilon = 4 / 255 \\)) on the IT fitting image labels (orange). B) Like in A but for COCO images. Error shading represents bootstrapped \\( 95\\% \\) confidence intervals over five training seeds."
|
| 1096 |
+
},
|
| 1097 |
+
{
|
| 1098 |
+
"type": "title",
|
| 1099 |
+
"bbox": [
|
| 1100 |
+
0.172,
|
| 1101 |
+
0.387,
|
| 1102 |
+
0.313,
|
| 1103 |
+
0.403
|
| 1104 |
+
],
|
| 1105 |
+
"angle": 0,
|
| 1106 |
+
"content": "4 DISCUSSION"
|
| 1107 |
+
},
|
| 1108 |
+
{
|
| 1109 |
+
"type": "text",
|
| 1110 |
+
"bbox": [
|
| 1111 |
+
0.171,
|
| 1112 |
+
0.432,
|
| 1113 |
+
0.828,
|
| 1114 |
+
0.613
|
| 1115 |
+
],
|
| 1116 |
+
"angle": 0,
|
| 1117 |
+
"content": "Building on prior research in constraining visual object recognition models with early stage visual representations (Li et al., 2019; Dapello et al., 2020; Federer et al., 2020; Safarani et al., 2021), we report here that it is possible to better align the late stage \"IT representations\" of an object recognition model with the corresponding primate IT representations, and that this improved IT alignment leads to increased human level behavioral alignment and increased adversarial robustness. In particular, the results show that 1) the method used here is able to develop better neuroscientific models by improving IT alignment in object recognition models even on held out animals and image statistics not seen by the model during the IT neural alignment training procedure, 2) models that are more aligned with macaque IT also have better alignment with human behavioral error patterns across unseen (not shown during training) image statistics but not for unseen object categories, and 3) models more aligned with macaque IT are more robust to adversarial attacks even on unseen image statistics. Interestingly however, we observed that being more adversially robust (through adversarial training) does not lead to significantly more IT neural similarity."
|
| 1118 |
+
},
|
| 1119 |
+
{
|
| 1120 |
+
"type": "text",
|
| 1121 |
+
"bbox": [
|
| 1122 |
+
0.171,
|
| 1123 |
+
0.619,
|
| 1124 |
+
0.828,
|
| 1125 |
+
0.925
|
| 1126 |
+
],
|
| 1127 |
+
"angle": 0,
|
| 1128 |
+
"content": "These empirical observations raise a number of important questions for future research. While there are clear gains in robustness from our procedure, we note that the overall magnitude is relatively small. How much adversarial robustness could we expect to gain, if we perfectly fit IT? This question hinges on how adversarily robust primate behavior really is, an active area of research (Guo et al., 2022; Elsayed et al., 2018; Yuan et al., 2020). Guo et al. (2022) is particularly interesting with respect to our work – while they find that individual neurons in IT are not particularly robust when compared to individual neurons in adversarily trained networks, our work here indicates that population geometry, not individual neuronal sensitivity, might play a critical role in robustness. We find it intriguing that aligning IT representations in our models to empirically measured macaque IT responses has no effect or even a negative effect on behavioral alignment for objects not present in the IT fitting image-set, a noteworthy limitation in our approach. We speculate that this is due to the small range of categories covered in our IT training set, which limits the span of neural representational space that those experiments were able to sample. In that regard, it would be informative to get a sense of the scaling laws (Kaplan et al., 2020) for how much neural data (in terms of images, neurons, trials, or object categories) needs to be absorbed into a model before it behaves in a truly general more human like fashion for any instance of image categories or statistics. Other avenues for further exploration include comparisons of behavioral alignment on a more diverse panel of benchmarks Bowers et al. (2022), different alignment metrics to optimize, such as deep canonical correlationPirlot et al. (2022), or including representation stochasticity as in Dapello et al. (2020). Overall, our results provide further support for the framework of constraining and optimizing models with empirical data from the primate brain to make them more robust and well aligned with human behavior (Sinz et al., 2019)."
|
| 1129 |
+
},
|
| 1130 |
+
{
|
| 1131 |
+
"type": "page_number",
|
| 1132 |
+
"bbox": [
|
| 1133 |
+
0.494,
|
| 1134 |
+
0.949,
|
| 1135 |
+
0.506,
|
| 1136 |
+
0.96
|
| 1137 |
+
],
|
| 1138 |
+
"angle": 0,
|
| 1139 |
+
"content": "9"
|
| 1140 |
+
}
|
| 1141 |
+
],
|
| 1142 |
+
[
|
| 1143 |
+
{
|
| 1144 |
+
"type": "header",
|
| 1145 |
+
"bbox": [
|
| 1146 |
+
0.173,
|
| 1147 |
+
0.033,
|
| 1148 |
+
0.48,
|
| 1149 |
+
0.049
|
| 1150 |
+
],
|
| 1151 |
+
"angle": 0,
|
| 1152 |
+
"content": "Published as a conference paper at ICLR 2023"
|
| 1153 |
+
},
|
| 1154 |
+
{
|
| 1155 |
+
"type": "title",
|
| 1156 |
+
"bbox": [
|
| 1157 |
+
0.174,
|
| 1158 |
+
0.103,
|
| 1159 |
+
0.289,
|
| 1160 |
+
0.118
|
| 1161 |
+
],
|
| 1162 |
+
"angle": 0,
|
| 1163 |
+
"content": "REFERENCES"
|
| 1164 |
+
},
|
| 1165 |
+
{
|
| 1166 |
+
"type": "ref_text",
|
| 1167 |
+
"bbox": [
|
| 1168 |
+
0.173,
|
| 1169 |
+
0.126,
|
| 1170 |
+
0.826,
|
| 1171 |
+
0.157
|
| 1172 |
+
],
|
| 1173 |
+
"angle": 0,
|
| 1174 |
+
"content": "P. Bashivan, K. Kar, and J. J. DiCarlo. Neural population control via deep image synthesis. Science, 364(6439), May 2019."
|
| 1175 |
+
},
|
| 1176 |
+
{
|
| 1177 |
+
"type": "ref_text",
|
| 1178 |
+
"bbox": [
|
| 1179 |
+
0.173,
|
| 1180 |
+
0.164,
|
| 1181 |
+
0.827,
|
| 1182 |
+
0.207
|
| 1183 |
+
],
|
| 1184 |
+
"angle": 0,
|
| 1185 |
+
"content": "J. Bowers, G. Malhotra, M. Dujmović, M. Llera, C. Tsvetkov, V. Biscione, G. Puebla, F. Adolfi, J. Hummel, R. Heaton, B. Evans, J. Mitchell, and R. Blything. Deep problems with neural network models of human vision. 04 2022. doi: 10.31234/osf.io/5zf4s."
|
| 1186 |
+
},
|
| 1187 |
+
{
|
| 1188 |
+
"type": "ref_text",
|
| 1189 |
+
"bbox": [
|
| 1190 |
+
0.174,
|
| 1191 |
+
0.215,
|
| 1192 |
+
0.825,
|
| 1193 |
+
0.247
|
| 1194 |
+
],
|
| 1195 |
+
"angle": 0,
|
| 1196 |
+
"content": "W. Brendel, J. Rauber, M. Kümmerer, I. Ustyuzhaninov, and M. Bethge. Accurate, reliable and fast robustness evaluation. July 2019."
|
| 1197 |
+
},
|
| 1198 |
+
{
|
| 1199 |
+
"type": "ref_text",
|
| 1200 |
+
"bbox": [
|
| 1201 |
+
0.174,
|
| 1202 |
+
0.254,
|
| 1203 |
+
0.825,
|
| 1204 |
+
0.283
|
| 1205 |
+
],
|
| 1206 |
+
"angle": 0,
|
| 1207 |
+
"content": "J. Buckman, A. Roy, C. Raffel, and I. Goodfellow. Thermometer encoding: One hot way to resist adversarial examples. Feb. 2018."
|
| 1208 |
+
},
|
| 1209 |
+
{
|
| 1210 |
+
"type": "ref_text",
|
| 1211 |
+
"bbox": [
|
| 1212 |
+
0.173,
|
| 1213 |
+
0.292,
|
| 1214 |
+
0.785,
|
| 1215 |
+
0.309
|
| 1216 |
+
],
|
| 1217 |
+
"angle": 0,
|
| 1218 |
+
"content": "N. Carlini and D. Wagner. Towards evaluating the robustness of neural networks. Aug. 2016."
|
| 1219 |
+
},
|
| 1220 |
+
{
|
| 1221 |
+
"type": "ref_text",
|
| 1222 |
+
"bbox": [
|
| 1223 |
+
0.173,
|
| 1224 |
+
0.316,
|
| 1225 |
+
0.825,
|
| 1226 |
+
0.346
|
| 1227 |
+
],
|
| 1228 |
+
"angle": 0,
|
| 1229 |
+
"content": "P.-Y. Chen, Y. Sharma, H. Zhang, J. Yi, and C.-J. Hsieh. EAD: Elastic-Net attacks to deep neural networks via adversarial examples. Sept. 2017."
|
| 1230 |
+
},
|
| 1231 |
+
{
|
| 1232 |
+
"type": "ref_text",
|
| 1233 |
+
"bbox": [
|
| 1234 |
+
0.173,
|
| 1235 |
+
0.354,
|
| 1236 |
+
0.825,
|
| 1237 |
+
0.397
|
| 1238 |
+
],
|
| 1239 |
+
"angle": 0,
|
| 1240 |
+
"content": "C. C. J. J. D. Daniel L. Yamins, Ha Hong. Hierarchical modular optimization of convolutional networks achieves representations similar to macaque IT and human ventral stream. Advances in Neural Information Processing Systems 26 (NIPS 2013), 2013."
|
| 1241 |
+
},
|
| 1242 |
+
{
|
| 1243 |
+
"type": "ref_text",
|
| 1244 |
+
"bbox": [
|
| 1245 |
+
0.173,
|
| 1246 |
+
0.405,
|
| 1247 |
+
0.825,
|
| 1248 |
+
0.449
|
| 1249 |
+
],
|
| 1250 |
+
"angle": 0,
|
| 1251 |
+
"content": "J. Dapello, T. Marques, M. Schrimpf, F. Geiger, D. D. Cox, and J. J. DiCarlo. Simulating a primary visual cortex at the front of CNNs improves robustness to image perturbations. page Advances in Neural Information Processing Systems 33 (NeurIPS 2020). Neurips, June 2020."
|
| 1252 |
+
},
|
| 1253 |
+
{
|
| 1254 |
+
"type": "ref_text",
|
| 1255 |
+
"bbox": [
|
| 1256 |
+
0.173,
|
| 1257 |
+
0.457,
|
| 1258 |
+
0.825,
|
| 1259 |
+
0.487
|
| 1260 |
+
],
|
| 1261 |
+
"angle": 0,
|
| 1262 |
+
"content": "N. Das, M. Shanbhogue, S.-T. Chen, F. Hohman, L. Chen, M. E. Kounavis, and D. H. Chau. Keeping the bad guys out: Protecting and vaccinating deep learning with JPEG compression. May 2017."
|
| 1263 |
+
},
|
| 1264 |
+
{
|
| 1265 |
+
"type": "ref_text",
|
| 1266 |
+
"bbox": [
|
| 1267 |
+
0.173,
|
| 1268 |
+
0.495,
|
| 1269 |
+
0.825,
|
| 1270 |
+
0.538
|
| 1271 |
+
],
|
| 1272 |
+
"angle": 0,
|
| 1273 |
+
"content": "J. Deng, W. Dong, R. Socher, L. Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255, June 2009."
|
| 1274 |
+
},
|
| 1275 |
+
{
|
| 1276 |
+
"type": "ref_text",
|
| 1277 |
+
"bbox": [
|
| 1278 |
+
0.173,
|
| 1279 |
+
0.547,
|
| 1280 |
+
0.825,
|
| 1281 |
+
0.577
|
| 1282 |
+
],
|
| 1283 |
+
"angle": 0,
|
| 1284 |
+
"content": "G. S. Dhillon, K. Azizzadenesheli, Z. C. Lipton, J. D. Bernstein, J. Kossaifi, A. Khanna, and A. Anandkumar. Stochastic activation pruning for robust adversarial defense. Feb. 2018."
|
| 1285 |
+
},
|
| 1286 |
+
{
|
| 1287 |
+
"type": "ref_text",
|
| 1288 |
+
"bbox": [
|
| 1289 |
+
0.173,
|
| 1290 |
+
0.585,
|
| 1291 |
+
0.825,
|
| 1292 |
+
0.614
|
| 1293 |
+
],
|
| 1294 |
+
"angle": 0,
|
| 1295 |
+
"content": "J. J. DiCarlo, D. Zoccolan, and N. C. Rust. How does the brain solve visual object recognition? Neuron, 73(3):415-434, Feb. 2012."
|
| 1296 |
+
},
|
| 1297 |
+
{
|
| 1298 |
+
"type": "ref_text",
|
| 1299 |
+
"bbox": [
|
| 1300 |
+
0.173,
|
| 1301 |
+
0.623,
|
| 1302 |
+
0.827,
|
| 1303 |
+
0.667
|
| 1304 |
+
],
|
| 1305 |
+
"angle": 0,
|
| 1306 |
+
"content": "A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. Oct. 2020."
|
| 1307 |
+
},
|
| 1308 |
+
{
|
| 1309 |
+
"type": "ref_text",
|
| 1310 |
+
"bbox": [
|
| 1311 |
+
0.173,
|
| 1312 |
+
0.675,
|
| 1313 |
+
0.827,
|
| 1314 |
+
0.732
|
| 1315 |
+
],
|
| 1316 |
+
"angle": 0,
|
| 1317 |
+
"content": "G. Elsayed, S. Shankar, B. Cheung, N. Papernot, A. Kurakin, I. Goodfellow, and J. Sohl-Dickstein. Adversarial examples that fool both computer vision and Time-Limited humans. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 3910-3920. Curran Associates, Inc., 2018."
|
| 1318 |
+
},
|
| 1319 |
+
{
|
| 1320 |
+
"type": "ref_text",
|
| 1321 |
+
"bbox": [
|
| 1322 |
+
0.173,
|
| 1323 |
+
0.74,
|
| 1324 |
+
0.827,
|
| 1325 |
+
0.77
|
| 1326 |
+
],
|
| 1327 |
+
"angle": 0,
|
| 1328 |
+
"content": "W. Falcon et al. Pytorch lightning. *GitHub. Note: https://github.com/PyTorchLightning/pytorch-lightning*, 3, 2019."
|
| 1329 |
+
},
|
| 1330 |
+
{
|
| 1331 |
+
"type": "ref_text",
|
| 1332 |
+
"bbox": [
|
| 1333 |
+
0.173,
|
| 1334 |
+
0.778,
|
| 1335 |
+
0.825,
|
| 1336 |
+
0.808
|
| 1337 |
+
],
|
| 1338 |
+
"angle": 0,
|
| 1339 |
+
"content": "C. Federer, H. Xu, A. Fyshe, and J. Zylberberg. Improved object recognition using neural networks trained to mimic the brain's statistical properties. *Neural Netw.*, 2020."
|
| 1340 |
+
},
|
| 1341 |
+
{
|
| 1342 |
+
"type": "ref_text",
|
| 1343 |
+
"bbox": [
|
| 1344 |
+
0.173,
|
| 1345 |
+
0.816,
|
| 1346 |
+
0.827,
|
| 1347 |
+
0.874
|
| 1348 |
+
],
|
| 1349 |
+
"angle": 0,
|
| 1350 |
+
"content": "F. Geiger, M. Schrimpf, T. Marques, and J. J. DiCarlo. Wiring up vision: Minimizing supervised synaptic updates needed to produce a primate ventral stream. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=g1SzIRLQXMM."
|
| 1351 |
+
},
|
| 1352 |
+
{
|
| 1353 |
+
"type": "ref_text",
|
| 1354 |
+
"bbox": [
|
| 1355 |
+
0.173,
|
| 1356 |
+
0.882,
|
| 1357 |
+
0.827,
|
| 1358 |
+
0.925
|
| 1359 |
+
],
|
| 1360 |
+
"angle": 0,
|
| 1361 |
+
"content": "R. Geirhos, K. Narayanappa, B. Mitzkus, T. Thieringer, M. Bethge, F. A. Wichmann, and W. Brendel. Partial success in closing the gap between human and machine vision. Adv. Neural Inf. Process. Syst., 34, 2021."
|
| 1362 |
+
},
|
| 1363 |
+
{
|
| 1364 |
+
"type": "list",
|
| 1365 |
+
"bbox": [
|
| 1366 |
+
0.173,
|
| 1367 |
+
0.126,
|
| 1368 |
+
0.827,
|
| 1369 |
+
0.925
|
| 1370 |
+
],
|
| 1371 |
+
"angle": 0,
|
| 1372 |
+
"content": null
|
| 1373 |
+
},
|
| 1374 |
+
{
|
| 1375 |
+
"type": "page_number",
|
| 1376 |
+
"bbox": [
|
| 1377 |
+
0.49,
|
| 1378 |
+
0.948,
|
| 1379 |
+
0.509,
|
| 1380 |
+
0.961
|
| 1381 |
+
],
|
| 1382 |
+
"angle": 0,
|
| 1383 |
+
"content": "10"
|
| 1384 |
+
}
|
| 1385 |
+
],
|
| 1386 |
+
[
|
| 1387 |
+
{
|
| 1388 |
+
"type": "header",
|
| 1389 |
+
"bbox": [
|
| 1390 |
+
0.173,
|
| 1391 |
+
0.033,
|
| 1392 |
+
0.48,
|
| 1393 |
+
0.049
|
| 1394 |
+
],
|
| 1395 |
+
"angle": 0,
|
| 1396 |
+
"content": "Published as a conference paper at ICLR 2023"
|
| 1397 |
+
},
|
| 1398 |
+
{
|
| 1399 |
+
"type": "ref_text",
|
| 1400 |
+
"bbox": [
|
| 1401 |
+
0.173,
|
| 1402 |
+
0.103,
|
| 1403 |
+
0.826,
|
| 1404 |
+
0.133
|
| 1405 |
+
],
|
| 1406 |
+
"angle": 0,
|
| 1407 |
+
"content": "J. Guerguiev, T. P. Lillicrap, and B. A. Richards. Towards deep learning with segregated dendrites. *Elite*, 6, Dec. 2017."
|
| 1408 |
+
},
|
| 1409 |
+
{
|
| 1410 |
+
"type": "ref_text",
|
| 1411 |
+
"bbox": [
|
| 1412 |
+
0.173,
|
| 1413 |
+
0.141,
|
| 1414 |
+
0.825,
|
| 1415 |
+
0.17
|
| 1416 |
+
],
|
| 1417 |
+
"angle": 0,
|
| 1418 |
+
"content": "C. Guo, M. Rana, M. Cisse, and L. van der Maaten. Countering adversarial images using input transformations. Feb. 2018."
|
| 1419 |
+
},
|
| 1420 |
+
{
|
| 1421 |
+
"type": "ref_text",
|
| 1422 |
+
"bbox": [
|
| 1423 |
+
0.173,
|
| 1424 |
+
0.178,
|
| 1425 |
+
0.826,
|
| 1426 |
+
0.221
|
| 1427 |
+
],
|
| 1428 |
+
"angle": 0,
|
| 1429 |
+
"content": "C. Guo, M. J. Lee, G. Leclerc, J. Dapello, Y. Rao, A. Madry, and J. J. DiCarlo. Adversarily trained neural representations may already be as robust as corresponding biological neural representations. June 2022."
|
| 1430 |
+
},
|
| 1431 |
+
{
|
| 1432 |
+
"type": "ref_text",
|
| 1433 |
+
"bbox": [
|
| 1434 |
+
0.173,
|
| 1435 |
+
0.23,
|
| 1436 |
+
0.824,
|
| 1437 |
+
0.26
|
| 1438 |
+
],
|
| 1439 |
+
"angle": 0,
|
| 1440 |
+
"content": "H. Hasani, M. Soleymani, and H. Aghajan. Surround Modulation: A Bio-inspired Connectivity Structure for Convolutional Neural Networks. NeurIPS, (NeurIPS):15877-15888, 2019."
|
| 1441 |
+
},
|
| 1442 |
+
{
|
| 1443 |
+
"type": "ref_text",
|
| 1444 |
+
"bbox": [
|
| 1445 |
+
0.173,
|
| 1446 |
+
0.268,
|
| 1447 |
+
0.827,
|
| 1448 |
+
0.297
|
| 1449 |
+
],
|
| 1450 |
+
"angle": 0,
|
| 1451 |
+
"content": "D. Hassabis, D. Kumaran, C. Summerfield, and M. Botvinick. Neuroscience-Inspired Artificial Intelligence. *Neuron*, 95(2):245–258, 2017. ISSN 10974199. doi: 10.1016/j.neuron.2017.06.011."
|
| 1452 |
+
},
|
| 1453 |
+
{
|
| 1454 |
+
"type": "ref_text",
|
| 1455 |
+
"bbox": [
|
| 1456 |
+
0.173,
|
| 1457 |
+
0.305,
|
| 1458 |
+
0.827,
|
| 1459 |
+
0.348
|
| 1460 |
+
],
|
| 1461 |
+
"angle": 0,
|
| 1462 |
+
"content": "K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. Proceedings of the IEEE International Conference on Computer Vision, 2015 Inter:1026-1034, 2015a. ISSN 15505499. doi: 10.1109/ICCV.2015.123."
|
| 1463 |
+
},
|
| 1464 |
+
{
|
| 1465 |
+
"type": "ref_text",
|
| 1466 |
+
"bbox": [
|
| 1467 |
+
0.173,
|
| 1468 |
+
0.356,
|
| 1469 |
+
0.805,
|
| 1470 |
+
0.372
|
| 1471 |
+
],
|
| 1472 |
+
"angle": 0,
|
| 1473 |
+
"content": "K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. Dec. 2015b."
|
| 1474 |
+
},
|
| 1475 |
+
{
|
| 1476 |
+
"type": "ref_text",
|
| 1477 |
+
"bbox": [
|
| 1478 |
+
0.173,
|
| 1479 |
+
0.38,
|
| 1480 |
+
0.824,
|
| 1481 |
+
0.41
|
| 1482 |
+
],
|
| 1483 |
+
"angle": 0,
|
| 1484 |
+
"content": "H. Hong, D. L. K. Yamins, N. J. Majaj, and J. J. DiCarlo. Explicit information for category-orthogonal object properties increases along the ventral stream. Nat. Neurosci., 19(4):613-622, Apr. 2016."
|
| 1485 |
+
},
|
| 1486 |
+
{
|
| 1487 |
+
"type": "ref_text",
|
| 1488 |
+
"bbox": [
|
| 1489 |
+
0.173,
|
| 1490 |
+
0.418,
|
| 1491 |
+
0.826,
|
| 1492 |
+
0.447
|
| 1493 |
+
],
|
| 1494 |
+
"angle": 0,
|
| 1495 |
+
"content": "J. Kaplan, S. McCandlish, T. Henighan, T. B. Brown, B. Chess, R. Child, S. Gray, A. Radford, J. Wu, and D. Amodei. Scaling laws for neural language models. Jan. 2020."
|
| 1496 |
+
},
|
| 1497 |
+
{
|
| 1498 |
+
"type": "ref_text",
|
| 1499 |
+
"bbox": [
|
| 1500 |
+
0.173,
|
| 1501 |
+
0.455,
|
| 1502 |
+
0.824,
|
| 1503 |
+
0.484
|
| 1504 |
+
],
|
| 1505 |
+
"angle": 0,
|
| 1506 |
+
"content": "K. Kar and J. J. DiCarlo. Fast recurrent processing via ventrolateral prefrontal cortex is needed by the primate ventral stream for robust core visual object recognition. Neuron, 109(1):164-176, 2021."
|
| 1507 |
+
},
|
| 1508 |
+
{
|
| 1509 |
+
"type": "ref_text",
|
| 1510 |
+
"bbox": [
|
| 1511 |
+
0.173,
|
| 1512 |
+
0.493,
|
| 1513 |
+
0.826,
|
| 1514 |
+
0.535
|
| 1515 |
+
],
|
| 1516 |
+
"angle": 0,
|
| 1517 |
+
"content": "K. Kar, J. Kubilius, K. Schmidt, E. B. Issa, and J. J. DiCarlo. Evidence that recurrent circuits are critical to the ventral stream's execution of core object recognition behavior. Nature neuroscience, 22(6):974-983, 2019."
|
| 1518 |
+
},
|
| 1519 |
+
{
|
| 1520 |
+
"type": "ref_text",
|
| 1521 |
+
"bbox": [
|
| 1522 |
+
0.173,
|
| 1523 |
+
0.544,
|
| 1524 |
+
0.824,
|
| 1525 |
+
0.573
|
| 1526 |
+
],
|
| 1527 |
+
"angle": 0,
|
| 1528 |
+
"content": "S.-M. Khaligh-Razavi and N. Kriegeskorte. Deep supervised, but not unsupervised, models may explain IT cortical representation. PLoS Comput. Biol., 10(11):e1003915, Nov. 2014."
|
| 1529 |
+
},
|
| 1530 |
+
{
|
| 1531 |
+
"type": "ref_text",
|
| 1532 |
+
"bbox": [
|
| 1533 |
+
0.173,
|
| 1534 |
+
0.582,
|
| 1535 |
+
0.824,
|
| 1536 |
+
0.611
|
| 1537 |
+
],
|
| 1538 |
+
"angle": 0,
|
| 1539 |
+
"content": "S. Kornblith, M. Norouzi, H. Lee, and G. Hinton. Similarity of neural network representations revisited. May 2019."
|
| 1540 |
+
},
|
| 1541 |
+
{
|
| 1542 |
+
"type": "ref_text",
|
| 1543 |
+
"bbox": [
|
| 1544 |
+
0.173,
|
| 1545 |
+
0.619,
|
| 1546 |
+
0.826,
|
| 1547 |
+
0.662
|
| 1548 |
+
],
|
| 1549 |
+
"angle": 0,
|
| 1550 |
+
"content": "A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 1097-1105. Curran Associates, Inc., 2012."
|
| 1551 |
+
},
|
| 1552 |
+
{
|
| 1553 |
+
"type": "ref_text",
|
| 1554 |
+
"bbox": [
|
| 1555 |
+
0.173,
|
| 1556 |
+
0.671,
|
| 1557 |
+
0.826,
|
| 1558 |
+
0.714
|
| 1559 |
+
],
|
| 1560 |
+
"angle": 0,
|
| 1561 |
+
"content": "J. Kubilius, M. Schrimpf, K. Kar, H. Hong, N. J. Majaj, R. Rajalingham, E. B. Issa, P. Bashivan, J. Prescott-Roy, K. Schmidt, A. Nayebi, D. Bear, D. L. K. Yamins, and J. J. DiCarlo. Brain-Like object recognition with High-Performing shallow recurrent ANNs. Sept. 2019."
|
| 1562 |
+
},
|
| 1563 |
+
{
|
| 1564 |
+
"type": "ref_text",
|
| 1565 |
+
"bbox": [
|
| 1566 |
+
0.173,
|
| 1567 |
+
0.722,
|
| 1568 |
+
0.826,
|
| 1569 |
+
0.751
|
| 1570 |
+
],
|
| 1571 |
+
"angle": 0,
|
| 1572 |
+
"content": "Z. Li, W. Brendel, E. Y. Walker, E. Cobos, T. Muhammad, J. Reimer, M. Bethge, F. H. Sinz, X. Pitkow, and A. S. Tolias. Learning from brains how to regularize machines. Nov. 2019."
|
| 1573 |
+
},
|
| 1574 |
+
{
|
| 1575 |
+
"type": "ref_text",
|
| 1576 |
+
"bbox": [
|
| 1577 |
+
0.173,
|
| 1578 |
+
0.759,
|
| 1579 |
+
0.826,
|
| 1580 |
+
0.789
|
| 1581 |
+
],
|
| 1582 |
+
"angle": 0,
|
| 1583 |
+
"content": "T.-Y. Lin, M. Maire, S. Belongie, L. Bourdev, R. Girshick, J. Hays, P. Perona, D. Ramanan, C. Lawrence Zitnick, and P. Dollar. Microsoft COCO: Common objects in context. May 2014."
|
| 1584 |
+
},
|
| 1585 |
+
{
|
| 1586 |
+
"type": "ref_text",
|
| 1587 |
+
"bbox": [
|
| 1588 |
+
0.173,
|
| 1589 |
+
0.797,
|
| 1590 |
+
0.826,
|
| 1591 |
+
0.826
|
| 1592 |
+
],
|
| 1593 |
+
"angle": 0,
|
| 1594 |
+
"content": "G. W. Lindsay and K. D. Miller. How biological attention mechanisms improve task performance in a large-scale visual system model. *Elite*, 7, Oct. 2018."
|
| 1595 |
+
},
|
| 1596 |
+
{
|
| 1597 |
+
"type": "ref_text",
|
| 1598 |
+
"bbox": [
|
| 1599 |
+
0.173,
|
| 1600 |
+
0.834,
|
| 1601 |
+
0.826,
|
| 1602 |
+
0.863
|
| 1603 |
+
],
|
| 1604 |
+
"angle": 0,
|
| 1605 |
+
"content": "X. Liu, M. Cheng, H. Zhang, and C.-J. Hsieh. Towards robust neural networks via random self-ensemble. Dec. 2017."
|
| 1606 |
+
},
|
| 1607 |
+
{
|
| 1608 |
+
"type": "ref_text",
|
| 1609 |
+
"bbox": [
|
| 1610 |
+
0.173,
|
| 1611 |
+
0.872,
|
| 1612 |
+
0.824,
|
| 1613 |
+
0.887
|
| 1614 |
+
],
|
| 1615 |
+
"angle": 0,
|
| 1616 |
+
"content": "Z. Liu, H. Mao, C.-Y. Wu, C. Feichtenhofer, T. Darrell, and S. Xie. A convnet for the 2020s. 2022."
|
| 1617 |
+
},
|
| 1618 |
+
{
|
| 1619 |
+
"type": "ref_text",
|
| 1620 |
+
"bbox": [
|
| 1621 |
+
0.173,
|
| 1622 |
+
0.896,
|
| 1623 |
+
0.824,
|
| 1624 |
+
0.925
|
| 1625 |
+
],
|
| 1626 |
+
"angle": 0,
|
| 1627 |
+
"content": "W. Lotter, G. Kreiman, and D. Cox. Deep predictive coding networks for video prediction and unsupervised learning. May 2016."
|
| 1628 |
+
},
|
| 1629 |
+
{
|
| 1630 |
+
"type": "list",
|
| 1631 |
+
"bbox": [
|
| 1632 |
+
0.173,
|
| 1633 |
+
0.103,
|
| 1634 |
+
0.827,
|
| 1635 |
+
0.925
|
| 1636 |
+
],
|
| 1637 |
+
"angle": 0,
|
| 1638 |
+
"content": null
|
| 1639 |
+
},
|
| 1640 |
+
{
|
| 1641 |
+
"type": "page_number",
|
| 1642 |
+
"bbox": [
|
| 1643 |
+
0.49,
|
| 1644 |
+
0.948,
|
| 1645 |
+
0.507,
|
| 1646 |
+
0.96
|
| 1647 |
+
],
|
| 1648 |
+
"angle": 0,
|
| 1649 |
+
"content": "11"
|
| 1650 |
+
}
|
| 1651 |
+
],
|
| 1652 |
+
[
|
| 1653 |
+
{
|
| 1654 |
+
"type": "header",
|
| 1655 |
+
"bbox": [
|
| 1656 |
+
0.173,
|
| 1657 |
+
0.033,
|
| 1658 |
+
0.48,
|
| 1659 |
+
0.049
|
| 1660 |
+
],
|
| 1661 |
+
"angle": 0,
|
| 1662 |
+
"content": "Published as a conference paper at ICLR 2023"
|
| 1663 |
+
},
|
| 1664 |
+
{
|
| 1665 |
+
"type": "ref_text",
|
| 1666 |
+
"bbox": [
|
| 1667 |
+
0.173,
|
| 1668 |
+
0.103,
|
| 1669 |
+
0.826,
|
| 1670 |
+
0.134
|
| 1671 |
+
],
|
| 1672 |
+
"angle": 0,
|
| 1673 |
+
"content": "A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. Towards deep learning models resistant to adversarial attacks. June 2017."
|
| 1674 |
+
},
|
| 1675 |
+
{
|
| 1676 |
+
"type": "ref_text",
|
| 1677 |
+
"bbox": [
|
| 1678 |
+
0.173,
|
| 1679 |
+
0.14,
|
| 1680 |
+
0.827,
|
| 1681 |
+
0.185
|
| 1682 |
+
],
|
| 1683 |
+
"angle": 0,
|
| 1684 |
+
"content": "N. J. Majaj, H. Hong, E. A. Solomon, and J. J. DiCarlo. Simple learned weighted sums of inferior temporal neuronal firing rates accurately predict human core object recognition performance. *J. Neurosci.*, 35(39):13402-13418, Sept. 2015."
|
| 1685 |
+
},
|
| 1686 |
+
{
|
| 1687 |
+
"type": "ref_text",
|
| 1688 |
+
"bbox": [
|
| 1689 |
+
0.173,
|
| 1690 |
+
0.191,
|
| 1691 |
+
0.825,
|
| 1692 |
+
0.223
|
| 1693 |
+
],
|
| 1694 |
+
"angle": 0,
|
| 1695 |
+
"content": "A. H. Marblestone, G. Wayne, and K. P. Kording. Toward an integration of deep learning and neuroscience. Front. Comput. Neurosci., 10:94, Sept. 2016."
|
| 1696 |
+
},
|
| 1697 |
+
{
|
| 1698 |
+
"type": "ref_text",
|
| 1699 |
+
"bbox": [
|
| 1700 |
+
0.173,
|
| 1701 |
+
0.228,
|
| 1702 |
+
0.825,
|
| 1703 |
+
0.272
|
| 1704 |
+
],
|
| 1705 |
+
"angle": 0,
|
| 1706 |
+
"content": "C. Michaelis, B. Mitzkus, R. Geirhos, E. Rusak, O. Bringmann, A. S. Ecker, M. Bethge, and W. Brendel. Benchmarking Robustness in Object Detection: Autonomous Driving when Winter is Coming. pages 1-23, 2019. URL http://arxiv.org/abs/1907.07484."
|
| 1707 |
+
},
|
| 1708 |
+
{
|
| 1709 |
+
"type": "ref_text",
|
| 1710 |
+
"bbox": [
|
| 1711 |
+
0.173,
|
| 1712 |
+
0.279,
|
| 1713 |
+
0.827,
|
| 1714 |
+
0.31
|
| 1715 |
+
],
|
| 1716 |
+
"angle": 0,
|
| 1717 |
+
"content": "A. Nayebi and S. Ganguli. Biologically inspired protection of deep networks from adversarial attacks. Mar. 2017."
|
| 1718 |
+
},
|
| 1719 |
+
{
|
| 1720 |
+
"type": "ref_text",
|
| 1721 |
+
"bbox": [
|
| 1722 |
+
0.173,
|
| 1723 |
+
0.316,
|
| 1724 |
+
0.827,
|
| 1725 |
+
0.36
|
| 1726 |
+
],
|
| 1727 |
+
"angle": 0,
|
| 1728 |
+
"content": "M.-I. Nicolae, M. Sinn, M. N. Tran, B. Buesser, A. Rawat, M. Wistuba, V. Zantedeschi, N. Baracaldo, B. Chen, H. Ludwig, I. Molloy, and B. Edwards. Adversarial robustness toolbox v1.2.0. CoRR, 1807.01069, 2018. URL https://arxiv.org/pdf/1807.01069."
|
| 1729 |
+
},
|
| 1730 |
+
{
|
| 1731 |
+
"type": "ref_text",
|
| 1732 |
+
"bbox": [
|
| 1733 |
+
0.173,
|
| 1734 |
+
0.367,
|
| 1735 |
+
0.827,
|
| 1736 |
+
0.398
|
| 1737 |
+
],
|
| 1738 |
+
"angle": 0,
|
| 1739 |
+
"content": "A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer. Automatic differentiation in PyTorch. Oct. 2017."
|
| 1740 |
+
},
|
| 1741 |
+
{
|
| 1742 |
+
"type": "ref_text",
|
| 1743 |
+
"bbox": [
|
| 1744 |
+
0.173,
|
| 1745 |
+
0.404,
|
| 1746 |
+
0.827,
|
| 1747 |
+
0.435
|
| 1748 |
+
],
|
| 1749 |
+
"angle": 0,
|
| 1750 |
+
"content": "C. Pirlot, R. Gerum, C. Efird, J. Zylberberg, and A. Fyshe. Improving the accuracy and robustness of cnns using a deep cca neural data regularizer. 09 2022. doi: 10.48550/arXiv.2209.02582."
|
| 1751 |
+
},
|
| 1752 |
+
{
|
| 1753 |
+
"type": "ref_text",
|
| 1754 |
+
"bbox": [
|
| 1755 |
+
0.173,
|
| 1756 |
+
0.442,
|
| 1757 |
+
0.829,
|
| 1758 |
+
0.498
|
| 1759 |
+
],
|
| 1760 |
+
"angle": 0,
|
| 1761 |
+
"content": "R. Rajalingham, K. Schmidt, and J. J. DiCarlo. Comparison of Object Recognition Behavior in Human and Monkey. Journal of Neuroscience, 35(35):12127-12136, 2015. ISSN 0270-6474. doi: 10.1523/JNEUROSCI.0573-15.2015. URL http://www.jneurosci.org/cgi/doi/10. 1523/JNEUROSCI.0573-15.2015."
|
| 1762 |
+
},
|
| 1763 |
+
{
|
| 1764 |
+
"type": "ref_text",
|
| 1765 |
+
"bbox": [
|
| 1766 |
+
0.173,
|
| 1767 |
+
0.506,
|
| 1768 |
+
0.827,
|
| 1769 |
+
0.551
|
| 1770 |
+
],
|
| 1771 |
+
"angle": 0,
|
| 1772 |
+
"content": "R. Rajalingham, E. B. Issa, P. Bashivan, K. Kar, K. Schmidt, and J. J. DiCarlo. Large-Scale, High-Resolution comparison of the core visual object recognition behavior of humans, monkeys, and State-of-the-Art deep artificial neural networks. *J. Neurosci.*, 38(33):7255–7269, Aug. 2018."
|
| 1773 |
+
},
|
| 1774 |
+
{
|
| 1775 |
+
"type": "ref_text",
|
| 1776 |
+
"bbox": [
|
| 1777 |
+
0.173,
|
| 1778 |
+
0.557,
|
| 1779 |
+
0.827,
|
| 1780 |
+
0.601
|
| 1781 |
+
],
|
| 1782 |
+
"angle": 0,
|
| 1783 |
+
"content": "R. Rajalingham, K. Kar, S. Sanghavi, S. Dehaene, and J. J. DiCarlo. The inferior temporal cortex is a potential cortical precursor of orthographic processing in untrained monkeys. Nature communications, 11(1):1-13, 2020."
|
| 1784 |
+
},
|
| 1785 |
+
{
|
| 1786 |
+
"type": "ref_text",
|
| 1787 |
+
"bbox": [
|
| 1788 |
+
0.173,
|
| 1789 |
+
0.608,
|
| 1790 |
+
0.827,
|
| 1791 |
+
0.639
|
| 1792 |
+
],
|
| 1793 |
+
"angle": 0,
|
| 1794 |
+
"content": "A. Riedel. Bag of tricks for training brain-like deep neural networks. In *Brain-Score Workshop*, 2022. URL https://openreview.net/forum?id=Sudzh-vWQ-c."
|
| 1795 |
+
},
|
| 1796 |
+
{
|
| 1797 |
+
"type": "ref_text",
|
| 1798 |
+
"bbox": [
|
| 1799 |
+
0.173,
|
| 1800 |
+
0.645,
|
| 1801 |
+
0.825,
|
| 1802 |
+
0.675
|
| 1803 |
+
],
|
| 1804 |
+
"angle": 0,
|
| 1805 |
+
"content": "J. Rony, L. G. Hafemann, L. S. Oliveira, I. Ben Ayed, R. Sabourin, and E. Granger. Decoupling direction and norm for efficient Gradient-Based L2 adversarial attacks and defenses. Nov. 2018."
|
| 1806 |
+
},
|
| 1807 |
+
{
|
| 1808 |
+
"type": "ref_text",
|
| 1809 |
+
"bbox": [
|
| 1810 |
+
0.173,
|
| 1811 |
+
0.682,
|
| 1812 |
+
0.827,
|
| 1813 |
+
0.713
|
| 1814 |
+
],
|
| 1815 |
+
"angle": 0,
|
| 1816 |
+
"content": "S. Safarani, A. Nix, K. Willeke, S. A. Cadena, K. Restivo, G. Denfield, A. S. Tolias, and F. H. Sinz. Towards robust vision by multi-task learning on monkey visual cortex. July 2021."
|
| 1817 |
+
},
|
| 1818 |
+
{
|
| 1819 |
+
"type": "ref_text",
|
| 1820 |
+
"bbox": [
|
| 1821 |
+
0.173,
|
| 1822 |
+
0.719,
|
| 1823 |
+
0.827,
|
| 1824 |
+
0.763
|
| 1825 |
+
],
|
| 1826 |
+
"angle": 0,
|
| 1827 |
+
"content": "M. Schrimpf, J. Kubilius, H. Hong, N. J. Majaj, R. Rajalingham, E. B. Issa, K. Kar, P. Bashivan, J. Prescott-Roy, K. Schmidt, D. L. K. Yamins, and J. J. DiCarlo. Brain-Score: Which artificial neural network for object recognition is most Brain-Like? Sept. 2018."
|
| 1828 |
+
},
|
| 1829 |
+
{
|
| 1830 |
+
"type": "ref_text",
|
| 1831 |
+
"bbox": [
|
| 1832 |
+
0.173,
|
| 1833 |
+
0.77,
|
| 1834 |
+
0.825,
|
| 1835 |
+
0.814
|
| 1836 |
+
],
|
| 1837 |
+
"angle": 0,
|
| 1838 |
+
"content": "M. Schrimpf, J. Kubilius, M. J. Lee, N. A. R. Murty, R. Ajemian, and J. J. DiCarlo. Integrative benchmarking to advance neurally mechanistic models of human intelligence. Neuron, 2020. URL: https://www.cell.com/neuron/fulltext/S0896-6273(20)30605-x."
|
| 1839 |
+
},
|
| 1840 |
+
{
|
| 1841 |
+
"type": "ref_text",
|
| 1842 |
+
"bbox": [
|
| 1843 |
+
0.173,
|
| 1844 |
+
0.821,
|
| 1845 |
+
0.827,
|
| 1846 |
+
0.851
|
| 1847 |
+
],
|
| 1848 |
+
"angle": 0,
|
| 1849 |
+
"content": "K. Simonyan and A. Zisserman. Very deep convolutional networks for Large-Scale image recognition. Sept. 2014."
|
| 1850 |
+
},
|
| 1851 |
+
{
|
| 1852 |
+
"type": "ref_text",
|
| 1853 |
+
"bbox": [
|
| 1854 |
+
0.173,
|
| 1855 |
+
0.858,
|
| 1856 |
+
0.827,
|
| 1857 |
+
0.889
|
| 1858 |
+
],
|
| 1859 |
+
"angle": 0,
|
| 1860 |
+
"content": "F. H. Sinz, X. Pitkow, J. Reimer, M. Bethge, and A. S. Tolias. Engineering a less artificial intelligence. *Neuron*, 103(6):967–979, Sept. 2019."
|
| 1861 |
+
},
|
| 1862 |
+
{
|
| 1863 |
+
"type": "ref_text",
|
| 1864 |
+
"bbox": [
|
| 1865 |
+
0.173,
|
| 1866 |
+
0.895,
|
| 1867 |
+
0.825,
|
| 1868 |
+
0.926
|
| 1869 |
+
],
|
| 1870 |
+
"angle": 0,
|
| 1871 |
+
"content": "Y. Song, T. Kim, S. Nowozin, S. Ermon, and N. Kushman. PixelDefend: Leveraging generative models to understand and defend against adversarial examples. Oct. 2017."
|
| 1872 |
+
},
|
| 1873 |
+
{
|
| 1874 |
+
"type": "list",
|
| 1875 |
+
"bbox": [
|
| 1876 |
+
0.173,
|
| 1877 |
+
0.103,
|
| 1878 |
+
0.829,
|
| 1879 |
+
0.926
|
| 1880 |
+
],
|
| 1881 |
+
"angle": 0,
|
| 1882 |
+
"content": null
|
| 1883 |
+
},
|
| 1884 |
+
{
|
| 1885 |
+
"type": "page_number",
|
| 1886 |
+
"bbox": [
|
| 1887 |
+
0.49,
|
| 1888 |
+
0.948,
|
| 1889 |
+
0.508,
|
| 1890 |
+
0.96
|
| 1891 |
+
],
|
| 1892 |
+
"angle": 0,
|
| 1893 |
+
"content": "12"
|
| 1894 |
+
}
|
| 1895 |
+
],
|
| 1896 |
+
[
|
| 1897 |
+
{
|
| 1898 |
+
"type": "header",
|
| 1899 |
+
"bbox": [
|
| 1900 |
+
0.173,
|
| 1901 |
+
0.033,
|
| 1902 |
+
0.48,
|
| 1903 |
+
0.049
|
| 1904 |
+
],
|
| 1905 |
+
"angle": 0,
|
| 1906 |
+
"content": "Published as a conference paper at ICLR 2023"
|
| 1907 |
+
},
|
| 1908 |
+
{
|
| 1909 |
+
"type": "ref_text",
|
| 1910 |
+
"bbox": [
|
| 1911 |
+
0.173,
|
| 1912 |
+
0.103,
|
| 1913 |
+
0.826,
|
| 1914 |
+
0.134
|
| 1915 |
+
],
|
| 1916 |
+
"angle": 0,
|
| 1917 |
+
"content": "C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. Dec. 2013."
|
| 1918 |
+
},
|
| 1919 |
+
{
|
| 1920 |
+
"type": "ref_text",
|
| 1921 |
+
"bbox": [
|
| 1922 |
+
0.173,
|
| 1923 |
+
0.141,
|
| 1924 |
+
0.826,
|
| 1925 |
+
0.172
|
| 1926 |
+
],
|
| 1927 |
+
"angle": 0,
|
| 1928 |
+
"content": "C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. Sept. 2014."
|
| 1929 |
+
},
|
| 1930 |
+
{
|
| 1931 |
+
"type": "ref_text",
|
| 1932 |
+
"bbox": [
|
| 1933 |
+
0.173,
|
| 1934 |
+
0.178,
|
| 1935 |
+
0.829,
|
| 1936 |
+
0.223
|
| 1937 |
+
],
|
| 1938 |
+
"angle": 0,
|
| 1939 |
+
"content": "H. Tang, M. Schrimpf, W. Lotter, C. Moerman, A. Paredes, J. Ortega Caro, W. Hardesty, D. Cox, and G. Kreiman. Recurrent computations for visual pattern completion. Proceedings of the National Academy of Sciences, 115(35):8835-8840, 2018. ISSN 0027-8424. doi: 10.1073/pnas.1719397115."
|
| 1940 |
+
},
|
| 1941 |
+
{
|
| 1942 |
+
"type": "ref_text",
|
| 1943 |
+
"bbox": [
|
| 1944 |
+
0.173,
|
| 1945 |
+
0.23,
|
| 1946 |
+
0.825,
|
| 1947 |
+
0.261
|
| 1948 |
+
],
|
| 1949 |
+
"angle": 0,
|
| 1950 |
+
"content": "W. Xu, D. Evans, and Y. Qi. Feature squeezing: Detecting adversarial examples in deep neural networks. arXiv [cs.CV], Apr. 2017."
|
| 1951 |
+
},
|
| 1952 |
+
{
|
| 1953 |
+
"type": "ref_text",
|
| 1954 |
+
"bbox": [
|
| 1955 |
+
0.173,
|
| 1956 |
+
0.268,
|
| 1957 |
+
0.825,
|
| 1958 |
+
0.298
|
| 1959 |
+
],
|
| 1960 |
+
"angle": 0,
|
| 1961 |
+
"content": "L. Yuan, W. Xiao, G. Dellafererra, G. Kreiman, F. E. H. Tay, J. Feng, and M. S. Livingstone. Fooling the primate brain with minimal, targeted image manipulation. Nov. 2020."
|
| 1962 |
+
},
|
| 1963 |
+
{
|
| 1964 |
+
"type": "ref_text",
|
| 1965 |
+
"bbox": [
|
| 1966 |
+
0.173,
|
| 1967 |
+
0.306,
|
| 1968 |
+
0.825,
|
| 1969 |
+
0.336
|
| 1970 |
+
],
|
| 1971 |
+
"angle": 0,
|
| 1972 |
+
"content": "A. M. Zador. A critique of pure learning and what artificial neural networks can learn from animal brains. Nat. Commun., 10(1):3770, Aug. 2019."
|
| 1973 |
+
},
|
| 1974 |
+
{
|
| 1975 |
+
"type": "list",
|
| 1976 |
+
"bbox": [
|
| 1977 |
+
0.173,
|
| 1978 |
+
0.103,
|
| 1979 |
+
0.829,
|
| 1980 |
+
0.336
|
| 1981 |
+
],
|
| 1982 |
+
"angle": 0,
|
| 1983 |
+
"content": null
|
| 1984 |
+
},
|
| 1985 |
+
{
|
| 1986 |
+
"type": "page_number",
|
| 1987 |
+
"bbox": [
|
| 1988 |
+
0.491,
|
| 1989 |
+
0.948,
|
| 1990 |
+
0.508,
|
| 1991 |
+
0.96
|
| 1992 |
+
],
|
| 1993 |
+
"angle": 0,
|
| 1994 |
+
"content": "13"
|
| 1995 |
+
}
|
| 1996 |
+
]
|
| 1997 |
+
]
|
2023/Aligning Model and Macaque Inferior Temporal Cortex Representations Improves Model-to-Human Behavioral Alignment and Adversarial Robustness/7ac7f310-bee1-439e-801a-a6f6b2dfa988_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6b15606b4a206c3f1972a9f4528be16784f94e78eee461c23f94a8a9de5ce242
+size 16477653
2023/Aligning Model and Macaque Inferior Temporal Cortex Representations Improves Model-to-Human Behavioral Alignment and Adversarial Robustness/full.md
ADDED
|
@@ -0,0 +1,214 @@
| 1 |
+
# ALIGNING MODEL AND MACAQUE INFERIOR TEMPORAL CORTEX REPRESENTATIONS IMPROVES MODEL-TO-HUMAN BEHAVIORAL ALIGNMENT AND ADVERSARIAL ROBUSTNESS
|
| 2 |
+
|
| 3 |
+
Joel Dapello\*,1,2,3, Kohitij Kar\*,1,2,4,6, Martin Schrimpf\*,1,2,4, Robert Geary\*,1,2,3, Michael Ferguson\*,1,2,4 David D. Cox\*, James J. DiCarlo\*,1,2,4
|
| 4 |
+
|
| 5 |
+
$^{1}$ Department of Brain and Cognitive Sciences, MIT, Cambridge, MA02139
|
| 6 |
+
$^{2}$ McGovern Institute for Brain Research, MIT, Cambridge, MA02139
|
| 7 |
+
$^{3}$ School of Engineering and Applied Sciences, Harvard University, Cambridge, MA02139
|
| 8 |
+
$^{4}$ Center for Brains, Minds and Machines, MIT, Cambridge, MA02139
|
| 9 |
+
$^{5}$ MIT-IBM Watson AI Lab
|
| 10 |
+
$^{6}$ Department of Biology, Centre for Vision Research at York University, Toronto, CA. dapello@mit.edu, kohitij@mit.edu
|
| 11 |
+
|
| 12 |
+
# ABSTRACT
|
| 13 |
+
|
| 14 |
+
While some state-of-the-art artificial neural network systems in computer vision are strikingly accurate models of the corresponding primate visual processing, there are still many discrepancies between these models and the behavior of primates on object recognition tasks. Many current models suffer from extreme sensitivity to adversarial attacks and often do not align well with the image-by-image behavioral error patterns observed in humans. Previous research has provided strong evidence that primate object recognition behavior can be very accurately predicted by neural population activity in the inferior temporal (IT) cortex, a brain area in the late stages of the visual processing hierarchy. Therefore, here we directly test whether making the late stage representations of models more similar to that of macaque IT produces new models that exhibit more robust, primate-like behavior. We collected a dataset of chronic, large-scale multi-electrode recordings across the IT cortex in six non-human primates (rhesus macaques). We then use these data to fine-tune (end-to-end) the model "IT" representations such that they are more aligned with the biological IT representations, while preserving accuracy on object recognition tasks. We generate a cohort of models with a range of IT similarity scores validated on held-out animals across two image sets with distinct statistics. Across a battery of optimization conditions, we observed a strong correlation between the models' IT-likeness and alignment with human behavior, as well as an increase in its adversarial robustness. We further assessed the limitations of this approach and find that the improvements in behavioral alignment and adversarial robustness generalize across different image statistics, but not to object categories outside of those covered in our IT training set. Taken together, our results demonstrate that building models that are more aligned with the primate brain leads to more robust and human-like behavior, and call for larger neural data-sets to further augment these gains. Code, models, and data are available at https://github.com/dapello/braintree.
|
| 15 |
+
|
| 16 |
+
# 1 INTRODUCTION AND RELATED WORK
|
| 17 |
+
|
| 18 |
+
Object recognition models have made incredible strides in the last ten years (Krizhevsky et al., 2012; Szegedy et al., 2014; Simonyan and Zisserman, 2014; He et al., 2015b; Dosovitskiy et al., 2020; Liu et al., 2022), even surpassing human performance in some benchmarks (He et al., 2015a). While some of these models bear remarkable resemblance to the primate visual system (Yamins et al., 2013;
|
| 19 |
+
|
| 20 |
+

|
| 21 |
+
B
|
| 22 |
+
|
| 23 |
+

|
| 24 |
+
C
|
| 25 |
+
|
| 26 |
+

|
| 27 |
+
Figure 1: Aligning model IT representations with primate IT representations improves behavioral alignment and improves adversarial robustness. A) A set of naturalistic images, each containing one of eight different object classes, is shown to a CNN and also to three different primate subjects with implanted multi-electrode arrays recording from the Inferior Temporal (IT) cortex. (1) A Base model (ImageNet pre-trained CORnet-S) is fine-tuned using stochastic gradient descent to (2) minimize the classification loss with respect to the ground truth object in each image while also minimizing a representational similarity loss (CKA) that encourages the model's IT representation to be more like those measured in the (pooled) primate subjects. (3) The resultant IT aligned models are then frozen and each tested in three ways. First, model IT representations are evaluated for similarity to biological IT representations (CKA metric) using neural data obtained from new primate subjects - we refer to the split-trial reliability-ceiled average across all held out macaques and both image sets as "Validated IT neural similarity". Second, model output behavioral error patterns are assessed for alignment with human behavioral error patterns at the resolution of individual images (i2n, see Methods). Third, model behavioral output is evaluated for its robustness to white box adversarial attacks using an $L_{\infty}$ norm projected gradient descent attack. All three tests are carried out with: (i) new images within the IT-alignment training domain (held out HVM images; see Methods) and (ii) new images with novel image statistics (natural COCO images; see Methods), and those empirical results are tracked separately. B) We find that this IT-alignment procedure produced gains in validated IT neural similarity relative to base models on both data sets, and that these gains led to improvement in human behavioral alignment. $n = 30$ models are shown, resulting from training at six different relative weightings of the IT neural similarity loss, each from five base models derived from five random seeds. C) We also find that these same IT-alignment gains resulted in increased adversarial accuracy (PGD $L_{\infty}$, $\epsilon = 1/1020$) on the same model set as in B. Base models trained only for ImageNet and HVM image classification are circled in grey.
|
| 28 |
+
|
| 29 |
+

|
| 30 |
+
|
| 31 |
+
Khaligh-Razavi and Kriegeskorte, 2014; Schrimpf et al., 2018; 2020), there remain a number of important discrepancies. In particular, the output behavior of current models, while coarsely aligned with primate object confusion patterns, does not fully match primate error patterns on individual images (Rajalingham et al., 2018; Geirhos et al., 2021). In addition, these same models can be easily fooled by adversarial attacks – targeted pixel-level perturbations intentionally designed to cause the model to produce the wrong output(Szegedy et al., 2013; Carlini and Wagner, 2016; Chen et al., 2017; Rony et al., 2018; Brendel et al., 2019), whereas primate behavior is thought to be more robust to these kinds of attacks. This is an important unsolved problem in engineering artificial intelligence systems; the deviance between model and human behavior has been studied extensively in the machine learning community, often from the perspective of safety in real-world deployment of computer vision systems (Das et al., 2017; Liu et al., 2017; Xu et al., 2017; Madry et al., 2017; Song et al., 2017; Dhillon et al., 2018; Buckman et al., 2018; Guo et al., 2018; Michaelis et al., 2019). From a neuroscience perspective, behavioral differences like these point to different underlying mechanisms
|
| 32 |
+
|
| 33 |
+
and feature representations used for object recognition between the artificial and biological systems, meaning that our scientific understanding of the mechanisms of visual behavior remains incomplete.
|
| 34 |
+
|
| 35 |
+
Incorporating neurophysiological constraints into models to make them behave more in line with primate visual behavior is an active field of research (Marblestone et al., 2016; Lotter et al., 2016; Nayebi and Ganguli, 2017; Guerguiev et al., 2017; Hassabis et al., 2017; Lindsay and Miller, 2018; Tang et al., 2018; Kar et al., 2019; Kubilius et al., 2019; Li et al., 2019; Hasani et al., 2019; Sinz et al., 2019; Zador, 2019; Geiger et al., 2022). Previously, Dapello et al. (2020) demonstrated that convolutional neural network (CNN) models with early visual representations that are more functionally aligned with the early representations of primate visual processing tended to be more robust to adversarial attacks. This correlational observation was turned into a causal test, by simulating a primary visual cortex at the front of CNNs, which was indeed found to improve performance across a range of white box adversarial attacks and common image corruptions. Likewise, several recent studies have demonstrated that training models to classify images while also predicting (Safarani et al., 2021) or having similar representations (Federer et al., 2020) to early visual processing regions of primates, or even mice (Li et al., 2019), has a positive effect on generalization and robustness to adversarial attacks and common image corruptions.
|
| 36 |
+
|
| 37 |
+
However, no research to date has investigated the effects of incorporating biological knowledge of the neural representations in the IT cortex – a late stage visual processing region of the primate ventral stream, which critically supports primate visual object recognition (DiCarlo et al., 2012; Majaj et al., 2015). Here, we developed a method to align the late layer "IT representations" of a base object recognition model (CORnet-S (Kubilius et al., 2019) pre-trained on ImageNet (Deng et al., 2009) and naturalistic, grey-scale "HVM" images (Majaj et al., 2015)) to the biological IT representation while the model continues to be optimized to perform classification of the dominant object in each image. Using neural recordings performed across the IT cortex of six rhesus macaque monkeys divided into three training animals and three held-out testing animals for validation, we generate a suite of models under a variety of different optimization conditions and measure their IT alignment on held out animals, their alignment with human behavior, and their robustness to a range of adversarial attacks, in all cases on at least two image sets with distinct statistics as shown in figure 1.
|
| 38 |
+
|
| 39 |
+
We report three novel findings:
|
| 40 |
+
|
| 41 |
+
1. Our method robustly improves IT representational similarity of models to brains even when measured on new animals and new images.
|
| 42 |
+
2. We find that gains in model IT-likeness lead to gains in human behavioral alignment.
|
| 43 |
+
3. Likewise we find that improved IT-likeness leads to increased adversarial robustness.
|
| 44 |
+
|
| 45 |
+
Interestingly, we observe that adversarial training improves robustness but does not significantly increase IT similarity or human behavioral alignment. Finally, while probing the limits of our current IT-alignment procedure, we observed that the improvements in IT similarity, behavioral alignment, and adversarial robustness generalized to images with different image statistics than those in the IT training set (from naturalistic gray scale images to full color natural images) but only for object categories that were part of the original IT training set and not for held-out object categories.
|
| 46 |
+
|
| 47 |
+
# 2 DATA AND METHODS
|
| 48 |
+
|
| 49 |
+
Here we describe the neural and behavioral data collection, the training and testing methods used for aligning model representations with IT representations, and the methods for assessing behavioral alignment and adversarial robustness.
|
| 50 |
+
|
| 51 |
+
# 2.1 IMAGE SETS
|
| 52 |
+
|
| 53 |
+
High-quality synthetic "naturalistic" images of single objects (HVM images) were generated using free ray-tracing software (http://www.povray.org), similar to Majaj et al. (2015). Each image consisted of a 2D projection of a 3D model (purchased from Dosch Design and TurboSquid) added to a random natural background. The ten objects chosen were bear, elephant, face, apple, car, dog, chair, plane, bird, and zebra. By varying six viewing parameters, we explored three types of identity-preserving object variation: position (x and y), rotation (x, y, and z), and size. All images
|
| 54 |
+
|
| 55 |
+
were achromatic with a native resolution of $256 \times 256$ pixels. Additionally, natural Microsoft COCO images (photographs) pertaining to the 10 nouns were downloaded from http://cocodataset.org (Lin et al., 2014). Each image was resized (not cropped) to $256 \times 256 \times 3$ pixels and presented within the central 8 deg.
|
| 56 |
+
|
| 57 |
+
# 2.2 PRIMATE NEURAL DATA COLLECTION AND PROCESSING
|
| 58 |
+
|
| 59 |
+
We surgically implanted each monkey with a head post under aseptic conditions. We recorded neural activity using two or three micro-electrode arrays (Utah arrays; Blackrock Microsystems) implanted in IT cortex. A total of 96 electrodes were connected per array (grid arrangement, $400\mathrm{um}$ spacing, $4\mathrm{mm}\times 4\mathrm{mm}$ span of each array). Array placement was guided by the sulcus pattern, which was visible during the surgery. The electrodes were accessed through a percutaneous connector that allowed simultaneous recording from all 96 electrodes from each array. All surgical and animal procedures were performed in accordance with National Institutes of Health guidelines and the Massachusetts Institute of Technology Committee on Animal Care. For information on the neural recording quality metrics per site, see supplemental section A.1.
|
| 60 |
+
|
| 61 |
+
During each daily recording session, band-pass filtered (0.1 Hz to 10 kHz) neural activity was recorded continuously at a sampling rate of 20 kHz using Intan Recording Controllers (Intan Technologies, LLC). The majority of the data presented here were based on multiunit activity. We detected the multiunit spikes after the raw voltage data were collected. A multiunit spike event was defined as the threshold crossing when voltage (falling edge) deviated by more than three times the standard deviation of the raw voltage values. Our array placements allowed us to sample neural sites from different parts of IT, along the posterior to anterior axis. However, for all the analyses, we did not consider the specific spatial location of the site, and treated each site as a random sample from a pooled IT population. For information on the neural recording quality metrics, see supplemental section A.1.
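As a rough illustration of the threshold-crossing rule described above, the sketch below marks falling-edge events at three standard deviations of the recorded voltage trace; the function name, array layout, and return convention are assumptions for illustration, not the authors' acquisition code.

```python
import numpy as np

def detect_multiunit_events(voltage, fs=20000.0, k=3.0):
    """Falling-edge threshold crossings at k standard deviations of the raw trace (sketch)."""
    thresh = -k * np.std(voltage)
    below = voltage < thresh
    # an event is the first sample at which the voltage drops below the threshold
    onsets = np.flatnonzero(~below[:-1] & below[1:]) + 1
    return onsets / fs  # event times in seconds
```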
|
| 62 |
+
|
| 63 |
+
Behavioral state during neural data collection All neural response data were obtained during a passive viewing task. In this task, monkeys fixated a white square dot $(0.2^{\circ})$ for $300\mathrm{ms}$ to initiate a trial. We then presented a sequence of 5 to 10 images, each ON for $100\mathrm{ms}$ followed by a $100\mathrm{ms}$ gray blank screen. This was followed by a water reward and an inter trial interval of $500\mathrm{ms}$ , followed by the next sequence. Trials were aborted if gaze was not held within $\pm 2^{\circ}$ of the central fixation dot during any point. Each neural site's response to each image was taken as the mean rate during a time window of $70 - 170\mathrm{ms}$ following image onset, a window that has been previously chosen to align with the visually-driven latency of IT neurons and their quantitative relationship to object classification behavior as in Majaj et al. (2015).
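For concreteness, a single site's response to a single image under the windowing described above could be computed as in the following sketch; the variable names and millisecond units are assumptions made for illustration.

```python
import numpy as np

def windowed_rate(spike_times_ms, image_onset_ms, window_ms=(70, 170)):
    """Mean firing rate (spikes/s) in the 70-170 ms window after image onset."""
    lo, hi = image_onset_ms + window_ms[0], image_onset_ms + window_ms[1]
    n_spikes = np.count_nonzero((spike_times_ms >= lo) & (spike_times_ms < hi))
    return n_spikes / ((window_ms[1] - window_ms[0]) / 1000.0)
```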
|
| 64 |
+
|
| 65 |
+
# 2.3 HUMAN BEHAVIORAL DATA COLLECTION
|
| 66 |
+
|
| 67 |
+
We measured human behavior (from 88 subjects) using the online Amazon MTurk platform, which enables efficient collection of large-scale psychophysical data from crowd-sourced "human intelligence tasks" (HITs). The reliability of the online MTurk platform has been validated by comparing results obtained from online and in-lab psychophysical experiments (Majaj et al., 2015; Rajalingham et al., 2015). Each trial started with a $100\mathrm{ms}$ presentation of the sample image (one out of 1320 images). This was followed by a blank gray screen for $100\mathrm{ms}$, followed by a choice screen with the target and distractor objects, similar to Rajalingham et al. (2018). The subjects indicated their choice by touching the screen or clicking the mouse over the target object. Each subject saw an image only once. We collected the data such that there were 80 unique subject responses per image, with varied distractor objects. Prior work has shown that human and macaque behavioral patterns are nearly identical, even at the image grain (Rajalingham et al., 2018). For further information on the human behavioral data collection, see supplemental section A.2.
|
| 68 |
+
|
| 69 |
+
# 2.4 ALIGNING MODEL REPRESENTATIONS WITH MACAQUE IT REPRESENTATIONS
|
| 70 |
+
|
| 71 |
+
In order to align neural network model representations with primate IT representations while performing classification, we use a multi-loss formulation similar to that used in Li et al. (2019) and Federer
|
| 72 |
+
|
| 73 |
+
et al. (2020). Starting with an ImageNet (Deng et al., 2009) pre-trained CORnet-S model (Kubilius et al., 2019), we used stochastic gradient descent (SGD) on all model weights to jointly minimize a standard categorical cross entropy loss on model predictions of ImageNet labels (maintained from model pre-training, for stability), HVM image labels, and a centered kernel alignment (CKA)-based loss penalizing the "IT" layer of CORnet-S for having representations not aligned with primate IT representations of the HVM images. CORnet-S was selected because it already has a clearly defined layer committed to region IT, close to the final linear readout of the network, but otherwise our procedure is compatible with any neural network architecture. Meanwhile, CKA, a measure of linear subspace alignment, was selected as the representational similarity measure. CKA has ideal properties such as invariance to isotropic scaling and orthonormal transformations, which do not matter from the perspective of a linear readout, but sensitivity to arbitrary linear transformations (Kornblith et al., 2019), which could lead to differences from a linear readout as well as allow the network to hide representations useful for image classification but not present within primate IT. CKA ranges from 0, indicating completely non-overlapping subspaces, to 1, indicating completely aligned subspaces. We found that our best neural alignment results came from minimizing the neural similarity loss function $\log(1 - \mathrm{CKA}(X, Y))$, where $X \in \mathbb{R}^{n \times p_1}$ and $Y \in \mathbb{R}^{n \times p_2}$ denote two column-centered activation matrices generated by showing $n$ example images and recording $p_1$ and $p_2$ neurons from the IT layer of CORnet-S and from macaque IT recordings, respectively. The macaque neural activation matrices were generated by averaging over approximately 50 trials per image and over a 70-170 millisecond time window following image presentation. An illustration of our setup is shown in figure 1A.
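The neural similarity loss described above can be sketched in PyTorch with the linear CKA of Kornblith et al. (2019) on column-centered activation matrices; this is an illustrative reconstruction, not the released training code, and the variable names are assumptions.

```python
import torch

def linear_cka(x, y):
    """Linear CKA between activation matrices x (n, p1) and y (n, p2)."""
    x = x - x.mean(dim=0, keepdim=True)  # column-center each unit
    y = y - y.mean(dim=0, keepdim=True)
    cross = torch.norm(y.t() @ x) ** 2                       # ||Y^T X||_F^2
    return cross / (torch.norm(x.t() @ x) * torch.norm(y.t() @ y))

def neural_similarity_loss(model_it, monkey_it):
    """log(1 - CKA(X, Y)): decreases as the model 'IT' subspace aligns with macaque IT."""
    return torch.log(1.0 - linear_cka(model_it, monkey_it))
```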
|
| 74 |
+
|
| 75 |
+
# 2.5 TRAINING AND TESTING CONDITIONS
|
| 76 |
+
|
| 77 |
+
In all reported experiments, model IT representational similarity training was performed on 2880 grey-scale naturalistic HVM image representations, consisting of 188 active neural sites collated from the three training set macaques, for 1200 epochs. We use a batch size of 128, meaning the CKA loss is computed over a random set of 128 image representations at each gradient step. In order to create models with a variety of different final neural alignment scores, we add a random probability $1 - p$ of dropping the IT alignment gradients and create six different sets (5 random seeds for each set) of neurally aligned models with $p \in [0, 1/32, 1/16, 1/8, 1/4, 1/2, 1]$. For example, the set with $p = 0$ drops all of the IT alignment gradients and thus has no improved IT alignment over the base model, while the set with $p = 1$ always includes the IT alignment gradients and accordingly achieves the highest IT alignment scores (see figure 2). We also introduce a small amount of data augmentation, including the physical equivalent of 0.5 degrees of jitter in the vertical and horizontal position of the images, 0.5 degrees of rotational jitter, and +/- 0.1 degrees of scaling jitter, assuming our model has an 8 degree field of view. These augmentations were selected to simulate natural viewing conditions.
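One way to realize the stochastic inclusion of the IT-alignment gradients described above is sketched below, reusing the `neural_similarity_loss` sketch from section 2.4; the multi-head forward pass, loss names, and batch layout are assumptions made for illustration rather than the authors' training loop.

```python
import random
import torch.nn.functional as F

def training_step(model, batch, optimizer, p):
    """One SGD step; the IT-alignment term is included with probability p (sketch)."""
    images, imagenet_labels, hvm_labels, monkey_it = batch
    logits_imagenet, logits_hvm, model_it = model(images)   # assumed multi-head forward pass
    loss = (F.cross_entropy(logits_imagenet, imagenet_labels)
            + F.cross_entropy(logits_hvm, hvm_labels))
    if random.random() < p:  # i.e., drop the IT-alignment gradients with probability 1 - p
        loss = loss + neural_similarity_loss(model_it, monkey_it)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```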
|
| 78 |
+
|
| 79 |
+
Model IT representational similarity testing was performed on a total of three held out monkeys: Monkey 1 (280 neural sites) and monkey 2 (144 neural sites) on 320 held out HVM images with statistics similar to the training distribution, and monkey 1 (237 neural sites) and monkey 3 (106 active neural sites) on 200 full color natural COCO images with different statistics than those used during training. Additional model training information can be found in supplemental section B.
|
| 80 |
+
|
| 81 |
+
For performing white box adversarial attacks, we used untargeted projected gradient descent (PGD) (Madry et al., 2017) with $L_{\infty}$ and $L_{2}$ norm constraints. Further details are given in supplemental section B.
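A minimal version of the untargeted $L_{\infty}$ PGD attack referenced above is sketched here; the step size, step count, and [0, 1] pixel range are assumptions rather than the exact settings given in the supplement.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=1/1020, steps=32, alpha=None):
    """Untargeted PGD under an L_inf constraint (illustrative)."""
    alpha = alpha if alpha is not None else 2.5 * eps / steps
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()          # ascend the classification loss
            delta.clamp_(-eps, eps)                      # project back onto the eps-ball
            delta.data = (x + delta).clamp(0, 1) - x     # keep pixels in a valid range
        delta.grad.zero_()
    return (x + delta).detach()
```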
|
| 82 |
+
|
| 83 |
+
# 2.6 BEHAVIORAL BENCHMARKS
|
| 84 |
+
|
| 85 |
+
To characterize the behavior of the visual system, we have used an image-level behavioral metric, i2n (Rajalingham et al., 2018). The behavioral metric computes a pattern of unbiased behavioral performances, using a sensitivity index: $d' = Z(HitRate) - Z(FalseAlarmRate)$ , where $Z$ is the inverse of the cumulative Gaussian distribution. The HitRates for i2n are the accuracies of the subjects when a specific image is shown and the choices include the target object (i.e., the object present in the image) and one other specific distractor object. So for every distractor-target pair we get a different i2n entry. A detailed description of how to compute i2n can be also found at Rajalingham
|
| 86 |
+
|
| 87 |
+
et al. (2018). The i2n behavioral benchmark was computed using the Brain-Score implementation of the i2n metric (Schrimpf et al., 2018).
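For reference, the sensitivity index underlying i2n can be sketched as below; the clipping of rates away from 0 and 1 is an assumption needed to keep the inverse cumulative Gaussian finite, not a documented detail of the benchmark.

```python
import numpy as np
from scipy.stats import norm

def dprime(hit_rate, false_alarm_rate, eps=1e-4):
    """d' = Z(HitRate) - Z(FalseAlarmRate), with Z the inverse cumulative Gaussian."""
    hr = np.clip(hit_rate, eps, 1 - eps)
    far = np.clip(false_alarm_rate, eps, 1 - eps)
    return norm.ppf(hr) - norm.ppf(far)

# i2n collects one such d' per (image, distractor-object) pair.
```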
|
| 88 |
+
|
| 89 |
+

|
| 90 |
+
Figure 2: IT alignment training leads to improved IT representational similarity on held out animals and held out images across two image sets with different statistics. A) IT neural similarity scores (CKA, normalized by split-half trial reliability) for held out but within domain HVM images vs gradient steps is shown for two held out monkeys across seven different neural similarity loss gradient dropout rates (the darkest trace receives neural similarity loss gradients at $100\%$ of gradient steps, while in the lightest trace neural similarity loss gradients are dropped at every step). Two control conditions are also shown: optimizing model IT toward a random Gaussian target IT matrix (random, blue) and toward an image-shuffled target IT matrix (shuffle, orange). B) Like A but for natural COCO images out of domain with respect to the training set. Grey dashed line on each plot shows the base model score for models pre-trained on ImageNet and HVM image labels with no IT representational similarity loss, which the model set with $0\%$ of IT similarity loss gradients does not deviate significantly from. Error bars are bootstrapped confidence intervals for 5 training seeds.
|
| 91 |
+
|
| 92 |
+

|
| 93 |
+
|
| 94 |
+
# 3 RESULTS
|
| 95 |
+
|
| 96 |
+
Does aligning late stage model representations with primate IT representations lead to improvements in alignment with image-by-image patterns of human behavior or improvements in white box adversarial robustness? We start by testing if our method can generate models that are truly more IT-like by validating on held out animals and images, as this has not been previously attempted and is not guaranteed to work given the sampling limitations of neural recording experiments. We then proceed to analyze how these IT-aligned models fare on several human behavioral alignment benchmarks and a diverse set of white box adversarial attacks.
|
| 97 |
+
|
| 98 |
+
# 3.1 DIRECT FITTING TO IT NEURAL DATA IMPROVES IT-LIKENESS OF MODELS ACROSS HELD OUT ANIMALS AND IMAGE SETS
|
| 99 |
+
|
| 100 |
+
First, we investigated how well our IT alignment optimization procedure generalizes to IT neural similarity measurements (CKA) for two held out test monkeys on 320 held out HVM images (similar image statistics to the training set). Figure 2A shows the ceiled IT neural similarity scores for both test animals across different neural similarity loss gradient dropout rates ($p \in [0, 1/32, 1/16, 1/8, 1/4, 1/2, 1]$; the model marked $100\%$ sees IT similarity loss gradients at every step, whereas the model marked $0\%$ never sees IT similarity loss gradients), as well as models optimized to classify HVM images while fitting a random Gaussian target activation matrix, or an image-shuffled target activation matrix, which has the same first- and second-order statistics as the true IT activation matrix but scrambled image information. For both animals, we see a significant positive shift from the unfitted model (neural loss weight of 0.0), with higher relative neural loss weights generally leading to higher IT neural similarity scores. Meanwhile, both of the control conditions cause models to become significantly less IT-like.
|
| 101 |
+
|
| 102 |
+
We next investigated how well our procedure generalizes from the grey-scale naturalistic HVM images to full color, natural images from COCO. Figure 2B shows the same model optimization conditions as before, but now on two unseen animal IT representations of COCO images. Like in figure 2A, although to a lesser absolute degree, we see improvements relative to the baseline in IT neural similarity as a function of the neural loss weight, with the controls generally decreasing in IT neural similarity. From
|
| 103 |
+
|
| 104 |
+
this, we conclude that our IT alignment procedure is able to improve IT-likeness in our models even in held out animals and across two image sets with distinct statistics.
|
| 105 |
+
|
| 106 |
+
# 3.2 INCREASED BEHAVIORAL ALIGNMENT IN MODELS THAT BETTER MATCH MACAQUE IT
|
| 107 |
+
|
| 108 |
+
Next, we investigated how single-image-level classification error patterns correlate between humans and IT aligned models. To get a big picture view, we take all of the optimization conditions and validation epochs generated in figure 2A while models are training and compare IT neural similarity on the HVM test set (averaged over held out animals) with human behavioral alignment on the HVM test set. As shown in figure 3A, this analysis reveals a broad, though not linear, correlation between IT neural similarity and behavioral alignment. Interestingly, we observe that the slope is at its steepest when IT neural similarity is at the highest values, suggesting that an even higher degree of IT-alignment might result in even greater increases in behavioral alignment. We also investigated whether these trends persist when we exclude the optimization on object labels from the HVM images and only optimize for IT neural similarity. To do so, we train the models on all previous conditions but without the HVM object-label loss. As shown in figure 3, the overall shape of the trend remains quite similar, though the absolute behavioral alignment shifts downward, indicating that the label information during training helps on the behavioral task, but is not required for the trend to hold. In figure 3B, we perform the same set of measurements but now focusing on the COCO image set. Consistent with the observation on COCO IT neural similarity, the behavioral alignment trend transfers to the COCO image set, although the absolute magnitude of the improvements is smaller.
|
| 109 |
+
|
| 110 |
+
Finally, using the Brain-Score platform (Schrimpf et al., 2018), we benchmark our models against publicly available human behavioral data from the Objectome image set (Rajalingham et al., 2018), which has similar image statistics to our HVM IT fitting set (with a total of 24 object categories, only four of which overlap with the training set). As demonstrated in figure 3C, when the Objectome data are filtered down to just the four overlapping categories, our most IT similar models are again the most behaviorally aligned, well above the unfit baseline and control conditions, which remain close to the floor for much of the plot. However, as shown in figure 3D, when considering all 24 object categories in the Objectome dataset, we see that the trend of increasing human behavioral alignment does not hold and our models actually begin to fare worse in terms of human behavioral alignment at higher levels of IT neural similarity. As shown in supplementary figure A.1, using a linear probe to assess image class information content (measured by classification accuracy on held out representations) reveals that these models are losing class information content for the Objectome image set, which drives the decrease in behavioral alignment, as the model makes more mistakes overall than a human. Similarly, a linear probe analysis reveals minimal loss in class information in the overlapping categories. Thus, we observe that while our method leads to increased human behavioral alignment across different image statistics, it does not currently lead to improved alignment on unseen object categories.
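The linear-probe analysis mentioned above (classification accuracy of a linear readout trained on held-out, frozen representations) can be approximated by a sketch like the following; the use of scikit-learn logistic regression and the particular train/test split are assumptions for illustration.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def linear_probe_accuracy(features, labels, test_size=0.2, seed=0):
    """Accuracy of a linear readout on frozen model representations (illustrative)."""
    x_tr, x_te, y_tr, y_te = train_test_split(
        features, labels, test_size=test_size, random_state=seed, stratify=labels)
    probe = LogisticRegression(max_iter=5000)
    return probe.fit(x_tr, y_tr).score(x_te, y_te)
```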
|
| 111 |
+
|
| 112 |
+
# 3.3 INCREASED ADVERSARIAL ROBUSTNESS IN MODELS THAT BETTER MATCH MACAQUE IT
|
| 113 |
+
|
| 114 |
+
Finally, we evaluate our models on an array of white box adversarial attacks, to assess if models with higher IT neural similarity scores also have increased adversarial robustness. Like before, we start with a big picture analysis where we consider every evaluation epoch for all optimization conditions considered in figure 2. Again, as demonstrated in figures 4A and 4B, for both HVM images and COCO images, there is a broad though not entirely linear correlation between IT neural similarity and adversarial robustness to PGD $L_{\infty} \epsilon = 1 / 1020$ attacks. Like in the analysis of behavioral alignment, we also see a higher slope on the right side of the plots, where IT neural similarity is the highest, suggesting further improvements could be had if models were pushed to be more IT aligned.
|
| 115 |
+
|
| 116 |
+
In order to get a better sense of the gains in robustness, we measured the adversarial strength-accuracy curves for models only trained with HVM image labels, models trained with HVM image labels and IT neural representations, and models adversarially trained on HVM labels (PGD $L_{\infty}$, $\epsilon = 4/255$). Figure 5A shows that on held-out HVM images, IT aligned models have increased accuracy across a range of $\epsilon$ values for both $L_{\infty}$ and $L_{2}$ norms, though less so than models with explicit adversarial training. However, as shown in figure 5B, the same analysis on COCO images demonstrates that adversarial robustness in the IT aligned networks generalizes significantly better to unseen image statistics than in the adversarially trained models, which lose clean accuracy on COCO images.
|
| 117 |
+
|
| 118 |
+

|
| 119 |
+
Figure 3: IT neural similarity correlates with behavioral alignment across a variety of optimization conditions and unseen image statistics but not on unseen object categories. A) Held out animal and image IT neural similarity is plotted against human behavioral alignment on the HVM image set at every validation epoch for all neural loss weight conditions, random Gaussian IT target matrix conditions, and image shuffled IT target matrix conditions, in each case with and without image classification loss. B) and C) Like in A but for the COCO image set and the Objectome image set (Rajalingham et al., 2018) filtered to overlapping categories with the IT training set. D) The behavioral alignment for the full Objectome image set, with 20 categories not covered in the IT training set, is not improved by the IT-alignment procedure and data used here. In all plots, the black cross represents the average base model position, and the heavy blue line is a sliding X, Y average of all conditions merely to visually highlight trends. Five seeds for each condition are plotted.
|
| 120 |
+
|
| 121 |
+

|
| 122 |
+
|
| 123 |
+

|
| 124 |
+
|
| 125 |
+

|
| 126 |
+
|
| 127 |
+
Figure 4: IT neural similarity correlates with improved white box adversarial robustness. A) held out animal and image IT neural similarity is plotted against white box adversarial accuracy (PGD $L_{\infty}\epsilon = 1 / 1020$ ) on the HVM image set measured across multiple training time points for all neural loss ratio conditions, random Gaussian IT target matrix conditions, and image shuffled IT target matrix conditions. B) Like in A but for COCO images. In both plots, the black cross represents the average base model position, the black X marks a CORnet-S adversarially trained on HVM images, and the heavy blue line is a sliding X, Y average of all conditions merely to visually highlight trends. Five seeds for each condition are plotted.
|
| 128 |
+

|
| 129 |
+
Loss function components (figure legend): IT + Classification; Shuffled IT + Classification; Random IT + Classification
|
| 130 |
+
|
| 131 |
+

|
| 132 |
+
|
| 133 |
+
Last, we tested the IT neural similarity of our HVM image adversarially trained models and find that they do not follow the general correlation shown in figure 4 for IT aligned models vs. adversarial accuracy. Interestingly, the adversarially trained models are slightly more similar to IT than standard models, but significantly higher than standard models on HVM adversarial accuracy and significantly lower on COCO adversarial accuracy. We take this to indicate that there are multiple possible ways to become robust to adversarial attacks, and that adversarial training does not in general induce the same representations as IT alignment.
|
| 134 |
+
|
| 135 |
+

|
| 136 |
+
Figure 5: IT aligned models are more robust than standard models in and out of domain, and more robust than adversarially trained models in out of domain conditions. A) PGD $L_{\infty}$ and $L_{2}$ strength accuracy curves on HVM images for standard trained networks (green) IT aligned networks (blue) and networks adversarially trained (PGD $L_{\infty} \epsilon = 4 / 255$ ) on the IT fitting image labels (orange). B) Like in A but for COCO images. Error shading represents bootstrapped $95\%$ confidence intervals over five training seeds.
|
| 137 |
+
|
| 138 |
+

|
| 139 |
+
|
| 140 |
+
# 4 DISCUSSION
|
| 141 |
+
|
| 142 |
+
Building on prior research in constraining visual object recognition models with early stage visual representations (Li et al., 2019; Dapello et al., 2020; Federer et al., 2020; Safarani et al., 2021), we report here that it is possible to better align the late stage "IT representations" of an object recognition model with the corresponding primate IT representations, and that this improved IT alignment leads to increased human-level behavioral alignment and increased adversarial robustness. In particular, the results show that 1) the method used here is able to develop better neuroscientific models by improving IT alignment in object recognition models even on held out animals and image statistics not seen by the model during the IT neural alignment training procedure, 2) models that are more aligned with macaque IT also have better alignment with human behavioral error patterns across unseen (not shown during training) image statistics but not for unseen object categories, and 3) models more aligned with macaque IT are more robust to adversarial attacks even on unseen image statistics. Interestingly, however, we observed that being more adversarially robust (through adversarial training) does not lead to significantly more IT neural similarity.
|
| 143 |
+
|
| 144 |
+
These empirical observations raise a number of important questions for future research. While there are clear gains in robustness from our procedure, we note that the overall magnitude is relatively small. How much adversarial robustness could we expect to gain if we perfectly fit IT? This question hinges on how adversarially robust primate behavior really is, an active area of research (Guo et al., 2022; Elsayed et al., 2018; Yuan et al., 2020). Guo et al. (2022) is particularly interesting with respect to our work – while they find that individual neurons in IT are not particularly robust when compared to individual neurons in adversarially trained networks, our work here indicates that population geometry, not individual neuronal sensitivity, might play a critical role in robustness. We find it intriguing that aligning IT representations in our models to empirically measured macaque IT responses has no effect or even a negative effect on behavioral alignment for objects not present in the IT fitting image-set, a noteworthy limitation of our approach. We speculate that this is due to the small range of categories covered in our IT training set, which limits the span of neural representational space that those experiments were able to sample. In that regard, it would be informative to get a sense of the scaling laws (Kaplan et al., 2020) for how much neural data (in terms of images, neurons, trials, or object categories) needs to be absorbed into a model before it behaves in a truly general, more human-like fashion for any instance of image categories or statistics. Other avenues for further exploration include comparisons of behavioral alignment on a more diverse panel of benchmarks (Bowers et al., 2022), different alignment metrics to optimize, such as deep canonical correlation (Pirlot et al., 2022), or including representation stochasticity as in Dapello et al. (2020). Overall, our results provide further support for the framework of constraining and optimizing models with empirical data from the primate brain to make them more robust and well aligned with human behavior (Sinz et al., 2019).
|
| 145 |
+
|
| 146 |
+
# REFERENCES
|
| 147 |
+
|
| 148 |
+
P. Bashivan, K. Kar, and J. J. DiCarlo. Neural population control via deep image synthesis. Science, 364(6439), May 2019.
|
| 149 |
+
J. Bowers, G. Malhotra, M. Dujmović, M. Llera, C. Tsvetkov, V. Biscione, G. Puebla, F. Adolfi, J. Hummel, R. Heaton, B. Evans, J. Mitchell, and R. Blything. Deep problems with neural network models of human vision. 04 2022. doi: 10.31234/osf.io/5zf4s.
|
| 150 |
+
W. Brendel, J. Rauber, M. Kümmerer, I. Ustyuzhaninov, and M. Bethge. Accurate, reliable and fast robustness evaluation. July 2019.
|
| 151 |
+
J. Buckman, A. Roy, C. Raffel, and I. Goodfellow. Thermometer encoding: One hot way to resist adversarial examples. Feb. 2018.
|
| 152 |
+
N. Carlini and D. Wagner. Towards evaluating the robustness of neural networks. Aug. 2016.
|
| 153 |
+
P.-Y. Chen, Y. Sharma, H. Zhang, J. Yi, and C.-J. Hsieh. EAD: Elastic-Net attacks to deep neural networks via adversarial examples. Sept. 2017.
|
| 154 |
+
D. L. K. Yamins, H. Hong, C. F. Cadieu, and J. J. DiCarlo. Hierarchical modular optimization of convolutional networks achieves representations similar to macaque IT and human ventral stream. Advances in Neural Information Processing Systems 26 (NIPS 2013), 2013.
|
| 155 |
+
J. Dapello, T. Marques, M. Schrimpf, F. Geiger, D. D. Cox, and J. J. DiCarlo. Simulating a primary visual cortex at the front of CNNs improves robustness to image perturbations. page Advances in Neural Information Processing Systems 33 (NeurIPS 2020). Neurips, June 2020.
|
| 156 |
+
N. Das, M. Shanbhogue, S.-T. Chen, F. Hohman, L. Chen, M. E. Kounavis, and D. H. Chau. Keeping the bad guys out: Protecting and vaccinating deep learning with JPEG compression. May 2017.
|
| 157 |
+
J. Deng, W. Dong, R. Socher, L. Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255, June 2009.
|
| 158 |
+
G. S. Dhillon, K. Azizzadenesheli, Z. C. Lipton, J. D. Bernstein, J. Kossaifi, A. Khanna, and A. Anandkumar. Stochastic activation pruning for robust adversarial defense. Feb. 2018.
|
| 159 |
+
J. J. DiCarlo, D. Zoccolan, and N. C. Rust. How does the brain solve visual object recognition? Neuron, 73(3):415-434, Feb. 2012.
|
| 160 |
+
A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. Oct. 2020.
|
| 161 |
+
G. Elsayed, S. Shankar, B. Cheung, N. Papernot, A. Kurakin, I. Goodfellow, and J. Sohl-Dickstein. Adversarial examples that fool both computer vision and Time-Limited humans. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 3910-3920. Curran Associates, Inc., 2018.
|
| 162 |
+
W. Falcon et al. Pytorch lightning. *GitHub. Note: https://github.com/PyTorchLightning/pytorch-lightning*, 3, 2019.
|
| 163 |
+
C. Federer, H. Xu, A. Fyshe, and J. Zylberberg. Improved object recognition using neural networks trained to mimic the brain's statistical properties. *Neural Netw.*, 2020.
|
| 164 |
+
F. Geiger, M. Schrimpf, T. Marques, and J. J. DiCarlo. Wiring up vision: Minimizing supervised synaptic updates needed to produce a primate ventral stream. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=g1SzIRLQXMM.
|
| 165 |
+
R. Geirhos, K. Narayanappa, B. Mitzkus, T. Thieringer, M. Bethge, F. A. Wichmann, and W. Brendel. Partial success in closing the gap between human and machine vision. Adv. Neural Inf. Process. Syst., 34, 2021.
|
| 166 |
+
|
| 167 |
+
J. Guerguiev, T. P. Lillicrap, and B. A. Richards. Towards deep learning with segregated dendrites. *eLife*, 6, Dec. 2017.
|
| 168 |
+
C. Guo, M. Rana, M. Cisse, and L. van der Maaten. Countering adversarial images using input transformations. Feb. 2018.
|
| 169 |
+
C. Guo, M. J. Lee, G. Leclerc, J. Dapello, Y. Rao, A. Madry, and J. J. DiCarlo. Adversarially trained neural representations may already be as robust as corresponding biological neural representations. June 2022.
|
| 170 |
+
H. Hasani, M. Soleymani, and H. Aghajan. Surround Modulation: A Bio-inspired Connectivity Structure for Convolutional Neural Networks. NeurIPS, (NeurIPS):15877-15888, 2019.
|
| 171 |
+
D. Hassabis, D. Kumaran, C. Summerfield, and M. Botvinick. Neuroscience-Inspired Artificial Intelligence. *Neuron*, 95(2):245–258, 2017. ISSN 10974199. doi: 10.1016/j.neuron.2017.06.011.
|
| 172 |
+
K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. Proceedings of the IEEE International Conference on Computer Vision, 2015 Inter:1026-1034, 2015a. ISSN 15505499. doi: 10.1109/ICCV.2015.123.
|
| 173 |
+
K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. Dec. 2015b.
|
| 174 |
+
H. Hong, D. L. K. Yamins, N. J. Majaj, and J. J. DiCarlo. Explicit information for category-orthogonal object properties increases along the ventral stream. Nat. Neurosci., 19(4):613-622, Apr. 2016.
|
| 175 |
+
J. Kaplan, S. McCandlish, T. Henighan, T. B. Brown, B. Chess, R. Child, S. Gray, A. Radford, J. Wu, and D. Amodei. Scaling laws for neural language models. Jan. 2020.
|
| 176 |
+
K. Kar and J. J. DiCarlo. Fast recurrent processing via ventrolateral prefrontal cortex is needed by the primate ventral stream for robust core visual object recognition. Neuron, 109(1):164-176, 2021.
|
| 177 |
+
K. Kar, J. Kubilius, K. Schmidt, E. B. Issa, and J. J. DiCarlo. Evidence that recurrent circuits are critical to the ventral stream's execution of core object recognition behavior. Nature neuroscience, 22(6):974-983, 2019.
|
| 178 |
+
S.-M. Khaligh-Razavi and N. Kriegeskorte. Deep supervised, but not unsupervised, models may explain IT cortical representation. PLoS Comput. Biol., 10(11):e1003915, Nov. 2014.
|
| 179 |
+
S. Kornblith, M. Norouzi, H. Lee, and G. Hinton. Similarity of neural network representations revisited. May 2019.
|
| 180 |
+
A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 1097-1105. Curran Associates, Inc., 2012.
|
| 181 |
+
J. Kubilius, M. Schrimpf, K. Kar, H. Hong, N. J. Majaj, R. Rajalingham, E. B. Issa, P. Bashivan, J. Prescott-Roy, K. Schmidt, A. Nayebi, D. Bear, D. L. K. Yamins, and J. J. DiCarlo. Brain-Like object recognition with High-Performing shallow recurrent ANNs. Sept. 2019.
|
| 182 |
+
Z. Li, W. Brendel, E. Y. Walker, E. Cobos, T. Muhammad, J. Reimer, M. Bethge, F. H. Sinz, X. Pitkow, and A. S. Tolias. Learning from brains how to regularize machines. Nov. 2019.
|
| 183 |
+
T.-Y. Lin, M. Maire, S. Belongie, L. Bourdev, R. Girshick, J. Hays, P. Perona, D. Ramanan, C. Lawrence Zitnick, and P. Dollar. Microsoft COCO: Common objects in context. May 2014.
|
| 184 |
+
G. W. Lindsay and K. D. Miller. How biological attention mechanisms improve task performance in a large-scale visual system model. *eLife*, 7, Oct. 2018.
|
| 185 |
+
X. Liu, M. Cheng, H. Zhang, and C.-J. Hsieh. Towards robust neural networks via random self-ensemble. Dec. 2017.
|
| 186 |
+
Z. Liu, H. Mao, C.-Y. Wu, C. Feichtenhofer, T. Darrell, and S. Xie. A convnet for the 2020s. 2022.
|
| 187 |
+
W. Lotter, G. Kreiman, and D. Cox. Deep predictive coding networks for video prediction and unsupervised learning. May 2016.
|
| 188 |
+
|
| 189 |
+
A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. Towards deep learning models resistant to adversarial attacks. June 2017.
|
| 190 |
+
N. J. Majaj, H. Hong, E. A. Solomon, and J. J. DiCarlo. Simple learned weighted sums of inferior temporal neuronal firing rates accurately predict human core object recognition performance. *J. Neurosci.*, 35(39):13402-13418, Sept. 2015.
|
| 191 |
+
A. H. Marblestone, G. Wayne, and K. P. Kording. Toward an integration of deep learning and neuroscience. Front. Comput. Neurosci., 10:94, Sept. 2016.
|
| 192 |
+
C. Michaelis, B. Mitzkus, R. Geirhos, E. Rusak, O. Bringmann, A. S. Ecker, M. Bethge, and W. Brendel. Benchmarking Robustness in Object Detection: Autonomous Driving when Winter is Coming. pages 1-23, 2019. URL http://arxiv.org/abs/1907.07484.
|
| 193 |
+
A. Nayebi and S. Ganguli. Biologically inspired protection of deep networks from adversarial attacks. Mar. 2017.
|
| 194 |
+
M.-I. Nicolae, M. Sinn, M. N. Tran, B. Buesser, A. Rawat, M. Wistuba, V. Zantedeschi, N. Baracaldo, B. Chen, H. Ludwig, I. Molloy, and B. Edwards. Adversarial robustness toolbox v1.2.0. CoRR, 1807.01069, 2018. URL https://arxiv.org/pdf/1807.01069.
|
| 195 |
+
A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer. Automatic differentiation in PyTorch. Oct. 2017.
|
| 196 |
+
C. Pirlot, R. Gerum, C. Efird, J. Zylberberg, and A. Fyshe. Improving the accuracy and robustness of cnns using a deep cca neural data regularizer. 09 2022. doi: 10.48550/arXiv.2209.02582.
|
| 197 |
+
R. Rajalingham, K. Schmidt, and J. J. DiCarlo. Comparison of Object Recognition Behavior in Human and Monkey. Journal of Neuroscience, 35(35):12127-12136, 2015. ISSN 0270-6474. doi: 10.1523/JNEUROSCI.0573-15.2015. URL http://www.jneurosci.org/cgi/doi/10.1523/JNEUROSCI.0573-15.2015.
|
| 198 |
+
R. Rajalingham, E. B. Issa, P. Bashivan, K. Kar, K. Schmidt, and J. J. DiCarlo. Large-Scale, High-Resolution comparison of the core visual object recognition behavior of humans, monkeys, and State-of-the-Art deep artificial neural networks. *J. Neurosci.*, 38(33):7255–7269, Aug. 2018.
|
| 199 |
+
R. Rajalingham, K. Kar, S. Sanghavi, S. Dehaene, and J. J. DiCarlo. The inferior temporal cortex is a potential cortical precursor of orthographic processing in untrained monkeys. Nature communications, 11(1):1-13, 2020.
|
| 200 |
+
A. Riedel. Bag of tricks for training brain-like deep neural networks. In *Brain-Score Workshop*, 2022. URL https://openreview.net/forum?id=Sudzh-vWQ-c.
|
| 201 |
+
J. Rony, L. G. Hafemann, L. S. Oliveira, I. Ben Ayed, R. Sabourin, and E. Granger. Decoupling direction and norm for efficient Gradient-Based L2 adversarial attacks and defenses. Nov. 2018.
|
| 202 |
+
S. Safarani, A. Nix, K. Willeke, S. A. Cadena, K. Restivo, G. Denfield, A. S. Tolias, and F. H. Sinz. Towards robust vision by multi-task learning on monkey visual cortex. July 2021.
|
| 203 |
+
M. Schrimpf, J. Kubilius, H. Hong, N. J. Majaj, R. Rajalingham, E. B. Issa, K. Kar, P. Bashivan, J. Prescott-Roy, K. Schmidt, D. L. K. Yamins, and J. J. DiCarlo. Brain-Score: Which artificial neural network for object recognition is most Brain-Like? Sept. 2018.
|
| 204 |
+
M. Schrimpf, J. Kubilius, M. J. Lee, N. A. R. Murty, R. Ajemian, and J. J. DiCarlo. Integrative benchmarking to advance neurally mechanistic models of human intelligence. Neuron, 2020. URL: https://www.cell.com/neuron/fulltext/S0896-6273(20)30605-x.
|
| 205 |
+
K. Simonyan and A. Zisserman. Very deep convolutional networks for Large-Scale image recognition. Sept. 2014.
|
| 206 |
+
F. H. Sinz, X. Pitkow, J. Reimer, M. Bethge, and A. S. Tolias. Engineering a less artificial intelligence. *Neuron*, 103(6):967–979, Sept. 2019.
|
| 207 |
+
Y. Song, T. Kim, S. Nowozin, S. Ermon, and N. Kushman. PixelDefend: Leveraging generative models to understand and defend against adversarial examples. Oct. 2017.
|
| 208 |
+
|
| 209 |
+
C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. Dec. 2013.
|
| 210 |
+
C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. Sept. 2014.
|
| 211 |
+
H. Tang, M. Schrimpf, W. Lotter, C. Moerman, A. Paredes, J. Ortega Caro, W. Hardesty, D. Cox, and G. Kreiman. Recurrent computations for visual pattern completion. Proceedings of the National Academy of Sciences, 115(35):8835-8840, 2018. ISSN 0027-8424. doi: 10.1073/pnas.1719397115.
|
| 212 |
+
W. Xu, D. Evans, and Y. Qi. Feature squeezing: Detecting adversarial examples in deep neural networks. arXiv [cs.CV], Apr. 2017.
|
| 213 |
+
L. Yuan, W. Xiao, G. Dellafererra, G. Kreiman, F. E. H. Tay, J. Feng, and M. S. Livingstone. Fooling the primate brain with minimal, targeted image manipulation. Nov. 2020.
|
| 214 |
+
A. M. Zador. A critique of pure learning and what artificial neural networks can learn from animal brains. Nat. Commun., 10(1):3770, Aug. 2019.
|
2023/Aligning Model and Macaque Inferior Temporal Cortex Representations Improves Model-to-Human Behavioral Alignment and Adversarial Robustness/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:48dcbe44678a6f3a1b28014d390dd2f1d01640a6b386f20c7ccdd7f067a0d8d2
|
| 3 |
+
size 318214
|
2023/Aligning Model and Macaque Inferior Temporal Cortex Representations Improves Model-to-Human Behavioral Alignment and Adversarial Robustness/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
2023/AutoGT_ Automated Graph Transformer Architecture Search/d4f0ae85-f4df-4a55-84ac-a8865afd4325_content_list.json
ADDED
|
@@ -0,0 +1,1968 @@
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "AUTOGT: AUTOMATED GRAPH TRANSFORMER ARCHITECTURE SEARCH",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
171,
|
| 8 |
+
99,
|
| 9 |
+
828,
|
| 10 |
+
147
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Zizhao Zhang $^{1}$ , Xin Wang $^{1,2*}$ , Chaoyu Guan $^{1}$ , Ziwei Zhang $^{1}$ , Haoyang Li $^{1}$ , Wenwu Zhu $^{1*}$",
|
| 17 |
+
"bbox": [
|
| 18 |
+
179,
|
| 19 |
+
167,
|
| 20 |
+
823,
|
| 21 |
+
184
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "<sup>1</sup>Department of Computer Science and Technology, Tsinghua University",
|
| 28 |
+
"bbox": [
|
| 29 |
+
183,
|
| 30 |
+
185,
|
| 31 |
+
658,
|
| 32 |
+
199
|
| 33 |
+
],
|
| 34 |
+
"page_idx": 0
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"text": "$^{2}$ THU-Bosch JCML Center, Tsinghua University",
|
| 39 |
+
"bbox": [
|
| 40 |
+
183,
|
| 41 |
+
199,
|
| 42 |
+
506,
|
| 43 |
+
214
|
| 44 |
+
],
|
| 45 |
+
"page_idx": 0
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "text",
|
| 49 |
+
"text": "{zzz22, guancy19, lihy18}@mails.tsinghua.edu.cn",
|
| 50 |
+
"bbox": [
|
| 51 |
+
183,
|
| 52 |
+
214,
|
| 53 |
+
598,
|
| 54 |
+
228
|
| 55 |
+
],
|
| 56 |
+
"page_idx": 0
|
| 57 |
+
},
|
| 58 |
+
{
|
| 59 |
+
"type": "text",
|
| 60 |
+
"text": "{xin_wang, zwzhang, wwzhu}@tsinghua.edu.cn",
|
| 61 |
+
"bbox": [
|
| 62 |
+
183,
|
| 63 |
+
228,
|
| 64 |
+
553,
|
| 65 |
+
242
|
| 66 |
+
],
|
| 67 |
+
"page_idx": 0
|
| 68 |
+
},
|
| 69 |
+
{
|
| 70 |
+
"type": "text",
|
| 71 |
+
"text": "ABSTRACT",
|
| 72 |
+
"text_level": 1,
|
| 73 |
+
"bbox": [
|
| 74 |
+
450,
|
| 75 |
+
277,
|
| 76 |
+
545,
|
| 77 |
+
292
|
| 78 |
+
],
|
| 79 |
+
"page_idx": 0
|
| 80 |
+
},
|
| 81 |
+
{
|
| 82 |
+
"type": "text",
|
| 83 |
+
"text": "Although Transformer architectures have been successfully applied to graph data with the advent of Graph Transformer, the current design of Graph Transformers still heavily relies on human labor and expertise knowledge to decide on proper neural architectures and suitable graph encoding strategies at each Transformer layer. In literature, there have been some works on the automated design of Transformers focusing on non-graph data such as texts and images without considering graph encoding strategies, which fail to handle the non-euclidean graph data. In this paper, we study the problem of automated graph Transformers, for the first time. However, solving these problems poses the following challenges: i) how can we design a unified search space for graph Transformer, and ii) how to deal with the coupling relations between Transformer architectures and the graph encodings of each Transformer layer. To address these challenges, we propose Automated Graph Transformer (AutoGT), a neural architecture search framework that can automatically discover the optimal graph Transformer architectures by joint optimization of Transformer architecture and graph encoding strategies. Specifically, we first propose a unified graph Transformer formulation that can represent most state-of-the-art graph Transformer architectures. Based upon the unified formulation, we further design the graph Transformer search space that includes both candidate architectures and various graph encodings. To handle the coupling relations, we propose a novel encoding-aware performance estimation strategy by gradually training and splitting the supernets according to the correlations between graph encodings and architectures. The proposed strategy can provide a more consistent and fine-grained performance prediction when evaluating the jointly optimized graph encodings and architectures. Extensive experiments and ablation studies show that our proposed AutoGT gains sufficient improvement over state-of-the-art hand-crafted baselines on all datasets, demonstrating its effectiveness and wide applicability.",
|
| 84 |
+
"bbox": [
|
| 85 |
+
228,
|
| 86 |
+
310,
|
| 87 |
+
769,
|
| 88 |
+
686
|
| 89 |
+
],
|
| 90 |
+
"page_idx": 0
|
| 91 |
+
},
|
| 92 |
+
{
|
| 93 |
+
"type": "text",
|
| 94 |
+
"text": "1 INTRODUCTION",
|
| 95 |
+
"text_level": 1,
|
| 96 |
+
"bbox": [
|
| 97 |
+
173,
|
| 98 |
+
714,
|
| 99 |
+
336,
|
| 100 |
+
729
|
| 101 |
+
],
|
| 102 |
+
"page_idx": 0
|
| 103 |
+
},
|
| 104 |
+
{
|
| 105 |
+
"type": "text",
|
| 106 |
+
"text": "Recently, designing Transformer for graph data has attracted intensive research interests (Dwivedi & Bresson, 2020; Ying et al., 2021). As a powerful architecture to extract meaningful information from relational data, the graph Transformers have been successfully applied in natural language processing (Zhang & Zhang, 2020; Cai & Lam, 2020; Wang et al., 2023), social networks (Hu et al., 2020b), chemistry (Chen et al., 2019; Rong et al., 2020), recommendation (Xia et al., 2021) etc. However, developing a state-of-the-art graph Transformer for downstream tasks is still challenging because it heavily relies on the tedious trial-and-error hand-crafted human design, including determining the best Transformer architecture and the choices of proper graph encoding strategies to utilize, etc. In addition, the inefficient hand-crafted design will also inevitably introduce human bias, which leads to sub-optimal solutions for developing graph transformers. In literature, there have been works on automatically searching for the architectures of Transformer, which are designed specifically for data",
|
| 107 |
+
"bbox": [
|
| 108 |
+
169,
|
| 109 |
+
744,
|
| 110 |
+
826,
|
| 111 |
+
900
|
| 112 |
+
],
|
| 113 |
+
"page_idx": 0
|
| 114 |
+
},
|
| 115 |
+
{
|
| 116 |
+
"type": "header",
|
| 117 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 118 |
+
"bbox": [
|
| 119 |
+
171,
|
| 120 |
+
32,
|
| 121 |
+
478,
|
| 122 |
+
47
|
| 123 |
+
],
|
| 124 |
+
"page_idx": 0
|
| 125 |
+
},
|
| 126 |
+
{
|
| 127 |
+
"type": "page_footnote",
|
| 128 |
+
"text": "*Corresponding authors",
|
| 129 |
+
"bbox": [
|
| 130 |
+
191,
|
| 131 |
+
910,
|
| 132 |
+
336,
|
| 133 |
+
924
|
| 134 |
+
],
|
| 135 |
+
"page_idx": 0
|
| 136 |
+
},
|
| 137 |
+
{
|
| 138 |
+
"type": "page_number",
|
| 139 |
+
"text": "1",
|
| 140 |
+
"bbox": [
|
| 141 |
+
493,
|
| 142 |
+
948,
|
| 143 |
+
503,
|
| 144 |
+
959
|
| 145 |
+
],
|
| 146 |
+
"page_idx": 0
|
| 147 |
+
},
|
| 148 |
+
{
|
| 149 |
+
"type": "text",
|
| 150 |
+
"text": "in Natural Language Processing (Xu et al., 2021) and Computer Vision (Chen et al., 2021b). These works only focus on non-graph data without considering the graph encoding strategies which are shown to be very important in capturing graph information (Min et al., 2022a), thus failing to handle graph data with non-euclidean properties.",
|
| 151 |
+
"bbox": [
|
| 152 |
+
169,
|
| 153 |
+
103,
|
| 154 |
+
823,
|
| 155 |
+
161
|
| 156 |
+
],
|
| 157 |
+
"page_idx": 1
|
| 158 |
+
},
|
| 159 |
+
{
|
| 160 |
+
"type": "text",
|
| 161 |
+
"text": "In this paper, we study the problem of automated graph Transformers for the first time. However, previous work (Min et al., 2022a) has demonstrated that a good graph Transformer architecture is expected to not only select proper neural architectures for every layer but also utilize appropriate encoding strategies capable of capturing various meaningful graph structure information to boost graph Transformer performance. Therefore, there exist two critical challenges for automated graph Transformers:",
|
| 162 |
+
"bbox": [
|
| 163 |
+
169,
|
| 164 |
+
166,
|
| 165 |
+
826,
|
| 166 |
+
251
|
| 167 |
+
],
|
| 168 |
+
"page_idx": 1
|
| 169 |
+
},
|
| 170 |
+
{
|
| 171 |
+
"type": "list",
|
| 172 |
+
"sub_type": "text",
|
| 173 |
+
"list_items": [
|
| 174 |
+
"- How to design a unified search space appropriate for graph Transformer? A good graph Transformer needs to handle the non-euclidean graph data, requiring explicit consideration of node relations within the search space, where the architectures, as well as the encoding strategies, can be incorporated simultaneously.",
|
| 175 |
+
"- How to conduct encoding-aware architecture search strategy to tackle the coupling relations between Transformer architectures and graph encoding? Although one simple solution may resort to a one-shot formulation enabling efficient searching in vanilla Transformer operation space which can change its functionality during supernet training, the graph encoding strategies differ from vanilla Transformer in containing certain meanings related to structure information. How to train an encoding-aware supernet specifically designed for graphs is challenging."
|
| 176 |
+
],
|
| 177 |
+
"bbox": [
|
| 178 |
+
171,
|
| 179 |
+
260,
|
| 180 |
+
826,
|
| 181 |
+
404
|
| 182 |
+
],
|
| 183 |
+
"page_idx": 1
|
| 184 |
+
},
|
| 185 |
+
{
|
| 186 |
+
"type": "text",
|
| 187 |
+
"text": "To address these challenges, we propose Automated Graph Transformer, AutoGT<sup>1</sup>, a novel neural architecture search method for graph Transformer. In particular, we propose a unified graph Transformer formulation to cover most of the state-of-the-art graph Transformer architectures in our search space. Besides the general search space of the Transformer with hidden dimension, feed-forward dimension, number of attention head, attention head dimension, and number of layers, our unified search space introduces two new kinds of augmentation strategies to attain graph information: node attribution augmentation and attention map augmentation. To handle the coupling relations, we further propose a novel encoding-aware performance estimation strategy tailored for graphs. As the encoding strategy and architecture have strong coupling relations when generating results, our AutoGT split the supernet based on the important encoding strategy during evaluation to handle the coupling relations. As such, we propose to gradually train and split the supernets according to the most coupled augmentation, attention map augmentation, using various supernets to evaluate different architectures in our unified searching space, which can provide a more consistent and fine-grained performance prediction when evaluating the jointly optimized architecture and encoding. In summary, we made the following contributions:",
|
| 188 |
+
"bbox": [
|
| 189 |
+
169,
|
| 190 |
+
412,
|
| 191 |
+
826,
|
| 192 |
+
623
|
| 193 |
+
],
|
| 194 |
+
"page_idx": 1
|
| 195 |
+
},
|
| 196 |
+
{
|
| 197 |
+
"type": "list",
|
| 198 |
+
"sub_type": "text",
|
| 199 |
+
"list_items": [
|
| 200 |
+
"- We propose Automated Graph Transformer, AutoGT, a novel neural architecture search framework for graph Transformer, which can automatically discover the optimal graph Transformer architectures for various down-streaming tasks. To the best of our knowledge, AutoGT is the first automated graph Transformer framework.",
|
| 201 |
+
"- We design a unified search space containing both the Transformer architectures and the essential graph encoding strategies, covering most of the state-of-the-art graph Transformer, which can lead to global optimal for structure information excavation and node information retrieval.",
|
| 202 |
+
"- We propose an encoding-aware performance estimation strategy tailored for graphs to provide a more accurate and consistent performance prediction without bringing heavier computation costs. The encoding strategy and the Transformer architecture are jointly optimized to discover the best graph Transformers.",
|
| 203 |
+
"- The extensive experiments show that our proposed AutoGT model can significantly outperform the state-of-the-art baselines on graph classification tasks over several datasets with different scales."
|
| 204 |
+
],
|
| 205 |
+
"bbox": [
|
| 206 |
+
169,
|
| 207 |
+
631,
|
| 208 |
+
823,
|
| 209 |
+
824
|
| 210 |
+
],
|
| 211 |
+
"page_idx": 1
|
| 212 |
+
},
|
| 213 |
+
{
|
| 214 |
+
"type": "text",
|
| 215 |
+
"text": "2 RELATED WORK",
|
| 216 |
+
"text_level": 1,
|
| 217 |
+
"bbox": [
|
| 218 |
+
171,
|
| 219 |
+
843,
|
| 220 |
+
346,
|
| 221 |
+
859
|
| 222 |
+
],
|
| 223 |
+
"page_idx": 1
|
| 224 |
+
},
|
| 225 |
+
{
|
| 226 |
+
"type": "text",
|
| 227 |
+
"text": "The Graph Transformer. Graph Transformer, as a category of neural networks, enables Transformer to handle graph data (Min et al., 2022a). Several works (Dwivedi & Bresson, 2020; Ying et al.,",
|
| 228 |
+
"bbox": [
|
| 229 |
+
169,
|
| 230 |
+
873,
|
| 231 |
+
826,
|
| 232 |
+
902
|
| 233 |
+
],
|
| 234 |
+
"page_idx": 1
|
| 235 |
+
},
|
| 236 |
+
{
|
| 237 |
+
"type": "header",
|
| 238 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 239 |
+
"bbox": [
|
| 240 |
+
171,
|
| 241 |
+
32,
|
| 242 |
+
478,
|
| 243 |
+
47
|
| 244 |
+
],
|
| 245 |
+
"page_idx": 1
|
| 246 |
+
},
|
| 247 |
+
{
|
| 248 |
+
"type": "page_footnote",
|
| 249 |
+
"text": "Our codes are publicly available at https://github.com/SandMartex/AutoGT",
|
| 250 |
+
"bbox": [
|
| 251 |
+
191,
|
| 252 |
+
909,
|
| 253 |
+
645,
|
| 254 |
+
924
|
| 255 |
+
],
|
| 256 |
+
"page_idx": 1
|
| 257 |
+
},
|
| 258 |
+
{
|
| 259 |
+
"type": "page_number",
|
| 260 |
+
"text": "2",
|
| 261 |
+
"bbox": [
|
| 262 |
+
493,
|
| 263 |
+
946,
|
| 264 |
+
504,
|
| 265 |
+
959
|
| 266 |
+
],
|
| 267 |
+
"page_idx": 1
|
| 268 |
+
},
|
| 269 |
+
{
|
| 270 |
+
"type": "text",
|
| 271 |
+
"text": "2021; Hussain et al., 2021; Zhang et al., 2020; Kreuzer et al., 2021; Shi et al., 2021) propose to pre-calculate some node positional encoding from graph structure and add them to the node attributes after a linear or embedding layer. Some works (Dwivedi & Bresson, 2020; Zhao et al., 2021; Ying et al., 2021; Khoo et al., 2020) also propose to add manually designed graph structural information into the attention matrix in Transformer layers. Others (Yao et al., 2020; Min et al., 2022b) explore the mask mechanism in the attention matrix, masking the influence of non-neighbor nodes. In particular, UniMP (Shi et al., 2021) achieves new state-of-the-art results on OGB (Hu et al., 2020a) datasets, Graphormer (Ying et al., 2021) won first place in KDD Cup Challenge on Large-SCale graph classification by encoding various information about graph structures into graph Transformer.",
|
| 272 |
+
"bbox": [
|
| 273 |
+
169,
|
| 274 |
+
103,
|
| 275 |
+
826,
|
| 276 |
+
229
|
| 277 |
+
],
|
| 278 |
+
"page_idx": 2
|
| 279 |
+
},
|
| 280 |
+
{
|
| 281 |
+
"type": "text",
|
| 282 |
+
"text": "Neural Architecture Search. Neural architecture search has drawn increasing attention in the past few years (Elsken et al., 2019; Zoph & Le, 2017; Ma et al., 2018; Pham et al., 2018; Wei et al., 2021; Cai et al., 2022; Guan et al., 2021b; 2022; Qin et al., 2022a,b; Zhang et al., 2021; ?). There are many efforts to automate the design of Transformers. (So et al., 2019) propose the first automated framework for Transformer in neural machine translation tasks. AutoTrans (Zhu et al., 2021) improves the search efficiency of the NLP Transformer through a one-shot supernet training. NAS-BERT (Xu et al., 2021) further leverages the neural architecture search for big language model distillation and compression. AutoFormer (Chen et al., 2021b) migrates the automation of the Transformer for vision tasks, where they utilize weight-entanglement to improve the consistency of the supernet training. GLiT (Chen et al., 2021a) proposes to search both global and local attention for the Vision Transformer using a hierarchical evolutionary search algorithm. (Chen et al., 2021c) further propose to evolve the search space of the Vision Transformer to solve the exponential explosion problems.",
|
| 283 |
+
"bbox": [
|
| 284 |
+
169,
|
| 285 |
+
236,
|
| 286 |
+
826,
|
| 287 |
+
402
|
| 288 |
+
],
|
| 289 |
+
"page_idx": 2
|
| 290 |
+
},
|
| 291 |
+
{
|
| 292 |
+
"type": "text",
|
| 293 |
+
"text": "3 AUTOMATED GRAPH TRANSFORMER ARCHITECTURE SEARCH (AUTOGT)",
|
| 294 |
+
"text_level": 1,
|
| 295 |
+
"bbox": [
|
| 296 |
+
171,
|
| 297 |
+
424,
|
| 298 |
+
823,
|
| 299 |
+
441
|
| 300 |
+
],
|
| 301 |
+
"page_idx": 2
|
| 302 |
+
},
|
| 303 |
+
{
|
| 304 |
+
"type": "text",
|
| 305 |
+
"text": "To automatically design graph Transformer architectures, we first unify the formulation of current graph Transformers in Section 3.1. Based on the unified formulation, we design the search space tailored for the graph Transformers in Section 3.2. We propose a novel encoding-aware performance estimation strategy in Section 3.3, and introduce our evolutionary search strategy in Section 3.4. The whole algorithm is presented by Figure 2.",
|
| 306 |
+
"bbox": [
|
| 307 |
+
169,
|
| 308 |
+
455,
|
| 309 |
+
823,
|
| 310 |
+
526
|
| 311 |
+
],
|
| 312 |
+
"page_idx": 2
|
| 313 |
+
},
|
| 314 |
+
{
|
| 315 |
+
"type": "text",
|
| 316 |
+
"text": "3.1 THE UNIFIED GRAPH TRANSFORMER FRAMEWORK",
|
| 317 |
+
"text_level": 1,
|
| 318 |
+
"bbox": [
|
| 319 |
+
171,
|
| 320 |
+
542,
|
| 321 |
+
571,
|
| 322 |
+
556
|
| 323 |
+
],
|
| 324 |
+
"page_idx": 2
|
| 325 |
+
},
|
| 326 |
+
{
|
| 327 |
+
"type": "text",
|
| 328 |
+
"text": "Current representative graph Transformer designs can be regarded as improving the input and attention map in Transformer architecture through various graph encoding strategies. We first introduce the basic Transformer architecture and then show how to combine various graph encoding strategies.",
|
| 329 |
+
"bbox": [
|
| 330 |
+
169,
|
| 331 |
+
569,
|
| 332 |
+
410,
|
| 333 |
+
696
|
| 334 |
+
],
|
| 335 |
+
"page_idx": 2
|
| 336 |
+
},
|
| 337 |
+
{
|
| 338 |
+
"type": "text",
|
| 339 |
+
"text": "Let $G = (V,E)$ denote a graph where $V = \\{v_{1},v_{2},\\dots ,v_{n}\\}$ represents the set of nodes and $E = \\{e_1,e_2,\\dots ,e_m\\}$ represents the set of edges, and denote $n = |V|$ and $m = |E|$ as the number of nodes and edges, respectively. Let $\\mathbf{v}_i,i\\in$ $\\{1,\\ldots ,n\\}$ represents the features of node $v_{i}$ ,and $\\mathbf{e}_j,j\\in \\{1,\\dots,m\\}$ represents the features of edge $e_j$",
|
| 340 |
+
"bbox": [
|
| 341 |
+
169,
|
| 342 |
+
700,
|
| 343 |
+
410,
|
| 344 |
+
843
|
| 345 |
+
],
|
| 346 |
+
"page_idx": 2
|
| 347 |
+
},
|
| 348 |
+
{
|
| 349 |
+
"type": "text",
|
| 350 |
+
"text": "3.1.1 BASIC TRANSFORMER",
|
| 351 |
+
"text_level": 1,
|
| 352 |
+
"bbox": [
|
| 353 |
+
171,
|
| 354 |
+
856,
|
| 355 |
+
385,
|
| 356 |
+
871
|
| 357 |
+
],
|
| 358 |
+
"page_idx": 2
|
| 359 |
+
},
|
| 360 |
+
{
|
| 361 |
+
"type": "image",
|
| 362 |
+
"img_path": "images/9cae6319cc46cdc76dbbdb45cfd6868ea58df4a5bda1225dea1678f7449e4d09.jpg",
|
| 363 |
+
"image_caption": [
|
| 364 |
+
"Figure 1: The unified graph Transformer search space. It consists of the Transformer architecture space and the graph specific encoding space. The Transformer architecture search space is detailed in Table 1. The graph specific encoding search space is to decide whether each encoding strategy should be adopted or not and the mask threshold for the attention mask."
|
| 365 |
+
],
|
| 366 |
+
"image_footnote": [],
|
| 367 |
+
"bbox": [
|
| 368 |
+
457,
|
| 369 |
+
561,
|
| 370 |
+
787,
|
| 371 |
+
782
|
| 372 |
+
],
|
| 373 |
+
"page_idx": 2
|
| 374 |
+
},
|
| 375 |
+
{
|
| 376 |
+
"type": "text",
|
| 377 |
+
"text": "As shown in Figure 1, a basic Transformer consists of several stacked blocks, with each block containing two modules, namely the multi-head attention (MHA) module and the feed-forward network (FFN) module.",
|
| 378 |
+
"bbox": [
|
| 379 |
+
169,
|
| 380 |
+
881,
|
| 381 |
+
823,
|
| 382 |
+
924
|
| 383 |
+
],
|
| 384 |
+
"page_idx": 2
|
| 385 |
+
},
|
| 386 |
+
{
|
| 387 |
+
"type": "header",
|
| 388 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 389 |
+
"bbox": [
|
| 390 |
+
171,
|
| 391 |
+
32,
|
| 392 |
+
478,
|
| 393 |
+
47
|
| 394 |
+
],
|
| 395 |
+
"page_idx": 2
|
| 396 |
+
},
|
| 397 |
+
{
|
| 398 |
+
"type": "page_number",
|
| 399 |
+
"text": "3",
|
| 400 |
+
"bbox": [
|
| 401 |
+
493,
|
| 402 |
+
948,
|
| 403 |
+
503,
|
| 404 |
+
959
|
| 405 |
+
],
|
| 406 |
+
"page_idx": 2
|
| 407 |
+
},
|
| 408 |
+
{
|
| 409 |
+
"type": "text",
|
| 410 |
+
"text": "At block $l$ , the node representation $\\mathbf{H}^{(l)} \\in \\mathbb{R}^{n \\times d}$ first goes through the MHA module to interact with each other and pass information through self-attention:",
|
| 411 |
+
"bbox": [
|
| 412 |
+
169,
|
| 413 |
+
103,
|
| 414 |
+
823,
|
| 415 |
+
132
|
| 416 |
+
],
|
| 417 |
+
"page_idx": 3
|
| 418 |
+
},
|
| 419 |
+
{
|
| 420 |
+
"type": "equation",
|
| 421 |
+
"text": "\n$$\n\\mathbf {A} _ {h} ^ {(l)} = \\operatorname {s o f t m a x} \\left(\\frac {\\mathbf {Q} _ {h} ^ {(l)} \\mathbf {K} _ {h} ^ {(l) ^ {T}}}{\\sqrt {d _ {k}}}\\right), \\mathbf {O} _ {h} ^ {(l)} = \\mathbf {A} _ {h} ^ {(l)} \\mathbf {V} _ {h} ^ {(l)}, \\tag {1}\n$$\n",
|
| 422 |
+
"text_format": "latex",
|
| 423 |
+
"bbox": [
|
| 424 |
+
339,
|
| 425 |
+
138,
|
| 426 |
+
825,
|
| 427 |
+
178
|
| 428 |
+
],
|
| 429 |
+
"page_idx": 3
|
| 430 |
+
},
|
| 431 |
+
{
|
| 432 |
+
"type": "text",
|
| 433 |
+
"text": "where $\\mathbf{A}_h^{(l)}\\in \\mathbb{R}^{n\\times n}$ is the message passing matrix, $\\mathbf{O}_h^{(l)}$ is the output of the self-attention mechanism of the $h^{th}$ attention head, $h = 1,2,\\dots ,Head$ , Head is the number of attention heads, and $\\mathbf{K}_h^{(l)},\\mathbf{Q}_h^{(l)},\\mathbf{V}_h^{(l)}\\in \\mathbb{R}^{n\\times d_k}$ are the key, query, value calculated as:",
|
| 434 |
+
"bbox": [
|
| 435 |
+
171,
|
| 436 |
+
186,
|
| 437 |
+
826,
|
| 438 |
+
239
|
| 439 |
+
],
|
| 440 |
+
"page_idx": 3
|
| 441 |
+
},
|
| 442 |
+
{
|
| 443 |
+
"type": "equation",
|
| 444 |
+
"text": "\n$$\n\\mathbf {K} _ {h} ^ {(l)}, = \\mathbf {H} ^ {(l)} \\mathbf {W} _ {k, h} ^ {(l)}, \\mathbf {Q} _ {h} ^ {(l)} = \\mathbf {H} ^ {(l)} \\mathbf {W} _ {q, h} ^ {(l)}, \\mathbf {V} _ {h} ^ {(l)} = \\mathbf {H} ^ {(l)} \\mathbf {W} _ {v, h} ^ {(l)}, \\tag {2}\n$$\n",
|
| 445 |
+
"text_format": "latex",
|
| 446 |
+
"bbox": [
|
| 447 |
+
318,
|
| 448 |
+
244,
|
| 449 |
+
825,
|
| 450 |
+
266
|
| 451 |
+
],
|
| 452 |
+
"page_idx": 3
|
| 453 |
+
},
|
| 454 |
+
{
|
| 455 |
+
"type": "text",
|
| 456 |
+
"text": "where $\\mathbf{W}_{k,h}^{(l)},\\mathbf{W}_{q,h}^{(l)},\\mathbf{W}_{v,h}^{(l)}\\in \\mathbb{R}^{d\\times d_k}$ are learnable parameters. Then, the representations of different heads are concatenated and further transformed as:",
|
| 457 |
+
"bbox": [
|
| 458 |
+
171,
|
| 459 |
+
273,
|
| 460 |
+
823,
|
| 461 |
+
306
|
| 462 |
+
],
|
| 463 |
+
"page_idx": 3
|
| 464 |
+
},
|
| 465 |
+
{
|
| 466 |
+
"type": "equation",
|
| 467 |
+
"text": "\n$$\n\\mathbf {O} ^ {(l)} = \\left(\\mathbf {O} _ {1} ^ {(l)} \\circ \\mathbf {O} _ {2} ^ {(l)} \\circ \\dots \\circ \\mathbf {O} _ {H e a d} ^ {(l)}\\right) \\mathbf {W} _ {O} ^ {(l)} + \\mathbf {H} ^ {(l)}, \\tag {3}\n$$\n",
|
| 468 |
+
"text_format": "latex",
|
| 469 |
+
"bbox": [
|
| 470 |
+
334,
|
| 471 |
+
311,
|
| 472 |
+
825,
|
| 473 |
+
333
|
| 474 |
+
],
|
| 475 |
+
"page_idx": 3
|
| 476 |
+
},
|
| 477 |
+
{
|
| 478 |
+
"type": "text",
|
| 479 |
+
"text": "where $\\mathbf{W}_O^{(l)}\\in \\mathbb{R}^{(d_k*Head)\\times d_t}$ is the parameter and $\\mathbf{O}^{(l)}$ is the multi-head result. Then, the attended representation will go through the FFN module to further refine the information of each node:",
|
| 480 |
+
"bbox": [
|
| 481 |
+
169,
|
| 482 |
+
340,
|
| 483 |
+
823,
|
| 484 |
+
371
|
| 485 |
+
],
|
| 486 |
+
"page_idx": 3
|
| 487 |
+
},
|
| 488 |
+
{
|
| 489 |
+
"type": "equation",
|
| 490 |
+
"text": "\n$$\n\\mathbf {H} ^ {(l + 1)} = \\sigma (\\mathbf {O} ^ {(l)} \\mathbf {W} _ {1} ^ {(l)}) \\mathbf {W} _ {2} ^ {(l)}, \\tag {4}\n$$\n",
|
| 491 |
+
"text_format": "latex",
|
| 492 |
+
"bbox": [
|
| 493 |
+
403,
|
| 494 |
+
377,
|
| 495 |
+
825,
|
| 496 |
+
397
|
| 497 |
+
],
|
| 498 |
+
"page_idx": 3
|
| 499 |
+
},
|
| 500 |
+
{
|
| 501 |
+
"type": "text",
|
| 502 |
+
"text": "where $\\mathbf{O}^{(l)}\\in \\mathbb{R}^{n\\times d_t}$ is the output, $\\mathbf{W}_1^{(l)}\\in \\mathbb{R}^{d_k\\times d_h}$ , $\\mathbf{W}_2^{(l)}\\in \\mathbb{R}^{d_h\\times d}$ are weight matrices.",
|
| 503 |
+
"bbox": [
|
| 504 |
+
173,
|
| 505 |
+
404,
|
| 506 |
+
758,
|
| 507 |
+
424
|
| 508 |
+
],
|
| 509 |
+
"page_idx": 3
|
| 510 |
+
},
|
| 511 |
+
{
|
| 512 |
+
"type": "text",
|
| 513 |
+
"text": "As for the input of the first block, we concatenate all the node features $\\mathbf{H}^{(0)} = [\\mathbf{v}_1,\\dots,\\mathbf{v}_n]$ . After $L$ blocks, we obtain the final representation of each node $\\mathbf{H}^{(L)}$ .",
|
| 514 |
+
"bbox": [
|
| 515 |
+
169,
|
| 516 |
+
430,
|
| 517 |
+
823,
|
| 518 |
+
460
|
| 519 |
+
],
|
| 520 |
+
"page_idx": 3
|
| 521 |
+
},
|
| 522 |
+
{
|
| 523 |
+
"type": "text",
|
| 524 |
+
"text": "3.1.2 GRAPHENCODINGSTRATEGY",
|
| 525 |
+
"text_level": 1,
|
| 526 |
+
"bbox": [
|
| 527 |
+
171,
|
| 528 |
+
476,
|
| 529 |
+
439,
|
| 530 |
+
491
|
| 531 |
+
],
|
| 532 |
+
"page_idx": 3
|
| 533 |
+
},
|
| 534 |
+
{
|
| 535 |
+
"type": "text",
|
| 536 |
+
"text": "From Section 3.1.1, we can observe that directly using the basic Transformer on graphs can only process node attributes, ignoring important edge attributes and graph topology information in the graph. To make the Transformer architecture aware of the graph structure, several works resort to various graph encoding strategies, which can be divided into two kinds of categories: node attribution augmentation and attention map augmentation.",
|
| 537 |
+
"bbox": [
|
| 538 |
+
169,
|
| 539 |
+
501,
|
| 540 |
+
823,
|
| 541 |
+
571
|
| 542 |
+
],
|
| 543 |
+
"page_idx": 3
|
| 544 |
+
},
|
| 545 |
+
{
|
| 546 |
+
"type": "text",
|
| 547 |
+
"text": "The node attribution augmentations take the whole graph $G$ as input and generate the topology-aware features $Enc_{node}(G)$ for each node to directly improve the node representations:",
|
| 548 |
+
"bbox": [
|
| 549 |
+
169,
|
| 550 |
+
577,
|
| 551 |
+
823,
|
| 552 |
+
604
|
| 553 |
+
],
|
| 554 |
+
"page_idx": 3
|
| 555 |
+
},
|
| 556 |
+
{
|
| 557 |
+
"type": "equation",
|
| 558 |
+
"text": "\n$$\n\\mathbf {H} _ {a u g} ^ {(l)} = \\mathbf {H} ^ {(l)} + E n c _ {n o d e} (G). \\tag {5}\n$$\n",
|
| 559 |
+
"text_format": "latex",
|
| 560 |
+
"bbox": [
|
| 561 |
+
403,
|
| 562 |
+
612,
|
| 563 |
+
825,
|
| 564 |
+
631
|
| 565 |
+
],
|
| 566 |
+
"page_idx": 3
|
| 567 |
+
},
|
| 568 |
+
{
|
| 569 |
+
"type": "text",
|
| 570 |
+
"text": "On the other hand, the attention map augmentations generate an additional attention map $Enc_{map}(G)$ , which represents the relationships of any two nodes and improves the attention map generated by self-attention in Eq equation 1 as:",
|
| 571 |
+
"bbox": [
|
| 572 |
+
169,
|
| 573 |
+
638,
|
| 574 |
+
826,
|
| 575 |
+
678
|
| 576 |
+
],
|
| 577 |
+
"page_idx": 3
|
| 578 |
+
},
|
| 579 |
+
{
|
| 580 |
+
"type": "equation",
|
| 581 |
+
"text": "\n$$\n\\mathbf {A} _ {h, a u g} ^ {(l)} = \\operatorname {s o f t m a x} \\left(\\frac {\\mathbf {Q} _ {h} ^ {(l)} \\mathbf {K} _ {h} ^ {(l) T}}{\\sqrt {d}} + E n c _ {m a p} (G)\\right). \\tag {6}\n$$\n",
|
| 582 |
+
"text_format": "latex",
|
| 583 |
+
"bbox": [
|
| 584 |
+
338,
|
| 585 |
+
686,
|
| 586 |
+
825,
|
| 587 |
+
724
|
| 588 |
+
],
|
| 589 |
+
"page_idx": 3
|
| 590 |
+
},
|
| 591 |
+
{
|
| 592 |
+
"type": "text",
|
| 593 |
+
"text": "Combining node attribution augmentations and attention map augmentations together, our proposed framework is as follows:",
|
| 594 |
+
"bbox": [
|
| 595 |
+
169,
|
| 596 |
+
739,
|
| 597 |
+
823,
|
| 598 |
+
766
|
| 599 |
+
],
|
| 600 |
+
"page_idx": 3
|
| 601 |
+
},
|
| 602 |
+
{
|
| 603 |
+
"type": "equation",
|
| 604 |
+
"text": "\n$$\n\\mathbf {H} ^ {(l + 1)} = \\sigma \\left(\\operatorname {C o n c a t} \\left(\\operatorname {s o f t m a x} \\left(\\frac {\\mathbf {H} _ {a u g} ^ {(l)} \\mathbf {W} _ {q , h} ^ {(l)} \\left(\\mathbf {H} _ {a u g} ^ {(l)} \\mathbf {W} _ {k , h} ^ {(l)}\\right) ^ {T}}{\\sqrt {d _ {k}}} + E n c _ {m a p} (G)\\right) \\mathbf {H} _ {a u g} ^ {(l)} \\mathbf {W} _ {v, h} ^ {(l)}\\right) \\mathbf {W} _ {1} ^ {(l)}\\right) \\mathbf {W} _ {2} ^ {(l)}. \\tag {7}\n$$\n",
|
| 605 |
+
"text_format": "latex",
|
| 606 |
+
"bbox": [
|
| 607 |
+
171,
|
| 608 |
+
772,
|
| 609 |
+
831,
|
| 610 |
+
821
|
| 611 |
+
],
|
| 612 |
+
"page_idx": 3
|
| 613 |
+
},
|
| 614 |
+
{
|
| 615 |
+
"type": "text",
|
| 616 |
+
"text": "where $\\mathbf{H}_{\\text{aug}}^{(l)} = \\mathbf{H}^{(l)} + \\text{Enc}_{\\text{node}}(G)$ .",
|
| 617 |
+
"bbox": [
|
| 618 |
+
174,
|
| 619 |
+
820,
|
| 620 |
+
419,
|
| 621 |
+
839
|
| 622 |
+
],
|
| 623 |
+
"page_idx": 3
|
| 624 |
+
},
|
| 625 |
+
{
|
| 626 |
+
"type": "text",
|
| 627 |
+
"text": "3.2 THE GRAPH TRANSFORMER SEARCH SPACE",
|
| 628 |
+
"text_level": 1,
|
| 629 |
+
"bbox": [
|
| 630 |
+
171,
|
| 631 |
+
854,
|
| 632 |
+
522,
|
| 633 |
+
869
|
| 634 |
+
],
|
| 635 |
+
"page_idx": 3
|
| 636 |
+
},
|
| 637 |
+
{
|
| 638 |
+
"type": "text",
|
| 639 |
+
"text": "Based on the unified graph Transformer formulation, we propose our unified search space design, which can be decomposed into two parts, i.e., Transformer Architecture space and graph encoding space. Figure 1 shows the unified graph Transformer search space.",
|
| 640 |
+
"bbox": [
|
| 641 |
+
169,
|
| 642 |
+
881,
|
| 643 |
+
826,
|
| 644 |
+
925
|
| 645 |
+
],
|
| 646 |
+
"page_idx": 3
|
| 647 |
+
},
|
| 648 |
+
{
|
| 649 |
+
"type": "header",
|
| 650 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 651 |
+
"bbox": [
|
| 652 |
+
171,
|
| 653 |
+
32,
|
| 654 |
+
478,
|
| 655 |
+
47
|
| 656 |
+
],
|
| 657 |
+
"page_idx": 3
|
| 658 |
+
},
|
| 659 |
+
{
|
| 660 |
+
"type": "page_number",
|
| 661 |
+
"text": "4",
|
| 662 |
+
"bbox": [
|
| 663 |
+
493,
|
| 664 |
+
948,
|
| 665 |
+
504,
|
| 666 |
+
959
|
| 667 |
+
],
|
| 668 |
+
"page_idx": 3
|
| 669 |
+
},
|
| 670 |
+
{
|
| 671 |
+
"type": "image",
|
| 672 |
+
"img_path": "images/49289faa2c965083154f3458cfa28a6ac17915f2c906531edc1622d31c79de96.jpg",
|
| 673 |
+
"image_caption": [
|
| 674 |
+
"Figure 2: The framework of our work. Firstly, we construct the search space for each layer, consisting of the Transformer architecture space (above) and the graph encoding strategy space (below). Then, we carry out our encoding-aware supernet training method in two stages: before splitting, we train a supernet by randomly sampling architectures from the search space, while after splitting, we train multiple subnets (inheriting the weights from the supernet) by randomly sampling architectures with fixed attention map augmentation strategies (except for the attention mask). Finally, we conduct an evolutionary search based on the subnets and obtain our final architecture and results."
|
| 675 |
+
],
|
| 676 |
+
"image_footnote": [],
|
| 677 |
+
"bbox": [
|
| 678 |
+
215,
|
| 679 |
+
103,
|
| 680 |
+
781,
|
| 681 |
+
405
|
| 682 |
+
],
|
| 683 |
+
"page_idx": 4
|
| 684 |
+
},
|
| 685 |
+
{
|
| 686 |
+
"type": "text",
|
| 687 |
+
"text": "3.2.1 TRANSFORMER ARCHITECTURE SPACE",
|
| 688 |
+
"text_level": 1,
|
| 689 |
+
"bbox": [
|
| 690 |
+
171,
|
| 691 |
+
526,
|
| 692 |
+
501,
|
| 693 |
+
540
|
| 694 |
+
],
|
| 695 |
+
"page_idx": 4
|
| 696 |
+
},
|
| 697 |
+
{
|
| 698 |
+
"type": "text",
|
| 699 |
+
"text": "Following Section 3.1.1, we automate five key architecture components for graph Transformer as follows: the number of encoder layers $L$ , the dimension $d$ , the intermediate dimension $d_{t}$ , the hidden dimension $d_{h}$ , the number of attention heads $Head$ , and the attention head dimension $d_{k}$ in graph Transformer. Notice that these five components already cover the most important designs for Transformer architectures.",
|
| 700 |
+
"bbox": [
|
| 701 |
+
169,
|
| 702 |
+
551,
|
| 703 |
+
823,
|
| 704 |
+
621
|
| 705 |
+
],
|
| 706 |
+
"page_idx": 4
|
| 707 |
+
},
|
| 708 |
+
{
|
| 709 |
+
"type": "text",
|
| 710 |
+
"text": "A suitable search space should be not only expressive enough to allow powerful architectures, but also compact enough to enable efficient searches. With this principle in mind, we propose two search spaces for these components with different size ranges. Table 1 gives the detailed search space for these two spaces.",
|
| 711 |
+
"bbox": [
|
| 712 |
+
169,
|
| 713 |
+
628,
|
| 714 |
+
823,
|
| 715 |
+
685
|
| 716 |
+
],
|
| 717 |
+
"page_idx": 4
|
| 718 |
+
},
|
| 719 |
+
{
|
| 720 |
+
"type": "text",
|
| 721 |
+
"text": "3.2.2 GRAPHENCODINGSPACE",
|
| 722 |
+
"text_level": 1,
|
| 723 |
+
"bbox": [
|
| 724 |
+
171,
|
| 725 |
+
698,
|
| 726 |
+
410,
|
| 727 |
+
712
|
| 728 |
+
],
|
| 729 |
+
"page_idx": 4
|
| 730 |
+
},
|
| 731 |
+
{
|
| 732 |
+
"type": "text",
|
| 733 |
+
"text": "To exploit the potential of the graph encoding strategies, we further determine whether and which graph encoding strategies to use for each layer of the graph Transformer. Specifically, we explore the node attribution augmentations encoding and attention map augmentations encoding as below.",
|
| 734 |
+
"bbox": [
|
| 735 |
+
169,
|
| 736 |
+
723,
|
| 737 |
+
823,
|
| 738 |
+
766
|
| 739 |
+
],
|
| 740 |
+
"page_idx": 4
|
| 741 |
+
},
|
| 742 |
+
{
|
| 743 |
+
"type": "text",
|
| 744 |
+
"text": "Node Attribution Augmentations:",
|
| 745 |
+
"text_level": 1,
|
| 746 |
+
"bbox": [
|
| 747 |
+
171,
|
| 748 |
+
772,
|
| 749 |
+
410,
|
| 750 |
+
787
|
| 751 |
+
],
|
| 752 |
+
"page_idx": 4
|
| 753 |
+
},
|
| 754 |
+
{
|
| 755 |
+
"type": "text",
|
| 756 |
+
"text": "- Centrality Encoding (Ying et al., 2021). Use two node embeddings with the same size representing the in-degree and the out-degree of nodes, i.e.,",
|
| 757 |
+
"bbox": [
|
| 758 |
+
179,
|
| 759 |
+
794,
|
| 760 |
+
826,
|
| 761 |
+
821
|
| 762 |
+
],
|
| 763 |
+
"page_idx": 4
|
| 764 |
+
},
|
| 765 |
+
{
|
| 766 |
+
"type": "equation",
|
| 767 |
+
"text": "\n$$\nh _ {i} ^ {(l)} = x _ {i} ^ {(l)} + z _ {\\operatorname {d e g} ^ {-} \\left(v _ {i}\\right)} ^ {-} + z _ {\\operatorname {d e g} ^ {-} \\left(v _ {i}\\right)} ^ {+} \\tag {8}\n$$\n",
|
| 768 |
+
"text_format": "latex",
|
| 769 |
+
"bbox": [
|
| 770 |
+
398,
|
| 771 |
+
824,
|
| 772 |
+
823,
|
| 773 |
+
845
|
| 774 |
+
],
|
| 775 |
+
"page_idx": 4
|
| 776 |
+
},
|
| 777 |
+
{
|
| 778 |
+
"type": "text",
|
| 779 |
+
"text": "where $h_i^{(l)}$ is the input embedding in layer $l$ , $x_i$ is the input attribution of node $i$ in layer $l$ , and $z^{-}$ and $z^{+}$ are the embedding generated by the in-degree and out-degree.",
|
| 780 |
+
"bbox": [
|
| 781 |
+
194,
|
| 782 |
+
849,
|
| 783 |
+
823,
|
| 784 |
+
880
|
| 785 |
+
],
|
| 786 |
+
"page_idx": 4
|
| 787 |
+
},
|
| 788 |
+
{
|
| 789 |
+
"type": "text",
|
| 790 |
+
"text": "- Laplacian Eigenvector (Dwivedi & Bresson, 2020). Conducting spectral decomposition of the graph Laplacian matrix:",
|
| 791 |
+
"bbox": [
|
| 792 |
+
179,
|
| 793 |
+
882,
|
| 794 |
+
823,
|
| 795 |
+
907
|
| 796 |
+
],
|
| 797 |
+
"page_idx": 4
|
| 798 |
+
},
|
| 799 |
+
{
|
| 800 |
+
"type": "equation",
|
| 801 |
+
"text": "\n$$\n\\mathbf {U} ^ {T} \\boldsymbol {\\Lambda} \\mathbf {U} = \\mathbf {I} - \\mathbf {D} ^ {- 1 / 2} \\mathbf {A} ^ {G} \\mathbf {D} ^ {- 1 / 2} \\tag {9}\n$$\n",
|
| 802 |
+
"text_format": "latex",
|
| 803 |
+
"bbox": [
|
| 804 |
+
406,
|
| 805 |
+
907,
|
| 806 |
+
823,
|
| 807 |
+
924
|
| 808 |
+
],
|
| 809 |
+
"page_idx": 4
|
| 810 |
+
},
|
| 811 |
+
{
|
| 812 |
+
"type": "header",
|
| 813 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 814 |
+
"bbox": [
|
| 815 |
+
171,
|
| 816 |
+
32,
|
| 817 |
+
478,
|
| 818 |
+
47
|
| 819 |
+
],
|
| 820 |
+
"page_idx": 4
|
| 821 |
+
},
|
| 822 |
+
{
|
| 823 |
+
"type": "page_number",
|
| 824 |
+
"text": "5",
|
| 825 |
+
"bbox": [
|
| 826 |
+
493,
|
| 827 |
+
948,
|
| 828 |
+
503,
|
| 829 |
+
959
|
| 830 |
+
],
|
| 831 |
+
"page_idx": 4
|
| 832 |
+
},
|
| 833 |
+
{
|
| 834 |
+
"type": "table",
|
| 835 |
+
"img_path": "images/0a88a015d4de155d1ec84baae49dd03b8d22b036a35bd0fc55ed5c4dbf73152a.jpg",
|
| 836 |
+
"table_caption": [
|
| 837 |
+
"Table 1: The Transformer Architecture Search Space for AutoGT<sub>base</sub> and AutoGT."
|
| 838 |
+
],
|
| 839 |
+
"table_footnote": [],
|
| 840 |
+
"table_body": "<table><tr><td></td><td colspan=\"2\">AutoGTbase</td><td colspan=\"2\">AutoGT</td></tr><tr><td></td><td>Choices</td><td>Supernet Size</td><td>Choices</td><td>Supernet Size</td></tr><tr><td>#Layers</td><td>{2,3,4}</td><td>4</td><td>{5,6,7,8}</td><td>8</td></tr><tr><td>Input Dimension d</td><td>{24,28,32}</td><td>32</td><td>{96,112,128}</td><td>128</td></tr><tr><td>Intermediate Dimension dt</td><td>{24,28,32}</td><td>32</td><td>{96,112,128}</td><td>128</td></tr><tr><td>Hidden Dimension dh</td><td>{24,28,32}</td><td>32</td><td>{96,112,128}</td><td>128</td></tr><tr><td>#Attention Heads</td><td>{2,3,4}</td><td>4</td><td>{6,7,8}</td><td>8</td></tr><tr><td>Attention Head Dimension dk</td><td>{6,8}</td><td>8</td><td>{12,14,16}</td><td>16</td></tr></table>",
|
| 841 |
+
"bbox": [
|
| 842 |
+
174,
|
| 843 |
+
114,
|
| 844 |
+
779,
|
| 845 |
+
233
|
| 846 |
+
],
|
| 847 |
+
"page_idx": 5
|
| 848 |
+
},
|
| 849 |
+
{
|
| 850 |
+
"type": "text",
|
| 851 |
+
"text": "where $\\mathbf{A}^G$ is the adjacency matrix of graph $G$ , $\\mathbf{D}$ is the diagonal degree matrix, and $\\mathbf{U}$ and $\\Lambda$ are the eigenvectors and eigenvalues, respectively. We only select the eigenvectors of the $k$ smallest non-zero eigenvalues as the final embedding and concatenate them to the input node attribute matrix for each layer.",
|
| 852 |
+
"bbox": [
|
| 853 |
+
192,
|
| 854 |
+
241,
|
| 855 |
+
823,
|
| 856 |
+
299
|
| 857 |
+
],
|
| 858 |
+
"page_idx": 5
|
| 859 |
+
},
|
| 860 |
+
{
|
| 861 |
+
"type": "text",
|
| 862 |
+
"text": "- SVD-based Positional Encoding (Hussain et al., 2021). Conducting singular value decomposition to the graph adjacency matrix:",
|
| 863 |
+
"bbox": [
|
| 864 |
+
179,
|
| 865 |
+
303,
|
| 866 |
+
823,
|
| 867 |
+
333
|
| 868 |
+
],
|
| 869 |
+
"page_idx": 5
|
| 870 |
+
},
|
| 871 |
+
{
|
| 872 |
+
"type": "equation",
|
| 873 |
+
"text": "\n$$\n\\mathbf {A} ^ {G} \\stackrel {\\text {S V D}} {\\approx} \\mathbf {U} \\boldsymbol {\\Sigma} \\mathbf {V} ^ {T} = (\\mathbf {U} \\sqrt {\\boldsymbol {\\Sigma}}) \\cdot (\\mathbf {V} \\sqrt {\\boldsymbol {\\Sigma}}) ^ {T} = \\hat {\\mathbf {U}} \\hat {\\mathbf {V}} ^ {T} \\tag {10}\n$$\n",
|
| 874 |
+
"text_format": "latex",
|
| 875 |
+
"bbox": [
|
| 876 |
+
357,
|
| 877 |
+
340,
|
| 878 |
+
825,
|
| 879 |
+
359
|
| 880 |
+
],
|
| 881 |
+
"page_idx": 5
|
| 882 |
+
},
|
| 883 |
+
{
|
| 884 |
+
"type": "text",
|
| 885 |
+
"text": "where $\\mathbf{U},\\mathbf{V}\\in \\mathbb{R}^{n\\times r}$ contains the left and right singular vectors of the top $r$ singular values in the diagonal matrix $\\boldsymbol {\\Sigma}\\in \\mathbb{R}^{r\\times r}$ . Without loss of generality, we only choose $\\hat{\\mathbf{U}}$ as final embedding since they are highly correlated for symmetric graphs (with differences in signs, to be specific). Similar to Laplacian eigenvector, we concatenate it to input node attribute matrix for each layer.",
|
| 886 |
+
"bbox": [
|
| 887 |
+
192,
|
| 888 |
+
369,
|
| 889 |
+
825,
|
| 890 |
+
429
|
| 891 |
+
],
|
| 892 |
+
"page_idx": 5
|
| 893 |
+
},
|
| 894 |
+
{
|
| 895 |
+
"type": "text",
|
| 896 |
+
"text": "Attention Map Augmentations Space:",
|
| 897 |
+
"text_level": 1,
|
| 898 |
+
"bbox": [
|
| 899 |
+
171,
|
| 900 |
+
441,
|
| 901 |
+
437,
|
| 902 |
+
457
|
| 903 |
+
],
|
| 904 |
+
"page_idx": 5
|
| 905 |
+
},
|
| 906 |
+
{
|
| 907 |
+
"type": "text",
|
| 908 |
+
"text": "- Spatial Encoding (Ying et al., 2021). Spatial encoding is added to the attention result before softmax:",
|
| 909 |
+
"bbox": [
|
| 910 |
+
171,
|
| 911 |
+
465,
|
| 912 |
+
823,
|
| 913 |
+
493
|
| 914 |
+
],
|
| 915 |
+
"page_idx": 5
|
| 916 |
+
},
|
| 917 |
+
{
|
| 918 |
+
"type": "equation",
|
| 919 |
+
"text": "\n$$\nA _ {i j} = \\frac {\\left(h _ {i} W _ {Q}\\right) \\left(h _ {j} W _ {K}\\right) ^ {T}}{\\sqrt {d _ {k}}} + b _ {\\phi \\left(v _ {i}, v _ {j}\\right)} \\tag {11}\n$$\n",
|
| 920 |
+
"text_format": "latex",
|
| 921 |
+
"bbox": [
|
| 922 |
+
387,
|
| 923 |
+
493,
|
| 924 |
+
825,
|
| 925 |
+
525
|
| 926 |
+
],
|
| 927 |
+
"page_idx": 5
|
| 928 |
+
},
|
| 929 |
+
{
|
| 930 |
+
"type": "text",
|
| 931 |
+
"text": "where $\\phi(v_i, v_j)$ is the length of the shortest path from $v_i$ to $v_j$ , and $b \\in \\mathbb{R}$ is a weight parameter generated by $\\phi(v_i, v_j)$ .",
|
| 932 |
+
"bbox": [
|
| 933 |
+
184,
|
| 934 |
+
530,
|
| 935 |
+
823,
|
| 936 |
+
561
|
| 937 |
+
],
|
| 938 |
+
"page_idx": 5
|
| 939 |
+
},
|
| 940 |
+
{
|
| 941 |
+
"type": "text",
|
| 942 |
+
"text": "- Edge Encoding (Ying et al., 2021). Edge encoding is added to the attention result before softmax:",
|
| 943 |
+
"bbox": [
|
| 944 |
+
171,
|
| 945 |
+
563,
|
| 946 |
+
818,
|
| 947 |
+
579
|
| 948 |
+
],
|
| 949 |
+
"page_idx": 5
|
| 950 |
+
},
|
| 951 |
+
{
|
| 952 |
+
"type": "equation",
|
| 953 |
+
"text": "\n$$\nA _ {i j} = \\frac {\\left(h _ {i} W _ {Q}\\right) \\left(h _ {j} W _ {K}\\right) ^ {T}}{\\sqrt {d _ {k}}} + \\frac {1}{N} \\sum_ {n = 1} ^ {N} x _ {e _ {n}} \\left(w _ {n} ^ {E}\\right) ^ {T} \\tag {12}\n$$\n",
|
| 954 |
+
"text_format": "latex",
|
| 955 |
+
"bbox": [
|
| 956 |
+
357,
|
| 957 |
+
588,
|
| 958 |
+
825,
|
| 959 |
+
626
|
| 960 |
+
],
|
| 961 |
+
"page_idx": 5
|
| 962 |
+
},
|
| 963 |
+
{
|
| 964 |
+
"type": "text",
|
| 965 |
+
"text": "where $x_{e_n}$ is the feature of the $n$ -th edge $e_n$ on the shortest path between $v_i$ and $v_j$ , and $w_n^E$ is the $n$ -th learnable embedding vector.",
|
| 966 |
+
"bbox": [
|
| 967 |
+
184,
|
| 968 |
+
635,
|
| 969 |
+
823,
|
| 970 |
+
665
|
| 971 |
+
],
|
| 972 |
+
"page_idx": 5
|
| 973 |
+
},
|
| 974 |
+
{
|
| 975 |
+
"type": "text",
|
| 976 |
+
"text": "- Proximity-Enhanced Attention (Zhao et al., 2021). Proximity-Enhanced Attention is added to the attention result before softmax:",
|
| 977 |
+
"bbox": [
|
| 978 |
+
171,
|
| 979 |
+
669,
|
| 980 |
+
823,
|
| 981 |
+
696
|
| 982 |
+
],
|
| 983 |
+
"page_idx": 5
|
| 984 |
+
},
|
| 985 |
+
{
|
| 986 |
+
"type": "equation",
|
| 987 |
+
"text": "\n$$\nA _ {i j} = \\frac {\\left(h _ {i} W _ {Q}\\right) \\left(h _ {j} W _ {K}\\right) ^ {T}}{\\sqrt {d _ {k}}} + \\phi_ {i j} ^ {T} b \\tag {13}\n$$\n",
|
| 988 |
+
"text_format": "latex",
|
| 989 |
+
"bbox": [
|
| 990 |
+
400,
|
| 991 |
+
703,
|
| 992 |
+
825,
|
| 993 |
+
736
|
| 994 |
+
],
|
| 995 |
+
"page_idx": 5
|
| 996 |
+
},
|
| 997 |
+
{
|
| 998 |
+
"type": "text",
|
| 999 |
+
"text": "where $b \\in \\mathbb{R}^{M \\times 1}$ is a learnable parameter, $\\phi_{ij} = \\mathrm{Concat}(\\Phi_m(v_i, v_j) | m \\in \\{0, 1, \\dots, M-1\\})$ is the structural encoding generated from: $\\Phi_m(v_i, v_j) = \\tilde{\\mathbf{A}}^m[i, j]$ , where $\\tilde{\\mathbf{A}} = \\mathrm{Norm}(\\mathbf{A} + \\mathbf{I})$ represents the normalized adjacency matrix. Thus the augmentation denotes the reachable probabilities between nodes.",
|
| 1000 |
+
"bbox": [
|
| 1001 |
+
184,
|
| 1002 |
+
743,
|
| 1003 |
+
825,
|
| 1004 |
+
804
|
| 1005 |
+
],
|
| 1006 |
+
"page_idx": 5
|
| 1007 |
+
},
|
| 1008 |
+
{
|
| 1009 |
+
"type": "text",
|
| 1010 |
+
"text": "- Attention Mask (Min et al., 2022b; Yao et al., 2020). Attention Mask is added to the attention result before softmax:",
|
| 1011 |
+
"bbox": [
|
| 1012 |
+
171,
|
| 1013 |
+
808,
|
| 1014 |
+
823,
|
| 1015 |
+
834
|
| 1016 |
+
],
|
| 1017 |
+
"page_idx": 5
|
| 1018 |
+
},
|
| 1019 |
+
{
|
| 1020 |
+
"type": "equation",
|
| 1021 |
+
"text": "\n$$\nA _ {i j} = \\frac {\\left(h _ {i} W _ {Q}\\right) \\left(h _ {j} W _ {K}\\right) ^ {T}}{\\sqrt {d _ {k}}} + \\operatorname {M a s k} _ {m} \\left(v _ {i}, v _ {j}\\right) \\tag {14}\n$$\n",
|
| 1022 |
+
"text_format": "latex",
|
| 1023 |
+
"bbox": [
|
| 1024 |
+
367,
|
| 1025 |
+
834,
|
| 1026 |
+
825,
|
| 1027 |
+
867
|
| 1028 |
+
],
|
| 1029 |
+
"page_idx": 5
|
| 1030 |
+
},
|
| 1031 |
+
{
|
| 1032 |
+
"type": "text",
|
| 1033 |
+
"text": "where $m$ is the mask threshold, $\\mathrm{Mask}_m(v_i,v_j)$ depends on the relationship between $m$ and $\\phi (v_{i},v_{j})$ , i.e. the shortest path length between $v_{i}$ and $v_{j}$ . When $m\\geq \\phi (v_i,v_j)$ , $\\mathrm{Mask}_m(v_i,v_j) = 0$ . Otherwise, $\\mathrm{Mask}_m(v_i,v_j)$ is $-\\infty$ , masking the corresponding attention in practical terms.",
|
| 1034 |
+
"bbox": [
|
| 1035 |
+
184,
|
| 1036 |
+
871,
|
| 1037 |
+
825,
|
| 1038 |
+
915
|
| 1039 |
+
],
|
| 1040 |
+
"page_idx": 5
|
| 1041 |
+
},
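The attention mask of Eq. 14 is simply an additive 0/-inf matrix derived from the shortest-path lengths; a small sketch with hypothetical names:

```python
import torch

def attention_mask(spd: torch.Tensor, m: int) -> torch.Tensor:
    """Mask_m(v_i, v_j): 0 when the shortest-path length is at most m, -inf otherwise (Eq. 14)."""
    zeros = torch.zeros(spd.shape)
    neg_inf = torch.full(spd.shape, float('-inf'))
    return torch.where(spd <= m, zeros, neg_inf)

# scores = scores + attention_mask(spd, m=2); softmax then assigns zero weight beyond m hops
```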
|
| 1042 |
+
{
|
| 1043 |
+
"type": "header",
|
| 1044 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 1045 |
+
"bbox": [
|
| 1046 |
+
171,
|
| 1047 |
+
32,
|
| 1048 |
+
478,
|
| 1049 |
+
47
|
| 1050 |
+
],
|
| 1051 |
+
"page_idx": 5
|
| 1052 |
+
},
|
| 1053 |
+
{
|
| 1054 |
+
"type": "page_number",
|
| 1055 |
+
"text": "6",
|
| 1056 |
+
"bbox": [
|
| 1057 |
+
493,
|
| 1058 |
+
948,
|
| 1059 |
+
504,
|
| 1060 |
+
959
|
| 1061 |
+
],
|
| 1062 |
+
"page_idx": 5
|
| 1063 |
+
},
|
| 1064 |
+
{
|
| 1065 |
+
"type": "table",
|
| 1066 |
+
"img_path": "images/b677e3bae01755d74a4e292871c94e7421bf2114291d68ec9715697838095108.jpg",
|
| 1067 |
+
"table_caption": [
|
| 1068 |
+
"Table 2: Comparison of our proposed unified framework with state-of-the-art graph Transformer models. CE, LPE, SVD, SE, EE, PMA, Mask denote Centrality Encoding, Laplacian Eigenvector, SVD-based Positional Encoding, Spatial Encoding, Edge Encoding, Proximity-Enhanced Attention, and Attention Mask respectively."
|
| 1069 |
+
],
|
| 1070 |
+
"table_footnote": [],
|
| 1071 |
+
"table_body": "<table><tr><td></td><td>CE</td><td>LPE</td><td>SVD</td><td>SE</td><td>EE</td><td>PMA</td><td>Mask</td></tr><tr><td>EGT (Hussain et al., 2021)</td><td></td><td></td><td>✓</td><td></td><td></td><td></td><td></td></tr><tr><td>Gophormer (Zhao et al., 2021)</td><td></td><td></td><td></td><td></td><td></td><td>✓</td><td></td></tr><tr><td>Graph Trans (Dwivedi & Bresson, 2020)</td><td></td><td>✓</td><td></td><td></td><td></td><td></td><td>✓</td></tr><tr><td>Graphormer (Ying et al., 2021)</td><td>✓</td><td></td><td></td><td>✓</td><td>✓</td><td></td><td></td></tr><tr><td>Ours</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td></tr></table>",
|
| 1072 |
+
"bbox": [
|
| 1073 |
+
210,
|
| 1074 |
+
169,
|
| 1075 |
+
787,
|
| 1076 |
+
268
|
| 1077 |
+
],
|
| 1078 |
+
"page_idx": 6
|
| 1079 |
+
},
|
| 1080 |
+
{
|
| 1081 |
+
"type": "text",
|
| 1082 |
+
"text": "3.3 ENCODING-AWARE SUPERNET TRAINING",
|
| 1083 |
+
"text_level": 1,
|
| 1084 |
+
"bbox": [
|
| 1085 |
+
171,
|
| 1086 |
+
290,
|
| 1087 |
+
503,
|
| 1088 |
+
304
|
| 1089 |
+
],
|
| 1090 |
+
"page_idx": 6
|
| 1091 |
+
},
|
| 1092 |
+
{
|
| 1093 |
+
"type": "text",
|
| 1094 |
+
"text": "We next introduce our proposed encoding-aware performance estimation strategy for efficient training.",
|
| 1095 |
+
"bbox": [
|
| 1096 |
+
169,
|
| 1097 |
+
316,
|
| 1098 |
+
826,
|
| 1099 |
+
333
|
| 1100 |
+
],
|
| 1101 |
+
"page_idx": 6
|
| 1102 |
+
},
|
| 1103 |
+
{
|
| 1104 |
+
"type": "text",
|
| 1105 |
+
"text": "Similar to general NAS problems, the graph Transformer architecture search can be formulated as a bi-level optimization problem:",
|
| 1106 |
+
"bbox": [
|
| 1107 |
+
169,
|
| 1108 |
+
335,
|
| 1109 |
+
825,
|
| 1110 |
+
364
|
| 1111 |
+
],
|
| 1112 |
+
"page_idx": 6
|
| 1113 |
+
},
|
| 1114 |
+
{
|
| 1115 |
+
"type": "equation",
|
| 1116 |
+
"text": "\n$$\na ^ {*} = \\operatorname {a r g m a x} _ {a \\in \\mathcal {A}} A c c _ {v a l} \\left(\\mathbf {W} ^ {*} (a), a\\right), \\quad \\text {s . t .} \\mathbf {W} ^ {*} (a) = \\operatorname {a r g m i n} _ {\\mathbf {W}} \\mathcal {L} _ {t r a i n} (\\mathbf {W}, a), \\tag {15}\n$$\n",
|
| 1117 |
+
"text_format": "latex",
|
| 1118 |
+
"bbox": [
|
| 1119 |
+
246,
|
| 1120 |
+
368,
|
| 1121 |
+
825,
|
| 1122 |
+
385
|
| 1123 |
+
],
|
| 1124 |
+
"page_idx": 6
|
| 1125 |
+
},
|
| 1126 |
+
{
|
| 1127 |
+
"type": "text",
|
| 1128 |
+
"text": "where $a \\in \\mathcal{A}$ is the architecture in the search space $\\mathcal{A}$ , $Acc_{val}$ stands for the validation accuracy, $\\mathbf{W}$ represents the learnable weights, and $a^*$ and $\\mathbf{W}^*(a)$ denotes the optimal architecture and the optimal weights for the architecture $a$ .",
|
| 1129 |
+
"bbox": [
|
| 1130 |
+
169,
|
| 1131 |
+
390,
|
| 1132 |
+
823,
|
| 1133 |
+
433
|
| 1134 |
+
],
|
| 1135 |
+
"page_idx": 6
|
| 1136 |
+
},
|
| 1137 |
+
{
|
| 1138 |
+
"type": "text",
|
| 1139 |
+
"text": "Following one-shot NAS methods (Liu et al., 2019; Pham et al., 2018), we encode all candidate architectures in the search space into a supernet and transform Eq. equation 15 into a two-step optimization (Guo et al., 2020):",
|
| 1140 |
+
"bbox": [
|
| 1141 |
+
169,
|
| 1142 |
+
436,
|
| 1143 |
+
825,
|
| 1144 |
+
477
|
| 1145 |
+
],
|
| 1146 |
+
"page_idx": 6
|
| 1147 |
+
},
|
| 1148 |
+
{
|
| 1149 |
+
"type": "equation",
|
| 1150 |
+
"text": "\n$$\na ^ {*} = \\operatorname {a r g m a x} _ {a \\in \\mathcal {A}} A c c _ {v a l} \\left(\\mathbf {W} ^ {*}, a\\right), \\quad \\mathbf {W} ^ {*} = \\operatorname {a r g m i n} _ {\\mathbf {W}} \\mathbb {E} _ {a \\in \\mathcal {A}} \\mathcal {L} _ {\\text {t r a i n}} (\\mathbf {W}, a), \\tag {16}\n$$\n",
|
| 1151 |
+
"text_format": "latex",
|
| 1152 |
+
"bbox": [
|
| 1153 |
+
259,
|
| 1154 |
+
482,
|
| 1155 |
+
825,
|
| 1156 |
+
500
|
| 1157 |
+
],
|
| 1158 |
+
"page_idx": 6
|
| 1159 |
+
},
|
| 1160 |
+
{
|
| 1161 |
+
"type": "text",
|
| 1162 |
+
"text": "where $\\mathbf{W}$ denotes the shared learnable weights in the supernet with its optimal value $\\mathbf{W}^*$ for all the architectures in the search space.",
|
| 1163 |
+
"bbox": [
|
| 1164 |
+
171,
|
| 1165 |
+
503,
|
| 1166 |
+
823,
|
| 1167 |
+
532
|
| 1168 |
+
],
|
| 1169 |
+
"page_idx": 6
|
| 1170 |
+
},
|
| 1171 |
+
{
|
| 1172 |
+
"type": "text",
|
| 1173 |
+
"text": "To further improve the optimization efficiency of the supernet training, we leverage weight entanglement (Guan et al., 2021a; Chen et al., 2021b; Guo et al., 2020) to deeply share the weights of architectures with different hidden sizes. Specifically, for every architecture sampled from the supernet, we use a 0-1 mask to discard unnecessary hidden channels instead of maintaining a new set of weights. In this way, the number of parameters in the supernet will remain the same as the largest (i.e., with the most parameters) model in the search space, thus leading to efficient optimization.",
|
| 1174 |
+
"bbox": [
|
| 1175 |
+
169,
|
| 1176 |
+
539,
|
| 1177 |
+
825,
|
| 1178 |
+
625
|
| 1179 |
+
],
|
| 1180 |
+
"page_idx": 6
|
| 1181 |
+
},
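A minimal sketch of the 0-1 masking described above: the supernet keeps a single weight matrix sized for the largest hidden dimension, and a sampled architecture with a smaller hidden size simply zeroes out the extra channels (names are hypothetical):

```python
import torch
import torch.nn as nn

class EntangledLinear(nn.Module):
    """One shared weight matrix; smaller candidate hidden sizes reuse a masked slice of it."""
    def __init__(self, in_dim: int, max_out_dim: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(max_out_dim, in_dim) * 0.02)

    def forward(self, x: torch.Tensor, out_dim: int) -> torch.Tensor:
        mask = torch.zeros(self.weight.size(0), 1, device=x.device)
        mask[:out_dim] = 1.0                   # 0-1 mask: keep only the first out_dim channels
        return x @ (self.weight * mask).t()    # masked channels output zeros, no extra weights
```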
|
| 1182 |
+
{
|
| 1183 |
+
"type": "text",
|
| 1184 |
+
"text": "Although this strategy is fast and convenient, using the same supernet parameters $\\mathbf{W}$ for all architectures will decrease the consistency between the estimation of the supernet and the ground-truth architecture performance. To improve the consistency and accuracy of supernet, we propose an encoding-aware supernet training strategy. Based on the contribution of coupling of different encoding strategies, we split the search space into different sub-spaces based on whether adopting three kinds of attention map augmentation strategies: spatial encoding, edge encoding, and proximity-enhanced attention. Therefore, there are $2^{3} = 8$ supernets.",
|
| 1185 |
+
"bbox": [
|
| 1186 |
+
169,
|
| 1187 |
+
628,
|
| 1188 |
+
825,
|
| 1189 |
+
728
|
| 1190 |
+
],
|
| 1191 |
+
"page_idx": 6
|
| 1192 |
+
},
|
| 1193 |
+
{
|
| 1194 |
+
"type": "text",
|
| 1195 |
+
"text": "To be specific, we first train a single supernet for certain epochs and split the supernet into 8 subnets according to the sub-spaces afterward. Then, we continuously train the weights in each subnet $\\mathbf{W}_i$ by only sampling the architecture from the corresponding subspace $A_{i}$ . Experiments to support such a design are provided in Section 4.",
|
| 1196 |
+
"bbox": [
|
| 1197 |
+
169,
|
| 1198 |
+
733,
|
| 1199 |
+
825,
|
| 1200 |
+
791
|
| 1201 |
+
],
|
| 1202 |
+
"page_idx": 6
|
| 1203 |
+
},
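For illustration, the 8 sub-spaces can be indexed by the three binary attention map augmentation choices; a tiny hypothetical helper:

```python
def subspace_index(use_spatial: bool, use_edge: bool, use_proximity: bool) -> int:
    """Map the three attention map augmentation flags to one of the 2**3 = 8 sub-spaces."""
    return (int(use_spatial) << 2) | (int(use_edge) << 1) | int(use_proximity)

# e.g. spatial + edge encoding without proximity-enhanced attention -> subnet 6
assert subspace_index(True, True, False) == 6
```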
|
| 1204 |
+
{
|
| 1205 |
+
"type": "text",
|
| 1206 |
+
"text": "3.4 EVOLUTIONARY SEARCH",
|
| 1207 |
+
"text_level": 1,
|
| 1208 |
+
"bbox": [
|
| 1209 |
+
171,
|
| 1210 |
+
806,
|
| 1211 |
+
390,
|
| 1212 |
+
820
|
| 1213 |
+
],
|
| 1214 |
+
"page_idx": 6
|
| 1215 |
+
},
|
| 1216 |
+
{
|
| 1217 |
+
"type": "text",
|
| 1218 |
+
"text": "Similar to other NAS research, our proposed graph transformer search space is too large to enumerate. Therefore, we propose to utilize the evolutionary algorithm to efficiently explore the search space to obtain the architecture with optimal accuracy on the validation dataset.",
|
| 1219 |
+
"bbox": [
|
| 1220 |
+
169,
|
| 1221 |
+
832,
|
| 1222 |
+
826,
|
| 1223 |
+
875
|
| 1224 |
+
],
|
| 1225 |
+
"page_idx": 6
|
| 1226 |
+
},
|
| 1227 |
+
{
|
| 1228 |
+
"type": "text",
|
| 1229 |
+
"text": "Specifically, we first maintain a population consisting of $T$ architectures by random sample. Then, we evolve the architectures through our designed mutation and crossover operations. In the mutation operation, we randomly choose from the top- $k$ architectures with the highest performance in the",
|
| 1230 |
+
"bbox": [
|
| 1231 |
+
169,
|
| 1232 |
+
881,
|
| 1233 |
+
826,
|
| 1234 |
+
925
|
| 1235 |
+
],
|
| 1236 |
+
"page_idx": 6
|
| 1237 |
+
},
|
| 1238 |
+
{
|
| 1239 |
+
"type": "header",
|
| 1240 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 1241 |
+
"bbox": [
|
| 1242 |
+
171,
|
| 1243 |
+
32,
|
| 1244 |
+
478,
|
| 1245 |
+
47
|
| 1246 |
+
],
|
| 1247 |
+
"page_idx": 6
|
| 1248 |
+
},
|
| 1249 |
+
{
|
| 1250 |
+
"type": "page_number",
|
| 1251 |
+
"text": "7",
|
| 1252 |
+
"bbox": [
|
| 1253 |
+
493,
|
| 1254 |
+
948,
|
| 1255 |
+
503,
|
| 1256 |
+
959
|
| 1257 |
+
],
|
| 1258 |
+
"page_idx": 6
|
| 1259 |
+
},
|
| 1260 |
+
{
|
| 1261 |
+
"type": "table",
|
| 1262 |
+
"img_path": "images/6e08005a09ed0d381bee38db159272a48adbc4c7f5804da85acc0f2a3efe311b.jpg",
|
| 1263 |
+
"table_caption": [
|
| 1264 |
+
"Table 3: Comparisons of AutoGT against state-of-the-art hand-crafted baselines. We report the average accuracy $(\\%)$ and the standard deviation on all the datasets. Out-of-time (OOT) indicates the method cannot produce results in 1 GPU day."
|
| 1265 |
+
],
|
| 1266 |
+
"table_footnote": [],
|
| 1267 |
+
"table_body": "<table><tr><td>Dataset</td><td>COX2_MD</td><td>BZR_MD</td><td>PTC_FM</td><td>DHFR_MD</td><td>PROTEINS</td><td>DBLP</td></tr><tr><td>GIN</td><td>45.8214.35</td><td>59.6814.65</td><td>57.878.86</td><td>62.888.26</td><td>73.764.61</td><td>91.180.42</td></tr><tr><td>DGCNN</td><td>54.8118.51</td><td>62.7420.59</td><td>62.173.62</td><td>63.895.91</td><td>72.683.75</td><td>91.570.54</td></tr><tr><td>DiffPool</td><td>51.4514.28</td><td>65.0114.74</td><td>60.165.87</td><td>61.069.42</td><td>73.313.75</td><td>OOT</td></tr><tr><td>GraphSAGE</td><td>49.5912.80</td><td>57.4313.50</td><td>64.173.28</td><td>66.922.35</td><td>67.196.97</td><td>51.010.02</td></tr><tr><td>Graphormer</td><td>56.3915.03</td><td>63.9412.58</td><td>64.887.58</td><td>64.887.58</td><td>75.293.10</td><td>89.362.31</td></tr><tr><td>GT(ours)</td><td>54.4416.84</td><td>63.3311.67</td><td>64.182.60</td><td>65.685.64</td><td>73.943.78</td><td>90.671.01</td></tr><tr><td>AutoGT(ours)</td><td>59.7223.26</td><td>65.9210.00</td><td>65.603.71</td><td>68.225.02</td><td>77.173.40</td><td>91.660.79</td></tr></table>",
|
| 1268 |
+
"bbox": [
|
| 1269 |
+
194,
|
| 1270 |
+
154,
|
| 1271 |
+
802,
|
| 1272 |
+
277
|
| 1273 |
+
],
|
| 1274 |
+
"page_idx": 7
|
| 1275 |
+
},
|
| 1276 |
+
{
|
| 1277 |
+
"type": "table",
|
| 1278 |
+
"img_path": "images/ac40ac737cb2fd832a43b37097a1140ae02652cd440159000cb10eccf597933a.jpg",
|
| 1279 |
+
"table_caption": [
|
| 1280 |
+
"Table 4: Comparisons of AutoGT against state-of-the-art hand-crafted baselines. We report the area under the curve (AUC) [%] and the standard deviation on all the datasets."
|
| 1281 |
+
],
|
| 1282 |
+
"table_footnote": [],
|
| 1283 |
+
"table_body": "<table><tr><td>Dataset</td><td>OGBG-MolHIV</td><td>OGBG-MolBACE</td><td>OGBG-MolBBBP</td></tr><tr><td>GIN</td><td>71.112.57</td><td>70.424.78</td><td>63.371.81</td></tr><tr><td>DGCNN</td><td>69.972.16</td><td>75.622.64</td><td>60.921.78</td></tr><tr><td>DiffPool</td><td>74.581.71</td><td>73.874.50</td><td>66.686.08</td></tr><tr><td>GraphSAGE</td><td>67.823.67</td><td>72.911.24</td><td>64.193.50</td></tr><tr><td>Graphormer</td><td>71.892.66</td><td>76.421.67</td><td>66.520.74</td></tr><tr><td>AutoGT(ours)</td><td>74.951.02</td><td>76.701.42</td><td>67.291.46</td></tr></table>",
|
| 1284 |
+
"bbox": [
|
| 1285 |
+
256,
|
| 1286 |
+
313,
|
| 1287 |
+
738,
|
| 1288 |
+
425
|
| 1289 |
+
],
|
| 1290 |
+
"page_idx": 7
|
| 1291 |
+
},
|
| 1292 |
+
{
|
| 1293 |
+
"type": "text",
|
| 1294 |
+
"text": "last generation and change its architecture choices with probabilities. In the crossover operation, we randomly select pairs of architectures with the same number of layers from the remaining architectures, and randomly switch their architecture choices.",
|
| 1295 |
+
"bbox": [
|
| 1296 |
+
169,
|
| 1297 |
+
450,
|
| 1298 |
+
826,
|
| 1299 |
+
494
|
| 1300 |
+
],
|
| 1301 |
+
"page_idx": 7
|
| 1302 |
+
},
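An illustrative sketch of the mutation and crossover operations described above; here an architecture is represented as a dictionary of categorical choices, and the choice names and values are invented for the example:

```python
import random

CHOICES = {'hidden_dim': [16, 32], 'num_heads': [2, 4], 'spatial_encoding': [True, False]}

def mutate(parent: dict, p: float = 0.2) -> dict:
    """Re-sample each architecture choice of a top-k parent with probability p."""
    child = dict(parent)
    for key, options in CHOICES.items():
        if random.random() < p:
            child[key] = random.choice(options)
    return child

def crossover(a: dict, b: dict) -> dict:
    """Randomly switch choices between two architectures (applied to pairs with equal depth)."""
    return {key: random.choice([a[key], b[key]]) for key in a}
```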
|
| 1303 |
+
{
|
| 1304 |
+
"type": "text",
|
| 1305 |
+
"text": "4 EXPERIMENTS",
|
| 1306 |
+
"text_level": 1,
|
| 1307 |
+
"bbox": [
|
| 1308 |
+
171,
|
| 1309 |
+
515,
|
| 1310 |
+
328,
|
| 1311 |
+
530
|
| 1312 |
+
],
|
| 1313 |
+
"page_idx": 7
|
| 1314 |
+
},
|
| 1315 |
+
{
|
| 1316 |
+
"type": "text",
|
| 1317 |
+
"text": "In this section, we present detailed experimental results as well as the ablation studies to empirically show the effectiveness of our proposed AutoGT.",
|
| 1318 |
+
"bbox": [
|
| 1319 |
+
169,
|
| 1320 |
+
547,
|
| 1321 |
+
823,
|
| 1322 |
+
575
|
| 1323 |
+
],
|
| 1324 |
+
"page_idx": 7
|
| 1325 |
+
},
|
| 1326 |
+
{
|
| 1327 |
+
"type": "text",
|
| 1328 |
+
"text": "Datasets and Baselines. We first consider six graph classification datasets from Deep Graph Kernels Benchmark((Yanardag & Vishwanathan, 2015)) and TUDataset (Morris et al., 2020), namely COX2_MD, BZR_MD, PTC_FM, DHFR_MD, PROTEINS, and DBLP. We also adopt three datasets from Open Graph Benchmark (OGB) (Hu et al., 2020a), including OBGG-MolHIV, OBGG-MolBACE, and OBGG-MolBBBP. The task is to predict the label of each graph using node/edge attributes and graph structures. The detailed statistics of the datasets are shown in Table 6 in the appendix.",
|
| 1329 |
+
"bbox": [
|
| 1330 |
+
169,
|
| 1331 |
+
583,
|
| 1332 |
+
825,
|
| 1333 |
+
680
|
| 1334 |
+
],
|
| 1335 |
+
"page_idx": 7
|
| 1336 |
+
},
|
| 1337 |
+
{
|
| 1338 |
+
"type": "text",
|
| 1339 |
+
"text": "We compare AutoGT with state-of-the-art hand-crafted baselines, including GIN (Xu et al., 2019), DGCNN (Zhang et al., 2018), DiffPool (Ying et al., 2018), GraphSAGE (Hamilton et al., 2017), and Graphormer (Ying et al., 2021). Notice that Graphormer is a state-of-the-art graph Transformer architecture that won first place in the graph classification task of KDD Cup 2021 (OGB-LSC).",
|
| 1340 |
+
"bbox": [
|
| 1341 |
+
169,
|
| 1342 |
+
686,
|
| 1343 |
+
826,
|
| 1344 |
+
744
|
| 1345 |
+
],
|
| 1346 |
+
"page_idx": 7
|
| 1347 |
+
},
|
| 1348 |
+
{
|
| 1349 |
+
"type": "text",
|
| 1350 |
+
"text": "For all the datasets, we follow Errica et al., (Errica et al., 2020) to utilize 10-fold cross-validation for all the baselines and our proposed method. All the hyper-parameters and training strategies of baselines are implemented according to the publicly available codes (Errica et al., 2020) $^{2}$ .",
|
| 1351 |
+
"bbox": [
|
| 1352 |
+
169,
|
| 1353 |
+
750,
|
| 1354 |
+
823,
|
| 1355 |
+
792
|
| 1356 |
+
],
|
| 1357 |
+
"page_idx": 7
|
| 1358 |
+
},
|
| 1359 |
+
{
|
| 1360 |
+
"type": "text",
|
| 1361 |
+
"text": "Implementation Details. Recall that our proposed architecture space has two variants, a larger AutoGT $(L = 8, d = 128)$ and a smaller AutoGT $\\text{base}(L = 4, d = 32)$ . In our experiments, we adopt the smaller search space for five relatively small datasets, i.e., all datasets except DBLP, and the larger search space for DBLP. We use the Adam optimizer, and the learning rate is $3e - 4$ . For the smaller/larger datasets, we set the number of iterations to split (i.e., $T_{s}$ in Algorithm 1 in Appendix) as 50/6 and the maximum number of iterations (i.e., $T_{m}$ in Algorithm 1) as 200/50. The batch size is 128. The hyperparameters of these baselines are kept consistent with our method for a fair comparison.",
|
| 1362 |
+
"bbox": [
|
| 1363 |
+
169,
|
| 1364 |
+
799,
|
| 1365 |
+
826,
|
| 1366 |
+
897
|
| 1367 |
+
],
|
| 1368 |
+
"page_idx": 7
|
| 1369 |
+
},
|
| 1370 |
+
{
|
| 1371 |
+
"type": "header",
|
| 1372 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 1373 |
+
"bbox": [
|
| 1374 |
+
171,
|
| 1375 |
+
32,
|
| 1376 |
+
478,
|
| 1377 |
+
47
|
| 1378 |
+
],
|
| 1379 |
+
"page_idx": 7
|
| 1380 |
+
},
|
| 1381 |
+
{
|
| 1382 |
+
"type": "page_footnote",
|
| 1383 |
+
"text": "$^{2}$ https://github.com/diningphil/gnn-comparison",
|
| 1384 |
+
"bbox": [
|
| 1385 |
+
189,
|
| 1386 |
+
909,
|
| 1387 |
+
473,
|
| 1388 |
+
924
|
| 1389 |
+
],
|
| 1390 |
+
"page_idx": 7
|
| 1391 |
+
},
|
| 1392 |
+
{
|
| 1393 |
+
"type": "page_number",
|
| 1394 |
+
"text": "8",
|
| 1395 |
+
"bbox": [
|
| 1396 |
+
493,
|
| 1397 |
+
948,
|
| 1398 |
+
504,
|
| 1399 |
+
959
|
| 1400 |
+
],
|
| 1401 |
+
"page_idx": 7
|
| 1402 |
+
},
|
| 1403 |
+
{
|
| 1404 |
+
"type": "text",
|
| 1405 |
+
"text": "We also report the results of our unified framework in Section 3.1, i.e. mixing all the encodings in our search space with the supernet but without the search part, denoted as GT(Graph Transformer).",
|
| 1406 |
+
"bbox": [
|
| 1407 |
+
169,
|
| 1408 |
+
103,
|
| 1409 |
+
823,
|
| 1410 |
+
133
|
| 1411 |
+
],
|
| 1412 |
+
"page_idx": 8
|
| 1413 |
+
},
|
| 1414 |
+
{
|
| 1415 |
+
"type": "text",
|
| 1416 |
+
"text": "Experimental Results. We report the results in Table 3. We can make the following observations. First, AutoGT consistently outperforms all the existing hand-crafted methods on all datasets, demonstrating the effectiveness of our proposed method. Graphormer shows remarkable performance and achieves the second-best results on three datasets, showing the great potential of Transformer architectures in processing graph data. However, since Graphormer is a manually designed architecture and cannot adapt to different datasets, it fails to be as effective as our proposed automatic solution. Lastly, GT, our proposed unified framework, fails to show strong performance in most cases. The results indicate that simply mixing different graph Transformers cannot produce satisfactory results, demonstrating the importance of searching for effective architectures to handle different datasets.",
|
| 1417 |
+
"bbox": [
|
| 1418 |
+
169,
|
| 1419 |
+
138,
|
| 1420 |
+
826,
|
| 1421 |
+
265
|
| 1422 |
+
],
|
| 1423 |
+
"page_idx": 8
|
| 1424 |
+
},
|
| 1425 |
+
{
|
| 1426 |
+
"type": "text",
|
| 1427 |
+
"text": "We also conduct experiments on Open Graph Benchmark (OGB) (Hu et al., 2020a). On the three binary classification datasets of OGB, we report the AUC score of our method and all the baselines. The results also show that our method outperforms all the hand-crafted baselines on these datasets.",
|
| 1428 |
+
"bbox": [
|
| 1429 |
+
169,
|
| 1430 |
+
270,
|
| 1431 |
+
826,
|
| 1432 |
+
314
|
| 1433 |
+
],
|
| 1434 |
+
"page_idx": 8
|
| 1435 |
+
},
|
| 1436 |
+
{
|
| 1437 |
+
"type": "text",
|
| 1438 |
+
"text": "Time Cost. We further show the time comparison of AutoGT with hand-crafted graph transformer Graphormer. On OGBG-MolHIV dataset, both Graphormer and AutoGT cost 2 minutes for one epoch on single GPU. The default Graphormer is trained for 300 epochs, which costs 10 hours to obtain the result of one random seed. For AutoGT, we train shared supernet for 50 epochs, 8 supernets inherit, and continue to train for 150 epochs. So the training process costs totally 1250 epochs with 40 hours. And on the evolutionary search stage, we evaluate 2000 architectures' inheriting weight performances, which costs about 900 epochs with 30 hours. In summary, the total time cost for AutoGT is only 7 times total time cost for a hand-crafted graph transformer Graphormer.",
|
| 1439 |
+
"bbox": [
|
| 1440 |
+
169,
|
| 1441 |
+
320,
|
| 1442 |
+
823,
|
| 1443 |
+
434
|
| 1444 |
+
],
|
| 1445 |
+
"page_idx": 8
|
| 1446 |
+
},
|
| 1447 |
+
{
|
| 1448 |
+
"type": "text",
|
| 1449 |
+
"text": "Ablation Studies. We verify the effectiveness of the proposed encoding-aware supernet training strategy by reporting the results on the PROTEINS dataset, while other datasets show similar patterns.",
|
| 1450 |
+
"bbox": [
|
| 1451 |
+
169,
|
| 1452 |
+
438,
|
| 1453 |
+
826,
|
| 1454 |
+
469
|
| 1455 |
+
],
|
| 1456 |
+
"page_idx": 8
|
| 1457 |
+
},
|
| 1458 |
+
{
|
| 1459 |
+
"type": "text",
|
| 1460 |
+
"text": "To show the importance of considering encoding strategies when training the supernet, we design two variants of AutoGT and compare the results:",
|
| 1461 |
+
"bbox": [
|
| 1462 |
+
169,
|
| 1463 |
+
473,
|
| 1464 |
+
823,
|
| 1465 |
+
502
|
| 1466 |
+
],
|
| 1467 |
+
"page_idx": 8
|
| 1468 |
+
},
|
| 1469 |
+
{
|
| 1470 |
+
"type": "list",
|
| 1471 |
+
"sub_type": "text",
|
| 1472 |
+
"list_items": [
|
| 1473 |
+
"- One-Shot. We only train a single supernet and use it to evaluate all the architectures.",
|
| 1474 |
+
"- Positional-Aware. We also split up the supernet into 8 subnets but based on three node attribute augmentations instead of the three attention map augmentation as in AutoGT."
|
| 1475 |
+
],
|
| 1476 |
+
"bbox": [
|
| 1477 |
+
171,
|
| 1478 |
+
511,
|
| 1479 |
+
823,
|
| 1480 |
+
559
|
| 1481 |
+
],
|
| 1482 |
+
"page_idx": 8
|
| 1483 |
+
},
|
| 1484 |
+
{
|
| 1485 |
+
"type": "text",
|
| 1486 |
+
"text": "The results of AutoGT and two variants are shown in Table 5. From the table, we can observe that, compared with the result of one-shot NAS, positional-aware and AutoGT methods achieve different levels of improvement. Further comparing the accuracy gain, we find that the result of AutoGT (1.25%) is nearly 5 times larger than the result of positional-aware (0.27%), even though both methods adopt 8 subnets. We attribute the significant difference in accuracy gain from supernet splitting to the different degrees of coupling of graph encoding strategies with the Transformer architecture. For example, the dimensional",
|
| 1487 |
+
"bbox": [
|
| 1488 |
+
169,
|
| 1489 |
+
566,
|
| 1490 |
+
552,
|
| 1491 |
+
719
|
| 1492 |
+
],
|
| 1493 |
+
"page_idx": 8
|
| 1494 |
+
},
|
| 1495 |
+
{
|
| 1496 |
+
"type": "table",
|
| 1497 |
+
"img_path": "images/38d729f5e1a39d4013616310581780103d467886c39de83f681058a6d881a899.jpg",
|
| 1498 |
+
"table_caption": [
|
| 1499 |
+
"Table 5: The ablation study on the effectiveness of the proposed encoding-aware supernet training strategy. We report the average accuracy[%] with the variance on PROTEINS."
|
| 1500 |
+
],
|
| 1501 |
+
"table_footnote": [],
|
| 1502 |
+
"table_body": "<table><tr><td>Method</td><td>Accuracy</td></tr><tr><td>One-Shot</td><td>75.923.10</td></tr><tr><td>Positional-Aware</td><td>76.193.42</td></tr><tr><td>AutoGT</td><td>77.173.40</td></tr></table>",
|
| 1503 |
+
"bbox": [
|
| 1504 |
+
586,
|
| 1505 |
+
637,
|
| 1506 |
+
792,
|
| 1507 |
+
709
|
| 1508 |
+
],
|
| 1509 |
+
"page_idx": 8
|
| 1510 |
+
},
|
| 1511 |
+
{
|
| 1512 |
+
"type": "text",
|
| 1513 |
+
"text": "ity of node attribution augmentation is the same as the number of nodes, while the attention map augmentation has a quadratic dimensionality, resulting in different coupling degrees. Our proposed encoding-aware performance estimation based on three attention map augmentation strategies is shown to be effective in practice.",
|
| 1514 |
+
"bbox": [
|
| 1515 |
+
169,
|
| 1516 |
+
720,
|
| 1517 |
+
823,
|
| 1518 |
+
777
|
| 1519 |
+
],
|
| 1520 |
+
"page_idx": 8
|
| 1521 |
+
},
|
| 1522 |
+
{
|
| 1523 |
+
"type": "text",
|
| 1524 |
+
"text": "5 CONCLUSION",
|
| 1525 |
+
"text_level": 1,
|
| 1526 |
+
"bbox": [
|
| 1527 |
+
171,
|
| 1528 |
+
795,
|
| 1529 |
+
320,
|
| 1530 |
+
811
|
| 1531 |
+
],
|
| 1532 |
+
"page_idx": 8
|
| 1533 |
+
},
|
| 1534 |
+
{
|
| 1535 |
+
"type": "text",
|
| 1536 |
+
"text": "In this paper, we propose AutoGT, a neural architecture search framework for graph Transformers. We design a search space tailored for graph Transformer architectures, and an encoding-aware supernet training strategy to provide reliable graph Transformer supernets considering various graph encoding strategies. Our method integrates the existing graph Transformer into a unified framework, where different Transformer encodings can enhance each other. Extensive experiments on six datasets demonstrate that our proposed AutoGT consistently outperforms state-of-the-art baselines on all datasets, demonstrating its strength on various graph tasks.",
|
| 1537 |
+
"bbox": [
|
| 1538 |
+
169,
|
| 1539 |
+
825,
|
| 1540 |
+
823,
|
| 1541 |
+
925
|
| 1542 |
+
],
|
| 1543 |
+
"page_idx": 8
|
| 1544 |
+
},
|
| 1545 |
+
{
|
| 1546 |
+
"type": "header",
|
| 1547 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 1548 |
+
"bbox": [
|
| 1549 |
+
171,
|
| 1550 |
+
32,
|
| 1551 |
+
478,
|
| 1552 |
+
47
|
| 1553 |
+
],
|
| 1554 |
+
"page_idx": 8
|
| 1555 |
+
},
|
| 1556 |
+
{
|
| 1557 |
+
"type": "page_number",
|
| 1558 |
+
"text": "9",
|
| 1559 |
+
"bbox": [
|
| 1560 |
+
493,
|
| 1561 |
+
948,
|
| 1562 |
+
504,
|
| 1563 |
+
959
|
| 1564 |
+
],
|
| 1565 |
+
"page_idx": 8
|
| 1566 |
+
},
|
| 1567 |
+
{
|
| 1568 |
+
"type": "text",
|
| 1569 |
+
"text": "ACKNOWLEDGEMENTS",
|
| 1570 |
+
"text_level": 1,
|
| 1571 |
+
"bbox": [
|
| 1572 |
+
171,
|
| 1573 |
+
102,
|
| 1574 |
+
369,
|
| 1575 |
+
118
|
| 1576 |
+
],
|
| 1577 |
+
"page_idx": 9
|
| 1578 |
+
},
|
| 1579 |
+
{
|
| 1580 |
+
"type": "ref_text",
|
| 1581 |
+
"text": "This work was supported in part by the National Key Research and Development Program of China No. 2020AAA0106300, National Natural Science Foundation of China (No. 62250008, 62222209, 62102222, 61936011, 62206149), China National Postdoctoral Program for Innovative Talents No. BX20220185, and China Postdoctoral Science Foundation No. 2022M711813, Tsinghua GuoQiang Research Center Grant 2020GQG1014 and partially funded by THU-Bosch JCML Center. All opinions, findings, and conclusions in this paper are those of the authors and do not necessarily reflect the views of the funding agencies.",
|
| 1582 |
+
"bbox": [
|
| 1583 |
+
171,
|
| 1584 |
+
133,
|
| 1585 |
+
826,
|
| 1586 |
+
232
|
| 1587 |
+
],
|
| 1588 |
+
"page_idx": 9
|
| 1589 |
+
},
|
| 1590 |
+
{
|
| 1591 |
+
"type": "text",
|
| 1592 |
+
"text": "REFERENCES",
|
| 1593 |
+
"text_level": 1,
|
| 1594 |
+
"bbox": [
|
| 1595 |
+
171,
|
| 1596 |
+
251,
|
| 1597 |
+
287,
|
| 1598 |
+
266
|
| 1599 |
+
],
|
| 1600 |
+
"page_idx": 9
|
| 1601 |
+
},
|
| 1602 |
+
{
|
| 1603 |
+
"type": "list",
|
| 1604 |
+
"sub_type": "ref_text",
|
| 1605 |
+
"list_items": [
|
| 1606 |
+
"Deng Cai and Wai Lam. Graph transformer for graph-to-sequence learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pp. 7464-7471, 2020.",
|
| 1607 |
+
"Jie Cai, Xin Wang, Chaoyu Guan, Yateng Tang, Jin Xu, Bin Zhong, and Wenwu Zhu. Multimodal continual graph learning with neural architecture search. In Proceedings of the ACM Web Conference 2022, pp. 1292-1300, 2022.",
|
| 1608 |
+
"Benson Chen, Regina Barzilay, and T. Jaakkola. Path-augmented graph transformer network. ArXiv, abs/1905.12712, 2019.",
|
| 1609 |
+
"Boyu Chen, Peixia Li, Chuming Li, Baopu Li, Lei Bai, Chen Lin, Ming Sun, Junjie Yan, and Wanli Ouyang. Glit: Neural architecture search for global and local image transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 12-21, October 2021a.",
|
| 1610 |
+
"Minghao Chen, Houwen Peng, Jianlong Fu, and Haibin Ling. Autoformer: Searching transformers for visual recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 12270-12280, October 2021b.",
|
| 1611 |
+
"Minghao Chen, Kan Wu, Bolin Ni, Houwen Peng, Bei Liu, Jianlong Fu, Hongyang Chao, and Haibin Ling. Searching the search space of vision transformer. Advances in Neural Information Processing Systems, 34:8714-8726, 2021c.",
|
| 1612 |
+
"Vijay Prakash Dwivedi and Xavier Bresson. A generalization of transformer networks to graphs. arXiv preprint arXiv:2012.09699, 2020.",
|
| 1613 |
+
"Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Neural architecture search: A survey. The Journal of Machine Learning Research, 20(1):1997-2017, 2019.",
|
| 1614 |
+
"Federico Errica, Marco Podda, Davide Bacciu, and Alessio Micheli. A fair comparison of graph neural networks for graph classification. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020.",
|
| 1615 |
+
"Chaoyu Guan, Ziwei Zhang, Haoyang Li, Heng Chang, Zeyang Zhang, Yijian Qin, Jiyan Jiang, Xin Wang, and Wenwu Zhu. Autogl: A library for automated graph learning. In ICLR 2021 Workshop on Geometrical and Topological Representation Learning.",
|
| 1616 |
+
"Chaoyu Guan, Yijian Qin, Zhikun Wei, Zeyang Zhang, Zizhao Zhang, Xin Wang, and Wenwu Zhu. One-shot neural channel search: What works and what's next. In CVPR Workshop on NAS, 2021a.",
|
| 1617 |
+
"Chaoyu Guan, Xin Wang, and Wenwu Zhu. Autoattend: Automated attention representation search. In International conference on machine learning, pp. 3864-3874. PMLR, 2021b.",
|
| 1618 |
+
"Chaoyu Guan, Xin Wang, Hong Chen, Ziwei Zhang, and Wenwu Zhu. Large-scale graph neural architecture search. In International Conference on Machine Learning, pp. 7968-7981. PMLR, 2022.",
|
| 1619 |
+
"Zichao Guo, Xiangyu Zhang, Haoyuan Mu, Wen Heng, Zechun Liu, Yichen Wei, and Jian Sun. Single path one-shot neural architecture search with uniform sampling. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XVI, pp. 544-560, 2020."
|
| 1620 |
+
],
|
| 1621 |
+
"bbox": [
|
| 1622 |
+
171,
|
| 1623 |
+
275,
|
| 1624 |
+
826,
|
| 1625 |
+
922
|
| 1626 |
+
],
|
| 1627 |
+
"page_idx": 9
|
| 1628 |
+
},
|
| 1629 |
+
{
|
| 1630 |
+
"type": "header",
|
| 1631 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 1632 |
+
"bbox": [
|
| 1633 |
+
171,
|
| 1634 |
+
32,
|
| 1635 |
+
478,
|
| 1636 |
+
47
|
| 1637 |
+
],
|
| 1638 |
+
"page_idx": 9
|
| 1639 |
+
},
|
| 1640 |
+
{
|
| 1641 |
+
"type": "page_number",
|
| 1642 |
+
"text": "10",
|
| 1643 |
+
"bbox": [
|
| 1644 |
+
490,
|
| 1645 |
+
946,
|
| 1646 |
+
506,
|
| 1647 |
+
959
|
| 1648 |
+
],
|
| 1649 |
+
"page_idx": 9
|
| 1650 |
+
},
|
| 1651 |
+
{
|
| 1652 |
+
"type": "list",
|
| 1653 |
+
"sub_type": "ref_text",
|
| 1654 |
+
"list_items": [
|
| 1655 |
+
"Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. Advances in neural information processing systems, 30, 2017.",
|
| 1656 |
+
"Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. Advances in neural information processing systems, 33:22118-22133, 2020a.",
|
| 1657 |
+
"Ziniu Hu, Yuxiao Dong, Kuansan Wang, and Yizhou Sun. Heterogeneous graph transformer. In WWW '20: The Web Conference 2020, Taipei, Taiwan, April 20-24, 2020, pp. 2704-2710. ACM / IW3C2, 2020b.",
|
| 1658 |
+
"Md Shamim Hussain, Mohammed J Zaki, and Dharmashankar Subramanian. Edge-augmented graph transformers: Global self-attention is enough for graphs. arXiv preprint arXiv:2108.03348, 2021.",
|
| 1659 |
+
"Ling Min Serena Khoo, Hai Leong Chieu, Zhong Qian, and Jing Jiang. Interpretable rumor detection in microblogs by attending to user interactions. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pp. 8783-8790, 2020.",
|
| 1660 |
+
"Devin Kreuzer, Dominique Beaini, Will Hamilton, Vincent Létourneau, and Prudencio Tossou. Rethinking graph transformers with spectral attention. Advances in Neural Information Processing Systems, 34, 2021.",
|
| 1661 |
+
"Hanxiao Liu, Karen Simonyan, and Yiming Yang. DARTS: Differentiable architecture search. In International Conference on Learning Representations, 2019.",
|
| 1662 |
+
"Ningning Ma, Xiangyu Zhang, Hai-Tao Zheng, and Jian Sun. Shufflenet v2: Practical guidelines for efficient cnn architecture design. In Proceedings of the European conference on computer vision (ECCV), pp. 116-131, 2018.",
|
| 1663 |
+
"Erxue Min, Runfa Chen, Yatao Bian, Tingyang Xu, Kangfei Zhao, Wenbing Huang, Peilin Zhao, Junzhou Huang, Sophia Ananiadou, and Yu Rong. Transformer for graphs: An overview from architecture perspective. arXiv preprint arXiv:2202.08455, 2022a.",
|
| 1664 |
+
"Erxue Min, Yu Rong, Tingyang Xu, Yatao Bian, Peilin Zhao, Junzhou Huang, Da Luo, Kangyi Lin, and Sophia Ananiadou. Masked transformer for neighbourhood-aware click-through rate prediction. arXiv preprint arXiv:2201.13311, 2022b.",
|
| 1665 |
+
"Christopher Morris, Nils M Kriege, Franka Bause, Kristian Kersting, Petra Mutzel, and Marion Neumann. Tudataset: A collection of benchmark datasets for learning with graphs. arXiv preprint arXiv:2007.08663, 2020.",
|
| 1666 |
+
"Hieu Pham, Melody Guan, Barret Zoph, Quoc Le, and Jeff Dean. Efficient neural architecture search via parameters sharing. In International conference on machine learning, pp. 4095-4104. PMLR, 2018.",
|
| 1667 |
+
"Yijian Qin, Xin Wang, Ziwei Zhang, Pengtao Xie, and Wenwu Zhu. Graph neural architecture search under distribution shifts. In International Conference on Machine Learning, pp. 18083-18095. PMLR, 2022a.",
|
| 1668 |
+
"Yijian Qin, Ziwei Zhang, Xin Wang, Zeyang Zhang, and Wenwu Zhu. Nas-bench-graph: Benchmarking graph neural architecture search. In Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2022b.",
|
| 1669 |
+
"Yu Rong, Yatao Bian, Tingyang Xu, Weiyang Xie, Ying Wei, Wenbing Huang, and Junzhou Huang. Self-supervised graph transformer on large-scale molecular data. Advances in Neural Information Processing Systems, 33:12559-12571, 2020.",
|
| 1670 |
+
"Yunsheng Shi, Zhengjie Huang, Shikun Feng, Hui Zhong, Wenjing Wang, and Yu Sun. Masked label prediction: Unified message passing model for semi-supervised classification. In International Joint Conference on Artificial Intelligence, pp. 1548-1554. ijcai.org, 2021.",
|
| 1671 |
+
"David R. So, Quoc V. Le, and Chen Liang. The evolved transformer. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pp. 5877-5886. PMLR, 2019."
|
| 1672 |
+
],
|
| 1673 |
+
"bbox": [
|
| 1674 |
+
171,
|
| 1675 |
+
102,
|
| 1676 |
+
828,
|
| 1677 |
+
924
|
| 1678 |
+
],
|
| 1679 |
+
"page_idx": 10
|
| 1680 |
+
},
|
| 1681 |
+
{
|
| 1682 |
+
"type": "header",
|
| 1683 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 1684 |
+
"bbox": [
|
| 1685 |
+
171,
|
| 1686 |
+
32,
|
| 1687 |
+
478,
|
| 1688 |
+
47
|
| 1689 |
+
],
|
| 1690 |
+
"page_idx": 10
|
| 1691 |
+
},
|
| 1692 |
+
{
|
| 1693 |
+
"type": "page_number",
|
| 1694 |
+
"text": "11",
|
| 1695 |
+
"bbox": [
|
| 1696 |
+
490,
|
| 1697 |
+
948,
|
| 1698 |
+
504,
|
| 1699 |
+
959
|
| 1700 |
+
],
|
| 1701 |
+
"page_idx": 10
|
| 1702 |
+
},
|
| 1703 |
+
{
|
| 1704 |
+
"type": "list",
|
| 1705 |
+
"sub_type": "ref_text",
|
| 1706 |
+
"list_items": [
|
| 1707 |
+
"Xin Wang, Yue Liu, Jiapei Fan, Weigao Wen, Hui Xue, and Wenwu Zhu. Continual few-shot learning with transformer adaptation and knowledge regularization. In Proceedings of the ACM Web Conference 2023, 2023.",
|
| 1708 |
+
"Lanning Wei, Huan Zhao, Quanming Yao, and Zhiqiang He. Pooling architecture search for graph classification. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, pp. 2091-2100, 2021.",
|
| 1709 |
+
"Lianghao Xia, Chao Huang, Yong Xu, Peng Dai, Xiyue Zhang, Hongsheng Yang, Jian Pei, and Liefeng Bo. Knowledge-enhanced hierarchical graph transformer network for multi-behavior recommendation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 4486-4493, 2021.",
|
| 1710 |
+
"Jin Xu, Xu Tan, Renqian Luo, Kaitao Song, Jian Li, Tao Qin, and Tie-Yan Liu. Nas-bert: task-agnostic and adaptive-size bert compression with neural architecture search. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pp. 1933–1943, 2021.",
|
| 1711 |
+
"Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019.",
|
| 1712 |
+
"Pinar Yanardag and SVN Vishwanathan. Deep graph kernels. In Proceedings of the 21th ACM SIGKDD international conference on knowledge discovery and data mining, pp. 1365-1374, 2015.",
|
| 1713 |
+
"Shaowei Yao, Tianming Wang, and Xiaojun Wan. Heterogeneous graph transformer for graph-to-sequence learning. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 7145-7154, 2020.",
|
| 1714 |
+
"Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, and Tie-Yan Liu. Do transformers really perform badly for graph representation? Advances in Neural Information Processing Systems, 34:28877-28888, 2021.",
|
| 1715 |
+
"Zhitao Ying, Jiaxuan You, Christopher Morris, Xiang Ren, Will Hamilton, and Jure Leskovec. Hierarchical graph representation learning with differentiable pooling. Advances in neural information processing systems, 31, 2018.",
|
| 1716 |
+
"Haopeng Zhang and Jiawei Zhang. Text graph transformer for document classification. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020.",
|
| 1717 |
+
"Jiawei Zhang, Haopeng Zhang, Congying Xia, and Li Sun. Graph-bert: Only attention is needed for learning graph representations. arXiv preprint arXiv:2001.05140, 2020.",
|
| 1718 |
+
"Muhan Zhang, Zhicheng Cui, Marion Neumann, and Yixin Chen. An end-to-end deep learning architecture for graph classification. In Proceedings of the AAAI conference on artificial intelligence, volume 32, 2018.",
|
| 1719 |
+
"Ziwei Zhang, Xin Wang, and Wenwu Zhu. Automated machine learning on graphs: A survey. In International Joint Conference on Artificial Intelligence, 2021.",
|
| 1720 |
+
"Jianan Zhao, Chaozhuo Li, Qianlong Wen, Yiqi Wang, Yuming Liu, Hao Sun, Xing Xie, and Yanfang Ye. Gophormer: Ego-graph transformer for node classification. arXiv preprint arXiv:2110.13094, 2021.",
|
| 1721 |
+
"Wei Zhu, Xiaoling Wang, Yuan Ni, and Guotong Xie. Autotrans: Automating transformer design via reinforced architecture search. In Natural Language Processing and Chinese Computing: 10th CCF International Conference, NLPCC 2021, Qingdao, China, October 13-17, 2021, Proceedings, Part I, pp. 169-182, 2021.",
|
| 1722 |
+
"Barret Zoph and Quoc Le. Neural architecture search with reinforcement learning. In International Conference on Learning Representations, 2017."
|
| 1723 |
+
],
|
| 1724 |
+
"bbox": [
|
| 1725 |
+
171,
|
| 1726 |
+
102,
|
| 1727 |
+
825,
|
| 1728 |
+
878
|
| 1729 |
+
],
|
| 1730 |
+
"page_idx": 11
|
| 1731 |
+
},
|
| 1732 |
+
{
|
| 1733 |
+
"type": "header",
|
| 1734 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 1735 |
+
"bbox": [
|
| 1736 |
+
171,
|
| 1737 |
+
32,
|
| 1738 |
+
478,
|
| 1739 |
+
47
|
| 1740 |
+
],
|
| 1741 |
+
"page_idx": 11
|
| 1742 |
+
},
|
| 1743 |
+
{
|
| 1744 |
+
"type": "page_number",
|
| 1745 |
+
"text": "12",
|
| 1746 |
+
"bbox": [
|
| 1747 |
+
488,
|
| 1748 |
+
946,
|
| 1749 |
+
508,
|
| 1750 |
+
960
|
| 1751 |
+
],
|
| 1752 |
+
"page_idx": 11
|
| 1753 |
+
},
|
| 1754 |
+
{
|
| 1755 |
+
"type": "text",
|
| 1756 |
+
"text": "A TRAINING PROCEDURE",
|
| 1757 |
+
"text_level": 1,
|
| 1758 |
+
"bbox": [
|
| 1759 |
+
171,
|
| 1760 |
+
102,
|
| 1761 |
+
405,
|
| 1762 |
+
118
|
| 1763 |
+
],
|
| 1764 |
+
"page_idx": 12
|
| 1765 |
+
},
|
| 1766 |
+
{
|
| 1767 |
+
"type": "text",
|
| 1768 |
+
"text": "We list the training procedure of our method in Algorithm 1.",
|
| 1769 |
+
"bbox": [
|
| 1770 |
+
171,
|
| 1771 |
+
133,
|
| 1772 |
+
570,
|
| 1773 |
+
148
|
| 1774 |
+
],
|
| 1775 |
+
"page_idx": 12
|
| 1776 |
+
},
|
| 1777 |
+
{
|
| 1778 |
+
"type": "code",
|
| 1779 |
+
"sub_type": "algorithm",
|
| 1780 |
+
"code_caption": [
|
| 1781 |
+
"Algorithm 1 Our proposed encoding-aware supernet training strategy"
|
| 1782 |
+
],
|
| 1783 |
+
"code_body": "1: Initialize. Supernet weights W, subnet weights $\\mathbf{W}_i$ , search space A, subspace $\\mathcal{A}_i$ , the split iteration $T_{s}$ the max iteration $T_{m}$ , a dataset $\\mathcal{D}$ \n2: for $t = 1:T_s$ do \n3: Sample architecture and encoding strategies $a\\in \\mathcal{A}$ \n4: Sample a batch of graph data $\\mathcal{D}_s\\subset \\mathcal{D}$ \n5: Calculate the training loss $\\mathcal{L}_{train}$ over the sampled data. \n6: Update the supernet weights W through gradient descents. \n7: end for \n8: for $i = 1:n$ do \n9: Let subnet $\\mathbf{W}_i$ inherit the weights from supernet W. \n10: for $t = T_s:T_m$ do \n11: Sample architectures and encoding strategies from the subspace $a\\in \\mathcal{A}_i$ \n12: Sample a batch of graph data $\\mathcal{D}_s\\in \\mathcal{D}$ \n13: Calculate the training loss $\\mathcal{L}_{train}$ over the sampled batch. \n14: Update the subnet weights $\\mathbf{W}_i$ through gradient descents. \n15: end for \n16: end for \n17: Output. Subnets with weights $\\mathbf{W}_i$",
|
| 1784 |
+
"bbox": [
|
| 1785 |
+
173,
|
| 1786 |
+
179,
|
| 1787 |
+
825,
|
| 1788 |
+
412
|
| 1789 |
+
],
|
| 1790 |
+
"page_idx": 12
|
| 1791 |
+
},
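For readers who prefer code, a compressed Python-style skeleton of Algorithm 1, under the assumption of hypothetical `sample()`, `train_step()`, and `sample_batch()` helpers; it only mirrors the control flow, not the real implementation:

```python
def train_supernets(supernet, subnets, space, subspaces, sample_batch, T_s, T_m):
    for t in range(T_s):                                  # lines 2-7: shared warm-up
        arch = space.sample()                             # architecture + encoding strategies
        supernet.train_step(arch, sample_batch())
    for i, subnet in enumerate(subnets):                  # lines 8-16: encoding-aware phase
        subnet.load_state_dict(supernet.state_dict())     # line 9: inherit shared weights
        for t in range(T_s, T_m):
            arch = subspaces[i].sample()                  # sample only from subspace A_i
            subnet.train_step(arch, sample_batch())
    return subnets                                        # line 17: output subnets W_i
```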
|
| 1792 |
+
{
|
| 1793 |
+
"type": "text",
|
| 1794 |
+
"text": "B DATASET",
|
| 1795 |
+
"text_level": 1,
|
| 1796 |
+
"bbox": [
|
| 1797 |
+
171,
|
| 1798 |
+
439,
|
| 1799 |
+
287,
|
| 1800 |
+
455
|
| 1801 |
+
],
|
| 1802 |
+
"page_idx": 12
|
| 1803 |
+
},
|
| 1804 |
+
{
|
| 1805 |
+
"type": "text",
|
| 1806 |
+
"text": "We provide the statistics of the adopted datasets in Table 6 and Table 7.",
|
| 1807 |
+
"bbox": [
|
| 1808 |
+
171,
|
| 1809 |
+
470,
|
| 1810 |
+
640,
|
| 1811 |
+
486
|
| 1812 |
+
],
|
| 1813 |
+
"page_idx": 12
|
| 1814 |
+
},
|
| 1815 |
+
{
|
| 1816 |
+
"type": "table",
|
| 1817 |
+
"img_path": "images/bba065bfcd283c98ed3cd13b54bbd8ec00f7e11c232358ec5bb46b91c6a28d1c.jpg",
|
| 1818 |
+
"table_caption": [
|
| 1819 |
+
"Table 6: Statistics of graph classification datasets (precision) used to compare AutoGT with baselines. We adopt five datasets with relatively small numbers of graphs (upper part) and one dataset with a larger size (lower part) to demonstrate the efficiency of the proposed AutoGT."
|
| 1820 |
+
],
|
| 1821 |
+
"table_footnote": [],
|
| 1822 |
+
"table_body": "<table><tr><td>Dataset</td><td>#Graph</td><td>#Class</td><td>#Avg. Nodes</td><td>#Avg. Edges</td><td># Node Feature</td><td># Edge Feature</td></tr><tr><td>COX2_MD</td><td>303</td><td>2</td><td>26.28</td><td>335.12</td><td>7</td><td>5</td></tr><tr><td>BZR_MD</td><td>306</td><td>2</td><td>21.3</td><td>225.06</td><td>8</td><td>5</td></tr><tr><td>PTC_FM</td><td>349</td><td>2</td><td>14.11</td><td>14.48</td><td>18</td><td>4</td></tr><tr><td>DHFR_MD</td><td>393</td><td>2</td><td>23.87</td><td>283.01</td><td>7</td><td>5</td></tr><tr><td>PROTEINS</td><td>1,133</td><td>2</td><td>39.06</td><td>72.82</td><td>3</td><td>0</td></tr><tr><td>DBLP</td><td>19,456</td><td>2</td><td>10.48</td><td>19.65</td><td>41,325</td><td>3</td></tr></table>",
|
| 1823 |
+
"bbox": [
|
| 1824 |
+
183,
|
| 1825 |
+
551,
|
| 1826 |
+
810,
|
| 1827 |
+
662
|
| 1828 |
+
],
|
| 1829 |
+
"page_idx": 12
|
| 1830 |
+
},
|
| 1831 |
+
{
|
| 1832 |
+
"type": "table",
|
| 1833 |
+
"img_path": "images/4b7799961ddfc2d0c87bf8215b60169ecdfea3e6730f120a65a2fb639f6100d1.jpg",
|
| 1834 |
+
"table_caption": [
|
| 1835 |
+
"Table 7: Statistics of graph classification datasets (AUC) used to compare AutoGT with baselines. We adopt two datasets with relatively small numbers of graphs (upper part) and one dataset with a larger size (lower part) to demonstrate the efficiency of the proposed AutoGT."
|
| 1836 |
+
],
|
| 1837 |
+
"table_footnote": [],
|
| 1838 |
+
"table_body": "<table><tr><td>Dataset</td><td>#Graph</td><td>#Class</td><td>#Avg. Nodes</td><td>#Avg. Edges</td><td># Node Feature</td><td># Edge Feature</td></tr><tr><td>OGBG-MolBACE</td><td>1,513</td><td>2</td><td>25.51</td><td>27.47</td><td>9</td><td>3</td></tr><tr><td>OGBG-MolBBBP</td><td>2,039</td><td>2</td><td>34.09</td><td>36.86</td><td>9</td><td>3</td></tr><tr><td>OGBG-MolHIV</td><td>41,127</td><td>2</td><td>24.06</td><td>25.95</td><td>9</td><td>3</td></tr></table>",
|
| 1839 |
+
"bbox": [
|
| 1840 |
+
171,
|
| 1841 |
+
736,
|
| 1842 |
+
836,
|
| 1843 |
+
809
|
| 1844 |
+
],
|
| 1845 |
+
"page_idx": 12
|
| 1846 |
+
},
|
| 1847 |
+
{
|
| 1848 |
+
"type": "text",
|
| 1849 |
+
"text": "C ADDITIONAL EXPERIMENTS",
|
| 1850 |
+
"text_level": 1,
|
| 1851 |
+
"bbox": [
|
| 1852 |
+
171,
|
| 1853 |
+
837,
|
| 1854 |
+
444,
|
| 1855 |
+
852
|
| 1856 |
+
],
|
| 1857 |
+
"page_idx": 12
|
| 1858 |
+
},
|
| 1859 |
+
{
|
| 1860 |
+
"type": "text",
|
| 1861 |
+
"text": "In Table 3, the results on COX2_MD and BZR_MD show larger standard deviations than other datasets. One plausible reason is that the number of graphs of these two datasets are relatively small, so that the results of the model can be sensitive to dataset splits. To obtain more convincing results on these two datasets, we conduct additional experiments by still utilizing 10-fold cross-validation for all",
|
| 1862 |
+
"bbox": [
|
| 1863 |
+
169,
|
| 1864 |
+
867,
|
| 1865 |
+
826,
|
| 1866 |
+
925
|
| 1867 |
+
],
|
| 1868 |
+
"page_idx": 12
|
| 1869 |
+
},
|
| 1870 |
+
{
|
| 1871 |
+
"type": "header",
|
| 1872 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 1873 |
+
"bbox": [
|
| 1874 |
+
171,
|
| 1875 |
+
32,
|
| 1876 |
+
478,
|
| 1877 |
+
47
|
| 1878 |
+
],
|
| 1879 |
+
"page_idx": 12
|
| 1880 |
+
},
|
| 1881 |
+
{
|
| 1882 |
+
"type": "page_number",
|
| 1883 |
+
"text": "13",
|
| 1884 |
+
"bbox": [
|
| 1885 |
+
488,
|
| 1886 |
+
946,
|
| 1887 |
+
506,
|
| 1888 |
+
959
|
| 1889 |
+
],
|
| 1890 |
+
"page_idx": 12
|
| 1891 |
+
},
|
| 1892 |
+
{
|
| 1893 |
+
"type": "table",
|
| 1894 |
+
"img_path": "images/8d07c192d470b9fbc835cbb1cd274e0a5206d64bd7e4a3281d1f45cb85395bc4.jpg",
|
| 1895 |
+
"table_caption": [
|
| 1896 |
+
"Table 8: Comparisons of AutoGT against state-of-the-art hand-crafted baselines. We report the average accuracy (\\%) and the standard deviation on all the datasets."
|
| 1897 |
+
],
|
| 1898 |
+
"table_footnote": [],
|
| 1899 |
+
"table_body": "<table><tr><td>Dataset</td><td>COX2_MD</td><td>BZR_MD</td></tr><tr><td>GIN</td><td>57.229.74</td><td>62.648.23</td></tr><tr><td>DGCNN</td><td>60.337.56</td><td>64.919.42</td></tr><tr><td>DiffPool</td><td>59.528.20</td><td>64.848.51</td></tr><tr><td>GraphSAGE</td><td>53.626.95</td><td>55.838.27</td></tr><tr><td>Graphormer</td><td>59.227.04</td><td>64.539.43</td></tr><tr><td>AutoGT(ours)</td><td>63.458.04</td><td>67.189.87</td></tr></table>",
|
| 1900 |
+
"bbox": [
|
| 1901 |
+
346,
|
| 1902 |
+
141,
|
| 1903 |
+
645,
|
| 1904 |
+
260
|
| 1905 |
+
],
|
| 1906 |
+
"page_idx": 13
|
| 1907 |
+
},
|
| 1908 |
+
{
|
| 1909 |
+
"type": "table",
|
| 1910 |
+
"img_path": "images/1fc1431330de0ccf482148863789c05fe1e77dd3055b343bd13588d18dd01075.jpg",
|
| 1911 |
+
"table_caption": [
|
| 1912 |
+
"Table 9: Comparisons of AutoGT with the different number of supernets. We report the average accuracy $(\\%)$ and the standard deviation on the datasets."
|
| 1913 |
+
],
|
| 1914 |
+
"table_footnote": [],
|
| 1915 |
+
"table_body": "<table><tr><td>Dataset</td><td>PROTEINS</td></tr><tr><td>1 supernet</td><td>75.923.10</td></tr><tr><td>2 supernet</td><td>76.733.25</td></tr><tr><td>4 supernet</td><td>76.913.35</td></tr><tr><td>8 supernet</td><td>77.173.40</td></tr><tr><td>16 supernet</td><td>77.273.65</td></tr></table>",
|
| 1916 |
+
"bbox": [
|
| 1917 |
+
398,
|
| 1918 |
+
313,
|
| 1919 |
+
594,
|
| 1920 |
+
412
|
| 1921 |
+
],
|
| 1922 |
+
"page_idx": 13
|
| 1923 |
+
},
|
| 1924 |
+
{
|
| 1925 |
+
"type": "text",
|
| 1926 |
+
"text": "the baselines and our proposed method, and repeat the 10-fold cross-validation with 10 random seeds. We report the results in Table 8. The results are consistent with Table 3, i.e., our method consistently outperforms other baselines, while the standard deviations are considerably smaller by adopting more repeated experiments.",
|
| 1927 |
+
"bbox": [
|
| 1928 |
+
169,
|
| 1929 |
+
436,
|
| 1930 |
+
826,
|
| 1931 |
+
494
|
| 1932 |
+
],
|
| 1933 |
+
"page_idx": 13
|
| 1934 |
+
},
|
| 1935 |
+
{
|
| 1936 |
+
"type": "text",
|
| 1937 |
+
"text": "In addition, to further explore how the number of supernets affects our proposed method, we carry out experiments with 1, 2, 4, 8, 16 supernets on the PROTEINS dataset, and report our results in Table 9. We can observe that as the number of subnets increases, the performance of our method increases. One possible reason is that more well-trained subnets can bring more consistent performance estimation results, which improves performance.",
|
| 1938 |
+
"bbox": [
|
| 1939 |
+
169,
|
| 1940 |
+
500,
|
| 1941 |
+
826,
|
| 1942 |
+
571
|
| 1943 |
+
],
|
| 1944 |
+
"page_idx": 13
|
| 1945 |
+
},
|
| 1946 |
+
{
|
| 1947 |
+
"type": "header",
|
| 1948 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 1949 |
+
"bbox": [
|
| 1950 |
+
171,
|
| 1951 |
+
32,
|
| 1952 |
+
478,
|
| 1953 |
+
47
|
| 1954 |
+
],
|
| 1955 |
+
"page_idx": 13
|
| 1956 |
+
},
|
| 1957 |
+
{
|
| 1958 |
+
"type": "page_number",
|
| 1959 |
+
"text": "14",
|
| 1960 |
+
"bbox": [
|
| 1961 |
+
488,
|
| 1962 |
+
946,
|
| 1963 |
+
508,
|
| 1964 |
+
959
|
| 1965 |
+
],
|
| 1966 |
+
"page_idx": 13
|
| 1967 |
+
}
|
| 1968 |
+
]
|
2023/AutoGT_ Automated Graph Transformer Architecture Search/d4f0ae85-f4df-4a55-84ac-a8865afd4325_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2023/AutoGT_ Automated Graph Transformer Architecture Search/d4f0ae85-f4df-4a55-84ac-a8865afd4325_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:eea38db6b1ff219e50152c34b6cb8f5843cf76589855440794910dfdcabe4959
|
| 3 |
+
size 1103302
|
2023/AutoGT_ Automated Graph Transformer Architecture Search/full.md
ADDED
|
@@ -0,0 +1,383 @@
| 1 |
+
# AUTOGT: AUTOMATED GRAPH TRANSFORMER ARCHITECTURE SEARCH
|
| 2 |
+
|
| 3 |
+
Zizhao Zhang $^{1}$ , Xin Wang $^{1,2*}$ , Chaoyu Guan $^{1}$ , Ziwei Zhang $^{1}$ , Haoyang Li $^{1}$ , Wenwu Zhu $^{1*}$
|
| 4 |
+
|
| 5 |
+
<sup>1</sup>Department of Computer Science and Technology, Tsinghua University
|
| 6 |
+
|
| 7 |
+
$^{2}$ THU-Bosch JCML Center, Tsinghua University
|
| 8 |
+
|
| 9 |
+
{zzz22, guancy19, lihy18}@mails.tsinghua.edu.cn
|
| 10 |
+
|
| 11 |
+
{xin_wang, zwzhang, wwzhu}@tsinghua.edu.cn
|
| 12 |
+
|
| 13 |
+
# ABSTRACT
|
| 14 |
+
|
| 15 |
+
Although Transformer architectures have been successfully applied to graph data with the advent of Graph Transformer, the current design of Graph Transformers still heavily relies on human labor and expert knowledge to decide on proper neural architectures and suitable graph encoding strategies at each Transformer layer. In the literature, there have been some works on the automated design of Transformers focusing on non-graph data such as texts and images without considering graph encoding strategies, which fail to handle non-Euclidean graph data. In this paper, we study the problem of automated graph Transformers for the first time. Solving this problem poses the following challenges: i) how can we design a unified search space for graph Transformers, and ii) how to deal with the coupling relations between Transformer architectures and the graph encodings of each Transformer layer. To address these challenges, we propose Automated Graph Transformer (AutoGT), a neural architecture search framework that can automatically discover the optimal graph Transformer architectures by jointly optimizing the Transformer architecture and the graph encoding strategies. Specifically, we first propose a unified graph Transformer formulation that can represent most state-of-the-art graph Transformer architectures. Based upon the unified formulation, we further design the graph Transformer search space that includes both candidate architectures and various graph encodings. To handle the coupling relations, we propose a novel encoding-aware performance estimation strategy that gradually trains and splits the supernets according to the correlations between graph encodings and architectures. The proposed strategy can provide a more consistent and fine-grained performance prediction when evaluating the jointly optimized graph encodings and architectures. Extensive experiments and ablation studies show that our proposed AutoGT achieves consistent improvements over state-of-the-art hand-crafted baselines on all datasets, demonstrating its effectiveness and wide applicability.
|
| 16 |
+
|
| 17 |
+
# 1 INTRODUCTION
|
| 18 |
+
|
| 19 |
+
Recently, designing Transformer for graph data has attracted intensive research interests (Dwivedi & Bresson, 2020; Ying et al., 2021). As a powerful architecture to extract meaningful information from relational data, the graph Transformers have been successfully applied in natural language processing (Zhang & Zhang, 2020; Cai & Lam, 2020; Wang et al., 2023), social networks (Hu et al., 2020b), chemistry (Chen et al., 2019; Rong et al., 2020), recommendation (Xia et al., 2021) etc. However, developing a state-of-the-art graph Transformer for downstream tasks is still challenging because it heavily relies on the tedious trial-and-error hand-crafted human design, including determining the best Transformer architecture and the choices of proper graph encoding strategies to utilize, etc. In addition, the inefficient hand-crafted design will also inevitably introduce human bias, which leads to sub-optimal solutions for developing graph transformers. In literature, there have been works on automatically searching for the architectures of Transformer, which are designed specifically for data
|
| 20 |
+
|
| 21 |
+
in Natural Language Processing (Xu et al., 2021) and Computer Vision (Chen et al., 2021b). These works focus only on non-graph data and do not consider graph encoding strategies, which are shown to be very important in capturing graph information (Min et al., 2022a); they thus fail to handle graph data with its non-Euclidean properties.
|
| 22 |
+
|
| 23 |
+
In this paper, we study the problem of automated graph Transformers for the first time. Previous work (Min et al., 2022a) has demonstrated that a good graph Transformer is expected not only to select proper neural architectures for every layer but also to utilize appropriate encoding strategies capable of capturing various meaningful graph structure information to boost performance. Therefore, there exist two critical challenges for automated graph Transformers:
|
| 24 |
+
|
| 25 |
+
- How to design a unified search space appropriate for graph Transformers? A good graph Transformer needs to handle non-Euclidean graph data, requiring explicit consideration of node relations within the search space, where the architectures as well as the encoding strategies can be incorporated simultaneously.
|
| 26 |
+
- How to conduct an encoding-aware architecture search that tackles the coupling relations between Transformer architectures and graph encodings? A simple solution is a one-shot formulation enabling efficient search over the vanilla Transformer operation space, whose operations can change their functionality during supernet training; however, graph encoding strategies differ from vanilla Transformer operations in that they carry specific meanings tied to structure information. How to train an encoding-aware supernet specifically designed for graphs is therefore challenging.
|
| 27 |
+
|
| 28 |
+
To address these challenges, we propose Automated Graph Transformer, AutoGT<sup>1</sup>, a novel neural architecture search method for graph Transformers. In particular, we propose a unified graph Transformer formulation that covers most state-of-the-art graph Transformer architectures in our search space. Besides the general Transformer search space over the hidden dimension, feed-forward dimension, number of attention heads, attention head dimension, and number of layers, our unified search space introduces two new kinds of augmentation strategies to attain graph information: node attribution augmentation and attention map augmentation. To handle the coupling relations, we further propose a novel encoding-aware performance estimation strategy tailored for graphs. Since the encoding strategy and the architecture are strongly coupled when generating results, AutoGT splits the supernet based on the most important encoding strategy during evaluation. Specifically, we propose to gradually train and split the supernets according to the most coupled augmentation, the attention map augmentation, using different supernets to evaluate different architectures in our unified search space, which provides a more consistent and fine-grained performance prediction when evaluating the jointly optimized architecture and encoding. In summary, we make the following contributions:
|
| 29 |
+
|
| 30 |
+
- We propose Automated Graph Transformer, AutoGT, a novel neural architecture search framework for graph Transformers, which can automatically discover the optimal graph Transformer architectures for various downstream tasks. To the best of our knowledge, AutoGT is the first automated graph Transformer framework.
|
| 31 |
+
- We design a unified search space containing both the Transformer architectures and the essential graph encoding strategies, covering most state-of-the-art graph Transformers, which enables jointly searching for the best way to exploit structure information and node information.
|
| 32 |
+
- We propose an encoding-aware performance estimation strategy tailored for graphs to provide a more accurate and consistent performance prediction without bringing heavier computation costs. The encoding strategy and the Transformer architecture are jointly optimized to discover the best graph Transformers.
|
| 33 |
+
- The extensive experiments show that our proposed AutoGT model can significantly outperform the state-of-the-art baselines on graph classification tasks over several datasets with different scales.
|
| 34 |
+
|
| 35 |
+
# 2 RELATED WORK
|
| 36 |
+
|
| 37 |
+
The Graph Transformer. Graph Transformer, as a category of neural networks, enables Transformer to handle graph data (Min et al., 2022a). Several works (Dwivedi & Bresson, 2020; Ying et al.,
|
| 38 |
+
|
| 39 |
+
2021; Hussain et al., 2021; Zhang et al., 2020; Kreuzer et al., 2021; Shi et al., 2021) propose to pre-calculate node positional encodings from the graph structure and add them to the node attributes after a linear or embedding layer. Some works (Dwivedi & Bresson, 2020; Zhao et al., 2021; Ying et al., 2021; Khoo et al., 2020) also propose to add manually designed graph structural information to the attention matrix in Transformer layers. Others (Yao et al., 2020; Min et al., 2022b) explore the mask mechanism in the attention matrix, masking the influence of non-neighbor nodes. In particular, UniMP (Shi et al., 2021) achieves new state-of-the-art results on OGB (Hu et al., 2020a) datasets, and Graphormer (Ying et al., 2021) won first place in the KDD Cup challenge on large-scale graph classification by encoding various information about graph structures into the graph Transformer.
|
| 40 |
+
|
| 41 |
+
Neural Architecture Search. Neural architecture search has drawn increasing attention in the past few years (Elsken et al., 2019; Zoph & Le, 2017; Ma et al., 2018; Pham et al., 2018; Wei et al., 2021; Cai et al., 2022; Guan et al., 2021b; 2022; Qin et al., 2022a;b; Zhang et al., 2021). There have been many efforts to automate the design of Transformers. So et al. (2019) propose the first automated framework for Transformers in neural machine translation tasks. AutoTrans (Zhu et al., 2021) improves the search efficiency of the NLP Transformer through one-shot supernet training. NAS-BERT (Xu et al., 2021) further leverages neural architecture search for large language model distillation and compression. AutoFormer (Chen et al., 2021b) brings Transformer automation to vision tasks, utilizing weight entanglement to improve the consistency of supernet training. GLiT (Chen et al., 2021a) proposes to search both global and local attention for the Vision Transformer using a hierarchical evolutionary search algorithm. Chen et al. (2021c) further propose to evolve the search space of the Vision Transformer to solve the exponential explosion problem.
|
| 42 |
+
|
| 43 |
+
# 3 AUTOMATED GRAPH TRANSFORMER ARCHITECTURE SEARCH (AUTOGT)
|
| 44 |
+
|
| 45 |
+
To automatically design graph Transformer architectures, we first unify the formulation of current graph Transformers in Section 3.1. Based on the unified formulation, we design a search space tailored for graph Transformers in Section 3.2. We then propose a novel encoding-aware performance estimation strategy in Section 3.3 and introduce our evolutionary search strategy in Section 3.4. The whole algorithm is illustrated in Figure 2.
|
| 46 |
+
|
| 47 |
+
# 3.1 THE UNIFIED GRAPH TRANSFORMER FRAMEWORK
|
| 48 |
+
|
| 49 |
+
Current representative graph Transformer designs can be regarded as improving the input and attention map in Transformer architecture through various graph encoding strategies. We first introduce the basic Transformer architecture and then show how to combine various graph encoding strategies.
|
| 50 |
+
|
| 51 |
+
Let $G = (V,E)$ denote a graph where $V = \{v_{1},v_{2},\dots ,v_{n}\}$ represents the set of nodes and $E = \{e_1,e_2,\dots ,e_m\}$ represents the set of edges, and denote $n = |V|$ and $m = |E|$ as the number of nodes and edges, respectively. Let $\mathbf{v}_i, i\in \{1,\ldots ,n\}$ represent the features of node $v_{i}$, and $\mathbf{e}_j, j\in \{1,\dots,m\}$ represent the features of edge $e_j$.
|
| 52 |
+
|
| 53 |
+
# 3.1.1 BASIC TRANSFORMER
|
| 54 |
+
|
| 55 |
+

|
| 56 |
+
Figure 1: The unified graph Transformer search space. It consists of the Transformer architecture space and the graph-specific encoding space. The Transformer architecture search space is detailed in Table 1. The graph-specific encoding search space decides whether each encoding strategy should be adopted, as well as the mask threshold for the attention mask.
|
| 57 |
+
|
| 58 |
+
As shown in Figure 1, a basic Transformer consists of several stacked blocks, with each block containing two modules, namely the multi-head attention (MHA) module and the feed-forward network (FFN) module.
|
| 59 |
+
|
| 60 |
+
At block $l$, the node representations $\mathbf{H}^{(l)} \in \mathbb{R}^{n \times d}$ first go through the MHA module, where nodes interact with each other and pass information through self-attention:
|
| 61 |
+
|
| 62 |
+
$$
|
| 63 |
+
\mathbf{A}_h^{(l)} = \operatorname{softmax}\left(\frac{\mathbf{Q}_h^{(l)} \mathbf{K}_h^{(l)T}}{\sqrt{d_k}}\right), \quad \mathbf{O}_h^{(l)} = \mathbf{A}_h^{(l)} \mathbf{V}_h^{(l)}, \tag{1}
|
| 64 |
+
$$
|
| 65 |
+
|
| 66 |
+
where $\mathbf{A}_h^{(l)}\in \mathbb{R}^{n\times n}$ is the message passing matrix, $\mathbf{O}_h^{(l)}$ is the output of the self-attention mechanism of the $h^{th}$ attention head, $h = 1,2,\dots ,Head$ , Head is the number of attention heads, and $\mathbf{K}_h^{(l)},\mathbf{Q}_h^{(l)},\mathbf{V}_h^{(l)}\in \mathbb{R}^{n\times d_k}$ are the key, query, value calculated as:
|
| 67 |
+
|
| 68 |
+
$$
|
| 69 |
+
\mathbf{K}_h^{(l)} = \mathbf{H}^{(l)} \mathbf{W}_{k,h}^{(l)}, \quad \mathbf{Q}_h^{(l)} = \mathbf{H}^{(l)} \mathbf{W}_{q,h}^{(l)}, \quad \mathbf{V}_h^{(l)} = \mathbf{H}^{(l)} \mathbf{W}_{v,h}^{(l)}, \tag{2}
|
| 70 |
+
$$
|
| 71 |
+
|
| 72 |
+
where $\mathbf{W}_{k,h}^{(l)},\mathbf{W}_{q,h}^{(l)},\mathbf{W}_{v,h}^{(l)}\in \mathbb{R}^{d\times d_k}$ are learnable parameters. Then, the representations of different heads are concatenated and further transformed as:
|
| 73 |
+
|
| 74 |
+
$$
|
| 75 |
+
\mathbf{O}^{(l)} = \left(\mathbf{O}_1^{(l)} \circ \mathbf{O}_2^{(l)} \circ \dots \circ \mathbf{O}_{Head}^{(l)}\right) \mathbf{W}_O^{(l)} + \mathbf{H}^{(l)}, \tag{3}
|
| 76 |
+
$$
|
| 77 |
+
|
| 78 |
+
where $\mathbf{W}_O^{(l)}\in \mathbb{R}^{(d_k \cdot Head)\times d_t}$ is a learnable parameter and $\mathbf{O}^{(l)}$ is the multi-head result. Then, the attended representation goes through the FFN module to further refine the information of each node:
|
| 79 |
+
|
| 80 |
+
$$
|
| 81 |
+
\mathbf {H} ^ {(l + 1)} = \sigma (\mathbf {O} ^ {(l)} \mathbf {W} _ {1} ^ {(l)}) \mathbf {W} _ {2} ^ {(l)}, \tag {4}
|
| 82 |
+
$$
|
| 83 |
+
|
| 84 |
+
where $\mathbf{O}^{(l)}\in \mathbb{R}^{n\times d_t}$ is the output, and $\mathbf{W}_1^{(l)}\in \mathbb{R}^{d_t\times d_h}$, $\mathbf{W}_2^{(l)}\in \mathbb{R}^{d_h\times d}$ are weight matrices.
|
| 85 |
+
|
| 86 |
+
As for the input of the first block, we stack all the node features into $\mathbf{H}^{(0)} = [\mathbf{v}_1,\dots,\mathbf{v}_n]$. After $L$ blocks, we obtain the final representation of each node $\mathbf{H}^{(L)}$.
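To make the block structure of Eqs. (1)–(4) concrete, below is a minimal PyTorch sketch of one such block with fused per-head projections. The class and argument names are illustrative assumptions, not the AutoGT implementation, and layer normalization is omitted to match the equations above.

```python
import math
import torch
import torch.nn as nn

class BasicTransformerBlock(nn.Module):
    """One block of Eqs. (1)-(4): multi-head self-attention followed by an FFN (sketch)."""

    def __init__(self, d, d_k, num_heads, d_t, d_h):
        super().__init__()
        assert d_t == d, "the residual in Eq. (3) requires d_t == d in this sketch"
        self.d_k, self.num_heads = d_k, num_heads
        self.W_q = nn.Linear(d, d_k * num_heads, bias=False)     # fused per-head W_{q,h}
        self.W_k = nn.Linear(d, d_k * num_heads, bias=False)     # fused per-head W_{k,h}
        self.W_v = nn.Linear(d, d_k * num_heads, bias=False)     # fused per-head W_{v,h}
        self.W_o = nn.Linear(d_k * num_heads, d_t, bias=False)   # W_O in Eq. (3)
        self.W_1 = nn.Linear(d_t, d_h, bias=False)               # W_1 in Eq. (4)
        self.W_2 = nn.Linear(d_h, d, bias=False)                 # W_2 in Eq. (4)

    def forward(self, H):  # H: [n, d] node representations of the current block
        n = H.size(0)
        split = lambda x: x.view(n, self.num_heads, self.d_k).transpose(0, 1)  # [Head, n, d_k]
        Q, K, V = split(self.W_q(H)), split(self.W_k(H)), split(self.W_v(H))
        A = torch.softmax(Q @ K.transpose(-2, -1) / math.sqrt(self.d_k), dim=-1)  # Eq. (1)
        O = (A @ V).transpose(0, 1).reshape(n, -1)     # concatenate heads
        O = self.W_o(O) + H                            # Eq. (3) with residual connection
        return self.W_2(torch.relu(self.W_1(O)))       # Eq. (4)
```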
|
| 87 |
+
|
| 88 |
+
# 3.1.2 GRAPH ENCODING STRATEGY
|
| 89 |
+
|
| 90 |
+
From Section 3.1.1, we can observe that directly using the basic Transformer on graphs only processes node attributes, ignoring the important edge attributes and graph topology. To make the Transformer architecture aware of the graph structure, several works resort to various graph encoding strategies, which can be divided into two categories: node attribution augmentation and attention map augmentation.
|
| 91 |
+
|
| 92 |
+
The node attribution augmentations take the whole graph $G$ as input and generate the topology-aware features $Enc_{node}(G)$ for each node to directly improve the node representations:
|
| 93 |
+
|
| 94 |
+
$$
|
| 95 |
+
\mathbf{H}_{aug}^{(l)} = \mathbf{H}^{(l)} + Enc_{node}(G). \tag{5}
|
| 96 |
+
$$
|
| 97 |
+
|
| 98 |
+
On the other hand, the attention map augmentations generate an additional attention map $Enc_{map}(G)$, which represents the relationship between any two nodes and modifies the attention map generated by self-attention in Eq. (1) as:
|
| 99 |
+
|
| 100 |
+
$$
|
| 101 |
+
\mathbf{A}_{h,aug}^{(l)} = \operatorname{softmax}\left(\frac{\mathbf{Q}_h^{(l)} \mathbf{K}_h^{(l)T}}{\sqrt{d_k}} + Enc_{map}(G)\right). \tag{6}
|
| 102 |
+
$$
|
| 103 |
+
|
| 104 |
+
Combining node attribution augmentations and attention map augmentations together, our proposed framework is as follows:
|
| 105 |
+
|
| 106 |
+
$$
|
| 107 |
+
\mathbf{H}^{(l+1)} = \sigma\left(\operatorname{Concat}\left(\operatorname{softmax}\left(\frac{\mathbf{H}_{aug}^{(l)} \mathbf{W}_{q,h}^{(l)} \left(\mathbf{H}_{aug}^{(l)} \mathbf{W}_{k,h}^{(l)}\right)^T}{\sqrt{d_k}} + Enc_{map}(G)\right) \mathbf{H}_{aug}^{(l)} \mathbf{W}_{v,h}^{(l)}\right) \mathbf{W}_1^{(l)}\right) \mathbf{W}_2^{(l)}, \tag{7}
|
| 108 |
+
$$
|
| 109 |
+
|
| 110 |
+
where $\mathbf{H}_{\text{aug}}^{(l)} = \mathbf{H}^{(l)} + \text{Enc}_{\text{node}}(G)$ .
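As a single-head sketch of how the two augmentation types enter the computation of Eqs. (5)–(7), assume `enc_node` and `enc_map` are precomputed per graph; the function and argument names are illustrative, and the multi-head concatenation of Eq. (7) is omitted for brevity.

```python
import math
import torch

def augmented_layer(H, enc_node, enc_map, W_q, W_k, W_v, W_1, W_2, d_k):
    """Single-head sketch of Eqs. (5)-(7).

    H:        [n, d] node representations
    enc_node: [n, d] Enc_node(G), e.g. centrality / Laplacian / SVD encodings
    enc_map:  [n, n] Enc_map(G), e.g. spatial / edge / proximity biases or an attention mask
    """
    H_aug = H + enc_node                                        # Eq. (5)
    scores = (H_aug @ W_q) @ (H_aug @ W_k).T / math.sqrt(d_k)
    A = torch.softmax(scores + enc_map, dim=-1)                 # Eq. (6)
    O = A @ (H_aug @ W_v)                                       # attended node representations
    return torch.relu(O @ W_1) @ W_2                            # FFN of Eq. (7)
```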
|
| 111 |
+
|
| 112 |
+
# 3.2 THE GRAPH TRANSFORMER SEARCH SPACE
|
| 113 |
+
|
| 114 |
+
Based on the unified graph Transformer formulation, we propose our unified search space design, which can be decomposed into two parts, i.e., Transformer Architecture space and graph encoding space. Figure 1 shows the unified graph Transformer search space.
|
| 115 |
+
|
| 116 |
+

|
| 117 |
+
Figure 2: The framework of our work. Firstly, we construct the search space for each layer, consisting of the Transformer architecture space (above) and the graph encoding strategy space (below). Then, we carry out our encoding-aware supernet training method in two stages: before splitting, we train a supernet by randomly sampling architectures from the search space, while after splitting, we train multiple subnets (inheriting the weights from the supernet) by randomly sampling architectures with fixed attention map augmentation strategies (except for the attention mask). Finally, we conduct an evolutionary search based on the subnets and obtain our final architecture and results.
|
| 118 |
+
|
| 119 |
+
# 3.2.1 TRANSFORMER ARCHITECTURE SPACE
|
| 120 |
+
|
| 121 |
+
Following Section 3.1.1, we automate six key architecture components of the graph Transformer: the number of encoder layers $L$, the input dimension $d$, the intermediate dimension $d_{t}$, the hidden dimension $d_{h}$, the number of attention heads $Head$, and the attention head dimension $d_{k}$. Notice that these six components already cover the most important design choices for Transformer architectures.
|
| 122 |
+
|
| 123 |
+
A suitable search space should be not only expressive enough to cover powerful architectures but also compact enough to enable efficient search. With this principle in mind, we propose two search spaces for these components with different size ranges. Table 1 gives the detailed choices for both spaces.
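For illustration, the smaller AutoGT<sub>base</sub> space in Table 1 can be encoded as a dictionary of per-component choices from which candidate architectures are uniformly sampled. This is only a sketch: the key names are assumptions, and the actual space is defined per layer.

```python
import random

# Illustrative encoding of the AutoGT_base architecture choices in Table 1.
AUTOGT_BASE_SPACE = {
    "num_layers": [2, 3, 4],
    "input_dim": [24, 28, 32],          # d
    "intermediate_dim": [24, 28, 32],   # d_t
    "hidden_dim": [24, 28, 32],         # d_h
    "num_heads": [2, 3, 4],
    "head_dim": [6, 8],                 # d_k
}

def sample_architecture(space=AUTOGT_BASE_SPACE):
    """Uniformly sample one candidate architecture from the space."""
    return {name: random.choice(choices) for name, choices in space.items()}
```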
|
| 124 |
+
|
| 125 |
+
# 3.2.2 GRAPH ENCODING SPACE
|
| 126 |
+
|
| 127 |
+
To exploit the potential of the graph encoding strategies, we further determine whether, and which, graph encoding strategies to use at each layer of the graph Transformer. Specifically, we explore the node attribution augmentations and attention map augmentations below.
|
| 128 |
+
|
| 129 |
+
# Node Attribution Augmentations:
|
| 130 |
+
|
| 131 |
+
- Centrality Encoding (Ying et al., 2021). Use two node embeddings with the same size representing the in-degree and the out-degree of nodes, i.e.,
|
| 132 |
+
|
| 133 |
+
$$
|
| 134 |
+
h_i^{(l)} = x_i^{(l)} + z^-_{\deg^-(v_i)} + z^+_{\deg^+(v_i)} \tag{8}
|
| 135 |
+
$$
|
| 136 |
+
|
| 137 |
+
where $h_i^{(l)}$ is the input embedding at layer $l$, $x_i^{(l)}$ is the input attribute of node $i$ at layer $l$, and $z^{-}$ and $z^{+}$ are the embeddings generated from the in-degree and out-degree, respectively.
|
| 138 |
+
|
| 139 |
+
- Laplacian Eigenvector (Dwivedi & Bresson, 2020). Conducting spectral decomposition of the graph Laplacian matrix:
|
| 140 |
+
|
| 141 |
+
$$
|
| 142 |
+
\mathbf {U} ^ {T} \boldsymbol {\Lambda} \mathbf {U} = \mathbf {I} - \mathbf {D} ^ {- 1 / 2} \mathbf {A} ^ {G} \mathbf {D} ^ {- 1 / 2} \tag {9}
|
| 143 |
+
$$
|
| 144 |
+
|
| 145 |
+
Table 1: The Transformer Architecture Search Space for AutoGT<sub>base</sub> and AutoGT.
|
| 146 |
+
|
| 147 |
+
<table><tr><td></td><td colspan="2">AutoGTbase</td><td colspan="2">AutoGT</td></tr><tr><td></td><td>Choices</td><td>Supernet Size</td><td>Choices</td><td>Supernet Size</td></tr><tr><td>#Layers</td><td>{2,3,4}</td><td>4</td><td>{5,6,7,8}</td><td>8</td></tr><tr><td>Input Dimension d</td><td>{24,28,32}</td><td>32</td><td>{96,112,128}</td><td>128</td></tr><tr><td>Intermediate Dimension dt</td><td>{24,28,32}</td><td>32</td><td>{96,112,128}</td><td>128</td></tr><tr><td>Hidden Dimension dh</td><td>{24,28,32}</td><td>32</td><td>{96,112,128}</td><td>128</td></tr><tr><td>#Attention Heads</td><td>{2,3,4}</td><td>4</td><td>{6,7,8}</td><td>8</td></tr><tr><td>Attention Head Dimension dk</td><td>{6,8}</td><td>8</td><td>{12,14,16}</td><td>16</td></tr></table>
|
| 148 |
+
|
| 149 |
+
where $\mathbf{A}^G$ is the adjacency matrix of graph $G$, $\mathbf{D}$ is the diagonal degree matrix, and $\mathbf{U}$ and $\Lambda$ contain the eigenvectors and eigenvalues, respectively. We only select the eigenvectors of the $k$ smallest non-zero eigenvalues as the final embedding and concatenate them to the input node attribute matrix of each layer.
|
| 150 |
+
|
| 151 |
+
- SVD-based Positional Encoding (Hussain et al., 2021). Conducting singular value decomposition to the graph adjacency matrix:
|
| 152 |
+
|
| 153 |
+
$$
|
| 154 |
+
\mathbf{A}^G \stackrel{\text{SVD}}{\approx} \mathbf{U} \boldsymbol{\Sigma} \mathbf{V}^T = \left(\mathbf{U}\sqrt{\boldsymbol{\Sigma}}\right) \cdot \left(\mathbf{V}\sqrt{\boldsymbol{\Sigma}}\right)^T = \hat{\mathbf{U}} \hat{\mathbf{V}}^T \tag{10}
|
| 155 |
+
$$
|
| 156 |
+
|
| 157 |
+
where $\mathbf{U},\mathbf{V}\in \mathbb{R}^{n\times r}$ contain the left and right singular vectors of the top $r$ singular values in the diagonal matrix $\boldsymbol{\Sigma}\in \mathbb{R}^{r\times r}$. Without loss of generality, we only choose $\hat{\mathbf{U}}$ as the final embedding, since $\hat{\mathbf{U}}$ and $\hat{\mathbf{V}}$ are highly correlated for symmetric graphs (differing only in signs). Similar to the Laplacian eigenvector encoding, we concatenate it to the input node attribute matrix of each layer (see the sketch after this list).
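The Laplacian and SVD-based encodings above can be precomputed per graph from the adjacency matrix. The following NumPy sketch is one possible realization; the function names, the tolerance for zero eigenvalues, and the handling of isolated nodes are assumptions.

```python
import numpy as np

def laplacian_pe(A, k):
    """Eq. (9): eigenvectors of the k smallest non-zero eigenvalues of the normalized Laplacian."""
    deg = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg, dtype=float)
    d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
    L = np.eye(A.shape[0]) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    eigval, eigvec = np.linalg.eigh(L)              # eigenvalues in ascending order
    idx = np.where(eigval > 1e-8)[0][:k]            # skip the (near-)zero eigenvalues
    return eigvec[:, idx]                           # [n, k] node encoding

def svd_pe(A, r):
    """Eq. (10): keep U_hat = U * sqrt(Sigma) for the top-r singular values of A."""
    U, S, _ = np.linalg.svd(A, full_matrices=False)  # singular values in descending order
    return U[:, :r] * np.sqrt(S[:r])                 # [n, r] node encoding
```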
|
| 158 |
+
|
| 159 |
+
# Attention Map Augmentations Space:
|
| 160 |
+
|
| 161 |
+
- Spatial Encoding (Ying et al., 2021). Spatial encoding is added to the attention result before softmax:
|
| 162 |
+
|
| 163 |
+
$$
|
| 164 |
+
A _ {i j} = \frac {\left(h _ {i} W _ {Q}\right) \left(h _ {j} W _ {K}\right) ^ {T}}{\sqrt {d _ {k}}} + b _ {\phi \left(v _ {i}, v _ {j}\right)} \tag {11}
|
| 165 |
+
$$
|
| 166 |
+
|
| 167 |
+
where $\phi(v_i, v_j)$ is the length of the shortest path from $v_i$ to $v_j$ , and $b \in \mathbb{R}$ is a weight parameter generated by $\phi(v_i, v_j)$ .
|
| 168 |
+
|
| 169 |
+
- Edge Encoding (Ying et al., 2021). Edge encoding is added to the attention result before softmax:
|
| 170 |
+
|
| 171 |
+
$$
|
| 172 |
+
A _ {i j} = \frac {\left(h _ {i} W _ {Q}\right) \left(h _ {j} W _ {K}\right) ^ {T}}{\sqrt {d _ {k}}} + \frac {1}{N} \sum_ {n = 1} ^ {N} x _ {e _ {n}} \left(w _ {n} ^ {E}\right) ^ {T} \tag {12}
|
| 173 |
+
$$
|
| 174 |
+
|
| 175 |
+
where $x_{e_n}$ is the feature of the $n$ -th edge $e_n$ on the shortest path between $v_i$ and $v_j$ , and $w_n^E$ is the $n$ -th learnable embedding vector.
|
| 176 |
+
|
| 177 |
+
- Proximity-Enhanced Attention (Zhao et al., 2021). Proximity-Enhanced Attention is added to the attention result before softmax:
|
| 178 |
+
|
| 179 |
+
$$
|
| 180 |
+
A _ {i j} = \frac {\left(h _ {i} W _ {Q}\right) \left(h _ {j} W _ {K}\right) ^ {T}}{\sqrt {d _ {k}}} + \phi_ {i j} ^ {T} b \tag {13}
|
| 181 |
+
$$
|
| 182 |
+
|
| 183 |
+
where $b \in \mathbb{R}^{M \times 1}$ is a learnable parameter, $\phi_{ij} = \mathrm{Concat}(\Phi_m(v_i, v_j) | m \in \{0, 1, \dots, M-1\})$ is the structural encoding generated from: $\Phi_m(v_i, v_j) = \tilde{\mathbf{A}}^m[i, j]$ , where $\tilde{\mathbf{A}} = \mathrm{Norm}(\mathbf{A} + \mathbf{I})$ represents the normalized adjacency matrix. Thus the augmentation denotes the reachable probabilities between nodes.
|
| 184 |
+
|
| 185 |
+
- Attention Mask (Min et al., 2022b; Yao et al., 2020). Attention Mask is added to the attention result before softmax:
|
| 186 |
+
|
| 187 |
+
$$
|
| 188 |
+
A_{ij} = \frac{\left(h_i W_Q\right)\left(h_j W_K\right)^T}{\sqrt{d_k}} + \operatorname{Mask}_m\left(v_i, v_j\right) \tag{14}
|
| 189 |
+
$$
|
| 190 |
+
|
| 191 |
+
where $m$ is the mask threshold and $\mathrm{Mask}_m(v_i,v_j)$ depends on the relationship between $m$ and $\phi(v_{i},v_{j})$, the shortest path length between $v_{i}$ and $v_{j}$. When $m\geq \phi (v_i,v_j)$, $\mathrm{Mask}_m(v_i,v_j) = 0$; otherwise, $\mathrm{Mask}_m(v_i,v_j) = -\infty$, effectively masking the corresponding attention entry (see the sketch after this list).
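The attention mask of Eq. (14), like the shortest-path buckets used by the spatial encoding of Eq. (11), only needs the all-pairs shortest-path matrix, which can be precomputed per graph. A small SciPy-based sketch, with illustrative names:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def attention_mask_bias(A, m):
    """Eq. (14): 0 where the shortest-path length is at most m, -inf elsewhere."""
    spd = shortest_path(A, unweighted=True)   # [n, n] shortest-path lengths (inf if unreachable)
    return np.where(spd <= m, 0.0, -np.inf)   # added to the attention logits before softmax
```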
|
| 192 |
+
|
| 193 |
+
Table 2: Comparison of our proposed unified framework with state-of-the-art graph Transformer models. CE, LPE, SVD, SE, EE, PMA, Mask denote Centrality Encoding, Laplacian Eigenvector, SVD-based Positional Encoding, Spatial Encoding, Edge Encoding, Proximity-Enhanced Attention, and Attention Mask respectively.
|
| 194 |
+
|
| 195 |
+
<table><tr><td></td><td>CE</td><td>LPE</td><td>SVD</td><td>SE</td><td>EE</td><td>PMA</td><td>Mask</td></tr><tr><td>EGT (Hussain et al., 2021)</td><td></td><td></td><td>✓</td><td></td><td></td><td></td><td></td></tr><tr><td>Gophormer (Zhao et al., 2021)</td><td></td><td></td><td></td><td></td><td></td><td>✓</td><td></td></tr><tr><td>Graph Trans (Dwivedi & Bresson, 2020)</td><td></td><td>✓</td><td></td><td></td><td></td><td></td><td>✓</td></tr><tr><td>Graphormer (Ying et al., 2021)</td><td>✓</td><td></td><td></td><td>✓</td><td>✓</td><td></td><td></td></tr><tr><td>Ours</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td></tr></table>
|
| 196 |
+
|
| 197 |
+
# 3.3 ENCODING-AWARE SUPERNET TRAINING
|
| 198 |
+
|
| 199 |
+
We next introduce our proposed encoding-aware performance estimation strategy for efficient training.
|
| 200 |
+
|
| 201 |
+
Similar to general NAS problems, the graph Transformer architecture search can be formulated as a bi-level optimization problem:
|
| 202 |
+
|
| 203 |
+
$$
|
| 204 |
+
a^* = \operatorname*{argmax}_{a \in \mathcal{A}} Acc_{val}\left(\mathbf{W}^*(a), a\right), \quad \text{s.t.} \quad \mathbf{W}^*(a) = \operatorname*{argmin}_{\mathbf{W}} \mathcal{L}_{train}(\mathbf{W}, a), \tag{15}
|
| 205 |
+
$$
|
| 206 |
+
|
| 207 |
+
where $a \in \mathcal{A}$ is an architecture in the search space $\mathcal{A}$, $Acc_{val}$ stands for the validation accuracy, $\mathbf{W}$ represents the learnable weights, and $a^*$ and $\mathbf{W}^*(a)$ denote the optimal architecture and the optimal weights for architecture $a$, respectively.
|
| 208 |
+
|
| 209 |
+
Following one-shot NAS methods (Liu et al., 2019; Pham et al., 2018), we encode all candidate architectures in the search space into a supernet and transform Eq. (15) into a two-step optimization (Guo et al., 2020):
|
| 210 |
+
|
| 211 |
+
$$
|
| 212 |
+
a^* = \operatorname*{argmax}_{a \in \mathcal{A}} Acc_{val}\left(\mathbf{W}^*, a\right), \quad \mathbf{W}^* = \operatorname*{argmin}_{\mathbf{W}} \mathbb{E}_{a \in \mathcal{A}}\left[\mathcal{L}_{train}(\mathbf{W}, a)\right], \tag{16}
|
| 213 |
+
$$
|
| 214 |
+
|
| 215 |
+
where $\mathbf{W}$ denotes the shared learnable weights in the supernet with its optimal value $\mathbf{W}^*$ for all the architectures in the search space.
|
| 216 |
+
|
| 217 |
+
To further improve the optimization efficiency of the supernet training, we leverage weight entanglement (Guan et al., 2021a; Chen et al., 2021b; Guo et al., 2020) to deeply share the weights of architectures with different hidden sizes. Specifically, for every architecture sampled from the supernet, we use a 0-1 mask to discard unnecessary hidden channels instead of maintaining a new set of weights. In this way, the number of parameters in the supernet will remain the same as the largest (i.e., with the most parameters) model in the search space, thus leading to efficient optimization.
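A minimal sketch of the channel-masking idea described above: a single set of weights sized for the largest candidate is kept, and a sampled architecture simply selects (equivalently, 0-1 masks) the channels it needs. The class and argument names are illustrative assumptions, not the AutoGT implementation.

```python
import torch
import torch.nn as nn

class EntangledLinear(nn.Module):
    """Shared linear weights of the largest size; smaller candidates reuse a slice of them."""

    def __init__(self, max_in, max_out):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(max_out, max_in) * 0.02)

    def forward(self, x, in_dim, out_dim):
        # Using only the top-left (out_dim x in_dim) slice is equivalent to applying a
        # 0-1 mask over the unused channels, so no extra per-size weights are maintained.
        return x @ self.weight[:out_dim, :in_dim].T
```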
|
| 218 |
+
|
| 219 |
+
Although this strategy is fast and convenient, using the same supernet parameters $\mathbf{W}$ for all architectures decreases the consistency between the supernet's estimation and the ground-truth architecture performance. To improve the consistency and accuracy of the supernet, we propose an encoding-aware supernet training strategy. Based on how strongly the different encoding strategies are coupled with the architecture, we split the search space into sub-spaces according to whether each of three attention map augmentation strategies is adopted: spatial encoding, edge encoding, and proximity-enhanced attention. Therefore, there are $2^{3} = 8$ supernets.
|
| 220 |
+
|
| 221 |
+
To be specific, we first train a single supernet for a certain number of epochs and then split it into 8 subnets according to the sub-spaces. Afterwards, we continue to train the weights of each subnet $\mathbf{W}_i$ by only sampling architectures from the corresponding subspace $\mathcal{A}_{i}$. Experiments supporting this design are provided in Section 4.
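The $2^3 = 8$ sub-spaces enumerate all on/off combinations of the three attention map augmentations; a small sketch of how a sampled encoding configuration can be routed to its subnet (names are illustrative):

```python
from itertools import product

# All 8 sub-spaces: on/off flags for (spatial encoding, edge encoding, proximity-enhanced attention).
SUBSPACES = [
    {"spatial": s, "edge": e, "proximity": p}
    for s, e, p in product([False, True], repeat=3)
]

def subnet_index(spatial, edge, proximity):
    """Map a sampled attention-map-augmentation configuration to its subnet id (0..7)."""
    return SUBSPACES.index({"spatial": spatial, "edge": edge, "proximity": proximity})
```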
|
| 222 |
+
|
| 223 |
+
# 3.4 EVOLUTIONARY SEARCH
|
| 224 |
+
|
| 225 |
+
Similar to other NAS research, our proposed graph Transformer search space is too large to enumerate. Therefore, we utilize an evolutionary algorithm to efficiently explore the search space and obtain the architecture with the best accuracy on the validation dataset.
|
| 226 |
+
|
| 227 |
+
Specifically, we first maintain a population of $T$ architectures obtained by random sampling. Then, we evolve the architectures through our designed mutation and crossover operations. In the mutation operation, we randomly choose from the top-$k$ architectures with the highest performance in the
|
| 228 |
+
|
| 229 |
+
Table 3: Comparisons of AutoGT against state-of-the-art hand-crafted baselines. We report the average accuracy $(\%)$ and the standard deviation on all the datasets. Out-of-time (OOT) indicates the method cannot produce results in 1 GPU day.
|
| 230 |
+
|
| 231 |
+
<table><tr><td>Dataset</td><td>COX2_MD</td><td>BZR_MD</td><td>PTC_FM</td><td>DHFR_MD</td><td>PROTEINS</td><td>DBLP</td></tr><tr><td>GIN</td><td>45.82±14.35</td><td>59.68±14.65</td><td>57.87±8.86</td><td>62.88±8.26</td><td>73.76±4.61</td><td>91.18±0.42</td></tr><tr><td>DGCNN</td><td>54.81±18.51</td><td>62.74±20.59</td><td>62.17±3.62</td><td>63.89±5.91</td><td>72.68±3.75</td><td>91.57±0.54</td></tr><tr><td>DiffPool</td><td>51.45±14.28</td><td>65.01±14.74</td><td>60.16±5.87</td><td>61.06±9.42</td><td>73.31±3.75</td><td>OOT</td></tr><tr><td>GraphSAGE</td><td>49.59±12.80</td><td>57.43±13.50</td><td>64.17±3.28</td><td>66.92±2.35</td><td>67.19±6.97</td><td>51.01±0.02</td></tr><tr><td>Graphormer</td><td>56.39±15.03</td><td>63.94±12.58</td><td>64.88±7.58</td><td>64.88±7.58</td><td>75.29±3.10</td><td>89.36±2.31</td></tr><tr><td>GT(ours)</td><td>54.44±16.84</td><td>63.33±11.67</td><td>64.18±2.60</td><td>65.68±5.64</td><td>73.94±3.78</td><td>90.67±1.01</td></tr><tr><td>AutoGT(ours)</td><td>59.72±23.26</td><td>65.92±10.00</td><td>65.60±3.71</td><td>68.22±5.02</td><td>77.17±3.40</td><td>91.66±0.79</td></tr></table>
|
| 232 |
+
|
| 233 |
+
Table 4: Comparisons of AutoGT against state-of-the-art hand-crafted baselines. We report the area under the curve (AUC) [%] and the standard deviation on all the datasets.
|
| 234 |
+
|
| 235 |
+
<table><tr><td>Dataset</td><td>OGBG-MolHIV</td><td>OGBG-MolBACE</td><td>OGBG-MolBBBP</td></tr><tr><td>GIN</td><td>71.11±2.57</td><td>70.42±4.78</td><td>63.37±1.81</td></tr><tr><td>DGCNN</td><td>69.97±2.16</td><td>75.62±2.64</td><td>60.92±1.78</td></tr><tr><td>DiffPool</td><td>74.58±1.71</td><td>73.87±4.50</td><td>66.68±6.08</td></tr><tr><td>GraphSAGE</td><td>67.82±3.67</td><td>72.91±1.24</td><td>64.19±3.50</td></tr><tr><td>Graphormer</td><td>71.89±2.66</td><td>76.42±1.67</td><td>66.52±0.74</td></tr><tr><td>AutoGT(ours)</td><td>74.95±1.02</td><td>76.70±1.42</td><td>67.29±1.46</td></tr></table>
|
| 236 |
+
|
| 237 |
+
last generation and change their architecture choices with certain probabilities. In the crossover operation, we randomly select pairs of architectures with the same number of layers from the remaining architectures and randomly swap their architecture choices (see the sketch below).
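A sketch of the two evolutionary operators described above, with architectures represented as dictionaries of choices; the mutation probability and helper names are assumptions, and returning a single child per crossover is a simplification.

```python
import random

def mutate(arch, space, p=0.2):
    """Resample each architecture/encoding choice with probability p (mutation)."""
    return {k: (random.choice(space[k]) if random.random() < p else v)
            for k, v in arch.items()}

def crossover(parent_a, parent_b):
    """Randomly inherit each choice from one of two parents with the same number of layers."""
    assert parent_a["num_layers"] == parent_b["num_layers"]
    return {k: random.choice([parent_a[k], parent_b[k]]) for k in parent_a}
```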
|
| 238 |
+
|
| 239 |
+
# 4 EXPERIMENTS
|
| 240 |
+
|
| 241 |
+
In this section, we present detailed experimental results as well as the ablation studies to empirically show the effectiveness of our proposed AutoGT.
|
| 242 |
+
|
| 243 |
+
Datasets and Baselines. We first consider six graph classification datasets from the Deep Graph Kernels benchmark (Yanardag & Vishwanathan, 2015) and TUDataset (Morris et al., 2020), namely COX2_MD, BZR_MD, PTC_FM, DHFR_MD, PROTEINS, and DBLP. We also adopt three datasets from the Open Graph Benchmark (OGB) (Hu et al., 2020a): OGBG-MolHIV, OGBG-MolBACE, and OGBG-MolBBBP. The task is to predict the label of each graph using node/edge attributes and graph structures. The detailed statistics of the datasets are shown in Table 6 in the appendix.
|
| 244 |
+
|
| 245 |
+
We compare AutoGT with state-of-the-art hand-crafted baselines, including GIN (Xu et al., 2019), DGCNN (Zhang et al., 2018), DiffPool (Ying et al., 2018), GraphSAGE (Hamilton et al., 2017), and Graphormer (Ying et al., 2021). Notice that Graphormer is a state-of-the-art graph Transformer architecture that won first place in the graph classification task of KDD Cup 2021 (OGB-LSC).
|
| 246 |
+
|
| 247 |
+
For all the datasets, we follow Errica et al. (2020) and utilize 10-fold cross-validation for all the baselines and our proposed method. All the hyper-parameters and training strategies of the baselines are implemented according to the publicly available code (Errica et al., 2020)$^{2}$.
|
| 248 |
+
|
| 249 |
+
Implementation Details. Recall that our proposed architecture space has two variants, a larger AutoGT $(L = 8, d = 128)$ and a smaller AutoGT$_{\text{base}}$ $(L = 4, d = 32)$. In our experiments, we adopt the smaller search space for the five relatively small datasets, i.e., all datasets except DBLP, and the larger search space for DBLP. We use the Adam optimizer with a learning rate of $3e{-}4$. For the smaller/larger datasets, we set the number of iterations before splitting (i.e., $T_{s}$ in Algorithm 1 in the Appendix) to 50/6 and the maximum number of iterations (i.e., $T_{m}$ in Algorithm 1) to 200/50. The batch size is 128. The hyperparameters of the baselines are kept consistent with our method for a fair comparison.
|
| 250 |
+
|
| 251 |
+
We also report the results of our unified framework from Section 3.1, i.e., mixing all the encodings in our search space within the supernet but without the search, denoted as GT (Graph Transformer).
|
| 252 |
+
|
| 253 |
+
Experimental Results. We report the results in Table 3 and make the following observations. First, AutoGT consistently outperforms all the existing hand-crafted methods on all datasets, demonstrating the effectiveness of our proposed method. Second, Graphormer shows remarkable performance and achieves the second-best results on three datasets, showing the great potential of Transformer architectures for processing graph data. However, since Graphormer is a manually designed architecture that cannot adapt to different datasets, it fails to be as effective as our proposed automatic solution. Lastly, GT, our proposed unified framework, fails to show strong performance in most cases. The results indicate that simply mixing different graph Transformer encodings cannot produce satisfactory results, demonstrating the importance of searching for effective architectures to handle different datasets.
|
| 254 |
+
|
| 255 |
+
We also conduct experiments on Open Graph Benchmark (OGB) (Hu et al., 2020a). On the three binary classification datasets of OGB, we report the AUC score of our method and all the baselines. The results also show that our method outperforms all the hand-crafted baselines on these datasets.
|
| 256 |
+
|
| 257 |
+
Time Cost. We further compare the time cost of AutoGT with that of the hand-crafted graph Transformer Graphormer. On the OGBG-MolHIV dataset, both Graphormer and AutoGT take about 2 minutes per epoch on a single GPU. The default Graphormer is trained for 300 epochs, which costs 10 hours to obtain the result for one random seed. For AutoGT, we train the shared supernet for 50 epochs, after which the 8 subnets inherit its weights and continue training for 150 epochs each, so the training stage costs $50 + 8 \times 150 = 1250$ epochs in total, about 40 hours. In the evolutionary search stage, we evaluate the inherited-weight performance of 2000 architectures, which costs about 900 epochs, roughly 30 hours. In summary, the total time cost of AutoGT is only about 7 times that of training the hand-crafted graph Transformer Graphormer.
|
| 258 |
+
|
| 259 |
+
Ablation Studies. We verify the effectiveness of the proposed encoding-aware supernet training strategy by reporting the results on the PROTEINS dataset, while other datasets show similar patterns.
|
| 260 |
+
|
| 261 |
+
To show the importance of considering encoding strategies when training the supernet, we design two variants of AutoGT and compare the results:
|
| 262 |
+
|
| 263 |
+
- One-Shot. We only train a single supernet and use it to evaluate all the architectures.
|
| 264 |
+
- Positional-Aware. We also split the supernet into 8 subnets, but based on the three node attribution augmentations instead of the three attention map augmentations as in AutoGT.
|
| 265 |
+
|
| 266 |
+
The results of AutoGT and the two variants are shown in Table 5. From the table, we observe that, compared with the one-shot NAS result, both the positional-aware variant and AutoGT achieve different levels of improvement. Further comparing the accuracy gains, we find that the gain of AutoGT (1.25%) is nearly 5 times larger than that of the positional-aware variant (0.27%), even though both methods adopt 8 subnets. We attribute this significant difference in accuracy gain from supernet splitting to the different degrees of coupling between the graph encoding strategies and the Transformer architecture.
|
| 267 |
+
|
| 268 |
+
Table 5: The ablation study on the effectiveness of the proposed encoding-aware supernet training strategy. We report the average accuracy [%] and the standard deviation on PROTEINS.
|
| 269 |
+
|
| 270 |
+
<table><tr><td>Method</td><td>Accuracy</td></tr><tr><td>One-Shot</td><td>75.92±3.10</td></tr><tr><td>Positional-Aware</td><td>76.19±3.42</td></tr><tr><td>AutoGT</td><td>77.17±3.40</td></tr></table>
|
| 271 |
+
|
| 272 |
+
For example, the dimensionality of the node attribution augmentation is the same as the number of nodes, while the attention map augmentation has quadratic dimensionality, resulting in different degrees of coupling. Our proposed encoding-aware performance estimation based on the three attention map augmentation strategies is thus shown to be effective in practice.
|
| 273 |
+
|
| 274 |
+
# 5 CONCLUSION
|
| 275 |
+
|
| 276 |
+
In this paper, we propose AutoGT, a neural architecture search framework for graph Transformers. We design a search space tailored for graph Transformer architectures and an encoding-aware supernet training strategy that provides reliable graph Transformer supernets while accounting for various graph encoding strategies. Our method integrates existing graph Transformers into a unified framework, where different Transformer encodings can enhance each other. Extensive experiments demonstrate that our proposed AutoGT consistently outperforms state-of-the-art baselines on all datasets, showing its strength on various graph tasks.
|
| 277 |
+
|
| 278 |
+
# ACKNOWLEDGEMENTS
|
| 279 |
+
|
| 280 |
+
This work was supported in part by the National Key Research and Development Program of China No. 2020AAA0106300, National Natural Science Foundation of China (No. 62250008, 62222209, 62102222, 61936011, 62206149), China National Postdoctoral Program for Innovative Talents No. BX20220185, and China Postdoctoral Science Foundation No. 2022M711813, Tsinghua GuoQiang Research Center Grant 2020GQG1014 and partially funded by THU-Bosch JCML Center. All opinions, findings, and conclusions in this paper are those of the authors and do not necessarily reflect the views of the funding agencies.
|
| 281 |
+
|
| 282 |
+
# REFERENCES
|
| 283 |
+
|
| 284 |
+
Deng Cai and Wai Lam. Graph transformer for graph-to-sequence learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pp. 7464-7471, 2020.
|
| 285 |
+
Jie Cai, Xin Wang, Chaoyu Guan, Yateng Tang, Jin Xu, Bin Zhong, and Wenwu Zhu. Multimodal continual graph learning with neural architecture search. In Proceedings of the ACM Web Conference 2022, pp. 1292-1300, 2022.
|
| 286 |
+
Benson Chen, Regina Barzilay, and T. Jaakkola. Path-augmented graph transformer network. ArXiv, abs/1905.12712, 2019.
|
| 287 |
+
Boyu Chen, Peixia Li, Chuming Li, Baopu Li, Lei Bai, Chen Lin, Ming Sun, Junjie Yan, and Wanli Ouyang. Glit: Neural architecture search for global and local image transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 12-21, October 2021a.
|
| 288 |
+
Minghao Chen, Houwen Peng, Jianlong Fu, and Haibin Ling. Autoformer: Searching transformers for visual recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 12270-12280, October 2021b.
|
| 289 |
+
Minghao Chen, Kan Wu, Bolin Ni, Houwen Peng, Bei Liu, Jianlong Fu, Hongyang Chao, and Haibin Ling. Searching the search space of vision transformer. Advances in Neural Information Processing Systems, 34:8714-8726, 2021c.
|
| 290 |
+
Vijay Prakash Dwivedi and Xavier Bresson. A generalization of transformer networks to graphs. arXiv preprint arXiv:2012.09699, 2020.
|
| 291 |
+
Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Neural architecture search: A survey. The Journal of Machine Learning Research, 20(1):1997-2017, 2019.
|
| 292 |
+
Federico Errica, Marco Podda, Davide Bacciu, and Alessio Micheli. A fair comparison of graph neural networks for graph classification. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020.
|
| 293 |
+
Chaoyu Guan, Ziwei Zhang, Haoyang Li, Heng Chang, Zeyang Zhang, Yijian Qin, Jiyan Jiang, Xin Wang, and Wenwu Zhu. Autogl: A library for automated graph learning. In ICLR 2021 Workshop on Geometrical and Topological Representation Learning.
|
| 294 |
+
Chaoyu Guan, Yijian Qin, Zhikun Wei, Zeyang Zhang, Zizhao Zhang, Xin Wang, and Wenwu Zhu. One-shot neural channel search: What works and what's next. In CVPR Workshop on NAS, 2021a.
|
| 295 |
+
Chaoyu Guan, Xin Wang, and Wenwu Zhu. Autoattend: Automated attention representation search. In International conference on machine learning, pp. 3864-3874. PMLR, 2021b.
|
| 296 |
+
Chaoyu Guan, Xin Wang, Hong Chen, Ziwei Zhang, and Wenwu Zhu. Large-scale graph neural architecture search. In International Conference on Machine Learning, pp. 7968-7981. PMLR, 2022.
|
| 297 |
+
Zichao Guo, Xiangyu Zhang, Haoyuan Mu, Wen Heng, Zechun Liu, Yichen Wei, and Jian Sun. Single path one-shot neural architecture search with uniform sampling. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XVI, pp. 544-560, 2020.
|
| 298 |
+
|
| 299 |
+
Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. Advances in neural information processing systems, 30, 2017.
|
| 300 |
+
Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. Advances in neural information processing systems, 33:22118-22133, 2020a.
|
| 301 |
+
Ziniu Hu, Yuxiao Dong, Kuansan Wang, and Yizhou Sun. Heterogeneous graph transformer. In WWW '20: The Web Conference 2020, Taipei, Taiwan, April 20-24, 2020, pp. 2704-2710. ACM / IW3C2, 2020b.
|
| 302 |
+
Md Shamim Hussain, Mohammed J Zaki, and Dharmashankar Subramanian. Edge-augmented graph transformers: Global self-attention is enough for graphs. arXiv preprint arXiv:2108.03348, 2021.
|
| 303 |
+
Ling Min Serena Khoo, Hai Leong Chieu, Zhong Qian, and Jing Jiang. Interpretable rumor detection in microblogs by attending to user interactions. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pp. 8783-8790, 2020.
|
| 304 |
+
Devin Kreuzer, Dominique Beaini, Will Hamilton, Vincent Létourneau, and Prudencio Tossou. Rethinking graph transformers with spectral attention. Advances in Neural Information Processing Systems, 34, 2021.
|
| 305 |
+
Hanxiao Liu, Karen Simonyan, and Yiming Yang. DARTS: Differentiable architecture search. In International Conference on Learning Representations, 2019.
|
| 306 |
+
Ningning Ma, Xiangyu Zhang, Hai-Tao Zheng, and Jian Sun. Shufflenet v2: Practical guidelines for efficient cnn architecture design. In Proceedings of the European conference on computer vision (ECCV), pp. 116-131, 2018.
|
| 307 |
+
Erxue Min, Runfa Chen, Yatao Bian, Tingyang Xu, Kangfei Zhao, Wenbing Huang, Peilin Zhao, Junzhou Huang, Sophia Ananiadou, and Yu Rong. Transformer for graphs: An overview from architecture perspective. arXiv preprint arXiv:2202.08455, 2022a.
|
| 308 |
+
Erxue Min, Yu Rong, Tingyang Xu, Yatao Bian, Peilin Zhao, Junzhou Huang, Da Luo, Kangyi Lin, and Sophia Ananiadou. Masked transformer for neighbourhood-aware click-through rate prediction. arXiv preprint arXiv:2201.13311, 2022b.
|
| 309 |
+
Christopher Morris, Nils M Kriege, Franka Bause, Kristian Kersting, Petra Mutzel, and Marion Neumann. Tudataset: A collection of benchmark datasets for learning with graphs. arXiv preprint arXiv:2007.08663, 2020.
|
| 310 |
+
Hieu Pham, Melody Guan, Barret Zoph, Quoc Le, and Jeff Dean. Efficient neural architecture search via parameters sharing. In International conference on machine learning, pp. 4095-4104. PMLR, 2018.
|
| 311 |
+
Yijian Qin, Xin Wang, Ziwei Zhang, Pengtao Xie, and Wenwu Zhu. Graph neural architecture search under distribution shifts. In International Conference on Machine Learning, pp. 18083-18095. PMLR, 2022a.
|
| 312 |
+
Yijian Qin, Ziwei Zhang, Xin Wang, Zeyang Zhang, and Wenwu Zhu. Nas-bench-graph: Benchmarking graph neural architecture search. In Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2022b.
|
| 313 |
+
Yu Rong, Yatao Bian, Tingyang Xu, Weiyang Xie, Ying Wei, Wenbing Huang, and Junzhou Huang. Self-supervised graph transformer on large-scale molecular data. Advances in Neural Information Processing Systems, 33:12559-12571, 2020.
|
| 314 |
+
Yunsheng Shi, Zhengjie Huang, Shikun Feng, Hui Zhong, Wenjing Wang, and Yu Sun. Masked label prediction: Unified message passing model for semi-supervised classification. In International Joint Conference on Artificial Intelligence, pp. 1548-1554. ijcai.org, 2021.
|
| 315 |
+
David R. So, Quoc V. Le, and Chen Liang. The evolved transformer. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pp. 5877-5886. PMLR, 2019.
|
| 316 |
+
|
| 317 |
+
Xin Wang, Yue Liu, Jiapei Fan, Weigao Wen, Hui Xue, and Wenwu Zhu. Continual few-shot learning with transformer adaptation and knowledge regularization. In Proceedings of the ACM Web Conference 2023, 2023.
|
| 318 |
+
Lanning Wei, Huan Zhao, Quanming Yao, and Zhiqiang He. Pooling architecture search for graph classification. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, pp. 2091-2100, 2021.
|
| 319 |
+
Lianghao Xia, Chao Huang, Yong Xu, Peng Dai, Xiyue Zhang, Hongsheng Yang, Jian Pei, and Liefeng Bo. Knowledge-enhanced hierarchical graph transformer network for multi-behavior recommendation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 4486-4493, 2021.
|
| 320 |
+
Jin Xu, Xu Tan, Renqian Luo, Kaitao Song, Jian Li, Tao Qin, and Tie-Yan Liu. Nas-bert: task-agnostic and adaptive-size bert compression with neural architecture search. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pp. 1933–1943, 2021.
|
| 321 |
+
Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019.
|
| 322 |
+
Pinar Yanardag and SVN Vishwanathan. Deep graph kernels. In Proceedings of the 21th ACM SIGKDD international conference on knowledge discovery and data mining, pp. 1365-1374, 2015.
|
| 323 |
+
Shaowei Yao, Tianming Wang, and Xiaojun Wan. Heterogeneous graph transformer for graph-to-sequence learning. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 7145-7154, 2020.
|
| 324 |
+
Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, and Tie-Yan Liu. Do transformers really perform badly for graph representation? Advances in Neural Information Processing Systems, 34:28877-28888, 2021.
|
| 325 |
+
Zhitao Ying, Jiaxuan You, Christopher Morris, Xiang Ren, Will Hamilton, and Jure Leskovec. Hierarchical graph representation learning with differentiable pooling. Advances in neural information processing systems, 31, 2018.
|
| 326 |
+
Haopeng Zhang and Jiawei Zhang. Text graph transformer for document classification. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020.
|
| 327 |
+
Jiawei Zhang, Haopeng Zhang, Congying Xia, and Li Sun. Graph-bert: Only attention is needed for learning graph representations. arXiv preprint arXiv:2001.05140, 2020.
|
| 328 |
+
Muhan Zhang, Zhicheng Cui, Marion Neumann, and Yixin Chen. An end-to-end deep learning architecture for graph classification. In Proceedings of the AAAI conference on artificial intelligence, volume 32, 2018.
|
| 329 |
+
Ziwei Zhang, Xin Wang, and Wenwu Zhu. Automated machine learning on graphs: A survey. In International Joint Conference on Artificial Intelligence, 2021.
|
| 330 |
+
Jianan Zhao, Chaozhuo Li, Qianlong Wen, Yiqi Wang, Yuming Liu, Hao Sun, Xing Xie, and Yanfang Ye. Gophormer: Ego-graph transformer for node classification. arXiv preprint arXiv:2110.13094, 2021.
|
| 331 |
+
Wei Zhu, Xiaoling Wang, Yuan Ni, and Guotong Xie. Autotrans: Automating transformer design via reinforced architecture search. In Natural Language Processing and Chinese Computing: 10th CCF International Conference, NLPCC 2021, Qingdao, China, October 13-17, 2021, Proceedings, Part I, pp. 169-182, 2021.
|
| 332 |
+
Barret Zoph and Quoc Le. Neural architecture search with reinforcement learning. In International Conference on Learning Representations, 2017.
|
| 333 |
+
|
| 334 |
+
# A TRAINING PROCEDURE
|
| 335 |
+
|
| 336 |
+
We list the training procedure of our method in Algorithm 1.
|
| 337 |
+
|
| 338 |
+
Algorithm 1 Our proposed encoding-aware supernet training strategy
|
| 339 |
+
1: Initialize. Supernet weights $\mathbf{W}$, subnet weights $\mathbf{W}_i$, search space $\mathcal{A}$, subspaces $\mathcal{A}_i$, the split iteration $T_{s}$, the maximum iteration $T_{m}$, a dataset $\mathcal{D}$.
|
| 340 |
+
2: for $t = 1:T_s$ do
|
| 341 |
+
3: Sample architecture and encoding strategies $a\in \mathcal{A}$
|
| 342 |
+
4: Sample a batch of graph data $\mathcal{D}_s\subset \mathcal{D}$
|
| 343 |
+
5: Calculate the training loss $\mathcal{L}_{train}$ over the sampled data.
|
| 344 |
+
6: Update the supernet weights W through gradient descents.
|
| 345 |
+
7: end for
|
| 346 |
+
8: for $i = 1:n$ do
|
| 347 |
+
9: Let subnet $\mathbf{W}_i$ inherit the weights from supernet W.
|
| 348 |
+
10: for $t = T_s:T_m$ do
|
| 349 |
+
11: Sample architectures and encoding strategies from the subspace $a\in \mathcal{A}_i$
|
| 350 |
+
12: Sample a batch of graph data $\mathcal{D}_s\subset \mathcal{D}$
|
| 351 |
+
13: Calculate the training loss $\mathcal{L}_{train}$ over the sampled batch.
|
| 352 |
+
14: Update the subnet weights $\mathbf{W}_i$ through gradient descents.
|
| 353 |
+
15: end for
|
| 354 |
+
16: end for
|
| 355 |
+
17: Output. Subnets with weights $\mathbf{W}_i$
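For reference, the two-stage procedure of Algorithm 1 can be sketched in PyTorch-style code as follows; `sample_architecture` is the illustrative sampler sketched in Section 3.2, `batches` is assumed to be an infinite iterator of training batches, and the supernet is assumed to return the training loss for a given batch and sampled architecture. All names are assumptions, not the actual AutoGT implementation.

```python
import copy

def train_encoding_aware_supernets(supernet, space, subspaces, batches, make_optimizer, T_s, T_m):
    """Sketch of Algorithm 1: shared supernet training, then per-subspace subnet training."""
    # Stage 1 (lines 2-7): train one shared supernet on architectures sampled from the full space.
    opt = make_optimizer(supernet.parameters())
    for _ in range(T_s):
        arch = sample_architecture(space)        # assumed sampler (see the Section 3.2 sketch)
        loss = supernet(next(batches), arch)     # assumed to return the training loss
        opt.zero_grad(); loss.backward(); opt.step()

    # Stage 2 (lines 8-16): each subnet inherits the shared weights and trains on its subspace only.
    subnets = []
    for subspace in subspaces:
        subnet = copy.deepcopy(supernet)
        sub_opt = make_optimizer(subnet.parameters())
        for _ in range(T_m - T_s):
            arch = sample_architecture(space)
            arch.update(subspace)                # fix the attention-map augmentation flags
            loss = subnet(next(batches), arch)
            sub_opt.zero_grad(); loss.backward(); sub_opt.step()
        subnets.append(subnet)
    return subnets
```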
|
| 356 |
+
|
| 357 |
+
# B DATASET
|
| 358 |
+
|
| 359 |
+
We provide the statistics of the adopted datasets in Table 6 and Table 7.
|
| 360 |
+
|
| 361 |
+
Table 6: Statistics of the graph classification datasets (accuracy) used to compare AutoGT with baselines. We adopt five datasets with relatively small numbers of graphs (upper part) and one dataset with a larger size (lower part) to demonstrate the efficiency of the proposed AutoGT.
|
| 362 |
+
|
| 363 |
+
<table><tr><td>Dataset</td><td>#Graph</td><td>#Class</td><td>#Avg. Nodes</td><td>#Avg. Edges</td><td># Node Feature</td><td># Edge Feature</td></tr><tr><td>COX2_MD</td><td>303</td><td>2</td><td>26.28</td><td>335.12</td><td>7</td><td>5</td></tr><tr><td>BZR_MD</td><td>306</td><td>2</td><td>21.3</td><td>225.06</td><td>8</td><td>5</td></tr><tr><td>PTC_FM</td><td>349</td><td>2</td><td>14.11</td><td>14.48</td><td>18</td><td>4</td></tr><tr><td>DHFR_MD</td><td>393</td><td>2</td><td>23.87</td><td>283.01</td><td>7</td><td>5</td></tr><tr><td>PROTEINS</td><td>1,133</td><td>2</td><td>39.06</td><td>72.82</td><td>3</td><td>0</td></tr><tr><td>DBLP</td><td>19,456</td><td>2</td><td>10.48</td><td>19.65</td><td>41,325</td><td>3</td></tr></table>
|
| 364 |
+
|
| 365 |
+
Table 7: Statistics of graph classification datasets (AUC) used to compare AutoGT with baselines. We adopt two datasets with relatively small numbers of graphs (upper part) and one dataset with a larger size (lower part) to demonstrate the efficiency of the proposed AutoGT.
|
| 366 |
+
|
| 367 |
+
<table><tr><td>Dataset</td><td>#Graph</td><td>#Class</td><td>#Avg. Nodes</td><td>#Avg. Edges</td><td># Node Feature</td><td># Edge Feature</td></tr><tr><td>OGBG-MolBACE</td><td>1,513</td><td>2</td><td>25.51</td><td>27.47</td><td>9</td><td>3</td></tr><tr><td>OGBG-MolBBBP</td><td>2,039</td><td>2</td><td>34.09</td><td>36.86</td><td>9</td><td>3</td></tr><tr><td>OGBG-MolHIV</td><td>41,127</td><td>2</td><td>24.06</td><td>25.95</td><td>9</td><td>3</td></tr></table>
|
| 368 |
+
|
| 369 |
+
# C ADDITIONAL EXPERIMENTS
|
| 370 |
+
|
| 371 |
+
In Table 3, the results on COX2_MD and BZR_MD show larger standard deviations than the other datasets. One plausible reason is that the number of graphs in these two datasets is relatively small, so the results can be sensitive to dataset splits. To obtain more convincing results on these two datasets, we conduct additional experiments by still utilizing 10-fold cross-validation for all
|
| 372 |
+
|
| 373 |
+
Table 8: Comparisons of AutoGT against state-of-the-art hand-crafted baselines. We report the average accuracy (%) and the standard deviation on both datasets.
|
| 374 |
+
|
| 375 |
+
<table><tr><td>Dataset</td><td>COX2_MD</td><td>BZR_MD</td></tr><tr><td>GIN</td><td>57.22±9.74</td><td>62.64±8.23</td></tr><tr><td>DGCNN</td><td>60.33±7.56</td><td>64.91±9.42</td></tr><tr><td>DiffPool</td><td>59.52±8.20</td><td>64.84±8.51</td></tr><tr><td>GraphSAGE</td><td>53.62±6.95</td><td>55.83±8.27</td></tr><tr><td>Graphormer</td><td>59.22±7.04</td><td>64.53±9.43</td></tr><tr><td>AutoGT(ours)</td><td>63.45±8.04</td><td>67.18±9.87</td></tr></table>
|
| 376 |
+
|
| 377 |
+
Table 9: Comparisons of AutoGT with different numbers of supernets. We report the average accuracy (%) and the standard deviation on the PROTEINS dataset.
|
| 378 |
+
|
| 379 |
+
<table><tr><td>Dataset</td><td>PROTEINS</td></tr><tr><td>1 supernet</td><td>75.92±3.10</td></tr><tr><td>2 supernets</td><td>76.73±3.25</td></tr><tr><td>4 supernets</td><td>76.91±3.35</td></tr><tr><td>8 supernets</td><td>77.17±3.40</td></tr><tr><td>16 supernets</td><td>77.27±3.65</td></tr></table>
|
| 380 |
+
|
| 381 |
+
the baselines and our proposed method, and repeating the 10-fold cross-validation with 10 random seeds. We report the results in Table 8. The results are consistent with Table 3, i.e., our method consistently outperforms the other baselines, while the standard deviations become considerably smaller with more repeated experiments.
|
| 382 |
+
|
| 383 |
+
In addition, to further explore how the number of supernets affects our proposed method, we carry out experiments with 1, 2, 4, 8, and 16 supernets on the PROTEINS dataset and report the results in Table 9. We observe that as the number of subnets increases, the performance of our method increases. One possible reason is that more well-trained subnets provide more consistent performance estimation, which improves the final performance.
|
2023/AutoGT_ Automated Graph Transformer Architecture Search/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:f049ad3cecc41cef085478b9648998024ee31085d80f7908413d8db0793d251a
|
| 3 |
+
size 584368
|
2023/AutoGT_ Automated Graph Transformer Architecture Search/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2023/Betty_ An Automatic Differentiation Library for Multilevel Optimization/90d5f745-196f-40c1-8311-4d75a522d471_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2023/Betty_ An Automatic Differentiation Library for Multilevel Optimization/90d5f745-196f-40c1-8311-4d75a522d471_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2023/Betty_ An Automatic Differentiation Library for Multilevel Optimization/90d5f745-196f-40c1-8311-4d75a522d471_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:bc8314ace1b1bde590661559a97d4fcef2cc12555a0e157b10431e41bc749ddb
|
| 3 |
+
size 641608
|
2023/Betty_ An Automatic Differentiation Library for Multilevel Optimization/full.md
ADDED
|
@@ -0,0 +1,696 @@
|
| 1 |
+
# BETTY: AN AUTOMATIC DIFFERENTIATION LIBRARY FOR MULTILEVEL OPTIMIZATION
|
| 2 |
+
|
| 3 |
+
Sang Keun Choe $^{1}$
|
| 4 |
+
|
| 5 |
+
Willie Neiswanger $^{2}$
|
| 6 |
+
|
| 7 |
+
Pengtao Xie $^{3,4*}$
|
| 8 |
+
|
| 9 |
+
Eric Xing $^{1,4*}$
|
| 10 |
+
|
| 11 |
+
$^{1}$ Carnegie Mellon University
|
| 12 |
+
|
| 13 |
+
$^{2}$ Stanford University
|
| 14 |
+
|
| 15 |
+
$^{3}$ UCSD  $^{4}$ MBZUAI
|
| 16 |
+
|
| 17 |
+
*Equal contribution
|
| 18 |
+
|
| 19 |
+
{sangkeuc,epxing}@cs.cmu.edu,neiswanger@cs.stanford.edu,plxie@ucsd.edu
|
| 20 |
+
|
| 21 |
+
# ABSTRACT
|
| 22 |
+
|
| 23 |
+
Gradient-based multilevel optimization (MLO) has gained attention as a framework for studying numerous problems, ranging from hyperparameter optimization and meta-learning to neural architecture search and reinforcement learning. However, gradients in MLO, which are obtained by composing best-response Jacobians via the chain rule, are notoriously difficult to implement and memory/compute intensive. We take an initial step towards closing this gap by introducing BETTY, a software library for large-scale MLO. At its core, we devise a novel dataflow graph for MLO, which allows us to (1) develop efficient automatic differentiation for MLO that reduces the computational complexity from $\mathcal{O}(d^3)$ to $\mathcal{O}(d^2)$ , (2) incorporate systems support such as mixed-precision and data-parallel training for scalability, and (3) facilitate implementation of MLO programs of arbitrary complexity while allowing a modular interface for diverse algorithmic and systems design choices. We empirically demonstrate that BETTY can be used to implement an array of MLO programs, while also observing up to $11\%$ increase in test accuracy, $14\%$ decrease in GPU memory usage, and $20\%$ decrease in training wall time over existing implementations on multiple benchmarks. We also showcase that BETTY enables scaling MLO to models with hundreds of millions of parameters. We open-source the code at https://github.com/leopard-ai/betty.
|
| 24 |
+
|
| 25 |
+
# 1 INTRODUCTION
|
| 26 |
+
|
| 27 |
+
Multilevel optimization (MLO) addresses nested optimization scenarios, where upper level optimization problems are constrained by lower level optimization problems following an underlying hierarchical dependency. MLO has gained considerable attention as a unified mathematical framework for studying diverse problems including meta-learning (Finn et al., 2017; Rajeswaran et al., 2019), hyperparameter optimization (Franceschi et al., 2017), neural architecture search (Liu et al., 2019), and reinforcement learning (Konda & Tsitsiklis, 1999; Rajeswaran et al., 2020). While a majority of existing work is built upon bilevel optimization, the simplest case of MLO, there have been recent efforts that go beyond this two-level hierarchy. For example, (Raghu et al., 2021) proposed trilevel optimization that combines hyperparameter optimization with two-level pretraining and finetuning. More generally, conducting joint optimization over machine learning pipelines consisting of multiple models and hyperparameter sets can be approached as deeper instances of MLO (Garg et al., 2022; Raghu et al., 2021; Somayajula et al., 2022; Such et al., 2020).
|
| 28 |
+
|
| 29 |
+
Following its increasing popularity, a multitude of optimization algorithms have been proposed to solve MLO. Among them, gradient-based (or first-order) approaches (Pearlmutter & Siskind, 2008; Lorraine et al., 2020; Raghu et al., 2021; Sato et al., 2021) have recently received the limelight from the machine learning community, due to their ability to carry out efficient high-dimensional optimization, under which all of the above listed applications fall. Nevertheless, research in gradient-based MLO has been largely impeded by two major bottlenecks. First, implementing gradients in multilevel optimization, which is achieved by composing best-response Jacobians via the chain rule, requires both programming and mathematical proficiency. Second, algorithms for best-response Jacobian calculation, such as iterative differentiation (ITD) or approximate implicit differentiation (AID) (Grazzi et al., 2020), are memory and compute intensive, as they require multiple forward/backward computations and oftentimes second-order gradient (i.e. Hessian) information.
|
| 30 |
+
|
| 31 |
+

|
| 32 |
+
Figure 1: In Engine (left), users define their MLO program as a hierarchy/graph of optimization problems. In Problem (middle), users define an optimization problem with a data loader, cost function, module, and optimizer, while upper/ lower level constraint problems (i.e. $\mathcal{U}_k,\mathcal{L}_k$ ) are injected by Engine. The "step" function in Problem serves as the base of gradient-based optimization, abstracting the one-step gradient descent update process. Finally, users can easily try out different best-response Jacobian algorithms & system features (right) via Config in a modular manner.
|
| 33 |
+
|
| 34 |
+
In recent years, there has been some work originating in the meta-learning community on developing software libraries that target some aspects of gradient-based MLO (Blondel et al., 2021; Deleu et al., 2019; Grefenstette et al., 2019). For example, JAXopt (Blondel et al., 2021) provides efficient and modular implementations of AID algorithms by letting the user define a function capturing the optimality conditions of the problem to be differentiated. However, JAXopt fails to combine the chain rule with AID to support general MLO programs beyond a two-level hierarchy. Similarly, higher (Grefenstette et al., 2019) provides several basic primitives (e.g. making PyTorch's (Paszke et al., 2019) native optimizers differentiable) for implementing ITD/AID algorithms, but users still need to manually implement complicated internal mechanisms of these algorithms as well as the chain rule to implement a given instance of MLO. Furthermore, most existing libraries do not have systems support, such as mixed-precision and data-parallel training, that could mitigate memory and computation bottlenecks. As a result, gradient-based MLO research built upon these libraries has been largely limited to simple bilevel optimization and small-scale setups.
|
| 35 |
+
|
| 36 |
+
In this paper, we attempt to bridge this gap between research and software systems by introducing BETTY, an easy-to-use and modular automatic differentiation library with various systems support for large-scale MLO. The main contributions of this paper are as follows:
|
| 37 |
+
|
| 38 |
+
1. We develop an efficient automatic differentiation technique for MLO based on a novel interpretation of MLO as a special type of dataflow graph (Section 3). In detail, gradient calculation for each optimization problem is automatically carried out by iteratively multiplying best-response Jacobians (defined in Section 2) through the chain rule while reverse-traversing specific paths of this dataflow graph. This reverse-traversing procedure is crucial for efficiency, as it reduces the computational complexity of our automatic differentiation technique from $\mathcal{O}(d^3)$ to $\mathcal{O}(d^2)$ , where $d$ is the dimension of the largest optimization problem in the MLO program.
|
| 39 |
+
2. We introduce a software library for MLO, BETTY, built upon the above automatic differentiation technique. Our software design (Section 4), motivated by the dataflow graph interpretation, provides two major benefits: (1) it allows for incorporating various systems support, such as mixed-precision and data-parallel training, for large-scale MLO, and (2) it facilitates implementation of MLO programs of arbitrary complexity while allowing a modular interface for diverse algorithmic and systems design choices. The overall software architecture of BETTY is presented in Figure 1.
|
| 40 |
+
3. We empirically demonstrate that BETTY can be used to implement an array of MLO applications with varying scales and complexities (Section 5). Interestingly, we observe that trying out different best-response Jacobian algorithms with our modular interface (which only requires changing one line of code) can lead to up to $11\%$ increase in test accuracy, $14\%$ decrease in GPU memory usage, and $20\%$ decrease in training wall time on various benchmarks, compared with the original papers' implementations. Finally, we showcase the scalability of BETTY to models with hundreds of millions of parameters by performing MLO on the BERT-base model with the help of BETTY's systems support, which was otherwise infeasible.
|
| 41 |
+
|
| 42 |
+
# 2 BACKGROUND: GRADIENT-BASED MULTILEVEL OPTIMIZATION
|
| 43 |
+
|
| 44 |
+
To introduce MLO, we first define an important concept known as a "constrained problem" (Vicente & Calamai, 1994).
|
| 45 |
+
|
| 46 |
+
Definition 1. An optimization problem $P$ is said to be constrained by $\lambda$ when its cost function $\mathcal{C}$ has $\lambda$ as an argument in addition to the optimization parameter $\theta$ (i.e. $P: \arg \min_{\theta} \mathcal{C}(\theta, \lambda, \dots)$ ).
|
| 47 |
+
|
| 48 |
+
Multilevel optimization (Migdalas et al., 1998) refers to a field of study that aims to solve a nested set of optimization problems defined on a sequence of so-called levels, which satisfy two main criteria: A1) upper-level problems are constrained by the optimal parameters of lower-level problems while A2) lower-level problems are constrained by the nonoptimal parameters of upper-level problems. Formally, an $n$ -level MLO program can be written as:
|
| 49 |
+
|
| 50 |
+
$$
|
| 51 |
+
P_{n}: \quad \theta_{n}^{*} = \underset{\theta_{n}}{\operatorname{argmin}} \; \mathcal{C}_{n}(\theta_{n}, \mathcal{U}_{n}, \mathcal{L}_{n}; \mathcal{D}_{n}) \quad \triangleright \text{Level } n \text{ problem}
|
| 52 |
+
$$
|
| 53 |
+
|
| 54 |
+

|
| 55 |
+
|
| 56 |
+
$$
|
| 57 |
+
P_{k}: \quad \text{s.t.} \;\; \theta_{k}^{*} = \underset{\theta_{k}}{\operatorname{argmin}} \; \mathcal{C}_{k}\left(\theta_{k}, \mathcal{U}_{k}, \mathcal{L}_{k}; \mathcal{D}_{k}\right) \quad \triangleright \text{Level } k \in \{2, \dots, n-1\}
|
| 58 |
+
$$
|
| 59 |
+
|
| 60 |
+

|
| 61 |
+
|
| 62 |
+
$$
|
| 63 |
+
P_{1}: \quad \text{s.t.} \;\; \theta_{1}^{*} = \underset{\theta_{1}}{\operatorname{argmin}} \; \mathcal{C}_{1}(\theta_{1}, \mathcal{U}_{1}, \mathcal{L}_{1}; \mathcal{D}_{1}) \quad \triangleright \text{Level 1 problem}
|
| 64 |
+
$$
|
| 65 |
+
|
| 66 |
+
where $P_{k}$ stands for the level-$k$ problem, $\theta_{k}/\theta_{k}^{*}$ for the corresponding nonoptimal/optimal parameters, and $\mathcal{U}_k/\mathcal{L}_k$ for the sets of constraining parameters from upper/lower level problems. Here, $\mathcal{D}_{k}$ is the training dataset, and $\mathcal{C}_k$ denotes the cost function. Due to criteria A1 & A2, constraining parameters from upper-level problems should be nonoptimal (i.e. $\mathcal{U}_k\subseteq \{\theta_{k + 1},\dots ,\theta_n\}$) while constraining parameters from lower-level problems should be optimal (i.e. $\mathcal{L}_k\subseteq \{\theta_1^*,\dots ,\theta_{k - 1}^*\}$). Although we denote only one optimization problem per level in the above formulation, each level could in fact have multiple problems. Therefore, we henceforth discard the concept of level, and rather assume that problems $\{P_1,P_2,\dots ,P_n\}$ of a general MLO program are topologically sorted in a "reverse" order (i.e. $P_{n}/P_{1}$ denote the uppermost/lowermost problems).
|
| 67 |
+
|
| 68 |
+
For example, in hyperparameter optimization formulated as bilevel optimization, hyperparameters and network parameters (weights) correspond to upper and lower level parameters $(\theta_{2}$ and $\theta_{1})$ . Train / validation losses correspond to $\mathcal{C}_1 / \mathcal{C}_2$ , and validation loss is dependent on optimal network parameters $\theta_1^*$ obtained given $\theta_{2}$ . Thus, constraining sets for each level are $\mathcal{U}_1 = \{\theta_2\}$ and $\mathcal{L}_2 = \{\theta_1^*\}$ .
|
| 69 |
+
|
| 70 |
+
In this paper, we focus in particular on gradient-based MLO, rather than zeroth-order methods like Bayesian optimization (Cui & Bai, 2019), in order to efficiently scale to high-dimensional problems. Essentially, gradient-based MLO calculates gradients of the cost function $\mathcal{C}_k(\theta_k,\mathcal{U}_k,\mathcal{L}_k)$ with respect to the corresponding parameter $\theta_{k}$ , with which gradient descent is performed to solve for optimal parameters $\theta_{k}^{*}$ for every problem $P_{k}$ . Since optimal parameters from lower level problems (i.e. $\theta_l^*\in \mathcal{L}_k$ ) can be functions of $\theta_{k}$ (criterion A2), $\frac{d\mathcal{C}_k}{d\theta_k}$ can be expanded using the chain rule as follows:
|
| 71 |
+
|
| 72 |
+
$$
|
| 73 |
+
\frac{d\mathcal{C}_{k}}{d\theta_{k}} = \underbrace{\frac{\partial \mathcal{C}_{k}}{\partial \theta_{k}}}_{\text{direct gradient}} + \sum_{\theta_{l}^{*} \in \mathcal{L}_{k}} \underbrace{\frac{d\theta_{l}^{*}}{d\theta_{k}}}_{\text{best-response Jacobian}} \times \underbrace{\frac{\partial \mathcal{C}_{k}}{\partial \theta_{l}^{*}}}_{\text{direct gradient}} \tag{1}
|
| 74 |
+
$$
|
| 75 |
+
|
| 76 |
+
While calculating direct gradients (purple) is straightforward with existing automatic differentiation engines like PyTorch (Paszke et al., 2019), a major difficulty in gradient-based MLO lies in best-response Jacobian<sup>1</sup> (orange) calculation, which will be discussed in depth in Section 3. Once gradient calculation for each level $k$ is enabled via Equation (1), gradient-based optimization is executed from lower to upper level problems in a topologically reverse order, reflecting underlying hierarchies.
|
| 77 |
+
|
| 78 |
+
# 3 AUTOMATIC DIFFERENTIATION FOR MULTILEVEL OPTIMIZATION
|
| 79 |
+
|
| 80 |
+
While Equation (1) serves as a mathematical basis for gradient-based multilevel optimization, how to automatically and efficiently carry out such gradient calculation has not been extensively studied, nor incorporated into a software system that can support MLO programs involving many problems with complex dependencies.
|
| 81 |
+
|
| 82 |
+
In this section, we discuss the challenges in building an automatic differentiation library for MLO, and provide solutions to address these challenges.
|
| 83 |
+
|
| 84 |
+
# 3.1 DATAFLOW GRAPH FOR MULTILEVEL OPTIMIZATION
|
| 85 |
+
|
| 86 |
+
One may observe that the best-response Jacobian term in Equation (1) is expressed with a total derivative instead of a partial derivative. This is because $\theta_{k}$ can affect $\theta_{l}^{*}$ not only through a direct interaction, but also through multiple indirect interactions via other lower-level optimal parameters. For example, consider the four-problem MLO program illustrated in Figure 2. Here, the parameter of Problem 4 ( $\theta_{p_4}$ ) affects the optimal parameter of Problem 3 ( $\theta_{p_3}^*$ ) in two different ways: 1) $\theta_{p_4} \rightarrow \theta_{p_3}^*$ and 2) $\theta_{p_4} \rightarrow \theta_{p_1}^* \rightarrow \theta_{p_3}^*$ . In general, we can expand the best-response Jacobian $\frac{d\theta_{l}^{*}}{d\theta_{k}}$ in Equation (1) by applying the chain rule for all paths from $\theta_{k}$ to $\theta_{l}^{*}$ as
|
| 87 |
+
|
| 88 |
+
$$
|
| 89 |
+
\frac{d\mathcal{C}_{k}}{d\theta_{k}} = \frac{\partial \mathcal{C}_{k}}{\partial \theta_{k}} + \sum_{\theta_{l}^{*} \in \mathcal{L}_{k}} \sum_{q \in \mathcal{Q}_{k,l}} \left( \underbrace{\frac{\partial \theta_{q(1)}^{*}}{\partial \theta_{k}}}_{\text{upper-to-lower}} \times \left( \prod_{i=1}^{\operatorname{len}(q)-1} \underbrace{\frac{\partial \theta_{q(i+1)}^{*}}{\partial \theta_{q(i)}^{*}}}_{\text{lower-to-upper}} \right) \times \frac{\partial \mathcal{C}_{k}}{\partial \theta_{l}^{*}} \right) \tag{2}
|
| 90 |
+
$$
|
| 91 |
+
|
| 92 |
+
where $\mathcal{Q}_{k,l}$ is a set of paths from $\theta_{k}$ to $\theta_{l}^{*}$ , and $q(i)$ refers to the index of the $i$ -th problem in the path $q$ with the last point being $\theta_{l}^{*}$ . Replacing a total derivative term in Equation (1) with a product of partial derivative terms using the chain rule allows us to ignore indirect interactions between problems, and only deal with direct interactions.
|
| 93 |
+
|
| 94 |
+
To formalize the path finding problem, we develop a novel dataflow graph for MLO. Unlike traditional dataflow graphs with no predefined hierarchy among nodes, a dataflow graph for multilevel optimization has two different types of directed edges stemming from criteria A1 & A2: lower-to-upper and upper-to-lower. Each of these directed edges is respectively depicted with green and red arrows in Figure 2. Essentially, a lower-to-upper edge represents the directed dependency between two optimal parameters (i.e. $\theta_{P_i}^* \rightarrow \theta_{P_j}^*$ with $P_i < P_j$), while an upper-to-lower edge represents the directed dependency between nonoptimal and optimal parameters (i.e. $\theta_{P_i} \rightarrow \theta_{P_j}^*$ with $P_i > P_j$). Since we need to find paths from the nonoptimal parameter $\theta_k$ to the optimal parameter $\theta_l^*$, the first directed edge must be an upper-to-lower edge (red), which connects $\theta_k$ to some lower-level optimal parameter.
|
| 95 |
+
|
| 96 |
+
Once the path reaches an optimal parameter, it can only move through optimal parameters via lower-to-upper edges (green) in the dataflow graph. Therefore, every valid path from $\theta_{k}$ to $\theta_l^*$ starts with an upper-to-lower edge and then reaches the destination only via lower-to-upper edges. The best-response Jacobian term for each edge type in the dataflow graph is marked with the corresponding color in Equation (2). We implement the above path-finding mechanism with a modified depth-first search algorithm in BETTY; a small illustrative sketch of this idea follows Figure 2.
|
| 97 |
+
|
| 98 |
+

|
| 99 |
+
Figure 2: An example dataflow graph for MLO.
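Below is a minimal, illustrative sketch of the path-finding idea in plain Python. It is not BETTY's actual implementation; the graph encoding, the function name, and the example edges standing in for Figure 2 are assumptions made purely for illustration.

```python
def find_paths(u2l, l2u, src, dst):
    """Enumerate paths from problem `src` (nonoptimal parameter) to problem `dst`
    (optimal parameter): exactly one upper-to-lower hop first, then only
    lower-to-upper hops. `u2l` and `l2u` map a problem to its direct successors."""
    paths = []

    def dfs(node, path):
        if node == dst:
            paths.append(path)
            return
        for nxt in l2u.get(node, []):      # after the first hop, only l2u edges
            if nxt not in path:            # avoid revisiting problems
                dfs(nxt, path + [nxt])

    for first in u2l.get(src, []):         # the first edge must be upper-to-lower
        dfs(first, [src, first])
    return paths

# Example mirroring the structure described for Figure 2 (edges are assumed):
# Problem 4 reaches Problem 3's optimal parameter directly and through Problem 1.
u2l = {"P4": ["P1", "P3"]}
l2u = {"P1": ["P2", "P3"], "P2": ["P4"], "P3": ["P4"]}
print(find_paths(u2l, l2u, "P4", "P3"))    # [['P4', 'P1', 'P3'], ['P4', 'P3']]
```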
|
| 100 |
+
|
| 101 |
+
# 3.2 GRADIENT CALCULATION WITH BEST-RESPONSE JACOBIANS
|
| 102 |
+
|
| 103 |
+
Automatic differentiation for MLO can be realized by calculating Equation (2) for each problem $P_{k}$ ($k = 1, \dots, n$). However, a naive calculation of Equation (2) could be computationally onerous as it involves multiple matrix multiplications with best-response Jacobians, whose computational complexity is $\mathcal{O}(d^3)$, where $d$ is the dimension of the largest optimization problem in the MLO program. To alleviate this issue, we observe that the rightmost term in Equation (2) is a vector, which allows us to reduce the computational complexity of Equation (2) to $\mathcal{O}(d^2)$ by iteratively performing matrix-vector multiplication from right to left (or, equivalently, reverse-traversing a path $q$ in the dataflow graph); a toy illustration of this complexity argument is given after Equation (4). As such, matrix-vector multiplication between the best-response Jacobian and a vector serves as a base operation of efficient automatic differentiation for MLO. Mathematically, this problem can be written simply as follows:
|
| 104 |
+
|
| 105 |
+
$$
|
| 106 |
+
\text{Calculate} \quad \frac{\partial w^{*}(\lambda)}{\partial \lambda} \times v \tag{3}
|
| 107 |
+
$$
|
| 108 |
+
|
| 109 |
+
$$
|
| 110 |
+
\text{Given} \quad w^{*}(\lambda) = \underset{w}{\operatorname{argmin}} \; \mathcal{C}(w, \lambda). \tag{4}
|
| 111 |
+
$$
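As a quick sanity check of the complexity argument above, the toy snippet below (purely illustrative; the dimension, the number of edges, and all names are assumptions) contrasts forming the full Jacobian chain product with reverse-traversing the path while keeping only a running vector:

```python
import numpy as np

d = 512
jacobians = [np.random.randn(d, d) for _ in range(3)]   # one (d x d) Jacobian per edge on a path q
direct_grad = np.random.randn(d)                         # rightmost vector in Equation (2)

# O(d^3) per product: form the full matrix chain first, then apply it to the vector.
full_chain = jacobians[0] @ jacobians[1] @ jacobians[2]
slow = full_chain @ direct_grad

# O(d^2) per product: iterate from right to left, keeping only a vector in memory.
v = direct_grad
for J in reversed(jacobians):
    v = J @ v

assert np.allclose(slow, v)
```

Both routes produce the same gradient contribution, but the second never materializes a $d \times d$ product of Jacobians.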
|
| 112 |
+
|
| 113 |
+
Two major challenges in the above problems are: 1) approximating the solution of the optimization problem (i.e. $w^{*}(\lambda)$ ), and 2) differentiating through the (approximated) solution.
|
| 114 |
+
|
| 115 |
+
In practice, an approximation of $w^{*}(\lambda)$ is typically achieved by unrolling a small number of gradient steps, which can significantly reduce the computational cost (Franceschi et al., 2017). While we could potentially obtain a better approximation of $w^{*}(\lambda)$ by running gradient steps until convergence, this procedure alone can take a few days (or even weeks) when the underlying optimization problem is large-scale (Deng et al., 2009; Devlin et al., 2018).
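For concreteness, the following is a minimal sketch, in plain PyTorch, of approximating $w^{*}(\lambda)$ by unrolling a few gradient steps; it is not BETTY's internal code, and the step count, learning rate, and toy cost function are illustrative. Passing `create_graph=True` keeps the unrolled updates on the autograd tape so the approximation remains differentiable with respect to $\lambda$, as ITD requires.

```python
import torch

def approximate_w_star(w0, lam, train_loss, steps=5, lr=0.1):
    """Approximate w*(lam) with a few unrolled SGD steps (ITD-style sketch)."""
    w = w0
    for _ in range(steps):
        grad = torch.autograd.grad(train_loss(w, lam), w, create_graph=True)[0]
        w = w - lr * grad            # each update stays on the autograd tape
    return w                         # differentiable with respect to lam

# Toy usage with the quadratic lower-level cost C(w, lam) = ||w - lam||^2 / 2.
lam = torch.randn(3, requires_grad=True)
w0 = torch.zeros(3, requires_grad=True)
w_approx = approximate_w_star(w0, lam, lambda w, l: 0.5 * ((w - l) ** 2).sum())
print(torch.autograd.grad(w_approx.sum(), lam)[0])   # nonzero: w_approx depends on lam
```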
|
| 116 |
+
|
| 117 |
+
Once $w^{*}(\lambda)$ is approximated, matrix-vector multiplication between the best-response Jacobian $\frac{dw^{*}(\lambda)}{d\lambda}$ and a vector $v$ is popularly obtained by either iterative differentiation (ITD) or approximate implicit differentiation (AID) (Grazzi et al., 2020). This problem has been extensively studied in bilevel optimization literature (Finn et al., 2017; Franceschi et al., 2017; Lorraine et al., 2020), and we direct interested readers to the original papers, as studying these algorithms is not the focus of this paper. In BETTY, we provide implementations of several popular ITD/AID algorithms which users can easily plug-and-play for their MLO applications. Currently available algorithms within BETTY include ITD with reverse-mode automatic differentiation (ITD-RMAD) (Finn et al., 2017), AID with Neumann series (AID-NMN) (Lorraine et al., 2020), AID with conjugate gradient (AID-CG) (Rajeswaran et al., 2019), and AID with finite difference (AID-FD) (Liu et al., 2019).
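As an illustration of one such algorithm, the sketch below implements a DARTS-style finite-difference approximation of the hypergradient for a toy bilevel case in which both $w$ and $\lambda$ are plain tensors. This is a hedged sketch under those assumptions, not BETTY's AID-FD implementation; the function name, the `lr` factor (standing in for the lower-level step size), and the epsilon heuristic are illustrative choices.

```python
import torch

def darts_style_hypergrad(train_loss, val_loss, w, lam, lr=0.01, eps_scale=1e-2):
    """Finite-difference (DARTS-style) approximation of d C_val / d lam,
    where `w` approximates the lower-level solution and `lam` is upper-level."""
    # Direct gradient w.r.t. lam and the vector v = dC_val/dw at the approximated solution.
    dval_dw, dval_dlam = torch.autograd.grad(
        val_loss(w, lam), (w, lam), allow_unused=True)
    if dval_dlam is None:                      # lam may not appear in C_val directly
        dval_dlam = torch.zeros_like(lam)

    eps = eps_scale / (dval_dw.norm() + 1e-12)
    w_plus = (w + eps * dval_dw).detach()
    w_minus = (w - eps * dval_dw).detach()

    # Central difference approximating (d^2 C_train / d lam d w) applied to v.
    g_plus = torch.autograd.grad(train_loss(w_plus, lam), lam)[0]
    g_minus = torch.autograd.grad(train_loss(w_minus, lam), lam)[0]
    implicit_term = -lr * (g_plus - g_minus) / (2 * eps)
    return dval_dlam + implicit_term

# Toy usage: C_train(w, lam) = ||w - lam||^2 / 2 and C_val(w, lam) = ||w||^2 / 2.
lam = torch.randn(3, requires_grad=True)
w = lam.detach().clone().requires_grad_(True)   # pretend w already solves the lower level
print(darts_style_hypergrad(lambda w, l: 0.5 * ((w - l) ** 2).sum(),
                            lambda w, l: 0.5 * (w ** 2).sum(), w, lam))
```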
|
| 118 |
+
|
| 119 |
+
# 3.3 EXECUTION OF MULTILEVEL OPTIMIZATION
|
| 120 |
+
|
| 121 |
+
In MLO, optimization of each problem should be performed in a topologically reverse order, as the upper-level optimization is constrained by the result of lower-level optimization. To ease an MLO implementation, we also automate such an execution order with the dataflow graph developed in Section 3.1. Specifically, let's assume that there is a lower-to-upper edge between problems $P_{i}$ and $P_{j}$ (i.e. $\theta_{i}^{*} \rightarrow \theta_{j}^{*}$ ). When the optimization process (i.e. a small number of gradient steps) of the problem $P_{i}$ is complete, it can call the problem $P_{j}$ to start its one-step gradient descent update through the lower-to-upper edge. The problem $P_{j}$ waits until all lower level problems in $\mathcal{L}_{j}$ send their calls, and then performs the one-step gradient descent update when all the calls from lower levels are received. Hence, to achieve the full execution of gradient-based MLO, we only need to call the one-step gradient descent processes of the lowermost problems, as the optimization processes of upper problems will be automatically called from lower problems via lower-to-upper edges.
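To make this call-based scheduling concrete, here is a small, self-contained sketch (again illustrative rather than BETTY's code) in which a problem performs its update only after all of its lower-level constraining problems have signalled completion through their lower-to-upper edges:

```python
class ProblemNode:
    """Toy stand-in for an optimization problem in the execution graph."""
    def __init__(self, name):
        self.name = name
        self.lowers, self.uppers = [], []   # filled in when edges are added
        self._received = set()

    def step(self):
        # Placeholder for the one-step (or few-step) gradient update.
        print(f"{self.name}: gradient update")
        for upper in self.uppers:           # call upper problems via l2u edges
            upper.notify(self)

    def notify(self, lower):
        self._received.add(lower.name)
        if self._received == {l.name for l in self.lowers}:
            self._received.clear()
            self.step()

def add_l2u_edge(lower, upper):
    lower.uppers.append(upper)
    upper.lowers.append(lower)

# Example: P1 (lowermost) -> P2 -> P3; calling P1.step() cascades upwards.
p1, p2, p3 = ProblemNode("P1"), ProblemNode("P2"), ProblemNode("P3")
add_l2u_edge(p1, p2)
add_l2u_edge(p2, p3)
p1.step()   # prints updates for P1, then P2, then P3
```

Calling `step()` on the lowermost problem then cascades updates upward in topologically reverse order, mirroring the execution described above.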
|
| 122 |
+
|
| 123 |
+
To summarize, automatic differentiation for MLO is accomplished by performing gradient updates of multiple optimization problems in a topologically reverse order based on the lower-to-upper edges (Sec. 3.3), where gradients for each problem are calculated by iteratively multiplying best-response Jacobians obtained with ITD/AID (Sec. 3.2) while reverse-traversing the dataflow graph (Sec. 3.1).
|
| 124 |
+
|
| 125 |
+
# 4 SOFTWARE DESIGN
|
| 126 |
+
|
| 127 |
+
On top of the automatic differentiation technique developed in Section 3, we build an easy-to-use and modular software library, BETTY, with various systems support for large-scale gradient-based MLO. In detail, we break down MLO into two high-level concepts, namely 1) optimization problems and 2) hierarchical dependencies among problems, and design abstract Python classes for both of them. Such abstraction is also motivated by our dataflow graph interpretation, as these concepts correspond respectively to nodes and edges. The overall architecture of BETTY is shown in Figure 1.
|
| 128 |
+
|
| 129 |
+
Problem Each optimization problem $P_{k}$ in MLO is defined by the parameter (or module) $\theta_{k}$ , the sets of the upper and lower constraining problems $\mathcal{U}_k \& \mathcal{L}_k$ , the dataset $\mathcal{D}_k$ , the cost function $\mathcal{C}_k$ , the optimizer, and other optimization configurations (e.g. best-response Jacobian calculation algorithm, number of unrolling steps). The Problem class is an interface where users can provide each of the aforementioned components to define the optimization problem. In detail, each one except for the cost function $\mathcal{C}_k$ and the constraining problems $\mathcal{U}_k \& \mathcal{L}_k$ can be provided through the class constructor, while the cost function can be defined through a "training_step" method and the constraining problems are automatically provided by Engine.
|
| 130 |
+
|
| 131 |
+
Abstracting an optimization problem by encapsulating its module, optimizer, and data loader together additionally allows us to implement various systems support, including mixed-precision, data-parallel training, and gradient accumulation, within the abstract Problem class.
|
| 132 |
+
|
| 133 |
+
A similar strategy has also been adopted in popular frameworks for large-scale deep learning such as DeepSpeed (Rajbhandari et al., 2020). Since implementations of such systems support, as well as best-response Jacobian calculation, are abstracted away, users can easily plug and play different algorithmic and systems design choices, such as unrolling steps or mixed-precision training, via Config in a modular fashion. An example usage of Problem is shown in Listing 1, and a full list of supported features in Config is provided in Appendix F.
|
| 134 |
+
|
| 135 |
+
```python
|
| 136 |
+
class MyProblem(Problem):
|
| 137 |
+
    def training_step(self, batch):
|
| 138 |
+
        # Users define the cost function here
|
| 139 |
+
        return cost_fn(batch, self.module, self.other_probs, ...)
|
| 140 |
+
5 config = Config(type="darts", unroll_steps=10, fp16=True, gradient Accumulation=4)
|
| 141 |
+
6 prob = MyProblem("myproblem", config, module, optimizer, dataloader)
|
| 142 |
+
```
|
| 143 |
+
|
| 144 |
+
Listing 1: Problem class example.
|
| 145 |
+
|
| 146 |
+
Engine While Problem manages each optimization problem, Engine handles hierarchical dependencies among problems in the dataflow graph. As discussed in Section 3.1, a dataflow graph for MLO has upper-to-lower and lower-to-upper directed edges. We allow users to define two separate graphs, one for each type of edge, using a Python dictionary, in which keys/values respectively represent start/end nodes of the edge. When user-defined dependency graphs are provided, Engine compiles them and finds all paths required for automatic differentiation with a modified depth-first search algorithm. Moreover, Engine sets constraining problem sets for each problem based on the dependency graphs, as mentioned above. Once all initialization processes are done, users can run a full MLO program by calling Engine's run method, which repeatedly calls the one-step gradient descent procedure of lowermost problems. The example usage of Engine is provided in Listing 2.
|
| 147 |
+
|
| 148 |
+
```python
|
| 149 |
+
prob1 = MyProblem1(...)
|
| 150 |
+
prob2 = MyProblem2(...)
|
| 151 |
+
3 dependency = {"u21": {prob1: [prob2]}, "l2u": {prob1: [prob2]}}
|
| 152 |
+
engine = Engine(problems=[prob1, prob2], dependencies=dependency)
|
| 153 |
+
engine.run()
|
| 154 |
+
```
|
| 155 |
+
|
| 156 |
+
Listing 2: Engine class example.
|
| 157 |
+
|
| 158 |
+
# 5 EXPERIMENTS
|
| 159 |
+
|
| 160 |
+
To showcase the general applicability of BETTY, we implement three MLO benchmarks with varying complexities and scales: data reweighting for class imbalance (Sec. 5.1), correcting and reweighting corrupted labels (Sec. 5.2), and domain adaptation for a pretraining/finetuning framework (Sec. 5.3). Furthermore, we analyze the effect of different best-response Jacobian algorithms and system features by reporting GPU memory usage and training wall time. Last but not least, in the Appendix, we include an additional MLO benchmark experiment on differentiable neural architecture search (Appendix A), code examples (Appendix B), training details such as hyperparameters (Appendix C), and analyses of various algorithmic and systems design choices (Appendices D and E).
|
| 161 |
+
|
| 162 |
+
# 5.1 DATA REWEIGHTING FOR CLASS IMBALANCE
|
| 163 |
+
|
| 164 |
+
Many real-world datasets suffer from class imbalance due to underlying long-tailed data distributions. Meta-Weight-Net (MWN) (Shu et al., 2019) proposes to alleviate the class imbalance issue with a data reweighting scheme that learns to assign higher/lower weights to data from rarer/more common classes. In detail, MWN formulates data reweighting as bilevel optimization as follows:
|
| 165 |
+
|
| 166 |
+
$$
|
| 167 |
+
\theta^{*} = \underset{\theta}{\operatorname{argmin}} \; \mathcal{L}_{val}(w^{*}(\theta)) \quad \triangleright \text{Reweighting}
|
| 168 |
+
$$
|
| 169 |
+
|
| 170 |
+
$$
|
| 171 |
+
\text{s.t.} \;\; w^{*}(\theta) = \underset{w}{\operatorname{argmin}} \; \frac{1}{N} \sum_{i=1}^{N} \mathcal{R}\left(L_{train}^{i}; \theta\right) \cdot L_{train}^{i}(f(x_{i}; w), y_{i}) \quad \triangleright \text{Classification}
|
| 172 |
+
$$
|
| 173 |
+
|
| 174 |
+
where $w$ is the network parameters, $L_{train}^{i}$ is the training loss for the $i$ -th training sample, and $\theta$ is the MWN $\mathcal{R}$ 's parameters, which reweights each training sample given its training loss $L_{train}^{i}$ .
|
| 175 |
+
|
| 176 |
+
Following the original paper, we artificially inject class imbalance into the CIFAR-10 dataset by geometrically decreasing the number of data samples for each class according to an imbalance factor.
|
| 177 |
+
|
| 178 |
+
While the official implementation, which is built upon Torchmeta (Deleu et al., 2019), only adopts ITD-RMAD for best-response Jacobian calculation, we re-implement MWN with multiple best-response Jacobian algorithms, which only require one-line changes using BETTY, to study their effect on test accuracy, memory efficiency, and training wall time. The experiment results are given in Table 1.
|
| 179 |
+
|
| 180 |
+
<table><tr><td></td><td>Algorithm</td><td>IF 200</td><td>IF 100</td><td>IF 50</td><td>Memory</td><td>Time</td></tr><tr><td>MWN (original)</td><td>ITD-RMAD</td><td>68.91</td><td>75.21</td><td>80.06</td><td>2381MiB</td><td>35.8m</td></tr><tr><td>MWN (ours, step=1)</td><td>ITD-RMAD</td><td>71.96</td><td>75.13</td><td>79.50</td><td>2381MiB</td><td>36.0m</td></tr><tr><td>MWN (ours, step=1)</td><td>AID-CG</td><td>66.23±1.88</td><td>70.88±1.68</td><td>75.41±0.61</td><td>2435MiB</td><td>67.4m</td></tr><tr><td>MWN (ours, step=1)</td><td>AID-NMN</td><td>66.45±1.18</td><td>70.92±1.35</td><td>75.90 ±1.73</td><td>2419MiB</td><td>67.1m</td></tr><tr><td>MWN (ours, step=1)</td><td>AID-FD</td><td>75.45±0.63</td><td>78.11±0.43</td><td>81.15±0.25</td><td>2051MiB</td><td>28.5m</td></tr><tr><td>MWN (ours, step=5)</td><td>AID-FD</td><td>76.56±1.19</td><td>80.45±0.73</td><td>83.11±0.54</td><td>2051MiB</td><td>65.5m</td></tr></table>
|
| 181 |
+
|
| 182 |
+
Table 1: MWN experiment results. IF denotes an imbalance factor. AID-CG/NMN/FD respectively stand for implicit differentiation with conjugate gradient/Neumann series/finite difference.
|
| 183 |
+
|
| 184 |
+
We observe that different best-response Jacobian algorithms lead to vastly different test accuracy, memory efficiency, and training wall time. Interestingly, AID-FD with unrolling steps of both 1 and 5 consistently achieves better test accuracy (close to SoTA (Tang et al., 2020)) and memory efficiency than the other methods. This demonstrates that, while BETTY is developed to support large and general MLO programs, it is still useful for simpler bilevel optimization tasks as well. An additional analysis of the effect of the best-response Jacobian algorithm can be found in Appendix D.
|
| 185 |
+
|
| 186 |
+
Furthermore, to demonstrate the scalability of BETTY to large-scale MLO, we applied MWN to sentence classification with the BERT-base model (Devlin et al., 2018) with 110M parameters. Similarly, we artificially inject class imbalance into the SST dataset, and use AID-FD as our best-response Jacobian calculation algorithm. The experiment results are provided in Table 2.
|
| 187 |
+
|
| 188 |
+
<table><tr><td></td><td>Algorithm</td><td>IF 20</td><td>IF 50</td><td>Memory</td></tr><tr><td>Baseline</td><td>AID-FD</td><td>89.99±0.38</td><td>87.54±0.70</td><td>8319MiB</td></tr><tr><td>MWN (fp32)</td><td>AID-FD</td><td>-</td><td>-</td><td>Out-of-memory</td></tr><tr><td>MWN (fp16)</td><td>AID-FD</td><td>91.06±0.09</td><td>89.79±0.65</td><td>10511MiB</td></tr></table>
|
| 189 |
+
|
| 190 |
+
Table 2: MWN+BERT experiment results. fp32 and fp16 respectively stand for full-precision and mixed-precision training.
|
| 191 |
+
|
| 192 |
+
As shown above, default full-precision training fails due to the CUDA out-of-memory error, while mixed-precision training, which only requires a one-line change in Config, avoids this issue while also providing consistent improvements in test accuracy compared to the BERT baseline. This demonstrates that our system features are indeed effective in scaling MLO to large models. We include more analyses on our systems support in Appendix E.
|
| 193 |
+
|
| 194 |
+
# 5.2 CORRECTING & REWEIGHTING CORRUPTED LABELS
|
| 195 |
+
|
| 196 |
+
Another common pathology in real-world data science is the issue of label corruption, stemming from noisy data preparation processes (e.g. Amazon MTurk). One prominent example of this is in weak supervision (Ratner et al., 2016), where users create labels for large training sets by leveraging multiple weak/noisy labeling sources such as heuristics and knowledge bases. Due to the nature of weak supervision, generated labels are generally noisy, and consequently lead to a significant performance degradation. In this example, we aim to mitigate this issue by 1) correcting and 2) reweighting potentially corrupted labels. More concretely, this problem can be formulated as an extended bilevel optimization problem, as, unlike the MWN example, we have two optimization problems—correcting and reweighting—in the upper level, as opposed to one. The mathematical formulation of this MLO program is as follows:
|
| 197 |
+
|
| 198 |
+
$$
|
| 199 |
+
\theta^{*} = \underset{\theta}{\operatorname{argmin}} \; \mathcal{L}_{val}(w^{*}(\theta, \alpha)), \quad \alpha^{*} = \underset{\alpha}{\operatorname{argmin}} \; \mathcal{L}_{val}^{\prime}(w^{*}(\theta, \alpha)) \quad \triangleright \text{RWT \& CRT}
|
| 200 |
+
$$
|
| 201 |
+
|
| 202 |
+
$$
|
| 203 |
+
\text{s.t.} \;\; w^{*}(\theta, \alpha) = \underset{w}{\operatorname{argmin}} \; \frac{1}{N} \sum_{i=1}^{N} \mathcal{R}\left(L_{train}^{i}; \theta\right) \cdot L_{train}^{i}(f(x_{i}; w), g(x_{i}, y_{i}; \alpha)) \quad \triangleright \text{Classification}
|
| 204 |
+
$$
|
| 205 |
+
|
| 206 |
+
where $\alpha$ is the parameter of the label correction network $g$, and $\mathcal{L}_{val}^{\prime}$ includes the classification loss of the correction network in addition to that of the main classification network $f$ on the clean validation set.
|
| 207 |
+
|
| 208 |
+
We test our framework on the WRENCH benchmark (Zhang et al., 2021a), which contains multiple weak supervision datasets. In detail, we use a 2-layer MLP as our classifier, AID-FD as our best-response Jacobian algorithm, and Snorkel Data Programming (Ratner et al., 2016) as our weak supervision algorithm for generating training labels. The experiment results are provided in Table 3.
|
| 209 |
+
|
| 210 |
+
<table><tr><td></td><td>TREC</td><td>AGNews</td><td>IMDB</td><td>SemEval</td><td>ChemProt</td><td>YouTube</td></tr><tr><td>Snorkel</td><td>57.52±0.18</td><td>62.00±0.07</td><td>71.03±0.55</td><td>71.00±0.00</td><td>51.54±0.41</td><td>77.44±0.22</td></tr><tr><td>Baseline</td><td>53.88±1.83</td><td>80.74±0.20</td><td>72.26±0.81</td><td>71.50±0.44</td><td>54.47±0.78</td><td>88.16±1.56</td></tr><tr><td>+RWT</td><td>57.56±1.41</td><td>82.79±0.10</td><td>77.18±0.13</td><td>77.23±3.38</td><td>65.33±0.72</td><td>91.60±0.75</td></tr><tr><td>+RWT&CRT</td><td>66.76±1.31</td><td>83.16±0.20</td><td>77.80±0.26</td><td>84.34±1.43</td><td>67.69±1.17</td><td>91.52±0.66</td></tr></table>
|
| 211 |
+
|
| 212 |
+
Table 3: WRENCH results. RWT stands for reweighting and CRT for correction.
|
| 213 |
+
|
| 214 |
+
We observe that simultaneously applying label correction and reweighting significantly improves the test accuracy over the baseline and the reweighting-only scheme in almost all tasks. Thanks to BETTY, adding label correction in the upper-level on top of the existing reweighting scheme only requires defining one more Problem class, and accordingly updating the problem dependency in Engine (code examples can be found in Appendix B).
|
| 215 |
+
|
| 216 |
+
# 5.3 DOMAIN ADAPTATION FOR PRETRAINING & FINETUNING
|
| 217 |
+
|
| 218 |
+
Pretraining/finetuning paradigms are increasingly adopted with recent advances in self-supervised learning (Devlin et al., 2018; He et al., 2020). However, the data for pretraining are oftentimes from a different distribution than the data for finetuning, which could potentially cause negative transfer. Thus, domain adaptation emerges as a natural solution to mitigate this issue. As a domain adaptation strategy, (Raghu et al., 2021) proposes to combine data reweighting with a pretraining/finetuning framework to automatically decrease/increase the weight of pretraining samples that cause negative/positive transfer. In contrast with the above two benchmarks, this problem can be formulated as trilevel optimization as follows:
|
| 219 |
+
|
| 220 |
+
$$
|
| 221 |
+
\theta^{*} = \underset{\theta}{\operatorname{argmin}} \; \mathcal{L}_{FT}(v^{*}(w^{*}(\theta)))
|
| 222 |
+
$$
|
| 223 |
+
|
| 224 |
+
$\triangleright$ Reweighting
|
| 225 |
+
|
| 226 |
+
$$
|
| 227 |
+
\text{s.t.} \;\; v^{*}(w^{*}(\theta)) = \underset{v}{\operatorname{argmin}} \; \left(\mathcal{L}_{FT}(v) + \lambda \|v - w^{*}(\theta)\|_{2}^{2}\right)
|
| 228 |
+
$$
|
| 229 |
+
|
| 230 |
+
$\triangleright$ Finetuning
|
| 231 |
+
|
| 232 |
+
$$
|
| 233 |
+
w^{*}(\theta) = \underset{w}{\operatorname{argmin}} \; \frac{1}{N} \sum_{i=1}^{N} \mathcal{R}(x_{i}; \theta) \cdot L_{PT}^{i}(w)
|
| 234 |
+
$$
|
| 235 |
+
|
| 236 |
+
$\triangleright$ Pretraining
|
| 237 |
+
|
| 238 |
+
where $x_{i} / L_{PT}^{i}$ stands for the $i$ -th pretraining sample/loss, $\mathcal{R}$ for networks that reweight importance for each pretraining sample $x_{i}$ , and $\lambda$ for the proximal regularization parameter. Additionally, $w$ , $v$ , and $\theta$ are respectively parameters for pretraining, finetuning, and reweighting networks.
|
| 239 |
+
|
| 240 |
+
We conduct an experiment on the OfficeHome dataset (Venkateswara et al., 2017) that consists of 15,500 images from 65 classes and 4 domains: Art (Ar), Clipart (Cl), Product (Pr), and Real World (RW). Specifically, we randomly choose two domains and use one of them as the pretraining task and the other as the finetuning task. ResNet-18 (He et al., 2016) is used for all pretraining/finetuning/reweighting networks, and AID-FD with an unrolling step of 1 is used as our best-response Jacobian algorithm. Following (Bai et al., 2021), the finetuning and the reweighting stages share the same training dataset. We adopt a normal pretraining/finetuning framework without the reweighting stage as our baseline, and the results are presented in Table 4.
|
| 241 |
+
|
| 242 |
+
Our trilevel optimization framework achieves consistent improvements over the baseline for every task combination at the cost of additional memory usage and wall time, which demonstrates the empirical usefulness of multilevel optimization beyond a two-level hierarchy. Finally, we provide an example of (a simplified version of) the code for this experiment in Appendix B to showcase the usability of our library for a general MLO program.
|
| 243 |
+
|
| 244 |
+
<table><tr><td></td><td>Algorithm</td><td>Cl→Ar</td><td>Ar→Pr</td><td>Pr→Rw</td><td>Rw→Cl</td><td>Memory</td><td>Time</td></tr><tr><td>Baseline</td><td>N/A</td><td>65.43±0.36</td><td>87.62±0.33</td><td>77.43±0.41</td><td>68.76±0.13</td><td>3.8GiB</td><td>290s</td></tr><tr><td>+ RWT</td><td>AID-FD</td><td>67.76±0.83</td><td>88.53±0.42</td><td>78.58±0.17</td><td>69.75±0.43</td><td>8.2GiB</td><td>869s</td></tr></table>
|
| 245 |
+
|
| 246 |
+
Table 4: Domain adaptation for pretraining & finetuning results. Reported numbers are classification accuracy on the target domain (right of arrow), after pretraining on the source domain (left of arrow). We note that Baseline is a two-level MLO program and Baseline + Reweight a three-level one.
|
| 247 |
+
|
| 248 |
+
# 6 RELATED WORK
|
| 249 |
+
|
| 250 |
+
Bilevel & Multilevel Optimization There are a myriad of machine learning applications that are built upon bilevel optimization (BLO), the simplest case of multilevel optimization with a two-level hierarchy. For example, neural architecture search (Liu et al., 2019; Zhang et al., 2021b), hyperparameter optimization (Franceschi et al., 2017; Lorraine et al., 2020; Maclaurin et al., 2015), reinforcement learning (Hong et al., 2020; Konda & Tsitsiklis, 1999), data valuation (Ren et al., 2020; Wang et al., 2020), meta learning (Finn et al., 2017; Rajeswaran et al., 2019), and label correction (Zheng et al., 2019) are formulated as BLO. In addition to applying BLO to machine learning tasks, a variety of optimization techniques (Couellan & Wang, 2016; Grazzi et al., 2020; Ji et al., 2021; Liu et al., 2021) have been developed for solving BLO.
|
| 251 |
+
|
| 252 |
+
Following the popularity of BLO, MLO with more than a two-level hierarchy has also attracted increasing attention recently (Raghu et al., 2021; Somayajula et al., 2022; Such et al., 2020; Xie & Du, 2022). In general, these works construct complex multi-stage ML pipelines, and optimize the pipelines in an end-to-end fashion with MLO. For instance, (Garg et al., 2022) constructs the pipeline of (data generation)-(architecture search)-(classification) and (He et al., 2021) of (data reweighting)-(finetuning)-(pretraining), all of which are solved with MLO. Furthermore, (Sato et al., 2021) study gradient-based methods for solving MLO with theoretical guarantees.
|
| 253 |
+
|
| 254 |
+
Multilevel Optimization Software There are several software libraries that are frequently used for implementing MLO programs. Most notably, JAXopt (Blondel et al., 2021) proposes an efficient and modular approach for AID by leveraging JAX's native autodiff of the optimality conditions. Despite its easy-to-use programming interface for AID, it fails to support combining the chain rule with AID as in Equation (2), because it overrides the default behavior of JAX's automatic differentiation, which takes care of the chain rule. Therefore, it cannot be used for implementing MLO beyond a two-level hierarchy without major changes in the source code and the software design. Alternatively, higher (Grefenstette et al., 2019) provides two major primitives of making 1) stateful PyTorch modules stateless and 2) PyTorch optimizers differentiable to ease the implementation of AID/ITD. However, users still need to manually implement complicated internal mechanisms of these algorithms as well as the chain rule with the provided primitives. Torchmeta (Deleu et al., 2019) also provides similar functionalities as higher, but it requires users to use its own stateless modules implemented in the library rather than patching general modules as in higher. Thus, it lacks the support for user's custom modules, limiting its applicability. learn2learn (Arnold et al., 2020) focuses on supporting meta learning. However, since meta-learning is strictly a bilevel problem, extending it beyond a two-level hierarchy is not straightforward. Finally, most existing libraries do not have systems support, such as data-parallel training, that could mitigate memory/compute bottlenecks.
|
| 255 |
+
|
| 256 |
+
# 7 CONCLUSION
|
| 257 |
+
|
| 258 |
+
In this paper, we aimed to help establish both mathematical and systems foundations for automatic differentiation in MLO. To this end, we devised a novel dataflow graph for MLO, upon which an automatic differentiation procedure is built, and additionally introduced BETTY, a software library with various systems support, that allows for easy programming of a wide range of MLO applications in a modular fashion. We showed that BETTY allows for scaling up both to larger models with many parameters and to MLO programs with multiple dependent problems. As future work, we plan to extend BETTY to support additional algorithmic and systems features, such as best-response Jacobian algorithms for non-differentiable processes, and advanced memory optimization techniques like model-parallel training and CPU-offloading.
|
| 259 |
+
|
| 260 |
+
# ETHICS STATEMENT
|
| 261 |
+
|
| 262 |
+
Multilevel optimization has the power to be a double-edged sword that can have both positive and negative societal impacts. For example, both 1) defense or attack in an adversarial game, and 2) decreasing or increasing bias in machine learning models, can all be formulated as MLO programs, depending on the goal of the uppermost optimization problem, which is defined by users. Thus, research in preventing malicious use cases of MLO is of high importance.
|
| 263 |
+
|
| 264 |
+
# REPRODUCIBILITY STATEMENT
|
| 265 |
+
|
| 266 |
+
As one of main contributions of this work is a new software library for scalable multilevel optimization, all of the source code for the library and examples will be released open source with an Apache-2.0 License, including a full implementation of all MLO programs and experiments described in this paper. In addition, for reviewing purposes, we include our source code and easily runnable scripts for all experiments in the supplemental material of this submission.
|
| 267 |
+
|
| 268 |
+
# ACKNOWLEDGEMENTS
|
| 269 |
+
|
| 270 |
+
We thank all the reviewers for invaluable comments and feedback. EX acknowledges the support of NSF IIS1563887, NSF CCF1629559, NSF IIS1617583, NGA HM04762010002, NIGMS R01GM140467, NSF IIS1955532, NSF CNS2008248, NSF IIS2123952, and NSF BCS2040381. WN was supported in part by NSF (1651565), AFOSR (FA95501910024), ARO (W911NF-21-1-0125), CZ Biohub, Sloan Fellowship, and U.S. Department of Energy Office of Science under Contract No. DE-AC02-76SF00515.
|
| 271 |
+
|
| 272 |
+
# REFERENCES
|
| 273 |
+
|
| 274 |
+
Sebastien MR Arnold, Praateek Mahajan, Debajyoti Datta, Ian Bunner, and Konstantinos Saitas Zarkias. learn2learn: A library for meta-learning research. arXiv preprint arXiv:2008.12284, 2020.
|
| 275 |
+
Yu Bai, Minshuo Chen, Pan Zhou, Tuo Zhao, Jason Lee, Sham Kakade, Huan Wang, and Caiming Xiong. How important is the train-validation split in meta-learning? In International Conference on Machine Learning, pp. 543-553. PMLR, 2021.
|
| 276 |
+
Mathieu Blondel, Quentin Berthet, Marco Cuturi, Roy Frostig, Stephan Hoyer, Felipe Llinares-López, Fabian Pedregosa, and Jean-Philippe Vert. Efficient and modular implicit differentiation. arXiv preprint arXiv:2105.15183, 2021.
|
| 277 |
+
Nicolas Couellan and Wenjuan Wang. On the convergence of stochastic bi-level gradient methods. Optimization, 2016.
|
| 278 |
+
Hua Cui and Jie Bai. A new hyperparameters optimization method for convolutional neural networks. Pattern Recognition Letters, 125:828-834, 2019.
|
| 279 |
+
Tristan Deleu, Tobias Würfl, Mandana Samiei, Joseph Paul Cohen, and Yoshua Bengio. Torchmeta: A Meta-Learning library for PyTorch, 2019. URL https://arxiv.org/abs/1909.06576. Available at: https://github.com/tristandeleu/pytorch-meta.
|
| 280 |
+
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248-255. IEEE, 2009.
|
| 281 |
+
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
|
| 282 |
+
Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1126-1135. JMLR.org, 2017.
|
| 283 |
+
Luca Franceschi, Michele Donini, Paolo Frasconi, and Massimiliano Pontil. Forward and reverse gradient-based hyperparameter optimization. In International Conference on Machine Learning, pp. 1165-1173. PMLR, 2017.
|
| 284 |
+
|
| 285 |
+
Bhanu Garg, Li Zhang, Pradyumna Sridhara, Ramtin Hosseini, Eric Xing, and Pengtao Xie. Learning from mistakes-a framework for neural architecture search. Proceedings of the AAAI Conference on Artificial Intelligence, 2022.
|
| 286 |
+
Riccardo Grazzi, Luca Franceschi, Massimiliano Pontil, and Saverio Salzo. On the iteration complexity of hypergradient computation. In International Conference on Machine Learning, pp. 3748-3758. PMLR, 2020.
|
| 287 |
+
Edward Grefenstette, Brandon Amos, Denis Yarats, Phu Mon Htut, Artem Molchanov, Franziska Meier, Douwe Kiela, Kyunghyun Cho, and Soumith Chintala. Generalized inner loop meta-learning. arXiv preprint arXiv:1910.01727, 2019.
|
| 288 |
+
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016.
|
| 289 |
+
Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 9729-9738, 2020.
|
| 290 |
+
Xuehai He, Zhuo Cai, Wenlan Wei, Yichen Zhang, Luntian Mou, Eric Xing, and Pengtao Xie. Towards visual question answering on pathology images. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pp. 708-718, 2021.
|
| 291 |
+
Mingyi Hong, Hoi-To Wai, Zhaoran Wang, and Zhuoran Yang. A two-timescale framework for bilevel optimization: Complexity analysis and application to actor-critic. arXiv preprint arXiv:2007.05170, 2020.
|
| 292 |
+
Kaiyi Ji, Junjie Yang, and Yingbin Liang. Bilevel optimization: Convergence analysis and enhanced design. In International Conference on Machine Learning, pp. 4882-4892. PMLR, 2021.
|
| 293 |
+
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
|
| 294 |
+
Vijay Konda and John Tsitsiklis. Actor-critic algorithms. Advances in neural information processing systems, 12, 1999.
|
| 295 |
+
Hanxiao Liu, Karen Simonyan, and Yiming Yang. DARTS: differentiable architecture search. In ICLR, 2019.
|
| 296 |
+
Risheng Liu, Yaohua Liu, Shangzhi Zeng, and Jin Zhang. Towards gradient-based bilevel optimization with non-convex followers and beyond. Advances in Neural Information Processing Systems, 34, 2021.
|
| 297 |
+
Jonathan Lorraine, Paul Vicol, and David Duvenaud. Optimizing millions of hyperparameters by implicit differentiation. In International Conference on Artificial Intelligence and Statistics, pp. 1540-1552. PMLR, 2020.
|
| 298 |
+
Dougal Maclaurin, David Duvenaud, and Ryan Adams. Gradient-based hyperparameter optimization through reversible learning. In International conference on machine learning, pp. 2113-2122. PMLR, 2015.
|
| 299 |
+
Athanasios Migdalas, Panos M Pardalos, and Peter Värbrand. Multilevel optimization: algorithms and applications, volume 20. Springer Science & Business Media, 1998.
|
| 300 |
+
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, 32, 2019.
|
| 301 |
+
Barak A. Pearlmutter and Jeffrey Mark Siskind. Reverse-mode ad in a functional framework: Lambda the ultimate backpropagator. ACM Trans. Program. Lang. Syst., 30:7:1-7:36, 2008.
|
| 302 |
+
|
| 303 |
+
Aniruddh Raghu, Jonathan Lorraine, Simon Kornblith, Matthew McDermott, and David K Duvenaud. Meta-learning to improve pre-training. Advances in Neural Information Processing Systems, 34, 2021.
|
| 304 |
+
Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. Zero: Memory optimizations toward training trillion parameter models. In SC20: International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 1-16. IEEE, 2020.
|
| 305 |
+
Aravind Rajeswaran, Chelsea Finn, Sham M Kakade, and Sergey Levine. Meta-learning with implicit gradients. Advances in neural information processing systems, 32, 2019.
|
| 306 |
+
Aravind Rajeswaran, Igor Mordatch, and Vikash Kumar. A game theoretic framework for model based reinforcement learning. In International conference on machine learning, pp. 7953-7963. PMLR, 2020.
|
| 307 |
+
Alexander J Ratner, Christopher M De Sa, Sen Wu, Daniel Selsam, and Christopher Ré. Data programming: Creating large training sets, quickly. Advances in neural information processing systems, 29, 2016.
|
| 308 |
+
Zhongzheng Ren, Raymond Yeh, and Alexander Schwing. Not all unlabeled data are equal: Learning to weight data in semi-supervised learning. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 21786-21797. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/f7ac67a9aa8d255282de7d11391e1b69-Paper.pdf.
|
| 309 |
+
Ryo Sato, Mirai Tanaka, and Akiko Takeda. A gradient method for multilevel optimization. Advances in Neural Information Processing Systems, 34, 2021.
|
| 310 |
+
Jun Shu, Qi Xie, Lixuan Yi, Qian Zhao, Sanping Zhou, Zongben Xu, and Deyu Meng. Meta-weight-net: Learning an explicit mapping for sample weighting. In Advances in Neural Information Processing Systems, pp. 1919-1930, 2019.
|
| 311 |
+
Sai Ashish Somayajula, Linfeng Song, and Pengtao Xie. A multi-level optimization framework for end-to-end text augmentation. Transactions of the Association for Computational Linguistics, 10: 343-358, 2022.
|
| 312 |
+
Felipe Petroski Such, Aditya Rawal, Joel Lehman, Kenneth Stanley, and Jeffrey Clune. Generative teaching networks: Accelerating neural architecture search by learning to generate synthetic training data. In International Conference on Machine Learning, pp. 9206-9216. PMLR, 2020.
|
| 313 |
+
Kaihua Tang, Jianqiang Huang, and Hanwang Zhang. Long-tailed classification by keeping the good and removing the bad momentum causal effect. Advances in Neural Information Processing Systems, 33:1513-1524, 2020.
|
| 314 |
+
Hemanth Venkateswara, Jose Eusebio, Shayok Chakraborty, and Sethuraman Panchanathan. Deep hashing network for unsupervised domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5018-5027, 2017.
|
| 315 |
+
Luis N Vicente and Paul H Calamai. Bilevel and multilevel programming: A bibliography review. Journal of Global optimization, 5(3):291-306, 1994.
|
| 316 |
+
Yulin Wang, Jiayi Guo, Shiji Song, and Gao Huang. Meta-semi: A meta-learning approach for semi-supervised learning. CoRR, abs/2007.02394, 2020. URL https://arxiv.org/abs/2007.02394.
|
| 317 |
+
Tong Xiao, Tian Xia, Yi Yang, Chang Huang, and Xiaogang Wang. Learning from massive noisy labeled data for image classification. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2691-2699, 2015.
|
| 318 |
+
Pengtao Xie and Xuefeng Du. Performance-aware mutual knowledge distillation for improving neural architecture search. CVPR, 2022.
|
| 319 |
+
|
| 320 |
+
Jieyu Zhang, Yue Yu, Yinghao Li, Yujing Wang, Yaming Yang, Mao Yang, and Alexander Ratner. Wrench: A comprehensive benchmark for weak supervision. arXiv preprint arXiv:2109.11377, 2021a.
|
| 321 |
+
Miao Zhang, Steven W Su, Shirui Pan, Xiaojun Chang, Ehsan M Abbasnejad, and Reza Haffari. idarts: Differentiable architecture search with stochastic implicit gradients. In International Conference on Machine Learning, pp. 12557-12566. PMLR, 2021b.
|
| 322 |
+
Guoqing Zheng, Ahmed Hassan Awadallah, and Susan T. Dumais. Meta label correction for learning with weak supervision. CoRR, abs/1911.03809, 2019. URL http://arxiv.org/abs/1911.03809.
|
| 323 |
+
|
| 324 |
+
# A ADDITIONAL MULTILEVEL OPTIMIZATION BENCHMARKS
|
| 325 |
+
|
| 326 |
+
# A.1 DIFFERENTIABLE NEURAL ARCHITECTURE SEARCH
|
| 327 |
+
|
| 328 |
+
Neural network architecture plays a significant role in deep learning research. However, the space of possible architectures is so large that manual search is practically infeasible. To overcome this issue, DARTS (Liu et al., 2019) proposes an efficient gradient-based neural architecture search method based on the following bilevel optimization formulation:
|
| 329 |
+
|
| 330 |
+
$$
|
| 331 |
+
\alpha^{*} = \underset{\alpha}{\operatorname{argmin}}\ \mathcal{L}_{val}(w^{*}(\alpha), \alpha)
|
| 332 |
+
$$
|
| 333 |
+
|
| 334 |
+
$\triangleright$ Architecture Search
|
| 335 |
+
|
| 336 |
+
$$
|
| 337 |
+
\text{s.t.}\quad w^{*}(\alpha) = \underset{w}{\operatorname{argmin}}\ \mathcal{L}_{train}(w; \alpha)
|
| 338 |
+
$$
|
| 339 |
+
|
| 340 |
+
$\triangleright$ Classification
|
| 341 |
+
|
| 342 |
+
where $\alpha$ is the architecture weight and $w$ is the network weight. The original paper uses implicit differentiation with finite difference as its best-response Jacobian algorithm to solve the above MLO program.
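For reference, the hypergradient computed by this algorithm can be summarized as follows (our notation, following Liu et al. (2019); here $w' = w - \xi \nabla_{w}\mathcal{L}_{train}(w, \alpha)$ is the unrolled weight and $\epsilon$ is a small scalar):

$$
\nabla_{\alpha}\mathcal{L}_{val} \approx \nabla_{\alpha}\mathcal{L}_{val}(w', \alpha) - \xi\, \nabla^{2}_{\alpha, w}\mathcal{L}_{train}(w, \alpha)\, \nabla_{w'}\mathcal{L}_{val}(w', \alpha),
$$

$$
\nabla^{2}_{\alpha, w}\mathcal{L}_{train}(w, \alpha)\, \nabla_{w'}\mathcal{L}_{val}(w', \alpha) \approx \frac{\nabla_{\alpha}\mathcal{L}_{train}(w^{+}, \alpha) - \nabla_{\alpha}\mathcal{L}_{train}(w^{-}, \alpha)}{2\epsilon}, \qquad w^{\pm} = w \pm \epsilon\, \nabla_{w'}\mathcal{L}_{val}(w', \alpha).
$$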
|
| 343 |
+
|
| 344 |
+
We follow the training configurations from the original paper's CIFAR-10 experiment, with a few minor changes. While the original paper performs the finite difference method on the initial network weights, we perform it on the unrolled network weights. This is because we view their best-response Jacobian calculation from the implicit differentiation perspective, where the second-order derivative is calculated based on the unrolled weights. This allows us to unroll the lower-level optimization for more than one step, as opposed to the strict one-step unrolled gradient descent of the original paper. A similar idea was also proposed in iDARTS (Zhang et al., 2021b). Specifically, we re-implement DARTS with implicit differentiation and finite difference using 1 and 3 unrolling steps. The results are provided in Table 5.
|
| 345 |
+
|
| 346 |
+
<table><tr><td></td><td>Algorithm</td><td>Test Acc.</td><td>Parameters</td><td>Memory</td><td>Wall Time</td></tr><tr><td>Random Search</td><td>Random</td><td>96.71%</td><td>3.2M</td><td>N/A</td><td>N/A</td></tr><tr><td>DARTS (original)</td><td>AID-FD*</td><td>97.24%</td><td>3.3M</td><td>10493MiB</td><td>25.4h</td></tr><tr><td>DARTS (ours, step=1)</td><td>AID-FD</td><td>97.39%</td><td>3.8M</td><td>10485MiB</td><td>23.6h</td></tr><tr><td>DARTS (ours, step=3)</td><td>AID-FD</td><td>97.22%</td><td>3.2M</td><td>10485MiB</td><td>28.5h</td></tr></table>
|
| 347 |
+
|
| 348 |
+
Table 5: DARTS re-implementation results. AID-FD refers to implicit differentiation with a finite difference method, and * indicates the difference in the implementation of AID-FD explained above.
|
| 349 |
+
|
| 350 |
+
Our re-implementation with different unrolling steps achieves performance similar to the original paper. We also notice that our re-implementation uses slightly less GPU memory and wall time. This is because the original implementation computes gradients for the architecture weights (upper-level parameters) while running the lower-level optimization, whereas ours only computes gradients for the parameters of the corresponding optimization stage.
|
| 351 |
+
|
| 352 |
+
# A.2 CORRECTING & REWEIGHTING CORRUPTED LABELS (EXTENDED)
|
| 353 |
+
|
| 354 |
+
To further demonstrate the general applicability of BETTY to different datasets and scales, we performed experiments from Section 5.2 in two additional settings.
|
| 355 |
+
|
| 356 |
+
Clothing-1M + ResNet-50 Clothing-1M (Xiao et al., 2015) is a real-world noisy dataset that consists of 1 million fashion images collected from various online shopping websites and has an approximate noise ratio of $38.5\%$. Following standard practice, we use ResNet-50 as our backbone model and attempt to correct and reweight noisy labels with the extended bilevel optimization. The experiment result is presented in Table 6.
|
| 357 |
+
|
| 358 |
+
<table><tr><td></td><td>Test Accuracy</td></tr><tr><td>Baseline</td><td>70.76%</td></tr><tr><td>+RWT</td><td>75.57%</td></tr><tr><td>+RWT&CRT</td><td>76.34%</td></tr></table>

Table 6: Clothing-1M + ResNet-50 results.
|
| 359 |
+
|
| 360 |
+
This experiment empirically shows that the MLO application implemented with BETTY works well with a large-scale dataset.
|
| 361 |
+
|
| 362 |
+
Wrench + BERT-base In recent years, finetuning pretrained large language models has become the standard approach for text classification. As the Wrench benchmark mostly consists of text classification datasets, we further apply our "correcting and reweighting corrupted labels" framework to the BERT-base model. The result is presented in Table 7.
|
| 363 |
+
|
| 364 |
+
|
| 365 |
+
|
| 366 |
+
<table><tr><td></td><td>TREC</td><td>AGNews</td><td>IMDB</td><td>SemEval</td><td>ChemProt</td><td>YouTube</td></tr><tr><td>Baseline</td><td>64.14±6.56</td><td>86.12±0.17</td><td>71.66±2.05</td><td>79.93±1.53</td><td>52.35±0.56</td><td>93.20±1.44</td></tr><tr><td>+RWT</td><td>84.07±4.42</td><td>89.62±0.60</td><td>87.85±0.24</td><td>87.45±0.69</td><td>71.42±1.50</td><td>94.67±0.46</td></tr><tr><td>+RWT&CRT</td><td>93.07±0.31</td><td>90.40±0.16</td><td>87.45±0.39</td><td>87.92±0.04</td><td>75.27±1.23</td><td>94.80±0.80</td></tr></table>
|
| 367 |
+
|
| 368 |
+
Table 7: Wrench + BERT-base results.
|
| 369 |
+
|
| 370 |
+
This experiment empirically shows that the MLO application implemented with BETTY works well with a large model.
|
| 371 |
+
|
| 372 |
+
# B CODE EXAMPLE
|
| 373 |
+
|
| 374 |
+
Here, we provide simplified code for our experiments from Section 5. Note that every experiment shares a similar code structure when implemented with BETTY.
|
| 375 |
+
|
| 376 |
+
# B.1 DATA REWEIGHTING FOR CLASS IMBALANCE
|
| 377 |
+
```python
trainloader, validloader = setup_dataloader()
rwt_module, rwt_optimizer = setup_reweight()
cls_module, cls_optimizer, cls_scheduler = setup_classifier()

# Level 2
class Reweight(ImplicitProblem):
    def training_step(self, batch):
        inputs, labels = batch
        outputs = self.classifier(inputs)
        return F.cross_entropy(outputs, labels)

# Level 1
class Classifier(ImplicitProblem):
    def training_step(self, batch):
        inputs, labels = batch
        outputs = self.module(inputs)
        loss = F.cross_entropy(outputs, labels, reduction="none")
        loss_reshape = torch.reshape(loss, (-1, 1))
        # Reweighting
        weight = self.reweight(loss_reshape.detach())
        return torch.mean(weight * loss_reshape)

upper_config = Config(type="darts", retain_graph=True)
lower_config = Config(type="default", unroll_steps=5)
reweight = Reweight(name="reweight",
                    config=upper_config,
                    module=rwt_module,
                    optimizer=rwt_optimizer,
                    train_dataloader=validloader)
classifier = Classifier(name="classifier",
                        config=lower_config,
                        module=cls_module,
                        optimizer=cls_optimizer,
                        scheduler=cls_scheduler,
                        train_dataloader=trainloader)
probs = [reweight, classifier]
u2l = {reweight: [classifier]}
l2u = {classifier: [reweight]}
depends = {"l2u": l2u, "u2l": u2l}
engine = Engine(problems=probs, dependencies=depends)
engine.run()
```
|
| 420 |
+
|
| 421 |
+
Listing 3: Simplified code of "Data Reweighting for Class Imbalance"
|
| 422 |
+
|
| 423 |
+
# B.2 CORRECTING & REWEIGHTING CORRUPTED LABELS
|
| 424 |
+
```python
trainloader, validloader = setup_dataloader()
rwt_module, rwt_optimizer = setup_reweight()
crt_module, crt_optimizer = setup_correct()
cls_module, cls_optimizer, cls_scheduler = setup_classifier()

# Level 2
class Correct(ImplicitProblem):
    def training_step(self, batch):
        inputs, labels = batch
        outputs, embeds = self.classifier(inputs, return_embedding=True)
        correct_outputs = self.module(embeds, test=True)
        ce_loss = F.cross_entropy(outputs, labels)
        aux_loss = F.cross_entropy(correct_outputs, labels)
        return ce_loss + aux_loss

# Level 2
class Reweight(ImplicitProblem):
    def training_step(self, batch):
        inputs, labels = batch
        outputs = self.classifier(inputs)
        return F.cross_entropy(outputs, labels)
```
|
| 490 |
+
|
| 491 |
+
Listing 4: Simplified code of "Correcting & Reweighting Corrupted Labels"
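The lower-level problem and the engine setup of this listing mirror Listing 3, with two upper-level problems instead of one. A minimal sketch of that remaining structure is given below; the class interfaces and helper signatures are our assumptions rather than the exact code used in the experiments.

```python
# Level 1 (hypothetical sketch): the classifier trains on corrected, reweighted labels.
class Classifier(ImplicitProblem):
    def training_step(self, batch):
        inputs, labels = batch
        outputs, embeds = self.module(inputs, return_embedding=True)
        new_labels = self.correct(embeds, labels)   # hypothetical signature
        loss = F.cross_entropy(outputs, new_labels, reduction="none")
        weight = self.reweight(loss.reshape(-1, 1).detach())
        return torch.mean(weight * loss.reshape(-1, 1))

# Both upper-level problems depend on the single lower-level classifier.
probs = [correct, reweight, classifier]
u2l = {correct: [classifier], reweight: [classifier]}
l2u = {classifier: [correct, reweight]}
engine = Engine(problems=probs, dependencies={"l2u": l2u, "u2l": u2l})
engine.run()
```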
|
| 492 |
+
|
| 493 |
+
# B.3 DOMAIN ADAPTATION FOR PRETRAINING & FINETUNING
|
| 494 |
+
```python
# Get module, optimizer, lr scheduler, and data loader for each problem
pt_module, pt_optimizer, pt_scheduler, ptloader = setup_pretrain()
ft_module, ft_optimizer, ft_scheduler, ftloader = setup_finetune()
rw_module, rw_optimizer, rw_scheduler, rwloader = setup_reweight()

# Level 1
class Pretrain(ImplicitProblem):
    def training_step(self, batch):
        inputs, targets = batch
        outs = self.module(inputs)
        loss_raw = F.cross_entropy(outs, targets, reduction="none")
        logit = self.reweight(inputs)
        weight = torch.sigmoid(logit)
        return torch.mean(loss_raw * weight)

# Level 2
class Finetune(ImplicitProblem):
    def training_step(self, batch):
        inputs, targets = batch
        outs = self.module(inputs)
        ce_loss = F.cross_entropy(outs, targets, reduction="none")
        loss = torch.mean(ce_loss)
        # Proximal regularization
        for (n1, p1), p2 in zip(self.module.named_parameters(),
                                self.pretrain.module.parameters()):
            lam = 0 if "fc" in n1 else args.lam
            loss += lam * (p1 - p2).pow(2).sum()
        return loss

# Level 3
class Reweight(ImplicitProblem):
    def training_step(self, batch):
        inputs, targets = batch
        outs = self.finetune(inputs)
        return F.cross_entropy(outs, targets)

# Define optimization configurations
reweight_config = Config(type="darts", step=1, retain_graph=True)
finetune_config = Config(type="default", step=1)
pretrain_config = Config(type="default", step=1)

pretrain = Pretrain("pretrain", pretrain_config, pt_module, pt_optimizer,
                    pt_scheduler, ptloader)
finetune = Finetune("finetune", finetune_config, ft_module, ft_optimizer,
                    ft_scheduler, ftloader)
reweight = Reweight("reweight", reweight_config, rw_module, rw_optimizer,
                    rw_scheduler, rwloader)

probs = [reweight, finetune, pretrain]
u2l = {reweight: [pretrain]}
l2u = {pretrain: [finetune], finetune: [reweight]}
depends = {"u2l": u2l, "l2u": l2u}
engine = Engine(problems=probs, dependencies=depends)
engine.run()
```

Listing 5: Simplified code of "Domain Adaptation for Pretraining & Finetuning"
|
| 503 |
+
|
| 504 |
+
# B.4 DIFFERENTIABLE NEURAL ARCHITECTURE SEARCH
|
| 505 |
+
```python
trainloader, validloader = setup_dataloader()
arch_module, arch_optimizer = setup_architecture()
cls_module, cls_optimizer, cls_scheduler = setup_classifier()

# Level 2
class Architecture(ImplicitProblem):
    def training_step(self, batch):
        x, target = batch
        alphas = self.module()
        return self.classifier.module.loss(x, alphas, target)

# Level 1
class Classifier(ImplicitProblem):
    def training_step(self, batch):
        x, target = batch
        alphas = self.architecture()
        return self.module.loss(x, alphas, target)

arch_config = Config(type="darts",
                     step=1,
                     retain_graph=True,
                     first_order=True)
cls_config = Config(type="default", step=1)  # lower-level config (assumed; lost in the source)

architecture = Architecture(name="architecture",
                            config=arch_config,
                            module=arch_module,
                            optimizer=arch_optimizer,
                            train_dataloader=validloader)
classifier = Classifier(name="classifier",
                        config=cls_config,
                        module=cls_module,
                        optimizer=cls_optimizer,
                        scheduler=cls_scheduler,
                        train_dataloader=trainloader)

probs = [architecture, classifier]
u2l = {architecture: [classifier]}
l2u = {classifier: [architecture]}
depends = {"l2u": l2u, "u2l": u2l}
engine = Engine(problems=probs, dependencies=depends)
engine.run()
```
|
| 562 |
+
|
| 563 |
+
Listing 6: Simplified code of "Differentiable Neural Architecture Search"
|
| 564 |
+
|
| 565 |
+
# C EXPERIMENT DETAILS
|
| 566 |
+
|
| 567 |
+
In this section, we provide further training details (e.g. hyperparameters) of each experiment.
|
| 568 |
+
|
| 569 |
+
# C.1 DATA REWEIGHTING FOR CLASS IMBALANCE
|
| 570 |
+
|
| 571 |
+
Dataset We reuse the long-tailed CIFAR-10 dataset from the original paper (Shu et al., 2019) as our inner-level training dataset. Specifically, the imbalance factor is defined as the ratio between the number of training samples in the most common class and in the rarest class, and the sample counts of the remaining classes are obtained by geometrically interpolating between these two extremes. We randomly select 100 samples from the validation set to construct the upper-level (or meta) training dataset, and use the rest as the validation dataset, on which the classification accuracy reported in the main text is measured.
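For concreteness, the per-class sample counts implied by this construction can be computed as in the following sketch (a geometric decay from the most common to the rarest class; the helper below is ours, not part of the original code):

```python
# Sketch: per-class sample counts of long-tailed CIFAR-10 (following Shu et al., 2019).
def long_tailed_counts(n_max=5000, imbalance_factor=100, num_classes=10):
    # imbalance_factor = n_max / n_min; intermediate classes decay geometrically.
    ratio = (1.0 / imbalance_factor) ** (1.0 / (num_classes - 1))
    return [int(n_max * ratio ** c) for c in range(num_classes)]

print(long_tailed_counts())  # approximately [5000, 2997, 1796, ..., 50]
```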
|
| 572 |
+
|
| 573 |
+
Meta-Weight-Network We adopt an MLP with one hidden layer of 100 neurons (i.e., 1-100-1) as our Meta-Weight-Network (MWN). It is trained for 10,000 iterations with the Adam optimizer (Kingma & Ba, 2014), with the learning rate fixed to 0.00001 throughout training, momentum values of (0.9, 0.999), and a weight decay of 0.
|
| 574 |
+
|
| 575 |
+
Classification Network Following the original MWN work (Shu et al., 2019), we use ResNet32 (He et al., 2016) as our classification network. It is trained with the SGD optimizer with an initial learning rate of 0.1, momentum of 0.9, and weight decay of 0.0005. Training is performed for 10,000 iterations, and we decay the learning rate by a factor of 10 at iterations 5,000 and 7,500.
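In standard PyTorch, this optimizer and schedule correspond to the following sketch (where `model` stands for the ResNet32 classifier and the scheduler is stepped once per iteration):

```python
import torch

optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-4)
# Decay the learning rate by 10x at iterations 5,000 and 7,500.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                 milestones=[5000, 7500],
                                                 gamma=0.1)
```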
|
| 576 |
+
|
| 577 |
+
# C.2 CORRECTING & REWEIGHTING CORRUPTED LABELS
|
| 578 |
+
|
| 579 |
+
Dataset We directly use TREC, AGNews, IMDB, SemEval, ChemProt, YouTube text classification datasets from the Wrench benchmark (Zhang et al., 2021a). More specifically, we use the training split of each dataset for training the classification network, and the validation split for training the correcting and the reweighting networks. Test accuracy is measured on the test split.
|
| 580 |
+
|
| 581 |
+
Correct Network Our correct network takes the penultimate activation from the classification network and outputs soft labels through a linear layer followed by a softmax layer. These new soft labels are interpolated with the original labels via a reweighting scheme implemented with a 2-layer MLP. As with the reweighting network, the correct network is trained with the Adam optimizer with a learning rate of 0.00001, momentum values of (0.9, 0.999), and a weight decay of 0.
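A hypothetical sketch of such a correct network is shown below; the module name, its signature, and the interpolation form are our assumptions based on the description above.

```python
import torch.nn as nn
import torch.nn.functional as F

class CorrectNetwork(nn.Module):
    def __init__(self, feature_dim, num_classes):
        super().__init__()
        self.linear = nn.Linear(feature_dim, num_classes)

    def forward(self, embeds, orig_labels_onehot, weight):
        # Soft labels from the classifier's penultimate activation
        soft_labels = F.softmax(self.linear(embeds), dim=-1)
        # Interpolate predicted soft labels with the (possibly corrupted) original labels
        return weight * soft_labels + (1.0 - weight) * orig_labels_onehot
```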
|
| 582 |
+
|
| 583 |
+
Reweighting Network For our reweighting network, we reuse the Meta-Weight-Net from the "Data Reweighting for Class Imbalance" experiment, following all of its training details.
|
| 584 |
+
|
| 585 |
+
Classification Network As our classification network, we adopt a 2-layer MLP with a hidden size of 100. The classification network is trained for 30,000 iterations with the SGD optimizer with a learning rate of 0.003, momentum of 0.9, and weight decay of 0.0001. The learning rate is decayed to 0 with a cosine annealing schedule during training.
|
| 586 |
+
|
| 587 |
+
# C.3 DOMAIN ADAPTATION FOR PRETRAINING & FINETUNING
|
| 588 |
+
|
| 589 |
+
Dataset We split each domain of the OfficeHome dataset (Venkateswara et al., 2017) into training/validation/test datasets with a ratio of 5:3:2. The pretraining network is trained on the training set of the source domain. Finetuning and reweighting networks are both trained on the training set of the target domain following the strategy proposed in (Bai et al., 2021). The final performance is measured by the classification accuracy of the finetuning network on the test dataset of the target domain.
|
| 590 |
+
|
| 591 |
+
Pretraining Network We use ResNet18 (He et al., 2016) pretrained on the ImageNet dataset (Deng et al., 2009) for our pretraining network. Following the popular transfer learning strategy, we split the network into two parts, namely the feature (or convolutional layer) part and the classifier (or fully-connected layer) part, and each part is trained with a different learning rate. Specifically, the learning rates for the feature and the classifier parts are set to 0.001 and 0.0001, respectively, with the Adam optimizer. They share the same weight decay of 0.0005 and momentum values of (0.9, 0.999). Furthermore, we encourage the network weights to stay close to the pretrained weights by introducing an additional proximal regularization term with a coefficient of 0.001. Training is performed for 1,000 iterations, and the learning rate is decayed by a factor of 10 at iterations 400 and 800.
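A minimal sketch of this setup is given below, assuming a torchvision-style ResNet18 where `model.fc` is the classifier head and all remaining modules form the feature part:

```python
import torch

feature_params = [p for n, p in model.named_parameters() if not n.startswith("fc")]
classifier_params = [p for n, p in model.named_parameters() if n.startswith("fc")]
optimizer = torch.optim.Adam([
    {"params": feature_params, "lr": 1e-3},
    {"params": classifier_params, "lr": 1e-4},
], betas=(0.9, 0.999), weight_decay=5e-4)

# Proximal regularization toward the initial (ImageNet-pretrained) weights
init_params = [p.detach().clone() for p in model.parameters()]

def proximal_term(model, init_params, lam=0.001):
    return lam * sum((p - p0).pow(2).sum()
                     for p, p0 in zip(model.parameters(), init_params))
```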
|
| 592 |
+
|
| 593 |
+
Finetuning Network The same architecture and optimization configurations as the pretraining network are used for the finetuning network. The proximal regularization parameter, which encourages the finetuning network parameter to stay close to the pretraining network parameter, is set to 0.007.
|
| 594 |
+
|
| 595 |
+
Reweighting Network The same architecture and optimization configurations as the pretraining network are used for the reweighting network, except that no proximal regularization is applied to the reweighting network.
|
| 596 |
+
|
| 597 |
+
# C.4 DIFFERENTIABLE NEURAL ARCHITECTURE SEARCH
|
| 598 |
+
|
| 599 |
+
Dataset Following the original paper (Liu et al., 2019), we use the first half of the CIFAR-10 training dataset as our inner-level training dataset (i.e., for the classification network) and the other half as the outer-level training dataset (i.e., for the architecture network). The accuracy reported in the main text is measured on the CIFAR-10 validation dataset.
|
| 600 |
+
|
| 601 |
+
Architecture Network We adopt the same architecture search space as in the original paper (Liu et al., 2019), with 8 operations and 7 nodes per convolutional cell. The architecture parameters are initialized to zero to ensure equal softmax values, and are trained with the Adam optimizer (Kingma & Ba, 2014) with a learning rate fixed to 0.0003, momentum values of (0.5, 0.999), and a weight decay of 0.001 throughout training. Training is performed for 50 epochs.
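For illustration, zero-initialized architecture parameters yield a uniform softmax over the 8 candidate operations; the edge count below is an assumption based on the standard DARTS cell, not a value stated in the text.

```python
import torch
import torch.nn.functional as F

alphas = torch.zeros(14, 8, requires_grad=True)   # (#edges, #operations); edge count assumed
print(F.softmax(alphas, dim=-1)[0])               # tensor([0.1250, 0.1250, ..., 0.1250])
```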
|
| 602 |
+
|
| 603 |
+
Classification Network Given the above architecture parameters, we set our classification network to have 8 cells and the initial number of channels to be 16. The network is trained with the SGD optimizer whose initial learning rate is set to 0.025, momentum to 0.9, and weight decay value to 0.0003. Training is performed for 50 epochs, and the learning rate is decayed following the cosine annealing schedule without restart to the minimum learning rate of 0.001 by the end of training.
|
| 604 |
+
|
| 605 |
+
# D DESIGN CHOICE ANALYSIS
|
| 606 |
+
|
| 607 |
+
In this section, we visually compare the convergence speed of different best-response Jacobian algorithms with the loss convergence graphs on the synthetic hyperparameter optimization task and the data reweighting task (Section 5.1). Specifically, we analyze the convergence speed in terms of both 1) the number of steps and 2) training time, as the per-step computational cost differs for each algorithm.
|
| 608 |
+
|
| 609 |
+
# D.1 SYNTHETIC HYPERPARAMETER OPTIMIZATION
|
| 610 |
+
|
| 611 |
+
Following Grazzi et al. (2020), we constructed a synthetic hyperparameter optimization task where we optimize a separate weight decay value for every parameter of a simple binary logistic regression model. Mathematically, this problem can be formulated as the following bilevel optimization:
|
| 612 |
+
|
| 613 |
+
$$
|
| 614 |
+
\lambda^{*} = \arg \min_{\lambda} \operatorname{sigmoid}\left(y_{u} x_{u}^{T} w^{*}\right)
|
| 615 |
+
$$
|
| 616 |
+
|
| 617 |
+
$$
|
| 618 |
+
w^{*} = \arg \min_{w} \operatorname{sigmoid}\left(y_{l} x_{l}^{T} w\right) + \frac{1}{2} w^{T} \operatorname{diag}(\lambda) w
|
| 619 |
+
$$
|
| 620 |
+
|
| 621 |
+
where $(x_{l}, y_{l})$ and $(x_{u}, y_{u})$ are the training datasets for the lower- and upper-level problems, respectively, with $x\in \mathbb{R}^{n\times d}$ and $y\in \mathbb{R}^{n\times 1}$. Here, $n$ is the number of training examples in each dataset and $d$ is the dimension of the feature vector. $w\in \mathbb{R}^{d\times 1}$ is the logistic regression parameter, and $\lambda \in \mathbb{R}^{d\times 1}$ is the hyperparameter (i.e., the per-parameter weight decay value).
|
| 622 |
+
|
| 623 |
+
Given the above setup, we compared four different best-response Jacobian algorithms: 1) ITD-RMAD, 2) AID-FD, 3) AID-CG, and 4) AID-Neumann. For a fair comparison, we fixed the unrolling step to 100 for all algorithms. The experiment result is presented below:
|
| 624 |
+
|
| 625 |
+

|
| 626 |
+
(a) Outer training loss (x-axis: time)
|
| 627 |
+
|
| 628 |
+

|
| 629 |
+
(b) Outer training loss (x-axis: step)
|
| 630 |
+
Figure 3: Convergence analysis of different best-response Jacobian algorithms on the synthetic hyperparameter optimization task
|
| 631 |
+
|
| 632 |
+
As shown in Figure 3, AID-CG achieves the fastest convergence both in terms of training steps and training time. However, AID-FD achieves the fastest per-step computation time, as it is the only algorithm that does not require an explicit calculation of the second-order derivative (i.e., Hessian).
|
| 633 |
+
|
| 634 |
+
# D.2 DATA REWEIGHTING
|
| 635 |
+
|
| 636 |
+
To study how different best-response Jacobian algorithms perform on more complex tasks, we repeated the above experiment on the data reweighting task from Section 5.1. Again, for a fair comparison, we used the same unrolling step of 1 for all algorithms. The experiment result is provided in Figure 4.
|
| 639 |
+
|
| 640 |
+

|
| 641 |
+
|
| 642 |
+

|
| 643 |
+
Figure 4: Convergence analysis of different best-response Jacobian algorithms on the data reweighting task
|
| 644 |
+
|
| 645 |
+

|
| 646 |
+
|
| 647 |
+
Unlike in the synthetic hyperparameter optimization task, AID-FD achieves the fastest convergence in terms of both training steps and training time, as well as the best final validation accuracy. Since AID-FD does not require any second-order derivative calculation, it also achieves the lowest per-step computation cost.
|
| 648 |
+
|
| 649 |
+
The above two experiments illustrate the no-free-lunch principle: the optimal design choice can vary across tasks, and there is no golden rule. However, thanks to the modular interface for switching between design choices (in Config), only minimal programming effort is needed with BETTY, expediting the research cycle.
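For instance, switching the best-response Jacobian algorithm only requires changing the Config of the corresponding problem, as in the sketch below; the `type` strings beyond those used in the listings above ("darts", "default") are our assumptions about the available options.

```python
# Hedged sketch: swap the hypergradient algorithm by editing a single line.
config_fd = Config(type="darts", unroll_steps=1)         # AID-FD (as in the listings above)
config_cg = Config(type="cg", unroll_steps=100)          # AID-CG (assumed type string)
config_neumann = Config(type="neumann", unroll_steps=100)  # AID-Neumann (assumed type string)
```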
|
| 650 |
+
|
| 651 |
+
# E SYSTEMS SUPPORT
|
| 652 |
+
|
| 653 |
+
In this section, we perform additional analyses on the memory saving effects of our system features with two benchmarks: (1) differentiable neural architecture search and (2) data reweighting for class imbalance.
|
| 654 |
+
|
| 655 |
+
# E.1 DIFFERENTIABLE NEURAL ARCHITECTURE SEARCH
|
| 656 |
+
|
| 657 |
+
<table><tr><td></td><td>Baseline</td><td>+ mixed-precision</td></tr><tr><td>GPU Memory Usage</td><td>9867MiB</td><td>5759MiB</td></tr></table>

Table 8: GPU memory usage analysis for DARTS.
|
| 658 |
+
|
| 659 |
+
# E.2 DATA REWEIGHTING FOR CLASS IMBALANCE
|
| 660 |
+
|
| 661 |
+
In this experiment, we use ResNet50 (He et al., 2016) instead of ResNet32 to better study the memory reduction from our system features when a larger model is used. Importantly, we also test the data-parallel training feature in addition to the mixed-precision training feature.
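For reference, the memory saving from mixed precision comes from running forward and backward passes in half precision where numerically safe; a standard PyTorch sketch (independent of how BETTY wires this internally) looks as follows:

```python
import torch
import torch.nn.functional as F

scaler = torch.cuda.amp.GradScaler()
for inputs, labels in trainloader:
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():              # fp16 activations where numerically safe
        loss = F.cross_entropy(model(inputs), labels)
    scaler.scale(loss).backward()                # loss scaling avoids fp16 gradient underflow
    scaler.step(optimizer)
    scaler.update()
```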
|
| 662 |
+
|
| 663 |
+
|
| 664 |
+
|
| 665 |
+
<table><tr><td></td><td>Baseline</td><td>+ mixed-precision</td><td>+ data-parallel (2 GPUs)</td></tr><tr><td>GPU Memory Usage</td><td>6817MiB</td><td>4397MiB</td><td>3185/3077MiB (GPU0/1)</td></tr></table>
|
| 666 |
+
|
| 667 |
+
Table 9: GPU memory usage analysis for MWN with ResNet-50.
|
| 668 |
+
|
| 669 |
+
As shown above, memory usage decreases further as more system features are enabled.
|
| 670 |
+
|
| 671 |
+
# F SUPPORTED FEATURES
|
| 672 |
+
|
| 673 |
+
Here, we summarize the supported features within BETTY.
|
| 674 |
+
|
| 675 |
+
<table><tr><td>Category</td><td>Features</td></tr><tr><td rowspan="4">Best-response Jacobian algorithms</td><td>·ITD-RMAD</td></tr><tr><td>·AID-FD</td></tr><tr><td>·AID-NMN</td></tr><tr><td>·AID-CG</td></tr><tr><td rowspan="3">Systems</td><td>·Mixed-precision</td></tr><tr><td>·Data-parallel</td></tr><tr><td>·Gradient accumulation</td></tr><tr><td rowspan="3">Logging</td><td>·Default Python logging</td></tr><tr><td>·TensorBoard</td></tr><tr><td>·Weights & Biases</td></tr><tr><td rowspan="2">Miscellaneous</td><td>·Gradient clipping</td></tr><tr><td>·Early stopping</td></tr></table>
|
| 676 |
+
|
| 677 |
+
Table 10: Supported features in BETTY
|
| 678 |
+
|
| 679 |
+
# G DATAFLOW GRAPHS FOR EXPERIMENTS
|
| 680 |
+
|
| 681 |
+

|
| 682 |
+
Figure 5: Dataflow graphs for all our experiments: 5.1 Data Reweighting for Class Imbalance, 5.2 Correcting and Reweighting Corrupted Labels, 5.3 Domain Adaptation for Pretraining & Finetuning, and A.1 Differentiable Neural Architecture Search.
|
2023/Betty_ An Automatic Differentiation Library for Multilevel Optimization/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:b82d0614eb37d682f6f4086d347b1fa8e7b2bbbe044403cba065ebfcab65cd9d
|
| 3 |
+
size 535455
|
2023/Betty_ An Automatic Differentiation Library for Multilevel Optimization/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2023/Clean-image Backdoor_ Attacking Multi-label Models with Poisoned Labels Only/03aaef97-f1ed-4352-8545-c82384c714ab_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|