Add Batch bf1343f2-16be-4ddc-b114-d2c895ee2224 data
This view is limited to 50 files because it contains too many changes.
- .gitattributes +64 -0
- 2023/A Multi-Grained Self-Interpretable Symbolic-Neural Model For Single_Multi-Labeled Text Classification/ed1a3322-eae7-42dc-a0b1-e5152d8fbf81_content_list.json +0 -0
- 2023/A Multi-Grained Self-Interpretable Symbolic-Neural Model For Single_Multi-Labeled Text Classification/ed1a3322-eae7-42dc-a0b1-e5152d8fbf81_model.json +0 -0
- 2023/A Multi-Grained Self-Interpretable Symbolic-Neural Model For Single_Multi-Labeled Text Classification/ed1a3322-eae7-42dc-a0b1-e5152d8fbf81_origin.pdf +3 -0
- 2023/A Multi-Grained Self-Interpretable Symbolic-Neural Model For Single_Multi-Labeled Text Classification/full.md +472 -0
- 2023/A Multi-Grained Self-Interpretable Symbolic-Neural Model For Single_Multi-Labeled Text Classification/images.zip +3 -0
- 2023/A Multi-Grained Self-Interpretable Symbolic-Neural Model For Single_Multi-Labeled Text Classification/layout.json +0 -0
- 2023/A Neural Mean Embedding Approach for Back-door and Front-door Adjustment/9566ef4e-e5d9-46fe-acb7-0fc0ac7c0dff_content_list.json +0 -0
- 2023/A Neural Mean Embedding Approach for Back-door and Front-door Adjustment/9566ef4e-e5d9-46fe-acb7-0fc0ac7c0dff_model.json +0 -0
- 2023/A Neural Mean Embedding Approach for Back-door and Front-door Adjustment/9566ef4e-e5d9-46fe-acb7-0fc0ac7c0dff_origin.pdf +3 -0
- 2023/A Neural Mean Embedding Approach for Back-door and Front-door Adjustment/full.md +1066 -0
- 2023/A Neural Mean Embedding Approach for Back-door and Front-door Adjustment/images.zip +3 -0
- 2023/A Neural Mean Embedding Approach for Back-door and Front-door Adjustment/layout.json +0 -0
- 2023/A Non-Asymptotic Analysis of Oversmoothing in Graph Neural Networks/81341b4f-f47d-44a6-9c0c-76b5656a7234_content_list.json +0 -0
- 2023/A Non-Asymptotic Analysis of Oversmoothing in Graph Neural Networks/81341b4f-f47d-44a6-9c0c-76b5656a7234_model.json +0 -0
- 2023/A Non-Asymptotic Analysis of Oversmoothing in Graph Neural Networks/81341b4f-f47d-44a6-9c0c-76b5656a7234_origin.pdf +3 -0
- 2023/A Non-Asymptotic Analysis of Oversmoothing in Graph Neural Networks/full.md +949 -0
- 2023/A Non-Asymptotic Analysis of Oversmoothing in Graph Neural Networks/images.zip +3 -0
- 2023/A Non-Asymptotic Analysis of Oversmoothing in Graph Neural Networks/layout.json +0 -0
- 2023/A Non-monotonic Self-terminating Language Model/88334e4e-c7af-4ca1-ab25-12138e63533d_content_list.json +0 -0
- 2023/A Non-monotonic Self-terminating Language Model/88334e4e-c7af-4ca1-ab25-12138e63533d_model.json +0 -0
- 2023/A Non-monotonic Self-terminating Language Model/88334e4e-c7af-4ca1-ab25-12138e63533d_origin.pdf +3 -0
- 2023/A Non-monotonic Self-terminating Language Model/full.md +510 -0
- 2023/A Non-monotonic Self-terminating Language Model/images.zip +3 -0
- 2023/A Non-monotonic Self-terminating Language Model/layout.json +0 -0
- 2023/A Self-Attention Ansatz for Ab-initio Quantum Chemistry/1495f6b2-6e9f-4296-a806-8e145de397eb_content_list.json +0 -0
- 2023/A Self-Attention Ansatz for Ab-initio Quantum Chemistry/1495f6b2-6e9f-4296-a806-8e145de397eb_model.json +0 -0
- 2023/A Self-Attention Ansatz for Ab-initio Quantum Chemistry/1495f6b2-6e9f-4296-a806-8e145de397eb_origin.pdf +3 -0
- 2023/A Self-Attention Ansatz for Ab-initio Quantum Chemistry/full.md +430 -0
- 2023/A Self-Attention Ansatz for Ab-initio Quantum Chemistry/images.zip +3 -0
- 2023/A Self-Attention Ansatz for Ab-initio Quantum Chemistry/layout.json +0 -0
- 2023/A Simple Approach for Visual Room Rearrangement_ 3D Mapping and Semantic Search/55bfe47f-9477-48ca-8278-43a53e11f20e_content_list.json +0 -0
- 2023/A Simple Approach for Visual Room Rearrangement_ 3D Mapping and Semantic Search/55bfe47f-9477-48ca-8278-43a53e11f20e_model.json +0 -0
- 2023/A Simple Approach for Visual Room Rearrangement_ 3D Mapping and Semantic Search/55bfe47f-9477-48ca-8278-43a53e11f20e_origin.pdf +3 -0
- 2023/A Simple Approach for Visual Room Rearrangement_ 3D Mapping and Semantic Search/full.md +341 -0
- 2023/A Simple Approach for Visual Room Rearrangement_ 3D Mapping and Semantic Search/images.zip +3 -0
- 2023/A Simple Approach for Visual Room Rearrangement_ 3D Mapping and Semantic Search/layout.json +0 -0
- 2023/A Simple Yet Powerful Deep Active Learning With Snapshots Ensembles/dfb8bf62-6253-4020-aab4-1a7df51580ae_content_list.json +0 -0
- 2023/A Simple Yet Powerful Deep Active Learning With Snapshots Ensembles/dfb8bf62-6253-4020-aab4-1a7df51580ae_model.json +0 -0
- 2023/A Simple Yet Powerful Deep Active Learning With Snapshots Ensembles/dfb8bf62-6253-4020-aab4-1a7df51580ae_origin.pdf +3 -0
- 2023/A Simple Yet Powerful Deep Active Learning With Snapshots Ensembles/full.md +365 -0
- 2023/A Simple Yet Powerful Deep Active Learning With Snapshots Ensembles/images.zip +3 -0
- 2023/A Simple Yet Powerful Deep Active Learning With Snapshots Ensembles/layout.json +0 -0
- 2023/A Stable and Scalable Method for Solving Initial Value PDEs with Neural Networks/c8ffe538-d5da-49af-b4cb-8cd3b47e4ebe_content_list.json +2099 -0
- 2023/A Stable and Scalable Method for Solving Initial Value PDEs with Neural Networks/c8ffe538-d5da-49af-b4cb-8cd3b47e4ebe_model.json +0 -0
- 2023/A Stable and Scalable Method for Solving Initial Value PDEs with Neural Networks/c8ffe538-d5da-49af-b4cb-8cd3b47e4ebe_origin.pdf +3 -0
- 2023/A Stable and Scalable Method for Solving Initial Value PDEs with Neural Networks/full.md +380 -0
- 2023/A Stable and Scalable Method for Solving Initial Value PDEs with Neural Networks/images.zip +3 -0
- 2023/A Stable and Scalable Method for Solving Initial Value PDEs with Neural Networks/layout.json +0 -0
- 2023/A Statistical Framework for Personalized Federated Learning and Estimation_ Theory, Algorithms, and Privacy/f738034d-c124-415e-a0fe-caa30effdfa7_content_list.json +0 -0
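Each paper directory in the listing above bundles the same artifact set: a `full.md`, a `layout.json`, an `images.zip`, and UUID-prefixed `_content_list.json`, `_model.json`, and `_origin.pdf` files. A minimal sketch for enumerating papers that follow this layout (the `root` argument and the `list_papers` helper are hypothetical, not part of the dataset tooling):

```python
from pathlib import Path


def list_papers(root: str, year: str = "2023"):
    """Yield paper directory names under <root>/<year> that contain a full.md.

    `root` is assumed to be a local checkout of this dataset repository.
    """
    base = Path(root) / year
    if not base.is_dir():
        return
    for paper_dir in sorted(base.iterdir()):
        # A directory counts as a paper if its parsed markdown is present.
        if (paper_dir / "full.md").is_file():
            yield paper_dir.name
```

Missing years or empty directories simply yield nothing, so the helper is safe to run against a partial checkout.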
.gitattributes
CHANGED
@@ -6392,3 +6392,67 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 2023/WikiWhy_[[:space:]]Answering[[:space:]]and[[:space:]]Explaining[[:space:]]Cause-and-Effect[[:space:]]Questions/00a9ac2c-3e1f-428f-a34f-c88d229e2f98_origin.pdf filter=lfs diff=lfs merge=lfs -text
 2023/Win_[[:space:]]Weight-Decay-Integrated[[:space:]]Nesterov[[:space:]]Acceleration[[:space:]]for[[:space:]]Adaptive[[:space:]]Gradient[[:space:]]Algorithms/522f8823-4bd1-4ba3-8db5-7699f023cbbd_origin.pdf filter=lfs diff=lfs merge=lfs -text
 2023/What[[:space:]]learning[[:space:]]algorithm[[:space:]]is[[:space:]]in-context[[:space:]]learning_[[:space:]]Investigations[[:space:]]with[[:space:]]linear[[:space:]]models/6c5cb82c-4892-4f3e-858f-8a83309456e6_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/A[[:space:]]Multi-Grained[[:space:]]Self-Interpretable[[:space:]]Symbolic-Neural[[:space:]]Model[[:space:]]For[[:space:]]Single_Multi-Labeled[[:space:]]Text[[:space:]]Classification/ed1a3322-eae7-42dc-a0b1-e5152d8fbf81_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/A[[:space:]]Neural[[:space:]]Mean[[:space:]]Embedding[[:space:]]Approach[[:space:]]for[[:space:]]Back-door[[:space:]]and[[:space:]]Front-door[[:space:]]Adjustment/9566ef4e-e5d9-46fe-acb7-0fc0ac7c0dff_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/A[[:space:]]Non-Asymptotic[[:space:]]Analysis[[:space:]]of[[:space:]]Oversmoothing[[:space:]]in[[:space:]]Graph[[:space:]]Neural[[:space:]]Networks/81341b4f-f47d-44a6-9c0c-76b5656a7234_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/A[[:space:]]Non-monotonic[[:space:]]Self-terminating[[:space:]]Language[[:space:]]Model/88334e4e-c7af-4ca1-ab25-12138e63533d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/A[[:space:]]Self-Attention[[:space:]]Ansatz[[:space:]]for[[:space:]]Ab-initio[[:space:]]Quantum[[:space:]]Chemistry/1495f6b2-6e9f-4296-a806-8e145de397eb_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/A[[:space:]]Simple[[:space:]]Approach[[:space:]]for[[:space:]]Visual[[:space:]]Room[[:space:]]Rearrangement_[[:space:]]3D[[:space:]]Mapping[[:space:]]and[[:space:]]Semantic[[:space:]]Search/55bfe47f-9477-48ca-8278-43a53e11f20e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/A[[:space:]]Simple[[:space:]]Yet[[:space:]]Powerful[[:space:]]Deep[[:space:]]Active[[:space:]]Learning[[:space:]]With[[:space:]]Snapshots[[:space:]]Ensembles/dfb8bf62-6253-4020-aab4-1a7df51580ae_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/A[[:space:]]Stable[[:space:]]and[[:space:]]Scalable[[:space:]]Method[[:space:]]for[[:space:]]Solving[[:space:]]Initial[[:space:]]Value[[:space:]]PDEs[[:space:]]with[[:space:]]Neural[[:space:]]Networks/c8ffe538-d5da-49af-b4cb-8cd3b47e4ebe_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/A[[:space:]]Statistical[[:space:]]Framework[[:space:]]for[[:space:]]Personalized[[:space:]]Federated[[:space:]]Learning[[:space:]]and[[:space:]]Estimation_[[:space:]]Theory,[[:space:]]Algorithms,[[:space:]]and[[:space:]]Privacy/f738034d-c124-415e-a0fe-caa30effdfa7_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/A[[:space:]]Theoretical[[:space:]]Framework[[:space:]]for[[:space:]]Inference[[:space:]]and[[:space:]]Learning[[:space:]]in[[:space:]]Predictive[[:space:]]Coding[[:space:]]Networks/981f671a-fd3e-4ad2-bd5c-69be8690836d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/A[[:space:]]Theoretical[[:space:]]Understanding[[:space:]]of[[:space:]]Shallow[[:space:]]Vision[[:space:]]Transformers_[[:space:]]Learning,[[:space:]]Generalization,[[:space:]]and[[:space:]]Sample[[:space:]]Complexity/17fb682b-7886-4eb8-8a4a-207b71dd8852_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/A[[:space:]]Theory[[:space:]]of[[:space:]]Dynamic[[:space:]]Benchmarks/ad61e58c-4e16-4310-878e-b28424a3e062_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/A[[:space:]]Time[[:space:]]Series[[:space:]]is[[:space:]]Worth[[:space:]]64[[:space:]]Words_[[:space:]]Long-term[[:space:]]Forecasting[[:space:]]with[[:space:]]Transformers/bbf73207-5595-4b60-8925-1a16a035e883_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/A[[:space:]]Unified[[:space:]]Approach[[:space:]]to[[:space:]]Reinforcement[[:space:]]Learning,[[:space:]]Quantal[[:space:]]Response[[:space:]]Equilibria,[[:space:]]and[[:space:]]Two-Player[[:space:]]Zero-Sum[[:space:]]Games/37212fe9-354a-41c5-8547-fdab05e2271d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/A[[:space:]]Unified[[:space:]]Framework[[:space:]]for[[:space:]]Soft[[:space:]]Threshold[[:space:]]Pruning/bc50017d-95f3-40e6-9b60-741942c09eb9_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/A[[:space:]]VAE[[:space:]]for[[:space:]]Transformers[[:space:]]with[[:space:]]Nonparametric[[:space:]]Variational[[:space:]]Information[[:space:]]Bottleneck/242b5dc9-de3d-4f35-9d6d-bebfbcfa993c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/A[[:space:]]View[[:space:]]From[[:space:]]Somewhere_[[:space:]]Human-Centric[[:space:]]Face[[:space:]]Representations/340c47dc-b26f-4d4d-9492-6883a91ecfd3_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/A[[:space:]]critical[[:space:]]look[[:space:]]at[[:space:]]the[[:space:]]evaluation[[:space:]]of[[:space:]]GNNs[[:space:]]under[[:space:]]heterophily_[[:space:]]Are[[:space:]]we[[:space:]]really[[:space:]]making[[:space:]]progress_/3e8811ba-7259-4c07-bd30-6f066f89108e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/A[[:space:]]law[[:space:]]of[[:space:]]adversarial[[:space:]]risk,[[:space:]]interpolation,[[:space:]]and[[:space:]]label[[:space:]]noise/76203bb3-e61d-493a-80f1-f057f08cae4f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/A[[:space:]]new[[:space:]]characterization[[:space:]]of[[:space:]]the[[:space:]]edge[[:space:]]of[[:space:]]stability[[:space:]]based[[:space:]]on[[:space:]]a[[:space:]]sharpness[[:space:]]measure[[:space:]]aware[[:space:]]of[[:space:]]batch[[:space:]]gradient[[:space:]]distribution/23da28d0-91a1-4328-b3bf-624536af0dc9_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/A[[:space:]]theoretical[[:space:]]study[[:space:]]of[[:space:]]inductive[[:space:]]biases[[:space:]]in[[:space:]]contrastive[[:space:]]learning/8b830ac9-4384-4840-b05a-75f3bed0a5af_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/A[[:space:]]view[[:space:]]of[[:space:]]mini-batch[[:space:]]SGD[[:space:]]via[[:space:]]generating[[:space:]]functions_[[:space:]]conditions[[:space:]]of[[:space:]]convergence,[[:space:]]phase[[:space:]]transitions,[[:space:]]benefit[[:space:]]from[[:space:]]negative[[:space:]]momenta./32693cf5-2124-4309-9c5a-ab02b42e7e31_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/AE-FLOW_[[:space:]]Autoencoders[[:space:]]with[[:space:]]Normalizing[[:space:]]Flows[[:space:]]for[[:space:]]Medical[[:space:]]Images[[:space:]]Anomaly[[:space:]]Detection/192675d3-948c-4b9a-a905-a9a9881f8b44_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/AGRO_[[:space:]]Adversarial[[:space:]]discovery[[:space:]]of[[:space:]]error-prone[[:space:]]Groups[[:space:]]for[[:space:]]Robust[[:space:]]Optimization/9acf0775-71f0-464e-a29e-e6cd48d6a116_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/AIM_[[:space:]]Adapting[[:space:]]Image[[:space:]]Models[[:space:]]for[[:space:]]Efficient[[:space:]]Video[[:space:]]Action[[:space:]]Recognition/17201184-9c08-4972-accc-8e979af6baed_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Accelerated[[:space:]]Single-Call[[:space:]]Methods[[:space:]]for[[:space:]]Constrained[[:space:]]Min-Max[[:space:]]Optimization/5525eba0-a0d6-4274-8898-cde394885d8a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Accelerating[[:space:]]Guided[[:space:]]Diffusion[[:space:]]Sampling[[:space:]]with[[:space:]]Splitting[[:space:]]Numerical[[:space:]]Methods/8b2761bf-1959-4c5c-997a-497b6f8b1ecb_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Accelerating[[:space:]]Hamiltonian[[:space:]]Monte[[:space:]]Carlo[[:space:]]via[[:space:]]Chebyshev[[:space:]]Integration[[:space:]]Time/74469fb9-cfd5-4e7f-8c5b-74a1af72385b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Accurate[[:space:]]Bayesian[[:space:]]Meta-Learning[[:space:]]by[[:space:]]Accurate[[:space:]]Task[[:space:]]Posterior[[:space:]]Inference/bc44ccfe-a7ec-4f63-9ab2-c5d75c07ac9d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Accurate[[:space:]]Neural[[:space:]]Training[[:space:]]with[[:space:]]4-bit[[:space:]]Matrix[[:space:]]Multiplications[[:space:]]at[[:space:]]Standard[[:space:]]Formats/177823cd-63fa-4ca5-884d-21d34829807f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Achieve[[:space:]]the[[:space:]]Minimum[[:space:]]Width[[:space:]]of[[:space:]]Neural[[:space:]]Networks[[:space:]]for[[:space:]]Universal[[:space:]]Approximation/5fae6e40-789a-4c59-8190-108252522d3c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Achieving[[:space:]]Near-Optimal[[:space:]]Individual[[:space:]]Regret[[:space:]]&[[:space:]]Low[[:space:]]Communications[[:space:]]in[[:space:]]Multi-Agent[[:space:]]Bandits/874fb7b1-e176-4d85-8046-a22c9733e770_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Achieving[[:space:]]Sub-linear[[:space:]]Regret[[:space:]]in[[:space:]]Infinite[[:space:]]Horizon[[:space:]]Average[[:space:]]Reward[[:space:]]Constrained[[:space:]]MDP[[:space:]]with[[:space:]]Linear[[:space:]]Function[[:space:]]Approximation/16555d2a-4150-4da8-9d23-69d64ca664ee_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Actionable[[:space:]]Neural[[:space:]]Representations_[[:space:]]Grid[[:space:]]Cells[[:space:]]from[[:space:]]Minimal[[:space:]]Constraints/ff8fbc9e-1ed9-43c8-89f5-78698ce08092_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Active[[:space:]]Image[[:space:]]Indexing/a3e537ec-8264-4ffd-9f7e-650cb3e303d3_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Active[[:space:]]Learning[[:space:]]for[[:space:]]Object[[:space:]]Detection[[:space:]]with[[:space:]]Evidential[[:space:]]Deep[[:space:]]Learning[[:space:]]and[[:space:]]Hierarchical[[:space:]]Uncertainty[[:space:]]Aggregation/caa43ec3-868d-448f-9117-c8f7aca9b12d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Adaptive[[:space:]]Budget[[:space:]]Allocation[[:space:]]for[[:space:]]Parameter-Efficient[[:space:]]Fine-Tuning/3beb3f01-02b7-42b1-8a4a-ba971929d08f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Adaptive[[:space:]]Optimization[[:space:]]in[[:space:]]the[[:space:]]$_infty$-Width[[:space:]]Limit/690d47cb-8e59-48f5-9cc1-92bf3af78ce0_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Adaptive[[:space:]]Robust[[:space:]]Evidential[[:space:]]Optimization[[:space:]]For[[:space:]]Open[[:space:]]Set[[:space:]]Detection[[:space:]]from[[:space:]]Imbalanced[[:space:]]Data/95d7401e-cea4-4802-a157-7397edce1d2c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Advancing[[:space:]]Radiograph[[:space:]]Representation[[:space:]]Learning[[:space:]]with[[:space:]]Masked[[:space:]]Record[[:space:]]Modeling/1884b975-ac9d-4c57-b5d0-629c1b322461_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Adversarial[[:space:]]Imitation[[:space:]]Learning[[:space:]]with[[:space:]]Preferences/06c22a01-d6be-4787-a584-09f67a2b318f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Agent-based[[:space:]]Graph[[:space:]]Neural[[:space:]]Networks/4763008e-6dd2-4ada-807b-f8eb09db25b5_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Agnostic[[:space:]]Learning[[:space:]]of[[:space:]]General[[:space:]]ReLU[[:space:]]Activation[[:space:]]Using[[:space:]]Gradient[[:space:]]Descent/f9f65a7d-5824-4454-b880-5b89b418e4c9_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Almost[[:space:]]Linear[[:space:]]Constant-Factor[[:space:]]Sketching[[:space:]]for[[:space:]]$_ell_1$[[:space:]]and[[:space:]]Logistic[[:space:]]Regression/400c8e30-95f8-4b86-b505-dd4c23e7b902_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Alternating[[:space:]]Differentiation[[:space:]]for[[:space:]]Optimization[[:space:]]Layers/026a3897-b95f-4f4c-9c3d-c677106618b0_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Amortised[[:space:]]Invariance[[:space:]]Learning[[:space:]]for[[:space:]]Contrastive[[:space:]]Self-Supervision/1804c91c-7653-4432-9a28-53c1413b4b87_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/An[[:space:]]Adaptive[[:space:]]Policy[[:space:]]to[[:space:]]Employ[[:space:]]Sharpness-Aware[[:space:]]Minimization/d635d05c-42ef-4a67-91da-61168fafdcbc_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/An[[:space:]]Additive[[:space:]]Instance-Wise[[:space:]]Approach[[:space:]]to[[:space:]]Multi-class[[:space:]]Model[[:space:]]Interpretation/e41c2f97-fbab-4017-bd41-ad3274066c13_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/An[[:space:]]Equal-Size[[:space:]]Hard[[:space:]]EM[[:space:]]Algorithm[[:space:]]for[[:space:]]Diverse[[:space:]]Dialogue[[:space:]]Generation/7f231e5a-0b9c-4ef7-8567-99a7306cbb62_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/An[[:space:]]Exact[[:space:]]Poly-Time[[:space:]]Membership-Queries[[:space:]]Algorithm[[:space:]]for[[:space:]]Extracting[[:space:]]a[[:space:]]Three-Layer[[:space:]]ReLU[[:space:]]Network/2875f1d2-57f6-4839-8d9e-c90f3b35cb84_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/An[[:space:]]Extensible[[:space:]]Multi-modal[[:space:]]Multi-task[[:space:]]Object[[:space:]]Dataset[[:space:]]with[[:space:]]Materials/50281a0b-5a2c-4e06-a6d2-74c5fb829dd3_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/An[[:space:]]efficient[[:space:]]encoder-decoder[[:space:]]architecture[[:space:]]with[[:space:]]top-down[[:space:]]attention[[:space:]]for[[:space:]]speech[[:space:]]separation/8ab941a0-96da-4136-9c04-03073cc7f1c5_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Analog[[:space:]]Bits_[[:space:]]Generating[[:space:]]Discrete[[:space:]]Data[[:space:]]using[[:space:]]Diffusion[[:space:]]Models[[:space:]]with[[:space:]]Self-Conditioning/a171817a-575a-4438-8445-da8f70706e1c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Analogy-Forming[[:space:]]Transformers[[:space:]]for[[:space:]]Few-Shot[[:space:]]3D[[:space:]]Parsing/b6796c75-bf8b-435a-81b7-1aed0f774324_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Analyzing[[:space:]]Tree[[:space:]]Architectures[[:space:]]in[[:space:]]Ensembles[[:space:]]via[[:space:]]Neural[[:space:]]Tangent[[:space:]]Kernel/8ad2c5f5-5818-4f83-ae6d-adc0659ac723_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Anamnesic[[:space:]]Neural[[:space:]]Differential[[:space:]]Equations[[:space:]]with[[:space:]]Orthogonal[[:space:]]Polynomial[[:space:]]Projections/af12f33c-96f1-4c07-9990-974001206cd5_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Anisotropic[[:space:]]Message[[:space:]]Passing_[[:space:]]Graph[[:space:]]Neural[[:space:]]Networks[[:space:]]with[[:space:]]Directional[[:space:]]and[[:space:]]Long-Range[[:space:]]Interactions/68171981-c139-45e8-95f5-ddae848dc106_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Anti-Symmetric[[:space:]]DGN_[[:space:]]a[[:space:]]stable[[:space:]]architecture[[:space:]]for[[:space:]]Deep[[:space:]]Graph[[:space:]]Networks/87aef09e-9d86-4c53-b5d5-31f779f34d8b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Any-scale[[:space:]]Balanced[[:space:]]Samplers[[:space:]]for[[:space:]]Discrete[[:space:]]Space/4354a5df-64bd-4b08-9ad7-f975729b1827_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/AnyDA_[[:space:]]Anytime[[:space:]]Domain[[:space:]]Adaptation/e0a017b5-bc78-4523-ab15-9dff02b0848f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Approximate[[:space:]]Bayesian[[:space:]]Inference[[:space:]]with[[:space:]]Stein[[:space:]]Functional[[:space:]]Variational[[:space:]]Gradient[[:space:]]Descent/e54a1860-24b1-4a60-8f17-9fe467bc07e7_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Approximate[[:space:]]Nearest[[:space:]]Neighbor[[:space:]]Search[[:space:]]through[[:space:]]Modern[[:space:]]Error-Correcting[[:space:]]Codes/c47b207c-8075-4f39-949b-bd1d1efe7eb1_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Approximate[[:space:]]Vanishing[[:space:]]Ideal[[:space:]]Computations[[:space:]]at[[:space:]]Scale/7dce4fde-7cfa-476c-b5f5-6db4277094d7_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Approximation[[:space:]]and[[:space:]]non-parametric[[:space:]]estimation[[:space:]]of[[:space:]]functions[[:space:]]over[[:space:]]high-dimensional[[:space:]]spheres[[:space:]]via[[:space:]]deep[[:space:]]ReLU[[:space:]]networks/ac66dfc0-e069-4ce6-bef1-c06cb7554bc1_origin.pdf filter=lfs diff=lfs merge=lfs -text
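The `[[:space:]]` tokens in the patterns above are how `git lfs track` escapes spaces in paths written to `.gitattributes`. A minimal sketch of converting between a filesystem path and such a pattern (a simplification that handles only spaces, not the other special characters the real tool escapes; both helper names are illustrative):

```python
def path_to_attr_pattern(path: str) -> str:
    """Escape spaces in a path the way `git lfs track` does (spaces only)."""
    return path.replace(" ", "[[:space:]]")


def attr_pattern_to_path(pattern: str) -> str:
    """Recover the original path from a space-escaped pattern."""
    return pattern.replace("[[:space:]]", " ")


print(attr_pattern_to_path("2023/Active[[:space:]]Image[[:space:]]Indexing"))
# → 2023/Active Image Indexing
```

Because the escape token never appears in the raw paths of this dataset, the two helpers round-trip cleanly.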
2023/A Multi-Grained Self-Interpretable Symbolic-Neural Model For Single_Multi-Labeled Text Classification/ed1a3322-eae7-42dc-a0b1-e5152d8fbf81_content_list.json
ADDED
The diff for this file is too large to render.
2023/A Multi-Grained Self-Interpretable Symbolic-Neural Model For Single_Multi-Labeled Text Classification/ed1a3322-eae7-42dc-a0b1-e5152d8fbf81_model.json
ADDED
The diff for this file is too large to render.
2023/A Multi-Grained Self-Interpretable Symbolic-Neural Model For Single_Multi-Labeled Text Classification/ed1a3322-eae7-42dc-a0b1-e5152d8fbf81_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d41060fc685b0969db67951fe762039e44fd2d19e56bdcaaf857da7876926925
+size 2258334
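The three added lines form a standard Git LFS pointer file: the spec version, the SHA-256 of the tracked content, and its size in bytes (so the PDF itself is about 2.3 MB and is stored outside the repository). A hedged sketch of reading such a pointer (`parse_lfs_pointer` is an illustrative helper, not part of any LFS tooling; values are kept as strings):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line of an LFS pointer into a dict of strings."""
    fields = {}
    for line in text.strip().splitlines():
        # Each pointer line is "<key> <value>"; partition on the first space.
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:d41060fc685b0969db67951fe762039e44fd2d19e56bdcaaf857da7876926925
size 2258334
"""
print(parse_lfs_pointer(pointer)["size"])  # → 2258334
```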
2023/A Multi-Grained Self-Interpretable Symbolic-Neural Model For Single_Multi-Labeled Text Classification/full.md
ADDED
@@ -0,0 +1,472 @@
| 1 |
+
# A MULTI-GRAINED SELF-INTERPRETABLE SYMBOLIC-NEURAL MODEL FOR SINGLE/MULTI-LABELED TEXT CLASSIFICATION
|
| 2 |
+
|
| 3 |
+
Xiang Hu\*1, Xinyu Kong1, Kewei Tu\*2
|
| 4 |
+
|
| 5 |
+
<sup>1</sup>Ant Group <sup>2</sup>ShanghaiTech University
|
| 6 |
+
|
| 7 |
+
# ABSTRACT
|
| 8 |
+
|
| 9 |
+
Deep neural networks based on layer-stacking architectures have historically suffered from poor inherent interpretability. Meanwhile, symbolic probabilistic models function with clear interpretability, but how to combine them with neural networks to enhance their performance remains to be explored. In this paper, we try to marry these two systems for text classification via a structured language model. We propose a Symbolic-Neural model that can learn to explicitly predict class labels of text spans from a constituency tree without requiring any access to span-level gold labels. As the structured language model learns to predict constituency trees in a self-supervised manner, only raw texts and sentence-level labels are required as training data, which makes it essentially a general constituent-level self-interpretable classification model. Our experiments demonstrate that our approach could achieve good prediction accuracy in downstream tasks. Meanwhile, the predicted span labels are consistent with human rationales to a certain degree.
|
| 10 |
+
|
| 11 |
+
# 1 INTRODUCTION
|
| 12 |
+
|
| 13 |
+
Lack of interpretability is an intrinsic problem in deep neural networks based on layer-stacking for text classification. Many methods have been proposed to provide posthoc explanations for neural networks (Lipton, 2018; Lundberg & Lee, 2017; Sundararajan et al., 2017). However, these methods have multiple drawbacks. First, there is only word-level attribution but no high-level attribution such as those over phrases and clauses. Take sentiment analysis as an example, in addition to the ability to recognize the sentiment of sentences, an ideal interpretable model should be able to identify the sentiment and polarity reversal at the levels of words, phrases, and clauses. Secondly, as argued by Rudin (2019), models should be inherently interpretable rather than explained by a posthoc model.
|
| 14 |
+
|
| 15 |
+
A widely accepted property of natural languages is that "the meaning of a whole is a function of the meanings of the parts and of the way they are syntactically combined" (Partee, 1995). Compared with the sequential outputs of layer-stacked model architectures, syntactic tree structures naturally capture features of various levels because each node in a tree represents a constituent span. Such a characteristic motivates us to think about whether the representations of these internal nodes could be leveraged to design an inherently constituent-level interpretable model. One challenge faced by this idea is that traditional syntactic parsers require supervised training and have degraded performance on out-of-domain data. Fortunately, with the development of structured language models (Tu et al., 2013; Maillard et al., 2017; Choi et al., 2018; Kim et al., 2019), we are now able to learn hierarchical syntactic structures in an unsupervised manner from any raw text.
|
| 16 |
+
|
| 17 |
+
In this paper, we propose a general self-interpretable text classification model that learns to predict span-level labels without span-level supervision, as shown in Figure 1. Specifically, we propose a novel label extraction framework based on a simple inductive bias for inference. During training, we maximize, via dynamic programming with linear complexity, the total probability of all potential trees whose extracted labels are consistent with a gold label set. By using a structured language model as the backbone, we are able to leverage the internal representations of constituent spans as symbolic interfaces, on which we build the transition functions for the dynamic programming algorithm.

Figure 1: Our model can learn to predict span-level labels without access to span-level gold labels during training. In examples (a) and (b), only raw texts and the sentence-level gold labels {request_address, navigate} and {negative} are given.
The main contribution of this work is that we propose a Symbolic-Neural model, a simple but general model architecture for text classification, which has three advantages:
1. Our model has both competitive prediction accuracy and self-interpretability, whose rationales are explicitly reflected in the label probabilities of each constituent.
2. Our model can learn to predict span-level labels without requiring any access to span-level gold labels.
3. It handles both single-label and multi-label text classification tasks in a unified way, instead of reducing the latter to multiple binary classification problems (Read et al., 2011) as in conventional methods.
To the best of our knowledge, we are the first to propose a general constituent-level self-interpretable classification model with competitive downstream task performance. Our experiments show that the span-level attribution is consistent with human rationales to a certain extent. We argue that such characteristics could be valuable in various application scenarios such as data mining, NLU systems, and prediction explanation, some of which we discuss in our experiments.
# 2 PRELIMINARY
# 2.1 ESSENTIAL PROPERTIES OF STRUCTURED LANGUAGE MODELS
Structured language models combine the powerful representations of neural networks with syntactic structures. Though many structured language models have been proposed (Kim et al., 2019; Drozdov et al., 2019; Shen et al., 2021), three prerequisites need to be met before a model can be selected as the backbone of our method. Firstly, it should be able to learn reasonable syntactic structures in an unsupervised manner. Secondly, it should compute an intermediate representation for each constituency node. Thirdly, it should have a pretraining mechanism that improves representation quality. Since Fast-R2D2 (Hu et al., 2022; 2021) satisfies all the above conditions and also has good inference speed, we choose Fast-R2D2 as our backbone.
# 2.2 FAST-R2D2
Overall, Fast-R2D2 is a type of structured language model that takes raw texts as input and outputs corresponding binary parsing trees along with node representations, as shown in Figure 3(a). The representation $e_{i,j}$ of the text span from the $i$-th to the $j$-th word is computed recursively from its child node representations via a shared composition function, i.e., $e_{i,j} = f(e_{i,k},e_{k + 1,j})$, where $k$ is the split point given by the parser and $f(\cdot)$ is an n-layered Transformer encoder. When $i = j$, $e_{i,j}$ is initialized as the embedding of the corresponding input token. Please note that the parser is trained in a self-supervised manner, so no human-annotated parsing trees are required.
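As an illustration of this bottom-up composition, the following sketch computes $e_{i,j}$ recursively over a given binary tree. The `Node` class and the averaging stand-in for the Transformer composition function $f(\cdot)$ are our own simplifications, not Fast-R2D2's actual implementation:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Node:
    i: int                        # start word index (inclusive)
    j: int                        # end word index (inclusive)
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def compose(e_left: List[float], e_right: List[float]) -> List[float]:
    # Stand-in for the n-layered Transformer f(.); averaging keeps the
    # sketch runnable without any deep-learning dependency.
    return [(a + b) / 2 for a, b in zip(e_left, e_right)]

def encode(node: Node, embeddings: List[List[float]]) -> List[float]:
    """Recursively compute e_{i,j} = f(e_{i,k}, e_{k+1,j})."""
    if node.i == node.j:          # leaf: token embedding
        return embeddings[node.i]
    return compose(encode(node.left, embeddings),
                   encode(node.right, embeddings))

# Binary tree over a 3-token sentence: (w0 (w1 w2))
tree = Node(0, 2, Node(0, 0), Node(1, 2, Node(1, 1), Node(2, 2)))
emb = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
root_repr = encode(tree, emb)     # [0.75, 0.5]
```

The key point is that every internal node, not just the root, receives a span representation, which is what the Symbolic-Neural model later exploits.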
# 3 SYMBOLIC-NEURAL MODEL
# 3.1 MODEL
There are two basic components in the Symbolic-Neural model:
1. A Structured LM backbone which is used to parse a sentence to a binary tree with node representations.
2. An MLP which is used to estimate the label distribution from the node representation.
For Structured LMs that follow a bottom-up hierarchical encoding process (such as our default LM Fast-R2D2), the context outside a span is invisible to the span, which may leave low-level short spans unable to predict correct labels due to a lack of information. We therefore introduce an optional module that allows information to flow through parse trees from top to bottom.
The overall idea is to construct a top-down process to fuse information from both inside and outside of spans. For a given span $(i,j)$ , we denote the top-down representation as $e_{i,j}^{\prime}$ . We use the Transformer as the top-down encoder function $f^{\prime}$ . The top-down encoding process starts from the root and functions recursively on the child nodes. For the root node, we have $[\cdot ,e_{1,n}^{\prime}] = f^{\prime}([e_{root},e_{1,n}])$
where $e_{root}$ is the embedding of the special token [ROOT] and $n$ is the sentence length. Once the top-down representation $e_{i,j}^{\prime}$ is ready, we compute its child representations recursively via $[\cdot, e_{i,k}^{\prime}, e_{k+1,j}^{\prime}] = f^{\prime}([e_{i,j}^{\prime}, e_{i,k}, e_{k+1,j}])$, as illustrated in Figure 2.
We denote the parameters of the model as $\Psi$ , the parameters used in the Structured LM as $\Phi$ and the parameters used in the MLP layer and the top-down encoder as $\Theta$ . Thus $\Psi = \{\Phi, \Theta\}$ .
# 3.2 LABEL EXTRACTION FRAMEWORK FOR INFERENCE
During inference, we first use Fast-R2D2 to produce a parsing tree, then predict the label of each node in the parse tree and output a final label set by the yield function introduced below.

Figure 2: [PRT], [LEFT], [RIGHT] are role embeddings for the corresponding inputs.
Inductive bias. Through observing cases in single/multi-label classification tasks, we propose an inductive bias: a constituent in a text corresponds to at most one label. As constituents can be seen as nodes in a binary parsing tree, we can associate the nodes with labels, and a multi-labeled text can be handled by assigning its labels to non-overlapping descendant nodes. Please note that this inductive bias does not apply to special cases in which a minimal semantic constituent of a text is associated with multiple labels, e.g., the movie "Titanic" could be labeled with both 'disaster' and 'love'. However, we argue that such cases are rare, as our inductive bias works well on most single/multi-label tasks, as demonstrated in our experiments.
Label Tree. A label tree is obtained from a parsing tree by associating each node with a label; an example is illustrated in Figure 3(b). During inference, we predict a probability distribution over labels for each node and pick the label with the highest probability. To estimate the label distribution, we have $P_{\Psi}(\cdot | n_{i,j}) = \text{softmax}(\text{MLP}(e_{i,j}))$. Please note that if the top-down encoder is enabled, we replace $e_{i,j}$ with $e_{i,j}^{\prime}$.
Algorithm 1 Definition of the Yield function

1: function YIELD($\hat{t}$)
2: $\mathcal{S} = \{\}$
3: $q \gets [\hat{t}.\mathrm{root}]$
4: ▷ The list of nodes to visit
5: while len($q$) > 0 do
6: $n_{ij} \gets q.\mathrm{pop}(0)$
7: if $n_{ij}.\mathrm{label} == \phi_{NT}$ then
8: if not $n_{ij}.\mathrm{is\_leaf}$ then
9: $q$.append($n_{ij}$.left)
10: $q$.append($n_{ij}$.right)
11: else
12: $\mathcal{S} = \mathcal{S} \cup \{n_{ij}.\mathrm{label}\}$
13: $\mathcal{S} = \mathcal{S} \setminus \{\phi_T\}$
14: return $\mathcal{S}$

Figure 3: (a) A parsing tree. (b) A label tree transferred from the left parsing tree. For the given label tree, $\mathcal{Y}$ returns $\{\mathrm{A},\mathrm{C}\}$. $\mathcal{Y}$ stops traversing at the terminal nodes in light gray, whose ancestor labels are all $\phi_{NT}$, and the nodes in dark gray are not visited.

Yield function. We design a yield function that traverses a label tree in a top-down manner and extracts labels. For brevity, we write $\mathcal{Y}$ for the yield function. We divide the labels into two categories, terminal and non-terminal labels, which indicate whether $\mathcal{Y}$ should stop or continue respectively when it reaches a node. Considering that some nodes may not be associated with any task-defined label, we introduce empty labels, denoted $\phi_T$ and $\phi_{NT}$, for the terminal and non-terminal categories respectively. For simplicity, we do not discuss nesting cases<sup>1</sup> in this paper, so there is only one non-terminal label, $\phi_{NT}$, and all task-defined labels are terminal labels. However, our method naturally extends to nesting cases by allowing non-terminal labels to be associated with task labels. As defined by the pseudo-code in Algorithm 1, $\mathcal{Y}$ traverses a label tree from top to bottom starting at the root; when it sees $\phi_{NT}$, it continues to traverse all
its children; otherwise, when it sees a terminal label, it stops and gathers the task-defined terminal label of the node. Figure 3 illustrates how $\mathcal{Y}$ traverses the label tree and gathers task-defined labels.
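The traversal of Algorithm 1 can be transcribed into runnable form as follows; the `LNode` class and the label constants are our own illustration, not the paper's code:

```python
from dataclasses import dataclass
from typing import Optional

PHI_NT = "phi_NT"   # empty non-terminal label: keep traversing
PHI_T = "phi_T"     # empty terminal label: stop, but gather nothing

@dataclass
class LNode:
    label: str
    left: Optional["LNode"] = None
    right: Optional["LNode"] = None

    @property
    def is_leaf(self) -> bool:
        return self.left is None and self.right is None

def yield_labels(root: LNode) -> set:
    """Traverse a label tree top-down and gather terminal task labels."""
    gathered = set()
    queue = [root]
    while queue:
        node = queue.pop(0)
        if node.label == PHI_NT:
            if not node.is_leaf:
                queue.append(node.left)
                queue.append(node.right)
        else:                       # terminal label: stop at this node
            gathered.add(node.label)
    return gathered - {PHI_T}       # drop the empty terminal label

# A Figure 3(b)-style label tree whose yield is {"A", "C"}
label_tree = LNode(PHI_NT, LNode("A"),
                   LNode(PHI_NT, LNode("C"), LNode(PHI_T)))
```

Note that subtrees below a terminal label are never visited, matching the dark-gray nodes in Figure 3.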
# 3.3 TRAINING OBJECTIVE
During the training stage, though the Structured LM can predict tree structures, the difficulty here is how to associate each node with a single label without span-level gold labels. We define our training objective as follows:
Training objective. Given a sentence $\mathbf{S}$ of length $|\mathbf{S}|$ and its gold label set $\mathcal{T} = \{l_1,\dots,l_m\}$, let $t$ be its best parsing tree given by the unsupervised parser of Fast-R2D2 and $\hat{t}$ a label tree transferred from $t$. $\hat{t}^{[\mathcal{C}]}$ denotes a $\hat{t}$ satisfying condition $\mathcal{C}$. The training objective is to maximize the probability that the given tree transfers to a label tree whose yield is consistent with the ground-truth labels, which can be formalized as minimizing $-\log P_{\Psi}(\hat{t}^{[\mathcal{Y}(\hat{t}) = \mathcal{T}]}|t)$.
Before we get into the specifics, several key aspects are defined as follows:
(1) Denotations: $t_{i,j}$ denotes the subtree spanning from $i$ to $j$ (both indices inclusive), whose root, left subtree, and right subtree are $n_{i,j}$, $t_{i,k}$, and $t_{k+1,j}$ respectively, where $k$ is the split point.
(2) Symbolic Interface: $P_{\Psi}(l|n_{i,j})$ is the probability of a single node $n_{i,j}$ being associated with the specified label $l$ . Thus, the probability of $t$ transferring to a specific label tree $\hat{t}$ is the product of all the probabilities of nodes being associated with the corresponding labels in $\hat{t}$ .

Figure 4: To ensure that the yield result of $\hat{t}_{i,j}$ contains label $l$ , node $n_{i,j}$ needs to be associated with either $\phi_{NT}$ or $l$ , whose probabilities are $P_{\Psi}(\phi_{NT}|n_{i,j})$ and $P_{\Psi}(l|n_{i,j})$ respectively. If associated with $l$ , it satisfies the condition. If associated with $\phi_{NT}$ , at least one of its children's yield results should contain $l$ . Here we use $\backslash l$ to denote that the yield result does not contain label $l$ . In conclusion, $Y_{i,j}^{l}$ could be estimated recursively by Equation 1.
Obviously, it is intractable to exhaustively enumerate all potential $\hat{t}$ to estimate $P_{\Psi}(\hat{t}^{[\mathcal{Y}(\hat{t}) = \mathcal{T}]}|t)$. Our core idea is to leverage the symbolic interfaces to estimate $P_{\Psi}(\hat{t}^{[\mathcal{C}]}|t)$ via dynamic programming. We start with an elementary case: estimating the probability that the yield result of $t_{i,j}$ contains a given label $l$, i.e., $P_{\Psi}(\hat{t}_{i,j}^{[l\in \mathcal{Y}(\hat{t}_{i,j})]}|t_{i,j})$, which we denote as $Y_{i,j}^{l}$ for brevity. As the recursive formulation illustrated in Figure 4 shows, we have:
$$
Y_{i,j}^{l} = \begin{cases} P_{\Psi}(l|n_{i,j}) + P_{\Psi}(\phi_{NT}|n_{i,j}) \cdot \left(1 - (1 - Y_{i,k}^{l})(1 - Y_{k+1,j}^{l})\right) & \text{if } i < j \\ P_{\Psi}(l|n_{i,j}) & \text{if } i = j \end{cases} \tag{1}
$$
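Equation 1 can be implemented as a direct recursion over the parse tree. In this sketch, each node stores its own predicted label distribution in a dict `dist`; the `TNode` class and all names are our own illustration:

```python
class TNode:
    """A parse-tree node holding a label-probability dict `dist`."""
    def __init__(self, dist, left=None, right=None):
        self.dist, self.left, self.right = dist, left, right

    @property
    def is_leaf(self):
        return self.left is None

def contain_prob(node, label):
    """Y_{i,j}^l: probability that the subtree's yield contains `label`."""
    p_l = node.dist.get(label, 0.0)
    if node.is_leaf:
        return p_l
    y_left = contain_prob(node.left, label)
    y_right = contain_prob(node.right, label)
    # Either `label` at this node, or phi_NT here and at least one
    # child subtree yields the label.
    either = 1.0 - (1.0 - y_left) * (1.0 - y_right)
    return p_l + node.dist.get("phi_NT", 0.0) * either

left = TNode({"A": 0.8, "phi_T": 0.2})
right = TNode({"phi_T": 1.0})
root = TNode({"phi_NT": 0.5, "A": 0.1, "phi_T": 0.4}, left, right)
# contain_prob(root, "A") == 0.1 + 0.5 * (1 - 0.2 * 1.0) == 0.5
```

Each node is visited once per label, which is where the linear complexity in the tree size comes from.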
However, for a given label set $\mathcal{M}$, estimating $P_{\Psi}(\hat{t}_{i,j}^{[\mathcal{Y}(\hat{t}_{i,j}) = \mathcal{M}]}|t_{i,j})$ in the same way would inevitably exhaust all potential combinations, as illustrated in Figure 5(a), leading to exponential complexity.<sup>2</sup>
Figure 5: (a) Potential valid yield results of the left and right children for $\mathcal{M} = \{a, b, c\}$. (b) Valid label trees when we include a mutual-exclusiveness constraint.

To tackle the exponential complexity, we decompose the problem of estimating $P_{\Psi}(\hat{t}_{i,j}^{[\mathcal{Y}(\hat{t}_{i,j}) = \mathcal{M}]}|t_{i,j})$ into estimating $Y_{i,j}^{l}$ for each label $l\in \mathcal{M}$. Let $\mathcal{F}$ denote the union of all the task labels and $\{\phi_T,\phi_{NT}\}$, and let $\mathcal{O}$ denote $\mathcal{F}\setminus \mathcal{T}$. By assuming that the states of labels are independent of each other, where the state of a label indicates whether the label is contained in the yield result<sup>3</sup>, we have:
$$
P_{\Psi}(\hat{t}^{[\mathcal{Y}(\hat{t}) = \mathcal{T}]}|t) = P_{\Psi}(\hat{t}^{[\mathcal{T}\subseteq \mathcal{Y}(\hat{t})]}, \hat{t}^{[\mathcal{O}\cap \mathcal{Y}(\hat{t}) = \phi]}|t) \approx P_{\Psi}(\hat{t}^{[\mathcal{T}\subseteq \mathcal{Y}(\hat{t})]}|t) \cdot P_{\Psi}(\hat{t}^{[\mathcal{O}\cap \mathcal{Y}(\hat{t}) = \phi]}|t)
$$
$$
P_{\Psi}(\hat{t}^{[\mathcal{T}\subseteq \mathcal{Y}(\hat{t})]}|t) \approx \prod_{l\in \mathcal{T}} P_{\Psi}(\hat{t}^{[l\in \mathcal{Y}(\hat{t})]}|t) = \prod_{l\in \mathcal{T}} Y_{i,j}^{l}, \quad P_{\Psi}(\hat{t}^{[\mathcal{O}\cap \mathcal{Y}(\hat{t}) = \phi]}|t) = 1 - P_{\Psi}(\hat{t}^{[\mathcal{O}\cap \mathcal{Y}(\hat{t}) \neq \phi]}|t) \tag{2}
$$
We do not approximate $P_{\Psi}(\hat{t}^{[\mathcal{O}\cap \mathcal{Y}(\hat{t})\neq \phi ]}|t)$ as it can be computed directly. The above formulation assumes that multiple non-overlapping spans may be associated with the same label. If instead there is a mutual-exclusiveness constraint that forbids two non-overlapping spans from being associated with the same task label, as shown in Figure 5(b), the recursion becomes:
$$
Y_{i,j}^{l} = \begin{cases} P_{\Psi}(l|n_{i,j}) + P_{\Psi}(\phi_{NT}|n_{i,j}) \cdot \left(Y_{i,k}^{l}(1 - Y_{k+1,j}^{l}) + Y_{k+1,j}^{l}(1 - Y_{i,k}^{l})\right) & \text{if } i < j \\ P_{\Psi}(l|n_{i,j}) & \text{if } i = j \end{cases} \tag{3}
$$
Regarding $P_{\Psi}(\hat{t}^{[\mathcal{O}\cap \mathcal{Y}(\hat{t})\neq \phi ]}|t)$, any $\mathcal{Y}(\hat{t})$ containing some label $l\in \mathcal{O}$ satisfies the condition. We denote this probability as $Y_{i,j}^{\mathcal{O}}$ for short. Similar to Equation 1, we have:
$$
Y_{i,j}^{\mathcal{O}} = \begin{cases} \sum_{l\in \mathcal{O}} P_{\Psi}(l|n_{i,j}) + P_{\Psi}(\phi_{NT}|n_{i,j}) \cdot \left(1 - (1 - Y_{i,k}^{\mathcal{O}})(1 - Y_{k+1,j}^{\mathcal{O}})\right) & \text{if } i < j \\ \sum_{l\in \mathcal{O}} P_{\Psi}(l|n_{i,j}) & \text{if } i = j \end{cases} \tag{4}
$$
Thus $P_{\Psi}(\hat{t}^{[\mathcal{Y}(\hat{t}) = \mathcal{T}]}|t) = \prod_{l\in \mathcal{T}}Y_{1,|\mathbf{S}|}^l\cdot (1 - Y_{1,|\mathbf{S}|}^{\mathcal{O}})$ and the objective function given a parsing tree is:
$$
\mathcal{L}_{cls}^{t}(\Psi) = -\log P_{\Psi}\left(\hat{t}^{[\mathcal{Y}(\hat{t}) = \mathcal{T}]}|t\right) = -\sum_{l\in \mathcal{T}} \log Y_{1,|\mathbf{S}|}^{l} - \log\left(1 - Y_{1,|\mathbf{S}|}^{\mathcal{O}}\right) \tag{5}
$$
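Putting Equations 1, 4 and 5 together, the sentence-level loss can be computed from the node-level label distributions alone. The following sketch uses an illustrative node class (`TNode`) storing per-node distributions; it is our own simplification, not the paper's code:

```python
import math

class TNode:
    def __init__(self, dist, left=None, right=None):
        self.dist, self.left, self.right = dist, left, right

    @property
    def is_leaf(self):
        return self.left is None

def y_label(node, label):
    """Equation 1: prob. that the subtree's yield contains `label`."""
    p_l = node.dist.get(label, 0.0)
    if node.is_leaf:
        return p_l
    yl, yr = y_label(node.left, label), y_label(node.right, label)
    return p_l + node.dist.get("phi_NT", 0.0) * (1 - (1 - yl) * (1 - yr))

def y_other(node, others):
    """Equation 4: prob. that the yield contains any label in `others`."""
    s = sum(node.dist.get(l, 0.0) for l in others)
    if node.is_leaf:
        return s
    yl, yr = y_other(node.left, others), y_other(node.right, others)
    return s + node.dist.get("phi_NT", 0.0) * (1 - (1 - yl) * (1 - yr))

def cls_loss(root, gold, others):
    """Equation 5: -sum_{l in T} log Y^l - log(1 - Y^O)."""
    loss = -sum(math.log(y_label(root, l)) for l in gold)
    return loss - math.log(1.0 - y_other(root, others))

left = TNode({"A": 0.8, "phi_T": 0.2})
right = TNode({"phi_T": 0.9, "B": 0.1})
root = TNode({"phi_NT": 0.9, "A": 0.05, "phi_T": 0.05}, left, right)
loss = cls_loss(root, gold={"A"}, others={"B"})
```

In practice the node distributions come from the MLP over span representations, and the whole computation is differentiable, so the loss can be minimized end-to-end.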
Because prior work (Hu et al., 2022) has verified that models achieve better downstream performance and domain adaptivity when trained along with the self-supervised objective $\mathcal{L}_{self}(\Phi)$, we design the final loss as follows:
$$
\mathcal{L} = \mathcal{L}_{cls}^{t}(\Psi) + \mathcal{L}_{self}(\Phi) \tag{6}
$$
# 4 EXPERIMENTS
# 4.1 DOWNSTREAM TASKS
In this section, we compare our interpretable Symbolic-Neural model with models based on dense sentence representations to verify that our model works as well as conventional models. All systems are trained on raw texts and sentence-level labels only.
Data set. We report the results on the development set of the following datasets: SST-2, CoLA (Wang et al., 2019), ATIS (Hakkani-Tur et al., 2016), SNIPS (Coucke et al., 2018), StanfordLU (Eric et al., 2017). Please note that SST-2, CoLA, and SNIPS are single-label tasks and ATIS, StanfordLU are multi-label tasks. There are three sub-fields in StanfordLU including navigator, scheduler, and weather.
Baselines. To fairly compare our method with other systems, all backbones such as Fast-R2D2 and BERT (Devlin et al., 2019) are pretrained on the same corpus with the same vocabulary and number of epochs. We run each system with 4 different random seeds and report the mean of the best results. Due to GPU resource limits and for energy saving, we pretrain all models on Wiki-103 (Merity et al., 2017), which contains 110 million tokens<sup>4</sup>. To compare our model with systems using only whole-sentence representations, we include BERT and Fast-R2D2 using the root representation in our baselines. To study the reliability of the unsupervised parser, we include systems with a supervised parser (Zhang et al., 2020) that use BERT or a tree encoder as the backbone. For the former, we take the average pooling over the representations of words in span $(i,j)$ as the representation of the span. For the latter, we use the pretrained R2D2 tree encoder as the backbone. To compare with methods dealing with multi-instance learning (MIL) but without structure constraints, we extend the multi-instance learning framework proposed by Angelidis & Lapata (2018) to the multi-instance multi-label learning (MIMLL) scenario. Please find the details about MIL and MIMLL in Appendix A.7. We also conduct ablation studies on systems with or without the top-down encoder and the mutual-exclusiveness constraint. For the systems using root or [CLS] representations on multi-label tasks, outputs are followed by a sigmoid layer and filtered by a threshold tuned on the training set.
Hyperparameters. Our BERT follows the setting in Devlin et al. (2019), using 12-layer Transformers with 768-dimensional embeddings, 3,072-dimensional hidden layer representations, and 12 attention heads. The setting of Fast-R2D2 follows Hu et al. (2022). Specifically, the tree encoder uses 4-layer Transformers with the other hyper-parameters the same as BERT's, and the top-down encoder uses 2-layer ones. The top-down parser uses a 4-layer bidirectional LSTM with 128-dimensional embeddings and 256-dimensional hidden layers. We train all the systems across the seven datasets for 20 epochs with a learning rate of $5 \times 10^{-5}$ for the encoder, $1 \times 10^{-2}$ for the unsupervised parser, and batch size 64 on 8 A100 GPUs.
<table>
<tr><td>Backbone</td><td>Arch.</td><td>#Param.</td><td>SST-2</td><td>CoLA</td><td>SNIPS</td><td>ATIS</td><td>Nav.</td><td>Sche.</td><td>Wea.</td></tr>
<tr><td>Multi-Label%</td><td></td><td></td><td>0</td><td>0</td><td>0</td><td>1.69</td><td>26.54</td><td>24.86</td><td>5.39</td></tr>
<tr><td>BERT (Wiki-103)</td><td>Sent.</td><td>116M</td><td>89.54</td><td>34.99</td><td>98.86</td><td>98.56</td><td>89.25</td><td>94.14</td><td>96.98</td></tr>
<tr><td>parser+BERT</td><td>S.N.</td><td>172M</td><td>89.11</td><td>9.46</td><td>99.00</td><td>93.50</td><td>80.06</td><td>81.77</td><td>95.22</td></tr>
<tr><td>parser+TreeEnc.</td><td>Sent.</td><td>128M</td><td>88.53</td><td>11.77</td><td>99.04</td><td>97.52</td><td>89.50</td><td>93.74</td><td>96.98</td></tr>
<tr><td>parser+TreeEnc.</td><td>S.N.</td><td>128M</td><td>88.19</td><td>21.49</td><td>98.93</td><td>95.77</td><td>88.94</td><td>86.95</td><td>96.50</td></tr>
<tr><td>Fast-R2D2</td><td>MIL</td><td>62M</td><td>89.22</td><td>34.05</td><td>98.66</td><td>-</td><td>-</td><td>-</td><td>-</td></tr>
<tr><td>Fast-R2D2</td><td>MIMLL</td><td>62M</td><td>-</td><td>-</td><td>-</td><td>93.29</td><td>84.21</td><td>89.66</td><td>94.39</td></tr>
<tr><td>Fast-R2D2</td><td>Sent.</td><td>62M</td><td>89.96</td><td>36.31</td><td>99.00</td><td>98.11</td><td>89.25</td><td>93.68</td><td>96.70</td></tr>
<tr><td>Fast-R2D2</td><td>S.N.<sub>fp</sub></td><td>62M</td><td>-</td><td>-</td><td>-</td><td>98.61</td><td>89.38</td><td>91.18</td><td>96.96</td></tr>
<tr><td>Fast-R2D2<sub>topdown</sub></td><td>S.N.<sub>fp</sub></td><td>67M</td><td>-</td><td>-</td><td>-</td><td>98.45</td><td>88.91</td><td>93.46</td><td>97.13</td></tr>
<tr><td>Fast-R2D2</td><td>S.N.</td><td>62M</td><td>89.91</td><td>35.02</td><td>98.86</td><td>98.78</td><td>88.18</td><td>91.94</td><td>97.43</td></tr>
<tr><td>Fast-R2D2<sub>exclusive</sub></td><td>S.N.</td><td>62M</td><td>89.45</td><td>36.68</td><td>99.00</td><td>98.27</td><td>87.86</td><td>89.87</td><td>96.47</td></tr>
<tr><td>Fast-R2D2<sub>topdown</sub></td><td>S.N.</td><td>67M</td><td>89.45</td><td>36.49</td><td>99.14</td><td>98.50</td><td>90.82</td><td>93.47</td><td>97.21</td></tr>
<tr><td>Fast-R2D2<sub>top./excl.</sub></td><td>S.N.</td><td>67M</td><td>89.91</td><td>35.31</td><td>99.14</td><td>98.16</td><td>90.75</td><td>93.80</td><td>96.94</td></tr>
</table>
Table 1: We report mean accuracy for SST-2, Matthews correlation for CoLA, and F1 scores for the rest. We use "S.N." to denote systems based on the Symbolic-Neural architecture and "Sent." to denote those using only whole-sentence representations. We use the subscript $fp$ for models based on full permutation, and the subscripts $topdown$ and $exclusive$ for those with the top-down encoder and the mutual-exclusiveness constraint respectively. Please find the details of $S.N._{fp}$ in Appendix A.2.
Results and discussion. We make several observations from Table 1. Firstly, we find that our models overall achieve competitive prediction accuracy compared with strong baselines including BERT, especially on multi-label tasks. This result validates the rationality of our label-constituent association inductive bias, and the significant gap over MIMLL demonstrates the benefit of modeling hierarchical relationships between spans. Secondly, when using sentence representations, the models with the unsupervised parser achieve results similar to those with the supervised parser on most tasks but significantly outperform the latter on CoLA. A possible reason for the poor performance of the supervised-parser systems on CoLA is that the dataset contains many sentences with grammar errors that are not covered by the training set of the supervised parser, while the unsupervised parser can adapt to such sentences since $\mathcal{L}_{bilm}$ and $\mathcal{L}_{KL}$ are included in the final loss. This reflects the flexibility and adaptability of unsupervised parsers. Thirdly, 'parser+TreeEnc.' in the Symbolic-Neural architecture does not perform as well as 'parser+TreeEnc.' using sentence representations, while the systems using the unsupervised parser show the opposite trend. Considering that the Symbolic-Neural model relies heavily on the representations of inner constituents, we attribute this to the tree encoder having adapted to the trees given by the unsupervised parser during the pretraining stage of Fast-R2D2, which leads to self-consistent intermediate representations. This result also verifies that a structured language model that learns latent tree structures unsupervisedly is mature enough to serve as the backbone of our method.
# 4.2 ANALYSIS OF INTERPRETABILITY
Bastings et al. (2022) propose a method that "poisons" a classification dataset with synthetic shortcuts, trains classifiers on the poisoned data, and then tests if a given interpretability method can pick up on the shortcut.
Setup. Following this work, we define two shortcuts, each of four consecutive tokens, to assess the faithfulness of predicted span labels: #0#1#2#3 and #4#5#6#7 indicate label 1 and label 0 respectively. We select SST-2 and CoLA as the training sets, with an additional $20\%$ synthetic data. We create a synthetic example by (1) randomly sampling an instance from the source data, (2) inserting the consecutive shortcut tokens at random positions, and (3) setting the label as the shortcut prescribes.
Verification steps. The model trained on the synthetic data achieves $100\%$ accuracy on the synthetic test data, while the model trained on the original dataset achieves around $50\%$ on the synthetic test set.
Sorting tokens. Since our model does not produce a heatmap over input tokens, there is no direct way to get the top-K tokens required by the shortcut method. So we propose a simple heuristic tree-based ranking algorithm. Specifically, for a given label $l$, we start from the root denoted as $n$ and compare $P(l|n_{left})$ and $P(l|n_{right})$, where $n_{left}$ and $n_{right}$ are its left and right children. If $P(l|n_{left}) > P(l|n_{right})$, all descendants of the left child are ordered before the descendants of the right child, and vice versa. By applying this rule recursively, we obtain a ranking of all tokens. We additionally report the precision of shortcut span labels in the predicted label trees. A shortcut span label is correct only if the consecutive shortcut tokens are covered by the same span and the predicted label is consistent with the shortcut label.
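The heuristic ranking can be sketched as a recursion that always descends into the higher-probability child first; the `RNode` class and the per-node probabilities stored on it are our own illustration:

```python
class RNode:
    """A label-tree node storing P(l|n) for one fixed label of interest."""
    def __init__(self, prob, token=None, left=None, right=None):
        self.prob, self.token = prob, token
        self.left, self.right = left, right

def rank_tokens(node):
    """Order all leaf tokens by recursively visiting the child with the
    higher label probability first."""
    if node.left is None:                     # leaf: a single token
        return [node.token]
    if node.left.prob > node.right.prob:
        first, second = node.left, node.right
    else:
        first, second = node.right, node.left
    return rank_tokens(first) + rank_tokens(second)

tree = RNode(0.6,
             left=RNode(0.1, token="a"),
             right=RNode(0.8,
                         left=RNode(0.9, token="b"),
                         right=RNode(0.3, token="c")))
# rank_tokens(tree) == ["b", "c", "a"]
```

Taking the first K entries of the returned list then gives the top-K tokens needed by the shortcut evaluation.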
<table>
<tr><td rowspan="2">Models</td><td rowspan="2">epochs</td><td colspan="2">top4 prec.</td><td colspan="2">span label prec.</td></tr>
<tr><td>SST2</td><td>CoLA</td><td>SST2</td><td>CoLA</td></tr>
<tr><td>Symbolic-Neural</td><td>1</td><td>93.31%</td><td>99.11%</td><td>100%</td><td>100%</td></tr>
<tr><td>Symbolic-Neural</td><td>3</td><td>95.10%</td><td>99.30%</td><td>100%</td><td>100%</td></tr>
<tr><td>Symbolic-Neural</td><td>5</td><td>93.46%</td><td>98.95%</td><td>100%</td><td>100%</td></tr>
<tr><td>Symbolic-Neural</td><td>7</td><td>93.23%</td><td>98.25%</td><td>100%</td><td>100%</td></tr>
<tr><td>Symbolic-Neural</td><td>9</td><td>87.44%</td><td>98.83%</td><td>100%</td><td>100%</td></tr>
<tr><td>Symbolic-Neural<sub>topdown</sub></td><td>1</td><td>97.31%</td><td>99.23%</td><td>100%</td><td>100%</td></tr>
<tr><td>Symbolic-Neural<sub>topdown</sub></td><td>3</td><td>98.76%</td><td>99.95%</td><td>100%</td><td>100%</td></tr>
<tr><td>Symbolic-Neural<sub>topdown</sub></td><td>5</td><td>77.06%</td><td>99.64%</td><td>100%</td><td>100%</td></tr>
<tr><td>Symbolic-Neural<sub>topdown</sub></td><td>7</td><td>93.03%</td><td>100.00%</td><td>100%</td><td>100%</td></tr>
<tr><td>Symbolic-Neural<sub>topdown</sub></td><td>9</td><td>80.76%</td><td>100.00%</td><td>100%</td><td>100%</td></tr>
<tr><td>IGbert-{mask|200}</td><td>1</td><td>72.94%</td><td>99.68%</td><td>-</td><td>-</td></tr>
<tr><td>IGbert-{mask|200}</td><td>3</td><td>67.15%</td><td>96.91%</td><td>-</td><td>-</td></tr>
<tr><td>IGbert-{mask|200}</td><td>5</td><td>69.21%</td><td>90.01%</td><td>-</td><td>-</td></tr>
<tr><td>IGbert-{mask|200}</td><td>7</td><td>57.03%</td><td>86.36%</td><td>-</td><td>-</td></tr>
<tr><td>IGbert-{mask|200}</td><td>9</td><td>66.20%</td><td>86.29%</td><td>-</td><td>-</td></tr>
</table>

Figure 6: Top-4 precision and span label precision.

Results. From Table 2, we have an interesting finding: the precision declines as training epochs increase. We think the reason is that shortcut spans are the easiest to learn at early epochs, so almost all top tokens are shortcut tokens. With continued training, the model gradually learns the semantics of texts from the original data. Although the label of a sentence in the synthetic data is random, there is still around a $50\%$ probability that it is semantically consistent with the text, and hence the label probability of a certain span may exceed the probability of the shortcut span. Please note that the precision of shortcut span labels predicted by our model is $100\%$. Such results demonstrate again that our model is self-interpretable and reflects its rationales through span labels. Samples of label trees with shortcut tokens are shown in Appendix A.5.
# 4.3 CONSISTENCY WITH HUMAN RATIONALES
To evaluate the consistency of the span labels learned by our model with human rationales, we design a constituent-level attribution task. Specifically, we hide the gold span positions in NER and slot-filling datasets to see whether our model is able to recover the gold spans and labels, so only raw text and sentence-level gold labels are visible to the models. We then train the models on these as multi-label classification tasks and evaluate the span positions learned unsupervisedly.

Figure 7: A sample of our method on semi-supervised slot filling. The ground truths are Denver, Oakland, afternoon, $5\mathrm{pm}$ , nonstop for each slot correspondingly. However, the last three are reasonable even though different from the ground truths.
Data set. We report F1 scores on the following data sets: ATIS (Hakkani-Tur et al., 2016), MITRestaurant (Liu et al., 2013a) and MITMovie (Liu et al., 2013b). ATIS is a slot-filling task and the others are NER tasks.
Baselines. We include two baselines with attribution ability on multi-label tasks: integrated gradients (IG) (Sundararajan et al., 2017) and multi-instance learning (Angelidis & Lapata, 2018). We follow the setup in Sec 4.1 and report the results of the last epoch. For IG, we set the number of interpolation steps to 200, use the same BERT as in the last section as the encoder, filter the attribution of each token by a threshold, and select the filtered positions as outputs. We use zero vectors and [MASK] embeddings as the baselines for IG, as Bastings et al. (2022) find the latter could significantly improve its performance. Since IG scores have no explicit meaning, we allow IG to adjust thresholds according to the test datasets. We report the best results of both baselines and the corresponding thresholds. Please find the full version of the table in Appendix A.4. For MIMLL, we select the span with the max attention score for a specified label. Please find details in Appendix A.7.
Metrics. We denote the predicted span set as $\mathcal{P}$ and gold span set as $\mathcal{G}$ and the overlap of $\mathcal{P}$ and $\mathcal{G}$ with the same labels as $\mathcal{O}$ . Then we have:
$$
\text{prec} = \frac{\sum_{o\in \mathcal{O}} o.j - o.i + 1}{\sum_{p\in \mathcal{P}} p.j - p.i + 1}, \quad \text{recall} = \frac{\sum_{o\in \mathcal{O}} o.j - o.i + 1}{\sum_{g\in \mathcal{G}} g.j - g.i + 1}, \quad \mathrm{F1} = \frac{2 \cdot \text{prec} \cdot \text{recall}}{\text{prec} + \text{recall}} \tag{7}
|
| 224 |
+
$$
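A minimal implementation of Eq. 7, assuming spans are `(i, j, label)` triples with inclusive boundaries and that each span set is non-overlapping, so overlap lengths can be accumulated pairwise:

```python
def span_f1(pred, gold):
    """Token-level precision/recall/F1 over labeled spans (Eq. 7).

    pred, gold: lists of (i, j, label) with inclusive boundaries."""
    def total_length(spans):
        return sum(j - i + 1 for i, j, _ in spans)

    # Accumulate overlap length for every same-label pair of spans.
    overlap = 0
    for pi, pj, pl in pred:
        for gi, gj, gl in gold:
            if pl == gl:
                overlap += max(0, min(pj, gj) - max(pi, gi) + 1)

    prec = overlap / total_length(pred) if pred else 0.0
    recall = overlap / total_length(gold) if gold else 0.0
    f1 = 2 * prec * recall / (prec + recall) if prec + recall > 0 else 0.0
    return prec, recall, f1
```

For instance, a prediction `(0, 2, "LOC")` against gold `(1, 2, "LOC")` overlaps on two tokens, giving precision 2/3, recall 1, and F1 0.8.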
<table><tr><td>Model</td><td>Thres.</td><td colspan="4">Slot-filling</td><td>Thres.</td><td colspan="4">sls-movie-eng</td></tr><tr><td>length ratio</td><td></td><td colspan="4">all 1-2 95.84 3.70 0.46</td><td></td><td colspan="4">all 1-2 55.60 42.75 1.65</td></tr><tr><td>IGBERT{mask|200}</td><td>0.3</td><td>50.28</td><td>51.13</td><td>37.15</td><td>15.87</td><td>0.2</td><td>57.19</td><td>60.07</td><td>55.79</td><td>34.36</td></tr><tr><td>IGBERT{zero|200}</td><td>0.4</td><td>56.62</td><td>57.42</td><td>46.15</td><td>18.46</td><td>0.3</td><td>47.59</td><td>50.53</td><td>46.12</td><td>23.80</td></tr><tr><td>MIMLL</td><td>N.A.</td><td>11.11</td><td>10.84</td><td>17.37</td><td>16.74</td><td>N.A.</td><td>14.55</td><td>14.11</td><td>14.76</td><td>17.43</td></tr><tr><td>Symbolic-Neuron</td><td>N.A.</td><td>35.30</td><td>35.38</td><td>33.78</td><td>34.01</td><td>N.A.</td><td>53.04</td><td>50.61</td><td>54.99</td><td>57.77</td></tr><tr><td>Symbolic-Neuronexclusive</td><td>N.A.</td><td>32.13</td><td>32.37</td><td>30.62</td><td>16.33</td><td>N.A.</td><td>52.89</td><td>50.45</td><td>54.55</td><td>61.15</td></tr><tr><td>Symbolic-Neurontopdown</td><td>N.A.</td><td>32.86</td><td>32.91</td><td>32.06</td><td>32.88</td><td>N.A.</td><td>53.15</td><td>51.59</td><td>54.21</td><td>59.08</td></tr><tr><td>Symbolic-Neurontop./excl.</td><td>N.A.</td><td>42.01</td><td>42.28</td><td>38.69</td><td>34.95</td><td>N.A.</td><td>57.82</td><td>56.54</td><td>58.87</td><td>59.82</td></tr><tr><td>Model</td><td>Thres.</td><td colspan="4">sls-movie-trivial</td><td>Thres.</td><td colspan="4">sls-restaurant</td></tr><tr><td>length ratio</td><td></td><td colspan="4">all 1-2 7.57 57.07 35.36</td><td></td><td colspan="4">all 1-2 100 40.87 57.89 7.33</td></tr><tr><td>IGBERT{mask|200}</td><td>0.02</td><td>47.69</td><td>38.63</td><td>45.27</td><td>50.83</td><td>0.2</td><td>50.10</td><td>51.06</td><td>50.11</td><td>37.73</td></tr><tr><td>IGBERT{zero|200}</td><td>0.1</td><td>42.34</td><td>31.20</td><td>39.00</td><td>46.57</td><td>0.2</td><td>43.65</td><td>45.85</td><td>43.07</td><td>30.23</td></tr><tr><td>MIMLL</td><td>N.A.</td><td>61.77</td><td>42.48</td><td>52.03</td><td>69.02</td><td>N.A.</td><td>7.58</td><td>7.40</td><td>7.73</td><td>6.08</td></tr><tr><td>Symbolic-Neuron</td><td>N.A.</td><td>67.30</td><td>41.11</td><td>60.18</td><td>75.18</td><td>N.A.</td><td>48.07</td><td>42.93</td><td>50.89</td><td>45.60</td></tr><tr><td>Symbolic-Neuronexclusive</td><td>N.A.</td><td>63.60</td><td>44.75</td><td>58.89</td><td>68.80</td><td>N.A.</td><td>49.46</td><td>44.28</td><td>52.22</td><td>48.20</td></tr><tr><td>Symbolic-Neurontopdown</td><td>N.A.</td><td>68.55</td><td>41.62</td><td>60.74</td><td>77.07</td><td>N.A.</td><td>47.43</td><td>43.42</td><td>49.83</td><td>40.41</td></tr><tr><td>Symbolic-Neurontop./excl.</td><td>N.A.</td><td>70.83</td><td>45.92</td><td>64.32</td><td>77.73</td><td>N.A.</td><td>52.52</td><td>49.14</td><td>54.67</td><td>44.27</td></tr></table>
Table 2: F1 scores for semi-supervised slot filling and NER, in which golden span positions are hidden. "Thres." is short for threshold.
Results and discussion. From Table 2, one observation is that models with the mutual-exclusiveness constraint achieve better F1 scores. Such results illustrate that a stronger inductive bias is more helpful for models to learn constituent-label alignments. Besides, we find that the Symbolic-Neural models significantly outperform the MIMLL and IG baselines on the NER datasets but trail IG on the slot-filling task. Studying the outputs of our method, with a sample shown in Figure 7, we find that our model tends to recall long spans while the ground truths in ATIS tend to be short spans. We also find that on sls-movie-trivial, MIMLL significantly outperforms IG. We therefore hypothesize that the distribution of golden span lengths may affect the results. We divide sentences into buckets according to the average golden span length and compute F1 scores for each bucket, as shown in Table 2. Interestingly, we find that the scores of IG decline significantly with increasing span lengths, while our method performs well on all the buckets. In addition, we argue that the F1 scores on the NER datasets reflect interpretability more objectively, because the boundaries of proper nouns are clear and objective, while the choice of slots is relatively ambiguous, e.g., regarding whether to include prepositions, modal verbs, etc.
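The per-bucket evaluation can be sketched as below; the bucket boundaries here are illustrative assumptions rather than the exact edges used in Table 2:

```python
def bucket_of(gold_spans, edges=(2, 4)):
    """Assign a sentence to a bucket by the average length of its gold
    spans, given as inclusive (i, j) pairs. `edges` is illustrative."""
    if not gold_spans:
        return None
    avg = sum(j - i + 1 for i, j in gold_spans) / len(gold_spans)
    if avg <= edges[0]:
        return "1-2"
    if avg <= edges[1]:
        return "3-4"
    return ">4"
```

F1 is then computed separately over the sentences falling into each bucket, alongside the score over all sentences.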
# 4.4 CASE STUDY & POTENTIAL APPLICATIONS

Figure 8: A sample of the symbolic-neural model on Navigator with the top-down encoder.

Figure 9: Samples of the symbolic-neural model with the top-down encoder.


We output the label trees generated by our model trained on Navigator, SST-2, and CoLA to observe whether the model has sufficient interpretability. From Figure 8, we find that our method is able to learn potential alignments between intents and texts and show them explicitly. This can be used in multi-intent NLU systems to help determine the attribution of slots to corresponding intents. We also study the difference between the label trees generated by the vanilla Symbolic-Neural model and Symbolic-Neuraltopdown. Cases can be found in Appendix A.12. We find that the vanilla Symbolic-Neural model fails to deal with multi-intent cases, which verifies the necessity of introducing the top-down encoder. For SST-2, as there are no neutral samples, we randomly sample sentences from Wiki-103 as neutral texts and force all their nodes to be $\phi_{NT}$ via a mean squared error loss. Figure 9(a) shows the sentiment polarity of each constituent and the polarity reversal caused by "never". Such a characteristic could be used for text mining by gathering the minimal spans of a specified label. We also study the generated label trees on CoLA, a linguistic acceptability dataset. We transfer the task to grammar error detection by converting the label "1" to $\phi$ , as "1" means no error is found in a sentence. Figure 9(b) shows that the model is able to detect incomplete constituents and may help in applications like grammar error localization. More cases can be found in the Appendix.
# 5 CONCLUSION & LIMITATION
In this paper, we propose a novel label extraction framework based on a simple inductive bias and model single/multi-label text classification in a unified way. We discuss how to build a probabilistic model that maximizes the total probability of valid potential label trees by leveraging the internal representations of a structured language model as symbolic interfaces. Our experimental results show that our method achieves inherent interpretability at various granularities. The generated label trees could be valuable in various unsupervised tasks requiring constituent-level outputs.
Regarding the limitations of our work, we require the labels corresponding to the texts in the dataset to have a certain degree of diversity, which forces the model to learn self-consistent constituent-label alignments. For example, in ATIS, almost all training samples have the same labels, such as "fromloc.city_name" and "toloc.city_name". That is why our model fails to accurately associate these two labels with the correct spans in Figure 7.
# 6 REPRODUCIBILITY STATEMENT
The supplemental material includes a zip file containing our code and download links for the datasets, as well as the scripts we use to run all baselines and the Symbolic-Neural models.
# 7 ACKNOWLEDGEMENT
This work was supported by Ant Group through the CCF-Ant Research Fund. We thank the Aliyun EFLOPS team for their substantial support in designing and providing a cutting-edge training platform to facilitate fast experimentation in this work. We also thank Jing Zheng for his help in revising the paper and reviewing the code.
# REFERENCES
David Alvarez-Melis and Tommi S. Jaakkola. Towards robust interpretability with self-explaining neural networks. In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolò Cesa-Bianchi, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pp. 7786-7795, 2018. URL https://proceedings.neurips.cc/paper/2018/hash/3e9f0fc9b2f89e043bc6233994dfcf76-Abstract.html.
Stefanos Angelidis and Mirella Lapata. Multiple instance learning networks for fine-grained sentiment analysis. Trans. Assoc. Comput. Linguistics, 6:17-31, 2018. doi: 10.1162/tacl_a_00002. URL https://doi.org/10.1162/tacl_a_00002.
David Baehrens, Timon Schroeter, Stefan Harmeling, Motoaki Kawanabe, Katja Hansen, and Klaus-Robert Müller. How to explain individual classification decisions. J. Mach. Learn. Res., 11: 1803-1831, 2010. doi: 10.5555/1756006.1859912. URL https://dl.acm.org/doi/10.5555/1756006.1859912.
James K. Baker. Trainable grammars for speech recognition. Journal of the Acoustical Society of America, 65, 1979.
Jasmijn Bastings, Sebastian Ebert, Polina Zablotskaia, Anders Sandholm, and Katja Filippova. "will you find these shortcuts?" A protocol for evaluating the faithfulness of input salience methods for text classification. In Yoav Goldberg, Zornitsa Kozareva, and Yue Zhang (eds.), Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pp. 976-991. Association for Computational Linguistics, 2022. URL https://aclanthology.org/2022.emnlp-main.64.
Nicola De Cao, Michael Sejr Schlichtkrull, Wilker Aziz, and Ivan Titov. How do decisions emerge across layers in neural models? Interpretation with differentiable masking. In Bonnie Webber, Trevor Cohn, Yulan He, and Yang Liu (eds.), Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pp. 3243-3255. Association for Computational Linguistics, 2020. doi: 10.18653/v1/2020.emnlp-main.262. URL https://doi.org/10.18653/v1/2020.emnlp-main.262.
Francisco Casacuberta. Statistical estimation of stochastic context-free grammars using the inside-outside algorithm and a transformation on grammars. In Rafael C. Carrasco and José Oncina (eds.), Grammatical Inference and Applications, Second International Colloquium, ICGI-94, Alicante, Spain, September 21-23, 1994, Proceedings, volume 862 of Lecture Notes in Computer Science, pp. 119-129. Springer, 1994. doi: 10.1007/3-540-58473-0_142. URL https://doi.org/10.1007/3-540-58473-0_142.
Hanjie Chen and Yangfeng Ji. Learning variational word masks to improve the interpretability of neural text classifiers. In Bonnie Webber, Trevor Cohn, Yulan He, and Yang Liu (eds.), Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pp. 4236-4251. Association for Computational Linguistics, 2020. doi: 10.18653/v1/2020.emnlp-main.347. URL https://doi.org/10.18653/v1/2020.emnlp-main.347.
Jihun Choi, Kang Min Yoo, and Sang-goo Lee. Learning to compose task-specific tree structures. In Sheila A. McIlraith and Kilian Q. Weinberger (eds.), Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pp. 5094-5101. AAAI Press, 2018. URL https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/16682.
Jishnu Ray Chowdhury and Cornelia Caragea. Modeling hierarchical structures with continuous recursive neural networks. In Marina Meila and Tong Zhang (eds.), Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pp. 1975-1988. PMLR, 2021. URL http://proceedings.mlr.press/v139/chowdhury21a.html.
Alice Coucke, Alaa Saade, Adrien Ball, Théodore Bluche, Alexandre Caulier, David Leroy, Clément Doumouro, Thibault Gisselbrecht, Francesco Caltagirone, Thibaut Lavril, Maël Primet, and Joseph Dureau. Snips voice platform: an embedded spoken language understanding system for private-by-design voice interfaces. CoRR, abs/1805.10190, 2018. URL http://arxiv.org/abs/1805.10190.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. In Jill Burstein, Christy Doran, and Thamar Solorio (eds.), Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pp. 4171-4186. Association for Computational Linguistics, 2019. doi: 10.18653/v1/n19-1423. URL https://doi.org/10.18653/v1/n19-1423.
Thomas G. Dietterich, Richard H. Lathrop, and Tomás Lozano-Pérez. Solving the multiple instance problem with axis-parallel rectangles. Artif. Intell., 89(1-2):31-71, 1997. doi: 10.1016/S0004-3702(96)00034-3. URL https://doi.org/10.1016/S0004-3702(96)00034-3.
Andrew Drozdov, Patrick Verga, Mohit Yadav, Mohit Iyyer, and Andrew McCallum. Unsupervised latent tree induction with deep inside-outside recursive auto-encoders. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 1129-1141, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1116. URL https://www.aclweb.org/anthology/N19-1116.
Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. Recurrent neural network grammars. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 199-209, San Diego, California, June 2016. Association for Computational Linguistics. doi: 10.18653/v1/N16-1024. URL https://www.aclweb.org/anthology/N16-1024.
Mihail Eric, Lakshmi Krishnan, François Charette, and Christopher D. Manning. Key-value retrieval networks for task-oriented dialogue. In Kristiina Jokinen, Manfred Stede, David DeVault, and Annie Louis (eds.), Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, Saarbrücken, Germany, August 15-17, 2017, pp. 37-49. Association for Computational Linguistics, 2017. doi: 10.18653/v1/w17-5506. URL https://doi.org/10.18653/v1/w17-5506.
Dilek Hakkani-Tur, Gokhan Tur, Asli Celikyilmaz, Yun-Nung Chen, Jianfeng Gao, Li Deng, and Ye-Yi Wang. Multi-domain joint semantic frame parsing using bi-directional RNN-LSTM. In Proceedings of Interspeech, 2016.
Xiang Hu, Haitao Mi, Zujie Wen, Yafang Wang, Yi Su, Jing Zheng, and Gerard de Melo. R2D2: Recursive transformer based on differentiable tree for interpretable hierarchical language modeling. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4897-4908, Online, August 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.acl-long.379. URL https://aclanthology.org/2021.acl-long.379.
Xiang Hu, Haitao Mi, Liang Li, and Gerard de Melo. Fast-R2D2: A pretrained recursive neural network based on pruned CKY for grammar induction and text representation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pp. 2809-2821, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguistics. URL https://aclanthology.org/2022.emnlp-main.181.
James D. Keeler, David E. Rumelhart, and Wee Kheng Leow. Integrated segmentation and recognition of hand-printed numerals. In Richard Lippmann, John E. Moody, and David S. Touretzky (eds.), Advances in Neural Information Processing Systems 3, [NIPS Conference, Denver, Colorado, USA, November 26-29, 1990], pp. 557-563. Morgan Kaufmann, 1990. URL http://papers.nips.cc/paper/397-integrated-segmentation-and-recognition-of-hand-printed-numerals.
Siwon Kim, Jihun Yi, Eunji Kim, and Sungroh Yoon. Interpretation of NLP models through input marginalization. In Bonnie Webber, Trevor Cohn, Yulan He, and Yang Liu (eds.), Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pp. 3154-3167. Association for Computational Linguistics, 2020. doi: 10.18653/v1/2020.emnlp-main.255. URL https://doi.org/10.18653/v1/2020.emnlp-main.255.
Yoon Kim, Alexander Rush, Lei Yu, Adhiguna Kuncoro, Chris Dyer, and Gábor Melis. Unsupervised recurrent neural network grammars. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 1105-1117, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1114. URL https://www.aclweb.org/anthology/N19-1114.
Dimitrios Kotzias, Misha Denil, Nando de Freitas, and Padhraic Smyth. From group to individual labels using deep features. In Longbing Cao, Chengqi Zhang, Thorsten Joachims, Geoffrey I. Webb, Dragos D. Margineantu, and Graham Williams (eds.), Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Sydney, NSW, Australia, August 10-13, 2015, pp. 597-606. ACM, 2015. doi: 10.1145/2783258.2783380. URL https://doi.org/10.1145/2783258.2783380.
Zachary C. Lipton. The mythos of model interpretability. Commun. ACM, 61(10):36-43, 2018. doi: 10.1145/3233231. URL https://doi.org/10.1145/3233231.
Jingjing Liu, Panupong Pasupat, Scott Cyphers, and James R. Glass. Asgard: A portable architecture for multilingual dialogue systems. In IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2013, Vancouver, BC, Canada, May 26-31, 2013, pp. 8386-8390. IEEE, 2013a. doi: 10.1109/ICASSP.2013.6639301. URL https://doi.org/10.1109/ICASSP.2013.6639301.
Jingjing Liu, Panupong Pasupat, Yining Wang, Scott Cyphers, and James R. Glass. Query understanding enhanced by hierarchical parsing structures. In 2013 IEEE Workshop on Automatic Speech Recognition and Understanding, Olomouc, Czech Republic, December 8-12, 2013, pp. 72-77. IEEE, 2013b. doi: 10.1109/ASRU.2013.6707708. URL https://doi.org/10.1109/ASRU.2013.6707708.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692, 2019. URL http://arxiv.org/abs/1907.11692.
Scott M. Lundberg and Su-In Lee. A unified approach to interpreting model predictions. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pp. 4765-4774, 2017. URL https://proceedings.neurips.cc/paper/2017/hash/8a20a8621978632d76c43dfd28b67767-Abstract.html.
Jean Maillard, Stephen Clark, and Dani Yogatama. Jointly learning sentence embeddings and syntax with unsupervised tree-LSTMs. CoRR, abs/1705.09189, 2017. URL http://arxiv.org/abs/1705.09189.
Oded Maron and Aparna Lakshmi Ratan. Multiple-instance learning for natural scene classification. In Jude W. Shavlik (ed.), Proceedings of the Fifteenth International Conference on Machine Learning (ICML 1998), Madison, Wisconsin, USA, July 24-27, 1998, pp. 341-349. Morgan Kaufmann, 1998.
Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net, 2017. URL https://openreview.net/forum?id=Byj72udxe.
Barbara H. Partee. Lexical semantics and compositionality. 1995.
Jordan B. Pollack. Recursive distributed representations. Artif. Intell., 46(1-2):77-105, 1990. doi: 10.1016/0004-3702(90)90005-K. URL https://doi.org/10.1016/0004-3702(90)90005-K.
Jesse Read, Bernhard Pfahringer, Geoff Holmes, and Eibe Frank. Classifier chains for multi-label classification. Mach. Learn., 85(3):333-359, 2011. doi: 10.1007/s10994-011-5256-5. URL https://doi.org/10.1007/s10994-011-5256-5.
Marco Ribeiro, Sameer Singh, and Carlos Guestrin. "why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, pp. 97-101, San Diego, California, June 2016. Association for Computational Linguistics. doi: 10.18653/v1/N16-3020. URL https://aclanthology.org/N16-3020.
Cynthia Rudin. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5):206-215, 2019.
Yikang Shen, Shawn Tan, Alessandro Sordoni, and Aaron C. Courville. Ordered neurons: Integrating tree structures into recurrent neural networks. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019. URL https://openreview.net/forum?id=B1l6qiR5F7.
Yikang Shen, Yi Tay, Che Zheng, Dara Bahri, Donald Metzler, and Aaron C. Courville. Structformer: Joint unsupervised induction of dependency and constituency structure from masked language modeling. In Chengqing Zong, Fei Xia, Wenjie Li, and Roberto Navigli (eds.), Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pp. 7196-7209. Association for Computational Linguistics, 2021. doi: 10.18653/v1/2021.acl-long.559. URL https://doi.org/10.18653/v1/2021.acl-long.559.
Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. In Yoshua Bengio and Yann LeCun (eds.), 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Workshop Track Proceedings, 2014. URL http://arxiv.org/abs/1312.6034.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP 2013, 18-21 October 2013, Grand Hyatt Seattle, Seattle, Washington, USA. A meeting of SIGDAT, a Special Interest Group of the ACL, pp. 1631-1642. ACL, 2013. URL https://aclanthology.org/D13-1170/.
Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In Doina Precup and Yee Whye Teh (eds.), Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pp. 3319-3328. PMLR, 2017. URL http://proceedings.mlr.press/v70/sundararajan17a.html.
Kewei Tu, Maria Pavlovskaia, and Song Chun Zhu. Unsupervised structure learning of stochastic and-or grammars. In Christopher J. C. Burges, Léon Bottou, Zoubin Ghahramani, and Kilian Q. Weinberger (eds.), Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, pp. 1322-1330, 2013. URL https://proceedings.neurips.cc/paper/2013/hash/24681928425f5a9133504de568f5f6df-Abstract.html.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019. URL https://openreview.net/forum?id=rJ4km2R5t7.
Yu Zhang, Houquan Zhou, and Zhenghua Li. Fast and accurate neural CRF constituency parsing. In Proceedings of IJCAI, pp. 4046-4053, 2020. doi: 10.24963/ijcai.2020/560. URL https://doi.org/10.24963/ijcai.2020/560.
Zhi-Hua Zhou, Yu-Yin Sun, and Yu-Feng Li. Multi-instance learning by treating instances as non-i.i.d. samples. In Andrea Pohoreckiyj Danyluk, Leon Bottou, and Michael L. Littman (eds.), Proceedings of the 26th Annual International Conference on Machine Learning, ICML 2009, Montreal, Quebec, Canada, June 14-18, 2009, volume 382 of ACM International Conference Proceeding Series, pp. 1249-1256. ACM, 2009. doi: 10.1145/1553374.1553534. URL https://doi.org/10.1145/1553374.1553534.
Yukun Zhu, Ryan Kiros, Richard S. Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, pp. 19-27. IEEE Computer Society, 2015. doi: 10.1109/ICCV.2015.11. URL https://doi.org/10.1109/ICCV.2015.11.
# A APPENDIX
# A.1 RELATED WORKS
Structured language models. Many attempts have been made to develop structured language models. Pollack (1990) proposed to use RvNNs as a recursive architecture to encode text hierarchically, and Socher et al. (2013) showed the effectiveness of RvNNs with gold trees for sentiment analysis. However, both approaches require annotated trees. Gumbel-Tree-LSTMs (Choi et al., 2018) construct trees by recursively selecting two terminal nodes to merge and learning composition probabilities via downstream tasks. CRvNN (Chowdhury & Caragea, 2021) makes the entire process end-to-end differentiable and parallel by introducing a continuous relaxation. However, neither Gumbel-Tree-LSTMs nor CRvNN mentions a pretraining mechanism. URNNG (Kim et al., 2019) proposed the first architecture to jointly pretrain a parser and an encoder, based on RNNG (Dyer et al., 2016). However, its $O(n^3)$ time and space complexity makes it hard to pretrain on large-scale corpora. ON-LSTM and StructFormer (Shen et al., 2019; 2021) propose a series of methods to integrate structures into LSTMs or Transformers by masking information in differentiable ways. As the encoding process is still performed in layer-stacking models, there are no intermediate representations for tree nodes. Maillard et al. (2017) propose an alternative approach based on differentiable CKY encoding. The algorithm is made differentiable by a soft-gating approach, which approximates discrete candidate selection with a probabilistic mixture of the constituents available in a given cell of the chart. While their work relies on annotated downstream tasks to learn structures, Drozdov et al. (2019) propose a novel auto-encoder-like pretraining objective based on the inside-outside algorithm (Baker, 1979; Casacuberta, 1994), but it is still of cubic complexity. To tackle the $O(n^3)$ limitation of CKY encoding, Hu et al. (2021) propose an MLM-like pretraining objective and a pruning strategy, which reduces the complexity of encoding to linear and makes it possible to pretrain the model on large-scale corpora.
Multi-Instance Learning. Multi-instance learning (MIL) deals with problems where labels are associated with groups of instances, or bags (spans in our case), while instance labels are unobserved. The goal is either to label bags (Keeler et al., 1990; Dietterich et al., 1997; Maron & Ratan, 1998) or to simultaneously infer bag and instance labels (Zhou et al., 2009; Kotzias et al., 2015). Angelidis & Lapata (2018) apply MIL to segment-level sentiment analysis with an attention-based scoring method. In our work, we refine instances to different semantic granularities and consider hierarchical relationships between instances.
Model Interpretability. Many approaches to model interpretability have been proposed. Ribeiro et al. (2016); Lundberg & Lee (2017) generate explanations for predictions. Baehrens et al. (2010); Simonyan et al. (2014); Sundararajan et al. (2017) analyze attribution via gradients. The above-mentioned methods are all post-hoc. Kim et al. (2020); Cao et al. (2020); Chen & Ji (2020) apply masks to the model input in text classification to obtain token weights, but single-dimensional weights are not enough to reflect multi-label interpretation. Alvarez-Melis & Jaakkola (2018); Rudin (2019) argue that interpretability should be an inherent property of a deep neural network and propose corresponding model architectures. However, none of the above-mentioned methods is able to generate constituent-level interpretability.
# A.2 DYNAMIC PROGRAMMING BASED ON FULL PERMUTATION
A naive way to estimate $P(\hat{t}_{i,j}^{[\mathcal{V}(\mathcal{M})]}|t_{i,j})$ is to enumerate all possible state spaces and sum them up via dynamic programming. We write $X_{i,j}^{\mathcal{M}}$ as shorthand for $P(\hat{t}_{i,j}^{[\mathcal{V}(\mathcal{M})]}|t_{i,j})$ . Let $\mathcal{M}_l$ and $\mathcal{M}_r$ denote a pair of sets subject to $\mathcal{M}_l \cup \mathcal{M}_r = \mathcal{M}$ , and let $\mathcal{C}(\mathcal{M})$ denote the set containing all valid $\mathcal{M}_l$ and $\mathcal{M}_r$ pairs. Figure 5(a) illustrates all potential combinations of $\mathcal{M}_l$ and $\mathcal{M}_r$ when $|\mathcal{M}| > 1$ .
If $|\mathcal{M}| > 1$ , let $\mathcal{C}(\mathcal{M})$ be the set of all potential pairs where $\mathcal{Y}(\hat{t}_{i,k}) \cup \mathcal{Y}(\hat{t}_{k+1,j}) = \mathcal{M}$ .
If $|\mathcal{M}| = 1$ , it's similar to the case described in Figure 4.
If $\mathcal{M} = \phi$ , $n_{i,j}$ could only be associated with $\phi_T$ or $\phi_{NT}$ with $\mathcal{M}_l = \phi$ and $\mathcal{M}_r = \phi$ .
Finally, the transition function for $t_{i,j}$ with $i < j$ is:
$$
X_{i,j}^{\mathcal{M}} = \begin{cases} P(\phi_{NT} \mid n_{i,j}) \cdot \sum_{\{\mathcal{M}_l, \mathcal{M}_r\} \in \mathcal{C}(\mathcal{M})} X_{i,k}^{\mathcal{M}_l} X_{k+1,j}^{\mathcal{M}_r} & , |\mathcal{M}| > 1 \\ P(m \mid n_{i,j}) + P(\phi_{NT} \mid n_{i,j}) \left( X_{i,k}^{\phi} X_{k+1,j}^{\mathcal{M}} + X_{i,k}^{\mathcal{M}} X_{k+1,j}^{\phi} + X_{i,k}^{\mathcal{M}} X_{k+1,j}^{\mathcal{M}} \right) & , \mathcal{M} = \{m\} \\ P(\phi_T \mid n_{i,j}) + P(\phi_{NT} \mid n_{i,j}) \left( X_{i,k}^{\phi} X_{k+1,j}^{\phi} \right) & , \mathcal{M} = \phi \end{cases} \tag{8}
$$
|
| 340 |
+
|
| 341 |
+
When $i = j$ , we have:
|
| 342 |
+
|
| 343 |
+
$$
|
| 344 |
+
X_{i,j}^{\mathcal{M}} = \left\{ \begin{array}{ll} 0 & , |\mathcal{M}| > 1 \\ P(m \mid n_{i,j}) & , \mathcal{M} = \{m\} \\ P(\phi_{T} \mid n_{i,j}) + P(\phi_{NT} \mid n_{i,j}) & , \mathcal{M} = \phi \end{array} \right. \tag{9}
|
| 345 |
+
$$
|
| 346 |
+
|
| 347 |
+
The transition function works in a bottom-up manner and iterates over all possible $\mathcal{M} \subseteq \mathcal{T}$; $X_{1,|\mathbf{S}|}^{\mathcal{T}}$ is the final probability. However, iterating over $\mathcal{C}(\mathcal{M})$ and all $\mathcal{M} \subseteq \mathcal{T}$ has exponential complexity, so this approach is only practical when $|\mathcal{T}|$ is small.
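As a concrete illustration, the bottom-up recursion of Equations 8 and 9 can be sketched as follows for a fixed binary tree. The per-node label distributions `probs` and split points `splits` are hypothetical placeholders standing in for the parser's outputs; the special keys `'phi_T'` and `'phi_NT'` denote $\phi_T$ and $\phi_{NT}$.

```python
from itertools import chain, combinations

def all_subsets(labels):
    """All subsets of a label set, as frozensets."""
    s = sorted(labels)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def label_tree_probability(n, probs, splits, targets):
    """Return X_{1,n}^{targets} computed by the recursion of Eqs. 8-9.
    probs[(i, j)] maps labels (and 'phi_T'/'phi_NT') to P(. | n_{i,j});
    splits[(i, j)] is the split point k of internal node n_{i,j}."""
    def X(i, j, M):
        p = probs[(i, j)]
        if i == j:                                  # Eq. 9: single-token node
            if len(M) > 1:
                return 0.0
            if len(M) == 1:
                return p.get(next(iter(M)), 0.0)
            return p.get('phi_T', 0.0) + p.get('phi_NT', 0.0)
        k = splits[(i, j)]                          # Eq. 8: internal node
        if len(M) == 0:
            return p.get('phi_T', 0.0) + \
                p.get('phi_NT', 0.0) * X(i, k, M) * X(k + 1, j, M)
        # C(M): all pairs (M_l, M_r) with M_l ∪ M_r = M
        pairs = [(Ml, Mr) for Ml in all_subsets(M) for Mr in all_subsets(M)
                 if Ml | Mr == M]
        children = sum(X(i, k, Ml) * X(k + 1, j, Mr) for Ml, Mr in pairs)
        if len(M) > 1:
            return p.get('phi_NT', 0.0) * children
        m = next(iter(M))
        return p.get(m, 0.0) + p.get('phi_NT', 0.0) * children
    return X(1, n, frozenset(targets))
```

On a toy two-token tree with a single label, the probabilities over all $\mathcal{M}$ sum to 1, as expected when each node's distribution is normalized.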
|
| 348 |
+
|
| 349 |
+
# A.3 PRETRAIN BERT AND FAST-R2D2 FROM SCRATCH
|
| 350 |
+
|
| 351 |
+
The WikiBooks dataset originally used to train BERT (Devlin et al., 2019) is a combination of English Wikipedia and BooksCorpus (Zhu et al., 2015). However, BooksCorpus is no longer publicly available, so Fast-R2D2 cannot be pretrained on the same corpus, which makes a fair comparison with the publicly released BERT model impossible. Considering our limited GPU resources, we pretrain both BERT and Fast-R2D2 from scratch on WikiText-103. We train BERT from scratch following the Huggingface tutorial $^{5}$ with the mask rate set to $15\%$ . The vocabulary of BERT and Fast-R2D2 is kept the same as that of the original BERT. As RoBERTa (Liu et al., 2019) demonstrated that the NSP task is harmful and that longer inputs improve performance on downstream tasks, we remove the NSP task and use the original corpus, not split into sentences, as input. For Fast-R2D2, WikiText-103 is split at the sentence level, and sentences longer than 200 tokens after tokenization are discarded (about $0.04\%$ of the original data). BERT is pretrained for 60 epochs with a learning rate of $5 \times 10^{-5}$ and a batch size of 50 per GPU on 8 A100 GPUs. Fast-R2D2 is pretrained with a learning rate of $5 \times 10^{-5}$ for the transformer encoder and $1 \times 10^{-3}$ for the parser. Note that the batch size of Fast-R2D2 is adjusted dynamically so that the total length of the sentences in a batch does not exceed a maximum threshold; to keep the batch size similar to that of BERT, the threshold is set to 1536. Since the average sentence length in WikiText-103 is around 30 tokens, the average batch size of Fast-R2D2 is around 50, similar to that of BERT.
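The dynamic batching rule described above (capping the total token count per batch rather than the number of sentences) can be sketched as follows. The function name is illustrative and the sentence lengths are made up; the threshold of 1536 matches the setting in the text.

```python
def dynamic_batches(lengths, max_total=1536):
    """Greedily group sentences so that the summed token count of each
    batch stays within max_total.  Sentences longer than max_total are
    assumed to have been filtered out beforehand (as in the text, where
    sentences over 200 tokens are discarded)."""
    batches, current, total = [], [], 0
    for idx, n in enumerate(lengths):
        if current and total + n > max_total:
            batches.append(current)       # flush the full batch
            current, total = [], 0
        current.append(idx)
        total += n
    if current:
        batches.append(current)
    return batches
```

With an average sentence length of 30 tokens, each batch holds about $1536 / 30 \approx 51$ sentences, close to BERT's batch size of 50.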
|
| 352 |
+
|
| 353 |
+
# A.4 THE FULL VERSION OF THE SPAN ATTRIBUTION TASK.
|
| 354 |
+
|
| 355 |
+
<table><tr><td>Model</td><td>Thres.</td><td colspan="4">Slot-filling</td><td>Thres.</td><td colspan="4">sls-movie-eng</td></tr><tr><td>length ratio</td><td></td><td colspan="4">all 1-2 95.84 3.70 0.46</td><td></td><td colspan="4">all 1-2 55.60 42.75 1.65</td></tr><tr><td>IGBERT{mask|200}</td><td>0.2</td><td>47.80</td><td>48.35</td><td>40.51</td><td>17.72</td><td>0.1</td><td>51.40</td><td>51.57</td><td>51.74</td><td>42.69</td></tr><tr><td>IGBERT{mask|200}</td><td>0.3</td><td>50.28</td><td>51.13</td><td>37.15</td><td>15.87</td><td>0.2</td><td>57.19</td><td>60.07</td><td>55.79</td><td>34.36</td></tr><tr><td>IGBERT{mask|200}</td><td>0.4</td><td>49.82</td><td>51.03</td><td>30.96</td><td>10.53</td><td>0.3</td><td>56.84</td><td>62.13</td><td>53.71</td><td>26.15</td></tr><tr><td>IGBERT{zero|200}</td><td>0.3</td><td>53.88</td><td>54.33</td><td>48.93</td><td>23.68</td><td>0.2</td><td>45.57</td><td>46.20</td><td>45.72</td><td>31.88</td></tr><tr><td>IGBERT{zero|200}</td><td>0.4</td><td>56.62</td><td>57.42</td><td>46.15</td><td>18.46</td><td>0.3</td><td>47.59</td><td>50.53</td><td>46.12</td><td>23.80</td></tr><tr><td>IGBERT{zero|200}</td><td>0.5</td><td>56.24</td><td>57.44</td><td>39.09</td><td>13.79</td><td>0.4</td><td>46.01</td><td>51.47</td><td>42.61</td><td>12.01</td></tr><tr><td>MIMLL</td><td>N.A.</td><td>11.11</td><td>10.84</td><td>17.37</td><td>16.74</td><td>N.A.</td><td>14.55</td><td>14.11</td><td>14.76</td><td>17.43</td></tr><tr><td>Symbolic-Neuron</td><td>N.A.</td><td>35.30</td><td>35.38</td><td>33.78</td><td>34.01</td><td>N.A.</td><td>53.04</td><td>50.61</td><td>54.99</td><td>57.77</td></tr><tr><td>Symbolic-Neuronexclusive</td><td>N.A.</td><td>32.13</td><td>32.37</td><td>30.62</td><td>16.33</td><td>N.A.</td><td>52.89</td><td>50.45</td><td>54.55</td><td>61.15</td></tr><tr><td>Symbolic-Neurontopdown</td><td>N.A.</td><td>32.86</td><td>32.91</td><td>32.06</td><td>32.88</td><td>N.A.</td><td>53.15</td><td>51.59</td><td>54.21</td><td>59.08</td></tr><tr><td>Symbolic-Neurontop./excl.</td><td>N.A.</td><td>42.01</td><td>42.28</td><td>38.69</td><td>34.95</td><td>N.A.</td><td>57.82</td><td>56.54</td><td>58.87</td><td>59.82</td></tr><tr><td>Model</td><td>Thres.</td><td colspan="4">sls-movie-trivial</td><td>Thres.</td><td colspan="4">sls-restaurant</td></tr><tr><td>length ratio</td><td></td><td colspan="4">all 1-2 7.57 57.07 35.36</td><td></td><td colspan="4">all 1-2 100 40.87 57.89 1.24</td></tr><tr><td>IGBERT{mask|200}</td><td>0.02</td><td>47.30</td><td>30.68</td><td>41.50</td><td>54.73</td><td>0.1</td><td>46.70</td><td>45.45</td><td>47.58</td><td>41.34</td></tr><tr><td>IGBERT{mask|200}</td><td>0.05</td><td>47.69</td><td>38.63</td><td>45.27</td><td>50.83</td><td>0.2</td><td>50.10</td><td>51.06</td><td>50.11</td><td>37.73</td></tr><tr><td>IGBERT{mask|200}</td><td>0.1</td><td>44.35</td><td>43.91</td><td>46.00</td><td>42.91</td><td>0.3</td><td>49.04</td><td>51.67</td><td>48.46</td><td>31.33</td></tr><tr><td>IGBERT{zero|200}</td><td>0.05</td><td>41.78</td><td>25.79</td><td>36.29</td><td>49.00</td><td>0.1</td><td>40.01</td><td>39.19</td><td>40.72</td><td>32.92</td></tr><tr><td>IGBERT{zero|200}</td><td>0.1</td><td>42.34</td><td>31.20</td><td>39.00</td><td>46.57</td><td>0.2</td><td>43.65</td><td>45.85</td><td>43.07</td><td>30.23</td></tr><tr><td>IGBERT{zero|200}</td><td>0.2</td><td>37.26</td><td>36.49</td><td>38.01</td><td>36.68</td><td>0.3</td><td>43.07</td><td>45.45</td><td>41.10</td><td>25.95</td></tr><tr><td>MIMLL</td><td>N.A.</td><td>61.77</td><td>42.48</td><td>52.03</td><td>69.02</td><td>N.A.</td><td>7.58</td><td>7.40</td><td>7.73</td><td>6.08</td></tr><tr><td>Symbolic-Neuron</td><td>N.A.</td><td>67.30</td><td>41.11</td><td>60.18</td><td>75.18</td><td>N.A.</td><td>48.07</td><td>42.93</td><td>50.89</td><td>45.60</td></tr><tr><td>Symbolic-Neuronexclusive</td><td>N.A.</td><td>63.60</td><td>44.75</td><td>58.89</td><td>68.80</td><td>N.A.</td><td>49.46</td><td>44.28</td><td>52.22</td><td>48.20</td></tr><tr><td>Symbolic-Neurontopdown</td><td>N.A.</td><td>68.55</td><td>41.62</td><td>60.74</td><td>77.07</td><td>N.A.</td><td>47.43</td><td>43.42</td><td>49.83</td><td>40.41</td></tr><tr><td>Symbolic-Neurontop./excl.</td><td>N.A.</td><td>70.83</td><td>45.92</td><td>64.32</td><td>77.73</td><td>N.A.</td><td>52.52</td><td>49.14</td><td>54.67</td><td>44.27</td></tr></table>
|
| 356 |
+
|
| 357 |
+
# A.5 SAMPLES OF LABEL TREES WITH SHORTCUT TOKENS.
|
| 358 |
+
|
| 359 |
+
Samples of label trees with shortcut tokens are shown as follows:
|
| 360 |
+
|
| 361 |
+

|
| 362 |
+
|
| 363 |
+

|
| 364 |
+
|
| 365 |
+
# A.6 MULTI-LABEL LEARNING BASED ON FAST-R2D2
|
| 366 |
+
|
| 367 |
+
We adopt the canonical multi-instance learning framework for text classification proposed by Angelidis & Lapata (2018), in which each instance has a representation and all instances are fused by attention. The original work produces a hidden vector $h_i$ for each segment via GRU modules and computes attention weights $a_i$ as the normalized similarity of each $h_i$ with $h_a$ .
|
| 368 |
+
|
| 369 |
+
$$
|
| 370 |
+
a_i = \frac{\exp\left(\mathrm{h}_i^{\top} \mathrm{h}_a\right)}{\sum_i \exp\left(\mathrm{h}_i^{\top} \mathrm{h}_a\right)}, \quad p_i = \operatorname{softmax}\left(W_{cls} h_i + b_{cls}\right), \quad p_d^{(c)} = \sum_i a_i p_i^{(c)}, \quad c \in [1, C]. \tag{10}
|
| 371 |
+
$$
|
| 372 |
+
|
| 373 |
+
where $C$ is the total number of classes, $p_i$ is the label prediction for an individual segment, and $p_d$ is the document-level prediction. They use the negative log-likelihood of the prediction as the objective function: $L_{cls} = -\sum_d \log p_d^{(y_d)}$ . In our work, we simply replace segment representations with span representations as the experimental baseline. Specifically, we use the top-down representation $e_{i,j}'$ as the tensor to be attended to and predict the label from $e_{i,j}$ :
|
| 374 |
+
|
| 375 |
+
$$
|
| 376 |
+
a_{i,j} = \frac{\exp\left(\mathrm{e}_{i,j}^{\prime\top} \mathrm{h}_a\right)}{\sum_{m,n \in \mathcal{D}} \exp\left(\mathrm{e}_{m,n}^{\prime\top} \mathrm{h}_a\right)}, \quad p_{i,j} = \operatorname{softmax}\left(W_{cls} e_{i,j} + b_{cls}\right), \tag{11}
|
| 377 |
+
$$
|
| 378 |
+
|
| 379 |
+
$$
|
| 380 |
+
p_d^{(c)} = \sum_{m,n \in \mathcal{D}} a_{m,n} p_{m,n}^{(c)}, \quad c \in [1, C].
|
| 381 |
+
$$
|
| 382 |
+
|
| 383 |
+
where $\mathcal{D}$ is the span set of a parsing tree. Please note that the MIL model in our baselines is trained together with $\mathcal{L}_{bilm}$ and $\mathcal{L}_{KL}$ ; the final loss is $\mathcal{L}_{cls} + \mathcal{L}_{self}$ .
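The attention-weighted fusion of Equations 10 and 11 can be sketched as follows. The `classify` callable is a hypothetical stand-in for $\operatorname{softmax}(W_{cls} e_{i,j} + b_{cls})$; representations are plain lists of floats for illustration.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def mil_predict(span_reprs, topdown_reprs, h_a, classify):
    """Document-level prediction: attention scores come from the
    top-down span representations e'_{i,j} against the query h_a, and
    per-span class distributions are mixed with those weights."""
    scores = [sum(e * h for e, h in zip(rep, h_a)) for rep in topdown_reprs]
    a = softmax(scores)                       # attention weights a_{i,j}
    per_span = [classify(rep) for rep in span_reprs]
    num_classes = len(per_span[0])
    return [sum(w * p[c] for w, p in zip(a, per_span))
            for c in range(num_classes)]      # p_d^{(c)}
```

Since the output is a convex combination of per-span distributions, it is itself a valid distribution over classes.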
|
| 384 |
+
|
| 385 |
+
# A.7 MULTI-LABEL MULTI-INSTANCE LEARNING BASED ON FAST-R2D2
|
| 386 |
+
|
| 387 |
+
To support multi-label multi-instance learning, we refactor the above equations to support per-label attention. For each label $c$ there is a vector $h_a^{(c)}$ , from which the attention weights $a_{i,j}^{(c)}$ are computed as follows:
|
| 388 |
+
|
| 389 |
+
$$
|
| 390 |
+
a_{i,j}^{(c)} = \frac{\exp\left(\mathrm{e}_{i,j}^{\prime\top} \mathrm{h}_a^{(c)}\right)}{\sum_{m,n \in \mathcal{D}} \exp\left(\mathrm{e}_{m,n}^{\prime\top} \mathrm{h}_a^{(c)}\right)}, \quad p_{i,j}^{(c)} = \operatorname{sigmoid}\left(W_{cls}^{(c)} e_{i,j} + b_{cls}^{(c)}\right), \tag{12}
|
| 391 |
+
$$
|
| 392 |
+
|
| 393 |
+
$$
|
| 394 |
+
p^{(c)} = \sum_{m,n \in \mathcal{D}} a_{m,n}^{(c)} p_{m,n}^{(c)}, \quad c \in [1, C].
|
| 395 |
+
$$
|
| 396 |
+
|
| 397 |
+
The final objective function is $L = -\sum_{c \in \mathcal{T}} \log p^{(c)} - \sum_{c \in \mathcal{F} \setminus \mathcal{T}} \log (1 - p^{(c)})$ . In the semi-supervised slot-filling and NER tasks, we let the model predict labels first and then pick the span with the maximum attention weight for each label.
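The per-label attention of Equation 12 and the span-selection step can be sketched as follows. The `scorers` callables are hypothetical stand-ins for $\operatorname{sigmoid}(W_{cls}^{(c)} e_{i,j} + b_{cls}^{(c)})$, and all representations are plain float lists for illustration.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def multilabel_mil(span_reprs, topdown_reprs, queries, scorers):
    """For each class c: attention over spans using its own query
    vector h_a^{(c)}, then an attention-weighted sigmoid score p^{(c)}.
    Also returns, per class, the index of the span with the maximum
    attention weight (used for span selection at prediction time)."""
    preds, picked = [], []
    for h_a, score in zip(queries, scorers):
        logits = [sum(e * h for e, h in zip(rep, h_a))
                  for rep in topdown_reprs]
        m = max(logits)
        exps = [math.exp(l - m) for l in logits]
        a = [e / sum(exps) for e in exps]          # a_{i,j}^{(c)}
        preds.append(sum(ai * score(rep)
                         for ai, rep in zip(a, span_reprs)))
        picked.append(max(range(len(a)), key=a.__getitem__))
    return preds, picked
```

Each `preds[c]` is a convex combination of sigmoid outputs, so it stays in $(0, 1)$ and can feed the binary log-likelihood objective directly.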
|
| 398 |
+
|
| 399 |
+
# A.8 ABOUT THE CONDITIONAL INDEPENDENCE ASSUMPTION
|
| 400 |
+
|
| 401 |
+
We argue that the independence assumption used in our objective is actually weaker than the one used in conventional multi-label classification. Formally, conventional multi-label classification is the problem of finding a model that maps inputs $\mathbf{x}$ to binary vectors $\mathbf{y}$ ; that is, it assigns a value of 0 or 1 to each element (label) of $\mathbf{y}$ . The objective of multi-label classification is thus to minimize $-\log P(\bigcap_{i\in \mathcal{T}}y_i = 1,\bigcap_{j\in \mathcal{O}}y_j = 0|x)$ , where $\mathcal{T}$ denotes the indices of the gold labels and $\mathcal{O}$ denotes the indices not in $\mathcal{T}$ . This is impossible to estimate tractably without introducing some conditional independence assumption. By assuming the states of the labels are independent of each other, we have:
|
| 402 |
+
|
| 403 |
+
$$
|
| 404 |
+
P\left(\bigcap_{i \in \mathcal{T}} y_i = 1, \bigcap_{j \in \mathcal{O}} y_j = 0 | x\right) \approx P\left(\bigcap_{i \in \mathcal{T}} y_i = 1 | x\right) \cdot P\left(\bigcap_{j \in \mathcal{O}} y_j = 0 | x\right) \tag{13}
|
| 405 |
+
$$
|
| 406 |
+
|
| 407 |
+
$$
|
| 408 |
+
\log P\left(\bigcap_{i \in \mathcal{T}} y_i = 1 | x\right) \approx \log \prod_{i \in \mathcal{T}} P\left(y_i = 1 | x\right) = \sum_{i \in \mathcal{T}} \log P\left(y_i = 1 | x\right) \tag{14}
|
| 409 |
+
$$
|
| 410 |
+
|
| 411 |
+
$$
|
| 412 |
+
\log P\left(\bigcap_{j \in \mathcal{O}} y_j = 0 | x\right) \approx \log \prod_{j \in \mathcal{O}} P\left(y_j = 0 | x\right) = \sum_{j \in \mathcal{O}} \log P\left(y_j = 0 | x\right) \tag{15}
|
| 413 |
+
$$
|
| 414 |
+
|
| 415 |
+
which can finally be reformulated as the well-known binary cross-entropy loss $-\sum_{i} \left[ \hat{y}_i \log y_i + (1 - \hat{y}_i) \log (1 - y_i) \right]$ , where $\hat{y}$ is the ground truth and $y$ is the output probability of the model.
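The equivalence between the factorized negative log-likelihood and binary cross entropy can be checked numerically. The labels and probabilities below are arbitrary illustrative values.

```python
import math

y_hat = [1, 0, 1, 0]          # ground truth: T = {0, 2}, O = {1, 3}
y = [0.9, 0.2, 0.7, 0.4]      # model output probabilities

# negative log of prod_{i in T} P(y_i = 1 | x) * prod_{j in O} P(y_j = 0 | x)
nll = -sum(math.log(p if t == 1 else 1 - p) for t, p in zip(y_hat, y))

# binary cross entropy, summed over labels
bce = -sum(t * math.log(p) + (1 - t) * math.log(1 - p)
           for t, p in zip(y_hat, y))

assert abs(nll - bce) < 1e-12  # the two quantities coincide exactly
```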
|
| 416 |
+
|
| 417 |
+
The logic of Equation 2 is similar to the above equations. $P(\hat{t}^{[\mathcal{T}\subseteq \mathcal{Y}(\hat{t})]}|t)$ is equivalent to $P(\bigcap_{i\in \mathcal{T}}y_i = 1|x)$ and $P(\hat{t}^{[\mathcal{O}\cap \mathcal{Y}(\hat{t})\neq \phi ]}|t)$ is equivalent to $P(\bigcap_{j\in \mathcal{O}}y_j = 0|x)$ . But we don't require the independence assumption to estimate the latter.
|
| 418 |
+
|
| 419 |
+
# A.9 REAL LABEL TREES SAMPLED FROM SYMBOLIC-NEURAL $-t / - e$ ON SST-2
|
| 420 |
+
|
| 421 |
+

|
| 422 |
+
|
| 423 |
+

|
| 424 |
+
|
| 425 |
+

|
| 426 |
+
|
| 427 |
+

|
| 428 |
+
|
| 429 |
+

|
| 430 |
+
|
| 431 |
+

|
| 432 |
+
# A.10 REAL LABEL TREES SAMPLED FROM SYMBOLIC-NEURAL $-t / - e$ ON SST-2
|
| 433 |
+
|
| 434 |
+

|
| 435 |
+
|
| 436 |
+

|
| 437 |
+
|
| 438 |
+

|
| 439 |
+
|
| 440 |
+

|
| 441 |
+
|
| 442 |
+
# A.11 REAL LABEL TREES SAMPLED FROM SYMBOLIC-NEURAL $-t / - e$ ON COLA
|
| 443 |
+
|
| 444 |
+

|
| 445 |
+
|
| 446 |
+

|
| 447 |
+
|
| 448 |
+
# A.12 SAMPLED LABEL TREES IN ATIS
|
| 449 |
+
|
| 450 |
+
For qualitative observation, we sample label trees from Neural-Symbolic $-t / - e$ and Neural-Symbolic $+t / - e$ . Ground truths are annotated in brackets.
|
| 451 |
+
|
| 452 |
+

|
| 453 |
+
Figure 10: The label tree generated by Neural-Symbolic w/o the topdown encoder.
|
| 454 |
+
|
| 455 |
+

|
| 456 |
+
Figure 11: The label tree generated by Neural-SymbolicTopdown
|
| 457 |
+
|
| 458 |
+
# A.13 SAMPLED LABEL TREES IN NAVIGATOR
|
| 459 |
+
|
| 460 |
+
2:request-route, 3:appreciate, 4:request_address, 6:navigate
|
| 461 |
+
|
| 462 |
+

|
| 463 |
+
Figure 12: The label tree generated by Neural-Symbolic
|
| 464 |
+
|
| 465 |
+

|
| 466 |
+
Figure 13: The label tree generated by Neural-SymbolicTopdown
|
| 467 |
+
|
| 468 |
+

|
| 469 |
+
Figure 14: The label tree generated by Neural-SymbolicTopdown
|
| 470 |
+
|
| 471 |
+

|
| 472 |
+
Figure 15: The label tree generated by Neural-SymbolicTopdown
|
2023/A Multi-Grained Self-Interpretable Symbolic-Neural Model For Single_Multi-Labeled Text Classification/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:12bdd099250b7b73c45570aa868e1f319c0efeb85884972cd732a6d942ba4b0b
|
| 3 |
+
size 1353274
|
2023/A Multi-Grained Self-Interpretable Symbolic-Neural Model For Single_Multi-Labeled Text Classification/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2023/A Neural Mean Embedding Approach for Back-door and Front-door Adjustment/9566ef4e-e5d9-46fe-acb7-0fc0ac7c0dff_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2023/A Neural Mean Embedding Approach for Back-door and Front-door Adjustment/9566ef4e-e5d9-46fe-acb7-0fc0ac7c0dff_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2023/A Neural Mean Embedding Approach for Back-door and Front-door Adjustment/9566ef4e-e5d9-46fe-acb7-0fc0ac7c0dff_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:bb1bfd804e891374c5cb2d0d9015e1539b9e581080376cd2564c694be93b0f78
|
| 3 |
+
size 502433
|
2023/A Neural Mean Embedding Approach for Back-door and Front-door Adjustment/full.md
ADDED
|
@@ -0,0 +1,1066 @@
|
| 1 |
+
# A NEURAL MEAN EMBEDDING APPROACH FOR BACK-DOOR AND FRONT-DOOR ADJUSTMENT
|
| 2 |
+
|
| 3 |
+
Liyuan Xu
|
| 4 |
+
|
| 5 |
+
Gatsby Unit
|
| 6 |
+
|
| 7 |
+
liyuan.jo.19@ucl.ac.uk
|
| 8 |
+
|
| 9 |
+
Arthur Gretton
|
| 10 |
+
|
| 11 |
+
Gatsby Unit
|
| 12 |
+
|
| 13 |
+
arthur.gretton@gmail.com
|
| 14 |
+
|
| 15 |
+
# ABSTRACT
|
| 16 |
+
|
| 17 |
+
We consider the estimation of average and counterfactual treatment effects, under two settings: back-door adjustment and front-door adjustment. The goal in both cases is to recover the treatment effect without having access to the hidden confounder. This objective is attained by first estimating the conditional mean of the desired outcome variable given relevant covariates (the "first stage" regression), and then taking the (conditional) expectation of this function as the "second stage" procedure. We propose to compute these conditional expectations directly, via a regression onto the learned input features of the first stage, thus avoiding the need for sampling or density estimation. All functions and features (and in particular, the output features in the second stage) are neural networks learned adaptively from data, with the sole requirement that the final layer of the first stage should be linear. The proposed method is shown to converge to the true causal parameter, and outperforms recent state-of-the-art methods on challenging causal benchmarks, including settings involving high-dimensional image data.
|
| 18 |
+
|
| 19 |
+
# 1 INTRODUCTION
|
| 20 |
+
|
| 21 |
+
The goal of causal inference from observational data is to predict the effect of our actions, or treatments, on an outcome without performing interventions. Questions of interest include "what is the effect of smoking on life expectancy?", or counterfactual questions such as "given the observed health outcome for a smoker, how long would they have lived had they quit smoking?" Answering these questions becomes challenging when a confounder exists that affects both the treatment and the outcome, and thereby biases the estimation. Causal estimation requires us to correct for this confounding bias.
|
| 22 |
+
|
| 23 |
+
A popular assumption in causal inference is the no unmeasured confounder requirement, which states that we observe all the confounders that bias the estimation. Although a number of causal inference methods have been proposed under this assumption (Hill, 2011; Shalit et al., 2017; Shi et al., 2019; Schwab et al., 2020), it rarely holds in practice. In the smoking example, the confounder could be one's genetic characteristics or social status, which are difficult to measure for both technical and ethical reasons.
|
| 24 |
+
|
| 25 |
+
To address this issue, Pearl (1995) proposed back-door adjustment and front-door adjustment, which recover the causal effect in the presence of hidden confounders using a back-door variable or front-door variable, respectively. The back-door variable is a covariate that blocks all causal effects directed from the confounder to the treatment. In health care, patients may have underlying predispositions to illness due to genetic or social factors (hidden), which cause measurable symptoms. The symptoms can be used as the back-door variable if the treatment is chosen based on these.
|
| 26 |
+
|
| 27 |
+
By contrast, a front-door variable blocks the path from treatment to outcome. In perhaps the best-known example, the amount of tar in a smoker's lungs serves as a front-door variable, since it is increased by smoking, shortens life expectancy, and has no direct link to underlying (hidden) sociological traits. Pearl (1995) showed that causal quantities can be obtained by taking the (conditional) expectation of the conditional average outcome.
|
| 28 |
+
|
| 29 |
+
While Pearl (1995) only considered the discrete case, this framework was extended to the continuous case by Singh et al. (2020), using two-stage regression (a review of this and other recent approaches
|
| 30 |
+
|
| 31 |
+
for the continuous case is given in Section 5). In the first stage, the approach regresses from the relevant covariates to the outcome of interest, expressing the function as a linear combination of non-linear feature maps. Then, in the second stage, the causal parameters are estimated by learning the (conditional) expectation of the non-linear feature map used in the first stage. Unlike competing methods (Colangelo & Lee, 2020; Kennedy et al., 2017), two-stage regression avoids fitting probability densities, which is challenging in high-dimensional settings (Wasserman, 2006, Section 6.5). Singh et al. (2020)'s method is shown to converge to the true causal parameters and exhibits better empirical performance than competing methods.
|
| 32 |
+
|
| 33 |
+
One limitation of the methods in Singh et al. (2020) is that they use fixed pre-specified feature maps from reproducing kernel Hilbert spaces, which have a limited expressive capacity when data are complex (images, text, audio). To overcome this, we propose to employ a neural mean embedding approach to learning task-specific adaptive feature dictionaries. At a high level, we first employ a neural network with a linear final layer in the first stage. For the second stage, we learn the (conditional) mean of the stage 1 features in the penultimate layer, again with a neural net. The approach develops the technique of Xu et al. (2021a;b) and enables the model to capture complex causal relationships for high-dimensional covariates and treatments. Neural network feature means are also used to represent (conditional) probabilities in other machine learning settings, such as representation learning (Zaheer et al., 2017) and approximate Bayesian inference (Xu et al., 2022). We derive the consistency of the method based on the Rademacher complexity, a result of which is of independent interest and may be relevant in establishing consistency for broader categories of neural mean embedding approaches, including Xu et al. (2021a;b). We empirically show that the proposed method performs better than other state-of-the-art neural causal inference methods, including those using kernel feature dictionaries.
|
| 34 |
+
|
| 35 |
+
This paper is structured as follows. In Section 2, we introduce the causal parameters we are interested in and give a detailed description of the proposed method in Section 3. The theoretical analysis is presented in Section 4, followed by a review of related work in Section 5. We demonstrate the empirical performance of the proposed method in Section 6, covering two settings: a classical back-door adjustment problem with a binary treatment, and a challenging back-door and front-door setting where the treatment consists of high-dimensional image data.
|
| 36 |
+
|
| 37 |
+
# 2 PROBLEM SETTING
|
| 38 |
+
|
| 39 |
+
In this section, we introduce the causal parameters of interest and the methods to estimate them, namely back-door adjustment and front-door adjustment. Throughout the paper, we denote a random variable by a capital letter (e.g. $A$ ), a realization of this random variable in lowercase (e.g. $a$ ), and the set in which a random variable takes values by a calligraphic letter (e.g. $\mathcal{A}$ ). We assume the data is generated from a distribution $P$ .
|
| 40 |
+
|
| 41 |
+
Causal Parameters We introduce the target causal parameters using the potential outcome framework (Rubin, 2005). Let the treatment and the observed outcome be $A \in \mathcal{A}$ and $Y \in \mathcal{Y} \subseteq [-R,R]$ . We denote the potential outcome given treatment $a$ as $Y^{(a)} \in \mathcal{Y}$ . Here, we assume no interference, which means that we observe $Y = Y^{(a)}$ when $A = a$ . We denote the hidden confounder as $U \in \mathcal{U}$ and assume conditional exchangeability: $\forall a \in \mathcal{A}$ , $Y^{(a)} \perp A|U$ , which means that the potential outcomes are independent of the treatment assignment given $U$ . A typical causal graph is shown in Figure 1a. We may additionally consider an observable confounder $O \in \mathcal{O}$ , which is discussed in Appendix C.
|
| 42 |
+
|
| 43 |
+
A first goal of causal inference is to estimate the Average Treatment Effect (ATE) $^{1}$ , $\theta_{ATE}(a) = \mathbb{E}\left[Y^{(a)}\right]$ , which is the average potential outcome under treatment $A = a$ . We also consider the Average Treatment Effect on the Treated (ATT), $\theta_{\mathrm{ATT}}(a;a^{\prime}) = \mathbb{E}\left[Y^{(a)}|A = a^{\prime}\right]$ , which is the expected potential outcome of $A = a$ for those who received the treatment $A = a^{\prime}$ . Given the no-interference and conditional exchangeability assumptions, these causal parameters can be written in the following form.
|
| 44 |
+
|
| 45 |
+
Proposition 1 (Rosenbaum & Rubin, 1983; Robins, 1986). Given an unobserved confounder $U$ satisfying no interference and conditional exchangeability, we have
|
| 46 |
+
|
| 47 |
+
$$
|
| 48 |
+
\theta_{\mathrm{ATE}}(a) = \mathbb{E}_U\left[\mathbb{E}\left[Y | A = a, U\right]\right], \quad \theta_{\mathrm{ATT}}(a; a^{\prime}) = \mathbb{E}_U\left[\mathbb{E}\left[Y | A = a, U\right] | A = a^{\prime}\right].
|
| 49 |
+
$$
|
| 50 |
+
|
| 51 |
+
If we observe an additional confounder $O$ , we may also consider the conditional average treatment effect (CATE): the average potential outcome for the sub-population with $O = o$ , which is discussed
|
| 52 |
+
|
| 53 |
+

|
| 54 |
+
(a) General causal graph
|
| 55 |
+
|
| 56 |
+

|
| 57 |
+
(b) Back-door adjustment
|
| 58 |
+
Figure 1: Causal graphs we consider. Dotted circles denote unobservable variables.
|
| 59 |
+
|
| 60 |
+

|
| 61 |
+
(c) Front-door adjustment
in Appendix C. Note that since the confounder $U$ is not observed, we cannot recover these causal parameters from $(A,Y)$ alone.
Back-door Adjustment In back-door adjustment, we assume access to the back-door variable $X \in \mathcal{X}$ , which blocks all causal paths from the unobserved confounder $U$ to the treatment $A$ . See Figure 1b for a typical causal graph. Given the back-door variable, the causal parameters can be written in terms of the observable variables $(A, Y, X)$ as follows.
Proposition 2 (Pearl, 1995, Theorem 1). Given the back-door variable $X$ , we have
$$
\theta_{\mathrm{ATE}}(a) = \mathbb{E}_{X}\left[g(a, X)\right], \qquad \theta_{\mathrm{ATT}}(a; a') = \mathbb{E}_{X}\left[g(a, X) \,\big|\, A = a'\right],
$$
where $g(a,x) = \mathbb{E}[Y|A = a,X = x]$ .
By comparing Proposition 2 to Proposition 1, we can see that the causal parameters can be learned by treating the back-door variable $X$ as the only "confounder", despite the presence of the additional hidden confounder $U$ . Hence, we may apply any method based on the "no unmeasured confounder" assumption to back-door adjustment.
Front-door Adjustment Another adjustment for causal estimation is front-door adjustment, which uses the causal mechanism to determine the causal effect. Assume we observe the front-door variable $M \in \mathcal{M}$ , which blocks all causal paths from treatment $A$ to outcome $Y$ , as in Figure 1c. Then, we can recover the causal parameters as follows.
Proposition 3 (Pearl, 1995, Theorem 2). Given the front-door variable $M$ , we have
$$
\theta_{\mathrm{ATE}}(a) = \mathbb{E}_{A'}\left[\mathbb{E}_{M}\left[g(A', M) \,\big|\, A = a\right]\right], \qquad \theta_{\mathrm{ATT}}(a; a') = \mathbb{E}_{M}\left[g(a', M) \,\big|\, A = a\right],
$$
where $g(a, m) = \mathbb{E}[Y | A = a, M = m]$ and $A' \in \mathcal{A}$ is a random variable that follows the same distribution as treatment $A$ .
Unlike the back-door case, we cannot naively apply methods based on the "no unmeasured confounder" assumption here, since Proposition 3 takes a different form from Proposition 1.
# 3 ALGORITHMS
In this section, we present our proposed methods. We first present the case with back-door adjustment and then move to front-door adjustment. The algorithm is summarized in Appendix A.
Back-door adjustment The algorithm consists of two stages. In the first stage, we learn the conditional expectation $g(a,x) = \mathbb{E}\left[Y|A = a,X = x\right]$ with a specific model form. We then compute the causal parameter by estimating the expectation of the input features of $g$ .
The conditional expectation $g(a, x)$ is learned by regressing $Y$ on $(A, X)$ . Here, we consider a specific model $g(a, x) = \boldsymbol{w}^\top (\phi_A(a) \otimes \phi_X(x))$ , where $\phi_A : \mathcal{A} \to \mathbb{R}^{d_1}$ , $\phi_X : \mathcal{X} \to \mathbb{R}^{d_2}$ are feature maps represented by neural networks, $\boldsymbol{w} \in \mathbb{R}^{d_1 d_2}$ is a trainable weight vector, and $\otimes$ denotes the tensor product $\boldsymbol{a} \otimes \boldsymbol{b} = \operatorname{vec}(\boldsymbol{ab}^\top)$ . This tensor form of $g(a, x)$ explicitly separates the treatment of the features of $X$ from those of $A$ ; when $X$ is of much higher dimension than $A$ , concatenating both as a single input tends to downplay the information in $A$ . In addition, we can take advantage of linearity and focus on estimating the relevant (conditional) expectation, as discussed later.
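As a concrete illustration, the tensor-product model can be sketched in a few lines of NumPy. The fixed random feature maps below (the matrices `W_a`, `W_x`, and all dimensions) are hypothetical stand-ins for the trained networks $\phi_A$ , $\phi_X$ :

```python
import numpy as np

def tensor_product(u, v):
    # u ⊗ v = vec(u v^T): outer product flattened to a vector of length d1 * d2
    return np.outer(u, v).ravel()

rng = np.random.default_rng(0)
d1, d2 = 3, 4
W_a = rng.normal(size=(d1, 2))  # stand-in for the treatment feature net phi_A
W_x = rng.normal(size=(d2, 5))  # stand-in for the back-door feature net phi_X
phi_A = lambda a: np.tanh(W_a @ a)
phi_X = lambda x: np.tanh(W_x @ x)

w = rng.normal(size=d1 * d2)    # trainable weight vector in R^{d1 d2}

def g(a, x):
    # g(a, x) = w^T (phi_A(a) ⊗ phi_X(x))
    return w @ tensor_product(phi_A(a), phi_X(x))
```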
Given data $\{(a_i, y_i, x_i)\}_{i=1}^n \sim P$ of size $n$ , the feature maps $\phi_A, \phi_X$ and the weight $\boldsymbol{w}$ can be trained by minimizing the following empirical loss:
$$
\hat{\mathcal{L}}_1^{\mathcal{X}}\left(\boldsymbol{w}, \phi_A, \phi_X\right) = \frac{1}{n} \sum_{i=1}^{n} \left(y_i - \boldsymbol{w}^{\top}\left(\phi_A(a_i) \otimes \phi_X(x_i)\right)\right)^2. \tag{1}
$$
We may add any regularization term to this loss, such as weight decay $\lambda \| \pmb{w} \|^2$ . Let the minimizer of the loss $\hat{\mathcal{L}}_1^{\mathcal{X}}$ be $\hat{\pmb{w}}, \hat{\phi}_A, \hat{\phi}_X = \arg \min \hat{\mathcal{L}}_1^{\mathcal{X}}$ and the learned regression function be $\hat{g}(a,x) = \hat{\pmb{w}}^\top (\hat{\phi}_A(a) \otimes \hat{\phi}_X(x))$ . Then, by substituting $\hat{g}$ for $g$ in Proposition 2, we have
$$
\theta_{\mathrm{ATE}}(a) \simeq \hat{\boldsymbol{w}}^{\top}\left(\hat{\phi}_A(a) \otimes \mathbb{E}\left[\hat{\phi}_X(X)\right]\right), \qquad \theta_{\mathrm{ATT}}(a; a') \simeq \hat{\boldsymbol{w}}^{\top}\left(\hat{\phi}_A(a) \otimes \mathbb{E}\left[\hat{\phi}_X(X) \,\big|\, A = a'\right]\right).
$$
This is the advantage of assuming the specific form $g(a,x) = \pmb{w}^{\top}(\phi_A(a)\otimes \phi_X(x))$ : by linearity, we can recover the causal parameters by estimating $\mathbb{E}[\hat{\phi}_X(X)]$ and $\mathbb{E}[\hat{\phi}_X(X)|A = a']$ . Such (conditional) expectations of features are called (conditional) mean embeddings, and thus we name our method the "neural (conditional) mean embedding".
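The linearity at work here can be checked numerically: averaging the model's predictions over samples of $X$ gives exactly the prediction evaluated at the empirical mean embedding $\frac{1}{n}\sum_i \hat{\phi}_X(x_i)$ . A minimal sketch, where the feature maps and dimensions are again illustrative stand-ins rather than the trained networks:

```python
import numpy as np

rng = np.random.default_rng(1)
d1, d2 = 3, 4
W_a, W_x = rng.normal(size=(d1, 2)), rng.normal(size=(d2, 5))
phi_A = lambda a: np.tanh(W_a @ a)
phi_X = lambda x: np.tanh(W_x @ x)
w = rng.normal(size=d1 * d2)

def g(a, x):
    return w @ np.outer(phi_A(a), phi_X(x)).ravel()

xs = rng.normal(size=(200, 5))  # samples of the back-door variable X
a = np.array([0.3, -0.7])       # a fixed treatment value

# Average of predictions over the sample ...
avg_of_predictions = np.mean([g(a, x) for x in xs])
# ... equals the prediction at the empirical mean embedding, by linearity.
mean_embedding = np.mean([phi_X(x) for x in xs], axis=0)
prediction_at_embedding = w @ np.outer(phi_A(a), mean_embedding).ravel()
```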
We can estimate the marginal expectation $\mathbb{E}[\hat{\phi}_X(X)]$ as a simple empirical average, $\mathbb{E}[\hat{\phi}_X(X)] \simeq \frac{1}{n}\sum_{i=1}^{n}\hat{\phi}_X(x_i)$ . The conditional mean embedding $\mathbb{E}[\hat{\phi}_X(X)|A = a']$ requires more care, however: it can be learned by a technique proposed in Xu et al. (2021a), in which we train another regression function from the treatment $A$ to the back-door feature $\hat{\phi}_X(X)$ . Specifically, we estimate $\mathbb{E}[\hat{\phi}_X(X)|A = a']$ by $\hat{\boldsymbol{f}}_{\hat{\phi}_X}(a')$ , where the regression function $\hat{\boldsymbol{f}}_{\hat{\phi}_X} : \mathcal{A} \to \mathbb{R}^{d_2}$ is given by
$$
\hat{\boldsymbol{f}}_{\hat{\phi}_X} = \underset{\boldsymbol{f}: \mathcal{A} \rightarrow \mathbb{R}^{d_2}}{\arg\min}\ \hat{\mathcal{L}}_2^{\mathcal{X}}(\boldsymbol{f}; \hat{\phi}_X), \quad \hat{\mathcal{L}}_2^{\mathcal{X}}(\boldsymbol{f}; \phi_X) = \frac{1}{n} \sum_{i=1}^{n} \left\|\phi_X(x_i) - \boldsymbol{f}(a_i)\right\|^2. \tag{2}
$$
Here, $\| \cdot \|$ denotes the Euclidean norm. The loss $\hat{\mathcal{L}}_2^{\mathcal{X}}$ may include an additional regularization term, such as weight decay on the parameters of $\boldsymbol{f}$ . We then have
$$
\hat{\theta}_{\mathrm{ATE}}(a) = \hat{\boldsymbol{w}}^{\top}\left(\hat{\phi}_A(a) \otimes \frac{1}{n} \sum_{i=1}^{n} \hat{\phi}_X(x_i)\right), \quad \hat{\theta}_{\mathrm{ATT}}(a; a') = \hat{\boldsymbol{w}}^{\top}\left(\hat{\phi}_A(a) \otimes \hat{\boldsymbol{f}}_{\hat{\phi}_X}(a')\right)
$$
as the final estimator for the back-door adjustment. The estimator for the ATE reduces to the average of the predictions, $\hat{\theta}_{\mathrm{ATE}}(a) = \frac{1}{n}\sum_{i=1}^{n}\hat{g}(a,x_i)$ . This coincides with other neural network causal methods (Shalit et al., 2017; Chernozhukov et al., 2022b), which do not assume $g(a,x) = \pmb{w}^{\top}(\phi_A(a)\otimes \phi_X(x))$ . As we have seen, however, this tensor product formulation is essential for estimating the ATT by back-door adjustment. It will also be necessary for the front-door adjustment, as we will see next.
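A minimal end-to-end sketch of the back-door ATE estimator follows, with simple polynomial feature maps in place of trained networks (the feature maps, dimensions, and toy data-generating process are all assumptions for illustration). The test checks the reduction noted above: $\hat{\theta}_{\mathrm{ATE}}(a)$ equals the average of the fitted regression's predictions $\frac{1}{n}\sum_i \hat{g}(a, x_i)$ :

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
A = rng.normal(size=n)                       # scalar treatment
X = rng.normal(size=(n, 2))                  # back-door variable
Y = A * X[:, 0] + rng.normal(scale=0.1, size=n)

# Hypothetical fixed feature maps standing in for trained phi_A, phi_X.
phi_A = lambda a: np.array([1.0, a, a ** 2])
phi_X = lambda x: np.array([1.0, x[0], x[1], x[0] * x[1]])
d = 3 * 4

# Stage 1: ridge regression for w on the tensor-product features.
Phi = np.stack([np.outer(phi_A(a), phi_X(x)).ravel() for a, x in zip(A, X)])
lam = 1e-3
w_hat = np.linalg.solve(Phi.T @ Phi / n + lam * np.eye(d), Phi.T @ Y / n)
g_hat = lambda a, x: w_hat @ np.outer(phi_A(a), phi_X(x)).ravel()

# Stage 2: empirical mean embedding of the back-door features.
mu_X = np.mean([phi_X(x) for x in X], axis=0)
theta_ate = lambda a: w_hat @ np.outer(phi_A(a), mu_X).ravel()
```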
**Front-door adjustment** We can obtain the estimator for front-door adjustment by following almost the same procedure as for the back-door adjustment. Given data $\{(a_i, y_i, m_i)\}_{i=1}^n$ , we again fit the regression model $\hat{g}(a, m) = \hat{\boldsymbol{w}}^\top \left( \hat{\phi}_A(a) \otimes \hat{\phi}_M(m) \right)$ by minimizing
$$
\hat{\mathcal{L}}_1^{\mathcal{M}}(\boldsymbol{w}, \phi_A, \phi_M) = \frac{1}{n} \sum_{i=1}^{n} \left(y_i - \boldsymbol{w}^{\top}\left(\phi_A(a_i) \otimes \phi_M(m_i)\right)\right)^2,
$$
where $\phi_M: \mathcal{M} \to \mathbb{R}^{d_2}$ is a feature map represented as a neural network. From Proposition 3, for $\boldsymbol{f}_{\hat{\phi}_M}(a) = \mathbb{E}\big[\hat{\phi}_M(M)\,\big|\,A = a\big]$ , we have $\theta_{\mathrm{ATE}}(a) \simeq \hat{\pmb{w}}^{\top}\left(\mathbb{E}\left[\hat{\phi}_A(A)\right] \otimes \boldsymbol{f}_{\hat{\phi}_M}(a)\right)$ and $\theta_{\mathrm{ATT}}(a;a') \simeq \hat{\pmb{w}}^{\top}\left(\hat{\phi}_A(a') \otimes \boldsymbol{f}_{\hat{\phi}_M}(a)\right)$ . Again, we estimate $\mathbb{E}[\hat{\phi}_A(A)]$ by an empirical average, and $\boldsymbol{f}_{\hat{\phi}_M}(a)$ by solving another regression problem. The final estimator for front-door adjustment is given as
$$
\hat{\theta}_{\mathrm{ATE}}(a) = \hat{\boldsymbol{w}}^{\top}\left(\frac{1}{n} \sum_{i=1}^{n} \hat{\phi}_A(a_i) \otimes \hat{\boldsymbol{f}}_{\hat{\phi}_M}(a)\right), \quad \hat{\theta}_{\mathrm{ATT}}(a; a') = \hat{\boldsymbol{w}}^{\top}\left(\hat{\phi}_A(a') \otimes \hat{\boldsymbol{f}}_{\hat{\phi}_M}(a)\right),
$$
where $\hat{\boldsymbol{f}}_{\hat{\phi}_M}$ is given by minimizing the loss $\hat{\mathcal{L}}_2^{\mathcal{M}}(\boldsymbol{f}; \hat{\phi}_M) = \frac{1}{n}\sum_{i=1}^{n}\|\hat{\phi}_M(m_i) - \boldsymbol{f}(a_i)\|^2$ (with an additional regularization term).
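The second-stage regression $\hat{\boldsymbol{f}}_{\hat{\phi}_M}(a) \approx \mathbb{E}[\hat{\phi}_M(M)|A=a]$ can be sketched as an ordinary multi-output least-squares problem. The mediator model, feature map, and polynomial basis below are illustrative assumptions, not the trained networks:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
A = rng.normal(size=n)
M = 0.8 * A + rng.normal(scale=0.2, size=n)   # front-door variable driven by A

# Hypothetical mediator feature map standing in for a trained phi_M.
phi_M = lambda m: np.array([1.0, m, m ** 2])

# Fit f: A -> R^{d2} by least squares on a polynomial basis of A
# (one linear readout per output coordinate of phi_M).
psi = lambda a: np.array([1.0, a, a ** 2])
Psi = np.stack([psi(a) for a in A])
targets = np.stack([phi_M(m) for m in M])
B, *_ = np.linalg.lstsq(Psi, targets, rcond=None)
f_hat = lambda a: psi(a) @ B                  # estimates E[phi_M(M) | A = a]
```

Under this toy model, $\mathbb{E}[\phi_M(M)|A=a] = (1,\ 0.8a,\ 0.64a^2 + 0.04)$ , so `f_hat` should approach that vector as $n$ grows.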
# 4 THEORETICAL ANALYSIS
In this section, we prove the consistency of the proposed method. We focus on the back-door adjustment case, since the consistency of front-door adjustment can be derived identically. The proposed method consists of two successive regression problems. In the first stage, we learn the conditional expectation $g$ , and then in the second stage, we estimate the feature embeddings. First, we show each stage's consistency, then present the overall convergence rate to the causal parameter.
Consistency for the first stage: In this section, we consider the hypothesis space of $g$ as
$$
\begin{aligned}
\mathcal{H}_g = \Big\{ \boldsymbol{w}^{\top}\left(\phi_A(a) \otimes \phi_X(x)\right) \;\Big|\;\; & \boldsymbol{w} \in \mathbb{R}^{d_1 d_2},\ \phi_A(a) \in \mathbb{R}^{d_1},\ \phi_X(x) \in \mathbb{R}^{d_2}, \\
& \|\boldsymbol{w}\|_1 \leq R,\ \max_{a \in \mathcal{A}} \|\phi_A(a)\|_{\infty} \leq 1,\ \max_{x \in \mathcal{X}} \|\phi_X(x)\|_{\infty} \leq 1 \Big\}.
\end{aligned}
$$
Here, we denote the $\ell_1$ -norm and infinity norm of a vector $\boldsymbol{b} \in \mathbb{R}^d$ as $\| \boldsymbol{b} \|_1 = \sum_{i=1}^d |b_i|$ and $\| \boldsymbol{b} \|_\infty = \max_{i \in [d]} |b_i|$ . Note that from the inequality $\| \phi_A(a) \otimes \phi_X(x) \|_\infty \leq \| \phi_A(a) \|_\infty \| \phi_X(x) \|_\infty$ and Hölder's inequality, we can show that $h(a, x) \in [-R, R]$ for all $h \in \mathcal{H}_g$ . First, we discuss the richness of this hypothesis space in the following theorem.
Theorem 1. Let $\mathcal{A},\mathcal{X}\subset \mathbb{R}^d$ be compact. Given sufficiently large $R,d_{1},d_{2}$ , for any continuous function $f:\mathcal{A}\times \mathcal{X}\to \mathbb{R}$ and constant $\varepsilon >0$ , there exists $h\in \mathcal{H}_g$ which satisfies $\sup_{a,x}|f(a,x) - h(a,x)|\leq \varepsilon$ .
The proof uses a modified version of the universal approximation theorem (Cybenko, 1989) for neural networks, given in Appendix B.1. Theorem 1 tells us that we can approximate any continuous function $f$ to arbitrary accuracy, which demonstrates the richness of our function class. Given this hypothesis space, the following lemma bounds the deviation between the estimated conditional expectation $\hat{g}$ and the true one.
Lemma 1. Given data $S = \{(a_{i},y_{i},x_{i})\}_{i = 1}^{n}$ , let the minimizer of the loss $\hat{\mathcal{L}}_1^{\mathcal{X}}$ be $\hat{g} = \arg \min \hat{\mathcal{L}}_1^{\mathcal{X}}$ . If the true conditional expectation $g$ is in the hypothesis space, $g\in \mathcal{H}_g$ , then w.p. at least $1 - 2\delta$ , we have
$$
\left\| g - \hat{g} \right\|_{P(A,X)} \leq \sqrt{16 R \hat{\mathfrak{R}}_S(\mathcal{H}_g) + 8 R^2 \sqrt{\log(2/\delta)/(2n)}},
$$
where $\hat{\mathfrak{R}}_S(\mathcal{H}_g)$ is empirical Rademacher complexity of $\mathcal{H}_g$ given data $S$ .
The proof is given in Appendix B.3. Here, we present the empirical Rademacher complexity when we apply a feed-forward neural network for features.
Lemma 2. The empirical Rademacher complexity $\hat{\mathfrak{R}}_S(\mathcal{H}_g)$ scales as $\hat{\mathfrak{R}}_S(\mathcal{H}_g) \leq O(C^L / \sqrt{n})$ for some constant $C$ if we use a specific $L$ -layer neural net for features $\phi_A, \phi_X$ .
See Lemma 7 in Appendix B.3 for the detailed expression of the upper bound. Note that this may be of independent interest, since a similar hypothesis class is considered in Xu et al. (2021a;b), and no explicit upper bound on the empirical Rademacher complexity is provided in those works.
Consistency for the second stage: Next, we consider the second stage of regression. In back-door adjustment, we estimate the feature embedding $\mathbb{E}[\hat{\phi}_X(X)]$ and the conditional feature embedding $\mathbb{E}[\hat{\phi}_X(X)|A = a']$ . We first state the consistency of the estimation of marginal expectation, which can be shown by Hoeffding's inequality.
Lemma 3. Given data $\{x_{i}\}_{i = 1}^{n}$ and feature map $\hat{\phi}_X$ , w.p. at least $1 - \delta$ , we have
$$
\left\| \mathbb{E}\left[\hat{\phi}_X(X)\right] - \frac{1}{n} \sum_{i=1}^{n} \hat{\phi}_X(x_i) \right\|_{\infty} \leq \sqrt{\frac{2 \log(2 d_2/\delta)}{n}}.
$$
For conditional feature embedding $\mathbb{E}[\hat{\phi}_X(X)|A = a']$ , we solve the regression problem $\hat{f}_{\hat{\phi}_X} = \arg \min_{\pmb{f}}\hat{\mathcal{L}}_2^{\mathcal{X}}(\pmb {f};\hat{\phi}_X)$ , the consistency of which is stated as follows.
Lemma 4. Let the hypothesis space $\mathcal{H}_{\boldsymbol{f}}$ be
$$
\mathcal{H}_{\boldsymbol{f}} = \left\{ a \in \mathcal{A} \rightarrow \left(f_1(a), \dots, f_{d_2}(a)\right)^{\top} \in [-1, 1]^{d_2} \;\middle|\; f_1, \dots, f_{d_2} \in \mathcal{H}_f \right\},
$$
where $\mathcal{H}_f$ is some hypothesis space of functions $f:\mathcal{A}\to [-1,1]$ . Let the true function be $\pmb{f}_{\hat{\phi}_X}(a) = \mathbb{E}[\hat{\phi}_X(X)|A = a]$ , and assume $\pmb{f}_{\hat{\phi}_X}\in \mathcal{H}_{\pmb{f}}$ . Let $\hat{\pmb{f}}_{\hat{\phi}_X} = \arg \min_{\pmb {f}\in \mathcal{H}_{\pmb{f}}}\hat{\mathcal{L}}_2^{\mathcal{X}}(\pmb {f};\hat{\phi}_X)$ , given data $S = \{(a_i,x_i)\}_{i=1}^{n}$ . Then, we have
$$
\left\| \pmb{f}_{\hat{\phi}_X}(A) - \hat{\pmb{f}}_{\hat{\phi}_X}(A) \right\|_{P(A),\infty} \leq \sqrt{16 \hat{\mathfrak{R}}_S(\mathcal{H}_f) + 8 \sqrt{\log(2 d_2/\delta)/(2n)}}
$$
w.p. at least $1 - 2\delta$ , where $\| \pmb{f}(A) \|_{P(A),\infty} = \max_i \| f_i \|_{P(A)}$ and $\hat{\mathfrak{R}}_S(\mathcal{H}_f)$ is the empirical Rademacher complexity of $\mathcal{H}_f$ given data $S$ .
The proof is identical to that of Lemma 1. We use a neural network hypothesis class for $\mathcal{H}_f$ , whose empirical Rademacher complexity is bounded by $O(1 / \sqrt{n})$ , as discussed in Proposition 5 in Appendix B.3.
Consistency of the causal estimator Finally, we show that if these two estimators converge uniformly, we can recover the true causal parameters. To derive the consistency of the causal parameter, we put the following assumption on hypothesis spaces in order to guarantee that convergence in $\ell_2$ -norm leads to uniform convergence.
Assumption 1. For all functions $h_1, h_2 \in \mathcal{H}_g$ , there exist constants $c > 0$ and $\beta$ such that
$$
\sup_{a \in \mathcal{A}, x \in \mathcal{X}} |h_1(a, x) - h_2(a, x)| \leq \frac{1}{c} \left\| h_1(A, X) - h_2(A, X) \right\|_{P(A,X)}^{\frac{1}{\beta}}.
$$
Intuitively, this ensures that we have a non-zero probability of observing all elements in $\mathcal{A} \times \mathcal{X}$ . We can see that Assumption 1 is satisfied with $\beta = 1$ and $c = \min_{(a,x) \in \mathcal{A} \times \mathcal{X}} P(A = a, X = x)$ when the treatment and back-door variables are discrete. A similar intuition holds for the continuous case; in Appendix B.2, we show that Assumption 1 holds with $\beta = \frac{2d + 2}{2}$ when $\mathcal{A}$ , $\mathcal{X}$ are $d$ -dimensional intervals, provided the density function of $P(A, X)$ is bounded away from zero and all functions in $\mathcal{H}_g$ are Lipschitz continuous.
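The discrete case can be verified in one line (a sketch; let $(a^*, x^*)$ be the pair attaining the supremum):

$$
\|h_1 - h_2\|_{P(A,X)}^2 = \sum_{a,x} P(a,x)\,|h_1(a,x) - h_2(a,x)|^2 \;\geq\; c\, \sup_{a,x}|h_1(a,x) - h_2(a,x)|^2,
$$

so that $\sup_{a,x}|h_1 - h_2| \leq c^{-1/2}\|h_1 - h_2\|_{P(A,X)} \leq \frac{1}{c}\|h_1 - h_2\|_{P(A,X)}$ , where the last step uses $c \leq 1$ .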
Theorem 2. Under conditions in Lemmas 1 to 3 and Assumption 1, w.p. at least $1 - 4\delta$ , we have
$$
\sup_{a \in \mathcal{A}} \left| \theta_{\mathrm{ATE}}(a) - \hat{\theta}_{\mathrm{ATE}}(a) \right| \leq O\left(n^{-\frac{1}{4\beta}}\right).
$$
If we further assume that for all $\pmb{f}, \tilde{\pmb{f}}$ ,
$$
\sup_{a \in \mathcal{A}} \left\| \boldsymbol{f}(a) - \tilde{\boldsymbol{f}}(a) \right\|_{\infty} \leq \frac{1}{c'} \left\| \boldsymbol{f}(A) - \tilde{\boldsymbol{f}}(A) \right\|_{P(A),\infty}^{\frac{1}{\beta'}},
$$
then, w.p. at least $1 - 4\delta$ , we have $\sup_{a,a' \in \mathcal{A}} |\theta_{\mathrm{ATT}}(a;a') - \hat{\theta}_{\mathrm{ATT}}(a;a')| \leq O(n^{-\frac{1}{4\beta}} + n^{-\frac{1}{4\beta'}})$ .
The proof is given in Appendix B.3. This rate is slow compared to the existing work (Singh et al., 2020), which can be as fast as $O(n^{-1/4})$ . However, Singh et al. (2020) assumes that the correct regression function $g$ is in a certain reproducing kernel Hilbert space (RKHS), which is a stronger assumption than ours, which only assumes a Lipschitz hypothesis space. Deriving the matching minimax rates under the Lipschitz assumption remains a topic for future work.
# 5 RELATED WORK
While machine learning approaches to back-door adjustment have been extensively explored in recent work, including tree models (Hill, 2011; Athey et al., 2019), kernel models (Singh et al., 2020) and neural networks (Shi et al., 2019; Chernozhukov et al., 2022b; Shalit et al., 2017), most of this literature considers the binary treatment case, and few methods can be applied to continuous treatments. Schwab et al. (2020) proposed to discretize the continuous treatments, while Kennedy et al. (2017) and Colangelo & Lee (2020) conducted density estimation of $P(X)$ and $P(X|A)$ . These approaches are simple to implement but suffer from the curse of dimensionality (Wasserman, 2006, Section 6.5).
Recently, the automatic debiased machine learner (Auto-DML) approach (Chernozhukov et al., 2022a) has gained increasing attention, and can handle continuous treatments in the back-door adjustment. Consider a functional $m$ that maps $g$ to causal parameter $\theta = \mathbb{E}\left[m(g,(A,X))\right]$ . For
the ATE case, we have $m(g,(A,X)) = g(a,X)$ since $\theta_{\mathrm{ATE}}(a) = \mathbb{E}\left[g(a,X)\right]$ . We may estimate both $g$ and the Riesz representer $\alpha$ , which satisfies $\mathbb{E}\left[m(g,(A,X))\right] = \mathbb{E}\left[\alpha (A,X)g(A,X)\right]$ , by least-squares regression to obtain the causal estimator. Although Auto-DML can learn a complex causal relationship with a neural network model (Chernozhukov et al., 2022b), it requires a considerable amount of computation when the treatment is continuous, since we have to learn a different Riesz representer $\alpha$ for each treatment $a$ . Furthermore, as discussed in Appendix B.4, the error bound on $\alpha$ can grow exponentially with the dimension of the probability space, which may harm performance in high-dimensional settings.
Singh et al. (2020) proposed a feature embedding approach, in which the feature maps are specified as fixed feature maps in a reproducing kernel Hilbert space (RKHS). Although this strategy can be applied to a number of different causal parameters, the flexibility of the model is limited since it uses pre-specified features. Our main contribution is to generalize this feature embedding approach to adaptive features, which enables us to capture more complex causal relationships. Similar techniques are used in other causal inference settings, such as the deep feature instrumental variable method (Xu et al., 2021a) and deep proxy causal learning (Xu et al., 2021b).
In contrast to the back-door case, there is little literature that discusses nonlinear front-door adjustment. The idea was originally introduced for the discrete treatment setting (Pearl, 1995) and was later discussed using the linear causal model (Pearl, 2009). To the best of our knowledge, Singh et al. (2020) is the only work that considers nonlinear front-door adjustment, where fixed kernel feature dictionaries are used. We generalize this approach using adaptive neural feature dictionaries and obtain promising performance.
# 6 EXPERIMENTS
In this section, we evaluate the performance of the proposed method in two scenarios. The first considers back-door adjustment with a binary treatment, based on the IHDP dataset (Gross, 1993) and the ACIC dataset (Shimoni et al., 2018). The second tests performance with a high-dimensional treatment, based on the dSprite image dataset (Matthey et al., 2017). We first describe the training procedure applied in our proposed method, and then report the results of each benchmark. The hyperparameters used in the experiments are summarized in Appendix D.
# 6.1 TRAINING PROCEDURE
During training, we use the learning procedure proposed by Xu et al. (2021a). Let us consider the first-stage regression in the back-door adjustment, in which we use the following loss $\hat{\mathcal{L}}_1^{\mathcal{X}}$ with weight decay regularization:
$$
\hat{\mathcal{L}}_1^{\mathcal{X}}\left(\boldsymbol{w}, \phi_A, \phi_X\right) = \frac{1}{n} \sum_{i=1}^{n} \left(y_i - \boldsymbol{w}^{\top}\left(\phi_A(a_i) \otimes \phi_X(x_i)\right)\right)^2 + \lambda \|\boldsymbol{w}\|^2.
$$
To minimize $\hat{\mathcal{L}}_1^{\mathcal{X}}$ with respect to $(\pmb {w},\phi_A,\phi_X)$ , we can use the closed-form solution for the weight $\pmb{w}$ . If we fix the features $\phi_{A},\phi_{X}$ , the minimizing $\pmb{w}$ can be written as
$$
\hat{\boldsymbol{w}}(\phi_A, \phi_X) = \left(\frac{1}{n} \sum_{i=1}^{n} \phi_{A,X}(a_i, x_i)\, \phi_{A,X}(a_i, x_i)^{\top} + \lambda I\right)^{-1} \frac{1}{n} \sum_{i=1}^{n} y_i\, \phi_{A,X}(a_i, x_i),
$$
where $\phi_{A,X}(a,x) = \phi_A(a)\otimes \phi_X(x)$ . Then, we optimize the features as $\hat{\phi}_A,\hat{\phi}_X = \arg \min_{\phi_A,\phi_X}\hat{\mathcal{L}}_1^{\mathcal{X}}(\hat{\boldsymbol{w}} (\phi_A,\phi_X),\phi_A,\phi_X)$ using Adam (Kingma & Ba, 2015). We empirically found that this stabilizes learning and improves the performance of the proposed method.
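The closed-form update for $\hat{\boldsymbol{w}}$ is a standard ridge-regression solve. A small NumPy sketch (with an arbitrary feature matrix standing in for $\phi_{A,X}$ ) confirms that the solution zeroes the gradient of the regularized loss:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 200, 6
Phi = rng.normal(size=(n, d))   # rows: phi_{A,X}(a_i, x_i) = phi_A(a_i) ⊗ phi_X(x_i)
y = rng.normal(size=n)
lam = 0.1

# Closed-form minimizer of (1/n) * sum_i (y_i - w^T phi_i)^2 + lam * ||w||^2.
w_hat = np.linalg.solve(Phi.T @ Phi / n + lam * np.eye(d), Phi.T @ y / n)

# Gradient of the regularized loss at w_hat; should vanish at the minimizer.
grad = 2.0 * ((Phi.T @ Phi / n + lam * np.eye(d)) @ w_hat - Phi.T @ y / n)
```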
# 6.2 BINARY TREATMENT SCENARIO
In this section, we report the performance on two classical causal datasets: the IHDP dataset and the ACIC dataset. The IHDP dataset is widely used to evaluate the performance of estimators of the ATE (Shi et al., 2019; Chernozhukov et al., 2022b; Athey et al., 2019). It is a semi-synthetic dataset based on the Infant Health and Development Program (IHDP) (Gross, 1993). Following existing work, we generate 1000 sets of 747 observations of outcomes and binary treatments based
Table 1: Mean and standard error of the ATE prediction error.
| | IHDP | ACIC |
| --- | --- | --- |
| DragonNet | 0.146 ± 0.010 | 0.241 ± 0.123 |
| RieszNet(Direct) | 0.123 ± 0.004 | 0.334 ± 0.133 |
| RieszNet(IPW) | 0.122 ± 0.037 | 52.73 ± 40.71 |
| RieszNet(DR) | 0.110 ± 0.003 | 1.071 ± 0.555 |
| RKHS Embedding | 0.166 ± 0.003 | 1.785 ± 1.398 |
| NN Embedding (Proposed) | 0.117 ± 0.002 | 0.231 ± 0.112 |
on the 25-dimensional observable confounder in the original data. The ACIC dataset was introduced by Shi et al. (2019) and is based on the linked birth and infant death data (LBIDD) (Mathews & MacDorman, 2006). It is considered a more challenging benchmark than IHDP since it contains data points with extreme propensity scores (i.e., $P(A = 1|X)$ can be very close to 0 or 1). We select 101 datasets following Shi et al. (2019) and remove outliers in each dataset using the procedure described in Appendix D.
We compare our method to competing causal methods: DragonNet (Shi et al., 2019), RieszNet (Chernozhukov et al., 2022b), and RKHS Embedding (Singh et al., 2020). DragonNet is a neural causal inference method specifically designed for binary treatment, which applies targeted regularization (van der Laan & Rubin, 2006) to ATE estimation. RieszNet implements Auto-DML with a neural network, learning the conditional expectation $g$ and the Riesz representer $\alpha$ jointly while sharing intermediate features. Given the estimates $\hat{g}, \hat{\alpha}$ , it offers three ways to calculate the causal parameter:
Direct: $\mathbb{E}\left[m(\hat{g},(A,X))\right]$ , IPW: $\mathbb{E}\left[Y\hat{\alpha} (A,X)\right]$ , DR: $\mathbb{E}\left[m(\hat{g},(A,X))+\hat{\alpha}(A,X)(Y-\hat{g}(A,X))\right]$ , where the functional $m$ maps $g$ to the causal parameter (see Section 5 for an example of the functional $m$ ). We report the performance of each estimator in RieszNet. RKHS Embedding employs the feature embedding approach with fixed kernel feature dictionaries.
The results are summarized in Table 1. Although the RieszNet(IPW) estimator performs promisingly on IHDP, its performance degrades on the ACIC dataset, which suggests that RieszNet(IPW) is vulnerable to extreme propensity scores. This is not surprising, since the true Riesz representer in this case is $\alpha(A,X) = \frac{A}{P(A=1|X)} - \frac{1-A}{P(A=0|X)}$ , which can be very large if $P(A=1|X)$ is close to 0 or 1. This also harms the performance of RieszNet(DR). The proposed method outperforms all competing methods except RieszNet(DR) on the IHDP dataset, for which the performance is comparable (0.117 ± 0.002 vs. 0.110 ± 0.003).
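The instability is visible in the conditional second moment of this representer: with propensity $e(X) = P(A=1|X)$ , a short calculation gives $\mathbb{E}[\alpha^2 \mid X] = e\cdot(1/e)^2 + (1-e)\cdot(1/(1-e))^2 = 1/e + 1/(1-e)$ , which diverges as $e(X)$ approaches 0 or 1. A quick numeric check (the propensity values below are arbitrary examples):

```python
import numpy as np

def riesz_second_moment(e):
    # E[alpha^2 | X] = e * (1/e)^2 + (1 - e) * (1/(1-e))^2 = 1/e + 1/(1-e)
    return 1.0 / e + 1.0 / (1.0 - e)

moderate = riesz_second_moment(np.array([0.3, 0.5, 0.7]))     # well-behaved propensities
extreme = riesz_second_moment(np.array([0.001, 0.5, 0.999]))  # propensities near 0 or 1
```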
# 6.3 HIGH-DIMENSIONAL TREATMENT SCENARIO
To test the performance of our method in a more complex setting, we use the dSprite dataset (Matthey et al., 2017), which has also been used as a benchmark for other high-dimensional causal inference methods (Xu et al., 2021a;b). The dSprite dataset consists of $64 \times 64 = 4096$ -dimensional images described by five latent parameters (shape, scale, rotation, posX and posY). Throughout this paper, we fix (shape, scale, rotation) and use posX $\in [0,1]$ and posY $\in [0,1]$ as the latent parameters. Based on this dataset, we propose two experiments: one is ATE estimation based on back-door adjustment, and the other is ATT estimation based on front-door adjustment.
Back-door Adjustment In our back-door adjustment experiment, we consider the case where the image is the treatment. We sample a hidden confounder $U \sim \mathrm{Unif}(0,1)$ and construct the back-door variable $(X_1,X_2) = (U\cos \theta +\varepsilon_1,U\sin \theta +\varepsilon_2)$ , where $\varepsilon_{1},\varepsilon_{2} \sim \mathcal{N}(0,0.09)$ and $\theta \sim \mathrm{Unif}(0,2\pi)$ . We define the treatment $A$ as the image whose parameters are set as $\mathrm{posX} = \frac{X_1 + 1.5}{3}$ , $\mathrm{posY} = \frac{X_2 + 1.5}{3}$ . We add Gaussian noise $\mathcal{N}(0,0.01)$ to each pixel of the images. The outcome is given as follows:
$$
Y = \frac{h^2(A)}{100} + 4(U - 0.5) + \varepsilon_Y, \quad h(A) = \sum_{i,j=1}^{64} \frac{(i-1)}{64} \frac{(j-1)}{64} A_{[ij]},
$$
where $A_{[ij]}$ denotes the value of the pixel at $(i,j)$ and $\varepsilon_Y \sim \mathcal{N}(0,0.25)$ is a noise variable. Each dataset consists of 5000 samples of $(Y,A,X_1,X_2)$ , and we consider the problem of estimating $\theta_{\mathrm{ATE}}(a) = h^2 (a) / 100$ . We compare the proposed method to RieszNet and RKHS Embedding, since DragonNet is designed for binary treatments and is not applicable here. We
Figure 2: ATE experiment based on dSprite data

Figure 3: ATT experiment based on dSprite data
generate 10 datasets, and the average squared error $(\theta_{\mathrm{ATE}}(a) - \hat{\theta}_{\mathrm{ATE}}(a))^2$ over 9 test points $a$ is reported in Figure 2.
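The data-generating process above can be sketched as follows. This is a hedged stand-in: since the actual dSprite renders are not reproduced here, the sprite is replaced by a single bright pixel at (posX, posY); everything else follows the description in the text:

```python
import numpy as np

def generate_backdoor_data(n, rng):
    """Synthetic back-door data; a bright pixel stands in for the dSprite render."""
    U = rng.uniform(0.0, 1.0, size=n)                 # hidden confounder
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
    X1 = U * np.cos(theta) + rng.normal(0.0, 0.3, size=n)   # N(0, 0.09) noise
    X2 = U * np.sin(theta) + rng.normal(0.0, 0.3, size=n)
    pos_x = np.clip((X1 + 1.5) / 3.0, 0.0, 1.0)
    pos_y = np.clip((X2 + 1.5) / 3.0, 0.0, 1.0)

    A = rng.normal(0.0, 0.1, size=(n, 64, 64))        # pixel noise N(0, 0.01)
    for k in range(n):                                # place the stand-in "sprite"
        A[k, int(pos_y[k] * 63), int(pos_x[k] * 63)] += 1.0

    ii, jj = np.meshgrid(np.arange(64), np.arange(64), indexing="ij")
    weights = (ii / 64.0) * (jj / 64.0)               # (i-1)/64 * (j-1)/64, 0-indexed
    h = np.sum(A * weights, axis=(1, 2))
    Y = h ** 2 / 100.0 + 4.0 * (U - 0.5) + rng.normal(0.0, 0.5, size=n)  # N(0, 0.25)
    return A, Y, np.stack([X1, X2], axis=1)
```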
We can see that the proposed method performs best in this setting, which shows the power of the method for complex high-dimensional inputs. The RKHS Embedding method suffers from the limited flexibility of its model in the case of complex high-dimensional treatment, yet it performs worse than all neural methods except RieszNet(IPW). The poor showing of RieszNet(IPW) suggests that it is difficult to estimate the Riesz representer $\alpha$ in a high-dimensional scenario, consistent with the exponential growth of the error bound with dimension discussed in Appendix B.4. We conjecture that this also harms the performance of RieszNet(Direct) and RieszNet(DR), since the models for the conditional expectation $\hat{g}$ and the Riesz representer $\hat{\alpha}$ share intermediate features of the network and are jointly trained in RieszNet.
|
| 292 |
+
|
| 293 |
+
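As a concrete illustration, the back-door data-generating process above can be sketched in a few lines. The `render` function below is a hypothetical single-pixel stand-in for retrieving the actual dSprite image, and the sample size is reduced from 5000 for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
n, size = 500, 64

U = rng.uniform(0.0, 1.0, n)                        # hidden confounder
theta = rng.uniform(0.0, 2 * np.pi, n)
X1 = U * np.cos(theta) + rng.normal(0.0, 0.3, n)    # N(0, 0.09) -> std 0.3
X2 = U * np.sin(theta) + rng.normal(0.0, 0.3, n)
posX, posY = (X1 + 1.5) / 3, (X2 + 1.5) / 3         # image parameters

def render(px, py):
    """Hypothetical stand-in sprite: one bright pixel instead of a dSprite."""
    img = np.zeros((size, size))
    i = int(np.clip(py * size, 0, size - 1))
    j = int(np.clip(px * size, 0, size - 1))
    img[i, j] = 1.0
    return img + rng.normal(0.0, 0.1, (size, size))  # N(0, 0.01) pixel noise

grid = np.arange(size) / size                        # (i - 1)/64 for i = 1..64
W = np.outer(grid, grid)                             # weights (i-1)(j-1)/64^2

A = np.stack([render(px, py) for px, py in zip(posX, posY)])
h = (A * W).sum(axis=(1, 2))                         # pixel-weighted sum h(A)
Y = h ** 2 / 100 + 4 * (U - 0.5) + rng.normal(0.0, 0.5, n)  # eps_Y: std 0.5
```

The only non-trivial step is the weighted pixel sum $h(A)$, computed here as an elementwise product with the fixed weight grid.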
Front-door Adjustment We use the dSprite dataset to consider front-door adjustment. Again, we sample hidden confounders $U_{1}, U_{2} \sim \mathrm{Unif}(-1.5, 1.5)$, and we set the image to be the treatment, where the parameters are set as $\mathrm{posX} = \frac{U_{1} + 1.5}{3}$, $\mathrm{posY} = \frac{U_{2} + 1.5}{3}$. We add Gaussian noise $\mathcal{N}(0, 0.01)$ to each pixel of the images. We use $M = h(A) + \varepsilon_{M}$ as the front-door variable, where $\varepsilon_{M} \sim \mathcal{N}(0, 0.04)$. The outcome is given as

$$
Y = \frac{M^2}{100} + 5(U_1 + U_2) + \varepsilon_Y, \quad \varepsilon_Y \sim \mathcal{N}(0, 0.25)
$$

We consider the problem of estimating $\theta_{\mathrm{ATT}}(a;a^{\prime})$ and report the average squared error over 121 points of $a$, while fixing $a^\prime$ to the image with $\mathsf{posX} = 0.6,\mathsf{posY} = 0.6$. We compare against RKHS Embedding, with the result given in Figure 3; note that RieszNet has not been developed for this setting. Again, the RKHS Embedding method suffers from the limited flexibility of its model, whereas our proposed model successfully captures the complex causal relationships.
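The front-door data-generating process can be sketched similarly; here `h` is a hypothetical stand-in for the pixel-weighted sum of the rendered image, and the sample size is again reduced for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

U1 = rng.uniform(-1.5, 1.5, n)                 # hidden confounders
U2 = rng.uniform(-1.5, 1.5, n)
posX, posY = (U1 + 1.5) / 3, (U2 + 1.5) / 3    # treatment image parameters

# stand-in for the pixel-weighted sum h(A) of the rendered image
h = 10 * (posX + posY)

M = h + rng.normal(0.0, 0.2, n)                # front-door variable: N(0, 0.04) noise
Y = M ** 2 / 100 + 5 * (U1 + U2) + rng.normal(0.0, 0.5, n)  # eps_Y: std 0.5
```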
# 7 CONCLUSION
We have proposed a novel method for back-door and front-door adjustment, based on the neural mean embedding. We established consistency of the proposed method based on a Rademacher complexity argument, which contains a new analysis of the hypothesis space with the tensor product features. Our empirical evaluation shows that the proposed method outperforms existing estimators, especially when high-dimensional image observations are involved.
As future work, it would be promising to apply a similar adaptive feature embedding approach to other causal parameters, such as marginal average effect $\nabla_{a}\theta_{\mathrm{ATE}}(a)$ (Imbens & Newey, 2009). Furthermore, it would be interesting to consider sequential treatments, as in dynamic treatment effect estimation, in which the treatment may depend on the past covariates, treatments and outcomes. Recently, a kernel feature embedding approach (Singh et al., 2021) has been developed to estimate the dynamic treatment effect, and we expect that applying the neural mean embedding would benefit the performance.
# ACKNOWLEDGEMENT
This work was supported by the Gatsby Charitable Foundation.
# REFERENCES
Susan Athey, Julie Tibshirani, and Stefan Wager. Generalized random forests. The Annals of Statistics, 47(2):1148 - 1178, 2019.

Victor Chernozhukov, Whitney K. Newey, Victor Quintas-Martinez, and Vasilis Syrgkanis. Automatic debiased machine learning via neural nets for generalized linear regression, 2021.

Victor Chernozhukov, Whitney Newey, and Rahul Singh. Automatic debiased machine learning of causal and structural effects. Econometrica, 2022a.

Victor Chernozhukov, Whitney K. Newey, Victor Quintas-Martinez, and Vasilis Syrgkanis. RieszNet and ForestRiesz: Automatic debiased machine learning with neural nets and random forests. In ICML 2022, 2022b.

Kyle Colangelo and Ying-Ying Lee. Double debiased machine learning nonparametric inference with continuous treatments. arXiv:2004.03036, 2020.

G. Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals, and Systems (MCSS), 2(4):303-314, 1989.

R. T. Gross. Infant health and development program (IHDP): Enhancing the outcomes of low birth weight, premature infants in the United States, 1985-1988, 1993.

Jennifer L. Hill. Bayesian nonparametric modeling for causal inference. Journal of Computational and Graphical Statistics, 20(1):217-240, 2011.

Guido W. Imbens and Whitney K. Newey. Identification and estimation of triangular simultaneous equations models without additivity. Econometrica, 77(5):1481-1512, 2009.

Edward H. Kennedy, Zongming Ma, Matthew D. McHugh, and Dylan S. Small. Nonparametric methods for doubly robust estimation of continuous treatment effects. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 79(4):1229, 2017.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.

T. J. Mathews and Marian MacDorman. Infant mortality statistics from the 2008 period linked birth/infant death data set. National Vital Statistics Reports, 54:1-29, 2006. doi: 10.1037/e558952006-001.

Loic Matthey, Irina Higgins, Demis Hassabis, and Alexander Lerchner. dSprites: Disentanglement testing sprites dataset, 2017. URL https://github.com/deepmind/dSprites-dataset/.

Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. In International Conference on Learning Representations, 2018.

Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar. Foundations of Machine Learning. MIT Press, 2012.

Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. Norm-based capacity control in neural networks. In Proceedings of The 28th Conference on Learning Theory, volume 40 of Proceedings of Machine Learning Research, pp. 1376-1401, 2015.

Judea Pearl. Causal diagrams for empirical research. Biometrika, 82(4):669-688, 1995.

Judea Pearl. Causality. Cambridge University Press, 2 edition, 2009.

James Robins. A new approach to causal inference in mortality studies with a sustained exposure period—application to control of the healthy worker survivor effect. Mathematical Modelling, 7(9):1393-1512, 1986.

Paul R. Rosenbaum and Donald B. Rubin. The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1):41-55, 1983.

Donald B. Rubin. Causal inference using potential outcomes: Design, modeling, decisions. Journal of the American Statistical Association, 100(469):322-331, 2005.

Patrick Schwab, Lorenz Linhardt, Stefan Bauer, Joachim M. Buhmann, and Walter Karlen. Learning counterfactual representations for estimating individual dose-response curves. In AAAI, 2020.

Uri Shalit, Fredrik D. Johansson, and David Sontag. Estimating individual treatment effect: generalization bounds and algorithms. In Proceedings of the 34th International Conference on Machine Learning. PMLR, 2017.

Claudia Shi, David Blei, and Victor Veitch. Adapting neural networks for the estimation of treatment effects. In Advances in Neural Information Processing Systems, volume 32, 2019.

Y. Shimoni, C. Yanover, E. Karavani, and Y. Goldschmidt. Benchmarking framework for performance-evaluation of causal inference analysis. arXiv preprint arXiv:1802.05046, 2018.

Rahul Singh, Liyuan Xu, and Arthur Gretton. Kernel methods for causal functions: Dose, heterogeneous, and incremental response curves, 2020.

Rahul Singh, Liyuan Xu, and Arthur Gretton. Kernel methods for multistage causal inference: Mediation analysis and dynamic treatment effects, 2021.

Mark J. van der Laan and Daniel Rubin. Targeted maximum likelihood learning. The International Journal of Biostatistics, 2(1), 2006.

Larry Wasserman. All of Nonparametric Statistics (Springer Texts in Statistics). Springer-Verlag, 2006. ISBN 0387251456.

Liyuan Xu, Yutian Chen, Siddarth Srinivasan, Nando de Freitas, Arnaud Doucet, and Arthur Gretton. Learning deep features in instrumental variable regression. In International Conference on Learning Representations, 2021a.

Liyuan Xu, Heishiro Kanagawa, and Arthur Gretton. Deep proxy causal learning and its application to confounded bandit policy evaluation. In Advances in Neural Information Processing Systems, volume 34, pp. 26264-26275, 2021b.

Liyuan Xu, Yutian Chen, Arnaud Doucet, and Arthur Gretton. Importance weighted kernel Bayes' rule. In Proceedings of the 39th International Conference on Machine Learning, 2022.

Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Russ R Salakhutdinov, and Alexander J Smola. Deep sets. In Advances in Neural Information Processing Systems, volume 30, 2017.
# A ALGORITHM SUMMARY

Here, we provide a summary of the algorithms.

# Algorithm 1: Back-door Adjustment

Data: Back-door adjustment data $\{a_i, y_i, x_i\}$

1 Learn weights and features

$$
\hat{\boldsymbol{w}}, \hat{\phi}_A, \hat{\phi}_X = \arg\min \hat{\mathcal{L}}_1^{\mathcal{X}}, \quad \hat{\mathcal{L}}_1^{\mathcal{X}} = \frac{1}{n} \sum_{i=1}^{n} \left( y_i - \boldsymbol{w}^\top (\phi_A(a_i) \otimes \phi_X(x_i)) \right)^2.
$$

2 Learn conditional embedding

$$
\hat{\boldsymbol{f}}_{\hat{\phi}_X} = \arg\min_{\boldsymbol{f}: \mathcal{A} \to \mathbb{R}^{d_2}} \hat{\mathcal{L}}_2^{\mathcal{X}}(\boldsymbol{f}; \hat{\phi}_X), \quad \hat{\mathcal{L}}_2^{\mathcal{X}}(\boldsymbol{f}; \phi_X) = \frac{1}{n} \sum_{i=1}^{n} \left\| \phi_X(x_i) - \boldsymbol{f}(a_i) \right\|^2
$$

3 Compute causal parameters as

$$
\hat{\theta}_{\mathrm{ATE}}(a) = \hat{\boldsymbol{w}}^\top \left( \hat{\phi}_A(a) \otimes \frac{1}{n} \sum_{i=1}^{n} \hat{\phi}_X(x_i) \right)
$$

$$
\hat{\theta}_{\mathrm{ATT}}(a; a^{\prime}) = \hat{\boldsymbol{w}}^\top \left( \hat{\phi}_A(a) \otimes \hat{\boldsymbol{f}}_{\hat{\phi}_X}(a^{\prime}) \right)
$$
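A minimal sketch of Algorithm 1 on a toy scalar problem, with fixed random cosine features standing in for the jointly trained networks $\hat{\phi}_A, \hat{\phi}_X$ (in the actual method, step 1 learns $\boldsymbol{w}$ and both features together by gradient descent; all names here are illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)
n, d1, d2 = 2000, 8, 8

# toy back-door data: X confounds both A and Y
X = rng.normal(size=(n, 1))
A = 0.5 * X + rng.normal(size=(n, 1))
Y = (A + X).ravel() + rng.normal(size=n)

WA, WX = rng.normal(size=(1, d1)), rng.normal(size=(1, d2))
phi_A = lambda a: np.cos(a @ WA)      # fixed random features in place of nets
phi_X = lambda x: np.cos(x @ WX)

# Step 1: least squares for w on the tensor-product features
Phi = np.einsum('ni,nj->nij', phi_A(A), phi_X(X)).reshape(n, d1 * d2)
w, *_ = np.linalg.lstsq(Phi, Y, rcond=None)

# Step 2: conditional embedding f(a) ~ E[phi_X(X) | A = a] by ridge regression
PA = phi_A(A)
F = np.linalg.solve(PA.T @ PA + 1e-3 * np.eye(d1), PA.T @ phi_X(X))

# Step 3: plug-in causal parameters
mean_phi_X = phi_X(X).mean(axis=0)

def theta_ate(a):
    return w @ np.kron(phi_A(np.array([[a]]))[0], mean_phi_X)

def theta_att(a, a_prime):
    f_hat = phi_A(np.array([[a_prime]]))[0] @ F
    return w @ np.kron(phi_A(np.array([[a]]))[0], f_hat)
```

The row-major `reshape` and `np.kron` use the same $(i \cdot d_2 + j)$ flattening, so the learned $\boldsymbol{w}$ pairs correctly with $\hat{\phi}_A(a) \otimes \hat{\phi}_X(x)$ at test time.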
# Algorithm 2: Front-door Adjustment

Data: Front-door adjustment data $\{a_i, y_i, m_i\}$

1 Learn weights and features

$$
\hat{\boldsymbol{w}}, \hat{\phi}_A, \hat{\phi}_M = \arg\min \hat{\mathcal{L}}_1^{\mathcal{M}}, \quad \hat{\mathcal{L}}_1^{\mathcal{M}} = \frac{1}{n} \sum_{i=1}^{n} \left( y_i - \boldsymbol{w}^\top (\phi_A(a_i) \otimes \phi_M(m_i)) \right)^2.
$$

2 Learn conditional embedding

$$
\hat{\boldsymbol{f}}_{\hat{\phi}_M} = \arg\min_{\boldsymbol{f}: \mathcal{A} \to \mathbb{R}^{d_2}} \hat{\mathcal{L}}_2^{\mathcal{M}}(\boldsymbol{f}; \hat{\phi}_M), \quad \hat{\mathcal{L}}_2^{\mathcal{M}}(\boldsymbol{f}; \phi_M) = \frac{1}{n} \sum_{i=1}^{n} \left\| \phi_M(m_i) - \boldsymbol{f}(a_i) \right\|^2
$$

3 Compute causal parameters as

$$
\hat{\theta}_{\mathrm{ATE}}(a) = \hat{\boldsymbol{w}}^\top \left( \frac{1}{n} \sum_{i=1}^{n} \hat{\phi}_A(a_i) \otimes \hat{\boldsymbol{f}}_{\hat{\phi}_M}(a) \right)
$$

$$
\hat{\theta}_{\mathrm{ATT}}(a; a^{\prime}) = \hat{\boldsymbol{w}}^\top \left( \hat{\phi}_A(a^{\prime}) \otimes \hat{\boldsymbol{f}}_{\hat{\phi}_M}(a) \right)
$$
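A companion sketch of Algorithm 2, again with fixed random features in place of the learned networks (illustrative names only). Note how the roles of $a$ and $a'$ swap relative to Algorithm 1, and how the average is now over $\hat{\phi}_A(a_i)$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d1, d2 = 2000, 8, 8

# toy front-door data: U confounds A and Y, M mediates A -> Y
U = rng.normal(size=(n, 1))
A = U + rng.normal(size=(n, 1))
M = A + rng.normal(size=(n, 1))
Y = (M + U).ravel() + rng.normal(size=n)

WA, WM = rng.normal(size=(1, d1)), rng.normal(size=(1, d2))
phi_A = lambda a: np.cos(a @ WA)
phi_M = lambda m: np.cos(m @ WM)

# Step 1: regress Y on phi_A(a_i) (x) phi_M(m_i)
Phi = np.einsum('ni,nj->nij', phi_A(A), phi_M(M)).reshape(n, d1 * d2)
w, *_ = np.linalg.lstsq(Phi, Y, rcond=None)

# Step 2: conditional embedding f(a) ~ E[phi_M(M) | A = a]
PA = phi_A(A)
F = np.linalg.solve(PA.T @ PA + 1e-3 * np.eye(d1), PA.T @ phi_M(M))

# Step 3: plug-in causal parameters (note the swapped roles of a, a')
mean_phi_A = phi_A(A).mean(axis=0)

def theta_ate(a):
    f_hat = phi_A(np.array([[a]]))[0] @ F
    return w @ np.kron(mean_phi_A, f_hat)

def theta_att(a, a_prime):
    f_hat = phi_A(np.array([[a]]))[0] @ F
    return w @ np.kron(phi_A(np.array([[a_prime]]))[0], f_hat)
```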
# B TECHNICAL DETAILS
# B.1 UNIVERSAL APPROXIMATION THEORY
In this section, we provide the proof of Theorem 1. Recall our hypothesis space is

$$
\mathcal{H}_g = \left\{ \boldsymbol{w}^\top \left( \phi_A(a) \otimes \phi_X(x) \right) \mid \boldsymbol{w} \in \mathbb{R}^{d_1 d_2}, \phi_A(a) \in \mathbb{R}^{d_1}, \phi_X(x) \in \mathbb{R}^{d_2}, \right.
$$

$$
\| \boldsymbol{w} \|_1 \leq R, \max_{a \in \mathcal{A}} \| \phi_A(a) \|_\infty \leq 1, \max_{x \in \mathcal{X}} \| \phi_X(x) \|_\infty \leq 1 \}.
$$

Consider the features

$$
\phi_A(a) = \left[ \sigma(s_1^\top a + \alpha_1), \ldots, \sigma(s_{d_1}^\top a + \alpha_{d_1}) \right]^\top
$$

$$
\phi_X(x) = \left[ \sigma(t_1^\top x + \beta_1), \ldots, \sigma(t_{d_2}^\top x + \beta_{d_2}) \right]^\top
$$

where $\sigma$ is the sigmoid function and $s_i, t_i \in \mathbb{R}^D, \alpha_i, \beta_i \in \mathbb{R}$ are parameters. By considering the case of $d_1 = d_2$ and setting the "non-diagonal" elements of $\boldsymbol{w}$ to zero, we can see that

$$
g(a, x) = \sum_{i=1}^{d_1} w_i \sigma\left(s_i^\top a + \alpha_i\right) \sigma\left(t_i^\top x + \beta_i\right)
$$

is a member of $\mathcal{H}_g$. Next, we present the following lemma.


Lemma 5. Let $\mu$ be a finite, signed regular Borel measure on $\mathcal{A} \times \mathcal{X}$. If $\sigma$ satisfies

$$
\forall s, t \in \mathbb{R}^D, \forall \alpha, \beta \in \mathbb{R}, \quad \int_{\mathcal{A} \times \mathcal{X}} \sigma\left(s^\top a + \alpha\right) \sigma\left(t^\top x + \beta\right) \mathrm{d}\mu(a, x) = 0 \Leftrightarrow \mu = 0, \tag{3}
$$

then, given any continuous function $f: \mathcal{A} \times \mathcal{X} \to \mathbb{R}$ and $\varepsilon > 0$, there is a finite sum

$$
g(a, x) = \sum_{i=1}^{n} w_i \sigma\left(s_i^\top a + \alpha_i\right) \sigma\left(t_i^\top x + \beta_i\right),
$$

which satisfies

$$
\max_{(a, x) \in \mathcal{A} \times \mathcal{X}} |f(a, x) - g(a, x)| \leq \varepsilon.
$$

The proof is identical to that of Theorem 1 in Cybenko (1989). Now, all we have to prove is that the sigmoid function $\sigma$ satisfies (3). This can be shown by an argument similar to Lemma 1 in Cybenko (1989).

Proof of Theorem 1. Assume that

$$
\forall s, t \in \mathbb{R}^D, \forall \alpha, \beta \in \mathbb{R}, \quad \int_{\mathcal{A} \times \mathcal{X}} \sigma(s^\top a + \alpha) \sigma(t^\top x + \beta) \, \mathrm{d}\mu(a, x) = 0.
$$

Then, for all $\gamma, \delta \in \mathbb{R}$, we have

$$
\begin{aligned}
0 &= \lim_{\lambda_1 \to \infty} \lim_{\lambda_2 \to \infty} \int_{\mathcal{A} \times \mathcal{X}} \sigma(\lambda_1 (s^\top a + \alpha) + \gamma) \sigma(\lambda_2 (t^\top x + \beta) + \delta) \, \mathrm{d}\mu(a, x) \\
&= \int_{\mathcal{A} \times \mathcal{X}} \lim_{\lambda_1 \to \infty} \lim_{\lambda_2 \to \infty} \sigma\left(\lambda_1 \left(s^\top a + \alpha\right) + \gamma\right) \sigma\left(\lambda_2 \left(t^\top x + \beta\right) + \delta\right) \mathrm{d}\mu(a, x) \\
&= \int_{\mathcal{A} \times \mathcal{X}} \xi_A(a) \xi_X(x) \, \mathrm{d}\mu(a, x),
\end{aligned}
$$

where

$$
\xi_A(a) = \begin{cases} 0 & (s^\top a + \alpha < 0) \\ 1 & (s^\top a + \alpha > 0) \\ \sigma(\gamma) & (s^\top a + \alpha = 0) \end{cases}, \quad \xi_X(x) = \begin{cases} 0 & (t^\top x + \beta < 0) \\ 1 & (t^\top x + \beta > 0) \\ \sigma(\delta) & (t^\top x + \beta = 0) \end{cases}.
$$

We used the Lebesgue bounded convergence theorem in the second equality. From the definition, we have

$$
\begin{aligned}
0 &= \int_{\mathcal{A} \times \mathcal{X}} \xi_A(a) \xi_X(x) \, \mathrm{d}\mu(a, x) \\
&= \sigma(\gamma)\sigma(\delta)\, \mu\left(\Pi_{s,\alpha}^{\mathcal{A}} \times \Pi_{t,\beta}^{\mathcal{X}}\right) + \sigma(\gamma)\, \mu\left(\Pi_{s,\alpha}^{\mathcal{A}} \times H_{t,\beta}^{\mathcal{X}}\right) + \sigma(\delta)\, \mu\left(H_{s,\alpha}^{\mathcal{A}} \times \Pi_{t,\beta}^{\mathcal{X}}\right) + \mu\left(H_{s,\alpha}^{\mathcal{A}} \times H_{t,\beta}^{\mathcal{X}}\right),
\end{aligned}
$$

where

$$
\begin{aligned}
\Pi_{s,\alpha}^{\mathcal{A}} &= \left\{a \in \mathcal{A} \mid s^\top a + \alpha = 0 \right\}, & \Pi_{t,\beta}^{\mathcal{X}} &= \left\{x \in \mathcal{X} \mid t^\top x + \beta = 0 \right\}, \\
H_{s,\alpha}^{\mathcal{A}} &= \left\{a \in \mathcal{A} \mid s^\top a + \alpha > 0 \right\}, & H_{t,\beta}^{\mathcal{X}} &= \left\{x \in \mathcal{X} \mid t^\top x + \beta > 0 \right\}.
\end{aligned}
$$

Hence, for all $s, \alpha, t, \beta$, we have

$$
\mu(\Pi_{s,\alpha}^{\mathcal{A}} \times \Pi_{t,\beta}^{\mathcal{X}}) = \mu(\Pi_{s,\alpha}^{\mathcal{A}} \times H_{t,\beta}^{\mathcal{X}}) = \mu(H_{s,\alpha}^{\mathcal{A}} \times \Pi_{t,\beta}^{\mathcal{X}}) = \mu(H_{s,\alpha}^{\mathcal{A}} \times H_{t,\beta}^{\mathcal{X}}) = 0.
$$

Based on this, we show $\mu = 0$. Fix $s, t$ and consider the functional $F(h)$ defined as

$$
F(h) = \int_{\mathcal{A} \times \mathcal{X}} h(s^\top a, t^\top x) \, \mathrm{d}\mu(a, x),
$$

where $h$ is a bounded measurable function $h(u, v): [\underline{u}, \bar{u}] \times [\underline{v}, \bar{v}] \to \mathbb{R}$, with

$$
\bar{u} = \max_{a \in \mathcal{A}} s^\top a, \quad \underline{u} = \min_{a \in \mathcal{A}} s^\top a, \quad \bar{v} = \max_{x \in \mathcal{X}} t^\top x, \quad \underline{v} = \min_{x \in \mathcal{X}} t^\top x.
$$

Let the indicator function $I_{[b,c) \times [d,e)}(u, v)$ be defined as

$$
I_{[b,c) \times [d,e)}(u, v) = \begin{cases} 1 & (u \in [b, c),\ v \in [d, e)) \\ 0 & \text{otherwise} \end{cases}.
$$

Then, we have

$$
F\left(I_{[b,\infty) \times [c,\infty)}\right) = \mu\left(\left(\Pi_{s,-b}^{\mathcal{A}} \cup H_{s,-b}^{\mathcal{A}}\right) \times \left(\Pi_{t,-c}^{\mathcal{X}} \cup H_{t,-c}^{\mathcal{X}}\right)\right) = 0.
$$

Since

$$
I_{[b,c) \times [d,e)} = I_{[b,\infty) \times [d,\infty)} - I_{[c,\infty) \times [d,\infty)} - I_{[b,\infty) \times [e,\infty)} + I_{[c,\infty) \times [e,\infty)},
$$

we have $F(I_{[b,c) \times [d,e)}) = 0$ for all $b, c, d, e \in \mathbb{R}$. By linearity, we have

$$
F\left(\sum_{i=1}^{N} \eta_i I_{[b_i, c_i) \times [d_i, e_i)}\right) = 0.
$$

Note that sums of the form $\sum_{i=1}^{N} \eta_i I_{[b_i, c_i) \times [d_i, e_i)}$ can uniformly approximate any bounded measurable function $h: [\underline{u}, \bar{u}] \times [\underline{v}, \bar{v}] \to \mathbb{R}$. Hence, $F(h) = 0$. In particular, $h(u, v) = \cos(u + v)$ and $\sin(u + v)$ are bounded measurable functions, and thus

$$
\begin{aligned}
\int_{\mathcal{A} \times \mathcal{X}} \exp(i(s^\top a + t^\top x)) \, \mathrm{d}\mu(a, x)
&= \int_{\mathcal{A} \times \mathcal{X}} \cos\left(s^\top a + t^\top x\right) + i \sin\left(s^\top a + t^\top x\right) \mathrm{d}\mu(a, x) \\
&= F(\cos(u + v)) + i F(\sin(u + v)) = 0.
\end{aligned}
$$

Thus, the Fourier transform of $\mu$ is 0, and so $\mu$ must be zero as well. Theorem 1 then follows from Lemma 5.
# B.2 IMPLICATION OF ASSUMPTION 1

In this section, we discuss the implications of Assumption 1, especially when the back-door and treatment variables are continuous. First, we give an upper bound on the sup norm of a Lipschitz function.

Lemma 6. Let $Z \in \mathcal{Z}$ be a random variable following $P(Z)$, where $\mathcal{Z} \subset [0,1]^d$. Then, for every $L$-Lipschitz function $h$ bounded as $h(z) \in [-R, R]$, we have

$$
\max_{z \in \mathcal{Z}} |h(z)| \leq \left(\frac{4}{c}\right)^{\frac{1}{d+2}} (2R + 2\sqrt{d} L)^{\frac{d}{d+2}} \|h\|_{P(Z)}^{\frac{2}{d+2}}
$$

provided the density function $f(z)$ is bounded away from zero, $f(z) \geq c > 0$.

Proof. Since $\mathcal{Z}$ is compact, there exists $z^*$ such that

$$
\left| h\left(z^*\right) \right| = \max_{z \in \mathcal{Z}} |h(z)|.
$$

Let $M = |h(z^*)|$ and consider the following rectangle

$$
\mathfrak{B} = \left\{ z \in \mathcal{Z} \;\middle|\; \forall i \in [d],\ \max\left(0, z_{[i]}^* - \frac{M}{2R + 2\sqrt{d}L}\right) \leq z_{[i]} \leq \min\left(1, z_{[i]}^* + \frac{M}{2R + 2\sqrt{d}L}\right) \right\},
$$

where $z_{[i]}$ denotes the $i$-th element of $z$. Then, from Lipschitz continuity, for all $z \in \mathfrak{B}$, we have

$$
\begin{aligned}
|h(z)| &\geq \left| h\left(z^*\right) \right| - L \| z^* - z \|_2 \\
&= M - L \sqrt{\sum_{i=1}^{d} \left| z_{[i]}^* - z_{[i]} \right|^2} \\
&\geq M - L \sqrt{\sum_{i=1}^{d} \left(\frac{M}{2R + 2\sqrt{d}L}\right)^2} \\
&\geq M - L \sqrt{\sum_{i=1}^{d} \left(\frac{M}{2\sqrt{d}L}\right)^2} = M/2.
\end{aligned}
$$

Now, consider the volume of $\mathfrak{B}$. Since

$$
\frac{M}{2R + 2\sqrt{d}L} \leq \frac{R}{2R + 2\sqrt{d}L} \leq \frac{R}{2R} = \frac{1}{2},
$$

the events $0 \geq z_{[i]}^* - \frac{M}{2R + 2\sqrt{d}L}$ and $1 \leq z_{[i]}^* + \frac{M}{2R + 2\sqrt{d}L}$ cannot occur simultaneously. Therefore, we have

$$
\min\left(1, z_{[i]}^* + \frac{M}{2R + 2\sqrt{d}L}\right) - \max\left(0, z_{[i]}^* - \frac{M}{2R + 2\sqrt{d}L}\right) \geq \frac{M}{2R + 2\sqrt{d}L},
$$

and

$$
\begin{aligned}
\|h\|_{P(Z)}^2 &= \int_{\mathcal{Z}} |h(z)|^2 f(z) \, \mathrm{d}z \\
&\geq \int_{\mathfrak{B}} |h(z)|^2 f(z) \, \mathrm{d}z \\
&\geq c \left(\frac{M}{2R + 2\sqrt{d}L}\right)^d \frac{M^2}{4}.
\end{aligned}
$$

Since $M = \max_{z \in \mathcal{Z}} |h(z)|$, we have

$$
\max_{z \in \mathcal{Z}} |h(z)| \leq \left(\frac{4}{c}\right)^{\frac{1}{d+2}} (2R + 2\sqrt{d}L)^{\frac{d}{d+2}} \|h\|_{P(Z)}^{\frac{2}{d+2}}.
$$

This shows that Assumption 1 holds for probability spaces on intervals.

Corollary 1. If $\mathcal{A} = [0,1]^{d_A}$, $\mathcal{X} = [0,1]^{d_X}$, and all functions $h \in \mathcal{H}_g$ are $L$-Lipschitz continuous, we have

$$
\max_{(a, x) \in \mathcal{A} \times \mathcal{X}} |h_1(a, x) - h_2(a, x)| \leq C \|h_1 - h_2\|_{P(A,X)}^{\frac{2}{d_A + d_X + 2}},
$$

where $C = \left(\frac{4}{c}\right)^{\frac{1}{d_A + d_X + 2}} \left(4R + 4\sqrt{d_A + d_X}\, L\right)^{\frac{d_A + d_X}{d_A + d_X + 2}}$.
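As a quick numerical sanity check of Lemma 6, assuming $d = 1$, $\mathcal{Z} = [0,1]$, the uniform density (so $c = 1$), and the $1$-Lipschitz function $h(z) = z$:

```python
import numpy as np

# Lemma 6 check: d = 1, Z = [0, 1], uniform density (c = 1), h(z) = z.
d, R, L, c = 1, 1.0, 1.0, 1.0
z = np.linspace(0.0, 1.0, 100001)
h = z                                    # 1-Lipschitz, bounded by R = 1

sup_norm = np.abs(h).max()               # max_z |h(z)| = 1
l2_norm = np.sqrt(np.mean(h ** 2))       # ||h||_{P(Z)} ~ 1/sqrt(3) under Unif

bound = ((4 / c) ** (1 / (d + 2))
         * (2 * R + 2 * np.sqrt(d) * L) ** (d / (d + 2))
         * l2_norm ** (2 / (d + 2)))
```

Here the bound evaluates to roughly $1.75$, comfortably above the true sup norm of $1$, as the lemma requires.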

Note that this assumption on the hypothesis space is easy to satisfy, since every neural network is a Lipschitz function when we use the ReLU activation and regularize the operator norm of the weights in each layer.
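This can be illustrated empirically: rescaling each weight matrix to unit operator (spectral) norm, in the spirit of Miyato et al. (2018), yields a ReLU network whose observed Lipschitz ratio never exceeds 1. A sketch under these assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_normalize(W):
    # divide by the largest singular value, i.e. the operator 2-norm
    return W / np.linalg.norm(W, ord=2)

Ws = [spectral_normalize(rng.normal(size=(16, 4))),
      spectral_normalize(rng.normal(size=(16, 16))),
      spectral_normalize(rng.normal(size=(1, 16)))]

def net(x):
    h = x
    for W in Ws[:-1]:
        h = np.maximum(W @ h, 0.0)        # ReLU layers (1-Lipschitz)
    return (Ws[-1] @ h)[0]

x, y = rng.normal(size=4), rng.normal(size=4)
lip_ratio = abs(net(x) - net(y)) / np.linalg.norm(x - y)
```

Since ReLU is 1-Lipschitz and the product of the layer operator norms is 1, `lip_ratio <= 1` for any pair of inputs.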
# B.3 CONSISTENCY RESULTS

Proof of Lemma 1 We use the following Rademacher bound (Mohri et al., 2012) to prove consistency.

Proposition 4. (Mohri et al., 2012, Theorem 11.3) Let $\mathcal{X}$ be a measurable space and $\mathcal{H}$ be a family of functions mapping from $\mathcal{X}$ to $\mathcal{Y} \subseteq [-R,R]$. Given a fixed dataset $S = ((y_{1},x_{1}),(y_{2},x_{2}),\ldots ,(y_{n},x_{n})) \in (\mathcal{Y} \times \mathcal{X})^{n}$, the empirical Rademacher complexity is given by

$$
\hat{\mathfrak{R}}_S(\mathcal{H}) = \mathbb{E}_{\boldsymbol{\sigma}} \left[ \frac{1}{n} \sup_{h \in \mathcal{H}} \sum_{i=1}^{n} \sigma_i h(x_i) \right],
$$

where $\boldsymbol{\sigma} = (\sigma_{1},\dots,\sigma_{n})$, with $\sigma_{i}$ independent random variables taking values in $\{-1, +1\}$ with equal probability. Then, for any $\delta > 0$, with probability at least $1 - \delta$ over the draw of an i.i.d. sample $S$ of size $n$, each of the following holds for all $h \in \mathcal{H}$:

$$
\mathbb{E}\left[(Y - h(X))^2\right] \leq \frac{1}{n} \sum_{i=1}^{n} (y_i - h(x_i))^2 + 8R \hat{\mathfrak{R}}_S(\mathcal{H}) + 4R^2 \sqrt{\frac{\log 2/\delta}{2n}},
$$

$$
\frac{1}{n} \sum_{i=1}^{n} (y_i - h(x_i))^2 \leq \mathbb{E}\left[(Y - h(X))^2\right] + 8R \hat{\mathfrak{R}}_S(\mathcal{H}) + 4R^2 \sqrt{\frac{\log 2/\delta}{2n}}.
$$
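The empirical Rademacher complexity in Proposition 4 can be estimated by Monte Carlo over the Rademacher variables; a sketch for a small illustrative class $\{h_k(x) = \sin(kx)\}_{k=1}^{5}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 200, 2000
x = rng.uniform(-1.0, 1.0, n)

H = np.stack([np.sin(k * x) for k in range(1, 6)])   # (5, n): h_k(x_i)

sigma = rng.choice([-1.0, 1.0], size=(trials, n))    # Rademacher draws
sups = (sigma @ H.T).max(axis=1)                     # sup_h sum_i sigma_i h(x_i)
rademacher = sups.mean() / n                         # (1/n) E_sigma sup_h ...
```

For a bounded class, the estimate scales like $O(1/\sqrt{n})$, which is what makes the generalization bounds above vanish as $n$ grows.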

Given Proposition 4, we can prove the consistency of the conditional expectation estimator.

Proof of Lemma 1. From Proposition 4 and $\hat{g}, g \in \mathcal{H}_g$, with probability at least $1 - 2\delta$, the following hold:

$$
\mathbb{E}\left[(Y - \hat{g}(A, X))^2\right] \leq \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{g}(a_i, x_i))^2 + 8R \hat{\mathfrak{R}}_S(\mathcal{H}_g) + 4R^2 \sqrt{\frac{\log 2/\delta}{2n}}
$$

$$
\frac{1}{n} \sum_{i=1}^{n} (y_i - g(a_i, x_i))^2 \leq \mathbb{E}\left[(Y - g(A, X))^2\right] + 8R \hat{\mathfrak{R}}_S(\mathcal{H}_g) + 4R^2 \sqrt{\frac{\log 2/\delta}{2n}}
$$

From the minimality of $\hat{g} = \arg\min \hat{\mathcal{L}}_1^{\mathcal{X}}$, we have

$$
\begin{aligned}
& \mathbb{E}\left[(Y - \hat{g}(A, X))^2\right] \leq \mathbb{E}\left[(Y - g(A, X))^2\right] + 16R \hat{\mathfrak{R}}_S(\mathcal{H}_g) + 8R^2 \sqrt{\frac{\log 2/\delta}{2n}} \\
\Leftrightarrow\ & \mathbb{E}\left[(g(A, X) - \hat{g}(A, X))^2\right] \leq 16R \hat{\mathfrak{R}}_S(\mathcal{H}_g) + 8R^2 \sqrt{\frac{\log 2/\delta}{2n}}.
\end{aligned}
$$

Taking the square root of both sides completes the proof.

Empirical Rademacher Complexity of $\mathcal{H}_g$ We discuss here the empirical Rademacher complexity of $\mathcal{H}_g$ when feed-forward neural networks are used for the features $\phi_A, \phi_X$. The discussion is based on the "peeling" argument proposed in Neyshabur et al. (2015).

Proposition 5 ((Neyshabur et al., 2015), Theorem 1). Let the hypothesis space of an $L$-layer neural network be

$$
\mathcal{H}_{\mathrm{NN}} = \left\{ f: \mathbb{R}^D \to \mathbb{R} \;\middle|\; f(s) = W^{(L)} \sigma\left(W^{(L-1)} \sigma(\dots \sigma(W^{(1)} s))\right), \ \prod_{i=1}^{L} \|W^{(i)}\|_{p,q} \leq \gamma \right\},
$$

where $\sigma$ is the ReLU function and $W^{(1)} \in \mathbb{R}^{H \times D}$, $W^{(L)} \in \mathbb{R}^{1 \times H}$, $W^{(2)}, \ldots, W^{(L-1)} \in \mathbb{R}^{H \times H}$ are the weight matrices. The norm $\|\cdot\|_{p,q}$ is the matrix $L_{p,q}$-norm $\sup_{x \neq 0} \|Wx\|_q / \|x\|_p$. Then, for any $L, q \geq 1$, any $1 \leq p \leq \infty$, and any set $S = \{s_1, \ldots, s_n\}$, the empirical Rademacher complexity is bounded as

$$
\hat{\mathfrak{R}}_S(\mathcal{H}_{\mathrm{NN}}) \leq \sqrt{\frac{1}{n} \left(\gamma^2 2 H^{\left[\frac{1}{p^*} - \frac{1}{q}\right]_+}\right)^{2(L-1)} \left(\min\{p^*, 4 \log(2D)\}\right) \max_i \|s_i\|_{p^*}}
$$

for $p^* = 1/(1 - 1/p)$ and $[x]_+ = \max\{0, x\}$.

Given this, we can bound the empirical Rademacher complexity of $\mathcal{H}_g$ when each coordinate of the features is a truncated member of $\mathcal{H}_{\mathrm{NN}}$.

Lemma 7. Let $\mathcal{A}, \mathcal{X} \subset \mathbb{R}^D$ and define the hypothesis set

$$
\mathcal{H}_{\mathrm{NNFeat.}}(d) = \big\{ \phi: \mathbb{R}^D \to \mathbb{R}^d \;\big|\; \phi(s) = (\tilde{\sigma}(f_1(s)), \tilde{\sigma}(f_2(s)), \ldots, \tilde{\sigma}(f_d(s)))^\top, \ f_1, \ldots, f_d \in \mathcal{H}_{\mathrm{NN}} \big\},
$$

where $\tilde{\sigma}$ is the ramp function $\tilde{\sigma}(x) = \min(1, \max(0, x))$. Consider $\mathcal{H}_g$ given by

$$
\begin{aligned}
\mathcal{H}_g = \big\{ \boldsymbol{w}^\top \left(\phi_A(a) \otimes \phi_X(x)\right) \mid\ & \boldsymbol{w} \in \mathbb{R}^{d_1 d_2}, \phi_A(a) \in \mathbb{R}^{d_1}, \phi_X(x) \in \mathbb{R}^{d_2}, \\
& \|\boldsymbol{w}\|_1 \leq R, \ \phi_A \in \mathcal{H}_{\mathrm{NNFeat.}}(d_1), \ \phi_X \in \mathcal{H}_{\mathrm{NNFeat.}}(d_2) \big\}.
\end{aligned}
$$

Given a dataset $S = \{(a_1, x_1), \ldots, (a_n, x_n)\}$, we have

$$
\hat{\mathfrak{R}}_S(\mathcal{H}_g) \leq 6R \sqrt{\frac{1}{n} \left(\gamma^2 2 H^{\left[\frac{1}{p^*} - \frac{1}{q}\right]_+}\right)^{2(L-1)} \left(\min\{p^*, 4 \log(2D)\}\right) \left(\max_i \|a_i\|_{p^*} + \max_i \|x_i\|_{p^*}\right)}.
$$

Note that we have

$$
\max_{a \in \mathcal{A}} \|\phi_A(a)\|_\infty \leq 1, \quad \max_{x \in \mathcal{X}} \|\phi_X(x)\|_\infty \leq 1,
$$

since we apply $\tilde{\sigma}$ in the features. The proof is given as follows.

Proof. Let us define the following hypothesis spaces:

$$
\tilde {\mathcal {H}} _ {\mathrm {N N}} = \{\tilde {\sigma} \circ f | f \in \mathcal {H} _ {\mathrm {N N}} \},
$$

$$
\tilde {\mathcal {H}} _ {\mathrm {N N}} ^ {2} = \left\{\tilde {f} _ {1} (a) \tilde {f} _ {2} (x) | \tilde {f} _ {1}, \tilde {f} _ {2} \in \tilde {\mathcal {H}} _ {\mathrm {N N}} \right\}.
$$

Then, from the definition, we have

$$
\mathcal {H} _ {g} \subset \left\{\sum_ {i = 1} ^ {d _ {1}} \sum_ {j = 1} ^ {d _ {2}} w _ {i j} h _ {i j} (a, x) \middle | \sum_ {i = 1} ^ {d _ {1}} \sum_ {j = 1} ^ {d _ {2}} | w _ {i j} | \leq R, \; \forall i, j \; h _ {i j} \in \tilde {\mathcal {H}} _ {\mathrm {N N}} ^ {2} \right\}.
$$

Since the maximum of a linear function of $\mathbf{w}$ over the constraint $\| \mathbf{w} \|_1 \leq R$ is achieved at values satisfying $\| \mathbf{w} \|_1 = R$ , we have

$$
\begin{array}{l} \hat {\mathfrak {R}} _ {S} (\mathcal {H} _ {g}) \leq \hat {\mathfrak {R}} _ {S} \left(\left\{\sum_ {i = 1} ^ {d _ {1}} \sum_ {j = 1} ^ {d _ {2}} w _ {i j} h _ {i j} (a, x) \left| \sum_ {i = 1} ^ {d _ {1}} \sum_ {j = 1} ^ {d _ {2}} | w _ {i j} | \leq R, \; \forall i, j \; h _ {i j} \in \tilde {\mathcal {H}} _ {\mathrm {N N}} ^ {2} \right. \right\}\right) \\ = \hat {\mathfrak {R}} _ {S} \left(\left\{\sum_ {i = 1} ^ {d _ {1}} \sum_ {j = 1} ^ {d _ {2}} w _ {i j} h _ {i j} (a, x) \left| \sum_ {i = 1} ^ {d _ {1}} \sum_ {j = 1} ^ {d _ {2}} | w _ {i j} | = R, \; \forall i, j \; h _ {i j} \in \tilde {\mathcal {H}} _ {\mathrm {N N}} ^ {2} \right. \right\}\right) \\ \leq R \hat {\mathfrak {R}} _ {S} \left(\left\{\sum_ {i = 1} ^ {d _ {1}} \sum_ {j = 1} ^ {d _ {2}} w _ {i j} h _ {i j} (a, x) \left| \sum_ {i = 1} ^ {d _ {1}} \sum_ {j = 1} ^ {d _ {2}} | w _ {i j} | = 1, \; \forall i, j \; h _ {i j} \in \tilde {\mathcal {H}} _ {\mathrm {N N}} ^ {2} \right. \right\}\right). \\ \end{array}
$$

Let $\tilde{\mathcal{H}}_{\mathrm{NN}}^2 -\tilde{\mathcal{H}}_{\mathrm{NN}}^2$ be the function space defined as

$$
\tilde {\mathcal {H}} _ {\mathrm {N N}} ^ {2} - \tilde {\mathcal {H}} _ {\mathrm {N N}} ^ {2} = \left\{h _ {1} (a, x) - h _ {2} (a, x) \Big | h _ {1}, h _ {2} \in \tilde {\mathcal {H}} _ {\mathrm {N N}} ^ {2} \right\}.
$$

Since $\tilde{\mathcal{H}}_{\mathrm{NN}}^2$ contains the zero function, the last hypothesis space above is a subset of the convex hull of $\tilde{\mathcal{H}}_{\mathrm{NN}}^2 - \tilde{\mathcal{H}}_{\mathrm{NN}}^2$ because

$$
\sum_ {i = 1} ^ {d _ {1}} \sum_ {j = 1} ^ {d _ {2}} w _ {i j} h _ {i j} (a, x) = \sum_ {w _ {i j} \geq 0} w _ {i j} \left(h _ {i j} (a, x) - 0\right) + \sum_ {w _ {i j} < 0} \left| w _ {i j} \right| \left(0 - h _ {i j} (a, x)\right).
$$

Therefore, we have

$$
\hat {\mathfrak {R}} _ {S} (\mathcal {H} _ {g}) \leq R \hat {\mathfrak {R}} _ {S} (\tilde {\mathcal {H}} _ {\mathrm {N N}} ^ {2} - \tilde {\mathcal {H}} _ {\mathrm {N N}} ^ {2}) \leq 2 R \hat {\mathfrak {R}} _ {S} (\tilde {\mathcal {H}} _ {\mathrm {N N}} ^ {2}).
$$

Now, we can bound $\hat{\mathfrak{R}}_S(\tilde{\mathcal{H}}_{\mathrm{NN}}^2)$ as

$$
\begin{array}{l} \hat {\mathfrak {R}} _ {S} (\tilde {\mathcal {H}} _ {\mathrm {N N}} ^ {2}) = \hat {\mathfrak {R}} _ {S} (\{\tilde {f} _ {1} (a) \tilde {f} _ {2} (x) | \tilde {f} _ {1}, \tilde {f} _ {2} \in \tilde {\mathcal {H}} _ {\mathrm {N N}} \}) \\ = \hat {\mathfrak {R}} _ {S} \left(\left\{\frac {1}{2} \left(\left(\tilde {f} _ {1} (a) + \tilde {f} _ {2} (x)\right) ^ {2} - \left(\tilde {f} _ {1} (a)\right) ^ {2} - \left(\tilde {f} _ {2} (x)\right) ^ {2}\right) \mid \tilde {f} _ {1}, \tilde {f} _ {2} \in \tilde {\mathcal {H}} _ {\mathrm {N N}} \right\}\right) \\ \leq \frac {1}{2} \hat {\mathfrak {R}} _ {S} \left(\left\{\left(\tilde {f} _ {1} (a) + \tilde {f} _ {2} (x)\right) ^ {2} \mid \tilde {f} _ {1}, \tilde {f} _ {2} \in \tilde {\mathcal {H}} _ {\mathrm {N N}} \right\}\right) + \frac {1}{2} \hat {\mathfrak {R}} _ {S} \left(\left\{\left(\tilde {f} _ {1} (a)\right) ^ {2} \mid \tilde {f} _ {1} \in \tilde {\mathcal {H}} _ {\mathrm {N N}} \right\}\right) \\ + \frac {1}{2} \hat {\mathfrak {R}} _ {S} \left(\left\{\left(\tilde {f} _ {2} (x)\right) ^ {2} \mid \tilde {f} _ {2} \in \tilde {\mathcal {H}} _ {\mathrm {N N}} \right\}\right) \\ \leq 2 \hat {\mathfrak {R}} _ {S} \left(\left\{\tilde {f} _ {1} (a) + \tilde {f} _ {2} (x) \mid \tilde {f} _ {1}, \tilde {f} _ {2} \in \tilde {\mathcal {H}} _ {\mathrm {N N}} \right\}\right) + \hat {\mathfrak {R}} _ {S _ {A}} (\tilde {\mathcal {H}} _ {\mathrm {N N}}) + \hat {\mathfrak {R}} _ {S _ {X}} (\tilde {\mathcal {H}} _ {\mathrm {N N}}) \\ \leq 3 \hat {\mathfrak {R}} _ {S _ {A}} (\tilde {\mathcal {H}} _ {\mathrm {N N}}) + 3 \hat {\mathfrak {R}} _ {S _ {X}} (\tilde {\mathcal {H}} _ {\mathrm {N N}}), \\ \end{array}
$$

where $S_A = \{a_i\}$ and $S_X = \{x_i\}$ . Here, we used Talagrand's contraction lemma (Mohri et al., 2012, Lemma 5.11) in the inequality. Again, from Talagrand's contraction lemma, we have

$$
\hat {\mathfrak {R}} _ {S _ {A}} \left(\tilde {\mathcal {H}} _ {\mathrm {N N}}\right) \leq \hat {\mathfrak {R}} _ {S _ {A}} \left(\mathcal {H} _ {\mathrm {N N}}\right), \quad \hat {\mathfrak {R}} _ {S _ {X}} \left(\tilde {\mathcal {H}} _ {\mathrm {N N}}\right) \leq \hat {\mathfrak {R}} _ {S _ {X}} \left(\mathcal {H} _ {\mathrm {N N}}\right),
$$

since $\tilde{\sigma}$ is a 1-Lipschitz function.

Combining these, we have

$$
\hat {\mathfrak {R}} _ {S} (\mathcal {H} _ {g}) \leq 6 R (\hat {\mathfrak {R}} _ {S _ {A}} (\mathcal {H} _ {\mathrm {N N}}) + \hat {\mathfrak {R}} _ {S _ {X}} (\mathcal {H} _ {\mathrm {N N}})).
$$

This, combined with Proposition 5, completes the proof.

Now, we derive the final theorem to show the consistency of the method.

Proof of Theorem 2. From the triangle inequality, we have

$$
\left| \theta_ {\mathrm {A T E}} (a) - \hat {\theta} _ {\mathrm {A T E}} (a) \right| \leq \left| \theta_ {\mathrm {A T E}} (a) - \mathbb {E} [ \hat {g} (a, X) ] \right| + \left| \hat {\theta} _ {\mathrm {A T E}} (a) - \mathbb {E} [ \hat {g} (a, X) ] \right|.
$$

For the first term on the r.h.s., we have

$$
\begin{array}{l} \left| \theta_ {\mathrm {A T E}} (a) - \mathbb {E} [ \hat {g} (a, X) ] \right| = \left| \mathbb {E} [ g (a, X) - \hat {g} (a, X) ] \right| \\ \leq \mathbb {E} \left[ \left| g (a, X) - \hat {g} (a, X) \right| \right] \\ \leq \sup _ {a \in \mathcal {A}, x \in \mathcal {X}} | g (a, x) - \hat {g} (a, x) |. \\ \end{array}
$$

For the second term, we have

$$
\begin{array}{l} \left| \hat {\theta} _ {\mathrm {A T E}} (a) - \mathbb {E} [ \hat {g} (a, X) ] \right| = \left| \hat {\boldsymbol {w}} ^ {\top} \left(\hat {\phi} _ {A} (a) \otimes \frac {1}{n} \sum_ {i = 1} ^ {n} \hat {\phi} _ {X} \left(x _ {i}\right) - \hat {\phi} _ {A} (a) \otimes \mathbb {E} \left[ \hat {\phi} _ {X} (X) \right]\right) \right| \\ \leq \| \hat {\boldsymbol {w}} \| _ {1} \left\| \hat {\phi} _ {A} (a) \otimes \frac {1}{n} \sum_ {i = 1} ^ {n} \hat {\phi} _ {X} \left(x _ {i}\right) - \hat {\phi} _ {A} (a) \otimes \mathbb {E} \left[ \hat {\phi} _ {X} (X) \right] \right\| _ {\infty} \\ \leq \| \hat {\boldsymbol {w}} \| _ {1} \left\| \hat {\phi} _ {A} (a) \right\| _ {\infty} \left\| \frac {1}{n} \sum_ {i = 1} ^ {n} \hat {\phi} _ {X} \left(x _ {i}\right) - \mathbb {E} \left[ \hat {\phi} _ {X} (X) \right] \right\| _ {\infty} \\ \leq R \left\| \frac {1}{n} \sum_ {i = 1} ^ {n} \hat {\phi} _ {X} \left(x _ {i}\right) - \mathbb {E} \left[ \hat {\phi} _ {X} (X) \right] \right\| _ {\infty}. \\ \end{array}
$$

Therefore, we have

$$
\left| \theta_ {\text {A T E}} (a) - \hat {\theta} _ {\text {A T E}} (a) \right| \leq \sup _ {a, x} | g (a, x) - \hat {g} (a, x) | + R \left\| \frac {1}{n} \sum_ {i = 1} ^ {n} \hat {\phi} _ {X} \left(x _ {i}\right) - \mathbb {E} \left[ \hat {\phi} _ {X} (X) \right] \right\| _ {\infty}.
$$
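
The second term above relies on the $\ell_1/\ell_\infty$ Hölder bound $|\boldsymbol{w}^\top u - \boldsymbol{w}^\top v| \leq \|\boldsymbol{w}\|_1 \|u - v\|_\infty$ and the fact that the ramp-clipped features satisfy $\|\hat\phi_A(a)\|_\infty \leq 1$. A minimal numeric sanity check of that step, with arbitrary random vectors standing in for $\hat{\boldsymbol{w}}$ and the two feature means (not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=12)   # stand-in for the learned weight w-hat
u = rng.normal(size=12)   # stand-in for the empirical feature mean
v = rng.normal(size=12)   # stand-in for the population feature mean

lhs = abs(w @ u - w @ v)
rhs = np.sum(np.abs(w)) * np.max(np.abs(u - v))  # ||w||_1 * ||u - v||_inf
assert lhs <= rhs + 1e-12  # Hoelder's inequality
```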

Using Lemmas 1 and 3 and Assumption 1, we have

$$
\begin{array}{l} \sup _ {a, x} | g (a, x) - \hat {g} (a, x) | \leq \frac {1}{c} \left(16 R \hat {\mathfrak {R}} _ {S} (\mathcal {H} _ {g}) + 8 R ^ {2} \sqrt {\frac {\log 2 / \delta}{2 n}}\right) ^ {1 / 2 \beta}, \\ \left\| \mathbb {E} \left[ \hat {\phi} _ {X} (X) \right] - \frac {1}{n} \sum_ {i = 1} ^ {n} \hat {\phi} _ {X} (x _ {i}) \right\| _ {\infty} \leq \sqrt {\frac {2 \log (2 d _ {2} / \delta)}{n}} \\ \end{array}
$$

with probability at least $1 - 4\delta$ . Combining these and applying Lemma 7 completes the proof of the ATE bound. For ATT, the same argument yields

$$
\left| \theta_ {\mathrm {A T T}} (a; a ^ {\prime}) - \hat {\theta} _ {\mathrm {A T T}} (a; a ^ {\prime}) \right| \leq \sup _ {a, x} | g (a, x) - \hat {g} (a, x) | + \sup _ {a ^ {\prime} \in \mathcal {A}} R \left\| \hat {\boldsymbol {f}} _ {\hat {\phi} _ {X}} (a ^ {\prime}) - \mathbb {E} \left[ \hat {\phi} _ {X} (X) | A = a ^ {\prime} \right] \right\| _ {\infty}.
$$

Using Lemma 4 and the assumption made in Theorem 2, we have

$$
\left\| \hat {\boldsymbol {f}} _ {\hat {\phi} _ {X}} \left(a ^ {\prime}\right) - \mathbb {E} \left[ \hat {\phi} _ {X} (X) | A = a ^ {\prime} \right] \right\| _ {\infty} \leq \frac {1}{c ^ {\prime}} \left(16 \hat {\mathfrak {R}} _ {S} \left(\mathcal {H} _ {f}\right) + 8 \sqrt {\frac {\log \left(2 d _ {2} / \delta\right)}{2 n}}\right) ^ {1 / 2 \beta^ {\prime}}.
$$

If we use the neural network hypothesis space $\mathcal{H}_f$ considered in Proposition 5, we can see that the ATT bound holds.
# B.4 LIMITATION OF SMOOTHNESS ASSUMPTION ON RIESZ REPRESENTER

Chernozhukov et al. (2022b) consider a functional $m$ such that the causal parameter $\theta$ can be written as $\theta = \mathbb{E}[m(g,(A,X))]$ , where $g$ is the conditional expectation $g(a,x) = \mathbb{E}[Y|A = a,X = x]$ . Then, a Riesz representer $\alpha$ , which satisfies $\mathbb{E}[m(g,(A,X))] = \mathbb{E}[\alpha (A,X)g(A,X)]$ , exists as long as

$$
\mathbb {E} \left[ m ^ {2} (\alpha , (A, X)) \right] \leq M \| \alpha \| _ {P (A, X)} ^ {2},
$$

for all $\alpha \in \mathcal{H}_{\alpha}$ and a smoothness parameter $M$ . When we consider the ATE $\theta_{\mathrm{ATE}}(a)$ , the corresponding functional $m$ is

$$
m (\alpha , (A, X)) = \alpha (a, X).
$$

Chernozhukov et al. (2021, Theorem 1) show that the deviation between the estimated Riesz representer $\hat{\alpha}$ and the true one $\alpha_0$ scales linearly in the smoothness parameter $M$ :

$$
\left\| \hat {\alpha} - \alpha_ {0} \right\| _ {P (A, X)} ^ {2} \leq O \left(M \delta_ {n} + n ^ {- 1 / 2}\right),
$$

Figure 4: Causal graph with observable confounder: (a) general causal graph; (b) back-door adjustment; (c) front-door adjustment. The bidirectional arrows mean that we allow both directions or even a common ancestor variable.

where $\delta_{n}$ is the critical radius, which scales as

$$
\delta_ {n} = O \left(\sqrt {\frac {\log n}{n}}\right)
$$

when we consider fully connected neural networks. Now, we show that the smoothness parameter $M$ can have an exponential dependency on the dimension of the space, even for a simple $\alpha$ . Consider $\mathcal{A} = [-1,1]^d$ and some compact space $\mathcal{X}$ . We assume the uniform distribution for $P(A,X)$ . Consider the following $\tilde{\alpha}$ :

$$
\tilde {\alpha} (a, x) = \max \left(1 - \sum_ {i = 1} ^ {d} 2 | a _ {[ i ]} |, 0\right),
$$

where $a_{[i]}$ denotes the $i$ -th element of $a$ ; here, $\tilde{\alpha}$ does not depend on $x$ . Suppose we are interested in estimating $\theta_{\mathrm{ATE}}(a)$ at $a = \mathbf{0} = [0,\dots ,0]^{\top}$ , for which

$$
\mathbb {E} \left[ (m (\tilde {\alpha}) (A, X)) ^ {2} \right] = \mathbb {E} \left[ (\tilde {\alpha} (\mathbf {0}, X)) ^ {2} \right] = 1.
$$

Now consider the set $\mathfrak{B}$ given by

$$
\mathfrak {B} = \left\{a \in \mathcal {A} \left| \forall i \in [ d ], - \frac {1}{2} \leq a _ {[ i ]} \leq \frac {1}{2} \right. \right\}.
$$

Then, since $\tilde{\alpha}(a,x) = 0$ for all $a \notin \mathfrak{B}$ , we have

$$
\begin{array}{l} \left\| \tilde {\alpha} \right\| _ {P (A, X)} ^ {2} = \int | \tilde {\alpha} (A, X) | ^ {2} \mathrm {d} P (A, X) \\ = \int_ {\mathfrak {B}} | \tilde {\alpha} (A, X) | ^ {2} \mathrm {d} P (A, X) \\ \leq \int_ {\mathfrak {B}} \mathrm {d} P (A, X) = 1 / 2 ^ {d}. \\ \end{array}
$$

The inequality uses $|\tilde{\alpha}| \leq 1$ , and the last equality uses the assumption that $P(A, X)$ is the uniform distribution, so that the probability of $\mathfrak{B}$ is its volume fraction $1/2^d$ . Hence, if $\tilde{\alpha} \in \mathcal{H}_{\alpha}$ , the smoothness parameter $M$ must have the exponential dependency

$$
M \geq 2 ^ {d}.
$$
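
The two quantities in this example are easy to check by Monte Carlo: $\tilde{\alpha}(\mathbf{0},x) = 1$ while $\|\tilde{\alpha}\|^2_{P(A,X)} \leq 1/2^d$. A small sketch (sample size and seed are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 2, 200_000
a = rng.uniform(-1.0, 1.0, size=(n, d))                     # A ~ Uniform([-1, 1]^d)
alpha = np.maximum(1.0 - 2.0 * np.abs(a).sum(axis=1), 0.0)  # alpha-tilde(a)

norm_sq = np.mean(alpha ** 2)    # Monte Carlo estimate of ||alpha-tilde||^2
assert norm_sq <= 2.0 ** (-d)    # the 1/2^d bound from the text
print(norm_sq, 1.0 / norm_sq)    # the ratio E[m^2]/||alpha||^2 = 1/norm_sq lower-bounds M
```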
# C OBSERVABLE CONFOUNDER

In this section, we consider the case where we have an additional observable confounder; the causal graph is given in Figure 4.

Given the causal graph in Figure 4, ATE and ATT are defined as follows:

$$
\theta_ {\mathrm {A T E}} (a) = \mathbb {E} _ {U, O} \left[ \mathbb {E} \left[ Y | U, O, A = a \right] \right], \quad \theta_ {\mathrm {A T T}} (a; a ^ {\prime}) = \mathbb {E} _ {U, O | A = a ^ {\prime}} \left[ \mathbb {E} \left[ Y | U, O, A = a \right] \right].
$$

Furthermore, we can consider another causal parameter called the conditional average treatment effect (CATE), which is the conditional average of the potential outcome given $O = o$ :

$$
\theta_ {\text {C A T E}} (a; o) = \mathbb {E} \left[ Y ^ {(a)} \Big | O = o \right].
$$

Given the exchangeability and no-interference assumptions, we have

$$
\theta_ {\text {C A T E}} (a; o) = \mathbb {E} _ {U | O = o} [ \mathbb {E} [ Y | U, O = o, A = a ] ].
$$

These causal parameters can be recovered if the back-door or the front-door variable is provided, as follows.

Back-door adjustment: First, we present the proposition stating that these causal parameters can be recovered if we are given the back-door variable $X$ .

Proposition 6 (Pearl, 1995). Given the back-door variable $X$ in Figure 4b, we have

$$
\theta_ {\mathrm {A T E}} (a) = \mathbb {E} _ {X, O} [ g (a, O, X) ],
$$

$$
\theta_ {\mathrm {A T T}} (a; a ^ {\prime}) = \mathbb {E} _ {X, O} [ g (a, O, X) | A = a ^ {\prime} ],
$$

$$
\theta_ {\mathrm {C A T E}} (a; o) = \mathbb {E} _ {X} [ g (a, o, X) | O = o ],
$$

where $g(a,o,x) = \mathbb{E}[Y|A = a,O = o,X = x]$ .

Now, we present the deep adaptive feature embedding approach to this setting. We first learn the conditional expectation $\hat{g}$ as $\hat{g} (a,o,x) = \hat{\boldsymbol{w}}^{\top}(\hat{\phi}_A(a)\otimes \hat{\phi}_O(o)\otimes \hat{\phi}_X(x))$ , where

$$
\hat {\boldsymbol {w}}, \hat {\phi} _ {A}, \hat {\phi} _ {O}, \hat {\phi} _ {X} = \underset {\boldsymbol {w}, \phi_ {A}, \phi_ {O}, \phi_ {X}} {\arg \min} \frac {1}{n} \sum_ {i = 1} ^ {n} \left(y _ {i} - \boldsymbol {w} ^ {\top} \left(\phi_ {A} \left(a _ {i}\right) \otimes \phi_ {O} \left(o _ {i}\right) \otimes \phi_ {X} \left(x _ {i}\right)\right)\right) ^ {2} \tag {4}
$$

given data $(y_{i},a_{i},o_{i},x_{i})$ . Here, $\pmb{w}$ is the weight and $\phi_A,\phi_O,\phi_X$ are the feature maps. From Proposition 6, we have

$$
\theta_ {\mathrm {A T E}} (a) \simeq \hat {\boldsymbol {w}} ^ {\top} \left(\hat {\phi} _ {A} (a) \otimes \mathbb {E} _ {X, O} \left[ \hat {\phi} _ {O} (O) \otimes \hat {\phi} _ {X} (X) \right]\right),
$$

$$
\theta_ {\mathrm {A T T}} (a; a ^ {\prime}) \simeq \hat {\boldsymbol {w}} ^ {\top} \left(\hat {\phi} _ {A} (a) \otimes \mathbb {E} _ {X, O} \left[ \hat {\phi} _ {O} (O) \otimes \hat {\phi} _ {X} (X) \mid A = a ^ {\prime} \right]\right),
$$

$$
\theta_ {\text {C A T E}} (a; o) \simeq \hat {\boldsymbol {w}} ^ {\top} \left(\hat {\phi} _ {A} (a) \otimes \hat {\phi} _ {O} (o) \otimes \mathbb {E} \left[ \hat {\phi} _ {X} (X) | O = o \right]\right).
$$

Therefore, by estimating the feature embeddings, we have

$$
\hat {\theta} _ {\mathrm {A T E}} (a) = \hat {\boldsymbol {w}} ^ {\top} \left(\hat {\phi} _ {A} (a) \otimes \frac {1}{n} \sum_ {i = 1} ^ {n} \left(\hat {\phi} _ {O} \left(o _ {i}\right) \otimes \hat {\phi} _ {X} \left(x _ {i}\right)\right)\right),
$$

$$
\hat {\theta} _ {\mathrm {A T T}} (a; a ^ {\prime}) = \hat {\boldsymbol {w}} ^ {\top} \left(\hat {\phi} _ {A} (a) \otimes \hat {\boldsymbol {f}} _ {\hat {\phi} _ {O} \otimes \hat {\phi} _ {X}} (a ^ {\prime})\right),
$$

$$
\hat {\theta} _ {\text {C A T E}} (a; o) = \hat {\boldsymbol {w}} ^ {\top} \left(\hat {\phi} _ {A} (a) \otimes \hat {\phi} _ {O} (o) \otimes \hat {\boldsymbol {f}} _ {\hat {\phi} _ {X}} (o)\right),
$$

where $\hat{\boldsymbol{f}}_{\hat{\phi}_O\otimes \hat{\phi}_X}$ and $\hat{\boldsymbol{f}}_{\hat{\phi}_X}$ are learned from

$$
\hat {\boldsymbol {f}} _ {\hat {\phi} _ {O} \otimes \hat {\phi} _ {X}} = \underset {\boldsymbol {f}} {\arg \min} \frac {1}{n} \sum_ {i = 1} ^ {n} \| \hat {\phi} _ {O} (o _ {i}) \otimes \hat {\phi} _ {X} (x _ {i}) - \boldsymbol {f} (a _ {i}) \| ^ {2},
$$

$$
\hat {\boldsymbol {f}} _ {\hat {\phi} _ {X}} = \underset {\boldsymbol {f}} {\arg \min } \frac {1}{n} \sum_ {i = 1} ^ {n} \| \hat {\phi} _ {X} (x _ {i}) - \boldsymbol {f} (o _ {i}) \| ^ {2}.
$$
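
A minimal numpy sketch of the back-door plug-in estimator $\hat{\theta}_{\mathrm{ATE}}(a)$ above; the learned networks are replaced by hypothetical toy feature maps and a random stand-in weight, so everything here except the Kronecker-product structure is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, dA, dO, dX = 500, 2, 3, 4

# Hypothetical pre-trained feature maps (stand-ins for the learned networks).
phi_A = lambda a: np.array([1.0, a])                         # dA = 2
phi_O = lambda o: np.array([1.0, o, o ** 2])                 # dO = 3
phi_X = lambda x: np.array([1.0, x, np.sin(x), np.cos(x)])   # dX = 4

w_hat = rng.normal(size=dA * dO * dX)   # stand-in for the learned weight w-hat
o_data = rng.normal(size=n)
x_data = rng.normal(size=n)

# theta-hat_ATE(a) = w-hat^T ( phi_A(a) (x) (1/n) sum_i phi_O(o_i) (x) phi_X(x_i) )
mean_OX = np.mean([np.kron(phi_O(o), phi_X(x)) for o, x in zip(o_data, x_data)], axis=0)
theta_ate = w_hat @ np.kron(phi_A(0.5), mean_OX)
print(theta_ate)
```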

Front-door adjustment: Given the front-door variable $M$ , these causal parameters can be identified as follows.

Proposition 7 (Pearl, 1995). Given the front-door variable $M$ in Figure 4c, we have

$$
\theta_ {\mathrm {A T E}} (a) = \mathbb {E} _ {A ^ {\prime}} \left[ \mathbb {E} _ {O} \left[ \mathbb {E} _ {M | O, A = a} \left[ g (A ^ {\prime}, O, M) \right] \right] \right],
$$

$$
\theta_ {\mathrm {A T T}} (a; a ^ {\prime}) = \mathbb {E} _ {O} \left[ \mathbb {E} _ {M | O, A = a} \left[ g \left(a ^ {\prime}, O, M\right) \right] \right],
$$

$$
\theta_ {\text {C A T E}} (a; o) = \mathbb {E} _ {A ^ {\prime}} \left[ \mathbb {E} _ {M | O = o, A = a} [ g (A ^ {\prime}, o, M) ] \right],
$$

where $g(a, o, m) = \mathbb{E}[Y | A = a, O = o, M = m]$ and $A'$ follows the same distribution as $A$ .

For front-door adjustment, we learn the conditional expectation $\hat{g}$ as $\hat{g}(a,o,m) = \hat{\boldsymbol{w}}^{\top}(\hat{\phi}_A(a)\otimes \hat{\phi}_O(o)\otimes \hat{\phi}_M(m))$ , where

$$
\hat {\boldsymbol {w}}, \hat {\phi} _ {A}, \hat {\phi} _ {O}, \hat {\phi} _ {M} = \underset {\boldsymbol {w}, \phi_ {A}, \phi_ {O}, \phi_ {M}} {\arg \min} \frac {1}{n} \sum_ {i = 1} ^ {n} \left(y _ {i} - \boldsymbol {w} ^ {\top} \left(\phi_ {A} \left(a _ {i}\right) \otimes \phi_ {O} \left(o _ {i}\right) \otimes \phi_ {M} \left(m _ {i}\right)\right)\right) ^ {2}.
$$

Then, from Proposition 7, we have

$$
\theta_ {\text {A T E}} (a) \simeq \hat {\boldsymbol {w}} ^ {\top} \left(\mathbb {E} \left[ \hat {\phi} _ {A} (A) \right] \otimes \mathbb {E} _ {O} \left[ \hat {\phi} _ {O} (O) \otimes \mathbb {E} _ {M | O, A = a} \left[ \hat {\phi} _ {M} (M) \right] \right]\right),
$$

$$
\theta_ {\mathrm {A T T}} (a; a ^ {\prime}) \simeq \hat {\boldsymbol {w}} ^ {\top} \left(\hat {\phi} _ {A} (a ^ {\prime}) \otimes \mathbb {E} _ {O} \left[ \hat {\phi} _ {O} (O) \otimes \mathbb {E} _ {M | O, A = a} \left[ \hat {\phi} _ {M} (M) \right] \right]\right),
$$

$$
\theta_ {\text {C A T E}} (a; o) \simeq \hat {\boldsymbol {w}} ^ {\top} \left(\mathbb {E} \left[ \hat {\phi} _ {A} (A) \right] \otimes \hat {\phi} _ {O} (o) \otimes \mathbb {E} _ {M | O = o, A = a} \left[ \hat {\phi} _ {M} (M) \right]\right).
$$

The conditional expectation $\mathbb{E}_{M|O = o,A = a}\left[\hat{\phi}_M(M)\right]$ is estimated by $\hat{\boldsymbol{f}}_{\hat{\phi}_M}(o,a)$ , where

$$
\hat {\boldsymbol {f}} _ {\hat {\phi} _ {M}} = \underset {\boldsymbol {f}} {\arg \min} \frac {1}{n} \sum_ {i = 1} ^ {n} \| \hat {\phi} _ {M} (m _ {i}) - \boldsymbol {f} (o _ {i}, a _ {i}) \| ^ {2}.
$$

Then, by replacing the marginal expectation with the empirical average, we have

$$
\hat {\theta} _ {\mathrm {A T E}} (a) = \hat {\boldsymbol {w}} ^ {\top} \left(\left(\frac {1}{n} \sum_ {i = 1} ^ {n} \hat {\phi} _ {A} \left(a _ {i}\right)\right) \otimes \frac {1}{n} \sum_ {j = 1} ^ {n} \left(\hat {\phi} _ {O} \left(o _ {j}\right) \otimes \hat {\boldsymbol {f}} _ {\hat {\phi} _ {M}} \left(o _ {j}, a\right)\right)\right),
$$

$$
\hat {\theta} _ {\mathrm {A T T}} (a; a ^ {\prime}) = \hat {\boldsymbol {w}} ^ {\top} \left(\hat {\phi} _ {A} (a ^ {\prime}) \otimes \frac {1}{n} \sum_ {i = 1} ^ {n} \left(\hat {\phi} _ {O} (o _ {i}) \otimes \hat {\boldsymbol {f}} _ {\hat {\phi} _ {M}} (o _ {i}, a)\right)\right),
$$

$$
\hat {\theta} _ {\text {C A T E}} (a; o) = \hat {\boldsymbol {w}} ^ {\top} \left(\left(\frac {1}{n} \sum_ {i = 1} ^ {n} \hat {\phi} _ {A} \left(a _ {i}\right)\right) \otimes \hat {\phi} _ {O} (o) \otimes \hat {\boldsymbol {f}} _ {\hat {\phi} _ {M}} (o, a)\right).
$$
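
The front-door plug-in estimator $\hat{\theta}_{\mathrm{ATE}}(a)$ can be sketched in the same plug-in style; here the second-stage conditional mean $\hat{\boldsymbol{f}}_{\hat{\phi}_M}$ is replaced by an ordinary linear least-squares fit, and all feature maps, data, and weights are hypothetical stand-ins for the learned networks:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical learned feature maps (stand-ins for trained networks).
phi_A = lambda a: np.array([1.0, a])
phi_O = lambda o: np.array([1.0, o])
phi_M = lambda m: np.array([1.0, m, m ** 2])

a_data = rng.normal(size=n)
o_data = rng.normal(size=n)
m_data = 0.8 * a_data + 0.3 * o_data + 0.1 * rng.normal(size=n)

# Stage 2: regress phi_M(m_i) on (o_i, a_i) by linear least squares --
# a simple stand-in for f-hat_{phi_M}(o, a) ~= E[phi_M(M) | O=o, A=a].
Z = np.column_stack([np.ones(n), o_data, a_data])
Phi_M = np.stack([phi_M(m) for m in m_data])
B, *_ = np.linalg.lstsq(Z, Phi_M, rcond=None)
f_hat = lambda o, a: np.array([1.0, o, a]) @ B

# Front-door plug-in ATE at a fixed treatment value a_star:
w_hat = rng.normal(size=2 * 2 * 3)   # stand-in for the learned weight
a_star = 1.0
mean_A = np.mean([phi_A(a) for a in a_data], axis=0)
mean_O_term = np.mean([np.kron(phi_O(o), f_hat(o, a_star)) for o in o_data], axis=0)
theta_ate = w_hat @ np.kron(mean_A, mean_O_term)
print(theta_ate)
```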
# D EXPERIMENT DETAILS

Here, we describe the network architectures and hyper-parameters of all experiments. Unless otherwise specified, we used Adam with learning rate $= 0.001$ , $\beta_{1} = 0.9$ , $\beta_{2} = 0.999$ and $\varepsilon = 10^{-8}$ . For RKHS Embedding, we used a Gaussian kernel for continuous variables, with the bandwidth determined by the median trick.
# D.1 BINARY TREATMENT SCENARIO

In this scenario, all treatments are binary $A \in \{0,1\}$ . In RKHS Embedding and Neural Embedding, we used the feature $\phi_A$ given as

$$
\phi_ {A} (1) = [ 1, 0 ] ^ {\top}, \quad \phi_ {A} (0) = [ 0, 1 ] ^ {\top}
$$

in both the IHDP setting and the ACIC setting. This is equivalent to learning the two models

$$
\mathbb {E} [ Y | X = x, A = 0 ] = w _ {0} ^ {\top} \phi_ {X} (x), \quad \mathbb {E} [ Y | X = x, A = 1 ] = w _ {1} ^ {\top} \phi_ {X} (x)
$$

with a shared nonlinear feature $\phi_X(x)$ .
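
This block-selection property of the one-hot treatment feature is easy to verify numerically; the weights and features below are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
phi_X = rng.normal(size=d)        # some feature vector phi_X(x)
w = rng.normal(size=2 * d)        # joint weight of the tensor-product model
w1, w0 = w[:d], w[d:]             # blocks corresponding to A=1 and A=0

phi_A = {1: np.array([1.0, 0.0]), 0: np.array([0.0, 1.0])}

# w^T (phi_A(a) (x) phi_X(x)) selects the weight block of the given treatment arm:
assert np.isclose(w @ np.kron(phi_A[1], phi_X), w1 @ phi_X)
assert np.isclose(w @ np.kron(phi_A[0], phi_X), w0 @ phi_X)
```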

IHDP Dataset We used the 1000 datasets used in (Chernozhukov et al., 2022b), which are publicly available on the GitHub page of the paper. The network structure for the back-door feature $\phi_X(X)$ is shown in Table 2. Note that this is a much smaller network than Dragonnet or RieszNet, but increasing the network size did not change the results much.
Table 2: Network structures of Neural Embedding for IHDP dataset. For the input layer, we provide the input variable. For the fully-connected layers (FC), we provide the input and output dimensions.
<table><tr><td colspan="2">Back-door feature φX(X)</td></tr><tr><td>Layer</td><td>Configuration</td></tr><tr><td>1</td><td>Input(X)</td></tr><tr><td>2</td><td>FC(25, 200), ReLU</td></tr><tr><td>3</td><td>FC(200, 200), ReLU</td></tr></table>

ACIC Dataset We used the 101 datasets used in (Shi et al., 2019), which satisfy the overlap assumption (i.e., no data point has an extreme propensity score $P(A = 1|X)$ ). We noticed that some datasets contain outliers, and we only consider the data points whose outcome $Y$ is in the range

$$
Y \in \left[ Q _ {1} (Y) - 5 \, \mathrm {I Q R}, \; Q _ {3} (Y) + 5 \, \mathrm {I Q R} \right],
$$

where $Q_{1}(Y), Q_{3}(Y)$ are the $25\%$ and $75\%$ quantiles of the outcome, respectively, and $\mathrm{IQR} = Q_3(Y) - Q_1(Y)$ .
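
This filter takes a few lines of numpy; the function name is ours and the 5×IQR factor is from the text:

```python
import numpy as np

def iqr_filter(y, k=5.0):
    """Keep points with y in [Q1 - k*IQR, Q3 + k*IQR] (k=5 as in the text)."""
    q1, q3 = np.percentile(y, [25, 75])
    iqr = q3 - q1
    return (y >= q1 - k * iqr) & (y <= q3 + k * iqr)

y = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 1000.0])  # last point is an extreme outlier
mask = iqr_filter(y)
print(y[mask])  # the outlier 1000.0 is dropped, the rest are kept
```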

We ran the Dragonnet and RieszNet estimators with the same network architecture as in the IHDP experiment. The network structure for the back-door feature $\phi_X(X)$ is shown in Table 3. Note that the same structure is used in Dragonnet and RieszNet to predict the conditional expectation $\mathbb{E}\left[Y|X,A\right]$ .
Table 3: Network structures of Neural Embedding for ACIC dataset. For the input layer, we provide the input variable. For the fully-connected layers (FC), we provide the input and output dimensions.
Back-door feature $\phi_X(X)$
<table><tr><td>Layer</td><td>Configuration</td></tr><tr><td>1</td><td>Input(X)</td></tr><tr><td>2</td><td>FC(177, 200), ELU</td></tr><tr><td>3</td><td>FC(200, 200), ELU</td></tr><tr><td>4</td><td>FC(200, 200), ELU</td></tr><tr><td>5</td><td>FC(200, 100), ELU</td></tr></table>
# D.2 HIGH-DIMENSIONAL TREATMENT SCENARIO

Here, we generated all datasets ourselves from the original dSprite dataset (Matthey et al., 2017).

Back-door ATE estimation The network features for the proposed method are summarized in Table 4. The network structure for RieszNet is given in Table 5. Note that they share a similar feature extractor for images.
Table 4: Network structures of the neural embedding method in dSprite back-door adjustment experiment. For the input layer, we provide the input variable. For the fully-connected layers (FC), we provide the input and output dimensions. SN denotes Spectral Normalization (Miyato et al., 2018).
<table><tr><td colspan="2">Back-door feature φX(X)</td><td colspan="2">Treatment Feature φA(A)</td></tr><tr><td>Layer</td><td>Configuration</td><td>Layer</td><td>Configuration</td></tr><tr><td>1</td><td>Input(X)</td><td>1</td><td>Input(A)</td></tr><tr><td>2</td><td>FC(2, 36), ReLU</td><td>2</td><td>FC(4096, 1024), SN, ReLU</td></tr><tr><td>3</td><td>FC(36, 5), ReLU</td><td>3</td><td>FC(1024, 512), SN, ReLU, BN</td></tr><tr><td></td><td></td><td>4</td><td>FC(512, 128), SN, ReLU</td></tr><tr><td></td><td></td><td>5</td><td>FC(128, 32), SN, BN, Tanh</td></tr></table>
Front-door ATT estimation Here, we used the same network architecture as in the back-door adjustment summarized in Table 4.
Table 5: Network structures of RieszNet in dSprite back-door adjustment experiment. For the fully-connected layers (FC), we provide the input and output dimensions. SN denotes Spectral Normalization (Miyato et al., 2018).
<table><tr><td colspan="2">Common Feature φ(A, X)</td></tr><tr><td>Layer</td><td>Configuration</td></tr><tr><td>1</td><td>Input(A, X)</td></tr><tr><td>2</td><td>FC(4098, 1024), SN, ReLU</td></tr><tr><td>3</td><td>FC(1024, 512), SN, ReLU, BN</td></tr><tr><td>4</td><td>FC(512, 128), SN, ReLU</td></tr><tr><td>5</td><td>FC(128, 32), SN, BN, Tanh</td></tr></table>
<table><tr><td colspan="2">Regressor g</td></tr><tr><td>Layer</td><td>Configuration</td></tr><tr><td>1</td><td>Input(A, X)</td></tr><tr><td>2</td><td>Common Feature φ(A, X)</td></tr><tr><td>3</td><td>FC(32, 32), ReLU</td></tr><tr><td>4</td><td>FC(32, 32), ReLU</td></tr><tr><td>5</td><td>FC(32, 1)</td></tr></table>
<table><tr><td colspan="2">Riesz representative learning α</td></tr><tr><td>Layer</td><td>Configuration</td></tr><tr><td>1</td><td>Input(A, X)</td></tr><tr><td>2</td><td>Common Feature φ(A, X)</td></tr><tr><td>3</td><td>FC(32, 1)</td></tr></table>
|
2023/A Neural Mean Embedding Approach for Back-door and Front-door Adjustment/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:67b91d75591239eef64411f899e940dfc93248f7abcc5a5d556ee685cbd6f40c
|
| 3 |
+
size 1384531
|
2023/A Neural Mean Embedding Approach for Back-door and Front-door Adjustment/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2023/A Non-Asymptotic Analysis of Oversmoothing in Graph Neural Networks/81341b4f-f47d-44a6-9c0c-76b5656a7234_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2023/A Non-Asymptotic Analysis of Oversmoothing in Graph Neural Networks/81341b4f-f47d-44a6-9c0c-76b5656a7234_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2023/A Non-Asymptotic Analysis of Oversmoothing in Graph Neural Networks/81341b4f-f47d-44a6-9c0c-76b5656a7234_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:c6f238f19ec4244a929549296af96a0abd59aab9a49d678dd49b00438331fc1c
|
| 3 |
+
size 4462529
|
2023/A Non-Asymptotic Analysis of Oversmoothing in Graph Neural Networks/full.md
ADDED
|
@@ -0,0 +1,949 @@
# A NON-ASYMPTOTIC ANALYSIS OF OVERSMOOTHING IN GRAPH NEURAL NETWORKS

Xinyi Wu$^{1}$, Zhengdao Chen$^{2,*}$, William Wang$^{1}$, Ali Jadbabaie$^{1}$

$^{1}$ Laboratory for Information and Decision Systems (LIDS), MIT
$^{2}$ Courant Institute of Mathematical Sciences, New York University

{xinyiwu,wwang314,jadbabai}@mit.edu, zc1216@nyu.edu

# ABSTRACT

Oversmoothing is a central challenge of building more powerful Graph Neural Networks (GNNs). While previous works have only demonstrated that oversmoothing is inevitable when the number of graph convolutions tends to infinity, in this paper, we precisely characterize the mechanism behind the phenomenon via a non-asymptotic analysis. Specifically, we distinguish between two different effects when applying graph convolutions—an undesirable mixing effect that homogenizes node representations in different classes, and a desirable denoising effect that homogenizes node representations in the same class. By quantifying these two effects on random graphs sampled from the Contextual Stochastic Block Model (CSBM), we show that oversmoothing happens once the mixing effect starts to dominate the denoising effect, and the number of layers required for this transition is $O(\log N / \log(\log N))$ for sufficiently dense graphs with $N$ nodes. We also extend our analysis to study the effects of Personalized PageRank (PPR), or equivalently, the effects of initial residual connections on oversmoothing. Our results suggest that while PPR mitigates oversmoothing at deeper layers, PPR-based architectures still achieve their best performance at a shallow depth and are outperformed by the graph convolution approach on certain graphs. Finally, we support our theoretical results with numerical experiments, which further suggest that the oversmoothing phenomenon observed in practice can be magnified by the difficulty of optimizing deep GNN models.
# 1 INTRODUCTION

Graph Neural Networks (GNNs) are a powerful framework for learning with graph-structured data (Gori et al., 2005; Scarselli et al., 2009; Bruna et al., 2014; Duvenaud et al., 2015; Defferrard et al., 2016; Battaglia et al., 2016; Li et al., 2016). Most GNN models are built by stacking graph convolutions or message-passing layers (Gilmer et al., 2017), where the representation of each node is computed by recursively aggregating and transforming the representations of its neighboring nodes. The most representative and popular example is the Graph Convolutional Network (GCN) (Kipf & Welling, 2017), which has demonstrated success in node classification, a primary graph task that asks for node labels and identifies community structures in real graphs.

Figure 1: Stacking GNN layers increases both the mixing and denoising effects, which counteract each other. Depending on the graph properties, either the denoising effect dominates the mixing effect, resulting in less difficulty classifying nodes (A), or the mixing effect dominates the denoising effect, resulting in more difficulty classifying nodes (B)—this is when oversmoothing starts to happen.

Despite these achievements, the choice of depth for these GNN models remains an intriguing question. GNNs often achieve optimal classification performance when networks are shallow. Many widely used GNNs such as the GCN are no deeper than 4 layers (Kipf & Welling, 2017; Wu et al., 2019), and it has been observed that for deeper GNNs, repeated message-passing makes node representations in different classes indistinguishable and leads to lower node classification accuracy—a phenomenon known as oversmoothing (Kipf & Welling, 2017; Li et al., 2018; Klicpera et al., 2019; Wu et al., 2019; Oono & Suzuki, 2020; Chen et al., 2020a,b; Keriven, 2022). Through the insight that graph convolutions can be regarded as low-pass filters on graph signals, prior studies have established that oversmoothing is inevitable when the number of layers in a GNN increases to infinity (Li et al., 2018; Oono & Suzuki, 2020). However, these asymptotic analyses do not fully explain the rapid occurrence of oversmoothing when we increase the network depth, let alone the fact that for some datasets, having no graph convolution is even optimal (Liu et al., 2021). These observations motivate the following key questions about oversmoothing in GNNs:
- Why does oversmoothing happen at a relatively shallow depth?
- Can we quantitatively model the effect of applying a finite number of graph convolutions and theoretically predict the "sweet spot" for the choice of depth?
In this paper, we propose a non-asymptotic analysis framework to study the effects of graph convolutions and oversmoothing using the Contextual Stochastic Block Model (CSBM) (Deshpande et al., 2018). The CSBM mimics the community structure of real graphs and enables us to evaluate the performance of linear GNNs through the probabilistic model with ground-truth community labels. More importantly, as a generative model, the CSBM gives us full control over the graph structure and allows us to analyze the effect of graph convolutions non-asymptotically. In particular, we distinguish between two counteracting effects of graph convolutions:

- mixing effect (undesirable): homogenizing node representations in different classes;
- denoising effect (desirable): homogenizing node representations in the same class.
Adding graph convolutions will increase both the mixing and denoising effects. As a result, oversmoothing happens not just because the mixing effect keeps accumulating as the depth increases, on which the asymptotic analyses are based (Li et al., 2018; Oono & Suzuki, 2020), but rather because the mixing effect starts to dominate the denoising effect (see Figure 1 for a schematic illustration). By quantifying both effects as a function of the model depth, we show that the turning point of the tradeoff between the two effects is $O(\log N / \log(\log N))$ for graphs with $N$ nodes sampled from the CSBM in sufficiently dense regimes. Besides new theory, this paper also presents numerical experiments directly comparing theoretical predictions and empirical results. This comparison leads to new insights highlighting the fact that the oversmoothing phenomenon observed in practice is often a mixture of pure oversmoothing and the difficulty of optimizing weights in deep GNN models.

In addition, we apply our framework to analyze the effects of Personalized PageRank (PPR) on oversmoothing. Personalized propagation of neural predictions (PPNP) and its approximate variant (APPNP) make use of PPR and its approximate variant, respectively, and were proposed as a solution to mitigate oversmoothing while retaining the ability to aggregate information from larger neighborhoods in the graph (Klicpera et al., 2019). We show mathematically that PPR makes the model performance more robust to an increasing number of layers by reducing the mixing effect at each layer, but it reduces the desirable denoising effect at the same time. For graphs with a large size or a strong community structure, the reduction of the denoising effect would be greater than the reduction of the mixing effect, and thus PPNP and APPNP would perform worse than the vanilla GNN on those graphs.
Our contributions are summarized as follows:

- We show that adding graph convolutions strengthens the denoising effect while exacerbating the mixing effect. Oversmoothing happens because the mixing effect dominates the denoising effect beyond a certain depth. For sufficiently dense CSBM graphs with $N$ nodes, the required number of layers for this to happen is $O(\log N / \log(\log N))$.
- We apply our framework to rigorously characterize the effects of PPR on oversmoothing. We show that PPR reduces both the mixing effect and the denoising effect of message-passing and thus does not necessarily improve node classification performance.
- We verify our theoretical results in experiments. Through comparison between theory and experiments, we find that the difficulty of optimizing weights in deep GNN architectures often aggravates oversmoothing.
# 2 ADDITIONAL RELATED WORK

Oversmoothing problem in GNNs Oversmoothing is a well-known issue in deep GNNs, and many techniques have been proposed to relieve it in practice (Xu et al., 2018; Li et al., 2019; Chen et al., 2020b; Huang et al., 2020; Zhao & Akoglu, 2020). On the theory side, prior works have shown that as the model depth goes to infinity, the node representations within each connected component of the graph will converge to the same values (Li et al., 2018; Oono & Suzuki, 2020). However, the early onset of oversmoothing renders it an important concern in practice, and it has not been satisfactorily explained by the previous asymptotic studies. Our work addresses this gap by quantifying the effects of graph convolutions as a function of model depth and justifying why oversmoothing happens in shallow GNNs. A recent study shared a similar insight of distinguishing between two competing effects of message-passing and showed the existence of an optimal number of layers for node prediction tasks on a latent-space random graph model, but it did not further quantify the optimal depth, and hence the oversmoothing phenomenon was still only characterized asymptotically (Keriven, 2022).

Analysis of GNNs on CSBMs Stochastic block models (SBMs) and their contextual counterparts have been widely used to study node classification problems (Abbe, 2018; Chen et al., 2019). Recently, several works have proposed to use CSBMs to theoretically analyze GNNs for the node classification task. Wei et al. (2022) used CSBMs to study the effect of nonlinearity on node classification performance, while Fountoulakis et al. (2022) used CSBMs to study attention-based GNNs. More relevantly, Baranwal et al. (2021; 2022) showed the advantage of applying graph convolutions up to three times for node classification on CSBM graphs. Nonetheless, they only focused on the desirable denoising effect of graph convolution rather than its tradeoff with the undesirable mixing effect, and therefore did not explain the occurrence of oversmoothing.
# 3 PROBLEM SETTING AND MAIN RESULTS

We first introduce our theoretical analysis setup using the Contextual Stochastic Block Model (CSBM), a random graph model with planted community structure (Deshpande et al., 2018; Baranwal et al., 2021; 2022; Ma et al., 2022; Wei et al., 2022; Fountoulakis et al., 2022). We then present a set of theoretical results establishing bounds on the representation power of GNNs in terms of the best-case node classification accuracy. The proofs of all the theorems and additional claims are provided in the Appendix.
# 3.1 NOTATIONS

We represent an undirected graph with $N$ nodes by $\mathcal{G} = (A, X)$, where $A \in \{0,1\}^{N \times N}$ is the adjacency matrix and $X \in \mathbb{R}^N$ is the node feature vector. For nodes $u, v \in [N]$, $A_{uv} = 1$ if and only if $u$ and $v$ are connected by an edge in $\mathcal{G}$, and $X_u \in \mathbb{R}$ represents the node feature of $u$. We let $\mathbb{1}_N$ denote the all-one vector of length $N$ and $D = \mathrm{diag}(A\mathbb{1}_N)$ be the degree matrix of $\mathcal{G}$.
# 3.2 THEORETICAL ANALYSIS FRAMEWORK

Contextual Stochastic Block Models We will focus on the case where the CSBM consists of two classes $\mathcal{C}_1$ and $\mathcal{C}_2$ of nodes of equal size, in total with $N$ nodes. For any two nodes in the graph, if they are from the same class, they are connected by an edge independently with probability $p$; if they are from different classes, the probability is $q$. For each node $v \in \mathcal{C}_i$, $i \in \{1, 2\}$, the initial feature $X_v$ is sampled independently from a Gaussian distribution $\mathcal{N}(\mu_i, \sigma^2)$, where $\mu_i \in \mathbb{R}$, $\sigma \in (0, \infty)$. Without loss of generality, we assume that $\mu_1 < \mu_2$. We denote a graph generated from such a CSBM as $\mathcal{G}(A, X) \sim \mathrm{CSBM}(N, p, q, \mu_1, \mu_2, \sigma^2)$. We further impose the following assumption on the CSBM used in our analysis.

Assumption 1. $p, q = \omega(\log N / N)$ and $p > q > 0$.
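For concreteness, the generative process above can be sketched in a few lines of NumPy (a minimal illustration; the function name and the parameter values in the example are our own, not from the paper):

```python
import numpy as np

def sample_csbm(N, p, q, mu1, mu2, sigma, rng):
    """Sample (A, X, y) from CSBM(N, p, q, mu1, mu2, sigma^2).

    Nodes 0..N/2-1 form class 1, the rest form class 2.
    """
    y = np.repeat([0, 1], N // 2)          # community labels
    same = y[:, None] == y[None, :]        # same-class indicator matrix
    probs = np.where(same, p, q)           # edge probability for each pair
    upper = np.triu(rng.random((N, N)) < probs, k=1)
    A = (upper | upper.T).astype(float)    # symmetric, no self-loops
    X = rng.normal(np.where(y == 0, mu1, mu2), sigma)  # Gaussian features
    return A, X, y

rng = np.random.default_rng(0)
A, X, y = sample_csbm(1000, 0.05, 0.01, -1.0, 1.0, 1.0, rng)
```

Drawing only the upper triangle and symmetrizing keeps each pair's edge a single independent Bernoulli draw, as the model requires.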
The choice $p, q = \omega(\log N / N)$ ensures that the generated graph $\mathcal{G}$ is connected almost surely (Abbe, 2018) while being slightly more general than the $p, q = \omega(\log^2 N / N)$ regime considered in some concurrent works (Baranwal et al., 2021; Wei et al., 2022). In addition, this regime also guarantees that $\mathcal{G}$ has a small diameter. Real-world graphs are known to exhibit the "small-world" phenomenon—even if the number of nodes $N$ is very large, the diameter of the graph remains small (Girvan & Newman, 2002; Chung, 2010). We will see in the theoretical analysis (Section 3.3) how this small-diameter characteristic contributes to the occurrence of oversmoothing in shallow GNNs. We remark that our results in fact hold for the more general choice of $p, q = \Omega(\log N / N)$, for which only the concentration bound in Theorem 1 needs to be modified in the threshold $\log N / N$ case, where all the constants need a more careful treatment.

Further, the choice $p > q$ ensures that the graph structure is homophilous, meaning that nodes from the same class are more likely to be connected than nodes from different classes. This characteristic is observed in a wide range of real-world graphs (Easley & Kleinberg, 2010; Ma et al., 2022). We note that this homophily assumption ($p > q$) is not essential to our analysis, though we add it for simplicity, since the discussion of homophily versus heterophily ($p < q$) is not the focus of our paper.
Graph convolution and linear GNN In this paper, our theoretical analysis focuses on the simplified linear GNN model defined as follows: a graph convolution using the (left-)normalized adjacency matrix takes the operation $h' = (D^{-1}A)h$, where $h$ and $h'$ are the input and output node representations, respectively. A linear GNN layer can then be defined as $h' = (D^{-1}A)hW$, where $W$ is a learnable weight matrix. As a result, the output of $n$ linear GNN layers can be written as $h^{(n)}\prod_{k=1}^{n}W^{(k)}$, where $h^{(n)} = (D^{-1}A)^{n}X$ is the output of $n$ graph convolutions and $W^{(k)}$ is the weight matrix of the $k^{\text{th}}$ layer. Since this output is linear in $h^{(n)}$, it follows that $n$-layer linear GNNs have the same representation power as linear classifiers applied to $h^{(n)}$.

In practice, when building GNN models, nonlinear activation functions can be added between consecutive linear GNN layers. For additional results showing that adding certain nonlinearities would not improve the classification performance, see Appendix K.1.
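The propagation step $h^{(n)} = (D^{-1}A)^{n}X$ is a reference computation worth spelling out (weights omitted, since by the argument above they do not change the representation power; the function name is ours):

```python
import numpy as np

def graph_convolutions(A, X, n):
    """Apply n left-normalized graph convolutions h <- (D^{-1} A) h."""
    P = A / A.sum(axis=1, keepdims=True)  # row-stochastic matrix D^{-1} A
    h = np.asarray(X, dtype=float)
    for _ in range(n):
        h = P @ h                         # each node averages its neighbors
    return h

# Toy example: on a triangle graph, one convolution replaces each node's
# feature with the mean of the other two nodes' features.
A = np.ones((3, 3)) - np.eye(3)
h1 = graph_convolutions(A, np.array([0.0, 3.0, 6.0]), 1)  # 4.5, 3.0, 1.5
```

Row-normalizing once and reusing `P` matches the repeated application of the same operator in the analysis.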
Bayes error rate and z-score Thanks to the linearity of the model, we see that the representation of node $v \in \mathcal{C}_i$ after $n$ graph convolutions is distributed as $\mathcal{N}(\mu_i^{(n)}, (\sigma^{(n)})^2)$, where the variance $(\sigma^{(n)})^2$ is shared between classes. The optimal node-wise classifier in this case is the Bayes optimal classifier, given by the following lemma.

Lemma 1. Suppose the label $y$ is drawn uniformly from $\{1, 2\}$, and given $y$, $x \sim \mathcal{N}(\mu_y^{(n)}, (\sigma^{(n)})^2)$. Then the Bayes optimal classifier, which minimizes the probability of misclassification among all classifiers, has decision boundary $\mathcal{D} = (\mu_1^{(n)} + \mu_2^{(n)}) / 2$, and predicts $y = 1$ if $x \leq \mathcal{D}$, or $y = 2$ if $x > \mathcal{D}$. The associated Bayes error rate is $1 - \Phi(z^{(n)})$, where $\Phi$ denotes the cumulative distribution function of the standard Gaussian distribution and $z^{(n)} = \frac{1}{2} (\mu_2^{(n)} - \mu_1^{(n)}) / \sigma^{(n)}$ is the $z$-score of $\mathcal{D}$ with respect to $\mathcal{N}(\mu_1^{(n)}, (\sigma^{(n)})^2)$.
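Lemma 1 turns into a short calculator for the best achievable error, using only the standard library (the helper name is ours):

```python
import math

def bayes_error(mu1, mu2, sigma):
    """Bayes error rate 1 - Phi(z) for two equal-variance Gaussians with a
    uniform prior, where z = (mu2 - mu1) / (2 * sigma)."""
    z = 0.5 * (mu2 - mu1) / sigma
    phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF
    return 1.0 - phi

# With means -1 and +1 and unit variance, z = 1, so the Bayes error is
# 1 - Phi(1), roughly 15.9%.
err = bayes_error(-1.0, 1.0, 1.0)
```

The standard normal CDF is expressed through `math.erf`, so no external dependency is needed.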
Lemma 1 states that we can estimate the optimal performance of an $n$-layer linear GNN through the z-score $z^{(n)} = \frac{1}{2} (\mu_2^{(n)} - \mu_1^{(n)}) / \sigma^{(n)}$. A higher z-score indicates a smaller Bayes error rate, and hence a better expected performance of node classification. The z-score serves as a basis for our quantitative analysis of oversmoothing. In the following section, by estimating $\mu_2^{(n)} - \mu_1^{(n)}$ and $(\sigma^{(n)})^2$, we quantify the two counteracting effects of graph convolutions and obtain bounds on the z-score $z^{(n)}$ as a function of $n$, which allows us to characterize oversmoothing quantitatively. Specifically, there are two potential interpretations of oversmoothing based on the z-score: (1) $z^{(n)} < z^{(n^{\star})}$, where $n^{\star} = \arg \max_{n'} z^{(n')}$; and (2) $z^{(n)} < z^{(0)}$. They correspond to the cases (1) $n > n^{\star}$; and (2) $n > n_0$, where $n_0 \geq 0$ denotes the number of layers that yields a z-score on par with $z^{(0)}$. The bounds on the z-score $z^{(n)}$, $z_{\mathrm{lower}}^{(n)}$ and $z_{\mathrm{upper}}^{(n)}$, enable us to estimate $n^{\star}$ and $n_0$ under different scenarios and provide insights into the optimal choice of depth.
# 3.3 MAIN RESULTS

We first estimate the gap between the means $\mu_2^{(n)} - \mu_1^{(n)}$ with respect to the number of layers $n$. $\mu_2^{(n)} - \mu_1^{(n)}$ measures how much node representations in different classes have homogenized after $n$ GNN layers, which is the undesirable mixing effect.

Lemma 2. For $n \in \mathbb{N} \cup \{0\}$, assuming $D^{-1}A \approx \mathbb{E}[D]^{-1}\mathbb{E}[A]$,

$$
\mu_2^{(n)} - \mu_1^{(n)} = \left(\frac{p - q}{p + q}\right)^{n} (\mu_2 - \mu_1).
$$
Lemma 2 states that the means $\mu_1^{(n)}$ and $\mu_2^{(n)}$ get closer exponentially fast, and as $n \to \infty$, both $\mu_1^{(n)}$ and $\mu_2^{(n)}$ converge to the same value (in this case $(\mu_1 + \mu_2) / 2$). The rate of change $(p - q) / (p + q)$ is determined by the intra-community edge density $p$ and the inter-community edge density $q$. Lemma 2 suggests that graphs with a higher inter-community density ($q$) or a lower intra-community density ($p$) are expected to suffer from a stronger mixing effect when we perform message-passing. We provide the following concentration bound for our estimate of $\mu_2^{(n)} - \mu_1^{(n)}$, which states that the estimate concentrates at a rate of $O(1 / \sqrt{N(p + q)})$.

Theorem 1. Fix $K \in \mathbb{N}$ and $r > 0$. There exists a constant $C(r, K)$ such that with probability at least $1 - O(1/N^r)$, it holds for all $1 \leq k \leq K$ that

$$
\left| \left(\mu_2^{(k)} - \mu_1^{(k)}\right) - \left(\frac{p - q}{p + q}\right)^{k} (\mu_2 - \mu_1) \right| \leq \frac{C}{\sqrt{N(p + q)}}.
$$
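The geometric decay of the mean gap in Lemma 2 is easy to tabulate; with the illustrative densities below (our own values, not from the paper) the gap shrinks by a factor of $2/3$ per layer:

```python
p, q = 0.05, 0.01                      # illustrative edge densities
rate = (p - q) / (p + q)               # per-layer shrinkage of the mean gap
gaps = [rate ** n for n in range(6)]   # relative gap after n convolutions
# gaps decreases geometrically: 1.0, 0.667, 0.444, 0.296, 0.198, 0.132
```

Even a modest inter-community density thus drives the class means together within a handful of layers.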
We then study the variance $(\sigma^{(n)})^2$ with respect to the number of layers $n$. The variance $(\sigma^{(n)})^2$ measures how much the node representations in the same class have homogenized, which is the desirable denoising effect. We first state that no matter how many layers are applied, there is a nontrivial fixed lower bound on $(\sigma^{(n)})^2$ for a graph with $N$ nodes.

Lemma 3. For all $n \in \mathbb{N} \cup \{0\}$, $\frac{1}{N}\sigma^2 \leq (\sigma^{(n)})^2 \leq \sigma^2$.
Lemma 3 implies that for a given graph, even as the number of layers $n$ goes to infinity, the variance $(\sigma^{(n)})^2$ does not converge to zero, meaning that there is a fixed lower bound for the denoising effect. See Appendix K.2 for the exact theoretical limit of the variance $(\sigma^{(n)})^2$ as $n$ goes to infinity. We now establish a set of more precise upper and lower bounds for the variance $(\sigma^{(n)})^2$ with respect to the number of layers $n$ in the following technical lemma.

Lemma 4. Let $a = Np / \log N$. With probability at least $1 - O(1 / N)$, it holds for all $1 \leq n \leq N$ that

$$
\max\left\{\frac{\min\{a, 2\}}{10} \frac{1}{(Np)^{n}}, \frac{1}{N}\right\} \sigma^{2} \leq (\sigma^{(n)})^{2},
$$

$$
(\sigma^{(n)})^{2} \leq \min\left\{\sum_{k=0}^{\lfloor n/2 \rfloor} \frac{9}{\min\{a, 2\}} (n - 2k + 1)^{2k} (Np)^{n - 2k} \left(\frac{2}{N(p + q)}\right)^{2n - 2k}, \; 1\right\} \sigma^{2}.
$$
Lemma 4 holds for all $1 \leq n \leq N$ and directly leads to the following theorem with a simplified upper bound when $n$ is bounded by a constant $K$.

Theorem 2. Let $a = Np / \log N$. Fix $K \in \mathbb{N}$. There exists a constant $C(K)$ such that with probability at least $1 - O(1 / N)$, it holds for all $1 \leq n \leq K$ that

$$
\max\left\{\frac{\min\{a, 2\}}{10} \frac{1}{(Np)^{n}}, \frac{1}{N}\right\} \sigma^{2} \leq (\sigma^{(n)})^{2} \leq \min\left\{\frac{C}{\min\{a, 2\}} \frac{1}{(N(p + q))^{n}}, 1\right\} \sigma^{2}.
$$
Theorem 2 states that the variance $(\sigma^{(n)})^2$ of each Gaussian distribution decreases more for larger or denser graphs. Moreover, the upper bound implies that the variance $(\sigma^{(n)})^2$ initially decays geometrically, shrinking by a factor of $1 / (N(p + q))$ per layer, before reaching the fixed lower bound $\sigma^2 / N$ given by Lemma 3. This means that after $O(\log N / \log(\log N))$ layers, the desirable denoising effect homogenizing node representations in the same class will saturate, and the undesirable mixing effect will start to dominate.

Why does oversmoothing happen at a shallow depth? For each node, message-passing with different-class nodes homogenizes their representations exponentially fast, at a rate that depends on the fraction of different-class neighbors among all neighbors (Lemma 2, mixing effect). Meanwhile, message-passing with nodes that have not been encountered before causes the denoising effect, whose magnitude depends on the absolute number of newly encountered neighbors. The diameter of the graph is approximately $\log N / \log(Np)$ in the $p, q = \Omega(\log N / N)$ regime (Graham & Lu, 2001), and thus is at most $\log N / \log(\log N)$ in our case. Once the number of layers surpasses the diameter, for each node there are no nodes left that have not been encountered before in message-passing, and hence the denoising effect almost vanishes (Theorem 2, denoising effect). $\log N / \log(\log N)$ grows very slowly with $N$; for example, when $N = 10^{6}$, $\log N / \log(\log N) \approx 8$. This is why, even in a large graph, the mixing effect quickly dominates the denoising effect when we increase the number of layers, and so oversmoothing is expected to happen at a shallow depth.
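The slow growth of this depth scale is easy to check numerically (we take base-10 logarithms, which matches the $N = 10^6 \Rightarrow \approx 8$ example above; the ratio is only meaningful up to constant factors):

```python
import math

def depth_bound(N, base=10.0):
    """log N / log(log N): the depth scale at which the denoising effect
    saturates and the mixing effect takes over."""
    logN = math.log(N, base)
    return logN / math.log(logN, base)

# Even across five orders of magnitude in N, the bound barely moves.
bounds = {N: depth_bound(N) for N in (10**4, 10**6, 10**9)}
```

For instance, moving from ten thousand to a billion nodes raises the bound by only a few layers.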
Our theory suggests that the optimal number of layers, $n^{\star}$, is at most $O(\log N / \log(\log N))$. For a more quantitative estimate, we can use Lemma 2 and Lemma 4 to compute bounds $z_{\mathrm{lower}}^{(n)}$ and $z_{\mathrm{upper}}^{(n)}$ for $z^{(n)} = \frac{1}{2} (\mu_2^{(n)} - \mu_1^{(n)}) / \sigma^{(n)}$ and use them to infer $n^{\star}$ and $n_0$, as defined in Section 3.2. See Appendix H for a detailed discussion.

Next, we investigate the effect of increasing the dimension of the node features $X$. So far, we have only considered the case with one-dimensional node features. The following proposition states that if features in each dimension are independent, increasing the input feature dimension decreases the Bayes error rate for a fixed $n$. The intuition is that when node features provide more evidence for classification, it is easier to classify nodes correctly.
Proposition 1. Let the input feature dimension be $d$, $X \in \mathbb{R}^{N \times d}$. Without loss of generality, suppose for node $v$ in $\mathcal{C}_i$, the initial node feature $X_v \sim \mathcal{N}([\mu_i]^d, \sigma^2 I_d)$ independently. Then the Bayes error rate is $1 - \Phi\left(\frac{\sqrt{d}}{2} \frac{\mu_2^{(n)} - \mu_1^{(n)}}{\sigma^{(n)}}\right) = 1 - \Phi\left(\sqrt{d}\, z^{(n)}\right)$, where $\Phi$ denotes the cumulative distribution function of the standard Gaussian distribution. Hence the Bayes error rate is decreasing in $d$, and as $d \to \infty$, it converges to 0.
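The monotonicity in Proposition 1 can be checked directly (the z-score value below is an arbitrary choice for illustration):

```python
import math

def bayes_error_d(z, d):
    """Bayes error 1 - Phi(sqrt(d) * z) with d independent feature dimensions
    that each carry the same per-dimension z-score z."""
    t = math.sqrt(d) * z
    return 1.0 - 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# With a fixed per-dimension z-score, the error drops toward 0 as d grows.
errors = [bayes_error_d(0.5, d) for d in (1, 4, 16, 64)]
```

Each extra independent dimension effectively multiplies the usable signal-to-noise ratio by $\sqrt{d}$, which is exactly what the $\sqrt{d}$ factor inside $\Phi$ expresses.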
# 4 THE EFFECTS OF PERSONALIZED PAGERANK ON OVERSMOOTHING

Our analysis framework in Section 3.3 can also be applied to GNNs with other message-passing schemes. Specifically, we can analyze the performance of Personalized Propagation of Neural Predictions (PPNP) and its approximate variant, Approximate PPNP (APPNP), which were proposed for alleviating oversmoothing while still making use of multi-hop information in the graph. The main idea is to use Personalized PageRank (PPR) or the approximate Personalized PageRank (APPR) in place of graph convolutions (Klicpera et al., 2019). Mathematically, the output of PPNP can be written as $h^{\mathrm{PPNP}} = \alpha (I_N - (1 - \alpha)(D^{-1}A))^{-1}X$, while APPNP computes $h^{\mathrm{APPNP}(n + 1)} = (1 - \alpha)(D^{-1}A)h^{\mathrm{APPNP}(n)} + \alpha X$ iteratively in $n$, where $I_N$ is the identity matrix of size $N$ and in both cases $\alpha$ is the teleportation probability. Then for nodes in $\mathcal{C}_i$, $i \in \{1, 2\}$, the node representations follow a Gaussian distribution $\mathcal{N}\big(\mu_i^{\mathrm{PPNP}}, (\sigma^{\mathrm{PPNP}})^2\big)$ after applying PPNP, or a Gaussian distribution $\mathcal{N}\big(\mu_i^{\mathrm{APPNP}(n)}, (\sigma^{\mathrm{APPNP}(n)})^2\big)$ after applying $n$ APPNP layers.
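In the same linear setting, both propagation schemes are a few lines of NumPy (function names ours), and iterating APPNP recovers the PPNP closed form, since the iteration is a contraction with factor $1 - \alpha$:

```python
import numpy as np

def ppnp(A, X, alpha):
    """Closed-form PPR propagation: alpha * (I - (1-alpha) D^{-1}A)^{-1} X."""
    N = A.shape[0]
    P = A / A.sum(axis=1, keepdims=True)          # row-stochastic D^{-1} A
    return alpha * np.linalg.solve(np.eye(N) - (1.0 - alpha) * P, X)

def appnp(A, X, alpha, n):
    """n steps of h <- (1-alpha) (D^{-1}A) h + alpha X."""
    P = A / A.sum(axis=1, keepdims=True)
    h = np.asarray(X, dtype=float)
    for _ in range(n):
        h = (1.0 - alpha) * (P @ h) + alpha * X   # teleport back with prob alpha
    return h

# On a small complete graph, 200 APPNP steps match the PPNP fixed point.
A = np.ones((4, 4)) - np.eye(4)
X = np.array([1.0, -1.0, 2.0, 0.0])
close = np.allclose(appnp(A, X, 0.2, 200), ppnp(A, X, 0.2))
```

Solving the linear system with `np.linalg.solve` avoids forming the explicit matrix inverse.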
|
| 142 |
+
|
| 143 |
+
We quantify the effects on the means and variances for PPNP and APPNP in the CSBM case. We can similarly use them to calculate the z-score of $(\mu_{1} + \mu_{2}) / 2$ and compare it to the one derived for the baseline GNN in Section 3. The key idea is that the PPR propagation can be written as a weighted average of the standard message-passing, i.e. $\alpha (I_N - (1 - \alpha)(D^{-1}A))^{-1} = \sum_{k = 0}^{\infty}(1-$ $\alpha)^{k}(D^{-1}A)^{k}$ (Andersen et al., 2006). We first state the resulting mixing effect measured by the difference between the two means.
Proposition 2. Fix $r > 0$ , $K \in \mathbb{N}$ . For PPNP, with probability at least $1 - O(1 / N^r)$ , there exists a constant $C(\alpha, r, K)$ such that
$$
\mu_{2}^{\mathrm{PPNP}} - \mu_{1}^{\mathrm{PPNP}} = \frac{p + q}{p + \frac{2 - \alpha}{\alpha} q} (\mu_{2} - \mu_{1}) + \epsilon,
$$
where the error term satisfies $|\epsilon| \leq C / \sqrt{N(p + q)} + (1 - \alpha)^{K + 1}$.
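The constant in Proposition 2 can be recovered from the PPR series directly: the mean difference lives in the eigendirection of the expected random-walk matrix with eigenvalue $\lambda_2 = \frac{p - q}{p + q}$, and summing $\alpha\sum_k (1-\alpha)^k \lambda_2^k$ reproduces the coefficient above. A numerical sketch with assumed values of $\alpha$, $p$, $q$:

```python
# Assumed illustrative parameters (not the paper's experimental values).
alpha, p, q = 0.1, 0.05, 0.01

# Coefficient from Proposition 2.
closed_form = (p + q) / (p + (2 - alpha) / alpha * q)

# Truncated PPR geometric series applied to lambda_2 = (p - q)/(p + q).
lam2 = (p - q) / (p + q)
series = sum(alpha * (1 - alpha) ** k * lam2 ** k for k in range(200))

assert abs(closed_form - series) < 1e-9
```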
Proposition 3. Let $r > 0$ . For APPNP, with probability at least $1 - O(1 / N^r)$ ,
$$
\mu_{2}^{\mathrm{APPNP}(n)} - \mu_{1}^{\mathrm{APPNP}(n)} = \left(\frac{p + q}{p + \frac{2 - \alpha}{\alpha} q} + \frac{(2 - 2\alpha) q}{\alpha p + (2 - \alpha) q} (1 - \alpha)^{n} \left(\frac{p - q}{p + q}\right)^{n}\right) (\mu_{2} - \mu_{1}) + \epsilon,
$$
where the error term $\epsilon$ is the same as the one defined in Theorem 1 for the case of $K = n$ .
Both $\frac{p + q}{p + \frac{2 - \alpha}{\alpha}q}$ and $\frac{(2 - 2\alpha)q}{\alpha p + (2 - \alpha)q}(1 - \alpha)\left(\frac{p - q}{p + q}\right)$ are monotone increasing in $\alpha$. Hence from Propositions 2 and 3, we see that with larger $\alpha$, meaning a higher probability of teleportation back to the root node at each step of message-passing, PPNP and APPNP will indeed make the difference between the means of the two classes larger: while the difference in means for the baseline GNN decays as $\left(\frac{p - q}{p + q}\right)^n$, the difference for PPNP/APPNP is lower bounded by a constant. This validates the original intuition behind PPNP and APPNP that, compared to the baseline GNN, they reduce the mixing effect of message-passing, as staying closer to the root node means aggregating less information from nodes of different classes. This advantage becomes more prominent when $n$ is larger, where the model performance is dominated by the mixing effect: as $n$ tends to infinity, the means converge to the same value for the baseline GNN, while their separation remains lower bounded for PPNP/APPNP.
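As a numerical sanity check of this comparison (with assumed values $p = 0.05$, $q = 0.01$, $\alpha = 0.1$, not taken from the paper's experiments): the PPNP coefficient grows with $\alpha$ and, being depth-independent, eventually dominates the exponentially decaying mean gap of the baseline GNN.

```python
# Assumed illustrative parameters with p > q (homophilous CSBM).
p, q, alpha = 0.05, 0.01, 0.1

# Depth-independent PPNP coefficient from Proposition 2.
ppnp_factor = (p + q) / (p + (2 - alpha) / alpha * q)

# Baseline GNN mean gap decays like ((p - q)/(p + q))^n.
baseline = [((p - q) / (p + q)) ** n for n in range(1, 30)]

# The PPNP coefficient is increasing in alpha ...
factors = [(p + q) / (p + (2 - a) / a * q) for a in (0.05, 0.1, 0.2, 0.4)]
assert all(f1 < f2 for f1, f2 in zip(factors, factors[1:]))
# ... and dominates the baseline's gap once n is large.
assert baseline[-1] < ppnp_factor
```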
However, the problem with the previous intuition is that PPNP and APPNP also reduce the denoising effect at each layer, as staying closer to the root node also means aggregating less information from new nodes that have not been encountered before. Hence, for an arbitrary graph, the outcome of the tradeoff after the reduction of both effects is not trivial to analyze. Here, we quantify the resulting denoising effect for CSBM graphs, measured by the variances. We denote by $(\sigma^{(n)})_{\mathrm{upper}}^2$ the variance upper bound for depth $n$ in Lemma 4.
Proposition 4. For PPNP, with probability at least $1 - O(1 / N)$ , it holds for all $1 \leq K \leq N$ that
$$
\max\left\{\frac{\alpha^{2}\min\{a, 2\}}{10}, \frac{1}{N}\right\}\sigma^{2} \leq (\sigma^{\mathrm{PPNP}})^{2} \leq \max\left\{\alpha^{2}\left(\sum_{k = 0}^{K}(1 - \alpha)^{k}\sqrt{(\sigma^{(k)})_{\mathrm{upper}}^{2}} + \frac{(1 - \alpha)^{K + 1}}{\alpha}\sigma\right)^{2}, \sigma^{2}\right\}.
$$
Proposition 5. For APPNP, with probability at least $1 - O(1 / N)$ , it holds for all $1 \leq n \leq N$ that
$$
\begin{array}{l}
\max\left\{\frac{\min\{a, 2\}}{10}\left(\alpha^{2} + \frac{(1 - \alpha)^{2n}}{(Np)^{n}}\right), \frac{1}{N}\right\}\sigma^{2} \leq (\sigma^{\mathrm{APPNP}(n)})^{2}, \\
(\sigma^{\mathrm{APPNP}(n)})^{2} \leq \min\left\{\left(\alpha\left(\sum_{k = 0}^{n - 1}(1 - \alpha)^{k}\sqrt{(\sigma^{(k)})_{\mathrm{upper}}^{2}}\right) + (1 - \alpha)^{n}\sqrt{(\sigma^{(n)})_{\mathrm{upper}}^{2}}\right)^{2}, \sigma^{2}\right\}.
\end{array}
$$
By comparing the lower bounds in Propositions 4 and 5 with that in Theorem 2, we see that PPR reduces the beneficial denoising effect of message-passing: for large or dense graphs, while the variances for the baseline GNN decay as $1 / (Np)^n$, the variances for PPNP/APPNP are lower bounded by the constant $\alpha^2\min \{a,2\} /10$. In total, the mixing effect is reduced by a factor of $\left(\frac{p - q}{p + q}\right)^n$, while the denoising effect is reduced by a factor of $1 / (Np)^n$. Hence PPR would cause a greater reduction in the denoising effect than the improvement in the mixing effect for graphs where $N$ and $p$ are large. This drawback would be especially notable at a shallow depth, where the denoising effect is supposed to dominate the mixing effect. As a result, APPNP would perform worse than the baseline GNN on these graphs in terms of the optimal classification performance.
We remark that in each APPNP layer, another way to interpret the term $\alpha X$ is to regard it as a residual connection to the initial representation $X$ (Chen et al., 2020b). Thus, our theory also validates the empirical observation that adding initial residual connections allows us to build very deep models without catastrophic oversmoothing. However, our results suggest that initial residual connections do not guarantee an improvement in model performance by themselves.
# 5 EXPERIMENTS
In this section, we first demonstrate our theoretical results in previous sections on synthetic CSBM data. Then we discuss the role of optimizing weights $W^{(k)}$ in GNN layers in the occurrence of oversmoothing through both synthetic data and the three widely used benchmarks: Cora, CiteSeer and PubMed (Yang et al., 2016). Our results highlight the fact that the oversmoothing phenomenon observed in practice can be exacerbated by the difficulty of optimizing weights in deep GNN models. More details about the experiments are provided in Appendix J.
# 5.1 THE EFFECT OF GRAPH TOPOLOGY ON OVERSMOOTHING
We first show how graph topology affects the occurrence of oversmoothing and the effects of PPR. We randomly generated synthetic graph data from CSBM $(N = 2000, p, q = 0.0038, \mu_1 = 1, \mu_2 = 1.5, \sigma^2 = 1)$ . We used $60\% / 20\% / 20\%$ random splits and ran GNN and APPNP with $\alpha = 0.1$ . For results in Figure 2, we report averages over 5 graphs and for results in Figure 3, we report averages over 5 runs.
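For reference, sampling from the CSBM used here amounts to drawing a two-block random graph together with Gaussian node features. A minimal stdlib-only sketch (the helper `sample_csbm` and the small values $N = 200$, $p = 0.2$ are our own assumptions for a quick run; the experiments above use $N = 2000$):

```python
import random

def sample_csbm(N, p, q, mu1, mu2, sigma):
    """Sample a two-community CSBM graph with 1-d Gaussian features."""
    labels = [0] * (N // 2) + [1] * (N // 2)
    edges = []
    for i in range(N):
        for j in range(i + 1, N):
            # Intra-community edges with prob. p, inter-community with prob. q.
            prob = p if labels[i] == labels[j] else q
            if random.random() < prob:
                edges.append((i, j))
    mus = (mu1, mu2)
    features = [random.gauss(mus[labels[i]], sigma) for i in range(N)]
    return labels, edges, features

random.seed(0)
labels, edges, features = sample_csbm(200, 0.2, 0.0038, 1.0, 1.5, 1.0)
# With p >> q, intra-community edges heavily outnumber inter-community ones.
intra = sum(labels[i] == labels[j] for i, j in edges)
assert intra > len(edges) - intra
```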
In Figure 2, we study how the strength of community structure affects oversmoothing. We can see that when graphs have a stronger community structure in terms of a higher intra-community edge density $p$ , they would benefit more from repeated message-passing. As a result, given the same set of node features, oversmoothing would happen later and a classifier could achieve better classification performance. A similar trend can also be observed in Figure 4A. Our theory predicts $n^{\star}$ and $n_0$ , as defined in Section 3.2, with high accuracy.
In Figure 3, we compare APPNP and GNN under different graph topologies. In all three cases, APPNP manifests its advantage of reducing the mixing effect compared to GNN when the number
of layers is large, i.e. when the undesirable mixing effect is dominant. However, as Figure 3B,C show, when we have large graphs or graphs with strong community structure, APPNP's disadvantage of concurrently reducing the denoising effect is more severe, particularly when the number of layers is small. As a result, APPNP's optimal performance is worse than the baseline GNN. These observations accord well with our theoretical discussions in Section 4.
Figure 2: How the strength of community structure affects oversmoothing. When graphs have stronger community structure (i.e. higher $a$ ), oversmoothing would happen later. Our theory (gray bar) predicts the optimal number of layers $n^{\star}$ in practice (blue) with high accuracy (A). Given the same set of features, a classifier has significantly better performance on graphs with higher $a$ (B,C).
Figure 3: Comparison of node classification performance between the baseline GNN and APPNP. The performance of APPNP is more robust when we increase the model depth. However, compared to the base case (A), APPNP tends to have worse optimal performance than GNN on graphs with larger size (B) or stronger community structure (C), as predicted by the theory.
# 5.2 THE EFFECT OF OPTIMIZING WEIGHTS ON OVERSMOOTHING
We investigate how adding learnable weights $W^{(k)}$ in each GNN layer affects the node classification performance in practice. Consider the case where all the GNN layers have width one, meaning that the learnable weight matrix $W^{(k)}$ in each layer is a scalar. In theory, the effects of adding such weights on the means and the variances cancel each other, so they affect neither the z-score of our interest nor the classification performance. Figure 4A shows the value of $n_0$ predicted by the z-score, together with the actual $n_0$, measured by test accuracy, both with and without learnable weights. The results are averages over 5 graphs for each case. We empirically observe that GNNs with weights are much harder to train, and the difficulty increases with the number of layers. As a result, $n_0$ is smaller for the model with weights, and the gap widens when $n_0$ is supposed to be larger, possibly due to the greater difficulty of optimizing deeper architectures (Shamir, 2019).
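The width-one cancellation can be stated concretely: a scalar weight $w > 0$ rescales the class-mean gap and the standard deviation identically, so the z-score is invariant. A toy sketch (illustrative numbers, not the paper's):

```python
import math

def z_score(mu1, mu2, sigma):
    """Z-score of the decision boundary (mu1 + mu2)/2, i.e. the mean gap in sigma units."""
    return (mu2 - mu1) / sigma

# Multiplying representations by a scalar weight w scales the mean gap and
# the standard deviation by the same factor, leaving the z-score unchanged.
mu1, mu2, sigma = 1.0, 1.5, 0.3
for w in (0.1, 2.0, 7.5):
    assert math.isclose(z_score(w * mu1, w * mu2, w * sigma),
                        z_score(mu1, mu2, sigma))
```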
To relieve this potential optimization problem, we increase the width of each GNN layer (Du & Hu, 2019). Figure 4B,C present the training and testing accuracies of GNNs of increasing width with respect to the number of layers on a specific synthetic example. The results are averages over 5 runs. We observe that increasing the width of the network mitigates the difficulty of optimizing weights, and the performance after adding weights gradually matches the performance without weights. This empirically validates our claim in Section 3.2 that, beyond empirical optimization issues, adding learnable weights should not affect the representation power of GNNs in terms of node classification accuracy on CSBM graphs.
In practice, as we build deeper GNNs for more complicated tasks on real graph data, the difficulty of optimizing weights in deep GNN models persists. We revisit the multi-class node classification task on the three widely used benchmark datasets: Cora, CiteSeer and PubMed (Yang et al., 2016). We compare the performance of GNN without weights against the performance of GNN with weights
Figure 4: The effect of optimizing weights on oversmoothing using synthetic CSBM data. Compared to the GNN without weights, oversmoothing happens much sooner after adding learnable weights in each GNN layer, although these two models have the same representation power (A). As we increase the width of each GNN layer, the performance of GNN with weights is able to gradually match that of GNN without weights (B,C).
Figure 5: The effect of optimizing weights on oversmoothing using real-world benchmark datasets. Adding learnable weights in each GNN layer does not improve node classification performance but rather leads to optimization difficulty.
in terms of test accuracy. We used $60\% / 20\% / 20\%$ random splits, as in Wang & Leskovec (2020) and Huang et al. (2021), and report averages over 5 runs. Figure 5 shows the same kind of difficulty in optimizing deeper models with learnable weights in each GNN layer as we have seen for the synthetic data. Increasing the width of each GNN layer still mitigates the problem for shallower models, but beyond 10 layers the difficulty grows to the point that simply increasing the width cannot solve it. As a result, although GNNs with and without weights are on par with each other when both are shallow, the former has much worse performance when the number of layers goes beyond 10. These results suggest that the oversmoothing phenomenon observed in practice is aggravated by the difficulty of optimizing deep GNN models.
# 6 DISCUSSION
Designing more powerful GNNs requires deeper understanding of current GNNs—how they work and why they fail. In this paper, we precisely characterize the mechanism of oversmoothing via a non-asymptotic analysis and justify why oversmoothing happens at a shallow depth. Our analysis suggests that oversmoothing happens once the undesirable mixing effect homogenizing node representations in different classes starts to dominate the desirable denoising effect homogenizing node representations in the same class. Due to the small diameter characteristic of real graphs, the turning point of the tradeoff will occur after only a few rounds of message-passing, resulting in oversmoothing in shallow GNNs.
It is worth noting that oversmoothing became an important problem in the literature partly because typical Convolutional Neural Networks (CNNs) used for image processing are much deeper than GNNs (He et al., 2016). As such, researchers have been trying to use methods that have previously worked for CNNs to make current GNNs deeper (Li et al., 2019; Chen et al., 2020b). However, images can be regarded as giant grids with high diameter. This contrasts with real-world graphs, which often have much smaller diameters. Hence we believe that building more powerful GNNs will require us to think beyond CNNs and images and take advantage of the structure in real graphs.
There are many natural extensions of our work and possible directions for further research. First, while our use of the CSBM provided important insights into GNNs, it will be helpful to incorporate other properties of real graphs, such as degree heterogeneity, into the analysis. Additionally, further research can focus on the learning perspective of the problem.
# ACKNOWLEDGMENTS
This research has been supported by a Vannevar Bush Fellowship from the Office of the Secretary of Defense. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Department of Defense or the U.S. Government.
# REFERENCES
Emmanuel Abbe. Community detection and stochastic block models. Foundations and Trends in Communications and Information Theory, 14:1-162, 2018.

Reid Andersen, Fan Chung Graham, and Kevin J. Lang. Local graph partitioning using pagerank vectors. In FOCS, 2006.

Afonso S. Bandeira and Ramon van Handel. Sharp nonasymptotic bounds on the norm of random matrices with independent entries. The Annals of Probability, 44(4):2479-2506, 2016.

Aseem Baranwal, Kimon Fountoulakis, and Aukosh Jagannath. Graph convolution for semi-supervised classification: Improved linear separability and out-of-distribution generalization. In ICML, 2021.

Aseem Baranwal, Kimon Fountoulakis, and Aukosh Jagannath. Effects of graph convolutions in deep networks. ArXiv, abs/2204.09297, 2022.

Peter Battaglia, Razvan Pascanu, Matthew Lai, Danilo Jimenez Rezende, and Koray Kavukcuoglu. Interaction networks for learning about objects, relations and physics. In NeurIPS, 2016.

Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press, 2013.

Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. In ICLR, 2014.

Deli Chen, Yankai Lin, Wei Li, Peng Li, Jie Zhou, and Xu Sun. Measuring and relieving the oversmoothing problem for graph neural networks from the topological view. In AAAI, 2020a.

Ming Chen, Zhewei Wei, Zengfeng Huang, Bolin Ding, and Yaliang Li. Simple and deep graph convolutional networks. In ICML, 2020b.

Zhengdao Chen, Lisha Li, and Joan Bruna. Supervised community detection with line graph neural networks. In ICLR, 2019.

Fan Chung and Linyuan Lu. Concentration inequalities and martingale inequalities: a survey. Internet Mathematics, 3(1):79-127, 2006.

Fan R. K. Chung. Graph theory in the information age. Notices of the AMS, 57(6):726-732, 2010.

Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In NeurIPS, 2016.

Yash Deshpande, Andrea Montanari, Elchanan Mossel, and Subhabrata Sen. Contextual stochastic block models. In NeurIPS, 2018.

Luc Devroye, László Györfi, and Gábor Lugosi. A Probabilistic Theory of Pattern Recognition. Stochastic Modelling and Applied Probability, 1996.

Simon Shaolei Du and Wei Hu. Width provably matters in optimization for deep linear neural networks. In ICML, 2019.

David Kristjanson Duvenaud, Dougal Maclaurin, Jorge Aguilera-Iparraguirre, Rafael Gomez-Bombarelli, Timothy D. Hirzel, Alán Aspuru-Guzik, and Ryan P. Adams. Convolutional networks on graphs for learning molecular fingerprints. In NeurIPS, 2015.

David A. Easley and Jon M. Kleinberg. Networks, Crowds, and Markets: Reasoning about a Highly Connected World. Cambridge University Press, 2010.

Matthias Fey and Jan E. Lenssen. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds, 2019.

Kimon Fountoulakis, Amit Levi, Shenghao Yang, Aseem Baranwal, and Aukosh Jagannath. Graph attention retrospective. ArXiv, abs/2202.13060, 2022.

Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. In ICML, 2017.

Michelle Girvan and Mark E. J. Newman. Community structure in social and biological networks. Proceedings of the National Academy of Sciences, 99:7821-7826, 2002.

M. Gori, G. Monfardini, and F. Scarselli. A new model for learning in graph domains. In IJCNN, 2005.

Fan Chung Graham and Linyuan Lu. The diameter of sparse random graphs. Advances in Applied Mathematics, 26:257-279, 2001.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.

Qian Huang, Horace He, Abhay Singh, Ser-Nam Lim, and Austin R. Benson. Combining label propagation and simple models out-performs graph neural networks. In ICLR, 2021.

Wenbing Huang, Yu Rong, Tingyang Xu, Fuchun Sun, and Junzhou Huang. Tackling oversmoothing for general graph convolutional networks. arXiv preprint arXiv:2008.09864, 2020.

Nicolas Keriven. Not too little, not too much: a theoretical analysis of graph (over)smoothing. In NeurIPS, 2022.

Thomas Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In ICLR, 2017.

Johannes Klicpera, Aleksandar Bojchevski, and Stephan Günnemann. Predict then propagate: Graph neural networks meet personalized pagerank. In ICLR, 2019.

Guohao Li, Matthias Müller, Ali Thabet, and Bernard Ghanem. DeepGCNs: Can GCNs go as deep as CNNs? In ICCV, 2019.

Qimai Li, Zhichao Han, and Xiao-Ming Wu. Deeper insights into graph convolutional networks for semi-supervised learning. In AAAI, 2018.

Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard S. Zemel. Gated graph sequence neural networks. In ICLR, 2016.

Meng Liu, Zhengyang Wang, and Shuiwang Ji. Non-local graph neural networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.

Linyuan Lu and Xing Peng. Spectra of edge-independent random graphs. The Electronic Journal of Combinatorics, 20:27, 2013.

Yao Ma, Xiaorui Liu, Neil Shah, and Jiliang Tang. Is homophily a necessity for graph neural networks? In ICLR, 2022.

Kenta Oono and Taiji Suzuki. Graph neural networks exponentially lose expressive power for node classification. In ICLR, 2020.

Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An imperative style, high-performance deep learning library. In NeurIPS, 2019.

Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20:61-80, 2009.

Ohad Shamir. Exponential convergence time of gradient descent for one-dimensional deep linear neural networks. In COLT, 2019.

Hongwei Wang and Jure Leskovec. Unifying graph convolutional neural networks and label propagation. ArXiv, abs/2002.06755, 2020.

Rongzhe Wei, Haoteng Yin, J. Jia, Austin R. Benson, and Pan Li. Understanding non-linearity in graph neural networks from the Bayesian-inference perspective. ArXiv, abs/2207.11311, 2022.

Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and Philip S. Yu. A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 32:4-24, 2019.

Keyulu Xu, Chengtao Li, Yonglong Tian, Tomohiro Sonobe, Ken-ichi Kawarabayashi, and Stefanie Jegelka. Representation learning on graphs with jumping knowledge networks. In ICML, 2018.

Zhilin Yang, William W. Cohen, and Ruslan Salakhutdinov. Revisiting semi-supervised learning with graph embeddings. In ICML, 2016.

Lingxiao Zhao and Leman Akoglu. PairNorm: Tackling oversmoothing in GNNs. In ICLR, 2020.

# A PROOF OF LEMMA 1
Following the definition of the Bayes optimal classifier (Devroye et al., 1996),
$$
\mathcal{B}(x) = \operatorname*{arg\,max}_{i = 1,2} \; \mathbb{P}[y = i \mid x],
$$
we get that the Bayes optimal classifier has a linear decision boundary $\mathcal{D} = (\mu_1 + \mu_2) / 2$ such that the decision rule is
$$
\left\{ \begin{array}{ll} y = 1 & \text{if } x \leq \mathcal{D} \\ y = 2 & \text{if } x > \mathcal{D}. \end{array} \right.
$$
The probability of misclassification can be written as
$$
\begin{array}{l}
\mathbb{P}[y = 1, x > \mathcal{D}] + \mathbb{P}[y = 2, x \leq \mathcal{D}] = \mathbb{P}[x > \mathcal{D} \mid y = 1]\, \mathbb{P}[y = 1] + \mathbb{P}[x \leq \mathcal{D} \mid y = 2]\, \mathbb{P}[y = 2] \\
\quad = \frac{1}{2}\left(\mathbb{P}[x > \mathcal{D} \mid y = 1] + \mathbb{P}[x \leq \mathcal{D} \mid y = 2]\right).
\end{array}
$$
When $\mathcal{D} = (\mu_1 + \mu_2) / 2$ , the expression is called the Bayes error rate, which is the minimal probability of misclassification among all classifiers. Geometrically, it is easy to see that the Bayes error rate equals $\frac{1}{2} S$ , where $S$ is the overlapping area between the two Gaussian distributions $\mathcal{N}\left(\mu_1^{(n)},(\sigma^{(n)})^2\right)$ and $\mathcal{N}\left(\mu_2^{(n)},(\sigma^{(n)})^2\right)$ . Hence one can use the z-score of $(\mu_1 + \mu_2) / 2$ with respect to either of the two Gaussian distributions to directly calculate the Bayes error rate.
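This characterization is easy to verify by simulation. A Monte Carlo sketch of the one-dimensional case (with assumed parameters $\mu_1 = 0$, $\mu_2 = 1$, $\sigma = 1$): thresholding at $\mathcal{D} = (\mu_1 + \mu_2)/2$ misclassifies with probability $1 - \Phi\!\left(\frac{\mu_2 - \mu_1}{2\sigma}\right)$.

```python
import math
import random

mu1, mu2, sigma = 0.0, 1.0, 1.0
D = (mu1 + mu2) / 2  # Bayes decision boundary

random.seed(0)
trials, wrong = 100_000, 0
for _ in range(trials):
    y = random.random() < 0.5           # class 2 with probability 1/2
    x = random.gauss(mu2 if y else mu1, sigma)
    predicted = x > D                   # Bayes decision rule
    wrong += predicted != y

# Theoretical Bayes error rate: 1 - Phi((mu2 - mu1)/(2 sigma)).
theory = 1 - 0.5 * (1 + math.erf((mu2 - mu1) / (2 * sigma) / math.sqrt(2)))
assert abs(wrong / trials - theory) < 6e-3
```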
# B PROOF OF LEMMA 2
Under the heuristic assumption $D^{-1}A\approx \mathbb{E}[D]^{-1}\mathbb{E}[A]$ , we can write
$$
\mu_{1}^{(1)} = \frac{p\mu_{1} + q\mu_{2}}{p + q}, \quad \mu_{2}^{(1)} = \frac{p\mu_{2} + q\mu_{1}}{p + q}
$$
$$
\mu_{1}^{(k)} = \frac{p\mu_{1}^{(k - 1)} + q\mu_{2}^{(k - 1)}}{p + q}, \quad \mu_{2}^{(k)} = \frac{p\mu_{2}^{(k - 1)} + q\mu_{1}^{(k - 1)}}{p + q}, \quad \text{for all } k \in \mathbb{N}.
$$
Writing recursively, we get that
$$
\mu_{1}^{(n)} = \frac{(p + q)^{n} + (p - q)^{n}}{2(p + q)^{n}} \mu_{1} + \frac{(p + q)^{n} - (p - q)^{n}}{2(p + q)^{n}} \mu_{2},
$$
$$
\mu_{2}^{(n)} = \frac{(p + q)^{n} + (p - q)^{n}}{2(p + q)^{n}} \mu_{2} + \frac{(p + q)^{n} - (p - q)^{n}}{2(p + q)^{n}} \mu_{1}.
$$
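The closed form can be checked against the recursion directly (a sketch with assumed values of $p$, $q$, $\mu_1$, $\mu_2$):

```python
# Assumed illustrative parameters.
p, q, mu1, mu2 = 0.05, 0.01, 1.0, 1.5
n = 7

# Iterate the mean recursion n times.
m1, m2 = mu1, mu2
for _ in range(n):
    m1, m2 = (p * m1 + q * m2) / (p + q), (p * m2 + q * m1) / (p + q)

# Closed-form coefficients.
c_plus = ((p + q) ** n + (p - q) ** n) / (2 * (p + q) ** n)
c_minus = ((p + q) ** n - (p - q) ** n) / (2 * (p + q) ** n)
assert abs(m1 - (c_plus * mu1 + c_minus * mu2)) < 1e-12
assert abs(m2 - (c_plus * mu2 + c_minus * mu1)) < 1e-12
```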
# C PROOF OF THEOREM 1
We use $\| \cdot \|_2$ to denote the spectral norm, $\| A \|_2 = \max_{x: \| x \|_2 = 1} \| Ax \|_2$. We denote $\bar{A} = \mathbb{E}[A]$, $\bar{D} = \mathbb{E}[D]$, $d = A \mathbb{1}_N$, and $\bar{d} = \mathbb{E}[d_i]$, the common expected degree of every node. We further define the following relevant vectors:
$$
w_{1} := \mathbb{1}_{N}, \quad w_{2} := \left( \begin{array}{c} \mathbb{1}_{N/2} \\ -\mathbb{1}_{N/2} \end{array} \right), \quad \mu := \left( \begin{array}{c} \mu_{1}\mathbb{1}_{N/2} \\ \mu_{2}\mathbb{1}_{N/2} \end{array} \right).
$$
The quantity of interest is $\mu_2^{(k)} - \mu_1^{(k)} = \frac{1}{N / 2} w_2^\top (D^{-1}A)^k\mu$.
# C.1 AUXILIARY RESULTS
We record some properties of the adjacency matrices:
1. $D^{-1}A$ and $\bar{D}^{-1}\bar{A}$ have an eigenvalue of 1, corresponding to the (right) eigenvector $w_{1}$ .
2. If $J_{n} = \mathbb{1}_{n}\mathbb{1}_{n}^{\top}$, where $\mathbb{1}_n$ is the all-ones vector of length $n$, then
$$
\bar{A} = \left( \begin{array}{cc} pJ_{N/2} & qJ_{N/2} \\ qJ_{N/2} & pJ_{N/2} \end{array} \right).
$$
3. $\bar{D} = \frac{N}{2} (p + q)I_N$
4. $\mu = \alpha w_{1} + \beta w_{2}$ , where $\alpha = \frac{\mu_1 + \mu_2}{2}$ and $\beta = \frac{\mu_1 - \mu_2}{2}$ .
To control the degree matrix $D^{-1}$, we will use the following standard Chernoff bound (Chung & Lu, 2006):
Lemma 5 (Chernoff Bound). Let $X_{1},\ldots ,X_{n}$ be independent, $S\coloneqq \sum_{i = 1}^{n}X_{i}$ , and $\bar{S} = \mathbb{E}[S]$ . Then for all $\varepsilon >0$
$$
\mathbb{P}(S \leq \bar{S} - \varepsilon) \leq e^{-\varepsilon^{2} / (2\bar{S})},
$$
$$
\mathbb{P}(S \geq \bar{S} + \varepsilon) \leq e^{-\varepsilon^{2} / (2(\bar{S} + \varepsilon / 3))}.
$$
We can thus derive uniform upper and lower bounds on the degree of every vertex:
Corollary 1. For every $r > 0$ , there is a constant $C(r)$ such that whenever $\bar{d} \geq C\log N$ , with probability at least $1 - N^{-r}$ ,
$$
\frac{1}{2}\bar{d} \leq d_{i} \leq \frac{3}{2}\bar{d}, \quad \text{for all } 1 \leq i \leq N.
$$
Consequently, with probability at least $1 - N^{-r}$ , $\| D^{-1} - \bar{D}^{-1}\|_2 \leq C / \bar{d}$ for some $C$ .
Proof. By applying Lemma 5 and a union bound, all degrees are within $\bar{d}/2$ of their expectations with probability at least $1 - e^{-\bar{d} /8 + \log N}$. Taking $C = 8r + 8$ yields the desired lower bound. An analogous proof works for the upper bound.
To show the latter part, write
$$
\| D^{-1} - \bar{D}^{-1} \|_{2} = \max_{1 \leq i \leq N} \frac{|d_{i} - \bar{d}|}{d_{i}\bar{d}}.
$$
Using the above bounds, the numerator for each $i$ is at most $\bar{d}/2$ and the denominator for each $i$ is at least $\bar{d}^2/2$, with probability at least $1 - N^{-r}$. Combining the bounds yields the claim.
We will also need a result on the concentration of random adjacency matrices, which is a corollary of the sharp bounds derived in Bandeira & Van Handel (2016).
Lemma 6 (Concentration of Adjacency Matrix). For every $r > 0$ , there is a constant $C(r)$ such that whenever $\bar{d} \geq \log N$ , with probability at least $1 - N^{-r}$ ,
$$
\| A - \bar{A} \|_{2} < C\sqrt{\bar{d}}.
$$
Proof. By Corollary 3.12 of Bandeira & Van Handel (2016), there is a constant $\kappa$ such that
$$
\mathbb{P}(\| A - \bar{A} \|_{2} \geq 3\sqrt{\bar{d}} + t) \leq e^{-t^{2}/\kappa + \log N}.
$$
Setting $t = \sqrt{(1 + r)\kappa \log N}$ and $C = 3 + \sqrt{(1 + r)\kappa}$ suffices to achieve the desired bound, using $\bar{d} \geq \log N$.
# C.2 SHARP CONCENTRATION OF THE RANDOM WALK OPERATOR $D^{-1}A$
In this section, we aim to show the following concentration result for the random walk operator $D^{-1}A$ :
Theorem 3. Suppose the edge probabilities are $\omega\left(\frac{\log N}{N}\right)$ , and let $\bar{d}$ be the average degree. For any $r$ , there exists a constant $C$ such that for sufficiently large $N$ , with probability at least $1 - O(N^{-r})$ ,
$$
\| D^{-1}A - \bar{D}^{-1}\bar{A} \|_{2} \leq \frac{C}{\sqrt{\bar{d}}}.
$$
Proof. We decompose the error
$$
E = D^{-1}A - \bar{D}^{-1}\bar{A} = D^{-1}(A - \bar{A}) + (D^{-1} - \bar{D}^{-1})\bar{A} = T_{1} + T_{2},
$$
where
$$
T_{1} = D^{-1}(A - \bar{A}), \quad T_{2} = (D^{-1} - \bar{D}^{-1})\bar{A}.
$$
We bound the two terms separately.
Bounding $T_{1}$ : By Corollary 1, $\| D^{-1}\|_{2} = \max_{i}1 / d_{i}\leq 2 / \bar{d}$ with probability $1 - N^{-r}$ . Combining this with Lemma 6, we see that with probability at least $1 - 2N^{-r}$ ,
$$
\| D^{-1}(A - \bar{A}) \|_{2} \leq \| D^{-1} \|_{2}\, \| A - \bar{A} \|_{2} \leq \frac{C}{\sqrt{\bar{d}}}
$$
for some $C$ depending only on $r$ .
Bounding $T_{2}$ : Similar to Lu & Peng (2013), we bound $T_{2}$ by exploiting the low-rank structure of the expected adjacency matrix, $\bar{A}$ . Recall that $\bar{A}$ has a special block form. The eigendecomposition of $\bar{A}$ is thus
$$
\bar{A} = \sum_{j = 1}^{2} \lambda_{j} w^{(j)} (w^{(j)})^{\top},
$$
|
| 452 |
+
|
| 453 |
+
where $w^{(1)} = \frac{1}{\sqrt{N}}\mathbb{1}_N, \lambda_1 = \frac{N(p + q)}{2}, w^{(2)} = \frac{1}{\sqrt{N}}\left(\frac{\mathbb{1}_{N / 2}}{-\mathbb{1}_{N / 2}}\right), \lambda_2 = \frac{N(p - q)}{2}$ .
Using the definition of the spectral norm, we can bound $\| T_2\| _2$ as

$$
\begin{array}{l} \| T _ {2} \| _ {2} \leq \max _ {\| x \| = 1} \| (D ^ {- 1} - \bar {D} ^ {- 1}) \bar {A} x \| _ {2} \\ \leq \max _ {\alpha \in \mathbb {R} ^ {2}, \| \alpha \| = 1} \| (D ^ {- 1} - \bar {D} ^ {- 1}) \bar {A} (\alpha_ {1} w ^ {(1)} + \alpha_ {2} w ^ {(2)}) \| _ {2}. \end{array}
$$

Note that when $\| \alpha \|_2 = 1$,

$$
\begin{array}{l} \| (D ^ {- 1} - \bar {D} ^ {- 1}) \bar {A} (\alpha_ {1} w ^ {(1)} + \alpha_ {2} w ^ {(2)}) \| _ {2} ^ {2} = \sum_ {i = 1} ^ {N} \left(\frac {1}{d _ {i}} - \frac {1}{\bar {d}}\right) ^ {2} \left(\sum_ {j = 1} ^ {2} \lambda_ {j} \alpha_ {j} w _ {i} ^ {(j)}\right) ^ {2} \\ \leq \sum_ {i = 1} ^ {N} \left(\frac {1}{d _ {i}} - \frac {1}{\bar {d}}\right) ^ {2} \sum_ {j = 1} ^ {2} \lambda_ {j} ^ {2} (w _ {i} ^ {(j)}) ^ {2} \end{array}
$$
using Cauchy-Schwarz. Since $|w_i^{(j)}| \leq \frac{1}{\sqrt{N}}$ for all $i, j$, the second summation can be bounded by $\frac{1}{N} \sum_{j=1}^{2} \lambda_j^2$. Overall, the upper bound is now

$$
\frac {1}{N} \sum_ {i = 1} ^ {N} \frac {(d _ {i} - \bar {d}) ^ {2}}{(d _ {i} \bar {d}) ^ {2}} \sum_ {j = 1} ^ {2} \lambda_ {j} ^ {2}.
$$

Under the event of Corollary 1, $d_{i} \geq C\bar{d}$ for some $C < 1$. Under our setup, we also have $\lambda_1^2 = \bar{d}^2$, $\lambda_2^2 \leq \bar{d}^2$. This means that the upper bound is

$$
\frac {1}{C ^ {2} \bar {d} ^ {2} N} \| d - \bar {d} \mathbb {1} _ {N} \| _ {2} ^ {2},
$$

where $d$ is the vector of node degrees. It remains to show that $\frac{1}{N}\| d - \bar{d}\mathbb{1}_N\| _2^2 = O(\bar{d})$. To do this, we use a form of Talagrand's concentration inequality, given in Boucheron et al. (2013). Since the function $\frac{1}{\sqrt{N}}\| d - \bar{d}\mathbb{1}_N\| _2 = \frac{1}{\sqrt{N}}\| (A - \bar{d} I_N)\mathbb{1}_N\| _2$ is a convex, 1-Lipschitz function of $A$, Theorem 6.10 from Boucheron et al. (2013) guarantees that for any $t > 0$,
$$
\mathbb {P} \big (\frac {1}{\sqrt {N}} \| d - \bar {d} \mathbb {1} _ {N} \| _ {2} > \mathbb {E} \big [ \frac {1}{\sqrt {N}} \| d - \bar {d} \mathbb {1} _ {N} \| _ {2} \big ] + t \big) \leq e ^ {- t ^ {2} / 2}.
$$

Using Jensen's inequality,

$$
\begin{array}{l} \mathbb {E} [ \| d - \bar {d} \mathbb {1} _ {N} \| _ {2} ] \leq \sqrt {\mathbb {E} [ \| d - \bar {d} \mathbb {1} _ {N} \| _ {2} ^ {2} ]} \\ = \sqrt {\sum_ {i = 1} ^ {N} \operatorname {Var} \left(d _ {i}\right)} = \sqrt {N \operatorname {Var} \left(d _ {1}\right)} \leq \sqrt {N \bar {d}}. \end{array}
$$

If $\bar{d} = \omega (\log N)$, we can guarantee that

$$
\frac {1}{\sqrt {N}} \| d - \bar {d} \mathbb {1} _ {N} \| _ {2} \leq C \sqrt {\bar {d}}
$$

with probability at least $1 - e^{-(C - 1)^2\bar{d} /2} = 1 - O(N^{-r})$ for an appropriate constant $C$. Thus we have shown that with high probability, $T_{2} = O(1 / \sqrt{\bar{d}})$, which proves the claim.
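Theorem 3 can be checked numerically. The sketch below, with illustrative parameters of our own choosing (not taken from the paper), draws one two-equal-block SBM and compares the spectral-norm error of the random walk operator to $1/\sqrt{\bar{d}}$.

```python
# Numerical sketch of Theorem 3: the random-walk operator D^{-1}A of a
# two-equal-block SBM stays within O(1/sqrt(dbar)) of its population
# counterpart Dbar^{-1} Abar in spectral norm.  N, p, q are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N, p, q = 1000, 0.05, 0.01
half = N // 2

# sample a symmetric adjacency matrix: probability p within blocks, q across
probs = np.full((N, N), q)
probs[:half, :half] = p
probs[half:, half:] = p
A = np.triu((rng.random((N, N)) < probs).astype(float), 1)
A = A + A.T

dbar = N * (p + q) / 2                       # expected degree
Abar = np.full((N, N), q)
Abar[:half, :half] = p
Abar[half:, half:] = p

P = A / A.sum(axis=1, keepdims=True)         # D^{-1} A
Pbar = Abar / dbar                           # Dbar^{-1} Abar
err = np.linalg.norm(P - Pbar, 2)
print(err, 1 / np.sqrt(dbar))
```

In this regime the observed error is a small constant multiple of $1/\sqrt{\bar{d}}$, as the theorem predicts.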
# C.3 PROOF OF THEOREM 1

Fix $r$ and $K$. We wish to bound

$$
\frac {1}{N / 2} w _ {2} ^ {\top} \left(\left(D ^ {- 1} A\right) ^ {k} - \left(\bar {D} ^ {- 1} \bar {A}\right) ^ {k}\right) \mu .
$$
By the first property of adjacency matrices in the auxiliary results, it suffices to bound

$$
\beta \frac {1}{N / 2} w _ {2} ^ {\top} ((D ^ {- 1} A) ^ {k} - (\bar {D} ^ {- 1} \bar {A}) ^ {k}) w _ {2},
$$

where $\beta = \frac{\mu_1 - \mu_2}{2}$. We will show inductively that there is a $C$ such that for every $k = 1, \dots, K$,

$$
\left\| \left(D ^ {- 1} A\right) ^ {k} - \left(\bar {D} ^ {- 1} \bar {A}\right) ^ {k} \right\| _ {2} \leq C / \sqrt {\bar {d}}.
$$

If this is true, then Cauchy-Schwarz gives
$$
\begin{array}{l} \beta \frac {1}{N / 2} w _ {2} ^ {\top} ((D ^ {- 1} A) ^ {k} - (\bar {D} ^ {- 1} \bar {A}) ^ {k}) w _ {2} \leq \beta \frac {1}{N / 2} \| w _ {2} \| _ {2} \| (D ^ {- 1} A) ^ {k} - (\bar {D} ^ {- 1} \bar {A}) ^ {k} \| _ {2} \| w _ {2} \| _ {2} \\ \leq C / \sqrt {\bar {d}}. \end{array}
$$

By Theorem 3, we have that with probability at least $1 - O(N^{-r})$,

$$
\left\| D ^ {- 1} A - \bar {D} ^ {- 1} \bar {A} \right\| _ {2} \leq \frac {C}{\sqrt {\bar {d}}}.
$$

So $D^{-1}A = \bar{D}^{-1}\bar{A} + J$ where $\| J\| \leq C / \sqrt{\bar{d}}$. Iterating, we have
$$
\left\| \left(D ^ {- 1} A\right) ^ {k} - (\bar {D} ^ {- 1} \bar {A}) ^ {k} \right\| _ {2} = \left\| \left(D ^ {- 1} A\right) ^ {k - 1} D ^ {- 1} A - (\bar {D} ^ {- 1} \bar {A}) ^ {k} \right\| _ {2} \tag {1}
$$

Inductively, $(D^{-1}A)^{k - 1} = (\bar{D}^{-1}\bar{A})^{k - 1} + H$ where $\| H\| _2\leq C / \sqrt{\bar{d}}$. Plugging this in (1), we have

$$
\left\| \left(D ^ {- 1} A\right) ^ {k - 1} D ^ {- 1} A - \left(\bar {D} ^ {- 1} \bar {A}\right) ^ {k} \right\| _ {2} = \left\| \left(\left(\bar {D} ^ {- 1} \bar {A}\right) ^ {k - 1} + H\right) \left(\bar {D} ^ {- 1} \bar {A} + J\right) - \left(\bar {D} ^ {- 1} \bar {A}\right) ^ {k} \right\| _ {2}.
$$

Of these terms, $(\bar{D}^{-1}\bar{A})^{k - 1}J$ has norm at most $\| J\| _2$, $H(\bar{D}^{-1}\bar{A})$ has norm at most $\| H\| _2$, and $HJ$ has norm at most $C / \bar{d}$. Hence the induction step is complete.
We have thus shown that there is a constant $C(r, K)$ such that with probability at least $1 - N^{-r}$,

$$
\left| \frac {1}{N / 2} w _ {2} ^ {\top} \left(\left(D ^ {- 1} A\right) ^ {k} - \left(\bar {D} ^ {- 1} \bar {A}\right) ^ {k}\right) \mu \right| \leq \frac {C}{\sqrt {\bar {d}}},
$$

which proves the claim.

By simulation one can verify that indeed $\frac{1}{N/2} w_2^\top (\bar{D}^{-1}\bar{A})^k\mu \approx \left(\frac{p - q}{p + q}\right)^k (\mu_2 - \mu_1)$. Figure 6 presents $\mu_1^{(n)},\mu_2^{(n)}$ calculated from simulation against predicted values from our theoretical results. The simulation results are averaged over 20 instances generated from CSBM $(N = 2000, p = 0.0114, q = 0.0038, \mu_1 = 1, \mu_2 = 1.5, \sigma^2 = 1)$.



Figure 6: Comparison of the mean estimation in Lemma 2 against simulation results.
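The simulation behind Figure 6 can be sketched in a few lines. The snippet below uses the same CSBM parameters as in the text but a single noise-free instance (rather than 20 averaged ones), and checks that the class-mean gap after $k$ convolutions tracks $\left(\frac{p-q}{p+q}\right)^k(\mu_2 - \mu_1)$.

```python
# Single-instance sketch of the mean-shrinkage prediction: the class-mean gap
# after k applications of D^{-1}A should shrink roughly by ((p-q)/(p+q))^k.
import numpy as np

rng = np.random.default_rng(1)
N, p, q = 2000, 0.0114, 0.0038
mu1, mu2 = 1.0, 1.5
half = N // 2

probs = np.full((N, N), q)
probs[:half, :half] = p
probs[half:, half:] = p
A = np.triu((rng.random((N, N)) < probs).astype(float), 1)
A = A + A.T
deg = A.sum(axis=1)
P = A / np.maximum(deg, 1)[:, None]          # D^{-1} A (guard against degree 0)

h = np.concatenate([np.full(half, mu1), np.full(half, mu2)])
rho = (p - q) / (p + q)
gaps, preds = [], []
for k in range(1, 4):
    h = P @ h
    gaps.append(float(h[half:].mean() - h[:half].mean()))
    preds.append((mu2 - mu1) * rho ** k)
print(gaps, preds)
```

The observed gaps decrease geometrically and agree with the prediction up to the $O(1/\sqrt{\bar{d}})$ concentration error.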
# D PROOF OF LEMMA 3

Fix $n$ and let the element in the $i^{th}$ row and $j^{th}$ column of $(D^{-1}A)^n$ be $p_{ij}^{(n)}$. Consider a fixed node $i$. The variance of the feature for node $i$ after $n$ layers of convolutions is $(\sum_{j}(p_{ij}^{(n)})^2)\sigma^2$, by the basic property of the variance of a sum of independent random variables. Since $\sum_{j}|p_{ij}^{(n)}| = 1$, it follows that $\sum_{j}(p_{ij}^{(n)})^2\leq 1$, which is the second inequality.

To show the first inequality, consider the following optimization problem:
$$
\min _ {p _ {i j} ^ {(n)}, 1 \leq j \leq N} \sum_ {j} (p _ {i j} ^ {(n)}) ^ {2}
$$

$$
\text {s.t.} \qquad \sum_ {j} p _ {i j} ^ {(n)} = 1,
$$

$$
p _ {i j} ^ {(n)} \geq 0, \quad 1 \leq j \leq N.
$$

This part of the proof goes by contradiction. Suppose $\exists k, l$ such that $p_{ik}^{(n)} \neq p_{il}^{(n)}$. Fixing all other $p_{ij}^{(n)}, j \neq k, l$, if we average $p_{ik}^{(n)}$ and $p_{il}^{(n)}$, their sum of squares will strictly decrease while not breaking the constraints:

$$
2 \Big (\frac {p _ {i k} ^ {(n)} + p _ {i l} ^ {(n)}}{2} \Big) ^ {2} - ((p _ {i k} ^ {(n)}) ^ {2} + (p _ {i l} ^ {(n)}) ^ {2}) = - \frac {1}{2} (p _ {i k} ^ {(n)} - p _ {i l} ^ {(n)}) ^ {2} < 0.
$$

So we obtain a contradiction. Thus to minimize $\sum_{j}(p_{ij}^{(n)})^2$, $p_{ij}^{(n)} = \frac{1}{N}, 1 \leq j \leq N$, and the minimum is $1 / N$.
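The averaging argument above is easy to check numerically: over the probability simplex the sum of squares never falls below $1/N$, and averaging two unequal coordinates strictly decreases it. A minimal sketch:

```python
# Sanity check of the argument above: on the probability simplex the sum of
# squares is at least 1/N, and averaging two unequal coordinates strictly
# decreases it while preserving the constraints.
import numpy as np

rng = np.random.default_rng(2)
N = 50
samples = rng.random((200, N))
samples /= samples.sum(axis=1, keepdims=True)     # 200 random points on the simplex
worst_gap = float(((samples ** 2).sum(axis=1) - 1 / N).min())

v = np.array([0.7, 0.1, 0.2])
w = np.array([0.4, 0.4, 0.2])   # first two coordinates replaced by their average
decrease = float((v ** 2).sum() - (w ** 2).sum())
print(worst_gap, decrease)
```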
# E PROOF OF LEMMA 4

The proof relies on the following definition of neighborhood size: in a graph $\mathcal{G}$, we denote by $\Gamma_k(x)$ the set of vertices in $\mathcal{G}$ at distance $k$ from a vertex $x$:

$$
\Gamma_ {k} (x) = \{y \in \mathcal {G}: d (x, y) = k \}.
$$

We define $N_{k}(x)$ to be the set of vertices within distance $k$ of $x$:

$$
N _ {k} (x) = \bigcup_ {i = 0} ^ {k} \Gamma_ {i} (x).
$$
To prove the lower bound, we first show an intermediate step that

$$
\frac {1}{| N _ {n} |} \sigma^ {2} \leq (\sigma^ {(n)}) ^ {2}.
$$

The proof is the same as the one for the first inequality in Lemma 3, except we add another constraint that for a fixed $i$, the row $p_i$ is $|N_n(i)|$-sparse. This implies that the minimum of $\sum_{j}(p_{ij}^{(n)})^2$ becomes $1 / |N_n(i)|$. Then we apply the result on the upper bound of neighborhood sizes in the Erdős-Rényi graph $\mathcal{G}(N,p)$ (Lemma 2 of Graham & Lu (2001)), as it also serves as an upper bound on neighborhood sizes in $\mathrm{SBM}(N,p,q)$. The result implies that with probability at least $1 - O(1 / N)$, we have

$$
\left| N _ {n} \right| \leq \frac {10}{\min \{a , 2 \}} (N p) ^ {n}, \forall 1 \leq n \leq N. \tag {2}
$$

We drop the argument $i$ from $N_{n}$ because all nodes are identical in CSBM, so the bound applies to every node in the graph.
The proof of the upper bound is combinatorial. Corollary 1 states that when $N$ is large, the degree of node $i$ is approximately the expected degree in $\mathcal{G}$, namely, $\mathbb{E}[\mathrm{degree}] = \frac{N}{2}(p + q)$. Since

$$
p _ {i j} ^ {(n)} = \sum_ {\text {path } P = \{i, v _ {1}, \dots , v _ {n - 1}, j \}} \frac {1}{\deg (i)} \frac {1}{\deg (v _ {1})} \cdots \frac {1}{\deg (v _ {n - 1})}, \tag {3}
$$

using the approximation of degrees, we get that

$$
p _ {i j} ^ {(n)} = \left(\frac {2}{N (p + q)}\right) ^ {n} (\# \text { of paths } P \text { of length } n \text { between } i \text { and } j).
$$
Then we use a tree approximation to calculate the number of paths $P$ of length $n$ between $i$ and $j$ by regarding $i$ as the root. Note that

$$
\sum_ {j} \left(p _ {i j} ^ {(n)}\right) ^ {2} = \sum_ {k = 0} ^ {\lfloor \frac {n}{2} \rfloor} \sum_ {j \in \Gamma_ {n - 2 k}} \left(p _ {i j} ^ {(n)}\right) ^ {2} \tag {4}
$$

and for $j \in \Gamma_{n-2k}$, a deterministic path $P'$ of length $n - 2k$ is needed in order to reach $j$ from $i$. This implies that there are only $k$ steps deviating from $P'$. There are $(n - 2k + 1)^k$ ways of choosing when to deviate. For each specific way of choosing when to deviate, there are approximately $\mathbb{E}[\text{degree}]^k$ ways of choosing the destinations for the deviations. Hence in total, for $j \in \Gamma_{n-2k}$, there are $(n - 2k + 1)^k \mathbb{E}[\text{degree}]^k$ paths of length $n$ between $i$ and $j$. Thus

$$
p _ {i j} ^ {(n)} = (n - 2 k + 1) ^ {k} \left(\frac {2}{N (p + q)}\right) ^ {n - k}. \tag {5}
$$
Plugging (5) into (4), we get that

$$
\begin{array}{l} \sum_ {j} \left(p _ {i j} ^ {(n)}\right) ^ {2} = \sum_ {k = 0} ^ {\lfloor \frac {n}{2} \rfloor} \left| \Gamma_ {n - 2 k} \right| (n - 2 k + 1) ^ {2 k} \left(\frac {2}{N (p + q)}\right) ^ {2 n - 2 k} \quad (6) \\ \leq \sum_ {k = 0} ^ {\lfloor \frac {n}{2} \rfloor} \frac {9}{\min \{a , 2 \}} (n - 2 k + 1) ^ {2 k} (N p) ^ {n - 2 k} \left(\frac {2}{N (p + q)}\right) ^ {2 n - 2 k} \quad (7) \end{array}
$$

Again, (7) follows from using the upper bound on $|\Gamma_{n - 2k}|$ from Graham & Lu (2001), which gives that with probability at least $1 - O(1 / N)$,

$$
| \Gamma_ {n - 2 k} | \leq \frac {9}{\min \{a , 2 \}} (N p) ^ {n - 2 k}, \forall 1 \leq k \leq \left\lfloor \frac {n}{2} \right\rfloor .
$$

Combining with Lemma 3, we obtain the final result.

Figure 7 presents the variance calculated from simulation against the predicted upper and lower bounds from our theoretical results. The simulation results are averaged over 1000 instances generated from CSBM $(N = 2000, p = 0.0114, q = 0.0038, \mu_1 = 1, \mu_2 = 1.5, \sigma^2 = 1)$.



Figure 7: Comparison of the bounds on variance in Theorem 2 against simulation results.
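A one-instance version of the Figure 7 check is short to write. After $n$ convolutions the variance ratio $(\sigma^{(n)})^2/\sigma^2$ equals $\sum_j (p_{ij}^{(n)})^2$ row-wise, so it must lie in $[1/N, 1]$; in this regime it also decreases with depth.

```python
# Sketch checking the variance bounds: the per-node variance ratio
# sum_j (p_ij^{(n)})^2 lies in [1/N, 1] and shrinks with depth.
# CSBM parameters follow the simulation in the text (one instance).
import numpy as np

rng = np.random.default_rng(3)
N, p, q = 2000, 0.0114, 0.0038
half = N // 2
probs = np.full((N, N), q)
probs[:half, :half] = p
probs[half:, half:] = p
A = np.triu((rng.random((N, N)) < probs).astype(float), 1)
A = A + A.T
P = A / np.maximum(A.sum(axis=1), 1)[:, None]   # D^{-1} A

Pn = np.eye(N)
ratios = []
for n in range(1, 5):
    Pn = Pn @ P
    ratios.append(float((Pn ** 2).sum(axis=1).mean()))
print(ratios)
```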
# F PROOF OF THEOREM 2

When we fix $K \in \mathbb{N}$, only the upper bound in Theorem 2 will change. Note that now the upper bound in (7) can be written as

$$
\begin{array}{l} \sum_ {k = 0} ^ {\lfloor \frac {n}{2} \rfloor} \frac {9}{\min \{a , 2 \}} (n - 2 k + 1) ^ {2 k} \left(\frac {p + q}{2 p}\right) ^ {2 k} \left(\frac {2 p}{p + q}\right) ^ {n} \left(\frac {2}{N (p + q)}\right) ^ {n} \\ \leq \frac {C}{\min \{a , 2 \}} \left(\sum_ {k = 0} ^ {C} \left(\frac {p + q}{2 p}\right) ^ {2 k}\right) \left(\frac {2}{N (p + q)}\right) ^ {n} \\ \leq \frac {C}{\min \{a , 2 \}} \left(\frac {2}{N (p + q)}\right) ^ {n}. \end{array}
$$
# G PROOF OF PROPOSITION 1

Let the node representation vector of node $v$ after $n$ graph convolutions be $h_v^{(n)}$. The Bayes error rate can be written as $\frac{1}{2} (\mathbb{P}[h_v^{(n)} > \mathcal{D}|v \in \mathcal{C}_1] + \mathbb{P}[h_v^{(n)} \leq \mathcal{D}|v \in \mathcal{C}_2])$. For $d \in \mathbb{N}$, due to the symmetry of our setup, one can easily see that the optimal linear decision boundary is the hyperplane $\sum_{j=1}^{d} x_j = \frac{d}{2} (\mu_1 + \mu_2)$. Then for $v \in \mathcal{C}_1$, $\sum_{j=1}^{d} (h_v^{(n)})_j \sim \mathcal{N}(d\mu_1^{(n)}, d(\sigma^{(n)})^2)$, and for $v \in \mathcal{C}_2$, $\sum_{j=1}^{d}(h_v^{(n)})_j \sim \mathcal{N}(d\mu_2^{(n)}, d(\sigma^{(n)})^2)$. Thus the Bayes error rate can be written as

$$
\begin{array}{l} \frac {1}{2} \left(\mathbb {P} \left[ \sum_ {j = 1} ^ {d} \left(h _ {v} ^ {(n)}\right) _ {j} > \mathcal {D} | v \in \mathcal {C} _ {1} \right] + \mathbb {P} \left[ \sum_ {j = 1} ^ {d} \left(h _ {v} ^ {(n)}\right) _ {j} \leq \mathcal {D} | v \in \mathcal {C} _ {2} \right]\right) \\ = \frac {1}{2} \left(1 - \Phi \left(\frac {\frac {d}{2} \left(\mu_ {1} + \mu_ {2}\right) - d \mu_ {1} ^ {(n)}}{\sqrt {d} \sigma^ {(n)}}\right)\right) + \frac {1}{2} \left(\Phi \left(\frac {\frac {d}{2} \left(\mu_ {1} + \mu_ {2}\right) - d \mu_ {2} ^ {(n)}}{\sqrt {d} \sigma^ {(n)}}\right)\right) \\ = 1 - \Phi \left(\frac {\frac {d}{2} (\mu_ {1} + \mu_ {2}) - d \mu_ {1} ^ {(n)}}{\sqrt {d} \sigma^ {(n)}}\right). \end{array}
$$

The last equality follows from the fact that $\frac{d}{2} (\mu_1 + \mu_2) - d\mu_1^{(n)} = -\left(\frac{d}{2} (\mu_1 + \mu_2) - d\mu_2^{(n)}\right)$.
# H HOW TO USE THE Z-SCORE TO CHOOSE THE NUMBER OF LAYERS

The bounds on the z-score with respect to the number of layers, $z_{\mathrm{lower}}^{(n)}$ and $z_{\mathrm{upper}}^{(n)}$, allow us to calculate bounds for $n^{\star}$ and $n_0$ under different scenarios. Specifically,

1. If $\forall n\in \mathbb{N},z_{\mathrm{upper}}^{(n)} < z^{(0)} = (\mu_2 - \mu_1) / \sigma$, then $n^{\star} = n_{0} = 0$, meaning that no graph convolution should be applied.

2. If $|\{n\in \mathbb{N}:z_{\mathrm{upper}}^{(n)}\geq z^{(0)}\} | > 0$, and

(a) $\forall n\in \mathbb{N},z_{\mathrm{lower}}^{(n)} < z^{(0)}$, then $0\leq n_0\leq \min \{n\in \mathbb{N}:z_{\mathrm{upper}}^{(n)}\leq z^{(0)}\}$, which means that the number of graph convolutions should not exceed the upper bound of $n_0$, or otherwise one gets worse performance than having no graph convolution. Note that in this case, since $n^{\star}\leq n_{0}$, we can only conclude that

$$
0 \leq n ^ {\star} \leq \min \left\{n \in \mathbb {N}: z _ {\mathrm {upper}} ^ {(n)} \leq z ^ {(0)} \right\}.
$$

(b) $|\{n\in \mathbb{N}:z_{\mathrm{lower}}^{(n)}\geq z^{(0)}\} | > 0$, then $0\leq n_0\leq \min \{n\in \mathbb{N}:z_{\mathrm{upper}}^{(n)}\leq z^{(0)}\}$, and letting $\arg \max_{n}z_{\mathrm{lower}}^{(n)} = n_{\mathrm{floor}}^{\star}$,

$$
\max \left\{n \leq n _ {\mathrm {floor}} ^ {\star}: z _ {\mathrm {upper}} ^ {(n)} \leq z _ {\mathrm {lower}} ^ {(n _ {\mathrm {floor}} ^ {\star})} \right\} \leq n ^ {\star} \leq \min \left\{n \geq n _ {\mathrm {floor}} ^ {\star}: z _ {\mathrm {upper}} ^ {(n)} \leq z _ {\mathrm {lower}} ^ {(n _ {\mathrm {floor}} ^ {\star})} \right\},
$$

meaning that the number of layers one should apply for optimal node classification performance is more than the lower bound on $n^{\star}$ and less than the upper bound on $n^{\star}$.
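The case-2(b) rule above can be turned into a small selection routine. The sketch below computes the stated bounds on $n^{\star}$ from z-score envelopes; the envelope arrays are hypothetical numbers chosen for illustration, not values from the paper.

```python
# Sketch of case 2(b): given z-score envelopes z_upper[n] and z_lower[n],
# bound the optimal depth n* around n*_floor = argmax_n z_lower[n].
import numpy as np

def nstar_bounds(z_upper, z_lower):
    n_floor = int(np.argmax(z_lower))          # depth with best guaranteed z-score
    thresh = z_lower[n_floor]
    below = [n for n, zu in enumerate(z_upper) if zu <= thresh]
    lo = max([n for n in below if n <= n_floor], default=0)
    hi = min([n for n in below if n >= n_floor], default=len(z_upper) - 1)
    return lo, hi

z_upper = np.array([1.0, 1.8, 2.4, 2.1, 1.2, 0.6])   # hypothetical envelope
z_lower = np.array([1.0, 1.4, 1.9, 1.5, 0.8, 0.3])   # hypothetical envelope
print(nstar_bounds(z_upper, z_lower))
```

On these toy envelopes $n^{\star}_{\mathrm{floor}} = 2$ and the routine brackets $n^{\star}$ between 1 and 4.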
# I PROOFS OF PROPOSITION 2-5

# I.1 PROOF OF PROPOSITION 2

Since the spectral radius of $D^{-1}A$ is 1,

$$
\alpha (I d - (1 - \alpha) (D ^ {- 1} A)) ^ {- 1} = \alpha \sum_ {k = 0} ^ {\infty} (1 - \alpha) ^ {k} (D ^ {- 1} A) ^ {k}.
$$

Applying Lemma 2, we get that $\mu_2^{\mathrm{PPNP}} - \mu_1^{\mathrm{PPNP}}\approx \frac{p + q}{p + \frac{2 - \alpha}{\alpha}q} (\mu_2 - \mu_1)$.
To bound the approximation error, similar to the proof of the concentration bound in Theorem 1, it suffices to bound

$$
\frac {\mu_ {1} - \mu_ {2}}{N} w _ {2} ^ {\top} \left(\sum_ {k = 0} ^ {\infty} \alpha (1 - \alpha) ^ {k} \left(\left(D ^ {- 1} A\right) ^ {k} - \left(\bar {D} ^ {- 1} \bar {A}\right) ^ {k}\right)\right) w _ {2} = \frac {\mu_ {1} - \mu_ {2}}{N} w _ {2} ^ {\top} \left(T _ {K} + T _ {K + 1, \infty}\right) w _ {2},
$$

where $T_K = \sum_{k=0}^{K} \alpha(1 - \alpha)^k ((D^{-1}A)^k - (\bar{D}^{-1}\bar{A})^k)$, $T_{K+1,\infty} = \sum_{k=K+1}^{\infty} \alpha(1 - \alpha)^k ((D^{-1}A)^k - (\bar{D}^{-1}\bar{A})^k)$, and $K \in \mathbb{N}$ is ours to choose.

Bounding $T_K$: Applying Theorem 1 with fixed $r > 0$, there exists a constant $C(r, K, \alpha)$ such that with probability $1 - O(N^{-r})$,

$$
\left\| T _ {K} \right\| _ {2} \leq \frac {C}{\sqrt {\bar {d}}}.
$$
Bounding $T_{K + 1,\infty}$: We will show an upper bound for $(D^{-1}A)^k - (\bar{D}^{-1}\bar{A})^k$ that applies for all $k \in \mathbb{N}$. Note that for every $k \in \mathbb{N}$,

$$
(D ^ {- 1} A) ^ {k} = D ^ {- 1 / 2} \left(D ^ {- 1 / 2} A D ^ {- 1 / 2}\right) ^ {k} D ^ {1 / 2} = D ^ {- 1 / 2} \left(V \Lambda^ {k} V ^ {\top}\right) D ^ {1 / 2},
$$

where $D^{-1/2}AD^{-1/2} = V\Lambda V^{\top}$ is the eigenvalue decomposition. Then

$$
\begin{array}{l} \left\| \left(D ^ {- 1} A\right) ^ {k} - \left(\bar {D} ^ {- 1} \bar {A}\right) ^ {k} \right\| _ {2} \leq \left\| \left(D ^ {- 1} A\right) ^ {k} \right\| _ {2} + \left\| \left(\bar {D} ^ {- 1} \bar {A}\right) ^ {k} \right\| _ {2} = \left\| \left(D ^ {- 1} A\right) ^ {k} \right\| _ {2} + 1 \\ \leq \| D ^ {- 1 / 2} \| _ {2} \| \left(D ^ {- 1 / 2} A D ^ {- 1 / 2}\right) ^ {k} \| _ {2} \| D ^ {1 / 2} \| _ {2} + 1. \end{array}
$$

Since $\| (D^{-1 / 2}AD^{-1 / 2})^k\| _2 = 1$ and by Corollary 1, with probability at least $1 - N^{-r}$,

$$
\| D ^ {1 / 2} \| _ {2} \leq \sqrt {3 \bar {d} / 2}, \quad \| D ^ {- 1 / 2} \| _ {2} \leq \sqrt {2 / \bar {d}},
$$

the previous inequality becomes $\| (D^{-1}A)^k - (\bar{D}^{-1}\bar{A})^k\|_2 \leq \sqrt{3} + 1$. Hence

$$
\left\| T _ {K + 1, \infty} \right\| _ {2} \leq (\sqrt {3} + 1) (1 - \alpha) ^ {K + 1}.
$$

Combining the two results, we prove the claim.
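The population computation behind Proposition 2 can be verified exactly: for $\bar{P} = \bar{D}^{-1}\bar{A}$, the operator $\alpha(I - (1-\alpha)\bar{P})^{-1}$ shrinks the class-mean gap by precisely $\frac{p+q}{p + \frac{2-\alpha}{\alpha}q}$. The sketch below uses illustrative values of $N, p, q, \alpha$.

```python
# Exact check of the PPNP mean-gap factor on the population operator:
# alpha (I - (1-alpha) Pbar)^{-1} scales the class-mean gap by
# (p + q) / (p + (2 - alpha)/alpha * q).  Parameters are illustrative.
import numpy as np

N, p, q, alpha = 400, 0.1, 0.02, 0.2
half = N // 2
Abar = np.full((N, N), q)
Abar[:half, :half] = p
Abar[half:, half:] = p
Pbar = Abar / (N * (p + q) / 2)                  # Dbar^{-1} Abar
M = alpha * np.linalg.inv(np.eye(N) - (1 - alpha) * Pbar)

mu1, mu2 = 1.0, 1.5
x = np.concatenate([np.full(half, mu1), np.full(half, mu2)])
h = M @ x
gap = float(h[half:].mean() - h[:half].mean())
pred = (p + q) / (p + (2 - alpha) / alpha * q) * (mu2 - mu1)
print(gap, pred)
```

The agreement is exact (up to floating point) because $w^{(2)}$ is an eigenvector of $\bar{P}$ with eigenvalue $\frac{p-q}{p+q}$, so the Neumann series sums to the stated factor.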
# I.2 PROOF OF PROPOSITION 3

The claim is a direct corollary of Theorem 1.

# I.3 PROOF OF PROPOSITION 4

The covariance matrix $\Sigma^{\mathrm{PPNP}}$ of $h^{\mathrm{PPNP}}$ can be written as

$$
\Sigma^ {\mathrm {PPNP}} = \alpha^ {2} \left(\sum_ {k = 0} ^ {\infty} (1 - \alpha) ^ {k} \left(D ^ {- 1} A\right) ^ {k}\right) \left(\sum_ {l = 0} ^ {\infty} (1 - \alpha) ^ {l} \left(D ^ {- 1} A\right) ^ {l}\right) ^ {\top} \sigma^ {2}.
$$

Note that the variance of node $i$ equals $\alpha^2\sum_{k,l=0}^{\infty}(1-\alpha)^{k+l}(D^{-1}A)_{i\cdot}^k\left((D^{-1}A)^l\right)_{i\cdot}^\top$, where the subscript $i\cdot$ refers to row $i$ of a matrix. Then by the Cauchy-Schwarz inequality,

$$
\begin{array}{l} (D ^ {- 1} A) _ {i \cdot} ^ {k} ((D ^ {- 1} A) ^ {l}) _ {i \cdot} ^ {\top} \leq \| (D ^ {- 1} A) _ {i \cdot} ^ {k} \| \| ((D ^ {- 1} A) ^ {l}) _ {i \cdot} \| \\ \leq \sqrt {(\sigma^ {(k)}) ^ {2} (\sigma^ {(l)}) ^ {2}} / \sigma^ {2}, \quad \text {for all } 1 \leq k, l \leq N. \end{array}
$$
Moreover, by Lemma 3, $(\sigma^{(k)})^2 \leq \sigma^2$. Since all nodes are identical in distribution, we get that with probability $1 - O(1/N)$, for all $1 \leq K \leq N$,

$$
\begin{array}{l} \left(\sigma^ {\mathrm {PPNP}}\right) ^ {2} \leq \alpha^ {2} \left(\sum_ {k = 0} ^ {K} (1 - \alpha) ^ {k} \sqrt {\left(\sigma^ {(k)}\right) _ {\mathrm {upper}} ^ {2}} + \sum_ {k = K + 1} ^ {\infty} (1 - \alpha) ^ {k} \sigma\right) ^ {2} \\ \leq \alpha^ {2} \left(\sum_ {k = 0} ^ {K} (1 - \alpha) ^ {k} \sqrt {(\sigma^ {(k)}) _ {\mathrm {upper}} ^ {2}} + \frac {(1 - \alpha) ^ {K + 1}}{\alpha} \sigma\right) ^ {2}. \end{array}
$$

For the lower bound, note that with probability $1 - O(1 / N)$,

$$
\left(\sigma^ {\mathrm {PPNP}}\right) ^ {2} \geq \alpha^ {2} \left(\sum_ {k = 0} ^ {N} (1 - \alpha) ^ {2 k} \frac {1}{N _ {k}} + \sum_ {k = N + 1} ^ {\infty} (1 - \alpha) ^ {2 k} \frac {1}{N}\right) \sigma^ {2},
$$

where $N_{k}$ is the size of the $k$-hop neighborhood. Then

$$
\begin{array}{l} (\sigma^ {\mathrm {PPNP}}) ^ {2} \geq \alpha^ {2} \left(\sum_ {k = 0} ^ {N} (1 - \alpha) ^ {2 k} \frac {\min \{a , 2 \}}{10} \frac {1}{(N p) ^ {k}}\right) \sigma^ {2} \\ \geq \alpha^ {2} \frac {\min \{a , 2 \}}{10} \frac {(N p) ^ {N + 1} - (1 - \alpha) ^ {2 N + 2}}{(N p) ^ {N} (N p - (1 - \alpha) ^ {2})} \sigma^ {2} \\ \geq \alpha^ {2} \frac {\min \{a , 2 \}}{10} \sigma^ {2}. \end{array}
$$

It is easy to see that Lemma 3 applies to any message-passing scheme which can be regarded as a random walk on the graph. Combining with Lemma 3, we get the final result.
# I.4 PROOF OF PROPOSITION 5

Since

$$
h ^ {\mathrm {APPNP} (n)} = \left(\alpha \left(\sum_ {k = 0} ^ {n - 1} (1 - \alpha) ^ {k} (D ^ {- 1} A) ^ {k}\right) + (1 - \alpha) ^ {n} (D ^ {- 1} A) ^ {n}\right) X,
$$

through the same calculation as for the upper bound in the proof of Proposition 4, we get that with probability $1 - O(1 / N)$,

$$
\left(\sigma^ {\mathrm {APPNP} (n)}\right) ^ {2} \leq \left(\alpha \left(\sum_ {k = 0} ^ {n - 1} (1 - \alpha) ^ {k} \sqrt {(\sigma^ {(k)}) _ {\mathrm {upper}} ^ {2}}\right) + (1 - \alpha) ^ {n} \sqrt {(\sigma^ {(n)}) _ {\mathrm {upper}} ^ {2}}\right) ^ {2}.
$$

For the lower bound, through the same calculation as for the lower bound in the proof of Proposition 4, we get that with probability $1 - O(1 / N)$,

$$
\begin{array}{l} (\sigma^ {\mathrm {APPNP} (n)}) ^ {2} \geq \alpha^ {2} \sum_ {k = 0} ^ {n - 1} (1 - \alpha) ^ {2 k} (\sigma^ {(k)}) ^ {2} + (1 - \alpha) ^ {2 n} (\sigma^ {(n)}) ^ {2} \\ \geq \alpha^ {2} \frac {\min \{a , 2 \}}{10} \left(\sum_ {k = 0} ^ {n - 1} (1 - \alpha) ^ {2 k} \frac {1}{(N p) ^ {k}}\right) \sigma^ {2} + \frac {\min \{a , 2 \}}{10} (1 - \alpha) ^ {2 n} \frac {1}{(N p) ^ {n}} \sigma^ {2} \\ \geq \frac {\min \{a , 2 \}}{10} \left(\alpha^ {2} + \frac {(1 - \alpha) ^ {2 n}}{(N p) ^ {n}}\right) \sigma^ {2}. \end{array}
$$

Combining with Lemma 3, we get the final result.
# J EXPERIMENTS

Here we provide more details on the models that we use in Section 5. In all cases we use the Adam optimizer and tune some hyperparameters for better performance. The hyperparameters used are summarized as follows.

<table><tr><td>Data</td><td>final linear classifier</td><td>weights in GNN layer</td><td>learning rate (width)</td><td>iterations (width)</td></tr><tr><td rowspan="2">synthetic</td><td rowspan="2">1 layer</td><td>no</td><td>0.01</td><td>8000</td></tr><tr><td>yes</td><td>0.01(1,4,16)/0.001(64,256)</td><td>8000(1,4,16)/10000(64)/50000(256)</td></tr><tr><td rowspan="2">Cora</td><td rowspan="2">3 layer with 32 hidden channels</td><td>no</td><td>0.001</td><td>150</td></tr><tr><td>yes</td><td>0.001</td><td>200</td></tr><tr><td rowspan="2">CiteSeer</td><td rowspan="2">3 layer with 16 hidden channels</td><td>no</td><td>0.001</td><td>100</td></tr><tr><td>yes</td><td>0.001</td><td>100</td></tr><tr><td rowspan="2">PubMed</td><td rowspan="2">3 layer with 32 hidden channels</td><td>no</td><td>0.001</td><td>500</td></tr><tr><td>yes</td><td>0.001</td><td>500</td></tr></table>

We empirically find that after adding weights in each GNN layer, it takes much longer to train the model for one iteration, and the time increases as the depth or the width increases (Figure 8). Since for some combinations it takes more than 200,000 iterations for the validation accuracy to finally increase, in each case we only train for a reasonable number of iterations.

All models were implemented with PyTorch (Paszke et al., 2019) and PyTorch Geometric (Fey & Lenssen, 2019).



Figure 8: Iterations per second for each model.
# K ADDITIONAL RESULTS

# K.1 EFFECT OF NONLINEARITY ON CLASSIFICATION PERFORMANCE

In Section 3, we consider the case of a simplified linear GNN. What would happen if we added nonlinearity after the linear graph convolutions? Here, we consider the case of a GNN with a ReLU activation function added after $n$ linear graph convolutions, i.e. $h^{(n)\mathrm{ReLU}} = \mathrm{ReLU}((D^{-1}A)^n X)$. We show that adding such nonlinearity does not improve the classification performance.

Proposition 6. Applying a ReLU activation function after $n$ linear graph convolutions does not decrease the Bayes error rate, i.e. the Bayes error rate based on $h^{(n)\text{ReLU}} \geq$ the Bayes error rate based on $h^{(n)}$, and equality holds if $\mu_1 \geq -\mu_2$.

Proof. It is known that if $x$ follows a Gaussian distribution, then $\mathrm{ReLU}(x)$ follows a rectified Gaussian distribution. Following the definition of the Bayes optimal classifier, we present a geometric proof in Figure 9, where the dark blue bar denotes the location of 0, the red bar denotes the decision boundary $\mathcal{D}$ of the Bayes optimal classifier, and the light blue area denotes the overlapping area $S$, which is twice the Bayes error rate.

1. $\mu_{1} \geq -\mu_{2}$, $\mathcal{D} = \frac{\mu_{1} + \mu_{2}}{2} \to \mathcal{D} = \frac{\mu_{1} + \mu_{2}}{2}$, the Bayes error rate stays the same.





2. $\mu_{1} < -\mu_{2}$, $\mathcal{D} = \frac{\mu_1 + \mu_2}{2} \to \mathcal{D} = 0$, the Bayes error rate increases.



Figure 9: A geometric proof of Proposition 6.


# K.2 EXACT LIMIT OF VARIANCE $(\sigma^{(n)})^2$ AS $n\to \infty$

Proposition 7. Given a graph $\mathcal{G}$ with adjacency matrix $A$, let its degree vector be $d = A\mathbb{1}_N$, where $\mathbb{1}_N$ is the all-one vector of length $N$. If $\mathcal{G}$ is connected and non-bipartite, the variance of each node $i$, denoted as $(\sigma_i^{(n)})^2$, converges asymptotically to $\frac{\|d\|_2^2}{\|d\|_1^2}$, i.e.

$$
\left(\sigma_ {i} ^ {(n)}\right) ^ {2} \xrightarrow {n \to \infty} \frac {\| d \| _ {2} ^ {2}}{\| d \| _ {1} ^ {2}}.
$$

Then $\frac{\|d\|_2^2}{\|d\|_1^2} \geq \frac{1}{N}$, and the equality holds if and only if $\mathcal{G}$ is regular.
Proof. Let $e_i$ denote the standard basis unit vector with the $i^{th}$ entry equal to 1 and all other entries equal to 0. Since $\mathcal{G}$ is connected and non-bipartite, the random walk represented by $P = D^{-1}A$ is ergodic, meaning that

$$
e _ {i} ^ {\top} P ^ {n} \xrightarrow {n \to \infty} \pi ,
$$

where $\pi$ is the stationary distribution of this random walk, with $\pi_i = \frac{d_i}{\|d\|_1}$. Then, since norms are continuous functions, we conclude that

$$
\left(\sigma_ {i} ^ {(n)}\right) ^ {2} = \sum_ {j} (p _ {i j} ^ {(n)}) ^ {2} = \| e _ {i} ^ {\top} P ^ {n} \| _ {2} ^ {2} \xrightarrow {n \to \infty} \| \pi \| _ {2} ^ {2} = \frac {\| d \| _ {2} ^ {2}}{\| d \| _ {1} ^ {2}}.
$$

By Lemma 3, it follows that $\frac{\|d\|_2^2}{\|d\|_1^2} \geq \frac{1}{N}$. The unique minimizer of $\|\pi\|_2^2$ subject to $\|\pi\|_1 = 1$ is $\pi = \frac{1}{N}\mathbb{1}_N$. This means that $\mathcal{G}$ must be regular to achieve the lower bound asymptotically.
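Proposition 7 is easy to verify on a small example. The sketch below uses a hand-picked 5-node graph (a triangle with a pendant path, hence connected and non-bipartite) and checks that every row's sum of squares of $P^n$ approaches $\|d\|_2^2 / \|d\|_1^2$.

```python
# Numerical check of Proposition 7 on a small connected, non-bipartite graph:
# the row sums of squares of P^n converge to ||d||_2^2 / ||d||_1^2.
import numpy as np

# 5-node graph: a triangle (0-1-2) with a pendant path 2-3-4
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)
d = A.sum(axis=1)
P = A / d[:, None]                               # D^{-1} A
Pn = np.linalg.matrix_power(P, 200)
limit = (d ** 2).sum() / d.sum() ** 2            # ||d||_2^2 / ||d||_1^2
var_rows = (Pn ** 2).sum(axis=1)                 # (sigma_i^{(n)})^2 at n = 200
print(var_rows, limit)
```

Here $d = (2, 2, 3, 2, 1)$, so the limit is $22/100 = 0.22$, which is strictly above $1/N = 0.2$ because the graph is not regular.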

Under Assumption 1, the graph generated by our CSBM is almost surely connected. It remains to show that, with high probability, the graph is also non-bipartite.


Proposition 8. With probability at least $1 - O(1/(Np)^3)$, a graph $\mathcal{G}$ generated from CSBM($N, p, q, \mu_1, \mu_2, \sigma^2$) contains a triangle, which implies that it is non-bipartite.


Proof. The proof goes by the classic probabilistic method. Let $T_{\Delta} = \sum_{i=1}^{\binom{N}{3}} \mathbb{1}_{\tau_i}$ denote the number of triangles in $\mathcal{G}$, where $\mathbb{1}_{\tau_i}$ equals 1 if the potential triangle $\tau_i$ exists and 0 otherwise. Then, by the second moment method,

$$
\mathbb {P} [ T _ {\Delta} = 0 ] \leq \frac {\operatorname {Var} (T _ {\Delta})}{(\mathbb {E} [ T _ {\Delta} ]) ^ {2}} = \frac {1}{\mathbb {E} [ T _ {\Delta} ]} + \frac {\sum_ {i \neq j} \mathbb {E} [ \mathbb {1} _ {\tau_ {i}} \mathbb {1} _ {\tau_ {j}} ] - (\mathbb {E} [ T _ {\Delta} ]) ^ {2}}{(\mathbb {E} [ T _ {\Delta} ]) ^ {2}}.
$$

Since $\mathbb{E}[T_{\Delta}] = \Theta((Np)^3)$ and $\sum_{i \neq j} \mathbb{E}[\mathbb{1}_{\tau_i} \mathbb{1}_{\tau_j}] = (1 + O(1/N)) (\mathbb{E}[T_{\Delta}])^2$, we get that

$$
\mathbb {P} \left[ T _ {\Delta} = 0 \right] \leq O \left(1 / (N p) ^ {3}\right) + O (1 / N) \leq O \left(1 / (N p) ^ {3}\right).
$$

Hence $\mathbb{P}[\mathcal{G} \text{ is non-bipartite}] \geq \mathbb{P}[T_{\Delta}\geq 1]\geq 1 - O(1 / (Np)^3)$.
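Proposition 8 is easy to probe numerically. The sketch below is an illustrative Monte Carlo check with hypothetical parameters (ignoring the two-community structure, since only edge density matters for triangle existence): it samples symmetric random graphs in a regime where $Np$ is moderately large and counts triangles via $\operatorname{tr}(A^3)/6$.

```python
import numpy as np

rng = np.random.default_rng(0)

def count_triangles(A: np.ndarray) -> int:
    # Each triangle contributes 6 to tr(A^3): 3 starting nodes x 2 directions.
    return int(round(np.trace(A @ A @ A) / 6))

N, p = 200, 0.05   # illustrative values; E[#triangles] = C(N,3) p^3, roughly 164
trials = 20
hits = 0
for _ in range(trials):
    U = np.triu(rng.random((N, N)) < p, 1)  # independent upper-triangular edges
    A = (U + U.T).astype(float)             # symmetric adjacency, zero diagonal
    if count_triangles(A) >= 1:
        hits += 1
print(hits, "/", trials)
```

With these parameters a triangle-free sample is astronomically unlikely, matching the $1 - O(1/(Np)^3)$ guarantee.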

# K.3 SYMMETRIC GRAPH CONVOLUTION $D^{-1/2}AD^{-1/2}$

Proposition 9. When using the symmetric message-passing convolution $D^{-1/2}AD^{-1/2}$ instead, the variance $(\sigma^{(n)})^2$ is non-increasing with respect to the number of convolutional layers $n$, i.e.

$$
\left(\sigma^ {(n + 1)}\right) ^ {2} \leq \left(\sigma^ {(n)}\right) ^ {2}, \quad n \in \mathbb {N} \cup \{0 \}.
$$

Proof. We want to calculate the diagonal entries of the covariance matrix $\Sigma^{(n)}$ of $(D^{-1/2}AD^{-1/2})^n X$, where the covariance matrix of $X$ is $\sigma^2 I_N$. Hence, up to the constant factor $\sigma^2$, which we omit,

$$
\Sigma^ {(n)} = \left(D ^ {- 1 / 2} A D ^ {- 1 / 2}\right) ^ {n} \left(\left(D ^ {- 1 / 2} A D ^ {- 1 / 2}\right) ^ {n}\right) ^ {\top}.
$$

Since $D^{-1/2}AD^{-1/2}$ is symmetric, let its eigendecomposition be $V\Lambda V^\top$; we can then rewrite

$$
\Sigma^ {(n)} = (V \Lambda^ {n} V ^ {\top}) (V \Lambda^ {n} V ^ {\top}) ^ {\top} = V \Lambda^ {2 n} V ^ {\top}.
$$

Notice that the closed form of the diagonal entries is

$$
\operatorname {diag} \left(\Sigma^ {(n)}\right) = \sum_ {i = 1} ^ {N} \lambda_ {i} ^ {2 n} | v _ {i} | ^ {2},
$$

where $|v_i|^2$ denotes the entrywise square of the eigenvector $v_i$. Since $|\lambda_i| \leq 1$ for all $1 \leq i \leq N$, every entry of $\mathrm{diag}(\Sigma^{(n)})$, i.e. the variance of each node, is non-increasing in $n$.
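The monotonicity in Proposition 9 can be verified directly. The sketch below (a hypothetical random graph with unit-variance features; our own illustration) forms $M = D^{-1/2}AD^{-1/2}$, computes $\operatorname{diag}(M^n (M^n)^\top)$ for increasing $n$, and checks that no node's variance ever increases.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 30
# Hypothetical random graph; resample until every node has at least one edge.
while True:
    U = np.triu(rng.random((N, N)) < 0.2, 1)
    A = (U + U.T).astype(float)
    if A.sum(axis=1).min() > 0:
        break

d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
M = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]  # D^{-1/2} A D^{-1/2}

prev = np.ones(N)                    # n = 0: Sigma^(0) = I, all variances 1
for n in range(1, 8):
    Mn = np.linalg.matrix_power(M, n)
    var = np.diag(Mn @ Mn.T)         # diag(Sigma^(n)), dropping the sigma^2 factor
    assert np.all(var <= prev + 1e-12)  # non-increasing, as Proposition 9 states
    prev = var
print("variances are non-increasing for n = 1..7")
```

The assertion never fires because the eigenvalues of $M$ lie in $[-1, 1]$, so each $\lambda_i^{2n}$ term decays (or stays at 1) as $n$ grows.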

Although the proposition does not always hold for the random walk message-passing convolution $D^{-1}A$, as one can construct specific counterexamples (Appendix K.4), in practice variances are observed to decrease with the number of layers. Moreover, we empirically observe that the variance under the random walk convolution decreases even more than under the symmetric message-passing convolution. Figure 10 presents a visualization of node representations comparing the change of variance with respect to the number of layers under the random walk convolution and the symmetric message-passing convolution. The data is generated from CSBM($N = 2000$, $p = 0.0114$, $q = 0.0038$, $\mu_1 = 1$, $\mu_2 = 1.5$, $\sigma^2 = 1$).

Figure 10: The change of variance with respect to the number of layers using random walk convolution $D^{-1}A$ and symmetric message-passing convolution $D^{-1/2}AD^{-1/2}$.

# K.4 COUNTEREXAMPLES

Here, we construct a specific example where the variance $(\sigma^{(n)})^2$ is not non-increasing with respect to the number of layers $n$ (Figure 11A). We remark that such a non-monotone change in variance is not caused merely by the bipartiteness of the graph: a cycle graph with an even number of nodes is also bipartite, but does not exhibit such a phenomenon (Figure 11B). We conjecture that the increase in variance is instead caused by the tree-like structure.
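As a minimal probe of this conjecture (our own example, separate from the graphs in Figure 11): a star graph, the simplest tree-like structure, already makes the variance oscillate under the random walk convolution $D^{-1}A$, because probability mass spread over the leaves at odd steps collapses back onto the center at even steps.

```python
import numpy as np

# Star graph: center node 0 connected to k leaves (a hypothetical probe graph).
k = 3
N = k + 1
A = np.zeros((N, N))
A[0, 1:] = 1
A[1:, 0] = 1

P = A / A.sum(axis=1)[:, None]          # random walk convolution D^{-1} A

center_var = {}
for n in (1, 2, 3):
    Pn = np.linalg.matrix_power(P, n)
    center_var[n] = (Pn[0] ** 2).sum()  # (sigma_0^(n))^2 for the center node

print(center_var)  # variance goes 1/3 -> 1 -> 1/3: not non-increasing
```

At $n = 1$ the center's row of $P$ is uniform over the $k$ leaves (variance $1/k$); at $n = 2$ all mass returns to the center (variance $1$).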

Figure 11: Counterexamples.

# K.5 THE MIXING AND DENOISING EFFECTS IN PRACTICE

In this section, we measure the mixing and denoising effects of graph convolutions identified by our theoretical results in practice, and show that the same tradeoff between the two counteracting effects exists for real-world graphs. For the mixing effect, we measure the pairwise $L_2$ distances between the means of different classes, and for the denoising effect, we measure the within-class variances, both with respect to the number of layers. Figure 12 gives a visualization of both metrics for all classes on Cora, CiteSeer, and PubMed. We observe that, similar to the synthetic CSBM data, adding graph convolutions increases both the mixing effect (homogenizing node representations in different classes, measured by the inter-class distances) and the denoising effect (homogenizing node representations in the same class, measured by the within-class distances). In addition, the beneficial denoising effect clearly reaches saturation after just a small number of layers, as predicted by our theory.
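The two metrics can be computed with a short script. The sketch below is an illustrative reimplementation on a hypothetical two-block SBM-style graph with 1-D Gaussian features (not the paper's experimental code): it applies the random walk convolution repeatedly and tracks the inter-class mean distance (mixing) and the average within-class variance (denoising).

```python
import numpy as np

rng = np.random.default_rng(0)
N, p, q = 400, 0.20, 0.05          # hypothetical two-block SBM parameters
half = N // 2
labels = np.array([0] * half + [1] * half)

# Block-structured adjacency: intra-class edge prob p, inter-class edge prob q.
probs = np.where(labels[:, None] == labels[None, :], p, q)
U = np.triu(rng.random((N, N)) < probs, 1)
A = (U + U.T).astype(float)

X = rng.normal(loc=labels * 2.0, scale=1.0, size=N)  # 1-D node features
P = A / A.sum(axis=1)[:, None]                       # D^{-1} A

def metrics(x):
    m0, m1 = x[labels == 0].mean(), x[labels == 1].mean()
    within = (x[labels == 0].var() + x[labels == 1].var()) / 2
    return abs(m0 - m1), within

H = X.copy()
for layer in range(4):
    dist, var = metrics(H)
    print(f"layer {layer}: inter-class distance {dist:.3f}, "
          f"within-class variance {var:.4f}")
    H = P @ H
```

Both quantities shrink with depth: the means drift together (mixing) while neighbor averaging suppresses the noise (denoising), mirroring the tradeoff described above.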

Figure 12: The existence of the mixing (top row) and denoising effects (bottom row) of graph convolutions in practice. Adding graph convolutions increases both effects and the beneficial denoising effect clearly reaches saturation just after a small number of layers, as predicted by our theory in Section 3.

2023/A Non-Asymptotic Analysis of Oversmoothing in Graph Neural Networks/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d9c9290710320da0a09487d5da3ec55527072f8bb4c44f1ef8fdd14473a41f0a
size 1103731
2023/A Non-Asymptotic Analysis of Oversmoothing in Graph Neural Networks/layout.json
ADDED
The diff for this file is too large to render. See raw diff.
2023/A Non-monotonic Self-terminating Language Model/88334e4e-c7af-4ca1-ab25-12138e63533d_content_list.json
ADDED
The diff for this file is too large to render. See raw diff.
2023/A Non-monotonic Self-terminating Language Model/88334e4e-c7af-4ca1-ab25-12138e63533d_model.json
ADDED
The diff for this file is too large to render. See raw diff.
2023/A Non-monotonic Self-terminating Language Model/88334e4e-c7af-4ca1-ab25-12138e63533d_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:78fe41a6d6405bddf244aff6173a12d013b0bfa161fbb06b8de163991c82e418
size 2094398
2023/A Non-monotonic Self-terminating Language Model/full.md
ADDED
@@ -0,0 +1,510 @@

# A NON-MONOTONIC SELF-TERMINATING LANGUAGE MODEL

Eugene Choi†
eugene.choi@nyu.edu

Kyunghyun Cho†‡§
kyunghyun.cho@nyu.edu

Cheolhyoung Lee\*
cheolhyoung.lee@nyu.edu

# ABSTRACT

Recent large-scale neural autoregressive sequence models have shown impressive performance on a variety of natural language generation tasks. However, their generated sequences often exhibit degenerate properties such as non-termination, undesirable repetition, and premature termination, when generated with decoding algorithms such as greedy search, beam search, top-$k$ sampling, and nucleus sampling. In this paper, we focus on the problem of non-terminating sequences resulting from an incomplete decoding algorithm. We first define an incomplete probable decoding algorithm, which includes greedy search, top-$k$ sampling, and nucleus sampling, beyond the incomplete decoding algorithm originally put forward by Welleck et al. (2020). We then propose a non-monotonic self-terminating language model, which significantly relaxes the constraint of monotonically increasing termination probability in the self-terminating language model originally proposed by Welleck et al. (2020), to address the issue of non-terminating sequences when using incomplete probable decoding algorithms. We prove that our proposed model prevents non-terminating sequences when using not only incomplete probable decoding algorithms but also beam search. We empirically validate our model on sequence completion tasks with various architectures.

# 1 INTRODUCTION

Autoregressive neural sequence models (Bengio et al., 2000) have been widely used for various natural language generation tasks such as language modeling (Brown et al., 2020; Chowdhery et al., 2022), machine translation (Bahdanau et al., 2014), and conversational dialogue modeling (Vinyals & Le, 2015). Furthermore, large-scale autoregressive neural sequence models have shown unprecedented ability to generate fluent, human-like texts (Vaswani et al., 2017; Brown et al., 2020). Despite their success, autoregressive neural sequence models exhibit undesirable behaviors: non-termination (Welleck et al., 2020), degenerate repetition (Welleck et al., 2019; Holtzman et al., 2020), and premature termination (Koehn & Knowles, 2017; Stahlberg & Byrne, 2019). In this paper, we focus on how to prevent non-termination when using a given decoding algorithm.

Non-termination is the problem that a language model generates infinitely long sequences with a positive probability under a given decoding algorithm. Welleck et al. (2020) pointed out that this issue comes from a discrepancy between the distribution of the language model and the distribution induced by an incomplete decoding algorithm. They formalized this disparity by the notion of inconsistency, where the language model generates non-terminating sequences with a positive probability under the decoding algorithm. To avoid this inconsistency, they proposed a self-terminating (ST) language model that uses a new parametrization for its classifier rather than the usual softmax parametrization. They proved that the ST language model is consistent with respect to greedy search, beam search, top-$k$ sampling (Fan et al., 2018), as well as nucleus sampling (Holtzman et al., 2020).

The ST language model increases the termination probability of each sequence monotonically to 1, but this parametrization is not appropriate for modeling natural language. As an illustrative example, suppose there are two sequences in our dataset: "I am a boy" vs. "I am a boy, and you are a girl". A language model trained on this dataset may or may not terminate after the former. Once the model decides not to end, it should dramatically reduce the termination probability to continue. The ST language model, which monotonically increases the termination probability, cannot capture such a case, where one sequence is a prefix of another. We thus propose a non-monotonic self-terminating (NMST) language model which guarantees consistency with respect to greedy search, beam search, top-$k$ sampling, and nucleus sampling without monotonically increasing the termination probability.

The NMST language model encourages the termination probability of each sequence to converge to 1 through its parametrization, but without requiring monotonicity. Even under this relaxation, the proposed NMST language model provably prevents any non-terminating sequence resulting from greedy search, beam search, top-$k$ sampling, and nucleus sampling, which we refer to as incomplete probable decoding algorithms.

We conduct experiments validating the effectiveness of our NMST language models on sequence completion tasks, as was done in earlier studies. We test NMST parametrization with various architectures. Specifically, we train an RNN (Elman, 1990) and an LSTM (Hochreiter & Schmidhuber, 1997) on WikiText-2 (Merity et al., 2016). We additionally finetune GPT-2 (Radford et al., 2019) on WikiText-103 (Merity et al., 2016). Across all these setups, NMST parametrization effectively prevents non-terminating sequences, especially when compared to softmax parametrization. Furthermore, our NMST parametrization achieves better (lower) perplexities than ST parametrization, confirming the importance of relaxing the monotonicity of the termination probability.

# 2 NOTATIONS AND BACKGROUND

# 2.1 NOTATIONS FOR AUTOREGRESSIVE NEURAL SEQUENCE MODELS

**Sequences, vocabulary, and $\langle \mathrm{eos}\rangle$** We view an instance (e.g., a sentence or a paragraph) as a sequence $\mathbf{y} = (y_{1},y_{2},\dots ,y_{T})$, where each $y_{t}$ is an element of a pre-defined finite set of discrete tokens, referred to as a vocabulary $\mathcal{V}$. $\mathcal{V}$ includes a special symbol $\langle \mathrm{eos}\rangle$ that only appears at the end of a sequence. Every sequence $\mathbf{y}$ must end with $\langle \mathrm{eos}\rangle$. We write the length of $\mathbf{y}$ as $|\mathbf{y}|$, with $y_{|\mathbf{y}|} = \langle \mathrm{eos}\rangle$. We call $\mathbf{y}$ a non-terminating sequence, i.e., $|\mathbf{y}| = \infty$, if $y_{t}\neq \langle \mathrm{eos}\rangle$ for all $t$.

**Embedding vectors** Each token $v \in \mathcal{V}$ is a discrete symbol rather than a numerical vector. To capture the notion of similarity between discrete tokens efficiently, we use an embedding vector $\pmb{u}_v \in \mathbb{R}^m$ to project $v$ into a continuous embedding space (Bengio et al., 2000; Mikolov et al., 2013b;a; Levy & Goldberg, 2014).

**Autoregressive neural sequence models** Bengio et al. (2000) proposed an autoregressive neural sequence model parametrized by $\pmb{\theta} \in \mathbb{R}^k$. They factorized $p_{\pmb{\theta}}(\pmb{y}|\pmb{x})$ into a product of the conditional probability of each token given all the previous tokens and an input in a predefined order as follows:

$$
p _ {\boldsymbol {\theta}} (\boldsymbol {y} | \boldsymbol {x}) = \prod_ {t = 1} ^ {T} p _ {\boldsymbol {\theta}} \left(y _ {t} \mid \boldsymbol {y} _ {< t}, \boldsymbol {x}\right), \tag {1}
$$

where $\mathbf{y}_{<t}$ is a $t$-prefix of $\mathbf{y}$ and $\mathbf{x}$ is an input referred to as a context. For example, $\mathbf{x}$ represents either a prompt in sequence completion or a source-side sequence in machine translation.

There are several popular architectures for $p_{\theta}$, such as RNN (Elman, 1990), LSTM (Hochreiter & Schmidhuber, 1997), GRU (Cho et al., 2014), and Transformer (Vaswani et al., 2017). As shown in equation 2, all these models utilize softmax classifiers. In this paper, we modify the parametrization of their softmax classifiers to prevent non-terminating sequences. We thus write $p_{\theta}^{va}$ for a vanilla language model that, regardless of its architecture, uses the original softmax parametrization, as defined in Definition 1.

Definition 1. A vanilla language model $p_{\theta}^{va}$ computes the conditional probability of each token given a $t$-prefix $y_{<t}$ and a context $x$ at each time step $t$ as follows:

$$
p _ {\boldsymbol {\theta}} ^ {v a} \left(y _ {t} = v \mid \boldsymbol {y} _ {< t}, \boldsymbol {x}\right) = \exp \left(\boldsymbol {u} _ {v} ^ {\top} \boldsymbol {h} _ {t}\right) \Big/ \sum_ {v ^ {\prime} \in \mathcal {V}} \exp \left(\boldsymbol {u} _ {v ^ {\prime}} ^ {\top} \boldsymbol {h} _ {t}\right), \tag {2}
$$

where $\pmb{h}_t = f_{\pmb{\theta}}(\pmb{y}_t, \pmb{h}_{t-1})$ with $\pmb{h}_0 = \pmb{0}$.<sup>1</sup>

**Training** For a given dataset, $\mathcal{D} = \left\{\left(\boldsymbol{x}^{(n)},\boldsymbol{y}^{(n)}\right)\right\}_{n = 1}^{N}$, we maximize the joint probability assigned to the sequences in our training dataset to find an optimal parameter configuration $\theta^*$ as follows:

$$
\boldsymbol {\theta} ^ {\star} = \arg \max _ {\boldsymbol {\theta}} \sum_ {n = 1} ^ {N} \sum_ {t = 1} ^ {T ^ {(n)}} \log p _ {\boldsymbol {\theta}} \left(y _ {t} ^ {(n)} \mid \boldsymbol {y} _ {< t} ^ {(n)}, \boldsymbol {x} ^ {(n)}\right). \tag {3}
$$

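Equation 3 is the usual maximum-likelihood objective: the inner sum is a per-sequence log-likelihood under teacher forcing. A minimal sketch, assuming a hypothetical toy model `toy_model` that returns a fixed distribution over a 3-token vocabulary (ids 0, 1, and 2 standing in for $\langle \mathrm{eos}\rangle$):

```python
import numpy as np

V = 3  # toy vocabulary {0, 1, 2}, with token 2 playing the role of <eos>

def toy_model(prefix, x):
    # Hypothetical stand-in for p_theta(. | y_<t, x): a fixed distribution.
    return np.array([0.5, 0.3, 0.2])

def log_likelihood(y, x):
    # sum_t log p_theta(y_t | y_<t, x): the inner sum of equation 3
    total = 0.0
    for t, token in enumerate(y):
        probs = toy_model(y[:t], x)
        total += np.log(probs[token])
    return total

y = [0, 1, 2]  # a length-3 sequence ending in <eos>
print(log_likelihood(y, x=None))  # log 0.5 + log 0.3 + log 0.2
```

In practice $\theta^\star$ is found by minimizing the negated outer sum over the dataset with stochastic gradient descent.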
# 2.2 INCOMPLETE PROBABLE DECODING ALGORITHMS

An autoregressive language model $p_{\theta}$ predicts the likelihood of a sequence $y$ given a context $x$. Its autoregressive factorization in equation 1 must be unrolled recursively over $t$ at inference time. Hence, at inference time, we use a decoding algorithm, defined below, to generate sequences from $p_{\theta}$.

Definition 2. Let $\mathcal{Y}$ be the collection of sequences $\pmb{y} = (y_{1}, y_{2}, \dots, y_{T})$ where $T \in \{1, 2, \dots\}$ and $y_{t} \in \mathcal{V}$. A decoding algorithm $\mathcal{S}$ is a function that maps $p_{\theta}$ to $q_{\mathcal{S}(p_{\theta})}$, which is a probability distribution over $\mathcal{Y}$. A decoded sentence $\hat{\pmb{y}}$ given $\pmb{x}$ by $\mathcal{S}$ from $p_{\theta}$ is a random sample from $q_{\mathcal{S}(p_{\theta})}(\pmb{y}|\pmb{x})$.

To generate a high-quality sequence from $p_{\theta}$, a decoding algorithm assumes that a higher-quality sequence has a higher probability under $p_{\theta}$ than others. For instance, the maximum a posteriori (MAP) decoding algorithm $S_{map}$ returns the most probable sequence $y^{\star}$ given a context $x$ from $p_{\theta}$:

$$
\boldsymbol {y} ^ {\star} = \arg \max _ {\boldsymbol {y} \in \mathcal {Y}} p _ {\theta} (\boldsymbol {y} | \boldsymbol {x}), \tag {4}
$$

by setting $q_{S_{map}(p_\theta)}(\boldsymbol{y} = \boldsymbol{y}^\star | \boldsymbol{x}) = 1$ and $q_{S_{map}(p_\theta)}(\boldsymbol{y} = \boldsymbol{y}' | \boldsymbol{x}) = 0$ for $\boldsymbol{y}' \in \mathcal{Y} \setminus \{\boldsymbol{y}^\star\}$. Unfortunately, $S_{map}$ is intractable since equation 4 requires an exhaustive search over the sequence space $\mathcal{Y}$. Hence, in practice, we utilize incomplete probable decoding algorithms, defined as follows:

Definition 3. A decoding algorithm $S$ is incomplete and probable if there exists $\mathcal{V}_t \subsetneq \mathcal{V}$ such that

$$
\sum_ {v \in \mathcal {V} _ {t}} q _ {S (p _ {\theta})} \left(y _ {t} = v \mid \boldsymbol {y} _ {< t}, \boldsymbol {x}\right) = 1 \tag {5}
$$

and

$$
\min _ {v \in \mathcal {V} _ {t}} p _ {\boldsymbol {\theta}} \left(y _ {t} = v \mid \boldsymbol {y} _ {< t}, \boldsymbol {x}\right) \geq \max _ {v \in \mathcal {V} \setminus \mathcal {V} _ {t}} p _ {\boldsymbol {\theta}} \left(y _ {t} = v \mid \boldsymbol {y} _ {< t}, \boldsymbol {x}\right) \tag {6}
$$

for each $t$. Furthermore, for every $v \in \mathcal{V}_t$, $\mathcal{S}$ satisfies

$$
q _ {\mathcal {S} \left(p _ {\boldsymbol {\theta}}\right)} \left(y _ {t} = v \mid \boldsymbol {y} _ {< t}, \boldsymbol {x}\right) \geq p _ {\boldsymbol {\theta}} \left(y _ {t} = v \mid \boldsymbol {y} _ {< t}, \boldsymbol {x}\right). \tag {7}
$$

At each $t$, an incomplete probable decoding algorithm $\mathcal{S}$ considers only a set of highly probable tokens, $\mathcal{V}_t$. $\mathcal{S}$ generates $\hat{\pmb{y}}$ given $\pmb{x}$ by recursively sampling $\hat{y}_t$ from $q_{\mathcal{S}(p_\theta)}(y_t|\hat{\pmb{y}}_{< t},\pmb{x})$ supported on $\mathcal{V}_t$. This reduces the exponential complexity of $S_{map}$, $\mathcal{O}\left(|\mathcal{V}|^{|\hat{\pmb{y}}|}\right)$, down to a linear level, $\mathcal{O}\left(|\hat{\pmb{y}}|\cdot |\mathcal{V}|\right)$.

Greedy search, top-$k$ sampling (Fan et al., 2018), and nucleus sampling (Holtzman et al., 2020) are incomplete and probable. For example, greedy search $S_{gr}$ generates the $t$-th item of a sequence by

$$
\hat {y} _ {t} = \arg \max _ {v \in \mathcal {V}} p _ {\boldsymbol {\theta}} \left(y _ {t} = v \mid \hat {\boldsymbol {y}} _ {< t}, \boldsymbol {x}\right). \tag {8}
$$

In other words, $S_{gr}$ sets $\mathcal{V}_t$ to $\left\{v_t^{(1)}\right\}$ where $v_t^{(1)} = \arg \max_{v\in \mathcal{V}}p_\theta (y_t = v|\hat{\boldsymbol{y}}_{< t},\boldsymbol {x})$. Moreover, we have $p_{\pmb{\theta}}\big(y_t = v_t^{(1)}|\hat{\boldsymbol{y}}_{< t},\boldsymbol {x}\big)\leq q_{S_{gr}(p_\pmb{\theta})}\big(y_t = v_t^{(1)}|\hat{\boldsymbol{y}}_{< t},\boldsymbol {x}\big) = 1$, and $q_{S_{gr}(p_\pmb{\theta})}(y_t = v'| \hat{\boldsymbol{y}}_{< t},\boldsymbol {x}) = 0$ holds for $v^{\prime}\in \mathcal{V}\setminus \mathcal{V}_{t}$. Thus, $S_{gr}$ is incomplete and probable. Unlike $S_{gr}$, top-$k$ sampling considers the $k$ most probable tokens in $\mathcal{V}$ as $\mathcal{V}_t$, while nucleus sampling sets $\mathcal{V}_t$ to the smallest subset of $\mathcal{V}$ containing the most probable tokens whose total probability is higher than a given threshold $\mu$. In §A.1 and A.2, we show that top-$k$ sampling and nucleus sampling are also incomplete and probable.
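To make Definition 3 concrete, the sketch below (our own illustration, not the paper's code) builds the candidate set $\mathcal{V}_t$ for top-$k$ and nucleus sampling from a toy next-token distribution, renormalizes it to obtain $q_{\mathcal{S}(p_\theta)}$, and checks properties (5)-(7).

```python
import numpy as np

p = np.array([0.40, 0.25, 0.15, 0.12, 0.08])  # toy p_theta(y_t = v | y_<t, x)

def candidate_set(p, mode, k=2, mu=0.7):
    order = np.argsort(-p)                    # tokens sorted by probability
    if mode == "top-k":
        return order[:k]
    # nucleus: smallest prefix of the sorted tokens with total mass >= mu
    csum = np.cumsum(p[order])
    cut = int(np.searchsorted(csum, mu)) + 1
    return order[:cut]

for mode in ("top-k", "nucleus"):
    Vt = candidate_set(p, mode)
    q = np.zeros_like(p)
    q[Vt] = p[Vt] / p[Vt].sum()               # renormalized distribution on V_t
    assert np.isclose(q.sum(), 1.0)           # property (5)
    outside = np.setdiff1d(np.arange(len(p)), Vt)
    assert p[Vt].min() >= p[outside].max()    # property (6)
    assert np.all(q[Vt] >= p[Vt])             # property (7): q >= p on V_t
    print(mode, "V_t =", sorted(Vt.tolist()))
```

Property (7) holds automatically here because dividing by the kept mass $p(\mathcal{V}_t) \leq 1$ can only increase each retained probability.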

Beam search is a heuristic algorithm that operates on the level of prefixes. We describe it further in §A.3. Although beam search is not an incomplete probable decoding algorithm, it also selects $\mathcal{V}_t$, a proper subset of $\mathcal{V}$, to expand each prefix at each step $t$. Due to this, our main theoretical finding for the incomplete probable decoding algorithms in §3 is applicable to beam search as well.

# 2.3 CONSISTENCY WITH RESPECT TO INCOMPLETE PROBABLE DECODING ALGORITHMS AND SELF-TERMINATING (ST) LANGUAGE MODELS

Incomplete probable decoding algorithms greatly reduce the computational overhead of generating sequences from our model. However, Welleck et al. (2020) observed that they can generate non-terminating sequences even if every training sequence has a finite length. To study this, Welleck et al. (2020) defined consistency with respect to decoding algorithms, as shown in Definition 4.

Definition 4. A language model $p_{\theta}$ is consistent with respect to a decoding algorithm $S$ if $q_{S(p_{\theta})}(|y| = \infty) = 0$ for any parameter configuration $\pmb{\theta} \in \mathbb{R}^k$.

Welleck et al. (2020) also proved that a vanilla language model $p_{\pmb{\theta}}^{va}$ defined in Definition 1 is inconsistent with respect to incomplete probable decoding algorithms and beam search, as follows:

Theorem 1. A vanilla language model $p_{\theta}^{va}$ defined in Definition 1 is inconsistent with respect to any incomplete probable decoding algorithm and beam search (Theorem 3.4 in Welleck et al. (2020)).

For each $t$, an incomplete probable decoding algorithm $\mathcal{S}$ selects $\mathcal{V}_t \subsetneq \mathcal{V}$ as a set of candidates for decoding, but $p_{\theta}^{va}$ does not guarantee that $\langle eos \rangle \in \mathcal{V}_t$. In particular, if $\langle eos \rangle \notin \mathcal{V}_t$ for all $t$, then $\mathcal{S}$ can never decode $\langle eos \rangle$, so the decoded sequence is non-terminating. Based on this result, Welleck et al. (2020) proposed a self-terminating (ST) language model, defined below:

Definition 5. For $h_t$ defined in Definition 1, the conditional probability of each token $v \in \mathcal{V}$ given a $t$-prefix $y_{<t}$ and a context $x$ at each time step $t$ in an ST language model is given by

$$
\alpha_ {t} = p _ {\boldsymbol {\theta}} ^ {s t} \left(y _ {t} = \langle e o s \rangle | \boldsymbol {y} _ {< t}, \boldsymbol {x}\right) = 1 - \prod_ {t ^ {\prime} = 1} ^ {t} (1 - \epsilon) \cdot \sigma \left(\boldsymbol {u} _ {\langle e o s \rangle} ^ {\top} \boldsymbol {h} _ {t ^ {\prime}}\right), \tag {9}
$$

and

$$
p _ {\pmb {\theta}} ^ {s t} (y _ {t} = v | \pmb {y} _ {< t}, \pmb {x}) = (1 - \alpha_ {t}) \cdot \exp (\pmb {u} _ {v} ^ {\top} \pmb {h} _ {t}) \Big/ \sum_ {v ^ {\prime} \in \mathcal {V} \setminus \{\langle e o s \rangle \}} \exp (\pmb {u} _ {v ^ {\prime}} ^ {\top} \pmb {h} _ {t}),
$$

where $v \in \mathcal{V} \setminus \{\langle eos \rangle\}$, $\epsilon \in (0,1)$, and $\sigma(x) = (1 + \exp(-x))^{-1}$ is the sigmoid function.

They proved that the ST language model is consistent with respect to any incomplete probable decoding algorithm and beam search, as follows:

Theorem 2. An ST language model $p_{\theta}^{st}$ defined in Definition 5 is consistent with respect to any incomplete probable decoding algorithm and beam search (Theorems 4.1-4.3 in Welleck et al. (2020)).

In equation 9, $p_{\pmb{\theta}}^{st}(y_t = \langle \text{eos} \rangle | \pmb{y}_{<t}, \pmb{x})$ monotonically increases to 1 as $t$ increases. $S$ therefore ends up including $\langle \text{eos} \rangle$ in $\mathcal{V}_t$ for all $t \geq t'$ for some $t'$, and $\lim_{t \to \infty} q_{S(p_\theta)}(y_t = \langle \text{eos} \rangle | \pmb{y}_{<t}, \pmb{x}) = 1$ by equation 7. This guarantees that $S$ terminates in a finite number of steps. Despite $p_{\pmb{\theta}}^{st}$'s consistency, its validation perplexity degrades compared to $p_{\pmb{\theta}}^{va}$ in sequence completion (Welleck et al., 2020). We suspect that this degradation comes from the core property of $p_{\pmb{\theta}}^{st}$ that $p_{\pmb{\theta}}^{st}(y_t = \langle \text{eos} \rangle | \pmb{y}_{<t}, \pmb{x})$ monotonically increases to 1 as $t$ increases. In Remark 1 below, we provide an example where the optimal $p_{\pmb{\theta}^\star}(y_t = \langle \text{eos} \rangle | \pmb{y}_{<t}, \pmb{x})$ is not monotonic.

Remark 1. Let $\mathcal{D} = \left\{(\pmb{x}^{(1)},\pmb{y}^{(1)}),(\pmb{x}^{(2)},\pmb{y}^{(2)})\right\}$ be a two-instance training dataset. Assume that there exists $t_0$ such that $\pmb{y}_{< t_0} = \pmb{y}_{< t_0}^{(1)} = \pmb{y}_{< t_0}^{(2)}$. Suppose further that $t_0 = |\pmb{y}^{(1)}| < |\pmb{y}^{(2)}| - 1$ and $\pmb{x} = \pmb{x}^{(1)} = \pmb{x}^{(2)}$. If $\pmb{\theta}^{\star}$ is an optimal parameter configuration in equation 3 over $\mathcal{D}$, then $p_{\pmb{\theta}^{\star}}\left(y_t^{(2)} = \langle \mathrm{eos}\rangle |\pmb{y}_{< t}^{(2)},\pmb{x}\right)$ is not monotonic with respect to $t$ (proved in §B).

We can easily find such cases in natural language satisfying the assumptions in Remark 1 by concatenating two sequences. We empirically demonstrate the existence of such cases in §4.2.

# 3 NON-MONOTONIC SELF-TERMINATING (NMST) LANGUAGE MODELS

The consistency of $p_{\theta}^{st}$ comes from $\lim_{t \to \infty} p_{\theta}^{st}(y_t = \langle eos \rangle | \mathbf{y}_{<t}, \mathbf{x}) = 1$, not from the monotonic increase of $p_{\theta}^{st}(y_t = \langle eos \rangle | \mathbf{y}_{<t}, \mathbf{x})$ as a function of $t$. This motivates us to propose a non-monotonic self-terminating (NMST) language model $p_{\theta}^{nmst}$ that permits $p_{\theta}^{nmst}(y_t = \langle eos \rangle | \mathbf{y}_{<t}, \mathbf{x})$ to be a non-monotonic function of $t$ while satisfying $\lim_{t \to \infty} p_{\theta}^{nmst}(y_t = \langle eos \rangle | \mathbf{y}_{<t}, \mathbf{x}) = 1$, as follows:

Definition 6. For $h_t$ defined in Definition 1, the conditional probability of each token given a $t$-prefix $y_{<t}$ and a context $x$ at the $t$-th step in an NMST language model is defined by
$$
\alpha_{t} = p_{\boldsymbol{\theta}}^{nmst}\left(y_{t} = \langle eos \rangle \mid \boldsymbol{y}_{<t}, \boldsymbol{x}\right) = \left(1 - \sigma\left(\boldsymbol{u}_{\langle eos \rangle}^{\top} \boldsymbol{h}_{t}\right)\right)\left(1 - (1 - \epsilon)^{t}\right) + \sigma\left(\boldsymbol{u}_{\langle eos \rangle}^{\top} \boldsymbol{h}_{t}\right), \tag{10}
$$
and
$$
p_{\boldsymbol{\theta}}^{nmst}(y_{t} = v \mid \boldsymbol{y}_{<t}, \boldsymbol{x}) = (1 - \alpha_{t}) \cdot \frac{\exp\left(\boldsymbol{u}_{v}^{\top} \boldsymbol{h}_{t}\right)}{\sum_{v' \in \mathcal{V} \setminus \{\langle eos \rangle\}} \exp\left(\boldsymbol{u}_{v'}^{\top} \boldsymbol{h}_{t}\right)},
$$
where $v \in \mathcal{V} \setminus \{\langle eos \rangle\}$ , $\epsilon \in (0,1)$ , and $\sigma(x) = (1 + \exp(-x))^{-1}$ is a sigmoid function.
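As a concrete illustration, the NMST parametrization in equation 10 can be sketched as a small output head (a minimal NumPy sketch, assuming the last entry of `logits` is the $\langle eos \rangle$ score $\boldsymbol{u}_{\langle eos \rangle}^{\top}\boldsymbol{h}_t$; the function name and layout are ours, not the authors' implementation):

```python
import numpy as np

def nmst_step(logits, t, eps=1e-5):
    """NMST head (eq. 10): next-token distribution at step t (t >= 1),
    assuming the LAST entry of `logits` scores <eos>."""
    logits = np.asarray(logits, dtype=np.float64)
    sig = 1.0 / (1.0 + np.exp(-logits[-1]))   # sigma(u_eos^T h_t)
    lb = 1.0 - (1.0 - eps) ** t               # lower-bound curve f_lb(t)
    alpha = (1.0 - sig) * lb + sig            # p(eos | y_<t, x), eq. 10
    # remaining mass is distributed over non-<eos> tokens by a softmax
    rest = np.exp(logits[:-1] - logits[:-1].max())
    return np.append((1.0 - alpha) * rest / rest.sum(), alpha)
```

By construction the $\langle eos \rangle$ probability never falls below $1 - (1-\epsilon)^{t}$, so it is driven to 1 as $t$ grows no matter what the logits do.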

Figure 1: An illustration of NMST parametrization in equation 10 where $f_{lb}(t) = 1 - (1 - \epsilon)^{t}$ , $f_{ub}(t) = 1$ , $\lambda(t') = \sigma(\boldsymbol{u}_{\langle eos\rangle}^{\top}\boldsymbol{h}_{t'})$ , and $g(t) = p_{\theta}^{nmst}(y_t = \langle eos \rangle | \boldsymbol{y}_{<t}, \boldsymbol{x})$ . If $g(t)$ lies between $f_{lb}(t)$ and $f_{ub}(t)$ , we can find $\lambda(t')$ such that $g(t') = (1 - \lambda(t'))f_{lb}(t') + \lambda(t')f_{ub}(t')$ for any $t'$ regardless of whether $g(t)$ is monotonic with respect to $t$ . This allows $p_{\theta}^{nmst}$ to learn a non-monotonic behavior of $p_{\theta}^{nmst}(y_t = \langle eos \rangle | \boldsymbol{y}_{<t}, \boldsymbol{x})$ . $p_{\theta}^{nmst}$ is consistent with respect to any incomplete probable decoding algorithms and beam search due to $\lim_{t \to \infty} f_{lb}(t) = 1 \Rightarrow \lim_{t \to \infty} p_{\theta}^{nmst}(y_t = \langle eos \rangle | \boldsymbol{y}_{<t}, \boldsymbol{x}) = 1$ .
Figure 1 shows that $p_{\pmb{\theta}}^{nmst}$ uses a convex combination of two curves to model $p_{\pmb{\theta}}^{nmst}(y_t = \langle eos \rangle | y_{<t}, x)$ . We can write any curve $g(t)$ lying between a lower-bound curve $f_{lb}(t)$ and an upper-bound curve $f_{ub}(t)$ as $g(t) = (1 - \lambda(t)) f_{lb}(t) + \lambda(t) f_{ub}(t)$ , with an appropriate $\lambda(t) \in (0,1)$ for all $t$ . $p_{\pmb{\theta}}^{nmst}$ sets $g(t)$ to $p_{\pmb{\theta}}^{nmst}(y_t = \langle eos \rangle | y_{<t}, x)$ , regarding it as a convex combination of $f_{lb}(t) = 1 - (1 - \epsilon)^t$ and $f_{ub}(t) = 1$ with a coefficient $\lambda(t) = \sigma(\mathbf{u}_{\langle eos \rangle}^{\top} h_t)$ . This enables non-monotonic $p_{\pmb{\theta}}^{nmst}(y_t = \langle eos \rangle | y_{<t}, x)$ . Moreover, in Theorem 3 below, we show that the proposed NMST parametrization in equation 10 still guarantees consistency with respect to any incomplete probable decoding algorithms and beam search.
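Rearranging equation 10 makes both properties explicit. Writing $\sigma_t = \sigma(\boldsymbol{u}_{\langle eos \rangle}^{\top}\boldsymbol{h}_t)$ ,

$$
\alpha_t = (1 - \sigma_t)\left(1 - (1-\epsilon)^{t}\right) + \sigma_t = 1 - (1 - \sigma_t)(1-\epsilon)^{t},
$$

so $1 - (1-\epsilon)^{t} \leq \alpha_t \leq 1$ for every $t$ , and $\alpha_t \to 1$ as $t \to \infty$ since $(1-\epsilon)^{t} \to 0$ while $(1 - \sigma_t)$ stays in $(0,1)$ . Within these bounds, $\sigma_t$ is free to move up or down with $t$ , which is what permits non-monotonic $\alpha_t$ .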
Theorem 3. An NMST language model defined in Definition 6 is consistent with respect to any incomplete probable decoding algorithms and beam search.2
Theorem 3 guarantees that every sequence decoded from $p_{\theta}^{nmst}$ with an incomplete probable decoding algorithm or beam search terminates. Neither $p_{\theta}^{nmst}$ nor $p_{\theta}^{st}$ produces non-terminating sequences under these decoding algorithms. Unlike ST parametrization, our NMST parametrization in equation 10 can capture a wider range of $p_{\theta}(y_t = \langle eos\rangle | \mathbf{y}_{<t}, \mathbf{x})$ , since $p_{\theta}^{nmst}$ does not assume that $p_{\theta}(y_t = \langle eos\rangle | \mathbf{y}_{<t}, \mathbf{x})$ is a monotonic function of $t$ . We empirically demonstrate this by comparing $p_{\theta}^{va}(y_t = \langle eos\rangle | \mathbf{y}_{<t}, \mathbf{x})$ , $p_{\theta}^{st}(y_t = \langle eos\rangle | \mathbf{y}_{<t}, \mathbf{x})$ , and $p_{\theta}^{nmst}(y_t = \langle eos\rangle | \mathbf{y}_{<t}, \mathbf{x})$ in Figure 3.
# 4 EXPERIMENTS
We empirically validate the effectiveness of the proposed non-monotonic self-terminating (NMST) language model by evaluating it on sequence completion tasks. We test three variants of a given architecture: (i) a vanilla (VA+) language model using the common softmax parametrization in equation 2, (ii) a self-terminating (ST+) language model using the ST parametrization proposed by Welleck et al. (2020), and (iii) our non-monotonic self-terminating (NMST+) language model using the NMST parametrization in equation 10. We use the following evaluation metrics for comparison:
- Perplexity: Given an autoregressive language model $p_{\theta}$ , the perplexity of $p_{\theta}$ over $\mathcal{D}$ is $\exp \left(-\frac{1}{N} \sum_{n=1}^{N} \sum_{t=1}^{T^{(n)}} \log p_{\theta}\left(y_t^{(n)} \mid \boldsymbol{y}_{<t}^{(n)}, \boldsymbol{x}^{(n)}\right)\right)$ , where $\mathcal{D} = \left\{\left(\boldsymbol{x}^{(n)}, \boldsymbol{y}^{(n)}\right)\right\}_{n=1}^{N}$ .
- Non-termination ratio $(r_{nt})$ : To establish the consistency of $p_{\theta}$ with respect to a given decoding algorithm $\mathcal{S}$ , we would need to compute $r_{nt} = q_{\mathcal{S}(p_{\theta})}(|\boldsymbol{y}| = \infty)$ . Instead, based on
$$
r_{nt} = q_{\mathcal{S}\left(p_{\boldsymbol{\theta}}\right)}\left(\left|\boldsymbol{y}\right| = \infty\right) = \lim_{L \rightarrow \infty} q_{\mathcal{S}\left(p_{\boldsymbol{\theta}}\right)}\left(\left|\boldsymbol{y}\right| > L\right), \tag{11}
$$
we use $r_{nt}(L) = q_{S(p_{\theta})}(|\pmb{y}| > L)$ with a sufficiently large threshold $L$ to estimate $r_{nt}$ .
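In practice, $r_{nt}(L)$ can be estimated directly from the lengths of decoded sequences; a minimal sketch (function name ours, assuming decoding is capped at some maximum number of steps):

```python
import numpy as np

def estimate_r_nt(lengths, L):
    """Monte-Carlo estimate of r_nt(L) = q(|y| > L) from decoded sequence
    lengths. A length of None (decoding hit its step cap without emitting
    <eos>) counts as exceeding any threshold L."""
    lengths = [np.inf if length is None else length for length in lengths]
    return float(np.mean([length > L for length in lengths]))
```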
Sequence completion is the task of predicting a continuation $\hat{\pmb{y}}$ given a $c$ -length context $\pmb{x} = (x_{1}, x_{2}, \dots, x_{c})$ , by sampling from a language model $p_{\theta}$ with a decoding algorithm $\mathcal{S}$ (i.e., $\hat{\pmb{y}} \sim q_{\mathcal{S}(p_{\theta})}(\pmb{y}|\pmb{x})$ ).

(a) RNN

(b) LSTM
Figure 2: Non-termination ratios, $r_{nt}(L)$ 's, as a function of $L$ in log-log scale for (a) RNN and (b) LSTM trained on WikiText-2 when using greedy search. We report mean (curve) $\pm$ st.dev. (shaded area) across 10 random experiments. For all configurations, both ST+ (non-red dashed) proposed by Welleck et al. (2020) and our NMST+ (non-red solid) are consistent with respect to greedy search since $r_{nt}(L)$ goes to 0 as $L$ increases. However, softmax parametrization (VA+, red dotted) is inconsistent with respect to greedy search since its $r_{nt}(L)$ does not converge to 0 as $L \to \infty$ .
In this section, we use greedy search, defined in equation 8, to generate $\hat{\pmb{y}}$ given $\pmb{x}$ . Our main theoretical finding in Theorem 3 is that the proposed NMST language model is consistent with respect to not only greedy search but also top- $k$ sampling, nucleus sampling, and beam search. We thus defer results with decoding algorithms other than greedy search to §5 and §F.
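The greedy search used throughout this section can be sketched as a simple loop (a minimal stand-in; `next_token_probs` is a hypothetical callback returning a trained model's next-token distribution, not the authors' code):

```python
def greedy_decode(next_token_probs, context, eos_id, max_steps=1000):
    """Greedy search: at each step pick argmax_v p(y_t = v | y_<t, x),
    stopping when <eos> is emitted or a step cap is reached."""
    prefix = []
    for _ in range(max_steps):
        probs = next_token_probs(prefix, context)
        y_t = max(range(len(probs)), key=probs.__getitem__)
        prefix.append(y_t)
        if y_t == eos_id:
            break
    return prefix
```

With a vanilla softmax model nothing forces the loop to reach `eos_id` before `max_steps`, which is exactly the non-termination behavior measured by $r_{nt}(L)$.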
# 4.1 WIKITEXT-2
WikiText-2 (Merity et al., 2016) consists of 2 million words from 600 Wikipedia articles. With word tokenization, we regard the first 10 tokens of each sequence as a context $\pmb{x}$ and its remaining part as a ground truth $\pmb{y}$ . We train an RNN with tanh activations (Elman, 1990) and an LSTM (Hochreiter & Schmidhuber, 1997) on WikiText-2. Both the RNN and LSTM have 2 layers, with 256 and 512 hidden units per layer, respectively. We perform 10 random runs with a batch size of 32 for 70 epochs. We use AdamW (Loshchilov & Hutter, 2017) with an initial learning rate of 0.001, $\beta_{1} = 0.9$ , $\beta_{2} = 0.99$ , weight decay of 0.01, learning rate decay, and early stopping. We further describe our models and training strategies for WikiText-2 experiments in §D. Unlike VA+{RNN, LSTM}, ST+{RNN, LSTM} and NMST+{RNN, LSTM} need an additional hyperparameter $\epsilon$ . We explore $\epsilon \in \{1.0 \times 10^{-5}, 5.0 \times 10^{-5}, 1.0 \times 10^{-4}, 5.0 \times 10^{-4}\}$ .
We present the average ( $\pm$ st.dev.) non-termination ratios, $r_{nt}(L)$ 's, across 10 random runs as a function of $L$ for all considered setups of WikiText-2 in Figure 2, using greedy search. From equation 11, a language model is consistent with respect to greedy search if $\lim_{L\to \infty}r_{nt}(L) = 0$ . As $L$ increases, we observe that $r_{nt}(L)$ 's of VA+{RNN, LSTM} fail to converge toward 0 while $r_{nt}(L)$ 's of ST+{RNN, LSTM} and NMST+{RNN, LSTM} all reach 0. In other words, RNN and LSTM are now consistent with respect to greedy search after replacing the original softmax parametrization with either the proposed NMST parametrization or ST parametrization.
Table 1 shows the average ( $\pm$ st.dev.) validation perplexities across 10 random experiments for all variants of RNN and LSTM trained on WikiText-2. We observe that NMST+{RNN, LSTM} have better validation perplexities than ST+{RNN, LSTM} for every $\epsilon$ . We demonstrate this more clearly in §E.1 by plotting the evolution of the mean validation perplexities as we vary $\epsilon$ . Although our NMST+ guarantees the consistency of RNN and LSTM with respect to greedy search with a better validation perplexity than ST+, we need to select $\epsilon$ carefully. As $\epsilon$ increases, the lower bound of $p_{\theta}^{nmst}(y_t = \langle eos \rangle | y_{<t}, x)$ grows faster, yielding prematurely terminated sequences when $\epsilon$ is too large. Indeed, the average validation perplexities of NMST+RNN and NMST+LSTM with $\epsilon = 5.0 \times 10^{-4}$ are 184.2 and 105.6, which degrade by 5.6 and 4.0 from those of VA+RNN and VA+LSTM, 178.6 and 101.6, respectively. We however emphasize that there is an optimal $\epsilon = 1.0 \times 10^{-5}$ that makes NMST+{RNN, LSTM} attain validation perplexities similar to those of VA+{RNN, LSTM}. In short, both NMST+ and ST+ prevent non-termination when using greedy search, but only NMST+ has a competitive validation perplexity against VA+. In §G, we further observe that the length distribution of sequences predicted by NMST+LSTM is closer to the length distribution of ground truth sequences than those of sequences predicted by {VA, ST}+LSTM.
Table 1: Mean (±st.dev.) validation perplexities across 10 random runs on WikiText-2 for various model configurations. Lower is better. Bold marks the best of each architecture. For all $\epsilon$ , the validation perplexities of our NMST+{RNN, LSTM} are better than those of ST+{RNN, LSTM} proposed by Welleck et al. (2020). Moreover, with a proper choice of $\epsilon = 1.0 \times 10^{-5}$ , NMST+{RNN, LSTM} have competitive validation perplexities against those of VA+{RNN, LSTM}.
<table><tr><td rowspan="2">ε</td><td colspan="2">RNN</td><td colspan="2">LSTM</td></tr><tr><td>ST+</td><td>NMST+</td><td>ST+</td><td>NMST+</td></tr><tr><td>5.0 × 10-4</td><td>186.1 ± (6.2)</td><td>184.2 ± (6.5)</td><td>106.1 ± (1.0)</td><td>105.6 ± (1.2)</td></tr><tr><td>1.0 × 10-4</td><td>181.0 ± (3.8)</td><td>177.4 ± (7.0)</td><td>104.6 ± (1.4)</td><td>102.5 ± (1.0)</td></tr><tr><td>5.0 × 10-5</td><td>182.6 ± (8.0)</td><td>179.6 ± (5.7)</td><td>104.7 ± (1.6)</td><td>102.1 ± (1.0)</td></tr><tr><td>1.0 × 10-5</td><td>180.4 ± (3.3)</td><td>177.4 ± (4.5)</td><td>104.5 ± (1.4)</td><td>101.5 ± (0.8)</td></tr><tr><td>VA+</td><td colspan="2">178.6 ± (6.3)</td><td colspan="2">101.6 ± (1.0)</td></tr></table>
Table 2: We present the average ( $\pm$ st.dev.) validation perplexities across 10 random runs for all variants of GPT-2 finetuned on WikiText-103. We also report their non-termination ratios (mean $\pm$ st.dev.), $r_{nt}(L)$ 's, when using greedy search. We set $L$ to 1,000 since the maximum length of generated sequences from GPT-2 is 1,024. For perplexity, lower is better. Bold marks the best validation perplexity across all setups. For every $\epsilon$ , NMST+GPT-2 outperforms ST+GPT-2 in terms of the average validation perplexity. From $r_{nt}(L)$ , NMST+GPT-2 effectively prevents non-terminating sequences compared to VA+GPT-2 for every $\epsilon$ , while ST+GPT-2 with small $\epsilon$ fails to avoid them. With a proper choice of $\epsilon$ (e.g., $\epsilon = 1.0 \times 10^{-5}$ ), NMST+GPT-2 also achieves a competitive validation perplexity.
<table><tr><td></td><td colspan="2">Perplexity</td><td colspan="2">r_nt(L)</td></tr><tr><td>ε</td><td>ST+</td><td>NMST+</td><td>ST+</td><td>NMST+</td></tr><tr><td>5.0 × 10-4</td><td>21.80 ± (0.02)</td><td>21.63 ± (0.02)</td><td>0.05 ± (0.03)</td><td>0.07 ± (0.03)</td></tr><tr><td>1.0 × 10-4</td><td>21.21 ± (0.02)</td><td>20.86 ± (0.02)</td><td>0.72 ± (0.11)</td><td>0.22 ± (0.10)</td></tr><tr><td>5.0 × 10-5</td><td>21.19 ± (0.03)</td><td>20.76 ± (0.02)</td><td>0.72 ± (0.11)</td><td>0.24 ± (0.10)</td></tr><tr><td>1.0 × 10-5</td><td>21.16 ± (0.03)</td><td>20.69 ± (0.03)</td><td>0.75 ± (0.10)</td><td>0.23 ± (0.10)</td></tr><tr><td>VA+</td><td colspan="2">20.72 ± (0.03)</td><td colspan="2">0.27 ± (0.08)</td></tr></table>
# 4.2 WIKITEXT-103
WikiText-103 (Merity et al., 2016) consists of 103 million words constructed from 28,000 articles. We use BPE tokenization (Sennrich et al., 2015) and take the first 10 tokens of each sequence as a context. Since WikiText-103 is substantially larger than WikiText-2, we finetune a pretrained GPT-2, a transformer language model with 124 million parameters (Radford et al., 2019), for 500,000 steps. For computational efficiency, we bucket the dataset into sequences of similar lengths, and each batch contains a maximum of 1,024 total tokens. We use AdamW (Loshchilov & Hutter, 2017) with an initial learning rate of $5.0 \times 10^{-5}$ , $\beta_{1} = 0.9$ , $\beta_{2} = 0.99$ , weight decay of 0.01, linear learning rate decay, and early stopping. We present a more detailed description in §D. We select $\epsilon$ from $\{1.0 \times 10^{-5}, 5.0 \times 10^{-5}, 1.0 \times 10^{-4}, 5.0 \times 10^{-4}\}$ for ST+GPT-2 and NMST+GPT-2.
We report the mean ( $\pm$ st.dev.) validation perplexities and non-termination ratios, $r_{nt}(L)$ 's, resulting from greedy search across 10 random runs for all GPT-2 setups finetuned on WikiText-103 in Table 2. Since GPT-2 can handle up to 1,024 tokens, we use $L = 1,000$ . As shown in Figure 2, we need a sufficiently large $L$ , such as $L = 10^5$ , to determine whether a language model is consistent with respect to greedy search. Although $L = 1,000$ is not sufficiently large, we observe that $r_{nt}(L)$ of NMST+GPT-2 decreases compared to $r_{nt}(L)$ of VA+GPT-2 as $\epsilon$ increases. That is, NMST+ reduces the number of non-terminating continuations within 1,000 steps. Fewer non-terminating sequences do not necessarily imply better quality, so in Table 3 we demonstrate sample continuations from NMST+GPT-2, using greedy search, given a context that leads to non-termination with VA+GPT-2. We observe that the quality of the generated sequence tends to improve with NMST+, which avoids repetitions of similar phrases and ends with $\langle \text{eos} \rangle$ . We present more example continuations in §E.3.
Table 3: Given a context from a validation instance of WikiText-103, we present example continuations of {VA, ST, NMST}+GPT-2 when using greedy search. We select $\epsilon = 1.0 \times 10^{-5}$ for {ST, NMST}+GPT-2 because it is optimal in terms of validation perplexities in Table 2. Unlike {VA, ST}+GPT-2, NMST+GPT-2 improves the quality of the sequence by avoiding repetitive tokens and ending with $\langle eos\rangle$ when the given context leads VA+GPT-2 to non-terminate within 1,000 steps.
<table><tr><td>Context</td><td>Made of concrete, steel, and wood, the</td></tr><tr><td>VA+</td><td>building was built in the mid @-@ 19th century. It was the first building in the United States to be built in concrete, and the first to be built in wood. It was also the first building in the United States to be built in steel. It was the first building in ...</td></tr><tr><td>ST+</td><td>building is constructed of steel and concrete. The building's exterior is made of steel and concrete. The building's interior is made of wood, and the building's exterior is made of concrete. The building's exterior is made of concrete, and the building's ...</td></tr><tr><td>NMST+</td><td>building was designed by the architectural firm of Bowers & Wainwright, and was completed in 1892. The building is the largest of its kind in the United States. <eos></td></tr></table>


Figure 3: We present $p_{\theta}(y_t = \langle eos \rangle | \pmb{y}_{<t}, \pmb{x})$ as a function of $t$ for validation instances of WikiText-103 where $p_{\theta}$ 's are {VA, ST, NMST}+GPT-2. For {ST, NMST}+GPT-2, we choose $\epsilon = 1.0 \times 10^{-5}$ because it is optimal in terms of validation perplexities in Table 2. Instead of $t$ , we tag the $t$ -th ground truth token. We report their mean (curve) $\pm$ st.dev. (shaded area) across 10 random runs. Unlike ST+GPT-2, NMST+GPT-2 can model non-monotonic behaviors of $p_{\theta}(y_t = \langle eos \rangle | \pmb{y}_{<t}, \pmb{x})$ with respect to $t$ . Both plots show that the non-monotonic behaviors occur where the sequences could end (e.g., after red marked tokens such as periods).
Similar to the results in §4.1, Table 2 shows that the validation perplexities of both ST+GPT-2 proposed by Welleck et al. (2020) and our NMST+GPT-2 degrade compared to VA+GPT-2 as $\epsilon$ increases. NMST+GPT-2 with the optimal $\epsilon = 1.0 \times 10^{-5}$ has a competitive validation perplexity of 20.69 against that of VA+GPT-2, 20.72. On the other hand, we cannot find an $\epsilon$ that makes the validation perplexity of ST+GPT-2 competitive with that of VA+GPT-2. Moreover, for $\epsilon \neq 5.0 \times 10^{-4}$ , the $r_{nt}(L)$ 's of ST+GPT-2 blow up, unlike those of VA+GPT-2. §E.2 demonstrates the inevitable perplexity degradation and exploding $r_{nt}(L)$ of ST+GPT-2. We suspect that this is due to $p_{\theta}(y_t = \langle \text{eos} \rangle | \mathbf{y}_{<t}, \mathbf{x})$ being forced to increase monotonically with $t$ .
We investigate the behavior of $p_{\theta}(y_t = \langle \mathrm{eos} \rangle | \mathbf{y}_{<t}, \mathbf{x})$ , where $p_{\theta}$ 's are {VA, ST, NMST}+GPT-2, in Figure 3. Based on Table 2, we select the optimal $\epsilon = 1.0 \times 10^{-5}$ in terms of validation perplexities for {ST, NMST}+GPT-2. In Figure 3, {VA, NMST}+GPT-2 capture well whether a sequence might end (e.g., after periods), showing non-monotonic behavior at those seemingly terminating steps, whereas ST+GPT-2 cannot model such behavior because it assumes that $p_{\theta}(y_t = \langle \mathrm{eos} \rangle | \mathbf{y}_{<t}, \mathbf{x})$ is a monotonic function of $t$ . This constraint makes ST+GPT-2 generate often finite but unnecessarily long sequences with greedy search (i.e., a higher $r_{nt}(L)$ than VA+GPT-2 for small $L$ , but $r_{nt}(L) = 0$ for sufficiently large $L$ ). We demonstrate more behaviors in §E.4.
# 5 CONSISTENCY WITH RESPECT TO OTHER DECODING ALGORITHMS
We explore the effectiveness of our proposed non-monotonic self-terminating (NMST) language model when using decoding algorithms other than greedy search, such as top- $k$ sampling (Fan et al.,
Table 4: Mean (±st.dev.) non-termination ratios, $r_{nt}(L)$ 's, across 10 random runs for the variants of GPT-2 finetuned on WikiText-103 with various decoding algorithms. We set $L$ to 1,000 due to GPT-2's context window size of 1,024. We use the optimal $\epsilon = 1.0 \times 10^{-5}$ , in terms of average validation perplexities in Table 2, for both NMST+GPT-2 and ST+GPT-2. Bold marks the lowest $r_{nt}(L)$ within each decoding algorithm (column). As with greedy search in Table 2, for all decoding algorithms, $r_{nt}(L)$ 's of NMST+GPT-2 are lower than those of ST+GPT-2 and VA+GPT-2. That is, NMST+ reduces the number of non-terminating sequences within 1,000 decoding steps.
<table><tr><td></td><td>top-2</td><td>top-4</td><td>nucleus-0.2</td><td>nucleus-0.4</td><td>beam-2</td><td>beam-4</td></tr><tr><td>VA+</td><td>0.0 ± (0.0)</td><td>0.0 ± (0.0)</td><td>0.25 ± (0.08)</td><td>0.14 ± (0.05)</td><td>0.05 ± (0.02)</td><td>0.03 ± (0.01)</td></tr><tr><td>ST+</td><td>0.0 ± (0.0)</td><td>0.0 ± (0.0)</td><td>0.73 ± (0.11)</td><td>0.55 ± (0.15)</td><td>0.29 ± (0.10)</td><td>0.15 ± (0.07)</td></tr><tr><td>NMST+</td><td>0.0 ± (0.0)</td><td>0.0 ± (0.0)</td><td>0.21 ± (0.10)</td><td>0.10 ± (0.06)</td><td>0.03 ± (0.02)</td><td>0.01 ± (0.01)</td></tr></table>
2018), nucleus sampling (Holtzman et al., 2020), and beam search. All experimental setups and notations are the same as in §4. According to Theorem 3, the NMST language model is consistent with respect to any incomplete probable decoding algorithm (e.g., greedy search, top- $k$ sampling, and nucleus sampling) and beam search for all $\epsilon > 0$ . To validate this, we use top-{2, 4} sampling, nucleus-{0.2, 0.4} sampling, and beam search with a width of {2, 4} (beam-{2, 4}) to generate sequences from NMST+GPT-2 finetuned on WikiText-103 with $\epsilon = 1.0 \times 10^{-5}$ . The choice of $\epsilon = 1.0 \times 10^{-5}$ is based on the validation perplexities in Table 2. Since validation perplexity does not depend on the decoding algorithm, we focus on the average ( $\pm$ st.dev.) non-termination ratios, $r_{nt}(L)$ 's, across 10 random runs with $L = 1,000$ for each decoding algorithm in Table 4. We also present $r_{nt}(L)$ 's of VA+GPT-2 and ST+GPT-2 with $\epsilon = 1.0 \times 10^{-5}$ as baselines.
Table 4 shows that our NMST+GPT-2 has the lowest $r_{nt}(L)$ with $L = 1,000$ for all decoding algorithms, compared to VA+GPT-2 and ST+GPT-2 proposed by Welleck et al. (2020). In other words, NMST+ effectively prevents non-terminating sequences within 1,000 time steps regardless of the decoding algorithm. Comparing with greedy search in Table 2 ( $r_{nt}(L)$ when $\epsilon = 1.0 \times 10^{-5}$ ), we observe that $r_{nt}(L)$ 's decrease for all setups. As we discussed in §2.3, non-terminating sequences originate from the choice of $\langle \text{eos} \rangle \notin \mathcal{V}_t \subsetneq \mathcal{V}$ for all $t$ , where $\mathcal{V}$ is a vocabulary and $\mathcal{V}_t$ is the proper subset of $\mathcal{V}$ considered by a decoding algorithm at the $t$ -th step. Decoding algorithms other than greedy search are likely to include $\langle \text{eos} \rangle$ in $\mathcal{V}_t$ and to have lower $r_{nt}(L)$ , since their $|\mathcal{V}_t|$ is greater than or equal to the $|\mathcal{V}_t| = 1$ of greedy search for all $t$ . In the case of top-{2, 4} sampling, we obtain $r_{nt}(L) = 0.0$ even for VA+GPT-2: even without NMST+, VA+ can avoid non-terminating sequences if we choose a proper decoding algorithm. We however emphasize that NMST+GPT-2 with $\epsilon = 1.0 \times 10^{-5}$ has a competitive validation perplexity against VA+GPT-2 in Table 2 and is guaranteed to terminate regardless of the choice of decoding algorithm. We also empirically demonstrate the consistency of NMST+{RNN, LSTM} trained on WikiText-2 with respect to other decoding algorithms in §F.
# 6 CONCLUSION
Non-termination is a degenerate behavior we often observe when generating text from a well-trained language model. To prevent this, Welleck et al. (2020) proposed a self-terminating language model that encourages the termination probability of each sequence, i.e., the conditional probability of $\langle eos\rangle$ given a $t$ -prefix and a context, to increase monotonically toward 1 as $t$ increases. In this paper, we theoretically demonstrate that a monotonically increasing termination probability is not a necessary condition for avoiding non-terminating sequences. We then propose a non-monotonic self-terminating language model in which the termination probability of each sequence converges to 1, but not necessarily monotonically. Our non-monotonic self-terminating language models successfully address the issue of non-termination and achieve perplexities that are comparable to those of vanilla language models and better than those of the original self-terminating language models.
# REPRODUCIBILITY STATEMENT
To ensure the reproducibility of our paper, we provide our code at https://github.com/nyu-dl/non-monotonic-self-terminating-lm.
# ACKNOWLEDGMENTS
This work was supported by 42dot, Hyundai Motor Company (under the project Uncertainty in Neural Sequence Modeling), Samsung Advanced Institute of Technology (under the project Next Generation Deep Learning: From Pattern Recognition to AI), and NSF Award 1922658 NRT-HDR: FUTURE Foundations, Translation, and Responsibility for Data Science. This work was supported in part through the NYU IT High Performance Computing resources, services, and staff expertise.
# REFERENCES
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Janvin. A neural probabilistic language model. In J. Mach. Learn. Res., 2000.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901, 2020.
Kyunghyun Cho, Bart Van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259, 2014.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.
Jeffrey L Elman. Finding structure in time. Cognitive science, 14(2):179-211, 1990.
Angela Fan, Mike Lewis, and Yann Dauphin. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 889-898, Melbourne, Australia, July 2018. Association for Computational Linguistics. doi: 10.18653/v1/P18-1082. URL https://aclanthology.org/P18-1082.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8): 1735-1780, 1997.
Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. The curious case of neural text degeneration. ArXiv, abs/1904.09751, 2020.
Philipp Koehn and Rebecca Knowles. Six challenges for neural machine translation. arXiv preprint arXiv:1706.03872, 2017.
Omer Levy and Yoav Goldberg. Neural word embedding as implicit matrix factorization. Advances in neural information processing systems, 27, 2014.
Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843, 2016.
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013a.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. Advances in neural information processing systems, 26, 2013b.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909, 2015.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929-1958, 2014.
Felix Stahlberg and Bill Byrne. On nmt search errors and model errors: Cat got your tongue? arXiv preprint arXiv:1908.10090, 2019.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017.
Oriol Vinyals and Quoc Le. A neural conversational model. arXiv preprint arXiv:1506.05869, 2015.
Sean Welleck, Ilia Kulikov, Stephen Roller, Emily Dinan, Kyunghyun Cho, and Jason Weston. Neural text generation with unlikelihood training. arXiv preprint arXiv:1908.04319, 2019.
Sean Welleck, Ilia Kulikov, Jaedeok Kim, Richard Yuanzhe Pang, and Kyunghyun Cho. Consistency of a recurrent language model with respect to incomplete decoding. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 5553-5568, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.448. URL https://aclanthology.org/2020.emnlp-main.448.
# APPENDIX
# A DEFINITIONS OF COMMON DECODING ALGORITHMS AND THEIR CHARACTERISTICS
In this section, we present mathematical definitions of top- $k$ sampling (Fan et al., 2018), nucleus sampling (Holtzman et al., 2020), greedy search, and beam search. We then discuss whether each is an incomplete probable decoding algorithm.
# A.1 TOP-K SAMPLING
At each step $t$ , top- $k$ sampling selects a subset of $k$ most probable tokens in a vocabulary $\mathcal{V}$ . Top- $k$ sampling generates decoded sequences from a language model $p_{\theta}$ as follows:
Definition A.1 (Top- $k$ sampling (Fan et al., 2018)). Top- $k$ sampling $\mathcal{S}_{top-k}$ generates a sequence from a language model $p_{\theta}$ given a context $\mathbf{x}$ by recursively sampling $\hat{y}_t$ from
$$
q_{\mathcal{S}_{\text{top-}k}\left(p_{\boldsymbol{\theta}}\right)}\left(y_{t} = v \mid \hat{\boldsymbol{y}}_{<t}, \boldsymbol{x}\right) = \begin{cases} \frac{p_{\boldsymbol{\theta}}\left(y_{t} = v \mid \hat{\boldsymbol{y}}_{<t}, \boldsymbol{x}\right)}{\sum_{v' \in \mathcal{V}_{t}} p_{\boldsymbol{\theta}}\left(y_{t} = v' \mid \hat{\boldsymbol{y}}_{<t}, \boldsymbol{x}\right)}, & \text{if } v \in \mathcal{V}_{t}, \\ 0, & \text{otherwise}, \end{cases} \tag{12}
$$
where
$$
\mathcal{V}_{t}=\underset{v \in \mathcal{V}}{\arg\operatorname{top-}k}\; p_{\boldsymbol{\theta}}\left(y_{t}=v \mid \hat{\boldsymbol{y}}_{<t}, \boldsymbol{x}\right). \tag{13}
$$
Except for the trivial case $k = |\mathcal{V}|$ , we have $\emptyset \subsetneq \mathcal{V}_t \subsetneq \mathcal{V}$ for all $t$ . By equation 13, equation 6 holds. From equation 12, we see that top- $k$ sampling satisfies equation 5 and equation 7. Therefore, top- $k$ sampling is an incomplete probable decoding algorithm.
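As a concrete sketch of equations 12 and 13, the renormalized top- $k$ next-token distribution can be computed in a few lines; the plain-list interface for `probs` is our illustrative assumption, not the paper's implementation:

```python
def top_k_distribution(probs, k):
    """Renormalize a next-token distribution over the k most probable
    tokens (equation 12); all other tokens get probability zero."""
    # V_t: indices of the k most probable tokens (equation 13).
    top = sorted(range(len(probs)), key=lambda v: probs[v], reverse=True)[:k]
    top_set = set(top)
    mass = sum(probs[v] for v in top)
    return [probs[v] / mass if v in top_set else 0.0 for v in range(len(probs))]

# Example: a 4-token vocabulary with k = 2; only the two most probable
# tokens keep (renormalized) mass.
q = top_k_distribution([0.5, 0.3, 0.15, 0.05], k=2)
```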
# A.2 NUCLEUS SAMPLING
At each step $t$ , nucleus sampling selects the smallest set of most probable tokens in the vocabulary $\mathcal{V}$ whose total probability exceeds a given threshold $\mu$ . Nucleus sampling generates decoded sequences from a language model $p_{\theta}$ as follows:
Definition A.2 (Nucleus sampling (Holtzman et al., 2020)). Nucleus sampling $\mathcal{S}_{\text{nuc-}\mu}$ generates a sequence from a language model $p_\theta$ given a context $\boldsymbol{x}$ by recursively sampling $\hat{y}_t$ from
$$
q_{\mathcal{S}_{\text{nuc-}\mu}(p_{\boldsymbol{\theta}})}\left(y_{t}=v \mid \hat{\boldsymbol{y}}_{<t}, \boldsymbol{x}\right)=\begin{cases}\frac{p_{\boldsymbol{\theta}}\left(y_{t}=v \mid \hat{\boldsymbol{y}}_{<t}, \boldsymbol{x}\right)}{\sum_{v^{\prime} \in \mathcal{V}_{t}} p_{\boldsymbol{\theta}}\left(y_{t}=v^{\prime} \mid \hat{\boldsymbol{y}}_{<t}, \boldsymbol{x}\right)}, & \text{if } v \in \mathcal{V}_{t}, \\ 0, & \text{otherwise}, \end{cases} \tag{14}
$$
where $\mathcal{V}_t$ is the smallest subset of $\mathcal{V}$ such that
$$
\sum_{v \in \mathcal{V}_{t}} p_{\boldsymbol{\theta}}\left(y_{t}=v \mid \hat{\boldsymbol{y}}_{<t}, \boldsymbol{x}\right) \geq \mu. \tag{15}
$$
If $\min_{v\in \mathcal{V}}p_{\boldsymbol{\theta}}(y_t = v|\boldsymbol{y}_{< t},\boldsymbol{x})\leq 1 - \mu$ for any context $\boldsymbol{x}$ and any $t$ -prefix $\boldsymbol{y}_{< t}$ , then we have $\emptyset \subsetneq \mathcal{V}_t\subsetneq \mathcal{V}$ for all $t$ . Suppose that equation 6 does not hold for nucleus sampling. This contradicts the fact that $\mathcal{V}_t$ is the smallest subset of $\mathcal{V}$ satisfying equation 15. From equation 14, we see that nucleus sampling satisfies equation 5 and equation 7. Therefore, nucleus sampling is incomplete and probable.
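The nucleus construction in equations 14 and 15 can be sketched analogously; again, the plain-list interface is our assumption:

```python
def nucleus_distribution(probs, mu):
    """Renormalize over the smallest set of most probable tokens whose
    total probability reaches mu (equations 14-15)."""
    order = sorted(range(len(probs)), key=lambda v: probs[v], reverse=True)
    nucleus, mass = set(), 0.0
    for v in order:
        nucleus.add(v)
        mass += probs[v]
        if mass >= mu:       # V_t is the smallest such set, so stop here
            break
    return [probs[v] / mass if v in nucleus else 0.0 for v in range(len(probs))]

# Example: mu = 0.7 keeps the two most probable tokens (0.5 + 0.3 >= 0.7).
q = nucleus_distribution([0.5, 0.3, 0.15, 0.05], mu=0.7)
```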
# A.3 BEAM SEARCH
Beam search is a heuristic algorithm that operates on the level of prefixes. We use the definition of beam search in Welleck et al. (2020).
Definition A.3 (Beam search, Definition A.2 in Welleck et al. (2020)). Beam search with a width (beam size) $k$ , $S_{\text{beam-k}}$ , generates a sequence from a language model $p_\theta$ by maintaining a set of $k$
prefixes, $\mathcal{P}_t = \{\pmb{\rho}^{(1)}(t),\pmb{\rho}^{(2)}(t),\dots ,\pmb{\rho}^{(k)}(t)\}$ , at each time step $t$ where $\pmb{\rho}^{(i)}(0)$ is an empty prefix for all $i$ . At each step $t\in \{1,2,\dots \}$ , beam search forms a set of $k\times k$ prefixes,
$$
\tilde{\mathcal{P}}_{t}=\bigcup_{\boldsymbol{\rho} \in \mathcal{P}_{t-1}}\left\{\boldsymbol{\rho} \circ v \mid v \in \mathcal{V}_{t}(\boldsymbol{\rho})\right\}, \tag{16}
$$
where $\rho \circ v$ is concatenation and
$$
\mathcal{V}_{t}(\boldsymbol{\rho})=\underset{v \in \mathcal{V}}{\arg\operatorname{top-}k}\; p_{\boldsymbol{\theta}}\left(y_{t}=v \mid \boldsymbol{\rho}, \boldsymbol{x}\right). \tag{17}
$$
After forming $\tilde{\mathcal{P}}_t$ , beam search selects a set of the $k$ highest scoring prefixes in $\tilde{\mathcal{P}}_t$ ,
$$
\mathcal{P}_{t}=\underset{\boldsymbol{\rho} \in \tilde{\mathcal{P}}_{t}}{\arg\operatorname{top-}k}\; s(\boldsymbol{\rho}), \tag{18}
$$
where $s(\pmb{\rho}) = \sum_{\tau=1}^{t} \log p_{\pmb{\theta}}(y_{\tau} = \pmb{\rho}_{\tau} | \pmb{\rho}_{<\tau}, \pmb{x})$ . If $\pmb{\rho} \in \mathcal{P}_t$ ends with $\langle \mathrm{eos} \rangle$ , then it does not expand further and is added to the final set $\mathcal{P}$ . Beam search continues until $\mathcal{P}$ contains $k$ sequences ending with $\langle \mathrm{eos} \rangle$ . After that, it returns the highest-scoring sequence
$$
\hat{\boldsymbol{y}}=\underset{\boldsymbol{\rho} \in \mathcal{P}}{\arg\max}\; s(\boldsymbol{\rho}). \tag{19}
$$
Unlike greedy search, top- $k$ sampling, and nucleus sampling, beam search recursively expands $k$ sequences with at most $k$ different prefixes. Therefore, we cannot formalize beam search at the token level via $q_{\mathcal{S}_{\mathrm{beam - }k}}(y_t = v|\pmb {y}_{< t},\pmb {x})$ . However, in equation 17, the number of candidate tokens at step $t$ is at most $k\times k$ . This means that $\mathcal{S}_{\mathrm{beam - }k}$ may exclude $\langle eos\rangle$ at time $t$ if $k\leq \sqrt{|\mathcal{V}| - 1}$ . Using this, Welleck et al. (2020) proved that a vanilla language model $p_{\theta}^{va}$ is inconsistent with respect to beam search, as shown in Theorem 1.
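Definition A.3 can be sketched as follows. The `next_logprobs` callback stands in for $\log p_{\boldsymbol{\theta}}(\cdot \mid \boldsymbol{\rho}, \boldsymbol{x})$ , and the toy model at the end is purely illustrative; this is a minimal sketch, not the authors' implementation:

```python
import math

EOS = "<eos>"

def beam_search(next_logprobs, k, max_len=20):
    """Beam search (Definition A.3): keep the k best prefixes, expand each
    by its k most probable next tokens (equation 17), keep the k
    highest-scoring candidates (equation 18), and collect finished
    sequences. Assumes at least one beam terminates within max_len."""
    beams = [((), 0.0)]          # (prefix rho, cumulative log-probability s(rho))
    finished = []
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            dist = next_logprobs(prefix)
            for tok, lp in sorted(dist.items(), key=lambda kv: -kv[1])[:k]:
                candidates.append((prefix + (tok,), score + lp))
        candidates.sort(key=lambda c: -c[1])
        beams = []
        for prefix, score in candidates[:k]:
            if prefix[-1] == EOS:
                finished.append((prefix, score))   # added to the final set P
            else:
                beams.append((prefix, score))
        if len(finished) >= k or not beams:
            break
    # Return the highest-scoring finished sequence (equation 19).
    return max(finished, key=lambda c: c[1])[0]

# Toy model: after "a" the model strongly prefers <eos>; otherwise "a".
def toy_model(prefix):
    if prefix and prefix[-1] == "a":
        return {EOS: math.log(0.7), "a": math.log(0.2), "b": math.log(0.1)}
    return {"a": math.log(0.6), "b": math.log(0.3), EOS: math.log(0.1)}
```

Note that, as the text observes, the first expansion step with $k = 2$ excludes $\langle eos\rangle$ from the candidates even though it has nonzero probability.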
# B PROOFS FOR §2.3
Remark 1. Let $\mathcal{D} = \left\{(\pmb{x}^{(1)},\pmb{y}^{(1)}),(\pmb{x}^{(2)},\pmb{y}^{(2)})\right\}$ be a two-instance training dataset. Assume that there exists $t_0$ such that $\pmb{y}_{< t_0} = \pmb{y}_{< t_0}^{(1)} = \pmb{y}_{< t_0}^{(2)}$ . Suppose further that $t_0 = |\pmb{y}^{(1)}| < |\pmb{y}^{(2)}| - 1$ and $\pmb{x} = \pmb{x}^{(1)} = \pmb{x}^{(2)}$ . If $\pmb{\theta}^{\star}$ is an optimal parameter configuration in equation 3 over $\mathcal{D}$ , then $p_{\pmb{\theta}^{\star}}\left(y_t^{(2)} = \langle \mathrm{eos}\rangle |\pmb{y}_{< t}^{(2)},\pmb{x}\right)$ is non-monotonic with respect to $t$ .
Proof. Since $\theta^{\star}$ is an optimal parameter configuration that perfectly maximizes the log-likelihood in equation 3 and $t_0 < |y^{(2)}| - 1$ , we have
$$
p_{\boldsymbol{\theta}^{\star}}\left(y_{t}^{(2)}=\langle eos \rangle \mid \boldsymbol{y}_{<t}^{(2)}, \boldsymbol{x}^{(2)}\right)=0, \tag{20}
$$
for $t < t_0$ . Note that $t_0 = |\pmb{y}^{(1)}| \Rightarrow \pmb{y}_{t_0}^{(1)} = \langle eos \rangle$ and $t_0 < |\pmb{y}^{(2)}| - 1 \Rightarrow \pmb{y}_{t_0}^{(2)} \neq \langle eos \rangle$ . From $\pmb{x} = \pmb{x}^{(1)} = \pmb{x}^{(2)}$ and $\pmb{y} = \pmb{y}_{<t_0}^{(1)} = \pmb{y}_{<t_0}^{(2)}$ , we obtain
$$
p_{\boldsymbol{\theta}^{\star}}\left(y_{t_{0}}^{(2)}=\langle eos \rangle \mid \boldsymbol{y}_{<t_{0}}^{(2)}, \boldsymbol{x}^{(2)}\right)=\frac{1}{2}. \tag{21}
$$
Moreover, $t_0 < |\pmb{y}^{(2)}| - 1$ implies that $\pmb{y}_{t_0 + 1}^{(2)}\neq \langle eos\rangle$ , which is equivalent to
$$
p_{\boldsymbol{\theta}^{\star}}\left(y_{t_{0}+1}^{(2)}=\langle eos \rangle \mid \boldsymbol{y}_{<t_{0}+1}^{(2)}, \boldsymbol{x}^{(2)}\right)=0. \tag{22}
$$
From equation 20, equation 21, and equation 22, we see that $p_{\pmb{\theta}^*}\big(y_t^{(2)} = \langle eos\rangle |\pmb{y}_{< t}^{(2)},\pmb{x}\big)$ is non-monotonic with respect to $t$ .
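The proof can be checked numerically on a minimal instance of the two-sequence dataset. The tokens below and the maximum-likelihood-by-counting model are our illustrative choices, with $t_0 = 2$ , $|\pmb{y}^{(1)}| = 2$ , and $|\pmb{y}^{(2)}| = 4$ :

```python
from collections import defaultdict

EOS = "<eos>"
# Same context x for both instances; shared prefix ("a",); t0 = |y1| = 2.
y1 = ("a", EOS)
y2 = ("a", "b", "c", EOS)

# The optimal model reproduces the empirical conditional distributions.
counts = defaultdict(lambda: defaultdict(int))
for y in (y1, y2):
    for t, tok in enumerate(y):
        counts[y[:t]][tok] += 1

def p_eos(prefix):
    total = sum(counts[prefix].values())
    return counts[prefix][EOS] / total

# eos probability along y2: 0 before t0 (eq. 20), 1/2 at t0 (eq. 21),
# 0 at t0 + 1 (eq. 22), then 1 at the end: non-monotonic in t.
trace = [p_eos(y2[:t]) for t in range(len(y2))]
```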
# C PROOFS FOR §3
Theorem 3. A non-monotonic self-terminating (NMST) language model defined in Definition 6 is consistent with respect to any incomplete probable decoding algorithm and with respect to beam search.
Proof. From equation 10, for any $\pmb{\theta} \in \mathbb{R}^k$ , we have
$$
\lim_{t \to \infty} p_{\boldsymbol{\theta}}^{nmst}\left(y_{t}=\langle eos \rangle \mid \boldsymbol{y}_{<t}, \boldsymbol{x}\right)=1,
$$
since $(1 - \epsilon)^{t} \to 0$ as $t \to \infty$ for $\epsilon \in (0,1)$ and $\sigma\left(\boldsymbol{u}_{\langle eos \rangle}^{\top} \boldsymbol{h}_{t}\right) \in (0,1)$ for any $t$ . Hence, there exists $t_{1/2}$ such that
$$
t \geq t_{1/2} \Rightarrow p_{\boldsymbol{\theta}}^{nmst}\left(y_{t}=\langle eos \rangle \mid \boldsymbol{y}_{<t}, \boldsymbol{x}\right)>\frac{1}{2}. \tag{23}
$$
Let $\mathcal{S}$ be any incomplete probable decoding algorithm. From equation 6 and equation 7, $\langle eos\rangle \in \mathcal{V}_t$ and $q_{\mathcal{S}(p_\theta^{nmst})}(y_t\neq \langle eos\rangle |\pmb {y}_{< t},\pmb {x}) < \frac{1}{2}$ hold for any $t\geq t_{1 / 2}$ . Therefore, we obtain
$$
\begin{aligned} q_{\mathcal{S}\left(p_{\boldsymbol{\theta}}^{nmst}\right)}(|\boldsymbol{y}|=\infty \mid \boldsymbol{x}) &=\prod_{t=1}^{\infty} q_{\mathcal{S}\left(p_{\boldsymbol{\theta}}^{nmst}\right)}\left(y_{t} \neq \langle eos \rangle \mid \boldsymbol{y}_{<t}, \boldsymbol{x}\right) \\ &\leq \prod_{t=t_{1/2}}^{\infty} q_{\mathcal{S}\left(p_{\boldsymbol{\theta}}^{nmst}\right)}\left(y_{t} \neq \langle eos \rangle \mid \boldsymbol{y}_{<t}, \boldsymbol{x}\right) \\ &<\prod_{t=t_{1/2}}^{\infty} \frac{1}{2}=0. \end{aligned} \tag{24}
$$
Taking the expectation of equation 24 over $\pmb{x}$ , we finally have $q_{\mathcal{S}(p_\theta^{nmst})}(|\pmb{y}| = \infty) = 0$ for any such $\mathcal{S}$ . In other words, $p_\theta^{nmst}$ is consistent with respect to any incomplete probable decoding algorithm.
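The geometric decay behind equation 24 can be checked numerically. The eos schedule below is a hypothetical stand-in for $p_{\theta}^{nmst}$ , chosen only so that it lies in $(0,1)$ and converges to 1 as in equation 10:

```python
def survival_probability(eos_prob, L):
    """P(|y| > L): the product over steps t = 1..L of the probability of
    NOT emitting eos at step t (the finite prefix of equation 24)."""
    p = 1.0
    for t in range(1, L + 1):
        p *= 1.0 - eos_prob(t)
    return p

# Hypothetical eos schedule of the NMST form: a curriculum floor that
# forces the eos probability toward 1 geometrically fast.
def eos_prob(t):
    return 1.0 - 0.9 ** t * 0.8

# Once eos_prob(t) > 1/2, each extra step multiplies the survival
# probability by less than 1/2, so P(|y| > L) vanishes as L grows.
tail = survival_probability(eos_prob, 50)
```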
In the case of beam search $\mathcal{S}_{\mathrm{beam - }k}$ defined in §A.3, without loss of generality, there exists $\pmb {\rho}\in \mathcal{P}_{t_{1 / 2}}$ that does not end with $\langle eos\rangle$ . Let $\mathcal{P}_{>t_{1 / 2}}(\pmb {\rho})$ be the set of the $k$ highest-scoring sequences continued from $\pmb{\rho}$ by $\mathcal{S}_{\mathrm{beam - }k}$ . From equation 23, we have
$$
p_{\boldsymbol{\theta}}^{nmst}(\langle eos \rangle \mid \boldsymbol{\rho}, \boldsymbol{x})>p_{\boldsymbol{\theta}}^{nmst}(v \mid \boldsymbol{\rho}, \boldsymbol{x})
$$
for all $v \in \mathcal{V} \setminus \{\langle eos \rangle\}$ . Hence, $\mathcal{V}_{t_{1/2}}(\rho)$ in equation 17 includes $\langle eos \rangle$ . Let $z = (z_1, z_2, \dots, z_l)$ be any subsequence with $z_1 \neq \langle eos \rangle$ . Then, we have
$$
\begin{aligned} p_{\boldsymbol{\theta}}^{nmst}(\boldsymbol{\rho} \circ \boldsymbol{z} \mid \boldsymbol{\rho}, \boldsymbol{x}) &=\prod_{i=1}^{l} p_{\boldsymbol{\theta}}^{nmst}\left(z_{i} \mid \boldsymbol{\rho} \circ \boldsymbol{z}_{<i}, \boldsymbol{x}\right) \\ &\leq p_{\boldsymbol{\theta}}^{nmst}\left(z_{1} \mid \boldsymbol{\rho}, \boldsymbol{x}\right) \\ &<p_{\boldsymbol{\theta}}^{nmst}(\langle eos \rangle \mid \boldsymbol{\rho}, \boldsymbol{x})=p_{\boldsymbol{\theta}}^{nmst}(\boldsymbol{\rho} \circ \langle eos \rangle \mid \boldsymbol{\rho}, \boldsymbol{x}), \end{aligned} \tag{25}
$$
where $\circ$ is concatenation. Therefore, $\pmb{\rho} \circ \langle eos \rangle$ maximizes $s(\pmb{\rho}') = \sum_{\tau=1}^{|\pmb{\rho}'|} \log p_{\pmb{\theta}}^{nmst}(\pmb{\rho}_{\tau}'|\pmb{\rho}_{<\tau}', \pmb{x})$ among all sequences starting with $\pmb{\rho}$ . That is, $\pmb{\rho} \circ \langle eos \rangle$ is the highest-scoring sequence starting with $\pmb{\rho}$ , and we have $\pmb{\rho} \circ \langle eos \rangle \in \mathcal{P}_{>t_{1/2}}(\pmb{\rho})$ .
For each $\pmb{\rho}^{\prime}\in \mathcal{P}_{>t_{1 / 2}}(\pmb {\rho})\setminus \{\pmb {\rho}\circ \langle eos\rangle \}$ , $\pmb{\rho}^{\prime}$ starts with $\pmb {\rho}\circ v$ for some $v\in \mathcal{V}\setminus \{\langle eos\rangle \}$ . By the same argument, at least one sequence ending with $\langle eos\rangle$ is added to $\mathcal{P}_{>t_{1 / 2}}(\pmb {\rho})$ at each subsequent step. This means that $\mathcal{P}_{>t_{1 / 2}}(\pmb {\rho})$ has $k$ sequences ending with $\langle eos\rangle$ within $t_{1 / 2} + k$ steps. Note that the final set $\mathcal{P}$ satisfies
$$
\mathcal{P} \subset \bigcup_{\boldsymbol{\rho} \in \mathcal{P}_{t_{1/2}}} \mathcal{P}_{>t_{1/2}}(\boldsymbol{\rho}). \tag{26}
$$
Equation 26 implies that every sequence in $\mathcal{P}$ has length at most $t_{1/2} + k$ . We thus obtain
$$
q_{\mathcal{S}_{\text{beam-}k}\left(p_{\boldsymbol{\theta}}^{nmst}\right)}\left(|\boldsymbol{y}|=\infty \mid \boldsymbol{x}\right) \leq q_{\mathcal{S}_{\text{beam-}k}\left(p_{\boldsymbol{\theta}}^{nmst}\right)}\left(|\boldsymbol{y}|>t_{1/2}+k \mid \boldsymbol{x}\right)=0. \tag{27}
$$
Taking the expectation of equation 27 over $\mathbf{x}$ , we see that $q_{\mathcal{S}_{\mathrm{beam - }k}(p_{\theta}^{nmst})}(|\boldsymbol{y}| = \infty) = 0$ . That is, $p_{\theta}^{nmst}$ is consistent with respect to beam search.
# D EXPERIMENTAL DETAILS
In this section, we describe our models and optimization processes used in $\S 4$ .
RNN and LSTM on WikiText-2 We use word tokenization for WikiText-2. We train an RNN with tanh activations (Elman, 1990) and an LSTM (Hochreiter & Schmidhuber, 1997) on WikiText-2. Both the RNN and the LSTM have 2 layers. Each layer has 256 hidden units in the RNN and 512 hidden units in the LSTM. The sizes of the input and output embedding layers are 256 and 512 for the RNN and LSTM, respectively. We use weight tying to share the weights between the input and output embedding layers of both models. We apply dropout (Srivastava et al., 2014) with drop probabilities of 0.3 and 0.5 to the RNN and LSTM, respectively. For each model, we perform 10 random runs with a batch size of 32 for 70 epochs. To maximize the log-likelihood presented in equation 3, we use AdamW (Loshchilov & Hutter, 2017) with an initial learning rate of 0.001, $\beta_{1} = 0.9$ , $\beta_{2} = 0.99$ , weight decay of 0.01, and a learning-rate schedule that halves the learning rate if the validation perplexity does not improve over a training epoch. To avoid overfitting, we additionally use early stopping, which terminates training if the validation perplexity does not improve upon the best score attained so far for 10 consecutive epochs. In most cases, training ends within 50 epochs.
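The learning-rate halving and early-stopping rules described above can be sketched as a small framework-agnostic helper. This is an illustrative sketch of the stated schedule, not the authors' training code:

```python
class PlateauTrainerState:
    """Halve the learning rate whenever validation perplexity fails to
    improve over the previous epoch, and stop after `patience` epochs
    without a new best score (the schedule described in the text)."""

    def __init__(self, lr=1e-3, patience=10):
        self.lr = lr
        self.patience = patience
        self.best = float("inf")
        self.epochs_since_best = 0
        self.prev = float("inf")

    def update(self, val_ppl):
        """Record one epoch's validation perplexity.

        Returns True while training should continue, False to stop."""
        if val_ppl >= self.prev:          # no improvement this epoch
            self.lr *= 0.5                # halve the learning rate
        self.prev = val_ppl
        if val_ppl < self.best:           # new best score: reset patience
            self.best = val_ppl
            self.epochs_since_best = 0
        else:
            self.epochs_since_best += 1
        return self.epochs_since_best < self.patience
```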
GPT-2 on WikiText-103 We use BPE tokenization $^4$ (Sennrich et al., 2015) and the pretrained GPT-2 $^5$ (Radford et al., 2019) with 124 million parameters, provided by HuggingFace. GPT-2 can handle up to 1,024 tokens. We apply dropout (Srivastava et al., 2014) with a drop probability of 0.1 to GPT-2. We finetune GPT-2 for 300,000 steps while ensuring that all runs continue for at least 250,000 steps. To minimize the number of padding tokens in every batch for computational efficiency, we bucket the dataset into sequences of similar lengths, and each batch contains a maximum of 1,024 total tokens. To maximize the log-likelihood function in equation 3, we use AdamW (Loshchilov & Hutter, 2017) with an initial learning rate of $5.0 \times 10^{-5}$ , $\beta_{1} = 0.9$ , $\beta_{2} = 0.99$ , weight decay of 0.01, and linear learning rate decay over 500,000 steps.
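The length-bucketing scheme can be sketched as follows. The greedy packing rule and the token accounting (longest sequence in the batch times batch size, i.e., counting padding) are our assumptions about one reasonable implementation:

```python
def bucket_batches(lengths, max_tokens=1024):
    """Group sequence indices into batches of similar-length sequences so
    that padding is minimized and each padded batch holds at most
    `max_tokens` tokens (longest sequence times batch size)."""
    order = sorted(range(len(lengths)), key=lambda i: lengths[i])
    batches, batch, longest = [], [], 0
    for i in order:
        longest_if_added = max(longest, lengths[i])
        if batch and longest_if_added * (len(batch) + 1) > max_tokens:
            batches.append(batch)          # current batch is full: flush it
            batch, longest = [], 0
            longest_if_added = lengths[i]
        batch.append(i)
        longest = longest_if_added
    if batch:
        batches.append(batch)
    return batches
```

Because sequences are visited in length order, short sequences end up batched together and the per-batch padding overhead stays small.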
# E ADDITIONAL PLOTS AND TABLES FOR §4
In this section, we present additional plots and tables for $\S 4$ .
# E.1 ADDITIONAL PLOTS FOR $\S 4.1$

Figure 4: Validation perplexities as a function of $\epsilon$ in log-linear scale for all configurations of RNN (left) and LSTM (right), which are trained on WikiText-2. We present their average (curve) $\pm$ st.dev. (shaded area) across 10 random experiments. For all $\epsilon$ and architectures, NMST+ has better validation perplexities than ST+. As $\epsilon$ increases, the validation perplexities of both NMST+RNN and NMST+LSTM degrade compared to those of VA+RNN and VA+LSTM. We thus need to search for an optimal $\epsilon$ to avoid degradation of validation perplexity when applying NMST+ to our language model.

# E.2 ADDITIONAL PLOTS FOR §4.2

Figure 5: We present the average (curve) $\pm$ st.dev. (shaded area) of validation perplexities (left) and non-termination ratios $r_{nt}(L)$ (right) with greedy search across 10 random runs for all considered setups of GPT-2 finetuned on WikiText-103 in log-linear scale. For $r_{nt}(L)$ , we use $L = 1,000$ because GPT-2 has a context window size of 1,024. For all $\epsilon$ , NMST+GPT-2 outperforms ST+GPT-2 in terms of the average validation perplexity. When $\epsilon$ is small, $r_{nt}(L)$ of ST+GPT-2 explodes. This means that ST+GPT-2 with small $\epsilon$ cannot prevent non-terminating sequences. However, our NMST+GPT-2 effectively reduces $r_{nt}(L)$ compared to VA+GPT-2 for every $\epsilon$ , and its validation perplexity degradation is smaller than that of ST+GPT-2 proposed by Welleck et al. (2020).

# E.3 ADDITIONAL TABLES FOR TABLE 3
Table 5: Given a context from a validation instance of WikiText-103, we present example continuations of {VA, ST, NMST}+GPT-2 when using greedy search. We select $\epsilon = 1.0 \times 10^{-5}$ for {ST, NMST}+GPT-2 because it is optimal in terms of validation perplexities in Table 2. Unlike {VA, ST}+GPT-2, NMST+GPT-2 improves the quality of the sequence by avoiding repetitive tokens and ending with $\langle eos\rangle$ when the given context leads VA+GPT-2 to fail to terminate within 1,000 steps.
<table><tr><td>Context</td><td>The single made its Irish Singles Chart debut at</td></tr><tr><td>VA+</td><td>number three on the issue date of November 18, 2005. It debuted at number one on the chart for the issue date of December 7, 2005. The single debuted at number two on the chart for the issue date of January 4, 2006. The single debuted at number one on the Irish Singles Chart for the issue date of March 4, 2006. The single debuted at number two on the Irish Singles Chart for the issue date of June 4, 2006. The single debuted at number one on the Irish Singles Chart for the issue date ...</td></tr><tr><td>ST+</td><td>number three on the chart issue dated March 16, 2010. It debuted at number two on the UK Singles Chart on the chart issue dated March 16, 2010, and peaked at number one on the chart issue dated March 16, 2010. The single was certified gold by the British Phonographic Industry (BPI) for shipments of over 15 @, @ 000 copies. The single debuted at number two on the Irish Singles Chart on the chart issue dated March 16, 2010, and peaked at number one on the chart issue dated March 16, 2010. The single was certified gold by the Irish Recorded Music Association (IRMA) for shipments of over 15 @, @ 000 copies. The single was ...</td></tr><tr><td>NMST+</td><td>number twenty @-@ seven on the week ending March 26, 2010, and peaked at number three on the week ending March 27, 2010. It was certified gold by the Recording Industry Association of Ireland (RIAA) for shipment of 500 @, @ 000 copies of the single. The single was certified gold by the Recording Industry Association of Ireland (RIANZ) for shipment of 500 @, @ 000 copies of the single.<eos></td></tr><tr><td>Context</td><td>Despite the expensive reconstructions, both vessels were considered</td></tr><tr><td>VA+</td><td>to be of sufficient quality to be considered for use in the Grand Fleet. 
The first, the British @-@ built, British @-@ built, British @-@ built, British @-@ built, British @-@ built, British @-@ built, British @-@ built, British @-@ built, British @-@ built, British @-@ built, British @-@ built, British @-@ built, British @-@ built, British @-@ built, British @-@ built, British @-@ built, British @-@ built, British @- @ built, British @-@ built, British @-@ built, British @-@ built, British @-@ built, British @-@ built, British @-@ built, British @-@ built, British @-@ built, British @-@ built, British @-@ built, British @-@ built, British @-@ built, British @-@ built, British @-@ built, British @-@ built, British @-@ built, British at-@ built, British @-@ built, British @-@ built, British @-@ built, British @-@ built, British @-@ built, British @-@ built, British @-@ built, British @-@ built, British @-@ built, British @-@ built, British @-@ built, British @-@ built, British @-@ built, British @-@ built, British @-@ built, British @-@ built ...</td></tr><tr><td>ST+</td><td>to be of sufficient quality to be considered a part of the Royal Navy, and were as-signed to the Channel Fleet. The two ships were transferred to the Mediterranean Fleet in early 1915, and were transferred to the North America and West Indies Station in early 1916. They were transferred to the Mediterranean Fleet in early 1917, and were transferred to the North America and West Indies Station in early 1918. They were transferred to the North America and West Indies Station in early 1918, and were transferred to the Mediterranean Fleet in early 1918. 
They were transferred to the North America and West Indies Station in late 1918, and ...</td></tr><tr><td>NMST+</td><td>obsolescent by the time of the Second World War, and were eventually scrapped in favour of the much more powerful British battleship HMS Dreadnought.<eos></td></tr><tr><td>Context</td><td>On the July 21, 2014 episode of Raw,</td></tr><tr><td>VA+</td><td>Triple H and Randy Orton were involved in a tag team match, in which Triple H and Orton were accompanied by Triple H and Randy Orton's manager, Mr. Kennedy. Orton was accompanied by Orton's manager, Mr. Kennedy, and Orton's tag team partner, Mr. Kennedy. Triple H and Orton were accompanied by Orton's manager, Mr. Kennedy, and Orton's tag team partner, Mr. Kennedy. Triple H and Orton were accompanied by Orton's manager, Mr. Kennedy, and Orton were accompanied by Orton's manager, Mr. Kennedy, and Orton were accompanied by Orton's manager, Mr. Kennedy, and Orton were accompanied by Orton's manager, Mr. Kennedy, and Orton were accompanied by Orton's manager, Mr. Kennedy, and Orton were accompanied by Orton's manager, Mr. Kennedy, and Orton were accompanied by Orton's manager, Mr. Kennedy, and Orton were accompanied by Orton's Manager, Mr. Kennedy, and Orton were accompanied by Orton's manager, Mr. Kennedy, and Orton were accompanied by Orton's manager, Mr. Kennedy, and Orton were accompanied by Orton's manager, Mr. Kennedy, and Orton were accompanied by Orton's manager, Mr. Kennedy, and Orton were accompanied by Orton's manager, Mr. Kennedy, and Orton were accompanied by Orton's manager, Mr. Kennedy, and Orton were experienced by Orton's manager, Mr. Kennedy, and Orton were experienced by Orton's manager, Mr. Kennedy, and Orton were experienced by Orton's manager, Mr. Kennedy, and Orton were experienced by Orton's manager, Mr. Kennedy, and Orton were experienced by Orton's manager, Mr. Kennedy, and Orton were experienced by Orton's manager, Mr. 
Kennedy, and Orton were experienced by Orton's manager, Mr. Kennedy.</td></tr><tr><td>ST+</td><td>Triple H defeated Randy Orton to win the World Heavyweight Championship. On the July 27 episode of Raw, Triple H defeated Randy Orton to win the World Heavyweight Championship. On the July 30 episode of Raw, Triple H defeated Randy Orton to win the World Heavyweight Championship. Triple H then defeated Randy Orton to win the World Heavyweight Championship. On the August 2 episode of Raw, Triple H defeated Randy Orton to win the World Heavyweight Championship. Triple H then defeated Randy Orton to win the World Heavyweight Championship. Triple H then defeated Randy Orton to win the World Heavyweight Championship. Triple H then defeated Randy Orton to win the World Heavyweight Championship. Triple H then defeated Randy Orton to win the World Heavyweight Championship. Triple H then defeated Randy Orton to win the World Heavyweight Championship. Triple H then defeated Randy Orton to win the World Heavyweight Championship. Triple H then defeated Randy Orton to win the World HeavyWeight Championship. Triple H then defeated Randy Orton to win the World Heavyweight Championship. Triple H then defeated Randy Orton to win the World Heavyweight Championship. Triple H then defeated Randy Orton to win the World Heavyweight Championship. Triple H then defeated Randy Orton to win the World Heavyweight Championship. Triple H then defeated Randy Orton to win the World Heavyweight Championship. Triple H then defeated Randy Orton to win the World Heavyweight Championship. 
Triple H then defeated Randy Orton.</td></tr><tr><td>NMST+</td><td>Triple H defeated Randy Orton to win the WWE Championship for the third time.(eos)</td></tr><tr><td>Context</td><td>On the west side of the Naktong,</td></tr><tr><td>VA+</td><td>the NK 6th Division was reinforced by the NK 7th Division, which was reinforced by the NK 6th Division, which was reinforced by the NK 6th Division, which was reinforced by the NK 6th Division, which was reinforced by the NK 6th Division, which was reinforced by the NK 6th Division, which was reinforced by the NK 6th Division, which was reinforced by the NK 6th Division, which was reinforced by the NK 6th Division, which was reinforced by the NK 6th Division, which was reinforced by the NK 6h Division, which was reinforced by the NK 6th Division, which was reinforced by the NK 6th Division, which was reinforced by the NK 6th Division, which was reinforced by the NK 6th Division, which was reinforced by the NK 6th Division, which was reinforced by the NK 6th Division, which was reinforced by the NK 6th Division, which was reinforced by the NK 6th Division, which was reinforced by the NK 6thDivision, which was reinforced by the NK 6thDivision, which was reinforced by the NK 6thDivision, which was reinforced by the NK 6thDivision, which was reinforced by the NK 6thDivision, which was reinforced by the NK 6thDivision, which was reinforced by the NK 6thDivision, which was reinforced by the NK 6thDivision, which was reinforced by the NK 6thDivision, which was reinforced by the NK 6thDivision.</td></tr><tr><td>ST+</td><td>the 2nd Battalion, 27th Infantry Regiment, which had been holding up the North Koreans in the hills east of the Naktong, was ordered to withdraw to the Naktong itself. The 2nd Battalion, 27th Infantry Regiment, which had been holding up the North Koreans in the hills east of the Naktong, was ordered to withdraw to the Naktong itself. 
The 2nd Battalion, 27th Infantry Regiment, which had been holding up the North Koreans in the hills east of the Naktong, was ordered to withdraw to the Naktong itself. The 2nd Battalion, 27th Infantry Regiment, which had been持有 up the North Koreans in the hills east of the Naktong, was ordered to withdraw to the Naktong itself. The 2nd Battalion, 27th Infantry Regiment, which had been holding up the North Koreans in the hills east of the Naktong, was ordered to withdraw to the Naktong itself. The 2nd Battalion, 27th Infantry Regiment, which had been holding up the North Koreans in the hills east of the Naktong, was orderd to withdraw to the Naktong itself. The 2nd Battalion, 27th Infantry Regiment, which had been ...</td></tr><tr><td>NMST+</td><td>the Naktong River rises to a height of 1 @, @ 000 metres (3 @, @ 300 ft) above the surrounding terrain. The Naktong River is the longest river in North and South Korea, and the longest in the whole of North Korea. The Naktong is the longest river in North Korea, and the longest in the whole of North Korea. The river is the longest in the entire country, and the longest in the whole of North Korea.(eos)</td></tr><tr><td>Context</td><td>During spring training in a game against the Boston Red</td></tr><tr><td>VA+</td><td>Sox on April 4, 2009, Kinsler was placed on the 15 @-@ day disabled list with a strained right hamstring. He returned to the active roster on April 7, and was placed on the 15 @-@ day DL with a strained right hamstring. He was placed on the 15 @-@ day DL on May 4, and was placed on the 15 @-@ day DL on May 5. On May 6, he was placed on the 15 @-@ day DL with a strained right hamstring. He was placed on the 15 @-@ day DL on May 9. On May 10, he was placed on the 15 @-@ day DL with a strained right hamstring. He was placed on the 15 @-@ day DL on May 11. He was placed on the 15 @-@ day DL on May 13. He was placed on the 15 @-@ day DL on May 20. He was placed on the 15 @-@ day DL on May 24. 
He was placed on the 15 @-@ day DL on May 28. He was placed on the 15 @-@ day DL on June 2. He was placed on ...</td></tr><tr><td>ST+</td><td>Sox on April 20, 2011, he was placed on the disabled list with a back injury. He returned to the disabled list on May 10, 2011, and was activated on May 17. He was activated on May 20 and made his first appearance on May 21. He was activated on June 2 and made his first appearance on June 4. He was activated on June 8 and made his first appearance on June 10. He was activated on June 15 and made his first appearance on June 17. He was activated on June 20 and made his first appearance on June 23. He was activated on June 29 and made his first appearance on July 1. He was activated on July 1 and made his first appearance on July 4. He was activated on July 6 and made his first appearance on July 10. He was activated on July 14 and made his first appearance on July 16. He was activated on July 20 and made his first appearance on July 23. He was ...</td></tr><tr><td>NMST+</td><td>Sox on April 16, 2010, the Yankees signed Rivera to a one @-@ year, $ 2 @.@ 5 million contract. He made his major league debut on April 21, 2010, against the Boston Red Sox. He pitched a scoreless inning in the first inning of the first game of the 2010 World Series against the New York Mets. On May 1, 2010, Rivera was traded to the Pittsburgh Pirates in exchange for J. J. Hardy.(eos)</td></tr></table>
# E.4 ADDITIONAL PLOTS FOR FIGURE 3





Figure 6: Additional plots of $p_{\theta}(y_t = \langle eos \rangle | \mathbf{y}_{<t}, \mathbf{x})$ as a function of $t$ for validation instances of WikiText-103, where the $p_{\theta}$ 's are {VA, ST, NMST}+GPT-2. For {ST, NMST}+GPT-2, we choose $\epsilon = 1.0 \times 10^{-5}$ because it is optimal in terms of validation perplexities in Table 2. Instead of $t$ itself, we label each step with the $t$ -th ground-truth token. We report the mean (curve) ± st.dev. (shaded area) across 10 random runs. Unlike ST+GPT-2, NMST+GPT-2 exhibits non-monotonic behavior at plausible termination points (e.g., after red-marked tokens such as periods).
# F CONSISTENCY WITH RESPECT TO OTHER DECODING ALGORITHMS FOR RNN AND LSTM
We validate the consistency of our proposed non-monotonic self-terminating (NMST) language model when using decoding algorithms other than greedy search, such as top- $k$ sampling (Fan et al., 2018), nucleus sampling (Holtzman et al., 2020), and beam search. All experimental setups and notations are the same as in §4. We use top- $\{2,4\}$ sampling, nucleus- $\{0.2,0.4\}$ sampling, and beam search with a width of $\{2,4\}$ (beam- $\{2,4\}$ ) to generate sequences from NMST+\{RNN, LSTM\} trained on WikiText-2 with $\epsilon = 1.0 \times 10^{-5}$ . The choice of $\epsilon = 1.0 \times 10^{-5}$ is made based on the validation perplexities in Table 1. Since the validation perplexity does not change with the decoding algorithm, we focus on the average ( $\pm$ st.dev.) non-termination ratios, $r_{nt}(L)$ 's, across 10 random runs as a function of $L$ , for each decoding algorithm in Figure 7. We also plot the evolution of $r_{nt}(L)$ 's for VA+\{RNN, LSTM\} and ST+\{RNN, LSTM\} with $\epsilon = 1.0 \times 10^{-5}$ as we vary $L$ .
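The non-termination ratio $r_{nt}(L)$ used throughout this section can be computed as a simple fraction; lists of tokens stand in for decoded sequences in this sketch:

```python
def non_termination_ratio(sequences, L, eos="<eos>"):
    """r_nt(L): fraction of decoded sequences that do not emit the eos
    token within their first L tokens."""
    non_terminated = sum(1 for y in sequences if eos not in y[:L])
    return non_terminated / len(sequences)

# Three decoded sequences: two terminate within 5 steps, one never does.
seqs = [["a", "b", "<eos>"], ["a"] * 10, ["a", "a", "a", "<eos>"]]
r = non_termination_ratio(seqs, L=5)
```

A consistent model drives $r_{nt}(L)$ toward 0 as $L$ grows, which is exactly what Figure 7 plots.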
|
| 480 |
+
|
| 481 |
+

|
| 482 |
+
Figure 7: Non-termination ratios, $r_{nt}(L)$ 's, of sequences generated from all variants of RNN (top) and LSTM (bottom), trained on WikiText-2, when using top- $k$ sampling (left), nucleus sampling (middle), and beam search (right), as a function of $L$ in log-log scale. We use the first 10 tokens of every WikiText-2 validation instance as a context. We present their average (curve) with their min-max range (shaded area) across 10 random experiments. VA+ (orange) displays inconsistency $(\lim_{L\to \infty}r_{nt}(L) > 0)$ for all combinations of model architectures and decoding algorithms, except for VA+RNN using top-4 (orange dashed in top left) and VA+LSTM using top-{2,4} (orange solid and dashed in bottom left, respectively). On the other hand, NMST+ (blue) and ST+ (green) show consistency $(\lim_{L\to \infty}r_{nt}(L)\rightarrow 0)$ across all configurations. By using decoding algorithms other than greedy search, VA+LSTM can avoid non-terminating sequences (e.g., top-{2,4}). However, as shown in Table 1, NMST+{RNN, LSTM} not only have better validation perplexities than VA+{RNN, LSTM} and ST+{RNN, LSTM} but also are consistent with respect to all decoding algorithms.
|
| 483 |
+
|
| 484 |
+
# G ANALYSIS OF PREDICTED SEQUENCE LENGTH DISTRIBUTIONS IN §4.1
|
| 485 |
+
|
| 486 |
+
We investigate whether our proposed non-monotonic self-terminating (NMST+) language model matches the data length distribution better than the baselines: i) a vanilla (VA+) language model and ii) a self-terminating (ST+) language model. For this, we compare the length distributions of predicted sequences from {VA, ST, NMST}+LSTM trained on WikiText-2 with the data length distribution of ground truth sequences in the WikiText-2 validation dataset, $\mathcal{D}_{val}$ , when using greedy search. All experimental setups and notations are the same as §4.1.
|
| 487 |
+
|
| 488 |
+
Figure 8 shows the length distributions of $\{\mathrm{VA},\mathrm{ST},\mathrm{NMST}\} +\mathrm{LSTM}$ , and $\mathcal{D}_{val}$ . For $\{\mathrm{ST},$ NMST\}+LSTM, we use $\epsilon = 1\times 10^{-5}$ because this choice is optimal in terms of validation perplexities based on Table 1. We observe that the length distribution of predicted sequences from NMST+LSTM is closer to the data length distribution of $\mathcal{D}_{val}$ than those of predicted sequences from VA+LSTM and ST+LSTM.
|
| 489 |
+
|
| 490 |
+

|
| 491 |
+
|
| 492 |
+

|
| 493 |
+
|
| 494 |
+

|
| 495 |
+
Figure 8: Length distributions of generated sequences from $\{\mathrm{VA},\mathrm{ST},\mathrm{NMST}\} +\mathrm{LSTM}$ trained on WikiText-2 and the data length distribution of ground truth sequences in WikiText-2 validation dataset, $\mathcal{D}_{val}$ . For $\{\mathrm{ST},\mathrm{NMST}\} +\mathrm{LSTM}$ , we select $\epsilon = 1.0\times 10^{-5}$ since it is optimal in terms of validation perplexities in Table 1. NMST+LSTM better models the length distribution of $\mathcal{D}_{val}$ than both VA+LSTM and ST+LSTM.
|
| 496 |
+
|
| 497 |
+

|
| 498 |
+
|
| 499 |
+
Furthermore, we can tune $\epsilon$ to make the predicted length distribution of NMST+LSTM agree with the ground truth length distribution of $\mathcal{D}_{val}$ . In Figure 9, we compare NMST+LSTM's predicted length distribution with $\epsilon = 5 \times 10^{-4}$ against that with $\epsilon = 1 \times 10^{-5}$ . We see that $\epsilon = 5 \times 10^{-4}$ better models the data length distribution than $\epsilon = 1 \times 10^{-5}$ . However, in this case, the average validation perplexity of NMST+LSTM degrades from $101.5\left(\epsilon = 1 \times 10^{-5}\right)$ to $105.6\left(\epsilon = 5 \times 10^{-4}\right)$ , as shown in Table 1.
|
| 500 |
+
|
| 501 |
+

|
| 502 |
+
|
| 503 |
+

|
| 504 |
+
|
| 505 |
+

|
| 506 |
+
|
| 507 |
+
|
| 508 |
+

|
| 509 |
+
|
| 510 |
+
Figure 9: Length distributions of predicted sequences from NMST+LSTM trained on WikiText-2 for various $\epsilon$ 's and the data length distribution of ground truth sequences in the WikiText-2 validation dataset, $\mathcal{D}_{val}$ . The length distribution of NMST+LSTM using $\epsilon = 5.0 \times 10^{-4}$ matches the data length distribution of $\mathcal{D}_{val}$ better than that of NMST+LSTM using $\epsilon = 1.0 \times 10^{-5}$ . We can choose $\epsilon$ to make the predicted length distribution of NMST+LSTM agree with the ground truth length distribution.
|
2023/A Non-monotonic Self-terminating Language Model/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:1b09134df68a8cfc366d2f2fe116b31bc82a3d66a250548fdab39958c13de891
|
| 3 |
+
size 1811043
|
2023/A Non-monotonic Self-terminating Language Model/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2023/A Self-Attention Ansatz for Ab-initio Quantum Chemistry/1495f6b2-6e9f-4296-a806-8e145de397eb_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2023/A Self-Attention Ansatz for Ab-initio Quantum Chemistry/1495f6b2-6e9f-4296-a806-8e145de397eb_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2023/A Self-Attention Ansatz for Ab-initio Quantum Chemistry/1495f6b2-6e9f-4296-a806-8e145de397eb_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:699bb8e9f82ab2c1a8592ff4189e5876bd345d0dc529b80df0693d679b4cf81d
|
| 3 |
+
size 3855751
|
2023/A Self-Attention Ansatz for Ab-initio Quantum Chemistry/full.md
ADDED
|
@@ -0,0 +1,430 @@
|
| 1 |
+
# A SELF-ATTENTION ANSATZ FOR AB-INITIO QUANTUM CHEMISTRY
|
| 2 |
+
|
| 3 |
+
Ingrid von Glehn, James S. Spencer & David Pfau
|
| 4 |
+
|
| 5 |
+
{ingridvg, jamessspencer, pfau}@deepmind.com
|
| 6 |
+
|
| 7 |
+
# ABSTRACT
|
| 8 |
+
|
| 9 |
+
We present a novel neural network architecture using self-attention, the Wavefunction Transformer (Psiformer), which can be used as an approximation (or Ansatz) for solving the many-electron Schrödinger equation, the fundamental equation for quantum chemistry and material science. This equation can be solved from first principles, requiring no external training data. In recent years, deep neural networks like the FermiNet and PauliNet have been used to significantly improve the accuracy of these first-principle calculations, but they lack an attention-like mechanism for gating interactions between electrons. Here we show that the Psiformer can be used as a drop-in replacement for these other neural networks, often dramatically improving the accuracy of the calculations. On larger molecules especially, the ground state energy can be improved by dozens of kcal/mol, a qualitative leap over previous methods. This demonstrates that self-attention networks can learn complex quantum mechanical correlations between electrons, and are a promising route to reaching unprecedented accuracy in chemical calculations on larger systems.
|
| 10 |
+
|
| 11 |
+
# 1 INTRODUCTION
|
| 12 |
+
|
| 13 |
+
The laws of quantum mechanics describe the nature of matter at the microscopic level, and underpin the study of chemistry, condensed matter physics and material science. Although these laws have been known for nearly a century (Schrodinger, 1926), the fundamental equations are too difficult to solve analytically for all but the simplest systems. In recent years, tools from deep learning have been used to great effect to improve the quality of computational quantum physics (Carleo & Troyer, 2017). For the study of chemistry in particular, it is the quantum behavior of electrons that matters, which imposes certain constraints on the possible solutions. The use of deep neural networks for successfully computing the quantum behavior of molecules was introduced almost simultaneously by several groups (Pfau et al., 2020; Hermann et al., 2020; Choo et al., 2020), and has since led to a variety of extensions and improvements (Hermann et al., 2022). However, follow-up work has mostly focused on applications and iterative improvements to the neural network architectures introduced in the first set of papers.
|
| 14 |
+
|
| 15 |
+
At the same time, neural networks using self-attention layers, like the Transformer (Vaswani et al., 2017), have had a profound impact on much of machine learning. They have led to breakthroughs in natural language processing (Devlin et al., 2018), language modeling (Brown et al., 2020), image recognition (Dosovitskiy et al., 2020), and protein folding (Jumper et al., 2021). The basic self-attention layer is also permutation equivariant, a useful property for applications to chemistry, where physical quantities should be invariant to the ordering of atoms and electrons (Fuchs et al., 2020). Despite the manifest successes in other fields, no one has yet investigated whether self-attention neural networks are appropriate for approximating solutions in computational quantum mechanics.
|
| 16 |
+
|
| 17 |
+
In this work, we introduce a new self-attention neural network, the Wavefunction Transformer (Psiformer), which can be used as an approximate numerical solution (or Ansatz) for the fundamental equations of the quantum mechanics of electrons. We test the Psiformer on a wide variety of benchmark systems for quantum chemistry and find that it is significantly more accurate than existing neural network Ansatzes of roughly the same size. The increase in accuracy is more pronounced the larger the system is - as much as 75 times the normal standard for "chemical accuracy" - suggesting that the Psiformer is a particularly attractive approach for scaling neural network Ansatzes to larger,
|
| 18 |
+
|
| 19 |
+
more challenging systems. In what follows, we will provide an overview of the variational quantum Monte Carlo approach to computational quantum mechanics (Sec. 2), introduce the Psiformer architecture in detail (Sec. 3), present results on a wide variety of atomic and molecular benchmarks (Sec. 4) and wrap up with a discussion of future directions (Sec. 5).
|
| 20 |
+
|
| 21 |
+
# 2 BACKGROUND
|
| 22 |
+
|
| 23 |
+
# 2.1 QUANTUM MECHANICS AND CHEMISTRY
|
| 24 |
+
|
| 25 |
+
The fundamental object of study in quantum mechanics is the wavefunction, which represents the state of all possible classical configurations of a system. If the wavefunction is known, then all other properties of a system can be calculated from it. While there are multiple ways of representing a wavefunction, we focus on the first quantization approach, where the wavefunction is a map from possible particle states to a complex amplitude. The state of a single electron $\mathbf{x} \in \mathbb{R}^3 \times \{\uparrow, \downarrow\}$ can be represented by its position $\mathbf{r} \in \mathbb{R}^3$ and spin $\sigma \in \{\uparrow, \downarrow\}$ . Then the wavefunction for an N-electron system is a function $\Psi : (\mathbb{R}^3 \times \{\uparrow, \downarrow\})^N \to \mathbb{C}$ . Let $\pmb{x} \triangleq \mathbf{x}_1, \dots, \mathbf{x}_N$ denote the set of all electron states. The wavefunction is constrained to have unit $\ell_2$ norm $\int d\pmb{x} |\Psi|^2(\pmb{x}) = 1$ , and $|\Psi|^2$ can be interpreted as the probability of observing a quantum system in a given state when measured.
|
| 26 |
+
|
| 27 |
+
Not all functions are valid wavefunctions – particles must be indistinguishable, meaning $|\Psi|^2$ should be invariant to changes in ordering. Additionally, the Pauli exclusion principle states that the probability of observing any two electrons in the same state must be zero. This is enforced by requiring the wavefunction for electronic systems to be antisymmetric. In this paper, we will focus on how to learn an unnormalized approximation to $\Psi$ by representing it with a neural network.
|
| 28 |
+
|
| 29 |
+
The physical behavior of non-relativistic quantum systems is described by the Schrödinger equation. In its time-independent form, it is an eigenfunction equation $\hat{H}\Psi (\pmb {x}) = E\Psi (\pmb {x})$ where $\hat{H}$ is a Hermitian linear operator called the Hamiltonian and the scalar eigenvalue $E$ corresponds to the energy of that particular solution. In quantum chemistry, atomic units (a.u.) are typically used, in which the unit of distance is the Bohr radius $(a_0)$ , and the unit of energy is Hartree (Ha).
|
| 30 |
+
|
| 31 |
+
The physical details of a system are defined through the choice of Hamiltonian. For chemical systems, the only details which need to be specified are the locations and charges of the atomic nuclei. In quantum chemistry it is standard to approximate the nuclei as classical particles with fixed positions, known as the Born-Oppenheimer approximation, in which case the Hamiltonian becomes:
|
| 32 |
+
|
| 33 |
+
$$
|
| 34 |
+
\hat {H} = - \frac {1}{2} \sum_ {i} \nabla_ {i} ^ {2} + \sum_ {i > j} \frac {1}{| \mathbf {r} _ {i} - \mathbf {r} _ {j} |} - \sum_ {i I} \frac {Z _ {I}}{| \mathbf {r} _ {i} - \mathbf {R} _ {I} |} + \sum_ {I > J} \frac {Z _ {I} Z _ {J}}{| \mathbf {R} _ {I} - \mathbf {R} _ {J} |} \tag {1}
|
| 35 |
+
$$
|
| 36 |
+
|
| 37 |
+
where $\nabla_{i}^{2} = \sum_{j=1}^{3} \frac{\partial^{2}}{\partial r_{ij}^{2}}$ is the Laplacian w.r.t. the $i$ th particle and $Z_{I}$ and $\mathbf{R}_{I}, I \in \{1, \dots, N_{\mathrm{nuc}}\}$ are the charges and coordinates of the nuclei.
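The non-kinetic (Coulomb) terms of Eq. 1 depend only on the particle positions and can be evaluated directly. A minimal NumPy sketch with a hypothetical helper name; the kinetic term requires differentiating $\Psi$ and is omitted:

```python
import numpy as np

def potential_energy(r, R, Z):
    """Coulomb terms of Eq. 1 in atomic units: electron-electron repulsion,
    electron-nuclear attraction, and nuclear-nuclear repulsion."""
    v = 0.0
    for i in range(len(r)):
        for j in range(i):
            v += 1.0 / np.linalg.norm(r[i] - r[j])          # sum_{i>j} 1/|r_i - r_j|
        for I in range(len(R)):
            v -= Z[I] / np.linalg.norm(r[i] - R[I])         # -sum_{iI} Z_I/|r_i - R_I|
    for I in range(len(R)):
        for J in range(I):
            v += Z[I] * Z[J] / np.linalg.norm(R[I] - R[J])  # sum_{I>J} Z_I Z_J/|R_I - R_J|
    return v

# Illustrative configuration: H2 at bond length 1.4 a_0, with the two
# electrons displaced slightly off the nuclei along the bond axis.
R = np.array([[0.0, 0.0, 0.0], [1.4, 0.0, 0.0]])
Z = np.array([1.0, 1.0])
r = np.array([[0.1, 0.0, 0.0], [1.3, 0.0, 0.0]])
print(potential_energy(r, R, Z))
```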
|
| 38 |
+
|
| 39 |
+
Two simplifications follow from this. First, since $\hat{H}$ is a Hermitian operator, solutions $\Psi$ must be real-valued. Thus we can restrict our attention to real-valued wavefunctions. Second, since the spins $\sigma_{i}$ do not appear anywhere in Eq. 1, we can fix a certain number of electrons to be spin up and the remainder to be spin down before beginning any calculation (Foulkes et al., 2001). The appropriate number for the lowest energy state can usually be guessed by heuristics such as Hund's rules.
|
| 40 |
+
|
| 41 |
+
While the time-independent Schrödinger equation defines the possible solutions of constant energy, at the energy scales relevant for most chemistry the electrons are almost always found near the lowest energy state, known as the ground state. Solutions with higher energy, known as excited states, are relevant to photochemistry, but in this paper we will restrict our attention to ground states.
|
| 42 |
+
|
| 43 |
+
For a typical small molecule, the total energy of a system is on the order of hundreds to thousands of Hartrees. However the relevant energy scale for chemical bonds is typically much smaller – on the order of 1 kilocalorie per mole (kcal/mol), or $\sim 1.6\mathrm{mHa}$ – less than one part in one hundred thousand of the total energy. Calculations within 1 kcal/mol of the ground truth are generally considered "chemically accurate". Mean-field methods are typically within about $0.5\%$ of the true total energy. The difference between the mean-field energy and true energy is known as the correlation energy, and chemical accuracy is usually less than $1\%$ of this correlation energy. For example, the binding energy of the benzene dimer (investigated in Section 4.5) is only $\sim 4\mathrm{mHa}$ .
|
| 44 |
+
|
| 45 |
+

|
| 46 |
+
(a) FermiNet / FermiNet+SchNet
|
| 47 |
+
Figure 1: Comparison of (a) FermiNet and FermiNet+SchNet and (b) the Psiformer. The FermiNet variants have two streams, acting on electron-nuclear and electron-electron features, which are merged via concatenation or continuous-filter convolution operations. In contrast, the Psiformer uses a single stream of self-attention layers, acting on electron-nuclear features only. Electron-electron features appear only via the Jastrow factor. The FermiNet+SchNet also includes a nuclear embedding stream and separate spin-dependent electron-electron streams, not pictured here.
|
| 48 |
+
|
| 49 |
+

|
| 50 |
+
(b) Psiformer
|
| 51 |
+
|
| 52 |
+
# 2.2 VARIATIONAL QUANTUM MONTE CARLO
|
| 53 |
+
|
| 54 |
+
There are a wide variety of computational techniques to find the ground state solution of the Hamiltonian in Eq 1. We are particularly interested in solving these equations from first principles (ab-initio), that is, without any data other than the atomic positions. The ab-initio method most compatible with the modern deep learning paradigm is variational quantum Monte Carlo (VMC, Foulkes et al. (2001)). In VMC, a parametric wavefunction approximation (or Ansatz) is optimized using samples from the Ansatz itself, in much the same way that deep neural networks are optimized by gradient descent on stochastic minibatches. VMC is variational in the sense that it minimizes an upper bound on the energy of a system. Therefore if two VMC solutions give different energies, the one with the lower energy will be closer to the true energy, even if the true energy is not known.
|
| 55 |
+
|
| 56 |
+
In VMC, we start with an unnormalized wavefunction Ansatz $\Psi_{\theta}:(\mathbb{R}^3\times \{\uparrow ,\downarrow \})^N\to \mathbb{R}$ with parameters $\theta$ . The expected energy of the system is given by the Rayleigh quotient:
|
| 57 |
+
|
| 58 |
+
$$
|
| 59 |
+
\mathcal {L} _ {\theta} = \frac {\left\langle \Psi_ {\theta} \hat {H} \Psi_ {\theta} \right\rangle}{\left\langle \Psi_ {\theta} ^ {2} \right\rangle} = \frac {\int d \boldsymbol {x} \Psi_ {\theta} (\boldsymbol {x}) \hat {H} \Psi_ {\theta} (\boldsymbol {x})}{\int d \boldsymbol {x} \Psi_ {\theta} ^ {2} (\boldsymbol {x})} = \mathbb {E} _ {\boldsymbol {x} \sim \Psi_ {\theta} ^ {2}} \left[ \Psi_ {\theta} ^ {- 1} (\boldsymbol {x}) \hat {H} \Psi_ {\theta} (\boldsymbol {x}) \right] = \mathbb {E} _ {\boldsymbol {x} \sim \Psi_ {\theta} ^ {2}} \left[ E _ {L} (\boldsymbol {x}) \right] \tag {2}
|
| 60 |
+
$$
|
| 61 |
+
|
| 62 |
+
where we have rewritten the Rayleigh quotient as an expectation over a random variable proportional to $\Psi_{\theta}^{2}$ on the right hand side. The term $E_L(\pmb {x}) = \Psi^{-1}(\pmb {x})\hat{H}\Psi (\pmb {x})$ is known as the local energy. Details on how to compute the local energy and unbiased estimates of the gradient of the average energy are given in Sec. A.1 in the appendix.
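As a concrete check of the local-energy definition: for the hydrogen atom, the exact ground state $\Psi(\mathbf{r}) = e^{-|\mathbf{r}|}$ yields $E_L(\mathbf{x}) = -1/2$ Ha at every point away from the nucleus. The sketch below verifies this using a central finite difference for the Laplacian; in practice these methods compute $E_L$ with automatic differentiation, and the helper names here are hypothetical:

```python
import numpy as np

def local_energy(psi, x, v, h=1e-4):
    """E_L(x) = Psi^{-1}(x) H Psi(x) with H = -1/2 Laplacian + V,
    using a central finite difference for the Laplacian."""
    lap = 0.0
    for d in range(3):
        e = np.zeros(3); e[d] = h
        lap += psi(x + e) - 2.0 * psi(x) + psi(x - e)
    lap /= h ** 2
    return -0.5 * lap / psi(x) + v(x)

psi_1s = lambda x: np.exp(-np.linalg.norm(x))   # hydrogen 1s ground state
v_coul = lambda x: -1.0 / np.linalg.norm(x)     # Coulomb potential -1/|r|
x = np.array([0.3, -0.2, 0.5])
print(local_energy(psi_1s, x, v_coul))  # close to -0.5 Ha
```

For an exact eigenfunction the local energy is constant, so its variance under $\Psi^2$ is zero; VMC exploits this by minimizing the expected local energy.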
|
| 63 |
+
|
| 64 |
+
In VMC, samples from the distribution proportional to $\Psi^2$ are generated by Monte Carlo methods, and unbiased estimates of the gradient are used to optimize the Ansatz, either by standard stochastic gradient methods, or more advanced methods (Umrigar et al., 2007; Sorella, 1998). Notably, the samples $x\sim \Psi_{\theta}^{2}$ can be generated from the Ansatz itself, rather than requiring external data.
|
| 65 |
+
|
| 66 |
+
The form of $\Psi$ must be restricted to antisymmetric functions to avoid collapsing onto non-physical solutions. This is most commonly done by taking the determinant of a matrix of single-electron functions $\Psi (\pmb {x}) = \operatorname *{det}\left[\Phi (\pmb {x})\right]$ , where $\Phi (\pmb {x})$ denotes the matrix with elements $\phi_i(\mathbf{x}_j)$ , since the determinant is antisymmetric under exchange of rows or columns. This is known as a Slater determinant, and the minimum-energy wavefunction of this form gives the mean-field solution to the Schrödinger equation. While $\phi_{i}$ is a function of one electron in a Slater determinant, any permutation-equivariant function of all electrons can be used as input to $\Phi$ and $\Psi$ will still be antisymmetric.
|
| 67 |
+
|
| 68 |
+
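The antisymmetry of a Slater determinant can be checked numerically: exchanging two electrons exchanges two columns of $\Phi$, flipping the sign of the determinant. A small sketch with hypothetical orbitals:

```python
import numpy as np

rng = np.random.default_rng(0)

def slater(phi, xs):
    """Psi(x) = det[Phi(x)] with matrix elements Phi_ij = phi_i(x_j)."""
    return np.linalg.det(np.array([[f(x) for x in xs] for f in phi]))

# Three illustrative single-electron orbitals and three electron positions.
phi = [lambda x: np.exp(-np.linalg.norm(x)),
       lambda x: x[0] * np.exp(-np.linalg.norm(x)),
       lambda x: x[1] * np.exp(-np.linalg.norm(x))]
xs = [rng.normal(size=3) for _ in range(3)]

swapped = [xs[1], xs[0], xs[2]]  # exchange electrons 1 and 2
print(slater(phi, xs), slater(phi, swapped))  # equal magnitude, opposite sign
```

The same sign flip holds if each $\phi_i$ is replaced by any permutation-equivariant function of all electrons, which is what neural network Ansatzes exploit.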
|
| 69 |
+
|
| 70 |
+
The potential energy becomes infinite when particles overlap, which places strict constraints on the form of the wavefunction at these points, known as the Kato cusp conditions (Kato, 1957). The cusp conditions state that the wavefunction must be non-differentiable at these points, and give exact values for the average derivatives at the cusps. This can be built into an Ansatz by multiplying by a Jastrow factor which satisfies these conditions analytically (Drummond et al., 2004).
|
| 71 |
+
|
| 72 |
+
# 2.3 RELATED WORK
|
| 73 |
+
|
| 74 |
+
Machine learning has found numerous applications to computational chemistry in recent years, but has mostly focused on problems at the level of classical physics (Schütt et al., 2018; Fuchs et al., 2020; Batzner et al., 2022; Segler et al., 2018; Gómez-Bombarelli et al., 2018), which all rely on learning from large datasets of experiments or ab-initio calculations. There is also work on machine learning for density functional theory (DFT) (Nagai et al., 2020; Kirkpatrick et al., 2021), which is an intermediate between classical and all-electron quantum chemistry, but even this is still primarily a supervised learning problem and relies on ab-initio calculations for data. Here instead, we are focused on improving the ab-initio methods themselves.
|
| 75 |
+
|
| 76 |
+
For a thorough introduction to ab-initio quantum chemistry, we recommend Helgaker et al. (2014) and Szabo & Ostlund (2012). Within this field, VMC was considered a simple but low-accuracy method, failing to match the performance of sophisticated methods (Motta et al., 2020), but often used as a starting point for diffusion Monte Carlo (DMC) calculations, which are more accurate, but do not produce an explicit functional form for the wavefunction (Foulkes et al., 2001). These VMC calculations typically used a Slater-Jastrow Ansatz (Kwon et al., 1993), which consists of a large linear combination of Slater determinants multiplied by a Jastrow factor. They also sometimes include backflow, a coordinate transformation that accounts for electron correlations with a fixed functional form (Feynman & Cohen, 1956).
|
| 77 |
+
|
| 78 |
+
Recently, the use of neural network Ansatzes in VMC was shown to greatly improve the accuracy of many-electron calculations, often making them competitive with, or in some circumstances superior to, methods like DMC or coupled cluster (Pfau et al., 2020; Hermann et al., 2020; Choo et al., 2020; Han et al., 2019; Luo & Clark, 2019; Taddei et al., 2015). In first quantization, these Ansatzes used a sum of a small number of determinants to construct antisymmetric functions, but used very general permutation-equivariant functions of all electrons as inputs, rather than the single-electron functions used in Slater determinants. Some, like the PauliNet (Hermann et al., 2020), used Jastrow factors, while the FermiNet (Pfau et al., 2020) used non-differentiable input features to learn the cusp conditions.
|
| 79 |
+
|
| 80 |
+
Most follow-up work, as surveyed in Hermann et al. (2022), integrated these Ansatzes with other methods and applications, but did not significantly alter the architecture of the neural networks. The most significant departure in terms of the neural network architecture was Gerard et al. (2022), which extended the FermiNet – generally recognized to be the most accurate neural network Ansatz up to that point – and integrated it with several details of the PauliNet, especially the continuous-filter convolutions also used by the SchNet (Schütt et al., 2018), claiming to reach even higher accuracy on several challenging systems. We refer to this architecture as the FermiNet+SchNet.
|
| 81 |
+
|
| 82 |
+
# 3 THE PSIFORMER
|
| 83 |
+
|
| 84 |
+
The Psiformer has the basic form:
|
| 85 |
+
|
| 86 |
+
$$
|
| 87 |
+
\Psi_ {\theta} (\boldsymbol {x}) = \exp \left(\mathcal {J} _ {\theta} (\boldsymbol {x})\right) \sum_ {k = 1} ^ {N _ {\mathrm {d e t}}} \det \left[ \boldsymbol {\Phi} _ {\theta} ^ {k} (\boldsymbol {x}) \right], \tag {3}
|
| 88 |
+
$$
|
| 89 |
+
|
| 90 |
+
where $\mathcal{J}_{\theta}:(\mathbb{R}^3\times \{\uparrow ,\downarrow \})^N\to \mathbb{R}$ and $\Phi_{\theta}^{k}:(\mathbb{R}^{3}\times \{\uparrow ,\downarrow \})^{N}\rightarrow \mathbb{R}^{N\times N}$ are functions with learnable parameters $\theta$ . This is similar to the Slater-Jastrow Ansatz, FermiNet and PauliNet, with the key difference being that in the Psiformer, $\Phi_{\theta}^{k}$ consists of a sequence of multiheaded self-attention layers (Vaswani et al., 2017). The key motivation for this is that the electron-electron dependence in the Hamiltonian introduces subtle and complex dependence in the wavefunction. Self-attention is one way of introducing this without a fixed functional form. The high-level structure of the Psiformer is shown in Fig. 1(b), where it is contrasted with the FermiNet (and SchNet extension) in Fig. 1(a).
|
| 91 |
+
|
| 92 |
+
|
| 93 |
+
|
| 94 |
+
Because a self-attention layer takes a sequence of vectors as input, only features of single electrons are used as input to $\Phi_{\theta}^{k}$ . The input feature vector $\mathbf{f}_i^0$ for electron $i$ is similar to the one-electron stream of the FermiNet, which uses a concatenation of electron-nuclear differences $\mathbf{r}_i - \mathbf{R}_I$ and distances $|\mathbf{r}_i - \mathbf{R}_I|$ , $I = 1,\dots ,N_{\mathrm{nuc}}$ , with two key differences. First, we found that for systems with widely separated atoms, using FermiNet one-electron features caused self-attention Ansatzes to become unstable, so we rescale the inputs in the Psiformer by a factor of $\log (1 + |\mathbf{r}_i - \mathbf{R}_I|) / |\mathbf{r}_i - \mathbf{R}_I|$ , so that the input vectors grow logarithmically with distance from the nucleus. Second, we concatenate the spin $\sigma_{i}$ into the input feature vector itself (mapping $\uparrow$ to 1 and $\downarrow$ to $-1$ ). While this spin term is kept fixed during training, it breaks the symmetry between spin up and spin down electrons, so that columns of $\Phi_{\theta}^{k}$ are only equivariant under exchange of same-spin electrons. This is a notable departure from the FermiNet and PauliNet, where the difference between spin-up and spin-down electrons is instead built into the architecture.
|
| 95 |
+
|
| 96 |
+

|
| 97 |
+
Figure 2: FermiNet and Psiformer accuracy on small molecules. Geometries and CCSD(T)/CBS reference energies are taken from Pfau et al. (2020). The grey region indicates chemical accuracy (1 kcal/mol or $1.6\mathrm{mHa}$ ) relative to the reference energies.
|
| 98 |
+
|
| 99 |
+
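The rescaled one-electron input features can be sketched as follows. This is a hedged reading of the construction; the helper name and exact feature layout are assumptions, and it presumes the electron does not sit exactly on a nucleus:

```python
import numpy as np

def input_features(r_i, spin_i, R):
    """Electron i's input vector: log-rescaled differences and distances to
    each nucleus, with the spin (+1 for up, -1 for down) concatenated."""
    feats = []
    for R_I in R:
        diff = r_i - R_I
        d = np.linalg.norm(diff)          # assumes d > 0
        scale = np.log1p(d) / d           # log(1 + d) / d rescaling factor
        feats.extend(scale * diff)        # rescaled difference 3-vector
        feats.append(np.log1p(d))         # rescaled distance, grows like log d
    feats.append(1.0 if spin_i == "up" else -1.0)
    return np.array(feats)

R = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])  # widely separated nuclei
f = input_features(np.array([0.5, 0.0, 0.0]), "up", R)
print(f.shape)  # (9,) = 2 nuclei x 4 features + spin
```

For small $d$ the scale factor approaches 1, preserving the non-differentiable cusp behavior at the nuclei, while for large $d$ the features grow only logarithmically.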
|
| 100 |
+
|
| 101 |
+
The input features $\mathbf{f}_i^0$ are next projected into the same dimension as the attention inputs by a linear mapping $\mathbf{h}_i^0 = \mathbf{W}^0\mathbf{f}_i^0$ , and then passed into a sequence of multiheaded self-attention layers followed by linear-nonlinear layers, both with residual connections:
|
| 102 |
+
|
| 103 |
+
$$
|
| 104 |
+
\mathbf {f} _ {i} ^ {\ell + 1} = \mathbf {h} _ {i} ^ {\ell} + \mathbf {W} _ {o} ^ {\ell} \operatorname {concat} _ {h} \left[ \operatorname {SELFATTN} _ {i} \left(\mathbf {h} _ {1} ^ {\ell}, \dots , \mathbf {h} _ {N} ^ {\ell}; \mathbf {W} _ {q} ^ {\ell h}, \mathbf {W} _ {k} ^ {\ell h}, \mathbf {W} _ {v} ^ {\ell h}\right) \right] \tag {4}
|
| 105 |
+
$$
|
| 106 |
+
|
| 107 |
+
$$
|
| 108 |
+
\mathbf {h} _ {i} ^ {\ell + 1} = \mathbf {f} _ {i} ^ {\ell + 1} + \tanh \left(\mathbf {W} ^ {\ell + 1} \mathbf {f} _ {i} ^ {\ell + 1} + \mathbf {b} ^ {\ell + 1}\right) \tag {5}
|
| 109 |
+
$$
|
| 110 |
+
|
| 111 |
+
where $h$ indexes the different attention heads, $\mathrm{concat}_h$ denotes concatenation of the output from different attention heads, and SELFATTN denotes standard self-attention:
|
| 112 |
+
|
| 113 |
+
$$
|
| 114 |
+
\operatorname {SELFATTN} _ {i} \left(\mathbf {h} _ {1}, \dots , \mathbf {h} _ {N}; \mathbf {W} _ {q}, \mathbf {W} _ {k}, \mathbf {W} _ {v}\right) = \frac {1}{\sqrt {d}} \sum_ {j} \sigma_ {j} \left(\mathbf {q} _ {i} ^ {T} \mathbf {k} _ {1}, \dots , \mathbf {q} _ {i} ^ {T} \mathbf {k} _ {N}\right) \mathbf {v} _ {j} \tag {6}
|
| 115 |
+
$$
|
| 116 |
+
|
| 117 |
+
$$
|
| 118 |
+
\mathbf {k} _ {i} = \mathbf {W} _ {k} \mathbf {h} _ {i}, \mathbf {q} _ {i} = \mathbf {W} _ {q} \mathbf {h} _ {i}, \mathbf {v} _ {i} = \mathbf {W} _ {v} \mathbf {h} _ {i} \tag {7}
|
| 119 |
+
$$
|
| 120 |
+
|
| 121 |
+
$$
|
| 122 |
+
\sigma_ {i} \left(x _ {1}, \dots , x _ {N}\right) = \frac {\exp \left(x _ {i}\right)}{\sum_ {j} \exp \left(x _ {j}\right)} \tag {8}
|
| 123 |
+
$$
|
| 124 |
+
|
| 125 |
+
where $d$ is the output dimension of the key and query weights. In principle, multiple linear-nonlinear layers could be used in-between self-attention layers, but we found that adding a deeper MLP between self-attention layers was less effective than adding more self-attention layers. While a smooth nonlinearity must be used to guarantee that a wavefunction is smooth everywhere except the cusps, we found that using activation functions other than tanh had a marginal impact on performance.
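A single attention head in the spirit of Eqs. 6-8, with attention weights $\sigma_j(\mathbf{q}_i^T\mathbf{k}_1,\dots,\mathbf{q}_i^T\mathbf{k}_N)$ and the $1/\sqrt{d}$ factor applied to the summed output, can be sketched in NumPy; the permutation equivariance that makes self-attention suitable for electrons can then be checked directly. This is a hypothetical sketch, not the Psiformer implementation:

```python
import numpy as np

def self_attention(H, Wq, Wk, Wv):
    """Single-head self-attention over electron features: row i attends to
    all rows j with softmax-normalized q_i . k_j weights."""
    Q, K, V = H @ Wq.T, H @ Wk.T, H @ Wv.T
    d = Q.shape[1]
    logits = Q @ K.T                              # logits[i, j] = q_i^T k_j
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)             # softmax over j (Eq. 8)
    return (w @ V) / np.sqrt(d)

rng = np.random.default_rng(0)
N, dm = 4, 8                                      # 4 electrons, feature dim 8
H = rng.normal(size=(N, dm))
Wq, Wk, Wv = (rng.normal(size=(dm, dm)) for _ in range(3))
out = self_attention(H, Wq, Wk, Wv)

# Permutation equivariance: permuting the electrons permutes the outputs
# identically, so det[Phi] changes only by a sign.
perm = [2, 0, 3, 1]
assert np.allclose(self_attention(H[perm], Wq, Wk, Wv), out[perm])
```

In the Psiformer multiple such heads are concatenated and mixed by $\mathbf{W}_o^{\ell}$, with residual connections as in Eqs. 4-5.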
|
| 126 |
+
|
| 127 |
+
A final linear projection into an $N N_{\mathrm{det}}$ -dimensional space is applied to the activations, and the output is multiplied by a weighted sum of exponentially-decaying envelopes, $\Phi_{ij}^{k} = \Omega_{ij}^{k}\mathbf{w}_{i}^{kT}\mathbf{h}_{j}^{L}$ , where $\Omega_{ij}^{k} = \sum_{I}\pi_{iI}^{k}\exp \left(-\sigma_{iI}^{k}|\mathbf{r}_{j} - \mathbf{R}_{I}|\right)$ . This enforces the boundary condition $\lim_{|\mathbf{r}| \to \infty} \Psi_{\theta}(\mathbf{r}) = 0$ , and is the same form as the envelope used by the FermiNet in Spencer et al. (2020a). The matrix elements $\Phi_{ij}^{k}$ then form the $N_{\mathrm{det}}$ determinants in Eq. 3, whose outputs are summed together.
|
| 128 |
+
|
| 129 |
+

|
| 130 |
+
Figure 3: Comparison of the Psiformer $(\Psi \mathrm{F})$ with and without LayerNorm (LN) against the FermiNet (FN) and FermiNet+SchNet $(\mathrm{FN + SN})$ using training hyperparameters from Pfau et al. (2020) and Gerard et al. (2022). Learning curves from the original FN+SN implementation in Gerard et al. (2022) are in green. Energies are smoothed over 4000 iterations.
|
| 131 |
+
|
| 132 |
+
As the distances $|\mathbf{r}_i - \mathbf{R}_I|$ are inputs to the Psiformer, it is capable of learning the electron-nuclear cusp conditions, much like the FermiNet. However, the self-attention part of the Psiformer does not take pairwise electron distances as inputs, so it cannot learn the electron-electron cusp conditions. Instead, the Psiformer uses a conventional Jastrow factor only for the electron-electron cusps. We use a particularly simple Jastrow factor:

$$
\mathcal{J}_{\theta}(\boldsymbol{x}) = \sum_{i < j; \sigma_{i} = \sigma_{j}} -\frac{1}{4} \frac{\alpha_{\mathrm{par}}^{2}}{\alpha_{\mathrm{par}} + |\mathbf{r}_{i} - \mathbf{r}_{j}|} + \sum_{i < j; \sigma_{i} \neq \sigma_{j}} -\frac{1}{2} \frac{\alpha_{\mathrm{anti}}^{2}}{\alpha_{\mathrm{anti}} + |\mathbf{r}_{i} - \mathbf{r}_{j}|} \tag{9}
$$

which has only two free parameters, $\alpha_{\mathrm{par}}$ and $\alpha_{\mathrm{anti}}$ , and works well in practice.
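Eq. 9 is simple enough to state directly in code. This NumPy sketch (the spin encoding and function name are our own) evaluates the log-domain Jastrow term that is added to $\log|\Psi|$:

```python
import numpy as np

def jastrow(r, spins, alpha_par, alpha_anti):
    """Two-parameter Jastrow factor of Eq. (9), in the log domain (sketch).

    r: (N, 3) electron positions; spins: (N,) entries +1 or -1.
    Returns J(x), which is added to log|Psi|, so exp(J) multiplies Psi.
    """
    total = 0.0
    n = len(r)
    for i in range(n):
        for j in range(i + 1, n):              # each pair counted once
            dist = np.linalg.norm(r[i] - r[j])
            if spins[i] == spins[j]:           # parallel spins: 1/4 cusp
                total += -0.25 * alpha_par**2 / (alpha_par + dist)
            else:                              # antiparallel spins: 1/2 cusp
                total += -0.5 * alpha_anti**2 / (alpha_anti + dist)
    return total
```

At coincidence the derivative of each term with respect to the pair distance is $1/4$ (parallel) or $1/2$ (antiparallel), which is exactly the Kato cusp condition, while the term decays smoothly to zero at large separation.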
# 4 EXPERIMENTS
Here we present an evaluation of the Psiformer on a wide variety of benchmark systems. Where it is not specified, all results with the FermiNet and FermiNet+SchNet are with our own implementation, forked from Spencer et al. (2020b). We use Kronecker-factored Approximate Curvature (KFAC) (Martens & Grosse, 2015) to optimize the Psiformer. The use of KFAC for self-attention has been investigated in Zhang et al. (2019). We show the advantage of KFAC, and how it interacts with LayerNorm, in Sec. A.2.3 of the appendix. We use hyperparameters and a Metropolis-Hastings MCMC algorithm similar to the original FermiNet paper, though we have made several modifications which help for larger systems: we pretrain for longer, we generate samples for pretraining from the target wavefunction, we take more MCMC steps between parameter updates, we propose updates for subsets of electrons rather than all electrons simultaneously, and we slightly change the gradient computation to be more robust to outliers. Details are given in Sec. A.2.4 of the appendix.
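The per-subset electron updates can be illustrated with a minimal Metropolis-Hastings step. The Gaussian proposal and block scheme below are illustrative assumptions, not the exact sampler used in our experiments:

```python
import numpy as np

def mh_block_step(x, log_prob, block, step, rng):
    """One Metropolis-Hastings update of a subset of electrons (sketch).

    x: (N, 3) electron positions; log_prob(x) should return 2*log|Psi(x)|,
    the log of the sampling density; block: indices of electrons to move.
    The Gaussian proposal is symmetric, so the acceptance ratio reduces to
    a simple density ratio.
    """
    proposal = x.copy()
    proposal[block] += step * rng.standard_normal((len(block), 3))
    log_ratio = log_prob(proposal) - log_prob(x)
    if np.log(rng.uniform()) < log_ratio:
        return proposal, True                  # accept: take the new positions
    return x, False                            # reject: keep the old positions
```

Cycling `block` over disjoint subsets of electrons between parameter updates keeps per-move acceptance rates high on large systems, where simultaneous all-electron moves are rejected too often.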
# 4.1 REVISITING SMALL MOLECULES
Pfau et al. (2020) compared the FermiNet against CCSD(T) extrapolated to the complete basis set (CBS) limit on a number of small molecules (4-30 electrons) from the G3 database (Curtiss et al., 2000). While the FermiNet captured more than $99\%$ of the correlation energy relative to CCSD(T)/CBS for systems as large as ethene (16 electrons), the quality of the FermiNet calculation began to decline as the system size grew. While some of this discrepancy was reduced simply by changing to a framework with better numerics (Spencer et al., 2020a), the reported difference in energy for bicyclobutane was still greater than $20\mathrm{mHa}$ . Here we revisit these systems, comparing the FermiNet with training improvements against the Psiformer.
<table><tr><td rowspan="2">System</td><td colspan="2">FermiNet</td><td colspan="2">FermiNet+SchNet</td><td colspan="2">Psiformer</td><td rowspan="2">Expt.</td></tr><tr><td>Small</td><td>Large</td><td>Small</td><td>Large</td><td>Small</td><td>Large</td></tr><tr><td>K (Ha)</td><td>-599.9133(2)</td><td>-599.9149(2)</td><td>-599.9153(2)</td><td>-599.9175(3)</td><td>-599.9202(2)</td><td>-599.9205(1)</td><td></td></tr><tr><td>K+(Ha)</td><td>-599.7561(2)</td><td>-599.7679(2)</td><td>-599.7578(2)</td><td>-599.7601(2)</td><td>-599.7589(1)</td><td>-599.7617(1)</td><td></td></tr><tr><td>IPK (eV)</td><td>4.278(9)</td><td>4.000(9)</td><td>4.286(8)</td><td>4.28(1)</td><td>4.388(6)</td><td>4.321(5)</td><td>4.32631</td></tr><tr><td>Fe (Ha)</td><td>-1263.6363(5)</td><td>-1263.6381(4)</td><td>-1263.6464(5)</td><td>-1263.6385(5)</td><td>-1263.6595(3)</td><td>-1263.6613(3)</td><td></td></tr><tr><td>Fe+(Ha)</td><td>-1263.3570(5)</td><td>-1263.3616(4)</td><td>-1263.3620(4)</td><td>-1263.3676(4)</td><td>-1263.3746(3)</td><td>-1263.3768(3)</td><td></td></tr><tr><td>IPFe (eV)</td><td>7.60(2)</td><td>7.52(2)</td><td>7.74(2)</td><td>7.37(2)</td><td>7.75(1)</td><td>7.74(1)</td><td>7.84194</td></tr><tr><td>Zn (Ha)</td><td>-1779.4101(7)</td><td>-1779.4131(6)</td><td>-1779.4195(6)</td><td>-1779.4267(5)</td><td>-1779.4304(4)</td><td>-1779.4365(5)</td><td></td></tr><tr><td>Zn+(Ha)</td><td>-1779.0778(6)</td><td>-1779.0846(6)</td><td>-1779.0843(7)</td><td>-1779.0909(5)</td><td>-1779.102(1)</td><td>-1779.1054(3)</td><td></td></tr><tr><td>IPZn (eV)</td><td>9.04(2)</td><td>8.94(2)</td><td>9.12(2)</td><td>9.14(2)</td><td>8.94(3)</td><td>9.01(2)</td><td>9.24695</td></tr></table>
Table 1: Energies of third-row neutral atoms and cations. Ionization potentials are compared against experimental results from Koga et al. (1997). Total energies are in Hartree, while ionization potentials are in eV. Chemical accuracy is $0.043\ \mathrm{eV}$.
To investigate whether the Psiformer performance could be reproduced by simply making the FermiNet larger, we investigated both a "small" and "large" configuration for the Psiformer and FermiNet. The small FermiNet has the same layer dimensions and determinants as the one used in Pfau et al. (2020), while the large configuration has twice as many determinants and a one-electron stream twice as wide, similar to the largest networks in Spencer et al. (2020a). The Psiformer configurations have the same number of determinants and the MLP layers are the same width as the FermiNet one-electron stream, though due to the self-attention weights the small Psiformer has a number of parameters between that of the large and small FermiNet. Exact details are given in Table 7 in the appendix. On these systems, the small Psiformer ran in a similar amount of time to the large FermiNet, though for even larger systems, the difference in wall time between the small FermiNet and small Psiformer became much smaller (see Table 9 in the appendix).
The results on small molecules can be seen in Fig. 2. While increasing the size of the FermiNet increases the accuracy somewhat, the small Psiformer is more accurate than the large FermiNet, despite having fewer parameters, while the large Psiformer is the most accurate of all. This is true for all systems investigated. The improvement from the Psiformer is particularly dramatic on ozone and bicyclobutane – on ozone, the large Psiformer is within $1\mathrm{kcal / mol}$ of CCSD(T)/CBS, while even the largest FermiNet has an error more than 4 times larger than this. On all molecules, the large Psiformer captures more than $99\%$ of the correlation energy relative to the reference energy.
# 4.2 COMPARISON AGAINST FERMINET WITH SCHNET-LIKE CONVOLUTIONS
While the performance of the Psiformer relative to the FermiNet is impressive, recent work has proposed several innovations to reach even lower energies (Gerard et al., 2022). The primary innovations of this work were the FermiNet+SchNet architecture and new hyperparameters they claim led to much faster optimization. They showed an especially large improvement on heavy atoms like potassium (K) and iron (Fe), and some improvement on larger molecules like benzene.
We attempted to reproduce the results of Gerard et al. (2022) with our own FermiNet+SchNet implementation, with somewhat surprising results in Fig. 3. First, the changes in training used for the FermiNet seem to be enough to close the gap with the published FermiNet+SchNet results on heavy atoms. For instance, ablation studies suggested that modifying the hyperparameters plus adding SchNet-like convolutions accounts for a $38.9\mathrm{mHa}$ improvement in accuracy on the potassium atom at $10^{5}$ iterations, but in our experiments the FermiNet with default hyperparameters is within a few $\mathrm{mHa}$ of the published result. On benzene, both the SchNet-like convolutions and modified hyperparameters improved the final energy by a few $\mathrm{mHa}$ , though still fell slightly short of the published results. Most importantly, the Psiformer is clearly either comparable to the best published FermiNet+SchNet results, as on K, or better by a wide margin, as on benzene. These results also give us confidence that the FermiNet is as strong a baseline as any other published method for further comparison.
# 4.3 THIRD-ROW ATOMS
The FermiNet, FermiNet+SchNet and Psiformer all seem to perform well on the potassium and iron atoms, but it is difficult to judge how close to the ground truth these results are. Prior work
<table><tr><td rowspan="3">System</td><td colspan="3">Ours</td><td colspan="2">Best Published</td></tr><tr><td rowspan="2">FermiNet</td><td colspan="2">Psiformer</td><td rowspan="2">VMC</td><td rowspan="2">DMC</td></tr><tr><td>No LayerNorm</td><td>LayerNorm</td></tr><tr><td>Benzene</td><td>-232.2205(2)</td><td>-232.2400(1)</td><td>-232.2393(1)</td><td>-232.2267a</td><td>-232.2370(3)b</td></tr><tr><td>Toluene</td><td>-271.5274(2)</td><td>-271.5494(1)</td><td>-271.5538(1)</td><td>-</td><td>-</td></tr><tr><td>Naphthalene</td><td>-385.8147(4)</td><td>-385.8679(2)</td><td>-385.8685(2)</td><td>-</td><td>-</td></tr><tr><td>CCl4</td><td>-1878.684(1)</td><td>-1878.734(1)</td><td>-1878.804(1)</td><td>-</td><td>-</td></tr></table>
Table 2: Energies for molecules with between 42 and 74 electrons. For benzene, the best published results using neural network Ansatzes are either from ${}^{a}$ Gerard et al. (2022) or ${}^{b}$ Ren et al. (2022).
had shown that the FermiNet can achieve energies within chemical accuracy of exact results for atoms up to argon (Spencer et al., 2020a), but exact calculations are impractical for third-row atoms. Instead, comparison to experimental results is a more practical evaluation metric. The ionization potential, the amount of energy it takes to remove one electron, is a particularly simple comparison for which good experimental data exists (Koga et al., 1997). Here we compare the FermiNet, FermiNet+SchNet and Psiformer for estimating the ionization potential of potassium, iron and zinc. To compare the relative importance of size and architecture, we also looked at both a small and large configuration of all Ansatzes, with parameters described in Table 6 in the appendix.
Results are shown in Table 1. For atoms this heavy, all methods showed some amount of run-to-run variability, usually small, but on occasion as large as $10\mathrm{mHa}$ , which may explain some outlier results. On potassium, the difference between Ansatzes was within the range of run-to-run variability, but the Psiformer still did quite well, and reached the most accurate ionization potential. On heavier atoms, the difference between architectures became more pronounced, and the improvement of the Psiformer relative to other models on absolute energy was more robust. The results on ionization potentials were more mixed, and no Ansatz came within chemical accuracy of the ground truth. This shows that even the Psiformer is not yet converged to the ground truth, though it is the Ansatz closest to reaching it so far.
# 4.4 LARGER MOLECULES
Much of the promise of deep neural networks for QMC comes from the fact that they can, in theory, scale much better than other all-electron methods, though this promise has yet to be realized. While CCSD(T) scales with the number of electrons as $\mathcal{O}(N^7)$ , a single iteration of wavefunction optimization for a fixed neural network size scales as $\mathcal{O}(N^4)$ in theory, and in practice is closer to cubic for system sizes of several dozen electrons. Self-attention has been especially powerful when scaling to extremely large problems in machine learning; here we investigate whether the same holds true for QMC, by applying both the FermiNet and Psiformer to single molecules much larger than those investigated in most prior work.
Results on systems from benzene (42 electrons) to carbon tetrachloride $(\mathrm{CCl}_4, 74$ electrons) are given in Table 2. On these systems, CCSD(T) becomes impractical for us to run without approximations, so we only compare against other QMC results. Due to computational constraints, we were only able to run the small network configurations on these systems. Additionally, to help MCMC convergence, we updated half the electron positions at a time in each MCMC move, rather than moving all electrons simultaneously, as all-electron moves become less efficient for larger systems.
In Table 2, it can be seen that the Psiformer is not only significantly better on benzene than the FermiNet, as in Fig. 3, but also better than the best previously published DMC energy, a remarkable feat. For even larger molecules, there are no results in the literature with neural network wavefunctions to compare against, so we only compare the Psiformer and FermiNet directly. The Psiformer outperforms the FermiNet by an even larger margin on larger systems, reaching $120\ \mathrm{mHa}$ (75 kcal/mol) on $\mathrm{CCl_4}$ . On the three hydrocarbon systems investigated, LayerNorm has only a small impact. However, for $\mathrm{CCl_4}$ LayerNorm has a significant impact, accounting for $70\ \mathrm{mHa}$ of the total $120\ \mathrm{mHa}$ improvement over the FermiNet. While we do not claim these are the best variational results in the literature, it is clear that on larger molecules the Psiformer is a significant improvement over the FermiNet, which is itself the most accurate Ansatz for many smaller systems.
<table><tr><td rowspan="3"></td><td colspan="3">Ours</td><td colspan="2">Ren et al. (2022)</td><td rowspan="3">Expt.</td></tr><tr><td rowspan="2">FermiNet</td><td colspan="2">Psiformer</td><td rowspan="2">VMC</td><td rowspan="2">DMC</td></tr><tr><td>No LayerNorm</td><td>LayerNorm</td></tr><tr><td>Equilibrium</td><td>-464.3770(5)</td><td>-464.4624(2)</td><td>-464.4667(2)</td><td>-464.4067(3)</td><td>-464.4640(2)</td><td></td></tr><tr><td>Dissociated</td><td>-464.3724(6)</td><td>-464.4674(2)</td><td>-464.4660(2)</td><td></td><td></td><td></td></tr><tr><td>ΔEmono</td><td>-0.0640(6)</td><td>-0.0176(2)</td><td>-0.0119(2)</td><td>-0.0219(3)</td><td>-0.0020(5)</td><td>0.0038(6)</td></tr><tr><td>ΔE10Å</td><td>0.0046(8)</td><td>-0.0050(3)</td><td>0.0007(3)</td><td>0.0183(5)</td><td>0.0092(4)</td><td>0.0038(6)</td></tr></table>
Table 3: Energies for the benzene dimer, with center separation of $4.95\AA$ (equilibrium) and 10 Å (dissociated), and the estimated dissociation energy from taking the difference of the equilibrium energy from twice the monomer energy $(\Delta E_{\mathrm{mono}})$ and the dissociated energy $(\Delta E_{10\AA})$ . All $\Delta E_{\mathrm{mono}}$ results are based on comparing like-for-like dimer and monomer calculations. $\Delta E_{10\AA}$ results from Ren et al. (2022) are estimated from figures. Experimental energies are from Grover et al. (1987).
# 4.5 THE BENZENE DIMER
Finally, we look at the benzene dimer, a challenging benchmark system for computational chemistry due to the weak van der Waals force between the two molecules, and the largest molecular system ever investigated using neural network Ansatzes (Ren et al., 2022). The dimer has several possible equilibrium configurations, many of which have nearly equal energy (Sorella et al., 2007; Azadi & Cohen, 2015), making it additionally challenging to study computationally, but here we restrict our attention to the T-shaped structure (Fig. 4) so that we can directly compare against Ren et al. (2022).
Results are given in Table 3. The results from Ren et al. (2022) are from a small FermiNet (3 layers) trained for 2 million iterations. Even our "small" 4-layer FermiNet baseline (which is larger than their FermiNet) trained for 200,000 iterations is able to reach the same accuracy as their small FermiNet trained for 800,000 iterations (Ren et al. (2022) Supplementary Figure 7a). The Psiformer reaches a significantly lower energy than the FermiNet with the same number of training iterations, again surpassing the DMC result by a few mHa.
Figure 4: The T-shaped benzene dimer equilibrium.
We also tried to estimate the dissociation energy by similar means to Ren et al. (2022): comparing against the energy of the same model trained with a center separation of $10\,\text{\AA}$ , and against twice the energy of the same model trained on the monomer. Ironically, our FermiNet baseline, which had the worst absolute energy, had the best relative energy between configurations. While every model underestimated the energy relative to twice the monomer, the discrepancy was lowest with DMC. It should be noted that the zero-point vibrational energy (ZPE) is not included, the dissociation energies from Ren et al. (2022) in Table 3 are estimates based on their figures, and there is disagreement over the exact experimental energy (Grover et al., 1987; Krause et al., 1991), so this comparison should be considered a rough estimate only. Our main result is that the absolute energy of the Psiformer is a vast improvement over the FermiNet,
and we leave it to future work to properly apply the Psiformer to predicting binding energies.
# 5 DISCUSSION
We have shown that self-attention networks are capable of learning quantum mechanical properties of electrons far more effectively than comparable methods. The advantage of self-attention networks seems to become most pronounced on large systems, suggesting that these models should be the focus of future efforts to scale to even larger systems. In addition to the strong empirical results, using standard attention layers means that we can leverage existing work on improving scalability, either by architectural advances (Child et al., 2019; Wang et al., 2020; Xiong et al., 2021; Jaegle et al., 2021a;b) or software implementations optimized for specialized hardware (Dao et al., 2022), which could make it possible to scale these models even further. This presents a promising path towards studying the most challenging molecules and materials in silico with unprecedented accuracy.
# ETHICS STATEMENT
The work presented here focuses on fundamental questions in computational chemistry, and is not yet at the stage where it is likely to be widely adopted by experimental chemists. However, in the future, this line of work could lead to computational chemistry becoming much more accurate, making it easier for chemists and material scientists to make new discoveries without requiring cumbersome trial-and-error physical experiments. This could lead to the discovery of new beneficial drugs or industrial chemical processes which are more environmentally friendly. The field of experimental chemistry already has robust ethical standards and processes for preventing harmful applications, and we are confident that more accurate computational methods will not in any way make it easier to circumvent these safeguards.
# REPRODUCIBILITY STATEMENT
Details of training, network parameters, and optimization hyperparameters are given in the appendices for all experiments shown. The code is available under the Apache License 2.0 as part of the FermiNet repo at https://github.com/deepmind/ferminet.
# ACKNOWLEDGMENTS
We would like to thank Alex G. de G. Matthews for suggesting the model name, Alex Botev for assistance with KFAC, Michael Scherbela and Leon Gerard for providing data for figures, and James Kirkpatrick for support and encouragement.
# REFERENCES
Sam Azadi and R. E. Cohen. Chemical Accuracy from Quantum Monte Carlo for the Benzene Dimer. The Journal of Chemical Physics, 143(10):104301, 2015.
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer Normalization. arXiv preprint arXiv:1607.06450, 2016.
Simon Batzner, Albert Musaelian, Lixin Sun, Mario Geiger, Jonathan P Mailoa, Mordechai Kornbluth, Nicola Molinari, Tess E. Smidt, and Boris Kozinsky. E(3)-Equivariant Graph Neural Networks for Data-Efficient and Accurate Interatomic Potentials. Nature Communications, 13 (1):1-11, 2022.
Aleksandar Botev and James Martens. KFAC-JAX, 2022. URL http://github.com/deepmind/kfac-jax.
James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018. URL http://github.com/google/jax.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language Models are Few-Shot Learners. Advances in Neural Information Processing Systems, 33:1877-1901, 2020.
Giuseppe Carleo and Matthias Troyer. Solving the Quantum Many-Body Problem with Artificial Neural Networks. Science, 355(6325):602-606, 2017.
Gino Cassella, Halvard Sutterud, Sam Azadi, N. D. Drummond, David Pfau, James S. Spencer, and W. M. C. Foulkes. Discovering Quantum Phase Transitions with Fermionic Neural Networks. arXiv preprint arXiv:2202.05183, 2022.
Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. Generating Long Sequences with Sparse Transformers. arXiv preprint arXiv:1904.10509, 2019.
Kenny Choo, Antonio Mezzacapo, and Giuseppe Carleo. Fermionic Neural-Network States for Ab-initio Electronic Structure. Nature Communications, 11(1):1-7, 2020.
Larry A Curtiss, Krishnan Raghavachari, Paul C Redfern, and John A Pople. Assessment of Gaussian-3 and Density Functional Theories for a Larger Experimental Test Set. The Journal of Chemical Physics, 112(17):7374-7383, 2000.
Tri Dao, Daniel Y Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness. arXiv preprint arXiv:2205.14135, 2022.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:1810.04805, 2018.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv preprint arXiv:2010.11929, 2020.
N. D. Drummond, M. D. Towler, and R. J. Needs. Jastrow Correlation Factor for Atoms, Molecules, and Solids. Physical Review B, 70(23):235119, 2004.
RP Feynman and Michael Cohen. Energy spectrum of the excitations in liquid helium. Physical Review, 102(5):1189, 1956.
W. M. C. Foulkes, Lubos Mitas, R. J. Needs, and Guna Rajagopal. Quantum Monte Carlo Simulations of Solids. Reviews of Modern Physics, 73(1):33, 2001.
Fabian Fuchs, Daniel Worrall, Volker Fischer, and Max Welling. SE(3)-Transformers: 3D Roto-Translation Equivariant Attention Networks. Advances in Neural Information Processing Systems, 33:1970-1981, 2020.
Nicholas Gao and Stephan Gunnemann. Sampling-Free Inference for Ab-initio Potential Energy Surface Networks. arXiv preprint arXiv:2205.14962, 2022.
Leon Gerard, Michael Scherbela, Philipp Marquetand, and Philipp Grohs. Gold-Standard Solutions to the Schrödinger Equation using Deep Learning: How Much Physics Do We Need? Advances in Neural Information Processing Systems, 2022.
Rafael Gómez-Bombarelli, Jennifer N. Wei, David Duvenaud, José Miguel Hernández-Lobato, Benjamín Sánchez-Lengeling, Dennis Sheberla, Jorge Aguilera-Iparraguirre, Timothy D. Hirzel, Ryan P. Adams, and Alán Aspuru-Guzik. Automatic Chemical Design Using a Data-Driven Continuous Representation of Molecules. ACS Central Science, 4(2):268-276, 2018.
J. R. Grover, E. A. Walters, and E. T. Hui. Dissociation Energies of the Benzene Dimer and Dimer Cation. Journal of Physical Chemistry, 91(12):3233-3237, 1987.
Jiequn Han, Linfeng Zhang, and E Weinan. Solving Many-Electron Schrödinger Equation Using Deep Neural Networks. Journal of Computational Physics, 399:108929, 2019.
Trygve Helgaker, Poul Jorgensen, and Jeppe Olsen. Molecular Electronic-Structure Theory. John Wiley & Sons, 2014.
Jan Hermann, Zeno Schatzle, and Frank Noé. Deep-Neural-Network Solution of the Electronic Schrödinger Equation. Nature Chemistry, 12(10):891-897, 2020.
Jan Hermann, James Spencer, Kenny Choo, Antonio Mezzacapo, W. M. C. Foulkes, David Pfau, Giuseppe Carleo, and Frank Noé. Ab-initio Quantum Chemistry with Neural-Network Wavefunctions. arXiv preprint arXiv:2208.12590, 2022.
Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, et al. PerceiverIO: A General Architecture for Structured Inputs & Outputs. arXiv preprint arXiv:2107.14795, 2021a.
Andrew Jaegle, Felix Gimeno, Andy Brock, Oriol Vinyals, Andrew Zisserman, and Joao Carreira. Perceiver: General Perception with Iterative Attention. In International conference on machine learning, pp. 4651-4664. PMLR, 2021b.
John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, et al. Highly Accurate Protein Structure Prediction with AlphaFold. Nature, 596(7873):583-589, 2021.
Tosio Kato. On the Eigenfunctions of Many-Particle Systems in Quantum Mechanics. Communications on Pure and Applied Mathematics, 10(2):151-177, 1957.
James Kirkpatrick, Brendan McMorrow, David HP Turban, Alexander L. Gaunt, James S. Spencer, Alexander G. D. G. Matthews, Annette Obika, Louis Thiry, Meire Fortunato, David Pfau, et al. Pushing the Frontiers of Density Functionals by Solving the Fractional Electron Problem. Science, 374(6573):1385-1389, 2021.
Toshikatsu Koga, Hidenori Aoki, JM de la Vega, and Hiroshi Tatewaki. Atomic Ionization Potentials and Electron Affinities with Relativistic and Mass Corrections. Theoretical Chemistry Accounts, 96(4):248-255, 1997.
H. Krause, B. Ernstberger, and H. J. Neusser. Binding Energies of Small Benzene Clusters. Chemical Physics Letters, 184(5-6):411-417, 1991.
Yongkyung Kwon, DM Ceperley, and Richard M Martin. Effects of Three-Body and Backflow Correlations in the Two-Dimensional Electron Gas. Physical Review B, 48(16):12037, 1993.
Xiang Li, Cunwei Fan, Weiluo Ren, and Ji Chen. Fermionic Neural Network with Effective Core Potential. Physical Review Research, 4(1):013021, 2022.
Jeffmin Lin, Gil Goldshlager, and Lin Lin. Explicitly Antisymmetrized Neural Network Layers for Variational Monte Carlo Simulation. arXiv preprint arXiv:2112.03491, 2021.
Di Luo and Bryan K Clark. Backflow Transformations via Neural Networks for Quantum Many-Body Wave Functions. Physical Review Letters, 122(22):226401, 2019.
James Martens and Roger Grosse. Optimizing Neural Networks with Kronecker-factored Approximate Curvature. In Proceedings of the 32nd International Conference on Machine Learning - Volume 37, pp. 2408-2417. PMLR, 2015.
James Martens, Andy Ballard, Guillaume Desjardins, Grzegorz Swirszcz, Valentin Dalibard, Jascha Sohl-Dickstein, and Samuel S Schoenholz. Rapid Training of Deep Neural Networks Without Skip Connections or Normalization Layers Using Deep Kernel Shaping. arXiv preprint arXiv:2110.01765, 2021.
Mario Motta, Claudio Genovese, Fengjie Ma, Zhi-Hao Cui, Randy Sawaya, Garnet Kin-Lic Chan, Natalia Chepiga, Phillip Helms, Carlos Jimenez-Hoyos, Andrew J Millis, et al. Ground-State Properties of the Hydrogen Chain: Dimerization, Insulator-to-Metal Transition, and Magnetic Phases. Physical Review X, 10(3):031058, 2020.
Ryo Nagai, Ryosuke Akashi, and Osamu Sugino. Completing Density Functional Theory by Machine-Learning Hidden Messages from Molecules. npj Computational Materials, 6(1):43, 2020. ISSN 2057-3960. doi: 10.1038/s41524-020-0310-0.
Shivesh Pathak and Lucas K. Wagner. A Light Weight Regularization for Wave Function Parameter Gradients in Quantum Monte Carlo. AIP Advances, 10(8):085213, 2020.
David Pfau, James S. Spencer, Alexander G. G. Matthews, and W. M. C. Foulkes. Ab-initio Solution of the Many-Electron Schrödinger Equation with Deep Neural Networks. Physical Review Research, 2(3):033429, 2020.
Weiluo Ren, Weizhong Fu, and Ji Chen. Towards the Ground State of Molecules via Diffusion Monte Carlo on Neural Networks. arXiv preprint arXiv:2204.13903, 2022.
Erwin Schrödinger. An Undulatory Theory of the Mechanics of Atoms and Molecules. Physical Review, 28(6):1049, 1926.
Kristof T Schütt, Huziel E Sauceda, P-J Kindermans, Alexandre Tkatchenko, and K-R Müller. Schnet-a Deep Learning Architecture for Molecules and Materials. The Journal of Chemical Physics, 148(24):241722, 2018.
Marwin H. S. Segler, Mike Preuss, and Mark P. Waller. Planning Chemical Syntheses with Deep Neural Networks and Symbolic AI. Nature, 555(7698):604-610, 2018.
Sandro Sorella. Green Function Monte Carlo with Stochastic Reconfiguration. Physical Review Letters, 80(20):4558, 1998.
Sandro Sorella, Michele Casula, and Dario Rocca. Weak Binding Between Two Aromatic Rings: Feeling the Van der Waals Attraction by Quantum Monte Carlo Methods. The Journal of Chemical Physics, 127(1):014105, 2007.
James S. Spencer, David Pfau, Aleksandar Botev, and W. M. C. Foulkes. Better, Faster Fermionic Neural Networks. arXiv preprint arXiv:2011.07125, 2020a.
James S. Spencer, David Pfau, and FermiNet Contributors. FermiNet, 2020b. URL http://github.com/deepmind/ferminet.
Attila Szabo and Neil S. Ostlund. Modern Quantum Chemistry: Introduction to Advanced Electronic Structure Theory. Courier Corporation, 2012.
Michele Taddei, Michele Ruggeri, Saverio Moroni, and Markus Holzmann. Iterative Backflow Renormalization Procedure for Many-Body Ground-State Wave Functions of Strongly Interacting Normal Fermi Liquids. Physical Review B, 91(11):115106, 2015.
C. J. Umrigar, Julien Toulouse, Claudia Filippi, Sandro Sorella, and Richard G. Hennig. Alleviation of the Fermion-Sign Problem by Optimization of Many-Body Wave Functions. Physical Review Letters, 98(11):110201, 2007.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is All You Need. Advances in Neural Information Processing Systems, 30, 2017.
Sinong Wang, Belinda Z Li, Madian Khabsa, Han Fang, and Hao Ma. Linformer: Self-Attention with Linear Complexity. arXiv preprint arXiv:2006.04768, 2020.
Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, and Vikas Singh. Nystromformer: A Nystrom-Based Algorithm for Approximating Self-Attention. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 14138-14148, 2021.
|
| 279 |
+
Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, Kurt Keutzer, and Cho-Jui Hsieh. Large Batch Optimization for Deep Learning: Training BERT in 76 Minutes. Eighth International Conference on Learning Representations (ICLR), 2020.
|
| 280 |
+
Guodong Zhang, Lala Li, Zachary Nado, James Martens, Sushant Sachdeva, George Dahl, Chris Shallue, and Roger B Grosse. Which Algorithmic Choices Matter at Which Batch Sizes? Insights from a Noisy Quadratic Model. Advances in Neural Information Processing Systems (NeurIPS), 32, 2019.
|
| 281 |
+
|
| 282 |
+
# A APPENDIX
|
| 283 |
+
|
| 284 |
+
# A.1 CALCULATING ENERGIES AND GRADIENTS
|
| 285 |
+
|
| 286 |
+
It is generally more numerically stable to work directly with the log wavefunction, and the local energy can be expressed as
|
| 287 |
+
|
| 288 |
+
$$
|
| 289 |
+
E_L(\boldsymbol{x}) = -\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{3}\left[\frac{\partial^2 \log|\Psi(\boldsymbol{x})|}{\partial r_{ij}^2} + \left(\frac{\partial \log|\Psi(\boldsymbol{x})|}{\partial r_{ij}}\right)^2\right] + V(\boldsymbol{x}) \tag{10}
|
| 290 |
+
$$
|
| 291 |
+
|
| 292 |
+
where $V(\pmb{x})$ is the potential energy (the last three terms of Eq 1).
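To make Eq. 10 concrete, the sketch below evaluates the local energy in the log domain for the ground state of a one-dimensional harmonic oscillator (a toy stand-in for the molecular Hamiltonian; the function names and the finite-difference derivatives are illustrative, whereas the real implementation uses automatic differentiation). For an exact eigenstate the local energy is constant, here 0.5, at every configuration:

```python
import numpy as np

def log_psi(x):
    # Unnormalized ground state of the 1D harmonic oscillator:
    # psi(x) = exp(-x^2 / 2), so log|psi(x)| = -x^2 / 2.
    return -0.5 * x**2

def local_energy(x, h=1e-4):
    # Central finite differences for the first and second derivatives
    # of log|psi|, combined as in Eq. 10 (one particle, one dimension).
    d1 = (log_psi(x + h) - log_psi(x - h)) / (2 * h)
    d2 = (log_psi(x + h) - 2 * log_psi(x) + log_psi(x - h)) / h**2
    potential = 0.5 * x**2  # harmonic-oscillator V(x)
    return -0.5 * (d2 + d1**2) + potential
```

Away from an exact eigenstate the local energy varies with $x$, and it is this variation that variational optimization drives toward zero.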
|
| 293 |
+
|
| 294 |
+
The gradient of the energy is given by
|
| 295 |
+
|
| 296 |
+
$$
|
| 297 |
+
\nabla \mathbb{E}_{\boldsymbol{x}\sim\Psi^2}\left[E_L(\boldsymbol{x})\right] = 2\,\mathbb{E}_{\boldsymbol{x}\sim\Psi^2}\left[\left(E_L(\boldsymbol{x}) - \mathbb{E}_{\boldsymbol{x}'\sim\Psi^2}\left[E_L(\boldsymbol{x}')\right]\right)\nabla\log|\Psi(\boldsymbol{x})|\right] \tag{11}
|
| 298 |
+
$$
|
| 299 |
+
|
| 300 |
+
where the local energy $E_L(\pmb{x}) = \Psi^{-1}(\pmb{x})\hat{H}\Psi(\pmb{x})$ . Note that the $E_L(\pmb{x}) - \mathbb{E}_{\pmb{x}'\sim \Psi^2}[E_L(\pmb{x}')]$ term is the difference between the local energy at $\pmb{x}$ and the average energy over all $\pmb{x}'$ .
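A minimal Monte Carlo sketch of this estimator for a single scalar parameter (names are illustrative). Because only the deviation from the batch mean enters, shifting every local energy by a constant leaves the estimate unchanged, which is what makes the centered form lower-variance than the uncentered one:

```python
import numpy as np

def energy_gradient(e_loc, grad_log_psi):
    # Eq. 11 over one batch of walkers:
    # 2 * E[(E_L(x) - E[E_L]) * grad log|psi(x)|].
    centered = e_loc - e_loc.mean()
    return 2.0 * np.mean(centered * grad_log_psi)
```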
|
| 301 |
+
|
| 302 |
+
We make a small but critical change to how the gradients are computed relative to Pfau et al. (2020) which stabilizes training for all models considered here. The local energy often has very large tails due to numerical issues, especially near cusps (where two particles overlap) and nodes (where the wavefunction goes to zero) (Pathak & Wagner, 2020). To mitigate this, the local energy is often truncated in practice. Let $\langle E_L\rangle_{\mathrm{mean}}$ denote the mean local energy for one minibatch of walkers and $\langle E_L\rangle_{\mathrm{median}}$ denote the median. Then in Pfau et al. (2020), the local energies were clipped to be within a constant multiple $\rho$ of the mean absolute deviation around the mean: $\mathrm{MAD}_{\mathrm{mean}}(E_L) = \frac{1}{N}\sum_i|E_L(\pmb{x}_i) - \langle E_L\rangle_{\mathrm{mean}}|$ . While this is more robust than the standard deviation, it is still susceptible to large outliers because it is centered at the mean. Instead, we use the mean absolute deviation around the median to determine the window for clipping: $\mathrm{MAD}_{\mathrm{median}}(E_L) = \frac{1}{N}\sum_i|E_L(\pmb{x}_i) - \langle E_L\rangle_{\mathrm{median}}|$ .
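The distinction between the two centerings can be sketched with a toy batch containing a single outlier (illustrative code, not the actual implementation). The median-centered window is both tighter and anchored to the bulk of the walkers, while the mean-centered window is dragged toward the outlier:

```python
import numpy as np

def mad_window(e_loc, rho=5.0, center="median"):
    # Clipping window [c - rho*MAD, c + rho*MAD], where MAD is the mean
    # absolute deviation around either the mean or the median.
    c = np.median(e_loc) if center == "median" else np.mean(e_loc)
    width = rho * np.mean(np.abs(e_loc - c))
    return c - width, c + width
```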
|
| 303 |
+
|
| 304 |
+
Secondly, in Pfau et al. (2020), the average energy term $\mathbb{E}_{\boldsymbol{x}^{\prime}\sim \Psi^{2}}[E_{L}(\boldsymbol{x}^{\prime})]$ in the gradient was approximated by the mean local energy $\langle E_L\rangle_{\mathrm{mean}}$ of a minibatch. That meant that outliers were still included in this term. We instead use the mean of the clipped local energies. This guarantees that the mean over a minibatch of the energy difference term is always zero, improving stability during optimization. If we let $\mathrm{clip}_{\mathrm{mean / median}}(x;\sigma)$ denote the functions that clip $x$ to be in the range $[\langle x\rangle_{\mathrm{mean / median}} - \sigma ,\langle x\rangle_{\mathrm{mean / median}} + \sigma ]$ , then the gradient for one batch in Pfau et al. (2020) was:
|
| 305 |
+
|
| 306 |
+
$$
|
| 307 |
+
\hat{E}_L(\boldsymbol{x}) = \operatorname{clip}_{\mathrm{mean}}\left(E_L(\boldsymbol{x});\, \rho\,\mathrm{MAD}_{\mathrm{mean}}(E_L)\right) \tag{12}
|
| 308 |
+
$$
|
| 309 |
+
|
| 310 |
+
$$
|
| 311 |
+
\left\langle \left(\hat{E}_L(\boldsymbol{x}) - \left\langle E_L \right\rangle_{\mathrm{mean}}\right) \nabla \log|\Psi(\boldsymbol{x})| \right\rangle_{\mathrm{mean}} \tag{13}
|
| 312 |
+
$$
|
| 313 |
+
|
| 314 |
+
while here we use:
|
| 315 |
+
|
| 316 |
+
$$
|
| 317 |
+
\hat{E}_L(\boldsymbol{x}) = \operatorname{clip}_{\mathrm{median}}\left(E_L(\boldsymbol{x});\, \rho\,\mathrm{MAD}_{\mathrm{median}}(E_L)\right) \tag{14}
|
| 318 |
+
$$
|
| 319 |
+
|
| 320 |
+
$$
|
| 321 |
+
\left\langle \left(\hat{E}_L(\boldsymbol{x}) - \left\langle \hat{E}_L \right\rangle_{\mathrm{mean}}\right) \nabla \log|\Psi(\boldsymbol{x})| \right\rangle_{\mathrm{mean}} \tag{15}
|
| 322 |
+
$$
|
| 323 |
+
|
| 324 |
+
While this may seem like a subtle difference, it has a substantial effect on larger systems, especially heavier atoms. A similar technique was used to stabilize training the FermiNet with pseudopotentials (Li et al., 2022), but instead of clipping outlier local energies, the outlier walkers were removed entirely for that minibatch. We found that clipping with proper centering was more effective than removing the walkers entirely.
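Putting both changes together, a sketch of the per-batch quantities in Eqs. 14 and 15 for a single scalar parameter (illustrative names). Centering on the mean of the clipped energies makes the difference term average to exactly zero over the minibatch:

```python
import numpy as np

def clipped_energy_diff(e_loc, rho=5.0):
    # Eq. 14: clip the local energies to a window of rho MADs around
    # the batch median.
    med = np.median(e_loc)
    width = rho * np.mean(np.abs(e_loc - med))
    e_hat = np.clip(e_loc, med - width, med + width)
    # Eq. 15 centers on the mean of the *clipped* energies, so this
    # difference term has exactly zero batch mean by construction.
    return e_hat - e_hat.mean()

def batch_gradient(e_loc, grad_log_psi, rho=5.0):
    # Per-batch gradient term of Eq. 15 for one scalar parameter.
    return np.mean(clipped_energy_diff(e_loc, rho) * grad_log_psi)
```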
|
| 325 |
+
|
| 326 |
+
# A.2 TRAINING
|
| 327 |
+
|
| 328 |
+
In this section we give details on training and further differences from the original FermiNet (Pfau et al., 2020).
|
| 329 |
+
|
| 330 |
+
# A.2.1 DENSE DETERMINANTS
|
| 331 |
+
|
| 332 |
+
The original FermiNet and PauliNet Ansatzes used block-diagonal determinants for spin-up and spin-down electrons (Pfau et al., 2020; Hermann et al., 2020), such that each determinant could be
|
| 333 |
+
|
| 334 |
+

|
| 335 |
+
(a) $\mathrm{S}_2$ $R = 1.9202\AA$
|
| 336 |
+
|
| 337 |
+

|
| 338 |
+
(b) Fe atom
|
| 339 |
+
Figure 5: Comparison of different optimization algorithms (KFAC and ADAM) for the Psiformer on (a) the sulphur dimer and (b) the iron atom, with and without LayerNorm. While there is a clear advantage to using KFAC on both systems, LayerNorm has a marginal impact on the sulphur dimer. On the iron atom, however, LayerNorm improves the accuracy of ADAM and the stability of KFAC. A learning rate of 3e-4 was used for ADAM. A rolling mean of the last 1000 iterations is used to smooth the energy.
|
| 340 |
+
|
| 341 |
+
factorized into the product of a determinant of spin-up electrons of size $N_{\uparrow} \times N_{\uparrow}$ and a determinant of spin-down electrons of size $N_{\downarrow} \times N_{\downarrow}$, as is the case with conventional VMC ansatzes (Foulkes et al., 2001):
|
| 342 |
+
|
| 343 |
+
$$
|
| 344 |
+
\det\left[\boldsymbol{\Phi}(\boldsymbol{x})\right] = \left|\begin{array}{cc} \boldsymbol{\Phi}_{\uparrow}\left(\boldsymbol{x}_{\uparrow}\right) & \boldsymbol{0} \\ \boldsymbol{0} & \boldsymbol{\Phi}_{\downarrow}\left(\boldsymbol{x}_{\downarrow}\right) \end{array}\right| = \det\left[\boldsymbol{\Phi}_{\uparrow}\left(\boldsymbol{x}_{\uparrow}\right)\right]\det\left[\boldsymbol{\Phi}_{\downarrow}\left(\boldsymbol{x}_{\downarrow}\right)\right]. \tag{16}
|
| 345 |
+
$$
|
| 346 |
+
|
| 347 |
+
where $\pmb{x}_{\sigma}$ denotes the set of electron states with the specified spin and different functions are used for electrons of different spins. The FermiNet authors subsequently proposed simply using dense determinants of size $N\times N$ , $N = N_{\uparrow} + N_{\downarrow}$ (Spencer et al., 2020b):
|
| 348 |
+
|
| 349 |
+
$$
|
| 350 |
+
\det\left[\boldsymbol{\Phi}(\boldsymbol{x})\right] = \left|\boldsymbol{\Phi}_{\uparrow}\left(\boldsymbol{x}_{\uparrow}\right) \quad \boldsymbol{\Phi}_{\downarrow}\left(\boldsymbol{x}_{\downarrow}\right)\right| \tag{17}
|
| 351 |
+
$$
|
| 352 |
+
|
| 353 |
+
where now each block is of dimension $N \times N_{\sigma}$. This has largely become the default choice for FermiNet architectures and has been shown to provide improved accuracy at small additional cost for a myriad of systems (Lin et al., 2021; Cassella et al., 2022; Ren et al., 2022; Gerard et al., 2022; Gao & Gunnemann, 2022).
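The factorization in Eq. 16 is straightforward to verify numerically. The sketch below assembles a block-diagonal matrix from random spin-up and spin-down blocks and checks that its determinant is the product of the two spin determinants; the dense form of Eq. 17 instead fills all $N \times N$ entries, so no such factorization holds:

```python
import numpy as np

rng = np.random.default_rng(0)
n_up, n_down = 3, 2
phi_up = rng.normal(size=(n_up, n_up))        # spin-up orbital matrix
phi_down = rng.normal(size=(n_down, n_down))  # spin-down orbital matrix

# Block-diagonal determinant (Eq. 16): off-diagonal blocks are zero.
n = n_up + n_down
block = np.zeros((n, n))
block[:n_up, :n_up] = phi_up
block[n_up:, n_up:] = phi_down
```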
|
| 354 |
+
|
| 355 |
+
# A.2.2 IMPLEMENTATION OF FERMINET+SCHNET
|
| 356 |
+
|
| 357 |
+
To best reproduce the results from Gerard et al. (2022), we implemented the changes to the FermiNet which they claimed made the largest difference. All experiments in this paper, including the FermiNet, used dense determinants. We added the SchNet-like continuous filter convolutions, as well as the nuclear embedding stream and separate streams for same-spin and different-spin electrons in the two-electron stream. We also compared their training hyperparameters against ours, except for batch size, which was kept at 4096 for all experiments. We did not implement their local input features or envelope initialization, as their results suggested they did not make as significant a difference, and our longer pretraining likely had the same effect as changing the envelope initialization.
|
| 358 |
+
|
| 359 |
+
# A.2.3 CHOICE OF OPTIMIZER AND LAYERNORM
|
| 360 |
+
|
| 361 |
+
The original FermiNet was only able to reach high accuracy when trained with Kronecker-factored approximate curvature (KFAC) (Martens & Grosse, 2015). To see if the same holds true for the Psiformer, here we compare training with KFAC against ADAM on several systems. Additionally, when trained with ADAM, self-attention layers usually require LayerNorm (Ba et al., 2016). However, other work has suggested that in some contexts normalization may not be necessary when using KFAC (Martens et al., 2021), so we also compare the Psiformer with and without LayerNorm. Figure 5 shows a comparison for the Psiformer on the sulphur dimer and the iron atom. Consistent with the FermiNet, the Psiformer consistently converges faster and to lower energies with KFAC. With LayerNorm, the situation is more ambiguous. On the sulphur dimer, it seems to have marginal
|
| 362 |
+
|
| 363 |
+
<table><tr><td></td><td>Parameter</td><td>Value</td></tr><tr><td rowspan="5">Training</td><td>Training iterations</td><td>2e5</td></tr><tr><td>Learning rate at time t</td><td>lr<sub>0</sub>(1 + t/τ<sub>0</sub>)<sup>-1</sup></td></tr><tr><td>Initial learning rate lr<sub>0</sub></td><td>0.05</td></tr><tr><td>Learning rate decay τ<sub>0</sub></td><td>1e5</td></tr><tr><td>Local energy clipping ρ</td><td>5.0</td></tr><tr><td rowspan="3">Pretraining</td><td>Pretraining optimizer</td><td>LAMB</td></tr><tr><td>Pretraining iterations</td><td>2e4 or 1e5</td></tr><tr><td>Pretraining basis set</td><td>STO-6G</td></tr><tr><td rowspan="2">Markov Chain Monte Carlo</td><td>Batch size</td><td>4096</td></tr><tr><td>Decorrelation steps</td><td>30</td></tr><tr><td rowspan="2">KFAC</td><td>Norm constraint</td><td>1e-3</td></tr><tr><td>Damping</td><td>1e-3</td></tr></table>
|
| 364 |
+
|
| 365 |
+
Table 4: Table of default hyperparameters used.
|
| 366 |
+
|
| 367 |
+
<table><tr><td>Systems</td><td>Pretraining Steps</td><td>MCMC Blocks</td><td>LayerNorm</td></tr><tr><td>Small molecules (Fig. 2)</td><td>20,000</td><td>1</td><td>No</td></tr><tr><td>Third-row atoms (Table 1)</td><td>100,000</td><td>2</td><td>Yes</td></tr><tr><td>Large molecules (Table 2)</td><td>100,000</td><td>2</td><td>Both</td></tr><tr><td>Benzene dimer</td><td>100,000</td><td>4</td><td>Both</td></tr></table>
|
| 368 |
+
|
| 369 |
+
Table 5: Variations in hyperparameters between experiments.
|
| 370 |
+
|
| 371 |
+
impact, while on the iron atom, it definitely improves training with ADAM, and seems to improve stability when using KFAC.
|
| 372 |
+
|
| 373 |
+
# A.2.4 HYPERPARAMETERS
|
| 374 |
+
|
| 375 |
+
Table 4 shows the default hyperparameters used for training all models implemented in this work. Note that Pfau et al. (2020) took the sum over gradients across the batch on each device and averaged over devices, whereas here the gradients are averaged over the entire batch. This amounts to a rescaling of the learning rate, meaning that the same learning rate as in Pfau et al. (2020) cannot be used.
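The rescaling can be seen with a toy batch (hypothetical numbers): summing per-walker gradients yields a result larger than averaging by exactly the batch size, so a learning rate tuned under one reduction must be rescaled by the (per-device) batch size under the other:

```python
import numpy as np

per_walker_grads = np.arange(8.0)   # hypothetical gradients, batch of 8
batch_size = per_walker_grads.size

sum_reduced = per_walker_grads.sum()    # per-device sum, as in Pfau et al. (2020)
mean_reduced = per_walker_grads.mean()  # full-batch mean, as used here
```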
|
| 376 |
+
|
| 377 |
+
As in previous FermiNet implementations, we pretrain all networks to match Hartree-Fock (HF) orbitals computed using PySCF. Here, we use the LAMB optimizer (You et al., 2020) as the pretraining optimizer. We find that longer pretraining stabilizes training for all models considered. In addition, during pretraining, we draw samples from the HF orbitals only, instead of the neural network wavefunction. A smaller number of pretraining iterations (20,000) was used for small molecules in Section 4.1, while 100,000 iterations were used for third-row atoms and larger molecules.
|
| 378 |
+
|
| 379 |
+
To generate samples from $\Psi^2$ , we use the Metropolis-Hastings algorithm with symmetric Gaussian proposals, as in Pfau et al. (2020). We increase the number of decorrelation MCMC steps between optimization iterations from 10 in Pfau et al. (2020) to 30. Additionally, for larger systems, we do not update all electron positions simultaneously in one Metropolis step. Instead, we split the electrons into multiple blocks, and iteratively update each block once per step. This is an intermediate between the all-electron and one-electron moves commonly used in VMC.
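A simplified single-walker sketch of the blocked update (function and parameter names are illustrative; the real implementation operates on batches of walkers). Each block of electrons gets its own symmetric Gaussian proposal and Metropolis accept/reject test within one step:

```python
import numpy as np

def blocked_mh_step(x, log_prob, n_blocks, step_size, rng):
    # One decorrelation step: electrons (rows of x) are split into
    # blocks, and each block is proposed and tested in turn.
    blocks = np.array_split(np.arange(x.shape[0]), n_blocks)
    for idx in blocks:
        proposal = x.copy()
        proposal[idx] += step_size * rng.normal(size=proposal[idx].shape)
        # Symmetric proposal: accept with probability min(1, p(x')/p(x)).
        if np.log(rng.uniform()) < log_prob(proposal) - log_prob(x):
            x = proposal
    return x
```

With `n_blocks=1` this reduces to all-electron moves, and with one electron per block it reduces to sequential one-electron moves.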
|
| 380 |
+
|
| 381 |
+
Note that while all models were trained for 200,000 optimization iterations, this does not mean that all systems required that many iterations to converge. Many smaller systems converged in far fewer iterations than this, but the same number was used for all systems to minimize confusion.
|
| 382 |
+
|
| 383 |
+
Not all systems used identical parameters: for larger systems we increased the number of pretraining steps and blocks per MCMC update, and LayerNorm was not used for small molecules in Table 2. In Table 5 we specify which systems were trained with which settings.
|
| 384 |
+
|
| 385 |
+
In Section 4.1, the performance of "small" and "large" FermiNet and Psiformer models was investigated on a set of small and medium molecules. Table 6 gives the network parameters used for these
|
| 386 |
+
|
| 387 |
+
<table><tr><td rowspan="2">Parameter</td><td colspan="2">FermiNet</td><td colspan="2">Psiformer</td></tr><tr><td>Small</td><td>Large</td><td>Small</td><td>Large</td></tr><tr><td>Determinants</td><td>16</td><td>32</td><td>16</td><td>32</td></tr><tr><td>Network layers</td><td>4</td><td>4</td><td>4</td><td>4</td></tr><tr><td>Attention heads</td><td>N/A</td><td>N/A</td><td>4</td><td>8</td></tr><tr><td>Attention dimension</td><td>N/A</td><td>N/A</td><td>64</td><td>64</td></tr><tr><td>MLP hidden dims: one-electron</td><td>256</td><td>512</td><td>256</td><td>512</td></tr><tr><td>MLP hidden dims: two-electron</td><td>32</td><td>32</td><td>N/A</td><td>N/A</td></tr></table>
|
| 388 |
+
|
| 389 |
+
Table 6: Table of network parameters used for the Psiformer and FermiNet.
|
| 390 |
+
|
| 391 |
+
<table><tr><td rowspan="2" colspan="2">System</td><td colspan="3">FermiNet</td><td colspan="2">Psiformer</td></tr><tr><td>Pfau et al. (2020)</td><td>Small</td><td>Large</td><td>Small</td><td>Large</td></tr><tr><td rowspan="2">LiH</td><td>Energy</td><td>-8.0705(1)</td><td>-8.07050(1)</td><td>-8.070515(8)</td><td>-8.070528(5)</td><td>-8.070536(4)</td></tr><tr><td># parameters</td><td>668128</td><td>683744</td><td>2610400</td><td>1593858</td><td>6366210</td></tr><tr><td rowspan="2">Li2</td><td>Energy</td><td>-14.99475(1)</td><td>-14.99480(2)</td><td>-14.99484(2)</td><td>-14.99486(1)</td><td>-14.99485(2)</td></tr><tr><td># parameters</td><td>676960</td><td>700384</td><td>2676448</td><td>1602178</td><td>6399234</td></tr><tr><td rowspan="2">NH3</td><td>Energy</td><td>-56.56295(8)</td><td>-56.56347(4)</td><td>-56.56387(3)</td><td>-56.56367(2)</td><td>-56.56381(2)</td></tr><tr><td># parameters</td><td>703968</td><td>741088</td><td>2823392</td><td>1621506</td><td>6470658</td></tr><tr><td rowspan="2">CH4</td><td>Energy</td><td>-40.51400(7)</td><td>-40.51430(3)</td><td>-40.51450(3)</td><td>-40.51454(2)</td><td>-40.51461(1)</td></tr><tr><td># parameters</td><td>708640</td><td>744800</td><td>2830816</td><td>1622850</td><td>6473346</td></tr><tr><td rowspan="2">CO</td><td>Energy</td><td>-113.3218(1)</td><td>-113.32354(7)</td><td>-113.32444(5)</td><td>-113.32416(4)</td><td>-113.32466(3)</td></tr><tr><td># parameters</td><td>712288</td><td>766944</td><td>2940640</td><td>1635458</td><td>6531330</td></tr><tr><td rowspan="2">N2</td><td>Energy</td><td>-109.5388(1)</td><td>-109.54046(6)</td><td>-109.54148(6)</td><td>-109.54137(4)</td><td>-109.54179(4)</td></tr><tr><td># parameters</td><td>712288</td><td>766944</td><td>2940640</td><td>1635458</td><td>6531330</td></tr><tr><td rowspan="2">C2H4</td><td>Energy</td><td>-78.5844(1)</td><td>-78.58604(5)</td><td>-78.58701(6)</td><td>-78.58762(3)</td><td>-78.58794(3)</td></tr><tr><td># 
parameters</td><td>743648</td><td>799968</td><td>3039456</td><td>1649922</td><td>6576642</td></tr><tr><td rowspan="2">methylamine</td><td>Energy</td><td>-95.8554(2)</td><td>-95.85917(6)</td><td>-95.86010(5)</td><td>-95.86050(4)</td><td>-95.86096(3)</td></tr><tr><td># parameters</td><td>759712</td><td>821344</td><td>3114976</td><td>1660098</td><td>6613378</td></tr><tr><td rowspan="2">O3</td><td>Energy</td><td>-225.4145(3)</td><td>-225.4226(2)</td><td>-225.4268(1)</td><td>-225.43061(9)</td><td>-225.43231(8)</td></tr><tr><td># parameters</td><td>763360</td><td>854752</td><td>3280096</td><td>1678850</td><td>6700034</td></tr><tr><td rowspan="2">ethanol</td><td>Energy</td><td>-155.0308(3)</td><td>-155.0419(1)</td><td>-155.0455(1)</td><td>-155.04656(7)</td><td>-155.04759(6)</td></tr><tr><td># parameters</td><td>815904</td><td>899936</td><td>3403232</td><td>1698370</td><td>6755458</td></tr><tr><td rowspan="2">bicyclobutane</td><td>Energy</td><td>-155.9263(6)</td><td>-155.9388(1)</td><td>-155.9432(1)</td><td>-155.94619(8)</td><td>-155.94836(7)</td></tr><tr><td># parameters</td><td>845920</td><td>940000</td><td>3548896</td><td>1717890</td><td>6827266</td></tr></table>
|
| 392 |
+
|
| 393 |
+
Table 7: Energies and number of parameters in network for set of small molecules studied in Pfau et al. (2020). Note that Pfau et al. (2020) used block-diagonal determinants and did not report precise parameter counts; parameter counts for their results are estimated using their published settings and the FermiNet JAX implementation (Spencer et al., 2020b). Our FermiNet experiments used the envelope function proposed in Spencer et al. (2020a) and dense determinants, resulting in slightly different numbers of parameters between the networks of Pfau et al. (2020) and our FermiNet (Small) networks.
|
| 394 |
+
|
| 395 |
+
models. In all other experiments, unless otherwise specified, the 'small' network configurations were used. Table 7 contains the number of parameters used by each model for this set of molecules.
|
| 396 |
+
|
| 397 |
+
# A.3 COMPUTATIONAL DETAILS
|
| 398 |
+
|
| 399 |
+
All models were implemented in JAX (Bradbury et al., 2018) based upon the public FermiNet (Spencer et al., 2020b) and KFAC implementations (Botev & Martens, 2022), and trained in parallel using between 16 and 64 A100 GPUs, depending on the system size. All calculations were done in standard single precision, as we found that TensorFloat-32 calculations on A100 had numerical accuracy issues. A table of total training time (including pretraining) for several models is given in Table 9. Empirically, the time per iteration scaled roughly cubically with system size, e.g. the benzene dimer required $\sim 8$ times the computational resources of the benzene molecule, though the number of atoms also played a significant role in timing. For systems with very large numbers of atoms, like the benzene dimer, the FermiNet and Psiformer ran at nearly the same speed. This is likely because the determinants and envelope, which are similar between the FermiNet and Psiformer, make up a larger share of the total run time as the systems become larger. Geometries are taken from Curtiss et al. (2000) and given in Table 8 or in Pfau et al. (2020). The benzene dimer geometry is obtained via a rigid translation and rotation of two monomers.
|
| 400 |
+
|
| 401 |
+
<table><tr><td>System</td><td>atom</td><td colspan="3">position (a0)</td></tr><tr><td rowspan="7">benzene</td><td>C</td><td>2.28339</td><td>2.63664</td><td>0.0000</td></tr><tr><td>C</td><td>2.28339</td><td>1.31832</td><td>0.0000</td></tr><tr><td>C</td><td>2.28339</td><td>-2.63664</td><td>0.0000</td></tr><tr><td>C</td><td>2.28339</td><td>-1.31832</td><td>0.0000</td></tr><tr><td>C</td><td>2.28339</td><td>-1.31832</td><td>0.0000</td></tr><tr><td>C</td><td>2.28339</td><td>-1.31832</td><td>0.0000</td></tr><tr><td>C</td><td>2.28339</td><td>-1.31832</td><td>0.0000</td></tr></table>
|
| 402 |
+
|
| 403 |
+
Table 8: Molecular geometries in Bohr.
|
| 404 |
+
|
| 405 |
+
<table><tr><td rowspan="2">System</td><td colspan="2">FermiNet</td><td colspan="2">FermiNet+SchNet</td><td colspan="2">Psiformer</td></tr><tr><td>Small</td><td>Large</td><td>Small</td><td>Large</td><td>Small</td><td>Large</td></tr><tr><td>Bicyclobutane</td><td>484</td><td>789</td><td>-</td><td>-</td><td>712</td><td>1412</td></tr><tr><td>Zn</td><td>592</td><td>1200</td><td>808</td><td>1365</td><td>1317</td><td>1600</td></tr><tr><td>Benzene</td><td>1224</td><td>-</td><td>1608</td><td>-</td><td>1768</td><td>-</td></tr><tr><td>Benzene Dimer</td><td>10576</td><td>-</td><td>-</td><td>-</td><td>10695</td><td>-</td></tr></table>
|
| 406 |
+
|
| 407 |
+
Table 9: Number of A100 GPU hours required to train several systems.
|
| 408 |
+
|
| 409 |
+

|
| 410 |
+
(a) $\mathrm{N}_2$ $\mathrm{R} = 1.094\AA$
|
| 411 |
+
|
| 412 |
+

|
| 413 |
+
(b) ethanol
|
| 414 |
+
Figure 6: Ablation studies for the Psiformer without a Jastrow factor or input rescaling on (a) the nitrogen dimer and (b) ethanol. A rolling mean of the last 1000 iterations is used to smooth the energy. The divergence of the energy without using input rescaling, as shown here for ethanol, is typical for medium or large systems.
|
| 415 |
+
|
| 416 |
+
# A.4 ABLATION STUDIES
|
| 417 |
+
|
| 418 |
+
Two modifications to the self-attention network proved critical to stability for the Psiformer: the Jastrow factor ensuring correct behaviour at electronic cusps, and rescaling the input distances. Here we show ablation studies without these modifications.
|
| 419 |
+
|
| 420 |
+
Figure 6 shows the Psiformer on (a) the nitrogen dimer and (b) ethanol, without a Jastrow factor and without input feature rescaling. In the absence of an electron-electron Jastrow factor to enforce cusp conditions, the energy is very noisy. If the input features are not rescaled, the energy is often unstable. For smaller systems, as in the nitrogen dimer shown, training may recover from the instability, but especially for medium or larger systems, the energy often diverges.
|
| 421 |
+
|
| 422 |
+
# A.5 ATTENTION MAPS
|
| 423 |
+
|
| 424 |
+
Figures 7 and 8 show attention maps of the Psiformer for a sample electron configuration, for the benzene dimer at equilibrium and dissociated geometry respectively. Each attention map shows the output of the dot product between key and query vectors before the softmax operation. Four attention maps are shown for each network layer, one per attention head. The ordering of the electrons is chosen to highlight the structure of the attention maps in relation to electron distances: the electrons are grouped according to their closest atom, where the atoms of each benzene molecule are ordered around the ring. The block structure of the attention weights is clearly visible, where electrons attend more to other electrons around the same molecule.
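The quantity shown in each map can be sketched as follows (illustrative; the actual network uses learned per-head projections at every layer). Each map is the matrix of query-key dot products before the softmax, with one row and one column per electron, so grouping electrons by nearest atom makes the block structure visible:

```python
import numpy as np

def attention_logits(h, w_q, w_k):
    # Pre-softmax attention map for one head: rows index query
    # electrons, columns index key electrons.
    return (h @ w_q) @ (h @ w_k).T
```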
|
| 425 |
+
|
| 426 |
+

|
| 427 |
+
Figure 7: Attention maps for the Psiformer for a sample electron configuration from the benzene dimer at equilibrium geometry. For each network layer, the attention maps for 4 attention heads are shown. The electrons are grouped by nearest atom.
|
| 428 |
+
|
| 429 |
+

|
| 430 |
+
Figure 8: Attention maps for the Psiformer for a sample electron configuration from the benzene dimer at dissociated geometry. For each network layer, the attention maps for 4 attention heads are shown. The electrons are grouped by nearest atom.
|
2023/A Self-Attention Ansatz for Ab-initio Quantum Chemistry/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:36662f11d06c2ac6544a8b2d761f93c017c5ae2b8fd959ef27012e99cc343147
|
| 3 |
+
size 1342121
|
2023/A Self-Attention Ansatz for Ab-initio Quantum Chemistry/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2023/A Simple Approach for Visual Room Rearrangement_ 3D Mapping and Semantic Search/55bfe47f-9477-48ca-8278-43a53e11f20e_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2023/A Simple Approach for Visual Room Rearrangement_ 3D Mapping and Semantic Search/55bfe47f-9477-48ca-8278-43a53e11f20e_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2023/A Simple Approach for Visual Room Rearrangement_ 3D Mapping and Semantic Search/55bfe47f-9477-48ca-8278-43a53e11f20e_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:03b0423dc914fac4898153a8bbf6fe0f7ea81df47cf8b9dfc1c3ca304231bdb6
|
| 3 |
+
size 1762126
|
2023/A Simple Approach for Visual Room Rearrangement_ 3D Mapping and Semantic Search/full.md
ADDED
|
@@ -0,0 +1,341 @@
|
| 1 |
+
# A SIMPLE APPROACH FOR VISUAL ROOM REARRANGEMENT: 3D MAPPING AND SEMANTIC SEARCH
|
| 2 |
+
|
| 3 |
+
Brandon Trabucco
|
| 4 |
+
|
| 5 |
+
Carnegie Mellon University btrabucc@cs.cmu.edu
|
| 6 |
+
|
| 7 |
+
Gunnar A. Sigurdsson
|
| 8 |
+
|
| 9 |
+
Amazon Alexa AI
|
| 10 |
+
|
| 11 |
+
Robinson Piramuthu
|
| 12 |
+
|
| 13 |
+
Amazon Alexa AI robinpir@amazon.com
|
| 14 |
+
|
| 15 |
+
Gaurav S. Sukhatme
|
| 16 |
+
|
| 17 |
+
University of Southern California and Amazon Alexa AI gaurav@usc.edu
|
| 18 |
+
|
| 19 |
+
Ruslan Salakhutdinov
|
| 20 |
+
|
| 21 |
+
Carnegie Mellon University
|
| 22 |
+
rsalakhu@cs.cmu.edu
|
| 23 |
+
|
| 24 |
+
# ABSTRACT
|
| 25 |
+
|
| 26 |
+
Physically rearranging objects is an important capability for embodied agents. Visual room rearrangement evaluates an agent's ability to rearrange objects in a room to a desired goal based solely on visual input. We propose a simple yet effective method for this problem: (1) search for and map which objects need to be rearranged, and (2) rearrange each object until the task is complete. Our approach consists of an off-the-shelf semantic segmentation model, voxel-based semantic map, and semantic search policy to efficiently find objects that need to be rearranged. Our method was the winning submission to the AI2-THOR Rearrangement Challenge in the 2022 Embodied AI Workshop at CVPR 2022, and improves on current state-of-the-art end-to-end reinforcement learning-based methods that learn visual room rearrangement policies from $0.53\%$ correct rearrangement to $16.56\%$ , using only $2.7\%$ as many samples from the environment.
|
| 27 |
+
|
| 28 |
+
# 1 INTRODUCTION
|
| 29 |
+
|
| 30 |
+
Physically rearranging objects is an everyday skill for humans, but remains a core challenge for embodied agents that assist humans in realistic environments. Natural environments for humans are complex and require generalization to a combinatorially large number of object configurations (Batra et al., 2020a). Generalization in complex realistic environments remains an immense practical challenge for embodied agents, and the rearrangement setting provides a rich test bed for embodied generalization in these environments. The rearrangement setting combines two challenging perception and control tasks: (1) understanding the state of a dynamic 3D environment, and (2) acting over a long horizon to reach a goal. These problems have traditionally been studied independently by the vision and reinforcement learning communities (Chaplot et al., 2021), but the advent of large models and challenging benchmarks is showing that both components are important for embodied agents.
|
| 31 |
+
|
| 32 |
+
Reinforcement learning (RL) can excel at embodied tasks, especially if a lot of experience can be leveraged (Weihs et al., 2021; Chaplot et al., 2020b; Ye et al., 2021) for training. In a simulated environment with unlimited retries, this experience is cheap to obtain, and agents can explore randomly until a good solution is discovered by the agent. This pipeline works well for tasks like point-goal navigation (Wijmans et al., 2020), but in some cases this strategy is not enough. As the difficulty of embodied learning tasks increases, the agent must generalize to an increasing number of environment configurations, and broadly scaled experience can become insufficient.
|
| 33 |
+
|
| 34 |
+
In the rearrangement setting, a perfect understanding of the environment simplifies the problem: an object is here, it should go there, and the rest can be solved with grasping and planning routines. Representing the information about the locations and states of objects in an accessible format is therefore an important contribution for the rearrangement setting. Our initial experiments suggest that accurate 3D semantic maps of the environment are one such accessible format for visual rearrangement. With accurate 3D semantic maps, our method rearranges $15.11\%$ of objects correctly, and requires significantly less experience from the environment to do so. While end-to-end RL requires up to 75 million environment steps in Weihs et al. (2021), our method only requires 2 million

Figure 1: Our method incrementally builds voxel-based Semantic Maps from visual observations and efficiently finds objects using a Semantic Search Policy. We visualize an example rearrangement on the right with the initial position of the pink object (laptop on the bed), followed by the agent holding the object (laptop), and finally the destination position of the object (laptop on the desk).
samples and trains offline. Our results suggest that end-to-end RL without an accurate representation of the scene may be missing a fundamental aspect of environment understanding.
We demonstrate how semantic maps help agents effectively understand dynamic 3D environments and perform visual rearrangement. These dynamic environments have elements that can move (like furniture), and objects with changing states (like the door of a cabinet). We present a method that builds accurate semantic maps in these dynamic environments, and reasons about what has changed. Deviating from prior work that leverages end-to-end RL, we propose a simple approach for visual rearrangement: (1) search for and map which objects need to be rearranged, and (2) procedurally rearrange objects until a desired goal configuration is reached. We evaluate our approach on the AI2-THOR Rearrangement Challenge (Weihs et al., 2021) and establish a new state-of-the-art.
We propose an architecture for visual rearrangement that builds voxel-based semantic maps of the environment and rapidly finds objects using a search-based policy. Our method shows an improvement of 14.72 absolute percentage points over current work in visual rearrangement, and is robust to the accuracy of the perception model, the budget for exploration, and the size of objects being rearranged. We conduct ablations to diagnose where the bottlenecks are for visual rearrangement, and find that accurate scene understanding is the most crucial. As an upper bound, when provided with a perfect semantic map, our method solves $38.33\%$ of tasks, indicating potential for significant out-of-the-box gains as better perception models are developed. Our results show the importance of building effective scene representations for embodied agents in complex and dynamic visual environments.
# 2 RELATED WORK
Embodied 3D Scene Understanding. Knowledge of the 3D environment is at the heart of various tasks for embodied agents, such as point navigation (Anderson et al., 2018a), image navigation (Batra et al., 2020b; Yang et al., 2019), vision language navigation (Anderson et al., 2018b; Shridhar et al., 2020), embodied question answering (Gordon et al., 2018; Das et al., 2018), and more. These tasks require an agent to reason about its 3D environment. For example, vision language navigation (Anderson et al., 2018b; Shridhar et al., 2020) requires grounding language in an environment goal, and reasoning about where to navigate and what to modify in the environment to reach that goal. Reasoning about the 3D environment is especially important for the rearrangement setting, and has a rich interdisciplinary history in the robotics, vision, and reinforcement learning communities.
Visual Room Rearrangement. Rearrangement has long been one of the fundamental tasks in robotics research (Ben-Shahar & Rivlin, 1996; Stilman et al., 2007; King et al., 2016; Krontiris & Bekris, 2016; Yuan et al., 2018; Correll et al., 2018; Labbe et al., 2020). Typically, these methods assume the states of objects are fully observed (Cosgun et al., 2011; King et al., 2016), which allows for efficient and accurate planning-based solutions. In contrast, there has been recent interest in visual rearrangement (Batra et al., 2020a; Weihs et al., 2021; Qureshi et al., 2021; Goyal et al., 2022; Gadre et al., 2022) where the states of objects and the rearrangement
goal are not directly observed. In these cases, the agent is provided a direct visual input, and the environment is relatively complex and realistic. This latest iteration of rearrangement shares similarities with various other challenging embodied AI tasks such as embodied navigation (Anderson et al., 2018a; Batra et al., 2020b; Chaplot et al., 2020a; Shridhar et al., 2020; Francis et al., 2021; Min et al., 2021; Pashevich et al., 2021; Singh et al., 2021) and embodied question answering (Gordon et al., 2018; Das et al., 2018), which require finding objects and reasoning about their state.
AI2-THOR Rearrangement Challenge. Our work builds on the latest rearrangement methods and demonstrates how building accurate voxel-based semantic maps can produce significant gains. We focus on the AI2-THOR Rearrangement Challenge (Weihs et al., 2021), which uses AI2-THOR, an open-source and high-fidelity simulator used in many prior works (Gadre et al., 2022; Weihs et al., 2021; Shridhar et al., 2020; Gordon et al., 2018). Prior works on this challenge have studied a variety of approaches, including end-to-end RL in Weihs et al. (2021), and a planning-based approach in Gadre et al. (2022). Our approach is the first to use voxel-based semantic maps to infer what to rearrange from an experience goal as described by Batra et al. (2020a). Though both Gadre et al. (2022) and our method use planning, Gadre et al. (2022) use a graph-based continuous scene representation, and we use voxel-based semantic maps instead, which we show is more effective.
3D Mapping & Search. Agents that interact with an embodied world through navigation and manipulation must keep track of the world (mapping) (Thrun, 2002) and themselves (localization) (Thrun et al., 2001)—both extensively studied in robotics by processing low-level information (Engel et al., 2014), building semantic maps (Kuipers & Byun, 1991) and more recently, via techniques specifically developed to handle dynamic and general aspects of the environment (Rünz & Agapito, 2017; Rosinol et al., 2021; Wong et al., 2021). When semantics are more important than precision, such as for embodied learning tasks, recent methods have looked at neural network-based maps (Gupta et al., 2017; Chen et al., 2019; Wu et al., 2019a; Chaplot et al., 2020b; Blukis et al., 2021; Chaplot et al., 2021). Our method builds on these and adopts the use of a voxel-based semantic map and pretrained semantic segmentation model—a similar methodological setup to Chaplot et al. (2021); Min et al. (2021); Blukis et al. (2021). However, our method diverges from these prior works by using multiple voxel-based semantic maps to infer what to rearrange from an experience goal as described by Batra et al. (2020a). These prior works have instead considered geometric goals in Chaplot et al. (2021) and language goals in Min et al. (2021); Blukis et al. (2021), and ours is the first to consider an experience goal (Batra et al., 2020a). Furthermore, while a search-based policy is used in Min et al. (2021), we are the first to use search with an unspecified destination (the target object is not known a priori).
# 3 METHODOLOGY
In this section, we present a simple approach for solving visual rearrangement problems. We begin the section by discussing the visual rearrangement problem statement and metrics we use for evaluation. We then discuss our methodological contributions. First, we propose to build multiple voxel-based semantic maps representing the environment in different configurations. Second, we propose a policy that efficiently finds objects that need to be rearranged. Third, we propose a method for inferring the rearrangement goal from two semantic maps to efficiently solve visual rearrangement tasks.
Visual rearrangement definition and evaluation metrics. Consider the rearrangement setting defined by Batra et al. (2020a), which is a special case of a Markov Decision Process (MDP) augmented with a goal specification $g = \phi(s_0, S^*)$ . This goal specification encodes the set of states $S^*$ for which the rearrangement task is considered solved from initial state $s_0$ . The agent typically does not directly observe the set of goal states $S^*$ , and this is reflected by the goal specification function $\phi: S \times 2^S \longrightarrow \mathcal{G}$ . We consider a setting where the rearrangement goal $g$ is specified visually and the agent initially observes the environment in its goal configuration. This setting is especially challenging because the agent must remember what the environment initially looked like to infer the set of goal states. Once the goal has been understood and rearrangement has been attempted, we evaluate agents using metrics introduced by Weihs et al. (2021). We consider a Success metric that measures the proportion of tasks for which the agent has correctly rearranged all objects and misplaced none during rearrangement. This metric is strict in the sense that an agent receives a success of 0.0 if at least one object is misplaced—even if all others are correctly rearranged. We consider an additional %Fixed Strict metric that measures the proportion of objects per task correctly

Figure 2: Our method builds voxel-based Semantic Maps from visual observations. Our Semantic Search Policy helps build accurate maps by selecting navigation goals to efficiently find objects that need to be rearranged. Once accurate maps are built, our method compares the Semantic Maps to identify disagreements between the maps, and rearranges objects to resolve those disagreements using a deterministic rearrangement policy.
rearranged, equal to 0.0 per task if any were misplaced. This second metric is more informative regarding how close the agent was to solving each task. Effective agents will correctly rearrange all objects in the scene to their goal configurations, maximizing their Success and %Fixed Strict.
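As a concrete sketch of these two metrics for a single task (our own illustration, not code from the challenge toolkit; `fixed` flags which target objects were correctly rearranged, `misplaced` flags objects the agent newly misplaced):

```python
def success(fixed, misplaced):
    # Strict Success: 1.0 only when every object is correctly rearranged
    # and no object was newly misplaced during the episode.
    return float(all(fixed) and not any(misplaced))


def percent_fixed_strict(fixed, misplaced):
    # %Fixed Strict: percentage of objects correctly rearranged,
    # but 0.0 for the whole task if any object was newly misplaced.
    if any(misplaced):
        return 0.0
    return 100.0 * sum(fixed) / len(fixed)
```

Note how an agent that fixes one of two objects scores 50.0 on %Fixed Strict but 0.0 on Success, matching the strictness ordering described above.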
Building two semantic maps. Our approach builds off recent work that uses voxel-based semantic maps in embodied settings (Blukis et al., 2021; Min et al., 2021; Chaplot et al., 2021). Our work differs from these in that we use multiple voxel-based semantic maps to encode both the goal state and current state of the environment. In particular, we build two semantic maps $m_0, m_1 \in \mathcal{R}^{H \times W \times D \times C}$ that represent 3D grids with $H \times W \times D$ voxels. Each voxel is represented with a categorical distribution on $C$ classes encoding which class is likely to occupy each voxel. Empty voxels are assigned the zero vector. In an initial observation phase, our agent navigates the scene and builds $m_0$ , a semantic map encoding the goal configurations for objects in the scene. Likewise, in a second interaction phase, our agent navigates the scene and builds $m_1$ , a semantic map encoding the current state of objects in the scene. At every timestep during each phase, pose, RGB, and depth images are observed, and either $m_0$ or $m_1$ is updated depending on which phase the agent is currently observing.
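A minimal sketch of how a single RGB-D observation could populate such a voxel map via pinhole-camera back-projection (the function name, interface, and the direct world-to-index convention are our own simplifications, not the paper's implementation):

```python
import numpy as np


def project_to_voxels(class_probs, depth, pose, fx, fy, cx, cy,
                      grid_shape, voxel_size):
    """Project per-pixel class probabilities into a geocentric voxel grid.

    class_probs: (H_img, W_img, C) per-pixel class probabilities
    depth:       (H_img, W_img) depth in meters
    pose:        (4, 4) camera-to-world homogeneous transform
    Returns v_geo of shape (H, W, D, C) and v_mask of shape (H, W, D, 1).
    """
    H, W, D = grid_shape
    C = class_probs.shape[-1]
    v_geo = np.zeros((H, W, D, C))
    v_mask = np.zeros((H, W, D, 1))

    # Back-project every pixel to an egocentric 3D point (pinhole model).
    ys, xs = np.mgrid[0:depth.shape[0], 0:depth.shape[1]]
    z = depth
    x = (xs - cx) * z / fx
    y = (ys - cy) * z / fy
    pts = np.stack([x, y, z, np.ones_like(z)], axis=-1).reshape(-1, 4)

    # Egocentric -> geocentric (world) coordinates via the agent pose.
    world = (pose @ pts.T).T[:, :3]

    # Voxelize: drop points outside the grid, merge class probabilities.
    idx = np.floor(world / voxel_size).astype(int)
    ok = np.all((idx >= 0) & (idx < np.array([H, W, D])), axis=1)
    idx, probs = idx[ok], class_probs.reshape(-1, C)[ok]
    for (i, j, k), p in zip(idx, probs):
        v_geo[i, j, k] = np.maximum(v_geo[i, j, k], p)
        v_mask[i, j, k] = 1.0
    return v_geo, v_mask
```

The returned `v_geo` and `v_mask` play the roles of $v_t^{\text{geo}}$ and $v_t^{\text{mask}}$ defined in the next paragraph.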
Incorporating semantic predictions in the maps. Each semantic map is initialized to all zeros and, at every timestep $t$ , semantic predictions from Mask R-CNN (He et al., 2017) are added to the map. Given the RGB image observation $I_{t}$ , we generate semantic predictions from Mask R-CNN consisting of the probability of each pixel belonging to a particular class. We filter these predictions to remove those with a detection confidence lower than 0.9 and conduct an ablation in Section 4.3. We follow Blukis et al. (2021); Min et al. (2021); Chaplot et al. (2021) and generate an egocentric point cloud $c_{t}^{ego}$ using the depth observation $D_{t}$ . Each point in this point cloud is associated with a pixel in the image $I_{t}$ and a vector of class probabilities from Mask R-CNN. Given the current pose $x_{t}$ , we then transform the egocentric point cloud $c_{t}^{ego}$ from the agent's coordinate system to world coordinate system. This transformation results in a geocentric point cloud $c_{t}^{geo}$ that is converted to a geocentric voxel representation $v_{t}^{geo} \in \mathcal{R}^{H \times W \times D \times C}$ of the same cardinality as the maps. We generate a voxelized mask $v_{t}^{mask} \in \mathcal{R}^{H \times W \times D \times 1}$ that equals one for every occupied voxel in $v_{t}^{geo}$ and zero otherwise. New semantic predictions are added to the maps with a moving average.
$$
m_i[t + 1] = m_i[t] \odot \left(1 - v_t^{\text{mask}} (1 - \epsilon)\right) + v_t^{\text{geo}} (1 - \epsilon) \tag{1}
$$
The update in Equation 1 allows voxels to be updated at different rates depending on how frequently they are observed. The hyperparameter $\epsilon \in (0,1)$ controls how quickly the semantic maps are updated to account for new semantic predictions, and is set to 0.5 in our experiments. An overview of how our two semantic maps are built is shown in Figure 2. We have detailed how the semantic maps are constructed from observations; next, we describe how navigation goals are selected.
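In code, the update in Equation 1 is a one-liner (a sketch; `eps` is the $\epsilon$ above):

```python
import numpy as np


def update_map(m, v_geo, v_mask, eps=0.5):
    # Equation 1: observed voxels (v_mask == 1) blend their old class
    # probabilities with the new predictions at rate (1 - eps); v_geo is
    # zero wherever v_mask is zero, so unobserved voxels are unchanged.
    return m * (1 - v_mask * (1 - eps)) + v_geo * (1 - eps)
```

With `eps = 0.5`, an observed voxel averages its previous distribution equally with the new prediction, while unobserved voxels keep their value exactly.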
Algorithm 1 3D Mapping and Semantic Search For Visual Rearrangement

Require: visual rearrangement environment $e$, initial voxel-based semantic maps $m_0, m_1 \in \mathcal{R}^{H\times W\times D\times C}$, search-based policy $\pi_{\theta}(\mathbf{x}|m)$, pre-trained semantic segmentation model $g$

for each phase $i \in \{0,1\}$ do
  for each $I_{t}, D_{t}, x_{t}$ observed do
    $v_{t}^{\mathrm{geo}}, v_{t}^{\mathrm{mask}} \gets \mathrm{project}(g(I_{t}), D_{t}, x_{t})$ ▷ project to voxels
    $m_i[t] \leftarrow m_i[t - 1] \odot (1 - v_t^{\mathrm{mask}}(1 - \epsilon)) + v_t^{\mathrm{geo}}(1 - \epsilon)$ ▷ update map
    if goal is reached or goal does not exist then
      goal $\sim \pi_{\theta}(\mathbf{x}|m_i[t])$ ▷ emit a semantic search goal
    end if
    navigate to goal
  end for
end for
while a disagreement $d$ between $m_0$ and $m_{1}$ is detected do
  navigate to $d$ in $m_{1}$ and rearrange $d$ to match $m_0$
end while
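The two mapping phases of Algorithm 1 can be sketched in Python. All interfaces here (`observations`, `policy`, `project`) are hypothetical stand-ins for the paper's components, and navigation is elided:

```python
import numpy as np


def run_mapping_phases(observations, policy, project, eps=0.5,
                       grid_shape=(24, 24, 8), n_classes=4):
    # observations[i]: list of (I_t, D_t, x_t) tuples for phase i.
    # policy(m): samples a 3D search goal from the current map (stand-in
    # for the learned search policy pi_theta).
    # project(I_t, D_t, x_t): returns (v_geo, v_mask) voxel predictions.
    maps = []
    for phase in (0, 1):
        m = np.zeros((*grid_shape, n_classes))
        goal = None
        for I_t, D_t, x_t in observations[phase]:
            v_geo, v_mask = project(I_t, D_t, x_t)
            m = m * (1 - v_mask * (1 - eps)) + v_geo * (1 - eps)  # Eq. 1
            if goal is None:       # goal reached or does not exist yet:
                goal = policy(m)   # emit a new semantic search goal
            # (navigation toward `goal` omitted in this sketch)
        maps.append(m)
    return maps  # [m_0, m_1], later compared for disagreements
```

The final while-loop of Algorithm 1 then consumes the disagreements between the two returned maps.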
Locating objects with a search-based policy. Building accurate maps requires locating and observing every object in the scene so they can be added to the maps. This requires intelligently selecting navigation goals based on where objects are likely to be. We learn a high-level policy $\pi_{\theta}(\mathbf{x}|m_i)$ that builds off recent work in Min et al. (2021); Chaplot et al. (2021) and parameterizes a distribution over 3D search locations in the environment. The input to the policy is a 3D semantic map $m_i$ from whichever phase is currently active. The policy is a 5-layer 2D convolutional neural network that processes a 3D semantic map $m_i$ and outputs a categorical distribution over voxels in $m_i$ , corresponding to 3D search locations. The policy is trained using maximum likelihood training with an expert distribution $p^*(\mathbf{x})$ that captures the locations of the $K$ objects the agent should rearrange in the current scene. This expert distribution in Equation 2 is a Gaussian mixture model with a mode centered at the location $\mu_k$ of each object, and a variance hyperparameter $\sigma^2$ for each mode.
$$
p^{*}(\mathbf{x}) \propto \frac{1}{K} \sum_{k=1}^{K} \mathcal{N}(\mathbf{x}; \mu_{k}, \sigma^{2} I) \tag{2}
$$
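A sketch of this expert distribution, discretized over voxel centers (the interface is our own; the result is the categorical target used for maximum-likelihood training of the search policy):

```python
import numpy as np


def expert_distribution(object_locs, grid_coords, sigma=1.0):
    # Equation 2: a Gaussian mixture with one isotropic mode per object
    # to rearrange, evaluated at each voxel center and normalized into a
    # categorical distribution over voxels.
    # object_locs: (K, 3) object positions; grid_coords: (N, 3) centers.
    d2 = ((grid_coords[:, None, :] - object_locs[None, :, :]) ** 2).sum(-1)
    p = np.exp(-d2 / (2 * sigma ** 2)).mean(axis=1)  # average over K modes
    return p / p.sum()
```

Training then minimizes the cross-entropy between the policy's categorical output over voxels and this target.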
Once a policy $\pi_{\theta}(\mathbf{x}|m_i)$ is trained that captures a semantic prior for object locations, we use planning to reach goals sampled from the policy. We build a planar graph that represents traversable space derived from voxel occupancy in the semantic map, and use Dijkstra's algorithm (Dijkstra, 1959) to find the shortest path from the agent's current location to the goal. We filter navigation goals to ensure only feasible goals are sampled, and then allow sufficient time for each navigation goal to be reached. Once the current goal is reached, we sample another goal and call the planner again.
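A minimal Dijkstra over a 2D traversability grid illustrates the planning step (with unit step costs this reduces to breadth-first search; the interface is our own sketch, not the paper's planner):

```python
import heapq


def shortest_path(occupied, start, goal, shape):
    # Dijkstra's algorithm over traversable grid cells. `occupied` is the
    # set of blocked (row, col) cells derived from map voxel occupancy;
    # `shape` bounds the grid. Returns the cell path, or None if blocked.
    frontier = [(0, start, [start])]
    seen = set()
    while frontier:
        cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nxt[0] < shape[0] and 0 <= nxt[1] < shape[1]
                    and nxt not in occupied and nxt not in seen):
                heapq.heappush(frontier, (cost + 1, nxt, path + [nxt]))
    return None  # goal unreachable
```

Infeasible goals (those returning `None`) would be filtered out before navigation, as described above.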
Inferring the rearrangement goal from the maps. Once two semantic maps are built, we compare them to extract differences in object locations, which we refer to as map disagreements. These disagreements represent objects that need to be rearranged by the agent. To locate disagreements, we first use OpenCV (Bradski, 2000) to label connected voxels of the same class as object instances. We consider voxels with nonzero probability of class $c$ to contain an instance of that class. Object instances are then matched between phases by taking the assignment of object instances that minimizes the difference in appearance between instances of the same class. We leverage the Hungarian algorithm (Kuhn & Yaw, 1955), and represent appearance by the average color of an object instance in the map. Once objects are matched, we label pairs separated by $> 0.05$ meters as disagreements. Given a set of map disagreements $\{(x_1, x_1^*), (x_2, x_2^*), \ldots, (x_N, x_N^*)\}$ represented by the current pose $x_i$ and goal pose $x_i^*$ for each object, we leverage a planning-based rearrangement policy to solve the task. Our rearrangement policy navigates to each object in succession and transports them to their goal location. By accurately mapping with a search-based policy, inferring the rearrangement goal, and planning towards the goal, our method in Algorithm 1 efficiently solves visual rearrangement.
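A small sketch of the matching step follows. For simplicity we brute-force the assignment (which yields the same optimum as the Hungarian algorithm for the five or fewer objects per task here), represent appearance as a scalar "average color", and assume equal instance counts per class:

```python
from itertools import permutations


def match_instances(feats0, locs0, feats1, locs1, threshold=0.05):
    # Match same-class object instances across the two phases by minimizing
    # total appearance difference, then flag matched pairs separated by
    # more than `threshold` meters as map disagreements.
    # feats*: per-instance appearance features; locs*: per-instance 3D positions.
    n = len(feats0)
    best = min(permutations(range(n)),
               key=lambda perm: sum(abs(feats0[i] - feats1[perm[i]])
                                    for i in range(n)))
    disagreements = []
    for i, j in enumerate(best):
        dist = sum((a - b) ** 2 for a, b in zip(locs0[i], locs1[j])) ** 0.5
        if dist > threshold:
            # (current pose from phase 1, goal pose from phase 0)
            disagreements.append((locs1[j], locs0[i]))
    return disagreements
```

Each returned pair corresponds to one $(x_i, x_i^*)$ handed to the rearrangement policy.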
Table 1: Evaluation on the 2022 AI2-THOR 2-Phase Rearrangement Challenge. Our method attains state-of-the-art performance on this challenge, a relative improvement of 871% in %Fixed Strict on the test set. Results are averaged over 1000 rearrangement tasks in each of the 2022 validation set and 2022 test set. Higher is better. Success is 100.0 for a task only if all objects are successfully rearranged and none are misplaced, and 0.0 otherwise. The %Fixed Strict metric is more lenient: the percentage of objects successfully rearranged, or 0.0 if any object is newly misplaced.
<table><tr><td rowspan="2">Method</td><td colspan="2">Validation</td><td colspan="2">Test</td></tr><tr><td>% Fixed Strict</td><td>Success</td><td>% Fixed Strict</td><td>Success</td></tr><tr><td>VRR + Map (Weihs et al., 2021)</td><td>1.18</td><td>0.40</td><td>0.53</td><td>0.00</td></tr><tr><td>CSR (Gadre et al., 2022)</td><td>3.30</td><td>1.20</td><td>1.90</td><td>0.40</td></tr><tr><td>Ours w/o Semantic Search</td><td>15.77</td><td>4.30</td><td>15.11 (+795%)</td><td>—</td></tr><tr><td>Ours</td><td>17.47</td><td>6.30</td><td>16.56 (+871%)</td><td>4.63</td></tr></table>
# 4 EXPERIMENTS
In this section we evaluate our approach and show its effectiveness. We first evaluate our approach on the AI2-THOR Rearrangement Challenge (Weihs et al., 2021) and show our approach leads to an improvement of 14.72 absolute percentage points over current work, detailed in Subsection 4.1. This benchmark tests an agent's ability to rearrange rooms to a desired object goal configuration, and is a suitable choice for measuring visual rearrangement performance. Next, we show the importance of each proposed component, and demonstrate in Subsection 4.2 that our voxel-based map and search-based policy exhibit large potential gains as more performant models for perception and search are developed in the future. Finally, we show in Subsection 4.3 that our approach is robust to the quality of object detections and budget for exploration.
Description of the benchmark. In this benchmark, the goal is to rearrange up to five objects to a desired state, defined in terms of object locations and openness. The challenge is based on the RoomR (Weihs et al., 2021) dataset that consists of a training set with 80 rooms and 4000 tasks, validation set with 20 rooms and 1000 tasks, and a test set with 20 rooms and 1000 tasks. We consider a two-phase setting where an agent observes the goal configuration of the scene during an initial Walkthrough Phase. The scene is then shuffled, and the agent is tasked with rearranging objects back to their goal configuration during a second Unshuffle Phase. This two-phase rearrangement setting is challenging because it requires the agent to remember the scene layout from the Walkthrough Phase, to identify the rearrangement goal. Goals are internally represented by a set of valid object poses $S^{*} \subset (\mathcal{R}^{3} \times SO(3)) \times (\mathcal{R}^{3} \times SO(3)) \times \cdots \times (\mathcal{R}^{3} \times SO(3))$ , but the agent does not observe $S^{*}$ directly. At every time step $t$ during either phase, the agent observes a geocentric pose $x_{t}$ , an egocentric RGB image $I_{t}$ , and an egocentric depth image $D_{t}$ . The rearrangement goal is specified indirectly via observations of the scene layout during the Walkthrough Phase. During training, additional metadata is available such as ground-truth semantic labels, but during evaluation only the allowed observations $x_{t}, I_{t}$ and $D_{t}$ can be used. Once both the Walkthrough Phase and Unshuffle Phase are complete, we measure performance using the %Fixed Strict and Success metrics described in Section 3.
# 4.1 EFFECTIVENESS AT VISUAL REARRANGEMENT
We report performance in Table 1 and show an improvement in %Fixed Strict from 1.9 to 15.11 over the current state-of-the-art method, namely Continuous Scene Representations (CSR) (Gadre et al., 2022). These results show our method is more effective than prior work at visual rearrangement. Our success of $4.63\%$ on the test set indicates our method solves 46 / 1000 tasks, whereas the best existing approach, CSR, solves 4 / 1000 tasks. Furthermore, our method correctly rearranges 499 / 3004 objects in the test set, while CSR rearranges only 57 / 3004 objects in the test set.
The results in Table 1 support two conclusions. First, 3D Mapping is a helpful inductive bias. Ours is currently the only method on the challenge to leverage 3D Mapping for identifying rearrangement goals. The next best approach, CSR (Gadre et al., 2022), represents the scene with a graph, where nodes encode objects, and edges encode spatial relationships. We speculate that determining which objects need to be rearranged benefits from knowing their fine-grained 3D positions, which our method
Table 2: Ablation of the importance of each component of our method. Our method produces significant gains as perception and search models become more accurate. Results are averaged over 1000 rearrangement tasks in each of the 2022 validation set and 2022 test set. As in Table 1, higher is better, and Success is 100.0 for a task only if all objects are successfully rearranged and none are misplaced, and 0.0 otherwise. Our results show that as perception and search models continue to improve with future research, our method stands to gain an out-of-the-box improvement of 34.73 absolute Success points on the test set.
<table><tr><td rowspan="2">Method</td><td colspan="2">Validation</td><td colspan="2">Test</td></tr><tr><td>% Fixed Strict</td><td>Success</td><td>% Fixed Strict</td><td>Success</td></tr><tr><td>CSR + GT T</td><td>3.80</td><td>1.30</td><td>2.10</td><td>0.70</td></tr><tr><td>CSR + GT BT</td><td>7.90</td><td>3.00</td><td>5.90</td><td>2.20</td></tr><tr><td>CSR + GT MBT</td><td>26.00</td><td>8.80</td><td>27.00</td><td>10.00</td></tr><tr><td>Ours + GT Semantic Search</td><td>21.24</td><td>7.60</td><td>19.79 (+942%)</td><td>—</td></tr><tr><td>Ours + GT Segmentation</td><td>66.66</td><td>45.60</td><td>59.29 (+1004%)</td><td>—</td></tr><tr><td>Ours + GT Both</td><td>68.46</td><td>48.60</td><td>59.50 (+1008%)</td><td>37.55</td></tr></table>
directly represents via semantic maps. Second, these results also suggest that our method more successfully rearranges small objects. This is important (see additional results in Subsection 4.5) because many common objects humans use are small—cutlery, plates, cups, etc.
# 4.2 COMPONENT ABLATION
The goal of this experiment is to determine the importance of each component to our method. We consider a series of ablations in Table 2 that replace different components of our method with ground truth predictions. We first consider Ours + GT Semantic Search, where we substitute the predictions of our search-based policy $\pi_{\theta}$ with the ground truth locations of objects that need to be rearranged. We also consider Ours + GT Segmentation, where we substitute the predictions of Mask R-CNN (He et al., 2017) with ground truth semantic segmentation labels. The final ablation in the table, Ours + GT Both, includes both substitutions at once. In addition to reporting our performance, we reference the performance of CSR (Gadre et al., 2022) in a similar set of ablations. We consider CSR + GT T which uses expert trajectories that observe all objects needing to be rearranged, CSR + GT BT which also uses ground truth object detection labels, and CSR + GT MBT which additionally uses ground truth object instance pairs between the Walkthrough Phase and the Unshuffle Phase. Table 2 shows our method produces a better out-of-the-box improvement in all metrics as the perception and search components become more accurate, suggesting both components are important.
Table 2 demonstrates our method produces significant gains when paired with accurate semantic search and accurate semantic segmentation. When using ground-truth semantic segmentation labels and ground-truth search locations, our method attains an improvement of 35.35 absolute percentage points in Success compared to existing work given access to the same experts. CSR + GT BT makes the same assumptions as our method with both components replaced with ground-truth, and is used to compute this improvement margin. When prior work is given the additional accommodation of ground-truth object instance pairs between the two environment phases, CSR + GT MBT, our method maintains an improvement of 27.55 absolute Success points without the accommodation. These results show our method has greater room for improvement than prior work, with a %Fixed Strict 32.50 absolute percentage points higher than current work. Our method's room for improvement with more accurate perception and search models is appealing because accurate 3D vision models are an active area of research, and our method directly benefits from innovations in these models.
# 4.3 STABILITY VERSUS PERCEPTION QUALITY
In the previous sections, we evaluated our method's effectiveness at rearrangement, and its room for growth as better perception and search models are developed. This direct benefit from better perception models is desirable, but an effective method should also be robust when perception quality is poor. In this section, we evaluate our method's performance stability as a function of the quality of object detections. We simulate changes in object detection

Figure 3: Rearrangement performance versus perception quality. Dark colored lines represent the average metric across 1000 tasks, and shaded regions correspond to a $68\%$ confidence interval. Lower Num Newly Misplaced (left plot) is better, higher $\%$ Fixed Strict (center plot) and Success (right plot) are better. Our method improves smoothly as perception quality increases, simulated by varying the detection confidence threshold used to filter Mask R-CNN predictions detailed in Section 3.
quality by varying the detection confidence threshold of Mask R-CNN (He et al., 2017) described in Section 3. A low threshold permits accepting detections where Mask R-CNN makes high-variance predictions, reducing the quality of detections overall. In the following experiment, we vary the detection confidence threshold on the validation and test sets of the rearrangement challenge.
Figure 3 shows our method is robust to small changes in perception quality. As the detection confidence increases, simulating an improvement in object detection fidelity, performance of our method smoothly increases. Peak performance with our method on the validation set is attained with a detection confidence threshold close to 0.9, which is the value we employ throughout the paper. Error bars in this experiment are computed using a $68\%$ confidence interval with 1000 sample points, corresponding to 1000 tasks in each of the validation and test sets. The small width of error bars indicates the observed relationship between perception quality and performance most likely holds for tasks individually (not just on average), supporting the conclusion our method is robust to small changes in perception quality. We make a final observation that as perception quality increases, fewer objects are misplaced as our method more accurately infers the rearrangement goal. These results suggest our method produces consistent gains in rearrangement as perception models improve.
# 4.4 STABILITY VERSUS EXPLORATION BUDGET
We conduct an ablation in this section to evaluate how the exploration budget affects our method. This is important because the conditions an agent faces in the real world vary, and an effective agent must be robust when the budget for exploring the scene is small. We simulate a limited exploration budget by varying the number of navigation goals used by the agent when building the semantic maps. A lower budget results in fewer time steps spent building the semantic maps, and fewer of the voxel updates described in Section 3. With fewer updates, sampling goals intelligently is crucial to ensure the agent has the information necessary to infer the task rearrangement goal.
Figure 4 shows our method is robust when the exploration budget is small. Performance is stable when fewer than 5 navigation goals are proposed by our semantic search module, with no penalty in %Fixed Strict or Success observed. This result confirms the effectiveness of semantic search: sampled goals correspond to the locations of objects likely to need rearrangement, so even when the budget is small, these objects are already observed. The experiment also shows that as the budget decreases, fewer objects are misplaced. This is intuitive because when the budget is small, fewer objects in the environment are observed and added to the map, reducing the chance of incorrect map disagreements being proposed. Additionally, when the budget is large, the agent spends the majority of the episode in navigation, and may not have enough time left to correct map disagreements, resulting in slightly lower overall performance. These results suggest our method is effective for a variety of exploration budgets, and is robust when the budget is small.
# 4.5 FAILURE MODES
Our previous experiments showed instances where our method is effective, but an understanding of its limitations is equally important. The goal of this subsection is to identify how and why our

Figure 4: Rearrangement performance versus navigation budget. Dark colored lines represent the average metric across 1000 tasks, and shaded regions correspond to a $68\%$ confidence interval. Lower Num Newly Misplaced (left plot) is better; higher $\%$ Fixed Strict (center plot) and Success (right plot) are better. Our method performs well even when the number of navigation goals is small and maintains a performance gain over prior work for all sizes of the budget, from one navigation goal (x-axis left) to 15 navigation goals (x-axis right).
|
| 155 |
+
|
| 156 |
+

|
| 157 |
+
|
| 158 |
+

|
| 159 |
+
|
| 160 |
+

|
| 161 |
+
|
| 162 |
+
method can fail. To accomplish this, we conduct an ablation to study how three indicators (object size, distance to the goal position, and amount of nearby clutter) affect our method. These capture different aspects of what makes rearrangement hard. For example, small objects can be overlooked, objects far from their goal can be easier to misplace, and objects too close to one another can be mis-detected. We measure the performance of our method with respect to these indicators in Figure 6 in Appendix B, and analyze the experimental conditions when our method is less effective.
|
| 163 |
+
|
| 164 |
+
Our results illuminate what kinds of tasks are difficult for our method. We find experimentally that objects further from the rearrangement goal are harder for our method to successfully rearrange. Objects within 0.326 meters of the goal are correctly rearranged $>30\%$ of the time, whereas objects further than 4.157 meters from the goal are only correctly rearranged $<20\%$ of the time. One explanation for this disparity in performance could be that matching object instances between phases is more difficult when those instances are further apart. Better perception models can mitigate this failure by providing more information about object appearance that may be used to accurately pair instances. While this first observation is intuitive, our second is more surprising. We find that our method rearranges small objects as effectively as large objects, suggesting our method is robust to the size of the objects it rearranges. This quality is desirable because realistic environments contain objects of a variety of sizes, and effective agents should generalize across them.
|
| 165 |
+
|
| 166 |
+
# 5 CONCLUSION
|
| 167 |
+
|
| 168 |
+
We presented a simple modular approach for rearranging objects to desired visual goals. Our approach leverages a voxel-based semantic map containing objects detected by a perception model, and a semantic-search policy for efficiently locating the objects to rearrange. Our approach generalizes to rearrangement goals of varying difficulties, including objects that are small in size, far from the goal, and in cluttered spaces. Furthermore, our approach is efficient, performing well even with a small exploration budget. Our experimental evaluation shows our approach improves over current work in rearrangement by 14.7 absolute percentage points, and continues to improve smoothly as better models are developed and the quality of object detections increases. Our results confirm the efficacy of active perceptual mapping for the rearrangement setting, and motivate several future directions that can expand the flexibility and generalization of the method.
|
| 169 |
+
|
| 170 |
+
One limitation of the rearrangement setting in this work is that objects only have simple states: position, orientation, and openness. Real objects are complex and have states that may change over time, potentially from interactions not involving the agent. Investigating tasks that require modelling these dynamic objects in the map is an emerging topic that can benefit from new benchmarks and methods. Another promising future direction is using an agent's experience to improve its perception. Feedback from the environment, including instructions, rewards, and transition dynamics, provides rich information about how to improve perception when true labels may be difficult to acquire. Investigating how to leverage all sources of feedback available to an agent is a useful research topic that may unlock better generalization for embodied agents in dynamic environments.
|
| 171 |
+
|
| 172 |
+
# ACKNOWLEDGEMENTS
|
| 173 |
+
|
| 174 |
+
We thank Amazon for supporting this work financially, providing access to computational resources, and feedback on intermediate drafts of this manuscript. In addition, we thank the reviewers for their suggestions and critique in the review process, which improved this paper. We thank Ben Eysenbach, Devendra Chaplot, Minji Yoon, Theophile Gervet, Murtaza Dalal, and So Yeon Min for their discussion and feedback on the paper. Finally, we thank the teams at the Allen Institute for AI that developed the AI2-THOR Rearrangement Challenge and provided benchmarking code, which has been crucial in this work.
|
| 175 |
+
|
| 176 |
+
# REFERENCES
|
| 177 |
+
|
| 178 |
+
Peter Anderson, Angel Chang, Devendra Singh Chaplot, Alexey Dosovitskiy, Saurabh Gupta, Vladlen Koltun, Jana Kosecka, Jitendra Malik, Roozbeh Mottaghi, Manolis Savva, and Amir R. Zamir. On evaluation of embodied navigation agents. arXiv:1807.06757, 2018a.
|
| 179 |
+
Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sunderhauf, Ian D. Reid, Stephen Gould, and Anton van den Hengel. Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments. In CVPR, 2018b.
|
| 180 |
+
Dhruv Batra, Angel X Chang, Sonia Chernova, Andrew J Davison, Jia Deng, Vladlen Koltun, Sergey Levine, Jitendra Malik, Igor Mordatch, Roozbeh Mottaghi, et al. Rearrangement: A challenge for embodied ai. arXiv:2011.01975, 2020a.
|
| 181 |
+
Dhruv Batra, Aaron Gokaslan, Aniruddha Kembhavi, Oleksandr Maksymets, Roozbeh Mottaghi, Manolis Savva, A. Toshev, and Erik Wijmans. Objectnav revisited: On evaluation of embodied agents navigating to objects. arXiv:2006.13171, 2020b.
|
| 182 |
+
Ohad Ben-Shahar and Ehud Rivlin. Practical pushing planning for rearrangement tasks. ICRA, 1996.
|
| 183 |
+
Valts Blukis, Chris Paxton, Dieter Fox, Animesh Garg, and Yoav Artzi. A persistent spatial semantic representation for high-level natural language instruction execution. In CoRL, 2021.
|
| 184 |
+
G. Bradski. The OpenCV Library. Dr. Dobb's Journal of Software Tools, 2000.
|
| 185 |
+
Devendra Singh Chaplot, Dhiraj Gandhi, Abhinav Gupta, and Ruslan Salakhutdinov. Object goal navigation using goal-oriented semantic exploration. In NeurIPS, 2020a.
|
| 186 |
+
Devendra Singh Chaplot, Ruslan Salakhutdinov, Abhinav Gupta, and Saurabh Gupta. Neural topological slam for visual navigation. In CVPR, 2020b.
|
| 187 |
+
Devendra Singh Chaplot, Murtaza Dalal, Saurabh Gupta, Jitendra Malik, and Ruslan Salakhutdinov. SEAL: self-supervised embodied active learning using exploration and 3d consistency. In Marc' Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan (eds.), NeurIPS, 2021.
|
| 188 |
+
Kevin Chen, Juan Pablo de Vicente, Gabriel Sepulveda, Fei Xia, Alvaro Soto, Marynel Vazquez, and Silvio Savarese. A behavioral approach to visual navigation with graph localization networks. In RSS, 2019.
|
| 189 |
+
Nikolaus Correll, Kostas E. Bekris, Dmitry Berenson, Oliver Brock, Albert Causo, Kris Hauser, Kei Okada, Alberto Rodriguez, Joseph M. Romano, and Peter R. Wurman. Analysis and observations from the first amazon picking challenge. *IEEE Transactions on Automation Science and Engineering*, 2018.
|
| 190 |
+
Akansel Cosgun, Tucker Hermans, Victor Emeli, and Mike Stilman. Push planning for object placement on cluttered table surfaces. In IROS, 2011.
|
| 191 |
+
Abhishek Das, Samyak Datta, Georgia Gkioxari, Stefan Lee, Devi Parikh, and Dhruv Batra. Embodied Question Answering. In CVPR, 2018.
|
| 192 |
+
Edsger W Dijkstra. A note on two problems in connexion with graphs. Numerische Mathematik, 1959.
|
| 193 |
+
|
| 194 |
+
Jakob Engel, Thomas Schöps, and Daniel Cremers. LSD-SLAM: Large-scale direct monocular SLAM. In ECCV, 2014.
|
| 195 |
+
Jonathan Francis, Nariaki Kitamura, Felix Labelle, Xiaopeng Lu, Ingrid Navarro, and Jean Oh. Core challenges in embodied vision-language planning. arXiv:2106.13948, 2021.
|
| 196 |
+
Samir Yitzhak Gadre, Kiana Ehsani, Shuran Song, and Roozbeh Mottaghi. Continuous scene representations for embodied AI. arXiv:2203.17251, 2022.
|
| 197 |
+
Daniel Gordon, Aniruddha Kembhavi, Mohammad Rastegari, Joseph Redmon, Dieter Fox, and Ali Farhadi. IQA: Visual question answering in interactive environments. In CVPR, 2018.
|
| 198 |
+
Ankit Goyal, Arsalan Mousavian, Chris Paxton, Yu-Wei Chao, Brian Okorn, Jia Deng, and Dieter Fox. IFOR: iterative flow minimization for robotic object rearrangement. CoRR, abs/2202.00732, 2022. URL https://arxiv.org/abs/2202.00732.
|
| 199 |
+
Saurabh Gupta, James Davidson, Sergey Levine, Rahul Sukthankar, and Jitendra Malik. Cognitive mapping and planning for visual navigation. In CVPR, 2017.
|
| 200 |
+
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pp. 770-778. IEEE Computer Society, 2016. doi: 10.1109/CVPR.2016.90. URL https://doi.org/10.1109/CVPR.2016.90.
|
| 201 |
+
Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross B. Girshick. Mask R-CNN. In ICCV, 2017.
|
| 202 |
+
Jennifer E King, Marco Cognetti, and Siddhartha S Srinivasa. Rearrangement planning using object-centric and robot-centric action spaces. In ICRA, 2016.
|
| 203 |
+
A. Krontiris and K. E. Bekris. Efficiently solving general rearrangement tasks: A fast extension primitive for an incremental sampling-based planner. In ICRA, 2016.
|
| 204 |
+
H. W. Kuhn. The Hungarian method for the assignment problem. Naval Research Logistics Quarterly, 1955.
|
| 205 |
+
Benjamin Kuipers and Yung-Tai Byun. A robot exploration and mapping strategy based on a semantic hierarchy of spatial representations. Robotics and autonomous systems, 1991.
|
| 206 |
+
Yann Labbe, Sergey Zagoruyko, Igor Kalevatykh, Ivan Laptev, Justin Carpentier, Mathieu Aubry, and Josef Sivic. Monte-carlo tree search for efficient visually guided rearrangement planning. IEEE Robotics and Automation Letters, 2020.
|
| 207 |
+
Tsung-Yi Lin, Piotr Dollár, Ross B. Girshick, Kaiming He, Bharath Hariharan, and Serge J. Belongie. Feature pyramid networks for object detection. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pp. 936-944. IEEE Computer Society, 2017. doi: 10.1109/CVPR.2017.106. URL https://doi.org/10.1109/CVPR.2017.106.
|
| 208 |
+
So Yeon Min, Devendra Singh Chaplot, Pradeep Ravikumar, Yonatan Bisk, and Ruslan Salakhutdinov. FILM: following instructions in language with modular methods. arXiv:2110.07342, 2021.
|
| 209 |
+
Adithyavairavan Murali, Tao Chen, Kalyan Vasudev Alwala, Dhiraj Gandhi, Lerrel Pinto, Saurabh Gupta, and Abhinav Gupta. Pyrobot: An open-source robotics framework for research and benchmarking. CoRR, abs/1906.08236, 2019. URL http://arxiv.org/abs/1906.08236.
|
| 210 |
+
Alexander Pashevich, Cordelia Schmid, and Chen Sun. Episodic transformer for vision-and-language navigation. In ICCV, 2021.
|
| 211 |
+
Ahmed Hussain Qureshi, Arsalan Mousavian, Chris Paxton, Michael C. Yip, and Dieter Fox. Nerp: Neural rearrangement planning for unknown objects. In Dylan A. Shell, Marc Toussaint, and M. Ani Hsieh (eds.), Robotics: Science and Systems XVII, Virtual Event, July 12-16, 2021, 2021. doi: 10. 15607/RSS.2021.XVII.072. URL https://doi.org/10.15607/RSS.2021.XVII.072.
|
| 212 |
+
|
| 213 |
+
A. Rosinol, A. Violette, M. Abate, N. Hughes, Y. Chang, J. Shi, A. Gupta, and L. Carlone. Kimera: from SLAM to spatial perception with 3D dynamic scene graphs. Intl. J. of Robotics Research, 2021.
|
| 214 |
+
Martin Runz and Lourdes Agapito. Co-fusion: Real-time segmentation, tracking and fusion of multiple objects. In ICRA, 2017.
|
| 215 |
+
Gabriel Sarch, Zhaoyuan Fang, Adam W. Harley, Paul Schydlo, Michael J. Tarr, Saurabh Gupta, and Katerina Fragkiadaki. TIDEE: tidying up novel rooms using visuo-semantic commonsense priors. In Shai Avidan, Gabriel J. Brostow, Moustapha Cisse, Giovanni Maria Farinella, and Tal Hassner (eds.), Computer Vision - ECCV 2022 - 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXXIX, volume 13699 of Lecture Notes in Computer Science, pp. 480-496. Springer, 2022. doi: 10.1007/978-3-031-19842-7_28. URL https://doi.org/10.1007/978-3-031-19842-7_28.
|
| 216 |
+
Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, and Dieter Fox. ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks. In CVPR, 2020.
|
| 217 |
+
Kunal Pratap Singh, Suvaansh Bhambi, Byeonghwi Kim, Roozbeh Mottaghi, and Jonghyun Choi. Factorizing perception and policy for interactive instruction following. In ICCV. IEEE, 2021.
|
| 218 |
+
Mike Stilman, Jan-Ullrich Schamburek, James Kuffner, and Tamim Asfour. Manipulation planning among movable obstacles. In ICRA, 2007.
|
| 219 |
+
Sebastian Thrun. *Robotic mapping: A survey.* Exploring artificial intelligence in the new millennium, 2002.
|
| 220 |
+
Sebastian Thrun, Dieter Fox, Wolfram Burgard, and Frank Dellaert. Robust monte carlo localization for mobile robots. Artificial Intelligence, 2001.
|
| 221 |
+
Luca Weihs, Matt Deitke, Aniruddha Kembhavi, and Roozbeh Mottaghi. Visual room rearrangement. In CVPR, 2021.
|
| 222 |
+
Erik Wijmans, Abhishek Kadian, Ari S. Morcos, Stefan Lee, Irfan Essa, Devi Parikh, Manolis Savva, and Dhruv Batra. Dd-ppo: Learning near-perfect pointgoal navigators from 2.5 billion frames. In ICLR, 2020.
|
| 223 |
+
Yu-Shiang Wong, Changjian Li, Matthias Niessner, and Niloy J. Mitra. Rigidfusion: RGB-d scene reconstruction with rigidly-moving objects. Computer Graphics Forum, 2021.
|
| 224 |
+
Yi Wu, Yuxin Wu, Aviv Tamar, Stuart Russell, Georgia Gkioxari, and Yuandong Tian. Bayesian relational memory for semantic visual navigation. In ICCV, 2019a.
|
| 225 |
+
Yuxin Wu, Alexander Kirillov, Francisco Massa, Wan-Yen Lo, and Ross Girshick. Detectron2. https://github.com/facebookresearch/detectron2, 2019b.
|
| 226 |
+
Wei Yang, Xiaolong Wang, Ali Farhadi, Abhinav Gupta, and Roozbeh Mottaghi. Visual semantic navigation using scene priors. In ICLR, 2019.
|
| 227 |
+
Joel Ye, Dhruv Batra, Abhishek Das, and Erik Wijmans. Auxiliary tasks and exploration enable objectnav. arXiv:2104.04112, 2021.
|
| 228 |
+
Weihao Yuan, Johannes A Stork, Danica Kragic, Michael Y Wang, and Kaiyu Hang. Rearrangement with nonprehensile manipulation using deep reinforcement learning. In ICRA, 2018.
|
| 229 |
+
|
| 230 |
+
# APPENDIX
|
| 231 |
+
|
| 232 |
+
In this appendix we include the following supporting experiments and visualizations:
|
| 233 |
+
|
| 234 |
+
A. We begin this appendix by presenting the performance of our map disagreement detection module for each object category. We find that our method effectively detects map disagreements for both small and large objects, and is therefore robust to object size.
|
| 235 |
+
B. We then present a performance breakdown of our method for object size, distance to goal, and amount of clutter, and find that our method is less effective when objects are further from the goal or when nearby objects are closer together.
|
| 236 |
+
C. We report confidence intervals for our method's performance on the rearrangement challenge.
|
| 237 |
+
D. Finally, we outline the compute infrastructure needed to reproduce our experiments.
|
| 238 |
+
E. We list the hyperparameters used in our paper.
|
| 239 |
+
F. We categorize why our method can fail and provide a qualitative example.
|
| 240 |
+
|
| 241 |
+
The official code for our method will be released at publication.
|
| 242 |
+
|
| 243 |
+
# A OBJECT TYPE VERSUS DETECTION ACCURACY
|
| 244 |
+
|
| 245 |
+
In this section, we visualize the relationship between the performance of our map disagreement detection module, detailed in Section 3, and the category of objects to be rearranged. For each of 1000 tasks in the validation set and test set of RoomR (Weihs et al., 2021), we record which object categories are detected as needing to be rearranged, and log the ground truth list of object categories that need to be rearranged. For each object category, we calculate precision as the proportion of predicted map disagreements that were correct, and recall as the proportion of ground-truth map disagreements that were correctly identified. Each bar in Figure 5 represents a $68\%$ confidence interval of precision and recall over 1000 tasks per dataset split. The experiment shows that our method is robust to the size of objects that it rearranges: small objects such as the SoapBar, CellPhone, CreditCard, and DishSponge have accuracy comparable to large objects in Figure 5.
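The per-category precision and recall described above can be sketched as follows. This is a minimal, hypothetical implementation (function name and the `(task_id, category)` record format are our own, not from the paper's code):

```python
from collections import Counter

def per_category_precision_recall(predicted, ground_truth):
    """Per-category precision/recall for map-disagreement detection.

    predicted / ground_truth: lists of (task_id, category) pairs naming
    objects flagged as (or truly) needing rearrangement.
    Returns {category: (precision, recall)}.
    """
    pred, true = Counter(predicted), Counter(ground_truth)
    tp = pred & true  # multiset intersection: correctly flagged items

    def by_cat(counter):
        out = Counter()
        for (_, cat), n in counter.items():
            out[cat] += n
        return out

    tp_c, pred_c, true_c = by_cat(tp), by_cat(pred), by_cat(true)
    cats = set(pred_c) | set(true_c)
    return {c: (tp_c[c] / pred_c[c] if pred_c[c] else 0.0,
                tp_c[c] / true_c[c] if true_c[c] else 0.0) for c in cats}
```

Precision penalizes spurious disagreement predictions for a category, while recall penalizes missed ground-truth disagreements, matching the definitions in the text.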
|
| 246 |
+
|
| 247 |
+
# B PERFORMANCE ANALYSIS
|
| 248 |
+
|
| 249 |
+
This section extends Section 4.5 with an experiment to show potential failure modes. We consider three indicators: (1) object size, (2) object distance to the goal, and (3) distance to the closest object of the same class. These indicators are visualized in Figure 6 against $\%$ Fixed. Our experiment suggests our method is robust to the size of objects, shown by the lack of a global trend in the left plot in Figure 6, and confirmed by Appendix A. Additionally, the experiment shows that objects further from the rearrangement goal are solved less frequently (middle plot), which is intuitive. Instances that have been shuffled to faraway locations in the scene may require longer exploration to find, and may be more difficult for our map disagreement detection module to match. A final conclusion we can draw from this experiment is that our method can fail when object instances are too close together. This is shown in the right plot in Figure 6 by the steep drop in performance when objects in the same category are $< 1$ meter apart. In this situation, our semantic mapping module can incorrectly detect two nearby objects as a single object, which prevents their successful rearrangement. For each of these potential failure modes, better perception and mapping approaches that more accurately describe object locations and appearance can improve the fidelity of our method and reduce failure.
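The three indicators can be computed per object from simple geometric quantities. The sketch below is a hypothetical implementation (the dict layout and function name are assumptions for illustration):

```python
import math

def difficulty_indicators(obj, goal, others):
    """Compute the three difficulty indicators for one object:
    bounding-box volume (m^3), Euclidean distance from the object's
    shuffled position to its goal position (m), and distance to the
    nearest other object of the same category (m, inf if none).

    obj: dict with 'position' (x, y, z), 'size' (w, h, d), 'category'.
    goal: (x, y, z) goal position. others: list of dicts like obj.
    """
    def dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

    volume = obj['size'][0] * obj['size'][1] * obj['size'][2]
    to_goal = dist(obj['position'], goal)
    same = [dist(obj['position'], o['position']) for o in others
            if o['category'] == obj['category']]
    return volume, to_goal, min(same) if same else float('inf')
```

Binning tasks by these three values reproduces the kind of breakdown shown in Figure 6.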
|
| 250 |
+
|
| 251 |
+
# C PERFORMANCE CONFIDENCE INTERVALS
|
| 252 |
+
|
| 253 |
+
We report $68\%$ confidence intervals in Table 3 to supplement our evaluation in Section 4.1 and Section 4.2. We calculate intervals using 1000 tasks from the validation and test sets of the RoomR (Weihs et al., 2021) dataset, and report the mean followed by $\pm$ interval width. Note that the official rearrangement challenge leaderboard does not expose confidence intervals, nor the sample-wise performance needed to calculate them. Due to this, we are unable to compute confidence intervals of the baselines VRR (Weihs et al., 2021) and CSR (Gadre et al., 2022) at this time. These additional results show that our improvements over prior work significantly exceed the $68\%$ confidence interval.
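A 68% confidence interval under a normal approximation is one standard error of the mean on each side. A minimal sketch (our own helper, assuming the normal approximation used throughout the paper's error bars):

```python
import math

def mean_and_ci68(samples):
    """Sample mean and 68% confidence half-width, taken as one
    standard error of the mean (normal approximation, sample
    standard deviation with Bessel's correction)."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return mean, math.sqrt(var / n)
```

With 1000 binary success outcomes, this yields the roughly $\pm 1.5$ percentage-point half-widths seen in Table 3.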
|
| 254 |
+
|
| 255 |
+

|
| 256 |
+
|
| 257 |
+

|
| 258 |
+
|
| 259 |
+

|
| 260 |
+
|
| 261 |
+

|
| 262 |
+
Figure 5: Performance breakdown on the validation and test sets for various types of objects. The height of bars corresponds to the sample mean of precision or recall for our map disagreement detection module. Error bars show a $68\%$ confidence interval for each kind of object. The top two plots correspond to precision and recall on the validation set, while the bottom two plots correspond to precision and recall on the test set. Object categories are shown on the x-axis, and are ordered in ascending order of size. The experiment shows our method is robust to size, with small objects at the left end of the plots having comparable accuracy to large objects at the right end of the plots.
|
| 263 |
+
|
| 264 |
+
# D REQUIRED COMPUTE
|
| 265 |
+
|
| 266 |
+
The goal of this section is to outline the amount of compute required to replicate our experiments. We describe the compute required for (1) training Mask R-CNN, (2) training a semantic search policy $\pi_{\theta}(\mathbf{x}|m_i)$, and (3) benchmarking the agent on the rearrangement challenge. For training Mask R-CNN, a dataset of 2 million images with instance segmentation labels was collected from the THOR simulator using the training split of the RoomR (Weihs et al., 2021) dataset. We then used Detectron2 (Wu et al., 2019b) with default hyperparameters to train Mask R-CNN with a ResNet50 (He et al., 2016) Feature Pyramid Network backbone (Lin et al., 2017). We trained our Mask R-CNN for five epochs on a DGX with eight Nvidia 32GB V100 GPUs for 48 hours. Our semantic search policy requires significantly less compute: training completes 15 epochs on a dataset of 8000 semantic maps annotated with an expert search distribution in nine hours on a single Nvidia 12GB 3080ti GPU. Evaluating our method on the AI2-THOR rearrangement challenge requires 40
|
| 267 |
+
|
| 268 |
+

|
| 269 |
+
Figure 6: Performance of various ablations for different Size (Meters $^3$ ), Distance To Goal (Meters), and Nearest Same Object (Meters). These indicators measure properties of objects that make rearrangement hard. Colored lines represent the average performance over 1000 tasks in each dataset split. Error bars represent a $68\%$ confidence interval over those same 1000 sample points. The experiment shows our method can fail when objects of the same class are too close together (right plot), and when objects are too far from the goal location, typically $>4.157$ meters (center plot).
|
| 270 |
+
|
| 271 |
+

|
| 272 |
+
|
| 273 |
+

|
| 274 |
+
|
| 275 |
+

|
| 276 |
+
|
| 277 |
+
Table 3: Confidence intervals for our method on the AI2-THOR rearrangement challenge. Intervals are calculated from 1000 sample points from RoomR (Weihs et al., 2021) validation and test sets. We report performance starting with the sample mean, followed by $\pm$ a $68\%$ confidence interval width. Our improvements over prior work significantly exceed the $68\%$ confidence interval, which suggests that our improvements are significant and our method performs consistently well.
|
| 278 |
+
|
| 279 |
+
<table><tr><td rowspan="2">Method</td><td colspan="2">Validation</td><td colspan="2">Test</td></tr><tr><td>% Fixed Strict</td><td>Success</td><td>% Fixed Strict</td><td>Success</td></tr><tr><td>Ours w/o Semantic Search</td><td>15.77 ± 0.85</td><td>4.30 ± 0.63</td><td>15.11 ± 0.84</td><td>3.60 ± 0.58</td></tr><tr><td>Ours</td><td>17.47 ± 0.92</td><td>6.30 ± 0.76</td><td>16.56 ± 0.89</td><td>4.70 ± 0.67</td></tr><tr><td>Ours + GT Semantic Search</td><td>21.24 ± 0.99</td><td>7.60 ± 0.83</td><td>19.79 ± 0.96</td><td>6.10 ± 0.75</td></tr><tr><td>Ours + GT Segmentation</td><td>66.66 ± 1.21</td><td>45.60 ± 1.57</td><td>59.29 ± 1.26</td><td>37.55 ± 1.53</td></tr><tr><td>Ours + GT Both</td><td>68.46 ± 1.20</td><td>48.60 ± 1.57</td><td>59.50 ± 1.31</td><td>38.33 ± 1.57</td></tr></table>
|
| 280 |
+
|
| 281 |
+
GPU-hours with a 2080ti GPU or equivalent. In practice, we parallelize evaluation across 32 GPUs, which results in an evaluation time of 1.25 hours for each of the validation and test sets.
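The scheduling arithmetic above can be stated directly. This trivial helper (a hypothetical name, assuming ideal linear scaling and no per-GPU overhead) reproduces the 1.25-hour figure:

```python
def wall_clock_hours(gpu_hours, num_gpus):
    """Ideal wall-clock time for an embarrassingly parallel evaluation
    workload, assuming perfect linear scaling across GPUs."""
    return gpu_hours / num_gpus
```

Evaluation episodes are independent, so the workload parallelizes near-linearly in practice.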
|
| 282 |
+
|
| 283 |
+
# E HYPERPARAMETERS
|
| 284 |
+
|
| 285 |
+
We provide a list of hyperparameters and their values in Table 4. These hyperparameters are held constant throughout the paper, except in ablations that study the sensitivity of our method to them, such as Section 4.3. Our ablations show our method is robust to these hyperparameters.
|
| 286 |
+
|
| 287 |
+
# F REASONS FOR TASK FAILURES
|
| 288 |
+
|
| 289 |
+
This section explores the reasons why certain tasks in the validation and test sets are not solved by our method. We consider four reasons for task failures that cover all possible outcomes: (1) the agent correctly predicts which objects need to be moved where, but fails to rearrange at least one object, (2) the agent incorrectly predicts that an object needs to be rearranged when it does not, (3) the agent runs out of time, and (4) the agent misses at least one object that needs to be rearranged. We visualize the proportion of failed tasks for each category in Figure 7. We find that our method with ground truth perception and search (Ours + GT Both) tends to fail to rearrange objects after correctly identifying which objects need to be rearranged. In contrast, the largest reason for failure for our method (Ours) is the agent running out of time, followed by rearranging incorrect objects. This suggests the largest potential gains for our method arise from improving the speed and fidelity of map building, whereas the optimality of the rearrangement policy becomes the bottleneck once a perfect map is available.
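The four failure reasons partition all episode outcomes. A minimal sketch of such a categorization (the episode-record fields and the priority order used to keep categories mutually exclusive are our own assumptions, not from the paper's code):

```python
def categorize_task_outcome(episode):
    """Assign an episode to 'solved' or one of four failure reasons.

    episode: dict with
      'solved': bool,
      'extra': objects flagged that did not need rearrangement,
      'timed_out': whether the time budget expired,
      'missed': objects needing rearrangement never flagged.
    Conditions are checked in a fixed priority order so each episode
    falls into exactly one category (an assumed tie-breaking scheme).
    """
    if episode['solved']:
        return 'solved'
    if episode['extra']:
        return 'rearranged wrong object'
    if episode['timed_out']:
        return 'ran out of time'
    if episode['missed']:
        return 'missed an object'
    return 'failed to rearrange a correctly identified object'
```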
|
| 290 |
+
|
| 291 |
+
Table 4: Hyperparameters used by our approach for all rearrangement tasks.
|
| 292 |
+
|
| 293 |
+
<table><tr><td>Hyperparameter</td><td>Value</td></tr><tr><td>voxel size</td><td>0.05 meters</td></tr><tr><td>map height H</td><td>384</td></tr><tr><td>map width W</td><td>384</td></tr><tr><td>map depth D</td><td>96</td></tr><tr><td>classes C</td><td>54</td></tr><tr><td>detection confidence threshold</td><td>0.9</td></tr><tr><td>rearrangement distance threshold</td><td>0.05 meters</td></tr><tr><td>expert search distribution σ</td><td>0.75 meters</td></tr><tr><td>πθ convolution hidden size</td><td>64</td></tr><tr><td>πθ convolution kernel size</td><td>3 × 3</td></tr><tr><td>πθ layers</td><td>5</td></tr><tr><td>πθ activation function</td><td>ReLU</td></tr><tr><td>πθ optimizer</td><td>Adam</td></tr><tr><td>πθ learning rate</td><td>0.0003</td></tr><tr><td>πθ batch size</td><td>8</td></tr><tr><td>πθ epochs</td><td>15</td></tr><tr><td>πθ dataset size</td><td>8000</td></tr></table>
|
| 294 |
+
|
| 295 |
+

|
| 296 |
+
Figure 7: Categorization of the reasons why our method fails to solve tasks. The proportion of tasks that are solved (shown in blue) or fail due to one of four reasons (orange, green, red, purple) is shown for different ablations of our method. The height per bar corresponds to the proportion of tasks in the validation or test set in each category, and error bars indicate a $68\%$ confidence interval. This experiment shows the largest reason for failure is a result of mapping errors. In the right plot, the agent fails most frequently by rearranging the wrong object, and by running out of time, which can result from imperfect semantic maps. In contrast, once perfect maps are available in the left plot, the largest source of errors are due to an imperfect planning-based rearrangement policy instead.
|
| 297 |
+
|
| 298 |
+

|
| 299 |
+
|
| 300 |
+

|
| 301 |
+
|
| 302 |
+
# G IMAGE FEATURES FOR MATCHING OBJECT INSTANCES
|
| 303 |
+
|
| 304 |
+
We conduct an experiment where we use image features for matching instances of objects between phases instead of their average color. We use ResNet50 (He et al., 2016) pretrained on ImageNet, and compute a mean image feature $h_i \in \mathbb{R}^{256}$ for each object. We process images with the ResNet50 backbone, extract a spatial feature map after the first residual block of ResNet50, and back-project the features to a voxel grid. In this fashion, we introduce 256 additional channels at each voxel to store the image features. We then compute $h_i$ by averaging the voxel features corresponding to occupied voxels for object $i$ in the map. Results in Table 5 show that matching object instances using image features improves the performance of our method by $6.97$ $\%$ Fixed Strict on the test set.
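Matching instances between phases amounts to an assignment problem over feature similarities. The sketch below is a hypothetical stand-in that brute-forces the optimal assignment by cosine similarity (fine for the handful of instances per category; the Hungarian method (Kuhn, 1955) solves the same problem efficiently for larger sets):

```python
import itertools
import math

def match_instances(feats_a, feats_b):
    """Pair instances between phases by maximizing total cosine
    similarity of their mean image features. Assumes equally many
    instances in each phase; returns (index_in_a, index_in_b) pairs."""
    def cos(u, v):
        dot = sum(x * y for x, y in zip(u, v))
        nu = math.sqrt(sum(x * x for x in u))
        nv = math.sqrt(sum(x * x for x in v))
        return dot / (nu * nv)

    best, best_score = None, -math.inf
    for perm in itertools.permutations(range(len(feats_b))):
        score = sum(cos(feats_a[i], feats_b[j]) for i, j in enumerate(perm))
        if score > best_score:
            best, best_score = perm, score
    return [(i, j) for i, j in enumerate(best)]
```

Richer features make corresponding instances more similar than distractors, which is why feature matching outperforms average color in Table 5.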
|
| 305 |
+
|
| 306 |
+
# H EFFECT OF SEMANTIC SEARCH ON FOUND OBJECTS
|
| 307 |
+
|
| 308 |
+
To understand how our Semantic Search policy leads to an improvement in downstream performance, we hypothesize that Semantic Search leads the agent to find more objects during episodes. We
|
| 309 |
+
|
| 310 |
+

|
| 311 |
+
Locating ToiletPaper
|
| 312 |
+
|
| 313 |
+

|
| 314 |
+
Moving To Goal
|
| 315 |
+
|
| 316 |
+

|
| 317 |
+
Attempt Placing At Goal
|
| 318 |
+
Figure 8: Qualitative example for why rearranging the correct object can fail. In this task, the agent correctly predicts the ToiletPaper needs to be rearranged, but fails to place the ToiletPaper in the correct location. The rightmost image shows the goal is located on the floor, but the agent mistakenly places the ToiletPaper on the bathtub instead, shown in the second image from the right.
|
| 319 |
+
|
| 320 |
+

|
| 321 |
+
Placed At Wrong Location
|
| 322 |
+
|
| 323 |
+
Table 5: Matching object instances using image features. Results show that our method can be further improved by matching object instances between the walkthrough and unshuffle phases to maximize similarity of corresponding features from ResNet50 (He et al., 2016) pretrained on ImageNet.
|
| 324 |
+
|
| 325 |
+
<table><tr><td rowspan="2">Method</td><td colspan="2">Validation</td><td colspan="2">Test</td></tr><tr><td>% Fixed Strict</td><td>Success</td><td>% Fixed Strict</td><td>Success</td></tr><tr><td>Ours</td><td>17.47 ± 0.92</td><td>6.30 ± 0.76</td><td>16.56 ± 0.89</td><td>4.70 ± 0.67</td></tr><tr><td>Ours + Feature Matching</td><td>23.06 ± 1.04</td><td>7.81 ± 0.88</td><td>23.59 ± 1.03</td><td>6.79 ± 0.82</td></tr></table>
|
| 326 |
+
|
| 327 |
+
test this hypothesis by measuring the percent of objects found during episodes of the official 2022 Rearrangement Challenge. We consider an object found once the agent navigates within 1 meter of the object. We track the cumulative percent of objects found, and report in Figure 9 the mean and $68\%$ confidence interval of this metric across 1000 test set tasks. The results confirm our hypothesis: in both phases, Semantic Search leads the agent to find more objects faster. In particular, at 250 episode timesteps during the Walkthrough phase, the agent has a $9.13 \pm 1.23$ higher percent than a uniform baseline. During the Unshuffle phase, the agent has a $6.23 \pm 1.20$ higher percent than our uniform baseline at 100 timesteps. This improvement becomes less significant as time increases during the Unshuffle phase, suggesting Semantic Search is most helpful with a small time budget.
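The percent-found metric above can be sketched directly (a hypothetical helper using 2D floor coordinates and the 1-meter threshold stated in the text):

```python
def percent_found_curve(agent_positions, object_positions, radius=1.0):
    """Cumulative percent of objects 'found' over an episode: an object
    counts as found once the agent has navigated within `radius` meters
    of it. Positions are (x, z) floor coordinates, one agent position
    per timestep. Returns one percentage per timestep."""
    found = set()
    curve = []
    for ax, az in agent_positions:
        for i, (ox, oz) in enumerate(object_positions):
            if (ax - ox) ** 2 + (az - oz) ** 2 <= radius ** 2:
                found.add(i)
        curve.append(100.0 * len(found) / len(object_positions))
    return curve
```

Averaging this curve over 1000 tasks per policy produces the comparison in Figure 9.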
|
| 328 |
+
|
| 329 |
+
# I NOISY POSE ESTIMATION
|
| 330 |
+
|
| 331 |
+
We simulate noisy pose estimation due to imperfect localization and mapping using the sensor noise model introduced by Sarch et al. (2022), which builds on earlier work from Chaplot et al. (2020b): a Gaussian noise model fit to localization data collected from a real LocoBot robot (Murali et al., 2019) and applied in simulation. This model adds Gaussian noise with $\sigma = 0.005$ meters to positional observations, and Gaussian noise with $\sigma = 0.5$ degrees to yaw observations, at each timestep. Results with noisy pose are given in Table 6. Our method is robust to this sensor noise model. On both the validation and test sets of the 2022 AI2-THOR Rearrangement Challenge, sensor noise leads to a slight improvement in performance, which we attribute to noise producing a smoother semantic map.
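Applying this noise model per timestep can be sketched as follows (a minimal stand-in with a hypothetical function name; the $\sigma$ values are those stated above):

```python
import random

def add_pose_noise(x, z, yaw_degrees, rng, sigma_pos=0.005, sigma_yaw=0.5):
    """Perturb one pose observation with zero-mean Gaussian noise:
    sigma_pos meters on each positional axis and sigma_yaw degrees on
    yaw, applied once per timestep. `rng` is a random.Random instance
    so experiments are reproducible."""
    return (x + rng.gauss(0.0, sigma_pos),
            z + rng.gauss(0.0, sigma_pos),
            (yaw_degrees + rng.gauss(0.0, sigma_yaw)) % 360.0)
```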
|
| 332 |
+
|
| 333 |
+
Figure 9: Impact of Semantic Search on the percent of shuffled objects found during either phase. In both cases, we observe an increase in the percent of shuffled objects found when using Semantic Search. The improvement is most significant during the walkthrough phase, where Semantic Search leads to an improvement of $+9.13$ percent of objects found at 250 episode timesteps. The improvement during the unshuffle phase is smaller, with $+6.23$ at 100 timesteps, and $+1.67$ at 500 timesteps.
|
| 334 |
+

|
| 335 |
+
|
| 336 |
+
|
| 337 |
+

|
| 338 |
+
|
| 339 |
+
Table 6: Impact of imperfect localization due to sensor noise on our method.
|
| 340 |
+
|
| 341 |
+
<table><tr><td rowspan="2">Method</td><td colspan="2">Validation</td><td colspan="2">Test</td></tr><tr><td>% Fixed Strict</td><td>Success</td><td>% Fixed Strict</td><td>Success</td></tr><tr><td>Ours</td><td>17.47</td><td>6.30</td><td>16.56</td><td>4.70</td></tr><tr><td>Ours + Noisy Pose</td><td>19.84</td><td>6.80</td><td>17.33</td><td>4.90</td></tr></table>
|
2023/A Simple Approach for Visual Room Rearrangement_ 3D Mapping and Semantic Search/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:93d430595744fea6ea2823c47e00fe565388638774243b8212b59b0992b6b569
|
| 3 |
+
size 796275
|
2023/A Simple Approach for Visual Room Rearrangement_ 3D Mapping and Semantic Search/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2023/A Simple Yet Powerful Deep Active Learning With Snapshots Ensembles/dfb8bf62-6253-4020-aab4-1a7df51580ae_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2023/A Simple Yet Powerful Deep Active Learning With Snapshots Ensembles/dfb8bf62-6253-4020-aab4-1a7df51580ae_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2023/A Simple Yet Powerful Deep Active Learning With Snapshots Ensembles/dfb8bf62-6253-4020-aab4-1a7df51580ae_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:1e7a3966e6a086d1332eb8b44f0b15c1d6b0f0ccf7ea6faf499a00340dd9404c
|
| 3 |
+
size 1555120
|
2023/A Simple Yet Powerful Deep Active Learning With Snapshots Ensembles/full.md
ADDED
|
@@ -0,0 +1,365 @@
|
| 1 |
+
# A SIMPLE YET POWERFUL DEEP ACTIVE LEARNING WITH SNAPSHOTS ENSEMBLES
|
| 2 |
+
|
| 3 |
+
Seohyeon Jung* Sanghyun Kim* Juho Lee
|
| 4 |
+
|
| 5 |
+
Kim Jaechul Graduate School of AI
|
| 6 |
+
|
| 7 |
+
Korea Advanced Institute of Science and Technology (KAIST)
|
| 8 |
+
|
| 9 |
+
Daejeon, Republic of Korea
|
| 10 |
+
|
| 11 |
+
{heon2203,nannullna,juholee}@kaist.ac.kr
|
| 12 |
+
|
| 13 |
+
# ABSTRACT
|
| 14 |
+
|
| 15 |
+
Given an unlabeled pool of data and experts who can label them, active learning aims to build an agent that can effectively select data to be queried to the experts, maximizing the gain in performance when trained with them. While there are several principles for active learning, a prevailing approach is to estimate the uncertainty of predictions for unlabeled samples and use it to define acquisition functions. Active learning with the uncertainty principle works well for deep learning, especially for large-scale image classification tasks with deep neural networks. Still, how the uncertainty of predictions is estimated is often overlooked, despite common findings on the difficulty of accurately estimating the uncertainty of deep neural networks. In this paper, we highlight the effectiveness of snapshot ensembles for deep active learning. Compared to previous approaches based on Monte-Carlo dropout or deep ensembles, we show that a simple acquisition strategy based on uncertainties estimated from parameter snapshots gathered from a single optimization path significantly improves the quality of the acquired samples. Based on this observation, we further propose an efficient active learning algorithm that maintains a single learning trajectory throughout the entire sequence of active learning episodes, unlike existing algorithms that train models from scratch in every episode. Through an extensive empirical comparison, we demonstrate the effectiveness of snapshot ensembles for deep active learning. Our code is available at: https://github.com/nannullna/snapshot-al.
|
| 16 |
+
|
| 17 |
+
# 1 INTRODUCTION
|
| 18 |
+
|
| 19 |
+
The progress of deep learning is largely driven by data, and we often work with well-curated and labeled benchmark data for model development. In practice, however, such nicely labeled data are rarely available. Much of the data accessible to practitioners is unlabeled, and, more importantly, labeling it incurs costs due to the human effort involved. Active Learning (AL) may reduce the gap between the ideal and real-world scenarios by selecting informative samples from the unlabeled pool so that, after they are labeled and used for training, the model's performance improves as much as possible.
|
| 20 |
+
|
| 21 |
+
The main ingredient of an AL algorithm is the acquisition function, which ranks the samples in an unlabeled pool with respect to their utility for improvement. While there are several possible design principles (Ren et al., 2021), in this paper, we mainly focus on acquisition functions based on the uncertainty of predictions. Intuitively speaking, given a model trained with the data acquired so far, an unlabeled example exhibiting high predictive uncertainty with respect to the model would be a "confusing" sample that would substantially improve the model if it were trained with the label acquired from experts. A popular approach in this line is Bayesian Active Learning by Disagreement (BALD) (Houlsby et al., 2011), where a committee of multiple models predicts the label of an unlabeled sample, and their degree of disagreement is measured as a ranking factor. Here, the multiple models are
|
| 22 |
+
|
| 23 |
+
usually constructed in a Bayesian fashion, and their disagreement reflects the model uncertainty about the prediction. BALD is demonstrated to scale well for modern deep neural networks for high-dimensional and large-scale data (Gal et al., 2017).
|
| 24 |
+
|
| 25 |
+
Similar to BALD, many uncertainty-based AL algorithms employ a committee of models to estimate the uncertainty of predictions. The problem is that, for deep neural networks trained on high-dimensional data, it is often frustratingly difficult to estimate the uncertainty accurately. To address this, Gal et al. (2017) proposed to use Monte-Carlo DropOut (MCDO) (Gal and Ghahramani, 2017), an instance of variational approximation to the posterior and predictive uncertainty, while Rakesh and Jain (2021) suggested using more generic spike-and-slab variational posteriors (Louizos et al., 2017). Nevertheless, variational approximations tend to underestimate posterior variances (Blei et al., 2017; Le Folgoc et al., 2021), so the uncertainty-based acquisition functions computed from them may be suboptimal. Alternatively, one can employ Deep Ensemble (DE) (Lakshminarayanan et al., 2017), where a single model is trained multiple times with the same data but with different random seeds for initialization and mini-batching. Despite being simple to implement, DE works surprisingly well, surpassing most Bayesian Neural Network (BNN) alternatives in terms of accuracy and predictive uncertainty (Fort et al., 2021; Ovadia et al., 2019). Accordingly, Beluch et al. (2018) highlighted the effectiveness of DE as a way to estimate uncertainty for acquisition functions and demonstrated excellent performance.
|
| 26 |
+
|
| 27 |
+
A drawback of DE is that it is computationally expensive, as it requires multiple models to be trained and maintained for inference. As an alternative, Snapshot Ensemble (SE) (Huang et al., 2017; Garipov et al., 2018) proposes to collect multiple model snapshots (checkpoints) within a single learning trajectory, rather than at the end of multiple learning trajectories as in DE. Compared to DE, SE enables the construction of a decent set of models without going through multiple training runs, while not losing too much accuracy.
|
| 28 |
+
|
| 29 |
+
Inspired by the advantage of SE, we study its use in the context of AL. Specifically, we estimate the uncertainties from SE and use them to evaluate the uncertainty-based acquisition functions. Through extensive empirical comparisons, we demonstrate that the AL based on SE significantly outperforms existing approaches, even comparable to or better than the one with DE. This result is somewhat surprising since it is often reported that SE is less accurate than DE (Ashukha et al., 2020). Moreover, based on this observation, we propose a novel AL algorithm that can substantially reduce the number of training steps required until the final acquisition. Typically, an AL algorithm alternates between acquiring labels based on a model and re-training the model with the newly acquired labels. Here, for every re-training step, the old models are discarded, and a new model is trained from scratch. Instead, we suggest maintaining a model on a single learning trajectory throughout the entire AL procedure and gathering snapshots from the trajectory to compute acquisition functions. We show that this can significantly reduce the number of training steps without sacrificing too much accuracy. In summary, our contributions are as follows:
|
| 30 |
+
|
| 31 |
+
- We propose to use SE for the uncertainty-based acquisition functions for AL and demonstrate its effectiveness through various empirical evaluations.
|
| 32 |
+
- We propose a novel AL algorithm where a single learning trajectory is maintained and used to compute acquisition functions throughout the entire AL procedure. We demonstrate that our algorithm achieves decent accuracy with far fewer training steps.
|
| 33 |
+
|
| 34 |
+
# 2 BACKGROUND
|
| 35 |
+
|
| 36 |
+
# 2.1 SETTINGS AND BASIC ACTIVE LEARNING ALGORITHM
|
| 37 |
+
|
| 38 |
+
In this paper, we mainly discuss $K$ -way classification problem, where the goal is to learn a classifier $f(\cdot; \theta)$ , parameterized by $\theta$ , taking an input $\pmb{x} \in \mathbb{R}^d$ to produce a $K$ -dimensional probability vector, that is, $f(\pmb{x}; \pmb{\theta}) \in [0,1]^K$ such that $\sum_{k=1}^{K} f_k(\pmb{x}; \pmb{\theta}) = 1$ . To learn $\pmb{\theta}$ , we need a labeled dataset consisting of pairs of an input $\pmb{x}$ and corresponding label $y \in \{1, \dots, K\}$ , but in AL, we are given only an unlabeled dataset $\mathcal{U} = \{\pmb{x}_i\}_{i=1}^n$ without labels.
|
| 39 |
+
|
| 40 |
+
An AL algorithm is defined with a classifier model $f(\cdot; \theta)$ and an acquisition function $a: \mathbb{R}^d \to \mathbb{R}$ measuring how useful an unlabeled example $x$ is to the classifier $f(\cdot; \theta)$ . Given $f$ and $a$ , an AL algorithm alternates between acquiring the labels for chosen unlabeled samples and training the
|
| 41 |
+
|
| 42 |
+
classifier with the labeled samples. A single iteration of acquiring samples and training the classifier is called an episode. In the first episode, $m$ samples are randomly chosen from $\mathcal{U}$ , and the labels are acquired for them to constitute an initial training set $\mathcal{D}_{\mathrm{train}}$ . The classifier is then trained with $\mathcal{D}_{\mathrm{train}}$ to obtain $\theta_{1}$ , or in case of the ensemble-based AL, a set of parameters $\{\theta_{1}^{(s)}\}_{s=1}^{S}$ . The labeled samples are removed from $\mathcal{U}$ . For all subsequent episodes $t \geq 2$ , with the parameters $\{\theta_{t-1}^{(s)}\}_{s=1}^{S}$ from the previous episode, the samples remaining in $\mathcal{U}$ are ranked with the values of the acquisition function, and the top $m$ of them are selected to be labeled. The newly labeled $m$ samples are then appended to the labeled training set $\mathcal{D}_{\mathrm{train}}$ , and the classifier is trained from scratch with the extended $\mathcal{D}_{\mathrm{train}}$ to obtain $\{\theta_{t}^{(s)}\}_{s=1}^{S}$ . The algorithm terminates when it reaches the predetermined number of episodes $T$ , and the goal of AL is to maximize the accuracy of the classifier after the final episode.
|
| 43 |
+
|
| 44 |
+
# 2.2 ACQUISITION FUNCTIONS
|
| 45 |
+
|
| 46 |
+
Here, we review some popular choices for the acquisition functions, especially the ones based on predictive uncertainties. We consider the following acquisition functions which estimate the uncertainty of the predictions via a set of parameters $\{\pmb{\theta}^{(s)}\}_{s=1}^S$ . This set of parameters defines a committee of models $\{f(\cdot; \pmb{\theta}^{(s)})\}_{s=1}^S$ .
|
| 47 |
+
|
| 48 |
+
Maximum Entropy (ME). ME measures the predictive entropy of a given example $\pmb{x}$, which can be approximated as,
|
| 49 |
+
|
| 50 |
+
$$
|
| 51 |
+
\begin{aligned}
H[y \mid \pmb{x}, \mathcal{D}_{\mathrm{train}}] &= -\sum_{k=1}^{K} p(y = k \mid \pmb{x}, \mathcal{D}_{\mathrm{train}}) \log p(y = k \mid \pmb{x}, \mathcal{D}_{\mathrm{train}}) \\
&\approx -\sum_{k=1}^{K} \left(\frac{1}{S}\sum_{s=1}^{S} f_{k}(\pmb{x}; \pmb{\theta}^{(s)})\right) \log\left(\frac{1}{S}\sum_{s=1}^{S} f_{k}(\pmb{x}; \pmb{\theta}^{(s)})\right). \tag{1}
\end{aligned}
|
| 52 |
+
$$
|
| 53 |
+
|
| 54 |
+
Larger entropy means that the model is more uncertain about the prediction.
|
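Concretely, Eq. (1) amounts to averaging the committee's probability vectors and taking the entropy of the average; a minimal NumPy sketch (the function name is ours):

```python
import numpy as np

def max_entropy(probs):
    """Acquisition score from Eq. (1): entropy of the ensemble-averaged
    predictive distribution. `probs` has shape (S, N, K): S committee
    members, N unlabeled examples, K classes."""
    mean = probs.mean(axis=0)                      # (N, K)
    return -(mean * np.log(mean + 1e-12)).sum(-1)  # (N,)
```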
| 55 |
+
|
| 56 |
+
Variation Ratio (VR). VR (Freeman, 1965) measures how certain the model is about its prediction for $\pmb{x}$ , or how many of the committee agree with the prediction, and is calculated as $1 - f_{m} / S$ , where $f_{m}$ is the frequency of a mode prediction over $S$ committee members. Similarly, Least Confident (LC) sampling chooses the least confident sample as $1 - \max_k p(y = k|\pmb{x},\mathcal{D}_{\mathrm{train}})$ , and Margin (MAR) sampling chooses examples with the smallest difference between the largest and the second largest probabilities.
|
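These committee-based scores are equally direct to compute from a probability tensor `probs` of shape $(S, N, K)$; a minimal sketch (the helper names are illustrative):

```python
import numpy as np

def variation_ratio(probs):
    """1 - f_m / S, where f_m is the frequency of the modal class
    among the S committee members' hard predictions."""
    votes = probs.argmax(-1)                       # (S, N)
    S = votes.shape[0]
    f_m = np.array([np.bincount(v).max() for v in votes.T])
    return 1.0 - f_m / S

def least_confident(probs):
    """1 - max_k of the averaged predictive probabilities."""
    return 1.0 - probs.mean(0).max(-1)

def margin(probs):
    """Gap between the two largest averaged class probabilities;
    smaller margin = more informative, so rank ascending."""
    top2 = np.sort(probs.mean(0), axis=-1)[:, -2:]
    return top2[:, 1] - top2[:, 0]
```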
| 57 |
+
|
| 58 |
+
Bayesian Active Learning by Disagreement (BALD). BALD (Houlsby et al., 2011) measures the mutual information between the label and the parameter given an input $\pmb{x}$ and the training data $\mathcal{D}_{\mathrm{train}}$ . It can also be interpreted as measuring disagreement among the predictions of the committee members. BALD is maximized when each of the committee members is certain about their own predictions (small $H[y|\pmb{x},\pmb{\theta}^{(s)}]$ ), but the predictions disagree with each other, so the averaged prediction becomes uncertain (high $H[y|\pmb{x},\mathcal{D}_{\mathrm{train}}]$ ).
|
| 59 |
+
|
| 60 |
+
$$
|
| 61 |
+
\begin{aligned}
I[y, \pmb{\theta} \mid \pmb{x}, \mathcal{D}_{\mathrm{train}}] &= H[y \mid \pmb{x}, \mathcal{D}_{\mathrm{train}}] - \mathbb{E}_{\pmb{\theta} \mid \mathcal{D}_{\mathrm{train}}}\left[H[y \mid \pmb{x}, \pmb{\theta}]\right] \\
&\approx -\sum_{k=1}^{K} \left(\frac{1}{S}\sum_{s=1}^{S} f_{k}(\pmb{x}; \pmb{\theta}^{(s)})\right) \log\left(\frac{1}{S}\sum_{s=1}^{S} f_{k}(\pmb{x}; \pmb{\theta}^{(s)})\right) \\
&\quad + \frac{1}{S}\sum_{s=1}^{S}\sum_{k=1}^{K} f_{k}(\pmb{x}; \pmb{\theta}^{(s)}) \log f_{k}(\pmb{x}; \pmb{\theta}^{(s)}). \tag{2}
\end{aligned}
|
| 62 |
+
$$
|
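Eq. (2) can likewise be evaluated directly from the committee's probability tensor: entropy of the mean prediction minus the mean of per-member entropies. A minimal sketch (naming is ours):

```python
import numpy as np

def bald(probs):
    """BALD score: mutual information I[y, theta | x, D], approximated
    as H[mean prediction] - mean of per-member entropies.
    `probs` has shape (S, N, K)."""
    mean = probs.mean(0)
    h_mean = -(mean * np.log(mean + 1e-12)).sum(-1)
    h_members = -(probs * np.log(probs + 1e-12)).sum(-1).mean(0)
    return h_mean - h_members
```

The score is maximized when each member is individually confident but the members disagree, exactly as described above.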
| 63 |
+
|
| 64 |
+
# 2.3 ESTIMATING UNCERTAINTY
|
| 65 |
+
|
| 66 |
+
In Bayesian AL, the model parameter $\pmb{\theta}$ is treated as a random variable with prior $p(\pmb{\theta})$ and the posterior $p(\pmb{\theta}|\mathcal{D}_{\mathrm{train}})$ is approximated via a set of samples $\{\pmb{\theta}^{(s)}\}_{s=1}^S$ . There are several ways to approximate it.
|
| 67 |
+
|
| 68 |
+
Variational approximations. For a variational approximation, an easy-to-handle variational distribution $q(\pmb{\theta})$ is introduced, learned to minimize $D_{\mathrm{KL}}[q(\pmb{\theta})\| p(\pmb{\theta}|\mathcal{D}_{\mathrm{train}})]$, and used as a proxy for $p(\pmb{\theta}|\mathcal{D}_{\mathrm{train}})$. That is, once the approximate distribution $q(\pmb{\theta})$ is obtained, we draw $\pmb{\theta}^{(1)},\dots,\pmb{\theta}^{(S)}\stackrel {\mathrm{i.i.d.}}{\sim}q(\pmb{\theta})$. A popular choice for $q(\pmb{\theta})$ is a mean-field Gaussian distribution (Blundell et al., 2015). In the AL literature, MCDO has been widely used following Gal et al. (2017), where dropout (Srivastava et al., 2014) is applied to the model $f$ and the randomness due to dropout is interpreted as an approximate posterior $q(\pmb{\theta})$. While relatively simple to implement, variational approximations are known to underestimate posterior variances.
|
| 69 |
+
|
| 70 |
+
Deep ensembles. DE (Lakshminarayanan et al., 2017) trains $f(\cdot ;\pmb{\theta})$ multiple times with the same $\mathcal{D}_{\mathrm{train}}$ but with different random initializations to obtain $\{\pmb{\theta}^{(s)}\}_{s = 1}^{S}$. DE is simple to implement, yet its performance is remarkable, achieving state-of-the-art results across various applications. The power of DE stems mainly from its ability to collect parameters from multiple modes of the loss surface (Fort et al., 2021), so the committee constructed from them yields a diverse set of predictions. Even though it does not explicitly assume a prior $p(\pmb{\theta})$, DE can roughly be interpreted as an approximate Bayesian inference method (Wilson and Izmailov, 2021; D'Angelo and Fortuin, 2021), so the parameters $\{\pmb{\theta}^{(s)}\}_{s = 1}^{S}$ can be interpreted as posterior samples approximating the uncertainty of the models.
|
| 71 |
+
|
| 72 |
+
Snapshot ensembles. DE is expensive, both for training and inference, since it has to keep multiple models with different parameters. SE (Huang et al., 2017; Garipov et al., 2018) reduces the training cost of DE by gathering the multiple parameters $\{\pmb{\theta}^{(s)}\}_{s=1}^{S}$ within a single training run rather than from multiple training runs. To obtain diverse parameters, the learning rate schedule is carefully chosen to encourage the optimization path to explore a wide area of the loss surface, and parameter "snapshots" are periodically captured during the run. SE usually underperforms DE with the same number of parameters gathered, but it can collect the parameters much faster since it requires only a single training run.
|
| 73 |
+
|
| 74 |
+
# 3 METHODS
|
| 75 |
+
|
| 76 |
+
# 3.1 ACTIVE LEARNING WITH SNAPSHOT ENSEMBLES
|
| 77 |
+
|
| 78 |
+
We first present an AL algorithm based on SE that is simple and efficient. In each episode, we store parameter snapshots at regular intervals during the classifier training stage, which are then used to compute the acquisition function at the end of the episode. This approach incurs no additional training cost, unlike AL based on DE. In the final episode, we have several options for training the classifier on the acquired data $\mathcal{D}_{\mathrm{train}}$: following a single learning trajectory and taking the parameter at the last step as a point estimate, or applying DE to obtain an ensembled model. Algorithm 1 summarizes our SE-based AL algorithm, where the final classifier is obtained with vanilla Stochastic Gradient Descent (SGD), but DE can be applied instead. In Section 5, we demonstrate that this simple modification with SE, despite no increase in training and inference costs, significantly improves performance, even outperforming AL with DE.
|
| 79 |
+
|
| 80 |
+
# 3.2 ACTIVE LEARNING WITH SNAPSHOT ENSEMBLES AND FINE-TUNING
|
| 81 |
+
|
| 82 |
+
Algorithm 1 trains classifiers for intermediate episodes and discards them after computing the acquisition function. However, considering that the acquisition function computed from a single learning trajectory works well, we can improve efficiency by keeping a single trajectory throughout all episodes and using the resulting parameter of the previous episode as the initialization for the next. We call this strategy $\mathrm{SE} + \mathrm{FT}$. There are two things to note here. First, since we continuously fine-tune a single model, we need fewer training steps per episode than vanilla AL. Second, although the intermediate classifiers may be less accurate than those trained from scratch, this matters little: the intermediate episodes exist to acquire samples quickly for the final training, so what really matters is the accuracy after the final episode.
|
| 83 |
+
|
| 84 |
+
Also, our AL procedure with fine-tuning is reminiscent of continual learning, in the sense that a single model is continually trained over multiple episodes with different data. Accordingly, we employ two tricks commonly used in the continual learning literature.
|
| 85 |
+
|
| 86 |
+
# Algorithm 1: AL with SE
|
| 87 |
+
|
| 88 |
+
Algorithm 2: AL with SE + Fine-tuning (FT)
|
| 89 |
+
```txt
|
| 90 |
+
Input: Unlabeled dataset $\mathcal{U}$ , number of episodes $T$ , number of (acquisitions $m$ , snapshots $S$ , SGD steps $N$ ) per episode, acquisition function $a$ , snapshot threshold steps $N_{\mathrm{thres}}$ , objective function $\mathcal{J}$ , learning rate schedule $\eta$ .
|
| 91 |
+
```
|
| 92 |
+
|
| 93 |
+
```latex
|
| 94 |
+
Output: A classifier $f(\cdot ;\theta_{*})$
|
| 95 |
+
Randomly draw $m$ samples from $\mathcal{U}$, remove them from $\mathcal{U}$, and set them as $\mathcal{D}_{\mathrm{train}}$
|
| 96 |
+
for $t = 1,\dots,T$ do
    Randomly initialize $\pmb{\theta}_0$. Set $\Theta \gets \emptyset$
    for $j = 1,\dots,N$ do
        Draw a mini-batch $\mathcal{B}$ from $\mathcal{D}_{\mathrm{train}}$
        $\pmb{\theta}_j \gets \pmb{\theta}_{j-1} - \eta(j)\nabla_{\pmb{\theta}}\mathcal{J}(\mathcal{B},\pmb{\theta}_{j-1})$
        if $j \geq N_{\mathrm{thres}} \wedge \mathrm{mod}(j - N_{\mathrm{thres}}, \lfloor \frac{N - N_{\mathrm{thres}}}{S} \rfloor) = 0$ then
            $\Theta \gets \Theta \cup \{\pmb{\theta}_j\}$
        end
    end
    if $t < T$ then
        Compute $a(\pmb{x},\Theta)$ for all $\pmb{x} \in \mathcal{U}$
        Pick top $m$ samples, remove them from $\mathcal{U}$, and append them to $\mathcal{D}_{\mathrm{train}}$
    else
        Set $\pmb{\theta}_* \gets \pmb{\theta}_N$
    end
end
|
| 97 |
+
```
|
| 98 |
+
|
| 99 |
+
```latex
|
| 100 |
+
Input: Input for Algorithm 1 + regularization parameter $\lambda$
|
| 101 |
+
Output: A classifier $f(\cdot ;\theta_{*})$
|
| 102 |
+
Randomly draw $m$ samples from $\mathcal{U}$, remove them from $\mathcal{U}$, and set them as $\mathcal{D}_{\mathrm{train}}$
|
| 103 |
+
Randomly initialize $\theta_0$
|
| 104 |
+
for $t = 1,\dots,T$ do
    // Only at the final episode!
    if $t = T$ then
        Randomly initialize $\pmb{\theta}_0$
    end
    Set $\Theta \gets \emptyset$
    for $j = 1,\dots,N$ do
        Draw a mini-batch $\mathcal{B}$ from $\mathcal{D}_{\mathrm{train}}$
        $\pmb{\theta}_j \gets \pmb{\theta}_{j-1} - \eta(j)\nabla_{\pmb{\theta}}\left(\mathcal{J}(\mathcal{B},\pmb{\theta}_{j-1}) + \lambda \mathbb{1}_{\{t > 1\}}\|\pmb{\theta}_{j-1} - \pmb{\theta}_0\|^2\right)$
        if $j \geq N_{\mathrm{thres}} \wedge \mathrm{mod}(j - N_{\mathrm{thres}}, \lfloor \frac{N - N_{\mathrm{thres}}}{S} \rfloor) = 0$ then
            $\Theta \gets \Theta \cup \{\pmb{\theta}_j\}$
        end
    end
    if $t < T$ then
        Compute $a(\pmb{x},\Theta)$ for all $\pmb{x} \in \mathcal{U}$
        Pick top $m$ samples, remove them from $\mathcal{U}$, and append them to $\mathcal{D}_{\mathrm{train}}$
        // Reuse in the next episode.
        $\pmb{\theta}_0 \gets \pmb{\theta}_N$
    else
        Set $\pmb{\theta}_* \gets \pmb{\theta}_N$
    end
end
|
| 106 |
+
```
|
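The snapshot-collection rule shared by both algorithms can be transcribed directly; a small sketch (the helper name is ours, and we assume $N > N_{\mathrm{thres}}$ and $S \leq N - N_{\mathrm{thres}}$):

```python
def snapshot_steps(N, S, N_thres):
    """Steps at which a snapshot is stored: after a burn-in of N_thres
    SGD steps, every floor((N - N_thres) / S) steps. Note the first
    snapshot lands at the burn-in step itself."""
    interval = (N - N_thres) // S
    return [j for j in range(1, N + 1)
            if j >= N_thres and (j - N_thres) % interval == 0]
```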
| 107 |
+
|
| 108 |
+
Replay buffer. In principle, we could fine-tune using only the newly acquired data from the previous episode. However, this would cause catastrophic forgetting (McCloskey and Cohen, 1989), so the acquisition function based on it may be biased towards recently acquired data. To prevent this, we adopt the idea of a replay buffer (Jung et al., 2016; Rolnick et al., 2019; Aljundi et al., 2019): we draw some portion of each mini-batch from the newly acquired data and the remaining portion from past data. We empirically find that this significantly improves the stability of the acquisition functions.
|
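A minimal sketch of composing such a replay mini-batch (the 50/50 split `new_frac` is an illustrative default, not a value from the paper):

```python
import numpy as np

def replay_batch(new_data, old_data, batch_size, new_frac=0.5, rng=None):
    """Compose a fine-tuning mini-batch: a fraction drawn from the newly
    acquired samples, the rest replayed from earlier episodes, to
    mitigate catastrophic forgetting."""
    rng = np.random.default_rng() if rng is None else rng
    n_new = min(int(batch_size * new_frac), len(new_data))
    n_old = batch_size - n_new  # assumes len(old_data) >= n_old
    batch = [new_data[i] for i in rng.choice(len(new_data), n_new, replace=False)]
    batch += [old_data[i] for i in rng.choice(len(old_data), n_old, replace=False)]
    return batch
```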
| 109 |
+
|
| 110 |
+
Regularization. Similar to the replay buffer, we regularize the fine-tuning procedure so that the parameters do not deviate too much from those of the previous episode (Kirkpatrick et al., 2017). That is, we optimize the parameter $\pmb{\theta}$ with the $\ell_2$ regularizer $\| \pmb{\theta} - \pmb{\theta}_0 \|^2$, where $\pmb{\theta}_0$ is the starting point of the fine-tuning (the parameters passed from the previous episode). We find that this regularization also improves the quality of the acquisition functions, leading to better classification accuracy in the final episode.
|
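A single fine-tuning step with this proximal penalty can be sketched as follows (a plain-NumPy illustration; in practice `grad_loss` would come from backpropagation):

```python
import numpy as np

def finetune_step(theta, theta0, grad_loss, lr, lam):
    """One SGD step on the regularized objective J(B, theta) +
    lam * ||theta - theta0||^2. The penalty's gradient is
    2 * lam * (theta - theta0), pulling theta back toward the
    previous episode's parameters theta0."""
    return theta - lr * (grad_loss + 2.0 * lam * (theta - theta0))
```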
| 111 |
+
|
| 112 |
+
Algorithm 2 summarizes the AL with fine-tuning. The parts that differ from the AL without fine-tuning are marked in blue.
|
| 113 |
+
|
| 114 |
+
# 4 RELATED WORKS
|
| 115 |
+
|
| 116 |
+
Active Learning. Based on how an unlabeled example is fed to an AL agent, AL can broadly be categorized into membership query synthesis, where the agent generates examples in the sample space itself; stream-based selective sampling, where the agent decides online whether a given input would be helpful if labeled; and pool-based AL, where the agent can access a large unlabeled pool (Settles, 2009). Pool-based AL can be further divided into uncertainty-based, diversity-based, and hybrid approaches (Ren et al., 2021). Geifman and El-Yaniv (2019) first introduced neural architecture search into AL, claiming that over-parameterized models could lead
|
| 117 |
+
|
| 118 |
+
to overfitting, especially in earlier episodes, and therefore uncertainty estimates could be inaccurate. Similarly, Munjal et al. (2022) argued that the optimal hyperparameters may vary with the number of labeled examples and applied Bayesian hyperparameter optimization (AutoML) in every episode.
|
| 119 |
+
|
| 120 |
+
Ensemble. Ensembles of neural networks consistently improve performance and are widely used in machine learning and deep learning. They also improve estimates of predictive uncertainty. DE (Lakshminarayanan et al., 2017) is one of the best-performing methods for estimating the uncertainty of deep neural networks; it trains several classifiers with the same dataset and architecture but with different seeds for the random number generator. However, the biggest limitation of ensembles is their computational cost, since multiple models must be trained (Izmailov et al., 2018).
|
| 121 |
+
|
| 122 |
+
Active Learning with Ensemble. In the context of AL, the uncertainty of an example for neural networks can be estimated using an ensemble of neural networks. Gal et al. (2017) used BALD (Houlsby et al., 2011) on MCDO (Gal and Ghahramani, 2017) and later extended it to batch settings that account for overlaps among the data points to be acquired (Kirsch et al., 2019). However, due to the expensive cost of training multiple models, most research on ensemble-based AL had been restricted to traditional ML algorithms (Melville and Mooney, 2004; Korner and Wrobel, 2006). Beluch et al. (2018) compared the performance of various acquisition functions and uncertainty estimation methods on large-scale image classification tasks, showing that DE consistently outperforms other uncertainty-based methods such as MCDO (Gal et al., 2017) or a single model. Similarly, Bayesian neural networks have been shown to be more robust and reliable than DE and MCDO in the context of AL with continual learning (Rakesh and Jain, 2021).
|
| 123 |
+
|
| 124 |
+
# 5 EXPERIMENTS
|
| 125 |
+
|
| 126 |
+
In this section, through an extensive empirical comparison on three image classification benchmarks (CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009), and Tiny ImageNet (Le and Yang, 2015)), we demonstrate the following:
|
| 127 |
+
|
| 128 |
+
- Measuring uncertainty via SE for AL is effective, comparable, or even better than the one based on DE across various choices of uncertainty-based acquisition functions.
|
| 129 |
+
- AL with $\mathrm{SE} + \mathrm{FT}$ can significantly reduce the training cost without sacrificing too much accuracy.
|
| 130 |
+
- The reason SE is effective for AL is that it builds a committee of models yielding diverse predictions. Interestingly, for the purpose of acquiring better-quality samples, the diversity of the models' predictions is more important than their accuracy.
|
| 131 |
+
|
| 132 |
+
We compare the AL algorithms along three axes: acquisition functions (VR, ME, BALD, MAR), algorithms to measure uncertainty (DE, SE, MCDO), and how the classifier is trained in the final episode (a single model via vanilla SGD, or DE). We report results with ResNet-18 (He et al., 2016). Please refer to Appendix B for more details, such as experimental protocols and hyperparameter settings. The test accuracies on CIFAR-10, CIFAR-100, and Tiny ImageNet are summarized in Table 1, Table 2, and Table 3, respectively, according to the proportion of labeled examples. Due to limited resources, we report results with four and three acquisition functions for CIFAR-10 and CIFAR-100, respectively, and with only VR for Tiny ImageNet.
|
| 133 |
+
|
| 134 |
+
# 5.1 ANALYSIS OF THE MAIN RESULTS
|
| 135 |
+
|
| 136 |
+
Effectiveness of SE for AL. Table 1 confirms that measuring uncertainty with SE acquires the samples leading to the best classification accuracies, regardless of the choice of acquisition function or how we train the final classifier. This is remarkable, considering that the runtimes of SE-based methods are not significantly longer than those of the random baselines, while DE requires significantly longer runtimes because it must train and test multiple models. It is also noteworthy that the choice of uncertainty estimation method was much more crucial than the choice of acquisition function. A similar trend is evident for CIFAR-100 and Tiny ImageNet, where SE generally produces the best classification accuracies. We set $S = 5$ for SE and DE and $S = 25$ for
|
| 137 |
+
|
| 138 |
+
Table 1: Test accuracy on CIFAR-10 according to the ratio of labeled examples
|
| 139 |
+
|
| 140 |
+
<table><tr><td rowspan="2">Acq Fn</td><td rowspan="2">Uncertainty</td><td colspan="4">A single model at the final episode</td><td colspan="4">DE at the final episode</td><td rowspan="2">Runtime</td></tr><tr><td>10%</td><td>15%</td><td>20%</td><td>25%</td><td>10%</td><td>15%</td><td>20%</td><td>25%</td></tr><tr><td rowspan="4">VR</td><td>SE</td><td>69.10±0.46</td><td>79.60±0.55</td><td>84.22±0.49</td><td>87.39±0.26</td><td>75.01</td><td>83.38</td><td>87.22</td><td>89.75</td><td>15.3 hr</td></tr><tr><td>SE + FT</td><td>69.72±0.57</td><td>79.36±0.33</td><td>84.24±0.23</td><td>87.06±0.28</td><td>74.94</td><td>83.11</td><td>87.14</td><td>89.26</td><td>2.3 hr</td></tr><tr><td>DE</td><td>66.43±0.75</td><td>77.44±0.73</td><td>82.48±0.83</td><td>86.34±0.44</td><td>73.23</td><td>81.77</td><td>85.96</td><td>88.95</td><td>60.5 hr</td></tr><tr><td>MCDO</td><td>67.73±0.76</td><td>76.98±0.45</td><td>81.87±0.52</td><td>86.05±0.20</td><td>73.12</td><td>81.52</td><td>85.70</td><td>89.10</td><td>13.3 hr</td></tr><tr><td rowspan="4">BALD</td><td>SE</td><td>70.80±0.06</td><td>79.63±0.37</td><td>84.32±0.35</td><td>87.43±0.27</td><td>76.37</td><td>83.19</td><td>86.86</td><td>89.25</td><td>15.3 hr</td></tr><tr><td>SE + FT</td><td>70.28±0.53</td><td>78.00±0.28</td><td>83.36±0.24</td><td>86.64±0.26</td><td>75.27</td><td>81.57</td><td>85.99</td><td>88.85</td><td>2.3 hr</td></tr><tr><td>DE</td><td>68.36±1.16</td><td>78.10±0.56</td><td>82.43±0.66</td><td>86.12±0.30</td><td>74.07</td><td>82.23</td><td>85.67</td><td>88.81</td><td>60.5 hr</td></tr><tr><td>MCDO</td><td>69.28±0.48</td><td>77.32±0.24</td><td>82.85±0.27</td><td>86.15±0.28</td><td>74.03</td><td>81.17</td><td>86.41</td><td>88.52</td><td>13.3 hr</td></tr><tr><td rowspan="4">ME</td><td>SE</td><td>68.39±0.84</td><td>79.11±0.58</td><td>84.19±0.20</td><td>87.37±0.26</td><td>74.79</td><td>82.99</td><td>86.82</td><td>89.65</td><td>15.3 hr</td></tr><tr><td>SE + 
FT</td><td>70.25±0.81</td><td>79.68±0.18</td><td>84.56±0.19</td><td>87.52±0.19</td><td>76.00</td><td>83.47</td><td>87.33</td><td>89.69</td><td>2.3 hr</td></tr><tr><td>DE</td><td>65.77±1.39</td><td>77.31±0.85</td><td>82.54±0.26</td><td>86.24±0.29</td><td>72.30</td><td>82.08</td><td>85.97</td><td>88.97</td><td>60.5 hr</td></tr><tr><td>MCDO</td><td>67.96±0.89</td><td>77.67±0.67</td><td>82.70±0.68</td><td>86.79±0.28</td><td>74.43</td><td>81.89</td><td>85.94</td><td>89.31</td><td>13.3 hr</td></tr><tr><td rowspan="4">MAR</td><td>SE</td><td>70.89±0.32</td><td>79.12±0.57</td><td>84.03±0.27</td><td>87.49±0.20</td><td>76.34</td><td>82.80</td><td>86.82</td><td>89.53</td><td>15.3 hr</td></tr><tr><td>SE + FT</td><td>73.26±0.46</td><td>81.09±0.27</td><td>85.43±0.17</td><td>87.87±0.17</td><td>77.95</td><td>83.93</td><td>87.88</td><td>89.70</td><td>2.3 hr</td></tr><tr><td>DE</td><td>70.48±0.59</td><td>79.79±0.47</td><td>83.48±0.29</td><td>87.03±0.39</td><td>76.49</td><td>83.47</td><td>86.78</td><td>89.51</td><td>60.5 hr</td></tr><tr><td>MCDO</td><td>71.82±0.32</td><td>79.25±0.44</td><td>82.75±0.39</td><td>85.19±0.19</td><td>75.77</td><td>82.15</td><td>84.98</td><td>87.32</td><td>13.3 hr</td></tr><tr><td>Random</td><td></td><td>69.88±0.25</td><td>78.08±0.37</td><td>82.44±0.38</td><td>84.58±0.09</td><td>74.85</td><td>81.32</td><td>84.95</td><td>86.92</td><td>12.1 hr</td></tr></table>
Table 2: Test accuracy on CIFAR-100 according to the ratio of labeled examples
<table><tr><td rowspan="2">Acq Fn</td><td rowspan="2">Uncertainty</td><td colspan="4">A single model at the final episode</td><td colspan="4">DE at the final episode</td><td rowspan="2">Runtime</td></tr><tr><td>18%</td><td>22%</td><td>26%</td><td>30%</td><td>18%</td><td>22%</td><td>26%</td><td>30%</td></tr><tr><td rowspan="4">VR</td><td>SE</td><td>58.58±0.19</td><td>62.75±0.40</td><td>65.77±0.27</td><td>68.02±0.24</td><td>64.02</td><td>68.27</td><td>70.87</td><td>73.37</td><td>4.7 hr</td></tr><tr><td>SE + FT</td><td>59.04±0.27</td><td>62.35±0.15</td><td>65.21±0.15</td><td>67.27±0.29</td><td>64.06</td><td>67.15</td><td>69.53</td><td>71.71</td><td>1.1 hr</td></tr><tr><td>DE</td><td>57.22±0.39</td><td>61.73±0.16</td><td>64.79±0.31</td><td>67.41±0.26</td><td>63.35</td><td>67.65</td><td>70.55</td><td>72.71</td><td>18.5 hr</td></tr><tr><td>MCDO</td><td>58.30±0.25</td><td>62.49±0.25</td><td>65.00±0.17</td><td>67.70±0.15</td><td>63.95</td><td>68.18</td><td>70.39</td><td>72.91</td><td>4.5 hr</td></tr><tr><td rowspan="4">BALD</td><td>SE</td><td>57.13±0.52</td><td>61.10±0.42</td><td>63.85±0.39</td><td>66.05±0.36</td><td>61.86</td><td>65.51</td><td>68.39</td><td>70.63</td><td>4.7 hr</td></tr><tr><td>SE + FT</td><td>57.82±0.36</td><td>61.46±0.28</td><td>63.45±0.11</td><td>65.29±0.26</td><td>62.55</td><td>66.21</td><td>68.30</td><td>69.96</td><td>1.1 hr</td></tr><tr><td>DE</td><td>58.25±0.42</td><td>62.26±0.32</td><td>65.40±0.21</td><td>67.40±0.21</td><td>63.28</td><td>67.04</td><td>70.27</td><td>72.18</td><td>18.5 hr</td></tr><tr><td>MCDO</td><td>58.03±0.29</td><td>62.28±0.27</td><td>65.06±0.24</td><td>67.23±0.39</td><td>62.84</td><td>67.07</td><td>69.66</td><td>71.81</td><td>4.5 hr</td></tr><tr><td rowspan="4">ME</td><td>SE</td><td>57.21±0.28</td><td>61.84±0.38</td><td>64.61±0.40</td><td>67.08±0.25</td><td>62.73</td><td>67.73</td><td>70.36</td><td>72.24</td><td>4.7 hr</td></tr><tr><td>SE + 
FT</td><td>58.92±0.25</td><td>62.41±0.26</td><td>65.04±0.22</td><td>67.08±0.33</td><td>64.11</td><td>67.53</td><td>69.91</td><td>71.43</td><td>1.1 hr</td></tr><tr><td>DE</td><td>56.40±0.44</td><td>61.37±0.48</td><td>63.95±0.32</td><td>67.03±0.45</td><td>62.77</td><td>67.13</td><td>69.83</td><td>72.65</td><td>18.5 hr</td></tr><tr><td>MCDO</td><td>56.90±0.38</td><td>61.28±0.55</td><td>64.89±0.22</td><td>67.05±0.26</td><td>62.39</td><td>67.10</td><td>70.27</td><td>72.57</td><td>4.5 hr</td></tr><tr><td>Random</td><td></td><td>57.55±0.52</td><td>61.82±0.13</td><td>64.46±0.37</td><td>66.06±0.27</td><td>62.47</td><td>66.10</td><td>68.93</td><td>70.55</td><td>3.7 hr</td></tr></table>
MCDO. The reported means and standard deviations were computed over five trials for re-training a single model and over a single trial for DE.
Effectiveness of fine-tuning. Tables 1 to 3 compare AL with $\mathrm{SE} + \mathrm{FT}$ (Algorithm 2) to the baselines. Somewhat surprisingly, $\mathrm{SE} + \mathrm{FT}$ achieved comparable or even better test accuracies with much shorter runtimes. Fig. 1 shows the progression of test accuracy over the episodes of the $\mathrm{SE} + \mathrm{FT}$ procedure. The intermediate models, which are used only for acquisition, exhibit lower accuracies, as expected, since they are fine-tuned on fewer examples for fewer epochs. However, at the end of the final episode, where a new classifier is trained from scratch, the test accuracy catches up with that of vanilla AL, indicating that even though the intermediate classifiers are inferior, the acquired samples are good enough to obtain a decent classifier at the final episode. We set $S = 5$ for $\mathrm{SE} + \mathrm{FT}$, and the reported means and standard deviations were computed over five trials for re-training a single model and over a single trial for DE.
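The procedure above can be summarized schematically. The sketch below is our paraphrase of Algorithm 2, not the paper's implementation: every helper callable (`snapshot_ensemble`, `acquire`, `finetune`, `train`) is a placeholder for the corresponding step.

```python
def al_with_se_ft(model, labeled, unlabeled, acquire, train, finetune,
                  snapshot_ensemble, n_episodes, budget):
    """Schematic AL loop with snapshot ensembles and fine-tuning.

    Intermediate models are only fine-tuned across episodes; a classifier
    is trained from scratch exactly once, after the final episode.
    """
    for _ in range(n_episodes):
        # Collect S snapshots from a single training trajectory (SE).
        snapshots = snapshot_ensemble(model, labeled)
        # Select a batch of unlabeled examples, e.g. those with the highest VR.
        batch = acquire(snapshots, unlabeled, budget)
        labeled, unlabeled = labeled | batch, unlabeled - batch
        # Keep the single model and fine-tune it instead of re-initializing.
        model = finetune(model, labeled)
    # Final episode: train a fresh classifier from scratch on all labels.
    return train(labeled)
```

With dummy callables, two episodes of budget 1 grow a labeled set from one to three examples, illustrating the control flow only.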
AL with pretrained models. As one can see from Table 3, the performance of all methods on Tiny ImageNet is generally poor, presumably due to the complexity of the dataset. We therefore employed models pretrained on ImageNet (Deng et al., 2009) and examined whether the AL algorithms benefit from fine-tuning pretrained backbones. We evaluated ResNet-50 and Vision Transformer (Dosovitskiy et al., 2021) backbones and compared SE-based and DE-based AL with the VR acquisition function. For both backbones, SE significantly outperformed DE, especially with a small number of acquisitions. We
Table 3: Test accuracy on Tiny ImageNet according to the ratio of labeled examples
<table><tr><td>Acq Fn</td><td>Uncertainty</td><td colspan="4">A single model at the final episode</td><td colspan="4">DE at the final episode</td><td></td></tr><tr><td colspan="2">ResNet-18 from scratch</td><td>14%</td><td>15%</td><td>16%</td><td>17%</td><td>14%</td><td>15%</td><td>16%</td><td>17%</td><td>Runtime</td></tr><tr><td rowspan="4">VR</td><td>SE</td><td>30.60±0.32</td><td>31.28±0.16</td><td>32.38±0.47</td><td>33.37±0.24</td><td>36.23</td><td>37.49</td><td>38.58</td><td>39.79</td><td>14.0 hr</td></tr><tr><td>SE + FT</td><td>30.86±0.41</td><td>31.77±0.39</td><td>32.96±0.26</td><td>33.74±0.31</td><td>36.32</td><td>37.13</td><td>38.19</td><td>39.01</td><td>2.3 hr</td></tr><tr><td>DE</td><td>30.05±0.21</td><td>31.20±0.43</td><td>31.63±0.28</td><td>32.81±0.65</td><td>35.83</td><td>37.17</td><td>37.97</td><td>39.34</td><td>26.1 hr</td></tr><tr><td>MCDO</td><td>29.48±0.30</td><td>31.40±0.89</td><td>32.35±0.33</td><td>33.32±0.68</td><td>35.12</td><td>37.33</td><td>38.16</td><td>38.94</td><td>14.8 hr</td></tr><tr><td colspan="2">Random</td><td>28.37±0.45</td><td>28.99±0.44</td><td>29.04±0.12</td><td>29.19±0.39</td><td>32.17</td><td>33.23</td><td>32.94</td><td>32.90</td><td>8.5 hr</td></tr><tr><td colspan="2">ResNet-50 pretrained</td><td>3%</td><td>5%</td><td>7%</td><td>10%</td><td>3%</td><td>5%</td><td>7%</td><td>10%</td><td>Runtime</td></tr><tr><td rowspan="2">VR</td><td>SE</td><td>60.89±0.40</td><td>66.46±0.30</td><td>69.71±0.42</td><td>72.31±0.14</td><td>62.72</td><td>68.59</td><td>71.01</td><td>73.85</td><td>3.2 hr</td></tr><tr><td>DE</td><td>59.88±0.21</td><td>61.81±0.43</td><td>68.21±0.28</td><td>70.44±0.65</td><td>62.19</td><td>67.14</td><td>70.18</td><td>73.23</td><td>5.8 hr</td></tr><tr><td colspan="2">Random</td><td>61.82±0.34</td><td>64.11±0.22</td><td>68.76±0.07</td><td>70.75±0.16</td><td>63.40</td><td>67.59</td><td>69.64</td><td>72.09</td><td>1.8 hr</td></tr><tr><td colspan="2">ViT-Base 
pretrained</td><td>3%</td><td>5%</td><td>7%</td><td>10%</td><td>3%</td><td>5%</td><td>7%</td><td>10%</td><td>Runtime</td></tr><tr><td rowspan="2">VR</td><td>SE</td><td>77.12±1.10</td><td>80.68±2.78</td><td>85.31±0.74</td><td>86.39±0.14</td><td>80.51</td><td>85.81</td><td>86.95</td><td>88.73</td><td>7.1 hr</td></tr><tr><td>DE</td><td>69.03±5.47</td><td>73.20±5.24</td><td>82.83±1.32</td><td>86.17±0.26</td><td>77.01</td><td>75.92</td><td>85.86</td><td>88.06</td><td>14.9 hr</td></tr><tr><td colspan="2">Random</td><td>71.43±1.03</td><td>75.98±2.59</td><td>82.20±0.57</td><td>84.74±0.92</td><td>78.73</td><td>84.50</td><td>86.83</td><td>86.87</td><td>4.9 hr</td></tr></table>

Figure 1: Results of vanilla AL (gray) and fine-tuning (FT) at specific episodes (other colors). CIFAR-10, CIFAR-100, and Tiny ImageNet all used the ResNet-18 architecture. Note that before the final episode, the intermediate models show low accuracies.
set $S = 5$ for SE and $S = 3$ for DE, and the reported means and standard deviations were computed over three trials for re-training a single model and over a single trial for DE.
# 5.2 ANALYSIS OF THE UNCERTAINTY ESTIMATION
In this section, we analyze SE in more detail to understand why it is more effective than DE for AL. We conjecture that SE builds a committee of models whose predictions are more diverse than those of other methods within a single trajectory, and that this is key to its success in AL. To verify this, we measure the average KL-divergence and the pair-wise disagreement (Melville and Mooney, 2005) of the predictions computed from SE, DE, and MCDO. The disagreement between two class-probability vectors $f(\cdot; \pmb{\theta}^{(i)})$ and $f(\cdot; \pmb{\theta}^{(j)})$ on an example $\pmb{x}$ is calculated as
$$
d_{i,j}(\boldsymbol{x}) = \mathbb{1}_{\left\{\arg\max_{k} f_{k}\left(\boldsymbol{x}; \boldsymbol{\theta}^{(i)}\right) \neq \arg\max_{k} f_{k}\left(\boldsymbol{x}; \boldsymbol{\theta}^{(j)}\right)\right\}}. \tag{3}
$$
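Both diversity measures can be computed directly from stacked committee predictions. The NumPy sketch below (function names are ours) implements the pairwise disagreement of Eq. (3), averaged over examples and member pairs, together with an average pairwise KL-divergence:

```python
import numpy as np

def pairwise_disagreement(probs):
    """Average pairwise disagreement d_{i,j} over a committee.

    probs: array of shape (S, N, K) -- class probabilities from S
    committee members for N examples over K classes.
    """
    preds = probs.argmax(axis=-1)  # (S, N) hard labels per member
    S = preds.shape[0]
    total, pairs = 0.0, 0
    for i in range(S):
        for j in range(i + 1, S):
            total += (preds[i] != preds[j]).mean()  # Eq. (3), averaged over x
            pairs += 1
    return total / pairs

def pairwise_kl(probs, eps=1e-12):
    """Average pairwise KL-divergence KL(f_i || f_j) over ordered pairs."""
    p = np.clip(probs, eps, 1.0)
    S = p.shape[0]
    total, pairs = 0.0, 0
    for i in range(S):
        for j in range(S):
            if i == j:
                continue
            total += (p[i] * (np.log(p[i]) - np.log(p[j]))).sum(-1).mean()
            pairs += 1
    return total / pairs
```

An identical pair of members yields zero under both measures, while members whose argmax predictions differ on half the examples yield a disagreement of 0.5.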
As summarized in Fig. 2 (left), SE generally exhibits much higher KL-divergence and disagreement among its predictions than DE and MCDO. As reported in the literature (Fort et al., 2021), DE shows higher disagreement than MCDO, but still much less than SE. We believe this is mainly due to the nature of AL, where we usually work with a relatively small amount of data and fewer training steps than in typical supervised learning settings. DE parameters are collected at the end of each training run, so the models are likely to have converged to a local optimum. SE, on the other hand, collects parameter snapshots during a single training run, so some of them may not have converged to a local optimum. This degrades classification accuracy, but as we point out in Section 5.1, for the purpose of acquisition, the diversity of predictions within a single trajectory matters more than the accuracy of the individual models.
Fig. 2 (right) depicts the correlation between the VR values and the predicted probabilities of the ground-truth class, along with the distribution of VR scores. The test error rate for each bin is shown,
Figure 2: Left panel: (Left) average KL-divergence and disagreement between the predictions of different uncertainty estimation methods. (Right) Spearman's rank correlation and Pearson's correlation between VR and the predicted probability of the ground-truth class $p_{\mathrm{true}}$. Right panel: boxplots (top) and histograms (bottom) of samples binned by VR value.
Figure 3: Probability maps for the images in the upper-right corner, where blue denotes the dog class and red the cat class. The small black crosses indicate snapshots obtained from SE in the parameter space, while red crosses indicate instances that the model committee predicted incorrectly.
and the number of correctly classified examples is visualized as light gray areas. This can be interpreted as an unnormalized reliability diagram (Murphy and Winkler, 1977). For instance, for the bin with $\mathrm{VR} = 0.8$ (all members disagree), SE exhibits a test error of $72.1\%$, while DE shows only $46.7\%$. Conversely, for the bin with $\mathrm{VR} = 0.0$ (consensus), SE and DE display error rates of $6.1\%$ and $15.5\%$, respectively. Since examples with the highest VR scores are selected first for labeling, DE is more likely to query examples that the model already handles well but mistakenly flags as confusing.
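As a sketch of how these bins are formed, the variation ratio of a committee and the per-bin test error rate can be computed as below (a NumPy illustration under our reading of VR; function names are ours):

```python
import numpy as np

def variation_ratio(preds):
    """VR = 1 - (votes for the modal class) / S for each example.

    preds: (S, N) hard predictions from S committee members. With S = 5,
    VR takes values in {0.0, 0.2, 0.4, 0.6, 0.8}; 0.8 means all five
    members predict a different class, 0.0 means full consensus.
    """
    S, N = preds.shape
    vr = np.empty(N)
    for n in range(N):
        _, counts = np.unique(preds[:, n], return_counts=True)
        vr[n] = 1.0 - counts.max() / S
    return vr

def error_rate_per_bin(vr, correct):
    """Test error rate for each observed VR level -- an unnormalized
    reliability diagram in the sense of Murphy and Winkler (1977)."""
    return {v: 1.0 - correct[vr == v].mean() for v in np.unique(vr)}
```

Examples are then queried in descending order of VR, so the error rate in the top bin indicates how often high-VR queries are genuinely difficult.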
To provide a qualitative comparison, Fig. 3 shows the map of class-prediction probabilities in the parameter space for the committee constructed from SE. The committee members tend to agree on samples with low VR values and to disagree on those with high VR values, as expected. This observation underscores the ability of SE to capture parameter snapshots within the same minimum while generating diverse predictions on challenging examples through slight changes in the parameter space. This phenomenon is closely related to recent advances in mode connectivity (Garipov et al., 2018) and fast ensembling methods (Huang et al., 2017; Izmailov et al., 2018; Maddox et al., 2019), which show that wandering around a wide optimum leads to diverse yet reliable predictions. Based on this empirical evidence, we conclude that SE is effective at discovering samples with high uncertainty that are predicted to be difficult for the classifier.
# 6 CONCLUSION
In this paper, we demonstrated that estimating the uncertainty of predictions with SE works efficiently for uncertainty-based AL. Through extensive experiments on real-world image classification benchmarks, we empirically confirmed that AL with SE outperforms AL with DE or MCDO for various choices of acquisition function. We further presented a novel AL algorithm based on fine-tuning, in which we keep a single model and continuously fine-tune it instead of re-initializing the model at the beginning of every episode. The resulting algorithm achieves classification accuracies comparable to the baselines for the same number of acquired samples, with far fewer training steps. Finally, we provided a detailed analysis of the effectiveness of SE for AL and showed that SE builds model committees whose diverse predictions are useful for acquiring informative samples.
# REPRODUCIBILITY STATEMENT
We used the PyTorch (Paszke et al., 2019) library in our experiments; our algorithms are described in Algorithm 1 and Algorithm 2. In addition, all experimental details and hyperparameter configurations are recorded in Appendix B. We will provide an open-source implementation of the AL environments and our code for the SE and $\mathrm{SE} + \mathrm{FT}$ algorithms.
# ACKNOWLEDGEMENT
This work was partly supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2022-0-00184, Development and Study of AI Technologies to Inexpensively Conform to Evolving Policy on Ethics, and No.2019-0-00075, Artificial Intelligence Graduate School Program (KAIST)), and National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2021M3E5D9025030).
# REFERENCES
R. Aljundi, M. Lin, B. Goujaud, and Y. Bengio. Gradient based sample selection for online continual learning. Advances in Neural Information Processing Systems, 32, 2019. 5

A. Ashukha, A. Lyzhov, D. Molchanov, and D. Vetrov. Pitfalls of in-domain uncertainty estimation and ensembling in deep learning. In International Conference on Learning Representations (ICLR), 2020. 2

W. H. Beluch, T. Genewein, A. Nurnberger, and J. M. Kohler. The power of ensembles for active learning in image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9368-9377, 2018. 2, 6, 14, 17

D. M. Blei, A. Kucukelbir, and J. D. McAuliffe. Variational inference: a review for statisticians. Journal of the American Statistical Association, 112:859-877, 2017. 2

C. Blundell, J. Cornebise, K. Kavukcuoglu, and D. Wierstra. Weight uncertainty in neural networks. In Proceedings of The 32nd International Conference on Machine Learning (ICML 2015), 2015. 4

F. D'Angelo and V. Fortuin. Repulsive deep ensembles are Bayesian. In Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021. 4

J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: a large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2009), 2009. 7

A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby. An image is worth 16x16 words: transformers for image recognition at scale. In International Conference on Learning Representations (ICLR), 2021. 7

S. Fort, H. Hu, and B. Lakshminarayanan. Deep ensembles: a loss landscape perspective. arXiv preprint arXiv:2106.11642, 2021. 2, 4, 8

L. C. Freeman. Elementary Applied Statistics: For Students in Behavioral Science. John Wiley and Sons, 1965. 3

Y. Gal and Z. Ghahramani. Dropout as a Bayesian approximation: representing model uncertainty in deep learning. In Proceedings of The 33rd International Conference on Machine Learning (ICML 2016), 2016. 2, 6

Y. Gal, R. Islam, and Z. Ghahramani. Deep Bayesian active learning with image data. In International Conference on Machine Learning, pages 1183-1192. PMLR, 2017. 2, 4, 6

T. Garipov, P. Izmailov, D. Podoprikhin, D. Vetrov, and A. G. Wilson. Loss surfaces, mode connectivity, and fast ensembling of DNNs. In Advances in Neural Information Processing Systems 31 (NeurIPS 2018), 2018. 2, 4, 9

Y. Geifman and R. El-Yaniv. Deep active learning with a neural architecture search. Advances in Neural Information Processing Systems, 32, 2019. 5

K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016. 6

N. Houlsby, F. Huszár, Z. Ghahramani, and M. Lengyel. Bayesian active learning for classification and preference learning. arXiv preprint arXiv:1112.5745, 2011. 1, 3, 6

G. Huang, Y. Li, G. Pleiss, Z. Liu, J. E. Hopcroft, and K. Q. Weinberger. Snapshot ensembles: train 1, get M for free. In International Conference on Learning Representations (ICLR), 2017. 2, 4, 9

P. Izmailov, D. Podoprikhin, T. Garipov, D. Vetrov, and A. G. Wilson. Averaging weights leads to wider optima and better generalization. arXiv preprint arXiv:1803.05407, 2018. 6, 9, 13

H. Jung, J. Ju, M. Jung, and J. Kim. Less-forgetting learning in deep neural networks. arXiv preprint arXiv:1607.00122, 2016. 5

J. Kirkpatrick, R. Pascanu, N. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, K. Milan, J. Quan, T. Ramalho, A. Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13):3521-3526, 2017. 5

A. Kirsch, J. Van Amersfoort, and Y. Gal. BatchBALD: efficient and diverse batch acquisition for deep Bayesian active learning. Advances in Neural Information Processing Systems, 32, 2019. 6

C. Korner and S. Wrobel. Multi-class ensemble-based active learning. In European Conference on Machine Learning, pages 687-694. Springer, 2006. 6

A. Krizhevsky, G. Hinton, et al. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009. 6

B. Lakshminarayanan, A. Pritzel, and C. Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. In Advances in Neural Information Processing Systems 30 (NIPS 2017), 2017. 2, 4, 6

Y. Le and X. Yang. Tiny ImageNet visual recognition challenge. Technical report, Stanford University, 2015. 6

L. Le Folgoc, V. Baltatzis, S. Desai, A. Devaraj, S. Ellis, O. E. M. Manzanera, A. Nair, H. Qiu, J. Schnabel, and B. Glocker. Is MC dropout Bayesian? arXiv preprint arXiv:2110.04286, 2021. 2

C. Louizos, K. Ullrich, and M. Welling. Bayesian compression for deep learning. In Advances in Neural Information Processing Systems 30 (NIPS 2017), 2017. 2

W. J. Maddox, P. Izmailov, T. Garipov, D. P. Vetrov, and A. G. Wilson. A simple baseline for Bayesian uncertainty in deep learning. Advances in Neural Information Processing Systems, 32, 2019. 9

M. McCloskey and N. J. Cohen. Catastrophic interference in connectionist networks: the sequential learning problem. In Psychology of Learning and Motivation, volume 24, pages 109-165. Elsevier, 1989. 5

P. Melville and R. J. Mooney. Diverse ensembles for active learning. In Proceedings of the Twenty-First International Conference on Machine Learning, page 74, 2004. 6

P. Melville and R. J. Mooney. Creating diversity in ensembles using artificial data. Information Fusion, 6(1):99-111, 2005. 8

P. Munjal, N. Hayat, M. Hayat, J. Sourati, and S. Khan. Towards robust and reproducible active learning using neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 223-232, 2022. 6, 16

A. H. Murphy and R. L. Winkler. Reliability of subjective probability forecasts of precipitation and temperature. Journal of the Royal Statistical Society: Series C (Applied Statistics), 26(1):41-47, 1977. 9

Y. Ovadia, E. Fertig, J. Ren, Z. Nado, D. Sculley, S. Nowozin, J. Dillon, B. Lakshminarayanan, and J. Snoek. Can you trust your model's uncertainty? Evaluating predictive uncertainty under dataset shift. In Advances in Neural Information Processing Systems 32 (NeurIPS 2019), 2019. 2

A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, et al. PyTorch: an imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32, 2019. 10, 17

V. Rakesh and S. Jain. Efficacy of Bayesian neural networks in active learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2601-2609, 2021. 2, 6

P. Ren, Y. Xiao, X. Chang, P.-Y. Huang, Z. Li, B. B. Gupta, X. Chen, and X. Wang. A survey of deep active learning. ACM Computing Surveys (CSUR), 54(9):1-40, 2021. 1, 5

D. Rolnick, A. Ahuja, J. Schwarz, T. Lillicrap, and G. Wayne. Experience replay for continual learning. Advances in Neural Information Processing Systems, 32, 2019. 5

B. Settles. Active learning literature survey. Computer Sciences Technical Report 1648, University of Wisconsin-Madison, 2009. 5

L. N. Smith and N. Topin. Super-convergence: very fast training of neural networks using large learning rates. In Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications, volume 11006, pages 369-386. SPIE, 2019. 17

N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(56):1929-1958, 2014. 4

R. Wightman. PyTorch image models. https://github.com/rwightman/pytorch-image-models, 2019. 17

A. G. Wilson and P. Izmailov. Deep ensembles as approximate Bayesian inference. https://cims.nyu.edu/~andrewgw/deepensembles/, 2021. 4

D. Yoo and I. S. Kweon. Learning loss for active learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 93-102, 2019. 17
# A ADDITIONAL RESULTS AND DISCUSSIONS FOR SE
# A.1 VGG-16
Table 4: Comparison between SE and SE + FT with VGG-16 on CIFAR-10
<table><tr><td rowspan="2"></td><td colspan="4">A single model at the final episode</td><td colspan="4">DE at the final episode</td></tr><tr><td>10%</td><td>15%</td><td>20%</td><td>25%</td><td>10%</td><td>15%</td><td>20%</td><td>25%</td></tr><tr><td>SE</td><td>59.79±2.36</td><td>68.18±2.01</td><td>68.88±1.89</td><td>72.16±1.21</td><td>66.97</td><td>74.65</td><td>77.60</td><td>81.10</td></tr><tr><td>SE + FT</td><td>57.78±3.31</td><td>65.22±2.21</td><td>68.58±0.80</td><td>72.50±1.20</td><td>68.43</td><td>74.84</td><td>78.20</td><td>81.45</td></tr></table>
We also ran the fine-tuning experiments with the VGG-16 architecture (Table 4). Somewhat surprisingly, SE + FT again catches up with SE.
# A.2 HYPERPARAMETERS FOR SE
Table 5: Test accuracy on different hyperparameters for SE
<table><tr><td># snapshots S</td><td>10%</td><td>15%</td><td>20%</td><td>25%</td><td>30%</td></tr><tr><td>5</td><td>74.28±0.77</td><td>81.55±0.38</td><td>86.17±0.05</td><td>88.34±0.39</td><td>90.09±0.24</td></tr><tr><td>10</td><td>73.71±1.91</td><td>81.05±0.26</td><td>85.65±0.38</td><td>88.60±0.03</td><td>90.06±0.20</td></tr><tr><td>20</td><td>73.49±0.79</td><td>81.27±0.62</td><td>86.04±0.13</td><td>88.68±0.16</td><td>90.45±0.50</td></tr><tr><td>50</td><td>71.29±0.76</td><td>80.36±0.07</td><td>85.30±0.11</td><td>88.64±0.28</td><td>90.52±0.11</td></tr><tr><td>SE learning rate η</td><td>10%</td><td>15%</td><td>20%</td><td>25%</td><td>30%</td></tr><tr><td>0.0001</td><td>69.66±0.25</td><td>80.25±0.69</td><td>84.95±0.71</td><td>87.42±0.48</td><td>88.85±0.32</td></tr><tr><td>0.001</td><td>71.33±0.87</td><td>79.97±0.19</td><td>85.11±0.02</td><td>88.33±0.21</td><td>89.77±0.04</td></tr><tr><td>0.005</td><td>74.76±1.15</td><td>81.34±0.07</td><td>85.73±0.12</td><td>88.87±0.05</td><td>90.34±0.14</td></tr><tr><td>0.01</td><td>74.49±2.55</td><td>81.95±0.18</td><td>85.52±0.16</td><td>88.43±0.40</td><td>90.35±0.20</td></tr><tr><td>starting point Nthres</td><td>10%</td><td>15%</td><td>20%</td><td>25%</td><td>30%</td></tr><tr><td>100</td><td>78.29±0.05</td><td>82.96±0.13</td><td>84.89±0.13</td><td>85.66±0.04</td><td>86.08±0.15</td></tr><tr><td>125</td><td>78.02±0.08</td><td>82.74±0.18</td><td>85.33±0.21</td><td>86.65±0.08</td><td>87.43±0.39</td></tr><tr><td>150</td><td>74.49±2.55</td><td>81.95±0.18</td><td>85.52±0.16</td><td>88.43±0.40</td><td>90.35±0.20</td></tr><tr><td>175</td><td>74.65±1.43</td><td>82.09±0.44</td><td>85.66±0.16</td><td>88.14±0.37</td><td>89.75±0.61</td></tr><tr><td>Random</td><td>74.74±0.05</td><td>80.64±0.71</td><td>83.91±0.55</td><td>86.00±0.07</td><td>87.53±0.24</td></tr></table>
Table 5 summarizes various settings of the SE hyperparameters according to the proportion of labeled examples. Here, we compare only the test accuracies of a single model trained with vanilla SGD on the CIFAR-10 dataset with the VR acquisition function. We additionally applied Stochastic Weight Averaging (SWA) (Izmailov et al., 2018) when training from scratch at the end, in order to be less sensitive to hyperparameter settings and to compare the quality of the queried examples more effectively.
Number of snapshots $S$. One may collect more snapshots to better select examples to be labeled, since SE incurs no additional training cost. However, collecting more snapshots linearly increases the inference time for AL due to the multiple forward passes. Overall, there is no significant performance gain from increasing the number of snapshots. A larger number of snapshots yields lower performance than random acquisition at the beginning, but performance tends to improve as episodes continue. Based on these findings, one might adopt a strategy that uses a small number of snapshots at first and then increases the number of snapshots in later episodes. All results used SE with the VR acquisition function and were averaged over two trials, including those with random acquisition. We used the same hyperparameter settings described in Appendix B. Here, the total number of epochs $N$ and the SE starting epoch $N_{\text{thres}}$ are fixed to 200 and 150, respectively; the jump between snapshots varies accordingly.
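Under our reading of this setup, the $S$ snapshots are spaced evenly between the burn-in epoch $N_{\text{thres}}$ and the final epoch $N$. A small sketch (the even-spacing rule is our assumption, but it reproduces the jump sizes quoted in this appendix, e.g. a jump of 5 for $N_{\text{thres}} = 175$):

```python
def snapshot_epochs(n_total=200, n_thres=150, n_snapshots=5):
    """Epochs at which SE snapshots are taken, assuming snapshots are
    spaced evenly between N_thres (burn-in) and N (final epoch).

    Increasing n_snapshots with N and N_thres fixed shrinks the jump
    between snapshots, which is the trade-off discussed above.
    """
    jump = (n_total - n_thres) // n_snapshots
    return [n_thres + jump * (i + 1) for i in range(n_snapshots)]
```

With the defaults ($N = 200$, $N_{\text{thres}} = 150$, $S = 5$) this gives snapshots every 10 epochs, ending at the final epoch.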
SE learning rate. In Algorithm 1 and Algorithm 2, the learning rate is adjusted by the scheduler $\eta(\cdot)$, and we used a high constant learning rate while collecting snapshots, except for CIFAR-100, where training was unstable and we therefore kept the training learning rate unchanged. The high constant learning rate used when collecting snapshots contributes to diverse predictions and, consequently, to the success of SE in the AL context. The learning rate during training was fixed to 0.001. When the SE learning rate was too small ($\eta = 0.0001$), performance was inferior. However, too high a learning rate may cause the model to diverge and move to different modes or meaningless areas of the weight space, which surely degrades the acquisition quality. We also include the previous result on the CIFAR-100 dataset in Table 9, where we used $\eta = 0.005$ during SE. In our preliminary experiments, the VR acquisition function remained robust despite the decrease in the accuracy of the model's predictions, because it uses the count of agreeing members rather than the predicted probabilities, whereas the other acquisition functions did not perform well. Here, the number of snapshots $S$ and the SE starting epoch $N_{\text{thres}}$ are fixed to 5 and 150, respectively, and the total number of epochs $N$ is fixed to 200.
The starting point of SE, $N_{\mathrm{thres}}$. We tried four different starting points of SE, i.e., burn-in times $N_{\mathrm{thres}}$. This experiment shows how important it is to collect snapshots after the model has sufficiently converged, and why previous methods have failed. For example, Beluch et al. (2018) collected snapshots from the beginning of training (e.g., at epochs 40, 80, 120, 160, and 200). When snapshots were obtained before the model had sufficiently converged ($N_{\mathrm{thres}} = 100, 125$), SE performed even worse than random acquisition. Similarly, when the jump between snapshots was too small ($N_{\mathrm{thres}} = 175$, and therefore a jump of 5), performance dropped in later episodes. Here, the number of snapshots $S$ and the SE learning rate are fixed to 5 and 0.01, respectively, and the total number of epochs $N$ is fixed to 200.
# A.3 SE WITH FINE-TUNING HYPERPARAMETERS
The $\mathrm{SE} + \mathrm{FT}$ algorithm is governed mainly by two hyperparameters: the regularization coefficient and the replay buffer size.
Table 6: Test accuracy on different regularization hyperparameter $\lambda$ for FT
<table><tr><td>λ</td><td>10%</td><td>15%</td><td>20%</td><td>25%</td><td>30%</td></tr><tr><td>0.0</td><td>77.66±0.10</td><td>82.45±0.22</td><td>86.01±0.43</td><td>87.70±0.36</td><td>89.05±0.75</td></tr><tr><td>0.001</td><td>76.86±0.40</td><td>82.83±0.18</td><td>86.27±1.76</td><td>88.12±0.19</td><td>88.45±0.14</td></tr><tr><td>0.005</td><td>76.99±0.10</td><td>82.70±0.05</td><td>86.24±0.22</td><td>87.79±0.06</td><td>89.12±0.22</td></tr><tr><td>0.01</td><td>77.09±0.02</td><td>82.86±0.13</td><td>85.78±0.59</td><td>87.79±0.16</td><td>89.56±0.22</td></tr><tr><td>0.02</td><td>73.91±2.02</td><td>82.74±0.72</td><td>86.30±0.25</td><td>87.60±0.74</td><td>82.14±8.42</td></tr></table>
Regularization hyperparameter $\lambda$ . The regularization hyperparameter $\lambda$ controls the balance between staying close to the single trajectory from the previous episode and adapting the model to newly acquired samples, so it is crucial to find an appropriate $\lambda$ value for the $\ell_2$ -regularizer $\|\pmb{\theta} - \pmb{\theta}_0\|^2$ . Table 6 shows test accuracies at episodes 10, 15, 20, 25, and 30 (with 5K, 7.5K, 10K, 12.5K, and 15K labeled examples, respectively) for different $\lambda$ values on the CIFAR-10 dataset. Here, the replay buffer size was fixed at 2,500. Although the acquisition quality improved with regularization, no single $\lambda$ value stood out across all episodes. Therefore, in the experiments in Section 5, $\lambda$ was fixed to 0.01.
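A minimal sketch of the fine-tuning objective with the $\ell_2$ pull-back term, written over flattened parameter lists for illustration (the actual implementation operates on network tensors; names are ours):

```python
def finetune_loss(task_loss, theta, theta_prev, lam=0.01):
    """Fine-tuning objective: task loss plus lam * ||theta - theta_prev||^2,
    pulling the model back toward the previous episode's solution while it
    adapts to newly acquired samples."""
    reg = sum((t - t0) ** 2 for t, t0 in zip(theta, theta_prev))
    return task_loss + lam * reg

# Illustrative flattened parameter vectors: reg = (2-1)^2 = 1
print(finetune_loss(0.5, [1.0, 2.0], [1.0, 1.0], lam=0.01))
```

Setting `lam=0.0` recovers plain fine-tuning on the new data, matching the $\lambda = 0.0$ row of Table 6.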
Table 7: Test accuracy at episode 30 according to the replay buffer size on CIFAR-10
<table><tr><td>Replay buffer size</td><td>500</td><td>1,000</td><td>1,500</td><td>2,000</td><td>2,500</td><td>3,000</td></tr><tr><td>Acc (%)</td><td>87.79</td><td>88.26</td><td>88.57</td><td>89.30</td><td>89.39</td><td>88.91</td></tr></table>
Replay buffer size. For the fine-tuning process, we augment the newly acquired data with some of the data labeled in previous episodes. For CIFAR-10, we used a budget of 500 per episode and added 2,000 examples randomly sampled from the previously labeled data. For both CIFAR-100 and Tiny ImageNet, we added 1,000 randomly sampled labeled examples to the 1,000 newly acquired ones. This significantly reduced the training cost. Table 7 shows the test accuracy of the final model (trained from scratch) with 15,000 labeled examples according to the replay buffer size used in each episode.
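Index-level sketch of the fine-tuning set construction described above (function and argument names are ours):

```python
import random

def build_finetune_set(new_indices, prev_labeled_indices,
                       buffer_size=2000, seed=0):
    """Fine-tuning set: newly acquired examples plus `buffer_size` examples
    sampled without replacement from the previously labeled data."""
    rng = random.Random(seed)
    replay = rng.sample(prev_labeled_indices,
                        min(buffer_size, len(prev_labeled_indices)))
    return list(new_indices) + replay

# CIFAR-10 setting: 500-example budget plus a 2,000-example replay buffer
ft = build_finetune_set(range(500), list(range(500, 5500)))
print(len(ft))  # 2500
```

Fine-tuning on this small set, rather than on all labeled data, is what yields the large runtime reduction reported for SE + FT.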
# A.4 VISUALIZATIONS

Figure 4: Loss surface on the test set for SE (left) and DE with the same initialization (right), visualized in the parameter space when trained with the first 2,000 examples of the CIFAR-10 dataset. Contours represent test accuracy, and red points denote the weights gathered for AL.


Figure 5: More boxplots and histograms for DE (top) and SE (bottom), plotted in the same manner as Fig. 2 (right), showing the tendency as the number of labeled examples increases.
Fig. 4 shows a test accuracy map in the parameter space, with red crosses indicating the parameters collected for the ensemble. Note that the two contour plots have different color scales. With a high learning rate, SE picks snapshots around the wider optimum that are weak individually but strong together, whereas each member of DE falls into its own narrow optimum.
Similar to Fig. 2 (right), Fig. 5 shows the correlation between the VR values and the predicted probabilities of the ground-truth class, along with a histogram of VR values. In the histogram, the percentages denote the test error for each bin, and light gray areas depict the number of examples that the committee predicted incorrectly. Fig. 5 clearly illustrates that DE tends to be overconfident, and its overconfidence remains severe even as the amount of acquired data increases. In contrast, VR scores calculated with SE correspond much better to the actual error rate. Here, we trained the model on the first \{1000, 2000, 4000\} CIFAR-10 examples in the train set.
# B EXPERIMENTAL DETAILS
# B.1 BASELINE DESIGN
When conducting our experiments, we placed great importance on achieving robustness, reproducibility, and generalizability. Our experiments on the CIFAR-10 and CIFAR-100 datasets yielded average test accuracy scores of $90.15\%$ and $68.02\%$ , respectively, over five trials. We achieved these results using the VR acquisition function and acquiring $30\%$ of the labels (equivalent to 15,000 examples) under the settings outlined in Appendix B.3. In contrast, a recent survey by Munjal et al. (2022) reported baseline results of $90.87\%$ and $59.36\%$ test accuracy with $40\%$ random samples.
To facilitate reproducibility of our results, we provide not only the code and configurations but also the indices of the queried examples. Reported runtimes are based on an Ubuntu 20.04 server with an AMD Ryzen 9 5900X CPU and 64GB RAM, as well as an NVIDIA RTX 3090 GPU with 24GB VRAM. For faster training on the Tiny ImageNet dataset, we additionally employed FP16 (Automatic Mixed Precision).
# B.2 ACQUISITION QUALITY
To ensure a fair and objective comparison of acquisition quality, we devised an experiment in which the data selected by three different methods, namely SE, DE, and MCDO, were used to re-train both a single model and an ensembled model. By comparing how effective the selected data are for learning, we aimed to provide a comprehensive evaluation of each method. When multiple experiments were conducted, we carefully checked that they exhibited a similar trend; due to limited resources, we randomly selected one experiment and report the re-trained results obtained from it.
# B.3 HYPERPARAMETERS
Table 8: Summary of hyperparameters
<table><tr><td>Dataset</td><td>Architecture</td><td>Optimizer</td><td>Base lr</td><td>Momentum</td><td>Weight decay</td><td>Scheduler</td><td>SE lr</td><td>Max epoch</td><td>SE epochs</td><td># snapshots</td></tr><tr><td rowspan="2">CIFAR-10</td><td>ResNet-18</td><td>SGD</td><td>0.001</td><td>0.9</td><td>0.01</td><td>ONECYCLE</td><td>0.01</td><td>200</td><td>50</td><td>5</td></tr><tr><td>VGG-16</td><td>SGD</td><td>0.001</td><td>0.9</td><td>0.01</td><td>ONECYCLE</td><td>0.01</td><td>100</td><td>100</td><td>5</td></tr><tr><td>CIFAR-100</td><td>ResNet-18</td><td>SGD</td><td>0.001</td><td>0.9</td><td>0.005</td><td>ONECYCLE</td><td>0.01</td><td>200</td><td>50</td><td>5</td></tr><tr><td rowspan="3">Tiny ImageNet</td><td>ResNet-18</td><td>SGD</td><td>0.001</td><td>0.9</td><td>0.0001</td><td>Steps</td><td>0.01</td><td>100</td><td>50</td><td>5</td></tr><tr><td>ResNet-50</td><td>SGD</td><td>0.001</td><td>0.9</td><td>0.0001</td><td>ONECYCLE</td><td>0.05</td><td>50</td><td>25</td><td>3</td></tr><tr><td>ViT-base-224</td><td>SGD</td><td>0.001</td><td>0.9</td><td>0.0001</td><td>Constant</td><td>0.05</td><td>50</td><td>25</td><td>3</td></tr></table>
We used a standard SGD optimizer with the following hyperparameters for both the CIFAR-10 and CIFAR-100 datasets: a base learning rate of 0.001, momentum of 0.9, and weight decay of 0.01. The mini-batch size was 64 for CIFAR-10 and 128 for CIFAR-100. During SE, we raised the learning rate to 0.01 for CIFAR-10 and dropped it to 0.0001 for CIFAR-100. In our preliminary experiments, we also tried increasing the learning rate to 0.005 during SE for CIFAR-100, but the results (shown in Table 9) were not as good as those obtained with the final settings: SE with VR still outperformed DE and MCDO, but the other acquisition functions did not perform well with SE. A learning rate lower than the base learning rate helps collect snapshots yielding decent predictions when training itself is unstable, whereas a learning rate higher than the base one worked well in the other hyperparameter settings and datasets.
Table 9: Previous results on CIFAR-100
<table><tr><td rowspan="2">Acq Fn</td><td rowspan="2">Uncertainty</td><td colspan="4">A single model at the final episode</td><td colspan="4">DE at the final episode</td><td rowspan="2">Runtime</td></tr><tr><td>16%</td><td>18%</td><td>20%</td><td>22%</td><td>16%</td><td>18%</td><td>20%</td><td>22%</td></tr><tr><td rowspan="4">VR</td><td>SE</td><td>59.34±0.36</td><td>61.02±0.11</td><td>63.02±0.23</td><td>64.14±0.27</td><td>64.26</td><td>66.21</td><td>68.27</td><td>69.24</td><td>7.2 hr</td></tr><tr><td>SE + FT</td><td>59.56±0.58</td><td>61.17±0.37</td><td>62.88±0.26</td><td>64.81±0.27</td><td>64.35</td><td>66.23</td><td>67.68</td><td>69.72</td><td>0.8 hr</td></tr><tr><td>DE</td><td>58.17±0.20</td><td>60.44±0.21</td><td>62.08±0.29</td><td>63.46±0.35</td><td>63.78</td><td>65.99</td><td>67.99</td><td>69.22</td><td>28.7 hr</td></tr><tr><td>MCDO</td><td>58.81±0.34</td><td>60.62±0.26</td><td>62.38±0.26</td><td>63.50±0.17</td><td>63.23</td><td>65.09</td><td>66.80</td><td>67.69</td><td>6.2 hr</td></tr><tr><td>Random</td><td></td><td>58.93±0.20</td><td>60.58±0.29</td><td>61.95±0.24</td><td>62.80±0.44</td><td>63.63</td><td>65.09</td><td>66.68</td><td>67.82</td><td>5.7 hr</td></tr></table>
To speed up convergence and reduce the effort of finding optimal hyperparameters, we used the One Cycle learning rate scheduler (ONECYCLE) proposed by Smith and Topin (2019), setting max_lr to 0.01, for both datasets. For augmentation, we normalized images with the mean and variance of all images in the train set and applied random horizontal flips to both datasets; random cropping was additionally applied to CIFAR-100. We trained the models for a total of 200 epochs, and for SE we collected five snapshots over an additional 50 epochs (10-epoch interval).
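Schematically, ONECYCLE ramps the learning rate from max_lr/div up to max_lr and then anneals it back down. A simplified stdlib sketch of that shape (all names and defaults are ours; PyTorch's OneCycleLR additionally anneals to a much lower final value via final_div_factor):

```python
import math

def one_cycle_lr(step, total_steps, max_lr=0.01, div=25.0, pct_warmup=0.3):
    """Schematic one-cycle schedule: cosine warmup from max_lr/div up to
    max_lr over the first pct_warmup of steps, then cosine annealing back."""
    base = max_lr / div
    warm = int(total_steps * pct_warmup)
    if step < warm:  # warmup phase: rise from base to max_lr
        t = step / max(1, warm)
        return base + (max_lr - base) * (1 - math.cos(math.pi * t)) / 2
    # annealing phase: fall from max_lr back to base
    t = (step - warm) / max(1, total_steps - warm)
    return base + (max_lr - base) * (1 + math.cos(math.pi * t)) / 2
```

The single early peak gives large steps at the start of training and small steps near the end, which is what speeds up convergence in a fixed epoch budget.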
Table 10: Performance gain with ONECYCLE scheduler with 10,000 labeled examples on CIFAR-10. The differences from random acquisition are in parentheses.
<table><tr><td>Acq Fn</td><td>w/ ONECYCLE</td><td>w/o ONECYCLE</td><td>Δ</td></tr><tr><td>VR</td><td>86.0% (Δ 1.9%p)</td><td>84.1% (Δ 2.9%p)</td><td>1.9%p</td></tr><tr><td>BALD</td><td>85.4% (Δ 1.3%p)</td><td>83.4% (Δ 2.2%p)</td><td>2.0%p</td></tr><tr><td>ME</td><td>85.2% (Δ 1.1%p)</td><td>84.1% (Δ 2.9%p)</td><td>1.1%p</td></tr><tr><td>Random</td><td>84.1%</td><td>81.2%</td><td>1.9%p</td></tr></table>
We also evaluated the effect of ONECYCLE on the performance of various acquisition functions for CIFAR-10 with ResNet-18 in Table 10. Without ONECYCLE, absolute accuracies were lower, but the gains over random acquisition were larger, ranging from $2.2\%$ p to $2.9\%$ p. With ONECYCLE, every acquisition function improved in absolute accuracy, while the differences from random acquisition shrank to a range of $1.1\%$ p to $1.9\%$ p. Although the gains over random sampling were smaller with ONECYCLE than without it, we chose to report the results with ONECYCLE, as they are more relevant for practical applications with limited labeled examples. The reported accuracies are averaged over five trials.
For the Tiny ImageNet dataset with ResNet-18, we also used an SGD optimizer with momentum of 0.9 and weight decay of 0.0001. The learning rate was 0.1 for the first half of the training epochs, 0.01 until $75\%$ of the training epochs, and 0.001 for the rest; during SE, it was increased to 0.05. For augmentation, we used random crops and random horizontal flips, the de facto standard augmentation strategies in the literature. To reduce the computational overhead of ensemble-based acquisition caused by multiple forward passes over a large unlabeled pool $\mathcal{U}$ , we first randomly draw $Q$ unlabeled examples from the pool and measure their scores, following Beluch et al. (2018) and Yoo and Kweon (2019); we set $Q$ to 10,000. The total number of training epochs is 100, and for SE we collected five snapshots over an additional 50 epochs (10-epoch interval).
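The subsampled acquisition step described above (score a random subset of $Q$ pool examples, keep the top-budget ones) can be sketched as follows; the function name, `score_fn` interface, and defaults are illustrative:

```python
import random

def acquire(pool_indices, score_fn, q=10000, budget=500, seed=0):
    """Score a random subset of at most q unlabeled examples (instead of
    the whole pool) and return the `budget` indices with the highest
    acquisition scores."""
    rng = random.Random(seed)
    subset = rng.sample(pool_indices, min(q, len(pool_indices)))
    return sorted(subset, key=score_fn, reverse=True)[:budget]

# Toy score: prefer indices with the largest remainder mod 7
picked = acquire(list(range(100000)), score_fn=lambda i: i % 7,
                 q=10000, budget=5)
print(len(picked))  # 5
```

Since each forward pass over the pool is multiplied by the ensemble size, capping the scored subset at $Q$ keeps acquisition cost roughly constant as the pool grows.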
For the transfer learning experiments on Tiny ImageNet, we used pretrained weights for ResNet-50 from Torchvision (Paszke et al., 2019) and for ViT-base-224 from PyTorch Image Models (Wightman, 2019), replacing the final linear classification head. Instead of using the original image size, images were scaled up to $224 \times 224$ resolution to match the models pretrained on the ImageNet dataset. The total number of training epochs is 50, and for SE we collected five snapshots over an additional 25 epochs (5-epoch interval). Here, too, we used an SGD optimizer with momentum of 0.9 and weight decay of 0.0001. For ResNet-50, we used ONECYCLE (the same as above); for ViT-base, we used a constant learning rate without a scheduler. We again set $Q$ to 10,000. Please see Table 8 for a summary of hyperparameters.
# B.4 MODEL STRUCTURES
For all experiments above, the structure of the ResNet-18 model is slightly modified to fit $32 \times 32$ and $64 \times 64$ images, following the standard protocol:
- The kernel size of the first convolution layer (conv1) is changed to 3, and its stride to 1.
- The max pooling layer is disabled.
- For MCDO, a dropout layer with dropout rate $p = 0.5$ is added in front of the final linear classifier layer, since the original implementation has no dropout layers.
Similarly, the structure of the VGG-16 model is slightly modified as follows:
- A batch normalization layer is added after every convolution layer.
- The average pooling layer is disabled.
- No additional dropout layer is attached to the model, since its classifier already has two dropout layers with dropout rate $p = 0.5$ . These dropout layers were enabled when querying with MCDO.
These additional changes, including the MCDO dropout layers, had no effect on any result reported in Section 5, since we measured acquisition quality by re-training models of the same structure on the examples queried by each method.
2023/A Simple Yet Powerful Deep Active Learning With Snapshots Ensembles/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f82d8212bf4c0d420f3ff7f336903fe95422ddda37b4190fecc3f1162c38ccfa
size 889736
2023/A Simple Yet Powerful Deep Active Learning With Snapshots Ensembles/layout.json
ADDED
The diff for this file is too large to render.
2023/A Stable and Scalable Method for Solving Initial Value PDEs with Neural Networks/c8ffe538-d5da-49af-b4cb-8cd3b47e4ebe_content_list.json
ADDED
@@ -0,0 +1,2099 @@
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "A STABLE AND SCALABLE METHOD FOR SOLVING INITIAL VALUE PDES WITH NEURAL NETWORKS",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
173,
|
| 8 |
+
99,
|
| 9 |
+
823,
|
| 10 |
+
146
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Marc Finzi $^{1*}$ , Andres Potapczynski $^{1*}$ , Matthew Choptuik $^{2}$ , Andrew Gordon Wilson $^{1}$ \nNew York University $^{1}$ and University of British Columbia $^{2}$",
|
| 17 |
+
"bbox": [
|
| 18 |
+
179,
|
| 19 |
+
167,
|
| 20 |
+
774,
|
| 21 |
+
200
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "ABSTRACT",
|
| 28 |
+
"text_level": 1,
|
| 29 |
+
"bbox": [
|
| 30 |
+
450,
|
| 31 |
+
234,
|
| 32 |
+
547,
|
| 33 |
+
251
|
| 34 |
+
],
|
| 35 |
+
"page_idx": 0
|
| 36 |
+
},
|
| 37 |
+
{
|
| 38 |
+
"type": "text",
|
| 39 |
+
"text": "Unlike conventional grid and mesh based methods for solving partial differential equations (PDEs), neural networks have the potential to break the curse of dimensionality, providing approximate solutions to problems where using classical solvers is difficult or impossible. While global minimization of the PDE residual over the network parameters works well for boundary value problems, catastrophic forgetting impairs applicability to initial value problems (IVPs). In an alternative local-in-time approach, the optimization problem can be converted into an ordinary differential equation (ODE) on the network parameters and the solution propagated forward in time; however, we demonstrate that current methods based on this approach suffer from two key issues. First, following the ODE produces an uncontrolled growth in the conditioning of the problem, ultimately leading to unacceptably large numerical errors. Second, as the ODE methods scale cubically with the number of model parameters, they are restricted to small neural networks, significantly limiting their ability to represent intricate PDE initial conditions and solutions. Building on these insights, we develop Neural-IVP, an ODE based IVP solver which prevents the network from getting ill-conditioned and runs in time linear in the number of parameters, enabling us to evolve the dynamics of challenging PDEs with neural networks.",
|
| 40 |
+
"bbox": [
|
| 41 |
+
228,
|
| 42 |
+
267,
|
| 43 |
+
769,
|
| 44 |
+
518
|
| 45 |
+
],
|
| 46 |
+
"page_idx": 0
|
| 47 |
+
},
|
| 48 |
+
{
|
| 49 |
+
"type": "text",
|
| 50 |
+
"text": "1 INTRODUCTION",
|
| 51 |
+
"text_level": 1,
|
| 52 |
+
"bbox": [
|
| 53 |
+
173,
|
| 54 |
+
546,
|
| 55 |
+
338,
|
| 56 |
+
561
|
| 57 |
+
],
|
| 58 |
+
"page_idx": 0
|
| 59 |
+
},
|
| 60 |
+
{
|
| 61 |
+
"type": "text",
|
| 62 |
+
"text": "Partial differential equations (PDEs) are needed to describe many phenomena in the natural sciences. PDEs that model complex phenomena cannot be solved analytically and many numerical techniques are used to compute their solutions. Classical techniques such as finite differences rely on grids and provide efficient and accurate solutions when the dimensionality is low ( $d = 1,2$ ). Yet, the computational and memory costs of using grids or meshes scales exponentially with the dimension, making it extremely challenging to solve PDEs accurately in more than 3 dimensions.",
|
| 63 |
+
"bbox": [
|
| 64 |
+
169,
|
| 65 |
+
578,
|
| 66 |
+
826,
|
| 67 |
+
662
|
| 68 |
+
],
|
| 69 |
+
"page_idx": 0
|
| 70 |
+
},
|
| 71 |
+
    {
        "type": "text",
        "text": "Neural networks have shown considerable success in modeling and reconstructing functions on high-dimensional structured data such as images or text, but also for unstructured tabular data and spatial functions. Neural networks sidestep the \"curse of dimensionality\" by learning representations of the data that enable them to perform efficiently. In this respect, neural networks have similar benefits and drawbacks as Monte Carlo methods. The approximation error $\\epsilon$ converges at a rate $\\epsilon \\propto 1 / \\sqrt{n}$ from statistical fluctuations, where $n$ is the number of data points or Monte Carlo samples. Expressed inversely, we would need $n \\propto e^{2\\log 1 / \\epsilon}$ samples to get error $\\epsilon$; that is, compute grows exponentially in the number of significant digits instead of exponentially in the dimension, as it does for grids. For many problems this tradeoff is favorable, and an approximate solution is much better than no solution.",
        "bbox": [
            169,
            667,
            826,
            796
        ],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "Thus, it is natural to consider neural networks for solving PDEs whose dimensionality makes standard approaches intractable. While first investigated in Dissanayake & Phan-Thien (1994) and Lagaris et al. (1998), recent developments by Yu et al. (2018) and Sirignano & Spiliopoulos (2018) have shown that neural networks can successfully approximate the solution by forcing them to satisfy the dynamics of the PDE on collocation points in the spatio-temporal domain. In particular, the global collocation approaches have proven effective for solving boundary value problems where the neural network can successfully approximate the solution. However, for initial value problems",
        "bbox": [
            169,
            801,
            826,
            902
        ],
        "page_idx": 0
    },
    {
        "type": "header",
        "text": "Published as a conference paper at ICLR 2023",
        "bbox": [
            171,
            32,
            478,
            47
        ],
        "page_idx": 0
    },
    {
        "type": "page_footnote",
        "text": "*Equal contribution, order chosen by random coin flip. {maf820, ap6604}@nyu.edu",
        "bbox": [
            189,
            909,
            732,
            925
        ],
        "page_idx": 0
    },
    {
        "type": "page_number",
        "text": "1",
        "bbox": [
            493,
            948,
            504,
            959
        ],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "(IVPs), treating time as merely another spatial dimension results in complications for the neural network like catastrophic forgetting. Some heuristics have been developed to ameliorate this problem, such as increasing the number of collocation points as time progresses, but then the computational cost of training the neural network becomes impractical.",
        "bbox": [
            169,
            103,
            823,
            160
        ],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "Recently, Du & Zaki (2021) and Bruna et al. (2022) have provided two methods that follow a novel local-in-time approach for training neural networks to solve IVPs by updating the network parameters sequentially through time rather than by having some fixed set of parameters to model the whole spatio-temporal domain. These methods have proven successful for a variety of PDEs, but they currently suffer from two shortcomings. First, the conditioning of the linear systems required to follow the ODE on the network parameters degrades over time, leading to longer solving times and ultimately to a complete breakdown of the solution. Second, the current methodologies lack the capacity to represent difficult initial conditions and solutions, as their runtime scales cubically in the number of network parameters, limiting their ability to use large neural networks. In this work we provide a local-in-time IVP solver (Neural-IVP) that circumvents the shortcomings of Du & Zaki (2021) and Bruna et al. (2022) and thus enables us to solve challenging PDEs. In particular:",
        "bbox": [
            169,
            166,
            826,
            321
        ],
        "page_idx": 1
    },
    {
        "type": "list",
        "sub_type": "text",
        "list_items": [
            "- Leveraging fast matrix vector multiplies and preconditioned conjugate gradients, we develop an approach that scales only linearly in the number of parameters, allowing us to use considerably larger neural networks and more data.",
            "- We further improve the representational power and quality of the fit to initial conditions through the use of last layer linear solves and sinusoidal embeddings.",
            "- We show how following the parameter ODE leads the network parameters to an increasingly poorly conditioned region of the parameter space, and we show how this relates to exact and approximate parameter symmetries in the network.",
            "- Using regularization, restarts, and last layer finetuning, we are able to prevent the parameters from reaching these poorly conditioned regions, thereby stabilizing the method."
        ],
        "bbox": [
            181,
            330,
            823,
            483
        ],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "We provide a code implementation at https://github.com/mfinzi/neural-ivp.",
        "bbox": [
            169,
            492,
            784,
            508
        ],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "2 BACKGROUND",
        "text_level": 1,
        "bbox": [
            171,
            526,
            328,
            541
        ],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "Given a spatial domain $\\mathcal{X} \\subseteq \\mathbb{R}^D$, we will consider the evolution of a time-dependent function $u: \\mathcal{X} \\times [0, T] \\to \\mathbb{R}^k$ which at all times belongs to some functional space $\\mathcal{U}$ and whose dynamics are governed by",
        "bbox": [
            169,
            556,
            823,
            599
        ],
        "page_idx": 1
    },
    {
        "type": "equation",
        "text": "\n$$\n\\partial_{t} u(x, t) = \\mathcal{L}[u](x, t) \\quad \\text{for } (x, t) \\in \\mathcal{X} \\times [0, T]\n$$\n",
        "text_format": "latex",
        "bbox": [
            316,
            603,
            676,
            619
        ],
        "page_idx": 1
    },
    {
        "type": "equation",
        "text": "\n$$\nu(x, 0) = u_{0}(x) \\quad \\text{for } x \\in \\mathcal{X}\n$$\n",
        "text_format": "latex",
        "bbox": [
            333,
            622,
            593,
            637
        ],
        "page_idx": 1
    },
    {
        "type": "equation",
        "text": "\n$$\nu(x, t) = h(x, t) \\quad \\text{for } (x, t) \\in \\partial \\mathcal{X} \\times [0, T]\n$$\n",
        "text_format": "latex",
        "bbox": [
            334,
            638,
            660,
            655
        ],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "where $u_0 \\in \\mathcal{U}$ is the initial condition, $h$ is the spatial boundary condition, and $\\mathcal{L}$ is the (possibly nonlinear) operator containing spatial derivatives. We can represent PDEs with higher order derivatives in time, such as the wave equation $\\partial_t^2\\phi = \\Delta \\phi$, by reducing them to a system of first order in time equations $u \\coloneqq [\\phi, \\partial_t\\phi]$, where in this example $\\mathcal{L}[u_0, u_1] = [u_1, \\Delta u_0]$.",
        "bbox": [
            169,
            659,
            823,
            715
        ],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "Global Collocation Methods The first approaches for solving PDEs via neural networks are based on the idea of sampling uniformly on the whole spatio-temporal domain and ensuring that the neural network obeys the PDE by minimizing the PDE residual (or a proxy for it). This approach was initially proposed by Dissanayake & Phan-Thien (1994) and Lagaris et al. (1998), which used neural networks as approximate solutions. However, recent advances in automatic differentiation, compute, and neural network architecture have enabled successful applications such as the Deep Galerkin Method (Sirignano & Spiliopoulos, 2018), Deep Ritz Method (Yu et al., 2018), and PINN (Raissi et al., 2019), which have revitalized interest in using neural networks to solve PDEs.",
        "bbox": [
            169,
            720,
            826,
            834
        ],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "Learning From Simulations Not all approaches use neural networks as a basis function to represent the PDE solution. Some approaches focus on directly learning the PDE operator as in Lu et al. (2019) or Kovachki et al. (2021), where the operator can be learned from simulation. However, as these methods typically use grids, their purpose is to accelerate existing solvers rather than tackling new problems. Other approaches that do not rely on collocation points exploit specific information of elliptic and semi-linear parabolic PDEs, such as E. et al. (2017) and Han et al. (2018).",
        "bbox": [
            169,
            840,
            826,
            925
        ],
        "page_idx": 1
    },
    {
        "type": "header",
        "text": "Published as a conference paper at ICLR 2023",
        "bbox": [
            171,
            32,
            478,
            47
        ],
        "page_idx": 1
    },
    {
        "type": "page_number",
        "text": "2",
        "bbox": [
            493,
            948,
            504,
            959
        ],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "2.1 GLOBAL PDE RESIDUAL MINIMIZATION",
        "text_level": 1,
        "bbox": [
            171,
            103,
            500,
            118
        ],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "The most straightforward method for producing a neural network solution to initial value PDEs is similar to the approach used for boundary value problems: treating the temporal dimension as if it were a spatial dimension and parameterizing the solution simultaneously for all times, $u(x,t) = \\mathrm{N}_{\\theta}(x,t)$. The initial and boundary conditions can be enforced through appropriate parameterization of the network architecture (Berg & Nyström, 2018), whereas the PDE is enforced through minimization of the training objective:",
        "bbox": [
            169,
            128,
            826,
            214
        ],
        "page_idx": 2
    },
    {
        "type": "equation",
        "text": "\n$$\nS(\\theta) = \\int_{\\mathcal{X} \\times [0, T]} r_{\\theta}(x, t)^{2} \\, d\\mu(x) \\, dt = \\int_{\\mathcal{X} \\times [0, T]} \\left(\\partial_{t} u_{\\theta}(x, t) - \\mathcal{L}[u_{\\theta}](x, t)\\right)^{2} d\\mu(x) \\, dt\n$$\n",
        "text_format": "latex",
        "bbox": [
            217,
            231,
            779,
            267
        ],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "where the integral is estimated via Monte Carlo samples from a chosen distribution $\\mu$ and times $t$.",
        "bbox": [
            169,
            277,
            823,
            306
        ],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "Initial value PDEs have a local temporal structure where only values on the previous spatial slice are necessary to compute the next; however, global minimization ignores this property. Moreover, as the weights of the neural network must represent the solution simultaneously at all times, we must ensure that the neural network approximation does not forget the PDE solution learnt at earlier times (catastrophic forgetting). While Sirignano & Spiliopoulos (2018) and Sitzmann et al. (2020) take this approach, the downside of avoiding catastrophic forgetting is increased computation, spent ensuring the presence of data from previous times.",
        "bbox": [
            169,
            311,
            823,
            411
        ],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "2.2 LOCAL-IN-TIME METHODS",
        "text_level": 1,
        "bbox": [
            171,
            428,
            403,
            441
        ],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "To circumvent the inherent inefficiency of the global methods, Du & Zaki (2021) and Bruna et al. (2022) propose a local-in-time method whereby the minimization problem gets converted into an ODE that the parameters satisfy at each point in time. In this approach, the PDE solution is given by $u(x,t) = \\mathrm{N}(x,\\theta(t))$ for a neural network $\\mathrm{N}$, where the time dependence comes from the parameter vector $\\theta(t)$ rather than as an input to the network. Thus, the network only represents the solution at a single time, rather than simultaneously at all times, and subsequently $\\theta(t)$ can be recorded and no representational power or computational cost is incurred from preserving previous solutions. Assuming the PDE is one-dimensional, the PDE residual at a single time can be written as,",
        "bbox": [
            169,
            454,
            823,
            566
        ],
        "page_idx": 2
    },
    {
        "type": "equation",
        "text": "\n$$\nL(\\dot{\\theta}, t) = \\int_{\\mathcal{X}} r(x, t)^{2} d\\mu(x) = \\int_{\\mathcal{X}} \\left(\\dot{\\theta}^{\\top} \\nabla_{\\theta} \\mathrm{N}(x, \\theta(t)) - \\mathcal{L}[\\mathrm{N}](x, \\theta(t))\\right)^{2} d\\mu(x), \\tag{1}\n$$\n",
        "text_format": "latex",
        "bbox": [
            241,
            573,
            823,
            606
        ],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "since the time derivative is $\\partial_t u(x,t) = \\dot{\\theta}^\\top \\nabla_\\theta \\mathrm{N}(x,\\theta)$.",
        "bbox": [
            169,
            614,
            529,
            631
        ],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "Choosing the dynamics $\\dot{\\theta}$ of the parameters to minimize the instantaneous PDE residual error $L(\\dot{\\theta},t)$ yields the (implicitly defined) differential equation",
        "bbox": [
            169,
            638,
            823,
            669
        ],
        "page_idx": 2
    },
    {
        "type": "equation",
        "text": "\n$$\nM(\\theta)\\dot{\\theta} = F(\\theta) \\quad \\text{and} \\quad \\theta_{0} = \\arg\\min_{\\theta} \\int_{\\mathcal{X}} \\left(\\mathrm{N}(x, \\theta) - u(x, 0)\\right)^{2} d\\mu(x), \\tag{2}\n$$\n",
        "text_format": "latex",
        "bbox": [
            267,
            676,
            823,
            709
        ],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "where $M(\\theta) = \\int_{\\mathcal{X}} \\nabla_{\\theta} \\mathrm{N}(x, \\theta) \\nabla_{\\theta} \\mathrm{N}(x, \\theta)^{\\top} d\\mu(x)$ and $F(\\theta) = \\int_{\\mathcal{X}} \\nabla_{\\theta} \\mathrm{N}(x, \\theta) \\mathcal{L}[\\mathrm{N}](x, \\theta) d\\mu(x)$. Once we find $\\theta_0$ to fit the initial conditions, we have a fully specified system of differential equations, where we can advance the parameters (and therefore the solution $u(x,t) = \\mathrm{N}(x, \\theta(t))$) forward in time.",
        "bbox": [
            169,
            715,
            823,
            760
        ],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "Since both $M(\\theta)$ and $F(\\theta)$ involve integrals over space, we can estimate them with $n$ Monte Carlo samples, yielding $\\hat{M}(\\theta)$ and $\\hat{F}(\\theta)$. We then proceed to solve the linear system $\\hat{M}(\\theta)\\dot{\\theta} = \\hat{F}(\\theta)$ at each timestep for the dynamics $\\dot{\\theta}$ and feed that into an ODE integrator such as RK45 (Dormand & Prince, 1980). For systems of PDEs such as the Navier-Stokes equations, the method can be extended in a straightforward manner by replacing the outer product of gradients with the Jacobians of the multi-output network $N$: $M(\\theta) = \\int_{\\mathcal{X}} D_{\\theta}\\mathrm{N}(x,\\theta)^{\\top}D_{\\theta}\\mathrm{N}(x,\\theta)d\\mu(x)$ and likewise for $F$, which results from minimizing the norm of the PDE residual $\\int_{\\mathcal{X}} \\|r(x,t)\\|^2 d\\mu(x)$.",
        "bbox": [
            169,
            766,
            825,
            873
        ],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "Introducing some additional notation, we can write the Monte Carlo estimates $\\hat{M}$ and $\\hat{F}$ in a more illuminating way. Defining the Jacobian matrix of the network for different input points $J_{ik} = \\frac{\\partial}{\\partial\\theta_k}\\mathrm{N}(x_i,\\theta)$, and defining $f$ as $f_{i} = \\mathcal{L}[\\mathrm{N}](x_{i},\\theta)$, the PDE residual estimated via the $n$-sample",
        "bbox": [
            169,
            881,
            825,
            928
        ],
        "page_idx": 2
    },
    {
        "type": "header",
        "text": "Published as a conference paper at ICLR 2023",
        "bbox": [
            171,
            32,
            478,
            47
        ],
        "page_idx": 2
    },
    {
        "type": "page_number",
        "text": "3",
        "bbox": [
            493,
            948,
            504,
            959
        ],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "Monte Carlo estimator is just the least squares objective $\\hat{L}(\\dot{\\theta},t) = \\frac{1}{n}\\|J\\dot{\\theta} - f\\|^2$. The matrices $\\hat{M}(\\theta) = \\frac{1}{n} J^{\\top}J$ and $\\hat{F}(\\theta) = \\frac{1}{n} J^{\\top}f$ reveal that the ODE dynamics are just the familiar least squares solution $\\dot{\\theta} = J^{\\dagger}f = (J^{\\top}J)^{-1}J^{\\top}f$.",
        "bbox": [
            169,
            102,
            823,
            156
        ],
        "page_idx": 3
    },
    {
        "type": "text",
        "text": "3 DIAGNOSING LOCAL-IN-TIME NEURAL PDE SOLVERS",
        "text_level": 1,
        "bbox": [
            171,
            174,
            656,
            191
        ],
        "page_idx": 3
    },
    {
        "type": "text",
        "text": "The success of local-in-time methods hinges on making the PDE residual $L(\\dot{\\theta}, t)$ close to 0 as we follow the dynamics of $\\dot{\\theta} = \\hat{M}(\\theta)^{-1}\\hat{F}(\\theta)$. The lower the local error, the lower the global PDE residual $S(\\theta) = \\int L(\\dot{\\theta}, t) dt$ and the more faithfully the PDE is satisfied.",
        "bbox": [
            169,
            207,
            823,
            255
        ],
        "page_idx": 3
    },
    {
        "type": "text",
        "text": "Even though $\\dot{\\theta}$ directly minimizes $L(\\dot{\\theta}, t)$, the PDE residual is not necessarily small; instead, the value of $r(x, t)$ depends nontrivially on the network architecture and the values of the parameters themselves. While local-in-time methods have been applied successfully in several cases, there are harder problems where they fail unexpectedly, producing unacceptably large errors on second-order PDEs or problems with complex initial conditions. In the following section, we identify the reasons for these failures.",
        "bbox": [
            169,
            263,
            825,
            348
        ],
        "page_idx": 3
    },
    {
        "type": "text",
        "text": "3.1 REPRESENTATIONAL POWER",
        "text_level": 1,
        "bbox": [
            171,
            366,
            413,
            380
        ],
        "page_idx": 3
    },
    {
        "type": "text",
        "text": "The simplest reason why local-in-time methods fail is that the neural networks do not have enough representational power. Having enough degrees of freedom and inductive biases in the network matters for being able to find a $\\dot{\\theta}$ in the span of $J$ which can match the spatial derivatives. The spatial derivatives $\\mathcal{L}[N](x,\\theta)$ of the PDE must be expressible (or nearly expressible) as a linear combination of the derivatives with respect to each parameter, $\\frac{\\partial}{\\partial\\theta_k} N(x,\\theta)$, which is a different task than a neural network is typically designed for. The easiest intervention is to simply increase the number of parameters $p$, yielding additional degrees of freedom.",
        "bbox": [
            169,
            392,
            823,
            496
        ],
        "page_idx": 3
    },
    {
        "type": "text",
        "text": "Increasing the number of parameters also improves the ability of the network to reconstruct the initial conditions, which can have knock-on effects on the evolution later in time. However, increasing the number of parameters and evolving through time following Du & Zaki (2021) and Bruna et al. (2022) quickly leads to intractable computations. The linear solves used to define the ODE dynamics require $O(p^3 + p^2 n)$ time and use $O(p^2 + pn)$ memory, where $p$ is the number of neural network parameters and $n$ is the number of Monte Carlo samples used to estimate the linear system. Therefore networks with more than around $p = 5,000$ parameters cannot be used. Neural networks of these sizes are extremely small compared to modern networks, which often have millions or even billions of parameters. In section 4.1, we show how our Neural-IVP method resolves this limitation, allowing us to use large neural networks with many parameters.",
        "bbox": [
            169,
            502,
            825,
            642
        ],
        "page_idx": 3
    },
    {
        "type": "text",
        "text": "3.2 STABILITY AND CONDITIONING",
        "text_level": 1,
        "bbox": [
            171,
            660,
            437,
            674
        ],
        "page_idx": 3
    },
    {
        "type": "text",
        "text": "In addition to lacking sufficient representational power, there are more subtle reasons why the local-in-time methods fail.",
        "bbox": [
            169,
            686,
            823,
            715
        ],
        "page_idx": 3
    },
    {
        "type": "text",
        "text": "Even when the solution is exactly representable, a continuous path $\\theta^{*}(t)$ between the solutions may not exist. That is, even if a network is able to faithfully express the solution at a given time $u(x,t) = N(x,\\theta^{*})$ for some value of $\\theta^{*}$ in the parameter space, there may not exist a continuous path between these $\\theta^{*}$ for different times. This fact is related to the implicit function theorem. With the multi-output function $H_{i}(\\theta) = N(x_{i},\\theta) - u(x_{i},t)$, even if we wish to satisfy $H_{i} = 0$ only at a finite collection of points $x_{i}$, the existence of a continuous path $\\theta^{*}(t) = g(t)$ in general requires that the Jacobian matrix $D_{\\theta}H = J$ be invertible. Unfortunately, the Jacobian is not invertible because there exist singular directions and nearly singular directions in the parameter space, as we now argue.",
        "bbox": [
            169,
            720,
            823,
            834
        ],
        "page_idx": 3
    },
    {
        "type": "text",
        "text": "There exist singular directions of $J$ and $M$ as a result of symmetries in the network. Each continuous symmetry of the network will produce a right singular vector of $J$, regardless of how many points $n$ are used in the Monte Carlo estimate. Here we define a continuous symmetry as a parameterized transformation of the parameters $T_{\\alpha} : \\mathbb{R}^{p} \\to \\mathbb{R}^{p}$ defined for $\\alpha \\in (-\\epsilon, \\epsilon)$, in a neighborhood of the identity $T_{0} = \\mathrm{Id}$, and $T_{\\alpha}$ has a nonzero derivative with respect to $\\alpha$ at the identity. For convenience, consider reparametrizing $\\alpha$ to be unit speed so that $\\|\\partial_{\\alpha}T_{\\alpha}(\\theta)\\| = 1$.",
        "bbox": [
            169,
            839,
            825,
            925
        ],
        "page_idx": 3
    },
    {
        "type": "header",
        "text": "Published as a conference paper at ICLR 2023",
        "bbox": [
            171,
            32,
            478,
            47
        ],
        "page_idx": 3
    },
    {
        "type": "page_number",
        "text": "4",
        "bbox": [
            493,
            948,
            504,
            959
        ],
        "page_idx": 3
    },
    {
        "type": "image",
        "img_path": "images/5fec05c0453e5ecdcc31d549aa205a87d35a165b7248960f64db918013b3e72e.jpg",
        "image_caption": [
            "Figure 1: The conditioning of the linear systems needed to solve the ODE on the network parameters increases for challenging PDEs like the wave equation, but not for others like the advection equation. (Left): Eigenvalue spectrum of $M(\\theta)$ matrix at initialization. (Middle): Growth of largest eigenvalue of $M(\\theta)$ over time. (Right): Number of preconditioned CG iterations required to solve the linear system to a specified tolerance of $\\epsilon = 10^{-7}$."
        ],
        "image_footnote": [],
        "bbox": [
            189,
            104,
            379,
            247
        ],
        "page_idx": 4
    },
    {
        "type": "image",
        "img_path": "images/29c523bd2ca955eb6498d7576bdd158c6a37f13e542722cd50b6afb4b355bd88.jpg",
        "image_caption": [],
        "image_footnote": [],
        "bbox": [
            401,
            103,
            591,
            247
        ],
        "page_idx": 4
    },
    {
        "type": "image",
        "img_path": "images/b258eda4c7fec55fe6045e74a344f31e88fe90144d25143fe80cee466cf1a2ec.jpg",
        "image_caption": [],
        "image_footnote": [],
        "bbox": [
            616,
            103,
            803,
            247
        ],
        "page_idx": 4
    },
    {
        "type": "text",
        "text": "Theorem 1. Suppose the network $\\mathrm{N}(x,\\theta)$ has a continuous parameter symmetry $T_{\\alpha}$ which preserves the outputs of the function: $\\forall \\theta, x: \\mathrm{N}(x,T_{\\alpha}(\\theta)) = \\mathrm{N}(x,\\theta)$, then",
        "bbox": [
            169,
            349,
            823,
            380
        ],
        "page_idx": 4
    },
    {
        "type": "equation",
        "text": "\n$$\nv(\\theta) = \\left. \\partial_{\\alpha} T_{\\alpha}(\\theta) \\right|_{\\alpha = 0} \\tag{3}\n$$\n",
        "text_format": "latex",
        "bbox": [
            424,
            383,
            823,
            402
        ],
        "page_idx": 4
    },
    {
        "type": "text",
        "text": "is a singular vector of both $J$ and $M$.",
        "bbox": [
            169,
            405,
            421,
            420
        ],
        "page_idx": 4
    },
    {
        "type": "text",
        "text": "Proof: Taking the derivative with respect to $\\alpha$ at $0$, from the chain rule we have: $0 = \\partial_{\\alpha}\\big|_{0}N(x,T_{\\alpha}(\\theta)) = \\nabla_{\\theta}N(x,\\theta)^{\\top}\\partial_{\\alpha}T_{\\alpha}(\\theta)\\big|_{\\alpha = 0}$. As this expression holds for all $x$, $J(\\theta)v(\\theta) = 0$ and $M(\\theta)v(\\theta) = 0$.",
        "bbox": [
            169,
            426,
            825,
            470
        ],
        "page_idx": 4
    },
    {
        "type": "text",
        "text": "As Dinh et al. (2017) demonstrated, multilayer perceptrons using ReLU nonlinearities have a high-dimensional group of exact parameter symmetries corresponding to a rescaling of weights in alternate layers. Furthermore, even replacing ReLUs with alternative activation functions such as Swish (Ramachandran et al., 2017) does not solve the problem, as these will have approximate symmetries which produce highly ill-conditioned $M$ and $J$ matrices.",
        "bbox": [
            169,
            479,
            823,
            551
        ],
        "page_idx": 4
    },
    {
        "type": "text",
        "text": "Theorem 2. An approximate symmetry $\\forall x:\\|N(x,T_{\\alpha}(\\theta)) - N(x,\\theta)\\|^{2}\\leq \\epsilon \\alpha^{2}$ will produce nearly singular vectors $v(\\theta) = \\partial_{\\alpha}T_{\\alpha}(\\theta)\\big|_{\\alpha = 0}$ for which",
        "bbox": [
            169,
            553,
            823,
            585
        ],
        "page_idx": 4
    },
    {
        "type": "equation",
        "text": "\n$$\nv^{\\top} M v < \\epsilon, \\tag{4}\n$$\n",
        "text_format": "latex",
        "bbox": [
            455,
            590,
            823,
            608
        ],
        "page_idx": 4
    },
    {
        "type": "text",
        "text": "and therefore the smallest eigenvalue of $M$ is less than $\\epsilon$.",
        "bbox": [
            169,
            612,
            545,
            627
        ],
        "page_idx": 4
    },
    {
        "type": "text",
        "text": "Proof: See Appendix A",
        "text_level": 1,
        "bbox": [
            169,
            633,
            328,
            648
        ],
        "page_idx": 4
    },
    {
        "type": "text",
        "text": "Additionally, the rank of the Monte Carlo estimate $\\hat{M} = \\frac{1}{n} J^{\\top}J$ using $n$ samples is at most $n$, and there is a $p - n$ dimensional manifold of parameters which match the function values at the sample points $\\forall i = 1,\\dots,n:\\mathrm{N}(x_i,\\theta) = \\mathrm{N}_i$ rather than over the whole domain. In Figure 1 (left), we show empirically that the eigenspectrum of $\\hat{M}$ is indeed rank-deficient and highly ill-conditioned, with a long tail of small eigenvalues.",
        "bbox": [
            169,
            659,
            823,
            734
        ],
        "page_idx": 4
    },
    {
        "type": "text",
        "text": "Hence some form of regularization such as solving $[M(\\theta) + \\mu I]\\dot{\\theta} = F(\\theta)$ is necessary to have a bounded condition number, $\\kappa(M(\\theta) + \\mu I) = (\\lambda_1 + \\mu) / (0 + \\mu)$.",
        "bbox": [
            169,
            741,
            823,
            772
        ],
        "page_idx": 4
    },
{
"type": "text",
"text": "Furthermore, as seen in Figure 1 (middle), the conditioning of the linear system (equation 2) deteriorates over time. This deterioration worsens in more challenging PDEs like the second-order wave equation, in contrast to the easier advection equation. Even when using a dense solver, the quality of the solves will degrade over time, leading to increased error in the solution. When using an iterative method like CG, shown in Figure 1 (right), the runtime of the method will increase during the evolution and eventually fail to meet the desired error tolerance. In contrast, if we instead take snapshots of the solution at different times and fit them directly with SGD, we find that the conditioning is much better, as shown by the green curve in Figure 1.",
"bbox": [169, 777, 825, 888],
"page_idx": 4
},
{
"type": "text",
"text": "To make sense of this observation, we can learn from the way that neural networks are typically used: in conjunction with stochastic gradient descent. In general, when training neural networks with",
"bbox": [169, 895, 825, 925],
"page_idx": 4
},
{
"type": "header",
"text": "Published as a conference paper at ICLR 2023",
"bbox": [171, 32, 478, 47],
"page_idx": 4
},
{
"type": "page_number",
"text": "5",
"bbox": [493, 948, 503, 959],
"page_idx": 4
},
{
"type": "text",
"text": "SGD, we must choose the initialization carefully to make the problem well-conditioned (Mishkin & Matas, 2015). Many initializations, such as setting the scale of the parameters with drastically different values between layers, will lead either to diverging solutions or no progress on the objective. With the right initialization and a good choice of hyperparameters, the optimization trajectory will stay in a well-conditioned region of the parameter space. However, many bad regions of the parameter space exist, and a number of architectural improvements in deep learning such as batch normalization (Ioffe & Szegedy, 2015) and skip connections (He et al., 2016) were designed with the express purpose of improving the conditioning of optimization while leaving the expressive power unchanged. Unfortunately, while SGD optimizers stay in well-conditioned regions of the parameter space, equation (2) does not.",
"bbox": [169, 103, 826, 243],
"page_idx": 5
},
{
"type": "text",
"text": "To see this, consider a singular vector $v$ of $J$ which has a very small singular value $\\sigma_v$ (due to an approximate symmetry or otherwise). Analyzing the solution, we see that the projection of the parameters along the singular vector evolves like $v^\\top \\dot{\\theta} = v^\\top J^\\dagger f = \\sigma_v^{-1} u_v^\\top f$, where $Jv = \\sigma_v u_v$. A small singular value leads to a large change in that subspace according to the evolution of the ODE. Considering the approximate rescaling symmetry for swish networks, this points the dynamics directly toward amplifying the difference between the sizes of weights in neighboring layers and worsening the conditioning, precisely what was identified as sharp minima by Dinh et al. (2017). This problem can be circumvented by our method, as described in section 4.2.",
"bbox": [169, 250, 826, 363],
"page_idx": 5
},
{
"type": "text",
"text": "4 NEURAL IVP",
"text_level": 1,
"bbox": [171, 383, 320, 398],
"page_idx": 5
},
{
"type": "text",
"text": "Drawing on these observations, we introduce Neural IVP, a method for solving initial value PDEs that resolves the scalability and numerical stability issues limiting current local-in-time methods. We also enhance Neural IVP through projections to different regions of the parameter space, finetuning procedures, and mechanisms to increase the representation power of the neural networks.",
"bbox": [169, 415, 823, 472],
"page_idx": 5
},
{
"type": "text",
"text": "4.1 IMPROVING REPRESENTATIONAL POWER AND SCALABILITY",
"text_level": 1,
"bbox": [171, 488, 633, 502],
"page_idx": 5
},
{
"type": "text",
"text": "To evaluate the representational power of the network, we examine fitting a complex initial condition typically present in the later evolution of an entangled system, and which will contain components at many different frequencies. We fit a sum of two Gaussians of different sizes modulated by frequencies pointing in different directions in 3 dimensions within the cube $[-1,1]^3$, with the slice through $z = 0$ shown in Figure 2 (left). The function is defined as the sum of two Gaussian-like wave packets, modulated by spatial frequencies pointing in different directions: $u_0(x) = 30(2\\pi s_1^2)^{-1}e^{-||v||_2^2 / (2s_1^2)}\\cos(2\\pi f x^\\top \\hat{n}) + 24(2\\pi s_2^2)^{-1}e^{-||w||_2^2 / (2s_2^2)}\\cos(2\\pi f x^\\top \\hat{m})$, where $v = x - 0.5(\\hat{x}_2 + \\hat{x}_3)$, $w = x + (\\hat{x}_1 + \\hat{x}_2 + \\hat{x}_3)/6$, and $\\hat{n} = (\\hat{x}_1 + \\hat{x}_2)/\\sqrt{2}$, $\\hat{m} = 2^{-1}(\\hat{x}_2 + x_1\\hat{x}_1/3)$.",
"bbox": [169, 513, 823, 648],
"page_idx": 5
},
{
"type": "text",
"text": "By changing the frequency parameter $f$, we can investigate how well the network is able to fit fine-scale details. We introduce three substantial improvements to the models over those used in the Evolutional Deep Neural Networks (EDNN) of Du & Zaki (2021) in order to improve their representational power, and we evaluate the impact of these changes in Figure 2 (middle), starting with the exact 4-layer, 30-hidden-unit tanh MLP architecture used in EDNN.",
"bbox": [169, 652, 826, 723],
"page_idx": 5
},
{
"type": "text",
"text": "Increasing Number of Parameters and Scalability As shown in Figure 2 (right), the number of network parameters has a large impact on its representational power, not just for fitting the initial conditions but also for finding a $\\dot{\\theta}$ that achieves a low PDE residual. Increasing the number of parameters substantially reduces the approximation error, especially at high frequencies which are challenging for the model. While dense solves prohibit scaling past 5,000 parameters, we show that our solves can be performed much faster by making use of the structure of $\\hat{M}$. Matrix-vector multiplies with the matrix $\\hat{M}(\\theta) = \\frac{1}{n} J^{\\top}J$ can be implemented much more efficiently than by using the dense matrix. Making use of Jacobian-vector products implemented via automatic differentiation (such as in JAX (Bradbury et al., 2018)), we can implement a matrix-vector multiply using two Jacobian-vector products:",
"bbox": [169, 729, 826, 875],
"page_idx": 5
},
{
"type": "equation",
"text": "\n$$\n\\hat{M}(\\theta) v = \\frac{1}{2n} \\nabla_{v} \\| J v \\|^{2}, \\tag{5}\n$$\n",
"text_format": "latex",
"bbox": [413, 875, 823, 893],
"page_idx": 5
},
{
"type": "text",
"text": "which takes $O(n + p)$ time and memory for a single matrix-vector product, sidestepping memory bottlenecks that are usually the limiting factor. Then, with these efficient matrix-vector products,",
"bbox": [169, 895, 823, 925],
"page_idx": 5
},
{
"type": "header",
"text": "Published as a conference paper at ICLR 2023",
"bbox": [171, 32, 478, 47],
"page_idx": 5
},
{
"type": "page_number",
"text": "6",
"bbox": [493, 948, 504, 959],
"page_idx": 5
},
{
"type": "image",
"img_path": "images/86dc1d9044b3abf7c9566c3a100e879b2fd34d51b92aa7d009e0ebcd5816350d.jpg",
"image_caption": [
"Figure 2: (Left): Example wave packet initial condition with varying levels of fine details parameterized by the frequency $f$. (Middle): Impact of Neural-IVP improvements in the model on the initial condition fit relative error across different levels of difficulty of the solution as parameterized by the frequency $f$, yielding an improvement of 1-2 orders of magnitude. (Right): Initial condition fit with all Neural-IVP interventions but with varying number of parameters in the model (shown by the colors). Note that the largest networks which can be used by the dense method are only 5000 parameters."
],
"image_footnote": [],
"bbox": [187, 103, 346, 210],
"page_idx": 6
},
{
"type": "image",
"img_path": "images/aaadd24636939d4e4321d03296e38e44156e56f45ea348302671ac5485978dc7.jpg",
"image_caption": [],
"image_footnote": [],
"bbox": [364, 103, 599, 212],
"page_idx": 6
},
{
"type": "image",
"img_path": "images/09b68fb6f100df6fdd29cff937d9de509dc022fb674137e9f6640a648d895f5a.jpg",
"image_caption": [],
"image_footnote": [],
"bbox": [617, 103, 834, 213],
"page_idx": 6
},
{
"type": "text",
"text": "we can use a scalable Krylov subspace routine, conjugate gradients (CG). To further accelerate the method, we construct a Nyström preconditioner (Frangella et al., 2021), drastically reducing the number of CG iterations. Using this approach, each solve takes $O((n + p)\\sqrt{\\kappa})$ time, where $\\kappa = \\kappa(P^{-1}\\hat{M})$ is the condition number of the preconditioned matrix, rather than the $O(p^3 + p^2 n)$ time of dense solves. These improvements to the runtime using the structure of $\\hat{M}$ mirror the sparse and Kronecker structures used by finite difference methods.",
"bbox": [169, 344, 823, 434],
"page_idx": 6
},
{
"type": "text",
"text": "Sinusoidal Embedding We make several architectural enhancements to the networks used in Du & Zaki (2021) and Bruna et al. (2022), which improve the quality of the initial condition fit and the error when evolving forward in time. Notably, we find that using the sinusoidal embedding (Mildenhall et al., 2021) substantially improves the ability of the network to represent higher frequency details in the solution. In contrast to the original form, we use the featurization",
"bbox": [169, 440, 823, 511],
"page_idx": 6
},
{
"type": "equation",
"text": "\n$$\n\\gamma(x) = \\left[ \\sin\\left(2^{k} x \\frac{\\pi}{2}\\right) 2^{-\\alpha k} \\right]_{k=0}^{L} + \\left[ \\cos\\left(2^{k} x \\frac{\\pi}{2}\\right) 2^{-\\alpha k} \\right]_{k=0}^{L}, \\tag{6}\n$$\n",
"text_format": "latex",
"bbox": [316, 515, 823, 535],
"page_idx": 6
},
{
"type": "text",
"text": "which scales the magnitude of the high frequency (large $\\omega$) components down by $1/\\omega^{\\alpha}$. While $\\alpha = 0$ (the original sinusoidal embedding) works best for fitting an initial signal (the only requirement for Neural Radiance Fields (Mildenhall et al., 2021)), the derivatives of $\\gamma$ will not be well behaved: the magnitude of the largest components of $\\gamma'(x)$ will scale like $2^{L}$ and $\\gamma''(x)$ like $2^{2L}$. We find setting $\\alpha = 1$ to be the most effective for both first-order and second-order PDEs. Figure 2 (middle) shows the sinusoidal embedding helps the model represent complex functions.",
"bbox": [169, 537, 823, 635],
"page_idx": 6
},
{
"type": "text",
"text": "Last Layer Linear Solves To further improve the quality of the initial condition fit, after training the network on the initial condition, we recast the fitting of the last layer of the network as a linear least squares problem. Treating the network up to the last layer as features $\\phi_{\\theta}$, the output is $N(x) = w^{\\top}\\phi_{\\theta}(x) + b$ over a fixed set of collocation points $X$. We can then solve the minimization problem with respect to the final layer weights $w, b$",
"bbox": [169, 642, 823, 713],
"page_idx": 6
},
{
"type": "equation",
"text": "\n$$\n\\min_{w, b} \\left\\| w^{\\top} \\phi_{\\theta}(X) + b - u_{0}(X) \\right\\|^{2}, \\tag{7}\n$$\n",
"text_format": "latex",
"bbox": [387, 717, 823, 741],
"page_idx": 6
},
{
"type": "text",
"text": "which can be solved to a higher level of precision, achieving a lower error than the last-layer values obtained from the full nonlinear and stochastic problem without this tuning.",
"bbox": [169, 747, 823, 777],
"page_idx": 6
},
{
"type": "text",
"text": "Combining these three improvements of scalability, sinusoidal embeddings, and last layer linear solves (head tuning), we are able to reduce the representation error of the networks by 1-2 orders of magnitude across different difficulties of this challenging 3-dimensional problem.",
"bbox": [169, 782, 826, 825],
"page_idx": 6
},
{
"type": "text",
"text": "4.2 STABILITY AND CONDITIONING",
"text_level": 1,
"bbox": [171, 842, 437, 856],
"page_idx": 6
},
{
"type": "text",
"text": "Preconditioning In section 3.2 we discussed how, even for easier PDEs, the symmetries in the neural networks generate badly conditioned linear systems for the ODE on the parameters. To counteract this negative effect on our CG solver, we use the highly effective and scalable randomized Nyström preconditioner (Frangella et al., 2021). As discussed in Frangella et al. (2021),",
"bbox": [169, 867, 823, 925],
"page_idx": 6
},
{
"type": "header",
"text": "Published as a conference paper at ICLR 2023",
"bbox": [171, 32, 478, 47],
"page_idx": 6
},
{
"type": "page_number",
"text": "7",
"bbox": [493, 948, 503, 959],
"page_idx": 6
},
{
"type": "image",
"img_path": "images/a0d90f77bf43630b4ee22b4069b404ab4557e8ec86af84692dd4e3e33e57ca3b.jpg",
"image_caption": [
"Figure 3: (Left): Restarts improve the conditioning of the linear systems. Here we cap the number of CG iterations at 1000, but without restarts the number required to reach a desired error tolerance will only continue to increase. Neural-IVP achieves an order of magnitude better PDE residual than EDNN on the Fokker-Planck equation (middle) and on the Vlasov equation (right)."
],
"image_footnote": [],
"bbox": [187, 104, 380, 247],
"page_idx": 7
},
{
"type": "image",
"img_path": "images/d8256c1805a5557d99c8ffbb710ed52b347a97879b965ee0fa94377877c93eb2.jpg",
"image_caption": [],
"image_footnote": [],
"bbox": [405, 103, 591, 247],
"page_idx": 7
},
{
"type": "image",
"img_path": "images/cf32d517df30faa097bd35b4ad0581be544275ff01bd5a9fe87bc2aa671f94a2.jpg",
"image_caption": [],
"image_footnote": [],
"bbox": [617, 103, 803, 247],
"page_idx": 7
},
{
"type": "text",
"text": "this preconditioner is close to the optimal truncated SVD preconditioner and performs impressively in practice. To use this preconditioner, we first construct a Nyström approximation of $M(\\theta)$: $M_{\\mathrm{nys}}(\\theta) = (M(\\theta)\\Omega)(\\Omega^{\\top}M(\\theta)\\Omega)^{-1}(M(\\theta)\\Omega)^{\\top}$ using a Gaussian random subspace projection $\\Omega \\in \\mathbb{R}^{p\\times \\ell}$, where $\\ell$ denotes the subspace rank. Then, using the SVD of the approximation $M_{\\mathrm{nys}}(\\theta) = U\\hat{\\Lambda} U^{\\top}$, we can construct a preconditioner (where $\\nu$ is a small regularization) as follows:",
"bbox": [169, 358, 823, 441],
"page_idx": 7
},
{
"type": "equation",
"text": "\n$$\nP = \\frac{1}{\\hat{\\lambda}_{\\ell} + \\nu} U \\big(\\hat{\\Lambda} + \\nu I\\big) U^{\\top} + \\big(I - U U^{\\top}\\big).\n$$\n",
"text_format": "latex",
"bbox": [356, 454, 638, 478],
"page_idx": 7
},
{
"type": "text",
"text": "This preconditioner closely approximates the optimal truncated SVD preconditioner when the eigenspectrum $\\hat{\\Lambda}$ resembles that of the original problem. The cost of using this preconditioner is $\\ell$ matrix-vector multiplies (MVMs) and a Cholesky decomposition costing $O(\\ell^{3})$, where in our problems $\\ell \\in \\{100, 200, 300\\}$.",
"bbox": [169, 492, 823, 553],
"page_idx": 7
},
{
"type": "text",
"text": "Projection to SGD-like regions Following the ODE on the parameters leads to linear systems whose conditioning worsens over time, as seen in the middle panel of Figure 1. That plot also shows that the conditioning is much lower and does not increase as rapidly when the neural network is trained to fit the PDE solution using SGD. In principle we would opt for the SGD behavior, but we cannot fit the ground truth PDE solution directly. Instead, when the condition number grows too large, we can refit against our neural predictions using SGD and thus start from another location in the parameter space. That is, every so often, we solve $\\theta^{\\mathrm{SGD}}(t) = \\arg \\min_{\\theta}\\int_{\\mathcal{X}}\\left(\\mathrm{N}(\\theta, x) - \\mathrm{N}(\\theta(t), x)\\right)^2 d\\mu(x)$ where $\\theta(t)$ denotes our current parameters at time $t$. Performing restarts in this way considerably improves the conditioning. As seen in Figure 3 (left), the number of CG iterations increases substantially more slowly when using the SGD restarts.",
"bbox": [169, 559, 826, 702],
"page_idx": 7
},
{
"type": "text",
"text": "4.3 VALIDATING OUR METHOD",
"text_level": 1,
"bbox": [171, 727, 405, 742],
"page_idx": 7
},
{
"type": "text",
"text": "We validate our method on three different PDEs: the wave equation (3+1), the Vlasov equation (6+1) and the Fokker-Planck equation (8+1). For more details on these equations, see Appendix B. For the wave equation we make comparisons against its analytic solution, while for the remaining equations we compare using the PDE residual (evaluated on samples different from those used in training); additional details about the experimental setup can be found in Appendix B. For the wave equation, Neural-IVP achieves the lowest error, beating EDNN and finite differences evaluated on a $100 \\times 100 \\times 100$ grid, as seen in Figure 6. For the remaining equations, Neural-IVP achieves an order of magnitude lower PDE residual compared to EDNN<sup>1</sup>, as seen in the middle and right panels of Figure 3.",
"bbox": [169, 757, 823, 869],
"page_idx": 7
},
{
"type": "header",
"text": "Published as a conference paper at ICLR 2023",
"bbox": [171, 32, 478, 47],
"page_idx": 7
},
{
"type": "page_footnote",
"text": "<sup>1</sup>While we would like to benefit from the improved sampling distribution proposed by Bruna et al. (2022), unfortunately since the solution is not a probability distribution there is no clear measure to sample from.",
"bbox": [169, 896, 823, 925],
"page_idx": 7
},
{
"type": "page_number",
"text": "8",
"bbox": [493, 948, 503, 959],
"page_idx": 7
},
{
"type": "image",
"img_path": "images/faeae463d8a88cbc53ae80614c50183cc9a029fe0e607db6690f1a675ac0fdc9.jpg",
"image_caption": [
"Figure 4: (Left): Neural-IVP's PDE residual over time for wave maps compared to EDNN. (Middle): Neural-IVP solution for the wave maps at $t = 0.36$ at an $x = 0$ slice. (Right): Finite difference solution for the wave maps at the same slice."
],
"image_footnote": [],
"bbox": [187, 104, 361, 233],
"page_idx": 8
},
{
"type": "image",
"img_path": "images/46d4ee9dba49f2e013f8af593c5aebdb7a161be917a408879fbaf54c51607610.jpg",
"image_caption": [],
"image_footnote": [],
"bbox": [383, 103, 604, 236],
"page_idx": 8
},
{
"type": "image",
"img_path": "images/378deead8acfcd894382dbed749c6e63e4d94eaa5c59b76d3d2c2aba2bbca0fa.jpg",
"image_caption": [],
"image_footnote": [],
"bbox": [627, 103, 851, 236],
"page_idx": 8
},
{
"type": "text",
"text": "5 SOLVING CHALLENGING HYPERBOLIC PDES",
"text_level": 1,
"bbox": [171, 310, 583, 325],
"page_idx": 8
},
{
"type": "text",
"text": "We now turn to a challenging PDE: the wave maps equation. This equation is a second-order hyperbolic PDE that often arises in general relativity and that describes the evolution of a scalar field in a curved $(3 + 1)$ dimensional spacetime. Following Einstein's tensor notation from differential geometry, the wave maps equation can be expressed as",
"bbox": [169, 340, 823, 397],
"page_idx": 8
},
{
"type": "equation",
"text": "\n$$\ng^{\\mu \\nu} \\nabla_{\\mu} \\nabla_{\\nu} \\phi = g^{\\mu \\nu} \\partial_{\\mu} \\partial_{\\nu} \\phi - g^{\\mu \\nu} \\Gamma_{\\mu \\nu}^{\\sigma} \\partial_{\\sigma} \\phi = 0, \\tag{8}\n$$\n",
"text_format": "latex",
"bbox": [343, 400, 823, 417],
"page_idx": 8
},
{
"type": "text",
"text": "where $g$ is the metric tensor expressing the curvature of the space, $\\Gamma_{\\mu \\nu}^{\\sigma}$ are the Christoffel symbols, combinations of the derivatives of the metric, and $\\partial_{\\mu}$ are derivatives with respect to components of both space and time. For the metric, we choose that of a Schwarzschild black hole located at coordinate value $c = -2\\hat{x}$ in a Cartesian-like coordinate system:",
"bbox": [169, 420, 826, 476],
"page_idx": 8
},
{
"type": "equation",
"text": "\n$$\ng_{\\mu \\nu} dx^{\\mu} dx^{\\nu} = -(1 - r_{s} / r_{c}) dt^{2} + \\left[ \\delta_{ij} + \\frac{1}{r_{c}^{2} (r_{c} / r_{s} - 1)} (x_{i} - c_{i})(x_{j} - c_{j}) \\right] dx^{i} dx^{j},\n$$\n",
"text_format": "latex",
"bbox": [230, 478, 764, 500],
"page_idx": 8
},
{
"type": "text",
"text": "where $r_c = \\| x - c\\| = \\sqrt{\\sum_i(x_i - c_i)^2}$ and $r_s = 2M$ is the radius of the event horizon of the black hole, and we choose the mass $M = 1/2$ so that $r_s = 1$. We choose a wave packet initial condition and evolve the solution for time $T = 0.5 = 1M$ inside the box $[-1,1]^3$, with artificial Dirichlet boundary conditions on the boundary $\\partial [-1,1]^3$. Here the event horizon of the black hole lies just outside the computational domain and its boundary, meaning that we need not worry about complications on and inside the horizon; instead the scalar field only feels the effect of gravity and is integrated for a time short enough that it is not yet pulled inside.",
"bbox": [169, 503, 823, 602],
"page_idx": 8
},
{
"type": "text",
"text": "While we do not have an analytic solution to compare to, we plot the relative error of the PDE residual averaged over the spatial domain in Figure 4, which is consistently small. We also compare the solution at time $T = 0.36$ from our solver against the finite difference solution run at a spatial grid size of $150 \\times 150 \\times 150$, the largest we were able to run with our optimized sparse finite difference solver before running out of memory. Despite the challenging nature of the problem, Neural-IVP is able to produce a consistent solution for this task. Finally, we present an ablation study in Figure 6 showing the gains of using the sinusoidal embedding and of scaling the neural network size and grid for this experiment.",
"bbox": [169, 608, 825, 720],
"page_idx": 8
},
{
"type": "text",
"text": "6 DISCUSSION",
"text_level": 1,
"bbox": [171, 739, 310, 755],
"page_idx": 8
},
{
"type": "text",
"text": "There are many PDEs of interest that are massively complex to simulate using classical methods due to the scalability limitations of grids and meshes. At the same time, neural networks have shown promise for solving boundary value problems, but the current methods for solving initial value problems can be unstable and deficient in scalability and representation power. To ameliorate these deficiencies, we presented Neural-IVP, a local-in-time method for approximating the solution to initial value PDEs. Neural-IVP is a compelling option for problems that are computationally challenging for classical methods, like the $(3 + 1)$ dimensional wave maps equation in section 5. Continued effort on this front will empower researchers and engineers to simulate physical PDEs which lie at the boundary of what is currently possible to solve, allowing prototyping and experimentation without the massive complexity of modern large-scale grid-based solvers involving mesh generation, mesh refinement, boundaries, excision, parallelization, and communication.",
"bbox": [169, 771, 826, 924],
"page_idx": 8
},
{
"type": "header",
"text": "Published as a conference paper at ICLR 2023",
"bbox": [171, 32, 478, 47],
"page_idx": 8
},
{
|
| 1435 |
+
"type": "page_number",
|
| 1436 |
+
"text": "9",
|
| 1437 |
+
"bbox": [
|
| 1438 |
+
493,
|
| 1439 |
+
948,
|
| 1440 |
+
503,
|
| 1441 |
+
959
|
| 1442 |
+
],
|
| 1443 |
+
"page_idx": 8
|
| 1444 |
+
},
|
| 1445 |
+
{
|
| 1446 |
+
"type": "text",
|
| 1447 |
+
"text": "REFERENCES",
|
| 1448 |
+
"text_level": 1,
|
| 1449 |
+
"bbox": [
|
| 1450 |
+
174,
|
| 1451 |
+
102,
|
| 1452 |
+
287,
|
| 1453 |
+
117
|
| 1454 |
+
],
|
| 1455 |
+
"page_idx": 9
|
| 1456 |
+
},
|
| 1457 |
+
{
|
| 1458 |
+
"type": "list",
|
| 1459 |
+
"sub_type": "ref_text",
|
| 1460 |
+
"list_items": [
|
| 1461 |
+
"Jens Berg and Kaj Nyström. A Unified Deep Artificial Neural Network Approach To Partial Differential Equations In Complex Geometries. Neurocomputing, 1(317):28-41, 2018.",
|
| 1462 |
+
"James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs. Version 0.3.17, 2018. URL http://github.com/google/jax.",
|
| 1463 |
+
"Joan Bruna, Benjamin Peherstorfer, and Eric Vaden-Eijnden. Neural Galerkin Scheme with Active Learning for High-Dimensional Evolution Equations. Preprint arXiv 2203.01360v1, 2022.",
|
| 1464 |
+
"Laurent Dinh, Razvan Pascanu, Samy Bengio, and Yoshua Bengio. Sharp minima can generalize for deep nets. In International Conference on Machine Learning, pp. 1019-1028. PMLR, 2017.",
|
| 1465 |
+
"M. W. M. G. Dissanayake and N. Phan-Thien. Neural-network-based approximations for solving partial differential equations. Communications in Numerical Methods in Engineering, 10(3):195-201, 1994.",
|
| 1466 |
+
"J. Dormand and P. Prince. A Family of Embedded Runge-Kutta Formulae. Journal of Computational and Applied Mathematics, 6(1):19-26, 1980.",
|
| 1467 |
+
"Yifan Du and Tamer A Zaki. Evolutional Deep Neural Network. Physical Review E, 104(4):045303, 2021.",
|
| 1468 |
+
"Weinan E., Jiequn Han, and Arnulf Jentzen. Deep Learning-Based Numerical Methods for High-Dimensional Parabolic Partial Differential Equations and Backward Stochastic Differential Equations. Communications in Mathematics and Statistics, 5(4):349-380, 2017.",
|
| 1469 |
+
"Zachary Frangella, Joel A. Tropp, and Madeleine Udell. Randomized Nyström Preconditioning. Preprint arXiv 2110.02820v2, 2021.",
|
| 1470 |
+
"Jiequn Han, Arnulf Jentzen, and Weinan E. Solving High-Dimensional Partial Differential Equations using Deep Learning. Communications in Mathematics and Statistics, 115(34):Proceedings of the National Academy of Sciences, 2018.",
|
| 1471 |
+
"Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016.",
|
| 1472 |
+
"Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International conference on machine learning, pp. 448-456. PMLR, 2015.",
|
| 1473 |
+
"Nikola Kovachki, Zongyi Li, Burigede Liu, Kamyar Azizzadenesheli, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Neural Operator: Learning Maps Between Function Spaces. Preprint arXiv 2108.08481v3, 2021.",
|
| 1474 |
+
"I. E. Lagaris, A. Likas, and D. I. Fotiadis. Artificial Neural Networks for Solving Ordinary and Partial Differential Equations. IEEE Transactions on Neural Networks, 9(5):987-1000, 1998.",
|
| 1475 |
+
"L. Lu, P. Jin, and G. E. Karniadakis. DeepONet: Learning Nonlinear Operators for Identifying Differential Equations Based on the Universal Approximation Theorem of Operators. Preprint arXiv 1910.03193, 2019.",
|
| 1476 |
+
"Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99-106, 2021.",
|
| 1477 |
+
"Dmytro Mishkin and Jiri Matas. All you need is a good init. arXiv preprint arXiv:1511.06422, 2015."
|
| 1478 |
+
],
|
| 1479 |
+
"bbox": [
|
| 1480 |
+
171,
|
| 1481 |
+
125,
|
| 1482 |
+
825,
|
| 1483 |
+
922
|
| 1484 |
+
],
|
| 1485 |
+
"page_idx": 9
|
| 1486 |
+
},
|
| 1487 |
+
{
|
| 1488 |
+
"type": "header",
|
| 1489 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 1490 |
+
"bbox": [
|
| 1491 |
+
171,
|
| 1492 |
+
32,
|
| 1493 |
+
478,
|
| 1494 |
+
47
|
| 1495 |
+
],
|
| 1496 |
+
"page_idx": 9
|
| 1497 |
+
},
|
| 1498 |
+
{
|
| 1499 |
+
"type": "page_number",
|
| 1500 |
+
"text": "10",
|
| 1501 |
+
"bbox": [
|
| 1502 |
+
490,
|
| 1503 |
+
946,
|
| 1504 |
+
509,
|
| 1505 |
+
960
|
| 1506 |
+
],
|
| 1507 |
+
"page_idx": 9
|
| 1508 |
+
},
|
| 1509 |
+
{
|
| 1510 |
+
"type": "list",
|
| 1511 |
+
"sub_type": "ref_text",
|
| 1512 |
+
"list_items": [
|
| 1513 |
+
"M. Raissi, P. Perdikaris, and G. E. Karniadakis. Physics-Informed Neural Networks: A Deep Learning Framework for Solving Forward and Inverse Problems Involving Nonlinear Partial Differential Equations. Journal of Computational Physics, 2019.",
|
| 1514 |
+
"Prajit Ramachandran, Barret Zoph, and Quoc V. Le. Swish: a Self-Gated Activation Function. Preprint arXiv 1710.05941v1, 2017.",
|
| 1515 |
+
"Justin Sirignano and Konstantinos Spiliopoulos. DGM: A Deep Learning Algorithm for Solving Partial Differential Equations. Journal of computational physics, 375:1339-1364, 2018.",
|
| 1516 |
+
"Vincent Sitzmann, Julien Martel, Alexander Bergman, David Lindell, and Gordon Wetzstein. Implicit Neural Representations with Periodic Activation Functions. Advances in Neural Information Processing Systems, 33:7462-7473, 2020.",
|
| 1517 |
+
"Bing Yu et al. The Deep Ritz Method: a Deep Learning-Based Numerical Algorithm for Solving Variational Problems. Communications in Mathematics and Statistics, 6(1):1-12, 2018."
|
| 1518 |
+
],
|
| 1519 |
+
"bbox": [
|
| 1520 |
+
171,
|
| 1521 |
+
103,
|
| 1522 |
+
823,
|
| 1523 |
+
335
|
| 1524 |
+
],
|
| 1525 |
+
"page_idx": 10
|
| 1526 |
+
},
|
| 1527 |
+
{
|
| 1528 |
+
"type": "text",
|
| 1529 |
+
"text": "A APPROXIMATE SYMMETRIES YIELD SMALL EIGENVALUES",
|
| 1530 |
+
"text_level": 1,
|
| 1531 |
+
"bbox": [
|
| 1532 |
+
171,
|
| 1533 |
+
368,
|
| 1534 |
+
699,
|
| 1535 |
+
383
|
| 1536 |
+
],
|
| 1537 |
+
"page_idx": 10
|
| 1538 |
+
},
|
| 1539 |
+
{
|
| 1540 |
+
"type": "text",
|
| 1541 |
+
"text": "Suppose that the network $\\mathbf{N}$ has an approximate symmetry in the parameters, meaning that there exists a value $\\epsilon$ for which",
|
| 1542 |
+
"bbox": [
|
| 1543 |
+
169,
|
| 1544 |
+
401,
|
| 1545 |
+
823,
|
| 1546 |
+
429
|
| 1547 |
+
],
|
| 1548 |
+
"page_idx": 10
|
| 1549 |
+
},
|
| 1550 |
+
{
|
| 1551 |
+
"type": "equation",
|
| 1552 |
+
"text": "\n$$\n\\forall x, \\theta : \\| \\mathrm {N} (x, T _ {\\alpha} (\\theta)) - \\mathrm {N} (x, \\theta) \\| ^ {2} \\leq \\epsilon \\alpha^ {2}. \\tag {9}\n$$\n",
|
| 1553 |
+
"text_format": "latex",
|
| 1554 |
+
"bbox": [
|
| 1555 |
+
357,
|
| 1556 |
+
439,
|
| 1557 |
+
823,
|
| 1558 |
+
455
|
| 1559 |
+
],
|
| 1560 |
+
"page_idx": 10
|
| 1561 |
+
},
|
| 1562 |
+
{
|
| 1563 |
+
"type": "text",
|
| 1564 |
+
"text": "holds for $\\alpha$ in a neighborhood of 0. If this is the case, we can rearrange the inequality and take the limit as $\\alpha \\to 0$ :",
|
| 1565 |
+
"bbox": [
|
| 1566 |
+
169,
|
| 1567 |
+
465,
|
| 1568 |
+
823,
|
| 1569 |
+
494
|
| 1570 |
+
],
|
| 1571 |
+
"page_idx": 10
|
| 1572 |
+
},
|
| 1573 |
+
{
|
| 1574 |
+
"type": "equation",
|
| 1575 |
+
"text": "\n$$\n\\lim _ {\\alpha \\rightarrow 0} \\| \\mathrm {N} (x, T _ {\\alpha} (\\theta)) - \\mathrm {N} (x, \\theta) \\| ^ {2} / \\alpha^ {2} \\leq \\epsilon . \\tag {10}\n$$\n",
|
| 1576 |
+
"text_format": "latex",
|
| 1577 |
+
"bbox": [
|
| 1578 |
+
364,
|
| 1579 |
+
497,
|
| 1580 |
+
823,
|
| 1581 |
+
520
|
| 1582 |
+
],
|
| 1583 |
+
"page_idx": 10
|
| 1584 |
+
},
|
| 1585 |
+
{
|
| 1586 |
+
"type": "text",
|
| 1587 |
+
"text": "As the limit $\\lim_{\\alpha \\to 0} \\frac{\\mathrm{N}(x, T_{\\alpha}(\\theta)) - \\mathrm{N}(x, \\theta)}{\\alpha} = \\frac{\\partial}{\\partial \\alpha} \\mathrm{N}(x, T_{\\alpha}(\\theta))$ exists, we can interchange the limit and norm to get",
|
| 1588 |
+
"bbox": [
|
| 1589 |
+
169,
|
| 1590 |
+
527,
|
| 1591 |
+
823,
|
| 1592 |
+
559
|
| 1593 |
+
],
|
| 1594 |
+
"page_idx": 10
|
| 1595 |
+
},
|
| 1596 |
+
{
|
| 1597 |
+
"type": "equation",
|
| 1598 |
+
"text": "\n$$\n\\left\\| \\nabla_ {\\theta} \\mathrm {N} ^ {\\top} v \\right\\| ^ {2} = v \\nabla_ {\\theta} \\mathrm {N} \\nabla_ {\\theta} \\mathrm {N} ^ {\\top} v \\leq \\epsilon , \\tag {11}\n$$\n",
|
| 1599 |
+
"text_format": "latex",
|
| 1600 |
+
"bbox": [
|
| 1601 |
+
383,
|
| 1602 |
+
563,
|
| 1603 |
+
823,
|
| 1604 |
+
580
|
| 1605 |
+
],
|
| 1606 |
+
"page_idx": 10
|
| 1607 |
+
},
|
| 1608 |
+
{
|
| 1609 |
+
"type": "text",
|
| 1610 |
+
"text": "since $\\frac{\\partial}{\\partial\\alpha}\\mathrm{N}(x,T_{\\alpha}(\\theta)) = \\nabla_{\\theta}\\mathrm{N}^{\\top}v$ for $v(\\theta) = \\left.\\partial_{\\alpha}T_{\\alpha}(\\theta)\\right|_{\\alpha = 0}$ . Recalling that $M(\\theta) = \\int \\nabla_{\\theta}\\mathrm{N}\\nabla_{\\theta}\\mathrm{N}^{\\top}d\\mu (x)$ , we can take the expectation of both sides of the inequality with respect to $\\mu$ , producing",
|
| 1611 |
+
"bbox": [
|
| 1612 |
+
169,
|
| 1613 |
+
589,
|
| 1614 |
+
825,
|
| 1615 |
+
637
|
| 1616 |
+
],
|
| 1617 |
+
"page_idx": 10
|
| 1618 |
+
},
|
| 1619 |
+
{
|
| 1620 |
+
"type": "equation",
|
| 1621 |
+
"text": "\n$$\nv ^ {\\top} M v < \\epsilon . \\tag {12}\n$$\n",
|
| 1622 |
+
"text_format": "latex",
|
| 1623 |
+
"bbox": [
|
| 1624 |
+
455,
|
| 1625 |
+
659,
|
| 1626 |
+
823,
|
| 1627 |
+
676
|
| 1628 |
+
],
|
| 1629 |
+
"page_idx": 10
|
| 1630 |
+
},
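The derivation above can be sanity-checked numerically. The following minimal Python sketch (ours, not from the paper) builds a toy two-parameter model $\mathrm{N}(x,\theta) = (\theta_0 + \theta_1)x$ with the exact symmetry $T_\alpha(\theta) = (\theta_0 + \alpha, \theta_1 - \alpha)$, so $v = \partial_\alpha T_\alpha(\theta)|_0 = (1, -1)$ should be a null direction of $M = \mathbb{E}_x[\nabla_\theta\mathrm{N}\,\nabla_\theta\mathrm{N}^\top]$:

```python
# Toy check that a parameter symmetry direction gives v^T M v = 0,
# for N(x, theta) = (theta[0] + theta[1]) * x with symmetry direction v = (1, -1).
import random

def grad_N(x):
    # d/d_theta0 N = x and d/d_theta1 N = x
    return (x, x)

random.seed(0)
xs = [random.uniform(-1.0, 1.0) for _ in range(1000)]

# Assemble the 2x2 Gram matrix M = (1/n) sum_i grad N grad N^T
M = [[0.0, 0.0], [0.0, 0.0]]
for x in xs:
    g = grad_N(x)
    for i in range(2):
        for j in range(2):
            M[i][j] += g[i] * g[j] / len(xs)

v = (1.0, -1.0)  # the symmetry direction
quad = sum(v[i] * M[i][j] * v[j] for i in range(2) for j in range(2))
print(abs(quad))  # prints 0.0: the symmetry direction is a null direction of M
```

With an approximate rather than exact symmetry, the same quantity would be bounded by $\epsilon$ instead of vanishing, matching equation (12).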
|
| 1631 |
+
{
|
| 1632 |
+
"type": "text",
|
| 1633 |
+
"text": "B EXTENDED EXPERIMENTAL RESULTS",
|
| 1634 |
+
"text_level": 1,
|
| 1635 |
+
"bbox": [
|
| 1636 |
+
171,
|
| 1637 |
+
700,
|
| 1638 |
+
522,
|
| 1639 |
+
715
|
| 1640 |
+
],
|
| 1641 |
+
"page_idx": 10
|
| 1642 |
+
},
|
| 1643 |
+
{
|
| 1644 |
+
"type": "text",
|
| 1645 |
+
"text": "In this section we expose additional experimental details that were not fully covered in section 4.3.",
|
| 1646 |
+
"bbox": [
|
| 1647 |
+
169,
|
| 1648 |
+
733,
|
| 1649 |
+
818,
|
| 1650 |
+
748
|
| 1651 |
+
],
|
| 1652 |
+
"page_idx": 10
|
| 1653 |
+
},
|
| 1654 |
+
{
|
| 1655 |
+
"type": "text",
|
| 1656 |
+
"text": "B.1 WAVE EQUATION",
|
| 1657 |
+
"text_level": 1,
|
| 1658 |
+
"bbox": [
|
| 1659 |
+
171,
|
| 1660 |
+
768,
|
| 1661 |
+
338,
|
| 1662 |
+
782
|
| 1663 |
+
],
|
| 1664 |
+
"page_idx": 10
|
| 1665 |
+
},
|
| 1666 |
+
{
|
| 1667 |
+
"type": "text",
|
| 1668 |
+
"text": "For this experiment we use the 3 dimensional wave equation $\\partial_t^2 u = \\Delta u$ , that has a few well known analytic solutions. Even this equation is computationally taxing for finite difference and finite element methods. We use the radially symmetric outgoing wave solution $u(x,t) = f(t - \\| x\\|) / \\| x\\|$ with $f(s) = 2s^{2}e^{-200s^{2}}$ and integrate the initial condition forward in time by $T = .5$ seconds. In Figure 6 we compare to the analytic solution each of the solutions produced by Neural-IVP, EDNN (Du & Zaki, 2021), and finite differences evaluated on a $100\\times 100\\times 100$ grid with RK45 set to a $10^{-4}$ tolerance. Despite the fact that this initial condition does not contain fine scale detail that Neural-IVP excels at, Neural-IVP performs the best among the three solvers, and faithfully reproduces the solution as shown in Figure 6 (left).",
|
| 1669 |
+
"bbox": [
|
| 1670 |
+
169,
|
| 1671 |
+
795,
|
| 1672 |
+
825,
|
| 1673 |
+
924
|
| 1674 |
+
],
|
| 1675 |
+
"page_idx": 10
|
| 1676 |
+
},
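As a quick check of the analytic solution quoted above, the sketch below (ours, not the paper's code) verifies by finite differences that $u(x,t) = f(t - \|x\|)/\|x\|$ satisfies the 3D wave equation away from the origin, using the radial form of the Laplacian $\Delta u = u_{rr} + (2/r)u_r$:

```python
# Finite-difference sanity check that u(r, t) = f(t - r) / r with
# f(s) = 2 s^2 exp(-200 s^2) solves d^2u/dt^2 = Laplacian(u) for r > 0.
import math

def f(s):
    return 2.0 * s * s * math.exp(-200.0 * s * s)

def u(r, t):
    return f(t - r) / r

r0, t0, h = 0.7, 0.55, 1e-4

# Second time derivative via central differences
utt = (u(r0, t0 + h) - 2.0 * u(r0, t0) + u(r0, t0 - h)) / h**2

# Radial Laplacian in 3D: u_rr + (2/r) u_r
urr = (u(r0 + h, t0) - 2.0 * u(r0, t0) + u(r0 - h, t0)) / h**2
ur = (u(r0 + h, t0) - u(r0 - h, t0)) / (2.0 * h)
lap = urr + (2.0 / r0) * ur

print(abs(utt - lap))  # small: the PDE residual vanishes up to discretization error
```

Analytically, $ru = f(t - r)$ gives $\partial_r^2(ru) = f''(t-r) = r\,\partial_t^2 u$, which is exactly the 3D radial wave equation.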
|
| 1677 |
+
{
|
| 1678 |
+
"type": "header",
|
| 1679 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 1680 |
+
"bbox": [
|
| 1681 |
+
171,
|
| 1682 |
+
32,
|
| 1683 |
+
478,
|
| 1684 |
+
47
|
| 1685 |
+
],
|
| 1686 |
+
"page_idx": 10
|
| 1687 |
+
},
|
| 1688 |
+
{
|
| 1689 |
+
"type": "page_number",
|
| 1690 |
+
"text": "11",
|
| 1691 |
+
"bbox": [
|
| 1692 |
+
490,
|
| 1693 |
+
948,
|
| 1694 |
+
506,
|
| 1695 |
+
959
|
| 1696 |
+
],
|
| 1697 |
+
"page_idx": 10
|
| 1698 |
+
},
|
| 1699 |
+
{
|
| 1700 |
+
"type": "text",
|
| 1701 |
+
"text": "B.2 VLASOV EQUATION",
|
| 1702 |
+
"text_level": 1,
|
| 1703 |
+
"bbox": [
|
| 1704 |
+
171,
|
| 1705 |
+
103,
|
| 1706 |
+
356,
|
| 1707 |
+
118
|
| 1708 |
+
],
|
| 1709 |
+
"page_idx": 11
|
| 1710 |
+
},
|
| 1711 |
+
{
|
| 1712 |
+
"type": "text",
|
| 1713 |
+
"text": "The Vlasov equation is a PDE that describes the evolution of the density of collisionless but charged particles in an electric field, expressed both as a function of position and velocity. This equation has spatial dimension 6 and takes the following form",
|
| 1714 |
+
"bbox": [
|
| 1715 |
+
169,
|
| 1716 |
+
128,
|
| 1717 |
+
823,
|
| 1718 |
+
172
|
| 1719 |
+
],
|
| 1720 |
+
"page_idx": 11
|
| 1721 |
+
},
|
| 1722 |
+
{
|
| 1723 |
+
"type": "equation",
|
| 1724 |
+
"text": "\n$$\n\\partial_ {t} u (x, v, t) + v ^ {\\top} \\nabla_ {x} u (x, v, t) + \\frac {q}{m} E (x, t) ^ {\\top} \\nabla_ {v} u (x, v, t) = 0\n$$\n",
|
| 1725 |
+
"text_format": "latex",
|
| 1726 |
+
"bbox": [
|
| 1727 |
+
289,
|
| 1728 |
+
179,
|
| 1729 |
+
707,
|
| 1730 |
+
199
|
| 1731 |
+
],
|
| 1732 |
+
"page_idx": 11
|
| 1733 |
+
},
|
| 1734 |
+
{
|
| 1735 |
+
"type": "text",
|
| 1736 |
+
"text": "where the vector $x \\in \\mathbb{R}^3$ represents the position of the particles and $v \\in \\mathbb{R}^3$ represents the velocity, and $u$ represents a normalized probability density over $x, v$ . Here $q$ is the charge of the particles and $m$ is their mass (both of which we set to 1). In a self-contained treatment of the Vlasov equation, the electric field $E(x, t)$ is itself induced by the density of charged particles: $E(x, t) = -\\nabla_x \\phi(x, t)$ and the potential $\\phi$ is the solution to the Poisson equation $\\Delta \\phi = -\\rho$ where $\\rho(x, t) = \\int q u(x, v, t) dv$ . However, to simplify the setting to a pure IVP, we assume that $E(x, t)$ is some known and fixed electric field.",
|
| 1737 |
+
"bbox": [
|
| 1738 |
+
169,
|
| 1739 |
+
205,
|
| 1740 |
+
823,
|
| 1741 |
+
303
|
| 1742 |
+
],
|
| 1743 |
+
"page_idx": 11
|
| 1744 |
+
},
|
| 1745 |
+
{
|
| 1746 |
+
"type": "text",
|
| 1747 |
+
"text": "For this particular example, we choose $E(x) = \\nabla_{x}\\exp (-\\| x\\|_{2}^{2})$ and the initial condition is a product of two Gaussians",
|
| 1748 |
+
"bbox": [
|
| 1749 |
+
169,
|
| 1750 |
+
308,
|
| 1751 |
+
823,
|
| 1752 |
+
339
|
| 1753 |
+
],
|
| 1754 |
+
"page_idx": 11
|
| 1755 |
+
},
|
| 1756 |
+
{
|
| 1757 |
+
"type": "equation",
|
| 1758 |
+
"text": "\n$$\nu _ {0} (x, v) = \\mathcal {N} (x; 0, . 3 ^ {2} I) \\mathcal {N} (v; 0, . 3 ^ {2} I),\n$$\n",
|
| 1759 |
+
"text_format": "latex",
|
| 1760 |
+
"bbox": [
|
| 1761 |
+
364,
|
| 1762 |
+
345,
|
| 1763 |
+
630,
|
| 1764 |
+
364
|
| 1765 |
+
],
|
| 1766 |
+
"page_idx": 11
|
| 1767 |
+
},
|
| 1768 |
+
{
|
| 1769 |
+
"type": "text",
|
| 1770 |
+
"text": "corresponding to the Maxwell-Boltzmann distribution over velocities and a standard Gaussian distribution over position, and we solve the problem on the cube $[-1, 1]^6$ .",
|
| 1771 |
+
"bbox": [
|
| 1772 |
+
169,
|
| 1773 |
+
371,
|
| 1774 |
+
823,
|
| 1775 |
+
402
|
| 1776 |
+
],
|
| 1777 |
+
"page_idx": 11
|
| 1778 |
+
},
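With the stated choices of $E$ and $u_0$, the Vlasov right-hand side at $t = 0$ has a closed form, since $\nabla_x u_0 = -(x/\sigma^2)u_0$ and $\nabla_v u_0 = -(v/\sigma^2)u_0$. The sketch below (ours, using these analytic Gaussian gradients, with $q = m = 1$) evaluates $\partial_t u = -(v^\top\nabla_x u + E^\top\nabla_v u)$:

```python
# Evaluate the Vlasov right-hand side at t = 0 for the example above:
# E(x) = grad_x exp(-||x||^2) = -2 x exp(-||x||^2), and u0 a product of
# two isotropic Gaussians with variance 0.3^2.
import math

SIGMA2 = 0.3 ** 2

def gaussian(z, var):
    d = len(z)
    norm2 = sum(zi * zi for zi in z)
    return math.exp(-norm2 / (2 * var)) / (2 * math.pi * var) ** (d / 2)

def u0(x, v):
    return gaussian(x, SIGMA2) * gaussian(v, SIGMA2)

def E(x):
    norm2 = sum(xi * xi for xi in x)
    return [-2.0 * xi * math.exp(-norm2) for xi in x]

def dudt(x, v):
    # grad_x u0 = -x/sigma^2 u0 and grad_v u0 = -v/sigma^2 u0, so
    # -(v . grad_x u0 + E . grad_v u0) = (v.x + E.v) / sigma^2 * u0
    vx = sum(vi * xi for vi, xi in zip(v, x))
    Ev = sum(ei * vi for ei, vi in zip(E(x), v))
    return (vx + Ev) / SIGMA2 * u0(x, v)

x = [0.2, -0.1, 0.3]
v = [0.1, 0.2, -0.2]
print(dudt(x, v))
```

This closed form is handy for checking a solver's estimate of $\mathcal{L}[u]$ at the initial time against an exact value.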
|
| 1779 |
+
{
|
| 1780 |
+
"type": "text",
|
| 1781 |
+
"text": "B.3 FOKKER-PLANCK",
|
| 1782 |
+
"text_level": 1,
|
| 1783 |
+
"bbox": [
|
| 1784 |
+
171,
|
| 1785 |
+
417,
|
| 1786 |
+
344,
|
| 1787 |
+
431
|
| 1788 |
+
],
|
| 1789 |
+
"page_idx": 11
|
| 1790 |
+
},
|
| 1791 |
+
{
|
| 1792 |
+
"type": "text",
|
| 1793 |
+
"text": "For the Fokker-Planck equation, we choose the harmonic trap for a collection of $d = 8$ interacting particles from Bruna et al. (2022), giving rise to the equation",
|
| 1794 |
+
"bbox": [
|
| 1795 |
+
169,
|
| 1796 |
+
443,
|
| 1797 |
+
823,
|
| 1798 |
+
470
|
| 1799 |
+
],
|
| 1800 |
+
"page_idx": 11
|
| 1801 |
+
},
|
| 1802 |
+
{
|
| 1803 |
+
"type": "equation",
|
| 1804 |
+
"text": "\n$$\n\\partial_ {t} u (x, t) = D \\Delta u (x, t) - \\nabla \\cdot (h u),\n$$\n",
|
| 1805 |
+
"text_format": "latex",
|
| 1806 |
+
"bbox": [
|
| 1807 |
+
375,
|
| 1808 |
+
478,
|
| 1809 |
+
617,
|
| 1810 |
+
494
|
| 1811 |
+
],
|
| 1812 |
+
"page_idx": 11
|
| 1813 |
+
},
|
| 1814 |
+
{
|
| 1815 |
+
"type": "text",
|
| 1816 |
+
"text": "where $h(x) = (a - x) + \\alpha (\\mathbf{11}^{\\top} / d - I)x$ . We choose $a = (0.2)\\mathbf{1}$ along with constants $D = .01$ and $\\alpha = 1/4$ .",
|
| 1817 |
+
"bbox": [
|
| 1818 |
+
169,
|
| 1819 |
+
500,
|
| 1820 |
+
823,
|
| 1821 |
+
531
|
| 1822 |
+
],
|
| 1823 |
+
"page_idx": 11
|
| 1824 |
+
},
|
| 1825 |
+
{
|
| 1826 |
+
"type": "text",
|
| 1827 |
+
"text": "We solve this equation in $d = 8$ dimensions with the initial condition",
|
| 1828 |
+
"bbox": [
|
| 1829 |
+
171,
|
| 1830 |
+
536,
|
| 1831 |
+
627,
|
| 1832 |
+
551
|
| 1833 |
+
],
|
| 1834 |
+
"page_idx": 11
|
| 1835 |
+
},
|
| 1836 |
+
{
|
| 1837 |
+
"type": "equation",
|
| 1838 |
+
"text": "\n$$\nu _ {0} \\left(x\\right) = \\left(\\frac {3}{4}\\right) ^ {d} \\Pi_ {i = 1} ^ {d} (1 - x _ {i} ^ {2})\n$$\n",
|
| 1839 |
+
"text_format": "latex",
|
| 1840 |
+
"bbox": [
|
| 1841 |
+
401,
|
| 1842 |
+
558,
|
| 1843 |
+
596,
|
| 1844 |
+
579
|
| 1845 |
+
],
|
| 1846 |
+
"page_idx": 11
|
| 1847 |
+
},
|
| 1848 |
+
{
|
| 1849 |
+
"type": "text",
|
| 1850 |
+
"text": "which is a normalized probability distribution on $[-1,1]^d$ . We use the same Dirichlet boundary conditions for this problem.",
|
| 1851 |
+
"bbox": [
|
| 1852 |
+
169,
|
| 1853 |
+
585,
|
| 1854 |
+
823,
|
| 1855 |
+
614
|
| 1856 |
+
],
|
| 1857 |
+
"page_idx": 11
|
| 1858 |
+
},
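The drift term and initial condition above are easy to transcribe directly. The following sketch (ours, assumed from the formulas in the text, not the paper's code) implements $h(x) = (a - x) + \alpha(\mathbf{1}\mathbf{1}^\top/d - I)x$ and checks that $u_0(x) = (3/4)^d \prod_i (1 - x_i^2)$ is normalized on $[-1,1]^d$, which follows because each one-dimensional factor integrates to 1:

```python
# Fokker-Planck drift h(x) and a normalization check for u0 on [-1, 1]^d.
d, alpha_c = 8, 0.25
D = 0.01  # diffusion constant from the text (not used in this check)
a = [0.2] * d

def h(x):
    mean = sum(x) / d  # (11^T / d) x replaces each coordinate by the mean
    return [(a[i] - x[i]) + alpha_c * (mean - x[i]) for i in range(d)]

def u0(x):
    p = (3.0 / 4.0) ** d
    for xi in x:
        p *= 1.0 - xi * xi
    return p

# Per-dimension normalization: integral of (3/4)(1 - x^2) over [-1, 1]
n = 20000
dx = 2.0 / n
integral = sum(0.75 * (1 - (-1 + (k + 0.5) * dx) ** 2) * dx for k in range(n))
print(round(integral, 6))  # prints 1.0, so u0 is a normalized density on [-1,1]^d
```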
|
| 1859 |
+
{
|
| 1860 |
+
"type": "text",
|
| 1861 |
+
"text": "Additionally, since we can naturally increase the dimensionality of the Fokker-Planck equation, we explore up to what dimension Neural-IVP can give reasonable PDE residuals. As seen in Figure 5, Neural-IVP can still give solutions for $d = 20$ . For dimensions higher than 20, the linear system solvers cannot converge to the desire tolerance needed to evolve the parameters. We warn that these higher dimensional solutions are not guaranteed to be of high quality since the PDE residual estimation also worsens as we increase the spatial dimension.",
|
| 1862 |
+
"bbox": [
|
| 1863 |
+
169,
|
| 1864 |
+
621,
|
| 1865 |
+
825,
|
| 1866 |
+
705
|
| 1867 |
+
],
|
| 1868 |
+
"page_idx": 11
|
| 1869 |
+
},
|
| 1870 |
+
{
|
| 1871 |
+
"type": "text",
|
| 1872 |
+
"text": "C HYPERPARAMETERS",
|
| 1873 |
+
"text_level": 1,
|
| 1874 |
+
"bbox": [
|
| 1875 |
+
171,
|
| 1876 |
+
724,
|
| 1877 |
+
380,
|
| 1878 |
+
739
|
| 1879 |
+
],
|
| 1880 |
+
"page_idx": 11
|
| 1881 |
+
},
|
| 1882 |
+
{
|
| 1883 |
+
"type": "text",
|
| 1884 |
+
"text": "For our experiments we used the following hyperparameters for Neural-IVP in our experiments unless otherwise specified:",
|
| 1885 |
+
"bbox": [
|
| 1886 |
+
169,
|
| 1887 |
+
756,
|
| 1888 |
+
823,
|
| 1889 |
+
786
|
| 1890 |
+
],
|
| 1891 |
+
"page_idx": 11
|
| 1892 |
+
},
|
| 1893 |
+
{
|
| 1894 |
+
"type": "list",
|
| 1895 |
+
"sub_type": "text",
|
| 1896 |
+
"list_items": [
|
| 1897 |
+
"1. RK45 Integrator with rtol: 1e-4 (for all equations except wave maps, which uses RK23)",
|
| 1898 |
+
"2. Number of Monte Carlo samples: 50K for wave maps and 10-20K for all other PDEs",
|
| 1899 |
+
"3. Maximum CG iterations: 1,000",
|
| 1900 |
+
"4. CG tolerance: 1e-8",
|
| 1901 |
+
"5. Nystrom preconditioner rank: 200-350",
|
| 1902 |
+
"6. Linear system regularization: 1e-6",
|
| 1903 |
+
"7. Initial fit iterations, optimizer and learning rate: 50K, ADAM, 1e-3"
|
| 1904 |
+
],
|
| 1905 |
+
"bbox": [
|
| 1906 |
+
176,
|
| 1907 |
+
796,
|
| 1908 |
+
771,
|
| 1909 |
+
922
|
| 1910 |
+
],
|
| 1911 |
+
"page_idx": 11
|
| 1912 |
+
},
|
| 1913 |
+
{
|
| 1914 |
+
"type": "header",
|
| 1915 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 1916 |
+
"bbox": [
|
| 1917 |
+
171,
|
| 1918 |
+
32,
|
| 1919 |
+
478,
|
| 1920 |
+
47
|
| 1921 |
+
],
|
| 1922 |
+
"page_idx": 11
|
| 1923 |
+
},
|
| 1924 |
+
{
|
| 1925 |
+
"type": "page_number",
|
| 1926 |
+
"text": "12",
|
| 1927 |
+
"bbox": [
|
| 1928 |
+
490,
|
| 1929 |
+
946,
|
| 1930 |
+
509,
|
| 1931 |
+
959
|
| 1932 |
+
],
|
| 1933 |
+
"page_idx": 11
|
| 1934 |
+
},
|
| 1935 |
+
{
|
| 1936 |
+
"type": "image",
|
| 1937 |
+
"img_path": "images/b53713fa88dac774ad8f93f3ca3ab51d4f8911a7f3b2f4ed00480f808fb908f1.jpg",
|
| 1938 |
+
"image_caption": [
|
| 1939 |
+
"Figure 5: (Left): Neural-IVP is able to provide solutions up to dimension $20 + 1$ for the Fokker-Planck equation. Higher dimensions break-down the linear systems to evolve the PDE parameters. (Right): Interventions to transform EDNN to Neural-IVP. (I1) First, add a sinusoidal embedding before the MLP. (I2) Second, use head finetuning (in this case there is no notable improvement as the initial condition does not possess finer details). Finally, scale the neural network and the grid size. Note that this is only possible due to our scalable and efficient construction."
|
| 1940 |
+
],
|
| 1941 |
+
"image_footnote": [],
|
| 1942 |
+
"bbox": [
|
| 1943 |
+
204,
|
| 1944 |
+
108,
|
| 1945 |
+
480,
|
| 1946 |
+
316
|
| 1947 |
+
],
|
| 1948 |
+
"page_idx": 12
|
| 1949 |
+
},
|
| 1950 |
+
{
|
| 1951 |
+
"type": "image",
|
| 1952 |
+
"img_path": "images/6204e2d2ff227b0f70eedc5fc26fb151e9132fa2a70e52841e6f6cc5046b3766.jpg",
|
| 1953 |
+
"image_caption": [],
|
| 1954 |
+
"image_footnote": [],
|
| 1955 |
+
"bbox": [
|
| 1956 |
+
514,
|
| 1957 |
+
108,
|
| 1958 |
+
790,
|
| 1959 |
+
316
|
| 1960 |
+
],
|
| 1961 |
+
"page_idx": 12
|
| 1962 |
+
},
|
| 1963 |
+
{
|
| 1964 |
+
"type": "image",
|
| 1965 |
+
"img_path": "images/fc174f261b5b5d3998fe6ff1f2af9279daa10c787bedae1d5155896fba761ab6.jpg",
|
| 1966 |
+
"image_caption": [
|
| 1967 |
+
"Figure 6: (Left): Neural-IVP fit of the wave equation through time. (Right): Neural-IVP performs slightly better than a finite difference method on the 3D wave equation."
|
| 1968 |
+
],
|
| 1969 |
+
"image_footnote": [],
|
| 1970 |
+
"bbox": [
|
| 1971 |
+
256,
|
| 1972 |
+
435,
|
| 1973 |
+
493,
|
| 1974 |
+
640
|
| 1975 |
+
],
|
| 1976 |
+
"page_idx": 12
|
| 1977 |
+
},
|
| 1978 |
+
{
|
| 1979 |
+
"type": "image",
|
| 1980 |
+
"img_path": "images/ef01fbbe6009adab690199b352c85c0d22f72692028b1d17de4639faab04dba2.jpg",
|
| 1981 |
+
"image_caption": [],
|
| 1982 |
+
"image_footnote": [],
|
| 1983 |
+
"bbox": [
|
| 1984 |
+
514,
|
| 1985 |
+
435,
|
| 1986 |
+
795,
|
| 1987 |
+
647
|
| 1988 |
+
],
|
| 1989 |
+
"page_idx": 12
|
| 1990 |
+
},
|
| 1991 |
+
{
|
| 1992 |
+
"type": "list",
|
| 1993 |
+
"sub_type": "text",
|
| 1994 |
+
"list_items": [
|
| 1995 |
+
"8. Floating point precision: double",
|
| 1996 |
+
"9. Number of restarts: 10"
|
| 1997 |
+
],
|
| 1998 |
+
"bbox": [
|
| 1999 |
+
176,
|
| 2000 |
+
714,
|
| 2001 |
+
410,
|
| 2002 |
+
744
|
| 2003 |
+
],
|
| 2004 |
+
"page_idx": 12
|
| 2005 |
+
},
|
| 2006 |
+
{
|
| 2007 |
+
"type": "text",
|
| 2008 |
+
"text": "The neural network architecture we use is a simple MLP with 3 hidden layers, each with 100 hidden units, $L = 5$ for the highest frequency power in the sinusoidal embedding, and the network uses swish nonlinearities. The initial sinusoidal embedding values $\\gamma(p)$ are scaled by 1.5 before feeding into the network.",
|
| 2009 |
+
"bbox": [
|
| 2010 |
+
169,
|
| 2011 |
+
758,
|
| 2012 |
+
823,
|
| 2013 |
+
814
|
| 2014 |
+
],
|
| 2015 |
+
"page_idx": 12
|
| 2016 |
+
},
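A hedged sketch of the input featurization described above: a sinusoidal embedding with highest frequency power $L = 5$, scaled by 1.5, followed by the swish nonlinearity. The exact frequency convention ($2^k \pi$ per coordinate, NeRF-style as in Mildenhall et al.) is our assumption, not stated in the text:

```python
# Sinusoidal (positional) embedding and swish activation, per the
# hyperparameters above. The 2^k * pi frequency schedule is an assumption.
import math

L = 5
SCALE = 1.5

def sinusoidal_embedding(p):
    """Map a scalar coordinate p to 2L sinusoidal features, scaled by 1.5."""
    feats = []
    for k in range(L):
        w = (2.0 ** k) * math.pi
        feats.append(math.sin(w * p))
        feats.append(math.cos(w * p))
    return [SCALE * f for f in feats]

def swish(z):
    # swish(z) = z * sigmoid(z)
    return z / (1.0 + math.exp(-z))

emb = sinusoidal_embedding(0.25)
print(len(emb))  # prints 10: 2L features per input coordinate
```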
|
| 2017 |
+
{
|
| 2018 |
+
"type": "header",
|
| 2019 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 2020 |
+
"bbox": [
|
| 2021 |
+
173,
|
| 2022 |
+
32,
|
| 2023 |
+
478,
|
| 2024 |
+
47
|
| 2025 |
+
],
|
| 2026 |
+
"page_idx": 12
|
| 2027 |
+
},
|
| 2028 |
+
{
|
| 2029 |
+
"type": "page_number",
|
| 2030 |
+
"text": "13",
|
| 2031 |
+
"bbox": [
|
| 2032 |
+
490,
|
| 2033 |
+
946,
|
| 2034 |
+
508,
|
| 2035 |
+
959
|
| 2036 |
+
],
|
| 2037 |
+
"page_idx": 12
|
| 2038 |
+
},
|
| 2039 |
+
{
|
| 2040 |
+
"type": "code",
|
| 2041 |
+
"sub_type": "algorithm",
|
| 2042 |
+
"code_caption": [
|
| 2043 |
+
"Algorithm 1 NEURAL-IVP"
|
| 2044 |
+
],
|
| 2045 |
+
"code_body": "1: Input: 1. IVP: Initial condition $u_{0}(x)$ , PDE rhs operator $\\mathcal{L}[u](x,t)$ , integration time $T$ 2. Design Choices: neural network architecture N $(x,\\theta)$ , sampling distribution $\\mu$ 3. Hyperparameters: $n$ number of Monte Carlo samples, ODE_TOL, CG_TOL, regularization $\\mu$ , preconditioner rank $r$ 2: Output: Solution $u(x,t)$ at specified times $t_1,\\ldots t_N\\leq T$ \n3: function NEURAL-IVP 4: $\\theta \\gets \\mathrm{FitFunction}(u_0)$ 5: $\\Delta t\\gets 20\\mathrm{ODE\\_TOL}$ 6: while $t < T$ do 7: if Sufficient time since last restart then 8: $\\theta \\gets \\mathrm{FitFunction}(\\mathrm{N}(\\cdot ,\\theta))$ 9: $\\theta ,\\Delta t\\gets \\mathrm{Adaptive\\_RK23\\_Step(Dynamics},\\theta ,\\Delta t,\\mathrm{ODE\\_TOL})$ 10: $t\\gets t + \\Delta t$ return $[\\mathrm{N}(\\cdot ,\\theta_{t_1}),\\dots ,\\mathrm{N}(\\cdot ,\\theta_{t_N})]$ \n11: function FITFUNCTION(u) 12: $\\theta = \\arg \\min_{\\theta}\\mathbb{E}_{x\\sim \\mu}\\| \\mathrm{N}(x,\\theta) - u(x)\\| ^2$ minimized with Adam 13: Separate last layer weights W from $\\mathrm{N}(x,\\theta) = W^{\\top}\\Phi_{\\theta}(x)$ (including bias) 14: Solve for W from regularized least squares over samples X: 15: $W\\leftarrow (\\Phi (X)^{\\top}\\Phi (X) + \\lambda I)^{-1}\\Phi (X)^{\\top}u(X)$ 16: Assemble $\\theta \\gets [\\theta_{[-1]},W]$ return $\\theta$ \n17: function DYNAMICS(θ) 18: Let the Jacobian vector product of $N(x_i,\\theta)$ with $v$ taken with respect to $\\theta$ be $DN(x_{i},\\theta)v$ 19: Construct efficient MVM $\\hat{M} (\\theta)v\\coloneqq \\nabla_v\\frac{1}{2n}\\sum_{i = 1}^n ||DN(x_i,\\theta)v||^2$ 20: Construct RHS $\\hat{F} (\\theta) = \\nabla_v\\frac{1}{2n}\\sum_{i = 1}^n\\mathcal{L}[\\mathrm{N}](\\boldsymbol {x}_i,\\theta)^{\\top}DN(\\boldsymbol {x}_i,\\theta)v$ 21: Construct rank- $r$ Nystrom pred conditioner $P$ using $\\hat{M} (\\theta)$ MVMs 22: Solve $(\\hat{M} (\\theta) + \\mu I)\\dot{\\theta} = \\hat{F} (\\theta)$ for $\\dot{\\theta}$ 
using conjugate gradients with preconditioner $P$ 23: return $\\dot{\\theta}$",
|
| 2046 |
+
"bbox": [
|
| 2047 |
+
174,
|
| 2048 |
+
119,
|
| 2049 |
+
823,
|
| 2050 |
+
580
|
| 2051 |
+
],
|
| 2052 |
+
"page_idx": 13
|
| 2053 |
+
},
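The core of the DYNAMICS step is matrix-free: build matrix-vector products with $\hat{M}(\theta)$ from per-sample Jacobian products and solve $(\hat{M} + \mu I)\dot\theta = \hat{F}$ with conjugate gradients. The toy sketch below (ours, with stand-in functions) illustrates this on a model that is linear in its parameters, $\mathrm{N}(x,\theta) = \theta^\top\phi(x)$, so $D\mathrm{N}(x,\theta)v = \phi(x)^\top v$; `phi` and `L_of_N` are hypothetical stand-ins for the network Jacobian and PDE operator, and the Nyström preconditioner is omitted:

```python
# Matrix-free (M + mu*I) theta_dot = F solve with plain conjugate gradients,
# where M-vector products are assembled from per-sample feature products.
import random

random.seed(0)

def phi(x):  # stand-in feature map (plays the role of a Jacobian row)
    return [1.0, x, x * x]

def L_of_N(x):  # stand-in for the PDE operator applied to the network
    return 2.0 * x

xs = [random.uniform(-1, 1) for _ in range(200)]
n, p, mu = len(xs), 3, 1e-6

def M_mvm(v):
    # (1/n) sum_i phi_i (phi_i . v) + mu * v, without ever forming M
    out = [mu * vi for vi in v]
    for x in xs:
        f = phi(x)
        fv = sum(fi * vi for fi, vi in zip(f, v))
        for j in range(p):
            out[j] += f[j] * fv / n
    return out

F = [sum(L_of_N(x) * phi(x)[j] for x in xs) / n for j in range(p)]

def cg(mvm, b, iters=50, tol=1e-24):
    x = [0.0] * len(b)
    r, d = b[:], b[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        Ad = mvm(d)
        a = rs / sum(di * Adi for di, Adi in zip(d, Ad))
        x = [xi + a * di for xi, di in zip(x, d)]
        r = [ri - a * Adi for ri, Adi in zip(r, Ad)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        d = [ri + (rs_new / rs) * di for ri, di in zip(r, d)]
        rs = rs_new
    return x

theta_dot = cg(M_mvm, F)
residual = [mi - fi for mi, fi in zip(M_mvm(theta_dot), F)]
print(max(abs(ri) for ri in residual))  # small: the regularized system is solved
```

Since $\mathcal{L}[\mathrm{N}] = 2x$ lies in the span of the features, the solve recovers $\dot\theta \approx (0, 2, 0)$ up to the $\mu$-regularization; in Neural-IVP the same structure is realized with JVPs through the full network.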
|
| 2054 |
+
{
|
| 2055 |
+
"type": "text",
|
| 2056 |
+
"text": "D NEURAL-IVP PSEUDO-CODE",
|
| 2057 |
+
"text_level": 1,
|
| 2058 |
+
"bbox": [
|
| 2059 |
+
171,
|
| 2060 |
+
607,
|
| 2061 |
+
450,
|
| 2062 |
+
623
|
| 2063 |
+
],
|
| 2064 |
+
"page_idx": 13
|
| 2065 |
+
},
|
| 2066 |
+
{
|
| 2067 |
+
"type": "text",
|
| 2068 |
+
"text": "The pseudo-code for Neural-IVP is present in Algorithm 1. For reduced clutter, have omitted the logic for setting step sizes so that $\\theta$ will be output by the integrator at the specified times $t_1, t_2, \\ldots, t_N$ . Sampling RNG state is updated outside the RK23 step but inside the time evolution while loop.",
|
| 2069 |
+
"bbox": [
|
| 2070 |
+
169,
|
| 2071 |
+
638,
|
| 2072 |
+
826,
|
| 2073 |
+
683
|
| 2074 |
+
],
|
| 2075 |
+
"page_idx": 13
|
| 2076 |
+
},
|
| 2077 |
+
{
|
| 2078 |
+
"type": "header",
|
| 2079 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 2080 |
+
"bbox": [
|
| 2081 |
+
171,
|
| 2082 |
+
32,
|
| 2083 |
+
478,
|
| 2084 |
+
47
|
| 2085 |
+
],
|
| 2086 |
+
"page_idx": 13
|
| 2087 |
+
},
|
| 2088 |
+
{
|
| 2089 |
+
"type": "page_number",
|
| 2090 |
+
"text": "14",
|
| 2091 |
+
"bbox": [
|
| 2092 |
+
490,
|
| 2093 |
+
946,
|
| 2094 |
+
509,
|
| 2095 |
+
960
|
| 2096 |
+
],
|
| 2097 |
+
"page_idx": 13
|
| 2098 |
+
}
|
| 2099 |
+
]
|
2023/A Stable and Scalable Method for Solving Initial Value PDEs with Neural Networks/c8ffe538-d5da-49af-b4cb-8cd3b47e4ebe_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2023/A Stable and Scalable Method for Solving Initial Value PDEs with Neural Networks/c8ffe538-d5da-49af-b4cb-8cd3b47e4ebe_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:a6c398a8dd078d20b376838e45f82c86d1658486fa3f06592491805f62372082
|
| 3 |
+
size 550930
|
2023/A Stable and Scalable Method for Solving Initial Value PDEs with Neural Networks/full.md
ADDED
|
@@ -0,0 +1,380 @@
|
| 1 |
+
# A STABLE AND SCALABLE METHOD FOR SOLVING INITIAL VALUE PDES WITH NEURAL NETWORKS
|
| 2 |
+
|
| 3 |
+
Marc Finzi $^{1*}$ , Andres Potapczynski $^{1*}$ , Matthew Choptuik $^{2}$ , Andrew Gordon Wilson $^{1}$
|
| 4 |
+
New York University $^{1}$ and University of British Columbia $^{2}$
|
| 5 |
+
|
| 6 |
+
# ABSTRACT
|
| 7 |
+
|
| 8 |
+
Unlike conventional grid and mesh based methods for solving partial differential equations (PDEs), neural networks have the potential to break the curse of dimensionality, providing approximate solutions to problems where using classical solvers is difficult or impossible. While global minimization of the PDE residual over the network parameters works well for boundary value problems, catastrophic forgetting impairs applicability to initial value problems (IVPs). In an alternative local-in-time approach, the optimization problem can be converted into an ordinary differential equation (ODE) on the network parameters and the solution propagated forward in time; however, we demonstrate that current methods based on this approach suffer from two key issues. First, following the ODE produces an uncontrolled growth in the conditioning of the problem, ultimately leading to unacceptably large numerical errors. Second, as the ODE methods scale cubically with the number of model parameters, they are restricted to small neural networks, significantly limiting their ability to represent intricate PDE initial conditions and solutions. Building on these insights, we develop Neural-IVP, an ODE based IVP solver which prevents the network from getting ill-conditioned and runs in time linear in the number of parameters, enabling us to evolve the dynamics of challenging PDEs with neural networks.
# 1 INTRODUCTION

Partial differential equations (PDEs) describe many phenomena in the natural sciences. PDEs that model complex phenomena cannot be solved analytically, and many numerical techniques are used to compute their solutions. Classical techniques such as finite differences rely on grids and provide efficient and accurate solutions when the dimensionality is low ($d = 1, 2$). Yet the computational and memory costs of grids and meshes scale exponentially with the dimension, making it extremely challenging to solve PDEs accurately in more than 3 dimensions.
Neural networks have shown considerable success in modeling and reconstructing functions on high-dimensional structured data such as images or text, but also on unstructured tabular data and spatial functions. Neural networks sidestep the "curse of dimensionality" by learning representations of the data that enable them to perform efficiently. In this respect, neural networks have similar benefits and drawbacks to Monte Carlo methods. The approximation error $\epsilon$ converges at a rate $\epsilon \propto 1 / \sqrt{n}$ from statistical fluctuations, where $n$ is the number of data points or Monte Carlo samples. Expressed inversely, we need $n \propto e^{2\log 1 / \epsilon}$ samples to achieve error $\epsilon$: the compute grows exponentially in the number of significant digits rather than exponentially in the dimension, as it does for grids. For many problems this tradeoff is favorable, and an approximate solution is much better than no solution.
Thus, it is natural to consider neural networks for solving PDEs whose dimensionality makes standard approaches intractable. While first investigated in Dissanayake & Phan-Thien (1994) and Lagaris et al. (1998), recent developments by Yu et al. (2018) and Sirignano & Spiliopoulos (2018) have shown that neural networks can successfully approximate the solution by forcing them to satisfy the dynamics of the PDE on collocation points in the spatio-temporal domain. These global collocation approaches have proven effective for boundary value problems, where the neural network can successfully approximate the solution. However, for initial value problems (IVPs), treating time as merely another spatial dimension leads to complications for the neural network such as catastrophic forgetting. Heuristics have been developed to ameliorate this problem, such as increasing the number of collocation points as time progresses, but then the computational cost of training the neural network becomes impractical.
Recently, Du & Zaki (2021) and Bruna et al. (2022) have provided two methods that follow a novel local-in-time approach for training neural networks to solve IVPs, updating the network parameters sequentially through time rather than using a fixed set of parameters to model the whole spatio-temporal domain. These methods have proven successful for a variety of PDEs, but they currently suffer from two shortcomings. First, the conditioning of the linear systems required to follow the ODE on the network parameters degrades over time, leading to longer solving times and ultimately to a complete breakdown of the solution. Second, the current methodologies lack the capacity to represent difficult initial conditions and solutions, as their runtime scales cubically in the number of network parameters, limiting them to small neural networks. In this work we provide a local-in-time IVP solver (Neural-IVP) that circumvents the shortcomings of Du & Zaki (2021) and Bruna et al. (2022) and thus enables us to solve challenging PDEs. In particular:
- Leveraging fast matrix-vector multiplies and preconditioned conjugate gradients, we develop an approach that scales only linearly in the number of parameters, allowing us to use considerably larger neural networks and more data.
- We further improve the representational power and the quality of the fit to initial conditions through the use of last layer linear solves and sinusoidal embeddings.
- We show how following the parameter ODE leads the network parameters to an increasingly poorly conditioned region of the parameter space, and how this relates to exact and approximate parameter symmetries in the network.
- Using regularization, restarts, and last layer finetuning, we prevent the parameters from reaching these poorly conditioned regions, thereby stabilizing the method.

We provide a code implementation at https://github.com/mfinzi/neural-ivp.
# 2 BACKGROUND

Given a spatial domain $\mathcal{X} \subseteq \mathbb{R}^D$, we consider the evolution of a time-dependent function $u: \mathcal{X} \times [0, T] \to \mathbb{R}^k$ which at all times belongs to some function space $\mathcal{U}$ and whose dynamics are governed by
$$
\partial_t u(x, t) = \mathcal{L}[u](x, t) \quad \text{for } (x, t) \in \mathcal{X} \times [0, T]
$$

$$
u(x, 0) = u_0(x) \quad \text{for } x \in \mathcal{X}
$$

$$
u(x, t) = h(x, t) \quad \text{for } (x, t) \in \partial\mathcal{X} \times [0, T]
$$

where $u_0 \in \mathcal{U}$ is the initial condition, $h$ is the spatial boundary condition, and $\mathcal{L}$ is the (possibly nonlinear) operator containing spatial derivatives. We can represent PDEs with higher order time derivatives, such as the wave equation $\partial_t^2\phi = \Delta \phi$, by reducing them to a system of equations that are first order in time, $u \coloneqq [\phi, \partial_t\phi]$, where in this example $\mathcal{L}[u_0, u_1] = [u_1, \Delta u_0]$.
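As a concrete sketch of this reduction (a toy example with illustrative helper names, not the paper's implementation), the operator $\mathcal{L}[u_0, u_1] = [u_1, \Delta u_0]$ can be evaluated with automatic differentiation, here in JAX:

```python
import jax
import jax.numpy as jnp

# First-order reduction of the wave equation ∂_t²φ = Δφ:
# with u = [φ, ∂_t φ], the operator is L[u0, u1] = [u1, Δu0].

def laplacian(f, x):
    # Δf at x as the trace of the Hessian, computed by autodiff
    return jnp.trace(jax.hessian(f)(x))

def wave_operator(u0, u1, x):
    # u0, u1: callables R^d -> R giving φ and ∂_t φ at the current time
    return u1(x), laplacian(u0, x)

# sanity check on φ(x) = sin(x₁) + sin(x₂), for which Δφ = -φ
phi = lambda x: jnp.sin(x[0]) + jnp.sin(x[1])
phi_t = lambda x: 0.0
x = jnp.array([0.3, -1.2])
dt_phi, dt_phi_t = wave_operator(phi, phi_t, x)
```

Evaluating spatial derivatives of the network in exactly this fashion is what any local-in-time solver needs when forming the right-hand side of the parameter dynamics.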
**Global Collocation Methods** The first approaches for solving PDEs via neural networks are based on sampling uniformly over the whole spatio-temporal domain and making the neural network obey the PDE by minimizing the PDE residual (or a proxy for it). This approach was initially proposed by Dissanayake & Phan-Thien (1994) and Lagaris et al. (1998), who used neural networks as approximate solutions. Recent advances in automatic differentiation, compute, and neural network architectures have enabled successful applications such as the Deep Galerkin Method (Sirignano & Spiliopoulos, 2018), the Deep Ritz Method (Yu et al., 2018), and PINNs (Raissi et al., 2019), which have revitalized interest in using neural networks to solve PDEs.
**Learning From Simulations** Not all approaches use neural networks as basis functions to represent the PDE solution. Some focus on directly learning the PDE operator, as in Lu et al. (2019) or Kovachki et al. (2021), where the operator can be learned from simulation. However, as these methods typically use grids, their purpose is to accelerate existing solvers rather than to tackle new problems. Other approaches that do not rely on collocation points exploit structure specific to elliptic and semi-linear parabolic PDEs, such as E et al. (2017) and Han et al. (2018).
# 2.1 GLOBAL PDE RESIDUAL MINIMIZATION

The most straightforward method for producing a neural network solution to initial value PDEs mirrors the approach used for boundary value problems: treat the temporal dimension as if it were a spatial dimension and parameterize the solution simultaneously for all times, $u(x,t) = \mathrm{N}_{\theta}(x,t)$. The initial and boundary conditions can be enforced through appropriate parameterization of the network architecture (Berg & Nyström, 2018), whereas the PDE is enforced through minimization of the training objective:
$$
S(\theta) = \int_{\mathcal{X} \times [0, T]} r_{\theta}(x, t)^{2}\, d\mu(x)\, dt = \int_{\mathcal{X} \times [0, T]} \left(\partial_{t} u_{\theta}(x, t) - \mathcal{L}[u_{\theta}](x, t)\right)^{2} d\mu(x)\, dt
$$

where the integral is estimated via Monte Carlo samples from a chosen distribution $\mu$ over space and times $t$.
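For instance, a Monte Carlo estimate of $S(\theta)$ can be sketched as follows, using a toy single-parameter-vector model and the 1-d advection equation $\partial_t u = -c\,\partial_x u$ chosen purely for illustration (names are not from the paper's code):

```python
import jax
import jax.numpy as jnp

c = 1.0  # advection speed

def u_theta(theta, x, t):
    # toy ansatz u_θ(x,t) = tanh(θ₀x + θ₁t + θ₂)
    return jnp.tanh(theta[0] * x + theta[1] * t + theta[2])

def residual(theta, x, t):
    du_dt = jax.grad(u_theta, argnums=2)(theta, x, t)
    du_dx = jax.grad(u_theta, argnums=1)(theta, x, t)
    return du_dt + c * du_dx          # ∂_t u - L[u], with L[u] = -c ∂_x u

def S_hat(theta, key, n=1024):
    # Monte Carlo estimate of the global residual over X × [0, T]
    kx, kt = jax.random.split(key)
    xs = jax.random.uniform(kx, (n,), minval=-1.0, maxval=1.0)
    ts = jax.random.uniform(kt, (n,), minval=0.0, maxval=1.0)
    r = jax.vmap(residual, in_axes=(None, 0, 0))(theta, xs, ts)
    return jnp.mean(r ** 2)

# θ = (1, -c, 0) gives u(x,t) = tanh(x - ct), an exact advection solution
theta_star = jnp.array([1.0, -c, 0.0])
loss = S_hat(theta_star, jax.random.PRNGKey(0))
```

An exact solution drives the estimated residual to zero, while gradient descent on `S_hat` is the global collocation training loop in miniature.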
Initial value PDEs have a local temporal structure: only values on the previous spatial slice are necessary to compute the next, yet global minimization ignores this property. Moreover, as the weights of the neural network must represent the solution simultaneously at all times, we must ensure that the approximation does not forget the PDE solution learnt at earlier times (catastrophic forgetting). While Sirignano & Spiliopoulos (2018) and Sitzmann et al. (2020) take this approach, the cost of avoiding catastrophic forgetting is increased computation spent ensuring the presence of data from previous times.
# 2.2 LOCAL-IN-TIME METHODS

To circumvent the inherent inefficiency of the global methods, Du & Zaki (2021) and Bruna et al. (2022) propose a local-in-time method whereby the minimization problem is converted into an ODE that the parameters satisfy at each point in time. In this approach, the PDE solution is given by $u(x,t) = \mathrm{N}(x,\theta(t))$ for a neural network $\mathrm{N}$, where the time dependence enters through the parameter vector $\theta(t)$ rather than as an input to the network. The network thus represents the solution only at a single time rather than simultaneously at all times; $\theta(t)$ can simply be recorded, and no representational power or computational cost is spent preserving previous solutions. Assuming for the moment that the solution is scalar-valued, the PDE residual at a single time can be written as
$$
L(\dot{\theta}, t) = \int_{\mathcal{X}} r(x, t)^{2}\, d\mu(x) = \int_{\mathcal{X}} \left(\dot{\theta}^{\top} \nabla_{\theta} \mathrm{N}(x, \theta(t)) - \mathcal{L}[\mathrm{N}](x, \theta(t))\right)^{2} d\mu(x), \tag{1}
$$

since the time derivative is $\partial_t u(x,t) = \dot{\theta}^\top \nabla_\theta \mathrm{N}(x,\theta)$.
Choosing the dynamics $\dot{\theta}$ of the parameters to minimize the instantaneous PDE residual error $L(\dot{\theta},t)$ yields the (implicitly defined) differential equation

$$
M(\theta)\dot{\theta} = F(\theta) \quad \text{and} \quad \theta_{0} = \arg\min_{\theta} \int_{\mathcal{X}} \left(\mathrm{N}(x, \theta) - u(x, 0)\right)^{2} d\mu(x), \tag{2}
$$

where $M(\theta) = \int_{\mathcal{X}} \nabla_{\theta} \mathrm{N}(x, \theta) \nabla_{\theta} \mathrm{N}(x, \theta)^{\top} d\mu(x)$ and $F(\theta) = \int_{\mathcal{X}} \nabla_{\theta} \mathrm{N}(x, \theta) \mathcal{L}[\mathrm{N}](x, \theta)\, d\mu(x)$. Once we find $\theta_0$ to fit the initial conditions, we have a fully specified system of differential equations, and we can advance the parameters (and therefore the solution $u(x,t) = \mathrm{N}(x,\theta(t))$) forward in time.
Since both $M(\theta)$ and $F(\theta)$ involve integrals over space, we can estimate them with $n$ Monte Carlo samples, yielding $\hat{M}(\theta)$ and $\hat{F}(\theta)$. We then solve the linear system $\hat{M}(\theta)\dot{\theta} = \hat{F}(\theta)$ at each timestep for the dynamics $\dot{\theta}$ and feed that into an ODE integrator such as RK45 (Dormand & Prince, 1980). For systems of PDEs such as the Navier-Stokes equations, the method extends straightforwardly by replacing the outer product of gradients with the Jacobians of the multi-output network $\mathrm{N}$: $M(\theta) = \int_{\mathcal{X}} D_{\theta}\mathrm{N}(x,\theta)^{\top}D_{\theta}\mathrm{N}(x,\theta)\,d\mu(x)$, and likewise for $F$, which results from minimizing the norm of the PDE residual $\int_{\mathcal{X}} \|r(x,t)\|^2 d\mu(x)$.
Introducing some additional notation, we can write the Monte Carlo estimates $\hat{M}$ and $\hat{F}$ in a more illuminating way. Defining the Jacobian matrix of the network at the sample points, $J_{ik} = \frac{\partial}{\partial\theta_k}\mathrm{N}(x_i,\theta)$, and the vector $f_{i} = \mathcal{L}[\mathrm{N}](x_{i},\theta)$, the $n$-sample Monte Carlo estimate of the PDE residual is just the least squares objective $\hat{L}(\dot{\theta},t) = \frac{1}{n}\|J\dot{\theta} - f\|^2$. The matrices $\hat{M}(\theta) = \frac{1}{n} J^{\top}J$ and $\hat{F}(\theta) = \frac{1}{n} J^{\top}f$ reveal that the ODE dynamics are just the familiar least squares solution $\dot{\theta} = J^{\dagger}f = (J^{\top}J)^{-1}J^{\top}f$.
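This identity is easy to verify on a small synthetic example (random $J$ and $f$ standing in for the network quantities; sizes are illustrative):

```python
import numpy as np

# M̂ = JᵀJ/n and F̂ = Jᵀf/n make the parameter dynamics the
# least-squares solution θ̇ = J⁺f = (JᵀJ)⁻¹Jᵀf.

rng = np.random.default_rng(0)
n, p = 200, 10                   # collocation samples, parameter count
J = rng.normal(size=(n, p))      # stand-in for ∂N(x_i, θ)/∂θ_k
f = rng.normal(size=n)           # stand-in for L[N](x_i, θ)

M_hat = J.T @ J / n
F_hat = J.T @ f / n
theta_dot = np.linalg.solve(M_hat, F_hat)

# identical (up to numerics) to the pseudoinverse solution
theta_dot_pinv, *_ = np.linalg.lstsq(J, f, rcond=None)
```

In practice solving via `lstsq` (or an iterative method) is preferable to forming $J^\top J$ explicitly, since squaring $J$ also squares its condition number.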
# 3 DIAGNOSING LOCAL-IN-TIME NEURAL PDE SOLVERS

The success of local-in-time methods hinges on keeping the PDE residual $L(\dot{\theta}, t)$ close to 0 as we follow the dynamics $\dot{\theta} = \hat{M}(\theta)^{-1}\hat{F}(\theta)$. The lower the local error, the lower the global PDE residual $S(\theta) = \int L(\dot{\theta}, t)\, dt$ and the more faithfully the PDE is satisfied.
Even though $\dot{\theta}$ directly minimizes $L(\dot{\theta}, t)$, the PDE residual is not necessarily small: the value of $r(x, t)$ depends nontrivially on the network architecture and on the values of the parameters themselves. While local-in-time methods have been applied successfully in several cases, there are harder problems where they fail unexpectedly, for example with unacceptably large errors on second-order PDEs or problems with complex initial conditions. In the following sections, we identify the reasons for these failures.
# 3.1 REPRESENTATIONAL POWER

The simplest reason why local-in-time methods fail is that the neural network does not have enough representational power. Having enough degrees of freedom and the right inductive biases in the network matters for finding a $\dot{\theta}$ in the span of $J$ that can match the spatial derivatives. The spatial derivatives $\mathcal{L}[\mathrm{N}](x,\theta)$ of the PDE must be expressible (or nearly expressible) as a linear combination of the derivatives with respect to each parameter, $\frac{\partial}{\partial\theta_k} \mathrm{N}(x,\theta)$, which is a different task from what a neural network is typically designed for. The easiest intervention is simply to increase the number of parameters $p$, yielding additional degrees of freedom.
Increasing the number of parameters also improves the ability of the network to reconstruct the initial conditions, which can have knock-on effects on the evolution later in time. However, increasing the number of parameters while evolving through time following Du & Zaki (2021) and Bruna et al. (2022) quickly leads to intractable computations. The linear solves used to define the ODE dynamics require $O(p^3 + p^2 n)$ time and $O(p^2 + pn)$ memory, where $p$ is the number of neural network parameters and $n$ is the number of Monte Carlo samples used to estimate the linear system. Therefore networks with more than around $p = 5{,}000$ parameters cannot be used. Networks of this size are extremely small compared to modern networks, which often have millions or even billions of parameters. In Section 4.1, we show how our Neural-IVP method resolves this limitation, allowing us to use large neural networks with many parameters.
# 3.2 STABILITY AND CONDITIONING

In addition to lacking sufficient representational power, there are more subtle reasons why the local-in-time methods fail.
Even when the solution is exactly representable at every time, a continuous path $\theta^{*}(t)$ between the representing parameters may not exist. That is, even if the network can faithfully express the solution at any given time, $u(x,t) = \mathrm{N}(x,\theta^{*})$ for some value of $\theta^{*}$ in the parameter space, there may not exist a continuous path connecting these $\theta^{*}$ across times. This fact is related to the implicit function theorem: with the multi-output function $H_{i}(\theta) = \mathrm{N}(x_{i},\theta) - u(x_{i},t)$, even if we wish to satisfy $H_{i} = 0$ only at a finite collection of points $x_{i}$, the existence of a continuous path $\theta^{*}(t) = g(t)$ in general requires that the Jacobian matrix $D_{\theta}H = J$ be invertible. Unfortunately the Jacobian is not invertible, because there exist singular directions and nearly singular directions in the parameter space, as we now argue.
Singular directions of $J$ and $M$ arise from symmetries in the network. Each continuous symmetry of the network produces a right singular vector of $J$, regardless of how many points $n$ are used in the Monte Carlo estimate. Here we define a continuous symmetry as a parameterized transformation of the parameters $T_{\alpha} : \mathbb{R}^{p} \to \mathbb{R}^{p}$, defined for $\alpha \in (-\epsilon, \epsilon)$ in a neighborhood of the identity $T_{0} = \mathrm{Id}$, such that $T_{\alpha}$ has a nonzero derivative with respect to $\alpha$ at the identity. For convenience, consider reparametrizing $\alpha$ to unit speed so that $\|\partial_{\alpha}T_{\alpha}(\theta)\| = 1$.
Figure 1: The conditioning of the linear systems needed to solve the ODE on the network parameters increases for challenging PDEs like the wave equation, but not for others like the advection equation. (Left): Eigenvalue spectrum of the $M(\theta)$ matrix at initialization. (Middle): Growth of the largest eigenvalue of $M(\theta)$ over time. (Right): Number of preconditioned CG iterations required to solve the linear system to a specified tolerance of $\epsilon = 10^{-7}$.
Theorem 1. Suppose the network $\mathrm{N}(x,\theta)$ has a continuous parameter symmetry $T_{\alpha}$ which preserves the outputs of the network: $\forall \theta, x: \mathrm{N}(x,T_{\alpha}(\theta)) = \mathrm{N}(x,\theta)$. Then

$$
v(\theta) = \left.\partial_{\alpha} T_{\alpha}(\theta)\right|_{\alpha = 0} \tag{3}
$$

is a singular vector of both $J$ and $M$.
Proof: Taking the derivative with respect to $\alpha$ at $0$, the chain rule gives $0 = \partial_{\alpha}\big|_{0}\mathrm{N}(x,T_{\alpha}(\theta)) = \nabla_{\theta}\mathrm{N}(x,\theta)^{\top}\partial_{\alpha}T_{\alpha}(\theta)\big|_{\alpha = 0}$. As this expression holds for all $x$, $J(\theta)v(\theta) = 0$ and $M(\theta)v(\theta) = 0$.
As Dinh et al. (2017) demonstrated, multilayer perceptrons using ReLU nonlinearities have a high-dimensional group of exact parameter symmetries corresponding to a rescaling of weights in alternate layers. Furthermore, replacing ReLUs with alternative activation functions such as Swish (Ramachandran et al., 2017) does not solve the problem, as these have approximate symmetries which produce highly ill-conditioned $M$ and $J$ matrices.
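Theorem 1 can be checked numerically for the ReLU rescaling symmetry on a minimal one-hidden-layer example (not from the paper's code): differentiating $T_\alpha: (w_1, w_2) \mapsto (e^\alpha w_1, e^{-\alpha} w_2)$ at $\alpha = 0$ gives the symmetry direction $v = (w_1, -w_2)$, which annihilates the Jacobian.

```python
import numpy as np

rng = np.random.default_rng(0)
w1 = rng.normal(size=5)      # first-layer weights (1-d input)
w2 = rng.normal(size=5)      # second-layer weights

def net(theta, x):
    # w2ᵀ relu(w1 x), invariant under (w1, w2) -> (e^α w1, e^{-α} w2)
    a, b = theta[:5], theta[5:]
    return b @ np.maximum(a * x, 0.0)

theta = np.concatenate([w1, w2])
v = np.concatenate([w1, -w2])     # ∂_α T_α(θ)|_{α=0}

# central finite-difference directional derivative of the outputs along v
xs = rng.normal(size=8)
eps = 1e-6
Jv = np.array([(net(theta + eps * v, x) - net(theta - eps * v, x)) / (2 * eps)
               for x in xs])
```

Every entry of `Jv` vanishes to numerical precision, confirming that $v$ is a null direction of $J$ no matter how many sample points are used.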
Theorem 2. An approximate symmetry $\forall x:\|\mathrm{N}(x,T_{\alpha}(\theta)) - \mathrm{N}(x,\theta)\|^{2}\leq \epsilon \alpha^{2}$ will produce nearly singular vectors $v(\theta) = \partial_{\alpha}T_{\alpha}(\theta)\big|_{\alpha = 0}$ for which

$$
v^{\top} M v < \epsilon, \tag{4}
$$

and therefore the smallest eigenvalue of $M$ is less than $\epsilon$.
Proof: See Appendix A.
Additionally, the rank of the Monte Carlo estimate $\hat{M} = \frac{1}{n} J^{\top}J$ using $n$ samples is at most $n$, and there is a $(p - n)$-dimensional manifold of parameters which match the function values at the sample points, $\forall i = 1,\dots,n:\mathrm{N}(x_i,\theta) = \mathrm{N}_i$, rather than over the whole domain. In Figure 1 (left), we show empirically that the eigenspectrum of $\hat{M}$ is indeed rank deficient and highly ill-conditioned, with a long tail of small eigenvalues.
Hence some form of regularization, such as solving the shifted system $[M(\theta) + \mu I]\dot{\theta} = F(\theta)$, is necessary to obtain a bounded condition number, $\kappa(M(\theta) + \mu I) = (\lambda_1 + \mu)/(0 + \mu)$.
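A quick numerical illustration of the shift (synthetic rank-deficient matrix, not the paper's $M$):

```python
import numpy as np

# A rank-deficient M has infinite condition number; adding μI floors
# the spectrum at μ, giving κ(M + μI) = (λ₁ + μ)/μ.

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 5))
M = A @ A.T                     # rank 5: 45 exactly-zero eigenvalues
mu = 1e-3

eigs = np.linalg.eigvalsh(M + mu * np.eye(50))
kappa = eigs[-1] / eigs[0]      # finite, bounded by (λ₁ + μ)/μ
```

The tradeoff is that $\mu$ also biases $\dot\theta$ away from the unregularized least-squares direction, so it should be kept small relative to the informative eigenvalues.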
Furthermore, as seen in Figure 1 (middle), the conditioning of the linear system (equation 2) deteriorates over time. This deterioration is worse for more challenging PDEs like the second-order wave equation than for the easier advection equation. Even when using a dense solver, the quality of the solves will degrade over time, leading to increased error in the solution. When using an iterative method like CG, shown in Figure 1 (right), the runtime of the method will increase during the evolution and will eventually fail to meet the desired error tolerance. In contrast, if we instead take snapshots of the solution at different times and fit them directly with SGD, we find that the conditioning is much better, as shown by the green curve in Figure 1.
Making sense of this observation, we can learn from the way neural networks are typically used: in conjunction with stochastic gradient descent. When training neural networks with SGD, we must choose the initialization carefully to make the problem well-conditioned (Mishkin & Matas, 2015). Many initializations, such as setting the scale of the parameters to drastically different values between layers, lead either to diverging solutions or to no progress on the objective. With the right initialization and a good choice of hyperparameters, the optimization trajectory will stay in a well-conditioned region of the parameter space. However, many bad regions of the parameter space exist, and a number of architectural improvements in deep learning, such as batch normalization (Ioffe & Szegedy, 2015) and skip connections (He et al., 2016), were designed with the express purpose of improving the conditioning of optimization while leaving the expressive power unchanged. Unfortunately, while SGD optimizers stay in well-conditioned regions of the parameter space, the dynamics of equation (2) do not.
To see this, consider a right singular vector $v$ of $J$ with a very small singular value $\sigma_v$ (due to an approximate symmetry or otherwise). The projection of the parameters along this singular vector evolves as $v^\top \dot{\theta} = v^\top J^\dagger f = \sigma_v^{-1} u_v^\top f$, where $Jv = \sigma_v u_v$. A small singular value thus leads to a large change in that subspace under the ODE evolution. For the approximate rescaling symmetry of Swish networks, this drives the dynamics directly toward amplifying the difference in weight scales between neighboring layers and worsening the conditioning, precisely what was identified as sharp minima by Dinh et al. (2017). We describe how our method circumvents this problem in Section 4.2.
# 4 NEURAL IVP

Drawing on these observations, we introduce Neural-IVP, a method for solving initial value PDEs that resolves the scalability and numerical stability issues limiting current local-in-time methods. We further enhance Neural-IVP through projections to well-conditioned regions of the parameter space, finetuning procedures, and mechanisms that increase the representational power of the neural networks.
# 4.1 IMPROVING REPRESENTATIONAL POWER AND SCALABILITY

To evaluate the representational power of the network, we examine fitting a complex initial condition of the kind typically present in the later evolution of an entangled system, containing components at many different frequencies. We fit a sum of two Gaussians of different sizes modulated by frequencies pointing in different directions in 3 dimensions within the cube $[-1,1]^3$, with the slice through $z = 0$ shown in Figure 2 (left). The function is defined as the sum of two Gaussian-like wave packets, modulated by spatial frequencies pointing in different directions: $u_0(x) = 30(2\pi s_1^2)^{-1}e^{-\|v\|_2^2 / (2s_1^2)}\cos(2\pi f x^\top \hat{n}) + 24(2\pi s_2^2)^{-1}e^{-\|w\|_2^2 / (2s_2^2)}\cos(2\pi f x^\top \hat{m})$, where $v = x - 0.5(\hat{x}_2 + \hat{x}_3)$, $w = x + (\hat{x}_1 + \hat{x}_2 + \hat{x}_3)/6$, and $\hat{n} = (\hat{x}_1 + \hat{x}_2)/\sqrt{2}$ and $\hat{m} = 2^{-1}(\hat{x}_2 + x_1\hat{x}_1/3)$.
By changing the frequency parameter $f$, we can investigate how well the network fits fine-scale details. We introduce three substantial improvements over the models used in the Evolutional Deep Neural Networks (EDNN) of Du & Zaki (2021) in order to improve their representational power, and we evaluate the impact of these changes in Figure 2 (middle), starting from the exact 4-layer, 30-hidden-unit tanh MLP architecture used in EDNN.
**Increasing Number of Parameters and Scalability** As shown in Figure 2 (right), the number of network parameters has a large impact on representational power, not just for fitting the initial conditions but also for finding a $\dot{\theta}$ that achieves a low PDE residual. Increasing the number of parameters substantially reduces the approximation error, especially at high frequencies, which are challenging for the model. While dense solves prohibit scaling past 5,000 parameters, our solves can be performed much faster by exploiting the structure of $\hat{M}$. Matrix-vector multiplies with $\hat{M}(\theta) = \frac{1}{n} J^{\top}J$ can be implemented much more efficiently than with the dense matrix. Making use of Jacobian-vector products implemented with automatic differentiation (such as in JAX (Bradbury et al., 2018)), we can implement a matrix-vector multiply using 2 Jacobian-vector products:
$$
\hat{M}(\theta)v = \frac{1}{2n}\nabla_{v}\|Jv\|^{2}, \tag{5}
$$

which takes $O(n + p)$ time and memory for a single matrix-vector product, sidestepping the memory bottlenecks that are usually the limiting factor. Then, with these efficient matrix-vector products, we can use the scalable Krylov subspace routine of conjugate gradients (CG). To further accelerate the method, we construct a Nyström preconditioner (Frangella et al., 2021), drastically reducing the number of CG iterations. Using this approach, each solve takes $O((n + p)\sqrt{\kappa})$ time, where $\kappa(P^{-1}\hat{M})$ is the condition number of the preconditioned matrix $\hat{M}$, rather than the $O(p^3 + p^2 n)$ time of dense solves. These runtime improvements exploiting the structure of $\hat{M}$ mirror the sparse and Kronecker structures used by finite difference methods.

Figure 2: (Left): Example wave packet initial condition with varying levels of fine detail parameterized by the frequency $f$. (Middle): Impact of the Neural-IVP model improvements on the initial condition fit relative error across different levels of solution difficulty, as parameterized by the frequency $f$, yielding an improvement of 1-2 orders of magnitude. (Right): Initial condition fit with all Neural-IVP interventions but with varying numbers of model parameters (shown by the colors). Note that the largest networks usable by the dense method have only 5,000 parameters.
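The matrix-free product of equation (5) can be sketched in JAX with one forward-mode and one reverse-mode pass (a toy two-layer network with illustrative names and sizes, not the paper's architecture):

```python
import jax
import jax.numpy as jnp

def net_outputs(theta, X):
    # tiny MLP on flattened parameters: R^p, (n,3) -> R^n
    w1 = theta[:24].reshape(3, 8)
    w2 = theta[24:]
    return jnp.tanh(X @ w1) @ w2

X = jnp.linspace(-1.0, 1.0, 12).reshape(4, 3)   # n = 4 collocation points
theta = 0.1 * jnp.arange(32.0)                   # p = 32 parameters
f = lambda th: net_outputs(th, X)

def M_hat_mvm(v):
    # M̂v = JᵀJv / n without ever materializing J
    n = X.shape[0]
    _, Jv = jax.jvp(f, (theta,), (v,))           # forward-mode: J v
    (JtJv,) = jax.vjp(f, theta)[1](Jv)           # reverse-mode: Jᵀ(Jv)
    return JtJv / n

v = jnp.ones(32)
mv = M_hat_mvm(v)

# agrees with the dense computation that forms J explicitly
J = jax.jacobian(f)(theta)
dense = J.T @ (J @ v) / X.shape[0]
```

A function like `M_hat_mvm` is exactly what an iterative solver such as CG consumes, so the $p \times p$ matrix is never stored.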
**Sinusoidal Embedding** We make several architectural enhancements to the networks used in Du & Zaki (2021) and Bruna et al. (2022), which improve both the quality of the initial condition fit and the error when evolving forward in time. Notably, we find that a sinusoidal embedding (Mildenhall et al., 2021) substantially improves the ability of the network to represent higher frequency details in the solution. In contrast to the original form, we use the featurization

$$
\gamma(x) = \left[\sin\left(2^{k} x \frac{\pi}{2}\right) 2^{-\alpha k}\right]_{k=0}^{L} + \left[\cos\left(2^{k} x \frac{\pi}{2}\right) 2^{-\alpha k}\right]_{k=0}^{L}, \tag{6}
$$

which scales the magnitude of the high frequency (large $\omega = 2^k$) components down by $1/\omega^{\alpha}$. While $\alpha = 0$ (the original sinusoidal embedding) works best for fitting an initial signal (the only requirement for Neural Radiance Fields (Mildenhall et al., 2021)), the derivatives of $\gamma$ will not be well behaved, as the magnitude of the largest components of $\gamma'(x)$ scales like $2^{L}$ and that of $\gamma''(x)$ like $2^{2L}$. We find setting $\alpha = 1$ to be the most effective for both first-order and second-order PDEs. Figure 2 (middle) shows that the sinusoidal embedding helps the model represent complex functions.
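A sketch of this embedding for a scalar input, with the sine and cosine features concatenated and `L`, `alpha` following equation (6):

```python
import numpy as np

def sinusoidal_embedding(x, L=6, alpha=1.0):
    # frequencies 2^k with amplitudes 2^{-αk}; α = 0 recovers the
    # standard NeRF embedding, α = 1 keeps derivatives well behaved
    k = np.arange(L + 1)
    scale = 2.0 ** (-alpha * k)
    angles = (2.0 ** k) * x * np.pi / 2
    return np.concatenate([np.sin(angles) * scale, np.cos(angles) * scale])

feats = sinusoidal_embedding(0.5, L=4, alpha=1.0)   # 2(L+1) = 10 features
```

With $\alpha = 1$, differentiating once multiplies each feature by its frequency $2^k$ while its amplitude shrinks by $2^{-k}$, so the derivative features stay $O(1)$ rather than growing like $2^L$.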
**Last Layer Linear Solves** To further improve the quality of the initial condition fit, after training the network on the initial condition we recast the fitting of the last layer as a linear least squares problem. Writing the network as $\mathrm{N}(x) = w^{\top}\phi_{\theta}(x) + b$, the network up until the last layer provides features $\phi_{\theta}$ evaluated over a fixed set of collocation points $X$. We can then solve the minimization problem with respect to the final layer weights $w, b$:

$$
\min_{w, b} \left\| w^{\top} \phi_{\theta}(X) + b - u_{0}(X) \right\|^{2}, \tag{7}
$$

which can be solved to a higher level of precision, achieving lower error than the last layer values obtained from the full nonlinear and stochastic problem without this tuning.
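With the backbone frozen, equation (7) is an ordinary linear least squares problem. A sketch with synthetic features (`Phi` and `u0` are stand-ins for $\phi_\theta(X)$ and $u_0(X)$):

```python
import numpy as np

rng = np.random.default_rng(0)
n, h = 500, 32
Phi = rng.normal(size=(n, h))     # features φ_θ(x_i) at collocation points
c = rng.normal(size=h)
u0 = Phi @ c + 0.7                # target initial condition values

# append a column of ones so the bias b is solved jointly with w
A = np.hstack([Phi, np.ones((n, 1))])
sol, *_ = np.linalg.lstsq(A, u0, rcond=None)
w, b = sol[:-1], sol[-1]

residual = np.linalg.norm(A @ sol - u0)
```

Because the target here lies exactly in the span of the features, the solve recovers the true head weights to machine precision, which is the kind of accuracy SGD alone rarely reaches on the last layer.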
Combining these three improvements of scalability, sinusoidal embeddings, and last layer linear solves (head tuning), we reduce the representation error of the networks by 1-2 orders of magnitude across the different difficulty levels of this challenging 3-dimensional problem.
# 4.2 STABILITY AND CONDITIONING

Figure 3: (Left): Restarts improve the conditioning of the linear systems. Here we cap the number of CG iterations at 1000; without restarts, the number required to reach a desired error tolerance would only continue to increase. Neural-IVP achieves an order of magnitude better PDE residual than EDNN on the Fokker-Planck equation (middle) and on the Vlasov equation (right).

**Preconditioning** In Section 3.2 we discussed how, even for easier PDEs, the symmetries in the neural network generate badly conditioned linear systems for the ODE on the parameters. To counteract this effect on our CG solver, we use the highly effective and scalable randomized Nyström preconditioner (Frangella et al., 2021). As discussed in Frangella et al. (2021), this preconditioner is close to the optimal truncated SVD preconditioner and is empirically impressive. To use it, we first construct a Nyström approximation of $M(\theta)$, $M_{\mathrm{nys}}(\theta) = (M(\theta)\Omega)(\Omega^{\top}M(\theta)\Omega)^{-1}(M(\theta)\Omega)^{\top}$, using a Gaussian random subspace projection $\Omega \in \mathbb{R}^{p\times \ell}$, where $\ell$ denotes the subspace rank. Then, using the SVD of the approximation $M_{\mathrm{nys}}(\theta) = U\hat{\Lambda}U^{\top}$, we construct the preconditioner (where $\nu$ is a small regularization) as follows:
|
| 205 |
+
|
| 206 |
+
$$
|
| 207 |
+
P = \frac {1}{\hat {\lambda} _ {\ell} + \nu} U \big (\hat {\Lambda} + \nu I \big) U ^ {\top} + \big (I - U U ^ {\top} \big).
|
| 208 |
+
$$
|
| 209 |
+
|
| 210 |
+
This preconditioner closely approximates the optimal truncated SVD preconditioner when the eigenspectrum $\hat{\Lambda}$ resembles that of the original problem. The cost of using this preconditioner is $\ell$ matrix-vector multiplies (MVMs) and a Cholesky decomposition costing $O\left(\ell^{3}\right)$, where in our problems $\ell \in \{100, 200, 300\}$.

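A minimal dense numpy sketch of this construction (with illustrative sizes; the actual method only ever touches $M(\theta)$ through MVMs and never forms it):

```python
import numpy as np

def nystrom_precond_inverse(M, rank, nu=1e-6, seed=0):
    """Apply P^{-1} for the randomized Nystrom preconditioner P above.
    M is a dense PSD matrix here for clarity; only v -> M v is truly needed."""
    p = M.shape[0]
    omega = np.random.default_rng(seed).standard_normal((p, rank))
    y = M @ omega                                  # `rank` MVMs with M
    m_nys = y @ np.linalg.pinv(omega.T @ y) @ y.T  # (M W)(W^T M W)^{-1}(M W)^T
    lhat, u = np.linalg.eigh((m_nys + m_nys.T) / 2)
    lhat, u = lhat[::-1][:rank], u[:, ::-1][:, :rank]
    lam_l = lhat[-1]                               # l-th largest approx. eigenvalue
    def apply_pinv(v):
        uv = u.T @ v
        # P^{-1} = (lam_l + nu) U (Lhat + nu I)^{-1} U^T + (I - U U^T)
        return (lam_l + nu) * (u @ (uv / (lhat + nu))) + (v - u @ uv)
    return apply_pinv

# Illustrative usage: a PSD matrix with a fast-decaying head and a flat tail.
rng = np.random.default_rng(1)
q, _ = np.linalg.qr(rng.standard_normal((50, 50)))
eigs = np.concatenate([np.logspace(0, -3, 20), np.full(30, 1e-4)])
M = (q * eigs) @ q.T
apply_pinv = nystrom_precond_inverse(M, rank=25)
A = M + 1e-6 * np.eye(50)                          # regularized system, as in CG
PA = np.column_stack([apply_pinv(A[:, j]) for j in range(50)])
```

After preconditioning, the condition number of the system handed to CG drops by orders of magnitude, which is what shortens the iteration counts reported above.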
Projection to SGD-like regions Following the ODE on the parameters leads to linear systems whose conditioning worsens over time, as seen in the middle panel of Figure 1. That plot also shows that the condition number is much lower, and does not grow as rapidly, when the neural network is instead trained to fit the PDE solution using SGD. In principle we would prefer the SGD behavior, but we cannot fit the ground-truth PDE solution directly. Instead, when the condition number grows too large, we refit against our own neural predictions using SGD, thereby restarting from another location in parameter space. That is, every so often we solve $\theta^{\mathrm{SGD}}(t) = \arg \min_{\theta}\int_{\mathcal{X}}\left(\mathrm{N}(\theta ,x) - \mathrm{N}(\theta (t),x)\right)^2 d\mu (x)$, where $\theta(t)$ are the current parameters at time $t$. Performing restarts in this way considerably improves the conditioning: as seen in Figure 3 (left), the number of CG iterations grows substantially more slowly when using SGD restarts.

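A toy numpy sketch of such a restart, with a small one-hidden-layer network standing in for the full PDE solution network (architecture and training settings here are illustrative):

```python
import numpy as np

def net(params, x):
    # Tiny one-hidden-layer stand-in for the PDE solution network N(theta, x).
    w1, b1, w2 = params
    return np.tanh(x[:, None] * w1 + b1) @ w2

def sgd_restart(params, x, width=16, steps=5000, lr=0.1, seed=0):
    """Refit *fresh* parameters to the current network's own predictions,
    restarting from a better-conditioned, SGD-like point in parameter space."""
    target = net(params, x)                        # N(theta(t), .), held fixed
    rng = np.random.default_rng(seed)
    w1 = rng.normal(size=width)
    b1 = np.zeros(width)
    w2 = rng.normal(size=width) / np.sqrt(width)
    n = len(x)
    for _ in range(steps):                         # full-batch gradient descent
        a = np.tanh(x[:, None] * w1 + b1)          # (n, width) activations
        r = a @ w2 - target                        # residuals
        gact = (r[:, None] * w2) * (1 - a**2) / n  # backprop through tanh
        w1 -= lr * (gact * x[:, None]).sum(0)
        b1 -= lr * gact.sum(0)
        w2 -= lr * (a.T @ r) / n
    return w1, b1, w2

# The restarted parameters represent (nearly) the same function,
# even though they sit at a very different point in parameter space.
x = np.linspace(-1, 1, 128)
old = (np.array([0.5, -0.3, 0.2, 0.4]), np.zeros(4), np.array([0.3, 0.2, -0.1, 0.25]))
new = sgd_restart(old, x)
drift = np.abs(net(new, x) - net(old, x)).max()
```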
# 4.3 VALIDATING OUR METHOD

We validate our method with three different PDEs: the wave equation (3+1), the Vlasov equation (6+1), and the Fokker-Planck equation (8+1). For more details on these equations and the experimental setup, see Appendix B. For the wave equation we compare against its analytic solution, while for the remaining equations we compare using the PDE residual (evaluated on samples different from those used in training). For the wave equation, Neural-IVP achieves the lowest error, beating EDNN and finite differences evaluated on a $100 \times 100 \times 100$ grid, as seen in Figure 6. For the remaining equations, Neural-IVP achieves an order of magnitude lower PDE residual than EDNN<sup>1</sup>, as seen in the middle and right panels of Figure 3.


Figure 4: (Left): Neural-IVP's PDE residual over time for wave maps compared to EDNN. (Middle): Neural-IVP solution for the wave maps equation at $t = 0.36$ on the $x = 0$ slice. (Right): Finite difference solution for the wave maps equation on the same slice.

# 5 SOLVING CHALLENGING HYPERBOLIC PDES

We now turn to a challenging PDE: the wave maps equation. This second-order hyperbolic PDE often arises in general relativity and describes the evolution of a scalar field in a curved $(3+1)$-dimensional spacetime. Using Einstein summation notation from differential geometry, the wave maps equation can be expressed as

$$
g^{\mu\nu}\nabla_{\mu}\nabla_{\nu}\phi = g^{\mu\nu}\partial_{\mu}\partial_{\nu}\phi - g^{\mu\nu}\Gamma_{\mu\nu}^{\sigma}\partial_{\sigma}\phi = 0, \tag{8}
$$

where $g$ is the metric tensor expressing the curvature of the space, $\Gamma_{\mu\nu}^{\sigma}$ are the Christoffel symbols (combinations of derivatives of the metric), and $\partial_{\mu}$ are derivatives with respect to the components of both space and time. For the metric, we choose that of a Schwarzschild black hole located at coordinate value $c = -2\hat{x}$ in a Cartesian-like coordinate system:

$$
g_{\mu\nu}\, dx^{\mu} dx^{\nu} = -(1 - r_s/r_c)\, dt^{2} + \Big[\delta_{ij} + \frac{1}{r_c^{2}(r_c/r_s - 1)}(x_i - c_i)(x_j - c_j)\Big]\, dx^{i} dx^{j},
$$

where $r_c = \|x - c\| = \sqrt{\sum_i(x_i - c_i)^2}$ and $r_s = 2M$ is the radius of the event horizon of the black hole; we choose the mass $M = 1/2$ so that $r_s = 1$. We choose a wave-packet initial condition and evolve the solution for time $T = 0.5 = 1M$ inside the box $[-1,1]^3$, with artificial Dirichlet boundary conditions on the boundary $\partial[-1,1]^3$. The event horizon of the black hole lies just outside the computational domain, so we need not worry about complications on or inside the horizon; the scalar field only feels the effect of gravity, and it is integrated for a time short enough that it is not yet pulled inside.

While we do not have an analytic solution to compare to, we plot the PDE residual averaged over the spatial domain in Figure 4, which remains consistently small. We also compare our solver's solution at time $T = 0.36$ against a finite difference solution run on a $150 \times 150 \times 150$ spatial grid, the largest we were able to run with our optimized sparse finite difference solver before running out of memory. Despite the challenging nature of the problem, Neural-IVP is able to produce a consistent solution for this task. Finally, we present an ablation study in Figure 6 showing the gains from using the sinusoidal embedding and from scaling the neural network size and grid for this experiment.

# 6 DISCUSSION

There are many PDEs of interest that are enormously complex to simulate using classical methods, due to the scalability limitations of grids and meshes. At the same time, neural networks have shown promise for solving boundary value problems, but current methods for solving initial value problems can be unstable and limited in scalability and representational power. To address these deficiencies, we presented Neural-IVP, a local-in-time method for approximating the solution to initial value PDEs. Neural-IVP is a compelling option for problems that are computationally challenging for classical methods, such as the $(3+1)$-dimensional wave maps equation in section 5. Continued effort on this front will empower researchers and engineers to simulate physical PDEs at the boundary of what is currently possible to solve, allowing prototyping and experimentation without the massive complexity of modern large-scale grid-based solvers involving mesh generation, mesh refinement, boundaries, excision, parallelization, and communication.

# REFERENCES

Jens Berg and Kaj Nyström. A Unified Deep Artificial Neural Network Approach To Partial Differential Equations In Complex Geometries. Neurocomputing, 317:28–41, 2018.

James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs. Version 0.3.17, 2018. URL http://github.com/google/jax.

Joan Bruna, Benjamin Peherstorfer, and Eric Vanden-Eijnden. Neural Galerkin Scheme with Active Learning for High-Dimensional Evolution Equations. arXiv preprint arXiv:2203.01360, 2022.

Laurent Dinh, Razvan Pascanu, Samy Bengio, and Yoshua Bengio. Sharp minima can generalize for deep nets. In International Conference on Machine Learning, pp. 1019–1028. PMLR, 2017.

M. W. M. G. Dissanayake and N. Phan-Thien. Neural-network-based approximations for solving partial differential equations. Communications in Numerical Methods in Engineering, 10(3):195–201, 1994.

J. Dormand and P. Prince. A Family of Embedded Runge-Kutta Formulae. Journal of Computational and Applied Mathematics, 6(1):19–26, 1980.

Yifan Du and Tamer A Zaki. Evolutional Deep Neural Network. Physical Review E, 104(4):045303, 2021.

Weinan E, Jiequn Han, and Arnulf Jentzen. Deep Learning-Based Numerical Methods for High-Dimensional Parabolic Partial Differential Equations and Backward Stochastic Differential Equations. Communications in Mathematics and Statistics, 5(4):349–380, 2017.

Zachary Frangella, Joel A. Tropp, and Madeleine Udell. Randomized Nyström Preconditioning. arXiv preprint arXiv:2110.02820, 2021.

Jiequn Han, Arnulf Jentzen, and Weinan E. Solving High-Dimensional Partial Differential Equations using Deep Learning. Proceedings of the National Academy of Sciences, 115(34):8505–8510, 2018.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pp. 448–456. PMLR, 2015.

Nikola Kovachki, Zongyi Li, Burigede Liu, Kamyar Azizzadenesheli, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Neural Operator: Learning Maps Between Function Spaces. arXiv preprint arXiv:2108.08481, 2021.

I. E. Lagaris, A. Likas, and D. I. Fotiadis. Artificial Neural Networks for Solving Ordinary and Partial Differential Equations. IEEE Transactions on Neural Networks, 9(5):987–1000, 1998.

L. Lu, P. Jin, and G. E. Karniadakis. DeepONet: Learning Nonlinear Operators for Identifying Differential Equations Based on the Universal Approximation Theorem of Operators. arXiv preprint arXiv:1910.03193, 2019.

Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. NeRF: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99–106, 2021.

Dmytro Mishkin and Jiri Matas. All you need is a good init. arXiv preprint arXiv:1511.06422, 2015.

M. Raissi, P. Perdikaris, and G. E. Karniadakis. Physics-Informed Neural Networks: A Deep Learning Framework for Solving Forward and Inverse Problems Involving Nonlinear Partial Differential Equations. Journal of Computational Physics, 378:686–707, 2019.

Prajit Ramachandran, Barret Zoph, and Quoc V. Le. Swish: a Self-Gated Activation Function. arXiv preprint arXiv:1710.05941, 2017.

Justin Sirignano and Konstantinos Spiliopoulos. DGM: A Deep Learning Algorithm for Solving Partial Differential Equations. Journal of Computational Physics, 375:1339–1364, 2018.

Vincent Sitzmann, Julien Martel, Alexander Bergman, David Lindell, and Gordon Wetzstein. Implicit Neural Representations with Periodic Activation Functions. Advances in Neural Information Processing Systems, 33:7462–7473, 2020.

Bing Yu et al. The Deep Ritz Method: a Deep Learning-Based Numerical Algorithm for Solving Variational Problems. Communications in Mathematics and Statistics, 6(1):1–12, 2018.

# A APPROXIMATE SYMMETRIES YIELD SMALL EIGENVALUES

Suppose that the network $\mathbf{N}$ has an approximate symmetry in the parameters, meaning that there exists a value $\epsilon$ for which

$$
\forall x, \theta : \|\mathrm{N}(x, T_{\alpha}(\theta)) - \mathrm{N}(x, \theta)\|^{2} \leq \epsilon\, \alpha^{2} \tag{9}
$$

holds for $\alpha$ in a neighborhood of 0. If this is the case, we can rearrange the inequality and take the limit as $\alpha \to 0$:

$$
\lim_{\alpha \to 0} \|\mathrm{N}(x, T_{\alpha}(\theta)) - \mathrm{N}(x, \theta)\|^{2} / \alpha^{2} \leq \epsilon. \tag{10}
$$

As the limit $\lim_{\alpha \to 0} \frac{\mathrm{N}(x, T_{\alpha}(\theta)) - \mathrm{N}(x, \theta)}{\alpha} = \frac{\partial}{\partial \alpha} \mathrm{N}(x, T_{\alpha}(\theta))\big|_{\alpha = 0}$ exists, we can interchange the limit and the norm to get

$$
\left\| \nabla_{\theta} \mathrm{N}^{\top} v \right\|^{2} = v^{\top} \nabla_{\theta} \mathrm{N}\, \nabla_{\theta} \mathrm{N}^{\top} v \leq \epsilon, \tag{11}
$$

since $\frac{\partial}{\partial\alpha}\mathrm{N}(x,T_{\alpha}(\theta))\big|_{\alpha = 0} = \nabla_{\theta}\mathrm{N}^{\top}v$ for $v(\theta) = \left.\partial_{\alpha}T_{\alpha}(\theta)\right|_{\alpha = 0}$. Recalling that $M(\theta) = \int \nabla_{\theta}\mathrm{N}\,\nabla_{\theta}\mathrm{N}^{\top}d\mu (x)$, we can take the expectation of both sides of the inequality with respect to $\mu$, producing

$$
v^{\top} M v \leq \epsilon. \tag{12}
$$

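This argument can be checked numerically on a toy network with an exact symmetry. The sketch below uses a small ReLU network (rather than the swish networks used elsewhere in the paper) because ReLU's positive homogeneity gives an exact scaling symmetry $T_\alpha(w_1, w_2) = (e^{\alpha} w_1, e^{-\alpha} w_2)$, whose generator $v = (w_1, -w_2)$ should give $v^\top M v \approx 0$:

```python
import numpy as np

rng = np.random.default_rng(0)
h = 16
w1, w2 = rng.normal(size=h), rng.normal(size=h)
xs = rng.uniform(-1, 1, size=200)

def grad_theta(x):
    # Gradient of N(x, theta) = sum_j w2_j relu(w1_j x) w.r.t. theta = (w1, w2).
    act = np.maximum(w1 * x, 0.0)
    mask = (w1 * x > 0).astype(float)
    return np.concatenate([w2 * x * mask, act])

G = np.stack([grad_theta(x) for x in xs])   # (n, 2h) stacked gradients
M = G.T @ G / len(xs)                       # Monte Carlo estimate of M(theta)
v = np.concatenate([w1, -w2])               # generator of the scaling symmetry
rayleigh = v @ M @ v / (v @ v)              # should vanish (eq. 12 with eps = 0)
```

Since the symmetry is exact here, the Rayleigh quotient vanishes to rounding error, so $M$ has a (near-)zero eigenvalue, exactly the source of the ill-conditioning discussed in the main text.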
# B EXTENDED EXPERIMENTAL RESULTS

In this section we provide additional experimental details that were not fully covered in section 4.3.

# B.1 WAVE EQUATION

For this experiment we use the three-dimensional wave equation $\partial_t^2 u = \Delta u$, which has several well-known analytic solutions; even this equation is computationally taxing for finite difference and finite element methods. We use the radially symmetric outgoing wave solution $u(x,t) = f(t - \| x\|) / \| x\|$ with $f(s) = 2s^{2}e^{-200s^{2}}$ and integrate the initial condition forward in time by $T = 0.5$ seconds. In Figure 6 we compare against the analytic solution for each of the solutions produced by Neural-IVP, EDNN (Du & Zaki, 2021), and finite differences evaluated on a $100\times 100\times 100$ grid with RK45 set to a $10^{-4}$ tolerance. Even though this initial condition does not contain the fine-scale detail at which Neural-IVP excels, Neural-IVP performs best among the three solvers and faithfully reproduces the solution, as shown in Figure 6 (left).

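The analytic solution can be verified numerically. The sketch below (plain numpy, with an arbitrarily chosen evaluation point away from the $r = 0$ singularity) checks that $\partial_t^2 u - \Delta u \approx 0$ by central differences:

```python
import numpy as np

def u(x, y, z, t):
    # Radially symmetric outgoing wave u = f(t - r)/r, f(s) = 2 s^2 exp(-200 s^2).
    r = np.sqrt(x * x + y * y + z * z)
    s = t - r
    return 2 * s**2 * np.exp(-200 * s**2) / r

# Central-difference check of the residual d^2u/dt^2 - Laplacian(u).
h = 1e-4
p = (0.3, 0.2, 0.1, 0.25)               # (x, y, z, t), illustrative point

def second_diff(i):
    lo, hi = list(p), list(p)
    lo[i] -= h
    hi[i] += h
    return (u(*hi) - 2 * u(*p) + u(*lo)) / h**2

residual = second_diff(3) - sum(second_diff(i) for i in range(3))
```

The residual is several orders of magnitude smaller than the individual second derivatives, consistent with $u$ solving the wave equation exactly away from the origin.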
# B.2 VLASOV EQUATION

The Vlasov equation describes the evolution of the density of collisionless but charged particles in an electric field, expressed as a function of both position and velocity. The equation has spatial dimension 6 and takes the following form:

$$
\partial_t u(x, v, t) + v^{\top}\nabla_x u(x, v, t) + \frac{q}{m} E(x, t)^{\top}\nabla_v u(x, v, t) = 0,
$$

where the vector $x \in \mathbb{R}^3$ represents the position of the particles, $v \in \mathbb{R}^3$ represents their velocity, and $u$ is a normalized probability density over $(x, v)$. Here $q$ is the charge of the particles and $m$ is their mass (both of which we set to 1). In a self-contained treatment of the Vlasov equation, the electric field $E(x, t)$ is itself induced by the density of charged particles: $E(x, t) = -\nabla_x \phi(x, t)$, where the potential $\phi$ solves the Poisson equation $\Delta \phi = -\rho$ with $\rho(x, t) = \int q\, u(x, v, t)\, dv$. However, to simplify the setting to a pure IVP, we assume that $E(x, t)$ is a known, fixed electric field.

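A small numpy sketch of the right-hand side $-v^\top\nabla_x u - (q/m)E^\top\nabla_v u$, using central differences for the gradients (the density and field below match the experimental choices; the step size is illustrative). For a Gaussian $u_0$, $\nabla_x u_0 = -(x/\sigma^2)\, u_0$, so the right-hand side has a closed form we can check against:

```python
import numpy as np

SIG2 = 0.3**2

def u0(x, v):
    # Product-of-Gaussians initial condition N(x; 0, 0.3^2 I) N(v; 0, 0.3^2 I).
    return np.exp(-(x @ x + v @ v) / (2 * SIG2)) / (2 * np.pi * SIG2) ** 3

def E(x):
    # Fixed field E(x) = grad_x exp(-|x|^2) = -2 x exp(-|x|^2).
    return -2 * x * np.exp(-(x @ x))

def vlasov_rhs(u, x, v, q_over_m=1.0, h=1e-5):
    # du/dt = -v . grad_x u - (q/m) E(x) . grad_v u, via central differences.
    gx, gv = np.zeros(3), np.zeros(3)
    for i in range(3):
        e = np.zeros(3)
        e[i] = h
        gx[i] = (u(x + e, v) - u(x - e, v)) / (2 * h)
        gv[i] = (u(x, v + e) - u(x, v - e)) / (2 * h)
    return -v @ gx - q_over_m * E(x) @ gv

x = v = np.full(3, 0.1)
rhs = vlasov_rhs(u0, x, v)
rhs_exact = ((v @ x) / SIG2 + (E(x) @ v) / SIG2) * u0(x, v)
```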
For this particular example, we choose $E(x) = \nabla_{x}\exp(-\|x\|_{2}^{2})$, and the initial condition is a product of two Gaussians,

$$
u_{0}(x, v) = \mathcal{N}(x; 0, 0.3^{2} I)\, \mathcal{N}(v; 0, 0.3^{2} I),
$$

corresponding to a Maxwell-Boltzmann distribution over velocities and an isotropic Gaussian distribution over position, and we solve the problem on the cube $[-1, 1]^6$.

# B.3 FOKKER-PLANCK

For the Fokker-Planck equation, we choose the harmonic trap for a collection of $d = 8$ interacting particles from Bruna et al. (2022), giving rise to the equation

$$
\partial_t u(x, t) = D\, \Delta u(x, t) - \nabla \cdot (h u),
$$

where $h(x) = (a - x) + \alpha (\mathbf{1}\mathbf{1}^{\top}/d - I)x$. We choose $a = 0.2\,\mathbf{1}$ along with constants $D = 0.01$ and $\alpha = 1/4$.

We solve this equation in $d = 8$ dimensions with the initial condition

$$
u_{0}(x) = \left(\frac{3}{4}\right)^{d} \prod_{i = 1}^{d}(1 - x_{i}^{2}),
$$

which is a normalized probability distribution on $[-1,1]^d$. We use the same Dirichlet boundary conditions for this problem.

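The normalization claim is easy to verify: each one-dimensional factor $\frac{3}{4}(1 - x_i^2)$ integrates to 1 over $[-1, 1]$, so the $d$-fold product does as well. A quick quadrature check in numpy:

```python
import numpy as np

# (3/4) * int_{-1}^{1} (1 - x^2) dx = (3/4) * (2 - 2/3) = 1,
# so the d-fold product is a normalized density on [-1, 1]^d.
xs, w = np.polynomial.legendre.leggauss(20)   # exact for this quadratic
one_dim_integral = w @ (0.75 * (1 - xs**2))
```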
Additionally, since we can naturally increase the dimensionality of the Fokker-Planck equation, we explore up to what dimension Neural-IVP can deliver reasonable PDE residuals. As seen in Figure 5, Neural-IVP can still produce solutions for $d = 20$. For dimensions higher than 20, the linear system solvers cannot converge to the tolerance needed to evolve the parameters. We caution that these higher-dimensional solutions are not guaranteed to be of high quality, since the PDE residual estimate also worsens as the spatial dimension increases.

# C HYPERPARAMETERS

We used the following hyperparameters for Neural-IVP unless otherwise specified:

1. RK45 integrator with rtol 1e-4 (for all equations except wave maps, which uses RK23)
2. Number of Monte Carlo samples: 50K for wave maps and 10–20K for all other PDEs
3. Maximum CG iterations: 1,000
4. CG tolerance: 1e-8
5. Nyström preconditioner rank: 200–350
6. Linear system regularization: 1e-6
7. Initial fit iterations, optimizer, and learning rate: 50K, Adam, 1e-3


Figure 5: (Left): Neural-IVP is able to provide solutions up to dimension $20 + 1$ for the Fokker-Planck equation; in higher dimensions, the linear systems used to evolve the PDE parameters break down. (Right): Interventions that transform EDNN into Neural-IVP. (I1) First, add a sinusoidal embedding before the MLP. (I2) Second, use head fine-tuning (here there is no notable improvement, as the initial condition does not possess finer details). Finally, scale the neural network and the grid size; note that this is only possible due to our scalable and efficient construction.


Figure 6: (Left): Neural-IVP fit of the wave equation through time. (Right): Neural-IVP performs slightly better than a finite difference method on the 3D wave equation.


8. Floating point precision: double
9. Number of restarts: 10

The neural network architecture we use is a simple MLP with 3 hidden layers of 100 units each, with $L = 5$ as the highest frequency power in the sinusoidal embedding, and swish nonlinearities. The initial sinusoidal embedding values $\gamma(p)$ are scaled by 1.5 before being fed into the network.

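The embedding itself is not spelled out in this appendix, so the sketch below assumes the standard NeRF-style form (Mildenhall et al., 2021) with $L = 5$ frequency levels and the 1.5 scaling mentioned above; the exact frequency convention in the actual implementation may differ:

```python
import numpy as np

def sinusoidal_embedding(p, L=5, scale=1.5):
    # Assumed NeRF-style embedding: sin/cos of each coordinate at frequencies
    # pi * 2^0, ..., pi * 2^{L-1}, with the 1.5 scaling applied before the MLP.
    p = np.atleast_1d(p)
    freqs = (2.0 ** np.arange(L)) * np.pi
    angles = p[..., None] * freqs                       # (..., d, L)
    emb = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return scale * emb.reshape(*p.shape[:-1], -1)       # (..., 2 d L)

gamma = sinusoidal_embedding(np.array([0.5, -0.25]))    # one 2-d input point
```

For a $d$-dimensional input this produces $2dL$ features, giving the MLP direct access to the high-frequency components that plain coordinates lack.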
Algorithm 1 NEURAL-IVP

1: Input:
  1. IVP: initial condition $u_{0}(x)$, PDE rhs operator $\mathcal{L}[u](x,t)$, integration time $T$
  2. Design choices: neural network architecture $\mathrm{N}(x,\theta)$, sampling distribution $\mu$
  3. Hyperparameters: number of Monte Carlo samples $n$, ODE_TOL, CG_TOL, regularization $\mu$, preconditioner rank $r$
2: Output: solution $u(x,t)$ at specified times $t_1,\ldots,t_N \leq T$

3: function NEURAL-IVP
4:  $\theta \gets \mathrm{FitFunction}(u_0)$
5:  $\Delta t \gets 20\,\mathrm{ODE\_TOL}$
6:  while $t < T$ do
7:   if sufficient time since last restart then
8:    $\theta \gets \mathrm{FitFunction}(\mathrm{N}(\cdot ,\theta))$
9:   $\theta, \Delta t \gets \mathrm{Adaptive\_RK23\_Step}(\mathrm{Dynamics}, \theta, \Delta t, \mathrm{ODE\_TOL})$
10:   $t \gets t + \Delta t$
  return $[\mathrm{N}(\cdot ,\theta_{t_1}),\dots ,\mathrm{N}(\cdot ,\theta_{t_N})]$

11: function FITFUNCTION($u$)
12:  $\theta = \arg \min_{\theta}\mathbb{E}_{x\sim \mu}\| \mathrm{N}(x,\theta) - u(x)\| ^2$, minimized with Adam
13:  Separate last-layer weights $W$ (including bias) from $\mathrm{N}(x,\theta) = W^{\top}\Phi_{\theta}(x)$
14:  Solve for $W$ from regularized least squares over samples $X$:
15:  $W \gets (\Phi (X)^{\top}\Phi (X) + \lambda I)^{-1}\Phi (X)^{\top}u(X)$
16:  Assemble $\theta \gets [\theta_{[:-1]}, W]$; return $\theta$

17: function DYNAMICS($\theta$)
18:  Let $D\mathrm{N}(x_{i},\theta)v$ denote the Jacobian-vector product of $\mathrm{N}(x_i,\theta)$ with $v$, taken with respect to $\theta$
19:  Construct efficient MVM $\hat{M}(\theta)v \coloneqq \nabla_v\frac{1}{2n}\sum_{i = 1}^n \|D\mathrm{N}(x_i,\theta)v\|^2$
20:  Construct RHS $\hat{F}(\theta) = \nabla_v\frac{1}{n}\sum_{i = 1}^n\mathcal{L}[\mathrm{N}](x_i,\theta)^{\top}D\mathrm{N}(x_i,\theta)v$
21:  Construct rank-$r$ Nyström preconditioner $P$ using $\hat{M}(\theta)$ MVMs
22:  Solve $(\hat{M}(\theta) + \mu I)\dot{\theta} = \hat{F}(\theta)$ for $\dot{\theta}$ using conjugate gradients with preconditioner $P$
23:  return $\dot{\theta}$

# D NEURAL-IVP PSEUDO-CODE

The pseudo-code for Neural-IVP is presented in Algorithm 1. To reduce clutter, we have omitted the logic for setting step sizes so that $\theta$ is output by the integrator at the specified times $t_1, t_2, \ldots, t_N$. The sampling RNG state is updated outside the RK23 step but inside the time-evolution while loop.

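The matrix-free MVM in the Dynamics routine can be illustrated in numpy by materializing the per-sample gradients of a toy network; in the real implementation these rows are never formed, and the same product is computed with Jacobian-vector products (e.g. `jax.jvp`/`jax.vjp`):

```python
import numpy as np

rng = np.random.default_rng(0)
n, h = 64, 8
xs = rng.uniform(-1, 1, size=n)
w1, b1, w2 = rng.normal(size=h), rng.normal(size=h), rng.normal(size=h)

def grad_N(x):
    # Per-sample gradient of N(x, theta) = w2 . tanh(w1 x + b1) with respect
    # to theta = (w1, b1, w2); a stand-in for the rows DN(x_i, theta).
    a = np.tanh(w1 * x + b1)
    da = 1 - a**2
    return np.concatenate([w2 * da * x, w2 * da, a])

G = np.stack([grad_N(x) for x in xs])   # (n, p) Jacobian, illustrative only

def M_mvm(v):
    # hat{M}(theta) v = (1/n) sum_i DN_i^T (DN_i v): two matrix-free passes.
    return G.T @ (G @ v) / n

vv = rng.standard_normal(3 * h)
Mv = M_mvm(vv)
```

By construction the resulting operator is symmetric positive semi-definite, which is what makes preconditioned CG applicable in Algorithm 1.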