Add Batch 0d68f3e4-4646-4d1a-a1f0-831b958536d0 data
This view is limited to 50 files because it contains too many changes. See raw diff
- .gitattributes +64 -0
- 2024/$_mathcal{B}$-Coder_ Value-Based Deep Reinforcement Learning for Program Synthesis/f0daf96a-4478-4cb0-896e-4538a0e68035_content_list.json +0 -0
- 2024/$_mathcal{B}$-Coder_ Value-Based Deep Reinforcement Learning for Program Synthesis/f0daf96a-4478-4cb0-896e-4538a0e68035_model.json +0 -0
- 2024/$_mathcal{B}$-Coder_ Value-Based Deep Reinforcement Learning for Program Synthesis/f0daf96a-4478-4cb0-896e-4538a0e68035_origin.pdf +3 -0
- 2024/$_mathcal{B}$-Coder_ Value-Based Deep Reinforcement Learning for Program Synthesis/full.md +574 -0
- 2024/$_mathcal{B}$-Coder_ Value-Based Deep Reinforcement Learning for Program Synthesis/images.zip +3 -0
- 2024/$_mathcal{B}$-Coder_ Value-Based Deep Reinforcement Learning for Program Synthesis/layout.json +0 -0
- 2024/$_texttt{NAISR}$_ A 3D Neural Additive Model for Interpretable Shape Representation/7ee4925b-8b13-43da-aa85-069a25707236_content_list.json +0 -0
- 2024/$_texttt{NAISR}$_ A 3D Neural Additive Model for Interpretable Shape Representation/7ee4925b-8b13-43da-aa85-069a25707236_model.json +0 -0
- 2024/$_texttt{NAISR}$_ A 3D Neural Additive Model for Interpretable Shape Representation/7ee4925b-8b13-43da-aa85-069a25707236_origin.pdf +3 -0
- 2024/$_texttt{NAISR}$_ A 3D Neural Additive Model for Interpretable Shape Representation/full.md +0 -0
- 2024/$_texttt{NAISR}$_ A 3D Neural Additive Model for Interpretable Shape Representation/images.zip +3 -0
- 2024/$_texttt{NAISR}$_ A 3D Neural Additive Model for Interpretable Shape Representation/layout.json +0 -0
- 2024/A Benchmark for Learning to Translate a New Language from One Grammar Book/9fda41e2-544c-4f91-8ab6-462a4a922809_content_list.json +0 -0
- 2024/A Benchmark for Learning to Translate a New Language from One Grammar Book/9fda41e2-544c-4f91-8ab6-462a4a922809_model.json +0 -0
- 2024/A Benchmark for Learning to Translate a New Language from One Grammar Book/9fda41e2-544c-4f91-8ab6-462a4a922809_origin.pdf +3 -0
- 2024/A Benchmark for Learning to Translate a New Language from One Grammar Book/full.md +0 -0
- 2024/A Benchmark for Learning to Translate a New Language from One Grammar Book/images.zip +3 -0
- 2024/A Benchmark for Learning to Translate a New Language from One Grammar Book/layout.json +0 -0
- 2024/A General Framework for User-Guided Bayesian Optimization/168ed08a-b567-41d2-abd2-5e82251d79b2_content_list.json +0 -0
- 2024/A General Framework for User-Guided Bayesian Optimization/168ed08a-b567-41d2-abd2-5e82251d79b2_model.json +0 -0
- 2024/A General Framework for User-Guided Bayesian Optimization/168ed08a-b567-41d2-abd2-5e82251d79b2_origin.pdf +3 -0
- 2024/A General Framework for User-Guided Bayesian Optimization/full.md +413 -0
- 2024/A General Framework for User-Guided Bayesian Optimization/images.zip +3 -0
- 2024/A General Framework for User-Guided Bayesian Optimization/layout.json +0 -0
- 2024/A Hierarchical Bayesian Model for Few-Shot Meta Learning/835d9bdc-b1d0-4584-861e-7d0b76aaea95_content_list.json +0 -0
- 2024/A Hierarchical Bayesian Model for Few-Shot Meta Learning/835d9bdc-b1d0-4584-861e-7d0b76aaea95_model.json +0 -0
- 2024/A Hierarchical Bayesian Model for Few-Shot Meta Learning/835d9bdc-b1d0-4584-861e-7d0b76aaea95_origin.pdf +3 -0
- 2024/A Hierarchical Bayesian Model for Few-Shot Meta Learning/full.md +0 -0
- 2024/A Hierarchical Bayesian Model for Few-Shot Meta Learning/images.zip +3 -0
- 2024/A Hierarchical Bayesian Model for Few-Shot Meta Learning/layout.json +0 -0
- 2024/A Lightweight Method for Tackling Unknown Participation Statistics in Federated Averaging/95f5339c-dfa7-49a6-b487-698cb4f07243_content_list.json +0 -0
- 2024/A Lightweight Method for Tackling Unknown Participation Statistics in Federated Averaging/95f5339c-dfa7-49a6-b487-698cb4f07243_model.json +0 -0
- 2024/A Lightweight Method for Tackling Unknown Participation Statistics in Federated Averaging/95f5339c-dfa7-49a6-b487-698cb4f07243_origin.pdf +3 -0
- 2024/A Lightweight Method for Tackling Unknown Participation Statistics in Federated Averaging/full.md +0 -0
- 2024/A Lightweight Method for Tackling Unknown Participation Statistics in Federated Averaging/images.zip +3 -0
- 2024/A Lightweight Method for Tackling Unknown Participation Statistics in Federated Averaging/layout.json +0 -0
- 2024/A Mutual Information Perspective on Federated Contrastive Learning/05751880-dcb3-420c-98e6-29ac0eada528_content_list.json +0 -0
- 2024/A Mutual Information Perspective on Federated Contrastive Learning/05751880-dcb3-420c-98e6-29ac0eada528_model.json +0 -0
- 2024/A Mutual Information Perspective on Federated Contrastive Learning/05751880-dcb3-420c-98e6-29ac0eada528_origin.pdf +3 -0
- 2024/A Mutual Information Perspective on Federated Contrastive Learning/full.md +473 -0
- 2024/A Mutual Information Perspective on Federated Contrastive Learning/images.zip +3 -0
- 2024/A Mutual Information Perspective on Federated Contrastive Learning/layout.json +0 -0
- 2024/A Poincaré Inequality and Consistency Results for Signal Sampling on Large Graphs/e82becdb-9086-4140-813d-b7925e4a2ac8_content_list.json +0 -0
- 2024/A Poincaré Inequality and Consistency Results for Signal Sampling on Large Graphs/e82becdb-9086-4140-813d-b7925e4a2ac8_model.json +0 -0
- 2024/A Poincaré Inequality and Consistency Results for Signal Sampling on Large Graphs/e82becdb-9086-4140-813d-b7925e4a2ac8_origin.pdf +3 -0
- 2024/A Poincaré Inequality and Consistency Results for Signal Sampling on Large Graphs/full.md +749 -0
- 2024/A Poincaré Inequality and Consistency Results for Signal Sampling on Large Graphs/images.zip +3 -0
- 2024/A Poincaré Inequality and Consistency Results for Signal Sampling on Large Graphs/layout.json +0 -0
- 2024/A path-norm toolkit for modern networks_ consequences, promises and challenges/ced837d8-0083-44eb-9558-eded545a0f0a_content_list.json +0 -0
.gitattributes
CHANGED
|
@@ -5605,3 +5605,67 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
|
|
| 5605 |
2024/When[[:space:]]Scaling[[:space:]]Meets[[:space:]]LLM[[:space:]]Finetuning_[[:space:]]The[[:space:]]Effect[[:space:]]of[[:space:]]Data,[[:space:]]Model[[:space:]]and[[:space:]]Finetuning[[:space:]]Method/b69ac0ac-e90e-485e-bfc4-545446d6d34f_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5606 |
2024/When[[:space:]]Semantic[[:space:]]Segmentation[[:space:]]Meets[[:space:]]Frequency[[:space:]]Aliasing/3750dcf4-3a1b-45b0-9226-503e70091164_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5607 |
2024/When[[:space:]]can[[:space:]]transformers[[:space:]]reason[[:space:]]with[[:space:]]abstract[[:space:]]symbols_/17fff0fe-d680-4080-b875-61d35d5ed057_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5608 |
+
2024/$_mathcal{B}$-Coder_[[:space:]]Value-Based[[:space:]]Deep[[:space:]]Reinforcement[[:space:]]Learning[[:space:]]for[[:space:]]Program[[:space:]]Synthesis/f0daf96a-4478-4cb0-896e-4538a0e68035_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5609 |
+
2024/$_texttt{NAISR}$_[[:space:]]A[[:space:]]3D[[:space:]]Neural[[:space:]]Additive[[:space:]]Model[[:space:]]for[[:space:]]Interpretable[[:space:]]Shape[[:space:]]Representation/7ee4925b-8b13-43da-aa85-069a25707236_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5610 |
+
2024/A[[:space:]]Benchmark[[:space:]]for[[:space:]]Learning[[:space:]]to[[:space:]]Translate[[:space:]]a[[:space:]]New[[:space:]]Language[[:space:]]from[[:space:]]One[[:space:]]Grammar[[:space:]]Book/9fda41e2-544c-4f91-8ab6-462a4a922809_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5611 |
+
2024/A[[:space:]]General[[:space:]]Framework[[:space:]]for[[:space:]]User-Guided[[:space:]]Bayesian[[:space:]]Optimization/168ed08a-b567-41d2-abd2-5e82251d79b2_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5612 |
+
2024/A[[:space:]]Hierarchical[[:space:]]Bayesian[[:space:]]Model[[:space:]]for[[:space:]]Few-Shot[[:space:]]Meta[[:space:]]Learning/835d9bdc-b1d0-4584-861e-7d0b76aaea95_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5613 |
+
2024/A[[:space:]]Lightweight[[:space:]]Method[[:space:]]for[[:space:]]Tackling[[:space:]]Unknown[[:space:]]Participation[[:space:]]Statistics[[:space:]]in[[:space:]]Federated[[:space:]]Averaging/95f5339c-dfa7-49a6-b487-698cb4f07243_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5614 |
+
2024/A[[:space:]]Mutual[[:space:]]Information[[:space:]]Perspective[[:space:]]on[[:space:]]Federated[[:space:]]Contrastive[[:space:]]Learning/05751880-dcb3-420c-98e6-29ac0eada528_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5615 |
+
2024/A[[:space:]]Poincaré[[:space:]]Inequality[[:space:]]and[[:space:]]Consistency[[:space:]]Results[[:space:]]for[[:space:]]Signal[[:space:]]Sampling[[:space:]]on[[:space:]]Large[[:space:]]Graphs/e82becdb-9086-4140-813d-b7925e4a2ac8_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5616 |
+
2024/A[[:space:]]path-norm[[:space:]]toolkit[[:space:]]for[[:space:]]modern[[:space:]]networks_[[:space:]]consequences,[[:space:]]promises[[:space:]]and[[:space:]]challenges/ced837d8-0083-44eb-9558-eded545a0f0a_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5617 |
+
2024/AMAGO_[[:space:]]Scalable[[:space:]]In-Context[[:space:]]Reinforcement[[:space:]]Learning[[:space:]]for[[:space:]]Adaptive[[:space:]]Agents/3362fac6-df77-46fc-9d35-f27abffa61a1_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5618 |
+
2024/Accelerating[[:space:]]Data[[:space:]]Generation[[:space:]]for[[:space:]]Neural[[:space:]]Operators[[:space:]]via[[:space:]]Krylov[[:space:]]Subspace[[:space:]]Recycling/b7e96783-9a06-4c6f-968b-4f55e8bde611_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5619 |
+
2024/Adaptive[[:space:]]Chameleon[[:space:]]or[[:space:]]Stubborn[[:space:]]Sloth_[[:space:]]Revealing[[:space:]]the[[:space:]]Behavior[[:space:]]of[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]in[[:space:]]Knowledge[[:space:]]Conflicts/8337521b-fdf8-4327-bda9-64a1530644e8_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5620 |
+
2024/Adaptive[[:space:]]Rational[[:space:]]Activations[[:space:]]to[[:space:]]Boost[[:space:]]Deep[[:space:]]Reinforcement[[:space:]]Learning/823c34dc-c426-4eb3-828d-ee51078c5e70_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5621 |
+
2024/Addressing[[:space:]]Signal[[:space:]]Delay[[:space:]]in[[:space:]]Deep[[:space:]]Reinforcement[[:space:]]Learning/30c95151-f2de-4045-ab49-75e38047ad97_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5622 |
+
2024/Adversarial[[:space:]]AutoMixup/51bc7332-976c-4305-8ab7-b6e024fbaeec_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5623 |
+
2024/An[[:space:]]Image[[:space:]]Is[[:space:]]Worth[[:space:]]1000[[:space:]]Lies_[[:space:]]Transferability[[:space:]]of[[:space:]]Adversarial[[:space:]]Images[[:space:]]across[[:space:]]Prompts[[:space:]]on[[:space:]]Vision-Language[[:space:]]Models/12af1da0-da92-4f49-8085-cbf72d30cf2e_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5624 |
+
2024/Analyzing[[:space:]]Feed-Forward[[:space:]]Blocks[[:space:]]in[[:space:]]Transformers[[:space:]]through[[:space:]]the[[:space:]]Lens[[:space:]]of[[:space:]]Attention[[:space:]]Maps/c604a5c3-e63f-46ee-b729-b1b2100f0082_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5625 |
+
2024/AnimateDiff_[[:space:]]Animate[[:space:]]Your[[:space:]]Personalized[[:space:]]Text-to-Image[[:space:]]Diffusion[[:space:]]Models[[:space:]]without[[:space:]]Specific[[:space:]]Tuning/84184080-047d-43ce-b901-5feda5db6039_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5626 |
+
2024/AnyText_[[:space:]]Multilingual[[:space:]]Visual[[:space:]]Text[[:space:]]Generation[[:space:]]and[[:space:]]Editing/a061a624-975f-44a0-83d7-51ed21f68ff2_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5627 |
+
2024/Asymptotically[[:space:]]Free[[:space:]]Sketched[[:space:]]Ridge[[:space:]]Ensembles_[[:space:]]Risks,[[:space:]]Cross-Validation,[[:space:]]and[[:space:]]Tuning/cad7156f-8690-4241-b4f7-fac210bd99e8_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5628 |
+
2024/At[[:space:]]Which[[:space:]]Training[[:space:]]Stage[[:space:]]Does[[:space:]]Code[[:space:]]Data[[:space:]]Help[[:space:]]LLMs[[:space:]]Reasoning_/be14cad9-2707-4b7b-babb-7157cd28492f_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5629 |
+
2024/BECLR_[[:space:]]Batch[[:space:]]Enhanced[[:space:]]Contrastive[[:space:]]Few-Shot[[:space:]]Learning/8e97b7d2-5ac5-42ca-bec0-cca1d984f55d_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5630 |
+
2024/BTR_[[:space:]]Binary[[:space:]]Token[[:space:]]Representations[[:space:]]for[[:space:]]Efficient[[:space:]]Retrieval[[:space:]]Augmented[[:space:]]Language[[:space:]]Models/57a5494c-fe58-4751-8229-52847b0a101e_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5631 |
+
2024/Bandits[[:space:]]Meet[[:space:]]Mechanism[[:space:]]Design[[:space:]]to[[:space:]]Combat[[:space:]]Clickbait[[:space:]]in[[:space:]]Online[[:space:]]Recommendation/a70cb086-43c9-48f2-a557-9bf37bcda643_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5632 |
+
2024/BarLeRIa_[[:space:]]An[[:space:]]Efficient[[:space:]]Tuning[[:space:]]Framework[[:space:]]for[[:space:]]Referring[[:space:]]Image[[:space:]]Segmentation/7a93d292-b3a8-49dd-b199-e3a30d27a68c_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5633 |
+
2024/BatteryML_[[:space:]]An[[:space:]]Open-source[[:space:]]Platform[[:space:]]for[[:space:]]Machine[[:space:]]Learning[[:space:]]on[[:space:]]Battery[[:space:]]Degradation/95a14ab5-0e79-4f5b-9cae-68bc57b5d172_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5634 |
+
2024/Benchmarking[[:space:]]Algorithms[[:space:]]for[[:space:]]Federated[[:space:]]Domain[[:space:]]Generalization/2be98de2-8307-4522-be7e-a8fa907f25eb_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5635 |
+
2024/Bespoke[[:space:]]Solvers[[:space:]]for[[:space:]]Generative[[:space:]]Flow[[:space:]]Models/c11999c2-618c-49ae-8aee-d8ce1c35006e_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5636 |
+
2024/Beyond[[:space:]]Memorization_[[:space:]]Violating[[:space:]]Privacy[[:space:]]via[[:space:]]Inference[[:space:]]with[[:space:]]Large[[:space:]]Language[[:space:]]Models/7c93acfd-5d0b-4820-9e70-4fb1c92df251_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5637 |
+
2024/Beyond[[:space:]]Reverse[[:space:]]KL_[[:space:]]Generalizing[[:space:]]Direct[[:space:]]Preference[[:space:]]Optimization[[:space:]]with[[:space:]]Diverse[[:space:]]Divergence[[:space:]]Constraints/d672f63a-1912-4d59-bb74-aa217c72b6ea_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5638 |
+
2024/Beyond[[:space:]]Worst-case[[:space:]]Attacks_[[:space:]]Robust[[:space:]]RL[[:space:]]with[[:space:]]Adaptive[[:space:]]Defense[[:space:]]via[[:space:]]Non-dominated[[:space:]]Policies/8f2789af-ea6c-4fe8-bec9-2a13f32dcbca_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5639 |
+
2024/Bilevel[[:space:]]Optimization[[:space:]]under[[:space:]]Unbounded[[:space:]]Smoothness_[[:space:]]A[[:space:]]New[[:space:]]Algorithm[[:space:]]and[[:space:]]Convergence[[:space:]]Analysis/8eff9d13-ec8f-4bad-a544-8c1d1cdc77db_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5640 |
+
2024/Blending[[:space:]]Imitation[[:space:]]and[[:space:]]Reinforcement[[:space:]]Learning[[:space:]]for[[:space:]]Robust[[:space:]]Policy[[:space:]]Improvement/40ca0bd5-24a5-41fc-a31f-10a8fbab4db0_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5641 |
+
2024/Bounding[[:space:]]Box[[:space:]]Stability[[:space:]]against[[:space:]]Feature[[:space:]]Dropout[[:space:]]Reflects[[:space:]]Detector[[:space:]]Generalization[[:space:]]across[[:space:]]Environments/e321d3cd-8ce2-44da-9c69-50c492c22e94_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5642 |
+
2024/Bounds[[:space:]]on[[:space:]]Representation-Induced[[:space:]]Confounding[[:space:]]Bias[[:space:]]for[[:space:]]Treatment[[:space:]]Effect[[:space:]]Estimation/d4c8b37f-a9ba-4a2e-baa7-33e0f77c25c3_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5643 |
+
2024/When[[:space:]]should[[:space:]]we[[:space:]]prefer[[:space:]]Decision[[:space:]]Transformers[[:space:]]for[[:space:]]Offline[[:space:]]Reinforcement[[:space:]]Learning_/a2b89f6c-adab-46fd-9f57-dcd958e078fe_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5644 |
+
2024/Where[[:space:]]We[[:space:]]Have[[:space:]]Arrived[[:space:]]in[[:space:]]Proving[[:space:]]the[[:space:]]Emergence[[:space:]]of[[:space:]]Sparse[[:space:]]Interaction[[:space:]]Primitives[[:space:]]in[[:space:]]DNNs/42966302-3880-4ead-99a0-ef0c1146626b_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5645 |
+
2024/Whittle[[:space:]]Index[[:space:]]with[[:space:]]Multiple[[:space:]]Actions[[:space:]]and[[:space:]]State[[:space:]]Constraint[[:space:]]for[[:space:]]Inventory[[:space:]]Management/92e5b3d4-ee0e-4319-ab5e-cca361ca9067_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5646 |
+
2024/Why[[:space:]]is[[:space:]]SAM[[:space:]]Robust[[:space:]]to[[:space:]]Label[[:space:]]Noise_/7fb6428d-193a-46ea-83b7-ae02df3a7942_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5647 |
+
2024/WildFusion_[[:space:]]Learning[[:space:]]3D-Aware[[:space:]]Latent[[:space:]]Diffusion[[:space:]]Models[[:space:]]in[[:space:]]View[[:space:]]Space/6f2ef11d-76d2-4f3b-b55a-9e18713fef6e_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5648 |
+
2024/Win-Win_[[:space:]]Training[[:space:]]High-Resolution[[:space:]]Vision[[:space:]]Transformers[[:space:]]from[[:space:]]Two[[:space:]]Windows/c5126b84-0740-488f-b4f1-2e7672765edf_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5649 |
+
2024/Window[[:space:]]Attention[[:space:]]is[[:space:]]Bugged_[[:space:]]How[[:space:]]not[[:space:]]to[[:space:]]Interpolate[[:space:]]Position[[:space:]]Embeddings/f70e34fc-2146-49a5-9426-201df05e2c2b_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5650 |
+
2024/WizardCoder_[[:space:]]Empowering[[:space:]]Code[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]with[[:space:]]Evol-Instruct/ae359044-7589-4c6f-bfbd-c6ea419d0b5f_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5651 |
+
2024/WizardLM_[[:space:]]Empowering[[:space:]]Large[[:space:]]Pre-Trained[[:space:]]Language[[:space:]]Models[[:space:]]to[[:space:]]Follow[[:space:]]Complex[[:space:]]Instructions/db8d55af-8fae-4250-a0b6-7be4c61be6f6_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5652 |
+
2024/Xformer_[[:space:]]Hybrid[[:space:]]X-Shaped[[:space:]]Transformer[[:space:]]for[[:space:]]Image[[:space:]]Denoising/bc104c19-f551-47fc-aea5-6ee9abce851a_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5653 |
+
2024/YaRN_[[:space:]]Efficient[[:space:]]Context[[:space:]]Window[[:space:]]Extension[[:space:]]of[[:space:]]Large[[:space:]]Language[[:space:]]Models/274e9257-ce3e-48e4-9613-343b3ef398a1_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5654 |
+
2024/Yet[[:space:]]Another[[:space:]]ICU[[:space:]]Benchmark_[[:space:]]A[[:space:]]Flexible[[:space:]]Multi-Center[[:space:]]Framework[[:space:]]for[[:space:]]Clinical[[:space:]]ML/fd914c27-c2b1-4f58-a423-d1720a7ee72e_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5655 |
+
2024/You[[:space:]]Only[[:space:]]Query[[:space:]]Once_[[:space:]]An[[:space:]]Efficient[[:space:]]Label-Only[[:space:]]Membership[[:space:]]Inference[[:space:]]Attack/ea24a96c-7b1e-4ea8-8d43-520ebdec28b9_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5656 |
+
2024/ZeRO++_[[:space:]]Extremely[[:space:]]Efficient[[:space:]]Collective[[:space:]]Communication[[:space:]]for[[:space:]]Large[[:space:]]Model[[:space:]]Training/891e3ec2-a766-4505-8450-1859438c0377_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5657 |
+
2024/Zero[[:space:]]Bubble[[:space:]](Almost)[[:space:]]Pipeline[[:space:]]Parallelism/aa434715-e234-4885-8905-62a19e466c14_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5658 |
+
2024/Zero[[:space:]]and[[:space:]]Few-shot[[:space:]]Semantic[[:space:]]Parsing[[:space:]]with[[:space:]]Ambiguous[[:space:]]Inputs/f2752b60-09cd-4c4c-8c8d-bd7173f41160_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5659 |
+
2024/Zero-Mean[[:space:]]Regularized[[:space:]]Spectral[[:space:]]Contrastive[[:space:]]Learning_[[:space:]]Implicitly[[:space:]]Mitigating[[:space:]]Wrong[[:space:]]Connections[[:space:]]in[[:space:]]Positive-Pair[[:space:]]Graphs/2dbd877d-cd42-48fc-b64b-b114f733cf88_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5660 |
+
2024/Zero-Shot[[:space:]]Continuous[[:space:]]Prompt[[:space:]]Transfer_[[:space:]]Generalizing[[:space:]]Task[[:space:]]Semantics[[:space:]]Across[[:space:]]Language[[:space:]]Models/0a54eda8-8134-4877-8ca3-31fc2dabd601_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5661 |
+
2024/Zero-Shot[[:space:]]Robotic[[:space:]]Manipulation[[:space:]]with[[:space:]]Pre-Trained[[:space:]]Image-Editing[[:space:]]Diffusion[[:space:]]Models/7d9af388-2703-4511-b1de-a3b78e771c86_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5662 |
+
2024/Zero-Shot[[:space:]]Robustification[[:space:]]of[[:space:]]Zero-Shot[[:space:]]Models/7bfe818e-27b3-43c5-93d3-ca37f20504a7_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5663 |
+
2024/ZeroFlow_[[:space:]]Scalable[[:space:]]Scene[[:space:]]Flow[[:space:]]via[[:space:]]Distillation/a477bde7-3bd1-45e9-a4b6-72b48c293269_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5664 |
+
2024/Zeroth-Order[[:space:]]Optimization[[:space:]]Meets[[:space:]]Human[[:space:]]Feedback_[[:space:]]Provable[[:space:]]Learning[[:space:]]via[[:space:]]Ranking[[:space:]]Oracles/5d673bc0-255f-4d83-a77f-8a2185f57e92_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5665 |
+
2024/ZipIt![[:space:]]Merging[[:space:]]Models[[:space:]]from[[:space:]]Different[[:space:]]Tasks[[:space:]]without[[:space:]]Training/8090ad9e-cf16-4652-b171-4ed0404f1b0c_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5666 |
+
2024/Zoology_[[:space:]]Measuring[[:space:]]and[[:space:]]Improving[[:space:]]Recall[[:space:]]in[[:space:]]Efficient[[:space:]]Language[[:space:]]Models/49bdbd4c-37c9-41b9-959a-86a2699cbca0_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5667 |
+
2024/f-FERM_[[:space:]]A[[:space:]]Scalable[[:space:]]Framework[[:space:]]for[[:space:]]Robust[[:space:]]Fair[[:space:]]Empirical[[:space:]]Risk[[:space:]]Minimization/de467158-b7e6-44e2-9354-7e914228b4d1_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5668 |
+
2024/fairret_[[:space:]]a[[:space:]]Framework[[:space:]]for[[:space:]]Differentiable[[:space:]]Fairness[[:space:]]Regularization[[:space:]]Terms/fcbe03b1-8d43-4714-9659-cf0163da2852_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5669 |
+
2024/iGraphMix_[[:space:]]Input[[:space:]]Graph[[:space:]]Mixup[[:space:]]Method[[:space:]]for[[:space:]]Node[[:space:]]Classification/b4ce5c96-e964-4715-8fb3-177f190ca7b2_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5670 |
+
2024/lpNTK_[[:space:]]Better[[:space:]]Generalisation[[:space:]]with[[:space:]]Less[[:space:]]Data[[:space:]]via[[:space:]]Sample[[:space:]]Interaction[[:space:]]During[[:space:]]Learning/8207076b-d50f-44b3-8b72-dffe71275feb_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
| 5671 |
+
2024/sRGB[[:space:]]Real[[:space:]]Noise[[:space:]]Modeling[[:space:]]via[[:space:]]Noise-Aware[[:space:]]Sampling[[:space:]]with[[:space:]]Normalizing[[:space:]]Flows/1d1a640a-a167-4411-956e-084abab14216_origin.pdf filter=lfs diff=lfs merge=lfs -text
|
2024/$_mathcal{B}$-Coder_ Value-Based Deep Reinforcement Learning for Program Synthesis/f0daf96a-4478-4cb0-896e-4538a0e68035_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/$_mathcal{B}$-Coder_ Value-Based Deep Reinforcement Learning for Program Synthesis/f0daf96a-4478-4cb0-896e-4538a0e68035_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/$_mathcal{B}$-Coder_ Value-Based Deep Reinforcement Learning for Program Synthesis/f0daf96a-4478-4cb0-896e-4538a0e68035_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:f170ba688875888bff3c65881a0865b5895fdd31a38c2c2e84ea4f946d6d8000
|
| 3 |
+
size 626580
|
2024/$_mathcal{B}$-Coder_ Value-Based Deep Reinforcement Learning for Program Synthesis/full.md
ADDED
|
@@ -0,0 +1,574 @@
| 1 |
+
# $\mathcal{B}$ -CODER: VALUE-BASED DEEP REINFORCEMENT LEARNING FOR PROGRAM SYNTHESIS
|
| 2 |
+
|
| 3 |
+
Zishun Yu*
|
| 4 |
+
|
| 5 |
+
Department of Computer Science
|
| 6 |
+
University of Illinois Chicago
|
| 7 |
+
|
| 8 |
+
Chicago, IL 60607
|
| 9 |
+
|
| 10 |
+
zyu32@uic.edu
|
| 11 |
+
|
| 12 |
+
Yunzhe Tao, Liyu Chen, Tao Sun & Hongxia Yang
|
| 13 |
+
|
| 14 |
+
ByteDance Inc.
|
| 15 |
+
|
| 16 |
+
Seattle, WA 98004
|
| 17 |
+
|
| 18 |
+
{yunzhe.tao, liyu.chen1,
|
| 19 |
+
|
| 20 |
+
tao.sun, hx.yang}@bytedance.com
|
| 21 |
+
|
| 22 |
+
# ABSTRACT
|
| 23 |
+
|
| 24 |
+
Program synthesis aims to create accurate, executable programs from problem specifications, specifically from natural language descriptions in our context. Recent studies have leveraged the power of reinforcement learning (RL) in conjunction with large language models (LLMs), significantly enhancing code generation capabilities. The application of RL focuses on directly optimizing for functional correctness, offering an advantage over conventional supervised methods. Despite policy-based RL methods dominating the literature on RL for program synthesis, the nature of program synthesis tasks hints at a natural alignment with value-based methods. This stems from the rich collection of off-policy programs, including those developed by human programmers and also historical samples, coupled with the straightforward verification of generated programs through automated unit testing, meaning rewards are easy to obtain. Diverging from the dominant use of policy-based algorithms, our work explores the feasibility of value-based approaches, leading to the development of our $\mathcal{B}$ -Coder (pronounced Bellman coder). Yet, training value-based methods presents challenges due to the enormous search space inherent to program synthesis. To this end, we introduce an initialization protocol for RL agents utilizing pre-trained LMs and a conservative Bellman operator to reduce training complexities. Moreover, we demonstrate how to leverage the learned value functions as a dual strategy to post-process generated programs. Our empirical evaluations demonstrated $\mathcal{B}$ -Coder's capability in achieving state-of-the-art performance when compared to policy-based methods. Remarkably, this achievement is reached with minimal reward engineering effort, highlighting the effectiveness of value-based RL, independent of reward designs.
|
| 25 |
+
|
| 26 |
+
# 1 INTRODUCTION
|
| 27 |
+
|
| 28 |
+
Program synthesis (or code generation) aims to create functionally accurate executable programs from problem specifications, such as input-output (IO) examples (Summers, 1977; Gulwani et al., 2012), constraint-based specifications (Osera & Zdancewic, 2015; Frankle et al., 2016) or natural language descriptions (Hendrycks et al., 2021; Austin et al., 2021), among others. The increasing attention towards this field can be attributed to its potential in transforming the software development paradigm. Notably, AI-powered tools have shown evidence of boosting efficiency within the software industry.
|
| 29 |
+
|
| 30 |
+
Large language models (LLMs) (Brown et al., 2020; OpenAI, 2023; Anil et al., 2023; Chowdhery et al., 2022; Rae et al., 2021; Hoffmann et al., 2022; Touvron et al., 2023) have garnered substantial interest and shown remarkable achievements. The scheme of pre-training on vast amounts of data has yielded notable successes in natural language generation. This trend extends its influence to program synthesis, where numerous specialized code LLMs (Li et al., 2023; 2022; Nijkamp et al., 2022; Zheng et al., 2023; Fried et al., 2022; Chen et al., 2021a; Wang et al., 2021; 2023; Xu et al., 2023; Rozière et al., 2023) have been introduced to address challenges in program synthesis.
|
| 31 |
+
|
| 32 |
+
Unlike many free-form natural language generation tasks, where the quality of a model's output is hard to assess, the correctness of synthesized programs can be verified through automated execution with
|
| 33 |
+
|
| 34 |
+
predefined unit tests. This allows for directly optimizing execution outcomes through reinforcement learning (RL), by formulating test outcomes as reward signals. Our discussion focuses on recent RL-based works (Le et al., 2022; Shojaee et al., 2023; Liu et al., 2023) that have achieved remarkable advancements in Python text-to-code generation, evaluated on the challenging benchmarks sourced from Codeforces programming contests (Hendrycks et al., 2021; Li et al., 2022). Notably, these works predominantly favor on-policy policy-based algorithms.
|
| 35 |
+
|
| 36 |
+
While (on-policy) policy-based methods are favored in existing program synthesis works, they are known to be sample inefficient (Nachum et al., 2017; Gu et al., 2016) due to their inability to use off-policy samples. In contrast, value-based methods, using temporal difference learning, are known to be more sample-efficient (Gu et al., 2016; Nachum et al., 2017; Liu et al., 2020), as they solve a fixed-point iteration which does not explicitly require a specific data distribution, hence offering better compatibility with off-policy data. We defer the technical explanations on on/off-policy data and reasons for the different efficiency to Section 3.2, where we have notations and definitions ready.
|
| 37 |
+
|
| 38 |
+
In program synthesis, the primary sources of off-policy data include human programs and previously synthesized programs. Both are off-policy as they do not follow the sequence distribution induced by the current model. Current program synthesis works often directly use off-policy samples with on-policy methods. Unsurprisingly, Shojaee et al. (2023) notice that an increase in off-policy synthetic programs may degrade performance. This occurs because off-policy data lead to biased gradient estimates. Ideally, an objective should be to enhance, or at least sustain, performance as data volume grows.
|
| 39 |
+
|
| 40 |
+
To summarize, the reasons that suggest a natural fit for value-based methods in program synthesis are twofold: the availability of (inexpensive) rewards, similar to classical RL tasks like Go and Atari; and the principled compatibility with off-policy data for effectively leveraging human and historical data. However, value-based RL faces challenges such as difficulty in converging in large state-action spaces. To this end, we introduce $\mathcal{B}$ -Coder (Bellman coder), with our contributions being threefold:
|
| 41 |
+
|
| 42 |
+
- We stabilize value-based RL for program synthesis by proposing an initialization protocol for $Q$ -functions and a conservative Bellman operator to mitigate the training complexities.
|
| 43 |
+
- We demonstrate how to leverage value functions as a dual strategy to improve generation.
|
| 44 |
+
- $\mathcal{B}$ -Coder achieves strong empirical performance with minimal reward engineering, providing further insights into RL algorithm design independent of reward function designs.
|
| 45 |
+
|
| 46 |
+
Paper structure. We introduce related works and notations in Sections 2 and 3. Section 4 details our method and the rationale behind our design choices. Specifically, Sections 4.1, 4.2, and 4.3 address the challenges of value function training by: leveraging task structure, providing effective $Q$ -function initialization, and a conservative operator for stable yet less ambitious updates, respectively. Section 4.5 shows an additional benefit of value functions, and Section 5 presents our empirical results.
|
| 47 |
+
|
| 48 |
+
# 2 RELATED WORKS
|
| 49 |
+
|
| 50 |
+
Execution-guided program synthesis. The feasibility of verifying programs through test case outcomes has led to the line of execution-guided works (Chen et al., 2018; Zohar & Wolf, 2018; Chen et al., 2021b). While these efforts leverage execution feedback, they do not directly optimize towards higher execution success rate due to the inherent non-differentiability of execution outcomes.
|
| 51 |
+
|
| 52 |
+
RL for general sequence modeling. Supervised LM training, using next token predictions (NTP) or masked language modeling (Kenton & Toutanova, 2019), has recognized limitations. One prominent issue is the exposure bias: given that the training is done in a "teacher-forcing" manner (Bengio et al., 2015; Ranzato et al., 2015), errors tend to accumulate during testing due to auto-regressive generation. In contrast, prior works (Ranzato et al., 2015; Rennie et al., 2017) have demonstrated the efficacy of RL in addressing exposure bias and optimizing non-differentiable metrics, e.g. BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004), by leveraging automatic scoring as reward function.
|
| 53 |
+
|
| 54 |
+
RL for program synthesis. Supervised losses also fall short when assessing the functional accuracy of synthesized programs (Hendrycks et al., 2021; Chen et al., 2021a). As such, relying solely on supervised learning for program synthesis is not ideal. As RL provides a pathway to directly optimize non-differentiable objectives, plentiful works (Zhong et al., 2017; Simmons-Edler et al., 2018; Ellis et al., 2019; Wang et al., 2022) have studied enhancing code generation through RL. For the works most related to ours: CodeRL (Le et al., 2022) adapted REINFORCE (Williams, 1992), a classic
|
| 55 |
+
|
| 56 |
+
policy gradient (PG) algorithm, along with the baseline trick for variance reduction and a reward model trained with supervised learning to alleviate the issue of sparse execution signals. In addition, they proposed a critic sampling strategy to refine and repair programs based on feedback from the example unit tests. PPOCoder (Shojaee et al., 2023) applied proximal policy optimization (PPO; Schulman et al., 2017) to fine-tune pre-trained LMs. In addition, they leverage the syntactic and semantic structure of code, such as syntax trees (Rabinovich et al., 2017) and data-flow graphs (Yasunaga & Liang, 2020), to improve reward function designs. RLTF (Liu et al., 2023) proposed an online training framework for program synthesis using policy gradient with heuristically-designed fine-grained rewards.
|
| 57 |
+
|
| 58 |
+
Additional discussions. Appendix D lists several RL applications, showing the analogies between program synthesis and tasks that benefit from value-based methods. In Appendix C, we extend the discussion on works that extend policy-based methods to an off-policy setting. Such attempts often involve training a value function, further highlighting our motivation for starting with value-based methods.
|
| 59 |
+
|
| 60 |
+
# 3 PRELIMINARIES
|
| 61 |
+
|
| 62 |
+
One could formulate the program synthesis task as a sequence-to-sequence generation task, where a model takes a problem description $D$ as input and outputs a program $\hat{W}$ which aims to achieve the functionality specified by $D$ . A generated program $\hat{W} = (\hat{w}_0, \dots, \hat{w}_T)$ is composed of a sequence of tokens $\hat{w}_t \in \mathcal{V}$ . For brevity, we use a constant $T$ to denote the sequence length although it could be a variable in practice, and $W$ to denote a program in general (both generated and ground truth). Let $\mathrm{LM}$ be an instance of a language model, $\ell((w_{<t}, D), \cdot)$ be the logits layer (language modelling head) output, and $p(\cdot | w_{<t}, D)$ be the probability distribution over the vocabulary $\mathcal{V}$ (computed by passing $\ell(\cdot, \cdot)$ through softmax), conditioned on a sequence $w_{<t}$ and $D$ . Suppose $W^*$ is a ground truth program and $\mathcal{D}_{\text{train}}$ is the training set; conventionally, LMs could be trained by minimizing the cross-entropy loss
|
| 63 |
+
|
| 64 |
+
$$
|
| 65 |
+
\mathcal{L}_{\mathrm{ce}}(p) = -\mathbb{E}_{W^{*} \sim \mathcal{D}_{\mathrm{train}}} \log p\left(W^{*} \mid D\right) = -\mathbb{E}_{W^{*} \sim \mathcal{D}_{\mathrm{train}}} \sum_{t} \log p\left(w_{t}^{*} \mid w_{<t}^{*}, D\right). \tag{1}
|
| 66 |
+
$$
|
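For concreteness, the following is a minimal sketch of the token-level objective in equation 1, assuming PyTorch tensors, a padding convention, and tensor shapes that are illustrative rather than taken from the paper.

```python
import torch
import torch.nn.functional as F

def cross_entropy_loss(logits: torch.Tensor, target_ids: torch.Tensor, pad_id: int = 0) -> torch.Tensor:
    """Equation 1: -sum_t log p(w*_t | w*_<t, D), averaged over non-padding tokens.

    logits:     (batch, T, |V|) next-token scores produced by the LM
    target_ids: (batch, T) ground-truth program tokens W*
    """
    log_probs = F.log_softmax(logits, dim=-1)                       # log p(. | w_<t, D)
    token_ll = log_probs.gather(-1, target_ids.unsqueeze(-1)).squeeze(-1)
    mask = (target_ids != pad_id).float()                           # ignore padding positions
    return -(token_ll * mask).sum() / mask.sum()
```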
| 67 |
+
|
| 68 |
+
# 3.1 RL NOTATIONS
|
| 69 |
+
|
| 70 |
+
To make notations easier to interpret, we bridge program synthesis notations to standard RL ones. RL problems are typically formulated as Markov Decision Processes (MDPs), and an MDP $\mathcal{M}$ is often composed of a 5-tuple $\mathcal{M} = (\mathcal{S},\mathcal{A},\mathbb{P},r,\gamma)$ , which are the state space, action space, transition function, reward function and discount factor, respectively. The discount factor $\gamma$ discounts future values to emphasize the near future, and we use $\gamma = 0.999$ (which slightly prefers more concise solutions). A (stochastic) transition function $\mathbb{P}:S\times \mathcal{A}\to \Delta (\mathcal{S})$ is a distribution over $\mathcal{S}$ conditioned on a state-action pair $(s,a)$ . In program synthesis, $\mathbb{P}$ is trivial as $s_{t + 1}\equiv s_t\circ a_t$ , where $\circ$ denotes concatenation.
|
| 71 |
+
|
| 72 |
+
State and action. In the code generation context, an action $a_{t}$ is a token $\hat{w}_{t}$ . Hence the action space $\mathcal{A}$ is the vocabulary $\mathcal{V}$ . As the information used to generate token $\hat{w}_{t}$ is $(\hat{w}_{< t}, D)$ , the state is hence defined as $s_{t} := (\hat{w}_{< t}, D)$ . For a given $D$ , the state space is $S = \mathcal{V}^{T}$ . For brevity, we will mainly use $s_{t}, a_{t}$ rather than the $w_{t}$ notations, and sometimes omit the time index $t$ if it leads to no confusion. We will also use $s', a'$ to denote $s_{t+1}, a_{t+1}$ whenever only the relative temporal position matters.
|
| 73 |
+
|
| 74 |
+
Policy. A policy $\pi : S \to \Delta(\mathcal{A})$ assigns an action distribution $\Delta(\mathcal{A})$ to any state $s \in S$ , meaning predicting a token $\hat{w}_t$ based on current sequence $\hat{w}_{<t}$ and the problem specification $D$ . Prior works often define $\pi_\theta \equiv p_\theta$ and directly optimize LM parameters $\theta$ with PG methods. We however define $\pi := f(\theta, \square)$ to be a function of $\theta$ and other components $\square$ , see details in Section 4.
|
| 75 |
+
|
| 76 |
+
Reward function. A reward function $r: S \times \mathcal{A} \to \mathbb{R}$ determines the reward of taking action $a_{t}$ at state $s_{t}$ . We follow the reward design of Le et al. (2022) in equation 2. We may also use the shorthand notation $r_{t} := r(s_{t}, a_{t})$ . Note that the reward is only determined when the program $W$ is completed at step $T$ . Thus $r_{t} = 0$ if $t \neq T$ ; otherwise it is defined as in equation 2.
|
| 77 |
+
|
| 78 |
+
$$
|
| 79 |
+
r(W) = r\left(s_{T}, a_{T}\right) = \begin{cases} +1.0, & \text{if } W \text{ passed all unit tests} \\ -0.3, & \text{if } W \text{ failed any unit test} \\ -0.6, & \text{if } W \text{ cannot be executed} \\ -1.0, & \text{if } W \text{ cannot be compiled} \end{cases}
|
| 80 |
+
$$
|
| 81 |
+
|
| 82 |
+
(2)
|
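To make the state/action/reward definitions concrete, below is a minimal sketch of the resulting token-level MDP. The `Outcome` enumeration and the `run_unit_tests` callable are hypothetical stand-ins for the execution sandbox; only the reward values of equation 2 come from the text.

```python
from enum import Enum

class Outcome(Enum):
    PASSED_ALL = "passed all unit tests"
    FAILED_TEST = "failed a unit test"
    RUNTIME_ERROR = "cannot be executed"
    COMPILE_ERROR = "cannot be compiled"

# Terminal reward r(s_T, a_T) from equation 2.
REWARDS = {
    Outcome.PASSED_ALL: +1.0,
    Outcome.FAILED_TEST: -0.3,
    Outcome.RUNTIME_ERROR: -0.6,
    Outcome.COMPILE_ERROR: -1.0,
}

def step(state, action, eos_token, run_unit_tests):
    """One transition of the program-synthesis MDP: s_{t+1} = s_t o a_t.

    state:  (description D, list of tokens generated so far)
    action: the next token a_t; the reward stays 0 until the program terminates.
    """
    description, tokens = state
    next_state = (description, tokens + [action])
    done = action == eos_token
    reward = REWARDS[run_unit_tests(next_state)] if done else 0.0
    return next_state, reward, done
```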
| 83 |
+
|
| 84 |
+
Value functions. RL maximizes the discounted returns, $J(\pi) = \mathbb{E}[\sum_t\gamma^t r_t|\pi ,\mathcal{M}]$ . The state-action value function $Q^{\pi}\colon S\times \mathcal{A}\to \mathbb{R}$ and the state value function $V^{\pi}\colon S\to \mathbb{R}$ , are defined recursively as:
|
| 85 |
+
|
| 86 |
+
$$
|
| 87 |
+
\begin{aligned} V^{\pi}(s) &:= \mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^{t} r_{t} \;\middle|\; \pi, \mathcal{M}, S_{0}=s\right] = \mathbb{E}_{a \sim \pi(\cdot \mid s),\, s' \sim \mathbb{P}(\cdot \mid s,a)}\left[r(s,a) + \gamma V^{\pi}(s')\right] \quad (3) \\ Q^{\pi}(s,a) &:= \mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^{t} r_{t} \;\middle|\; \pi, \mathcal{M}, S_{0}=s, A_{0}=a\right] = \mathbb{E}_{s' \sim \mathbb{P}(\cdot \mid s,a)}\left[r(s,a) + \gamma Q^{\pi}(s', \pi)\right], \quad (4) \end{aligned}
|
| 88 |
+
$$
|
| 89 |
+
|
| 90 |
+
where $Q(s,\pi)\coloneqq \mathbb{E}_{a\sim \pi}Q(s,a)$ . In addition, the advantage function is $A^{\pi}(s,a)\coloneqq Q^{\pi}(s,a) - V^{\pi}(s)$
|
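As a small illustration of equations 3 and 4, the helper below computes discounted returns backward along a single completed program; with the sparse terminal reward of equation 2, the return from step $t$ reduces to $\gamma^{T-t} r(W)$. This is a sketch for intuition only.

```python
def discounted_returns(rewards, gamma=0.999):
    """Backward recursion G_t = r_t + gamma * G_{t+1}, the Monte Carlo target
    for V^pi(s_t) and Q^pi(s_t, a_t) in equations 3-4."""
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    return list(reversed(returns))

# With the reward of equation 2 only the terminal entry is non-zero:
# discounted_returns([0.0, 0.0, 0.0, 1.0]) -> [0.997..., 0.998..., 0.999, 1.0]
```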
| 91 |
+
|
| 92 |
+
# 3.2 VALUE-BASED RL AND DUELING DQN
|
| 93 |
+
|
| 94 |
+
Value-based algorithms, especially the $Q$ -learning family (Watkins & Dayan, 1992; Mnih et al., 2013; Van Hasselt et al., 2016; Bellemare et al., 2017), have achieved remarkable successes. A canonical framework of the $Q$ -learning family iterates between policy evaluation and policy improvement:
|
| 95 |
+
|
| 96 |
+
policy evaluation (PE): $Q_{k} = \arg \min_{Q}\mathbb{E}_{\mathcal{D}}[Q(s,a) - (r + \gamma Q_{k - 1}(s^{\prime},\pi_{k - 1}))]^{2}$ (5)
|
| 97 |
+
|
| 98 |
+
policy improvement (PI): $\pi_k = \arg \max_{\pi} Q_k(s, \pi(s))$ (6)
|
| 99 |
+
|
| 100 |
+
where $\mathcal{D}$ is an arbitrary dataset, the PE step estimates the value of the previous policy $\pi_{k - 1}$ using the Bellman equation (Bellman, 1966), and the PI step finds an improved $\pi_{k}$ by maximizing the $Q_{k}$ estimates.
|
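The snippet below sketches one round of equations 5 and 6, using a frozen copy of the previous $Q$-function as the bootstrap target and a greedy improvement step. The network interface and batch layout (`q_net`, tensor field names) are assumptions for illustration, not the paper's implementation.

```python
import copy
import torch
import torch.nn.functional as F

def q_iteration_round(q_net, optimizer, batch, gamma=0.999):
    """One policy-evaluation step (eq. 5) with greedy policy improvement (eq. 6).

    batch: states `s`, next states `s_next` (model inputs), actions `a` (long),
           rewards `r` (float) and `done` flags (float, 1 at the terminal step).
    """
    target_net = copy.deepcopy(q_net).eval()                      # Q_{k-1}
    with torch.no_grad():
        q_next = target_net(batch["s_next"])                      # (B, |V|)
        # pi_k(s') = argmax_a Q_{k-1}(s', a): the improvement step folded into the target
        target = batch["r"] + gamma * (1.0 - batch["done"]) * q_next.max(dim=-1).values
    q_sa = q_net(batch["s"]).gather(-1, batch["a"].unsqueeze(-1)).squeeze(-1)
    loss = F.mse_loss(q_sa, target)                               # squared Bellman error
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```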
| 101 |
+
|
| 102 |
+
In particular, we build our framework on top of Dueling DQN (Wang et al., 2016, DDQN). In a nutshell, DDQN approximates $V(s)$ and $A(s,a)$ with separate heads, and runs the improvement and evaluation steps with $Q(s,a) = V(s) + A(s,a)$ . This bifurcation enables a robust estimation of $V(s)$ without conflating it with the actions, which subsequently ensures a stable learning of $A(s,a)$ given that it focuses solely on the relative values. As a consequence, DDQN often exhibits enhanced stability in training dynamics and improved generalization. In addition to the aforementioned advantages, DDQN enables us to leverage the task structure that ground truth programs should attain the highest advantages, thereby reducing the search space, which we will elaborate on in Section 4.1.
|
| 103 |
+
|
| 104 |
+
Remarks on sample efficiency. We illustrate the inefficiency of policy-based methods using vanilla PG as an example. PG maximizes $J(\mu) \coloneqq \mathbb{E}[\sum_t\gamma^t r_t|\pi_\mu ,\mathcal{M}] \equiv \mathbb{E}_{W\sim \pi_\mu}[\sum_t\gamma^t r_t]$ , with the gradient $\nabla_{\mu}J(\mu)$ computed using the policy gradient theorem. This method requires training data $W$ drawn from the distribution induced by the current policy $\pi_{\mu}$ , hence the term on-policy. Therefore, one should in principle generate new data and discard historical data at every update, leading to undesired sample inefficiency. In contrast, policy evaluation as in equation 5 works with an arbitrary dataset $\mathcal{D}$ .
|
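For contrast, a minimal REINFORCE-style surrogate for $J(\mu)$ is sketched below; the log-probabilities must come from programs freshly sampled from the current policy $\pi_\mu$, which is exactly what makes the estimator on-policy. Tensor shapes are assumed for illustration.

```python
import torch

def reinforce_loss(log_probs: torch.Tensor, returns: torch.Tensor) -> torch.Tensor:
    """Vanilla PG surrogate: -E_{W ~ pi_mu}[ sum_t log pi_mu(a_t | s_t) * G_t ].

    log_probs: (B, T) log-probabilities of programs sampled from the *current* policy
    returns:   (B, T) discounted returns G_t, treated as constants
    """
    return -(log_probs * returns.detach()).sum(dim=-1).mean()
```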
| 105 |
+
|
| 106 |
+
# 4 ALGORITHMIC DESIGNS - ACCELERATING VALUE-BASED TRAINING
|
| 107 |
+
|
| 108 |
+
While value-based RL holds great promise, its training can be challenging due to the large action space $\mathcal{A} = \mathcal{V}$ and the high-dimensional state space $S = \mathcal{V}^T$ . This leads to a notably large $Q$ -table of size $\mathcal{O}(|\mathcal{V}|^T)$ . And the cardinality of the policy space is $|\mathcal{A}|^{|\mathcal{S}|} = \mathcal{O}(|\mathcal{V}|^{|\mathcal{V}|^T})$ , which grows doubly exponentially. Both challenges from large action spaces and high-dimensional state spaces are pivotal research topics in RL. The action space challenges are discussed by e.g. Dulac-Arnold et al. (2015); Tavakoli et al. (2018); Kalashnikov et al. (2018), while He et al. (2016); Nair et al. (2018), among others, considered the state space complexities. In particular, Silver (2015); Duan et al. (2016) commented on the potentially better training stability of policy-based methods in these scenarios.
|
| 109 |
+
|
| 110 |
+

|
| 111 |
+
Figure 1: Training curves on APPS train set. $\blacksquare$ denotes $\mathcal{B}$ -Coder, $\star$ removes our conservative operator, and $\triangledown$ is $\mathcal{B}$ -Coder without both our operator and initialization.
|
| 112 |
+
|
| 113 |
+
To address the challenges inherent in training value-based RL for LMs, at a high level, we developed $\mathcal{B}$ -Coder considering three key aspects: incorporation of task structure, initialization of the $Q$ -function, and backup using a conservative Bellman operator. Figure 1 previews the effectiveness of our algorithmic designs, showing the training curves of different value-based RL algorithms on the APPS dataset. Due to the aforementioned challenges, the performance of the vanilla DDQN continuously decreases even when evaluated on the training set. In contrast, both the $Q$ -function initialization and the conservative Bellman operator show benefits in stabilizing and accelerating the training process.
|
| 114 |
+
|
| 115 |
+
For notational convenience in subsequent sections, we begin with an overview of our notations and parameterizations, summarized in Figure 2. Figure 2(a) denotes a pre-trained encoder-decoder LM parameterized by $\theta_{\mathrm{ckpt}}$ (where the subscript ckpt denotes that it is a checkpoint/constant). Figure 2(b) and (c) show the forward graphs of our two different training stages: (b) corresponds to a pre-training stage for $\phi$ , to provide a good initialization for (c) the subsequent fine-tuning of $\theta$ . Motivations and details are deferred to Sections 4.2 and 4.3, respectively. As we proceed to the rationale behind our
|
| 116 |
+
|
| 117 |
+
designs, it is encouraged to maintain familiarity with $\theta_{\mathrm{ckpt}}$ , $\phi$ , $\theta$ and their corresponding products, especially the forward paths to $Q_{\phi}$ and $Q_{\theta}$ , to prevent confusion in the subsequent sections.
|
| 118 |
+
|
| 119 |
+
# 4.1 LEVERAGING TASK STRUCTURES
|
| 120 |
+
|
| 121 |
+
As noted earlier, a key attribute of the program synthesis task is the provision of human solutions, which are guaranteed to be correct. As a result, these solutions should attain the highest $Q$ -values, even if the correct solutions might not be unique. As such, for a ground truth program $W^{*} = (s_{0}^{*},a_{0}^{*},\ldots ,s_{T}^{*},a_{T}^{*})$ , $Q(s_{t}^{*},a_{t}^{*}) \geq Q(s_{t}^{*},a)$ holds for all $a \in \mathcal{V}$ , hence $A(s_{t}^{*},a_{t}^{*}) \geq A(s_{t}^{*},a)$ .
|
| 122 |
+
|
| 123 |
+
To enforce this structure, one could ensure $A(W) \leq 0$ and $A(W^{*}) \approx 0$ , where we abuse notation by letting $A(W) := \sum_{t=0}^{T} A(s_{t}, a_{t})$ . This ensures that $W^{*}$ attains advantages that are roughly the highest. To this end, suppose $g(\cdot)$ is a general neural network; we decompose $Q$ as follows,
|
| 124 |
+
|
| 125 |
+

|
| 126 |
+
Figure 2: (a) A forward graph of conventional enc-dec LMs, with a checkpoint $\theta_{\mathrm{ckpt}}$ , where $p$ is a distribution over $\mathcal{A}$ and $\ell$ denotes logits; (b) Our forward graph for pre-training $\phi$ ; (c) Our forward graph for fine-tuning $\theta$ . * indicates a frozen/constant component.
|
| 127 |
+
|
| 128 |
+
$$
|
| 129 |
+
Q(s, a) = \underbrace{g(s, a) - \max_{a} g(s, a)}_{\text{non-positive advantage}} + V(s) = A(s, a) + V(s). \tag{7}
|
| 130 |
+
$$
|
| 131 |
+
|
| 132 |
+
It enforces our first condition that $A(W) \leq 0$ . For the second condition $A(W^{*}) \approx 0$ , we optimize an advantage function $A$ by minimizing an auxiliary advantage loss function, namely $\mathcal{L}_{\mathrm{adv}}$ ,
|
| 133 |
+
|
| 134 |
+
$$
|
| 135 |
+
\mathcal{L}_{\mathrm{adv}}(A) = \mathbb{E}_{\left(s_{0}^{*}, a_{0}^{*}, \dots, s_{T}^{*}, a_{T}^{*}\right) \sim \mathcal{D}_{\mathrm{train}}}\left[\sum_{t=0}^{T} \left|A\left(s_{t}^{*}, a_{t}^{*}\right)\right|\right]. \tag{8}
|
| 136 |
+
$$
|
| 137 |
+
|
| 138 |
+
We also cap the $Q$ -function with $R_{\mathrm{max}} = 1$ , the maximum total reward. See Appendix G for details.
|
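A minimal sketch of equation 7 and the auxiliary loss of equation 8 is given below, including the cap at $R_{\mathrm{max}}$; the head shapes (a logits-like score $g$ over the vocabulary and a scalar value head) are assumptions made for illustration.

```python
import torch

R_MAX = 1.0  # maximum total reward: a single terminal reward of +1.0

def dueling_q(g: torch.Tensor, v: torch.Tensor):
    """Equation 7: Q(s,a) = [g(s,a) - max_a g(s,a)] + V(s), so A(s,a) <= 0 by construction.

    g: (B, T, |V|) action scores,  v: (B, T, 1) state values.
    """
    adv = g - g.max(dim=-1, keepdim=True).values          # non-positive advantage
    q = torch.clamp(adv + v, max=R_MAX)                   # cap Q at R_max
    return q, adv

def advantage_loss(adv: torch.Tensor, gt_actions: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Equation 8: drive A(s*_t, a*_t) toward 0 on ground-truth programs W*."""
    a_star = adv.gather(-1, gt_actions.unsqueeze(-1)).squeeze(-1)
    return (a_star.abs() * mask).sum() / mask.sum()
```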
| 139 |
+
|
| 140 |
+
# 4.2 $Q$ -FUNCTION INITIALIZATION
|
| 141 |
+
|
| 142 |
+
Despite the task structures introduced, training the $Q$ -function from scratch remains extremely challenging. While this is not a problem for policy-based learning (given that such methods directly fine-tune pre-trained LMs without requiring a $Q$ -function at all), it presents significant challenges in value-based approaches because one often does not have a pre-trained $Q$ -function. To this end, we show that one could initialize a $Q$ -function from the logits output $\ell(\cdot, \cdot)$ of a pre-trained LM.
|
| 143 |
+
|
| 144 |
+
Initialization of $Q$ via pre-trained models. Yu & Zhang (2023) considered the fine-tuning of RL agents after offline RL pre-training. Their main idea is to reconstruct a $Q$ -function from the pre-trained policy, for fine-tuning. Drawing inspiration from their approach, one could similarly reconstruct/initialize a $Q$ -function using a pre-trained LM, akin to using a pre-trained policy.
|
| 145 |
+
|
| 146 |
+
This initialization was motivated by the energy-based policy line of works (Haarnoja et al., 2017; 2018), where a policy $\pi$ is the product of passing a $Q$ -function through a softmax transfer function. Analogously, in LMs, $p$ - the distribution over $\mathcal{V}$ - is produced by passing logits $\ell$ through softmax.
|
| 147 |
+
|
| 148 |
+
language modeling: $p(a|s) = \exp (\ell (s,a)) / \sum_{a\in \mathcal{A}}\exp (\ell (s,a))$ (9)
|
| 149 |
+
|
| 150 |
+
energy-based $\pi$ : $\pi(a|s) = \exp \left( \frac{1}{\alpha} Q(s, a) \right) / \sum_{a \in \mathcal{A}} \exp \left( \frac{1}{\alpha} Q(s, a) \right)$ , (10)
|
| 151 |
+
|
| 152 |
+
where $\alpha$ is a temperature hyper-parameter. One could naturally set $Q(s,a) = \alpha \ell (s,a)$ for initialization. Hence, with the aforementioned dueling structure in equation 7 and our pre-defined parameterization, one could set the advantage function as $A_{\theta_{\mathrm{ckpt}}}(s,a)\coloneqq \alpha [\ell_{\theta_{\mathrm{ckpt}}}(s,a) - \max_a\ell_{\theta_{\mathrm{ckpt}}}(s,a)]$ , leading to $Q_{\phi}(s,a)\coloneqq A_{\theta_{\mathrm{ckpt}}}(s,a) + V_{\phi}(s)$ . See also our forward pass graph defined in Figure 2b. In a nutshell, this $Q_{\phi}$ -function produces a policy $\pi_{\phi}$ identical to the output distribution $p_{\theta_{\mathrm{ckpt}}}$ of $\mathrm{LM}_{\theta_{\mathrm{ckpt}}}$ ,
|
| 153 |
+
|
| 154 |
+
$$
|
| 155 |
+
\pi_{\phi}(a \mid s) = \operatorname{softmax}\left[\frac{1}{\alpha} \mathbf{Q}_{\phi}(s)\right][a] = \operatorname{softmax}\left[\ell_{\theta_{\mathrm{ckpt}}}(s) - \max_{a} \ell_{\theta_{\mathrm{ckpt}}}(s, a) + \frac{1}{\alpha} V_{\phi}(s)\right][a] = p_{\theta_{\mathrm{ckpt}}}(a \mid s), \tag{11}
|
| 156 |
+
$$
|
| 157 |
+
|
| 158 |
+
where $\mathbf{Q}(s) \coloneqq [Q(s, a)]_{a \in \mathcal{A}}$ and $\ell(s) \coloneqq [\ell(s, a)]_{a \in \mathcal{A}}$ .
|
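The initialization of equations 9-11 amounts to a few tensor operations. The sketch below (with assumed shapes) also checks the claim of equation 11: since $V_{\phi}(s)$ is constant across the vocabulary dimension, the softmax of $Q_{\phi}/\alpha$ recovers $p_{\theta_{\mathrm{ckpt}}}$ regardless of $V_{\phi}$.

```python
import torch

def init_q_from_logits(logits_ckpt: torch.Tensor, v_phi: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    """Q_phi(s,a) = alpha * [l_ckpt(s,a) - max_a l_ckpt(s,a)] + V_phi(s).

    logits_ckpt: (B, T, |V|) frozen pre-trained LM logits
    v_phi:       (B, T, 1)   trainable state-value head output
    """
    adv_ckpt = alpha * (logits_ckpt - logits_ckpt.max(dim=-1, keepdim=True).values)
    return adv_ckpt + v_phi

# Sanity check of equation 11: the induced policy equals the pre-trained LM distribution.
logits = torch.randn(2, 5, 11)
q_phi = init_q_from_logits(logits, torch.randn(2, 5, 1), alpha=1.0)
assert torch.allclose(torch.softmax(q_phi / 1.0, dim=-1), torch.softmax(logits, dim=-1), atol=1e-5)
```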
| 159 |
+
|
| 160 |
+
Recalling equation 5 - 6, the $Q$ -learning family can be viewed as iterations between policy evaluation and improvement. We now elaborate on how this $Q_{\phi}$ function initialization affects both steps.
|
| 161 |
+
|
| 162 |
+
Policy improvement. One could, informally, consider the operation of taking a softmax with respect to $\frac{1}{\alpha} Q_{\phi}$ as a soft policy improvement (Haarnoja et al., 2018) step with temperature $\alpha$ . Therefore, equation 11 can be interpreted as: running soft policy improvement alone with this initialized $Q_{\phi}$ preserves the performance of the pre-trained $\mathrm{LM}_{\theta_{\mathrm{ckpt}}}$ , offering a good starting point for online fine-tuning.
|
| 163 |
+
|
| 164 |
+
Policy evaluation. Yet, this $Q_{\phi}$ function only captures relative values, since we initialized only the advantages $A_{\theta_{\mathrm{ckpt}}}$ - the relative information - as shown in equation 11. $V_{\phi}$ can thereby be an arbitrary function. This would not affect the policy improvement step due to the translation invariance of the softmax function. However, during the policy evaluation step, see e.g. equation 5, the Bellman error can be heavily influenced by the $V$ -values. When the $V$ -values are the dominant source of error, the policy evaluation optimization could be largely driven by the state-only $V$ -values. This can lead to a loss of the relative action values that we intended to preserve in the previous step.
|
| 165 |
+
|
| 166 |
+
Pre-training of $V_{\phi}$ . This can be addressed by adding a pre-training phase for $V_{\phi}(s)$ , during which we freeze the advantage function $A_{\theta_{\mathrm{ckpt}}}$ and train $V_{\phi}$ by minimizing the temporal difference error (or, equivalently, doing policy evaluation). In this stage, we optimize the following loss until convergence:
|
| 167 |
+
|
| 168 |
+
$$
|
| 169 |
+
\mathcal {L} _ {V} \left(V _ {\phi}; \ell_ {\theta_ {\mathrm {c k p t}}}\right) = \frac {1}{T} \mathbb {E} _ {\left(s _ {t}, a _ {t}, r _ {t}, s _ {t + 1}\right) \sim \mathcal {D} _ {\text {t r a i n}}} \sum_ {t = 0} ^ {T} \left[ r _ {t} + \gamma \operatorname {S G} \left(Q _ {\phi} \left(s _ {t + 1}, \hat {a} _ {t + 1}\right)\right) - Q _ {\phi} \left(s _ {t}, a _ {t}\right) \right] ^ {2}, \tag {12}
|
| 170 |
+
$$
|
| 171 |
+
|
| 172 |
+
where SG is a stop gradient operator, $\mathrm{SG}(Q_{\phi}(s',\hat{a}'))$ follows standard semi-gradient optimization, $\hat{a}_{t + 1}$ is a target action (details deferred to section 4.3), and $Q_{\phi}(s,a) = A_{\theta_{\mathrm{ckpt}}}(s,a) + V_{\phi}(s)$ .
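A minimal sketch of the $V_{\phi}$ pre-training loss in equation 12, assuming one batch of per-timestep transitions with pre-computed (frozen) checkpoint logits; the helper name and the batched form are illustrative assumptions.

```python
import torch

def v_pretrain_loss(logits_t, logits_tp1, v_t, v_tp1, actions, target_actions,
                    rewards, alpha=1.0, gamma=1.0):
    """TD loss of equation 12: only V_phi receives gradients, A_ckpt is frozen.

    logits_t, logits_tp1: [B, |A|] frozen checkpoint logits at s_t and s_{t+1}
    v_t, v_tp1:           [B]      trainable V_phi(s_t), V_phi(s_{t+1})
    actions, target_actions: [B]   long tensors of a_t and a_hat_{t+1}
    """
    # Frozen advantages from the pre-trained checkpoint logits
    adv_t   = alpha * (logits_t   - logits_t.max(-1, keepdim=True).values)
    adv_tp1 = alpha * (logits_tp1 - logits_tp1.max(-1, keepdim=True).values)

    # Q(s_t, a_t) and Q(s_{t+1}, a_hat_{t+1}) under Q = A_ckpt + V_phi
    q_t   = adv_t.gather(-1, actions.unsqueeze(-1)).squeeze(-1) + v_t
    q_tp1 = adv_tp1.gather(-1, target_actions.unsqueeze(-1)).squeeze(-1) + v_tp1

    # Semi-gradient target: stop-gradient (SG) on the bootstrapped term
    target = rewards + gamma * q_tp1.detach()
    return ((target - q_t) ** 2).mean()
```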
In summary, our initialization steps ensure that, prior to fine-tuning $\theta$, our $Q_{\phi}$ meets two important conditions: it starts with the action distribution $p_{\theta_{\mathrm{ckpt}}}$ of the pre-trained $\mathrm{LM}_{\theta_{\mathrm{ckpt}}}$, and it begins with a low temporal difference error (because the pre-training of $V_{\phi}$ in equation 12 directly minimizes it).

# 4.3 A CONSERVATIVE BELLMAN OPERATOR
With a pre-trained state value function $V_{\phi}$, we are now ready to learn a good state-action value function via fine-tuning. We parameterize $Q_{\theta}(s,a) \coloneqq A_{\theta}(s,a) + V_{\theta}(s) = \alpha [\ell_{\theta}(s,a) - \max_{a}\ell_{\theta}(s,a)] + V_{\theta}^{r}(s) + V_{\phi}(s)$, where we define $V_{\theta} = V_{\theta}^{r} + V_{\phi}$, and we initialize $\theta$ such that $\ell_{\theta} = \ell_{\theta_{\mathrm{ckpt}}}$ and $V_{\theta}^{r} = 0$. This ensures that $Q_{\theta} = Q_{\phi}$ at initialization, a good starting point for the subsequent fine-tuning of $\theta$. Technically speaking, setting $V_{\theta} = V_{\theta}^{r} + V_{\phi}$ is not required, as one could fine-tune both $\theta$ and $\phi$. We observed, however, that fine-tuning a residual head $V_{\theta}^{r}$, with $\phi$ frozen, leads to better stability.

Although we avoid training $Q_{\theta}$ from scratch, optimizing $Q_{\theta}$ with $Q$-learning family algorithms can still be challenging. We attribute this to the characteristics of the Bellman optimality operator $\mathcal{B}^*$, which seeks to learn the optimal value function $Q^*$ and optimal policy $\pi^*$ and therefore requires good data coverage of the state-action space $\mathcal{S} \times \mathcal{A}$ (e.g. Jiang & Huang, 2020; Xie et al., 2021a; Zhan et al., 2022). In program synthesis, however, such an assumption can hardly be met, due to the large state-action space and the high computational cost of Transformer inference. While the conventional $Q$-learning family relies on the operator $\mathcal{B}^*$, recent works in RL, especially those considering the limited-data regime (e.g. Agarwal et al., 2020; Levine et al., 2020), often design "conservative" operators (e.g. Achiam et al., 2017; Kumar et al., 2020; Brandfonbrener et al., 2021) to address the difficulties caused by $\mathcal{B}^*$.

Conservative Bellman operators. The concept behind conservative Bellman operators is to "aim low". Instead of learning the optimal $Q^{*}$ and $\pi^{*}$, these operators typically seek to learn a policy $\pi$ that either surpasses a behavior policy (the policy used to collect an RL dataset in the offline RL literature, see e.g. Achiam et al., 2017; Brandfonbrener et al., 2021) or improves upon a pre-existing policy (e.g. Xie et al., 2021b; Yu & Zhang, 2023). This is often achieved by introducing a regularizer that penalizes deviations from the behavior/pre-existing policy. In particular, as shown in equation 14, we define our conservative Bellman operator $\mathcal{B}^q$, which depends on a fixed, pre-defined policy $q$, as follows:

$$
\text{optimality } \mathcal{B}:\quad \left(\mathcal{B}^{*} Q\right)(s, a) = r(s, a) + \gamma \mathbb{E}_{s^{\prime}}\left[ Q\left(s^{\prime}, \hat{a}^{\prime}\right) \right], \quad\text{where } \hat{a}^{\prime} = \arg\max_{a} Q\left(s^{\prime}, a\right), \tag{13}
$$

$$
\text{conservative } \mathcal{B}:\quad \left(\mathcal{B}^{q} Q\right)(s, a) = r(s, a) + \gamma \mathbb{E}_{s^{\prime}}\left[ Q\left(s^{\prime}, \hat{a}^{\prime}\right) \right], \quad\text{where } \hat{a}^{\prime} = \arg\max_{a} q\left(a | s^{\prime}\right). \tag{14}
$$

The intuition behind our operator $\mathcal{B}^q$ is that we evaluate the action-value function $Q^{q^{\uparrow}}$ of a greedified policy $q^{\uparrow}(a|s) \coloneqq \mathbb{1}\{a = \arg\max_{a} q(a|s)\}$, where $\mathbb{1}$ is the indicator function. The rationale behind greedification is that $q^{\uparrow}$ can be seen as $q$ in greedy-decoding mode, which usually has better (one-shot) capability than sampling mode (although the latter has better generation diversity). Setting $q = p_{\theta_{\mathrm{ckpt}}}$, the operator $\mathcal{B}^{p_{\theta_{\mathrm{ckpt}}}}$ seeks to learn a policy $\pi$ that outperforms $p_{\theta_{\mathrm{ckpt}}}$.

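To make the difference between equations 13 and 14 concrete, here is a small sketch (hypothetical helper, PyTorch-style) of how the bootstrap action for the TD target is chosen: the optimality operator maximizes the learned $Q$ at $s'$, whereas the conservative operator follows the greedified fixed policy $q$ (e.g. $p_{\theta_{\mathrm{ckpt}}}$) at $s'$.

```python
import torch

def td_target(rewards, q_next, ref_logprobs_next, gamma=1.0, conservative=True):
    """One-step target r + gamma * Q(s', a_hat'); equation 13 vs. equation 14.

    q_next:            [B, |A|] learned Q-values at the next state s'
    ref_logprobs_next: [B, |A|] log p_ckpt(. | s') from the frozen checkpoint
    """
    if conservative:
        # Equation 14: bootstrap with the greedy action of the fixed policy q
        a_hat = ref_logprobs_next.argmax(dim=-1)
    else:
        # Equation 13: standard optimality backup, argmax over the learned Q
        a_hat = q_next.argmax(dim=-1)
    bootstrap = q_next.gather(-1, a_hat.unsqueeze(-1)).squeeze(-1)
    return rewards + gamma * bootstrap.detach()
```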
We further comment on some properties of $\mathcal{B}^q$: Proposition 4.1 shows that $\mathcal{B}^q$ is a contraction, meaning there is a unique fixed point. This leads to Proposition 4.2, which motivates the development in Section 4.5.

Proposition 4.1. $\mathcal{B}^q$ is a $\gamma$-contraction in the $\ell_{\infty}$ norm.

Given our conservative Bellman operator, we could define our conservative temporal difference loss,
$$
\mathcal{L}_{Q}\left(Q_{\theta}; q\right) = \frac{1}{T}\, \mathbb{E}_{\left(s_{t}, a_{t}, r_{t}, s_{t+1}\right) \sim \mathcal{D}_{\text{train}}} \sum_{t=0}^{T} \left[ r_{t} + \gamma \operatorname{SG}\left(Q_{\theta}\left(s_{t+1}, \hat{a}_{t+1}\right)\right) - Q_{\theta}\left(s_{t}, a_{t}\right) \right]^{2}, \tag{15}
$$

where $\hat{a}_{t + 1} = \arg \max_{a}q(a|s_{t + 1})$ , and $Q_{\theta}(s,a) = \alpha [\ell_{\theta}(s,a) - \max_{a}\ell_{\theta}(s,a)] + V_{\theta}^{r}(s) + V_{\phi}(s)$ .
# 4.4 IMPLEMENTATION AND OPTIMIZATION
Architecture and parameterization recap. Following Le et al. (2022); Shojaee et al. (2023); Liu et al. (2023), we choose T5 (Raffel et al., 2020) as the base architecture for $\theta_{\mathrm{ckpt}}$, $\phi$ and $\theta$; $\theta_{\mathrm{ckpt}}$ is initialized with the publicly available CodeRL checkpoint. Specifically, $\theta_{\mathrm{ckpt}}$, $\phi$ and $\theta$ share the same encoder, and the encoder is frozen throughout to reduce the number of learnable parameters.

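A schematic of this parameterization, assuming generic encoder/decoder modules and linear value heads; the module names and shapes are placeholders rather than the released CodeT5-based implementation.

```python
import torch
import torch.nn as nn

class DuelingCodeModel(nn.Module):
    """Shared frozen encoder; trainable decoder logits l_theta plus value heads."""

    def __init__(self, encoder, decoder, d_model):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():      # encoder frozen throughout
            p.requires_grad_(False)
        self.decoder = decoder                   # produces logits l_theta(s, .)
        self.v_phi = nn.Linear(d_model, 1)       # pre-trained in the phi-stage, then frozen
        self.v_res = nn.Linear(d_model, 1)       # residual head V_theta^r, zero-initialized
        nn.init.zeros_(self.v_res.weight)
        nn.init.zeros_(self.v_res.bias)

    def q_values(self, hidden, logits, alpha=1.0):
        # hidden: [B, d_model] decoder state; logits: [B, |V|] l_theta(s, .)
        adv = alpha * (logits - logits.max(-1, keepdim=True).values)
        v = self.v_res(hidden) + self.v_phi(hidden)   # V_theta = V_theta^r + V_phi
        return adv + v
```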
Two-stage training. As noted earlier, our training is composed of two stages: a pre-training stage for $\phi$, namely the $\phi$-stage, and a fine-tuning stage for $\theta$, namely the $\theta$-stage. A pseudo-algorithm can be found in Appendix A, and further implementation details are deferred to Appendix H.

$\phi$ -stage: Given our development of Section 4.2, we pre-train $V_{\phi}$ function using stochastic gradient descent with $\nabla_{\phi} \mathcal{L}_V(V_{\phi}; \ell_{\theta_{\mathrm{ckpt}}})$ , with $\mathcal{L}_V$ defined in equation 12.
$\theta$-stage (fine-tuning): In this stage, we seek to optimize $Q_{\theta}$ to minimize our previously developed losses $\mathcal{L}_{\mathrm{adv}}$ and $\mathcal{L}_Q$, defined in equations 8 and 15, respectively. In addition, it is common practice to include a cross-entropy loss $\mathcal{L}_{\mathrm{ce}}$ during fine-tuning. We therefore arrive at the final loss function in equation 17, and $\theta$ is updated using stochastic gradient descent with $\nabla_{\theta}\mathcal{L}_{\mathrm{ft}}(Q_{\theta};p_{\theta_{\mathrm{ckpt}}})$.

$$
\text{Recall}:\quad Q_{\theta}(s, a) = A_{\theta}(s, a) + V_{\theta}(s) = \alpha\left(\ell_{\theta}(s, a) - \max_{a}\ell_{\theta}(s, a)\right) + V_{\theta}^{r}(s) + V_{\phi}(s), \tag{16}
$$

$$
\mathcal{L}_{\mathrm{ft}}\left(Q_{\theta}; p_{\theta_{\mathrm{ckpt}}}\right) = \mathcal{L}_{Q}\left(Q_{\theta}; p_{\theta_{\mathrm{ckpt}}}\right) + \beta_{\mathrm{adv}}\mathcal{L}_{\mathrm{adv}}\left(A_{\theta}\right) + \beta_{\mathrm{ce}}\mathcal{L}_{\mathrm{ce}}\left(\pi_{\theta}\right), \quad\text{where } \pi_{\theta} = \operatorname{softmax}\left(\frac{1}{\alpha}Q_{\theta}\right). \tag{17}
$$

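Assembling equation 17, here is a sketch of the $\theta$-stage objective in PyTorch-style pseudocode; the tensor layout, helper name, and the fact that $\mathcal{L}_{\mathrm{adv}}$ is passed in pre-computed (its form, equation 8, is not reproduced here) are assumptions of the sketch.

```python
import torch
import torch.nn.functional as F

def finetune_loss(q_t, q_tp1_hat, rewards, pi_logits, gt_tokens, l_adv,
                  beta_adv=1.0, beta_ce=1.0, gamma=1.0):
    """Equation 17 assembled from its three terms (toy batch form).

    q_t:       [B]      Q_theta(s_t, a_t)
    q_tp1_hat: [B]      Q_theta(s_{t+1}, argmax_a p_ckpt(a|s_{t+1})), cf. equation 14
    pi_logits: [B, |A|] Q_theta(s_t, .) / alpha, the logits of pi_theta
    gt_tokens: [B]      reference next tokens (long) for the cross-entropy term
    l_adv:     scalar   advantage regularizer of equation 8, computed elsewhere
    """
    # Conservative TD loss (equation 15) with a semi-gradient (detached) target
    l_q = F.mse_loss(q_t, rewards + gamma * q_tp1_hat.detach())
    # Cross-entropy of pi_theta = softmax(Q_theta / alpha) against reference tokens
    l_ce = F.cross_entropy(pi_logits, gt_tokens)
    return l_q + beta_adv * l_adv + beta_ce * l_ce
```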
# 4.5 A FREE REWARD MODEL
Reward modeling is crucial in language modeling and also in inverse RL (detailed discussions can be found in Appendix C). An intriguing finding from IRL, applicable to our framework, is that a trained $Q$-function can recover a reward function without additional training. Analogously to Garg et al. (2021), a one-to-one correspondence between $Q$ and the reward holds with our conservative Bellman operator $\mathcal{B}^q$. We define the inverse conservative Bellman operator $\mathcal{T}^q: \mathbb{R}^{\mathcal{S} \times \mathcal{A}} \to \mathbb{R}^{\mathcal{S} \times \mathcal{A}}$,

$$
\left(\mathcal{T}^{q} Q\right)(s, a) = Q(s, a) - \gamma \mathbb{E}_{s^{\prime}}\, Q\left(s^{\prime}, \arg\max_{a} q\left(a \mid s^{\prime}\right)\right). \tag{18}
$$

Proposition 4.2. The inverse conservative Bellman operator $\mathcal{T}^q$ is a bijection.
Proposition 4.2 shows that a $Q_{\theta}$ corresponds uniquely to a reward function $\tilde{r}_{\theta} \coloneqq \mathcal{T}^{q}Q_{\theta}$.<sup>1</sup> Given the definition of $\mathcal{T}^q$, we can thus recover a reward model $\tilde{r}_{\theta}$ from $Q_{\theta}$ without additional training:

$$
\tilde{r}_{\theta}(s, a) = Q_{\theta}(s, a) - \gamma \mathbb{E}_{s^{\prime}}\, Q_{\theta}\left(s^{\prime}, \arg\max_{a} p_{\theta_{\mathrm{ckpt}}}\left(a \mid s^{\prime}\right)\right) \approx Q_{\theta}(s, a) - \gamma V_{\theta}\left(s^{\prime}\right). \tag{19}
$$

We use the estimation $\tilde{r}_{\theta}(s,a)\approx Q_{\theta}(s,a) - \gamma V_{\theta}(s^{\prime})$ in practice, with reasons deferred to Appendix F.
Candidate selection with $\tilde{r}_{\theta}$. We leverage our reward model $\tilde{r}_{\theta}$ to select candidate programs, as an example of the additional benefits of value-based RL. We rank generated programs by the cumulative reward $\tilde{R}_{\theta}(W) := \sum_{t=0}^{T} \tilde{r}_{\theta}(s_t, a_t)$ predicted by our reward model $\tilde{r}_{\theta}$, to select the programs that are most likely to be correct. Specifically, for pass@k metrics, we follow the evaluation protocol used in CodeT (Chen et al., 2022), a work that considered program selection via automatically generated tests. This protocol computes pass@k by first generating $m$ programs and then selecting a subset of $k$ programs on which pass@k is evaluated. In our case, we select the $k$-sized subset with the top-$k$ highest $\tilde{R}_{\theta}(\cdot)$ out of the total of $m$ candidates. Our results in Section 5 follow this evaluation protocol.

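A sketch of this ranking protocol under the approximation of equation 19, assuming the per-token $Q_{\theta}(s_t,a_t)$ and $V_{\theta}(s_{t+1})$ have been cached during generation; the candidate count, program lengths, and tensor layout are toy assumptions.

```python
import torch

def program_score(q_taken, v_next, gamma=1.0):
    """Cumulative recovered reward R~_theta(W) = sum_t r~_theta(s_t, a_t),
    with r~_theta(s, a) ~= Q_theta(s, a) - gamma * V_theta(s') (equation 19)."""
    return (q_taken - gamma * v_next).sum(dim=-1)

# Toy example: m candidate programs, each of length T tokens
m, T = 8, 16
q_taken = torch.randn(m, T)          # Q_theta(s_t, a_t) of the generated tokens
v_next  = torch.randn(m, T)          # V_theta(s_{t+1}) along each trajectory

k = 3
scores = program_score(q_taken, v_next)
topk = scores.topk(k).indices        # indices of the k candidates to submit
```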
Table 1: Empirical evaluation on the APPS test set. ${}^{\dagger}$, ${}^{\ddagger}$ and ${}^{\ddagger\ddagger}$ indicate results duplicated from Le et al. (2022), Shojaee et al. (2023) and Liu et al. (2023), respectively. A bold number indicates the best result and an underlined number means our result is the second best. Intro, inter and comp stand for introductory, interview and competition, respectively.

<table><tr><td rowspan="2">Model</td><td rowspan="2"># trainable parameters</td><td colspan="4">Pass@1</td><td colspan="4">Pass@5</td><td colspan="4">Pass@1000</td></tr><tr><td>Intro</td><td>Inter</td><td>Comp</td><td>All</td><td>Intro</td><td>Inter</td><td>Comp</td><td>All</td><td>Intro</td><td>Inter</td><td>Comp</td><td>All</td></tr><tr><td>Codex†</td><td>12B</td><td>4.14</td><td>0.14</td><td>0.02</td><td>0.92</td><td>9.65</td><td>0.51</td><td>0.09</td><td>2.25</td><td>25.02</td><td>3.70</td><td>3.23</td><td>7.87</td></tr><tr><td>AlphaCode†</td><td>1B</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>17.67</td><td>5.24</td><td>7.06</td><td>8.09</td></tr><tr><td>GPT3†</td><td>175B</td><td>0.20</td><td>0.03</td><td>0.00</td><td>0.06</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>GPT2†</td><td>0.1B</td><td>1.00</td><td>0.33</td><td>0.00</td><td>0.40</td><td>2.70</td><td>0.73</td><td>0.00</td><td>1.02</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>GPT2†</td><td>1.5B</td><td>1.30</td><td>0.70</td><td>0.00</td><td>0.68</td><td>3.60</td><td>1.03</td><td>0.00</td><td>1.34</td><td>25.00</td><td>9.27</td><td>8.80</td><td>12.32</td></tr><tr><td>GPT-Neo†</td><td>2.7B</td><td>3.90</td><td>0.57</td><td>0.00</td><td>1.12</td><td>5.50</td><td>0.80</td><td>0.00</td><td>1.58</td><td>27.90</td><td>9.83</td><td>11.40</td><td>13.76</td></tr><tr><td>GPT-J†</td><td>6B</td><td>5.60</td><td>1.00</td><td>0.50</td><td>1.82</td><td>9.20</td><td>1.73</td><td>1.00</td><td>3.08</td><td>35.20</td><td>13.15</td><td>13.51</td><td>17.63</td></tr><tr><td colspan="14">RL based methods - without using example unit tests</td></tr><tr><td>CodeRL†</td><td>770M</td><td>6.20</td><td>1.50</td><td>0.30</td><td>2.20</td><td>9.39</td><td>1.90</td><td>0.42</td><td>3.10</td><td>35.30</td><td>13.33</td><td>13.60</td><td>17.78</td></tr><tr><td>PPOCeder‡</td><td>770M</td><td>5.20</td><td>1.00</td><td>0.50</td><td>1.74</td><td>9.10</td><td>2.50</td><td>1.20</td><td>3.56</td><td>35.20</td><td>13.35</td><td>13.90</td><td>17.77</td></tr><tr><td>RLTF‡‡</td><td>770M</td><td>4.16</td><td>0.97</td><td>0.20</td><td>1.45</td><td>10.12</td><td>2.65</td><td>0.82</td><td>3.78</td><td>38.30</td><td>15.13</td><td>15.90</td><td>19.92</td></tr><tr><td>B-Coder</td><td>≤770M/stage3</td><td>6.70</td><td>1.50</td><td>0.30</td><td>2.30</td><td>10.40</td><td>2.63</td><td>0.70</td><td>3.80</td><td>37.00</td><td>13.67</td><td>12.60</td><td>18.12</td></tr></table>
Remarks on $\tilde{r}_{\theta}$ . To further explain the motivation of ranking with $\tilde{r}_{\theta}$ , consider a realistic deployment setting where a fine-tuned model is deployed for end-user applications. Users often provide a language description of their needs but may not include test cases (which can also be challenging for beginners or casual users). Additionally, the model is usually required to offer a single best response instead of a range of options. Therefore, the ability to rank programs without true rewards is a desirable advantage.
To preview the effectiveness of $\tilde{r}_{\theta}$, we show the correlation between the environmental reward $r$ and our cumulative reward $\tilde{R}_{\theta}$. In Figure 3, the green region corresponds to correct programs and has the highest $\tilde{R}_{\theta}$ on average. Among incorrect programs, those with compile errors and runtime errors have the lowest and the second lowest $\tilde{R}_{\theta}$, respectively.

Figure 3: Kernel density estimation of $\tilde{R}_{\theta}(\cdot)$ evaluated on a collection of generated programs. The x-axis represents the predicted reward given by $\tilde{R}_{\theta}$ and the y-axis is its density. Color codes the true outcomes defined in equation 2.

Programs that can be executed but fail some tests have the second highest $\tilde{R}_{\theta}$. Hence, we conclude that $\tilde{R}_{\theta}$ has an evident positive correlation with the true reward $r$.

# 5 EMPIRICAL EVALUATION
Sampling using $Q_{\theta}$ . Nucleus sampling (top- $p$ sampling) (Holtzman et al., 2019) with sampling temperature² (Ackley et al., 1985) has been one of the most important sampling techniques. It can also be easily implemented in our framework. One could simply consider $Q_{\theta} / \alpha$ as logits and the sampling procedure would remain identical to standard LMs, see Appendix B for details.
APPS benchmark and baselines. In line with prior RL-based works (Le et al., 2022; Shojaee et al., 2023; Liu et al., 2023), we evaluate $\mathcal{B}$-Coder on the challenging code-contest benchmark APPS (Hendrycks et al., 2021). It contains 5,000 training and 5,000 testing problems, with three difficulty levels: introductory, interview and competition. We compare our $\mathcal{B}$-Coder with pre-trained or supervised fine-tuned LLM baselines: GPT2 (Radford et al., 2019), GPT3 (Brown et al., 2020), GPT-Neo (Black et al., 2021), GPT-J (Wang & Komatsuzaki, 2021), Codex (Chen et al., 2021a) and AlphaCode (Li et al., 2022); and RL fine-tuned baselines: CodeRL (Le et al., 2022), PPOCoder (Shojaee et al., 2023) and the concurrent work RLTF (Liu et al., 2023).

APPS: without example test outcomes. In the APPS dataset, each problem has several example unit tests (different from the hidden unit tests used for evaluation). These example tests are usually leveraged to refine generated samples. For example, CodeRL and RLTF consider a critic sampling (CS) strategy that refines and repairs generated programs based on the execution outcomes of the example tests. We start with experimental results in which example test outcomes are not used (hence the CodeRL and RLTF results in Table 1 are without CS). Table 1 shows that our $\mathcal{B}$-Coder has overall the best pass@$k$ for $k = \{1,5\}$ and achieves second place for $k = 1000$ (the best result is reported by the concurrent work RLTF). For the Table 1 results, we use nucleus sampling with a sampling temperature of 0.6. We set $m$ to 256 for $k = \{1,5\}$ and $m$ to 2500 for $k = 1000$, where $m$ is a hyper-parameter of our ranking protocol introduced in Section 4.5 (see Appendix I for an ablation study on $m$).

APPS: using example test outcomes. Table 2 lists the results using example tests. In addition to the CS strategy, which uses example tests to refine/repair programs, Li et al. (2022) and Chen et al. (2021a) consider a filtered setting, in which programs failing the example tests are excluded and pass@$k$ is evaluated using (a subset of) the programs that pass the example tests (this is also related to the $k$@$m$ metric (Li et al., 2022), the pass rate using $k$ submissions out of $m$ samples).

Table 2: APPS results when using example test outcomes.
<table><tr><td rowspan="2">Model</td><td colspan="4">Pass@1</td><td colspan="4">Pass@5</td></tr><tr><td>Intro</td><td>Inter</td><td>Comp</td><td>All</td><td>Intro</td><td>Inter</td><td>Comp</td><td>All</td></tr><tr><td>Codex† filtered</td><td>22.78</td><td>2.64</td><td>3.04</td><td>6.75</td><td>24.52</td><td>3.23</td><td>3.08</td><td>7.46</td></tr><tr><td>AlphaCode† filtered</td><td>-</td><td>-</td><td>-</td><td>-</td><td>14.36</td><td>5.63</td><td>4.58</td><td>7.17</td></tr><tr><td>CodeRL†cs</td><td>6.77</td><td>1.80</td><td>0.69</td><td>2.57</td><td>15.27</td><td>4.48</td><td>2.36</td><td>6.21</td></tr><tr><td>CodeRL†filtered</td><td>16.27</td><td>6.00</td><td>4.27</td><td>7.71</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>CodeRL†cs+filtered</td><td>16.52</td><td>6.16</td><td>4.15</td><td>7.83</td><td>24.49</td><td>8.58</td><td>7.82</td><td>11.61</td></tr><tr><td>RLTF‡‡cs</td><td>8.40</td><td>2.28</td><td>1.10</td><td>3.27</td><td>18.60</td><td>5.57</td><td>3.70</td><td>7.80</td></tr><tr><td>B-Coder filtered</td><td>18.00</td><td>6.63</td><td>2.30</td><td>8.04</td><td>23.30</td><td>8.83</td><td>6.40</td><td>11.30</td></tr></table>
We also test $\mathcal{B}$-Coder in this filtered setting. Similarly, we first exclude programs that fail the example tests. Suppose $n$ out of $m$ programs pass; we then follow our ranking protocol to select the top-$k$ out of the $n$ programs for evaluation. $\mathcal{B}$-Coder outperforms the baselines with either CS or the filtered setting for $k = \{1,5\}$. The baseline CodeRL+CS+filtered, which incorporates both strategies, achieves a slight advantage over $\mathcal{B}$-Coder for pass@5 while being surpassed by $\mathcal{B}$-Coder for pass@1. It is worth mentioning that CS is a plug-and-play component, which could also be combined with $\mathcal{B}$-Coder to further improve the pass rate. For the results in Table 2, we use a temperature of 0.4 and set $m$ to 1000, matching the $m$ used in Le et al. (2022).

Generalization ability. In addition, we test the generalization ability of our dual strategy of ranking with $\tilde{R}_{\theta}$. We study two aspects: generalization to other models and generalization to different domains. To this end, we designed the following experiments, which confirm its generalizability.

For the former, we generate (off-policy) programs using CodeRL (with $m = 256$ ), and rank those programs by $\tilde{R}_{\theta}$ . Table 3 shows our ranking strategy leads to improvements in most cases, even though the programs to be ranked are not generated by $\mathcal{B}$ -Coder.
For the latter, we test our dual strategy with another dataset MBPP (Austin et al., 2021) (with $m = 512$ ). Table 4 shows consistent improvements for all temperatures and $k$ .
Table 3: Generalization to CodeRL. Pass@k evaluated with the top-k ranked programs generated by CodeRL. The second value in each column pair indicates the absolute improvement achieved by ranking, compared to un-ranked pass@k.

<table><tr><td rowspan="2">k</td><td rowspan="2">Temp.</td><td colspan="8">Pass@k</td></tr><tr><td colspan="2">Intro</td><td colspan="2">Inter</td><td colspan="2">Comp</td><td colspan="2">All</td></tr><tr><td rowspan="2">1</td><td>0.4</td><td>6.30</td><td>1.91</td><td>1.27</td><td>0.37</td><td>0.50</td><td>0.37</td><td>2.12</td><td>0.68</td></tr><tr><td>0.6</td><td>6.00</td><td>2.13</td><td>1.23</td><td>0.42</td><td>0.50</td><td>0.36</td><td>2.04</td><td>0.75</td></tr><tr><td rowspan="2">5</td><td>0.4</td><td>9.30</td><td>-0.2</td><td>2.10</td><td>0.01</td><td>0.70</td><td>0.15</td><td>3.26</td><td>0.00</td></tr><tr><td>0.6</td><td>10.20</td><td>0.58</td><td>2.57</td><td>0.41</td><td>0.80</td><td>0.16</td><td>3.74</td><td>0.39</td></tr></table>
Table 4: Zero-shot pass@k on MBPP. The second value in each column pair indicates the absolute improvement achieved by ranking.

<table><tr><td>Temp.</td><td colspan="2">k=1</td><td colspan="2">k=5</td><td colspan="2">k=10</td><td colspan="2">k=80</td></tr><tr><td>0.7</td><td>20.13</td><td>6.61</td><td>37.04</td><td>5.61</td><td>44.45</td><td>4.63</td><td>64.00</td><td>1.41</td></tr><tr><td>0.8</td><td>18.89</td><td>6.99</td><td>36.59</td><td>7.21</td><td>44.46</td><td>6.59</td><td>65.20</td><td>4.28</td></tr><tr><td>0.9</td><td>17.32</td><td>7.34</td><td>35.04</td><td>8.58</td><td>43.15</td><td>8.22</td><td>63.20</td><td>4.33</td></tr></table>
# 6 CONCLUSION
In this work, we explore the feasibility of value-based RL algorithms for the program synthesis task. We demonstrate how to stabilize and accelerate training through $Q$-function initialization and conservative updates. Moreover, our work is conducted with minimal reward engineering effort, thereby placing the emphasis on algorithm design. While policy-based algorithms remain mainstream in the current program synthesis literature, the question of how to effectively leverage off-policy programs, including historical synthetic samples, in a principled way is still under-explored. We are convinced that value-based RL offers a promising direction to address this question, and thereby to scale RL for code generation by (re-)using the extensive collection of off-policy programs. Our work can thus serve as an important initial step in this direction.

# REFERENCES

Pieter Abbeel and Andrew Y Ng. Apprenticeship learning via inverse reinforcement learning. In Proceedings of the twenty-first international conference on Machine learning, pp. 1, 2004.
Joshua Achiam, David Held, Aviv Tamar, and Pieter Abbeel. Constrained policy optimization. In International conference on machine learning, pp. 22-31. PMLR, 2017.
David H Ackley, Geoffrey E Hinton, and Terrence J Sejnowski. A learning algorithm for Boltzmann machines. Cognitive science, 9(1):147-169, 1985.
Rishabh Agarwal, Dale Schuurmans, and Mohammad Norouzi. An optimistic perspective on offline reinforcement learning. In International Conference on Machine Learning, pp. 104-114. PMLR, 2020.
Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. Palm 2 technical report. arXiv preprint arXiv:2305.10403, 2023.
Kai Arulkumaran, Antoine Cully, and Julian Togelius. Alphastar: An evolutionary computation perspective. In Proceedings of the genetic and evolutionary computation conference companion, pp. 314-315, 2019.
Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with large language models. arXiv preprint arXiv:2108.07732, 2021.
Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022a.
Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional ai: Harmlessness from ai feedback. arXiv preprint arXiv:2212.08073, 2022b.
Marc G Bellemare, Will Dabney, and Rémi Munos. A distributional perspective on reinforcement learning. In International conference on machine learning, pp. 449-458. PMLR, 2017.
Richard Bellman. Dynamic programming. Science, 153(3731):34-37, 1966.
Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. Scheduled sampling for sequence prediction with recurrent neural networks. Advances in neural information processing systems, 28, 2015.
Sid Black, Leo Gao, Phil Wang, Connor Leahy, and Stella Biderman. GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow, March 2021. URL https://doi.org/10.5281/zenodo.5297715.
David Brandfonbrener, Will Whitney, Rajesh Ranganath, and Joan Bruna. Offline rl without off-policy evaluation. Advances in neural information processing systems, 34:4933-4946, 2021.
Noam Brown and Tuomas Sandholm. Superhuman ai for heads-up no-limit poker: Libratus beats top professionals. Science, 359(6374):418-424, 2018.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901, 2020.
Bei Chen, Fengji Zhang, Anh Nguyen, Daoguang Zan, Zeqi Lin, Jian-Guang Lou, and Weizhu Chen. Codet: Code generation with generated tests. In The Eleventh International Conference on Learning Representations, 2022.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021a.
Xinyun Chen, Chang Liu, and Dawn Song. Execution-guided neural program synthesis. In International Conference on Learning Representations, 2018.
Xinyun Chen, Dawn Song, and Yuandong Tian. Latent execution for neural program synthesis beyond domain-specific languages. Advances in Neural Information Processing Systems, 34:22196-22208, 2021b.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.
Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. Advances in neural information processing systems, 30, 2017.
Thomas Degris, Martha White, and Richard S Sutton. Off-policy actor-critic. arXiv preprint arXiv:1205.4839, 2012.
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, and Tong Zhang. Lmflow: An extensible toolkit for finetuning and inference of large foundation models. arXiv preprint arXiv:2306.12420, 2023.
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, and Pieter Abbeel. Benchmarking deep reinforcement learning for continuous control. In International conference on machine learning, pp. 1329-1338. PMLR, 2016.
Gabriel Dulac-Arnold, Richard Evans, Hado van Hasselt, Peter Sunehag, Timothy Lillicrap, Jonathan Hunt, Timothy Mann, Theophane Weber, Thomas Degris, and Ben Coppin. Deep reinforcement learning in large discrete action spaces. arXiv preprint arXiv:1512.07679, 2015.
Kevin Ellis, Maxwell Nye, Yewen Pu, Felix Sosa, Josh Tenenbaum, and Armando Solar-Lezama. Write, execute, assess: Program synthesis with a repl. Advances in Neural Information Processing Systems, 32, 2019.
Jonathan Frankle, Peter-Michael Osera, David Walker, and Steve Zdancewic. Example-directed synthesis: a type-theoretic interpretation. ACM Sigplan Notices, 51(1):802-815, 2016.
Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Scott Yih, Luke Zettlemoyer, and Mike Lewis. Incoder: A generative model for code infilling and synthesis. In The Eleventh International Conference on Learning Representations, 2022.
Scott Fujimoto, Herke Hoof, and David Meger. Addressing function approximation error in actor-critic methods. In International conference on machine learning, pp. 1587-1596. PMLR, 2018.
Divyansh Garg, Shuvam Chakraborty, Chris Cundy, Jiaming Song, and Stefano Ermon. Iq-learn: Inverse soft-q learning for imitation. Advances in Neural Information Processing Systems, 34:4028-4039, 2021.
Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E Turner, and Sergey Levine. Q-prop: Sample-efficient policy gradient with an off-policy critic. arXiv preprint arXiv:1611.02247, 2016.
Caglar Gulcehre, Tom Le Paine, Srivatsan Srinivasan, Ksenia Konyushkova, Lotte Weerts, Abhishek Sharma, Aditya Siddhant, Alex Ahern, Miaosen Wang, Chenjie Gu, et al. Reinforced self-training (rest) for language modeling. arXiv preprint arXiv:2308.08998, 2023.
Sumit Gulwani, William R Harris, and Rishabh Singh. Spreadsheet data manipulation using examples. Communications of the ACM, 55(8):97-105, 2012.
Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, and Sergey Levine. Reinforcement learning with deep energy-based policies. In International conference on machine learning, pp. 1352-1361. PMLR, 2017.
Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In International conference on machine learning, pp. 1861-1870. PMLR, 2018.
Frank S He, Yang Liu, Alexander G Schwing, and Jian Peng. Learning to play in a day: Faster deep reinforcement learning by optimality tightening. In International Conference on Learning Representations, 2016.
Dan Hendrycks, Steven Basart, Saurav Kadavath, Mantas Mazeika, Akul Arora, Ethan Guo, Collin Burns, Samir Puranik, Horace He, Dawn Song, et al. Measuring coding challenge competence with apps. arXiv preprint arXiv:2105.09938, 2021.
Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. Advances in neural information processing systems, 29, 2016.
Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556, 2022.
Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. The curious case of neural text degeneration. In International Conference on Learning Representations, 2019.
Ehsan Imani, Eric Graves, and Martha White. An off-policy policy gradient theorem using emphatic weightings. Advances in Neural Information Processing Systems, 31, 2018.
Alexis Jacq, Matthieu Geist, Ana Paiva, and Olivier Pietquin. Learning from a learner. In International Conference on Machine Learning, pp. 2990-2999. PMLR, 2019.
Nan Jiang and Jiawei Huang. Minimax value interval for off-policy evaluation and policy optimization. Advances in Neural Information Processing Systems, 33:2747-2758, 2020.
Dmitry Kalashnikov, Alex Irpan, Peter Pastor, Julian Ibarz, Alexander Herzog, Eric Jang, Deirdre Quillen, Ethan Holly, Mrinal Kalakrishnan, Vincent Vanhoucke, et al. Scalable deep reinforcement learning for vision-based robotic manipulation. In Conference on Robot Learning, pp. 651-673. PMLR, 2018.
Alex Kendall, Jeffrey Hawke, David Janz, Przemyslaw Mazur, Daniele Reda, John-Mark Allen, Vinh-Dieu Lam, Alex Bewley, and Amar Shah. Learning to drive in a day. In 2019 International Conference on Robotics and Automation (ICRA), pp. 8248-8254. IEEE, 2019.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT, pp. 4171-4186, 2019.
Vijay Konda and John Tsitsiklis. Actor-critic algorithms. Advances in neural information processing systems, 12, 1999.
Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. Conservative q-learning for offline reinforcement learning. Advances in Neural Information Processing Systems, 33:1179-1191, 2020.
Hung Le, Yue Wang, Akhilesh Deepak Gotmare, Silvio Savarese, and Steven Chu Hong Hoi. Coderl: Mastering code generation through pretrained models and deep reinforcement learning. Advances in Neural Information Processing Systems, 35:21314-21328, 2022.
Sergey Levine, Aviral Kumar, George Tucker, and Justin Fu. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. arXiv preprint arXiv:2005.01643, 2020.
Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, et al. Starcoder: may the source be with you! arXiv preprint arXiv:2305.06161, 2023.
Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, et al. Competition-level code generation with alphacode. Science, 378(6624):1092-1097, 2022.
Chin-Yew Lin. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pp. 74-81, 2004.
Jiate Liu, Yiqin Zhu, Kaiwen Xiao, Qiang Fu, Xiao Han, Wei Yang, and Deheng Ye. Rltf: Reinforcement learning from unit test feedback. arXiv preprint arXiv:2307.04349, 2023.
Yao Liu, Adith Swaminathan, Alekh Agarwal, and Emma Brunskill. Off-policy policy gradient with stationary distribution correction. In Uncertainty in artificial intelligence, pp. 1180-1190. PMLR, 2020.
Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations, 2018.
Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, and Daxin Jiang. Wizardcoder: Empowering code large language models with evol-instruct. arXiv preprint arXiv:2306.08568, 2023.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
Matej Moravčík, Martin Schmid, Neil Burch, Viliam Lisý, Dustin Morrill, Nolan Bard, Trevor Davis, Kevin Waugh, Michael Johanson, and Michael Bowling. Deepstack: Expert-level artificial intelligence in heads-up no-limit poker. Science, 356(6337):508-513, 2017.
Ofir Nachum, Mohammad Norouzi, Kelvin Xu, and Dale Schuurmans. Bridging the gap between value and policy based reinforcement learning. Advances in neural information processing systems, 30, 2017.
Ashvin V Nair, Vitchyr Pong, Murtaza Dalal, Shikhar Bahl, Steven Lin, and Sergey Levine. Visual reinforcement learning with imagined goals. Advances in neural information processing systems, 31, 2018.
Andrew Y Ng, Stuart Russell, et al. Algorithms for inverse reinforcement learning. In ICML, volume 1, pp. 2, 2000.
Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. Codegen: An open large language model for code with multi-turn program synthesis. In The Eleventh International Conference on Learning Representations, 2022.
OpenAI. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
Peter-Michael Osera and Steve Zdancewic. Type-and-example-directed program synthesis. ACM SIGPLAN Notices, 50(6):619-630, 2015.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730-27744, 2022.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pp. 311-318, 2002.
Maxim Rabinovich, Mitchell Stern, and Dan Klein. Abstract syntax networks for code generation and semantic parsing. arXiv preprint arXiv:1704.07535, 2017.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. Scaling language models: Methods, analysis & insights from training gopher. arXiv preprint arXiv:2112.11446, 2021.
Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D Manning, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. arXiv preprint arXiv:2305.18290, 2023.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485-5551, 2020.
Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732, 2015.
Steven J Rennie, Etienne Marcheret, Youssef Mroueh, Jerret Ross, and Vaibhava Goel. Self-critical sequence training for image captioning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 7008-7024, 2017.
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, et al. Code llama: Open foundation models for code. arXiv preprint arXiv:2308.12950, 2023.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
Parshin Shojaee, Aneesh Jain, Sindhu Tipirneni, and Chandan K Reddy. Execution-based code generation using deep reinforcement learning. arXiv preprint arXiv:2301.13816, 2023.
David Silver. Lecture 7: Policy gradient. UCL Course on RL, 2015.
David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016.
Riley Simmons-Edler, Anders Miltner, and Sebastian Seung. Program synthesis through reinforcement learning guided tree search. arXiv preprint arXiv:1806.02932, 2018.
Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. Learning to summarize with human feedback. Advances in Neural Information Processing Systems, 33:3008-3021, 2020.
Phillip D Summers. A methodology for lisp program construction from examples. Journal of the ACM (JACM), 24(1):161-175, 1977.
Arash Tavakoli, Fabio Pardo, and Petar Kormushev. Action branching architectures for deep reinforcement learning. In Proceedings of the AAAI conference on artificial intelligence, volume 32, 2018.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
Hado Van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double q-learning. In Proceedings of the AAAI conference on artificial intelligence, volume 30, 2016.
Ben Wang and Aran Komatsuzaki. GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model. https://github.com/kingofflolz/mesh-transformer-jax, May 2021.
Xin Wang, Yasheng Wang, Yao Wan, Fei Mi, Yitong Li, Pingyi Zhou, Jin Liu, Hao Wu, Xin Jiang, and Qun Liu. Compilable neural code generation with compiler feedback. In Findings of the Association for Computational Linguistics: ACL 2022, pp. 9-19, 2022.
Yue Wang, Weishi Wang, Shafiq Joty, and Steven CH Hoi. Codet5: Identifier-aware unified pretrained encoder-decoder models for code understanding and generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 8696-8708, 2021.
Yue Wang, Hung Le, Akhilesh Deepak Gotmare, Nghi DQ Bui, Junnan Li, and Steven CH Hoi. Codet5+: Open code large language models for code understanding and generation. arXiv preprint arXiv:2305.07922, 2023.
Ziyu Wang, Tom Schaul, Matteo Hessel, Hado Hasselt, Marc Lanctot, and Nando Freitas. Dueling network architectures for deep reinforcement learning. In International conference on machine learning, pp. 1995-2003. PMLR, 2016.
Christopher JCH Watkins and Peter Dayan. Q-learning. Machine learning, 8:279-292, 1992.
Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8:229-256, 1992.
Tengyang Xie, Ching-An Cheng, Nan Jiang, Paul Mineiro, and Alekh Agarwal. Bellman-consistent pessimism for offline reinforcement learning. Advances in neural information processing systems, 34:6683-6694, 2021a.
Tengyang Xie, Nan Jiang, Huan Wang, Caiming Xiong, and Yu Bai. Policy finetuning: Bridging sample-efficient offline and online reinforcement learning. Advances in neural information processing systems, 34:27395-27407, 2021b.
Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. Wizardlm: Empowering large language models to follow complex instructions. arXiv preprint arXiv:2304.12244, 2023.
Michihiro Yasunaga and Percy Liang. Graph-based, self-supervised program repair from diagnostic feedback. In International Conference on Machine Learning, pp. 10799-10808. PMLR, 2020.
Zishun Yu and Xinhua Zhang. Actor-critic alignment for offline-to-online reinforcement learning. In Proceedings of the 40th International Conference on Machine Learning, volume 202, pp. 40452-40474, 2023.
Wenhao Zhan, Baihe Huang, Audrey Huang, Nan Jiang, and Jason Lee. Offline reinforcement learning with realizability and single-policy concentrability. In Conference on Learning Theory, pp. 2730-2775. PMLR, 2022.
Qinkai Zheng, Xiao Xia, Xu Zou, Yuxiao Dong, Shan Wang, Yufei Xue, Zihan Wang, Lei Shen, Andi Wang, Yang Li, et al. Codegeex: A pre-trained model for code generation with multilingual evaluations on humaneval-x. arXiv preprint arXiv:2303.17568, 2023.
Victor Zhong, Caiming Xiong, and Richard Socher. Seq2sql: Generating structured queries from natural language using reinforcement learning. arXiv preprint arXiv:1709.00103, 2017.
Brian D Ziebart, Andrew L Maas, J Andrew Bagnell, Anind K Dey, et al. Maximum entropy inverse reinforcement learning. In AAAI, volume 8, pp. 1433-1438. Chicago, IL, USA, 2008.
Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593, 2019.
Martin Zinkevich, Michael Johanson, Michael Bowling, and Carmelo Piccione. Regret minimization in games with incomplete information. Advances in neural information processing systems, 20, 2007.
Amit Zohar and Lior Wolf. Automatic program synthesis of long programs with a learned garbage collector. Advances in neural information processing systems, 31, 2018.

# A PSEUDO-CODE FOR TRAINING
Algorithm 1 Training Procedure with $\phi$- and $\theta$-stages
Require: $\theta_{\mathrm{ckpt}}$, $\phi$, and $\theta$ with a shared frozen encoder
1: # pre-training stage, update $\phi$ only
2: procedure PRETRAINVVALUE($\phi$) ▷ $\phi$-stage
3: for num_iters do
4: Draw sample $(s,a,r,s^{\prime})$ from dataset
5: Compute logits $\ell_{\theta_{\mathrm{ckpt}}}(s,\cdot)$
6: Compute state value $V_{\phi}(s)$
7: Compute loss $\mathcal{L}_V(V_\phi ;\ell_{\theta_{\mathrm{ckpt}}})$ ▷ arguments omitted for brevity
8: Gradient step with $\nabla_{\phi}\mathcal{L}_{V}(V_{\phi};\ell_{\theta_{\mathrm{ckpt}}})$ ▷ equation 12
9: end for
10: end procedure
11: # fine-tuning stage, update $\theta$ only
12: procedure FINETUNEQVALUE($\theta$) ▷ $\theta$-stage
13: for num_iters do
14: Draw sample $(s,a,r,s^{\prime})$ from dataset
15: Compute residual state-value $V_{\theta}^{r}(s)$
16: Compute pre-trained state-value $V_{\phi}(s)$
17: Compute state-value $V_{\theta}(s) = V_{\theta}^{r}(s) + V_{\phi}(s)$
18: Compute advantage $A_{\theta}(s,\cdot) = \ell_{\theta}(s,\cdot) - \max_{a}\ell_{\theta}(s,a)$
19: Compute $Q_{\theta}(s,\cdot) = \alpha A_{\theta}(s,\cdot) + V_{\theta}(s)$ ▷ equation 16
20: Compute $\pi_{\theta}(\cdot |s) = \mathrm{softmax}(Q_{\theta}(s,\cdot) / \alpha)$
21: Compute $p_{\theta_{\mathrm{ckpt}}}(\cdot |s)$ ▷ equation 14
22: Compute $\mathcal{L}_Q(Q_\theta ;p_{\theta_{\mathrm{ckpt}}})$ ▷ equation 15
23: Compute $\mathcal{L}_{\mathrm{ce}}(\pi_{\theta})$ and $\mathcal{L}_{\mathrm{adv}}(A_\theta)$ ▷ equations 1 and 8
24: Compute fine-tuning loss $\mathcal{L}_{\mathrm{ft}}(Q_\theta ;p_{\theta_{\mathrm{ckpt}}}) = \mathcal{L}_Q(Q_\theta ;p_{\theta_{\mathrm{ckpt}}}) + \beta_{\mathrm{ce}}\mathcal{L}_{\mathrm{ce}}(\pi_\theta) + \beta_{\mathrm{adv}}\mathcal{L}_{\mathrm{adv}}(A_\theta)$
25: Gradient step with $\nabla_{\theta}\mathcal{L}_{\mathrm{ft}}(Q_\theta ;p_{\theta_{\mathrm{ckpt}}})$
26: end for
27: end procedure

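A compact driver for the two stages in Algorithm 1, reusing loss helpers like the sketches above; `dataloader`, `loss_fn` and the optimizer settings are hypothetical stand-ins, and only the parameter group passed in receives updates in each stage.

```python
import torch

def run_stage(parameters, dataloader, loss_fn, num_iters, lr=1e-5):
    """Generic stage runner: the phi-stage passes V_phi's parameters and L_V,
    the theta-stage passes theta's parameters and L_ft."""
    optimizer = torch.optim.AdamW(parameters, lr=lr)
    for _, batch in zip(range(num_iters), dataloader):
        loss = loss_fn(batch)      # e.g. equation 12 (phi-stage) or 17 (theta-stage)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```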
# B PSEUDO-CODE FOR SAMPLING
Algorithm 2 Sampling Procedure
Require: model parameters $\theta, \phi$; $\mathrm{SAMPLER}_{p,t}(\cdot): \mathbb{R}^{1 \times |\mathcal{V}|} \to \mathcal{V}$ that maps a logits vector to a token, with hyper-parameters $p$ (top-$p$ sampling) and temperature $t$
1: procedure SAMPLEONETOKEN($s$)
2: Obtain current state $s$
3: Compute logits vector $\ell_{\theta}(s) \in \mathbb{R}^{1 \times |\mathcal{V}|}$
4: Compute advantage vector $\mathbf{A}_{\theta}(s) = \ell_{\theta}(s) - \max_{a} \ell_{\theta}(s)[a]$
5: Compute $V_{\theta}(s) = V_{\theta}^{r}(s) + V_{\phi}(s)$
6: Compute $Q$ vector $\mathbf{Q}_{\theta}(s) = \alpha \mathbf{A}_{\theta}(s) + V_{\theta}(s)$
7: Run $\mathrm{SAMPLER}_{p,t}(\mathbf{Q}_{\theta}(s)/\alpha)$ ▷ sample with $\mathbf{Q}_{\theta}(s)/\alpha$
8: end procedure

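A self-contained toy version of Algorithm 2's sampler, treating $\mathbf{Q}_{\theta}(s)/\alpha$ as ordinary logits and applying temperature plus top-$p$ (nucleus) filtering; the cutoff rule and hyper-parameter values are illustrative, not the exact released sampler.

```python
import torch

def sample_token(q_vector, alpha=1.0, temperature=0.6, top_p=0.95):
    """Nucleus sampling with Q_theta(s)/alpha playing the role of LM logits."""
    logits = q_vector / (alpha * temperature)
    probs = torch.softmax(logits, dim=-1)
    # Keep the smallest set of tokens whose cumulative probability exceeds top_p
    sorted_probs, sorted_idx = probs.sort(descending=True)
    cdf = sorted_probs.cumsum(dim=-1)
    cutoff = (cdf - sorted_probs) >= top_p       # tokens strictly past the nucleus
    sorted_probs = sorted_probs.masked_fill(cutoff, 0.0)
    sorted_probs = sorted_probs / sorted_probs.sum()
    choice = torch.multinomial(sorted_probs, num_samples=1)
    return sorted_idx[choice].item()

token = sample_token(torch.randn(32000))         # toy vocabulary of 32k entries
```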
# C ADDITIONAL RELATED WORKS
Off-policy policy-based methods. One strand of off-policy policy-based methods is based on importance ratios. Suppose the data is collected by a behavior policy $\beta$; the policy gradient with off-policy data can then be corrected by $\nabla_{\mu}J(\mu) = \mathbb{E}_{\beta}[\frac{\pi_{\mu}(a_t|s_t)}{\beta(a_t|s_t)} (\sum_{i=t}^{T}\gamma^{i-t}r_i)\nabla_{\mu}\log \pi_{\mu}(a_t|s_t)]$. This yields an unbiased gradient even though the data distribution is off-policy. However, computing the ratio $\pi_{\mu}(a|s) / \beta(a|s)$ is not always feasible, as the density function of off-policy data, such as human data, is often unknown. In addition, this correction can lead to high variance due to the product of ratios along trajectories.

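For concreteness, a sketch of this importance-ratio correction on a single toy trajectory; the behavior-policy probabilities `beta_probs` are assumed to be known, which, as noted above, is often not the case for human-written programs.

```python
import torch

def off_policy_pg_loss(logprobs_pi, beta_probs, rewards, gamma=1.0):
    """REINFORCE-style off-policy gradient with per-step importance ratios.

    logprobs_pi: [T] log pi_mu(a_t | s_t) of the trainable policy (with grad)
    beta_probs:  [T] beta(a_t | s_t) of the behavior policy (fixed, known)
    rewards:     [T] per-step rewards r_t
    """
    T = rewards.shape[0]
    # Reward-to-go: sum_{i >= t} gamma^{i-t} r_i
    returns = torch.zeros(T)
    running = 0.0
    for t in reversed(range(T)):
        running = rewards[t] + gamma * running
        returns[t] = running
    ratios = (logprobs_pi.exp() / beta_probs).detach()   # pi(a|s) / beta(a|s)
    # Negative objective so that minimizing it ascends the PG estimate
    return -(ratios * returns * logprobs_pi).mean()
```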
While vanilla importance-weighted off-policy PG does not require approximating value functions, more advanced ratio-based methods often incorporate value functions (Imani et al., 2018; Liu et al., 2020). Another viable approach is the direct combination of value-based and policy-based methods, often referred to as the actor-critic framework (e.g. Konda & Tsitsiklis, 1999; Degris et al., 2012). Although actor-critic methods are often considered a third category besides policy-based and value-based, we, like some other works (Fujimoto et al., 2018), lean towards categorizing actor-critic as more value-based, since the major difficulty lies in value function approximation. Nevertheless, both directions of extending policy-based methods to the off-policy setting rely largely on value functions, which underscores the motivation and significance of our work.

Reward modeling and beyond. Owing to the successes of reinforcement learning from human/AI feedback (Christiano et al., 2017; Bai et al., 2022b), reward modeling and RL fine-tuning with a learned reward model have become a popular choice for post-SFT (supervised fine-tuning) refinement (see e.g. Ziegler et al., 2019; Stiennon et al., 2020; Bai et al., 2022a; Ouyang et al., 2022). In particular, in program synthesis, Le et al. (2022) train a classifier that predicts unit test outcomes as their reward model for RL fine-tuning. However, reward models can be expensive to train, and their quality can heavily impact RL fine-tuning performance. Recent works (e.g. Rafailov et al., 2023; Diao et al., 2023) explore preference learning beyond the conventional reward model.

Modeling the reward function, on the other hand, has been a long-standing topic in inverse RL and imitation learning (IRL or IL, see e.g. Ng et al., 2000; Abbeel & Ng, 2004; Ziebart et al., 2008; Ho & Ermon, 2016). While conventional IRL/IL often iterates between a reward-model fitting stage and an RL training stage, recent IL works (Jacq et al., 2019; Garg et al., 2021) explore alternatives to explicit reward modeling, to reduce the training instability and optimization difficulty caused by the iterative optimization scheme. Specifically, Garg et al. (2021) leverage the one-to-one correspondence between the $Q$-function and the reward model, given the soft Bellman operator, to eliminate the reward-fitting step.

Candidate selection in program synthesis. Existing works have shown that one can improve the program pass rate by filtering out programs that are likely to be incorrect. For instance, Chen et al. (2021a) filter out programs that cannot pass the example unit tests given in doc-strings, and Chen et al. (2022) filter out programs that cannot pass generated unit tests. Furthermore, reward models are also often used to rank candidate programs (see e.g. Gulcehre et al., 2023; Touvron et al., 2023).

# D A SPECTRUM OF RL APPLICATIONS
To conceptually demonstrate the differences between policy-based and value-based methods, and why program synthesis might be well-suited to value-based approaches, Figure 4 presents a spectrum of RL applications. It can be observed that in scenarios where rewards are not expensive to evaluate, or where there is plenty of off-policy data (data not generated by the current policy/model), value-based methods tend to be preferred. Consider, for instance, InstructGPT (Ouyang et al., 2022) (policy-based) and AlphaGo (Silver et al., 2016) (value-based). The former relies on human annotators (expensive) to label model-generated (on-policy) responses, while the latter obtains rewards from simulators (cheap) and leverages (1) human expert games (off-policy) during training and (2) re-use of historical games (off-policy) through experience replay.

Table 5 provides explanations for the application plot in Figure 4. Applications in games typically find it easy to obtain rewards and make extensive use of off-policy data, e.g. human games or historical replays. Conversely, InstructGPT obtains its rewards from preferences labeled by human annotators, with the data predominantly generated by the GPT model itself. The self-driving application notably has a high cost of gathering rewards, due to the risks of real-world driving.

Figure 4: A collection of RL applications; the two marker types denote value-based and policy-based RL, respectively. The x-axis shows the difficulty of obtaining rewards, while the y-axis measures the amount of off-policy data. Tasks that face significant hurdles in gathering rewards or have limited off-policy data typically lean towards policy-based algorithms; tasks where rewards are more readily obtained, or that benefit from a substantial collection of off-policy data, favor value-based methods. See descriptions of each task in Table 5.

While existing driving data could be utilized, Kendall et al. (2019) specifically chose not to use pre-collected data, leading to their choice of a policy-based algorithm.

In code generation, despite the availability of cheap rewards and the existing collection of off-policy programs, whether human-written or historical synthetic programs, current literature leans towards policy-based methods. We believe that value-based methods could be a promising direction, given their similarity to tasks with simulators.
Table 5: Summary of RL applications.
<table><tr><td></td><td>References</td><td>Type of RL</td><td>Costs of Getting Rewards</td><td>Available Off-Policy Data</td></tr><tr><td>Atari</td><td>(Mnih et al., 2013)</td><td>value</td><td>cheap: simulator</td><td>extensive: history/human games</td></tr><tr><td>GO</td><td>(Silver et al., 2016)</td><td>value</td><td>cheap: simulator</td><td>extensive: history/human games</td></tr><tr><td>Poker</td><td>(Moravčík et al., 2017)(Brown & Sandholm, 2018)</td><td>value4</td><td>cheap: simulator</td><td>extensive: history/human games</td></tr><tr><td>StarCraft II</td><td>(Arulkumaran et al., 2019)</td><td>value</td><td>cheap: simulator</td><td>extensive: history/human games</td></tr><tr><td>InstructGPT</td><td>(Ouyang et al., 2022)</td><td>policy</td><td>expensive: human annotators</td><td>limited: mostly model-generated data</td></tr><tr><td>ImageCaption</td><td>(Ranzato et al., 2015)(Rennie et al., 2017)</td><td>policy</td><td>cheap: automatic metrics</td><td>limited: mostly model-generated data</td></tr><tr><td>Self-driving</td><td>(Kendall et al., 2019)</td><td>policy</td><td>expensive: driving in real-world</td><td>limited: mostly model-generated data</td></tr><tr><td>Code Generation</td><td>(Le et al., 2022)(Shojaee et al., 2023)(Liu et al., 2023)</td><td>policy</td><td>cheap: unit testing</td><td>extensive: collection of human programs</td></tr></table>
|
| 482 |
+
|
| 483 |
+
# E REWARD ENGINEERING COMPARISON
|
| 484 |
+
|
| 485 |
+
Table 6 shows that our method involves the least reward engineering effort. Note that our reward model $\tilde{r}_{\theta}$ is directly derived from $Q_{\theta}$ and is not used for training.
|
| 486 |
+
|
| 487 |
+
Table 7 shows the results when only the basic reward function (defined in equation 2) is used, under the setting with no example test outcomes. CodeRL and RLTF results are duplicated from their reports.
|
| 488 |
+
|
| 489 |
+
Table 6: Comparison of reward designs
|
| 490 |
+
|
| 491 |
+
<table><tr><td>Reward</td><td>Remark</td><td>Ours</td><td>CodeRL</td><td>RLTF</td><td>PPOCoder</td></tr><tr><td>Basic</td><td>equation 2</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td></tr><tr><td>Reward Model</td><td>learned reward model</td><td></td><td>✓</td><td></td><td></td></tr><tr><td>Fine-Grained</td><td>fine-grained error type & location of error</td><td></td><td></td><td>✓</td><td></td></tr><tr><td>Adaptive</td><td>ratio of passed tests</td><td></td><td></td><td>✓</td><td></td></tr><tr><td>Syntactic Correctness</td><td>compilable</td><td></td><td></td><td></td><td>✓</td></tr><tr><td>Syntactic Matching</td><td>syntactic similarity to ground truth</td><td></td><td></td><td></td><td>✓</td></tr><tr><td>Semantic Matching</td><td>semantic similarity to ground truth</td><td></td><td></td><td></td><td>✓</td></tr></table>
|
| 492 |
+
|
| 493 |
+
Table 7: Performance with only basic reward (equation 2). ${}^{ \dagger }$ and ${}^{\ddagger \ddagger }$ indicates results duplicated from Le et al. (2022) and Liu et al. (2023), respectively.
|
| 494 |
+
|
| 495 |
+
<table><tr><td rowspan="2">Model</td><td colspan="4">Pass@1</td><td colspan="4">Pass@5</td></tr><tr><td>Intro</td><td>Inter</td><td>Comp</td><td>All</td><td>Intro</td><td>Inter</td><td>Comp</td><td>All</td></tr><tr><td>CodeRL†</td><td>4.60</td><td>1.10</td><td>0.20</td><td>1.62</td><td>7.10</td><td>1.57</td><td>0.40</td><td>2.44</td></tr><tr><td>RLTF‡‡</td><td>-</td><td>-</td><td>-</td><td>1.37</td><td>-</td><td>-</td><td>-</td><td>3.50</td></tr><tr><td>B-Coder</td><td>6.70</td><td>1.50</td><td>0.30</td><td>2.30</td><td>10.40</td><td>2.63</td><td>0.70</td><td>3.80</td></tr></table>
|
| 496 |
+
|
| 497 |
+
# F ADVANTAGE OF APPROXIMATE VERSION OF $\tilde{r}$
|
| 498 |
+
|
| 499 |
+
Recall that our recovered reward $\tilde{r}$ is computed by
|
| 500 |
+
|
| 501 |
+
$$
|
| 502 |
+
\tilde {r} _ {\theta} (s, a) = Q _ {\theta} (s, a) - \gamma \mathbb {E} _ {s ^ {\prime}} Q _ {\theta} \left(s ^ {\prime}, \arg \max _ {a} p _ {\theta_ {\mathrm {c k p t}}} \left(a \mid s ^ {\prime}\right)\right) \approx Q _ {\theta} (s, a) - \gamma V _ {\theta} \left(s ^ {\prime}\right). \tag {20}
|
| 503 |
+
$$
|
| 504 |
+
|
| 505 |
+
Imagine a scenario in which we sample/decode using a trained $Q_{\theta}$: the forward pass computes $Q_{\theta}(s,a)$ and $V_{\theta}(s)$ for each timestep, thanks to our dueling architecture. However, $p_{\theta_{\mathrm{ckpt}}}$ is not evaluated during generation, because $p_{\theta_{\mathrm{ckpt}}}$ is only used when computing $\mathcal{L}_Q(\cdot ;p_{\theta_{\mathrm{ckpt}}})$. Computing the exact version $Q_{\theta}(s,a) - \gamma \mathbb{E}_{s^{\prime}}Q_{\theta}(s^{\prime},\arg \max_{a}p_{\theta_{\mathrm{ckpt}}}(a|s^{\prime}))$ would therefore require additional evaluations of $p_{\theta_{\mathrm{ckpt}}}$ during generation. In contrast, $Q(s,a)$ and $V(s)$ are already computed during generation, so the approximate $\tilde{r}_{\theta}(s,a)$ requires almost no additional computation.
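To make the computational argument concrete, the following is a minimal sketch (in PyTorch-style Python) of how the approximate recovered reward could be computed from the per-token values that the dueling heads already produce during generation. The tensor names `q_taken` and `v_values` are illustrative, not taken from our actual code.

```python
import torch

def approx_token_rewards(q_taken: torch.Tensor,
                         v_values: torch.Tensor,
                         gamma: float = 1.0) -> torch.Tensor:
    """Approximate recovered rewards r~(s_t, a_t) ~= Q(s_t, a_t) - gamma * V(s_{t+1}).

    q_taken:  shape (T,), Q-values of the tokens actually generated at steps 0..T-1.
    v_values: shape (T,), state values V(s_t) from the dueling head at the same steps.
    The state after the final token is treated as terminal with value 0.
    """
    v_next = torch.cat([v_values[1:], v_values.new_zeros(1)])  # V(s_{t+1})
    return q_taken - gamma * v_next
```

Because both tensors fall out of the ordinary decoding forward pass, aggregating these per-token values into a ranking score for candidate programs adds essentially no overhead.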
|
| 506 |
+
|
| 507 |
+
# G ADDITIONAL IMPLEMENTATION TRICKS
|
| 508 |
+
|
| 509 |
+
# G.1 UPPER BOUND OF $Q$ -FUNCTION
|
| 510 |
+
|
| 511 |
+
Given our reward design in equation 2, the cumulative reward is upper bounded by $R_{\max} = 1$. We enforce $Q(s,a) \leq R_{\max}$ by transforming the state value function as $V(s) = -\text{SOFTABS}(V(s)) + R_{\max} \leq R_{\max}$, where $\text{SOFTABS}(x) := [\text{SOFTPLUS}(x) + \text{SOFTPLUS}(-x)] / 2 + \ln 2$ is a soft absolute function. Given $A(s,a) \leq 0$, enforcing $V(s) \leq R_{\max}$ leads to $Q(s,a) \leq R_{\max}$.
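As an illustration, a minimal sketch of this transform is given below, assuming the raw head output is a tensor `v_raw`; the helper names are illustrative, not those of our implementation.

```python
import math
import torch
import torch.nn.functional as F

def soft_abs(x: torch.Tensor) -> torch.Tensor:
    # SOFTABS(x) := [SOFTPLUS(x) + SOFTPLUS(-x)] / 2 + ln 2, as defined above.
    return (F.softplus(x) + F.softplus(-x)) / 2 + math.log(2.0)

def bounded_state_value(v_raw: torch.Tensor, r_max: float = 1.0) -> torch.Tensor:
    # V(s) = -SOFTABS(v_raw) + R_max <= R_max; together with A(s, a) <= 0,
    # this keeps Q(s, a) = V(s) + A(s, a) below R_max.
    return -soft_abs(v_raw) + r_max
```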
|
| 512 |
+
|
| 513 |
+
# G.2 RESIDUAL HEAD INITIALIZATION
|
| 514 |
+
|
| 515 |
+
In Section 4.3, we initialize $\theta$ such that $\ell_{\theta} = \ell_{\theta_{\mathrm{ckpt}}}$ and $V_{\theta}^{r}(s) = 0$. The former can be done by simply loading the checkpoint $\theta_{\mathrm{ckpt}}$. Adding a residual head $V_{\theta}^{r}$ that is initialized to output zeros can be done with a simple trick: add two heads $h_1$ and $h_2$, let $h_1$ be trainable and keep $h_2$ fixed during subsequent fine-tuning; setting $V_{\theta}^{r} = h_{1} - h_{2}$ achieves the desired functionality.
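A minimal sketch of this two-head trick is shown below; the module and attribute names are illustrative rather than those of our actual implementation.

```python
import copy
import torch
import torch.nn as nn

class ZeroInitResidualHead(nn.Module):
    """Residual value head V^r that outputs exactly zero at initialization.

    h1 is trainable; h2 is a frozen copy of h1, so V^r(x) = h1(x) - h2(x) = 0
    before fine-tuning and can move away from zero only as h1 is updated.
    """
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.h1 = nn.Linear(hidden_dim, 1)
        self.h2 = copy.deepcopy(self.h1)
        for p in self.h2.parameters():
            p.requires_grad_(False)  # keep h2 fixed throughout fine-tuning

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return self.h1(hidden_states) - self.h2(hidden_states)
```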
|
| 516 |
+
|
| 517 |
+
# H TRAINING AND EVALUATION DETAILS
|
| 518 |
+
|
| 519 |
+
In supplement to the implementation details in Sections 4.4 and 5, we give more low-level details here.
|
| 520 |
+
|
| 521 |
+
APPS dataset. In addition to the train/test split details described in Section 5, each APPS problem, on average, comes with 2 example unit tests, 21 hidden unit tests, and 23 ground truth programs. We
|
| 522 |
+
|
| 523 |
+
follow the same procedure as Hendrycks et al. (2021); Le et al. (2022) to construct prompts for both training and evaluation. Specifically, see Section 3 of Hendrycks et al. (2021).
|
| 524 |
+
|
| 525 |
+
MBPP dataset. MBPP has 974 instances with a 374/90/500 train/val/test split and, in addition, 10 problems reserved for few-shot learning. Because we only do zero-shot evaluation on MBPP, only the 500 test problems are used for evaluation. Each problem of MBPP usually comes with three unit tests. In addition, these tests are usually not hidden. Therefore, prior works (Le et al., 2022; Shojaee et al., 2023; Liu et al., 2023) often explicitly incorporate the tests into the prompt string. We follow WizardCoder (Luo et al., 2023) to construct our input format. Details can be found in this repo.
|
| 526 |
+
|
| 527 |
+
Pre-trained model. We initialize our model with the CodeRL checkpoint publicly available here, meaning we initialize $\theta_{\mathrm{ckpt}}$, $\phi$, and $\theta$ from it. Note that we freeze the encoder for both the $\phi$-stage and the $\theta$-stage, so the encoder is shared during both training and generation. For both training and generation, we set the maximum length to 600 and 512 for source and target sequences, respectively.
|
| 528 |
+
|
| 529 |
+
Training data preparation. While we use $\mathcal{D}_{\text{train}}$ to represent our training dataset, we have not yet elaborated on how it is constructed. In general, we follow the protocol of prior RL-based works of combining, for each problem, all ground truth programs with a set of programs generated by the pre-trained model. Specifically, we generate 256 programs per problem using the pre-trained checkpoint. Combined with the ground truth programs, there are, on average, 278 programs per problem.
|
| 530 |
+
|
| 531 |
+
Mini-batch preparation. By the prior definition, our dataset $\mathcal{D}_{\mathrm{train}}$ contains both ground truth programs and generated programs. Notably, the volume of generated programs is significantly larger than that of the ground truth programs, meaning that if one were to sample uniformly from the dataset, generated programs would dominate the mini-batches. To address this, when preparing a mini-batch, we sample $\rho_{\mathrm{real}} \times B$ ground truth programs and $(1 - \rho_{\mathrm{real}}) \times B$ generated programs, where $B$ is the batch size.
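The following is a minimal sketch of such a mixed mini-batch sampler, assuming per-problem lists of ground truth and generated programs; the function and argument names are illustrative.

```python
import random

def sample_mixed_batch(ground_truth, generated, batch_size=16, rho_real=0.5):
    """Draw a mini-batch containing a fixed fraction of ground truth programs,
    so that the far more numerous generated programs do not dominate."""
    n_real = int(round(rho_real * batch_size))
    n_gen = batch_size - n_real
    batch = random.sample(ground_truth, min(n_real, len(ground_truth)))
    batch += random.sample(generated, min(n_gen, len(generated)))
    random.shuffle(batch)
    return batch
```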
|
| 532 |
+
|
| 533 |
+
$\phi$-stage training. In the $\phi$-stage, we pre-train the state-value function $V_{\phi}(s)$. We conduct this experiment with 4×A100-80G GPUs, using a batch size of 16 per GPU and a gradient accumulation step of 4, resulting in a total batch size of 256. For the optimizer and scheduler, we use AdamW (Loshchilov & Hutter, 2018) with a constant learning rate of 1e-5 and a weight decay of 0.05. We train $\phi$ for 18k gradient steps.
|
| 534 |
+
|
| 535 |
+
$\theta$-stage training. In the $\theta$-stage, we conduct the experiment with $8\times$ A100-80G GPUs, using a batch size of 16 per GPU and a gradient accumulation step of 1, resulting in a total batch size of 128. For the optimizer and scheduler, we use AdamW with a peak learning rate of 3e-5, a weight decay of 0.05, and a linear decay scheduler with no warmup. We train $\theta$ for 10k gradient steps.
|
| 536 |
+
|
| 537 |
+
Other hyper-parameters. We set the ground truth data ratio $\rho_{\mathrm{real}} = 0.5$ and the energy-based policy temperature $\alpha = 1$ (see equation 10) for all experiments. In $\theta$ -stage, we use $\beta_{\mathrm{adv}} = 0.1$ and $\beta_{\mathrm{ce}} = 0.5$ .
|
| 538 |
+
|
| 539 |
+
# I ABLATION ON $m$
|
| 540 |
+
|
| 541 |
+

|
| 542 |
+
Figure 5: Ablation on $m$ : our ranking strategy achieves consistent improvements under different budgets $m$ .
|
| 543 |
+
|
| 544 |
+
Figure 5 presents an ablation study on the ranking budget $m$: our ranking strategy achieves consistent improvements under different budgets $m$.
|
| 545 |
+
|
| 546 |
+
# J COMMENTS ON $\mathcal{B}^q$ PROPERTIES
|
| 547 |
+
|
| 548 |
+
# J.1 PROPOSITION 4.1
|
| 549 |
+
|
| 550 |
+
Proof.
|
| 551 |
+
|
| 552 |
+
$$
|
| 553 |
+
\begin{aligned}
\| \mathcal{B}^{q} Q_{1} - \mathcal{B}^{q} Q_{2} \|_{\infty}
&= \max_{s, a} \bigl| r(s, a) + \gamma \mathbb{E}_{s'} Q_{1}(s', \hat{a}') - r(s, a) - \gamma \mathbb{E}_{s'} Q_{2}(s', \hat{a}') \bigr| \quad \bigl(\hat{a}' = \arg\max\nolimits_{a} q(a \mid s')\bigr) \\
&= \max_{s, a} \gamma \bigl| \mathbb{E}_{s'} \bigl[ Q_{1}(s', \hat{a}') - Q_{2}(s', \hat{a}') \bigr] \bigr| \quad (21) \\
&\leq \max_{s, a} \gamma \, \mathbb{E}_{s'} \bigl| Q_{1}(s', \hat{a}') - Q_{2}(s', \hat{a}') \bigr| \quad (22) \\
&\leq \max_{s, a} \gamma \, \mathbb{E}_{s'} \max_{s', a'} \bigl| Q_{1}(s', a') - Q_{2}(s', a') \bigr| \quad (23) \\
&= \gamma \| Q_{1} - Q_{2} \|_{\infty} \quad (24)
\end{aligned}
|
| 554 |
+
$$
|
| 555 |
+
|
| 556 |
+

|
| 557 |
+
|
| 558 |
+
# J.2 PROPOSITION 4.2
|
| 559 |
+
|
| 560 |
+
Proof. The proof is similar to Lemma C.3. in Garg et al. (2021). To prove that $\mathcal{T}^p$ is a bijection, it suffices to show that for any $r: S \times \mathcal{A} \to \mathbb{R}$ , there exists a unique $Q: S \times \mathcal{A} \to \mathbb{R}$ such that $r = \mathcal{T}^p Q$ . Note that by proposition 4.1, there exists a unique $Q^p = \mathcal{B}^p r$ that satisfies $Q^p(s, a) = r(s, a) + \gamma \mathbb{E}_{s'} Q^p(s', \arg \max_a p(a|s'))$ . Rearranging the terms gives $r = \mathcal{T}^p Q^p$ . This completes the proof.
|
| 561 |
+
|
| 562 |
+
# K DISCUSSION ON LIMITATIONS
|
| 563 |
+
|
| 564 |
+
Table 8: Pass@1 results are evaluated with greedy decoded programs, and pass@{5,50,100} are computed by sampled programs using a temperature of 0.4.
|
| 565 |
+
|
| 566 |
+
<table><tr><td>Pass@</td><td>CodeRL</td><td>B-Coder</td></tr><tr><td>1</td><td>1.60</td><td>1.60</td></tr><tr><td>5</td><td>3.28</td><td>2.88</td></tr><tr><td>50</td><td>7.16</td><td>7.35</td></tr><tr><td>100</td><td>8.76</td><td>9.18</td></tr></table>
|
| 567 |
+
|
| 568 |
+
Table 9: Ranking with $\tilde{r}$ compared with filtering with the real environmental reward function $r$, i.e., hidden tests. $\tilde{r}$-ranked results are duplicated from Table 1.
|
| 569 |
+
|
| 570 |
+
<table><tr><td></td><td colspan="2">$\tilde{r}$-ranked</td><td>$r$-filtered</td></tr><tr><td></td><td>pass@1</td><td>pass@5</td><td></td></tr><tr><td>Intro</td><td>6.70</td><td>10.40</td><td>26.60</td></tr><tr><td>Inter</td><td>1.50</td><td>2.63</td><td>7.87</td></tr><tr><td>Comp</td><td>0.30</td><td>0.70</td><td>5.10</td></tr><tr><td>All</td><td>2.30</td><td>3.80</td><td>11.06</td></tr></table>
|
| 571 |
+
|
| 572 |
+
While exploratory, our work admits certain limitations, including the additional frozen parameters introduced, and the observation that raw performance (without ranking) is mixed compared to CodeRL (see Table 8), which we believe is somewhat excusable as we use fewer reward designs. However, we remark that the effectiveness of our overall framework, including the dual strategy, is non-trivial, especially with limited reward engineering.
|
| 573 |
+
|
| 574 |
+
It is also informative to show results filtered by the true environmental reward function $r$, instead of results ranked by our recovered reward function $\tilde{r}$. Filtering with $r$ requires using hidden tests, meaning it cannot be implemented in realistic settings (see also the discussion in Section 4.5). However, it can serve as an upper limit for our ranking strategy and as a sanity check. (Roughly speaking, if $\tilde{r} = r$, the pass rate of $\tilde{r}$-ranking and $r$-filtering would be identical.) To this end, we use the same set of candidate programs as those in Table 1, but apply the ground truth reward function $r$ to filter candidates rather than using $\tilde{r}$ for ranking. The corresponding results in Table 9 show that, although $\tilde{r}$-ranking is effective, there remains large room for improvement.
|
2024/$_mathcal{B}$-Coder_ Value-Based Deep Reinforcement Learning for Program Synthesis/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:69cf4359ebe81b470a1b50f04c7da4b70b662f15c63b81003d32eed7ef157b90
|
| 3 |
+
size 608341
|
2024/$_mathcal{B}$-Coder_ Value-Based Deep Reinforcement Learning for Program Synthesis/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/$_texttt{NAISR}$_ A 3D Neural Additive Model for Interpretable Shape Representation/7ee4925b-8b13-43da-aa85-069a25707236_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/$_texttt{NAISR}$_ A 3D Neural Additive Model for Interpretable Shape Representation/7ee4925b-8b13-43da-aa85-069a25707236_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/$_texttt{NAISR}$_ A 3D Neural Additive Model for Interpretable Shape Representation/7ee4925b-8b13-43da-aa85-069a25707236_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:a4897e3f0c7195eeb06f5f8bb69315f3e1052ee3e7d709096b863509c8703c46
|
| 3 |
+
size 15851038
|
2024/$_texttt{NAISR}$_ A 3D Neural Additive Model for Interpretable Shape Representation/full.md
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/$_texttt{NAISR}$_ A 3D Neural Additive Model for Interpretable Shape Representation/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:1c78eab19aa5e769a24e18554d4019e54b6993fe762dfe8a581f23abe9ef1198
|
| 3 |
+
size 1880107
|
2024/$_texttt{NAISR}$_ A 3D Neural Additive Model for Interpretable Shape Representation/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/A Benchmark for Learning to Translate a New Language from One Grammar Book/9fda41e2-544c-4f91-8ab6-462a4a922809_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/A Benchmark for Learning to Translate a New Language from One Grammar Book/9fda41e2-544c-4f91-8ab6-462a4a922809_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/A Benchmark for Learning to Translate a New Language from One Grammar Book/9fda41e2-544c-4f91-8ab6-462a4a922809_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:a9e0c25e66e1b44290d7120889ca614c836571f7d3ea62234d77e6c56bba727f
|
| 3 |
+
size 436182
|
2024/A Benchmark for Learning to Translate a New Language from One Grammar Book/full.md
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/A Benchmark for Learning to Translate a New Language from One Grammar Book/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:cf4bfac9fef64a105c00592b57e708f59dd481273a81412a3d816035bf9143fa
|
| 3 |
+
size 1067044
|
2024/A Benchmark for Learning to Translate a New Language from One Grammar Book/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/A General Framework for User-Guided Bayesian Optimization/168ed08a-b567-41d2-abd2-5e82251d79b2_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/A General Framework for User-Guided Bayesian Optimization/168ed08a-b567-41d2-abd2-5e82251d79b2_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/A General Framework for User-Guided Bayesian Optimization/168ed08a-b567-41d2-abd2-5e82251d79b2_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:79d6c9ba2cf8ef4738bf7716c3089e39213f7b6ad638796f283bacc7d597c6fd
|
| 3 |
+
size 1338684
|
2024/A General Framework for User-Guided Bayesian Optimization/full.md
ADDED
|
@@ -0,0 +1,413 @@
| 1 |
+
# A GENERAL FRAMEWORK FOR USER-GUIDED BAYESIAN OPTIMIZATION
|
| 2 |
+
|
| 3 |
+
Carl Hvarfner
|
| 4 |
+
|
| 5 |
+
Lund University
|
| 6 |
+
|
| 7 |
+
carl.hvarfner@cs.lth.se
|
| 8 |
+
|
| 9 |
+
Frank Hutter
|
| 10 |
+
|
| 11 |
+
University of Freiburg
|
| 12 |
+
|
| 13 |
+
fh@cs.uni-freiburg.de
|
| 14 |
+
|
| 15 |
+
Luigi Nardi
|
| 16 |
+
|
| 17 |
+
DBtune, Lund University, Stanford University
|
| 18 |
+
|
| 19 |
+
luigi.nardi@cs.lth.se
|
| 20 |
+
|
| 21 |
+
# ABSTRACT
|
| 22 |
+
|
| 23 |
+
The optimization of expensive-to-evaluate black-box functions is prevalent in various scientific disciplines. Bayesian optimization is an automatic, general and sample-efficient method to solve these problems with minimal knowledge of the underlying function dynamics. However, the ability of Bayesian optimization to incorporate prior knowledge or beliefs about the function at hand in order to accelerate the optimization is limited, which reduces its appeal for knowledgeable practitioners with tight budgets. To allow domain experts to customize the optimization routine, we propose ColaBO, the first Bayesian-principled framework for incorporating prior beliefs beyond the typical kernel structure, such as the likely location of the optimizer or the optimal value. The generality of ColaBO makes it applicable across different Monte Carlo acquisition functions and types of user beliefs. We empirically demonstrate ColaBO's ability to substantially accelerate optimization when the prior information is accurate, and to retain approximately default performance when it is misleading.
|
| 24 |
+
|
| 25 |
+
# 1 INTRODUCTION
|
| 26 |
+
|
| 27 |
+
Bayesian Optimization (BO) (Mockus et al., 1978; Jones et al., 1998; Snoek et al., 2012) is a well-established methodology for the optimization of expensive-to-evaluate black-box functions. Known for its sample efficiency, BO has been successfully applied to a variety of domains where laborious system tuning is prominent, such as hyperparameter optimization (Snoek et al., 2012; Bergstra et al., 2011b; Lindauer et al., 2022), neural architecture search (Ru et al., 2021; White et al., 2021), robotics (Calandra et al., 2014; Mayr et al., 2022), hardware design (Nardi et al., 2019; Ejjeh et al., 2022), and chemistry (Griffiths & Hernández-Lobato, 2020).
|
| 28 |
+
|
| 29 |
+
Typically employing a Gaussian Process (Rasmussen & Williams, 2006) (GP) surrogate model, BO allows the user to specify a prior over functions $p(f)$ via the Gaussian Process kernel, as well as an optional prior over its hyperparameters. Within the framework of the prior, the user can specify expected smoothness, output range and possible noise level of the function at hand, with the hopes of accelerating the optimization if accurate. However, the prior beliefs that can be specified within the framework of the kernel hyperparameters do not span the full range of beliefs that practitioners may possess. For example, practitioners may know which parts of the input space tend to work best (Oh et al., 2018; Perrone et al., 2019; Smith, 2018; Wang et al., 2019), know a range or upper bound on the optimal output (Jeong & Kim, 2021; Nguyen & Osborne, 2020) such as a maximal achievable accuracy of $100\%$ , or other properties of the objective, such as preference relations between configurations (Huang et al., 2022). The limited ability of practitioners to interact and collaborate with the BO machinery (Kumar et al., 2022) thus runs the risk of failing to use valuable domain expertise, or alienating knowledgeable practitioners altogether. While knowledge injection beyond what is natively supported by the GP kernel is crucial to further increase the efficiency of
|
| 30 |
+
|
| 31 |
+

|
| 32 |
+
|
| 33 |
+

|
| 34 |
+
|
| 35 |
+

|
| 36 |
+
|
| 37 |
+

|
| 38 |
+
Figure 1: Three different ColaBO priors: (left) Prior over the optimum $x_{*}$, and the induced change in the GP for an optimum located in the green region. (middle) Prior over the optimal value, $f^{*} < 0.8$. (right) Prior over preference relations $f(\pmb{x}_1) \geqslant f(\pmb{x}_2)$ for five preferences (green arrows, e.g. $f(0.0) \geqslant f(0.1) \geqslant f(0.2)$).
|
| 39 |
+
|
| 40 |
+

|
| 41 |
+
|
| 42 |
+

|
| 43 |
+
|
| 44 |
+
Bayesian optimization, so far no current approach allows for the integration of arbitrary types of user knowledge. To address this gap, we propose an intuitive framework that effectively allows the user to reshape the Gaussian process at will to mimic their held beliefs.
|
| 45 |
+
|
| 46 |
+
Figure 1 illustrates how, for the three aforementioned priors, the GP is reshaped to faithfully represent the belief held by the user - whether it be a prior over the optimum (left), optimal value (middle), or preference relations between points (right). Our novel framework for Collaborative Bayesian Optimization (ColaBO) diverges from the typical assumption of Gaussian posteriors, and is applicable to any Monte Carlo acquisition function (Wilson et al., 2017; 2018; Balandat et al., 2020). Formally, we make the following contributions:
|
| 47 |
+
|
| 48 |
+
1. We introduce ColaBO, a framework for incorporating user knowledge over the optimizer, optimal value and preference relations into Bayesian optimization in the form of an additional prior on the surrogate, orthogonal to the conventional prior on the kernel hyperparameters,
|
| 49 |
+
2. We demonstrate that the proposed framework is generally applicable to Monte Carlo acquisition functions, inheriting MC acquisition function utility,
|
| 50 |
+
3. We empirically show that ColaBO accelerates optimization when injected with priors over the optimal location and the optimal value.
|
| 51 |
+
|
| 52 |
+
# 2 BACKGROUND
|
| 53 |
+
|
| 54 |
+
We outline Bayesian optimization, Gaussian Processes and Monte Carlo (MC) acquisition functions, as well as the concept of a prior over the optimum.
|
| 55 |
+
|
| 56 |
+
# 2.1 BAYESIAN OPTIMIZATION
|
| 57 |
+
|
| 58 |
+
We consider the problem of optimizing a black-box function $f$ across a set of feasible inputs $\mathcal{X} \subset \mathbb{R}^d$ :
|
| 59 |
+
|
| 60 |
+
$$
|
| 61 |
+
\boldsymbol {x} ^ {*} \in \underset {\boldsymbol {x} \in \mathcal {X}} {\arg \max } f (\boldsymbol {x}). \tag {1}
|
| 62 |
+
$$
|
| 63 |
+
|
| 64 |
+
We assume that $f(\pmb{x})$ is expensive to evaluate and can potentially only be observed through a noise-corrupted estimate, $y_{\pmb{x}}$ , where $y_{\pmb{x}} = f(\pmb{x}) + \epsilon, \epsilon \sim \mathcal{N}(0, \sigma_{\epsilon}^{2})$ for some noise level $\sigma_{\epsilon}^{2}$ . In this setting, we wish to maximize $f$ in an efficient manner. Bayesian optimization (BO) aims to globally maximize $f$ by an initial design and thereafter sequentially choosing new points $x_{n}$ for some iteration $n$ , creating the data $\mathcal{D}_n = \mathcal{D}_{n-1} \cup \{(x_n, y_n)\}$ (Brochu et al., 2010; Shahriari et al., 2016; Garnett, 2022). After each new observation, BO constructs a probabilistic surrogate model $p(f|\mathcal{D}_n)$ (Snoek et al., 2012; Hutter et al., 2011; Bergstra et al., 2011a; Müller et al., 2023) and uses that surrogate to build an acquisition function $\alpha(x; \mathcal{D}_n)$ which selects the next query.
|
| 65 |
+
|
| 66 |
+
# 2.2 GAUSSIAN PROCESSES
|
| 67 |
+
|
| 68 |
+
When constructing the surrogate, the most common choice is a Gaussian process (GP) (Rasmussen & Williams, 2006). The GP utilizes a covariance function $k$, which encodes a prior belief about the smoothness of $f$ and determines how previous observations influence prediction. Given observations $\mathcal{D}_n$ at iteration $n$, the Gaussian posterior $p(f|\mathcal{D}_n)$ over the objective is characterized by the posterior mean $\mu_n(\boldsymbol{x})$ and (co-)variance $\Sigma_n(\boldsymbol{x},\boldsymbol{x}')$ of the GP:
|
| 69 |
+
|
| 70 |
+
$$
|
| 71 |
+
\mu_{n}(\boldsymbol{x}) = \mathbf{k}_{n}(\boldsymbol{x})^{\top} (\mathbf{K}_{n} + \sigma_{\epsilon}^{2} \mathbf{I})^{-1} \mathbf{y}, \quad \Sigma_{n}(\boldsymbol{x}, \boldsymbol{x}') = k(\boldsymbol{x}, \boldsymbol{x}') - \mathbf{k}_{n}(\boldsymbol{x})^{\top} (\mathbf{K}_{n} + \sigma_{\epsilon}^{2} \mathbf{I})^{-1} \mathbf{k}_{n}(\boldsymbol{x}'),
|
| 72 |
+
$$
|
| 73 |
+
|
| 74 |
+
where $(\mathbf{K}_n)_{ij} = k(\pmb{x}_i,\pmb{x}_j)$ , $\mathbf{k}_n(\pmb{x}) = [k(\pmb{x},\pmb{x}_1),\dots,k(\pmb{x},\pmb{x}_n)]^\top$ and $\sigma_{\epsilon}^{2}$ is the noise variance. For applications in BO and beyond, samples from the posterior are required either directly for optimization (Eriksson et al., 2019) through Thompson sampling (Thompson, 1933), or to estimate auxiliary quantities of interest (Hernandez-Lobato et al., 2015; Neiswanger et al., 2021; Hvarfner et al., 2023). For a finite set of $k$ query locations $(X = x_1,\ldots,x_k)$ , the classical method of generating samples is via a location-scale transform of Gaussian random variables, $f(\pmb{X}) = \mu_n(\pmb{X}) + \pmb{L}\pmb{\epsilon}$ , where $\pmb{L}$ is the Cholesky decomposition of $\pmb{K}$ and $\epsilon \sim \mathcal{N}(0,\pmb{I})$ . Unfortunately, the classic approach is intrinsically non-scalable, incurring a $\mathcal{O}(k^3)$ cost due to the aforementioned matrix decomposition.
|
| 75 |
+
|
| 76 |
+
# 2.3 DECOUPLED POSTERIOR SAMPLING
|
| 77 |
+
|
| 78 |
+
To remedy the issue of scalability in posterior sampling, $\mathcal{O}(k)$ weight-space approximations based on Random Fourier Features (RFF) (Rahimi & Recht, 2007) obtain approximate (continuous) function draws $\tilde{f}(\boldsymbol{x}) = \sum_{i=1}^{m} w_i \phi_i(\boldsymbol{x})$, where $\phi_i(\boldsymbol{x}) = \sqrt{2/m}\,\cos(\psi_i^\top \boldsymbol{x} + b_i)$. The random variables $w_i \sim \mathcal{N}(0,1)$, $b_i \sim \mathcal{U}(0,2\pi)$, and $\psi_i$ are sampled proportionally to the spectral density of $k$.
|
| 79 |
+
|
| 80 |
+
While achieving scalability, the seminal RFF approach by Rahimi & Recht (2007) suffers from the issue of variance starvation (Mutny & Krause, 2018; Wang et al., 2018; Wilson et al., 2020). As a remedy, Wilson et al. (2020) decouple the draw of functions from the approximate posterior $p(\tilde{f}|\mathcal{D})$ into a more accurate draw from the prior $p(\tilde{f})$ , followed by a deterministic data-dependent update:
|
| 81 |
+
|
| 82 |
+
$$
|
| 83 |
+
(\tilde{f} \mid \mathcal{D})(\boldsymbol{x}) \stackrel{d}{=} \underbrace{\tilde{f}(\boldsymbol{x})}_{\text{draw from prior}} + \underbrace{\mathbf{k}_{n}(\boldsymbol{x})^{\top} \left(\mathbf{K}_{n} + \sigma_{\epsilon}^{2} \mathbf{I}\right)^{-1} (\mathbf{y} - \tilde{f}(\boldsymbol{x}) - \boldsymbol{\epsilon})}_{\text{deterministic update}} \tag{2}
|
| 84 |
+
$$
|
| 85 |
+
|
| 86 |
+
Eq. 2 deviates from the distribution-first approach that is typically prevalent in GPs in favor of a variable-first approach utilizing Matheron's rule (Journel & Huijbregts, 1976).
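To illustrate the decoupled sampling procedure, the following is a minimal NumPy sketch (for a squared-exponential kernel) of drawing an approximate prior function via RFFs and applying the deterministic, data-dependent update of Eq. 2. All names are illustrative and the sketch is not the authors' implementation.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=0.2):
    # Squared-exponential kernel between two sets of points a: (n, d), b: (m, d).
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def draw_prior_path(dim, n_features=1024, lengthscale=0.2, rng=None):
    """One approximate draw f~ from the squared-exponential GP prior via RFFs."""
    rng = np.random.default_rng() if rng is None else rng
    psi = rng.normal(scale=1.0 / lengthscale, size=(n_features, dim))  # spectral samples
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    w = rng.normal(size=n_features)

    def f_tilde(x):  # x: array of shape (n, dim)
        feats = np.sqrt(2.0 / n_features) * np.cos(x @ psi.T + b)
        return feats @ w

    return f_tilde

def pathwise_posterior_sample(f_tilde, x_train, y_train, noise=1e-4,
                              lengthscale=0.2, rng=None):
    """Matheron-style update (Eq. 2): prior draw plus a deterministic correction."""
    rng = np.random.default_rng() if rng is None else rng
    K = rbf_kernel(x_train, x_train, lengthscale) + noise * np.eye(len(x_train))
    eps = rng.normal(scale=np.sqrt(noise), size=len(x_train))
    resid = np.linalg.solve(K, y_train - f_tilde(x_train) - eps)

    def f_post(x):
        return f_tilde(x) + rbf_kernel(x, x_train, lengthscale) @ resid

    return f_post
```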
|
| 87 |
+
|
| 88 |
+
# 2.4 MONTE CARLO ACQUISITION FUNCTIONS
|
| 89 |
+
|
| 90 |
+
Acquisition functions act on the surrogate model to quantify the utility of a point in the search space. They encode a trade-off between exploration and exploitation, typically using a greedy heuristic to do so. A simple and computationally cheap heuristic is Expected Improvement (EI) (Jones et al., 1998; Bull, 2011). For a noiseless function and a current best observation $y_{n}^{*}$, the EI acquisition function is $\alpha_{\mathrm{EI}}(\pmb{x}) = \mathbb{E}_{y_{\pmb{x}}}[(y_n^* - y_{\pmb{x}})^+]$. For noisy problem settings, a noise-adapted variant of EI (Letham et al., 2018) is frequently considered, where both the incumbent $y_{n}^{*}$ and the upcoming query $y_{\pmb{x}}$ are substituted for the non-observable noiseless incumbent $f_{n}^{*}$ and noiseless upcoming query $f_{\pmb{x}}$. Other frequently used acquisition functions are the Upper Confidence Bound (UCB) (Srinivas et al., 2012), Probability of Improvement (PI) (Kushner, 1964) and the Knowledge Gradient (KG) (Frazier et al., 2009). Information-theoretic acquisition functions consider the mutual information to select the next query, $\alpha_{\mathrm{MI}}(\pmb{x}) = I((\pmb{x}, y_{\pmb{x}}); *|\mathcal{D}_n)$, where $*$ can entail either the optimum $x_{*}$ as in (Predictive) Entropy Search (ES/PES) (Hennig & Schuler, 2012; Hernández-Lobato et al., 2014), the optimal value $f_{*}$ as in Max-value Entropy Search (MES) (Wang & Jegelka, 2017; Moss et al., 2021), or the tuple $(x_{*}, f_{*})$ for Joint Entropy Search (JES) (Hvarfner et al., 2022a; Tu et al., 2022).
|
| 91 |
+
|
| 92 |
+
All the aforementioned acquisition functions compute expectations $\mathbb{E}_{f_x}$ (or alternatively $\mathbb{E}_{y_x}$ ) over some utility $u(f_x)$ of the output (Wilson et al., 2017; 2018), which typically have simple, or even closed-form, solutions for Gaussian posteriors. However, approximating the expectation through Monte Carlo integration has proven useful in the context of batch optimization (Wilson et al., 2018), efficient acquisition function approximation (Balandat et al., 2020), and non-Gaussian posteriors (Astudillo & Frazier, 2021). By sampling over possible outputs $f_x$ and utilizing the reparametrization
|
| 93 |
+
|
| 94 |
+
trick (Kingma & Welling, 2014; Rezende et al., 2014), utilities $u$ can be easily computed across a larger set of applications and be optimized to greater accuracy.
|
| 95 |
+
|
| 96 |
+
# 2.5 PRIOR OVER THE OPTIMUM
|
| 97 |
+
|
| 98 |
+
A prior over the optimum (Souza et al., 2021; Hvarfner et al., 2022b; Mallik et al., 2023) is a user-specified belief $\pi : \mathcal{X} \to \mathbb{R}$ of the subjective likelihood that a given $\pmb{x}$ is optimal. Formally,
|
| 99 |
+
|
| 100 |
+
$$
|
| 101 |
+
\pi (\boldsymbol {x}) = \mathbb {P} \left(\boldsymbol {x} = \underset {\boldsymbol {x} ^ {\prime}} {\arg \max } f \left(\boldsymbol {x} ^ {\prime}\right)\right). \tag {3}
|
| 102 |
+
$$
|
| 103 |
+
|
| 104 |
+
This prior is generally considered to be independent of observed data; rather, it is a result of previous experimentation or anecdotal evidence. Regions that the user expects to contain the optimum will typically have a high value, but this does not exclude the chance that the user belief $\pi(\boldsymbol{x})$ is inaccurate, or even misleading. Lastly, we require $\pi$ to be strictly positive in all of $\mathcal{X}$, which means that any point included in the search space may be optimal.
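As a concrete toy example, a prior over the optimum could be as simple as a Gaussian bump around the user's guess, kept strictly positive everywhere by a small additive floor. This is only an illustrative sketch; the names and shapes are ours, not a prescribed interface.

```python
import numpy as np

def make_gaussian_prior(mode, width, floor=1e-6):
    """pi(x): unnormalized Gaussian bump around `mode`, strictly positive on all of X."""
    mode = np.asarray(mode, dtype=float)

    def pi(x):
        x = np.atleast_2d(x)
        d2 = ((x - mode) ** 2).sum(axis=-1)
        return np.exp(-0.5 * d2 / width ** 2) + floor

    return pi

# Example: a belief that the optimum of a 2D problem lies near (0.2, 0.7).
pi = make_gaussian_prior(mode=[0.2, 0.7], width=0.1)
```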
|
| 105 |
+
|
| 106 |
+
# 3 METHODOLOGY
|
| 107 |
+
|
| 108 |
+
We now introduce ColaBO, the first Bayesian-principled BO framework that flexibly allows users to collaborate with the optimizer by injecting prior knowledge about the objective that substantially exceeds the type of prior knowledge natively supported by GPs. In Sec. 3.1, we introduce and derive a novel prior over function properties, which yields a surrogate model conditioned on the user belief. Thereafter, in Sec. 3.2, we demonstrate how the hierarchical prior integrates with MC acquisition functions. Lastly, in Sec. 3.3, we state practical considerations to assure the performance of ColaBO.
|
| 109 |
+
|
| 110 |
+
# 3.1 PRIOR OVER FUNCTION PROPERTIES
|
| 111 |
+
|
| 112 |
+
We consider the typical GP prior over functions $p(f) = \mathcal{GP}(\mu, \Sigma)$ , where the characteristics of $f$ , such as smoothness and output magnitude, are fully defined by the kernel $k$ (and its associated hyperparameters $\theta$ , which are omitted for brevity). We seek to inject an additional, user-defined prior belief over $f$ into the GP, such as the prior over the optimum in Sec. 2.5, $\pi(\boldsymbol{x}) = \mathbb{P}\left(\boldsymbol{x} = \arg \max_{\boldsymbol{x}'} f(\boldsymbol{x}')\right)$ . By postulating that $\pi$ is accurate, we wish to form a belief-weighted prior - a prior over functions where the distribution over the optimum is exactly $\pi(\boldsymbol{x})$ . We start by considering the user belief $\pi: \mathcal{X} \to \mathbb{R}$ from Eq. (3), and extend the definition to involve the integration over $f$ , similarly to the Thompson sampling definition of Kandasamy et al. (2018). Formally,
|
| 113 |
+
|
| 114 |
+
$$
|
| 115 |
+
\pi (\boldsymbol {x}) = \mathbb {P} \left(\boldsymbol {x} = \underset {\boldsymbol {x} ^ {\prime}} {\arg \max } f \left(\boldsymbol {x} ^ {\prime}\right)\right) = \int_ {f} \pi \left(\delta_ {*} (\boldsymbol {x} | f)\right) p (f) d f \tag {4}
|
| 116 |
+
$$
|
| 117 |
+
|
| 118 |
+
where $\delta_{*}(\pmb{x}|f) = 1$ , if $\pmb{x} = \arg \max_{\pmb{x}' \in \mathcal{X}} f(\pmb{x}')$ , and zero otherwise. As such, $\delta_{*}(\pmb{x}|f)$ maps a function $f_{i} \sim p(f)$ to its arg max, and evaluates whether this arg max is equal to $\pmb{x}$ .
|
| 119 |
+
|
| 120 |
+
However, a belief over the optimum, or any other property, of a function $f$ is implicitly a belief over the function $f$ itself. As such, a non-uniform $\pi(\pmb{x})$ should reasonably induce a change in the prior $p(f)$ to reflect the non-uniform optimum. To this end, we introduce an augmented user belief over the optimum $\rho_{\pmb{x}}^{*} \sim \mathcal{P}_{\pmb{x}}^{*}$ , where $\mathcal{P}_{\pmb{x}}^{*}$ is the prior over possible user beliefs, and draws are random functions $\rho_{\pmb{x}}^{*}: \mathcal{X} \to \mathbb{R}^{+}$ which themselves take a function $f$ as input, and output a positive real number quantifying the likelihood of a sample $f_{i}$ under $\pi(\pmb{x})$ . Formally, we define $\rho_{\pmb{x}}^{*}$ as
|
| 121 |
+
|
| 122 |
+
$$
|
| 123 |
+
\rho_ {\boldsymbol {x}} ^ {*} (f) = \mathbb {P} \left(\boldsymbol {x} = \underset {\boldsymbol {x} ^ {\prime}} {\arg \max } f \left(\boldsymbol {x} ^ {\prime}\right)\right) = \frac {1}{Z _ {\rho_ {\boldsymbol {x}} ^ {*}}} \int_ {\mathcal {X}} \delta_ {*} (\boldsymbol {x} | f) \pi (\boldsymbol {x}) d \boldsymbol {x} \tag {5}
|
| 124 |
+
$$
|
| 125 |
+
|
| 126 |
+
where the intractable normalizing constant $Z_{\rho_x^*}$ arises from the fact that the integrated density $\pi(\pmb{x})$ acts on a finite-dimensional property of $f$, and not on $f$ itself. Under $\rho_x^*(f)$, functions whose arg max lies in a high-density region under $\pi$ are assigned a higher probability. Notably, the definition in Eq. 5 extends to other properties of $f$ as well: a user belief $p_{f_*}$ over the optimal value $f_*$ analogously yields a belief over functions $\rho_{f_*}^*(f)$:
|
| 127 |
+
|
| 128 |
+
$$
|
| 129 |
+
\rho_{f_{*}}^{*}(f) = \mathbb{P}\left(f_{*} = \max_{\boldsymbol{x}'} f(\boldsymbol{x}')\right) = \frac{1}{Z_{\rho_{f_{*}}^{*}}} \int_{f_{*}} \delta_{*}(f_{*} \mid f)\, p_{f_{*}}(f_{*})\, d f_{*}. \tag{6}
|
| 130 |
+
$$
|
| 131 |
+
|
| 132 |
+
It is worthwhile to reflect on the meaning of $\rho(f)$, and how beliefs over function properties propagate to $p(f)$. Concretely, if the user belief $\rho_{f_*}^*(f)$ asserts that the maximal value lies within $C_1 < \max f < C_2$, the resulting distribution over $f$ should only contain functions whose maximum falls within this range. Using rejection sampling, functions which disobey this criterion are filtered out, which yields the posterior $p(f|\rho)$. Having defined and exemplified how user beliefs impact the prior over functions $p(f)$, the role of $\rho$ as a likelihood should be apparent: given a prior over functions $p(f)$ and a user belief over functions $\rho(f)$ which places a probability on all possible draws $f_i \sim p(f)$, we can form a belief-weighted prior $p(f|\rho) \propto p(f)\, \rho(f)$. Thus, we introduce the formal definition of a user belief over a function property:
|
| 133 |
+
|
| 134 |
+
Definition 3.1 (User Belief over Functions). The user belief over functions is $\rho(f) \propto \frac{p(f|\rho)}{p(f)}$.
|
| 135 |
+
|
| 136 |
+
As the subsequent derived methodology applies independently of the specific property of $f$ that a prior is placed on, we will henceforth consider a belief over a general function property $\rho$ . Having defined the role of $\rho$ and the posterior over functions it produces, a natural question arises: How is $p(f|\rho)$ updated once observations $\mathcal{D}$ are obtained?
|
| 137 |
+
|
| 138 |
+
Since the data $\mathcal{D}$ is independent of the prior (the data generation process is intrinsically unaffected by the belief held by the user), application of Bayes' rule yields the following posterior $p(f|\mathcal{D},\rho)$
|
| 139 |
+
|
| 140 |
+
$$
|
| 141 |
+
p (f | \mathcal {D}, \rho) = \frac {p (\mathcal {D} , \rho | f) p (f)}{p (\mathcal {D} , \rho)} = \frac {p (\mathcal {D} | f) p (\rho | f) p (f)}{p (\mathcal {D}) p (\rho)} = \frac {p (f | \rho)}{p (f)} p (f | \mathcal {D}) \propto \rho (f) p (f | \mathcal {D}), \tag {7}
|
| 142 |
+
$$
|
| 143 |
+
|
| 144 |
+
where the right side of the proportionality in Eq. 7 suggests an intuitive generation process for samples $(f|\mathcal{D},\rho)$ to approximate the density $p(f|\mathcal{D},\rho)$ . Utilizing the pathwise update from Eq. 2, we note that given an approximate draw $\tilde{f}$ from the prior, the subsequent data-dependent update is deterministic. Recalling Eq. 2 and assuming independence between $\rho$ and $\mathcal{D}$ , $\rho$ only affects the draw from the prior, whereas $\mathcal{D}$ only affects the update. Consequently, we obtain
|
| 145 |
+
|
| 146 |
+
$$
|
| 147 |
+
(\tilde{f} \mid \mathcal{D}, \rho)(\boldsymbol{x}) \stackrel{d}{=} \underbrace{(\tilde{f} \mid \rho)(\boldsymbol{x})}_{\text{draw from prior}} + \underbrace{\mathbf{k}_{n}(\boldsymbol{x})^{\top} \left(\mathbf{K}_{n} + \sigma_{\epsilon}^{2} \mathbf{I}\right)^{-1} (\mathbf{y} - (\tilde{f} \mid \rho)(\boldsymbol{x}) - \boldsymbol{\epsilon})}_{\text{deterministic update}}, \tag{8}
|
| 148 |
+
$$
|
| 149 |
+
|
| 150 |
+
where $(\tilde{f}|\rho) \sim p(\tilde{f})\rho(\tilde{f})$ are once again obtained using rejection sampling on draws from $p(\tilde{f})$. Figure 2 displays this in detail: given the typical GP prior over functions and a user belief over the optimum, we obtain a distribution over functions $p(\tilde{f}|\rho_{\mathbf{x}}^{*})$ before having observed any data (top left). Samples from the approximate prior $p(\tilde{f})$ (light blue) are re-sampled proportionally to their probability of occurring under the prior $\rho_{\mathbf{x}}^{*}(\tilde{f})$ (green), leaving samples $(\tilde{f}|\rho_{\mathbf{x}}^{*})$ in navy blue, which are highly probable under $\rho_{\mathbf{x}}^{*}$. Once data is obtained, these samples are updated according to Eq. 8, which preserves the shape of the samples far away from observed data and yields the desired posterior.
|
| 151 |
+
|
| 152 |
+
# 3.2 PRIOR-WEIGHTED MONTE CARLO ACQUISITION FUNCTIONS
|
| 153 |
+
|
| 154 |
+
Naturally, neither the belief-weighted prior $p(f|\rho)$ nor the belief-weighted posterior $p(f|\mathcal{D},\rho)$ has a closed-form expression; both are inherently non-Gaussian for non-uniform beliefs. As such, we resort to MC acquisition functions to compute utilities that are amenable to BO. In the subsequent section, we focus on the prevalent acquisition functions EI and MES.
|
| 155 |
+
|
| 156 |
+
Expected Improvement The computation of the MC-EI within the ColaBO framework requires only minor adaptations of the original MC acquisition function. By definition, MC-EI assigns utility $u$ as $u_{\mathrm{EI}}(f(\pmb{x})) = \max(f_n^* - f(\pmb{x}), 0)$ , which yields
|
| 157 |
+
|
| 158 |
+
$$
|
| 159 |
+
\alpha_ {\mathrm {E I}} (\boldsymbol {x}; \mathcal {D}) = \mathbb {E} _ {f _ {\boldsymbol {x}} | \mathcal {D}} \left[ u _ {\mathrm {E I}} \left(f _ {\boldsymbol {x}}\right) \right] \approx \tag {9}
|
| 160 |
+
$$
|
| 161 |
+
|
| 162 |
+
$$
|
| 163 |
+
\sum_ {\ell} \max \left(f _ {n} ^ {*} - f _ {\boldsymbol {x}} ^ {(\ell)}, 0\right), f _ {\boldsymbol {x}} ^ {(\ell)} \sim p (f (\boldsymbol {x}) | \mathcal {D}). \tag {10}
|
| 164 |
+
$$
|
| 165 |
+
|
| 166 |
+

|
| 167 |
+
|
| 168 |
+

|
| 169 |
+
Figure 3: (Top) Draws from $p(f|\mathcal{D})$ (light blue) and $p(f|\rho, \mathcal{D})$ with a prior $\rho$ located in the green region. (Bottom) Vanilla MC-EI and ColaBO MC-EI, resulting from computing the acquisition function from sample draws from $p(f|\rho, \mathcal{D})$ .
|
| 170 |
+
|
| 171 |
+

|
| 172 |
+
Figure 2: (Top left) Draws from the prior $p(f)$ (light blue) and the belief-weighted prior $p(f|\rho)$, whose members are likely to have their optimum within the green region. (Top right) Pathwise updated draws based on observed data. As the green region is distant from the observed data, samples are almost unaffected by the data in this region. (Bottom left) Exact mean and standard deviation $(\mu_{\pmb{x}},\sigma_{\pmb{x}})$ of $p(f)$ and estimated mean and standard deviation of $p(f|\rho)$. (Bottom right) Exact $p(f|\mathcal{D})$ and estimated $p(f|\rho,\mathcal{D})$. As $p(f|\rho)$ consists of functions whose optimum is located within the green region, the resulting model has a higher mean and lower variance within this region. Moreover, $p(f|\rho)$ globally displays lower upside variance compared to the vanilla GP.
|
| 173 |
+
|
| 174 |
+
Utilizing rejection sampling, we can compute the MC-EI under the ColaBO posterior accordingly,
|
| 175 |
+
|
| 176 |
+
$$
|
| 177 |
+
\alpha_{\mathrm{EI}}(\boldsymbol{x}; \mathcal{D}, \rho) = \mathbb{E}_{f_{\boldsymbol{x}} | \mathcal{D}, \rho} \left[ u_{\mathrm{EI}}(f_{\boldsymbol{x}}) \right] \propto \tag{11}
|
| 178 |
+
$$
|
| 179 |
+
|
| 180 |
+
$$
|
| 181 |
+
\int_{f} u_{\mathrm{EI}}\left(f_{\boldsymbol{x}}\right) \rho(f)\, p(f \mid \mathcal{D})\, d f \approx \sum_{\ell} \rho\left(f^{(\ell)}\right) \max\left(f_{n}^{*} - f_{\boldsymbol{x}}^{(\ell)}, 0\right), \quad f_{\boldsymbol{x}}^{(\ell)} \sim p(f(\boldsymbol{x}) \mid \mathcal{D}), \tag{12}
|
| 182 |
+
$$
|
| 183 |
+
|
| 184 |
+
wherein samples in Eq. 12 are drawn from the prior, retained with probability $\rho(f^{(\ell)}) / \max \rho$, and pathwise updated. In Figure 3, we demonstrate how ColaBO-EI differs from MC-EI for a posterior identical to that of Figure 2. By computing $\alpha_{\mathrm{EI}}$ from samples biased by $\rho$, ColaBO substantially directs the search towards good regions under $\rho$. The derivations for PI and KG are analogous to that of EI.
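For intuition, a minimal sketch of the ColaBO MC-EI estimator of Eq. 12 is given below. Instead of rejection sampling with retention probability $\rho(f^{(\ell)})/\max \rho$, the sketch simply weights each pathwise posterior sample by its belief value, matching the weighted sum in Eq. 12 (the weights do not depend on $\boldsymbol{x}$, so the maximizer is unchanged). The callables and names are illustrative, not the authors' implementation.

```python
import numpy as np

def colabo_mc_ei(x, posterior_paths, rho, f_best):
    """ColaBO MC-EI at a single point x, using the utility max(f_n^* - f_x, 0) as written above.

    posterior_paths: list of callables f^(l), e.g. pathwise samples as in Eq. 8.
    rho:             callable assigning each sampled path a belief weight rho(f^(l)).
    f_best:          current best (noiseless) value f_n^*.
    """
    x = np.atleast_2d(x)
    weights = np.array([rho(f) for f in posterior_paths])
    improvements = np.array([max(f_best - float(f(x)[0]), 0.0) for f in posterior_paths])
    return float((weights * improvements).sum() / (weights.sum() + 1e-12))
```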
|
| 185 |
+
|
| 186 |
+
Max-Value Entropy Search We derive a ColaBO-MES acquisition function by first considering the definition of the entropy, $\mathrm{H}[p(y_{\mathbf{x}}|\mathcal{D})] = \mathbb{E}_{y_{\mathbf{x}}|\mathcal{D}}[-\log p(y_{\mathbf{x}}|\mathcal{D})]$ . When considering the belief-weighted posterior, we further condition the posterior on $\rho$ and obtain
|
| 187 |
+
|
| 188 |
+
$$
|
| 189 |
+
\begin{aligned}
\alpha_{\mathrm{MES}}(\boldsymbol{x}) &= \mathbb{E}_{f_{*} \mid \mathcal{D}, \rho} \left[ \mathbb{E}_{y_{\boldsymbol{x}} \mid \mathcal{D}, \rho, f_{*}} \left[ \log p(y_{\boldsymbol{x}} \mid \mathcal{D}, \rho, f_{*}) \right] \right] - \mathbb{E}_{y_{\boldsymbol{x}} \mid \mathcal{D}, \rho} \left[ \log p(y_{\boldsymbol{x}} \mid \mathcal{D}, \rho) \right] \quad (13) \\
&\propto \mathbb{E}_{f_{*} \mid \mathcal{D}, \rho} \left[ \mathbb{E}_{f_{\boldsymbol{x}} \mid \mathcal{D}, \rho} \left[ \mathbb{E}_{y_{\boldsymbol{x}} \mid f_{\boldsymbol{x}}} \left[ \log p\left(y_{\boldsymbol{x}} \mid f_{\boldsymbol{x}}, \rho, f_{*}\right) \right] \right] \right] - \mathbb{E}_{f_{\boldsymbol{x}} \mid \mathcal{D}, \rho} \left[ \mathbb{E}_{y_{\boldsymbol{x}} \mid f_{\boldsymbol{x}}} \left[ \log p\left(y_{\boldsymbol{x}} \mid f_{\boldsymbol{x}}, \rho\right) \right] \right] \quad (14) \\
&\approx \frac{1}{Z_{J}} \sum_{j=1}^{J} \sum_{\ell=1}^{L} \sum_{k=1}^{K} \log p\left(y_{\boldsymbol{x}}^{(k)} \mid f_{\boldsymbol{x}}^{(\ell)}, f_{*}^{(j)}\right) \rho\left(f^{(\ell)}\right) \rho\left(f^{(j)}\right) - \sum_{\ell=1}^{L} \sum_{k=1}^{K} \log p\left(y_{\boldsymbol{x}}^{(k)} \mid f_{\boldsymbol{x}}^{(\ell)}\right) \rho\left(f^{(\ell)}\right), \quad (15)
\end{aligned}
|
| 190 |
+
$$
|
| 191 |
+
|
| 192 |
+
where $Z_J = \sum_{j=1}^{J} \rho(f^{(j)})$ is a normalizing constant brought on by sampling optimal values, $y_{\boldsymbol{x}} | f_{\boldsymbol{x}}$ can trivially be obtained by adding Gaussian noise $\varepsilon \sim \mathcal{N}(0, \sigma_\varepsilon^2)$ to a noiseless observation $f_{\boldsymbol{x}} | \mathcal{D}$ in the innermost expectation, and $f_{\boldsymbol{x}}$ and $f_*$ are obtained through the pathwise sampling procedure outlined in Eq. 8. The samples are evaluated under $p(y_{\boldsymbol{x}} | f_{\boldsymbol{x}})$ and $p(y_{\boldsymbol{x}} | f_{\boldsymbol{x}}, f_*)$. As evident from Eq. 15, $\rho$ affects the posterior distribution of both the observations $y_{\boldsymbol{x}}$ and the optimal values $f_*$. PES and JES are derived analogously; however, these acquisition functions require conditioning on additional, simulated data and, consequently, additional pathwise updates.
|
| 193 |
+
|
| 194 |
+
# 3.3 PRACTICAL CONSIDERATIONS
|
| 195 |
+
|
| 196 |
+
ColaBO introduces additional flexibility to MC-based BO acquisition functions. The ColaBO framework deviates from vanilla (q-)MC acquisition functions (Wilson et al., 2017; Balandat et al., 2020) by utilizing approximate sample functions from the posterior, as opposed to pointwise draws from the posterior predictive and the reparametrization trick (Kingma & Welling, 2014; Rezende et al., 2014). ColaBO holds three shortcomings not prevalent in vanilla MC acquisition functions: (1) it cannot utilize Quasi-MC in the draws from the predictive posterior (only in the RFF weights),
|
| 197 |
+
|
| 198 |
+
Algorithm 1 ColaBO iteration
|
| 199 |
+
1: Input: User prior $\rho$ , number of function samples $L$ , current data $\mathcal{D}$
|
| 200 |
+
2: Output: Next query location $\pmb{x}'$ .
|
| 201 |
+
3: for $\ell \in \{1, \dots, L\}$ do
|
| 202 |
+
4: $\rho^{(\ell)} = \rho(\tilde{f}^{(\ell)}; n)$ , $\tilde{f}^{(\ell)} \sim p(\tilde{f})$
|
| 203 |
+
5: $(\tilde{f}^{(\ell)}|D) = \text{PathwiseUpdate}(\tilde{f}^{(\ell)}, D)$
|
| 204 |
+
6: end for
|
| 205 |
+
7: $p(\tilde{f}|D, \rho) \approx \sum_{\ell} \rho^{(\ell)}(\tilde{f}^{(\ell)}|D)$
|
| 206 |
+
8: $\pmb{x}' = \arg \max_{\pmb{x} \in \mathcal{X}} \mathbb{E}_{p(\tilde{f}|D, \rho)}[u(\tilde{f}_x)]$
|
| 207 |
+
9: ▷Maximize MC acquisition
|
| 208 |
+
|
| 209 |
+
(2) it cannot fix the base samples (Balandat et al., 2020) drawn from the posterior for acquisition function consistency across the search space, and (3) the RFF approximation of $p(f)$ introduces bias. This approximation error is substantially more pronounced for the Matérn 5/2-kernel than the squared exponential, leaving ColaBO best suited for the latter. In Sec. 4.1, we empirically display the impact of these shortcomings. While acquisition function optimization no longer enjoys improved accuracy resulting from reparametrization, the acquisition function can still benefit from the fact that ColaBO backpropagates through quantities computed as sums of smooth functions.
|
| 210 |
+
|
| 211 |
+
# 4 RESULTS
|
| 212 |
+
|
| 213 |
+
We evaluate the performance of ColaBO on various tasks, using priors over the optimum $\pi_{x_*}$ obtained from known optima on synthetic tasks, as well as from prior work (Mallik et al., 2023) on realistic tasks. We consider two variants of ColaBO: one using LogEI (Ament et al., 2023), a numerically stable, smoothed logsumexp transformation of EI with analogous derivation, and one variant using MES. We benchmark against the vanilla variants of each acquisition function, as well as $\pi$ BO (Hvarfner et al., 2022b) and decoupled Thompson sampling Thompson (1933); Wilson et al. (2020). All acquisition functions are implemented in BoTorch (Balandat et al., 2020) using a squared exponential kernel and MAP hyperparameter estimation. We present experiments with a Matérn-5/2 (Matérn, 1960) kernel in App. C.1. Unless stated otherwise, all methods are initialized with the mode of the prior followed by 2 Sobol samples. The experimental setup is outlined in Appendix B, and our code is publicly available at https://github.com/hvarfner/colabo.
|
| 214 |
+
|
| 215 |
+
# 4.1 APPROXIMATION QUALITY OF THE COLABO FRAMEWORK
|
| 216 |
+
|
| 217 |
+
Firstly, we demonstrate the approximation quality of ColaBO without user priors to assert its accuracy compared to a vanilla MC acquisition function. To facilitate comparison, we randomly sample 10 points on the Hartmann (3D) function and optimize LogEI with a large budget. We subsequently optimize ColaBO-LogEI on the same set of points and compare the arg max to the solution found by the gold standard. Figure 4 displays the (log10) Euclidean distance between the arg max of LogEI and its ColaBO variant. We note that, for small amounts ($\leqslant 256$) of posterior samples, the error induced by RFF bias is relatively low, which is evidenced by all RFF variants being roughly equal in distance to the true acquisition function optimizer.
|
| 218 |
+
|
| 219 |
+

|
| 220 |
+
Figure 4: Mean and $1/4$ standard deviation of MC-induced errors of ColaBO-LogEI relative to vanilla LogEI, as measured by the distance to the arg max of the acquisition function on Hartmann (3D), on 10 randomly sampled points for 40 seeds.
|
| 221 |
+
|
| 222 |
+
# 4.2 SYNTHETIC FUNCTIONS WITH KNOWN PRIORS
|
| 223 |
+
|
| 224 |
+
We adopt an evaluation protocol similar to that of Hvarfner et al. (2022b), and evaluate ColaBO with two types of user beliefs on synthetic tasks: well-located and poorly located priors over the optimal location, designed to emulate a well-informed and a poorly informed practitioner, respectively. The well-located prior is offset by a small ($10\%$) amount from the optimum, and the poorly located prior is maximally offset while retaining its mode inside the search space. Complete details on the priors can be found
|
| 225 |
+
|
| 226 |
+

|
| 227 |
+
Figure 5: Performance on synthetic functions with well-located priors. Both ColaBO-LogEI and ColaBO-MES offer drastic speed-ups over their vanilla variants, and offer performance similar to $\pi$BO. The ranking of the ColaBO acquisition functions is generally consistent with that of their respective vanilla variants. This is most prominent on Rosenbrock (6D), where ColaBO-MES struggles similarly to vanilla MES.
|
| 228 |
+
|
| 229 |
+

|
| 230 |
+
Figure 6: Performance on poorly located priors. The ColaBO acquisition functions are more robust than $\pi$BO, as they frequently recover the performance of the vanilla acquisition function before the total budget is depleted. ColaBO-LogEI struggles marginally on Hartmann (6D). ColaBO-MES recovers the baseline on all tasks.
|
| 231 |
+
|
| 232 |
+
in Appendix B.3. On well-located priors, both ColaBO-LogEI and ColaBO-MES demonstrate substantially improved performance relative to their vanilla counterparts, comparable to $\pi$BO on all benchmarks. On poorly located priors, ColaBO demonstrates superior robustness, recovering the performance of the vanilla acquisition function within the maximal budget of $20D$ iterations and clearly outperforming $\pi$BO, which is more frequently misled by the poor prior. In Appendix C.2, we also demonstrate ColaBO utilizing (accurate) beliefs over the optimal value: similarly to Figure 5, ColaBO yields increased efficiency relative to the baselines, albeit not as substantial. Moreover, we demonstrate its usage with batch evaluations on well-located priors in Sec. C.3, showing that the drop in performance from batching evaluations is marginal at worst.
|
| 233 |
+
|
| 234 |
+
# 4.3 HYPERPARAMETER TUNING TASKS
|
| 235 |
+
|
| 236 |
+
Lastly, we evaluate ColaBO on three $4D$ deep learning HPO tasks from the PD1 (Wang et al., 2023) benchmarking suite. While the optima for these tasks are ultimately unknown, we utilize the priors provided in MF-Prior-Bench<sup>1</sup> (Mallik et al., 2023), which are intended to provide a good starting point for further optimization. To emulate a realistic HPO setting, we consider a smaller optimization budget of $10D$ iterations, and initialize all methods that utilize user beliefs with only one initial sample, that being the mode of the prior. The two ColaBO variants perform best in this evaluation, producing the best terminal performance on two of the three tasks, with all methods being tied on the third. ColaBO demonstrates consistent speed-ups compared to its vanilla counterparts, surpassing the terminal performance of the baseline within a third of the budget on CIFAR and LM1B. In App. A, we benchmark on 5 tasks from LCBench (Zimmer et al., 2020), displaying similar performance.
|
| 237 |
+
|
| 238 |
+
# 5 RELATED WORK
|
| 239 |
+
|
| 240 |
+
In BO, auxiliary prior information can be conveyed in multiple ways. We outline meta learning/transfer learning for BO based on data from previous experiments, and data-less approaches.
|
| 241 |
+
|
| 242 |
+
Learning from Previous Experiments Transfer learning and meta learning for BO aim to automatically extract and use knowledge from prior executions of BO by pre-training the model on
|
| 243 |
+
|
| 244 |
+

|
| 245 |
+
Figure 7: Performance on the 4D PD1 hyperparameter tuning tasks of various deep learning pipelines. ColaBO drastically accelerates optimization initially, finding configurations with close to terminal performance quickly. $\pi$ BO offers competitive performance, but lacks the rapid initial progress of ColaBO on CIFAR and LM1B.
|
| 246 |
+
|
| 247 |
+
data acquired from previous executions (Swersky et al., 2013; Wistuba et al., 2015; Perrone et al., 2019; Feurer et al., 2015; 2018; Rothfuss et al., 2021a;b; Wistuba & Grabocka, 2021; Feurer et al., 2022). Typically, meta- and transfer learning exploit relevant previous data for training the GP for the current task while retaining predictive uncertainty to account for imperfect task correlation.
|
| 248 |
+
|
| 249 |
+
Expert Priors over Function Optimum Few previous works have proposed to inject explicit prior distributions over the location of an optimum into BO. In these cases, users explicitly define a prior that encodes their beliefs on where the optimum is more likely to be located. Bergstra et al. (2011a) suggest an approach that supports prior beliefs from a fixed set of distributions, which affects the very initial stage of optimization. However, this approach cannot be combined with standard acquisition functions. BOPrO (Souza et al., 2021) employs a similar structure that combines the user-provided prior distribution with a data-driven model into a pseudo-posterior. From the pseudo-posterior, configurations are selected using the EI acquisition function, using the formulation in Bergstra et al. (2011a). $\pi$ BO (Hvarfner et al., 2022b) suggests a general-purpose prior-weighted acquisition function, where the influence of the prior decreases over time. They provide convergence guarantees for when the framework is applied to the EI acquisition function. While effective, none of these approaches act on the surrogate model in a Bayesian-principled fashion, but strictly as heuristics. Moreover, they solely focus on priors over optimal inputs, thus offering less utility than ColaBO.
|
| 250 |
+
|
| 251 |
+
Priors over Optimal Value Similarly few works have addressed the issue of auxiliary knowledge of the optimal value. Both Jeong & Kim (2021) and Nguyen & Osborne (2020) propose altering the GP and accompanying it with tailored acquisition functions. Jeong & Kim (2021) employ variational inference, proposing distinct variational families depending on the type of knowledge pertaining to the optimal value. Nguyen & Osborne (2020) use a parabolic transformation of the output space to ensure that an upper bound is preserved. Unlike ColaBO, neither of these methods is general enough to accommodate arbitrary user priors to guide the optimization.
|
| 252 |
+
|
| 253 |
+
# 6 CONCLUSION, LIMITATIONS AND FUTURE WORK
|
| 254 |
+
|
| 255 |
+
We presented ColaBO, a flexible BO framework that allows practitioners to inject beliefs over function properties in a Bayesian-principled manner, allowing for increased efficiency in the BO procedure. ColaBO works across a collection of MC acquisition functions, inheriting their flexibility in batch optimization and ability to work with non-Gaussian posteriors. It demonstrates competitive performance for well-located priors, using them to substantially accelerate optimization. Moreover, it retains approximately baseline performance when applied to detrimental priors, demonstrating greater robustness than $\pi$ BO. ColaBO crucially relies on multiple steps of MC. While flexible, this approach incurs substantial computational expense in order to ensure sufficient accuracy, requiring tens of seconds per evaluation depending on the size of the benchmark. Moreover, obtaining draws from $\rho_{x}^{*}$ scales exponentially in the dimensionality of the prior. While practitioners are unlikely to specify priors over more than a handful of variables, ColaBO may become impractical when priors of higher dimensionality are employed. Paths for future work could involve more accurate and efficient sampling procedures (Lin et al., 2023) from the belief-weighted prior, as well as variational (Titsias, 2009) or pre-trained (Müller et al., 2022; 2023) approaches to obtain a representative belief-biased model with an analytical posterior. This would likely bring down the runtime of ColaBO and broaden its potential use. Lastly, applying ColaBO to multi-fidelity optimization (Kandasamy et al., 2016; Mallik et al., 2023) offers an additional avenue for increased efficiency, which would further increase its viability on costly deep learning pipelines.
|
| 256 |
+
|
| 257 |
+
# ACKNOWLEDGEMENTS
|
| 258 |
+
|
| 259 |
+
We thank the anonymous reviewers for their valuable contributions. Luigi Nardi was supported in part by affiliate members and other supporters of the Stanford DAWN project — Ant Financial, Facebook, Google, Intel, Microsoft, NEC, SAP, Teradata, and VMware. Carl Hvarfner, Erik Hellsten and Luigi Nardi were partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation. Luigi Nardi was partially supported by the Wallenberg Launch Pad (WALP) grant Dnr 2021.0348. Frank Hutter acknowledges support through TAILOR, a project funded by the EU Horizon 2020 research and innovation programme under GA No 952215, by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under grant number 417962828, by the state of Baden-Württemberg through bwHPC and the German Research Foundation (DFG) through grant no INST 39/963-1 FUGG, and by the European Research Council (ERC) Consolidator Grant “Deep Learning 2.0” (grant no. 101045765). The computations were also enabled by resources provided by the Swedish National Infrastructure for Computing (SNIC) at LUNARC partially funded by the Swedish Research Council through grant agreement no. 2018-05973. Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the ERC. Neither the European Union nor the ERC can be held responsible for them.
|
| 260 |
+
|
| 261 |
+

|
| 262 |
+
|
| 263 |
+
Funded by the European Union
|
| 264 |
+
|
| 265 |
+
# REFERENCES
|
| 266 |
+
|
| 267 |
+
Sebastian Ament, Samuel Daulton, David Eriksson, Maximilian Balandat, and Eytan Bakshy. Unexpected improvements to expected improvement for bayesian optimization. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id=1vyAG6j9PE.
|
| 268 |
+
Raul Astudillo and Peter Frazier. Bayesian optimization of function networks. Advances in neural information processing systems, 34:14463-14475, 2021.
|
| 269 |
+
M. Balandat, B. Karrer, D. R. Jiang, S. Daulton, B. Letham, A. G. Wilson, and E. Bakshy. Botorch: A framework for efficient monte-carlo bayesian optimization. In Advances in Neural Information Processing Systems, 2020. URL http://arxiv.org/abs/1910.06403.
|
| 270 |
+
J. Bergstra, R. Bardenet, Y. Bengio, and B. Kégl. Algorithms for hyper-parameter optimization. In J. Shawe-Taylor, R. Zemel, P. Bartlett, F. Pereira, and K. Weinberger (eds.), Proceedings of the 25th International Conference on Advances in Neural Information Processing Systems (NeurIPS'11), pp. 2546-2554, 2011a.
|
| 271 |
+
James Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. Algorithms for Hyper-Parameter Optimization. In Advances in Neural Information Processing Systems (NeurIPS), volume 24. Curran Associates, Inc., 2011b.
|
| 272 |
+
E. Brochu, V. Cora, and N. de Freitas. A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. arXiv:1012.2599v1 [cs.LG], 2010.
|
| 273 |
+
Adam D. Bull. Convergence rates of efficient global optimization algorithms. Journal of Machine Learning Research, 12:2879-2904, 2011.
|
| 274 |
+
R. Calandra, N. Gopalan, A. Seyfarth, J. Peters, and M. Deisenroth. Bayesian gait optimization for bipedal locomotion. In P. Pardalos and M. Resende (eds.), Proceedings of the Eighth International Conference on Learning and Intelligent Optimization (LION'14), 2014.
|
| 275 |
+
Adel Ejeh, Leon Medvinsky, Aaron Councilman, Hemang Nehra, Suraj Sharma, Vikram Adve, Luigi Nardi, Eriko Nurvitadhi, and Rob A Rutenbar. Hpvm2fpga: Enabling true hardware-agnostic fpga programming. In Proceedings of the 33rd IEEE International Conference on Application-specific Systems, Architectures, and Processors, 2022.
|
| 276 |
+
|
| 277 |
+
David Eriksson, Michael Pearce, Jacob Gardner, Ryan D Turner, and Matthias Poloczek. Scalable global optimization via local Bayesian optimization. In Advances in Neural Information Processing Systems, pp. 5496-5507, 2019. URL http://papers.nips.cc/paper/8788-scalable-global-optimization-via-local-bayesian-optimization.pdf.
|
| 278 |
+
M. Feurer, Jost Tobias Springenberg, and F. Hutter. Initializing bayesian hyperparameter optimization via meta-learning. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, pp. 1128-1135, 2015.
|
| 279 |
+
M. Feurer, B. Letham, F. Hutter, and E. Bakshy. Practical transfer learning for bayesian optimization. ArXiv abs/1802.02219, 2018.
|
| 280 |
+
Matthias Feurer, Benjamin Letham, Frank Hutter, and Eytan Bakshy. Practical transfer learning for Bayesian optimization. arXiv preprint 1802.02219, 2022.
|
| 281 |
+
Peter Frazier, Warren Powell, and Savas Dayanik. The knowledge-gradient policy for correlated normal beliefs. INFORMS journal on Computing, 21(4):599-613, 2009.
|
| 282 |
+
R. Garnett. Bayesian Optimization. Cambridge University Press, 2022. Available for free at https://bayesoptbook.com/.
|
| 283 |
+
Ryan-Rhys Griffiths and José Miguel Hernández-Lobato. Constrained bayesian optimization for automatic chemical design using variational autoencoders. Chemical Science, 2020.
|
| 284 |
+
P. Hennig and C. J. Schuler. Entropy search for information-efficient global optimization. Journal of Machine Learning Research, 13(1):1809-1837, June 2012. ISSN 1532-4435.
|
| 285 |
+
J. M. Hernández-Lobato, M. W. Hoffman, and Z. Ghahramani. Predictive entropy search for efficient global optimization of black-box functions. In Advances in Neural Information Processing Systems, 2014. URL https://proceedings.neurips.cc/paper/2014/file/069d3bb002acd8d7dd095917f9efe4cb-Paper.pdf.
|
| 286 |
+
Jose Miguel Hernández-Lobato, Michael Gelbart, Matthew Hoffman, Ryan Adams, and Zoubin Ghahramani. Predictive entropy search for bayesian optimization with unknown constraints. In International conference on machine learning, pp. 1699-1707. PMLR, 2015.
|
| 287 |
+
Daolang Huang, Louis Filstroff, Petrus Mikkola, Runkai Zheng, and Samuel Kaski. Bayesian optimization augmented with actively elicited expert knowledge, 2022.
|
| 288 |
+
F. Hutter, H. Hoos, and K. Leyton-Brown. Sequential model-based optimization for general algorithm configuration. In C. Coello (ed.), Proceedings of the Fifth International Conference on Learning and Intelligent Optimization (LION'11), volume 6683, pp. 507-523, 2011.
|
| 289 |
+
Carl Hvarfner, Frank Hutter, and Luigi Nardi. Joint entropy search for maximally-informed bayesian optimization. In Proceedings of the 36th International Conference on Neural Information Processing Systems, 2022a.
|
| 290 |
+
Carl Hvarfner, Danny Stoll, Artur Souza, Marius Lindauer, Frank Hutter, and Luigi Nardi. PiBO: Augmenting Acquisition Functions with User Beliefs for Bayesian Optimization. In International Conference on Learning Representations, 2022b.
|
| 291 |
+
Carl Hvarfner, Erik Hellsten, Frank Hutter, and Luigi Nardi. Self-correcting bayesian optimization through bayesian active learning. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id=dX9MjUtP1A.
|
| 292 |
+
Taewon Jeong and Heeyoung Kim. Objective bound conditional gaussian process for bayesian optimization. In Marina Meila and Tong Zhang (eds.), Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pp. 4819-4828. PMLR, 18-24 Jul 2021. URL https://proceedings.mlr.press/v139/jeong21a.html.
|
| 293 |
+
D. Jones, M. Schonlau, and W. Welch. Efficient global optimization of expensive black-box functions. Journal of Global Optimization, 13:455-492, 12 1998. doi: 10.1023/A:1008306431147.
|
| 294 |
+
|
| 295 |
+
A G Journel and C J Huijbregts. Mining geostatistics, Jan 1976.
|
| 296 |
+
K. Kandasamy, G. Dasarathy, J. Oliva, J. Schneider, and B. Poczos. Gaussian Process Bandit Optimisation with Multi-fidelity Evaluations. In D. Lee, M. Sugiyama, U. von Luxburg, I. Guyon, and R. Garnett (eds.), Proceedings of the 30th International Conference on Advances in Neural Information Processing Systems (NeurIPS'16), pp. 992-1000, 2016.
|
| 297 |
+
K. Kandasamy, A. Krishnamurthy, J. Schneider, and B. Poczos. Parallelised Bayesian optimisation via Thompson sampling. In A. Storkey and F Perez-Cruz (eds.), Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS), volume 84, pp. 133-142. Proceedings of Machine Learning Research, 2018.
|
| 298 |
+
Diederik P Kingma and Max Welling. Auto-encoding variational bayes, 2014. URL https://arxiv.org/abs/1312.6114.
|
| 299 |
+
Arun Kumar, Santu Rana, Alistair Shilton, and Svetha Venkatesh. Human-ai collaborative bayesian optimisation. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (eds.), Advances in Neural Information Processing Systems, volume 35, pp. 16233-16245. Curran Associates, Inc., 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/6751611b394a3464cea53eed91cf163c-Paper-Conference.pdf.
|
| 300 |
+
H. J. Kushner. A New Method of Locating the Maximum Point of an Arbitrary Multipeak Curve in the Presence of Noise. Journal of Basic Engineering, 86(1):97-106, 03 1964. ISSN 0021-9223. doi: 10.1115/1.3653121. URL https://doi.org/10.1115/1.3653121.
|
| 301 |
+
B. Letham, K. Brian, G. Ottoni, and E. Bakshy. Constrained Bayesian optimization with noisy experiments. Bayesian Analysis, 2018.
|
| 302 |
+
Jihao Andreas Lin, Javier Antorán, Shreyas Padhy, David Janz, José Miguel Hernández-Lobato, and Alexander Terenin. Sampling from gaussian process posteriors using stochastic gradient descent. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id=Sf9goJtTCE.
|
| 303 |
+
Marius Lindauer, Katharina Eggensperger, Matthias Feurer, André Biedenkapp, Difan Deng, Carolin Benjamins, Tim Ruhkopf, René Sass, and Frank Hutter. Smac3: A versatile bayesian optimization package for hyperparameter optimization. Journal of Machine Learning Research, 23(54):1-9, 2022. URL http://jmlr.org/papers/v23/21-0888.html.
|
| 304 |
+
Neeratyoy Mallik, Edward Bergman, Carl Hvarfner, Danny Stoll, Maciej Janowski, Marius Lindauer, Luigi Nardi, and Frank Hutter. Priorband: Practical hyperparameter optimization in the age of deep learning. arXiv preprint 2306.12370, 2023.
|
| 305 |
+
B. Matérn. Spatial variation. Meddelanden fran Statens Skogsforskningsinstitut, 1960.
|
| 306 |
+
Matthias Mayr, Carl Hvarfner, Konstantinos Chatzilygeroudis, Luigi Nardi, and Volker Krueger. Learning skill-based industrial robot tasks with user priors. IEEE 18th International Conference on Automation Science and Engineering, 2022. URL https://arxiv.org/abs/2208.01605.
|
| 307 |
+
J. Mockus, V. Tiesis, and A. Zilinskas. The application of Bayesian methods for seeking the extremum. Towards Global Optimization, 2(117-129):2, 1978.
|
| 308 |
+
Henry B. Moss, David S. Leslie, Javier Gonzalez, and Paul Rayson. Gibbon: General-purpose information-based bayesian optimisation. Journal of Machine Learning Research, 22(235):1-49, 2021. URL http://jmlr.org/papers/v22/21-0120.html.
|
| 309 |
+
Samuel Müller, Noah Hollmann, Sebastian Pineda Arango, Josif Grabocka, and Frank Hutter. Transformers can do bayesian inference. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=KSugKcbNf9.
|
| 310 |
+
|
| 311 |
+
Samuel Müller, Matthias Feurer, Noah Hollmann, and Frank Hutter. PFNs4BO: In-context learning for Bayesian optimization. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett (eds.), Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pp. 25444-25470. PMLR, 23-29 Jul 2023. URL https://proceedings.mlr.press/v202/muller23a.html.
|
| 312 |
+
Mojmir Mutny and Andreas Krause. Efficient high dimensional bayesian optimization with additivity and quadrature fourier features. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper_files/paper/2018/file/4e5046fc8d6a97d18a5f54beaed54dea-Paper.pdf.
|
| 313 |
+
L. Nardi, D. Koeplinger, and K. Olukotun. Practical design space exploration. In 2019 IEEE 27th International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS), pp. 347-358. IEEE, 2019.
|
| 314 |
+
Willie Neiswanger, Ke Alexander Wang, and Stefano Ermon. Bayesian algorithm execution: Estimating computable properties of black-box functions using mutual information. In Marina Meila and Tong Zhang (eds.), Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pp. 8005-8015. PMLR, 18-24 Jul 2021. URL https://proceedings.mlr.press/v139/neiswanger21a.html.
|
| 315 |
+
Vu Nguyen and Michael A. Osborne. Knowing the what but not the where in Bayesian optimization. In Hal Daume III and Aarti Singh (eds.), Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pp. 7317-7326. PMLR, 13-18 Jul 2020. URL https://proceedings.mlr.press/v119/nguyen20d.html.
|
| 316 |
+
C. Oh, E. Gavves, and M. Welling. BOCK: Bayesian optimization with cylindrical kernels. In International Conference on Machine Learning, pp. 3865-3874, 2018.
|
| 317 |
+
V. Perrone, H. Shen, M. Seeger, C. Archambeau, and R. Jenatton. Learning search spaces for bayesian optimization: Another view of hyperparameter transfer learning. In Advances in Neural Information Processing Systems, 2019.
|
| 318 |
+
Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In J. Platt, D. Koller, Y. Singer, and S. Roweis (eds.), Advances in Neural Information Processing Systems, volume 20. Curran Associates, Inc., 2007. URL https://proceedings.neurips.cc/paper_files/paper/2007/file/013a006f03dbc5392effeb8f18fda755-Paper.pdf.
|
| 319 |
+
C. Rasmussen and C. Williams. Gaussian Processes for Machine Learning. The MIT Press, 2006.
|
| 320 |
+
Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Eric P. Xing and Tony Jebara (eds.), Proceedings of the 31st International Conference on Machine Learning, volume 32 of Proceedings of Machine Learning Research, pp. 1278-1286, Beijing, China, 22-24 Jun 2014. PMLR. URL https://proceedings.mlr.press/v32/rezende14.html.
|
| 321 |
+
Jonas Rothfuss, Vincent Fortuin, Martin Josifoski, and Andreas Krause. Pacoh: Bayes-optimal meta-learning with pac-guarantees. In Proceedings of the 38th International Conference on Machine Learning, pp. 9116-9126, 2021a.
|
| 322 |
+
Jonas Rothfuss, Dominique Heyn, Jinfan Chen, and Andreas Krause. Meta-learning reliable priors in the function space. In Advances in Neural Information Processing Systems, volume 34, 2021b.
|
| 323 |
+
Binxin Ru, Xingchen Wan, Xiaowen Dong, and Michael Osborne. Interpretable neural architecture search via bayesian optimisation with weisfeiler-lehman kernels. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=j9Rv7qXdXjd.
|
| 324 |
+
B. Shahriari, K. Swersky, Z. Wang, R. Adams, and N. de Freitas. Taking the human out of the loop: A review of Bayesian optimization. Proceedings of the IEEE, 104(1):148-175, 2016.
|
| 325 |
+
|
| 326 |
+
L. Smith. A disciplined approach to neural network hyper-parameters: Part 1-learning rate, batch size, momentum, and weight decay. arXiv preprint arXiv:1803.09820, 2018.
|
| 327 |
+
J. Snoek, H. Larochelle, and R. Adams. Practical Bayesian optimization of machine learning algorithms. In P. Bartlett, F. Pereira, C. Burges, L. Bottou, and K. Weinberger (eds.), Proceedings of the 26th International Conference on Advances in Neural Information Processing Systems (NeurIPS'12), pp. 2960-2968, 2012.
|
| 328 |
+
A. Souza, L. Nardi, L. Oliveira, K. Olukotun, M. Lindauer, and F. Hutter. Bayesian optimization with a prior for the optimum. In Machine Learning and Knowledge Discovery in Databases. Research Track - European Conference, ECML PKDD 2021, Bilbao, Spain, September 13-17, 2021, Proceedings, Part III, volume 12977 of Lecture Notes in Computer Science, pp. 265-296. Springer, 2021.
|
| 329 |
+
N. Srinivas, A. Krause, S. M. Kakade, and M. W. Seeger. Information-theoretic regret bounds for gaussian process optimization in the bandit setting. IEEE Transactions on Information Theory, 58(5):3250-3265, May 2012. ISSN 1557-9654. doi: 10.1109/tit.2011.2182033. URL http://dx.doi.org/10.1109/TIT.2011.2182033.
|
| 330 |
+
K. Swersky, J. Snoek, and R. Adams. Multi-task Bayesian optimization. In C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Weinberger (eds.), Proceedings of the 27th International Conference on Advances in Neural Information Processing Systems (NeurIPS'13), pp. 2004-2012, 2013.
|
| 331 |
+
W. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285-294, 1933.
|
| 332 |
+
Michalis Titsias. Variational learning of inducing variables in sparse gaussian processes. In David van Dyk and Max Welling (eds.), Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, volume 5 of Proceedings of Machine Learning Research, pp. 567-574, Hilton Clearwater Beach Resort, Clearwater Beach, Florida USA, 16-18 Apr 2009. PMLR. URL https://proceedings.mlr.press/v5/titsias09a.html.
|
| 333 |
+
Ben Tu, Axel Gandy, Nikolas Kantas, and Behrang Shafei. Joint entropy search for multi-objective bayesian optimization. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), Advances in Neural Information Processing Systems, 2022. URL https://openreview.net/forum?id=ZChgD8OoGds.
|
| 334 |
+
Q. Wang, Y. Ming, Z. Jin, Q. Shen, D. Liu, M. J. Smith, K. Veeramachaneni, and H. Qu. Atmseer: Increasing transparency and controllability in automated machine learning. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19, pp. 1-12. Association for Computing Machinery, 2019.
|
| 335 |
+
Zi Wang and Stefanie Jegelka. Max-value entropy search for efficient bayesian optimization. In International Conference on Machine Learning (ICML), 2017.
|
| 336 |
+
Zi Wang, Clement Gehring, Pushmeet Kohli, and Stefanie Jegelka. Batched large-scale bayesian optimization in high-dimensional spaces. In Amos Storkey and Fernando Perez-Cruz (eds.), Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, volume 84 of Proceedings of Machine Learning Research, pp. 745-754. PMLR, 09-11 Apr 2018. URL https://proceedings.mlr.press/v84/wang18c.html.
|
| 337 |
+
Zi Wang, George E. Dahl, Kevin Swersky, Chansoo Lee, Zachary Nado, Justin Gilmer, Jasper Snoek, and Zoubin Ghahramani. Pre-trained Gaussian processes for Bayesian optimization. arXiv preprint arXiv:2109.08215, 2023.
|
| 338 |
+
C. White, W. Neiswanger, and Y. Savani. BANANAS: Bayesian optimization with neural architectures for neural architecture search. In Q. Yang, K. Leyton-Brown, and Mausam (eds.), Proceedings of the Thirty-Fifth Conference on Artificial Intelligence (AAAI'21), pp. 10293-10301. Association for the Advancement of Artificial Intelligence, AAAI Press, 2021.
|
| 339 |
+
|
| 340 |
+
James Wilson, Frank Hutter, and Marc Deisenroth. Maximizing acquisition functions for bayesian optimization. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper/2018/file/498f2c21688f6451d9f5fd09d53edda7-Paper.pdf.
|
| 341 |
+
James T. Wilson, Riccardo Moriconi, Frank Hutter, and Marc Peter Deisenroth. The reparameterization trick for acquisition functions, 2017. URL https://arxiv.org/abs/1712.00424.
|
| 342 |
+
James T. Wilson, Viacheslav Borovitskiy, Alexander Terenin, Peter Mostowsky, and Marc Peter Deisenroth. Efficiently sampling functions from gaussian process posteriors. In International Conference on Machine Learning, 2020. URL https://arxiv.org/abs/2002.09309.
|
| 343 |
+
M. Wistuba, N. Schilling, and L. Schmidt-Thieme. Hyperparameter search space pruning - A new component for sequential model-based hyperparameter optimization. In A. Appice, P. Rodrigues, V. Costa, J. Gama, A. Jorge, and C. Soares (eds.), Machine Learning and Knowledge Discovery in Databases (ECML/PKDD'15), volume 9285, pp. 104-119, 2015.
|
| 344 |
+
Martin Wistuba and Josif Grabocka. Few-shot bayesian optimization with deep kernel surrogates. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=bJxgv5C3sYc.
|
| 345 |
+
Lucas Zimmer, Marius Thomas Lindauer, and Frank Hutter. Auto-pytorch tabular: Multi-fidelity metalearning for efficient and robust autodl. ArXiv, abs/2006.13799, 2020. URL https://api.semanticscholar.org/CorpusID:220041844.
|
| 346 |
+
|
| 347 |
+

|
| 348 |
+
Figure 8: Performance on the 6D LCBench hyperparameter tuning tasks of various deep learning pipelines. ColaBO substantially improves on the non-prior baselines for three out of five tasks. $\pi$ BO performs best on aggregate, and achieves the best acceleration in performance at early iterations.
|
| 349 |
+
|
| 350 |
+
# A LCBENCH BENCHMARKING
|
| 351 |
+
|
| 352 |
+
We evaluate all methods on five deep learning tasks (6D) from the LCBench (Zimmer et al., 2020) suite, utilizing priors from MF-Prior-Bench. The chosen tasks were the five tasks with available priors of the best (good) strength, as per the benchmark suite. Figure 8 shows the performance of all methods on the LCBench tasks. ColaBO improves substantially on the baseline approaches for 3 out of 5 tasks. $\pi$ BO is the overall best-performing method, followed by ColaBO-LogEI.
|
| 353 |
+
|
| 354 |
+
# B EXPERIMENTAL SETUP
|
| 355 |
+
|
| 356 |
+
# B.1 MODEL
|
| 357 |
+
|
| 358 |
+
We outline the model used and the budget allocated to the various MC approximations involved in ColaBO. For all experiments, we utilize MAP estimation of the hyperparameters, and update the hyperparameters at every iteration of BO. All hyperparameters (lengthscale, outputscale and observation noise, $\theta = \{\ell, \sigma_{\varepsilon}^{2}, \sigma_{f}^{2}\}$) are given a conventional $\mathcal{LN}(0,1)$ prior, applied on normalized inputs and standardized outputs. Furthermore, we fit the constant $c$ of the mean function, assigning it a $\mathcal{N}(0,1)$ prior as well. In Tab. 1, we display the parameters of the MC approximations for the various tasks. No. $f$ is the maximal number of functions used in the MC computation of the acquisition function, No. RFFs is the number of random Fourier features used in the sample-path approximation, and No. Resamples is the number of initial posterior draws maximally used for the re-sampling of functions from the posterior $p(f|\rho)$. Lastly, No. $f_*$ is the number of optimal values used in the computation of ColaBO-MES.
|
| 359 |
+
|
| 360 |
+
<table><tr><td>Task</td><td>No. f</td><td>No. RFFs</td><td>No. Resamples</td><td>No. f*</td></tr><tr><td>Synthetic Good</td><td>768</td><td>2048</td><td>$1.5 \times 10^5$</td><td>32</td></tr><tr><td>Synthetic Bad</td><td>768</td><td>2048</td><td>$1.5 \times 10^5$</td><td>32</td></tr><tr><td>PD1</td><td>512</td><td>4096</td><td>$2 \times 10^5$</td><td>32</td></tr><tr><td>Appendix</td><td>512</td><td>1024</td><td>$10^5$</td><td>32</td></tr></table>
|
| 361 |
+
|
| 362 |
+
Table 1: Budget-related parameters of the Monte Carlo approximations for all tasks.
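For concreteness, the following is a minimal sketch (not the authors' code) of how such a MAP-fitted surrogate with $\mathcal{LN}(0,1)$ hyperpriors and a $\mathcal{N}(0,1)$ prior on the constant mean could be set up in BoTorch/GPyTorch; the choice of an RBF kernel and the function name are our own assumptions.

```python
import torch
from botorch.fit import fit_gpytorch_mll
from botorch.models import SingleTaskGP
from gpytorch.kernels import RBFKernel, ScaleKernel
from gpytorch.likelihoods import GaussianLikelihood
from gpytorch.means import ConstantMean
from gpytorch.mlls import ExactMarginalLogLikelihood
from gpytorch.priors import LogNormalPrior, NormalPrior


def fit_map_surrogate(train_x: torch.Tensor, train_y: torch.Tensor) -> SingleTaskGP:
    """Fit a GP by MAP; train_x is (n, d) normalized, train_y is (n, 1) standardized."""
    d = train_x.shape[-1]
    covar_module = ScaleKernel(
        RBFKernel(ard_num_dims=d, lengthscale_prior=LogNormalPrior(0.0, 1.0)),
        outputscale_prior=LogNormalPrior(0.0, 1.0),
    )
    model = SingleTaskGP(
        train_x,
        train_y,
        likelihood=GaussianLikelihood(noise_prior=LogNormalPrior(0.0, 1.0)),
        covar_module=covar_module,
        mean_module=ConstantMean(constant_prior=NormalPrior(0.0, 1.0)),
    )
    mll = ExactMarginalLogLikelihood(model.likelihood, model)
    # With the hyperpriors registered, maximizing the MLL corresponds to MAP estimation.
    fit_gpytorch_mll(mll)
    return model
```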
|
| 363 |
+
|
| 364 |
+
# B.2 BENCHMARKS
|
| 365 |
+
|
| 366 |
+
We outline the benchmarks used, their search spaces and the amount of synthetic noise added. When adding noise, we intend for the ratio of noise variance to total output range to be approximately equal across benchmarks.
|
| 367 |
+
|
| 368 |
+
# B.3 PRIORS
|
| 369 |
+
|
| 370 |
+
For synthetic benchmarks, the approximate optima of all included functions can be obtained in advance. Thus, the correctness of the prior is ultimately known in advance. For a function of dimensionality $d$ with optimum at $\boldsymbol{x}_{*}$ , the well-located prior is constructed by sampling an offset
|
| 371 |
+
|
| 372 |
+
<table><tr><td>Task</td><td>Dimensionality</td><td>$\sigma_\varepsilon$</td><td>Search space</td></tr><tr><td>Hartmann (4D)</td><td>4</td><td>0.25</td><td>$[0,1]^D$</td></tr><tr><td>Levy (5D)</td><td>5</td><td>0.5</td><td>$[-5,5]^D$</td></tr><tr><td>Hartmann (6D)</td><td>6</td><td>0.25</td><td>$[0,1]^D$</td></tr><tr><td>Rosenbrock (6D)</td><td>6</td><td>5</td><td>$[-2.048,2.048]^D$</td></tr><tr><td>Stybtang (7D)</td><td>7</td><td>1</td><td>$[-4,4]^D$</td></tr></table>
|
| 373 |
+
|
| 374 |
+
direction $\epsilon$ and scaling the offset by a dimensionality- and quality-specific term $c(d, q) = q\sqrt{d}$ . For the well-located prior on synthetic tasks, we use $q = 0.1$ , which implies that the mode of the prior is located $10\%$ of the distance across the search space away from the optimum, and construct a Gaussian prior as
|
| 375 |
+
|
| 376 |
+
$$
|
| 377 |
+
\pi_{\boldsymbol{x}_*}(\boldsymbol{x}) \sim \mathcal{N}\left(\boldsymbol{x}_* + c(d, q)\, \epsilon / \|\epsilon\|,\; \sigma_s\right), \quad \epsilon \sim \mathcal{N}(0, I). \tag{16}
|
| 378 |
+
$$
|
| 379 |
+
|
| 380 |
+
with $\sigma_s$ equal to $25\%$ of the search space width for all tasks and prior qualities. For our 20 runs with the well-located prior, this procedure yields 20 unique priors per quality type, all with identical offset magnitudes from the true optimum. No priors with a mode outside the search space were allowed; such priors were simply replaced. For the misinformed priors, we set $q = 1$ , guaranteeing that the mode of the prior will be outside of the search space, and subsequently relocate the mode to the edge of the search space along its shortest path. Priors for all tasks are displayed in Tab. 3. For the PD1 tasks, the locations of the priors were obtained from MF-Prior-Bench (https://github.com/automl/mf-prior-bench). However, these priors require offsetting so that they are not so strong as to make subsequent BO redundant. PD1 priors are provided in [0, 1]-normalized space for simplicity.
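As an illustration of the procedure above and of Eq. (16), the following sketch (in normalized $[0,1]^d$ coordinates) constructs the prior mode by sampling an offset direction, rescaling it to length $c(d,q) = q\sqrt{d}$, re-sampling well-located modes that fall outside the search space, and clipping poorly located modes to the boundary; the function name and the rejection loop are our own illustrative choices.

```python
import numpy as np


def location_prior(x_opt, q=0.1, sigma_s=0.25, rng=None):
    """Return (mode, std) of a Gaussian prior over the optimizer, in [0, 1]^d coordinates.

    q = 0.1 gives the well-located prior, q = 1.0 the poorly located one.
    """
    rng = np.random.default_rng() if rng is None else rng
    x_opt = np.asarray(x_opt, dtype=float)
    d = len(x_opt)
    while True:
        eps = rng.standard_normal(d)
        mode = x_opt + q * np.sqrt(d) * eps / np.linalg.norm(eps)
        if np.all((mode >= 0.0) & (mode <= 1.0)):
            return mode, sigma_s * np.ones(d)            # mode inside: keep it
        if q >= 1.0:
            # Poorly located prior: relocate the mode to the closest point
            # on the boundary of the search space.
            return np.clip(mode, 0.0, 1.0), sigma_s * np.ones(d)
        # Well-located prior with mode outside the search space: replace (re-sample).
```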
|
| 381 |
+
|
| 382 |
+
Table 2: Benchmarks used for the Bayesian optimization experiments.
|
| 383 |
+
|
| 384 |
+
<table><tr><td>Task</td><td>Location</td><td>Offset, good</td><td>Offset, bad</td><td>$\sigma_s$</td></tr><tr><td>Hartmann (4D)</td><td>[0.19, 0.19, 0.56, 0.26]</td><td>$0.1\sqrt{D}$</td><td>max</td><td>0.25</td></tr><tr><td>Levy (5D)</td><td>$[1]^D$</td><td>$1\sqrt{D}$</td><td>max</td><td>2.5</td></tr><tr><td>Hartmann (6D)</td><td>[0.20, 0.15, 0.48, 0.28, 0.31, 0.66]</td><td>$0.1\sqrt{D}$</td><td>max</td><td>0.25</td></tr><tr><td>Rosenbrock (6D)</td><td>$[1]^D$</td><td>$0.4096\sqrt{D}$</td><td>max</td><td>1.024</td></tr><tr><td>Stybtang (7D)</td><td>$[-2.9]^D$</td><td>$0.8\sqrt{D}$</td><td>max</td><td>2</td></tr><tr><td>PD1-WMT</td><td>[0.90, 0.69, 0.02, 0.97]</td><td>$0.05\sqrt{D}$</td><td>N/A</td><td>0.25</td></tr><tr><td>PD1-CIFAR</td><td>[1, 0.80, 0.0, 0.0]</td><td>$0.05\sqrt{D}$</td><td>N/A</td><td>0.25</td></tr><tr><td>PD1-LM1B</td><td>[0.91, 0.67, 0.36, 0.85]</td><td>$0.05\sqrt{D}$</td><td>N/A</td><td>0.25</td></tr></table>
|
| 385 |
+
|
| 386 |
+
Table 3: $\pi_{x_*}$ for synthetic BO tasks of both prior qualities and for PD1.
|
| 387 |
+
|
| 388 |
+
# C ADDITIONAL EXPERIMENTS
|
| 389 |
+
|
| 390 |
+
We provide complementary experiments to those introduced in the main paper. Firstly, we display results when ColaBO is used with a prior $\pi_{f*}$ over the optimal value in Sec. C.2. In Sec. C.3, we demonstrate ColaBO's extensibility to batch evaluations, seamlessly extending the work of Wilson et al. (2017).
|
| 391 |
+
|
| 392 |
+
# C.1 SYNTHETIC MATERN KERNEL EXPERIMENTS
|
| 393 |
+
|
| 394 |
+
We evaluate ColaBO and all baselines on the synthetic tasks with a Matern-5/2 kernel and the good user belief over the optimum. We note that roughly half of all $\pi \mathsf{BO}$ runs struggle with numerical instability from iteration 60 onwards, which produces stagnation in performance and infrequent gains.
|
| 395 |
+
|
| 396 |
+
# C.2 MAX-VALUE PRIORS
|
| 397 |
+
|
| 398 |
+
We evaluate ColaBO with priors over the optimal value $\pi_{f*}$ in Figure 10. For each task, we place a Gaussian prior over the optimal value, centering it exactly at the optimal value. Notably, such a prior
|
| 399 |
+
|
| 400 |
+

|
| 401 |
+
Figure 9: ColaBO on the synthetic tasks with a Matern kernel. Due to the difficulty of the RFF approximation, ColaBO-LogEI struggles on Hartmann (6D), and ColaBO performance is marginally worse on aggregate.
|
| 402 |
+
|
| 403 |
+

|
| 404 |
+
Figure 10: ColaBO with priors over the optimal value. Terminal performance substantially increases on 3 out of 5 benchmarks (Levy, Hartmann (6D), Stybtang), and is approximately preserved on the final two. ColaBO-MES improves marginally more than ColaBO-LogEI when utilizing a prior $\pi_{f*}$ over the optimal value.
|
| 405 |
+
|
| 406 |
+
substantially influences the exploration-exploitation trade-off; if the prior suggests that the incumbent has a value close to the optimal one, we are encouraged to exploit, as samples with well-above-optimal values in exploratory regions will be discarded. Conversely, we are heavily encouraged to explore if the current best observation holds a value that we believe is far from optimal. On Hartmann (6D), we can see this behavior at play: initial performance is poorer for ColaBO than for the respective baselines, presumably due to above-average exploration, but terminal performance is better.
|
| 407 |
+
|
| 408 |
+
# C.3 BATCH EVALUATIONS
|
| 409 |
+
|
| 410 |
+
We evaluate ColaBO with batch evaluations, utilizing the sequential greedy technique for MC acquisition functions from Wilson et al. (2018). A drop-off from sequential to batch evaluations is not evident from the plots, as the ordering between the sequential and batched variants varies with the benchmark. While unpredictable, we speculate that the altered exploration-exploitation trade-off provided by the batched acquisition function is occasionally beneficial in the presence of auxiliary user beliefs $\pi_{x_*}$ .
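Since ColaBO's belief-weighted acquisition functions are not reproduced here, the sketch below uses a vanilla MC acquisition function as a stand-in and only illustrates the sequential-greedy batch mechanism of Wilson et al. (2018) as exposed by BoTorch; all names and settings are illustrative.

```python
import torch
from botorch.acquisition import qExpectedImprovement
from botorch.models import SingleTaskGP
from botorch.optim import optimize_acqf

# Toy 6D surrogate standing in for the belief-weighted ColaBO model.
train_x = torch.rand(10, 6, dtype=torch.double)
train_y = torch.rand(10, 1, dtype=torch.double)
model = SingleTaskGP(train_x, train_y)
acqf = qExpectedImprovement(model, best_f=train_y.max())

bounds = torch.stack([torch.zeros(6), torch.ones(6)]).to(torch.double)
candidates, _ = optimize_acqf(
    acq_function=acqf,
    bounds=bounds,
    q=3,               # batch of three points, as in Figure 11
    num_restarts=8,
    raw_samples=128,
    sequential=True,   # greedy, one-point-at-a-time maximization of the joint batch
)
```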
|
| 411 |
+
|
| 412 |
+

|
| 413 |
+
Figure 11: $q = 1$ (sequential) and $q = 3$ batch evaluation on a subset of synthetic functions with well-located priors for ColaBO-LogEI and ColaBO-MES. Total function evaluations are plotted for both sequential and batched variants, leaving them with the same number of total function evaluations.
|
2024/A General Framework for User-Guided Bayesian Optimization/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:6194ea084d7b2f6e71c8469d8699c25d586f84844b0910cf66d8f69a4dedb84a
|
| 3 |
+
size 816054
|
2024/A General Framework for User-Guided Bayesian Optimization/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/A Hierarchical Bayesian Model for Few-Shot Meta Learning/835d9bdc-b1d0-4584-861e-7d0b76aaea95_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/A Hierarchical Bayesian Model for Few-Shot Meta Learning/835d9bdc-b1d0-4584-861e-7d0b76aaea95_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/A Hierarchical Bayesian Model for Few-Shot Meta Learning/835d9bdc-b1d0-4584-861e-7d0b76aaea95_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:119d8cd60f1f393f4ac95496116e44a2ccddaaf1effdfb7220cba980aa3fc2c0
|
| 3 |
+
size 1119865
|
2024/A Hierarchical Bayesian Model for Few-Shot Meta Learning/full.md
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/A Hierarchical Bayesian Model for Few-Shot Meta Learning/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:5398219809a30f2521ce3b4ee9d2107cee8732ff0c2a67f219a1ac054c5ff641
|
| 3 |
+
size 1565799
|
2024/A Hierarchical Bayesian Model for Few-Shot Meta Learning/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/A Lightweight Method for Tackling Unknown Participation Statistics in Federated Averaging/95f5339c-dfa7-49a6-b487-698cb4f07243_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/A Lightweight Method for Tackling Unknown Participation Statistics in Federated Averaging/95f5339c-dfa7-49a6-b487-698cb4f07243_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/A Lightweight Method for Tackling Unknown Participation Statistics in Federated Averaging/95f5339c-dfa7-49a6-b487-698cb4f07243_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:252124efe7da79aa18886f0f0cc495950847ba3af3a266c433b1e0cad6cbd026
|
| 3 |
+
size 3236977
|
2024/A Lightweight Method for Tackling Unknown Participation Statistics in Federated Averaging/full.md
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/A Lightweight Method for Tackling Unknown Participation Statistics in Federated Averaging/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:6cc0014f99a92d3781bc2d523ff28edb9ff2ba58b28394f7070c22fa65ed3d6c
|
| 3 |
+
size 2631606
|
2024/A Lightweight Method for Tackling Unknown Participation Statistics in Federated Averaging/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/A Mutual Information Perspective on Federated Contrastive Learning/05751880-dcb3-420c-98e6-29ac0eada528_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/A Mutual Information Perspective on Federated Contrastive Learning/05751880-dcb3-420c-98e6-29ac0eada528_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/A Mutual Information Perspective on Federated Contrastive Learning/05751880-dcb3-420c-98e6-29ac0eada528_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:888d4b406711530c66beead43a0c47e4bbc496146d3918684101fb728b0ee664
|
| 3 |
+
size 416770
|
2024/A Mutual Information Perspective on Federated Contrastive Learning/full.md
ADDED
|
@@ -0,0 +1,473 @@
|
| 1 |
+
# A MUTUAL INFORMATION PERSPECTIVE ON FEDERATED CONTRASTIVE LEARNING
|
| 2 |
+
|
| 3 |
+
Christos Louizos, Matthias Reisser, Denis Korzhenkov
|
| 4 |
+
|
| 5 |
+
Qualcomm AI Research*
|
| 6 |
+
|
| 7 |
+
{clouizos,mreisser,dkorzhen}@qti.QUALCOMM.com
|
| 8 |
+
|
| 9 |
+
# ABSTRACT
|
| 10 |
+
|
| 11 |
+
We investigate contrastive learning in the federated setting through the lens of SimCLR and multi-view mutual information maximization. In doing so, we uncover a connection between contrastive representation learning and user verification; by adding a user verification loss to each client's local SimCLR loss we recover a lower bound to the global multi-view mutual information. To accommodate the case where some labelled data are available at the clients, we extend our SimCLR variant to the federated semi-supervised setting. We see that a supervised SimCLR objective can be obtained with two changes: a) the contrastive loss is computed between datapoints that share the same label and b) we require an additional auxiliary head that predicts the correct labels from either of the two views. Along with the proposed SimCLR extensions, we also study how different sources of non-i.i.d.-ness can impact the performance of federated unsupervised learning through global mutual information maximization; we find that a global objective is beneficial for some sources of non-i.i.d.-ness but can be detrimental for others. We empirically evaluate our proposed extensions in various tasks to validate our claims and furthermore demonstrate that our proposed modifications generalize to other pretraining methods.
|
| 12 |
+
|
| 13 |
+
# 1 INTRODUCTION
|
| 14 |
+
|
| 15 |
+
For many machine-learning applications "at the edge", data is observed without labels. Consider for example pictures on smartphones, medical data measurements on smart watches or video-feeds from vehicles. Leveraging the information in those data streams traditionally requires labelling - e.g. asking users to confirm the identity of contacts in photo libraries, uploading road recordings to a central labelling entity - or the data might remain unused. Fundamentally, labelling data from the edge either happens at the edge or one accepts the communication overhead, privacy costs and infrastructure effort to transfer the data to a central entity and label it there. Labelling at the edge on the other hand either requires enough hardware resources to run a more powerful teacher model or it requires costly end-user engagement with inherent label noise and potential lack of expertise for labelling. Ideally, we can leverage unlabelled data directly at the edge by applying unsupervised learning, without the need for labels nor needing to transfer data to a central location.
|
| 16 |
+
|
| 17 |
+
In this work, we consider the case of federated unsupervised and semi-supervised learning through the lens of contrastive learning and multi-view mutual information (MI) maximization. The main challenges in this context are twofold: estimating the MI can be difficult because it often requires intractable marginal distributions (Poole et al., 2019). Additionally, the federated environment introduces extra complications, as the global MI objective does not readily decompose into a sum of local (client-wise) loss functions, thereby making it difficult to employ FedAvg (McMahan et al., 2017), the go-to algorithm in federated learning.
|
| 18 |
+
|
| 19 |
+
To combat these challenges, we introduce specific lower bounds to the global MI that decompose appropriately into local objectives, allowing for straightforward federated optimization. In doing so, we arrive at a principled extension of SimCLR (Chen et al., 2020) to the federated (semi-) unsupervised setting, while uncovering interesting properties. While each user can run vanilla SimCLR locally,
|
| 20 |
+
|
| 21 |
+

|
| 22 |
+
Figure 1: Graphical model of the assumed generative process under the various sources of non-i.i.d.-ness: label-skew, covariate shift and joint shift.
|
| 23 |
+
|
| 24 |
+
to establish a lower bound for the global MI, it is necessary to add a "user-verification" (UV) loss (Hosseini et al., 2021) for each view. When also dealing with labelled data, the local SimCLR loss on each client needs to contrast datapoints in the batch that belong to the same class, thus acting as a form of hard-negative mining. Additionally, besides the UV loss, a label loss is also required for each view. Along with the proposed extensions, we also consider how different sources of non-i.i.d.-ness can impact the performance of federated unsupervised learning through global MI maximization. We show that such an objective is beneficial for specific sources of non-i.i.d.-ness but it can be detrimental for others. Finally, while our theoretical analysis and model design was based on SimCLR, we demonstrate that they are generally applicable to other pretraining methods as well, such as spectral contrastive learning (HaoChen et al., 2021) and SimSiam (Chen & He, 2021).
|
| 25 |
+
|
| 26 |
+
# 2 FEDERATED MULTI-VIEW MUTUAL INFORMATION MAXIMIZATION
|
| 27 |
+
|
| 28 |
+
Mutual information (MI) has been a paramount tool for unsupervised representation learning; SimCLR (Chen et al., 2020), one of the most popular self-supervised learning methods, can be cast as learning an encoder model that maximizes the MI between two views of the same image (Wu et al., 2020). Applying SimCLR to the federated setting however is not straightforward, primarily because the global dataset is not accessible during optimization. In FL, each client only has a subset of the available dataset, and this subset is not necessarily representative of the global dataset due to differences in the data-generative process between clients. Various methods have been proposed to mitigate this effect via global dictionaries of representations (Zhang et al., 2020) or feature alignment regularizers (Wang et al., 2022). In this work, we adopt a different view and extend SimCLR to the federated setting through the lens of global multi-view MI maximization.
|
| 29 |
+
|
| 30 |
+
# 2.1 FEDERATED SIMCLR
|
| 31 |
+
|
| 32 |
+
Assume that we have access to an encoder $p_{\theta}(\mathbf{z}|\mathbf{x})$ with parameters $\theta$ . We would like to train this encoder, such that we maximize the MI between the representations of two views of the input $\mathbf{x} \in \mathbb{R}^{D_x}$ , namely, $\mathbf{z}_1, \mathbf{z}_2 \in \mathbb{R}^{D_z}$ , in the federated setting. Let $s \in \mathbb{N}$ denote the client ID and $p(s)$ a distribution over clients.
|
| 33 |
+
|
| 34 |
+
In federated learning (FL), the non-i.i.d.-ness can manifest in various ways: a) label skew, where each client $s$ has a different distribution over labels $p(y|s)$ but the same $p(\mathbf{x}|y)$ , the most common form of non-i.i.d.-ness assumed in the FL literature, b) covariate shift, where each client has a different distribution over features for a specific class $p(\mathbf{x}|y,s)$ , e.g. due to different mobile sensors, but the same $p(y)$ and c) joint shift, where both the distribution of $\mathbf{x}$ and of $y$ vary as a function of $s$ . This affects the assumed data-generating process of SimCLR representations accordingly, which we illustrate in Figure 1.
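To make the label-skew setting concrete, the snippet below shows one common way (our own illustration, not a procedure from this paper) of simulating client-specific label distributions $p(y|s)$ by partitioning a labelled dataset with Dirichlet-distributed class proportions, while $p(\mathbf{x}|y)$ stays identical across clients.

```python
import numpy as np


def label_skew_partition(labels: np.ndarray, num_clients: int, alpha: float = 0.5, seed: int = 0):
    """Split dataset indices across clients so that p(y|s) differs per client."""
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        # Proportion of class c that each client receives; small alpha -> stronger skew.
        props = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for s, part in enumerate(np.split(idx, cuts)):
            client_indices[s].extend(part.tolist())
    return client_indices
```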
|
| 35 |
+
|
| 36 |
+
Let $\mathrm{I}(x; y)$ denote the MI between $x, y$ and $\mathrm{I}(x; y|z)$ be the MI between $x, y$ conditioned on a third variable $z$ . Based on the aforementioned generative process and assuming that all labels are unknown, we start the derivation of federated SimCLR from the chain rule of MI:
|
| 37 |
+
|
| 38 |
+
$$
|
| 39 |
+
\mathrm{I}_{\theta}(\mathbf{z}_1; s, \mathbf{z}_2) = \mathrm{I}_{\theta}(\mathbf{z}_1; \mathbf{z}_2) + \mathrm{I}_{\theta}(\mathbf{z}_1; s \mid \mathbf{z}_2) = \mathrm{I}_{\theta}(\mathbf{z}_1; s) + \mathrm{I}_{\theta}(\mathbf{z}_1; \mathbf{z}_2 \mid s) \tag{1}
|
| 40 |
+
$$
|
| 41 |
+
|
| 42 |
+
$$
|
| 43 |
+
\underbrace{\mathrm{I}_{\theta}(\mathbf{z}_1; \mathbf{z}_2)}_{\text{Global multi-view MI}} = \underbrace{\mathrm{I}_{\theta}(\mathbf{z}_1; \mathbf{z}_2 \mid s)}_{\text{Local multi-view MI}} + \underbrace{\mathrm{I}_{\theta}(\mathbf{z}_1; s)}_{\text{Client ID MI}} - \underbrace{\mathrm{I}_{\theta}(\mathbf{z}_1; s \mid \mathbf{z}_2)}_{\text{Excess client ID MI}}. \tag{2}
|
| 44 |
+
$$
|
| 45 |
+
|
| 46 |
+
We see that the multi-view MI in the federated setting decomposes into three terms; we want to maximize the average, over the clients, local MI between the representations of the two views $\mathbf{z}_1$ , $\mathbf{z}_2$ , along with the MI between the representation $\mathbf{z}_1$ and the client ID $s$ while simultaneously minimizing the additional information $\mathbf{z}_1$ carries about $s$ conditioned on $\mathbf{z}_2$ . Such MI decompositions have
|
| 47 |
+
|
| 48 |
+
also been considered in Sordoni et al. (2021) for improving MI estimation in a different context. Unfortunately, in our case these terms require access to potentially intractable or hard to obtain distributions, so we will resort to easy to compute and evaluate variational bounds.
|
| 49 |
+
|
| 50 |
+
For the first term, i.e., the client conditional MI between the two views, we provide proposition 1 which uses the standard InfoNCE bound (Poole et al., 2019), leading to an objective that decomposes into a sum of local terms, one for each client, thus allowing for federated optimization with FedAvg.
|
| 51 |
+
|
| 52 |
+
Proposition 1. Let $s \in \mathbb{N}$ denote the user $ID$ , $\mathbf{x} \in \mathbb{R}^{D_x}$ the input and $\mathbf{z}_1, \mathbf{z}_2 \in \mathbb{R}^{D_z}$ the latent representations of the two views of $\mathbf{x}$ given by the encoder with parameters $\theta$ . Given a critic function $f: \mathbb{R}^{D_z} \times \mathbb{R}^{D_z} \to \mathbb{R}$ , we have that
|
| 53 |
+
|
| 54 |
+
$$
|
| 55 |
+
\mathrm{I}_{\theta}(\mathbf{z}_1; \mathbf{z}_2 \mid s) \geq \mathbb{E}_{p(s) p_{\theta}(\mathbf{z}_1, \mathbf{z}_2 \mid s)_{1:K}}\left[\frac{1}{K} \sum_{k=1}^{K} \log \frac{\exp\left(f\left(\mathbf{z}_{1k}, \mathbf{z}_{2k}\right)\right)}{\frac{1}{K} \sum_{j=1}^{K} \exp\left(f\left(\mathbf{z}_{1j}, \mathbf{z}_{2k}\right)\right)}\right]. \tag{3}
|
| 56 |
+
$$
|
| 57 |
+
|
| 58 |
+
All of the proofs can be found in the appendix. This corresponds to a straightforward application of SimCLR to the federated setting where each client performs SimCLR training locally, i.e., clients contrast against their local dataset instead of the global dataset. We will refer to this objective as Local SimCLR.
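For reference, a minimal PyTorch sketch of the per-client InfoNCE objective in Eq. (3) follows; it is our own illustration, assuming the standard SimCLR cosine-similarity critic with temperature $\tau$ rather than a specific critic from the paper.

```python
import torch
import torch.nn.functional as F


def local_simclr_loss(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """z1, z2: (K, D_z) representations of the two views of a client's K datapoints."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau  # logits[j, k] plays the role of f(z_{1j}, z_{2k})
    targets = torch.arange(z1.size(0), device=z1.device)
    # For each k, softmax over j with the matching pair as the target:
    # the negative of the InfoNCE bound in Eq. (3), up to the additive log K constant.
    return F.cross_entropy(logits.t(), targets)
```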
|
| 59 |
+
|
| 60 |
+
In order to optimize the global MI instead of the local MI, we need to address the two remaining terms of equation 2. The first term, $\mathrm{I}_{\theta}(\mathbf{z}_1;s)$ , requires information from the entire federation, i.e., $p_{\theta}(\mathbf{z}_1)$ which is intractable. However, with lemma 2.1 we show that by introducing a "client classification" task, we can form a simple and tractable lower bound to this term.
|
| 61 |
+
|
| 62 |
+
Lemma 2.1. Let $s \in \mathbb{N}$ denote the client $ID$ , $\mathbf{x} \in \mathbb{R}^{D_x}$ the input and $\mathbf{z}_1 \in \mathbb{R}^{D_z}$ the latent representation of a view of $\mathbf{x}$ given by the encoder with parameters $\theta$ . Let $\phi$ denote the parameters of a client classifier $r_{\phi}(s|\mathbf{z}_1)$ that predicts the client $ID$ from this specific representation and let $H(s)$ be the entropy of the client distribution $p(s)$ . We have that
|
| 63 |
+
|
| 64 |
+
$$
|
| 65 |
+
\mathrm{I}_{\theta}(\mathbf{z}_1; s) \geq \mathbb{E}_{p(s) p_{\theta}(\mathbf{z}_1 \mid s)}\left[\log r_{\phi}(s \mid \mathbf{z}_1)\right] + \mathrm{H}(s) \tag{4}
|
| 66 |
+
$$
|
| 67 |
+
|
| 68 |
+
With this bound we avoid the need for the intractable marginal $p_{\theta}(\mathbf{z}_1)$ and highlight an interesting connection between self-supervised learning in FL and user-verification models (Yu et al., 2020; Hosseini et al., 2021). For the last term of equation 2, we need an upper bound to maintain an overall lower bound to $\mathrm{I}_{\theta}(\mathbf{z}_1;\mathbf{z}_2)$ . Upper bounds to the MI can be problematic as they require explicit densities (Poole et al., 2019). Fortunately, in our specific case, we show in lemma 2.2 that with an additional client classification task for the second view, we obtain a simple and tractable upper bound.
|
| 69 |
+
|
| 70 |
+
Lemma 2.2. Let $s \in \mathbb{N}$ denote the user $ID$ , $\mathbf{x} \in \mathbb{R}^{D_x}$ the input and $\mathbf{z}_1, \mathbf{z}_2 \in \mathbb{R}^{D_z}$ the latent representations of the views of $\mathbf{x}$ given by the encoder with parameters $\theta$ . Let $\phi$ denote the parameters of a client classifier $r_{\phi}(s|\mathbf{z}_2)$ that predicts the client $ID$ from the representations. We have that
|
| 71 |
+
|
| 72 |
+
$$
|
| 73 |
+
\mathrm{I}_{\theta}(\mathbf{z}_1; s \mid \mathbf{z}_2) \leq -\mathbb{E}_{p(s) p_{\theta}(\mathbf{z}_2 \mid s)}\left[\log r_{\phi}(s \mid \mathbf{z}_2)\right] \tag{5}
|
| 74 |
+
$$
|
| 75 |
+
|
| 76 |
+
By combining our results, we arrive at the following lower bound for the global MI that decomposes into a sum of local objectives involving the parameters $\theta, \phi$ . We dub it as Federated SimCLR.
|
| 77 |
+
|
| 78 |
+
$$
|
| 79 |
+
\begin{aligned} \mathrm{I}_{\theta}(\mathbf{z}_1; \mathbf{z}_2) \geq\; & \mathbb{E}_{p(s) p_{\theta}(\mathbf{z}_1, \mathbf{z}_2 \mid s)_{1:K}}\Bigg[\frac{1}{K} \sum_{k=1}^{K} \log \frac{\exp(f(\mathbf{z}_{1k}, \mathbf{z}_{2k}))}{\frac{1}{K} \sum_{j=1}^{K} \exp(f(\mathbf{z}_{1j}, \mathbf{z}_{2k}))} \\ & + \log r_{\phi}(s \mid \mathbf{z}_{1k}) + \log r_{\phi}(s \mid \mathbf{z}_{2k})\Bigg] + \mathrm{H}(s). \end{aligned} \tag{6}
|
| 80 |
+
$$
|
| 81 |
+
|
| 82 |
+
In this way, Federated SimCLR allows for a straightforward optimization of $\theta, \phi$ with standard FL optimization methods, such as Reddi et al. (2020), and inherits their convergence guarantees. Furthermore, it is intuitive; each client performs SimCLR locally, while simultaneously training a shared classifier that predicts their user ID from both views. The additional computational overhead of this classifier is relatively minor compared to the encoder itself, making it appropriate for resource-constrained devices.
Optimizing the user-verification loss For the client ID loss we use a single linear layer followed by a softmax, with three important modifications, since the local optimization of the client ID loss is prone to bad optima due to each client only having "labels" from "a single class" (that of the client optimizing it) (Yu et al., 2020): a) the linear layer does not have a bias, as a bias would make the local optimization of the UV loss trivial and would not meaningfully affect the encoder, b) both the inputs to the linear layer and the linear layer weights are constrained to have unit norm and, c) each client locally optimizes only their associated weight vector in the linear classifier while all of the others are kept fixed. In this way each client needs to find their "own cluster center" in order to optimize the UV loss locally. These centers need to be sufficiently far from the cluster centers of the other clients, which each client receives from the server and keeps fixed throughout local optimization. A sketch of this classifier is given below.
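
A minimal PyTorch sketch of these three modifications follows; the class and function names, and the use of a gradient hook to keep the other clients' vectors fixed, are illustrative implementation choices rather than the exact mechanism used in our experiments.

```python
import torch
import torch.nn.functional as F

class UserVerificationHead(torch.nn.Module):
    """Sketch of the client classifier: a bias-free linear layer where both the
    inputs and the per-client weight vectors are constrained to unit norm."""
    def __init__(self, dim, num_clients):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(num_clients, dim))

    def forward(self, z):
        # Cosine-similarity logits between the embedding and every client's "cluster center".
        return F.normalize(z, dim=1) @ F.normalize(self.weight, dim=1).t()

def freeze_other_clients(head, client_id):
    # Locally, only this client's weight vector should be updated; the hook masks
    # out the gradient rows belonging to the other clients, keeping them fixed.
    def mask(grad):
        keep = torch.zeros_like(grad)
        keep[client_id] = 1.0
        return grad * keep
    return head.weight.register_hook(mask)
```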
Effects of non-i.i.d.-ness on downstream task performance Given access to both the global and local MI objectives, we now want to understand how the type of non-i.i.d.-ness determines which objective is the better choice. To answer this question, we first show in proposition 2 that in the case of label skew, the client classification objective is a lower bound to the MI between the representations $\mathbf{z}_1, \mathbf{z}_2$ and the unavailable label $y$.
Proposition 2. Consider the label skew data-generating process for federated SimCLR from Figure 1 with $s \in \mathbb{N}$ denoting the user ID with $\mathrm{H}(s)$ being the entropy of $p(s)$ , $\mathbf{x} \in \mathbb{R}^{D_x}$ the input, $\mathbf{z}_1, \mathbf{z}_2 \in \mathbb{R}^{D_z}$ the latent representations of the two views of $\mathbf{x}$ given by the encoder with parameters $\theta$ . Let $y$ be the label and let $r_{\phi}(s|\mathbf{z}_i)$ be a model with parameters $\phi$ that predicts the user ID from the latent representation $\mathbf{z}_i$ . In this case, we have that

$$
\mathrm {I} _ {\theta} (\mathbf {z} _ {1}; y) + \mathrm {I} _ {\theta} (\mathbf {z} _ {2}; y) \geq \mathbb {E} _ {p (s) p _ {\theta} (\mathbf {z} _ {1}, \mathbf {z} _ {2} | s)} \left[ \log r _ {\phi} (s | \mathbf {z} _ {1}) + \log r _ {\phi} (s | \mathbf {z} _ {2}) \right] + 2 \mathrm {H} (s). \tag {7}
$$

Therefore, when the source of non-i.i.d.-ness is heavily dependent on the actual downstream task, the additional client classification objective stemming from the global MI bound is beneficial as it is a good proxy for the thing we care about. In the case of covariate shift, we know that the source of non-i.i.d.-ness is independent of the label, i.e., $\mathrm{I}(y;s) = 0$ , so the additional client classification term can actually become detrimental; the representation will encode information irrelevant for the downstream task and, depending on the capacity of the network and underlying trade-offs, can lead to worse task performance. In this case, optimizing the local MI is expected to work better, as the client specific information (i.e., the irrelevant information) is not encouraged in the representations.
# 2.2 FEDERATED SEMI-SUPERVISED SIMCLR
In practice, labeled data for a specific task are sometimes available. These could for example constitute a curated dataset at the server or a small labelled subset of data on each client. In this case, it will generally be beneficial for the downstream task if the objective takes these labels into account. To this end, we can use the following label-dependent expression for the client conditional MI

$$
\mathrm {I} _ {\theta} \left(\mathbf {z} _ {1}; \mathbf {z} _ {2} \mid s\right) = \mathrm {I} _ {\theta} \left(\mathbf {z} _ {1}; y \mid s\right) + \mathrm {I} _ {\theta} \left(\mathbf {z} _ {1}, \mathbf {z} _ {2} \mid y, s\right) - \mathrm {I} _ {\theta} \left(\mathbf {z} _ {1}; y \mid s, \mathbf {z} _ {2}\right). \tag {8}
$$

Therefore, once we obtain a label-specific lower bound for this quantity, it will be straightforward to translate it to a label-specific lower bound for the global MI by adding back the user-verification losses for the two views. For the following we will assume that we have an underlying classification task, hence a label $y \in \mathbb{N}$ .
For the MI between the two views $\mathbf{z}_1, \mathbf{z}_2$ conditioned on the label $y$ and client $s$ , we can make use of proposition 1 by treating $s, y$ as the conditioning set. In this case, we again use the InfoNCE loss, with the exception that we now contrast between datapoints that also belong to the same class,

$$
\mathrm {I} _ {\theta} \left(\mathbf {z} _ {1}; \mathbf {z} _ {2} \mid y, s\right) \geq \mathbb {E} _ {p (s, y) p _ {\theta} \left(\mathbf {z} _ {1}, \mathbf {z} _ {2} \mid y, s\right) _ {1: K}} \left[ \frac {1}{K} \sum_ {k = 1} ^ {K} \log \frac {\exp \left(f \left(\mathbf {z} _ {1 k} , \mathbf {z} _ {2 k}\right)\right)}{\frac {1}{K} \sum_ {j = 1} ^ {K} \exp \left(f \left(\mathbf {z} _ {1 j} , \mathbf {z} _ {2 k}\right)\right)} \right]. \tag {9}
$$

For the other two terms that involve the label $y$ we can proceed in a similar manner to the client ID $s$ . For the MI between $\mathbf{z}_1$ and $y$ conditioned on $s$ , as $y$ is also discrete, we can make use of lemma 2.1 by treating $y$ as $s$ . Therefore, we introduce a classifier $r_{\phi}(y|\mathbf{z}_1)$ and obtain the following lower bound

$$
\mathrm {I} _ {\theta} (\mathbf {z} _ {1}; y | s) \geq \mathbb {E} _ {p (s) p _ {\theta} (y, \mathbf {z} _ {1} | s)} \left[ \log r _ {\phi} (y | \mathbf {z} _ {1}) \right] + \mathrm {H} (y | s), \tag {10}
$$

Figure 2: Overview of the SimCLR architectures considered. Local SimCLR (left): each client optimizes a contrastive loss on their own data, thus the federation implicitly optimizes a lower bound to $\mathrm{I}(\mathbf{z}_1;\mathbf{z}_2|s)$ . Federated SimCLR (center): along with the contrastive loss on their own data, each client also optimizes a client classifier, thus the federation implicitly optimizes a lower bound to $\mathrm{I}(\mathbf{z}_1;\mathbf{z}_2)$ . Supervised federated SimCLR (right): a label-dependent variant of federated SimCLR that encourages clustering according to the label while also optimizing a lower bound to $\mathrm{I}(\mathbf{z}_1;\mathbf{z}_2)$ .
where $\mathrm{H}(y|s)$ denotes the entropy of the label marginal at the client, $p(y|s)$ . For the MI between $\mathbf{z}_1$ and $y$ conditioned on $\mathbf{z}_2$ and $s$ we make use of lemma 2.2 and get the following upper bound

$$
\mathrm {I} _ {\theta} (\mathbf {z} _ {1}; y | \mathbf {z} _ {2}, s) \leq - \mathbb {E} _ {p (s, y) p _ {\theta} (\mathbf {z} _ {2} | y, s)} \left[ \log r _ {\phi} (y | \mathbf {z} _ {2}) \right]. \tag {11}
$$

Putting everything together, we arrive at the following label-dependent lower bound for local SimCLR

$$
\begin{array}{l} \mathrm {I} _ {\theta} (\mathbf {z} _ {1}; \mathbf {z} _ {2} | s) \geq \mathbb {E} _ {p (s, y) p _ {\theta} (\mathbf {z} _ {1}, \mathbf {z} _ {2} | y, s) _ {1: K}} \left[ \frac {1}{K} \sum_ {k = 1} ^ {K} \log \frac {\exp (f (\mathbf {z} _ {1 k} , \mathbf {z} _ {2 k}))}{\frac {1}{K} \sum_ {j = 1} ^ {K} \exp (f (\mathbf {z} _ {1 j} , \mathbf {z} _ {2 k}))} \right. \\ \left. + \log r _ {\phi} (y | \mathbf {z} _ {1 k}) + \log r _ {\phi} (y | \mathbf {z} _ {2 k}) + \mathrm {H} (y | s) \right], \tag {12} \\ \end{array}
$$

which decomposes into intuitive terms; we are performing InfoNCE between the views of the datapoints that belong to the same class and client, while simultaneously trying to predict the class from the representations of both views. To transition from a label-dependent bound for the local SimCLR to a label-dependent bound of the federated SimCLR, it suffices to add the client classifiers

$$
\begin{array}{l} \mathrm {I} _ {\theta} (\mathbf {z} _ {1}; \mathbf {z} _ {2}) \geq \mathbb {E} _ {p (s, y) p _ {\theta} (\mathbf {z} _ {1}, \mathbf {z} _ {2} | y, s) _ {1: K}} \left[ \frac {1}{K} \sum_ {k = 1} ^ {K} \log \frac {\exp (f (\mathbf {z} _ {1 k} , \mathbf {z} _ {2 k}))}{\frac {1}{K} \sum_ {j = 1} ^ {K} \exp (f (\mathbf {z} _ {1 j} , \mathbf {z} _ {2 k}))} + \log r _ {\phi} (s | \mathbf {z} _ {1 k}) \right] \\ \left. + \log r _ {\phi} (s | \mathbf {z} _ {2 k}) + \log r _ {\phi} (y | \mathbf {z} _ {1 k}) + \log r _ {\phi} (y | \mathbf {z} _ {2 k}) + \mathrm {H} (y | s) \right] + \mathrm {H} (s). \tag {13} \\ \end{array}
$$

Figure 2 visualizes all of the SimCLR architectures considered in this work.
The case of unlabelled data The primary motivation of the previous discussion is to tackle the semi-supervised case, i.e., the case where some clients do not have access to all labels. A simple way to handle the unlabelled data is to fall back to the bound of proposition 1 for the conditional MI whenever we do not have access to labels. In this way, each client performs a form of "more difficult" contrastive learning for their labelled data, contrasting against datapoints that are more semantically similar (i.e., that share the same class) while simultaneously trying to predict the correct class, whereas for their unlabelled data they perform standard contrastive learning, as sketched below.
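
As a rough illustration, the per-batch objective could be organised as follows, assuming labels `y` with `-1` marking unlabelled examples and an auxiliary label classifier; all names, the temperature-scaled cosine critic and the per-class loop are hypothetical choices, not the exact implementation.

```python
import torch
import torch.nn.functional as F

def semi_supervised_contrastive_loss(z1, z2, y, num_classes, label_clf, temperature=0.5):
    """Labelled pairs (y >= 0) are contrasted only against pairs of the same class and
    classified; unlabelled pairs (y == -1) fall back to standard InfoNCE."""
    sim = F.normalize(z1, dim=1) @ F.normalize(z2, dim=1).t() / temperature
    loss = sim.new_zeros(())

    def info_nce(idx):
        # Contrast the selected pairs only among themselves.
        sub = sim[idx][:, idx]
        return F.cross_entropy(sub, torch.arange(len(idx), device=sim.device))

    unlabelled = (y < 0).nonzero(as_tuple=True)[0]
    if len(unlabelled) > 1:
        loss = loss + info_nce(unlabelled)
    for c in range(num_classes):
        idx = (y == c).nonzero(as_tuple=True)[0]
        if len(idx) > 1:
            loss = loss + info_nce(idx)
    labelled = y >= 0
    if labelled.any():
        loss = loss + F.cross_entropy(label_clf(z1[labelled]), y[labelled])
        loss = loss + F.cross_entropy(label_clf(z2[labelled]), y[labelled])
    return loss
```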
Label-dependent vs label-independent bound Even though both our label-dependent and label-independent bounds are lower bounds of the MI between the representations of the two views, the former should be preferred if labels are available. This is because the label independent one can be satisfied without necessarily clustering the representations semantically, whereas the label dependent one directly encourages clustering according to the label through the additional classification losses, so it is expected to perform better for downstream tasks.
# 3 RELATED WORK
Unsupervised learning in the federated context has gained significant attention in recent years. On the contrastive learning side, Zhang et al. (2020) introduces FedCA, a SimCLR variant for the federated setting. The main idea is that the representations between the clients can become misaligned due to the non-i.i.d. nature of FL. The authors then introduce a global dictionary of representations which is shared between all participants and is used to align the representation spaces. One of the main drawbacks of this method is that it requires the transmission of data representations of clients, which leads to reduced privacy. Compared to a global dictionary module, our federated SimCLR aligns the representations of the clients through the additional UV loss component, requiring the communication of just some additional model parameters and not raw representations. Dong & Voiculescu (2021) introduces FedMoCo, an extension of MoCo (He et al., 2020) to the federated setting. Similar to FedCA, FedMoCo shares additional client metadata, i.e., moments of the local feature distributions, from the clients to the server, thus leading to reduced privacy. Li et al. (2023a) also extends MoCo to the federated setting; however, instead of using a FedAvg type of protocol, the authors employ a split learning (Poirot et al., 2019) protocol, which leads to reduced compute requirements at the edge but also requires communicating raw representations of the local data to the server. Finally, the closest to our work is FeatARC (Wang et al., 2022), where the authors also explore the effects of non-i.i.d.-ness when training a model with SimCLR in the federated setting; they further propose an extension that uses multiple models and encourages feature alignment with an additional loss function. In contrast to FeatARC, where the feature alignment loss is added ad-hoc to SimCLR, from our MI perspective on SimCLR a feature alignment loss manifests naturally via an additional user-verification loss when optimizing a lower bound to the global MI.
On the non-contrastive learning side, Makhija et al. (2022) introduces Hetero-SSFL, an extension of BYOL (Grill et al., 2020) and SimSiam (Chen & He, 2021) to the federated setting where each client can have their own encoder model but, in order to align the local models, an additional public dataset is required. Zhuang et al. (2022) introduces FedEMA, where a hyperparameter of BYOL is adapted in a way that takes into account the divergence of the local and global models. In contrast to these methods, which require several tricks for improved performance, i.e., moving-average updates, custom types of aggregation and stop-gradient operations, our federated SimCLR method works by just optimizing a straightforward loss function with the de facto standard, FedAvg. On a different note, Lu et al. (2022) proposes to train a model with pseudo-labels for the unlabelled data and then recover the model for the desired labels via a post-processing step. Finally, Lubana et al. (2022) proposes an unsupervised learning framework through simultaneous local and global clustering, which requires communicating client data representations, i.e., the cluster centroids, to the server.
On the federated semi-supervised learning side, most works rely on generating pseudo-labels for the unlabelled examples. Jeong et al. (2020) proposes FedMatch, an adaptation of FixMatch (Sohn et al., 2020) to the federated setting by adding one more consistency loss that encourages the models learned on each client to output similar predictions for the local data. The authors also propose a pseudo-labelling strategy that takes into account the agreement of client models and a parameter decomposition strategy that allocates separate parameters to be optimized on unlabelled and labelled data. In contrast, our semi-supervised objectives are simpler, do not rely on pseudo-labels (which introduce additional hyper-parameters for filtering low-confidence predictions) and do not require communicating client specific models among the federation. Liang et al. (2022) proposes a student-teacher type scheme for training on unlabelled data, where consistency regularization is applied. The teacher model is an exponential moving average of the student and a novel aggregation mechanism is introduced. Our proposed methods for semi-supervised learning could potentially also benefit from better aggregation mechanisms, but we leave such an exploration for future work. Finally, Kim et al. (2022) introduces ProtoFSSL, which incorporates knowledge from other clients in the local training via sharing prototypes between the clients. While such prototypes do improve performance, they also
reveal more information about the local data of each client, thus reducing privacy. In contrast, our federated semi-supervised framework does not rely on sharing prototypes between the clients.
# 4 EXPERIMENTS
Our experimental evaluation consists of unsupervised and semi-supervised experiments, where for the latter each client has labels for $10\%$ of their data. To quantify the quality of the learned representations, we adapt the classical evaluation pipeline of training a linear probe (LP) so that it is in line with common assumptions of self-supervised learning. In the unsupervised case, we report the accuracy of an LP trained on the union of the clients' data with all labels available, as this corresponds to the traditional non-federated evaluation pipeline. For the semi-supervised case, we train an LP on top of the representations of the clients' labelled training data (which is a subset of the full training set) and then report its test accuracy. When evaluating at multiple points during training, e.g., for learning curves, we initialize the LP from the final parameters of the previous evaluation. Furthermore, as we mention in section 2.1, the nature of non-i.i.d. data in FL can manifest in various ways: label skew, covariate shift and joint shift, i.e., a combination of the two. We therefore evaluate, besides label skew (the predominant type of non-i.i.d.-ness assumed in the FL literature), covariate shift by creating rotated versions of CIFAR 10 and CIFAR 100, as well as a joint shift case where both sources of non-i.i.d.-ness are present. For CIFAR 10 we consider 100 clients whereas for CIFAR 100 we consider 500 clients. For the encoder we use a ResNet18 architecture adapted for the CIFAR datasets where, following Hsieh et al. (2020), we replace batch normalization (Ioffe & Szegedy, 2015) with group normalization (Wu & He, 2018). A sketch of the LP evaluation is given below.
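
For reference, a simplified sketch of the LP evaluation follows; the optimizer, learning rate and number of epochs here are placeholders rather than the exact values used in our experiments.

```python
import torch
import torch.nn.functional as F

def linear_probe_accuracy(encoder, probe, labelled_loader, test_loader, device, epochs=10):
    """Train a linear probe on frozen representations and report test accuracy.
    The probe can be warm-started across evaluations, as described above."""
    encoder.eval()
    opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
    for _ in range(epochs):
        for x, y in labelled_loader:
            x, y = x.to(device), y.to(device)
            with torch.no_grad():
                h = encoder(x)          # frozen features
            opt.zero_grad()
            F.cross_entropy(probe(h), y).backward()
            opt.step()
    correct, total = 0, 0
    with torch.no_grad():
        for x, y in test_loader:
            pred = probe(encoder(x.to(device))).argmax(dim=1).cpu()
            correct += (pred == y).sum().item()
            total += y.numel()
    return correct / total
```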
In order to demonstrate the general usefulness of our theoretical results and model design stemming from our MI perspective, we include two more methods in our evaluation besides SimCLR. The first one is spectral contrastive learning (HaoChen et al., 2021) (dubbed Spectral CL), another instance of contrastive learning, and the other is SimSiam (Chen & He, 2021), a non-contrastive method. For both of these methods, we consider a "local" variant, where each of the losses is optimized locally and Reddi et al. (2020) is applied to the parameters, as well as, based on the intuition from our federated SimCLR, a "global" variant where the same UV loss component of federated SimCLR is added to the baselines. As we show in proposition 2, such an auxiliary task is beneficial in the case of label skew in general. Furthermore, we also extend these baselines to the semi-supervised setting. Based on the insights from our label-dependent MI bounds for SimCLR, we consider label-dependent variants of SimSiam and Spectral CL where, when labels are available, the unsupervised losses are evaluated between elements that share the same class and a classification loss for the two views is added to the overall loss function.
Unsupervised setting The results in the unsupervised setting can be seen in Table 1. In the case of label skew, adding our user-verification loss to each of the local losses leads to (sometimes dramatic) improvements in all cases. This is to be expected, as in this case the mutual information between the labels and the client ID, $\mathrm{I}(y; s)$ , is quite high, so the UV loss acts as a good proxy for the downstream task. For SimCLR we observe a $\sim 6\%$ improvement on CIFAR 10/100 and on Spectral CL we observe $\sim 11\%$ and $\sim 8\%$ respectively. SimSiam type of methods generally underperformed compared to SimCLR and Spectral CL, and we believe this is due to representation collapse, especially given that in our setting we employ group normalization instead of batch-normalization. On covariate shift, we now see that the situation is flipped; as in this case $\mathrm{I}(y; s) = 0$ , local SimCLR / Spectral CL are doing better compared to their global counterparts that include the UV loss. Both local SimCLR and Spectral CL perform better by $\sim 1 - 2\%$ and $\sim 2 - 4\%$ on CIFAR 10 and CIFAR 100 respectively, with local SimCLR providing the better overall performance. Finally, on the joint shift case, the label skew is strong enough to allow for improvements with the additional UV loss components in most cases; for SimCLR there is an improvement of $\sim 4 - 5\%$ and for Spectral CL there is a $\sim 8\%$ improvement for CIFAR 10 but a drop of $\sim 8\%$ for CIFAR 100. We attribute the latter to the overall instability of Spectral CL in our CIFAR 100 experiments, explained by the large standard error.
Overall, we observe that the results are consistent with our expectations; when the source of non-i.i.d.-ness in the federated setting is strongly correlated with the downstream task, optimizing a "global" objective, such as $\mathrm{I}(\mathbf{z}_1;\mathbf{z}_2)$, is beneficial, as the additional UV term serves as a good proxy for the downstream task. This intuition also generalizes to one of our baselines as, e.g., even Spectral CL benefits from the addition of the UV loss in such settings. In the absence of such correlation, the simple local SimCLR / Spectral CL variants do better since they do not encode information in the representations that is irrelevant for the downstream task.
Table 1: Test set performance (%) on the unsupervised setting along with standard error over 5 seeds. Clients' data is assumed to be fully annotated for LP fine-tuning in the unsupervised case.
<table><tr><td></td><td colspan="3">CIFAR 10</td><td colspan="3">CIFAR 100</td></tr><tr><td>Method</td><td>Label skew</td><td>Covariate shift</td><td>Joint shift</td><td>Label skew</td><td>Covariate shift</td><td>Joint shift</td></tr><tr><td>Local SimCLR</td><td>79.4±0.2</td><td>74.3±0.3</td><td>71.0±0.4</td><td>42.2±0.2</td><td>41.2±0.2</td><td>38.1±0.3</td></tr><tr><td>Federated SimCLR</td><td>85.0±0.2</td><td>73.8±0.2</td><td>74.8±0.5</td><td>48.5±0.1</td><td>39.5±0.2</td><td>43.1±0.2</td></tr><tr><td>Spectral CL</td><td>76.5±1.1</td><td>73.5±0.4</td><td>68.2±0.6</td><td>33.3±6.0</td><td>33.6±2.3</td><td>29.6±6.2</td></tr><tr><td>Spectral CL + UV</td><td>87.8±0.3</td><td>71.7±0.5</td><td>76.6±0.6</td><td>41.0±6.4</td><td>29.3±4.8</td><td>21.5±6.2</td></tr><tr><td>SimSiam</td><td>40.0±0.5</td><td>39.9±0.3</td><td>39.6±0.3</td><td>16.9±0.3</td><td>16.6±0.4</td><td>16.9±0.4</td></tr><tr><td>SimSiam + UV</td><td>35.4±0.4</td><td>35.4±0.2</td><td>34.5±0.3</td><td>16.5±0.2</td><td>16.5±0.3</td><td>16.3±0.5</td></tr><tr><td>Supervised</td><td>89.6±0.1</td><td>78.3±0.4</td><td>76.3±1.1</td><td>59.2±0.2</td><td>47.9±0.2</td><td>43.9±0.3</td></tr></table>
Table 2: Test set performance (%) on the semi-supervised setting with $10\%$ labelled data on each client along with standard error over 5 seeds. We use the corresponding labelled subset for the LP.
<table><tr><td></td><td colspan="3">CIFAR 10</td><td colspan="3">CIFAR 100</td></tr><tr><td>Method</td><td>Label skew</td><td>Covariate shift</td><td>Joint shift</td><td>Label Skew</td><td>Covariate shift</td><td>Joint shift</td></tr><tr><td>Local SimCLR</td><td>74.5±0.3</td><td>49.1±1.3</td><td>45.8±1.4</td><td>30.3±0.2</td><td>15.1±0.4</td><td>13.1±0.3</td></tr><tr><td>Federated SimCLR</td><td>78.0±0.2</td><td>50.3±1.1</td><td>49.9±1.4</td><td>34.5±0.3</td><td>14.8±0.3</td><td>14.6±0.3</td></tr><tr><td>Spectral CL</td><td>74.2±0.3</td><td>48.0±0.7</td><td>45.4±1.5</td><td>30.1±0.2</td><td>14.1±0.4</td><td>12.3±0.3</td></tr><tr><td>Spectral CL + UV</td><td>79.6±0.3</td><td>49.7±1.0</td><td>49.8±1.1</td><td>34.0±0.2</td><td>13.7±0.3</td><td>13.6±0.4</td></tr><tr><td>SimSiam</td><td>75.3±0.4</td><td>46.8±0.7</td><td>40.5±0.9</td><td>30.7±0.2</td><td>13.4±0.3</td><td>12.8±0.3</td></tr><tr><td>SimSiam + UV</td><td>80.4±0.2</td><td>50.0±1.2</td><td>44.3±1.0</td><td>34.3±0.1</td><td>13.6±0.3</td><td>14.0±0.4</td></tr><tr><td>Supervised</td><td>75.1±0.2</td><td>48.1±0.9</td><td>42.7±1.7</td><td>29.6±0.3</td><td>12.6±0.2</td><td>12.2±0.1</td></tr></table>
Semi-supervised setting Our semi-supervised results with $10\%$ labelled data in Table 2 show interesting observations. Overall, we improve performance with semi-supervised training relative to purely supervised training on the labelled subset of the data. On CIFAR 10, we notice that our semi-supervised models with the UV loss do better than the local variants on all sources of non-i.i.d.-ness, even in the case of covariate shift. Despite the limited quantity of labels available, we believe that the encoders possessed sufficient capacity to both retain and separate the label-specific and label-independent (e.g., rotation) information. Consequently, the downstream LP could accurately use the label-specific portion of the representations for its predictions. SimSiam does much better in this setting, as the supervised objective prevented representation collapse, achieving the best performance on label skew when we add the UV loss, whereas Federated SimCLR does best on the joint shift.
# 4.1 ABLATION STUDIES
In this section we perform additional experiments in order to investigate the behaviour of local and federated SimCLR under different settings. We adopt our CIFAR 10 setting with 100 clients and strong $(\alpha = 0.1)$ joint shift, unless mentioned otherwise.
Amount of non-i.i.d.-ness For the first set of experiments we investigate how the amount of non-i.i.d.-ness affects the local and federated SimCLR performance with $E = 1$ . We adopt the joint shift setting and perform experiments with different strengths for each source of non-i.i.d.-ness. The results can be seen in Figure 3a where we have an interesting observation; federated SimCLR does better the higher the amount of label skew non-i.i.d.-ness is, in fact even surpassing the performance of local SimCLR on i.i.d. data. This can be explained from our proposition 2. As the amount of label skew increases, the client ID carries more information about $y$ , thus $\mathrm{I}_{\theta}(\mathbf{z}_1,y|s)$ becomes lower and the lower bound tighter. On the flipside, when there is strong covariate shift and not enough label-skew, we observe that local SimCLR has consistently better performance.
Figure 3: CIFAR 10 ablation studies. (a) Performance of local and federated SimCLR as a function of the non-i.i.d.-ness strength $\alpha$ for covariate shift and label skew. (b) Performance of local and federated SimCLR for different amount of local epochs $E$ in the case of strong $(\alpha = 0.1)$ covariate shift and label skew. (c) Performance of local and federated SimCLR in the semi-supervised setting as a function of the amount of available labelled data.
Amount of local updates The auxiliary UV objective in federated SimCLR can be problematic for a large number of local updates, as there is only a single available class at each client. Therefore, federated SimCLR requires relatively frequent synchronization. We show in Figure 3b how the number of local epochs affects local and federated SimCLR when keeping a fixed computation budget; more local epochs imply fewer communication rounds and vice versa. We can see that federated SimCLR achieves the best performance of the two with 1 local step; however, its performance drops with more local updates and eventually becomes worse than or comparable to local SimCLR.
Amount of labelled data for the semi-supervised setting Finally, we also measure the impact of the amount of available labelled data in the semi-supervised setting for local and federated SimCLR. We measure this by keeping a fixed and labelled holdout set which we use to train a LP on top of the representations given by the two algorithms. We also train a fully supervised (i.e., on $100\%$ labelled training data) baseline with the same augmentations as the SimCLR variants. We can see in Figure 3c that the test accuracy of the LP improves with more labelled data for both algorithms, as expected. Federated SimCLR demonstrates improved performance compared to local SimCLR on all cases considered, with the biggest advantages seen when the amount of available labelled data during training is low. Furthermore, federated SimCLR reaches performance comparable to the fully supervised baseline with $\geq 50\%$ labelled training data.
# 5 DISCUSSION
In this work we analyze contrastive learning and SimCLR in the federated setting. By adopting a multi-view MI perspective, we arrive at several interesting observations and extensions. We show that a naive application of local SimCLR training at each client, coupled with parameter averaging at the server, corresponds to maximizing a lower bound to the client-conditional MI between the two views. We then identify that, in order to close the gap to the global MI, an auxiliary user-verification task is necessary. Finally, through the same MI lens, we extend both local and federated SimCLR to the semi-supervised setting in order to handle the case of partially labelled data. Despite the fact that these modifications were developed through the MI view of SimCLR, we show that they are generally useful for pretraining in the federated setting, yielding improvements for both spectral contrastive learning and SimSiam.
As non-i.i.d. data are an inherent challenge in FL, we further discuss how they affect contrastive learning, both theoretically and empirically. In the case of label skew, the most predominant type of non-i.i.d.-ness in the FL literature, we show that maximizing the global MI through federated SimCLR is appropriate, as the auxiliary user classification task is a good proxy for the unavailable label. On the flip side, in the case of covariate shift, local SimCLR leads to better models, as it is not forced to encode information that is irrelevant for the downstream task in the representations.
For future work, we will explore improved variants of the UV loss that can tolerate more local optimization, as well as better bounds for the MI in the federated setting.
# REFERENCES
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International conference on machine learning, pp. 1597-1607. PMLR, 2020.
Xinlei Chen and Kaiming He. Exploring simple siamese representation learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 15750-15758, 2021.
Nanqing Dong and Irina Voiculescu. Federated contrastive learning for decentralized unlabeled medical images. In Medical Image Computing and Computer Assisted Intervention-MICCAI 2021: 24th International Conference, Strasbourg, France, September 27-October 1, 2021, Proceedings, Part III 24, pp. 378-387. Springer, 2021.
Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent-a new approach to self-supervised learning. Advances in neural information processing systems, 33:21271-21284, 2020.
Jeff Z HaoChen, Colin Wei, Adrien Gaidon, and Tengyu Ma. Provable guarantees for self-supervised deep learning with spectral contrastive loss. Advances in Neural Information Processing Systems, 34:5000-5011, 2021.
Ali Hassani, Steven Walton, Nikhil Shah, Abulikemu Abuduweili, Jiachen Li, and Humphrey Shi. Escaping the big data paradigm with compact transformers. arXiv preprint arXiv:2104.05704, 2021.
Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 9729-9738, 2020.
Hossein Hosseini, Hyunsin Park, Sungrack Yun, Christos Louizos, Joseph Soriaga, and Max Welling. Federated learning of user verification models without sharing embeddings. In International Conference on Machine Learning, pp. 4328-4336. PMLR, 2021.
Kevin Hsieh, Amar Phanishayee, Onur Mutlu, and Phillip Gibbons. The non-iid data quagmire of decentralized machine learning. In International Conference on Machine Learning, pp. 4387-4398. PMLR, 2020.
Tzu-Ming Harry Hsu, Hang Qi, and Matthew Brown. Measuring the effects of non-identical data distribution for federated visual classification. arXiv preprint arXiv:1909.06335, 2019.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International conference on machine learning, pp. 448-456. pmlr, 2015.
Wonyong Jeong, Jaehong Yoon, Eunho Yang, and Sung Ju Hwang. Federated semi-supervised learning with inter-client consistency & disjoint learning. arXiv preprint arXiv:2006.12097, 2020.
Woojung Kim, Keondo Park, Kihyuk Sohn, Raphael Shu, and Hyung-Sin Kim. Federated semi-supervised learning with prototypical networks. arXiv preprint arXiv:2205.13921, 2022.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Jingtao Li, Lingjuan Lyu, Daisuke Iso, Chaitali Chakrabarti, and Michael Spranger. MocoSFL: enabling cross-client collaborative self-supervised learning. In The Eleventh International Conference on Learning Representations, 2023a. URL https://openreview.net/forum?id=2QGJxymNoPz.
Ming Li, Qingli Li, and Yan Wang. Class balanced adaptive pseudo labeling for federated semi-supervised learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16292-16301, 2023b.
Xiaoxiao Liang, Yiqun Lin, Huazhu Fu, Lei Zhu, and Xiaomeng Li. Rscfed: Random sampling consensus federated semi-supervised learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10154-10163, 2022.
Haowen Lin, Jian Lou, Li Xiong, and Cyrus Shahabi. Semifed: Semi-supervised federated learning with consistency and pseudo-labeling. arXiv preprint arXiv:2108.09412, 2021.
Nan Lu, Zhao Wang, Xiaoxiao Li, Gang Niu, Qi Dou, and Masashi Sugiyama. Federated learning from only unlabeled data with class-conditional-sharing clients. arXiv preprint arXiv:2204.03304, 2022.
Ekdeep Singh Lubana, Chi Ian Tang, Fahim Kawsar, Robert P Dick, and Akhil Mathur. Orchestra: Unsupervised federated learning via globally consistent clustering. arXiv preprint arXiv:2205.11506, 2022.
Disha Makhija, Nhat Ho, and Joydeep Ghosh. Federated self-supervised learning for heterogeneous clients. arXiv preprint arXiv:2205.12493, 2022.
Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient learning of deep networks from decentralized data. In Artificial intelligence and statistics, pp. 1273-1282. PMLR, 2017.
Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.
Maarten G Poirot, Praneeth Vepakomma, Ken Chang, Jayashree Kalpathy-Cramer, Rajiv Gupta, and Ramesh Raskar. Split learning for collaborative deep learning in healthcare. arXiv preprint arXiv:1912.12115, 2019.
Ben Poole, Sherjil Ozair, Aaron Van Den Oord, Alex Alemi, and George Tucker. On variational bounds of mutual information. In International Conference on Machine Learning, pp. 5171-5180. PMLR, 2019.
Sashank Reddi, Zachary Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Konečný, Sanjiv Kumar, and H Brendan McMahan. Adaptive federated optimization. arXiv preprint arXiv:2003.00295, 2020.
Kihyuk Sohn, David Berthelot, Nicholas Carlini, Zizhao Zhang, Han Zhang, Colin A Raffel, Ekin Dogus Cubuk, Alexey Kurakin, and Chun-Liang Li. Fixmatch: Simplifying semi-supervised learning with consistency and confidence. Advances in neural information processing systems, 33:596-608, 2020.
Alessandro Sordoni, Nouha Dziri, Hannes Schulz, Geoff Gordon, Philip Bachman, and Remi Tachet Des Combes. Decomposed mutual information estimation for contrastive representation learning. In International Conference on Machine Learning, pp. 9859-9869. PMLR, 2021.
Lirui Wang, Kaiqing Zhang, Yunzhu Li, Yonglong Tian, and Russ Tedrake. Does learning from decentralized non-iid unlabeled data benefit from self supervision? In The Eleventh International Conference on Learning Representations, 2022.
Mike Wu, Chengxu Zhuang, Milan Mosse, Daniel Yamins, and Noah Goodman. On mutual information in contrastive learning for visual representations. arXiv preprint arXiv:2005.13149, 2020.
Yuxin Wu and Kaiming He. Group normalization. In Proceedings of the European conference on computer vision (ECCV), pp. 3-19, 2018.
Felix Yu, Ankit Singh Rawat, Aditya Menon, and Sanjiv Kumar. Federated learning with only positive labels. In International Conference on Machine Learning, pp. 10946-10956. PMLR, 2020.
Fengda Zhang, Kun Kuang, Zhaoyang You, Tao Shen, Jun Xiao, Yin Zhang, Chao Wu, Yueting Zhuang, and Xiaolin Li. Federated unsupervised representation learning. arXiv preprint arXiv:2010.08982, 2020.
Weiming Zhuang, Yonggang Wen, and Shuai Zhang. Divergence-aware federated self-supervised learning. arXiv preprint arXiv:2204.04385, 2022.
# A EXPERIMENTAL SETUP
Data partitioning and non-i.i.d.-ness For the label-skew setting, we use the Dirichlet splits for CIFAR 10, 100 discussed at Reddi et al. (2020) with $\alpha = 0.1$ in both cases. Notice that we adopt the convention of Hsu et al. (2019) where $\alpha$ is multiplied by the prior probability of the label in the dataset, so, for example, in the case of CIFAR 10 the final concentration parameter is 0.01.
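
A sketch of one common way to realize such a Dirichlet label-skew split is given below; the exact splitting code of Reddi et al. (2020) may differ, and the per-class partitioning shown here is only an illustrative implementation (the `alpha` passed in would be the already-scaled concentration discussed above).

```python
import numpy as np

def dirichlet_label_split(labels, num_clients, alpha, seed=0):
    """Partition example indices across clients with label skew: for every class,
    that class's examples are divided according to proportions drawn from Dir(alpha)."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client, chunk in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(chunk.tolist())
    return client_indices
```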
For the covariate shift setting we consider the case of rotation non-i.i.d.-ness. More specifically, we first perform a split of the data that is i.i.d. with respect to the labels into 100 and 500 clients for CIFAR 10 and CIFAR 100 respectively. Afterwards, we bin the $[0,2\pi]$ range into 10 rotation bins and assign bins to each client according to a Dirichlet distribution with $\alpha = 0.1$; in this case, each client receives one or two bins of rotations. After the bin assignment, each client rotates each image of their local dataset once, with the angle for each image sampled i.i.d. from the bins selected at that client. For the evaluation we consider non-rotated images. A sketch of this procedure is given below.
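
The sketch below illustrates the rotation assignment; the helper names and the use of PIL for rotating images are assumptions, not the exact implementation.

```python
import numpy as np
from PIL import Image

def client_rotation_bins(num_clients, num_bins=10, alpha=0.1, seed=0):
    """Each client receives a distribution over the rotation bins of [0, 2*pi) drawn
    from Dir(alpha); with a small alpha the mass concentrates on one or two bins."""
    rng = np.random.default_rng(seed)
    return rng.dirichlet(alpha * np.ones(num_bins), size=num_clients)

def rotate_client_images(images, bin_probs, num_bins=10, seed=0):
    """Rotate every image of a client once, with its angle sampled i.i.d. from the
    client's bins (each bin covers 360 / num_bins degrees)."""
    rng = np.random.default_rng(seed)
    out = []
    for img in images:
        b = rng.choice(num_bins, p=bin_probs)
        angle = rng.uniform(b, b + 1) * 360.0 / num_bins
        out.append(Image.fromarray(img).rotate(angle))
    return out
```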
For the joint shift setting we mix the two cases above by first performing a non-i.i.d., Dirichlet split, i.e., $\alpha = 0.1$ , according to the labels and then apply the non-i.i.d. rotation strategy described above.
Architecture details For all methods we use the same encoder model, a ResNet18 architecture adapted for CIFAR 10/100 by replacing the kernel of the first convolutional layer with a $3 \times 64 \times 3 \times 3$ kernel and removing the max-pooling and last fully connected layer. Furthermore, to better accommodate for the non-i.i.d. issues in the federated learning scenario (Hsieh et al., 2020) we replace batch normalization (Ioffe & Szegedy, 2015) with group normalization (Wu & He, 2018). For the client ID projector, we use a simple MLP on top of the encoder output with a single ReLU hidden layer of 2048 units and 128 output units. For the auxiliary classifier in the case of semi-supervised learning we use a simple linear layer on top of the encoder output.
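
A sketch of this encoder adaptation in torchvision terms; the number of GroupNorm groups is an assumption, as it is not specified above.

```python
import torch.nn as nn
from torchvision.models import resnet18

def cifar_resnet18_gn(num_groups=8):
    """ResNet18 adapted for CIFAR: 3x3 first conv, no max-pooling, no final fc,
    and GroupNorm in place of BatchNorm."""
    model = resnet18(norm_layer=lambda channels: nn.GroupNorm(num_groups, channels))
    model.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
    model.maxpool = nn.Identity()
    model.fc = nn.Identity()  # the encoder then outputs 512-dimensional features
    return model
```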
For our SimCLR and spectral contrastive learning variants, the representations of the encoder are passed through an MLP projector with a single hidden layer of 2048 units and 128-dimensional outputs. The contrastive loss between the two views is measured at the output of the projector.
For our SimSiam baseline we measure the cosine similarity objective on the output of a projector that follows the SimCLR design with the exception that we also add a group normalization layer before the hidden layer, as SimSiam was unstable without it (especially at the unsupervised experiments). For the predictor we use another single hidden layer MLP with 2048 ReLU units and group normalization.
For the data augmentations, in order to create the two views, we follow the standard recipe of random cropping into $32 \times 32$ images, followed by a random horizontal flip and, with probability 0.8, a color distortion with brightness, contrast and saturation factors of 0.4 and a hue factor of 0.1. The final augmentation is a random RGB-to-grayscale transformation, applied with probability 0.2.
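
In torchvision terms, the two-view pipeline described above could look roughly as follows; any parameter not listed in the text is left at the library default, which may differ from the exact recipe used.

```python
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomResizedCrop(32),
    transforms.RandomHorizontalFlip(),
    transforms.RandomApply([transforms.ColorJitter(0.4, 0.4, 0.4, 0.1)], p=0.8),
    transforms.RandomGrayscale(p=0.2),
    transforms.ToTensor(),
])

def two_views(pil_image):
    # Two independent draws of the stochastic augmentation give the two views.
    return augment(pil_image), augment(pil_image)
```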
**Optimization details** For local optimization we use standard stochastic gradient descent with a learning rate of 0.1 for both CIFAR 10 and CIFAR 100, with, unless mentioned otherwise, a single local epoch and a batch size of 128. After the local optimization of a specific round has been completed, each client communicates to the server the delta between the finetuned parameters and the model communicated from the server to the clients. The server averages these deltas, interprets them as "gradients", and uses them in conjunction with the Adam (Kingma & Ba, 2014) optimizer in order to update the global model; this is the strategy originally proposed in Reddi et al. (2020). For the server-side Adam we use the default hyperparameters.
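
Schematically, the server-side step could be implemented as below; `client_deltas` is assumed to be a list with one entry per sampled client, each containing per-parameter delta tensors, and all names are illustrative.

```python
import torch

def server_step(global_params, client_deltas, server_opt):
    """Average the clients' parameter deltas, treat the average as a pseudo-gradient
    and let a server-side Adam optimizer apply it to the global model."""
    avg_deltas = [torch.stack(per_param).mean(dim=0) for per_param in zip(*client_deltas)]
    server_opt.zero_grad()
    for p, g in zip(global_params, avg_deltas):
        p.grad = g  # delta = theta_server - theta_client acts as the pseudo-gradient
    server_opt.step()

# server_opt = torch.optim.Adam(global_params)  # default hyperparameters, as in the text
```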
# B ADDITIONAL EXPERIMENTS
In this section we consider more baselines for both our unsupervised and semi-supervised setups in the federated setting.
# B.1 UNSUPERVISED SETTING
Additional baseline We consider one more baseline for self-supervised learning in the federated setting, FeatARC (Wang et al., 2022), specifically the "Align Only" variant. We omit the clustering approach as it makes additional assumptions compared to our unsupervised learning setup. The
authors report results with a loss-coefficient of $\lambda = 1.0$ , which leads to loss divergence in our case, so we report $\lambda = 0.5$ , which was stable except for the covariate shift setting. We see in table 3 that adding FeatARC alignment regularization does not result in improved accuracy, contrary to what the FeatARC paper results would lead us to expect. We hypothesise that this is due to the differences in our setup. Whereas FeatARC considers a cross-silo setting with a large number of local update steps, our setting focuses on the cross-device setting with one local epoch per client communication round. We leave a further analysis of FeatARC applicability to this cross-device setting to future work.
Table 3: Test set performance on the unsupervised setting of CIFAR 10. Clients' data is assumed to be fully annotated for LP fine-tuning in the unsupervised case.
<table><tr><td>Method</td><td>Label skew</td><td>Covariate shift</td><td>Joint shift</td></tr><tr><td>Local SimCLR</td><td>79.4±0.2</td><td>74.3±0.3</td><td>71.0±0.4</td></tr><tr><td>Local SimCLR + FeatARC</td><td>70.4±0.2</td><td>34.4±-</td><td>57.6±2.7</td></tr><tr><td>Federated SimCLR</td><td>85.0±0.2</td><td>73.8±0.2</td><td>74.8±0.5</td></tr><tr><td>Spectral CL</td><td>76.5±1.1</td><td>73.5±0.4</td><td>68.2±0.6</td></tr><tr><td>Spectral CL + UV</td><td>87.8±0.3</td><td>71.7±0.5</td><td>76.6±0.6</td></tr><tr><td>SimSiam</td><td>40.0±0.5</td><td>39.9±0.3</td><td>39.6±0.3</td></tr><tr><td>SimSiam + UV</td><td>35.4±0.4</td><td>35.4±0.2</td><td>34.5±0.3</td></tr><tr><td>Supervised</td><td>89.6±0.1</td><td>78.3±0.4</td><td>76.3±1.1</td></tr></table>
TinyImagenet dataset To demonstrate the scalability of our theoretical results and model design stemming from our MI perspective, we also consider the more challenging task of self-supervised pretraining on TinyImagenet. It consists of 100k training examples and 10k test examples, each belonging to one of 200 classes. We apply our federated CIFAR 10 setting to this dataset as well, i.e., we partition the training dataset into 100 clients with the label skew, covariate shift or joint shift non-i.i.d. strategies. We sample 10 clients per round in order to optimize the models and each client performs one local epoch of updates. The encoder model we use is a Compact Convolutional Transformer (Hassani et al., 2021) in the "CCT-4/3×2" variant, i.e., with 4 transformer encoder layers and a 2-layer convolutional feature extractor with a 3×3 kernel size. The results with the different methods can be seen in table 4.
Table 4: Test set performance (%) on the unsupervised setting of TinyImagenet with 100 clients after 50k rounds. Clients' data is assumed to be fully annotated for LP fine-tuning in the unsupervised case.
<table><tr><td>Method</td><td>Label skew</td><td>Covariate shift</td><td>Joint shift</td></tr><tr><td>Local SimCLR</td><td>33.3</td><td>30.3</td><td>29.6</td></tr><tr><td>Federated SimCLR</td><td>38.0</td><td>30.0</td><td>31.6</td></tr><tr><td>Spectral CL</td><td>34.0</td><td>28.4</td><td>27.9</td></tr><tr><td>Spectral CL + UV</td><td>39.7</td><td>29.5</td><td>32.4</td></tr><tr><td>SimSiam</td><td>10.6</td><td>4.7</td><td>0.5</td></tr><tr><td>SimSiam + UV</td><td>0.5</td><td>0.5</td><td>0.5</td></tr><tr><td>Supervised</td><td>44</td><td>36.6</td><td>33.0</td></tr></table>
Overall, we see that the results are consistent with our intuitions and story in the case of contrastive methods; the biggest gains from the additional UV loss are in the case of label skew and joint shift. SimSiam generally underperformed in this setting, which is also consistent with our observations in the case of unsupervised learning on CIFAR 10/100, probably due to representation collapse, given that in our setting we use group normalization instead of batch normalization.
# B.2 SEMI-SUPERVISED SETTING
Additional pseudo-labelling baselines We provide more results on our partially labeled (with $10\%$ labeled data on each client) semi-supervised setting by also considering baselines that perform pseudolabelling as a means for semi-supervised learning. The two methods we consider are SemiFed (Lin et al., 2021) and CBAFed (Li et al., 2023b). For both of these settings we have the following modifications that bring them in line with our semi-supervised setup.
For SemiFed we do not make use of an ensemble of client models in order to impute the missing labels but rather assign a pseudo-label to the datapoint based on the received server model on each client. In this way, our proposed methods and SemiFed have similar communication costs and privacy, as exchanging models directly trained on local data between clients reduces the overall privacy. For CBAFed, we do not use residual weight connection, in order to have a consistent optimization strategy for all our methods, but do use the class balanced adaptive threshold strategy. We follow the setup described in Appendix F.5 of (Li et al., 2023b) to train a model with partially labeled clients.
From what we can see in table 5 and table 6, our conclusion about the usefulness of the UV loss (c.f. proposition 2) applies to this setting as well. While SemiFed underperforms when trained without the UV loss, it manages to improve upon the fully supervised baseline and be comparable to the other methods when we add it. On CIFAR 10, adding the UV loss yields a significant $16.7\%$ improvement in the case of label skew, and on CIFAR 100, while it gets a more modest $6\%$ improvement, it manages to outperform all other methods. CBAFed performs worse than the self-supervised methods, although it also benefits from adding the UV loss in all the conducted experiments.
Table 5: Test set performance (%) on the semi-supervised setting of CIFAR 10 with $10\%$ labelled data on each client along with standard error over 5 seeds for all experiments except CBAFed, which uses a single seed. We use the corresponding labelled subset for the LP.
<table><tr><td>Method</td><td>Label skew</td><td>Covariate shift</td><td>Joint shift</td></tr><tr><td>Local SimCLR</td><td>74.5±0.3</td><td>49.1±1.3</td><td>45.8±1.4</td></tr><tr><td>Federated SimCLR</td><td>78.0±0.2</td><td>50.3±1.1</td><td>49.9±1.4</td></tr><tr><td>Spectral CL</td><td>74.2±0.3</td><td>48.0±0.7</td><td>45.4±1.5</td></tr><tr><td>Spectral CL + UV</td><td>79.6±0.3</td><td>49.7±1.0</td><td>49.8±1.1</td></tr><tr><td>SimSiam</td><td>75.3±0.4</td><td>46.8±0.7</td><td>40.5±0.9</td></tr><tr><td>SimSiam + UV</td><td>80.4±0.2</td><td>50.0±1.2</td><td>44.3±1.0</td></tr><tr><td>SemiFed</td><td>60.0±4.5</td><td>18.6±1.8</td><td>37.2±0.9</td></tr><tr><td>SemiFed + UV</td><td>76.7±1.2</td><td>24.0±2.2</td><td>45.1±2.0</td></tr><tr><td>CBAFed</td><td>66.3</td><td>45.9</td><td>34.8</td></tr><tr><td>CBAFed + UV</td><td>74.1</td><td>48.2</td><td>36.2</td></tr><tr><td>Supervised</td><td>75.1±0.2</td><td>48.1±0.9</td><td>42.7±1.7</td></tr></table>
TinyImagenet dataset To demonstrate the scalability of our semi-supervised model design stemming from our MI perspective, we also consider the more challenging TinyImagenet task in the case of label skew non-i.i.d.-ness with Dirichlet splitting and an $\alpha = 0.1$ multiplied by the prior probability of each class. The setup is similar to our semi-supervised federated CIFAR 10 setting, with 100 clients and $10\%$ labelled data per client. We sample 10 clients per round in order to optimize the models and each client performs one local epoch of updates. We use the same CCT architecture as the unsupervised TinyImagenet experiment. The results with the different methods can be seen in table 7.
We observe similar patterns to our unsupervised TinyImagenet setting, with the biggest gains from the UV loss for the contrastive methods appearing when some label skew is present. SimSiam did experience representation collapse in the case of label skew; however, adding the UV loss successfully mitigated the collapse and significantly improved performance.
Table 6: Test set performance (%) on the semi-supervised setting of CIFAR 100 with $10\%$ labelled data on each client along with standard error over 5 seeds. We use the corresponding labelled subset for the LP.
<table><tr><td>Method</td><td>Label Skew</td><td>Covariate shift</td><td>Joint shift</td></tr><tr><td>Local SimCLR</td><td>30.3±0.2</td><td>15.1±0.4</td><td>13.1±0.3</td></tr><tr><td>Federated SimCLR</td><td>34.5±0.3</td><td>14.8±0.3</td><td>14.6±0.3</td></tr><tr><td>Spectral CL</td><td>30.1±0.2</td><td>14.1±0.4</td><td>12.3±0.3</td></tr><tr><td>Spectral CL + UV</td><td>34.0±0.2</td><td>13.7±0.3</td><td>13.6±0.4</td></tr><tr><td>SimSiam</td><td>30.7±0.2</td><td>13.4±0.3</td><td>12.8±0.3</td></tr><tr><td>SimSiam + UV</td><td>34.3±0.1</td><td>13.6±0.3</td><td>14.0±0.4</td></tr><tr><td>SemiFed</td><td>29.7±0.5</td><td>13.3±0.2</td><td>12.3±0.2</td></tr><tr><td>SemiFed + UV</td><td>35.7±0.2</td><td>13.4±0.6</td><td>13.1±0.2</td></tr><tr><td>Supervised</td><td>29.6±0.3</td><td>12.6±0.2</td><td>12.2±0.1</td></tr></table>
Table 7: Test set performance (%) on the semi-supervised setting of TinyImagenet with 100 clients after 50k rounds. We use the corresponding labelled subset for the linear probe.
<table><tr><td>Method</td><td>Label skew</td><td>Covariate shift</td><td>Joint shift</td></tr><tr><td>Local SimCLR</td><td>18.5</td><td>8.1</td><td>6.7</td></tr><tr><td>Federated SimCLR</td><td>19.5</td><td>8.4</td><td>7.4</td></tr><tr><td>Spectral CL</td><td>17.8</td><td>8.3</td><td>6.9</td></tr><tr><td>Spectral CL + UV</td><td>18.9</td><td>8.1</td><td>7.5</td></tr><tr><td>SimSiam</td><td>0.5</td><td>8.1</td><td>6.9</td></tr><tr><td>SimSiam + UV</td><td>20.0</td><td>8.5</td><td>6.9</td></tr><tr><td>Supervised</td><td>17.9</td><td>8.4</td><td>7.7</td></tr></table>
# C ALGORITHMS
Algorithm 1 The server side algorithm for our federated SimCLR / Spectral CL / SimSiam with optional user-verification and semi-supervision.
```txt
Initialize $\theta$ and $\phi$ with $\theta_{1}, \phi_{1}$
for round $t$ in $1, \ldots, T$ do
    Sample $S$ clients from the population
    Initialize $\nabla_{\theta}^{t} = 0, \nabla_{\phi}^{t} = 0$
    for $s$ in $S$ do
        $\theta_s, \phi_s \gets \mathrm{CLIENT}(s, \theta_t, \phi_t)$
        $\nabla_{\theta}^{t} \mathrel{+}= \frac{\theta_{t} - \theta_{s}}{|S|}$
        $\nabla_{\phi}^{t} \mathrel{+}= \frac{\phi_{t} - \phi_{s}}{|S|}$
    end for
    $\theta^{t+1}, \phi^{t+1} \gets \mathrm{ADAM}(\nabla_{\theta}^{t}, \nabla_{\phi}^{t})$
end for
```
Algorithm 2 The client side algorithm for our federated SimCLR / Spectral CL / SimSiam with optional user-verification and semi-supervision. $L_{ul}$ corresponds to the unsupervised loss component of SimCLR / Spectral CL / SimSiam. $\beta$ is a coefficient that determines the weight of the UV loss, with a default value of 1.
```txt
Get $\theta, \phi$ from the server
$\theta_{s}, \phi_{s} \gets \theta, \phi$
for epoch $e$ in $1, \ldots, E$ do
    for batch $b \in B$ do
        > Unlabelled and labelled datapoints of the batch $b$
        $x_{ul}, (x_l, y_l) \gets b$
        > Get the two views through augmentations
        $[x_{ul}^{1}, x_{l}^{1}], [x_{ul}^{2}, x_{l}^{2}] \gets \mathrm{AUG}([x_{ul}, x_{l}]), \mathrm{AUG}([x_{ul}, x_{l}])$
        > Representations of the two views from the encoder $f$ with parameters $\theta_{s}$
        $[z_{ul}^{1}, z_{l}^{1}], [z_{ul}^{2}, z_{l}^{2}] \gets f([x_{ul}^{1}, x_{l}^{1}]; \theta_{s}), f([x_{ul}^{2}, x_{l}^{2}]; \theta_{s})$
        > Unsupervised loss with, depending on $\beta$, an additional UV loss
        $\mathcal{L}_s = \mathcal{L}_{ul}(z_{ul}^1, z_{ul}^2; \phi_s) + \beta \mathcal{L}_{uv}(s, z_{ul}^1, z_{ul}^2; \phi_s)$
        > Supervised loss on the labelled data
        for label $i \in \{0, \dots, |Y| - 1\}$ do
            > Unsupervised loss between datapoints of the same class
            $\mathcal{L}_s \mathrel{+}= \mathcal{L}_{ul}(z_l^1[y_l == i], z_l^2[y_l == i]; \phi_s)$
        end for
        > Standard supervised loss
        $\mathcal{L}_s \mathrel{+}= \mathcal{L}_y(y_l, z_l^1, z_l^2; \phi_s)$
        > Local gradient updates on the loss
        $\theta_s, \phi_s \gets \mathrm{SGD}(\nabla_{\theta_s, \phi_s} \mathcal{L}_s)$
    end for
end for
return $\theta_s, \phi_s$
```
# D MISSING PROOFS
Proposition 1. Let $s \in \mathbb{N}$ denote the user $ID$ , $\mathbf{x} \in \mathbb{R}^{D_x}$ the input and $\mathbf{z}_1, \mathbf{z}_2 \in \mathbb{R}^{D_z}$ the latent representations of the two views of $\mathbf{x}$ given by the encoder with parameters $\theta$ . Given a critic function $f: \mathbb{R}^{D_z} \times \mathbb{R}^{D_z} \to \mathbb{R}$ , we have that

$$
\mathrm {I} _ {\theta} \left(\mathbf {z} _ {1}; \mathbf {z} _ {2} \mid s\right) \geq \mathbb {E} _ {p (s) p _ {\theta} \left(\mathbf {z} _ {1}, \mathbf {z} _ {2} \mid s\right) _ {1: K}} \left[ \frac {1}{K} \sum_ {k = 1} ^ {K} \log \frac {\exp \left(f \left(\mathbf {z} _ {1 k} , \mathbf {z} _ {2 k}\right)\right)}{\frac {1}{K} \sum_ {j = 1} ^ {K} \exp \left(f \left(\mathbf {z} _ {1 j} , \mathbf {z} _ {2 k}\right)\right)} \right]. \tag {14}
$$

Proof. The proof follows Poole et al. (2019). We can show that

$$
\begin{aligned}
\mathrm{I}_{\theta}(\mathbf{z}_1; \mathbf{z}_2 \mid s) &= \mathbb{E}_{p(s) p_{\theta}(\mathbf{z}_{1,1}, \mathbf{z}_2 \mid s) p_{\theta}(\mathbf{z}_{1,2:K} \mid s)}\left[ \log \frac{p_{\theta}(\mathbf{z}_{1,1} \mid \mathbf{z}_2, s)\, p_{\theta}(\mathbf{z}_{1,2:K} \mid s)}{p_{\theta}(\mathbf{z}_{1,2:K} \mid s)\, p_{\theta}(\mathbf{z}_{1,1} \mid s)} \right] & (15) \\
&= \mathbb{E}_{p(s) p_{\theta}(\mathbf{z}_{1,1:K}, \mathbf{z}_2 \mid s)}\left[ \log \frac{p_{\theta}(\mathbf{z}_{1,1:K} \mid \mathbf{z}_2, s)}{p_{\theta}(\mathbf{z}_{1,1:K} \mid s)} \right] & (16) \\
&= \mathbb{E}_{p(s) p_{\theta}(\mathbf{z}_{1,1:K}, \mathbf{z}_2 \mid s)}\left[ \log \frac{p_{\theta}(\mathbf{z}_{1,1:K} \mid \mathbf{z}_2, s)\, q(\mathbf{z}_{1,1:K} \mid \mathbf{z}_2, s)}{q(\mathbf{z}_{1,1:K} \mid \mathbf{z}_2, s)\, p_{\theta}(\mathbf{z}_{1,1:K} \mid s)} \right] & (17) \\
&= \mathbb{E}_{p(s) p_{\theta}(\mathbf{z}_{1,1:K}, \mathbf{z}_2 \mid s)}\left[ \log \frac{q(\mathbf{z}_{1,1:K} \mid \mathbf{z}_2, s)}{p_{\theta}(\mathbf{z}_{1,1:K} \mid s)} \right] + \mathbb{E}_{p(s) p_{\theta}(\mathbf{z}_2 \mid s) p_{\theta}(\mathbf{z}_{1,1:K} \mid \mathbf{z}_2, s)}\left[ \log \frac{p_{\theta}(\mathbf{z}_{1,1:K} \mid \mathbf{z}_2, s)}{q(\mathbf{z}_{1,1:K} \mid \mathbf{z}_2, s)} \right] & (18) \\
&= \mathbb{E}_{p(s) p_{\theta}(\mathbf{z}_{1,1:K}, \mathbf{z}_2 \mid s)}\left[ \log \frac{q(\mathbf{z}_{1,1:K} \mid \mathbf{z}_2, s)}{p_{\theta}(\mathbf{z}_{1,1:K} \mid s)} \right] + \mathbb{E}_{p(s) p_{\theta}(\mathbf{z}_2 \mid s)}\left[ D_{\mathrm{KL}}\big( p_{\theta}(\mathbf{z}_{1,1:K} \mid \mathbf{z}_2, s) \,\big\|\, q(\mathbf{z}_{1,1:K} \mid \mathbf{z}_2, s) \big) \right] & (19) \\
&\geq \mathbb{E}_{p(s) p_{\theta}(\mathbf{z}_{1,1:K}, \mathbf{z}_2 \mid s)}\left[ \log \frac{q(\mathbf{z}_{1,1:K} \mid \mathbf{z}_2, s)}{p_{\theta}(\mathbf{z}_{1,1:K} \mid s)} \right], & (20)
\end{aligned}
|
| 363 |
+
$$
|
| 364 |
+
|
| 365 |
+
and then by parametrizing $q(\mathbf{z}_{1,1:K}|\mathbf{z}_2,s)$ in terms of a critic function $f$ ,
|
| 366 |
+
|
| 367 |
+
$$
|
| 368 |
+
q \left(\mathbf {z} _ {1, 1: K} \mid \mathbf {z} _ {2}, s\right) = \frac {p _ {\theta} \left(\mathbf {z} _ {1 , 1 : K} \mid s\right) \exp \left(f \left(\mathbf {z} _ {2} , \mathbf {z} _ {1 , 1 : K}\right)\right)}{\mathbb {E} _ {p _ {\theta} \left(\mathbf {z} _ {1 , 1 : K} \mid s\right)} \left[ \exp \left(f \left(\mathbf {z} _ {2} , \mathbf {z} _ {1 , 1 : K}\right)\right) \right]}, \tag {21}
|
| 369 |
+
$$
|
| 370 |
+
|
| 371 |
+
we have that
|
| 372 |
+
|
| 373 |
+
$$
|
| 374 |
+
\mathrm {I} _ {\theta} (\mathbf {z} _ {1}; \mathbf {z} _ {2} | s) \geq \mathbb {E} _ {p (s) p _ {\theta} (\mathbf {z} _ {1, 1: K}, \mathbf {z} _ {2} | s)} \left[ \log \frac {\exp (f (\mathbf {z} _ {2} , \mathbf {z} _ {1 , 1 : K}))}{\mathbb {E} _ {p _ {\theta} (\mathbf {z} _ {1 , 1 : K} | s)} [ \exp (f (\mathbf {z} _ {2} , \mathbf {z} _ {1 , 1 : K})) ]} \right]. \tag {22}
|
| 375 |
+
$$
|
| 376 |
+
|
| 377 |
+
Since the denominator depends on the aggregate score $\exp(f(\mathbf{z}_2, \mathbf{z}_{1,1:K}))$ over $p_{\theta}(\mathbf{z}_{1,1:K}\mid s)$, which is similarly intractable, we can introduce one more lower bound that allows us to work with minibatches of data (Poole et al., 2019). Due to the positivity of the exponent, we have that for any $a > 0$
|
| 378 |
+
|
| 379 |
+
$$
|
| 380 |
+
\log \mathbb {E} _ {p _ {\theta} (\mathbf {z} _ {1, 1: K} | s)} [ \exp (f (\mathbf {z} _ {2}, \mathbf {z} _ {1, 1: K})) ] \leq \frac {\mathbb {E} _ {p _ {\theta} (\mathbf {z} _ {1 , 1 : K} | s)} [ \exp (f (\mathbf {z} _ {2} , \mathbf {z} _ {1 , 1 : K})) ]}{a} + \log a - 1. \tag {23}
|
| 381 |
+
$$
|
| 382 |
+
|
| 383 |
+
Using this bound with $a = \exp(1)$, we have that
|
| 384 |
+
|
| 385 |
+
$$
|
| 386 |
+
\begin{array}{l} \mathrm{I}_{\theta}(\mathbf{z}_{1}; \mathbf{z}_{2} \mid s) \geq \mathbb{E}_{p(s)\, p_{\theta}(\mathbf{z}_{1,1:K}, \mathbf{z}_{2} \mid s)} \left[ \log \exp\left(f\left(\mathbf{z}_{2}, \mathbf{z}_{1,1:K}\right)\right) \right] \\ - \exp(-1)\, \mathbb{E}_{p(s)\, p_{\theta}(\mathbf{z}_{2} \mid s) p_{\theta}(\mathbf{z}_{1,1:K} \mid s)} \left[ \exp\left(f\left(\mathbf{z}_{2}, \mathbf{z}_{1,1:K}\right)\right) \right]. \tag {24} \\ \end{array}
|
| 387 |
+
$$
|
| 388 |
+
|
| 389 |
+
Following Poole et al. (2019), we can now set $f(\mathbf{z}_2,\mathbf{z}_{1,1:K})$ as
|
| 390 |
+
|
| 391 |
+
$$
|
| 392 |
+
f \left(\mathbf {z} _ {2}, \mathbf {z} _ {1, 1: K}\right)\rightarrow 1 + f \left(\mathbf {z} _ {2}, \mathbf {z} _ {1, 1}\right) - \log a \left(\mathbf {z} _ {2}, \mathbf {z} _ {1, 1: K}\right). \tag {25}
|
| 393 |
+
$$
|
| 394 |
+
|
| 395 |
+
In this way, we end up with
|
| 396 |
+
|
| 397 |
+
$$
|
| 398 |
+
\begin{array}{l} \mathrm {I} _ {\theta} (\mathbf {z} _ {1}; \mathbf {z} _ {2} | s) \geq 1 + \mathbb {E} _ {p (s) p _ {\theta} (\mathbf {z} _ {2}, \mathbf {z} _ {1, 1: K} | s)} \left[ \log \frac {\exp (f (\mathbf {z} _ {2} , \mathbf {z} _ {1 , 1}))}{a (\mathbf {z} _ {2} , \mathbf {z} _ {1 , 1 : K})} \right] \\ - \mathbb {E} _ {p (s) p _ {\theta} (\mathbf {z} _ {2} | s) p _ {\theta} (\mathbf {z} _ {1, 1: K} | s)} \left[ \frac {\exp \left(f \left(\mathbf {z} _ {2} , \mathbf {z} _ {1 , 1}\right)\right)}{a \left(\mathbf {z} _ {2} , \mathbf {z} _ {1 , 1 : K}\right)} \right]. \tag {26} \\ \end{array}
|
| 399 |
+
$$
|
| 400 |
+
|
| 401 |
+
We can now average the bound over $K$ replicates and reindex $\mathbf{z}_1$ as
|
| 402 |
+
|
| 403 |
+
$$
|
| 404 |
+
\begin{array}{l} \mathrm {I} _ {\theta} (\mathbf {z} _ {1}; \mathbf {z} _ {2} | s) \geq 1 + \frac {1}{K} \sum_ {k = 1} ^ {K} \left(\mathbb {E} _ {p (s) p _ {\theta} (\mathbf {z} _ {2}, \mathbf {z} _ {1, 1: K} | s)} \left[ \log \frac {\exp (f (\mathbf {z} _ {2} , \mathbf {z} _ {1 , 1}))}{a (\mathbf {z} _ {2} , \mathbf {z} _ {1 , 1 : K})} \right] \right. \\ \left. - \mathbb {E} _ {p (s) p _ {\theta} (\mathbf {z} _ {2} | s) p _ {\theta} (\mathbf {z} _ {1, 1: K} | s)} \left[ \frac {\exp \left(f \left(\mathbf {z} _ {2} , \mathbf {z} _ {1 , 1}\right)\right)}{a \left(\mathbf {z} _ {2} , \mathbf {z} _ {1 , 1 : K}\right)} \right]\right) (27) \\ = 1 + \frac {1}{K} \sum_ {k = 1} ^ {K} \mathbb {E} _ {p (s) p _ {\theta} (\mathbf {z} _ {2}, \mathbf {z} _ {1, 1: K} | s)} \left[ \log \frac {\exp (f (\mathbf {z} _ {2} , \mathbf {z} _ {1 , 1}))}{a (\mathbf {z} _ {2} , \mathbf {z} _ {1 , 1 : K})} \right] \\ - \frac {1}{K} \sum_ {k = 1} ^ {K} \mathbb {E} _ {p (s) p _ {\theta} (\mathbf {z} _ {2} | s) p _ {\theta} (\mathbf {z} _ {1, 1: K} | s)} \left[ \frac {\exp \left(f \left(\mathbf {z} _ {2} , \mathbf {z} _ {1 , 1}\right)\right)}{a \left(\mathbf {z} _ {2} , \mathbf {z} _ {1 , 1 : K}\right)} \right] (28) \\ = 1 + \mathbb {E} _ {p (s) p _ {\theta} (\mathbf {z} _ {2}, \mathbf {z} _ {1, 1: K} | s)} \left[ \frac {1}{K} \sum_ {k = 1} ^ {K} \log \frac {\exp (f (\mathbf {z} _ {2} , \mathbf {z} _ {1 , k}))}{a (\mathbf {z} _ {2} , \mathbf {z} _ {1 , 1 : K})} \right] \\ - \frac {1}{K} \sum_ {k = 1} ^ {K} \mathbb {E} _ {p (s) p _ {\theta} (\mathbf {z} _ {2} | s) p _ {\theta} (\mathbf {z} _ {1, 1: K} | s)} \left[ \frac {\exp \left(f \left(\mathbf {z} _ {2} , \mathbf {z} _ {1 , k}\right)\right)}{a \left(\mathbf {z} _ {2} , \mathbf {z} _ {1 , 1 : K}\right)} \right] (29) \\ \end{array}
|
| 405 |
+
$$
|
| 406 |
+
|
| 407 |
+
and for the specific choice of $a(\mathbf{z}_2, \mathbf{z}_{1,1:K}) = \frac{1}{K} \sum_{k=1}^{K} \exp(f(\mathbf{z}_2, \mathbf{z}_{1,k}))$ , we have that terms cancel, i.e.,
|
| 408 |
+
|
| 409 |
+
$$
|
| 410 |
+
\begin{array}{l} \frac {1}{K} \sum_ {k = 1} ^ {K} \mathbb {E} _ {p (s) p _ {\theta} (\mathbf {z} _ {2} | s) p _ {\theta} (\mathbf {z} _ {1, 1: K} | s)} \left[ \frac {\exp (f (\mathbf {z} _ {2} , \mathbf {z} _ {1 , k}))}{\frac {1}{K} \sum_ {k = 1} ^ {K} \exp (f (\mathbf {z} _ {2} , \mathbf {z} _ {1 , k}))} \right] \\ = \mathbb {E} _ {p (s) p _ {\theta} (\mathbf {z} _ {2} | s) p _ {\theta} (\mathbf {z} _ {1, 1: K} | s)} \left[ \frac {\frac {1}{K} \sum_ {k = 1} ^ {K} \exp \left(f \left(\mathbf {z} _ {2} , \mathbf {z} _ {1 , k}\right)\right)}{\frac {1}{K} \sum_ {k = 1} ^ {K} \exp \left(f \left(\mathbf {z} _ {2} , \mathbf {z} _ {1 , k}\right)\right)} \right] = 1. \tag {30} \\ \end{array}
|
| 411 |
+
$$
|
| 412 |
+
|
| 413 |
+
In this way, we end up with the well-known InfoNCE loss (Oord et al., 2018), where we now contrast between datapoints that share the same class
|
| 414 |
+
|
| 415 |
+
$$
|
| 416 |
+
\mathrm {I} _ {\theta} \left(\mathbf {z} _ {1}; \mathbf {z} _ {2} \mid s\right) \geq \mathbb {E} _ {p (s) p _ {\theta} \left(\mathbf {z} _ {1}, \mathbf {z} _ {2} \mid s\right) _ {1: K}} \left[ \frac {1}{K} \sum_ {k = 1} ^ {K} \log \frac {\exp \left(f \left(\mathbf {z} _ {1 k} , \mathbf {z} _ {2 k}\right)\right)}{\frac {1}{K} \sum_ {j = 1} ^ {K} \exp \left(f \left(\mathbf {z} _ {1 j} , \mathbf {z} _ {2 k}\right)\right)} \right]. \tag {31}
|
| 417 |
+
$$
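The bound in Eq. (31) can be estimated directly from a minibatch drawn from a single client. Below is a minimal NumPy sketch (illustrative only, not the paper's implementation), assuming a dot-product critic $f$ and that `z1`, `z2` hold the $K$ paired view embeddings of one client.

```python
import numpy as np

def infonce_bound_per_client(z1, z2):
    """Monte Carlo estimate of the bound in Eq. (31) for a single client.

    z1, z2: arrays of shape (K, D) with the two view embeddings of the K
    datapoints from the same client; the critic f is taken to be a dot product.
    """
    scores = z1 @ z2.T                                    # scores[j, k] = f(z_{1j}, z_{2k})
    pos = np.diag(scores)                                 # f(z_{1k}, z_{2k}), the positive pairs
    log_denom = np.log(np.mean(np.exp(scores), axis=0))   # log (1/K) sum_j exp(f(z_{1j}, z_{2k}))
    return np.mean(pos - log_denom)                       # lower-bounds I_theta(z1; z2 | s)
```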
|
| 418 |
+
|
| 419 |
+

|
| 420 |
+
|
| 421 |
+
Lemma 2.1. Let $s \in \mathbb{N}$ denote the client $ID$ , $\mathbf{x} \in \mathbb{R}^{D_x}$ the input and $\mathbf{z}_1 \in \mathbb{R}^{D_z}$ the latent representation of a view of $\mathbf{x}$ given by the encoder with parameters $\theta$ . Let $\phi$ denote the parameters of a client classifier $r_{\phi}(s|\mathbf{z}_1)$ that predicts the client $ID$ from this specific representation and let $H(s)$ be the entropy of the client distribution $p(s)$ . We have that
|
| 422 |
+
|
| 423 |
+
$$
|
| 424 |
+
\mathrm {I} _ {\theta} (\mathbf {z} _ {1}; s) \geq \mathbb {E} _ {p (s) p _ {\theta} (\mathbf {z} _ {1} | s)} \left[ \log r _ {\phi} (s | \mathbf {z} _ {1}) \right] + \mathrm {H} (s) \tag {32}
|
| 425 |
+
$$
|
| 426 |
+
|
| 427 |
+
Proof.
|
| 428 |
+
|
| 429 |
+
$$
|
| 430 |
+
\begin{array}{l} \mathrm{I}_{\theta}\left(\mathbf{z}_{1}; s\right) = \mathbb{E}_{p_{\theta}(s, \mathbf{z}_{1})} \left[ \log \frac{p_{\theta}(s, \mathbf{z}_{1})}{p(s) p_{\theta}(\mathbf{z}_{1})} \right] = \mathbb{E}_{p(s)\, p_{\theta}\left(\mathbf{z}_{1} \mid s\right)} \left[ \log \frac{p_{\theta}(s \mid \mathbf{z}_{1})}{p(s)} \right] \quad (33) \\ = \mathbb{E}_{p(s)\, p_{\theta}(\mathbf{z}_{1} \mid s)} \left[ \log \frac{r_{\phi}(s \mid \mathbf{z}_{1})}{p(s)} \right] + \mathbb{E}_{p(s)} \left[ D_{\mathrm{KL}}\left(p_{\theta}(s \mid \mathbf{z}_{1}) \,\middle\|\, r_{\phi}(s \mid \mathbf{z}_{1})\right) \right] \quad (34) \\ \geq \mathbb{E}_{p(s)\, p_{\theta}(\mathbf{z}_{1} \mid s)} \left[ \log \frac{r_{\phi}(s \mid \mathbf{z}_{1})}{p(s)} \right] = \mathbb{E}_{p(s)\, p_{\theta}(\mathbf{z}_{1} \mid s)} \left[ \log r_{\phi}(s \mid \mathbf{z}_{1}) \right] + \mathrm{H}(s). \quad (35) \\ \end{array}
|
| 431 |
+
$$
|
| 432 |
+
|
| 433 |
+

|
| 434 |
+
|
| 435 |
+
Lemma 2.2. Let $s \in \mathbb{N}$ denote the user $ID$ , $\mathbf{x} \in \mathbb{R}^{D_x}$ the input and $\mathbf{z}_1, \mathbf{z}_2 \in \mathbb{R}^{D_z}$ the latent representations of the views of $\mathbf{x}$ given by the encoder with parameters $\theta$ . Let $\phi$ denote the parameters of a client classifier $r_{\phi}(s|\mathbf{z}_2)$ that predicts the client $ID$ from the representations. We have that
|
| 436 |
+
|
| 437 |
+
$$
|
| 438 |
+
\mathrm {I} _ {\theta} (\mathbf {z} _ {1}; s | \mathbf {z} _ {2}) \leq - \mathbb {E} _ {p (s) p _ {\theta} (\mathbf {z} _ {2} | s)} [ \log r _ {\phi} (s | \mathbf {z} _ {2}) ] \tag {36}
|
| 439 |
+
$$
|
| 440 |
+
|
| 441 |
+
Proof.
|
| 442 |
+
|
| 443 |
+
$$
|
| 444 |
+
\begin{array}{l} \mathrm {I} _ {\theta} \left(\mathbf {z} _ {1}; s \mid \mathbf {z} _ {2}\right) = \mathrm {H} _ {\theta} (s \mid \mathbf {z} _ {2}) - \mathrm {H} _ {\theta} (s \mid \mathbf {z} _ {2}, \mathbf {z} _ {1}) (37) \\ \leq \mathrm {H} _ {\theta} (s | \mathbf {z} _ {2}) = \mathrm {H} (s) - \mathrm {I} _ {\theta} (\mathbf {z} _ {2}; s) \leq - \mathbb {E} _ {p (s) p _ {\theta} (\mathbf {z} _ {2} | s)} \left[ \log r _ {\phi} (s | \mathbf {z} _ {2}) \right] (38) \\ \end{array}
|
| 445 |
+
$$
|
| 446 |
+
|
| 447 |
+
where $\mathrm{H}(s)$ is the entropy of $p(s)$, $\mathrm{H}_{\theta}(s\mid\mathbf{z}_2)$ and $\mathrm{H}_{\theta}(s\mid\mathbf{z}_2,\mathbf{z}_1)$ are the conditional entropies of $s$ given $\mathbf{z}_2$ and given $(\mathbf{z}_2, \mathbf{z}_1)$, respectively, and the last inequality follows from the lower bound of Lemma 2.1. We also used the fact that the entropy of a discrete distribution is non-negative.
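For intuition, both bounds can be estimated from held-out data once a client classifier $r_\phi$ is available. The sketch below is illustrative only: `probs_z1` and `probs_z2` are assumed to hold the classifier's per-client probabilities for the two views.

```python
import numpy as np

def client_mi_bounds(probs_z1, probs_z2, s, p_s):
    """Estimate the Lemma 2.1 lower bound on I(z1; s) and the Lemma 2.2
    upper bound on I(z1; s | z2).

    probs_z1, probs_z2: (N, num_clients) predicted distributions r_phi(. | z);
    s: (N,) integer client IDs; p_s: (num_clients,) marginal client distribution
    (assumed to have full support).
    """
    H_s = -np.sum(p_s * np.log(p_s))                 # entropy H(s)
    idx = np.arange(len(s))
    lower = np.mean(np.log(probs_z1[idx, s])) + H_s  # E[log r_phi(s | z1)] + H(s)  (Eq. 32)
    upper = -np.mean(np.log(probs_z2[idx, s]))       # -E[log r_phi(s | z2)]        (Eq. 36)
    return lower, upper
```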
|
| 448 |
+
|
| 449 |
+
Proposition 2. Consider the label skew data-generating process for federated SimCLR from Figure 1 with $s \in \mathbb{N}$ denoting the user ID with $\mathrm{H}(s)$ being the entropy of $p(s)$ , $\mathbf{x} \in \mathbb{R}^{D_x}$ the input, $\mathbf{z}_1, \mathbf{z}_2 \in \mathbb{R}^{D_z}$ the latent representations of the two views of $\mathbf{x}$ given by the encoder with parameters $\theta$ . Let $y$ be the label and let $r_{\phi}(s|\mathbf{z}_i)$ be a model with parameters $\phi$ that predicts the user ID from the latent representation $\mathbf{z}_i$ . In this case, we have that
|
| 450 |
+
|
| 451 |
+
$$
|
| 452 |
+
\mathrm {I} _ {\theta} (\mathbf {z} _ {1}; y) + \mathrm {I} _ {\theta} (\mathbf {z} _ {2}; y) \geq \mathbb {E} _ {p (s) p _ {\theta} (\mathbf {z} _ {1}, \mathbf {z} _ {2} | s)} \left[ \log r _ {\phi} (s | \mathbf {z} _ {1}) + \log r _ {\phi} (s | \mathbf {z} _ {2}) \right] + 2 \mathrm {H} (s). \tag {39}
|
| 453 |
+
$$
|
| 454 |
+
|
| 455 |
+
Proof. The claim is a consequence of the data processing inequality. We start by noting that
|
| 456 |
+
|
| 457 |
+
$$
|
| 458 |
+
\mathrm {I} _ {\theta} \left(\mathbf {z} _ {1}; y\right) + \mathrm {I} _ {\theta} \left(\mathbf {z} _ {1}; s \mid y\right) = \mathrm {I} _ {\theta} \left(\mathbf {z} _ {1}; y, s\right) = \mathrm {I} _ {\theta} \left(\mathbf {z} _ {1}; s\right) + \mathrm {I} _ {\theta} \left(\mathbf {z} _ {1}; y \mid s\right) \tag {40}
|
| 459 |
+
$$
|
| 460 |
+
|
| 461 |
+
and since in this graphical model we have $s \perp \mathbf{z}_1 \mid y$, i.e., $\mathrm{I}_{\theta}(s;\mathbf{z}_1\mid y) = 0$, we end up with
|
| 462 |
+
|
| 463 |
+
$$
|
| 464 |
+
\begin{array}{l} \mathrm {I} _ {\theta} \left(\mathbf {z} _ {1}; y\right) = \mathrm {I} _ {\theta} \left(\mathbf {z} _ {1}; s\right) + \mathrm {I} _ {\theta} \left(\mathbf {z} _ {1}; y \mid s\right) (41) \\ \geq \mathrm {I} _ {\theta} (\mathbf {z} _ {1}; s) \geq \mathbb {E} _ {p (s) p _ {\theta} (\mathbf {z} _ {1} | s)} \left[ \log r _ {\phi} (s | \mathbf {z} _ {1}) \right] + \mathrm {H} (s), (42) \\ \end{array}
|
| 465 |
+
$$
|
| 466 |
+
|
| 467 |
+
where we use the non-negativity of mutual information and Lemma 2.1. In a similar manner, we can also show that
|
| 468 |
+
|
| 469 |
+
$$
|
| 470 |
+
\mathrm {I} _ {\theta} (\mathbf {z} _ {2}; y) \geq \mathbb {E} _ {p (s) p _ {\theta} (\mathbf {z} _ {2} | s)} \left[ \log r _ {\phi} (s | \mathbf {z} _ {2}) \right] + \mathrm {H} (s). \tag {43}
|
| 471 |
+
$$
|
| 472 |
+
|
| 473 |
+
By adding up eq. (42) and eq. (43) we arrive at the claim.
|
2024/A Mutual Information Perspective on Federated Contrastive Learning/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:986b5237a20cc506cc64292a7da025e7ed1ff03c2482913c619fd001c2434653
|
| 3 |
+
size 879954
|
2024/A Mutual Information Perspective on Federated Contrastive Learning/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/A Poincaré Inequality and Consistency Results for Signal Sampling on Large Graphs/e82becdb-9086-4140-813d-b7925e4a2ac8_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/A Poincaré Inequality and Consistency Results for Signal Sampling on Large Graphs/e82becdb-9086-4140-813d-b7925e4a2ac8_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/A Poincaré Inequality and Consistency Results for Signal Sampling on Large Graphs/e82becdb-9086-4140-813d-b7925e4a2ac8_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:8bcaabb58a4987ebf453dacb2af0a1f98b8060755b374df5eac2b6395540bfdc
|
| 3 |
+
size 462959
|
2024/A Poincaré Inequality and Consistency Results for Signal Sampling on Large Graphs/full.md
ADDED
|
@@ -0,0 +1,749 @@
| 1 |
+
# A POINCARE INEQUALITY AND CONSISTENCY RESULTS FOR SIGNAL SAMPLING ON LARGE GRAPHS
|
| 2 |
+
|
| 3 |
+
Thien Le
|
| 4 |
+
|
| 5 |
+
MIT
|
| 6 |
+
|
| 7 |
+
thienle@mit.edu
|
| 8 |
+
|
| 9 |
+
Luana Ruiz
|
| 10 |
+
|
| 11 |
+
Johns Hopkins University
|
| 12 |
+
|
| 13 |
+
1rubini1@jh.edu
|
| 14 |
+
|
| 15 |
+
Stefanie Jegelka
|
| 16 |
+
|
| 17 |
+
TU Munich, MIT
|
| 18 |
+
|
| 19 |
+
stefje@mit.edu
|
| 20 |
+
|
| 21 |
+
# ABSTRACT
|
| 22 |
+
|
| 23 |
+
Large-scale graph machine learning is challenging as the complexity of learning models scales with the graph size. Subsampling the graph is a viable alternative, but sampling on graphs is nontrivial as graphs are non-Euclidean. Existing graph sampling techniques require not only computing the spectra of large matrices but also repeating these computations when the graph changes, e.g., grows. In this paper, we introduce a signal sampling theory for a type of graph limit—the graphon. We prove a Poincaré inequality for graphon signals and show that complements of node subsets satisfying this inequality are unique sampling sets for Paley-Wiener spaces of graphon signals. Exploiting connections with spectral clustering and Gaussian elimination, we prove that such sampling sets are consistent in the sense that unique sampling sets on a convergent graph sequence converge to unique sampling sets on the graphon. We then propose a related graphon signal sampling algorithm for large graphs, and demonstrate its good empirical performance on graph machine learning tasks.
|
| 24 |
+
|
| 25 |
+
# 1 INTRODUCTION
|
| 26 |
+
|
| 27 |
+
Graphs are ubiquitous data structures in modern data science and machine learning. Examples range from social networks (Kempe et al., 2003; Barabási et al., 2000) and recommender systems (Ying et al., 2018) to drug interactions (Zitnik et al., 2018) and protein folding (Jumper et al., 2021), in which the graph can have tens of thousands to millions of nodes and edges (Takac & Zábovský, 2012). The ability to sense systems at this scale presents unprecedented opportunities for scientific and technological advancement. However, it also poses challenges, as traditional algorithms and models, including neural graph learning methods, may not scale efficiently to large graphs.
|
| 28 |
+
|
| 29 |
+
However, the large size of modern graphs does not necessarily indicate the degree of complexity of the problem. In fact, many graph-based problems have low intrinsic dimensions. For instance, the 'small-world phenomenon' (Kleinberg, 2000) observes that any two entities in a network are likely connected by a short sequence of intermediate nodes. Another example is power-law graphs, where there are few highly connected influencers and many scattered nodes (Barabási et al., 2000).
|
| 30 |
+
|
| 31 |
+
At a high level, this paper studies how to exploit these simplicities in large graphs to design scalable algorithms with theoretical guarantees. In particular, we combine two ideas: graph limits, which are used to approximate large, random graphs; and sampling theory, which studies the problem of representing (graph) signals using the smallest possible subset of data points (nodes), with the least possible loss of information. We then illustrate how to use the resulting sampling techniques to compress graphs for GNN training and to compute faster, subsampled positional encodings.
|
| 32 |
+
|
| 33 |
+
Graphons and graph limits. Leveraging continuous limits to analyze large discrete data is helpful because limits often reveal the intrinsic dimension of the data. E.g., in Euclidean domain, the Fourier transform (FT) of a continuous signal is easier to analyze than the FT of its discrete counterpart, which is periodic and may exhibit aliasing. We propose to study the graph signal sampling problem on a graph limit called graphon. Graphons can be thought of as undirected graphs with an uncountable number of nodes, and are both random graph models and limits of large dense graphs (Borgs et al., 2008; Lovász, 2012).
|
| 34 |
+
|
| 35 |
+
(Graph) signal sampling. Sampling theory is a long-standing line of work with deep roots in signal processing. Traditionally, sampling seeks to answer the fundamental question: if one can only observe discrete samples of an analog (continuous) signal, under what conditions can the analog signal be perfectly reconstructed? On a graph on $n$ nodes, signals are vectors $\mathbf{x} \in \mathbb{R}^n$ that map each node to some value. The graph signal sampling problem is then defined as follows.
|
| 36 |
+
|
| 37 |
+
Problem 1. For some signal space $\mathcal{X}$ of interest, find subsets $S$ of nodes such that if $\mathbf{x},\mathbf{x}'\in \mathcal{X}$ and $x_{i} = x_{i}^{\prime}$ for all $i\in S$ then $x_{j} = x_{j}^{\prime}$ for all other nodes. Thus, such a set can uniquely represent any signals in $\mathcal{X}$ and is called a uniqueness set for $\mathcal{X}$ .
|
| 38 |
+
|
| 39 |
+
Problem 1 was first studied by Pesenson (2008), who introduced Paley-Wiener (PW) spaces for graph signals, defined graph uniqueness sets, and derived a Poincaré inequality for discrete graph signals that allows recovering such uniqueness sets. These definitions are reviewed in Section 2. Graph signal sampling theory subsequently found applications in the field of graph signal processing (GSP) (Shuman et al., 2013; Ortega et al., 2018), with Chen et al. (2015) describing how sampling sets can be obtained via column-wise Gaussian elimination of the eigenvector matrix.
|
| 40 |
+
|
| 41 |
+
Current limitations. Though widely used, Chen et al. (2015)'s approach requires expensive spectral computations. Several methods, briefly discussed in Section 1.1, have been proposed to circumvent these computations; however, these approaches still present stringent tradeoffs between complexity and quality of approximation on very large graphs. Perhaps more limiting, the discrete sampling sets yielded by these methods are no longer applicable if the graph changes, as often happens in large real-world network problems, e.g., an influx of new users in a social network.
|
| 42 |
+
|
| 43 |
+
Contributions. To address the abovementioned issues, we propose sampling uniqueness sets on the limit graphon. By solving a single sampling problem at the graph limit (graphon), we obtain a uniqueness set that generalizes to any large finite graphs in a sequence converging to the limit graphon. We provide both theoretical guarantees and experiments to verify this generalization in downstream graph-based tasks. In summary, our contributions are:
|
| 44 |
+
|
| 45 |
+
1. Motivated by Pesenson (2008), we formulate signal sampling over a graphon and study traditional sampling theory notions such as Paley-Wiener spaces and uniqueness sets in a Euclidean setting of $L^2([0,1])$ while still incorporating graph structure into the sampling procedure.
|
| 46 |
+
2. We prove a Poincaré inequality for graphons and relate bandlimitedness in graphon signal space to optimal sampling sets<sup>1</sup>. This generalizes previous results on finite graphs and rigorously answers a reconstruction question. Unlike other results for graphon signal processing in the literature, we do not require any continuity or smoothness assumption on the graphon.
|
| 47 |
+
3. We uncover a connection between graphon sampling and kernel spectral clustering and design a Gaussian-elimination-based algorithm to sample from the graphon uniqueness set with provable consistency, using an argument from (Schiebinger et al., 2015).
|
| 48 |
+
4. We empirically evaluate our sampling method on two tasks: (1) transferability: training a GNN on subsampled graphs and testing on the full graph; (2) accelerating the computation of positional encodings for GNNs by restricting them to a sampled subset of nodes.
|
| 49 |
+
|
| 50 |
+
# 1.1 RELATED WORK
|
| 51 |
+
|
| 52 |
+
Graphons in machine learning. In machine learning, graphons have been used for network model estimation (Borgs et al., 2015), hierarchical clustering (Eldridge et al., 2016) and to study the theoretical properties of graph neural networks (GNNs) on large graphs. Specifically, Ruiz et al. (2020b) have shown that graph convolutions converge to graphon convolutions, further proving a non-asymptotic result that implies that GNNs are transferable across graphon-sampled graphs (Ruiz et al., 2020a). Similar studies have been done using graphops (Le & Jegelka, 2023), which are very general graph limits that range from graphons to very sparse graphs. Graphons have also been used to show convergence of GNN training on increasing graph sequences (Cervino et al., 2023), to prove PAC-Bayes bounds for GNN learning (Maskey et al., 2022), and to study the learning dynamics of wide large-graph NNs (Krishnagopal & Ruiz, 2023).
|
| 53 |
+
|
| 54 |
+
Graph signal sampling. Graph signal sampling has been studied at length in GSP. Chen et al. (2015) describe how sampling sets can be obtained via column-wise Gaussian elimination of the eigenvector matrix and derive conditions for perfect reconstruction. Noting that this approach requires expensive spectral computations, several methods were proposed to avoid them. E.g., Anis et al. (2016) calculate eigenvalue and eigenvector approximations using power iteration; Marques et al. (2015) compute $n$ signal aggregations at a single node $i$ to construct an $n$ -dimensional local signal from which $K$ elements are sampled; and Chamon & Ribeiro (2017) do greedy sampling and provide near optimal guarantees when the interpolation error is approximately supermodular.
|
| 55 |
+
|
| 56 |
+
Connections with other sampling techniques. The sampling algorithm we propose is based on a greedy iterative procedure that attempts to find the signal with the lowest total variation on the complement of the current sampling set $S$ , and adds the node corresponding to the largest component in this signal to $S$ . This heuristic is derived by trying to maximize the largest eigenvalue of the normalized Laplacian restricted to $S$ (see (Anis et al., 2016, Section IV.C) for a detailed discussion). Thus, our algorithm has close connections with E-optimal design, which minimizes the largest eigenvalue of the pseudo-inverse of the sampled matrix (Pukelsheim, 2006), and with dual volume sampling (Avron & Boutsidis, 2013; Li et al., 2017), which provides approximation guarantees for E-optimal sampling. This type of objective also appears in effective resistance/leverage scores sampling (Ma et al., 2014; Rudi et al., 2018), which is used for graph sparsification (Spielman & Srivastava, 2008).
|
| 57 |
+
|
| 58 |
+
Recent work by Parada-Mayorga & Ribeiro (2024), concurrent with ours, also generalized PW spaces and uniqueness sets to graphons. Similarly, they proved a Poincaré inequality and proposed a sampling algorithm for graphon signals. Their main results quantitatively compare the Poincaré constant across different graphon-signal spaces, implying convergence of this constant for a convergent graph sequence, unlike our work, which analyzes consistency of sampling via spectral clustering.
|
| 59 |
+
|
| 60 |
+
# 2 PRELIMINARIES
|
| 61 |
+
|
| 62 |
+
# 2.1 GRAPH SIGNAL PROCESSING
|
| 63 |
+
|
| 64 |
+
Setup. We consider graphs $\mathbf{G} = (\mathcal{V},\mathcal{E})$ with $n$ nodes and edges $\mathcal{E}\subseteq \mathcal{V}\times \mathcal{V}$. We write a graph's adjacency matrix as $\mathbf{A}\in \mathbb{R}^{n\times n}$; its degree matrix as $\mathbf{D} = \mathrm{diag}(\mathbf{A}\mathbf{1})$; and its Laplacian matrix as $\mathbf{L} = \mathbf{D} - \mathbf{A}$. We also consider the normalized adjacency and Laplacian matrices $\bar{\mathbf{A}} = (\mathbf{D}^{\dagger})^{1 / 2}\mathbf{A}(\mathbf{D}^{\dagger})^{1 / 2}$ and $\bar{\mathbf{L}} = \mathbf{I} - \bar{\mathbf{A}}$ (where $\cdot^{\dagger}$ is the pseudoinverse), with eigendecomposition $\bar{\mathbf{L}} = \mathbf{V}\pmb {\Lambda}\mathbf{V}^T$ and eigenvalues $\lambda_{1}\leq \ldots \leq \lambda_{n}$. We further consider node signals $\mathbf{x}\in \mathbb{R}^n$, which assign data value $x_{i}$ to node $i$; e.g., in a social network, $x_{i}$ may represent the political affiliation of person $i$.
|
| 65 |
+
|
| 66 |
+
Total variation and graph frequencies. The total variation of a graph signal is defined as $\mathrm{TV}(\mathbf{x}) = \mathbf{x}^T\bar{\mathbf{L}}\mathbf{x}$ (Anis et al., 2016; Sandryhaila & Moura, 2014). This allows interpreting the eigenvalues $\lambda_{i}$ as the graph's essential frequencies, with oscillation modes given by the eigenvectors $\mathbf{v}_i = [\mathbf{V}]_{:i}$ .
|
| 67 |
+
|
| 68 |
+
Graph FT and Paley-Wiener spaces. We may analyze signals on the graph frequency domain via the graph Fourier transform (GFT). The GFT $\hat{\mathbf{x}}$ of $\mathbf{x}$ is its projection onto the Laplacian eigenbasis, $\hat{\mathbf{x}} = \mathbf{V}^T\mathbf{x}$ (Sandryhaila & Moura, 2014). The GFT further allows defining bandlimited graph signals, or, more formally, Paley-Wiener (PW) spaces. On $\mathbf{G}$, the PW space with cutoff frequency $\lambda$ is defined as $PW_{\lambda}(\mathbf{G}) = \{\mathbf{x} \ \text{s.t.} \ [\hat{\mathbf{x}}]_i = 0 \ \text{for all} \ \lambda_i > \lambda \}$ (Anis et al., 2016; Pesenson, 2008).
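As a concrete illustration of these definitions, the sketch below (minimal NumPy, not tied to any GSP library) computes the normalized Laplacian, the GFT, and a numerical check of membership in $PW_{\lambda}(\mathbf{G})$.

```python
import numpy as np

def gft_and_pw_check(A, x, lam, tol=1e-8):
    """Return the GFT of x on the graph with adjacency A and whether x lies in
    PW_lambda(G), i.e., whether all components above the cutoff lam vanish."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(d, dtype=float)
    d_inv_sqrt[d > 0] = d[d > 0] ** -0.5                    # (D^+)^{1/2}
    L_bar = np.eye(len(d)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    eigvals, V = np.linalg.eigh(L_bar)                      # ascending graph frequencies
    x_hat = V.T @ x                                         # graph Fourier transform
    return x_hat, bool(np.all(np.abs(x_hat[eigvals > lam]) < tol))
```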
|
| 69 |
+
|
| 70 |
+
Uniqueness sets. When $\mathcal{X}$ is a PW space $PW_{\lambda}(\mathbf{G})$ with $\lambda \leq \lambda_K$ for some $K < n$, there exists a subset of at most $K$ nodes, called a uniqueness set, that perfectly determines any signal in $\mathcal{X}$. The following theorem from (Anis et al., 2016) gives conditions under which a proposed subset $\mathcal{S}$ is a uniqueness set for $PW_{\lambda}(\mathbf{G})$.
|
| 71 |
+
|
| 72 |
+
Theorem 1 (Uniqueness sets for $PW_{\lambda}(\mathbf{G})$ ). Let $\mathcal{S} \subseteq \mathcal{V}$. Let $\mathbf{V}_K \in \mathbb{R}^{n \times K}$ denote the first $K$ columns of the eigenvector matrix $\mathbf{V}$ and $\Psi_S \in \mathbb{R}^{K \times K}$ be the submatrix of $\mathbf{V}_K$ with rows indexed by $S$. If $\text{rank} \, \Psi_S = K$, then $\mathcal{S}$ is a uniqueness set for $PW_{\lambda}(\mathbf{G})$ for all $\lambda \leq \lambda_K(\mathbf{G})$. If $\lambda_K \leq \lambda < \lambda_{K+1}$ then $\text{rank} \, \Psi_S = K$ is also necessary.
|
| 73 |
+
|
| 74 |
+
In addition to providing a sufficient condition to verify if a set is a uniqueness set for some PW space, this theorem suggests a two-step strategy for obtaining such sets: first compute $\mathbf{V}_K$ , and then design a sampling method that outputs $S$ such that $\mathrm{rank}\Psi_S = K$ . However, these sampling strategies,
|
| 75 |
+
|
| 76 |
+
e.g., the one suggested by Thm. 1, can be limiting on large graphs as they require computing the eigendecomposition of a large matrix.
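For instance, the rank condition of Thm. 1 can be checked directly once the first $K$ eigenvectors are available; the sketch below is a brute-force verification (it performs the full eigendecomposition, so it is only meant to illustrate the condition, not to scale to large graphs).

```python
import numpy as np

def is_uniqueness_set(L_bar, S, K):
    """Check the sufficient condition of Thm. 1: rank(Psi_S) = K, with Psi_S the
    rows of the first K Laplacian eigenvectors indexed by the candidate set S."""
    _, V = np.linalg.eigh(L_bar)            # eigenvectors sorted by ascending eigenvalue
    Psi_S = V[np.asarray(S), :K]
    return np.linalg.matrix_rank(Psi_S) == K
```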
|
| 77 |
+
|
| 78 |
+
# 2.2 GRAPHON SIGNAL PROCESSING
|
| 79 |
+
|
| 80 |
+
Graphons and graphon signals. A graphon is a symmetric, bounded, measurable function $\mathbf{W}:\Omega \times \Omega \to [0,1]$, where $\Omega$ is a general measurable space (Borgs & Chayes, 2017). We assume that there exists an invertible map $\beta :\Omega \to [0,1]$ and w.l.o.g., we can also write $\mathbf{W}:[0,1]^2\rightarrow [0,1]$. Graphons are only defined up to a bijective measure-preserving map, similar to how finite graphs are defined up to node permutations. Graphons are limits of graph sequences $\{\mathbf{G}_n\}$ in the so-called homomorphism density sense (Borgs et al., 2008), and can also be seen as random graph models where nodes $u_{i},u_{j}$ are sampled from $\Omega$ and edges are sampled as $(u_{i},u_{j})\sim \mathrm{Bernoulli}(\mathbf{W}(u_i,u_j))$. Graphons can also be motivated via infinite exchangeable graphs (Hoover, 1979; Aldous, 1981).
|
| 81 |
+
|
| 82 |
+
Graphon signals are functions $X:[0,1]\to \mathbb{R}$ . They represent data on the "nodes" of a graphon, i.e., $X(u)$ is the value of the signal at node $u\in [0,1]$ (Ruiz et al., 2021). Since two graphons that differ on a set of Lebesgue measure 0 are identified, so are graphon signals. We restrict attention to finite-energy signals $X\in L^{2}([0,1])$ .
|
| 83 |
+
|
| 84 |
+
Graphon Laplacian and FT. Given a graphon $\mathbf{W}$ , its degree function is $\mathbf{d}(v) = \int_0^1\mathbf{W}(u,v)\mathrm{d}u$ . Define the normalized graphon $\bar{\mathbf{W}} (u,v) = \mathbf{W}(u,v) / \sqrt{\mathbf{d}(u)\mathbf{d}(v)}$ if $\mathbf{d}(u),\mathbf{d}(v)\neq 0$ and 0 otherwise. Given a graphon signal $X$ , we define the normalized graphon Laplacian:
|
| 85 |
+
|
| 86 |
+
$$
|
| 87 |
+
\bar {\mathcal {L}} X = X - \int_ {0} ^ {1} \bar {\mathbf {W}} (u, \cdot) X (u) \mathrm {d} u. \tag {1}
|
| 88 |
+
$$
|
| 89 |
+
|
| 90 |
+
The spectrum of $\bar{\mathcal{L}}$ consists of at most countably many nonnegative eigenvalues with finite multiplicity in $[0,2]$ . Its essential spectrum consists of at most one point $\{1\}$ , and this is also the only possible accumulation point. We enumerate the eigenvalues as $0 \leq \lambda_1 \leq \lambda_2 \leq \ldots \leq 2$ . The corresponding set of eigenfunctions $\{\varphi_i\}_{i \in \mathbb{Z} \setminus \{0\}}$ forms an orthonormal basis of $L^2([0,1])$ ; see App. B.
|
| 91 |
+
|
| 92 |
+
We define the graphon Fourier transform (WFT) of signal $X$ as the projection
|
| 93 |
+
|
| 94 |
+
$$
|
| 95 |
+
\hat {X} (\lambda_ {i}) = \int_ {0} ^ {1} X (u) \varphi_ {i} (u) \mathrm {d} u \tag {2}
|
| 96 |
+
$$
|
| 97 |
+
|
| 98 |
+
for all $i$ . Note that this is different from the WFT defined in (Ruiz et al., 2020b), which corresponds to projections onto the eigenbasis of a different but related linear operator.
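Numerically, the operator in Eq. (1) and the WFT in Eq. (2) can be approximated by discretizing $[0,1]$ on a uniform grid; the following sketch uses a simple Riemann-sum approximation and assumes the graphon is given as a Python function (an illustrative approximation, not an exact spectral computation).

```python
import numpy as np

def graphon_laplacian_spectrum(W, n_grid=400):
    """Approximate eigenvalues/eigenfunctions of the normalized graphon Laplacian
    by evaluating W on a uniform grid and replacing integrals with Riemann sums."""
    u = (np.arange(n_grid) + 0.5) / n_grid
    Wmat = W(u[:, None], u[None, :])                # W(u_i, u_j)
    d = Wmat.mean(axis=1)                           # degree function d(u_i)
    d_inv_sqrt = np.zeros_like(d)
    d_inv_sqrt[d > 0] = d[d > 0] ** -0.5
    Wbar = d_inv_sqrt[:, None] * Wmat * d_inv_sqrt[None, :]
    L = np.eye(n_grid) - Wbar / n_grid              # (L X)_i = X_i - (1/n) sum_j Wbar_ij X_j
    eigvals, eigfuncs = np.linalg.eigh(L)
    return eigvals, eigfuncs * np.sqrt(n_grid)      # rescaled to unit L^2([0,1]) norm

# e.g., eigvals, phis = graphon_laplacian_spectrum(lambda u, v: u * v)
```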
|
| 99 |
+
|
| 100 |
+
# 3 SAMPLING THEORY FOR GRAPHONS
|
| 101 |
+
|
| 102 |
+
We generalize the graph sampling problem studied in Pesenson (2008) to a graphon sampling problem. The sampling procedure returns a (Lebesgue) measurable subset $U \subseteq [0,1]$ . Intuitively, we would like to choose a set $U$ such that sampling from $U$ gives us the most information about the whole signal over $[0,1]$ . These are called uniqueness sets. Similar to finite graphs, when the graphon signals have limited bandwidth, there exist nontrivial (other than $U = [0,1]$ ) uniqueness sets. Finding these sets is the main focus of the sampling theory for graphons that we develop here.
|
| 103 |
+
|
| 104 |
+
For an arbitrary bandwidth cutoff $\lambda > 0$ , we use the normalized graphon Laplacian (1) with eigenvalues $0 \leq \lambda_1 \leq \lambda_2 \leq \ldots \leq \lambda_{-2} \leq \lambda_{-1} \leq 2$ . First, we define the Paley-Wiener space:
|
| 105 |
+
|
| 106 |
+
Definition 1 (Graphon signal $PW_{\lambda}(\mathbf{W})$ space). The Paley-Wiener space associated with $\lambda \in [0,1]$ and graphon $\mathbf{W}$ , denoted $PW_{\lambda}(\mathbf{W})$ , is the space of graphon signals $X:[0,1]\to \mathbb{R}$ such that $\hat{X} (\lambda_i) = 0$ for all $\lambda_{i} > \lambda$ , where $\hat{X}$ is the projection operator defined in Eq. (2).
|
| 107 |
+
|
| 108 |
+
The definition of $PW_{\lambda}(\mathbf{W})$ depends on the underlying limit graphon through the projection operator (2), in particular the positions of its Laplacian eigenvalues. When $\lambda \geq \lambda_{-1}$ , $PW_{\lambda}$ is all of $L^2([0,1])$ as the definition above is vacuously satisfied. Decreasing $\lambda$ induces some constraints on what functions are allowed in $PW_{\lambda}$ . At $\lambda = 0$ , $PW_0 = \{0\}$ contains only the trivial function.
|
| 109 |
+
|
| 110 |
+
For any signal space $\mathcal{H} \subseteq L^2([0,1])$ , we further define graphon uniqueness sets:
|
| 111 |
+
|
| 112 |
+
Definition 2 (Graphon uniqueness set). A measurable $U \subseteq [0,1]$ is a uniqueness set for the signal space $\mathcal{H} \subseteq L^2([0,1])$ if, for any $X, Y \in \mathcal{H}$ , $\int_U |X(u) - Y(u)|^2 du = 0$ implies $\| X - Y \|_{L^2([0,1])}^2 = 0$ .
|
| 113 |
+
|
| 114 |
+
Since $U = [0,1]$ is a trivial uniqueness set for any $\mathcal{H} \subseteq L^2([0,1])$ , we are mainly interested in the interplay between the bandwidth cutoff $\lambda$ in $PW_{\lambda}(\mathbf{W})$ , and its corresponding non-trivial uniqueness sets. More precisely, we study the question:
|
| 115 |
+
|
| 116 |
+
Problem 2. Assume that a graphon signal comes from $PW_{\lambda}(\mathbf{W})$ for some $\lambda$ and $\mathbf{W}$ . Is there an algorithm that outputs a uniqueness set $U(\lambda, \mathbf{W})$ ?
|
| 117 |
+
|
| 118 |
+
We answer this question in the positive and provide two approaches. First, by generalizing results by Pesenson (2008) for finite graphs, we give a graphon Poincaré inequality (Thm. 2) for nontrivial measurable subsets of $[0,1]$. Then, in Thm. 3, we show that if a set $S$ satisfies the Poincaré inequality with constant $\Lambda > 0$, then the complement $U = [0,1] \setminus S$ is a uniqueness set for $\mathrm{PW}_{1 / \Lambda}(\mathbf{W})$. Thus, we can find a uniqueness set $U$ by first finding an $S$ that satisfies the Poincaré inequality with constant $1 / \lambda$.
|
| 119 |
+
|
| 120 |
+
The second approach is more direct: the analogous question for finite graphs admits a straightforward answer using Gaussian elimination (see the discussion underneath Thm. 1). However, in the limit of infinitely many nodes, it does not make sense to perform Gaussian elimination as is. Instead, we form a sequence of graphs $\{\mathbf{G}_n\}$ that converges to the prescribed graphon $\mathbf{W}$ . We then prove, using techniques from (Schiebinger et al., 2015), that performing Gaussian elimination with proper pivoting for $\mathbf{G}_n$ recovers sets that converge to a uniqueness set for $\mathrm{PW}_{\lambda}(\mathbf{W})$ (Prop. 5). Finally, we implement and analyze this approach empirically in Section 6.
|
| 121 |
+
|
| 122 |
+
# 4 MAIN RESULTS
|
| 123 |
+
|
| 124 |
+
# 4.1 POINCARE INEQUALITY AND BANDWIDTH OF UNIQUENESS SET
|
| 125 |
+
|
| 126 |
+
We start with the first approach to Problem 2, proving a Poincaré inequality for subsets $S \subset [0,1]$ and showing that this Poincaré inequality implies uniqueness of $[0,1] \backslash S$ at some bandwidth.
|
| 127 |
+
|
| 128 |
+
First, we need some definitions. These definitions generalize Pesenson (2008)'s observation that for finite graphs, any strict subset $T$ of the vertex set satisfies a Poincaré inequality with constant determined by spectral properties of another graph $\Gamma(T)$ . Intuitively, $\Gamma(T)$ is designed to capture the non-Euclidean geometry induced by nodes in $T$ and their neighbors. We now want to construct an analogous $\Gamma(S)$ in the graphon case. Fix an arbitrary graphon $\mathbf{W}$ and measurable subset $S \subset [0,1]$ . Define the neighborhood $\mathcal{N}(S)$ of $S$ as the measurable set $\mathcal{N}(S) := \{v \in [0,1] \backslash S : \int_S \mathbf{W}(u,v) \mathrm{d}u > 0\}$ .
|
| 129 |
+
|
| 130 |
+
To define $\Gamma(S)$ , make a copy of $S$ by letting $S'$ be a set disjoint from $[0,1]$ such that there is a measure-preserving bijection $\theta : S' \to S$ . Let $\tilde{S} \coloneqq S \cup \mathcal{N}(S)$ and $\tilde{S}' \coloneqq S' \cup \mathcal{N}(S)$ . Observe that one can extend $\theta : \tilde{S}' \to \tilde{S}$ by mapping elements of $\mathcal{N}(S)$ to itself. We will define a graphon on the extended domain $D = \tilde{S} \cup S'$ :
|
| 131 |
+
|
| 132 |
+
$$
|
| 133 |
+
\Gamma(S): D^{2} \rightarrow [0, 1]: (u, v) \mapsto \begin{cases} \mathbf{W}(u, v) & \text{if } u \in \tilde{S} \text{ and } v \in \tilde{S} \\ \mathbf{W}(\theta(u), \theta(v)) & \text{if } u \in \tilde{S}' \text{ and } v \in \tilde{S}' \\ 0 & \text{otherwise.} \end{cases} \tag {3}
|
| 134 |
+
$$
|
| 135 |
+
|
| 136 |
+
Spectral properties of $\Gamma(S)$ determine the constant in our Poincaré inequality. Poincaré inequalities are a class of important results in functional analysis that control the action of a functional (here, the normalized Laplacian) by the (non-Euclidean) geometry of the underlying space (here, a graph).
|
| 137 |
+
|
| 138 |
+
Theorem 2 (Graphon Poincaré inequality). Let $S \subset [0,1]$ such that $\mathcal{N}(S)$ has positive Lebesgue measure. Denote by $\lambda_{1}$ the smallest nonzero eigenvalue of the scaled normalized Laplacian operator applied to $\Gamma(S)$ . Then for every $X \in L^{2}([0,1])$ supported only on $S$ , $\|X\|_{L^{2}} \leq \frac{1}{\lambda_{1}}\|\overline{\mathcal{L}}X\|_{L^{2}}$ .
|
| 139 |
+
|
| 140 |
+
The proof of this theorem is in App. C and generalizes that in (Pesenson, 2008). Next, we prove that if we can find a set $S$ that satisfies a Poincaré inequality with constant $\Lambda$ , then its complement is a uniqueness set for any $PW_{\lambda}(\mathbf{W})$ with $\lambda < 1 / \Lambda$ .
|
| 141 |
+
|
| 142 |
+
Theorem 3. Let $S$ be a proper subset of $[0,1]$ satisfying the Poincaré inequality
|
| 143 |
+
|
| 144 |
+
$$
|
| 145 |
+
\| X \| _ {L ^ {2}} \leq \Lambda \| \bar {\mathcal {L}} X \| _ {L ^ {2}} \tag {4}
|
| 146 |
+
$$
|
| 147 |
+
|
| 148 |
+
for all $X \in L^{2}([0,1])$ supported only on $S$ . Then, $U = [0,1] \backslash S$ is a uniqueness set for any $PW_{\lambda}(\mathbf{W})$ with $\lambda < 1 / \Lambda$ .
|
| 149 |
+
|
| 150 |
+
The proof of this result is in App. C, providing an answer to Problem 2: given a bandwidth limit $\lambda$ , one can find a uniqueness set $U$ by searching through measurable sets $S$ and computing the smallest nonzero eigenvalue $\lambda_1$ of $\Gamma(S)$ . If $\lambda < \lambda_1$ then $U = [0,1] \setminus S$ is a uniqueness set. This approach is inefficient as we may need to check every $S$ . Next, we investigate a more efficient approach.
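On a discretized graphon, this check can be carried out by brute force. The sketch below mirrors the construction of $\Gamma(S)$ in Eq. (3) (under the simplifying assumption that $S$ is a union of grid cells) and returns the smallest nonzero eigenvalue of its normalized Laplacian, up to the scaling used in Thm. 2.

```python
import numpy as np

def gamma_S_lambda1(Wmat, S_mask, tol=1e-10):
    """Wmat: graphon evaluated on a uniform grid; S_mask: boolean mask of cells in S.
    Builds the discretized Gamma(S) on S, N(S) and the disjoint copy S' (Eq. 3) and
    returns lambda_1, its smallest nonzero normalized-Laplacian eigenvalue."""
    S = np.where(S_mask)[0]
    outside = np.where(~S_mask)[0]
    NS = outside[Wmat[np.ix_(S, outside)].sum(axis=0) > 0]   # neighborhood N(S)
    tilde = np.concatenate([S, NS])                          # S and N(S)
    k, m = len(S), len(S) + len(NS)
    G = np.zeros((m + k, m + k))                             # domain D = (S ∪ N(S)) ∪ S'
    G[:m, :m] = Wmat[np.ix_(tilde, tilde)]                   # pairs inside S ∪ N(S)
    G[m:, m:] = Wmat[np.ix_(S, S)]                           # S' x S' (copy of S x S)
    G[k:m, m:] = Wmat[np.ix_(NS, S)]                         # N(S) x S'
    G[m:, k:m] = Wmat[np.ix_(S, NS)]                         # S' x N(S)
    d = G.sum(axis=1)
    d_inv_sqrt = np.zeros_like(d)
    d_inv_sqrt[d > 0] = d[d > 0] ** -0.5
    L = np.eye(m + k) - d_inv_sqrt[:, None] * G * d_inv_sqrt[None, :]
    eig = np.linalg.eigvalsh(L)
    nonzero = eig[eig > tol]
    return nonzero[0] if len(nonzero) else 0.0
```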
|
| 151 |
+
|
| 152 |
+
# 4.2 GAUSSIAN ELIMINATION AND CONVERGENCE OF UNIQUENESS SETS
|
| 153 |
+
|
| 154 |
+
Our second approach to Problem 2 relies on approximating the graphon with a sequence of graphs $\{\mathbf{G}_n\}$ which has the graphon as its limit, and solving Problem 2 in one of these graphs. While attempting to solve the graphon sampling problem on a finite graph may appear tautological, our goal is to exploit the countable (and often finite) rank of the graphon to make the problem tractable.
|
| 155 |
+
|
| 156 |
+
To establish the connection between the continuous sampling sets in a graphon and its finite rank $K$ , we partition the graphon sampling set into $K$ elements and view each element as representing a mixture component or "cluster". This leads to a connection to mixture models and spectral clustering, which we exploit in two ways. First, to quantify the quality of the graphon sampling sets via a "difficulty" function borrowed from (Schiebinger et al., 2015) relating to the separability of the mixture components. Second, similar to consistency of kernelized spectral clustering, to prove that in convergent graph sequences, graph sampling sets converge to graphon sampling sets.
|
| 157 |
+
|
| 158 |
+
Graphons are equivalent to mixture models of random graphs. To make the above connection rigorous, the first step is to show we can view the graphon as a mixture model of random graphs.
|
| 159 |
+
|
| 160 |
+
Definition 3 (Mixture model for random graphs). Let $\Omega \subset \mathbb{R}^d$ be a compact space and $\mathcal{P}(\Omega)$ the space of probability measures on $\Omega$. For some number of components $K$, components $\{\mathbb{P}_i \in \mathcal{P}(\Omega)\}_{i=1}^K$, weights $\{w_i \geq 0\}_{i=1}^K$ that sum to 1, and a bounded, symmetric, measurable kernel $\mathbf{k}: \Omega \times \Omega \to [0,1]$, a mixture model for random graphs $\mathbb{K}(\Omega, \mathbb{P}, \mathbf{k})$ samples nodes from the mixture distribution $\mathbb{P}$ and then samples edges from $\mathcal{B}$, the Bernoulli distribution over the kernel $\mathbf{k}$:
|
| 161 |
+
|
| 162 |
+
$$
|
| 163 |
+
\omega_{w} \sim \mathbb{P} := \sum_{i = 1}^{K} w_{i} \mathbb{P}_{i}, \ \text{for } 1 \leq w \leq n, \qquad (u, v) \sim \mathcal{B}(\mathbf{k}(\omega_{u}, \omega_{v})), \ \text{for } 1 \leq u, v \leq n. \tag {5}
|
| 164 |
+
$$
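For concreteness, sampling a finite graph from such a model proceeds exactly as in Eq. (5); the sketch below uses a one-dimensional Gaussian mixture and a Gaussian kernel purely as illustrative choices.

```python
import numpy as np

def sample_mixture_graph(n, weights, means, stds, k, rng=None):
    """Sample n nodes from a 1-D Gaussian mixture and edges via Bernoulli(k(w_u, w_v))."""
    rng = rng or np.random.default_rng(0)
    z = rng.choice(len(weights), size=n, p=weights)      # latent component of each node
    omega = rng.normal(means[z], stds[z])                # node positions omega_w ~ P
    P = k(omega[:, None], omega[None, :])                # edge probabilities k(omega_u, omega_v)
    upper = np.triu(rng.random((n, n)) < P, 1)
    A = (upper | upper.T).astype(int)                    # symmetric adjacency, no self-loops
    return A, omega, z

# e.g.: A, omega, z = sample_mixture_graph(500, np.array([0.5, 0.5]),
#     np.array([-2.0, 2.0]), np.array([0.5, 0.5]), lambda u, v: np.exp(-(u - v) ** 2))
```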
|
| 165 |
+
|
| 166 |
+
Historically, some authors (Borgs & Chayes, 2017) have defined graphons as in Def. 3, where $\mathbb{P}$ is not necessarily a mixture. Under mild conditions on $\mathbb{P}$ , we assert that our simpler definition of a graphon is still equivalent to a random graph model. We leave the proof to App. D.
|
| 167 |
+
|
| 168 |
+
Proposition 1. Assume the CDF of $\mathbb{P}$ is strictly monotone. Then, the mixture model $\mathbb{K}(\Omega, \mathbb{P}, \mathbf{k})$ (Def. 3) is equivalent to the random graph model $\mathbb{W}([0,1], \mathbb{U}, \mathbf{W})$ where $\mathbf{W}: [0,1]^2 \to [0,1]$ is a graphon given by $\mathbf{W} = \mathbf{k} \circ \beta$ and $\beta: [0,1] \to \Omega$ is the inverse of the CDF of $\mathbb{P}$ .
|
| 169 |
+
|
| 170 |
+
Recall that Problem 2 prescribes a bandwidth $\lambda$ , and requires finding a uniqueness set for graphon signals with the prescribed bandwidth. Let $K$ be the number of eigenvalues of $\mathbf{W}$ which are smaller than $\lambda$ (i.e., $K = \sup \{k \mid \lambda_k < \lambda\}$ ). The following result shows that $K$ is precisely the number of elements or samples that we need to add to the graphon uniqueness set.
|
| 171 |
+
|
| 172 |
+
Proposition 2. There exists a set of functions $\{f_i\}_{i=1}^K$ , called frames, such that for any graphon signal $X \in \mathrm{PW}_{\lambda}(\mathbf{W})$ there is a unique reconstruction of $X$ from samples $\{\langle f_i, X \rangle\}_{i=1}^K$ .
|
| 173 |
+
|
| 174 |
+
To see why this result is possible, recall that if $X \in \mathrm{PW}_{\lambda}(\mathbf{W})$ for some $\lambda < \lambda_{K+1}$ then $X$ is a linear combination of $K$ eigenfunctions $\{\varphi_i\}_{i=1}^K$ corresponding to $\{\lambda_i\}_{i=1}^K$ . Therefore, it suffices to calculate $K$ coefficients $\mathbf{c} = (c_i)_{i=1}^K$ by forming a full rank system (if one exists), which can then be solved via Gaussian elimination:
|
| 175 |
+
|
| 176 |
+
$$
|
| 177 |
+
\left( \begin{array}{c c c c} \langle f _ {1}, \varphi_ {1} \rangle & \langle f _ {1}, \varphi_ {2} \rangle & \ldots & \langle f _ {1}, \varphi_ {K} \rangle \\ \langle f _ {2}, \varphi_ {1} \rangle & \langle f _ {2}, \varphi_ {2} \rangle & \ldots & \langle f _ {2}, \varphi_ {K} \rangle \\ \vdots & \vdots & \ldots & \vdots \\ \langle f _ {K}, \varphi_ {1} \rangle & \langle f _ {K}, \varphi_ {2} \rangle & \ldots & \langle f _ {K}, \varphi_ {K} \rangle \end{array} \right) \mathbf {c} = \left( \begin{array}{c} \langle f _ {1}, X \rangle \\ \langle f _ {2}, X \rangle \\ \vdots \\ \langle f _ {K}, X \rangle \end{array} \right)
|
| 178 |
+
$$
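A small numerical sketch of this reconstruction (assuming, for illustration, that the frames $f_i$, the eigenfunctions $\varphi_i$, and the signal $X$ are available as values on a common grid):

```python
import numpy as np

def reconstruct_bandlimited(frames, eigfuncs, X, du):
    """frames: (K, n) frame functions f_i sampled on a grid; eigfuncs: (K, n)
    eigenfunctions phi_i; X: (n,) bandlimited signal; du: grid cell width.
    Solves the K x K system above for c and returns the reconstructed signal."""
    A = frames @ eigfuncs.T * du         # A[i, j] = <f_i, phi_j>
    b = frames @ X * du                  # b[i]    = <f_i, X>
    c = np.linalg.solve(A, b)            # Gaussian elimination on the full-rank system
    return eigfuncs.T @ c                # X = sum_j c_j phi_j
```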
|
| 179 |
+
|
| 180 |
+
The next result tells us that different choices of mixture components and $\mathbf{k}$ result in frames with different approximation quality. Specifically, the approximation quality is a function of how well-separated the components in $\mathbb{P}$ are with respect to $\mathbf{k}$ and is measured quantitatively by a difficulty function $\phi (\mathbb{P},K)$ (Schiebinger et al., 2015). E.g., if there are repeated components in the mixture, or a bimodal component, we expect $\phi$ to be high.
|
| 181 |
+
|
| 182 |
+
Proposition 3. When $\mathbf{W}$ is viewed as a mixture model of random graphs $\mathbb{K}(\Omega, \mathbb{P}, \mathbf{k})$ with $K$ components $\{\mathbb{P}_i\}_{i=1}^K$, the square-root kernelized densities $\{q_i := \sqrt{\int_{\Omega} \mathbf{k}(\omega, \cdot)\, \mathrm{d}\mathbb{P}_i(\omega)}\}_{i=1}^K$ are a good frame approximation. Quantitatively, let $\Phi$ be the subspace spanned by the eigenfunctions of $\mathbf{W}$ corresponding to $\{\lambda_i\}_{i=1}^K$, and $\mathbf{Q}$ the subspace spanned by the $\{q_i\}_{i=1}^K$. Then:
|
| 183 |
+
|
| 184 |
+
$$
|
| 185 |
+
\left\| \Pi_{\Phi} - \Pi_{\mathbf{Q}} \right\|_{HS} \leq 16 \sqrt{12 + b}\, \phi(\mathbb{P}, \mathbf{k}), \tag {6}
|
| 186 |
+
$$
|
| 187 |
+
|
| 188 |
+
where $\| .\|_{HS}$ is the Hilbert-Schmidt norm, $\Pi$ is the projection operator, and the difficulty function $\phi$ and the boundedness parameter $b$ are as in (Schiebinger et al., 2015) and App. $F^1$ .
|
| 189 |
+
|
| 190 |
+
Next, we connect the square-root kernelized density $q_{i}$ back to graphon uniqueness sets. The following result shows that when the $q_{i}$ 's align with eigenfunctions of $\mathbf{W}$ , there is a clear correspondence between the uniqueness set and the mixture components. The proof is in App. D.
|
| 191 |
+
|
| 192 |
+
Theorem 4. Fix a small $\epsilon >0$ . Assuming that $\| q_i - \varphi_i\|_{L^2 (\mathbb{P}_i)} < \epsilon$ for all $i\in [K]$ ; and that there exists a set of disjoint measurable subsets $\{A_i\subset [0,1]\}_{i = 1}^K$ such that EITHER:
|
| 193 |
+
|
| 194 |
+
- the kernelized density $p_i \coloneqq \int_{\mathcal{X}} \mathbf{k}(\omega, \cdot) \, \mathrm{d}\mathbb{P}_i(\omega)$ is concentrated around an interval $A_i \subset [0,1]$ in the sense that $p_i(A_i) - K^2 \epsilon^2 > \sum_{i' \neq i} p_i(A_{i'}) / (K - 1)^2$ for each $i \in [K]$ , OR
|
| 195 |
+
- for each $i \in [K]$ , the likelihood ratio statistic is large: $\frac{p_i(A_i) - K^2\epsilon^2}{\sum_{k \neq i} p_k(A_i)} > 1 / (K - 1)^2$ ,
|
| 196 |
+
|
| 197 |
+
then the set $U = \bigcup_{i=1}^{K} A_i$ is a uniqueness set for $\mathrm{PW}_{\lambda}(\mathbf{W})$ for any $\lambda \in (\lambda_K, \lambda_{K+1})$ .
|
| 198 |
+
|
| 199 |
+
Put together, the above results culminate in a method to find uniqueness sets by recovering the mixture components. However, this is still cumbersome to implement due to the continuous nature of graphons. Next we explore an efficient approach to find approximate uniqueness sets for a graphon by finding uniqueness sets for a finite graph sampled from (and thus converging to) the graphon.
|
| 200 |
+
|
| 201 |
+
Gaussian elimination (GE) on (approximations) of graphon eigenfunctions returns uniqueness sets for finite sampled graphs. We now derive a scheme to sample points $\omega$ from a uniqueness set $U$ with high probability. Assume that from $\mathbb{W} = \mathbb{K}$ , we sample $n$ points to collect a dataset $\{\omega_i\}_{i=1}^n$ . From a graphon perspective, these points are nodes in a finite graph $\mathbf{G}_n$ of size $n$ where the edges are sampled with probability given by $\mathbf{W}$ . From a mixture model perspective, the points $\omega_i \in \Omega$ are associated with a latent variable $\{z_i \in [K]\}_{i=1}^n$ that indicates the component the sample came from. By building on a result by Schiebinger et al. (2015) on the geometry of spectral clustering, we can unify these two perspectives: running a variant of GE over the Laplacian eigenvectors of a large enough $\mathbf{G}_n$ returns a sample from each mixture component with high probability.
|
| 202 |
+
|
| 203 |
+
Theorem 5. For any $t > c_0 \sqrt{\phi_n(\delta)} w_{\min}^{-3}$ , GE over the Laplacian eigenvectors of $\mathbf{G}_n$ recovers $K$ samples distributed according to each of the mixture components $\mathbb{P}_i$ , $1 \leq i \leq K$ , with probability at least
|
| 204 |
+
|
| 205 |
+
$$
|
| 206 |
+
\left(1 - 8 K^{2} \exp\left(- \frac{c_{2} n \delta^{4}}{\delta^{2} + S_{\max} + C}\right)\right) \frac{(1 - \alpha)^{K} (N - n_{\min})^{K}}{(N - (1 + \alpha) n_{\min})^{K}}, \quad \text{with } n_{\min} = \min_{m \in [K]} |\{i: z_{i} = m\}|, \tag {7}
|
| 207 |
+
$$
|
| 208 |
+
|
| 209 |
+
where $\alpha$ is upper bounded as $\alpha \leq c_{1}\phi_{n}(\delta) / w_{min}^{3 / 2} + \psi (2t)$ . The constants $c_{1},c_{2},w_{min}$ and $\delta$ , and the functions $C,S,\phi_n$ and $\psi$ are as in (Schiebinger et al., 2015) and App. $F$ .
|
| 210 |
+
|
| 211 |
+
Prop. 5 in App. D works out a small example, corresponding to a case where the $\mathbb{P}_i$ 's are uniformly distributed on disjoint domains. There, we show that by using GE, we end up solving an eigenvector problem of order $K$ , the number of components, instead of the naive order $n \gg K$ .
|
| 212 |
+
|
| 213 |
+
Intuitively, for well-separated mixture models, embedding the dataset via the top Laplacian eigenvectors returns an embedded dataset that exhibits an almost orthogonal structure: points that share the same latent variable (i.e., which came from the same mixture component) have a high probability of lying along the same axis in the orthogonal system; while points sampled from different distributions tend to be positioned orthogonally. GE with proper pivoting on $\mathbf{G}_n$ is thus a good heuristic for sampling uniqueness sets, as it selects points that are almost orthogonal to each other, which is
|
| 214 |
+
|
| 215 |
+
equivalent to picking a sample from each component. The significance of this result is twofold: it bridges graphon sampling and kernelized spectral clustering; and the almost orthogonal structure ensures that the set sampled via GE is a uniqueness set for large graphs sampled from $\mathbf{W}$ with high probability. This is stated in the following proposition, which we prove in App. D.
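One simple way to implement GE with proper pivoting on the eigenvector matrix is column-pivoted QR, which greedily selects the most mutually independent node embeddings; the sketch below uses this as a stand-in (an assumption about the implementation, not the exact procedure of Schiebinger et al. (2015)).

```python
import numpy as np
from scipy.linalg import qr

def pivoted_ge_sampling(L_bar, K):
    """Select K nodes by pivoted elimination on the top-K Laplacian eigenvectors of G_n."""
    _, V = np.linalg.eigh(L_bar)
    V_K = V[:, :K]                          # embed each node by the K lowest-frequency eigenvectors
    _, _, piv = qr(V_K.T, pivoting=True)    # pivots pick nearly orthogonal (well-separated) nodes
    return piv[:K]
```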
|
| 216 |
+
|
| 217 |
+
Proposition 4. Consider a graph sequence $\mathbf{G}_n \xrightarrow{n \to \infty} \mathbf{W}$. If there is a $\delta \in (0, \| \mathbf{k} \|_{\mathbb{P}} / (b\sqrt{2\pi}))$ such that the difficulty function<sup>3</sup> is small, i.e., $\phi_n(\delta) < \left(\frac{w_{\min}t}{(3/\pi + 1)c_0}\right)^2$, then with probability at least that in Thm. 5, there exists a minimum number of nodes $N$ such that, for all $n > N$, the sampled nodes form a uniqueness set for the finite graph $\mathbf{G}_n$. All quantities in the bound and additional assumptions are the same as in (Schiebinger et al., 2015) and App. F.
|
| 218 |
+
|
| 219 |
+
# 5 ALGORITHM
Motivated by Theorems 3-5, we propose a novel algorithm for efficient sampling of signals on large graphs via graphon signal sampling. When the regularity assumptions of our theorems are satisfied, this algorithm will generate a consistent sampling set.

Consider a graph $\mathbf{G}_n = (\mathcal{V},\mathcal{E})$ and signal $\mathbf{x}_n$ from which we want to sample a subgraph $\mathbf{G}_m$ and signal $\mathbf{x}_m$ with minimal loss of information (i.e., we would like the signal $\mathbf{x}_n$ to be uniquely represented on the sampled graph $\mathbf{G}_m$ ). The proposed algorithm consists of three steps:

(1) Represent $\mathbf{G}_n$ as its induced graphon $\mathbf{W}_n(\omega, \theta) = \sum_{i=1}^n \sum_{j=1}^n [\mathbf{A}_n]_{ij} \mathbb{I}(\omega \in I_i) \mathbb{I}(\theta \in I_j)$ where $I_1 \cup \ldots \cup I_n$ is the equipartition of $[0,1]$ into $n$ intervals.
(2) Define a coarser equipartition $I_1' \cup \ldots \cup I_q'$ , $q < n$ , of $[0, 1]$ . Given the bandwidth $\lambda$ of the signal $\mathbf{x}_n$ , sample a graphon uniqueness interval $\cup_{j=1}^{p} I_{i_j}'$ (Def. 2), $p < q$ , from $I_1' \cup \ldots \cup I_q'$ .
(3) Sample the graph $\mathbf{G}_m$ by sampling $r = \lfloor m / (p - 1)\rfloor$ points from each of the $I_{i_1}'$ , ..., $I_{i_{p - 1}}'$ in the graphon uniqueness set (and the remaining $m - (p - 1)r$ nodes from $I_{i_p}'$ ). By Prop. 4, this procedure yields a uniqueness set for $\mathbf{G}_n$ with high probability.

To realize (2), we develop a heuristic based on representing the graphon $\mathbf{W}_n$ on the partition $I_1' \cup \ldots \cup I_q'$ as a graph $\tilde{\mathbf{G}}_q$ with adjacency matrix given by $[ \tilde{\mathbf{A}}_q ]_{ij} = \int_{I_i'} \int_{I_j'} \mathbf{W}_n(x,y) \, \mathrm{d}x \, \mathrm{d}y$ . We then sample $p$ nodes from $\tilde{\mathbf{G}}_q$ (each corresponding to an interval $I_{i_j}' \subset I_1' \cup \ldots \cup I_q'$ ) using the graph signal sampling algorithm from (Anis et al., 2016). This algorithm is a greedy heuristic closely connected to GE and E-optimal sampling but without spectral computations.
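The following Python sketch illustrates steps (1)-(2) under simplifying assumptions: the coarse graph $\tilde{\mathbf{G}}_q$ is built by averaging adjacency blocks over a $q$-interval equipartition, and the $p$ intervals are picked with a simple greedy criterion (largest smallest singular value of the selected rows of the low-frequency Laplacian eigenvectors). The function names and the selection rule are illustrative stand-ins for the spectral-proxy method of Anis et al. (2016), not the paper's exact implementation.

```python
import numpy as np

def coarsen_adjacency(A, q):
    """Average the induced graphon of A over a q-interval equipartition of [0, 1]."""
    n = A.shape[0]
    edges = np.linspace(0, n, q + 1).astype(int)      # interval boundaries (node indices)
    A_q = np.zeros((q, q))
    for i in range(q):
        for j in range(q):
            block = A[edges[i]:edges[i + 1], edges[j]:edges[j + 1]]
            A_q[i, j] = block.mean() if block.size else 0.0
    return A_q

def greedy_interval_selection(A_q, p, k):
    """Greedily pick p intervals keeping the selected rows of the first k
    Laplacian eigenvectors well conditioned (an E-optimal-style stand-in)."""
    L = np.diag(A_q.sum(axis=1)) - A_q
    _, V = np.linalg.eigh(L)
    Vk = V[:, :k]                                     # k lowest-frequency eigenvectors
    selected = []
    for _ in range(p):
        best, best_val = None, -np.inf
        for cand in range(A_q.shape[0]):
            if cand in selected:
                continue
            rows = Vk[selected + [cand], :]
            val = np.linalg.svd(rows, compute_uv=False)[-1]   # smallest singular value
            if val > best_val:
                best, best_val = cand, val
        selected.append(best)
    return selected
```

With $q \ll n$, the greedy selection runs on a $q \times q$ matrix rather than on $\mathbf{G}_n$ itself, which is the source of the runtime savings discussed below.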
The sampling of $m$ nodes from $I_{i_1}' \cup \ldots \cup I_{i_p}'$ in step (3) is flexible in the way nodes in each interval are sampled. Random sampling is possible, but one could design more elaborate schemes based on local node information. To increase node diversity, we employ a scheme using a local clustering algorithm based on the localized heat kernel PageRank (Chung & Simpson, 2018) to cluster the graph nodes into communities, and then sample an equal number of nodes from each community.
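As a small illustration of this step, the sketch below draws an equal number of nodes from each community within one sampled interval; it assumes the community labels have already been produced by some clustering routine (the heat kernel PageRank clustering itself is not reproduced here), and the helper name is hypothetical.

```python
import numpy as np

def sample_nodes_per_interval(interval_nodes, labels, r, rng=None):
    """Draw roughly r nodes from one interval, split evenly across communities."""
    rng = np.random.default_rng() if rng is None else rng
    interval_nodes = np.asarray(interval_nodes)
    labels = np.asarray(labels)
    communities = np.unique(labels)
    per_comm = max(r // len(communities), 1)
    chosen = []
    for c in communities:
        members = interval_nodes[labels == c]
        k = min(per_comm, len(members))
        chosen.extend(rng.choice(members, size=k, replace=False).tolist())
    return chosen[:r]
```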
Runtime analysis. The advantages of algorithm (1)-(3) w.r.t. conventional graph signal sampling algorithms (e.g., (Anis et al., 2016; Marques et al., 2015)) are twofold. First, if $q \ll n$ , (2) is much cheaper. E.g., the heuristic from (Anis et al., 2016) now costs $O(pq^2)$ as opposed to $O(p|\mathcal{E}|)$ . If step (3) uses uniform sampling then our method runs in $O(|\mathcal{E}| + pq^2 + m)$ ; whereas obtaining a uniqueness set of size $m$ from Anis et al. (2016) requires $O(m|\mathcal{E}|)$ time. Second, given the graphon $\mathbf{W}$ , we only need to calculate the sampled intervals once and reuse them to find approximate uniqueness sets for any graph $\mathbf{G}_n$ generated from $\mathbf{W}$ as described in Section 2.2, provided that their node labels $\omega_1, \ldots, \omega_n$ (or at least their order) are known. Thus we save time on future sampling computations.

# 6 NUMERICAL EXPERIMENTS

Transferability for node classification. We use our sampling algorithm to subsample smaller graphs for training GNNs that are later transferred for inference on the full graph. We consider node classification on citation networks (Yang et al., 2016) and compare the accuracy of GNNs trained on the full graph, on graphs subsampled following the proposed algorithm, and on graphs sampled at random. To ablate the effect of different parameters, we consider a base scenario and 3 variations.

Table 1: Accuracy and runtime for models trained on the full graph, a graphon-subsampled graph, and a subgraph with randomly sampled nodes of the same size as the graphon-subsampled graph. The columns correspond to the base scenario and to doubling the number of communities, doubling $r$ , and doubling the eigenvalue index.
<table><tr><td colspan="5">Cora</td><td colspan="4">CiteSeer</td></tr><tr><td></td><td>base</td><td>x2 comm.</td><td>x2 nodes per int.</td><td>x2 eig.</td><td>base</td><td>x2 comm.</td><td>x2 nodes per int.</td><td>x2 eig.</td></tr><tr><td>full graph</td><td>0.86 ± 0.02</td><td>0.86 ± 0.01</td><td>0.86 ± 0.01</td><td>0.85 ± 0.01</td><td>0.80 ± 0.01</td><td>0.81 ± 0.01</td><td>0.79 ± 0.01</td><td>0.79 ± 0.02</td></tr><tr><td>graphon suppl.</td><td>0.49 ± 0.09</td><td>0.56 ± 0.09</td><td>0.73 ± 0.05</td><td>0.51 ± 0.09</td><td>0.56 ± 0.06</td><td>0.56 ± 0.05</td><td>0.67 ± 0.03</td><td>0.51 ± 0.10</td></tr><tr><td>random suppl.</td><td>0.46 ± 0.09</td><td>0.52 ± 0.17</td><td>0.71 ± 0.05</td><td>0.57 ± 0.14</td><td>0.51 ± 0.08</td><td>0.48 ± 0.11</td><td>0.67 ± 0.03</td><td>0.52 ± 0.03</td></tr><tr><td colspan="5">PubMed</td><td colspan="4">runtime (s)</td></tr><tr><td></td><td>base</td><td>x2 comm.</td><td>x2 nodes per int.</td><td>x2 eig.</td><td>Cora</td><td>CiteSeer</td><td>PubMed</td><td></td></tr><tr><td>full graph</td><td>0.76 ± 0.02</td><td>0.77 ± 0.02</td><td>0.77 ± 0.03</td><td>0.77 ± 0.01</td><td>0.9178</td><td>0.8336</td><td>0.8894</td><td></td></tr><tr><td>graphon suppl.</td><td>0.71 ± 0.07</td><td>0.67 ± 0.06</td><td>0.75 ± 0.05</td><td>0.69 ± 0.07</td><td>0.3091</td><td>0.2578</td><td>0.3204</td><td></td></tr><tr><td>random suppl.</td><td>0.69 ± 0.07</td><td>0.71 ± 0.07</td><td>0.74 ± 0.07</td><td>0.72 ± 0.04</td><td>0.3131</td><td>0.2514</td><td>0.3223</td><td></td></tr></table>
Table 2: Accuracy and PE compute runtime on MalNet-Tiny w/o PEs, w/ PEs computed on full graph, w/ PEs computed on graphon-sampled subgraph (removing or not isolated nodes), and w/ PEs computed on subgraph with randomly sampled nodes (removing or not isolated nodes).
<table><tr><td rowspan="2"></td><td rowspan="2">no PEs</td><td rowspan="2">full graph PEs</td><td colspan="2">graphon sampl. PEs</td><td colspan="2">randomly sampl. PEs</td><td rowspan="2">PE compute runtime (s)</td></tr><tr><td>w/ isolated</td><td>w/o</td><td>w/ isolated</td><td>w/o</td></tr><tr><td>mean</td><td>0.26±0.03</td><td>0.43±0.07</td><td>0.29±0.06</td><td>0.33±0.06</td><td>0.28±0.07</td><td>0.27±0.07</td><td>full 12.40</td></tr><tr><td>max</td><td>0.30</td><td>0.51</td><td>0.40</td><td>0.42</td><td>0.35</td><td>0.37</td><td>sampl. 0.075</td></tr></table>
For Cora and CiteSeer, the base scenario fixes the cutoff frequency at the 5th smallest eigenvalue, $\lambda_5$ , of the full graph. It partitions [0,1] into $q = 20$ intervals and samples $p = 10$ intervals from this partition in step (2). In step (3), it clusters the nodes in each sampled interval into 2 communities and samples $r = 20$ nodes from each sampled interval, 10 per community. For PubMed, the parameters are the same except $q = 30$ and $p = 15$ . The three variations are doubling (i) the number of communities, (ii) $r$ , and (iii) the eigenvalue index. Further details are in App. G.

Table 1 reports results for 5 realizations. Graphon sampling performs better than random sampling in the base case, where the subsampled graphs have less than $10\%$ of the full graph size. Increasing the number of communities improves performance for Cora and widens the gap between graphon and random sampling for both Cora and CiteSeer. For PubMed, it tips the scale in favor of random sampling, which is not very surprising since PubMed has fewer classes. When we double $r$ , the difference between graphon and random sampling shrinks as expected. Finally, when we increase $\lambda$ , graphon sampling performs worse than random sampling. This could be caused by the sample size being too small to preserve the bandwidth, thus worsening the quality of the sampling sets.
Positional encodings for graph classification. Many graph positional encodings (PEs) for GNNs and graph transformers use the first $K$ normalized Laplacian eigenvectors (or their learned representations) as input signals (Dwivedi et al., 2021; Lim et al., 2022); they provide additional localization information for each node. While they can greatly improve performance, they are expensive to compute for large graphs. In this experiment, we show how our algorithm can mitigate this issue. We sample subgraphs for which the Laplacian eigenvectors are computed, and then use these eigenvectors as PEs for the full-sized graph by zero-padding them at the non-sampled nodes.

We consider the MalNet-Tiny dataset (Freitas et al., 2021), modified to anonymize the node features and pruned to only keep large graphs (with at least 4500 nodes). After balancing the classes, we obtain a dataset with 216 graphs and 4 classes on which we compare four models: (i) without PEs, and with PEs calculated from (ii) the full-sized graph, (iii) a graphon-sampled subgraph, and (iv) a randomly sampled subgraph. For (iii) and (iv), we also consider the case where isolated nodes are removed from the sampled graphs to obtain more meaningful PEs.

We report results for 10 random realizations in Table 2. The PEs from the graphon-subsampled graphs were not as effective as the PEs from the full-sized graph, but still improved performance with respect to the model without PEs, especially without isolated nodes. In contrast, on average, PEs from subgraphs with randomly sampled nodes did not yield as significant an improvement, and displayed only slightly better accuracy than random guessing when isolated nodes were removed.

# ACKNOWLEDGEMENTS

TL and SJ were supported by NSF awards 2134108 and CCF-2112665 (TILOS AI Institute), and Office of Naval Research grant N00014-20-1-2023 (MURI ML-SCOPE). This work was done while LR was at MIT, supported by METEOR and FODSI fellowships.
# REFERENCES

David J. Aldous. Representations for partially exchangeable arrays of random variables. Journal of Multivariate Analysis, 11(4):581-598, 1981. ISSN 0047-259X. doi: https://doi.org/10.1016/0047-259X(81)90099-3. URL https://www.sciencedirect.com/science/article/pii/0047259X81900993.
A. Anis, A. Gadde, and A. Ortega. Efficient sampling set selection for bandlimited graph signals using graph spectral proxies. IEEE Trans. Signal Process., 64(14):3775-3789, 2016.
H. Avron and C. Boutsidis. Faster subset selection for matrices and applications. SIAM Journal on Matrix Analysis and Applications, 34(4):1464-1499, 2013.
A. L. Barabási, R. Albert, and H. Jeong. Scale-free characteristics of random networks: the topology of the world-wide web. Physica A: Statistical Mechanics and its Applications, 281(1):69-77, 2000. URL https://www.sciencedirect.com/science/article/pii/S0378437100000182.
C. Borgs and J. Chayes. Graphons: A nonparametric method to model, estimate, and design algorithms for massive networks. In Proceedings of the 2017 ACM Conference on Economics and Computation, pp. 665-672, 2017.
C. Borgs, J. T. Chayes, L. Lovász, V. T. Sós, and K. Vesztergombi. Convergent sequences of dense graphs I: Subgraph frequencies, metric properties and testing. Adv. Math., 219(6):1801-1851, 2008.
C. Borgs, J. Chayes, and A. Smith. Private graphon estimation for sparse graphs. Neural Inform. Process. Syst., 28, 2015.
J. Cervino, L. Ruiz, and A. Ribeiro. Learning by transference: Training graph neural networks on growing graphs. IEEE Trans. Signal Process., 2023.
L. F. O. Chamon and A. Ribeiro. Greedy sampling of graph signals. IEEE Trans. Signal Process., 66:34-47, 2017.
S. Chen, R. Varma, A. Sandryhaila, and J. Kovacevic. Discrete signal processing on graphs: Sampling theory. IEEE Trans. Signal Process., 63:6510-6523, 2015.
F. Chung and O. Simpson. Computing heat kernel pagerank and a local clustering algorithm. European Journal of Combinatorics, 68:96-119, 2018.
V. P. Dwivedi, A. T. Luu, T. Laurent, Y. Bengio, and X. Bresson. Graph neural networks with learnable structural and positional representations. arXiv:2110.07875 [cs.LG], 2021.
J. Eldridge, M. Belkin, and Y. Wang. Graphons, mergeons, and so on! Neural Inform. Process. Syst., 29, 2016.
S. Freitas, R. Duggal, and D. H. Chau. MalNet: A large-scale image database of malicious software. arXiv:2102.01072 [cs.LG], 2021.
Douglas N. Hoover. Relations on probability spaces and arrays of exchangeable random variables. Preprint, Institute for Advanced Study, 1979.
J. Jumper, R. Evans, A. Pritzel, T. Green, M. Figurnov, O. Ronneberger, K. Tunyasuvunakool, R. Bates, A. Žídek, A. Potapenko, A. Bridgland, C. Meyer, S. A. A. Kohl, A. J. Ballard, A. Cowie, B. Romera-Paredes, S. Nikolov, R. Jain, J. Adler, T. Back, S. Petersen, D. Reiman, E. Clancy, M. Zielinski, M. Steinegger, M. Pacholska, T. Berghammer, S. Bodenstein, D. Silver, O. Vinyals, A. W. Senior, K. Kavukcuoglu, P. Kohli, and D. Hassabis. Highly accurate protein structure prediction with AlphaFold. Nature, 596(7873):583-589, 2021. URL https://doi.org/10.1038/s41586-021-03819-2.
D. Kempe, J. Kleinberg, and É. Tardos. Maximizing the spread of influence through a social network. In ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), pp. 137-146. Association for Computing Machinery, 2003.
J. M. Kleinberg. The small-world phenomenon: An algorithmic perspective. In Symposium on Theory of Computing (STOC), 2000. URL https://api.semanticscholar.org/CorpusID:221559836.
S. Krishnagopal and L. Ruiz. Graph neural tangent kernel: Convergence on large graphs. Int. Conf. Mach. Learning, 202:1-15, 2023.
T. Le and S. Jegelka. Limits, approximation and size transferability for GNNs on sparse graphs via graphops. arXiv:2306.04495 [cs.LG], 2023.
C. Li, S. Jegelka, and S. Sra. Polynomial time algorithms for dual volume sampling. Neural Inform. Process. Syst., 30, 2017.
D. Lim, J. Robinson, L. Zhao, T. Smidt, S. Sra, H. Maron, and S. Jegelka. Sign and basis invariant networks for spectral graph representation learning. arXiv:2202.13013 [cs.LG], 2022.
L. Lovász. Large Networks and Graph Limits, volume 60. American Mathematical Society, 2012.
P. Ma, M. Mahoney, and B. Yu. A statistical perspective on algorithmic leveraging. In Int. Conference on Machine Learning (ICML), pp. 91-99. PMLR, 2014.
A. G. Marques, S. Segarra, G. Leus, and A. Ribeiro. Sampling of graph signals with successive local aggregations. IEEE Trans. Signal Process., 64:1832-1843, 2015.
S. Maskey, R. Levie, Y. Lee, and G. Kutyniok. Generalization analysis of message passing neural networks on large random graphs. Neural Inform. Process. Syst., 35:4805-4817, 2022.
A. Ortega, P. Frossard, J. Kovačević, J. M. F. Moura, and P. Vandergheynst. Graph signal processing: Overview, challenges, and applications. Proc. IEEE, 106(5):808-828, 2018.
Alejandro Parada-Mayorga and Alejandro Ribeiro. Sampling and uniqueness sets in graphon signal processing, 2024.
I. Pesenson. Sampling in Paley-Wiener spaces on combinatorial graphs. Transactions of the American Mathematical Society, 360(10):5603-5627, 2008.
F. Pukelsheim. Optimal design of experiments. SIAM, 2006.
A. Rudi, D. Calandriello, L. Carratino, and L. Rosasco. On fast leverage score sampling and optimal learning. Neural Inform. Process. Syst., 31, 2018.
L. Ruiz, L. F. O. Chamon, and A. Ribeiro. Graphon neural networks and the transferability of graph neural networks. In 34th Neural Inform. Process. Syst., Vancouver, BC (Virtual), 6-12 Dec. 2020a. NeurIPS Foundation.
L. Ruiz, L. F. O. Chamon, and A. Ribeiro. The Graphon Fourier Transform. In 45th IEEE Int. Conf. Acoust., Speech and Signal Process., pp. 5660-5664, Barcelona, Spain (Virtual), 4-8 May 2020b. IEEE.
L. Ruiz, L. F. O. Chamon, and A. Ribeiro. Graphon signal processing. IEEE Trans. Signal Process., 69:4961-4976, 2021.
A. Sandryhaila and J. M. F. Moura. Discrete signal processing on graphs: Frequency analysis. IEEE Trans. Signal Process., 62:3042-3054, June 2014.
G. Schiebinger, M. J. Wainwright, and B. Yu. The geometry of kernelized spectral clustering. The Annals of Statistics, 43(2), Apr. 2015. URL https://doi.org/10.1214%2F14-aos1283.
D. I. Shuman, S. K. Narang, P. Frossard, A. Ortega, and P. Vandergheynst. The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains. IEEE Signal Process. Mag., 30(3):83-98, May 2013.
D. Spielman and N. Srivastava. Graph sparsification by effective resistances. In Proceedings of the 40th Annual ACM Symposium on Theory of Computing, pp. 563-568, 2008.
L. Takac and M. Zábovský. Data analysis in public social networks. International Scientific Conference and International Workshop Present Day Trends of Innovations, pp. 1-6, Jan. 2012.
Z. Yang, W. Cohen, and R. Salakhudinov. Revisiting semi-supervised learning with graph embeddings. In Int. Conf. Mach. Learning, pp. 40-48. PMLR, 2016.
R. Ying, R. He, K. Chen, P. Eksombatchai, W. L. Hamilton, and J. Leskovec. Graph convolutional neural networks for web-scale recommender systems. In ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), KDD '18, pp. 974-983. Association for Computing Machinery, 2018. URL https://doi.org/10.1145/3219819.3219890.
M. Zitnik, M. Agrawal, and J. Leskovec. Modeling polypharmacy side effects with graph convolutional networks. Bioinformatics, 34(13):i457-i466, 06 2018. URL https://doi.org/10.1093/bioinformatics/bty294.
# A EXTRA NOTATIONS

For some probability measure $\mathbb{Q}$ , and some functions in the same $L^2(\mathbb{Q})$ spaces, denote by $\langle \cdot, \cdot \rangle_{L^2(\mathbb{Q})}$ the $L^2(\mathbb{Q})$ inner product and $\| \cdot \|_{L^2(\mathbb{Q})}$ the induced $L^2$ norm. We will also abuse notation and write $L^2(D)$ for some set $D$ that is a closed subset of the real line to mean the $L^2$ space supported on $D$ under the usual Lebesgue measure. When the measure space is clear, we will also drop it and simply write $L^2$ .

For some set of functions $\{f_1, \ldots, f_K\}$ , $\{g_1, \ldots, g_K\}$ where $f_i$ and $g_j$ are in the same $L^2$ space, denote by $((f_i, g_j))_{i,j=1}^K$ the $K \times K$ matrix:

$$
\left(\left(f _ {i}, g _ {j}\right)\right) _ {i, j = 1} ^ {K} = \left[ \begin{array}{c c c c} \left\langle f _ {1}, g _ {1} \right\rangle_ {L ^ {2}} & \left\langle f _ {1}, g _ {2} \right\rangle_ {L ^ {2}} & \dots & \left\langle f _ {1}, g _ {K} \right\rangle_ {L ^ {2}} \\ \left\langle f _ {2}, g _ {1} \right\rangle_ {L ^ {2}} & \left\langle f _ {2}, g _ {2} \right\rangle_ {L ^ {2}} & \dots & \left\langle f _ {2}, g _ {K} \right\rangle_ {L ^ {2}} \\ \vdots & \vdots & \dots & \vdots \\ \left\langle f _ {K}, g _ {1} \right\rangle_ {L ^ {2}} & \left\langle f _ {K}, g _ {2} \right\rangle_ {L ^ {2}} & \dots & \left\langle f _ {K}, g _ {K} \right\rangle_ {L ^ {2}} \end{array} \right] \tag {8}
$$
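As a quick numerical companion to Eq. (8), the sketch below approximates the $L^2([0,1])$ inner products by a Riemann sum on a uniform grid; the function name and the quadrature rule are illustrative choices, not part of the paper.

```python
import numpy as np

def gram_matrix(fs, gs, num_points=1000):
    """K x K matrix of approximate L^2([0,1]) inner products <f_i, g_j>."""
    t = np.linspace(0.0, 1.0, num_points)
    F = np.stack([f(t) for f in fs])        # K x num_points samples of the f_i
    G = np.stack([g(t) for g in gs])        # K x num_points samples of the g_j
    return F @ G.T / num_points             # entry (i, j) ~ int_0^1 f_i(t) g_j(t) dt

# Example: the constant function and the first cosine mode are (nearly) orthonormal.
fs = [lambda t: np.ones_like(t), lambda t: np.sqrt(2) * np.cos(np.pi * t)]
print(np.round(gram_matrix(fs, fs), 3))
```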
# B ADDITIONAL BACKGROUND

In this section, we revisit operator theory arguments in our construction of various graphon objects (degree function, normalized graphon, graphon shift operators and normalized graphon Laplacian) from Section 2.2.

Recall that a graphon $\mathbf{W}$ is a bounded, symmetric and $L^2$ -measurable function from $[0,1]^2 \to [0,1]$ and thus induces a Hilbert-Schmidt kernel with open connected domain $\mathbf{W}:(0,1)^2 \to [0,1]$ . We will abuse notation and refer to both of these objects as graphons. The associated Hilbert-Schmidt integral operator for $\mathbf{W}$ is:

$$
H: L ^ {2} ([ 0, 1 ]) \rightarrow L ^ {2} ([ 0, 1]): X \mapsto \left(v \mapsto \int_ {0} ^ {1} \mathbf {W} (u, v) X (u) \mathrm {d} u\right), \tag {9}
$$

where the resulting function is understood to be in $L^2$ . When $\mathbf{W}$ is viewed as the adjacency matrix of a graph with infinitely many vertices, if $X$ is taken to assign each node a feature in [0, 1], then $H$ is understood as a message-passing operator that aggregates neighboring features into each node. Note that measurable functions are only defined up to a set of measure 0.
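A rough numerical sketch of Eq. (9) (our own discretization, not part of the paper): the graphon and the signal are sampled on a uniform grid of $[0,1]$ and the integral is replaced by a Riemann sum.

```python
import numpy as np

def apply_graphon_operator(W, X, num_points=500):
    """Approximate (HX)(v) = int_0^1 W(u, v) X(u) du on a uniform grid of [0, 1]."""
    u = np.linspace(0.0, 1.0, num_points)
    W_mat = W(u[:, None], u[None, :])       # samples W(u_i, v_j)
    return W_mat.T @ X(u) / num_points      # Riemann-sum approximation of the integral

# Example with a separable toy graphon and a constant signal: (HX)(v) ~ v / 2.
HX = apply_graphon_operator(lambda u, v: u * v, lambda u: np.ones_like(u))
```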
In the paper, we consider a normalized version of $\mathbf{W}$ :

$$
\overline {{\mathbf {W}}} (u, v) = \left\{ \begin{array}{l l} \mathbf {W} (u, v) / \sqrt {\mathbf {d} (u) \mathbf {d} (v)} & \text {if } \mathbf {d} (u) \neq 0 \text { and } \mathbf {d} (v) \neq 0 \\ 0 & \text {otherwise,} \end{array} \right. \tag {10}
$$

where $\mathbf{d} \in L^2([0,1])$ is the degree function:

$$
\mathbf {d} (u) = \int_ {0} ^ {1} \mathbf {W} (u, v) \mathrm {d} v. \tag {11}
$$

It is clear that $\overline{\mathbf{W}}$ is also bounded, symmetric and $L^2$ -measurable. The corresponding HS operator is denoted $\overline{H}$ . When the kernel is symmetric and has bounded $L^2([0,1]^2)$ norm, Hilbert-Schmidt operator theory tells us that $H$ is continuous, compact and self-adjoint.

The spectral theory of HS operators then tells us that $H$ and $\overline{H}$ have countable discrete spectra $\{\lambda_1 \geq \lambda_2 \geq \ldots\}$ , $\{\overline{\lambda}_1 \geq \overline{\lambda}_2 \geq \ldots\}$ , whose essential spectrum is a single accumulation point 0 (Lovász, 2012). Furthermore, each nonzero eigenvalue has finite multiplicity (Lovász, 2012). As compact self-adjoint operators, $H$ and $\overline{H}$ admit a spectral theorem:
$$
\mathbf {W} (u, v) \sim \sum_ {k \in \mathbb {N}} \lambda_ {k} \varphi_ {k} (u) \varphi_ {k} (v), \tag {12}
$$

for some eigenfunctions $\{\varphi_k\}_{k\in \mathbb{N}},\| \varphi_k\|_{L^2} = 1$ (Lovász, 2012).

Recall that Mercer's theorem asserts that a continuous positive semi-definite kernel $k$ admits a spectral decomposition: there exists a set of orthonormal functions $\{p_i\}_{i \in \mathbb{N}}$ and a countable set of eigenvalues $\{\lambda_i\}_{i \in \mathbb{N}}$ such that $\sum_{i=1}^{\infty} \lambda_i p_i(u)p_i(v) = k(u,v)$ , where the convergence is absolute and uniform. For measurable kernels (graphons), Eq. (12) only converges in the $L^2$ norm. However, the sequence of eigenvalues admits a stronger $\ell^2$ convergence:

$$
\sum_ {i = 1} ^ {\infty} \lambda_ {i} ^ {2} = \| \mathbf {W} \| _ {2} ^ {2}. \tag {13}
$$

Note that by our normalization, $\| \overline{\mathbf{W}} \|_2^2 \leq 1$ and thus $|\bar{\lambda}_i| \leq 1$ for all $i \in \mathbb{N}$ . Finally, we defined the normalized Laplacian operator $\overline{\mathcal{L}} = \mathrm{Id} - \overline{H}$ . It is then straightforward to see that the spectrum of $\overline{\mathcal{L}}$ is just $1 - \sigma(\overline{H})$ set-wise.
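A quick numerical illustration of Eq. (13) (our own toy check, with the graphon evaluated on a uniform grid): the squared eigenvalues of the discretized operator sum to the discretized squared $L^2$ norm of the kernel.

```python
import numpy as np

num = 400
u = np.linspace(0.0, 1.0, num)
W = 0.5 * np.exp(-np.abs(u[:, None] - u[None, :]))   # a toy symmetric graphon on the grid

evals = np.linalg.eigvalsh(W / num)                   # eigenvalues of the discretized operator
frob_sq = (W ** 2).sum() / num ** 2                   # discretized ||W||_{L^2([0,1]^2)}^2
print(np.isclose(evals @ evals, frob_sq))             # True (up to floating-point error)
```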
# C POINCARÉ INEQUALITY

Proof of Thm. 2. The proof mirrors Pesenson (2008). Fix an $X$ in $L^2 (U)$ . Define $X^{\prime}\in L^{2}(D)$ as:

$$
X ^ {\prime} (u) = \left\{ \begin{array}{l l} X (u) & \text {if } u \in U \\ - X (u) & \text {if } u \in U ^ {\prime} \\ 0 & \text {otherwise.} \end{array} \right. \tag {14}
$$

It is clear that $X'$ is measurable (with respect to the Lebesgue measure on $D$ ). Consider:

$$
\left\| X ^ {\prime} (u) \right\| _ {L ^ {2} (D)} ^ {2} = \int_ {U} \left(X ^ {\prime} (u)\right) ^ {2} \mathrm {d} u + \int_ {U ^ {\prime}} \left(X ^ {\prime} (u)\right) ^ {2} \mathrm {d} u = 2 \| X (u) \| _ {L ^ {2} (U)} ^ {2}, \tag {15}
$$

and at the same time, for all $u \in U$ :
$$
\int_ {0} ^ {1} \mathbf {W} (u, v) \mathrm {d} v = \int_ {U \cup \mathcal {N} (U)} \mathbf {W} (u, v) \mathrm {d} v = \int_ {D} (\Gamma (U)) (u, v) \mathrm {d} v. \tag {16}
$$

This in particular means that normalizing $\Gamma(U)$ as $\Gamma(U)'$ means scaling by the same scalar as normalizing $\mathbf{W}$ into $\mathbf{W}'$ .

Now we investigate the image of $X^{\prime}$ under the Laplacian operator:

$$
\begin{array}{l} \mathcal {L} _ {\Gamma (U)} ^ {\prime} X ^ {\prime} (u) := X ^ {\prime} (u) - \int_ {D} (\Gamma (U)) ^ {\prime} (u, v) X ^ {\prime} (v) \mathrm {d} v \quad (17) \\ = \left\{ \begin{array}{l l} X (u) - \int_ {0} ^ {1} \mathbf {W} ^ {\prime} (u, v) X (v) \mathrm {d} v & \text {if } u \in U \\ - X (u) + \int_ {0} ^ {1} \mathbf {W} ^ {\prime} (u, v) X (v) \mathrm {d} v & \text {if } u \in U ^ {\prime} \\ 0 & \text {otherwise,} \end{array} \right. \quad (18) \\ = \left\{ \begin{array}{l l} \mathcal {L} ^ {\prime} X (u) & \text {if } u \in U \\ - \mathcal {L} ^ {\prime} X (u) & \text {if } u \in U ^ {\prime} \\ 0 & \text {otherwise.} \end{array} \right. \quad (19) \\ \end{array}
$$

And therefore: $\| \mathcal{L}_{\Gamma (U)}^{\prime}X^{\prime}\|_{L^{2}(D)} = \sqrt{2}\| \mathcal{L}^{\prime}X\|_{L^{2}(U)}\leq \sqrt{2}\| \mathcal{L}^{\prime}X\|_{L^{2}([0,1])}$ . The point of constructing $\Gamma (U)^{\prime}$ is that it has a nice eigenfunction that corresponds to eigenvalue 0. Let $\varphi_0$ be such a function, then
$$
0 = \mathcal {L} _ {\Gamma (U)} ^ {\prime} \varphi_ {0} (u) = \varphi_ {0} (u) - \int_ {D} \frac {(\Gamma (U)) (u , v)}{\sqrt {\int_ {D} (\Gamma (U)) (z , v) \mathrm {d} z \int_ {D} (\Gamma (U)) (u , z) \mathrm {d} z}} \varphi_ {0} (v) \mathrm {d} v. \tag {20}
$$

By inspection, setting $\varphi_0(u) \coloneqq \sqrt{\int_D(\Gamma(U))(u,v)\mathrm{d}v}$ satisfies the above equation and this is the eigenfunction of $\mathcal{L}_{\Gamma (U)}^{\prime}$ corresponding to eigenvalue 0. Expand $X^{\prime}$ in the eigenfunction basis of $\mathcal{L}_{\Gamma (U)}^{\prime}$ to get:

$$
\left\| X ^ {\prime} \right\| _ {L ^ {2} (D)} ^ {2} = \sum_ {i \in \mathbb {N} \cup \{0 \}} | \langle X ^ {\prime}, \varphi_ {i} \rangle | ^ {2}. \tag {21}
$$

However, the first coefficient vanishes:
$$
\begin{array}{l} \langle X ^ {\prime}, \varphi_ {0} \rangle = \int_ {D} X ^ {\prime} (u) \sqrt {\int_ {D} (\Gamma (U)) (u , v) \mathrm {d} v} \mathrm {d} u \quad (22) \\ = \int_ {U} X (u) \sqrt {\int_ {D} \mathbf {W} (u , v) \mathrm {d} v} \mathrm {d} u - \int_ {U ^ {\prime}} X (u) \sqrt {\int_ {D} \mathbf {W} (u , v) \mathrm {d} v} \mathrm {d} u = 0, \quad (23) \\ \end{array}
$$

and we have:

$$
\begin{array}{l} 2 \| \mathcal {L} ^ {\prime} X \| _ {L ^ {2} ([ 0, 1 ])} ^ {2} \geq \| \mathcal {L} _ {\Gamma (U)} ^ {\prime} X ^ {\prime} \| _ {L ^ {2} (D)} ^ {2} \quad (24) \\ = \sum_ {i \in \mathbb {N}} \lambda_ {i} ^ {2} | \langle X ^ {\prime}, \varphi_ {i} \rangle | ^ {2} \quad (25) \\ \geq \lambda_ {1} ^ {2} \| X ^ {\prime} \| _ {L ^ {2} (D)} ^ {2} \quad (26) \\ = 2 \lambda_ {1} ^ {2} \| X \| _ {L ^ {2} (U)} ^ {2}, \quad (27) \\ \end{array}
$$

which finishes the proof.

Proof of Thm. 3. If $X, Y \in PW_{\lambda}(\mathbf{W})$ , then $X - Y \in PW_{\lambda}(\mathbf{W})$ and we have:
$$
\left\| \bar {\mathcal {L}} (X - Y) \right\| _ {L ^ {2}} \leq \lambda \| X - Y \| _ {L ^ {2}}. \tag {28}
$$

If $X$ and $Y$ coincide on $U$ , then $X - Y \in L^{2}(S)$ and we can write the Poincaré inequality:

$$
\left\| X - Y \right\| _ {L ^ {2}} \leq \Lambda \left\| \bar {\mathcal {L}} (X - Y) \right\| _ {L ^ {2}}. \tag {29}
$$

Combining the two inequalities, we have:

$$
\| X - Y \| _ {L ^ {2}} \leq \Lambda \| \bar {\mathcal {L}} (X - Y) \| _ {L ^ {2}} \leq \Lambda \lambda \| X - Y \| _ {L ^ {2}} \tag {30}
$$

which can only be true if $\| X - Y\|_{L^2} = 0$ since $\lambda \Lambda < 1$ .

# D PROOF FROM SECTION 4.2

# D.1 GRAPHON IS EQUIVALENT TO MIXTURE MODEL FOR RANDOM GRAPHS

Proof of Prop. 1. Let $\omega \sim \mathbb{P}(\Omega)$ . We want to find a strictly monotone function $\beta : [0,1] \to \Omega$ such that $U = \beta^{-1}(\omega)$ is uniformly distributed over $[0,1]$ . Let $F_{\omega}(\omega) = \mathbb{P}(\omega \leq \omega)$ , and assume the function $\beta$ exists. Then, for all $\omega$ we can write
$$
F _ {\omega} (\omega) = \mathbb {P} (\omega \leq \omega) = \mathbb {P} (\beta (U) \leq \omega) = \mathbb {P} (U \leq \beta^ {- 1} (\omega)) = \beta^ {- 1} (\omega) \tag {31}
$$

where the second equality follows from the fact that, since $\beta$ is strictly monotone, it has an inverse. This proves that $\beta$ exists and is equal to the inverse of the CDF of $\omega$ .

Before continuing, let us introduce a few useful definitions. The Laplacian associated with the model $\mathbb{K}(\Omega, \mathbb{P}, K)$ is defined as

$$
\mathcal {L} _ {K} f = f - \int_ {\Omega} \bar {K} (\omega , \cdot) f (\omega) d \mathbb {P} (\omega) \tag {32}
$$

where $\bar{K} (\omega ,\theta) = K(\omega ,\theta) / (q(\omega)q(\theta))$ and $q(\omega) = \sqrt{\int_{\Omega}K(\omega,\theta)d\mathbb{P}(\theta)}$ . The operator $\mathcal{L}_K$ is self-adjoint and positive semidefinite, therefore it has a non-negative real spectrum with eigenpairs $\{\lambda_i,\varphi_i\}_{i = 1}^{\infty}$ .

To simplify matters, we will consider the problem of finding frames $\{f_i\}_{i=1}^K$ that allow unique representation of signals in any $PW_{\Omega}(\lambda)$ with $\lambda \leq \lambda_K$ . Note that the graphon $\mathbf{W}$ (and therefore its associated Laplacian) has rank $K$ . Recall that, in order to uniquely represent a signal, the frame $\{f_i\}$ must satisfy

$$
\operatorname {r a n k} \left[ \begin{array}{c c c c} \langle f _ {1}, \varphi_ {1} \rangle & \langle f _ {1}, \varphi_ {2} \rangle & \dots & \langle f _ {1}, \varphi_ {K} \rangle \\ \langle f _ {2}, \varphi_ {1} \rangle & \langle f _ {2}, \varphi_ {2} \rangle & \dots & \langle f _ {2}, \varphi_ {K} \rangle \\ \vdots & \vdots & \dots & \vdots \\ \langle f _ {K}, \varphi_ {1} \rangle & \langle f _ {K}, \varphi_ {2} \rangle & \dots & \langle f _ {K}, \varphi_ {K} \rangle \end{array} \right] = K \tag {33}
$$

where $\{\varphi_i\}_{i=1}^K$ are the eigenfunctions associated with strictly positive eigenvalues of $\mathcal{L}_K$ , sorted according to their magnitude.

By (Schiebinger et al., 2015, Thm. 1), the functions $q_{i}(\theta) = \int_{\Omega}K(\omega ,\theta)d\mathbb{P}_{i}(\omega), 1\leq i\leq K$ , form such a frame.
# D.2 MIXTURE COMPONENT GIVES RISE TO UNIQUENESS SETS

In this section, we write Leb to emphasize the Lebesgue measure on $\mathbb{R}$ being used in our integrals.

Proof of Thm. 4. Define the Heaviside frame $\{h_i : \mathcal{X} \to \mathbb{R}\}_{i=1}^K$ as $h_i(\omega) = \delta_{A_i}(\omega) \sqrt{p_i(\omega) / p_i(A_i)}$ , where $\delta_E$ is the indicator function of a measurable set $E$ , for each $i \in [K]$ . It is straightforward to check that $h_i$ is also in $L^2(p_i)$ for each $i \in [K]$ . Define the subspace $\mathbf{H} := \operatorname{span}\{h_1, \ldots, h_K\}$ and the Heaviside embedding $\Phi_{\mathbf{H}}: \mathcal{X} \to \mathbb{R}^K$ as $\Phi_{\mathbf{H}}(\omega) = (h_1(\omega), \ldots, h_K(\omega))$ .

Step 1: Show that $((h_i, q_j))_{i,j=1}^K$ is full-rank. To show that $((h_i, q_j))_{i,j=1}^K$ is full-rank, we compute entries of $((h_i, q_j))_{i,j=1}^K$ : for any $i, j \in [K]$ ,

$$
\left\langle h _ {i}, q _ {j} \right\rangle = \frac {1}{\sqrt {p _ {i} \left(A _ {i}\right)}} \int_ {A _ {i}} q _ {j} (\omega) \sqrt {p _ {i} (\omega)} \, \mathrm {d} \mathrm {Leb} (\omega) = \frac {1}{\sqrt {p _ {i} \left(A _ {i}\right)}} \int_ {A _ {i}} \sqrt {p _ {j} (\omega) p _ {i} (\omega)} \, \mathrm {d} \mathrm {Leb} (\omega). \tag {34}
$$

For diagonal entries, note that:

$$
\left\langle h _ {j}, q _ {j} \right\rangle = \frac {1}{\sqrt {p _ {j} \left(A _ {j}\right)}} \int_ {A _ {j}} p _ {j} (\omega) \, \mathrm {d} \mathrm {Leb} (\omega) = \sqrt {p _ {j} \left(A _ {j}\right)}. \tag {35}
$$

Fix a $j\in [K]$ and consider:
$$
\begin{array}{l} \sum_ {i \neq j} | \langle h _ {i}, q _ {j} \rangle | = \sum_ {i \neq j} \frac {1}{\sqrt {p _ {i} (A _ {i})}} \int_ {A _ {i}} \sqrt {p _ {j} (\omega) p _ {i} (\omega)} \, \mathrm {d} \mathrm {Leb} (\omega) \quad (36) \\ \leq \sum_ {i \neq j} \frac {1}{\sqrt {p _ {i} \left(A _ {i}\right)}} \sqrt {\int_ {A _ {i}} p _ {j} (\omega) \, \mathrm {d} \mathrm {Leb} (\omega)} \sqrt {\int_ {A _ {i}} p _ {i} (\omega) \, \mathrm {d} \mathrm {Leb} (\omega)} \quad (37) \\ = \sum_ {i \neq j} \frac {1}{\sqrt {p _ {i} \left(A _ {i}\right)}} \sqrt {p _ {j} \left(A _ {i}\right)} \sqrt {p _ {i} \left(A _ {i}\right)} \quad (38) \\ = \sum_ {i \neq j} \sqrt {p _ {j} \left(A _ {i}\right)}, \quad (39) \\ \end{array}
$$

where the inequality is from Cauchy-Schwarz. In the first choice of assumption, we have $p_j(A_j) - K^2\epsilon^2 > \sum_{i \neq j} p_j(A_i) / (K - 1)^2$ and thus $\sqrt{p_j(A_j)} - K\epsilon > \sqrt{\sum_{i \neq j} p_i(A_i)} / (K - 1) > \sum_{i \neq j} \sqrt{p_i(A_i)}$ , due to monotonicity of the square root and Cauchy-Schwarz. Thus, we have shown that for every $j \in [K]$ , the $j$ -th column of $((h_i, q_j))_{i,j=1}^K$ has $j$ -th entry larger (in absolute value) than the sum of the absolute values of all other entries. The Gershgorin circle theorem then tells us that every eigenvalue of $((h_i, q_j))_{i,j=1}^K$ lies in at least one disk centered at a diagonal value, with radius equal to the sum of the absolute values of the remaining column entries. None of the Gershgorin disks contain the origin, so we can conclude that $((h_i, q_j))_{i,j=1}^K$ has no 0 eigenvalue. Therefore, it is full rank.
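The Gershgorin argument used in Steps 1-2 reduces to a simple dominance check, sketched below for concreteness (an illustrative helper of our own, not part of the paper): a matrix passes the check whenever no Gershgorin disk can contain the origin.

```python
import numpy as np

def gershgorin_nonsingular(M, slack=0.0):
    """True if every column of M is strictly diagonally dominant (with extra slack),
    in which case no Gershgorin disk contains the origin and M is full rank."""
    M = np.asarray(M, dtype=float)
    diag = np.abs(np.diag(M))
    off = np.abs(M).sum(axis=0) - diag      # column sums of off-diagonal magnitudes
    return bool(np.all(diag > off + slack))

# Example: a well-separated mixture yields a matrix close to diagonal.
print(gershgorin_nonsingular(np.array([[0.9, 0.05], [0.1, 0.8]])))  # True
```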
Now, fix an $i\in [K]$ and consider:

$$
\begin{array}{l} \sum_ {j \neq i} | \langle h _ {i}, q _ {j} \rangle | = \sum_ {j \neq i} \frac {1}{\sqrt {p _ {i} (A _ {i})}} \int_ {A _ {i}} \sqrt {p _ {j} (\omega) p _ {i} (\omega)} \, \mathrm {d} \mathrm {Leb} (\omega) \quad (40) \\ \leq \sum_ {j \neq i} \frac {1}{\sqrt {p _ {i} \left(A _ {i}\right)}} \sqrt {\int_ {A _ {i}} p _ {j} (\omega) \, \mathrm {d} \mathrm {Leb} (\omega)} \sqrt {\int_ {A _ {i}} p _ {i} (\omega) \, \mathrm {d} \mathrm {Leb} (\omega)} \quad (41) \\ = \sum_ {j \neq i} \frac {1}{\sqrt {p _ {i} \left(A _ {i}\right)}} \sqrt {p _ {j} \left(A _ {i}\right)} \sqrt {p _ {i} \left(A _ {i}\right)} \quad (42) \\ = \sum_ {j \neq i} \sqrt {p _ {j} \left(A _ {i}\right)}. \quad (43) \\ \end{array}
$$

In the second choice of assumption, the same thing happens: $p_i(A_i) - K^2\epsilon^2 > \sum_{j \neq i} p_j(A_i) / (K - 1)^2$ implies that $\sqrt{p_i(A_i)} - K\epsilon > \sum_{j \neq i} \sqrt{p_j(A_i)}$ and, once again, the center of every Gershgorin disk (this time taken along the rows) is further away from zero than the sum of the absolute values of the other, non-diagonal entries. Therefore, none of the disks contain the origin, $((h_i, q_j))_{i,j=1}^K$ cannot have a 0 eigenvalue, and it is thus full rank. Therefore, either choice of assumption leads to full rank of the system $((h_i, q_j))_{i,j=1}^K$ .

Step 2. Full-rank implies uniqueness. By the premise of this result, we have for each $i$ ,
$$
\left\| q _ {i} - \varphi_ {i} \right\| _ {L ^ {2}} < \epsilon . \tag {44}
$$

Thus,

$$
\left\langle h _ {i}, \varphi_ {j} \right\rangle = \left\langle h _ {i}, q _ {j} \right\rangle - \left\langle h _ {i}, q _ {j} - \varphi_ {j} \right\rangle \in \left(\left\langle h _ {i}, q _ {j} \right\rangle - \epsilon , \left\langle h _ {i}, q _ {j} \right\rangle + \epsilon\right), \tag {45}
$$

by Cauchy-Schwarz.

Recall that $((h_i, q_j))_{i,j}$ is full rank, and that the Gershgorin circle theorem applied in the previous step still has a slack of at least $K\epsilon$ . Therefore, an element-wise additive perturbation of size $\epsilon$ of $((h_i, q_j))_{i,j}$ is still full rank by the Gershgorin circle theorem, and we conclude that $((h_i, \varphi_j))_{i,j}$ is full rank.

Let $X \in \mathrm{PW}_{\lambda}(\mathbf{W})$ for some $\lambda \in (\lambda_K, \lambda_{K+1})$ ; then by definition, there exists a vector $\pmb{c} \in \mathbb{R}^K$ such that $X = \sum_{j=1}^{K} c_j \varphi_j$ . Taking inner products (in $L^2(\mathbb{P}) = L^2(\mathbf{W})$ ), we have:
$$
\left[ \begin{array}{c c c c} \langle h _ {1}, \varphi_ {1} \rangle & \langle h _ {1}, \varphi_ {2} \rangle & \dots & \langle h _ {1}, \varphi_ {K} \rangle \\ \langle h _ {2}, \varphi_ {1} \rangle & \langle h _ {2}, \varphi_ {2} \rangle & \dots & \langle h _ {2}, \varphi_ {K} \rangle \\ \vdots & \vdots & \dots & \vdots \\ \langle h _ {K}, \varphi_ {1} \rangle & \langle h _ {K}, \varphi_ {2} \rangle & \dots & \langle h _ {K}, \varphi_ {K} \rangle \end{array} \right] \boldsymbol {c} = \left[ \begin{array}{c} \langle h _ {1}, X \rangle \\ \langle h _ {2}, X \rangle \\ \vdots \\ \langle h _ {K}, X \rangle \end{array} \right] \tag {46}
$$

To test if $U = \bigcup_{i=1}^{K} A_i$ is a uniqueness set, we assume that $\| X\delta_U\|_{L^2(\mathbf{W})} = 0$ . But $|\langle h_i, X\rangle| = |\langle h_i, \delta_{A_i}X\rangle| \leq \| h_i\| \| X\delta_U\| = 0$ for each $i$ in $[K]$ . Thus:

$$
\left[ \begin{array}{c c c c} \langle h _ {1}, \varphi_ {1} \rangle & \langle h _ {1}, \varphi_ {2} \rangle & \dots & \langle h _ {1}, \varphi_ {K} \rangle \\ \langle h _ {2}, \varphi_ {1} \rangle & \langle h _ {2}, \varphi_ {2} \rangle & \dots & \langle h _ {2}, \varphi_ {K} \rangle \\ \vdots & \vdots & \dots & \vdots \\ \langle h _ {K}, \varphi_ {1} \rangle & \langle h _ {K}, \varphi_ {2} \rangle & \dots & \langle h _ {K}, \varphi_ {K} \rangle \end{array} \right] \boldsymbol {c} = \left[ \begin{array}{l} 0 \\ 0 \\ \vdots \\ 0 \end{array} \right] \tag {47}
$$

Finally, since $((h_i,\varphi_j))_{i,j = 1}^K$ is full rank, its null space is trivial, implying $c = 0$ and thus $X = 0$ , which proves that $U$ is a uniqueness set.

# D.3 CONSISTENCY THEOREM

This result is an adaptation of (Schiebinger et al., 2015, Thm. 2), which is reproduced below.

Theorem 6 (Thm. 2, Schiebinger et al. (2015)). There are numbers $c$ , $c_0$ , $c_1$ , $c_2$ depending only on $b$ and $r$ such that for any $\delta \in (0, \frac{\|K\|_{\mathbb{P}}}{b\sqrt{2\pi}})$ satisfying condition (Schiebinger et al., 2015, 3.17) and any $t > c_0w_{\min}^{-1}\sqrt{\phi_n(\delta)}$ , the embedded dataset $\{\Phi_{\mathcal{V}}(\omega_i), Z_i\}_{i=1}^n$ has an $(\alpha, \theta)$ orthogonal cone structure with
$$
| \cos \theta | \leq \frac {c _ {0} \sqrt {\phi_ {n} (\delta)}}{w _ {\min } ^ {3} t - c _ {0} \sqrt {\phi_ {n} (\delta)}} \tag {48}
$$

$$
\alpha \leq \frac {c _ {1}}{w _ {\min } ^ {3 / 2}} \phi_ {n} (\delta) + \psi (2 t) \tag {49}
$$

and this event holds with probability at least $1 - 8K^2 \exp \left( -\frac{c_2 n \delta^4}{\delta^2 + S_{max} + C} \right)$ .

Thm. 6 elucidates the conditions under which the spectral embeddings of the nodes $\omega$ form an orthogonal cone structure (see (Schiebinger et al., 2015, Def. 1) for a precise definition). This is helpful for Gaussian elimination, as provided that we pick a pivot inside a cone, the other rows to be picked—which are orthogonal to the pivot—are themselves inside other cones, and therefore likely to belong to a different cluster (i.e., to be distributed according to a different mixture component).

We first recall the connection between graphons and mixture models and explain how each object in the context of Thm. 6 can be understood in graphon terms. In the mixture model, we sample the dataset $\{\omega_i\}_{i=1}^n$ from the mixture distribution. This is equivalent, under a pushforward, to sampling the nodes of a finite graph from the graphon. Thus, each data point $\omega_i$ is a node of a finite graph sampled from the graphon. Next, the spectral embedding of data points in spectral clustering is equivalent to evaluating the eigenfunctions of the graphon Laplacian at those data points, i.e., embedding them in the frequency domain. Therefore, from a graphon perspective, the theorem asserts that, given some underlying structure controlled by the difficulty function, the embedding of the nodes of a finite graph sampled from a fixed graphon into the frequency domain under the GFT has a peculiar structure: an orthogonal cone. While we do not have easy access to the graphon eigenfunctions, computing an approximation once with a large graph suffices, because we can reuse the embedding when new points are sampled into the graph.
Proof of Thm. 5. Let us consider what happens when performing Gaussian elimination on the columns of $\Phi_{\mathcal{V}}(\omega)$ . When picking the pivot, the probability of picking a "good" point inside a cone, i.e., a point that is both inside a cone and that is distributed according to the mixture component associated with that cone, is $1 - \alpha$ . Conditioned on this event, the probability of picking a second "good" point from another cone is $\frac{(1 - \alpha)(n - n_1)}{n - (1 - \alpha)n_1}$ , where $n_1$ is the number of points distributed according to the pivot's distribution, denoted $\mathbb{P}_1$ . More generally, the probability of picking a "good" point at the $i$ th step, conditioned on having picked $i - 1$ "good" points, is

$$
\mathbb {P} (i \text {th point is “good”} \mid 1, \dots , i - 1 \text { are “good”}) = \frac {(1 - \alpha) n _ {- i}}{n - (1 - \alpha) n _ {+ i}} \tag {50}
$$

where $n_{-i} = n - \sum_{j=1}^{i-1} n_j$ and $n_{+i} = n - n_{-i}$ .

Since Eq. (50) is a decreasing function of $n_{-i}$ , the probability of picking $K$ good points is lower bounded by

$$
\mathbb {P} (1, \dots , K \text { are “good”}) \geq \frac {(1 - \alpha) ^ {K} \left(n - n _ {\min }\right) ^ {K}}{(n - (1 - \alpha) n _ {\min }) ^ {K}} \tag {51}
$$

where $n_{\min} = \min_{1 \leq j \leq K} n_j$ . Combining Eq. (51) with Thm. 6 gives the theorem's result.
Proof of Prop. 4. The conditions on the difficulty function in the hypothesis of Prop. 4 mean that the angle $\theta$ in the cone structure is at least $\pi /3$ .

Note that every finite graph $\mathbf{G}_n$ induces a graphon via a stochastic block model:

$$
\mathbf {W} _ {\mathbf {G} _ {n}} := \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {n} [ \mathbf {A} _ {n} ] _ {i, j} \mathbb {I} (x \in I _ {i}) \mathbb {I} (y \in I _ {j}) \tag {52}
$$

From Ruiz et al. (2021), we know that the eigenvalues of the adjacency HS operator of $\mathbf{W}_{\mathbf{G}_n}$ converge to those of $\mathbf{W}$ . As the graphon Laplacian is a scaled and translated version of the adjacency operator, the eigenvalues of the Laplacian also converge. Let the eigenvalues of the finite graph be $\hat{\lambda}_{n,1} \leq \ldots \leq \hat{\lambda}_{n,n}$ . Pick an $n_0$ large enough such that there is a spectral gap $\hat{\lambda}_{n,K} < \hat{\lambda}_{n,K+1}$ for all $n > n_0$ . Then pick an even larger $n_1$ such that $\lambda \in (\hat{\lambda}_{n,K}, \hat{\lambda}_{n,K+1})$ for all $n > n_1$ . Such a choice of $n_0, n_1$ is guaranteed by the convergence of the eigenvalues.

Not only do the eigenvalues converge: when there is an eigengap, the subspace spanned by the first $K$ eigenfunctions also converges. The convergence is in terms of convergence in operator norm of the projection operators (Ruiz et al., 2021). Let the projection operators be $\Phi_{\mathbf{G}_n}$ and $\Phi_{\mathbf{W}}$ , corresponding to the finite graph $\mathbf{G}_n$ and the graphon $\mathbf{W}$ respectively. Therefore, we select yet a larger $n_2$ such that $\| \Phi_{\mathbf{G}_n} - \Phi_{\mathbf{W}} \|_{HS} < \epsilon$ for all $n > n_2$ and for some $\epsilon$ to be chosen later.

Recall that we picked some sample via Thm. 5 and with high probability, our sample attains an orthogonal cone structure. In other words, there is a permutation of samples such that for each $i \in [K]$ , $|\cos \tau(i)| > 1/2$ with high probability, where $\tau(i)$ is the angle between $\varphi(x_i)$ and the unit vector with all zero entries but the $i$ -th one. This means that for any $i$ , $|\varphi_i(x_i)| / \| (\varphi_j(x_i))_{j \in [K]} \|_2 > 1/2$ . Therefore, the matrix:
$$
\left[ \begin{array}{c c c c} \varphi_ {1} \left(x _ {1}\right) & \varphi_ {2} \left(x _ {1}\right) & \dots & \varphi_ {K} \left(x _ {1}\right) \\ \varphi_ {1} \left(x _ {2}\right) & \varphi_ {2} \left(x _ {2}\right) & \dots & \varphi_ {K} \left(x _ {2}\right) \\ \vdots & \vdots & \dots & \vdots \\ \varphi_ {1} \left(x _ {K}\right) & \varphi_ {2} \left(x _ {K}\right) & \dots & \varphi_ {K} \left(x _ {K}\right) \end{array} \right] \tag {53}
$$

is full rank, since the sum of the absolute values of the off-diagonal entries does not exceed the absolute value of the diagonal entry in every row, by the Gershgorin circle theorem. As a corollary of Thm. 1 of Anis et al. (2016), the system being full rank means that the samples drawn form a uniqueness set, and the proof is complete.

To select $\epsilon$ , notice that there is still slack in the Gershgorin circle theorem, and one can select $\epsilon$ such that the eigenfunctions of the two projections differ by at most that slack amount in $L^2$ . This is possible since being full rank is a robust property: if a matrix is full rank, then all matrices within a small ball around it are also full rank. Thus, if there is a sequence of eigenfunctions/eigenspaces converging to $\varphi(x)$ , then the perturbed matrix analogous to Eq. (53) eventually enters the small ball of full-rank matrices. We leave a more precise nonasymptotic analysis to future work.

# E SMALL EXAMPLE: BLOCK MODEL AND MIXTURE OF DISJOINT UNIFORM DISTRIBUTIONS

Let us consider a simplified setting, consisting of a blockmodel kernel and uniform mixture components, to show an example where Gaussian elimination recovers intervals distributed according to the $\{q_i\}_{i=1}^K$ .

Proposition 5. Let $\mathcal{I} = \Omega_1 \cup \ldots \cup \Omega_N$ be an $N$ -partition of $\Omega$ . Let the kernel $\mathbf{k}$ be a $K$ -block model over a coarser partition $\mathcal{I}' = \Omega_1' \cup \ldots \cup \Omega_K'$ of $\Omega$ containing $\mathcal{I}$ (each block has value given by the integral of $\mathbf{k}$ over the centroid). Let the $\mathbb{P}_i$ be uniform over the $\Omega_i'$ . Then, column-wise Gaussian elimination over the positive eigenfunctions (eigenvectors) finds subsets $\Omega_{j_1}, \ldots, \Omega_{j_K}$ distributed with probability density functions equal to the corresponding $q_i$ , up to a normalization.

Proof of Prop. 5. The kernel $\mathbf{k}$ can be written as
$$
\mathbf {k} (\omega , \theta) = \sum_ {i, j = 1} ^ {K} a _ {i j} \mathbb {I} \left(\omega \in \Omega_ {i} ^ {\prime}\right) \mathbb {I} \left(\theta \in \Omega_ {j} ^ {\prime}\right). \tag {54}
$$

Therefore, the model $\mathbb{K}$ can be represented as an SBM graphon,

$$
\mathbf {A} = \begin{array}{c | c c c c c c c} & i _ {1} ^ {1} & \dots & i _ {k _ {1}} ^ {1} & \dots & i _ {1} ^ {K} & \dots & i _ {k _ {K}} ^ {K} \\ \hline i _ {1} ^ {1} & a _ {1 1} & \dots & a _ {1 1} & \dots & a _ {1 K} & \dots & a _ {1 K} \\ \vdots & \vdots & \ddots & \vdots & & \vdots & \ddots & \vdots \\ i _ {k _ {1}} ^ {1} & a _ {1 1} & \dots & a _ {1 1} & \dots & a _ {1 K} & \dots & a _ {1 K} \\ \vdots & & \vdots & & \ddots & & \vdots & \\ i _ {1} ^ {K} & a _ {1 K} & \dots & a _ {1 K} & \dots & a _ {K K} & \dots & a _ {K K} \\ \vdots & \vdots & \ddots & \vdots & & \vdots & \ddots & \vdots \\ i _ {k _ {K}} ^ {K} & a _ {1 K} & \dots & a _ {1 K} & \dots & a _ {K K} & \dots & a _ {K K} \end{array} \tag {55}
$$

where $i_{j_l}^l$ , $1 \leq j_l \leq k_l$ , indexes elements of $\mathcal{I}$ contained in $\Omega_l'$ (i.e., in the support of $\mathbb{P}_l$ ), and $\sum_{j=1}^{K} k_j = N$ . For a more concise representation, let us write

$$
\mathbf {A} = \left[ \begin{array}{c c c} \mathbf {A} _ {1 1} & \dots & \mathbf {A} _ {1 K} \\ \vdots & \ddots & \vdots \\ \mathbf {A} _ {1 K} & \dots & \mathbf {A} _ {K K} \end{array} \right] \tag {56}
$$

where $\mathbf{A}_{ij} = a_{ij}\mathbf{1}\mathbf{1}^T$ .

Consider the normalized adjacency $\tilde{\mathbf{A}} = (\mathbf{D}^{\dagger})^{1 / 2}\mathbf{A}(\mathbf{D}^{\dagger})^{1 / 2}$ , which has the same block structure as $\mathbf{A}$ but with blocks $\tilde{\mathbf{A}}_{ij}$ . Note that technically, we would find eigenvectors of the normalized Laplacian $\mathbf{I} - \tilde{\mathbf{A}}$ but the identity shift only shifts the spectrum by 1 (after inversion about the origin). Therefore it is equivalent to finding the eigenvectors of $\tilde{\mathbf{A}}$ :
$$
\left[ \begin{array}{c c c} \tilde {\mathbf {A}} _ {1 1} & \dots & \tilde {\mathbf {A}} _ {1 K} \\ \vdots & \ddots & \vdots \\ \tilde {\mathbf {A}} _ {1 K} & \dots & \tilde {\mathbf {A}} _ {K K} \end{array} \right] \mathbf {u} = \lambda \mathbf {u}. \tag {57}
$$

Note, however, that for each $1 \leq i \leq K$ , the rows corresponding to $[\tilde{\mathbf{A}}_{1i} \ldots \tilde{\mathbf{A}}_{Ki}]\mathbf{u}$ are repeated, so plugging $\tilde{\mathbf{A}}$ into an eigensolver without simplifying $\tilde{\mathbf{A}}$ first is going to incur huge computational cost for little gain. We can exploit the repeated structure of $\tilde{\mathbf{A}}$ to do some preprocessing first, via a variant of Gaussian elimination. Permuting the rows and columns of this matrix to ensure the sequence $a_{i1}, \ldots, a_{iK}$ appears in the first $K$ columns, and subtracting the repeated rows, we can rewrite this as

$$
\left[ \begin{array}{c c c c} \tilde {a} _ {1 1} & \dots & \tilde {a} _ {1 K} & \mathbf {b} _ {1} \\ \vdots & \ddots & \vdots & \vdots \\ \tilde {a} _ {1 K} & \dots & \tilde {a} _ {K K} & \mathbf {b} _ {K} \\ \mathbf {0} & \dots & \mathbf {0} & \mathbf {0} \end{array} \right] \mathbf {u} = \lambda \mathbf {u} \tag {58}
$$

where the $\mathbf{b}_i\in \mathbb{R}^{N - K}$ are row vectors collecting the remaining entries of row $i$ after permutation, and $\mathbf{0}$ denotes the all-zeros vector of dimension $N - K$ .

For the linear system in Eq. (58), it is easy to see that the solutions $\mathbf{u}$ must have form $\mathbf{u} = [u_1 \ldots u_K 0 \ldots 0]^T$ . Hence, the eigenvectors of the modified matrix in Eq. (58) are the eigenvectors of its $K \times K$ principal submatrix padded with zeros. To obtain the eigenvectors of the original matrix Eq. (57), we simply have to "revert" the operations performed to get from there to Eq. (58), with the appropriate normalizations to ensure orthonormality. By doing so, we get eigenvectors of the following form
$$
\mathbf {u} = \left[ \underbrace {u _ {1} \ \dots \ u _ {1}} _ {k _ {1} \text { times}} \ \dots \ \underbrace {u _ {K} \ \dots \ u _ {K}} _ {k _ {K} \text { times}} \right] ^ {T} \tag {59}
$$

i.e., in every eigenvector of $\tilde{\mathbf{A}}$ , entries corresponding to sets $\Omega_{i}$ contained in the same set $\Omega_k^\prime$ are the same.

Now, assume that we have found all $K$ eigenvectors of $\tilde{\mathbf{A}}$ and collect them in the matrix $\mathbf{U}_K\in \mathbb{R}^{N\times K}$ . To find a uniqueness set for the associated graphon, we perform columnwise Gaussian elimination on $\mathbf{U}_K$ , and add the indices of the zeros in the $K$ th row of the echelon form to the sampling set.

In the current example, this heuristic is always guaranteed to find a uniqueness set. Any combination of indices corresponding to $K$ different rows from $\mathbf{U}_K$ forms such a set. Since through Gaussian elimination we are guaranteed to pick $K$ linearly independent rows, when picking a row from cluster $\Omega_i$ for arbitrary $i$ , all $k_i$ rows are equally likely to be picked, as they are equal and thus have the same "pivoting" effect. In an independent trial, the probability of picking a row from $\Omega_i'$ is thus $(k_i / N) \times \mathbb{P}_i$ . Up to a normalization, this probability is equal to $\mathbf{q}_i = \mathbf{A}\mathbb{P}_i$ . The entries of this vector determine the level sets of $q_i$ as

$$
q _ {i} (x) = \mathbf {q} _ {i} \mathbb {I} \left(x \in \Omega_ {i} ^ {\prime}\right) \tag {60}
$$

completing the proof.
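A toy numerical check of the blockwise eigenvector structure used above (our own example, with the unnormalized adjacency for simplicity): for a matrix with constant blocks, eigenvectors can be computed from a $K \times K$ reduced matrix and expanded blockwise, as in Eqs. (57)-(59).

```python
import numpy as np

K = 3
sizes = np.array([4, 2, 5])                 # block sizes k_1, ..., k_K
B = np.array([[0.8, 0.2, 0.1],
              [0.2, 0.6, 0.3],
              [0.1, 0.3, 0.9]])             # block values a_ij
A = np.block([[np.full((sizes[i], sizes[j]), B[i, j]) for j in range(K)]
              for i in range(K)])           # N x N matrix with constant blocks

R = B * sizes[None, :]                      # K x K reduced matrix (columns scaled by block size)
evals, U = np.linalg.eig(R)

# Each reduced eigenvector, repeated blockwise, is an eigenvector of the full matrix.
for lam, u in zip(evals, U.T):
    v = np.repeat(u, sizes)
    assert np.allclose(A @ v, lam * v)
```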
# F ELEMENTS FROM (SCHIEBINGER ET AL., 2015)

For completeness, we reproduce elements from (Schiebinger et al., 2015) that were used in our paper.

# F.1 DIFFICULTY FUNCTION FOR MIXTURE MODELS

Recall that $\Omega$ is a measurable space and $\mathcal{P}(\Omega)$ is the set of all probability measures on $\Omega$ . Let $\mathbb{P}_i \in \mathcal{P}(\Omega)$ , $i = 1, \ldots, K$ , be the mixture components. A mixture model is a convex combination:

$$
\mathbb {P} := \sum_ {i = 1} ^ {K} w _ {i} \mathbb {P} _ {i}, \tag {61}
$$

for a set of weights $w_{i} \geq 0$ , $i = 1, \dots, K$ , with $\sum_{i} w_{i} = 1$ . Recall that there is also a kernel $\mathbf{k}$ associated with the mixture model.

The statistics of how well-separated the mixture components are can be quantified through the following five quantities:
|
| 666 |
+
|
| 667 |
+
Similarity index. For any distinct pair of mixtures $l \neq k$ , the kernel-dependent similarity index between $\mathbb{P}_l$ and $\mathbb{P}_k$ is:
|
| 668 |
+
|
| 669 |
+
$$
|
| 670 |
+
\mathcal {S} \left(\mathbb {P} _ {l}, \mathbb {P} _ {k}\right) := \frac {\int_ {\Omega} \int_ {\Omega} \mathbf {k} (\omega , \theta) \mathrm {d} \mathbb {P} _ {l} (\omega) \mathrm {d} \mathbb {P} _ {l} (\theta)}{\int_ {\Omega} \int_ {\Omega} \mathbf {k} (\omega , \theta) \mathrm {d} \mathbb {P} (\omega) \mathrm {d} \mathbb {P} _ {l} (\theta)}, \tag {62}
|
| 671 |
+
$$
|
| 672 |
+
|
| 673 |
+
and the maximum over all ordered pairs of similarity index is:
|
| 674 |
+
|
| 675 |
+
$$
|
| 676 |
+
\mathcal {S} _ {\max } (\mathbb {P}) := \max _ {l \neq k} \mathcal {S} \left(\mathbb {P} _ {l}, \mathbb {P} _ {k}\right) \tag {63}
|
| 677 |
+
$$
|
| 678 |
+
|
| 679 |
+
In general, $\mathcal{S}_{\max}(\mathbb{P})$ measures the worst-case overlap between any two components with respect to the kernel $\mathbf{k}$.
|
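The similarity index, like the other statistics below, is built from double integrals of the form $\int\int \mathbf{k}(\omega,\theta)\,\mathrm{d}\mathbb{P}_a(\omega)\,\mathrm{d}\mathbb{P}_b(\theta)$. A simple Monte Carlo estimator of this building block is sketched below; it is our illustration, assuming we can sample from the components, and the RBF kernel is only a placeholder for $\mathbf{k}$.

```python
# Sketch (ours): Monte Carlo estimate of I(P_a, P_b) = ∫∫ k(ω, θ) dP_a(ω) dP_b(θ),
# the building block of Eqs. (62)-(66). Assumes i.i.d. samplers for P_a and P_b.
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(x, y, bandwidth=1.0):
    return np.exp(-np.subtract.outer(x, y) ** 2 / (2 * bandwidth ** 2))

def double_integral(sample_a, sample_b, kernel=rbf_kernel, n=2000):
    """Average k(ω_i, θ_j) over all pairs of samples (a V-statistic estimate)."""
    omega, theta = sample_a(n), sample_b(n)
    return kernel(omega, theta).mean()

# Two hypothetical, well-separated Gaussian components:
P1 = lambda n: rng.normal(-2.0, 0.5, n)
P2 = lambda n: rng.normal(+2.0, 0.5, n)
print(double_integral(P1, P2))    # small value indicates little overlap under k
```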
| 680 |
+
|
| 681 |
+
Coupling parameter. The coupling parameter is defined as:
|
| 682 |
+
|
| 683 |
+
$$
|
| 684 |
+
\mathcal {C} (\mathbb {P}) := \max _ {m} \left\| \frac {\mathbf {k} (\omega , \theta)}{q _ {m} (\omega) q _ {m} (\theta)} - w _ {m} \frac {\mathbf {k} (\omega , \theta)}{q (\omega) q (\theta)} \right\| _ {\mathbb {P} _ {m} \otimes \mathbb {P} _ {m}} ^ {2}, \tag {64}
|
| 685 |
+
$$
|
| 686 |
+
|
| 687 |
+
where $q(\theta) = \sqrt{\int\mathbf{k}(\omega,\theta)\mathrm{d}\mathbb{P}(\omega)}$ and $q_{m}(\theta) = \sqrt{\int\mathbf{k}(\omega,\theta)\mathrm{d}\mathbb{P}_{m}(\omega)}$. It measures how strongly the function spaces associated with the individual components are coupled with respect to the Laplacian operator. When it is 0, for instance, the Laplacian over $\mathbb{P}$ is the weighted sum of the Laplacians over the $\mathbb{P}_m$ with weights $w_{m}$.
|
| 688 |
+
|
| 689 |
+
Indivisibility parameter. The indivisibility of a probability measure is defined as:
|
| 690 |
+
|
| 691 |
+
$$
|
| 692 |
+
\Gamma (\mathbb {Q}) := \inf _ {S \subset \Omega} \frac {p (\Omega) \int_ {S} \int_ {S ^ {c}} \mathbf {k} (\omega , \theta) \mathrm {d} \mathbb {Q} (\omega) \mathrm {d} \mathbb {Q} (\theta)}{p (S) p \left(S ^ {c}\right)}, \tag {65}
|
| 693 |
+
$$
|
| 694 |
+
|
| 695 |
+
where $p(S)\coloneqq \int_{S}\int_{\Omega}\mathbf{k}(\omega ,\theta)\mathrm{d}\mathbb{Q}(\omega)\mathrm{d}\mathbb{Q}(\theta).$
|
| 696 |
+
|
| 697 |
+
The quantity $\Gamma_{\mathrm{min}}(\mathbb{P})\coloneqq \min_{m}\Gamma (\mathbb{P}_{m})$ measures how easy it is to split a single component into two, which would suggest that the current model is ill-fitted.
|
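Because the infimum in Eq. (65) ranges over all subsets of $\Omega$, the indivisibility cannot be computed exactly in general. The sketch below is our rough empirical surrogate: it replaces integrals against $\mathbb{Q}$ by averages over a small i.i.d. sample and brute-forces the infimum over subsets of that sample, which is only feasible for very small sample sizes; the squared-exponential kernel is a placeholder.

```python
# Sketch (ours): brute-force empirical surrogate for the indivisibility Γ(Q)
# of Eq. (65) on a small sample from Q. Only usable for tiny n (2^n subsets).
import itertools
import numpy as np

def indivisibility(points, kernel):
    n = len(points)
    Kmat = kernel(points[:, None], points[None, :])  # pairwise kernel values
    p_point = Kmat.mean(axis=1)                      # ≈ ∫ k(x_i, θ) dQ(θ)
    p_omega = p_point.mean()                         # ≈ p(Ω)
    best = np.inf
    for r in range(1, n):
        for S in itertools.combinations(range(n), r):
            S = np.array(S)
            Sc = np.setdiff1d(np.arange(n), S)
            cut = Kmat[np.ix_(S, Sc)].sum() / n**2   # ≈ ∫_S ∫_{S^c} k dQ dQ
            p_S = p_point[S].sum() / n               # ≈ p(S)
            p_Sc = p_point[Sc].sum() / n             # ≈ p(S^c)
            best = min(best, p_omega * cut / (p_S * p_Sc))
    return best

pts = np.random.default_rng(0).normal(size=10)
print(indivisibility(pts, lambda x, y: np.exp(-(x - y) ** 2)))
```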
| 698 |
+
|
| 699 |
+
Boundedness parameter. Finally, we define:
|
| 700 |
+
|
| 701 |
+
$$
|
| 702 |
+
b _ {\max } := \max _ {m} \left\| \int_ {\Omega} \frac {\mathbf {k} (\cdot , \theta)}{q _ {m} (\cdot) \, q _ {m} (\theta)} \, \mathrm {d} \mathbb {P} _ {m} (\theta) \right\| _ {\infty} ^ {2}. \tag {66}
|
| 703 |
+
$$
|
| 704 |
+
|
| 705 |
+
This is just a constant when the kernel is bounded.
|
| 706 |
+
|
| 707 |
+
The difficulty function. With these parameters set up, we can now define the difficulty function used in Prop. 3:
|
| 708 |
+
|
| 709 |
+
$$
|
| 710 |
+
\phi (\mathbb {P}, \mathbf {k}) := \frac {\sqrt {K \left(\mathcal {S} _ {\max} (\mathbb {P}) + \mathcal {C} (\mathbb {P})\right)}}{\min _ {m} w _ {m} \, \Gamma_ {\min} ^ {2} (\mathbb {P})}. \tag {67}
|
| 711 |
+
$$
|
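Once the individual statistics have been estimated, assembling Eq. (67) is direct; the short helper below is our sketch, with all arguments assumed to be precomputed (for instance with Monte Carlo estimates like the ones above).

```python
# Sketch (ours): the difficulty function of Eq. (67) from precomputed statistics.
import numpy as np

def difficulty(K, S_max, coupling, gamma_min, weights):
    """phi(P, k) = sqrt(K * (S_max + C)) / (min_m w_m * Gamma_min^2)."""
    return np.sqrt(K * (S_max + coupling)) / (np.min(weights) * gamma_min ** 2)

# Hypothetical values for a 3-component mixture:
print(difficulty(K=3, S_max=0.05, coupling=0.01, gamma_min=0.4,
                 weights=[0.5, 0.3, 0.2]))
```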
| 712 |
+
|
| 713 |
+
# F.2 FINITE-SAMPLE CONE STRUCTURE ELEMENTS
|
| 714 |
+
|
| 715 |
+
To obtain Theorem 2 from (Schiebinger et al., 2015), we require additional concepts and notation. For two vectors $u, v \in \mathbb{R}^K$, we define the angle between them as $\mathrm{angle}(u, v) \coloneqq \arccos \frac{\langle u, v \rangle}{\|u\| \|v \|}$. An orthogonal cone structure (OCS) with parameters $\alpha, \theta$ is an embedding of $n$ labeled points $\{(X_i, Z_i \in [K])\}_{i \in [n]}$ into $\mathbb{R}^K$ such that, for each $m \in [K]$, we can find a subset $S_m$ containing at least a $(1 - \alpha)$ proportion of the points with $Z_i = m$, and any $K$ points, one taken from each of these subsets, have pairwise angles of at least $\theta$.
|
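The angle function is straightforward to implement; the sketch below (ours) also includes a naive check of the pairwise-angle condition for one representative point per component. It does not search for the subsets $S_m$ themselves, so it covers only part of the OCS definition.

```python
# Sketch (ours): the angle used in the orthogonal cone structure definition and
# a pairwise-angle check for K representative embedded points (one per class).
import numpy as np

def angle(u, v):
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def pairwise_angles_at_least(representatives, theta):
    """representatives: list of K vectors in R^K, one from each subset S_m."""
    K = len(representatives)
    return all(angle(representatives[i], representatives[j]) >= theta
               for i in range(K) for j in range(i + 1, K))
```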
| 716 |
+
|
| 717 |
+
In the derivation of Thm. 6, Schiebinger et al. (2015) also let $b$ be such that the kernel values satisfy $\mathbf{k} \in (0, b)$, and $r$ be such that $q_{m}(X^{m}) \geq r > 0$ with probability 1; $c_{0}, c_{1}, \ldots$ then denote constants that depend only on $b$ and $r$.
|
| 718 |
+
|
| 719 |
+
Table 3: Citation network details.
|
| 720 |
+
|
| 721 |
+
<table><tr><td></td><td>Nodes (N)</td><td>Edges</td><td>Features</td><td>Classes (C)</td></tr><tr><td>Cora</td><td>2708</td><td>10556</td><td>1433</td><td>7</td></tr><tr><td>CiteSeer</td><td>3327</td><td>9104</td><td>3703</td><td>6</td></tr><tr><td>PubMed</td><td>19717</td><td>88648</td><td>500</td><td>3</td></tr></table>
|
| 722 |
+
|
| 723 |
+
In conjunction with other works on the topic, they also defined a tail decay parameter:
|
| 724 |
+
|
| 725 |
+
$$
|
| 726 |
+
\psi (t) := \sum_ {m = 1} ^ {K} \mathbb {P} _ {m} \left[ \frac {q _ {m} ^ {2} (X)}{\| q _ {m} \| _ {\mathbb {P}} ^ {2}} < t \right] \tag {68}
|
| 727 |
+
$$
|
| 728 |
+
|
| 729 |
+
and an extra requirement on the difficulty function: that there exists a $\delta > 0$ such that:
|
| 730 |
+
|
| 731 |
+
$$
|
| 732 |
+
\phi (\mathbb {P}; K) + \frac {1}{\Gamma_ {\min } ^ {2} (\mathbb {P})} \left(\frac {1}{\sqrt {n}} + \delta\right) \leq c \Gamma_ {\min } ^ {2} (\mathbb {P}). \tag {69}
|
| 733 |
+
$$
|
| 734 |
+
|
| 735 |
+
In words, this means that the indivisibility parameter of the mixture model is not too small relative to the difficulty function. Finally, in the statement of Thm. 6, the difficulty parameter is reparameterized as the left-hand side of Eq. (69):
|
| 736 |
+
|
| 737 |
+
$$
|
| 738 |
+
\phi_ {n} (t) := \phi (\mathbb {P}; K) + \frac {1}{\Gamma_ {\min } ^ {2} (\mathbb {P})} \left(\frac {1}{\sqrt {n}} + \delta\right), \tag {70}
|
| 739 |
+
$$
|
| 740 |
+
|
| 741 |
+
where $n$ is the number of points in the dataset.
|
| 742 |
+
|
| 743 |
+
# G ADDITIONAL EXPERIMENT DETAILS
|
| 744 |
+
|
| 745 |
+
All the code for the numerical experiments was written using the PyTorch and PyTorch Geometric libraries. The first set of experiments was run on an Intel i7 CPU, and the second set on an NVIDIA A6000 GPU.
|
| 746 |
+
|
| 747 |
+
Transferability for node classification. The details of the citation network datasets used in this experiment are displayed in Table 3. To perform graphon sampling, the nodes in these networks were sorted by degree. We considered a 60-20-20 training-validation-test random split of the data for each realization. In all scenarios, we trained a GNN consisting of a 2-layer GCN with embedding dimension 32 and ReLU nonlinearity, and 1 readout layer followed by softmax. We minimized the negative log-likelihood using ADAM with learning rate 0.001 and default forgetting factors over 100 training epochs.
|
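A minimal PyTorch Geometric sketch of this setup is given below. It is our reconstruction from the description above rather than the authors' released code, and for brevity it loads Cora through `Planetoid` with its default split masks instead of the 60-20-20 random split and graphon-based sampling used in the paper.

```python
# Sketch (ours): 2-layer GCN (hidden dim 32, ReLU) with a linear readout and
# log-softmax, trained with Adam (lr=1e-3, default betas) on the NLL for 100
# epochs. Uses Cora with Planetoid's default masks purely for illustration.
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GCNConv

class GCN(torch.nn.Module):
    def __init__(self, in_dim, num_classes, hidden=32):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.readout = torch.nn.Linear(hidden, num_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        return F.log_softmax(self.readout(x), dim=-1)

data = Planetoid(root="data", name="Cora")[0]
model = GCN(data.num_node_features, num_classes=7)    # 7 classes in Cora (Table 3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(100):
    opt.zero_grad()
    out = model(data.x, data.edge_index)
    loss = F.nll_loss(out[data.train_mask], data.y[data.train_mask])
    loss.backward()
    opt.step()
```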
| 748 |
+
|
| 749 |
+
Positional encodings for graph classification. We anonymized the MalNet-Tiny dataset by removing the node features and replacing them with the all-ones signal. Since we focus on large graphs, we further removed any graphs with fewer than 4500 nodes. This brought the number of classes down to 4, and we additionally removed samples from certain classes at random to balance the class sizes, yielding a dataset with 216 graphs in total (54 per class). We considered a 60-20-20 random split of the data for each realization. In all scenarios, we trained a GNN consisting of a 4-layer GCN with embedding dimension 64 and ReLU nonlinearity, and 1 readout layer with mean aggregation followed by softmax. We minimized the negative log-likelihood using ADAM with batch size 8, learning rate 0.001, and default forgetting factors over 150 training epochs. The PEs are the first 10 eigenvectors of the normalized Laplacian, and to obtain the graphon-sampled subgraph, we fix $\lambda = \lambda_{10}$, $q = 20$, $p = 10$, 2 communities, and $r = 10$. A sketch of the PE computation is given below.
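For reference, here is a sketch (ours, not the authors' code) of how such Laplacian positional encodings can be computed for a single graph with SciPy; the graph filtering and class balancing described above are assumed to have been done beforehand.

```python
# Sketch (ours): positional encodings from the 10 eigenvectors of the symmetric
# normalized Laplacian associated with its smallest eigenvalues.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def laplacian_pe(edge_index: np.ndarray, num_nodes: int, k: int = 10):
    """edge_index: (2, E) array of edges; returns a (num_nodes, k) PE matrix."""
    row, col = edge_index
    A = sp.coo_matrix((np.ones(row.shape[0]), (row, col)),
                      shape=(num_nodes, num_nodes))
    A = ((A + A.T) > 0).astype(float)                   # symmetrize, drop weights
    deg = np.asarray(A.sum(axis=1)).ravel()
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))  # guard isolated nodes
    L = sp.eye(num_nodes) - sp.diags(d_inv_sqrt) @ A @ sp.diags(d_inv_sqrt)
    _, vecs = eigsh(L, k=k, which="SM")                 # k smallest eigenpairs
    return vecs
```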
|
2024/A Poincaré Inequality and Consistency Results for Signal Sampling on Large Graphs/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:bf92f05b7bb16a9552cc8aa4ef286172d260f1cf5081db5d3a0c48cc86403cf8
|
| 3 |
+
size 637080
|
2024/A Poincaré Inequality and Consistency Results for Signal Sampling on Large Graphs/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/A path-norm toolkit for modern networks_ consequences, promises and challenges/ced837d8-0083-44eb-9558-eded545a0f0a_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|