Chelsea707 committed
Commit 692ba94 · verified · 1 Parent(s): 8ceb0fb

Add Batch 0e5875b8-5ac3-453f-ad0f-b6a1db2d0a03 data

Files changed (49)
  1. .gitattributes +8 -0
  2. 2023/A CMDP-within-online framework for Meta-Safe Reinforcement Learning/c7054f4a-fa31-433d-8e70-a9c2dd232094_content_list.json +0 -0
  3. 2023/A CMDP-within-online framework for Meta-Safe Reinforcement Learning/c7054f4a-fa31-433d-8e70-a9c2dd232094_model.json +0 -0
  4. 2023/A CMDP-within-online framework for Meta-Safe Reinforcement Learning/c7054f4a-fa31-433d-8e70-a9c2dd232094_origin.pdf +3 -0
  5. 2023/A CMDP-within-online framework for Meta-Safe Reinforcement Learning/full.md +0 -0
  6. 2023/A CMDP-within-online framework for Meta-Safe Reinforcement Learning/images.zip +3 -0
  7. 2023/A CMDP-within-online framework for Meta-Safe Reinforcement Learning/layout.json +0 -0
  8. 2023/A Closer Look at Model Adaptation using Feature Distortion and Simplicity Bias/935af58b-9003-40bf-801f-467e4234ddf3_content_list.json +0 -0
  9. 2023/A Closer Look at Model Adaptation using Feature Distortion and Simplicity Bias/935af58b-9003-40bf-801f-467e4234ddf3_model.json +0 -0
  10. 2023/A Closer Look at Model Adaptation using Feature Distortion and Simplicity Bias/935af58b-9003-40bf-801f-467e4234ddf3_origin.pdf +3 -0
  11. 2023/A Closer Look at Model Adaptation using Feature Distortion and Simplicity Bias/full.md +332 -0
  12. 2023/A Closer Look at Model Adaptation using Feature Distortion and Simplicity Bias/images.zip +3 -0
  13. 2023/A Closer Look at Model Adaptation using Feature Distortion and Simplicity Bias/layout.json +0 -0
  14. 2023/A General Framework for Sample-Efficient Function Approximation in Reinforcement Learning/67359384-56cf-4363-aee6-c64cf6dd5faf_content_list.json +0 -0
  15. 2023/A General Framework for Sample-Efficient Function Approximation in Reinforcement Learning/67359384-56cf-4363-aee6-c64cf6dd5faf_model.json +0 -0
  16. 2023/A General Framework for Sample-Efficient Function Approximation in Reinforcement Learning/67359384-56cf-4363-aee6-c64cf6dd5faf_origin.pdf +3 -0
  17. 2023/A General Framework for Sample-Efficient Function Approximation in Reinforcement Learning/full.md +0 -0
  18. 2023/A General Framework for Sample-Efficient Function Approximation in Reinforcement Learning/images.zip +3 -0
  19. 2023/A General Framework for Sample-Efficient Function Approximation in Reinforcement Learning/layout.json +0 -0
  20. 2023/A Higher Precision Algorithm for Computing the $1$-Wasserstein Distance/da4f6dce-4c38-439e-aa6c-715a9f5477fe_content_list.json +1504 -0
  21. 2023/A Higher Precision Algorithm for Computing the $1$-Wasserstein Distance/da4f6dce-4c38-439e-aa6c-715a9f5477fe_model.json +0 -0
  22. 2023/A Higher Precision Algorithm for Computing the $1$-Wasserstein Distance/da4f6dce-4c38-439e-aa6c-715a9f5477fe_origin.pdf +3 -0
  23. 2023/A Higher Precision Algorithm for Computing the $1$-Wasserstein Distance/full.md +244 -0
  24. 2023/A Higher Precision Algorithm for Computing the $1$-Wasserstein Distance/images.zip +3 -0
  25. 2023/A Higher Precision Algorithm for Computing the $1$-Wasserstein Distance/layout.json +0 -0
  26. 2023/A Holistic View of Label Noise Transition Matrix in Deep Learning and Beyond/d1222a18-647e-4fda-98fe-6739b4983f8b_content_list.json +0 -0
  27. 2023/A Holistic View of Label Noise Transition Matrix in Deep Learning and Beyond/d1222a18-647e-4fda-98fe-6739b4983f8b_model.json +0 -0
  28. 2023/A Holistic View of Label Noise Transition Matrix in Deep Learning and Beyond/d1222a18-647e-4fda-98fe-6739b4983f8b_origin.pdf +3 -0
  29. 2023/A Holistic View of Label Noise Transition Matrix in Deep Learning and Beyond/full.md +0 -0
  30. 2023/A Holistic View of Label Noise Transition Matrix in Deep Learning and Beyond/images.zip +3 -0
  31. 2023/A Holistic View of Label Noise Transition Matrix in Deep Learning and Beyond/layout.json +0 -0
  32. 2023/A Laplace-inspired Distribution on SO(3) for Probabilistic Rotation Estimation/35030d36-496a-4051-9f1b-c6eb641c8ab4_content_list.json +0 -0
  33. 2023/A Laplace-inspired Distribution on SO(3) for Probabilistic Rotation Estimation/35030d36-496a-4051-9f1b-c6eb641c8ab4_model.json +0 -0
  34. 2023/A Laplace-inspired Distribution on SO(3) for Probabilistic Rotation Estimation/35030d36-496a-4051-9f1b-c6eb641c8ab4_origin.pdf +3 -0
  35. 2023/A Laplace-inspired Distribution on SO(3) for Probabilistic Rotation Estimation/full.md +694 -0
  36. 2023/A Laplace-inspired Distribution on SO(3) for Probabilistic Rotation Estimation/images.zip +3 -0
  37. 2023/A Laplace-inspired Distribution on SO(3) for Probabilistic Rotation Estimation/layout.json +0 -0
  38. 2023/A Minimalist Dataset for Systematic Generalization of Perception, Syntax, and Semantics/99bfc8d6-393c-4dd5-956f-bfee4aa8668a_content_list.json +0 -0
  39. 2023/A Minimalist Dataset for Systematic Generalization of Perception, Syntax, and Semantics/99bfc8d6-393c-4dd5-956f-bfee4aa8668a_model.json +0 -0
  40. 2023/A Minimalist Dataset for Systematic Generalization of Perception, Syntax, and Semantics/99bfc8d6-393c-4dd5-956f-bfee4aa8668a_origin.pdf +3 -0
  41. 2023/A Minimalist Dataset for Systematic Generalization of Perception, Syntax, and Semantics/full.md +354 -0
  42. 2023/A Minimalist Dataset for Systematic Generalization of Perception, Syntax, and Semantics/images.zip +3 -0
  43. 2023/A Minimalist Dataset for Systematic Generalization of Perception, Syntax, and Semantics/layout.json +0 -0
  44. 2023/A Model or 603 Exemplars_ Towards Memory-Efficient Class-Incremental Learning/4e97ac8a-8041-408b-8c69-b9855a34c746_content_list.json +0 -0
  45. 2023/A Model or 603 Exemplars_ Towards Memory-Efficient Class-Incremental Learning/4e97ac8a-8041-408b-8c69-b9855a34c746_model.json +0 -0
  46. 2023/A Model or 603 Exemplars_ Towards Memory-Efficient Class-Incremental Learning/4e97ac8a-8041-408b-8c69-b9855a34c746_origin.pdf +3 -0
  47. 2023/A Model or 603 Exemplars_ Towards Memory-Efficient Class-Incremental Learning/full.md +0 -0
  48. 2023/A Model or 603 Exemplars_ Towards Memory-Efficient Class-Incremental Learning/images.zip +3 -0
  49. 2023/A Model or 603 Exemplars_ Towards Memory-Efficient Class-Incremental Learning/layout.json +0 -0
.gitattributes CHANGED
@@ -6000,3 +6000,11 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  2024/Whole-Song[[:space:]]Hierarchical[[:space:]]Generation[[:space:]]of[[:space:]]Symbolic[[:space:]]Music[[:space:]]Using[[:space:]]Cascaded[[:space:]]Diffusion[[:space:]]Models/f11a4f77-f4eb-4b80-82ca-c3fbfdf7043f_origin.pdf filter=lfs diff=lfs merge=lfs -text
  2024/WildChat_[[:space:]]1M[[:space:]]ChatGPT[[:space:]]Interaction[[:space:]]Logs[[:space:]]in[[:space:]]the[[:space:]]Wild/e4fb83f9-4039-46fd-a67d-9cc7caaa235c_origin.pdf filter=lfs diff=lfs merge=lfs -text
  2024/iTransformer_[[:space:]]Inverted[[:space:]]Transformers[[:space:]]Are[[:space:]]Effective[[:space:]]for[[:space:]]Time[[:space:]]Series[[:space:]]Forecasting/f33aa952-63c4-429d-b0f5-60e2239d045d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]CMDP-within-online[[:space:]]framework[[:space:]]for[[:space:]]Meta-Safe[[:space:]]Reinforcement[[:space:]]Learning/c7054f4a-fa31-433d-8e70-a9c2dd232094_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Closer[[:space:]]Look[[:space:]]at[[:space:]]Model[[:space:]]Adaptation[[:space:]]using[[:space:]]Feature[[:space:]]Distortion[[:space:]]and[[:space:]]Simplicity[[:space:]]Bias/935af58b-9003-40bf-801f-467e4234ddf3_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]General[[:space:]]Framework[[:space:]]for[[:space:]]Sample-Efficient[[:space:]]Function[[:space:]]Approximation[[:space:]]in[[:space:]]Reinforcement[[:space:]]Learning/67359384-56cf-4363-aee6-c64cf6dd5faf_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Higher[[:space:]]Precision[[:space:]]Algorithm[[:space:]]for[[:space:]]Computing[[:space:]]the[[:space:]]$1$-Wasserstein[[:space:]]Distance/da4f6dce-4c38-439e-aa6c-715a9f5477fe_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Holistic[[:space:]]View[[:space:]]of[[:space:]]Label[[:space:]]Noise[[:space:]]Transition[[:space:]]Matrix[[:space:]]in[[:space:]]Deep[[:space:]]Learning[[:space:]]and[[:space:]]Beyond/d1222a18-647e-4fda-98fe-6739b4983f8b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Laplace-inspired[[:space:]]Distribution[[:space:]]on[[:space:]]SO(3)[[:space:]]for[[:space:]]Probabilistic[[:space:]]Rotation[[:space:]]Estimation/35030d36-496a-4051-9f1b-c6eb641c8ab4_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Minimalist[[:space:]]Dataset[[:space:]]for[[:space:]]Systematic[[:space:]]Generalization[[:space:]]of[[:space:]]Perception,[[:space:]]Syntax,[[:space:]]and[[:space:]]Semantics/99bfc8d6-393c-4dd5-956f-bfee4aa8668a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Model[[:space:]]or[[:space:]]603[[:space:]]Exemplars_[[:space:]]Towards[[:space:]]Memory-Efficient[[:space:]]Class-Incremental[[:space:]]Learning/4e97ac8a-8041-408b-8c69-b9855a34c746_origin.pdf filter=lfs diff=lfs merge=lfs -text
2023/A CMDP-within-online framework for Meta-Safe Reinforcement Learning/c7054f4a-fa31-433d-8e70-a9c2dd232094_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A CMDP-within-online framework for Meta-Safe Reinforcement Learning/c7054f4a-fa31-433d-8e70-a9c2dd232094_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A CMDP-within-online framework for Meta-Safe Reinforcement Learning/c7054f4a-fa31-433d-8e70-a9c2dd232094_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dbca04eb1ab688e77f436806c5ab7e17f15bc8206dcd3e12122ad20e22777749
+ size 1611694
2023/A CMDP-within-online framework for Meta-Safe Reinforcement Learning/full.md ADDED
The diff for this file is too large to render. See raw diff
 
2023/A CMDP-within-online framework for Meta-Safe Reinforcement Learning/images.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:288301451a0005a5efe2b1fffd1eee34a85a89a7d18b32c911c3bae311268af2
+ size 2230173
2023/A CMDP-within-online framework for Meta-Safe Reinforcement Learning/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Closer Look at Model Adaptation using Feature Distortion and Simplicity Bias/935af58b-9003-40bf-801f-467e4234ddf3_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Closer Look at Model Adaptation using Feature Distortion and Simplicity Bias/935af58b-9003-40bf-801f-467e4234ddf3_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Closer Look at Model Adaptation using Feature Distortion and Simplicity Bias/935af58b-9003-40bf-801f-467e4234ddf3_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1b87f001b80c3ff0f9504ad932adbde0801a93d4ccbca9ae36becbcbaf5d3a76
+ size 1706988
2023/A Closer Look at Model Adaptation using Feature Distortion and Simplicity Bias/full.md ADDED
@@ -0,0 +1,332 @@
# A CLOSER LOOK AT MODEL ADAPTATION USING FEATURE DISTORTION AND SIMPLICITY BIAS

**Puja Trivedi**
CSE Department
University of Michigan
pujat@umich.edu

**Danai Koutra**
CSE Department
University of Michigan
dkoutra@umich.edu

**Jayaraman J. Thiagarajan**
Center for Applied Scientific Computing
Lawrence Livermore Natl. Laboratory
jjayaram@llnl.gov

# ABSTRACT

Advances in the expressivity of pretrained models have increased interest in the design of adaptation protocols which enable safe and effective transfer learning. Going beyond conventional linear probing (LP) and fine-tuning (FT) strategies, protocols that can effectively control feature distortion, i.e., the failure to update features orthogonal to the in-distribution subspace, have been found to achieve improved out-of-distribution (OOD) generalization. In order to limit this distortion, the LP+FT protocol, which first learns a linear probe and then uses this initialization for subsequent FT, was proposed. However, in this paper, we find that when adaptation protocols (LP, FT, LP+FT) are also evaluated on a variety of safety objectives (e.g., calibration, robustness, etc.), a complementary perspective to feature distortion is helpful to explain protocol behavior. To this end, we study the susceptibility of protocols to simplicity bias (SB), i.e., the well-known propensity of deep neural networks to rely upon simple features, as SB has recently been shown to underlie several problems in robust generalization. Using a synthetic dataset, we demonstrate the susceptibility of existing protocols to SB. Given the strong effectiveness of LP+FT, we then propose modified linear probes that help mitigate SB, and lead to better initializations for subsequent FT. We verify the effectiveness of the proposed LP+FT variants for decreasing SB in a controlled setting, and their ability to improve OOD generalization and safety on three adaptation datasets.
# 1 INTRODUCTION

Through the use of larger datasets (Yalniz et al., 2019), better architectures (Zhai et al., 2022; Chen et al., 2022; Steiner et al., 2022; Tolstikhin et al., 2021), and different self-supervised learning (SSL) approaches (He et al., 2020; Chen et al., 2020; Grill et al., 2020; Caron et al., 2020), the quality of pretrained representations available for transfer learning tasks has dramatically improved. Indeed, representations from such high-quality SSL models have been found to be more robust (Hendrycks et al., 2019; Liu et al., 2021), transferable (Ericsson et al., 2021) and semantically consistent (Caron et al., 2021) than their supervised counterparts. In this regard, there is a growing need for adaptation protocols that explicitly capitalize on these improved pretrained features to induce similarly beneficial properties, e.g., more than just high accuracy on the target task, after models have been adapted to the downstream task.
However, standard adaptation protocols that rely upon fine-tuning (FT) all model parameters or training only a linear probe (LP) while freezing the network parameters do not maximize the potential of high-quality representations. For example, while high-quality, pretrained models have sufficiently expressive features to perform well on both in-distribution (ID) and out-of-distribution (OOD) data, LP and FT are not able to effectively induce this property in adapted models (Andreassen et al., 2021). Recently, however, Kumar et al. (2022) proved that by modifying features only in the ID representation subspace, FT can lead to higher OOD error, as it distorts directions outside the ID subspace that are needed for OOD generalization. As both ID and OOD subspaces are represented by the pretrained model, Kumar et al. demonstrate that limiting feature distortion, or controlling updates towards the ID subspace, can lead to improved ID and OOD performance. To this end, they propose a new protocol which performs LP prior to FT (abbrev. LP+FT). By first performing LP, this two-step process ensures that subsequent FT will remain in the vicinity of the original LP solution, which reduces the overall distortion towards the ID distribution subspace and improves performance.

![](images/3b5fe8dc3cb45bb1cfc3e1aab0cdc37c0119ab0a813d10df125052f203c72ad3.jpg) ![](images/6838664ae35efa9cce3bc9dc582c1732190e65321ffce58bafd6167927da69ed.jpg) ![](images/e0f03740369bcbe809af0db98cb307504aa7f51621b4521326a31236f3672971.jpg) ![](images/a4304be764a611dea3ddb5c1903295b3552a51a4324d689e175cf705ffd22fcb.jpg) ![](images/ff700506543127d7900ca483b06118f849555b514db12b0e68f196a7b28c9218.jpg) ![](images/b145ad0ee9000969ca0a00c68c413a7497a25dae88b7e7ce46a8fa6b0e1d9c76.jpg)
Figure 1: Strong and Safe Adaptation. Practical deployment in high-risk applications requires that adapted models not only generalize well to in- and out-of-distribution data of the downstream task, but that they do so safely. (Panels: ID Data, OOD Data, Corruptions, Adversaries, Anomalies.)
While strong ID and OOD generalization on the target task is indeed an important aspect of transfer learning, practical, high-risk applications require that models are also safe (Hendrycks et al., 2021). For example, adapted models should also be well-calibrated, robust to corruptions or adversaries, and able to reliably detect anomalous samples (see Figure 1). Given that existing adaptation protocols are primarily focused on improving generalization, it is unclear how existing protocols utilize high-quality pretrained features to promote safe adaptation, and whether current protocol design perspectives, such as mitigating feature distortion, will also enable safe generalization.

Our Work: In this work, we seek to understand the factors relevant to the design of adaptation protocols that promote effective and safe generalization. We take the first step towards this aim by (i) demonstrating limitations in existing LP, FT, and LP+FT protocols through an extensive, joint evaluation, and (ii) studying adaptation protocols through the complementary lens of avoiding simplicity bias, i.e., the problematic tendency of deep neural networks (DNNs) to prefer simple, potentially brittle features over complex features (Soudry et al., 2018; Gunasekar et al., 2018; Geirhos et al., 2019; Hermann et al., 2020; Shah et al., 2020). Using the insights from our analysis, we propose three variants of the LP+FT protocol that jointly improve safety and generalization on three datasets. Our contributions can be summarized as follows:
- Joint Analysis of Adaptation Protocol Safety and Generalization (Sec. 3). We show that when adaptation protocols are evaluated with respect to both ID/OOD generalization and safety, LP+FT trails behind LP or FT on several safety metrics. This demonstrates that solely mitigating feature distortion may not be sufficient for safe generalization. We also observe that keeping subsequent FT close to the LP solution is crucial for the improved OOD generalization of LP+FT. This motivates us to focus on improving the LP initialization as a mechanism for improving both safety and OOD performance.
- Role of Simplicity Bias in (Unsafe) Adaptation (Sec. 4). To understand how protocols may induce safe adaptation, we study how different protocols avoid simplicity bias. While simplicity bias (Shah et al., 2020; Geirhos et al., 2019) has been shown to underlie several problems in machine learning safety, to the best of our knowledge, we are the first to consider its role in adaptation settings. We demonstrate that protocols must not only reduce distortion, but should also mitigate simplicity bias for effective adaptation.
- Improved Protocols for Mitigating Simplicity Bias and Distortion (Sec. 5). We propose three simple, modified LP+FT protocols that help mitigate both simplicity bias and distortion (Sec. 4.1). In particular, we consider modifying the LP step with uncertainty-driven perturbations (Pagliardini et al., 2022), virtual adversarial training (Miyato et al., 2017) and model soups (Wortsman et al., 2022), as they are simple and effective strategies. Across synthetic and real datasets, the modified protocols help improve safety and generalization to some extent.
# 2 RELATED WORK AND BACKGROUND

Here, we discuss the most relevant work on adaptation protocols and simplicity bias; we discuss additional related work in Sup. A.
Adaptation Protocols. For a comprehensive overview of transfer learning, please see the surveys of Zhuang et al. (2021) and Pan & Yang (2010). Here, we discuss the works that are most relevant to our own. Kirichenko et al. (2022) recently demonstrated that models are able to learn both core features and spurious features; however, classifiers can come to rely upon the spurious features, harming performance on minority groups. To reduce this reliance on spurious features, they propose to retrain the classifier on a small amount of "re-weighting" data, which allows the model to leverage the core features instead of the spurious features. Other modifications and heuristics have also been proposed to improve FT's performance, including side-tuning (Zhang et al., 2019), which tunes a small secondary network that is then combined with the original model, using larger/smaller learning rates for the classifier, as well as regularization-based methods (Jiang et al., 2020). In this work, we focus on two popular and effective protocols, LP and FT. We additionally study the LP+FT protocol as it is theoretically grounded, does not require re-weighting data, is designed to exploit high-quality pretrained representations, and achieves SOTA OOD performance during adaptation.
Simplicity Bias. It is well-known that DNNs demonstrate a bias toward simple, potentially less expressive features (Brutzkus et al., 2017; Soudry et al., 2018; Gunasekar et al., 2018; Geirhos et al., 2019; Hermann et al., 2020; Lubana et al., 2023), such as textures and backgrounds, and that this bias can lead to shortcuts that limit the generalization of DNNs. Indeed, Shah et al. (2020) recently formalized this intuition by more precisely defining simplicity bias, based on the number of linear components needed to define a decision boundary, and showed that SB leads to non-robust decision boundaries that affect a model's sensitivity to distribution shifts and adversarial perturbations. In brief, by learning simple features first, models become invariant to complex features, potentially leading to narrow decision boundaries which can fail to generalize under data shifts. Notably, DNNs exhibit this bias even when complex features are more expressive and necessary for fitting the distribution. While various techniques have recently been proposed to mitigate simplicity bias when training from scratch or in the context of pretraining (Teney et al., 2021), we are, to the best of our knowledge, the first to rigorously study the role of simplicity bias in the context of model adaptation.
# 3 JOINT ANALYSIS OF PROTOCOL SAFETY AND GENERALIZATION

In this section, we evaluate the performance of adaptation protocols across several additional safety objectives (Hendrycks et al., 2021), as practical transfer learning applications require both strong and safe generalization. Through this expanded evaluation, we find that no single protocol is optimal across all safety objectives. Indeed, the inability of LP+FT to induce safe adaptation indicates that a complementary perspective to feature distortion, namely simplicity bias, is necessary when designing generalizable and safe protocols (see Sec. 4). We further argue that by constraining models around the LP initialization during FT, LP+FT may inadvertently harm safety performance by hampering models' ability to learn complex, task-specific features needed for robust generalization. While we expand upon the role of the LP initialization in Secs. 4 and 5, we begin, here, by introducing the expanded evaluation and experimental setup.
**Experimental Setup.** Three downstream adaptation tasks (and their respective OOD distributions) are considered: CIFAR-10 (ID) $\rightarrow$ {STL10, CIFAR10.1} (OOD), DomainNet-Sketch $\rightarrow$ {DomainNet-ClipArt, DomainNet-Painting, DomainNet-Real}, and Living17 (Source) $\rightarrow$ Living17 (Target). These datasets are selected as they correspond to two different types of distribution shifts (standard domain adaptation and subpopulation shift) and three levels of distortion (low, medium, high). A MoCo-V2 ResNet-50 (He et al., 2020) pretrained on ImageNet-1K is used as the base feature extractor for CIFAR10 and Living17 experiments, and the CLIP ResNet-50 image encoder pretrained on 400 million (image, text) pairs is used for DomainNet-Sketch. These models are selected as they provide sufficiently high-quality representations capable of generalizing to both ID and OOD downstream data (Kumar et al., 2022). We perform grid search to find the best hyper-parameters, and average over 3 seeds. See Sup. B.2 for additional details.
Expanded Evaluation. In addition to OOD accuracy on the aforementioned distribution shifts, we report performance on the following metrics in order to evaluate adapted models on key problems in machine learning safety (Hendrycks et al., 2021). Our evaluation setup is inspired by Hendrycks et al. (2022):
- Mean Corruption Accuracy ($mCA$ / $m\overline{CA}$): We consider two sets of corruptions: the 15 naturalistic corruptions ($Corr$) (Hendrycks & Dietterich, 2019), and 10 perceptually dissimilar corruptions ($\overline{Corr}$) (Mintun et al., 2021). Corruptions are applied to the ID test dataset and the average accuracy over each set is reported.
- Calibration Error (RMSE): It is important that models are well-calibrated so that practitioners may trust the provided predictions in high-risk applications (Guo et al., 2017). We measure the root mean square error of calibration as follows: $\sqrt{\mathbb{E}_{\mathrm{C}}\left[\left(\mathbb{P}(Y = \hat{Y} \mid \mathrm{C} = \mathrm{c}) - \mathrm{c}\right)^{2}\right]}$, where $\mathrm{C}$ denotes the confidence scores, while $\hat{Y}$ and $Y$ denote the model's predictions and ground-truth labels, respectively. (A small sketch of this metric follows the list.)
- Anomaly Detection Performance (AUROC): Recognizing when samples are anomalous allows models to abstain from making uninformed and inapplicable predictions. We consider samples from the Blobs, Gaussian, LSUN, Places69, Rademacher, Textures, and SVHN datasets as anomalies and report the AUROC (area under the ROC curve) of the binary classification problem of detecting such samples as anomalies.
- Adversarial Accuracy: DNNs are well-known to be fooled by imperceptible distortions (Ilyas et al., 2019). We use a 2/255, 10-step PGD (Madry et al., 2018) attack to measure the robustness of models to such perturbations.
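The RMS calibration error above is typically estimated by binning predictions by confidence. The following is a minimal NumPy sketch of that estimator, not the authors' code; the bin count and function name are illustrative choices.

```python
import numpy as np

def rms_calibration_error(confidences, correct, n_bins=15):
    """Binned estimate of sqrt( E_C[ (P(Y = Y_hat | C = c) - c)^2 ] ):
    within each confidence bin, compare average accuracy to average
    confidence, weighting each bin by its share of the samples."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    sq_err = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        gap = correct[mask].mean() - confidences[mask].mean()
        sq_err += (mask.sum() / len(confidences)) * gap ** 2
    return np.sqrt(sq_err)

# Example: softmax confidences and 0/1 correctness flags.
# rms_calibration_error([0.9, 0.8, 0.95], [1, 0, 1])
```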
We make the following observations regarding the behavior of different protocols using this expanded evaluation. In brief, we find that no single protocol is effective across all datasets in jointly obtaining strong and safe adaptation, and that, on low-distortion adaptation tasks, the quality of the LP initialization is critical, as the pretrained feature extractor is not substantially updated during LP+FT.
# OBSERVATION 1: MITIGATING FEATURE DISTORTION MAY NOT INDUCE SAFE ADAPTATION.

Here, we ask how protocols perform when we consider both safety and generalization objectives, to better understand the feature distortion perspective. In particular, if LP+FT is able to outperform LP and FT in this expanded evaluation, then it suggests that solely mitigating feature distortion during FT may be sufficient to induce robust adaptation. To test this claim, we rank protocol performance for each safety metric, where ranks are first computed for each dataset separately, and then averaged. Results are shown in Fig. 2; smaller ranks correspond to better performance.
Results. LP+FT obtains the best rank for ID and OOD accuracy as expected, as well as for $Corr$ and $\overline{Corr}$ accuracy. However, we also see that FT is better ranked for Adversarial Accuracy and OOD calibration, while LP is better ranked for ID calibration and $Corr$ calibration. Given that LP+FT trails behind protocols that are not explicitly designed to limit distortion on some safety metrics, it is clear that a complementary perspective is needed to better understand protocol behavior. That said, LP+FT has the best average rank, indicating that it is a good starting point to improve upon.

The above results are aggregated across different types of distribution shifts; we extend this analysis next by considering the interplay between individual datasets and protocol performance. These detailed results are presented in Table 1.
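As an illustration of the ranking procedure, the snippet below computes per-dataset ranks for a single metric and averages them, using the OOD accuracies from Table 1; the dense-ranking scheme is our assumption, as the paper does not specify tie handling.

```python
import numpy as np

# OOD accuracy per protocol on (CIFAR10, DomainNet, Living17), from Table 1.
protocols = ["LP", "FT", "LP+FT"]
scores = np.array([
    [0.8190, 0.8013, 0.8124],   # LP
    [0.8754, 0.4522, 0.7168],   # FT
    [0.8678, 0.7990, 0.8261],   # LP+FT
])

# Rank within each dataset column (rank 1 = best), then average over columns.
ranks = (-scores).argsort(axis=0).argsort(axis=0) + 1
print(dict(zip(protocols, ranks.mean(axis=1))))  # LP+FT obtains the best (lowest) average rank
```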
![](images/18fa479f5e930fe5509c57ce8619ff10697ecf3543b723e0ae365350b15e2b64.jpg)
Figure 2: Disparate Performance of Protocols. We plot the average rank of each protocol for different safety and generalization metrics. We see that no single protocol achieves the top rank across all metrics.
# OBSERVATION 2: LINEAR PROBING SOLUTIONS MATTER.

Naturally, the amount of distortion required to effectively adapt a pretrained model to a downstream task will vary in accordance with the similarity of the downstream and pretraining data. Here, we seek to understand how protocols behave under different levels of distortion. In particular, we hypothesize that the LP initialization becomes more influential for LP+FT in low-distortion settings, as subsequent FT remains in the vicinity of the initialization. To this end, we compute the batched centered kernel alignment (CKA) score (Nguyen et al., 2021) between the adapted and pretrained models, and take a closer look at performance across metrics. We note that while CKA is better suited for measuring distortion than the L2 norm used by Kumar et al. (2022), other neural representation metrics can also be used (Ding et al., 2021; Davari et al., 2023).
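For reference, the sketch below computes plain linear CKA between two feature matrices; this is a minimal sketch, assuming non-batched linear CKA over features extracted from the same inputs, whereas the paper uses the minibatch estimator of Nguyen et al. (2021).

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between feature matrices X: (n, d1) and Y: (n, d2)
    extracted from the same n inputs; 1.0 indicates identical
    representations up to rotation/scaling, lower values indicate
    more distortion."""
    X = X - X.mean(axis=0, keepdims=True)  # center each feature dimension
    Y = Y - Y.mean(axis=0, keepdims=True)
    num = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    den = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return num / den
```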
Results. As shown in Fig. 3, we see that minimal distortion (CKA $\geq 0.9$) is required to obtain competitive LP+FT performance on DomainNet and Living17. However, on CIFAR10, which requires the most distortion as evidenced by lower CKA scores, FT is the most effective protocol for safety measures and is very comparable on generalization performance (see Table 1).

The effectiveness of LP and LP+FT on Living17 in improving OOD generalization over FT is hardly surprising, as Living17 is a subset of ImageNet, on which the base feature encoder was already trained.
![](images/9a0c3747279f1ea527545f2a95faf62246183855ee914de2648e12c6f154a64d.jpg)
Figure 3: Dataset Distortion. We plot the CKA similarity between adapted and pretrained models. DomainNet and Living17 require low distortion, as seen by the performance of LP+FT across metrics with high CKA $(>0.9)$.
In contrast, on DomainNet, the difficulty of FT in matching the ID test-task performance, despite achieving high training accuracy, suggests FT may learn a solution that relies upon shortcuts (or simple features) that do not generalize. We emphasize that LP+FT greatly benefits from strong LP initializations on these low-distortion datasets, as the corresponding CKA scores show that very limited updates are made during FT. While LP+FT does induce meaningful improvements over LP on Living17 and performs comparably to LP on DomainNet, we stress that the model must be kept close to the LP initialization during FT. Indeed, to obtain acceptable LP+FT performance, small learning rates (3e-7, 1e-5) and frozen batch-norm parameters during FT are necessary.
Summary. Taken jointly, these results suggest that while solely mitigating feature distortion may not be sufficient to ensure that adapted models perform well on safety metrics across different levels of shift, improving the LP initialization may be a viable solution to obtaining strong and safe generalization. Indeed, the effectiveness of LP+FT on low-distortion datasets and its high average ranking indicate that it is a promising protocol to build upon. To understand how to build better protocols, we next introduce simplicity bias as a complementary perspective to feature distortion.
<table><tr><td colspan="2"></td><td colspan="2">Generalization</td><td colspan="3">Robustness</td><td colspan="4">Calibration</td><td>Anomaly Det.</td><td>Rep. Similarity</td></tr><tr><td>Dataset</td><td>Protocol</td><td>ID Acc.</td><td>OOD Acc.</td><td>$C$ Acc.</td><td>$\overline{C}$ Acc.</td><td>Adv. Acc.</td><td>ID 1-RMS</td><td>$C$ 1-RMS</td><td>$\overline{C}$ 1-RMS</td><td>OOD 1-RMS</td><td>Out-of-Class AUROC</td><td>ID CKA</td></tr><tr><td>CIFAR10</td><td>LP</td><td>0.9138</td><td>0.8188/0.8192</td><td>0.6912</td><td>0.6553</td><td>0.0003</td><td>0.9595</td><td>0.8303</td><td>0.8142</td><td>0.8696</td><td>0.6206</td><td>1.0000</td></tr><tr><td>CIFAR10</td><td>FT</td><td>0.9539</td><td>0.8962/0.8545</td><td>0.7434</td><td>0.7553</td><td>0.0231</td><td>0.9668</td><td>0.8364</td><td>0.8453</td><td>0.9232</td><td>1.0000</td><td>0.6831</td></tr><tr><td>CIFAR10</td><td>LP+FT</td><td>0.9442</td><td>0.8775/0.8581</td><td>0.6921</td><td>0.6790</td><td>0.0018</td><td>0.9521</td><td>0.7849</td><td>0.7721</td><td>0.8863</td><td>0.6511</td><td>0.7853</td></tr><tr><td>DomainNet</td><td>LP</td><td>0.8913</td><td>0.8013</td><td>0.6019</td><td>0.6020</td><td>0.1768</td><td>0.9638</td><td>0.9045</td><td>0.8571</td><td>0.9264</td><td>0.8679</td><td>1.0000</td></tr><tr><td>DomainNet</td><td>FT</td><td>0.7613</td><td>0.4522</td><td>0.5186</td><td>0.2744</td><td>0.4164</td><td>0.8368</td><td>0.7234</td><td>0.7234</td><td>0.6379</td><td>0.8841</td><td>0.6092</td></tr><tr><td>DomainNet</td><td>LP+FT</td><td>0.8985</td><td>0.7990</td><td>0.6343</td><td>0.5979</td><td>0.1927</td><td>0.9566</td><td>0.8445</td><td>0.8445</td><td>0.8899</td><td>0.9022</td><td>0.9222</td></tr><tr><td>Living17</td><td>LP</td><td>0.9521</td><td>0.8124</td><td>0.7010</td><td>0.7377</td><td>0.2350</td><td>0.9313</td><td>0.8693</td><td>0.8801</td><td>0.9117</td><td>0.9907</td><td>1.0000</td></tr><tr><td>Living17</td><td>FT</td><td>0.9518</td><td>0.7168</td><td>0.7011</td><td>0.7164</td><td>0.1563</td><td>0.8873</td><td>0.9019</td><td>0.8604</td><td>0.9295</td><td>0.9794</td><td>0.7847</td></tr><tr><td>Living17</td><td>LP+FT</td><td>0.9643</td><td>0.8261</td><td>0.7426</td><td>0.7671</td><td>0.2135</td><td>0.9782</td><td>0.9472</td><td>0.9451</td><td>0.8742</td><td>0.9924</td><td>0.9887</td></tr></table>

Table 1: Safety and Generalization Performance of Adaptation Protocols. (Best in bold. Second best underlined.) While LP+FT indeed achieves strong ID and OOD performance across datasets, we see that different protocols may perform better when safety evaluation is also considered. For CIFAR-10, which requires the most distortion as evidenced by lower CKA scores, we see that FT is the most effective; LP+FT and LP are most effective, respectively, on Living17 and DomainNet, which require significantly less distortion. This suggests that, while mitigating feature distortion is effective for improving generalization, it is not always sufficient for also improving safety.
# 4 MITIGATING SIMPLICITY BIAS & FEATURE DISTORTION FOR SAFE ADAPTATION

As discussed in Sec. 2, simplicity bias (SB) underlies various safety issues in machine learning, as models may learn to rely upon simple features that often do not generalize under distribution shifts, such as corruptions or adversaries (Shah et al., 2020). Therefore, we argue that mitigating feature distortion in a way that minimizes this bias can be a valid mechanism to improve both safety and generalization performance. Correspondingly, in this section, we first measure the propensity of different protocols to simplicity bias in a controlled setting. In particular, given our previous observation that LP+FT models remain in close vicinity of the LP solution after FT, we focus on improving the performance of the initial LP solution so that we may capitalize upon LP+FT's strong OOD performance while simultaneously improving safety. To this end, we propose three lightweight LP+FT variants that are able to both reduce distortion and SB. We begin by introducing our synthetic dataset and experimental setup.
Dataset. As shown in Fig. 4, we create "dominoes" of complex and simple features (Shah et al., 2020) by pairing each class from CIFAR10 (complex) with the corresponding "digit" class in MNIST (simple), e.g., "bird" samples are paired with digit "2" samples, where the label for each domino is determined by the complex, CIFAR10 sample. Datasets with three levels of correlation (95%, 99%, 100%) between the simple and complex features are constructed for training. While 100% correlation allows models to learn only the simple feature and still generalize perfectly, the more realistic lower-correlation settings require that models learn at least some aspect of the complex features.

Experimental Setup. For evaluation, we also construct a randomized (10% correlation) variant, where simple features are randomly paired with complex features; we give two examples in panels 3 and 4 of Fig. 4. To assess OOD generalization, we create a variant where complex features are sampled from STL10 instead of CIFAR10, e.g., panels 1 and 2 in Fig. 4. (A sketch of the domino construction follows below.)
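The following is a minimal sketch of the domino construction as we read it, not the authors' code; the exact image layout (digit patch stacked above the CIFAR image) and the helper names are assumptions, while the class pairing and correlation knob follow the description above.

```python
import numpy as np

def make_domino(cifar_img, mnist_img):
    """Stack a simple (MNIST) patch above a complex (CIFAR-10) image;
    the domino's label is that of the CIFAR-10 half.
    Assumes cifar_img is (32, 32, 3) and mnist_img is (28, 28), uint8."""
    top = np.zeros((32, 32, 3), dtype=np.uint8)
    top[2:30, 2:30, :] = mnist_img[..., None]        # grayscale digit -> RGB
    return np.concatenate([top, cifar_img], axis=0)  # (64, 32, 3)

def paired_digit_indices(cifar_labels, mnist_labels, correlation, rng):
    """For each CIFAR sample, draw an MNIST digit of the matching class
    with probability `correlation` (e.g., 0.95/0.99/1.0 for training,
    0.10 for the randomized evaluation variant), else a random class."""
    by_digit = {d: np.flatnonzero(mnist_labels == d) for d in range(10)}
    picks = []
    for y in cifar_labels:
        d = int(y) if rng.random() < correlation else int(rng.integers(10))
        picks.append(rng.choice(by_digit[d]))
    return np.asarray(picks)
```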
Metrics. We assess the reliance on simple features using the following metrics: (1) Randomized Accuracy: the accuracy on the variant where samples contain random pairings between simple and complex features; (2) Correlated Accuracy: the accuracy when pairings between simple and complex features remain correlated. Models that are susceptible to simplicity bias will have high Correlated Accuracy and low Randomized Accuracy. Likewise, models that are not susceptible to simplicity bias will have relatively lower Correlated Accuracy and higher Randomized Accuracy.
Training Details. A MoCo-V2 ResNet-50 (He et al., 2020) pretrained on ImageNet-1K is the base feature extractor. See Supp. B.2 for additional details. We performed grid search to find the best parameters. Results are reported over 3 seeds and 3 correlation strengths.
![](images/487097244bb26bad9b17f4ac60b6de15002d1fa9d4ddba930620fcac612b5f11.jpg)
Figure 4: Synthetic Data with Simple and Complex Features. Using a synthetic dominoes dataset (Shah et al., 2020), we study the effect of simplicity bias on safety and OOD generalization.
<table><tr><td rowspan="2">Protocol</td><td colspan="3">Correlation = 0.95</td><td colspan="3">Correlation = 0.99</td><td colspan="3">Correlation = 1.00</td></tr><tr><td>Corr. ID Acc.</td><td>Corr. OOD Acc.</td><td>Rand. OOD Acc.</td><td>Corr. ID Acc.</td><td>Corr. OOD Acc.</td><td>Rand. OOD Acc.</td><td>Corr. ID Acc.</td><td>Corr. OOD Acc.</td><td>Rand. OOD Acc.</td></tr><tr><td>LP</td><td>0.9728</td><td>0.9156</td><td>0.7910</td><td>0.9809</td><td>0.9386</td><td>0.7862</td><td>0.9836</td><td>0.9505</td><td>0.7696</td></tr><tr><td>FT</td><td>0.9866</td><td>0.9814</td><td>0.6021</td><td>0.9923</td><td>0.9928</td><td>0.3629</td><td>0.9855</td><td>0.9789</td><td>0.2697</td></tr><tr><td>LP+FT</td><td>0.9902</td><td>0.9793</td><td>0.8422</td><td>0.9844</td><td>0.9484</td><td>0.7813</td><td>0.9874</td><td>0.9594</td><td>0.7548</td></tr><tr><td>FT (Scratch)</td><td>0.9557</td><td>0.9123</td><td>0.1820</td><td>0.9861</td><td>0.9595</td><td>0.1201</td><td>0.9952</td><td>0.9444</td><td>0.1055</td></tr></table>

Table 2: Simplicity Bias and Performance of Adaptation Protocols. Using the synthetic dominoes dataset, we measure the propensity of different models to simplicity bias via the Corr. OOD and Rand. OOD accuracy. With the highest Corr. OOD accuracy and the lowest Rand. OOD accuracy, FT is particularly susceptible to inducing simplicity bias.
Results. Given the above experimental setup, we report the performance of different adaptation protocols in Table 2. Across all correlation strengths, FT has the lowest Rand. OOD accuracy and high Corr. OOD accuracy. This clearly indicates that FT has learned to rely upon simple features, effectively disregarding the expressive features of the pretrained model, and is easily susceptible to simplicity bias. In contrast, by preserving the expressive features of the underlying feature encoder, LP best mitigates simplicity bias in the high-correlation (0.99, 1.0) settings, as evidenced by the highest Rand. OOD accuracy (though Corr. ID/OOD accuracy does slightly suffer). However, at moderate correlation (0.95), LP+FT improves upon LP to achieve better Corr. OOD accuracy than LP and the best Rand. OOD accuracy across protocols. This suggests that when the correlation is not extreme, moderate distortion, given a suitable LP, is in fact beneficial to mitigating simplicity bias. At higher correlation strengths (0.99, 1.0), however, LP+FT has lower Rand. OOD accuracy, while improving the Corr. OOD accuracy relative to LP, indicating that in such extreme settings the distortion incurred by subsequent FT is not beneficial and increases reliance upon simple features.
# 4.1 IMPROVED LINEAR PROBES FOR MITIGATING SIMPLICITY BIAS
As discussed earlier, adaptation protocols have varying susceptibility to simplicity bias, and mitigating this susceptibility can help improve generalization and safety.

In particular, we observe that LP+FT and LP are effective protocols for reducing reliance upon simple features on the synthetic datasets and on low-distortion real datasets (Sec. 3). However, as some level of distortion is typically required when adapting to downstream tasks to obtain sufficient ID task performance, we propose new variants of the LP+FT protocol that attempt to enable the subsequent FT step to distort features without compromising generalization or increasing simplicity bias.

We note that, while it is possible to modify the FT step as well, modifications to LP are inexpensive as the feature encoder is not updated. Moreover, as discussed in Sec. 3, fine-tuned solutions remain in close vicinity of the initial LP solution, further motivating strong starting solutions. To this end, we introduce the following modifications to the LP step of LP+FT below (a compact sketch of all three follows the list), where $h$ are the hidden representations, $\theta$ the model parameters, $y$ the labels, $\hat{y}$ the predicted classes, $C$ the number of classes, $g_{\theta}$ the classifier, and $\delta$ the perturbation. See Supp. B.1 for additional discussion on the choice of these mitigation strategies and Supp. B.2 for discussion on the importance of applying mitigations during LP.

![](images/8d219fb32e5e843a798d364a86dde0355b16f5923c83919ce0e9f98e38398c69.jpg)
Figure 5: Hardness-Promoting Augmentations help Mitigate Simplicity Bias. We evaluate the modified LP+FT protocols on the dominoes dataset, and find they improve the Rand. OOD Accuracy over vanilla FT and LP+FT. This suggests that modified protocols can rely less upon shortcuts or simple features.
- LP (VAT): Virtual adversarial training (VAT) (Miyato et al., 2017) enforces local distributional smoothness by minimizing the KL-divergence between the predictions of perturbed pairs of examples. Since we are using expressive pretrained models, such perturbations may be meaningful in the latent space as well, and the resulting classifiers become robust in some $\epsilon$-neighborhood around each latent-space input. Formally, letting $\epsilon$ be a perturbation budget and $\alpha$ a hyper-parameter weighting distributional smoothness, we minimize the following loss: $\min_{\theta} \mathcal{L}_{\mathrm{CE}}(g_{\theta}(h), y) + \alpha \mathrm{KL}\left[p(y \mid g_{\theta}(h)), p(y \mid g_{\theta}(h + \delta))\right]$, where $\delta := \arg \max_{\| \delta \|_2 \leq \epsilon} \mathrm{KL}[p(y \mid g_{\theta}(h)), p(y \mid g_{\theta}(h + \delta))]$.
- LP (UDP): Instead of maximizing the loss, uncertainty-driven perturbations (UDP) (Pagliardini et al., 2022) adversarially maximize a model's estimated uncertainty. UDPs have been shown to be effective in decreasing simplicity bias and improving generalization in non-adaptation settings. Formally, they can be defined as: $\delta_{u} = \arg \max_{\| \delta \|_{2}\leq \epsilon}\mathcal{H}(g_{\theta}(h + \delta))$, where $\mathcal{H}(g_{\theta}(h)) = -\sum_{c\in C}\hat{y}_c\log \hat{y}_c$ (i.e., the entropy of the predictions).
- LP (Soup): Inspired by Wortsman et al. (2022), we train multiple sparse linear probes jointly and then take the average of their weights (aka a soup) as the learned LP for subsequent FT. While soups of large models improve generalization by combining models from the same low-error basin, we consider sparse classifier soups as an alternative strategy which seeks to average diverse decision rules, so as to avoid relying upon a single set of simple features. Formally, given $k$ classifiers, we minimize $\min_{\theta_{1\dots k}}\frac{1}{k}\sum_{i = 1}^{k}\mathcal{L}_{\mathrm{CE}}(g_{\theta_i}(h),y)$ and let $\bar{\theta} = \frac{1}{k}\sum_{i = 1}^{k}\theta_{i}$, using $g_{\bar{\theta}}$ for the FT step.
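To make the three modifications concrete, here is a compact PyTorch sketch, a reading of the formulas above rather than the authors' implementation. `probe` is a linear head (e.g., `nn.Linear`) over frozen features `h` of shape (batch, dim); the budgets `eps`/`xi`, the single power-iteration step for VAT, and the function names are our assumptions. VAT's smoothness term is added to the LP cross-entropy loss with weight $\alpha$; UDP's perturbed features are fed through the usual cross-entropy loss.

```python
import copy
import torch
import torch.nn.functional as F

def vat_smoothness(probe, h, eps=1.0, xi=1e-6):
    """KL between predictions at h and at h + delta, where delta
    (approximately) maximizes the KL under an L2 budget eps."""
    with torch.no_grad():
        p = F.softmax(probe(h), dim=1)
    d = (xi * F.normalize(torch.randn_like(h), dim=1)).requires_grad_(True)
    kl = F.kl_div(F.log_softmax(probe(h + d), dim=1), p, reduction="batchmean")
    (grad,) = torch.autograd.grad(kl, d)
    delta = eps * F.normalize(grad.detach(), dim=1)
    return F.kl_div(F.log_softmax(probe(h + delta), dim=1), p,
                    reduction="batchmean")

def udp_features(probe, h, eps=1.0):
    """Uncertainty-driven perturbation: one ascent step on the
    predictive entropy, scaled to the L2 budget eps."""
    h_adv = h.detach().clone().requires_grad_(True)
    p = F.softmax(probe(h_adv), dim=1)
    entropy = -(p * p.clamp_min(1e-12).log()).sum(dim=1).mean()
    (grad,) = torch.autograd.grad(entropy, h_adv)
    return h + eps * F.normalize(grad, dim=1)

def probe_soup(probes):
    """Average the weights of k independently trained linear probes;
    the averaged probe initializes the subsequent FT step."""
    merged = copy.deepcopy(probes[0])
    merged.load_state_dict({
        k: torch.stack([p.state_dict()[k] for p in probes]).mean(0)
        for k in probes[0].state_dict()
    })
    return merged
```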
Empirical Evaluation of Hardness-Promoting Augmentations. We evaluate the effectiveness of the above LP variants, which we collectively refer to as "hardness-promoting", in mitigating the simplicity bias of LP+FT. We make the following observations (see Fig. 5).
Across all correlation strengths, we find that using the modified hardness-promoting LPs during LP+FT (aka hp-LP+FT) improves the Rand. OOD Accuracy over vanilla LP+FT ($\geq 2\%$) and FT ($>20\%$). This clearly indicates that hp-LP+FT is indeed effective in decreasing reliance on simple features, potentially also leading to improved safety. Furthermore, with the exception of LP (Soup)+FT, hp-LP+FT also performs better than vanilla LP+FT on Corr. OOD accuracy. Vanilla FT does outperform all LP+FT protocols in this setting, but this is due to its reliance upon simple features. Lastly, we observe with respect to Corr. ID Accuracy that hp-LP+FT improves performance at low correlation strength, but slightly loses performance at higher correlation strengths. This is not entirely unexpected, as FT's reliance upon simple features will be useful in the correlated setting. Given that hp-LP+FT is able to reduce reliance upon simple features in this controlled setting, we next evaluate whether these modified protocols are beneficial in improving the performance of LP+FT on real datasets.

<table><tr><td rowspan="2">Protocol</td><td colspan="2">Generalization</td><td colspan="3">Robustness</td><td colspan="4">Calibration</td><td>Anomaly Det.</td><td>Rep. Similarity</td></tr><tr><td>ID Acc.</td><td>OOD Acc.</td><td>$C$ Acc.</td><td>$\overline{C}$ Acc.</td><td>Adv. Acc.</td><td>ID 1-RMS</td><td>$C$ 1-RMS</td><td>$\overline{C}$ 1-RMS</td><td>OOD 1-RMS</td><td>Out-of-Class AUROC</td><td>ID CKA</td></tr><tr><td>LP</td><td>0.9521</td><td>0.8124</td><td>0.7010</td><td>0.7378</td><td>0.2350</td><td>0.9313</td><td>0.8693</td><td>0.8802</td><td>0.9117</td><td>0.9907</td><td>1.0000</td></tr><tr><td>FT</td><td>0.9518</td><td>0.7168</td><td>0.7011</td><td>0.7164</td><td>0.1563</td><td>0.8873</td><td>0.9019</td><td>0.8604</td><td>0.9295</td><td>0.9794</td><td>0.7847</td></tr><tr><td>LP+FT</td><td>0.9643</td><td>0.8261</td><td>0.7426</td><td>0.7671</td><td>0.2135</td><td>0.9782</td><td>0.9472</td><td>0.9451</td><td>0.8742</td><td>0.9924</td><td>0.9887</td></tr><tr><td>LP (UDP)</td><td>0.9524</td><td>0.8110</td><td>0.7017</td><td>0.7382</td><td>0.2353</td><td>0.9308</td><td>0.8691</td><td>0.8801</td><td>0.9118</td><td>0.9908</td><td>1.0000</td></tr><tr><td>LP (VAT)</td><td>0.9524</td><td>0.8122</td><td>0.7010</td><td>0.7379</td><td>0.2345</td><td>0.9299</td><td>0.8682</td><td>0.8791</td><td>0.9103</td><td>0.9907</td><td>1.0000</td></tr><tr><td>LP (Soup)</td><td>0.9439</td><td>0.7996</td><td>0.6874</td><td>0.7290</td><td>0.2451</td><td>0.8806</td><td>0.7868</td><td>0.8094</td><td>0.9064</td><td>0.9897</td><td>1.0000</td></tr><tr><td>LP (UDP) + FT</td><td>0.9637</td><td>0.8265</td><td>0.7448</td><td>0.7681</td><td>0.2157</td><td>0.9768</td><td>0.9464</td><td>0.9467</td><td>0.8757</td><td>0.9927</td><td>0.9893</td></tr><tr><td>LP (VAT) + FT</td><td>0.9647</td><td>0.8247</td><td>0.7425</td><td>0.7650</td><td>0.2224</td><td>0.9727</td><td>0.9521</td><td>0.9463</td><td>0.8775</td><td>0.9925</td><td>0.9893</td></tr><tr><td>LP (Soup) + FT</td><td>0.9608</td><td>0.8163</td><td>0.7456</td><td>0.7684</td><td>0.1855</td><td>0.9760</td><td>0.9498</td><td>0.9492</td><td>0.8678</td><td>0.9936</td><td>0.9854</td></tr></table>

Table 3: Living17: Hardness-Promoting Augmentations and Adaptation. In this low-distortion adaptation setting, we see that vanilla LP+FT is an effective baseline and that hardness-promoting variants of LP+FT tend to perform comparably.
# 5 EVALUATING GENERALIZATION AND SAFETY OF THE LP+FT FAMILY

Given the effectiveness of incorporating hardness-promoting (hp) augmentations into the family of LP+FT protocols (hp-LP+FT) for mitigating simplicity bias in a synthetic setting, we further evaluate the modified protocols on the three real-world datasets (Living17, DomainNet, and CIFAR10) with respect to the generalization and safety metrics introduced in Sec. 3. We present our results in Tables 3, 4, and 5; our observations are summarized below. Any method-specific hyper-parameters (e.g., the perturbation budget $\epsilon$) are tuned using ID validation data and all results are reported over three seeds. We provide additional results in Supp. C.
181
+ Results. As discussed in Sec. 3, these three datasets represent scenarios where different levels of distortion are necessary when adapting the pretrained model. On Living17, a setting which requires minimal distortion during adaptation, we see that vanilla LP+FT is quite effective with respect to both generalization and safety metrics and is a difficult baseline to surpass. Indeed, while hp-LP+FT variants do not lead to significant benefits, they generally perform comparably to vanilla LP+FT. On DomainNet, a setting where fairly low distortion is required for LP+FT but FT struggles to find a good solution, we see that hp-LP+FT variants induce some slight benefits with respect to ID/OOD generalization and robustness, though vanilla LP and hp-LP have better calibration performance. In contrast on CIFAR10, which requires more distortion to obtain an acceptable solution, we see that hp-LP+FT variants lead to improved generalization and a noticeable boost in corruption robustness. LP (VAT) + FT and LP (VAT) are particularly effective in this regard. Lastly, across all datasets, we observe that hp-LP+FT protocols lead to similar distortion to vanilla LP+FT, which suggests that any additional benefits of hp-LP+FT should not be attributed to only reducing feature distortion.
182
+
183
+ Discussion. We find that while vanilla $\mathrm{LP + FT}$ is already an effective protocol, especially in settings where low distortion is required, hp- $\mathrm{LP + FT}$ can provide some benefits and performs competitively. We suspect that the performance of these modified protocols can further be improved if more sophisticated simplicity bias mitigation strategies are used. Indeed, our central claim, that adaptation protocols should mitigate feature distortion and simplicity, is not dependent on a specific strategy. We additionally note that while such mitigation strategies may optionally also be used during FT, they cannot solely be used in FT. Indeed, in the case of extreme simplicity, if the LP classifier relies upon simple features to find a low-loss solution, during the subsequent FT step, gradients may not be
184
+
185
+ <table><tr><td rowspan="2">Protocol</td><td colspan="2">Generalization</td><td colspan="3">Robustness</td><td colspan="4">Calibration</td><td>Anomaly Det.</td><td>Rep. Similarity</td></tr><tr><td>ID Acc.</td><td>OOD Acc.</td><td>C Acc.</td><td>C Acc.</td><td>Adv. Acc.</td><td>ID 1-RMS</td><td>C 1-RMS</td><td>C 1-RMS</td><td>OOD 1-RMS</td><td>Out-of-Class AUROC</td><td>ID CKA</td></tr><tr><td>LP</td><td>0.8913</td><td>0.8013</td><td>0.6019</td><td>0.6020</td><td>0.1768</td><td>0.9638</td><td>0.9264</td><td>0.9045</td><td>0.9014</td><td>0.8679</td><td>1.0000</td></tr><tr><td>FT</td><td>0.7613</td><td>0.4522</td><td>0.5186</td><td>0.2744</td><td>0.4164</td><td>0.8368</td><td>0.7234</td><td>0.7234</td><td>0.6379</td><td>0.8841</td><td>0.6092</td></tr><tr><td>LP+FT</td><td>0.8985</td><td>0.7990</td><td>0.6343</td><td>0.5979</td><td>0.1927</td><td>0.9566</td><td>0.8445</td><td>0.8445</td><td>0.8899</td><td>0.9022</td><td>0.9222</td></tr><tr><td>LP (UDP)</td><td>0.8919</td><td>0.8021</td><td>0.6022</td><td>0.6101</td><td>0.1345</td><td>0.9635</td><td>0.9250</td><td>0.9047</td><td>0.8619</td><td>0.8714</td><td>1.0000</td></tr><tr><td>LP (VAT)</td><td>0.8836</td><td>0.7914</td><td>0.5893</td><td>0.5963</td><td>0.1687</td><td>0.8897</td><td>0.9552</td><td>0.8905</td><td>0.9178</td><td>0.8735</td><td>1.0000</td></tr><tr><td>LP (Soup)</td><td>0.8787</td><td>0.7977</td><td>0.5951</td><td>0.6048</td><td>0.1731</td><td>0.8844</td><td>0.9479</td><td>0.8861</td><td>0.9176</td><td>0.8661</td><td>1.0000</td></tr><tr><td>LP (UDP) + FT</td><td>0.9033</td><td>0.7965</td><td>0.6414</td><td>0.6178</td><td>0.1778</td><td>0.9436</td><td>0.8533</td><td>0.79415</td><td>0.752</td><td>0.8857</td><td>0.9662</td></tr><tr><td>LP (VAT) + FT</td><td>0.9048</td><td>0.8009</td><td>0.6466</td><td>0.6131</td><td>0.1942</td><td>0.9686</td><td>0.8911</td><td>0.8428</td><td>0.7985</td><td>0.9204</td><td>0.9370</td></tr><tr><td>LP (Soup) + FT</td><td>0.9051</td><td>0.8013</td><td>0.6393</td><td>0.6091</td><td>0.1954</td><td>0.9670</td><td>0.9042</td><td>0.8692</td><td>0.8246</td><td>0.9097</td><td>0.9281</td></tr></table>
186
+
187
+ Table 4: DomainNet: Hardness Promoting Augmentations and Adaptation. While relatively low distortion is induced by LP+FT, FT struggles to find a viable solution. Here, hardness-promoting LP+FT variants, particularly LP (VAT) +FT do slightly improve ID and OOD generalization as well as robustness to corruptions.
188
+
189
+ <table><tr><td rowspan="2">Protocol</td><td colspan="2">Generalization</td><td colspan="3">Robustness</td><td colspan="4">Calibration</td><td>Anomaly Det.</td><td>Rep. Similarity</td></tr><tr><td>ID Acc.</td><td>OOD Acc.</td><td>C Acc.</td><td>C Acc.</td><td>Adv. Acc.</td><td>ID 1-RMS</td><td>C 1-RMS</td><td>C 1-RMS</td><td>OOD 1-RMS</td><td>Out-of-Class AUROC</td><td>ID CKA</td></tr><tr><td>LP</td><td>0.9138</td><td>0.8190</td><td>0.6912</td><td>0.6553</td><td>0.0003</td><td>0.9595</td><td>0.8303</td><td>0.8142</td><td>0.8696</td><td>0.6206</td><td>1.0000</td></tr><tr><td>FT</td><td>0.9539</td><td>0.8754</td><td>0.7434</td><td>0.7553</td><td>0.0231</td><td>0.9668</td><td>0.8364</td><td>0.8453</td><td>0.9232</td><td>1.0000</td><td>0.6831</td></tr><tr><td>LP+FT</td><td>0.9442</td><td>0.8678</td><td>0.6921</td><td>0.6790</td><td>0.0018</td><td>0.9521</td><td>0.7849</td><td>0.7721</td><td>0.8864</td><td>0.6511</td><td>0.7853</td></tr><tr><td>LP (UDP)</td><td>0.9033</td><td>0.8356</td><td>0.6948</td><td>0.6643</td><td>0.0003</td><td>0.9689</td><td>0.9111</td><td>0.9023</td><td>0.9277</td><td>0.9033</td><td>1.0000</td></tr><tr><td>LP (VAT)</td><td>0.8977</td><td>0.8251</td><td>0.6742</td><td>0.6483</td><td>0.0002</td><td>0.9265</td><td>0.9255</td><td>0.9139</td><td>0.9375</td><td>0.7200</td><td>1.0000</td></tr><tr><td>LP (Soup)</td><td>0.9052</td><td>0.8353</td><td>0.6917</td><td>0.6588</td><td>0.0003</td><td>0.9605</td><td>0.9205</td><td>0.9037</td><td>0.9364</td><td>0.8859</td><td>1.0000</td></tr><tr><td>LP (UDP) + FT</td><td>0.944</td><td>0.8848</td><td>0.7028</td><td>0.6986</td><td>0.0004</td><td>0.9670</td><td>0.8472</td><td>0.8476</td><td>0.9237</td><td>0.9559</td><td>0.7764</td></tr><tr><td>LP (VAT) + FT</td><td>0.9611</td><td>0.8900</td><td>0.7442</td><td>0.7321</td><td>0.0027</td><td>0.9294</td><td>0.8355</td><td>0.8281</td><td>0.9178</td><td>0.8276</td><td>0.7839</td></tr><tr><td>LP (Soup) + FT</td><td>0.9466</td><td>0.8892</td><td>0.7031</td><td>0.6931</td><td>0.0001</td><td>0.9678</td><td>0.8390</td><td>0.8287</td><td>0.9216</td><td>0.9265</td><td>0.7806</td></tr></table>
190
+
191
+ Table 5: CIFAR10: Hardness-Promoting Augmentations and Adaptation. In contrast to Living17 and DomainNet, FT is more effective than LP+FT on the safety metrics and performs comparably on ID/OOD generalization. However, hardness-promoting variants, particularly LP (VAT), see noticeable improvements with respect to generalization and corruption robustness, performing comparably to FT.
192
+
193
+ back propagated in directions that contain complex features. This means that the decision boundary continues to rely upon simple features and is at risk of reduced safety performance. We provide further discussion in Supp. B.2. To this end, we recommend incorporating hardness-promoting augmentations during LP as a potential safeguard against simplicity bias.
194
+
195
+ # 6 CONCLUSION
196
+
197
+ In this paper, we took a closer look at the behavior of protocols designed for adapting large-scale pretrained models to downstream datasets. While it has been argued that adaptation protocols should be designed to mitigate feature distortion (e.g., $\mathrm{LP + FT}$) in order to improve ID and OOD generalization, we found that when additional aspects of safe generalization are evaluated (e.g., prediction calibration error, adversarial robustness, etc.), mitigating feature distortion alone is not sufficient. We then considered the complementary perspective that adaptation protocols should also mitigate simplicity bias. Using a synthetic dominoes dataset that allows for control over the correlation between simple and complex features, we found that protocols have varying levels of effectiveness in reducing reliance upon simple features. While, as expected, FT is most susceptible to simplicity bias, we saw that $\mathrm{LP + FT}$ is able to balance both distortion and simplicity bias in settings where the correlation between simple and complex features is not too extreme. Motivated by the benefits of $\mathrm{LP + FT}$ and the known relationship between simplicity bias and sub-optimal generalization, we used "hardness-promoting" LP initializations (virtual adversarial training, uncertainty-driven perturbations, sparse soups) to further improve $\mathrm{LP + FT}$'s performance. These modifications helped reduce $\mathrm{LP + FT}$'s reliance upon simple features on the synthetic dataset. On three real-world datasets, the modified protocols led to some improvements in safety and generalization performance, further validating the need to consider both distortion and simplicity bias when designing adaptation protocols.
198
+
199
+ # ACKNOWLEDGMENTS
200
+
201
+ We thank Ekdeep Singh Lubana for several helpful discussions during the course of this project. This work was performed under the auspices of the U.S. Department of Energy by the Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344, Lawrence Livermore National Security, LLC, and was supported by the LLNL-LDRD Program under Project No. 21-ERD-012. It was also partially supported by the National Science Foundation under CAREER Grant No. IIS 1845491. PT began this work as an intern at Lawrence Livermore National Laboratory.
202
+
203
+ # REFERENCES
204
+
205
+ Zeyuan Allen-Zhu and Yuanzhi Li. Feature purification: How adversarial training performs robust deep learning. In Proc. Symposium on Foundations of Computer Science, FOCS, 2021.
206
+ Anders Andreassen, Yasaman Bahri, Behnam Neyshabur, and Rebecca Roelofs. The evolution of out-of-distribution robustness throughout fine-tuning, 2021.
207
+ Hossein Azizpour, Ali Sharif Razavian, Josephine Sullivan, Atsuto Maki, and Stefan Carlsson. Factors of transferability for a generic convnet representation. IEEE Trans. Pattern Anal. Mach. Intell., 38(9):1790-1802, 2016.
208
+ Alon Brutzkus, Amir Globerson, Eran Malach, and Shai Shalev-Shwartz. SGD learns overparameterized networks that provably generalize on linearly separable data. In Proc. Int. Conf. on Learning Representations (ICLR), 2017.
209
+ Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. In Proc. Adv. in Neural Information Processing Systems (NeurIPS), 2020.
210
+ Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In Proc. Int. Conf. on Computer Vision (ICCV), 2021.
211
+ Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. A simple framework for contrastive learning of visual representations. In Proc. Int. Conf. on Machine Learning (ICML), 2020.
212
+ Xiangning Chen, Cho-Jui Hsieh, and Boqing Gong. When vision transformers outperform resnets without pretraining or strong data augmentations. In Proc. Int. Conf. on Learning Representations (ICLR), 2022.
213
+ Ekin Dogus Cubuk, Barret Zoph, Dandelion Mané, Vijay Vasudevan, and Quoc V. Le. Autoaugment: Learning augmentation policies from data. In Proc. Int. Conf. on Computer Vision and Pattern Recognition (CVPR), 2019.
214
+ Ekin Dogus Cubuk, Barret Zoph, Jon Shlens, and Quoc Le. Randaugment: Practical automated data augmentation with a reduced search space. In Proc. Adv. in Neural Information Processing Systems (NeurIPS), 2020.
215
+ Mohammad Reza Davari, Stefan Horoi, Amine Natik, Guillaume Lajoie, Guy Wolf, and Eugene Belilovsky. Reliability of CKA as a similarity measure in deep learning. In Proc. Int. Conf. on Learning Representations (ICLR), 2023.
216
+ Terrance Devries and Graham W. Taylor. Improved regularization of convolutional neural networks with cutout. CoRR, abs/1708.04552, 2017.
217
+ Frances Ding, Jean-Stanislas Denain, and Jacob Steinhardt. Grounding representation similarity with statistical testing. In Proc. Adv. in Neural Information Processing Systems (NeurIPS), 2021.
218
+ Linus Ericsson, Henry Gouk, and Timothy M. Hospedales. How well do self-supervised models transfer? In Proc. Int. Conf. on Computer Vision and Pattern Recognition (CVPR), 2021.
219
+
220
+ Utku Evci, Vincent Dumoulin, Hugo Larochelle, and Michael C Mozer. Head2Toe: Utilizing intermediate representations for better transfer learning. In Proc. Int. Conf. on Machine Learning (ICML), 2022.
221
+ Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A. Wichmann, and Wieland Brendel. Imagenet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. In Proc. Int. Conf. on Learning Representations (ICLR), 2019.
222
+ Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre H. Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Ávila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, Bilal Piot, Koray Kavukcuoglu, Rémi Munos, and Michal Valko. Bootstrap your own latent - A new approach to self-supervised learning. In Proc. Adv. in Neural Information Processing Systems (NeurIPS), 2020.
223
+ Suriya Gunasekar, Jason D. Lee, Daniel Soudry, and Nati Srebro. Implicit bias of gradient descent on linear convolutional networks. In Proc. Adv. in Neural Information Processing Systems (NeurIPS), 2018.
224
+ Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. On calibration of modern neural networks. In Proc. of the Int. Conf. on Machine Learning, (ICML), 2017.
225
+ Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross B. Girshick. Momentum contrast for unsupervised visual representation learning. In Proc. Int. Conf. on Computer Vision and Pattern Recognition (CVPR), 2020.
226
+ Dan Hendrycks and Thomas G. Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. In Proc. Int. Conf. on Learning Representations, (ICLR), 2019.
227
+ Dan Hendrycks, Mantas Mazeika, Saurav Kadavath, and Dawn Song. Using self-supervised learning can improve model robustness and uncertainty. In Proc. Adv. in Neural Information Processing Systems (NeurIPS), 2019.
228
+ Dan Hendrycks, Norman Mu, Ekin Dogus Cubuk, Barret Zoph, Justin Gilmer, and Balaji Lakshminarayanan. Augmix: A simple data processing method to improve robustness and uncertainty. In Proc. Int. Conf. on Learning Representations (ICLR), 2020.
229
+ Dan Hendrycks, Nicholas Carlini, John Schulman, and Jacob Steinhardt. Unsolved problems in ML safety. CoRR, abs/2109.13916, 2021.
230
+ Dan Hendrycks, Andy Zou, Mantas Mazeika, Leonard Tang, Bo Li, Dawn Song, and Jacob Steinhardt. Pixmix: Dreamlike pictures comprehensively improve safety measures. In Proc. Int. Conf. on Computer Vision and Pattern Recognition (CVPR), 2022.
231
+ Katherine L. Hermann, Ting Chen, and Simon Kornblith. The origins and prevalence of texture bias in convolutional neural networks. In Proc. Adv. in Neural Information Processing Systems (NeurIPS), 2020.
232
+ Mi-Young Huh, Pulkit Agrawal, and Alexei A. Efros. What makes imagenet good for transfer learning? CoRR, abs/1608.08614, 2016.
233
+ Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Adversarial examples are not bugs, they are features. In Proc. Adv. in Neural Information Processing Systems (NeurIPS), 2019.
234
+ Pavel Izmailov, Polina Kirichenko, Nate Gruver, and Andrew Gordon Wilson. On feature learning in the presence of spurious correlations. In Proc. Adv. in Neural Information Processing Systems (NeurIPS), 2022.
235
+ Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Tuo Zhao. SMART: robust and efficient fine-tuning for pre-trained natural language models through principled regularized optimization. In Proc. Assn. for Computational Linguistics, ACL, 2020.
236
+ Simran Kaur, Jeremy M. Cohen, and Zachary C. Lipton. Are perceptually-aligned gradients a general property of robust classifiers? CoRR, abs/1910.08640, 2019.
237
+
238
+ Polina Kirichenko, Pavel Izmailov, and Andrew Gordon Wilson. Last layer re-training is sufficient for robustness to spurious correlations. CoRR, abs/2204.02937, 2022.
239
+ Simon Kornblith, Jonathon Shlens, and Quoc V. Le. Do better imagenet models transfer better? In Proc. Int. Conf. on Computer Vision and Pattern Recognition (CVPR), 2019.
240
+ Ananya Kumar, Aditi Raghunathan, Robbie Jones, Tengyu Ma, and Percy Liang. Fine-tuning can distort pretrained features and underperform out-of-distribution. In Proc. Int. Conf. on Learning Representations (ICLR), 2022.
241
+ Yoonho Lee, Annie S Chen, Fahim Tajwar, Ananya Kumar, Huaxiu Yao, Percy Liang, and Chelsea Finn. Surgical fine-tuning improves adaptation to distribution shifts. In Proc. Int. Conf. on Learning Representations (ICLR), 2023a.
242
+ Yoonho Lee, Huaxiu Yao, and Chelsea Finn. Diversify and disambiguate: Out-of-distribution robustness via disagreement. In Proc. Int. Conf. on Learning Representations (ICLR), 2023b.
243
+ Hong Liu, Jeff Z. HaoChen, Adrien Gaidon, and Tengyu Ma. Self-supervised learning is more robust to dataset imbalance. CoRR, abs/2110.05025, 2021.
244
+ Ekdeep Singh Lubana, Eric J. Bigelow, Robert P. Dick, David Krueger, and Hidenori Tanaka. Mechanistic mode connectivity, 2023.
245
+ Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In Proc. Int. Conf. on Learning Representations (ICLR), 2018.
246
+ Eric Mintun, Alexander Kirillov, and Saining Xie. On interaction between augmentations and corruptions in natural corruption robustness. In Proc. Adv. in Neural Information Processing Systems (NeurIPS), 2021.
247
+ Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. Virtual adversarial training: a regularization method for supervised and semi-supervised learning. In IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.
248
+ Thao Nguyen, Maithra Raghu, and Simon Kornblith. Do wide and deep networks learn the same things? uncovering how neural network representations vary with width and depth. In Proc. Int. Conf. on Learning Representations (ICLR), 2021.
249
+ Matteo Pagliardini, Gilberto Manunza, Martin Jaggi, Michael I. Jordan, and Tatjana Chavdarova. Improving generalization via uncertainty driven perturbations, 2022.
250
+ Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. IEEE Trans. Knowl. Data Eng., 22 (10):1345-1359, 2010.
251
+ Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In Proc. Int. Conf. on Machine Learning (ICML), 2021.
252
+ Alexandre Rame, Matthieu Kirchmeyer, Thibaud Rahier, Alain Rakotomamonjy, Patrick Gallinari, and Matthieu Cord. Diverse weight averaging for out-of-distribution generalization. In Proc. Adv. in Neural Information Processing Systems (NeurIPS), 2022.
253
+ Hadi Salman, Andrew Ilyas, Logan Engstrom, Ashish Kapoor, and Aleksander Madry. Do adversarially robust imagenet models transfer better? In Proc. Adv. in Neural Information Processing Systems (NeurIPS), 2020.
254
+ Harshay Shah, Kaustav Tamuly, Aditi Raghunathan, Prateek Jain, and Praneeth Netrapalli. The pitfalls of simplicity bias in neural networks. In Proc. Adv. in Neural Information Processing Systems (NeurIPS), 2020.
255
+ Daniel Soudry, Elad Hoffer, Mor Shpigel Nacson, Suriya Gunasekar, and Nathan Srebro. The implicit bias of gradient descent on separable data. J. Mach. Learn. Res., 19:70:1-70:57, 2018.
256
+
257
+ Andreas Peter Steiner, Alexander Kolesnikov, Xiaohua Zhai, Ross Wightman, Jakob Uszkoreit, and Lucas Beyer. How to train your ViT? Data, augmentation, and regularization in vision transformers. Transactions on Machine Learning Research, 2022.
258
+ Damien Teney, Ehsan Abbasnejad, Simon Lucey, and Anton van den Hengel. Evading the simplicity bias: Training a diverse set of models discovers solutions with superior OOD generalization. In Proc. Int. Conf. on Computer Vision and Pattern Recognition (CVPR), 2021.
259
+ Ilya Tolstikhin, Neil Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Thomas Unterthiner, Jessica Yung, Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, Mario Lucic, and Alexey Dosovitskiy. Mlp-mixer: An all-mlp architecture for vision. In Proc. Adv. in Neural Information Processing Systems (NeurIPS), 2021.
260
+ Francisco Utrera, Evan Kravitz, N. Benjamin Erichson, Rajiv Khanna, and Michael W. Mahoney. Adversarially-trained deep nets transfer better: Illustration on image classification. In Proc. Int. Conf. on Learning Representations (ICLR), 2021.
261
+ Ross Wightman. Pytorch image models. https://github.com/rwightman/pytorch-image-models, 2019.
262
+ Mitchell Wortsman, Gabriel Ilharco, Samir Yitzhak Gadre, Rebecca Roelofs, Raphael Gontijo-Lopes, Ari S. Morcos, Hongseok Namkoong, Ali Farhadi, Yair Carmon, Simon Kornblith, and Ludwig Schmidt. Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time. In Proc. Int. Conf. on Machine Learning (ICML), 2022.
263
+ I. Zeki Yalniz, Hervé Jégou, Kan Chen, Manohar Paluri, and Dhruv Mahajan. Billion-scale semi-supervised learning for image classification. CoRR, abs/1905.00546, 2019.
264
+ Sangdoo Yun, Dongyoon Han, Sanghyuk Chun, Seong Joon Oh, Youngjoon Yoo, and Junsuk Choe. Cutmix: Regularization strategy to train strong classifiers with localizable features. In Proc. of Int. Conf. on Computer Vision, ICCV, 2019.
265
+ Xiaohua Zhai, Xiao Wang, Basil Mustafa, Andreas Steiner, Daniel Keysers, Alexander Kolesnikov, and Lucas Beyer. Lit: Zero-shot transfer with locked-image text tuning. In Proc. Int. Conf. on Computer Vision and Pattern Recognition (CVPR), 2022.
266
+ Jeffrey O. Zhang, Alexander Sax, Amir Zamir, Leonidas J. Guibas, and Jitendra Malik. Side-tuning: Network adaptation via additive side networks. In Proc. Euro. Conf. on Computer Vision (ECCV), 2019.
267
+ Fuzhen Zhuang, Zhiyuan Qi, Keyu Duan, Dongbo Xi, Yongchun Zhu, Hengshu Zhu, Hui Xiong, and Qing He. A comprehensive survey on transfer learning. Proc. IEEE, 109(1):43-76, 2021.
268
+
269
+ # APPENDIX
270
+
271
+ # A ADDITIONAL RELATED WORK
272
+
273
+ For a comprehensive overview of transfer learning, please see the surveys of Zhuang et al. and Pan & Yang. Here, we discuss a few works directly relevant to our own.
274
+
275
+ Recently, Kumar et al. demonstrated that linear probing prior to fine-tuning (i.e., $\mathrm{LP + FT}$) can improve both in-distribution and out-of-distribution performance when transferring to a downstream task given a highly expressive, pretrained model. They demonstrated that FT only modifies features in the ID representation subspace and not in other directions, which can lead to higher OOD error, as directions outside the ID subspace are necessary for OOD generalization. However, by initializing FT with a trained linear probe, feature distortion can be decreased: this initialization is closer to the optimal model and thus requires less distortion in the ID subspace, preserving the expressiveness of the original model. Concurrently, Kirichenko et al. demonstrated that models are able to learn both core and spurious features. However, classifiers can come to rely upon the spurious features, harming performance on minority groups. To reduce this reliance, they propose retraining the classifier on a small amount of "re-weighting" data, which allows the model to leverage the core features instead of the spurious ones.
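+
+ To make the two-stage LP+FT procedure concrete, the following is a minimal PyTorch-style sketch (not the authors' code; `backbone`, `head`, and `loader` are assumed placeholders, and hyperparameters are illustrative):
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ def lp_then_ft(backbone, head, loader, lp_epochs, ft_epochs, lp_lr, ft_lr, device="cuda"):
+     """Minimal LP+FT sketch: train a linear head on frozen features,
+     then fine-tune the full model starting from that initialization."""
+     criterion = nn.CrossEntropyLoss()
+
+     # LP: freeze the backbone so pretrained features are not distorted.
+     for p in backbone.parameters():
+         p.requires_grad = False
+     opt = torch.optim.SGD(head.parameters(), lr=lp_lr)
+     for _ in range(lp_epochs):
+         for x, y in loader:
+             x, y = x.to(device), y.to(device)
+             with torch.no_grad():
+                 z = backbone(x)
+             loss = criterion(head(z), y)
+             opt.zero_grad(); loss.backward(); opt.step()
+
+     # FT: unfreeze everything and fine-tune at a much smaller learning
+     # rate, with the classifier initialized from the LP solution.
+     for p in backbone.parameters():
+         p.requires_grad = True
+     params = list(backbone.parameters()) + list(head.parameters())
+     opt = torch.optim.SGD(params, lr=ft_lr)
+     for _ in range(ft_epochs):
+         for x, y in loader:
+             x, y = x.to(device), y.to(device)
+             loss = criterion(head(backbone(x)), y)
+             opt.zero_grad(); loss.backward(); opt.step()
+     return backbone, head
+ ```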
276
+
277
+ Other modifications and heuristics have also been proposed to improve fine-tuning, including side-tuning (Zhang et al., 2019), which tunes a small secondary network that is then combined with the original model; using larger or smaller learning rates for the classifier; and regularization-based methods (Jiang et al., 2020). We focus on the LP+FT protocol, as it is principled and achieves strong OOD performance.
278
+
279
+ Additionally, several works have studied properties of the model that influence the effectiveness of transfer learning (Azizpour et al., 2016; Huh et al., 2016; Kornblith et al., 2019; Lee et al., 2023a; Evci et al., 2022; Lee et al., 2023b; Izmailov et al., 2022; Lubana et al., 2023; Rame et al., 2022), including the robustness of pretrained features (Salman et al., 2020; Utrera et al., 2021). While the connection between adversarial training and improved feature representations (Allen-Zhu & Li, 2021; Kaur et al., 2019) has been studied, we use virtual adversarial training during LP to learn a better classifier that is less reliant upon simple features; we do not use an adversarially trained feature extractor. Finally, we note that while we are, to the best of our knowledge, the first to consider this holistic evaluation of safety and generalization in the context of transfer learning with highly expressive pretrained models, Hendrycks et al. have considered the trade-offs induced by different data augmentation strategies (Yun et al., 2019; Devries & Taylor, 2017; Hendrycks et al., 2020; Cubuk et al., 2019; 2020) on safety metrics in supervised learning. We emphasize that while our evaluation is similar, our work focuses on a different context and contains an additional layer of complexity, as we consider the interaction between adaptation protocols, generalization behavior, and safety performance.
280
+
281
+ # B EXPERIMENTAL DETAILS
282
+
283
+ Please see https://github.com/pujacomputes/23-ICLR-Adaptation.git for training details. In brief, we performed a grid search to find the best parameters, which are as follows. For CIFAR-10 and CIFAR-100, we train only the classifier for 200 epochs with $\mathrm{LR} = 30$ during LP. For FT, the entire model is trained for 20 epochs with $\mathrm{LR} = 1\mathrm{e}{-5}$. For $\mathrm{LP + FT}$, the model's classifier is initialized with the solution found by LP, and the model is then fine-tuned for 20 epochs; the LRs for LP and FT were determined by grid search. For the DomainNet experiments, we use 200 epochs with $\mathrm{LR} = 30$ during LP. For FT, the entire model is trained for 20 epochs with $\mathrm{LR} = 3\mathrm{e}{-4}$. For $\mathrm{LP + FT}$, the model's classifier is initialized with the solution found by LP, and the model is then fine-tuned for 20 epochs using $\mathrm{LR} = 3\mathrm{e}{-7}$. Furthermore, following Kumar et al., we freeze the batch-norm layers during $\mathrm{LP + FT}$. A CLIP (Radford et al., 2021) pretrained ResNet-50 is used for the DomainNet experiments, while a MoCoV2 (He et al., 2020) pretrained model is used for all CIFAR experiments. We use augmentation functions from timm (Wightman, 2019) and compute CKA scores using the package provided by torch-cka. When using augmented protocols, the same LRs are used. Note that all results were obtained by averaging over 3 seeds. We consider model soups of sizes 5, 10, and 20, tune $\epsilon$ over $\{0.005, 0.01, 0.02, 0.1\}$ for UDP, and
284
+
285
+ $\alpha$ over $\{0.001, 0.01, 0.1\}$ for VAT. For the CIFAR-MNIST results, LP is run for 100 epochs and FT for 20 epochs.
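+
+ The ID CKA column reported throughout is a representation-similarity score between the adapted and pretrained feature extractors. Our experiments use the torch-cka package; purely as an illustration (not that package's implementation), a minimal linear CKA on two feature matrices can be computed as follows:
+
+ ```python
+ import torch
+
+ def linear_cka(feats_a: torch.Tensor, feats_b: torch.Tensor) -> float:
+     """Linear CKA between two (num_examples x dim) feature matrices;
+     values near 1 indicate highly similar representations."""
+     # Center each feature dimension across examples.
+     a = feats_a - feats_a.mean(dim=0, keepdim=True)
+     b = feats_b - feats_b.mean(dim=0, keepdim=True)
+     # ||B^T A||_F^2 / (||A^T A||_F * ||B^T B||_F)
+     cross = torch.norm(b.T @ a, p="fro") ** 2
+     denom = torch.norm(a.T @ a, p="fro") * torch.norm(b.T @ b, p="fro")
+     return (cross / denom).item()
+ ```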
286
+
287
+ # B.1 MOTIVATION FOR HARDNESS-PROMOTING VARIANTS
288
+
289
+ We selected UDP (Pagliardini et al., 2022), VAT (Miyato et al., 2017), and model soups (Wortsman et al., 2022) as simplicity-bias mitigation strategies due to their effectiveness and ease of use. We emphasize, however, that our findings are not specific to the choice of a given mitigation strategy, and we expect that advancements in such strategies will further improve the effectiveness of our proposed LP+FT variants. At present, the selected strategies are strong, representative mitigations that we have confirmed are effective at mitigating simplicity bias in the adaptation context using the synthetic dominoes dataset in Sec. 4.
290
+
291
+ We conceptually justify each strategy here:
292
+
293
+ - UDP is designed to help mitigate simplicity bias by learning a large-margin classifier, as opposed to a narrow-margin classifier that relies upon simple features. As noted by Shah et al. (2020), such narrow-margin classifiers are sensitive to small perturbations, and the simple features supporting the decision boundary may not be discriminative under distribution shifts. By maximizing uncertainty (instead of loss) to create adversarial perturbations, UDP is able to learn a maximum-margin classifier that is better able to handle such shifts. Notably, to create such a maximum-margin classifier, the model must necessarily learn more complex features;
294
+ - We use virtual adversarial training (VAT) to help avoid reliance upon simple features, as VAT enforces distributional smoothness so that classifiers become robust in some epsilon-neighborhood around the input. We note that we perform this training in the hidden representation space, so perturbations may correspond to alterations of high-level semantics. To maintain strong performance under such high-level perturbations, the model should learn to rely upon more complex features and learn a larger-margin classifier (see the sketch after this list);
295
+ - We use model soups so that we may learn a set of classifiers that rely upon disjoint sets of features: by averaging diverse classifiers that have learned to rely upon different features, we avoid becoming overly reliant upon a single simple feature. In future work, we intend to build a theoretical framework that helps us better justify these interventions and create new ones.
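+
+ As an illustration of the VAT bullet above, the following is a minimal sketch of a VAT-style consistency loss applied to (frozen) backbone features during LP, in the spirit of Miyato et al. (2017); `head` is the linear probe, `feats` is a batch of features, and the single power-iteration step and constants are assumptions:
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def vat_loss(head, feats, eps=0.01, xi=1e-6, n_power=1):
+     """Virtual adversarial loss in representation space: penalize the
+     probe's prediction change under a worst-case small perturbation."""
+     with torch.no_grad():
+         p = F.softmax(head(feats), dim=1)  # current predictive distribution
+     # Estimate the most sensitive perturbation direction via power iteration.
+     d = F.normalize(torch.randn_like(feats).flatten(1), dim=1).view_as(feats)
+     for _ in range(n_power):
+         d.requires_grad_()
+         dist = F.kl_div(F.log_softmax(head(feats + xi * d), dim=1), p,
+                         reduction="batchmean")
+         grad = torch.autograd.grad(dist, d)[0]
+         d = F.normalize(grad.flatten(1), dim=1).view_as(feats).detach()
+     # Smoothness penalty in an eps-ball around each feature vector.
+     log_p_hat = F.log_softmax(head(feats + eps * d), dim=1)
+     return F.kl_div(log_p_hat, p, reduction="batchmean")
+ ```
+
+ UDP can be sketched analogously by choosing the perturbation direction that maximizes predictive uncertainty (e.g., entropy) rather than the KL divergence.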
296
+
297
+ # B.2 APPLYING SIMPLICITY BIAS MITIGATION STRATEGIES TO THE FINE-TUNING STEP
298
+
299
+ To demonstrate that simplicity-bias mitigation strategies must be applied during the LP step of LP+FT for maximum effectiveness, we conduct the following additional experiment.
300
+
301
+ Setup. We evaluate two additional protocols, $\mathrm{LP} + \mathrm{FT}$ (VAT) and $\mathrm{LP} + \mathrm{FT}$ (UDP), in which VAT and UDP are applied only during the FT step, on the synthetic dominoes dataset. We plot the results for Randomized OOD Accuracy in Fig. 6.
302
+
303
+ Results. Here, we see that, across three different correlation ratios, FT variants lose performance with respect to the LP mitigation variants. Notably, $\mathrm{LP} + \mathrm{FT}$ (UDP) loses up to $4\%$ performance with respect to $\mathrm{LP}(\mathrm{UDP}) + \mathrm{FT}$ . While performance drops are not as large for VAT, we
304
+
305
+ ![](images/65cf94a8388415833df64ff9c952ca9d67ae91cedbcaf0e9e63d13f31ca1448f.jpg)
306
+ Figure 6: Applying Mitigation Strategies to FT. We create FT variants of our LP mitigation strategies and evaluate them on the synthetic dominoes dataset. We see that FT variants lose performance with respect to LP variants, indicating that interventions must be undertaken during the LP step as originally proposed.
307
+
308
+ nonetheless see that $\mathrm{LP} + \mathrm{FT}(\mathrm{VAT})$ loses performance with respect to $\mathrm{LP}(\mathrm{VAT}) + \mathrm{FT}$ .
309
+
310
+ Our results in Fig. 6 support our conceptual argument that mitigation strategies must be undertaken during the LP step to ensure that subsequent FT is in a direction that preserves complex features; applying mitigation strategies during FT may be too late to avoid simplicity bias. We note that
311
+
312
+ applying mitigation strategies during FT, in addition to LP, may further improve performance, and we will add these variants in the final version. We did not include an FT soup variant, as it would be prohibitively expensive to train and average large soups of entire models (instead of classifiers). This highlights the computational efficiency of implementing mitigation strategies in the LP step itself.
313
+
314
+ # C ADDITIONAL RESULTS
315
+
316
+ Below, we include results corresponding to different hyperparameters (the number of souped classifiers, $\alpha$ for VAT, and $\epsilon$ for UDP); a sketch of the soup construction follows.
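+
+ For reference, a uniform soup over $k$ independently trained linear probes (the LP+soup-$k$ rows) can be formed by a simple parameter average, as in this hypothetical sketch (which assumes all heads share one architecture):
+
+ ```python
+ import copy
+ import torch
+
+ def soup_heads(heads):
+     """Average the weights of independently trained linear probes into a
+     single 'souped' classifier (uniform soup)."""
+     souped = copy.deepcopy(heads[0])
+     with torch.no_grad():
+         for name, param in souped.named_parameters():
+             stacked = torch.stack([dict(h.named_parameters())[name]
+                                    for h in heads])
+             param.copy_(stacked.mean(dim=0))
+     return souped
+ ```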
317
+
318
+ <table><tr><td rowspan="2">Protocol</td><td colspan="2">Generalization</td><td colspan="3">Robustness</td><td colspan="4">Calibration</td><td>Anomaly Detection</td><td>Rep. Similarity</td></tr><tr><td>ID Acc.</td><td>OOD Acc.</td><td>C Acc.</td><td>\(\overline{C}\) Acc.</td><td>Adv. Acc.</td><td>ID 1-RMS</td><td>C 1-RMS</td><td>\(\overline{C}\) 1-RMS</td><td>OOD 1-RMS</td><td>Out-of-Class AUROC</td><td>ID CKA</td></tr><tr><td>LP</td><td>0.9138</td><td>0.8190</td><td>0.6912</td><td>0.6553</td><td>0.0003</td><td>0.9595</td><td>0.8303</td><td>0.8142</td><td>0.8696</td><td>0.6206</td><td>1.0000</td></tr><tr><td>LP+ soup-5</td><td>0.9108</td><td>0.8348</td><td>0.7007</td><td>0.6678</td><td>0.0002</td><td>0.9748</td><td>0.8943</td><td>0.8835</td><td>0.9108</td><td>0.8463</td><td>1.0000</td></tr><tr><td>LP+ soup-10</td><td>0.9129</td><td>0.8359</td><td>0.6985</td><td>0.6652</td><td>0.0003</td><td>0.9669</td><td>0.9104</td><td>0.8956</td><td>0.9205</td><td>0.8713</td><td>1.0000</td></tr><tr><td>LP+ soup-20</td><td>0.9052</td><td>0.8353</td><td>0.6917</td><td>0.6588</td><td>0.0003</td><td>0.9605</td><td>0.9205</td><td>0.9037</td><td>0.9364</td><td>0.8859</td><td>1.0000</td></tr><tr><td>LP+udp-0.005</td><td>0.9129</td><td>0.8332</td><td>0.7015</td><td>0.6702</td><td>0.0003</td><td>0.9729</td><td>0.8879</td><td>0.8817</td><td>0.9017</td><td>0.8708</td><td>1.0000</td></tr><tr><td>LP+udp-0.01</td><td>0.9033</td><td>0.8356</td><td>0.6948</td><td>0.6643</td><td>0.0003</td><td>0.9689</td><td>0.9111</td><td>0.9023</td><td>0.9277</td><td>0.9033</td><td>1.0000</td></tr><tr><td>LP+udp-0.02</td><td>0.8885</td><td>0.8281</td><td>0.6796</td><td>0.6492</td><td>0.0004</td><td>0.9655</td><td>0.9259</td><td>0.9142</td><td>0.9473</td><td>0.9217</td><td>1.0000</td></tr><tr><td>LP+udp-0.1</td><td>0.8573</td><td>0.8005</td><td>0.6290</td><td>0.6064</td><td>0.0007</td><td>0.9245</td><td>0.9235</td><td>0.9143</td><td>0.9531</td><td>0.8570</td><td>1.0000</td></tr><tr><td>LP+vat-0.001</td><td>0.9189</td><td>0.8276</td><td>0.6945</td><td>0.6606</td><td>0.0006</td><td>0.9714</td><td>0.8564</td><td>0.8442</td><td>0.8927</td><td>0.7159</td><td>1.0000</td></tr><tr><td>LP+vat-0.01</td><td>0.8977</td><td>0.8251</td><td>0.6742</td><td>0.6483</td><td>0.0002</td><td>0.9265</td><td>0.9255</td><td>0.9139</td><td>0.9375</td><td>0.7200</td><td>1.0000</td></tr><tr><td>FT</td><td>0.9539</td><td>0.8754</td><td>0.7434</td><td>0.7553</td><td>0.0231</td><td>0.9668</td><td>0.8364</td><td>0.8453</td><td>0.9232</td><td>1.0000</td><td>0.6831</td></tr><tr><td>LP+FT</td><td>0.9442</td><td>0.8678</td><td>0.6921</td><td>0.6790</td><td>0.0018</td><td>0.9521</td><td>0.7849</td><td>0.7721</td><td>0.8864</td><td>0.6511</td><td>0.7853</td></tr><tr><td>(LP+soup-5) +FT</td><td>0.9466</td><td>0.8832</td><td>0.6997</td><td>0.6861</td><td>0.0001</td><td>0.9639</td><td>0.8197</td><td>0.8051</td><td>0.9155</td><td>0.9020</td><td>0.7603</td></tr><tr><td>(LP+soup-10) +FT</td><td>0.9467</td><td>0.8857</td><td>0.7022</td><td>0.6907</td><td>0.0001</td><td>0.9660</td><td>0.8307</td><td>0.8182</td><td>0.9184</td><td>0.9161</td><td>0.7671</td></tr><tr><td>(LP+soup-20) +FT</td><td>0.9466</td><td>0.8892</td><td>0.7031</td><td>0.6931</td><td>0.0001</td><td>0.9678</td><td>0.8390</td><td>0.8287</td><td>0.9216</td><td>0.9265</td><td>0.7806</td></tr><tr><td>(LP+udp-0.005) +FT</td><td>0.9458</td><td>0.8864</td><td>0.6962</td><td>0.6893</td><td>0.0005</td><td>0.9643</td><td>0.8127</td><td>0.8110</td><td>0.9119</td><td>0.9180</td><td>0.7742</td></tr><tr><td>(LP+udp-0.01) +FT</td><td>0.9450</td><td>0.8869</td><td>0.7048</td><td>0.6977</td><td>0.0004</td><td>0.9642</td><td>0.8335</td><td>0.8311</td><td>0.9209</td><td>0.9419</td><td>0.7746</td></tr><tr><td>(LP+udp-0.02) +FT</td><td>0.9440</td><td>0.8848</td><td>0.7028</td><td>0.6986</td><td>0.0004</td><td>0.9670</td><td>0.8472</td><td>0.8476</td><td>0.9237</td><td>0.9559</td><td>0.7764</td></tr><tr><td>(LP+udp-0.1) + FT</td><td>0.9435</td><td>0.8836</td><td>0.6959</td><td>0.6952</td><td>0.0000</td><td>0.9676</td><td>0.8449</td><td>0.8525</td><td>0.9355</td><td>0.9651</td><td>0.7382</td></tr><tr><td>(LP+vat)+FT</td><td>0.9611</td><td>0.8900</td><td>0.7442</td><td>0.7321</td><td>0.0027</td><td>0.9294</td><td>0.8355</td><td>0.8281</td><td>0.9178</td><td>0.8276</td><td>0.7839</td></tr></table>
319
+
320
+ Table 6: CIFAR10, Hardness-Promoting Augmentations.
321
+
322
+ <table><tr><td rowspan="2">Protocol</td><td colspan="2">Generalization</td><td colspan="3">Robustness</td><td colspan="4">Calibration</td><td>Anomaly Detection</td><td>Rep. Similarity</td></tr><tr><td>ID Acc.</td><td>OOD Acc.</td><td>C Acc.</td><td>\(\overline{C}\) Acc.</td><td>Adv. Acc.</td><td>ID 1-RMS</td><td>C 1-RMS</td><td>\(\overline{C}\) 1-RMS</td><td>OOD 1-RMS</td><td>Out-of-Class AUROC</td><td>ID CKA</td></tr><tr><td>LP</td><td>0.9521</td><td>0.8124</td><td>0.7010</td><td>0.7378</td><td>0.2350</td><td>0.9313</td><td>0.8693</td><td>0.8802</td><td>0.9117</td><td>0.9907</td><td>1.0000</td></tr><tr><td>LP+udp-0.005</td><td>0.9524</td><td>0.8114</td><td>0.7012</td><td>0.7379</td><td>0.2337</td><td>0.9304</td><td>0.8699</td><td>0.8806</td><td>0.9108</td><td>0.9907</td><td>1.000</td></tr><tr><td>LP+udp-0.01</td><td>0.9524</td><td>0.8110</td><td>0.7017</td><td>0.7382</td><td>0.2353</td><td>0.9308</td><td>0.8691</td><td>0.8801</td><td>0.9118</td><td>0.9908</td><td>1.000</td></tr><tr><td>LP+udp-0.02</td><td>0.9500</td><td>0.8126</td><td>0.7036</td><td>0.7387</td><td>0.2373</td><td>0.9343</td><td>0.8621</td><td>0.8763</td><td>0.9135</td><td>0.9913</td><td>1.000</td></tr><tr><td>LP+udp-0.1</td><td>0.9459</td><td>0.8165</td><td>0.6840</td><td>0.7220</td><td>0.2339</td><td>0.9032</td><td>0.8243</td><td>0.8427</td><td>0.8990</td><td>0.9882</td><td>1.000</td></tr><tr><td>LP+soup-5</td><td>0.9439</td><td>0.7996</td><td>0.6874</td><td>0.7290</td><td>0.2451</td><td>0.8806</td><td>0.7868</td><td>0.8094</td><td>0.9064</td><td>0.9897</td><td>1.0000</td></tr><tr><td>LP+soup-10</td><td>0.9373</td><td>0.7904</td><td>0.6767</td><td>0.7220</td><td>0.2547</td><td>0.8496</td><td>0.7478</td><td>0.7709</td><td>0.8841</td><td>0.9887</td><td>1.0000</td></tr><tr><td>LP+soup-20</td><td>0.9298</td><td>0.7841</td><td>0.6601</td><td>0.7082</td><td>0.2575</td><td>0.8056</td><td>0.7084</td><td>0.7305</td><td>0.8274</td><td>0.9867</td><td>1.0000</td></tr><tr><td>LP+vat-0.001</td><td>0.9524</td><td>0.8122</td><td>0.7010</td><td>0.7379</td><td>0.2345</td><td>0.9299</td><td>0.8682</td><td>0.8791</td><td>0.9103</td><td>0.9907</td><td>1.0000</td></tr><tr><td>FT</td><td>0.9518</td><td>0.7168</td><td>0.7011</td><td>0.7164</td><td>0.1563</td><td>0.8873</td><td>0.9019</td><td>0.8604</td><td>0.9295</td><td>0.9794</td><td>0.7847</td></tr><tr><td>LP+FT</td><td>0.9643</td><td>0.8261</td><td>0.7426</td><td>0.7671</td><td>0.2135</td><td>0.9782</td><td>0.9472</td><td>0.9451</td><td>0.8742</td><td>0.9924</td><td>0.9887</td></tr><tr><td>(LP+udp-0.005) +FT</td><td>0.9627</td><td>0.8243</td><td>0.7434</td><td>0.7666</td><td>0.2153</td><td>0.9811</td><td>0.9456</td><td>0.9445</td><td>0.8736</td><td>0.9922</td><td>0.98950</td></tr><tr><td>(LP+udp-0.01) +FT</td><td>0.9627</td><td>0.8253</td><td>0.7436</td><td>0.7669</td><td>0.2133</td><td>0.9812</td><td>0.9454</td><td>0.9447</td><td>0.8737</td><td>0.9923</td><td>0.98957</td></tr><tr><td>(LP+udp-0.02) +FT</td><td>0.9637</td><td>0.8265</td><td>0.7448</td><td>0.7681</td><td>0.2157</td><td>0.9768</td><td>0.9464</td><td>0.9467</td><td>0.8757</td><td>0.9927</td><td>0.98927</td></tr><tr><td>(LP+udp-0.1) +FT</td><td>0.9614</td><td>0.8249</td><td>0.7499</td><td>0.7689</td><td>0.2165</td><td>0.9808</td><td>0.9441</td><td>0.9420</td><td>0.8711</td><td>0.9912</td><td>0.9861</td></tr><tr><td>(LP+soup-5) + FT</td><td>0.9608</td><td>0.8163</td><td>0.7456</td><td>0.7684</td><td>0.1855</td><td>0.9760</td><td>0.9498</td><td>0.9492</td><td>0.8678</td><td>0.9936</td><td>0.98540</td></tr><tr><td>(LP+soup-10) + FT</td><td>0.9580</td><td>0.8114</td><td>0.7445</td><td>0.7678</td><td>0.1753</td><td>0.9838</td><td>0.9503</td><td>0.9488</td><td>0.8748</td><td>0.9938</td><td>0.98360</td></tr><tr><td>(LP+soup-20) + FT</td><td>0.9594</td><td>0.8165</td><td>0.7450</td><td>0.7684</td><td>0.1782</td><td>0.9893</td><td>0.9503</td><td>0.9490</td><td>0.8609</td><td>0.9936</td><td>0.98190</td></tr><tr><td>(LP+vat-0.001) +FT</td><td>0.9647</td><td>0.8247</td><td>0.7425</td><td>0.7650</td><td>0.2224</td><td>0.9727</td><td>0.9521</td><td>0.9463</td><td>0.8775</td><td>0.9925</td><td>0.9370</td></tr></table>
323
+
324
+ Table 7: Living17, Hardness-Promoting Augmentations
325
+
326
+ <table><tr><td rowspan="2">Protocol</td><td colspan="2">Generalization</td><td colspan="3">Robustness</td><td colspan="4">Calibration</td><td>Anomaly Detection</td><td>Rep. Similarity</td></tr><tr><td>ID Acc.</td><td>OOD Acc.</td><td>Sketch-C Acc.</td><td>Real-C Acc.</td><td>Adv. Acc.</td><td>ID 1-RMS</td><td>Sketch-C 1-RMS</td><td>Real-C 1-RMS</td><td>OOD 1-RMS</td><td>Out-of-Class AUROC</td><td>ID CKA</td></tr><tr><td>LP</td><td>0.8913</td><td>0.8013</td><td>0.6019</td><td>0.6020</td><td>0.1768</td><td>0.9638</td><td>0.9264</td><td>0.9045</td><td>0.9014</td><td>0.8679</td><td>1.0000</td></tr><tr><td>LP+augmix</td><td>0.8897</td><td>0.7998</td><td>0.6336</td><td>0.6104</td><td>0.1872</td><td>0.9718</td><td>0.9230</td><td>0.9263</td><td>0.9083</td><td>0.8818</td><td>1.0000</td></tr><tr><td>LP+autoaug</td><td>0.8944</td><td>0.8057</td><td>0.6419</td><td>0.6257</td><td>0.1857</td><td>0.9614</td><td>0.9357</td><td>0.9309</td><td>0.9022</td><td>0.8849</td><td>1.0000</td></tr><tr><td>LP+randaug</td><td>0.8971</td><td>0.8090</td><td>0.6392</td><td>0.6232</td><td>0.1877</td><td>0.9559</td><td>0.9321</td><td>0.9312</td><td>0.9036</td><td>0.8875</td><td>1.0000</td></tr><tr><td>LP+vat</td><td>0.8836</td><td>0.7914</td><td>0.5893</td><td>0.5963</td><td>0.1687</td><td>0.8897</td><td>0.9552</td><td>0.8905</td><td>0.9178</td><td>0.8735</td><td>1.0000</td></tr><tr><td>FT</td><td>0.7613</td><td>0.4522</td><td>0.5186</td><td>0.2744</td><td>0.4164</td><td>0.8368</td><td>0.6379</td><td>0.7234</td><td>0.5597</td><td>0.8841</td><td>0.6092</td></tr><tr><td>FT+augmix</td><td>0.8246</td><td>0.5233</td><td>0.5911</td><td>0.3408</td><td>0.4802</td><td>0.9308</td><td>0.8042</td><td>0.8665</td><td>0.6761</td><td>0.9255</td><td>0.5272</td></tr><tr><td>FT+autoaug</td><td>0.7786</td><td>0.5161</td><td>0.5561</td><td>0.3160</td><td>0.4313</td><td>0.9157</td><td>0.7485</td><td>0.8246</td><td>0.7324</td><td>0.9231</td><td>0.7025</td></tr><tr><td>FT+randaug</td><td>0.7823</td><td>0.5370</td><td>0.5610</td><td>0.3298</td><td>0.4551</td><td>0.9160</td><td>0.7970</td><td>0.8682</td><td>0.7444</td><td>0.9318</td><td>0.6477</td></tr><tr><td>LP+FT</td><td>0.8985</td><td>0.7990</td><td>0.6343</td><td>0.5979</td><td>0.1927</td><td>0.9566</td><td>0.8899</td><td>0.8445</td><td>0.8024</td><td>0.9022</td><td>0.9222</td></tr><tr><td>LP+(FT+augmix)</td><td>0.9047</td><td>0.8081</td><td>0.6673</td><td>0.5980</td><td>0.2597</td><td>0.9768</td><td>0.9200</td><td>0.9067</td><td>0.8443</td><td>0.9155</td><td>0.8811</td></tr><tr><td>LP+(FT+autoaug)</td><td>0.9023</td><td>0.8028</td><td>0.6571</td><td>0.5851</td><td>0.2354</td><td>0.9830</td><td>0.9249</td><td>0.8990</td><td>0.8484</td><td>0.9034</td><td>0.9096</td></tr><tr><td>LP+(FT+randaug)</td><td>0.9054</td><td>0.8099</td><td>0.6703</td><td>0.6152</td><td>0.2489</td><td>0.9786</td><td>0.9194</td><td>0.9044</td><td>0.8598</td><td>0.9252</td><td>0.9000</td></tr><tr><td>(LP+vat)+FT</td><td>0.9048</td><td>0.8009</td><td>0.6466</td><td>0.6131</td><td>0.1942</td><td>0.9686</td><td>0.8911</td><td>0.8428</td><td>0.7985</td><td>0.9204</td><td>0.9370</td></tr><tr><td>(LP+vat)+(FT+augmix)</td><td>0.9032</td><td>0.8024</td><td>0.6589</td><td>0.5896</td><td>0.2525</td><td>0.9769</td><td>0.9169</td><td>0.8929</td><td>0.8384</td><td>0.9212</td><td>0.8673</td></tr><tr><td>(LP+vat)+(FT+autoaug)</td><td>0.9003</td><td>0.8049</td><td>0.6600</td><td>0.5862</td><td>0.2331</td><td>0.9783</td><td>0.9178</td><td>0.9000</td><td>0.8381</td><td>0.9149</td><td>0.9244</td></tr><tr><td>(LP+vat)+(FT+randaug)</td><td>0.9006</td><td>0.8060</td><td>0.6651</td><td>0.5894</td><td>0.2622</td><td>0.9762</td><td>0.9197</td><td>0.8993</td><td>0.8414</td><td>0.9238</td><td>0.8956</td></tr></table>
327
+
328
+ Table 8: DomainNet, Diversity Promoting Augmentations and Generalization Trade-offs.
329
+
330
+ <table><tr><td rowspan="2">Protocol</td><td colspan="2">Generalization</td><td colspan="3">Robustness</td><td colspan="4">Calibration</td><td>Anomaly Det.</td><td>Rep. Similarity</td></tr><tr><td>ID Acc.</td><td>OOD Acc.</td><td>C Acc.</td><td>\(\overline{C}\) Acc.</td><td>Adv. Acc.</td><td>ID 1-RMS</td><td>C 1-RMS</td><td>\(\overline{C}\) 1-RMS</td><td>OOD 1-RMS</td><td>Out-of-Class AUROC</td><td>ID CKA</td></tr><tr><td>LP</td><td>0.9297</td><td>0.9083</td><td>0.8532</td><td>0.7491</td><td>0.7077</td><td>0.9794</td><td>0.9006</td><td>0.9007</td><td>0.9301</td><td>0.9623</td><td>0.0668</td></tr><tr><td>LP+ soup-5</td><td>0.9220</td><td>0.9151</td><td>0.8315</td><td>0.7432</td><td>0.7050</td><td>0.9598</td><td>0.9232</td><td>0.9279</td><td>0.9623</td><td>0.9665</td><td>0.1399</td></tr><tr><td>LP+ soup-10</td><td>0.9156</td><td>0.9135</td><td>0.8183</td><td>0.7344</td><td>0.6985</td><td>0.9476</td><td>0.9221</td><td>0.9271</td><td>0.9732</td><td>0.9602</td><td>0.1778</td></tr><tr><td>LP+ soup-20</td><td>0.9069</td><td>0.9064</td><td>0.8065</td><td>0.7216</td><td>0.6885</td><td>0.9279</td><td>0.9129</td><td>0.9191</td><td>0.9714</td><td>0.9484</td><td>0.2617</td></tr><tr><td>LP+ udp-0.005</td><td>0.9299</td><td>0.9092</td><td>0.8533</td><td>0.7494</td><td>0.7079</td><td>0.9794</td><td>0.9009</td><td>0.9003</td><td>0.9312</td><td>0.9614</td><td>0.0822</td></tr><tr><td>LP+ udp-0.01</td><td>0.9298</td><td>0.9097</td><td>0.8535</td><td>0.7495</td><td>0.7083</td><td>0.9795</td><td>0.9007</td><td>0.9006</td><td>0.9316</td><td>0.9616</td><td>0.0880</td></tr><tr><td>LP+ udp-0.02</td><td>0.9294</td><td>0.9108</td><td>0.8538</td><td>0.7497</td><td>0.7088</td><td>0.9789</td><td>0.9012</td><td>0.9014</td><td>0.9335</td><td>0.9631</td><td>0.1017</td></tr><tr><td>LP+ udp-0.1</td><td>0.9238</td><td>0.9218</td><td>0.8377</td><td>0.7488</td><td>0.7111</td><td>0.9801</td><td>0.9154</td><td>0.9216</td><td>0.9517</td><td>0.9645</td><td>0.1478</td></tr><tr><td>LP+ vat-0.001</td><td>0.9298</td><td>0.9091</td><td>0.8533</td><td>0.7493</td><td>0.7078</td><td>0.9801</td><td>0.9014</td><td>0.9012</td><td>0.9325</td><td>0.9614</td><td>0.0784</td></tr><tr><td>LP+ vat-0.01</td><td>0.9295</td><td>0.9094</td><td>0.8531</td><td>0.7494</td><td>0.7080</td><td>0.9800</td><td>0.9039</td><td>0.9040</td><td>0.9342</td><td>0.9632</td><td>0.0837</td></tr><tr><td>LP+ vat-0.1</td><td>0.9275</td><td>0.9106</td><td>0.8493</td><td>0.7481</td><td>0.7087</td><td>0.9581</td><td>0.9191</td><td>0.9246</td><td>0.9589</td><td>0.9598</td><td>0.1528</td></tr><tr><td>FT</td><td>0.9724</td><td>0.8761</td><td>0.9218</td><td>0.8131</td><td>0.8074</td><td>0.9577</td><td>0.8429</td><td>0.8418</td><td>0.8855</td><td>0.9138</td><td>0.9317</td></tr><tr><td>LP+FT</td><td>0.9692</td><td>0.9387</td><td>0.9195</td><td>0.8106</td><td>0.7736</td><td>0.9451</td><td>0.8034</td><td>0.7743</td><td>0.9026</td><td>0.8949</td><td>0.5349</td></tr><tr><td>(LP+soup-5)+FT</td><td>0.9685</td><td>0.9417</td><td>0.9210</td><td>0.8136</td><td>0.7787</td><td>0.9385</td><td>0.8079</td><td>0.7765</td><td>0.9102</td><td>0.8974</td><td>0.5315</td></tr><tr><td>(LP+soup-10)+FT</td><td>0.9681</td><td>0.9411</td><td>0.9220</td><td>0.8178</td><td>0.7824</td><td>0.9382</td><td>0.8119</td><td>0.7796</td><td>0.9072</td><td>0.8933</td><td>0.5521</td></tr><tr><td>(LP+soup-20)+FT</td><td>0.9677</td><td>0.9395</td><td>0.9213</td><td>0.8164</td><td>0.7837</td><td>0.9385</td><td>0.8107</td><td>0.7817</td><td>0.9070</td><td>0.8964</td><td>0.5411</td></tr><tr><td>(LP+udp-0.005)+FT</td><td>0.9677</td><td>0.9297</td><td>0.9142</td><td>0.8104</td><td>0.7710</td><td>0.9422</td><td>0.8024</td><td>0.7718</td><td>0.8942</td><td>0.8916</td><td>0.6428</td></tr><tr><td>(LP+udp-0.01)+FT</td><td>0.9677</td><td>0.9359</td><td>0.9195</td><td>0.8098</td><td>0.7721</td><td>0.9417</td><td>0.8029</td><td>0.7732</td><td>0.9019</td><td>0.8999</td><td>0.4239</td></tr><tr><td>(LP+udp-0.02)+FT</td><td>0.9687</td><td>0.9349</td><td>0.9195</td><td>0.8136</td><td>0.7724</td><td>0.9437</td><td>0.8067</td><td>0.7736</td><td>0.8994</td><td>0.8981</td><td>0.5015</td></tr><tr><td>(LP+udp-0.1)+FT</td><td>0.9688</td><td>0.9423</td><td>0.9242</td><td>0.8174</td><td>0.7811</td><td>0.9408</td><td>0.8130</td><td>0.7815</td><td>0.9072</td><td>0.9064</td><td>0.4496</td></tr><tr><td>(LP+vat-0.001)+FT</td><td>0.9681</td><td>0.9366</td><td>0.9180</td><td>0.8111</td><td>0.7727</td><td>0.9422</td><td>0.8033</td><td>0.7732</td><td>0.9013</td><td>0.8962</td><td>0.5904</td></tr><tr><td>(LP+vat-0.01)+FT</td><td>0.9689</td><td>0.9366</td><td>0.9168</td><td>0.8121</td><td>0.7766</td><td>0.9455</td><td>0.8062</td><td>0.7791</td><td>0.9013</td><td>0.8918</td><td>0.5687</td></tr><tr><td>(LP+vat-0.1)+FT</td><td>0.9692</td><td>0.9402</td><td>0.9207</td><td>0.8127</td><td>0.7743</td><td>0.9420</td><td>0.8068</td><td>0.7734</td><td>0.9083</td><td>0.8978</td><td>0.4398</td></tr></table>
331
+
332
+ Table 9: CIFAR10 with ResNet101/SimCLR Pretrained Model. We see that with a larger model and a different pretraining method, our proposed variants still provide some benefits. We note that baseline performance also improves as a result of the larger pretrained model.
2023/A Closer Look at Model Adaptation using Feature Distortion and Simplicity Bias/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1399c8e7b972b15f3cf3dea5f6e6d44e1f44814e2ae92266fed85ac3caad4abb
3
+ size 1008251
2023/A Closer Look at Model Adaptation using Feature Distortion and Simplicity Bias/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A General Framework for Sample-Efficient Function Approximation in Reinforcement Learning/67359384-56cf-4363-aee6-c64cf6dd5faf_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A General Framework for Sample-Efficient Function Approximation in Reinforcement Learning/67359384-56cf-4363-aee6-c64cf6dd5faf_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A General Framework for Sample-Efficient Function Approximation in Reinforcement Learning/67359384-56cf-4363-aee6-c64cf6dd5faf_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:230091b829a6a9cee3127626a6d44c1ece1ca88deaa76e70ed6bddb8556d0c55
3
+ size 1437100
2023/A General Framework for Sample-Efficient Function Approximation in Reinforcement Learning/full.md ADDED
The diff for this file is too large to render. See raw diff
 
2023/A General Framework for Sample-Efficient Function Approximation in Reinforcement Learning/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:dbe677c6b0ea46926e929311f54cfeb3c52fb85faec75fd140f1e6a4263ebcb9
3
+ size 2362491
2023/A General Framework for Sample-Efficient Function Approximation in Reinforcement Learning/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Higher Precision Algorithm for Computing the $1$-Wasserstein Distance/da4f6dce-4c38-439e-aa6c-715a9f5477fe_content_list.json ADDED
@@ -0,0 +1,1504 @@




1
+ [
2
+ {
3
+ "type": "text",
4
+ "text": "A HIGHER PRECISION ALGORITHM FOR COMPUTING THE 1-WASSERSTEIN DISTANCE",
5
+ "text_level": 1,
6
+ "bbox": [
7
+ 171,
8
+ 99,
9
+ 823,
10
+ 146
11
+ ],
12
+ "page_idx": 0
13
+ },
14
+ {
15
+ "type": "text",
16
+ "text": "Pankaj K. Agarwal<sup>1*</sup>, Sharath Raghvendra<sup>2</sup>, Pouyan Shirzadian<sup>2</sup>, and Rachita Sowle<sup>2</sup>",
17
+ "bbox": [
18
+ 196,
19
+ 174,
20
+ 799,
21
+ 191
22
+ ],
23
+ "page_idx": 0
24
+ },
25
+ {
26
+ "type": "text",
27
+ "text": "$^{1}$ Duke University, $^{2}$ Virginia Tech",
28
+ "bbox": [
29
+ 387,
30
+ 200,
31
+ 609,
32
+ 218
33
+ ],
34
+ "page_idx": 0
35
+ },
36
+ {
37
+ "type": "text",
38
+ "text": "ABSTRACT",
39
+ "text_level": 1,
40
+ "bbox": [
41
+ 450,
42
+ 258,
43
+ 545,
44
+ 273
45
+ ],
46
+ "page_idx": 0
47
+ },
48
+ {
49
+ "type": "text",
50
+ "text": "We consider the problem of computing the 1-Wasserstein distance $\\mathcal{W}(\\mu, \\nu)$ between two $d$ -dimensional discrete distributions $\\mu$ and $\\nu$ whose support lie within the unit hypercube. There are several algorithms that estimate $\\mathcal{W}(\\mu, \\nu)$ within an additive error of $\\varepsilon$ . However, when $\\mathcal{W}(\\mu, \\nu)$ is small, the additive error $\\varepsilon$ dominates, leading to noisy results. Consider any additive approximation algorithm with execution time $T(n, \\varepsilon)$ . We propose an algorithm that runs in $O(T(n, \\varepsilon / d) \\log n)$ time and boosts the accuracy of estimating $\\mathcal{W}(\\mu, \\nu)$ from $\\varepsilon$ to an expected additive error of $\\min\\{\\varepsilon, (d \\log_{\\sqrt{d} / \\varepsilon} n) \\mathcal{W}(\\mu, \\nu)\\}$ . For the special case where every point in the support of $\\mu$ and $\\nu$ has a mass of $1 / n$ (also called the Euclidean Bipartite Matching problem), we describe an algorithm to boost the accuracy of any additive approximation algorithm from $\\varepsilon$ to an expected additive error of $\\min\\{\\varepsilon, (d \\log \\log n) \\mathcal{W}(\\mu, \\nu)\\}$ in $O(T(n, \\varepsilon / d) \\log \\log n)$ time.",
51
+ "bbox": [
52
+ 228,
53
+ 290,
54
+ 767,
55
+ 464
56
+ ],
57
+ "page_idx": 0
58
+ },
59
+ {
60
+ "type": "text",
61
+ "text": "1 INTRODUCTION",
62
+ "text_level": 1,
63
+ "bbox": [
64
+ 171,
65
+ 489,
66
+ 336,
67
+ 506
68
+ ],
69
+ "page_idx": 0
70
+ },
71
+ {
72
+ "type": "text",
73
+ "text": "Given two discrete probability distributions $\\mu$ and $\\nu$ whose support $A$ and $B$ , respectively, lie inside the $d$ -dimensional unit hypercube $[0,1]^d$ with $\\max \\{|A|, |B|\\} = n$ , the 1-Wasserstein distance $\\mathcal{W}(\\mu, \\nu)$ (also called the Earth Mover's distance) between them is the minimum cost required to transport mass from $\\nu$ to $\\mu$ under the Euclidean metric. The special case where $|A| = |B| = n$ and the mass at each point of $A \\cup B$ is $1/n$ is called the Euclidean Bipartite Matching (EBM) problem. In machine learning applications, one can improve a model $\\mu$ by using its Earth Mover's distance from a distribution $\\nu$ built on real data. Consequently, it has been extensively used in generative models (Deshpande et al. (2018); Genevay et al. (2018); Salimans et al. (2018)), robust learning (Esfahani & Kuhn (2018)), supervised learning (Luis et al. (2018); Janati et al. (2019)), and parameter estimation (Liu et al. (2018); Bernton et al. (2019)).",
74
+ "bbox": [
75
+ 169,
76
+ 523,
77
+ 826,
78
+ 662
79
+ ],
80
+ "page_idx": 0
81
+ },
82
+ {
83
+ "type": "text",
84
+ "text": "Computing the 1-Wasserstein distance between $\\mu$ and $\\nu$ can be modeled as a linear program and solved in $O(n^{3}\\log n)$ time (Edmonds & Karp (1972); Orlin (1988)), which is computationally expensive. There has been substantial effort on designing $\\varepsilon$ -additive-approximation algorithms that estimate $\\mathcal{W}(\\mu, \\nu)$ within an additive error of $\\varepsilon$ in $n^{2}\\mathrm{poly}(d, \\log n, 1/\\varepsilon)$ time (Cuturi (2013); Lin et al. (2019); Lahn et al. (2019)). When $\\mathcal{W}(\\mu, \\nu)$ is significantly smaller than $\\varepsilon$ , however, the cost produced by such algorithms will be unreliable as it is dominated by the error parameter $\\varepsilon$ .",
85
+ "bbox": [
86
+ 169,
87
+ 669,
88
+ 823,
89
+ 753
90
+ ],
91
+ "page_idx": 0
92
+ },
93
+ {
94
+ "type": "text",
95
+ "text": "To get a higher accuracy in this case, for $\\alpha > 0$ , one can compute an $\\alpha$ -relative approximation of the 1-Wasserstein distance, which is a cost $w$ that satisfies $\\mathcal{W}(\\mu, \\nu) \\leq w \\leq \\alpha \\mathcal{W}(\\mu, \\nu)$ . There has been considerable effort on designing relative-approximation algorithms; however, many such methods suffer from curse of dimensionality; i.e., their execution time grows exponentially in $d$ . Furthermore, they rely on fairly involved data structures that have good asymptotic execution times but are slow in practice and difficult to implement, making them impractical (Agarwal & Sharathkumar (2014); Fox & Lu (2020); Agarwal et al. (2022a;b)). The only exception to this is a classical greedy algorithm, based on a $d$ -dimensional quadtree, that returns an $O(d \\log n)$ -relative approximation of the 1-Wasserstein distance in $O(nd)$ time. It has been used in various machine-learning and computer-vision applications (Gupta et al. (2010); Backurs et al. (2020)). In the case of the Euclidean Bipartite",
96
+ "bbox": [
97
+ 169,
98
+ 758,
99
+ 826,
100
+ 901
101
+ ],
102
+ "page_idx": 0
103
+ },
104
+ {
105
+ "type": "header",
106
+ "text": "Published as a conference paper at ICLR 2023",
107
+ "bbox": [
108
+ 171,
109
+ 32,
110
+ 478,
111
+ 47
112
+ ],
113
+ "page_idx": 0
114
+ },
115
+ {
116
+ "type": "page_footnote",
117
+ "text": "*Following convention from Theoretical Computer Science, all authors are ordered alphabetically.",
118
+ "bbox": [
119
+ 197,
120
+ 909,
121
+ 779,
122
+ 925
123
+ ],
124
+ "page_idx": 0
125
+ },
126
+ {
127
+ "type": "page_number",
128
+ "text": "1",
129
+ "bbox": [
130
+ 493,
131
+ 948,
132
+ 504,
133
+ 959
134
+ ],
135
+ "page_idx": 0
136
+ },
137
+ {
138
+ "type": "text",
139
+ "text": "Matching, Agarwal & Varadarajan (2004) and Indyk (2007) generalized the algorithm in the hierarchical framework to achieve a relative approximation ratio of $O(d^{2}\\log (1 / \\varepsilon))$ in $\\tilde{O} (n^{1 + \\varepsilon})$ time<sup>1</sup>.",
140
+ "bbox": [
141
+ 169,
142
+ 103,
143
+ 823,
144
+ 136
145
+ ],
146
+ "page_idx": 1
147
+ },
148
+ {
149
+ "type": "text",
150
+ "text": "In this paper, we design an algorithm that combines any additive approximation algorithm with the hierarchical quad-tree based framework. As a result, our algorithm achieves better guarantees for both additive and relative approximations. To our knowledge, this is the first result that combines the power of additive and relative approximation techniques, leading to improvement in both settings.",
151
+ "bbox": [
152
+ 169,
153
+ 140,
154
+ 826,
155
+ 198
156
+ ],
157
+ "page_idx": 1
158
+ },
159
+ {
160
+ "type": "text",
161
+ "text": "1.1 PROBLEM DEFINITION",
162
+ "text_level": 1,
163
+ "bbox": [
164
+ 171,
165
+ 215,
166
+ 372,
167
+ 229
168
+ ],
169
+ "page_idx": 1
170
+ },
171
+ {
172
+ "type": "text",
173
+ "text": "We are given two discrete distributions $\\mu$ and $\\nu$ . Let $A$ and $B$ be the points in the support of $\\mu$ and $\\nu$ , respectively. For the distribution $\\mu$ (resp. $\\nu$ ), suppose each point $a \\in A$ (resp. $b \\in B$ ) has a probability of $\\mu_{a}$ (resp. $\\nu_{b}$ ) associated with it, where $\\sum_{a \\in A} \\mu_{a} = \\sum_{b \\in B} \\nu_{b} = 1$ . Let $G(A, B)$ denote the complete bipartite graph where, for any pair of points $a \\in A$ and $b \\in B$ , there is an edge from $a$ to $b$ of cost $\\|a - b\\|$ , i.e., the Euclidean distance between $a$ and $b$ .",
174
+ "bbox": [
175
+ 169,
176
+ 242,
177
+ 823,
178
+ 313
179
+ ],
180
+ "page_idx": 1
181
+ },
182
+ {
183
+ "type": "text",
184
+ "text": "For each point $a \\in A$ (resp. $b \\in B$ ), we assign a weight $\\eta(a) = -\\mu_a$ (resp. $\\eta(b) = \\nu_b$ ). We refer to any point $v \\in A \\cup B$ with a negative (resp. positive) weight as a demand point (resp. supply point) with a demand (resp. supply) of $|\\eta(v)|$ . Given any subset of points $V \\subseteq A \\cup B$ , the weight $\\eta(V)$ is simply the sum of the weights of its points; i.e., $\\eta(V) = \\sum_{v \\in V} \\eta(v)$ . For any edge $(a, b) \\in A \\times B$ , let the cost of transporting a supply of $\\beta$ from $b$ to $a$ be $\\beta \\| a - b \\|$ . In this problem, our goal is to transport all supplies from supply points to demand points at the minimum cost. More formally, a transport plan is a function $\\sigma: A \\times B \\to \\mathbb{R}_{\\geq 0}$ that assigns a non-negative value to each edge of $G(A, B)$ indicating the quantity of supplies transported along the edge. The transport plan $\\sigma$ is such that the total supplies transported into (resp. from) any demand (resp. supply) point $a \\in A$ (resp. $b \\in B$ ) is equal to $-\\eta(a)$ (resp. $\\eta(b)$ ). The cost of the transport plan $\\sigma$ , denoted by $w(\\sigma)$ , is given by $\\sum_{(a, b) \\in A \\times B} \\sigma(a, b) \\| a - b \\|$ . The goal of this problem is to find a minimum-cost transport plan.",
185
+ "bbox": [
186
+ 169,
187
+ 319,
188
+ 826,
189
+ 476
190
+ ],
191
+ "page_idx": 1
192
+ },
193
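+ The LP behind this definition is compact enough to state in code. A minimal runnable sketch (our illustration, not the paper's algorithm; the toy instance below is ours) that solves the transport LP exactly with SciPy:
+ ```python
+ # sigma[i, j] >= 0 moves supply from b_j to a_i; rows must receive mu_a
+ # and columns must send nu_b, exactly as in the definition above.
+ import numpy as np
+ from scipy.optimize import linprog
+
+ def wasserstein_lp(A, mu, B, nu):
+     n, m = len(A), len(B)
+     c = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2).ravel()
+     A_eq = np.zeros((n + m, n * m))
+     for i in range(n):                  # demand a_i receives exactly mu[i]
+         A_eq[i, i * m:(i + 1) * m] = 1.0
+     for j in range(m):                  # supply b_j sends exactly nu[j]
+         A_eq[n + j, j::m] = 1.0
+     res = linprog(c, A_eq=A_eq, b_eq=np.concatenate([mu, nu]),
+                   bounds=(0, None))
+     return res.fun                      # w(sigma*) = W(mu, nu)
+
+ A = np.array([[0.0, 0.0], [1.0, 0.0]]); mu = np.array([0.5, 0.5])
+ B = np.array([[0.0, 1.0], [1.0, 1.0]]); nu = np.array([0.5, 0.5])
+ print(wasserstein_lp(A, mu, B, nu))     # 1.0: each mass moves distance 1
+ ```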
+ {
194
+ "type": "text",
195
+ "text": "If two points $a \\in A$ and $b \\in B$ are co-located (i.e., they share the same coordinates), then due to the metric property of the Euclidean distances, if $\\eta(b) = -\\eta(a)$ , we can match the supplies to the demands at zero cost and remove the points from the input. Otherwise, if $\\eta(b) \\neq -\\eta(a)$ , we replace the two points with a single point of weight $\\eta(a) + \\eta(b)$ . By definition, if the weight of the newly created point is negative (resp. positive), we consider it a demand (resp. supply) point. In our presentation, we always consider $A$ and $B$ to be the point sets obtained after replacing all the co-located points. Observe that, after removing the co-located points, the total supply $U = \\eta(B)$ may be less than 1. However, it is easy to see that $\\eta(B) = -\\eta(A)$ ; i.e., the problem instance defined on $A \\cup B$ is balanced. We say that a transport plan $\\sigma$ is an $\\varepsilon$ -close transport plan if $w(\\sigma) \\leq \\mathcal{W}(\\mu, \\nu) + \\varepsilon U$ .",
196
+ "bbox": [
197
+ 169,
198
+ 482,
199
+ 823,
200
+ 609
201
+ ],
202
+ "page_idx": 1
203
+ },
204
+ {
205
+ "type": "text",
206
+ "text": "In many applications, the distributions $\\mu$ and $\\nu$ are continuous or large (possibly unknown) discrete distributions. In such cases, it might be computationally expensive or even impossible to compute $\\mathcal{W}(\\mu, \\nu)$ . Instead, one can draw two sets $A$ and $B$ of $n$ samples each from $\\mu$ and $\\nu$ , respectively. Each point $a \\in A$ (resp. $b \\in B$ ) is assigned a weight of $\\eta(a) = -1/n$ (resp. $\\eta(b) = 1/n$ ). One can approximate the 1-Wasserstein distance between the distributions $\\mu$ and $\\nu$ by simply solving the 1-Wasserstein problem defined on $G(A, B)$ . This special case where every point has the same demand and supply is called the Euclidean Bipartite Matching (EBM) problem. A matching $M$ is a set of vertex-disjoint edges in $G(A, B)$ and has a cost $1/n \\sum_{(a, b) \\in M} \\|a - b\\|$ . For the EBM problem, the optimal transport plan is simply a minimum-cost matching of cardinality $n$ .",
207
+ "bbox": [
208
+ 169,
209
+ 614,
210
+ 823,
211
+ 743
212
+ ],
213
+ "page_idx": 1
214
+ },
215
+ {
216
+ "type": "text",
217
+ "text": "For any point set $P$ in the Euclidean space, let $C_{\\max}(P) \\coloneqq \\max_{(a,b) \\in P \\times P} \\| a - b \\|$ denote the distance of its farthest pair and $C_{\\min}(P) \\coloneqq \\min_{(a,b) \\in P \\times P, a \\neq b} \\| a - b \\|$ denote the distance of its closest pair. The spread of the point set, denoted by $\\Delta(P)$ , is the ratio $\\Delta(P) = C_{\\max}(P) / C_{\\min}(P)$ . When $P$ is obvious from the context, we simply use $C_{\\min}$ , $C_{\\max}$ , and $\\Delta$ to denote the distance of its closest and farthest pair and its spread.",
218
+ "bbox": [
219
+ 169,
220
+ 748,
221
+ 826,
222
+ 821
223
+ ],
224
+ "page_idx": 1
225
+ },
226
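+ A brute-force check of these definitions on a small point set (our sketch):
+ ```python
+ import numpy as np
+
+ def spread(P):
+     # Delta(P) = C_max(P) / C_min(P), straight from the definitions.
+     D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2)
+     return D.max() / D[D > 0].min()   # C_min uses the closest *distinct* pair
+
+ P = np.array([[0.0, 0.0], [0.0, 1.0], [3.0, 4.0]])
+ print(spread(P))                      # C_max = 5, C_min = 1, Delta = 5
+ ```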
+ {
227
+ "type": "text",
228
+ "text": "1.2 RELATED WORK",
229
+ "text_level": 1,
230
+ "bbox": [
231
+ 171,
232
+ 839,
233
+ 333,
234
+ 853
235
+ ],
236
+ "page_idx": 1
237
+ },
238
+ {
239
+ "type": "text",
240
+ "text": "Relative Approximations: In fixed dimensional settings, i.e., $d = O(1)$ , there is extensive work on the design of near-linear time Monte-Carlo $(1 + \\varepsilon)$ -relative approximation al-",
241
+ "bbox": [
242
+ 169,
243
+ 866,
244
+ 823,
245
+ 897
246
+ ],
247
+ "page_idx": 1
248
+ },
249
+ {
250
+ "type": "header",
251
+ "text": "Published as a conference paper at ICLR 2023",
252
+ "bbox": [
253
+ 171,
254
+ 32,
255
+ 478,
256
+ 47
257
+ ],
258
+ "page_idx": 1
259
+ },
260
+ {
261
+ "type": "page_footnote",
262
+ "text": "$^1\\tilde{O} ()$ hides poly $(d,\\log n,1 / \\varepsilon)$ factors in the execution time.",
263
+ "bbox": [
264
+ 191,
265
+ 907,
266
+ 553,
267
+ 925
268
+ ],
269
+ "page_idx": 1
270
+ },
271
+ {
272
+ "type": "page_number",
273
+ "text": "2",
274
+ "bbox": [
275
+ 493,
276
+ 946,
277
+ 504,
278
+ 959
279
+ ],
280
+ "page_idx": 1
281
+ },
282
+ {
283
+ "type": "text",
284
+ "text": "gorithms for 1-Wasserstein and EBM problems. The execution time of these algorithms are $\\Omega(n(d\\varepsilon^{-1}\\log n)^d)$ (Khesin et al. (2019); Raghvendra & Agarwal (2020); Fox & Lu (2020); Agarwal et al. (2022a)). A recent algorithm presented by Agarwal et al. (2022b) improved the dependence on $d$ slightly and achieved an execution time of $\\Omega(n(d\\varepsilon^{-1}\\log\\log n)^d)$ . Nonetheless, the exponential dependence on $d$ makes it unsuitable for higher dimensions.",
285
+ "bbox": [
286
+ 169,
287
+ 103,
288
+ 823,
289
+ 174
290
+ ],
291
+ "page_idx": 2
292
+ },
293
+ {
294
+ "type": "text",
295
+ "text": "For higher dimensions, a quad-tree based greedy algorithm provides an expected $O(d \\log n)$ approximation in $O(nd)$ time. This algorithm constructs a randomly-shifted recursive partition of space by splitting each cell into $2^d$ cells with half the side-length. For every cell of the quad-tree, the algorithm moves excess supply present inside its child to meet any excess demand present inside another child. Agarwal & Varadarajan (2004) combined a different hierarchical partition with an exact solver to get an expected $O(d^2 \\log 1 / \\varepsilon)$ -approximation in $\\tilde{O}(n^{1 + \\varepsilon})$ time. Indyk (2007) combined the hierarchical greedy framework with importance sampling to estimate the cost of the Euclidean bipartite matching within a constant factor of the optimal. Additionally, there are $\\tilde{O}(nd)$ time algorithms that approximate the matching cost with a factor of $O(\\log^2 n)$ (Andoni et al. (2008); Chen et al. (2022)).",
296
+ "bbox": [
297
+ 169,
298
+ 180,
299
+ 826,
300
+ 325
301
+ ],
302
+ "page_idx": 2
303
+ },
304
+ {
305
+ "type": "text",
306
+ "text": "There are also approximation algorithms that run in $\\tilde{O}(n^2)$ time; however, these algorithms rely on several black-box reductions and at present, there are no usable implementations of these algorithms (Agarwal & Sharathkumar 2014; Sherman (2017)). The lack of fast exact and relative approximations that are also implementable have motivated machine-learning researchers to design additive-approximation algorithms, which we discuss next.",
307
+ "bbox": [
308
+ 169,
309
+ 330,
310
+ 823,
311
+ 402
312
+ ],
313
+ "page_idx": 2
314
+ },
315
+ {
316
+ "type": "text",
317
+ "text": "Additive Approximations: Cuturi (2013) introduced a regularized version of the optimal transport problem that produces an $\\varepsilon$ -close transport plan, which can be solved using the Sinkhorn method. For input points within the unit hypercube, such an algorithm produces an $\\varepsilon$ -close transport plan in $\\tilde{O}(n^2 d / \\varepsilon^2)$ time $^2$ (Lin et al. (2019)). One can also adapt graph theoretic approaches including the algorithm by Gabow & Tarjan (1989) to obtain an $\\varepsilon$ -close solution in $O(n^2 \\sqrt{d} / \\varepsilon + nd / \\varepsilon^2)$ time for points within the unit hypercube (Lahn et al. (2019)). Some of the additive approximation methods, including the Sinkhorn method, are highly parallelizable. For instance, the algorithm by Jambulapati et al. (2019) has a parallel depth of $\\tilde{O}(1 / \\varepsilon)$ ; see also Altschuler et al. (2017; 2019); Blanchet et al. (2018); Quanrud (2018); Dvurechensky et al. (2018); Guo et al. (2020).",
318
+ "bbox": [
319
+ 169,
320
+ 407,
321
+ 826,
322
+ 541
323
+ ],
324
+ "page_idx": 2
325
+ },
326
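+ For concreteness, a minimal Sinkhorn sketch (ours; the parameters are illustrative) for the regularized problem referenced above: alternately rescale the rows and columns of $K = \exp(-C/\mathrm{reg})$ until the plan's marginals match $\mu$ and $\nu$.
+ ```python
+ import numpy as np
+
+ def sinkhorn(C, mu, nu, reg=0.01, iters=1000):
+     K = np.exp(-C / reg)
+     u = np.ones_like(mu)
+     for _ in range(iters):
+         v = nu / (K.T @ u)                 # fix column marginals
+         u = mu / (K @ v)                   # fix row marginals
+     P = u[:, None] * K * v[None, :]        # approximate transport plan
+     return (P * C).sum()                   # approximate transport cost
+ ```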
+ {
327
+ "type": "text",
328
+ "text": "1.3 OUR RESULTS",
329
+ "text_level": 1,
330
+ "bbox": [
331
+ 171,
332
+ 558,
333
+ 316,
334
+ 571
335
+ ],
336
+ "page_idx": 2
337
+ },
338
+ {
339
+ "type": "text",
340
+ "text": "Let $T(n,\\varepsilon)$ be the time taken by an $\\varepsilon$ -additive-approximation algorithm on an input of $n$ points in the unit hypercube. In Theorem 1.1 and 1.2, we present new algorithms that improve the accuracy of any additive-approximation algorithm for the 1-Wasserstein problem and the EBM problem, respectively.",
341
+ "bbox": [
342
+ 169,
343
+ 584,
344
+ 823,
345
+ 641
346
+ ],
347
+ "page_idx": 2
348
+ },
349
+ {
350
+ "type": "text",
351
+ "text": "Theorem 1.1. Given two discrete distributions $\\mu$ and $\\nu$ whose support lie in the $d$ -dimensional unit hypercube and have spread $n^{O(1)}$ , and a parameter $\\varepsilon > 0$ , a transport plan with an expected additive error of $\\min \\{\\varepsilon, (d\\log_{\\sqrt{d}/\\varepsilon} n)\\mathcal{W}(\\mu, \\nu)\\}$ can be computed in $O(T(n, \\varepsilon/d)\\log_{\\sqrt{d}/\\varepsilon} n)$ time; here, $\\mathcal{W}(\\mu, \\nu)$ is the 1-Wasserstein distance between $\\mu$ and $\\nu$ .",
352
+ "bbox": [
353
+ 169,
354
+ 646,
355
+ 826,
356
+ 705
357
+ ],
358
+ "page_idx": 2
359
+ },
360
+ {
361
+ "type": "text",
362
+ "text": "Theorem 1.2. Given two sets of $n$ points $A$ and $B$ in the $d$ -dimensional unit hypercube and a parameter $\\varepsilon > 0$ , a matching whose expected cost is within an additive error of $\\min \\{ \\varepsilon, (d\\log \\log n)w^* \\}$ of the optimal matching cost $w^*$ can be computed in $O(T(n,\\varepsilon /d)\\log \\log n)$ time with high probability.",
363
+ "bbox": [
364
+ 169,
365
+ 709,
366
+ 828,
367
+ 753
368
+ ],
369
+ "page_idx": 2
370
+ },
371
+ {
372
+ "type": "text",
373
+ "text": "Typical additive-approximation algorithms run in $T(n, \\varepsilon) = \\tilde{O}(n^2)$ time and compute an $\\varepsilon$ -close transport plan for any arbitrary cost function; i.e., they make no assumption about the distance between points. The inputs to our algorithm, on the other hand, are point sets in the Euclidean space. Therefore, one can use an approximate dynamic nearest neighbor data structure to improve the execution time of such additive-approximation algorithms. In particular, the algorithm by Lahn et al. (2019) runs in $O(1 / \\varepsilon)$ phases, where each phase executes one iteration of Gabow-Tarjan's algorithm. As shown by Agarwal & Sharathkumar (2014), one can use a $\\frac{1}{\\sqrt{\\varepsilon}}$ -approximate dynamic nearest neighbor data structure with a query/update time of $O(n^{\\varepsilon} + d\\log n)$ (Andoni et al. (2014))",
374
+ "bbox": [
375
+ 169,
376
+ 766,
377
+ 826,
378
+ 883
379
+ ],
380
+ "page_idx": 2
381
+ },
382
+ {
383
+ "type": "header",
384
+ "text": "Published as a conference paper at ICLR 2023",
385
+ "bbox": [
386
+ 171,
387
+ 32,
388
+ 478,
389
+ 47
390
+ ],
391
+ "page_idx": 2
392
+ },
393
+ {
394
+ "type": "page_footnote",
395
+ "text": "2The execution time of Sinkhorn algorithm as well as other additive approximations depend on the diameter $C$ of the point set. In the case of $d$ -dimensional unit hypercube, $C = \\sqrt{d}$ .",
396
+ "bbox": [
397
+ 169,
398
+ 895,
399
+ 823,
400
+ 925
401
+ ],
402
+ "page_idx": 2
403
+ },
404
+ {
405
+ "type": "page_number",
406
+ "text": "3",
407
+ "bbox": [
408
+ 493,
409
+ 948,
410
+ 504,
411
+ 959
412
+ ],
413
+ "page_idx": 2
414
+ },
415
+ {
416
+ "type": "text",
417
+ "text": "to execute each iteration of Gabow-Tarjan's algorithm in $\\tilde{O}(n^{1 + \\varepsilon})$ time. Combining with the algorithms from Theorem 1.1 and 1.2, we obtain the following relative-approximation algorithms. The details are provided in Appendix D $^3$ .",
418
+ "bbox": [
419
+ 169,
420
+ 102,
421
+ 823,
422
+ 147
423
+ ],
424
+ "page_idx": 3
425
+ },
426
+ {
427
+ "type": "text",
428
+ "text": "Theorem 1.3. Let $\\mu$ and $\\nu$ be two discrete distributions whose support lie in the $d$ -dimensional unit hypercube and have polynomial spread, and $\\varepsilon > 0$ be a parameter. An $O(d / \\varepsilon^{3 / 2})$ -approximate transport plan under Euclidean metric can be computed in $O(d^2 n^{1 + \\varepsilon} / \\varepsilon)$ time.",
429
+ "bbox": [
430
+ 169,
431
+ 151,
432
+ 823,
433
+ 195
434
+ ],
435
+ "page_idx": 3
436
+ },
437
+ {
438
+ "type": "text",
439
+ "text": "Theorem 1.4. Given two sets of $n$ points $A$ and $B$ in the $d$ -dimensional unit hypercube and a parameter $\\varepsilon > 0$ , an $O\\left(\\frac{d}{\\sqrt{\\varepsilon}} \\log \\frac{d}{\\varepsilon}\\right)$ -approximate matching can be computed in $O(dn^{1 + \\varepsilon} \\log \\frac{d}{\\varepsilon})$ time with high probability.",
440
+ "bbox": [
441
+ 169,
442
+ 199,
443
+ 823,
444
+ 246
445
+ ],
446
+ "page_idx": 3
447
+ },
448
+ {
449
+ "type": "text",
450
+ "text": "In contrast to our results, Agarwal & Varadarajan (2004) compute an $O(d^{2}\\log \\frac{1}{\\varepsilon})$ -approximate matching in the same time. Therefore, our algorithm computes a more accurate matching for $d > \\frac{1}{\\sqrt{\\varepsilon}}$ . For instance, consider the case where $d = \\sqrt{\\log n}$ . For any arbitrarily small constant $\\varepsilon > 0$ , the algorithm of Theorem 1.4 will run in $\\tilde{O}(n^{1 + \\varepsilon})$ time and return an $O(\\sqrt{\\log n})$ -approximation. In contrast, all previous methods that achieve sub-logarithmic approximation require $\\Omega(n^{5/4})$ time (Agarwal & Sharathkumar (2014)).",
451
+ "bbox": [
452
+ 169,
453
+ 256,
454
+ 825,
455
+ 349
456
+ ],
457
+ "page_idx": 3
458
+ },
459
+ {
460
+ "type": "text",
461
+ "text": "We also note that all of our algorithms extend to computing transport plans under any $\\ell_p$ -metric in a straight forward way. For simplicity, we restrict our presentation only to the Euclidean metric.",
462
+ "bbox": [
463
+ 169,
464
+ 354,
465
+ 823,
466
+ 386
467
+ ],
468
+ "page_idx": 3
469
+ },
470
+ {
471
+ "type": "text",
472
+ "text": "Overview of the algorithm: Our algorithm uses the hierarchical greedy paradigm. In our presentation, we refer to a hypercube as a cell. For any cell $\\square$ , let $V_{\\square} = (A \\cup B) \\cap \\square$ . Unlike a quadtree based greedy algorithm, which splits each cell $\\square$ into $2^{d}$ cells, we split it into $\\min \\{|V_{\\square}|, (4\\sqrt{d}/\\varepsilon)^{d}\\}$ cells. Thus, the height of the resulting tree $T$ reduces from $O(\\log n)$ to $O(\\log_{\\sqrt{d}/\\varepsilon}n)$ . For any cell $\\square$ of $T$ and any child $\\square'$ of $\\square$ , we move any excess supply or demand inside $\\square'$ to its center. Let $\\mathcal{A}_{\\square}$ (resp. $\\mathcal{B}_{\\square}$ ) be a set consisting of the center points of all children of $\\square$ with excess demand (resp. supply). For any child $\\square'$ of $\\square$ with excess demand (resp. supply), we assign a weight of $\\eta(V_{\\square'})$ to its center point in $\\mathcal{A}_{\\square}$ (resp. $\\mathcal{B}_{\\square}$ ). Using an additive-approximation algorithm, we compute an $(\\varepsilon/d)$ -close transport cost between $\\mathcal{A}_{\\square}$ and $\\mathcal{B}_{\\square}$ in $T(|V_{\\square}|, \\varepsilon/d)$ time. We report the sum of the transport costs computed at all cells of $T$ as an approximate 1-Wasserstein distance. This simple algorithm guarantees improvement in the quality of the solutions produced by both additive and relative-approximation algorithms.",
473
+ "bbox": [
474
+ 169,
475
+ 391,
476
+ 825,
477
+ 566
478
+ ],
479
+ "page_idx": 3
480
+ },
481
+ {
482
+ "type": "text",
483
+ "text": "From the perspective of relative-approximation algorithms, Agarwal & Varadarajan (2004) as well as Indyk (2007) have utilized a similar hierarchical framework to design approximation algorithms. However, unlike our algorithm, they used an exact solver at each cell that takes $\\Omega\\left(|\\mathcal{A}_{\\square} \\cup \\mathcal{B}_{\\square}|^{3}\\right)$ time. As a result, to obtain near-quadratic execution time, they divided every cell into $O\\left(|V_{\\square}|^{2/3}\\right)$ children, i.e., $1/\\varepsilon^{d} \\leq n^{2/3}$ or $d = O(\\log_{1/\\varepsilon}n)$ . This also forces the height of the tree to be $O(d\\log_{1/\\varepsilon}n)$ , leading to an $O(d^{2}\\log_{1/\\varepsilon}n)$ -factor approximation. We replace the $\\Omega(n^{3})$ time exact solver in their algorithm with a $T(|V_{\\square}|, \\varepsilon) = \\tilde{O}(|V_{\\square}|^{2})$ time additive-approximation algorithm. Therefore, each instance (regardless of the number of non-empty children) can be solved in $\\tilde{O}(|V_{\\square}|^{2})$ time. As a result, we are able to improve the approximation factor from $O(d^{2}\\log_{1/\\varepsilon}n)$ to $O(d\\log_{\\sqrt{d}/\\varepsilon}n)$ and also remove restrictions on the dimension. Our algorithm now works for any dimension!",
484
+ "bbox": [
485
+ 169,
486
+ 570,
487
+ 825,
488
+ 743
489
+ ],
490
+ "page_idx": 3
491
+ },
492
+ {
493
+ "type": "text",
494
+ "text": "Technical Challenge: Using an additive-approximation algorithm at each cell increases the error that may be difficult to bound. We use the following observation to overcome this challenge. In Section 2.2, we show that for any point set with spread $\\Delta$ , an additive-approximation algorithm can be used to compute a 2-relative approximation in $T(n,1 / \\Delta)$ time. The algorithm guarantees that the spread of the point set at each cell $\\square$ is $O(d / \\varepsilon)$ , and as a result, we get a 2-relative approximation by using an additive-approximation algorithm in $O(T(n,\\varepsilon /d))$ time.",
495
+ "bbox": [
496
+ 169,
497
+ 750,
498
+ 823,
499
+ 835
500
+ ],
501
+ "page_idx": 3
502
+ },
503
+ {
504
+ "type": "text",
505
+ "text": "Improvements for EBM: For the EBM problem, we obtain an improvement in the approximation ratio, as stated in Theorem 1.2, as follows: Instead of dividing each cell into a fixed number",
506
+ "bbox": [
507
+ 169,
508
+ 840,
509
+ 823,
510
+ 869
511
+ ],
512
+ "page_idx": 3
513
+ },
514
+ {
515
+ "type": "header",
516
+ "text": "Published as a conference paper at ICLR 2023",
517
+ "bbox": [
518
+ 171,
519
+ 32,
520
+ 478,
521
+ 47
522
+ ],
523
+ "page_idx": 3
524
+ },
525
+ {
526
+ "type": "page_footnote",
527
+ "text": "3The appendix is provided in the supplemental material.",
528
+ "bbox": [
529
+ 192,
530
+ 883,
531
+ 524,
532
+ 897
533
+ ],
534
+ "page_idx": 3
535
+ },
536
+ {
537
+ "type": "page_footnote",
538
+ "text": "4Owing to a higher complexity of LSH in arbitrary $\\ell_p$ -metrics, the approximation factor in Theorem 1.3 and 1.4 is slightly higher for arbitrary $\\ell_p$ -metrics.",
539
+ "bbox": [
540
+ 169,
541
+ 896,
542
+ 823,
543
+ 925
544
+ ],
545
+ "page_idx": 3
546
+ },
547
+ {
548
+ "type": "page_number",
549
+ "text": "4",
550
+ "bbox": [
551
+ 493,
552
+ 948,
553
+ 504,
554
+ 959
555
+ ],
556
+ "page_idx": 3
557
+ },
558
+ {
559
+ "type": "text",
560
+ "text": "$\\min \\{n,(4\\sqrt{d} /\\varepsilon)^d\\}$ of children, we divide it into $\\min \\{n,n^{d / 2^i}\\}$ children at each level $i$ ; here, level of any cell in the tree is equal to the length of the path from the root to this cell. By doing so, we reduce the height of the tree to $O(\\log \\log n)$ . To analyze the running time, we show that the number of remaining unmatched points over all level $i$ cells is $\\tilde{O} (n^{1 - \\frac{1}{2^i}})$ . Since there are only sub-linearly many points remaining, we can afford a larger spread of $O(\\frac{d}{\\varepsilon} n^{\\frac{1}{2^i}})$ and the resulting execution time of the additive approximation will continue to be $O(T(n,\\frac{\\varepsilon}{d}))$ per level.",
561
+ "bbox": [
562
+ 169,
563
+ 102,
564
+ 823,
565
+ 196
566
+ ],
567
+ "page_idx": 4
568
+ },
569
+ {
570
+ "type": "text",
571
+ "text": "Open Question: Although our algorithms work in any dimension, we would like to note that the relative approximation factor grows linearly in the dimension. Recently, Chen et al. (2022) removed the dependence on the dimension $d$ from the approximation factor of quad-tree greedy algorithm by using a data dependent weight assignment. It is an open question if their approach can be adapted in our framework leading to a similar improvement in the approximation factor of our algorithms.",
572
+ "bbox": [
573
+ 169,
574
+ 200,
575
+ 826,
576
+ 273
577
+ ],
578
+ "page_idx": 4
579
+ },
580
+ {
581
+ "type": "text",
582
+ "text": "2 PRELIMINARIES",
583
+ "text_level": 1,
584
+ "bbox": [
585
+ 171,
586
+ 292,
587
+ 341,
588
+ 309
589
+ ],
590
+ "page_idx": 4
591
+ },
592
+ {
593
+ "type": "text",
594
+ "text": "We begin by introducing a few notations. For a cell $\\square$ , we denote its side-length by $\\ell_{\\square}$ and its center by $c_{\\square}$ . For a parameter $\\ell$ , let $\\mathbb{G}(\\square, \\ell)$ denote a grid that partitions $\\square$ into smaller cells with side-length $\\ell$ . Recall that $V_{\\square}$ denotes the subset of $A \\cup B$ that lies inside $\\square$ . We say that $\\square$ is non-empty if $V_{\\square}$ is non-empty. Recall that $\\eta(V_{\\square}) = \\sum_{v \\in V_{\\square}} \\eta(v)$ . We define the weight of $\\square$ to be $\\eta(V_{\\square})$ and denote it by $\\eta(\\square)$ . We call a cell $\\square$ a deficit cell if $\\eta(\\square) < 0$ , a surplus cell if $\\eta(\\square) > 0$ , and a neutral cell if $\\eta(\\square) = 0$ .",
595
+ "bbox": [
596
+ 169,
597
+ 325,
598
+ 823,
599
+ 412
600
+ ],
601
+ "page_idx": 4
602
+ },
603
+ {
604
+ "type": "text",
605
+ "text": "In this section, we provide a simple transformation of the input for achieving an additive approximation of the 1-Wasserstein distance. Furthermore, we show how an additive-approximation algorithm for the 1-Wasserstein problem can be used to obtain a $(1 + \\varepsilon)$ -relative-approximation algorithm that runs in $T(n,\\varepsilon /\\Delta)$ time, where $\\Delta$ is the spread of input points.",
606
+ "bbox": [
607
+ 169,
608
+ 419,
609
+ 825,
610
+ 476
611
+ ],
612
+ "page_idx": 4
613
+ },
614
+ {
615
+ "type": "text",
616
+ "text": "2.1 ADDITIVE APPROXIMATION IN EUCLIDEAN SPACE",
617
+ "text_level": 1,
618
+ "bbox": [
619
+ 171,
620
+ 493,
621
+ 566,
622
+ 508
623
+ ],
624
+ "page_idx": 4
625
+ },
626
+ {
627
+ "type": "text",
628
+ "text": "For any $\\varepsilon > 0$ , given an additive-approximation algorithm that runs in $T(n, \\varepsilon)$ time, we present an algorithm to compute an $\\varepsilon$ -close transport cost for distributions inside the $d$ -dimensional unit hypercube $\\square^{*}$ in $O(\\min\\{T(n, \\frac{\\varepsilon}{2}), n + T((\\frac{2\\sqrt{d}}{\\varepsilon})^d, \\frac{\\varepsilon}{2})\\})$ time.",
629
+ "bbox": [
630
+ 169,
631
+ 520,
632
+ 823,
633
+ 568
634
+ ],
635
+ "page_idx": 4
636
+ },
637
+ {
638
+ "type": "text",
639
+ "text": "The algorithm works in two steps. In the first step, it constructs a grid $\\mathbb{G} := \\mathbb{G}(\\square^*, \\frac{\\varepsilon}{2\\sqrt{d}})$ on the unit hypercube $\\square^*$ and computes a transport plan $\\sigma_1$ , as follows. For each non-empty neutral cell of $\\mathbb{G}$ , $\\sigma_1$ arbitrarily transports all supplies to demands within the cell. Similarly, for any deficit (resp. surplus) cell, $\\sigma_1$ arbitrarily transports supplies from (resp. to) all supply (resp. demand) points inside the cell to (resp. from) some arbitrary demand (resp. supply) points within the cell.",
640
+ "bbox": [
641
+ 169,
642
+ 571,
643
+ 825,
644
+ 647
645
+ ],
646
+ "page_idx": 4
647
+ },
648
+ {
649
+ "type": "text",
650
+ "text": "In the second step, the algorithm constructs a set of demand points $\\mathcal{A}$ and a set of supply points $\\mathcal{B}$ as follows: For any deficit cell $\\square$ (resp. surplus cell $\\square'$ ), the point $c_{\\square}$ (resp. $c_{\\square'}$ ) is added to $\\mathcal{A}$ (resp. $\\mathcal{B}$ ) with a weight of $\\eta(\\square)$ (resp. $\\eta(\\square')$ ). Note that $\\mathcal{A} \\cup \\mathcal{B}$ is a balanced instance for the 1-Wasserstein problem. The algorithm computes an $\\frac{\\varepsilon}{2}$ -close transport plan $\\sigma_2$ on the instance $\\mathcal{A} \\cup \\mathcal{B}$ in $T(|\\mathcal{A}| + |\\mathcal{B}|, \\varepsilon/2)$ time (See Figure 1). The algorithm returns $w(\\sigma_1) + w(\\sigma_2)$ as an $\\varepsilon$ -close transport cost on $A \\cup B$ .",
651
+ "bbox": [
652
+ 169,
653
+ 652,
654
+ 825,
655
+ 736
656
+ ],
657
+ "page_idx": 4
658
+ },
659
+ {
660
+ "type": "text",
661
+ "text": "We provide a discussion on the accuracy of the algorithm in Appendix A. The following lemma follows from the fact that $|\\mathcal{A}| + |\\mathcal{B}|$ is bounded by $\\min \\{2n, (2\\sqrt{d} / \\varepsilon)^d\\}$ .",
662
+ "bbox": [
663
+ 169,
664
+ 743,
665
+ 823,
666
+ 773
667
+ ],
668
+ "page_idx": 4
669
+ },
670
+ {
671
+ "type": "text",
672
+ "text": "Lemma 2.1. Given two point sets $A$ and $B$ in the $d$ -dimensional unit hypercube and a parameter $\\varepsilon > 0$ , an $\\varepsilon$ -close transport cost can be computed in $O\\left(\\min \\{T(n, \\frac{\\varepsilon}{2}), n + T\\left(\\left(\\frac{2\\sqrt{d}}{\\varepsilon}\\right)^d, \\frac{\\varepsilon}{2}\\right)\\}\\right)$ time.",
673
+ "bbox": [
674
+ 169,
675
+ 777,
676
+ 823,
677
+ 811
678
+ ],
679
+ "page_idx": 4
680
+ },
681
+ {
682
+ "type": "text",
683
+ "text": "Instead of transporting supplies inside cells arbitrarily, our algorithm in Section 3 recursively applies the same algorithm in each cell and obtains a higher accuracy.",
684
+ "bbox": [
685
+ 169,
686
+ 821,
687
+ 823,
688
+ 851
689
+ ],
690
+ "page_idx": 4
691
+ },
692
+ {
693
+ "type": "text",
694
+ "text": "2.2 RELATIVE APPROXIMATION FOR LOW SPREAD POINT SETS",
695
+ "text_level": 1,
696
+ "bbox": [
697
+ 171,
698
+ 869,
699
+ 627,
700
+ 883
701
+ ],
702
+ "page_idx": 4
703
+ },
704
+ {
705
+ "type": "text",
706
+ "text": "In this section, we show that an $\\varepsilon$ -additive-approximation algorithm can be used to obtain a $(1 + \\varepsilon)$ -relative-approximation algorithm for the 1-Wasserstein problem that runs in $T(n,\\varepsilon / \\Delta)$ time; here,",
707
+ "bbox": [
708
+ 169,
709
+ 895,
710
+ 823,
711
+ 925
712
+ ],
713
+ "page_idx": 4
714
+ },
715
+ {
716
+ "type": "header",
717
+ "text": "Published as a conference paper at ICLR 2023",
718
+ "bbox": [
719
+ 171,
720
+ 32,
721
+ 478,
722
+ 47
723
+ ],
724
+ "page_idx": 4
725
+ },
726
+ {
727
+ "type": "page_number",
728
+ "text": "5",
729
+ "bbox": [
730
+ 493,
731
+ 948,
732
+ 504,
733
+ 959
734
+ ],
735
+ "page_idx": 4
736
+ },
737
+ {
738
+ "type": "image",
739
+ "img_path": "images/a5ed9494c0eae5743ab2b78bb5c4dcff42e32c7a13c31504791b084c043bf57c.jpg",
740
+ "image_caption": [
741
+ "(a)"
742
+ ],
743
+ "image_footnote": [],
744
+ "bbox": [
745
+ 336,
746
+ 101,
747
+ 490,
748
+ 218
749
+ ],
750
+ "page_idx": 5
751
+ },
752
+ {
753
+ "type": "image",
754
+ "img_path": "images/9707b983a4c8141e12e09c523f30d795587378f4b903dfd1a1cac9c8b35c22b6.jpg",
755
+ "image_caption": [
756
+ "(b)",
757
+ "Figure 1: (a) The algorithm transports supplies (red disks) to demands (blue circles) within each cell and creates an instance by moving any excess supplies or demands to the center of the corresponding cells, (b) An $\\varepsilon /2$ -close transport plan is computed on the new problem instance."
758
+ ],
759
+ "image_footnote": [],
760
+ "bbox": [
761
+ 508,
762
+ 101,
763
+ 660,
764
+ 219
765
+ ],
766
+ "page_idx": 5
767
+ },
768
+ {
769
+ "type": "text",
770
+ "text": "$\\Delta$ is the spread of the points in $A \\cup B$ . Since relative approximations are scale invariant, without loss of generality, we assume that the input has $C_{\\max} = 1$ and $C_{\\min} = 1 / \\Delta$ .",
771
+ "bbox": [
772
+ 169,
773
+ 316,
774
+ 823,
775
+ 348
776
+ ],
777
+ "page_idx": 5
778
+ },
779
+ {
780
+ "type": "text",
781
+ "text": "Suppose $-\\eta(A) = \\eta(B) = U$ . The minimum distance between any two points $(a, b) \\in A \\times B$ is $1 / \\Delta$ . Therefore, the cost of transporting a supply of $U$ is at least $U / \\Delta$ . We obtain a $(1 + \\varepsilon)$ -relative approximate transport plan by simply executing an additive-approximation algorithm with an additive error of $U(\\varepsilon / \\Delta)$ in time $T(n, \\varepsilon / \\Delta)$ .",
782
+ "bbox": [
783
+ 169,
784
+ 352,
785
+ 823,
786
+ 411
787
+ ],
788
+ "page_idx": 5
789
+ },
790
+ {
791
+ "type": "text",
792
+ "text": "Lemma 2.2. Given two point sets $A$ and $B$ in the $d$ -dimensional unit hypercube with a spread of $\\Delta$ , a $(1 + \\varepsilon)$ -approximate transport plan can be computed in $T(n, \\varepsilon / \\Delta)$ time.",
793
+ "bbox": [
794
+ 169,
795
+ 414,
796
+ 823,
797
+ 444
798
+ ],
799
+ "page_idx": 5
800
+ },
801
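+ Lemma 2.2 in code form (our sketch; `additive_ot` is a hypothetical black box returning a plan of cost at most $\mathrm{OPT} + e \cdot U$):
+ ```python
+ def relative_from_additive(additive_ot, A, mu, B, nu, eps, delta):
+     # With C_max = 1 and spread delta, OPT >= U / delta, so an additive
+     # error of (eps / delta) * U is at most eps * OPT; the returned plan
+     # is therefore a (1 + eps)-relative approximation.
+     return additive_ot(A, mu, B, nu, eps / delta)
+ ```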
+ {
802
+ "type": "text",
803
+ "text": "3 AN $O(d\\log_{\\sqrt{d} / \\varepsilon}n)$ -APPROXIMATION ALGORITHM FOR 1-WASSERSTEIN PROBLEM",
804
+ "text_level": 1,
805
+ "bbox": [
806
+ 171,
807
+ 463,
808
+ 789,
809
+ 497
810
+ ],
811
+ "page_idx": 5
812
+ },
813
+ {
814
+ "type": "text",
815
+ "text": "In this section, we present our algorithm that satisfies the bounds presented in Theorem 1.1. We begin by defining a hierarchical partitioning and a tree $T$ associated with the hierarchical partitioning. Each node in $T$ corresponds to a non-empty cell in our hierarchical partition, and we do not distinguish between the two. We partition each cell of side-length $\\ell$ into $\\lceil 4\\sqrt{d} / \\varepsilon \\rceil^d$ cells of side-length at most $(\\varepsilon / 4\\sqrt{d})\\ell$ . We construct our hierarchical partition in a randomly-shifted fashion as follows.",
816
+ "bbox": [
817
+ 169,
818
+ 513,
819
+ 823,
820
+ 589
821
+ ],
822
+ "page_idx": 5
823
+ },
824
+ {
825
+ "type": "text",
826
+ "text": "Hierarchical Partitioning: First, we pick a point $\\xi$ uniformly at random from the unit hypercube $[0,1]^d$ and set $\\square^* = [-1,1]^d + \\xi$ . Note that $\\square^*$ is a hypercube of side-length 2 containing all points in $A \\cup B$ . We designate $\\square^*$ as the root of $T$ . Let $\\kappa = \\lceil 4\\sqrt{d} / \\varepsilon \\rceil$ . For any cell $\\square$ , if only one point of $A \\cup B$ lies inside $\\square$ , we designate $\\square$ as a leaf cell in $T$ . For any non-leaf cell $\\square$ , we construct its children by partitioning $\\square$ , using a grid $\\mathbb{G}_{\\square} = \\mathbb{G}(\\square, \\ell_{\\square} / \\kappa)$ , into $\\lceil 4\\sqrt{d} / \\varepsilon \\rceil^d$ cells and create a child node for each non-empty cell of this grid. We denote the set of children of a non-leaf cell $\\square$ by $\\mathsf{C}[\\square]$ . Assuming the spread of $A \\cup B$ is $n^{O(1)}$ , the height of $T$ , denoted by $h$ , is $O(\\log_{\\sqrt{d} / \\varepsilon} n)$ .",
827
+ "bbox": [
828
+ 169,
829
+ 594,
830
+ 823,
831
+ 702
832
+ ],
833
+ "page_idx": 5
834
+ },
835
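+ A sketch of this construction (ours, not the authors' code; it assumes co-located points were already merged as in Section 1.1, so the loop terminates):
+ ```python
+ import math
+ import numpy as np
+ from collections import defaultdict
+
+ def build_hierarchy(points, eps, rng=np.random.default_rng()):
+     n, d = points.shape                  # points lie in [0, 1]^d
+     kappa = math.ceil(4 * math.sqrt(d) / eps)
+     origin = rng.random(d) - 1.0         # corner of square* = [-1, 1]^d + xi
+     levels, side = [], 2.0
+     while True:
+         cells = defaultdict(list)        # non-empty cells only
+         for idx, p in enumerate(points):
+             cells[tuple(((p - origin) // side).astype(int))].append(idx)
+         levels.append((side, cells))
+         if all(len(v) == 1 for v in cells.values()):
+             break                        # every non-empty cell is a leaf
+         side /= kappa                    # refine: kappa^d children per cell
+     return levels                        # O(log_{sqrt(d)/eps} n) levels
+ ```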
+ {
836
+ "type": "text",
837
+ "text": "Similar to quadtrees, our hierarchical partitioning can be seen as a sequence of grids $\\langle \\mathbb{G}_0,\\mathbb{G}_1,\\ldots ,\\mathbb{G}_h\\rangle$ , where $\\mathbb{G}_0$ is the root and each grid $\\mathbb{G}_i$ refines the cells of grid $\\mathbb{G}_{i - 1}$ . For each grid $\\mathbb{G}_i$ , we denote the cell-side-length of $\\mathbb{G}_i$ by $\\ell_{i}$ . Next, we describe our algorithm.",
838
+ "bbox": [
839
+ 169,
840
+ 707,
841
+ 823,
842
+ 750
843
+ ],
844
+ "page_idx": 5
845
+ },
846
+ {
847
+ "type": "text",
848
+ "text": "Computing an approximate 1-Wasserstein distance: In this section, we present our algorithm for computing an approximate cost of an optimal transport plan on $A \\cup B$ .",
849
+ "bbox": [
850
+ 169,
851
+ 756,
852
+ 823,
853
+ 786
854
+ ],
855
+ "page_idx": 5
856
+ },
857
+ {
858
+ "type": "list",
859
+ "sub_type": "text",
860
+ "list_items": [
861
+ "1 - Creating an instance at each cell: For each cell $\\square$ of $T$ , we create a balanced instance $\\mathcal{I}_{\\square}$ of the 1-Wasserstein problem as follows. If $\\square$ is a leaf cell, then it contains a single point $u$ of $A \\cup B$ , which we add to $\\mathcal{I}_{\\square}$ . In addition, we add the center point $c_{\\square}$ with a weight $-\\eta(\\square)$ to $\\mathcal{I}_{\\square}$ . Otherwise, $\\square$ is a non-leaf cell. For any child $\\square' \\in \\mathsf{C}[\\square]$ , if $\\square'$ is a deficit or surplus cell, we add $c_{\\square'}$ with a weight $\\eta(\\square')$ to $\\mathcal{I}_{\\square}$ . The weight $\\eta(\\square')$ represents the excess demands or supplies of the points in $V_{\\square'}$ . Furthermore, we add the center point $c_{\\square}$ with a weight $-\\eta(\\square)$ to $\\mathcal{I}_{\\square}$ . The instance $\\mathcal{I}_{\\square}$ is balanced and has a spread of $O(d / \\varepsilon)$ .",
862
+ "2 - Estimating the 1-Wasserstein distance: In this step, for the root cell $\\square^{*}$ , we compute an $\\varepsilon / d$ -close transport plan $\\sigma_{\\square^{*}}$ on $\\mathcal{I}_{\\square^{*}}$ . Furthermore, for each cell $\\square$ at any level $i > 0$ , using the"
863
+ ],
864
+ "bbox": [
865
+ 169,
866
+ 790,
867
+ 823,
868
+ 925
869
+ ],
870
+ "page_idx": 5
871
+ },
872
+ {
873
+ "type": "header",
874
+ "text": "Published as a conference paper at ICLR 2023",
875
+ "bbox": [
876
+ 171,
877
+ 32,
878
+ 478,
879
+ 47
880
+ ],
881
+ "page_idx": 5
882
+ },
883
+ {
884
+ "type": "page_number",
885
+ "text": "6",
886
+ "bbox": [
887
+ 493,
888
+ 948,
889
+ 504,
890
+ 959
891
+ ],
892
+ "page_idx": 5
893
+ },
894
+ {
895
+ "type": "text",
896
+ "text": "algorithm from Lemma 2.2, we compute a 2-approximate transport plan $\\sigma_{\\square}$ on $\\mathcal{I}_{\\square}$ . Our algorithm then reports the total cost of the transport plans computed at all cells of $T$ , i.e., $w := \\sum_{\\square \\in T} w(\\sigma_{\\square})$ as an approximate 1-Wasserstein distance.",
897
+ "bbox": [
898
+ 169,
899
+ 103,
900
+ 823,
901
+ 147
902
+ ],
903
+ "page_idx": 6
904
+ },
905
+ {
906
+ "type": "text",
907
+ "text": "Retrieving an approximate transport plan: We retrieve an approximate transport plan $\\sigma$ on the point set $A\\cup B$ by processing grids $\\langle \\mathbb{G}_h,\\dots ,\\mathbb{G}_0\\rangle$ in decreasing order of their level. First, for each non-empty cell $\\square$ of $\\mathbb{G}_h$ $\\square$ is a leaf cell and $V_{\\square}$ contains only one point. We map the only point in $V_{\\square}$ to $c_{\\square}$ . For some $i < h$ , assume (inductively) that after processing the non-empty cells of the grid $\\mathbb{G}_{i + 1}$ , the following conditions (i)-(iii) hold for the current transport plan $\\sigma$ within any cell $\\square$ in $\\mathbb{G}_{i + 1}$ : (i) if $\\square$ is a neutral cell, then $\\sigma$ transports every supply to some demand point inside $\\square$ , (ii) if $\\square$ is a deficit (resp. surplus) cell, then $\\sigma$ transports all supplies (resp. demands) inside $\\square$ to (resp. from) some demand (resp. supply) point within $\\square$ , and, (iii) if $\\square$ is a deficit (resp. surplus) cell, the excess demand (resp. supply) is mapped to $c_{\\square}$ . Given this, we show how to process any non-empty cell $\\square$ of $\\mathbb{G}_i$ so that (i)-(iii) holds for $\\square$ .",
908
+ "bbox": [
909
+ 169,
910
+ 152,
911
+ 826,
912
+ 292
913
+ ],
914
+ "page_idx": 6
915
+ },
916
+ {
917
+ "type": "text",
918
+ "text": "Recollect that $\\sigma_{\\square}$ is a transport plan computed by our algorithm on $\\mathcal{I}_{\\square}$ . By condition (iii), the excess supplies or demands at any child $\\square'$ of $\\square$ is mapped to $c_{\\square'}$ . Therefore, for any pair of children $\\square_1, \\square_2 \\in \\mathsf{C}[\\square]$ , where $\\square_1$ is a surplus cell and $\\square_2$ is a deficit cell, the transport plan $\\sigma$ transports $\\sigma_{\\square}(c_{\\square_1}, c_{\\square_2})$ supplies from $c_{\\square_1}$ to $c_{\\square_2}$ . In addition, for any child $\\square_1$ (resp. $\\square_2$ ) of $\\square$ , suppose $\\square_1$ (resp. $\\square_2$ ) is a surplus (resp. deficit) cell. If $\\sigma_{\\square}(c_{\\square_1}, c_{\\square}) > 0$ (resp. $\\sigma_{\\square}(c_{\\square_2}, c_{\\square}) > 0$ ), then we map the supplies (resp. demands) from $c_{\\square_1}$ (resp. $c_{\\square_2}$ ) to $c_{\\square}$ . It is easy to confirm that after processing $\\square$ , (i)-(iii) holds for $\\square$ . From triangle inequality, $w(\\sigma)$ is upper-bounded by the total cost of the transport plans computed for each cell of $T$ ; i.e, $w(\\sigma) \\leq \\sum_{\\square \\in T} w(\\sigma_{\\square})$ .",
919
+ "bbox": [
920
+ 169,
921
+ 297,
922
+ 823,
923
+ 412
924
+ ],
925
+ "page_idx": 6
926
+ },
927
+ {
928
+ "type": "text",
929
+ "text": "Efficiency: For any $i$ , let $\\mathsf{C}_i$ denote the set of non-empty cells of $T$ at level $i$ . For each cell $\\square \\in \\mathsf{C}_i$ , let $n_{\\square}$ be the number of points in $\\mathcal{I}_{\\square}$ . Since the spread of the points in $\\mathcal{I}_{\\square}$ is $O(d / \\varepsilon)$ , executing the algorithm from Lemma 2.2 on $\\mathcal{I}_{\\square}$ takes $O(T(n_{\\square},\\varepsilon /d))$ time (the same bound holds for the root cell as well). Since $\\mathcal{I}_{\\square}$ contains at most one point for each non-empty child of $\\square$ , $n_{\\square} \\leq \\min \\{|V_{\\square}|, (4\\sqrt{d} /\\varepsilon)^{d}\\}$ . Therefore, $\\sum_{\\square \\in \\mathsf{C}_i} n_{\\square} \\leq \\sum_{\\square \\in \\mathsf{C}_i} |V_{\\square}| = n$ . Since $T(n,\\varepsilon) = \\Omega (n)$ , the running time of our algorithm on cells at level $i$ is $O(\\sum_{\\square \\in \\mathsf{C}_i} T(n_{\\square},\\varepsilon /d)) = O(T(n,\\varepsilon /d))$ . Summing over all levels, the running time of our algorithm is $O(T(n,\\varepsilon /d)\\log_{\\sqrt{d} /\\varepsilon}n)$ .",
930
+ "bbox": [
931
+ 169,
932
+ 416,
933
+ 823,
934
+ 523
935
+ ],
936
+ "page_idx": 6
937
+ },
938
+ {
939
+ "type": "text",
940
+ "text": "When the dimension is a small constant, we get an improved running time as follows. For each level $i$ of $T$ , there are at most $n$ non-empty cells at level $i$ and the instance created at each cell has a size of at most $(4\\sqrt{d} /\\varepsilon)^{d}$ . Therefore, $O(\\sum_{\\square \\in \\mathbb{C}_i}T(n_\\square ,\\varepsilon /d)) = O(nT((4\\sqrt{d} /\\varepsilon)^d,\\varepsilon /d)) = n(\\frac{d}{\\varepsilon})^{O(d)}$ and the overall running time will be improved to $n(\\frac{d}{\\varepsilon})^{O(d)}\\log_{\\sqrt{d} /\\varepsilon}n$ .",
941
+ "bbox": [
942
+ 169,
943
+ 529,
944
+ 823,
945
+ 595
946
+ ],
947
+ "page_idx": 6
948
+ },
949
+ {
950
+ "type": "text",
951
+ "text": "Quality of Approximation: In this part, we analyze the approximate 1-Wasserstein distance computed by our algorithm. First, we show that the reported cost is $\\varepsilon$ -close. For the root cell $\\square^{*}$ , our algorithm computes an $\\varepsilon / d$ -close transport plan $\\sigma_{\\square^{*}}$ . The remaining demands and supplies are recursively transported within the children of $\\square^{*}$ , where each child has diameter $\\varepsilon / 2$ . Therefore, similar to Section 2.1, we can argue that our algorithm reports an $\\varepsilon$ -close 1-Wasserstein distance. Next, we show that the reported cost is an $O(d \\log_{\\sqrt{d} / \\varepsilon} n)$ -approximation of the 1-Wasserstein distance.",
952
+ "bbox": [
953
+ 169,
954
+ 599,
955
+ 823,
956
+ 696
957
+ ],
958
+ "page_idx": 6
959
+ },
960
+ {
961
+ "type": "text",
962
+ "text": "For each level $i < h$ , we show that the expected cost of the transport plans computed for all cells of level $i$ is $\\mathbb{E}\\left[\\sum_{\\square \\in \\mathbb{C}_i} w(\\sigma_\\square)\\right] = O(d)w(\\sigma^*)$ . Here, $\\sigma^*$ is an optimal transport plan on $A \\cup B$ , $\\mathbb{C}_i$ denotes the set of non-empty cells of $T$ at level $i$ , and the expectation is over the choice of the random shift of the hierarchical partitioning. We bound $\\mathbb{E}\\left[\\sum_{\\square \\in \\mathbb{C}_i} w(\\sigma_\\square)\\right]$ in two steps as follows. In the first step, we assign a budget to every edge $(a,b)$ with $\\sigma^*(a,b) > 0$ and show that the total budget assigned to all such edges is (in expectation) $O(d)w(\\sigma^*)$ . In the second step, we redistribute this budget to the cells of level $i$ in a way that the budget received by any cell $\\square$ is at least $w(\\sigma_\\square)/2$ . Summing over all $O(\\log_{\\sqrt{d}/\\varepsilon} n)$ levels of $T$ , the expected value of the total cost computed at all levels $i < h$ is $O(d\\log_{\\sqrt{d}/\\varepsilon} n)w(\\sigma^*)$ .",
963
+ "bbox": [
964
+ 169,
965
+ 704,
966
+ 826,
967
+ 839
968
+ ],
969
+ "page_idx": 6
970
+ },
971
+ {
972
+ "type": "text",
973
+ "text": "Additionally, we show that the expected cost of mapping all points to the centers of the cells of level $h$ is $O(d\\log_{\\sqrt{d} /\\varepsilon}n)w(\\sigma^{*})$ . Details of each step is provided in Appendix B. Theorem 1.1 follows from combining the two bounds.",
974
+ "bbox": [
975
+ 169,
976
+ 843,
977
+ 823,
978
+ 888
979
+ ],
980
+ "page_idx": 6
981
+ },
982
+ {
983
+ "type": "header",
984
+ "text": "Published as a conference paper at ICLR 2023",
985
+ "bbox": [
986
+ 171,
987
+ 32,
988
+ 478,
989
+ 47
990
+ ],
991
+ "page_idx": 6
992
+ },
993
+ {
994
+ "type": "page_number",
995
+ "text": "7",
996
+ "bbox": [
997
+ 493,
998
+ 948,
999
+ 504,
1000
+ 959
1001
+ ],
1002
+ "page_idx": 6
1003
+ },
1004
+ {
1005
+ "type": "text",
1006
+ "text": "4 AN $O(d\\log \\log n)$ -APPROXIMATION ALGORITHM FOR EBM PROBLEM",
1007
+ "text_level": 1,
1008
+ "bbox": [
1009
+ 169,
1010
+ 102,
1011
+ 777,
1012
+ 119
1013
+ ],
1014
+ "page_idx": 7
1015
+ },
1016
+ {
1017
+ "type": "text",
1018
+ "text": "Suppose $A$ and $B$ are two sets of $n$ points inside the $d$ -dimensional unit hypercube, where each point $a \\in A$ (resp. $b \\in B$ ) has a weight $\\eta(a) = -1/n$ (resp. $\\eta(b) = 1/n$ ). In this section, we present an approximation algorithm for the EBM problem satisfying the bounds claimed in Theorem 1.2. Note that by invoking Lemma 2.1 on $A \\cup B$ , one can compute an $\\varepsilon$ -close transport cost on $A \\cup B$ . To boost the accuracy of the algorithm of Lemma 2.1, in this section, we present an $O(d\\log\\log n)$ -approximation algorithm for the EBM problem. To satisfy the bounds claimed in Theorem 1.2, one can then report the minimum of the costs computed by the two algorithms.",
1019
+ "bbox": [
1020
+ 169,
1021
+ 142,
1022
+ 826,
1023
+ 241
1024
+ ],
1025
+ "page_idx": 7
1026
+ },
1027
+ {
1028
+ "type": "text",
1029
+ "text": "Input transformation: We transform input points such that (1) all coordinates are positive integers bounded by $n^{O(1)}$ , (2) an optimal matching on the transformed points is a $(1 + \\varepsilon)$ -approximate matching with respect to the original points, and (3) the cost of the optimal matching is $O(d^{3/2}n\\log n / \\varepsilon)$ . Similar transformations have been applied in several papers in the literature (Agarwal et al. (2017); Lahn & Raghevendra (2021)). We describe this transformation in Appendix C.1. As before, we can match and remove any co-located points $a \\in A$ and $b \\in B$ .",
1030
+ "bbox": [
1031
+ 169,
1032
+ 247,
1033
+ 823,
1034
+ 335
1035
+ ],
1036
+ "page_idx": 7
1037
+ },
1038
+ {
1039
+ "type": "text",
1040
+ "text": "Next, assuming $T(n,\\varepsilon) = O\\left(\\frac{n^k}{\\varepsilon}\\mathrm{poly}(d,\\log n,\\log \\frac{1}{\\varepsilon})\\right)$ for some $k \\geq 1$ , we describe our EBM algorithm. Our algorithm is easily adaptable to use any additive-approximation algorithm with a running time of $T(n,\\varepsilon) = O(n^{k}\\varepsilon^{-t}\\mathrm{poly}(d,\\log n,\\log \\frac{1}{\\varepsilon}))$ , where $k \\geq 1$ and $t$ is a fixed constant.",
1041
+ "bbox": [
1042
+ 169,
1043
+ 340,
1044
+ 823,
1045
+ 388
1046
+ ],
1047
+ "page_idx": 7
1048
+ },
1049
+ {
1050
+ "type": "text",
1051
+ "text": "Overview of the algorithm: Similar to Section 3, our EBM algorithm constructs a hierarchical partitioning and the associated tree $T$ , and executes the algorithm from Lemma 2.2 on the instance created for each cell of $T$ . In contrast to Section 3, the tree $T$ constructed by our EBM algorithm has height $O(\\log \\log n)$ , resulting in an improved approximation factor. The hierarchical partitioning of this section differs from the one in Section 3 in two ways. First, we partition the root cell into a grid $\\mathbb{G}_1$ with cell-side-length of $\\Theta(d^{5/2}n\\log n/\\varepsilon^2)$ at the first level. The grid $\\mathbb{G}_1$ may result in a high branching factor at the root; however, we show that, with probability at least $1 - \\varepsilon/\\sqrt{d}$ , no edges of an optimal matching will cross $\\mathbb{G}_1$ . Therefore, with that probability, all cells of $\\mathbb{G}_1$ are neutral cells and the problem instance for the root is an empty instance; i.e., the branching factor of the root will not impact the running time of our algorithm. Second, for any cell $\\square$ of level $i$ , instead of splitting $\\square$ into a fixed number $\\min\\{n, (4\\sqrt{d}/\\varepsilon)^d\\}$ of children, we divide $\\square$ into $\\min\\{n, n^{d/2^i}\\}$ cells. Although this results in a spread of $\\tilde{O}(n^{1/2^i})$ for the problem instance $\\mathcal{I}_\\square$ , we show that the expected number of remaining unmatched points over all cells of level $i$ is $\\tilde{O}(n^{1-1/2^i})$ . Therefore, the total execution time of our algorithm remains $T(n, \\varepsilon/d)$ per level. These modifications result in a tree $T$ of height $O(\\log \\log n)$ . We describe the details below.",
1052
+ "bbox": [
1053
+ 169,
1054
+ 392,
1055
+ 826,
1056
+ 616
1057
+ ],
1058
+ "page_idx": 7
1059
+ },
1060
+ {
1061
+ "type": "text",
1062
+ "text": "Hierarchical Partitioning: Define $\\delta \\coloneqq 5d^{2}\\varepsilon^{-1}\\log n$ . Similar to Section 3, we define a cell $\\square^{*}$ as a randomly-shifted hypercube that contains all points of $A\\cup B$ and has a side-length of $2\\max \\{C_{\\max},\\ell_1 / \\varepsilon \\}$ , where $\\ell_1 = \\frac{\\sqrt{d}}{\\varepsilon}\\delta n$ . We designate $\\square^{*}$ as the root of $T$ ( $\\square^{*}$ is at level 0 of $T$ ). Define a grid $\\mathbb{G}_1\\coloneqq \\mathbb{G}(\\square^*,\\ell_1)$ . We add each non-empty cell of $\\mathbb{G}_1$ to the tree as the children of $\\square^{*}$ . We construct the hierarchical partitioning in a recursive fashion as follows. For any non-root cell $\\square$ of $T$ , if $\\square$ contains only one point of $A\\cup B$ , then we designate $\\square$ as a leaf cell. Otherwise, let $\\square$ be a cell of level $i$ . Define the grid $\\mathbb{G}_{\\square} = \\mathbb{G}(\\square,\\delta n^{1 / 2^i})$ and add the non-empty cells of $\\mathbb{G}_{\\square}$ to $T$ as the children of $\\square$ . For simplicity in presentation, we assume $n^{1 / 2^i}$ is an integer. For any cell $\\square$ , denote the set of children of $\\square$ in $T$ by $\\mathsf{C}[\\square]$ . The height of $T$ , denoted by $h$ , is $O(\\log \\log n)$ .",
1063
+ "bbox": [
1064
+ 169,
1065
+ 619,
1066
+ 826,
1067
+ 756
1068
+ ],
1069
+ "page_idx": 7
1070
+ },
1071
+ {
1072
+ "type": "text",
1073
+ "text": "Similar to Section 3, our hierarchical partitioning is also a sequence of grids $\\langle \\mathbb{G}_0,\\mathbb{G}_1,\\ldots ,\\mathbb{G}_h\\rangle$ where $\\mathbb{G}_0$ is the root cell $\\square^{*}$ , $\\mathbb{G}_1$ has a cell-side-length of $\\ell_1 = \\frac{\\sqrt{d}}{\\varepsilon}\\delta n$ , and for each $2\\leq i\\leq h$ , the cell-side-length of $\\mathbb{G}_i$ is $\\ell_{i} = \\delta n^{1 / 2^{i - 1}}$ .",
1074
+ "bbox": [
1075
+ 169,
1076
+ 761,
1077
+ 823,
1078
+ 813
1079
+ ],
1080
+ "page_idx": 7
1081
+ },
1082
+ {
1083
+ "type": "text",
1084
+ "text": "Computing an approximate matching cost: To estimate the matching cost, similar to Section 3, our algorithm creates an instance of the 1-Wasserstein problem for each cell of the tree $T$ . Using the algorithm from Lemma 2.2, our algorithm computes a 2-approximate transport plan for the instance created for each cell and returns the total cost of such transport plans as an approximate matching cost. This completes the description of the algorithm.",
1085
+ "bbox": [
1086
+ 169,
1087
+ 818,
1088
+ 823,
1089
+ 890
1090
+ ],
1091
+ "page_idx": 7
1092
+ },
1093
+ {
1094
+ "type": "text",
1095
+ "text": "We describe the details of retrieving a matching in Appendix C.2, the quality of approximation in Appendix C.3, and the efficiency of our algorithm in Appendix C.4.",
1096
+ "bbox": [
1097
+ 169,
1098
+ 895,
1099
+ 823,
1100
+ 925
1101
+ ],
1102
+ "page_idx": 7
1103
+ },
1104
+ {
1105
+ "type": "header",
1106
+ "text": "Published as a conference paper at ICLR 2023",
1107
+ "bbox": [
1108
+ 171,
1109
+ 32,
1110
+ 478,
1111
+ 47
1112
+ ],
1113
+ "page_idx": 7
1114
+ },
1115
+ {
1116
+ "type": "page_number",
1117
+ "text": "8",
1118
+ "bbox": [
1119
+ 493,
1120
+ 948,
1121
+ 504,
1122
+ 959
1123
+ ],
1124
+ "page_idx": 7
1125
+ },
1126
+ {
1127
+ "type": "image",
1128
+ "img_path": "images/923c28fb981be1509e0b8402bd360f6d8893737e37fe6f2fef0e1b5b55faacc2.jpg",
1129
+ "image_caption": [
1130
+ "(a) 15D: $n$ - time"
1131
+ ],
1132
+ "image_footnote": [],
1133
+ "bbox": [
1134
+ 184,
1135
+ 99,
1136
+ 339,
1137
+ 193
1138
+ ],
1139
+ "page_idx": 8
1140
+ },
1141
+ {
1142
+ "type": "image",
1143
+ "img_path": "images/418b8294fa89e19f5f5dc54878cf5e007e33365f1251036f972afffd3e629380.jpg",
1144
+ "image_caption": [
1145
+ "(b) Real: $n$ - time"
1146
+ ],
1147
+ "image_footnote": [],
1148
+ "bbox": [
1149
+ 341,
1150
+ 102,
1151
+ 496,
1152
+ 193
1153
+ ],
1154
+ "page_idx": 8
1155
+ },
1156
+ {
1157
+ "type": "image",
1158
+ "img_path": "images/bb092b6d168468cd40a0e95ff68529cfbb13e7cef7116dc48b660c93e944f511.jpg",
1159
+ "image_caption": [
1160
+ "(c) 15D: $n$ -time"
1161
+ ],
1162
+ "image_footnote": [],
1163
+ "bbox": [
1164
+ 500,
1165
+ 102,
1166
+ 656,
1167
+ 195
1168
+ ],
1169
+ "page_idx": 8
1170
+ },
1171
+ {
1172
+ "type": "image",
1173
+ "img_path": "images/a61141a46d18ec42a36a10f96ef7687d1a0d47671b2a84facbc7e0a26074f86b.jpg",
1174
+ "image_caption": [
1175
+ "(d) Real: $n$ - time"
1176
+ ],
1177
+ "image_footnote": [],
1178
+ "bbox": [
1179
+ 656,
1180
+ 101,
1181
+ 813,
1182
+ 195
1183
+ ],
1184
+ "page_idx": 8
1185
+ },
1186
+ {
1187
+ "type": "image",
1188
+ "img_path": "images/a8391c3f53915c3ed4e0ef61655237d30e197a5bb839dbc31a2b8cf232397e5e.jpg",
1189
+ "image_caption": [
1190
+ "(e) 15D: $n$ -cost"
1191
+ ],
1192
+ "image_footnote": [],
1193
+ "bbox": [
1194
+ 184,
1195
+ 220,
1196
+ 339,
1197
+ 309
1198
+ ],
1199
+ "page_idx": 8
1200
+ },
1201
+ {
1202
+ "type": "image",
1203
+ "img_path": "images/040d9fa00c76880354173979fadc5b8e94f3b936647e9a77714914abd239adb2.jpg",
1204
+ "image_caption": [
1205
+ "(f) 15D: $n$ - time"
1206
+ ],
1207
+ "image_footnote": [],
1208
+ "bbox": [
1209
+ 341,
1210
+ 220,
1211
+ 496,
1212
+ 309
1213
+ ],
1214
+ "page_idx": 8
1215
+ },
1216
+ {
1217
+ "type": "image",
1218
+ "img_path": "images/80254cf810c551d8862f6182b3beff04d26661b19e6a9a002a0f695aabb42027.jpg",
1219
+ "image_caption": [
1220
+ "(g) Real: $n$ -cost",
1221
+ "Figure 2: (a) and (b) Comparison with the Sinkhorn algorithm, (c) and (d) Comparison with the LMR algorithm, and (e)-(h) Comparison with the Geometric-Additive algorithm"
1222
+ ],
1223
+ "image_footnote": [],
1224
+ "bbox": [
1225
+ 500,
1226
+ 220,
1227
+ 653,
1228
+ 309
1229
+ ],
1230
+ "page_idx": 8
1231
+ },
1232
+ {
1233
+ "type": "image",
1234
+ "img_path": "images/226609ae66213d6ce613a72b447787bea5b847cbfe4b6417047ad2b8697cea0c.jpg",
1235
+ "image_caption": [
1236
+ "(h) Real: $n$ -time"
1237
+ ],
1238
+ "image_footnote": [],
1239
+ "bbox": [
1240
+ 656,
1241
+ 218,
1242
+ 812,
1243
+ 309
1244
+ ],
1245
+ "page_idx": 8
1246
+ },
1247
+ {
1248
+ "type": "text",
1249
+ "text": "5 EXPERIMENTS",
1250
+ "text_level": 1,
1251
+ "bbox": [
1252
+ 171,
1253
+ 397,
1254
+ 326,
1255
+ 412
1256
+ ],
1257
+ "page_idx": 8
1258
+ },
1259
+ {
1260
+ "type": "text",
1261
+ "text": "In this section, we conduct experiments to show that our algorithms from Section 3 and Section 4 improve the accuracy of the additive approximation algorithms. We test an implementation of our algorithm, written in Python, on discrete probability distributions derived from real-world and synthetic data sets. All tests are executed on a computer with a $2.50\\mathrm{GHz}$ Intel Core i7 processor and 8GB of RAM using a single computation thread.",
1262
+ "bbox": [
1263
+ 169,
1264
+ 430,
1265
+ 823,
1266
+ 500
1267
+ ],
1268
+ "page_idx": 8
1269
+ },
1270
+ {
1271
+ "type": "text",
1272
+ "text": "Datasets: We test our algorithms on two sets of $n$ samples taken from synthetic (distribution) and real-world data sets. For each dataset, we use our algorithms to compute the minimum-cost matching between these samples and present our results averaged over 10 executions. To generate the synthetic data, we sample from a uniform distribution inside a 2-dimensional unit square placed on a random plane in 15-dimensional space (15D). For a real-world dataset, we use the Adult Census Data (UCI repository), which is a point cloud in $\\mathbb{R}^6$ with continuous features for 35,000 individuals, divided into two categories by income (Dua & Graff (2017)). See Appendix E for the results of our experiments on additional datasets.",
1273
+ "bbox": [
1274
+ 169,
1275
+ 507,
1276
+ 823,
1277
+ 618
1278
+ ],
1279
+ "page_idx": 8
1280
+ },
1281
+ {
1282
+ "type": "text",
1283
+ "text": "Results: In our first experiment, we compare our algorithm with existing additive approximation schemes, namely the Sinkhorn method (Cuture (2013)) (resp. LMR algorithm (Lahn et al. (2019))). We use the Sinkhorn (resp. LMR) algorithm as a black-box within our algorithm from Section 3 (1-Wasserstein algorithm) and compare its execution time with the standard Sinkhorn (resp. LMR) implementation. We set the parameters of the Sinkhorn and LMR algorithms in such a way that the error produced by them matches with the error produced by our 1-Wasserstein algorithm. As shown in Figure 2 (a)-(d), our algorithm runs significantly faster than both the Sinkhorn and the LMR algorithms while producing solutions of similar quality. In our second experiment, we also compare the accuracy as well as the computation time of our algorithms (1-Wasserstein algorithm and our EBM algorithm from Section 4) with the additive approximation algorithm from Section 2.1 (Geometric-Additive algorithm). For this experiment, our algorithms use the Sinkhorn algorithm as a black-box. The results of this experiment are shown in Figure 2 (e)-(h). We observe that our algorithms achieve a better accuracy (see Figure 2 (a) and (c)), especially on real-world data sets. As expected, the execution time of our algorithms increase slightly (see Figure 2 (d)). Furthermore, we observe that as the sample size increases, the costs returned by our algorithms converge to the optimal cost, whereas the Geometric-Additive Approximation based approach does not.",
1284
+ "bbox": [
1285
+ 169,
1286
+ 625,
1287
+ 826,
1288
+ 848
1289
+ ],
1290
+ "page_idx": 8
1291
+ },
1292
+ {
1293
+ "type": "text",
1294
+ "text": "Finally, we highlight the scalability of our algorithms. For an input of 3 million points drawn from a 2 dimensional uniform distribution on the unit square, our 1-Wasserstein algorithm runs in 593 seconds and computes an approximate transport cost of 0.0093. Furthermore, for an input of 1.5 million points for the 15D dataset, our 1-Wasserstein algorithm computes an approximate cost of 0.0304 in 608 seconds.",
1295
+ "bbox": [
1296
+ 169,
1297
+ 854,
1298
+ 825,
1299
+ 922
1300
+ ],
1301
+ "page_idx": 8
1302
+ },
1303
+ {
1304
+ "type": "header",
1305
+ "text": "Published as a conference paper at ICLR 2023",
1306
+ "bbox": [
1307
+ 171,
1308
+ 32,
1309
+ 478,
1310
+ 47
1311
+ ],
1312
+ "page_idx": 8
1313
+ },
1314
+ {
1315
+ "type": "page_number",
1316
+ "text": "9",
1317
+ "bbox": [
1318
+ 493,
1319
+ 948,
1320
+ 503,
1321
+ 959
1322
+ ],
1323
+ "page_idx": 8
1324
+ },
1325
+ {
1326
+ "type": "text",
1327
+ "text": "ACKNOWLEDGEMENT",
1328
+ "text_level": 1,
1329
+ "bbox": [
1330
+ 171,
1331
+ 102,
1332
+ 361,
1333
+ 118
1334
+ ],
1335
+ "page_idx": 9
1336
+ },
1337
+ {
1338
+ "type": "text",
1339
+ "text": "We would like to acknowledge, Advanced Research Computing (ARC) at Virginia Tech, which provided us with the computational resources used to run the experiments. Research presented in this paper was funded by NSF grants CCF-1909171, CCF-2223871, IIS-1814493, CCF-2007556, and CCF-2223870. We would like to thank the anonymous reviewers for their useful feedback.",
1340
+ "bbox": [
1341
+ 171,
1342
+ 133,
1343
+ 826,
1344
+ 191
1345
+ ],
1346
+ "page_idx": 9
1347
+ },
1348
+ {
1349
+ "type": "text",
1350
+ "text": "REFERENCES",
1351
+ "text_level": 1,
1352
+ "bbox": [
1353
+ 173,
1354
+ 210,
1355
+ 287,
1356
+ 226
1357
+ ],
1358
+ "page_idx": 9
1359
+ },
1360
+ {
1361
+ "type": "list",
1362
+ "sub_type": "ref_text",
1363
+ "list_items": [
1364
+ "Pankaj K. Agarwal and R. Sharathkumar. Approximation algorithms for bipartite matching with metric and geometric costs. In Proceedings of the forty-sixth annual ACM symposium on Theory of computing, pp. 555-564, 2014.",
1365
+ "Pankaj K. Agarwal and Kasturi R. Varadarajan. A near-linear constant-factor approximation for Euclidean bipartite matching? In Proceedings of the 20th annual symposium on Computational geometry, pp. 247-252, 2004.",
1366
+ "Pankaj K. Agarwal, Kyle Fox, Debmalya Panigrahi, Kasturi R. Varadarajan, and Allen Xiao. Faster algorithms for the geometric transportation problem. In Proc. 33rd International Symposium on Computational Geometry, pp. 7:1-7:16, 2017.",
1367
+ "Pankaj K Agarwal, Hsien-Chih Chang, Sharath Raghvendra, and Allen Xiao. Deterministic, near-linear $\\varepsilon$ -approximation algorithm for geometric bipartite matching. In Proceedings of the 54th Annual ACM SIGACT Symposium on Theory of Computing, pp. 1052-1065, 2022a.",
1368
+ "Pankaj K Agarwal, Sharath Raghvendra, Pouyan Shirzadian, and Rachita Sowle. An improved $\\varepsilon$ -approximation algorithm for geometric bipartite matching. In 18th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2022). Schloss Dagstuhl-Leibniz-Zentrum für Informatik, 2022b.",
1369
+ "Jason Altschuler, Jonathan Niles-Weed, and Philippe Rigollet. Near-linear time approximation algorithms for optimal transport via sinkhorn iteration. Advances in neural information processing systems, 30, 2017.",
1370
+ "Jason Altschuler, Francis Bach, Alessandro Rudi, and Jonathan Niles-Weed. Massively scalable sinkhorn distances via the nyström method. Advances in neural information processing systems, 32, 2019.",
1371
+ "Alexandr Andoni, Piotr Indyk, and Robert Krauthgamer. Earth mover distance over high-dimensional spaces. In SODA, volume 8, pp. 343-352, 2008.",
1372
+ "Alexandr Andoni, Piotr Indyk, Huy L Nguyen, and Ilya Razenshteyn. Beyond locality-sensitive hashing. In Proceedings of the twenty-fifth annual ACM-SIAM symposium on Discrete algorithms, pp. 1018-1028. SIAM, 2014.",
1373
+ "Arturs Backurs, Yihe Dong, Piotr Indyk, Ilya Razenshteyn, and Tal Wagner. Scalable nearest neighbor search for optimal transport. In International Conference on Machine Learning, pp. 497-506, 2020.",
1374
+ "Espen Bernton, Pierre E. Jacob, Mathieu Gerber, and Christian P. Robert. On parameter estimation with the wasserstein distance. Information and Inference: A Journal of the IMA, 8(4):657-676, 2019.",
1375
+ "Jose Blanchet, Arun Jambulapati, Carson Kent, and Aaron Sidford. Towards optimal running times for optimal transport. arXiv preprint arXiv:1810.07717, 2018.",
1376
+ "Xi Chen, Rajesh Jayaram, Amit Levi, and Erik Waingarten. New streaming algorithms for high dimensional emd and mst. In Proceedings of the 54th Annual ACM SIGACT Symposium on Theory of Computing, pp. 222-233, 2022.",
1377
+ "Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In Advances in Neural Information Processing Systems 26, pp. 2292-2300, 2013."
1378
+ ],
1379
+ "bbox": [
1380
+ 171,
1381
+ 233,
1382
+ 826,
1383
+ 925
1384
+ ],
1385
+ "page_idx": 9
1386
+ },
1387
+ {
1388
+ "type": "header",
1389
+ "text": "Published as a conference paper at ICLR 2023",
1390
+ "bbox": [
1391
+ 171,
1392
+ 32,
1393
+ 478,
1394
+ 47
1395
+ ],
1396
+ "page_idx": 9
1397
+ },
1398
+ {
1399
+ "type": "page_number",
1400
+ "text": "10",
1401
+ "bbox": [
1402
+ 490,
1403
+ 946,
1404
+ 509,
1405
+ 960
1406
+ ],
1407
+ "page_idx": 9
1408
+ },
1409
+ {
1410
+ "type": "list",
1411
+ "sub_type": "ref_text",
1412
+ "list_items": [
1413
+ "Ishan Deshpande, Ziyu Zhang, and Alexander G Schwing. Generative modeling using the sliced Wasserstein distance. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3483-3491, 2018.",
1414
+ "Dheeru Dua and Casey Graff. UCI machine learning repository, 2017. URL http://archive.ics.uci.edu/ml.",
1415
+ "Pavel Dvurechensky, Alexander Gasnikov, and Alexey Kroshnin. Computational optimal transport: Complexity by accelerated gradient descent is better than by sinkhorn's algorithm. In International conference on machine learning, pp. 1367-1376. PMLR, 2018.",
1416
+ "Jack Edmonds and Richard M Karp. Theoretical improvements in algorithmic efficiency for network flow problems. Journal of the ACM (JACM), 19(2):248-264, 1972.",
1417
+ "Peyman Mohajerin Esfahani and Daniel Kuhn. Data-driven distributionally robust optimization using the wasserstein metric: Performance guarantees and tractable reformulations. Mathematical Programming, 171(1):115-166, 2018.",
1418
+ "Kyle Fox and Jiashuai Lu. A near-linear time approximation scheme for geometric transportation with arbitrary supplies and spread. In Proc. 36th Annual Symposium on Computational Geometry, pp. 45:1-45:18, 2020.",
1419
+ "H. N. Gabow and R.E. Tarjan. Faster scaling algorithms for network problems. SIAM J. Comput., 18: 1013-1036, October 1989. ISSN 0097-5397. doi: 10.1137/0218069. URL http://portal.acm.org/citation.cfm?id=75795.75806.",
1420
+ "Aude Geneva, Gabriel Peyre, and Marco Cuturi. Learning generative models with sinkhorn divergences. In International Conference on Artificial Intelligence and Statistics, pp. 1608-1617, 2018.",
1421
+ "Wenshuo Guo, Nhat Ho, and Michael Jordan. Fast algorithms for computational optimal transport and Wasserstein barycenter. In International Conference on Artificial Intelligence and Statistics, pp. 2088-2097, 2020.",
1422
+ "Rishi Gupta, Piotr Indyk, and Eric Price. Sparse recovery for earth mover distance. In 2010 48th Annual Allerton Conference on Communication, Control, and Computing (Allerton), pp. 1742-1744. IEEE, 2010.",
1423
+ "P. Indyk. A near linear time constant factor approximation for Euclidean bichromatic matching (cost). In Proceedings of the 18th Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 39-42, 2007.",
1424
+ "Arun Jambulapati, Aaron Sidford, and Kevin Tian. A direct $\\tilde{O}(1/\\epsilon)$ iteration parallel algorithm for optimal transport. arXiv preprint arXiv:1906.00618, 2019.",
1425
+ "Hicham Janati, Marco Cuturi, and Alexandre Gramfort. Wasserstein regularization for sparse multi-task regression. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 1407-1416. PMLR, 2019.",
1426
+ "Andrey Boris Khesin, Aleksandar Nikolov, and Dmitry Paramonov. Preconditioning for the geometric transportation problem. arXiv preprint arXiv:1902.08384, 2019.",
1427
+ "Nathaniel Lahn and Sharath Raghvendra. A faster algorithm for minimum-cost bipartite matching in minor-free graphs. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 569-588. SIAM, 2019.",
1428
+ "Nathaniel Lahn and Sharath Raghvendra. An $O(n^{5/4})$ time $\\varepsilon$ -approximation algorithm for RMS matching in a plane. In Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms, pp. 869-888, 2021.",
1429
+ "Nathaniel Lahn, Deepika Mulchandani, and Sharath Raghevendra. A graph theoretic additive approximation of optimal transport. In Advances in Neural Information Processing Systems 32, pp. 13813-13823, 2019."
1430
+ ],
1431
+ "bbox": [
1432
+ 171,
1433
+ 102,
1434
+ 825,
1435
+ 922
1436
+ ],
1437
+ "page_idx": 10
1438
+ },
1439
+ {
1440
+ "type": "header",
1441
+ "text": "Published as a conference paper at ICLR 2023",
1442
+ "bbox": [
1443
+ 171,
1444
+ 32,
1445
+ 478,
1446
+ 47
1447
+ ],
1448
+ "page_idx": 10
1449
+ },
1450
+ {
1451
+ "type": "page_number",
1452
+ "text": "11",
1453
+ "bbox": [
1454
+ 490,
1455
+ 948,
1456
+ 506,
1457
+ 959
1458
+ ],
1459
+ "page_idx": 10
1460
+ },
1461
+ {
1462
+ "type": "list",
1463
+ "sub_type": "ref_text",
1464
+ "list_items": [
1465
+ "Tianyi Lin, Nhat Ho, and Michael I. Jordan. On the efficiency of the sinkhorn and greenkhorn algorithms and their acceleration for optimal transport. arXiv preprint arXiv:1906.01437, 2019.",
1466
+ "Huidong Liu, G. U. Xianfeng, and Dimitris Samaras. A two-step computation of the exact gan Wasserstein distance. In International Conference on Machine Learning, pp. 3159-3168, 2018.",
1467
+ "Giulia Luise, Alessandro Rudi, Massimiliano Pontil, and Carlo Ciliberto. Differential properties of sinkhorn approximation for learning with Wasserstein distance. Advances in Neural Information Processing Systems, 31, 2018.",
1468
+ "James Orlin. A faster strongly polynomial minimum cost flow algorithm. In Proceedings of the Twentieth annual ACM symposium on Theory of Computing, pp. 377-387, 1988.",
1469
+ "Kent Quanrud. Approximating optimal transport with linear programs. arXiv preprint arXiv:1810.05957, 2018.",
1470
+ "Sharath Raghvendra and Pankaj K. Agarwal. A near-linear time $\\varepsilon$ -approximation algorithm for geometric bipartite matching. J. ACM, 67(3):18:1-18:19, 2020. doi: 10.1145/3393694. URL https://doi.org/10.1145/3393694.",
1471
+ "Tim Salimans, Han Zhang, Alec Radford, and Dimitris Metaxas. Improving gans using optimal transport. In International Conference on Learning Representations, 2018.",
1472
+ "Jonah Sherman. Generalized preconditioning and undirected minimum-cost flow. In Proceedings of the 28th Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 772-780, 2017."
1473
+ ],
1474
+ "bbox": [
1475
+ 171,
1476
+ 102,
1477
+ 828,
1478
+ 426
1479
+ ],
1480
+ "page_idx": 11
1481
+ },
1482
+ {
1483
+ "type": "header",
1484
+ "text": "Published as a conference paper at ICLR 2023",
1485
+ "bbox": [
1486
+ 171,
1487
+ 32,
1488
+ 478,
1489
+ 47
1490
+ ],
1491
+ "page_idx": 11
1492
+ },
1493
+ {
1494
+ "type": "page_number",
1495
+ "text": "12",
1496
+ "bbox": [
1497
+ 490,
1498
+ 946,
1499
+ 509,
1500
+ 959
1501
+ ],
1502
+ "page_idx": 11
1503
+ }
1504
+ ]
2023/A Higher Precision Algorithm for Computing the $1$-Wasserstein Distance/da4f6dce-4c38-439e-aa6c-715a9f5477fe_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Higher Precision Algorithm for Computing the $1$-Wasserstein Distance/da4f6dce-4c38-439e-aa6c-715a9f5477fe_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cbeb0c08db372d1cef1828d0dae96af922006b3ea304d6f545b4a961b34a5302
3
+ size 662665
2023/A Higher Precision Algorithm for Computing the $1$-Wasserstein Distance/full.md ADDED
@@ -0,0 +1,244 @@
1
+ # A HIGHER PRECISION ALGORITHM FOR COMPUTING THE 1-WASSERSTEIN DISTANCE
2
+
3
+ Pankaj K. Agarwal<sup>1*</sup>, Sharath Raghvendra<sup>2</sup>, Pouyan Shirzadian<sup>2</sup>, and Rachita Sowle<sup>2</sup>
4
+
5
+ $^{1}$ Duke University, $^{2}$ Virginia Tech
6
+
7
+ # ABSTRACT
8
+
9
+ We consider the problem of computing the 1-Wasserstein distance $\mathcal{W}(\mu, \nu)$ between two $d$ -dimensional discrete distributions $\mu$ and $\nu$ whose supports lie within the unit hypercube. There are several algorithms that estimate $\mathcal{W}(\mu, \nu)$ within an additive error of $\varepsilon$ . However, when $\mathcal{W}(\mu, \nu)$ is small, the additive error $\varepsilon$ dominates, leading to noisy results. Consider any additive approximation algorithm with execution time $T(n, \varepsilon)$ . We propose an algorithm that runs in $O(T(n, \varepsilon / d) \log n)$ time and boosts the accuracy of estimating $\mathcal{W}(\mu, \nu)$ from $\varepsilon$ to an expected additive error of $\min\{\varepsilon, (d \log_{\sqrt{d} / \varepsilon} n) \mathcal{W}(\mu, \nu)\}$ . For the special case where every point in the support of $\mu$ and $\nu$ has a mass of $1 / n$ (also called the Euclidean Bipartite Matching problem), we describe an algorithm to boost the accuracy of any additive approximation algorithm from $\varepsilon$ to an expected additive error of $\min\{\varepsilon, (d \log \log n) \mathcal{W}(\mu, \nu)\}$ in $O(T(n, \varepsilon / d) \log \log n)$ time.
10
+
11
+ # 1 INTRODUCTION
12
+
13
+ Given two discrete probability distributions $\mu$ and $\nu$ whose supports $A$ and $B$ , respectively, lie inside the $d$ -dimensional unit hypercube $[0,1]^d$ with $\max \{|A|, |B|\} = n$ , the 1-Wasserstein distance $\mathcal{W}(\mu, \nu)$ (also called the Earth Mover's distance) between them is the minimum cost required to transport mass from $\nu$ to $\mu$ under the Euclidean metric. The special case where $|A| = |B| = n$ and the mass at each point of $A \cup B$ is $1/n$ is called the Euclidean Bipartite Matching (EBM) problem. In machine learning applications, one can improve a model $\mu$ by using its Earth Mover's distance from a distribution $\nu$ built on real data. Consequently, it has been extensively used in generative models (Deshpande et al. (2018); Genevay et al. (2018); Salimans et al. (2018)), robust learning (Esfahani & Kuhn (2018)), supervised learning (Luise et al. (2018); Janati et al. (2019)), and parameter estimation (Liu et al. (2018); Bernton et al. (2019)).
14
+
15
+ Computing the 1-Wasserstein distance between $\mu$ and $\nu$ can be modeled as a linear program and solved in $O(n^{3}\log n)$ time (Edmonds & Karp (1972); Orlin (1988)), which is computationally expensive. There has been substantial effort on designing $\varepsilon$ -additive-approximation algorithms that estimate $\mathcal{W}(\mu, \nu)$ within an additive error of $\varepsilon$ in $n^{2}\mathrm{poly}(d, \log n, 1/\varepsilon)$ time (Cuturi (2013); Lin et al. (2019); Lahn et al. (2019)). When $\mathcal{W}(\mu, \nu)$ is significantly smaller than $\varepsilon$ , however, the cost produced by such algorithms will be unreliable as it is dominated by the error parameter $\varepsilon$ .
16
+
17
+ To get a higher accuracy in this case, for $\alpha > 0$ , one can compute an $\alpha$ -relative approximation of the 1-Wasserstein distance, which is a cost $w$ that satisfies $\mathcal{W}(\mu, \nu) \leq w \leq \alpha \mathcal{W}(\mu, \nu)$ . There has been considerable effort on designing relative-approximation algorithms; however, many such methods suffer from the curse of dimensionality; i.e., their execution time grows exponentially in $d$ . Furthermore, they rely on fairly involved data structures that have good asymptotic execution times but are slow in practice and difficult to implement, making them impractical (Agarwal & Sharathkumar (2014); Fox & Lu (2020); Agarwal et al. (2022a;b)). The only exception to this is a classical greedy algorithm, based on a $d$ -dimensional quadtree, that returns an $O(d \log n)$ -relative approximation of the 1-Wasserstein distance in $O(nd)$ time. It has been used in various machine-learning and computer-vision applications (Gupta et al. (2010); Backurs et al. (2020)). In the case of the Euclidean Bipartite Matching, Agarwal & Varadarajan (2004) and Indyk (2007) generalized the algorithm in the hierarchical framework to achieve a relative approximation ratio of $O(d^{2}\log (1 / \varepsilon))$ in $\tilde{O} (n^{1 + \varepsilon})$ time<sup>1</sup>.
20
+
21
+ In this paper, we design an algorithm that combines any additive approximation algorithm with the hierarchical quad-tree based framework. As a result, our algorithm achieves better guarantees for both additive and relative approximations. To our knowledge, this is the first result that combines the power of additive and relative approximation techniques, leading to improvement in both settings.
22
+
23
+ # 1.1 PROBLEM DEFINITION
24
+
25
+ We are given two discrete distributions $\mu$ and $\nu$ . Let $A$ and $B$ be the points in the support of $\mu$ and $\nu$ , respectively. For the distribution $\mu$ (resp. $\nu$ ), suppose each point $a \in A$ (resp. $b \in B$ ) has a probability of $\mu_{a}$ (resp. $\nu_{b}$ ) associated with it, where $\sum_{a \in A} \mu_{a} = \sum_{b \in B} \nu_{b} = 1$ . Let $G(A, B)$ denote the complete bipartite graph where, for any pair of points $a \in A$ and $b \in B$ , there is an edge from $a$ to $b$ of cost $\|a - b\|$ , i.e., the Euclidean distance between $a$ and $b$ .
26
+
27
+ For each point $a \in A$ (resp. $b \in B$ ), we assign a weight $\eta(a) = -\mu_a$ (resp. $\eta(b) = \nu_b$ ). We refer to any point $v \in A \cup B$ with a negative (resp. positive) weight as a demand point (resp. supply point) with a demand (resp. supply) of $|\eta(v)|$ . Given any subset of points $V \subseteq A \cup B$ , the weight $\eta(V)$ is simply the sum of the weights of its points; i.e., $\eta(V) = \sum_{v \in V} \eta(v)$ . For any edge $(a, b) \in A \times B$ , let the cost of transporting a supply of $\beta$ from $b$ to $a$ be $\beta \| a - b \|$ . In this problem, our goal is to transport all supplies from supply points to demand points at the minimum cost. More formally, a transport plan is a function $\sigma: A \times B \to \mathbb{R}_{\geq 0}$ that assigns a non-negative value to each edge of $G(A, B)$ indicating the quantity of supplies transported along the edge. The transport plan $\sigma$ is such that the total supply transported into (resp. from) any demand (resp. supply) point $a \in A$ (resp. $b \in B$ ) is equal to $-\eta(a)$ (resp. $\eta(b)$ ). The cost of the transport plan $\sigma$ , denoted by $w(\sigma)$ , is given by $\sum_{(a, b) \in A \times B} \sigma(a, b) \| a - b \|$ . The goal of this problem is to find a minimum-cost transport plan.
28
+
29
+ If two points $a \in A$ and $b \in B$ are co-located (i.e., they share the same coordinates), then due to the metric property of the Euclidean distances, if $\eta(b) = -\eta(a)$ , we can match the supplies to the demands at zero cost and remove the points from the input. Otherwise, if $\eta(b) \neq -\eta(a)$ , we replace the two points with a single point of weight $\eta(a) + \eta(b)$ . By definition, if the weight of the newly created point is negative (resp. positive), we consider it a demand (resp. supply) point. In our presentation, we always consider $A$ and $B$ to be the point sets obtained after replacing all the co-located points. Observe that, after removing the co-located points, the total supply $U = \eta(B)$ may be less than 1. However, it is easy to see that $\eta(B) = -\eta(A)$ ; i.e., the problem instance defined on $A \cup B$ is balanced. We say that a transport plan $\sigma$ is an $\varepsilon$ -close transport plan if $w(\sigma) \leq \mathcal{W}(\mu, \nu) + \varepsilon U$ .
30
+
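+ As a concrete illustration of this preprocessing (the function name and input format below are our own, not the paper's), co-located mass cancels at zero cost and the sign of any residue decides whether the merged point acts as a demand or a supply point:
+
+ ```python
+ from collections import defaultdict
+
+ def merge_colocated(A, B):
+     # A: list of (point, mu_a) pairs; B: list of (point, nu_b) pairs,
+     # with each point given as a coordinate tuple so that it is hashable.
+     net = defaultdict(float)
+     for a, mu_a in A:
+         net[a] -= mu_a          # demand points carry weight -mu_a
+     for b, nu_b in B:
+         net[b] += nu_b          # supply points carry weight +nu_b
+     demands = [(p, w) for p, w in net.items() if w < 0]
+     supplies = [(p, w) for p, w in net.items() if w > 0]
+     # the instance stays balanced: total supply equals total demand (= U <= 1)
+     return demands, supplies
+ ```
+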
31
+ In many applications, the distributions $\mu$ and $\nu$ are continuous or large (possibly unknown) discrete distributions. In such cases, it might be computationally expensive or even impossible to compute $\mathcal{W}(\mu, \nu)$ . Instead, one can draw two sets $A$ and $B$ of $n$ samples each from $\mu$ and $\nu$ , respectively. Each point $a \in A$ (resp. $b \in B$ ) is assigned a weight of $\eta(a) = -1/n$ (resp. $\eta(b) = 1/n$ ). One can approximate the 1-Wasserstein distance between the distributions $\mu$ and $\nu$ by simply solving the 1-Wasserstein problem defined on $G(A, B)$ . This special case where every point has the same demand and supply is called the Euclidean Bipartite Matching (EBM) problem. A matching $M$ is a set of vertex-disjoint edges in $G(A, B)$ and has a cost $1/n \sum_{(a, b) \in M} \|a - b\|$ . For the EBM problem, the optimal transport plan is simply a minimum-cost matching of cardinality $n$ .
32
+
33
+ For any point set $P$ in the Euclidean space, let $C_{\max}(P) \coloneqq \max_{(a,b) \in P \times P} \| a - b \|$ denote the distance of its farthest pair and $C_{\min}(P) \coloneqq \min_{(a,b) \in P \times P, a \neq b} \| a - b \|$ denote the distance of its closest pair. The spread of the point set, denoted by $\Delta(P)$ , is the ratio $\Delta(P) = C_{\max}(P) / C_{\min}(P)$ . When $P$ is obvious from the context, we simply use $C_{\min}$ , $C_{\max}$ , and $\Delta$ to denote the distance of its closest and farthest pair and its spread.
34
+
35
+ # 1.2 RELATED WORK
36
+
37
+ Relative Approximations: In fixed dimensional settings, i.e., $d = O(1)$ , there is extensive work on the design of near-linear time Monte-Carlo $(1 + \varepsilon)$ -relative approximation algorithms for 1-Wasserstein and EBM problems. The execution time of these algorithms is $\Omega(n(d\varepsilon^{-1}\log n)^d)$ (Khesin et al. (2019); Raghvendra & Agarwal (2020); Fox & Lu (2020); Agarwal et al. (2022a)). A recent algorithm presented by Agarwal et al. (2022b) improved the dependence on $d$ slightly and achieved an execution time of $\Omega(n(d\varepsilon^{-1}\log\log n)^d)$ . Nonetheless, the exponential dependence on $d$ makes it unsuitable for higher dimensions.
40
+
41
+ For higher dimensions, a quad-tree based greedy algorithm provides an expected $O(d \log n)$ -approximation in $O(nd)$ time. This algorithm constructs a randomly-shifted recursive partition of space by splitting each cell into $2^d$ cells with half the side-length. For every cell of the quad-tree, the algorithm moves excess supply present inside its child to meet any excess demand present inside another child. Agarwal & Varadarajan (2004) combined a different hierarchical partition with an exact solver to get an expected $O(d^{2}\log (1 / \varepsilon))$ -approximation in $\tilde{O}(n^{1 + \varepsilon})$ time. Indyk (2007) combined the hierarchical greedy framework with importance sampling to estimate the cost of the Euclidean bipartite matching within a constant factor of the optimal. Additionally, there are $\tilde{O}(nd)$ time algorithms that approximate the matching cost with a factor of $O(\log^2 n)$ (Andoni et al. (2008); Chen et al. (2022)).
42
+
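+ The following minimal sketch of this classical quadtree estimator (our illustration; constants are chosen for readability rather than for the tightest guarantee) makes the mechanism concrete: at every level, mass left unmatched inside a cell pays on the order of the cell diameter to be matched one level up.
+
+ ```python
+ import math
+ import random
+ from collections import defaultdict
+
+ def quadtree_greedy_estimate(A, B):
+     # A, B: equal-size lists of d-dimensional tuples in [0,1]^d, unit mass 1/n each
+     d, n = len(A[0]), len(A)
+     shift = [random.random() for _ in range(d)]      # random shift of the hierarchy
+     estimate = 0.0
+     for i in range(1, math.ceil(math.log2(n)) + 2):  # grid of side 2/2^i over the shifted cube
+         side = 2.0 / (2 ** i)
+         imbalance = defaultdict(int)
+         for p in A:
+             imbalance[tuple(int((c + s) // side) for c, s in zip(p, shift))] += 1
+         for q in B:
+             imbalance[tuple(int((c + s) // side) for c, s in zip(q, shift))] -= 1
+         # unmatched mass in a level-i cell travels at most the parent's diameter
+         estimate += 2 * side * math.sqrt(d) * sum(abs(v) for v in imbalance.values()) / n
+     return estimate
+ ```
+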
43
+ There are also approximation algorithms that run in $\tilde{O}(n^2)$ time; however, these algorithms rely on several black-box reductions and, at present, there are no usable implementations of these algorithms (Agarwal & Sharathkumar (2014); Sherman (2017)). The lack of fast exact and relative-approximation algorithms that are also implementable has motivated machine-learning researchers to design additive-approximation algorithms, which we discuss next.
44
+
45
+ Additive Approximations: Cuturi (2013) introduced a regularized version of the optimal transport problem that produces an $\varepsilon$ -close transport plan, which can be solved using the Sinkhorn method. For input points within the unit hypercube, such an algorithm produces an $\varepsilon$ -close transport plan in $\tilde{O}(n^2 d / \varepsilon^2)$ time $^2$ (Lin et al. (2019)). One can also adapt graph theoretic approaches including the algorithm by Gabow & Tarjan (1989) to obtain an $\varepsilon$ -close solution in $O(n^2 \sqrt{d} / \varepsilon + nd / \varepsilon^2)$ time for points within the unit hypercube (Lahn et al. (2019)). Some of the additive approximation methods, including the Sinkhorn method, are highly parallelizable. For instance, the algorithm by Jambulapati et al. (2019) has a parallel depth of $\tilde{O}(1 / \varepsilon)$ ; see also Altschuler et al. (2017; 2019); Blanchet et al. (2018); Quanrud (2018); Dvurechensky et al. (2018); Guo et al. (2020).
46
+
47
+ # 1.3 OUR RESULTS
48
+
49
+ Let $T(n,\varepsilon)$ be the time taken by an $\varepsilon$ -additive-approximation algorithm on an input of $n$ points in the unit hypercube. In Theorem 1.1 and 1.2, we present new algorithms that improve the accuracy of any additive-approximation algorithm for the 1-Wasserstein problem and the EBM problem, respectively.
50
+
51
+ Theorem 1.1. Given two discrete distributions $\mu$ and $\nu$ whose supports lie in the $d$ -dimensional unit hypercube and have spread $n^{O(1)}$ , and a parameter $\varepsilon > 0$ , a transport plan with an expected additive error of $\min \{\varepsilon, (d\log_{\sqrt{d}/\varepsilon} n)\mathcal{W}(\mu, \nu)\}$ can be computed in $O(T(n, \varepsilon/d)\log_{\sqrt{d}/\varepsilon} n)$ time; here, $\mathcal{W}(\mu, \nu)$ is the 1-Wasserstein distance between $\mu$ and $\nu$ .
52
+
53
+ Theorem 1.2. Given two sets of $n$ points $A$ and $B$ in the $d$ -dimensional unit hypercube and a parameter $\varepsilon > 0$ , a matching whose expected cost is within an additive error of $\min \{ \varepsilon, (d\log \log n)w^* \}$ of the optimal matching cost $w^*$ can be computed in $O(T(n,\varepsilon /d)\log \log n)$ time with high probability.
54
+
55
+ Typical additive-approximation algorithms run in $T(n, \varepsilon) = \tilde{O}(n^2)$ time and compute an $\varepsilon$ -close transport plan for any arbitrary cost function; i.e., they make no assumption about the distance between points. The inputs to our algorithm, on the other hand, are point sets in the Euclidean space. Therefore, one can use an approximate dynamic nearest neighbor data structure to improve the execution time of such additive-approximation algorithms. In particular, the algorithm by Lahn et al. (2019) runs in $O(1 / \varepsilon)$ phases, where each phase executes one iteration of Gabow-Tarjan's algorithm. As shown by Agarwal & Sharathkumar (2014), one can use a $\frac{1}{\sqrt{\varepsilon}}$ -approximate dynamic nearest neighbor data structure with a query/update time of $O(n^{\varepsilon} + d\log n)$ (Andoni et al. (2014)) to execute each iteration of Gabow-Tarjan's algorithm in $\tilde{O}(n^{1 + \varepsilon})$ time. Combining with the algorithms from Theorem 1.1 and 1.2, we obtain the following relative-approximation algorithms. The details are provided in Appendix D $^3$ .
58
+
59
+ Theorem 1.3. Let $\mu$ and $\nu$ be two discrete distributions whose supports lie in the $d$ -dimensional unit hypercube and have polynomial spread, and $\varepsilon > 0$ be a parameter. An $O(d / \varepsilon^{3 / 2})$ -approximate transport plan under the Euclidean metric can be computed in $O(d^2 n^{1 + \varepsilon} / \varepsilon)$ time.
60
+
61
+ Theorem 1.4. Given two sets of $n$ points $A$ and $B$ in the $d$ -dimensional unit hypercube and a parameter $\varepsilon > 0$ , an $O\left(\frac{d}{\sqrt{\varepsilon}} \log \frac{d}{\varepsilon}\right)$ -approximate matching can be computed in $O(dn^{1 + \varepsilon} \log \frac{d}{\varepsilon})$ time with high probability.
62
+
63
+ In contrast to our results, Agarwal & Varadarajan (2004) compute an $O(d^{2}\log \frac{1}{\varepsilon})$ -approximate matching in the same time. Therefore, our algorithm computes a more accurate matching for $d > \frac{1}{\sqrt{\varepsilon}}$ . For instance, consider the case where $d = \sqrt{\log n}$ . For any arbitrarily small constant $\varepsilon > 0$ , the algorithm of Theorem 1.4 will run in $\tilde{O}(n^{1 + \varepsilon})$ time and return an $O(\sqrt{\log n})$ -approximation. In contrast, all previous methods that achieve sub-logarithmic approximation require $\Omega(n^{5/4})$ time (Agarwal & Sharathkumar (2014)).
64
+
65
+ We also note that all of our algorithms extend to computing transport plans under any $\ell_p$ -metric in a straightforward way. For simplicity, we restrict our presentation only to the Euclidean metric.
66
+
67
+ Overview of the algorithm: Our algorithm uses the hierarchical greedy paradigm. In our presentation, we refer to a hypercube as a cell. For any cell $\square$ , let $V_{\square} = (A \cup B) \cap \square$ . Unlike a quadtree based greedy algorithm, which splits each cell $\square$ into $2^{d}$ cells, we split it into $\min \{|V_{\square}|, (4\sqrt{d}/\varepsilon)^{d}\}$ cells. Thus, the height of the resulting tree $T$ reduces from $O(\log n)$ to $O(\log_{\sqrt{d}/\varepsilon}n)$ . For any cell $\square$ of $T$ and any child $\square'$ of $\square$ , we move any excess supply or demand inside $\square'$ to its center. Let $\mathcal{A}_{\square}$ (resp. $\mathcal{B}_{\square}$ ) be a set consisting of the center points of all children of $\square$ with excess demand (resp. supply). For any child $\square'$ of $\square$ with excess demand (resp. supply), we assign a weight of $\eta(V_{\square'})$ to its center point in $\mathcal{A}_{\square}$ (resp. $\mathcal{B}_{\square}$ ). Using an additive-approximation algorithm, we compute an $(\varepsilon/d)$ -close transport cost between $\mathcal{A}_{\square}$ and $\mathcal{B}_{\square}$ in $T(|V_{\square}|, \varepsilon/d)$ time. We report the sum of the transport costs computed at all cells of $T$ as an approximate 1-Wasserstein distance. This simple algorithm guarantees improvement in the quality of the solutions produced by both additive and relative-approximation algorithms.
68
+
69
+ From the perspective of relative-approximation algorithms, Agarwal & Varadarajan (2004) as well as Indyk (2007) have utilized a similar hierarchical framework to design approximation algorithms. However, unlike our algorithm, they used an exact solver at each cell that takes $\Omega\left(|\mathcal{A}_{\square} \cup \mathcal{B}_{\square}|^{3}\right)$ time. As a result, to obtain near-quadratic execution time, they divided every cell into $O\left(|V_{\square}|^{2/3}\right)$ children, i.e., $1/\varepsilon^{d} \leq n^{2/3}$ or $d = O(\log_{1/\varepsilon}n)$ . This also forces the height of the tree to be $O(d\log_{1/\varepsilon}n)$ , leading to an $O(d^{2}\log_{1/\varepsilon}n)$ -factor approximation. We replace the $\Omega(n^{3})$ time exact solver in their algorithm with a $T(|V_{\square}|, \varepsilon) = \tilde{O}(|V_{\square}|^{2})$ time additive-approximation algorithm. Therefore, each instance (regardless of the number of non-empty children) can be solved in $\tilde{O}(|V_{\square}|^{2})$ time. As a result, we are able to improve the approximation factor from $O(d^{2}\log_{1/\varepsilon}n)$ to $O(d\log_{\sqrt{d}/\varepsilon}n)$ and also remove restrictions on the dimension. Our algorithm now works for any dimension!
70
+
71
+ Technical Challenge: Using an additive-approximation algorithm at each cell introduces an error that may be difficult to bound. We use the following observation to overcome this challenge. In Section 2.2, we show that for any point set with spread $\Delta$ , an additive-approximation algorithm can be used to compute a 2-relative approximation in $T(n,1 / \Delta)$ time. The algorithm guarantees that the spread of the point set at each cell $\square$ is $O(d / \varepsilon)$ , and as a result, we get a 2-relative approximation by using an additive-approximation algorithm in $O(T(n,\varepsilon /d))$ time.
72
+
73
+ Improvements for EBM: For the EBM problem, we obtain an improvement in the approximation ratio, as stated in Theorem 1.2, as follows: Instead of dividing each cell into a fixed number $\min \{n,(4\sqrt{d} /\varepsilon)^d\}$ of children, we divide it into $\min \{n,n^{d / 2^i}\}$ children at each level $i$ ; here, the level of any cell in the tree is equal to the length of the path from the root to this cell. By doing so, we reduce the height of the tree to $O(\log \log n)$ . To analyze the running time, we show that the number of remaining unmatched points over all level $i$ cells is $\tilde{O} (n^{1 - \frac{1}{2^i}})$ . Since there are only sub-linearly many points remaining, we can afford a larger spread of $O(\frac{d}{\varepsilon} n^{\frac{1}{2^i}})$ and the resulting execution time of the additive approximation will continue to be $O(T(n,\frac{\varepsilon}{d}))$ per level.
76
+
77
+ Open Question: Although our algorithms work in any dimension, we would like to note that the relative approximation factor grows linearly in the dimension. Recently, Chen et al. (2022) removed the dependence on the dimension $d$ from the approximation factor of the quad-tree greedy algorithm by using a data-dependent weight assignment. It is an open question whether their approach can be adapted to our framework, leading to a similar improvement in the approximation factor of our algorithms.
78
+
79
+ # 2 PRELIMINARIES
80
+
81
+ We begin by introducing a few notations. For a cell $\square$ , we denote its side-length by $\ell_{\square}$ and its center by $c_{\square}$ . For a parameter $\ell$ , let $\mathbb{G}(\square, \ell)$ denote a grid that partitions $\square$ into smaller cells with side-length $\ell$ . Recall that $V_{\square}$ denotes the subset of $A \cup B$ that lies inside $\square$ . We say that $\square$ is non-empty if $V_{\square}$ is non-empty. Recall that $\eta(V_{\square}) = \sum_{v \in V_{\square}} \eta(v)$ . We define the weight of $\square$ to be $\eta(V_{\square})$ and denote it by $\eta(\square)$ . We call a cell $\square$ a deficit cell if $\eta(\square) < 0$ , a surplus cell if $\eta(\square) > 0$ , and a neutral cell if $\eta(\square) = 0$ .
82
+
83
+ In this section, we provide a simple transformation of the input for achieving an additive approximation of the 1-Wasserstein distance. Furthermore, we show how an additive-approximation algorithm for the 1-Wasserstein problem can be used to obtain a $(1 + \varepsilon)$ -relative-approximation algorithm that runs in $T(n,\varepsilon /\Delta)$ time, where $\Delta$ is the spread of input points.
84
+
85
+ # 2.1 ADDITIVE APPROXIMATION IN EUCLIDEAN SPACE
86
+
87
+ For any $\varepsilon > 0$ , given an additive-approximation algorithm that runs in $T(n, \varepsilon)$ time, we present an algorithm to compute an $\varepsilon$ -close transport cost for distributions inside the $d$ -dimensional unit hypercube $\square^{*}$ in $O(\min\{T(n, \frac{\varepsilon}{2}), n + T((\frac{2\sqrt{d}}{\varepsilon})^d, \frac{\varepsilon}{2})\})$ time.
88
+
89
+ The algorithm works in two steps. In the first step, it constructs a grid $\mathbb{G} := \mathbb{G}(\square^*, \frac{\varepsilon}{2\sqrt{d}})$ on the unit hypercube $\square^*$ and computes a transport plan $\sigma_1$ , as follows. For each non-empty neutral cell of $\mathbb{G}$ , $\sigma_1$ arbitrarily transports all supplies to demands within the cell. Similarly, for any deficit (resp. surplus) cell, $\sigma_1$ arbitrarily transports supplies from (resp. to) all supply (resp. demand) points inside the cell to (resp. from) some arbitrary demand (resp. supply) points within the cell.
90
+
91
+ In the second step, the algorithm constructs a set of demand points $\mathcal{A}$ and a set of supply points $\mathcal{B}$ as follows: For any deficit cell $\square$ (resp. surplus cell $\square'$ ), the point $c_{\square}$ (resp. $c_{\square'}$ ) is added to $\mathcal{A}$ (resp. $\mathcal{B}$ ) with a weight of $\eta(\square)$ (resp. $\eta(\square')$ ). Note that $\mathcal{A} \cup \mathcal{B}$ is a balanced instance for the 1-Wasserstein problem. The algorithm computes an $\frac{\varepsilon}{2}$ -close transport plan $\sigma_2$ on the instance $\mathcal{A} \cup \mathcal{B}$ in $T(|\mathcal{A}| + |\mathcal{B}|, \varepsilon/2)$ time (See Figure 1). The algorithm returns $w(\sigma_1) + w(\sigma_2)$ as an $\varepsilon$ -close transport cost on $A \cup B$ .
92
+
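+ A minimal sketch of this two-step reduction follows, under an assumed black box `additive_solver(instance, err)` (our own name and signature) that returns an err-close transport cost on a list of (point, weight) pairs. As the comments note, the sketch drops $w(\sigma_1)$ , which is at most $(\varepsilon /2)U$ and is exactly what the $\varepsilon /2$ slack accounts for.
+
+ ```python
+ import math
+ from collections import defaultdict
+
+ def geometric_additive(A, B, eps, additive_solver):
+     # A: list of (point, demand) pairs; B: list of (point, supply) pairs in [0,1]^d
+     d = len(A[0][0])
+     side = eps / (2 * math.sqrt(d))          # grid G(unit cube, eps / (2 sqrt(d)))
+     net = defaultdict(float)
+     for a, w in A:
+         net[tuple(int(c // side) for c in a)] -= w
+     for b, w in B:
+         net[tuple(int(c // side) for c in b)] += w
+     # centers of deficit/surplus cells form the balanced instance for sigma_2
+     instance = [(tuple((k + 0.5) * side for k in cell), w)
+                 for cell, w in net.items() if w != 0]
+     # sigma_1 only moves mass within cells of diameter eps/2, so its omitted
+     # cost is at most (eps/2) * U
+     return additive_solver(instance, eps / 2)
+ ```
+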
93
+ We provide a discussion on the accuracy of the algorithm in Appendix A. The following lemma follows from the fact that $|\mathcal{A}| + |\mathcal{B}|$ is bounded by $\min \{2n, (2\sqrt{d} / \varepsilon)^d\}$ .
94
+
95
+ Lemma 2.1. Given two point sets $A$ and $B$ in the $d$ -dimensional unit hypercube and a parameter $\varepsilon > 0$ , an $\varepsilon$ -close transport cost can be computed in $O\left(\min \{T(n, \frac{\varepsilon}{2}), n + T\left(\left(\frac{2\sqrt{d}}{\varepsilon}\right)^d, \frac{\varepsilon}{2}\right)\}\right)$ time.
96
+
97
+ Instead of transporting supplies inside cells arbitrarily, our algorithm in Section 3 recursively applies the same algorithm in each cell and obtains a higher accuracy.
98
+
99
+ # 2.2 RELATIVE APPROXIMATION FOR LOW SPREAD POINT SETS
100
+
101
+ In this section, we show that an $\varepsilon$ -additive-approximation algorithm can be used to obtain a $(1 + \varepsilon)$ -relative-approximation algorithm for the 1-Wasserstein problem that runs in $T(n,\varepsilon / \Delta)$ time; here, $\Delta$ is the spread of the points in $A \cup B$ . Since relative approximations are scale invariant, without loss of generality, we assume that the input has $C_{\max} = 1$ and $C_{\min} = 1 / \Delta$ .
+
+ ![](images/a5ed9494c0eae5743ab2b78bb5c4dcff42e32c7a13c31504791b084c043bf57c.jpg)
+ (a)
+
+ ![](images/9707b983a4c8141e12e09c523f30d795587378f4b903dfd1a1cac9c8b35c22b6.jpg)
+ (b)
+ Figure 1: (a) The algorithm transports supplies (red disks) to demands (blue circles) within each cell and creates an instance by moving any excess supplies or demands to the center of the corresponding cells, (b) an $\varepsilon /2$ -close transport plan is computed on the new problem instance.
111
+
112
+ Suppose $-\eta(A) = \eta(B) = U$ . The minimum distance between any two points $(a, b) \in A \times B$ is $1 / \Delta$ . Therefore, the cost of transporting a supply of $U$ is at least $U / \Delta$ . We obtain a $(1 + \varepsilon)$ -relative approximate transport plan by simply executing an additive-approximation algorithm with an additive error of $U(\varepsilon / \Delta)$ in time $T(n, \varepsilon / \Delta)$ .
113
+
114
+ Lemma 2.2. Given two point sets $A$ and $B$ in the $d$ -dimensional unit hypercube with a spread of $\Delta$ , a $(1 + \varepsilon)$ -approximate transport plan can be computed in $T(n, \varepsilon / \Delta)$ time.
115
+
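+ A sketch of this reduction is given below, reusing the assumed `additive_solver` interface from the sketch in Section 2.1; the quadratic closest/farthest-pair computation is for clarity only.
+
+ ```python
+ import itertools
+ import math
+
+ def relative_via_additive(instance, eps, additive_solver):
+     # instance: list of (point, weight) pairs with weights summing to 0
+     pts = [p for p, _ in instance]
+     dists = [math.dist(p, q) for p, q in itertools.combinations(pts, 2)]
+     c_max = max(dists)
+     c_min = min(x for x in dists if x > 0)
+     spread = c_max / c_min
+     # relative approximation is scale invariant, so rescale to make C_max = 1
+     scaled = [(tuple(c / c_max for c in p), w) for p, w in instance]
+     # every plan moves total supply U over distance >= 1/spread, so an additive
+     # error of (eps/spread) * U amounts to a (1 + eps) multiplicative factor
+     return c_max * additive_solver(scaled, eps / spread)
+ ```
+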
116
+ # 3 AN $O(d\log_{\sqrt{d} / \varepsilon}n)$ -APPROXIMATION ALGORITHM FOR 1-WASSERSTEIN PROBLEM
117
+
118
+ In this section, we present our algorithm that satisfies the bounds presented in Theorem 1.1. We begin by defining a hierarchical partitioning and a tree $T$ associated with the hierarchical partitioning. Each node in $T$ corresponds to a non-empty cell in our hierarchical partition, and we do not distinguish between the two. We partition each cell of side-length $\ell$ into $\lceil 4\sqrt{d} / \varepsilon \rceil^d$ cells of side-length at most $(\varepsilon / 4\sqrt{d})\ell$ . We construct our hierarchical partition in a randomly-shifted fashion as follows.
119
+
120
+ Hierarchical Partitioning: First, we pick a point $\xi$ uniformly at random from the unit hypercube $[0,1]^d$ and set $\square^* = [-1,1]^d + \xi$ . Note that $\square^*$ is a hypercube of side-length 2 containing all points in $A \cup B$ . We designate $\square^*$ as the root of $T$ . Let $\kappa = \lceil 4\sqrt{d} / \varepsilon \rceil$ . For any cell $\square$ , if only one point of $A \cup B$ lies inside $\square$ , we designate $\square$ as a leaf cell in $T$ . For any non-leaf cell $\square$ , we construct its children by partitioning $\square$ , using a grid $\mathbb{G}_{\square} = \mathbb{G}(\square, \ell_{\square} / \kappa)$ , into $\lceil 4\sqrt{d} / \varepsilon \rceil^d$ cells and create a child node for each non-empty cell of this grid. We denote the set of children of a non-leaf cell $\square$ by $\mathsf{C}[\square]$ . Assuming the spread of $A \cup B$ is $n^{O(1)}$ , the height of $T$ , denoted by $h$ , is $O(\log_{\sqrt{d} / \varepsilon} n)$ .
121
+
122
+ Similar to quadtrees, our hierarchical partitioning can be seen as a sequence of grids $\langle \mathbb{G}_0,\mathbb{G}_1,\ldots ,\mathbb{G}_h\rangle$ , where $\mathbb{G}_0$ is the root and each grid $\mathbb{G}_i$ refines the cells of grid $\mathbb{G}_{i - 1}$ . For each grid $\mathbb{G}_i$ , we denote the cell-side-length of $\mathbb{G}_i$ by $\ell_{i}$ . Next, we describe our algorithm.
123
+
124
+ Computing an approximate 1-Wasserstein distance: In this section, we present our algorithm for computing an approximate cost of an optimal transport plan on $A \cup B$ .
125
+
126
+ 1 - Creating an instance at each cell: For each cell $\square$ of $T$ , we create a balanced instance $\mathcal{I}_{\square}$ of the 1-Wasserstein problem as follows. If $\square$ is a leaf cell, then it contains a single point $u$ of $A \cup B$ , which we add to $\mathcal{I}_{\square}$ . In addition, we add the center point $c_{\square}$ with a weight $-\eta(\square)$ to $\mathcal{I}_{\square}$ . Otherwise, $\square$ is a non-leaf cell. For any child $\square' \in \mathsf{C}[\square]$ , if $\square'$ is a deficit or surplus cell, we add $c_{\square'}$ with a weight $\eta(\square')$ to $\mathcal{I}_{\square}$ . The weight $\eta(\square')$ represents the excess demands or supplies of the points in $V_{\square'}$ . Furthermore, we add the center point $c_{\square}$ with a weight $-\eta(\square)$ to $\mathcal{I}_{\square}$ . The instance $\mathcal{I}_{\square}$ is balanced and has a spread of $O(d / \varepsilon)$ .
127
+ 2 - Estimating the 1-Wasserstein distance: In this step, for the root cell $\square^{*}$ , we compute an $\varepsilon / d$ -close transport plan $\sigma_{\square^{*}}$ on $\mathcal{I}_{\square^{*}}$ . Furthermore, for each cell $\square$ at any level $i > 0$ , using the algorithm from Lemma 2.2, we compute a 2-approximate transport plan $\sigma_{\square}$ on $\mathcal{I}_{\square}$ . Our algorithm then reports the total cost of the transport plans computed at all cells of $T$ , i.e., $w := \sum_{\square \in T} w(\sigma_{\square})$ as an approximate 1-Wasserstein distance.
130
+
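+ The following condensed sketch (our illustration) mirrors these two steps, with recursion standing in for the tree $T$ , the random shift of the root omitted, and, deviating slightly from the text for brevity, the 2-approximate plan of `relative_via_additive` (from the sketch in Section 2.2) used at every cell, including the root.
+
+ ```python
+ import math
+
+ def hierarchical_w1(V, eps, additive_solver):
+     # V: list of (point, weight) pairs in [0,1]^d with weights summing to 0
+     # (negative = demand, positive = supply); co-located points already merged.
+     d = len(V[0][0])
+     kappa = max(2, math.ceil(4 * math.sqrt(d) / eps))  # each cell splits into kappa^d children
+
+     def solve(pts, origin, side):
+         if len(pts) <= 1:                     # leaf cell
+             return 0.0
+         sub = side / kappa
+         children = {}
+         for p, w in pts:
+             key = tuple(int((c - o) // sub) for c, o in zip(p, origin))
+             children.setdefault(key, []).append((p, w))
+         cost, instance = 0.0, []
+         for key, child_pts in children.items():
+             child_origin = tuple(o + k * sub for o, k in zip(origin, key))
+             cost += solve(child_pts, child_origin, sub)           # recurse as in T
+             excess = sum(w for _, w in child_pts)
+             if excess != 0:   # deficit/surplus child: its center joins I_cell
+                 instance.append((tuple(o + sub / 2 for o in child_origin), excess))
+         net = sum(w for _, w in instance)
+         if net != 0:          # balance I_cell with the cell's own center
+             instance.append((tuple(o + side / 2 for o in origin), -net))
+         if len(instance) > 1: # 2-approximate plan on the O(d/eps)-spread instance
+             cost += relative_via_additive(instance, 1.0, additive_solver)
+         return cost
+
+     return solve(V, (0.0,) * d, 2.0)
+ ```
+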
131
+ Retrieving an approximate transport plan: We retrieve an approximate transport plan $\sigma$ on the point set $A\cup B$ by processing grids $\langle \mathbb{G}_h,\dots ,\mathbb{G}_0\rangle$ in decreasing order of their level. First, each non-empty cell $\square$ of $\mathbb{G}_h$ is a leaf cell and $V_{\square}$ contains only one point, which we map to $c_{\square}$ . For some $i < h$ , assume (inductively) that after processing the non-empty cells of the grid $\mathbb{G}_{i + 1}$ , the following conditions (i)-(iii) hold for the current transport plan $\sigma$ within any cell $\square$ in $\mathbb{G}_{i + 1}$ : (i) if $\square$ is a neutral cell, then $\sigma$ transports every supply to some demand point inside $\square$ , (ii) if $\square$ is a deficit (resp. surplus) cell, then $\sigma$ transports all supplies (resp. demands) inside $\square$ to (resp. from) some demand (resp. supply) point within $\square$ , and (iii) if $\square$ is a deficit (resp. surplus) cell, the excess demand (resp. supply) is mapped to $c_{\square}$ . Given this, we show how to process any non-empty cell $\square$ of $\mathbb{G}_i$ so that (i)-(iii) hold for $\square$ .
132
+
133
+ Recall that $\sigma_{\square}$ is a transport plan computed by our algorithm on $\mathcal{I}_{\square}$ . By condition (iii), the excess supplies or demands at any child $\square'$ of $\square$ are mapped to $c_{\square'}$ . Therefore, for any pair of children $\square_1, \square_2 \in \mathsf{C}[\square]$ , where $\square_1$ is a surplus cell and $\square_2$ is a deficit cell, the transport plan $\sigma$ transports $\sigma_{\square}(c_{\square_1}, c_{\square_2})$ supplies from $c_{\square_1}$ to $c_{\square_2}$ . In addition, for any child $\square_1$ (resp. $\square_2$ ) of $\square$ , suppose $\square_1$ (resp. $\square_2$ ) is a surplus (resp. deficit) cell. If $\sigma_{\square}(c_{\square_1}, c_{\square}) > 0$ (resp. $\sigma_{\square}(c_{\square_2}, c_{\square}) > 0$ ), then we map the supplies (resp. demands) from $c_{\square_1}$ (resp. $c_{\square_2}$ ) to $c_{\square}$ . It is easy to confirm that after processing $\square$ , (i)-(iii) hold for $\square$ . By the triangle inequality, $w(\sigma)$ is upper-bounded by the total cost of the transport plans computed for each cell of $T$ ; i.e., $w(\sigma) \leq \sum_{\square \in T} w(\sigma_{\square})$ .
134
+
135
+ Efficiency: For any $i$ , let $\mathsf{C}_i$ denote the set of non-empty cells of $T$ at level $i$ . For each cell $\square \in \mathsf{C}_i$ , let $n_{\square}$ be the number of points in $\mathcal{I}_{\square}$ . Since the spread of the points in $\mathcal{I}_{\square}$ is $O(d / \varepsilon)$ , executing the algorithm from Lemma 2.2 on $\mathcal{I}_{\square}$ takes $O(T(n_{\square},\varepsilon /d))$ time (the same bound holds for the root cell as well). Since $\mathcal{I}_{\square}$ contains at most one point for each non-empty child of $\square$ , $n_{\square} \leq \min \{|V_{\square}|, (4\sqrt{d} /\varepsilon)^{d}\}$ . Therefore, $\sum_{\square \in \mathsf{C}_i} n_{\square} \leq \sum_{\square \in \mathsf{C}_i} |V_{\square}| = n$ . Since $T(n,\varepsilon) = \Omega (n)$ , the running time of our algorithm on cells at level $i$ is $O(\sum_{\square \in \mathsf{C}_i} T(n_{\square},\varepsilon /d)) = O(T(n,\varepsilon /d))$ . Summing over all levels, the running time of our algorithm is $O(T(n,\varepsilon /d)\log_{\sqrt{d} /\varepsilon}n)$ .
136
+
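+ For instance, instantiating the black box with the Sinkhorn method, whose running time was quoted in Section 1.2 as $T(n, \varepsilon) = \tilde{O}(n^2 d / \varepsilon^2)$ , the per-level bound follows from $\sum_{\square \in \mathsf{C}_i} n_{\square} \leq n$ together with $\sum_{\square} n_{\square}^2 \leq (\sum_{\square} n_{\square})^2$ for non-negative integers:
+
+ $$\sum_{\square \in \mathsf{C}_i} T\left(n_{\square}, \frac{\varepsilon}{d}\right) = \tilde{O}\left(\frac{d^3}{\varepsilon^2} \sum_{\square \in \mathsf{C}_i} n_{\square}^2\right) \leq \tilde{O}\left(\frac{d^3}{\varepsilon^2} \Big(\sum_{\square \in \mathsf{C}_i} n_{\square}\Big)^{2}\right) = \tilde{O}\left(\frac{n^2 d^3}{\varepsilon^2}\right) = T\left(n, \frac{\varepsilon}{d}\right).$$
+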
137
+ When the dimension is a small constant, we get an improved running time as follows. For each level $i$ of $T$ , there are at most $n$ non-empty cells at level $i$ and the instance created at each cell has a size of at most $(4\sqrt{d} /\varepsilon)^{d}$ . Therefore, $O(\sum_{\square \in \mathsf{C}_i}T(n_\square ,\varepsilon /d)) = O(nT((4\sqrt{d} /\varepsilon)^d,\varepsilon /d)) = n(\frac{d}{\varepsilon})^{O(d)}$ and the overall running time will be improved to $n(\frac{d}{\varepsilon})^{O(d)}\log_{\sqrt{d} /\varepsilon}n$ .
138
+
139
+ Quality of Approximation: In this part, we analyze the approximate 1-Wasserstein distance computed by our algorithm. First, we show that the reported cost is $\varepsilon$ -close. For the root cell $\square^{*}$ , our algorithm computes an $\varepsilon / d$ -close transport plan $\sigma_{\square^{*}}$ . The remaining demands and supplies are recursively transported within the children of $\square^{*}$ , where each child has diameter $\varepsilon / 2$ . Therefore, similar to Section 2.1, we can argue that our algorithm reports an $\varepsilon$ -close 1-Wasserstein distance. Next, we show that the reported cost is an $O(d \log_{\sqrt{d} / \varepsilon} n)$ -approximation of the 1-Wasserstein distance.
140
+
141
+ For each level $i < h$ , we show that the expected cost of the transport plans computed for all cells of level $i$ is $\mathbb{E}\left[\sum_{\square \in \mathsf{C}_i} w(\sigma_\square)\right] = O(d)w(\sigma^*)$ . Here, $\sigma^*$ is an optimal transport plan on $A \cup B$ , $\mathsf{C}_i$ denotes the set of non-empty cells of $T$ at level $i$ , and the expectation is over the choice of the random shift of the hierarchical partitioning. We bound $\mathbb{E}\left[\sum_{\square \in \mathsf{C}_i} w(\sigma_\square)\right]$ in two steps as follows. In the first step, we assign a budget to every edge $(a,b)$ with $\sigma^*(a,b) > 0$ and show that the total budget assigned to all such edges is (in expectation) $O(d)w(\sigma^*)$ . In the second step, we redistribute this budget to the cells of level $i$ in a way that the budget received by any cell $\square$ is at least $w(\sigma_\square)/2$ . Summing over all $O(\log_{\sqrt{d}/\varepsilon} n)$ levels of $T$ , the expected value of the total cost computed at all levels $i < h$ is $O(d\log_{\sqrt{d}/\varepsilon} n)w(\sigma^*)$ .
142
+
143
+ Additionally, we show that the expected cost of mapping all points to the centers of the cells of level $h$ is $O(d\log_{\sqrt{d} /\varepsilon}n)w(\sigma^{*})$ . Details of each step are provided in Appendix B. Theorem 1.1 follows from combining the two bounds.
144
+
145
+ # 4 AN $O(d\log \log n)$ -APPROXIMATION ALGORITHM FOR EBM PROBLEM
146
+
147
+ Suppose $A$ and $B$ are two sets of $n$ points inside the $d$ -dimensional unit hypercube, where each point $a \in A$ (resp. $b \in B$ ) has a weight $\eta(a) = -1/n$ (resp. $\eta(b) = 1/n$ ). In this section, we present an approximation algorithm for the EBM problem satisfying the bounds claimed in Theorem 1.2. Note that by invoking Lemma 2.1 on $A \cup B$ , one can compute an $\varepsilon$ -close transport cost on $A \cup B$ . To boost the accuracy of the algorithm of Lemma 2.1, in this section, we present an $O(d\log\log n)$ -approximation algorithm for the EBM problem. To satisfy the bounds claimed in Theorem 1.2, one can then report the minimum of the costs computed by the two algorithms.
148
+
149
+ Input transformation: We transform the input points such that (1) all coordinates are positive integers bounded by $n^{O(1)}$ , (2) an optimal matching on the transformed points is a $(1 + \varepsilon)$ -approximate matching with respect to the original points, and (3) the cost of the optimal matching is $O(d^{3/2}n\log n / \varepsilon)$ . Similar transformations have been applied in several papers in the literature (Agarwal et al. (2017); Lahn & Raghvendra (2021)). We describe this transformation in Appendix C.1. As before, we can match and remove any co-located points $a \in A$ and $b \in B$ .
150
+
151
+ Next, assuming $T(n,\varepsilon) = O\left(\frac{n^k}{\varepsilon}\mathrm{poly}(d,\log n,\log \frac{1}{\varepsilon})\right)$ for some $k \geq 1$ , we describe our EBM algorithm. Our algorithm is easily adaptable to use any additive-approximation algorithm with a running time of $T(n,\varepsilon) = O(n^{k}\varepsilon^{-t}\mathrm{poly}(d,\log n,\log \frac{1}{\varepsilon}))$ , where $k \geq 1$ and $t$ is a fixed constant.
152
+
153
+ Overview of the algorithm: Similar to Section 3, our EBM algorithm constructs a hierarchical partitioning and the associated tree $T$ , and executes the algorithm from Lemma 2.2 on the instance created for each cell of $T$ . In contrast to Section 3, the tree $T$ constructed by our EBM algorithm has height $O(\log \log n)$ , resulting in an improved approximation factor. The hierarchical partitioning of this section differs from the one in Section 3 in two ways. First, we partition the root cell into a grid $\mathbb{G}_1$ with cell-side-length of $\Theta(d^{5/2}n\log n/\varepsilon^2)$ at the first level. The grid $\mathbb{G}_1$ may result in a high branching factor at the root; however, we show that, with probability at least $1 - \varepsilon/\sqrt{d}$ , no edges of an optimal matching will cross $\mathbb{G}_1$ . Therefore, with that probability, all cells of $\mathbb{G}_1$ are neutral cells and the problem instance for the root is an empty instance; i.e., the branching factor of the root will not impact the running time of our algorithm. Second, for any cell $\square$ of level $i$ , instead of splitting $\square$ into a fixed number $\min\{n, (4\sqrt{d}/\varepsilon)^d\}$ of children, we divide $\square$ into $\min\{n, n^{d/2^i}\}$ cells. Although this results in a spread of $\tilde{O}(n^{1/2^i})$ for the problem instance $\mathcal{I}_\square$ , we show that the expected number of remaining unmatched points over all cells of level $i$ is $\tilde{O}(n^{1-1/2^i})$ . Therefore, the total execution time of our algorithm remains $T(n, \varepsilon/d)$ per level. These modifications result in a tree $T$ of height $O(\log \log n)$ . We describe the details below.
154
+
155
+ Hierarchical Partitioning: Define $\delta \coloneqq 5d^{2}\varepsilon^{-1}\log n$ . Similar to Section 3, we define a cell $\square^{*}$ as a randomly-shifted hypercube that contains all points of $A\cup B$ and has a side-length of $2\max \{C_{\max},\ell_1 / \varepsilon \}$ , where $\ell_1 = \frac{\sqrt{d}}{\varepsilon}\delta n$ . We designate $\square^{*}$ as the root of $T$ ( $\square^{*}$ is at level 0 of $T$ ). Define a grid $\mathbb{G}_1\coloneqq \mathbb{G}(\square^*,\ell_1)$ . We add each non-empty cell of $\mathbb{G}_1$ to the tree as the children of $\square^{*}$ . We construct the hierarchical partitioning in a recursive fashion as follows. For any non-root cell $\square$ of $T$ , if $\square$ contains only one point of $A\cup B$ , then we designate $\square$ as a leaf cell. Otherwise, let $\square$ be a cell of level $i$ . Define the grid $\mathbb{G}_{\square} = \mathbb{G}(\square,\delta n^{1 / 2^i})$ and add the non-empty cells of $\mathbb{G}_{\square}$ to $T$ as the children of $\square$ . For simplicity in presentation, we assume $n^{1 / 2^i}$ is an integer. For any cell $\square$ , denote the set of children of $\square$ in $T$ by $\mathsf{C}[\square]$ . The height of $T$ , denoted by $h$ , is $O(\log \log n)$ .
156
+
157
+ Similar to Section 3, our hierarchical partitioning is also a sequence of grids $\langle \mathbb{G}_0,\mathbb{G}_1,\ldots ,\mathbb{G}_h\rangle$ where $\mathbb{G}_0$ is the root cell $\square^{*}$ , $\mathbb{G}_1$ has a cell-side-length of $\ell_1 = \frac{\sqrt{d}}{\varepsilon}\delta n$ , and for each $2\leq i\leq h$ , the cell-side-length of $\mathbb{G}_i$ is $\ell_{i} = \delta n^{1 / 2^{i - 1}}$ .
158
+
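+ To make the shrinking cell sizes concrete, the small helper below (illustrative only, not part of the algorithm) reproduces the side-lengths $\ell_1, \ell_2, \ldots$ and shows why only $O(\log \log n)$ levels arise: $n^{1 / 2^i}$ approaches 1 doubly exponentially.
+
+ ```python
+ import math
+
+ def ebm_grid_side_lengths(n, d, eps):
+     delta = 5 * d * d * math.log(n) / eps            # delta = 5 d^2 eps^-1 log n
+     sides = [(math.sqrt(d) / eps) * delta * n]       # l_1: w.h.p. no optimal edge crosses G_1
+     i = 2
+     while n ** (1.0 / 2 ** (i - 1)) >= 2.0:          # stop once n^(1/2^(i-1)) is O(1)
+         sides.append(delta * n ** (1.0 / 2 ** (i - 1)))   # l_i = delta * n^(1/2^(i-1))
+         i += 1
+     return sides
+
+ # e.g. ebm_grid_side_lengths(10**6, 6, 0.1) produces only 5 grid side-lengths
+ ```
+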
159
+ Computing an approximate matching cost: To estimate the matching cost, similar to Section 3, our algorithm creates an instance of the 1-Wasserstein problem for each cell of the tree $T$ . Using the algorithm from Lemma 2.2, our algorithm computes a 2-approximate transport plan for the instance created for each cell and returns the total cost of such transport plans as an approximate matching cost. This completes the description of the algorithm.
160
+
161
+ We describe the details of retrieving a matching in Appendix C.2, the quality of approximation in Appendix C.3, and the efficiency of our algorithm in Appendix C.4.
162
+
163
+ ![](images/923c28fb981be1509e0b8402bd360f6d8893737e37fe6f2fef0e1b5b55faacc2.jpg)
+ (a) 15D: $n$-time
+
+ ![](images/418b8294fa89e19f5f5dc54878cf5e007e33365f1251036f972afffd3e629380.jpg)
+ (b) Real: $n$-time
+
+ ![](images/bb092b6d168468cd40a0e95ff68529cfbb13e7cef7116dc48b660c93e944f511.jpg)
+ (c) 15D: $n$-time
+
+ ![](images/a61141a46d18ec42a36a10f96ef7687d1a0d47671b2a84facbc7e0a26074f86b.jpg)
+ (d) Real: $n$-time
+
+ ![](images/a8391c3f53915c3ed4e0ef61655237d30e197a5bb839dbc31a2b8cf232397e5e.jpg)
+ (e) 15D: $n$-cost
+
+ ![](images/040d9fa00c76880354173979fadc5b8e94f3b936647e9a77714914abd239adb2.jpg)
+ (f) 15D: $n$-time
+
+ ![](images/80254cf810c551d8862f6182b3beff04d26661b19e6a9a002a0f695aabb42027.jpg)
+ (g) Real: $n$-cost
+
+ ![](images/226609ae66213d6ce613a72b447787bea5b847cbfe4b6417047ad2b8697cea0c.jpg)
+ (h) Real: $n$-time
+
+ Figure 2: (a) and (b) Comparison with the Sinkhorn algorithm, (c) and (d) Comparison with the LMR algorithm, and (e)-(h) Comparison with the Geometric-Additive algorithm.
187
+
188
+ # 5 EXPERIMENTS
189
+
190
+ In this section, we conduct experiments to show that our algorithms from Section 3 and Section 4 improve the accuracy of the additive approximation algorithms. We test an implementation of our algorithm, written in Python, on discrete probability distributions derived from real-world and synthetic data sets. All tests are executed on a computer with a $2.50\mathrm{GHz}$ Intel Core i7 processor and 8GB of RAM using a single computation thread.
191
+
192
+ Datasets: We test our algorithms on two sets of $n$ samples taken from a synthetic distribution and from real-world data sets. For each dataset, we use our algorithms to compute the minimum-cost matching between these samples and present our results averaged over 10 executions. To generate the synthetic data, we sample from a uniform distribution inside a 2-dimensional unit square placed on a random plane in 15-dimensional space (15D). For a real-world dataset, we use the Adult Census Data (UCI repository), which is a point cloud in $\mathbb{R}^6$ with continuous features for 35,000 individuals, divided into two categories by income (Dua & Graff (2017)). See Appendix E for the results of our experiments on additional datasets.
193
+
194
+ Results: In our first experiment, we compare our algorithm with existing additive approximation schemes, namely the Sinkhorn method (Cuturi (2013)) (resp. the LMR algorithm (Lahn et al. (2019))). We use the Sinkhorn (resp. LMR) algorithm as a black-box within our algorithm from Section 3 (1-Wasserstein algorithm) and compare its execution time with the standard Sinkhorn (resp. LMR) implementation. We set the parameters of the Sinkhorn and LMR algorithms in such a way that the error produced by them matches the error produced by our 1-Wasserstein algorithm. As shown in Figure 2 (a)-(d), our algorithm runs significantly faster than both the Sinkhorn and the LMR algorithms while producing solutions of similar quality. In our second experiment, we also compare the accuracy as well as the computation time of our algorithms (1-Wasserstein algorithm and our EBM algorithm from Section 4) with the additive approximation algorithm from Section 2.1 (Geometric-Additive algorithm). For this experiment, our algorithms use the Sinkhorn algorithm as a black-box. The results of this experiment are shown in Figure 2 (e)-(h). We observe that our algorithms achieve better accuracy (see Figure 2 (e) and (g)), especially on real-world data sets. As expected, the execution time of our algorithms increases slightly (see Figure 2 (f) and (h)). Furthermore, we observe that as the sample size increases, the costs returned by our algorithms converge to the optimal cost, whereas the Geometric-Additive approach does not.
195
+
196
+ Finally, we highlight the scalability of our algorithms. For an input of 3 million points drawn from a 2-dimensional uniform distribution on the unit square, our 1-Wasserstein algorithm runs in 593 seconds and computes an approximate transport cost of 0.0093. Furthermore, for an input of 1.5 million points from the 15D dataset, our 1-Wasserstein algorithm computes an approximate cost of 0.0304 in 608 seconds.
197
+
198
+ # ACKNOWLEDGEMENT
199
+
200
+ We would like to acknowledge Advanced Research Computing (ARC) at Virginia Tech, which provided us with the computational resources used to run the experiments. Research presented in this paper was funded by NSF grants CCF-1909171, CCF-2223871, IIS-1814493, CCF-2007556, and CCF-2223870. We would like to thank the anonymous reviewers for their useful feedback.
201
+
202
+ # REFERENCES
203
+
204
+ Pankaj K. Agarwal and R. Sharathkumar. Approximation algorithms for bipartite matching with metric and geometric costs. In Proceedings of the forty-sixth annual ACM symposium on Theory of computing, pp. 555-564, 2014.
205
+ Pankaj K. Agarwal and Kasturi R. Varadarajan. A near-linear constant-factor approximation for Euclidean bipartite matching? In Proceedings of the 20th annual symposium on Computational geometry, pp. 247-252, 2004.
206
+ Pankaj K. Agarwal, Kyle Fox, Debmalya Panigrahi, Kasturi R. Varadarajan, and Allen Xiao. Faster algorithms for the geometric transportation problem. In Proc. 33rd International Symposium on Computational Geometry, pp. 7:1-7:16, 2017.
207
+ Pankaj K Agarwal, Hsien-Chih Chang, Sharath Raghvendra, and Allen Xiao. Deterministic, near-linear $\varepsilon$ -approximation algorithm for geometric bipartite matching. In Proceedings of the 54th Annual ACM SIGACT Symposium on Theory of Computing, pp. 1052-1065, 2022a.
208
+ Pankaj K Agarwal, Sharath Raghvendra, Pouyan Shirzadian, and Rachita Sowle. An improved $\varepsilon$ -approximation algorithm for geometric bipartite matching. In 18th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2022). Schloss Dagstuhl-Leibniz-Zentrum für Informatik, 2022b.
209
+ Jason Altschuler, Jonathan Niles-Weed, and Philippe Rigollet. Near-linear time approximation algorithms for optimal transport via Sinkhorn iteration. Advances in Neural Information Processing Systems, 30, 2017.
210
+ Jason Altschuler, Francis Bach, Alessandro Rudi, and Jonathan Niles-Weed. Massively scalable Sinkhorn distances via the Nyström method. Advances in Neural Information Processing Systems, 32, 2019.
211
+ Alexandr Andoni, Piotr Indyk, and Robert Krauthgamer. Earth mover distance over high-dimensional spaces. In SODA, volume 8, pp. 343-352, 2008.
212
+ Alexandr Andoni, Piotr Indyk, Huy L Nguyen, and Ilya Razenshteyn. Beyond locality-sensitive hashing. In Proceedings of the twenty-fifth annual ACM-SIAM symposium on Discrete algorithms, pp. 1018-1028. SIAM, 2014.
213
+ Arturs Backurs, Yihe Dong, Piotr Indyk, Ilya Razenshteyn, and Tal Wagner. Scalable nearest neighbor search for optimal transport. In International Conference on Machine Learning, pp. 497-506, 2020.
214
+ Espen Bernton, Pierre E. Jacob, Mathieu Gerber, and Christian P. Robert. On parameter estimation with the Wasserstein distance. Information and Inference: A Journal of the IMA, 8(4):657-676, 2019.
215
+ Jose Blanchet, Arun Jambulapati, Carson Kent, and Aaron Sidford. Towards optimal running times for optimal transport. arXiv preprint arXiv:1810.07717, 2018.
216
+ Xi Chen, Rajesh Jayaram, Amit Levi, and Erik Waingarten. New streaming algorithms for high dimensional emd and mst. In Proceedings of the 54th Annual ACM SIGACT Symposium on Theory of Computing, pp. 222-233, 2022.
217
+ Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In Advances in Neural Information Processing Systems 26, pp. 2292-2300, 2013.
218
+
219
+ Ishan Deshpande, Ziyu Zhang, and Alexander G Schwing. Generative modeling using the sliced Wasserstein distance. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3483-3491, 2018.
220
+ Dheeru Dua and Casey Graff. UCI machine learning repository, 2017. URL http://archive.ics.uci.edu/ml.
221
+ Pavel Dvurechensky, Alexander Gasnikov, and Alexey Kroshnin. Computational optimal transport: Complexity by accelerated gradient descent is better than by Sinkhorn's algorithm. In International Conference on Machine Learning, pp. 1367-1376. PMLR, 2018.
222
+ Jack Edmonds and Richard M Karp. Theoretical improvements in algorithmic efficiency for network flow problems. Journal of the ACM (JACM), 19(2):248-264, 1972.
223
+ Peyman Mohajerin Esfahani and Daniel Kuhn. Data-driven distributionally robust optimization using the Wasserstein metric: Performance guarantees and tractable reformulations. Mathematical Programming, 171(1):115-166, 2018.
224
+ Kyle Fox and Jiashuai Lu. A near-linear time approximation scheme for geometric transportation with arbitrary supplies and spread. In Proc. 36th Annual Symposium on Computational Geometry, pp. 45:1-45:18, 2020.
225
+ H. N. Gabow and R.E. Tarjan. Faster scaling algorithms for network problems. SIAM J. Comput., 18: 1013-1036, October 1989. ISSN 0097-5397. doi: 10.1137/0218069. URL http://portal.acm.org/citation.cfm?id=75795.75806.
226
+ Aude Genevay, Gabriel Peyré, and Marco Cuturi. Learning generative models with Sinkhorn divergences. In International Conference on Artificial Intelligence and Statistics, pp. 1608-1617, 2018.
227
+ Wenshuo Guo, Nhat Ho, and Michael Jordan. Fast algorithms for computational optimal transport and Wasserstein barycenter. In International Conference on Artificial Intelligence and Statistics, pp. 2088-2097, 2020.
228
+ Rishi Gupta, Piotr Indyk, and Eric Price. Sparse recovery for earth mover distance. In 2010 48th Annual Allerton Conference on Communication, Control, and Computing (Allerton), pp. 1742-1744. IEEE, 2010.
229
+ P. Indyk. A near linear time constant factor approximation for Euclidean bichromatic matching (cost). In Proceedings of the 18th Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 39-42, 2007.
230
+ Arun Jambulapati, Aaron Sidford, and Kevin Tian. A direct $\tilde{O}(1/\epsilon)$ iteration parallel algorithm for optimal transport. arXiv preprint arXiv:1906.00618, 2019.
231
+ Hicham Janati, Marco Cuturi, and Alexandre Gramfort. Wasserstein regularization for sparse multi-task regression. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 1407-1416. PMLR, 2019.
232
+ Andrey Boris Khesin, Aleksandar Nikolov, and Dmitry Paramonov. Preconditioning for the geometric transportation problem. arXiv preprint arXiv:1902.08384, 2019.
233
+ Nathaniel Lahn and Sharath Raghvendra. A faster algorithm for minimum-cost bipartite matching in minor-free graphs. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 569-588. SIAM, 2019.
234
+ Nathaniel Lahn and Sharath Raghvendra. An $O(n^{5/4})$ time $\varepsilon$ -approximation algorithm for RMS matching in a plane. In Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms, pp. 869-888, 2021.
235
+ Nathaniel Lahn, Deepika Mulchandani, and Sharath Raghvendra. A graph theoretic additive approximation of optimal transport. In Advances in Neural Information Processing Systems 32, pp. 13813-13823, 2019.
236
+
237
+ Tianyi Lin, Nhat Ho, and Michael I. Jordan. On the efficiency of the Sinkhorn and Greenkhorn algorithms and their acceleration for optimal transport. arXiv preprint arXiv:1906.01437, 2019.
238
+ Huidong Liu, Xianfeng Gu, and Dimitris Samaras. A two-step computation of the exact GAN Wasserstein distance. In International Conference on Machine Learning, pp. 3159-3168, 2018.
239
+ Giulia Luise, Alessandro Rudi, Massimiliano Pontil, and Carlo Ciliberto. Differential properties of Sinkhorn approximation for learning with Wasserstein distance. Advances in Neural Information Processing Systems, 31, 2018.
240
+ James Orlin. A faster strongly polynomial minimum cost flow algorithm. In Proceedings of the Twentieth annual ACM symposium on Theory of Computing, pp. 377-387, 1988.
241
+ Kent Quanrud. Approximating optimal transport with linear programs. arXiv preprint arXiv:1810.05957, 2018.
242
+ Sharath Raghvendra and Pankaj K. Agarwal. A near-linear time $\varepsilon$ -approximation algorithm for geometric bipartite matching. J. ACM, 67(3):18:1-18:19, 2020. doi: 10.1145/3393694. URL https://doi.org/10.1145/3393694.
243
+ Tim Salimans, Han Zhang, Alec Radford, and Dimitris Metaxas. Improving gans using optimal transport. In International Conference on Learning Representations, 2018.
244
+ Jonah Sherman. Generalized preconditioning and undirected minimum-cost flow. In Proceedings of the 28th Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 772-780, 2017.
2023/A Higher Precision Algorithm for Computing the $1$-Wasserstein Distance/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8253b7f40eae82cad34d4cb446c1efe1eec54cd64e72b7f319ff6b7432a18eeb
3
+ size 73209
2023/A Higher Precision Algorithm for Computing the $1$-Wasserstein Distance/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Holistic View of Label Noise Transition Matrix in Deep Learning and Beyond/d1222a18-647e-4fda-98fe-6739b4983f8b_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Holistic View of Label Noise Transition Matrix in Deep Learning and Beyond/d1222a18-647e-4fda-98fe-6739b4983f8b_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Holistic View of Label Noise Transition Matrix in Deep Learning and Beyond/d1222a18-647e-4fda-98fe-6739b4983f8b_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:260b34acc1ae9e1570542e96bb26c2c79355ede0159b01601e757a229d941e2b
3
+ size 2513568
2023/A Holistic View of Label Noise Transition Matrix in Deep Learning and Beyond/full.md ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Holistic View of Label Noise Transition Matrix in Deep Learning and Beyond/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c2de19a1fb59cb2ccd3cdbf450d0c7ae91e7dde36e15696ae3fb22c12086ef3f
3
+ size 1078211
2023/A Holistic View of Label Noise Transition Matrix in Deep Learning and Beyond/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Laplace-inspired Distribution on SO(3) for Probabilistic Rotation Estimation/35030d36-496a-4051-9f1b-c6eb641c8ab4_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Laplace-inspired Distribution on SO(3) for Probabilistic Rotation Estimation/35030d36-496a-4051-9f1b-c6eb641c8ab4_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Laplace-inspired Distribution on SO(3) for Probabilistic Rotation Estimation/35030d36-496a-4051-9f1b-c6eb641c8ab4_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f53533d83cf76f5e23fa866e90d2d87a9ce4cce04ed714634d69fc4d29ebcc25
3
+ size 11342612
2023/A Laplace-inspired Distribution on SO(3) for Probabilistic Rotation Estimation/full.md ADDED
@@ -0,0 +1,694 @@
 
 
 
 
1
+ # A LAPLACE-INSPIRED DISTRIBUTION ON SO(3) FOR PROBABILISTIC ROTATION ESTIMATION
2
+
3
+ Yingda Yin
4
+
5
+ Yang Wang
6
+
7
+ He Wang†
8
+
9
+ Baoquan Chen
10
+
11
+ Peking University
12
+
13
+ # ABSTRACT
14
+
15
+ Estimating the 3DoF rotation from a single RGB image is an important yet challenging problem. Probabilistic rotation regression has attracted more and more attention with the benefit of expressing uncertainty information along with the prediction. Though modeling noise using the Gaussian-resembling Bingham distribution and matrix Fisher distribution is natural, they are shown to be sensitive to outliers because they penalize deviations quadratically. In this paper, we draw inspiration from the multivariate Laplace distribution and propose a novel Rotation Laplace distribution on SO(3). Rotation Laplace distribution is robust to the disturbance of outliers and concentrates most of its gradient on the low-error region, resulting in better convergence. Our extensive experiments show that our proposed distribution achieves state-of-the-art performance for rotation regression tasks over both probabilistic and non-probabilistic baselines. Our project page is at pku-epic.github.io/RotationLaplace.
16
+
17
+ # 1 INTRODUCTION
18
+
19
+ Incorporating neural networks to perform rotation regression is of great importance in the fields of computer vision, computer graphics and robotics (Wang et al., 2019b; Yin et al., 2022; Dong et al., 2021; Breyer et al., 2021). To close the gap between the SO(3) manifold and the Euclidean space where neural network outputs live, one popular line of research develops learning-friendly rotation representations, including the 6D continuous representation (Zhou et al., 2019), the 9D matrix representation with SVD orthogonalization (Levinson et al., 2020), etc. Recently, Chen et al. (2022) focus on the gradient backpropagation process and replace the vanilla auto-differentiation with an SO(3) manifold-aware gradient layer, which sets the new state of the art in rotation regression tasks.
20
+
21
+ Reasoning about the uncertainty information along with the predicted rotation is also attracting more and more attention, as it enables many applications in aerospace (Crassidis & Markley, 2003), autonomous driving (McAllister et al., 2017) and localization (Fang et al., 2020). On this front, recent efforts have been devoted to modeling the uncertainty of rotation regression via probabilistic modeling of the rotation space. The most commonly used distributions are Bingham distribution (Bingham, 1974) on $S^3$ for unit quaternions and matrix Fisher distribution (Khatri & Mardia, 1977) on SO(3) for rotation matrices. These two distributions are equivalent to each other (Prentice, 1986) and resemble the Gaussian distribution in Euclidean space (Bingham, 1974; Khatri & Mardia, 1977). While modeling noise using Gaussian-like distributions is well motivated by the Central Limit Theorem, Gaussian distribution is well known to be sensitive to outliers in probabilistic regression models (Murphy, 2012). This is because Gaussian distribution penalizes deviations quadratically, so predictions with larger errors weigh much more heavily in learning than low-error ones, potentially resulting in suboptimal convergence when a certain number of outliers are present.
22
+
23
+ Unfortunately, in certain rotation regression tasks, we fairly often come across large prediction errors, e.g., $180^{\circ}$ errors, due to either the (near-)symmetric nature of the objects or severe occlusions (Murphy et al., 2021). In Fig. 1 (left), using training on single-image rotation regression as an example, we show the statistics of predictions after convergence, assuming matrix Fisher distribution (as done in Mohlin et al. (2020)). The blue histogram shows the population with different prediction errors and the red dots show the impact of these predictions on learning, evaluated
24
+
25
+ ![](images/a85461210e339fe9749e46b5017a0aebca61ff2fe727aa14d9d296ea32848159.jpg)
26
+ Figure 1: Visualization of the results of matrix Fisher distribution and Rotation Laplace distribution after convergence. The horizontal axis is the geodesic distance between the prediction and the ground truth. The blue bins count the number of data points within the corresponding errors ( $2^{\circ}$ per bin). The red dots illustrate the percentage of the sum of the gradient magnitude $\| \partial \mathcal{L} / \partial (\mathrm{dist. param.})\|$ within each bin. The experiment is done on all categories of the ModelNet10-SO3 dataset.
27
+
28
+ ![](images/48464f3615a5ac6aa4a7e22e72006f234b7d82c0e294bd0dbc371a3cbfc47ddb.jpg)
29
+
30
+ by computing the sum of their gradient magnitudes $\| \partial \mathcal{L} / \partial (\text{distribution param.})\|$ within each bin and then normalizing them across bins. It is clear that the $180^{\circ}$ outliers dominate the gradient as well as the network training even though their population is tiny, while the vast majority of points with low-error predictions are deprioritized. Arguably, at convergence, the gradient should focus more on refining the low errors rather than on fixing the inevitable large errors (e.g., those arising from symmetry). This motivates us to find a better probabilistic model for rotation.
31
+
32
+ As pointed out by Murphy (2012), Laplace distribution, with heavy tails, is a better option for robust probabilistic modeling. Laplace distribution drops sharply around its mode and thus allocates most of its probability density to a small region around the mode; meanwhile, it also tolerates and assigns higher likelihoods to outliers, compared to Gaussian distribution. Consequently, it encourages predictions near its mode to be even closer, and thus fits sparse data well, where most data points are close to the mean with the exception of several outliers (Mitianoudis, 2012), which makes Laplace distribution favored in the context of deep learning (Goodfellow et al., 2016).
33
+
34
+ In this work, we propose a novel Laplace-inspired distribution on $\mathrm{SO}(3)$ for rotation matrices, namely Rotation Laplace distribution, for probabilistic rotation regression. We devise Rotation Laplace distribution to be an approximation of the multivariate Laplace distribution in the tangent space of its mode. As shown in the visualization in Fig. 1 (right), our Rotation Laplace distribution is robust to the disturbance of outliers, with most of its gradient contributed by the low-error region, and thus leads to better convergence along with significantly higher accuracy. Moreover, our Rotation Laplace distribution is simply parameterized by an unconstrained $3\times 3$ matrix and thus accommodates the Euclidean output of neural networks with ease. This network-friendly distribution requires neither complex functions to fulfill the constraints of the parameterization nor any normalization process from the Euclidean space to the rotation manifold, which has been shown to be harmful for learning (Chen et al., 2022).
35
+
36
+ For completeness of the derivations, we also propose the Laplace-inspired distribution on $S^3$ for quaternions. We show that Rotation Laplace distribution is equivalent to Quaternion Laplace distribution, similar to the equivalence of matrix Fisher distribution and Bingham distribution.
37
+
38
+ We extensively compare our Rotation Laplace distribution to methods that parameterize distributions on $\mathrm{SO}(3)$ for pose estimation, and also to non-probabilistic approaches including multiple rotation representations and the recent $\mathrm{SO}(3)$ -aware gradient layer (Chen et al., 2022). On common benchmark datasets of rotation estimation from RGB images, we achieve a significant and consistent performance improvement over all baselines.
39
+
40
+ # 2 RELATED WORK
41
+
42
+ Probabilistic regression Nix & Weigend (1994) first propose to model the output of the neural network as a Gaussian distribution and to learn the Gaussian parameters by the negative log-likelihood loss function, through which one obtains not only the target but also a measure of prediction uncertainty. More recently, Kendall & Gal (2017) offer more understanding and analysis of the underlying uncertainties. Lakshminarayanan et al. (2017) further improve the performance of uncertainty estimation by network ensembling and adversarial training. Makansi et al. (2019) stabilize the training with the winner-takes-all and iterative grouping strategies. Probabilistic regression for
43
+
44
+ uncertainty prediction has been widely used in various applications, including optical flow estimation (Ilg et al., 2018), depth estimation (Poggi et al., 2020), weather forecasting (Wang et al., 2019a), etc.
45
+
46
+ Across decades of literature, the majority of probabilistic regression works model the network output by a Gaussian-like distribution, while Laplace distribution is less explored. Li et al. (2021) empirically find that assuming a Laplace distribution in the process of maximum likelihood estimation yields better performance than a Gaussian distribution in the field of 3D human pose estimation. Recent work (Nair et al., 2022) makes use of Laplace distribution to improve the robustness of maximum likelihood-based uncertainty estimation. Due to the heavy-tailed property of Laplace distribution, outlier data produce comparatively smaller loss and have an insubstantial impact on training. Beyond Euclidean space, Mitianoudis (2012) develops a Generalized Directional Laplacian distribution on $S^d$ for the application of audio separation.
47
+
48
+ Probabilistic rotation regression Several works focus on utilizing probability distributions on the rotation manifold for rotation uncertainty estimation. Prokudin et al. (2018) use a mixture of von Mises distributions (Mardia et al., 2000) over Euler angles using Biternion networks. In Gilitschenski et al. (2019) and Deng et al. (2022), Bingham distribution over unit quaternions is used to jointly estimate a probability distribution over all axes. Mohlin et al. (2020) leverage matrix Fisher distribution (Khatri & Mardia, 1977) on $\mathrm{SO}(3)$ over rotation matrices for deep rotation regression. Though both bear similar properties to Gaussian distribution in Euclidean space, matrix Fisher distribution benefits from the continuous rotation representation and unconstrained distribution parameters, which yields better performance (Murphy et al., 2021). Recently, Murphy et al. (2021) introduce a non-parametric implicit pdf over $\mathrm{SO}(3)$ , with the distribution properties modeled by the neural network parameters. Implicit-PDF is particularly good at modeling rotations of symmetric objects.
49
+
50
+ Non-probabilistic rotation regression The choice of rotation representation is one of the core issues concerning rotation regression. The commonly used representations include Euler angles (Kundu et al., 2018; Tulsiani & Malik, 2015), unit quaternions (Kendall & Cipolla, 2017; Kendall et al., 2015; Xiang et al., 2017) and axis-angle (Do et al., 2018; Gao et al., 2018; Ummenhofer et al., 2017), etc. However, Euler angles may suffer from gimbal lock, and unit quaternions doubly cover the group SO(3), which leads to two disconnected local minima. Moreover, Zhou et al. (2019) point out that all representations in real Euclidean spaces of four or fewer dimensions are discontinuous and are not friendly for deep learning. To this end, the continuous 6D representation with Gram-Schmidt orthogonalization (Zhou et al., 2019) and the 9D representation with SVD orthogonalization (Levinson et al., 2020) have been proposed, respectively. More recently, Chen et al. (2022) investigate the gradient computation in the backward pass and propose an SO(3) manifold-aware gradient layer.
51
+
52
+ # 3 REVISIT MATRIX FISHER DISTRIBUTION
53
+
54
+ # 3.1 MATRIX FISHER DISTRIBUTION
55
+
56
+ Matrix Fisher distribution (or von Mises-Fisher matrix distribution) (Khatri & Mardia, 1977) is one of the widely used distributions for probabilistic modeling of rotation matrices.
57
+
58
+ Definition 1. Matrix Fisher distribution. The random variable $\mathbf{R} \in \mathrm{SO}(3)$ follows matrix Fisher distribution with parameter $\mathbf{A}$ , if its probability density function is defined as
59
+
60
+ $$
61
+ p (\mathbf {R}; \mathbf {A}) = \frac {1}{F (\mathbf {A})} \exp \left(\operatorname {t r} \left(\mathbf {A} ^ {T} \mathbf {R}\right)\right) \tag {1}
62
+ $$
63
+
64
+ where $\mathbf{A} \in \mathbb{R}^{3 \times 3}$ is an unconstrained matrix, and $F(\mathbf{A}) \in \mathbb{R}$ is the normalization factor. Without further clarification, we denote by $F$ the normalization factor of the corresponding distribution in the remainder of this paper. We also denote matrix Fisher distribution as $\mathbf{R} \sim \mathcal{MF}(\mathbf{A})$ .
65
+
66
+ Suppose the singular value decomposition of matrix $\mathbf{A}$ is given by $\mathbf{A} = \mathbf{U}'\mathbf{S}'(\mathbf{V}')^T$ ; the proper SVD is then defined as $\mathbf{A} = \mathbf{USV}^T$ , where
67
+
68
+ $$
69
+ \begin{array}{l} \mathbf {U} = \mathbf {U} ^ {\prime} \operatorname {d i a g} (1, 1, \det (\mathbf {U} ^ {\prime})) \qquad \mathbf {V} = \mathbf {V} ^ {\prime} \operatorname {d i a g} (1, 1, \det (\mathbf {V} ^ {\prime})) \\ \mathbf {S} = \operatorname {d i a g} \left(s _ {1}, s _ {2}, s _ {3}\right) = \operatorname {d i a g} \left(s _ {1} ^ {\prime}, s _ {2} ^ {\prime}, \det \left(\mathbf {U} ^ {\prime} \mathbf {V} ^ {\prime}\right) s _ {3} ^ {\prime}\right) \\ \end{array}
70
+ $$
71
+
72
+ The definition of $\mathbf{U}$ and $\mathbf{V}$ ensures that $\operatorname*{det}(\mathbf{U}) = \operatorname*{det}(\mathbf{V}) = 1$ and $\mathbf{U},\mathbf{V}\in \mathrm{SO}(3)$ .
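+ A minimal NumPy sketch of the proper SVD, as a direct transcription of the definition above (the helper name is ours):
+
+ ```python
+ import numpy as np
+
+ def proper_svd(A):
+     # Proper SVD A = U S V^T with det(U) = det(V) = 1, i.e. U, V in SO(3).
+     U, s, Vt = np.linalg.svd(A)  # standard SVD: A = U' diag(s') V'^T
+     V = Vt.T
+     dU, dV = np.linalg.det(U), np.linalg.det(V)
+     S = np.diag([s[0], s[1], s[2] * dU * dV])  # s3 = det(U'V') * s3'
+     U = U @ np.diag([1.0, 1.0, dU])
+     V = V @ np.diag([1.0, 1.0, dV])
+     return U, S, V
+ ```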
73
+
74
+ # 3.2 RELATIONSHIP BETWEEN MATRIX FISHER DISTRIBUTION IN SO(3) AND GAUSSIAN DISTRIBUTION IN $\mathbb{R}^3$
75
+
76
+ It is shown that matrix Fisher distribution is closely related to a zero-mean Gaussian distribution near its mode (Lee, 2018a;b). Denoting $\mathbf{R}_0$ as the mode of matrix Fisher distribution and defining $\widetilde{\mathbf{R}} = \mathbf{R}_0^T\mathbf{R}$ , the relationship is as follows. Please refer to the supplementary for the proof.
77
+
78
+ Proposition 1. Let $\Phi = \log \widetilde{\mathbf{R}}\in \mathfrak{so}(3)$ and $\phi = \Phi^{\vee}\in \mathbb{R}^{3}$ . For rotation matrix $\mathbf{R}\in \mathrm{SO}(3)$ following matrix Fisher distribution, when $\| \mathbf{R} - \mathbf{R}_0\| \to 0$ , $\phi$ follows zero-mean multivariate Gaussian distribution.
79
+
80
+ # 4 PROBABILISTIC ROTATION ESTIMATION WITH ROTATION LAPLACE DISTRIBUTION
81
+
82
+ # 4.1 ROTATION LAPLACE DISTRIBUTION
83
+
84
+ We get inspiration from multivariate Laplace distribution (Eltoft et al., 2006; Kozubowski et al., 2013), defined as follows.
85
+
86
+ Definition 2. Multivariate Laplace distribution. With mean $\mu = 0$ , the $d$ -dimensional multivariate Laplace distribution with covariance matrix $\Sigma$ is defined as
87
+
88
+ $$
89
+ p (\mathbf {x}; \boldsymbol {\Sigma}) = \frac {1}{F} \left(\mathbf {x} ^ {T} \boldsymbol {\Sigma} ^ {- 1} \mathbf {x}\right) ^ {v / 2} K _ {v} \left(\sqrt {2 \mathbf {x} ^ {T} \boldsymbol {\Sigma} ^ {- 1} \mathbf {x}}\right)
90
+ $$
91
+
92
+ where $v = (2 - d) / 2$ and $K_{v}$ is the modified Bessel function of the second kind.
93
+
94
+ We consider the three-dimensional Laplace distribution of $\mathbf{x} \in \mathbb{R}^3$ (i.e., $d = 3$ and $v = -\frac{1}{2}$ ). Given the property $K_{-\frac{1}{2}}(\xi) \propto \xi^{-\frac{1}{2}} \exp(-\xi)$ , the three-dimensional Laplace distribution is given by
95
+
96
+ $$
97
+ p (\mathbf {x}; \boldsymbol {\Sigma}) = \frac {1}{F} \frac {\exp \left(- \sqrt {2 \mathbf {x} ^ {T} \boldsymbol {\Sigma} ^ {- 1} \mathbf {x}}\right)}{\sqrt {\mathbf {x} ^ {T} \boldsymbol {\Sigma} ^ {- 1} \mathbf {x}}}
98
+ $$
99
+
100
+ In this section, we first give the definition of our proposed Rotation Laplace distribution and then show its relationship with the multivariate Laplace distribution.
101
+
102
+ Definition 3. Rotation Laplace distribution. The random variable $\mathbf{R} \in \mathrm{SO}(3)$ follows Rotation Laplace distribution with parameter $\mathbf{A}$ , if its probability density function is defined as
103
+
104
+ $$
105
+ p (\mathbf {R}; \mathbf {A}) = \frac {1}{F (\mathbf {A})} \frac {\exp \left(- \sqrt {\operatorname {t r} \left(\mathbf {S} - \mathbf {A} ^ {T} \mathbf {R}\right)}\right)}{\sqrt {\operatorname {t r} \left(\mathbf {S} - \mathbf {A} ^ {T} \mathbf {R}\right)}} \tag {2}
106
+ $$
107
+
108
+ where $\mathbf{A} \in \mathbb{R}^{3 \times 3}$ is an unconstrained matrix, and $\mathbf{S}$ is the diagonal matrix composed of the proper singular values of matrix $\mathbf{A}$ , i.e., $\mathbf{A} = \mathbf{USV}^T$ . We also denote Rotation Laplace distribution as $\mathbf{R} \sim \mathcal{RL}(\mathbf{A})$ .
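+ As a sketch, the unnormalized density of Eq. 2 can be evaluated as follows, reusing the `proper_svd` helper from Section 3; the clipping anticipates the numerical-stability trick described later in Sec. 5.5:
+
+ ```python
+ import numpy as np
+
+ def rotation_laplace_unnorm(R, A):
+     # Density of RL(A) at R in SO(3), up to the factor 1/F(A) (cf. Eq. 2).
+     _, S, _ = proper_svd(A)
+     t = max(np.trace(S - A.T @ R), 1e-8)  # clip for stability (Sec. 5.5)
+     return np.exp(-np.sqrt(t)) / np.sqrt(t)
+ ```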
109
+
110
+ Denoting $\mathbf{R}_0$ as the mode of Rotation Laplace distribution and defining $\widetilde{\mathbf{R}} = \mathbf{R}_0^T\mathbf{R}$ , the relationship between Rotation Laplace distribution and the multivariate Laplace distribution is as follows.
111
+
112
+ Proposition 2. Let $\Phi = \log \widetilde{\mathbf{R}}\in \mathfrak{so}(3)$ and $\phi = \Phi^{\vee}\in \mathbb{R}^{3}$ . For rotation matrix $\mathbf{R}\in \mathrm{SO}(3)$ following Rotation Laplace distribution, when $\| \mathbf{R} - \mathbf{R}_0\| \to 0$ , $\phi$ follows zero-mean multivariate Laplace distribution.
113
+
114
+ Proof. Apply proper SVD to matrix $\mathbf{A}$ as $\mathbf{A} = \mathbf{USV}^T$ . For $\mathbf{R} \sim \mathcal{RL}(\mathbf{A})$ , we have
115
+
116
+ $$
117
+ p (\mathbf {R}) \mathrm {d} \mathbf {R} \propto \frac {\exp \left(- \sqrt {\operatorname {t r} \left(\mathbf {S} - \mathbf {A} ^ {\mathrm {T}} \mathbf {R}\right)}\right)}{\sqrt {\operatorname {t r} \left(\mathbf {S} - \mathbf {A} ^ {\mathrm {T}} \mathbf {R}\right)}} \mathrm {d} \mathbf {R} = \frac {\exp \left(- \sqrt {\operatorname {t r} \left(\mathbf {S} - \mathbf {S V} ^ {\mathrm {T}} \widetilde {\mathbf {R}} \mathbf {V}\right)}\right)}{\sqrt {\operatorname {t r} \left(\mathbf {S} - \mathbf {S V} ^ {\mathrm {T}} \widetilde {\mathbf {R}} \mathbf {V}\right)}} \mathrm {d} \mathbf {R} \tag {3}
118
+ $$
119
+
120
+ With $\phi = (\log \widetilde{\mathbf{R}})^{\vee} \in \mathbb{R}^{3}$ , $\widetilde{\mathbf{R}}$ can be parameterized as
121
+
122
+ $$
123
+ \widetilde {\mathbf {R}} (\phi) = \exp (\hat {\phi}) = \mathbf {I} + \frac {\sin \| \phi \|}{\| \phi \|} \hat {\phi} + \frac {1 - \cos \| \phi \|}{\| \phi \| ^ {2}} \hat {\phi} ^ {2}
124
+ $$
125
+
126
+ We follow the common practice (Mohlin et al., 2020; Lee, 2018a) that the Haar measure $\mathrm{d}\mathbf{R}$ is scaled such that $\int_{SO(3)}\mathrm{d}\mathbf{R} = 1$ and thus the Haar measure is given by
127
+
128
+ $$
129
+ \mathrm {d} \widetilde {\mathbf {R}} = \frac {1 - \cos \| \phi \|}{4 \pi^ {2} \| \phi \| ^ {2}} \mathrm {d} \phi = \left(\frac {1}{8 \pi^ {2}} + O (\| \phi \| ^ {2})\right) \mathrm {d} \phi . \tag {4}
130
+ $$
131
+
132
+ Also, $\widetilde{\mathbf{R}}$ expanded at $\phi = \mathbf{0}$ is computed as $\widetilde{\mathbf{R}} = \mathbf{I} + \hat{\phi} +\frac{1}{2}\hat{\phi}^2 +O(\| \phi \| ^3)$ , so we have
133
+
134
+ $$
135
+ \begin{array}{l} \mathbf {V} ^ {T} \widetilde {\mathbf {R}} \mathbf {V} = \mathbf {I} + \mathbf {V} ^ {T} \hat {\phi} \mathbf {V} + \frac {1}{2} \mathbf {V} ^ {T} \hat {\phi} ^ {2} \mathbf {V} + O (\| \phi \| ^ {3}) = \mathbf {I} + \widehat {\mathbf {V} ^ {T} \phi} + \frac {1}{2} \widehat {\mathbf {V} ^ {T} \phi} ^ {2} + O (\| \phi \| ^ {3}) \\ = \left[ \begin{array}{c c c} 1 - \frac {1}{2} \left(\mu_ {2} ^ {2} + \mu_ {3} ^ {2}\right) & \frac {1}{2} \mu_ {1} \mu_ {2} - \mu_ {3} & \frac {1}{2} \mu_ {1} \mu_ {3} + \mu_ {2} \\ \frac {1}{2} \mu_ {1} \mu_ {2} + \mu_ {3} & 1 - \frac {1}{2} \left(\mu_ {3} ^ {2} + \mu_ {1} ^ {2}\right) & \frac {1}{2} \mu_ {2} \mu_ {3} - \mu_ {1} \\ \frac {1}{2} \mu_ {1} \mu_ {3} - \mu_ {2} & \frac {1}{2} \mu_ {2} \mu_ {3} + \mu_ {1} & 1 - \frac {1}{2} \left(\mu_ {1} ^ {2} + \mu_ {2} ^ {2}\right) \end{array} \right] + O (\| \phi \| ^ {3}), \tag {5} \\ \end{array}
136
+ $$
137
+
138
+ where $(\mu_1,\mu_2,\mu_3)^T = \mathbf{V}^T\phi$ , and
139
+
140
+ $$
141
+ \operatorname {t r} (\mathbf {S} - \mathbf {S V} ^ {\mathrm {T}} \tilde {\mathbf {R}} \mathbf {V}) = \sum_ {(i, j, k) \in I} \frac {1}{2} (s _ {j} + s _ {k}) \mu_ {i} ^ {2} + O (\| \phi \| ^ {3}) = \frac {1}{2} \boldsymbol {\phi} ^ {T} \mathbf {V} \left[ \begin{array}{c c} s _ {2} + s _ {3} & \\ & s _ {1} + s _ {3} \\ & s _ {1} + s _ {2} \end{array} \right] \mathbf {V} ^ {T} \boldsymbol {\phi} + O (\| \phi \| ^ {3}) \tag {6}
142
+ $$
143
+
144
+ Considering Eq. 3, 4 and 6, we have
145
+
146
+ $$
147
+ p (\mathbf {R}) \mathrm {d} \mathbf {R} \propto \frac {\exp \left(- \sqrt {\operatorname {t r} \left(\mathbf {S} - \mathbf {A} ^ {\mathrm {T}} \mathbf {R}\right)}\right)}{\sqrt {\operatorname {t r} \left(\mathbf {S} - \mathbf {A} ^ {\mathrm {T}} \mathbf {R}\right)}} \mathrm {d} \mathbf {R} = \frac {1}{8 \pi^ {2}} \frac {\exp \left(- \sqrt {2 \phi^ {T} \boldsymbol {\Sigma} ^ {- 1} \phi}\right)}{\sqrt {2 \phi^ {T} \boldsymbol {\Sigma} ^ {- 1} \phi}} \left(1 + O (\| \phi \| ^ {2})\right) \mathrm {d} \phi \tag {7}
148
+ $$
149
+
150
+ When $\| \mathbf{R} - \mathbf{R}_0\| \to 0$ , we have $\| \widetilde{\mathbf{R}} -\mathbf{I}\| \to 0$ and $\phi \rightarrow 0$ , so Eq. 7 reduces to the multivariate Laplace distribution with covariance matrix $\pmb {\Sigma} = 4\mathbf{V}\mathrm{diag}(\frac{1}{s_2 + s_3},\frac{1}{s_1 + s_3},\frac{1}{s_1 + s_2})\mathbf{V}^T$ .
151
+
152
+ Rotation Laplace distribution bears similar properties to matrix Fisher distribution. Its mode is computed as $\mathbf{U}\mathbf{V}^T$ . The columns of $\mathbf{U}$ and the proper singular values $\mathbf{S}$ describe the orientation and the strength of the dispersion, respectively.
153
+
154
+ # 4.2 NEGATIVE LOG-LIKELIHOOD LOSS
155
+
156
+ Given a collection of observations $\mathcal{X} = \{\pmb{x}_i\}$ and the associated ground truth rotations $\mathcal{R} = \{\mathbf{R}_i\}$ , we aim at training the network to best estimate the parameter $\mathbf{A}$ of Rotation Laplace distribution. This is achieved by maximizing a likelihood function so that, under our probabilistic model, the observed data is most probable, which is known as maximum likelihood estimation (MLE). We use the negative log-likelihood of $\mathbf{R}_{\pmb{x}}$ as the loss function:
157
+
158
+ $$
159
+ \mathcal {L} (\boldsymbol {x}, \mathbf {R} _ {\boldsymbol {x}}) = - \log p \left(\mathbf {R} _ {\boldsymbol {x}}; \mathbf {A} _ {\boldsymbol {x}}\right)
160
+ $$
161
+
162
+ # 4.3 DISCRETE APPROXIMATION OF THE NORMALIZATION FACTOR
163
+
164
+ Efficiently and accurately estimating the normalization factor for distributions over $\mathrm{SO}(3)$ is nontrivial. Inspired by Murphy et al. (2021), we approximate the normalization factor of Rotation Laplace distribution through an equivolumetric discretization of the $\mathrm{SO}(3)$ manifold. We employ the discretization method introduced in Yershova et al. (2010), which starts with the equal-area grids on the 2-sphere (Gorski et al., 2005) and covers $\mathrm{SO}(3)$ by threading a great circle through each point on the surface of the 2-sphere with the Hopf fibration. Concretely, we discretize the $\mathrm{SO}(3)$ space into a finite set of equivolumetric grids $\mathcal{G} = \{\mathbf{R}|\mathbf{R}\in \mathrm{SO}(3)\}$ , and the normalization factor of Rotation Laplace distribution is computed as
165
+
166
+ $$
167
+ F (\mathbf {A}) = \int_ {\mathrm {S O (3)}} \frac {\exp \left(- \sqrt {\operatorname {t r} \left(\mathbf {S} - \mathbf {A} ^ {T} \mathbf {R}\right)}\right)}{\sqrt {\operatorname {t r} \left(\mathbf {S} - \mathbf {A} ^ {T} \mathbf {R}\right)}} \mathrm {d} \mathbf {R} \approx \sum_ {\mathbf {R} _ {i} \in \mathcal {G}} \frac {\exp \left(- \sqrt {\operatorname {t r} \left(\mathbf {S} - \mathbf {A} ^ {T} \mathbf {R} _ {i}\right)}\right)}{\sqrt {\operatorname {t r} \left(\mathbf {S} - \mathbf {A} ^ {T} \mathbf {R} _ {i}\right)}} \Delta \mathbf {R} _ {i}
168
+ $$
169
+
170
+ where $\Delta \mathbf{R}_i = \frac{\int_{SO(3)}\mathrm{d}\mathbf{R}}{|\mathcal{G}|} = \frac{1}{|\mathcal{G}|}$ . In our experiments, we discretize the SO(3) space into about 37k points. Please refer to the supplementary for an analysis of the effect of different numbers of samples.
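+ The sketch below approximates $F(\mathbf{A})$ and the resulting negative log-likelihood of Section 4.2; as a stand-in for the equivolumetric grid of Yershova et al. (2010), it draws uniform random rotations with SciPy, which keeps the same $\Delta \mathbf{R}_i = 1/|\mathcal{G}|$ weighting under the scaled Haar measure:
+
+ ```python
+ import numpy as np
+ from scipy.spatial.transform import Rotation
+
+ def normalization_factor(A, n_grid=37000, seed=0):
+     # Average the unnormalized density over a grid on SO(3); every grid
+     # cell carries volume 1/|G| since the Haar measure integrates to 1.
+     grid = Rotation.random(n_grid, random_state=seed).as_matrix()
+     return np.mean([rotation_laplace_unnorm(R, A) for R in grid])
+
+ def nll_loss(R_gt, A):
+     # Negative log-likelihood of Sec. 4.2: -log p(R_gt; A).
+     return (-np.log(rotation_laplace_unnorm(R_gt, A))
+             + np.log(normalization_factor(A)))
+ ```
+
+ In training, $\mathbf{A}$ is the network output and the grid is fixed, so an autodiff version of this sum remains differentiable with respect to $\mathbf{A}$ .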
171
+
172
+ # 4.4 QUATERNION LAPLACE DISTRIBUTION
173
+
174
+ In this section, we introduce our extension of Laplace-inspired distribution for quaternions, namely, Quaternion Laplace distribution.
175
+
176
+ Definition 4. Quaternion Laplace distribution. The random variable $\mathbf{q} \in S^3$ follows quaternion Laplace distribution with parameters $\mathbf{M}$ and $\mathbf{Z}$ , if its probability density function is defined as
177
+
178
+ $$
179
+ p (\mathbf {q}; \mathbf {M}, \mathbf {Z}) = \frac {1}{F (\mathbf {Z})} \frac {\exp \left(- \sqrt {- \mathbf {q} ^ {T} \mathbf {M} \mathbf {Z} \mathbf {M} ^ {T} \mathbf {q}}\right)}{\sqrt {- \mathbf {q} ^ {T} \mathbf {M} \mathbf {Z} \mathbf {M} ^ {T} \mathbf {q}}} \tag {8}
180
+ $$
181
+
182
+ where $\mathbf{M} \in \mathbf{O}(4)$ is a $4 \times 4$ orthogonal matrix, and $\mathbf{Z} = \mathrm{diag}(0, z_1, z_2, z_3)$ is a $4 \times 4$ diagonal matrix with $0 \geq z_1 \geq z_2 \geq z_3$ . We also denote quaternion Laplace distribution as $\mathbf{q} \sim \mathcal{QL}(\mathbf{M}, \mathbf{Z})$ .
183
+
184
+ Proposition 3. Denote $\mathbf{q}_0$ as the mode of quaternion Laplace distribution. Let $\pi$ be the tangent space of $\mathbb{S}^3$ at $\mathbf{q}_0$ , and $\pi(\mathbf{x}) \in \mathbb{R}^4$ be the projection of $\mathbf{x} \in \mathbb{R}^4$ on $\pi$ . For quaternion $\mathbf{q} \in \mathbb{S}^3$ following Bingham distribution / quaternion Laplace distribution, when $\mathbf{q} \to \mathbf{q}_0$ , $\pi(\mathbf{q})$ follows zero-mean multivariate Gaussian distribution / zero-mean multivariate Laplace distribution.
185
+
186
+ Both Bingham distribution and Quaternion Laplace distribution exhibit antipodal symmetry on $S^3$ , i.e., $p(\mathbf{q}) = p(-\mathbf{q})$ , which captures the nature that the quaternions $\mathbf{q}$ and $-\mathbf{q}$ represent the same rotation on $\mathrm{SO}(3)$ .
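+ A corresponding sketch for Eq. 8; because the density depends on $\mathbf{q}$ only through the quadratic form $-\mathbf{q}^T\mathbf{MZM}^T\mathbf{q}$ , the antipodal symmetry $p(\mathbf{q}) = p(-\mathbf{q})$ is immediate in the code as well:
+
+ ```python
+ import numpy as np
+
+ def quaternion_laplace_unnorm(q, M, Z):
+     # Unnormalized density of QL(M, Z) at a unit quaternion q (cf. Eq. 8).
+     # The form -q^T M Z M^T q is nonnegative since Z = diag(0, z1, z2, z3)
+     # with z_i <= 0; we clip it for numerical stability.
+     t = max(-q @ M @ Z @ M.T @ q, 1e-8)
+     return np.exp(-np.sqrt(t)) / np.sqrt(t)
+ ```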
187
+
188
+ Proposition 4. Denote $\gamma$ as the standard transformation from unit quaternions to corresponding rotation matrices. For rotation matrix $\mathbf{R} \in \mathrm{SO}(3)$ following Rotation Laplace distribution, $\mathbf{q} = \gamma^{-1}(\mathbf{R}) \in \mathbb{S}^3$ follows quaternion Laplace distribution.
189
+
190
+ Prop. 4 shows that our proposed Rotation Laplace distribution is equivalent to Quaternion Laplace distribution, similar to the equivalence of matrix Fisher distribution and Bingham distribution (Prentice, 1986), demonstrating the consistency of our derivations. Please see the supplementary for the proofs of the above propositions.
191
+
192
+ The normalization factor of Quaternion Laplace distribution is also approximated by dense discretization, as follows:
193
+
194
+ $$
195
+ F (\mathbf {Z}) = \oint_ {\mathcal {S} ^ {3}} \frac {\exp \left(- \sqrt {- \mathbf {q} ^ {T} \mathbf {M Z M} ^ {T} \mathbf {q}}\right)}{\sqrt {- \mathbf {q} ^ {T} \mathbf {M Z M} ^ {T} \mathbf {q}}} \mathrm {d} \mathbf {q} \approx \sum_ {\mathbf {q} _ {i} \in \mathcal {G} _ {\mathbf {q}}} \frac {\exp \left(- \sqrt {- \mathbf {q} _ {i} ^ {T} \mathbf {M Z M} ^ {T} \mathbf {q} _ {i}}\right)}{\sqrt {- \mathbf {q} _ {i} ^ {T} \mathbf {M Z M} ^ {T} \mathbf {q} _ {i}}} \Delta \mathbf {q} _ {i}
196
+ $$
197
+
198
+ where $\mathcal{G}_{\mathbf{q}} = \left\{\mathbf{q}|\mathbf{q}\in \mathcal{S}^3\right\}$ denotes the set of equivolumetric grids and $\Delta \mathbf{q}_i = \frac{\oint_{S^3}\mathrm{d}\mathbf{q}}{|\mathcal{G}_{\mathbf{q}}|} = \frac{2\pi^2}{|\mathcal{G}_{\mathbf{q}}|}$ .
199
+
200
+ # 5 EXPERIMENT
201
+
202
+ Following the previous state of the art (Murphy et al., 2021; Mohlin et al., 2020), we evaluate our method on the task of object rotation estimation from single RGB images, where the object rotation is the relative rotation between the input object and the object in the canonical pose. Concerning this task, we find two independent research tracks with slightly different evaluation settings. One line of research focuses on probabilistic rotation regression with different parametric or non-parametric distributions on SO(3) (Prokudin et al., 2018; Gilitschenski et al., 2019; Deng et al., 2022; Mohlin et al., 2020; Murphy et al., 2021), and the other, non-probabilistic track proposes multiple rotation representations (Zhou et al., 2019; Levinson et al., 2020; Peretroukhin et al., 2020) or improves the gradient of backpropagation (Chen et al., 2022). To fully demonstrate the capacity of our Rotation Laplace distribution, we leave the baselines in their original optimal states and adapt our method to follow the common experimental settings of each track, respectively.
203
+
204
+ # 5.1 DATASETS & EVALUATION METRICS
205
+
206
+ Datasets ModelNet10-SO3 (Liao et al., 2019) is a commonly used synthetic dataset for single image rotation estimation containing 10 object classes. It is synthesized by rendering the CAD models of ModelNet-10 dataset (Wu et al., 2015) that are rotated by uniformly sampled rotations in SO(3). Pascal3D+ (Xiang et al., 2014) is a popular benchmark on real-world images for pose estimation. It covers 12 common daily object categories. The images in Pascal3D+ dataset are sourced from Pascal VOC and ImageNet datasets, and are split into ImageNet_train, ImageNet_val, PascalVOC_train, and PascalVOC_val sets.
207
+
208
+ Table 1: Numerical comparisons with probabilistic baselines on ModelNet10-SO3 dataset averaged on all categories. Numbers in parentheses ( $\cdot$ ) are our reproduced results. Please refer to supplementary for comparisons with each category.
209
+
210
+ <table><tr><td></td><td>Acc@3°↑</td><td>Acc@5°↑</td><td>Acc@10°↑</td><td>Acc@15°↑</td><td>Acc@30°↑</td><td>Med.(°)↓</td></tr><tr><td>Liao et al. (2019)</td><td>-</td><td>-</td><td>-</td><td>0.496</td><td>0.658</td><td>28.7</td></tr><tr><td>Prokudin et al. (2018)</td><td>-</td><td>-</td><td>-</td><td>0.456</td><td>0.528</td><td>49.3</td></tr><tr><td>Deng et al. (2022)</td><td>(0.138)</td><td>(0.301)</td><td>(0.502)</td><td>0.562 (0.584)</td><td>0.694 (0.673)</td><td>32.6 (31.6)</td></tr><tr><td>Mohlin et al. (2020)</td><td>(0.164)</td><td>(0.389)</td><td>(0.615)</td><td>0.693 (0.684)</td><td>0.757 (0.751)</td><td>17.1 (17.9)</td></tr><tr><td>Murphy et al. (2021)</td><td>(0.294)</td><td>(0.534)</td><td>(0.680)</td><td>0.719 (0.714)</td><td>0.735 (0.730)</td><td>21.5 (20.3)</td></tr><tr><td>Rotation Laplace</td><td>0.447</td><td>0.611</td><td>0.715</td><td>0.742</td><td>0.772</td><td>12.7</td></tr></table>
211
+
212
+ Table 2: Numerical comparisons with probabilistic baselines on Pascal3D+ dataset averaged on all categories. Numbers in parentheses ( $\cdot$ ) are our reproduced results. Please refer to supplementary for comparisons with each category.
213
+
214
+ <table><tr><td></td><td>Acc@3°↑</td><td>Acc@5°↑</td><td>Acc@10°↑</td><td>Acc@15°↑</td><td>Acc@30°↑</td><td>Med.(°)↓</td></tr><tr><td>Tulsiani &amp; Malik (2015)</td><td>-</td><td>-</td><td>-</td><td>-</td><td>0.808</td><td>13.6</td></tr><tr><td>Mahendran et al. (2018)</td><td>-</td><td>-</td><td>-</td><td>-</td><td>0.859</td><td>10.1</td></tr><tr><td>Liao et al. (2019)</td><td>-</td><td>-</td><td>-</td><td>-</td><td>0.819</td><td>13.0</td></tr><tr><td>Prokudin et al. (2018)</td><td>-</td><td>-</td><td>-</td><td>-</td><td>0.838</td><td>12.2</td></tr><tr><td>Mohlin et al. (2020)</td><td>(0.089)</td><td>(0.215)</td><td>(0.484)</td><td>(0.650)</td><td>0.825 (0.827)</td><td>11.5 (11.9)</td></tr><tr><td>Murphy et al. (2021)</td><td>(0.102)</td><td>(0.242)</td><td>(0.524)</td><td>(0.672)</td><td>0.837 (0.838)</td><td>10.3 (10.2)</td></tr><tr><td>Rotation Laplace</td><td>0.134</td><td>0.292</td><td>0.574</td><td>0.714</td><td>0.874</td><td>9.3</td></tr></table>
215
+
216
+ Evaluation metrics We evaluate our experiments with the geodesic distance between the network prediction and the ground truth. This metric returns the angular error, which we measure in degrees. In addition, we report the prediction accuracy within a given error threshold.
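+ For reference, a minimal sketch of this metric (the helper is our own):
+
+ ```python
+ import numpy as np
+
+ def geodesic_distance_deg(R1, R2):
+     # Geodesic distance on SO(3): the rotation angle of R1^T R2, in degrees.
+     cos = (np.trace(R1.T @ R2) - 1.0) / 2.0
+     return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
+ ```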
217
+
218
+ # 5.2 COMPARISONS WITH PROBABILISTIC METHODS
219
+
220
+ # 5.2.1 EVALUATION SETUP
221
+
222
+ Settings In this section, we follow the experiment settings of the latest work (Murphy et al., 2021) and quote its reported numbers for the baselines. Specifically, we train one single model for all categories of each dataset. For the Pascal3D+ dataset, we follow Murphy et al. (2021) and use (the more challenging) PascalVOC_val as the test set. Note that Murphy et al. (2021) only measure the coarse-scale accuracy (e.g., Acc@30°), which may not adequately reflect the needs of downstream tasks (Wang et al., 2019b; Fang et al., 2020). To facilitate finer-scale comparisons (e.g., Acc@5°), we further re-run several recent baselines and report the reproduced results in parentheses ( $\cdot$ ).
223
+
224
+ Baselines We compare our method to recent works which utilize probabilistic distributions on SO(3) for the purpose of pose estimation. Concretely, the baselines use a mixture of von Mises distributions (Prokudin et al., 2018), Bingham distribution (Gilitschenski et al., 2019; Deng et al., 2022), matrix Fisher distribution (Mohlin et al., 2020) and Implicit-PDF (Murphy et al., 2021). We also compare to the spherical regression work of Liao et al. (2019), as Murphy et al. (2021) do.
225
+
226
+ # 5.2.2 RESULTS
227
+
228
+ Table 1 shows the quantitative comparisons of our method and the baselines on the ModelNet10-SO3 dataset. From the multiple evaluation metrics, we can see that maximum likelihood estimation under the assumption of Rotation Laplace distribution significantly outperforms the other distributions for rotation, including matrix Fisher distribution (Mohlin et al., 2020), Bingham distribution (Deng et al., 2022) and von Mises distribution (Prokudin et al., 2018). Our method also achieves superior performance to the non-parametric Implicit-PDF (Murphy et al., 2021). In particular, our method improves the fine-scale Acc@3° and Acc@5° accuracy by a large margin, showing its capacity to precisely model the target distribution.
229
+
230
+ The experiments on Pascal3D+ dataset are shown in Table 2, where our Rotation Laplace distribution outperforms all the baselines. While our method gets reasonably good performance on the median
231
+
232
+ Table 3: Numerical comparisons with non-probabilistic baselines on ModelNet10-SO3 dataset. One model is trained for each category.
233
+
234
+ <table><tr><td rowspan="2">Methods</td><td colspan="3">Chair</td><td colspan="3">Sofa</td><td colspan="3">Toilet</td><td colspan="3">Bed</td></tr><tr><td>Mean↓</td><td>Med.↓</td><td>Acc@5↑</td><td>Mean↓</td><td>Med.↓</td><td>Acc@5↑</td><td>Mean↓</td><td>Med.↓</td><td>Acc@5↑</td><td>Mean↓</td><td>Med.↓</td><td>Acc@5↑</td></tr><tr><td>6D</td><td>19.6</td><td>9.1</td><td>0.19</td><td>17.5</td><td>7.3</td><td>0.27</td><td>10.9</td><td>6.2</td><td>0.37</td><td>32.3</td><td>11.7</td><td>0.11</td></tr><tr><td>9D</td><td>17.5</td><td>8.3</td><td>0.23</td><td>19.8</td><td>7.6</td><td>0.25</td><td>11.8</td><td>6.5</td><td>0.34</td><td>30.4</td><td>11.1</td><td>0.13</td></tr><tr><td>9D-Inf</td><td>12.1</td><td>5.1</td><td>0.49</td><td>12.5</td><td>3.5</td><td>0.70</td><td>7.6</td><td>3.7</td><td>0.67</td><td>22.5</td><td>4.5</td><td>0.56</td></tr><tr><td>10D</td><td>18.4</td><td>9.0</td><td>0.20</td><td>20.9</td><td>8.7</td><td>0.20</td><td>11.5</td><td>5.9</td><td>0.39</td><td>29.9</td><td>11.5</td><td>0.11</td></tr><tr><td>RPMG-6D</td><td>12.9</td><td>4.7</td><td>0.53</td><td>11.5</td><td>2.8</td><td>0.77</td><td>7.8</td><td>3.4</td><td>0.71</td><td>20.3</td><td>3.6</td><td>0.67</td></tr><tr><td>RPMG-9D</td><td>11.9</td><td>4.4</td><td>0.58</td><td>10.5</td><td>2.4</td><td>0.82</td><td>7.5</td><td>3.2</td><td>0.75</td><td>20.0</td><td>2.9</td><td>0.76</td></tr><tr><td>RPMG-10D</td><td>12.8</td><td>4.5</td><td>0.55</td><td>11.2</td><td>2.4</td><td>0.82</td><td>7.2</td><td>3.0</td><td>0.76</td><td>19.2</td><td>2.9</td><td>0.75</td></tr><tr><td>Rot. Laplace</td><td>9.7</td><td>3.5</td><td>0.68</td><td>8.8</td><td>2.1</td><td>0.84</td><td>5.3</td><td>2.6</td><td>0.83</td><td>15.5</td><td>2.3</td><td>0.82</td></tr></table>
235
+
236
+ Table 4: Numerical comparisons with non-probabilistic baselines on Pascal3D+ dataset. One model is trained for each category.
237
+
238
+ <table><tr><td rowspan="2">Methods</td><td colspan="4">Bicycle</td><td colspan="4">Sofa</td></tr><tr><td>Acc@10↑</td><td>Acc@15↑</td><td>Acc@20↑</td><td>Med.↓</td><td>Acc@10↑</td><td>Acc@15↑</td><td>Acc@20↑</td><td>Med.↓</td></tr><tr><td>6D</td><td>0.218</td><td>0.390</td><td>0.553</td><td>18.1</td><td>0.508</td><td>0.767</td><td>0.890</td><td>9.9</td></tr><tr><td>9D</td><td>0.206</td><td>0.376</td><td>0.569</td><td>18.0</td><td>0.524</td><td>0.796</td><td>0.903</td><td>9.2</td></tr><tr><td>9D-Inf</td><td>0.380</td><td>0.533</td><td>0.699</td><td>13.4</td><td>0.709</td><td>0.880</td><td>0.935</td><td>6.7</td></tr><tr><td>10D</td><td>0.239</td><td>0.423</td><td>0.567</td><td>17.9</td><td>0.502</td><td>0.770</td><td>0.896</td><td>9.8</td></tr><tr><td>RPMG-6D</td><td>0.354</td><td>0.572</td><td>0.706</td><td>13.5</td><td>0.696</td><td>0.861</td><td>0.922</td><td>6.7</td></tr><tr><td>RPMG-9D</td><td>0.368</td><td>0.574</td><td>0.718</td><td>12.5</td><td>0.725</td><td>0.880</td><td>0.958</td><td>6.7</td></tr><tr><td>RPMG-10D</td><td>0.400</td><td>0.577</td><td>0.713</td><td>12.9</td><td>0.693</td><td>0.871</td><td>0.939</td><td>7.0</td></tr><tr><td>Rot. Laplace</td><td>0.435</td><td>0.641</td><td>0.744</td><td>11.2</td><td>0.735</td><td>0.900</td><td>0.964</td><td>6.3</td></tr></table>
239
+
240
+ error and coarser-scale accuracy, we do not find a similarly impressive improvement on the fine-scale metrics as on the ModelNet10-SO3 dataset. We suspect this is because the imperfect human annotations of real-world images may lead to comparatively noisy ground truths, making it harder for networks to produce predictions very close to the GT labels. Nevertheless, our method still manages to obtain superior performance, which illustrates the robustness of our Rotation Laplace distribution.
241
+
242
+ # 5.3 COMPARISONS WITH NON-PROBABILISTIC METHODS
243
+
244
+ # 5.3.1 EVALUATION SETUP
245
+
246
+ Settings For comparisons with non-probabilistic methods, we follow the latest work of Chen et al. (2022) to learn a network for each category. For Pascal3D+ dataset, we follow Chen et al. (2022) to use ImageNet-val as our test set. We use the same evaluation metrics as in Chen et al. (2022) and quote its reported numbers for baselines.
247
+
248
+ Baselines We compare to multiple baselines that leverage different rotation representations to directly regress the prediction given input images, including 6D (Zhou et al., 2019), 9D / 9D-Inf (Levinson et al., 2020) and 10D (Peretroukhin et al., 2020). We also include regularized projective manifold gradient (RPMG) series of methods (Chen et al., 2022).
249
+
250
+ # 5.3.2 RESULTS
251
+
252
+ We report the numerical results of our method and the non-probabilistic baselines on the ModelNet10-SO3 dataset in Table 3. Our method obtains clearly superior performance to the best competitor under all metrics and across all categories. Note that we train one model per category (as do all the baselines), thus our performance in Table 3 is better than in Table 1, where one model is trained for the whole dataset. The results on the Pascal3D+ dataset are shown in Table 4, where our method with Rotation Laplace distribution achieves state-of-the-art performance.
253
+
254
+ ![](images/f891d42f43e32be4578a12b0120e87f6a13d7206cae6815f396f7026ef94263c.jpg)
255
+
256
+ ![](images/ab9dc8bbe9ea2b444f1cfd4af3d27921534393dfe951e31ce5abd849f3d48d2d.jpg)
257
+ (a)
258
+
259
+ ![](images/99e2ed6470acb95380e329464994501f9b2db6994e4d3f314103c6144e1ed838.jpg)
260
+
261
+ ![](images/15d5182083a8206e647c4a622168cc7bd3fc4b108fe04be4867e06ad59a8d343.jpg)
262
+ (b)
263
+
264
+ ![](images/329e3f0bbca8ce65f6b8f5875468697c57f07b64907c4ee1d26c1999a0191d83.jpg)
265
+
266
+ ![](images/8db3a4d3fa71f609a97f8e6c8e4bd62add974fb753b2bcfed71a1f6be47684fd.jpg)
267
+ (c)
268
+
+ ![](images/671aa4a97a806d9b1ae993b476f56beb995e4b861bc7fad409f93e7d508bb621.jpg)
+
+ ![](images/055c8c375f3f7617f281e2b65a3236a154fd189c2aff1df2d8a18f8d4f896198.jpg)
+ (d)
+
+ ![](images/e271169639a1aad7e7d47943b03b7783f14e7cb55d16d8bea4634fe2284f98d0.jpg)
+
+ ![](images/fc26c96a11fa145f5ffffe4ae86871d437647a957336207879d9b5377eec30b7.jpg)
+ (e)
+
+ ![](images/9e2f85c6bd2714ce227d2d9ce0ff7c6bcc33098404cfe0c9c7dbb9f0431ee06b.jpg)
+
+ ![](images/a9674e9f5dba7572e2d61118aeaefd9ea00dc3d75324ded12f5ff05a8d7b25ce.jpg)
+ (f)
+
+ Figure 2: Visualizations of the predicted distributions. The top row displays example images with the projected axes of the predictions (thick lines) and ground truths (thin lines) of the object. The bottom row shows the visualization of the corresponding predicted distribution for each image. For clarity, we have aligned the predicted poses with the standard axes.
284
+
285
+ Table 5: Numerical comparisons with our proposed Quaternion & Rotation Laplace distribution and baselines on ModelNet10-SO3 dataset. One model is trained for each category. Quaternion Laplace distribution clearly outperforms Bingham distribution (Deng et al., 2022).
286
+
287
+ <table><tr><td rowspan="2"></td><td colspan="3">Chair</td><td colspan="3">Sofa</td><td colspan="3">Toilet</td><td colspan="3">Bed</td></tr><tr><td>Mean↓</td><td>Med.↓</td><td>Acc@5↑</td><td>Mean↓</td><td>Med.↓</td><td>Acc@5↑</td><td>Mean↓</td><td>Med.↓</td><td>Acc@5↑</td><td>Mean↓</td><td>Med.↓</td><td>Acc@5↑</td></tr><tr><td>Deng et al. (2022)</td><td>16.5</td><td>7.2</td><td>0.31</td><td>16.5</td><td>4.9</td><td>0.52</td><td>9.6</td><td>4.2</td><td>0.59</td><td>22.0</td><td>5.1</td><td>0.49</td></tr><tr><td>Mohlin et al. (2020)</td><td>10.8</td><td>4.6</td><td>0.55</td><td>11.1</td><td>3.5</td><td>0.70</td><td>6.4</td><td>3.5</td><td>0.70</td><td>16.0</td><td>3.8</td><td>0.66</td></tr><tr><td>Quat. Laplace</td><td>12.6</td><td>5.2</td><td>0.49</td><td>13.1</td><td>3.7</td><td>0.67</td><td>5.9</td><td>3.4</td><td>0.69</td><td>17.7</td><td>3.4</td><td>0.69</td></tr><tr><td>Rot. Laplace</td><td>9.7</td><td>3.5</td><td>0.68</td><td>8.8</td><td>2.1</td><td>0.84</td><td>5.3</td><td>2.6</td><td>0.83</td><td>15.5</td><td>2.3</td><td>0.82</td></tr></table>
288
+
289
+ # 5.4 QUALITATIVE RESULTS
290
+
291
+ We visualize the predicted distributions in Figure 2 using the visualization method of Mohlin et al. (2020). As shown in the figure, the predicted distributions can exhibit high uncertainty when the object has rotational symmetry, leading to near- $180^{\circ}$ errors (a-c), or when the input image has low resolution (d). Subfigures (e)-(f) show cases with high certainty and reasonably low errors. Please refer to the supplementary for more visual results.
292
+
293
+ # 5.5 IMPLEMENTATION DETAILS
294
+
295
+ For fair comparisons, we follow the implementation designs of Mohlin et al. (2020) and merely change the distribution from matrix Fisher distribution to our Rotation Laplace distribution. For numerical stability, we clip $\mathrm{tr}(\mathbf{S} - \mathbf{A}^T\mathbf{R})$ to $\max(1e{-}8, \mathrm{tr}(\mathbf{S} - \mathbf{A}^T\mathbf{R}))$ in Eq. 2. Please refer to the supplementary for more details.
296
+
297
+ # 5.6 COMPARISONS OF ROTATION LAPLACE DISTRIBUTION AND QUATERNION LAPLACE DISTRIBUTION
298
+
299
+ For the completeness of the experiments, we also compare our proposed Quaternion Laplace distribution with Bingham distribution and report the performance in Table 5. As shown in the table, Quaternion Laplace distribution consistently achieves superior performance to its competitor, which validates the effectiveness of our Laplace-inspired derivations. However, its rotation error is in general larger than that of Rotation Laplace distribution, since its rotation representation, the quaternion, is not a continuous representation, as pointed out in Zhou et al. (2019), thus leading to inferior performance.
300
+
301
+ # 6 CONCLUSION
302
+
303
+ In this paper, we draw inspiration from the multivariate Laplace distribution and derive two novel distributions for probabilistic rotation regression, namely, the Rotation Laplace distribution for rotation matrices on $\mathrm{SO}(3)$ and the Quaternion Laplace distribution for quaternions on $S^3$. Extensive comparisons with both probabilistic and non-probabilistic baselines on the ModelNet10-SO3 and Pascal3D+ datasets demonstrate the effectiveness and advantages of our proposed distributions.
304
+
305
+ # ACKNOWLEDGEMENT
306
+
307
+ We thank Haoran Liu from Peking University for the help in experiments. This work is supported in part by the National Key R&D Program of China (2022ZD0160801).
308
+
309
+ # REFERENCES
310
+
311
+ Christopher Bingham. An antipodally symmetric distribution on the sphere. The Annals of Statistics, pp. 1201-1225, 1974.
312
+ Michel Breyer, Jen Jen Chung, Lionel Ott, Roland Siegwart, and Juan Nieto. Volumetric grasping network: Real-time 6 dof grasp detection in clutter. arXiv preprint arXiv:2101.01132, 2021.
313
+ Jiayi Chen, Yingda Yin, Tolga Birdal, Baoquan Chen, Leonidas J Guibas, and He Wang. Projective manifold gradient layer for deep rotation regression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6646-6655, 2022.
314
+ Gregory S Chirikjian. Engineering applications of noncommutative harmonic analysis: with emphasis on rotation and motion groups. CRC press, 2000.
315
+ John L Crassidis and F Landis Markley. Unscented filtering for spacecraft attitude estimation. Journal of guidance, control, and dynamics, 26(4):536-542, 2003.
316
+ Haowen Deng, Mai Bui, Nassir Navab, Leonidas Guibas, Slobodan Ilic, and Tolga Birdal. Deep bingham networks: Dealing with uncertainty and ambiguity in pose estimation. International Journal of Computer Vision, pp. 1-28, 2022.
317
+ Thanh-Toan Do, Ming Cai, Trung Pham, and Ian Reid. Deep-6dpose: Recovering 6d object pose from a single rgb image. arXiv preprint arXiv:1802.10367, 2018.
318
+ Siyan Dong, Qingnan Fan, He Wang, Ji Shi, Li Yi, Thomas Funkhouser, Baoquan Chen, and Leonidas J Guibas. Robust neural routing through space partitions for camera relocalization in dynamic indoor environments. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8544-8554, 2021.
319
+ Torbjørn Eltoft, Taesu Kim, and Te-Won Lee. On the multivariate laplace distribution. IEEE Signal Processing Letters, 13(5):300-303, 2006.
320
+ Qihang Fang, Yingda Yin, Qingnan Fan, Fei Xia, Siyan Dong, Sheng Wang, Jue Wang, Leonidas Guibas, and Baoquan Chen. Towards accurate active camera localization. arXiv e-prints, pp. arXiv-2012, 2020.
321
+ Ge Gao, Mikko Lauri, Jianwei Zhang, and Simone Frintrop. Occlusion resistant object rotation regression from point cloud segments. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops, pp. 0-0, 2018.
322
+ Igor Gilitschenski, Roshni Sahoo, Wilko Schwarting, Alexander Amini, Sertac Karaman, and Daniela Rus. Deep orientation uncertainty learning based on a bingham loss. In International Conference on Learning Representations, 2019.
323
+ Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. MIT press, 2016.
324
+ Krzysztof M Gorski, Eric Hivon, Anthony J Banday, Benjamin D Wandelt, Frode K Hansen, Martin Reinecke, and Matthias Bartelmann. HEALPix: A framework for high-resolution discretization and fast analysis of data distributed on the sphere. The Astrophysical Journal, 622(2):759, 2005.
325
+ Alfred Haar. Der massbegriff in der theorie der kontinuierlichen gruppen. Annals of mathematics, pp. 147-169, 1933.
326
+ Eddy Ilg, Ozgun Cicek, Silvio Galesso, Aaron Klein, Osama Makansi, Frank Hutter, and Thomas Brox. Uncertainty estimates and multi-hypotheses networks for optical flow. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 652-667, 2018.
327
+
328
+ Ioan Mackenzie James. History of topology. Elsevier, 1999.
329
+ Alex Kendall and Roberto Cipolla. Geometric loss functions for camera pose regression with deep learning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5974-5983, 2017.
330
+ Alex Kendall and Yarin Gal. What uncertainties do we need in bayesian deep learning for computer vision? Advances in neural information processing systems, 30, 2017.
331
+ Alex Kendall, Matthew Grimes, and Roberto Cipolla. Posenet: A convolutional network for real-time 6-dof camera relocalization. In Proceedings of the IEEE international conference on computer vision, pp. 2938-2946, 2015.
332
+ CG Khatri and Kanti V Mardia. The von mises-fisher matrix distribution in orientation statistics. Journal of the Royal Statistical Society: Series B (Methodological), 39(1):95-106, 1977.
333
+ Tomasz J Kozubowski, Krzysztof Podgórski, and Igor Rychlik. Multivariate generalized laplace distribution and related random fields. Journal of Multivariate Analysis, 113:59-72, 2013.
334
+ Abhijit Kundu, Yin Li, and James M Rehg. 3d-rcnn: Instance-level 3d object reconstruction via render-and-compare. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3559-3568, 2018.
335
+ Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. Advances in neural information processing systems, 30, 2017.
336
+ Taeyoung Lee. Bayesian attitude estimation with the matrix fisher distribution on SO(3). IEEE Transactions on Automatic Control, 63(10):3377-3392, 2018a.
337
+ Taeyoung Lee. Bayesian attitude estimation with approximate matrix fisher distributions on SO(3). In 2018 IEEE Conference on Decision and Control (CDC), pp. 5319-5325. IEEE, 2018b.
338
+ Jake Levinson, Carlos Esteves, Kefan Chen, Noah Snavely, Angjoo Kanazawa, Afshin Rostamizadeh, and Ameesh Makadia. An analysis of svd for deep rotation estimation. Advances in Neural Information Processing Systems, 33:22554-22565, 2020.
339
+ Jiefeng Li, Siyuan Bian, Ailing Zeng, Can Wang, Bo Pang, Wentao Liu, and Cewu Lu. Human pose regression with residual log-likelihood estimation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 11025–11034, 2021.
340
+ Shuai Liao, Efstratios Gavves, and Cees GM Snoek. Spherical regression: Learning viewpoints, surface normals and 3d rotations on n-spheres. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9759-9767, 2019.
341
+ Siddharth Mahendran, Haider Ali, and Rene Vidal. A mixed classification-regression framework for 3d pose estimation from 2d images. arXiv preprint arXiv:1805.03225, 2018.
342
+ Osama Makansi, Eddy Ilg, Ozgun Cicek, and Thomas Brox. Overcoming limitations of mixture density networks: A sampling and fitting framework for multimodal future prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7144-7153, 2019.
343
+ Kanti V Mardia, Peter E Jupp, and KV Mardia. Directional statistics, volume 2. Wiley Online Library, 2000.
344
+ Rowan McAllister, Yarin Gal, Alex Kendall, Mark Van Der Wilk, Amar Shah, Roberto Cipolla, and Adrian Weller. Concrete problems for autonomous vehicle safety: Advantages of bayesian deep learning. International Joint Conferences on Artificial Intelligence, Inc., 2017.
345
+ Nikolaos Mitianoudis. A generalized directional laplacian distribution: Estimation, mixture models and audio source separation. IEEE Transactions on Audio, Speech, and Language Processing, 20 (9):2397-2408, 2012.
346
+
347
+ David Mohlin, Josephine Sullivan, and Gerald Bianchi. Probabilistic orientation estimation with matrix fisher distributions. Advances in Neural Information Processing Systems, 33:4884-4893, 2020.
348
+ Kevin P Murphy. Machine learning: a probabilistic perspective. MIT press, 2012.
349
+ Kieran A Murphy, Carlos Esteves, Varun Jampani, Srikumar Ramalingam, and Ameesh Makadia. Implicit-pdf: Non-parametric representation of probability distributions on the rotation manifold. In International Conference on Machine Learning, pp. 7882-7893. PMLR, 2021.
350
+ Deebul S Nair, Nico Hochgeschwender, and Miguel A Olivares-Mendez. Maximum likelihood uncertainty estimation: Robustness to outliers. arXiv preprint arXiv:2202.03870, 2022.
351
+ David A Nix and Andreas S Weigend. Estimating the mean and variance of the target probability distribution. In Proceedings of 1994 IEEE international conference on neural networks (ICNN'94), volume 1, pp. 55-60. IEEE, 1994.
352
+ Valentin Peretroukhin, Matthew Giamou, David M. Rosen, W. Nicholas Greene, Nicholas Roy, and Jonathan Kelly. A Smooth Representation of SO(3) for Deep Rotation Learning with Uncertainty. In Proceedings of Robotics: Science and Systems (RSS'20), Jul. 12-16 2020.
353
+ Matteo Poggi, Filippo Aleotti, Fabio Tosi, and Stefano Mattoccia. On the uncertainty of self-supervised monocular depth estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3227-3237, 2020.
354
+ Michael J Prentice. Orientation statistics without parametric assumptions. Journal of the Royal Statistical Society: Series B (Methodological), 48(2):214-222, 1986.
355
+ Sergey Prokudin, Peter Gehler, and Sebastian Nowozin. Deep directional statistics: Pose estimation with uncertainty quantification. In Proceedings of the European conference on computer vision (ECCV), pp. 534-551, 2018.
356
+ Joan Sola, Jeremie Deray, and Dinesh Atchuthan. A micro Lie theory for state estimation in robotics. arXiv preprint arXiv:1812.01537, 2018.
357
+ Zachary Teed and Jia Deng. Tangent space backpropagation for 3d transformation groups. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10338-10347, 2021.
358
+ Shubham Tulsiani and Jitendra Malik. Viewpoints and keypoints. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1510-1519, 2015.
359
+ Benjamin Ummenhofer, Huizhong Zhou, Jonas Uhrig, Nikolaus Mayer, Eddy Ilg, Alexey Dosovitskiy, and Thomas Brox. Demon: Depth and motion network for learning monocular stereo. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5038-5047, 2017.
360
+ Bin Wang, Jie Lu, Zheng Yan, Huaishao Luo, Tianrui Li, Yu Zheng, and Guangquan Zhang. Deep uncertainty quantification: A machine learning approach for weather forecasting. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 2087-2095, 2019a.
361
+ He Wang, Srinath Sridhar, Jingwei Huang, Julien Valentin, Shuran Song, and Leonidas J Guibas. Normalized object coordinate space for category-level 6d object pose and size estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2642-2651, 2019b.
362
+ Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and Jianxiong Xiao. 3d shapenets: A deep representation for volumetric shapes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1912-1920, 2015.
363
+ Yu Xiang, Roozbeh Mottaghi, and Silvio Savarese. Beyond Pascal: A benchmark for 3d object detection in the wild. In IEEE winter conference on applications of computer vision, pp. 75-82. IEEE, 2014.
364
+
365
+ Yu Xiang, Tanner Schmidt, Venkatraman Narayanan, and Dieter Fox. PoseCNN: A convolutional neural network for 6d object pose estimation in cluttered scenes. arXiv preprint arXiv:1711.00199, 2017.
366
+ Anna Yershova, Swati Jain, Steven M LaValle, and Julie C Mitchell. Generating uniform incremental grids on SO(3) using the Hopf fibration. The International Journal of Robotics Research, 29(7):801-812, 2010.
367
+ Yingda Yin, Yingcheng Cai, He Wang, and Baoquan Chen. Fishermatch: Semi-supervised rotation regression via entropy-based filtering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11164-11173, 2022.
368
+ Yi Zhou, Connelly Barnes, Jingwan Lu, Jimei Yang, and Hao Li. On the continuity of rotation representations in neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5745-5753, 2019.
369
+
370
+ # A NOTATIONS AND DEFINITIONS
371
+
372
+ # A.1 NOTATIONS FOR LIE ALGEBRA AND EXPONENTIAL & LOGARITHM MAP
373
+
374
+ This paper follows the common notations for Lie algebra and exponential & logarithm map (Lee, 2018a; Teed & Deng, 2021; Sola et al., 2018).
375
+
376
+ The three-dimensional special orthogonal group $\mathrm{SO}(3)$ is defined as
377
+
378
+ $$
379
+ \mathrm {S O} (3) = \left\{\mathbf {R} \in \mathbb {R} ^ {3 \times 3} | \mathbf {R} \mathbf {R} ^ {T} = \mathbf {I}, \det (\mathbf {R}) = 1 \right\}.
380
+ $$
381
+
382
+ The Lie algebra of $\mathrm{SO}(3)$ , denoted by $\mathfrak{so}(3)$ , is the tangent space of $\mathrm{SO}(3)$ at $\mathbf{I}$ , given by
383
+
384
+ $$
385
+ \mathfrak {s o} (3) = \left\{\Phi \in \mathbb {R} ^ {3 \times 3} | \Phi = - \Phi^ {T} \right\}.
386
+ $$
387
+
388
+ $\mathfrak{so}(3)$ is identified with $(\mathbb{R}^3,\times)$ by the hat $\wedge$ map and the vee $\vee$ map defined as
389
+
390
+ $$
391
+ \mathfrak{so}(3) \ni \begin{bmatrix} 0 & -\phi_{z} & \phi_{y} \\ \phi_{z} & 0 & -\phi_{x} \\ -\phi_{y} & \phi_{x} & 0 \end{bmatrix} \;\overset{\vee}{\underset{\wedge}{\rightleftarrows}}\; \begin{bmatrix} \phi_{x} \\ \phi_{y} \\ \phi_{z} \end{bmatrix} \in \mathbb{R}^{3}
392
+ $$
393
+
394
+ The exponential map, taking skew-symmetric matrices to rotation matrices, is given by
395
+
396
+ $$
397
+ \exp (\hat {\phi}) = \sum_ {k = 0} ^ {\infty} \frac {\hat {\phi} ^ {k}}{k !} = \mathbf {I} + \frac {\sin \theta}{\theta} \hat {\phi} + \frac {1 - \cos \theta}{\theta^ {2}} \hat {\phi} ^ {2},
398
+ $$
399
+
400
+ where $\theta = \| \phi \|$. The exponential map can be inverted by the logarithm map, going from $\mathrm{SO}(3)$ to $\mathfrak{so}(3)$, as
401
+
402
+ $$
403
+ \log (\mathbf {R}) = \frac {\theta}{2 \sin \theta} (\mathbf {R} - \mathbf {R} ^ {T}),
404
+ $$
405
+
406
+ where $\theta = \arccos \frac{\operatorname{tr}(\mathbf{R}) - 1}{2}$ .
407
+
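+ For reference, a minimal NumPy sketch of these maps (the function names are ours), mirroring the formulas above:
+
+ ```python
+ import numpy as np
+
+ def hat(phi):
+     # R^3 -> so(3): the hat map.
+     x, y, z = phi
+     return np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])
+
+ def vee(Phi):
+     # so(3) -> R^3: the vee map, inverse of hat.
+     return np.array([Phi[2, 1], Phi[0, 2], Phi[1, 0]])
+
+ def so3_exp(phi):
+     # Rodrigues' formula for exp(hat(phi)).
+     theta = np.linalg.norm(phi)
+     if theta < 1e-12:
+         return np.eye(3)
+     K = hat(phi)
+     return np.eye(3) + np.sin(theta) / theta * K \
+            + (1.0 - np.cos(theta)) / theta**2 * K @ K
+
+ def so3_log(R):
+     # Logarithm map SO(3) -> so(3); valid away from theta = pi.
+     theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
+     if theta < 1e-12:
+         return np.zeros((3, 3))
+     return theta / (2.0 * np.sin(theta)) * (R - R.T)
+ ```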
408
+ # A.2 HAAR MEASURE
409
+
410
+ To evaluate the normalization factors and therefore the probability density functions, the measure $\mathrm{d}\mathbf{R}$ on $\mathrm{SO}(3)$ needs to be defined. For the Lie group $\mathrm{SO}(3)$, the commonly used bi-invariant measure is referred to as the Haar measure (Haar, 1933; James, 1999). The Haar measure is unique up to scalar multiples (Chirikjian, 2000), and we follow the common practice (Mohlin et al., 2020; Lee, 2018a) of scaling the Haar measure $\mathrm{d}\mathbf{R}$ such that $\int_{\mathrm{SO}(3)}\mathrm{d}\mathbf{R} = 1$.
411
+
412
+ # B MORE ANALYSIS ON GRADIENT W.R.T. OUTLIERS
413
+
414
+ In the task of rotation regression, predictions with very large errors (e.g., near $180^{\circ}$) are frequently observed due to rotational ambiguity or a lack of discriminative visual features. Properly handling these outliers during training is one of the keys to success in probabilistic modeling of rotations.
415
+
416
+ ![](images/a0248b9e2b71e77aa4992c793bfa662102cc46bb192615399beff6084047f354.jpg)
417
+ Figure 3: Visualization of the gradient magnitude $\| \partial \mathcal{L} / \partial (\text{distribution param.})\|$ w.r.t. the prediction errors on ModelNet10-SO3 dataset after convergence.
418
+
419
+ ![](images/4ae564235a091bba0bf2a9c33aac6adbab0098854d069d9e28c4b033e501c918.jpg)
420
+
421
+ ![](images/0dceb63b2156cfe845f90909d90f9337e83057df2e6249c45d334fc608844d4f.jpg)
422
+ Figure 4: Visualization of the indication ability of the distribution entropy w.r.t. the performance. The horizontal axis is the distribution entropy and the vertical axis is the number of data points (in log scale), color coded by the errors (in degrees). The experiments are done on the test set of ModelNet10-SO3 dataset (left) and Pascal3D+ dataset (right).
423
+
424
+ ![](images/a2bd8b950e064be261134516be18ba1837ee1baada5bb1571c18d73c63f1d061.jpg)
425
+
426
+ In Figure 3, for the matrix Fisher distribution and the Rotation Laplace distribution, we visualize the gradient magnitudes $\|\partial\mathcal{L}/\partial(\text{distribution param.})\|$ w.r.t. the prediction errors on the ModelNet10-SO3 dataset after convergence, where each point is a data point in the test set. As shown in the figure, for the matrix Fisher distribution, predictions with larger errors clearly yield larger gradient magnitudes, and those with near $180^{\circ}$ errors (the outliers) have the biggest impact. Given that outliers may be inevitable and hard to fix, they may severely disturb the training process, and this sensitivity to outliers can result in a poor fit (Murphy, 2012; Nair et al., 2022). In contrast, for our Rotation Laplace distribution, the gradient magnitudes are largely unaffected by the prediction errors, leading to a stable learning process.
427
+
428
+ Consistent results can also be seen in Figure 1 of the main paper, where the red dots illustrate the sum of the gradient magnitude over the population within an interval of prediction errors. We argue that, at convergence, the gradient should focus more on the large population with low errors rather than fixing the unavoidable large errors.
429
+
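+ As an illustration, the per-sample gradient magnitudes in Figure 3 can be obtained with autograd; the sketch below reuses the hypothetical `rotation_laplace_nll` helper sketched in Sec. 5.5:
+
+ ```python
+ import torch
+
+ def grad_norm(A, R_gt, R_grid):
+     # ||dL/d(distribution param.)|| for one test sample, as in Figure 3.
+     A = A.detach().clone().requires_grad_(True)
+     rotation_laplace_nll(A, R_gt, R_grid).backward()
+     return A.grad.norm().item()
+ ```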
430
+ # C UNCERTAINTY QUANTIFICATION MEASURED BY DISTRIBUTION ENTROPY
431
+
432
+ Probabilistic modeling of rotation naturally captures the uncertainty of rotation regression. Yin et al. (2022) proposes to use the entropy of the distribution as an uncertainty measure. We adopt it as the uncertainty indicator of the Rotation Laplace distribution and plot the relationship between the prediction error and the corresponding distribution entropy on the test sets of the ModelNet10-SO3 and Pascal3D+ datasets in Figure 4. As shown in the figure, predictions with lower entropies (i.e., lower uncertainty) clearly achieve higher accuracy than predictions with large entropies, demonstrating the uncertainty estimation ability of our Rotation Laplace distribution. We compute the entropy via discretization, where the SO(3) space is quantized into a finite set of equivolumetric grid points $\mathcal{G} = \{\mathbf{R}_i\} \subset \mathrm{SO}(3)$, and
433
+
434
+ $$
435
+ H (p) = - \int_ {\mathrm {S O} (3)} p \log p \mathrm {d} \mathbf {R} \approx - \sum_ {\mathbf {R} _ {i} \in \mathcal {G}} p _ {i} \log p _ {i} \Delta \mathbf {R} _ {i}
436
+ $$
437
+
438
+ We use about 0.3M grid points to discretize the SO(3) space; a minimal sketch of this discretized entropy is given below.
439
+
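+ The sketch assumes $N$ grid cells of equal Haar volume $1/N$:
+
+ ```python
+ import numpy as np
+
+ def entropy_on_grid(log_p):
+     # H(p) ~ -sum_i p_i log(p_i) * dR_i with dR_i = 1/N on an
+     # equivolumetric grid whose total Haar measure is 1.
+     p = np.exp(log_p)  # (N,) normalized densities on the grid
+     return float(-(p * log_p).mean())
+ ```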
440
+ # D EFFECT OF DIFFERENT NUMBERS OF DISCRETIZATION SAMPLES
441
+
442
+ To compute the normalization factor of our distribution, we discretize the SO(3) space into a finite set of equivolumetric grids using the Hopf fibration. Here we show a comparison of different numbers of samples. We experiment on the ModelNet10-SO3 toilet dataset with a single 3090 GPU.
443
+
444
+ Table 6: Comparison of different numbers of discretization samples. The experiment is done on the ModelNet10-SO3 toilet dataset with a single 3090 GPU.
445
+
446
+ <table><tr><td>Number of samples</td><td>Training time (min)↓</td><td>Mean(°)↓</td><td>Med.(°)↓</td><td>Acc@5°↑</td></tr><tr><td>0.6k</td><td>122</td><td>5.8</td><td>2.8</td><td>0.80</td></tr><tr><td>4.6k</td><td>122</td><td>5.3</td><td>2.6</td><td>0.82</td></tr><tr><td>37k</td><td>136</td><td>5.3</td><td>2.6</td><td>0.83</td></tr><tr><td>295k</td><td>168</td><td>5.3</td><td>2.5</td><td>0.82</td></tr></table>
447
+
448
+ Table 7: Per-category results on the ModelNet10-SO3 dataset.
449
+
450
+ <table><tr><td colspan="2"></td><td>avg.</td><td>bathtub</td><td>bed</td><td>chair</td><td>desk</td><td>dresser</td><td>tv</td><td>n. stand</td><td>sofa</td><td>table</td><td>toilet</td></tr><tr><td rowspan="5">Acc@15°↑</td><td>Deng et al. (2022)</td><td>0.562</td><td>0.140</td><td>0.788</td><td>0.800</td><td>0.345</td><td>0.563</td><td>0.708</td><td>0.279</td><td>0.733</td><td>0.440</td><td>0.832</td></tr><tr><td>Prokudin et al. (2018)</td><td>0.456</td><td>0.114</td><td>0.822</td><td>0.662</td><td>0.023</td><td>0.406</td><td>0.704</td><td>0.187</td><td>0.590</td><td>0.108</td><td>0.946</td></tr><tr><td>Mohlin et al. (2020)</td><td>0.693</td><td>0.322</td><td>0.882</td><td>0.881</td><td>0.536</td><td>0.682</td><td>0.790</td><td>0.516</td><td>0.919</td><td>0.446</td><td>0.957</td></tr><tr><td>Murphy et al. (2021)</td><td>0.719</td><td>0.392</td><td>0.877</td><td>0.874</td><td>0.615</td><td>0.687</td><td>0.799</td><td>0.567</td><td>0.914</td><td>0.523</td><td>0.945</td></tr><tr><td>Rotation Laplace</td><td>0.741</td><td>0.390</td><td>0.902</td><td>0.909</td><td>0.644</td><td>0.722</td><td>0.815</td><td>0.590</td><td>0.934</td><td>0.521</td><td>0.977</td></tr><tr><td rowspan="5">Acc@30°↑</td><td>Deng et al. (2022)</td><td>0.694</td><td>0.325</td><td>0.880</td><td>0.908</td><td>0.556</td><td>0.649</td><td>0.807</td><td>0.466</td><td>0.902</td><td>0.485</td><td>0.958</td></tr><tr><td>Prokudin et al. (2018)</td><td>0.528</td><td>0.175</td><td>0.847</td><td>0.777</td><td>0.061</td><td>0.500</td><td>0.788</td><td>0.306</td><td>0.673</td><td>0.183</td><td>0.972</td></tr><tr><td>Mohlin et al. (2020)</td><td>0.757</td><td>0.403</td><td>0.908</td><td>0.935</td><td>0.674</td><td>0.739</td><td>0.863</td><td>0.614</td><td>0.944</td><td>0.511</td><td>0.981</td></tr><tr><td>Murphy et al. (2021)</td><td>0.735</td><td>0.410</td><td>0.883</td><td>0.917</td><td>0.629</td><td>0.688</td><td>0.832</td><td>0.570</td><td>0.921</td><td>0.531</td><td>0.967</td></tr><tr><td>Rotation Laplace</td><td>0.770</td><td>0.430</td><td>0.911</td><td>0.940</td><td>0.698</td><td>0.751</td><td>0.869</td><td>0.625</td><td>0.946</td><td>0.541</td><td>0.986</td></tr><tr><td rowspan="5">Median Error (°)↓</td><td>Deng et al. (2022)</td><td>32.6</td><td>147.8</td><td>9.2</td><td>8.3</td><td>25.0</td><td>11.9</td><td>9.8</td><td>36.9</td><td>10.0</td><td>58.6</td><td>8.5</td></tr><tr><td>Prokudin et al. (2018)</td><td>49.3</td><td>122.8</td><td>3.6</td><td>9.6</td><td>117.2</td><td>29.9</td><td>6.7</td><td>73.0</td><td>10.4</td><td>115.5</td><td>4.1</td></tr><tr><td>Mohlin et al. (2020)</td><td>17.1</td><td>89.1</td><td>4.4</td><td>5.2</td><td>13.0</td><td>6.3</td><td>5.8</td><td>13.5</td><td>4.0</td><td>25.8</td><td>4.0</td></tr><tr><td>Murphy et al. (2021)</td><td>21.5</td><td>161.0</td><td>4.4</td><td>5.5</td><td>7.1</td><td>5.5</td><td>5.7</td><td>7.5</td><td>4.1</td><td>9.0</td><td>4.8</td></tr><tr><td>Rotation Laplace</td><td>12.2</td><td>85.1</td><td>2.3</td><td>3.4</td><td>5.4</td><td>2.7</td><td>3.7</td><td>4.8</td><td>2.1</td><td>9.6</td><td>2.5</td></tr></table>
451
+
452
+ As shown in Table 6, the approximation with too few samples leads to inferior performance, and increasing the number of samples yields better performance at the cost of a longer runtime. The performance improvement saturates once the number of samples is sufficient. We choose to use $37\mathrm{k}$ samples in our experiments; a sketch of the grid construction is given below.
453
+
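+ For concreteness, one standard construction of such a grid (Yershova et al., 2010) crosses a HEALPix grid on $S^2$ with a matching uniform grid on the fiber circle. The sketch below is our own illustration, assuming the `healpy` package; it reproduces the sample counts of Table 6 for levels 1-4:
+
+ ```python
+ import numpy as np
+ import healpy as hp
+
+ def so3_hopf_grid(level):
+     # Equivolumetric SO(3) grid via the Hopf fibration, returned as unit
+     # quaternions; level = 1, 2, 3, 4 gives ~0.6k, 4.6k, 37k, 295k samples.
+     nside = 2 ** level
+     theta, phi = hp.pix2ang(nside, np.arange(hp.nside2npix(nside)))
+     n_psi = 6 * nside
+     psi = np.arange(n_psi) * 2.0 * np.pi / n_psi
+     theta, phi = np.repeat(theta, n_psi), np.repeat(phi, n_psi)
+     psi = np.tile(psi, hp.nside2npix(nside))
+     # Hopf coordinates (theta, phi, psi) -> quaternion.
+     return np.stack([np.cos(theta / 2) * np.cos(psi / 2),
+                      np.cos(theta / 2) * np.sin(psi / 2),
+                      np.sin(theta / 2) * np.cos(phi + psi / 2),
+                      np.sin(theta / 2) * np.sin(phi + psi / 2)], axis=1)
+ ```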
454
+ # E ADDITIONAL RESULTS
455
+
456
+ # E.1 ADDITIONAL NUMERICAL RESULTS
457
+
458
+ Tables 7 and 8 extend the results on the ModelNet10-SO3 and Pascal3D+ datasets in the main paper and show the per-category results. Our prediction with the Rotation Laplace distribution is at or near state-of-the-art on many categories. The numbers for baselines are quoted from Murphy et al. (2021).
459
+
460
+ # E.2 ADDITIONAL VISUAL RESULTS
461
+
462
+ We show additional visual results on the ModelNet10-SO3 dataset in Figure 5 and on the Pascal3D+ dataset in Figure 6. As shown in the figures, our distribution provides rich information about the rotation estimates.
463
+
464
+ To visualize the predicted distributions, we adopt two popular visualization methods used in Mohlin et al. (2020) and Murphy et al. (2021). The visualization in Mohlin et al. (2020) is achieved by summing the three marginal distributions over the standard basis of $\mathbb{R}^3$ and displaying them on the sphere with color coding. Murphy et al. (2021) introduces a new visualization method based on discretization over $\mathrm{SO}(3)$ . It projects a great circle of points on $\mathrm{SO}(3)$ to each point on the 2-sphere, and then uses the color wheel to indicate the location on the great circle. The probability density is shown by the size of the points on the plot. See the corresponding papers for more details.
465
+
466
+ Table 8: Per-category results on Pascal3D+ dataset.
467
+
468
+ <table><tr><td></td><td></td><td>avg.</td><td>aero</td><td>bike</td><td>boat</td><td>bottle</td><td>bus</td><td>car</td><td>chair</td><td>table</td><td>mbike</td><td>sofa</td><td>train</td><td>tv</td></tr><tr><td rowspan="7">Acc@30°↑</td><td>Tulsiani &amp; Malik (2015)</td><td>0.808</td><td>0.81</td><td>0.77</td><td>0.59</td><td>0.93</td><td>0.98</td><td>0.89</td><td>0.80</td><td>0.62</td><td>0.88</td><td>0.82</td><td>0.80</td><td>0.80</td></tr><tr><td>Mahendran et al. (2018)</td><td>0.859</td><td>0.87</td><td>0.81</td><td>0.64</td><td>0.96</td><td>0.97</td><td>0.95</td><td>0.92</td><td>0.67</td><td>0.85</td><td>0.97</td><td>0.82</td><td>0.88</td></tr><tr><td>Liao et al. (2019)</td><td>0.819</td><td>0.82</td><td>0.77</td><td>0.55</td><td>0.93</td><td>0.95</td><td>0.94</td><td>0.85</td><td>0.61</td><td>0.80</td><td>0.95</td><td>0.83</td><td>0.82</td></tr><tr><td>Prokudin et al. (2018)</td><td>0.838</td><td>0.89</td><td>0.83</td><td>0.46</td><td>0.96</td><td>0.93</td><td>0.90</td><td>0.80</td><td>0.76</td><td>0.90</td><td>0.90</td><td>0.82</td><td>0.91</td></tr><tr><td>Mohlin et al. (2020)</td><td>0.825</td><td>0.90</td><td>0.85</td><td>0.57</td><td>0.94</td><td>0.95</td><td>0.96</td><td>0.78</td><td>0.62</td><td>0.87</td><td>0.85</td><td>0.77</td><td>0.84</td></tr><tr><td>Murphy et al. (2021)</td><td>0.837</td><td>0.81</td><td>0.85</td><td>0.56</td><td>0.93</td><td>0.95</td><td>0.94</td><td>0.87</td><td>0.78</td><td>0.85</td><td>0.88</td><td>0.78</td><td>0.86</td></tr><tr><td>Rot. Laplace (Ours)</td><td>0.876</td><td>0.90</td><td>0.90</td><td>0.60</td><td>0.96</td><td>0.98</td><td>0.96</td><td>0.91</td><td>0.76</td><td>0.88</td><td>0.97</td><td>0.81</td><td>0.88</td></tr><tr><td rowspan="7">Median error (°)↓</td><td>Tulsiani &amp; Malik (2015)</td><td>13.6</td><td>13.8</td><td>17.7</td><td>21.3</td><td>12.9</td><td>5.8</td><td>9.1</td><td>14.8</td><td>15.2</td><td>14.7</td><td>13.7</td><td>8.7</td><td>15.4</td></tr><tr><td>Mahendran et al. (2018)</td><td>10.1</td><td>8.5</td><td>14.8</td><td>20.5</td><td>7.0</td><td>3.1</td><td>5.1</td><td>9.3</td><td>11.3</td><td>14.2</td><td>10.2</td><td>5.6</td><td>11.7</td></tr><tr><td>Liao et al. (2019)</td><td>13.0</td><td>13.0</td><td>16.4</td><td>29.1</td><td>10.3</td><td>4.8</td><td>6.8</td><td>11.6</td><td>12.0</td><td>17.1</td><td>12.3</td><td>8.6</td><td>14.3</td></tr><tr><td>Prokudin et al. (2018)</td><td>12.2</td><td>9.7</td><td>15.5</td><td>45.6</td><td>5.4</td><td>2.9</td><td>4.5</td><td>13.1</td><td>12.6</td><td>11.8</td><td>9.1</td><td>4.3</td><td>12.0</td></tr><tr><td>Mohlin et al. (2020)</td><td>11.5</td><td>10.1</td><td>15.6</td><td>24.3</td><td>7.8</td><td>3.3</td><td>5.3</td><td>13.5</td><td>12.5</td><td>12.9</td><td>13.8</td><td>7.4</td><td>11.7</td></tr><tr><td>Murphy et al. (2021)</td><td>10.3</td><td>10.8</td><td>12.9</td><td>23.4</td><td>8.8</td><td>3.4</td><td>5.3</td><td>10.0</td><td>7.3</td><td>13.6</td><td>9.5</td><td>6.4</td><td>12.3</td></tr><tr><td>Rot. Laplace (Ours)</td><td>9.4</td><td>8.6</td><td>11.7</td><td>21.8</td><td>6.9</td><td>2.8</td><td>4.8</td><td>7.9</td><td>9.1</td><td>12.2</td><td>8.1</td><td>6.9</td><td>11.6</td></tr></table>
469
+
470
+ ![](images/5865edb11490b06eb72f7814ead94197197a8a47be1631aec3e42f3199187d6f.jpg)
471
+ Figure 5: Visual results on the ModelNet10-SO3 dataset. We adopt the distribution visualization methods of Mohlin et al. (2020) and Murphy et al. (2021). For input images and visualizations with Mohlin et al. (2020), predicted rotations are shown with thick lines and the ground truths with thin lines. For visualizations with Murphy et al. (2021), the ground truths are shown by solid circles.
472
+
473
+ # F DERIVATIONS
474
+
475
+ Proposition 1 in the main paper. Let $\Phi = \log \tilde{\mathbf{R}}\in \mathfrak{so}(3)$ and $\phi = \Phi^{\vee}\in \mathbb{R}^{3}$. For a rotation matrix $\mathbf{R}\in \mathrm{SO}(3)$ following the matrix Fisher distribution, when $\| \mathbf{R} - \mathbf{R}_0\| \to 0$, $\phi$ follows a zero-mean multivariate Gaussian distribution.
476
+
477
+ Proof. For $\mathbf{R} \sim \mathcal{MF}(\mathbf{A})$ , we have
478
+
479
+ $$
480
+ p (\mathbf {R}) \mathrm {d} \mathbf {R} \propto \exp \left(\operatorname {t r} \left(\mathbf {A} ^ {\mathrm {T}} \mathbf {R}\right)\right) \mathrm {d} \mathbf {R} = \exp \left(\operatorname {t r} \left(\mathbf {S V} ^ {\mathrm {T}} \widetilde {\mathbf {R}} \mathbf {V}\right)\right) \mathrm {d} \widetilde {\mathbf {R}} \tag {9}
481
+ $$
482
+
483
+ ![](images/fa1fc2a70e4c468fb2bceb55316de7ae0c21ebf438e6aa02fa100a0e7e5501c1.jpg)
484
+
485
+ ![](images/c8cb991937dee0d758993312b5c74b955cb3716495fcb92448605dc9fdcdb1c7.jpg)
486
+
487
+ ![](images/b5f9d075ac1996c8a509725e3a6de683d82754784ea03e7b091caeb3aceaf8c6.jpg)
488
+
489
+ ![](images/6b02518f8cebcc40733f458737d2ce42bffffedc9fe0fdec2e46962ec8fb6155.jpg)
490
+
491
+ ![](images/e6d131d5e0dcfc0424db209107096e84c717616cdf3b638c4cce68fe181f9825.jpg)
492
+
493
+ ![](images/5c6d55e5a0e4e74f7673342fa70b2c7e24796d9be6b105f16668b2c8cd6810c5.jpg)
494
+ Input image
495
+
496
+ ![](images/8af2c86279c55e0ec74a3fc870198f1d532c98b6557dd0c36407fc4ecadd84c4.jpg)
497
+
498
+ ![](images/db951f576cfd5ce233fde8b59ceb7fa9a90328d27517d82699c38b9c9397425a.jpg)
499
+
500
+ ![](images/63c4c1237d8d1b0f6a3e9de2dfd0f1196eef298de752dbb4685cee5d5704d3b5.jpg)
501
+
502
+ ![](images/130c1efcdec03b442f265b085a4e155c909a149975a8849af60cbbf5b3bc7091.jpg)
503
+
504
+ ![](images/1504364094f77c6b259940e3e9ad1e1553997545cbb2224ac5390e028fc9ab7b.jpg)
505
+
506
+ ![](images/11dc555174831f8b1dbcff2b41ec67fa87a016ca8405d245c7e34b96bdf1afa7.jpg)
507
+ Distribution visual. (Mohlin et al., 2020)
508
+
509
+ ![](images/f5ced818d9be92afb14806a0102857016a95d9e2d638ff3a8ef3b9adaa147721.jpg)
510
+
511
+ ![](images/1b5e5bbba5174b7f70e1b3b619ebeba8c0df4ae1e2ab8a6eeac6a4ee13ced38f.jpg)
512
+
513
+ ![](images/faa60af60d5dc361c4348b89de6e019819fa472daf899300ad430914faaaa8ee.jpg)
514
+
515
+ ![](images/b9e47a6634441a16aa8f52b61e9d869375627d987a75be92c98b0f0129e0715a.jpg)
516
+
517
+ ![](images/4ae83f107739ddd81a62e6e563b24ad80f288146413a890f47b9b589449f39e6.jpg)
518
+
519
+ ![](images/e69407854167163e19c2b4e22f1e50034f0d3a59de87a2b8a335d252b3ddc6b8.jpg)
520
+ Distribution visual (Murphy et al., 2021)
521
+
522
+ ![](images/02de16dc716421191b0614993f6213de5d82a21b6d4444d5c2a1741d3a115e2f.jpg)
523
+
524
+ ![](images/75feeed1ec648229d3733157204805b0c5051a48c4346f43a5934b57869bdae9.jpg)
525
+
526
+ ![](images/7182a9fd5d7c46c7ffbf6e673a3c9e02ce6da4cc23445874f3128b73a173190a.jpg)
527
+
528
+ ![](images/5fdc081d1c27c4ceb22ec5753ebb9b90fd2f83946cc99f76775ff49f0130753c.jpg)
529
+
530
+ ![](images/228c3abec61164d1419580d13d81b4176c9460c85ef65c886ba77d9cfff7ae2f.jpg)
531
+
532
+ ![](images/63cc86a599b4e1dd42a09bb0d8b2f54c4cffea571305c8ebcb24a0f35a820d9c.jpg)
533
+ Input image
534
+ Figure 6: Visual results on the Pascal3D+ dataset. We adopt the distribution visualization methods of Mohlin et al. (2020) and Murphy et al. (2021). For input images and visualizations with Mohlin et al. (2020), predicted rotations are shown with thick lines and the ground truths with thin lines. For visualizations with Murphy et al. (2021), the ground truths are shown by solid circles.
535
+
536
+ ![](images/fea4fb5d92bdcd3d0d84bbbfdecd85bb7718a496d2b4ebe070264ca601194060.jpg)
537
+
538
+ ![](images/70151224bf3cd1f9740401030621d1b8db46548ced9c9065fd281df1831aea44.jpg)
539
+
540
+ ![](images/c18e3fde3f05231d03433922e80217989a246757a20f5aea6b100fe77a4b05de.jpg)
541
+
542
+ ![](images/c54692ad423cc4f5dedee8c0d919bd1ac3960918682d123f50434ba0fc31e850.jpg)
543
+
544
+ ![](images/597c923604d373b15570c896fb1e16537a006260a1fa00eb8ef8badd6df8d635.jpg)
545
+
546
+ ![](images/a70d2a8f25f4f6388a1086691c2f3b9189854b0aeb26d3759c9160e3af68090c.jpg)
547
+ Distribution visual. (Mohlin et al., 2020)
548
+
549
+ ![](images/f96852a3b626d630ac2eadd8f5ea7818c4b7401e72fbdf10342081a17b8baf5f.jpg)
550
+
551
+ ![](images/0a5c2e6ddfa3118b74d2c5861ee85d6179ddd3cf0c241ce63e8bc1d66fb2db38.jpg)
552
+
553
+ ![](images/a3ca82e2f62802a1f3fc67617d167aadb91640ac273bc9d1a6d336156d4e13d4.jpg)
554
+
555
+ ![](images/57cea2ee39a3cca98d5a4b31c073036d92e959009f7471eefabb04c920fbc212.jpg)
556
+
557
+ ![](images/d4fda1963963fd62242bdac45bd8b89f7f4d23498243f7b963b46e587650e51a.jpg)
558
+
559
+ ![](images/644ed3678f9fcbcd5cb2ddd9155f39eaa1e80df1d6c95669913c803ab7d7bb2a.jpg)
560
+ Distribution visual (Murphy et al., 2021)
561
+
562
+ Considering Eq. 5 in the main paper, we have
563
+
564
+ $$
565
+ \begin{aligned} \operatorname{tr}\left(\mathbf{S}\mathbf{V}^{\mathrm{T}}\widetilde{\mathbf{R}}\mathbf{V}\right) &= \operatorname{tr}(\mathbf{S}) - \frac{1}{2}\sum_{(i,j,k)\in I}\left(s_{j}+s_{k}\right)\mu_{i}^{2} + O\left(\|\boldsymbol{\phi}\|^{3}\right) \\ &= \operatorname{tr}(\mathbf{S}) - \frac{1}{2}\boldsymbol{\phi}^{T}\mathbf{V}\begin{bmatrix} s_{2}+s_{3} & & \\ & s_{1}+s_{3} & \\ & & s_{1}+s_{2} \end{bmatrix}\mathbf{V}^{T}\boldsymbol{\phi} + O\left(\|\boldsymbol{\phi}\|^{3}\right) \end{aligned} \tag{10}
566
+ $$
567
+
568
+ Thus
569
+
570
+ $$
571
+ \begin{aligned} p(\mathbf{R})\,\mathrm{d}\mathbf{R} &\propto \exp\left(\operatorname{tr}\left(\mathbf{A}^{\mathrm{T}}\mathbf{R}\right)\right)\mathrm{d}\mathbf{R} \\ &= \frac{\exp\left(\operatorname{tr}(\mathbf{S})\right)}{8\pi^{2}}\exp\left(-\frac{1}{2}\boldsymbol{\phi}^{T}\boldsymbol{\Sigma}^{-1}\boldsymbol{\phi}\right)\left(1+O\left(\|\boldsymbol{\phi}\|^{2}\right)\right)\mathrm{d}\boldsymbol{\phi} \end{aligned} \tag{11}
572
+ $$
573
+
574
+ When $\| \mathbf{R} - \mathbf{R}_0\| \to 0$, we have $\| \widetilde{\mathbf{R}} -\mathbf{I}\| \to 0$ and $\phi \rightarrow 0$, so Eq. 11 follows the multivariate Gaussian distribution with covariance matrix $\boldsymbol{\Sigma} = \mathbf{V}\operatorname{diag}\left(\frac{1}{s_2 + s_3},\frac{1}{s_1 + s_3},\frac{1}{s_1 + s_2}\right)\mathbf{V}^T$.
575
+
576
+ Proposition 3 in the main paper. Denote $\mathbf{q}_0$ as the mode of the quaternion Laplace distribution. Let $\pi$ be the tangent space of $\mathbb{S}^3$ at $\mathbf{q}_0$, and $\pi(\mathbf{x}) \in \mathbb{R}^4$ be the projection of $\mathbf{x} \in \mathbb{R}^4$ on $\pi$. For a quaternion $\mathbf{q} \in \mathbb{S}^3$ following the Bingham distribution / quaternion Laplace distribution, when $\mathbf{q} \to \mathbf{q}_0$, $\pi(\mathbf{q})$ follows a zero-mean multivariate Gaussian distribution / zero-mean multivariate Laplace distribution.
577
+
578
+ Proof. Denote $\mathbf{q}_{\mathbf{I}} = (1,0,0,0)^T$ as the identity quaternion. Define $\mathbf{M}$ as an orthogonal matrix such that $\mathbf{M}^T\mathbf{q}_0 = \mathbf{q}_{\mathbf{I}}$ . Given $\pi (\mathbf{q}) = \mathbf{q} - (\mathbf{q}\cdot \mathbf{q}_0)\mathbf{q}_0$ , we have
579
+
580
+ $$
581
+ \mathbf {M} ^ {T} \pi (\mathbf {q}) = \mathbf {M} ^ {T} \mathbf {q} - ((\mathbf {M} ^ {T} \mathbf {q}) \cdot (\mathbf {M} ^ {T} \mathbf {q} _ {0})) \mathbf {q} _ {\mathbf {I}} = \mathbf {M} ^ {T} \mathbf {q} - w \mathbf {q} _ {\mathbf {I}}, \tag {12}
582
+ $$
583
+
584
+ where $\mathbf{M}^T\mathbf{q} = (w,x,y,z)^T$. Let $(\mathbf{e}_0,\mathbf{e}_1,\mathbf{e}_2,\mathbf{e}_3)$ be the column vectors of $\mathbf{I}_{4\times 4}$; then we have
585
+
586
+ $$
587
+ \left(\mathbf {M} \mathbf {e} _ {i}\right) \cdot \mathbf {q} _ {0} = \mathbf {e} _ {i} \cdot \mathbf {q} _ {\mathbf {I}} = 0 \tag {13}
588
+ $$
589
+
590
+ for $i = 1,2,3$ . Therefore, $\mathbf{Me}_i(i = 1,2,3)$ form an orthogonal basis of $\pi$ .
591
+
592
+ Given $\mathbf{M}^T\mathbf{q} = w\mathbf{e}_0 + x\mathbf{e}_1 + y\mathbf{e}_2 + z\mathbf{e}_3$ , we have
593
+
594
+ $$
595
+ \mathbf {q} = w \left(\mathbf {M e} _ {0}\right) + x \left(\mathbf {M e} _ {1}\right) + y \left(\mathbf {M e} _ {2}\right) + z \left(\mathbf {M e} _ {3}\right) \tag {14}
596
+ $$
597
+
598
+ Therefore, $\pmb {\eta} = (x,y,z)$ gives the coordinates of $\pi (\mathbf{q})$ in $\pi$ under the basis $\{\mathbf{Me}_i\}$.
599
+
600
+ The Jacobian of the transformation $\mathbf{q} \rightarrow \boldsymbol{\eta}$ is given by
601
+
602
+ $$
603
+ \mathbf{J} = \frac{\partial\mathbf{q}}{\partial\boldsymbol{\eta}} = \mathbf{M}\,\frac{\partial\left(\mathbf{M}^{T}\mathbf{q}\right)}{\partial\boldsymbol{\eta}} = \mathbf{M}\begin{bmatrix} -x/w & -y/w & -z/w \\ 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \tag{15}
604
+ $$
605
+
606
+ Therefore, the scaling factor from $\eta$ to $\mathbf{q}$ is given by
607
+
608
+ $$
609
+ \frac{\mathrm{d}\mathbf{q}}{\mathrm{d}\boldsymbol{\eta}} = \sqrt{\det\left(\mathbf{J}^{T}\mathbf{J}\right)} = \sqrt{1+\frac{x^{2}+y^{2}+z^{2}}{w^{2}}} = 1+O\left(\|\boldsymbol{\eta}\|^{2}\right). \tag{16}
610
+ $$
611
+
612
+ Thus
613
+
614
+ $$
615
+ \begin{aligned} \mathbf{q}^{T}\mathbf{M}\mathbf{Z}\mathbf{M}^{T}\mathbf{q} &= \begin{bmatrix} w & x & y & z \end{bmatrix}\begin{bmatrix} 0 & & & \\ & z_{1} & & \\ & & z_{2} & \\ & & & z_{3} \end{bmatrix}\begin{bmatrix} w \\ x \\ y \\ z \end{bmatrix} \\ &= \begin{bmatrix} x & y & z \end{bmatrix}\begin{bmatrix} z_{1} & & \\ & z_{2} & \\ & & z_{3} \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \boldsymbol{\eta}^{T}\widetilde{\mathbf{Z}}\boldsymbol{\eta} \end{aligned} \tag{17}
616
+ $$
617
+
618
+ where we define $\widetilde{\mathbf{Z}} = \mathrm{diag}(z_1, z_2, z_3)$ .
619
+
620
+ For Bingham distribution, we have
621
+
622
+ $$
623
+ \begin{array}{l} p (\mathbf {q}) \mathrm {d} \mathbf {q} \propto \exp \left(\mathbf {q} ^ {T} \mathbf {M} \mathbf {Z} \mathbf {M} ^ {T} \mathbf {q}\right) \mathrm {d} \mathbf {q} \\ = \exp \left(\boldsymbol {\eta} ^ {T} \widetilde {\mathbf {Z}} \boldsymbol {\eta}\right) (1 + O \left(\| \boldsymbol {\eta} \| ^ {2}\right)) \mathrm {d} \boldsymbol {\eta} \tag {18} \\ = \exp \left(- \boldsymbol {\eta} ^ {T} \boldsymbol {\Sigma} ^ {- 1} \boldsymbol {\eta}\right) \left(1 + O \left(\| \boldsymbol {\eta} \| ^ {2}\right)\right) \mathrm {d} \boldsymbol {\eta} \\ \end{array}
624
+ $$
625
+
626
+ which follows the multivariate Gaussian distribution with the covariance matrix as $\pmb{\Sigma}$ , where $\pmb{\Sigma} = -\mathrm{diag}\left(\frac{1}{z_1},\frac{1}{z_2},\frac{1}{z_3}\right)$
627
+
628
+ For Quaternion Laplace distribution, we have
629
+
630
+ $$
631
+ \begin{array}{l} p (\mathbf {q}) \mathrm {d} \mathbf {q} \propto \frac {\exp \left(- \sqrt {- \mathbf {q} ^ {T} \mathbf {M Z M} ^ {T} \mathbf {q}}\right)}{\sqrt {- \mathbf {q} ^ {T} \mathbf {M Z M} ^ {T} \mathbf {q}}} \mathrm {d} \mathbf {q} \\ = \frac {1}{\sqrt {2}} \frac {\exp \left(- \sqrt {- \boldsymbol {\eta} ^ {T} \widetilde {\mathbf {Z}} \boldsymbol {\eta}}\right)}{\sqrt {- \boldsymbol {\eta} ^ {T} \widetilde {\mathbf {Z}} \boldsymbol {\eta}}} (1 + O (\| \boldsymbol {\eta} \| ^ {2})) d \boldsymbol {\eta} \tag {19} \\ = \frac {1}{\sqrt {2}} \frac {\exp \left(- \sqrt {2 \boldsymbol {\eta} ^ {T} \boldsymbol {\Sigma} ^ {- 1} \boldsymbol {\eta}}\right)}{\sqrt {2 \boldsymbol {\eta} ^ {T} \boldsymbol {\Sigma} ^ {- 1} \boldsymbol {\eta}}} (1 + O (\| \boldsymbol {\eta} \| ^ {2})) d \boldsymbol {\eta} \\ \end{array}
632
+ $$
633
+
634
+ which follows a multivariate Laplace distribution with covariance matrix $\Sigma = -2\,\mathrm{diag}\left(\frac{1}{z_1},\frac{1}{z_2},\frac{1}{z_3}\right)$.
635
+
636
+ Proposition 4 in the main paper. Denote by $\gamma$ the standard transformation from unit quaternions to the corresponding rotation matrices. For a rotation matrix $\mathbf{R} \in \mathrm{SO}(3)$ following the Rotation Laplace distribution, $\mathbf{q} = \gamma^{-1}(\mathbf{R}) \in \mathbb{S}^3$ follows the quaternion Laplace distribution.
637
+
638
+ Proof. For a quaternion $\mathbf{q} = [q_0, q_1, q_2, q_3]$ , we use the standard transform function $\gamma$ to compute its corresponding rotation matrix:
639
+
640
+ $$
641
+ \gamma (\mathbf {q}) = \left[ \begin{array}{l l l} 1 - 2 q _ {2} ^ {2} - 2 q _ {3} ^ {2} & 2 q _ {1} q _ {2} - 2 q _ {0} q _ {3} & 2 q _ {1} q _ {3} + 2 q _ {0} q _ {2} \\ 2 q _ {1} q _ {2} + 2 q _ {0} q _ {3} & 1 - 2 q _ {1} ^ {2} - 2 q _ {3} ^ {2} & 2 q _ {2} q _ {3} - 2 q _ {0} q _ {1} \\ 2 q _ {1} q _ {3} - 2 q _ {0} q _ {2} & 2 q _ {2} q _ {3} + 2 q _ {0} q _ {1} & 1 - 2 q _ {1} ^ {2} - 2 q _ {2} ^ {2} \end{array} \right] \tag {20}
642
+ $$
643
+
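+ Transcribing Eq. 20 directly (a sketch of ours) also makes the double cover used below explicit, since `gamma(q)` and `gamma(-q)` return the same matrix:
+
+ ```python
+ import numpy as np
+
+ def gamma(q):
+     # Unit quaternion (q0, q1, q2, q3) -> rotation matrix, Eq. 20.
+     q0, q1, q2, q3 = q
+     return np.array([
+         [1 - 2*q2**2 - 2*q3**2, 2*q1*q2 - 2*q0*q3,     2*q1*q3 + 2*q0*q2],
+         [2*q1*q2 + 2*q0*q3,     1 - 2*q1**2 - 2*q3**2, 2*q2*q3 - 2*q0*q1],
+         [2*q1*q3 - 2*q0*q2,     2*q2*q3 + 2*q0*q1,     1 - 2*q1**2 - 2*q2**2],
+     ])
+ ```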
644
+ Let $\mathbf{u} = \gamma^{-1}(\mathbf{U}),\mathbf{v} = \gamma^{-1}(\mathbf{V})$ and
645
+
646
+ $$
647
+ \widetilde {\mathbf {q}} = \left[ \widetilde {q} _ {0}, \widetilde {q} _ {1}, \widetilde {q} _ {2}, \widetilde {q} _ {3} \right] ^ {T} = \gamma^ {- 1} \left(\mathbf {U} ^ {T} \mathbf {R} \mathbf {V}\right) = \overline {{\mathbf {u}}} \mathbf {q} \mathbf {v} \tag {21}
648
+ $$
649
+
650
+ Note that the transformation $\mathbf{q} \rightarrow \overline{\mathbf{u}}\mathbf{q}\mathbf{v}$ is an orthogonal transformation on $\mathbb{S}^3$. Therefore, there exists an orthogonal matrix $\mathbf{M}$ such that
651
+
652
+ $$
653
+ \mathbf {M} ^ {T} \mathbf {q} = \overline {{\mathbf {u}}} \mathbf {q} \mathbf {v} = \widetilde {\mathbf {q}} \tag {22}
654
+ $$
655
+
656
+ The scaling factor from quaternions to rotation matrices is given by
657
+
658
+ $$
659
+ \mathrm {d} \mathbf {R} = \frac {1}{2 \pi^ {2}} \mathrm {d} \mathbf {q} \tag {23}
660
+ $$
661
+
662
+ Suppose $\mathbf{R}$ follows the Rotation Laplace distribution:
663
+
664
+ $$
665
+ p (\mathbf {R}) \mathrm {d} \mathbf {R} = \frac {1}{F} \frac {\exp \left(- \sqrt {\operatorname {t r} \left(\mathbf {S} - \mathbf {A} ^ {\mathrm {T}} \mathbf {R}\right)}\right)}{\sqrt {\operatorname {t r} \left(\mathbf {S} - \mathbf {A} ^ {\mathrm {T}} \mathbf {R}\right)}} \mathrm {d} \mathbf {R} \tag {24}
666
+ $$
667
+
668
+ Given
669
+
670
+ $$
671
+ \operatorname{tr}\left(\mathbf{S}-\mathbf{A}^{\mathrm{T}}\mathbf{R}\right) = \operatorname{tr}\left(\mathbf{S}-\mathbf{S}\mathbf{U}^{\mathrm{T}}\mathbf{R}\mathbf{V}\right) = \sum_{(i,j,k)\in I} 2\left(s_{j}+s_{k}\right)\widetilde{q}_{i}^{2} = 2\,\widetilde{\mathbf{q}}^{T}\begin{bmatrix} 0 & & & \\ & s_{2}+s_{3} & & \\ & & s_{1}+s_{3} & \\ & & & s_{1}+s_{2} \end{bmatrix}\widetilde{\mathbf{q}} \tag{25}
672
+ $$
673
+
674
+ we have
675
+
676
+ $$
677
+ \begin{aligned} p(\mathbf{R})\,\mathrm{d}\mathbf{R} &= \frac{1}{2\pi^{2}F}\,\frac{\exp\left(-\sqrt{2\,\widetilde{\mathbf{q}}^{T}\mathbf{D}\widetilde{\mathbf{q}}}\right)}{\sqrt{2\,\widetilde{\mathbf{q}}^{T}\mathbf{D}\widetilde{\mathbf{q}}}}\,\mathrm{d}\mathbf{q} = \frac{1}{2\pi^{2}F}\,\frac{\exp\left(-\sqrt{2\,\mathbf{q}^{T}\mathbf{M}\mathbf{D}\mathbf{M}^{T}\mathbf{q}}\right)}{\sqrt{2\,\mathbf{q}^{T}\mathbf{M}\mathbf{D}\mathbf{M}^{T}\mathbf{q}}}\,\mathrm{d}\mathbf{q} \\ &= \frac{1}{2\pi^{2}F}\,\frac{\exp\left(-\sqrt{-\mathbf{q}^{T}\mathbf{M}\mathbf{Z}\mathbf{M}^{T}\mathbf{q}}\right)}{\sqrt{-\mathbf{q}^{T}\mathbf{M}\mathbf{Z}\mathbf{M}^{T}\mathbf{q}}}\,\mathrm{d}\mathbf{q}, \end{aligned} \tag{26}
678
+ $$
679
+
680
+ where $\mathbf{D} = \mathrm{diag}(0,\,s_2 + s_3,\,s_1 + s_3,\,s_1 + s_2)$ denotes the diagonal matrix from Eq. 25, $\mathbf{M}$ is an orthogonal matrix, and $\mathbf{Z} = -2\,\mathbf{D} = -2\,\mathrm{diag}(0,s_2 + s_3,s_1 + s_3,s_1 + s_2)$ is a $4\times 4$ diagonal matrix.
681
+
682
+ # Elaboration of Eq. 3 in the main paper
683
+
684
+ Given $\mathbf{R}_0 = \mathbf{U}\mathbf{V}^T$ and $\widetilde{\mathbf{R}} = \mathbf{R}_0^T\mathbf{R}$,
685
+
686
+ $$
687
+ \begin{aligned} p(\mathbf{R})\,\mathrm{d}\mathbf{R} &\propto \frac{\exp\left(-\sqrt{\operatorname{tr}\left(\mathbf{S}-\mathbf{A}^{T}\mathbf{R}\right)}\right)}{\sqrt{\operatorname{tr}\left(\mathbf{S}-\mathbf{A}^{T}\mathbf{R}\right)}}\,\mathrm{d}\mathbf{R} = \frac{\exp\left(-\sqrt{\operatorname{tr}\left(\mathbf{S}-\mathbf{V}\mathbf{S}\mathbf{U}^{T}\mathbf{R}\right)}\right)}{\sqrt{\operatorname{tr}\left(\mathbf{S}-\mathbf{V}\mathbf{S}\mathbf{U}^{T}\mathbf{R}\right)}}\,\mathrm{d}\mathbf{R} = \frac{\exp\left(-\sqrt{\operatorname{tr}\left(\mathbf{S}-\mathbf{S}\mathbf{U}^{T}\mathbf{R}\mathbf{V}\right)}\right)}{\sqrt{\operatorname{tr}\left(\mathbf{S}-\mathbf{S}\mathbf{U}^{T}\mathbf{R}\mathbf{V}\right)}}\,\mathrm{d}\mathbf{R} \\ &= \frac{\exp\left(-\sqrt{\operatorname{tr}\left(\mathbf{S}-\mathbf{S}\mathbf{U}^{T}\mathbf{R}_{0}\widetilde{\mathbf{R}}\mathbf{V}\right)}\right)}{\sqrt{\operatorname{tr}\left(\mathbf{S}-\mathbf{S}\mathbf{U}^{T}\mathbf{R}_{0}\widetilde{\mathbf{R}}\mathbf{V}\right)}}\,\mathrm{d}\mathbf{R} = \frac{\exp\left(-\sqrt{\operatorname{tr}\left(\mathbf{S}-\mathbf{S}\mathbf{V}^{T}\widetilde{\mathbf{R}}\mathbf{V}\right)}\right)}{\sqrt{\operatorname{tr}\left(\mathbf{S}-\mathbf{S}\mathbf{V}^{T}\widetilde{\mathbf{R}}\mathbf{V}\right)}}\,\mathrm{d}\mathbf{R} \end{aligned} \tag{27}
688
+ $$
689
+
690
+ # G MORE IMPLEMENTATION DETAILS
691
+
692
+ For fair comparisons, we follow the implementation designs of Mohlin et al. (2020) and merely change the distribution from the matrix Fisher distribution to our Rotation Laplace distribution. We use a pretrained ResNet-101 as our backbone and encode the object class information (for the single-model-all-category experiments) with an embedding layer that produces a 32-dimensional vector. A 512-512-9 MLP serves as the output layer.
693
+
694
+ The batch size is set to 32. We use the SGD optimizer with an initial learning rate of 0.01. For the ModelNet10-SO3 dataset, we train for 50 epochs with the learning rate decaying by a factor of 10 at epochs 30, 40, and 45. For the Pascal3D+ dataset, we train for 120 epochs with the same learning rate decay at epochs 30, 60, and 90.
2023/A Laplace-inspired Distribution on SO(3) for Probabilistic Rotation Estimation/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c50627d519666bc3a8c5db60ee8a844a832c67a5baf17a4dd78427ede09384ab
3
+ size 1171772
2023/A Laplace-inspired Distribution on SO(3) for Probabilistic Rotation Estimation/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Minimalist Dataset for Systematic Generalization of Perception, Syntax, and Semantics/99bfc8d6-393c-4dd5-956f-bfee4aa8668a_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Minimalist Dataset for Systematic Generalization of Perception, Syntax, and Semantics/99bfc8d6-393c-4dd5-956f-bfee4aa8668a_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Minimalist Dataset for Systematic Generalization of Perception, Syntax, and Semantics/99bfc8d6-393c-4dd5-956f-bfee4aa8668a_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:72a2be8c846e86b3e77ad35f2d14a359a63fc8341272106198fef066de65f0b7
3
+ size 1054225
2023/A Minimalist Dataset for Systematic Generalization of Perception, Syntax, and Semantics/full.md ADDED
@@ -0,0 +1,354 @@
1
+ # A MINIMALIST DATASET FOR SYSTEMATIC GENERALIZATION OF PERCEPTION, SYNTAX, AND SEMANTICS
2
+
3
+ Qing Li $^{1}$ , Siyuan Huang $^{1}$ , Yining Hong $^{2}$ , Yixin Zhu $^{3}$ , Ying Nian Wu $^{2}$ , Song-Chun Zhu $^{1,2,3}$
4
+
5
+ $^{1}$ National Key Laboratory of General Artificial Intelligence, BIGAI
6
+ $^{2}$ Center for Vision, Cognition, Learning, and Autonomy (VCLA), UCLA
7
+ $^{3}$ Institute for Artificial Intelligence, Peking University
8
+
9
+ https://liqing-ustc.github.io/HINT
10
+
11
+ # ABSTRACT
12
+
13
+ Inspired by humans' exceptional ability to master arithmetic and generalize to new problems, we present a new dataset, Handwritten arithmetic with INTegers (HINT), to examine machines' capability of learning generalizable concepts at three levels: perception, syntax, and semantics. In HINT, machines are tasked with learning how concepts are perceived from raw signals such as images (i.e., perception), how multiple concepts are structurally combined to form a valid expression (i.e., syntax), and how concepts are realized to afford various reasoning tasks (i.e., semantics), all in a weakly supervised manner. Focusing on systematic generalization, we carefully design a five-fold test set to evaluate both the interpolation and the extrapolation of learned concepts w.r.t. the three levels. Further, we design a few-shot learning split to determine whether or not models can rapidly learn new concepts and generalize them to more complex scenarios. To comprehend existing models' limitations, we undertake extensive experiments with various sequence-to-sequence models, including RNNs, Transformers, and GPT-3 (with the chain of thought prompting). The results indicate that current models struggle to extrapolate to long-range syntactic dependency and semantics. Models exhibit a considerable gap toward human-level generalization when evaluated with new concepts in a few-shot setting. Moreover, we discover that it is infeasible to solve HINT by merely scaling up the dataset and the model size; this strategy contributes little to the extrapolation of syntax and semantics. Finally, in zero-shot GPT-3 experiments, the chain of thought prompting exhibits impressive results and significantly boosts the test accuracy. We believe the HINT dataset and the experimental findings are of great interest to the learning community on systematic generalization.
14
+
15
+ # 1 INTRODUCTION
16
+
17
+ Humans possess a versatile mechanism for learning concepts from data (Firestone & Scholl, 2016). Suppose, for example, that we were tasked with deciphering ancient Egyptian signs based on the examples in Table 1. Given sufficient time, we may comprehend these signs by learning how to recognize them (what each sign looks like) at the perceptual level, how to compose them into valid sequences at the syntactic level, and how to predict the results at the semantic level. Learning concepts relies heavily on these three interwoven levels of meaning. Such an observation is also consistent with the classic view of human cognition, which postulates at least three distinct levels of organization in computation systems (Pylyshyn, 1984).
18
+
19
+ <table><tr><td colspan="2">Train</td><td>Test</td></tr><tr><td>○△△△→60</td><td>○△△△→18</td><td>○△△△△→?</td></tr><tr><td>○△△△△→100</td><td>○△△△△△→16</td><td>○△△△△△→?</td></tr><tr><td>○△△△→12</td><td>○△△△△△△→41</td><td>○△△△△△△→?</td></tr><tr><td>○△△△△△→4</td><td>○△△△△△→4</td><td>○△△△△△△△△→?</td></tr><tr><td>○△△△△△→26</td><td>○△△△→17</td><td>○△△△△△△△△△△△→?</td></tr></table>
20
+
21
+ Table 1: Can you decipher these ancient Egyptian signs from training examples and apply them to test cases? Interested readers can refer to the website, https://liqing-ustc.github.io/HINT/Egyptian, for more training and test samples with the ground-truth meaning for each sign. We strongly encourage the readers to play this game prior to reviewing the answers.
22
+
23
+ Another appealing characteristic of human concept learning is its systematic compositionality (Chomsky, 1957; Montague, 1970): the algebraic capacity to understand and construct an endless number of novel combinations from a finite set of known components, i.e., "infinite use of finite means" (Chomsky, 1965). As illustrated in Table 1, this form of compositionality is essential to the human ability to make strong generalizations from simple examples to complex ones.
24
+
25
+ Various benchmarks (Lake & Baroni, 2018; Hupkes et al., 2020; Keysers et al., 2020) and methods (Lake, 2019; Gordon et al., 2019; Csordas et al., 2021) have been introduced by the emerging community of learning models that capture human-like systematic compositionality. As it is difficult to collect real data with systematic compositionality, the majority of existing benchmarks are derived from artificial domains using synthetic data and tasks, covering only a subset of the concept learning spectrum; see Table 2 for a detailed comparison. When evaluating systematic compositionality, prior datasets frequently conflate syntax and semantics. For instance, the SCAN dataset (Lake & Baroni, 2018) is a semantic parsing task from natural language commands to action sequences; when a model fails on a longer command than the ones in the training set, the root cause could stem from misinterpreting the complex syntactic relations in a long input sequence (command) or from its inability to generate a long output sequence (actions) (e.g., as a result of the EOS decision problem (Newman et al., 2020)). In addition, previous benchmarks frequently incorporated simple semantics (e.g., a simple mapping or repetition), resulting in an undesired bias toward syntactic generalization.
26
+
27
+ To expand systematic compositionality to a full-spectrum systematic generalization w.r.t. perception, syntax, and semantics, we draw inspiration from arithmetic and present a new benchmark called HINT, Handwritten arithmetic with INTegers. The HINT task is intuitive: Machines accept as input images of handwritten expressions and predict the final results of the expressions, restricted to integers. Since there is no intermediary supervision, the three levels of meaning are inherently intertwined during learning, and models are expected to acquire all three simultaneously to make correct predictions. To provide a comprehensive and rigorous test of how models generalize the learned concepts, we introduce a carefully structured evaluation scheme with five subsets, focusing on generalization patterns (i.e., interpolation and extrapolation) at various levels (i.e., perception, syntax, and semantics). In addition, we build a few-shot learning split to determine if models can rapidly learn new concepts from a few examples and generalize them to more complicated scenarios. Being minimal yet comprehensive in terms of systematic generalization, HINT is fundamentally more difficult than earlier datasets because: (i) the images are of actual handwriting with considerable visual variation; (ii) the syntactic relations between the tokens in the expressions are more complex, with long-range dependencies; and (iii) the semantics of arithmetic concepts are more complex than the simple mappings in prior datasets.
28
+
29
+ To facilitate future research in this direction, we conduct extensive experiments of various sequence-to-sequence (seq2seq) models, including Recurrent Neural Networks (Hochreiter & Schmidhuber, 1997; Chung et al., 2014), Transformers (Vaswani et al., 2017), and GPT-3 (Brown et al., 2020) (with chain of thought prompting Wei et al. (2022)). Our experiments indicate that all models still struggle on HINT; even the state-of-the-art model, Universal Transformer (Dehghani et al., 2018) with relative positional encoding (Shaw et al., 2018; Dai et al., 2019), achieves just $54\%$ accuracy on HINT, although it achieves virtually perfect accuracy on prior datasets such as SCAN (Csordás et al., 2021). An in-depth analysis of the results on each test subset reveals that current models still struggle with extrapolation to long-range syntactic dependency and semantics. In the GPT-3 experiments, the chain of thought prompting significantly increases the zero-shot test accuracy from $8.6\%$ to $27.6\%$ . By examining the scaling trends of the test accuracy w.r.t. the size of the model and the dataset, we find that it is impractical to solve HINT by simply scaling up the size of the dataset or the model, as is typically done in NLP tasks (Kaplan et al., 2020; Henighan et al., 2020); more data and parameters do not significantly improve the extrapolation over syntax and semantics. The few-shot learning experiments demonstrate that, despite the fact that the top-performing models exhibit decent capabilities for learning new concepts, they are still far from the human-level generalization that only requires the learning examples of a new concept in a primitive form and readily generalizes to more complex compositions of the learned concept.
30
+
31
+ In short, we introduce the HINT dataset for investigating systematic generalization across three levels: perception, syntax, and semantics. By benchmarking various seq2seq models on HINT, we uncover their primary weaknesses in systematic generalization. We hope the HINT dataset and our experimental findings will stimulate future development in systematic generalization.
32
+
33
+ Table 2: Dataset categorization and comparison. SP: semantic parsing, IC: image classification, QA: question answering, i&t: image & text. Perception/Syntax/Semantics: whether the task requires models to learn perception/syntax/semantics. Generalization: the type of generalization required for test examples. *: the generated images in these datasets have little variance.
34
+
35
+ <table><tr><td>Dataset</td><td>Domain</td><td>Task</td><td>Modality</td><td>Perception</td><td>Syntax</td><td>Semantics</td><td>Generalization</td><td>Size</td></tr><tr><td>SCAN (Lake &amp; Baroni, 2018)</td><td>synthetic</td><td>SP</td><td>text</td><td></td><td>✓</td><td>✓</td><td>systematic</td><td>100K</td></tr><tr><td>gSCAN (Ruis et al., 2020)</td><td>synthetic</td><td>SP</td><td>i&amp;t</td><td>✓*</td><td>✓</td><td>✓</td><td>systematic</td><td>300K</td></tr><tr><td>PCFG (Hupkes et al., 2020)</td><td>synthetic</td><td>SP</td><td>text</td><td></td><td>✓</td><td>✓</td><td>systematic</td><td>100K</td></tr><tr><td>CFQ (Keysers et al., 2020)</td><td>real</td><td>SP</td><td>text</td><td></td><td>✓</td><td>✓</td><td>systematic</td><td>239K</td></tr><tr><td>CURI (Vedantam et al., 2021)</td><td>synthetic</td><td>IC</td><td>image</td><td>✓</td><td></td><td>✓</td><td>systematic</td><td>15K</td></tr><tr><td>COGS (Kim &amp; Linzen, 2020)</td><td>real</td><td>SP</td><td>text</td><td></td><td>✓</td><td>✓</td><td>systematic</td><td>30K</td></tr><tr><td>Mathematics (Saxton et al., 2018)</td><td>real</td><td>QA</td><td>text</td><td></td><td>✓</td><td>✓</td><td>systematic</td><td>2M</td></tr><tr><td>PGM (Barrett et al., 2018)</td><td>synthetic</td><td>IC</td><td>image</td><td>✓</td><td></td><td>✓</td><td>systematic</td><td>1.4M</td></tr><tr><td>CLOSURE (Bahdanau et al., 2019)</td><td>synthetic</td><td>QA</td><td>i&amp;t</td><td>✓</td><td>✓</td><td></td><td>systematic</td><td>7K</td></tr><tr><td>CLEVR (Johnson et al., 2017)</td><td>synthetic</td><td>QA</td><td>i&amp;t</td><td>✓</td><td>✓</td><td></td><td>i.i.d.</td><td>865K</td></tr><tr><td>HWF (Li et al., 2020)</td><td>real</td><td>IC</td><td>image</td><td>✓</td><td></td><td></td><td>i.i.d.</td><td>12K</td></tr><tr><td>MNIST-Add (Manhaeve et al., 2018)</td><td>real</td><td>IC</td><td>image</td><td>✓</td><td></td><td></td><td>i.i.d.</td><td>-</td></tr><tr><td>HINT (ours)</td><td>real</td><td>QA</td><td>image</td><td>✓</td><td>✓</td><td>✓</td><td>systematic</td><td>1M</td></tr></table>
36
+
37
+ # 2 RELATED WORK
38
+
39
+ Benchmarks on Systematic Generalization Although several benchmarks (Lake & Baroni, 2018; Hupkes et al., 2020; Barrett et al., 2018; Zhang et al., 2019; Teney et al., 2020; Keysers et al., 2020; Bahdanau et al., 2019; Ruis et al., 2020; Kim & Linzen, 2020) have advanced systematic generalization, the majority of them are based on artificial domains with synthetic tasks, involve just one or two aspects of concept learning, and often mix generalization over syntax with generalization over semantics. SCAN (Lake & Baroni, 2018) asks models to translate a natural language command into a sequence of actions in a simplified navigation domain, involving only syntax and semantics. CLEVR (Johnson et al., 2017) requires parsing questions (syntax) and grounding visual objects (perception), although the objects themselves lack functional semantics. We refer readers to Table 2 for detailed comparisons of related datasets.
40
+
41
+ In contrast, the proposed HINT benchmark stems from the domain of arithmetic reasoning with real handwriting images (at the primitive level, rather than the expression level) and requires joint learning of perception, syntax, and semantics. The precise definitions and boundaries of these meanings in HINT permit building test splits that evaluate each kind of generalization in isolation. Notably, HINT possesses more complex semantics, which eliminates the undesirable bias towards syntactic generalization present in earlier datasets. The task of the HINT benchmark is inspired by the HWF dataset (Li et al., 2020) but requires full-spectrum learning of perception, syntax, and semantics. By going beyond the i.i.d. train/test split in Li et al. (2020), HINT focuses on examining systematic generalization across many aspects of concepts.
42
+
43
+ Methods on Systematic Generalization To capture systematic generalization, new training regimes (Lake, 2019; Andreas, 2020; Akyurek et al., 2020; Zhang et al., 2022) and model architectures (Dessi & Baroni, 2019; Russin et al., 2019; Csordás et al., 2021; Gordon et al., 2019; Bergen et al., 2021) have been developed. Russin et al. (2019), for instance, extend a seq2seq model by separating syntactic and semantic information. Csordás et al. (2021) investigate a variety of Transformer configurations to enhance systematic compositionality. Andreas (2020) and Akyurek et al. (2020) investigate data augmentation for compositional generalization.
44
+
45
+ In particular, several neural-symbolic methods with domain-specific designs (Chen et al., 2020; Nye et al., 2020; Liu et al., 2020) achieve near-perfect accuracy on prior systematic generalization datasets like SCAN (Lake & Baroni, 2018). However, these neural-symbolic methods introduce non-trivial domain-specific symbolic components, making them difficult to transfer to other domains; their flexibility and transferability remain unclear. In this paper, we benchmark prevailing seq2seq frameworks on HINT, including RNNs, Transformers, and GPT-3, which require minimal domain-specific design and may be of broad interest to the learning community. We leave the investigation of more sophisticated methods, such as data augmentation and neural-symbolic approaches, for future research.
46
+
47
+ # 3 THE HINT DATASET
48
+
49
+ In this section, we present the specifics of the HINT benchmark, devised to evaluate models' capability of learning generalizable concepts at three distinct levels: perception, syntax, and semantics.
50
+
51
+ # 3.1 THE DEFINITIONS OF PERCEPTION, SYNTAX, AND SEMANTICS
52
+
53
+ We first define perception, syntax, and semantics in the domain of HINT, as shown in Table 3. Perception refers to the mapping from image pixels to meaningful patterns, e.g., mapping an image of a handwritten expression to a symbolic sequence. Syntax refers to how the concepts in one sample are structurally organized, e.g., parsing the symbolic sequence into a tree; the syntax of HINT is expressed by the phrase-structure grammar in Table A2. Semantics refers to the functional meanings of the arithmetic concepts, e.g., what value '5' represents, and what value '+' produces when given the two arguments 1 and 1.
54
+
55
+ Table 3: The definitions of perception, syntax, and semantics. In syntax, number, op1, and op2 are the pre-terminals of the HINT grammar in Table A2. In semantics, $i$ and $j$ are the operator's inputs; $-$ is defined as $\max(0, i - j)$ to prevent negative results, and $\div$ is defined as $\mathrm{ceil}(i \div j)$ to remove the decimal portions of the results.
56
+ (a) main concepts
57
+
58
+ <table><tr><td>concept</td><td>perception</td><td>syntax</td><td>semantics</td></tr><tr><td>0..5..9</td><td>0..5..9</td><td>number</td><td>0..5..9</td></tr><tr><td>()</td><td>()</td><td>parenthesis</td><td>none</td></tr><tr><td>+</td><td>+</td><td>op1</td><td>i+j</td></tr><tr><td>-</td><td>-</td><td>op1</td><td>max(0,i-j)</td></tr><tr><td>×</td><td>×</td><td>op2</td><td>i×j</td></tr><tr><td>÷</td><td>÷</td><td>op2</td><td>ceil(i÷j)</td></tr></table>
59
+
60
+ (b) new concepts in the few-shot learning split
61
+
62
+ <table><tr><td>concept</td><td>perception</td><td>syntax</td><td>semantics</td></tr><tr><td>x</td><td>x</td><td>number</td><td>11</td></tr><tr><td>y</td><td>y</td><td>number</td><td>12</td></tr><tr><td>a</td><td>a</td><td>op1</td><td>max(i,j)</td></tr><tr><td>b</td><td>b</td><td>op1</td><td>min(i,j)</td></tr><tr><td>c</td><td>c</td><td>op2</td><td>(i+j)÷2</td></tr><tr><td>d</td><td>d</td><td>op2</td><td>2i×j÷(i+j)</td></tr></table>
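+
+ The operator semantics above are easy to state but must be induced by models from final answers alone. For reference, here is a minimal Python sketch of the definitions in Tables 3a and 3b; the names are ours, and whether the two means in Table 3b reuse the domain's ceiling convention for $\div$ is our assumption:
+
+ ```python
+ import math
+
+ # Semantics of the main concepts (Table 3a): subtraction is clipped at 0 and
+ # division rounds up, so every value stays a non-negative integer.
+ MAIN_SEMANTICS = {
+     '+': lambda i, j: i + j,
+     '-': lambda i, j: max(0, i - j),
+     '×': lambda i, j: i * j,
+     '÷': lambda i, j: math.ceil(i / j),
+ }
+
+ # Semantics of the new few-shot concepts (Table 3b).
+ NEW_SEMANTICS = {
+     'x': 11, 'y': 12,                                  # new numbers
+     'a': lambda i, j: max(i, j),                       # precedence 1
+     'b': lambda i, j: min(i, j),                       # precedence 1
+     'c': lambda i, j: math.ceil((i + j) / 2),          # arithmetic mean, precedence 2
+     'd': lambda i, j: math.ceil(2 * i * j / (i + j)),  # harmonic mean, precedence 2
+ }
+ ```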
63
+
64
+ Notably, although these three levels have clear boundaries by definition, a model need not represent them with separate, individual modules. An end-to-end neural network trained on this domain, for instance, will likely contain neurons and parameters serving all three levels. The notion of perception, syntax, and semantics simply requires models to capture these meanings at evaluation time, regardless of whether they accomplish the task implicitly or explicitly.
65
+
66
+ Task The task of HINT is intuitive: predict the final results of handwritten arithmetic expressions in a weakly-supervised manner. That is, only the final results are given as supervision; all the symbolic expressions, parse trees, and intermediate values are latent. In such a setting, any model must simultaneously master perception, syntax, and semantics to solve this task successfully.
67
+
68
+ # 3.2 DATA GENERATION
69
+
70
+ Table 4: Examples from the training set and the test subsets of HINT.
71
+
72
+ <table><tr><td>Train</td><td></td><td>2×5÷9 2 (9-9)×(3-4)-1×(0+3-(6-(9-2÷2)) 0
73
+ 5×5+(3-0-2) 32 4×(3+8)-7-(0-5) 41</td></tr><tr><td rowspan="5">Test</td><td>I</td><td>1÷4 1 1×(2÷5)×(8-8-6) 0 6-4+(0-(6+0÷(4÷(6/4)×1)) 1)+(9+4) 15</td></tr><tr><td>SS</td><td>1+3÷4 2 3×(7×2)+(8+4)+4×3 66 4+(D-(7+7+6))×4-0 4</td></tr><tr><td>LS</td><td>3×(8×(8×1)+0÷9) 192 5×(3:1×9) 4(2-5)×(7×(6+5)) 135
74
+ 2×(3×(3÷6+6×(3×4×6÷(1×6)))+0÷3 438</td></tr><tr><td>SL</td><td>(6×5-0)÷((4+8+5)÷9)+(3-(2-(2+(3×7-8÷9))/(4-9)) 18
75
+ 6-3÷(9×(9÷(4-(4-7))))+(1+1/(6-2))-(7×2+6÷8)
76
+ (7+3)/(6-6×(0×(6÷7))-(3×1-6-4/(4-3))×(9×3) 2</td></tr><tr><td>LL</td><td>(6+2)×1+2÷4×(1+4-0÷3)×8-(4×3×8)×(0+(2×9-0)/3)÷(8÷9) 174
77
+ (3+(8+(4-7×(7+8))×(8÷4-(4-(6+5)+6)) )÷(7+8×1×0)÷5 1
78
+ 7×(8÷(1×(7÷7))+1+(1+2)×10+9-5÷(8+4÷(9×6))+(8-(3-8+3)) 620</td></tr></table>
79
+
80
+ The data generation process consists of three steps. First, we extract handwritten images for each concept from CROHME (Mahdavi et al., 2019), including digits 0 through 9, operators $+, -, \times, \div$ , and parentheses $(,)$ . Second, we randomly sample prefix expressions and convert them to infix expressions with necessary parentheses based on the operator precedence; only single-digit numbers are permitted. The symbolic expressions are fed into a solver to calculate the final results. Third, we randomly sample handwritten images for symbols in an expression and concatenate them to construct the final handwritten expression. We only retain the handwritten expressions as input and the corresponding final results as supervision; all intermediate results are discarded.
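+
+ To make the pipeline concrete, the following sketch shows one way to implement the symbolic part of the second step: sampling a prefix expression, adding parentheses by operator precedence, and solving it under the semantics of Table 3. All names are ours, and the dataset's actual generator may differ:
+
+ ```python
+ import math
+ import random
+
+ PREC = {'+': 1, '-': 1, '×': 2, '÷': 2}  # operator -> precedence
+ SEM = {'+': lambda i, j: i + j, '-': lambda i, j: max(0, i - j),
+        '×': lambda i, j: i * j, '÷': lambda i, j: math.ceil(i / j)}
+
+ def sample_prefix(n_ops):
+     """Sample a random prefix expression with n_ops operators and
+     single-digit numbers, as a list of tokens."""
+     if n_ops == 0:
+         return [str(random.randint(0, 9))]
+     left = random.randint(0, n_ops - 1)
+     return ([random.choice(list(PREC))] +
+             sample_prefix(left) + sample_prefix(n_ops - 1 - left))
+
+ def to_infix_and_value(tokens, i=0):
+     """Convert prefix tokens to an infix string (parenthesized only where
+     precedence requires) and compute the final result."""
+     tok = tokens[i]
+     if tok not in PREC:
+         return tok, i + 1, 3, int(tok)       # numbers bind tightest
+     l, i, lp, lv = to_infix_and_value(tokens, i + 1)
+     r, i, rp, rv = to_infix_and_value(tokens, i)
+     p = PREC[tok]
+     l = f'({l})' if lp < p else l            # operators are left-associative
+     r = f'({r})' if rp <= p else r
+     return f'{l}{tok}{r}', i, p, SEM[tok](lv, rv)
+
+ # NB: a real generator must reject expressions that divide by zero.
+ while True:
+     try:
+         expr, _, _, result = to_infix_and_value(sample_prefix(3))
+         break
+     except ZeroDivisionError:
+         pass
+ ```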
81
+
82
+ Full-Spectrum Systematic Generalization To rigorously evaluate systematic generalization of the learned concepts, we replace the standard i.i.d. split with a carefully designed evaluation scheme. We randomly divide all handwritten images into three splits: training (75%), validation (5%), and test (20%). First, we limit the training set to a maximum of 10 operators per expression and a maximum intermediate value of 100:
85
+
86
+ $$
87
+ D_{\text{train}} \subset \mathcal{T}_{\text{train}} = \{(x, y): |x| \leqslant 10, \max(v) \leqslant 100\}, \tag{1}
88
+ $$
89
+
90
+ where $x$ is an expression, $|x|$ its number of operators, $y$ the final result, and $v$ the set of all intermediate values together with the final result. To ensure diversity in the training set, we sample at most 100,000 distinct expressions for each number of operators. To prevent bias in the final results, we cap the frequency of any particular result below $5\%$. Next, we carefully curate the test set to evaluate different generalization capabilities (i.e., interpolation and extrapolation) at different levels of meaning (i.e., perception, syntax, and semantics). Specifically, the test set comprises five subsets, formally defined as:
91
+
92
+ $$
93
+ D_{\text{test}} = \mathrm{I} \cup \mathrm{SS} \cup \mathrm{LS} \cup \mathrm{SL} \cup \mathrm{LL}, \text{ where} \tag{2}
94
+ $$
95
+
96
+ - $\mathrm{I} \subset D_{\mathrm{train}}$: generalization on perception only;
97
+
98
+ - $\mathrm{SS} \subset \mathcal{T}_{\mathrm{train}} \backslash D_{\mathrm{train}}$: interpolation on both syntax and semantics;
99
+
100
+ - $\mathrm{LS} \subset \{(x, y) : |x| > 10, \max(v) \leqslant 100\}$: extrapolation on syntax, interpolation on semantics;
101
+
102
+ - $\mathrm{SL} \subset \{(x, y) : |x| \leqslant 10, \max(v) > 100\}$: interpolation on syntax, extrapolation on semantics;
103
+
104
+ - $\mathrm{LL} \subset \{(x, y) : |x| > 10, \max(v) > 100\}$: extrapolation on both syntax and semantics.
105
+
106
+ All test subsets require generalization on perception, since all images in the test set are unseen during training. For the test set, we sample at most 1,000 unique expressions for each number of operators, and the final results are likewise balanced. The maximum number of operators is 20, and the maximum intermediate value is 10,000. We also build a small validation set for hyperparameter tuning. See Table 4 for training and test examples, and refer to Appendix A for further dataset statistics.
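+
+ For clarity, the subset assignment above amounts to the following check (a sketch; the names are ours):
+
+ ```python
+ def test_subset(n_ops, max_value, expr_in_train):
+     """Assign a test sample to a subset per Eq. (2). n_ops is |x|,
+     max_value is max(v) over intermediate values and the final result,
+     and expr_in_train says whether the symbolic expression was in D_train."""
+     if expr_in_train:
+         return 'I'                # only the handwriting is new
+     if n_ops <= 10:
+         return 'SS' if max_value <= 100 else 'SL'
+     return 'LS' if max_value <= 100 else 'LL'
+ ```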
107
+
108
+ Few-shot Learning and Generalization To determine whether models can rapidly learn new concepts, we construct a few-shot learning split with six new concepts, as shown in Table 3. These six concepts carry new meanings at the levels of perception, syntax, and semantics: two new numbers ($x$ and $y$, representing 11 and 12, respectively), two operators of precedence 1 ($a$ and $b$, representing max and min), and two operators of precedence 2 ($c$ and $d$, representing the arithmetic mean and the harmonic mean). The train, validation, and test splits are constructed using the same strategy as in the full-spectrum generalization. Expressions are sampled to guarantee that the corresponding new concept appears at least once in each expression. This few-shot learning split is used to determine whether models pre-trained on the main training set can rapidly learn a new concept by fine-tuning on only a handful of examples involving it. In this context, "few-shot" means that the examples used to acquire a new concept are significantly fewer than those in the main training set, though still more than the number of examples humans need to learn a new concept.
109
+
110
+ # 4 DEEP SEQUENCE-TO-SEQUENCE BASELINES
111
+
112
+ The task of HINT can be naturally formulated as a sequence-to-sequence (seq2seq) problem: The input is a handwritten expression, segmented into a sequence of images by a sliding window, and the output is an integer, converted into a sequence of digits. We benchmark deep seq2seq frameworks on HINT; see Figure 1 for an illustration using a detailed example.
113
+
114
+ # 4.1 IMAGE TOKENIZING AND EMBEDDING
115
+
116
+ Existing seq2seq frameworks typically accept a sequence of tokens as input. To tokenize a handwritten expression, we first resize it to a height of 32 pixels and then apply a 32-pixel sliding window along the horizontal axis to render a sequence of images. Next, each image in the sequence is encoded by ResNet-18 (He et al., 2016), which is sufficient to handle the visual variance of the handwriting.
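+
+ A minimal sketch of this tokenization follows; the non-overlapping stride, the single-channel (grayscale) input, and the 512-d embedding are our assumptions, as the paper fixes only the 32-pixel window:
+
+ ```python
+ import torch
+ import torchvision
+
+ def tokenize(expression: torch.Tensor, window: int = 32) -> torch.Tensor:
+     """Cut a (1, 32, W) handwritten expression into (seq_len, 1, 32, 32)
+     patches with a 32-pixel sliding window along the horizontal axis."""
+     _, height, width = expression.shape
+     patches = [expression[:, :, i:i + window]
+                for i in range(0, width - window + 1, window)]
+     return torch.stack(patches)
+
+ # Randomly initialized ResNet-18 as the image encoder; the first conv is
+ # adapted to single-channel input, and the head emits one embedding per patch.
+ encoder = torchvision.models.resnet18(num_classes=512)
+ encoder.conv1 = torch.nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
+
+ embeddings = encoder(tokenize(torch.rand(1, 32, 160)))   # (5, 512)
+ ```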
117
+
118
+ # 4.2 ENCODER-DECODER ARCHITECTURES
119
+
120
+ RNNs Recurrent neural networks (RNNs) have long been a dominant choice for sequence modeling tasks. We test two popular RNNs in the literature: long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997) and gated recurrent units (GRU) (Chung et al., 2014). Each model is evaluated both with and without attention (Bahdanau et al., 2015).
121
+
122
+ ![](images/9426fdabff55b9a6d67334008b5ebafec6245a36758d468b6d09934f831882c4.jpg)
123
+ Figure 1: The seq2seq framework applied to an example in HINT. <SOS>: the start-of-sentence token; <EOS>: the end-of-sentence token. A sliding window segments the handwritten expression into a sequence of images, which are then separately encoded by ResNet-18. The expected output is a sequence of digits in reverse order.
124
+
125
+ Transformers Since their inception (Vaswani et al., 2017), Transformers have gradually supplanted recurrent and convolutional neural networks as the de facto choice for various sequence modeling tasks (Devlin et al., 2019; Radford et al., 2019; Brown et al., 2020). Nevertheless, prior work (Dehghani et al., 2018; Hupkes et al., 2020; Kim & Linzen, 2020) suggests that the vanilla Transformer fails substantially on many tasks requiring systematic generalization when the sequence lengths exceed those observed during training. Recently, several simple tricks have been proposed (Csordás et al., 2021) to improve the generalization capability of Transformers; two of them work particularly well: (i) using relative positional encoding (Shaw et al., 2018; Dai et al., 2019), and (ii) sharing weights across the blocks of the Transformer, a.k.a. the Universal Transformer (Dehghani et al., 2018). We therefore benchmark three Transformer variants: the vanilla Transformer, Transformer with relative positional encoding, and Universal Transformer with relative positional encoding.
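+
+ As a rough illustration of the first trick, the sketch below adds a learned bias that depends only on the relative distance $i - j$ to the attention logits. It is a simplified, single-head variant in the spirit of Shaw et al. (2018), not the exact parameterization used by Csordás et al. (2021):
+
+ ```python
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F
+
+ class RelPosSelfAttention(nn.Module):
+     """Single-head self-attention with a learned relative-position bias."""
+     def __init__(self, dim: int, max_rel: int = 64):
+         super().__init__()
+         self.qkv = nn.Linear(dim, 3 * dim)
+         self.max_rel = max_rel
+         # one learned scalar per clipped relative distance in [-max_rel, max_rel]
+         self.rel_bias = nn.Parameter(torch.zeros(2 * max_rel + 1))
+
+     def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (seq, dim)
+         q, k, v = self.qkv(x).chunk(3, dim=-1)
+         scores = q @ k.t() / x.shape[-1] ** 0.5            # (seq, seq)
+         seq = x.shape[0]
+         rel = torch.arange(seq)[:, None] - torch.arange(seq)[None, :]
+         rel = rel.clamp(-self.max_rel, self.max_rel) + self.max_rel
+         scores = scores + self.rel_bias[rel]               # depends only on i - j
+         return F.softmax(scores, dim=-1) @ v
+ ```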
126
+
127
+ GPT-3 Since the release of GPT-3 (Brown et al., 2020), there have been intense debates and divergent perspectives regarding the mathematical reasoning capacity of pre-trained large language models. To systematically and comprehensively evaluate GPT-3's competence in arithmetic reasoning, we test it on the proposed HINT benchmark using symbolic expressions as input. Since all tokens of HINT are in the vocabulary of GPT-3, we directly evaluate GPT-3 via zero-shot prompting using the OpenAI API. We construct the prompt in the following form: "Q: What is <Expression>? A: The answer is", similar to the practice in Brown et al. (2020) but with more complex expressions.
128
+
129
+ Recently, chain of thought (CoT) prompting (Wei et al., 2022) has been extended to the zero-shot setting (Kojima et al., 2022) by adding a simple prompt, "Let's think step by step," to facilitate step-by-step thinking prior to answering each question. Zero-shot CoT surpasses the standard zero-shot prompting by a significant margin in various reasoning tasks. Therefore, we also apply zero-shot CoT prompting to evaluate GPT-3 on HINT; we refer the readers to Appendix B.2 for the details of zero-shot CoT.
130
+
131
+ # 4.3 TRAINING AND EVALUATION
132
+
133
+ Training All models are trained using the Adam optimizer (Kingma & Ba, 2014), with gradients clipped at 5.0. Dropout (Srivastava et al., 2014) is applied to each recurrent layer of the RNNs and to each sub-layer of the Transformers, including both the multi-head attention layers and the feedforward layers. No training is required for the zero-shot experiments on GPT-3; instead, 100 samples from each test subset are selected and fed to GPT-3 via zero-shot or zero-shot-CoT prompting.
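+
+ A minimal sketch of this training setup is given below; `model` and `loader` are placeholders for any benchmarked seq2seq model and its data, and norm clipping plus the exact learning rate are our assumptions (cf. Table A3):
+
+ ```python
+ import torch
+
+ def train(model, loader, pad_id, lr=1e-4, clip=5.0):
+     """Adam with gradient clipping at 5.0; cross-entropy over the output
+     digit sequence with teacher forcing."""
+     optimizer = torch.optim.Adam(model.parameters(), lr=lr)
+     criterion = torch.nn.CrossEntropyLoss(ignore_index=pad_id)
+     for images, targets in loader:                  # targets: (batch, out_len)
+         optimizer.zero_grad()
+         logits = model(images, targets[:, :-1])     # predict the next token
+         loss = criterion(logits.reshape(-1, logits.size(-1)),
+                          targets[:, 1:].reshape(-1))
+         loss.backward()
+         torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
+         optimizer.step()
+ ```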
134
+
135
+ Hyperparameter Tuning To produce reliable results, we perform a thorough hyperparameter search over the number of layers in the encoder and the decoder, the dimension of the token embedding, the number of hidden units per layer, the number of attention heads in Transformers, the dropout ratio, and the learning rate. We refer the readers to Table A3 for details.
136
+
137
+ Evaluation Metric We report the accuracy of the final results. A predicted result is considered correct only when it exactly matches the ground-truth answer.
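+
+ Concretely, decoding the reversed output digit sequence (Figure 1) and scoring exact match can be sketched as:
+
+ ```python
+ def digits_to_int(tokens):
+     """Decode an output digit sequence, emitted in reverse order per
+     Figure 1, back into an integer, e.g. ['2', '3'] -> 32."""
+     return int(''.join(reversed(tokens)))
+
+ def accuracy(predicted_digit_seqs, answers):
+     """Exact-match accuracy: a prediction counts only if it equals the
+     ground-truth integer."""
+     hits = sum(digits_to_int(p) == a
+                for p, a in zip(predicted_digit_seqs, answers))
+     return hits / len(answers)
+ ```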
138
+
139
+ Table 5: The accuracy on the test set using image inputs. All models are jointly trained with a randomly initialized ResNet-18. Reported accuracy $(\%)$ is the median and standard deviation of 5 runs. "rel." denotes Transformer with relative positional encoding, and "uni." denotes Universal Transformer.
140
+
141
+ <table><tr><td>Model</td><td>Variant</td><td>I</td><td>SS</td><td>LS</td><td>SL</td><td>LL</td><td>Avg.</td></tr><tr><td rowspan="2">GRU</td><td>w/o att</td><td>61.3±1.4</td><td>53.3±1.7</td><td>30.5±1.2</td><td>9.2±0.2</td><td>11.9±0.5</td><td>33.2±0.9</td></tr><tr><td>w/ att</td><td>66.7±2.0</td><td>58.7±2.2</td><td>33.1±2.7</td><td>9.4±0.3</td><td>12.8±1.0</td><td>35.9±1.6</td></tr><tr><td rowspan="2">LSTM</td><td>w/o att</td><td>80.0±5.7</td><td>76.2±7.4</td><td>55.7±8.2</td><td>10.9±0.6</td><td>19.8±2.6</td><td>48.6±4.9</td></tr><tr><td>w/ att</td><td>83.9±0.9</td><td>79.7±0.8</td><td>62.0±2.5</td><td>11.2±0.1</td><td>21.0±0.8</td><td>51.5±1.0</td></tr><tr><td rowspan="3">Transformer</td><td>vanilla</td><td>20.9±0.4</td><td>9.3±0.2</td><td>5.7±0.3</td><td>1.5±0.3</td><td>2.9±0.5</td><td>8.3±0.3</td></tr><tr><td>rel.</td><td>86.2±0.9</td><td>83.1±1.3</td><td>60.1±2.3</td><td>10.9±0.2</td><td>19.4±0.5</td><td>51.7±1.0</td></tr><tr><td>rel. uni.</td><td>88.4±1.3</td><td>86.0±1.3</td><td>62.5±4.1</td><td>10.9±0.2</td><td>19.0±1.0</td><td>53.1±1.6</td></tr></table>
142
+
143
+ Table 6: The accuracy on the test set using symbol inputs.
144
+
145
+ <table><tr><td>Model</td><td>Variant</td><td>I</td><td>SS</td><td>LS</td><td>SL</td><td>LL</td><td>Avg.</td></tr><tr><td rowspan="2">GRU</td><td>w/o att</td><td>74.9±1.6</td><td>68.1±0.5</td><td>42.1±1.9</td><td>10.5±0.2</td><td>14.0±0.8</td><td>41.3±0.6</td></tr><tr><td>w/ att</td><td>76.2±0.6</td><td>69.5±0.6</td><td>42.8±1.5</td><td>10.5±0.2</td><td>15.1±1.2</td><td>42.5±0.7</td></tr><tr><td rowspan="2">LSTM</td><td>w/o att</td><td>84.3±5.2</td><td>79.6±6.0</td><td>63.7±6.1</td><td>11.7±0.3</td><td>22.1±1.4</td><td>52.3±3.8</td></tr><tr><td>w/ att</td><td>92.9±1.4</td><td>90.9±1.1</td><td>74.9±1.5</td><td>12.1±0.2</td><td>24.3±0.3</td><td>58.9±0.7</td></tr><tr><td rowspan="3">Transformer</td><td>vanilla</td><td>93.9±0.3</td><td>91.0±0.5</td><td>33.2±1.2</td><td>11.5±0.1</td><td>11.5±0.7</td><td>47.4±0.4</td></tr><tr><td>rel.</td><td>96.6±0.3</td><td>95.1±0.4</td><td>72.1±1.5</td><td>11.8±0.2</td><td>22.3±0.6</td><td>59.4±0.5</td></tr><tr><td>rel. uni.</td><td>98.0±0.3</td><td>96.8±0.6</td><td>78.2±2.9</td><td>11.7±0.3</td><td>22.4±1.1</td><td>61.5±0.9</td></tr><tr><td rowspan="2">GPT-3</td><td>0-shot</td><td>19.0</td><td>9.0</td><td>3.0</td><td>10.0</td><td>2.0</td><td>8.6</td></tr><tr><td>0-CoT</td><td>42.0</td><td>36.0</td><td>5.0</td><td>49.0</td><td>6.0</td><td>27.6</td></tr></table>
146
+
147
+ # 5 RESULTS
148
+
149
+ # 5.1 JOINT LEARNING OF PERCEPTION, SYNTAX, AND SEMANTICS
150
+
151
+ Tables 5 and 6 summarize the results of all models on HINT using image inputs and symbol inputs, respectively. Among all models, the Universal Transformer with relative positional encoding ("Transformer rel. uni.") has the highest average accuracy on the test set. Upon careful examination of the results, the following observations and insights can be made:
152
+
153
+ - Models attain high accuracy on the test subset I. In particular, Transformer rel. uni. with image inputs achieves an accuracy of $88.4\%$. The test subset I shares its symbolic expressions with the training set but uses different handwritten images for the symbols. This indicates that Transformers and RNNs, jointly trained with ResNet-18, generalize well over perception. As depicted in Figure 2, the model forms meaningful clusters for each concept and captures syntactic roles to some extent, without any direct supervision on perception.
154
+
155
+ ![](images/1d62511a89b8a9f76d062b7822b6ff821b8ac86b264cdfb6680061b7656f0bff.jpg)
156
+ Figure 2: The t-SNE visualization of the embeddings (the outputs of ResNet-18) of handwritten images from the Transformer rel. uni. model. The image embeddings form clear clusters for each concept based on visual appearance. In addition, these clusters reflect the concepts' syntactic roles: most digits lie towards the bottom, operators around the center, and parentheses near the top.
157
+
158
+ - Transformers achieve high accuracy on the subset SS. The expressions in SS share the same length and value distribution as training. This result indicates that Transformers exhibit robust interpolation over syntax and semantics.
159
+ - The accuracy of Transformer rel. uni. on LS is substantially lower than its accuracy on SS or I (see Table 6), even though the identical model yields perfect accuracy on the length cutoff splits of SCAN (Csordás et al., 2021). This unexpected result may be explained by the syntactic difference between HINT and SCAN shown in Table A2: expressions in HINT can have longer-range dependencies and greater tree depth than the commands in SCAN. This observation suggests that present Transformers, which have finite depth, cannot adequately capture syntax with long dependencies and large depth.
160
+ - Transformer with relative positional encoding achieves performance on I and SS similar to the vanilla Transformer with absolute positional encoding, yet relative positional encoding doubles the accuracy on LS (see Table 6). This contrast implies that relative positional encoding is essential for Transformers to generalize to long expressions. Sharing weights across layers, as in the Universal Transformer, further enhances performance.
161
+ - Models perform poorly on the subsets SL and LL: accuracy there is significantly lower than on I and SS. All models exhibit near-zero accuracy on samples whose answers exceed 100 (the maximum final result in the training set). This finding suggests that neither RNNs nor Transformers can extrapolate to numbers larger than those seen in training.
162
+ - While GPT-3 with zero-shot prompting performs poorly, chain of thought (CoT) prompting significantly improves its accuracy. Notably, GPT-3 with zero-shot CoT achieves an accuracy of $49.0\%$ on SL, surpassing all fine-tuned models. We attribute this to GPT-3 having been pre-trained on data with larger numbers, with CoT improving the reasoning process. Even with CoT prompting, however, GPT-3 performs poorly on the long expressions in LS and LL.
163
+
164
+ Summary We observe significant room for improvement on HINT. Even the best model, Universal Transformer with relative positional encoding, achieves an accuracy of only $54.3\%$, while the same model achieves virtually perfect accuracy on earlier datasets of systematic generalization such as SCAN. The challenge of HINT stems from requiring joint learning and generalization of perception, syntax, and semantics: the perception exhibits large variance in real handwriting, the syntax supports long dependencies between symbols, and the semantic complexity is well beyond the capability of state-of-the-art models.
165
+
166
+ Scaling Laws Since HINT can generate an endless amount of training data, one may wonder whether merely increasing the dataset and model size can solve the problem, as in certain NLP tasks (Kaplan et al., 2020; Henighan et al., 2020). Empirically, Figure 3 depicts the scaling trend of test accuracy w.r.t. model size and the number of training samples. Models of various sizes are constructed by altering the hidden dimensions, the embedding dimension, and the number of attention heads; training sets of various sizes are generated by randomly subsampling the original training set. Assuming a log-linear scaling trend, attaining $90\%$ accuracy on the test subset LL would require training a model of $10^{33}$ parameters on $10^{15}$ examples, which is impractical. Hence, more efficient architectures and training algorithms are still needed to improve extrapolation over syntax and semantics.
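+
+ The extrapolation above follows from a simple log-linear fit; the sketch below illustrates the arithmetic with made-up accuracy points (the measured values are in Figure 3):
+
+ ```python
+ import numpy as np
+
+ # Illustrative points only: accuracy on LL vs. model size.
+ params = np.array([1e5, 1e6, 1e7, 1e8])
+ acc = np.array([0.10, 0.13, 0.16, 0.19])
+
+ # Fit accuracy as a linear function of log10(model size) ...
+ slope, intercept = np.polyfit(np.log10(params), acc, deg=1)
+ # ... and read off the size where the fit reaches 90% accuracy.
+ needed = 10 ** ((0.90 - intercept) / slope)
+ print(f"~1e{np.log10(needed):.0f} parameters for 90% accuracy on LL")
+ ```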
167
+
168
+ ![](images/bda8af3e60cb5e36dcb9409d7eaa3ff9fb7a9608abd0ef4cea4da20b4cf98055.jpg)
169
+ Figure 3: Scaling trends of accuracy on the test subset LL w.r.t. model size and dataset size for Transformer rel. uni. with symbol inputs.
170
+
171
+ ![](images/fd0933b4d06a28efe138fa4c88f6282feef39f1716e2011fade7976466793545.jpg)
172
+ Figure 4: The few-shot learning performance of Transformer rel. uni. when trained with a varied maximum number of operators.
173
+
174
+ # 5.2 FEW-SHOT LEARNING AND GENERALIZATION
175
+
176
+ In this section, we fine-tune the top two models on the six new concepts; Table 7 summarizes the results. Transformer rel. uni. outperforms LSTM w/ attn on all concepts by a significant margin, more than six times their performance gap in Table 5. This discrepancy suggests that, with limited data, Transformers are better than LSTMs at learning new concepts.
177
+
178
+ Table 7: The few-shot learning performance of the top two models: LSTM w/ attn (left) and Transformer rel. uni. (right). Reported results are the median of 5 runs. See Table 3 for the meanings of these concepts. *Please refer to Appendix C for the details regarding the human study.
179
+
180
+ <table><tr><td>Concept</td><td>I</td><td>SS</td><td>LS</td><td>SL</td><td>LL</td><td>Avg.</td><td>Human*</td></tr><tr><td>x</td><td>87.8/89.2</td><td>47.3/80.2</td><td>42.8/58.6</td><td>10.8/12.2</td><td>16.4/19.3</td><td>42.8/52.8</td><td>95.0</td></tr><tr><td>y</td><td>64.5/83.8</td><td>39.1/74.8</td><td>38.5/54.0</td><td>11.6/13.8</td><td>18.9/22.4</td><td>35.4/50.7</td><td>100.0</td></tr><tr><td>a</td><td>71.8/84.4</td><td>44.2/72.0</td><td>29.7/48.9</td><td>7.9/8.4</td><td>11.1/12.3</td><td>33.8/46.4</td><td>97.5</td></tr><tr><td>b</td><td>73.4/77.1</td><td>29.9/59.1</td><td>27.4/39.4</td><td>7.4/16.8</td><td>12.7/17.1</td><td>31.1/42.6</td><td>77.5</td></tr><tr><td>c</td><td>61.5/59.2</td><td>19.6/34.0</td><td>15.2/24.4</td><td>4.5/6.1</td><td>6.5/9.4</td><td>21.9/27.3</td><td>90.0</td></tr><tr><td>d</td><td>59.2/62.8</td><td>22.7/39.0</td><td>20.2/27.0</td><td>7.2/9.2</td><td>8.9/10.7</td><td>24.7/30.4</td><td>60.0</td></tr><tr><td>Overall</td><td>69.7/76.1</td><td>33.8/59.9</td><td>29.0/42.0</td><td>8.2/11.1</td><td>12.4/15.2</td><td>31.6/41.7</td><td>86.7</td></tr></table>
181
+
182
+ Figure 4 depicts the test accuracy of Transformer rel. uni. when trained with a varied maximum number of operators. In general, the more data and the longer the expressions used for training, the better the model performs. One test case for learning the new numbers ("x" and "y") is the point (0, 26.5) in Figure 4, where the model sees the new concept only in its primitive form during training and must generalize to complex compositions during testing. Classic thought experiments (Fodor, 1975) indicate that this is straightforward for humans: if you grasp the meanings of "1," "1 + 1," and "x," you should also comprehend the meaning of "1 + x." A similar test case for learning the new operators ("a" to "d") is the point (2, 24.1), since expressions containing at least two operators are needed to expose the syntax of a new operator. Transformer performs poorly on both of these cases, demonstrating that it is still far from human-level generalization.
183
+
184
+ # 6 DISCUSSIONS: CONCLUSIONS AND LIMITATIONS
185
+
186
+ In this paper, we took inspiration from arithmetic and introduced a new challenge for the learning community, Handwritten arithmetic with INTegers (HINT), which serves as a minimal yet comprehensive benchmark for examining full-spectrum systematic generalization of concept learning w.r.t. perception, syntax, and semantics. HINT is intrinsically more challenging than previous datasets on systematic generalization due to its substantial perceptual diversity from real handwriting, complex syntax, and sophisticated semantics. We benchmarked state-of-the-art seq2seq models on HINT, including RNNs, Transformers, and GPT-3; the results point to their inability to extrapolate over syntax and semantics. The scaling trends of test accuracy w.r.t. dataset size and model size indicate that it is impractical to solve HINT only by increasing the size of the dataset or the model. We believe the HINT dataset and our experimental findings will inspire new advances in systematic generalization, particularly extrapolation over syntax and semantics.
187
+
188
+ Limitations and Future Work Despite their large visual variance, the handwritten expressions are rather basic in terms of spatial layout and visual complexity. It would be intriguing to further increase the perceptual complexity w.r.t. spatial relations, as in natural images (Lin et al., 2014). Although the syntax and semantics in HINT are already more complex than those of prior datasets, they remain context-free. Extending our findings to context-dependent syntax and semantics would be of practical value given their prevalence in natural languages; e.g., a word might have different syntactic roles or semantic meanings in different contexts.
189
+
190
+ Regarding model development on HINT, our findings reveal that current seq2seq models, including Transformers, are unable to extract the systematic rules for both syntax and semantics from the training data. Improving the systematic generalization of Transformers, particularly extrapolation over semantics, is a crucial future direction. We also intend to investigate more advanced methods, such as meta-learning (Lake, 2019), data augmentation (Andreas, 2020; Akyurek et al., 2020), Edge Transformer (Bergen et al., 2021), and Neural-Symbolic Stack Machines (Chen et al., 2020). In addition, understanding the systematic generalization of large language models by evaluating them in few-shot or fine-tuning settings will be beneficial.
191
+
192
+ Acknowledgements. The authors would like to thank the four anonymous reviewers for constructive feedback. This work is supported in part by the National Key R&D Program of China (2021ZD0150200) and the Beijing Nova Program.
193
+
194
+ # REFERENCES
195
+
196
+ Ekin Akyurek, Afra Feyza Akyurek, and Jacob Andreas. Learning to recombine and resample data for compositional generalization. In International Conference on Learning Representations (ICLR), 2020.
197
+ Jacob Andreas. Good-enough compositional data augmentation. In Annual Meeting of the Association for Computational Linguistics (ACL), 2020.
198
+ Dzmitry Bahdanau, Kyung Hyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations (ICLR), 2015.
199
+ Dzmitry Bahdanau, Harm de Vries, Timothy J O'Donnell, Shikhar Murty, Philippe Beaudoin, Yoshua Bengio, and Aaron Courville. Closure: Assessing systematic generalization of clevr models. In Visually Grounded Interaction and Language (ViGIL) Workshop in NAACL, 2019.
200
+ David Barrett, Felix Hill, Adam Santoro, Ari Morcos, and Timothy Lillicrap. Measuring abstract reasoning in neural networks. In International Conference on Machine Learning (ICML), 2018.
201
+ Leon Bergen, Timothy O'Donnell, and Dzmitry Bahdanau. Systematic generalization with edge transformers. In Advances in Neural Information Processing Systems (NeurIPS), 2021.
202
+ Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. In Advances in Neural Information Processing Systems (NeurIPS), 2020.
203
+ Xinyun Chen, Chen Liang, Adams Wei Yu, Dawn Song, and Denny Zhou. Compositional generalization via neural-symbolic stack machines. In Advances in Neural Information Processing Systems (NeurIPS), 2020.
204
+ Noam Chomsky. Syntactic Structures. De Gruyter Mouton, 1957.
205
+ Noam Chomsky. Aspects of the Theory of Syntax. MIT press, 1965.
206
+ Junyoung Chung, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. In NIPS Workshop on Deep Learning, 2014.
207
+ Róbert Csordás, Kazuki Irie, and Juergen Schmidhuber. The devil is in the detail: Simple tricks improve systematic generalization of transformers. In Annual Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021.
208
+ Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G Carbonell, Quoc Le, and Ruslan Salakhutdinov. Transformer-xl: Attentive language models beyond a fixed-length context. In Annual Meeting of the Association for Computational Linguistics (ACL), 2019.
209
+ Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Lukasz Kaiser. Universal transformers. In International Conference on Learning Representations (ICLR), 2018.
210
+ Roberto Dessi and Marco Baroni. Cnns found to jump around more skillfully than rnns: Compositional generalization in seq2seq convolutional networks. In Annual Meeting of the Association for Computational Linguistics (ACL), 2019.
211
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), 2019.
212
+ Chaz Firestone and Brian J Scholl. Cognition does not affect perception: Evaluating the evidence for "top-down" effects. Behavioral and Brain Sciences, 39, 2016.
213
+ Jerry A Fodor. The language of thought. Harvard university press, 1975.
214
+ Jonathan Gordon, David Lopez-Paz, Marco Baroni, and Diane Bouchacourt. Permutation equivariant models for compositional generalization in language. In International Conference on Learning Representations (ICLR), 2019.
215
+ Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
216
+
217
+ Tom Henighan, Jared Kaplan, Mor Katz, Mark Chen, Christopher Hesse, Jacob Jackson, Heewoo Jun, Tom B Brown, Prafulla Dhariwal, Scott Gray, et al. Scaling laws for autoregressive generative modeling. arXiv preprint arXiv:2010.14701, 2020.
218
+ Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
219
+ Dieuwke Hupkes, Verna Dankers, Mathijs Mul, and Elia Bruni. Compositionality decomposed: how do neural networks generalise? Journal of Artificial Intelligence Research (JAIR), 67:757-795, 2020.
220
+ Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
221
+ Daniel Kahneman. Thinking, fast and slow. Macmillan, 2011.
222
+ Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020.
223
+ Daniel Keysers, Nathanael Scharli, Nathan Scales, Hylke Buisman, Daniel Furrer, Sergii Kashubin, Nikola Momchev, Danila Sinopalnikov, Lukasz Stafiniak, Tibor Tihon, et al. Measuring compositional generalization: A comprehensive method on realistic data. In International Conference on Learning Representations (ICLR), 2020.
224
+ Najoung Kim and Tal Linzen. Cogs: A compositional generalization challenge based on semantic interpretation. In Annual Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020.
225
+ Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
226
+ Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. In Advances in Neural Information Processing Systems (NeurIPS), 2022.
227
+ Brenden M Lake. Compositional generalization through meta sequence-to-sequence learning. In Advances in Neural Information Processing Systems (NeurIPS), 2019.
228
+ Brenden M. Lake and Marco Baroni. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. In International Conference on Machine Learning (ICML), 2018.
229
+ Qing Li, Siyuan Huang, Yining Hong, Yixin Chen, Ying Nian Wu, and Song-Chun Zhu. Closed loop neural-symbolic learning via integrating neural perception, grammar parsing, and symbolic reasoning. In International Conference on Machine Learning (ICML), 2020.
230
+ Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European Conference on Computer Vision (ECCV), 2014.
231
+ Qian Liu, Shengnan An, Jian-Guang Lou, Bei Chen, Zeqi Lin, Yan Gao, Bin Zhou, Nanning Zheng, and Dongmei Zhang. Compositional generalization by learning analytical expressions. In Advances in Neural Information Processing Systems (NeurIPS), 2020.
232
+ Mahshad Mahdavi, Richard Zanibbi, Harold Mouchere, Christian Viard-Gaudin, and Utpal Garain. Icdar 2019 crohme+ tfd: Competition on recognition of handwritten mathematical expressions and typeset formula detection. In International Conference on Document Analysis and Recognition (ICDAR), 2019.
233
+ Robin Manhaeve, Sebastijan Dumancic, Angelika Kimmig, Thomas Demeester, and Luc De Raedt. Deepproblog: Neural probabilistic logic programming. In Advances in Neural Information Processing Systems (NeurIPS), 2018.
234
+ Richard Montague. Universal grammar. Theoria, 36(3):373-398, 1970.
235
+ Benjamin Newman, John Hewitt, Percy Liang, and Christopher D Manning. The eos decision and length extrapolation. In BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, 2020.
236
+ Maxwell Nye, Armando Solar-Lezama, Josh Tenenbaum, and Brenden M Lake. Learning compositional rules via neural program synthesis. In Advances in Neural Information Processing Systems (NeurIPS), 2020.
237
+
238
+ Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems (NeurIPS), 2019.
239
+ Zenon W Pylyshyn. Computation and cognition: Towards a foundation for cognitive science. Cambridge, Ma: MIT Press, 1984.
240
+ Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI Blog, 2019.
241
+ Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. Scaling language models: Methods, analysis & insights from training gopher. arXiv preprint arXiv:2112.11446, 2021.
242
+ Laura Ruis, Jacob Andreas, Marco Baroni, Diane Bouchacourt, and Brenden M Lake. A benchmark for systematic generalization in grounded language understanding. Advances in Neural Information Processing Systems (NeurIPS), 2020.
243
+ Jake Russin, Jason Jo, Randall C O'Reilly, and Yoshua Bengio. Compositional generalization in a deep seq2seq model by separating syntax and semantics. arXiv preprint arXiv:1904.09708, 2019.
244
+ David Saxton, Edward Grefenstette, Felix Hill, and Pushmeet Kohli. Analysing mathematical reasoning abilities of neural models. In International Conference on Learning Representations (ICLR), 2018.
245
+ Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. Self-attention with relative position representations. In North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), 2018.
246
+ Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research (JMLR), 15 (1):1929-1958, 2014.
247
+ Damien Teney, Peng Wang, Jiewei Cao, Lingqiao Liu, Chunhua Shen, and Anton van den Hengel. V-prom: A benchmark for visual reasoning using visual progressive matrices. In AAAI Conference on Artificial Intelligence (AAAI), 2020.
248
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems (NeurIPS), 2017.
249
+ Ramakrishna Vedantam, Arthur Szlam, Maximillian Nickel, Ari Morcos, and Brenden M Lake. Curi: A benchmark for productive concept learning under uncertainty. In International Conference on Machine Learning (ICML), 2021.
250
+ Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems (NeurIPS), 2022.
251
+ Chi Zhang, Feng Gao, Baoxiong Jia, Yixin Zhu, and Song-Chun Zhu. Raven: A dataset for relational and analogical visual reasoning. In Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
252
+ Chi Zhang, Sirui Xie, Baoxiong Jia, Ying Nian Wu, Song-Chun Zhu, and Yixin Zhu. Learning algebraic representation for systematic generalization in abstract reasoning. In European Conference on Computer Vision (ECCV), 2022.
253
+
254
+ # A DATASET STATISTICS
255
+
256
+ The handwritten images for each arithmetic concept originate from the handwritten math symbols dataset<sup>1</sup> hosted on Kaggle under the "CC0: Public Domain" license, parsed and extracted from the Competition on Recognition of Online Handwritten Mathematical Expressions (CROHME) (Mahdavi et al., 2019)<sup>2</sup>. We further clean the dataset by removing duplicate images, resulting in the statistics shown in Figure A1.
257
+
258
+ We conduct a detailed analysis of the collected data to demonstrate the validity of the HINT dataset as a benchmark for systematic generalization. Table A1 shows the size of each split in HINT, and Table A2 shows a comparison between the grammars of HINT and SCAN.
259
+
260
+ Table A1: Dataset size. The first row is the main split of HINT, and the rest are the few-shot learning split. As advocated by Csordás et al. (2021), the validation set also contains five generalization subsets for model selection.
261
+
262
+ <table><tr><td rowspan="2">Split</td><td rowspan="2">Train</td><td rowspan="2">Validation</td><td colspan="6">Test</td></tr><tr><td>Total</td><td>I</td><td>SS</td><td>LS</td><td>SL</td><td>LL</td></tr><tr><td>main</td><td>998000</td><td>4698</td><td>46620</td><td>9980</td><td>8000</td><td>10000</td><td>8640</td><td>10000</td></tr><tr><td>x</td><td>1100</td><td>491</td><td>4900</td><td>1100</td><td>900</td><td>1000</td><td>900</td><td>1000</td></tr><tr><td>y</td><td>1100</td><td>493</td><td>4900</td><td>1100</td><td>900</td><td>1000</td><td>900</td><td>1000</td></tr><tr><td>a</td><td>1000</td><td>470</td><td>4700</td><td>1000</td><td>900</td><td>1000</td><td>800</td><td>1000</td></tr><tr><td>b</td><td>1000</td><td>470</td><td>4700</td><td>1000</td><td>900</td><td>1000</td><td>800</td><td>1000</td></tr><tr><td>c</td><td>1000</td><td>470</td><td>4700</td><td>1000</td><td>900</td><td>1000</td><td>800</td><td>1000</td></tr><tr><td>d</td><td>1000</td><td>470</td><td>4700</td><td>1000</td><td>900</td><td>1000</td><td>800</td><td>1000</td></tr></table>
263
+
264
+ Table A2: The phrase-structure grammars for HINT and SCAN. While the grammars of both HINT and SCAN can generate infinite examples, HINT produces examples with larger depth and longer dependency due to the parentheses; the expression inside parentheses can be arbitrarily long. Specifically, the maximum depth and dependency range in SCAN are 6 and 4, respectively; the maximum length generated by the non-terminal "S" in the grammar of SCAN is 4.
265
+
266
+ <table><tr><td>HINT</td><td>SCAN</td></tr><tr><td>T = {Expression, Term, Factor, Number}</td><td>T = {C, S, V, D, U}</td></tr><tr><td>Start symbol: Expression</td><td>Start symbol: C</td></tr><tr><td>Σ = {+, -, ×, ÷, 0, 1, ..., 9, (, )}</td><td>Σ = {walk, look, run, jump, turn, left, right, around, opposite, and, after, twice, thrice}</td></tr><tr><td>R = {</td><td>R = {</td></tr><tr><td>Expression → Term</td><td>C → S | S and S | S after S</td></tr><tr><td>Expression → Expression Op1 Term</td><td>S → V | V twice | V thrice</td></tr><tr><td>Op1 → + | -</td><td>V → D[1] opposite D[2]</td></tr><tr><td>Term → Factor</td><td>V → D[1] around D[2]</td></tr><tr><td>Term → Term Op2 Factor</td><td>V → D | U</td></tr><tr><td>Op2 → × | ÷</td><td>D → U left | U right</td></tr><tr><td>Factor → Number</td><td>D → turn left | turn right</td></tr><tr><td>Factor → ( Expression )</td><td>U → walk | look | run | jump }</td></tr><tr><td>Number → 0 | 1 | 2 | 3 | ... | 9 }</td><td></td></tr></table>
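+
+ The HINT grammar on the left is left-recursive but corresponds to the EBNF "Expression → Term (Op1 Term)*, Term → Factor (Op2 Factor)*, Factor → Number | ( Expression )", which a short recursive-descent parser can handle. The sketch below is ours and assumes tokenization into single characters:
+
+ ```python
+ def parse_expression(tokens, i=0):
+     """Parse Expression -> Term (Op1 Term)*, left-associatively.
+     Returns (parse_tree, next_index)."""
+     node, i = parse_term(tokens, i)
+     while i < len(tokens) and tokens[i] in '+-':
+         op, i = tokens[i], i + 1
+         rhs, i = parse_term(tokens, i)
+         node = (op, node, rhs)
+     return node, i
+
+ def parse_term(tokens, i):
+     """Parse Term -> Factor (Op2 Factor)*."""
+     node, i = parse_factor(tokens, i)
+     while i < len(tokens) and tokens[i] in '×÷':
+         op, i = tokens[i], i + 1
+         rhs, i = parse_factor(tokens, i)
+         node = (op, node, rhs)
+     return node, i
+
+ def parse_factor(tokens, i):
+     """Parse Factor -> Number | '(' Expression ')'."""
+     if tokens[i] == '(':
+         node, i = parse_expression(tokens, i + 1)
+         return node, i + 1                      # skip ')'
+     return int(tokens[i]), i + 1                # Number -> 0 | ... | 9
+
+ tree, _ = parse_expression(list('1+2×(3-4)'))
+ # tree == ('+', 1, ('×', 2, ('-', 3, 4)))
+ ```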
267
+
268
+ For each split, we plot the frequency distributions of various aspects, including symbol, number of operators, expression length, tree depth, maximum dependency range, and result, as shown in Figure A2. The symbol distributions are similar across splits, and the Kullback-Leibler divergence between train and test is low (0.0055). The digits and operators are approximately uniformly distributed, except in the test-SL split, which has a relatively higher proportion of multiplication ('×'), since generating large numbers with short expressions generally requires more multiplication.
269
+
270
+ The result distributions of the test set differ from those of the train set. All results in the training set are no larger than 100, as designed; about half lie in [0, 10). In comparison, $29\%$ of the results in the test set are larger than 100.
271
+
272
+ Several properties of an input expression, including length, number of operators, tree depth, and maximum dependency range, are indicators of the difficulty of calculating the expression. We plot the frequency distributions w.r.t. these input properties in Figure A2. These distributions demonstrate significant differences between train and test.
273
+
274
+ ![](images/5a6d7bb7f6111b0b102a3cfacb5cbd26a6ee377507e0fa27f952b216867b4a1e.jpg)
275
+ Figure A1: The number of handwritten images for each symbol. There are 82 arithmetic symbols (the top 50 are shown here) and 83,501 images in total. We use the handwritten images for digits $0 \sim 9$ , operators $+, -, \times, \div$ , and parentheses $(,)$ in this work; others are for potential future use.
276
+
277
+ ![](images/623eb98f044089e9a0d76265c605e066fdf4d19c6b81ed4c154606317aa6aa70.jpg)
278
+
279
+ ![](images/2a3279e9d49688b5edc41c21229bcd28ddf25f7d4cc32778e8661204712f53cb.jpg)
280
+
281
+ ![](images/da01d225bbd86eeb7fff77723828d323a945235c4c93de41f37f00f4af4e128d.jpg)
282
+
283
+ ![](images/6acd4ceab20ba6cdeb0e2ca93601928ec0c4b259d07d86b8eea78f61cdee1333.jpg)
284
+
285
+ ![](images/11e986ffb657d02a78f2a2979d8b3434ddd2fff8d2fb99a05a016a513d025eaf.jpg)
286
+
287
+ ![](images/f76ab7139e95cbe2f1f1f4a09031254282ce55883d0dc32cda2cc6861b0ef22f.jpg)
288
+ Figure A2: The frequency distributions w.r.t. various aspects, including symbol, number of operators, expression length, tree depth, maximum dependency range, and result.
289
+
290
+ # B IMPLEMENTATION DETAILS
291
+
292
+ We benchmark deep sequence-to-sequence (seq2seq) frameworks on HINT, as illustrated by Figure 1. All models are implemented in PyTorch (Paszke et al., 2019).
293
+
294
+ # B.1 IMAGE TOKENIZER AND EMBEDDING
295
+
296
+ To serialize a handwritten expression, we first resize it to a height of 32 pixels and apply a sliding window of size 32 along the horizontal axis to render a sequence of images. Next, each image in the sequence is encoded by ResNet-18 (He et al., 2016). We found in preliminary experiments that pre-training on ImageNet does not help, likely due to the domain gap between ImageNet and HINT; we therefore use a randomly initialized ResNet-18 in our experiments.
297
+
298
+ # B.2 ENCODER-DECODER ARCHITECTURES
299
+
300
+ We consider the following three choices for the encoder-decoder architecture in a seq2seq framework: Recurrent Neural Networks (RNNs), Transformers, and GPT-3.
301
+
302
+ RNNs We test two popular RNNs: long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997) and gated recurrent units (GRU) (Chung et al., 2014). Both networks are evaluated with and without attention (Bahdanau et al., 2015). Our implementations of RNNs are adapted from a seq2seq tutorial.
303
+
304
+ Transformers We benchmark three variants of Transformer: the vanilla Transformer, Transformer with relative positional encoding, and Universal Transformer with relative positional encoding. The implementations of these Transformers are adapted from Csordás et al. (2021).<sup>4</sup>
305
+
306
+ GPT-3 To test GPT-3's ability to perform simple arithmetic operations without task-specific training, Brown et al. (2020) developed a small battery of 10 tests that involve asking GPT-3 a simple arithmetic problem in natural language; see Section 3.9.1 and Table 3.9 in Brown et al. (2020) for the results. In these tests, GPT-3 displays reasonable proficiency at simple arithmetic in the few-shot setting. However, they do not evaluate the multi-hop reasoning capability required by complex arithmetic expressions, which usually involve more operators and larger numbers.
307
+
308
+ To systematically and comprehensively evaluate GPT-3's capability of arithmetic reasoning, we test GPT-3 on the proposed HINT benchmark using symbolic expressions as input. Since all tokens of HINT are in the vocabulary of GPT-3, we directly evaluate GPT-3 via zero-shot prompting using the OpenAI API $^{5}$ . We construct the prompt in the following form: "Q: What is <Expression>? A: The answer is," similar to the practice in Brown et al. (2020) but with more complex expressions.
309
+
310
+ Via task-specific zero-shot or few-shot prompting, pre-trained large language models achieve excellent performance on intuitive, single-step "System 1" tasks (Kahneman, 2011). However, LLMs struggle on "System 2" tasks that require slow thinking and multi-hop reasoning (Rae et al., 2021), even at the scale of over 100B parameters, as with GPT-3. To address this shortcoming, chain of thought (CoT) prompting (Wei et al., 2022), which feeds LLMs intermediate step-by-step reasoning to augment the final answer in a few-shot setting, has been proposed to elicit multi-hop reasoning in LLMs.
311
+
312
+ Very recently, chain of thought prompting has been extended to the zero-shot setting (Kojima et al., 2022) by adding a simple prompt, "Let's think step by step", to facilitate step-by-step thinking before answering each question. Zero-shot CoT outperforms standard zero-shot prompting by a large margin on a variety of reasoning tasks. Therefore, we also apply zero-shot CoT prompting to evaluate GPT-3 on HINT. Concretely, we follow a two-stage prompting strategy, similar to Kojima et al. (2022):
313
+
314
+ 1st prompt: "Q: What is <Expression>? A: Let's think step-by-step." This prompt extracts the step-by-step reasoning process, in natural language, from GPT-3,
315
+
316
+ Table A3: Hyperparameter tuning. Our choices are underlined.
317
+
318
+ <table><tr><td>Model</td><td>Variant</td><td>Encoder</td><td>Decoder</td><td>Embedding</td><td>Hidden</td><td>Heads</td><td>Dropout</td><td>Batch</td><td>Steps</td><td>Learning Rate</td></tr><tr><td rowspan="2">RNN</td><td>LSTM (+ att)</td><td rowspan="2">1,3,6,9</td><td rowspan="2">1,3,6,9</td><td rowspan="2">128, 256, 512</td><td rowspan="2">128, 256, 512</td><td rowspan="2">-</td><td rowspan="2">0,0.1,0.5</td><td rowspan="2">128</td><td rowspan="2">100K</td><td rowspan="2">\(10^{-3},10^{-4},10^{-5}\)</td></tr><tr><td>GRU (+ att)</td></tr><tr><td rowspan="2">Transformer</td><td>vanilla relative</td><td rowspan="2">1,3,6,9</td><td rowspan="2">1,3,6,9</td><td rowspan="2">128, 256, 512</td><td rowspan="2">128, 256, 512</td><td rowspan="2">4,8,12</td><td rowspan="2">0,0.1,0.5</td><td rowspan="2">128</td><td rowspan="2">100K</td><td rowspan="2">\(10^{-3},10^{-4},10^{-5}\)</td></tr><tr><td>relative universal</td></tr></table>
319
+
320
+ ![](images/feb8f7a4a403a394967cb42f8deb5c3bec86546ff30bcffd4a11a9ef53d2d53c.jpg)
321
+ Figure A3: Test accuracy (avg.) of Transformer rel. uni. using symbol inputs as a function of several sample properties: the expression's length, the depth of its parse tree, its maximum dependency range, its number of operators, and the final result.
322
+
323
+ ![](images/039cbafb1bbf217d8cc248631cf9d7867541f6dfd96057ab7e602ae5451b0529.jpg)
324
+
325
+ ![](images/23c7aac5912996a45597940057f846bd1cca521adbe5c11ea3377981f11b16f7.jpg)
326
+
327
+ ![](images/feb7f643d5615403009b2f0dbd7eb64bc55be4aa86afe0c13a9e8bb633ed5897.jpg)
328
+
329
+ ![](images/9524529f62c138050d36374072f6f16c6ca11eb9cbdaacef42874e6ce063f2b0.jpg)
330
+
331
+
332
+
333
+ **2nd prompt** "Q: What is <Expression>? A: Let's think step-by-step. <Z> Therefore, the answer (arabic numerals) is". In the second stage, the response <Z> generated by the first prompt is appended to the initial prompt, followed by the answer-trigger sentence. This second prompt is then fed to GPT-3 to predict the final answer.
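+
+ A minimal sketch of this two-stage pipeline, again assuming the legacy Completions SDK (the `query` helper below is hypothetical):
+
+ ```python
+ import openai
+
+ def query(prompt, max_tokens=256):
+     # Thin wrapper over the legacy Completions endpoint.
+     resp = openai.Completion.create(
+         engine="text-davinci-002", prompt=prompt,
+         max_tokens=max_tokens, temperature=0,
+     )
+     return resp["choices"][0]["text"]
+
+ def zero_shot_cot(expr):
+     # 1st prompt: elicit the step-by-step rationale <Z>.
+     base = f"Q: What is {expr}? A: Let's think step-by-step."
+     z = query(base)
+     # 2nd prompt: append <Z> plus the answer trigger to extract the final answer.
+     second = base + z + " Therefore, the answer (arabic numerals) is"
+     return query(second, max_tokens=16).strip()
+ ```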
334
+
335
+ In our experiments, we use the 'text-davinci-002' engine of the OpenAI API, the most capable GPT-3 model available at the time of writing, with approximately 175 billion parameters<sup>6</sup>.
336
+
337
+ # B.3 TRAINING
338
+
339
+ Table A3 shows the hyperparameters tuned for the baselines. Our choices for each model are underlined, and performance is reported under these settings unless explicitly stated otherwise. When generating the output, we use greedy decoding for all models for simplicity.
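+
+ For concreteness, a sketch of greedy decoding, assuming a hypothetical `step_logits` function that maps a prefix of token ids to next-token logits:
+
+ ```python
+ import torch
+
+ def greedy_decode(step_logits, bos_id, eos_id, max_len=64):
+     # Repeatedly append the argmax token until EOS or the length limit.
+     tokens = [bos_id]
+     for _ in range(max_len):
+         logits = step_logits(torch.tensor(tokens))  # shape: (vocab_size,)
+         next_id = int(torch.argmax(logits))
+         tokens.append(next_id)
+         if next_id == eos_id:
+             break
+     return tokens
+ ```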
340
+
341
+ For the few-shot learning experiments, models are first pre-trained on the main training set and then fine-tuned on the training set of each new concept individually. Models are fine-tuned for 1,000 iterations with a batch size of 128, where half of each batch is drawn from the main training set to prevent forgetting. The learning rates are $10^{-5}$ for Transformers and $10^{-3}$ for RNNs.
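+
+ A sketch of this batch construction, with hypothetical `main_set` and `concept_set` lists of training examples:
+
+ ```python
+ import random
+
+ def mixed_batch(main_set, concept_set, batch_size=128):
+     # Half of each batch comes from the main training set (against forgetting),
+     # the other half from the new concept's training set.
+     half = batch_size // 2
+     return random.choices(main_set, k=half) + random.choices(concept_set, k=half)
+ ```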
342
+
343
+ All models reported in our paper can be trained on a single NVIDIA Titan V GPU with 12 GB of memory; training a model takes at most eight hours.
344
+
345
+ # B.4 ADDITIONAL EXPERIMENTAL RESULTS
346
+
347
+ Figure A3 shows the test accuracy as a function of several sample properties. Figure A4 shows the importance of these properties.
348
+
349
+ # C HUMAN STUDY FOR FEW-SHOT LEARNING AND GENERALIZATION
350
+
351
+ We conduct a preliminary human study to evaluate human performance on the few-shot learning experiment. Specifically, we test ten human subjects on six concepts that are unknown to them, so as to minimize the influence of prior knowledge. Each subject is asked to infer each concept's meaning from 10 training examples and then answer 4 test questions. We report the accuracy on these test questions as human performance.
352
+
353
+ ![](images/aba87f5433f6625f5f2a3af2c18de6b09fcfacd31983a99d9619ee4af0911da6.jpg)
354
Figure A4: The importance of sample properties w.r.t. the test accuracy of Transformer rel. uni. using symbol inputs. Normalized permutation feature importance is reported, using a k-nearest neighbors classifier $(k = 3)$ to predict whether the model produces the correct result.
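+
+ As an illustration of this analysis (not the authors' code), normalized permutation importances could be computed with scikit-learn, here on synthetic stand-ins for the five sample properties and per-sample correctness labels:
+
+ ```python
+ import numpy as np
+ from sklearn.neighbors import KNeighborsClassifier
+ from sklearn.inspection import permutation_importance
+
+ rng = np.random.default_rng(0)
+ X = rng.random((500, 5))         # length, depth, max range, #operators, result
+ y = X[:, 0] + X[:, 3] < 1.0      # synthetic "answered correctly" labels
+
+ knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
+ res = permutation_importance(knn, X, y, n_repeats=10, random_state=0)
+ importance = res.importances_mean / res.importances_mean.sum()  # normalize
+ print(importance)
+ ```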
2023/A Minimalist Dataset for Systematic Generalization of Perception, Syntax, and Semantics/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:84a47e33baaeb7ba0d0ee8478dc92757c703f52bf9d8c84dd16c10c3b0db1602
3
+ size 807375
2023/A Minimalist Dataset for Systematic Generalization of Perception, Syntax, and Semantics/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Model or 603 Exemplars_ Towards Memory-Efficient Class-Incremental Learning/4e97ac8a-8041-408b-8c69-b9855a34c746_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Model or 603 Exemplars_ Towards Memory-Efficient Class-Incremental Learning/4e97ac8a-8041-408b-8c69-b9855a34c746_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Model or 603 Exemplars_ Towards Memory-Efficient Class-Incremental Learning/4e97ac8a-8041-408b-8c69-b9855a34c746_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:45d1291b3d9115c7dcf4e3730a9498b27fbc11b35b553f40db5d1bff35e48047
3
+ size 3152193
2023/A Model or 603 Exemplars_ Towards Memory-Efficient Class-Incremental Learning/full.md ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Model or 603 Exemplars_ Towards Memory-Efficient Class-Incremental Learning/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9158d0ba171acf8e12ba00e8e2dbc887284274a7dd152a8a5e6a4d9189b78281
3
+ size 2307984
2023/A Model or 603 Exemplars_ Towards Memory-Efficient Class-Incremental Learning/layout.json ADDED
The diff for this file is too large to render. See raw diff