Eric03 committed
Commit eee5a67 · verified · 1 Parent(s): 94f5630

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes.

Files changed (50)
  1. 2004.11892/main_diagram/main_diagram.drawio +1 -0
  2. 2004.11892/main_diagram/main_diagram.pdf +0 -0
  3. 2004.11892/paper_text/intro_method.md +12 -0
  4. 2008.01700/main_diagram/main_diagram.drawio +1 -0
  5. 2008.01700/main_diagram/main_diagram.pdf +0 -0
  6. 2008.01700/paper_text/intro_method.md +18 -0
  7. 2104.00479/main_diagram/main_diagram.drawio +0 -0
  8. 2104.00479/main_diagram/main_diagram.pdf +0 -0
  9. 2104.00479/paper_text/intro_method.md +24 -0
  10. 2104.07660/main_diagram/main_diagram.drawio +1 -0
  11. 2104.07660/main_diagram/main_diagram.pdf +0 -0
  12. 2104.07660/paper_text/intro_method.md +63 -0
  13. 2104.12133/main_diagram/main_diagram.drawio +1 -0
  14. 2104.12133/main_diagram/main_diagram.pdf +0 -0
  15. 2104.12133/paper_text/intro_method.md +58 -0
  16. 2106.13882/main_diagram/main_diagram.drawio +1 -0
  17. 2106.13882/main_diagram/main_diagram.pdf +0 -0
  18. 2106.13882/paper_text/intro_method.md +29 -0
  19. 2110.11236/main_diagram/main_diagram.drawio +1 -0
  20. 2110.11236/main_diagram/main_diagram.pdf +0 -0
  21. 2110.11236/paper_text/intro_method.md +38 -0
  22. 2203.12193/main_diagram/main_diagram.drawio +0 -0
  23. 2203.12193/paper_text/intro_method.md +63 -0
  24. 2203.15845/main_diagram/main_diagram.drawio +1 -0
  25. 2203.15845/paper_text/intro_method.md +40 -0
  26. 2203.16517/main_diagram/main_diagram.drawio +0 -0
  27. 2203.16517/paper_text/intro_method.md +123 -0
  28. 2204.02071/main_diagram/main_diagram.drawio +1 -0
  29. 2204.02071/main_diagram/main_diagram.pdf +0 -0
  30. 2204.02071/paper_text/intro_method.md +110 -0
  31. 2210.03675/main_diagram/main_diagram.drawio +1 -0
  32. 2210.03675/main_diagram/main_diagram.pdf +0 -0
  33. 2210.03675/paper_text/intro_method.md +105 -0
  34. 2210.07158/main_diagram/main_diagram.drawio +1 -0
  35. 2210.07158/main_diagram/main_diagram.pdf +0 -0
  36. 2210.07158/paper_text/intro_method.md +85 -0
  37. 2210.16906/main_diagram/main_diagram.drawio +1 -0
  38. 2210.16906/main_diagram/main_diagram.pdf +0 -0
  39. 2210.16906/paper_text/intro_method.md +76 -0
  40. 2301.00061/main_diagram/main_diagram.drawio +1 -0
  41. 2301.00061/main_diagram/main_diagram.pdf +0 -0
  42. 2301.00061/paper_text/intro_method.md +238 -0
  43. 2302.10970/main_diagram/main_diagram.drawio +1 -0
  44. 2302.10970/main_diagram/main_diagram.pdf +0 -0
  45. 2302.10970/paper_text/intro_method.md +69 -0
  46. 2304.05516/main_diagram/main_diagram.drawio +262 -0
  47. 2304.05516/main_diagram/main_diagram.pdf +0 -0
  48. 2304.05516/paper_text/intro_method.md +236 -0
  49. 2304.07645/main_diagram/main_diagram.drawio +1 -0
  50. 2304.07645/main_diagram/main_diagram.pdf +0 -0
2004.11892/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
+ <mxfile modified="2019-05-07T21:32:22.100Z" host="www.draw.io" agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:60.0) Gecko/20100101 Firefox/60.0" etag="IpQNVKgcKxVaGC5hemVI" version="10.6.7" type="device"><diagram id="Z0G94IeOXpua3MbAxepi" name="Page-1">7Vr/c6I4FP9r/LEdIEDxx2p1ezN2d3veXvd+jBCFayRuiFX3r78XSAQMKra67c1sxxl4L8lLeJ/3FdpB/fn6E8eL+IFFhHYcK1p30F3HcWzHv4GL5GwKTtcNCsaMJ5GaVDLGyU+imJbiLpOIZLWJgjEqkkWdGbI0JaGo8TDnbFWfNmW0vusCz4jBGIeYmtynJBKx4tqWVQ7ck2QWq60DTw3MsZ6sGFmMI7aqsNCgg/qcMVHczdd9QqXytF6KdcM9o9uDcZKKNgvu/b8dOr5bZY/fFgs0v2dPD+yqW0h5wXSpHrjj+BTk9aYMxEq9UsbzEf/HUh6113GQ7/f7UgEly5/JK4AgyFpoEXCWQkoxqtQgNlq3oJGFvE2ZgEtvFSeCjBc4lLwVmBPwYjGnQNlwmxW2Ybtw/0K4SACkW5rMUmAKJidjRVEylftlICpJZ3/Jsbsrf3uCqsqUFqU8sq6wlAo/ETYngm9gih5V5qvM2XULclWxDQ8VvLhiFrblK5tU9jjbSi4hgxuF2gkI2o6hVxKBCSuScRGzGUsxHZTcHmfLNCJSrAVUOWfEcmVJdf9LhNgof8RLwepgkHUivlfu/5Girj1F3a2V5JzYaCKF5/1eJSqrJFkuyym9bi9qGVvykBzQjQJCYD4j4sA8hanU20Eb4IRikbzUQ8PZEUUNPrmDcB2/I45zBqt3vZrVdxus3vaarP5CKvIMFeE0WxH+NkVNE0r723iHpkFIwlAGEsHZM6mMTALP9Q6a5gkBRWeS/brVQeaXqFZvdsj8fkGAeb3TBy2d3vE+lNcHezNxlLzoFNpfZgL247KgwgJXEm1lUsO6AcUZJMwxwTyM5UkAnLWeN+G7K3fl7cnb4YZKQfy4Z00KcxhNtgwcPs9yI/myFCCFnC9WbV1DOZTXNYOVa5keFVzKoxwzoA/GUqMaDU6yJRWZqedVMqdY6iavxrTHWZVSJwQ95QA0V0RhnNBohDdsKR8yE6B1TfVixpOfIBZvCyxwGaEc0vFrM8ZypdoaTgtzvmqI7B3WA17XJo7A8hQDakmKF1kyoRruOThpkvaYAKs+lu3bWwAK6unKRaYF6KhWi6m6ljq/CZhBFRwGGLYBel5A7+aclBVWUElQitXSEFQtPMor4zu35PypHl+yGKyd0rxHiZMoImkeyAUEmsnW8BYsSUWuHq8HP/CZvqzgPDh6H2i7pOEnp3PRZyk8DU5yPAmYw4pkohHpw+5zHH+Ft+O3w9u5VE3umPWJgXMR9BTOurXcrUJ2QFaNzS7EcwArz7r1jsc2cEcm7qgBY4onhH5lWSISJuXzYu4O9u8Gr+e0gze4FLr+Hmc2Qf7tzG9G27fe2ZmRY8B9fX3dDmroFaz87zfebfEO3jt4uy367w/dACGVfY52QEhVxh+kA0JNWdNoZR6XYICQl64KTbftgMCOcxvkr+56SkTtFi8r94YBw+fPUPB6NzsFr3vtNXhRQ9PjX6rpQf57OM3/47Wk7gaOO+jHekWBXve1AKFudzg0vxY8xUzuCH2vlSdUUOqw7aeDk97rva8z2qi1NzrdtyPn3nz+EkSfB3QZj/4A13qcDJ6v7L3A5RGvolkNktT+VfHJ5Vb2qf5ibSIIVzyXyqbVOKps4WSZp31r2sbxkmGZ358mDUG+blh9aXgXOflwCKaPWpxcv7u2Tjq4Un2dSifZorII6sR0u2LgdKC4uvUrEmvj5f7G5udVjPxrCgg/VGrf
/8iHtHMod6vivBIT8pdTldJY5hL9hbFNZb7tuc2muKHQb2wA6zFsT5hpCEb733zq+KEjT2AbccfxmuLO6VUAkOWn63ys8g8AaPAf</diagram></mxfile>
2004.11892/main_diagram/main_diagram.pdf ADDED
Binary file (38 kB).
 
2004.11892/paper_text/intro_method.md ADDED
@@ -0,0 +1,12 @@
+ # Method
+
+ We focus on creating high-quality, non-trivial questions that allow the model to learn to extract the proper answer from a context-question pair.
+
+ **Sentence Retrieval:** A standard cloze question can be obtained by taking, from the context, the original sentence in which the answer appears and masking the answer with a chosen token. However, a model trained on this data will only learn text matching and how to fill in the blank, with little generalizability. For this reason, we chose a retrieval-based approach that obtains a sentence similar to the one containing the answer, from which to create a given question. For our experiments, we focused on answers that are named entities, a prior that has proven useful for downstream QA performance [@lewis-etal-2019-unsupervised] and was confirmed by our initial experiments. First, we indexed all of the sentences from a Wikipedia dump using the ElasticSearch search engine. We also extract named entities for each sentence in both the Wikipedia corpus and the sentences used as queries. We assume access to a named-entity recognition system, and in this work make use of the spaCy[^4] NER pipeline. Then, for a given context-answer pair, we query the index, using the original context sentence as a query, to return a sentence which (1) contains the answer, (2) does not come from the *context*, and (3) has a lower than 95% F1 score with the query sentence, to discard highly similar or plagiarized sentences. Besides ensuring that the retrieved sentence and query sentence share the answer entity, we require that at least one additional matching entity appears in both the query sentence and the entire context; we perform ablation studies on the effect of this matching below. These retrieved sentences are then fed into our question-generation module.
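The three retrieval filters described above (answer containment, context exclusion, and near-duplicate rejection via token F1) can be sketched as follows. This is a minimal illustration, not the paper's implementation: `token_f1` is a simple token-overlap score, and `candidates` stands in for the sentences returned by the ElasticSearch query.

```python
from collections import Counter

def token_f1(a, b):
    """Token-level F1 overlap between two sentences."""
    ta, tb = a.lower().split(), b.lower().split()
    common = sum((Counter(ta) & Counter(tb)).values())
    if common == 0:
        return 0.0
    precision, recall = common / len(tb), common / len(ta)
    return 2 * precision * recall / (precision + recall)

def filter_candidates(query, answer, context, candidates, f1_cap=0.95):
    """Keep retrieved sentences that (1) contain the answer,
    (2) are not taken from the context, and (3) are not
    near-duplicates of the query sentence (token F1 below f1_cap)."""
    return [s for s in candidates
            if answer in s
            and s not in context
            and token_f1(query, s) < f1_cap]
```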
+
+ <figure id="fig:question-example" data-latex-placement="t!">
+ <img src="uqa_example" />
+ <figcaption>Example of synthetically generated questions using generic cloze-style questions as well as a template-based approach.</figcaption>
+ </figure>
+
+ **Template-based Question Generation:** We consider several question styles: (1) generic cloze-style questions where the answer is replaced by the token "\[MASK\]", and (2) the templated question "Wh+B+A+?", as well as variations on the ordering of this template, as shown in Figure [2](#fig:question-example){reference-type="ref" reference="fig:question-example"}. Given the retrieved sentence in the form `[Fragment A] [Answer] [Fragment B]`, the templated question "Wh+B+A+?" replaces the *answer* with a wh-component (e.g., what, who, where), which depends on the entity type of the *answer*, and places the wh-component at the beginning of the question, followed by sentence `Fragment B` and `Fragment A`. For the choice of wh-component, we sample a bi-gram based on the prior probability of that bi-gram being associated with the named-entity type of the answer. This prior probability is calculated from named-entity types and question bi-gram starters in the SQuAD dataset. This information does not make use of full context-question-answer triples and can be viewed as prior information that does not disturb the integrity of our unsupervised approach. Additionally, the choice of wh-component does not significantly affect results. For template-based approaches, we also experimented with clause-based templates but did not find significant differences in performance.
2008.01700/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
+ <mxfile host="app.diagrams.net" modified="2020-08-04T01:16:22.706Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.105 Safari/537.36" etag="WcSQMpqqdBI2Dp-A-Ruo" version="13.5.9" type="device"><diagram id="f0MlJ7Kajufxq2jq3lXt" name="Page-1">7Zpbe6I4GIB/jZfTh7N4aT2Nz7RTq51nOpcRImQHiRti1f76DRJESKp2FdDZ7U3Jl4PkzXdKSEPvzNcDAhb+I3Zh0NAUd93Quw1NUxVFY/9iySaRWLaeCDyCXN4oE0zQO0x7cukSuTDKNaQYBxQt8kIHhyF0aE4GCMGrfLMZDvK/ugAeFAQTBwSi9CdyqZ9IjaaSyb9C5Pn8lzWDV8xB2pYLIh+4eLUn0nsNvUMwpsnTfN2BQcwuxZL0639Qu3svAkN6Sodvv5/ejeflw2D53nUJ1Ydj3P7CR3kDwZLPl78s3aQACF6GLowHURr6/cpHFE4WwIlrV2zJmcyn84CVVPY4Q0HQwQEm2766C6A9c5g8ogT/hns1lmPD6YzViNNI3wkSCtd7Ij6tAcRzSMmGNVllC6Km+uLLFgNwJfB2fTNQ7IGz+gQ3UxE4QZfpDS9iQn3s4RAEvUx6nyeZtXnAeMH5/QUp3XAjAEuK83ThGtHX+PlOUQ1e/hUPd2eoTV7urvn428ImLYRs1q/7haSfmRazbttS2q+wnhb76/f54CNIEKMJCW/74UpGeEkceIAmVzoKiAfpgXa6mTSMUR9UDAIDQNFb3oRlSrDt2iYEbPYaLDAKabQ38igWsAZcG7XU+Dc7J5e3vGPtzYL+JS+QaeNuJmcoqFqzgirmf1BBteafoaB6BRqqlRt6ZrYDHWnomdqmYR7UhxNCT1qbBoL6QpFeMsfZTJNzdK2pZVqX4agreQ00DBFrU4JVbVklYTUErD8i5kmKaNn8aJ5fnlOIQ1iAykUgQF7Iig6jFruo+5gWYolnm1fMketu3bJswfJLepEVEIgbMj0uLaMSeA9+DP9c3GqrZt6q6CYqTBB4YOfpQVpzLDXIsoFfuWRAnhr8+3DfPDHcp47+cuH+vEjQqjXny+V75olLeplsr4Kltq5qqZuSoG8FsWt00Rt79OLHjo9xBFkJzGMfFk6jxZaV0gvfEMHhPKalsRdQ2t72MRlhStIBUgl7xb1hz0svLuE787mCJHK1JJ7ULC0Bs/83u8+aXbqEx+3u4juq86KmIhjeBMY7qa+bBSRfRoCA7Y4zujYj2W1K9qzEbt6ZFdqJUe+Z2W3aiXGinaQjXoudSLdPsZ3AYCFGpGLgmTGEmtKFFKDgTFMqrEl/+1eOidk1ByKtVgO77pQ+zd9uLuBYgiHlsjcFhG6WwLHpTeuPPNZ15WeqmCynnylfYERR6O0wvhCAwkRwdRhlEbxa9yJ+LEy8ufJyhVpXvzcW1Y5b6ZWGs+IZrKqJBGVnsHppBG2B4PMDBCSx0DIoznBIeaBTtQtRbZk5qpIPBtKDbbssqC0B6qQ9nrRvFmizZqC66Ba7z99vFqfkXLpanJqIc3zDPNXa9VP8QNiujijBlGXJODyWwJ8BuHaNFfe6nWVE8TzLy29Gd/MZgCY7NKoUreSr4IZx7e8A5zZD15pYnbgDukQi9dr5NgiCoTN9DIeTv613/9HvSm75dW85RMl2QmVZvJSn6FHHveH3/tO407tdqBIlrRSq6EVHo6fbxVlhni+3efHgSIAJQ7cdX5FmJScAUYScAr8Cp0uepWvHDtM/XJTjt/UkH3UP2fHRo769NTQPHCF87lKfeAuveIJRjAfJxHmvTD2EgfTCwbRdGCfhIoxz/GIfK2b31ZPm2aV/vfcP</diagram></mxfile>
2008.01700/main_diagram/main_diagram.pdf ADDED
Binary file (11.8 kB).
 
2008.01700/paper_text/intro_method.md ADDED
@@ -0,0 +1,18 @@
+ # Method
+
+ The proposed EasyRL framework allows the training and evaluation of RL agents on a variety of OpenAI Gym as well as custom real-world environments. EasyRL follows a highly modularized implementation with abstractions such as *Agent* and *Environment*. The sequence diagram for navigating through the framework is shown in Fig. [1](#fig:structure){reference-type="ref" reference="fig:structure"}. The GUI for the framework is shown in Fig. [2](#fig:gui){reference-type="ref" reference="fig:gui"}. The user can select from a variety of RL agents and environments (see Fig. [\[fig:gui1\]](#fig:gui1){reference-type="ref" reference="fig:gui1"}). The user can then set the hyper-parameters for training (see Fig. [\[fig:gui2\]](#fig:gui2){reference-type="ref" reference="fig:gui2"}). The training results are plotted using metrics such as mean reward and training loss. The graphs also show the epsilon annealing process. The training environment is dynamically rendered on the screen, and the rendering speed can be changed (see *Display Episode Speed* in Fig. [\[fig:gui2\]](#fig:gui2){reference-type="ref" reference="fig:gui2"}).
+
+ The trained RL model as well as the results can be saved for future use. The user can load a previously trained RL agent using *Load Model* and run test cases as well as visualize the results. The framework provides options to create custom RL agents and environments using *Load Agent* and *Load Environment*. We provide a detailed help guide to assist the user with these commands, along with tooltip texts.
+
+ <figure id="fig:cust" data-latex-placement="ht!">
+ <img src="customEnv1.png" style="height:3.5cm" />
+ <figcaption>API for Custom Environment</figcaption>
+ </figure>
+
+ EasyRL currently hosts a list of model-free algorithms that can handle both fully observable environments, such as Q-learning [@watkins1992q], SARSA [@rummery1994line], DQN [@mnih2015human], DDQN [@van2016deep], PPO [@schulman2017proximal], and REINFORCE [@williams1992simple], and partially observable environments, such as DRQN [@hausknecht2015deep] and ADRQN [@zhu2017improving]. The off-policy deep RL techniques mentioned above are implemented using standard experience replay for sampling experiences. It should be noted that our framework also supports model-based RL agents. The user is also allowed to create custom RL agents and import them into the EasyRL framework (as a Python file).
+
+ The framework hosts a variety of OpenAI Gym environments (classic control and Atari). The user can also create a custom environment by following the API shown in Fig. [3](#fig:cust){reference-type="ref" reference="fig:cust"}. We have implemented some custom (real-world) environments for selecting sellers in e-markets [@irissappane2014pomdp] and chemotherapeutic drug dosage for cancer treatment [@padmanabhan2017reinforcement].
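A custom environment of this kind can be sketched in the familiar Gym style of `reset`/`step` methods. This is a hypothetical sketch of a seller-selection environment: the class name, method signatures, and honesty probabilities are illustrative assumptions, and EasyRL's actual required API is the one shown in Fig. 3, which may differ.

```python
import random

class SellerEnv:
    """Hypothetical Gym-style sketch of a seller-selection environment.
    EasyRL's actual custom-environment API (Fig. 3) may use different
    method names and signatures."""

    def __init__(self, n_sellers=3, horizon=10):
        self.n_actions = n_sellers              # pick one seller per step
        self.honesty = [0.9, 0.5, 0.2][:n_sellers]  # assumed honesty rates
        self.horizon = horizon
        self.t = 0

    def reset(self):
        """Start a new episode; return the (single, dummy) state."""
        self.t = 0
        return 0

    def step(self, action):
        """Transact with seller `action`; reward +1 on an honest deal."""
        self.t += 1
        reward = 1.0 if random.random() < self.honesty[action] else -1.0
        done = self.t >= self.horizon
        return 0, reward, done
```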
+
+ The EasyRL framework is highly modularized and extensible (MVC design pattern). It is predominantly written in Python and supports both the TensorFlow and PyTorch deep learning libraries. EasyRL also supports C++ native implementations (see DRQNNative, DDQNNative) via CFFI, which speeds up training by at least $5$ times. The framework, by default, uses the local CPU/GPU during training; however, it can be easily configured to use remote resources. Further, EasyRL supports training multiple RL agents in parallel via the Python threading library. The EasyRL framework is easy to install and is supported on Linux, Windows, as well as iOS. We also provide a command-line interface, offering the same functionality as the GUI.
+
+ Our demonstration will show how a GUI can greatly simplify the process of developing, training, and testing an RL agent. We will demonstrate our simple installation procedure and show how a user with minimal knowledge of RL, and even of programming, can successfully train an RL agent. In addition to training and testing different combinations of agents and environments, we will show how to save and load pre-trained RL agents along with the results from a training or test run. We will demonstrate how to create custom environments and RL agents and show the training results for one such custom environment, including the visualization graphs. Furthermore, we will show how multiple agents can be trained simultaneously and the improvement in training speed when the native C++ implementation is used.
2104.00479/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render.
 
2104.00479/main_diagram/main_diagram.pdf ADDED
Binary file (91.1 kB).
 
2104.00479/paper_text/intro_method.md ADDED
@@ -0,0 +1,24 @@
+ # Method
+
+ <figure id="fig:overview" data-latex-placement="t">
+ <img src="images/overview.png" style="width:80.0%" />
+ <figcaption>Overview of the proposed approach. First, we analyze the distribution of the activation space of the Creative Decoder <span class="math inline"><em>C</em><em>D</em></span>. After we extract the activations from the model for a set of latent vectors <span class="math inline"><em>l</em></span>, we compute the empirical <span class="math inline"><em>p</em></span>-values, followed by the maximization of non-parametric scan statistics (NPSS). Finally, distributions of subset scores for creative and non-creative processes are estimated, and a subset of samples and the corresponding anomalous subset of nodes in the network are identified.</figcaption>
+ </figure>
+
+ A visual overview of the proposed approach is shown in Figure [1](#fig:overview){reference-type="ref" reference="fig:overview"}. Subset scanning treats the creative quantification and characterisation problem as a search for the *most anomalous* subset of observations in the data. This exponentially large search space is efficiently explored by exploiting mathematical properties of our measure of anomalousness. Consider a set of samples from the latent space $X = \{X_1 \cdots X_M\}$ and nodes $O = \{O_1 \cdots O_J\}$ within the creative decoder $CD$, where $CD$ is a generative neural network capable of producing creative outputs [@das2019toward]. Let $X_S \subseteq X$ and $O_S \subseteq O$; we then define the subsets $S$ under consideration to be $S = X_S \times O_S$. The goal is to find the most anomalous subset:
+ $$S^{*}=\arg \max _{S} F(S)$$
+ where the score function $F(S)$ defines the anomalousness of a subset of samples from the latent space and node activations. Group-based subset scanning uses an iterative ascent procedure that alternates between two steps: a step identifying the most anomalous subset of samples for a fixed subset of nodes, and a step that identifies the converse. There are $2^M$ possible subsets of samples, $X_S$, to consider at these steps. However, the Linear-time Subset Scanning (LTSS) property [@neill-ltss-2012; @speakman_penalized] reduces this space to only $M$ possible subsets while still guaranteeing that the highest-scoring subset will be identified. This drastic reduction in the search space is the key feature that enables subset scanning to scale to large networks and sets of samples.
+
+ **Non-parametric Scan Statistics (NPSS)** Group-based subset scanning uses NPSS, which has been used in other pattern-detection methods [@mcfowland-fgss-2013; @mcfowland-tess-2018; @feng-npss_graph-2014; @cintas2020detecting; @akinwande2020identifying]. Given that NPSS makes minimal assumptions about the underlying distribution of node activations, our approach has the ability to scan across different types of layers and activation functions. There are three steps to using non-parametric scan statistics on a model's activation data. The first is to form a distribution of "expected" activations at each node ($H_0$). We generate this distribution by letting the regular decoder process samples that are known to be from the training data (sometimes referred to as "background" samples) and recording the activations at each node. The second step involves scoring a group of samples in a test set that may contain creative or normal artifacts. We record the activations induced by the group of test samples and compare them to the baseline activations created in the first step. This comparison results in a $p$-value at each node, for each sample from the latent space in the test set. Lastly, we quantify the anomalousness of the resulting $p$-values by finding $X_S$ and $O_S$ that maximize the NPSS, which quantifies how much an observed distribution of $p$-values deviates from the uniform distribution.
+
+ Let $A^{H_0}_{zj}$ be the matrix of activations from $Z$ latent vectors of training samples at each of $J$ nodes in a creative decoder layer. Let $A_{ij}$ be the matrix of activations induced by $M$ latent vectors in the test set, which may or may not be novel. Group-based subset scanning computes an empirical $p$-value for each $A_{ij}$, as a measurement of how anomalous the activation value of a potentially novel sample $X_i$ is at node $O_j$. This $p$-value $p_{ij}$ is the proportion of activations from the $Z$ background samples, $A^{H_0}_{zj}$, that are larger than or equal to the activation from an evaluation sample at node $O_j$:
+ $$p_{ij} = \frac{1+\sum_{z=1}^{|Z|} I(A^{H_0}_{zj} \geq A_{ij})}{|Z|+1}$$
+ where $I(\cdot)$ is the indicator function. A shift is added to the numerator and denominator so that a test activation larger than *all* activations from the background at that node is given a non-zero $p$-value. Any test activation smaller than or tied with the smallest background activation at that node is given a $p$-value of 1.0.
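The empirical p-value computation above can be sketched directly from the formula, comparing every test activation to the background activations at the same node. A minimal sketch, assuming activations are given as NumPy arrays of shape (samples, nodes):

```python
import numpy as np

def empirical_pvalues(A_bg, A_test):
    """p_ij = (1 + #{z : A_bg[z, j] >= A_test[i, j]}) / (Z + 1),
    where A_bg is (Z, J) background activations and A_test is (M, J)
    test activations; returns an (M, J) matrix of p-values."""
    Z = A_bg.shape[0]
    # counts[i, j] = number of background activations at node j that are
    # greater than or equal to the test activation of sample i at node j
    counts = (A_bg[None, :, :] >= A_test[:, None, :]).sum(axis=1)
    return (1 + counts) / (Z + 1)
```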
+
+ Group-based subset scanning processes the matrix of $p$-values ($P$) from test samples with an NPSS to identify a submatrix $S = X_S \times O_S$ that maximizes $F(S)$, as this is the subset with the most statistical evidence for having been affected by an anomalous pattern. The general form of the NPSS score function is
+ $$F(S)=\max_{\alpha}F_{\alpha}(S)=\max_{\alpha}\phi(\alpha,N_{\alpha}(S),N(S))$$
+ where $N(S)$ is the number of empirical $p$-values contained in subset $S$ and $N_{\alpha}(S)$ is the number of $p$-values less than (significance level) $\alpha$ contained in subset $S$. It has been shown that for a subset $S$ consisting of $N(S)$ empirical $p$-values, $E\left[N_{\alpha}(S)\right] = N(S)\alpha$ [@mcfowland-fgss-2013]. Group-based subset scanning attempts to find the subset $S$ that shows the most evidence of an observed significance higher than the expected significance, $N_{\alpha}(S) > N(S)\alpha$, for some significance level $\alpha$.
+
+ In this work, we use the Berk-Jones (BJ) test statistic as our scan statistic. The BJ test statistic [@berk-bj-1979] is defined as
+ $$\phi_{BJ}(\alpha,N_\alpha,N) = N \cdot KL\left(\frac{N_\alpha}{N},\alpha\right)$$
+ where $KL(x,y) = x \log \frac{x}{y} + (1-x) \log \frac{1-x}{1-y}$ is the Kullback-Leibler divergence between the observed and expected proportions of significant $p$-values. We can interpret BJ as the log-likelihood ratio for testing whether the $p$-values are uniformly distributed on $[0,1]$.
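The Berk-Jones score for a fixed subset of p-values can be sketched as below. This is a minimal sketch of scoring only: it maximizes over the observed p-values as candidate thresholds $\alpha$, and it omits the full method's iterative ascent over sample and node subsets.

```python
import math

def kl(x, y):
    """Bernoulli KL(x||y) with the convention 0*log(0) = 0."""
    def term(a, b):
        return 0.0 if a == 0 else a * math.log(a / b)
    return term(x, y) + term(1 - x, 1 - y)

def berk_jones(pvalues):
    """F(S) = max_alpha N * KL(N_alpha/N, alpha), scanning the distinct
    observed p-values (< 1) as candidate significance thresholds."""
    ps = sorted(pvalues)
    N = len(ps)
    best = 0.0
    for alpha in ps:
        if alpha >= 1.0:
            continue
        n_alpha = sum(p <= alpha for p in ps)
        best = max(best, N * kl(n_alpha / N, alpha))
    return best
```

A subset whose p-values pile up near zero deviates strongly from uniformity and scores high; p-values spread evenly over [0, 1] score near zero.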
2104.07660/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
+ <mxfile host="app.diagrams.net" modified="2021-03-29T16:04:09.926Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.192 Safari/537.36" etag="wzVcMPqF1caA3t2e2Dzj" version="14.5.1" type="google"><diagram id="CyOvieYD7xED29DLxN-L" name="Page-1">lL1Zl6rM1jX4a85l1QDF7jKCvm8EFW5qICiCIoIN6K+vtQIzc596v29UVY7n7J3bBiJWM9ecKyI4/5mK9aB26e1kN/nh8p8Jlw//mUr/mUxWwhz+xBfe4wuzFT++UHRlPr70zwvr8nP4vsh9X32W+eH+Xx98NM3lUd7++8WsuV4P2eO/Xku7run/+2PH5vLfd72lxeF/vLDO0sv/fHVb5o/T99X5TPh7QzuUxenn1vx8Nb5Tp7+fHl+4n9K86ceX2Oym8n+mYtc0j/G3ehAPFzTej2FGEyj/m3d/R9Ydro//L184R7X24NYF1/5fuW2dz11zXv4f/Gy8zCu9PL9T/o728f6xQdE1z9v/vNt3AK9D9zgM/ytnpPufK/xNF+Lk0NSHR/eGz32/NRG+zv7GyM9EuP4fi0/4xf+5/F7/9I/B56vvh9Ovp4vf6//ZAn75muP/j2n+3y0DLr3hr2XNwoiiLUoIHivdHy5ecy8fZXOF9/fN49HU8IELvkHT7Iw2veZic2k6dqnpkf38cw1yKQv87qO5wavp/TaG97EcDjBqym5Jfl7lfl6B3/P0kf5nSsZ/TpTbtfjPRCw31A16zlSLhsCPs45OclQQIqp3+GeoiCTG13Mju8r4i7ij+nZnw28L/Lc7EHVj9sIefi+IfJH9TSBc3U/uzvSMX2xVLucC+xEFDzVoiaF3ds+7Uei0zZKfbHzXN+N7YBxXBzBx+bxIs0/18R7r3jRsRZILv+DpcXHscSzsR5iE6Dz8j2bjbziX3WX8fTF9wRvu6wi/T7uQHz84+/x+56hP+YP4809l+rpV4xsXHtKTPg/FhSifm3XMpJ9bpvahW8tusQydgJ6l40Cbn7eye7nzS7fouBO9EE80t8Qe39FpfmsTd1efZ5xTpIm81u2Czo1V/P0qLR6JIEkbWSVqFqT992WLNvs0DPyWnp7FmrTCmf+52cG88r7sRiQiz7jgvq/CXOg2cG48DZpS0omp8zH5Xq1wG6lK8R2/IQJd/o1czNurVgeGT33yktXi+7IxuQlxq84DnWzL/fbnznNZvBVxDNcghzKMfg3zOAW3G2k4MAtZ6s7vfRtJnZFm8vCjVDaJS7zmbxZqJe/SpE9SvXjDW/Hd/1pElpSue5fxvpGJKZz4nzGZlV9vMnaPs5H83YMaGun5gBZi5Kc9/VrVv5sTOjziQufMiPji7Z/xmg7d3CpdVU3w0rzc/c7PtMNMiSc8hXtE0nyg33v7arLQC7yHFK3nP/cQ/T6/EX/mU+Jyf54T1/1q16pruPbkn2s/7VPWeHI7vKrHY7WsX+8FXF/k/N/r0b5pLySePO52kD8X3nUO0fMupd9xuxOtWcXv1c7pxULb6s73hqTa6Ip69WS1p761gEhiPxIR66NT8x+akIbw8vZO/PGdpawb/MebyPWS+M71z+/eKszmZBgmvkmMQOmIPL6urR8zrQ5boZjC7V6/vnLEzaZdXRdq9Ka+tJYnP/kpOfKkvEPpoI1KlMH7vZK0vg3TOjTxSi5d/sYv3K+r8+zD7jxIv59XAqsx1N0BrLk6wYC+P5tQeUXlPDR9sa8uxU+ONvXznqwC40Vs2v1zbYE000kt97SgkOW/8/WpZtfwQqjwPgch/Rfp95PS2BSu45Dwd7aGaB8Cb2sbZ9F3/7KlNyNZPfWXaSEWzq9XpCJ3E+dpWIZCaPuX78SF
SUpX298TJ1B/5+n2JWdS2bjYpNWV3+je2pOluI+CPXhW7348S/r3MbK3tf2mMVn/RohOX5R2XGDBh8mslL7e7q3eONnniDj6b5T6OthaoJeuFkhGt7+RnstmQbKD7vREKPlfu6bJQ5C7Wsb7Vb/o4RCf/9iFxExKlKfufO/YaLoorUvvHcA4muKb82LhPxOnERc4sj/8MIkInocUetdElL4eEIlfnLmGIBTvC4kLfrOEKPZ2KZZSBtl1/P10pXw84qZxSBTAmN9xQ0QW5CCcp8SBYXznWJDQrnu6Qz9G4uF37jtb4iWRaEs/IVrmp+OrtJcbWY2oCbEmXeT6G+E2obPzhGy7ck42xP2NEfVTHIq9L0J4+NKfD7Yi/AaV1i6iwtr/+FFcxo4gz42LgBOl6R+aVzu7l2qMWNfIv9Egkj3YhHgxRAPJ9J+IMkk3g5c/NeTgliz+cBQy3LcLa5NNiZEaP75J9RkAOcmEYkKuv1WFNsX1HBaemYREE66brwV10j4TgUbo4oE+SPeX+QC2ROQ+W9/kyPPXhkHovMiDvJb+3jd/41VS5G1BlvZwFh9k/XeNOnF6pTD5bFoY7e8I3ywGxQRvGP76gZzjlSAXNLBvhXj5RRqNJBCDUO0UgbxL56fuFoptQCI21+wc6dKfj08ziIWU1kLBFc6fH/Y4S998wLXJ7TefFAABopCK/+RQiX7jJzaMmdxLQtj1FLDzi6oS0cDgRJf40Iwx5n4qHdR6GI/m6o4g8r+5Q7n3vtcJ/3ETcu5/8zLHGCSSHi56Wtk/cxcduS6Ib13s23/xDm7Aa+Q7g4f4UZryZ4StUXUQfr4BUfhAU7ERUl+NYcZeqCSk/csogKflG0JMLpfkGIU/I1HsCkZCCyuxbxGgJv3akBLdh3HZBP3jez8+lqmKUViIzyTU5fdvdlPjDVFI3vZ1A4gg/fhHUAEhfLHYgFnesvszwp4c7Rqu9QjNhLj1F0t1Ej4NgRIrUF9E22EhYz9uf8Z8OdT6m2ZC9TP7iS0uRV8iCwjCOwIvmxEtTlAbwE9WaAL0/l37SiDrCtXJpjGd/WRgIdEXKWhvHGX1KP7izHymQT2TycG4OOQHu3N5Dc6hvgbW9gLjW6Oog9AN1t2d31fON7/WlkJl5feksFLdmZTBjz1aA7ISqrr0TsJU3/xkfHOGrIQvyeoqMH/utwEOSmB+mmHfeNn8qerRe9EbcF8NsLG5/GSwWJo+3AP8EoemfP6Zny69PgbY6+buW1rV39wTyQxxFJDaAb9Ep9+5GAhfxCyVd7B+2tfF/hcHjYzFIGOPwwd5tsrrPyO92nteYh4i79OaO7wWnN0PJPgZhSCKFKPqFTivpd12a9U+/3jRqQBJ4PdudrGF43Xfa24jr7WvJ+g2kzi4tDTJdttKm9zLKvmpmlRYpxCmRLrKoMYol3DPs/3nFxiirYJniHSDgnabok6A1/rPNhjO5Y81oTyy65t6Pm+G1UyP3+H2azp3tAZxjYtn3ifH9+y3glIOqy18LwoX7+DD1/9e08kqds00qfTLNXVDYw43/1ZNsZB9jEmpuczv+9niXnKn8scvRQyVE/2zgZhGIcOtDkfzV2WYT2OO978bO+OibdYBbaoflFXkeYH3vwLZW26m5p+NwQQ3QEN4L3peZvZzNyw+vHKT5PWIAWDCDUYDlNGHIr0MufsbESlkATOf+s9HXEdTw752fflz3ZP93rMJpy9+fo5v5yz5jQllqq/QT6lyvS2j5L/eMwaHQZu3vj1wPF4Ksqh0v7YVs69tSb64gr/16h+fT8Z4oEU9v18jrs097fLLX80RKMjn6mxfihL+zRGw1gPEgR9+dV/UG0ua2edf5nhwKbOrr82sZZZWdWk6529mFa6AFYYUzu1eBpd0U/8TnyYNEQ0hRp5CEdY8TR4/VWWuW4DNcD85zXrjMwW1qRyz8FOhDN3NULCqHrh4GbN3vNdxx6+O5+nqbuWqOnpcJAkyASK1+nlWhOVj
8O/w4cmJXr3hQitV3wvwb+9tl/rpiJfkeK8SUPaCzl33whRlf7j4QV3jNaIuJTdbXuf+/bM5BffgfbW0bbcYdbF60gYrb+JYENZNP9l9I7O5nsGIblOGtWi6RGsvkmUfrPddDrRjV9+Xh8V8Em1MTs/HeGykrg59sanCkrq+rjRierMm8fIgCLz+TKUqLo+NML16n6X2E8MbCGhgy8rVJlnmG1ADTE1N0UhlFZWXxnVKyyyrcAXqn6Z3Eb/TE7hPLKZwl1jnQACQpBOadxRn+0Q7u66d97GOVoq4eG99uXzU4wt2CyFarK8lZECznfOn4RFcqvjE85/twdvAR/Zruvn16FtuC387lcWoDwyIqhtInE+9fj03RRXjiHo5QQvq98gW2X0kow4Pt/IzWoDo4vtBg/omhaoWw9WD4RaF5no2IpYmzibt7PozD4ipyIvjZD3LeP2VHPnpaTKL1uZPZKyXvaI+wFb4XXUTy5MDp6jp8g7qppSK10QvhaS0FImMLOK0iLZJU4T43anuJ7P9qdTvm8uXuU62g3N9WTD8316JapU4Gw2/kYZTtz68h1XbqXP706l3hNOwXX9RjcYws+vHjuICc0onR4jRxT7meL0VHHDbJBgeX5aMJCMrQ/cgjjrsUO5s1zkctqKHN6xJJTzzRp/gYK47+GO+EEcPbFzPdl1vc50+sZcDbznwv1d/n+AYb+3EsAU/2Qc36WxDCAuDvDtHe+tbw3K3Ml3VNcXhjjzGghHeFvvJXU40kJPp5xmfozESKRFme7HSYSrxWKNbI5qdPzoMOyqXhb+X0+U1ur/HaJLW9UWqKh4jJtTQ3gqZx35yPUCtETqdMSff3bxDo8mETGd4KtaKuotnB76p9JyfVvUXbxrJ5IegfolSjHbM+Luf9M3wvoJMDww/2BsnyvJZPGXilGEzSe+6vX6mb4afMj0dmvNn52Q4NDe+z8/SJM/xPkee/ypu0ec2ZiznyuTDbkTI42TUK/dlb0W73C/Itx9D9dbLBGF2YVaoRc+OtWDSSpqpbD9L83A5Mp+a5POK3unC240BdFqgcz7T52r6Qgx694iV074Y5j/26HXg1MencMfxeHwiTzLdHk55vjlX+n3FS0n2ZdbgUsRwsQ+Ti/iOg6pcYm3/jEh3LqL9iOMikTIfVIKoIZ9YhPRK7i97jH5qb0nz0cGWycdl7+eBUp/tdmO/y+qi2c1PlMCl9aWAs42Npi9W6eUq21/eLuV2fVlrV7wo9atqM5wqIq0qVcIrEi2QrorsZRgWhri1dahvyYmq6Tb2h29Pw+bWySpw2KydrE8404nucSrnmaZOEpYfhZrqNt7PEsLkjoqx/0gy9ljy61Ebljtq3EdMEoGqQZ4GWlk57eqFCK5hdZl7oxumz+k7wrJYfHQ6r2WZcdO2lNy5kc0UzD2Vd64iTJ9sb6AuDrtwq/14KBLXIXJHIi11+y16LVw08UdVQokMtg4lFWxJyX75Tnzv19ZJKXnx+9SbGFZyvSf3W/8xTfcz42n25Q5rWcX4FFfyJCGvbH74tZCZQ+rZ5UAv96QXEDVTUXKJu/1ax/ZkNQ3OUhHBrc0HAIms7qknhnbTLLw6+qqpXo3rMDsxS/vyQpTksrspRSXchqLv2djIXS4zzKWV3Wbikd7/6ikLLW+a/VRiSSjDUuI6XEl4LY+rY4PNZ2xGL/Y61oJBqYlZx76P2QU8fhNaDAm0u2HPyeGWTYpKX9jN/ssSDT7BWR49gnwPuCuEf9aTR1W8T29rO840NexJmbgBfEERggT0sC3YrOpVoXLzcQx2w8b/zMT9+sGTOgqmrzpSPt8rtAZGUqEIui2Q9SK5SKOqoiQB/y0l54OGWEAuoMgvgOHDaBR5ciSHJ9pbah/rPRg4IcdJG/vB5QEGHrNBpuAh0xbpSuwHC/g9xVAF2OuVBhCsCeaC7dgMIkX/HCK60eYansNGPEq2HV5nk3x51D42PX0rg1zpN3LJ0KzTlXdYTA+r
zyuHf60csNnZuPdka2djbVUG/Uq0WS3boBfLrU0cvdnHycXR1prejXVeI2WGeO6IrbNTVZt4Rc49PqY6akJXFjNyiCBQdWDUy37szSQXFsPFLY16nTMzsh4/YdtYtsSN+N7WORgCLCeOlbsnAWOuHL2d6Jmm0dRuRkSSBG1m15yYgJUp6QDIfJdvXyA0hm8ETgPj6is/frQCB/zAt2aeCNfz184O4IQfF1DiyLaT688JKqS16cw81XHAs/uY12KxPeg2/k0BSAspBgUmHq8z7Ogsj/KI5yZx1gAYJwmbeRDfi+L4Wmmv42J6zAwii8HHOJPVt9fhGztIUnrAit7QUJR0/FLlZN/uBo361Ecl6tsTqEOmc1K/uXeR6ww0hZH7HBHUWycUA1hGZM0lhhERRAm9ifsYK5QOfo3Fb29JqRn2FbRN7F4rpK+vrC6LkvMao3jJKhuQJLsXhZL1hEXCCedPYTS6U4hEVHGNpV7NdgIDlFXo3EgXFIjskrriz4TTMz6L4568xWOH2uxV6kV8KrXPer443QssiOK+DMGcDH3s31WWQt6VkgmBB1HSkGkdAjMuNuMYrkLxgUwzYAyFBRWZqM2FWd2tsuhWBHqG45He8iSGWDHZfH31iA1EVq0LybwzNPOlaN1iH0HyXQMYu3OHa5LC2sK8rO8KhBSoV6whRmhfyZl860FrQOl66vtOLWh/BdEDuDF2ehT8vOpXNbzmxGECQz6w3ohIim8NkXSxE8MYPHYUYUJCwMZAfVkbFWDzrXfESRZv1BqsAl1voFDSCuoOFdXNYnanqcH1RJCyFzOtXhol4KptLWr3PTDvG2jZeR25NCbbOrRpIX3GLjvN1qzWmxAggCfutytoDMaVWKGD8ZSWmo1Meuw80cm2GOs64Nd6/+8o7WaMeeVkjJYKCIgfAsWXZW0o0Ru5k4j/RGRtbzPRhwuwdSWpx/IJ/rNiH67YOzV204KROWmBBbhDohhs58VrHOV0iaOkxMGGMvxt1a9SUv/5jhTczOytY8XLyAIUFRUQy4iRMvZciFjvJR2Hz4mrjwG+0UbfFOty7AfwOjarxdX/M1qFMcLMBkJPLHITEMlCMJfJDls3WDgwFyPRofQOnMcADfxdI9NJWUoY4TRiHLMAp8k3hsS6VV9s0RGvQGRfm8+OiKVlQx00jVGHP41eAA8fkDouK+BZhRT5LFrE7NMiJgBcmUL9gaGl39UW6eQwqbIZ2BgeYx0asyymU7CKNNkGQEZgTD9dYpZtRIpBwVJbxupCfIlbsx4bXd6BJgd68UH7HLM39vyW4TiKHNgk6b1Nxvd0zkqgUdRoqcKexw4B1xB/79tPGRX6RMCVC7HIGH5KxV79+s+SNvwq3wWyb/czaWbkGcslXwUG4lPfegCZoQ27/IetUYhg0m8uATxUWyKZxnkm9yJxGDJCLu3wugH4Qo18A4LHEL2RYdxqrsCCiz6xSSMOzBOmXgAXlebCvlEzMxpRqyeFxlb/6BkEDwADYE5DPmg5UOisP1roIZtbTzXAKJgjtmRxfO/3d3wSyX2+qDFTTJqzeiWVhourmfKzJz7aVZrrD1QhYg7KBbNtwe5BaYcdlfUT7OVSs6sf5LnaQ3jrmJkYoC5IITpWghYYFsgxbd5g6GZpJOYnSHjiXVbH3SAHtj9Is9vhLOyRdUpydquG96N/9a9bT30pfguR0+oZWBGQnHUqJf+23wxD+atVqTn+fdzvZy12UxZr+CP1XsZJYmN4YtyQmqEJ5eL11UX1ut3t8LPCItRcFcSmcl8eD4/LuutLKKtQDa2dcyAvnWU1uh5IL/3wl48DurPYG3K71E/94NtkW+7iQgu3cA1jSDH7emNcbd3A0KjsurZ9WK2aG3DWEX2hBJKxAvD5+Z3urc2yD0wpag9poAjghKGrX8Q+JBgaY5dXF8vHTuebj3+hgwCBvyQHgOtC28pjn2kN+U5JBLxuFjWbg7wWhAJGIkHthfs8GO6JPp9sQ+Py
2ri6sn52KM4se8IRm07hjv7zE8Er/AA8BKOVRZlkPI2Owhy9h1cSSacZpS/Ov0H0ysRjFV1517eqkjbxdC6GPZnrSgHKCSiITBolZ5EcXkANkfoe7axCXmtv7I9i5ScqCN5/swzLEmalua3b4dLfLmsaCH6nFeJy1jWqEwD/gejzLByb2D3ocK1l6ZResSewTy43shXNGLRX551nLNBZxwFJDfHE9Trw9A7+vU6U09ImjJUAOcH+S4aZWJjpKymHYRN7219D24vzs4hvtQRsU6cdq7ZSF2WzHkNQHDYC1U5lcSQT399f5TpG5J3HbEUUAEeXXed1WHyO9gPqgxpjRIn+BdcvTfR0CkpHV7jyMbuddwXJQzP3J1uWayZha+eMyQvS9hKMIyokXXeuJTe7Z1yMHNf1WSVo9cE4T12hLZTHGoYVWFVJkkC6O/KVfc7BBrlMIoniyuN6AvyvTc5bouhDguzf5XzEOEVeA4MTj+c6Tyld+Noq4ydYeQFDHt30ckTuYq/s+hmcSLFtuHyeXCoeSPSLV3H3TbNarT4am6nELdxXZ7718lRrOuaIye2cGPm+zPs9qc4nvl97ouIjaLTz6ex4SZkF9yxmaRYYAAXqbHGc9QPXW7wt5GupZrHZMwbogjo4T9uY6Eqs1K05CKK0KLexaK+yN9zJHtdcC8pnH18kC65xPdGQtbs0Y763x7WKggI8b/l+47uFZZ7Fqj/tPLfczYcO0pl1hq5sRdIaCkTrdxdOhwn5WMfAvltaXr/GHuop+Ltmoz2fyaiNOMjuo3Zy12fdaeNRmVFkPpTkwB2OoUnJnW5JsWqqD64HiYWC/NekMwPX3BRAiKovAzpfIZZO5ozn4G4IxAe1NNuhXgptoxJxYym7k/x0DfpcovLBSBpXTVujwYqCHjtOZ6dbE2y614Uz2WgME+quSNobDByw95snjaK53SYH2s866NauQ7RkzKV+JkArZVB5Z2dxqJN4lTaFs642qizuVvYnuGJn5Hi84oCbVi9VOba75dnD7o5ORwbitP68MbSTeD84qIDgxxs9n4iSo/ebCGxQadMTlpbPPrxfproboFHB/2wNWz6JyPF6ulnvjkrjHEVBfxR+Uox9I4zoC1+a2B2E75yjK2BQgoRiWRXp3vxgcwXYBN75uwroZp+tJN+A5vuSexGhJojVomwyrL5EoFidaWGz/hu5Y1fg8RFO/cfCbvUznUH1BPs89RnwJRongADYJnyH09A+5TeW2ah8sFduzQYntre1DNCg3T6bAzl9cNxWyXoPbnl0GcO594p6xYI1wRJX5t0DN7AtzC/NllXGx8c6ZXBDGlndzgGc8k4nM5vHy9dhrJIfrtRfC76brpSTisoFGQ9EmVrG7XBGtby7ASt/RlJ1SS8NaGAB55LpAXI/wGwB18bw1tbkcM82nwdg3wQ1kyahRvaWt58819kOtEhSQ+Ml5mLgg2bKbkLT3+KFH8RrXcG7i4jkSigiFhD3tPucafV+BHJZs55BMdm8nhTe6kDjxfSxwwi9PNe2PZlPt1ywfM3NYXg68roJGtalVDOVwq2KM1tTbXj+ncmJ9jrc461VMyux5eKCDIAG2HdoM6eqJVzsJqqYoNXlE7321oZ1tJxtwX9uoJLQK4xHgsacYYaAIr8VEzdzogeVT1MWuKj5QFPxH6ewhoz3cVXN2q/i/nFtXUd0MtarPuIOOZVUrD8BVCqZ9NPgdA+Mi46MHMyIHD3WFQN5j2AoRPgMofyuWS1ttA3GnyCojcrbkSwWol9UVXVoR9XgTOBvq6lZHBkLZwjOAWm4rgR1BxXFE7CByp8mo5cM5qXc3p5FuPZS7zp1GwV71tflIJwk31KyT6NydQQj1wGp8oA+cLIdFhOH2OMsJOUhX/NbazqiOyrG2+KYScB+sTfsWytmDf5KVjG2ePaLIA68t2lwzScYSi9gbEi17buen97ee33ELpZJgF8y5LUGel0d9EEPTdYI5FHgROLYe7nr
CSnmtaE4451t3K8gY4PCvCe49uKBNWrRTEckNLZQMmbKh/UW+Ftr2SjPXgPUZ+zLBqQDjTfLsJd1gVGLxOvKQx9cZBX5v43c7kfpiP0pLCVTODO/yuhXaaxr4N+Oe+/PNt08AlJM8f2VM3t7YJEnCjZlgPgC/bj5eK39euTNU9cERSqXSSPMFBbLXXVQuaXdJ9rM2JWg7iTJuoHeJ5aoucBxvDhuxpiRIvQidpK0jWTSqKsboIOcrD6CNi0Nhc1MwSar5ts7ZqX+BGqDbwo28s13ZLn8Tom1eW9xN0dFdgnn645y+zxXXCm8HtlaG5zzTWfLyusooBsVl4pS1T5jfhSup5paE5/u/eudPS5STNVn0tCHbOJwZs9kBhT6w1R9V1pbQB0ZNQfqHpfgao7xaBg2GqA8WiMY45+863BLVhzGmUa2KPfpBrynAUADp+6eypwC+ipbXhalDDOI9ahoYDDNmQzGtZe2h0bgdP4T/75f++cPUWafDcy1OimNL92qB1VOOvJ05aR1RPVriXV3hKIqJLAHf15bbPQaelqcw9yzsWMlTWtANr3w9+fvHOqvDpKptbM54uoZ7hEUPsfIjrou35+04Xgwcc9VljTxO0wxd4OJJ3sENdHxPFvoEHBMTYI2B9qpltoaV69guHfsTWB3wFxcW0BfL3RAhnEhy1uqlGB5Y8wccsNdNccEErM0WQRzujudkiWoy0ggc2Lj2gcn3QZabWfTrRAs1/ZdQFt4io5x46svda2dv15dXaSU7pGRvEsPZAzdANwV763tx2+G6Cmi6/NJW8zMs6EL5HMqf2OQMUzqYFuCt/MN4FjrnPIrZAhXaPw4PnGOm8LkM3I3mbYdfLZlLVCx8La600jtk9V2qdaBFWeopwGHXIgL39NADnkPzCxff9i1b45Zi3vb6EAvJBuvSp54VSepf60SWS2zClbwBMYuPlPA1fOnkGYP/vkQKkH2N/6j0lLQXenIzpxs9NUh6reRo4uXYZwzwOOk2sR8ULFYshvsPjyTnPEAPUx6ulJPrX7esiqSBKCozXxwfJd0EZApTVSeYVH5LaEr+fuZL+sSix3qbl3AZlWvPLKPILXTyZsrX+fVPTKXwwR3b8Ao1mpkn4QRS+2uvJN0GHsMYqlArhXajNUi1mMVxM0O+6l1e0+JOkA9D4udyfap6LgTOtB9tiZmYsXAPWc7CPaJ3kAQ4r4CEP+53WL9mD3ThvbiqwSduPyreFKxBlk310W2J8I3z8h6L2oK71P79nMN57sew7oGG9Fr43PEcp72V74hag3B0ogx1CFpbn9jhqgAYIg8nms4KVByFvPiBff1NPKE1cvV6YbEqSpslBjp7oI70eII3CD5MgA8cRp0cwMlHnPdlScPePvN8vyGeXUWZVY5dMFPOap+78HjXpRCvIRclMoZqyyp7qzJCeuhF7qUXMgOf99+eE0d1CfmZm+sIcoQFi7YOY0J2+c3C5yGKM7zwT9kXIEqeu2RzCUpK1lsW1yqr2YnkMq4oYP2z3RJDcSUQr1gE+6NXbSn00WBfoO6IZIpoFMs5iwaI+M5c0VHbAA68wg9NrLmTWXckJ3z4w5AU0fU0qVjGe4Kusk+UCKXBhBgDhuoCsg49IQFjIATR8SzGsMhYnPZYneCHuB7TcUUidhcw0IqHLAGXH9u8/tFkRPx/KSnCjXvDDTjrtA0B5eJCFlUh8UutUq6PLa2txnqqjrvWF/KjYViIclefZmFSuJP0schFYdnBaV7tmowVNywPiD67UD9018mqnJBuhKjFjishS2OGTaEPldvZMAzYXE4VvvVy3AEqFgOVHgpDm7Am6XV4uUmJeZsgd3ZtDrsQKHXxZ3fXaezqOsDIbavGM+qY9fEkpzlInKYZaQoTHnx4sSamorXaOt5vGdnezmdvTCOJov6AFpCYxxMregWxitoPb2FWbSXj/p6MFZBPOd3ZBmVS4ArLzTZarQ8z8g+PAPfVwYvT8Syoqo6sVlcqFEi
rG4/cSFCXPDi/QUjmNs3X1jtds/POhfudos8GHJFIo4BVMzG9RUDZJp5udahuzloast2qPVSkc9nLa4yAERhjxKjJQrMgeZ1pqgt7lDHlaHXt9bFZCbFm33k405yQIc8LlAzup/b/f7eKZu4n45dLvsFMSY2n6nLyQctxddmgHpnO9wUWdrwilQpR2AlIvcufJPz0RsuF2PFlt9n7N5Lna1cItx3sudp0L4snZ82Ve8HuNr+gdpyJitV+WIqJSngTXiL5hytNlrdLu+n4BWwHS0kogkqo/TpzGkhceuFjzX6uV+cj77Hv+HilXJ2ncdGUDR9tdyShCB6NVqIqFbtoXQH2vt40em2cjR1bcH3NdG14yZnXfllV91BVoJjE/lQS2Uyk6q4enDvM8oj7O5KmYj6UUoQG6keph9qlhHY1zv4Ar8bwLcz12yhWpmIH6C9WFZvbD8N7Wa1/xhaa6kunpYonP3I73GzigE3uKWQ609UwxLw1OsQZq1Ecas/3CkVcF+rIZkZ6C8hb7CDcj3YrxXx9DbLZ+cyvaf8tHad2dxweioMvm9m0magZzJj8/Y+AqGP+YqenoGOFVmqK1PrgCsmO14/XD7RfKO6hR0Y7c5+sd1Fa5BWuMNqkn81nU4eA23zVNUOgVkG00ci6JxFSE72G6D890WJmRIFHdgVuyjNBEgsbSrdT5NNaTpvI5vi/gwkcYXkiw/GZhelFKUcb7J7X9zX5jV4TCqEVEE69pAnHPJmZQtXiKaKCiJmKHix8i7Pa7SESPeg9Pr6Xd5m2LRaI1qDrogbznzQoamaKN6jZ4g3icuPXVLAbVy7pXXTcFAZsE4pr2cAqQiqferWyz5IX2Hl5jPBeAjENNxOvUv6yn+T12NgjGK5lrm7I8T7sGf3s4JHxW+xR4JbW0Jx1inme+rrNjsgATmhjBWll+beOXEN3MMB9UOZb3eHRK879QoMELsd9pbDvdFY7qBenlker0AMADWRhLCt6H0+BfRdE7bq3RoXsTimM0A28aznYC633G1FW1q+05cYbuL4fT2UGl2RnhYmdj1kekBAKbz5crabEav30yUUx6t72fTAGl9E7a/ze8qROe4MWEy382npVYMQGv0LGJSqg17SSVr5WvuNnRpi52k4NA7mFZ3sExdrdkVkH5ArZJ3SA1PvVuB1mlq2yI+aAkx8ouKTdcm0DLx6krFyAHSt4HeNVZEJRJBEabcQw52OR2zu+tJQxPI4x36Hys+rCjfbXheTxXvC7T953D8vnp4a671aEMchgVBIE1OSTRqL9sF3Vjq3rGdLU31wpiGv11hdXosITyu27mN69y7bwQj5WS6seYhPwkE1XjzF5fsAwvgZyC/Izu/KiH+pFmmpkZICs8tVIKM+UAzCzoz55jxnO4WrhNeRBgIK0k7SxEMCd3C2hWMdVXUAydOiF8MWIPMR43qYECAmKWK9n5+PupiJ1SECdlQFtiMIg3DY+y5gQ0cOGEvP8H6oBm1Yfc6qTlerhyMzFWEefZrmOV8my/Nuyb3Sz+l1P81PtIvwaLNy4Lj3hYSxn96lN9vfnS1UxF7/VCaVBFUIe35++nD32xH7CNmSR4MrSzSuEQlE+vrgqkzDcyfaFNNCJPxlYjI1Dtez5ANcT/Rvkxu4SAaU0Lm7CA6/9ddpQU0B0Glqr9jeXTeorR8mb278TdoX3LK0ogJn2lPfu7Je3UWV472cx4i7a5Sb+zkJFpeXXB9gBH03UZf2OjwQIylAFT6zVSMXRpxXywPutvA+vvKu9um+CFiW0MVumNOcf6POO1/7zTPtT5xd7iPi1Tm2K99B43ubXinEnHmm4fJy7FLY9F02BX+VMmz3OE/+IHriPPAJahpE+wtT5tL2PJnc9ZxADEXO2vUyb+rt+AqK5ZJcwlby7YathIz4Jvavlq+kQ+K+xTAqrMXhEL5UauH7oHfgrjOWt250rj4R1IfIT8+jNHqRjRLzrTO5jGwRpIrslhi/8PaOt5bB
suVMMCnxWlvvp6fr0OBqH+g5nlSYSzbSkGoZFMF1gN8KUk9vjcCHPfbBnYN23Ux3xm6+fpmTdyfJ24QLhcICWV9MGmzuRg6w+fLD+giRffnEZYddTWdo75uT1ChIjUFyRABxdL83Wz+tM6eQAYW2efe8C2dfJ5nxSIB1XMO8OfG6NJnS2yvwyl4R6GWbLVH7LyXQJIw/GdlKU++64zQVT0jZ3TOMjSkxyh3rSnj6FbVF7e7rre3gaSDXPWar3j1Hpn4PTTfQmZ3UiKH3TopvPIQRZ7ktyT8l/7BvXklU32Ar1vyYw5/p/jB4LvZNxDPw6tp5zrh3fR3os9dAf900/fZpCh8PN3yahRZzwfJ4IpYtR8f9MWDRfP0s3IXVPJeJd16koMhjWY0FFNb6fd8DidphbRe5dDLcxg6uyM32D1JMnrJ6p+kw7gZg3RF/6TT60oRaIhbWbvlI0XX9tVQhz8T1gXhbzLNs6mEVB1t03MUx+n3k4hpBtLDunhCcwX70nU2eS2B3UiKI/NZe4sUPzi4ugz2gkRM244rtlq24xYYTk/VOv0/1dnj10mbr7L9WV0rWp0t1JS7ebOnuLN4+r8l8tvPNXusZNvTGbbTEa7HIvyh1xL6neZtgQ/zMSrrBOvSgS1fYY8auSYILvaBReXm+JAkuSfuq1kS3HjKa7VNYy1hlgH6gHaA0gphYWz192LP5qKba4YdhoY70kfv6dC4UfBpvttXuuwtLuaNVLWC61Jc6WYUYJyIelyMcG79vT+cpZHyJXXE6lFUGFvHpU18JeNTTxcy/O5e7cC2Ww1PpccX6KUWNGA3n+bkE0Vc4b90JmhosL0zNsZrjJqd15ptX7/btH2AUBMhw3cDtHLlByW63C0UdtK0j0NnoJ1EIO/YZEz5T4GcO98Vq+3xw4pV4rL9rSyvMGV9dYPezv18e3k0oprVhQx5s+gl2FlSEMsQXneYwoyQGZaVX00KaK1d2wmRr5/FPr5wvtW2qn3zsQAnFZBtfouq78wZXGfV9ADPC+ingxurl6uE+54O8WtNOUVGIExeyjqM5ZXvKpNkcuwlivsbuHnJBXEZudSBeyxFTiFhZE33sKNjiYQieW5iJxD8/uwOQuf5hF8bF22RlNXI77CKwVRX5vk75AESU5t/qF9g55KWk3OP5LrjXHBVdIW5B+TcXHjyPuD8DwWvf9nu1tcPVuNNLA5raS+QKHF4Bhko0yenSPDzaJ28+6ycoZhvxmLKmlpM2FSBDoQGxwbrZMabi5A/ndgcKvxpcigHhdb2c6hvQBlQ4Kbip4iS+FNkzeLHGHTDYOKkWyncNR7TlySMaTDzZdg/2POmKcbf7R42+NVGnQ9zwpihFZAviuOkHyEph3JNliRy3cPywRT97snqCygEcZdJ+I0/713Lkjgthw8j3gXqBim1IJLrdPWIIu9UX7MQY0fRqgjuXXtPpo110crUycNcXDZwW+IuZ4RpXd5VnyeZ8v0Y2rt9hf8hE7OFARShyDRe2bvtwfuBkyvVVPp7zuchz5A4QBKxRWwRgSqtTD2xno459Rsm3IJJi4C/AKcWXxhrZ53nB1iQJbd6bkpPXWgZWSXE1phBN5o3myk4NmZSHoK/6B9jTgbxRZL8dGrKU62W2mQFKdKxvS6TyVUElu8oFLRTc1wR1CeL7PZk7ztsyRXprs/LqjoYTo7TzksFThMr2TkuomWUv2fGfLbGTWJYkAnzcX7PHtDk1+ZSvQsYuUBPNsWLjEdEq4wANjUp7HvBxF4+Ed98HWS6r8ZzBpdylwMDh49HbePikUHf5oWovD+1USdGcnevyaYv9M2xeVhFeTTkd30dNxS4hX7EzaYU2h9fFMtyKjuRD5r8h/JZZmqFkROR9/cRfJiKWP0Eu5NnleDPxiQdvEENwiTevqMtxo6dJ7V+rQnXFvQqZmWdCw83Cduw74Wmr4x70a+Sb7DOGHKDlIyvZC8JEz937Ho/jmLaoj6dWeTW8j50xQPft
4+y3BYWyNlPqlexZbF9S9mxPr4hNKcX2fS+99XyvFwr2TG794vpands8HGclERfQhcgnrzPUGH0xue67eRJYS1BybM+8TR6Ilmd9FutvFXfg9Ln6XGaH2gPtSNjefeyuWMig9p/V2Jl6vUNc+lPGY9yuMFRS7rJ9qHJ1O0iDOriZdizEqBWWk7A4TwmtwmA6+2GgnRHz5xNpwWpT11g3mtoep+7SWI07YReVkk2asn8AZ8oGp+vkN67K2Nn66qvW7JQ9BeeilzAQAegd24+riR23iM4Gr5NYmZVVELEdAutlLFbp+cPdXdZZDQ93q2E7vjzVEmpug/nVqxZTiuKqj9Oe3zpYUZeHTcV2/hMXO/pStTkiV2ye5WI4zwKts+Q1etO319zynVi11c71fDysfDvRBzBYfDIBVFT4jPNY06DdVSb/dI2M7Yyfjtgwlbd3IGjgZ3UyXy0GQ7zP2c5uoi+Bq0lkDygjDoDDMt2vi4Ybzv3qMuin6Xg+AWkH8IBeCsoqjgBFQB9X9cGZ1Xt8hgR2ALXagwoV4I6AEpDIYiOy+FSUQnXrjCcL9xXwxtK8IwE6KacqPyNDOXnSIXcuNeXZqUajaqfeXce6uX3tH7MlHuA78thqD8QJdsr90+qzs9FMU0czhEulSQfTvmIVbZX7cXHYL+9v9hSFADtcx7eUEvHkaasZv0uOwjq7Vbiep3u6BxTg8+i6046dC5N8dTbTcXi+/JLVyh1aiI3ZTjskdOrexrN/O2QbqaHEcZ/gkdjmxJGqO+BRk88lZsvehTFriCBuIacPGLuF1c/2M33a4moV5MsZCm1BIWkC3GwpEo+cuXcV8Hnkst0FQLLiElDyBDnjHk54iOUwn1svNcRHMWTrA+4ZPA+7GvOjeT/2VbljZ25ULsF1c76KiWmsYqGfwQglveDESsu+HnePBw3XkxTS3rPUAYIOebn4GLfVK7DCNzaID1l7yNc6LqrBT2Q+h3GtnZp6ngrFhu2avnIuSb8z8i3sOpkRO186AXudtIezyzl/P57ByjH7L3J6txMF17jiy1mUjPifkWuoGUD1d5pKZUTq8rh8p/fP/swOu6UoMAUpA2R9YmQx3vsMynlXR6KH+pEAG3g4bA1fAz7gHu+xppNHNa5GuOSBTKMHc7t0+NwA4tv7ODsrP+Eq594fP8O4lIatZuADl/tizONV2E3VTwJ5LELMgraYQy5Z+TCtfTySEdzxvlWeX1YRA+7W8YDohwHwXcjNu40cIqYtpeUkfXbNuDuCPecmJvsHnlXdPi48sOWrmeSxIAAN7gnobQFFXpVOnk8lYyc9jEB7wPDqOz6NgCogYeKhG23U6g4+owjYHp47u4I+Jlp1fZHzcv9sZxd8hAnbfx+YeIUzXEHPcXuzrFr2i61OwtU7y2nhFShQuL5Wx/0EYk18ZA9B5m7l/FlqGYtwq7exG52C/6es3b6VXSg6FEhFTq73ZOvMBgtjuZDe+eHNvSYQ8pv7VFUx5xXIbvOEKbO0cPII+DDE8RTIRd2N9UmgXd30KCEJbw8zeV5JvGbfZOzeiuRVL7m6uueVRgSt41AhJfidvRPkHdq7xX4gtoCvuc92GL2NVUNis60BanFWfOYI8qRK/sm7Gs9MCLfbJUYP++4ZnCfG9zxyZrw+TjzH8xQu40CNHLp1L06zUsDWI1lW9pZ1/wXpvcIDm2q+maAz5KvJn9f3eBNJhaHrji7OQmmyk0vtwC4qRb/W7FIUhjDplhhkMeMm5LGp8vFT7+wgcM9JQZv1ajucLgC7Sm841gmrpWDl0+Flvw8Nq/IzVRutOKdPPotTZCPZ012KCfmMWSv6F+BQjchvhtMNO/Bv1Hzrf3HIs8M9aOVmfxvtIXcwXyAQl1eeieOcL6WXAobvN3tVjSDPFOKnvvzM1r3MdhagHphfAqcTtQgUkfO53UmILZa/2csAkoXURmd8Zge18Ok1OzXhcV/ePkz3Lp17WVhkLzUWC6N2guAe9FAs
zxc/AOK1ISox8KpTpKztvEnYrjTgC59KGnsmFzMUWHxmYgb0+3y7IoVcHHDjz7o6/GWX9Kmn6uPJAbHG8tK6DvJh+0MPgUvysB6xyr7H7kzG5dGawweuzbnrJhfK7LZbqKNX5AxCVHrboJ/WTcDWiokZY6U9d8sTs2VvTjKnl5viixi1DsH5pschi5ivbiejgXx8zJ+tmo1PPHhiT2tmHppxLoTHWfPq455nLju/SIyT+sBzLROjlFKMUxs13SNbj75aVNkYn720jc5vPBGxmnEccaUdv5Nb+cDuQ10JtaMK+Kjeb/oj7dtCBZVgb1fYI8T2bsi5j4U6ZzFiZbMVf0vhvn3nAkWD+/rWE+amFdFmrIeClFQGbyAL+ryrbosI/rA3c0BHiyGs+0r2EhB9CiOUTkLoZ7hb5UHEvpvwJwm1JsW+nnmxf6uKQ94cqvfOtJmXtM1nWqxBQk3+qgpIlMK6XJTJHTiL1LKH44jhWMlZ729CDlvWHYW60J0AK8kjOc2tciO+WMePLIbvDi7e7HCxdPHpnvfAi5r8zlDkLM1QqjxsaURQrU9/KvDtznbHFaE93/eSNVcWAHV0G5vLcDPNEWPZkySmyBmfEYRpSJ4lX0ltkZAtPm0GaM3EBe+/mXbgEndUPYLivVkEsCBF/wnafpyTQ+Zok6e5ZnGoK0t83kz8WQUfE8854GfcBjSmJJiX+o74/8bK9TEKsZe+FfEhlxw5khsUc9Bmkn5iK6XmX90yyRYtRy3MeLuwgL1h7zqe3/eflp1D8usfnZMHL36DHlT9Ez4zwe7OyLVu+ePuXOWl9nR6V6tWIuhx9Tzq7sJrH0rNPLad7XvjYj7w+QSH9oisXop+awD8vK/h03lPoPLRFPffIe7zpeYD8kxG1ikWSW1fcc3HF/mtF9tib70x84jeDk/GAHDTe0CfayRak8NrDsI75E1vALLKNOyAxz2k3c/1HCy0Umox1IzpzMXnqjhD95upOrasUrIt+y2LLCpU+M1LJf4hjYh7CsXL9h2ybJjasxk+qG3/ZWmR9dNX0877Y8IQwzZ4ApioD/k/3vT0X5Q7JHee1Q3lA1X/0q+Pq5Jrl4av3rYOOzNeBc/88IHaWwQLT8FKXGgyeyJRojg7Rxj53uxHHy4rajEfe8V+yTQwr/I/yC9Cpp2hFH2cQxErjRqx6wQ4fk1yks418PCwHJ+WHp1P8CjMNaA3MTxN0Ko38p71U1AmgvMPRhZagC0mV2samMiNpPPhiqfVk2k0ninEOxrhs6rAaGfSsSYBrkTsf2NTpQ+ITaHzVtfmOXbILosHQ8NIel1D3E9Tbf/8YNb2WE94UDQg5sEmjRLjXIhjjAxGF48S9iYnRdy/Zt9rUe8axlK2Gh+miY1cAKiH+mmfia8vWSdJstzXYS9CQvZKXw8DREsKtQrXGywjnrKnFhAtfufrgL7Ia71nHmlU5BS4R0AbEcWsDacRZ1PNV2lKIvZ975Z8v7/By7fA0E+XCOjf+N2HOv2tg1Ck4Pt0sYt7ZtMPttIJZNvmz5MGQFmhPB0W0WuqYV0BitiNnIfGb/7b+/FmE+cBYFuQGT7bjEji7lcpaF76L4Z7/pI4uLwUBfGkx4K/Kncs1tvw8iLdujmutJr/1CTfXBm/ieXrdXc9mDBb3WfYQ9fsLoaU8k7kCKNVp6r0DHTwMvaup8cbP66iIOX9yGjBWJ0yK5wtd+D96TViuFzKKXHZqtWKZVOHlwYdWiajl2N1Xkrbap/PkfKFLFJaexoB+oJKY5kP9Zcn9YI9rYa7k33p4faxwvhWOdwUQVbd28vMFhwqIDITVZr/eUL5qhGnXV4mDckHi62XGAen2wrCBU+c2OS3j8bbfqKokChwncsDyc3EMzozkvKrfKdMc0M6h0IzabShlA6kZB1Me3j9qiL4uS4T96YXk0Zin7HGUdH823cQTlvglXhez2S5RgJERWv3X758AVriQ2WkNT7a004N9JdO3iMiSoUK6u8i
H4BP8MMUO992M8l3G+nJ9g+TmvPTCJ/4ecF9XXCPV/anD1IIKNYDV8U5FkLMZiixa4+r84HefBWrC1g5/CKjLa3HHkKED+CDqhcj37jOFQ67XC8/KcKL592iNduDQ5Rbb73Ibh+xMy/NhT33SMK1PeazuT7J11R59BbItAvxmQU9sWnT+x7X6W7m7X3ExjahRiArEYABdrjZ0wwT1CS480K9VO3wGiuwhZvBFXyaXmH3rEYVCmQI3mtS7tJC2xrOGnjmjTHwWb1rnkZkLtnzqKgX0BafAqqOTx7TxfVoYdD43wwssB2+JZfv9/mxzyT5Va1y8lZ5oN7Z94vD0btOV2uvD6N1S2gv7lbl8DrKvdi8d5QXZkFPhEqorG6pQWhG5uKU4jmlILhN56Z5HDlOebTUinxYxOji9o+1iNFfP/t2cjoI7/HEsnvIX+XsbeEuT5M8Acbn23wTXy+FhCv+uKO+d6dq33D6qEHIRZ5MySLzUxvVGlpTmoWf+UbGZ10S3IeXt2QO72e6xp7dRtxn/yxw9zbxgKdnxipF6GBPftle/qrYIXAevVQO9M5/WEz+F0MRo3j9SUtKn2x37Zlbp6CHtsjmy8HLcmKVzsi7pMsPL4GwB5yMBXzyG1H+QT4cVHjeCc19kup8XMwb9ows4txOTHniqbUR5aOwHegm6dnzx3TV+sUGHR1XSpum4lsqkC/e/GA03x2TPRRUz96fMk86YHDshGKCz8wr3vqMjcgWV19c0K+TUpo0Z97UrXFdwYj71WYAxc263QXN2W6EFzLaav98IqXvzzmItLM4YXsXiPr8MCsb4vW+LSq/O1E8MZrYG+QpKYfqNl56nEM8o2azxIVx5Qe7EuBiZ2NgHi3ot9rZFQBW3jQFDOyRFHk2/3Zg2hvkDwv5wt06ANjBHlTq3U+770F4/77QNtI3j3hQnI0YlFKwr5CxI2X9KnCmznFbiC0+x3lo7/RQChe/Y08i7J2VrLowzyWVGMJDPZ5/e1idPJn4+kJWne/z2HrXrbtaGHHEjQelxv30eXzlAQjRh8D9VhDUR7JUiL5XN9VhVL1GtQOOo9+tPS4AwjQqyg4MQ7np3NtGXHy2cTjAW/YgNQdpNWuwow3K8/6CiIbC9Or7HZ7WAX6Lbc6/GqQDICe8nq6n6dxbLXL+211ojVXSXGV2/o28a4MfM6kuYbglX5c+PusS7Xia1924uCDiSe/J1rjFDe6ZYedbquOf3sCe8aRXLEW9feO1sOIZq1ESOcI4P7h+FuCjPLDKSN6fgnTF9RgzFcW+8mnnHSiRTA7rZr943TmIqpT1jXVJk8xRa07l9B6txycjwh12X69M3eEolZ809tshIG7OEquwNmNEWZU3rmHur2zlhvEIiKjb87dThU9iCNOYwNfPKeP9YIe306VzYYP9JqgFcx1bKJ0h7senf8moUVpvUveTLOwPwpQIB3GJFLYF7BeC+Xuwn9+RODfbS7K1nrOnWQBEdMVFOVXUH/VJI25G7m2nrG+RZq6DSVfPu/NrIolckL4/8iGwiaDgo919gG4K3JM/D/kUNAleowsuKlufvjQlfysNttIK/psv/6ro9rp75onRNNNZWfnZ9zGf0nPGj3OcflnAvPS2T+OLpKzd8Vd/eDwCyEXYUx6Gwh9jQqf8Nmfde3+fB3anqeoYS8yyyV9HMIXq1uqrGFn/0vje37StHztT310pqpe0rvONJyKWexZrsghV83VJ34sty3f11R35arLsY9JbZn09Pw3c3oN7RlT9zvbZP1Pc+spfpQMLP5M6jM/odPsTT/CJN2sd4g9E2fycHv67QzztMtx9mVz11zYIHrO23G+xI8j8atZ31mWwor07U97Wo8PnC/GfXb6+r7Rop6nF18hU4H+5jhE2F0i6VeBvcH/Y2C2S1sMlno7YTE3j0ensQP7A9DAjCOsRN9Tmo9STviuHSjqM8QWY6fzpXVo8azm+Pqz9hH/IgRbk+OD/kzG/5dhfA5vS
lsdNA/5im6WQXXPGH4CzzCiAnhv95+dpdhcvkNfLl/VZrrJcUZcPCxT3WNnwmR1p+twXeEZLKu63VCo3/dJDfqDtu4MlOiCTLqnj+N8n9VpBsdd2wMp9zB0/rkhw3iy5e7I/tee9fFiQQE5Nh665w69VZMliFnHJoSiu28eDf7JzcMdhMsenpQkFP3ZeNLUZcAHwey9P2vzaQyeZKO6LaSilpSVOlunp5U1/9ZjnNS/hyiIdH/sTvWavwabsgS0ZgoJZzKZuUSnh8kOAID3oyRFFUNrTpZjsX/nx9caVuJ1wlHFfDB1uVZiBIn9oavecZfx3ZZ/YbJcxkGb+FDz3n3XqDhM7k4rjy+wNMiv8sDdrtn5SbS9P+OOLQSLJE1wCxsO2CWhRyU8/agyMxOmNdjKZTO/YFH/Fi12y4z9RCRkyfZ5OHduhYpVHPGUMXP9iOW+91R6A+PRtJAVJr9KT2k31Yz13syql+Pu8YjfofrWBFoUhkc5pcHoEwAJcN1+cvk8FxbUhxN+bk5SQTDFRgRa+6yjj0EWL2YA66tAt7F+3UFKnqBsCy4Yvf+6ANpaf7XefxWL6E3SUO+AzWXs8Fj5z5GrV4W7GgrsfhTZisNj6A9Y713cvqSTGMGebGLvb7BDX+qrfOmt2UPIYhfL0cE9jbubdvCAVP7ikf+jl8bGn3nb/7NrAffTkYAQFts0dHrieenrtD85iXmRjkXCSqVovQXVZNhiCEm0XQ3QPGHyr6rF6Oa9WCJbHvAwm92t0/07UEYX7jlT9s1Ev5kJJZb2HWezCEAp9tz9NzvsHF+P/TwPUseNkH0cLF/eA1u31GpXW/93elzWrqi3p/przWBWAiPo46PseFV8qEBVBBBUV8dffkYnOtXZzb1SdqDgVN6JW7HP22tMpzRjZfJkj80tFe0vLlApuivkdr7w14g/rf2VbQhCxP29O8cfKb5Nhc6Q75EbsvfzyAJPzeQvnwK+i0qAJVdw2QVq6GzgkFQ/bGytAZvsaziWKPbn3FZ87i0CepZg/wSveK/m9kG2Fob7Ejm/TfVIb99naOQvsBDvBjXqzY1/H9+OWhskqY9LlaeRW1ss0H84TKq2ZAuGPLOl8ek4O51PVXl79z2QH53UFWSv9+HU8Q3+kdOZWwqTbjWJ92G622kM5KCzXFu2UiX54fVdunLAp+z7Mpue2gEqfLTMQoNudaMbcY6vlUFWrUckPz4lwglT9EWwfiqc4c3moZ83r6ubhE7qS9VhZ7lebdfcBwjNbuaU/0DCjg52laKaFnHDNQtQWJBgF8kjym3Jf7vHeMh6vsFlowpr+Lw9ncL8K+HjAl14PJ2WoljTG3yhcI1cNDmsZRpZPcg8EvZ8f1om7Y3V2IcfMwlcO+nRTGYCy1/6zSJtjY04O/ns+PcT2avHcgu6oGvcYvjLy1swu2PuriTDhFn6QPD/adb/hMzizKk4j/VS+tiNrIdGlSReg5ojdJoG0NPRbmyvhVinMmsbeOX3UkeOLkNykvq6KHt0WK+V8ofdb/yYX1/cs1aTpPZSGyS8plB6r8hlocuu6pSlhN6PZ/FGHFvdkGnYQm271vAyjL190v7xeSCoUvksN9wB3tyWYdqQeTw87fbHwtMcvD+5cUtXWhfyHcGgYunICEG6HeN4OJ59S+i7OyYunkbzeMO7Omqxni1E81k3HAPeIv3xuod2Jn0f+60RFoteji/qOzi/xCmdBl8Tt4W7qwIXX4fXRgGs7Pa5tVwApqLhnE71GTkYKl5Mizyr6W5Xf1hU+JlttoJqEp675l0ZHxZQRiwjCnEAU/mDVt+5Nw+42mznlAGZM1f5YaTXqLieximzc04qtSrG/PY9clyXZD2u8tLzGTaCJZgfStb9yj6wfEGOuYfn8Lxd9S4y7GF5kZKKVSAMRGv94Zsef+SG3ZX4lL2ySNEQ/vsonfh3WG7vk5gmc95iICgiZNQavy1AfFDfLHbyHvrxe982Fr/iD5mSQKcSwRmus
qTn3gG1PPJyfe8cbgOMo9tsJlhdARYep8xFk2AxGrfan8eBBbm5Fz31l5LQp31gHFutKtlUi7DksrOzkG9owWU2r52JEJIdZxqyk2/xjoU0Kk/IM7nEm1TfNwIUfPk2FlNCNBW/RXKzIllbIJ1mGu5S9fPW8kyzx9bx4b8HAnr1CHL7MvkUaWO6CQNQvMby3rs3I3rPDxODeIEf6Z45A1gR/Xiv18by+LZ/ZAnchnAettGjCfWrlX5svi/h5WQUDYAoVU0DkcAAbt4gO2ERfW4hlwAvrFay12fRQX5KYrjibLJ7pCtAKjdq4JwDxw30SGtQH9hLJHiFPzlBtPGF2ysv334CCjGZIFx/WY4k/JVJ1EFPsgK/9qfkP5DWBP4lhy/xBfu2y2+3Rzp/D3XjuD+CPyQv0/rgGr269kHo+8jEXZ2sjhyzw/mvP1RtZXq2g+L6p0tOVLJDKsV1v5yt2WGw+xfdjdOIq3LHXKHgamTnEc/cHlIi1Pc10EfmnX1LjKufqxO8nd5vuS8o3yR/loHx9GPFd4hTBdr1waUSBDLVReaSwiUXHgjz/8tAe5mA0Xvz8UG7mCtWOtyjecljYoCvP2NuZ/nHfp036p31XVLBWwQ3s1+S5xC7N2Rr7MLvd7anEU0P/MAESVVt0fuRLRZH5aIMV2YtA+5v6wxYtWXSjaSA+RgPbn3kO7Us9l4bDljTSBU4yYfK7huxtVMUG8UFcqx8udwpQXadU8RtmZfvC6fCx7NSo081e7C+3B5imZympF2HflH45lwNg424Yf+o+7osHI08smCHW1JMHOylNprG/3mS1lYog+ug23Bm92BzZUgifHPwJfzocDQrQzA1jiYkKhtSDlAlbJd3IMSz1TtFlrGcyaeSUN2hnsaUsgfaZYv7h9KMRFjUTt+RazDmjMzeQCEpl68ufT61qFZRP/hRBF9lg7MDOybc7/TPzDzPOY7n54T1v/Mtr+Z2toxGOFGy2a9pViL5QzOcAM67i+FTF1jjI/N6/LdGbwwaXzbRuZ/OjDJ0/ALAgXIS+Nu502JDt4sAfdtvLgY92m2YVjDWPZ7YJCl1ZoY3Rus87k74MEvfszCR4g9j1r492U8OJFJwUJ4fk+f7xY7azl8T72++Rcca7CUFoCcPHT/lkLb3aXu877Etfp8+bdklHHmmxndwvfJgmijw9i0fIF1FfMn5Phj4O/abAIYpPWpOVr8h8B3bNCdNW0r1CT9Aypu6Le/CTxQEaVmaTiArJdBH781n5PM49OFCtXjxzuE4/00hIYLj5c8stjmk4e6myxQhylRbQCEACC0LXcg3fYnZg53B3VYnrDjvhcQW0RWPTyEtZyFEgH47M+742fKzOvBmeYIZmiMteI/2UuoCvOX5jCxTgFohNE4q/tVpa+fdjr0J3s9QIg733tag63BcHGm+Be4DTesJM5KKGowL6Hy5gDs0zG6B0NGlMdVk9X7LTTr8orKKhbAIF4S61k7Z+mA1OyT+bqXfQYts7zIBNAb3f9x8kqIZkAF+MTOvrIsgSE53TkNvI/mAWzy5PL/yHi31fGO7VXKBNdzLunt1ioCp6Rbfx3TpRoMoQ6mhtgdnvj6hj2Zysja49cuU8VsG+IetD92Xkk5DXB1W77rFvDHOAHP2Ix3xhe2AnFb+sqlof8xa8fBXFRnLBfhRGadTT/brUC18VKpu+vWbtRzzTS3v6e+SGCMvibvslW3Xf61hH8DJvz2nYOegAHL5VwM5J971ZUtSefEqhWy2V5JX4uMxVbey1JRlPpX2yzHw34U3Mshkkk+SIXA5rMTyFKYddBFTLTqapnX+98yUsR46bThwaatiM0srDQCEFucz3ExiFqG58wYj4qRzvPCnOSaHGgsxRf0BvsQ3Vtvff1MfwJV+5UmpfIIZPvB4iPXPYBc/77bl4KSGbvW4b2+ucfFuO9XWSNR825AZIYPSzRew2+s+KO3IkihfqJEd8sbGfuxeQhanVgn0s5yshrLXX
q2yO3yxpEWwCq8OunuS6O6Z4stOLaTTK1fu4md+JJdKl5Qq1hZyJJDztYR468exUxOy1tFeetB5DYoe6FUPMJOzJP4pxtguubcdiZTXacDicv2GGtvb8LZKv3ppFQH3VNs49L4Hqh4eTggXG/vrhKY+aPlVEiEgbgOWTHfXn2IlK4qvojKJpDJa1vu/bfV1XbzNWq6A49GPtqElXJFCvupaXL8S6UG14ddxEcGFcx6fihMzgXNvb78RjFy7Bk1iVvbs9PlHfY8E9OH3lTs8e/eKBfjHmfyrFYEIcER2mz7fKSq3hZw7kSTXZeoTxuNbSy2+J2iPlmFPZQIUlTwX2EIIf4YrdUd0t9utx4qZ60QWDKd7+5RBhVxb4gqlswL/eZEV2dDdajoZ3wNf1KoLuBt/boAh4oXnTtdknVl3qJyuMba3Z95hfJDRi1wpiWzD8s+RMtNvMjUpY5NlwJ3dOo1kbsE3GDFtT844gZfKQsuvqcF09MgpwzqaO2JLYx4ftCNhcenN09DYX6bBaGcdxXWSY9SIPxr3Amh+0svSx74uZns3ae8qOleeY+Q4MyIj7xhkUczCc2NKYKBNeDj63IruQL9uEKu6gdrbcozGenLGKVJAl4UbrkNzmYWStJAe7JAcVbLi/ayZpL/IwUewKDwPVEc81GmpwBFedeqHLQ9ctkUkodscJliDJU3A0IQ3TTQXM9+H5kxs8aE8aKdFtvs/267NZ+rtaAsw0xn0tdZIzqTvFZ7TAVB1PDHWYETiSXglV9NepopZVanxOmQGeecX2MMnKrYOdK5vb77oLeVpOONZllTijPZuWcUA0qV0vsB2denLHrO3MdKO2QHXzyB7OAW9HWDOZOF2Q1aGKJwK5UQPt3hY1VTqaFtVUuhdLhUP7FkAdJ69rmME/u2DRDmtAd0Wz2h2PjwbqynMk7Cebw7MOJDzHmXqz+yxiyknpkOGpf2Jo0ZAcqK+W5DU18MbDXIipWQNyKo1jx74pUnvHBvovRcLuM9uZoCy9aazKxBXqT+ACx43UxYhqifk7+lpl01sprz8HbaPlsEwmtrwkhpWZONCfDg0F/Vj554XVV25PxhTlYUaRGy/d8F6938K9El6O05JUyKS2ypupalzSYtqNtzHHs0Nzf4XZO30HPSBGElt+rGj3l4tS6xMe6gHc9VHYt1cjF1OVIsdual8fxm5TqB3wBN6WB6D/fh+CgXggKFa1FJanBOXZ6OPsTWboux1xDfu5KnVw5IX7Pj+o22dXJvZMkMzEPV3hntpT4SOXPXHgGDTDetw76EogP9E/0pvdJ4nbBRtzZPkCbvaAeuU3WuiGarIhj54iofGsWIiOQWED0anEQOuJop1BnAAlhA7UoSxeIGlSUExfXpsW7GCO9hFprHr5PCJn0Z+w08N6VCUqUiX/Wp35PLnyJJcB+ELTZXMw5mJmUIWljpEq4VM5ACU6H0/uoIiT2Y6JlBS7Wtu3pTcbkMU8VJJm7UavWyIOqHk9EDkWUl/dPeoRYCfFwBcUbUrXzTIv+HR7I4SniwTDc0udrPD0bMVNuAUNf2b6drax7rDOV2N8mzPMWQzEqQCZ5mIhdVvMjMAcwatogdUhnpTA9Lh5sxTU06qHvt5XcUpeLb2KOcWrcJWzQi44upMh4iN5VT/bpKSB2gVPTqRiXWAv2Bbxkbn5A7KQWAosEQG12mRxnIAvBSz4Ouwn+7d2qrn7ecKc1o3/OtIAYJCJKE9v5yiKY9vQm2ewEcJncme64zIpQ/+lQW53kdHIcVvNzuyk3t7ap0+Bi5cM294MLvNnTd0uWNPLoL/nsyb/7MDV5M1abnO5pTDI5Z7ZrOzIXke8wo4z/u45p8BJ2f5h9eJkIr6IIB6AUa+GGqodWdXvFX8MaJShyPe1m7UhP7tp5fTWy7scbFmBXcriXTlfeuuFLC/I0rot6RPSOGgGemOQC+hMItFLLIuSpSEIyrVGFqA3vPxAe3avMi999d5xclwcduxd
Oq06ksyjLWo+dBPLxP/gp+ljyXjIzxsxmXh5xIQiGe8a+KeY2YX3wocNlIzSvuzs0H8Lc3Hy2MUCzg6DPESGLO+JWevtDtdH5R64PuH9lWMMB9XhksC0Kyp7Yg2Y/ApcU+SAz7ovbSg84Woak52VRITqRP5dafxaMsAmv7MFtXzLYkMxLkwhpQ6Bb7Da9rh93IwjpkaUTSnnRwM0NXAY5MfdApKzCDPWEBjNdJcZ8rOUV4WC3kVE5kuitBVcIVXYWZxphpuJBOVbaUs40Y3KQ4r1rJBIaRhQ5/ti8AjHvOUqAicBb7aPnjdeeh7a69iFL/dP50eW6kG/byY+1UcD9u5k0LWy01GWnO9aLXGtCovBtg31VgJOJnZOIx1ptE/vt/mkQRiDGU9qFF6N5kBHrxKMDP9n9r2mZhmGHwFzSw41TcQPCcQ4M7om0jOPMkaswK+9mfpgT7i7qI5vuUeKKanYwAKYgGyoZvaIhP0+2SjQgUPsyq5rmFMyaT8sKShzOBekTiWnXLsnwECpLRxiIZpMDF6cU8ihgQYLZrJRLcgYMCRrvOlasku2P7SdnSdbiKsMwl6i+57u/PkQlFsv7D0l1G7r+65fBY4AuRkDOgMXP/Wsm/d1zDwOkOM49GfQIgvKG3qx0DH7Iz/cdFrpRND3OrtY7wFdFPIZsbxsxAIc4xLMandiGlvyCyZpdhgBeS8d7D0MU1LRV3hbLo+ve/pacyJUgesKGYX60p0p9qdny4t7qIhxpRjOK5NI2UIIaxl51IANbaiPpVeRe8XMIfMMWWKqU8tLLeXLk7w3S7TlFV6DVG2Q1vkZ7UkOvNfLx6GkmnOjNoTcwA545W5leNEDkIL7tpcpSOZtvdz1XOKmevIATX+KUKMjJ30K3RMyB/QJbgnvLgpYfGDBtsME0HXlQdWUUa3Zxb65nR+Gm8o7sCyFulmoq91Y/KK1b4ALb8ugft91Wp90FHAFR2ReK3fgD1qVuXBGYSyBM9MtlRWFuzFE3W4sAZYIQFq0NC0Nx+hocE4O7OUowWSHEBytfT7d3hCQvwrpIEHxkd9yTZKEgHGUaBOsgAB5KFcQRQ1YnSqTGiIlQ/abG/eAj69YFidxQKlIEeIMfZebnthCuhoLOOtKUep5UcDXulwECq70FbFalKFZuQ6wYhNkyKu283XplOr8wHTO+0UjTwc0jCIVlETrZeK8KN9udwEHPYMnQDJoWzTRxnubMKteyswFjGFAR0g4OEroZSEtLNX9TbIGTcm2p950BRA8IXd3AMiFYwx+0qr06doy4VlyxbXfj917zFxTYLTYFh5/kI3ILqflKQFJ2cII70BmYUIwUaVISFeuhGy3sg1ZfOiRLcQWn8dSTxacM9mkpdD4oOXpW+ZD6XkT5pP1LgeakLlxh+c38PevyhnKG5V67RYwr5x4ylB2wCQ9J9AHLHeoXdTXXW5OC6tQ6LM8x2lyag/qC5037+r6/nBOKNICEZv+0m9E7wnwAcujDgZkjr334tFpt0DKfxjXeOJkWC3ph3C+IzPTXXqB86HeuCoaKx0W3jO+MZMUa0iU4lEzI1PirdCa0erztrdm0LDXzy4kFrc9Hu+xfxzAcDvAUgRcVKSHicjK/vD2IlOC3VZjTeyvjndkXOVlHqgvpsvtliIE3/fPVNUBaWNJAbI9UwaeJJ9afOybBJsTQJT120rTB3mjES/IBvIpV4PF2sAGfZBI6MoFyJMMPaSQM/Kv5691bhX50iz8yvM2C0mBsTQkiWwKBTUeZZxTedzfOqA7QTYLl09gJmvbUNtc2AzuaQrgSnaUUEE/7FUO2LKa2XfPhQS2tGcmlzbY5vaT6bfhAdIGIyqeecvhkcSQJesPcQ/ZqqgOJmDxTpueSnjIeEe+NORFzoI9u5oR2Ls5+igi1tENpun1I1MX0QPcfWqIodApIQecFLafb17AcKBYmFKBOPMJmJXCpGqC61x8oivA/cBCTZiYrgAD8RlacGDToPowHaub
Sv/IyoHGQdaWgk12ltkQNxVEqZVgSeMNQdxhjak2WslW5A4XVoZn2oGHYD3u9sN6ZDFooe5eU+/kK8yRUoi2SkGa+oLN6njWLxov10ARDDlbR8K69Cu2j12s9PCnuqhQJCORalOTdPXQ+0EpYdCa41GUWTFhVhMZa8oUuYQ8dqA+AcU58CRTrGqi9r4y4JQ5h4ze1VX4qlF4HabISH28LcQuEvCsNLeKZKvsdUukseabBNYeznt0sgaz5mHng0xafnMsjaIvewPY2CFb4EPkPFGwPoXctJNQDjaghvb1Lj2YW8BClZtZjFwXdgr01nSn6Xbz8lKK92SCk2klijzpR544/YXMfIMsYEIGB3Ee5LdV8CW89lgPQ1IeGq6gjyCD5iUiTGyXZspqGpTP8P3u5pMG7TVgtfF5qLGVC2lA74wap8gmSEUuz2m8jGdrhYcWKIHsbIFV5d9pdMAtJoIr0pKRC0dOYkBcKuHgrP0Vmg+id0DmizQ5sOu18qDomiyPwIdVjJOdBmErhqeI4hO5gEOR2B90JZ+/BWVGIyTqLb2cuHyN1W1yGwUtZ3TWEs4vgvHzLAcyh+Vu5DvtgRPV6WVRCTc15Lhur7y67iYNiDKFYCu3GHXzAsYFcmjLV4so2SJ5Pe9q4w4ncjbZDlGdc7AaC4dXpOo8pVaFnNh3XsjCBquu1HK1qdUcc1TQr5DIb/pmgYucYcAPqGLkIE7h9FwwaIS5aMepyHq5T6nDbS41MWLYJ5lqypSsPwVQwssFO30CH12I4mPk9IXcYoc5jBnYj44E1PPLeWHzP/crENHOAOb18l7RZsTHHgQFVBtUag9dctRJp8e3/oRdPGbIZ1+GOZS0iAIUPy2fzKIebygl4Qyxnw976isST9J5iA9phs+f01TqgUCu96bZkxn/4XA5+hIkQy4UU5mXkrTLCCQ3VdaSfDiqaA1UaHhKw6I0uqGkK9XhjHexC7d3sh2fXNr+8osrYIMqqH91Vx9JFgDXUhS7jn6POgBdHeJcc62HrbmqiJMnIScwRTvlTV+PZV/yDeuRDOkJ1wPUBnT0eeqxz0wJ9dFDGLGtFVJQp92W9aRfdswlWxWRWwqIhQQ0Sr8/VtdXMzJQUe+sghbO5QEsAvUwR1HG+eMw4WtZUE3Pzop0dDWM6vPccO6I4ojII0rzxBes6NU49sU7GZN8Kqgt1XZZRR6amka0+34UKM64wDWjcyIdw3HiBKHXHN+XIlqnYDjjBOwdUo9rbFCNfEIDDfalPqxdaUijKAPPBGk1XariEYERqVId6tqV1SaEz5/my4PpAxVMjaoSiu83mRzvorcrIxWwMPoQCpPto45cdcqDMRYMnmkkopdl5k139nNoPQUfVxBh+lCVZmRMImy0/fg4/4XzdJrQfdR4dETGmJV+NJ8jY8Gzb5lhLC0W23IFHewvp1XJANLqKA++i74+jdog1B0KluFUdkXtLDeykkmFCS9qiBNYcV6aSdX+U/PslRtccbSlck99jLb96OhcEXqSkz1wBoRK5PDTamIciAp37owLDAhnMEPzhr4IdtlQv4VbZRCMbgs4N1ePu+uexn3FeM3cXFw6arnASrTSHeegJAOH7RWj7PDy6SdqWBmMqWotVkDs7q70+Hi3Qk/c/kXCNEyBLBUr4QbLOueKh2tI3Md2jAglwt5+IWqilGc+3ybAGKBqUomDoPdzG27M62f0MJPb4qY/yCoLN1S+dSpGQ5GQB84NaR/R/OcZfEzM0HgZZ6NZ92TPf1abzKFKw5BXI+/TtdNW/JefaI7zMLXkeVn4F5uKGFzCDajFfs6m7081blVZZ8ipDBvwfcTevFGb77l+KyQfqU19GhGDWKxwFrgOM/1e1ODhECQNpxGrq9l0b2tTVpTb8SyEzTUaR2pV6E/QVxCxQDYJgZ3NsYzk/h4LwMWgmURHczLRXTgpAj+oEKhnfs5ms2HRJBvbnh7sGZ7uq8SH3Z68NTE4T3DmyXLzsJ3JbDw7
EE+hsjvc3MgoD3ASf+i+xwri4/mcYCaT+ry1ix1axLscKcwXM5xuY92YdRa/mm3dp5NLHXzOozebDphLj8CueC1B2qXrvGqih7SGBVBatPTZ48ssTdc9+dF/WRKDyxrzQuouc1x5uj2fzuanv5xk6S+0s/wdwzlSHgo773KdrTMmnMuy+YB9U5A/WM7tYh0IgbssLO5kvR7Aqw5vmL3ECiICKiEdk526NJuWrwdyuVOfNWXKASeKBzCMmz6Rs81yetOVVB2A2B68wwbM9mFNTSp7RfakmXL1XFnYPmZpOA+nj3IcwV3oc2QptEE/T0pySYaD75jjxN8pniQRFueY2fersbPSdEiR5WCADLtXylIYXmFuBtE3IbOITWe6OScfTlTqH3XMNmKdiNRfl3xZxbXNpxucNEIxXs0h7lrTdxVXkNi4JUPsn/tisnaxQ0rE6LtVQtR6N7JcTgj7nhlZTGaa86laozq8r6H2pCq9fTiXZKxBEn1h0lynYn3G/lQNwMDPnhpyB1Fs79CQ9tG9TnTvAu0Zzbi9nubVXGnFibB6RFC7Zshg6cReZZ2Ge3sxVte7j2y0F/I1pTGKCrjLkLalHHP3dxQrXxYXJYf5rMjCR+VhEptckkZNPk5P+sFD1NukyKe9MVr29aZrhIU4xCdpH2zIOhnA76G9PQvISXT9tQ4+WN1AxviZGAWjnclZBGkxeYxTF89w8C6TPX0tiThlVdMrwKlZkikhvZJ8rKdOz7MKcniL08ijRn2cW0Dcd3OgIU8I8zXJpRbb5eqBaGRyNXc2iUdOpxxmJtLY3qQQxMQZUmf9etEWtjOcwtEZiwbIZO9uIe4IPKo7zUDDV2E0cuCPYBjRUaXvsTIB4SnZxnsoKzju0LRs7FTqo/6DdFPZiS0af8HUHTnULo5WJbfj0YSuSCLv5dwJlJWyEoEHCeZ1iZdCaAP2nJxEjCEYtklg3R3p/ZkYFYp19z5hHeZ4+tOrN2TCMpZOM9U+KwpVifIpB3ZeGlhRh72QU2WJyOModVmtmRezXEqfE9ktXsf28Trwzuays1wm+HBAXHtITmA1t5EMu8S8w9HZGPiL7Zl67lTvzolWLmFNpdKP6JNeJrKEEhh4HfI7GuD1qVYmw54ZTh+JQLJZeYLlW0payutLiWtjSHj6xZ5z6bhB0reCUKleONNxyrXEZbARyDVs85DzInp6YnjLq8+bwPhUyLBRrmqhXQ7U6s5xhq3YvXeBNVkF7QRv7S+NJ3WbyE7fy0P06YnB3J+6mHob1gP2e/r+d+V8Tqbr6yLsYt0wiBA7nhvaxSdm2pLHJKRWnIi5OWeseOWlC0HvlU0Oz1ZvTkKxXtW9sxmfX15a3VA8YIETwPBqaUbE4zZRil34qwJYwGWcITknYWxOZX1tV+MMB+yU7ckEIxuYGUODK2irJvwbB0C5IWZvTA1jA/kaOi27MJ1oZGSj3r2HSgSCvJgaXzKtJNhGsh2jG3HAWi/4rl9KVwoye3bgR0Z8in4xph1Zn3/kSnOpXMl8aAXiDIT7MnJQRWC8fyZ28f6crry9GK22SKzsGzlKhTJO5WoYZE0eZZ8U91dINzi4aYy8Bh3CmZsqM9CrOLCM2J3UORTaRmkYaCQGy0YRE11H/rgZnyG/NF4LiOUzyfVxpqGwXiZnqPSFXPGdGuRzIB13AkQn7flEg0zTw7pB4qcl5CruOJGs9OhaFBnTAd/GCFpNvoHPd+PnWkR84zeL2csOTrQ1IfJcULUh1iqV3rucwx5ScUMfq9jlh8qh6OHV6EBk/5JTJl4MPv0N/bmeTCYAQSV5OaKNZ39hPdEW9gdt4Rvi1mv06jU/wLiZNaI+CBMRDU00Iz2OsTCMyBEe2UD9wtJZJxL7lUDt/koDh3CYEYZxulwhrjbYVbuH5iGudKnXFMDGtVp5MHpq+u6SpWozZ717GJ9aj/6+OvcUr6FlOamQkbBM1xD3tsOdcLaXIQkXSKHpsuOBsY4Ssk2esPc8
nLSr4wI5sECz6VaSd2OlWDKcHYfvqAlMwqyXgiYpZF5Z/PI4zjCejLRSdpHPTR6+3zRcHgAFnAzuovDy9qLL8mj5xYMCh7cqIBHqf67UHIjFqC36C+vjFjBpyCZ2MuREeFtouXoo8q2JrQwVA/OTe2sD39+CEQKfiLGe0zZsQddd2HjEzyYuVbSRlVFujysy8pLSz91sChxOlbXrscIRcoAw6xiGekDMtjJUalPE7nc/Ii4wlTdOzJ2/3cAMdDZ6aAra+BZUC1kHK8gE2Q+o0LluEPlIhb1wC2MAniUpgPPVwnykXqG29RT7JIh/DeHznAukIdrpqWdMpzp9QdBVqCXf0Kchz02NfiZzi5ccGjRsILIx874nqQGZxiobXB8PlrsrB7s/0dV8ame6PFAnR6OdhdIfxMIMbs1oG2dOXcuHSt0EDPGBgsSXNxPlZb38TneysRfZuy43qphzmK/dSzvwKbArE61/fH0K8vRg3hWwtGAeDRrmNRddrGb3yzjdAJimIenvQiGvVwSwP2oZo+YbsgDeJVWhH04CKnfCmVfU+kJvYWb0/U+xh0ACIOV8XU4CicjaGfuiWZCqXkbeZ1c5n0hiXH/5H6xX60gncy/xJFaxyZDEtmoayeHcKqoHMC/6TKWrN+klAji0KORkU87ynBo9uhBQol9DgTNO+uml/nj+apuYnuiutpp1jguxDak/G8VZSszgV94TJoInEsUMhXR3Lmgzg6YJXkPUW3NSULMq+pBvVyh2IH6soLYo4uabtdnDnE6BSAb8/ZyPckcgaBchdSK3qnSGgUbxb3axsBjgcCv2K2rcHRiVJRfu6Xk8fqZWejRg5MWtzr+V0ug2r7ffBZYpwn2Cy4mn0ar6S8vdLqJAhvCAHcwzlXDIFjX56EWk3s/x/ESDeXRFdorpbu/HHRYXQLHjRZuA17tfEaneHhn6AmvgKYTcNB9Es6DizvwpoLZKIzR2aPMPGjD4NZw8h1Db0QJj7pMol5ylMUQhwIzb86iHqdg94BkxlhS7KAtE8hgb58lNmwPyR0u3hhGD1tthyYmwWv6Y6OIdZ6bKyicrkBUJDYtTs04JR2pY88wYfVHh5b+yAjZfUIvCn670fbUqZk7A5oDTnIthHYxV6h5W9UA2+/w5jVzRaAXrn3qXLtqCrJXh2ANleS9SYBWMleo/CPYACKNXdkhKCfoB2vzVvk4ZAMc/AZ1M3iJLP8z709hdvqVolZHT2NLJDFCRTE0QVI5/NAavC9kTXt7FJkUuXDeu1FukQbKL3wn8DXhOib3QIG2cmCkHGAHK7OwHkW0kimz8Fsoll0KVlr4+5aWpTFz3ZYO8s9FRL3GCSm+Sbf5T5XLdvfs9tXzlJ4tlQZywcxLIShem7qwpsB1ZQypv0cF5BUUoMtCKkQ68XLGTvvInMxahugb4PSUPSJZcwD4E4tWZ7O5w+mcsJuJnImhyiVePxjqeiSycFUnc8w2elrqANC8Fol6vCnci5GZUcmWn3mXh0lfNoSIzkdaS/42FoH6WYsS2P8TZ6wL8LXKe1FL6LDuqJcJyUyuKxoIZr9/r4gDTMd0kzuAMcXnOovQ01s5pbJ8GHqkpfim068ajMrq/5QAEseuHoxi01YdH1gGv8AUmpGU4MVw9F91kzGmIOMishcgiIxZoL9GENY2k0BgFEiB/iQmp71QVDgSx92Scakci6gYTkY8tn/AwndBUcXrYbbRSEhPY1FRJT0U7iWEImRv3E/W4oQPrI1GLEOgZtT6EzOvyt7dKoi1mzM1aLrwVFH8+oFqMeJsvuhCD848dlbCfYXc+XEXo5dB9aqewR4KUSwt+B4oPAhUOZD/XOSwtPkWcTlo2Jo4SqucEmMVrKiZmdFF6CeheTLDxUbXfhaHa9fYWsqsu4auGyu6R7kYvbYG4jNixjVaXbjlkIe448bCjKNcgjJiP66idIIJKIPd7UyxYx+AurDY1GSsqO8xFiee3LpJrB3Ga77wTODPOfxCV
1IugQRLcWZzAZgCD9WMty3dg39DIE+x99Fz+ROBXw7soAmGxA5M1KqhNOZGKLK7btXvN4InYqNT3o44ZZP5TSeauwLeyD20Qb6JxgWjdvI05H/3+TgN/ZewiskipGVPPuIPcx86l0ohpYOZeOiIEIgU0mp5Mr9OfWbFThpOrAA4Oqf9f0CCjs/Ftp8vf5MBY2HBq/LANcfeWqEmmlpmRG5QlM+x+cJgxT4+lUvYltV8uDBgQZZRjqwp/Ii5fcoRosODUx09PbCs52IO1ulCPF8OLyaSjl6PBiCJdX9SjKjtQWaJIt9ErA/NvT5HA/iVWoQMrEsAZ/bgmDszjkBADuM4W3+SsYAy0hyrNVKIoZVUqEc5dEZhfCJNgf9qbhPM/RhFiW1ELa6FME7K9OT86HVKtwtxGTS4nnM33etj3V7+FCjYTulMlnDe5dGJ8jpuEgQUHcYFE3vQd8TztDNULyhLl15G5zzu+L92nH0MtVYrarM2jhamewyfmcy4rUhn1e0OlbQkMXJUE/dZUu6rRJrysBcWkNoUeUl9eDrFwiKAF76VRyOUUyOnFakfl8OQdauc2mAu6E52xlWUnJNQsaMcd1spZYp0dLs3HwmsPKvLnDVhmnNiDuimR5wdnEhoP0qcdlBXwj5Atvo0iHQC7+dBjG4/aSMPzDPKSuPaYn9i0dXBKOJNiJDh9gM4tm++O4QOlI4MiWZ5Gm/JyRJ90BUd94wpzX9+4ijjn5ZxC8RMyDdcUT9FIGiOw6U/WDGaUiKLFDAXizBuJAAsTR7p/nhNzNxPlipWPGfbAzXu0lBqc24q9RL/mOxXuZyen+DXG2PycVUD7gLwq5AcWS1H0dn9n4Tgt2BKZbx7yqqRW5YxoTTCov+OmiIKIc6y/uhTY55+ap15EBPIkLsqZKkU/tkaqDl8vBzmIDfpMaisvh482iQHYqgvMCX1SrZ1Lo+xbpYvYzsKkLoHZiwtmwHOJWzwik2lI3ZJIjuxVrAA/0TeoMYaxxBzsG/VjzGsB06nl4PSbBZ1KctpK8R8wqeBA+TK1hGTxtP8uFqe4dvGe25fjYk8Dktt4AvDUWW8JBSVA5Diq6KvUjMje+wONLcT5I1sQBveQyO9pOX087BTO32dQWdsqVNc37YlBvpnj7psrD6CGa23QYNGcAjXY8NgZAdY5iP1jVt+0l/785qvvlfPpwtcDz1a0OBTRzNycO0b+M2kOM26OQ7FBXL10nigbVL7wXr0ufHbxKdknekuZ3jJrK3wm51hRE1sH2GEm3s65UMJxAFf6q1a8oi2nEOuFCdMCT56AF5WjUWXbnHCG7G8+VuplChACk3r5Y2wCLobKlp/8qJtWHMWBn/qJ0tby/D1TPKqthleOTbBylkopnLhcwOPHAVT2nPxTTZEkFOkEG7Kib6iCMUb5fI5vSNeZeq+7oj2COhmfykYbRxFDarjsmUjiwYizGZmNO0W6H53wyA0SSvTyFDdxBmuMfI6/zgtAn2hwaZXbleG75xhZ+Ypl831r0m9fly6EyZUCKVM4211glF1oP5GNeIjmY+ayT9hoInuYTQy8g6Jd3xauFF39Tx7ioPHp0eio0p1E9SVSBHAN4J5ER2wtE1bI3hYcAdjWolGE5r4Y/FYVMHJ8HT7ouQfek1W6vj9S5Twji9ro6q2wVXvSBtOLqqze/TmA6V4HW7u/XRVnPYhWV33PEOgaDtOO+laf4lU3HOcEJdvykxFzYRTihnA0SrJlutmQGTYmP2coglEXxXCKLRWnmB8gbgEtLak5RW9okQmsu0NXHQRCxh2l5hFRYWBg7Vyt0O8q0U+u/uycoQegGb9Yso8yv4bILIIWEs0M5MgEvZ2HSqzhKhPgszPduK1R4oGOT7XzTzaLSC33/vSSnMg5bTnvLLeYq+d+zgnEa51aYxX4QL1Hkurjash9Y9d9P0vwd745lZoIaWAoA9WRbPSYhLA/tlBiUvlqQ6sTFZO0ZZRR8L9xrtoeOSIDD10F
BxHwXUOco41n4bRlSy1osq3Dz9nn6GnlJLrJYfPwXxr1AWMtctWLM1W7JKM9DdzGubiSsNDdqWZsHbtXCNK307dawRTCkZWwrTCbJGIrFvjBtrdGG2CR3Wh3SzHFE0uYo7mV/lBdnppU7gud3d0W4wzBcneDLo5pqWmsL8nJ5SwWmI1q8P8GmKHULZXznXDL98oypYuCudTKHnNID53HHJI45CuFmp17WaXWKAyGNM0dF+2flwB6PK7d8624FlkSC0N4zkaOPV7qLhOznN2P4ySuVJ7/nNjTGGc1KNFpPIWHFCE2d/2gx55QkFfB+QJyZMw//nsar3Yh+OiKxv+z7/ett4vft6nv7y6sL/KVcTby16z3yZZCgXsZjJIKc3h/5VZYqJHUy3ylOfK4B6SohWUfwBwzR8q+97l9/P0f/K6bnjmp1KUwao/ssRI/etLqwSeylwnWIPl5mL3fxsfKBMYbfXIHYCKXH6/XhR2rZBQyolsbOva2ODfGfY+aTaMaB99Po1/zwtcf3m/FlyxVdmZkaRB+WRCx4OKPbnWkocK+QSRO/9SdM86SdhfmF9XPjTpNGWtcT1l8fnFsIa0/1Z3rzZmcEmCiMz44gbh9dQzveM76cJcM9vSE8qta9P7gNc9PnzhWzd3m2NYQw8wgPYFyCnkKre0CmA93N+IAH5k35odkuranyEynyFDfNV25OwoKWu2wW+/SfpQJqfdH/cv7duUuqOYXMV1M7vHl6OPSKKsP++YJxB23x4palT2MN5CvhhtPf3jmCn3yKwcP7lcrBrYR1XG2AMGyGU3LcJ4yxlOZ2MR5kn20lsw7sFTT+zjMV3413u+W4Njfhw0FGQ7OQCQ6hLsZjc4kciiA5Xu61Fc/T4z+l6KrJ5xmuIu1ComEcaUvZ/DiLawGhTEPauM0YRm6tw99xS/7/CtXzUjvsp2zkClFpyv/SAZkOMcT1UzcvPL08y7QOzmMVteQhZ/T1NVxlWw17WPryaxS1ttbyUHcXD1OXXmWq/XmalJA8ZoxHjm+kjDjPxfUjz8VCndI726DNhNeR8S6cBq0qWn0IEJhtZwp3GqleKMEOmQ/akMod75mS2HY9hz94nW0LEQ3hhNWmWKsfpes98oZsTH9Y/6Wl6khu7aNi+k6EJ6ryct8byDDlgjNePZ44E8+lbLZ5aFqxv2a1nNrlYbF8OwDIqjdej4HvXXCKAnFpdtMD4ZgCNItdQxFmi2rRdduELNAh/x7ORlzzRQfQ9WWu+xqthJHHVZE+4vFWm0AKVrF8+Rn5VfVZrKNfGhMRPQ/lXOKO23hfEpE4bkMbpqO8SeUJaAHl45rqCls76ebNtTad/WXu375s/qOSG20SGHRt1LFTNJYX2E0DX98H3em1vKxLle5sVgh+oBCQEI1kLjS7VfNACjzK+5gUmqlnfnTLQ22VhKak9nJorbx6hrPKLiJR1Kss/l8V/9IvU9yzFj65ymc0sz0fWfsRu6jQk9+RUhY/ytai/X8I9ZyOqktHs+l3VJLKexg9gAgtPKjMm71c5Lb65urXAFLoKuEBlU+Pz/2MFuCGhS/iYUPB5QWD/x874PgUOBuLaCvQMWcza5/iwqF/nNiyvG38gmtKl+9M1S8/FOv6rTH5ZflqX2rnMDDrLPOEDJqiWPLJY8vx5bqvOge4ITs61YMIWt8J/vvp0tNvJinFU91oG/ghNYNX5vAhbl649Wl61sFu6RL2zQsrSZJXjRquSscdD28Bxzh5ZdGxQvCaEKd25xiHmn1TpPtDSaMukPDakigmpKRBSXDfJsnvoqCLWLus5Ja6QAGKpwc8vNYWY1vp6lo32e1K3Um9JpyD4ffNJ3cST/cQAzmqx3xHoHhssexcoBhHfC2WEAIg7HJ/fveO62/ACIQ1LOXOo4iPSFRN3J7hU44f3h6BjPXvVfOFRIHvsP7zK6bPGF+UlCQwjVzwDql/KkRe4FduL1ZLDW8HewmChcD4DOhGU8eYuWHuYSBAjqv
nN6wn4J+ChfNxzU3ROg7PlSvdLosjAnHnzZ3CkpFnpWJk0MX4zxPVky+Ddx7DhWHZ08bYCVsuDaEtM7nWRv/DSRoj68fpA/fzReX45m+qtdy1VJey9UmZ9MPaYm8uG5TzY1Uqo/jypphMlVuJ7Moin9MZPyH/iL3D465ZLd9c4efcNwpOet3JiqY63/sHPt0urWn+b+x0/EXn/vbff8af5GFH02Uf0yk80vbt+f9/UbflenL3f34+Q2Onf37nB2/etyXxfFzD2HBjD/MuvEHxc/3OXg5vCqks1/Svq6/N8G/c0y5+388KP95zqx+7MdfG3/Q3Yf684N9syO3W9vT/8rrrOvKnK7D8X6m95FZ+tfuTq+0X31eY/LzE6mt2xteYiJJDP3zX1++/a7Y/2cXb8r8dd2+P7vt6+xePn+/1t+v5ecOflvS5+OYz85NWf7fhc8jDeOPuPmfrtK1j1u+/3zx16b85VozRvjLtUDHfr/WPbsV+/tfroUb/PPy//yeT/8le84wqvr/9Z4z/77405Zz/+yWc3+61GTyr91x4V+047Dn/x/vOPdXLZ/8c1s+5eZ/vhbLMf/SPV/8zZ4LNV0y8dDio/7afOH6aL8f/FtXvulPKWZiJtzl9etD+rdi/DcP/0yl5z7/x0zc/2Mm/wf994mlf/l89rkPOHK81feLfxI56hnvfydkX5Fq2mYPT1XW9Z9+lNVl0YCkUgHb05+L4GfLPKvJ54NzudvBbcT+WN730SXL4Z79LbvQn93aR7Pbw9r9E+bpP+HRX1+n/VcZWHz++zeJ5v9Gov8sKv9t7p5l/8VSwf2vVPxJKoTZn6Vi8jVg/2NSwf2LpWLyv1LxR6mYTv/ie1hu8T8sFX8XG/y3SsU5ux+3ByoQdxCM0/9KxG8SwQt/E3NM/mclYvKfgJF1XV66/9va/bZZv+/LBzv+M9HCf0HFmMlfFhQoVf+0oN8L/76gX/v4X1hQyIK0IPW/IB1dgqPT7vbwG/8H</diagram></mxfile>
2104.07660/main_diagram/main_diagram.pdf ADDED
Binary file (42.8 kB). View file
 
2104.07660/paper_text/intro_method.md ADDED
@@ -0,0 +1,63 @@
1
+ # Introduction
2
+
3
+ While models of humans in clothing would be valuable for many tasks in computer vision such as body pose and shape estimation from images and videos [\[9,](#page-13-0) [15,](#page-13-1) [32,](#page-14-0) [33,](#page-14-1) [37,](#page-14-2) [38\]](#page-14-3) and synthetic data generation [\[62,](#page-15-0) [63,](#page-15-1) [73,](#page-15-2) [85\]](#page-16-0), most existing approaches are based on "minimally-clothed" human body models [\[2,](#page-13-2) [31,](#page-14-4) [44,](#page-14-5) [51,](#page-14-6) [56,](#page-15-3) [77\]](#page-15-4), which do not represent clothing. To date, statistical models for clothed humans remain lacking despite the broad range of potential <span id="page-1-0"></span>applications. This is likely due to the fact that modeling 3D clothing shapes is much more difficult than modeling body shapes. Fundamentally, several characteristics of clothed bodies present technical challenges for representing clothing shapes.
+
+ <sup>†</sup> Now at Facebook Reality Labs.
8
+
9
+ The first challenge is that clothing shape varies at different spatial scales driven by global body articulation and local clothing geometry. The former requires the representation to properly handle human pose variation, while the latter requires local expressiveness to model folds and wrinkles. Second, a representation must be able to model smooth cloth surfaces and also sharp discontinuities and thin structures. Third, clothing is diverse and varies in terms of its topology. The topology can even change with the motion of the body. Fourth, the relationship between the clothing and the body changes as the clothing moves relative to the body surface. Finally, the representation should be compatible with existing body models and should support fast inference and rendering, enabling real-world applications.
10
+
11
+ Unfortunately, none of the existing 3D shape representations satisfy all these requirements. The standard approach uses 3D meshes that are draped with clothing using physics simulation [\[3,](#page-13-3) [40,](#page-14-7) [43\]](#page-14-8). These require manual clothing design and the physics simulation makes them inappropriate for inference. Recent work starts with classical rigged 3D meshes and blend skinning but uses machine learning to model clothing shape and local non-rigid shape deformation. However, these methods often rely on pre-defined garment templates [\[8,](#page-13-4) [39,](#page-14-9) [47,](#page-14-10) [55\]](#page-15-5), and the fixed correspondence between the body and garment template restricts them from generalizing to arbitrary clothing topology. Additionally, learning a mesh-based model requires registering a common 3D mesh template to scan data. This is time consuming, error prone, and limits topology change [\[58\]](#page-15-6). New neural implicit representations [\[12,](#page-13-5) [48,](#page-14-11) [53\]](#page-15-7), on the other hand, are able to reconstruct topologically varying clothing types [\[13,](#page-13-6) [16,](#page-13-7) [67\]](#page-15-8), but are not consistent with existing graphics tools, are expensive to render, and are not yet suitable for fast inference. Point clouds are a simple representation that also supports arbitrary topology [\[21,](#page-13-8) [41,](#page-14-12) [79\]](#page-16-1) and does not require data registration, but highly detailed geometry requires many points.
12
+
13
+ A middle ground solution is to utilize a collection of parametric surface elements that smoothly conform to the global shape of the target geometry [\[20,](#page-13-9) [25,](#page-13-10) [82,](#page-16-2) [84,](#page-16-3) [86\]](#page-16-4). As each element can be freely connected or disconnected, topologically varying surfaces can be effectively modeled while retaining the efficiency of explicit shape inference. Like point clouds, these methods can be learned without data registration.
14
+
15
+ However, despite modeling coherent global shape, existing surface-element-based representations often fail to generate local structures with high-fidelity. The key limiting factor is that shapes are typically decoded from *global* latent codes [\[25,](#page-13-10) [82,](#page-16-2) [84\]](#page-16-3), i.e. the network needs to learn both the global shape statistics (caused by articulation) and a prior for local geometry (caused by clothing deformation) at once. While the recent work of [\[24\]](#page-13-11) shows the ability to handle articulated objects, these methods often fail to capture local structures such as sharp edges and wrinkles, hence the ability to model *clothed* human bodies has not been demonstrated.
16
+
17
+ In this work, we extend the surface element representation to create a clothed human model that meets *all* the aforementioned desired properties. We support articulation by defining the surface elements on top of a minimally-clothed body model. To densely cover the surface and effectively model local geometric details, we first introduce a global patch descriptor that differentiates surface elements at different locations, enabling the modeling of hundreds of local surface elements with a single network, and then regress local non-rigid shapes from local pose information, producing folds and wrinkles. Our new shape representation, *Surface Codec of Articulated Local Elements*, or SCALE, demonstrates state-of-the-art performance on the challenging task of modeling the per-subject pose-dependent shape of clothed humans, setting a new baseline for modeling topologically varying high-fidelity surface geometry with explicit shape inference. See Fig. [1.](#page-0-0)
18
+
19
+ In summary, our contributions are: (1) an extension of surface element representations to non-rigid articulated object modeling; (2) a revised local elements model that generates local geometry from local shape signals instead of a global shape vector; (3) an explicit shape representation for clothed human shape modeling that is robust to varying topology, produces high-visual-fidelity shapes, is easily controllable by pose parameters, and achieves fast inference; and (4) a novel approach for modeling humans in clothing that does not require registered training data and generalizes to various garment types of different topology, addressing the missing pieces from existing clothed human models. We also show how neural rendering is used together with our point-based representation to produce high-quality rendered results. The code is available for research purposes at <https://qianlim.github.io/SCALE>.
20
+
21
+ # Method
22
+
23
+ To encode our UV positional map of resolution $32 \times 32$ into local features, we use a standard UNet [65] as illustrated in Fig. A.1(a). It consists of five [Conv2d, BatchNorm, LeakyReLU(0.2)] blocks (red arrows), followed by five [ReLU, ConvTranspose2d, BatchNorm] blocks (blue arrows). The final layer does not apply BatchNorm.
24
+
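The encoder/decoder symmetry above can be checked with a small resolution trace. This is only a shape-bookkeeping sketch, assuming each encoder block halves the spatial resolution with a stride-2 convolution and each decoder block doubles it (standard UNet behavior; channel widths are not specified in the text and are omitted here):

```python
def unet_resolutions(res=32, n_blocks=5):
    """Trace spatial resolutions through the UV-encoder UNet described above.

    Assumes stride-2 downsampling per [Conv2d, BatchNorm, LeakyReLU] block
    and 2x upsampling per [ReLU, ConvTranspose2d, BatchNorm] block.
    """
    down = [res]
    for _ in range(n_blocks):      # five encoder blocks: 32 -> 1
        down.append(down[-1] // 2)
    up = [down[-1]]
    for _ in range(n_blocks):      # five decoder blocks: 1 -> 32
        up.append(up[-1] * 2)
    return down, up

down, up = unet_resolutions()
```

With a $32 \times 32$ input, five stride-2 blocks reach a $1 \times 1$ bottleneck, and the five transposed-convolution blocks restore the $32 \times 32$ grid of per-pixel local features.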
25
+ To deform the local elements, we use an 8-layer MLP with a skip connection from the input to the 4th layer as in DeepSDF [53], see Fig. A.1(b). From the 6th layer, the network branches out three heads with the same architecture that predicts residuals from the basis point locations, normals and colors respectively. Batch normalization and the SoftPlus nonlinearity with $\beta=1$ are applied for all but the last layer in the decoder. The color prediction branch finishes with a Sigmoid activation to squeeze the predicted RGB values between 0 and 1. The predicted normals are normalized to unit length.
26
+
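The decoder architecture above can be sketched at the shape level as follows. This is an illustrative NumPy mock-up, not the released implementation: the hidden width (256), random weights, and exact skip index are our assumptions, and BatchNorm is omitted for brevity.

```python
import numpy as np

def softplus(x, beta=1.0):
    # numerically stable SoftPlus with beta=1, as used in the decoder
    return np.logaddexp(0.0, beta * x) / beta

def init(shapes, rng):
    return [(rng.standard_normal(s) * 0.1, np.zeros(s[1])) for s in shapes]

def decode(z, hidden=256, seed=0):
    """Sketch of the patch-deformation MLP described above: a shared trunk
    with the input re-injected at the 4th layer (DeepSDF-style skip), then
    three two-layer heads (residuals, normals, colors) branching after the
    6th layer."""
    rng = np.random.default_rng(seed)
    d, h = z.shape[-1], hidden
    trunk = init([(d, h), (h, h), (h, h), (h, h), (h + d, h), (h, h)], rng)
    x = z
    for i, (W, b) in enumerate(trunk):
        if i == 4:
            x = np.concatenate([x, z], axis=-1)  # skip connection
        x = softplus(x @ W + b)
    out = {}
    for name in ("residual", "normal", "color"):
        (W7, b7), (W8, b8) = init([(h, h), (h, 3)], rng)
        out[name] = softplus(x @ W7 + b7) @ W8 + b8  # no activation on last layer
    out["color"] = 1.0 / (1.0 + np.exp(-out["color"]))           # Sigmoid to (0, 1)
    out["normal"] /= np.linalg.norm(out["normal"], axis=-1, keepdims=True)
    return out
```

The head-specific post-processing mirrors the text: colors are squashed by a Sigmoid and predicted normals are normalized to unit length, while residuals are left unconstrained.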
27
+ <span id="page-8-0"></span>![](_page_8_Figure_5.jpeg)
28
+
29
+ Figure A.1: A visualization of our network architectures. (a) The UNet for our UV pose feature encoder. (b) The MLP for patch deformations. The numbers denote the dimensions of the network input or the layer outputs.
30
+
31
+ We train SCALE with the Adam [36] optimizer with a learning rate of 3.0e-4 and a batch size of 16, for 800 epochs. As the nearest-neighbor correspondences to the ground truth are unreliable in the early stage of training, we add $\mathcal{L}_n$ and $\mathcal{L}_c$ only once $\mathcal{L}_d$ roughly plateaus, after 250 epochs.
32
+
33
+ The residual, normal and color prediction modules are trained jointly. To balance the loss terms, the weights are set to $\lambda_d=2e4, \lambda_r=2e3, \lambda_c=\lambda_n=0$ at the beginning of the training, and $\lambda_c=\lambda_n=0.1$ from the 200th epoch when the point locations are roughly converged.
34
+
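The loss-weighting schedule above can be written down directly. A minimal sketch with the thresholds and values taken from the text (the function and key names are ours):

```python
def loss_weights(epoch):
    """Loss-term weights per the schedule described above: lambda_c and
    lambda_n are switched on at epoch 200, once point locations have
    roughly converged."""
    w = {"lambda_d": 2e4, "lambda_r": 2e3, "lambda_c": 0.0, "lambda_n": 0.0}
    if epoch >= 200:
        w["lambda_c"] = w["lambda_n"] = 0.1
    return w
```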
35
+ For the inference time comparison in the main paper Tab. 1, we report the wall-clock time using a desktop workstation with a Xeon CPU and Nvidia P5000 GPU.
36
+
37
+ We normalize the bodies by removing the body translation and global orientation from the data. The motion sequences are randomly split into train (70%) and test (30%) sets. For the clothing types in the main paper, the number of train / test data samples is: *blazerlong* 1334 / 563; *shortlong* 3480 / 976; and *skirt* 5113 / 2022.
38
+
39
+ Here we elaborate on the local coordinate system used in the main paper Eq. (5). As illustrated in Fig. A.2, for each body point $\mathbf{t}_k$ , we find the triangle where $\mathbf{t}_k$ sits on the SMPL [44] body mesh. We take the first two edges $\vec{e}_{k1}$ , $\vec{e}_{k2}$ of the triangle, as well as the normal vector of the triangle plane $\vec{e}_{k3} = \vec{e}_{k1} \times \vec{e}_{k2}$ , as three axes of the local coordinate frame. Note that $\vec{e}_{k1}$ , $\vec{e}_{k2}$ , $\vec{e}_{k3}$ are unit-length column vectors. The transformation associated with $\mathbf{t}_k$ is then defined as: $\mathbf{T}_k = [\vec{e}_{k1}, \vec{e}_{k2}, \vec{e}_{k3}]$ . The residual predictions $\mathbf{r}_k$ from the network are relative to the local coordinate system, and are transformed by $\mathbf{T}_k$ to the world coordinate according to the main paper Eq. (5).
40
+
41
+ <span id="page-8-1"></span>![](_page_8_Picture_15.jpeg)
42
+
43
+ Figure A.2: An illustration of the local coordinate system defined on a body point $\mathbf{t}_k$ . We take the triangle it lies on within the SMPL body mesh (in grey), and build the local coordinate frame using the edges and surface normal of the triangle.
44
+
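The local frame construction described above is a few lines of linear algebra. A sketch, assuming the two triangle edges are normalized to unit length as stated in the text (the vertex-argument convention is ours):

```python
import numpy as np

def local_frame(v0, v1, v2):
    """Build T_k = [e_k1, e_k2, e_k3] from a triangle's vertices: the first
    two (unit-length) edges plus the unit normal of the triangle plane."""
    e1 = (v1 - v0) / np.linalg.norm(v1 - v0)
    e2 = (v2 - v1) / np.linalg.norm(v2 - v1)
    e3 = np.cross(e1, e2)
    e3 = e3 / np.linalg.norm(e3)            # unit normal of the triangle plane
    return np.stack([e1, e2, e3], axis=1)   # axes as columns

def to_world(T_k, t_k, r_k):
    # main paper Eq. (5): local residual r_k mapped to world coordinates
    return t_k + T_k @ r_k
```

For an axis-aligned triangle in the xy-plane, `local_frame` returns the identity, so local residuals pass through unchanged.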
45
+ During inference, SCALE allows us to sample arbitrarily dense points to obtain high-resolution point sets. As the UV positional map provided by the SMPL model [44] has a higher density around the head region and lower density around the legs, we mitigate the problem of unbalanced point density by resampling points proportional to the area of each local element. Note that we approximate the area of patches by summing the areas of triangulated local grid points. See Sec. B.2 for qualitative results of the adaptive sampling.
+
+ ![](_page_9_Figure_0.jpeg)
+
+ Figure A.3: Pipeline of neural rendering with SMPLpix [59].
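The area-proportional resampling above amounts to allocating a per-patch point budget. A minimal sketch, assuming per-patch areas are already computed; the largest-remainder rounding scheme is our own choice:

```python
import numpy as np

def samples_per_patch(patch_areas, total_points):
    """Allocate `total_points` samples across local elements proportionally
    to their approximate surface areas, so dense UV regions (e.g. the head)
    are not oversampled relative to sparse ones (e.g. the legs)."""
    areas = np.asarray(patch_areas, dtype=float)
    quota = areas / areas.sum() * total_points
    counts = np.floor(quota).astype(int)
    # hand the leftover points to the patches with the largest fractional parts
    remainder = total_points - counts.sum()
    order = np.argsort(quota - counts)[::-1]
    counts[order[:remainder]] += 1
    return counts
```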
52
+
53
+ To elaborate on the neural rendering of SCALE shown in the main paper Sec. 4.5: we use the SMPLpix [59] model for neural rendering. It takes as input an RGB-D projection of the colored point set generated by SCALE, and outputs a hole-filled, realistic image of the predicted clothed human.
54
+
55
+ **RGB-D projections.** Given the colored point set, $\mathbf{X}^+ = [\mathbf{X}, \mathbf{X}^c] \in \mathbb{R}^{KM \times 6}$ , where $\mathbf{X}^c \in \mathbb{R}^{KM \times 3}$ are the RGB values of the points $\mathbf{X}$ , we perform 2D projections using a pre-defined set of camera parameters $(\mathbf{K}, \mathbf{R}, \mathbf{t})$ . The result is a set of RGB-D images, $I_{\mathbf{x}} \in \mathbb{R}^{W \times H \times 4}$ . In the case where two points are projected to the same pixel, we take the value of the point that has smaller depth. These images are the inputs to the SMPLpix model.
56
+
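The projection step above is a standard point splat with a depth test. A simplified sketch, assuming a pinhole camera and nearest-pixel truncation (one point per pixel; the zero background and channel layout are our conventions):

```python
import numpy as np

def project_rgbd(points, colors, K, R, t, W, H):
    """Project colored 3D points into an (H, W, 4) RGB-D image; when several
    points land on the same pixel, the one with the smaller depth wins."""
    img = np.zeros((H, W, 4))
    zbuf = np.full((H, W), np.inf)
    cam = (R @ points.T).T + t                 # world -> camera coordinates
    for p, c in zip(cam, colors):
        if p[2] <= 0:                          # behind the camera
            continue
        uvw = K @ p
        u, v = int(uvw[0] / uvw[2]), int(uvw[1] / uvw[2])
        if 0 <= u < W and 0 <= v < H and p[2] < zbuf[v, u]:
            zbuf[v, u] = p[2]
            img[v, u, :3] = c                  # RGB from the point color
            img[v, u, 3] = p[2]                # D channel: camera-space depth
    return img
```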
57
+ **Data and Training.** We train SMPLpix using the same data and train / test split as what we use to train SCALE. Each (input, output) image pair for SMPLpix is acquired by performing the above-mentioned RGB-D projections to the SCALE predicted point set and the ground truth point set (of a higher density), respectively.
58
+
59
+ Note that the distorted fingers or toes in some of our results stem from the artifacts present in the ground truth point clouds. Similarly, the holes in the ground truth scan data lead to occasional black color predictions on these regions. In addition, as the synthetic skirt data does not have ground truth texture, we use the point normals as the RGB values for the visualization and neural rendering.
60
+
61
+ The SMPLpix network is trained with the Adam optimizer [36] with a learning rate of 1e-4, batch size 10, for 200 epochs, using the perceptual VGG-loss [30].
62
+
63
+ **Discussion.** The neural point-based rendering circumvents the meshing step in traditional graphics pipelines. Our SMPLpix implementation takes on average 42ms to generate a $512 \times 512$ image without any hardware-specific optimization. Since SCALE takes less than 9ms to generate a set of 13K points, the full SCALE+SMPLpix pipeline remains highly efficient and shows promise for future work on image-based clothed human synthesis with intuitive pose control. Animations of the neural rendered SCALE results are provided in the supplemental video<sup>2</sup>.
2104.12133/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2020-10-19T14:56:44.733Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.80 Safari/537.36" version="13.8.0" etag="hqXO9KhqJyGx8RY_bckJ" type="device"><diagram id="NJpVMQBVu1OyRF7QyWBd">5Vxbc6M2FP41fkwGkLn4MY433Ye222na6fZRBtlWVyBWQOL011cSwgZJbsgGfIk3M1k4uiB937lIR5AJuE+3PzGYb36hCSITz0m2E7CYeF7gRvy3ELzUgqnj1oI1w0ktagke8b9ICR0lrXCCik7FklJS4rwrjGmWobjsyCBj9LlbbUVJ96k5XCND8BhDYkr/wkm5qaWR7+zlnxFeb5onu44qSWFTWQmKDUzoc0sEPk3APaO0rK/S7T0iArsGl7rdw4HS3cAYyso+Dby6wRMklZqbGlf50kyW0SpLkKjvTMD8eYNL9JjDWJQ+c3a5bFOmhN+5/HLNYIL5s+8poUy2Bw8Pi1kQtMoWmHFSMM1ED6jg45wXJaPfUNMooxnvfa6GhliJtgen5+5A48qGaIpK9sKrqAaRglnpmTurb5/3pIGZqrJpEQZcXymLUpT1ruc9lvxCwWmHFgwMLeUluBQziZxh0NnZk4Kn0d8WPG5ggYfb6/vhmVrgCYhQBtwBKfheCWOYr2hW3qxgigkf7d1E6HUA01wiAcBUAoQy0RRmhVEm69/bWkGGITlcX/R2UyCGV0IkC+RICumUxDj8fLsr4lyUN5DgdVaX/VMVJV697GfBr9bqf9LMSjqqxlz2FcFM/jPb/kwZSuupkFr+UHfT7XrZCLhXrNJW5eUQgxDeW9bnEGHRGnINqCHjXrfgBo7KSpRym89xEeNszW8QwapSwdVedCGEuCpSmkj40lx2ibMYJzip5Lgq8YvAJZ+2qF02j0bSoa4zKJ5C8PcK3vKrP0U5ynAqn/0GmFIsGw0O0xMfDEzrWX+vMNdNJ6Pc4VVicGiLGLdqKN0hnyohMI1pM9+6Mi5wg4KcJs5lQ/ELyvGkHD3aAM9hKAUOi/pRsCoFTphVEq+GM5y9ARmGcoY2iLsqJugbHCE5mCdKqpwDgSRkUk8cVBTiNsaECA3e044qYYfVGkNRLROw1Z6PCyompv9pG6O8RFWtoHJYNI4hikULZezZsshboz8AAxfjvew9HnzF59EKjKsoRnFsRD9esoz8qT+Uk/e0GGjz8lOLl58N4OT9g04+wU9WN/9mD1rkMDvXgPGK1z8wZntoIThDNw0/olAEb7/9DLvdRY74OTyWnW43YUWFi0tx78ptn7Gb1dznGbk7bPF1B1xgbWZagGCWitKwR/CWiY+iZGrzlpG3BHKDodzGgqDV/vGDOs+gp++MBvCdwfi+8xJX2mOspM/U5b1pYXICk0QuN8rQZpKzIAQwGMcGrdtUmxGCAYwwNIzwV/TMBfdQuGoNS2F/XcAajxTzuSKOzVwggWNI7lRBipNENOdrbK7ncCm7EkzklHtxOXB/PvEXoq+qpLUt1FxwjXhUz3aHgNnvwgxMlENbLmAAkKPXMyXFBubiMq4YeZkzGH9D5ev6ulducafAc259WaRWA2Bx48qEiqHE04X4GUaJQfD6KhyMtAifXR+8ljhtQ3eIMN3k0K4JXpsLtuE7hAd2XQu+PUJ+mDhhaIb8BxiXxcHAekqHPgRPsy5PM5MmW6Acwoe7tpOE12lCMFjytbtB0x1bVynKrocqN/KPFm9d29HE61zBIHKWkcnVl6rkm2N0LUx5vsmU9ZRkCKZspyTndYi0O9s8fIhk0+NwgBM215Ze1NBBWXInTnwnu2PF9n6G6+mD2ugu/sApEqmceoH/O00hV7052uLyq6ourv9uXS+
2CnR589LcZHwaX9s3uzbiZt9I3jWt6oGjRDt4LmjFYtRVhhKyNWq4CXrz1SLEtxDSyBgifGnx1B2GjSX1hN9onW5q1GE6u42CrmvzNK7rSamGe7qNvjx90Rx5t1pXNRZGV1JzdnPvp0y2fMt7TO3wiqyH2hkusb31Hu583ANRF2Fbgiu0WO8AxmturY9ivM4PGK9zLcYbagbnaDGsr+W6+osXrtbRgHbbI3vw8ew28DXPGPWy22AAu+2RTvhAQbfJnnTsNjo3u9Vyd54T6oGyd8z19Zg7muU2C+QriQAXqUkuMJZc56hJthzRx48BbtfoXXNfCnxTU6L3x4AG3iuxXABMy30DXyeKAfo7rL3tNtSDyWh7rj7vyH6gtcRl6pF+BPXDeqSnAQbUox5pslHPYGajH8G4+il4MDW9vS3PNkAW8n9e41P54hVUOt7khj8j8oREDtdMF//GUIwSlJVnmTDu4VGGYFNLmgInNNgcK/t/jBeLPv6bYa4Dbrt7cN80yGC3eB6cRFv27JJJ7PEy9DFIDM08yogk2lJXl0xijxfCjkGi63jHZLFHQuzEZ3Sz2W2oQbTf1x/hoA7YMj2XrOgnCzmeTmNgoXE8XQeHX8a5TCJPFnZ0IgPnuETaMjiXTOTJQo9OZDQ9LpHn/5mxvtmxJSpHizyHPzO+TD0/k82O7VP6EZX8CN8RXkfY0VicgmOy+NESD2ey3fECM3k0Iotm5uELSSb1JzGFge9FfxPD901dqMf7KIbf7v/qS50C3//pHPDpPw==</diagram></mxfile>
2104.12133/main_diagram/main_diagram.pdf ADDED
Binary file (40.4 kB). View file
 
2104.12133/paper_text/intro_method.md ADDED
@@ -0,0 +1,58 @@
1
+ # Introduction
2
+
3
+ Despite the importance of the precedent in common law, its operationalization remains shrouded in philosophical debate centred around *how* the precedent actually forms the binding law. Jurisprudentially, we can think of this as searching for the ratio decidendi in the judgement, i.e. separating the ratio decidendi from the obiter dicta, or binding law from merely circumstantial statements. It is the nature of ratio that distinguishes Halsbury's view from Goodhart's.
4
+
5
+ The case argument contains the judge's explanation of why the case is decided the way it is. It incorporates knowledge of the precedent, facts of the case and any new reasoning the judge might develop for the case itself. We consider the intuitive position that a legal test is formulated by the argument that the judge put forward when deciding the case.
6
+
7
+ A legal test is by its nature part of the ratio and, thus, would be binding on all subsequent cases. This is the position endorsed by Lord @halsbury. Under this conception of the ratio, it is the arguments that matter, becoming the law; the facts of the case are of secondary importance. If a judge acts as Halsbury suggests, they should extract the logic of the implicit legal test of the precedent and attempt to largely ignore the specific facts of the case. Halsbury's view remains the conventional view of the precedent to this day [@lamond_2005].
8
+
9
+ In contrast, @goodhart observes that many cases do not contain extensive reasoning, or any reasoning at all; judges seem to decide the outcome without these. Therefore, he claims that the facts of the case together with its outcome must form the ratio; otherwise, a hypothetical new case with the same facts as any given precedent could lead to a different outcome. @duxbury_2008 observes that judges, when in disagreement with the precedent, concentrate on the facts of a previous case more than one would expect if Halsbury's hypothesis were fully correct. Halsbury would predict that they should talk about the facts of previous cases as little as possible, and seek the most direct route to ratio in the form of argument, but they evidently do not. A potential explanation is that, when disagreement arises, it is easier for judges to claim that the facts are substantially different, than to challenge the logic of the precedent, i.e. to overrule that case. Overruling a previous judgement is a rare and significant legal event [@how_judges_overule; @overruling] because it threatens the stability of the legal system. By concentrating on facts rather than running the risk of overruling, the judge can avoid this problem, including the threat of overruling her own previous judgement.
10
+
11
+ In support of this view, inspection of the argumentative part of the judgement reveals judges do not usually formulate legal tests of the kind Halsbury implies [@lamond_2005]. Neither do judges usually search the precedent for such legal tests [@alexander_sherwin_2008]. Goodhart's position suggests that the precedent operates less as an enactment of rules, but more as reasoning by analogy; hence it is the good alignment between the facts of the two cases that leads to consistent outcomes.
12
+
13
+ # Method
14
+
15
+ We denote the set of cases as $\mathcal{C}$, writing each of its element as $c$. The set of cases that form the precedent for case $c$ are denoted $\mathcal{P}_c \subset \mathcal{C}$. We will consider three main random variables in this work. First, we consider $O$, a random variable that ranges over a binary outcome space $\mathcal{O}= \{0, 1\}^K$, where $K$ is the number of Articles. An instance $o \in \mathcal{O}$ tells us which Articles have been violated. Since $o$ is a vector of binary outcomes for all Articles, we can index it as $o_k$ to get the outcome of a specific $k^{\text{th}}$ Article and we analogously index the random variable $O_k$. We will denote $o_{c}$ the outcome of a specific case $c$.[^2] Next, we consider $F$, a random variable that ranges over the space of facts. We denote the space of all facts as $\mathcal{F}= \Sigma^*$, where $\Sigma$ is a set of sub-word units and $\Sigma^*$ is its Kleene closure. We denote an instance of $F$ as $f$. We will further denote the facts of a specific case $c$ as $f_c$. Finally, we consider $A$, a random variable that ranges over the space of Arguments. Analogously to facts, the space of all Arguments is $\mathcal{A}= \Sigma^*$. An element of $\mathcal{A}$ is denoted as $a$, which we again term $a_c$ when referring to a specific case.
16
+
17
+ In this work, we intend to measure the use of Halsbury's and Goodhart's views in practice, which we operationalise information-theoretically following the methodology proposed by @pimentel2019meaning. To test the hypothesis, we construct two collections of random variables, which we denote $H$ and $G$. We define an instance $h_c$ of random variable $H$ as the union of arguments and outcomes for all precedent cases of $c$, i.e. $\bigcup_{c' \in \mathcal{P}_c} \{ a_{c'}, o_{c'}\}$. We will denote the instance $h$ when referring to it in the abstract (without referring to a particular case). We analogously define instances of random variable $G$ as $g_c = \bigcup_{c' \in \mathcal{P}_c} \{ f_{c'}, o_{c'}\}$. While the set-theoretic notation may seem tedious, it encompasses the essence of the distinction between Halsbury's and Goodhart's view: Each view hypothesises a different group of random variables should contain more information about the outcome $O$ of a given case. In terms of mutual information, we are interested in comparing the following: $$\begin{align}
18
+ \mathrm{MI}(O; H \mid F),\quad \mathrm{MI}(O; G \mid F)
19
+ \end{align}$$ If ${\mathrm{MI}(O; H \mid F) > \mathrm{MI}(O; G \mid F)}$, then Halsbury's view should be more widely used in practice. Conversely if the opposite is true, i.e. ${\mathrm{MI}(O; G \mid F) > \mathrm{MI}(O; H \mid F)}$, then Goodhart's view should be the one more widely used.
20
+
21
+ ![Our formulation of Halsbury's and Goodhart's tests as a classification task. Current case facts are truncated to $512$ tokens. Outcome of the precedent is concatenated with either the precedent's facts or arguments, and both are jointly truncated at $512$ tokens. Finally, these are concatenated together and embedded in $768$ dimensions before being fed into the [Longformer]{.smallcaps}.](images/prediction_task_cr.png){#prediction_task width="\\columnwidth"}
22
+
23
+ The $\mathrm{MI}$ is calculated by subtracting the outcome entropy conditioned on the case facts and either $H$ or $G$ from the outcome entropy conditioned on the facts alone. Therefore, to compute the $\mathrm{MI}$ we need to compute Halsbury's and Goodhart's conditional entropies first: $$\begin{align}
+ \mathrm{H}(O \mid &\,H, F) \\
+ &=-\sum_{o, h, f} p(o, h, f) \log p(o \mid h, f) \nonumber
+ \\[5pt]
+ \mathrm{H}(O \mid &\,G, F) \\
+ &=-\sum_{o, g, f} p(o, g, f) \log p(o \mid g, f) \nonumber
+ \end{align}$$ as well as the entropy conditioned on the facts of the current case alone: $$\begin{align}
+ \mathrm{H}(O \mid F) &= -\sum_{o, f} p(o, f) \log p(o \mid f)
+ \label{entropy}
+ \end{align}$$ The conditional entropies above reflect the uncertainty (measured in nats)[^3] of an event, given the knowledge of another random variable. For instance, if $G$ completely determines $O$, then ${\mathrm{H}(O \mid G)}$ is $0$; there is no uncertainty left. Conversely, if the variables are independent, then ${\mathrm{H}(O) = \mathrm{H}(O \mid G)}$, where $\mathrm{H}(O)$ denotes the unconditional entropy of the outcomes $O$. We now note a common decomposition of mutual information that will help with the approximation: $$\begin{align}
+ \label{mutual_information_A}
+ \mathrm{MI}(O; H \mid F)
+ &= \mathrm{H}(O \mid F) - \mathrm{H}(O \mid H, F)
+ \\[5pt]
+ \label{mutual_information_F}
+ \mathrm{MI}(O; G \mid F)
+ &= \mathrm{H}(O \mid F) - \mathrm{H}(O \mid G, F)
+ \end{align}$$
51
+
52
+ In this work, we consider the conditional probabilities $p(o \mid \bullet)$ as the independent product of each Article's probability, i.e. $\prod_{k=1}^K p(o_k \mid \bullet)$. Information-theoretically, then, they are related through the following equation: $$\begin{equation}
53
+ \mathrm{H}(O \mid \bullet) = \sum_{k=1}^{K} \mathrm{H}(O_k \mid \bullet)
54
+ \end{equation}$$ Following @williams-etal-2020-predicting, we further calculate the uncertainty coefficient [@theil1970] of each of these mutual informations. These coefficients are easier to interpret, representing the percentage of uncertainty reduced by the knowledge of a random variable: $$\begin{align}
55
+ \mathrm{U}(O \mid H ; F) & = \frac{\mathrm{MI}(O;H \mid F)}{\mathrm{H}(O \mid F)}
56
+ \\[5pt]
57
+ \mathrm{U}(O \mid G ; F) & = \frac{\mathrm{MI}(O;G \mid F)}{\mathrm{H}(O \mid F)}
58
+ \end{align}$$
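In practice, the conditional entropies above are estimated as average negative log-likelihoods of model predictions, from which the MI and uncertainty coefficients follow. A toy sketch with made-up probabilities (not the paper's Longformer estimator):

```python
import math

def conditional_entropy(probs):
    """Monte-Carlo estimate of H(O | context) in nats: the mean negative
    log-probability a model assigns to the *true* outcome of each case."""
    return -sum(math.log(p) for p in probs) / len(probs)

def uncertainty_coefficient(p_facts, p_facts_plus):
    """U(O | X ; F) = MI(O; X | F) / H(O | F): the fraction of outcome
    uncertainty (given facts alone) removed by also conditioning on X,
    where X is H (precedent arguments) or G (precedent facts)."""
    h_f = conditional_entropy(p_facts)
    mi = h_f - conditional_entropy(p_facts_plus)
    return mi / h_f
```

Comparing `uncertainty_coefficient` for the $H$-conditioned and $G$-conditioned estimators then directly operationalizes the Halsbury-vs-Goodhart comparison.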
2106.13882/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2022-01-13T23:07:22.701Z" agent="5.0 (X11)" etag="09Lp4iksvs94-uHUagJg" version="16.2.4"><diagram id="ILZC3g9_S9t6SYFUEO0a" name="Page-1">7Vlbb5swFP41kbaHTsZcAo9NetOktpP6sL264AarBCPj3PbrdxxMAENCsyRLp7aKVHw4nGOf7+NckoE9ni5vBcniex7RZIBRtBzYVwOMLSvw4Z+SrAqJ7zmFYCJYpJUqwRP7TbUQaemMRTRvKErOE8mypjDkaUpD2ZARIfiiqfbCk6bXjEy0R1QJnkKS0JbaTxbJWEs9VFO/o2wSl65xeWdKNtqFII9JxBc1Z/b1wB4LzmVxNV2OaaKiVwamMHSz5e5mZ4Km8i0PPH63RfgwxQ/Z5fhXcB/OxKO80FbmJJnpE+vNylUZAppGlyqSsEp5CsJRLKcJrCy4zKXgr5vQwJlGgs/SiCqfCFYvPJUaVcuDdXvTegc0mtRDro9wS/mUSrEChUUVfleHOK4FvpQJmhDJ5k34iGbBZGNu4+EHZ7ATjDRjHV/b0Xz1PNQ0kfOZCKl+qh5t05DXNGSZhiQREypbhiDUZFVTy5RC/vYNl360ObgoLJarWkwr0Zope7AGf7JmJ9ie85esCUxDJ2JNy4/zD1hjd7AGqoFy7aw/BoUkXcou3ox5wkXFqxeWJIaIJGySwjIEvlCQj+ZUSAZZ/VLfmLIoUm5Gi5hJ+pSRUPlcQA3bxUXY4w4uKh90uZONJV2GJspusa6xFXewFRtkqBOzBt3+yDifyGhkHAMZG50XGW8LMtZHR8Zyz4zMcAsyTx8NmcB9Z9nMP2p34rRi2MRtgO2bGwR//2PfsqV5PLxvMRugI/UtQ7yz2+3Vt80ZyTyHe6D+sE8fH6TvoD595zD9nngGrneQvtsbT/8gfa9P3+RpU/80fW/Qn4/ymGTqEh5IViNBwlcqt+TzWqKq0pJavSQsu+tMYXif1LRH4ve7X5a+xG+fLPOX3xq1ivLAHa/r73D0g0DtlIyng+EVSIuPvsfSXJIUgg23PlgRtxBqgrnpt2pgWnYHmv7JwHS3gGld3HyZf333CPU0CMdCzjcaYxycuTG2ts0s83cP2ZGhGVpGtTn3zGJ1DS0GGs2i0lOBOuoMoPSzPIuKY0zCeCborXr0yq8El+lk7dJxDWzhfbGeiUWVtRzqYBhrZxHJ46rabXnDOqnD4QBMKhT8vdr0PcbTVi0MWlA75W8fdaw3c+zxwX7DHHQWsE8DasQEDdeF3b6CsUXt0aQVQkGw35z29wTYjFL1d932/mk39C4bz7y4Rt/cIyXZ1hdD7TevK8la3o5x+aC44/4u9BEYjFHCFxeFEkZyldGOlnTB4Aywcdi3CpTgz+SZJYr2H69HHZo/qXV1OsFxWlRYVr/MFoNg9QO3ff0H</diagram></mxfile>
2106.13882/main_diagram/main_diagram.pdf ADDED
Binary file (16.1 kB). View file
 
2106.13882/paper_text/intro_method.md ADDED
@@ -0,0 +1,29 @@
1
+ # Introduction
2
+
3
+ It is well known that Bayesian-optimal mechanisms for revenue maximization may lead to inefficient outcomes. A seller may rationally refuse to sell to buyers unwilling to pay a high price, even if there is an acceptable lower price at which the seller could still make a substantial profit. But what if a buyer is able to *prove* to the seller that they are unwilling to pay the high price? Upon receiving such a proof, the only rational course of action is for the seller to offer a lower price. As a result, both the buyer and the seller will see a welfare improvement.
4
+
5
+ However, the possibility of such communication will undoubtedly give rise to secondary market effects. Will the seller infer that a buyer has a higher valuation simply because they do not choose to disclose such a proof, and if so, should the seller raise their price even higher? And if there are multiple buyers competing for a single item, how will the disclosures of one buyer affect the ultimate welfare of another? To realistically discuss the overall welfare implications, it is thus necessary to investigate not just specific possible interactions, but the equilibria of the game played between the seller and the buyer(s).
6
+
7
+ This inquiry is inspired by the realm of online commerce. The increasing accessibility and quality of buyer data have made the personalized pricing of goods an ever more attractive prospect, and have served as the motivation for previous work studying the impact of information signalling on buyer welfare in auctions [@bergemann15; @ali20voluntary]. Motivated by the prospect of a future in which consumers are able to exert precise control over their online data (and a perhaps more immediate future in which sellers implement personalized pricing), we aim to answer the question,
8
+
9
+ > *"Can consumers benefit from the ability to share their private data, and if so, how?"*
10
+
11
+ In recent work, @ali20voluntary initiate the study of how such *voluntary disclosure* capabilities can improve welfare, considering a handful of special cases. In their model, a prospective buyer is allowed to credibly disclose to the seller a set of possible types containing their true type; the seller then sets prices based on this information. They report overwhelmingly positive news for consumers. When there is one buyer, one seller, and one good, they demonstrate that there always exists a disclosure strategy for the buyer such that
12
+
13
+ - the buyer has no incentive to deviate from the strategy after learning their type (technically, the strategies are part of a sequentially-rational Bayes-Nash equilibrium),
14
+
15
+ - the good is always sold,
16
+
17
+ - the seller is weakly better off than they would be without disclosure, and
18
+
19
+ - every interim buyer type is weakly better off than they would be without disclosure.
20
+
21
+ For a parameterized family of canonical probability distributions over the buyer's value for the good (including the uniform distribution on $[0, 1]$), they show that it is possible to strictly increase ex ante buyer utility as well. Furthermore, there is an intuitive characterization of the buyer-optimal equilibrium, determined by the limit of a greedy algorithm that iteratively constructs better and better equilibria by having all buyer types who are not sold the good declare to the seller that they are of such a type. In the end, we are left with a *partitional equilibrium*, in which there is some partition $\mathcal{P}$ of the types and every buyer reveals the set in $\mathcal{P}$ to which their type belongs.
22
+
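The greedy construction above can be sketched concretely. This is our own specialization to a *discrete* prior (Ali et al. work with continuous type distributions): starting from the trivial partition, each cell whose revenue-maximizing price leaves some types unsold is split, with the unsold types forming a new cell; iterating reaches a partitional equilibrium.

```python
def best_price(cell, prob):
    """Revenue-maximizing posted price when the buyer's type is known to lie
    in `cell`, under prior `prob` (a dict mapping value -> probability mass).
    Candidate prices are the values in the cell."""
    total = sum(prob[v] for v in cell)
    return max(cell, key=lambda p: p * sum(prob[v] for v in cell if v >= p) / total)

def greedy_partition(prob):
    """Iteratively refine the disclosure partition: any types priced out of
    their cell declare themselves as a separate cell, until no cell's
    optimal price excludes a type (so the good is always sold)."""
    partition = [set(prob)]
    changed = True
    while changed:
        changed = False
        refined = []
        for cell in partition:
            p = best_price(cell, prob)
            unsold = {v for v in cell if v < p}
            if unsold:
                refined += [cell - unsold, unsold]
                changed = True
            else:
                refined.append(cell)
        partition = refined
    return partition
```

For a uniform prior over the values $\{1, 2, 3\}$, the seller's monopoly price is $2$, so type $1$ splits off and the process stabilizes at the partition $\{\{1\}, \{2, 3\}\}$, under which every type buys.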
23
+ However, the settings of these results differ markedly from most online commerce, and it is in the direction of these differences we depart.
24
+
25
+ In [3](#secMultipleUniform01Buyers){reference-type="ref+Label" reference="secMultipleUniform01Buyers"} we investigate the effects of disclosure when there are *two* i.i.d., uniform $[0, 1]$ buyers instead of one. Surprisingly, we find that the natural, buyer-symmetric analogues of the optimal one-buyer equilibria from @ali20voluntary no longer yield buyer welfare improvements. Perhaps even more surprisingly, it is possible to improve the expected buyer surplus (the sum of the buyers' utilities) by having only one buyer disclose information about their type (though this harms the utility of the other buyer). As for the question of social efficiency, with a few additional assumptions in the spirit of [@ali20voluntary], we show an extreme impossibility result ([6](#thmNoTwoBuyerEfficiency){reference-type="ref+Label" reference="thmNoTwoBuyerEfficiency"}): in any equilibrium where the good is always allocated to the highest bidder, both buyers must always receive utility zero. Since the model assumes the seller has no cost to sell the goods, this shows that social efficiency is incompatible with maximizing buyer welfare, which lies in stark contrast to the one-buyer, one-good case.
26
+
27
+ In [4](#secGeneralImpossibility){reference-type="ref+Label" reference="secGeneralImpossibility"} we further generalize these impossibility results to settings with richer disclosure capabilities and arbitrary priors. We show that, with either multiple buyers *or* multiple goods, maximizing buyer surplus may require the seller to sometimes *not* sell all of the goods ([7](#thmGeneralImpossibility){reference-type="ref+Label" reference="thmGeneralImpossibility"}). This holds even with the restrictions that buyer valuations are additive and independent across goods, as well as independent across buyers.
28
+
29
+ Finally, in [5](#secComplexity){reference-type="ref+Label" reference="secComplexity"} we study the problem of maximizing consumer welfare through disclosure schemes from a computational perspective. We model this problem by approximating arbitrary priors by discrete probability distributions with finite support, which are encoded as part of the input. We show that, while it is possible to efficiently compute the buyer-optimal equilibrium in the restricted setting from [@ali20voluntary] where disclosure messages must be "connected" ([8](#thmDP){reference-type="ref+Label" reference="thmDP"}), the more general problem is (weakly) NP-hard ([9](#thmHardness){reference-type="ref+Label" reference="thmHardness"}), and is inapproximable by connected equilibria ([10](#proNoApprox){reference-type="ref+Label" reference="proNoApprox"}).
2110.11236/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="Electron" modified="2021-09-25T09:59:49.909Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/14.6.13 Chrome/89.0.4389.128 Electron/12.0.7 Safari/537.36" etag="IaIMPx4f99qFhIgBzqVl" version="14.6.13" type="device"><diagram id="c07lX0vNQMB8iy3QyAYI" name="Page-1">1VrZcqM4FP0aP8YFSIB5jB07XbOmkulK99OUAorNNCAPyLHdXz8CxCKxx+BkXCnHuhIS3HvOXSRmYOWf7kO03/1OHOzNNMU5zcDdTNOApbHvWHDmAkVJBdvQdVKRWgie3J+YC7NhB9fBkTCQEuJRdy8KbRIE2KaCDIUhOYrDXoknrrpHW76iUgiebOThyrBn16E7/hT5Y8QdX7C73fGlWZeV9vgoG83niHbIIcfSYmA9A6uQEJr+8k8r7MXKyxSTLrFp6M3vLMQB7XNBiJ+t6Hn74xfP+7pcfyXqn3fWDeD2eUPegT8yv1t6znQQkkPg4HgWZQaWx51L8dMe2XHvkRmdyXbU91hLZT8jGpIfeEU8EiZXg9VqtVlB1vPqel5JvtmsjdWKyavPwR/tDYcUn0oi/lz3mPiYhmc2JOtVF9weHGaawdvHwmhaZrNdyV4q5ELEgbLNJy9UyX5wbQ7RLKgoEjsMW7wZkID9W4q6JSHdkS0JkPcbIXuu0X8wpWfODHSgRNQ3Prn0W+n393iquc5bdyc+c9I4Z42APeC3cqN0VdwsLkta2XWpbTMeaG22i8ghtHGbenTOZhRuMW0buEgHxsprxUKIPUTdN5G445tVryEMZH/2TF8Hf8/MJZ2Zd6moAoAO7qBon/qwV/cUY2L5SgJaoszasDaKUkcyqaeXiYbQyxLpBTItlOgFa9g1HbmMVnIVPFoX0om41so0B0W7ZEWVNx4QpTgMEommwIRizm0cqOLbZqBKJRvXyxa1D+FbPkXF8LqxhGvrMiaOTjB+6QNx2b0UKAJZbMmctCXBI/UF/DIJIfl9XAAa8yNBo5RAo7aCBnsvCRxigrp2AokeOIqY9miGJB5dEhnHkjIUa8OdTyMGJ4eWZujzhW4uDAhMQzMs0VsZ1lyFDH+6oQETKKY4fUqQCupqFlElL9gTv0zl6Fwato8HRK1PY9QuVPAhnXJcdiwaAluUBDbaENLYzCwXx92JYSW4TROq8szv04QqqzvD7nZDAjMv9klV1ecpoTosJXx/2IFZOXf9BLApPsnQuXJ8yjRSYeCplYFDeSfEEsnJK4q1NkGdk5d6ytiJ58jQqzQCYgCHWXn9uTgM30PZi1ma6DXtVPV6m8SfkQszCD+Ml5fZCDawx5mgMGvOwyVC5fLrRDvNrO57XJcpeitTxtr1KFPj8h2NC+3VyZQMmpNHMFWX4CBHsP65ri7VaovrxkK1JtOc3uMKobHAVAGj7+W+BkyNEVIvwNq1Si5giAgBPRHSXA01oxFIuIZaP1yPVhvVxZYPzuEn3RvsvX07fWUvFd1QDl1TJ+XtO42jbeM3OBtlmLMZFwXdKeC1UKBacG4YSv4BoutR+2GiG12VxGhix5It90Fbkvk50cCd7ElDXO+dg+lDnJxO9fQ93SFuNPy0HzJeqTCV4LDZCGXphdl174J1cjhA2VlMtz9kQ/Ph38dbRw9ef320F4Z/j+BNdXvohVBK/JvDvoICVkBSqYb13G18XmAzBWNmj2V2vHDLO3zXcVJ84Mj9iV74fg5YcviyefXlTL+L52KQiDgAuP03yHe9WDFfsPeG44lb/ESjsYdsEqm6aA2tWvqaNaWvnDq+p/StNY9WMQ8l+xuHHIOPtY7IQp6w9DVZ99nOoLc01E6TGdc0GaiaDPt7EiLvf2yy7iPhI
SxbSCyrebHmqiare/1CMlW4I/7LgS3cdSQlaPQv18csRit/4CP7fiQ+Ct6V6UivOY3r86A6t4CVfxaCbfL4VLKNasxVtWoeVZ/rExnI6DZQ6RiaY1vILodZ6NNv07TBuDNnKZlSr6FZJhs7061U2b03DgGUZppu47BWsdXTVnuHgljLm0+cJGXZdJ5d85tsK7ZGcSi6MVdEDw8VY25Wz9tGcvKsWbzxmtq8eG8YrP8D</diagram></mxfile>
2110.11236/main_diagram/main_diagram.pdf ADDED
Binary file (27.6 kB). View file
 
2110.11236/paper_text/intro_method.md ADDED
@@ -0,0 +1,38 @@
1
+ # Method
2
+
3
+ Fig. [7](#fig:mechanism){reference-type="ref" reference="fig:mechanism"} shows the insides of the event detection mechanism implemented at each level of the model's hierarchy. Given the current timestep $\tau+1$, the mechanism at level $n$ is triggered if the bottom-up communication has not been blocked by the level below, $n-1$. Upon receiving new bottom-up information $x^n_{\tau+1}$, the model proceeds to evaluate four key variables. First, the latest posterior is assigned to be the new prior under the *static* assumption, $p_{st} = p(s^n_{\tau+1} | x^n_\tau, d^n_{\tau}, c^{n}_{\tau}) \leftarrow q_\phi(s^n_{\tau} | x^n_\tau, d^n_{\tau}, c^{n}_{\tau})$. For clarity, we use the deterministic variables $d^n_{\tau}$ and $c^n_{\tau}$ to represent $s^n_{<\tau}$ and $s^{>n}_{\tau}$, respectively. Second, the new posterior under the *static* assumption is computed using the deterministic variables of the latest block, $q_{st} = q_\phi(s^n_{\tau+1} | x^n_{\tau+1}, d^n_{\tau}, c^n_{\tau})$. As explained in the main body, we then compute the KL-divergence between the two states, $D_{st} = D_{KL}(q_{st} || p_{st})$. Third, we trigger the transition model to predict the next temporal context, $d^n_{\tau+1}$, in order to produce the prior under the *change* assumption of the model, $p_{ch} = p_\theta(s^n_{\tau+1} | d^n_{\tau+1}, c^{n}_{\tau})$. Lastly, the posterior under the *change* assumption can also be computed using the new bottom-up encoding, $q_{ch} = q_\phi(s^n_{\tau+1} | x^n_{\tau+1}, d^n_{\tau+1}, c^{n}_{\tau})$. As with the static assumption, we calculate the KL-divergence between the prior and posterior states, $D_{ch} = D_{KL}(q_{ch} || p_{ch})$.
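Since the prior and posterior models parametrise diagonal Gaussians, each of $D_{st}$ and $D_{ch}$ reduces to the closed-form KL divergence between two diagonal Gaussians. A minimal sketch (function and variable names are ours, not the paper's):

```python
import numpy as np

def kl_diag_gauss(mu_q, logvar_q, mu_p, logvar_p):
    """KL(q || p) for diagonal Gaussians q = N(mu_q, diag(exp(logvar_q)))
    and p = N(mu_p, diag(exp(logvar_p))), summed over dimensions."""
    var_q, var_p = np.exp(logvar_q), np.exp(logvar_p)
    return 0.5 * np.sum(
        logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
    )

# D_st = kl_diag_gauss(*q_st_params, *p_st_params), and likewise for D_ch.
```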
4
+
5
+ Practically, prior state $p_{ch}$ is computed only once, after which it is stored for subsequent comparisons until the event criteria are satisfied and the block is updated.
6
+
7
+ Additionally, criterion *CU* ($D_{st, \tau+1} > \gamma \sum^{\tau}_{k=\tau-\tau_w} D_{st, k} / \tau_w$) involves two hyperparameters: $\tau_w$ is the length of a sliding window used for calculating the moving average, and $\gamma$ is the threshold factor that multiplies the value of the moving average. In running the experiments, we found that the optimal values are $\gamma=1.1$ and $\tau_w=100$. These values also proved to be robust for use across different datasets, as we kept their values constant for all of the reported experiments.
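The *CU* criterion can be sketched as a simple moving-average threshold; this is a hedged reconstruction from the formula above, and the original implementation may differ in detail:

```python
from collections import deque

def make_cu_criterion(gamma=1.1, window=100):
    """CU fires when the current static divergence D_st exceeds gamma times
    the moving average of D_st over the last `window` timesteps."""
    history = deque(maxlen=window)   # sliding window of past D_st values

    def cu(d_st):
        fired = bool(history) and d_st > gamma * (sum(history) / len(history))
        history.append(d_st)
        return fired

    return cu
```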
8
+
9
+ <figure id="fig:mechanism" data-latex-placement="h!">
10
+ <embed src="05_two_posteriors.pdf" />
11
+ <figcaption>Event detection mechanism relies on the computation of key variables shown in the figure. Given the latest updated state of a VPR block at level <span class="math inline"><em>n</em></span> and timestep <span class="math inline"><em>τ</em></span> denoted as <span class="math inline"><em>p</em><sub><em>s</em><em>t</em></sub></span> (and the corresponding deterministic states <span class="math inline"><em>d</em><sub><em>τ</em></sub><sup><em>n</em></sup></span> and <span class="math inline"><em>c</em><sub><em>τ</em></sub><sup><em>n</em></sup></span>), the model receives a new observation <span class="math inline"><em>x</em><sub><em>τ</em> + 1</sub><sup><em>n</em></sup></span> at some timestep <span class="math inline"><em>τ</em> + 1</span>. This new observation is used to compute the model’s posterior belief, <span class="math inline"><em>q</em><sub><em>s</em><em>t</em></sub></span>, using the latest deterministic variables of the block. Concurrently, VPR makes a prediction using its generative model, producing <span class="math inline"><em>p</em><sub><em>c</em><em>h</em></sub></span> (and the corresponding <span class="math inline"><em>d</em><sub><em>τ</em> + 1</sub><sup><em>n</em></sup></span>) representing the model’s prior belief about the features at the next subjective timestep. Lastly, VPR produces a posterior belief state, <span class="math inline"><em>q</em><sub><em>c</em><em>h</em></sub></span>, under the updated temporal context variable <span class="math inline"><em>d</em><sub><em>τ</em> + 1</sub><sup><em>n</em></sup></span>.</figcaption>
12
+ </figure>
13
+
14
+ We can visualise the values calculated as part of the decision-making process in the event detection mechanism in Figure [8](#fig:mechanism-values){reference-type="ref" reference="fig:mechanism-values"}. As described in Section [4.1](#sec: detection){reference-type="ref" reference="sec: detection"}, the *CU* criterion acts as the initial supervision signal, which subsequently results in the rapidly improving transition model and thus the *CE*-based detection. Figure [8](#fig:mechanism-values){reference-type="ref" reference="fig:mechanism-values"} shows two examples of the computed values over the length of an observation sequence at the early (left) and later (right) stages of training. It can be observed that at only 500 training iterations the decision-making is primarily driven by the *CU* criterion. At 18500 iterations, their roles tend to switch, as the *CE* criterion becomes significantly more accurate at detecting events.
15
+
16
+ <figure id="fig:mechanism-values" data-latex-placement="t!">
17
+ <embed src="decision_summary2.pdf" />
18
+ <figcaption>Key KL-divergence values computed using the level 2 detection mechanism at different stages of the training process using the Moving Ball dataset. Left-hand side graph shows the early stages of the training (training iteration 500), while right-hand side the later stages (training iteration 18500). In line with the described detection criteria in Section <a href="#sec: methods" data-reference-type="ref" data-reference="sec: methods">3</a>, if one of the blue lines falls below the orange line, an event is considered to be detected. As such, it can be seen that the decision-making is dominated by the <em>CU</em> criterion (light blue) in the early stages (left). On the other hand, as representations mature and the transition model learns, the <em>CE</em> criterion (dark blue) begins to dominate the detection process (right). </figcaption>
19
+ </figure>
20
+
21
+ The model consists of several components implemented using neural networks:
22
+
23
+ - **Bottom-up**. Encoder is a combination of (a) a convolutional neural network that embeds high-dimensional image data into a lower-dimensional representation, (b) stacked fully-connected networks with residual connections $f^n_{enc}$, and (c) fully-connected networks in each layer that compress layerwise observation embeddings, $x^n$, prior to being passed into a posterior model.
24
+
25
+ - **Top-down**. Decoder is a combination of (a) a transpose convolutional neural network, $f_{rec}$, for reconstructing images using top-down information $c^0$ and (b) stacked fully-connected networks with residual connections $f^n_{dec}$.
26
+
27
+ - **Temporal**. Layerwise transition models are recurrent GRU models [@GRUU] with hidden states of size $|d^n_\tau|=200$ and an additional fully-connected network with $|s^n_\tau|$ neurons.
28
+
29
+ - **Prior and posterior models.** These models are implemented using four fully-connected networks and parametrise a diagonal Gaussian, thus outputting a vector of size $|s^n_\tau|\cdot2$.
30
+
31
+ The convolutional components of the encoder and decoder are analogous to those used in [@Ha2018WorldM]. Fully-connected top-down and bottom-up components are all made out of the same building block, which consists of four fully-connected layers with a residual connection at the output (e.g. $x^{n+1} = f^n_{enc}(x^n) + 0.1 \cdot x^n)$ and Leaky ReLU activations [@LeakyRELU]. The number of neurons is kept the same throughout and is equal to the dimensionality of the block's input (e.g. $|x^n_\tau|$). Component (c) of the bottom-up model, as well as the posterior and prior models, also consist of four fully-connected layers but with no residual connections.
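The fully-connected building block with its scaled residual connection can be sketched as follows (a NumPy illustration with our naming; the paper's implementation details may differ):

```python
import numpy as np

def leaky_relu(h, slope=0.01):
    return np.where(h > 0, h, slope * h)

def residual_block(x, layers, alpha=0.1):
    """Four fully-connected layers with Leaky ReLU activations and a scaled
    residual connection at the output: f(x) + 0.1 * x. Each (W, b) pair in
    `layers` keeps the width equal to the input dimensionality."""
    h = x
    for W, b in layers:
        h = leaky_relu(h @ W + b)
    return h + alpha * x
```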
32
+
33
+ We use the same VPR architecture for the Moving Ball and 3DSD datasets. Specifically, the latent states are of size $|s^n_t|=20$, while the temporal, top-down, and bottom-up deterministic variables are set to be $|x^n_\tau| = |c^n_\tau| = |d^n_\tau| = 200$. In Moving Ball and 3DSD, observations $o_t \in \mathbb{R}^{64\times64\times3}$, so we set the model's input layer to have the same shape. For the Bouncing Balls dataset (see section [8.3](#sec:bballs){reference-type="ref" reference="sec:bballs"}), we increase the capacity of the model, such that $|x^n_\tau|=1024$ and $|s^n_t|=60$.
34
+
35
+ For training, we use the Adam optimizer [@Kingma2015AdamAM] with a learning rate of $0.0005$ and a cosine decay to $0.00005$ over a period of 15,000 iterations. We employ linear annealing of the KL coefficient from $0$ to $1$ over the first 3000 iterations. Although we also used KL balancing [@Vahdat2020NVAEAD] in the models presented in this paper, we find that it does not have a significant effect on their resultant properties. Further, we use a binary cross-entropy reconstruction loss and sequences of length 15 for the Moving Ball and Bouncing Balls datasets, and a mean squared error loss and sequences of length 50 for the 3DSD and Miniworld Maze datasets. A batch size of 32 is used for all datasets.
36
+
2203.12193/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2203.12193/paper_text/intro_method.md ADDED
@@ -0,0 +1,63 @@
1
+ # Introduction
2
+
3
+ 3D scene understanding [@qi2017pointnet; @ilg2017flownet; @liu2019flownet3d; @dgcnn; @choy20194d; @he2021learning] of a dynamic environment has drawn increasing attention recently due to its wide applications in virtual reality, robotics, and autonomous driving. One fundamental task is *scene flow estimation* that aims at obtaining a 3D motion field of a dynamic scene [@vedula1999three]. Traditional scene flow methods focus on learning representations from stereo or RGB-D images [@basha2013multi; @jaimez2015motion; @teed2021raft]. Recently, researchers have started to design deep scene flow estimation networks for 3D point clouds [@gu2019hplflownet; @liu2019flownet3d; @wu2019pointpwc; @puy20flot; @mittal2020just; @gojcic2021weakly; @he2021learning].
4
+
5
+ However, most scene flow approaches rely on supervised learning with massive labeled training data that are expensive and difficult to obtain in real-world environments [@gu2019hplflownet; @liu2019flownet3d; @puy20flot]. Consequently, researchers have turned to model training with synthetic data and rich annotations followed by a further fine-tuning step if necessary, or the use of self-supervised learning objectives to eliminate any dependence on labels. Early attempts at self-supervised scene flow estimation assume that the scene flow can be approximated by a point-wise transformation that moves the source point cloud to the target one [@wu2019pointpwc; @mittal2020just; @Kittenplon_2021_CVPR]. The alignment of point clouds is measured by popular similarity metrics such as the Chamfer Distance (CD) or Earth Mover's Distance (EMD). However, for scene flow estimation, these metrics are limited: CD is sensitive to outliers due to its nearest-neighbor criterion and tends to produce degenerate solutions, as discussed in @mittal2020just, while EMD is computationally heavy and its approximations can perform poorly in practice.
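For concreteness, the symmetric CD objective discussed above fits in a few lines (a NumPy sketch; the hard nearest-neighbor minima are exactly what make it sensitive to outliers):

```python
import numpy as np

def chamfer_distance(S, T):
    """Symmetric Chamfer Distance between point clouds S (N x d) and T (M x d):
    every point is matched to its single nearest neighbor in the other cloud."""
    sq = np.sum((S[:, None, :] - T[None, :, :]) ** 2, axis=-1)  # (N, M) pairwise
    return np.mean(np.min(sq, axis=1)) + np.mean(np.min(sq, axis=0))
```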
6
+
7
+ This paper presents a principled scene flow estimation objective that addresses both limitations; it is robust to missing correspondences and outliers *and* is efficient to compute. We accomplish this by proposing to represent discrete point clouds as continuous probability density functions (PDFs) using Gaussian mixture models (GMMs) and recovering motion by minimizing the divergence between two GMMs. This is in contrast to previous nearest-neighbor-based objectives which assume the existence of hard correspondences between pairs of discrete points. Intuitively, if point clouds are aligned well to each other, their resulting mixtures should be statistically similar. We, therefore, can obtain the approximated scene flow with a decent alignment between the source and target point clouds. In summary, our contributions are:
8
+
9
+ - A new perspective on self-supervised scene flow estimation as minimizing the divergence between two GMMs. The obtained soft correspondence between point cloud pairs differs from the existing nearest-neighbor-based approaches with the assumption of an explicit hard correspondence.
10
+
11
+ - A self-supervised objective that leverages the Cauchy-Schwarz divergence for aligning two GMMs. It admits an efficient closed-form expression and leads to more robust and accurate flow estimation over CD and EMD in the presence of missing correspondences and outliers on real-world datasets.
12
+
13
+ - State-of-the-art performance compared to other advanced self-supervised learning methods, even outperforming some fully-supervised models that use ground truth annotations.
14
+
15
+ # Method
16
+
17
+ Point clouds can represent raw data, e.g., 3D shapes, or the surfaces from which they are sampled, e.g., those collected or reconstructed from LiDAR or RGB-D sensors. Our goal is to estimate 3D scene flow from consecutive point cloud frames. Denote the source point cloud as $\boldsymbol{S}= \{ (\boldsymbol{c_i^s}, \boldsymbol{x_i^s}) \mid i=1,\dots, N\}$ and target point cloud as $\boldsymbol{T}= \{ (\boldsymbol{c_j^t}, \boldsymbol{x_j^t}) \mid j=1,\dots, M\}$, where $\boldsymbol{c_i^s}, \boldsymbol{c_j^t}$ are the 3D coordinates of individual points and $\boldsymbol{x_i^s}, \boldsymbol{x_j^t}$ are the associated point features, e.g., color or LiDAR intensity. Due to the viewpoint shift, occlusion and sampling effect, $\boldsymbol{S}$ and $\boldsymbol{T}$ do not necessarily have the same number of points or have strict point-to-point correspondences. Considering points $\boldsymbol{s_i} = (\boldsymbol{c_i^s}, \boldsymbol{x_i^s})$ in the source point cloud $\boldsymbol{S}$ being moved to a new location $\boldsymbol{\widehat{c_i^s}}$ at the target frame and denoting its 3D motion as $\boldsymbol{d_i} = \boldsymbol{\widehat{c_i^s}} - \boldsymbol{c_i^s}$, a scene flow estimation model will predict the motion for every sampled point $\boldsymbol{s_i}$ in the source point cloud $\boldsymbol{S}$ via a function $f$: $\boldsymbol{D} = \{ \boldsymbol{d_i} = f(\boldsymbol{S}, \boldsymbol{T})_i \mid i=1,\dots, N\}$ such that they are close to real motion.
18
+
19
+ In this section, we introduce the proposed approach for representing and aligning discrete point clouds using PDFs. To the best of our knowledge, this is the first attempt at doing so for scene flow estimation.
20
+
21
+ Unlike existing self-supervised learning objectives such as CD and EMD that rely on hard pairwise correspondences between discrete point clouds, the key idea of our paper is to represent point clouds by PDFs to obtain a soft correspondence. We demonstrate the conceptual differences between CD, EMD, and CS in Figure [2](#fig:cd_emd_cs_curve){reference-type="ref" reference="fig:cd_emd_cs_curve"}. The main intuition is that point clouds can be interpreted as samples drawn from continuous spatial distributions of point locations. By doing so, we can capture the uncertainty in point cloud generation, e.g., jitter introduced during the LiDAR scanning process.
22
+
23
+ Formally, for a given point cloud $\boldsymbol{x}$, we represent it as the PDF of a general Gaussian mixture, which is defined as $\mathcal{G}(x) = \sum_{k=1}^K w_k \mathcal{N}(x | \boldsymbol{\mu_k}, \boldsymbol{\Sigma_k})$ with $$\begin{equation}
24
+ \mathcal{N}(x| \boldsymbol{\mu_k}, \boldsymbol{\Sigma_k}) = \frac{\exp\left[-\frac{1}{2}(x-\boldsymbol{\mu_k})^T \boldsymbol{\Sigma_k}^{-1}(x-\boldsymbol{\mu_k})\right]}{\sqrt{ (2\pi)^d |\boldsymbol{\Sigma_k}|}},
25
+ \end{equation}$$ where $K$ is the number of Gaussian components. We denote $w_k, \mu_{k}, \Sigma_{k}$ as the mixture coefficient, mean, and covariance matrix of the $k^{th}$ component of $\mathcal{G}(x)$. $d$ is the feature dimension of each point. In our case, $d=3$. $|\boldsymbol{\Sigma_k}|\equiv \det \boldsymbol{\Sigma_k}$ is the determinant of $\boldsymbol{\Sigma_k}$, also known as the generalized variance. Note that if $K$ is large enough, $\mathcal{G}(x)$ can well approximate almost any underlying density of a point cloud.
26
+
27
+ Inspired by [@jian2010robust; @roy2007deformable], we simplify the GMMs as follows: 1) the number of Gaussian components is the number of points with uniform weights (the occupancy probabilities or the mixture coefficients), 2) the mean vector of a component is the location of each point, and 3) all components share the same variance (isotropic, or spherical covariances), i.e., $\Sigma_i=\Sigma_j= \sigma \boldsymbol{I}$ with the identity matrix $\boldsymbol{I}$. We, therefore, obtain an overparameterized GMM model which can be equivalently obtained from a fixed-bandwidth kernel density estimation (KDE) with a Gaussian kernel [@scott2015multivariate]. Fitting more complicated GMMs is non-trivial and would require computationally expensive procedures such as the Expectation-Maximization (EM) algorithm [@moon1996expectation], which we do not explore in this paper and instead reserve for future work.
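Under these simplifications, evaluating the point-cloud density reduces to a fixed-bandwidth Gaussian KDE. A minimal sketch (variable names are ours):

```python
import numpy as np

def gmm_density(points, x, sigma=0.05):
    """Density at x of the simplified GMM: one isotropic Gaussian
    N(mu_k = point_k, Sigma = sigma * I) per point, uniform weights."""
    d = points.shape[1]
    # Normalizer sqrt((2*pi)^d * |sigma * I|) = (2*pi*sigma)^(d/2)
    norm = (2.0 * np.pi * sigma) ** (d / 2.0)
    sq = np.sum((points - x) ** 2, axis=1)   # squared distances to each mean
    return np.mean(np.exp(-0.5 * sq / sigma)) / norm
```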
28
+
29
+ The principle of PDF divergence minimization results in the specification of a self-supervised learning objective that optimizes a scene flow model such that a dissimilarity measure $D_{dsim}(\mathcal{G}(\boldsymbol{S}_{w}), \mathcal{G}(\boldsymbol{T}))$ between the GMM representations of the warped point cloud $\mathcal{G}(\boldsymbol{S}_{w}) =\mathcal{G}(\boldsymbol{S}+\boldsymbol{D})$ and the target point cloud $\mathcal{G}(\boldsymbol{T})$ is minimized. Recall that $\boldsymbol{D} = \{ \boldsymbol{d_i} = f(\boldsymbol{S}, \boldsymbol{T})_i \mid i=1,\dots, N\}$ where $f$ is implemented as a deep neural network. We can construct a suitable $D_{dsim}$ such that it is differentiable so it can guide optimization via backpropagation and gradient descent. We now describe how to achieve this goal.
30
+
31
+ We choose the Cauchy-Schwarz (CS) divergence [@jenssen2005optimizing; @principe2010information] for measuring the similarity between the two GMM representations of point clouds $\boldsymbol{S}_{w}$ and $\boldsymbol{T}$. The CS divergence can be *expressed in closed-form*, allowing an efficient end-to-end trainable implementation for scene flow estimation. We optimize $f$ by minimizing $$\begin{equation}
32
+ \footnotesize
33
+ \begin{split}\label{eq:cs_divergence}
34
+ & \mathcal{D}_{CS}( \mathcal{G}(\boldsymbol{S}_{w}),\mathcal{G}(\boldsymbol{T})) = - \log \Big( \frac{\int \mathcal{G}(\boldsymbol{S}_{w}) \mathcal{G}(\boldsymbol{T}) dx}{\sqrt{\int \mathcal{G}^2(\boldsymbol{S}_{w}) dx \int \mathcal{G}^2(\boldsymbol{T}) dx}} \Big) \\
35
+ & = - \log \int \mathcal{G}(\boldsymbol{S}_{w}) \mathcal{G}(\boldsymbol{T}) dx + 0.5 \log \int \mathcal{G}^2(\boldsymbol{S}_{w}) dx \\ & \quad + 0.5 \log \int \mathcal{G}^2(\boldsymbol{T}) dx.
36
+ \end{split}
37
+ \end{equation}$$ The CS divergence is derived from the CS inequality [@steele2004cauchy] and is expressed as inner products of PDFs. It defines a symmetric measure for any two PDFs $\mathcal{G}(\boldsymbol{S}_{w})$ and $\mathcal{G}(\boldsymbol{T})$ such that $0 \le D_{CS} < \infty$ where the minimum is obtained iff $\mathcal{G}(\boldsymbol{S}_{w}) = \mathcal{G}(\boldsymbol{T})$. It measures the interaction of the generated field of one PDF on the locations of the other PDF, which is also called the *cross information potential* of the two densities [@hasanbelliu2011robust].
38
+
39
+ <figure id="fig:main" data-latex-placement="hbt!">
40
+ <img src="main_cs_3.png" style="width:99.0%" />
41
+ <figcaption>Overview of the proposed self-supervised learning for scene flow estimation. Our model takes both source and target point clouds to extract deep features via a UNet-like encoder-decoder backbone network <span class="citation" data-cites="ronneberger2015u"></span> based on MinkowskiNet <span class="citation" data-cites="choy20194d"></span>. We then warp the source point cloud by adding the estimated scene flow. Both the warped source and target point clouds are further fit using two separate GMMs. We train the model by minimizing the discrepancy between the two corresponding mixtures via a closed-form expression for the CS divergence.</figcaption>
42
+ </figure>
43
+
44
+ The CS divergence in Equation [\[eq:cs_divergence\]](#eq:cs_divergence){reference-type="ref" reference="eq:cs_divergence"} can be written in a closed-form expression for GMMs [@jenssen2006cauchy]. The basic idea is to follow the Gaussian identity [@petersen2008matrix] to obtain the product of two Gaussian PDFs as $$\begin{equation}
45
+ \begin{split}\label{eq:main_product}
46
+ \mathcal{N}(\boldsymbol{x}| \boldsymbol{\mu_i}, \boldsymbol{\Sigma_i}) & \mathcal{N}(\boldsymbol{x}| \boldsymbol{\mu_j}, \boldsymbol{\Gamma_j}) = \\ & \mathcal{N}(\boldsymbol{\mu_i}| \boldsymbol{\mu_j}, \boldsymbol{\Sigma_i} + \boldsymbol{\Gamma_j}) \mathcal{N}(x| \boldsymbol{\mu_{ij}}, \boldsymbol{\Sigma_{ij}})
47
+ \end{split}
48
+ \end{equation}$$ where $$\begin{equation}
49
+ \footnotesize
50
+ \boldsymbol{\mu_{ij}} = \boldsymbol{\Sigma_{ij}} (\boldsymbol{\Sigma_i}^{-1} \boldsymbol{\mu_{i}} + \boldsymbol{\Gamma_j}^{-1} \boldsymbol{\mu_{j}})
51
+ \end{equation}$$ and $$\begin{equation}
52
+ \footnotesize
53
+ \boldsymbol{\Sigma_{ij}} = {(\boldsymbol{\Sigma_i}^{-1} + \boldsymbol{\Gamma_j}^{-1})}^{-1}.
54
+ \end{equation}$$ Then we can use the Gaussian identity trick to simplify each term in the right of Equation [\[eq:cs_divergence\]](#eq:cs_divergence){reference-type="ref" reference="eq:cs_divergence"} and get $$\begin{equation}
55
+ \footnotesize
56
+ \begin{split}
57
+ & \mathcal{D}_{CS}( \mathcal{G}(\boldsymbol{S}_{w}),\mathcal{G}(\boldsymbol{T})) = - \log \bigg(\sum_{i,j=1}^{N,M} \pi_{i} \tau_{j} \mathcal{N} (\boldsymbol{c^s_i} | \boldsymbol{c^t_j}, \boldsymbol{\Sigma_i} + \boldsymbol{\Gamma_j} )\bigg) \\
58
+ & + 0.5 \log \bigg(\sum_{i,i'=1}^{N,N} \pi_{i} \pi_{i'} \mathcal{N} (\boldsymbol{c^s_i} | \boldsymbol{c^s_{i'}}, \boldsymbol{\Sigma_i} + \boldsymbol{\Sigma_{i'}}) \bigg) \\
59
+ & + 0.5 \log \bigg(\sum_{j, j'=1}^{M,M} \tau_{j} \tau_{j'} \mathcal{N} (\boldsymbol{c^t_j} | \boldsymbol{c^t_{j'}}, \boldsymbol{\Gamma_j} + \boldsymbol{\Gamma_{j'}}) \bigg), \label{eq:final}
60
+ \end{split}
61
+ \end{equation}$$ where we denote the sets of mixture coefficients for two GMMs $\mathcal{G}(\boldsymbol{S}_{w})$ and $\mathcal{G}(\boldsymbol{T})$ as $\{ \pi_{i} \}_{i=1}^N$ and $\{ \tau_{j} \}_{j=1}^M$ and the corresponding covariance matrix sets as $\{ \Sigma_{i} \}_{i=1}^N$ and $\{ \Gamma_{j} \}_{j=1}^M$. Note that the third term in the right of Equation [\[eq:final\]](#eq:final){reference-type="ref" reference="eq:final"} is a constant value for a target point cloud and can be optionally removed for faster computation. The detailed derivation of $\mathcal{D}_{CS}( \mathcal{G}(\boldsymbol{S}_{w}),\mathcal{G}(\boldsymbol{T}))$ can be found in the appendix.
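With the isotropic simplification above ($\Sigma_i = \Gamma_j = \sigma \boldsymbol{I}$, uniform weights), the closed-form expression admits a direct vectorized implementation. This is a sketch under those assumptions, not the authors' code:

```python
import numpy as np

def cs_divergence(S, T, sigma=0.01):
    """Closed-form CS divergence between the simplified GMMs of point clouds
    S (N x d) and T (M x d): uniform weights, shared covariance sigma * I,
    so every pairwise Gaussian uses covariance (sigma + sigma) * I."""
    d = S.shape[1]

    def cross_potential(A, B):
        var = 2.0 * sigma   # Sigma_i + Gamma_j = 2 * sigma * I
        sq = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
        return np.mean(np.exp(-0.5 * sq / var)) / (2.0 * np.pi * var) ** (d / 2.0)

    return (-np.log(cross_potential(S, T))
            + 0.5 * np.log(cross_potential(S, S))
            + 0.5 * np.log(cross_potential(T, T)))
```

As noted above, the final term depends only on the target cloud and could be dropped for faster computation during training.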
62
+
63
+ ![A toy example to illustrate the conceptual difference between CD, EMD, and CS. For CD and EMD, we visualize their matching correspondences between points of blue and orange curves by adding blue (the forward matching) and green (the backward matching) arrows. CD and the EMD approximation [@fan2017point] find a hard pairwise correspondence, while CS tries to link every blue point to all orange points via Gaussian functions --- here we only show $\mathcal{N} (\boldsymbol{c^s_i} | \boldsymbol{c^t_j}, \boldsymbol{\Sigma_i} + \boldsymbol{\Gamma_j} )$ in Equation [\[eq:final\]](#eq:final){reference-type="ref" reference="eq:final"} and weight the arrow color according to their values.](cd_emd_cs.png){#fig:cd_emd_cs_curve width="48%"}
2203.15845/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2021-09-23T15:05:41.839Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.63 Safari/537.36" etag="ShUV4u59FSaOmuOmI8tM" version="14.9.9" type="google"><diagram id="-35T-4y-sUON_p6VWYr6" name="Page-5">7V1dc+O2Ff01nkkejMH3x+Pau5tOJ2nT7kzbzUuGK3FlNrTkSPLa219fUCIokaAkUCYpQKQnk5VACpR4D+65uAe4vCH3j68/LaOnh18W0zi9wXD6ekPe32AshVL6n6zl+7aFckG2LbNlMt22oV3Dp+R/cd4I89bnZBqvSieuF4t0nTyVGyeL+TyerEtt0XK5eCmf9nWRlq/6FM3yK8Jdw6dJlMbWaf9OpuuH/IdhsWv/S5zMHsyVEc9/8WNkTs67WD1E08XL3rXIhxtyv1ws1ttXj6/3cZrdPXNf3v+J2OyD+G3yr8e/Pj2/zunL6u+322/5sclHip+wjOfrdrvGIr8P36L0Ob9jN5in+jJ3q6donv3s9ff8XvI/n7Pfevd1MV/frjaWfqdPUPBJw+Vud1y/mmX/Fj3pL7XtbNue386iX7yOX7P2h/VjqhtQdu31cvFHfL9IF0vdMl/M4+y6SZpWmqI0mc3124m+M7Fuv/sWL9eJtv67/MBjMp1ml7l7eUjW8aenaJJd80WDXbctF8/zaZzdH6jfTaPVw+YNyt/8Gq11p/NNC4Yo/+U5xBUsvqbBlSx+20lT5SbNvm38uoey3HQ/xYvHeL38rk8xRzlh28/kAxFRroCS27aXHbIxzMfBwz6qSd4Y5aNpVlxghxj9IgdNIwDJ7gCERgC1BqDC4/mGH9UdfsiIn9bwgymFXgJIwu4AREcAtQYggrkFIHh59KDu0MNG9LSGHoqgf+ghENegp2LceKrnI/nb3Grl+75Yrh8Ws8U8Sn9eLJ7y+//feL3+nt/q6Hm9KMNjzw4Cb+wQLdfvstlShoU0Wq2SiWn+mKTmY/F8ak7Kv4luyY9DC3U3mCgF9Z9laN7Y0NlNOGrmZZxG6+RbedJWZ7D8o78uEn3dHTygUgBzQQXhkuoxWcKKPg6oQBhSJCWVyITSpn99m2bxOu+ygoniO54PE26BxHYBuhs9Iz40UPdMH62ettPkr8lrBqGKaRCsOAptxfvNXw1sNu/z7+A6yTzD7ysJsMCcICakoiZ/kNuG2xEEwhgIjJkeuFIDkGFSM6gPnNP6GLfnx/dDMh6EHEjIC+PRoIxnz03fDcl4TFBAlCwGHg/KePbE8H2vxvv48QO/oPEQh0ATWW45ScsjT3MdoJr2JBFccWIYzVdbmmxwx3GRtsHy+3/233zOO9u8ef9aeve9FPbsBzfxa7LOuoGA5e8+m07161032ZtdL3sxWP6D9gMwWIOkUwFZKSRvhrTV4nk5iY+clzuDbexzehiejODcGENysAdaaLI/7QMO9Qs4OALuFODEdQPOngp+aJGuDgUWdTR2EbqihAN9g/MJGILlbHNodEVG7+GZ93ClKzMOQ3MfNJgACckeIWelpO7u8pRUy/iSV44vZtFT0zy3wAfy3D+55rlbna01S1fXMOHhCV9nJHk8FUYlBtsEJReMKM2ZnrMk95glz3NQVWY9mmg/4MIuyZLK1YuxML2Y8Jglh4k4w4fXCjlMLeIcknqjSYkBTJjJQuKSEXxPIWM76hmSeoORIkCPkECNZ+umQ1JvsOAIEF5oNzIs3RTbwumQ5BtMqABIEH2HNwRWWZAQVjoMS4/DrmGmw0xYcjLsMuPQNew6zhm9RV2qX8SNCdjTiGNXjThiLyMekn6DOSYAK2nkGxb0cgMyqr/euQ9XwjIDMTD3gccQyTfEu
UqGgSKOWIQ1Kjp9KzrH02OhCTqkZxG6Z0FnI1xfTGDGruufzMAOzB0xjwlwmNKM4bUrRRwdtjJDEUVAlfOKAgHIJeTZbu8NYfjNN3TY8gyRTDskTIM24bBFGkqRBAixPGmCOQvamMPWbCiEBChtNCO5VXJgAgKqso3HTHHJpOfBPA1HtGlnRenhfc9NsxK9zQmoa1KMNsyp11KLVIAKTokSCjLB1JFd1m9D3ije+JYLo67iTVBAY8PWbCiWEGgGMbFHeYFIYHTFRsnGO6fhyk6sYQL9sk5jVGq8A5qrUhMW0EaB5vICTV2WbKPZMM4oZRBzJDxfScmuW5UJMbNOXbUc1jCzflmHNUo4vgGNuUo4QQFNYosZh6TcSEGBZp5iqV1ZPgssZyztKGdIGg6TBEDCC1uSoG1pK6pDEnMQ0j6R79IpSARtTFtcbVPM8T05pkck0ExWLFFAJVsGlhyTPe+0H3MWp+uFYMfITDbc93wWo7SPuJ532r8hHXul9WjIdePLLs06JOkGQYQB46zY3FteNxIaPY2Cb7j0JEN0H2qsJ+sd4lzVwkARZ3z8qOZcTs05njkLTNdRPQvRHug6pxfenfZ5nfow12VSxhsE5sN6LmvrgcDjPeJc646GiTg9z7Foc0hSD+ISA0UEKR7jUDKD70WcUM0Droak7iCiUPbkKFI/TffffLY4NyhBRzGl/RyBRQ3JwMxn63FDknCQjuSBDvlZPYcFVpNGf3+Po69h5iyK8OR09VozFl3jrxPU0VvFZOhzke6hos41ORsu6uydpIOSdyTWEywIsVlJIsNmrp63ko5i8Bt4quEGPm88Rs8a4hgdOaDOdQ1CsKhD9g7SUdXpW9U5kSQLrYoa8vrZk8PcR1EE0qddGWq4x9AbV9aznOiBuOM/6lz3iYWKOlseMKSX2cqFPiXU9Glx5z+j+XTxqA9/ih6f0mQ+2+PSbc8HuFRTzroM2HKQnmNgn+LypihNZhlLTrTtY91+lxFYMonSd/mBx2Q6TQ+xdHlcNSNhLjskV2yQ9b3yfo8+sajhy6KWVQfPIrSzA8MSBYU4Eu/o6CTLHag8Q+B7tMPslbwD05joEY0pNGMqy5htKk6+V4BDen5xRHFCMNshi/PVZbBQdLw1Jx+XyXqXVGHOMaEZjK4x4S0mlSiw1vl0ADOfJ7wDhZlwXcoYEMzs2UabopL35CQRPyIqhUdOPa9GHb1Gm+RkBqP/XqPn8kljDOQAM9dVzwHBzC63MCpJ/StJRzMrmtMoQFLt1l74TpE+L94aaE7fhMsOvqvhzvzL+a6eiz30LB1d9rlPBYVdEWDEsB/cgkimrx0W38LK+Qr7ES5DUmMwlfDwI/xCs6WtrA1JjMGCcUB48QAXSYI2pq2sDUmMwYRigATZJruYWcMQarrLfH8vgyx/8xBtR2PCtdZ8Mfy8j8ZkzyrfuDy/ep5yXl0YDqjsjftDEls0q0iAlSzqJ1QySaGxj6kHFoCP8Il9upVonanIDEb/vYbPFWaGCjNnTS8cmNmFEUaxpW+x5XjiJDitRfZcnmHUWhxcl7NOLBtulL+c6/K5lPtAYWZ474pgpmyG7FqhqdLTselc59M3JCXQpHMVSX1lq21dCzQ+2ZJgRQAWuNhgEXRSX9lq25AUGsK4AkTJwpY8aGPactuQFBptRAYwVMVqq8CTZKrnOjdvyF5cayLdedeCalhx5HKR2PhwDN9SYtjUO74emGFoV7IZkl5DKEGAEGae9gbL23BDoyJsCn6NXsMbr+FOTsVg9N9r9CwLjnqNA8xcZcGAYGbXIR/1mr71mhN5lNAEGwx73tg3CjYOvstVay5cgv++y+ey7cOEWUF8VwQzZFPkkAQbAiECEvKCnEKuiYWRrb4NSbGhiHBg8l6FBQmAfHvbGYcY+25CW3TrWqfxyoTahgAhZh60yKteMTBr2qrboIQapSRQWBSiW1WoURhQVViB+R76I583LA8zbVFw3umYzAxG/2OyfvTAG
mXvjdafpNFqlUxKALBr4Z4v5DlbDgkMDDnkxlOQAaFcjJd3++si2ZQSLphJ9ymqta9Q1mm5qy1k80/voGB3SGS5RC5C0O5tC2urtw2wih/+FqzZlTeHpN9QLBjQlGHCjWrhzcAICoezJW8kqINj0XuCMq51zKtfMK9eN9vVHkYAzASVXGnnpajy3WH5XIxxoFlO5CwEGj/gv8PyuRjjUGHmWtAsHJgRmxcHlUyXlAGGqyn0sFJ2zLbhkMpS6cABMIiL+gP45vRA8teW/YtbPtmSCQkg4aL+qRKh2dLWtoa0gQUh7Vv5Lk+BRNDGtFWu970a87JZJ0EQoGS3SxCVbBncomHm897igWadDHGfjq7NWHSNrs9hlQ4w17MUN2Y6HTDnur0lVMzZhTHb1EoOxRsu075eQhAIBWCcsfryw+HR1rhDzj8X4k5bDSsa+uFCxifjeYg5Z4EuUMzZJTVHta5vte54Xi24TTDm4RJecmfoj+4owuPTHsmM7cA8ks/lOgcqxzk/Gy1MzB3eCZpZyoUDJdQcaBHgP2PNAZrX8s6+LM2BTy9x/GQ37/Hl9sIH+FLTyvqmZrmmcT85QPZpLG+K0mSWMeFEAyPW7XcZSSWTKH2XH3hMptP0EBOXB10zouWyQwLVsAPVCSeRQOOn+KvJegtos6RGZEcgq3lSbFOUaQo/EGqBzd814wdl9r9Loy9xqt39NF5WvllX0KLMrJPKkcUItKBUbP4rQWmz3KormpS20xqSFH3LIQKIQko4zvaPq3LMTMxS6f3xLjVrC0SxzP8va4LkA+d0YL/+i2F6ZT/toGFoJut/+51fJlMUSIb1PaZUB268smMrAAN2u+PO9xz9rYQUZMajECHFESuHTFxgsDOCMEvT/bVmOBvu9L1sIV+620G1PzmsS2g1Q9Dp+Z90Fu9k0310XOnQXULBpdCzKKnKyx2wnh4qypHALHvuTbF2swM09Vxn8w2CT/Bocs6ph4umbnfOec80RAmgFKf6HhOq9I0PmmlUODvngvcN7kzTdEOcL77BXHiMW7pHk7NWEi6a3p5WdBJwjyYUhy7gnshGKSlLfFcj4PrFd9e98XLj1Tp3Pc4bJQ1YwnM9171xsh+YOG90DBcmdtq0NYaCYUleb0NTk0yfQHr+pVBmeCIILJemYRIDtE9IFh8pCLQHy4p0MYwYkTV0RFm5D8o7A5Cdth0B1DGAOJdAMoYgRQoRSUnQALIfiTQCqGMAUUoA0tbX/2bFRlTYHsje3DECqGMAYY4B4wIqTiHXsYwKGkB2Pri1ZdbIdZn1wAB0nMKkYKXZtD0n9wlApOZRSq15IDR6oDMoLDAAobqNHtXZu7XGeM/UpbSctTCvZol8M7ucnNbu3VhWs9rOtC3jNFon3/b7alLJE6HyqiReSbK5FvBEsOxuMDLlF3sq3knQ2xPDCNYua/4h+jEsj+HRSuVbbh5vZYABhb2etOelyaTmQQ9tYeXLiJWzV7VTToHyEC095X393qnlvMMFI8DFbodLpcw9VVhHqeeXntb9AkoRYghLRCitPGNSIg0hJRhRWM+lkNmE0JTOlA6F9Dys+EPVq1TA1jm5Hc4pu86n6rfsfIrTeLLWsMVwvYzmq2SdLOZhTbA88mBKMqAqgBfQ9mCkLqLr0oPVZZSrLu0Sq7O6MoQe+BjgvfIcvGIUpg+LXXkOz7cYk5rK9O2M/5usQiX9YXWT7caAWaBLN/+NDuDMsjAoo6cKWwi7+guWCvBefYAZ8k0gNNkaLYPPcvblB+204BYoe69+zF5mdxRuAPc1ekzS79vP6I6ix6fNQUIydfkhTr/FmZmtI+VO8m+S9TFfLB+jtHz4Jb9j2XG6/S6bg2mc2f1W/5xJMp/Vfj5D7G0Ovuxwjr/S4URja553vxk0+wc3LPlVd2q613A2J7wsltPy1fc/Pk1WT2mU35pkniZ7n/yaLqL1fo/2OG1levJpHa3j4ubPv6ye8nt/uukl0di1Tjsw1WnkP
k7wzd7YVc3D1iaPvyMKwEoNeUEkqEl1SaaA2U5SYgQCOxu/b39+7xYUdSnRrdtfXb/3R2fMfJo83pdyQIi1C9ugqSgTth9oqH5jQVOSukMc3bD7x+j19xtxF90I3Q38x36AsfMs+UdgIQlmpQFauPyI37PrCohNAYpD015ek4yhkAMo+oVw3Wy4tenMGyrLd2YXoShgqJjOVEmKhjWbweHsPbuSwm+uyTuOEZAV4VgKgOX5CTuikMpSclJKhAmllbXiesaUuRxCGaNKR1vVzJrzE+Qg0f0gPc9lFBEqeGXGjyAFXAiSvdSDpILt9vJ39xO0nM3vf1uSrz//Pvk8m36er27botwz+Tbr9zySHRm29cQULAuu9aQKeYekWgvRMxLMY3ZgzA4MJjuAJMqWVx2MjUVNmaROswS1o/jwyvExR+AJA2CIIZD4SI6ghg86yxHUoujtq3/rxYa/PT8WNdt2VQC/RJM/np9W9oFxIUV75QGpntxRsQOdtDHW1qoK/Xa5yOy+i6D1/Xr4ZTGNszP+Dw==</diagram></mxfile>
2203.15845/paper_text/intro_method.md ADDED
@@ -0,0 +1,40 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ A significant challenge in reinforcement learning (RL) is to overcome the need for large amounts of data. [Off-policy algorithms have received great attention because of their ability to reuse data by experience replay [@lin1992self] for learning the policy.]{style="color: black"} The key ingredient in off-policy methods [is learning a Q-function]{style="color: black"} [@watkins1992q] which estimates the expected sum of future rewards (i.e., return) at a state. State-of-the-art deep RL algorithms train the $Q$-function by *bootstrapping* $Q$-values from state transitions sampled from the experience replay buffer [@mnih2015human]. In bootstrapping, the sum of the immediate reward and the succeeding state's $Q$-value is used as the target (or *"label"*) for the current state's $Q$-value. Because of this dependency on the successor states' $Q$-values, the order in which states are sampled to update the $Q$-function substantially influences its convergence speed. An improper update order results in slow convergence.
4
+
5
+ As a motivating example, consider the Markov Decision Process (MDP) shown in Figure [1](#fig:motivation){reference-type="ref" reference="fig:motivation"}. Let the agent receive [a positive reward]{style="color: black"} when it reaches the goal state (labeled as G), but [zero]{style="color: black"} at other states [(note that our method does not need rewards to be $0$ or $1$)]{style="color: black"}. Starting from state $C$, the agent can obtain the reward by visiting states in the sequence $C \rightarrow D \rightarrow G$. Now if the $Q$-value of taking action $a$ at state $D$, $Q(D,a)$, is inaccurate, then using it to update $Q(C,a)$ will lead to an incorrect estimate of $Q(C,a)$. It is therefore natural to start at the terminal state $G$ and first bootstrap the $Q$-values of the preceding states ($D, E$), and then those of states $(A, B, C)$. Using such a reverse order for bootstrapping ensures that for each state, the $Q$-values of its successor states are updated before an update is made to the current state's value. In fact, it has been proven that for acyclic MDPs such a *reverse sweep* is the optimal order for bootstrapping [@bertsekas2000dynamic]. Other works [@grzes2013convergence; @dai2007prioritizing; @dai2011topological] have further empirically demonstrated the effectiveness of *reverse sweep* in cyclical MDPs.
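The effect of the update ordering can be reproduced with a few lines of tabular value iteration. The sketch below is our own illustration (not the paper's code), using a four-state chain leading to a goal G with a single reward of $1$:

```python
# Tabular value iteration on a chain A -> B -> C -> D -> G (goal).
# Reward 1 on the transition D -> G, 0 elsewhere; gamma = 0.9.
# Backing up states in reverse order (D, C, B, A) propagates the goal
# value to A in a single sweep; the forward order (A, B, C, D) needs
# one sweep per intervening state.

GAMMA = 0.9
CHAIN = ["A", "B", "C", "D"]          # "G" is terminal with value 0
NEXT = {"A": "B", "B": "C", "C": "D", "D": "G"}
REWARD = {"D": 1.0}                   # reward for entering G from D

def sweeps_until_A_nonzero(order):
    """Count full sweeps (in the given state order) until V(A) > 0."""
    V = {s: 0.0 for s in CHAIN + ["G"]}
    sweeps = 0
    while V["A"] == 0.0:
        for s in order:
            V[s] = REWARD.get(s, 0.0) + GAMMA * V[NEXT[s]]
        sweeps += 1
    return sweeps

print(sweeps_until_A_nonzero(["D", "C", "B", "A"]))  # reverse sweep: 1
print(sweeps_until_A_nonzero(["A", "B", "C", "D"]))  # forward order: 4
```

With the reverse order a single sweep suffices, mirroring the optimality of the reverse sweep for acyclic MDPs.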
6
+
7
+ <figure id="fig:motivation" data-latex-placement="t!">
8
+ <embed src="figure/fig_motivation_new_1.pdf" style="width:80.0%" />
9
+ <figcaption> The ordering of states used for updating <span class="math inline"><em>Q</em></span>-values directly affects the convergence speed. <strong>(a)</strong> Consider the graphical representation of the MDP with the goal state (G). Each node in the graph is a state. The arrows denote possible transitions <span class="math inline">(<em>s</em>, <em>a</em>, <em>r</em>, <em>s</em><sup>′</sup>)</span> and the numbers on the arrows are the rewards <span class="math inline"><em>r</em></span> associated with the transition. <strong>(b)</strong> Random sampling wastes many backups at states with zero values (gray) while each backup of the reverse sweep propagates values (orange) one step back. </figcaption>
10
+ </figure>
11
+
12
+ However, it is challenging to perform $Q$-learning with *reverse sweep* in high-dimensional state spaces since the predecessors of each state are often unknown. State-of-the-art $Q$-learning methods [@mnih2015human; @lillicrap2015continuous; @haarnoja2018soft; @fujimoto2018addressing] resort to random sampling of data from the replay buffer. Their speed of convergence can be slow, as these methods do not account for the structure in state transitions when selecting the order of states for updating the $Q$-values. The speed of convergence matters because data collection is interleaved with $Q$-value updates. If the $Q$-values are incorrect, the agent may take actions that result in low rewards. Such data is useless for improving the $Q$-values of states in the high-return trajectories. Therefore, slower convergence of $Q$-values is directly linked to data inefficiency. One way to reduce dependence on data is to artificially increase the speed of convergence by increasing the ratio between the $Q$-learning update steps and the data collection steps. However, when function approximators are used to estimate the $Q$-value, excessive updates on a fixed set of interaction data can lead to over-fitting and overestimation of the $Q$-values, resulting in worse overall performance. Our experiments in Section [5.4](#subsec:more_compute){reference-type="ref" reference="subsec:more_compute"} empirically confirm this hypothesis.
13
+
14
+ To speed up $Q$-function convergence in high-dimensional state spaces, we approximate the *reverse sweep* update scheme by stitching the trajectories stored in the replay buffer into a graph. The graph is built directly from high-dimensional observations (e.g., images). Each state is a vertex, and two vertices are connected with an edge if the agent transitions between them during [training time]{style="color: black"}. As a result, trajectories are joined when a common state appears in two different rollouts. In our framework, graph building and exploration proceed iteratively. The resulting graph provides information about the predecessors of each state, which is used to determine the update order of the $Q$-function based on the reverse sweep principle. The reverse sweep is initiated from a set of terminal states because bootstrapping is not required to determine the correct $Q$-value for these states. We call this method *Topological Experience Replay* (TER) because the update order of $Q$-values is based on the topology of the state space.
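As an illustration of the graph-building step, the sketch below is our own simplification: a hypothetical identity hash stands in for the state-hashing function (for image observations, it would map an observation to a compact discrete key), and two rollouts sharing a state are stitched automatically:

```python
from collections import defaultdict

# Sketch of the graph-building step: each hashed state is a vertex and
# each observed transition (s, a, r, s') is stored on the directed edge
# phi(s) -> phi(s').

def phi(obs):
    return obs  # hypothetical hash; real observations would be reduced

class TransitionGraph:
    def __init__(self):
        self.predecessors = defaultdict(set)   # v' -> {v}
        self.edge_data = defaultdict(list)     # (v, v') -> transitions
        self.terminal_vertices = set()

    def add(self, s, a, r, s_next, terminal=False):
        v, v_next = phi(s), phi(s_next)
        self.predecessors[v_next].add(v)
        self.edge_data[(v, v_next)].append((s, a, r, s_next))
        if terminal:
            self.terminal_vertices.add(v_next)

# Two rollouts sharing state "C" are stitched into one graph:
g = TransitionGraph()
g.add("A", 0, 0.0, "C"); g.add("C", 1, 0.0, "D"); g.add("D", 0, 1.0, "G", terminal=True)
g.add("B", 0, 0.0, "C")                      # second rollout joins at C
print(sorted(g.predecessors["C"]))           # ['A', 'B']
```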
15
+
16
+ # Method
17
+
18
+ We consider reinforcement learning (RL) [@sutton2018reinforcement] in an episodic discrete-time Markov decision process (MDP). The objective of RL is to find the optimal policy, $\pi^*$, that maximizes the expected return $\mathbb{E} \big[ \sum^{T-1}_{\tau=t} \gamma^{\tau - t} r_\tau | s_t = s\big] ~\forall s \in \mathcal{S}$, where $\gamma$ is a discount factor [@sutton2018reinforcement], $\mathcal{S}$ represents the set of all states in the MDP, and $\tau$ denotes the time step. At a time step $\tau$, the agent takes the action $a_\tau = \pi(s_\tau)$, receives reward [$r_\tau = \mathcal{R}(s_\tau, a_\tau, s_{\tau+1})$]{style="color: black"}, and transitions to the next state $s_{\tau+1}$, where $\mathcal{R}$ is the reward function. At the time $\tau$ of task completion (e.g., reaching a goal state) the episode termination indicator $\mathcal{E}(s_\tau) = 1$; otherwise $\mathcal{E}(s_\tau) = 0$.
19
+
20
+ $Q$-learning is a popular algorithm for finding the optimal policy. It learns the $Q$ function $Q(s, a) = \mathbb{E} \big[ \sum^{T-1}_{\tau=t} \gamma^{\tau - t} r_\tau | s_t = s, a_t = a \big]$ using bootstrapping operations: $Q(s, a) \leftarrow \mathcal{R}(s, a, s^\prime) + \gamma \max_{a^\prime} Q(s^\prime, a^\prime)$ where $s^\prime$ is the state encountered on executing action $a$ in the state $s$. The policy $\pi(s)$ can be easily derived as: $\pi(s) := \mathop{\mathrm{arg\,max}}_{a}{Q(s, a)}$. When $\mathcal{S}$ is high-dimensional, the $Q$ function is usually represented by a deep neural network [@mnih2015human]. The interaction data collected by the agent is stored in an experience replay buffer[ [@lin1992self]]{style="color: black"} in the form of state transitions $(s_t, a_t, r_t, s_{t+1})$. The $Q$ function is updated using stochastic gradient descent on batches of data randomly sampled from the replay buffer.
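For concreteness, the bootstrapping target $\mathcal{R}(s, a, s^\prime) + \gamma \max_{a^\prime} Q(s^\prime, a^\prime)$ for a batch of transitions can be computed as below. This is a generic deep $Q$-learning sketch, not code from the paper; `q_next` stands in for (target-)network outputs $Q(s^\prime, \cdot)$:

```python
import numpy as np

# Bootstrapping targets r + gamma * max_a' Q(s', a') for a batch of
# transitions. Terminal transitions bootstrap from 0: their target is r.

def q_targets(rewards, q_next, terminal, gamma=0.99):
    return rewards + gamma * (1.0 - terminal) * q_next.max(axis=1)

rewards  = np.array([0.0, 1.0])
q_next   = np.array([[0.2, 0.5], [0.0, 0.0]])   # Q(s', .) per transition
terminal = np.array([0.0, 1.0])                  # second transition ends episode
print(q_targets(rewards, q_next, terminal))
```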
21
+
22
+ Our method, topological experience replay (TER), centers on performing a *reverse sweep* to update the $Q$-function in high-dimensional state spaces. Reverse sweep is known to accelerate $Q$-function convergence, yet it requires knowledge of the predecessors of each state, which is often unavailable in high-dimensional state spaces. We overcome this limitation by building a graph from the replay buffer and guiding the $Q$-function updates with a reverse sweep over the graph. Section [4.1](#sec:overview){reference-type="ref" reference="sec:overview"} presents the overview of our proposed algorithm. In Section [4.2](#sec:ter){reference-type="ref" reference="sec:ter"}, we describe the procedure for building the graph from high-dimensional states. Next, in Section [4.3](#sec:reverse-sweep){reference-type="ref" reference="sec:reverse-sweep"}, we describe how the graph is used to determine the update order for $Q$-learning. Finally, we present a batch mixing technique in Section [4.4](#subsec:batch_mixing){reference-type="ref" reference="subsec:batch_mixing"} that guarantees TER to converge.
23
+
24
+ Similar to prior works on reverse sweep in tabular domains [@dai2007prioritizing], we assume that the objective of the MDP is to reach terminal states (i.e., goal states), so that when an agent successfully completes the task, it must be in some terminal state. This assumption is required; otherwise, reverse sweep will not help the agent learn optimal $Q$-values faster. [Another assumption is that an agent must be able to visit the same state twice so that we can find the joint state between two episodes for building the graph. This is an admissible assumption since @Zhu2020Episodic shows that an agent can visit repeated states even in high-dimensional image state spaces [@bellemare2013arcade].]{style="color: black"}
25
+
26
+ Once the graph is built, the next step is to determine the ordering of states for updating the $Q$-values. As in common experience replay methods, the $Q$-function is periodically updated online with mini-batches of data sampled using a fixed replay ratio [@fedus2020revisiting]. Each batch of training data is collected via a reverse sweep. More specifically, we maintain a record of the vertices that correspond to terminal states, denoted by $V_{\mathcal{E}}$. Each terminal vertex $v_e$ acts as a root node. We perform a reverse breadth-first search (BFS) on the graph, starting separately from each member of a subset of terminal vertices sampled from $V_{\mathcal{E}}$. Each such $v_e$ serves as an initial frontier vertex $v^\prime$. For a frontier vertex $v^\prime$ in the search tree, we first sample its predecessors $v$ and append the corresponding state transitions sampled from $e(v, v^\prime)$ to a `batch_queue`. Then we set all the predecessors $v$ to be the new frontier vertices and fetch transitions. Once there are $B$ transitions in the `batch_queue`, we pop out $B$ state transitions for updating the $Q$-function. The next time the $Q$-function is updated, we resume the tree search from the current frontier vertices. Note that BFS expands each vertex only once, and thus infinite loops on cyclic (i.e., bi-directional) edges cannot occur. When there is no vertex left to expand, the reverse sweep is restarted from a new set of root vertices sampled from $V_{\mathcal{E}}$.
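The reverse BFS batch construction can be sketched as follows. This is our simplification, not the paper's code: `predecessors` and `edge_data` are hypothetical stand-ins for the graph $\mathcal{G}$, per-edge subsampling is omitted, and the search restarts from scratch on each call rather than resuming a saved frontier:

```python
from collections import deque

# Reverse breadth-first sweep: starting from terminal vertices, pop a
# frontier vertex v', push the transitions stored on its predecessor
# edges to the batch, and enqueue the predecessors. Each vertex is
# expanded at most once, so cycles cannot cause infinite loops.

def reverse_sweep_batch(predecessors, edge_data, terminal_vertices, B):
    search_queue, visited, batch = deque(terminal_vertices), set(), []
    while search_queue and len(batch) < B:
        v_next = search_queue.popleft()
        if v_next in visited:
            continue
        visited.add(v_next)
        for v in predecessors.get(v_next, ()):      # expand each vertex once
            batch.extend(edge_data[(v, v_next)])    # transitions into v_next
            search_queue.append(v)
    return batch[:B]

pred  = {"G": {"D"}, "D": {"C"}, "C": {"A", "B"}}
edges = {("D", "G"): ["tDG"], ("C", "D"): ["tCD"],
         ("A", "C"): ["tAC"], ("B", "C"): ["tBC"]}
print(reverse_sweep_batch(pred, edges, ["G"], 3))   # goal-adjacent transitions first
```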
27
+
28
+ Compared with uniform experience replay (UER) [@lin1992self; @mnih2015human], the extra memory needed is at most $|E|$ low-dimensional vectors in the hash-table's keys. Instead of growing endlessly, the graph $\mathcal{G}$ is pruned once the number of transitions $(s, a, r, s^\prime)$ on the graph is greater than a user-specified replay buffer capacity. For pruning details and the hyperparameter settings, see the Section [10](#sec:impl_detail){reference-type="ref" reference="sec:impl_detail"}. The time complexity of making a batch of training transitions is $O(B)$ as we fetch data from $B$ vertices. We show that the wallclock computation time is close to typical methods in Section [10.7](#subsec::compute){reference-type="ref" reference="subsec::compute"}.
29
+
30
+ One potential drawback of starting the reverse sweep only from terminal states is that states unreachable from the terminal states through the reverse search are never visited. Consequently, the $Q$-values of such states are never updated, which impedes the convergence of $Q$-learning. Prior work [@dai2007prioritizing] has shown that interleaving value updates at randomly sampled transitions with updates at transitions selected by any prioritization mechanism ensures convergence. Thus, we mix experience from TER and PER [@tom2016per] to form training batches using a mixing ratio $\eta \in [0, 1]$. For each batch, an $\eta$ fraction of the data is from PER, and $1-\eta$ is from TER. Additional details of batch mixing are provided in the supplementary material, Section [10](#sec:impl_detail){reference-type="ref" reference="sec:impl_detail"}.
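Batch mixing itself is straightforward; a minimal sketch (with placeholder samplers standing in for the actual PER and TER samplers, which are not shown here):

```python
import random

# An eta fraction of each training batch comes from PER and the rest
# from TER; sample_per / sample_ter are hypothetical stand-ins for the
# two samplers.

def mixed_batch(sample_per, sample_ter, B, eta):
    n_per = int(round(eta * B))
    batch = sample_per(n_per) + sample_ter(B - n_per)
    random.shuffle(batch)          # avoid ordering artifacts within the batch
    return batch

per = lambda n: ["per"] * n        # placeholder samplers
ter = lambda n: ["ter"] * n
b = mixed_batch(per, ter, 10, eta=0.3)
print(b.count("per"), b.count("ter"))  # 3 7
```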
31
+
32
+ :::: algorithm
33
+ **Input:** A hash function $\phi$, a warm up period $T_{\text{warm}}$, a batch size $B$
34
+
35
+ ::: algorithmic
36
+ Set empty graph $\mathcal{G} \leftarrow \{V=\emptyset, E=\emptyset\}$ and terminal vertices set $V_{\mathcal{E}} \leftarrow \emptyset$ Set `search_queue` and `batch_queue` as empty queues and $\text{\texttt{visited\_vertex}} \leftarrow \emptyset$ for BFS []{#line:trainloop_start label="line:trainloop_start"} []{#line:step_start label="line:step_start"} Take action $a_t$ and receive experience $(s_t, a_t, r_t, s_{t+1})$ []{#line:interact label="line:interact"} Add $\phi(s_t)$ and $\phi(s_{t+1})$ to $V$, and $e(\phi(s_t), \phi(s_{t+1}))$ to $E$ []{#line:update_graph label="line:update_graph"} Augment the terminal vertices set $V_{\mathcal{E}} \leftarrow V_{\mathcal{E}} \cup \{\phi(s_{t+1})\}$ if $\mathcal{E}(s_{t+1}) = 1$ []{#line:aug_terminal label="line:aug_terminal"} []{#line:step_end label="line:step_end"} []{#line:bfs_start label="line:bfs_start"} Add sampled vertices from $V_\mathcal{E}$ to `search_queue`, $\text{\texttt{visited\_vertex}} \leftarrow \emptyset$ []{#line:reset label="line:reset"} Pop $v^\prime$ from `search_queue` until $v^\prime \notin \text{\texttt{visited\_vertex}}$ []{#line:fetch_start label="line:fetch_start"}
37
+
38
+ Sample a subset of the predecessors edges $E_P(v^\prime) = \{e(u, u^\prime) \in E ~|~ u^\prime = v^\prime\}$ of $v^\prime$ Push $v \in \{ v ~ | ~ e(v, v^\prime) \in E_{P}(v^\prime)\}$ to `search_queue` []{#line:fetch_end label="line:fetch_end"} Push $(s, a, r, s^\prime)$ stored at each predecessor edge $e(v, v^\prime) \in E_{P}(v^\prime)$ to `batch_queue` Mark $v^\prime$ as expanded: $\text{\texttt{visited\_vertex}} \leftarrow \text{\texttt{visited\_vertex}} \cup \{v^\prime\}$ []{#line:mark label="line:mark"} []{#line:bfs_end label="line:bfs_end"} Pop a minibatch of experience $b = \{(s_i, a_i, r_i, s^\prime_i)\}^{B}_{i=1}$ from `batch_queue` []{#line:train_q label="line:train_q"} []{#line::build_batch label="line::build_batch"} Update $Q$ function with $b$ []{#line:trainloop_end label="line:trainloop_end"}
39
+ :::
40
+ ::::
2203.16517/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2203.16517/paper_text/intro_method.md ADDED
@@ -0,0 +1,123 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Deep Neural Networks (DNNs) have shown great promise as predictive models and are increasingly being used for various computer vision applications. However, their reliance on large-scale labeled datasets limits their use in practical scenarios encountered in the real world. The occurrence of objects in the real world inherently follows long-tailed distributions [@openlongtailrecognition; @zhang2021deep], implying that visual data sampled from the real world may not be readily available for all categories of interest at the same time. Models are therefore required to generalize and recognize novel objects semantically similar to the ones encountered during training, even though visual data for these novel classes is never seen by the model. Existing efforts tackle this problem by designing generalized zero-shot learning (GZSL) models that are equipped with the ability to generalize to unseen classes at test time.
4
+
5
+ Another important aspect of object occurrence in the real world is the gradual addition of object categories with time. This can be attributed to the discovery of new objects, or to the continuous nature of the data collection process, owing to which a previously rare object may have abundantly available samples at a later stage. However, existing GZSL approaches are not designed to tackle the dynamic addition of classes to the initial pool of seen and unseen categories, limiting their scalability and applicability in challenging practical settings. Fig. [1](#introfig){reference-type="ref" reference="introfig"} illustrates the shortcomings of a GZSL model in such practical settings where classes arrive over time. Evidently, the GZSL setting is limiting, and is unable to adapt to dynamic changes to the initial pool of categories brought by the gradual addition of new seen and unseen classes over time. This can be attributed to the fact that, since the previous data is no longer available, the GZSL model tends to *catastrophically forget* knowledge pertaining to previous tasks when sequentially trained and reused over time. This necessitates concerted effort toward carefully designing problem settings that resemble the occurrence of objects in the real world, and building models that can adapt over time and seamlessly tackle such challenges.
6
+
7
+ Very recently, sporadic efforts [@cln; @ghosh; @iisc] have been made towards designing models that can dynamically adapt and generalize on the addition of new seen and unseen classes. The aforementioned works term this setting *continual generalized zero-shot learning* (CGZSL). However, these efforts are nascent, and vary in the definition of the problem setting, practicality, data splits and protocols followed -- thus inhibiting fair comparison and a clear path forward. To address this issue, and motivated by the need for progress in this direction, in this work we first consolidate the different CGZSL settings tackled in these recent efforts, and clearly segregate existing methods and settings according to the challenges they tackle. We propose a more flexible and realistic *Online-CGZSL* setting which more closely resembles scenarios encountered in the real world. In addition, we propose a unified framework that employs a bi-directional incremental alignment-based replay strategy to seamlessly adapt and generalize to new seen and unseen classes that arrive over time. Our replay strategy is based on a feature-generative architecture, and hence does not require storing samples from previous tasks. We also use a static architecture for incremental learning (as opposed to a model-growing one) in order to facilitate scalability and efficiency. In summary, the key contributions of our work are as follows:
8
+
9
+ - We identify the different challenges of the relatively new CGZSL setting, and consolidate its different variants based on the challenges they tackle and restrictions they impose. We hope that this will enable fair comparison among such approaches and further progress in the field.
10
+
11
+ - We establish a practical, but more challenging, *Online-CGZSL* setting which more closely resembles real-world scenarios encountered in practice.
12
+
13
+ - We propose a novel feature-generative framework to address the different CGZSL setting variants, which avoids catastrophic forgetting through bi-directional incremental alignment thereby allowing forward semantic knowledge transfer from previous tasks and enabling generalization.
14
+
15
+ - We perform extensive experiments and analysis on three different CGZSL settings on well-known benchmark datasets: AWA1, AWA2, Attribute PASCAL and Yahoo (aPY), Caltech-UCSD-Birds (CUB) and SUN, demonstrating the promise of our approach. We observe that our model consistently improves over baselines and existing approaches, especially on the more challenging *Online* setting.
16
+
17
+ <figure id="fig:settings" data-latex-placement="t">
18
+ <img src="images/settings.png" style="width:100.0%" />
19
+ <figcaption><strong>Comparison of proposed settings with other known settings.</strong> In ZSL, the model is trained on seen classes and evaluated on unseen classes. In GZSL, the model is tested on both seen and unseen classes. CL models are trained on classes that arrive sequentially, but do not have unseen classes either during training or testing. Continual GZSL (CGZSL) settings (highlighted with a *) are proposed in this work. In static-CGZSL, classes that arrive in the future are considered unseen. In dynamic-CGZSL, each task has a disjoint set of seen and unseen classes. Online-CGZSL allows for the conversion of previously unseen classes to seen (based on availability of data) in addition to handling new seen and unseen classes at each task.</figcaption>
20
+ </figure>
21
+
22
+ # Method
23
+
24
+ In this section, we consolidate and provide a holistic overview of the various CGZSL settings tackled by recent efforts and discuss in detail the major differences in their formulations. We also describe our proposed *Online-CGZSL* setting which is more flexible and resembles real world scenarios more closely. Fig. [2](#fig:settings){reference-type="ref" reference="fig:settings"} illustrates a pictorial representation of the various CGZSL settings and strives to correlate and appropriately position them w.r.t. other related limited-supervision and continual learning settings. We now describe each of the CGZSL setting variants below.
25
+
26
+ In this setting [@cln; @bookworm; @ghosh; @azsl], the dataset is divided into $T$ subsets, and the model encounters each of these subsets in an incremental fashion over time. The setting considers all previously encountered tasks as seen and future tasks as consisting of unseen classes. Formally, for a given task $\mathcal{T}_t$ at a given time step $t$, the first $t$ subsets, i.e., the data belonging to the current and previous tasks, are considered as seen classes while the future tasks are considered unseen.
27
+
28
+ This setting differs from traditional GZSL in the fact that during evaluation of the $t^{th}$ task, previous training data is unavailable. Thus the model should be capable of retaining previously learned knowledge while adapting to the newly encountered seen classes. However, *static-CGZSL* presents a constrained setting which requires the total number of classes or tasks to be known beforehand (hence the name *static*). Furthermore, the setting mandates that all tasks till the current time step $t$ are considered as seen classes, with only the future tasks containing the unseen classes. Thus the dynamic addition of classes is restricted to the conversion of an unseen class to seen after a particular task is encountered while learning continually. While it is reasonable to assume that visual features of unseen classes may become available in the future, it may be non-viable to assume that novel seen or unseen categories, unknown at the beginning of training, will not be added in the future. This fundamentally limits the notion of continual learning, where a model should be adaptable to any number of tasks or additions of new classes.
29
+
30
+ Considering the limitations of the *static-CGZSL* setting, [@iisc; @iisc2] proposed another setting where each task has an exclusive set of seen and unseen classes and the model can accommodate any number of tasks over time. We categorize this setting as *dynamic-CGZSL*; it is less restrictive than *static-CGZSL* as it allows for the addition of both seen and unseen classes in a continual manner. However, this setting imposes the constraint that previously unseen classes may never become seen in the future, and is thus unable to tackle the scenario where a rare class has abundantly available samples at a later stage. This is limiting in practice: owing to the continuous nature of data collection, it is plausible that some of the unseen classes may become seen when their data becomes available over time.
31
+
32
+ In order to better align with the scenarios commonly encountered in the real world, we introduce a new *Online-CGZSL* setting which can handle a variety of dynamic changes in the pool of seen and unseen classes. Specifically, each task has a disjoint set of seen and unseen classes, and an arbitrary number of such tasks/categories can be dynamically incorporated on the fly. Importantly, this setting allows the conversion of previously unseen classes into seen ones if the corresponding visual features become available, depending on changes in data availability in the future. Note that our setting is more flexible as it does not require knowledge of the entire pool of categories beforehand, impose any restrictions on the conversion of unseen classes to seen classes, or require task-level supervision at test time. Next, we formally describe and formulate our proposed *Online-CGZSL* problem setting.
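The evolution of the class pools under Online-CGZSL can be summarized with a small bookkeeping sketch. This is purely illustrative; the class names and helper names are hypothetical, not part of the proposed framework:

```python
# Each task may add new seen classes, add new unseen classes, and
# promote previously unseen classes to seen once their visual data
# arrives (the distinguishing feature of the Online-CGZSL setting).

class OnlineCGZSLPool:
    def __init__(self):
        self.seen, self.unseen = set(), set()

    def new_task(self, new_seen=(), new_unseen=(), promoted=()):
        assert set(promoted) <= self.unseen, "can only promote known unseen classes"
        self.unseen -= set(promoted)
        self.seen |= set(new_seen) | set(promoted)
        self.unseen |= set(new_unseen)

pool = OnlineCGZSLPool()
pool.new_task(new_seen={"cat", "dog"}, new_unseen={"zebra"})
pool.new_task(new_seen={"horse"}, new_unseen={"whale"}, promoted={"zebra"})
print(sorted(pool.seen), sorted(pool.unseen))
# ['cat', 'dog', 'horse', 'zebra'] ['whale']
```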
33
+
34
+ Let subscripts $s$ and $u$ denote seen and unseen classes respectively. Each task $t$ consists of train and test data. Let $A^t$ be the union of the attributes of the seen ($A_{s}^{t}$) and unseen ($A_{u}^{t}$) classes encountered so far. The training data at task $t$ is given by $D_{tr}^{t}$ = {(training visual samples of the current task $t$ ($X_{s_{tr}}^{t}$), their class labels ($Y_{s_{tr}}^{t}$), class attributes ($A^t$))}. The test data at task $t$ is $D_{te}^{t}$ = {(test visual samples of the seen and unseen classes encountered so far ($X_{s_{te}}$ and $X_{u_{te}}$), their class labels ($Y_{s_{te}}$ and $Y_{u_{te}}$), class attributes ($A^t$))}. Let $R^t$ be the replayed seen visual features of previous tasks. During the training phase of each task, the data pertaining to the current task together with $R^t$ is used as training data. At the end of each task, the model's performance is evaluated by testing it on the seen and unseen data of the current as well as previous tasks. We operate in the semantically transductive setting and do not use unseen visual features during training.
35
+
36
+ The overall framework of our approach is shown in Fig. [3](#fig:my_label){reference-type="ref" reference="fig:my_label"}. Given a time step $t$, we first train a generator $G$ which employs a cosine similarity based formulation, enabling it to dynamically incorporate any number of categories or tasks over time (Sec. [4.1](#gan){reference-type="ref" reference="gan"}). In order to ensure that the generated visual features are discriminative w.r.t. the class distribution at the current time step, we impose a normalized discriminative loss (Sec. [4.2](#real_class){reference-type="ref" reference="real_class"}). In addition, we utilise incremental bi-directional alignment in order to adapt, ensure knowledge transfer from the previous $(t-1)$ tasks, and strengthen the semantic relationships among classes encountered till time $t$, reducing catastrophic forgetting (Sec. [4.3](#iba){reference-type="ref" reference="iba"}). At time step $(t+1)$ we use a generative replay strategy to generate seen-class visual features ($R^t$) for all previous tasks and combine them with the seen-class samples from the current task (Sec. [4.4](#gen_replay){reference-type="ref" reference="gen_replay"}). This procedure is repeated for each subsequent time step.
37
+
38
+ We learn a generative model which comprises a generator $G_{\theta}:\mathcal{Z}\times\mathcal{A}\to \mathcal{X}$ and a discriminator $D_{\phi}:\mathcal{A}\to \mathcal{X}$. The generator takes as input random noise $z\in Z^{d}$ and class attributes $a \in A^t$, and outputs a generated visual feature belonging to the same class. On the other hand, the discriminator $D_{\phi}$ takes class attributes $a \in A^t$ as input and outputs an identifier projection of the attribute. *An identifier projection is a projection of the attribute in the visual space*. $G_{\theta}$ and $D_{\phi}$ are trained adversarially: the discriminator tries to minimize the cosine similarity between the identifier projection and generated seen features belonging to the same class, while the generator tries to maximize this cosine similarity. In addition to minimizing the aforementioned cosine similarity, the discriminator tries to maximize the cosine similarity between real visual features and the corresponding identifier projection. Through this adversarial training, $G_{\theta}$ learns to generate visual features similar to the real visual features and $D_{\phi}$ learns a better mapping for the attributes. Training $G_{\theta}$ and $D_{\phi}$ adversarially with the cosine similarities between the generated seen features ($X'$), the identifier projection ($D(a)$), and the real visual features ($X^{t}_{s_{tr}}$ $\cup$ $R^t$) is formulated as: $$\begin{equation}
39
+ \begin{split}
40
+ L_{GAN}
41
+ &= \mathbb{E}_{x\sim p_{data}(X^{t}_{s_{tr}}\cup R^t)}\left[ \log \left[ \cos(x,D(a)) \right]\right]\\
42
+ &+ \mathbb{E}_{x'\sim p_{\theta}(X'|a)}\left[ \log \left[1 - \cos(x',D(a)) \right]\right]
43
+ \label{eqn:1}
44
+ \end{split}
45
+ \vspace{-3pt}
46
+ \end{equation}$$ where the distributions of the real and generated visual features are denoted by $p_{data}(X^{t}_{s_{tr}} \cup R^{t})$ and $p_{\theta}(X'|a)$, respectively. During testing, we compute the cosine similarity between the test sample and the identifier projections of all the class attributes encountered so far. The test sample is assigned the class label of the identifier projection with which it shares maximum similarity. Thus we are able to achieve continual generalized zero-shot classification using a single GAN model, without the need for training a linear classifier during each task. The proposed architecture is simple and allows easy adaptation to an increasing number of classes, as classification is performed by merely computing cosine similarities between the identifier projections of the classes encountered so far and the test sample.
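The test-time rule described above reduces to a nearest-neighbor search under cosine similarity; a minimal sketch with toy vectors (the projection values and class names are hypothetical):

```python
import numpy as np

# A test sample is assigned the label of the identifier projection D(a)
# with which it has maximum cosine similarity.

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def classify(x, projections):
    # projections: {class_label: identifier projection D(a) in visual space}
    return max(projections, key=lambda c: cosine(x, projections[c]))

proj = {"cat": np.array([1.0, 0.1]), "dog": np.array([0.1, 1.0])}
x = np.array([0.9, 0.2])                 # toy test visual feature
print(classify(x, proj))                 # cat
```

Adding a class at a later task only requires adding its projection to the dictionary, which is why no per-task linear classifier is needed.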
47
+
48
+ As new tasks or classes gradually arrive over time, the specific features that best discriminate among the pool of classes change dynamically. Thus, to ensure that the generated visual features are discriminative w.r.t. the class distribution at any given time step $t$, we impose a set of losses on the generated visual features from the current and all previous time steps. Specifically, we compute the softmax over the cosine similarities between all identifier projections and a visual feature; the class label of the identifier projection with the highest softmax score is the predicted label. We use three classification losses: (i) the real classification loss ($L_{rcl}$) is the classification loss on real visual features (current task + replayed samples); $L_{rcl}$ enforces the discriminator to consider inter-class distances while mapping the attributes. (ii) The classification loss on generated seen visual features, called the pseudo-visual classification loss ($L_{pcl}$), is added to the generator; $L_{pcl}$ encourages the generator to generate more discriminative visual features. (iii) Since visual features of unseen classes are unavailable, a classification loss on generated unseen visual features, called the seen-normalized loss ($L_{snl}$), is added to the discriminator; $L_{snl}$ serves as a reference for finding an appropriate mapping for unseen attributes.
49
+
50
+ The classification losses $L_{rcl}$, $L_{pcl}$ and $L_{snl}$ are defined as follows: $$\begin{equation}
+ \begin{split}
+ L_{rcl}, L_{pcl}, L_{snl}
+ &= c\_e\left( \log \frac{\exp(\cos(x,D(a_{i})))}{\sum_{j\in A^{t}} \exp(\cos(x,D(a_{j})))}, y_{i} \right)
+ \label{eqn:2}
+ \end{split}
+ \end{equation}$$
59
+
60
+ $x$ corresponds to the real visual features in $L_{rcl}$, the generated seen visual features in $L_{pcl}$ and the generated unseen visual features in $L_{snl}$. $c\_e$ stands for cross entropy and $y_{i}$ is the true class label of $x$. The attribute set $A^{t}$ covers only seen classes in $L_{rcl}$ and $L_{pcl}$, but spans both seen and unseen classes for $L_{snl}$. During testing, classification spans all classes encountered till $t$.
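The shared form of Eq. (2), cross entropy over a softmax of cosine similarities against the identifier projections, can be sketched as follows. This is a hypothetical NumPy sketch, not the authors' code; the function names and the max-subtraction for numerical stability are our own additions.

```python
import numpy as np

def cosine_matrix(X, P):
    # (N, C) matrix of cosine similarities between features and projections.
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    Pn = P / np.linalg.norm(P, axis=1, keepdims=True)
    return Xn @ Pn.T

def classification_loss(X, proj, y):
    """Cross entropy over the softmax of cosine similarities (Eq. 2).

    X:    (N, d) visual features (real for L_rcl, generated for L_pcl / L_snl)
    proj: (C, d) identifier projections D(a_1..a_C)
    y:    (N,)   integer class labels
    """
    sims = cosine_matrix(X, proj)
    logits = sims - sims.max(axis=1, keepdims=True)   # numerical stability
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(y)), y].mean()
```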
61
+
62
+ Owing to the nature of the CGZSL setting, the visual feature space changes dynamically as new seen and unseen classes are added over time. Furthermore, the visual features of seen classes from previous time steps and of the entire pool of unseen classes are unavailable at the current time step. There is thus a need for a mechanism that transfers knowledge forward from previous tasks to avoid catastrophic forgetting, and that exploits the current semantic structure to generate better visual features (especially unseen ones). To this end, we propose an incremental bi-directional alignment loss ($L_{iba}$) consisting of a nuclear loss and a semantic alignment loss. The semantic alignment loss uses semantic information [@lsrgan] as a reference for generating unseen visual features, while the nuclear loss aids in transferring visual information from seen classes to identifier projections (projections of attributes in the visual space).
63
+
64
+ The visual similarity between two classes $c_{i}$ and $c_{j}$ is the cosine similarity between their class means, represented as $X_{sim}(\mu_{c_{i}};\mu_{c_{j}})$ where $\mu_{c_{i}}$ is the mean visual feature of class $i$. Let the semantic similarity between the classes be represented as $\tau_{sim}(a_{c_{i}},a_{c_{j}})$ where $a_{c_{i}}$ is the attribute of class $i$. The semantic alignment loss constrains $X_{sim}(\mu_{c_{i}};\mu_{c_{j}})$ to lie within $\epsilon$ (a hyper-parameter) of $\tau_{sim}(a_{c_{i}},a_{c_{j}})$, thus transferring the semantic structure to the generated features. With the addition of new classes, the semantically similar classes of $c_i$ may change; hence we incrementally calculate the semantic alignment loss ($L_{sal}$) for all classes encountered so far. The nuclear loss is the L2 norm between $\mu_{c_{i}}$ and the corresponding identifier projection.
65
+
66
+ The incremental semantic alignment and nuclear losses are given by: $$\begin{equation}
+ \begin{split}
+ L_{sal}
+ &= \min_{\theta_{g}} \frac{1}{N}\sum^{N}_{i=1}\sum_{j\in I_{c_{i}}} ||\max (0,X_{sim}(\mu_{c_{j}}, \mu'_{c_{i}})\\
+ &- (\tau_{sim}(a_{c_{j}},a_{c_{i}}) + \epsilon))||^{2} + \\
+ &||\max (0,(\tau_{sim}(a_{c_{j}},a_{c_{i}})- \epsilon) -X_{sim}(\mu_{c_{j}}, \mu'_{c_{i}}))||^{2}
+ \label{eqn:6}
+ \end{split}
+ \end{equation}$$ $$\begin{equation}
+ \begin{split}
+ L_{nuclear}=||\mu_{c_{i}}-S_{c_{i}}||^{2}
+ \label{eqn:7}
+ \end{split}
+ \end{equation}$$
82
+
83
+ :::: algorithm
+ ::: algorithmic
+ **Input:** $D^t_{tr}$, $D^t_{te}$, $G$, $D$\
+ **Output:** Predicted labels $y_{pred}$\
+ **Parameters:** $\theta_{G}$, $\phi_{D}$\
+ $R^{t}$ = replay data till task $t-1$\
+ $X^{t}_{s_{tr}}$ = $X^{t}_{s_{tr}} \cup R^{t}$ `// seen features (real + replayed)`\
+ $X_{u}^{'}$ = $G(z,A_{u}^t)$ `// unseen pseudo-visual features`\
+ $L_{GAN}$ $\leftarrow$ eqn ([[eqn:1]](#eqn:1){reference-type="ref" reference="eqn:1"}) `// using` $X^{t}_{s_{tr}}$, $X_{s}^{'}$ `and` $D(A_{s}^{t})$\
+ $L_{rcl}$ $\leftarrow$ eqn ([[eqn:2]](#eqn:2){reference-type="ref" reference="eqn:2"}) `// using` $X^{t}_{s_{tr}}$ `and` $D(A_s^t)$\
+ $L_{snl}$ $\leftarrow$ eqn ([[eqn:2]](#eqn:2){reference-type="ref" reference="eqn:2"}) `// using` $X_{u}^{'}$ `and` $D(A^t)$\
+ $L_{D}^{t}$ = $\lambda_{1}$ $L_{GAN}$ + $\lambda_{2}$ $L_{rcl}$ + $\lambda_{3}$ $L_{snl}$ `// overall D loss`\
+ $\phi_{D}$ = $\phi_{D}$ - $\eta_{1}$ $\times$ $\nabla$ $L_{D}^{t}$ `// update D`\
+ $L_{pcl}$ $\leftarrow$ eqn ([[eqn:2]](#eqn:2){reference-type="ref" reference="eqn:2"}) `// using` $X_{s}^{'}$ `and` $D(A_s^t)$\
+ $L_{iba}$ $\leftarrow$ eqn ([[eqn:6]](#eqn:6){reference-type="ref" reference="eqn:6"}) and eqn ([[eqn:7]](#eqn:7){reference-type="ref" reference="eqn:7"}) `// using` $X_{s}^{'}$ `and` $X_{u}^{'}$\
+ $L_{G}^{t}$ = $\lambda_{1}$ $L_{GAN}$ + $\lambda_{2}$ $L_{pcl}$ + $\lambda_{4}$ $L_{iba}$ `// overall G loss`\
+ $\theta_{G}$ = $\theta_{G}$ - $\eta_{2}$ $\times$ $\nabla$ $L_{G}^{t}$ `// update G`\
+ Inference: model is evaluated on $D_{te}^{t}$
+ :::
+ ::::
92
+
93
+ where $N$ is the number of classes encountered so far. For a given class $c_{i}$, $S_{c_{i}}$ is the identifier projection, $\mu_{c_{i}}$ is the mean of real visual features and $\mu'_{c_{i}}$ is the mean of generated visual features. The mean of real visual features is available only for seen classes. $I_{c_{i}}$ denotes the set of $n_{c}$ nearest neighbours of $c_{i}$.
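The per-pair banded hinge in Eq. (6) can be sketched directly: the visual similarity of a class pair is penalised only when it leaves the band $[\tau_{sim}-\epsilon,\ \tau_{sim}+\epsilon]$. This is a minimal sketch under the assumption of scalar similarities; `sal_terms` is a hypothetical name.

```python
import numpy as np

def sal_terms(x_sim, tau_sim, eps):
    """Per-pair semantic alignment penalty (the two max terms of Eq. 6).

    Pushes the visual similarity x_sim of a class pair into the band
    [tau_sim - eps, tau_sim + eps] around the semantic similarity.
    """
    over = np.maximum(0.0, x_sim - (tau_sim + eps)) ** 2   # above the band
    under = np.maximum(0.0, (tau_sim - eps) - x_sim) ** 2  # below the band
    return over + under
```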
94
+
95
+ We work in a setting where data arrives incrementally and samples from previous tasks are not accessible during the current task, which leads to catastrophic forgetting. In order to retain previously learned knowledge while acquiring new knowledge, we use generative replay. Visual features of previously seen classes are generated by passing a concatenation of attribute and noise vectors to the generator network. A combination of generated features from previous tasks and real features from the current task acts as the input data for training the model. To ensure that we replay credible, good-quality visual features at time step $t$, we classify the generated visual features and replay only those that are classified correctly. $$\begin{equation}
96
+ \begin{split}
97
+ R^{t} = G (z,A^{\leq t-1}_{s})
98
+ \label{eqn7}
99
+ \end{split}
100
+ \vspace{-12pt}
101
+ \end{equation}$$ where $z$ $\sim$ $\mathcal N(0,1)$ and $A^{\leq t-1}_s$ denotes attributes of all the seen classes encountered so far.
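The replay step, including the credibility filter described above, can be sketched as follows. This is a hypothetical NumPy sketch, not the authors' code: `build_replay`, the `classify` callback and the per-class sample count are our own names and choices.

```python
import numpy as np

def build_replay(generator, classify, attrs_seen, labels_seen,
                 n_per_class=5, dim_z=16, rng=None):
    """Generative replay with a credibility filter: synthesize features for
    every previously seen class and keep only those the classifier labels
    correctly (a sketch, not the authors' implementation)."""
    rng = np.random.default_rng(rng)
    feats, labels = [], []
    for a, y in zip(attrs_seen, labels_seen):
        z = rng.standard_normal((n_per_class, dim_z))    # z ~ N(0, 1)
        x = generator(z, np.tile(a, (n_per_class, 1)))   # pseudo-visual features
        keep = classify(x) == y                          # replay only correct ones
        feats.append(x[keep])
        labels.append(np.full(keep.sum(), y))
    return np.concatenate(feats), np.concatenate(labels)
```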
102
+
103
+ For a given time step $t(>1)$, we concatenate the replayed visual features and the current task's visual features from seen classes, along with their corresponding labels. The concatenated data acts as the input for training the generative model. During training, the discriminator and generator are trained sequentially on different tasks. After training, the discriminator has learned a mapping from attributes to the visual space, and the generator has learned to generate synthetic visual features conditioned on attributes. During testing, we classify the test sample using cosine similarity as described in Sec. [4.2](#real_class){reference-type="ref" reference="real_class"}. Depending on the setting, accuracies are computed as described in the Appendix.
104
+
105
+ ::: table*
106
+ []{#table1 label="table1"}
107
+ :::
108
+
109
+ ::: tabular
110
+ lllll & &\
111
+ & AWA2 & CUB & AWA2 & CUB\
112
+ A-CZSL &0.09&0.14&0.07&0.13\
113
+ Tf-GCZSL &0.09&0.07&0.09&0.09\
114
+ DVGR-CZSL &0.15&0.14&0.12&0.12\
115
+ NM-ZSL &0.11&0.08&0.10&0.08\
116
+ Ours &0.09&0.13&0.09&0.11\
117
+ :::
118
+
119
+ []{#forgettingtable label="forgettingtable"}
120
+
121
+ ::: table*
122
+ []{#table2 label="table2"}
123
+ :::
2204.02071/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2021-11-22T11:29:21.984Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36" etag="cLbbfvazhKbW4mNZBDgM" version="15.8.2"><diagram id="bWhlT7AJAoevReduld1m" name="Page-1">7V1Lc6M4EP41rpo52IUkQOKYx+zuZapmaw6bOW1hwDY1xGQxSZz59SsZybxkjAmRPIAvASGE0v3111KrETN097j/M3GfNl9jP4hm0PD3M3Q/gxDYGNE/rOQtK3GQmRWsk9DnlfKC7+GvgBcavPQ59INdqWIax1EaPpULvXi7Dby0VOYmSfxarraKo/JTn9w1f6KRF3z33CioVfsn9NNNVkogzsv/CsL1RjwZ2E525dEVlXkTu43rx6+FZ6EvM3SXxHGaHT3u74KICU/IJevQHyeuHjuWBNu0zQ271x1+uLc39gNcbn3//u/V/OscQN7dFzd65v8y7276JmSQxM9bP2DNgBm6fd2EafD9yfXY1VeqdVq2SR8jfnkVb1OuRtohdLtLk/hncBdHcXJoDS2JZVr5FSFVk5YkceqmYbylp3OHVVmFUVS4dUW8wPPEQ3gHuSBegiQN9idlA44Sp1AN4scgTd5oFX6DhazsFo5SE9gLjtPXXOuAcIxsChqHBl4ALkSXg219fECuD3rAVXKJerBq7dgeCZarTtrx3YCsmHZ6UAiwwcIEJZ0ACCQ6QZZynfSskpoKjMNPqgI38bj2gNGPoKFTRr6AeEHEMtQLqfcuXSiRrh2lTBpP7rYkZvu/Z8adt/SfT+duFK4pMm9ojShYpTOYycdYut7P9UEfcy8TMauSrJefoGWxWpD206gcf84bp0dr/vfQC2ZA0l6wC/PdQTnsCVQHe1krFMHmr5n1ZYZvP4HPM3z/Lz2iMLhB9Di7LB5F5Zc9rdyDZVItoRUz4YjiChyZgJowt423QcWceRGX6r1H0RTQ8luGtZB6xxt+4TH0ffYYKchzM+gLrAKKghKMOloBkMAVfRRcxcMGygbA1kwHQMa2g+cDqtwbe+KDy/lA4r0U84HMfw2HD6Cpmw/QGPkA03uciQ8u5gMocV+K+cAcNB8gqJsPrDHyAVPnDXU1EyNcyghI4sDUMgLqO6SjlxEsUomc6Z4xIDICRthPUYMLIIrLHGBqnyWgvoPu18UBlu5Zgnj+8DlgihR04wBL+8zAbMEBwda/YWuYTI6Ru9uFXlkZZTFV13bYuYwSqMiStwfexOHkB2thYYnT+z1vMTt7E2f7MH0oHBfuomf5TexE3FOmH3hUZuCLZddTqqSCiJ8TL2ga/GO5zgs6la0OibIkiNw0fCn3Q6Zn/oRvcXiwWg4phCpuBVSgkvWf35WjRdJQZQgFnYVT+JFys6mbrIO01uwBhEchdMelJeNOJbisY0UlUtviUhfcbFhBiWEsjMIPdgNfrVm94GsRPlUGvg4g6grY3w18ttGIkq7gsxFeAGKdwvRHg69FrE4V+Hr0ktmEWJuXtCteEnf0klWsHCcZqtAhizTqpqYjHf0QnPNu/9YCUVAnoMzqbL4roKrDruNatypAyQJV4wQU0OrcPgxRUDVFaZtSXh+i0CARdVxvU4Qo3GI87j0nL8c4pXp4Ebs0PjcMs3mETk++BUlIJcPCUpNrlMIOV2DX0jVS1btvhWpPrMKuocOg3GFgGY39MishFxGdO1kflusL88mtJOtxvzbTYqCox2bE5NRYAHgsYGYzp3aDzgZV2FnVdHI7BAgVDJG1iM/MlRVYIjQypF6tp4AOKluA0dFkTUJEdrwAu+UsCM7n6Bh/iA0fBznCJg0VNtZipKPFxtoCWNt0HJdBApzOLqKKtg9yEVYFXlgBvEjf7
3goeSmK9/l9r91UlsmM63jnhrQYh/5GC+aU9ReWUwjel52A9hwaMoYs2ymH5hLEYtKIWO0pNWRoibaNDKE9w4aMIe92yrDpjyH0J9wQe0wMYevOzCeyPOdBMsT0dk4vDGFrT80nLVbkhsMQWPLytFqGkAVxBskQ0/s6PXEElng1tRwh4mXTGiuPEmhb7LIZUkroIFWqar/M2m62pWjV1WkRfhwLxqyRYMxWvLLvtIiojgVjWl8jUIkxR2dutyMLKlcQN+WSDNO/YlRtC6MuWLx49dCo/RPW2ZyR2i02OHMLqN2CVayCOy1i3lOmiSI7bZFqotfRQAe0WsK5PPPEXjikYMrl3DFm6MU0lI8xdIjqhk5UmOC1Jnu1xbe+RBRYU1j3dMVaW7jTa2wXuxez7issBagDvW8r9LvsC1tNB7qaTWGlGxG9RyO6t3SgI7YiqZekrnuTWABka3lDCx1P28R2BS+bupwE7xVsGtv7tt7XTBZXsIfsGLaEmjaR/Ri20J7ZBsSgZRRsoX+HWRGIHxdbTEksfbDFFWw4C1ssHg6GLfTvPyukOy62mBJa+uEL/dvRAthiWW4kC8E8rKArJIqtJqi8Y+OcVrMzRavAAF7RLkva8aZ1ayYteFO9JwqA075NlanNqPCGgM6cFwBbvHQyJb0M3wfbjRi1VCxW2k5jnOvM1isYNM57zyTVYKNxFKxiWxYAWwTdp2wZVQbeZmcWve7KIqDNKtKlVICxvZDYv4KdWizYaMIqMtYAbPGK1ZQvI/UgsAmNXbdxaW4WmSocE0aNuFSx4QtAfb8cfDahhsAlsm1pFPVsQo0VEN8UD+n1M9i1PN0To2u9X8Ue3DeU6mK3c/np+9biGNbNp1e4L41mV5BKpEhVHMUe3BeV6i9MmFfACOP5sNK0wvUOTsBy76X606tX9BrtrMcJtOYPOlh1F4Drg72LorcSpiOXzjrOTwXoaRIze8+rU5PYfI39gNX4Hw==</diagram></mxfile>
2204.02071/main_diagram/main_diagram.pdf ADDED
Binary file (27.1 kB). View file
 
2204.02071/paper_text/intro_method.md ADDED
@@ -0,0 +1,110 @@
1
+ # Introduction
2
+
3
+ The volume of data, measured in terms of IP traffic, is currently witnessing exponential year-on-year growth [@forecast2019cisco]. Fuelled by the demand for high-resolution media content, an estimated 80% of this data takes the form of images and video [@forecast2019cisco]. Data service providers, such as cloud and streaming platforms, have consequently seen transmission and storage costs become prohibitive. For example, increased demand for streaming services forced major providers to throttle the maximum resolution of video content to 720p during the coronavirus pandemic. These challenges have renewed the need for high-performance data compression codecs.
4
+
5
+ One solution to this problem has been the development of approaches using likelihood-based generative models capable of discrete density estimation  [@mentzer2019practical; @townsend2019practical; @hoogeboom2019integer; @{berg2020idf++}; @ho2019compression; @townsend2019hilloc; @kingma2019bit; @mentzer2020learning; @cao2020lossless; @zhang2021ivpf; @zhang2021iflow]. Such methods operate by learning a deep probabilistic model of the data distribution, which, in combination with entropy coders, can be used to compress data. Here, according to Shannon's source coding theorem [@mackay2003information], the minimal required average codelength is bounded by the expected negative log-likelihood of the data distribution.
6
+
7
+ From this family of generative models, there have emerged three dominant modes for data compression: *normalizing flows* [@hoogeboom2019integer; @{berg2020idf++}; @zhang2021ivpf; @zhang2021iflow], *variational autoencoders* [@townsend2019hilloc; @kingma2019bit; @mentzer2020learning] and *autoregressive models* [@{salimans2017pixelcnn++}; @van2016conditional; @jun2020distribution] [^2]. Each of these approaches can be thought of as a point on the Pareto frontier of inference speed and compression performance. Broadly speaking, autoregressive models are often the most powerful but the slowest; variational autoencoders are often the weakest but the fastest; and normalizing flows -- depending on the variant -- sit somewhere in between.
8
+
9
+ In this paper, we consider data compression with VAEs, and focus on extending the efficient frontier: obtaining solutions faster than popular VAEs that achieve state-of-the-art compression ratios. Use of VAEs, however, poses two outstanding challenges. Firstly, we should achieve competitive coding ratios without greatly sacrificing time complexity. For example, the best current models require one of two ingredients to improve performance: either a deep hierarchy of latent variables [@child2020very] or autoregressive priors [@pmlr-v70-reed17a; @gulrajani2016pixelvae]. The latter idea, especially popular in the codecs of the *lossy* compression community [@NEURIPS2018_53edebc5], posits a model that flexibly learns both local (via autoregression) and global (via hierarchical latent representation, e.g. low-frequency information) data modalities. Whilst such approaches, including MS-PixelCNN [@pmlr-v70-reed17a] and PixelVAE [@gulrajani2016pixelvae], have had some success in achieving more efficient trade-offs, generation of even moderately sized images still takes on the order of minutes [@mentzer2019practical].
10
+
11
+ Secondly, there should exist a practical means by which to efficiently perform single-image compression. Single-image compression then permits parallel coding, which is highly desirable. However, translating a VAE into a lossless codec is currently achieved using the *bits-back* coding framework (predominantly, bits-back ANS), which requires a large number of *initial bits* [@townsend2019hilloc; @wallace1990classification; @hinton93keeping] (see Section [3.1](#sec:bbans){reference-type="ref" reference="sec:bbans"}). Whilst this is a trivial number of bits on large image datasets (where we can amortize this cost), it renders bits-back an impractical approach for single-image compression. Furthermore, even large datasets are often coded such that images are interlinked. Access to a single image in the middle of a sequence would therefore require all prior images in the bitstream to be additionally decompressed.
12
+
13
+ To that end, we propose two novelties for use in VAE-based compression designed to address these challenges. The first, our *autoregressive sub-pixel convolution*, introduces a simple autoregressive factorisation -- not dissimilar from the transformations used in normalizing flows [@{berg2020idf++}; @zhang2021ivpf; @zhang2021iflow] -- designed to present an efficient interpolation between fully-factorised probability distributions and impractical per-pixel autoregressions. Built from a modified space-to-depth convolution operator, we losslessly downsample data variables before performing a computationally efficient autoregression along the channel dimension. Our autoregressive operator thus requires a number of network evaluations invariant to the data dimensions, with each autoregression crucially performed on a downsampled version of the input tensor. More broadly, we view this framework as a generalisation of many popular autoregressive *context* models used in data compression [@{salimans2017pixelcnn++}; @DBLP:conf/icip/MinnenS20; @gulrajani2016pixelvae; @Zhang_2020_ACCV].
14
+
15
+ Our second contribution, *autoregressive initial bits*, presents a general framework for avoiding the impracticalities of bits-back ANS, allowing for eminently parallelizable coding. This technique, highly compatible with our autoregressive model, partitions the data variable into two *splits* such that the second partition is conditionally independent of the latent variable(s), given the first. In this way, we illustrate how we can use the entropy coding of the conditionally independent partition to both *supply* and *remove* the initial bits necessitated by bits-back ANS. We demonstrate that this approach reduces the bit overhead on a per-image basis by close to 20x.
16
+
17
+ Finally, we combine the above contributions to present our codec, *Split Hierarchical Variational Compression* (SHVC). SHVC posits a hierarchical VAE of general-form autoregressive priors that permits parallel coding. Using our framework, we outperform all other VAE-based compression approaches with fewer latent variables and a comparable number of neural network evaluations. We further illustrate the effectiveness of our architecture by training a small model which outperforms the similar VAE-based approach Bit-Swap [@kingma2019bit] -- but with 100x fewer model parameters.
18
+
19
+ # Method
20
+
21
+ Our method posits a hierarchical VAE where we parameterize the priors using an autoregressive factorisation. We begin by defining a lossless downsampling convolution operator, before describing its application to density estimation using both *weak* and *strong* autoregressive models. We then describe how this autoregressive structure can be leveraged to avoid many of the challenges associated with bb-ANS *without* sacrificing the performance of stochastic posteriors. Finally, we describe how these contributions can be combined to form our SHVC codec.
22
+
23
+ <figure id="fig:ps" data-latex-placement="!tbp">
24
+ <img src="sbpc_conv" />
25
+ <figcaption>Left: a <span class="math inline">3 × 4 × 4</span> input RGB image. Centre: the image <span class="math inline"><em>x</em></span> downsampled using the convolution operator of <a href="#eq:conv_op" data-reference-type="eqref" data-reference="eq:conv_op">[eq:conv_op]</a> with <span class="math inline"><em>k</em> = 2</span>. Right: the image <span class="math inline"><em>x</em></span> downsampled using the convolution operator of <a href="#eq:conv_op2" data-reference-type="eqref" data-reference="eq:conv_op2">[eq:conv_op2]</a> with <span class="math inline"><em>k</em> = 2</span>.</figcaption>
26
+ </figure>
27
+
28
+ The space-to-depth and depth-to-space transformations are popular operations across image analysis, from generative modelling [@hoogeboom2019integer; @{berg2020idf++}] to super-resolution [@shi2016real]. They define a pair of mutually inverse operations for efficient up- and downsampling by folding spatial dimensions into channel dimensions -- and vice versa. Unlike learned operations, they greatly reduce computational complexity, allowing for greater parallelism by losslessly moving computation (and data) into the channels. Indeed, these operations have become an essential component in papers seeking real-time execution (e.g. [@shi2016real; @waveone2021elf; @cortinhal2020salsanext; @liu2018deep]). Specifically, given a tensor of $C$ channels, $H$ height and $W$ width, we define the space-to-depth and depth-to-space transformations, $f$ and $f^{-1}$, such that $$\begin{align}
29
+ f&: \mathbb{R}^{C \times H \times W} \longrightarrow \mathbb{R}^{Ck^{2} \times \frac{H}{k} \times\frac{W}{k}}, \\
30
+ f^{-1}&: \mathbb{R}^{C \times H \times W} \longrightarrow \mathbb{R}^{\frac{C}{k^{2}} \times Hk \times Wk},
31
+ \end{align}$$ where $k$ is the scale factor.
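For concreteness, $f$ and $f^{-1}$ can be sketched with plain array reshapes. This is a NumPy sketch using one common channel ordering; it is precisely such an ordering that the modified operator introduced later changes, so the exact permutation here is an assumption.

```python
import numpy as np

def space_to_depth(x, k):
    """Lossless downsampling f: (C, H, W) -> (C*k^2, H/k, W/k)."""
    C, H, W = x.shape
    x = x.reshape(C, H // k, k, W // k, k)
    # Fold the k x k spatial offsets into the channel dimension.
    return x.transpose(0, 2, 4, 1, 3).reshape(C * k * k, H // k, W // k)

def depth_to_space(x, k):
    """Inverse f^{-1}: (C*k^2, H/k, W/k) -> (C, H, W)."""
    Ck2, h, w = x.shape
    C = Ck2 // (k * k)
    x = x.reshape(C, k, k, h, w)
    return x.transpose(0, 3, 1, 4, 2).reshape(C, h * k, w * k)
```

Because the two functions are exact inverses, no information is lost: the round trip reproduces the input tensor bit-for-bit.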
32
+
33
+ As described in [@shi2016real], these operations can be efficiently performed using *sub-pixel convolutions*, which are referred to as *pixel unshuffle* and *pixel shuffle*. In particular, their space-to-depth transformation, pixel unshuffle, is performed using a $k$-stride depthwise convolution where the $n^{th}$ element of $Ck^2$ $k \times k$ filters has one non-zero element such that
34
+
35
+ $$\begin{equation}
36
+ K_{h, w}^{(n)} = \begin{cases}
37
+ 1 & \text{if }h = \floor{n\mathbin{/}k}\, \text{mod} \, k, \, w=n \, \text{mod} \, k \\
38
+ 0 & \text{else}
39
+ \end{cases}, \label{eq:conv_op}
40
+ \end{equation}$$ where $h, w$ are the indices over spatial dimensions. The result of this operation is visualised in Fig. [3](#fig:ps){reference-type="ref" reference="fig:ps"} Centre.
41
+
42
+ Defining a channel-wise autoregression over the resulting tensor would posit a checkerboard autoregressive structure over each of the channels in the original tensor, sequentially. However, as identified in PixelCNN++ [@{salimans2017pixelcnn++}], sub-pixels in adjacent channels, sharing the same *spatial* location in the original tensor, are highly correlated and therefore do not require complex models to describe the dependency structure. As such, the authors of PixelCNN++ use a linear model predicted by a single network evaluation, conditioned on decoded *context*, to define the joint distribution across channels. In this way, they obviate the need for separate RGB network evaluations. (We note that in our setting, context refers to previously decoded pixels in either the current or previous hierarchical latent variable.) Henceforth, what we refer to as a *weak autoregression* is defined similarly to [@{salimans2017pixelcnn++}] according to
43
+
44
+ $$\begin{align}
45
+ p\left(x_{0:C, h, w}| D\right) = p\left(x_{0, h, w} | D\right)\prod_{c=1}^{C}p\left(x_{c, h, w}| x_{< c, h, w}, D\right) \label{eq:weak_ar_1}
46
+ \end{align}$$ where $D$ is the decoded context and $p$ is some parametric probability mass function (pmf), obtained via integrating a probability density function (pdf) over discretization bins, with mean at channel $c$ location $h, w$ given by $$\begin{equation}
47
+ \mu_{c, h, w} = \alpha_{c, h, w} + \sum_{i=0}^{c-1} \beta_{c, h, w}^{(i)} x_{i, h, w}. \label{eq:weak_ar_2}
48
+ \end{equation}$$
49
+
50
+ Here $\alpha$ and $\beta$ are scalars predicted for all channels and spatial locations by a single network evaluated on decoded context, and $i$ is the index over channels in decoded context such that $\beta_{c, h, w}^{(i)}$ is the scalar for prediction of the mean associated with pixel at channel $i$, spatial location $h, w$.
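The linear channel autoregression of the two equations above can be sketched as follows. This is a simplified NumPy sketch: it assumes the network outputs $\alpha$ and $\beta$ are given as dense arrays (only the lower-triangular coefficients $\beta^{(i)}_{c}$ with $i<c$ are used), and `weak_ar_means` is a hypothetical name.

```python
import numpy as np

def weak_ar_means(x, alpha, beta):
    """Means of the weak channel autoregression: the mean for channel c is
    alpha_c plus a learned linear combination of the already-decoded
    channels x_0 .. x_{c-1} at the same spatial site.

    x:     (C, H, W) pixels
    alpha: (C, H, W) network-predicted offsets
    beta:  (C, C, H, W) network-predicted coefficients, beta[c, i]
    """
    C = x.shape[0]
    mu = alpha.copy()
    for c in range(1, C):
        # Sum beta[c, i] * x[i] over previously decoded channels i < c.
        mu[c] += np.einsum('ihw,ihw->hw', beta[c, :c], x[:c])
    return mu
```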
51
+
52
+ Inspired by this, we introduce a new space-to-depth convolution such that the resulting autoregression is alternatively re-ordered into $k^2$ *sub-blocks* of $C$ channels each. Crucially, the resulting channels in each sub-block share the same spatial index, allowing application of the autoregression detailed in [\[eq:weak_ar_1\]](#eq:weak_ar_1){reference-type="eqref" reference="eq:weak_ar_1"} and [\[eq:weak_ar_2\]](#eq:weak_ar_2){reference-type="eqref" reference="eq:weak_ar_2"}. We note that when $k=H=W$ we recover an equivalent of the per-pixel autoregression of PixelCNN++, but performed exclusively along the channel dimension. Likewise, when $k<H$, we define a block-based context model in raster-scan order where, unlike MS-PixelCNN, adjacent blocks are dependent.
53
+
54
+ To achieve our desired downsampling operation, which we denote by $g(\cdot)$, we expand the depthwise convolutions of [\[eq:conv_op\]](#eq:conv_op){reference-type="eqref" reference="eq:conv_op"} into regular three-dimensional kernels where the $n^{th}$ of $Ck^2$ $C \times k \times k$ filters has one non-zero element such that $$\begin{equation}
55
+ K_{c, h, w}^{(n)} =
56
+ \begin{cases}
57
+ 1 & \text{if }c = n \,\, \text{mod} \,\, C, \, h = \floor{n\mathbin{/}Ck}\, \text{mod} \, k, \, \\
58
+ & \, \, \, \, w=\floor{n\mathbin{/}C} \, \text{mod} \, k \\
59
+ 0 & \text{else}
60
+ \end{cases}. \label{eq:conv_op2}
61
+ \end{equation}$$ We further visualise this operation in Fig. [3](#fig:ps){reference-type="ref" reference="fig:ps"} Right. The resulting density of the downsampled tensor for spatial location $h, w$ is then given by $$\begin{align}
+ &p\left(g\left(x; k\right)_{0:Ck^2, h, w}| D\right) = \prod_{i=0}^{k^2-1} \bigg[p\left(g\left(x; k\right)_{iC, h, w} | D\right) \times \nonumber\\
+ & \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \times \prod_{j=iC+1}^{(i+1)C -1}p\left(g\left(x; k\right)_{j, h, w}| g\left(x; k\right)_{<j, h, w}, D\right) \bigg]\label{eq:weak_ar_3},
+ \end{align}$$ where $i$ indexes the sub-blocks (each sub-block's leading factor is evaluated with a neural network, i.e. a strong autoregression across sub-blocks).
65
+
66
+ Whilst we are restricted to $k^2$ evaluations per latent variable at inference time, the same does not have to be true during training. One efficient parallel training scheme is to use 3-dimensional convolutions applied to the downsampled tensors by expanding them into a $d \times Ck^2 \times H \times W$ volume [@pmlr-v48-oord16; @mentzer2018conditional1], where $d$ is some auxiliary dimension. Here we can apply zero-masking along the channel dimension of the kernels to enforce the causality condition, along with k-stride channel convolutions on the input. Full details are available in the Appendix.
67
+
68
+ For our choices of $p$ and $q$, we use a discretized mixture of logistic distributions for $x$ and a discretized univariate logistic distribution for all $z^{(l)}$ [@{salimans2017pixelcnn++}]. That is, given some mean $\mu$, scale $s$ and uniform discretization bin-width $b$, one can obtain the univariate pmf by integrating the logistic pdf over the discretiztion bin. For $x$, we typically use a mixture of 5 discrete logistic distributions as defined above.
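The discretized logistic construction described above, as a sketch: the mass of a bin is the logistic CDF evaluated at the bin edges. Edge bins, which in practice absorb the remaining tail mass, are omitted here, and the function name is our own.

```python
import numpy as np

def discretized_logistic_pmf(mu, s, b, x):
    """Probability mass of value x under a logistic(mu, s) density
    integrated over the discretization bin [x - b/2, x + b/2].
    (Sketch of the construction in the text; edge-bin handling omitted.)"""
    sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
    # Logistic CDF is the sigmoid, so the bin mass is a CDF difference.
    return sigmoid((x + b / 2 - mu) / s) - sigmoid((x - b / 2 - mu) / s)
```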
69
+
70
+ As discussed in Section [3.1](#sec:bbans){reference-type="ref" reference="sec:bbans"}, bb-ANS is able to achieve efficient codelengths, but suffers several shortcomings. Fortunately, our proposed autoregressive model naturally accommodates bypassing the auxiliary bits needed by other bb-ANS methods. We achieve this by exploiting the block-based autoregressive structure on the data variable. We outline this process below, and refer to it as *autoregressive initial bits* (ArIB). Different from the models considered in existing VAE-based codecs, we remove the direct causality between the latent variable $z$ and some partition of the data variable $x$. In practice, we simply remove $z$ from $D$ in Eq. [\[eq:weak_ar_3\]](#eq:weak_ar_3){reference-type="eqref" reference="eq:weak_ar_3"} for the final $n$ sub-blocks in $p(x|z)$, along with the partition from $x$ in $q\left(z^{(1)}|x\right)$. As a result, we factorise the likelihood as $p(x|z)=p(x_{s+1:k^2}|x_{1:s})p(x_{1:s}|z)$ with the approximate posterior as $q(z|x)=q(z|x_{1:s})$, where $s$ is our 'split' index. Instead of conducting the first step by decoding $z$ from $q(z|x)$, one can encode $x_{s+1:k^2}$ with $p(x_{s+1:k^2}|x_{1:s})$ and thus obtain the bitstream from which to decode $z$. Then one decodes $z$ with $q(z|x_{1:s})$, encodes $x_{1:s}$ with $p(x_{1:s}|z)$ and encodes $z$ with $p(z)$. At the decompression stage, one decodes $z$ with $p(z)$, decodes $x_{1:s}$ with $p(x_{1:s}|z)$, encodes $z$ with $q(z|x_{1:s})$ and decodes $x_{s+1:k^2}$ with $p(x_{s+1:k^2}|x_{1:s})$. We illustrate this technique in Fig. [1](#fig:bbans){reference-type="ref" reference="fig:bbans"} Right.
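The encode/decode schedule just described can be written down explicitly. The following sketch (our own notation; model calls are elided) records the compression steps and derives the decompression schedule as the reverse of compression with encode and decode swapped, the bits-back symmetry that makes the scheme lossless.

```python
# ArIB coding schedule sketch. The stream behaves as a stack (last-in,
# first-out, as in ANS); each step is (operation, variable, model).
COMPRESS = [
    ('encode', 'x_{s+1:k^2}', 'p(x_{s+1:k^2}|x_{1:s})'),  # supplies the initial bits
    ('decode', 'z',           'q(z|x_{1:s})'),            # ...which are consumed here
    ('encode', 'x_{1:s}',     'p(x_{1:s}|z)'),
    ('encode', 'z',           'p(z)'),
]

# Decompression: reverse order, with encode and decode swapped.
swap = {'encode': 'decode', 'decode': 'encode'}
DECOMPRESS = [(swap[op], var, model) for op, var, model in reversed(COMPRESS)]
```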
71
+
72
+ For this approach to be valid, we require the satisfaction of two criteria:
73
+
74
+ 1. There exists some $s$, $k$ and $z$ such that imposing $\left(x_{s+1:k^2} \perp z | x_{1:s}\right)$ does not greatly hinder performance.
75
+
76
+ 2. The *entropy* of $p\left(x_{s+1:k^2}|x_{1:s}\right)$ and $q\left(\hat{z}|x_{1:s}\right)$, where $\hat{z}$ is the discretized analogue of $z$, should be such that $\mathcal{H}_{p(x_{s+1:k^2}|x_{1:s})} \geq \mathcal{H}_{q(\hat{z}|x_{1:s})}$.
77
+
78
+ In our experiments, we demonstrate that the performance costs associated with criterion one are negligible. Crucially, we demonstrate that it is orders of magnitude less than both the initial bits required by vanilla bb-ANS and those required by a parameterisation of our approach using deterministic posteriors.
79
+
80
+ For criterion two, we formulate the optimization of [\[eq:ELBO\]](#eq:ELBO){reference-type="eqref" reference="eq:ELBO"} as a constrained problem subject to $\mathcal{H}_{p(x_{s+1:k^2}|x_{1:s})} \geq \mathcal{H}_{q(\hat{z}|x_{1:s})}$, where we estimate the respective expectations during training using Monte-Carlo integration. Whilst a variety of techniques from optimization theory may be applied, we found it sufficient to simply penalise [\[eq:ELBO\]](#eq:ELBO){reference-type="eqref" reference="eq:ELBO"} according to $$\begin{equation}
81
+ \mathcal{L}_{pen} = \mathcal{L} + \lambda \max \left(0, \mathcal{H}_{q(\hat{z}|x_{1:s})} - \mathcal{H}_{p(x_{s+1:k^2}|x_{1:s})} \right), \label{eq:constrained}
82
+ \end{equation}$$ where $\lambda$ is some Lagrange multiplier. We find that this further presents flexibility when choosing $s$, with a variety of choices yielding the same result.
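The penalised objective above reduces to a one-line hinge once the two entropies have been estimated by Monte-Carlo; a minimal sketch, with `penalised_loss` as a hypothetical name:

```python
def penalised_loss(elbo_loss, H_q, H_p, lam=1.0):
    """Constrained objective: penalise the ELBO loss whenever the posterior
    entropy H_q(z_hat|x_1:s) exceeds the entropy H_p(x_s+1:k^2|x_1:s)
    that supplies the initial bits (lam is the Lagrange multiplier)."""
    return elbo_loss + lam * max(0.0, H_q - H_p)
```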
83
+
84
+ ::: table*
85
+ []{#table:results label="table:results"}
86
+
87
+ Compression Model CIFAR10 ImageNet32 ImageNet64 CLIC.mobile CLIC.pro DIV2K
88
+ -------------- ------------------------------- ------------------------- ---------------- ------------------------- ---------------- ---------------- ----------------
89
+ *Generic* PNG [@boutell1997png] 5.71 5.87 6.39 3.90 4.00 3.09
90
+ FLIF [@sneyers2016flif] 4.19 4.19 4.52 2.49 2.78 2.91
91
+ JPEG-XL [@alakuijala2019jpeg] 5.74 5.89 6.39 2.36 2.63 2.79
92
+ *VAE-Based* L3C [@mentzer2019practical] \- 4.76 4.42 2.64 2.94 3.09
93
+ Bit-Swap [@kingma2019bit] 3.82 4.50 \- \- \- \-
94
+ HiLLoC [@townsend2019hilloc] 3.56$^{\ddag}$ 4.20$^{\ddag}$ 3.90$^{\ddag}$ \- \- \-
95
+ **SHVC** **3.16/3.41$^{\ddag}$** **3.98** **3.68/3.71$^{\ddag}$** **1.96**$^{*}$ **2.02**$^{*}$ **2.57**$^{*}$
96
+ **SHVC Lite** **3.76** **4.49** **4.16** \- \- \-
97
+ *Flow-Based* IDF [@hoogeboom2019integer] 3.34/3.60$^{\ddag}$ 4.18 3.90/3.94$^{\ddag}$ \- \- \-
98
+ IDF++ [@{berg2020idf++}] 3.26 4.12 3.81 \- \- \-
99
+ LBB [@ho2019compression] 3.12 3.88 3.70 \- \- \-
100
+ iVPF [@zhang2021ivpf] 3.20/ 3.49$^{\ddag}$ 4.03 3.75/3.79$^{\ddag}$ 2.39$^*$ 2.54$^*$ 2.68$^*$
101
+ iFlow [@zhang2021iflow] 3.12/3.36$^{\ddag}$ 3.88 3.70/3.65$^{\ddag}$ 2.26$^*$ 2.44$^*$ 2.57$^*$
102
+ :::
103
+
104
+ SHVC formulates a hierarchical VAE built from the components described above. Here we partition the latent variable into a simple disjoint hierarchy of $L$ layers, such that $z =\{z^{(1)}, . . ., z^{(L)}\}$. We define the prior and posterior according to $$\begin{align}
105
+ p\left(x, z^{(1:L)}\right) &= p\left(x|z^{(1)}\right) p\left(z^{(L)}\right) \prod_{i=1}^{L-1} p\left(z^{(i)}| z^{(i+1)}\right), \label{eq:prior} \\
106
+ q\left(z^{(1:L)}|x\right) &= q\left(z^{(1)} | x\right) \prod_{i=1}^{L-1} q\left(z^{(i+1)}| z^{(i)}\right),
107
+ % q\left(z^{(1)}, z^{(2)}, z^{(3)} | x\right) = \prod_{i=1}^2 q\left(z^{(i+1)}| z^{(i)}\right), \\
108
+ \end{align}$$ where we parameterise every conditional density in Eq. [\[eq:prior\]](#eq:prior){reference-type="eqref" reference="eq:prior"} as per Eq. [\[eq:weak_ar_3\]](#eq:weak_ar_3){reference-type="eqref" reference="eq:weak_ar_3"}. While this factorisation naturally fits the coding scheme proposed in Bit-Swap [@kingma2019bit], we additionally introduce a local *reverse* encoding to accommodate the autoregressive structure of the factors in Eq. [\[eq:prior\]](#eq:prior){reference-type="eqref" reference="eq:prior"}. In more detail, to encode $z^{(i)}=[z^{(i)}_1, ..., z^{(i)}_{k^2}]$ with $p(z^{(i)}| z^{(i+1)})$, one must encode in the reversed order $z^{(i)}_{k^2}, ..., z^{(i)}_1$, to accommodate the first-in-last-out nature of ANS-based codecs.
109
+
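The first-in-last-out behaviour that motivates the local reverse encoding can be mimicked with a plain stack. A hypothetical sketch (a real implementation pushes entropy-coded symbols onto an ANS state rather than Python list elements):

```python
def encode_slices(stack, slices):
    # ANS-based codecs behave like a stack (first-in-last-out), so to
    # decode autoregressively in the order z_1, ..., z_{k^2} we must
    # encode in the reversed order z_{k^2}, ..., z_1.
    for s in reversed(slices):
        stack.append(s)

def decode_slices(stack, n):
    # Popping recovers the slices in their original autoregressive order,
    # so each decoded slice can condition the next decoding step.
    return [stack.pop() for _ in range(n)]
```

This is exactly why the decoder can run the autoregressive model forward while the encoder must run it backward over each latent layer.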
110
+ For purposes of experimentation, we define two versions of our model: one with and one without the dependency structure permitting ArIB. Henceforth we refer to these models as SHVC and SHVC-ArIB, respectively. For SHVC, one can encode $x$ with $p(x|z^{(1)})$, along with the other variables in Eq. [\[eq:prior\]](#eq:prior){reference-type="eqref" reference="eq:prior"} as discussed above. For SHVC-ArIB, one performs encoding and decoding for $x$ as discussed in Section [4.2](#sec:arib){reference-type="ref" reference="sec:arib"}, and applies local reverse encoding to the slices in $x_{s+1:k^2}$ and $x_{1:s}$, respectively. We note that the only difference in SHVC-ArIB is that, whilst $p(x_{s+1:k^2} | x_{1:s})$ and $p(x_{1:s}|z^{(1)})$ are both modelled using [\[eq:weak_ar_3\]](#eq:weak_ar_3){reference-type="eqref" reference="eq:weak_ar_3"}, the former evidently omits $z^{(1)}$ from $D$. In addition, we restrict the posterior such that $q(z^{(1)}|x_{1:s})$. We visualise the overall architecture along with the coding scheme for SHVC-ArIB in the Appendix.
2210.03675/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2022-08-30T21:27:38.523Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.0.0 Safari/537.36" etag="UZz7x8LXByBUpoxljJpj" version="20.2.7" type="device"><diagram id="s94h5_V5JulMMWMV1XXZ" name="Page-1">7V3rd5u4Ev9rcs7uB+eAJB7+2CTd9tzt3tu77T76kdiKTYPBi3GT9K9fiYcNg0CSAT8a8qG1xXgQ+o1mRqPRcIVvV8/vYm+9/C2a0+AKGfPnK3x3hRAyXJv9x1teshbTneKsZRH787xt3/DJ/07zRiNv3fpzuqkQJlEUJP662jiLwpDOkkqbF8fRU5XsIQqqd117C1pr+DTzgnrrX/48WWatrmXs299Tf7Es7mwa+ZWVVxDnDZulN4+eSk347RW+jaMoyT6tnm9pwEevGJfsd780XN11LKZhovKDvx/eW96b/9HJ9w+h57/ZPAafbyc5l29esM0fOO9s8lKMAOv3mn9MGML0e8TZ3axp7K9oQuNy+8d9483T0k/op7U34798YhSsbZmsAvbNZB/jKPESPwrZ14npGqxlkyFvXCMbkWnpz2YX649a9JvGCX0uNeWP/o5GrCfxCyPJrzr5gxVyaOeoPJVAzZuWJTxJ3ublYrTYMd6PNPuQD7bGwCP5wNNw/oZLMPsWRiGtjuCDHwS3URDFKS2eW9SdEz6OSRw90tIVF91j295dKYSY0869zZLOc4aNY0znlRlSH+HSEFqCISzaYhowzL9V55VoXPM7fIx81pMdgAVgBYAYILOJtvGM5j8qTwPAh7gSRokXL2hSY8Sg8F5KZGtOsGnuL7wNMVq7NQXkZjt5cV2V/e4pwVM30luAHioXSO/o8UeGJj3S6w+C/ZeMT028ZPwhvSGhh/hK6LEmvlgTX6yJL26YNY30mvJMYP9l9LD/1f6wD9l83SvoncI6XGfjfnX2gzujs5lIZ9+7Fn/Ai9fZNV0Lzamq0nahuoCMelLaVsN9mvplwWkqmRYW0aSHYm5J6KEalvUfAiRRAzYcH4kas6HaltGD8XEl5HB4ZOyh1pZoGRtqMcnwOPBxJcPvaGphp92pGEbrWQKtZwcJ12dR2rm9+rP/2UbFhUnm0b9hBKa7ft5fZJ8W/P87OisYsX5lvLIrNa068BLE6GmJQa6tCjzYyVe/JW3rWHVtO9giwz7EYFUsTGnA6rbo0syPhcBAH7pmmDiAUYP56WsKEoVleqvnUcUOXZof4TrACrqH4WghYC7c48LY1YG8cBhNbPeDowmjOUcG0hLNR02baDGbmBpDYBZvo9UqCgOOyh8bBm3O9z7eW0fbW3FjFt5v+H+/UW+zjemKpjd+2IYzbtY2qraVGbcEWEbKuundpwTcNOZOM6O2bq6sO9bibZMoD9jxH3iBv+B2dMa6kJpebjP9mRe8yS8kEe/ohtliP1x85l/uJvVgVS7vvLe/eCs/4OB+ZuacPYrxX/rE/v09WnlhTlJaLhns7/Y2b/+UP5ogPqpt0SduVWDZMrtm0M2jhg0LDTaE5N15iTeZx2x+h6PUnVTqgBtpmoVjeTq5E60Cmk3XLPA2G3/WZr1wt0E/e1sHfI1dQE/X1rFFYTujgW2dLbV1sx08e42SAsVX4/smBc3kNGimnTrqhYvficuvBZvNtlt3FlfopqQuU3bVjrJWX9Amoqsq27bRytHiXYsX9z8xiJhcGMV/P2dQXeiQBl3HczgrFdCHRMNGVTTjLVMVPjNxmW5s1on15UFqmvLOOP1EOUwyBWEO5IjskyPQrXCrrb9Ah8gtOm4QKd/H7mGEkVEd35PvVTt6xv+HW7cyX0sMiPa6dWpWGR3ZljsHBQR/ICBt
px8g0YmBnE47AqmcPuIY9s0NrkFvQej5l49ewrRmmC2DDHQBAmGCDR4CA7vKEWJnChhhJYHQ3aCEeRBFhxsFlYB+ubid3gL0tiY9kdC7evTY0HtejMQ4qOZBEMmOIwYbrMSW5U3A/kjyGmD/rfb+E4hv9XmH2RE0DUegfvSiUJYh2hO85n8niCEd4J3nbnWapjFEHEPdX8Q2qcgAsutRyqlAh0Jd15u/aBoi8zTuGQuwgwkkJ98yNk3RJtWInWi/P9fNZ4QdEWDXwS3UzlCz1dzCXKOyFpd9+7pdrXNt2orMmfiMMBmAFLBrJ7VhCaOBktqK+6gmtRFbQk806Ws+l4Qe+lyy/kOArHZ6mNRGsIQe+uAyejg+poQejo+MP8hqI0RCDxc9kvGBWW2y8YdZbTKf1wH4WpIsPqdh0dY4z6A8V+VtIB/ZHHfMNPUqmCbYBCzUs3wkjHrSq8QR32dguRpy7TUuvfRcQKiqRUsv0UYIjDv26AK6WmpHJcR77orDwrXEW8spWrSVR40ZMmrMBo7ummbX8O4FgmjBbUVsuQeDWGOGjBqzoUEsrNBrB7GveSiY1INDKLK1rw1CBJNr1SG0ZKwGB/AVWsP6HDR6s4XYIMeeg+T1qVHHcWq6b3qoLawzQ0aN2eAg9n+A5dxBdA0EQbSNQ0GsM0NmjdngIKIRRA5iX/NQMKkHh7D/E0iXByE6NBXbcVwZq8EB7LrD8yMAiIv8gx7moGEPNwf/8/2f1Zc//3pnbcM/7r6u/Cd8+71zEYlLT8GDwXsbH4blLtS221VRy7jqC8f+J+Jl4bj3P8C2nz6StWVGsUN2LCyLffMydmyYi0NdUZwso0UUesHbfetNHG3DeYoXj0PvaT5EabSag/iVJslLHq3m4e+qBNBnP/m79PkLZ3Vt5d/unnPO6ZeX/Es5/L0v6lWKbpvIQlm+ZlFurr5Lr6P12aCkMLaNni0WxK4L2drBp+n1VE1XS8s18UXyND+jxP/Udpq0N97BE9jVjc3OG0QtaLTuDxVFFosdIFNyTCvwQzopdEia4HPVdhhos10L2YBdKGQ0noAOmbhWz/KspWee4PwNAn+9ofK8oNKMwj3VGqxXAtmlMJQPcCKBdu5jY0gsFvKUvHaxUDzlp3IEreM5MjbxxILzMaZzPz2H3PkEHZCmMzkIpre/WVX6QNZRqxHoUPSGWERN1Ps4DCYUdeHGyw8m6ovU+2AUSbxlozyK+5HEHaR8Elcg7OSowv7KSwNZLtzuOPgMj2XDQAOBBaMGXocU2L1aMGGm3sFQYnJaIF95lMfFxjXqaVq6Bq7xOjacrzzY41jGtdMTnA7zGCGvI8NZGOPThHv2IZ4vpSvicE8liKPoGxX1Htz28JB6REgaACqyLKTS2vW0rmPD/RcYPR5aEdTzST7HXrh5iOJVWqDjbTiL5vwTkK+99JhqIYmDgC47v41wapT6MKrxqyKPuuzsirTOYEEMrFe/QJhTX53HvU+wfJkommB7BaAT8eVfSsfc2o9ISacqds7KsEym02uj9Ieqyyvk1tKNlEttwEM9w5wBmMD7FPtbTf2q/YD0e2igFXWV8jizbRy83MTe7JEmclVV1WvnorjYIFfrcyCMBSWKRFEpE/oihyivL/9/F99Y1u+G9369fn6k/ttfPeEbX8Y6ar3UUWsr+rVZM7nr9rjluFnGbqzT1olRY5027SG9wLHrLo/9gDBrAqGjrJ9nLLnvGno1YyQwWc1RZBuYJ2EFvV1Rqb4DyULzJApxZPAy/h43T0sv3tCkImvb5GHiHmRnHnJguIheFbWFcwKR5wBIhFN4t1uSX8juzK+EbH3mBaVr37zY99j/TIS8ZBvzNwy20s28tYAkoLyywWRXzgBej+L1kq0O895lbXw+THKJfpPyeEhKV3zmTnEh4peKDfLsSlIsNIv7pEGmTGTS1xyWbvIUxfNqt3a8Jk/0/tFn7DjPTNonudBV6O6ZB5jtMk2AOkXEzTRpWlwq
/WAVSjXlOqezKE6LbUySpT97DOkm754f+olfjA6kLeHVSlfqToVOReu3KHeRjRu0rKh6YgLax2c66R2zlsC6W78rrvd6VzvNb5dRsm7jlB+n/OFTXmvDvuy663tz7jVmfwe6dD06b6IFyJEUWKs/1oN2I7ha3dMS1M09qm5TOMWtEwkWubDl8YV+8cqfz9Oti6ZYf6fRdqs1cZFpC1xY0XBPhxruIjz9Ksa7dHDhVOItr0Ty4M2qiuw9Db5RPmh6AS1EaopTL0nrQxQ9cuvGfvcXM7fRU6NGPM/V6zCZUJ3k0anK40TgSAoDrH1UvxOLo+is93mK4y9RTGfeJuGu2iiRfUkkrEMnFElRct5gIinIG9lFqxXDd9MGl9EUO4A//Tks/z13pRz/Ju4/Md9cmN7/8xnJe0MocPemJzgjvCWbBVXRng7ka7kC2y+qMzXYZhau+1p7yVZcEzUJR08SjNolGCyReps4ktuOE+eUE4egk0+c5vLKHQXvz3ByIqU9ip5A9KagOLQpykCwjyp6zVs8nUVvFLyzFTxh6osoUDCc4NWD/NfX+aseLgjAUgX5KTiLbVrD2KupapBnsG1hQepYdQmTzUSstBxppR09pHPwkGyRxB3VQyIKhbMvN4yLYIrr6ePmRCGr+scZ8DMInJPmwPmZO2Lm6IhpSB5G5+aJkXqMfPTE1PT06V0xUt9PHqPJrWppdPnEol09HC12QY7q8xXzaAwnHzmcPM6cTjOHGCefOW0bMWe5Ph9FrpPI2eTkIoeGW74MvYUxLmC0ZK/6cmFsimTvqHsYlt4bOY71VrVa0UyFQT5SnVa3VvL40PLXmEhZ9Vd+QYy+NMVrPEN7EKMFOOR6yCGX8Yyr2hnX8Rxlb+cokQvKwQjPUYqco8GCNNZrevV5N/Bgfecjvj5bCJ1dD0IcBF1VGyhmHWdQvg2VQa7rg2J6zxgAKaSNGy3amqNa5iY/h1Z+WXjRBOb40Ed+3OqJH1PxxM9wIiQ6gjKK0DmLEDJhAdHTS1HzAnt0c8dSMaMbDUd6LBUzlor5QZc48FDzOZSKsRXK4Z5FOTl8VVm4TKY9LV0wqr0YdR9mkxSY261Z+4dFIXFujI2WA5q1F09Z+MB3HQtY2YO961iMvvS9P6PTODqNF+L4jE7j6DSOTuPB/okNXrJ5Fk7juHOr55044GUo2DSvi4Ls+i9dkPMa2j8R7d2O+Dfjbylgpoq/jU+Pv0I1rNZ3qJRjxQzHuUXdObmCgWV2xUX32LZr4sITwytvXeFfPnq8nF+YthSl9M5bLMxptXaspfjy0hoj4hqAEVESCN0S+btVcFHwXlIhfzdQRb/yt/OqvtuVTCX0LqQ32+mncJza6THW448tAKgpoXdMPf6a/SdILBeN9PB5XQk9xLfK/0rzDQjsaxxxx3NPHnvr5W/RnHKKfwE=</diagram></mxfile>
2210.03675/main_diagram/main_diagram.pdf ADDED
Binary file (62 kB). View file
 
2210.03675/paper_text/intro_method.md ADDED
@@ -0,0 +1,105 @@
1
+ # Introduction
2
+
3
+ Temporal distribution shifts frequently occur in real-world time-series applications, from forecasting stock prices, to detecting and monitoring sensory measures, to predicting fashion-trend-based sales. Such distribution shifts over time may be due to the data being generated in a highly dynamic and non-stationary environment, abrupt changes that are difficult to predict, or constantly evolving trends in the underlying data distribution [\(Gama et al., 2014\)](#page-9-0).
4
+
5
+ Temporal distribution shifts pose a fundamental challenge for time-series forecasting [\(Kuznetsov &](#page-10-0) [Mohri, 2020\)](#page-10-0). There are two scenarios of distribution shifts. When the distribution shifts only occur between the training and test domains, meta learning and transfer learning approaches [\(Jin et al.,](#page-10-1) [2021;](#page-10-1) [Oreshkin et al., 2021\)](#page-11-0) have been developed. The other scenario is much more challenging: distribution shifts occurring continuously over time. This scenario is closely related to "concept drift" [\(Lu et al., 2018\)](#page-10-2) and non-stationary processes [\(Dahlhaus, 1997\)](#page-9-1) but has received less attention from the deep learning community. In this work, we focus on the second scenario.
6
+
7
+ To tackle temporal distribution shifts, various statistical estimation methods have been studied, including spectral density analysis [\(Dahlhaus, 1997\)](#page-9-1), sample reweighting [\(Bennett & Clarkson, 2022;](#page-9-2) [McCarthy & Jensen, 2016\)](#page-10-3) and Bayesian state-space models [\(West & Harrison, 2006\)](#page-11-1). However, these methods are limited to low-capacity autoregressive models and are typically designed for short-horizon forecasting. For large-scale complex time-series data, deep learning models [\(Oreshkin](#page-11-0) [et al., 2021;](#page-11-0) [Woo et al., 2022;](#page-12-0) [Tonekaboni et al., 2022;](#page-11-2) [Zhou et al., 2022\)](#page-12-1) now increasingly outperform traditional statistical methods. Yet most deep learning approaches are designed for stationary time-series data (under an i.i.d. assumption), such as electricity usage, sales and air quality, that have clear seasonal and trend patterns. Under distribution shifts, DNNs have been shown to be problematic when forecasting on data with varying distributions [\(Kouw & Loog, 2018;](#page-10-4) [Wang et al., 2021\)](#page-11-3).
8
+
9
+ DNNs are black-box models and often require a large number of samples to learn. For time series with continuous distribution shifts, the number of samples from a given distribution is small, thus DNNs would struggle to adapt to the changing distribution. Furthermore, the non-linear dependencies in a DNN are difficult to interpret or manipulate. Directly modifying the parameters based on the change in dynamics may lead to undesirable effects [\(Vlachas et al., 2020\)](#page-11-4). Therefore, if we can
10
+
11
+ <sup>∗</sup>Work done during an internship at Google Cloud AI.
12
+
13
+ reduce non-linearity and simplify dynamics modeling, then we would be able to model time series in a much more interpretable and robust manner. Koopman theory [\(Koopman, 1931\)](#page-10-5) provides convenient tools to simplify the dynamics modeling. It states that any nonlinear dynamics can be modeled by a *linear* Koopman operator acting on the space of measurement functions [\(Brunton et al., 2021\)](#page-9-3), thus the dynamics can be manipulated by simply modifying the Koopman matrix.
14
+
15
+ In this paper, we propose a novel approach for accurate forecasting for time series with distribution shifts based on Koopman theory: Koopman Neural Forecaster (KNF). Our model has three main features: 1) we combine predefined measurement functions with learnable coefficients to introduce appropriate inductive biases into the model. 2) our model employs both global and local Koopman operators to approximate the forward dynamics: the global operator learns the shared characteristics; the local operator captures the local changing dynamics. 3) we also integrate a feedback loop to cope with distribution shifts and maintain the model's long-term forecasting accuracy. The feedback loop continuously updates the learnt operators over time based on the current prediction error.
16
+
17
+ Leveraging Koopman theory brings multiple benefits to time-series forecasting with distribution shifts: 1) using predefined measurement functions (e.g., exponential, polynomial) provides sufficient expressivity for the time series without requiring a large number of samples. 2) since the Koopman operator is linear, it is much easier to analyze and manipulate. For instance, we can perform spectral analysis and examine its eigenfunctions, reaching a better understanding of the frequency of oscillation. 3) our feedback loop makes the Koopman operator adaptive to non-stationary environments. This is fundamentally different from previous works that learn a single, fixed Koopman operator [\(Han et al., 2020;](#page-9-4) [Takeishi et al., 2017;](#page-11-5) [Azencot et al., 2020\)](#page-9-5).
18
+
19
+ In summary, our major contributions include:
20
+
21
+ - Proposing a novel deep forecasting model based on Koopman theory for time-series data with temporal distributional shifts.
22
+ - The proposed approach allows the Koopman matrix to both capture the global behaviors and evolve over time to adapt to local changing distributions.
23
+ - Demonstrating state-of-the-art performance on highly non-stationary time series datasets, including M4, cryptocurrency return forecasting and sports player trajectory prediction.
24
+ - Generating interpretable insights for the model behavior via eigenvalues and eigenfunctions of the Koopman operators.
25
+
26
+ # Method
27
+
28
+ Time series data $\{x_t\}_{t=1}^T$ can be viewed as observations of a dynamical system's states; consider the following discrete form: $x_{t+1} = F(x_t)$, where $x \in \mathcal{X} \subseteq \mathbb{R}^d$ is the system state and $F$ is the underlying governing equation. We focus on the multi-step forecasting task of predicting future states given a sequence of past observations. Formally, we seek a function map $f$ such that:
29
+
30
+ $$f: (\boldsymbol{x}_{t-q+1}, \dots, \boldsymbol{x}_t) \longrightarrow (\boldsymbol{x}_{t+1}, \dots, \boldsymbol{x}_{t+h}),$$
31
+ (1)
32
+
33
+ where q is the lookback window length and h is the forecasting window length.
34
+
35
+ Koopman theory (Koopman, 1931) shows that any nonlinear dynamic system can be modeled by an infinite-dimensional linear Koopman operator acting on the space of all possible measurement functions. More specifically, there exists a linear infinite-dimensional operator $\mathcal{K}: \mathcal{G}(\mathcal{X}) \mapsto \mathcal{G}(\mathcal{X})$ that acts on a space of real-valued measurement functions $\mathcal{G}(\mathcal{X}) := \{g: \mathcal{X} \mapsto \mathbb{R}\}$ . The Koopman operator maps between function spaces and advances the observations of the state to the next step:
36
+
37
+ $$Kg(\mathbf{x}_t) = g(\mathbf{F}(\mathbf{x}_t)) = g(\mathbf{x}_{t+1}). \tag{2}$$
38
+
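For intuition, a scalar linear system $x_{t+1} = a\,x_t$ with measurement functions $g_1(x) = x$ and $g_2(x) = x^2$ admits an exact finite Koopman operator $K = \mathrm{diag}(a, a^2)$, since $g_2(x_{t+1}) = a^2 x_t^2$. A small numerical check of Eq. 2 (this toy system is our illustration, not from the paper):

```python
import numpy as np

a = 0.9                   # dynamics: x_{t+1} = a * x_t
K = np.diag([a, a**2])    # Koopman operator on the measurements [x, x^2]

def g(x):
    # Measurement functions g_1(x) = x, g_2(x) = x^2 stacked as a vector.
    return np.array([x, x**2])

x_t = 2.0
x_next = a * x_t
# Eq. 2: K g(x_t) == g(F(x_t)) == g(x_{t+1}), exactly for this system.
assert np.allclose(K @ g(x_t), g(x_next))
```

Here $\{g_1, g_2\}$ spans a Koopman-invariant subspace, so the infinite-dimensional operator truncates to a 2×2 matrix with no approximation error; for general nonlinear dynamics such an exact finite subspace rarely exists, which is what the learned encoder approximates.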
39
+ We propose Koopman Neural Forecasting (KNF), a deep sequence model based on Koopman theory to forecast highly non-stationary time series, as shown in Fig. 1. It instantiates an encoder-decoder architecture for each time-series segment. The encoder takes in observations from multiple time steps, as the underlying dynamics may contain higher-order time derivatives. Our model has three main features: 1) we use predefined measurement functions with learned coefficients to map time series to the functional space. 2) the model employs both a global Koopman operator to learn the shared characteristics and a local Koopman operator to capture the local changing dynamics. 3) we also integrate a feedback loop to update the learnt operators over time based on forecasting error, maintaining the model's long-term forecasting performance.
40
+
41
+ We define a set of measurement functions $\mathcal{G} := [g_1, \cdots, g_n]$ that spans the Koopman space, where each $g_i : \mathbb{R} \mapsto \mathbb{R}$ . For example, $g_1(x) = \sin(x)$ . These functions are canonical nonlinear functions and are often used to model complex dynamical systems, such as Duffing oscillator and fluid dynamics (Brunton et al., 2021; Kutz et al., 2016). They also provide a sample-efficient approach to represent highly nonlinear behavior that may be difficult to learn for DNNs.
42
+
43
+ We use an encoder to generate the coefficients of the measurement functions $\Psi(\boldsymbol{X}_t)$, such as the frequencies of sine functions. Let $n$ be the number of measurement functions for each feature, $d$ the number of features in a time series, and $k$ the number of steps encoded at a time by the encoder $\Psi: \mathbb{R}^{d \times k} \mapsto \mathbb{R}^{n \times d \times k}$. The lookback window length $q$ is a multiple of $k$, and we denote $\boldsymbol{x}_{tk:(t+1)k}$ as $\boldsymbol{X}_t \in \mathbb{R}^{d \times k}$ for simplicity.
44
+
45
+ As shown in Eq. 3 below, we first obtain a latent matrix $V_t = [v_t^{(1)}, v_t^{(2)}, \cdots, v_t^{(n)}] \in \mathbb{R}^{n \times d}$. Every vector $v_t^{(i)} \in \mathbb{R}^d$ is a different linear transformation of the observations, where the weights are learnt by the encoder $\Psi$:
46
+
47
+ <span id="page-3-0"></span>
48
+ $$V_t[i,j] = \sum_{l} \Psi(X_t)[i,j,l] X_t[j,l]; \quad 1 \le i \le n, \ 1 \le j \le d, \ 1 \le l \le k.$$
49
+ (3)
50
+
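Eq. 3 is a per-feature weighted sum over the window dimension and maps directly onto an einsum. A minimal sketch with random arrays standing in for the encoder output $\Psi(\boldsymbol{X}_t)$ (names and shapes follow the text; the function itself is ours):

```python
import numpy as np

def latent_matrix(coeffs, X):
    """Eq. 3: V[i, j] = sum_l coeffs[i, j, l] * X[j, l].

    coeffs: (n, d, k) encoder output Psi(X_t)
    X:      (d, k)    one lookback chunk X_t
    returns V_t of shape (n, d)
    """
    # 'jl' is shared element-wise in j and summed over l.
    return np.einsum('ijl,jl->ij', coeffs, X)

rng = np.random.default_rng(0)
n, d, k = 4, 3, 5
coeffs, X = rng.normal(size=(n, d, k)), rng.normal(size=(d, k))
V = latent_matrix(coeffs, X)
assert V.shape == (n, d)
```

Each row of $V_t$ is thus a separate data-dependent linear readout of the same window, one per measurement function.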
51
+ Our measurement functions are defined in the latent space rather than the observational space. We apply a set of predefined measurement functions $\mathcal G$ to the latent matrix $V_t$ :
52
+
53
+ $$\mathcal{G}(\mathbf{V}_t) = [g_1(\mathbf{v}_t^{(1)}), g_2(\mathbf{v}_t^{(2)}), ..., g_n(\mathbf{v}_t^{(n)})] \in \mathbb{R}^{n \times d}$$
54
+ (4)
55
+
56
+ In our implementation, we flatten $\mathcal{G}(V_t)$ into a vector, so the finite Koopman operator is an $nd \times nd$ matrix. Finally, we use a decoder $\Phi : \mathbb{R}^{n \times d} \mapsto \mathbb{R}^{k \times d}$ to reconstruct the observations from the measurements:
57
+
58
+ $$\hat{\boldsymbol{X}}_t = \Phi(\mathcal{G}(\boldsymbol{V}_t)). \tag{5}$$
59
+
60
+ Here, the encoder $\Psi$ and the decoder $\Phi$ can be any DNN architecture, for which we use multi-layer perceptrons (MLPs). The set of measurement functions $\mathcal G$ contains polynomials, exponential functions, trigonometric functions, as well as interaction functions. These predefined measurement functions are useful for imposing inductive biases on the model and help capture the non-linear behaviors of time series. The encoder only needs to approximate the parameters of these functions, without having to learn non-stationary characteristics directly. Ablation studies (in Sec. 4.8) demonstrate that using predefined measurement functions significantly outperforms models with learned measurement functions from previous works.
61
+
62
+ Dynamic mode decomposition (DMD) (Tu et al., 2013) is traditionally used to find the Koopman operator that best propagates the measurements. But for time series with temporal distribution shift,
63
+
64
+ we would need to compute a spectral decomposition and learn a Koopman matrix for every sample (i.e., a slice of a trajectory), which is computationally expensive. We therefore use DNNs to learn the Koopman operators.
65
+
66
+ In classic Koopman theory, the measurement vector $\mathcal{G}(V_t)$ is infinite-dimensional, which is impossible to learn. We assume that the encoder learns a finite approximation and that $\mathcal{G}(V_t)$ forms a finite Koopman-invariant subspace. Thus, the Koopman operator $\mathcal{K}$ that we need to find is the finite matrix that best advances the measurements forward in time.
67
+
68
+ While the Koopman matrix should vary across samples and time in our case, it should also capture the global behaviors. Thus, we propose to use both a global operator $\mathcal{K}^g$ and a local operator $\mathcal{K}^l_t$ to model the propagation of dynamics in the Koopman space, defined as below:
69
+
70
+ $$\mathcal{KG}(\mathbf{V}_t) \coloneqq (\mathcal{K}^g + \mathcal{K}_t^l)\mathcal{G}(\mathbf{V}_t) = \mathcal{G}(\mathbf{V}_{t+1}), \ t \ge 0.$$
71
+ (6)
72
+
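On the flattened measurement vector, Eq. 6 is a single matrix-vector product with the summed operators. A sketch (variable names and the identity stand-ins are ours; in the model both operators are produced by trainable modules):

```python
import numpy as np

def advance_measurements(meas, K_global, K_local):
    """Eq. 6: G(V_{t+1}) = (K^g + K^l_t) G(V_t).

    meas:     flattened measurement vector G(V_t), length n*d
    K_global: shared (nd, nd) operator, trained across all series
    K_local:  sample- and time-specific (nd, nd) operator
    """
    return (K_global + K_local) @ meas

nd = 6
meas = np.arange(nd, dtype=float)
K_g = np.eye(nd)             # stand-in global operator
K_l = np.zeros((nd, nd))     # stand-in local operator
assert np.allclose(advance_measurements(meas, K_g, K_l), meas)
```

Summing the two operators (rather than composing them) keeps the propagation linear while letting the local term act as a data-dependent correction to the shared dynamics.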
73
+ The global operator $\mathcal{K}^g$ is an $nd \times nd$ trainable matrix that is shared across all time series. We use the global operator to learn shared behavior such as trend. The local Koopman operator $\mathcal{K}^l_t$, on the other hand, is based on the measurement functions on the lookback window for each sample, as shown in Fig. 1. The local operator captures the local dynamics specific to each sample. Since we generate the forecasts autoregressively, the local operator depends on time $t$ and varies across autoregressive steps, adapting to distribution changes during prediction. We use a single-head Transformer architecture as the encoder to capture the relationships between measurements at different steps, and use the attention weight matrix of the last layer as the local Koopman operator.
74
+
75
+ Suppose an abrupt distributional shift occurs in the middle of the lookback window: the model would still try to fit the two distributions before and after the shift, but a single propagation matrix is never good enough to model multiple distributions. This results in an inaccurate operator being used for the forecasting window. To address this, we add an additional closed feedback loop, in which we employ an MLP module $\Gamma$ to learn an adjustment operator $\mathcal{K}^c_t$ based on the prediction errors in the lookback window. It is added directly to the other operators when making predictions on the forecasting window, as shown in Fig. 1. More specifically, we apply the global and local operators recursively to the measurements at the first step in the lookback window to obtain predictions:
76
+
77
+ $$\hat{\boldsymbol{X}}_{t-q/k+i} = \Phi((\mathcal{K}^g + \mathcal{K}_t^l)^i \mathcal{G}(\boldsymbol{V}_{t-q/k})), \quad 0 < i \le q/k.$$
78
+ (7)
79
+
80
+ Then, the feedback module $\Gamma$ uses the difference between the predictions on the lookback window and the observed data to generate additional adjustment operator $\mathcal{K}_t^c$ , which is a diagonal matrix:
81
+
82
+ $$\mathcal{K}_t^c = \Gamma(\hat{\boldsymbol{X}}_{t-q/k:t} - \boldsymbol{X}_{t-q/k:t}) = \Gamma(\hat{\boldsymbol{x}}_{t-q:t} - \boldsymbol{x}_{t-q:t})$$
83
+ (8)
84
+
85
+ If the predictions deviate significantly from the ground truth within the lookback window, the operator $\mathcal{K}_t^c$ would learn the temporal change in the underlying dynamics and correspondingly adjust the other two operators. Thus, for forecasting, the sum of all three operators is used:
86
+
87
+ $$\hat{\mathbf{X}}_{t+i} = \Phi((\mathcal{K}^g + \mathcal{K}_t^l + \mathcal{K}_t^c)^i \mathcal{G}(\mathbf{V}_t)), \quad i > 0.$$
88
+ (9)
89
+
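The $i$-step forecast in Eq. 9 applies the summed operators $i$ times before decoding, which for a fixed matrix is a matrix power. A sketch of the propagation step only (the decoder $\Phi$ is omitted; operator stand-ins are ours):

```python
import numpy as np

def forecast_measurements(meas, K_g, K_l, K_c, i):
    """Eq. 9 without the decoder: (K^g + K^l_t + K^c_t)^i G(V_t)."""
    K = K_g + K_l + K_c
    return np.linalg.matrix_power(K, i) @ meas

nd = 4
meas = np.ones(nd)
# With stand-in operators summing to 2*I, i steps scale the vector by 2^i.
K_g, K_l, K_c = np.eye(nd), 0.5 * np.eye(nd), 0.5 * np.eye(nd)
assert np.allclose(forecast_measurements(meas, K_g, K_l, K_c, 3), 8 * meas)
```

The same routine with $\mathcal{K}^c_t$ omitted reproduces the lookback predictions of Eq. 7, which is exactly what the feedback module $\Gamma$ compares against the observed window.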
90
+ In short, the feedback module is designed to detect distributional shifts in the lookback window and adapt the global and local operators to the latest distribution in the lookback window.
91
+
92
+ KNF is trained in an end-to-end fashion, using a superposition of three loss terms $L = L_{\rm rec} + L_{\rm back} + L_{\rm forw}$. Let $l$ denote a distance metric, for which we use the L2 loss. The first term is the reconstruction loss, ensuring the decoder $\Phi$ can reconstruct the time series from the measurements:
93
+
94
+ $$L_{\text{rec}} = l(\boldsymbol{X}_t, \Phi(\mathcal{G}(\Psi(\boldsymbol{X}_t)\boldsymbol{X}_t))), \quad t \ge 0.$$
95
+ (10)
96
+
97
+ The second term is the prediction loss on the lookback window to ensure the sum of global and local operators is the best-fit propagation matrix on the lookback window.
98
+
99
+ $$L_{\text{back}} = l(\boldsymbol{X}_{t-q/k+i}, \Phi((\mathcal{K}^g + \mathcal{K}_t^l)^i \mathcal{G}(\Psi(\boldsymbol{X}_{t-q/k}) \boldsymbol{X}_{t-q/k}))), \quad 0 < i \le q/k.$$
100
+ (11)
101
+
102
+ The third term is for prediction accuracy in the forecasting window to guide the feedback loop to learn the correct adjustment placed on the Koopman operator.
103
+
104
+ $$L_{\text{forw}} = l(\boldsymbol{X}_{t+i}, \Phi((\mathcal{K}^g + \mathcal{K}_t^l + \mathcal{K}_t^c)^i \mathcal{G}(\boldsymbol{V}_t))), \quad i > 0.$$
105
+ (12)
2210.07158/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="Electron" modified="2022-10-02T13:49:56.072Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/20.3.0 Chrome/104.0.5112.114 Electron/20.1.3 Safari/537.36" etag="oBNcjTMKROHNee7GHFFK" version="20.3.0" type="device"><diagram id="FHGbvqbWkDhLCtrh04Gu" name="Page-1">7V1Zcxu3sv41qsp5EAr78mhJUZbj+GTxrSTnJUWJlMQTipQpKl5+/W2QM+QMBiJBajDEyFBStswZbt39Nbq/bjRO2Pn9p+/mg4e7n2bD0eSE4uGnE3ZxQimRRsFf9pHPxSOY6tUjt/PxcPUY3jzw2/jLqLyxePRpPBw9Fo+tHlrMZpPF+KH+4PVsOh1dL2qvOJjPZx8faw/dzCbD2vMeBrej2h32gd+uB5NR47bfx8PFXfEofLPNhe9H49u74q01Lb7x/WB98+qBx7vBcPax8l7s2xN2Pp/NFqvf7j+djyZWenW5XD5zdf3B5qPpIuQJ51f8F/Lj4Hr2z7u3g5ur8e8/8W9P+epV/hlMnoovXHzYxedSAvPZ03Q4si+CT9jZx7vxYvTbw+DaXv0ISofH7hb3E/gXgV9vxpPJ+Wwym8O/p7Mp3HT2uJjP/h6VD55Qxi/sf3Cl+Q2KL/XPaL4Yfao8VHyj70az+9Fi/hlu+VSKGyMjV88qTIwSg4gwm59CIR+r6iNIrB69q+jOlHZT2Mzt+u02YoVfCsnuIWXRcykbRhGvS9noo0uVvT6pHtd2L95+GAxvr8Xb86fz//51+37yWeNTIprW+91SrqeL+RN8Klfm8O0XdcHOR4/jL4Or5Q1WD4Onxexx5ent5cFkfDuF369BlCOQ85kV4Rhc8Jviwv14OLRPPnuYjaeL5RcUZyfiIkRnxYN1Q7iZTReXg/vxxIr9/fgeFhiK340+wp+/zu4H0+KWYjUisvh3+ZrD0c3gabIo7yq+Om7BJDSVSJGaSWjcNAGMmwZA4xmAbBjAf+Ar3YwXi/H0Nuu/Rf0bzhCt659ijAhv2IDp1gRUwwT+D+SZbSCGD2DE9QFp2IBuuoGnxWQ8mj9m/bfpA0TDB3jWANWt7s3uWGs0Hb6xCddG4MPB491S3sSxhZoiJoOr0eRscP337fJxv4DXmvp+NPlnZC2jrh/zrHoaIZzG9r/1lTKro/aRu8GDfbUimbTPOLt+mv+z/hLPqnc0LNPIZ5RbUZ3wqK58bD6aDBbjf+rJp0+fxTv8bNFQ8R2E1QwH0u36SzzOnubXo+JZ1WRxxwsx7NjWYjC/HS0aL7Q0r/XXfoHF6WbQ0bA4azAP4YhbkwSFGzqp5uE+JAqIxjDWRHBMtWZYOLKF1NNIYkB5XGLGWBOkAhnNpCGGU8YM1k3FK3gL+INwY7CCjEDFgrDEuwVa2v9i+dfOfCkAl4Q8C8z5bAGmPrPO/ZQt8bh6Cix1TCiuKGacGwiJHRcPEL5c/qwAu+JUmni2b/wwmoNjhwXFfgkbp6xuvJ0PhuPR5jPBK56Zb/EbUbl2MZ6Dpaw+3mjwuPCCf4fpBrt8ghHFRHNhlGWWBDGOpSEiwW8pDmuDkVI2c28hkPBYFzwcy6A4D+CO4HXGD48BtjR4fFixdzfjT9bbNjV+Dj9RlQBg12qjBCZ5TQkSI9ALI4QpwqQo+b8q2gHsYLGSUI4ZeAfjSdOfuyeCegJIpz6ph1MEkfFaPXXlCMRVRTeEpK6cgNWtNeV8+0aeSRlXOXyLcjhox1S00/ReiSmnmez2GjmwfFQdm2D1OIbhmnqE8cQxaemnmYjG1I+lYT1BQ5HjHG09ItLxealrLSSF7JHLo7AemWrMVmfyOas7PZq4ekRAetAnpycMpBEV9AjH6SkkRU
U95YKcrn5Ip/rB8BNVP0wi9mzEwEjPliRBj7EkRUQPQ5pUtENr6qGkngqlo535X+bjv385e3c2uhJ3Tz9/ufzhf/NTGhAvrKgkj+xfLOO9WadTiQivkE64EU1LXiUDGsJnDFxfpYZcfrSq8CHoYxJvfsrqQvvC3485vp4MHh/H1+/vxtNtnHGT1KV6xRTBa12OJ+WzmpQQXC+oKebV7lYLSoXlRRBvSEGJYoYwzOvFAggM4THGlaYSa7jTYYB2UsAlyh08tsf4emXMQgKQbCgpGcr2d+nKbkICo2w3CdoNFXu9bIsVpx9+eHi6/vB2iK/eqR8v/vj+jvzp7SaTqwriw2Basyf54ck2bZ5dr8zjjRX47dU3IHv4H94dV377l/3V2gG2VZDTm6Je8mZ5hxzc26CjeLm7dQnFuVB/iVWZxL4AoQ+fnGurz2gvTmfz+8GkfvljYSr2Ol99yuXFyWixGM1PH8sqief5tqJ/OgaETIvn48p7Ly8u5oPp4w08q3z+dLS+4eNsPqy/fPXpV+vi76kjUmq5nkKU1DII5e+iItjh+PFhMiiEOp5OxpU3vpnMBovqB5Jrmcrb1d8Q+fDPq79KjYMBrZRe3uP4k6K9YVccH9zTsKeX2IOX9UeCJeKoPwisxpKx2gx8EPQVlDIEvxIIfnmlEDwFlGkfygoQniokfRg9Fgh94ZRVzI9/naizE3E+mDzcATzOpyfq4hswoPPP/yo1F1lFnj6vptbaYkBOOWTRtJKG1ykQjjCuJOHKw777lCZbUJr/CwbUrGK3yxCB6+0ydZERzFvolzEE1FKRfDRKr2TsD5DoDhW9WNAGCV6pPNTDaIE0q9YdREPKQiLK18wb1qYpZckQOPsNOcei2W3UqlCgaja6Dq+pgqVWnbYTWQmB7Oq+/qFNU0+JvyYmhHxJUA2MIFPdLkPToaKfEXTUMk5EQTNH0PXVEBw/rZh7M49ITAsBvj0pLawpWYRJ1e0k71cC9uglJegw9556fxkxvlS6D3KnfgMv68LauZy6GqL2YMZTA9eIiqr517VAEdEVby+Td/dddlvu2ztxgHpsu6sv6ilXCdm3VaKzhsvN7oOI4KHIVAgd7ezIkb3KDKhqJmjvRouPs/nfDRXlTX+Hb/qju3d9d7rjb/0B8nah9rYL7W0VChNw5U7oSRGtxaYNQ+l+LxDViXQn7i9hy5c5Era5rhECM62EkSVDlayHLpf3rbIf3o5Kd1V6y117XeueczZf3M1uZ9PB5O1s9lDc8r/RYvG5gK717XU91tDWyi7fLe7h2RYPX79GxZisKa18Q3M38GIwXzhbmpePFa9nv9bo03jxx9ILQXJS/PvPlVci5b8vPlVuvvhc+cfPpcMpH9vRO1JOklo2WITgcWc/yQsbRRTDDQcFSZrbTRjaEaLKqV/rkoFChNZfK/LGY+qppNg/cPlRzyezp2GOfVqMfQjbGfsQ2mnwE0KHx66mSWWQfnbz+bLJl2+KadRDhaSz+ZwmQbhu1LpPpl0ttXGmalqQdmdddRdJs9iWVqyQBB97gB44QaKyQZ7X1bC75JmYFlJgZw/RAtumBQ5qqG6pSnxHFU2CnD1AC1ur/xCP1ZSAmwMEE9NCVG72aCsDkX3zSVE52Ih6gJVBVje307pTYnU8JF64W49V7ZsahIJgtQqHulda1u16MzuFmi73qu9bMDpAO0wg/PzCzUi/1oz1LLIE60WHQIciris71evFVvBvvUoxmAzBTqYjI9ORVSpyOw25g3LcbuW7mcjSlcZmIrllYCkB78W11Fo4LQuwAFG7q9oQDp6QaMf8QwlKohGpDv2g9beRyKhNb7Jy+ZTI1CWTzVh65fcofl/uDFnV0FxAZvryBaVbKZHT6v7MvN71DMZOWEymQmiGPLR1i4a72W1LnEMAiDaNqkf43Fay87ViuyEVQKyU8za8wA8H5d6sOsMcYQes3LbkVCrqngjLE1GJeK0XARHUBo1kd4hbi3xeGNg08f
dcs8UzA9O2dEw0UH9+LuU+qdHG9HZ772Opl3U3K+g5rWEk4si0LOMCxKrdo6rOURJsxwuU/RVmvU8s2ZRGRK2a9EBjtHca62w2caIaY73TWGfjio+lMbt5p06Lip7pqAwkvwZqh2+JgAJ4mEZateF9/OOF/PnNyibBFOef17SO/cef9Wsbamf5r2qL2R+FiDaNaWIrH7Sv/e9kgNbxzirtCPEB7XFFL7T3kNpktvds795o8Rj27icF7LQKstlxQprBnVbKTvRUsJKIkjrtii6gIdWcjLKMMm+E3xeUMYakqqCs49KA5wzHVx9ek97F1yE8Q/aE2RMeGl/Ljjyh3bVWQgZcH69PPKNme8ARWutgdgjXmgLhgjkHZOx4m9gul4R01GU0ZzQfmj2kgWbupA/yMDDviJ6OnaOE7G/NWM5YPjRHSQPLYnuS0g6W+ZEzoSMMGzl2JtS/SkNI/S772+xvD82EVDf+lm33t8y0Ejvx7YnQrnwreiIUsqclgzmD+dBEKA0wiy4SoR3ZVvQ8KHcAZCjHzIPSgLLsIg/akWxFz4M6O0V5ex50A2/x/YGm95KciLk5UeopUUiDXHa92fUemhKFHkH30o16O+jkdlIisT0l2pF4Rc+IvqI9shnLR8iI0sCy7CIj2pF2Rc+Ics9GhnLMjCgNKKsuMqIdaVf0jKizqS/HqgxxWts1ZNwxev0qEtHMK2fXu7/rFcGut6PRJfYQM7N2vZqrOlFh6k5RHuh6uT19seJ6nXl1bPu7xPa95di2jOWM5T2wLFPDchSU7esxyGEuoz0w531zGcz7g1n1C8y0GzAfuP63R3DkVqwM5v3BrPsFZrs9T1aphzAasQlmXKFRGHdOtrBlh23vEhvL8giDU1NiOHq3I5DlfQfHcb4HONFDHXYE5xtMcZQO4ZVQHGwH+FXl4E3MHdRGpzhypSgHUhEpjkSwfCD50E3y1R6Y8+beDOaIFEciYO6m+HDk2Sg0d1ZmLEdkODrDcgzuYU8o88N4lPYYjs4Ook6T4ehfE0duhc3ONybD0dFJ2Ck0cRw9KcqB1HGw3G+2MpzhSAPLrTEcW+lKemy6Mi/MGcwxGY40wNxN7YEdGcw005UZzDEpjq7A3A3FsRXL3KbYFSyLjrEsaQCWX83G/e10R9828bM86z774ZhsB00iqPo6EqTcGHscLPebuQxnO9LAcjf9HEcuAbO8lTRjOSbZkQaWuylDHHnAM827zzKWY3IdXWE5gXYOcex2jld/8rLE1UFlRut68NW3Y3x5bnLPznd/52uCnS/rxvk6sDTO/MBTwhEj4D01lhQTwY0JcozNOYUg7I2TJ+te1BcRKa25X55rwBnNe6O57HftD5oBzHgDZoHDDkjYF8y7DqSLjuacGB0Hzb0uPpSl3XTRbOooO6wssC+YDzzEsj0w50piXpr3BzNNDswMYFbhH9o4NbtJcpjqcFMt6+9hDhlt2h7FEVJGfMUUB+0dxZHrvtnzxqQ4eDeeNwr1cEjqdQCP0t7M5xxGZTDHZDjSAPOB1MO+aLbdmJV34R2Dmecu9wzmmARHGmA+kHrYF8zk2GjOs+8ymmMyHB2hOQL5sCeLcuSGLBmyLfzVbFjZznaw3rEdeU//cbxwr4tGe7AdIomYqj22Q1DAY/FS2j1giRNJlcEQV0nFNe/WD4scUGUox+Q60oByi1zHFixDdsRpBcwqrMzcXnaUd6xkMMfkOtIAc4tcxxYw02ODmWYwZzBHpDo6AnMnVAdBkD9vfurbYviRN4WrgB5LC5+HhkVSr42UIbtLP1BcgH5wVb4s9qu0eBaVVjf1EIYohGlTlFUuwsM9lCXT1rkGEkIUbVwa2U0W1dxXkHciW7yTq60q8wSe5nL5A4/fzgfD8WjzGoWjKh++GM9Bb+PZFC6NBo8Ln9eS6s2ZudzHa8hnDKVpEMdSL8WJ8IARZFpcVbV5FVorJzPpGflHg4bzvmaN9a05jQYNJXnNGusbwQ7r72vXGG
RXuD6HoGdDyte7onN+lkvRpf3vzM/W8c7O/GztA9rLz1647ueiX7b3ve2dhhb9Iti7nyugHHLeZ9sibHC3jVuMzBXQPAwyo2x/lJURfl9QdtzJbhSHFL1fWXjdt2OOKcnFzOwJY8bXHY0np6Ra/6DrrYItFzMZr7YYcuE2bu94m9guNw/gy2iOmj2kgWbupA8HDuDbET0dO0fJPYMZyzFzlDSwLLYnKe1g+bjbMCimX18m1L9KQ564l/1tzEyIduNv2XZ/y0wrsRPfngjtyreiJ0K5bJjBHDMRSgPMootEaEe2FT0PCgmeMpQzlA/Ng9KAsuwiD9qRbEXPgxI5U+Cg7egt5EQ9OzOxaGzPrje73kgpUUeTTvkOOrmdlEhsT4l2JF7RM6Jc6M1YjpkRpYFl2UVGtCPtip4R5ZN+MpRjZkRpQFl1kRHtSLuiZ0SvfpvX9lPk+1YkyuMCsus9ICMKHeSz9gev5BT5rSdcErb9XWL73jyzOGP5ACyHzvHpDMtRUNbRWfWtgVnkhTmDeX8w036BmXYD5iMffkpZ3gSbwbw/mFm/wMwOO+G9m9PqW8MyefVDNrYzHL3bEZjPqs0TEWNSHKSjvdEdURxsB/iPOROR5lHFOZCKSXEkguUDyYdukq/2KI5c9s1gjkhxJALmbooPR56NwnKQnbEckeHoDMsxuIc9ocwP41HaYzhe/Zyj7QxH/5o4ciCVnW9MhqOjyQopNHEcPSnKU1KOg+V+s5XhDEcaWG6N4dhKV9Ij05UiZ0UZzDEZjjTA3E3tgR279pA3j2Ywx6Q4ugJzNxTHVixzm2JXsCw6xjKhAVh+NRv3t9MdfdvEX+6qyn44++EobAdNIqj6KhIknmcUHgfL/WYuw9mONLDcTT/HkUvAIq/LGcsxyY40sNxNGeLIA55Z3kqasRyT6+gKywm0c4hjt3O8+nN8Ja4OKjNa14Ovvh3jK0PIqex8s/P17gEMcL68G+frwNI48wNPCUeMgPfUWEJKJ7gxQY6xOacQhL1x8oTR+kSkI+/9VblslNG8N5pJOF2ZCJoBzHgDZoHDDkjYF8y7DqSLjWaZOzqOg+ZeFx9IOGF5JDSbOsoOKwvsC+YDD7Fsj7HMLEdemvcHczhj2RWYGcCswj+0cWp2k+Qw1eGmWtbfwxwy2rQ9iiNkvs4rpjho7yiOvGMle96YFIfoxvNGoR4OSb0O4FHaYzjylpUM5pgMRxpgPpB62BfNthuz8i68YzDLnBNlMMckONIA84HUw75gJkdGs8jzsjKaYzIcHaE5AvmwJ4ty5IYsElJ3eDUbVrazHax3bEce/3wcL9zrotEebIdMIqZqj+0QkB3i4qW0e8ASJ5IqgyGukopr3q0fVpm4zFCOyXWkAeUWuY4tWIbsiNMKmFXHu89k3n2WwRyT60gDzC1yHVvATI8N5lyFyGCOSXV0BOZOqA6CIH/e/NS3xfDjbgrnnuOgv//8MJrDQ789zW8sq0Hx5XixGE9vGxBfjD4t6vibjx7HXwZXyxusEVmEPhZAsv+cjG+n1gGATY0s+i1NAZCbvCku3I+HQ/vkswf7jZcyEGcn4sLhTdZgq4KneLDuQ2oQfz++Hz3CV3g3+gh//jq7H0wdsMtnwb68q/jquImN0iGGzwmRdWujGKOyh6lCrhBZvFmVTaH4eUN/EXvCPdNNf1+9cdZ9e7pnDsmmNGoOginn2dUUz2IpPuR8gHL5vpmMPhWLWnV9u54MHh/H145R1DTS/vJstqzOzsrKL+x/jZWVuEEIfKXyml0BpNJMYio1Bl+5vFq8NXhuQxQTgiliMNb2aqk/ibQwAnOIDykjHPsb1kKHUFWtwuMOysdeuBjacaxubzKnqDrQxKVtg49Rl6zxuqz6umEbGFpb9ULGJ2Rrf9XWzrj2WbuumuWBEZ/r3u3rGt6IADsz9oCKSDb2V23slEnkJB5U2crege5cCVTOEqm8nKpMGSSsYyMPqNlmI3/VRs4JuFnVNHLxco
/OBYGXfm4H19L25fH8e0AXYTb91236iiBR9+826Kha7KGhuyCW8K5E6qrxNtXhsl3bfkDPXbb9V2370p+2thHIS0/aesRAPqTqYo3y4VmVNSgqiotaTMHbnRTc4vPUlTFIVwHv9OBiA8pY94tp4+O1EOXrAhrWpmkZ0s5RrbSLRWO9WAAP0E6nHwD58vIcfsIrKxt9BxOLRKAaq+JUNASE7RXzLUP4VHv5OOtsq2k36rHTDqvRaEnnpquAzjphQQHfvpFnUsZVAHMUUIMHx+B2mq4tXe2EbB/qAzw2XQ2YVN1X8v4pZMdHewpYJp7s7HY+GI5Hm9ixqH0dbVUpe3TSVVNI61mPvBihfpyUfQvauZy4eoLO+umDGyuTcY2oqMKlrh2KiG4mzOlqp7MDapfasSKJqh3bOueLwco1SNbBQ5sJTGL6ocdYgyKiB1LsSuqunQxcpprBzP8yH//9y9m7s9GVuHv6+cvlD/+bn9KA/HKVsXtk/2IZ753cn8LKUf0RzpkH0puWV4RPPcKm0YQbkh02eL73d+PpVq6v0ZBJ7X8n1cZN4mXjNhQb82pvq4UkQqZpu9tPCkoUM4Th0vlv0AePMa40lVjDnfvyap/9JtEeceaXcUgWmw0lJUPZ/i5d2U1Ifp3tJkG7cXdY7XjZ6MR9MwwoXnZ8fZIbrSM3Wou6MXibbYlSnsglWt3Bs0l9YwTvZvP7gX2Nbx8X43sA22yaDaJFg5Cszpdo3DQH02nXvWfQ9cV4brOBhjVkY4hrDM9tw9CRDIIL+v7vy9tzevvnf//693//fHc2+3zapFy+GfwLHvh5PgZxwILxn/dvHrMZFF99FWItVo6SXZwaP4G0v2ngehBBSZ0a8qwirEsraRI/31xZK/nP0zwbR2zjMHXjsPv4tNhOTEWyDfPlt8u//pj/79MPn/7hv/6ixo9f/n3qCTC+ubbGQS/8ASc+HzyOstlENxsnGCVG7irWrAdatG04v3337Yfrzx+uxr/+9Ym/ff/L4PPV0LP0NGxiD8KyNWaSCNxsYBfIUz2JRUF6pdV0wQ1ppUQQbNP418EPRCcgvSIOmWOUzSQhM+mafvRKPKQ3MVtNelZzNPLRy/8GnUjaGFKyxX4G8zKW4j57urxkbNlx7ljQct7IcPB4t25YfzZc2D0fpOz3TsSuhGhuOyqHnexrPco50Za6m/Ri20tAVTV2kzkvm8JLGbCNPKvpNvGMSFAtRMZ+wQQEe3uExtuF//I+/SaHJdR6n0St2V4gZiLJrM3Gu4anwfj8fNlp5zSpSHl5ebZHhWyt2OBGVZuM2I0yinOpsS4HjVdkzTf7sWqyRopygZngEhtDDYnVG9FmY6pH8Fb0PsFb0UcR/KbzkWNpT14hsIYTzetLLchXG40JhyvMKFLy6rVCDyISC6GlMPCPsjmvqiSGKJeMUCykYkpoHUlHbXZve5bhopMuvP+xJR1pRBnlDBOphFGqrH2WPUQKCbmCjSBYsiZ0qAANEawN04Irrss9WLV1G0GMYZfGEkuRVLRf4rYrdqpt1/OiCjcHv5FnDjvbqq1E4iII1BAXxLKlAlNlmNNsKS1puoQYBOVcl9LdO2SiSIAroIIwIyVXZp83eSacAp0OPlduK9jKZ78qRuAzlNEaonxNwMSJ+yEKU382ayCIYAprCVm5N6m33s4x0qSKMrn9/eCX1VdqN1j0+TBu/xfn94PF3fVgcqLO3p2oi7/gb3hwcTdaDE7o+RQe+uY7+OX8X8UTXFwVzPOuhtVgutlDLkdzgaekwVaeNh2dtyAgI3ky4gvF5GrH8cNgWpO9/PA0sxeuV+J6Yx3B7dU38HHhf3h7XPnN1hSWosSWPj+9KYj4N8s75OD+YXmRMZsr3q33QrtX6i+yKifYlyD04ZNzbfUp7cVpWRmvXP5YyNJe56vPubw4GS3APk7hq17bcofv+dbgTsfgrqfF83HlvZcXF/PB9PEGnlU+fzpa3/BxNh/WX7769Kv1dv
FTR6jgkdbCpDaQLH8XFdEOx48Pk0Eh1vF0Mq688c1kNlhUP1CpPfjtdvW3xdd5CbOVzsGGVmov74mMvnhQcw4BAkcsJN5aiItVpPV+o/16lHezL4FBgxttuENnK4Nr68RhdX4t8fA9oYoMPaumm0BkzRSUPWCOukPrBtKgbluX9+tczubTsvmspxocZi6umcWm7kI20mVrie1snCSnXKCS9zXERxrlEPGrCRG/Sy1E3Lu7RpjGKCCqKcLquDEgDfDLx2mqoTVZES59ZZZuN/WFbDdOqOq9VeGJLEh976nxy5hmQ+mZoaSxqa9nzVjZbtLsq2EB/ucYi7rUdUIshUXdc8hM/zBXKvzrwNxxFvWgeYzZUFIylCQW9aBBkdluErSbtBb1kKMyjrOou3tf+NEX9ZATF5LHnEiLOn6Vi3rI+QTZUJIylCQW9ZDZ/tluUrSbxBb13Pr0Vde1Lntf1+KSIVMfI2uHMXoiwE7rWuWAjNSiZcPqsrJVQVk5k6LZimlAmJ69RLqF2Bl/ufnwwP7+z2/s5+FPf/78y9Mf4veWd9lvV86LxSmQ0BTr9WFCzPHuxPbrK6YNsxKWojnCwGAkFGZK8trM49q2LYa0PeZeKCK1YFq9XPLewRg86okfTd1sVX/whiMuEcZcc6qwAFmbJvBBSZJpsRSeYVR49rU8c0v7Io56akfrIt5YucSKaMmptLsIaM3ImUKSamMPRZJYKta08ZQUEPXUjlgKgChRakqEMFRjcNJU9VgDUU/miKUBipHBHMJzcORMYeWc80AlElhxLbRVACfN8c8paSDq0RzRNCARl8Ju6VkaOa93EhqkhJDcyl4oahJfBaIeuhFLAZwhyI0FEViuRFQPdQRiRDIJ8clylWDN/aUJaUAEhOfpaUAIpAADYrkIaK2Us78DaUiNYXkG7wQiZEk7IREQ56enARvQU8KZoivxOA0FdpXg9nxJAIoB9TSzqZQ0QGNqoLG3Wj93ekk7mmEUYSqZ4QKAAcmUchUDcMGa42UES1mz7TUlzYT0hCSHDWEQZZAmUCbsKZfO3BdCkeDUwLoNCbM9lTdpBfQrEy73eiNw+0vvtAxSaZ3AFgxJaSmKkkXwTEFJSAO9TJSZRCAUbNYYqGcJGjFJjKaGEsY1VZ4DYBPSQC8zZYURg5WGFjJOPAbqZSpsI31wMHjlxokpqXZfJgb3pW3kvUyFHTpI10lnrhHBBrwLXiXDaUOgl6mw5EgTw3CZatVDHYeLEEkrQEaNNT1TsJ45ALSlBZggJrBNwVYLsHNIvdSIKsgB7BFtlkVqjkhOSTNdHsAeXTNaIIztlCG1IqndSY6Iw5/GCIKJHYqUCnvkn04WqWd/s8Nb+rpxnhsf2ujP2XTj+HeKb9NoIs04zBoEFhqySqIwEY2TSDkEHpxJYSTn2KlFh/biCI4IVxDA1OehlAkTRVpSW3U1THJIqLptDVQh8G/fyKwcloMKs5HVjYxIdpiVcQWvo9wwWQgED22si8UyLn8FIsCDuXMrQmdjuCMuqmZEfSZ3tvyvt0aEuMJ27Vol/evCRznnhNozlRURq+4EdaARGY7AUJkidElgOiczWebHNhGuUq71COiW5/hpBBk1BFdUL6Mn5+xuEANjoqA9SNEj0NaEPX+ARI9nxE1T3dpplIitQoSEtcYlOUWU0wxGEMRpXFj7oqDrfaezlLa6nGMqVnVASU3dVhlGUmJL0duGM2KcN3mhrUYd6RiSKx2lqc+d9QspKBHrrj5PTx9B3DM6VrfQDelHaqjcAlHFXiw5VxjNqXidCKZj+mmrcsI77uycaCYVkZjb0Ig27EsghZkxRFBYjLAqt67WczXvLa1LuGN+6YUS/rQRDlbKRhMGsmZnpdVIKKmklkJIzbz8XjLiD5q7lpz87SR0TBnT0iYGtMfi77jLoh3xU+uRIWKmhNheR/cYUlCOEsRgyJsIBKK+4xwS0kBAkJqgBkDGEANqAhauqTLODgs7wxkTpjmH+E
7bUkTSKui4naIdFXCOGJaQ9EHuaOg6cdy024FjIopJQrGiwlvoTEcDHfdTtLQKAwgElooyYjSW5ZjwTbedIppKQghmnHCRthvquJ+ipXUYFgJs65yEKg0Ox1UBXFYc7qBE25J0OaUzURVEbagI77drRzUM7B9kDu6HK0W1E6NazYA0l/S5hjDJsx0vJc103IfRjgYkRgpkpA1jRhnu8HEE9APXYA3XApZz5dl6lpIG+pUIrxvu7FlKsDYrqwVeBtsbhh9WaIPhNykBjWnHSL3Mk5myJTpGMch+GSfVFGATNVifBbUOSHOZ9AIdMgg3PQUoCiES/Mk0ZkoI5muoSEfEvUyGbaRPJGdca2U3F9S9vJOICV/fRDoK6GUuXCeDiKnX97hBDOJUiE3tuWxCJZ2HBQ14TU4BUtqT0gznlBktuHNqngMB4u1rP4YG/KWLkHU24dahbRpNpMjJJES+VCq86uqQjU4zSCWtqzQYMkbl9Ijt0TokdKVoVLdJwZAlcIzNjjDDgfX41jqH9jvrJLnOoddmY5ocaGO4blQSYVUxqnILekctQwE5cgodQ30wHiSEktQwW8SAmE45HUMc2bMuKSMSsniCD+0Yss1tiqtVYlRnj23iCo9ye8yvgo8S1tm4f7+QEJpBZKSMNtg5fITZmSLwKUAMTGqmqGOzMfqFAvzisfuFWFKWeloEV1JJIYyUzjInQcPSQGbNKKe2k/JwS+VECSOYkdYg6m4PIjMOH4JgZZvbSNhauq+tEg5hPMT4CkRHBOFOzdH2Agu4LDDF4IRVkQQ8O+QPIyWoHdqkwVsbSesIJ5A1M8YBd/BmWmqj4hs/7Xg73/4bBMBfKKePSRlE7EY+CNNsk5N3w6TR1ntgwyBHo+WOunpo772l/eSqYxZ5fwkzjhRzRYwRrAEbEfsIhHRE3DFNvL+IOYR+uNci7pgIPmAjEUHOEFUqNKK4ImEfBZCMhFnHTO8BElaIun6C98mIWcdM7/4iVgJR6YrYTtqsrHa+il06Iqapi1g34wlGkexNPBF0MtBxFztbgXY8MUW2+r+WME1awl3usN3SkbG/5KUlGBuSJ7zioH2FiHQk33E70v4SNhS57pn1aQVMPt2zs3KcyRWwLCLOe+Ofk8/3LD/jLIAayf7YcOrZnsHcFTA1fRJw8rkeF41cr1eJCE8+11sSFtysfxyHTBCIuCeJNU8+6zPgkMkOe05Ewv7KcoiEE+5eSGtygJYKCbypIjdJLcIk05JLCZnDoedUaXChzmAL7W9e6Ko9IYQ5SLg9oc9GpA8sJmuIzF0yRVNUDjjuqAWBBmSNKfQgJGYhDBTlZFoMKySr5/4c6F0MxLuNLI4pZMhm+kC5f7bt7gJ4m0Y1izHEm28dt6tgvzVxlx2GtA4kZl/lFy6VYNj6+K297cmZkmS32R1gSwfo9vyK/0J+HFzP/nn3dnBzNf79J/7tacBKVdfejqCySgkWhtDwK/zC/tdWTR0jp1/ZNKsKhJAyja2ah2lh6IVXqodPA2ltYopm9d46gjUyZsvEFKWQFlIRwSBg5+tGmKq8lvux7EYgpaRRWJmXi8/fIxTpQO+ugvBtNrHyE6v7Lt5+GAxvr8Xb86fz//51+37yWePT0nwT8X3MjkIBw+BEcds/5Jb0IPqSiim4TO3k70OnfwkO1lkJ2BuT3TtqDPWrpGOOY6tPOXiITQPx7o4Az7ykZ255CeL9Eu6Y2HihhL1DbFTdZt0pKnT37shjKiAkZU1OAc4UG0xkn1XQcTW8HRW4k2ycU+OcSTbUR/6lo4GOh3i0pAFnko2or9AHTLI5pgo6Lpu3owJnko0zKsKdZEN3Dyo4pgY6rqu3tBQ7k2ycHSfOKBvvgQXpaKDjsntLa7EzyQY7GnAn2aQdj0YtzIf3TbWjGneSjZvD7T/K5piq6bik344K3FE29VzBHWWTdJREcC+TNWeUjXN43/6jbI6qgV5ma+4sG8cNucNsdg/6OKoKepmtudNs0h
ZxL9MxZ5qNLGf4P5OO8cR10Mt8zB1o4ww4difaJB2KEtzLhMwZaSO2jteF5SBlFeiAkLO7bfXb9JNK0ag+/2FN+/nnP9Rf/bDpD9QpAjvzH0hYwehF8x+Ie7yWMwBC6tqW+hf3anhNlYeMgIhdeyZCIVGt5jlUADNI8k1rjg/8ElG+KfiVAKwVoxlE7OtbGkNEWgM/6zrY3qHXfQIRU/nRuq4Fm/Hw5qEg1Y4AZHRFwNh4WiieuyeCGjoOuNtSAyPIiIoePMcpJCbojsPu1gTNHEHXzJ1DxEEr5t6M+RLTQseB94u1sO7HRRBwV9xO8n6l4/C6I/dejmpOV+4dVzlakzv1G/im56B+OXU1dLwFsS01cI2oqJp/XQsUker+D0/3U2JqiFrVaBSc8HMFp7bUA8sA90U9m/mR/Vol4p7yVdXPcHQzeJos4oKHIsMq0ndO3pH9ygw8h5XbP3CZrJ9PZk/DhrZAVAuHmBk9jr8UOa5Fx+BpMXtc9fkutTYZ307h92tQxwg0dWblPb4eTN4UF+7Hw6F98llBG8DbiLMTcRHSFl88WOeGbmbTxeXgfjyxink/vh89wld4N/oIf/46ux9Mi1uKXmQii383LWl5V/HVcdOs9u66J6xev4TAGnkK+oR6KCbaQtP9xQ8/PDxdf3g7xFfv1I8Xf3x/R/70bmWQy6//+ACyqipffnia2QvXK0m9sYzX7dU38Mngf3h3XPntX/ZXC0ZspXh6U+jjzfIOObi32C5e7m40+WdkTcK9UH+JlU3ZFyD04ZNzbfUZ7cXpbH4/mNQvfywEaa/z1adcXpyMFmCTp/BFr8fTW+/zrb2fjsG6psXzceW9lxcX88H08QaeVT4fTLK84eNsPqy/fPXpV4Prv2+XtnvqiBTW6bUoqd1MV/4uKoIdjh8fJoNCqOPpZFx545vJbLCofiC5lqm8Xf0Nhsc/rf4qNQ4GtFJ6eY8f/Lv8dDDiW8CUcHsCPOfZsHbwBP+cz6wQN8wjfPe7n2bDkb3j/wE=</diagram></mxfile>
2210.07158/main_diagram/main_diagram.pdf ADDED
Binary file (82.7 kB). View file
 
2210.07158/paper_text/intro_method.md ADDED
@@ -0,0 +1,85 @@
1
+ # Introduction
2
+
3
+ Estimating normals from point clouds is vital for various downstream applications of 3D computer vision, such as point cloud filtering [@avron2010L1; @sun2015denoising; @lu2017gpf; @lu2020low], surface reconstruction [@kazhdan2006poisson] and rendering [@blinn1978simulation; @gouraud1971continuous; @phong1975illumination]. Though this topic has been extensively studied, it remains challenging to handle point clouds with different types of noise, outliers, and density variations. It is well known that point cloud normal estimation can be formulated as a least squares optimization problem [@cazals2005estimating], which explicitly fits a geometric surface (e.g., a plane or polynomial surface) to local neighboring points and then computes the normal from the fitted surface. Specifically, point-wise weights indicate the importance of each point for the surface fitting.
4
+
5
+ Existing normal estimation algorithms can be roughly divided into two categories: traditional methods and learning-based methods. The traditional ones usually approximate potential structural properties of point clouds and use a well-designed algorithm to fit local planes or polynomial surfaces [@hoppe1992surface; @cazals2005estimating]. However, explicitly fitting geometric surfaces relies heavily on careful parameter tuning, such as the neighborhood size and the order of the polynomial function. Early learning-based methods directly predict normal vectors from point clouds through a regression network [@boulch2016deep; @guerrero2018pcpnet; @ben2019nesti; @zhou2020normal; @zhou2022refine]. Alternatively, recent methods employ Convolutional Neural Networks (CNNs) [@lenssen2020deep] or the PointNet architecture [@ben2020deepfit; @zhu2021adafit] to learn point-wise weights, and then classic geometric surface fitting is used to compute normals (see Fig. [1](#fig:intro){reference-type="ref" reference="fig:intro"}(a)). The learning-based regression methods lack geometric priors and struggle to learn a general mapping from severely degraded inputs to the ground-truth normals, especially for complex geometric structures. For the learning-based surface fitting methods, one issue is that explicit surface fitting is sensitive to noise and outliers: the weights on noisy points that lie far from the underlying surface significantly affect the accuracy of normals, and even small weights on outliers can still result in erroneous normal estimation. Another inherent issue is that their predefined polynomial functions may not be suitable for fitting diverse surfaces, since a single polynomial order is selected for all points, e.g., a plane in [@lenssen2020deep; @cao2021latent] and a 3-jet surface in [@ben2020deepfit; @zhu2021adafit]. If the selected order is smaller than the order of the underlying surface, the result is underfitting, which smooths out fine details and reduces the accuracy of the output normals. Otherwise, overfitting makes the algorithm sensitive to noise and brings instability to the normal estimation [@zhu2021adafit], as illustrated in Fig. [1](#fig:intro){reference-type="ref" reference="fig:intro"}(c).
6
+
7
+ <figure id="fig:intro" data-latex-placement="t">
8
+ <embed src="images/intro.pdf" />
9
+ <figcaption> (a) Prior SOTA methods focus on weight learning and geometric surface fitting to estimate the surface normal. (b) We use features <span class="math inline"><em>G</em>, <em>C</em></span> to learn a hyper surface <span class="math inline">𝒩<sub><em>θ</em>, <em>τ</em></sub>(<em>G</em>, <em>C</em>)</span> to directly estimate normal for each point. (c) Existing surface fitting techniques are severely affected by overfitting, underfitting and outliers, which lead to inaccurate surface approximation and normal estimation. </figcaption>
10
+ </figure>
11
+
12
+ To address these issues, we propose a novel network called *HSurf-Net* for unoriented normal estimation. It makes full use of the powerful learning ability of the neural network to implicitly learn hyper surfaces. The hyper surfaces are represented by MLP layers whose parameters interpret the geometric structures in a high dimensional feature space. The advantage of our hyper surfaces is to adaptively fit more complex point patterns in a robust way. Based on the learned hyper surfaces, we introduce a *Hyper Surface Fitting* module to directly predict normal vectors (see Fig. [1](#fig:intro){reference-type="ref" reference="fig:intro"}(b)). This module learns from the well extracted point features and gets the surface representation parameters optimized in a data-driven manner, rather than explicitly fitting 3D planes/surfaces by a polynomial function with a predefined order in current methods. Moreover, to avoid the selection of neighborhood scale and construct a noise-less feature space, we design two network modules called *Space Transformation* and *Relative Position Encoding*. They can cover local, small and large scales to enhance the extraction of structure-aware and multi-scale features. Overall, the combination of these modules extracts the discriminative geometric information and avoids the issues caused by explicit polynomial surface fitting, thus improving the performance of the normal estimation framework. We conduct evaluation experiments on the synthetic shape dataset, the real-world indoor and outdoor scene datasets. HSurf-Net significantly outperforms other baselines on the challenging cases in these benchmarks, and also shows a strong generalization capability on real-world LiDAR data. Extensive ablation experiments validate the effectiveness of each component that contributes to the final results.
13
+
14
+ Our main contributions can be summarized as follows.
15
+
16
+ - A technique for representing polynomial surfaces as hyper surfaces, which are parameterized by MLP layers.
17
+
18
+ - A Hyper Surface Fitting in a high dimensional feature space to optimize the surface representation for point cloud normal estimation, which brings more robustness and higher accuracy.
19
+
20
+ - A Space Transformation module and a Relative Position Encoding module to map 3D point clouds into the feature space. Their combination can fully explore the local geometry and extract features from different neighborhood scales.
21
+
22
+ # Method
23
+
24
+ Given a local point set $P=\{p_i|i\!=\!1,...,N\}$ centralized at a query point $p$, our algorithm aims to estimate the unoriented normal $\mathbf{n}_{p}$ of point $p$. Fig. [2](#fig:net){reference-type="ref" reference="fig:net"} shows an overview of the proposed approach. First, to remove unnecessary degrees of freedom from the input data space and lower the learning difficulty, we normalize each point coordinate by its patch radius and rotate the points into a local coordinate system defined by PCA. Then, the Space Transformation module extracts a *global location code* $G$ for each point (Sec. [3.3](#sec:space){reference-type="ref" reference="sec:space"}). Moreover, local frames are constructed at each point $p_i$ based on spatial coordinates and are used to compute a *condition code* $C$ via a Relative Position Encoding module (Sec. [3.4](#sec:encoding){reference-type="ref" reference="sec:encoding"}). After that, we perform the hyper surface fitting in a high dimensional feature space (Sec. [3.2](#sec:fitting){reference-type="ref" reference="sec:fitting"}). Finally, we recover the 3D normal vectors from the fitting results via an Output Module.
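The preprocessing step just described (radius normalization plus rotation into a PCA-defined local frame) can be sketched as follows. This is a minimal numpy illustration under the assumption that the patch is already centered on the query point; it is not the authors' released implementation.

```python
import numpy as np

def normalize_patch(P):
    """Scale a query-centered patch to unit radius and rotate it into the
    coordinate system defined by its PCA axes (illustrative sketch)."""
    P = P / np.linalg.norm(P, axis=1).max()   # unit patch radius
    # The right singular vectors of the patch give the PCA axes.
    _, _, Vt = np.linalg.svd(P, full_matrices=False)
    return P @ Vt.T                           # express points in the PCA frame

rng = np.random.default_rng(0)
patch = rng.normal(size=(100, 3)) * np.array([3.0, 1.0, 0.2])
P_norm = normalize_patch(patch)
```

Because the rotation is orthogonal, the normalization leaves the patch with unit maximal radius while aligning its dominant directions with the coordinate axes.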
25
+
26
+ <figure id="fig:net" data-latex-placement="t">
27
+ <embed src="images/net.pdf" />
28
+ <figcaption> The architecture of HSurf-Net for point cloud normal estimation. </figcaption>
29
+ </figure>
30
+
31
+ We first briefly review the formulas and mathematical notations of the explicit surface fitting with a polynomial function with a predefined order. A smooth surface can be locally formulated as the graph of a bivariate height function $f(x,y)$ about the $z$-axis that is not in the *tangent space* [@spivak1970comprehensive; @lejemble2021stable]. An $n$-order Taylor expansion of the height function over a surface is given by $$\begin{equation}
32
+ \label{eq:jet1}
33
+ f(x,y) = J_{\beta,n}(x,y) + O(||(x,y)||^{n+1}), ~~{\rm where}~ J_{\beta,n}(x,y) = \sum_{k=0}^{n} \sum_{j=0}^{k} \beta_{k-j,j} x^{k-j} y^j.
34
+ \end{equation}$$ The polynomial function $J_{\beta,n}:\mathbb{R}^2 \!\to\! \mathbb{R}$ is the truncated Taylor expansion called $n$-jet [@cazals2005estimating]. $\beta$ is the $n$-jet coefficient vector and involves $N_n\!=\!(n+1)(n+2)/2$ terms. $O(\cdot)$ denotes the remainder in Taylor's multivariate formula. The predefined order $n$ determines the complexity of the fitted surface.
35
+
36
+ Given a collection of points $P=\{p_i\!=\!(x_i,y_i,z_i)|i\!=\!1,...,N\}$ around its origin $p$ on a sampled smooth surface $S$, with the height function $z\!=\!f(x,y)$ given by Eq. [\[eq:jet1\]](#eq:jet1){reference-type="eqref" reference="eq:jet1"} in the defined coordinate system, we can investigate the $n$-order differential property of the surface $S\!=\!f(x_i,y_i)$ at point $p_i$ by interpolating $S$ using a bivariate $n$-jet $J_{\alpha,n}(x,y)$, such that $$\begin{equation}
37
+ \label{eq:jet2}
38
+ f(x_i,y_i) \simeq J_{\alpha,n}(x_i,y_i), ~~ \forall i = 1,...,N ~,
39
+ \end{equation}$$ where $\alpha$ denotes the sought jet coefficients. Thus, the coefficient vector $\alpha$ is the solution of the $n$-jet surface fitting, denoted as the $N_n$-vector $\alpha\!=\!(\alpha_{0,0},\alpha_{1,0},\alpha_{0,1},...,\alpha_{0,n})^T$. To find the solution, a least squares approximation is used to minimize the sum of squared errors between the jet value and the height function over the point set $P$ $$\begin{equation}
40
+ \label{eq:surface}
41
+ J_{\alpha,n}^{\ast} = \mathop{\rm argmin}_{\alpha} \sum_{i=1}^{N} (f(x_i,y_i) - J_{\alpha,n}(x_i,y_i))^2.
42
+ \end{equation}$$ Then the surface fitting problem of Eq. [\[eq:jet2\]](#eq:jet2){reference-type="eqref" reference="eq:jet2"} is described as finding the least squares solution of a homogeneous system of linear equations. Once the $n$-jet coefficient $\alpha$ is solved, the normal vector at point $p$ on the fitted surface is computed by $$\begin{equation}
43
+ \label{eq:normal}
44
+ \mathbf{n}_{p} = h(\alpha) = (-\alpha_{1,0}, -\alpha_{0,1}, 1) / \sqrt{1 + \alpha_{1,0}^2 + \alpha_{0,1}^2} ~~.
45
+ \end{equation}$$
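As a concrete reference for this classic baseline (the $n$-jet fit, not HSurf-Net itself), the least-squares fit and the normal recovery above can be sketched in a few lines of numpy:

```python
import numpy as np

def fit_njet_normal(points, n=2):
    """Least-squares n-jet fit and normal recovery for a local patch.

    points: (N, 3) array (x_i, y_i, z_i) with the query point at the
    origin and z roughly along the normal direction.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Design matrix with columns x^{k-j} y^j for k = 0..n, j = 0..k,
    # i.e. N_n = (n+1)(n+2)/2 monomial terms of the truncated jet.
    M = np.stack([x ** (k - j) * y ** j
                  for k in range(n + 1) for j in range(k + 1)], axis=1)
    alpha, *_ = np.linalg.lstsq(M, z, rcond=None)
    a10, a01 = alpha[1], alpha[2]  # first-order coefficients
    normal = np.array([-a10, -a01, 1.0]) / np.sqrt(1.0 + a10**2 + a01**2)
    return alpha, normal

# Sanity check on a noise-free plane z = 0.2x - 0.1y.
rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(50, 2))
pts = np.column_stack([xy, 0.2 * xy[:, 0] - 0.1 * xy[:, 1]])
alpha, normal = fit_njet_normal(pts, n=2)
```

On this plane the recovered normal is proportional to $(-0.2, 0.1, 1)$, matching the closed-form expression for $\mathbf{n}_p$ above.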
46
+
47
+ The explicit surface fitting method in Sec. [3.1](#sec:pre){reference-type="ref" reference="sec:pre"} requires a predefined polynomial function of order $n$, and its performance is susceptible to noise and outliers, as shown in Sec. [4.3](#sec:ablation){reference-type="ref" reference="sec:ablation"}. Motivated by rapidly developing data-driven approaches, which excel at adaptively learning a fitting model that describes the pattern of the provided noisy data [@chen2022latent; @li2022learning; @ma2022reconstructing; @wen20223d], we propose to implicitly learn hyper surfaces in the *feature space* using a *Hyper Surface Fitting* technique. We expand the 3D point coordinates $(x,y,z)$ into high dimensional features $(G,C,F)$, where $F\sim \mathcal{F}(G, C)$ and $F \in \mathbb{R}^c$. The new feature-based formulation of the polynomial function $J_{\alpha,n}(x,y)$ in Eq. [\[eq:jet2\]](#eq:jet2){reference-type="eqref" reference="eq:jet2"} is then given by (see supplementary materials for the derivation) $$\begin{equation}
48
+ \mathcal{N}_{\theta,\tau}(G,C) = \sum_{k=0}^{\tau} \sum_{j=0}^{k} \theta_{k-j,j} ~ \mathbf{g}_{k-j} \mathbf{c}_j = \theta ~ [G : C],
49
+ \end{equation}$$ where $[\,:\,]$ denotes a feature fusion operation, such as *concatenation*. $G\in \mathbb{R}^c$ and $C \in \mathbb{R}^c$ are high dimensional features of the 3D point clouds extracted by two different modules, which are introduced in the following two sections. Here, both the parameter $\tau$ and the coefficient $\theta$ are parameters of an MLP-based module, which is designed as a sequence of skip-connected Residual Block units (see Fig. [2](#fig:net){reference-type="ref" reference="fig:net"}). Similar to Eq. [\[eq:surface\]](#eq:surface){reference-type="eqref" reference="eq:surface"}, the bivariate function $\mathcal{N}_{\theta,\tau}(G,C)$ aims to map each feature pair $(G_i, C_i)$ to its true value $\mathcal{F}(G_i, C_i) \in \mathbb{R}^c$ in the feature space $$\begin{equation}
50
+ \mathcal{N}_{\theta,\tau}^{\ast} = \mathop{\rm argmin}_{\theta,\tau} \sum_{i=1}^{N} \| \mathcal{N}_{\theta,\tau}(G_i,C_i) - \mathcal{F}(G_i, C_i) \|^2 .
51
+ \end{equation}$$ In contrast to computing point normals formulaically from the $n$-jet coefficients as in Eq. [\[eq:normal\]](#eq:normal){reference-type="eqref" reference="eq:normal"}, we recover the normal vectors $\mathbf{n}$ from the $c$-dimensional hyper surface using a function $\mathcal{H}:\mathbb{R}^c \!\to\! \mathbb{R}^3$, which is applied to both the fitted surface and the surface being fitted, i.e., the ground-truth surface $$\begin{equation}
52
+ \label{eq:fit_normal}
53
+ \mathcal{N}_{\theta,\tau}^{\ast} = \mathop{\rm argmin}_{\theta,\tau} \sum_{i=1}^{N} \| \mathcal{H}(\mathcal{N}_{\theta,\tau}(G_i,C_i)) - \mathcal{H}(\mathcal{F}(G_i, C_i)) \|^2 .
54
+ \end{equation}$$ Then we get the optimization function about the normals $$\begin{equation}
55
+ \label{eq:loss}
56
+ \mathcal{N}_{\theta,\tau}^{\ast} = \mathop{\rm argmin}_{\theta,\tau} \sum_{i=1}^{N} \| \mathbf{n}_i - \mathbf{\hat{n}}_i \|^2,
57
+ \end{equation}$$ where $\mathbf{n}$ and $\hat{\mathbf{n}}$ are the output unoriented normal and the ground-truth, respectively. Finally, the normal of the query point $p$ is formulated as the weighted maxpooling of its neighborhoods (see *Output Module* in Fig. [2](#fig:net){reference-type="ref" reference="fig:net"}) $$\begin{equation}
58
+ \label{eq:output}
59
+ \mathbf{n}_p = \dot{\mathbf{n}}_p / \|\dot{\mathbf{n}}_p\|, ~ \dot{\mathbf{n}}_p = \mathcal{H}(\mathop{\rm MAX} \{ w_i ~ \mathcal{N}_{\theta,\tau}(G_i,C_i) | i=1,...,N \}),
60
+ \end{equation}$$ where ${\rm MAX}\{\cdot\}$ denotes maxpooling, $w_i\!=\!{\rm sigmoid}(\Psi(\mathcal{N}_{\theta,\tau}(G_i,C_i)))$ is the weight, and $\mathcal{H}$ and $\Psi$ are MLPs. We experimentally find that the $\sin$ distance $\|\hat{\mathbf{n}}_p \times \mathbf{n}_p\|$ is more suitable for measuring the vector difference and guiding the estimated normal to match the ground-truth (see Sec. [4.3](#sec:ablation){reference-type="ref" reference="sec:ablation"}). Our method adaptively learns the optimal hyper surface $\mathcal{N}_{\theta,\tau}^{\ast}$ from high dimensional features, and it is more robust than $n$-jet fitting, which fits a single type of surface to 3D points with a constant order.
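A shape-level sketch of the Output Module (weighted maxpooling followed by the normal head $\mathcal{H}$) is given below. Random linear maps stand in for the trained MLPs $\Psi$ and $\mathcal{H}$, so only the data flow of the equation, not learned behavior, is illustrated.

```python
import numpy as np

rng = np.random.default_rng(0)
N, c = 64, 16                          # neighborhood size, feature dim (illustrative)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Per-point hyper surface features N_{theta,tau}(G_i, C_i); random here.
feats = rng.normal(size=(N, c))

# Random linear stand-ins for the weight head Psi (c -> 1)
# and the normal head H (c -> 3).
W_psi = rng.normal(size=(c, 1)) / np.sqrt(c)
W_h = rng.normal(size=(c, 3)) / np.sqrt(c)

w = sigmoid(feats @ W_psi)             # per-point weights in (0, 1)
pooled = np.max(w * feats, axis=0)     # weighted maxpooling over the patch
n_dot = pooled @ W_h                   # H: feature space -> R^3
n_p = n_dot / np.linalg.norm(n_dot)    # unit-length output normal
```

The final normalization guarantees a unit normal regardless of the magnitude of the pooled feature.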
61
+
62
+ ::: wrapfigure
63
+ R0.4 ![image](images/trans.pdf){width="\\linewidth"}
64
+ :::
65
+
66
+ Previous methods usually use PointNet-like architectures to learn features from point clouds. Since PointNet is inadequate for encoding local structures, we design a feature extraction module called *Space Transformation*, which learns from the local neighborhood as well as small and large patch scales. This module provides a noise-less *global location code* $G$ for the hyper surface fitting, and it consists of a sequence of Local Aggregation Layer units and Global Shift Layer units (see Fig. [2](#fig:net){reference-type="ref" reference="fig:net"}). We repeatedly apply each kind of unit at different levels of feature representation detail. Raw point clouds are often noisy, and the points have position offsets relative to the noise-free points, which leads to deviations in the learned features. The Local Aggregation Layer delivers a smoothly filtered relative feature for each point via cascaded local frame construction and maxpooling operations. Since the relative features only describe local structures, we design a Global Shift Layer to endow the final features with global position information in the feature space by fusing global features. Specifically, we explore the global information from different scales, as shown in Fig. [\[fig:trans\]](#fig:trans){reference-type="ref" reference="fig:trans"}.
67
+
68
+ In the *Local Aggregation Layer*, we first convert the input points to a fixed feature dimension. Then, we group the local neighborhood features at each point via kNN search based on 3D spatial distance, and refine each grouped feature via a chain of Dense Block units [@huang2017densely]. Finally, we compute an order-invariant per-point feature via maxpooling [@qi2017pointnet++]. In the Dense Block, we use skip-connections to leverage features extracted across different layers. In this way, information from different layers is combined via intra-level connections inside the unit, which enables information reuse and thereby improves the learning ability of the network.
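The grouping-plus-pooling skeleton of the Local Aggregation Layer might look as follows. The Dense Block refinement is omitted for brevity; this is an assumed simplification for illustration, not the released code.

```python
import numpy as np

def local_aggregation(points, feats, k=8):
    """kNN grouping in 3D space followed by order-invariant maxpooling."""
    # Pairwise squared distances between all points of the patch.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
    idx = np.argsort(d2, axis=1)[:, :k]    # k nearest neighbors (self included)
    grouped = feats[idx]                   # (N, k, c) grouped neighbor features
    return grouped.max(axis=1)             # (N, c) per-point aggregated feature

rng = np.random.default_rng(0)
pts = rng.normal(size=(32, 3))
f = rng.normal(size=(32, 8))
out = local_aggregation(pts, f, k=4)
```

Because each point is its own nearest neighbor, the pooled feature elementwise dominates the point's own feature, and the result is invariant to any reordering of the neighbors.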
69
+
70
+ In the *Global Shift Layer*, we provide the localization of points in the entire patch feature space by fusing global features extracted from multiple neighborhood scales of the query point $p$. Extracting multi-scale features has become an effective strategy to further boost the normal estimation performance [@boulch2016deep; @ben2019nesti; @guerrero2018pcpnet; @zhou2020normal; @cao2021latent; @zhu2021adafit]. The small scale results in a more accurate description of the details, while the large scale provides more information about the underlying geometries. In summary, our scale-aware Global Shift Layer is formulated as (see the blue part in Fig. [\[fig:trans\]](#fig:trans){reference-type="ref" reference="fig:trans"}) $$\begin{equation}
71
+ G_{s+1,i} = \mathcal{U}_s (\mathcal{V}_s({\rm MAX} \{G_{s,j} | j=1,..,N_{s} \}), ~ G_{s,i}), ~ i=1,...,N_{s+1} ~,
72
+ \end{equation}$$ where $\mathcal{U}, \mathcal{V}$ are MLP layers and $G_{s,i}$ is the per-point feature at scale $s$. ${\rm MAX}\{\cdot\}$ denotes feature maxpooling over all $N_{s}$ neighboring points of $p$, with $N_{s+1} \leqslant N_{s}$.
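The scale-aware update above can be sketched as follows, with single random linear maps standing in for $\mathcal{U}_s$ and $\mathcal{V}_s$ and concatenation as the fusion; the shapes follow the equation, with $N_{s+1} \leqslant N_s$.

```python
import numpy as np

rng = np.random.default_rng(0)
c = 16
N_s, N_s1 = 128, 64                    # nested neighborhood sizes, N_{s+1} <= N_s

V = rng.normal(size=(c, c)) / np.sqrt(c)          # stand-in for MLP V_s
U = rng.normal(size=(2 * c, c)) / np.sqrt(2 * c)  # stand-in for MLP U_s

G_s = rng.normal(size=(N_s, c))        # per-point features at scale s

g = G_s.max(axis=0) @ V                # V_s(MAX{G_{s,j}}): one global vector
# Fuse the global vector with each of the N_{s+1} inner points' features.
G_s1 = np.concatenate([np.tile(g, (N_s1, 1)), G_s[:N_s1]], axis=1) @ U
```

Each of the $N_{s+1}$ inner points thus receives the same pooled global context while keeping its own per-point feature.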
73
+
74
+ Position encoding is important for transformer architectures [@vaswani2017attention; @guo2021pct; @zhao2021point], allowing the self-attention mechanism to capture the sequence order of input tokens. We borrow this idea and use it here to extract a *condition code* $C$ from the local geometry of the point cloud data. In contrast to traditional position encoding, which is manually crafted for language sequences or image grids based on sine and cosine functions, we design a parameterized, learnable encoding scheme that is trained together with the whole model. Given a point $p_i \!\in\! \{p_i|i\!=\!1,...,M\}$ and its neighbors $\{{p_i}^{j}|j\!=\!1,...,K\}$, our learning-based position encoding function $\phi$ using relative coordinates is formulated as $$\begin{equation}
75
+ \label{eq:positionencoding}
76
+ {C_i}^{j} = \phi({p_i}^{j} - p_i, ~\mathcal{E}({p_i}^{j} - p_i)),
77
+ \end{equation}$$ where ${p_i}^{j} - p_i$ denotes the relative position in a local frame, and $M\!=\!N/4$ neighboring points of $p$ are used for encoding. The encoding functions $\mathcal{E}$ and $\phi$ are MLP layers. Experimental results show that our encoding scheme outperforms the traditional position encoding in the normal estimation task.
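In terms of shapes, the encoding above reduces to the following sketch, where single random linear maps stand in for the MLPs $\mathcal{E}$ and $\phi$ and concatenation fuses the two arguments:

```python
import numpy as np

rng = np.random.default_rng(0)
c, K = 16, 8                            # feature dim and neighborhood size

E = rng.normal(size=(3, c)) / np.sqrt(3.0)            # stand-in for MLP E
Phi = rng.normal(size=(3 + c, c)) / np.sqrt(3.0 + c)  # stand-in for MLP phi

p_i = rng.normal(size=3)                # a point of the patch
neighbors = p_i + 0.1 * rng.normal(size=(K, 3))

rel = neighbors - p_i                   # relative positions p_i^j - p_i
# C_i^j = phi(rel, E(rel)): fuse raw offsets with their embedding.
C_ij = np.concatenate([rel, rel @ E], axis=1) @ Phi
```

Feeding both the raw relative offsets and their embedding to $\phi$ lets the encoder keep exact geometric information alongside the learned representation.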
78
+
79
+ To predict normal vectors that match the ground-truth as closely as possible, we minimize the $\sin$ loss between the output unoriented normal $\mathbf{n}_p$ and the ground-truth normal $\hat{\mathbf{n}}_p$ at the query point $p$ $$\begin{equation}
80
+ L_1 = \|\hat{\mathbf{n}}_p \times \mathbf{n}_p\|.
81
+ \end{equation}$$ Meanwhile, we also adopt a weight loss term similar to [@zhang2022geometry] $$\begin{equation}
82
+ L_2 = \frac{1}{N} \sum_{i=1}^{N}(w_{i} - \hat{w}_{i})^2,
83
+ \end{equation}$$ where $w_i$ denotes the point weights generated in the output module, $\hat{w}_{i}={\rm exp}(- ({p}_i \cdot \hat{\mathbf{n}}_p)^2 / \delta^2)$, and $\delta={\rm max}(0.05^2, ~0.3 \sum_{i=1}^{N}({p}_i \cdot \hat{\mathbf{n}}_p)^2 / N)$, where ${p}_i, \hat{\mathbf{n}}_p \in \mathbb{R}^3$. In total, the final loss is given by $$\begin{equation}
84
+ L = \alpha_1 L_1 + \alpha_2 L_2,
85
+ \end{equation}$$ where $\alpha_1=0.1$ and $\alpha_2=1.0$ are weighting factors.
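The loss terms are straightforward to write down; a numpy sketch (with $\alpha_1=0.1$, $\alpha_2=1.0$ as stated above) follows. Note that the $\sin$ loss is zero exactly when the two unit normals are parallel or anti-parallel, the desired behavior for unoriented normals.

```python
import numpy as np

def sin_loss(n_hat, n):
    """L1 = || n_hat x n ||: the sine of the angle between unit normals."""
    return np.linalg.norm(np.cross(n_hat, n))

def weight_loss(w, P, n_hat):
    """L2: supervise predicted weights with soft labels derived from
    point-to-tangent-plane distances, following the formula above."""
    d2 = (P @ n_hat) ** 2
    delta = max(0.05 ** 2, 0.3 * d2.mean())
    w_hat = np.exp(-d2 / delta ** 2)
    return np.mean((w - w_hat) ** 2)

rng = np.random.default_rng(0)
P = rng.normal(size=(64, 3))            # query-centered patch points
w = rng.uniform(size=64)                # predicted per-point weights
n_gt = np.array([0.0, 0.0, 1.0])        # ground-truth unit normal

L = 0.1 * sin_loss(n_gt, n_gt) + 1.0 * weight_loss(w, P, n_gt)
```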
2210.16906/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2022-08-08T17:47:40.463Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36" etag="jHYQqdwdaLe0QnBEE-bF" version="20.2.3"><diagram id="EB11yOZlg-aLmLPRBKia" name="Page-1">7VxLb9s4EP4tezC2PbjQ049j4qTJoYsNkBRtTgtGYiw2tKil6Njur19SpJ6mZTmWImcTIIBFckRSM983HA2pDOzZYn1FQRT8RXyIB5bhrwf2xcCyTNOe8B9Rs5E1I9uVFXOKfCWUV9yi31BVGqp2iXwYlwQZIZihqFzpkTCEHivVAUrJqiz2SHB51AjM1YhGXnHrAQy3xH4gnwWyduIWpK8hmgfpyKahWhYgFVZdxAHwyaowln05sGeUECavFusZxEJ5qV5kR193tGYTozBkTW6wzh7i6Oe9cfX33U3wfTZ6+D6bD1UvzwAv1QOrybJNqgFKlqEPRSfmwD5fBYjB2wh4onXFbc7rArbAqvmRhEwZccKLqndIGVzvnLaZKYOjCJIFZHTDRdQNdooEBSBzqgC0ys1hj5RMUDTFRFUCBYF51neuJX6hFHWA0rZ1BH0OGlUklAVkTkKAL/Pa81yLBi/lMt8IiZTufkHGNkp5YMlIWbNwjdjPwvW96OqLq0oXa9VzUtioQsEaU9EW+meCErzoYRDHyLsLUCgbviKcjRRyJf1M+xOFwliimA+WlNLRpFaEKupNzRkM6ByyGg27ekhQiAFDz+X+deZVt94QxEfOoTQpQ8k23HIXcl7qrhwkXGlgUxCLhEBcM45dHsdSkP3aUD4t5xiVM9DfzZ2RdrRsujFZUg9uPVUC/Uy5L2eD3ScbjC/TaYkQjjXqkhJD44thjCu8GNt7mJGUbiBFXNeQFqZVrWtOIWnUOs/uNOTa+FW4ZlndcM0aTY/i2sHyrvOGuOm8lZUqX3PyZea+SL4dzHoZnVukmduQZeaOKKcxzY6KWFxNmDfCTCmwhJDRv0uSNgzjRLVnXMC0onWiubSdX83lryP+3JkIeHncPBifXw3GFwP30vqHX6NfoiBl1JD8CeSoaQdViGLMo3u4P9gEcSRD/ke0FoCtoiFmlDxlYbvVTjzqGDuCiEI86mrCUberaHT0wfGOOT5uyPFpnxQf90Fx811QPItd+qL4pN8Q2y2SvIPoukWqHktBfRg3qgaB426C2a1x3JoXwbZixOmrOo5L7iuE0+CVLIAMHOw64gBE4pJxXwF/E/Fo51HhNSqrL7xb7c1aCeeicGs2cTI+otwtIcKxfMExKerbcDzuluMxthyPLtU16srxpLm3D8+z/3372AC/oUuovkHucD1teQdTlyH+cA8n4R5so2/3YHUIDnf2qQSNbwoat7ffkuBz9rkpJrjGWdmi0mQzgonAREhCmJgZ40oVwGgujOhxCyVIEfZDfD5nqmGBfB/vimPLjrCCmhbwMKouF+NmeKim4trDg308HowaPAjTJ3Y3ROmGQlEaius7ClCIwvmbhkVLO2aOVU46mtNtWJi2Bhd2Z37C0eCiYo2etxmtKpfc7Xe+sUZn08505taGXgqJ7cVahUgrrd+TxsnyNvfFlv1JnEnCNEBZGqGpZ0nqVHBmlGM4JZEHb21vljRN8VjOkVHecZioz/W1jYmXJffeISrMXjN/pi7110XCrS1XW1mfshiluD7pDnR05mrr82tdutrGtCo7271nMU6cWNO34W51qbFTJpZbIZbmfeBViWXVp48+iNU2sSy7KbEmfRIre54ecPECVBywQfmmMdGrs9X51hN+33kvmOjXT+jySV0twElR9ZvvDbS6INua/ZzXXZAbJGJOKqIxKhHNpG8FvnJW5mU7YG15HP1L/T7X
16afmjT0U0e/ge84v1mNqKvfFHS8+2a9bsLnELi1cJrrFBfGpoDrOVgan3oAfZDH+MBEG5iYvO21vf/gqIWTUO5hRyhfeHzy/7pzuRXuaQ5e6ja0O9u4TDsuQUJYig3cS267PxPTid3oGC3kRW7gu4JdT9KALVhs6/OY3i22+7xSHIFQS2JPqlgQmM4fPvGp8bGN9OdzoikjYfojWCC8kaLXED9DofpCu+Zci2qQg4qWkNAFx0fe9gwoAvyXGxCwJRVfO9fKeSDaJbJSKhaN4hRz0oL5Gg/pkD++J05IbN1JaBSAUHUpj1AbAqBDhTVRncEtbUMcR6EaKT2yIVsY5Z098v7TkRLcSkgkn2MXhlkR6pcnlvXFn+XhCfHuRJ+SHUMFqpLcA/Ce5gmuhxU7Ws5EmtBypurCTa2Z9OpDj1AgzmkNWYC8pxDGanooRAyl+qnKFmxZK1eYTknuERPAqsrxURxhsEnFMeINlvEHWkQ8bgMhq1lVWCMPlK0skgb1K8ue6KANx+FUzzU2O7vkdOY4rB2uvqTRa/Xdw0n79cLCbLZkLnd8an5elxrcZS7rvZtL95FaS+bixfx/WMi8R/6fQOzL/wA=</diagram></mxfile>
2210.16906/main_diagram/main_diagram.pdf ADDED
Binary file (22 kB). View file
 
2210.16906/paper_text/intro_method.md ADDED
@@ -0,0 +1,76 @@
1
+ # Introduction
2
+
3
+ Graph Neural Networks (GNNs) have recently found great success in representation learning for complex networks of interactions, as present in recommendation systems, transaction networks, and social media [@wu2020comprehensive; @zhang2019deep; @qiu2018deepinf]. However, most approaches ignore the dynamic nature of graphs encountered in many real-world domains. Dynamic graphs model complex, time-evolving interactions between entities [@kazemi2020representation; @skarding2021foundations; @xue2022dynamic]. Multiple works have revealed that real-world dynamic graphs possess fine-grained temporal patterns known as temporal motifs [@social_net_model; @temporal_motifs]. For example, a simple pattern in social networks specifies that two users who share many friends are likely to interact in the future. A robust representation learning approach must be able to extract such temporal patterns from an ever-evolving dynamic graph in order to make accurate predictions.
4
+
5
+ Self-Supervised Representation Learning (SSL) has shown promise in achieving competitive performance for different data modalities on multiple predictive tasks [@liu2021self]. Given a large corpus of unlabelled data, SSL postulates that unsupervised pre-training is sufficient to learn robust representations that are predictive for downstream tasks with minimal fine-tuning. However, it is important to specify a pre-training objective function that induces good performance for the downstream tasks. Contrastive SSL methods, despite their early success, rely heavily on negative samples, extensive data augmentation, and large batch sizes [@jing2022understanding; @garrido2022duality]. Non-contrastive methods address these shortcomings, incorporating information theoretic principles through architectural innovations or regularization methods. These closely resemble strategies employed in manifold learning and spectral embedding methods [@balestriero2022globalocal]. The success of such SSL methods on sequential data [@videomae; @ijcai2021-324] suggests that one can learn rich temporal node embeddings from dynamic graphs without direct supervision.
6
+
7
+ SSL methods are attractive for dynamic graphs because it is often costly to generate ground truth labels. Contrastive approaches are very sensitive to the quality of the negative samples, and these are challenging to identify in dynamic graphs due to the temporal evolution of interactions and the lack of semantic labels at the contextual level. As a result, it is desirable to explore non-contrastive techniques, but state-of-the-art models for dynamic graphs suffer from shortcomings that make them hard to adapt to SSL paradigms. First, they heavily rely on chronological training or a full history of interactions to construct predictions [@kumar2019predicting; @xu2020inductive; @rossi2020temporal; @wang2021inductive]. Second, the encoding modules either use inefficient message-passing procedures [@xu2020inductive], memory blocks [@kumar2019predicting; @rossi2020temporal], or expensive random walk-based algorithms [@wang2021inductive]. As a result, while SSL pre-training has been applied successfully for static graphs [@bgrl; @hassani2020MVGRL; @BYOV_graph], there has been limited success in adapting SSL pre-training to dynamic graphs.
8
+
9
+ In this work, we propose DyG2Vec, a novel encoder-decoder model for continuous-time dynamic graphs that benefits from a window-based architecture acting as a regularizer to avoid over-fitting. DyG2Vec is an efficient attention-based graph neural network that performs message-passing across structure and time to output task-agnostic node embeddings. Experimental results on 7 benchmark datasets indicate that DyG2Vec outperforms SoTA baselines on future link prediction and dynamic node classification in terms of both accuracy and speed, particularly in medium- and long-range forecasting. The novelty of our model lies in its compatibility with SoTA SSL approaches. That is, we propose a joint-embedding architecture for DyG2Vec that can benefit from non-contrastive SSL. We adapt two evaluation protocols (linear and semi-supervised probing) to the dynamic graph setting and demonstrate that the proposed SSL pre-training is effective in the low-label regime.
10
+
11
+ # Method
12
+
13
+ A Continuous-Time Dynamic Graph (CTDG) $\mathcal{G}=(\mathcal{V},\mathcal{E},\mathcal{X})$ is a sequence of $E=|\mathcal{E}|$ interactions, where $\mathcal{X}=(X^{V},X^{E})$ is the set of input features containing the *node features* $X^{V}\in\mathbb{R}^{N\times D^{V}}$ and the *edge features* $X^{E}\in\mathbb{R}^{E\times D^{E}}$. $\mathcal{E}=\{e_{1},e_{2},\dots,e_{E}\}$ is the set of interactions. There are $N=|\mathcal{V}|$ nodes, and $D^{V}$and $D^{E}$ are the dimensions of the node and edge feature vectors, respectively. An edge $e_{i}=(u_{i},v_{i},t_{i},m_{i})$ is an interaction between any two nodes $u_{i},v_{i}\in\mathcal{V}$, with $t_{i}\in\mathbb{R}$ being a continuous timestamp, and $m_{i}\in X^{E}$ an edge feature vector. For simplicity, we assume that the edges are undirected and ordered by time (i.e., $t_{i}\leq t_{i+1})$. A temporal sub-graph $\mathcal{G}_{i,j}$ is defined as a set of all the edges in the interval $[t_{i},t_{j}]$, such that $\mathcal{E}_{ij}=\{e_{k}\,\,|\,\,t_{i}\leq t_{k}\leq t_{j}\}$. Any two nodes can interact multiple times throughout the time horizon; therefore, $\mathcal{G}$ is a multi-graph.
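The definitions above can be made concrete with a minimal sketch: a CTDG as a time-ordered edge list and the temporal sub-graph $\mathcal{G}_{i,j}$ as a timestamp-range filter. The `Edge` container and `temporal_subgraph` helper are hypothetical illustrations, not the paper's implementation.

```python
from dataclasses import dataclass

# Hypothetical minimal encoding of a CTDG: an edge is (u, v, t, m),
# and the edge list is assumed sorted by timestamp (t_i <= t_{i+1}).
@dataclass
class Edge:
    u: int          # source node
    v: int          # destination node
    t: float        # continuous timestamp
    m: tuple        # edge feature vector m_i

def temporal_subgraph(edges, t_i, t_j):
    """Return G_{i,j}: all edges e_k with t_i <= t_k <= t_j.

    A linear scan suffices here; a real implementation could use
    bisection on the sorted timestamps instead.
    """
    return [e for e in edges if t_i <= e.t <= t_j]

edges = [Edge(0, 1, 0.5, ()), Edge(1, 2, 1.0, ()),
         Edge(0, 1, 2.5, ()), Edge(2, 3, 3.0, ())]   # a multi-graph: (0,1) repeats
sub = temporal_subgraph(edges, 1.0, 2.5)
```

Note that the same node pair may appear multiple times, which is exactly why $\mathcal{G}$ is a multi-graph.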
14
+
15
+ Our goal is to learn a model $f$ that maps the input graph to a representation space. The model is a pre-trainable encoder-decoder architecture, $f=(g_{\theta},d_{\gamma})$. The encoder $g_{\theta}$ maps a dynamic graph to node embeddings $\boldsymbol{H}\in\mathbb{R}^{N\times D^{H}}$; the decoder $d_{\gamma}$ performs a task-specific prediction given the embeddings. The model is parameterized by the encoder/decoder parameters $(\theta,\gamma)$. More concretely, $$\begin{align}
16
+ \boldsymbol{H} = g_{\theta}(\mathcal{G})\,, \quad \quad \quad
17
+ \boldsymbol{Z} = d_{\gamma}(\boldsymbol{H}; \Bar{\mathcal{E}})\,,
18
+ \end{align}$$ where $\boldsymbol{Z} \in \mathbb{R}^{N\times D^{Y}}$ is the prediction of task-specific labels (e.g., edge prediction or source node classification labels) of all edges in $\Bar{\mathcal{E}}$. The node embeddings $\boldsymbol{H}$ must capture the temporal and structural dynamics of each node such that the future can be accurately predicted from the past, e.g., future edge prediction given past edges. The main distinction of this design is that, unlike previous dynamic graph models [@rossi2020temporal; @xu2020inductive; @wang2021inductive], the encoder must produce embeddings independent of the downstream task specifications. This special trait can allow the model to be compatible with the SSL paradigm where an encoder is pre-trained separately and then fine-tuned together with a task-specific decoder to predict the labels.
19
+
20
+ To this end, we present DyG2Vec, a novel framework that can learn rich node embeddings at any timestamp $t$ independent of the downstream task. DyG2Vec is formulated as a two-stage framework. In the first stage, we use a non-contrastive SSL method to learn the model $f^{SSL}=(g_{\theta},d_{\psi})$ over various sampled dynamic sub-graphs with self-supervision, where $d_{\psi}$ is an SSL decoder used only in the pre-training stage. In the second stage, a task-specific decoder $d_{\gamma}$ is trained on top of the pre-trained encoder $g_{\theta}$ to compute the outputs for the downstream tasks, e.g., future edge prediction or dynamic node classification [@xu2020inductive; @wang2021inductive].
21
+
22
+ We consider two example downstream tasks: future link prediction (FLP), and dynamic node classification (DNC). In each case, there is a prediction horizon of the next $K$ interactions. The test window for FLP starting at time $t_i$ is $\Bar{\mathcal{E}} = \{(u_{j},v_{j},t_{j},m_{j})|j \in [i,i+K]\}$. This is augmented by a set of $K$ negative edges. Each negative edge $(u_{j},v'_{j},t_{j},m_{j})$ differs from its corresponding positive edge only in the destination node, $v'_j \neq v_j$, which is selected at random from all nodes. The FLP task is then binary classification for the test set of $2K$ edges. In the DNC task, a dynamic label is associated with each node that participates in an interaction. We are provided with $\{(u_{j},t_{j})|j \in [i,i+K]\}$, i.e., the source node and interaction time. The goal is to predict the source node labels for the next $K$ interactions. The performance metrics are detailed in Appendix [7.4](#appendix:imp_details){reference-type="ref" reference="appendix:imp_details"}.
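The FLP evaluation set described above can be sketched as follows: each of the $K$ positive test edges is paired with one negative edge that differs only in a randomly drawn destination. The function name and edge-tuple layout are assumptions for illustration.

```python
import random

def flp_eval_set(test_edges, num_nodes, seed=0):
    """Build the 2K-edge binary-classification set for future link
    prediction: each positive edge (u, v, t, m) is paired with a
    negative edge (u, v', t, m) with v' != v drawn uniformly at random."""
    rng = random.Random(seed)
    labeled = []
    for (u, v, t, m) in test_edges:
        labeled.append(((u, v, t, m), 1))       # positive edge, label 1
        v_neg = rng.randrange(num_nodes)
        while v_neg == v:                        # resample until v' != v
            v_neg = rng.randrange(num_nodes)
        labeled.append(((u, v_neg, t, m), 0))    # negative edge, label 0
    return labeled

pairs = flp_eval_set([(0, 1, 0.5, None), (2, 3, 1.0, None)], num_nodes=10)
```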
23
+
24
+ We now introduce our novel dynamic graph learning framework DyG2Vec, which can achieve downstream task-agnostic representation. We first present the SSL pre-training approach with a non-contrastive loss function for dynamic graphs. We then introduce the novel window-based downstream training approach. Finally, we outline the encoder architecture.
25
+
26
+ ![The joint embedding architecture for the non-contrastive SSL Framework. Each slice of the input dynamic graph contains edges arriving at the same continuous timestamp. $B$ is a batch of intervals of size $W$. $\hat{\mathcal{G}}$ is a batch of the corresponding input graphs of each interval.](figures/DyG2Vec_SSL.pdf){#fig:overall_framework width="\\textwidth"}
27
+
28
+ Given the full dynamic graph $\mathcal{G}_{0,E}$, a set of intervals $I$ is generated by dividing the entire time-span $[t_{0},t_{E}]$ into $M=E/S$ intervals with stride $S$ and window length $W$: $$\begin{equation}
29
+ I = \big\{[jS-W,\,jS)\,\,|\,\,j\in\{0,1,2,\dots,M\}\big\}\,.
30
+ \end{equation}$$ Let $B \subset I$ be the mini-batch of intervals selected by the sub-graph sampler $m\,(\mathcal{G},B;W)$ to construct the mini-batch of input graphs: $\hat{\mathcal{G}}=\{\mathcal{G}_{i-W,i}\,\,|\,\,[i-W,i)\in B\}$. The corresponding mini-batch of target graphs is denoted by $\Bar{\mathcal{G}}=\{\mathcal{G}_{i,i+K}\,\,|\,\,[i-W,i)\in B\}$. In principle, $\mathcal{G}_{i-W,i} \in \hat{\mathcal{G}}$ is an input (history) graph used to predict the target labels of the corresponding target (future) graph $\mathcal{G}_{i,i+K} \in \bar{\mathcal{G}}$. The parameter $W$ controls the size of the history while $K$ controls how far the model is predicting into the future. In practice, we set $S = K$; hence, each edge is only predicted once. Since $\bar{\mathcal{G}}$ is only available for training in the downstream task, SSL pre-training only operates on $\hat{\mathcal{G}}$, as seen in Figure [2](#fig:downstream_training){reference-type="ref" reference="fig:downstream_training"}.
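The interval construction and sub-graph sampler above can be sketched directly over edge indices. Clipping intervals that would start before index 0 is a choice made for this sketch; function names are illustrative.

```python
def make_intervals(num_edges, window, stride):
    """Generate I = {[jS - W, jS) : j = 0..M} with M = E / S, over edge
    indices. Starts before index 0 are clipped to 0 (a sketch-level choice)."""
    M = num_edges // stride
    return [(max(0, j * stride - window), j * stride) for j in range(M + 1)]

def sample_batch(edges, intervals, horizon):
    """For each interval [i-W, i), pair the input (history) graph
    G_{i-W,i} with the target (future) graph G_{i,i+K}."""
    return [(edges[a:b], edges[b:b + horizon]) for a, b in intervals]

edges = list(range(12))                       # stand-in: edge indices only
iv = make_intervals(len(edges), window=4, stride=3)
batch = sample_batch(edges, iv, horizon=3)    # S = K = 3: each edge predicted once
```

With $S=K$ as in the text, the target slices tile the edge sequence, so each edge appears in exactly one target graph.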
31
+
32
+ We formulate a joint-embedding architecture [@siamese_ssl] for DyG2Vec in which two views of a mini-batch of sub-graphs are generated through random transformations. The transformations are randomly sampled from a distribution defined by a distortion pipeline. The encoder maps the views to node embeddings which are processed by the predictor to generate node representations. We minimize an SSL objective (Eq. [\[vicreg_loss\]](#vicreg_loss){reference-type="ref" reference="vicreg_loss"}, described below) to optimize the model parameters end-to-end in the pre-training stage. See Figure [1](#fig:overall_framework){reference-type="ref" reference="fig:overall_framework"} for an overall design of the SSL framework.
33
+
34
+ **Views**: The temporal distortion module generates two views of the input graphs $\hat{\mathcal{G}}^{'}=t^{'}(\hat{\mathcal{G}})$ and $\hat{\mathcal{G}}^{''}=t^{''}(\hat{\mathcal{G}})$ where the transformations $t^{'}$ and $t^{''}$ are sampled from a distribution $\mathcal{T}$ over a pre-defined set of candidate graph transformations. In this work, we use edge dropout and edge feature masking [@bgrl] in the transformation pipeline. See Appendix [7.4](#appendix:imp_details){reference-type="ref" reference="appendix:imp_details"} for more details.
35
+
36
+ **Embedding**: The encoding model $g_{\theta}$ is an attention-based message-passing (AMP) neural network that produces node embeddings $\boldsymbol{H}^{'}$ and $\boldsymbol{H}^{''}$ for the views $\hat{\mathcal{G}}^{'}$ and $\hat{\mathcal{G}}^{''}$ of the input graphs $\hat{\mathcal{G}}_{i,j}$. We elaborate on the details of the encoder in Sec. [4.3](#sec:encoder_arch){reference-type="ref" reference="sec:encoder_arch"}.
37
+
38
+ <figure id="fig:downstream_training" data-latex-placement="t!">
39
+ <embed src="figures/dyg2vec_window.pdf" />
40
+ <figcaption>DyG2Vec Window Framework. Every slice of the dynamic graph <span class="math inline">𝒢</span> contains edges that arrived at the same continuous timestamp. The blue interval represents the history graph <span class="math inline">𝒢<sub><em>i</em> − <em>W</em>, <em>i</em></sub></span> that is encoded to make a prediction on the next <span class="math inline"><em>K</em></span> edges (yellow interval). <span class="math inline"><em>B</em></span> is a batch of intervals of size <span class="math inline"><em>W</em></span> edges. <span class="math inline">$\hat{\mathcal{G}}$</span> is a batch of input graphs. <span class="math inline">$\Bar{\mathcal{G}}$</span> is a batch of target graphs that is only used in the downstream stage.</figcaption>
41
+ </figure>
42
+
43
+ **Prediction**: The decoding head $d_{\psi}$ for our self-supervised learning design consists of a node-level predictor $p_{\phi}$ that outputs the final representations $\boldsymbol{Z}^{'}$ and $\boldsymbol{Z}^{''}$, where $\boldsymbol{Z}=p_{\phi}(\boldsymbol{H})$.
44
+
45
+ **SSL Objective**: In order to learn useful representations, we minimize the VICReg regularization-based SSL loss function from [@bardes2022vicreg]: $$\begin{equation}
46
+ \label{vicreg_loss}
47
+ \mathcal{L}^{SSL}=l(\boldsymbol{Z}^{'},\boldsymbol{Z}^{''})=\lambda s(\boldsymbol{Z}^{'},\boldsymbol{Z}^{''})+\mu[v(\boldsymbol{Z}^{'})+v(\boldsymbol{Z}^{''})]+\nu[c(\boldsymbol{Z}^{'})+c(\boldsymbol{Z}^{''})]\,.
48
\end{equation}$$ In this loss function, the weights $\lambda$, $\mu$, and $\nu$ control the emphasis placed on each of the three regularization terms. The *invariance* term $s$ encourages representations of the two views to be similar. The *variance* term $v$ is included to prevent the well-known collapse problem [@jing2022understanding]. The *covariance* term $c$ promotes maximization of the information content of the representations. More details and complete expressions for $s$, $v$ and $c$ are provided in Appendix [7.3](#sec:vicreg){reference-type="ref" reference="sec:vicreg"}.
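A sketch of the VICReg objective with its three terms is given below. The loss weights shown are the defaults from Bardes et al., not necessarily the values used in this paper, and the exact term definitions follow the VICReg paper rather than this paper's appendix.

```python
import numpy as np

def vicreg_loss(z1, z2, lam=25.0, mu=25.0, nu=1.0, eps=1e-4):
    """VICReg-style loss on two (N, D) batches of paired representations."""
    n, d = z1.shape
    # invariance s: mean squared distance between paired representations
    s = np.mean(np.sum((z1 - z2) ** 2, axis=1))

    def v(z):
        # variance term: hinge pushing each dimension's std above 1
        std = np.sqrt(z.var(axis=0) + eps)
        return np.mean(np.maximum(0.0, 1.0 - std))

    def c(z):
        # covariance term: squared off-diagonal covariance entries
        zc = z - z.mean(axis=0)
        cov = (zc.T @ zc) / (n - 1)
        off = cov - np.diag(np.diag(cov))
        return np.sum(off ** 2) / d

    return lam * s + mu * (v(z1) + v(z2)) + nu * (c(z1) + c(z2))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 4))
loss_same = vicreg_loss(z, z)   # invariance term vanishes for identical views
```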
49
+
50
+ Unlike previous regularization-based SSL approaches [@simclr; @bardes2022vicreg] in computer vision, we do not use a projector network because the embedding dimensions are relatively small in the graph domain. The full pre-training procedure is illustrated in Figure [1](#fig:overall_framework){reference-type="ref" reference="fig:overall_framework"}. Following the pre-training stage, we replace the SSL decoder with a task-specific downstream decoder $d_{\gamma}$ that is trained on top of the *frozen* pre-trained encoder.
51
+
52
+ In the downstream training stage, the DyG2Vec model $f=(g_{\theta}, d_{\gamma})$ consists of the SSL pre-trained encoder $g_{\theta}$ and a task-specific decoder $d_{\gamma}$, which is trained using a similar window-based training strategy. The model is trained to make predictions for the downstream task (e.g., link prediction or node classification) given the input and target graphs $\hat{\mathcal{G}}$ and $\bar{\mathcal{G}}$ as follows: $\boldsymbol{H} = g_{\theta}(\hat{\mathcal{G}})$ is the set of node embeddings returned by the encoder, and $\boldsymbol{Z}=d_{\gamma}(\boldsymbol{H}; \Bar{\mathcal{E}})$ is the prediction output of the decoder. Here $\Bar{\mathcal{E}}$ is a set of (partial) edges for which predictions are requested from the decoder. The model parameters are optimized by training with a loss function $\mathcal{L}_D(\boldsymbol{Z}, \boldsymbol{O})$, where $\mathcal{L}_D$ is defined depending on the downstream task and $\boldsymbol{O}$ contains task-specific labels (see Section [3](#sec:problem_form){reference-type="ref" reference="sec:problem_form"}).
53
+
54
+ The window-based training strategy has several major advantages. First, the window acts as a regularizer by providing a natural inductive bias towards recent edges, which are often more predictive of the future. It also avoids costly time-based neighborhood sampling techniques [@wang2021inductive]. Second, relying on a fixed window size for message-passing allows for constant memory and computational complexity, which is well-suited to the practical *online streaming* data scenario. Third, unlike previous works [@xu2020inductive; @wang2021inductive], which generate separate node embeddings for each target edge, a generic encoder allows us to use the same set of embeddings for any prediction. This dramatically reduces the training/inference overhead. Another advantage of this design is that it allows the model to forecast unseen edges relatively far into the future, in contrast to existing works [@xu2020inductive; @rossi2020temporal] that focus on predicting the next occurring edge.
55
+
56
+ Our encoder combines a self-attention mechanism for message-passing with the Time2Vec module [@time2vec] that provides relative time encoding. We also introduce a novel temporal edge encoding that efficiently captures the temporal structural relationship between nodes.
57
+
58
+ **Temporal Attention Embedding**: Given a dynamic graph $\mathcal{G}$, the encoder $g_{\theta}$ computes the embedding $\boldsymbol{h}_{i}^{L}$ of node $i$ through a series of $L$ multi-head attention (MHA) layers [@vaswani2017] that aggregate messages from its $L$-hop neighborhood [@xu2020inductive; @gat].
59
+
60
+ Given a node embedding $\boldsymbol{h}_{i}^{l-1}$ at layer $l{-}1$, we uniformly sample $n$ 1-hop neighborhood interactions of node $i$, $\mathcal{N}(i) = \{e_p, \dots, e_k\} \subseteq \mathcal{E}$. The embedding $\mathbf{h}_{i}^{l}$ at layer $l$ is calculated by: $$\begin{alignat}
61
+ {1}
62
+ \mathbf{h}_{i}^{l} & =\mathbf{W}_{1}\mathbf{h}_{i}^{l-1}+\texttt{MHA}^{l}(\mathbf{q}^{l},\mathbf{K}^{l},\mathbf{V}^{l}),\\
63
+ \mathbf{q}^{l} & =\mathbf{h}_{i}^{l-1},\\
64
+ \mathbf{K}^{l} & =\mathbf{V}^{l}=[\Phi_{p}(t_p),\dots,\Phi_{k}(t_k)]\,.
65
+ \end{alignat}$$ Here, $\mathbf{W}_{1}$ is a learnable mapping matrix, $\texttt{MHA}^{l}(\cdot)$ is a multi-head dot-product attention layer, and $\Phi_{p}(t_{p})$ represents the edge feature vector of edge $e_p = (u_p, v_p, t_p, \boldsymbol{m}_p) \in \mathcal{N}(i)$ at time $t_p$: $$\begin{alignat}
66
+ {2}
67
+ & \Phi_{p}(t_p) & & =[\boldsymbol{h}_{u_p}^{l-1}\,\,||\,\,\boldsymbol{f}_{p}(t_p)\,\,||\,\,\boldsymbol{m}_p],\\
68
+ & \boldsymbol{f}_{p}(t_p) & & =\phi(\Bar{t}_{i}- t_p) + \Theta_{p}(t_p)\,,\\
69
+ & \Bar{t}_{i} & & =\max\left\{t_l \,\,|\,\, e_l \in\mathcal{N}(v_p)\right\}\,,
70
+ \end{alignat}$$ where $||$ denotes concatenation and $\phi(\cdot)$ is a learnable Time2Vec module that helps the model be aware of the relative timespan between a sampled interaction and the most recent interaction of node $v_p$ in the input graph. $\Theta_{p}(\cdot)$ is a temporal edge encoding function, described in more detail below. In contrast to TGAT's recursive message passing procedure [@xu2020inductive], the message passing in our encoder is 'flat': at every iteration, the same set of node embeddings is used to propagate messages to neighbors. Our encoder performs message passing once to generate a set of node embeddings $\boldsymbol{H}$ used for all target predictions on $\bar{\mathcal{G}}$.
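In the same spirit as the update above, the sketch below shows a single-head dot-product attention step where the query is the node's own embedding and the keys/values are the per-edge feature vectors $\Phi_{p}(t_p)$. All weight matrices are random stand-ins; the actual encoder uses learned multi-head attention with its internal projections.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())   # subtract max for numerical stability
    return e / e.sum()

def attention_aggregate(h_i, neighbor_feats, W1, Wq, Wk, Wv):
    """One single-head attention update: skip connection W1 h_i plus an
    attention-weighted sum over the sampled neighbors' edge features."""
    q = Wq @ h_i                               # query from the node itself, (d,)
    K = neighbor_feats @ Wk.T                  # keys   from Phi_p(t_p), (n, d)
    V = neighbor_feats @ Wv.T                  # values from Phi_p(t_p), (n, d)
    attn = softmax(K @ q / np.sqrt(len(q)))    # attention weights, (n,)
    return W1 @ h_i + attn @ V                 # updated embedding h_i^l, (d,)

rng = np.random.default_rng(0)
d, d_phi, n = 4, 6, 3                          # embedding dim, edge-feature dim, #neighbors
h = attention_aggregate(rng.normal(size=d), rng.normal(size=(n, d_phi)),
                        rng.normal(size=(d, d)), rng.normal(size=(d, d)),
                        rng.normal(size=(d, d_phi)), rng.normal(size=(d, d_phi)))
```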
71
+
72
+ **Temporal Edge Encoding**: Dynamic graphs often follow evolutionary patterns that reflect how nodes interact over time [@Kovanen_2011]. For example, in social networks, two people who share many friends are likely to interact in the future. Therefore, we incorporate two simple yet effective temporal encoding methods that provide inductive biases to capture common structural and temporal evolutionary behaviour of dynamic graphs. The temporal edge encoding function is then: $$\begin{equation}
73
+ \Theta_{p}(t_p)=\mathbf{W}_{2}[z_p(t_p) || c_p(t_p)]\,,
74
+ \end{equation}$$ where we incorporate (i) *Temporal Degree Centrality* $z_p(t_{p})$: the current degrees of nodes $u_p$ and $v_p$ at time $t_p$; and (ii) *Common Neighbors* $c_p(t_p)$: the number of common 1-hop neighbors between nodes $u_p$ and $v_p$ at time $t_p$.
75
+
76
+ By using the degree centrality as an edge feature, the model is able to learn any bias towards more frequent interactions with high-degree nodes. The number of common neighbors helps capture temporal motifs, and it is known to often have a strong positive correlation with the likelihood of a future interaction [@cn_lp].
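The two statistics feeding $\Theta_{p}$ can be computed from the history graph alone, as in this sketch; a real implementation would maintain degrees and neighbor sets incrementally rather than rescanning the edge list.

```python
def temporal_edge_stats(edges, u, v, t):
    """Return ((deg_u, deg_v), common): the temporal degrees of u and v
    and their number of common 1-hop neighbors, using only edges with
    timestamp strictly before t."""
    nbrs, deg = {}, {}
    for (a, b, tp, _m) in edges:
        if tp >= t:
            continue                      # only the past counts
        nbrs.setdefault(a, set()).add(b)
        nbrs.setdefault(b, set()).add(a)
        deg[a] = deg.get(a, 0) + 1
        deg[b] = deg.get(b, 0) + 1
    common = len(nbrs.get(u, set()) & nbrs.get(v, set()))
    return (deg.get(u, 0), deg.get(v, 0)), common

edges = [(0, 1, 0.1, None), (1, 2, 0.2, None), (0, 2, 0.3, None)]
z, c = temporal_edge_stats(edges, 0, 1, t=0.4)   # node 2 is a common neighbor
```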
2301.00061/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
2301.00061/main_diagram/main_diagram.pdf ADDED
Binary file (50.7 kB). View file
 
2301.00061/paper_text/intro_method.md ADDED
@@ -0,0 +1,238 @@
1
+ # Introduction
2
+
3
+ Cluster analysis is the task of grouping similar samples into the same cluster while separating dissimilar samples into different clusters. It is a fundamental unsupervised machine learning task that explores the structure of a dataset without the need for annotated class labels. Clustering plays a vital role in various fields, such as data summarization [@kleindessner2019fair; @hesabi2015data], customer grouping [@aggarwal2004method], and facility location determination [@hansen2009solving].
4
+
5
+ There are several typical cluster models, including connectivity-based models, centroid-based models, distribution-based models, density-based models, etc. This work focuses on one of the fundamental centroid-based clustering models, the $K$-center problem, whose goal is to minimize the maximum within-cluster distance [@kaufman2009finding]. Specifically, given a dataset with $S$ samples and the desired number of clusters $K$, the $K$-center problem aims to select $K$ samples from the dataset as centers so as to minimize the maximum distance from each sample to its closest center. The $K$-center problem is a combinatorial optimization problem that has been widely studied in theoretical computer science [@lim_k-center_2005]. Moreover, it has been intensively explored as a symmetric and uncapacitated case of the $p$-center facility location problem in operations research and management science [@garcia-diaz_approximation_2019], where the number of facilities corresponds to $K$ in a standard $K$-center problem.
6
+
7
+ Formally, provided a $K$, the objective function of $K$-center problem can be formulated as follows: $$\begin{align}
8
+ \label{eq:obj}
9
+ \min \limits_{\mu \in X} \max \limits_{s\in \mathcal{S}} \min \limits_{k\in \mathcal{K}} ||x_s-\mu^k||_2^2
10
+ \end{align}$$ where $X=\{x_1,\ldots, x_S \}$ is the dataset with $S$ samples and $A$ attributes, in which $x_s=[x_{s,1}, ..., x_{s,A}]\in \mathbb{R}^{A}$ is the $s$th sample, $x_{s,a}$ is the $a$th attribute of the $s$th sample, and $s\in\mathcal{S}:=\{1,\cdots,S\}$ is the index set of samples. As to the variables related to clusters, $k \in\mathcal{K}:=\{1,\cdots,K\}$ is the index set of clusters, $\mu:=\{\mu^1,\cdots,\mu^K\}$ represents the center set of clusters, and $\mu^k=[\mu^k_{1}, ..., \mu^k_{A}]\in \mathbb{R}^{A}$ is the center of the $k$th cluster. Here, $\mu$ are the variables to be determined in this problem. We use $\mu \in X$ to denote the "centers on samples" constraint, in which each cluster's center is restricted to the existing samples.
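The objective above (min over centers of the max over samples of the squared distance to the closest center) can be evaluated, and solved exactly on tiny instances, by brute force over sample subsets. Function names are illustrative; enumeration is only to make the objective concrete, not a practical algorithm.

```python
from itertools import combinations

def kcenter_cost(X, centers):
    """Objective of the K-center problem: max over samples of the squared
    Euclidean distance to the closest center."""
    def sqdist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return max(min(sqdist(x, mu) for mu in centers) for x in X)

def kcenter_bruteforce(X, K):
    """Exact solution under the 'centers on samples' constraint by
    enumerating all K-subsets -- viable only for tiny instances."""
    best = min(combinations(X, K), key=lambda C: kcenter_cost(X, C))
    return best, kcenter_cost(X, best)

X = [(0.0,), (1.0,), (5.0,), (6.0,)]
centers, cost = kcenter_bruteforce(X, K=2)   # splits {0, 1} and {5, 6}
```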
11
+
12
+ # Method
13
+
14
+ To introduce the lower bounding method in the branch and bound scheme, we first propose a two-stage optimization form of the $K$-center Problem [\[eq:obj\]](#eq:obj){reference-type="ref" reference="eq:obj"}. The first-stage problem is as follows: $$\begin{equation}
15
+ \label{eqn:two}
16
+ z = \min_{\mu \in X\cap M_0} \max_{s\in \mathcal{S} }Q_s(\mu).
17
+ \end{equation}$$ where the center set $\mu$ is the so-called first-stage variable, $Q_s(\mu)$ is the optimal value of the second-stage optimization problem: $$\begin{equation}
18
+ \begin{aligned}
19
+ Q_s(\mu) = \min \limits_{k\in \mathcal{K}}||x_s-\mu^k||_2^2
20
+ \end{aligned}
21
+ \end{equation}$$ We denote a closed set $M_0:=\{\mu\,\mid\,\underbar{$\mu$} \leq \mu \leq \bar{\mu}\}$ as the region of centers, where $\underbar{$\mu$}$ is the lower bound of centers and $\bar{\mu}$ is the upper bound, i.e., $\underbar{$\mu$}^k_a=\min \limits_{s\in \mathcal{S}} x_{s,a}$, $\bar{\mu}^k_a=\max \limits_{s\in \mathcal{S}} x_{s,a}$, $\forall k \in \mathcal{K}$, $a\in\{1,\cdots, A\}$. Here, the constraint $\mu \in M_0$ is introduced to simplify the discussion of the BB scheme. Since $M_0$ can be inferred directly from data, it does not affect the optimal solution of Problem [\[eq:obj\]](#eq:obj){reference-type="ref" reference="eq:obj"}. The constraint $\mu \in X\cap M_0$ means the center of each cluster is selected from the samples belonging to the intersection of the region $M_0$ and the dataset $X$.
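The root box $M_0$ is just the per-attribute min/max of the data, shared by every cluster $k$, as in this short sketch (the function name is an assumption):

```python
def initial_center_box(X):
    """Root box M_0: per-attribute bounds inferred from the data,
    lower[a] = min_s x_{s,a} and upper[a] = max_s x_{s,a}."""
    A = len(X[0])
    lower = [min(x[a] for x in X) for a in range(A)]
    upper = [max(x[a] for x in X) for a in range(A)]
    return lower, upper

X = [(0.0, 2.0), (1.0, -1.0), (0.5, 3.0)]
lo, hi = initial_center_box(X)   # every sample lies inside [lo, hi]
```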
22
+
23
+ To introduce the bounds tightening and sample reduction methods, we propose a MINLP formulation of the $K$-center Problem [\[eq:obj\]](#eq:obj){reference-type="ref" reference="eq:obj"}: $$\label{eqn:overall}
24
+ \begin{align}
25
+ \min \limits_{\mu, d, b, \lambda} &\; d_{*} \label{eqn:overall:obj} \\
26
+ \rm{s.t.} \;\; & d_s^k \geq ||x_s-\mu^k||_2^2 \label{eqn:overall:dis} \\
27
+ & -N_1(1-b_s^k) \leq d_s^*-d_s^k \leq 0 \label{eqn:overall:bigM:b} \\
28
+ & d_*\geq d_s^* \label{eqn:overall:d}\\
29
+ & \sum_{k \in\mathcal{K}} b_s^k = 1 \label{eqn:overall:unique:b} \\
30
+ & b_s^k\in\{0, 1\} \\
31
+ & -N_2(1-\lambda_s^k)\leq x_s-\mu^k \leq N_2(1-\lambda_s^k) \label{eqn:overall:bigM:lambda}\\
32
+ & \sum_{s \in\mathcal{S}} \lambda_s^k = 1 \label{eqn:overall:unique:lambda} \\
33
+ & \lambda_s^k \in \{0,1\}\\
34
+ & b_s^k\geq \lambda_s^k \label{eqn:overall:logic} \\
35
+ & s\in\mathcal{S}, k \in\mathcal{K}
36
+ \end{align}$$ where $d_s^k$ represents the distance between sample $x_s$ and center $\mu^k$, $d_s^*$ denotes the distance between $x_s$ and the center of its cluster, and $N_1$ and $N_2$ are both arbitrarily large values. $b_s^k$ and $\lambda_s^k$ are two binary variables: $b_s^k$ is equal to 1 if sample $x_s$ belongs to the $k$th cluster, and 0 otherwise; $\lambda_s^k$ is equal to 1 if $x_s$ is the center $\mu^{k}$ of the $k$th cluster, and 0 otherwise.
37
+
38
+ Constraint [\[eqn:overall:bigM:b\]](#eqn:overall:bigM:b){reference-type="ref" reference="eqn:overall:bigM:b"} is a big-M formulation and ensures that $d_s^*=d_s^k$ if $b_s^k=1$ and $d_s^*\leq d_s^k$ otherwise. Constraint [\[eqn:overall:unique:b\]](#eqn:overall:unique:b){reference-type="ref" reference="eqn:overall:unique:b"} guarantees that sample $x_s$ belongs to exactly one cluster. We also adopt Constraints [\[eqn:overall:bigM:lambda\]](#eqn:overall:bigM:lambda){reference-type="ref" reference="eqn:overall:bigM:lambda"}, [\[eqn:overall:unique:lambda\]](#eqn:overall:unique:lambda){reference-type="ref" reference="eqn:overall:unique:lambda"} and [\[eqn:overall:logic\]](#eqn:overall:logic){reference-type="ref" reference="eqn:overall:logic"} to represent the "centers on samples" constraint, $\mu \in X$. Specifically, Constraint [\[eqn:overall:bigM:lambda\]](#eqn:overall:bigM:lambda){reference-type="ref" reference="eqn:overall:bigM:lambda"} uses a big-M formula to make sure that $\mu^k =x_s$ if $\lambda_s^k=1$, and Constraint [\[eqn:overall:unique:lambda\]](#eqn:overall:unique:lambda){reference-type="ref" reference="eqn:overall:unique:lambda"} confirms that each center can only be selected on one sample. Constraint [\[eqn:overall:logic\]](#eqn:overall:logic){reference-type="ref" reference="eqn:overall:logic"} ensures that if $x_s$ is the center of the $k$th cluster, then it is assigned to the $k$th cluster. It should be noted that the global optimizer CPLEX also relies on this formulation to solve the $K$-center problem.
39
+
40
+ This section introduces a tailored reduced-space branch and bound algorithm for the $K$-center problem with lower and upper bounding methods.
41
+
42
+ In this section, we adopt the two-stage formulation and derive a closed-form solution to obtain the lower bound of the $K$-center Problem [\[eq:obj\]](#eq:obj){reference-type="ref" reference="eq:obj"}.
43
+
44
+ At each node in the BB procedure, we deal with a subset of $M_0$, which is denoted as $M$, and solve the following problem concerning $M$: $$\begin{equation}
45
+ \label{eqn:clt_ndpb}
46
+ z(M) = \min_{ \mu \in X\cap M}\max_{s\in\mathcal{S}} Q_s(\mu)
47
+ \end{equation}$$
48
+
49
+ This problem can be equivalently reformulated as the following problem by duplicating $\mu$ across samples and enforcing them to be equal: $$\label{eqn:clt_lift_ndpb}
50
+ \begin{align}
51
+ \min_{\mu_s\in X\cap M} & \max_{s\in\mathcal{S}} Q_s(\mu_s) \\
52
+ \textrm{s.t.} \quad & \mu_s=\mu_{s+1}, s\in \{1,\cdots,S-1\} \label{eqn:non-anticipativity}
53
+ \end{align}$$
54
+
55
+ We call constraints [\[eqn:non-anticipativity\]](#eqn:non-anticipativity){reference-type="ref" reference="eqn:non-anticipativity"} the non-anticipativity constraints. By removing the "centers on samples" constraint $\mu \in X$ and the non-anticipativity constraints [\[eqn:non-anticipativity\]](#eqn:non-anticipativity){reference-type="ref" reference="eqn:non-anticipativity"}, we attain a lower bound formulation as follows: $$\begin{equation}
56
+ \label{eqn:lb_pb_minmax}
57
+ \beta(M):= \min_{\mu_s\in M} \max_{s\in\mathcal{S}} Q_s(\mu_s).
58
+ \end{equation}$$
59
+
60
With these constraints relaxed, the feasible region of Problem [\[eqn:lb_pb_minmax\]](#eqn:lb_pb_minmax){reference-type="ref" reference="eqn:lb_pb_minmax"} is a superset of Problem [\[eqn:clt_lift_ndpb\]](#eqn:clt_lift_ndpb){reference-type="ref" reference="eqn:clt_lift_ndpb"}'s feasible region. Therefore, it follows that $\beta(M)\leq z(M)$.

In Problem [\[eqn:lb_pb_minmax\]](#eqn:lb_pb_minmax){reference-type="ref" reference="eqn:lb_pb_minmax"}, since the copies $\mu_s$ are independent across samples, the min and max operators can be interchanged: $$\begin{equation}
\label{eqn:lb_pb}
\beta(M)= \max_{s\in\mathcal{S}}\min_{\mu_s\in M} Q_s(\mu_s).
\end{equation}$$

Clearly, Problem [\[eqn:lb_pb\]](#eqn:lb_pb){reference-type="ref" reference="eqn:lb_pb"} can be decomposed into $S$ subproblems with $\beta(M)=\max \limits_{s\in \mathcal{S}}\beta_s(M)$: $$\begin{equation}
\label{eqn:lb_sub}
\beta_s(M) = \min_{\mu\in M} Q_s(\mu).
\end{equation}$$

Denote the region of the $k$th cluster's center as $M^k:=\{\mu^k: \underbar{$\mu$}^k \leq \mu^k\leq \bar{\mu}^k\}$, where $\underbar{$\mu$}^k$ and $\bar{\mu}^k$ are the lower and upper bounds of $\mu^k$, respectively. Since $Q_s(\mu) =\min \limits_{k\in \mathcal{K}}||x_s-\mu^k||_2^2$, we have $$\begin{equation}
\label{eqn:lb_sub2}
\beta_s(M) = \min \limits_{k\in \mathcal{K}} \min_{\mu^{k}\in M^k} ||x_s-\mu^{k}||_2^2,
\end{equation}$$ which can be further decomposed into $K$ subsubproblems with $\beta_s(M) {=} \min\limits_{k\in \mathcal{K}} \beta_{s}^k(M^k)$: $$\begin{equation}
\label{eqn:lb_subsub}
\beta_{s}^k(M^k) =\min_{\mu^{k}\in M^k} ||x_s-\mu^{k}||_2^2.
\end{equation}$$

The analytical solution to Problem [\[eqn:lb_subsub\]](#eqn:lb_subsub){reference-type="ref" reference="eqn:lb_subsub"} is ${\mu_{a}^k}^* = \text{mid}\{\underbar{$\mu$}^k_a, \ x_{s,a}, \ \bar{\mu}^k_a\}, \forall a\in \{1,\cdots,A\}$. Consequently, the closed-form solution to Problem [\[eqn:lb_pb_minmax\]](#eqn:lb_pb_minmax){reference-type="ref" reference="eqn:lb_pb_minmax"} can be easily computed by the max-min operation over all the samples.

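Concretely, the closed-form lower bound amounts to clamping each attribute of $x_s$ into $[\underbar{$\mu$}^k_a, \bar{\mu}^k_a]$ and then taking the max-min over samples and clusters. A minimal pure-Python sketch (function and variable names are ours, not from the paper's implementation):

```python
def beta_sk(x_s, lo_k, hi_k):
    """Closed-form solution of min_{mu in M^k} ||x_s - mu||^2:
    clamp each attribute of x_s into [lo, hi] (the 'mid' rule)."""
    return sum(
        (x - min(max(x, lo), hi)) ** 2
        for x, lo, hi in zip(x_s, lo_k, hi_k)
    )

def beta(X, lo, hi):
    """Lower bound beta(M): max over samples of min over clusters."""
    K = len(lo)
    return max(
        min(beta_sk(x_s, lo[k], hi[k]) for k in range(K))
        for x_s in X
    )
```

The cost is a single pass over all sample-cluster pairs, which is what makes this bound cheap enough to evaluate at every BB node.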
At each node in the BB procedure, an upper bound of Problem [\[eqn:clt_ndpb\]](#eqn:clt_ndpb){reference-type="ref" reference="eqn:clt_ndpb"} can be obtained by fixing the centers at a candidate feasible solution $\hat{\mu}\in X\cap M$. In this way, we can compute the upper bound based on the following equation: $$\begin{equation}
\label{eqn:ub_gp_ct}
\alpha(M)=\max \limits_{s\in \mathcal{S}}\min \limits_{k\in \mathcal{K}}||x_s-\hat{\mu}^k||_2^2
\end{equation}$$

Since $\hat{\mu}$ is a feasible solution, we have $z(M)\leq \alpha(M)$, $\forall \hat{\mu}\in X\cap M$. In our implementation, we use two methods to obtain candidate feasible solutions. At the root node, we use a heuristic method called Farthest First Traversal [@GONZALEZ1985293] to obtain a candidate solution $\hat{\mu}\in X\cap M_0$: we randomly pick an initial point and select each subsequent point to be as far as possible from the previously selected points. Algorithm [\[alg: fft\]](#alg: fft){reference-type="ref" reference="alg: fft"} describes the details of the farthest first traversal, where $d(x_s, T)$ represents the minimum distance from sample $x_s$ to any sample in set $T$. We use $FFT(M_0)$ to denote the upper bound obtained with this approach. At a child node with center region $M$, for each cluster, we select the data sample closest to the midpoint of $M^k$ as $\hat{\mu}^k$ and obtain the corresponding upper bound $\alpha(M)$.

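The farthest first traversal and the resulting upper bound can be sketched as follows. This is an illustrative Python rendering under our own naming, not the paper's (Julia) code:

```python
import random

def sq_dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def farthest_first_traversal(X, K, seed=0):
    """Gonzalez's heuristic: start from a random sample, then repeatedly
    add the sample farthest from the already-selected set T."""
    rng = random.Random(seed)
    T = [X[rng.randrange(len(X))]]
    while len(T) < K:
        # d(x_s, T) = distance from x_s to its nearest selected point
        T.append(max(X, key=lambda x_s: min(sq_dist(x_s, t) for t in T)))
    return T

def upper_bound(X, centers):
    """alpha(M): worst sample-to-nearest-center squared distance."""
    return max(min(sq_dist(x_s, c) for c in centers) for x_s in X)
```

Because every selected point is itself a sample, the returned centers automatically satisfy the "centers on samples" constraint.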
Our algorithm only needs to branch on the region of centers, $M:=\{\mu: \underbar{$\mu$} \leq \mu\leq \bar{\mu}\}$, to guarantee convergence, which will be theoretically discussed in Section 5. Since the desired number of clusters is $K$ and the number of attributes is $A$, the number of possible branching variables is $K\times A$. The selection of branching variables and values dramatically influences the efficiency of the BB procedure. In our implementation, we select the max-range variable at each node as the branching variable and the midpoint of this variable as the branching value.

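The max-range, midpoint branching rule can be sketched as a small helper (a toy illustration with our own names; `lo`/`hi` hold the $K\times A$ bounds of $M$):

```python
def branch(lo, hi):
    """Split box M on the (cluster, attribute) pair with the largest
    range, at its midpoint; lo/hi are K x A nested lists of bounds."""
    K, A = len(lo), len(lo[0])
    k, a = max(
        ((k, a) for k in range(K) for a in range(A)),
        key=lambda ka: hi[ka[0]][ka[1]] - lo[ka[0]][ka[1]],
    )
    mid = 0.5 * (lo[k][a] + hi[k][a])
    lo1 = [row[:] for row in lo]; hi1 = [row[:] for row in hi]
    lo2 = [row[:] for row in lo]; hi2 = [row[:] for row in hi]
    hi1[k][a] = mid        # child M_1: lower half along (k, a)
    lo2[k][a] = mid        # child M_2: upper half along (k, a)
    return (lo1, hi1), (lo2, hi2)
```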
The detailed reduced-space branch-and-bound algorithm for the $K$-center Problem [\[eq:obj\]](#eq:obj){reference-type="ref" reference="eq:obj"} is given in Algorithm [\[alg: bb_sche\]](#alg: bb_sche){reference-type="ref" reference="alg: bb_sche"}. In the algorithm, we use $relint(.)$ to denote the relative interior of a set. We can also establish the convergence of the branch-and-bound scheme in Algorithm [\[alg: bb_sche\]](#alg: bb_sche){reference-type="ref" reference="alg: bb_sche"}. The BB procedure generates a monotonically non-ascending sequence $\{\alpha_i\}$ and a monotonically non-descending sequence $\{\beta_i\}$, and we can show that both converge to $z$ in a finite number of steps.

::: theorem
[]{#theorem: conv_finite label="theorem: conv_finite"} *Algorithm [\[alg: bb_sche\]](#alg: bb_sche){reference-type="ref" reference="alg: bb_sche"} converges to the global optimal solution after a finite number of steps $L$, with $\beta_L=z=\alpha_L$, by only branching on the region of centers.*
:::

Since the acceleration techniques introduced in Section [4](#sec:acc){reference-type="ref" reference="sec:acc"} also influence the global convergence, we present the detailed proof of Theorem [\[theorem: conv_finite\]](#theorem: conv_finite){reference-type="ref" reference="theorem: conv_finite"} in Section [5](#sec:convergence){reference-type="ref" reference="sec:convergence"} after introducing these techniques.

:::::::::::::::: minipage
::::::::::::::: multicols
2

::::::: minipage
:::: algorithm
[]{#alg: bb_sche label="alg: bb_sche"}

::: algorithmic
Initialize the iteration index $i\leftarrow 0$; Set $\mathbb{M}\leftarrow\{M_0\}$ and tolerance $\epsilon > 0$; Compute initial lower and upper bounds $\beta_i = \beta(M_0)$, $\alpha_i = FFT(M_0)$ // Alg. [\[alg: fft\]](#alg: fft){reference-type="ref" reference="alg: fft"}; Select $K$ farthest initial seeds // Sec. [\[sec:fbbt_fft\]](#sec:fbbt_fft){reference-type="ref" reference="sec:fbbt_fft"}; Select a set $M$ satisfying $\beta(M)=\beta_i$ from $\mathbb{M}$ and delete it from $\mathbb{M}$; Update $i\leftarrow i+1$;

Cluster Assignment // Alg. [\[alg: assignment\]](#alg: assignment){reference-type="ref" reference="alg: assignment"}; Bounds Tightening // Alg. [\[alg: fbbt\]](#alg: fbbt){reference-type="ref" reference="alg: fbbt"}; Obtain the tightened node $\hat{M}$; If $i\ \% \ i_{sr}=0$, Sample Reduction // Alg. [\[alg: reduction\]](#alg: reduction){reference-type="ref" reference="alg: reduction"}; Find two subsets $M_1$ and $M_2$ s.t. $relint(M_1)\cap relint(M_2) = \emptyset$ and $M_1\cup M_2=M$; Update $\mathbb{M}\leftarrow \mathbb{M}\cup \{M_i\}$ if $X\cap M_i^k \neq \emptyset, \forall k\in\mathcal{K}, i\in\{1,2\}$; Compute upper and lower bounds $\alpha(M_1)$, $\beta(M_1)$, $\alpha(M_2)$, $\beta(M_2)$; Let $\beta_i\leftarrow \min\{\beta(M')\,\mid\,M'\in\mathbb{M}\}$; Let $\alpha_i\leftarrow \min\{\alpha_{i-1}, \alpha(M_1), \alpha(M_2)\}$; Remove all $M'$ from $\mathbb{M}$ if $\beta(M')\geq\alpha_i$; If $\alpha_i-\beta_i\leq\epsilon$, STOP;
:::
::::

:::: algorithm
[]{#alg: fft label="alg: fft"}

::: algorithmic
Randomly pick $s\in \mathcal{S}$; Denote $T$ as the set of $K$ points selected by farthest first traversal; Set $T\leftarrow \{x_s\}$; Compute $x_s \in \arg \max\limits_{x_s\in X}d(x_s, T)$ to find the sample $x_s$ farthest away from set $T$; $T\leftarrow T\cup \{x_s\}$;
:::
::::
:::::::

::::::::: minipage
:::: algorithm
::: algorithmic
$x_s$ is assigned to cluster $k'$ with $b_s^{k'}=1$;

$x_s$ is assigned to cluster $k'$ with $b_s^{k'}=1$.
:::
::::

:::: algorithm
::: algorithmic
Given the current center region $M$ and upper bound $\alpha$; Obtain the assigned sample set $\mathcal{J}^k$ using Alg. [\[alg: assignment\]](#alg: assignment){reference-type="ref" reference="alg: assignment"}; Compute the ball-based or box-based region of each assigned sample, $B_{\alpha}(x_j)$ or $R_{\alpha}(x_j)$; Tighten the center region by $M^k\cap B_{\alpha}(x_j)$ or $M^k\cap R_{\alpha}(x_j)$, $\forall j \in \mathcal{J}^k$; Further tighten according to the "centers on samples" constraint;
:::
::::

:::: algorithm
::: algorithmic
Initialize the index set of redundant samples as $\mathcal{R}\gets \mathcal{S}$; Obtain the index set of redundant samples for lower bounds, $\mathcal{R}_{LB}$, according to the criterion in Sec. [4.2.1](#sec: re_lb){reference-type="ref" reference="sec: re_lb"}; Obtain the index set of redundant samples for upper bounds, $\mathcal{R}_{UB}$, according to the criterion in Sec. [4.2.2](#sec: re_ub){reference-type="ref" reference="sec: re_ub"}; Update the redundant index set, $\mathcal{R}\gets \mathcal{R}\cap \mathcal{R}_{LB}\cap \mathcal{R}_{UB}$; Delete samples in the redundant set $\mathcal{R}$ from the current dataset.
:::
::::
:::::::::
:::::::::::::::
::::::::::::::::

Although the lower bound introduced in Section [3.1](#sec:clt_lb){reference-type="ref" reference="sec:clt_lb"} is enough to guarantee convergence, it might not be very tight, leading to a tremendous number of iterations. Therefore, we propose several acceleration techniques to reduce the search space and speed up the BB procedure. Since Algorithm [\[alg: bb_sche\]](#alg: bb_sche){reference-type="ref" reference="alg: bb_sche"} only branches on the region of centers $M:=\{\mu: \underbar{$\mu$} \leq \mu\leq \bar{\mu}\}$, we focus on reducing the region of centers to accelerate the solution process while not excluding the optimal solution of the original $K$-center problem.

In each node, the assignment of many samples (i.e., which cluster a sample is assigned to) can be pre-determined from the geometric relationship between the samples and the regions of centers. This information can be further used to reduce the region of $\mu$.

The task of cluster assignment is to pre-determine some values of $b_s^k$ in the MINLP Formulation [\[eqn:overall\]](#eqn:overall){reference-type="ref" reference="eqn:overall"} at each BB node before finding the global optimal solution.

We first demonstrate the relations between samples and centers. Denote $\alpha$ as the upper bound obtained using the methods described in Section [3.2](#sec:clt_ub){reference-type="ref" reference="sec:clt_ub"}. Then, based on Objective [\[eqn:overall:obj\]](#eqn:overall:obj){reference-type="ref" reference="eqn:overall:obj"} and Constraint [\[eqn:overall:d\]](#eqn:overall:d){reference-type="ref" reference="eqn:overall:d"}, we have $d_s^* \leq d^* \leq \alpha$. From Constraints [\[eqn:overall:dis\]](#eqn:overall:dis){reference-type="ref" reference="eqn:overall:dis"} and [\[eqn:overall:bigM:b\]](#eqn:overall:bigM:b){reference-type="ref" reference="eqn:overall:bigM:b"}, we can conclude that if $b_s^k=1$, then $||x_s-\mu^k||_2^2\leq d_s^* \leq \alpha$. Therefore, we can derive Lemma [\[lemma:fbbt_1a\]](#lemma:fbbt_1a){reference-type="ref" reference="lemma:fbbt_1a"}:

::: lemma
[]{#lemma:fbbt_1a label="lemma:fbbt_1a"} If sample $x_s$ is in the $k$th cluster, then $||x_s-\mu^k||_2^2\leq \alpha$, where $\alpha$ is an upper bound of the $K$-center problem.
:::

Besides the relation between samples and centers, cluster assignments may also be determined from the distance between two samples. Suppose samples $x_i$ and $x_j$ belong to the $k$th cluster; then from Lemma [\[lemma:fbbt_1a\]](#lemma:fbbt_1a){reference-type="ref" reference="lemma:fbbt_1a"} we have $||x_i-\mu^k||_2^2\leq \alpha$ and $||x_j-\mu^k||_2^2\leq \alpha$. Thus $||x_i-x_j||_2^2 = ||x_i-\mu^k + \mu^k - x_j||_2^2 \leq (||x_i-\mu^k||_2 + ||\mu^k - x_j||_2)^2 \leq 4\alpha$. Therefore, we have Lemma [\[lemma:fbbt_seeds\]](#lemma:fbbt_seeds){reference-type="ref" reference="lemma:fbbt_seeds"}:

::: lemma
[]{#lemma:fbbt_seeds label="lemma:fbbt_seeds"} If two samples $x_i$ and $x_j$ are in the same cluster, then $||x_i-x_j||_2^2 \leq 4\alpha$, where $\alpha$ is an upper bound of the $K$-center problem.
:::

We propose three methods for pre-assigning samples based on these two lemmas:

**$K$ Farthest Initial Seeds:** []{#sec:fbbt_fft label="sec:fbbt_fft"} From Lemma [\[lemma:fbbt_seeds\]](#lemma:fbbt_seeds){reference-type="ref" reference="lemma:fbbt_seeds"}, if $||x_i-x_j||_2^2 > 4\alpha$, then $x_i$ and $x_j$ are not in the same cluster. At the root node, if we can find $K$ samples such that the distance between any two of them satisfies $||x_i-x_j||_2^2 > 4\alpha$, then we can conclude that these $K$ samples must belong to $K$ distinct clusters. Figure [1](#fig:k_farthest_points){reference-type="ref" reference="fig:k_farthest_points"} shows an example of this property, in which three samples are pre-assigned to 3 distinct clusters. We call these $K$ points initial seeds. To find the initial seeds, the selected samples should be pairwise as far apart as possible. Therefore, in our implementation, we use the heuristic Farthest First Traversal (FFT) (Algorithm [\[alg: fft\]](#alg: fft){reference-type="ref" reference="alg: fft"}) to obtain $K$ farthest points. For about half of the case studies shown in Section [6](#sec:results){reference-type="ref" reference="sec:results"}, we can obtain the initial seeds using FFT. For the other cases, however, initial seeds cannot be obtained using FFT, or may not even exist.

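The seeding criterion above is a simple pairwise check. A small sketch (helper names are hypothetical):

```python
from itertools import combinations

def sq_dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def are_initial_seeds(seeds, alpha):
    """Lemma check: if ||x_i - x_j||^2 > 4*alpha for every pair of the K
    candidate points, they must belong to K distinct clusters."""
    return all(sq_dist(xi, xj) > 4 * alpha
               for xi, xj in combinations(seeds, 2))
```

Note that the check depends on the current upper bound $\alpha$: a tighter upper bound makes valid seed sets easier to find.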
**Center-Based Assignment:**[]{#sec:fbbt_center_based label="sec:fbbt_center_based"} From Lemma [\[lemma:fbbt_1a\]](#lemma:fbbt_1a){reference-type="ref" reference="lemma:fbbt_1a"}, if $||x_s-\mu^k||_2^2>\alpha$, then $x_s$ does not belong to the $k$th cluster, i.e., $b_s^k=0$. Consequently, if we can determine that $b_s^k=0, \forall k \in \mathcal{K} \setminus \{k'\}$, then $b_s^{k'}=1$. However, the value of $\mu$ is unknown before obtaining the optimal solution. One observation is that if the BB node with region $M$ contains the optimal solution $\mu^*$, then $\beta_{s}^k(M^k) =\min\limits_{\mu^{k}\in M^k} ||x_s-\mu^{k}||_2^2 \leq ||x_s-\mu^{*k}||_2^2$. Therefore, if $\beta_{s}^k(M^k)>\alpha$, sample $x_s$ is not in the $k$th cluster and $b_s^k=0$. In summary, for sample $x_s$, if $\beta_{s}^{k}(M^{k})>\alpha$, $\forall k \in \mathcal{K} \setminus \{k'\}$, then $x_s$ is assigned to cluster $k'$ with $b_s^{k'}=1$. Figure [2](#fig:center_based_assign){reference-type="ref" reference="fig:center_based_assign"} illustrates an example in two-dimensional space with three clusters.

This center-based method can be adopted at every node of the BB scheme. Since $\beta_{s}^k(M^k)$ is already obtained when computing the lower bound in Section [4.2.1](#sec: re_lb){reference-type="ref" reference="sec: re_lb"}, there is no additional computational cost. Nevertheless, we do not need to apply this method at the root node, since $M_0^1=\cdots=M_0^K$. As the BB scheme continues branching on the regions of centers, each $M^k$ becomes more and more different from the others, and more samples can be pre-assigned using this center-based method.

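The center-based exclusion test can be sketched as follows, reusing the box-distance computation from the lower bound (an illustrative sketch with our own names):

```python
def dist_to_box(x_s, lo_k, hi_k):
    """min_{mu in M^k} ||x_s - mu||^2 via attribute-wise clamping."""
    return sum((x - min(max(x, lo), hi)) ** 2
               for x, lo, hi in zip(x_s, lo_k, hi_k))

def center_based_assign(x_s, lo, hi, alpha):
    """Exclude cluster k whenever beta_s^k(M^k) > alpha; if exactly one
    cluster remains, x_s can be pre-assigned to it (None otherwise)."""
    feasible = [k for k in range(len(lo))
                if dist_to_box(x_s, lo[k], hi[k]) <= alpha]
    return feasible[0] if len(feasible) == 1 else None
```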
**Sample-Based Assignment:**[]{#sec:fbbt_sample_based label="sec:fbbt_sample_based"} Besides utilizing centers to pre-assign samples, assigned samples can also help pre-assign other samples. From Lemma [\[lemma:fbbt_seeds\]](#lemma:fbbt_seeds){reference-type="ref" reference="lemma:fbbt_seeds"}, if $||x_i-x_j||_2^2 > 4\alpha$, then $x_i$ and $x_j$ are not in the same cluster. If $x_j$ belongs to the $k$th cluster, then clearly $x_i$ cannot be assigned to the $k$th cluster and $b_i^k=0$. With this relationship, if all the other $K-1$ clusters are excluded, $x_i$ is assigned to the remaining cluster. Figure [3](#fig:sample_based_assign){reference-type="ref" reference="fig:sample_based_assign"} shows an example of the sample-based assignment.

There is a prerequisite for using this sample-based method: for each cluster, there must be at least one sample already assigned to it. Accordingly, sample-based assignment is utilized only after at least one sample has been pre-assigned to each cluster.

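The sample-based exclusion rule can be sketched as follows (names are hypothetical; `assigned` plays the role of the per-cluster sets of already pre-assigned samples):

```python
def sq_dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def sample_based_assign(x_i, assigned, alpha, K):
    """'assigned' maps cluster index -> samples already fixed to it.
    Cluster k is excluded if some x_j in k has ||x_i - x_j||^2 > 4*alpha;
    if exactly one cluster survives, x_i is pre-assigned to it."""
    surviving = [k for k in range(K)
                 if all(sq_dist(x_i, x_j) <= 4 * alpha
                        for x_j in assigned.get(k, []))]
    return surviving[0] if len(surviving) == 1 else None
```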
In this subsection, we adopt the Bounds Tightening (BT) technique and the cluster assignment information to reduce the region of $\mu$.

**Ball-based Bounds Tightening:** For a sample $j$, $B_{\alpha}(x_j){=}\{x \ | \ ||x-x_j||_2^2\leq \alpha\}$ represents the ball with center $x_j$ and radius $\sqrt{\alpha}$. Suppose the cluster assignment methods in Section [4.1.1](#sec:fbbt_assign){reference-type="ref" reference="sec:fbbt_assign"} have already determined that sample $j$ belongs to the $k$th cluster; then, by Lemma [\[lemma:fbbt_1a\]](#lemma:fbbt_1a){reference-type="ref" reference="lemma:fbbt_1a"}, $\mu^k\in B_{\alpha}(x_j)$ holds. We use $\mathcal{J}^k$ to denote the index set of all samples assigned to the $k$th cluster, i.e., $\mathcal{J}^k= \{j\in \mathcal{S}\ | \ b_j^k = 1\}$; then $\mu^k\in B_{\alpha}(x_j), \forall j \in \mathcal{J}^k$. Besides this, we also know that $\mu^k \in X\cap M^k$. Denote $\mathcal{S}_{+}^k$ as the index set of samples satisfying all these constraints, $\mathcal{S}^k_{+}(M):= \{s\in \mathcal{S} \ | \ x_{s}\in X\cap M^k, x_{s} \in B_{\alpha}(x_j), \forall j \in \mathcal{J}^k\}$. In this way, we can obtain a tightened box containing all feasible solutions of the $k$th center, $\hat{M}^k {=} \{\mu^k|\hat{\underbar{$\mu$}}^k\leq \mu^k\leq \hat{\bar{\mu}}^k\}$, with the bounds of the $a$th attribute of the $k$th center given by $\hat{\underbar{$\mu$}}_{a}^k {=} \min\limits_{s\in \mathcal{S}^k_{+}(M)} x_{s,a}$ and $\hat{\bar{\mu}}_{a}^k {=} \max\limits_{s\in \mathcal{S}^k_{+}(M)} x_{s,a}$. Figure [4](#fig:ball){reference-type="ref" reference="fig:ball"} gives an example of bounds tightening using this method. One challenge of this ball-based method is that it needs to compute the distance between $x_s$ and $x_j$ for all $s\in \mathcal{S}$ and $j \in \mathcal{J}^k$. If the assignments of the majority of the samples are known, this requires at most $S^2$ distance calculations, whereas computing a lower bound only requires $S\times K$ distance calculations. To reduce the computational time, we set a threshold on the maximum number of balls (default: 50) used for bounds tightening in our implementation.

**Box-based Bounds Tightening:** Another strategy to reduce the computational burden is based on a relaxation of $B_{\alpha}(x_j)$. For any ball $B_{\alpha}(x_j)$, the closed set $R_{\alpha}(x_j)=\{x \mid x_j-\sqrt{\alpha}\leq x \leq x_j+\sqrt{\alpha} \}$ is the smallest box containing $B_{\alpha}(x_j)$. Then we have $\mu^k\in R_{\alpha}(x_j), \forall j \in \mathcal{J}^k$. Since $R_{\alpha}(x_j)$ and $M^k$ are all boxes, we can easily compute the tightened bounds $\hat{M}^k {=} \bigcap_{j \in \mathcal{J}^k} R_{\alpha}(x_j) \cap M^k$. Figure [5](#fig:box){reference-type="ref" reference="fig:box"} gives an example of box-based bounds tightening. Clearly, the bounds generated in Figure [4](#fig:ball){reference-type="ref" reference="fig:ball"} are much tighter, while the method in Figure [5](#fig:box){reference-type="ref" reference="fig:box"} is much faster. Consequently, if $|\mathcal{J}^k|$ is small for all clusters, the ball-based bounds tightening method gives more satisfactory results, while if $|\mathcal{J}^k|$ is large for any $k$, box-based bounds tightening provides a cheaper alternative.

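Since the box intersection reduces to attribute-wise max/min operations, the box-based variant is only a few lines (an illustrative sketch; names are ours):

```python
import math

def box_tighten(lo_k, hi_k, assigned_samples, alpha):
    """Intersect M^k with R_alpha(x_j) = [x_j - sqrt(alpha), x_j + sqrt(alpha)]
    for every sample x_j already assigned to cluster k."""
    r = math.sqrt(alpha)
    lo, hi = list(lo_k), list(hi_k)
    for x_j in assigned_samples:
        lo = [max(l, x - r) for l, x in zip(lo, x_j)]
        hi = [min(h, x + r) for h, x in zip(hi, x_j)]
    return lo, hi   # the region is empty if lo[a] > hi[a] for some a
```

The cost is linear in $|\mathcal{J}^k|$, which is why it scales better than intersecting balls when many samples are assigned.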
Another way to get tighter bounds is based on symmetry-breaking constraints. We add the constraints $\mu_1^1\leq \mu_1^2\leq \cdots \leq \mu_1^K$ in the BB Algorithm [\[alg: bb_sche\]](#alg: bb_sche){reference-type="ref" reference="alg: bb_sche"}, in which $\mu_a^k$ denotes the $a$th attribute of the $k$th center. Note that symmetry-breaking constraints and FFT-based initial seeds in Section [\[sec:fbbt_fft\]](#sec:fbbt_fft){reference-type="ref" reference="sec:fbbt_fft"} both break symmetry by imposing a certain order on the clusters, so they cannot be combined. Our implementation uses symmetry breaking only when initial seeds are not found by FFT at the root node. It should be noted that we also add these symmetry-breaking constraints when using CPLEX to solve the MINLP formulation [\[eqn:overall\]](#eqn:overall){reference-type="ref" reference="eqn:overall"} of the $K$-center problem.

Some samples may become redundant during the lower and upper bounding procedure without contributing to the bound improvements. If these samples are proven to be redundant in all the current and future branch nodes, we can conclude that they will not influence the bounding results anymore, resulting in sample reduction.

Denote $\beta$ as the current best lower bound obtained using the methods described in Section [3.1](#sec:clt_lb){reference-type="ref" reference="sec:clt_lb"}. According to Equation [\[eqn:lb_pb\]](#eqn:lb_pb){reference-type="ref" reference="eqn:lb_pb"}, the lower bound $\beta(M)$ is the maximum of the samples' optimal values $\beta_s(M)$. Based on this observation, we further define the best maximum distance of sample $s$ to the center region as $$\begin{equation}
\label{eqn: lb_re}
\alpha_s(M) = \min_{k\in\mathcal{K}}\max_{\mu^k\in M^k}||x_s-\mu^k||^2_2.
\end{equation}$$ It is obvious that $\beta_s(M) \leq \alpha_s(M)$. If $\alpha_s(M) < \beta$, then $\beta_s(M) < \beta$, which means sample $s$ is not the sample attaining the maximum within-cluster distance. Hence, sample $s$ is a redundant sample in lower bounding for this BB node. Moreover, $\forall M' \subset M$, we have $\beta_s(M') \leq \alpha_s(M') \leq \alpha_s(M)$. Owing to the shrinking nature of the center region $M$ and the non-descending nature of the lower bound $\beta$, if $\alpha_s(M)< \beta$ holds at a BB node, sample $s$ remains redundant in all child nodes of this branch node. It should be noted that $\alpha_s(M)$ can be calculated with an analytical solution similar to that of $\beta_s(M)$: $\mu^k_a= \underbar{$\mu$}^k_a$ if $|\underbar{$\mu$}^k_a - x_{s,a}| > |\bar{\mu}^k_a - x_{s,a}|$, and $\mu^k_a=\bar{\mu}^k_a$ otherwise.

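The analytical solution above picks, per attribute, the box bound farther from $x_s$ (the farthest corner of $M^k$). A minimal sketch (our own naming, not the paper's code):

```python
def alpha_s(x_s, lo, hi):
    """alpha_s(M): for each cluster, take the FARTHEST corner of box M^k
    (attribute-wise, whichever bound is farther from x_s), then minimize
    over clusters."""
    best = float("inf")
    for lo_k, hi_k in zip(lo, hi):
        d = sum((x - (l if abs(l - x) > abs(h - x) else h)) ** 2
                for x, l, h in zip(x_s, lo_k, hi_k))
        best = min(best, d)
    return best
```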
Obviously, a sample $x_j$ cannot be the center of the $k$th cluster if it does not belong to $M^k$. Moreover, according to Lemma [\[lemma:fbbt_1a\]](#lemma:fbbt_1a){reference-type="ref" reference="lemma:fbbt_1a"}, if a sample $x_j$ is the center of the $k$th cluster, $||x_i-x_j||^2_2\leq \alpha$ must hold for all samples $x_i$ assigned to this cluster. Hence, a sample $x_j$ also cannot be the center of the $k$th cluster if there exists a sample $x_i$ assigned to the $k$th cluster satisfying $||x_i-x_j||^2_2> \alpha$. If sample $x_j$ cannot be the center of any cluster, we call $x_j$ a redundant sample for upper bounding. Owing to the non-ascending nature of the upper bound $\alpha$, if sample $s$ is redundant for upper bounding in a branch node, it remains redundant in all child nodes of this branch node. It should be noted that the calculations in this method are identical to those of the Sample-Based Assignment in Section [\[sec:fbbt_sample_based\]](#sec:fbbt_sample_based){reference-type="ref" reference="sec:fbbt_sample_based"}, so no extra calculations are introduced.

If a sample $s$ is redundant in lower bounding, it implies that sample $s$ is not the "worst-case sample" corresponding to the maximum within-cluster distance. If a sample $s$ is redundant in upper bounding, it means that sample $s$ cannot be a center for any cluster. If sample $s$ is redundant in both lower bounding and upper bounding, then removing it will not affect the solution of this BB node or any of its child nodes. Algorithm [\[alg: reduction\]](#alg: reduction){reference-type="ref" reference="alg: reduction"} describes the procedure of sample reduction: first, obtain the redundant samples for lower and upper bounding in each branch node; then, delete the samples that are redundant for both lower and upper bounding in all the branch nodes. In our implementation, this sample reduction method is executed every $i_{sr}$ iterations.

Sample reduction can reduce the number of samples that need to be explored by deleting redundant samples every $i_{sr}$ iterations, as described in Algorithm [\[alg: reduction\]](#alg: reduction){reference-type="ref" reference="alg: reduction"}. It can also accelerate the calculation of lower bounds and bounds tightening at each iteration. For the lower bounding method in Section [3.1](#sec:clt_lb){reference-type="ref" reference="sec:clt_lb"}, we only need to solve the second-stage problems for non-redundant samples that have been validated by the lower-bounding criterion in Section [4.2.1](#sec: re_lb){reference-type="ref" reference="sec: re_lb"}. Additionally, once a sample is deemed redundant for lower bounding in a particular node, it remains redundant in all child nodes of that node, so we do not need to solve the second-stage problem for this sample in the current node or any of its child nodes. For the bounds tightening methods in Section [4.1.2](#sec:fbbt){reference-type="ref" reference="sec:fbbt"}, we only need to calculate the bounds based on non-redundant samples that have been validated by the upper-bounding criterion in Section [4.2.2](#sec: re_ub){reference-type="ref" reference="sec: re_ub"}. Similarly, if a sample is redundant for upper bounding in a node, it remains redundant in all child nodes of that node and can be eliminated from the bounds tightening calculations in the current node and its child nodes. In this way, sample reduction not only deletes redundant samples every $i_{sr}$ iterations, but also eliminates redundant information in the current node and its child nodes, thereby accelerating the overall calculation.

We also provide a parallel implementation of the whole algorithm to accelerate the solving process. Since our algorithm primarily operates at the sample level (e.g., computing $\beta_s(M)$ in the lower bounding), we can parallelize it by distributing the dataset equally across processes, performing the computations on each process with its local data, and communicating the results as needed. The detailed parallelization framework is shown in Figure [6](#fig:parallel){reference-type="ref" reference="fig:parallel"}. Here, the green modules represent the parallel operations at each process, and the blue modules represent serial reduction operations. This parallelization framework is implemented using the Message-Passing Interface (MPI) and MPI.jl by [@byrne_mpijl_2021].

As stated in Theorem [\[theorem: conv_finite\]](#theorem: conv_finite){reference-type="ref" reference="theorem: conv_finite"}, the branch-and-bound scheme for the $K$-center problem in Algorithm [\[alg: bb_sche\]](#alg: bb_sche){reference-type="ref" reference="alg: bb_sche"} converges to the global optimal solution in a finite number of steps. In this section, we present the proof of this theorem.

Specifically, the branch-and-bound scheme in Algorithm [\[alg: bb_sche\]](#alg: bb_sche){reference-type="ref" reference="alg: bb_sche"} branches on the region of centers, $\mu$, and generates a rooted tree with the search space $M_0$ at the root node. For the child node at the $q$th level and $l_q$th iteration, we denote the search space as $M_{l_q}$. The search space of its child node is denoted as $M_{l_{q+1}}$, satisfying $M_{l_{q+1}} \subset M_{l_q}$. We denote the decreasing sequence from the root node with $M_0$ to the child node with $M_{l_q}$ as $\{M_{l_q}\}$. The search space of the $k$th cluster center at $M_{l_q}$ is denoted as $M^k_{l_q}$. Along the branch-and-bound process, we obtain a monotonically non-ascending upper bound sequence $\{\alpha_i\}$ and a monotonically non-descending lower bound sequence $\{\beta_i\}$.

In the following convergence analysis, we adapt the fundamental conclusions from [@horst_global_2013] to our algorithm. It should be noted that the convergence of the $K$-center problem here is stronger than the convergence analysis in [@cao_scalable_2019] for two-stage nonlinear optimization problems or the convergence proof in [@hua_scalable_2021] for the $K$-means clustering problem. Both @cao_scalable_2019 and @hua_scalable_2021 guarantee convergence in the sense of $\lim \limits_{i\rightarrow\infty}\alpha_i = \lim \limits_{i\rightarrow\infty}\beta_i = z$, so they can only produce a global $\epsilon$-optimal solution in a finite number of steps. For the $K$-center problem, in contrast, our algorithm obtains an exact optimal solution (i.e., $\epsilon=0$) in a finite number of steps.

::: definition
(Definition IV.3 [@horst_global_2013]) A bounding operation is called **finitely consistent** if, at every step, any unfathomed partition element can be further refined and if any decreasing sequence $\{M_{l_q}\}$ of successively refined partition elements is finite.
:::

::: lemma
[]{#finitely label="finitely"} The bounding operation in Algorithm [\[alg: bb_sche\]](#alg: bb_sche){reference-type="ref" reference="alg: bb_sche"} is finitely consistent.
:::

*Proof.* First, we prove that any unfathomed partition element $M_{l_q}$ can be further refined. Any unfathomed $M_{l_q}$ satisfies two conditions: (1) $\exists k\in\mathcal{K}$ with $|X\cap M^k_{l_q}| > 1$, and (2) $\alpha_l - \beta(M_{l_q}) > \epsilon$, $\epsilon>0$. Clearly, such an element admits at least one further partition.

We then prove that any decreasing sequence $\{M_{l_q}\}$ of successively refined partition elements is finite. Assume by contradiction that a sequence $\{M_{l_q}\}$ is infinite. In our algorithm, since we branch on the first-stage variable $\mu$ along the dimension corresponding to the diameter of $M$, the subdivision is exhaustive. Therefore, we have $\lim\limits_{q\to \infty} \delta(M_{l_q})= 0$, and $\{M_{l_q}\}$ converges to one point $\bar{\mu}$ for each cluster, where $\delta(M_{l_q})$ is the diameter of the set $M_{l_q}$.

If this point satisfies $\bar{\mu}\in X$, there exists a ball around $\bar{\mu}$, denoted as $B_r(\bar{\mu})=\{\mu\ |\ ||\mu-\bar{\mu}|| \leq r\}$, fulfilling $|X\cap B_r(\bar{\mu})| = 1$. There exists a level $q_0$ such that $M_{l_q}\subset B_r(\bar{\mu}), \forall q \geq q_0$. At the $l_{q_0}$th iteration, according to the terminal conditions $|X\cap M^k_{l_q}| = 1, \forall k \in \mathcal{K}$, the partition element $M_{l_{q_0}}$ will not be branched anymore. Because the dataset $X$ is finite, the sequence $\{M_{l_q}\}$ is finite in this case. If $\bar{\mu}\notin X$, there is a ball around $\bar{\mu}$, denoted as $B_r(\bar{\mu})=\{\mu\ |\ ||\mu-\bar{\mu}|| \leq r\}$, satisfying $|X\cap B_r(\bar{\mu})| = 0$. There exists a level $q_0$ such that $M_{l_q}\subset B_r(\bar{\mu}), \forall q \geq q_0$. At the $l_{q_0}$th iteration, $M_{l_{q_0}}$ will be deleted according to the terminal conditions. Consequently, the sequence $\{M_{l_q}\}$ is also finite in this case. In conclusion, no infinite sequence $\{M_{l_q}\}$ can exist.

::: theorem
[]{#terminate label="terminate"} (Theorem IV.1 [@horst_global_2013]) In a BB procedure, suppose that the bounding operation is finitely consistent. Then the procedure terminates after finitely many steps.
:::

::: lemma
[]{#terminate_lemma label="terminate_lemma"} Algorithm [\[alg: bb_sche\]](#alg: bb_sche){reference-type="ref" reference="alg: bb_sche"} terminates after finitely many steps.
:::

*Proof.* From Lemma [\[finitely\]](#finitely){reference-type="ref" reference="finitely"}, the bounding operation in Algorithm [\[alg: bb_sche\]](#alg: bb_sche){reference-type="ref" reference="alg: bb_sche"} is finitely consistent. According to Theorem [\[terminate\]](#terminate){reference-type="ref" reference="terminate"}, Algorithm [\[alg: bb_sche\]](#alg: bb_sche){reference-type="ref" reference="alg: bb_sche"} therefore terminates after finitely many steps.

+ Finally, we prove that the BB scheme for the $K$-center problem is convergent:
231
+
232
+ **Theorem [\[theorem: conv_finite\]](#theorem: conv_finite){reference-type="ref" reference="theorem: conv_finite"}.** *Algorithm [\[alg: bb_sche\]](#alg: bb_sche){reference-type="ref" reference="alg: bb_sche"} is convergent to the global optimal solution after a finite step $L$, with $\beta_L=z=\alpha_L$, by only branching on the space of $\mu$.*
233
+
234
+ *Proof.* From Lemma [\[terminate_lemma\]](#terminate_lemma){reference-type="ref" reference="terminate_lemma"}, Algorithm [\[alg: bb_sche\]](#alg: bb_sche){reference-type="ref" reference="alg: bb_sche"} terminates after finitely many steps. The algorithm terminates in one of two situations. The first situation is $|\beta_l - \alpha_l| \leq \epsilon$ with $\epsilon \geq 0$. When $\epsilon$ is set to 0, we have $\beta_l=z=\alpha_l$.
235
+
236
+ The second situation is that the branch node set $\mathbb{M} = \emptyset$. A branch node with $M$ is deleted from $\mathbb{M}$ and not further partitioned if it satisfies $\beta(M) > \alpha_l$ or $|X\cap M^k| = 1, \forall k\in \mathcal{K}$. In the first case, it is obvious that the branch node does not contain the global optimal solution $\mu^*$. Therefore, the branch node with $M'$ containing the optimal solution $\mu^*$ is not further partitioned because it satisfies the second case, $|X\cap M'^k| = 1, \forall k\in \mathcal{K}$. After bounds tightening according to the "centers on samples" constraint, the tightened node becomes $M'= \{\mu^*\}$. Obviously, for this tightened node, we have $\beta_l =\beta(M') = z = \alpha(M')= \alpha_l$. In this way, we have proved Theorem [\[theorem: conv_finite\]](#theorem: conv_finite){reference-type="ref" reference="theorem: conv_finite"}.
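The two terminal conditions used throughout this argument can be summarized in a small sketch (illustrative only; the function and argument names are ours, not part of the algorithm's actual implementation):

```python
def node_action(beta_M, alpha_l, samples_per_cluster):
    """Decide the fate of a branch node M, following the two terminal
    conditions from the proof: delete M when its lower bound exceeds the
    incumbent upper bound, and stop branching M once every cluster box
    M^k contains exactly one sample of X."""
    if beta_M > alpha_l:
        return "delete"  # M cannot contain the optimum mu*
    if all(n == 1 for n in samples_per_cluster):
        return "leaf"    # |X ∩ M^k| = 1 for all k: never branched again
    return "branch"
```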
237
+
238
+ In this section, we report the detailed implementation of our algorithm and the numerical results on synthetic and real-world datasets.
2302.10970/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2022-05-26T15:32:23.538Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.45 Safari/537.36" etag="EtUz_8qgYKga3OFJuiqX" version="18.1.2" type="device"><diagram id="p6pekGb_pC8uchOlPug_" name="Page-1">7Vpdc5s6EP01nmkfkgEJMDzabpq203Zub6bT5qkjg4xpAVFZ/uqvrwSSQUAc20ljQvuSSKvVIq2OjlYrD+Ak2VxTlM0/kADHA2AEmwF8NQDAgQ7/KwTbQgBFTQhCGgWFyCwFN9EvLIWGlC6jAC80RUZIzKJMF/okTbHPNBmilKx1tRmJ9a9mKMQNwY2P4qb0SxSweSF1wbCUv8FROFdfNh2vaEmQUpYzWcxRQNYVEbwawAklhBWlZDPBsfCd8kvR7/UdrbuBUZyyQzp8G30mhvPWDpNPt+/S5A1YjZwLyy7MrFC8lDOWo2Vb5QKcBiPhSV5LScqF4zlLYl4zeRFvIvaVl41LW9ZuKy2vBAwMVdmqSsrottJJVG+rbWW3vKb6NWcsnbAgS+rjPdNUyEE0xGyPnjKIAw0V0qHXmCSYD4grrEss2HJ95xUYKBnFMWLRSscSkpAMd+Z2X/iPRHxqwJC7B1rSjtw8pmfoJoqJy17Vta8b8mqGrJqhwjMNQ7xQmXYpyqF1BMzMBsqA7TSARskyDXAg8bOeRwzfZChf2DUnFx14lDDuWpLy6oUn0DGL4nhCYkJzYzBA2J35XL5glPzAlRbHd/F0tg9PK0wZ3uwFgGo1bd2xttz+FYB4LQCB9t1Y0Jx/rKfVwlb3LwfzjawSyuYkJCmKr0rpuHS8cGSp856QTLr7O2ZsK+kZLRlpZwGzwgElI3SFBdwDWeBQEjh4dz9oRd0WhnZiJhBP8j1eLrXzc0lUw8UiX6wRVzBBtikbeSmU/3MriwylrVb8YssICzScvoDWQJwwfBKGVnwpysKckX92hpIo3hbduC2UZHkj5H3gmEUJP8yBkeK1sEoS8emaTstIJ1eDMVTj5U4shlw0vti8VC1TqoSOUdEu3KQm/RDOqVHMbDYDfivFBM7U4QT3OBRj3E8x8CkpBnrnpJhjAo3TqcI7lCpAp7jC+8cVfzVX7Db9Vg89zkUVZjPwe0ZQepytM3k9cOGdEGvF6eMiz/VxO/Kmrm3Ze3nyGOSBSx17QN0tq+Czm+ADxh8CHxye9ZzqdCiszq37D7huBcNq3P27SYIDwrwnvUmasK+uNh3jbK7+gezRR7qhMXA+/VolNh6u04uWnFvXiUqjqZK19hJVjUNOZC77QOZ6MFHlXUeUom1FIRMpskXFci3TBrz6KVjLytb0reFefV4oRvC46bhm1rcnW3uXee0Mizq9dbXXtQOrefvtiasb7wDndrWK2XvoardjqLbOGxyc9Kx3SnBwSiywexG+NxboVpoONNMiPdk+FuwaUzUvjH9bBur/63Hv00+WN9Rx5ziX504/uc+OuM2nIm7165770092p4jbbMY9z++BRVrcR0nl6I56XglanleA1aPnFVc/3f7c8wqvlr8MK6795c/r4NVv</diagram></mxfile>
2302.10970/main_diagram/main_diagram.pdf ADDED
Binary file (15.5 kB). View file
 
2302.10970/paper_text/intro_method.md ADDED
@@ -0,0 +1,69 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Given a set of scene pictures with corresponding camera positions, novel view synthesis aims to generate pictures of the same scene from new camera positions. Recently, learning-based approaches have led to significant progress in this area. As an early instance, neural radiance fields (NeRF) by [@mildenhall2020nerf] represent a scene via a density field and a radiance (color) field parameterized with a multilayer perceptron (MLP). Using a differentiable volume rendering algorithm [@max1995optical] with MLP-based fields to produce images, they minimize the discrepancy between the output images and a set of reference images to learn a scene representation.
4
+
5
+ In particular, NeRF generates an image pixel by casting a ray from a camera through the pixel and aggregating the radiance at each ray point with weights induced by the density field. Each term involves a costly neural network query, and the model has a trade-off between rendering quality and computational load. In this work, we revisit the formula for the aggregated radiance computation and propose a novel approximation based on Monte Carlo methods. We compute our approximation in two stages. In the first stage, we march through the ray to estimate density. In the second stage, we construct a Monte Carlo color approximation using the density to pick points along the ray. The resulting estimate is fully differentiable and can act as a drop-in replacement for the standard rendering algorithm used in NeRF. Fig. [1](#fig:summary){reference-type="ref" reference="fig:summary"} illustrates the estimates for a varying number of samples. Compared to the standard rendering algorithm, the second stage of our algorithm avoids redundant radiance queries and can potentially reduce computation during training and inference.
6
+
7
+ Furthermore, we show that the sampling algorithm used in our Monte Carlo estimate is applicable to the hierarchical sampling scheme in NeRF. Similar to our work, the hierarchical scheme uses inverse transform sampling to pick points along a ray. The corresponding distribution is tuned using an auxiliary training task. In contrast, we derive our algorithm from a different perspective and obtain the inverse transform sampling for a slightly different distribution. With our algorithm, we were able to train NeRF end-to-end without the auxiliary task and improve the reconstruction quality. We achieve this by back-propagating the gradients through the sampler, and show that the original sampling algorithm fails to achieve similar quality in the same setup.
8
+
9
+ Below, Section [2](#sec:nerf-recap){reference-type="ref" reference="sec:nerf-recap"} gives a recap of neural radiance fields. Then we proceed to the main contributions of our work in Section [3](#sec:theory){reference-type="ref" reference="sec:theory"}, namely the rendering algorithm fueled by Monte Carlo estimates and the novel sampling procedure. In Section [4](#sec:related-work){reference-type="ref" reference="sec:related-work"} we discuss related work. In Subsection [5.1](#sec:nerf){reference-type="ref" reference="sec:nerf"}, we use our sampling algorithm to improve the hierarchical sampling scheme proposed for training NeRF. Finally, in Subsection [5.2](#sec:dvgo){reference-type="ref" reference="sec:dvgo"} we apply the proposed Monte Carlo estimate to replace the standard rendering algorithm. With an efficient neural radiance field architecture, our algorithm decreases time per training iteration at the cost of reduced reconstruction quality. We also show that our Monte Carlo estimate can be used during inference of a pre-trained model with no additional fine-tuning needed, and it can achieve better reconstruction quality at the same speed in comparison to the standard algorithm. Our source code is available at <https://github.com/GreatDrake/reparameterized-volume-sampling>.
10
+
11
+ Neural radiance fields represent 3D scenes with a non-negative scalar density field $\sigma: \mathbb R^3 \rightarrow \mathbb R^+$ and a vector radiance field $c: \mathbb R^3 \times \mathbb R^3 \rightarrow \mathbb R^3$. Scalar field $\sigma$ represents volume density at each spatial location $\bm{x}$, and $c(\bm{x}, \bm{d})$ returns the light emitted from spatial location $\bm{x}$ in direction $\bm{d}$ represented as a normalized three dimensional vector.
12
+
13
+ For novel view synthesis, NeRF [@mildenhall2020nerf] adapts a volume rendering algorithm that computes pixel color $C(\bm{r})$ as the expected radiance for a ray $\bm{r} = \bm{o} + t \bm{d}$ passing through a pixel from origin $\bm{o} \in \mathbb R^3$ in a direction $\bm{d} \in \mathbb R^3$. For ease of notation, we will denote density and radiance restricted to a ray $\bm{r}$ as $$\begin{align}
14
+ \sigma_{\bm{r}}(t) := \sigma(\bm{o} + t \bm{d}) \text{\;and\;}
15
+ c_{\bm{r}}(t) := c(\bm{o} + t \bm{d}, \bm{d}).
16
+ \end{align}$$ With that in mind, the expected radiance along ray $\bm{r}$ is given as $$\begin{equation}
17
+ \label{eq:expected_color}
18
+ C(\bm{r}) = \int_{t_n}^{t_f} p_{\bm{r}}(t) c_{\bm{r}}(t) \mathrm{d} t,
19
+ \end{equation}$$ where $$\begin{equation}
20
+ \label{eq:density_field_dist}
21
+ p_{\bm{r}}(t) := \sigma_{\bm{r}}(t) \exp{\left(- \int_{t_n}^{t} \sigma_{\bm{r}} (s) \mathrm{d} s \right)}.
22
+ \end{equation}$$ Here, $t_n$ and $t_f$ are *near* and *far* ray boundaries, and $p_{\bm{r}}(t)$ is an unnormalized probability density function of a random variable $\textnormal{t}$ on a ray $\bm{r}$. Intuitively, $\textnormal{t}$ is the location on a ray where the portion of light coming into the point $\bm{o}$ was emitted.
23
+
24
+ To approximate the nested integrals in Eq. [\[eq:expected_color\]](#eq:expected_color){reference-type="ref" reference="eq:expected_color"}, [@max1995optical] proposed to replace fields $\sigma_{\bm{r}}$ and $c_{\bm{r}}$ with a piecewise approximation on a grid $t_n = t_0 < t_1 < \dots < t_m = t_f$ and compute the formula in Eq. [\[eq:expected_color\]](#eq:expected_color){reference-type="ref" reference="eq:expected_color"} analytically for the approximation. In particular, a piecewise constant approximation with density $\sigma_i$ and radiance $c_i$ within the $i$-th bin $[t_{i}, t_{i + 1}]$ of width $\delta_i = t_{i + 1} - t_i$ yields the formula $$\begin{equation}
25
+ \label{eq:grid_color_approximation}
26
+ \hat{C}(\bm{r}) = \sum_{i=1}^m w_i c_i,
27
+ \end{equation}$$ where the weights are given by $$\begin{equation}
28
+ \label{eq:nerf_weights}
29
+ w_i = (1 - \exp(-\sigma_i \delta_i)) \exp \left( - \sum_{j=1}^{i-1} \sigma_j \delta_j \right).
30
+ \end{equation}$$ Importantly, Eq. [\[eq:grid_color_approximation\]](#eq:grid_color_approximation){reference-type="ref" reference="eq:grid_color_approximation"} is fully differentiable and can be used as a part of a gradient-based learning pipeline. To reconstruct a scene, NeRF runs a gradient-based optimizer to minimize the MSE between the predicted color and the ground truth color, averaged across multiple rays and multiple viewpoints.
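As a concrete illustration of this quadrature, the weights $w_i$ and the color estimate $\hat{C}(\bm{r})$ can be computed from per-bin densities and radiances as follows (a minimal NumPy sketch under our own naming, not the reference NeRF implementation):

```python
import numpy as np

def render_ray(sigmas, deltas, colors):
    """Piecewise-constant rendering: w_i = (1 - exp(-sigma_i * delta_i)) * T_i
    with transmittance T_i = exp(-sum_{j<i} sigma_j * delta_j), then the
    color estimate C_hat = sum_i w_i * c_i."""
    alphas = 1.0 - np.exp(-sigmas * deltas)        # per-bin opacity
    trans = np.exp(-np.cumsum(sigmas * deltas))    # transmittance after bin i
    trans = np.concatenate(([1.0], trans[:-1]))    # shift so that T_1 = 1
    weights = alphas * trans
    return weights, (weights[:, None] * colors).sum(axis=0)
```

The weights telescope, so their sum equals $1 - \exp(-\sum_i \sigma_i \delta_i)$, i.e. the overall pixel opacity.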
31
+
32
+ While the above approximation works in practice, it involves multiple evaluations of $c$ and $\sigma$ along a dense grid. Besides that, a ray typically intersects a solid surface at some point $t \in [t_n, t_f]$. In this case, probability density $p_{\bm{r}}(t)$ will concentrate its mass near $t$ and, as a result, most of the terms in Eq. [\[eq:grid_color_approximation\]](#eq:grid_color_approximation){reference-type="ref" reference="eq:grid_color_approximation"} will make a negligible contribution to the sum. To approach this problem, NeRF employs a hierarchical sampling scheme. Two networks are trained simultaneously: coarse (or proposal) and fine. Firstly, the coarse network is evaluated on a uniform grid of $N_c$ points and a set of weights $w_i$ is calculated as in Eq. [\[eq:nerf_weights\]](#eq:nerf_weights){reference-type="ref" reference="eq:nerf_weights"}. Normalizing these weights produces a piecewise constant PDF along the ray. Then $N_f$ samples are drawn from this distribution and the union of the first and second sets of points is used to evaluate the fine network and compute the final color estimation. The coarse network is also trained to predict ground truth colors, but the color estimate for the coarse network is calculated only using the first set of $N_c$ points.
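The sampling step of this hierarchical scheme, i.e. drawing $N_f$ points from the piecewise constant PDF induced by the normalized weights, can be sketched via inverse transform sampling (a hypothetical helper, not the paper's code):

```python
import numpy as np

def sample_piecewise_const_pdf(bin_edges, weights, n_samples, rng):
    """Inverse transform sampling from the piecewise-constant PDF over
    bins [t_i, t_{i+1}] with mass proportional to weights w_i."""
    pdf = weights / weights.sum()
    cdf = np.concatenate(([0.0], np.cumsum(pdf)))
    u = rng.uniform(size=n_samples)
    # locate the bin containing each u, then interpolate linearly inside it
    idx = np.clip(np.searchsorted(cdf, u, side="right") - 1, 0, len(weights) - 1)
    frac = (u - cdf[idx]) / np.where(pdf[idx] > 0, pdf[idx], 1.0)
    return bin_edges[idx] + frac * (bin_edges[idx + 1] - bin_edges[idx])
```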
33
+
34
+ Monte Carlo methods give a natural way to approximate the expected color. For example, given $k$ i.i.d. samples $t_1, \dots, t_k \sim p_{\bm{r}}(t)$ and the normalizing constant $y_f := \int_{t_n}^{t_f} p_{\bm{r}} (t) \mathrm d t$, the sum $$\begin{equation}
35
+ \label{eq:mc_color_approximation}
36
+ \hat{C}_{MC}(\bm{r}) = \frac{y_f}{k} \sum_{i=1}^k c_{\bm{r}}(t_i)
37
+ \end{equation}$$ is an unbiased estimate of the expected radiance in Eq. [\[eq:expected_color\]](#eq:expected_color){reference-type="ref" reference="eq:expected_color"}. Moreover, samples $t_1, \dots, t_k$ belong to high-density regions of $p_{\bm{r}}$ by design, so for a degenerate density $p_{\bm{r}}$ even a few samples would provide an estimate with low variance. Importantly, unlike the approximation in Eq. [\[eq:grid_color_approximation\]](#eq:grid_color_approximation){reference-type="ref" reference="eq:grid_color_approximation"}, the Monte Carlo estimate depends on scene density $\sigma$ implicitly through the sampling algorithm and requires a custom gradient estimate for the parameters of $\sigma$. We propose a principled end-to-end differentiable algorithm to generate samples from $p_{\bm{r}}(t)$.
38
+
39
+ Our solution is primarily inspired by the reparameterization trick [@kingma2014adam; @rezende2014stochastic]. We change the variable in Eq. [\[eq:expected_color\]](#eq:expected_color){reference-type="ref" reference="eq:expected_color"}. For ${F_{\bm{r}}(t) := 1 - \exp{\left(-\int_{t_n}^{t} \sigma_{\bm{r}}(s) \mathrm d s \right)}}$ and $y := F_{\bm{r}}(t)$ we rewrite $$\begin{align}
40
+ C(\bm{r}) &= \int_{t_n}^{t_f} c_{\bm{r}}(t) p_{\bm{r}}(t) \mathrm d t \label{eq:y_reparameterization_1}\\
41
+ &= \int_{y_n}^{y_f} c_{\bm{r}}(F_{\bm{r}}^{-1}(y)) \mathrm d y \label{eq:y_reparameterization_2}\\
42
+ &= \int_{0}^{1} y_f c_{\bm{r}}(F_{\bm{r}}^{-1}( y_f u)) \mathrm d u. \label{eq:y_reparameterization_3}
43
+ \end{align}$$ The integral boundaries are $y_n := F_{\bm{r}}(t_n) = 0$ and $y_f := F_{\bm{r}}(t_f)$. Function $F_{\bm{r}}(t)$ acts as the cumulative distribution function of the variable $\textnormal{t}$, with the single exception that, in general, $F_{\bm{r}}(t_f) \neq 1$. In volume rendering, $F_{\bm{r}}(t)$ is called the opacity function, with $y_f$ equal to the overall pixel opaqueness. After the first change of variables in Eq. [\[eq:y_reparameterization_2\]](#eq:y_reparameterization_2){reference-type="ref" reference="eq:y_reparameterization_2"}, the integral boundaries depend on opacity $F_{\bm{r}}$ and, as a consequence, on ray density $\sigma_{\bm{r}}$. We further simplify the integral by changing the integration boundaries to $[0,1]$ and substituting $y_n = 0$.
44
+
45
+ Given the above derivation, we construct *the reparameterized Monte Carlo estimate* for the right-hand side integral in Eq. [\[eq:y_reparameterization_3\]](#eq:y_reparameterization_3){reference-type="ref" reference="eq:y_reparameterization_3"} $$\begin{equation}
46
+ \label{eq:reparameterized_mc_color_approximation}
47
+ \hat{C}_{MC}^{R}(\bm{r}) := \frac{y_f}{k} \sum_{i=1}^k c_{\bm{r}}(F_{\bm{r}}^{-1}(y_f u_i)),
48
+ \end{equation}$$ with $k$ i.i.d. $U[0, 1]$ samples $u_1, \dots, u_k$. It is easy to show that the estimate in Eq. [\[eq:reparameterized_mc_color_approximation\]](#eq:reparameterized_mc_color_approximation){reference-type="ref" reference="eq:reparameterized_mc_color_approximation"} is an unbiased estimate of expected color in Eq. [\[eq:expected_color\]](#eq:expected_color){reference-type="ref" reference="eq:expected_color"} and its gradient is an unbiased estimate of the gradient of the expected color $C(\bm{r})$. Additionally, we propose to replace the uniform samples $u_1,\dots,u_k$ with uniform independent samples within regular grid bins $v_i \sim U[\tfrac{i - 1}{k+1}, \tfrac{i}{k + 1}], i=1,\dots,k$. The latter samples yield a stratified variant of the estimate in Eq. [\[eq:reparameterized_mc_color_approximation\]](#eq:reparameterized_mc_color_approximation){reference-type="ref" reference="eq:reparameterized_mc_color_approximation"} and, most of the time, lead to lower variance estimates (see Appendix [8](#sec:toy_exp){reference-type="ref" reference="sec:toy_exp"}).
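The stratified variant replaces the i.i.d. uniforms with one uniform draw per regular grid bin; a minimal sketch (our naming) following the stated distribution $v_i \sim U[\tfrac{i-1}{k+1}, \tfrac{i}{k+1}]$:

```python
import numpy as np

def stratified_unit_samples(k, rng):
    """Stratified samples v_i ~ U[(i-1)/(k+1), i/(k+1)], i = 1..k,
    as used in the stratified variant of the reparameterized estimate."""
    i = np.arange(1, k + 1)
    return (i - 1 + rng.uniform(size=k)) / (k + 1)
```

Because each sample lives in its own bin, the samples come out sorted, which also simplifies the subsequent opacity inversion.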
49
+
50
+ In the above estimate, random samples $u_1, \dots, u_k$ do not depend on volume density $\sigma_{\bm{r}}$ or color $c_{\bm{r}}$. Essentially, for the reparameterized Monte Carlo estimate we generate samples from $p_{\bm{r}}(t)$ using inverse cumulative distribution function $F_{\bm{r}}^{-1}(y_f u)$. In what follows, we coin the term *reparameterized volume sampling (RVS)* for the sampling procedure. However, in practice, we cannot compute $F_{\bm{r}}$ analytically and can only query $\sigma_{\bm{r}}$ at certain ray points. Thus, in the following section, we introduce approximations of $F_{\bm{r}}$ and its inverse.
51
+
52
+ The expected radiance estimate in Eq. [\[eq:reparameterized_mc_color_approximation\]](#eq:reparameterized_mc_color_approximation){reference-type="ref" reference="eq:reparameterized_mc_color_approximation"} relies on opacity $F_{\bm{r}}(t) = 1 - \exp \left(-\int_{t_n}^t \sigma_{\bm{r}}(s) \mathrm d s \right)$ and its inverse $F^{-1}_{\bm{r}}(y)$. We propose to approximate the opacity using a piecewise density field approximation. Fig. [2](#fig:spline_inversion){reference-type="ref" reference="fig:spline_inversion"} illustrates the approximations and ray samples obtained through opacity inversion.
53
+
54
+ <figure id="fig:spline_inversion" data-latex-placement="t">
55
+ <embed src="figures/spline_inversion.pdf" style="width:100.0%" />
56
+ <figcaption>Illustration of opacity inversion. On the left, we approximate density field <span class="math inline"><em>σ</em><sub><strong>r</strong></sub></span> with a piecewise constant and a piecewise linear approximation. On the right, we approximate opacity <span class="math inline"><em>F</em><sub><strong>r</strong></sub>(<em>t</em>)</span> and compute <span class="math inline"><em>F</em><sub><strong>r</strong></sub><sup>−1</sup>(<em>y</em><sub><em>f</em></sub><em>u</em>)</span> for <span class="math inline"><em>u</em> ∼ <em>U</em>[0, 1]</span>.</figcaption>
57
+ </figure>
58
+
59
+ To construct the approximation, we take a grid $t_n = t_0 < t_1 < \dots < t_m = t_f$ and build either a piecewise constant or a piecewise linear approximation. In the former case, we pick a point within each bin, $t_{i} \leq \hat{t}_i \leq t_{i + 1}$, and approximate the density with $\sigma_{\bm{r}}(\hat{t}_i)$ inside the corresponding bin. In the latter case, we compute $\sigma_{\bm{r}}$ at the grid points and interpolate the values between them. Importantly, for a non-negative field these two approximations are also non-negative. Then we compute $\int_{t_n}^{t} \sigma_{\bm{r}}(s) \mathrm d s$, which is a sum of rectangular areas in the piecewise constant case $$\begin{equation}
60
+ I_0(t) = \sum_{j=1}^{i} \sigma_{\bm{r}}(\hat{t}_j) (t_j - t_{j - 1}) + \sigma_{\bm{r}} (\hat{t}_i) (t - t_i).
61
+ \end{equation}$$
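In code, $I_0(t)$ for a single query point $t$ is the sum of the full rectangle areas left of $t$ plus one partial rectangle (a sketch with 0-based bin indexing; names are ours):

```python
import numpy as np

def I0(t, grid, sigma_hat):
    """Piecewise-constant integral approximation of the density up to t:
    full rectangles over the bins entirely left of t, plus the partial
    rectangle in the bin that contains t. sigma_hat[j] is the density
    value picked inside bin [grid[j], grid[j+1]]."""
    i = int(np.clip(np.searchsorted(grid, t, side="right") - 1,
                    0, len(sigma_hat) - 1))
    full = float(np.sum(sigma_hat[:i] * np.diff(grid)[:i]))
    return full + sigma_hat[i] * (t - grid[i])
```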
62
+
63
+ Analogously, the integral approximation $I_1(t)$ in the piecewise linear case is a sum of trapezoidal areas.
64
+
65
+ Given these approximations, we can approximate $y_f$ and $F_{\bm{r}}$ in Eq. [\[eq:reparameterized_mc_color_approximation\]](#eq:reparameterized_mc_color_approximation){reference-type="ref" reference="eq:reparameterized_mc_color_approximation"}. We generate samples on a ray based on inverse opacity $F^{-1}_{\bm{r}}(y)$ by solving the equation $$\begin{equation}
66
+ y_f u = F_{\bm{r}}(t) = 1 - \exp\left( -\int_{t_n}^t \sigma_{\bm{r}}(s) \mathrm d s \right)
67
+ \end{equation}$$ for $t$, where $u \in [0, 1]$ is a random sample. We rewrite the equation as $- \log(1 - y_f u) = \int_{t_n}^t \sigma_{\bm{r}}(s) \mathrm d s$ and note that integral approximations $I_0(t)$ and $I_1(t)$ are monotonic piecewise linear and piecewise quadratic functions. We obtain the inverse by first finding the bin that contains the solution and then solving a linear or a quadratic equation. Crucially, the solution $t$ can be seen as a differentiable function of density field $\sigma_{\bm{r}}$ and we can back-propagate the gradients w.r.t. $\sigma_{\bm{r}}$ through $t$. We provide explicit formulae for $t$ for both approximations in Appendix [7.1](#sec:inverse_explicit){reference-type="ref" reference="sec:inverse_explicit"} and discuss the solutions crucial for the numerical stability in Appendix [7.2](#sec:inverse_stability){reference-type="ref" reference="sec:inverse_stability"}. In Appendix [7.3](#sec:sampling_pseudocode){reference-type="ref" reference="sec:sampling_pseudocode"}, we provide the algorithm implementation and draw parallels with earlier work. Additionally, in Appendix [7.4](#sec:inverse_implicit){reference-type="ref" reference="sec:inverse_implicit"} we discuss an alternative approach to calculating inverse opacity and its gradients. We use piecewise linear approximations in Subsection [5.1](#sec:nerf){reference-type="ref" reference="sec:nerf"} and piecewise constant in Subsection [5.2](#sec:dvgo){reference-type="ref" reference="sec:dvgo"}.
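For the piecewise constant approximation, the inversion described above reduces to a bin search in the monotone piecewise linear function $I_0$ followed by solving a linear equation (an illustrative sketch; the explicit, numerically stable formulae in the appendix are authoritative):

```python
import numpy as np

def invert_opacity_pw_const(u, grid, sigma_hat):
    """Solve y_f * u = 1 - exp(-I_0(t)) for t. With piecewise-constant
    density, I_0 is monotone piecewise linear, so we map the target
    opacity to integrated density via -log1p(-y_f * u), locate the bin,
    and solve a linear equation inside it."""
    widths = np.diff(grid)
    cum = np.concatenate(([0.0], np.cumsum(sigma_hat * widths)))
    y_f = 1.0 - np.exp(-cum[-1])          # total opacity over [t_n, t_f]
    target = -np.log1p(-y_f * u)          # = I_0(t) at the solution
    i = int(np.clip(np.searchsorted(cum, target, side="right") - 1,
                    0, len(sigma_hat) - 1))
    return grid[i] + (target - cum[i]) / sigma_hat[i]
```

Using `log1p` here mirrors the kind of numerical-stability care discussed in the appendix, since $y_f u$ can be close to 1.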
68
+
69
+ Finally, we propose to apply our RVS algorithm to the hierarchical sampling scheme originally proposed in NeRF. Here we do not change the final color approximation, utilizing the original one (Eq. [\[eq:grid_color_approximation\]](#eq:grid_color_approximation){reference-type="ref" reference="eq:grid_color_approximation"}), but modify the way the coarse density network is trained. The method we introduce consists of two changes to the original scheme. Firstly, we replace sampling from piecewise constant PDF along the ray defined by weights $w_i$ (see Section [2](#sec:nerf-recap){reference-type="ref" reference="sec:nerf-recap"}) with our RVS sampling algorithm that uses piecewise linear approximation of $\sigma_{\bm{r}}$ and generates samples from $p_r(t)$ using inverse CDF. Secondly, we remove the auxiliary reconstruction loss imposed on the coarse network. Instead, we propagate gradients through sampling. This way, we eliminate the need for auxiliary coarse network losses and train the network to solve the actual task of our interest: picking the best points for evaluation of the fine network. All components of the model are trained together end-to-end from scratch. In Subsection [5.1](#sec:nerf){reference-type="ref" reference="sec:nerf"}, we refer to the coarse network as the proposal network, since such naming better captures its purpose.
2304.05516/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1,262 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ <mxfile host="app.diagrams.net" modified="2023-04-01T22:04:54.503Z" agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36" etag="R6Tx5OOXAIfl4_tZHvX_" version="21.1.2" type="device">
2
+ <diagram id="Lg7lJIH-VIT71j-iI9BR" name="第 1 页">
3
+ <mxGraphModel dx="797" dy="397" grid="1" gridSize="10" guides="1" tooltips="1" connect="1" arrows="1" fold="1" page="1" pageScale="1" pageWidth="827" pageHeight="1169" math="1" shadow="0">
4
+ <root>
5
+ <mxCell id="0" />
6
+ <mxCell id="1" parent="0" />
7
+ <mxCell id="-y90d5PvFkiMf80TmlVm-71" value="" style="rounded=1;whiteSpace=wrap;html=1;shadow=0;glass=0;sketch=0;fontFamily=Times New Roman;fontSize=14;strokeColor=none;fillColor=#FFF2CC;align=center;verticalAlign=middle;" parent="1" vertex="1">
8
+ <mxGeometry x="113" y="187.5" width="230" height="252.5" as="geometry" />
9
+ </mxCell>
10
+ <mxCell id="-y90d5PvFkiMf80TmlVm-15" value="" style="rounded=1;whiteSpace=wrap;html=1;shadow=0;glass=0;sketch=0;strokeColor=none;fillColor=#D5E8D4;fontSize=14;align=center;verticalAlign=middle;" parent="1" vertex="1">
11
+ <mxGeometry x="350" y="190" width="69" height="250" as="geometry" />
12
+ </mxCell>
13
+ <mxCell id="-y90d5PvFkiMf80TmlVm-1" value="User 1&amp;nbsp;" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontStyle=1;fontSize=14;" parent="1" vertex="1">
14
+ <mxGeometry x="60" y="185" width="60" height="30" as="geometry" />
15
+ </mxCell>
16
+ <mxCell id="-y90d5PvFkiMf80TmlVm-6" value="" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;strokeColor=#FF8000;fontSize=14;align=center;verticalAlign=middle;" parent="1" target="-y90d5PvFkiMf80TmlVm-5" edge="1">
17
+ <mxGeometry relative="1" as="geometry">
18
+ <mxPoint x="103" y="224.969696969697" as="sourcePoint" />
19
+ </mxGeometry>
20
+ </mxCell>
21
+ <mxCell id="-y90d5PvFkiMf80TmlVm-8" value="" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;strokeColor=#FF8000;fontSize=14;align=center;verticalAlign=middle;" parent="1" source="-y90d5PvFkiMf80TmlVm-5" target="-y90d5PvFkiMf80TmlVm-7" edge="1">
22
+ <mxGeometry relative="1" as="geometry" />
23
+ </mxCell>
24
+ <mxCell id="-y90d5PvFkiMf80TmlVm-5" value="Local Model" style="rounded=1;whiteSpace=wrap;html=1;strokeColor=#4D4D4D;fontSize=14;align=center;verticalAlign=middle;" parent="1" vertex="1">
25
+ <mxGeometry x="121" y="202.5" width="49" height="45" as="geometry" />
26
+ </mxCell>
27
+ <mxCell id="-y90d5PvFkiMf80TmlVm-16" value="" style="edgeStyle=orthogonalEdgeStyle;orthogonalLoop=1;jettySize=auto;html=1;strokeColor=#FF8000;rounded=1;strokeWidth=1;exitX=1;exitY=0.25;exitDx=0;exitDy=0;entryX=0.993;entryY=0.413;entryDx=0;entryDy=0;entryPerimeter=0;fontSize=14;align=center;verticalAlign=middle;" parent="1" source="-y90d5PvFkiMf80TmlVm-7" target="-y90d5PvFkiMf80TmlVm-15" edge="1">
28
+ <mxGeometry relative="1" as="geometry">
29
+ <mxPoint x="420" y="300" as="targetPoint" />
30
+ <Array as="points">
31
+ <mxPoint x="400" y="215" />
32
+ <mxPoint x="400" y="293" />
33
+ </Array>
34
+ </mxGeometry>
35
+ </mxCell>
36
+ <mxCell id="-y90d5PvFkiMf80TmlVm-7" value="Rando-mizer" style="rounded=1;whiteSpace=wrap;html=1;strokeColor=#4D4D4D;fontSize=14;align=center;verticalAlign=middle;" parent="1" vertex="1">
37
+ <mxGeometry x="236" y="205" width="55" height="40" as="geometry" />
38
+ </mxCell>
39
+ <mxCell id="-y90d5PvFkiMf80TmlVm-11" value="" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;entryX=0.5;entryY=1;entryDx=0;entryDy=0;strokeColor=#DDA66E;dashed=1;exitX=1;exitY=0.5;exitDx=0;exitDy=0;fontSize=14;align=center;verticalAlign=middle;" parent="1" source="-y90d5PvFkiMf80TmlVm-14" target="-y90d5PvFkiMf80TmlVm-7" edge="1">
40
+ <mxGeometry relative="1" as="geometry">
41
+ <mxPoint x="120" y="264" as="sourcePoint" />
42
+ <Array as="points">
43
+ <mxPoint x="103" y="260" />
44
+ <mxPoint x="264" y="260" />
45
+ <mxPoint x="264" y="245" />
46
+ </Array>
47
+ </mxGeometry>
48
+ </mxCell>
49
+ <mxCell id="-y90d5PvFkiMf80TmlVm-13" value="$$x_1$$" style="rounded=1;whiteSpace=wrap;html=1;shadow=0;fillColor=#FF8000;strokeColor=none;fontSize=14;align=center;verticalAlign=middle;" parent="1" vertex="1">
50
+ <mxGeometry x="83" y="215" width="20" height="20" as="geometry" />
51
+ </mxCell>
52
+ <mxCell id="-y90d5PvFkiMf80TmlVm-14" value="$$\epsilon_1$$" style="rounded=1;whiteSpace=wrap;html=1;shadow=0;fillColor=#FFCC99;strokeColor=none;glass=0;sketch=0;fontSize=14;align=center;verticalAlign=middle;" parent="1" vertex="1">
53
+ <mxGeometry x="83" y="248.5" width="20" height="20" as="geometry" />
54
+ </mxCell>
55
+ <mxCell id="-y90d5PvFkiMf80TmlVm-18" value="$$\tilde{g_1}$$" style="rounded=1;whiteSpace=wrap;html=1;shadow=0;fillColor=#FF8000;strokeColor=none;fontSize=14;align=center;verticalAlign=middle;" parent="1" vertex="1">
56
+ <mxGeometry x="312" y="205" width="20" height="20" as="geometry" />
57
+ </mxCell>
58
+ <mxCell id="-y90d5PvFkiMf80TmlVm-19" value="" style="edgeStyle=orthogonalEdgeStyle;orthogonalLoop=1;jettySize=auto;html=1;strokeColor=#DDA66E;rounded=1;strokeWidth=1;dashed=1;exitX=1;exitY=0.75;exitDx=0;exitDy=0;fontSize=14;align=center;verticalAlign=middle;" parent="1" source="-y90d5PvFkiMf80TmlVm-7" edge="1">
59
+ <mxGeometry relative="1" as="geometry">
60
+ <mxPoint x="290" y="239" as="sourcePoint" />
61
+ <mxPoint x="419" y="370" as="targetPoint" />
62
+ <Array as="points">
63
+ <mxPoint x="360" y="235" />
64
+ <mxPoint x="360" y="370" />
65
+ </Array>
66
+ </mxGeometry>
67
+ </mxCell>
68
+ <mxCell id="-y90d5PvFkiMf80TmlVm-20" value="$$\epsilon_1$$" style="rounded=1;whiteSpace=wrap;html=1;shadow=0;fillColor=#FFCC99;strokeColor=none;glass=0;sketch=0;fontSize=14;align=center;verticalAlign=middle;" parent="1" vertex="1">
69
+ <mxGeometry x="312" y="227.5" width="20" height="20" as="geometry" />
70
+ </mxCell>
71
+ <mxCell id="-y90d5PvFkiMf80TmlVm-21" value="$${g_1}$$" style="rounded=1;whiteSpace=wrap;html=1;shadow=0;fillColor=#FF8000;strokeColor=none;fontSize=14;align=center;verticalAlign=middle;" parent="1" vertex="1">
72
+ <mxGeometry x="194" y="215" width="20" height="20" as="geometry" />
73
+ </mxCell>
74
+ <mxCell id="-y90d5PvFkiMf80TmlVm-22" value="User 2&amp;nbsp;" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontStyle=1;fontSize=14;" parent="1" vertex="1">
75
+ <mxGeometry x="60" y="273.5" width="60" height="30" as="geometry" />
76
+ </mxCell>
77
+ <mxCell id="-y90d5PvFkiMf80TmlVm-23" value="" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;strokeColor=#009900;fontSize=14;align=center;verticalAlign=middle;" parent="1" target="-y90d5PvFkiMf80TmlVm-25" edge="1">
78
+ <mxGeometry relative="1" as="geometry">
79
+ <mxPoint x="103" y="308.469696969697" as="sourcePoint" />
80
+ </mxGeometry>
81
+ </mxCell>
82
+ <mxCell id="-y90d5PvFkiMf80TmlVm-24" value="" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;strokeColor=#009900;fontSize=14;align=center;verticalAlign=middle;" parent="1" source="-y90d5PvFkiMf80TmlVm-25" target="-y90d5PvFkiMf80TmlVm-27" edge="1">
83
+ <mxGeometry relative="1" as="geometry" />
84
+ </mxCell>
85
+ <mxCell id="-y90d5PvFkiMf80TmlVm-25" value="Local Model" style="rounded=1;whiteSpace=wrap;html=1;strokeColor=#4D4D4D;fontSize=14;align=center;verticalAlign=middle;" parent="1" vertex="1">
86
+ <mxGeometry x="121" y="286" width="49" height="45" as="geometry" />
87
+ </mxCell>
88
+ <mxCell id="-y90d5PvFkiMf80TmlVm-26" value="" style="edgeStyle=orthogonalEdgeStyle;orthogonalLoop=1;jettySize=auto;html=1;strokeColor=#009900;rounded=1;strokeWidth=1;exitX=1;exitY=0.25;exitDx=0;exitDy=0;fontSize=14;align=center;verticalAlign=middle;" parent="1" source="-y90d5PvFkiMf80TmlVm-27" edge="1">
89
+ <mxGeometry relative="1" as="geometry">
90
+ <mxPoint x="419" y="260" as="targetPoint" />
91
+ <Array as="points">
92
+ <mxPoint x="390" y="298" />
93
+ <mxPoint x="390" y="260" />
94
+ </Array>
95
+ </mxGeometry>
96
+ </mxCell>
97
+ <mxCell id="-y90d5PvFkiMf80TmlVm-27" value="Rando-mizer" style="rounded=1;whiteSpace=wrap;html=1;strokeColor=#4D4D4D;fontSize=14;align=center;verticalAlign=middle;" parent="1" vertex="1">
98
+ <mxGeometry x="236" y="288.5" width="55" height="40" as="geometry" />
99
+ </mxCell>
100
+ <mxCell id="-y90d5PvFkiMf80TmlVm-28" value="" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;entryX=0.5;entryY=1;entryDx=0;entryDy=0;strokeColor=#819D72;dashed=1;exitX=1;exitY=0.5;exitDx=0;exitDy=0;fontSize=14;align=center;verticalAlign=middle;" parent="1" source="-y90d5PvFkiMf80TmlVm-30" target="-y90d5PvFkiMf80TmlVm-27" edge="1">
101
+ <mxGeometry relative="1" as="geometry">
102
+ <mxPoint x="120" y="347.5" as="sourcePoint" />
103
+ <Array as="points">
104
+ <mxPoint x="103" y="344" />
105
+ <mxPoint x="264" y="344" />
106
+ <mxPoint x="264" y="328" />
107
+ </Array>
108
+ </mxGeometry>
109
+ </mxCell>
110
+ <mxCell id="-y90d5PvFkiMf80TmlVm-29" value="$$x_2$$" style="rounded=1;whiteSpace=wrap;html=1;shadow=0;fillColor=#009900;strokeColor=none;align=center;verticalAlign=middle;fontSize=14;" parent="1" vertex="1">
111
+ <mxGeometry x="83" y="298.5" width="20" height="20" as="geometry" />
112
+ </mxCell>
113
+ <mxCell id="-y90d5PvFkiMf80TmlVm-30" value="$$\epsilon_2$$" style="rounded=1;whiteSpace=wrap;html=1;shadow=0;fillColor=#B9E0A5;strokeColor=none;glass=0;sketch=0;fontSize=14;align=center;verticalAlign=middle;" parent="1" vertex="1">
114
+ <mxGeometry x="83" y="333" width="20" height="20" as="geometry" />
115
+ </mxCell>
116
+ <mxCell id="-y90d5PvFkiMf80TmlVm-31" value="$$\tilde{g_2}$$" style="rounded=1;whiteSpace=wrap;html=1;shadow=0;fillColor=#009900;strokeColor=none;fontSize=14;align=center;verticalAlign=middle;" parent="1" vertex="1">
117
+ <mxGeometry x="312" y="288.5" width="20" height="20" as="geometry" />
118
+ </mxCell>
119
+ <mxCell id="-y90d5PvFkiMf80TmlVm-32" value="" style="edgeStyle=orthogonalEdgeStyle;orthogonalLoop=1;jettySize=auto;html=1;strokeColor=#819D72;rounded=1;strokeWidth=1;dashed=1;exitX=1;exitY=0.75;exitDx=0;exitDy=0;fontSize=14;align=center;verticalAlign=middle;" parent="1" source="-y90d5PvFkiMf80TmlVm-27" edge="1">
120
+ <mxGeometry relative="1" as="geometry">
121
+ <mxPoint x="295" y="322.5" as="sourcePoint" />
122
+ <mxPoint x="418" y="400" as="targetPoint" />
123
+ <Array as="points">
124
+ <mxPoint x="291" y="320" />
125
+ <mxPoint x="370" y="320" />
126
+ <mxPoint x="370" y="400" />
127
+ </Array>
128
+ </mxGeometry>
129
+ </mxCell>
130
+ <mxCell id="-y90d5PvFkiMf80TmlVm-33" value="$$\epsilon_2$$" style="rounded=1;whiteSpace=wrap;html=1;shadow=0;fillColor=#B9E0A5;strokeColor=none;glass=0;sketch=0;fontSize=14;align=center;verticalAlign=middle;" parent="1" vertex="1">
131
+ <mxGeometry x="312" y="311" width="20" height="20" as="geometry" />
132
+ </mxCell>
133
+ <mxCell id="-y90d5PvFkiMf80TmlVm-34" value="$${g_2}$$" style="rounded=1;whiteSpace=wrap;html=1;shadow=0;fillColor=#009900;strokeColor=none;fontSize=14;align=center;verticalAlign=middle;" parent="1" vertex="1">
134
+ <mxGeometry x="194" y="298.5" width="20" height="20" as="geometry" />
135
+ </mxCell>
136
+ <mxCell id="-y90d5PvFkiMf80TmlVm-35" value="" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;strokeColor=#0066CC;fontSize=14;align=center;verticalAlign=middle;" parent="1" target="-y90d5PvFkiMf80TmlVm-37" edge="1">
137
+ <mxGeometry relative="1" as="geometry">
138
+ <mxPoint x="103" y="394.469696969697" as="sourcePoint" />
139
+ </mxGeometry>
140
+ </mxCell>
141
+ <mxCell id="-y90d5PvFkiMf80TmlVm-36" value="" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;strokeColor=#0066CC;fontSize=14;align=center;verticalAlign=middle;" parent="1" source="-y90d5PvFkiMf80TmlVm-37" target="-y90d5PvFkiMf80TmlVm-39" edge="1">
142
+ <mxGeometry relative="1" as="geometry" />
143
+ </mxCell>
144
+ <mxCell id="-y90d5PvFkiMf80TmlVm-37" value="Local Model" style="rounded=1;whiteSpace=wrap;html=1;strokeColor=#4D4D4D;fontSize=14;align=center;verticalAlign=middle;" parent="1" vertex="1">
145
+ <mxGeometry x="121" y="372" width="49" height="45" as="geometry" />
146
+ </mxCell>
147
+ <mxCell id="-y90d5PvFkiMf80TmlVm-38" value="" style="edgeStyle=orthogonalEdgeStyle;orthogonalLoop=1;jettySize=auto;html=1;strokeColor=#0066CC;rounded=1;strokeWidth=1;exitX=1;exitY=0.25;exitDx=0;exitDy=0;fontSize=14;align=center;verticalAlign=middle;" parent="1" source="-y90d5PvFkiMf80TmlVm-39" edge="1">
148
+ <mxGeometry relative="1" as="geometry">
149
+ <mxPoint x="420" y="230" as="targetPoint" />
150
+ <Array as="points">
151
+ <mxPoint x="380" y="384" />
152
+ <mxPoint x="380" y="230" />
153
+ </Array>
154
+ </mxGeometry>
155
+ </mxCell>
156
+ <mxCell id="-y90d5PvFkiMf80TmlVm-39" value="Rando-mizer" style="rounded=1;whiteSpace=wrap;html=1;strokeColor=#4D4D4D;fontSize=14;align=center;verticalAlign=middle;" parent="1" vertex="1">
157
+ <mxGeometry x="236" y="374.5" width="55" height="40" as="geometry" />
158
+ </mxCell>
159
+ <mxCell id="-y90d5PvFkiMf80TmlVm-40" value="" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;entryX=0.5;entryY=1;entryDx=0;entryDy=0;strokeColor=#779FC7;dashed=1;exitX=1;exitY=0.5;exitDx=0;exitDy=0;fontSize=14;align=center;verticalAlign=middle;" parent="1" source="-y90d5PvFkiMf80TmlVm-42" target="-y90d5PvFkiMf80TmlVm-39" edge="1">
160
+ <mxGeometry relative="1" as="geometry">
161
+ <mxPoint x="120" y="433.5" as="sourcePoint" />
162
+ <Array as="points">
163
+ <mxPoint x="103" y="430" />
164
+ <mxPoint x="264" y="430" />
165
+ <mxPoint x="264" y="414" />
166
+ </Array>
167
+ </mxGeometry>
168
+ </mxCell>
169
+ <mxCell id="-y90d5PvFkiMf80TmlVm-41" value="$$x_3$$" style="rounded=1;whiteSpace=wrap;html=1;shadow=0;fillColor=#0066CC;strokeColor=none;fontSize=14;align=center;verticalAlign=middle;" parent="1" vertex="1">
170
+ <mxGeometry x="83" y="384.5" width="20" height="20" as="geometry" />
171
+ </mxCell>
172
+ <mxCell id="-y90d5PvFkiMf80TmlVm-42" value="$$\epsilon_3$$" style="rounded=1;whiteSpace=wrap;html=1;shadow=0;fillColor=#90C0F0;strokeColor=none;glass=0;sketch=0;fontSize=14;align=center;verticalAlign=middle;" parent="1" vertex="1">
173
+ <mxGeometry x="83" y="419" width="20" height="20" as="geometry" />
174
+ </mxCell>
175
+ <mxCell id="-y90d5PvFkiMf80TmlVm-43" value="$$\tilde{g_3}$$" style="rounded=1;whiteSpace=wrap;html=1;shadow=0;fillColor=#0066CC;strokeColor=none;fontSize=14;align=center;verticalAlign=middle;" parent="1" vertex="1">
176
+ <mxGeometry x="312" y="374.5" width="20" height="20" as="geometry" />
177
+ </mxCell>
178
+ <mxCell id="-y90d5PvFkiMf80TmlVm-44" value="" style="edgeStyle=orthogonalEdgeStyle;orthogonalLoop=1;jettySize=auto;html=1;strokeColor=#779FC7;rounded=1;strokeWidth=1;dashed=1;exitX=1;exitY=0.75;exitDx=0;exitDy=0;fontSize=14;align=center;verticalAlign=middle;" parent="1" source="-y90d5PvFkiMf80TmlVm-39" edge="1">
179
+ <mxGeometry relative="1" as="geometry">
180
+ <mxPoint x="295" y="408.5" as="sourcePoint" />
181
+ <mxPoint x="420" y="340" as="targetPoint" />
182
+ <Array as="points">
183
+ <mxPoint x="390" y="404" />
184
+ <mxPoint x="390" y="340" />
185
+ </Array>
186
+ </mxGeometry>
187
+ </mxCell>
188
+ <mxCell id="-y90d5PvFkiMf80TmlVm-45" value="$$\epsilon_3$$" style="rounded=1;whiteSpace=wrap;html=1;shadow=0;fillColor=#90C0F0;strokeColor=none;glass=0;sketch=0;fontSize=14;align=center;verticalAlign=middle;" parent="1" vertex="1">
189
+ <mxGeometry x="312" y="397" width="20" height="20" as="geometry" />
190
+ </mxCell>
191
+ <mxCell id="-y90d5PvFkiMf80TmlVm-46" value="$${g_3}$$" style="rounded=1;whiteSpace=wrap;html=1;shadow=0;fillColor=#0066CC;strokeColor=none;fontSize=14;align=center;verticalAlign=middle;" parent="1" vertex="1">
192
+ <mxGeometry x="194" y="384.5" width="20" height="20" as="geometry" />
193
+ </mxCell>
194
+ <mxCell id="-y90d5PvFkiMf80TmlVm-57" style="edgeStyle=orthogonalEdgeStyle;rounded=1;orthogonalLoop=1;jettySize=auto;html=1;strokeColor=#C40000;strokeWidth=1;dashed=1;dashPattern=1 1;fontSize=14;align=center;verticalAlign=middle;" parent="1" edge="1">
195
+ <mxGeometry relative="1" as="geometry">
196
+ <mxPoint x="170" y="240" as="targetPoint" />
197
+ <Array as="points">
198
+ <mxPoint x="500" y="450" />
199
+ <mxPoint x="190" y="450" />
200
+ <mxPoint x="190" y="240" />
201
+ </Array>
202
+ <mxPoint x="500" y="440" as="sourcePoint" />
203
+ </mxGeometry>
204
+ </mxCell>
205
+ <mxCell id="-y90d5PvFkiMf80TmlVm-47" value="$$-\hat{E[\eta]}&lt;br style=&quot;font-size: 14px;&quot;&gt;$$&lt;br style=&quot;font-size: 14px;&quot;&gt;&lt;br style=&quot;font-size: 14px;&quot;&gt;" style="rounded=1;whiteSpace=wrap;html=1;shadow=0;glass=0;sketch=0;strokeColor=none;fillColor=#D4E1F5;gradientColor=none;fontSize=14;align=center;verticalAlign=middle;" parent="1" vertex="1">
206
+ <mxGeometry x="465" y="190" width="70" height="250" as="geometry" />
207
+ </mxCell>
208
+ <mxCell id="-y90d5PvFkiMf80TmlVm-52" value="" style="html=1;shadow=0;dashed=0;align=center;verticalAlign=middle;shape=mxgraph.arrows2.arrow;dy=0.67;dx=20;notch=0;rounded=1;glass=0;sketch=0;strokeColor=none;fillColor=#999999;fontSize=14;" parent="1" vertex="1">
209
+ <mxGeometry x="422" y="245" width="41" height="30" as="geometry" />
210
+ </mxCell>
211
+ <mxCell id="-y90d5PvFkiMf80TmlVm-53" value="$$\{\tilde{g_i}\}$$" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;shadow=0;glass=0;sketch=0;fontSize=15;" parent="1" vertex="1">
212
+ <mxGeometry x="430" y="225" width="21" height="17.5" as="geometry" />
213
+ </mxCell>
214
+ <mxCell id="-y90d5PvFkiMf80TmlVm-54" value="" style="html=1;shadow=0;dashed=0;align=center;verticalAlign=middle;shape=mxgraph.arrows2.arrow;dy=0.67;dx=20;notch=0;rounded=1;glass=0;sketch=0;strokeColor=none;fillColor=#D9D9D9;fontSize=14;" parent="1" vertex="1">
215
+ <mxGeometry x="422" y="352" width="41" height="30" as="geometry" />
216
+ </mxCell>
217
+ <mxCell id="-y90d5PvFkiMf80TmlVm-55" value="$$\{\tilde{\epsilon_i}\}$$" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;shadow=0;glass=0;sketch=0;fontSize=15;" parent="1" vertex="1">
218
+ <mxGeometry x="429" y="334.25" width="21" height="17.5" as="geometry" />
219
+ </mxCell>
220
+ <mxCell id="-y90d5PvFkiMf80TmlVm-60" value="" style="endArrow=classic;html=1;rounded=1;dashed=1;dashPattern=1 1;strokeColor=#C40000;strokeWidth=1;entryX=1;entryY=1;entryDx=0;entryDy=0;fontSize=14;align=center;verticalAlign=middle;" parent="1" edge="1">
221
+ <mxGeometry width="50" height="50" relative="1" as="geometry">
222
+ <mxPoint x="190" y="321" as="sourcePoint" />
223
+ <mxPoint x="170" y="322" as="targetPoint" />
224
+ </mxGeometry>
225
+ </mxCell>
226
+ <mxCell id="-y90d5PvFkiMf80TmlVm-61" value="" style="endArrow=classic;html=1;rounded=1;dashed=1;dashPattern=1 1;strokeColor=#C40000;strokeWidth=1;entryX=1;entryY=1;entryDx=0;entryDy=0;fontSize=14;align=center;verticalAlign=middle;" parent="1" edge="1">
227
+ <mxGeometry width="50" height="50" relative="1" as="geometry">
228
+ <mxPoint x="190" y="409.5" as="sourcePoint" />
229
+ <mxPoint x="170" y="410.4999999999999" as="targetPoint" />
230
+ </mxGeometry>
231
+ </mxCell>
232
+ <mxCell id="-y90d5PvFkiMf80TmlVm-62" value="$$\hat{g}_{global}$$" style="rounded=1;whiteSpace=wrap;html=1;shadow=0;fillColor=#F55B5B;strokeColor=none;fontSize=14;align=center;verticalAlign=middle;" parent="1" vertex="1">
233
+ <mxGeometry x="421" y="440" width="40" height="20" as="geometry" />
234
+ </mxCell>
235
+ <mxCell id="-y90d5PvFkiMf80TmlVm-65" value="User 3" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontStyle=1;fontSize=14;" parent="1" vertex="1">
236
+ <mxGeometry x="58" y="360" width="60" height="30" as="geometry" />
237
+ </mxCell>
238
+ <mxCell id="-y90d5PvFkiMf80TmlVm-66" value="Local Process" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontStyle=1;fontFamily=Times New Roman;fontSize=17;" parent="1" vertex="1">
239
+ <mxGeometry x="168" y="155" width="110" height="30" as="geometry" />
240
+ </mxCell>
241
+ <mxCell id="-y90d5PvFkiMf80TmlVm-67" value="Shuffler" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontStyle=1;fontFamily=Times New Roman;fontSize=17;" parent="1" vertex="1">
242
+ <mxGeometry x="340" y="155" width="90" height="30" as="geometry" />
243
+ </mxCell>
244
+ <mxCell id="-y90d5PvFkiMf80TmlVm-68" value="Analyzer" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontStyle=1;fontFamily=Times New Roman;fontSize=17;" parent="1" vertex="1">
245
+ <mxGeometry x="455" y="155" width="90" height="30" as="geometry" />
246
+ </mxCell>
247
+ <mxCell id="aras4iEDxo8HkMEQ8ZmZ-1" value="Global Model" style="rounded=1;whiteSpace=wrap;html=1;strokeColor=#4D4D4D;fontSize=14;align=center;verticalAlign=middle;" parent="1" vertex="1">
248
+ <mxGeometry x="476.5" y="360" width="49" height="45" as="geometry" />
249
+ </mxCell>
250
+ <mxCell id="aras4iEDxo8HkMEQ8ZmZ-2" value="" style="endArrow=classic;html=1;rounded=0;fontSize=14;align=center;verticalAlign=middle;" parent="1" edge="1">
251
+ <mxGeometry width="50" height="50" relative="1" as="geometry">
252
+ <mxPoint x="500.81000000000006" y="318.5" as="sourcePoint" />
253
+ <mxPoint x="500.81000000000006" y="338.5" as="targetPoint" />
254
+ </mxGeometry>
255
+ </mxCell>
256
+ <mxCell id="aras4iEDxo8HkMEQ8ZmZ-3" value="&lt;span style=&quot;font-size: 14px;&quot;&gt;$$\frac{1}{n}\sum \tilde{g}_i$$&lt;/span&gt;" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=14;" parent="1" vertex="1">
257
+ <mxGeometry x="465" y="247.5" width="60" height="30" as="geometry" />
258
+ </mxCell>
259
+ </root>
260
+ </mxGraphModel>
261
+ </diagram>
262
+ </mxfile>
2304.05516/main_diagram/main_diagram.pdf ADDED
Binary file (54 kB).
 
2304.05516/paper_text/intro_method.md ADDED
@@ -0,0 +1,236 @@
1
+ # Introduction
2
+
3
+ Federated Learning (FL) (McMahan et al. 2017) is an emerging machine learning paradigm that allows multiple clients to train a global model collaboratively while keeping each client's private raw data local. Although private data are not shared directly, recent works indicate that FL by itself is insufficient to preserve the privacy of users' data. By observing the global model or intermediate parameters during the training process, adversaries can infer the membership of users or even reconstruct training records (Fredrikson, Jha, and Ristenpart 2015; Zhu, Liu, and Han 2019; Nasr, Shokri,
4
+
5
+ | Methods | Personalization | FL Process (Local) | FL Process (Central) |
+ |-------------|-----------------|--------------|--------------|
+ | PLDP | ✓ | ✓ | Weak |
+ | Uni-Shuffle | ✗ | ✓ | ✓ |
+ | APES | ✓ | ✓ | ✓ |
+ | S-APES | ✓ | ✓ | Strong |
12
+
13
+ Table 1: Comparison of related work. ✓ denotes protected, ✗ denotes unprotected.
14
+
15
+ and Houmansadr 2019; Xiong et al. 2021). These attacks can lead to severe data leakage; hence it is necessary to provide additional protection with strict privacy guarantees for both the global model and the local parameters. Moreover, in practice, different local privacy levels may be desired depending on users' privacy preferences. A one-size-fits-all approach would either downgrade the model utility or sacrifice privacy protection for certain users. Thus, an open problem in FL is how to provide strong central privacy as well as personalized local privacy while maintaining model utility.
16
+
17
+ Several recent works have attempted to address this problem. Personalized Local Differential Privacy (PLDP) protects both local gradients and the global model by perturbing gradients with heterogeneous parameters (Chen et al. 2016; Li et al. 2020). The central privacy of the global model is then only as strong as the weakest local privacy. For achieving both strong central and local privacy, a potential solution is the shuffle model (Bittau et al. 2017). It amplifies central privacy by permuting data points randomly after local perturbation. However, existing studies on the shuffle model focus only on scenarios where local privacy requirements are assumed uniform (Uni-Shuffle for short) (Erlingsson et al. 2019; Balle et al. 2019; Girgis et al. 2021; Feldman, McMillan, and Talwar 2022). To the best of our knowledge, there is no work that provides both strong central privacy for the global model and personalized local privacy guarantees, while achieving strong utility of the global model (cf. Tab. 1).
18
+
19
+ To narrow this gap, we propose **APES**, a privacy **A**mplification framework for **PE**rsonalized private federated learning with **S**huffle model (cf. Fig. 1). APES gains a strong privacy amplification effect. Unlike previous works that just
20
+
21
+ <sup>\*</sup>Corresponding author: Hong Chen, chong@ruc.edu.cn Copyright © 2023, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
22
+
23
+ permute data, APES randomly shuffles both data points and privacy parameters. The Clip-Laplace Mechanism is also introduced to implement the framework without damaging model utility. To mitigate the privacy-loss explosion caused by high dimensionality, we propose **S-APES**, which improves **APES** with post-**S**parsification. The basic idea is to select only the informative dimensions of gradients after perturbation and pad the rest, which saves privacy cost.
24
+
25
+ To bound the privacy of APES and S-APES, we carefully quantify the obfuscation effects contributed by users with heterogeneous privacy parameters. First, inspired by Feldman, McMillan, and Talwar, the central privacy of a specific user is boosted by the rest of the users, who generate "echoes" of her with heterogeneous probabilities; next, to measure these probabilities, we propose Neighbor Divergence and the Clip-Laplace Mechanism for a limited output range and bounded divergence among the distinct output distributions of users' local randomizers; then the "echoes" are transformed into a certain form, and a tight privacy bound is derived.
26
+
27
+ Our main contributions are summarized as follows:
28
+
29
+ - (i) We propose privacy amplification frameworks via the shuffle model for personalized private federated learning. APES strikes a better balance between central privacy and model utility with Neighbor Divergence and the Clip-Laplace Mechanism. Based on it, the improved S-APES enhances privacy in high-dimensional scenarios.
30
+ - (ii) We provide theoretical analysis for both privacy and convergence bound of the proposed frameworks. To the best of our knowledge, the shuffling effect on personalized local differential privacy is considered for the first time and a strong privacy amplification effect is yielded. The central privacy bound is tighter than the bound derived by naïvely adopting existing methods for unified privacy.
31
+ - (iii) Comprehensive experiments are conducted to confirm that APES and S-APES achieve comparable or higher accuracy for the global model with stronger central privacy compared to the state-of-the-art methods without downgrading personalized local privacy guarantee.
32
+
33
+ # Method
34
+
35
+ In this section, we introduce the privacy definitions, the shuffle model and several properties of differential privacy, which serve as preliminaries for the proposed methods.
36
+
37
+ Differential privacy (DP) (Dwork, Roth et al. 2014) is a *de facto* standard that is widely accepted to preserve privacy in FL. The notion is typically built up in a central setting where a trusted server can access the raw data. Local differential privacy (LDP) (Erlingsson, Pihur, and Korolova 2014), on the other hand, offers users a stronger privacy guarantee for the settings without assumption of trusted server.
38
+
39
+ **Definition 1 (Differential Privacy)** For any $\epsilon, \delta \geq 0$, a randomized algorithm $M: \mathcal{D} \to \mathcal{Z}$ is $(\epsilon, \delta)$-differentially private if for any neighboring datasets $D, D' \in \mathcal{D}$ and any subset $S \subseteq \mathcal{Z}$:
40
+
41
+ $$\Pr[M(D) \in S] \le e^{\epsilon} \Pr[M(D') \in S] + \delta$$
42
+
43
44
+
45
+ Figure 1: Procedure of APES. Gradients $g_i$ trained on user data $x_i$ are randomized locally; then the privacy parameters $\epsilon_i$ and the perturbed gradients $\tilde{g}_i$ are shuffled separately. The analyzer acts as the curator to aggregate and calibrate the gradients $\tilde{g}_i$ for the global model.
46
+
47
+ **Definition 2 (Local Differential Privacy)** For any $\epsilon, \delta \geq 0$, an algorithm $M: \mathcal{D} \to \mathcal{Z}$ is $(\epsilon, \delta)$-locally differentially private if $\forall g, g' \in \mathcal{D}$ and $\forall z \in \mathcal{Z}$,
48
+
49
+ $$\Pr[M(g) = z] \le e^{\epsilon} \Pr[M(g') = z] + \delta$$
50
+
51
+ The shuffle model (Bittau et al. 2017) was proposed to strengthen central privacy while preserving local user privacy. Given n datapoints as the dataset $D = \{g_1, g_2, ..., g_n\}$, each $g_i \in D$ owned by user i is perturbed locally by a randomizer $M: \mathcal{D} \to \mathcal{Z}$ to ensure $(\epsilon^l, \delta^l)$-LDP before being sent to the shuffler. The shuffler, a trusted third party, permutes all the datapoints by algorithm $S: \mathcal{Z} \to \mathcal{Z}$ and releases them to the analyzer. The untrusted analyzer aggregates all the datapoints. The process $P = S \circ M$ satisfies at least $(\epsilon^l, \delta^l)$-DP against the analyzer (cf. Lemma 1). Recent works (Erlingsson et al. 2019; Balle et al. 2019; Girgis et al. 2021; Feldman, McMillan, and Talwar 2022) achieve a much stronger central privacy guarantee, which is known as the privacy amplification effect of shuffling. Among existing works, Feldman, McMillan, and Talwar provide a tight privacy upper bound for single-message summation. Taking neighboring datasets D and D' that differ only at $g_1$ (or $g'_1$), any perturbed datapoint $\tilde{g}_i$ can be regarded as a sample from the distribution of the specific perturbed point $\tilde{g}_1$ or $\tilde{g}'_1$ with probability $\exp(-\epsilon^l)$. From this observation, the privacy bound is derived.
52
+
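The pipeline $P = S \circ M$ above can be sketched in a few lines of Python. This is a toy illustration, not the paper's implementation: the local randomizer here is plain (unclipped) Laplace noise, and names such as `shuffle_model_round` are ours.

```python
import random

def local_randomize(g, eps, sensitivity=1.0, rng=random):
    """Local randomizer M: add Laplace(0, sensitivity/eps) noise for eps-LDP."""
    scale = sensitivity / eps
    # a Laplace sample is the difference of two i.i.d. exponential samples
    return g + scale * (rng.expovariate(1.0) - rng.expovariate(1.0))

def shuffle_model_round(gradients, eps, rng):
    """P = S o M: perturb each report locally, then permute before release."""
    perturbed = [local_randomize(g, eps, rng=rng) for g in gradients]
    rng.shuffle(perturbed)  # S: the trusted shuffler hides who sent what
    # the untrusted analyzer only sees the shuffled multiset
    return sum(perturbed) / len(perturbed)
```

Shuffling leaves the analyzer's average unchanged; the amplification comes purely from hiding which user produced which report.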
53
+ As a general technique to implement DP, Laplace Mechanism (Dwork, Roth et al. 2014) perturbs numerical values.
54
+
55
+ **Definition 3 (Laplace Mechanism)** Given any function $f: \mathcal{D} \to \mathcal{Z}^d$ and neighboring datasets D and D', let $\Delta f = \max ||f(D) - f(D')||_1$ be the sensitivity. The Laplace mechanism $M(D) = f(D) + Y^d$ satisfies $\epsilon$-DP, where $Y^d$ is a vector of random variables drawn i.i.d. from the distribution $Lap(0, \frac{\Delta f}{\epsilon})$.
56
+
57
+ Composition theorems provide tight bounds for the algorithm combined with several DP blocks.
58
+
59
+ **Lemma 1 (Parallel Composition)** (Yu et al. 2019) Given an $(\epsilon_i, \delta_i)$-DP algorithm $M_i : \mathcal{D} \to \mathcal{Z}$ for $i \in [m]$, a class of $\{M_i\}_{i \in [m]}$ on disjoint subsets of D is $(\max \epsilon_i, \max \delta_i)$-DP.
+
+ **Lemma 2 (Advanced Composition)** (Dwork, Roth et al. 2014) Given an $(\epsilon, \delta)$-DP algorithm $M_i : \mathcal{D} \to \mathcal{Z}$ for $i \in [m]$, the sequence of $\{M_i\}_{i \in [m]}$ on the same dataset D under m-fold composition is $(\epsilon', \delta' + m\delta)$-DP, where $\epsilon' = \epsilon \sqrt{2m \log (1/\delta')} + m\epsilon(e^\epsilon - 1)$.
60
+
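As a quick numeric illustration of Lemmas 1 and 2 (the helper names are ours, not from the paper):

```python
import math

def parallel_composition(eps_list, delta_list):
    """Lemma 1: mechanisms on disjoint subsets cost only the largest budget."""
    return max(eps_list), max(delta_list)

def advanced_composition(eps, delta, m, delta_prime):
    """Lemma 2: total budget of m-fold composition of (eps, delta)-DP steps."""
    eps_total = (eps * math.sqrt(2 * m * math.log(1 / delta_prime))
                 + m * eps * (math.exp(eps) - 1))
    return eps_total, delta_prime + m * delta
```

Note that the advanced-composition budget grows roughly like $\sqrt{m}$ for small $\epsilon$, which is why limiting the number of released dimensions (as S-APES does) saves privacy cost.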
61
+ No matter what dataset or query is adopted, any privacy mechanism can be reduced to a basic randomized response with the same privacy level (Kairouz, Oh, and Viswanath 2015).
+
+ **Lemma 3 (Degraded Privacy)** For any $\epsilon$-DP mechanism M, for $X: \{x, \bar{x}\}$, $\exists \tilde{M}$ that dominates M, where:
62
+
63
+ $$\Pr[\tilde{M}(x) = z] = \left\{ \begin{array}{ll} \frac{e^{\epsilon}}{1 + e^{\epsilon}}, & z = x \\ \frac{1}{1 + e^{\epsilon}}, & z = \bar{x} \end{array} \right.$$
64
+
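Lemma 3's dominating mechanism is the classic binary randomized response. A minimal sketch over the domain $\{0, 1\}$ (the function name is ours):

```python
import math
import random

def degraded_response(x, eps, rng=random):
    """Report x truthfully w.p. e^eps / (1 + e^eps), otherwise flip it."""
    p_true = math.exp(eps) / (1.0 + math.exp(eps))
    return x if rng.random() < p_true else 1 - x  # x_bar = 1 - x on {0, 1}
```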
65
+ This section illustrates our methods for achieving a strong privacy amplification effect. We first introduce the Clip-Laplace Mechanism to implement the effect. Then two frameworks are proposed: APES is a general framework which shuffles both privacy parameters and gradients; the improved S-APES sparsifies dimensions without degrading the shuffling effect.
66
+
67
+ To make bounding privacy while maintaining model accuracy possible, it is necessary to introduce a mechanism for LDP with a finite and fixed output range. Existing works on this task provide non-fixed output ranges (Geng et al. 2018), or increase the noise scale when the ranges of input and output do not overlap (Holohan et al. 2018; Croft, Sack, and Shi 2022). To address this issue, we introduce a variant of the Laplace Mechanism, *Clip-Laplace*, which provides $\epsilon$-DP for continuous real values with the same finite output range.
68
+
69
+ **Definition 4 (Clip-Laplace Mechanism)** Given any function $f: \mathcal{X} \to \mathcal{Y}^d$ and sensitivity $\Delta f = \max ||f(X) - f(X')||_1$ for any neighboring datasets X and X', the Clip-Laplace Mechanism is $M: \mathcal{Y}^d \to \mathcal{Z}^d$. Each component of $Z \in \mathcal{Z}^d$ is drawn i.i.d. from the distribution $CLap(f(x), \lambda, A)$, whose probability density function is defined as follows:
70
+
71
+ $$p(z) = \left\{ \begin{array}{ll} \frac{1}{2\lambda S} \exp{\left(-\frac{|z-f(x)|}{\lambda}\right)}, & -A \leq z \leq A \\ 0, & otherwise \end{array} \right.$$
72
+
73
+ where normalization factor $S=1-\frac{1}{2}\exp(\frac{-A+f(x)}{\lambda})-\frac{1}{2}\exp(\frac{-A-f(x)}{\lambda})$ and $A\geq \Delta f/2$ .
74
+
75
+ **Theorem 1** The Clip-Laplace mechanism preserves $\epsilon$-LDP when $f(x) \in [-\Delta f/2, \Delta f/2]$ and $\lambda = \Delta f/\epsilon$.
76
+
77
+ The proof is provided in Appendix A.
78
+
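One way to sample from $CLap(f(x), \lambda, A)$ is inverse-CDF sampling of a Laplace distribution restricted to $[-A, A]$; this is a sketch under that assumption (the paper does not specify a sampling procedure, and the function name is ours):

```python
import math
import random

def clap_sample(fx, lam, A, rng=random):
    """Draw from CLap(fx, lam, A): Laplace(fx, lam) renormalized on [-A, A].

    Inverse-CDF method: pick u ~ U(0, 1), map it into the CDF mass that the
    Laplace law places on [-A, A], then invert the Laplace CDF.
    """
    def laplace_cdf(z):
        if z < fx:
            return 0.5 * math.exp((z - fx) / lam)
        return 1.0 - 0.5 * math.exp(-(z - fx) / lam)

    lo, hi = laplace_cdf(-A), laplace_cdf(A)
    v = lo + rng.random() * (hi - lo)
    if v < 0.5:
        return fx + lam * math.log(2.0 * v)
    return fx - lam * math.log(2.0 * (1.0 - v))
```

Every output lies in $[-A, A]$, giving the finite, input-independent output range that the definition requires.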
79
+ *Discussion.* (i) When achieving the same level of $\epsilon$-LDP, the variance of Clip-Laplacian outputs is smaller than that of classic Laplacian outputs. This property rests on the assumption of a symmetric, limited range of inputs (cf. Theorem 1), which is reasonable for many fields such as gradient aggregation, location statistics and financial analysis. (ii) The Clip-Laplacian outputs are biased. A feasible correction is to calibrate the outputs with the expectation, which can be estimated when the privacy parameters are given.
80
+
81
+ We formalize **APES**, a privacy **A**mplification framework for **PE**rsonalized private federated learning with the **S**huffle model. The framework includes three procedures: the local updating, shuffling and analyzing processes, carried out by three separate parties. The convergence upper bound of APES is given at the end.
82
+
83
+ **Architecture** Consider 3 parties: (i) n users, each holding a dataset $X_i$ and a randomizer $M_i$ satisfying $\epsilon_i^l$-LDP. (ii) A shuffler with algorithm S. (iii) An analyzer, which trains the global model with shuffled messages. The process $P = S \circ M$ ensures $(\epsilon^c, \delta^c)$-DP for the global model, where $M = (M_1, ..., M_n)$ with $\epsilon^l = (\epsilon_1^l, ..., \epsilon_n^l)$ at the dimension level.
84
+
85
+ **Basic Framework** Algorithm 1 outlines the procedures of APES. We denote clip bound by C, learning rate by $\alpha$ and training epochs by T. Main procedures are as follows:
86
+
87
+ - Local Updating. Each user randomizes each dimension of model gradient $g_i$ with $\epsilon_i^l$ by applying Clip-Laplace Mechanism. Both perturbed gradient $\tilde{g}_i$ and $\epsilon_i^l$ are sent to Shuffler. To keep the order of dimensions, dimension index k of $\tilde{g}_i$ is sent as well.
88
+ - Shuffling Process. Shuffler shuffles $\{\tilde{g}_i\}_{i\in[n]}$ within the same dimension, $\{\epsilon_i^l\}_{i\in[n]}$ is also permuted.
89
+ - Analyzing Process. Since the Clip-Laplace Mechanism is biased, the average gradient $\bar{\tilde{g}}$ needs to be calibrated. We cannot calibrate each $\tilde{g}_i$ individually, as the correspondence between $\epsilon_i^l$ and $g_i$ is invisible to the analyzer. Empirically, we observe that the value of $\bar{\tilde{g}}$ is close to its expectation $\mathbb{E}[\bar{\tilde{g}}]$ (cf. Fig. 4 in Appendix C), where $\bar{g} = \frac{1}{n} \sum_{i=1}^n g_i$, $\mathbb{E}[\bar{\tilde{g}}] = \frac{1}{n} \sum_{i=1}^n \mathbb{E}[\tilde{g}_i]$ and $\tilde{g}_i \sim CLap(\bar{g}, 2C/\epsilon_i^l, C)$. Hence we can estimate the clean average gradient $\bar{g}$ by approximating $\mathbb{E}[\bar{\tilde{g}}]$ with the observed $\bar{\tilde{g}}$. Specifically, each term $\mathbb{E}[\tilde{g}_i]$ with $\epsilon_i^l$ is as follows:
90
+
91
+ $$\mathbb{E}[\tilde{g}_i] = \frac{(C + \lambda_i) \cdot (e_1 - e_2) + 2\bar{g}}{2 - e_1 - e_2} \tag{1}$$
92
+
93
+ where $e_1=e^{\frac{-C-\bar{g}}{\lambda_i}}$, $e_2=e^{\frac{-C+\bar{g}}{\lambda_i}}$, and $\lambda_i=2C/\epsilon_i^l$.
96
+
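Eq. (1) can be checked numerically: the closed form below should agree with direct midpoint integration of the $CLap$ density on $[-C, C]$ (a verification sketch; the function names are ours):

```python
import math

def clap_mean(g_bar, eps, C):
    """Closed-form E[g_tilde_i] from Eq. (1), with lambda_i = 2C / eps."""
    lam = 2.0 * C / eps
    e1 = math.exp((-C - g_bar) / lam)
    e2 = math.exp((-C + g_bar) / lam)
    return ((C + lam) * (e1 - e2) + 2.0 * g_bar) / (2.0 - e1 - e2)

def clap_mean_numeric(g_bar, eps, C, n=20000):
    """E[Z] by midpoint integration of the truncated-Laplace density."""
    lam = 2.0 * C / eps
    S = (1.0 - 0.5 * math.exp((-C + g_bar) / lam)
             - 0.5 * math.exp((-C - g_bar) / lam))
    h = 2.0 * C / n
    return sum((-C + (i + 0.5) * h)
               * math.exp(-abs((-C + (i + 0.5) * h) - g_bar) / lam)
               / (2.0 * lam * S) * h
               for i in range(n))
```

Since truncation at $\pm C$ pulls the mean toward zero, the analyzer adds back the estimated bias during calibration.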
97
+ **Convergence Analysis** To demonstrate the performance of global model under Clip-Laplace perturbation, we provide the upper bound of convergence of Algorithm 1 with the objective function $h(w; w^{(0)}) = F(w) + \frac{\mu}{2} ||w - w^{(0)}||^2$ . The regularization term $\frac{\mu}{2} ||w - w^{(0)}||^2$ of h is introduced for the ease of calculation (Li et al. 2020).
98
+
99
+ **Theorem 2 (Convergence Upper Bound)** After T aggregations, the expected optimality gap of the global loss function $f(w) = \frac{1}{n} \sum_{i} F_i(w)$ under APES is bounded as follows:
100
+
101
+ $$\mathbb{E}[f(\tilde{w}^{(T)}) - f(w^*)] \le a_1^T (\mathbb{E}[f(\tilde{w}^{(0)})] - f(w^*)) + \frac{a_1^T - 1}{a_1 - 1} (O(a_2 C / \min(\epsilon_i^l)) + O(a_3 C^2 / \min(\epsilon_i^l)^2))$$
102
+
103
+ where $a_1 = 1 + \frac{2\beta(\alpha B - 1)}{\mu} + \frac{2\beta LB(\alpha + 1)}{\mu\bar{\mu}} + \frac{2\beta LB^2(1 + \alpha)^2}{\bar{\mu}^2}$, $a_2 = L(\frac{1}{\mu} + \frac{BL(1 + \alpha)}{\bar{\mu}})$, and $a_3 = \frac{L}{2}$.
104
+
105
+ ```
+ Algorithm 1: APES
+ Input: T, \{(X_i, \epsilon_i^l)\}_{i \in [n]}, h(w), C, \alpha
+ Output: model w
+ Analyzer initializes and broadcasts w^{(0)}
+ for t = 1, 2, ..., T do
+     ▶ Local Updating
+     for each user i \in [n] do
+         w_i \leftarrow w^{(t)}                          ▶ Update local model
+         g_i \leftarrow \nabla_{w_i} h(w_i, X_i)
+         \bar{g}_i \leftarrow Clip(g_i, -C, C)
+         \tilde{g}_i \leftarrow Randomize(\cdot)
+         user i uploads (\tilde{g}_i, \epsilon_i^l) to Shuffler
+     ▶ Shuffling Process
+     for each dimension k \in [d] do
+         generate permutation \pi_k over [n]
+         \{(\tilde{g}_{\pi_k(i),k}, k)\}_{i \in [n]} \leftarrow Shuffle(\pi_k, \{\tilde{g}_{i,k}\}_{i \in [n]})
+     generate permutation \pi over [n]
+     \{\epsilon_{\pi(i)}^l\}_{i \in [n]} \leftarrow Shuffle(\pi, \{\epsilon_i^l\}_{i \in [n]})
+     send \{\{(\tilde{g}_{\pi_k(i),k}, k)\}_{i \in [n]}\}_{k \in [d]} and \{\epsilon_{\pi(i)}^l\}_{i \in [n]} to Analyzer
+     ▶ Analyzing Process
+     for each dimension k \in [d] do
+         \bar{\tilde{g}}_k \leftarrow \frac{1}{n} \sum_i \tilde{g}_{i,k}
+     \hat{g} \leftarrow Calibrate(\bar{\tilde{g}}, \{\epsilon_i^l\}_{i \in [n]})
+     w^{(t+1)} \leftarrow w^{(t)} - \alpha \hat{g} and broadcast
+ return w^{(T)}
+ ```
135
+
136
+ The proof is provided in Appendix B.
137
+
138
+ *Discussion.* The convergence upper bound increases as the bias and the variance (the second and the third terms) of the Clip-Laplace perturbation grow; their influence is the same as that of the classic Laplace Mechanism.
139
+
140
+ To strengthen privacy in the high-dimensional scenario, we propose the **S-APES** framework, which improves **APES** with the post-**S**parsification technique.
141
+
142
+ Since gradients are usually high-dimensional, limiting the number of dimensions helps to save the privacy cost (Ye and Hu 2020; Duan, Ye, and Hu 2022). Selecting a subset of dimensions with large magnitudes retains the majority of the information (Aji and Heafield 2017) and reduces the privacy loss, but needs extra protection since the selection itself is a data-dependent process. To select informative dimensions without breaching privacy, we propose the post-sparsification technique.
+
+ **Post Sparsification** Algorithm 2 presents the local process of S-APES with post-sparsification. Concretely, each user i is asked to select the b largest absolute values over the d dimensions of $\tilde{g}_i$. To keep the selected dimension indices private, the selection is executed after local perturbation. To avoid degrading the shuffling effect through the reduced number of participating values, each user pads the remaining (d-b) dimensions with perturbed 0. Denoting the sparsification process by K, the whole process of S-APES is defined as $P_s = S \circ K \circ M$.
+
+ Algorithm 2: $Randomize(\cdot)$ for S-APES
+
+ $$\begin{array}{l} \textbf{Input} \ (g_i, \epsilon_i^l), C, b. \\ \textbf{Output} \ \ \text{perturbed sparse gradient } \tilde{g}_i \\ \tilde{g}_i \leftarrow \operatorname{CLap}(g_i, (d\Delta f)/\epsilon_i^l, C) \ \ \rhd \text{Clip-Laplace perturbing} \\ I_b \leftarrow \text{indices of the } b \text{ largest } |\tilde{g}_{i,k}|,\ k \in [d] \ \ \rhd \text{Post-top-}b \text{ index set} \\ \textbf{for } \text{each index } k \notin I_b \ \textbf{do} \\ \quad \tilde{g}_{i,k} \leftarrow \operatorname{CLap}(0, (d\Delta f)/\epsilon_i^l, C) \ \ \rhd \text{Dummy padding} \\ \textbf{return } \tilde{g}_i \end{array}$$
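A minimal sketch of Algorithm 2 in Python follows. The function name `randomize_s_apes` and the concrete implementation of `CLap` (add Laplace noise, clip back to $[-C, C]$) are our assumptions; the structure — perturb first, keep the top-b perturbed magnitudes, pad the rest with perturbed zeros — follows the algorithm above.

```python
import numpy as np

def randomize_s_apes(g, eps_l, C, b, rng):
    """Sketch of Algorithm 2: perturb first, then keep the top-b coordinates
    by absolute value and replace the rest with freshly perturbed zeros
    (dummy padding), so the selected index set never touches raw data."""
    d = g.shape[0]
    scale = d * 2 * C / eps_l  # d * Delta_f / eps_l with Delta_f = 2C
    # Clip-Laplace perturbing (assumed form: noise then output clipping)
    tilde_g = np.clip(np.clip(g, -C, C) + rng.laplace(0.0, scale, d), -C, C)
    top_b = np.argsort(np.abs(tilde_g))[-b:]  # post-top-b index set I_b
    # Dummy padding: perturbed zeros for every coordinate outside I_b
    dummy = np.clip(rng.laplace(0.0, scale, d), -C, C)
    out = dummy.copy()
    out[top_b] = tilde_g[top_b]
    return out

rng = np.random.default_rng(1)
g = np.arange(8.0) / 8.0 - 0.4
sparse_g = randomize_s_apes(g, eps_l=2.0, C=1.0, b=3, rng=rng)
```

Because selection happens on the already-perturbed vector, the index set $I_b$ leaks nothing beyond what the perturbed values already reveal.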
+
+ In this section, we first derive a naïve privacy bound based on existing work, then show the local and central privacy bounds of our frameworks. A sketch of the privacy amplification analysis is provided at the end.
+
+ To analyze the privacy amplification effect of shuffling under personalized LDP, the most naïve approach is to apply existing shuffling bounds (Erlingsson et al. 2019; Balle et al. 2019; Girgis et al. 2021; Feldman, McMillan, and Talwar 2022) to heterogeneous local privacy budgets, i.e., $\epsilon_i^l$, with the classic Laplace Mechanism. However, different $\epsilon_i^l$ lead to different scales of the Laplace distributions, and their divergence may be infinite. As a result, the central privacy may be unbounded. Hence, based on the previous work (Feldman, McMillan, and Talwar 2022), we can only approximate the true bound by using the same maximum $\epsilon_i^l$ for all users:
+
+ $$\epsilon^c \leq \ln(1 + \frac{e^{\max(\epsilon_i^l)} - 1}{e^{\max(\epsilon_i^l)} + 1} (\frac{8(e^{\max(\epsilon_i^l)}\log(4/\delta))^{1/2}}{n^{1/2}} + \frac{8e^{\max(\epsilon_i^l)}}{n})) \tag{2}$$
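Eq. (2) can be evaluated directly. The small helper below (the name `naive_central_eps` is ours) shows how the naive bound behaves when every user is forced to the worst-case maximum budget:

```python
import math

def naive_central_eps(eps_max, n, delta):
    """Naive central bound of Eq. (2): plug the worst-case (largest) local
    budget into the homogeneous shuffling bound of Feldman et al. (2022)."""
    a = (math.exp(eps_max) - 1) / (math.exp(eps_max) + 1)
    inner = (8 * math.sqrt(math.exp(eps_max) * math.log(4 / delta)) / math.sqrt(n)
             + 8 * math.exp(eps_max) / n)
    return math.log(1 + a * inner)

# with eps_max = 2 and n = 10^4 users, the central budget shrinks well below 2
eps_c = naive_central_eps(2.0, 10_000, 1e-6)
```

Note the $e^{\max(\epsilon_i^l)}$ factor inside both terms: a single user with a loose budget inflates the bound for everyone, which is exactly the weakness the heterogeneous analysis below removes.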
+
+ The proposed techniques, the Clip-Laplace Mechanism and Neighbor Divergence, make analyzing the privacy amplification effect possible. Without loss of generality, we suppose two neighboring datasets $D = \{g_1, g_2, ..., g_n\}$ and $D' = \{g_1', g_2, ..., g_n\}$ that differ only in $g_1$ vs. $g_1'$ of user 1, and provide the privacy bounds of our frameworks as follows.
+
+ **Theorem 3 (Local Bound)** Given $\epsilon^l = (\epsilon^l_1, ..., \epsilon^l_n)$, the local process $M = (M_1, ..., M_n)$ of APES on d-dimensional gradients satisfies $\epsilon^l_i$-LDP at the dimension level and $d\epsilon^l_i$-LDP at the user level for each user i.
+
+ *Discussion.* Our frameworks achieve personalized LDP for each user. This follows from Theorem 1.
+
+ **Theorem 4 (Central Upper Bound)** Let $i, j \in [n]$, $\delta_s \in [0, 1]$, and $\sum_{i=2}^n \sum_{j=1}^n \frac{p_{ij}}{n} \ge 16 \ln(4/\delta_s)$ where $p_{ij} = \frac{\epsilon_i^l}{\epsilon_j^l} \cdot \frac{1 - e^{-\epsilon_j^l}}{1 - e^{-\epsilon_i^l}} \cdot e^{-\max(\epsilon_i^l, \epsilon_j^l)}$. Then $P = S \circ M$ of APES satisfies $(\epsilon^c, \delta^c)$-DP where $\delta^c \le \frac{e^{\max(\epsilon_j^l)} - 1}{e^{\max(\epsilon_j^l)} + 1} \delta_s$ and
+ $$\epsilon^c \le \ln(1 + \frac{e^{\max(\epsilon_j^l)} - 1}{e^{\max(\epsilon_j^l)} + 1} (\frac{8(\ln(4/\delta_s))^{1/2}}{(\sum_{i=2}^n \sum_{j=1}^n \frac{p_{ij}}{n})^{1/2}} + \frac{8}{\sum_{i=2}^n \sum_{j=1}^n \frac{p_{ij}}{n}}))$$
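Theorem 4's bound is also directly computable. The sketch below (helper names `p_ij` and `central_eps` are ours) evaluates the heterogeneous bound, checking the precondition on the clone mass before applying the formula:

```python
import math

def p_ij(eps_i, eps_j):
    """Mixture weight from Theorem 4 (our transcription):
    (eps_i/eps_j) * (1 - e^{-eps_j})/(1 - e^{-eps_i}) * e^{-max(eps_i, eps_j)}."""
    return (eps_i / eps_j) * (1 - math.exp(-eps_j)) / (1 - math.exp(-eps_i)) \
        * math.exp(-max(eps_i, eps_j))

def central_eps(eps_list, delta_s):
    """Heterogeneous central bound of Theorem 4; valid only when the clone
    mass sum_{i>=2} sum_j p_ij / n reaches 16 ln(4/delta_s)."""
    n = len(eps_list)
    mass = sum(p_ij(eps_list[i], eps_list[j])
               for i in range(1, n) for j in range(n)) / n
    assert mass >= 16 * math.log(4 / delta_s), "precondition on clone mass not met"
    e_max = max(eps_list)
    a = (math.exp(e_max) - 1) / (math.exp(e_max) + 1)
    return math.log(1 + a * (8 * math.sqrt(math.log(4 / delta_s)) / math.sqrt(mass)
                             + 8 / mass))

# 800 users, each with local budget 1: the central budget drops well below 1
eps_c = central_eps([1.0] * 800, delta_s=1e-6)
```

Unlike Eq. (2), the denominator here is the accumulated clone mass $\sum_{i,j} p_{ij}/n$ rather than $n$ scaled by a worst-case $e^{\epsilon}$ factor, so tight-budget users are no longer penalized by loose-budget ones.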
+
+ *Discussion.* APES gains strong central privacy at the dimension level. Theorem 4 indicates that most users are provided with a much stricter central privacy $\epsilon^c$ than their local privacy $\epsilon^l_i$. A sketch of the proof is provided in the following section.
+
+ **Proposition 1 (User Level Central Bound)** With ${\delta'}^{uc} > 0$ and $0 < b \le d$, the process $P_s = S \circ K \circ M$ of S-APES with b-dimension sparsification is $(\epsilon^{uc}, \delta^{uc})$-DP where $\epsilon^{uc} = \epsilon^c \sqrt{4b \ln(1/{\delta'}^{uc})} + 2b\epsilon^c (\exp(\epsilon^c) - 1)$ and $\delta^{uc} = {\delta'}^{uc} + 2b\delta^c$.
+
+ *Discussion.* S-APES achieves the same dimension-level $\epsilon^c$ as APES. Since the dimensions of a gradient are not independent and extracting b dimensions leads to a sensitivity of 2b, we derive the user-level privacy amplification effect via composition theorems. Note that $\epsilon^{uc}$ grows with b, which implies that the privacy loss is reduced when fewer dimensions are uploaded through post-sparsification.
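The composition in Proposition 1 can be sketched numerically (the helper name `user_level_eps` is ours); it makes the growth of the user-level budget with b explicit:

```python
import math

def user_level_eps(eps_c, delta_c, b, delta_prime):
    """Sketch of Proposition 1: compose the dimension-level (eps_c, delta_c)
    guarantee over 2b dimensions (sensitivity doubles when b dimensions are
    kept) in the style of advanced composition."""
    eps_uc = eps_c * math.sqrt(4 * b * math.log(1 / delta_prime)) \
        + 2 * b * eps_c * (math.exp(eps_c) - 1)
    delta_uc = delta_prime + 2 * b * delta_c
    return eps_uc, delta_uc

# smaller b (more aggressive sparsification) gives a smaller user-level budget
e10, d10 = user_level_eps(0.5, 1e-8, b=10, delta_prime=1e-6)
e20, d20 = user_level_eps(0.5, 1e-8, b=20, delta_prime=1e-6)
```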
+
+ To analyze the privacy of the proposed frameworks, we first introduce Neighbor Divergence, then present a sketch of the Echo of Neighbors (EoN) analysis for the privacy amplification effect.
+
+ **Neighbor Divergence** We introduce *Neighbor Divergence* to characterize how close a user's output distribution is to those of other users. Concretely, it defines the distance among output distributions of the local randomizers of users with heterogeneous privacy budgets and different raw datapoints.
+
+ **Definition 5 (Neighbor Divergence)** Consider any $g_s$, $g_t \in \mathcal{D}$ and randomizers $M_i$, $M_j$ satisfying $\epsilon_i^l$- and $\epsilon_j^l$-LDP respectively. Let $\mu_i^{(s)}$ and $\mu_j^{(t)}$ be the distributions of $M_i(g_s)$ and $M_j(g_t)$ respectively, and let $U_i^{(s)} \sim \mu_i^{(s)}$, $U_j^{(t)} \sim \mu_j^{(t)}$. The neighbor divergence between $\mu_i^{(s)}$ and $\mu_j^{(t)}$ is defined as:
+
+ $$D_N(\mu_i^{(s)}||\mu_j^{(t)}) = \max_{S \subseteq Supp(U_i^{(s)})} \left[ \ln \frac{\Pr[U_i^{(s)} \in S]}{\Pr[U_j^{(t)} \in S]} \right]$$
+
+ In particular, the neighbor divergence under the Clip-Laplace Mechanism is given as follows.
+
+ **Lemma 4** Let $f(x) \in [-C,C]$, $\lambda = \Delta f/\epsilon^l$ and $\Delta f = 2C$. The neighbor divergence between distributions $\mu_i^{(s)}$ and $\mu_j^{(t)}$ under the Clip-Laplace Mechanism is $D_N(\mu_i^{(s)}||\mu_j^{(t)}) \leq \ln(\alpha\frac{\epsilon_i^l}{\epsilon_j^l}e^{\frac{\epsilon_i^l+\epsilon_j^l}{2}+\frac{A|\epsilon_i^l-\epsilon_j^l|}{2C}})$, where $\alpha$ denotes $\frac{1-\frac{1}{2}\exp(\frac{\epsilon_j^l(-A+C)}{2C})-\frac{1}{2}\exp(\frac{\epsilon_j^l(-A-C)}{2C})}{1-\frac{1}{2}\exp(\frac{\epsilon_i^l(-A+C)}{2C})-\frac{1}{2}\exp(\frac{\epsilon_i^l(-A-C)}{2C})}$. Specifically, $D_N(\mu_i^{(s)}||\mu_j^{(t)}) \leq \ln(\frac{\epsilon_i^l}{\epsilon_j^l}\cdot\frac{1-e^{-\epsilon_j^l}}{1-e^{-\epsilon_i^l}}\cdot e^{\max(\epsilon_i^l,\epsilon_j^l)})$ when $A=C$.
+
+ **A Sketch of EoN Analysis** We analyze the central privacy bound in Theorem 4 based on the observation of *Echoes of Neighbors*. There are three main steps: (i) After shuffling, the output distributions of the remaining users are converted, via neighbor divergence, into the same distribution as user 1's, which can be seen as "echoes". (ii) Then all the "echoes" are transformed into certain distributions that are disentangled from the different $\epsilon_i^l$ by degraded privacy; these distributions form a mixed distribution. (iii) Finally, we measure the divergence between the mixed distributions on D and D'.
+
+ **Step (i).** Recall that the LDP mechanism $M_i: \mathcal{Y} \to \mathcal{Z}$ satisfies $\epsilon_i^l$-LDP for any $i \in [n]$. Based on neighbor divergence, for any $\mu_i^{(s)}$ and $\mu_j^{(t)}$ we have $\mu_i^{(s)}/\mu_j^{(t)} \geq e^{-D_N(\mu_j^{(t)}||\mu_i^{(s)})} = p_{ij}$ by Definition 5. Specifically, for any user's distribution $\mu_i^{(i)}$ on $g_i \in D \setminus \{g_1, g_1'\}$, the "echo" $\mu_j^{(1)}$ (or ${\mu'}_j^{(1)}$) of user 1 with $g_1$ (or $g_1'$) is generated as follows:
+
+ $$\mu_i^{(i)} = \frac{p_{ij}}{2} \mu_j^{(1)} + \frac{p_{ij}}{2} {\mu'}_j^{(1)} + (1 - p_{ij}) \gamma_i^{(i)} \tag{3}$$
+
+ The distribution $\gamma_i^{(i)} = (\mu_i^{(i)} - \frac{p_{ij}}{2} (\mu_j^{(1)} + {\mu'}_j^{(1)}))/(1 - p_{ij})$. The idea is inspired by prior work (Feldman, McMillan, and Talwar 2022). When both g and $\epsilon^l$ are shuffled, the correspondence between $g_i$ and $\epsilon^l_i$ is broken. An adversary cannot decide which $\epsilon^l_j$ is used for perturbing $g_1$, hence any value in $\{\epsilon^l_i\}$ is possible. Based on this we derive a general bound, and then consider the worst-case situation with the largest $\epsilon^l_i$ on $g_1$ for the upper bound at step (iii).
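The mixture decomposition of Eq. (3) can be checked on toy discrete distributions (all numbers below are hypothetical); $\gamma_i^{(i)}$ stays a valid distribution as long as the cloning probability $p$ does not exceed $\exp(-D_N)$:

```python
import numpy as np

def decompose(mu_i, mu_j, mu_j_prime, p):
    """Eq. (3): express user i's distribution as a mixture of user 1's two
    'echoes' plus a leftover gamma_i. A p no larger than exp(-D_N) keeps
    gamma_i non-negative."""
    return (mu_i - (p / 2) * (mu_j + mu_j_prime)) / (1 - p)

# toy discrete distributions over 3 outcomes (hypothetical numbers)
mu_i = np.array([0.5, 0.3, 0.2])
mu_j = np.array([0.4, 0.4, 0.2])
mu_jp = np.array([0.3, 0.3, 0.4])
p = 0.4
gamma = decompose(mu_i, mu_j, mu_jp, p)
recon = (p / 2) * mu_j + (p / 2) * mu_jp + (1 - p) * gamma
```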
+
+ **Step (ii).** Except for user 1, the mixed distribution of multiple $\mu_j^{(1)}$ with different $\epsilon_j^l$ from the n-1 users is still hard to bound. Hence, with the help of degraded privacy (cf. Lemma 3) we transform $(\mu_j^{(1)} + {\mu'}_j^{(1)})$ into $(\rho^{(1)} + {\rho'}^{(1)})$ for any $j \in [n]$, so that $\epsilon_j^l$ is disentangled from $\mu_j^{(1)}$.
+
+ **Lemma 5 (Transformation)** Let $\rho^{(1)}$ and ${\rho'}^{(1)}$ denote the distributions of the function $\tilde{M}:\{g_1,g_1'\}\to\mathcal{Z}$, let $\mu_i^{(i)}$ be the distribution of $M_i(g_i)$, and let $\gamma_i^{(i)}$ be the remaining part of $\mu_i^{(i)}$ besides $\rho^{(1)}$ and ${\rho'}^{(1)}$. Then $\mu_i^{(i)}$ is mapped as follows:
+
+ $$\mu_i^{(i)} = \frac{1}{n} \sum_{j=1}^n \left( \frac{p_{ij}}{2} \rho^{(1)} + \frac{p_{ij}}{2} {\rho'}^{(1)} + (1 - p_{ij}) \gamma_i^{(i)} \right) \tag{4}$$
+
+ where $p_{ij} = \exp(-D_N(\mu_j^{(1)}||\mu_i^{(i)}))$.
+
+ **Proof** By Lemma 3, we have $\mu_j^{(1)} = (e^{\epsilon_j^l}/(1+e^{\epsilon_j^l}))\rho^{(1)} + (1/(1+e^{\epsilon_j^l})){\rho'}^{(1)}$ and ${\mu'}_j^{(1)} = (1/(1+e^{\epsilon_j^l}))\rho^{(1)} + (e^{\epsilon_j^l}/(1+e^{\epsilon_j^l})){\rho'}^{(1)}$. The influence of $\epsilon_j^l$ on $\mu_i^{(i)}$ is isolated:
+
+ $$\mu_i^{(i)} = \frac{1}{n} \sum_{j=1}^n \left( \frac{p_{ij}}{2} \mu_j^{(1)} + \frac{p_{ij}}{2} {\mu'}_j^{(1)} + (1 - p_{ij}) \gamma_i^{(i)} \right) = \frac{1}{n} \sum_{j=1}^n \left( \frac{p_{ij}}{2} \rho^{(1)} + \frac{p_{ij}}{2} {\rho'}^{(1)} + (1 - p_{ij}) \gamma_i^{(i)} \right)$$
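The Lemma 3-style decomposition used in this proof can be inverted and verified numerically. This is our reading of degraded privacy, sketched with a hypothetical helper `degraded_pair`; for binary randomized response the recovered $\rho$, ${\rho'}$ are exactly the two point masses:

```python
import numpy as np

def degraded_pair(mu, mu_p, eps):
    """Invert the Lemma 3 mixtures: given an eps-DP pair (mu, mu'), recover
    rho, rho' such that mu = (e^eps * rho + rho') / (1 + e^eps) and
    mu' = (rho + e^eps * rho') / (1 + e^eps). Non-negativity of rho and
    rho' follows from the eps-DP ratio bound e^{-eps} <= mu/mu' <= e^{eps}."""
    e = np.exp(eps)
    rho = (e * mu - mu_p) / (e - 1)
    rho_p = (e * mu_p - mu) / (e - 1)
    return rho, rho_p

# toy eps-DP pair: binary randomized response with budget eps
eps = 1.0
e = np.exp(eps)
mu = np.array([e, 1.0]) / (1 + e)    # distribution on input g_1
mu_p = np.array([1.0, e]) / (1 + e)  # distribution on input g_1'
rho, rho_p = degraded_pair(mu, mu_p, eps)
```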
+
+ **Step (iii).** Now we can bound the divergence of the transformed distributions on D and D'.
+
+ **Lemma 6 (Generalized Central Bound)** Let $i, j \in [n]$, $\delta_s \in [0,1]$, and $\sum_{i=2}^n \sum_{j=1}^n \frac{p_{ij}}{n} \geq 16 \ln(4/\delta_s)$. Then $P = S \circ M$ of APES on D and D' is $(\epsilon^c, \delta^c)$-distinguishable where $\delta^c \leq \frac{e^{\epsilon^*}-1}{e^{\epsilon^*}+1} \delta_s$ and $p_{ij} = \frac{\epsilon_i^l}{\epsilon_j^l} \cdot \frac{1-e^{-\epsilon_j^l}}{1-e^{-\epsilon_i^l}} \cdot e^{-\max(\epsilon_i^l, \epsilon_j^l)}$,
+
+ $$\epsilon^{c} \leq \ln\left(1 + \frac{e^{\epsilon^{*}} - 1}{e^{\epsilon^{*}} + 1} \left( \frac{8(\ln(4/\delta_{s}))^{1/2}}{\left(\sum_{i=2}^{n} \sum_{j=1}^{n} \frac{p_{ij}}{n}\right)^{1/2}} + \frac{8}{\sum_{i=2}^{n} \sum_{j=1}^{n} \frac{p_{ij}}{n}}\right) \right)$$
+
+ **Proof** By Lemma 5, any output distribution $\mu_i^{(i)}$ can be mapped into $\rho^{(1)}$ or ${\rho'}^{(1)}$ with probability $p_{ij}/2n$ each, and into $\gamma_i^{(i)}$ with probability $(1-p_{ij})/n$. Considering the outputs of the n-1 users, we obtain a set of mapped distributions with n(n-1) elements.
+
+ With any $T \subseteq [n(n-1)]$ and $\Gamma = [n(n-1)] \setminus T$, we define a mapping event $U = \{u_1, ..., u_{n(n-1)}\}$ where
+
+ $$u_t = \begin{cases} \rho^{(1)} \text{ or } {\rho'}^{(1)}, & t \in T \\ \gamma_t^{(t)}, & t \in \Gamma \end{cases}$$
+
+ The effect of $\gamma_i$ can be removed in the process P under the same $U_T$ since all the $u_t \in U_\Gamma$ are the same on D and D':
+
+ $$\frac{\Pr[P(D) = \mathbf{z}]}{\Pr[P(D') = \mathbf{z}]} \le \frac{\Pr[U_T \cup \rho^{(1)}] \Pr[U_\Gamma]}{\Pr[U_T \cup {\rho'}^{(1)}] \Pr[U_\Gamma]} \tag{5}$$
+
+ Then we define $T_0 \subseteq T$ and $T_1 = T \setminus T_0$ such that every $u_t \in U_{T_0}$ is $\rho^{(1)}$ and every $u_t \in U_{T_1}$ is ${\rho'}^{(1)}$ on D; similarly, $T_0' \subseteq T$ and $T_1' = T \setminus T_0'$ such that every $u_t \in U_{T_0'}$ is $\rho^{(1)}$ and every $u_t \in U_{T_1'}$ is ${\rho'}^{(1)}$ on D'. Putting aside the randomness on $g_1$ and $g_1'$ for now (so that the output of user 1 can be regarded as $\rho^{(1)}$ or ${\rho'}^{(1)}$), to reach the same mixed output $\mathbf{z}$ with the same numbers of $\rho^{(1)}$ and ${\rho'}^{(1)}$, $U_{T_0}$ on D and $U_{T_0'}$ on D' must differ, with $|T_0'| - |T_0| = 1$. Recalling that $|T| \sim \sum_{i=2}^n \sum_{j=1}^n \mathrm{Bern}(p_{ij}/n)$ and $|T_0| \sim \mathrm{Bin}(|T|, 1/2)$ according to Lemma 5, we can bound Eq.(5) by deriving the following equation:
+
+ $$\frac{\Pr[U_T \cup \rho^{(1)}]}{\Pr[U_T \cup {\rho'}^{(1)}]} = \frac{\Pr[U_{T_0} \cup U_{T_1} \mid U_T] \cdot \Pr[U_T]}{\Pr[U_{T_0'} \cup U_{T_1'} \mid U_T] \cdot \Pr[U_T]} = \frac{\binom{|T|}{|T_0|} (\frac{1}{2})^{|T_0|} (\frac{1}{2})^{|T|-|T_0|}}{\binom{|T|}{|T_0'|} (\frac{1}{2})^{|T_0'|} (\frac{1}{2})^{|T|-|T_0'|}} = \frac{|T_0| + 1}{|T| - |T_0|} \tag{6}$$
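The last equality in Eq. (6) is a binomial-coefficient identity: the $(1/2)^{|T|}$ factors cancel, and with $|T_0'| = |T_0| + 1$ the ratio $\binom{|T|}{|T_0|}/\binom{|T|}{|T_0|+1}$ simplifies. It can be checked exhaustively for small $|T|$:

```python
from math import comb

def ratio(T, T0):
    """Both (1/2)^{|T|} factors cancel, leaving a ratio of binomials:
    C(T, T0) / C(T, T0 + 1) = (T0 + 1) / (T - T0)."""
    return comb(T, T0) / comb(T, T0 + 1)

checks = all(abs(ratio(T, T0) - (T0 + 1) / (T - T0)) < 1e-12
             for T in range(1, 30) for T0 in range(T))
```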
+
+ With the Chernoff bound and Hoeffding's inequality, when $\sum_{i=2}^{n}\sum_{j=1}^{n}\frac{p_{ij}}{n}\geq 16\ln(4/\delta_s)$, Eq.(6) is bounded as $\ln\frac{|T_0|+1}{|T|-|T_0|}\leq \ln(1+\frac{8(\ln(4/\delta_s))^{1/2}}{(\sum_{i=2}^{n}\sum_{j=1}^{n}\frac{p_{ij}}{n})^{1/2}}+\frac{8}{\sum_{i=2}^{n}\sum_{j=1}^{n}\frac{p_{ij}}{n}})$ with probability at least $1-\delta_s$.
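The concentration behind this step can be illustrated with a quick Monte-Carlo check. As a simplification, we model $|T|$ as a Binomial with the stated mean rather than the exact sum of heterogeneous Bernoullis; the point is only that $(|T_0|+1)/(|T|-|T_0|)$ concentrates near 1 once the expected clone count is large:

```python
import numpy as np

def simulate_ratio(n_draws, mass, rng):
    """Monte-Carlo sketch: |T| ~ Bin(2*mass, 1/2) so E[|T|] = mass
    (an approximation of the Bernoulli sum), |T_0| ~ Bin(|T|, 1/2);
    return samples of (|T_0| + 1) / (|T| - |T_0|)."""
    T = rng.binomial(int(2 * mass), 0.5, size=n_draws)
    T0 = rng.binomial(T, 0.5)
    return (T0 + 1) / np.maximum(T - T0, 1)

rng = np.random.default_rng(0)
r = simulate_ratio(20_000, mass=400.0, rng=rng)
```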
+
+ At last, we consider the randomness on $g_1$ and $g_1'$ with a certain privacy budget $\epsilon^*$; the rest of the proof follows existing work (Feldman, McMillan, and Talwar 2022), and the general bound is proved. The full proof of Lemma 6 is provided in Appendix A.
+
+ From the analysis above, we see that which $\epsilon^*$ is adopted by $g_1$ or $g_1'$ is crucial for the bound. For the worst case, $\epsilon^* = \max(\epsilon_j^l)$ for $j \in [n]$, and the divergence is upper bounded as in Theorem 4. The proof is deferred to Appendix A.
2304.07645/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
+ <mxfile host="app.diagrams.net" modified="2022-10-13T03:27:16.669Z" agent="5.0 (Macintosh)" version="20.0.4" etag="9kHWbCuNFNbuNVEzqDVR" type="github"><diagram id="1nLK0o8a2wV31pPwAkNJ">7VtLc6M4EP41VO0eNiUhxOOY2MnuIbuVqhx2cppSGdmwg5ELy4k9v34lEOYh4SE2NuNJnFQFtYQM3V9/3WopFpost39mZBX9zUKaWDYItxaaWrZtu74r/kjJrpBAx7cLySKLw0IGKsFz/J2qgaV0E4d0rWSFiDOW8HjVFM5YmtIZb8hIlrG35rA5S8KGYEUWtPEYUvA8IwnVhv0bhzwqpD6ujf6Lxouo/GYIVM+SlIPVFOuIhOyt9l3o3kKTjDFeXC23E5pI7TX18tDRu3+wjKa8zw1K72u+K9+NhuJVVZNlPGILlpLkvpLeZWyThlROAESrGvPI2EoIoRD+RznfKbuRDWdCFPFlonrnLOWqE/qiXTyD/OKmbtgmmykRUlYm2YKqF3P1d4V7DQrsUbakPNuJIRlNCI9fm7MThYHFflylJnGhNGXWmnqaV5Js1KSW7Yjf+W8WnsxCaTzxUkA0eEQ5+b3o1VTdVORbFHP6vCL5K78Jz+mjtFeacbrtNH6HQsobAr+4RfmhDSAuBG8VqmHpA1EN0S44XYm6Pi4APaGZbPdF3Z83XmTjBpfN6bbeOd2p1hGQxTpk0XiQxR2Q3XZgU6CKN1W35hn7RicsYZmQpCyVBpnHSdISkSRepKI5E+9IhfxOYjQW5HmrOpZxGObWNCG+aeFzgN4JWqBHQAd9oGMeDYB5t8MKu49mBeS0qcdgBWRgniGs4I3CPNuYf6ld13hHtCrakY0TWMfXWQeC3tYZnHb8DsCL0LggyyX5aLiH2L/xcAP60IeeBn1T0B0C+sGY0K/g/lLrMUO/CtRVbH5phObBAnXpHw2fscfzmfJ5NKeJtPRSTL74adNLoUQd60DHegB1rOMBsA6hppKfFOxS+Q9kGSdS8LiZxaEgRjBh6ZrlD1U3jqvaJe9ZNpKfhweNJEUPyD/7nnKxCns6hn2RFDa/9TbLyK42YMXilK9rMz9JQQ1c2NPAFbiojo9j7hEXxZNUCNu/Uj/Q2d0RL18MfrSIZzvoxm6zQIAuluxBdC0sUAt5PvDrQe8PcANEknA48uWtJ5rFQkUSB+9lFtvMExqn7NmmD4M4htDqjxhaHYN33ukASZJ4te5ymZqZyXpV1Bfn8VbCRdNn3lYTl+1ezN3S8kkuiAO3RXrY75Vx2kP4X1flQRKiJD3vbkrnZJNIon4iGZHfEH8Xdmep5U2vii5/5G0D2BK2Fs7QwzqXCgvf4DPRqamE0XafNLyVVfZK6TU76Rrsy0+OkZ/qJhZ+M8fyx+RRbv4RPSFZR/kDHEiDehikpm9s8J1S9r40SMtZUKAvGL3SfctpCspVdx5IgAyT2cBHzckKrtYmOyYXGqXQcswK0FA1eQcShg9TXWWTK9tgwAC2wGbaYDBlfkNsMMBxih2X22GwTYWLEbMru6tw8eE2GZyy0DZGebt83M9dBtdt84/BDOcqtZoSo19mm8G0IV+u8gevDWl5jONgvbKIWnlMR1J0RB7Tud//UXcxHAgNuxj6mvJsDDfO2YEjXOuUbOC0dVEfJzYcUSgD1yj5w4FSgTw5lS0t7+7e8qZfq9rBo91dJDgp/R2lUuO4pu1BfYFvWnA6QziWO6ZjXWp/8CxbLa2e97qip7tiWawcxRW9blec0oSTq9zO6DLmAJ6LA6B77gW3OQ6dtFBU+c/TrfVZYu1Bwkg3pbHK6rhnsmXQYcsrOwCAXYNPAD1NDA6UT086pgs+08Qfls+PTHO0aHbuyGVeCWLHbuIrcPvVxo9YBqKuuso+FuJ7cHVkeq54CNuGMTDouYIhGqX00tbusSeB3neyp09y
eYY1ndkdXYD1wx5uv8KMcbLOlOrk3SrRrP7Nphhe/bcSuv8f</diagram></mxfile>
2304.07645/main_diagram/main_diagram.pdf ADDED
Binary file (23.6 kB). View file