Eric03 committed
Commit c6d2e5c · verified · 1 parent: f239caf

Add files using upload-large-folder tool

Files changed (50)
  1. 2006.00080/main_diagram/main_diagram.drawio +0 -0
  2. 2006.00080/paper_text/intro_method.md +88 -0
  3. 2006.03465/main_diagram/main_diagram.drawio +1 -0
  4. 2006.03465/main_diagram/main_diagram.pdf +0 -0
  5. 2006.03465/paper_text/intro_method.md +135 -0
  6. 2006.04166/main_diagram/main_diagram.drawio +1 -0
  7. 2006.04166/main_diagram/main_diagram.pdf +0 -0
  8. 2006.04166/paper_text/intro_method.md +257 -0
  9. 2011.06782/main_diagram/main_diagram.drawio +1 -0
  10. 2011.06782/main_diagram/main_diagram.pdf +0 -0
  11. 2011.06782/paper_text/intro_method.md +219 -0
  12. 2102.07762/main_diagram/main_diagram.drawio +0 -0
  13. 2102.07762/paper_text/intro_method.md +226 -0
  14. 2105.14573/main_diagram/main_diagram.drawio +1 -0
  15. 2105.14573/main_diagram/main_diagram.pdf +0 -0
  16. 2105.14573/paper_text/intro_method.md +125 -0
  17. 2107.06325/main_diagram/main_diagram.drawio +1 -0
  18. 2107.06325/main_diagram/main_diagram.pdf +0 -0
  19. 2107.06325/paper_text/intro_method.md +77 -0
  20. 2109.13016/main_diagram/main_diagram.drawio +0 -0
  21. 2109.13016/paper_text/intro_method.md +127 -0
  22. 2110.00280/main_diagram/main_diagram.drawio +0 -0
  23. 2110.00280/paper_text/intro_method.md +161 -0
  24. 2111.05011/main_diagram/main_diagram.drawio +1 -0
  25. 2111.05011/main_diagram/main_diagram.pdf +0 -0
  26. 2111.05011/paper_text/intro_method.md +119 -0
  27. 2111.12701/main_diagram/main_diagram.drawio +0 -0
  28. 2111.12701/paper_text/intro_method.md +115 -0
  29. 2111.15362/main_diagram/main_diagram.drawio +0 -0
  30. 2111.15362/paper_text/intro_method.md +65 -0
  31. 2112.08544/main_diagram/main_diagram.drawio +1 -0
  32. 2112.08544/main_diagram/main_diagram.pdf +0 -0
  33. 2112.08544/paper_text/intro_method.md +163 -0
  34. 2202.05420/main_diagram/main_diagram.drawio +1 -0
  35. 2202.05420/main_diagram/main_diagram.pdf +0 -0
  36. 2202.05420/paper_text/intro_method.md +250 -0
  37. 2203.16001/main_diagram/main_diagram.drawio +1 -0
  38. 2203.16001/paper_text/intro_method.md +79 -0
  39. 2204.03688/main_diagram/main_diagram.drawio +0 -0
  40. 2204.03688/paper_text/intro_method.md +29 -0
  41. 2205.09963/main_diagram/main_diagram.drawio +1 -0
  42. 2205.09963/main_diagram/main_diagram.pdf +0 -0
  43. 2205.09963/paper_text/intro_method.md +242 -0
  44. 2205.11028/main_diagram/main_diagram.drawio +0 -0
  45. 2205.11028/paper_text/intro_method.md +17 -0
  46. 2205.12006/main_diagram/main_diagram.drawio +1 -0
  47. 2205.12006/main_diagram/main_diagram.pdf +0 -0
  48. 2205.12006/paper_text/intro_method.md +125 -0
  49. 2205.14794/main_diagram/main_diagram.drawio +1 -0
  50. 2205.14794/main_diagram/main_diagram.pdf +0 -0
2006.00080/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render.
2006.00080/paper_text/intro_method.md ADDED
@@ -0,0 +1,88 @@
# Introduction

Privacy, while important in every domain, is enforced especially vigorously for medical data. Multiple levels of regulation, such as HIPAA [2, 11, 36, 13] and the approval process of the Institutional Review Board (IRB) [6], protect patients' sensitive data from malicious copying or even from tampering with evidence of medical conditions [38]. Like a double-edged sword, these regulations also cause insufficient collaboration on health records. For instance, the United States, the European Union, and many other jurisdictions do not allow patient data to leave their borders [25, 47]. As a result, many hospitals and research institutions are wary of cloud platforms and prefer to use their own servers. Even within the same country, collaboration on medical data still faces big hurdles.

It is widely known that a sufficient volume of data is necessary for training a successful machine learning algorithm [10] for medical image analysis. However, due to the policies and challenges mentioned above, it is hard to acquire enough medical scans for training a machine learning model. In 2016, there were approximately 38 million MRI scans and 79 million CT scans performed in the United States [\[41\]](#page-11-3). Even so, the datasets available for machine learning research are still very limited: the largest set of medical image data available to the public contains 32 thousand CT images [\[51\]](#page-12-1), only 0.02% of the images acquired annually in the United States. In contrast, the ImageNet [\[9\]](#page-10-5) project, a large visual dataset designed for use in visual object recognition research, has more than 14 million images annotated in more than 20,000 categories.

<sup>\*</sup>Equal contribution.

In this work, we design a framework that uses a centralized generator and distributed discriminators to learn the generative distribution of a target dataset. In the health-entity learning context, our proposed framework can aggregate datasets from multiple hospitals to obtain a faithful estimate of the overall distribution. The specific task (e.g., segmentation or classification) can then be accomplished locally by acquiring data from the generator. Learning from synthetic images has several advantages:

**Privacy mechanism:** The central generator has only limited information about the raw images in each hospital. When the generator communicates with the discriminators in the hospitals, only information about the synthetic images is transmitted. Such a mechanism prohibits the central generator's direct access to the raw data and thus secures privacy.

**Synthetic data sharing:** The nature of synthetic data allows the generator to share synthetic images without restriction. Such an aggregation-and-redistribution system can build a publicly accessible and faithful medical database. This inexhaustible database can benefit researchers and practitioners and boost the development of medical intelligence.

**Adaptivity to architecture updates:** Machine learning architectures evolve rapidly to achieve better performance through novel loss functions [\[48,](#page-12-2) [17\]](#page-10-6), network modules [\[18,](#page-10-7) [45,](#page-11-4) [37,](#page-11-5) [42\]](#page-11-6), or optimizers [\[46,](#page-12-3) [54,](#page-12-4) [32,](#page-11-7) [56,](#page-12-5) [57\]](#page-12-6). We can reasonably expect that a model that is well trained today may become outdated or underperform in the future as new architectures are invented. Since privacy-sensitive data may not always remain accessible, even if we trained a model on these datasets once, we could not embrace new architectures later to achieve higher performance. Instead of training a task-specific model, our proposed method trains a generator that learns from distributed discriminators. Specifically, a generator learns the distribution of the private datasets and produces synthetic images for future use, without concern about losing access to the proprietary datasets.

To the best of our knowledge, we are the first to use a GAN to address the medical privacy problem. Briefly, our contributions are threefold: (1) a distributed asynchronized-discriminator GAN (AsynDGAN) is proposed to learn the distribution of real images across different datasets without sharing patients' raw data; (2) AsynDGAN achieves higher performance than models that learn from the real images of only one dataset; (3) AsynDGAN achieves almost the same performance as a model that learns from the real images of all datasets.

# Method

An overview of the proposed architecture is shown in Figure 1. The central generator, denoted as G, takes task-specific inputs (segmentation masks in our experiments) and generates synthetic images to fool the discriminators. The local discriminators, denoted as $D^1$ to $D^n$, learn to differentiate between the local real images and the synthetic images from G. Due to the sensitivity of patients' images, the real images in each medical center may not be accessible from outside. Our architecture naturally avoids such a limitation because only the specific discriminator within the same medical entity needs to access the real images. In this way, the real images in local medical entities are kept private. Only synthetic images, masks, and gradients need to be transferred between the central generator and the medical entities.

The generator will learn the joint distribution from the different datasets that belong to different medical entities. It can then be used as an image provider to train a specific task, because we expect the synthetic images to share the same or a similar distribution as the real images. In the experiments, we apply the AsynDGAN framework to segmentation tasks to illustrate its effectiveness. The U-Net [45] is used as the segmentation model, and details about G and the Ds designed for segmentation tasks are described below.

![](_page_3_Picture_0.jpeg)

Figure 1: The overall structure of AsynDGAN. It contains two parts: a central generator G and multiple distributed discriminators D<sup>1</sup>, D<sup>2</sup>, · · · , D<sup>n</sup>, one in each medical entity. G takes a task-specific input (segmentation masks in our experiments) and outputs synthetic images. Each discriminator learns to differentiate between the real images of its medical entity and the synthetic images from G. The well-trained G is then used as an image provider to train a task-specific model (segmentation in our experiments).

<span id="page-3-0"></span>

For segmentation tasks, the central generator is an encoder-decoder network that consists of two stride-2 convolutions (for downsampling), nine residual blocks [\[15\]](#page-10-15), and two transposed convolutions. All non-residual convolutional layers are followed by batch normalization [\[20\]](#page-10-16) and a ReLU activation. All convolutional layers use 3 × 3 kernels except the first and last layers, which use 7 × 7 kernels.
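
As a concrete sketch of this layout, the generator can be rendered in PyTorch as follows. This is our own minimal reading of the description above, not the authors' released code; the channel widths and the final Tanh are assumptions.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    # 3x3 conv -> BN -> ReLU -> 3x3 conv -> BN, with an identity skip connection
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )

    def forward(self, x):
        return x + self.body(x)

class Generator(nn.Module):
    # 7x7 stem, two stride-2 convs (downsampling), nine residual blocks,
    # two transposed convs (upsampling), 7x7 output layer
    def __init__(self, in_ch=1, out_ch=1, base=64):
        super().__init__()
        layers = [nn.Conv2d(in_ch, base, 7, padding=3), nn.BatchNorm2d(base), nn.ReLU(True)]
        for mult in (1, 2):  # two stride-2 convolutions
            layers += [nn.Conv2d(base * mult, base * mult * 2, 3, stride=2, padding=1),
                       nn.BatchNorm2d(base * mult * 2), nn.ReLU(True)]
        layers += [ResidualBlock(base * 4) for _ in range(9)]
        for mult in (4, 2):  # two transposed convolutions
            layers += [nn.ConvTranspose2d(base * mult, base * mult // 2, 3, stride=2,
                                          padding=1, output_padding=1),
                       nn.BatchNorm2d(base * mult // 2), nn.ReLU(True)]
        layers += [nn.Conv2d(base, out_ch, 7, padding=3), nn.Tanh()]
        self.net = nn.Sequential(*layers)

    def forward(self, mask):
        return self.net(mask)

G = Generator()
x = torch.randn(2, 1, 64, 64)  # a batch of (toy) segmentation masks
y = G(x)                       # synthetic images, same spatial size as the input
```

The symmetric stride-2 down/up path keeps the output at the input resolution, so the synthetic image aligns pixel-for-pixel with the conditioning mask.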

In the AsynDGAN framework, the discriminators are distributed over N nodes (hospitals, mobile devices). Each discriminator D<sup>j</sup> only has access to the data stored in the j-th node; thus the discriminators are trained in an asynchronous fashion. For segmentation, each discriminator has the same structure as in PatchGAN [\[21\]](#page-10-17). The discriminator individually quantifies the real-or-fake value of different small patches in the image. Such an architecture assumes patch-wise independence of pixels in a Markov-random-field fashion [\[30,](#page-11-14) [22\]](#page-10-10) and can capture differences in geometric structures such as background and tumors.
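
A patch-wise discriminator of this kind can be sketched as below. The layer count, channel widths, and the choice to condition D on the mask by channel concatenation are our assumptions in the PatchGAN style, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    # A stack of stride-2 convs ending in a 1-channel map: each output cell is a
    # real/fake score for one receptive-field patch rather than a single scalar.
    def __init__(self, in_ch=2, base=64):  # image concatenated with its mask
        super().__init__()
        layers = [nn.Conv2d(in_ch, base, 4, stride=2, padding=1), nn.LeakyReLU(0.2, True)]
        ch = base
        for _ in range(2):
            layers += [nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1),
                       nn.BatchNorm2d(ch * 2), nn.LeakyReLU(0.2, True)]
            ch *= 2
        layers += [nn.Conv2d(ch, 1, 4, padding=1)]  # per-patch score map
        self.net = nn.Sequential(*layers)

    def forward(self, img, mask):
        return self.net(torch.cat([img, mask], dim=1))

D = PatchDiscriminator()
scores = D(torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64))
# `scores` is a grid of patch scores, not a single real/fake scalar
```

Because the loss is averaged over the score grid, each patch is judged independently, which is exactly the Markov-random-field assumption described above.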

The AsynDGAN is based on the conditional GAN [\[39\]](#page-11-15). The objective of a classical conditional GAN is:

$$\min_{G} \max_{D} V(D, G) = \mathbb{E}_{x \sim s(x)} \mathbb{E}_{y \sim p_{data}(y|x)} [\log D(y|x)] + \mathbb{E}_{\hat{y} \sim p_{\hat{y}}(\hat{y}|x)} [\log (1 - D(\hat{y}|x))] \tag{1}$$

where D represents the discriminator and G the generator. G aims to approximate the conditional distribution $p_{data}(y|x)$ so that D cannot tell whether the data are 'fake' or not. The hidden variable x is an auxiliary variable that controls the mode of the generated data [\[39\]](#page-11-15). In practice, x is usually a class label or a mask that provides information about the data to be generated. Following previous works ([\[33,](#page-11-16) [21\]](#page-10-17)), instead of providing Gaussian noise z as an input to the generator, we provide noise only in the form of dropout, which is applied to several layers of the generator of AsynDGAN at both training and test time.
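
In PyTorch, for instance, keeping dropout stochastic at test time amounts to putting only the dropout modules back into training mode after switching the model to eval mode (a generic illustration, not the authors' code):

```python
import torch
import torch.nn as nn

# Toy layer pair: a linear layer followed by dropout.
layer = nn.Sequential(nn.Linear(8, 8), nn.Dropout(p=0.5))

layer.eval()                       # puts every submodule in eval mode...
for m in layer.modules():
    if isinstance(m, nn.Dropout):  # ...then re-enable only the dropout noise
        m.train()

x = torch.ones(1, 8)
out1, out2 = layer(x), layer(x)    # outputs remain stochastic across calls
```

This is the standard way to use dropout as the generator's noise source: deterministic layers behave as at inference, while dropout keeps injecting randomness.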

![](_page_4_Figure_0.jpeg)

<span id="page-4-0"></span>Figure 2: The optimization process of AsynDGAN. The solid arrows show the forward pass, and the dotted arrows show the gradient flow during the backward pass of our iterative update procedure. A solid block indicates that it is being updated, while dotted blocks are frozen during that update step. Red and blue rectangles are the source mask and the target real image, respectively.

In the AsynDGAN framework, the generator is supervised by N different discriminators. Each discriminator is associated with one subset of the datasets. It is natural to quantify such a setting using a mixture distribution over the auxiliary variable x: instead of a single $s(x)$, the distribution of x becomes $s(x) = \sum_{j \in [N]} \pi_j s_j(x)$. For each sub-distribution there is a corresponding discriminator $D_j$, which only receives data generated from the prior $s_j(x)$. Therefore, the loss function of our AsynDGAN becomes:

$$\min_{G} \max_{D_1:D_N} V(D_{1:N}, G) = \sum_{j \in [N]} \pi_j \{ \mathbb{E}_{x \sim s_j(x)} \mathbb{E}_{y \sim p_{data}(y|x)} [\log D_j(y|x)] + \mathbb{E}_{\hat{y} \sim p_{\hat{y}}(\hat{y}|x)} [\log (1 - D_j(\hat{y}|x))] \} \tag{2}$$

The optimization process of AsynDGAN is shown in Figure 2. In each iteration, a randomly sampled tuple (x, y) is provided to the system. Here, x denotes the input label, which is observed by the generator, and y is the real image, accessible only by the medical entities. The network blocks are then updated iteratively in the following order:

- 1) D-update: calculate the adversarial loss for the $j$-th discriminator $D_j$ and update $D_j$, for $j = 1, 2, \cdots, N$.
- 2) G-update: after all discriminators are updated, update G using the aggregated adversarial loss $\sum_{j=1}^{N} loss(D_j)$.

This process is formulated as Algorithm 1. We apply the cross-entropy loss in the algorithm and further analyze the AsynDGAN framework in this setting. We stress that the framework is general and can be combined with variants of the GAN loss, including the Wasserstein distance and classical regression losses [3, 31].

Algorithm 1:

- for number of total training iterations do
  - for number of iterations to train the discriminators do
    - for each node $j \in [N]$ do
      - Sample a minibatch of $m$ auxiliary variables $\{x_1^j,...,x_m^j\}$ from $s_j(x)$ and send them to the generator $G$.
      - Generate $m$ fake data $\{\hat{y}_1^j,...,\hat{y}_m^j\} \sim q(\hat{y}|x)$ with $G$ and send them to node $j$.
      - Update the discriminator by ascending its stochastic gradient: $$\nabla_{\theta_{D_j}} \frac{1}{m} \sum_{i=1}^{m} \left[ \log D_j(y_i^j) + \log(1 - D_j(\hat{y}_i^j)) \right].$$
    - end for
  - end for
  - for each node $j \in [N]$ do
    - Sample a minibatch of $m$ auxiliary variables $\{x_1^j,...,x_m^j\}$ from $s_j(x)$ and send them to the generator $G$.
    - Generate the corresponding $m$ fake data $\{\hat{y}_1^j,...,\hat{y}_m^j\} \sim q(\hat{y}|x)$ and send them to node $j$; discriminator $D_j$ passes its error back to $G$.
  - end for
  - Update $G$ by descending its stochastic gradient: $$\nabla_{\theta_G} \frac{1}{Nm} \sum_{j=1}^{N} \sum_{i=1}^{m} \log(1 - D_j(\hat{y}_i^j)).$$
- end for

The gradient-based updates can use any standard gradient-based learning rule; we used momentum in our experiments.
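
As an illustration of this update order, here is a toy end-to-end sketch. Tiny MLPs and random tensors stand in for the real generator, discriminators, and hospital data, and the G-step uses the non-saturating BCE form common in practice rather than the $\log(1 - D)$ form; both are our simplifications.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# One shared generator; one discriminator per node. Each D_j sees only its
# node's private real data; only synthetic samples and gradients cross nodes.
G = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 8))   # mask -> image
Ds = [nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1)) for _ in range(3)]
opt_G = torch.optim.SGD(G.parameters(), lr=0.05, momentum=0.9)     # momentum, as in the text
opt_Ds = [torch.optim.SGD(D.parameters(), lr=0.05, momentum=0.9) for D in Ds]
bce = nn.BCEWithLogitsLoss()

node_real = [torch.randn(16, 8) for _ in range(3)]    # private data y at each node
node_masks = [torch.randn(16, 4) for _ in range(3)]   # auxiliary variables x^j

for step in range(5):
    # D-update: each node trains its own discriminator on local real vs. fake
    for j, (D, opt) in enumerate(zip(Ds, opt_Ds)):
        fake = G(node_masks[j]).detach()              # only synthetic data leaves G
        loss_D = bce(D(node_real[j]), torch.ones(16, 1)) + \
                 bce(D(fake), torch.zeros(16, 1))
        opt.zero_grad(); loss_D.backward(); opt.step()
    # G-update: aggregate the adversarial losses across all discriminators
    loss_G = sum(bce(Ds[j](G(node_masks[j])), torch.ones(16, 1)) for j in range(3))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
```

Note that `detach()` in the D-step mirrors the "frozen" blocks of Figure 2: gradients flow into G only during the G-update.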
2006.03465/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
+ <mxfile host="app.diagrams.net" modified="2020-05-09T17:14:16.884Z" agent="5.0 (X11)" version="13.0.9" etag="Q_zTSkSqoeepaTmmeJ32" type="google"><diagram id="65v913z9W6DuIJAY2mVv">5VnbcpswEP0az7QvHQQI8KPjpG1mnDZTd3p5lGENajBihHzr11eAMFe7ngy2E+fJ0tHqdvaspDUDY7zYfOIkDh6YB+FA17zNwLgd6DqybCR/UmSbI0NtmAM+p54yKoEp/QsK1BS6pB4kNUPBWChoXAddFkXgihpGOGfrutmchfVZY+JDC5i6JGyjP6knghx1sFbin4H6QTEz0lTLghTGCkgC4rF1BTLuBsaYMyby0mIzhjAlr+Al7/dxT+tuYRwicUwHXS1DbIu9gSe3qqqMi4D5LCLhXYnecLaMPEgH0GSttJkwFksQSfAPCLFVfiNLwSQUiEWoWuXa+PaX6p9VfqeVD7io3m6qjbdbVcvXmi6wttuELbmrIEOpgXAfFAG4zQnaMS0lCmwBchZpwiEkgq7qoxOlFX9np7qOOCfbikHMaCSSysiPKSANNjsV5CMq0ZtOzTWykI9Y1CpLK6HMfd2uVFtfkXCpdjDQrVDu+MajK1n00+J9FC/TNX2dJcBXcrcsKqzk+BXDjr7vphnR74umGW8aN4doSKsunHVABUxjkrluLQ+KukjmNAzHLGQ862t4GBzPlHgiOHuCSoujzwzL2s23Ai5gszcc9rhedbDrLsJFmK4rUa6goBLgBdallYp/D7rPvLJIxO1IRNrR/jg6FI+lF7ei4/7hcTQZSewLiDXjTwfkil6oXJGF60cKbusVabgtWKsHwVpXJlj77V4d9smvju8Zq5e5OuZzsFy3KxY9ezjTtH5isXF1GNb5rg7nyiKxUHvt7ngpsWgejsX/26N+Y3f4FpxvXInzsd2v84vhKyf3hAjIlvANYg6JLKuj+tU9xvVhg2yz43WDOl43fZypCLUYe+Vx1ZEaI6f/uDqa4Iv89fBcouwLEtVO7B9ZSN10sgnZAk8OxPYzMhcCzrzztWS5Dszm/cS2qdczF9wV23rHe6mPzAVdW66NupJt64KSbWfbI0/qJSH8RKo9zxu/qVrDOadqrRapY04FddO7Xvov8nu948/DKHYaaZN9vrQJtdPekdv/W+ks52mTR9yhzJPx6LR4/JEVXz+N5ulolNXyM0/+9C8/lhl3/wA=</diagram></mxfile>
2006.03465/main_diagram/main_diagram.pdf ADDED
Binary file (12.7 kB).
 
2006.03465/paper_text/intro_method.md ADDED
@@ -0,0 +1,135 @@
# Method

The goal of the RL network is to simultaneously minimize the PPO loss $\mathcal{L}_\text{RL}$ and the confusion loss $\mathcal{L}_\text{Conf}$. It samples observations, actions, and rewards from the source environment and observations from the target environment. It then computes and minimizes these losses. $f$ is a function that approximates the Wasserstein distance between the source-observation and target-observation distributions; thus, it should be trained to convergence for every update of the RL network. As in Wasserstein GAN, it is optimized for $n_\text{critic}$ steps for each update of the RL network. This process is outlined in Algorithm [\[alg:adversarial-ppo\]](#alg:adversarial-ppo). Note that we use the weight-clipping method defined in [@wgan] rather than the gradient-penalty method defined in [@wgangp] to directly test the effect of the novel Wasserstein Confusion loss term. We believe that combining Wasserstein Confusion with the gradient penalty is a promising direction for future work.

::: algorithm
:::

<figure id="fig:network-arch" data-latex-placement="h">
<img src="figures/network_arch.png" style="width:75.0%" />
<figcaption>Network architecture. Layers are represented as rounded rectangles. Blue indicates use in training the RL policy, orange indicates use in training the critic, and green indicates use in training both. Note that the network architecture mirrors that of domain confusion and randomization but is modified to work with Reinforcement Learning rather than Supervised Learning <span class="citation" data-cites="domainconfusion adda dann"></span>. The combination of the green and blue networks is identical in architecture to the IMPALA network used to benchmark the Procgen Environments and mirrors that used in Robust Domain Randomization <span class="citation" data-cites="procgen robustdr"></span>. Only the green and blue networks were used when measuring the performance of PPO and Robust Domain Randomization.</figcaption>
</figure>

There are four algorithms that we implement: PPO, Robust Domain Randomization, VR Goggles, and WAPPO. All four are built on top of PPO [@ppo]. We use the high-quality, open-source implementation of PPO provided by OpenAI Baselines [@baselines]. Furthermore, we use the neural architecture and learning rate from [@procgen] as a baseline. This architecture consists of the CNN component of the IMPALA network [@impala], which is shared between the policy and value prediction components of PPO. The policy and value networks then branch, and each has one fully connected layer that outputs the policy or value, respectively. As in [@procgen], we use a learning rate of $5 \times 10^{-4}$. We evaluated both the Adam optimizer [@adam] and the RMSProp optimizer [@rmsprop] but found a negligible difference.

For PPO, Robust Domain Randomization, and VR Goggles, the network does not have an adversary. As such, we use the green and blue sections depicted in Figure [7](#fig:network-arch) to train the PPO agent used in these algorithms. The VR Goggles training process is the same as that used in [@vrgoggles]. Specifically, we collect a dataset of $2000$ source-domain images and $2000$ target-domain images and train the VR Goggles translation network with a learning rate of $2 \times 10^{-4}$ for $50$ epochs. As in [@vrgoggles], we found no performance gain from using a larger dataset or training for more epochs. As [@vrgoggles] does not provide an implementation, we re-implement this method by adding their novel shift loss to the open-source CycleGAN implementation [@cyclegan]. For Robust Domain Randomization, we implement the regularization loss and use a regularization weight of $10$, as described in [@robustdr].

For WAPPO, we use the entire network depicted in Figure [7](#fig:network-arch). The green and blue sections are optimized according to the PPO loss $\mathcal{L}_\text{PPO}$, and the green and orange sections are optimized according to the Wasserstein Confusion loss $\mathcal{L}_\text{Conf}$. As in [@wgan], we take $5$ gradient steps of the adversary network per step of the RL network. The adversarial critic network consists of $8$ dense layers of width $512$, separated by Leaky ReLU [@leakyrelu] activation functions.
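
A sketch of this critic, together with the WGAN-style weight clipping mentioned earlier, is below. The layer count and width follow the text; the input feature dimension, clip threshold, learning rate, and optimizer choice are our assumptions.

```python
import torch
import torch.nn as nn

def make_critic(feat_dim=256, width=512, n_layers=8):
    # 8 dense layers separated by Leaky ReLU; the last layer maps to a scalar
    # Wasserstein score (one reading of the description in the text).
    layers, d = [], feat_dim
    for _ in range(n_layers - 1):
        layers += [nn.Linear(d, width), nn.LeakyReLU(0.01)]
        d = width
    layers += [nn.Linear(d, 1)]
    return nn.Sequential(*layers)

critic = make_critic()
source_feat = torch.randn(32, 256)  # encoder features from the source domain
target_feat = torch.randn(32, 256)  # encoder features from the target domain

# One critic step: maximize E[f(source)] - E[f(target)], then clip weights
opt = torch.optim.RMSprop(critic.parameters(), lr=5e-5)
loss = -(critic(source_feat).mean() - critic(target_feat).mean())
opt.zero_grad(); loss.backward(); opt.step()
for p in critic.parameters():
    p.data.clamp_(-0.01, 0.01)      # WGAN weight clipping, as in [@wgan]
```

In training, this critic step would be repeated $n_\text{critic} = 5$ times per RL update, with the RL encoder then trained to reduce the critic's score gap.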

Two different domains for each of the OpenAI Procgen environments are depicted in Figure [8](#fig:procgen-samples). To evaluate the transfer performance of an algorithm, it is trained on one domain and evaluated on the other. Note that Figure [8](#fig:procgen-samples) shows one transfer task per environment, but the results in Figures [2](#fig:cartpole-training), [5](#fig:procgen-training-easy), and [6](#fig:procgen-training-hard) are evaluated across multiple trials.

<figure id="fig:procgen-samples" data-latex-placement="h">
<figure>
<div class="center">
<p><img src="figures/procgen-bigfish-1.png" style="width:40.0%" alt="image" /> <img src="figures/procgen-bigfish-2.png" style="width:40.0%" alt="image" /></p>
</div>
<figcaption>Bigfish.</figcaption>
</figure>
<figure>
<div class="center">
<p><img src="figures/procgen-bossfight-1.png" style="width:40.0%" alt="image" /> <img src="figures/procgen-bossfight-2.png" style="width:40.0%" alt="image" /></p>
</div>
<figcaption>Bossfight.</figcaption>
</figure>
<p><br /></p>
<figure>
<div class="center">
<p><img src="figures/procgen-caveflyer-1.png" style="width:40.0%" alt="image" /> <img src="figures/procgen-caveflyer-2.png" style="width:40.0%" alt="image" /></p>
</div>
<figcaption>Caveflyer.</figcaption>
</figure>
<figure>
<div class="center">
<p><img src="figures/procgen-chaser-1.png" style="width:40.0%" alt="image" /> <img src="figures/procgen-chaser-2.png" style="width:40.0%" alt="image" /></p>
</div>
<figcaption>Chaser.</figcaption>
</figure>
<p><br /></p>
<figure>
<div class="center">
<p><img src="figures/procgen-climber-1.png" style="width:40.0%" alt="image" /> <img src="figures/procgen-climber-2.png" style="width:40.0%" alt="image" /></p>
</div>
<figcaption>Climber.</figcaption>
</figure>
<figure>
<div class="center">
<p><img src="figures/procgen-coinrun-1.png" style="width:40.0%" alt="image" /> <img src="figures/procgen-coinrun-2.png" style="width:40.0%" alt="image" /></p>
</div>
<figcaption>Coinrun.</figcaption>
</figure>
<p><br /></p>
<figure>
<div class="center">
<p><img src="figures/procgen-dodgeball-1.png" style="width:40.0%" alt="image" /> <img src="figures/procgen-dodgeball-2.png" style="width:40.0%" alt="image" /></p>
</div>
<figcaption>Dodgeball.</figcaption>
</figure>
<figure>
<div class="center">
<p><img src="figures/procgen-fruitbot-1.png" style="width:40.0%" alt="image" /> <img src="figures/procgen-fruitbot-2.png" style="width:40.0%" alt="image" /></p>
</div>
<figcaption>Fruitbot.</figcaption>
</figure>
<p><br /></p>
<figure>
<div class="center">
<p><img src="figures/procgen-heist-1.png" style="width:40.0%" alt="image" /> <img src="figures/procgen-heist-2.png" style="width:40.0%" alt="image" /></p>
</div>
<figcaption>Heist.</figcaption>
</figure>
<figure>
<div class="center">
<p><img src="figures/procgen-jumper-1.png" style="width:40.0%" alt="image" /> <img src="figures/procgen-jumper-2.png" style="width:40.0%" alt="image" /></p>
</div>
<figcaption>Jumper.</figcaption>
</figure>
<p><br /></p>
<figure>
<div class="center">
<p><img src="figures/procgen-leaper-1.png" style="width:40.0%" alt="image" /> <img src="figures/procgen-leaper-2.png" style="width:40.0%" alt="image" /></p>
</div>
<figcaption>Leaper.</figcaption>
</figure>
<figure>
<div class="center">
<p><img src="figures/procgen-maze-1.png" style="width:40.0%" alt="image" /> <img src="figures/procgen-maze-2.png" style="width:40.0%" alt="image" /></p>
</div>
<figcaption>Maze.</figcaption>
</figure>
<p><br /></p>
<figure>
<div class="center">
<p><img src="figures/procgen-miner-1.png" style="width:40.0%" alt="image" /> <img src="figures/procgen-miner-2.png" style="width:40.0%" alt="image" /></p>
</div>
<figcaption>Miner.</figcaption>
</figure>
<figure>
<div class="center">
<p><img src="figures/procgen-ninja-1.png" style="width:40.0%" alt="image" /> <img src="figures/procgen-ninja-2.png" style="width:40.0%" alt="image" /></p>
</div>
<figcaption>Ninja.</figcaption>
</figure>
<p><br /></p>
<figure>
<div class="center">
<p><img src="figures/procgen-plunder-1.png" style="width:40.0%" alt="image" /> <img src="figures/procgen-plunder-2.png" style="width:40.0%" alt="image" /></p>
</div>
<figcaption>Plunder.</figcaption>
</figure>
<figure>
<div class="center">
<p><img src="figures/procgen-starpilot-1.png" style="width:40.0%" alt="image" /> <img src="figures/procgen-starpilot-2.png" style="width:40.0%" alt="image" /></p>
</div>
<figcaption>Starpilot.</figcaption>
</figure>
<figcaption>Observations from each of the 16 Procgen Environments. Two different domains are shown for each environment.</figcaption>
</figure>

[^1]: `joshnroy.github.io`
2006.04166/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
+ <mxfile host="app.diagrams.net" modified="2020-04-27T12:29:31.466Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.122 Safari/537.36" etag="Q0_zdGVqdBMqsmdWgnxi" version="13.0.2" type="google"><diagram id="Ve6IYk6dTw9EO8_dQO0i" name="Page-1">7VtLc9owEP41PpaxJT/kYwlJ25n2lEOTU0bFApwIy2PEK7++ki1jwDYlU4IEhgvS6mF59/OnXT0seDddfctwOvnFIkItYEcrCw4sABwHBeJPStaFJPTCQjDO4khVqgSP8TtRQltJ53FEZjsVOWOUx+mucMiShAz5jgxnGVvuVhsxuvvUFI9JTfA4xLQu/R1HfFJIkWdX8u8kHk/KJzu2KpnisrLqYjbBEVsWorwOvLfgXcYYL1LT1R2hUnmlXoqOHlpKNwPLSMKPaRCg0Xz1/hLivh/h1ygdRD8WX4CH1Oj4unxlEgkNqGzCEvHXz9g8iYjsyBY5lvEJG7ME05+MpULoCOEr4Xyt7IfnnAnRhE+pKiWrmD+p5jL9LNM9T+UGq62iwbrMJDxbP5UdyMzzdknVKM9VraKv0u7V4IXkIaZUlY9YwtUwXZmf8Yy9bWwLhaRQiNRCq6ZLk7J5NiSH1OsDhVmcjQk/WBNtMCE+JsKmRLyVaJkRinm82B0LVqgeb+pVhhcJZfsP4SDUhgPnOBQouFw7DgJbKw58+6w4cG44aMOBoxcHjjYcHD0v2J3AAdSLAzXQBaZz9SwL+JRLDaU4EemxTFu+/fziWJI9ikLxtJ3yGpgoFS6cNMNyEnPymOJcYUvhRe4iBM/Swq8bxSuJtAbDbJkO2puHLUjGyeqwqepqLRsAVxGh8ls3Dumy8gJDJZpsOYCl7BMsAW+e2md+kcGlMLN7Y2YzcODrxYF389jNwIHmGdq/eexm4MDVi4PgeE8NXJGn5oaeaZ4aqlvCt59erKDvWMHAaoqxDVeyvecOu7Z2JYc1Jfd6vZpixTvzPQ3SeJyI9FC8O8mEQGomHmL6VRVM4yiiOWeSWfyO/+RdSUWmLE54/iJe3/IGsi9Bk7NC1c4HmOn/bOHAfVt4NVt4DbbwPs0Wgd0K+LcLBbyDTAN84BxQMrhYZtlozBxF15c8OsIssAZ67cwCW0EPLpVa3Bp/a0e8e0jLl8stnnFeS+B1lVt887wWvxX18FK5xTfPbamHpb3q1wHkA3sP+eVBjW2bOO55od8QoQIxCbg5+qcbzs9l5/oIgH8ihQPzJtiuBqsAAtNov2TIBtrPgX9m6j8Z6mFgGupR184QHDS0W83xp1mZRio2+vfKNNJ7pggBbTjQs0NhKg4CvWcMUUMw37ZDIdm4JOKr2KcA+yuK+vcpUHvYDy857A+MW1JEnQ37kXFLiqg97HcvNewPjYt4ynsfnTmtdt45Pyzvwph+KgHpu1/SBd/vAzjQe1oNnfd+SddigONxgPTyQdgwuRq3JtARJOhlhHKgx0SD8HriQCfY3w7QHgeGoCk66fIWTVOUcqotGpGtLv3mZVtXp+H9Xw==</diagram></mxfile>
2006.04166/main_diagram/main_diagram.pdf ADDED
Binary file (31.4 kB).
 
2006.04166/paper_text/intro_method.md ADDED
@@ -0,0 +1,257 @@
# Introduction

Undirected graphical models, also known as *Markov Random Fields* (MRFs), are probabilistic models in which a set of random variables is described with the help of an undirected graph, such that the graph structure corresponds to the dependence relations between the variables. Under mild conditions, the distribution of the random variables is determined by potentials associated with each clique of the graph [12].

The joint distribution of any set of random variables can be represented as an MRF on a complete graph. However, MRFs become useful when the graph has nontrivial structure, such as bounded degree or bounded clique size. In such cases, learning and inference can often be carried out with greater efficiency. Since many phenomena of practical interest can be modelled as MRFs (e.g., magnetism [5], images [19], gene interactions and protein interactions [27, 8]), it is of great interest to understand the complexity, both statistical and computational, of algorithmic tasks in these models.

The expressive power of graphical models is significantly strengthened by the presence of latent variables, i.e., variables that are not observed in samples generated according to the model. However, algorithmic tasks are typically more difficult in models with latent variables. Results on learning models with latent variables include [20] for hidden Markov models, [7] for tree graphical models, [6] for Gaussian graphical models, and [1] for locally tree-like graphical models with correlation decay.

In this paper we focus on the task of learning *Restricted Boltzmann Machines* (RBMs) [25, 9, 13], which are a family of undirected graphical models with latent variables. The graph of an RBM is bipartite, with all observed variables in one layer and all latent variables in the other. This encodes the fact that the variables in one layer are jointly independent conditioned on the variables in the other layer. In practice, RBMs are used to model a set of observed features as being influenced by some unobserved and independent factors; these correspond to the observed variables and the latent variables, respectively. RBMs are useful in common factor analysis tasks such as collaborative filtering [23] and topic modelling [14], as well as in applications in domains as varied as speech recognition [15], healthcare [29], and quantum mechanics [21].

<sup>\*</sup>Massachusetts Institute of Technology. Department of EECS. Email: guy@mit.edu.

<sup>†</sup>Massachusetts Institute of Technology. Department of EECS. Email: rbuhai@mit.edu. Current affiliation: ETH Zurich. Computer Science Department. Email: rares.buhai@inf.ethz.ch.
16
+
17
+ In formalizing the learning problem, a challenge is that there are infinitely many RBMs that induce the same marginal distribution of the observed variables. To sidestep this non-identifiability issue, the literature on learning RBMs focuses on learning the marginal distribution itself. This marginal distribution is, clearly, an MRF. Call the *order* of an MRF the size of the largest clique that has a potential. Then, more specifically, it is known that the marginal distribution of the observed variables is an MRF of order at most d, where d is the maximum degree of a latent variable in the RBM. Hence, one way to learn an RBM is to simply apply algorithms for learning MRFs. The best current algorithms for learning MRFs have time complexity $\tilde{O}(n^r)$ , where r is the order of the MRF [11, 17, 26]. Applying these algorithms to learning RBMs therefore results in time complexity $\tilde{O}(n^d)$ . We note that these time complexities hide the factors that do not depend on n.
18
+
19
+ This paper is motivated by the following basic question:
20
+
21
+ In what settings is it possible to learn RBMs with time complexity substantially better than $\tilde{O}(n^d)$ ?
22
+
23
+ Reducing the runtime of learning arbitrary MRFs of order r to below $n^{\Omega(r)}$ is unlikely, because learning such MRFs subsumes learning noisy parity over r bits [2], and it is widely believed that learning r-parities with noise (LPN) requires time $n^{\Omega(r)}$ [16]. For ferromagnetic RBMs, i.e., RBMs with non-negative interactions, [4] gave an algorithm with time complexity $\tilde{O}(n^2)$ . In the converse direction, [4] gave a general reduction from learning MRFs of order r to learning (non-ferromagnetic) RBMs with maximum degree of a latent variable r.
24
+
25
+ In other words, the problem of learning RBMs is just as challenging as for MRFs, and therefore learning general RBMs cannot be done in time less than $n^{\Omega(d)}$ without violating conjectures about LPN.
26
+
27
+ The reduction in [4] from learning order r MRFs to learning RBMs uses an *exponential* in r number of latent variables to represent each neighborhood of the MRF. Thus, there is hope that RBMs with *sparse* latent variables are in fact easier to learn than general MRFs. The results of this paper demonstrate that this is indeed the case.
28
+
29
+ Let the MRF neighborhood of an observed variable be its neighborhood in the MRF of the marginal distribution of the observed variables. Let s be the maximum number of latent variables connected to the MRF neighborhood of an observed variable. We give an algorithm with time complexity $\tilde{O}(n^{2^s+1})$ that recovers with high probability the MRF neighborhoods of all observed variables. This represents an improvement over current algorithms when $s < \log_2(d-1)$ .
30
+
31
+ The reduction in time complexity is made possible by the following key structural result: if the mutual information $I(X_u; X_I | X_S)$ is large for some observed variable $X_u$ and some subsets of observed variables $X_I$ and $X_S$ , then there exists a subset I' of I with $|I'| \leq 2^s$ such that $I(X_u; X_{I'} | X_S)$ is also large. This result holds because of the special structure of the RBM, in which, with few latent variables connected to the neighborhood of any observed variable, not too many of the low-order potentials of the induced MRF can be cancelled.
32
+
33
+ Our algorithm is an extension of the algorithm of [11] for learning MRFs. To find the neighborhood of a variable $X_u$ , their algorithm iteratively searches over all subsets of variables $X_I$ with $|I| \le d-1$ for one with large mutual information $I(X_u; X_I | X_S)$ , which is then added to the current set of neighbors $X_S$ . Our structural result implies that it is sufficient to search over subsets $X_I$ with $|I| \le 2^s$ , which reduces the time complexity from $\tilde{O}(n^d)$ to $\tilde{O}(n^{2^s+1})$ .
34
+
35
+ For our algorithm to be advantageous, it is necessary that $s < \log_2(d-1)$ . Note that s is implicitly also an upper bound on the maximum degree of an observed variable in the RBM. Figure 1 shows an example of a class of RBMs for which our assumptions are satisfied. In this example, s can be made arbitrarily smaller than d, n, and the number of latent variables.
36
+
37
+ The sample complexity of our algorithm is the same as that of [11], with some additional factors due to working with subsets of size at most $2^s$ . We extended [11] instead of one of [17, 26], which have better sample complexities, because our main goal was to improve the time complexity, and we found [11] the most amenable to extensions in
38
+
39
+ ![](_page_2_Figure_0.jpeg)
40
+
41
+ Figure 1: Class of RBMs with mk + k observed variables, m latent variables, d = 2k, and s = 4. The X variables represent observed variables, the Y variables represent latent variables, and the edges represent non-zero interactions between variables. The "$\cdots$" hides variables that have consecutive indices. The variables hidden by "$\cdots$" have the same connections as the variables at the extremes of their respective dots.
42
+
43
+ this direction. The sample complexity necessarily depends on the width (defined in Section 2) and the minimum absolute-value non-zero potential of the MRF of the observed variables [24]. In Appendix F, we show that our sample complexity actually depends on a slightly weaker notion of MRF width than that used in current papers. This modified MRF width has a more natural correspondence with properties of the RBM.
44
+
45
+ The algorithm we described only recovers the structure of the MRF of the observed variables, and not its potentials. However, recovering the potentials is easy after the structure is known: e.g., see Section 6.2 in [4].
46
+
47
+ The second contribution of this paper is an algorithm for learning RBMs with time complexity $\tilde{O}(n^{2^s+1})$ whose sample complexity does not depend on the minimum potential of the MRF of the observed variables. The algorithm is not guaranteed to recover the correct MRF neighborhoods, but is guaranteed to recover a model with small prediction error (a distinction analogous to that between support recovery and prediction error in regression). This result is of interest because all current algorithms depend on the minimum potential, which can be degenerate even when the RBM itself has non-degenerate interactions. Learning graphical models in order to make predictions was considered before in [3] for trees.
48
+
49
+ In more detail, we first give a structure learning algorithm that recovers the MRF neighborhoods corresponding to large potentials. Second, we give a regression algorithm that estimates the potentials corresponding to these MRF neighborhoods. Lastly, we quantify the error of the resulting model for predicting the value of an observed variable given the other observed variables. Overall, we achieve prediction error $\epsilon$ with a sample complexity that scales exponentially with $\epsilon^{-1}$ , and that otherwise has dependencies comparable to our main algorithm.
50
+
51
+ We present now the intuition and techniques behind our structural result. Theorem 1 states an informal version of this result.
52
+
53
+ **Theorem 1** (Informal version of Theorem 4). Fix observed variable u and subsets of observed variables I and S, such that all three are disjoint. Suppose that I is a subset of the MRF neighborhood of u and that $|I| \leq d - 1$ . Then there exists a subset $I' \subseteq I$ with $|I'| \leq 2^s$ such that
54
+
55
+ $$\nu_{u,I'|S} \ge C_{s,d} \cdot \nu_{u,I|S}$$
56
+
57
+ where $C_{s,d} > 0$ depends on s and d, and where $\nu_{u,I'|S}$ and $\nu_{u,I|S}$ are proxies of $I(X_u; X_{I'}|X_S)$ and $I(X_u; X_I|X_S)$, respectively.
58
+
59
+ The formal definition of $\nu$ is in Section 2. For the purposes of this section, one can think of it as interchangeable with the mutual information. Furthermore, this section only discusses how to obtain a point-wise version of the bound, $\nu_{u,I'|S}(x_u,x_{I'}|x_S) \geq C'_{s,d} \cdot \nu_{u,I|S}(x_u,x_{I}|x_S)$ , evaluated at specific $x_u,x_I$ , and $x_S$ . It is not too difficult to extend this result to $\nu_{u,I'|S} \geq C_{s,d} \cdot \nu_{u,I|S}$ .
60
+
61
+ In general, estimating the MRF neighborhood of an observed variable is hard because the low-order information between the observed variables can vanish. In that case, to obtain any information about the distribution, it is necessary to work with high-order interactions of the observed variables. Typically, this translates into large running times.
62
+
63
+ Theorem 1 shows that if there is some high-order $\nu_{u,I|S}$ that is non-vanishing, then there is also some $\nu_{u,I'|S}$ with $|I'| \leq 2^s$ that is non-vanishing. That is, the order up to which all the information can vanish is less than $2^s$ . Or, in other words, RBMs in which all information up to a large order vanishes are complex and require *many* latent variables.
64
+
65
+ To prove this result, we need to relate the mutual information in the MRF neighborhood of an observed variable to the number of latent variables connected to it. This is challenging because the latent variables have a non-linear effect on the distribution of the observed variables. This non-linearity makes it difficult to characterize what is "lost" when the number of latent variables is small.
66
+
67
+ The first main step of our proof is Lemma 7, which expresses $\nu_{u,I|S}(x_u,x_I|x_S)$ as a sum over $2^s$ terms, representing the configurations of the latent variables connected to I. Each term of the sum is a product over the observed variables in I. This expression is convenient because it makes explicit the contribution of the latent variables to $\nu_{u,I|S}(x_u,x_I|x_S)$ . The proof of the lemma is an "interchange of sums", going from sums over configurations of observed variables to sums over configurations of latent variables.
68
+
69
+ The second main step is Lemma 8, which shows that for a sum over m terms of products over n terms, it is possible to reduce the number of terms in the products to m, while decreasing the original expression by at most a factor of $C'_{m,n}$ , for some $C'_{m,n} > 0$ depending on n and m. Combined with Lemma 7, this result implies the existence of a subset I' with $|I'| \leq 2^s$ such that $\nu_{u,I'|S}(x_u, x_{I'}|x_S) \geq C'_{s,d} \cdot \nu_{u,I|S}(x_u, x_{I}|x_S)$ .
70
+
71
+ # Method
72
+
73
+ We start with some general notation: [n] is the set $\{1,...,n\}$ ; $\mathbb{1}\{A\}$ is 1 if the statement A is true and 0 otherwise; $\binom{n}{k}$ is the binomial coefficient $\frac{n!}{k!(n-k)!}$ ; $\sigma(x)$ is the sigmoid function $\sigma(x) = \frac{1}{1+e^{-x}}$ .
74
+
75
+ **Definition 2.** A Markov Random Field<sup>1</sup> of order r is a distribution over random variables $X \in \{-1,1\}^n$ with probability mass function
76
+
77
+ $$\mathbb{P}(X=x) \propto \exp(f(x))$$
78
+
79
+ where f is a polynomial of order r in the entries of x.
80
+
81
+ Because $x \in \{-1, 1\}^n$ , it follows that f is a multilinear polynomial, so it can be represented as
82
+
83
+ $$f(x) = \sum_{S \subseteq [n]} \hat{f}(S) \chi_S(x)$$
84
+
85
+ where $\chi_S(x) = \prod_{i \in S} x_i$ . The term $\hat{f}(S)$ is called the Fourier coefficient corresponding to S, and it represents the potential associated with the clique $\{X_i\}_{i \in S}$ in the MRF. There is an edge between $X_i$ and $X_j$ in the MRF if and only if there exists some $S \subseteq [n]$ such that $i, j \in S$ and $\hat{f}(S) \neq 0$ . Some other relevant notation for MRFs is: let D be the maximum degree of a variable; let $\alpha$ be the minimum absolute-value non-zero Fourier coefficient; let $\gamma$ be the width:
86
+
87
+ $$\gamma := \max_{u \in [n]} \sum_{\substack{S \subseteq [n] \\ u \in S}} |\hat{f}(S)|.$$
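For concreteness, the quantities in Definition 2 can be computed directly on a tiny MRF. The Python sketch below (the Fourier coefficients are hypothetical, chosen only for illustration) materializes the pmf $\mathbb{P}(X = x) \propto \exp(f(x))$ by explicit normalization, which is feasible only for small n:

```python
import itertools
import math

def chi(S, x):
    # chi_S(x) = prod_{i in S} x_i
    return math.prod(x[i] for i in S)

def mrf_pmf(n, fhat):
    # fhat maps frozenset S -> Fourier coefficient of S; returns the full pmf
    # P(X = x) proportional to exp(f(x)) over {-1,1}^n, explicitly normalized.
    weights = {}
    for x in itertools.product((-1, 1), repeat=n):
        f = sum(c * chi(S, x) for S, c in fhat.items())
        weights[x] = math.exp(f)
    Z = sum(weights.values())
    return {x: w / Z for x, w in weights.items()}

# An order-2 MRF on 3 variables (pairwise potentials only); values are illustrative.
fhat = {frozenset({0, 1}): 0.8, frozenset({1, 2}): -0.5}
pmf = mrf_pmf(3, fhat)
```

Here the positive coefficient on $\{0,1\}$ makes configurations where $x_0$ and $x_1$ agree more likely, and the width $\gamma$ of this example is $\max(0.8, 0.8 + 0.5, 0.5) = 1.3$.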
88
+
89
+ **Definition 3.** A Restricted Boltzmann Machine is a distribution over observed random variables $X \in \{-1,1\}^n$ and latent random variables $Y \in \{-1,1\}^m$ with probability mass function
90
+
91
+ $$\mathbb{P}(X = x, Y = y) \propto \exp\left(x^T J y + h^T x + g^T y\right)$$
92
+
93
+ where $J \in \mathbb{R}^{n \times m}$ is an interaction (or weight) matrix, $h \in \mathbb{R}^n$ is an external field (or bias) on the observed variables, and $g \in \mathbb{R}^m$ is an external field (or bias) on the latent variables.
94
+
95
+ <sup>1</sup>This definition holds if each assignment of the random variables has positive probability, which is satisfied by the models considered in this paper.
96
+
97
+ There exists an edge between $X_i$ and $Y_j$ in the RBM if and only if $J_{i,j} \neq 0$ . The resulting graph is bipartite, and all the variables in one layer are conditionally jointly independent given the variables in the other layer. Some other relevant notation for RBMs is: let d be the maximum degree of a latent variable; let $\alpha^*$ be the minimum absolute-value non-zero interaction; let $\beta^*$ be the width:
98
+
99
+ $$\beta^* := \max \left( \max_{i \in [n]} \sum_{j=1}^m |J_{i,j}| + |h_i|, \max_{j \in [m]} \sum_{i=1}^n |J_{i,j}| + |g_j| \right).$$
100
+
101
+ In the notation above, we say that an RBM is $(\alpha^*, \beta^*)$ -consistent. Typically, to ensure that the RBM is non-degenerate, it is required for $\alpha^*$ not to be too small and for $\beta^*$ not to be too large; otherwise, interactions can become undetectable or deterministic, respectively, both of which lead to non-identifiability [24].
102
+
103
+ In an RBM, it is known that there is a lower bound of $\sigma(-2\beta^*)$ and an upper bound of $\sigma(2\beta^*)$ on any probability of the form
104
+
105
+ $$\mathbb{P}(X_u = x_u | E) \quad \text{or} \quad \mathbb{P}(Y_u = y_u | E)$$
107
+
108
+ where E is any event that involves the other variables in the RBM. It is also known that the marginal distribution of the observed variables is given by (e.g., see Lemma 4.3 in [4]):
109
+
110
+ $$\mathbb{P}(X = x) \propto \exp(f(x)) = \exp\left(\sum_{j=1}^{m} \rho(J_j \cdot x + g_j) + h^T x\right)$$
111
+
112
+ where $J_j$ is the j-th column of J and $\rho(x) = \log(e^x + e^{-x})$ . From this, it can be shown that the marginal distribution is an MRF of order at most d.
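The closed form above can be checked numerically: since $\rho(t) = \log(e^t + e^{-t}) = \log(2\cosh t)$, summing out each latent variable column by column must agree exactly with brute-force enumeration over all configurations $y$. A small Python sketch with a hypothetical $J$, $h$, and $g$:

```python
import itertools
import math

def rho(t):
    # rho(t) = log(e^t + e^{-t}) = log(2 cosh t)
    return math.log(math.exp(t) + math.exp(-t))

def marginal_direct(J, h, g, x):
    # Unnormalized P(X = x) via the closed form exp(sum_j rho(J_j . x + g_j) + h . x),
    # where J_j is the j-th column of J.
    n, m = len(J), len(J[0])
    s = sum(h[i] * x[i] for i in range(n))
    for j in range(m):
        s += rho(sum(J[i][j] * x[i] for i in range(n)) + g[j])
    return math.exp(s)

def marginal_bruteforce(J, h, g, x):
    # Unnormalized P(X = x) by summing exp(x^T J y + h^T x + g^T y) over all y.
    n, m = len(J), len(J[0])
    total = 0.0
    for y in itertools.product((-1, 1), repeat=m):
        e = sum(h[i] * x[i] for i in range(n)) + sum(g[j] * y[j] for j in range(m))
        e += sum(x[i] * J[i][j] * y[j] for i in range(n) for j in range(m))
        total += math.exp(e)
    return total

# Hypothetical RBM with n = 3 observed and m = 2 latent variables.
J = [[0.5, -0.3], [0.2, 0.7], [-0.4, 0.1]]
h = [0.1, -0.2, 0.0]
g = [0.3, -0.1]
```

Both functions return the same unnormalized weight, because the sum over each $y_j \in \{-1,1\}$ factorizes into $e^{t_j} + e^{-t_j} = e^{\rho(t_j)}$ with $t_j = J_j \cdot x + g_j$.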
113
+
114
+ We now define s, the maximum number of latent variables connected to the MRF neighborhood of an observed variable:
115
+
116
+ $$s:=\max_{u\in[n]}\sum_{j=1}^m\mathbb{1}\{\exists i\in[n]\setminus\{u\} \text{ and } S\subseteq[n] \text{ s.t. } u,i\in S \text{ and } \hat{f}(S)\neq 0 \text{ and } J_{i,j}\neq 0\}.$$
117
+
118
+ The MRF neighborhood of an observed variable is a subset of the two-hop neighborhood of the observed variable in the RBM; typically the two neighborhoods are identical. Therefore, an upper bound on s is obtained as the maximum number of latent variables connected to the two-hop neighborhood of an observed variable in the RBM.
119
+
120
+ Finally, we define a proxy to the conditional mutual information, which is used extensively in our analysis. For random variables $X_u \in \{-1,1\}, X_I \in \{-1,1\}^{|I|}$ , and $X_S \in \{-1,1\}^{|S|}$ , let
121
+
122
+ $$\nu_{u,I|S} := \mathbb{E}_{R,G} \left[ \mathbb{E}_{X_S} \left[ |\mathbb{P}(X_u = R, X_I = G|X_S) - \mathbb{P}(X_u = R|X_S)\mathbb{P}(X_I = G|X_S) |\right] \right]$$
123
+
124
+ where R and G come from uniform distributions over $\{-1,1\}$ and $\{-1,1\}^{|I|}$ , respectively. This quantity forms a lower bound on the conditional mutual information (e.g., see Lemma 2.5 in [11]):
125
+
126
+ $$\sqrt{\frac{1}{2}I(X_u; X_I|X_S)} \ge \nu_{u,I|S}.$$
127
+
128
+ We also define an empirical version of this proxy, with the probabilities and the expectation over $X_S$ replaced by their averages from samples:
129
+
130
+ $$\hat{\nu}_{u,I|S} := \mathbb{E}_{R,G} \left[ \hat{\mathbb{E}}_{X_S} \left[ \left| \hat{\mathbb{P}}(X_u = R, X_I = G|X_S) - \hat{\mathbb{P}}(X_u = R|X_S) \hat{\mathbb{P}}(X_I = G|X_S) \right| \right] \right].$$
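As a sketch of how $\hat{\nu}_{u,I|S}$ can be computed from samples, here is a direct, unoptimized implementation of the formula above (the variable names are ours; samples are tuples in $\{-1,1\}^n$):

```python
import itertools

def nu_hat(samples, u, I, S):
    # Empirical proxy: average over uniform R in {-1,1} and G in {-1,1}^|I| of the
    # empirical expectation over X_S of |P(X_u=R, X_I=G|X_S) - P(X_u=R|X_S) P(X_I=G|X_S)|.
    combos = [(R, G) for R in (-1, 1)
              for G in itertools.product((-1, 1), repeat=len(I))]
    xS_values = set(tuple(s[j] for j in S) for s in samples)
    total = 0.0
    for R, G in combos:
        for xS in xS_values:
            # Condition on X_S = xS, weighting by its empirical frequency.
            cond = [s for s in samples if tuple(s[j] for j in S) == xS]
            w = len(cond) / len(samples)
            p_joint = sum(s[u] == R and tuple(s[j] for j in I) == G for s in cond) / len(cond)
            p_u = sum(s[u] == R for s in cond) / len(cond)
            p_I = sum(tuple(s[j] for j in I) == G for s in cond) / len(cond)
            total += w * abs(p_joint - p_u * p_I)
    return total / len(combos)
```

On perfectly correlated pairs this evaluates to $0.25$, and on a uniform (independent) sample it evaluates to $0$, matching the intuition that $\hat{\nu}$ detects conditional dependence.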
131
+
132
+ To find the MRF neighborhood of an observed variable u (i.e., observed variable $X_u$ ; we use the index and the variable interchangeably when no confusion is possible), our algorithm takes the following steps, similar to those of the algorithm of [11]:
133
+
134
+ - 1. Fix parameters $s$, $\tau'$, $L$. Fix observed variable $u$. Set $S := \emptyset$.
135
+ - 2. While $|S| \leq L$ and there exists a set of observed variables $I \subseteq [n] \setminus \{u\} \setminus S$ of size at most $2^s$ such that $\hat{\nu}_{u,I|S} > \tau'$, set $S := S \cup I$.
136
+ - 3. For each $i \in S$, if $\hat{\nu}_{u,i|S \setminus \{i\}} < \tau'$, remove $i$ from $S$.
137
+ - 4. Return set $S$ as an estimate of the neighborhood of $u$.
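The four steps above can be sketched as a greedy loop. The scorer below is a stand-in argument: in the paper it is $\hat{\nu}_{u,I|S}$ with threshold $\tau'$ and candidate sets of size at most $2^s$, but any conditional-dependence proxy with the same interface can be plugged in:

```python
import itertools

def greedy_neighborhood(n, u, score, max_size, tau, L):
    # score(u, I, S): proxy for conditional dependence between X_u and X_I given X_S
    # (nu-hat in the paper); max_size plays the role of 2^s, tau the role of tau'.
    S = []
    progress = True
    # Step 2: while some small candidate set scores above the threshold, add it.
    while len(S) <= L and progress:
        progress = False
        candidates = [i for i in range(n) if i != u and i not in S]
        for k in range(1, max_size + 1):
            for I in itertools.combinations(candidates, k):
                if score(u, list(I), S) > tau:
                    S.extend(I)
                    progress = True
                    break
            if progress:
                break
    # Step 3: prune each variable whose score, conditioned on the rest of S, is small.
    for i in list(S):
        rest = [j for j in S if j != i]
        if score(u, [i], rest) < tau:
            S.remove(i)
    return sorted(S)
```

With an idealized scorer that flags exactly the variables in the true neighborhood, the loop adds them in Step 2 and retains them through the pruning of Step 3.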
138
+
139
+ We use
140
+
141
+ $$L=8/(\tau')^2, \quad \tau'=\frac{1}{(4d)^{2^s}}\left(\frac{1}{d}\right)^{2^s(2^s+1)}\tau, \text{ and } \tau=\frac{1}{2}\frac{4\alpha^2(e^{-2\gamma})^{d+D-1}}{d^{4d}2^{d+1}\binom{D}{d-1}\gamma e^{2\gamma}},$$
142
+
143
+ where $\tau$ is exactly as in [11] when adapted to the RBM setting. In the above, $d$ is a property of the RBM, and $D$, $\alpha$, and $\gamma$ are properties of the MRF of the observed variables.
144
+
145
+ With high probability, Step 2 is guaranteed to add to $S$ all the MRF neighbors of $u$, and Step 3 is guaranteed to prune from $S$ any non-neighbors of $u$. Therefore, with high probability, in Step 4 $S$ is exactly the MRF neighborhood of $u$. In the original algorithm of [11], the guarantees of Step 2 were based on this result: if $S$ does not contain the entire neighborhood of $u$, then $\nu_{u,I|S} \geq 2\tau$ for some set $I$ of size at most $d - 1$. As a consequence, Step 2 entailed a search over size $d - 1$ sets. The analogous result in our setting is given in Theorem 5, which guarantees the existence of a set $I$ of size at most $2^s$, thus reducing the search to sets of this size. This theorem follows immediately from Theorem 4, the key structural result of our paper.
146
+
147
+ **Theorem 4.** *Fix observed variable $u$ and subsets of observed variables $I$ and $S$, such that all three are disjoint. Suppose that $I$ is a subset of the MRF neighborhood of $u$ and that $|I| \leq d - 1$. Then there exists a subset $I' \subseteq I$ with $|I'| \leq 2^s$ such that*
148
+
149
+ $$\nu_{u,I'|S} \geq \frac{1}{(4d)^{2^s}} \left(\frac{1}{d}\right)^{2^s(2^s+1)} \nu_{u,I|S}.$$
150
+
151
+ Using the result in Theorem 4, we now state and prove Theorem 5.
152
+
153
+ **Theorem 5.** *Fix an observed variable $u$ and a subset of observed variables $S$, such that the two are disjoint. Suppose that $S$ does not contain the entire MRF neighborhood of $u$. Then there exists some subset $I$ of the MRF neighborhood of $u$ with $|I| \leq 2^s$ such that*
154
+
155
+ $$\nu_{u,I|S} \geq \frac{1}{(4d)^{2^s}} \left(\frac{1}{d}\right)^{2^s(2^s+1)} \frac{4\alpha^2 (e^{-2\gamma})^{d+D-1}}{d^{4d}2^{d+1} \binom{D}{d-1} \gamma e^{2\gamma}} = 2\tau'.$$
156
+
157
+ *Proof.* By Theorem 4.6 in [11], we have that there exists some subset $I$ of neighbors of $u$ with $|I| \leq d - 1$ such that
158
+
159
+ $$\nu_{u,I|S} \geq \frac{4\alpha^2(e^{-2\gamma})^{d+D-1}}{d^{4d}2^{d+1}\binom{D}{d-1}\gamma e^{2\gamma}} = 2\tau.$$
160
+
161
+ Then, by Theorem 4, we have that there exists some subset $I' \subseteq I$ with $|I'| \leq 2^s$ such that
162
+
163
+ $$\nu_{u,I'|S} \geq \frac{1}{(4d)^{2^s}} \left(\frac{1}{d}\right)^{2^s(2^s+1)} 2\tau = \frac{1}{(4d)^{2^s}} \left(\frac{1}{d}\right)^{2^s(2^s+1)} \frac{4\alpha^2 (e^{-2\gamma})^{d+D-1}}{d^{4d}2^{d+1}\binom{D}{d-1}\gamma e^{2\gamma}} = 2\tau'.$$
164
+
165
+ Theorem 6 states the guarantees of our algorithm. The analysis is very similar to that in [11], and is deferred to Appendix B. Section 4 then sketches the proof of Theorem 4.
166
+
167
+ **Theorem 6.** *Fix $\omega > 0$. Suppose we are given $M$ samples from an RBM, where*
168
+
169
+ $$M \geq \frac{60 \cdot 2^{2L}}{(\tau')^2 (e^{-2\gamma})^{2L}} \left( \log(1/\omega) + \log(L + 2^s + 1) + (L + 2^s + 1) \log(2n) + \log 2 \right).$$
170
+
171
+ *Then with probability at least $1 - \omega$, our algorithm, when run from each observed variable $u$, recovers the correct neighborhood of $u$. Each run of the algorithm takes $O(MLn^{2^s+1})$ time.*
172
+
173
+ The proofs of the lemmas in this section can be found in Appendix A. Consider the mutual information proxy when the values of $X_u$, $X_I$, and $X_S$ are fixed:
174
+
175
+ $$\nu_{u,I|S}(x_u, x_I|x_S) = |\mathbb{P}(X_u = x_u, X_I = x_I|X_S = x_S) - \mathbb{P}(X_u = x_u|X_S = x_S)\mathbb{P}(X_I = x_I|X_S = x_S)|.$$
176
+
177
+ We first establish a version of Theorem 4 for $\nu_{u,I|S}(x_u,x_I|x_S)$ , and then generalize it to $\nu_{u,I|S}$ .
178
+
179
+ In Lemma 7, we express $\nu_{u,I|S}(x_u,x_I|x_S)$ as a sum over configurations of latent variables U connected to observed variables in I. Note that $|U| \leq s$ , so the summation is over at most $2^s$ terms.
180
+
181
+ **Lemma 7.** Fix observed variable u and subsets of observed variables I and S, such that all three are disjoint. Suppose that I is a subset of the MRF neighborhood of u. Then
182
+
183
+ $$\nu_{u,I|S}(x_u, x_I | x_S) = \left| \sum_{q_U \in \{-1,1\}^{|U|}} \left( \sum_{q_{\sim U} \in \{-1,1\}^{m-|U|}} \bar{f}(q, x_u, x_S) \right) \prod_{i \in I} \sigma(2x_i (J^{(i)} \cdot q + h_i)) \right|$$
184
+
185
+ for some function $\bar{f}$ , where U is the set of latent variables connected to observed variables in I, $J^{(i)}$ is the i-th row of J, and the entries of $q_{\sim U}$ in the expression $J^{(i)} \cdot q$ are arbitrary.
186
+
187
+ Lemma 8 gives a generic non-cancellation result for expressions of the form $\left|\sum_{i=1}^m a_i \prod_{j=1}^n x_{i,j}\right|$ . Then, Lemma 9 applies this result to the form of $\nu_{u,I|S}(x_u,x_I|x_S)$ in Lemma 7, and guarantees the existence of a subset $I'\subseteq I$ with $|I'|\leq 2^s$ such that $\nu_{u,I'|S}(x_u,x_{I'}|x_S)$ is within a bounded factor of $\nu_{u,I|S}(x_u,x_I|x_S)$ .
188
+
189
+ **Lemma 8.** Let $x_{1,1},...,x_{m,n} \in [-1,1]$ , with n > m. Then, for any $a \in \mathbb{R}^m$ , there exists a subset $S \subseteq [n]$ with $|S| \leq m$ such that
190
+
191
+ $$\left| \sum_{i=1}^{m} a_i \prod_{j \in S} x_{i,j} \right| \ge \frac{1}{4^m} \left( \frac{1}{n} \right)^{m(m+1)} \left| \sum_{i=1}^{m} a_i \prod_{j=1}^{n} x_{i,j} \right|.$$
192
+
193
+ We remark that, in this general form, Lemma 8 is optimal in the size of the subset that it guarantees not to be cancelled. That is, there are examples with $\sum_{i=1}^m a_i \prod_{j=1}^n x_{i,j} \neq 0$ but $\sum_{i=1}^m a_i \prod_{j \in S} x_{i,j} = 0$ for all subsets $S \subseteq [n]$ with $|S| \leq m-1$. See Appendix A for a more detailed discussion.
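Lemma 8 can be checked by exhaustive search on a small random instance: since $m$ and $n$ are tiny here, one can enumerate all subsets $S$ with $|S| \leq m$ and verify that at least one attains the stated bound (the instance below is random and only illustrative):

```python
import itertools
import math
import random

def lemma8_subset(a, X):
    # Exhaustively search for a subset S, |S| <= m, whose restricted sum attains
    # the Lemma 8 bound relative to the full product sum.
    m, n = len(a), len(X[0])
    full = abs(sum(a[i] * math.prod(X[i]) for i in range(m)))
    bound = full / (4 ** m * n ** (m * (m + 1)))
    for k in range(m + 1):
        for S in itertools.combinations(range(n), k):
            val = abs(sum(a[i] * math.prod(X[i][j] for j in S) for i in range(m)))
            if val >= bound:
                return S, val, bound
    return None, 0.0, bound  # unreachable if the lemma holds

random.seed(0)
m, n = 2, 4  # n > m and entries in [-1, 1], as the lemma requires
a = [random.uniform(-1, 1) for _ in range(m)]
X = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(m)]
S, val, bound = lemma8_subset(a, X)
```

By the lemma, the search always terminates with a subset of size at most $m$; the exhaustive loop is only viable here because $m$ and $n$ are tiny.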
194
+
195
+ **Lemma 9.** Fix observed variable u and subsets of observed variables I and S, such that all three are disjoint. Suppose that I is a subset of the MRF neighborhood of u. Fix any assignments $x_u$ , $x_I$ , and $x_S$ . Then there exists a subset $I' \subseteq I$ with $|I'| \le 2^s$ such that
196
+
197
+ $$\nu_{u,I'|S}(x_u, x_{I'}|x_S) \ge \frac{1}{4^{2^s}} \left(\frac{1}{|I|}\right)^{2^s(2^s+1)} \nu_{u,I|S}(x_u, x_I|x_S)$$
198
+
199
+ where $x_{I'}$ agrees with $x_I$ .
200
+
201
+ Finally, Lemma 10 extends the result about $\nu_{u,I|S}(x_u,x_I|x_S)$ to a result about $\nu_{u,I|S}$. The difficulty lies in the fact that the subset $I'$ guaranteed to exist in Lemma 9 may be different for different configurations $(x_u,x_I,x_S)$. Nevertheless, the number of subsets $I'$ with $|I'| \leq 2^s$ is smaller than the number of configurations $(x_u,x_I,x_S)$, so we obtain a viable bound via the pigeonhole principle.
202
+
203
+ **Lemma 10.** Fix observed variable u and subsets of observed variables I and S, such that all three are disjoint. Suppose that I is a subset of the MRF neighborhood of u. Then there exists a subset $I' \subseteq I$ with $|I'| \le 2^s$ such that
204
+
205
+ $$\nu_{u,I'|S} \geq \frac{1}{(4|I|)^{2^s}} \left(\frac{1}{|I|}\right)^{2^s(2^s+1)} \nu_{u,I|S}.$$
206
+
207
+ This result completes the proof of Theorem 4.
208
+
209
+ ![](_page_7_Figure_0.jpeg)
210
+
211
+ Figure 2: RBM with $\alpha \to 0$ as $\epsilon \to 0$ and $\alpha^* = 1$, $\beta^* = 2$ when $0 \leq \epsilon \leq 2$. The X variables represent observed variables, the Y variables represent latent variables, and the edges represent non-zero interactions between variables. All external field terms are zero.
212
+
213
+ Figure 2 shows an RBM for which $\alpha$ can be arbitrarily small, while $\alpha^* = 1$ and $\beta^* = 2$. That is, the induced MRF can be degenerate, while the RBM itself has interactions that are far from degenerate. This is problematic: the sample complexity of our algorithm, which scales with the inverse of $\alpha$, can be arbitrarily large, even for seemingly well-behaved RBMs. In particular, we note that $\alpha$ is an opaque property of the RBM, and it is *a priori* unclear how small it is.
214
+
215
+ We emphasize that this scaling with the inverse of $\alpha$ is necessary information-theoretically [24]. All current algorithms for learning MRFs and RBMs have this dependency, and it is impossible to remove it while still guaranteeing the recovery of the structure of the model.
216
+
217
+ Instead, in this section we give an algorithm that learns an RBM with small prediction error, independently of $\alpha$. We necessarily lose the guarantee on structure recovery, but we guarantee accurate prediction even for RBMs in which $\alpha$ is arbitrarily degenerate. The algorithm is composed of a structure learning step that recovers the MRF neighborhoods corresponding to large potentials, and a regression step that estimates the values of these potentials.
218
+
219
+ The structure learning algorithm is guaranteed to recover the MRF neighborhoods corresponding to potentials that are at least $\zeta$ in absolute value. The guarantees of the algorithm are stated in Theorem 11, which is proved in Appendix D.
220
+
221
+ The main differences between this algorithm and the one in Section 3 are: first, the thresholds for $\hat{\nu}_{u,I|S}$ are defined in terms of $\zeta$ instead of $\alpha$, and second, the threshold for $\hat{\nu}_{u,I|S}$ in the additive step (Step 2) is smaller than that used in the pruning step (Step 3), in order to guarantee the pruning of all non-neighbors. The algorithm is described in detail in Appendix C.
222
+
223
+ **Theorem 11.** *Fix $\omega > 0$. Suppose we are given $M$ samples from an RBM, where $M$ is as in Theorem 6 if $\alpha$ were equal to*
224
+
225
+ $$\alpha = \frac{\zeta}{\sqrt{3} \cdot 2^{D/2 + 2^s} \cdot D^{2^{s-1}(2^s + 2)}}.$$
226
+
227
+ *Then with probability at least $1 - \omega$, our algorithm, when run starting from each observed variable $u$, recovers a subset of the MRF neighbors of $u$, such that all neighbors which are connected to $u$ through a Fourier coefficient of absolute value at least $\zeta$ are included in the subset. Each run of the algorithm takes $O(MLn^{2^s+1})$ time.*
228
+
229
+ Note that
230
+
231
+ $$\mathbb{P}(X_u = 1 | X_{[n] \setminus \{u\}} = x_{[n] \setminus \{u\}}) = \sigma \left( 2 \sum_{S \subseteq [n] \setminus \{u\}} \hat{f}(S \cup \{u\}) \chi_S(x) \right).$$
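This identity, that the conditional distribution of one variable is a sigmoid of twice the sum of the Fourier coefficients of cliques containing it, can be verified numerically on a tiny MRF (the coefficients below are hypothetical):

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def f_val(fhat, x):
    # f(x) = sum_S fhat(S) * prod_{i in S} x_i
    return sum(c * math.prod(x[i] for i in S) for S, c in fhat.items())

def cond_prob_brute(fhat, u, x):
    # P(X_u = 1 | rest) as a ratio of the two unnormalized pmf values.
    xp, xm = list(x), list(x)
    xp[u], xm[u] = 1, -1
    wp, wm = math.exp(f_val(fhat, xp)), math.exp(f_val(fhat, xm))
    return wp / (wp + wm)

def cond_prob_sigmoid(fhat, u, x):
    # sigma(2 * sum over cliques S containing u of fhat(S) * chi_{S \ {u}}(x))
    t = sum(c * math.prod(x[i] for i in S if i != u)
            for S, c in fhat.items() if u in S)
    return sigmoid(2 * t)

# Hypothetical coefficients for a 3-variable MRF of order 3.
fhat = {frozenset({0, 1}): 0.8, frozenset({1, 2}): -0.5, frozenset({0, 1, 2}): 0.3}
x = [1, -1, 1]
```

The agreement is exact because $f(x^{+}) - f(x^{-}) = 2\sum_{S \not\ni u} \hat{f}(S \cup \{u\}) \chi_S(x)$, where $x^{\pm}$ set $x_u = \pm 1$.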
232
+
233
+ Therefore, following the approach of [28], we can frame the recovery of the Fourier coefficients as a regression task. Let $n(u)$ be the set of MRF neighbors of $u$ recovered by the algorithm in Section 5.1. Note that $|n(u)| \leq D$. Let
234
+
235
+ $z \in \{-1, 1\}^{2^{|n(u)|}}$, $w \in \mathbb{R}^{2^{|n(u)|}}$, and $y \in \{-1, 1\}$, with $z_S = \chi_S(X)$, $w_S = 2\hat{f}(S \cup \{u\})$, and $y = X_u$, for all subsets $S \subseteq n(u)$. Then, if $n(u)$ were equal to the true set of MRF neighbors, we could rewrite the conditional probability statement above as
236
+
237
+ $$\mathbb{P}(y=1|z) = \sigma(w \cdot z), \quad \text{with } ||w||_1 \le 2\gamma.$$
238
+
239
+ Then, finding an estimate $\hat{w}$ would amount to a constrained regression problem. In our setting, we solve the same problem, and we show that the resulting estimate has small prediction error. We estimate $\hat{w}$ as follows:
240
+
241
+ $$\hat{w} \in \underset{w \in \mathbb{R}^{2^{|n(u)|}}}{\operatorname{argmin}} \frac{1}{M} \sum_{i=1}^{M} l(y^{(i)}(w \cdot z^{(i)})) \quad \text{s.t. } ||w||_{1} \le 2\gamma,$$
242
+
243
+ where we assume we have access to M i.i.d. samples (z, y), and where l : R → R is the loss function
244
+
245
+ $$l(y(w \cdot z)) = \ln(1 + e^{-y(w \cdot z)}) = \begin{cases} -\ln \sigma(w \cdot z), & \text{if } y = 1\\ -\ln(1 - \sigma(w \cdot z)), & \text{if } y = -1 \end{cases}.$$
246
+
247
+ The objective above is convex, and the problem is solvable in time $\tilde{O}((2^D)^4)$ by the $\ell_1$-regularized logistic regression method described in [18]. Then, Theorem 12 gives theoretical guarantees for the prediction error achieved by this regression algorithm. The proof is deferred to Appendix D.
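As a sketch only, the constrained objective can be minimized by plain gradient descent on the logistic loss, with a crude rescaling step to keep $||w||_1 \leq 2\gamma$; this rescaling is a feasibility heuristic, not the $\ell_1$-regularized method of [18], which is what carries the stated runtime guarantee:

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def fit_logistic_l1(samples, dim, radius, lr=0.1, steps=500):
    # Minimize (1/M) sum_i ln(1 + exp(-y_i (w . z_i))) subject to ||w||_1 <= radius.
    # Feasibility is enforced by rescaling w back onto the l1 ball after each step;
    # this is a heuristic stand-in for a true projection.
    w = [0.0] * dim
    M = len(samples)
    for _ in range(steps):
        grad = [0.0] * dim
        for z, y in samples:
            # Gradient of ln(1 + exp(-y (w . z))) is -y * sigmoid(-y (w . z)) * z.
            c = -y * sigmoid(-y * sum(wi * zi for wi, zi in zip(w, z)))
            for i in range(dim):
                grad[i] += c * z[i] / M
        w = [wi - lr * gi for wi, gi in zip(w, grad)]
        l1 = sum(abs(wi) for wi in w)
        if l1 > radius:
            w = [wi * radius / l1 for wi in w]
    return w

# Toy data in which y is determined by the first coordinate of z.
samples = [((1, 1), 1), ((1, -1), 1), ((-1, 1), -1), ((-1, -1), -1)]
w = fit_logistic_l1(samples, dim=2, radius=2.0)
```

On this balanced toy data the gradient on the uninformative coordinate cancels exactly, so the learned weight concentrates on the informative coordinate up to the $\ell_1$ budget.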
248
+
249
+ **Theorem 12.** *Fix $\delta > 0$ and $\epsilon > 0$. Suppose that we are given neighborhoods $n(u)$ satisfying the guarantees of Theorem 11 for each observed variable $u$. Suppose that we are given $M$ samples from the RBM, and that we have*
250
+
251
+ $$M = \Omega \left( \gamma^2 \ln(8 \cdot n \cdot 2^D / \delta) / \epsilon^2 \right), \quad \zeta \le \frac{\sqrt{\epsilon}}{D^d \sqrt{1 + e^{2\gamma}}}.$$
252
+
253
+ *Let $z_u$ and $\hat{w}_u$ be the features and the estimate of the weights when the regression algorithm is run at observed variable $u$. Then, with probability at least $1 - \delta$, for all variables $u$,*
254
+
255
+ $$\mathbb{E}\left[\left(\mathbb{P}(X_u = 1 | X_{\setminus u}) - \sigma\left(\hat{w}_u \cdot z_u\right)\right)^2\right] \le \epsilon.$$
256
+
257
+ The sample complexity of the combination of structure learning and regression is given by the sum of the sample complexities of the two algorithms. When $\delta$ is constant, the number of samples required by regression is absorbed by the number of samples required by structure learning. For structure learning, plugging in the upper bound on $\zeta$ required by Theorem 12, we get that the sample complexity is exponential in $\epsilon^{-1}$. Note that the factors $D^d$ and $\sqrt{1 + e^{2\gamma}}$ in the upper bound on $\zeta$, as well as the factors that appear in Theorem 11 from the relative scaling of $\alpha$ and $\zeta$, do not influence the sample complexity much, because factors of similar order already appear in the sample complexity of the structure learning algorithm. Overall, for constant $\delta$ and constant $\epsilon$, the combined sample complexity is comparable to that of the algorithm in Section 3, without the $\alpha$ dependency.
2011.06782/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2020-10-15T08:13:42.288Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.121 Safari/537.36" etag="YQpcQjKaCm7n8AsmzD14" version="13.7.9" type="google"><diagram id="YACVYT3mHra4VwgD2--U" name="Page-1">7V1rc6M4Fv01VCVT5RRCCPDH2LG3dzvdM73p6cenFLGxTYLBi3GczK9fCSSMhMAYA3EeqVQHhBBC96Fzri5qBQ6XT/8K7dXiSzB1PEVTp08KvFI0TUOqiv+QkuekBKjQSkrmoTulZbuCG/cfh1WkpRt36qy5ilEQeJG74gsnge87k4grs8Mw2PLVZoHHP3Vlz51cwc3E9vKlP91ptEhKLc3clX9y3PmCPRkY/eTK0maVaRPrhT0NtklR/HJwpMBhGARRcrR8GjoeGT02LskIjAuuph0LHT+qcoMarr4+/hr9PTU/Tx+HA/Pnt29Wj77Go+1t6AvTzkbPbATCYONPHdIIUOBgu3Aj52ZlT8jVLRY6LltES49eTl9SxSdzz16v6fH6wYkmC3YShcFDOp46qRraUxe/yDDwghCX+YGPHzCY2utF+uiZ63nsuqLB8Xg4HBm4nL6CE0bOU+HYgHTEsa46wdKJwmdchd7QQ6ae3EP11GT6t90JXUe0ziIrcEgr2lTR5mnjO1ngAyoOuWh+X27/+/DpPlhHT1vj838WX3/3vveYoTQmG8++c7yBPXmYx7cJQz0L/GhsL12PjMD1ZuJObfysYeCvA49dp+ZJRpKTBm2CFytKRZOTg0RaxaIxmCiYC7HyogGQiSErG6S1JRsN5WTzxYns3g/bw+MWuYGPL95ExLe0KTLeICZT5866E0Rl4XMy2C52aZeeO/dxYRSsWC3aL5BKT3hEAwJMLYTKT4Nm3rRk4tMBbEt8UC6+76Ht+q4/70R4tYREzjMiT36KhCfoRz/+acgqdeaemFQ1dIEqylXTWdXGJQtyglU0w4uIYMjRPIpfPym5EwvI4HLyNv63CdiF3joW1iWuoGmrp91F1sqfm8gJ8dVr5zEGQX+uInfp/kO9QfIE/E7JQ/gH4+JcZ3BZvstT97GxtwB92VucYTnYiXbY+NAJ1/hks8JOzTmv9xaN9tmS9VnRdPKLhkvXv1XMAT6KFuQ10ND18T/f4xMTN6vis1loT3AlQArMwRfyBw3Xm6WCRl/I3W5sapnSz6T0gZZukyra8CGpgnXyDP8hbiG8C4kriJ9P0B82Znxyiet587O0T9qQu3wVNzO6uXXP8VHcOAayYc9LtGiY9Hm1cG/d+A2kd3/LduqcDschwjrGzQleZjZzjMmkxfmkpwOT+Y9nhszykAD2kQStWa2htf1gzfGnl4SR4LMJQcbuhB9IPCTh8y8KkeOT3+TkArHTq6fsxatnepadSfLouhSGrYNNOHFKXosi3sgO505U5nfpnO5MOTqVF2JGQkgG2WhZ6HjYcT7yJEwmNPqEvwI39iCpjmh9TkOQYfFtJG9Ob8uyJrElaO5pKRmbXEuxFqUvXl+x9MIZbb2y/dpTmF7uSLGHuZtxrnSEz86i89hrXlV1MXwXBXvA9h7xFmBT7DPBeorn0jwoWrrTacxGZD5p57XUvegruRqE2GvLcFkWcfHuzTDG+KchT4YQD40hMC7MPIoyYd5QjLYcWYWIwD5Htt8hOU9u9Iu5PXyccXT4bOfnyAlzc3XdY33nx+JN+5yfdVq+D+oa77H65oUmAO6q7g9BvinL7NT5yUIgL+f8FG0A3r0DJO5PLbWrA0IDvPtDkqAbUCWW0prz6x9AH5m2sZA10ycgU62U97TMLmUErEA1K7HLtl5STj6V0VgZDBXL2sbC3lFPdq
Ff71ULX4wvS4y1PgvNkVuelv4soKBf03LMN0nF+5SBjtjlLOm7xgUHc864/V2NH5Rs3qePuYmpYzkFvacUdG9j36Q8VCYxmR/PCbJBeuqAKXLMFukp0K0Uw5XSU+2iU4IKuicSPznW8G5nzFYpA1ItSdy1U8YA8oshnWlWCsnerXY1icdQ/9QAGTA+6KgATvfH4tBJ8VHDulAzPzo/NyLLqk1Ogbq/rZYJKrDy6oiHndlrEEaLYB74tjfalZIkiRW5OvOcJ6q5gq/Y3XcdkHXAWDPvnSh6popsb6KA13Jeq6Gg9+yccxxEHJwm75T3N6e7BXFncXlxPC5fXtwfcDGrqnhFDT82lKIJ2qobRrfqJYvG7SOk9Qjmvwnm308wmyCTdVcmaXYZtzRpT+0V6WKFxUlJj6oyPvzQEsbHg4pMFKkqG2OLgOekD7RaDx/Y3mph3wJ85Nt3nn2bWdjM08H9jYsYqYB7lWKk45YGJ5N4abCr1JOergq5J5LcIWhI5j4AYWs5CpqMiZ3MWmEdLHSEx6+8wHhSmKanGyanWNDUasOYHhRgt6yxlicals96kjimAGcckrFoleCf+tqrGRW1t980XjnOAclI1TtFFWnIuRRdyOLPrw5l/ChFGdpxKOPH+0UZwOBJgmZBSUiuAGa0lqEsW8o6EGTwA4aHNmHwUtjQtpPuMkTDz2H1pwer4uzA/PGpgBshYRv2hUX/qsBGE9rRqyUPYJ20nzPVVqTCuri/YoI50tXSbon1oW4J5pb0oFGAxZ4pmXPpBDZJtX43A8QWN5tliyTTyF0aVO7RRshsEs7vzjSEiKOO3bVwfM432vgqNpLNZXQ195BF3Fo+q0v4yDzTzhulwbMKnumwkBs/L3JfR+Wc9XgMYfmyU2MOTK/ov6gBqhfARPmPQFp0acKCbGbR7GCyhgC80Pu7H/4zCAgrU7dD3ZwhhDJUc29Hy27g3dzudtadYDZbO61wTSj7SONAM6+o+qfiDUAtbwAO8gb3m+WK9hAcF5NBFc0eVLX7roIyAg7QauIWIS1NN9vBLYbQX8Msxy05ixZuaAm45Ff32SwuCRhdk2VspeYaeuhg/IApqEP1mw4gYasDBV0JVogqWq3AX2K/wFPDnKWUu638h8b043PadSX7fbd8/lMhHz7UGjEAoOkXppWZmwzuIT3hc8oWPXzxslWXebskJ+3LdRKY+ONdJ4kMkAn7Vw2FO3ivZaJ+Dsl1myMCi7N2XyBLXH33GeIk380wmlG2nikGEU7gExm9mMu3nfCmvu9kt1ZV6wRSKfXDqNGdF0wevi9cX2REAlP6y46wRP0E7KhajhgZzRAjKM0yosmvXQZw8RCNXY+NQKGu7GVCbGZpLADSERGyBGKBahIhMRJsqN1+/sU+ZPswB4lad5+Ayua9VxcYMC2Ty0Hlo4FH5W4YL5+7oR+WT/QareSwaJrcSg6LtNe3kqpBc3hiWyaUGwnSjwiZi4G5fFttm0g+4wVzNoUAwAQtu1UAOiT5GTHCFjB6lAHdLivsYaHhbmEbotkuJwTAOfv85HiPDnmOYJhpvmABHm8ugwIKgEW2xRvUS3T/GCAu3RexAg4/bF/E3I6HfkBjpZkdETmmQ7hO2Ygft+Eh7AN+3jJk+3cBTTLoMGe9jQ17Bbz3uoddU/lhRwjkB10WPYON7JkmHfQKWxc1OuiFO0cKSCKzeH3UoANh9Rca5gmoegXQ9qpHHUJd0PSK3xK2qOn5payB28O/8uzUNz1fH+fGcnuyalZeuPI9WduasSt8KPoWQp5HbnOMeKM0NUmiqswm25KaZIOfGGrPSBL2NggfSAP+ahNVEObePVBpERO658yi40RebbfrjI2Cfk7gTUgV8FvUmVDiavv9vFib2B9ZDp9lKyRvwRqFhdQjMbjOZwG8uDUCCe3JmWOwiT7ssVyuIP0AoMQiNWDIdoBpzyYrkKtXaZPC/ioNz5CGJBTRqU1K6Flsk3RaVO0oDTORiNaHZR
4yUxoSItjtTFmBB75KqxT2pWl4pnxxq5TseRRbJZsdP8zyuAlTYpidT5gSShnLmGyR30u/E/0QbIm/FVJsTOZbs2KVBYHaE6osNZWm9NFvW5PkqrcgVCKbFoS6W76jYrVUGeHsUqpFcYRkI1El81n325BsV35YJtmODVaW3ZukRH5Y6oGwSWqnZiPCxKe7/8ouWbbf/Y+AcPR/</diagram></mxfile>
2011.06782/main_diagram/main_diagram.pdf ADDED
Binary file (85.8 kB). View file
 
2011.06782/paper_text/intro_method.md ADDED
@@ -0,0 +1,219 @@
+ # Introduction
+
+ A core aspect of human intelligence is the ability to adapt to new tasks quickly while drawing on relevant past learning experiences. Early work [@schmidhuber1987; @thrun2012learning; @Naik1992MetaneuralNT] showed how meta-learning algorithms can discover common structure among tasks, enabling a new task to be learned with as little data as possible. Several meta-learning algorithms [@santoro2016meta; @vinyals2016matching; @finn2017model] learn new tasks from as little as one example. One algorithm that has shown great success over the past few years is model-agnostic meta-learning (MAML) [@finn2017model], whose objective is a bi-level optimization problem. The "inner" objective in MAML corresponds to an individual task-adaptation problem, whereas the "outer" objective is the meta-training objective of finding the structure common to all tasks.
+
+ <figure id="fig:maml_ood" data-latex-placement="!htbp">
+ <figure>
+ <img src="figs/MAML_wrt_OOD.png" style="width:80.0%;height:5cm" />
+ </figure>
+ <figcaption>Performance of MAML for different OOD ratios</figcaption>
+ </figure>
+
+ MAML tries to identify optimal initial model parameters that minimize the expected loss over the whole task distribution after a few gradient steps. While calculating this expected loss, MAML assigns equal weights to all samples within a task and equal weights to all tasks in the training task distribution. This assumption is too restrictive for real-world datasets: whenever the training tasks contain out-of-distribution (OOD) or noisy samples, MAML's performance deteriorates because of the equal weight assignment. Figure [1](#fig:maml_ood){reference-type="ref" reference="fig:maml_ood"} illustrates how MAML's performance declines as the proportion of OOD tasks in the training data grows.
+
+ A natural way of dealing with OOD tasks and noisy data in meta-training is to assign weights to either instances or tasks. For example, when the source task distribution differs from the target task distribution, assigning a weight of zero to OOD tasks in the source distribution improves MAML's performance. The resulting problem is a simple extension of meta-learning called *weighted meta-learning*. Weighted meta-learning has been studied in [@cai2020weightedmeta], which considers weighting only for a restricted class of loss functions such as the square loss and hinge loss. In contrast, we propose a framework that can (a) handle arbitrary loss functions and (b) jointly learn the weights along with the meta-learning model parameters.
+
+ Our work's main contribution is the reweighted MAML (RW-MAML) algorithm, an end-to-end framework for instance-weighted MAML training. RW-MAML treats the weights as hyper-parameters and finds the optimal ones by minimizing the meta-loss on validation tasks in a **bi-bi-level** manner. Since RW-MAML uses an online framework to jointly optimize the weight hyper-parameters and the model parameters of the weighted MAML model, its computational cost is comparable to MAML's, specifically about 1.5 times MAML's running time. Finally, we compare RW-MAML's performance against existing robust meta-learning and hyperparameter-optimization algorithms through extensive experiments on real-world and synthetic datasets with out-of-distribution (OOD) and noisy-label training data.
+
+ <figure id="fig:overview" data-latex-placement="h!">
+ <img src="figs/overview.png" style="width:85.0%" />
+ <figcaption>Overview of our RW-MAML framework that solves a bi-bi-level optimization problem. (a) In the meta-training stage, model parameters <span class="math inline"><em>ϕ</em><sub><em>i</em></sub></span> of each task are adapted from meta-parameter <span class="math inline"><strong>θ</strong></span> through the inner level optimization; (b) In the outer level of the meta-training stage, we update the meta-parameters using the weight matrix <span class="math inline"><em>W</em></span> from the previous iterate; (c) The matrix is further updated in the meta-validation stage using the gradient of the meta-losses with respect to <span class="math inline"><em>W</em></span> used in the meta-training stage.</figcaption>
+ </figure>
+
+ **Contributions** of this work are as follows:
+
+ - We study the general class of instance- and task-weighted MAML optimization, posing the instance/task weight-learning objective as a bi-bi-level optimization problem. To our knowledge, ours is the first work to study the bi-bi-level optimization problem that arises naturally in this setting.
+
+ - We introduce a novel algorithmic framework, reweighted MAML (RW-MAML), that uses validation tasks to enable robust meta-learning. We use an efficient online framework to approximately solve the *bi-bi-level* optimization problem and provide a convergence analysis.
+
+ - We provide comprehensive experimental results demonstrating that our meta-learning algorithms achieve state-of-the-art results under out-of-distribution and noisy-label settings on a mix of synthetic and real-world image-classification experiments.
+
+ - In the out-of-distribution setting, RW-MAML significantly outperforms the other baselines, by up to 6-10% at large OOD ratios of 60% and 90%.
+
+ - To the best of our knowledge, ours is the first framework to consider the noisy-label scenario for few-shot learning methods.
+
+ # Method
+
+ In this section, we discuss a more general MAML framework, called weighted MAML, in which we weight all the data instances in the query set of a task. A major motivation for the weighted framework is to make training more robust to adversaries.
+
+ During meta-learning experiments with real datasets, we sample the datasets $\{\mathcal{D}_{i}^{S}, \mathcal{D}_{i}^{Q}\}$ associated with each task $\mathcal{T}_{i}$ from an underlying dataset $\mathcal{D}$. In instance-level weighting, we associate each data instance $\{\mathcal{D}_{ik}^{Q} \mid k\in [1,K]\}$ in the query set of task $\mathcal{T}_{i}$ with a weight $w_{ik}$, where $K$ is the number of data points in the query set $\mathcal{D}_{i}^{Q}$.
+
+ The objective is formulated as follows: $$\begin{equation}
+ \begin{aligned}
+ \label{instance-weighting}
+ \boldsymbol{\theta}^*_{ML} &= \operatornamewithlimits{arg\,min}_{\boldsymbol{\theta} \in \boldsymbol{\Theta}} {\mathcal{F}(\boldsymbol{\theta})}\\
+ \text{where\hspace{2mm}}\mathcal{F}(\boldsymbol{\theta}) &= \frac{1}{M}\sum_{i=1}^{M} \sum_{k=1}^{K}w_{ik}\ell(\mathcal{A}lg(\boldsymbol{\theta}, \mathcal{D}_{i}^{S}), \mathcal{D}_{ik}^{Q}) \\
+ &=\frac{1}{M}\sum_{i=1}^{M} \mathbf{w}_{i}\mathcal{L}_{i}(\mathcal{A}lg(\boldsymbol{\theta}, \mathcal{D}_{i}^{S}))
+ \end{aligned}
+ \end{equation}$$ where $$\mathcal{L}_{i}(\mathcal{A}lg(\boldsymbol{\theta},\mathcal{D}_i^{S})) = \begin{bmatrix} \ell(\mathcal{A}lg(\boldsymbol{\theta},\mathcal{D}_i^{S}), \mathcal{D}_{i1}^{Q}) \\ \dots \\ \ell(\mathcal{A}lg(\boldsymbol{\theta},\mathcal{D}_i^{S}), \mathcal{D}_{ik}^{Q}) \\ \dots \\ \ell(\mathcal{A}lg(\boldsymbol{\theta},\mathcal{D}_i^{S}), \mathcal{D}_{iK}^{Q}) \end{bmatrix}$$
+
+ $\mathbf{w}_{i} = [w_{i1}, \dots, w_{iK}]$ is the weight vector over the samples in the query set of task $\mathcal{T}_i$. Instance-level weighting is useful when the underlying dataset $\mathcal{D}$ is prone to distribution shift or noise, since it lets us single out noisy samples with corrupted labels within a task. The optimal weight assignment gives large weights to clean samples and small weights to noisy samples in a task.
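To make the weighted objective in Eq. [\[instance-weighting\]](#instance-weighting){reference-type="eqref" reference="instance-weighting"} concrete, here is a minimal NumPy sketch (the array names such as `per_sample_losses` are illustrative, not from the paper) that evaluates $\mathcal{F}(\boldsymbol{\theta})$ once the per-sample query losses have been computed:

```python
import numpy as np

def weighted_meta_loss(per_sample_losses, weights):
    """Eq. (instance-weighting): F = (1/M) * sum_i w_i . L_i.

    per_sample_losses: (M, K) array; entry [i, k] is the query loss
        l(Alg(theta, D_i^S), D_ik^Q) of sample k in task i.
    weights: (M, K) array of instance weights w_ik.
    """
    # Inner product w_i . L_i per task, then average over the M tasks.
    return (weights * per_sample_losses).sum(axis=1).mean()

# Uniform weights w_ik = 1/K recover the ordinary (unweighted) MAML loss
# up to the 1/K normalization convention.
L = np.array([[1.0, 2.0], [3.0, 5.0]])   # M=2 tasks, K=2 query samples
w_uniform = np.full_like(L, 0.5)
print(weighted_meta_loss(L, w_uniform))  # -> 2.75
```

Setting a sample's weight to zero removes it from the meta-objective entirely, which is the intended behavior for noisy or OOD query samples.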
+
+ Likewise, we introduce a special case of this weighting scheme, task-weighted MAML, in which every instance in the query set of a task is assigned the same weight. Task weighting is appropriate when every instance in a task's query set is either out-of-distribution (OOD) or in-distribution (ID); the optimal assignment then gives small weights to OOD tasks and larger weights to ID tasks.
+
+ We attempt to find the optimal weight hyper-parameters and use the learned weights in the bi-level optimization problem defined in Eq. [\[instance-weighting\]](#instance-weighting){reference-type="eqref" reference="instance-weighting"} as the weighting scheme used during training.\
+ To find the optimal weight hyper-parameters, our method uses a clean meta-validation task set $\{\mathcal{T}^{\mathcal{V}}_{j}= \{\mathcal{V}_{j}^{S}, \mathcal{V}_{j}^{Q}\} \}_{j = 1} ^ {N}$ that is assumed to be representative of the test task distribution. Each validation task is split into support and query sets $\{\mathcal{V}_{j}^{S}, \mathcal{V}_{j}^{Q}\}$. The number of validation tasks is assumed to be very small compared to the number of training tasks $(N \ll M)$.
+
+ For our RW-MAML algorithm, we select weight hyper-parameters that minimize the meta-validation loss of the model after it takes a few gradient steps from the MAML initialization parameters.
+
+ The weight optimization objective for this instance-weighting scheme is as follows:
+
+ $$\begin{equation}
+ \begin{aligned}
+ \label{inwt-opt}
+ {W}^{*} &= \operatornamewithlimits{arg\,min}_{\mathbf{w}} \frac{1}{N} \sum_{j=1}^{N}\mathcal{L}(\mathcal{A}lg(\boldsymbol{\theta}^{*}_{W}, \mathcal{V}_{j}^{S}), \mathcal{V}_{j}^{Q})
+ \\
+ \text{where\hspace{2mm}} \boldsymbol{\theta}^{*}_{W} &= \operatornamewithlimits{arg\,min}_{\boldsymbol{\theta} \in \boldsymbol{\Theta}} \frac{1}{M}\sum_{i=1}^{M}\mathbf{w}_{i}\mathcal{L}(\mathcal{A}lg(\boldsymbol{\theta}, \mathcal{D}_i^{S}), \mathcal{D}_i^{Q})
+ \end{aligned}
+ \end{equation}$$
+
+ Here ${W}=[\mathbf{w}_1,\dots,\mathbf{w}_M]^{\intercal}$. Since the optimization problem for $\boldsymbol{\theta}_{W}^*$ is itself a standard bi-level optimization problem (*i.e.*, MAML), the complete problem (Eq. [\[inwt-opt\]](#inwt-opt){reference-type="eqref" reference="inwt-opt"}) is a **bi-bi-level** optimization problem, which is more complex than MAML. Hence, we adopt an online framework to solve it more efficiently.
+
+ The hyper-parameter optimization problem for the instance weights is expensive because it involves solving the MAML optimization problem to completion and then optimizing the weights at the optimal MAML initial parameters. To tackle this, we adopt an online algorithm.
+
+ At every training step $t$, a mini-batch of training tasks $\{{\mathcal{T}_i} \mid 1 \leq i \leq m \}$ is sampled, where $m$ is the mini-batch size and $m \ll M$.
+
+ The online approximation of the above problem is then as follows. Instead of adapting to a task completely, we adapt the model to a task by taking a single gradient step along the descent direction of the inner task-adaptation objective. Similarly, we optimize MAML's meta-objective by taking a single gradient step along the meta-objective's descent direction.
+
+ The model-parameter update for the instance-weighting scheme is as follows:
+
+ $$\begin{equation}
+ \begin{aligned}
+ \label{online-equation}
+ \boldsymbol{\theta}^{(t)}_{W} = \boldsymbol{\theta}^{(t)} - \eta \frac{1}{m}\sum_{i=1}^{m} \mathbf{w}_i^{(t)} \nabla_{\boldsymbol{\theta}} \mathcal{L}_i(\mathcal{A}lg(\boldsymbol{\theta},\mathcal{D}_i^{S}))|_{\boldsymbol{\theta}^{(t)}}
+ \end{aligned}
+ \end{equation}$$ where $\eta$ is the meta objective's step size and $\alpha$ is the inner objective's step size.
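The one-step update above can be written in a few lines of NumPy; this sketch (names are illustrative) also makes explicit that $\boldsymbol{\theta}^{(t)}_{W}$ is linear in the weights, with $\partial \boldsymbol{\theta}^{(t)}_{W} / \partial \mathbf{w}_i = -(\eta/m)\,\nabla_{\boldsymbol{\theta}}\mathcal{L}_i$, the term that later drives the weight update:

```python
import numpy as np

def weighted_meta_step(theta, task_grads, w, eta):
    """One online step: theta_W = theta - (eta / m) * sum_i w_i * g_i.

    theta:      (d,) current meta-parameters theta^(t).
    task_grads: (m, d) array; row i is grad_theta L_i(Alg(theta, D_i^S))
                evaluated at theta^(t) on task i's query set.
    w:          (m,) per-task weights (instance weights are collapsed to
                one scalar per task here, purely for illustration).
    eta:        outer (meta) step size.
    """
    m = task_grads.shape[0]
    return theta - (eta / m) * (w[:, None] * task_grads).sum(axis=0)

theta = np.zeros(2)
grads = np.array([[1.0, 0.0], [0.0, 2.0]])
theta_w = weighted_meta_step(theta, grads, np.array([1.0, 1.0]), eta=1.0)
```

Because `theta_w` is a linear function of `w`, down-weighting a task simply shrinks that task's contribution to the meta-step, which is what lets the validation gradient flow back to the weights.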
+
+ After the one-step approximation, the weight optimization problem becomes:
+
+ $$\begin{equation}
+ \begin{aligned}
+ W^{*} = {\operatorname*{arg\,min\hspace{1mm}}_{W}} \frac{1}{N} \sum_{j=1}^{N}\mathcal{L}_{V_j}(\mathcal{A}lg(\boldsymbol{\theta}_{W}^{(t)},\mathcal{V}_j^{S}))
+ \end{aligned}
+ \end{equation}$$ Unfortunately, solving this optimization problem exactly is also very expensive, so we optimize it by taking a single gradient step along the validation-loss descent direction. In effect, we evaluate the impact of training the model on the weighted MAML objective on the meta-objective of sampled validation tasks $\{\mathcal{T}_{j}^{V} \mid 1 \leq j \leq n \}$, where $n$ is the mini-batch size and $n \ll N$.
+
+ The weight update equation for the instance-weighting scheme is as follows: $$\begin{equation}
+ \begin{aligned}
+ {W}^{(t+1)} = {W}^{(t)} - \frac{\gamma}{n} \sum_{j=1}^{n}\nabla_{W}\mathcal{L}_{V_j}(\mathcal{A}lg(\boldsymbol{\theta}^{(t)}_{W},\mathcal{V}_j^{S}))
+ \end{aligned}
+ \label{online-weight-update}
+ \end{equation}$$ where $\gamma$ is the weight update's step size.
+
+ The lemma below gives the form of the gradient of the meta-validation loss $\frac{1}{n} \sum_{j=1}^{n}\nabla_{W}\mathcal{L}_{V_j}(\mathcal{A}lg(\boldsymbol{\theta}^{(t)}_{W},\mathcal{V}_j^{S}))$ with respect to the weight vector $\mathbf{w}_{i}$, and therefore the full update equation for $\mathbf{w}_{i}$.
+
+ ::: {#weight-update-lemma .lemma}
+ **Lemma 1**. *The weight update for an individual weight vector $\mathbf{w}_{i}$ of the task $\mathcal{T}_i$ from time step $t$ to $t+1$ is as follows: $$\begin{align}
+ \label{weight-update}
+ \mathbf{w}_i^{(t+1)}&=\mathbf{w}_i^{(t)} + \frac{\eta \gamma}{mn} \sum_{j=1}^{n} \nabla_{\phi_j} \mathcal{L}_{V_j} \Big(\nabla_{\boldsymbol{\theta}}\mathcal{L}_i(\mathcal{A}lg(\boldsymbol{\theta}, \mathcal{D}_i^S))^{\intercal} \nonumber\\
+ &- \alpha \nabla^2\widehat{\mathcal{L}}_{V_j}|_{\boldsymbol{\theta}_{W}^{(t)}} \nabla_{\boldsymbol{\theta}}\mathcal{L}_i(\mathcal{A}lg(\boldsymbol{\theta}, \mathcal{D}_i^S))^{\intercal} \Big)
+ \end{align}$$ where $\phi_j=\mathcal{A}lg(\boldsymbol{\theta}, \mathcal{V}_{j}^{S})$.*
+ :::
+
+ The detailed derivation is given in the supplementary material.
+
+ Next, we rectify the weights to be non-negative in order to reduce the instabilities encountered during the optimization procedure. $$\begin{equation}
+ {W}^{(t+1)} = \max(0, W^{(t+1)})
+ \end{equation}$$
+
+ Once the optimal weights $\mathbf{w}^{(t+1)}$ at time step $t+1$ are found, we train the model using the new weights. $$\begin{equation}
+ \begin{aligned}
+ \boldsymbol{\theta}^{(t+1)} = \boldsymbol{\theta}^{(t)} - \frac{\eta}{m}\sum_{i=1}^{m} \mathbf{w}_{i}^{(t+1)}\nabla_{\boldsymbol{\theta}}\mathcal{L}_i(\mathcal{A}lg(\boldsymbol{\theta}^{(t)}, \mathcal{D}_i^{S}))
+ \end{aligned}
+ \label{model-update}
+ \end{equation}$$
+
+ We repeat the online steps given in Eq. [\[online-equation\]](#online-equation){reference-type="eqref" reference="online-equation"} from $t=1$ until convergence.
+
+ Even after adopting the online framework, the weight-gradient calculation involves computing multiple Hessian-vector products, which is expensive. Since the coefficient of the Hessian-vector-product term in the weight update (Eq. [\[weight-update\]](#weight-update){reference-type="eqref" reference="weight-update"}) is the product of three learning rates, $\eta\alpha\gamma$, we can approximate this term as close to 0 when the learning rates are small.
+
+ After this approximation, the weight update equation becomes: $$\begin{equation}
+ \label{weight-update-appr}
+ \mathbf{w}_i^{(t+1)}=\mathbf{w}_i^{(t)} + \frac{\eta \gamma}{mn} \sum_{j=1}^{n} \nabla_{\phi_j} \mathcal{L}_{V_j} \nabla_{\boldsymbol{\theta}}\mathcal{L}_i(\mathcal{A}lg(\boldsymbol{\theta}, \mathcal{D}_i^S))^{\intercal}
+ \end{equation}$$
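The first-order update in Eq. [\[weight-update-appr\]](#weight-update-appr){reference-type="eqref" reference="weight-update-appr"}, followed by the rectification $W = \max(0, W)$, can be sketched in NumPy as follows. The gradient arrays are assumed precomputed, and collapsing instance weights to one scalar per task is a simplification for illustration:

```python
import numpy as np

def approx_weight_update(w, train_grads, val_grads, eta, gamma):
    """First-order weight update followed by W = max(0, W).

    w:           (m,) current per-task weights.
    train_grads: (m, d); row i is grad_theta L_i on task T_i's query set.
    val_grads:   (n, d); row j is the validation meta-gradient for task j.
    """
    m, n = train_grads.shape[0], val_grads.shape[0]
    # Alignment between each training-task gradient and the validation
    # gradients: tasks whose gradients agree with the validation descent
    # direction get their weight increased, conflicting tasks decreased.
    alignment = train_grads @ val_grads.sum(axis=0)     # (m,)
    w_new = w + (eta * gamma) / (m * n) * alignment
    return np.maximum(0.0, w_new)                        # rectification
```

Tasks whose gradients oppose the validation gradients (typically OOD or noisy tasks) have their weights shrunk and eventually clipped at zero.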
+
+ The number of weight hyper-parameters in the instance-weighted MAML scheme equals the number of data instances in the query sets of the meta-training tasks. If the number of training tasks or data instances is enormous, we must determine a very large number of hyper-parameters, which destabilizes the hyperparameter-optimization algorithm during training. Accordingly, we reduce the number of hyperparameters by sharing weights among instances. Task weighting is one instance of weight sharing, in which all instances in a query set share the same weight. Beyond task weighting, we also cluster tasks based on a similarity criterion and share the same weight among all data instances in a cluster's query sets. We present an ablation study in the experiments section illustrating how the number of clusters of training tasks or instances affects RW-MAML's performance.
+
+ We leverage automatic differentiation to compute the gradients of the meta-validation loss with respect to the weight hyper-parameters. The pipeline of our algorithm is shown in Figure [2](#fig:overview){reference-type="ref" reference="fig:overview"}. The implementation generalizes to any deep learning architecture and is easily realized in popular deep learning frameworks such as PyTorch [@NEURIPS2019_9015]. Pseudocode is given in Algorithm [\[alg-RWMAML\]](#alg-RWMAML){reference-type="ref" reference="alg-RWMAML"}.
+
+ :::: algorithm
+ ::: algorithmic
+ []{#alg-RWMAML label="alg-RWMAML"}
+ **Require:** $p_{tr}$, a distribution over training tasks; $p_{val}$, a distribution over validation tasks; $m,n$, batch sizes; $\alpha, \eta, \gamma$, learning rates
+ Randomly initialize $\boldsymbol{\theta}$ and $W$
+ Sample a mini-batch of training tasks $\{\mathcal{D}_i^{S},\mathcal{D}_i^{Q}\}_{i=1}^{m} \sim p_{tr}$
+ Sample a mini-batch of validation tasks $\{\mathcal{V}_j^{S},\mathcal{V}_j^{Q}\}_{j=1}^{n} \sim p_{val}$
+ Compute adapted parameters $\mathcal{A}lg(\boldsymbol{\theta},\mathcal{D}_i^{S})$ with gradient descent by Eq. ([\[param-adaptation\]](#param-adaptation){reference-type="ref" reference="param-adaptation"})
+ Compute the gradient $\nabla_{\boldsymbol{\theta}} \mathcal{L}_i(\mathcal{A}lg(\boldsymbol{\theta},\mathcal{D}_i^{S}))$ using $\mathcal{D}_i^{Q}$
+ Express $\boldsymbol{\theta}$ as a function of the weights, $\boldsymbol{\theta}_{W}^{(t)}$, by Eq. ([\[online-equation\]](#online-equation){reference-type="ref" reference="online-equation"})
+ Update $\mathbf{w}_i^{(t)}$ by Eq. ([\[weight-update\]](#weight-update){reference-type="ref" reference="weight-update"}) using $\{\mathcal{V}_j^{S},\mathcal{V}_j^{Q}\}_{j=1}^{n}$
+ Update $\boldsymbol{\theta}^{(t+1)}$ by Eq. ([\[model-update\]](#model-update){reference-type="ref" reference="model-update"}) using $\{\mathcal{D}_i^{Q}\}_{i=1}^{m}$
+ :::
+ ::::
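Putting the pieces together, the following is a self-contained toy sketch of the online loop on scalar linear-regression tasks. Everything here — the task construction, the first-order meta-gradients, and the step sizes — is an illustrative assumption, not the paper's experimental setup; it only demonstrates that the weight update drives OOD task weights toward zero while the meta-parameter converges toward the in-distribution solution.

```python
import numpy as np

# Toy tasks: y = a * x with a = +1 for in-distribution (ID) tasks and
# a = -3 for out-of-distribution (OOD) tasks; validation tasks are ID.

def grad(theta, x, y):
    # Gradient of the squared loss 0.5 * mean((x * theta - y)^2) wrt theta.
    return np.mean(x * (x * theta - y))

def meta_grad(theta, task, alpha=0.1):
    # First-order MAML meta-gradient: one inner step on the support set,
    # then the loss gradient on the query set at the adapted parameters.
    xs, ys, xq, yq = task
    phi = theta - alpha * grad(theta, xs, ys)
    return grad(phi, xq, yq)

x = np.array([0.5, 1.0, 1.5])
make_task = lambda a: (x, a * x, x, a * x)        # (support, query) pairs
train_tasks = [make_task(1.0), make_task(1.0),    # 2 ID tasks
               make_task(-3.0), make_task(-3.0)]  # 2 OOD tasks
val_tasks = [make_task(1.0)]                      # clean validation task

theta, w = 0.0, np.ones(len(train_tasks))
eta, gamma = 0.1, 0.5
for _ in range(200):
    g_tr = np.array([meta_grad(theta, t) for t in train_tasks])
    g_val = np.array([meta_grad(theta, t) for t in val_tasks])
    # First-order weight update plus rectification W = max(0, W).
    w = np.maximum(0.0, w + eta * gamma / (g_tr.size * g_val.size)
                          * g_tr * g_val.sum())
    # Weighted meta-update of the model parameters.
    theta -= eta * np.mean(w * g_tr)

print(w)       # OOD weights are driven to zero, ID weights stay positive
print(theta)   # theta approaches the ID slope a = +1
```

The OOD tasks' meta-gradients oppose the validation meta-gradient, so their weights decay and are clipped at zero; after that, the weighted meta-update is governed by the ID tasks alone.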
+
+ The following theorem shows that, under certain conditions, RW-MAML converges to a critical point of the meta-validation loss in $\mathcal{O}(1/\epsilon^2)$ epochs. The proof is given in the supplementary material.
+
+ ::: {#meta-validation-convergence .theorem}
+ **Theorem 1**. *Suppose the loss function $\mathcal{L}$ is Lipschitz smooth with constant $L$, is differentiable with a $\rho$-bounded gradient, and is twice differentiable with a $\mathcal{B}$-Lipschitz Hessian. Assume the learning rate $\eta_t$ satisfies $\eta_t = \min{(1, k/T)}$ for some $k>0$ such that $k/T < 1$, and that $\gamma_t$, $1 \leq t \leq T$, is a monotonically decreasing sequence with $\gamma_t = \min{(\frac{1}{L}, \frac{C}{\sigma \sqrt{T}})}$ for some $C > 0$ such that $\frac{\sigma\sqrt{T}}{C} \geq L$, $\sum_{t = 0}^{\infty}\gamma_t = \infty$, and $\sum_{t = 0}^{\infty}\gamma_t^2 < \infty$. Then RW-MAML satisfies $\mathbb{E}\Bigg[\left\Vert\frac{1}{N}\sum_{j=1}^{N}\nabla_W\mathcal{L}(\mathcal{A}lg(\boldsymbol{\theta}_W^{(t)}, \mathcal{V}_j^S), \mathcal{V}_j^Q)\right\Vert^2\Bigg] \leq \epsilon$ in $\mathcal{O}(1/ \epsilon^2)$ steps. More specifically, $$\begin{equation}
+ \underset{0 \leq t \le T}{\min}\mathbb{E}\Bigg[\left\Vert\frac{1}{N}\sum_{j=1}^{N}\nabla_W\mathcal{L}(\mathcal{A}lg(\boldsymbol{\theta}_W^{(t)}, \mathcal{V}_j^S), \mathcal{V}_j^Q)\right\Vert^2\Bigg] \leq \frac{1}{\sqrt{T}}
+ \end{equation}$$*
+
+ *where $C$ is a constant independent of the convergence process and $\sigma$ is the variance of drawing a mini-batch sample uniformly at random.*
+ :::
+
+ ::: table*
+ **5-way 3-shot** (accuracy (%) by OOD dataset $\mathcal{D}_{out}$ and OOD ratio)
+
+ | | SVHN 30% | SVHN 60% | SVHN 90% | FashionMNIST 30% | FashionMNIST 60% | FashionMNIST 90% |
+ |:---------------------|:---:|:---:|:---:|:---:|:---:|:---:|
+ | MAML-OOD-RM (Skyline) | 57.73$\scriptstyle{\pm 0.76}$ | 55.29$\scriptstyle{\pm 0.78}$ | 54.38$\scriptstyle{\pm 0.12}$ | 56.78$\scriptstyle{\pm 0.75}$ | 55.29$\scriptstyle{\pm 0.78}$ | 53.43$\scriptstyle{\pm 0.51}$ |
+ | MAML | 55.41$\scriptstyle{\pm 0.75}$ | 53.93$\scriptstyle{\pm 0.76}$ | 44.10$\scriptstyle{\pm 0.68}$ | 54.65$\scriptstyle{\pm 0.77}$ | 54.52$\scriptstyle{\pm 0.76}$ | 41.52$\scriptstyle{\pm 0.74}$ |
+ | MMAML | 51.04$\scriptstyle{\pm 0.87}$ | 50.28$\scriptstyle{\pm 0.97}$ | 41.56$\scriptstyle{\pm 0.96}$ | 50.32$\scriptstyle{\pm 0.93}$ | 47.54$\scriptstyle{\pm 1.05}$ | 42.09$\scriptstyle{\pm 0.97}$ |
+ | B-TAML | 53.87$\scriptstyle{\pm 0.18}$ | 49.84$\scriptstyle{\pm 0.23}$ | 42.00$\scriptstyle{\pm 0.21}$ | 51.14$\scriptstyle{\pm 0.23}$ | 46.59$\scriptstyle{\pm 0.20}$ | 36.69$\scriptstyle{\pm 0.21}$ |
+ | L2R | 47.13$\scriptstyle{\pm 0.13}$ | 40.69$\scriptstyle{\pm 0.62}$ | 47.26$\scriptstyle{\pm 0.72}$ | 33.14$\scriptstyle{\pm 0.60}$ | 44.03$\scriptstyle{\pm 0.70}$ | 33.06$\scriptstyle{\pm 0.60}$ |
+ | RW-MAML (Appr, ours) | 54.76$\scriptstyle{\pm 1.19}$ | 45.86$\scriptstyle{\pm 1.19}$ | 43.55$\scriptstyle{\pm 1.20}$ | **57.00**$\scriptstyle{\pm 1.20}$ | 55.18$\scriptstyle{\pm 1.16}$ | 48.52$\scriptstyle{\pm 1.21}$ |
+ | RW-MAML (ours) | **57.12**$\scriptstyle{\pm 0.81}$ | **55.66**$\scriptstyle{\pm 0.78}$ | **52.16**$\scriptstyle{\pm 0.76}$ | 56.66$\scriptstyle{\pm 0.78}$ | **56.04**$\scriptstyle{\pm 0.79}$ | **49.71**$\scriptstyle{\pm 0.78}$ |
+
+ **5-way 5-shot** (accuracy (%) by OOD dataset $\mathcal{D}_{out}$ and OOD ratio)
+
+ | | SVHN 30% | SVHN 60% | SVHN 90% | FashionMNIST 30% | FashionMNIST 60% | FashionMNIST 90% |
+ |:---------------------|:---:|:---:|:---:|:---:|:---:|:---:|
+ | MAML-OOD-RM (Skyline) | 61.89$\scriptstyle{\pm 0.69}$ | 61.31$\scriptstyle{\pm 0.75}$ | 57.79$\scriptstyle{\pm 0.69}$ | 59.83$\scriptstyle{\pm 0.76}$ | 61.31$\scriptstyle{\pm 0.75}$ | 59.61$\scriptstyle{\pm 0.75}$ |
+ | MAML | 58.90$\scriptstyle{\pm 0.71}$ | 58.66$\scriptstyle{\pm 0.75}$ | 49.94$\scriptstyle{\pm 0.69}$ | 59.06$\scriptstyle{\pm 0.68}$ | 59.25$\scriptstyle{\pm 0.73}$ | 49.84$\scriptstyle{\pm 0.69}$ |
+ | MMAML | 52.45$\scriptstyle{\pm 1.00}$ | 52.17$\scriptstyle{\pm 1.05}$ | 46.51$\scriptstyle{\pm 1.09}$ | 51.46$\scriptstyle{\pm 0.91}$ | 54.13$\scriptstyle{\pm 0.93}$ | 50.27$\scriptstyle{\pm 1.00}$ |
+ | B-TAML | 58.34$\scriptstyle{\pm 0.20}$ | 56.07$\scriptstyle{\pm 0.21}$ | 49.84$\scriptstyle{\pm 0.20}$ | 55.19$\scriptstyle{\pm 0.20}$ | 52.10$\scriptstyle{\pm 0.19}$ | 40.02$\scriptstyle{\pm 0.19}$ |
+ | L2R | 47.11$\scriptstyle{\pm 0.51}$ | 48.01$\scriptstyle{\pm 0.70}$ | 51.53$\scriptstyle{\pm 0.71}$ | 46.03$\scriptstyle{\pm 0.30}$ | 49.15$\scriptstyle{\pm 0.68}$ | 55.03$\scriptstyle{\pm 0.46}$ |
+ | RW-MAML (Appr, ours) | 57.96$\scriptstyle{\pm 0.94}$ | 53.66$\scriptstyle{\pm 0.95}$ | 47.58$\scriptstyle{\pm 0.96}$ | **60.59**$\scriptstyle{\pm 0.99}$ | **60.55**$\scriptstyle{\pm 0.95}$ | 49.23$\scriptstyle{\pm 0.98}$ |
+ | RW-MAML (ours) | **60.76**$\scriptstyle{\pm 0.70}$ | **60.53**$\scriptstyle{\pm 0.71}$ | **57.88**$\scriptstyle{\pm 0.70}$ | 60.41$\scriptstyle{\pm 0.72}$ | **60.54**$\scriptstyle{\pm 0.72}$ | **57.95**$\scriptstyle{\pm 0.71}$ |
+
+ []{#tab:ood-classification label="tab:ood-classification"}
+ :::
202
+
203
+ :::: table*
204
+ ::: center
205
+ +:---------------+:--------------------------------------:+:-------------------------------------:+:--------------------------------------:+:-------------------------------------:+:--------------------------------------:+:--------------------------------------:+
206
+ | | **5-way 3-shot** | **5-way 5-shot** |
207
+ +----------------+----------------------------------------+---------------------------------------+----------------------------------------+---------------------------------------+----------------------------------------+----------------------------------------+
208
+ | Noise Ratio | 20% | 30% | 50% | 20% | 30% | 50% |
209
+ +----------------+----------------------------------------+---------------------------------------+----------------------------------------+---------------------------------------+----------------------------------------+----------------------------------------+
210
+ | MAML-Noise-RM | $60.2\scriptstyle{\pm 0.02}$ | $59.35\scriptstyle{\pm 0.01}$ | $58.21\scriptstyle{\pm 0.71}$ | $61.2\scriptstyle{\pm 0.21}$ | $60.3\scriptstyle{\pm 0.32}$ | $59.1\scriptstyle{\pm 0.68}$ |
211
+ +----------------+----------------------------------------+---------------------------------------+----------------------------------------+---------------------------------------+----------------------------------------+----------------------------------------+
212
+ | MAML | $54.8\scriptstyle{\pm 0.64}$ | $53.9\scriptstyle{\pm 1.10}$ | $51.8\scriptstyle{\pm 0.12}$ | $59.2\scriptstyle{\pm 0.28}$ | $57.6\scriptstyle{\pm 0.36}$ | $53.5\scriptstyle{\pm 0.48}$ |
213
+ +----------------+----------------------------------------+---------------------------------------+----------------------------------------+---------------------------------------+----------------------------------------+----------------------------------------+
214
+ | RW-MAML (ours) | $\textbf{55.24}\scriptstyle{\pm 0.72}$ | $\textbf{54.7}\scriptstyle{\pm 1.20}$ | $\textbf{53.68}\scriptstyle{\pm 0.21}$ | $\textbf{59.6}\scriptstyle{\pm 0.54}$ | $\textbf{58.16}\scriptstyle{\pm 0.87}$ | $\textbf{55.61}\scriptstyle{\pm 1.32}$ |
215
+ +----------------+----------------------------------------+---------------------------------------+----------------------------------------+---------------------------------------+----------------------------------------+----------------------------------------+
216
+
217
+ []{#tab:noise label="tab:noise"}
218
+ :::
219
+ ::::
2102.07762/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2102.07762/paper_text/intro_method.md ADDED
@@ -0,0 +1,226 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Method
2
+
3
+ Our MIA consists of computing the two terms in Eq. ([\[eq:L-mem\]](#eq:L-mem){reference-type="ref" reference="eq:L-mem"}), i.e. $L_{rec}$ and $L_{pred}$, for a given query pair $(x,y)$, where $x$ is an image from the input domain and $y$ is the ground truth from the output domain, using only black-box access to the victim conditional generation model $\mathbf{V}$.
4
+
5
+ $L_{rec}$ is computed using the pixel-wise error between the output image predicted by the model, $\mathbf{V}(x)$, and the ground truth image $y$, see step $1$ of Algorithm 1. For image translation models, we set the pixel-wise error function, $err$, to be the $L_1$ loss: $$\begin{equation}
6
+ \label{eq:L-rec_l1}
7
+ L^{trans}_{rec}(x, y) = \| \mathbf{V}(x) - y\|
8
+ \end{equation}$$ For semantic segmentation, where the output values are probability vectors rather than pixel values, we use the cross-entropy loss: $$\begin{equation}
9
+ \label{eq:L-rec-seg}
10
+ L^{seg}_{rec}(x, y) = -log(\mathbf{V}(x)[y])
11
+ \end{equation}$$ In the case of medical segmentation, following Fan et al.  [@fan2020pra; @fan2020inf], we use the weighted IoU loss and binary cross-entropy loss:
12
+
13
+ $$\begin{equation}
14
+ \label{eq:L-rec-med}
15
+ L^{med}_{rec}(x, y) = L^w_{IoU}(x,y) + L^w_{BCE}(x,y)
16
+ \end{equation}$$
17
+
18
+ These are defined as: $$\begin{equation}
19
+ \label{eq:L-w-iou}
20
+ \small
21
+ L^w_{IoU} = 1 - \frac{\sum\limits_{i=1}^{H}\sum\limits_{j=1}^{W} w_{ij} (\mathbf{V}(x)_{ij}\cdot y_{ij})}{\sum\limits_{i=1}^{H}\sum\limits_{j=1}^{W} w_{ij} (\mathbf{V}(x)_{ij} + y_{ij} - \mathbf{V}(x)_{ij} \cdot y_{ij})}
22
+ \end{equation}$$
23
+
24
+ $$\begin{equation}
25
+ \label{eq:L-w-bce}
26
+ \small
27
+ L^w_{BCE} = - \frac{\sum\limits_{i=1}^{H}\sum\limits_{j=1}^{W} w_{ij} log(\mathbf{V}(x)[y]_{ij})}{\sum\limits_{i=1}^{H}\sum\limits_{j=1}^{W} w_{ij}}
28
+ \end{equation}$$
29
+
30
+ where $H$ and $W$ are the height and width of the query sample, and $w_{ij}$ is the weight of pixel $(i, j)$, defined as follows, with $A_{ij}$ representing the area that surrounds pixel $(i, j)$: $$\begin{equation}
31
+ \label{eq:w-ij}
32
+ \small
33
+ w_{ij} = 1 - \left| \frac{\sum\limits_{m,n\in A_{ij}} y_{mn}}{\sum\limits_{m,n\in A_{ij}} 1} - y_{ij} \right|
34
+ \end{equation}$$
35
+
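As a hedged sketch of the weighted losses above, the following NumPy code computes $L^{med}_{rec}$ for a binary mask; the $k{\times}k$ window used for $A_{ij}$ is our assumption, as the text only says "the area that surrounds the pixel":

```python
import numpy as np

def pixel_weights(y, k=3):
    """w_ij = 1 - |mean of y over a k x k window around (i, j) - y_ij|.
    The window size k is an illustrative assumption."""
    H, W = y.shape
    r = k // 2
    yp = np.pad(y, r, mode="edge")
    w = np.empty((H, W), dtype=float)
    for i in range(H):
        for j in range(W):
            w[i, j] = 1.0 - abs(yp[i:i + k, j:j + k].mean() - y[i, j])
    return w

def weighted_rec_loss(pred, y, k=3, eps=1e-7):
    """L_rec^med = weighted IoU loss + weighted BCE loss, per the
    equations above. pred holds foreground probabilities, y is a 0/1 mask."""
    w = pixel_weights(y, k)
    inter = (w * pred * y).sum()
    union = (w * (pred + y - pred * y)).sum()
    l_iou = 1.0 - inter / (union + eps)
    # V(x)[y]: probability assigned to the true class of each pixel
    p_true = np.where(y == 1, pred, 1.0 - pred)
    l_bce = -(w * np.log(p_true + eps)).sum() / (w.sum() + eps)
    return l_iou + l_bce
```

A prediction close to the ground truth yields a much smaller loss than a flat, uninformative prediction, as expected.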
36
+ $L_{pred}$ is computed as the average error of a linear regression model, $\mathbf{P}$, in predicting pixel values from deep features of the input image.
37
+
38
+ Our deep features are the activation values in the first $4$ blocks of a pre-trained Wide-ResNet$50{\times}2$ [@zagoruyko2016wide]. These features are of sizes $56{\times}56{\times}256$, $28{\times}28{\times}512$, $14{\times}14{\times}1024$, and $7{\times}7{\times}2048$. We interpolate all features to size $56{\times}56$ using bi-linear interpolation (step 2), and also reduce the output image to $56{\times}56$ using bicubic interpolation (step 3). This gives a concatenated feature vector of size $3840$ for each pixel $i$ in the $56{\times}56$ image ($256{+}512{+}1024{+}2048{=}3840$). We denote the concatenated feature vector for pixel $i$ as $\psi(i)$.
39
+
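A minimal NumPy sketch of this interpolation-and-concatenation step (steps 2-3), with bilinear resizing written out explicitly; it is shown on toy-sized feature maps rather than the actual Wide-ResNet activations:

```python
import numpy as np

def bilinear_resize(fm, out_h, out_w):
    """Bilinearly resize an (h, w, c) feature map to (out_h, out_w, c)."""
    h, w, _ = fm.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None, None]
    wx = (xs - x0)[None, :, None]
    top = fm[y0][:, x0] * (1 - wx) + fm[y0][:, x1] * wx
    bot = fm[y1][:, x0] * (1 - wx) + fm[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def per_pixel_features(feature_maps, out_hw=(56, 56)):
    """Resize every block's activations to a common grid and concatenate
    along channels, giving one feature vector psi(i) per pixel."""
    resized = [bilinear_resize(f, *out_hw) for f in feature_maps]
    return np.concatenate(resized, axis=-1)
```

With the four Wide-ResNet$50{\times}2$ block sizes, this yields a $56{\times}56{\times}3840$ tensor.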
40
+ We randomly select $70\%$ of the pixels as train set, and compute a linear model $\mathbf{P}$ to estimate the RGB pixel values $y_{train}^i$ from the corresponding deep features $\psi_{train}(i)$ (step 4). The remaining $30\%$ of pixels will be used as a test split, $\{\psi_{test}, y_{test}\}$ (step 5). That is, $|\psi_{train}| = 2195{\times}3840, |y_{train}| = 2195{\times}3$ and $|\psi_{test}| = 941{\times}3840, |y_{test}| = 941{\times}3$.
41
+
42
+ The linear regression model $\mathbf{P}$, a matrix of size $3840{\times}3$, is trained to minimize the error over $\{\psi_{train}, y_{train}\}$ (step 6). $L_{pred}$ will be the average absolute error over $\{\psi_{test}, y_{test}\}$ (step 7). We found that fitting the linear model to 70% of pixels and measuring the error on the remaining 30% gives better results than just measuring the error of the linear fitting.
43
+
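The per-pixel train/test split and linear fit (steps 4-7) can be sketched as follows; the feature dimension is reduced here for illustration:

```python
import numpy as np

def predictability_error(feats, pixels, train_frac=0.7, seed=0):
    """L_pred: fit a linear map P from per-pixel deep features to RGB
    values on 70% of pixels, and report the mean L1 error on the
    held-out 30%. feats: (N, F) feature vectors, pixels: (N, 3) RGB."""
    rng = np.random.default_rng(seed)
    n = feats.shape[0]
    idx = rng.permutation(n)
    n_tr = int(train_frac * n)
    tr, te = idx[:n_tr], idx[n_tr:]
    # Least-squares fit: P minimizes ||feats[tr] @ P - pixels[tr]||^2
    P, *_ = np.linalg.lstsq(feats[tr], pixels[tr], rcond=None)
    err = np.abs(feats[te] @ P - pixels[te]).sum(axis=1)  # per-pixel L1
    return err.mean()
```

An exactly linear feature-to-pixel relation gives a near-zero held-out error, while an unpredictable target gives a large one, which is the signal the attack exploits.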
44
+ We compute $L_{mem}$ according to Eq. ([\[eq:L-mem\]](#eq:L-mem){reference-type="ref" reference="eq:L-mem"}) and compare the result with a predefined threshold value $\tau$, such that any pair $(x, y)$ for which it holds that $L_{mem}(x, y) < \tau$ is denoted as a member of the victim model $\mathbf{V}$'s train set (steps 8-9).
45
+
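The final decision rule (steps 8-9) is then a one-liner; the threshold value used here is purely illustrative, since $\tau$ is a free parameter of the attack:

```python
def is_member(l_rec, l_pred, alpha=1.0, tau=0.0):
    """Membership decision: L_mem = L_rec - alpha * L_pred, compared
    against a predefined threshold tau (tau = 0 here is illustrative)."""
    return (l_rec - alpha * l_pred) < tau
```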
46
+ We have experimented with different resize methods (step 3) and found that our attack success rate is not very sensitive to the resize method. Additionally, we evaluated the effect of different train-test partitions (steps 4 & 5) and found that using less than $50\%$ of the image pixels for training the linear regression model results in unstable performance, while all values of $50\%$ or above result in similar attack success rates.
47
+
48
+ We experimented with different values of $\alpha$ in Eq. ([\[eq:L-mem\]](#eq:L-mem){reference-type="ref" reference="eq:L-mem"}). As can be seen in Fig. [6](#fig:alpha_effect){reference-type="ref" reference="fig:alpha_effect"}, $\alpha = 1$ was the best choice across all benchmarks.
49
+
50
+ <figure id="fig:alpha_effect" data-latex-placement="h">
51
+ <div class="center">
52
+
53
+ </div>
54
+ <figcaption>Effect of <span class="math inline"><em>α</em></span> in Eq. (<a href="#eq:L-mem" data-reference-type="ref" data-reference="eq:L-mem">[eq:L-mem]</a>) over the attack success.</figcaption>
55
+ </figure>
56
+
57
+ ::: mdframed
58
+ **Algorithm 1.** Membership Inference Attack\
59
+ **Input:** Query pair $(x, y)$, victim model $\mathbf{V}$, feature extractor $\mathbf{F}$, scalar $\alpha$, threshold $\tau$, error function $err$\
60
+ **Output:** Membership inference result\
61
+
62
+ 1. $L_{rec} = err(\mathbf{V}(x), y)$
63
+
64
+ 2. $\psi = \mathbf{F}(x) //$ $|\psi| = 56\times56\times3840$
65
+
66
+ 3. $y = resize(y, 56\times56\times3)$
67
+
68
+ 4. $\{\psi_{train}, y_{train}\} \xleftarrow{70\%} \{\psi, y\}$
69
+
70
+ 5. $\{\psi_{test}, y_{test}\} = \{\psi, y\} \setminus \{\psi_{train}, y_{train}\}$
71
+
72
+ 6. Train linear regression $\mathbf{P}$ with $\{\psi_{train}, y_{train}\}$
73
+
74
+ 7. $L_{pred} = \frac{1}{N}\sum_{i=1}^{N} \| \mathbf{P} \psi_{test} (i) - y_{test}^i\| _1$ $// N = 941$
75
+
76
+ 8. $L_{mem} = L_{rec} - \alpha \cdot L_{pred}$
77
+
78
+ 9. **if** $L_{mem} < \tau$ **then**\
79
+ Return **True**\
80
+ **else**\
81
+ Return **False**\
82
+
83
+ []{#alg:attack label="alg:attack"}
84
+ :::
85
+
86
+ As described in Sec. [4.1](#sec:investigation_experimetns){reference-type="ref" reference="sec:investigation_experimetns"}, we evaluated the effect of reducing the output dimension on the accuracy of reconstruction-based MIA. The reduction was achieved by randomly sampling $N$ output pixels and using them as the output, where $N$ ranges from a single pixel up to $200$ pixels. Fig. [7](#fig:rest_dim_reduce_effect){reference-type="ref" reference="fig:rest_dim_reduce_effect"} demonstrates that MIA accuracy indeed scales with the number of output dimensions. Results for Pix2PixHD, UperNet and HRNetV2 are presented in Fig. [2](#fig:dim_reduce_effect){reference-type="ref" reference="fig:dim_reduce_effect"}.
87
+
88
+ <figure id="fig:rest_dim_reduce_effect" data-latex-placement="t">
89
+ <div class="center">
90
+
91
+ </div>
92
+ <figcaption>Effect of reducing output dimensionality on a reconstruction-based attack. MIA accuracy drops as the output dimension, i.e. the number of pixels, is reduced, demonstrating that problems with high output dimensionality are more vulnerable to MIA.</figcaption>
93
+ </figure>
94
+
95
+ As can be seen in Tab. [1](#tab:main_method_results){reference-type="ref" reference="tab:main_method_results"}, using our membership error $L_{mem}$, Eq. ([\[eq:L-mem\]](#eq:L-mem){reference-type="ref" reference="eq:L-mem"}), substantially improves the success rates in all of our experiments. As can be seen in Fig. [8](#fig:rest_of_calibration_effect){reference-type="ref" reference="fig:rest_of_calibration_effect"}, our $L_{mem}$ can better separate train and test images by a simple threshold compared to the reconstruction error $L_{rec}$. Results for Pix2PixHD on the Maps2sat and Cityscapes datasets are presented in Fig. [4](#fig:calibration_effect){reference-type="ref" reference="fig:calibration_effect"}.
96
+
97
+ <figure id="fig:rest_of_calibration_effect" data-latex-placement="tb">
98
+ <div class="center">
99
+ <p><br />
100
+ <br />
101
+ <br />
102
+ </p>
103
+ </div>
104
+ <figcaption>The proposed membership error <span class="math inline"><em>L</em><sub><em>m</em><em>e</em><em>m</em></sub></span> can better separate train and test images by a simple threshold (i.e. a vertical line) compared to the reconstruction error <span class="math inline"><em>L</em><sub><em>r</em><em>e</em><em>c</em></sub></span>. Pix2pixHD for Maps2sat and Cityscapes are presented in Fig. <a href="#fig:calibration_effect" data-reference-type="ref" data-reference="fig:calibration_effect">4</a></figcaption>
105
+ </figure>
106
+
107
+ We compare our self-supervised single-sample predictability error with the human-supervised difficulty score proposed by [@tudor2016hard]. In Fig. [9](#fig:supervised_samples){reference-type="ref" reference="fig:supervised_samples"}, we present images ranked from easy to difficult using our implementation of the supervised image difficulty score, for the Cityscapes and Maps datasets. The ranking seems correlated with image sharpness and level of detail. As can be seen in Tab. [3](#tab:single_vs_multi_diff_score){reference-type="ref" reference="tab:single_vs_multi_diff_score"}, our score outperforms the human-supervised score. We compare the correlation of the reconstruction error for unseen images with our self-supervised predictability error and with the human-supervised score.
108
+
109
+ ::: {#tab:supervised_score_correlation}
110
+ ------------ ------------- -------------- ----------------------
111
+ **Model** **Dataset** **Ours** **Human-Supervised**
112
+ train / test train / test
113
+ Pix2Pix Facades 0.79 / 0.50 -0.02 / 0.16
114
+ Pix2Pix Maps2sat 0.51 / 0.77 0.79 / 0.52
115
+ Pix2Pix Cityscapes 0.78 / 0.71 0.04 / 0.09
116
+ Pix2PixHD Facades 0.67 / 0.36 0.27 / 0.04
117
+ Pix2PixHD Maps2sat 0.38 / 0.79 0.77 / 0.56
118
+ Pix2PixHD Cityscapes 0.76 / 0.62 0.36 / 0.48
119
+ SPADE Cityscapes 0.80 / 0.68 0.29 / 0.53
120
+ SPADE ADE20K 0.48 / 0.27 0.25 / -0.05
121
+ UperNet50 ADE20K 0.66 / 0.13 0.34 / 0.05
122
+ UperNet101 ADE20K 0.65 / 0.13 0.38 / 0.06
123
+ HRNetV2 ADE20K 0.61 / 0.22 0.36 / 0.10
124
+ ------------ ------------- -------------- ----------------------
125
+
126
+ : Our self-supervised difficulty score is better correlated with the reconstruction error than the human-supervised score.
127
+ :::
128
+
129
+ <figure id="fig:supervised_samples" data-latex-placement="h">
130
+ <div class="center">
131
+ <p><img src="figures/supervised/low_city_0.jpg" style="width:15.0%;height:18.0%" alt="image" /> <img src="figures/supervised/low_city_1.jpg" style="width:15.0%;height:18.0%" alt="image" /> <img src="figures/supervised/low_city_2.jpg" style="width:15.0%;height:18.0%" alt="image" /> <img src="figures/supervised/low_maps_0.jpg" style="width:15.0%;height:18.0%" alt="image" /> <img src="figures/supervised/low_maps_1.jpg" style="width:15.0%;height:18.0%" alt="image" /> <img src="figures/supervised/low_maps_2.jpg" style="width:15.0%;height:18.0%" alt="image" /><br />
132
+ <img src="figures/supervised/high_city_0.jpg" style="width:15.0%;height:18.0%" alt="image" /> <img src="figures/supervised/high_city_1.jpg" style="width:15.0%;height:18.0%" alt="image" /> <img src="figures/supervised/high_city_2.jpg" style="width:15.0%;height:18.0%" alt="image" /> <img src="figures/supervised/high_maps_0.jpg" style="width:15.0%;height:18.0%" alt="image" /> <img src="figures/supervised/high_maps_1.jpg" style="width:15.0%;height:18.0%" alt="image" /> <img src="figures/supervised/high_maps_2.jpg" style="width:15.0%;height:18.0%" alt="image" /></p>
133
+ </div>
134
+ <figcaption>Examples of images from the Cityscapes (first two rows) and Maps2sat (last two rows) datasets that received the lowest (first and third row) and highest (second and last row) predictability errors using the supervised difficulty score.</figcaption>
135
+ </figure>
136
+
137
+ As discussed in Sec. [4.2.2](#sec:single_vs_multi_diff_score){reference-type="ref" reference="sec:single_vs_multi_diff_score"}, we compare our single-sample predictability error to a multi-sample predictability error (MSPS) by training a \"shadow\" model, sharing the same architecture as the victim model, on auxiliary samples. As can be seen in Tab. [3](#tab:single_vs_multi_diff_score){reference-type="ref" reference="tab:single_vs_multi_diff_score"}, when training the MSPS on $100$ images, it underperforms our method on Pix2PixHD and the evaluated semantic segmentation models. For the smaller Pix2Pix architecture, MSPS was more successful, obtaining results competitive with our method. We analyzed the effect of the number of samples on MSPS performance. As can be seen in Fig. [10](#fig:multi_image_comparison){reference-type="ref" reference="fig:multi_image_comparison"}, in most tasks, increasing the number of samples did not improve performance.
138
+
139
+ We also compare our method to the setting where many out-of-distribution but similar samples are available. We trained shadow models on $4K$ samples from the BDD dataset as MSPS for the Cityscapes dataset. As can be seen in Tab. [8](#tab:multi_image_bdd){reference-type="ref" reference="tab:multi_image_bdd"}, this too underperforms our method. Note that it is rare to have similar datasets with nearly identical labels, such as in the case of BDD and Cityscapes.
140
+
141
+ <figure id="fig:multi_image_comparison" data-latex-placement="tb">
142
+ <div class="center">
143
+ <p><br />
144
+ <br />
145
+ <br />
146
+ </p>
147
+ </div>
148
+ <figcaption>Comparison of MIA accuracy when using our single sample vs. using multi-sample predictability errors, as a function of the number of training images. Note that the multi-image score assumes knowledge of the victim’s model, as well as the availability of many labeled training images</figcaption>
149
+ </figure>
150
+
151
+ ::: {#tab:multi_image_bdd}
152
+ +:----------+:-----------:+:----------:+:----------:+:--------:+:-:+
153
+ | **Model** | **Dataset** | **Ours** | **Multi-Image** | |
154
+ +-----------+-------------+------------+------------+----------+---+
155
+ | | | | In-Dist | BDD | |
156
+ +-----------+-------------+------------+------------+----------+---+
157
+ | Pix2pix | Cityscapes | **82.94%** | **82.47**% | 74.43% | |
158
+ +-----------+-------------+------------+------------+----------+---+
159
+ | Pix2pixHD | Cityscapes | **99.29%** | 96.86% | 66.2% | |
160
+ +-----------+-------------+------------+------------+----------+---+
161
+
162
+ : Comparison between our single-image predictability error and two multi-image baselines, using in-distribution images and a larger amount of out-of-distribution images (BDD).
163
+ :::
164
+
165
+ ::: table*
166
+ +:----------+:-----------:+:----------:+:------:+:------:+:-----------:+:-----------:+
167
+ | **Model** | **Dataset** | **Ours** | **In-Dist** | **Out-of-Dist (BDD)** |
168
+ +-----------+-------------+------------+--------+--------+-------------+-------------+
169
+ | | | ROC | ROC | Acc. | ROC | Acc. |
170
+ +-----------+-------------+------------+--------+--------+-------------+-------------+
171
+ | Pix2pix | Maps2sat | **85.65%** | 80.15% | 73.4% | \- | \- |
172
+ +-----------+-------------+------------+--------+--------+-------------+-------------+
173
+ | Pix2pix | Cityscapes | **83.23%** | 78.68% | 67.5% | 72.57% | 56.16% |
174
+ +-----------+-------------+------------+--------+--------+-------------+-------------+
175
+ | Pix2pixHD | Maps2sat | **99.42%** | 98.63% | 93.7% | \- | \- |
176
+ +-----------+-------------+------------+--------+--------+-------------+-------------+
177
+ | Pix2pixHD | Cityscapes | **99.09%** | 96.39% | 64.0% | 95.78% | 56.5% |
178
+ +-----------+-------------+------------+--------+--------+-------------+-------------+
179
+ :::
180
+
181
+ As discussed in Sec. [4.2.2](#sec:single_vs_multi_diff_score){reference-type="ref" reference="sec:single_vs_multi_diff_score"}, in the interest of completeness we compare our method with the popular approach of shadow-model classifiers for image translation MIA. For this, we select $N$ images, denoted as *shadow_train*, for training the shadow model. As an upper bound, the shadow model is given exactly the same architecture as the victim model. Another $N$ images, not seen by the shadow model, are set to be *shadow_test*. We define a new label for each sample as follows: $$\begin{equation}
182
+ label(x)=
183
+ \begin{cases}
184
+ 0,& \text{if } x\leftarrow shadow\_train\\
185
+ 1, & \text{if } x\leftarrow shadow\_test
186
+ \end{cases}
187
+ \end{equation}$$ The architecture and training procedure of the classifier $\mathbf{C}$ are similar to [@he2019segmentations]. For each image, we compute the structured loss map between the ground-truth image and the generated image, using $L_1$ as the loss function. At every epoch we randomly crop $15$ patches of size $90\times90$ from the structured loss map. We train a ResNet-50 [@he2016deep] from scratch on the $90\times90$ patches, modified for binary classification. We use a batch size of $8$, an SGD optimizer, a weight decay of $10^{-2}$, and an initial learning rate of $0.1$, which is reduced by a factor of $0.1$ every $15$ epochs. As previously mentioned, we do not evaluate this on the Facades dataset, due to its size.
188
+
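A hedged sketch of the per-image input preparation for this classifier (the structured $L_1$ loss map and random $90{\times}90$ crops); the ResNet-50 training itself is omitted:

```python
import numpy as np

def loss_map_patches(gen, gt, n_patches=15, size=90, seed=0):
    """Build inputs for the shadow-model classifier: the per-pixel L1
    'structured loss map' between generated and ground-truth images,
    randomly cropped into size x size patches (15 per image per epoch)."""
    rng = np.random.default_rng(seed)
    loss_map = np.abs(gen.astype(float) - gt.astype(float)).mean(axis=-1)
    H, W = loss_map.shape
    patches = []
    for _ in range(n_patches):
        i = rng.integers(0, H - size + 1)
        j = rng.integers(0, W - size + 1)
        patches.append(loss_map[i:i + size, j:j + size])
    return np.stack(patches)  # (n_patches, size, size)
```

Each patch then gets the binary *shadow_train* / *shadow_test* label defined above.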
189
+ We compare the performance of our single-sample method to the shadow model method in Tab. [\[tab:shadow_model_bdd\]](#tab:shadow_model_bdd){reference-type="ref" reference="tab:shadow_model_bdd"}. For fairness, we compare both the ROCAUC over the classifier's confidence and the classification accuracy. It can be seen that in both cases, and for either in-distribution or out-of-distribution auxiliary data, the shadow model approach is inferior to our method for image translation models. We discuss the case of semantic segmentation in Sec. [4.2.2](#sec:single_vs_multi_diff_score){reference-type="ref" reference="sec:single_vs_multi_diff_score"}.
190
+
191
+ As mentioned in Sec. [5](#sec:discussion){reference-type="ref" reference="sec:discussion"}, memorization is the main reason for the success of our method. Fig. [11](#fig:overfitting_effect){reference-type="ref" reference="fig:overfitting_effect"} shows the accuracy of our method as a function of the number of epochs used for training the victim model, clearly suggesting that memorization is indeed the vulnerability.
192
+
193
+ <figure id="fig:overfitting_effect" data-latex-placement="h">
194
+ <div class="center">
195
+
196
+ </div>
197
+ <figcaption>Effect of memorization on the attack success rate.</figcaption>
198
+ </figure>
199
+
200
+ In Sec. [5](#sec:discussion){reference-type="ref" reference="sec:discussion"}, we discuss the Gauss defense, among other common defenses, against our attack. We evaluated our attack accuracy as a function of the noise standard deviation. Fig. [12](#fig:rest_gauss_defense){reference-type="ref" reference="fig:rest_gauss_defense"} shows that a considerable amount of noise, which corrupts the generated output, is required to significantly affect our attack success, which even then remains much better than random guessing ($50\%$). Results for Pix2PixHD, UperNet and HRNetV2 are presented in Fig. [5](#fig:gauss_defense){reference-type="ref" reference="fig:gauss_defense"}.
201
+
202
+ <figure id="fig:rest_gauss_defense" data-latex-placement="h">
203
+ <div class="center">
204
+
205
+ </div>
206
+ <figcaption>Effect of the Gauss defense on the attack success rate. Even with large amounts of added noise, our attack still performs much better than random guessing.</figcaption>
207
+ </figure>
208
+
209
+ <figure id="fig:imagenet_samples" data-latex-placement="h">
210
+ <div class="center">
211
+ <p><img src="figures/imagenet/easy_train_0.jpg" style="width:15.0%;height:18.0%" alt="image" /> <img src="figures/imagenet/easy_train_1.jpg" style="width:15.0%;height:18.0%" alt="image" /> <img src="figures/imagenet/easy_train_4.jpg" style="width:15.0%;height:18.0%" alt="image" /> <img src="figures/imagenet/easy_train_5.jpg" style="width:15.0%;height:18.0%" alt="image" /> <img src="figures/imagenet/easy_train_8.jpg" style="width:15.0%;height:18.0%" alt="image" /><br />
212
+ <img src="figures/imagenet/easy_test_0.jpg" style="width:15.0%;height:18.0%" alt="image" /> <img src="figures/imagenet/easy_test_1.jpg" style="width:15.0%;height:18.0%" alt="image" /> <img src="figures/imagenet/easy_test_3.jpg" style="width:15.0%;height:18.0%" alt="image" /> <img src="figures/imagenet/easy_test_4.jpg" style="width:15.0%;height:18.0%" alt="image" /> <img src="figures/imagenet/easy_test_6.jpg" style="width:15.0%;height:18.0%" alt="image" /><br />
213
+ <img src="figures/imagenet/hard_train_1.jpg" style="width:15.0%;height:18.0%" alt="image" /> <img src="figures/imagenet/hard_train_2.jpg" style="width:15.0%;height:18.0%" alt="image" /> <img src="figures/imagenet/hard_train_3.jpg" style="width:15.0%;height:18.0%" alt="image" /> <img src="figures/imagenet/hard_train_4.jpg" style="width:15.0%;height:18.0%" alt="image" /> <img src="figures/imagenet/hard_train_5.jpg" style="width:15.0%;height:18.0%" alt="image" /><br />
214
+ <img src="figures/imagenet/hard_test_0.jpg" style="width:15.0%;height:18.0%" alt="image" /> <img src="figures/imagenet/hard_test_1.jpg" style="width:15.0%;height:18.0%" alt="image" /> <img src="figures/imagenet/hard_test_4.jpg" style="width:15.0%;height:18.0%" alt="image" /> <img src="figures/imagenet/hard_test_8.jpg" style="width:15.0%;height:18.0%" alt="image" /> <img src="figures/imagenet/hard_test_9.jpg" style="width:15.0%;height:18.0%" alt="image" /><br />
215
+ </p>
216
+ </div>
217
+ <figcaption>Examples of images from the ImageNet dataset that received the lowest and highest predictability errors. First row - lowest scored train images. Second row - lowest scored test images. Third row - highest scored train images. Last row - highest scored test images. As can be seen, the predictability error is effective even on images that were used for training the feature extractor. </figcaption>
218
+ </figure>
219
+
220
+ Our predictability error relies on learning a mapping from feature vectors to their corresponding pixel values. We use a pre-trained Wide-ResNet$50{\times}2$ [@zagoruyko2016wide], which is trained on the ImageNet dataset. We do not make any assumptions regarding an overlap between the pre-trained model's training data (i.e.  ImageNet) and the data used in the attack. In the scenario in which such an overlap exists, the concern is that the predictability error would lose its validity.
221
+
222
+ To verify this, we computed the predictability error of a random subset of $1$K train images and $1$K test images from the ImageNet dataset. As no input-output pairs exist, we trained the linear predictor to predict pixel values from deep features of the same image. We do not observe any significant difference between the two - both share similar mean and std values: ($0.0549$, $0.018$) for the train images and ($0.0556$, $0.0191$) for the test images. A ROCAUC score of $51\%$ further demonstrates that there is no clear difference between the distributions of the predictability error on seen and unseen images.
223
+
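The ROCAUC here can be computed from the two sets of predictability errors with the rank-based (Mann-Whitney) formulation, sketched below on synthetic scores:

```python
import numpy as np

def roc_auc(pos_scores, neg_scores):
    """AUC = P(random positive score > random negative score), computed
    with the Mann-Whitney U formulation; ties count as 0.5."""
    pos = np.asarray(pos_scores, dtype=float)
    neg = np.asarray(neg_scores, dtype=float)
    # number of (pos, neg) pairs where pos wins, plus half the ties
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (pos.size * neg.size)
```

A value near $0.5$, as observed above, indicates that the two score distributions are indistinguishable.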
224
+ Fig. [13](#fig:imagenet_samples){reference-type="ref" reference="fig:imagenet_samples"} further demonstrates this. The first row presents the train images that received the lowest scores, i.e. marked as easy images, and the second row presents the test images with the lowest scores. Both correspond to \"plain\" images, regardless of whether they are known (train) or unknown (test). The same applies to the difficult images. The third row presents the highest scored train images and the last row presents the highest scored test images. Both contain complex patterns and high variance. This demonstrates that the predictability error is not affected by prior knowledge of the image, and only measures the variance and complexity of the image.
225
+
226
+ [^1]: Our source code is available at GitHub: https://bit.ly/3k0UE6P\
2105.14573/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2021-10-30T11:42:35.080Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.54 Safari/537.36 Edg/95.0.1020.30" etag="Wiyx_-1XOfbEk2Zy-obY" version="15.6.2" type="github"><diagram id="YqkQWoA2Uyh0TeYYoeie" name="Page-1">7V3db6O4Fv9rKu0+FPkLA4/Tzs7sw11ppdHVzjzShCbskpAldJrev/6aYCdgO8EhxpDORKoaDBhyzs/H59O+w4+r3eci3iz/yOdJdofAfHeHP94hhABF7F/V8la3QIh4y6JI53UbODZ8Sf+X8AtF60s6T7a8rW4q8zwr0027cZav18msbLXFRZG/ti97zrN5q2ETL5LWa1QNX2ZxliiX/ZXOy2XdijFoXP57ki6W/NGEhvyWVSyu5g3bZTzPXxsPw7/d4cciz8v622r3mGQV+dqE+XTi7OHNimRdmtzw5d/V7uHvD7vH8Dfy37//+s/9/eenewjDup/vcfbCfzN/3fJNEOF1mZbJl008q45fGafv8MOyXGXsCLKv8XZTk/453SXsYQ/qm/GX/Z4UZbJrNPE3/Zzkq6Qs3tgl/GwYBZ5f38Shg8NQtLweORFwPiwbPBBtMef94tD7kTrsCyfQJcTqJlWynn+oYMeOZlm83aazNq0YRYq3r82Db+wAsF/GDz9WBACHozd+dJKk2/ylmCVnXjrkRCvjYpGUZy70o/rCZL5IzrKoQX5fQ37RViRZXKbf2wNJxxP+hD/zlP22AwIgokiGAPLbvdQ/nt/YhL7cF4ZyXyiQ+qrpo/S1B8rhx/fHDrKAnV1afm18/yaQwr4fYVMdvDUx9LV58K0JvRNY25ZF/k/ymGd5sX8xHO0/V6EQUkMUQjgpFPpIBSHwesLQD1QUKp0NjENsD4fgchzC0XGITXEIJoVDqs6H/XFIsdwZilzjkAw1l3ah6oDees41wu8VeEM3ijebco9OQO7RbrwVy3z19MJe56FD4X1Os6whlh4fAfuw9iIvGc3zNWu+x6oEW+frxI527AfKAA6IRjsW+lITILIOZU07NjAkbojCEZTxD8DYFI7eFYWpL2PYH53CQgqfnZayLN1sk24SD2EX+0QRpUhHNayZGvBgdrGJYTwm1ag/RaqZmIS9qSaNbwD4+LYiG6lqmI9OTRPDZkwMBuCkO2M0mpko4dMatwSPP279iVNNnSOmQDUDBXxUqpFIMz690YdoYEC2Ye1kYyt5EH+Nqfda4Gsi9vPZMXix31CFZujafjaJExkDsRHvuASK5o5vF4BD0wKcEi0hpL/DRgmXjAA4A2t3yIjJZRA9Cbhbw1GlcpGIUgADGARRiCQhFnkoxJi1h8yEEkLhYnyp3kUSOcYXMjH1jQXawHGNbhThaaEotCiNAqygxXm4Apm4OAZBy+UybNz47bTiGEQB4jXxWwWI7uMYAndWArgepUETVh7wyfnpkR38mRQp+xVJcb2uNbGpz2rMS9HHGu5tV1ixEey/ZePRONg/raSTQHF7XRV8VWKF7oFoI9p/y9lPxkbltNQ4NWHpCi+GRrw6NyqFO8+9RBxE6Z+Wz0vjrb/CBaGESwh0jhYTn3UHWk7zfbuMN1UncX37aRxMhL2Bmo/bn72RatOpnQ3NXgPfelmk8XqRGcQkmokT9GTixHO+Llth2SgiH3XhWt5+EhQXBNGI75FQ0gD4PNNAjvDltPIs8GmQXBXWQAbOvfdAehR5YdAmfaArY3BKfGwwCb4H4kPskagtY4AuT8Ep8X2d25FmJadRiwv035dcnLjf7mujPrALEN7sjifZt0X1P1k9JfN5ul6I7tjr1T3W51UWJ7vy3FwlONfgEG+Ks3RR8X
vGmFO5AR4qpqSzOPvAT6zS+fwkdor8ZT2vorbnvQdXjTLsq2wmDA+6XLKhQre+DZfhWEbP4/5zldFDTY0eOi2XYaRzGfbTciDQeQydKjk+sodC5wUfNlCIDFE4tfI36GsqPvrCUKkeQZFjGI7vinRS70FNbfepwQ1YlHpwfKlnoOIap8qbpcSfSqi3oOCwaSRUCDpyprxvYDzfEoVVlzuAI1PYZkHNCQqLNQ2MtH/rNEdERrWw10ajuUnS0ZhZqhAoftGDBTNWjirVmbSTIhpUy/9HJ9qgNTPSsKX0AX36ZE1Uaso/RiamiZEzLgJ1i1mMnZNPp14zoxm4BI89cKdeNKOZIsYn2tRrZiJNygXxNEq2W7JZCECOlsBjw3nDx1q3C5EzajLW9OkheLnzRoVm6NaapjaLkCZc+2EOt2lF5pklpcm86As3JefHOdxslBrdQOXH1FBUKVvN0g9JfoVecH3lh87PKIx+V/CyUVjkKsrWDSI0MRCphY+9RRFU6x4dhy0CE9fGVMs+XMZwJxbOiBQU9o5mRAoIXQczAouJBGOXfExtzrMa9lK0sMNrukIKsiiubtFeRKaz5rRSTiBSC66viL5qCj7cwtDiIqM3mflkbEdOTHlT05WucFuoktWxHRmMtsboIIr+xFxcGtd8f5+DungFdIyVMXKih8t9hmwoq0n/oyc/B2NUXAxIZRBq8vvVFAanNA4NJt+bonFA1TR+NZzllsaDhrOU4D0A5+eOC6iJsLKurSjOHSvIFdoIcl3j7sUgbE7uwIuCoMvcqY5kq3geb5f7ugboRnsMJ6YRAIxVjaBvgSCTrXJv7ld9CS0EvlqoUBTDw7qsDTwiTCQ80l54PAnBm0OW1qncG1lat7JrZBnEuGYvxfcDcnrUGXelak6Fu0DlrlrqfUX8kqiLZ5zgLqNw/Na4bFNdsD336jrDhbccEVN3axc/uiBWq4xwdoDCsVJQgKLRZFBuCHTlhrN883ZblYYNVdZSBnMkfHRn0qwQ8XCgjp3Bqg5FgOynYKmMYCKbZxpZYD5vqINdXQ3FlmSBWE6OdyRZInS9zmOg5gwcXTiJTXv75U0uhvCu9suLDNwYvapfKPWQoR/DbrULke0ZhFV3nNv9ZIaqkhuNxkGo+ON8qPF6uiWzzb2npkBm9lHITHXLmrgl81D1cqOR2Zd9dSikY1MZAgP9oJ9/GWJ/DAdzFXuXlw8Zf50YNl+8MzpDjXAWAaHxqGwgNG6LyiT0xCJz1UeOAepmQ7ckN9lj7bbkdMSmQ7l4nAkQpErqg03nRlZrN2a7cLEpX+f9uauCRqR6gXv2B+vD23IHWeA80WzyopugicbqG8wBBLX7ylll+o/JbCwFoVHkUXUZRce81lnJTV4Lrljl/8NxwMeritjrp+2m8RgFE3XzUyG3/DDgwViumUdEa5cSlyvUQZPdxX4QZ3EEJV++nIBmXC/g+x6JGp92tzIzLfmLIxSceypPmBnUcQxNdtQRix4/Z8mOI6kTVFOHDvGZYdOeG2Dk4Z4VcsSPPBCB4we35YaPD9OOq524BB9/RMYGzUElx5WJ19fDvudyeGq8Yj90nI8ATbZoODUV8Dm3Kx/rsACfPmTTSpRhvX9Kq5/QyKcWi1UR3UTTbdF179Iwsd0XKFCVhqgf3CCAkucF0EHmIf+YScOfBKOoHbZU7gmFg1zc4RMHsxUxCH5cpPtIiV5xUX7ZK9Qf62yuOT8KtHi2VmfQNQo6w5rRtAYBQ66y9iHoX3wuucNEAY71vCBfgjQvOh8Y0gaBpqEgra++AkYAvlKwdxcnTyydUQfpoDekkSQ8pZ5sGRi0bR8heF6oyzuHIn488Ai4wqA9rcVcsnTwdVpMdyigU4sRW2tMBOy63UdkqXtBaaskVsNhwE6F41w8JzgPdkoknadjcLBRe/aGoUaH7aTjAVSexuo7pkpP96Ax2LWUTGrUVEj3Qno0D4mM+95ZjEzzP2Skqr5I62mMJJSfBintMA
jYOwLlrsNPHniMGKwO5G6M9MD6dBBMvKjh4QgldvbV4aFsydJh5gAI5Fem5PybgUhy1fEVxM/cIE2M2IWVYGEHIqyND203WVreVmjHrGTPPM4TIuIBSXdAnMmjBQm1W8DUXNpu4rWW4bx6pGJ2sXj6hb0bezgQ/37dUwvsUfEcr9Lsrb50mWTfk4oXjfPSvlWNE/VDqzPrvFjFWePcKydMdZJw6w9kSckYfs/eeVZtc6XcWUHqnqOjOncAiDiXMs6vea9AvMv+TFnE6+0z60v0ukdaxbS8mLefeLjxKZ79s9jD6V6iFqoUrYpQe3/d/osvaDZP2TCJOb3SdZaKJz1neVxKj5dHGGtnZEs2jUFWc/D8IOvIoTm1N5uF8RCEkcdEPyQwIBAEUBKQEa3OsvYopBSIda6amUkgUkcKRoONFJ0T/OKRchwsx/GCgH5InAm0/8LIsU+vufMf42yzjH9VMi/azO8u/+JlGJoH3naEviXGB4RzqObw63bSxBrQDijedY7sn6D9CdpjkWpbIxGuuzExq/NUXw8FG/lNArgdaW4/sWckMKXshYPDrqkME98LdRnIw6FP5yX+ib53h75IDvch39NsVXMagLJXzACC7LDIK8YebXZGguUf+Typrvg/</diagram></mxfile>
2105.14573/main_diagram/main_diagram.pdf ADDED
Binary file (33.9 kB).
 
2105.14573/paper_text/intro_method.md ADDED
@@ -0,0 +1,125 @@
+ # Introduction
+
+ Understanding the loss landscape of DNNs is essential for a theory of deep learning. An important problem is to quantify exactly what the loss landscape looks like [@weinan2020towards]. This problem is difficult because the loss landscape is so complicated that it can take almost any pattern [@skorokhodov2019loss]. Moreover, its high dimensionality and its dependence on the data, model, and loss make it very difficult to obtain a general understanding through empirical study. Therefore, although it has been studied extensively over the years, providing a clear picture of the organization of its critical points and their properties remains an open problem.
+
+ In this work, we take a step towards this goal by proposing a very general embedding operation of network parameters from narrow to wide DNNs, through which we prove an embedding principle for fully-connected DNNs, stated *intuitively* as follows:
+
+ ***Embedding principle**: the loss landscape of any network "contains" all critical points of all narrower networks.*
+
+ A "narrower network" means a DNN of the same depth whose width at each layer is no larger than that of the target DNN. The embedding principle slightly abuses the notion of "contain", since the parameter spaces of DNNs of different widths are different. However, this inclusion relation is reasonable in the sense that, by our embedding operation, any critical point of any narrower network can be embedded into a critical point of the target network while preserving its output function. Because of this criticality-preserving property, we call this embedding operation the critical embedding.
+
+ We call our result a "principle" because the embedding principle is a very general property of the loss landscape of DNNs: it is independent of the training data and the choice of loss function, and it is intrinsic to the layer-wise architecture of DNNs. In addition, the embedding principle is closely related to the training of DNNs. For example, as shown in Fig. [1](#fig:syntraining){reference-type="ref" reference="fig:syntraining"}(a), the training of a width-$500$ two-layer tanh NN stagnates around the blue dot, presumably very close to a saddle point, where the loss decreases extremely slowly. As shown in Fig. [1](#fig:syntraining){reference-type="ref" reference="fig:syntraining"}(b), we find that the DNN output at this blue point (red solid) is very close to the output of the global minimum (black dashed) of the width-$1$ NN, indicating that the two underlying critical points of DNNs with different widths share the same output function, in accordance with the embedding principle. Importantly, this example shows that the training of a wide DNN can indeed encounter the critical points inherited from a narrow DNN, as unraveled by the embedding principle. Moreover, it demonstrates the potential of a transition from a local/global minimum of a narrow NN to a saddle point of a wide NN, which may be the reason underlying the easy optimization of wide NNs.
+
+ The embedding principle suggests an underlying mechanism for why heavily overparameterized DNNs often generalize well [@breiman1995reflections; @zhang2016understanding]. Roughly, an overparameterized DNN has a large capacity, which seems to contradict conventional learning theory, according to which learning with a model of large capacity easily leads to overfitting. The embedding principle shows that an optimum of a wide network may intrinsically be embedded from an optimum of a much narrower network; thus, its effective capacity is much smaller. For example, as illustrated in Fig. [1](#fig:syntraining){reference-type="ref" reference="fig:syntraining"}, the training of a heavily overparameterized width-$500$ NN (vs. $50$ training data points) with small initialization first stagnates around a saddle presumably embedded from the width-$1$ NN and later converges to a global minimum presumably embedded from the width-$3$ NN, which clearly does not overfit. This implicit regularization effect unraveled by the embedding principle is consistent with previous works, such as the low-complexity bias [@arpit2017closer; @kalimeris2019sgd; @jin2020quantifying], the low-frequency bias [@xu_training_2018; @xu2019frequency; @rahaman2018spectral], and the condensation phenomenon of network weights [@luo2021phase; @chizat_global_2018; @ma2020quenching].
+
+ <figure id="fig:syntraining" data-latex-placement="h">
+
+ <figcaption>(a) The training loss of a two-layer tanh neural network with <span class="math inline">500</span> hidden neurons. (b) (c) Red solid: the DNN output at the training step indicated by (b) the blue dot or (c) the orange dot in (a); Black dashed: the output of the global minimum of the (b) width-<span class="math inline">1</span> DNN or (c) width-<span class="math inline">3</span> DNN, respectively; Blue dots: training data. <span id="fig:syntraining" data-label="fig:syntraining"></span></figcaption>
+ </figure>
+
+ # Method
+
+ Consider $L$-layer ($L\geq 2$) fully-connected DNNs with a general differentiable activation function. We regard the input as the $0$-th layer and the output as the $L$-th layer. Let $m_l$ be the number of neurons in the $l$-th layer; in particular, $m_0=d$ and $m_L=d'$. For any $i,k\in \sN$ with $i<k$, we denote $[i:k]=\{i,i+1,\ldots,k\}$; in particular, $[k]:=\{1,2,\ldots,k\}$. Given weights $\mW^{[l]}\in \sR^{m_l\times m_{l-1}}$ and biases $\vb^{[l]}\in\sR^{m_{l}}$ for $l\in[L]$, we define the collection of parameters $\vtheta$ as a $2L$-tuple (an ordered list of $2L$ elements) whose elements are matrices or vectors: $$\begin{equation}
+ \vtheta=\Big(\vtheta|_1,\cdots,\vtheta|_L\Big)=\Big(\mW^{[1]},\vb^{[1]},\ldots,\mW^{[L]},\vb^{[L]}\Big),
+ \end{equation}$$ where the $l$-th layer parameters of $\vtheta$ are the ordered pair $\vtheta|_{l}=\Big(\mW^{[l]},\vb^{[l]}\Big),\quad l\in[L]$. We may abuse notation and identify $\vtheta$ with its vectorization $\mathrm{vec}(\vtheta)\in \sR^M$, where $M=\sum_{l=0}^{L-1}(m_l+1) m_{l+1}$.
+
+ Given $\vtheta\in \sR^M$, the neural network function $\vf_{\vtheta}(\cdot)$ is defined recursively. First, we write $\vf^{[0]}_{\vtheta}(\vx)=\vx$ for all $\vx\in\sR^d$. Then, for $l\in[L-1]$, $\vf^{[l]}_{\vtheta}$ is defined recursively as $\vf^{[l]}_{\vtheta}(\vx)=\sigma (\mW^{[l]} \vf^{[l-1]}_{\vtheta}(\vx)+\vb^{[l]})$. Finally, we denote $$\begin{equation}
+ \vf_{\vtheta}(\vx)=\vf(\vx,\vtheta)=\vf^{[L]}_{\vtheta}(\vx)=\mW^{[L]} \vf^{[L-1]}_{\vtheta}(\vx)+\vb^{[L]}.
+ \end{equation}$$ For notational simplicity, we may drop the subscript $\vtheta$ in $\vf^{[l]}_{\vtheta}$, $l\in[0:L]$.
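The recursion above is easy to state in code. The following is a minimal NumPy sketch (an illustration, not the authors' implementation; the widths, the tanh activation, and the random parameters are arbitrary choices):

```python
import numpy as np

def forward(theta, x, sigma=np.tanh):
    """Return [f^[0], ..., f^[L]] for theta = [(W^[1], b^[1]), ..., (W^[L], b^[L])].

    Hidden layers apply the activation sigma; the last layer is affine,
    matching f_theta(x) = W^[L] f^[L-1](x) + b^[L].
    """
    fs = [x]                                  # f^[0](x) = x
    for l, (W, b) in enumerate(theta, start=1):
        pre = W @ fs[-1] + b
        fs.append(pre if l == len(theta) else sigma(pre))
    return fs

# A width-(2, 3, 1) network, i.e. L = 2, m_0 = 2, m_1 = 3, m_2 = 1.
rng = np.random.default_rng(0)
theta = [(rng.standard_normal((3, 2)), rng.standard_normal(3)),
         (rng.standard_normal((1, 3)), rng.standard_normal(1))]
fs = forward(theta, np.array([0.5, -1.0]))
```

Keeping all intermediate layer outputs, rather than only the final one, is convenient here because the feature vectors $\vf^{[l]}_{\vtheta}$ are used explicitly in what follows.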
+
+ We introduce the following notions for convenience of presentation.
+
+ ::: definition
+ **Definition 4** (**Wider/narrower DNN**). *We write $\mathrm{NN}(\{m_l\}_{l=0}^{L})$ for a fully-connected neural network with widths $(m_0,\ldots,m_L)$. Given two $L$-layer ($L\geq 2$) fully-connected neural networks $\mathrm{NN}(\{m_l\}_{l=0}^{L})$ and $\mathrm{NN}'(\{m'_l\}_{l=0}^{L})$, if $m'_0=m_0$, $m'_L=m_L$, $m'_l\geq m_l$ for any $l\in[L-1]$, and $K=\sum_{l=1}^{L-1}(m'_l-m_l)\in\sN_+$, then we say that $\mathrm{NN}'(\{m'_l\}_{l=0}^{L})$ is $K$-neuron wider than $\mathrm{NN}(\{m_l\}_{l=0}^{L})$, and that $\mathrm{NN}(\{m_l\}_{l=0}^{L})$ is $K$-neuron narrower than $\mathrm{NN}'(\{m'_l\}_{l=0}^{L})$.*
+ :::
+
+ The training data set is denoted by $S=\{(\vx_i,\vy_i)\}_{i=1}^n$, where $\vx_i\in\sR^d$, $\vy_i\in \sR^{d'}$. For simplicity, we assume an unknown target function $\vy$ satisfying $\vy(\vx_i)=\vy_i$ for $i\in[n]$. The empirical risk reads $$\begin{equation}
+ \RS(\vtheta)=\frac{1}{n}\sum_{i=1}^n\ell(\vf(\vx_i,\vtheta),\vy(\vx_i))=\Exp_S\ell(\vf(\vx,\vtheta),\vy),
+ \end{equation}$$ where the expectation $\Exp_S h(\vx):=\frac{1}{n}\sum_{i=1}^n h(\vx_i)$ for any function $h:\sR^d\to \sR$, the loss function $\ell(\cdot,\cdot)$ is differentiable, and the derivative of $\ell$ with respect to its first argument is denoted by $\nabla\ell(\vy,\vy^*)$. We always take derivatives/gradients of $\ell$ in its first argument with respect to any parameter. We consider the gradient flow of $\RS$ as the training dynamics, i.e., $\D \vtheta/\D t = -\nabla_{\vtheta} \RS(\vtheta)$ with $\vtheta(0) = \vtheta_0$.
+
+ We define the error vectors $\vz_{\vtheta}^{[l]}=\nabla_{\vf^{[l]}}\ell$ for $l\in[L]$ and the feature gradients $\vg_{\vtheta}^{[L]}=\mathbf{1}$ and $\vg^{[l]}_{\vtheta} =\sigma^{(1)}\left(\mW^{[l]} \vf^{[l-1]}_{\vtheta}+\vb^{[l]}\right)$ for $l\in[L-1]$, where $\sigma^{(1)}$ is the first derivative of $\sigma$. We call $\vf^{[l]}_{\vtheta}$, $l\in[L]$, feature vectors. The collections of feature vectors, feature gradients, and error vectors are $\vF_{\vtheta}= \{\vf^{[l]}_{\vtheta}\}_{l=1}^L$, $\vG_{\vtheta}= \{\vg^{[l]}_{\vtheta}\}_{l=1}^L$, and $\vZ_{\vtheta}= \{\vz^{[l]}_{\vtheta}\}_{l=1}^L$. Using backpropagation, we can calculate the gradients as follows: $$\begin{align*}
+ \vz_{\vtheta}^{[L]}
+ &=\nabla\ell,\quad
+ \vz_{\vtheta}^{[l]}
+ = (\mW^{[l+1]})^\T\left(\vz_{\vtheta}^{[l+1]}\circ\vg_{\vtheta}^{[l+1]}\right),\quad l\in[L-1],\\
+ \nabla_{\mW^{[l]}}\ell
+ &= \left(\vz_{\vtheta}^{[l]}\circ\vg_{\vtheta}^{[l]}\right)(\vf_{\vtheta}^{[l-1]})^\T,\quad
+ \nabla_{\vb^{[l]}}\ell
+ = \vz_{\vtheta}^{[l]}\circ\vg_{\vtheta}^{[l]},\quad l\in[L].
+ \end{align*}$$ Here we use $\circ$ for the Hadamard product of two matrices of the same dimensions.
+
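As a sanity check on these recursions, the sketch below implements them for the squared loss $\ell(\vf,\vy)=\frac{1}{2}\|\vf-\vy\|^2$ (so $\nabla\ell=\vf-\vy$) with tanh activation; this is hypothetical illustrative code, not the authors', and the widths and data are arbitrary:

```python
import numpy as np

def backprop(theta, x, y):
    """Gradients of 0.5 * ||f(x) - y||^2 via the error-vector recursion:
    z^[L] = f - y, g^[L] = 1, z^[l-1] = (W^[l])^T (z^[l] o g^[l]),
    grad_W^[l] = (z^[l] o g^[l]) f^[l-1]^T, grad_b^[l] = z^[l] o g^[l]."""
    L = len(theta)
    fs, gs = [x], [None]
    for l, (W, b) in enumerate(theta, start=1):
        pre = W @ fs[-1] + b
        fs.append(pre if l == L else np.tanh(pre))
        gs.append(np.ones_like(pre) if l == L else 1 - np.tanh(pre) ** 2)
    z = fs[L] - y                            # z^[L] = grad of the loss in f
    grads = [None] * L
    for l in range(L, 0, -1):
        zg = z * gs[l]                       # z^[l] o g^[l]
        grads[l - 1] = (np.outer(zg, fs[l - 1]), zg)
        if l > 1:
            z = theta[l - 1][0].T @ zg       # z^[l-1] = (W^[l])^T (z^[l] o g^[l])
    return grads

def loss(theta, x, y):
    f = x
    for l, (W, b) in enumerate(theta, start=1):
        f = (W @ f + b) if l == len(theta) else np.tanh(W @ f + b)
    return 0.5 * np.sum((f - y) ** 2)

rng = np.random.default_rng(1)
theta = [(rng.standard_normal((3, 2)), rng.standard_normal(3)),
         (rng.standard_normal((1, 3)), rng.standard_normal(1))]
x, y = np.array([0.3, -0.7]), np.array([1.0])
grads = backprop(theta, x, y)
```

Comparing one entry of `grads` against a finite difference of `loss` confirms that the recursion matches the analytic gradient.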
+ We now introduce the one-step embedding for DNNs, which will lead us to general embeddings.
+
+ ::: definition
+ **Definition 5** (**one-step embedding**). *Given an $L$-layer ($L\geq 2$) fully-connected neural network with widths $(m_0,\ldots,m_{L})$ and network parameters $\vtheta=(\mW^{[1]},\vb^{[1]},\cdots,\mW^{[L]},\vb^{[L]})\in\sR^M$, for any $l\in[L-1]$ and any $s\in[m_l]$, we define the linear operators $\fT_{l,s}$ and $\fV_{l,s}$ acting on $\vtheta$ as follows $$\begin{align*}
+ \fT_{l,s}(\vtheta)|_k
+ &=\vtheta|_k,\quad k\neq l,l+1,\\
+ \fT_{l,s}(\vtheta)|_l
+ &= \left(\left[ {\begin{array}{cc}
+ \mW^{[l]} \\
+ \mW^{[l]}_{s,[1:m_{l-1}]} \\
+ \end{array} } \right],
+ \left[ {\begin{array}{cc}
+ \vb^{[l]} \\
+ \vb^{[l]}_s \\
+ \end{array} } \right]\right),\quad
+ \fT_{l,s}(\vtheta)|_{l+1}
+ = \left(
+ \left[\mW^{[l+1]},\mzero_{m_{l+1}\times 1}\right],
+ \vb^{[l+1]}\right),\\
+ \fV_{l,s}(\vtheta)|_k
+ &=\left(\mzero_{m_{k}\times m_{k-1}},\mzero_{m_{k}\times 1}\right),\quad k\neq l,l+1,\\
+ \fV_{l,s}(\vtheta)|_l
+ &=\left(\mzero_{(m_{l}+1)\times m_{l-1}},\mzero_{(m_{l}+1)\times 1}\right),\\
+ \fV_{l,s}(\vtheta)|_{l+1}
+ &= \left(
+ \left[\mzero_{m_{l+1}\times (s-1)},-\mW^{[l+1]}_{[1:m_{l+1}],s},\mzero_{m_{l+1}\times (m_{l}-s)},\mW^{[l+1]}_{[1:m_{l+1}],s}\right],\mzero_{m_{l+1}\times 1}\right).
+ \end{align*}$$ Then, for any $\vtheta\in\sR^M$, the one-step embedding operator $\fT_{l,s}^{\alpha}$ is defined as $$\begin{equation*}
+ \fT_{l,s}^{\alpha}(\vtheta)=(\fT_{l,s}+\alpha\fV_{l,s})(\vtheta).
+ \end{equation*}$$ Note that the resulting parameter $\fT_{l,s}^{\alpha}(\vtheta)$ corresponds to an $L$-layer fully-connected neural network with widths $(m_0,\ldots,m_{l-1},m_l+1,m_{l+1},\ldots,m_{L})$.*
+ :::
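Concretely, the one-step embedding duplicates hidden neuron $s$ of layer $l$ (copying its row of $\mW^{[l]}$ and $\vb^{[l]}$) and splits its outgoing weight column into fractions $1-\alpha$ and $\alpha$, so the network output is unchanged. A minimal NumPy sketch (illustrative code, not the authors'; tanh activation and the example widths are assumptions):

```python
import numpy as np

def forward(theta, x):
    # L-layer fully-connected tanh network; no activation on the output layer.
    f = x
    for l, (W, b) in enumerate(theta, start=1):
        pre = W @ f + b
        f = pre if l == len(theta) else np.tanh(pre)
    return f

def one_step_embed(theta, l, s, alpha):
    """T^alpha_{l,s}: duplicate neuron s of hidden layer l (both 1-indexed)
    and split its outgoing weights as (1 - alpha) and alpha."""
    theta = [(W.copy(), b.copy()) for W, b in theta]
    W, b = theta[l - 1]
    theta[l - 1] = (np.vstack([W, W[s - 1:s, :]]),     # copy row s of W^[l]
                    np.concatenate([b, b[s - 1:s]]))   # and entry s of b^[l]
    Wn, bn = theta[l]
    col = Wn[:, s - 1].copy()
    Wn = np.hstack([Wn, (alpha * col)[:, None]])       # new column: alpha * W^[l+1]_{:,s}
    Wn[:, s - 1] = (1 - alpha) * col                   # old column: (1 - alpha) * W^[l+1]_{:,s}
    theta[l] = (Wn, bn)
    return theta

# Embed a width-(2, 3, 1) network into a width-(2, 4, 1) one.
rng = np.random.default_rng(0)
theta = [(rng.standard_normal((3, 2)), rng.standard_normal(3)),
         (rng.standard_normal((1, 3)), rng.standard_normal(1))]
x = np.array([0.5, -1.0])
theta_wide = one_step_embed(theta, l=1, s=2, alpha=0.7)
```

Evaluating `forward` on `theta` and `theta_wide` at the same input gives identical outputs, for any choice of `alpha`.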
+
+ An illustration of $\fT_{l,s}$, $\fV_{l,s}$, and $\fT_{l,s}^{\alpha}$ can be found in Fig. S1 in the Appendix.
+
+ ::: {#lem:repinv .lemma}
+ **Lemma 1**. *Given an $L$-layer ($L\geq 2$) fully-connected neural network with widths $(m_0,\ldots,m_{L})$, for any network parameters $\vtheta=(\mW^{[1]},\vb^{[1]},\cdots,\mW^{[L]},\vb^{[L]})$ and for any $l\in[L-1]$, $s\in[m_l]$, we have the following expressions for $\vtheta':=\fT_{l,s}^{\alpha}(\vtheta)$:\
+ (i) feature vectors in $\vF_{\vtheta'}$: $\vf^{[l']}_{\vtheta'}=\vf^{[l']}_{\vtheta}$, $l'\neq l$, and $\vf^{[l]}_{\vtheta'}=\left[(\vf_{\vtheta}^{[l]})^\T,(\vf_{\vtheta}^{[l]})_s\right]^\T$;*
+
+ *(ii) feature gradients in $\vG_{\vtheta'}$: $\vg^{[l']}_{\vtheta'}=\vg^{[l']}_{\vtheta}$, $l'\neq l$, and $\vg^{[l]}_{\vtheta'}=\left[(\vg^{[l]}_{\vtheta})^\T,(\vg^{[l]}_{\vtheta})_s\right]^\T$;*
+
+ *(iii) error vectors in $\vZ_{\vtheta'}$: $\vz^{[l']}_{\vtheta'}=\vz^{[l']}_{\vtheta}$, $l'\neq l$,\
+ and $\vz^{[l]}_{\vtheta'}=\left[ (\vz_{\vtheta}^{[l]})^\T_{[1:s-1]},(1-\alpha)(\vz_{\vtheta}^{[l]})_s, (\vz_{\vtheta}^{[l]})^\T_{[s+1:m_l]},\alpha (\vz_{\vtheta}^{[l]})_s \right]^\T$.*
+ :::
+
+ An illustration of $\vF_{\vtheta}$ and $\vZ_{\vtheta}$ can be found in Fig. S2 in the Appendix.
+
+ ::: {#prop:one-step-embed .prop}
+ **Proposition 1** (**one-step embedding preserves network properties**). *Given an $L$-layer ($L\geq 2$) fully-connected neural network with widths $(m_0,\ldots,m_{L})$, for any network parameters $\vtheta=(\mW^{[1]},\vb^{[1]},\cdots,\mW^{[L]},\vb^{[L]})$ and for any $l\in[L-1]$, $s\in[m_l]$, the following network properties are preserved for $\vtheta'=\fT_{l,s}^{\alpha}(\vtheta)$:*
+
+ *(i) the output function is preserved: $f_{\vtheta'}(\vx)=f_{\vtheta}(\vx)$ for all $\vx$;*
+
+ *(ii) the empirical risk is preserved: $\RS(\vtheta')=\RS(\vtheta)$;*
+
+ *(iii) the sets of features are preserved: $\left\{\left(\vf^{[l]}_{\vtheta'}\right)_i\right\}_{i\in[m_{l}+1]}=\left\{\left(\vf^{[l]}_{\vtheta}\right)_i\right\}_{i\in[m_{l}]}$ and\
+ $\left\{\left(\vf^{[l']}_{\vtheta'}\right)_i\right\}_{i\in[m_{l'}]}=\left\{\left(\vf^{[l']}_{\vtheta}\right)_i\right\}_{i\in[m_{l'}]}$ for $l'\in[L]\backslash\{l\}$.*
+ :::
+
+ ::: {#lem:crit-prese .theorem}
+ **Theorem 1** (**criticality preserving**). *Given an $L$-layer ($L\geq 2$) fully-connected neural network with widths $(m_0,\ldots,m_{L})$, for any network parameters $\vtheta=(\mW^{[1]},\vb^{[1]},\cdots,\mW^{[L]},\vb^{[L]})$ and for any $l\in[L-1]$, $s\in[m_l]$, if $\nabla_{\vtheta}\RS(\vtheta)=\mzero$, then $\nabla_{\vtheta'}\RS(\vtheta')=\mzero$ for $\vtheta'=\fT_{l,s}^{\alpha}(\vtheta)$.*
+ :::
+
+ ::: {#lem:degen .lemma}
+ **Lemma 2** (**increment of the degree of degeneracy**). *Given an $L$-layer ($L\geq 2$) fully-connected neural network with widths $(m_0,\ldots,m_{L})$, if there exist $l\in[L-1]$, $s\in[m_l]$, and a $q$-dimensional manifold $\fM$ consisting of critical points of $\RS$ such that $\mW^{[l+1]}_{[1:m_{l+1}],s}\neq \mzero$ for any $\vtheta\in \fM$, then $\fM':=\{\fT_{l,s}^{\alpha}(\vtheta)|\vtheta\in\fM,\alpha\in \sR\}$ is a $(q+1)$-dimensional manifold consisting of critical points for the corresponding $L$-layer fully-connected neural network with widths $(m_0,\ldots,m_{l-1},m_l+1,m_{l+1},\ldots,m_{L})$.*
+ :::
+
+ ::: {#lem:multi-criti .theorem}
+ **Theorem 2** (**degeneracy of embedded critical points**). *Consider two $L$-layer ($L\geq 2$) fully-connected neural networks $\mathrm{NN}_A(\{m_l\}_{l=0}^{L})$ and $\mathrm{NN}_B(\{m'_l\}_{l=0}^{L})$, which is $K$-neuron wider than $\mathrm{NN}_A$. Suppose that a critical point $\vtheta_A=(\mW^{[1]},\vb^{[1]},\cdots,\mW^{[L]},\vb^{[L]})$ of $\mathrm{NN}_A$ satisfies $\mW^{[l]}\neq \mzero$ for each layer $l\in[L]$. Then the parameters $\vtheta_A$ of $\mathrm{NN}_A$ can be critically embedded into a $K$-dimensional critical affine subspace $\fM_B=\{\vtheta_B+\sum_{i=1}^{K}\alpha_i \vv_i|\alpha_i\in \sR \}$ of the loss landscape of $\mathrm{NN}_B$. Here $\vtheta_B=(\prod_{i=1}^{K} \fT_{l_i,s_i})(\vtheta_A)$ and $\vv_i=\fT_{l_K,s_K}\cdots \fV_{l_i,s_i} \cdots\fT_{l_1,s_1}\vtheta_A$.*
+ :::
+
+ Note that neuron-index permutation within the same layer is a trivial criticality-invariant transform. More discussion of it, specifically for NNs with homogeneous activation functions like ReLU, can be found in Section A.1 in the Appendix.
2107.06325/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
+ <mxfile host="app.diagrams.net" modified="2020-05-29T19:53:25.672Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.61 Safari/537.36" version="13.1.7" etag="OIYoR8xiMi-57aAf3IsN" type="device"><diagram id="G2JKxd4xbtjrREkr_zZP">7V3XmqNIsn6auTz94c0lRlhJeBDc4YT3IMzTH6jqals92zPTc3b2rFTVXSKBJDMyMjL+ICLyN5ipFr732/TSRHH5GwREy28w+xsEQShI7n+OkvW1hADx14Kkz6LXIvBzgZFt8cdC4GPplEXx8NWFY9OUY9Z+XRg2dR2H41dlft8389eX3Zvy66e2fhJ/V2CEfvl9qZNFY/qxFxD+uVyIsyR9ezKIfexw5b9d/LEnQ+pHzfxFEXz6DWb6phlfv1ULE5cH8d7o8nof94OznxrWx/X4MzdArzc8/HL62LeP7RrXt872zVRH8XE98BtMz2k2xkbrh8fZeR/evSwdq3I/AvevfTP6Y9bU+yF5XP6x+rgf4+WHTQQ/dXznmLip4rFf90s+3oAAyOstH5kFIz4Sc/5MevCt2vQLsiPExxH/ONrJp6o/E2T/8pEm79MHfoc+WDkend5PYcn40sfXknuzd+hL0mHd1Lyd+J/hhYup/QIQa5fPJ99q0aZ4eKHcx9r2lr1W+PVD9uIvn/xXxupXjA0EfwBhEiJQHET2/7GvRgqBwe9GCic+vF769v874wb89XFD/jVf71OvPb5m1ctspw9CZPscP/tBXKrNkH3k46AZx6baLyiPE7QfFskLmZmmbPqXquD7y+eLOqgyS457x+aguT+0r1Loni3H4NAvj6TeSoG3kv175I/+ziWvhxDX1slvEJPZtKLPgMwnDbV/roaVnqxk/3ZCjmOXodz9D23dwwe8fxHPp/Kk2brLxJAXO4sFLo7w2OKzB23q5YrcLhcvqVIpoE9a0oqFOXOeXqGyqMtykkhdJ1eNLVmG1ZacnuznLLylOExu+qA5cV5zOp0qU2q5hrt7PNMy86ozGxRT9hlS1Zy/E15dPaZhn6X0BSKdHLohPRj68F1V1DiCtxyG4Vq5FbZu2/fyGMkbdTqfzix+8PyGecVZPrUEe7IvklXJFGmjcQb4kd09kMWc5iLsFFRNvfiaNeFDeqiPaXG01Y8lH7mg0XTaTChBrX69KBpqFa2oSd1dBZwsb0YOF5Jo5XJQh6RBPS+DhTIEueAXcBUoRIj6ITrf6Gy86ik9z/biKM2pyHnHzxhJ26oLk1YMVYIFkoedtMQLpoZ7N33SirKAXQmJjX3D5rdosjgwyvcm55o1DRjCLHc4JFFWHhov67I2ZIWWqVn8gU7Xg6kheq5juXHEq6mcuOtpbmug1SpozC2aQu5Sye/sQI93lUESNbMKaq2S+iLqpDUfDWgHVHH8SUCj9gHJ/L2TtehWaWfZLzz1sdhB2vj7lKBTRGQuCxCJPhpaHQMzJxFqIM0QNKQn1Rto2Z6TdmLTINy5yMW1Z6WtSgJ0b2LgoVcG87sOXFSPEBPpEakl7sduQN730wLripweIoszx3a6P8rpOBtF76soolKctuGquYhHChCZYxgwhgoa0LH/uGxiKOxXK4V6PIS23BlCr83k2IDlKNHD2PDgtj2EtCLopEpOc7qmd+ByP1eQv+A9L4RxwCXjLk/pGM2i+8FCorALFe6hOUgv2CDd3bpUyzQDcMWqKsLtprV07ljKsN9DLyxyh8+y2iZOKfHIzVV4z1LGartrxV4JpDRUuoZe5zDi2SQe6HkfVWMdZMIuilwLG2NgBt5Dz8Y9DnaGgE3aTYYaQ1NG7VVucUsi93HiIAd50adJrIeLTj1UBfeCbV0huWgGrrjbijtFhIu
oGa3DJWTrwKQDuPUYGPc2oQmKuYISpPW5VLu4MHlzcF1oZsCSCYyqayu2Xce2iUkFRxnH4SYjZyI/cNxYddDUXBEt8RdJtTMmkKqHZ427lKbD3LuKdhLsnYTtGbTOMaqG3Foij4NP4LHXtZO93PIWPPOuqe6X4cr96pHw7DmQwdRI1/dIzEx01fVtmiOEr7uUTeI+2x50E04PjoJPVyxAjZEIQiSt66yu6qrZh4oLw4eYs8F1NFh/TbpA2sv0DRse5ws98NkuObhGg5WygbY55q9Yqz7cvUzlHX3vbDKdoXu8sffYtFeFUUFvPfhzBQbX83hXrjKkxci75Jc5kGqd6EJG3ZKicCZxLvNtW25v5mW522V5NZsVgFIk4sIue6wAUtSER7pOcZ3QG2MF6wMmYmOtphyOL6ZVlKtzFZscdHIhJBqiV9uOfDTMnYRlf8h2ItEeq3iPnoLc82Wr5SIS6PrROgaQc2KImGiIhQVq2dG1tCPssqTLsLhIh8W+7xeiep08QtMVZ4xhz9TF5aopNpQNmUme8nuHPUzivMTs3TL37prcVLPokrl5opnd5HbZKbS0vDsBqLHPCSFp6Z2rT3XZm5FQPhxkSFJijE5GjcXKrYfGa1Xm5wLknH3e0X1iXwqw6hJ73o/26glZO4OSZ8QsOCVXAE6brWvTBpuTwuD359Pl6SLfA2iZ477NdEZjvfHq1c44+hecovlN3xcuDpDvOXEJ5gHk6wfvBprC3438Eur3BcNZXCthm9j428FcAAXy0xBPgdt5InEPV1kYFBhvp12gdfjJA2s4WkSnh2JwnTtScxQh33mCxnVnhrVoU4RtxWHvJqJdDGv3bUxjdSUE9bIy4fm0ukRB1gty9YuS8xUQIB5Q7cMJiIDy7WDucOlHASpl6WBF4YQ5tYBXriuLKzhpD4+w3I2J7qUjDAwZ8lKFG+Gpv5bBuVCne87VaLcdS+ysC6/0OwW36RyKrumxCEXDp/C1GLrIuA52oLZLKJqpya5WqCX2j3VvRgVVwAJSE+qrP1KIPShFvQU6xLuIK1vUaelRFiQzOF4P+mOd58Z8pGZn4H6zyv6GxDgYw3nqAzc3orT9EjHb1xVuWU/wQeAc6XMszNlddaIroOadZAKz0wa2BZsixqP1Fts85GbloJMa4h3KYcQxLRkjO0ksCyv5PDzABwqXyWILygMkmCI0r5p87gFQ9OsNl4ejl5jFSRfhJozQw3Yu5OviBqdhM6JKZD8m+ZD22GbvJ+or3Ez+FppxJjN39fq4Ar0t+Cp5ukllViFDpYC5mIP44KrX3tvvoC7YNt7UUxuUaGzzM77thSUxOiEANQYkpcRq1uhWFbrFQsm8aV3XTdfucrNOg2ikgLWVZuvTmjgxwtYO8hyNiv06ZHT0OBSWYgsZQxcu5R0gxvGsIABkb/siqFPm8LJA7cSkj171yLSlgsqIBQwyLXZxkeb8cGPMEOXxZufso67Og2J4vgLDBAZTtbDMoRjpXtKZ/IZckGPVlAiQwSGCgGqjLC0bOZ8r63rrUSJvIXRzz/uzgtkLmPzEB+OUWD1OhvoNJFV7/+gXEeaSOuLvgNPToNPtV+d7pRaQcZd8SzZLbFplZR0dTxuqpLpMEzHV9wx9GbVHqTHUzRZUi8au/JATXZc7J2/HJzSlaFJ6aNj5tuS+GYkXKGsTd7oYoBfoOAclBwMql7uxRpUuyokQmyHGlSPy6CDApVWT5SQnGUPjdqJZC6p51QdF6s4CNng5X8+nMKIPjU47NLoC20LXcj9qdGoMsqI3kNWg+mPbb5dsQYoTBvu474AjqOP9cp5MWRQfqbhwh+SqQCvExcshw0Lf7Ow6rHc+5Y45fb0nYV81nV+l41kHrpdQ0u3KIW4duas2F/2YkP0lL/Pgem0fIc4XYS22Z3HjbQR6NInknVUvZLsIqgkF6rBORprbLeapWy6iLrSZfgA7ZEw6ddgrnpCYhK1eVFo47fXOXAjcwErcBTCvHUo
6LekWeuoLKUmSAzQcv78CgUEg8oGAoc+Q6msIhr1ZVr6EYNgHDCAxCMUwAkFI6HsEtqM6FIdAEkJBGN7xNgH9dUSGPhHZE5E9EdkTkT0R2RORPRHZE5E9EdkTkT0R2ROR/bchMgT4gP5DMBn2xGRPTPbEZE9M9sRkT0z2xGRPTPbEZE9M9sRkT0z234bJMOQfgsjwJyJ7IrInInsisicieyKyJyJ7IrInInsisicieyKy/zZERuD/mLdkxL/GZAeyan9IlY9hqH7wdjnwZ6iFo18EzuFfUQt/C6j9klb7HTjxxT3fE+stkPKvEIf87wCsw3HsvwJW9lgCZP7AsAJpvUFWcIesc4jqL1pBuA0Pfbxj2zWGcCEHecSoRgZQrKQPuU5PGZrTDcnWACVNHrjedl0iWQkpMbLcMZ19TY0maxhD65JzYJwyiZN0UZSSMNeAQpGLcboWpzXq+118QKQqPC74sFgBGdxxGfNUfRVB2yX8+z4QNILDj8e9JPc/d9dTui5rDuAViBpy6YxDI9JFDWCSB9JlyC631CI15IRwmLwzV8+8RbnVbRSee6Ji8wHhR8V0NYJDrR1hzZOzob/qy2Wk1hww0mkvjkqQOBsY8dAkmZMRaDCHywmHHKleDkkPscA+OiHV3jGyU6otGtJRbVMa9ci6cEAhj2Kz5fPONXxfK2kjUOizcqVX8ljVpdsFZ9ljYZ5lxxVr9tBdO+FlbRNC+LrR3WIAlrqTd/WoNG7nq+zsau+8ibFlTnJ4n+l6r4eJNMDwqZlB+axbMmQSrVJMPNJAHi515pvsMk+60vg3nkWA5kG+QFBB4s6nqoluYzKeGz2jLMO63HIfPlPYKBQPNJGLmO7ocB9uyx06fYUzXz/RhHYOcz24dLgVYIWfuZaWnXzd6LTFTcuzme8QTq9Sagk8kMSuNXprZ5UizHtcAremEt37tpTR3up0VY91DjxrKCsoIX9aGOZqpvEyHisobZ/Bhq3jE3i3HONQaVRKjPpDe+P5s71PWlqAINkf0VyRUYWm+HZ41e5WOBxVy5nozeNMasUGJQZRfF8OaDRU5+g6HUCHZtxYCtkrYPs7R2/9Iw2c3GTdfJHjAxthh8xwdx0WlvPxemJmD0Yz6sxMmmdWpraWdWqJKazWok3Sq6lBJhBkWUdlu3IlUIK1PyEJbqhwK8kp9bZda6IB93Gr+8flluxas4JfWMfT5QmjDmUhHNML1B1K72D4RSHU5STNgXkgeL5bNZYvO8OwM0pxHrwJwWQEivfba3eDlEXCOO3uZhbMpKFZjLbKUN83AoYfT4Vu1LrG/PkA4+qhq4cb7DqTmmxRbRUjchVxNDF3bVW5VtGO1BsnZypz3icQdiWXCCySTU9J8OBXH3StR7kj9GsrGAtnV0wQXAwShtdE5KgiON9cQlRGFbF6IBHRNs6xUZ3rNJWRg+cGtbzrKex3V+hMFzwm05O86xtxeqKmwQmMQuZ2+HPFzUw/VDuENnyhSCx4dKXECnQEEBFwxXD1AmLNXJ/CYHwAgqaTjhPWZ8KYg/AKu0GIMl3d4prp4mY0HiplMTcDWMTORZ3G6TZ0PlMzwVAe5I7pDHWpWB+QU08lBgDD5WlualHA+W2cu94hPXqV8hV+rLRrmPzV36YHhXDaOb/ZsS1wG3k/NGsvOa/4UWFzjc6c50G3ZAps2TmnVmhiXIVaO7qgb/IuWG77is+VYnYioX0CHdCgbwK2bgk/a8BtJrprJs4Ve3KQmJ0DJigMmvO2uzNOFlGsm4NdJ4C/+eh2o/urdpfuVqqdxCRhDdfTHPumIxzOdc5tSY0tNIBwS5UUlVxb6JH+DFQc3Vvbo5gEngpZbAeGdRm6CCx6+OXcVMSWybbNdi0Jdty+6qkcikyOaA55l8vJBcCmTRjsJGzs5ayjFQNvAH73+xMDBYaHLqhAr0g+9PWVZ/N7QOr2wADjrkrQJ7Um73iCc/rDXiZR1wIQsFB5o+L6CriJT9298lHtOt/
C0Mn1dj0LMhZq7KHEUd4FleK8lAUazvMwHdA7p8LkJGYpnweDArkLEaaTaJABShGDPKi1u4uWw5CFy+uEY4kTqsv9viuctniCWv26y0x6YpqVWcjqQZkUDHDRFudEhPfq3Aa2y5XQ1bQ82chHrQRD88A32HSIzS65ZZciNrUrSCIcZ12dOwuPN/Ew1zjsneh0uUQJclc7ET9X68mMrqYbKmqTFa20zC2AQiPDXZZMqUdjckddUG5aEepdnQKHssyxzmCshfQ6t1sMubtuPceoW7E6dRfmxbvr0wua47ymv7KaChPdfUrA4pBpYQrk1fSCNPdf8FazC1mzlo3ShPJoCFbAZ8rDbbvEBNzDxTXho3scrM3dlkUJOrtgMJigA3v16T6CDnjIQNUv6VXg24e+5FpjlbcHpLpFCwJ5fN0facr8cA9GyrzmfEwzc8zXw461yPWKYPd4trWhOxI8cNbeJ4Q4LTgLu9mxbh6CvJFEUGMH4JDlKstn+D7FA+GBb6qZI3iNamY7SkO78Y1zmGcX5IqPnQpdcH+Xu2GHTQtyTCvtEasmms6LSdVQgZPiHE0rxoByJCDiAabxJnPYpGFjBQnIGA4dtABo+nhs59bxwQ3Si3ybNMshQ9vQYbIxTV/o2RoYFOt68tAgafV9YaVBaNZArjQEWYyyrrFBRtYaufC7plIZqWNt0BVsmAU7X+nNoD2UsfWyN1ud915zl4RWs+US7vxE11DFw3y7umZxCY/1TTkzQrZNDoXVm1svwIEiaSA6iGNh6qMRZJr3gnbh8UJfm1bcsnDh771dcRwI9pY92rQp5o0KD4Y+3+8U32UCXgVbv00lBTmFpmyBr9xSOD1pxflSi9fBzNWBILaEthE20zgW2XkCIXX+BMQ1BqZu0i6OJk3aQDQyPGw1PIdn5RRs5y4qMuYkReyUoKstc4HeFy3kXWllACbCvwB3hBdu8mEs0BfyPBjNrpDSqzLsGJLogSYAOuB62CBO45m6tI9rJ6KwqDIQw1R2ysokKaPFrBK1YFuQC3O36y3Fylz2Lnl1ntspLqcCMJHIiqdkV7f9nFDXgnJ23eDAoGAdPSAjGkZvW3tQa7DLLSPhZHHbk0adHBqwMVM4dzIatqbnmtstqrAAHbXwmOZ80zQKvr6I66Q4FnJAINGVLfljkKxmIC4I5ETl0pZqSFReYLcRqXYZ7nobFh7a9NAdHE9dyUuizn1FzfOhblOnkjMLY9IqhvlZ0En8LowCPiDkrmJ9Tj+Cwd/jJvgDhsEgAGIAAuEEgL8DMtEPKIyhCEgQJELAAIb+dRz1ltroCaSeQOoJpJ5A6gmknkDqCaSeQOoJpJ5A6gmknkDqnwakSOKLtI84CH/1Puqtwn8ArAKfsOoJq56w6gmrnrDqCauesOoJq56w6gmrnrDqCav+mbDqB++n3hLqQ/8YXAU9cdUTVz1x1RNXPXHVE1c9cdUTVz1x1RNXPXHVE1f9o3DVWzTZP8fN78cbyr1tH/e22+HbBnHg93vFvd0S/Pnt5s6GefliT7ngnX3mvt5+7htwNx7a5y/fTg4Bv45rA77PywKj348U/At2jAP/S7aMA45j72PqFYDrrvqReiX7lHoFdH1rDsHlJfja5PABH1ziIk4+QQ36nPCnGQkqstUckAtf4q6NsdqZixNNx+8y6VEbdgfste4iIBu7zupKyZ6cypKL1pJv/CAVfT/WQYH0SXVtMCkAMHYKl6xvLBLir5DXexi1KTHswZ4Xc32N69jrKn3HP0Y8Hwr6dn/YjGVk2hHTe4OlTJMgc3TlRVDIYNYNbNUrB81YUcuAq9REEulXa5lQ8GNqOZwk7wnAP2ic8Vjk0KA7TtpBgX1mglvb5tquLHudb1OoKTeCmDbqrd5ATtAw4ZYbVI3c2f5WNmS3iWS5JitkG83NAqJGHgduGtBWChe6aDwXWyHOUL293RxxDXXzRKpHogJu7iXe2b80kSlYeTAuG0UoCUClJ5mVB651aza
oON0BLb4irQxFZb64oY69XkQdO2t+rs0OsGkYlgATsFDnk5df9ZleVju5pO3+r1gr/94i9YmfI6rjRZ+bfYZdnJkRDTBwY3HZLttl5meoR3meFQZ6hWWEqKxDM29c6EpviBJ4IwnQLBYK2slYkIPuNLehpN7R48kAlas7xade5nXDXS5pdRohbczr9DKckCNMfjjGysd4ePB9jIY9/8B2F6fk7BVXNH+MzPvKtF0ClpRoWmAWxSc/Kq1b050vXXTkYRBD7c4oQkcap1k51sXhsGyMlHUrxBgK9VrZD9kJSTkXBuwr3V3FOpZoMRIlSna8KJKsjSpTDkJUke0onGRiC6x3yI8lszPVDjjbvZ5hF7eeBLM09S20UbLQiDPv032kWtf5GqwXFzjmFY07Ps6CvQg8RFYGkfBe3HtMWV1aBG3HbiudCZyDS/mBTh/a9X5MoNnIlRdSHI1P3XTwrwZE5g8lZCNlh9ppeyVGDLpWtMmrMQoIAmGB+XazSvLEijF5MDw/HIHOnBZF8RVbUDQzSIChl5QjVxY5c3ENb1sNWqMKXLjAjtxObKV8icEX+gUSdruwCVH4WlXfTJjlVbiwrw0khAyfgJ6WkuquDI39dZSJcD4HEO8Eg4C07F2qN41Ps4hyjhmSEqpwnx3TK1tNIKLIiHstuieOfRhl9n9XA7xPbQeNzUzEbiqMinDOAg+bQblqrF1ph27WKQURJI/R+IyddYdrlM6lMt88YWxFJgrW+A+07kRynPzD1EO/IEH/tg4nahPLmws8GswYDJcufNz2M4w4NL71NqkKTQGwoEupLCEhx4h2oz6084pusk2MrVJdt/symqyOSd6uoeau2L+sBsekpKoDc80n3a4qZuEc9TEcqXgEMMDTqyKZuW5tlbf6un3bCKDIq2PSDoEzAUqq34TwxgC5qBYYOcSsfKSoyGHVIADLi6X7AYNigaer4FSBE6yD9KErne4aBjq3ajy5M4MrIHq0hcgOVV++P1h3TbH4yNiwWp17khVkpHggKO9OD0D6KN52yGwOq7pyjARIY6iQPGTSUmZZzKBO0GH1SVVgIaLSHBdM4RZSZTlSTqEr0KQX0ueBBfKZFczYyyXYHvVh5qAGn8A7admVdCYb1Wb1dGTIJCZsiiIQeJvCCnRVxc4vWeUMSJeQyQho9JmpBuhpbNhaOllGsAuXwzwTpt4KKdaq8Q5yS8oWFzYFh4jWXLrzugiXM9X14CJwVpF7Rz4fe4kNTnUuBUUJ9yP7h+6BTVJB253VzzIqKQVaNravW3iNrZZ+4MuKKSa6YS7hZBuiMuSPyprOC3M2/e0y0WJVlwrSLPEUDewDKC/sjeC9jdODWYWC7obO3kpQ0UH1snKjs9k0UFCd5Dvr5lOgUZlMGxFl3ReOBvWMDvhZRcegqNFjOHP3ZZAXyeFbtgh0YlpsGoerE1Urrdqfx9YKc0D0HnOVFCQYUT0Z7kJ+dNlwO0PDRCfwLn+4ejpxju251mFOVdp88MCro2rK7K8xdxChBc0h4pUELwjwCkqX7RAmm6vomo26k1vLtMjntRrt0wh5XLYekZV00gndmEOnOKvSQdn0YdxQe2Qd9HI6jNPwzAqXWB6LhnG01pFyjB/wVLuqnDXlh2mGWKBddHkn81QnDhXcth6i0caTAd8gdjH6eF2FB/HgVI87S9f5sOvuYkmaL3fidvbkSORrfj2k1Vm7XeDsMLwd9qzjvstGEsZ4o1IxT2+m+6LacSQKhzKFHtZcIHbO7DVNz/lJfsk8hcfyIp0OCnVWexdwTKIwyF7DKIy0jtMwYKrZNEEvIo3a8FWVj4co0UMFnEiZ8yMFSdvKYaaxMGnHWYyoR/ohiZQZSJPE+376DBD5VT3MnX73OKfBATUdmbzeCwu0FLGZ2SvE93B9AwxNBg5Bt9qqhjmQqHv+cqmHiA7dUlBAgmp1Rx2NaWr3lQZuPO/oNJo/OkUgOBvQWjmYA+GiJQKXsxq58M4y4Anmooc
hLJHjqjgoqR4jH21GG8YK6ZVsQCQsOVTEfPSM5295Y83KXbqzHECmftSxAnOTDvOubjALg4wqLpxzBAkibReoCjQFtXt2D044j726M3LSh/cIfsRzHeQ0hQHqDu59uUIZHrDjXaeLldNQnw9DklcDvlsmzhxT3XqaaLO4L+Smr+Wq5UbhnZ1SXnzJ5AEVkDZ1dC9UQdpyc2KvzdwNTRSy1XnIgnOoz1VbuYs6AezmVZohp1TnX2cuLTX0xrE6UCnyNEmtEswh7yrNct32eX01GA0A5LThuIyWoF52zVPQxlW9Oi88SKN60LH74rt/ffTYRt8h+2j2Wp8uUHYBLCLZTJBjzNJeS4jX56j1DwvvYSsyTMMPT1zehQMdOf3WCqKJIO2c3UjGOOP30dEyQ+nEEzw31XIFu3MI0oVo+3w5KIfpqJM2QTwE9JHFRTxbpG87QL6vwBhD3x/nlTLYhruk0SrDgrrYPRvGZQyVk5usJ3EkvTPbCj1VHmYaToVv27pw3mHQSqJNOV4w3daAvOevcw2QgqHoy5fpU4jGZXQufBX2YW4I1K6gyvaCmkxbD9npLD129eduMjckFxmWvajb2WMXpU+xg8Fd4DZZGLzLVQIlVgdI55PjZ2Vdy659Igui5DSOctj5zrhXWUy2dmjDFpzoaLyw9THh3+4gjzsOEWgzt+OFCkWYduPL1xGNsHew+9+wHTy6w3ICw7/dQPwTYgS+Q4wY9IFASZAAABjFUAB72xz8K6wPfPiUVgYk8bdsNH8JUL63492v2CleyIaxeSHJqQriKMoOxPdnUfsw9k0RvwHTuqkPXHvPyvKbIv8jPA33YYr7d3BrlUXR8Zh3d53/el/6n0DFv4JREOADDn3BKV8zCgq9k2AIJj/AX6bZQd9llV/AG+/tvPE6hIcR+GtTzF/jFrUps/AgyzUe56Yvfsgqn4q/bME3/PP1OL430r/WIIS9WWXeDHkk8AF+x7kc/H6YkF8xTPh3FIijJDY+Hjb9mDZJU/vl6XPpN7z++Zpzc1h2XiiTx+O4Gq+GPn9Xob+mW7xk4+2L7+5R1c6sr0fs8rHml4P148FrO4/GfUXpoZn68K2I+Gnq93G5i+3H13W9R8iPt6pN9sKiH0cNfgtmfptrxDdjMfp9Eo8f7/o8HFTf++sXl7XHBcOPn/PJavvxOQQJfFndv2zXN9cfy9lLCz7zxiea/By7/ESmsD80g35CUH4pqo9y9Pj5Tq7vZ7CXz0ep8UX56+fvWasxkvwAkCDwJlHRryczSrwjgnfZ+AH5wgUKh76f278icdlbyrT3RDDyp0Uw8p4Ivr54agCneszG9SfkL/JD+bt3IWuHH62yX3DOd/bjXzCa+NtQvI0f8E7WuXckMf47AuRnRwt6Lz3C3zlamF8dJK2DoX05B+gvMnGfHv9/xo94J8PiO29X8F/w3uuttb9cGTb2cdrve+q/3w83iHwAgM/C92tXUgzDP7ypxP8GBfhtZXyHH757FfnXGIQKf27WBk+mOZhmV7BR+LusoZ8UOeTfCJqg996dfzNWr4lY/4Ay8wtys5L4hy+mGQJ9A1kI9B2ageiu5SBf6Dnv+B+gX+WyJQH4F5Dwv+Ql93N/kef+Is/9RZ77izz3F3nuL/LcX+S5v8hzf5Hn/iLP/UX+Y/YX+VOm6Fd4+EOY9vV74v+B/im7i0DvvSb+fwjJnpG0z0jaZyTtM5L2GUn7jKR9RtI+I2mfkbTPSNpnJO3/aSTt3wGrvn7ZhRDfoyrgAw5iKI6jCAACGPmOgwGEfEBRBMbwwyUDwgDoVzgc/NjDcmj9+qfeJ+PvvU9+8+J51/n2ter/0PfI3Mvnb+IT9Ov9Pf8H/t73ACP/pvfG+N/ke/LZJej/nyf238oMyNeOgO/4oRDwBwz+MsP038Qa73ls/grW+OTr940b2UcXl/7TdW98M/x/YZxf6Y0CAfCHbzzAUeL7lADgW9DHL2ePH/uI/nrJ8eSAn+MAAvo+xOd
v44C3zBL/ngiAz17/7m9fOP3/2QgA+OfJ/xcjADD0a0977G+KAPhkxv+UIPP3IwC+bdc31//lCIA3Cv8fxPUY+yTeoR7A75Mz/Y8K6vkf7BsPORh7b6934sN7zuTop2v/0rR+z/30By9d7mW87FzRzMeMq6OPX9mw9IchC39PEO+yEHj5+e27OI1P6tVeo/Ox0yD0gURAEMcQHEAJAoKx1/MfxcS3Z1H0t++COT5V+yqdmz6K+y9b8/L5sbD4iTH8YoTQd2TuW9lfjSCCP5Dk58g9+Bv0gGHIBxAlP3++rv9V5n0nXL57CvqXnvIDEfZnpMaPk0L9aqnBU+Z/lKzAwX2IyB/lQ8aQ9/yRXyTHFwFEf1NwIPwHfGmfMuTfIEMA4Ovp/c3s3lnrm5XkZ+UGDP3Rmn+hrPgD7gJPrvvHcR2O/Mm16l/w3Hf1/kKOe8+S+uS4fwzHgTj0u9II27Uc4C/rSiD5V57yC7nxx8bcX60rmb1fD/emr+L+P0pnQkDkO6vZW/TfVzYT5MNbnpNfrhv9RCT8U2b8+2QGgn0KSvv05u6b8LKfFQxHVcAXWVa+Me1/G7X2CyXBe8bZJ4/9g3kM/TaD8i/iMeJv07bfZOGTx/6hPAbCv6+VfGct/GmW+9YE9S9r/oVc9wc2rX1y3b+D65DfxWJvAfB/nOf+WL2/kOOgn+e4/+QghHeTn/P1F8nPHWu17OUI5PEqhXK9MeUQe9FsTgMc6dxTXoYs0jVwYAyQg8MTsApMKfSrwGsA/5bNLOkUZFeOVW74dqtUmcnJHGg03dmS3YSjO5ZlWPYhPobDfXO7160KH0E5keahPj5LsKC8hoaR4Ibnh1NXdZ/uBDQwiUVRJjkxqkxHJ3Voq7tC1Q8EV/rHjJFFENMtsJa87gZ28lrFhDfXez+b1diQED0+rjGq26mXa34emmlGCEwljkg1AuzhqX0hO0BTDm/tSMLLOp2FC05XJ+hIeRxHZG4yC4MZ+Qgq7DIr+pobYs37kjEoixatgyOZynS+AF4S1o0zRgkDn3E9j0etHX2Zqtjr5XLE2EkJ/FCyVcBpdlHqWTKFBlMuK3utkiy6NZNeZ+WFPWLKpNNiawnICzY/ME11ufIvCcUXK3+UlGQy6kjci+VqhsIYDaRQ1bSvXKXmGh1OjE6KQ4+A8KxICHu0yFuGVHUe1aWiUfpQ4U64c0RxYABrCYnq4zp1LwqITDcvuyy2NzLuxGP14dLq3PAgX2+GytOVf0oLJ7/u7SF7DGfiCTq7t7hTrhV1eI9TuBrej2rlQO/NG2k8fAmWMifboMcZ0Hk48l1306mlo4gm0OGqtGoH52GbHAihJsTucblQLwnpb/jEj5K2lpwd6nBZw9qaS7RLDC6GHxEbCcN1o5QYsnQ+vEgTQbnDg3nnjFO0H7JQCjoFkqJhvDaR0qLO4QtPpKm3YRhd3M3hlB/+xTsjHilRj1B5TphxUyx56FY9FiSW+2XUIt/zLpfDiV90IlJOsPv5ZBtxtiDmaZ9LQm7ntXKE6/nF4afolDkklMBVa2SmY/yw04Y8juO+7dmVTQ7hRVv7mDGWxIIW4Qq2KMYt5u+NEJyMY/wtEsAuymdFOKIxKoxl1n67N4Mak1uYPBwjgGhdMtyaRTxq57GS7tBTJs9SrAQ2nzECbbcQSEYalMmSlvuqIF2M3B/BM5WzrrfXE2mlIxDao7HYR3xWGGG+4XCxT8yik1l3dWjx0XMNQVLCdgQhHd66NjqySHuLylQVD1dNpIzWwoa0uZDXoevN8AjrvhYcVNAy0p0e51vAAUyx1URyK0kexMWTvvpCmdKdqWPxGUNMApQ04YjOCFOmmhAS0Pd1GQ0k9IhAkOeIc2rJr06jtFRTtXljNj8iinJsVDQt6KIHc4EqEeRncuImixQl48UupLJnJwl4hFqhUzeh0FIv9vz5ute1WRA6r23ChGgrRmSTPTaqPUU1IBwxvY8L1jgsAKwTca+zoMnd1WSjE+Zztip
zucwaEmrITTct2MU4O514hUbtgboPuPFFOyirh0ApMSDNk8IOGAO4GTfembQoYBEBTZ3RHwLi8hm5sZrQmf0Y9OW1yMRGvh9Jqjk5ra0Ez9Rgi465ttP3cScuSN8ONpOZl7rfVX9J8i5GTZ+rrfFTkUUWRXzQeVMnirRQxj476wCdr6pjNRhUCcDhfR2Jtq01hn8IHDhs7HI9AvqHBppwYED3mmnu5bKZXW8bmK9QZxi7tDwLBltzgI2iU/bIaSBq8Qccz3Jll6CV5YdYXY7pxga+T1oqXLN2Xkw11mk8NbDoGWp2wXi4GINyhfon9bzpmJSP7cqj7SM6jfPFovU4ri3VfHUathSQQwnpInDJ5V7OAFne52qiN144gm8fO4v1slIBysg5CoFmSUt22uUBeaEeDkfkmYycGl2WCVFQiEjfWzZapC1sAXPpZchyJRG/eYwe+Ufv3cQDLl0J7bJXktqBtGsHZtsVf8wrqC2kwvgdPmgeUldGzsoLmfKgm41Byt4FfKg6S0HxK3HXlGm8jQzmOZPZmBeR2JKudX28fREnC+nMoAbNXeOsq1vZS1iKBjCyECvLN6viZfsELFOzHG78ML1cqQtV42KZBVnYOMb2cErIznBVmxyTvt6GlyQiwkklUQsHxmNpWrxUL+UmlKMjq3HiohbqnCnYhuOctN1jIWXxlkHUiCIKT16u+sQPqCxDSVKDmU9743gHm17QD3Gt+LdAFNtmlc6v4dicqdFDgHuTg4nDiTi3ZCORR2SWZ2t3q+93iaRfd+npkuy9wI8ZDG0Jv4tBEe8zDhe5oSPR1sQ79HX9HTC4L8NWdgpnJ0Dgcv2WQPdDQOPbAogK3oQ8P3JwE9oJg/PjUt3g+TISu5A2ZzJFiwZub1MKlMzSHzG9hxMNXSCAkjfyRMIKaWJ3ghNv8rIrBccQp/V8CKlxyS4gLrj155UdHD6u7FJ111sI60Gzf1wpKTqnYSWt9zRim/IBVwv14NxtSrPL+eBAaYG2I8EKXT6qCFwrYVAvhcWd/PykFRJj8DVwbMDCeZ5snWVT4m3gMqb79CgXiMW3I8ydnjHFXSMJrcLlSCFPMhC/bt0aHTRnnd93U/9FeSgRCP0AH+mfv04p+WY/IL83jhJHGiwChhAEAjD8kyb+lXvZjgBB4gvfduLHOOJn7abIT6S4eiK/f6dNC/lwOBt++nzjyQQRH1D881noz5q74A84/oW56xvbA/K7T/mFqPA9D4df8WZFm+LhY1a+p3/0X0lxDn6d1QAlv/eG+dtcY5GnJ8I/WlbhIPqVNelrUYUD5Icvfdv+5GthHP4LD/mFkurpo/CP5kUUh/9+Xtz1uH8EL77nofDkxX8OLwLY71rZQeArPsH/JDNC2O8x4+8/5Bcy4y8IcIPe3T+2aYqpfSplf3TfGQL/Pnrp/yqB8puP4C9nBurhZ+VLCmUIeM23/VTX39sOA4M+7OLmB5zx3nYKfxNn7Id9c4zgZ6lyRBFdmig+rvhf</diagram></mxfile>
2107.06325/main_diagram/main_diagram.pdf ADDED
Binary file (33.7 kB). View file
 
2107.06325/paper_text/intro_method.md ADDED
@@ -0,0 +1,77 @@
1
+ # Introduction
2
+
3
+ <figure id="fig:example_image_sg" data-latex-placement="ht">
4
+ <figure id="fig:subim1">
5
+ <p><img src="motorbike_img.jpeg" style="height:6.5cm" alt="image" /> <span id="fig:subim1" data-label="fig:subim1"></span></p>
6
+ </figure>
7
+ <figure id="fig:subim2">
8
+ <p><img src="motorbike_sg.png" alt="image" /> <span id="fig:subim2" data-label="fig:subim2"></span></p>
9
+ </figure>
10
+ <figcaption>Example of an image and the corresponding scene graph. Since the scene graph is a directed graph with typed edges, it resembles a knowledge graph and permits the application of knowledge-base completion techniques. </figcaption>
11
+ </figure>
12
+
13
+ Visual Question Answering (VQA) is a challenging task that involves understanding and reasoning over two data modalities, i.e., images and natural language. Given an image and a free-form question that formulates a query about the depicted scene, the task is for the algorithm to find the correct answer.
14
+
15
+ VQA has been studied from the perspective of scene and knowledge graphs [@tang2020unbiased; @chen2019counterfactual], as well as vision-language reasoning [@gan2020large; @abbasnejad2020counterfactual]. To study VQA, various real-world data sets, such as the *VQA* data set [@antol2015vqa; @krishna2017visual], have been generated. It has been argued that, in the *VQA* data set, many of the apparently challenging reasoning tasks can be solved by an algorithm through exploiting trivial prior knowledge, and thus by shortcuts to proper reasoning (e.g., clouds are white or doors are made of wood). To address these shortcomings, the *GQA* dataset [@hudson2019gqa] has been developed. Compared to other real-world datasets, *GQA* is more suitable for evaluating reasoning abilities since the images and questions are carefully filtered to make the data less prone to biases.
16
+
17
+ Plenty of VQA approaches are agnostic towards the explicit relational structure of the objects in the presented scene and rely on monolithic neural network architectures that process regional features of the image separately [@anderson2018bottom; @yang2016stacked]. While these methods led to promising results on previous datasets, they lack explicit compositional reasoning abilities, which results in weaker performance on more challenging datasets such as *GQA*. Other works [@teney2017graph; @shi2019explainable; @hudson2019learning] perform reasoning on explicitly detected objects and interactive semantic and spatial relationships among them. These approaches are closely related to the scene graph representations [@johnson2015image] of an image, where detected objects are labeled as nodes and relationships between the objects are labeled as edges. In this work, we aim to combine VQA techniques with recent research advances in the area of statistical relation learning on knowledge graphs (KGs). KGs provide human-understandable, structured representations of knowledge about the real world via collections of factual statements. Inspired by multi-hop reasoning methods on KGs such as [@minerva; @wenhan_emnlp2017; @hildebrandt2020reasoning], we propose Graphhopper, a novel method that models the VQA task as a path-finding problem on scene graphs. The underlying idea can be summarized with the phrase: Learn to walk to the correct answer. More specifically, given an image, we consider a scene graph and train a reinforcement learning agent to conduct a policy-guided random walk on the scene graph until a conclusive inference path is obtained. In contrast to purely embedding-based approaches, our method provides explicit reasoning chains that lead to the derived answers. To sum up, our major contributions are as follows.
18
+
19
+ - Graphhopper is the first VQA method that employs reinforcement learning for multi-hop reasoning on scene graphs.
20
+
21
+ - We conduct a thorough experimental study on the challenging VQA dataset GQA to show the compositional and *interpretable* nature of our model.
22
+
23
+ - To analyze the reasoning capabilities of our method, we consider manually curated (ground truth) scene graphs. This setting isolates the noise associated with the visual perception task and focuses solely on the language understanding and reasoning task. Thereby, we can show that our method achieves human-like performance.
24
+
25
+ - Based on both the manually curated scene graphs and our own automatically generated scene graphs, we show that Graphhopper outperforms the Neural State Machine (NSM), a state-of-the-art scene graph reasoning model that operates in a setting similar to Graphhopper's.
26
+
27
+ Moreover, we are the first group to conduct experiments and publish the code on generated scene graphs for the GQA dataset[^1]. The remainder of this work is organized as follows. We review related literature in the next section. Section [3](#sec:our_method){reference-type="ref" reference="sec:our_method"} introduces the notation and describes the methodology of Graphhopper. Section [4](#sec:experiments){reference-type="ref" reference="sec:experiments"} and Section [5](#sec:results){reference-type="ref" reference="sec:results"} detail an experimental study on the benchmark dataset GQA. Furthermore, through a rigorous study using both manually curated ground-truth and generated scene graphs, we examine the reasoning capabilities of Graphhopper. We conclude in Section [6](#sec:conclusion){reference-type="ref" reference="sec:conclusion"}.
28
+
29
+ # Method
30
+
31
+ The task of VQA is framed as a scene graph traversal problem. Starting from a hub node that is connected to all other nodes, an agent sequentially samples transitions to neighboring nodes on the scene graph until the node corresponding to the answer is reached. In this way, by adding transitions to the current path, the reasoning chain is successively extended. Before describing the decision problem of the agent, we introduce the notation that we use throughout this work.
32
+
33
+ A scene graph is a directed multigraph where each node corresponds to a scene entity which is either an object associated with a bounding box or an attribute of an object. Each scene entity comes with a type that corresponds to the predicted object or attribute label. Typed edges specify how scene entities are related to each other. More formally, let $\mathcal{E}$ denote the set of scene entities and consider the set of binary relations $\mathcal{R}$. Then a scene graph $\mathcal{SG} \subset \mathcal{E} \times \mathcal{R} \times \mathcal{E}$ is a collection of ordered triples $(s, p, o)$ -- subject, predicate, and object. For example, as shown in Figure [3](#fig:example_image_sg){reference-type="ref" reference="fig:example_image_sg"}, the triple *(motorcycle-1, has_part, tire-1)* indicates that both a motorcycle (subject) and a tire (object) are detected in the image. The predicate *has_part* indicates the relation between the entities. Moreover, we denote with $p^{-1}$ the inverse relation corresponding to the predicate $p$. For the remainder of this work, we impose completeness with respect to inverse relations in the sense that for every $(s, p, o) \in \mathcal{SG}$ it is implied that $(o, p^{-1}, s) \in \mathcal{SG}$.
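The triple representation and the completeness assumption for inverse relations can be sketched in a few lines of Python (a minimal hypothetical sketch; the entity and relation names follow the figure example, and the `^-1` suffix for inverse predicates is our own convention):

```python
def complete_with_inverses(scene_graph):
    """For every (s, p, o) in the scene graph, also add (o, p^-1, s),
    so that every edge can later be traversed in both directions."""
    completed = set(scene_graph)
    for s, p, o in scene_graph:
        completed.add((o, p + "^-1", s))
    return completed

sg = {("motorcycle-1", "has_part", "tire-1"),
      ("tire-1", "has_attribute", "black")}
sg = complete_with_inverses(sg)
```

After completion, the inverse triple `("tire-1", "has_part^-1", "motorcycle-1")` is contained in the graph alongside the original triples.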
34
+
35
+ <figure id="fig:architecture">
36
+ <div class="center">
37
+ <img src="Architecture_Compact.png" />
38
+ </div>
39
+ <figcaption>The architecture of our scene graph reasoning module. </figcaption>
40
+ </figure>
41
+
42
+ The state space of the agent $\mathcal{S}$ is given by $\mathcal{E} \times \mathcal{Q}$ where $\mathcal{E}$ are the nodes of a scene graph $\mathcal{SG}$ and $\mathcal{Q}$ denotes the set of all questions. The state at time $t$ consists of the entity $e_t$ at which the agent is currently located and the question $Q$. Thus, a state $S_t \in \mathcal{S}$ for time $t \in \mathbb{N}$ is represented by $S_t = \left(e_t, Q\right)$. The set of available actions from a state $S_t$ is denoted by $\mathcal{A}_{S_t}$. It contains all outgoing edges from the node $e_t$ together with their corresponding object nodes. More formally, $\mathcal{A}_{S_t} = \left\{(r,e) \in \mathcal{R} \times \mathcal{E} : S_t = \left(e_t, Q\right) \land \left(e_t,r,e\right) \in \mathcal{SG}\right\}\, .$ Moreover, we denote with $A_t \in \mathcal{A}_{S_t}$ the action that the agent performs at time $t$. We include a self-loop with a *NO_OP* label for each node in $\mathcal{SG}$. These self-loops allow the agent to remain at its current location once it has reached the answer node. Furthermore, the introduction of inverse relations allows the agent to transition freely in either direction between two nodes.
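Given the triple set, the admissible actions at the agent's current node can be enumerated directly (a minimal sketch; representing the *NO_OP* self-loop as a relation label is our own implementation assumption):

```python
def available_actions(scene_graph, e_t):
    """A_{S_t}: all outgoing (relation, target-entity) pairs from the
    current node e_t, plus a NO_OP self-loop for staying in place."""
    actions = [(r, e) for (s, r, e) in scene_graph if s == e_t]
    actions.append(("NO_OP", e_t))
    return actions

sg = {("motorcycle-1", "has_part", "tire-1"),
      ("tire-1", "has_part^-1", "motorcycle-1")}
acts = available_actions(sg, "tire-1")
```

Here the agent at `tire-1` may either follow the inverse edge back to `motorcycle-1` or stay put via the self-loop.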
43
+
44
+ The environment evolves deterministically by updating the state according to the previous action. Formally, the transition function at time $t$ is given by $\delta_t({S_t},A_t) := \left(e_{t+1}, Q\right)$ with $S_t = \left(e_{t}, Q \right)$ and $A_t = \left(r, e_{t+1}\right)$.
45
+
46
+ **Auxiliary Nodes:** In addition to the standard entity and relation nodes present in a scene graph, we introduce a few auxiliary nodes (e.g., the hub node). The underlying rationale for their inclusion is that they facilitate the walk for the agent or help to frame the QA task as a goal-oriented walk on the scene graph. These additional nodes are included during run-time graph traversal but ignored at compile time, e.g., when computing node embeddings. For example, we add a hub node (*hub*) to every scene graph, connected to all other nodes; the agent then starts the scene graph traversal from the *hub* with its global connectivity. Furthermore, for binary questions, we add *YES* and *NO* nodes to the scene entities that correspond to the final location of the agent, so that the agent can transition to either the *YES* or the *NO* node.
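The auxiliary-node augmentation could look as follows (a sketch under our own assumptions; the relation labels `to_hub` and `answer` are hypothetical placeholders, not names from the paper):

```python
def add_auxiliary_nodes(scene_graph, entities, binary_question=False):
    """Attach a globally connected hub node (the traversal start) and,
    for binary questions, YES/NO answer nodes reachable from entities."""
    augmented = set(scene_graph)
    for e in entities:
        augmented.add(("hub", "to_hub", e))      # hub reaches every node
        if binary_question:
            augmented.add((e, "answer", "YES"))  # agent may finish on YES
            augmented.add((e, "answer", "NO"))   # ... or on NO
    return augmented

aug = add_auxiliary_nodes(set(), {"motorcycle-1", "tire-1"},
                          binary_question=True)
```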
47
+
48
+ We initialize the words in $Q$ with GloVe embeddings [@pennington2014glove] of dimension $d=300$. Similarly, we initialize entities and relations in $\mathcal{SG}$ with the embeddings of their type labels. In the scene graph, the node embeddings are passed through a multi-layered graph attention network (GAT) [@velivckovic2017graph]. Extending the idea of graph convolutional networks [@kipf2016semi] with a self-attention mechanism, GATs mimic the convolution operator on regular grids, where an entity embedding is formed by aggregating node features from its neighbors. Relations and inverse relations between nodes allow context to flow in both directions through the GAT. Thus, the resulting embeddings are context-aware, which makes nodes of the same type but with different graph neighborhoods distinguishable. To produce an embedding for the question $Q$, we first apply a Transformer [@vaswani2017attention], followed by a mean pooling operation.
49
+
50
+ Finally, since we added auxiliary *YES* and *NO* nodes to the scene graph for binary questions, we train a feedforward neural network to classify questions as query-type (i.e., questions that query for an object in the depicted scene) or binary. This network consists of two fully connected layers with ReLU activation on the intermediate output. We find that it is easy to distinguish between query and binary questions (e.g., query questions usually begin with *What, Which, How*, etc., whereas binary questions usually begin with *Do, Is*, etc.). Since our classifier achieves 99.99% accuracy, we ignore errors in question classification in the following discussions.
51
+
52
+ We denote the agent's history until time $t$ with the tuple $H_t = \left(H_{t-1}, A_{t-1}\right)$ for $t \geq 1$ and $H_0 = hub$ along with $A_0 = \emptyset$ for $t = 0$. The history is encoded via a multilayered LSTM [@hochreiter1997long] $$\begin{equation}
53
+ \label{eq:lstm_agent}
54
+ \mathbf{h}_t = \textrm{LSTM}\left(\mathbf{a}_{t-1}\right) \, ,
55
+ \end{equation}$$ where $\mathbf{a}_{t-1} = \left[\mathbf{r}_{t-1},\mathbf{e}_{t}\right] \in \mathbb{R}^{2d}$ corresponds to the embedding of the previous action with $\mathbf{r}_{t-1}$ and $\mathbf{e}_{t}$ denoting the embeddings of the edge and the target node into $\mathbb{R}^{d}$, respectively. The history-dependent action distribution is given by $$\begin{equation}
56
+ \label{eq:policy_agent}
57
+ \mathbf{d}_t = \textrm{softmax}\left(\mathbf{A}_t \left(\mathbf{W}_2\textrm{ReLU}\left(\mathbf{W}_1 \left[ \mathbf{h}_t, \mathbf{Q} \right]\right)\right)\right) \, ,
58
+ \end{equation}$$ where the rows of $\mathbf{A}_t \in \mathbb{R}^{\vert \mathcal{A}_{S_t} \vert \times d}$ contain latent representations of all admissible actions. Moreover, $\mathbf{Q} \in \mathbb{R}^{d}$ encodes the question $Q$. The action $A_t = (r,e) \in \mathcal{A}_{S_t}$ is drawn according to $\textrm{categorical}\left(\mathbf{d}_t\right)$. Equations [\[eq:lstm_agent\]](#eq:lstm_agent){reference-type="eqref" reference="eq:lstm_agent"} and [\[eq:policy_agent\]](#eq:policy_agent){reference-type="eqref" reference="eq:policy_agent"} induce a stochastic policy $\pi_{\theta}$, where $\theta$ denotes the set of trainable parameters.
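The action distribution above amounts to scoring each admissible action against a nonlinear transform of the history and question embeddings; the following NumPy sketch illustrates the computation (the weight shapes and random inputs are our own assumptions for illustration):

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def action_distribution(A_t, h_t, Q, W1, W2):
    """d_t = softmax(A_t (W2 ReLU(W1 [h_t, Q]))); one row of A_t
    per admissible action in A_{S_t}."""
    hidden = np.maximum(W1 @ np.concatenate([h_t, Q]), 0.0)  # ReLU(W1 [h_t, Q])
    return softmax(A_t @ (W2 @ hidden))                      # scores over actions

rng = np.random.default_rng(0)
d, n_actions = 8, 5
d_t = action_distribution(rng.normal(size=(n_actions, d)),  # action embeddings A_t
                          rng.normal(size=d),               # history encoding h_t
                          rng.normal(size=d),               # question embedding Q
                          rng.normal(size=(d, 2 * d)),      # W1
                          rng.normal(size=(d, d)))          # W2
```

The result `d_t` is a valid categorical distribution over the admissible actions, from which the next transition is sampled.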
59
+
60
+ After sampling $T$ transitions, a terminal reward is assigned according to $$\begin{equation}
61
+ R = \begin{cases}
62
+ 1 &\text{if $e_T$ is the answer to $Q$,} \\
63
+ 0 &\text{otherwise.}
64
+ \end{cases}
65
+ \end{equation}$$ We employ REINFORCE [@williams1992simple] to maximize the expected rewards. Thus, the agent's maximization problem is given by $$\begin{equation}
66
+ \label{eq:objective_agent}
67
+ \mathop{\mathrm{arg\,max}}_{\theta} \mathbb{E}_{Q \sim \mathcal{T}}\mathbb{E}_{A_1, A_2, \dots, A_N \sim \pi_{\theta}}\left[R \left\vert\vphantom{\frac{1}{1}}\right. e_c \right] \, ,
68
+ \end{equation}$$ where $\mathcal{T}$ denotes the set of training questions. During training, the first expectation in Equation [\[eq:objective_agent\]](#eq:objective_agent){reference-type="eqref" reference="eq:objective_agent"} is substituted with the empirical average over the training set. The second expectation is approximated by the empirical average over multiple rollouts. We also employ a moving average baseline to reduce the variance. Further, we use entropy regularization with parameter $\lambda\in \mathbb{R}_{\geq 0}$ to enforce exploration. During inference, we do not sample paths but perform a beam search with width 20 based on the transition probabilities given by Equation [\[eq:policy_agent\]](#eq:policy_agent){reference-type="eqref" reference="eq:policy_agent"}.
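A single REINFORCE update with the baseline and entropy term can be written as a surrogate loss for one rollout (a NumPy sketch; the exact combination of the moving-average baseline and the entropy regularizer is our assumption based on the description above):

```python
import numpy as np

def reinforce_loss(log_probs, reward, baseline, entropies, lam=0.01):
    """Negative surrogate objective for one rollout: advantage-weighted
    sum of log action probabilities plus entropy regularization (lam = lambda)."""
    advantage = reward - baseline  # moving-average baseline reduces variance
    return -(advantage * np.sum(log_probs) + lam * np.sum(entropies))

# Rollout of two steps, each action sampled with probability 0.5,
# terminal reward 1 (answer found), current baseline 0.2:
loss = reinforce_loss(np.log([0.5, 0.5]), reward=1.0,
                      baseline=0.2, entropies=[0.0, 0.0])
```

Minimizing this surrogate with gradient descent yields the REINFORCE gradient estimate for the policy parameters.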
69
+
70
+ Additional details on the model, the training and the inference procedure along with sketches of the algorithms, and a complexity analysis can be found in the supplementary material.
71
+
72
+ In this section we introduce the dataset and detail the experimental protocol.
73
+
74
+ The *GQA* dataset [@hudson2019gqa] has been introduced with the goal of addressing key shortcomings of previous VQA datasets, such as *CLEVR* [@johnson2017clevr] or the *VQA* dataset [@antol2015vqa]. *GQA* is more suitable for evaluating the reasoning and compositional abilities of a model in a realistic setting. It contains 113K images, and around 1.2M questions split into roughly $80\%/10\%/10\%$ for the training, validation, and testing. The overall vocabulary size consists of 3097 words, including 1702 object classes, 310 relationships, and 610 object attributes.\
75
+ Due to the large number of objects and relationships present in GQA, we use a pruned version of the dataset (see Section [5](#sec:results){reference-type="ref" reference="sec:results"}) for our generated scene graphs. In this work, we conduct two primary experiments. First, we report results on the manually curated scene graphs provided in the *GQA* dataset. In this setting, the true reasoning and language understanding capabilities of our model can be analyzed. Afterward, we evaluate the performance of our model with generated scene graphs on the pruned GQA dataset, which shows the performance of our model on noisy, generated data. We use the state-of-the-art Relation Transformer Network (RTN) [@koner2020relation] for scene graph generation and DetectoRS [@detectors] for object detection. All experiments are conducted on the "test-dev" split of GQA.
76
+
77
+ The questions are designed to evaluate reasoning abilities such as visual verification, relational reasoning, spatial reasoning, comparison, and logical reasoning. These questions can be categorized according to either structural or semantic criteria. An overview of the different question types is given in the supplementary material (see Table [4](#tab:experiment_question){reference-type="ref" reference="tab:experiment_question"}).
2109.13016/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2109.13016/paper_text/intro_method.md ADDED
@@ -0,0 +1,127 @@
1
+ # Introduction
2
+
3
+ Over the past few years, deep neural networks have achieved remarkable success in many applications. One of their major limitations is the dataset bias or domain shift problem [@5376]. This phenomenon occurs when a model obtains good results on the training dataset but performs poorly on a test dataset or a real-world sample.
4
+
5
+ As shown in Figure [1](#fig:samle_images){reference-type="ref" reference="fig:samle_images"}, for numerous reasons (illumination, image quality, background), there is almost always a distribution difference between two datasets, which is the main factor reducing the performance of deep neural networks. Even though various studies have shown that deep neural networks can learn transferable feature representations across different datasets [@Long2015LearningTF; @autonomous_nav], Donahue *et al.* [@donahue2013decaf] showed that domain shift still affects the accuracy of deep neural networks when they are tested on a different dataset.
6
+
7
+ <figure id="fig:samle_images" data-latex-placement="htp">
8
+ <img src="domain-adaptation.PNG" />
9
+ <figcaption>Examples of images from different datasets. (a) Some digit images from MNIST <span class="citation" data-cites="726791"></span>, USPS <span class="citation" data-cites="uspsdataset"></span>, MNIST-M <span class="citation" data-cites="ganin2016domainadversarial"></span>, and SVHN <span class="citation" data-cites="37648"></span> datasets. (b) Some object images from the "bird" category in CALTECH <span class="citation" data-cites="1597116"></span>, LABELME <span class="citation" data-cites="Russell2007LabelMeAD"></span>, PASCAL <span class="citation" data-cites="pascal-voc-2007"></span>, and SUN <span class="citation" data-cites="5540221"></span> datasets.</figcaption>
10
+ </figure>
11
+
12
+ A solution to the aforementioned problems is domain adaptation [@10.1016/j.neucom.2015.03.020; @5640675]. The main idea of domain adaptation techniques is to learn how a deep neural network can map the source domain and the target domain into a common feature space, which minimizes the negative influence of domain shift or dataset bias.
13
+
14
+ The adversarial-based adaptation method [@NIPS2014_5ca3e9b1; @8099799] has become a well-known technique among domain adaptation methods. Adversarial adaptation involves two networks, an encoder and a discriminator, trained simultaneously with conflicting objectives. The encoder is trained to encode images from the original domain (source domain) and the new domain (target domain) such that it puzzles the discriminator. In contrast, the discriminator tries to distinguish between the source and target domains. Recently, Adversarial Discriminative Domain Adaptation (ADDA) by Tzeng *et al.* [@8099799] has shown that adversarial adaptation can handle dataset bias and domain shift problems. Building on this, we extend the ADDA method to the semi-supervised learning context by obliging the discriminator network to predict class labels.
15
+
16
+ Semi-supervised learning [@semisupervised] is an approach that builds a predictive model with a small labeled dataset and a large unlabeled dataset. The model must learn from the small labeled dataset and somehow exploit the larger unlabeled dataset to classify new samples. In the context of unsupervised domain adaptation tasks, the semi-supervised learning approach needs to take advantage of the labeled source dataset to map to the unlabeled target dataset, thereby correctly classifying the labels of the target dataset. The Semi-Supervised GAN [@odena2016semisupervised] is designed to handle the semi-supervised learning tasks and inspired us to develop our model.
17
+
18
+ In this paper, we present a novel method called Semi-supervised Adversarial Discriminative Domain Adaptation (SADDA), in which the discriminator is a multi-class classifier. Instead of only distinguishing between source and target images (as in methods like ADDA [@8099799]), the discriminator learns to distinguish N + 1 classes, where N is the number of classes in the classification task and the last class is used to distinguish between the source and the target dataset. The discriminator focuses not only on the domain label of the two datasets but also on the labeled images from the source dataset, which improves the generalization ability of the discriminator and the encoder as well as the classification accuracy.
19
+
20
+ To validate the effectiveness of our methodology, we experiment with domain adaptation tasks on digit datasets, including MNIST [@726791], USPS [@uspsdataset], MNIST-M [@ganin2016domainadversarial], and SVHN [@37648]. In addition, we demonstrate the robustness of the SADDA method using t-SNE visualizations of the digit datasets: SADDA keeps the t-SNE clusters as tight as possible while maximizing the separation between clusters. We also test its potential on a more sophisticated object recognition task with the CALTECH [@1597116], LABELME [@Russell2007LabelMeAD], PASCAL [@pascal-voc-2007], and SUN [@5540221] datasets. In addition, we evaluate our method on natural language processing tasks with three text datasets: Women's E-Commerce Clothing Reviews [@nicapotato_2018], Coronavirus tweets NLP - Text Classification [@miglani_2020], and Trip Advisor Hotel Reviews [@ALAM2016206]. The Python code of the SADDA method for object recognition tasks can be downloaded at <https://github.com/NguyenThaiVu/SADDA>.
21
+
22
+ Our contributions can be summarized as follows:
23
+
24
+ - We propose a new Semi-supervised Adversarial Discriminative Domain Adaptation method (SADDA) for addressing the unsupervised domain adaptation task.
25
+
26
+ - We illustrate that SADDA improves digit classification tasks and achieves competitive performance with other adversarial adaptation methods.
27
+
28
+ - We also demonstrate that the SADDA method can apply to multiple applications, including object recognition and natural language processing tasks.
29
+
30
+ # Method
31
+
32
+ <figure id="fig:semi-adda" data-latex-placement="h">
33
+ <img src="SADDA_summary.PNG" />
34
+ <figcaption>An overview of the SADDA. Firstly, training the source encoder (<span class="math inline"><em>M</em><sub><em>s</em></sub></span>) and the classification (<span class="math inline"><em>C</em><sub><em>s</em></sub></span>) using the source labeled images (<span class="math inline"><em>X</em><sub><em>s</em></sub></span>, <span class="math inline"><em>Y</em><sub><em>s</em></sub></span>). Secondly, training a target encoder (<span class="math inline"><em>M</em><sub><em>t</em></sub></span>) through the domain adversarial process. Finally, in the testing phase, concatenate the target encoder (<span class="math inline"><em>M</em><sub><em>t</em></sub></span>) and the classification (<span class="math inline"><em>C</em><sub><em>s</em></sub></span>) to create the complete model, which will predict the label of the target dataset precisely.</figcaption>
35
+ </figure>
36
+
37
+ In this section, we describe in detail our Semi-supervised Adversarial Discriminative Domain Adaptation (SADDA) method. An overview of our method can be found in Figure [2](#fig:semi-adda){reference-type="ref" reference="fig:semi-adda"}.
38
+
39
+ In the unsupervised domain adaptation task, we are given source images $\mathbf{X}_s$ and source labels $\mathbf{Y}_s$ drawn from the source domain distribution $\mathbf{p}_{s}(x, y)$, as well as a target dataset $\mathbf{X}_t$ drawn from a target distribution $\mathbf{p}_{t}(x, y)$, for which no labels exist. We wish to learn a target encoder $M_t$ and classifier $C_t$ that can accurately predict the labels of the target images. In an adversarial-based adaptation approach, we aim to diminish the distance between the source mapping distribution ($M_{s}(X_s)$) and the target mapping distribution ($M_{t}(X_t)$). As a result, we can directly apply the source classifier $C_s$ to classify the target images; in other words, $C = C_t = C_s$. The SADDA process consists of three steps: pre-training, training the target encoder, and testing.
40
+
41
+ **Pre-training**. In the pre-training phase, we train the source encoder ($M_s$) and source classifier ($C_s$) using the source labeled images ($\mathbf{X}_s$, $\mathbf{Y}_s$). This step is a standard supervised classification task, commonly formulated as:
42
+
43
+ $$\begin{equation}
44
+ \label{supervised_loss}
45
+ \mathop{\mathrm{arg\,min}}_{M_s, C_s} \mathcal{L}_{cls}(\mathbf{X}_s, \mathbf{Y}_s) = - \mathbb{E}_{(x,y){\sim} (\mathbf{X}_s, \mathbf{Y}_s)}
46
+ \sum_{n=1}^{N} y_{_{n}} \log C_s(M_s(x_{_n}))
47
+ \end{equation}$$
48
+
49
+ where $\mathcal{L}_{cls}$ is a supervised classification loss (categorical cross-entropy), and $N$ is the number of classes.
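For a single labeled source sample, the pre-training objective reduces to the familiar cross-entropy between the one-hot label and the classifier output (a minimal NumPy sketch; the example probabilities are illustrative):

```python
import numpy as np

def cross_entropy(probs, onehot):
    """-sum_n y_n log C_s(M_s(x))_n for one labeled source sample."""
    return -np.sum(onehot * np.log(probs))

probs = np.array([0.7, 0.2, 0.1])   # classifier output C_s(M_s(x)), N = 3 classes
onehot = np.array([1.0, 0.0, 0.0])  # ground-truth label y
loss = cross_entropy(probs, onehot)
```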
50
+
51
+ **Training target encoder**. In the training target encoder phase, we first present a training discriminator process and then present a procedure for training the target encoder.
52
+
53
+ Firstly, we train the discriminator ($D$) in two modes, each with a corresponding output: (1) a supervised mode, in which the supervised discriminator ($D_{sup}$) predicts the N labels of the original classification task; and (2) an unsupervised mode, in which the unsupervised discriminator ($D_{unsup}$) classifies between $\mathbf{X}_s$ and $\mathbf{X}_t$. Training the discriminator corresponds to the unconstrained optimizations:
54
+
55
+ $$\begin{equation}
56
+ \label{supervised_discriminator_loss}
57
+ \mathop{\mathrm{arg\,min}}_{D_{sup}} \mathcal{L}_{cls}
58
+ (\mathbf{X}_s, \mathbf{Y}_s) = - \mathbb{E}_{(x,y){\sim} (\mathbf{X}_s, \mathbf{Y}_s)}
59
+ \sum_{n=1}^{N} y_{_{n}}
60
+ \log D_{sup}(M_s(x_{_n}))
61
+ \end{equation}$$
62
+
63
+ $$\begin{equation}
64
+ \label{unsupervised_discriminator_loss}
65
+ \begin{split}
66
+ \mathop{\mathrm{arg\,min}}_{D_{unsup}} \mathcal{L}_{adv_D}
67
+ (\mathbf{X}_s, \mathbf{X}_t, M_s, M_t) =
68
+ - \mathbb{E}_{x_s {\sim} \mathbf{X}_s}
69
+ \log D_{unsup}(M_s(x_s)) \\
70
+ - \mathbb{E}_{x_t {\sim} \mathbf{X}_t}
71
+ \log (1 - D_{unsup}(M_t(x_t)))
72
+ \end{split}
73
+ \end{equation}$$
74
+
75
+ In equation [\[supervised_discriminator_loss\]](#supervised_discriminator_loss){reference-type="ref" reference="supervised_discriminator_loss"}, $\mathcal{L}_{cls}$ is a supervised classification loss corresponding to predicting the N labels of the original classification task on the source dataset ($\mathbf{X}_s$); it updates the parameters of $D_{sup}$. In equation ([\[unsupervised_discriminator_loss\]](#unsupervised_discriminator_loss){reference-type="ref" reference="unsupervised_discriminator_loss"}), $\mathcal{L}_{adv_D}$ is an adversarial loss for the unsupervised discriminator $D_{unsup}$, which trains $D_{unsup}$ to maximize the probability of correctly predicting whether a sample comes from the source or the target dataset. Note that the unsupervised discriminator uses a custom activation function (equation [\[Custom_loss\]](#Custom_loss){reference-type="ref" reference="Custom_loss"}), which returns a probability that determines whether a sample is a source or a target image (see sub-section [3.2](#Sec:discriminator){reference-type="ref" reference="Sec:discriminator"} for details).
76
+
77
+ Secondly, we train the target encoder $M_t$ with the standard loss function and inverted labels [@NIPS2014_5ca3e9b1]. This means that the unsupervised discriminator $D_{unsup}$ is fooled by the target encoder $M_t$; in other words, $D_{unsup}$ is unable to discriminate between $\mathbf{X}_s$ and $\mathbf{X}_t$. The feedback from $D_{unsup}$ allows $M_t$ to learn how to produce a more authentic encoding. The loss $\mathcal{L}_{{adv}_M}$ is given by:
78
+
79
+ $$\begin{equation}
80
+ \label{target_encoder_loss}
81
+ \mathop{\mathrm{arg\,min}}_{M_t} \mathcal{L}_{{adv}_M}
82
+ (\mathbf{X}_s, \mathbf{X}_t, D) =
83
+ - \mathbb{E}_{x_t {\sim} X_t} \log D_{unsup}(M_t(x_t))
84
+ \end{equation}$$
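The two adversarial losses above can be sketched directly in NumPy, with source samples labeled 1.0, target samples labeled 0.0, and inverted labels for the encoder update (a sketch of the loss computation only, not the full alternating training loop; the example discriminator outputs are illustrative):

```python
import numpy as np

def adv_d_loss(d_src, d_tgt):
    """L_adv_D: D_unsup should output 1 on source encodings
    and 0 on target encodings."""
    return -np.mean(np.log(d_src)) - np.mean(np.log(1.0 - d_tgt))

def adv_m_loss(d_tgt):
    """L_adv_M with inverted labels: the target encoder M_t is rewarded
    when D_unsup mistakes its encodings for source-like ones."""
    return -np.mean(np.log(d_tgt))

# D_unsup outputs on a batch of source / target encodings:
d_loss = adv_d_loss(np.array([0.9, 0.9]), np.array([0.1, 0.1]))
m_loss = adv_m_loss(np.array([0.5, 0.5]))
```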
85
+
86
+ **Testing**. In the testing phase, we concatenate the target encoder $M_t$ and the source classifier $C_s$ to predict the label of target images $\mathbf{X}_t$.
87
+
88
+ In this section, we describe the discriminator model in detail and provide arguments for its effectiveness within the SADDA method.
89
+
90
+ In the training target encoder step, the discriminator model is trained to predict N+1 classes, where N is the number of classes in the original classification task (supervised mode) and the final class predicts whether the sample comes from the source or the target dataset (unsupervised mode). The supervised and unsupervised discriminators have different output layers but share the same feature extraction layers; via backpropagation, updating the weights of one model therefore also affects the other.
91
+
92
+ The supervised discriminator model produces N output classes (with a softmax activation function). The unsupervised discriminator is defined such that it takes the output layer of the supervised model *prior to the softmax activation* and computes a normalized sum of the exponential outputs (the custom activation). When training the unsupervised discriminator, source samples are labeled 1.0, while target samples are labeled 0.0. The explicit formula of the custom activation [@salimans2016improved] is:
93
+
94
+ $$\begin{equation}
95
+ \label{Custom_loss}
96
+ D(x) = \frac{Z(x)}{Z(x) + 1}
97
+ \end{equation}$$ where $$\begin{equation}
98
+ Z(x) = \sum_{n=1}^{N} \exp[l_n(x)]
99
+ \end{equation}$$
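Since $Z/(Z+1) = \sigma(\log Z)$, the custom activation can be evaluated stably as a sigmoid of the log-sum-exp of the logits; the following NumPy sketch reproduces the example values discussed below:

```python
import numpy as np

def custom_activation(logits):
    """D(x) = Z(x) / (Z(x) + 1) with Z(x) = sum_n exp(l_n(x)),
    computed as sigmoid(logsumexp(logits)) for numerical stability."""
    m = np.max(logits)
    log_z = m + np.log(np.sum(np.exp(logits - m)))  # log Z(x)
    return 1.0 / (1.0 + np.exp(-log_z))             # Z / (Z + 1)

out_confident = custom_activation(np.array([5.0, 1.0, 1.0]))     # -> ~0.9935
out_uncertain = custom_activation(np.array([-5.0, -5.0, -5.0]))  # -> ~0.0198
```

Large logits (confident, low-entropy predictions) push the output toward 1.0, while uniformly small logits push it toward 0.0.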
100
+
101
+ ::: {#experimental_custom}
102
+ | Output logits (prior to softmax) | Custom activation | Entropy |
+ |----------------------------------|-------------------|---------|
+ | \[9.0, 1.0, 1.0\] | 0.9999 | Low |
+ | \[5.0, 1.0, 1.0\] | 0.9935 | Low |
+ | \[-5.0, -5.0, -5.0\] | 0.0198 | High |
+
+ : Example computations of the custom activation, i.e., the output of the unsupervised discriminator model
109
+ :::
110
+
111
+ The computation of equation ([\[Custom_loss\]](#Custom_loss){reference-type="ref" reference="Custom_loss"}) is illustrated in Table [1](#experimental_custom){reference-type="ref" reference="experimental_custom"}; the outputs lie between $0.0$ and $1.0$. If the largest output value *prior to the softmax activation* is a large number (i.e., low entropy), the custom activation output is close to 1.0. In contrast, if the output values are small (i.e., high entropy), the custom activation output is close to 0.0. This implies that the discriminator is encouraged to output a confident class prediction for source samples while predicting a small probability for target samples. This elegantly allows the same feature extraction layers to be reused for both the supervised and the unsupervised discriminator.
112
+
113
+ It is reasonable to expect that learning a good supervised discriminator will improve the unsupervised discriminator. Moreover, training the discriminator in unsupervised mode allows the model to learn useful feature extraction capabilities from large unlabeled datasets. As a consequence, improving the supervised discriminator improves the unsupervised discriminator and vice versa. Improving the discriminator, in turn, enhances the target encoder [@odena2016semisupervised]. In total, this forms a virtuous circle, in which three elements (unsupervised discriminator, supervised discriminator, and target encoder) iteratively improve each other.
114
+
115
+ In general, training SADDA is a hard process: there are two losses to optimize, the loss for the discriminator and the loss for the target encoder. For that reason, the loss landscape of SADDA is fluctuating and dynamic (detailed in sub-section [4.4](#convergence_analysis){reference-type="ref" reference="convergence_analysis"}). To overcome the limitations of the adversarial process, we present a full architecture for SADDA. This designed architecture increases training stability and prevents non-convergence. In this section, we present the key ideas in designing the model for the image classification and the sentiment classification tasks.
116
+
117
+ **Image classification**. The design of SADDA is inspired by the Deep Convolutional GAN (DCGAN) architecture [@radford2016unsupervised]. The summary architecture of the SADDA method for digit recognition is shown in Figure [5](#fig:sadda-digit){reference-type="ref" reference="fig:sadda-digit"}. On the one hand, the encoder captures the content of the image, increasing the number of filters while decreasing the spatial dimension through convolutional layers. On the other hand, the discriminator mirrors the encoder, expanding the spatial dimension using fractionally-strided convolutions (transposed convolutions).
118
+
119
+ Moreover, our recommendations for efficiently training SADDA on the image classification task are:
120
+
121
+ - Replace any pooling layers with convolution layers (or transposed convolution) with strides larger than 1.
122
+
123
+ - Remove fully connected layers in both encoder and discriminator (except the last fully connected layers, which are used for prediction).
124
+
125
+ - Use ReLU activation [@Maas13rectifiernonlinearities] in the encoder and LeakyReLU activation [@Maas13rectifiernonlinearities] (with alpha=0.2) in the discriminator.
126
+
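These recommendations can be sketched as a minimal PyTorch example (the 28×28 single-channel input, layer widths, and 10-class head are our illustrative assumptions, not specified by the paper):

```python
import torch
import torch.nn as nn

# Encoder: strided convolutions instead of pooling, ReLU, no hidden FC layers.
encoder = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=4, stride=2, padding=1),   # 28x28 -> 14x14
    nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),  # 14x14 -> 7x7
    nn.ReLU(),
)

# Discriminator: transposed convolutions mirroring the encoder,
# LeakyReLU(alpha=0.2), and a single final FC layer for prediction.
discriminator = nn.Sequential(
    nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),  # 7x7 -> 14x14
    nn.LeakyReLU(0.2),
    nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1),  # 14x14 -> 28x28
    nn.LeakyReLU(0.2),
    nn.Flatten(),
    nn.Linear(16 * 28 * 28, 10),  # N-class logits (prior to softmax)
)

features = encoder(torch.zeros(1, 1, 28, 28))
logits = discriminator(features)
```

With kernel 4, stride 2, and padding 1, each convolution exactly halves (and each transposed convolution exactly doubles) the spatial size, so no pooling layer is needed.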
127
+ **Sentiment classification**. The design of the SADDA method for sentiment classification is inspired by the Autoencoder LSTM architecture [@https://doi.org/10.48550/arxiv.1502.04681; @brownlee_2020]. The summary architecture of the SADDA method for sentiment classification is demonstrated in Figure [7](#fig:sadda-nlp){reference-type="ref" reference="fig:sadda-nlp"}. In general, the architecture used in the sentiment classification task has many similarities with the architecture used in image classification. Firstly, we remove fully connected layers in both the encoder and the discriminator. Instead, we use Long Short-Term Memory [@article_lstm] (LSTM) layers to handle sequences of text data. Secondly, the network is organized into an architecture called the Encoder-Decoder LSTM, with the Encoder LSTM being the encoder block and the Decoder LSTM being the discriminator block, respectively. The Encoder-Decoder LSTM was built for NLP tasks, where it demonstrated state-of-the-art performance, for example in machine translation [@cho-etal-2014-learning]. Empirically, we find that the Encoder-Decoder architecture is also well suited to the unsupervised domain adaptation task.
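As a rough illustration of this Encoder-Decoder arrangement, here is a minimal PyTorch sketch (vocabulary size, dimensions, and the classification head are our assumptions, not the paper's exact configuration):

```python
import torch
import torch.nn as nn

class EncoderDecoderLSTM(nn.Module):
    """Encoder LSTM compresses a token sequence into a fixed state;
    the decoder LSTM plays the role of the discriminator block."""

    def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=128, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.classify = nn.Linear(hidden_dim, num_classes)

    def forward(self, tokens):
        x = self.embed(tokens)                        # (B, T, E)
        _, (h, c) = self.encoder(x)                   # h: (1, B, H)
        # Repeat the encoded state as decoder input at every time step.
        repeated = h[-1].unsqueeze(1).repeat(1, tokens.size(1), 1)
        out, _ = self.decoder(repeated, (h, c))       # (B, T, H)
        return self.classify(out[:, -1])              # logits prior to softmax

model = EncoderDecoderLSTM()
logits = model(torch.zeros(4, 12, dtype=torch.long))
```

Note how the encoder's final hidden state, rather than a fully connected bottleneck, carries the sequence representation between the two blocks.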
2110.00280/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2110.00280/paper_text/intro_method.md ADDED
@@ -0,0 +1,161 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Human pose estimation is a vision task of detecting the keypoints that represent a standard set of human joints. The area is extremely competitive, especially due to the advances in deep learning. Pose estimation is particularly important for applications in medicine, the fashion industry, anthropometry, and entertainment [@a-review-of-body-measurement]. In this work, we focus on 3D human pose estimation from multiple views in a single time frame.
4
+
5
+ <figure id="fig:transfer-learning" data-latex-placement="t!">
6
+ <embed src="Figure/transfer-learning-fig.pdf" />
7
+ <figcaption>We propose a stochastic framework for human pose triangulation from multiple views and demonstrate its successful generalization across different camera arrangements, their number, and different public datasets. The upper two and the lower left images show different camera arrangements and numbers of cameras on the CMU Panoptic Studio dataset <span class="citation" data-cites="cmu-panoptic"></span>. The lower right part shows the Human3.6M’s 4-camera arrangement <span class="citation" data-cites="h36m"></span>.</figcaption>
8
+ </figure>
9
+
10
+ The common approach to multi-view pose estimation is to (1) detect correspondent 2D keypoints in each view using pretrained pose detector [@simple-baselines; @openpose; @cpm], and then (2) triangulate [@learnable-triangulation; @cross-view-fusion; @epipolar-transformers; @lightweight-multi-view; @generalizable-approach; @rethinking-pose-in-3d]. A naive approach takes 2D detections as they are and applies triangulation from all available views. Due to the variety of poses and self-occlusions, some views contain erroneous detections, which should be ignored or their influence mitigated in the triangulation process. One way to ignore the erroneous detections is to apply RANSAC [@ransac], marking the keypoints whose reprojection errors are above a threshold as outliers [@multiview-bootstrapping; @epipolar-transformers]. The problem with vanilla RANSAC is that it is non-differentiable, so the gradients are not back-propagated, which disables end-to-end learning. Most of the state-of-the-art 3D pose estimation approaches extract 2D image features, such as heatmaps, from multiple views, and combine them for 3D elevation in an end-to-end fashion [@learnable-triangulation; @lightweight-multi-view; @cross-view-fusion]; we refer to those approaches as the *learnable triangulation approaches*.
11
+
12
+ Due to a mostly-fixed set of cameras during training, the learnable triangulation approaches are often limited to a single camera arrangement and number of cameras. Several works attempt to generalize outside the training data [@lightweight-multi-view; @learnable-triangulation; @generalizable-approach; @epipolar-transformers; @view-invariant-probabilistic-embedding; @adaptively-multi-view-transformer; @voxelpose], but the demonstrated performance on novel views is significantly lower than on the original (base) views.
13
+
14
+ Inspired by stochastic learning [@stochastic-computation-graphs] and its applications in computer vision [@dsac; @less-is-more; @ng-ransac], we propose *generalizable triangulation* of human pose. First, we generate a pool of random hypotheses. A hypothesis is a 3D pose where the points are obtained by triangulating a random subset of views for each joint separately. Each generated hypothesis passes through a scoring neural network. The loss function is an expectation of the triangulation error, i.e. $\mathbb{E} (h_i) = \sum_i e_i s_i$, where $e_i$ is the error of the hypothesis $h_i$ and $s_i$ is the hypothesis score. By minimizing the error expectation, the model learns the distribution of hypotheses. The key idea is to learn to evaluate 3D pose hypotheses without considering the spatial camera arrangement used for triangulation.
15
+
16
+ The proposed approach has several practical advantages over the previous methods. First, we demonstrate its consistent generalization performance across different camera arrangements on two public datasets, Human3.6M [@h36m] and Panoptic Studio [@cmu-panoptic] (see Fig. [1](#fig:transfer-learning){reference-type="ref" reference="fig:transfer-learning"}). Second, we show that the proposed model learns a human pose prior and define a novel metric for pose prior evaluation. Finally, we apply the same stochastic approach to the problem of fundamental matrix estimation from noisy 2D detections and compare it to the standard $8$-point algorithm, showing that the proposed framework successfully applies to computer vision problems other than human pose triangulation.
17
+
18
+ # Method
19
+
20
+ <figure id="fig:method" data-latex-placement="t!">
21
+ <embed src="Figure/method-inkscape.pdf" />
22
+ <figcaption>An overview of our method. Before stochastic learning, 2D keypoints, <span class="math inline"><em>y</em></span>, are extracted. In each frame, the hypothesis pool, <span class="math inline"><em>h</em><sub><em>i</em></sub> ∈ <strong>H</strong></span>, is generated, and the poses are passed through the scoring network, <span class="math inline"><em>f</em><sub><em>S</em></sub></span>. The hypothesis <span class="math inline"><em>ĥ</em><sub><em>i</em></sub></span> is selected based on the estimated scores <span class="math inline"><em>s</em><sub><em>i</em></sub></span>. Finally, the total loss, <span class="math inline"><em>l</em><sub><em>total</em></sub></span>, consists of three components (<span class="math inline"><em>l</em><sub><em>stoch</em></sub></span>, <span class="math inline"><em>l</em><sub><em>entropy</em></sub></span>, <span class="math inline"><em>l</em><sub><em>est</em></sub></span>), and is calculated with respect to the ground truth, <span class="math inline"><em>h</em><sup>*</sup></span>.</figcaption>
23
+ </figure>
24
+
25
+ We first describe the generic stochastic framework, and then describe it more specifically for generalizable pose triangulation and fundamental matrix estimation. The framework consists of several steps, shown in Fig. [2](#fig:method){reference-type="ref" reference="fig:method"}:
26
+
27
+ 1. **Pre-training.** Prior to stochastic learning, the 2D poses (keypoints) are extracted for all images in the dataset. In all our experiments, we use the keypoints extracted using a baseline model [@simple-baselines] pretrained on the Human3.6M dataset. The input to the stochastic model, therefore, consists only of keypoint detections, $\mathbf{y}$. In each frame, $J \times K$ keypoints are detected, where $J$ is the number of joints, and $K$ is the number of views.
28
+
29
+ 2. **Hypothesis generation, $\mathbf{H}$**. As it is possible to generate an extremely large number of hypotheses, only a subset of random hypotheses is created. Following [@stochastic-computation-graphs] and [@dsac], we model the hypothesis generation step as a *stochastic* node.
30
+
31
+ 3. **Hypothesis scoring, $\mathbf{f_{S}}$**. Each generated hypothesis $h_i \in {\mathbf{H}}$ is scored using a scoring function, $f_S (h_i | \mathbf{y}) = s_i$. The scoring function is a neural network, i.e. a multi-layer perceptron. The network architectures for 3D pose triangulation and fundamental matrix estimation differ and are specified at the end of the Sec. [4](#sec:experiments){reference-type="ref" reference="sec:experiments"}. The network is the only learnable part of our model. The estimated scores $s_i$, passed through the Gumbel-Softmax, $\sigma_{GS}(s_i)$ (Eq. [\[eq:gumbel-softmax\]](#eq:gumbel-softmax){reference-type="ref" reference="eq:gumbel-softmax"}), represent the estimated probability distribution of the hypotheses $\mathbf{H}$, $\mathbb{\theta_{\mathbf{H}}}$.
32
+
33
+ 4. **Hypothesis selection, $\hat{h}_i$**. We experiment with several hypothesis selection strategies. The one that works the best for us is the weighted average of all hypotheses:
34
+
35
+ $$\begin{equation}
36
+ \label{eq:weighted}
37
+ \hat{h}_{\textit{weight}} = \sum_i s_i h_i, \quad \sum_i s_i = 1, \quad h_i \in \mathbf{H},
38
+ \end{equation}$$
39
+
40
+ where the scores $s_i$ are used as weights. We also try other strategies, such as the stochastic selection:
41
+
42
+ $$\begin{equation}
43
+ \label{eq:stoch}
44
+ \hat{h}_{\textit{stoch}} = h_i, \quad \text{with} \quad i \sim \theta_{\mathbf{H}},
45
+ \end{equation}$$
46
+
47
+ where hypothesis $h_i$ is selected based on the estimated distribution $\theta_{\mathbf{H}}$. As shown in Sec. [4](#sec:experiments){reference-type="ref" reference="sec:experiments"}, the stochastic selection performs worse than the weighted, in contrast to [@dsac].
48
+
49
+ 5. **Loss calculation, $l_{total}$**. The loss function consists of several components:
50
+
51
+ 1. Stochastic loss. Following [@dsac], we calculate our stochastic loss as an expectation of the error over all hypotheses, $l_{\textit{stoch}} = \mathbb{E} (e_{\mathbf{H}}) = \sum_i e (h_i, h^*) s_i$, where $e(h_i, h^*)$ is the error of hypothesis $h_i$ with respect to the ground truth, $h^*$, and $s_i$ represents the probability that this error is minimal.
52
+
53
+ 2. Entropy loss. Score estimations $s_i$ tend to quickly converge to zero. To stabilize the estimation values, we follow [@less-is-more] and minimize an entropy function, $l_{\textit{entropy}} = -\sum_i s_i \log(s_i)$.
54
+
55
+ 3. Estimation loss. We define it as the error of the selected hypothesis with respect to the ground 3D pose, $l_{\textit{est}} = e_i(\hat{h}_i, h^*)$. The estimation loss, in the case of generalizable pose triangulation, is most similar to the standard 3D pose estimation loss, used by the competing approaches [@learnable-triangulation; @lightweight-multi-view; @cross-view-fusion; @rethinking-pose-in-3d; @generalizable-approach].
56
+
57
+ Finally, the total loss is a sum of the three components, $l_{total} = \alpha \, l_{\textit{stoch}} + \beta \, l_{\textit{entropy}} + \gamma \, l_{\textit{est}}$, where $\alpha$, $\beta$, and $\gamma$ are fixed hyperparameters that regulate relative values between the components.
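Under the simplifying assumption of scalar hypotheses (real pose hypotheses are sets of 3D joints, but the arithmetic is identical per coordinate), the weighted selection and the three loss components can be sketched in plain Python:

```python
import math

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def weighted_selection(hypotheses, scores):
    # Weighted average: h_weight = sum_i s_i * h_i, with sum_i s_i = 1.
    return sum(s * h for s, h in zip(scores, hypotheses))

def total_loss(hypotheses, scores, ground_truth, error,
               alpha=1.0, beta=1.0, gamma=1.0):
    errors = [error(h, ground_truth) for h in hypotheses]
    l_stoch = sum(e * s for e, s in zip(errors, scores))        # E(e_H)
    l_entropy = -sum(s * math.log(s) for s in scores if s > 0)  # stabilizer
    l_est = error(weighted_selection(hypotheses, scores), ground_truth)
    return alpha * l_stoch + beta * l_entropy + gamma * l_est

scores = softmax([2.0, 0.0, 0.0])
loss = total_loss([1.0, 3.0, 5.0], scores, 1.2,
                  error=lambda h, g: abs(h - g))
```

The `error` callable stands in for the task-specific error (MPJPE for pose triangulation); `alpha`, `beta`, and `gamma` are the fixed weighting hyperparameters.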
58
+
59
+ In order for the estimated scores $s_i$ to represent the probabilities, their values need to be normalized into the $[0, 1]$ range. The standard way to normalize the output values is to apply the softmax function, $\sigma(s_i) = \frac{\exp{s_i}}{\sum_j \exp{s_j}}$. To avoid early convergence, we use the Gumbel-Softmax function [@gumbel-softmax; @concrete-distribution]:
60
+
61
+ $$\begin{equation}
62
+ \label{eq:gumbel-softmax}
63
+ \sigma_{GS}(s_i) = \frac{\exp ((\log{s_i} + g_i) / \tau)}
64
+ {\sum_{j=1}^k \exp ((\log{s_j} + g_j) / \tau)},
65
+ \end{equation}$$
66
+
67
+ where $\tau$ is a temperature parameter, and $g_i$ are samples drawn from the *Gumbel*(0, 1) distribution [@a-star-sampling]. The temperature $\tau$ regulates the broadness of the distribution. For lower temperatures ($\tau < 1$), the influence of lower-score hypotheses is limited compared to higher-score hypotheses, and vice versa. The purpose of the *Gumbel*(0, 1) noise is to perturb each sample while retaining the original distribution, which allows the model to be more flexible in the hypothesis selection.
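A plain-Python sketch of the Gumbel-Softmax above, assuming positive scores $s_i$ so that $\log s_i$ is defined:

```python
import math
import random

def gumbel_softmax(scores, tau=1.0):
    """Gumbel-Softmax over positive scores s_i with temperature tau."""
    logits = []
    for s in scores:
        # g ~ Gumbel(0, 1) via inverse transform; clamp u into (0, 1).
        u = min(max(random.random(), 1e-12), 1.0 - 1e-12)
        g = -math.log(-math.log(u))
        logits.append((math.log(s) + g) / tau)
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = gumbel_softmax([0.5, 0.3, 0.2], tau=0.5)
```

Each call returns a valid probability vector, randomly perturbed around the plain softmax of the scores; smaller `tau` sharpens it toward a one-hot vector.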
68
+
69
+ We now describe the stochastic framework specifically for learning human pose triangulation.
70
+
71
+ **Pose generation.** The 3D human pose hypothesis, $h_i \in \mathbf{H}$, is generated in the following way. For each joint $k$, a subset of views, $\mathbf{v}_k$, is randomly selected. The detections from the selected views are triangulated to produce a 3D joint.
72
+
73
+ **Pose normalization.** The input to the pose scoring network, $f_{S, \textit{pose}}$, is the 3D pose coordinates, $\mathbf{p}$, normalized in the following way --- we select three points, the left and right shoulders and the pelvis (between the hips), compute the rotation between the normal of the plane given by these three points and the normal of the $xy$-plane, and apply that rotation to all coordinates. In addition to the 3D pose coordinates, we also extract 16 body part lengths, given by all adjacent joints, e.g. left lower arm, left upper arm, left shoulder, etc. Finally, we concatenate the normalized 3D coordinates and the body part lengths into a 1D vector and pass it through the network. The output is a scalar, $s_i$, representing the score of the hypothesis $h_i$.
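The plane-alignment rotation can be sketched as follows (pure Python; the helper names are ours, and the paper leaves the exact linear algebra implicit):

```python
import math

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def unit(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def rotation_to_z(normal):
    """Rotation matrix sending `normal` onto the xy-plane normal (0, 0, 1)."""
    n = unit(normal)
    v, c = cross(n, [0.0, 0.0, 1.0]), n[2]  # v = n x z (axis * sin), c = cos
    vx = [[0.0, -v[2], v[1]], [v[2], 0.0, -v[0]], [-v[1], v[0], 0.0]]
    # Rodrigues' formula: R = I + [v]_x + [v]_x^2 / (1 + c), valid for c != -1.
    R = [[float(i == j) for j in range(3)] for i in range(3)]
    for i in range(3):
        for j in range(3):
            vx2 = sum(vx[i][k] * vx[k][j] for k in range(3))
            R[i][j] += vx[i][j] + vx2 / (1.0 + c)
    return R

def rotate(R, p):
    return [sum(R[i][j] * p[j] for j in range(3)) for i in range(3)]

# The normal of the shoulders/pelvis plane is mapped onto the z-axis,
# and the same rotation is then applied to every joint of the pose.
R = rotation_to_z([1.0, 0.0, 0.0])
```

Applying `rotate(R, joint)` to each joint yields the normalized pose, independent of the subject's global orientation.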
74
+
75
+ **Pose estimation error.** The pose estimation error, $e_i(\hat{h}_i, h^*)$, is the mean per-joint position error (MPJPE) [@h36m] between the estimated 3D pose, $\hat{\mathbf{p}}_i$, and the ground truth, $\mathbf{p}^*$:
76
+
77
+ $$\begin{equation}
78
+ \label{eq:mpjpe}
79
+ e_i(\hat{h}_i, h^*) = e_i(\hat{\mathbf{p}}_i, \mathbf{p}^*) = \frac{1}{J} \sum_{k=1}^{J} ||\hat{p}_{ik} - p_{k}^*||_2,
80
+ \end{equation}$$
81
+
82
+ where $\hat{p}_{ik}$ is the $k$-th keypoint of the $i$-th estimated pose.
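A minimal implementation of this error (requires Python 3.8+ for `math.dist`):

```python
import math

def mpjpe(estimated, ground_truth):
    """Mean per-joint position error between two 3D poses (lists of xyz)."""
    assert len(estimated) == len(ground_truth)
    return sum(
        math.dist(p_hat, p_star)
        for p_hat, p_star in zip(estimated, ground_truth)
    ) / len(estimated)

# Identical poses give zero error; a unit offset on every joint gives 1.0.
pose = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]
shifted = [(x + 1.0, y, z) for x, y, z in pose]
print(mpjpe(pose, pose), mpjpe(shifted, pose))  # 0.0 1.0
```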
83
+
84
+ We describe how to learn fundamental matrix estimation between the pairs of cameras using the proposed stochastic framework. The fundamental matrix describes the relationship between the two views via $x_2^{\top} F x_1 = 0$, where $x_1$ and $x_2$ are the corresponding 2D points in the first (target) and the second (reference) view. From the fundamental matrix, relative rotation and translation (the relative camera pose) between the views can be obtained [@zisserman].
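The constraint $x_2^{\top} F x_1 = 0$ can be checked numerically; in the sketch below, $F = [t]_\times$ is the fundamental matrix of an illustrative pure-translation pair of identical cameras (our assumption, not a configuration from the paper):

```python
def epipolar_residual(F, x1, x2):
    """|x2^T F x1| for homogeneous 2D points x1, x2 (3-vectors)."""
    Fx1 = [sum(F[i][j] * x1[j] for j in range(3)) for i in range(3)]
    return abs(sum(x2[i] * Fx1[i] for i in range(3)))

# Two identical cameras related by a translation t = (0, 0, 1): F = [t]_x.
F = [[0.0, -1.0, 0.0],
     [1.0,  0.0, 0.0],
     [0.0,  0.0, 0.0]]
print(epipolar_residual(F, [1.0, 0.0, 1.0], [1.0, 0.0, 1.0]))  # 0.0: consistent
print(epipolar_residual(F, [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]))  # 1.0: inconsistent
```

A noisy correspondence yields a nonzero residual, which is what makes robust estimation of $F$ necessary.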
85
+
86
+ **Hypothesis generation.** The relative camera pose hypothesis, $h_i$, is generated in a slightly different way than the 3D pose hypothesis. The number of points required to determine the fundamental matrix is $8$ when the $8$-point algorithm is used [@8-point]. However, in the presence of noise, the required number of points is usually much higher. Instead of using a single time frame as in pose triangulation, we select the keypoints from $M$ frames, for a total of $M*J$ individual point correspondences. The camera hypothesis $h_i$ is obtained using a subset of $T<M*J$ correspondences, passed through the $8$-point algorithm. The result of the $8$-point algorithm is four possible rotation-translation pairs; we select the correct one in the standard way [@zisserman].
87
+
88
+ **Input preparation.** The input to the camera pose scoring network, $f_{S, \textit{cam}}$, is the set of distances between the corresponding projected rays. The rays are obtained using the reference camera parameters, $(R_{ref}, t_{ref})$, and the estimated relative camera pose, $(R_{rel,i}, t_{rel,i})$. To achieve permutation invariance between the line distances on the input, we simply sort the values before passing them through the network.
89
+
90
+ **Hypothesis selection.** The camera pose hypothesis, $\hat{h}_{\textit{weight}}$, is selected as the weighted average of the rotations[^1] and the weighted average of the translations of all hypotheses.
91
+
92
+ **Estimation error.** The hypothesis estimation error, $e_i$, is calculated as:
93
+
94
+ $$\begin{equation}
95
+ \label{eq:reprojection-3d-error}
96
+ e_i(\hat{h}_i, h^*) = e_i(\hat{\mathbf{X}}_i, \mathbf{X}^*) = ||\hat{\mathbf{X}}_i - \mathbf{X}^*||_2
97
+ \end{equation}$$
98
+
99
+ where $\mathbf{X}^*$ are random 3D points (used as ground truth), and $\hat{\mathbf{X}}_i$ are 3D points obtained by projecting the points $\mathbf{X}^*$ to 2D planes, using the estimated parameters, $(\hat{R}_{i}, \hat{t}_{i})$, and then projected back to 3D. More specifically, using the estimated target projection matrix, $\hat{P}_i = K_i [\hat{R}_i | \hat{t}_i]$, and the reference projection matrix, $P_{ref} = K_{ref} [R_{ref} | t_{ref}]$, the points $\mathbf{X^*}$ are first projected to 2D, $\hat{\mathbf{x}}_i = \hat{P}_i \mathbf{X^*}$, and then triangulated using $P_{\textit{ref}}$ and $\hat{P}_i$. The intrinsic matrices $K_i$ are assumed to be known for all cameras.
100
+
101
+ ::: table*
102
+ +-------+---------------------+-------------+---------------------+-------------+---------------------+------------------------+
103
+ | Train | CMU1 | CMU2 | CMU3 | CMU4 | H36M | Max diff. $\downarrow$ |
104
+ +:=====:+:========:+:========:+:====:+:====:+:========:+:========:+:====:+:====:+:========:+:========:+:======================:+
105
+ | Test | CMU1 | 25.8 | CMU1 | 25.8 | CMU1 | 25.6 | CMU1 | 25.2 | CMU1 | 25.6 | 2.3% |
106
+ | +----------+----------+------+------+----------+----------+------+------+----------+----------+------------------------+
107
+ | | CMU2 | 25.4 | CMU2 | 26.0 | CMU2 | 25.5 | CMU2 | 25.6 | CMU2 | 25.9 | 2.4% |
108
+ | +----------+----------+------+------+----------+----------+------+------+----------+----------+------------------------+
109
+ | | CMU3 | 24.9 | CMU3 | 26.0 | CMU3 | 25.0 | CMU3 | 25.0 | CMU3 | 25.7 | 4.4% |
110
+ | +----------+----------+------+------+----------+----------+------+------+----------+----------+------------------------+
111
+ | | CMU4 | 25.1 | CMU4 | 25.6 | CMU4 | 25.3 | CMU4 | 25.1 | CMU4 | 25.5 | **2.0%** |
112
+ | +----------+----------+------+------+----------+----------+------+------+----------+----------+------------------------+
113
+ | | **H36M** | **33.5** | H36M | 33.4 | **H36M** | **31.0** | H36M | 32.5 | **H36M** | **29.1** | 15.1% |
114
+ +-------+----------+----------+------+------+----------+----------+------+------+----------+----------+------------------------+
115
+
116
+ []{#tab:generalization-evaluation label="tab:generalization-evaluation"}
117
+ :::
118
+
119
+ ::: {#tab:iskakov-comparison}
120
+ +---------------------------------------------------------------------+
121
+ | CMU $\rightarrow$ H3.6M |
122
+ +:=========:+:=========================================:+:===========:+
123
+ | **Ours** | Iskakov et al. [@learnable-triangulation] | Improvement |
124
+ +-----------+-------------------------------------------+-------------+
125
+ | 31.0 mm | 34.0 mm | **8.8%** |
126
+ +-----------+-------------------------------------------+-------------+
127
+
128
+ : The evaluation of generalization performance from CMU Panoptic Studio [@cmu-panoptic] to Human3.6M dataset [@h36m], compared to the volumetric approach of Iskakov et al. [@learnable-triangulation]. The proposed approach achieves 8.8% better performance on H3.6M compared to [@learnable-triangulation], when trained on a 4-camera CMU3 dataset (see Table [\[tab:generalization-evaluation\]](#tab:generalization-evaluation){reference-type="ref" reference="tab:generalization-evaluation"}).
129
+ :::
130
+
131
+ []{#tab:iskakov-comparison label="tab:iskakov-comparison"}
132
+
133
+ ::: {#tab:remelli-comparison}
134
+ +-----------------------------------------------------------------------------------------------------------+
135
+ | Intra-dataset |
136
+ +:=========================================================:+:=========:+:==========:+:====================:+
137
+ | Method (train dataset) | Base test | Novel test | Diff. $\downarrow$ |
138
+ +-----------------------------------------------------------+-----------+------------+----------------------+
139
+ | Remelli et al. [@lightweight-multi-view] (TC1)$^\dagger$ | 27.5 mm | 38.2 mm | 38.9% |
140
+ +-----------------------------------------------------------+-----------+------------+----------------------+
141
+ | Remelli et al. [@lightweight-multi-view] (TC1)$^\ddagger$ | 39.3 mm | 48.2 mm | 22.6% |
142
+ +-----------------------------------------------------------+-----------+------------+----------------------+
143
+ | **Ours** (CMU1) | 24.9 mm | 25.8 mm | 3.6% |
144
+ +-----------------------------------------------------------+-----------+------------+----------------------+
145
+ | **Ours** (CMU3) | 25.0 mm | 25.6 mm | 2.4% |
146
+ +-----------------------------------------------------------+-----------+------------+----------------------+
147
+ | **Ours** (CMU4) | 25.0 mm | 25.6 mm | 2.4% |
148
+ +-----------------------------------------------------------+-----------+------------+----------------------+
149
+ | **Ours** (CMU2) | 25.6 mm | 26.0 mm | **1.6%** |
150
+ +-----------------------------------------------------------+-----------+------------+----------------------+
151
+ | Inter-dataset |
152
+ +-----------------------------------------------------------+-----------+------------+----------------------+
153
+ | Method (train dataset) | H36M | CMU1 | Diff. $\downarrow$ |
154
+ +-----------------------------------------------------------+-----------+------------+----------------------+
155
+ | **Ours** (H36M) | 29.1 mm | 33.5 mm | **15.1%** |
156
+ +-----------------------------------------------------------+-----------+------------+----------------------+
157
+
158
+ : The evaluation of generalization performance compared to Remelli et al. [@lightweight-multi-view] (lower is better). We measure the performance drop between the base test set and the novel test set for intra-dataset and inter-dataset configurations. Note that we do not compare on the same datasets, so we only measure the relative drop in percentages. Still, our approach demonstrates a significantly smaller performance drop compared to the competing method in all setups. The $\dagger$ presents the canonical fusion, and the $\ddagger$ presents the baseline approach in [@lightweight-multi-view].
159
+ :::
160
+
161
+ []{#tab:remelli-comparison label="tab:remelli-comparison"}
2111.05011/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2021-09-28T14:53:15.912Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.82 Safari/537.36" etag="5tF_ITI-FIJbAlh8fSDH" version="15.3.3" type="device"><diagram id="bSxaVCUZvFbmRVomfmju" name="Page-1">7VrLcpswFP0aL9vhYTBe1k6aLNpOWy/aLmUkQIlAjBA29OsrjDBgYdeZQEziOItIVw+kc+65V8KemMswu2MgDr5SiMjE0GA2MW8mhqFPDVv8Kyx5aXEqg88wlJ1qwwr/RdKoSWuKIUpaHTmlhOO4bXRpFCGXt2yAMbptd/MoaT81Br58olYbVi4gSOn2C0MeVLuY1fZ7hP2gerJuz8uWEFSd5cRJACDdNkzm7cRcMkp5WQqzJSIFeBUu5bjPR1r3C2Mo4ucMWIY/iJnPHzaOAe/dZLW+/0k+GKZcHM+rHSMoAJBVynhAfRoBcltbF4ymEUTFtJqo1X2+UBoLoy6MD4jzXLIJUk6FKeAhka1ixSz/XYz/aFXVP3K6XeUma9VyWVO3LFFIaMpcdGKf+lT6DmA+4qcAkTMWKDQeISG9QzREYkWiA0MEcLxpuwmQ3ubv+9WEiILk5An8VOveAJLKR23BBnmUhRPDJmInizUTJb8oCQ1sFDprsgrktwHmaBWDHVhbIdk2MR4mZEkJZbuxpue4yHWFPeGMPqJGy9qxptZJSjaIcZSdhFC2Tm2pEBkiDE3Wtw3BSVPQ0JqlDQR6Mf6SomhJolZI/6KwXqkoLEUUhKYwQknyEqLwkN0tCjibr7WhRGFfWhT2mDKFPpwo7HNFMR2XKGxFFBHFCSr2nEc8QInAmPWsBM84ogR7bVv2MEowjQsrQdcVpE/Aqj0dVqv464LV3n2KETTiDXv56QdufX4QeEwV7v2Zton3fDC8FbjTOAFhTHDkq/GegLxnN4cAOV6nm9uug9ZeP7jvbzxjCfi6cx2nIOPMeK+PLN7P3+npOhSOhJ79rf+dntbxZCz0KDmFiQMSTAFRM0rCgfv4CjOKObaMYl2HJKrXlP+9QJypiIps7cjBrP/UciWxyzk3do2VKEeJYiJ2FjAUYTQWEU2URTClkcKnCCG8DX87FEU0QgdxS5oAwX4kqq6YWxy0zUURkLALyCfZEGIIdx7RFRPbXtJDkFPAnllKkJt1BDljMPGoqSUT1W9vlwJzNu329wYF05ekwLiSRGOcnWnGdXUx1Bt9pjJGCI6TY07cwB0kcfnFn4ezgr8+XNo6fPvaEVW63oEM59KzS7j0M1yzd4+TQ79TvMtvkijbOiDKsdpTlNqQow5I2C/jGbyo389NjMW4nNk+I0W+rDOr5xaQQkyFiab87SZKxVmHO6uIav3LgtLZ699nmLf/AA==</diagram></mxfile>
2111.05011/main_diagram/main_diagram.pdf ADDED
Binary file (13.8 kB). View file
 
2111.05011/paper_text/intro_method.md ADDED
@@ -0,0 +1,119 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Deep learning applied to audio signals proposes exciting new ways to perform speech generation, musical composition and sound design. Recent works in deep audio modelling have allowed novel types of interaction such as unconditional generation [@Chung2015AData; @Fraccaro2016SequentialLayers; @Oord2016WaveNet:Audio; @Vasquez2019MelNet:Domain; @Dhariwal2020Jukebox:Music] or timbre transfer between instruments [@Mor2018ANetwork]. However, these approaches remain computationally intensive, as modeling raw audio waveforms requires dealing with extremely large temporal dimensionality. To cope with this issue, previous approaches usually rely on very low sampling rates (16 to 24kHz), which can be sufficient for speech signals but is considered low-quality for musical applications. Furthermore, the auto-regressive sampling procedure used by most models [@Engel2017NeuralAutoencoders] is prohibitively slow. This precludes the real-time applications that are pervasive in musical creation, while parallel models [@Defossez2018SING:Generator] only allow fast generation at the cost of lower sound quality.
4
+
5
+ More recently, [@Engel2019DDSP:Processing] and [@Wang2019NeuralSynthesis] proposed to leverage classical synthesis techniques to address these limitations, by relying on pre-computed audio descriptors as extraneous information to condition the models. While these approaches achieve state-of-the-art results in terms of audio quality, naturalness and computational efficiency, the extensive use of audio descriptors highly restricts the type of signals that can be generated. A possible solution to alleviate this issue would be to rely on Variational Autoencoders [@Kingma2014Auto-encodingBayes], as they provide a form of trainable analysis-synthesis framework [@Esling2018GenerativeMetrics], without explicit restrictions on the type of features learned. However, estimating the dimensionality of the latent representation associated with a given dataset prior to model training is far from trivial. Indeed, a wrong estimation of the latent dimensionality may result in either poor reconstruction or uninformative latent dimensions, which makes latent exploration and manipulation difficult.
6
+
7
+ In this paper, we overcome the limitations outlined above by proposing a VAE model built specifically for fast and high-quality audio synthesis. To do so, we introduce a specific two-stage training procedure where the model is first trained as a regular VAE for *representation learning*, then fine-tuned with an *adversarial generation* objective in order to achieve high quality audio synthesis. We combine a multi-band decomposition of the raw waveform alongside classical synthesis blocks inspired by [@Engel2019DDSP:Processing], allowing us to achieve high-quality audio synthesis with sampling rates going up to 48kHz without a major increase in computational complexity. We show that our model is able to converge on complex datasets using a low number of parameters, achieving state-of-the-art results in terms of naturalness and audio quality, while being usable in real-time on a standard laptop CPU. We compare our model with several state-of-the-art models and show its superiority in unsupervised audio modeling. In order to address the dimensionality of the learned representation, we introduce a novel method to split the latent space between *informative* and *uninformative* parts using a singular value decomposition, and show that replacing the latter part with random noise does not affect the reconstruction quality. This procedure allows easier exploration and manipulation of latent trajectories, since we only need to operate on a subset of informative latent dimensions. Finally, we discuss the application of our model in *signal compression* and *timbre style transfer*. Our key contributions are:
8
+
9
+ - A two-stage training procedure where the model is first trained as a regular VAE, then fine-tuned with an adversarial generation objective, as depicted in figure [1](#fig:overall_training){reference-type="ref" reference="fig:overall_training"}
10
+
11
+ - A post-training analysis of the latent space providing a way to balance between reconstruction fidelity and representation compactness
12
+
13
+ - High-quality audio synthesis models with sampling rates going up to 48kHz
14
+
15
+ - Synthesis 20 times faster than real-time on a standard laptop CPU
16
+
17
+ Audio samples and supplementary figures are provided in the accompanying website[^1]. We highly encourage readers to listen to accompanying samples while reading the paper.
18
+
19
+ <figure id="fig:overall_training" data-latex-placement="ht">
20
+ <img src="figures/first_page_figure" />
21
+ <figcaption>Overall architecture of the proposed approach. Blocks in blue are the only ones optimized, while blocks in grey are fixed or frozen operations.</figcaption>
22
+ </figure>
23
+
24
+ Generative models aim to understand a given dataset $\mathbf x \in \mathbb R^{d_x}$ by modelling its underlying distribution $p(\mathbf x)$. To simplify this problem, we can consider that the generation of $\mathbf x$ is conditioned by *latent variables* $\mathbf z \in \mathbb R^{d_z}$, responsible for most of the variations present in $\mathbf x$. Therefore, the complete model is defined by the joint distribution $p(\mathbf x, \mathbf z) = p(\mathbf x |\mathbf z )p(\mathbf z)$, which is usually not analytically solvable given the complexity of real-world data. Variational autoencoders address this problem by introducing an inference model $q_\phi(\mathbf z|\mathbf x)$, optimized to minimize its Kullback-Leibler (KL) divergence with the true posterior distribution $p(\mathbf z|\mathbf x)$
25
+
26
+ $$\begin{equation}
27
+ \phi^* = \underset{\phi}{\text{argmin}}~~ \mathcal D_{\text{KL}}[q_\phi(\mathbf z|\mathbf x)\|p(\mathbf z|\mathbf x)],
28
+ \end{equation}$$
29
+
30
+ which can be rearranged to obtain the final objective used to train a VAE, called the Evidence Lower BOund (ELBO), as shown by [@Kingma2014Auto-encodingBayes]
31
+
32
+ $$\begin{equation}
33
+ \mathcal{L}_{\phi,\theta}(\mathbf x) = -\mathbb E_{q_\phi(\mathbf z|\mathbf x)}[\log p_\theta(\mathbf x|\mathbf z)] + \mathcal D_{\text{KL}}[q_\phi(\mathbf z|\mathbf x)\|p(\mathbf z)].
34
+ \label{eq:elbo}
35
+ \end{equation}$$
36
+
37
+ The ELBO minimizes the reconstruction error of the model through the likelihood of the data given a latent $\log p_\theta(\mathbf x|\mathbf z)$, while regularizing the posterior distribution $q_\phi(\mathbf z|\mathbf x)$ to match a predefined prior $p(\mathbf z)$. The distributions $q_\phi$ and $p_\theta$ are parametrized by neural networks respectively called the *encoder* and the *decoder*. [@Higgins2016Beta-VAE:Framework] proposes to weight the KL divergence in equation ([\[eq:elbo\]](#eq:elbo){reference-type="ref" reference="eq:elbo"}) with a parameter $\beta$ to control the trade-off between accurate reconstruction and strong latent regularization. They show that setting $\beta>1$ leads to less entangled latent dimensions, to the detriment of reconstruction quality.
38
+
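As a concrete illustration, the $\beta$-weighted objective above can be sketched for the common case of a diagonal-Gaussian posterior and a standard-normal prior, where the KL term has a closed form (a minimal sketch; the squared error stands in for $-\log p_\theta(\mathbf x|\mathbf z)$ and is only one possible likelihood choice):

```python
import numpy as np

def beta_elbo(x, x_hat, mu, log_var, beta=1.0):
    """beta-weighted ELBO sketch: reconstruction term plus beta times the
    closed-form KL between N(mu, exp(log_var)) and a standard Gaussian."""
    recon = ((x - x_hat) ** 2).sum()                       # stand-in for -log p(x|z)
    kl = 0.5 * (np.exp(log_var) + mu ** 2 - 1.0 - log_var).sum()
    return recon + beta * kl
```

With $\beta = 0$ the objective reduces to a plain autoencoder reconstruction loss; increasing $\beta$ trades reconstruction fidelity for a more strongly regularized latent space.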
39
+ Among the first approaches addressing the raw waveform modelling task were WaveNet [@Oord2016WaveNet:Audio] and SampleRNN [@Mehri2017Samplernn:Model], where the probability of a waveform $\mathbf x$ is factorized as a product of conditional probabilities
40
+
41
+ $$\begin{equation}
42
+ p(\mathbf x) = \prod_{t>1} p(x_t | x_1, \hdots, x_{t-1}).
43
+ \label{eq:ar_modelling}
44
+ \end{equation}$$
45
+
46
+ Those models require a large amount of data and parameters to properly converge. Furthermore, the autoregressive nature of the synthesis process makes it prohibitively slow, and prone to accumulate errors.
47
+
48
+ WaveNet has also been adapted by [@Engel2017NeuralAutoencoders] for their NSynth model, addressing the representation learning task. Unlike equation ([\[eq:elbo\]](#eq:elbo){reference-type="ref" reference="eq:elbo"}), they do not regularize the learned representation, and rather encode the raw waveform deterministically to its latent counterpart. This implies the absence of a prior distribution $p(\mathbf{z})$ and, therefore, prevents sampling from the latent space, restricting the applications of the model to simple reconstructions and interpolations.
49
+
50
+ As a way to speed-up the synthesis process, [@Defossez2018SING:Generator] proposed an autoencoder with feed-forward convolutional networks parametrizing both the encoder and the decoder. They use a perceptually-motivated distance between waveforms called *spectral distance* as the reconstruction objective
51
+
52
+ $$\begin{equation}
53
+ l(\mathbf x,\mathbf y) = \left \|\log(\text{STFT}(\mathbf x)^2 + \epsilon) - \log(\text{STFT}(\mathbf y)^2 + \epsilon) \right \|_1,
54
+ \end{equation}$$
55
+
56
+ where STFT is the Short-Term Fourier Transform. Since they use a squared STFT, the phase component is discarded, making the loss permissive to inaudible phase variations. They show that their model is 2500 times faster than *NSynth* during synthesis, at the expense of degraded sound quality.
57
+
58
+ Following the recent advances in generative adversarial modelling [@Goodfellow2014GenerativeNetworks], [@Kumar2019MelGAN:Synthesis] proposed to use an adversarial objective to address the parallel audio modelling task. The discriminator is trained to differentiate true samples from generated ones, while the generator is optimized to produce samples that are classified as true by the discriminator. A *feature matching loss* is added to the adversarial loss, minimizing the L1 distance between the discriminator feature maps of real and synthesized audio. This feature matching mechanism can be seen as a learned metric evaluating the distance between two samples, and has been successfully applied to the conditional waveform modelling task (e.g. spectrogram-to-waveform synthesis, or replacement of the decoder in a pretrained autoencoder model).
59
+
60
+ # Method
61
+
62
+ Ideally, the representation learned by a variational autoencoder should contain *high-level* attributes of the dataset. However, two perceptually similar audio signals may contain subtle phase variations that produce dramatically different waveforms. Hence, estimating the reconstruction term in equation ([\[eq:elbo\]](#eq:elbo){reference-type="ref" reference="eq:elbo"}) using the raw waveform penalizes the model if those subtle variations are not included in the learned representation. This might both hamper the learning process and include in the latent space those *low-level* variations about audio signal that are not relevant perceptually. To address this problem, we split the training process in two stages, namely *representation learning* and *adversarial fine-tuning*.
63
+
64
+ The first stage of our procedure aims to perform *representation learning*. We leverage the multiscale spectral distance $S(\cdot, \cdot)$ proposed by [@Engel2019DDSP:Processing] in order to estimate the distance between real and synthesized waveforms, defined as
65
+
66
+ $$\begin{equation}
67
+ S(\mathbf x,\mathbf y) = \sum_{n\in \mathcal N} \left [ \frac{\|\text{STFT}_n(\mathbf x)- \text{STFT}_n(\mathbf y)\|_F}{\|\text{STFT}_n(\mathbf x)\|_F} + \log \left ( \|\text{STFT}_n(\mathbf x)-\text{STFT}_n(\mathbf y)\|_1 \right ) \right ],
68
+ \label{eq:spectral_distance}
69
+ \end{equation}$$
70
+
71
+ where $\mathcal N$ is a set of scales, $\text{STFT}_n$ is the amplitude of the Short-Term Fourier Transform with window size $n$ and hop size $n / 4$, and $\|\cdot \|_F$, $\|\cdot \|_1$ are respectively the Frobenius norm and $L_1$ norm. Using an amplitude spectrum-based distance does not penalize the model for inaccurately reconstructed phase, but encompasses important perceptual features about the signal. We train the *encoder* and *decoder* with the following loss derived from the ELBO
72
+
73
+ $$\begin{equation}
74
+ \mathcal{L}_\text{vae}(\mathbf x) = \mathbb E_{\mathbf{\hat x\sim} p(\mathbf x|\mathbf z)}[S(\mathbf x, \mathbf{\hat x})] + \beta \times \mathcal D_{\text{KL}}[q_\phi(\mathbf z|\mathbf x)\|p(\mathbf z)].
75
+ \label{eq:elbo_spectral}
76
+ \end{equation}$$
77
+
78
+ We start by training the model solely with $\mathcal L_\text{vae}$, and once this loss converges, we switch to the next training phase.
79
+
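The multiscale spectral distance $S(\cdot,\cdot)$ of equation ([\[eq:spectral_distance\]](#eq:spectral_distance){reference-type="ref" reference="eq:spectral_distance"}) can be sketched as follows, using `scipy` for the STFT. The window sizes and the small $\epsilon$ guard inside the logarithm are illustrative choices, not necessarily the paper's exact settings:

```python
import numpy as np
from scipy.signal import stft

def multiscale_spectral_distance(x, y, scales=(1024, 512, 256), eps=1e-7):
    """Sum over window sizes n of the relative Frobenius distance plus the
    log L1 distance between the amplitude STFTs of x and y (sketch)."""
    total = 0.0
    for n in scales:
        _, _, X = stft(x, nperseg=n, noverlap=n - n // 4)  # hop size n/4
        _, _, Y = stft(y, nperseg=n, noverlap=n - n // 4)
        A, B = np.abs(X), np.abs(Y)                        # amplitude spectra
        total += np.linalg.norm(A - B) / (np.linalg.norm(A) + eps)
        total += np.log(np.abs(A - B).sum() + eps)
    return total
```

Because only amplitude spectra enter the distance, two waveforms that differ solely by phase shifts incur almost no penalty, which is exactly the permissiveness motivated above.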
80
+ The second training stage aims at improving the synthesized audio quality and naturalness. As we consider that the learned representation has reached a satisfactory state at this point, we freeze the encoder and only train the decoder using an adversarial objective.
81
+
82
+ GANs are *implicit* generative models allowing to sample from a complex distribution by transforming a simpler one, called the *base distribution*. Here, we use the learned latent space in the first stage as the base distribution, and train the decoder to produce synthesized signals similar to the real ones by relying on a *discriminator* $D$. We use the hinge loss version of the GAN objective, defined as
83
+
84
+ $$\begin{align}
85
+ \nonumber \mathcal L_\text{dis}(\mathbf x, \mathbf z) &= \max(0, 1-D(\mathbf x)) + \mathbb E_{\mathbf {\hat x} \sim p(\mathbf x|\mathbf z)} [\max(0, 1+D(\mathbf {\hat x}))], \\
86
+ \mathcal L_\text{gen}(\mathbf z) &= -\mathbb E_{\mathbf {\hat x} \sim p(\mathbf x|\mathbf z)}[D(\mathbf {\hat x})].
87
+ \label{eq:hinge}
88
+ \end{align}$$
89
+
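A minimal sketch of the hinge losses in equation ([\[eq:hinge\]](#eq:hinge){reference-type="ref" reference="eq:hinge"}), operating on precomputed discriminator scores rather than on a full network:

```python
import numpy as np

def d_hinge_loss(d_real, d_fake):
    """Discriminator hinge loss: push D(x) above +1 and D(x_hat) below -1."""
    return np.maximum(0.0, 1.0 - d_real).mean() + np.maximum(0.0, 1.0 + d_fake).mean()

def g_hinge_loss(d_fake):
    """Generator loss: maximise the discriminator score of synthesized samples."""
    return -d_fake.mean()
```

Note that the discriminator loss saturates once real and fake scores are confidently separated (beyond the +/-1 margins), which is a standard stabilising property of the hinge formulation.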
90
+ In order to ensure that the synthesized signal $\mathbf {\hat x}$ does not diverge too much from the ground truth $\mathbf x$, we keep minimizing the spectral distance defined in equation ([\[eq:spectral_distance\]](#eq:spectral_distance){reference-type="ref" reference="eq:spectral_distance"}), but also add the feature matching loss $\mathcal L_\text{FM}$ proposed by [@Kumar2019MelGAN:Synthesis]. Altogether, this yields the following objective for the decoder
91
+
92
+ $$\begin{equation}
93
+ \mathcal L_\text{total}(\mathbf x, \mathbf z) = \mathcal L_\text{gen}(\mathbf z) + \mathbb E_{\mathbf{\hat x\sim} p(\mathbf x|\mathbf z)}[S(\mathbf x, \mathbf{\hat x})+\mathcal L_\text{FM}(\mathbf x,\mathbf{\hat x})].
94
+ \label{eq:full_generator_loss}
95
+ \end{equation}$$
96
+
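The feature matching term $\mathcal L_\text{FM}$ can be sketched as an L1 distance between discriminator feature maps, averaged over layers (the uniform layer averaging here is an assumption; the exact normalisation in [@Kumar2019MelGAN:Synthesis] may differ):

```python
import numpy as np

def feature_matching(feats_real, feats_fake):
    """Mean absolute difference between discriminator feature maps of real
    and synthesized audio, averaged over the list of layers (sketch)."""
    return sum(np.abs(fr - ff).mean()
               for fr, ff in zip(feats_real, feats_fake)) / len(feats_real)
```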
97
+ The loss proposed in equation ([\[eq:elbo_spectral\]](#eq:elbo_spectral){reference-type="ref" reference="eq:elbo_spectral"}) contains two terms, a *reconstruction* and *regularisation* term. Those two terms are somewhat conflicting, since the reconstruction term maximises the mutual information between the latent representation and the data distribution, while the regularisation term guides the posterior distribution towards independence with the data (potentially causing *posterior collapse*). In practice, the pressure applied by the regularisation term to the encoder during training encourages it to learn a compact representation, where informative latents have the highest KL divergence from the prior, while uninformative latents have a KL divergence close to 0 [@Higgins2016Beta-VAE:Framework].
98
+
99
+ Here, we address the task of identifying the most informative parts of the latent space in order to restrict the dimensionality of the learned representation to the strict minimum required to reconstruct a signal. To do so, we adapt the method for range and null space estimation (see appendix [8](#sec:range_null){reference-type="ref" reference="sec:range_null"}) to this problem. Let $\mathbf Z \in \mathbb R^{b\times d}$ be a matrix composed of $b$ samples $\mathbf z \in \mathbb R^d$, where $\mathbf z \sim q_\phi(\mathbf z | \mathbf x)$. Using a Singular Value Decomposition (SVD) directly on $\mathbf Z$ to solve the problem of finding informative parts of the latent space would not be relevant given the high variance present in the collapsed parts of $\mathbf Z$. In order to adapt this to our problem, we first remove the variance from $\mathbf Z$, by considering the matrix $\mathbf Z' \in \mathbb R^{b\times d}$ that verifies
100
+
101
+ $$\begin{equation}
102
+ \mathbf Z'_i =\underset{\mathbf z}{\text{argmax}} ~~ q_\phi(\mathbf z | \mathbf x_i).
103
+ \label{eq:argmax_latent}
104
+ \end{equation}$$
105
+
106
+ Hence, dimensions of the posterior distribution $q_\phi(\mathbf z | \mathbf x)$ that have collapsed to the prior $p(\mathbf z)$ will result in a constant value in $\mathbf Z'$, that we set to $0$ by removing the average of $\mathbf Z'$ across the first dimension. The only dimensions of $\mathbf Z'$ with non-zero values are therefore correlated with the input, which constitute the informative part of the latent space. Applying a SVD on this centered matrix, we can obtain the matrix $\mathbf \Sigma$ containing the singular values of $\mathbf Z'$, by computing
107
+
108
+ $$\begin{equation}
109
+ \mathbf Z' = \mathbf{U\Sigma V^T},
110
+ \label{eq:svd}
111
+ \end{equation}$$
112
+
113
+ As detailed in appendix [8](#sec:range_null){reference-type="ref" reference="sec:range_null"}, the rank $r$ of $\mathbf Z'$ is equal to the number of non-zero singular values in $\mathbf \Sigma$. Given the high variation that exists in real-world data, it is unlikely that the vanishing singular values of $\mathbf Z'$ are exactly equal to 0. Therefore, instead of tracking the exact rank $r$ of $\mathbf Z'$, we define a *fidelity* parameter $f \in [0,1]$, with the associated rank $r_f$ defined as the smallest integer verifying
114
+
115
+ $$\begin{equation}
116
+ \frac{\sum_{i\leq r_f} \mathbf \Sigma_{ii}}{\sum_i \mathbf \Sigma_{ii}} \geq f.
117
+ \end{equation}$$
118
+
119
+ Given the fidelity value $f$, and a latent representation $\mathbf z \sim q_\phi(\mathbf z | \mathbf x)$, we reduce the dimensionality of $\mathbf z$ by projecting it on the basis defined by $\mathbf V^T$ and keep only the $r_f$ first dimensions. We obtain a low-rank representation $\mathbf{z}_f$, whose dimensionality depends on both the dataset and $f$. Before providing $\mathbf{z}_f$ to the decoder, we concatenate it with noise sampled from the prior distribution, and project it back on its original basis using $\mathbf V$. We demonstrate in section [5.3](#sec:balancing){reference-type="ref" reference="sec:balancing"} the influence of $f$ on the reconstruction.
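The fidelity-based rank selection can be sketched as follows, assuming $\mathbf Z'$ is given as an array of per-sample posterior modes; centering and the cumulative singular-value criterion follow the text above:

```python
import numpy as np

def fidelity_rank(Z_prime, f=0.95):
    """Smallest r_f such that the first r_f singular values of the centered
    matrix account for a fraction >= f of the total singular-value mass."""
    Zc = Z_prime - Z_prime.mean(axis=0, keepdims=True)   # collapsed dims become 0
    s = np.linalg.svd(Zc, compute_uv=False)              # singular values, descending
    cum = np.cumsum(s) / s.sum()
    return int(np.searchsorted(cum, f) + 1)
```

Dimensions whose posterior has collapsed to the prior contribute constant columns, which vanish after centering and therefore add (numerically near-)zero singular values, so they never enter the informative rank $r_f$.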
2111.12701/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2111.12701/paper_text/intro_method.md ADDED
@@ -0,0 +1,115 @@
1
+ # Introduction
2
+
3
+ Artificially generating plausible photo-realistic images, at ever higher resolutions, has long been a goal when designing deep generative models. Recent advancements have yielded direct benefits for fields such as medical image synthesis [\[19\]](#page-8-0), computer graphics [\[9,](#page-8-1) [79\]](#page-10-0), image editing
4
+
5
+ Source code for this work is available at [https://github.com/](https://github.com/samb-t/unleashing-transformers) [samb-t/unleashing-transformers](https://github.com/samb-t/unleashing-transformers)
6
+
7
+ ![](_page_0_Picture_9.jpeg)
8
+
9
+ Figure 1. Our approach uses a discrete diffusion model to generate high quality images, optionally larger than the training data (bottom).
10
+
11
+ [\[45\]](#page-9-0), image-to-image translation [\[65\]](#page-10-1), and image superresolution [\[27\]](#page-8-2).
12
+
13
+ These methods can in general be divided into five main classes [\[5\]](#page-8-3), each of which make different trade-offs to scale to high resolutions. Techniques to scale Generative Adversarial Networks (GANs) [\[20\]](#page-8-4) include progressive growing [\[34\]](#page-9-1), large batches [\[8\]](#page-8-5), and regularisation [\[46,](#page-9-2)[49\]](#page-9-3). Variational Autoencoders (VAEs) [\[43\]](#page-9-4) can be scaled by building complex priors [\[10,](#page-8-6) [70,](#page-10-2) [73\]](#page-10-3) and correcting the learned density [\[77\]](#page-10-4). Autoregressive approaches can make independence assumptions [\[60\]](#page-10-5) or partition spatial dimensions [\[48\]](#page-9-5). Normalizing Flows utilise multi-scale architectures [\[40\]](#page-9-6), while diffusion models can be scaled using SDEs [\[67\]](#page-10-6) and cascades [\[27\]](#page-8-2). Each of these approaches have their own drawbacks, such as unstable training, long sample times, and a lack of global context.
14
+
15
+ Of particular interest to this work is the popular Trans-
16
+
17
+ <span id="page-0-0"></span><sup>\*</sup>Authors contributed equally.
18
+
19
+ <span id="page-1-0"></span>former architecture [74] which is able to model long distance relationships using a powerful attention mechanism that can be trained in parallel. By constraining the Transformer architecture to attend to a fixed ordering of tokens in a unidirectional manner, it can be used to parameterise an autoregressive model for generative modelling [11, 55]. However, image data does not conform to such a structure and hence this bias limits the representational ability of the Transformer and unnecessarily restricts the sampling process to be both sequential and slow.
20
+
21
+ Addressing these issues, our main contributions are:
22
+
23
+ - We propose a novel parallel token prediction approach for generating Vector-Quantized image representations that allows for significantly faster sampling than autoregressive approaches.
24
+ - Our approach is able to generate globally consistent images at resolutions exceeding that of the original training data by aggregating multiple context windows, allowing for much larger context regions.
25
+ - Our approach demonstrates state-of-the art performance across three benchmark datasets in terms of Density (LSUN Bedroom: 1.51; LSUN Churches: 1.12; FFHQ: 1.20) and Coverage (LSUN Bedroom: 0.83; LSUN Churches: 0.73; FFHQ: 0.80), while also being competitive on FID (LSUN Bedroom: 3.64; LSUN Churches: 4.07; FFHQ: 6.11).
26
+
27
+ # Method
28
+
29
+ In this section, we formalise our proposed 2-stage approach for generating high-resolution images using a discrete diffusion model to represent Vector-Quantized image representations; this is visualised in Fig. 2. We hypothesise that by removing the autoregressive constraint, allowing bidirectional context when generating samples, not only will it be possible to speed up the sampling process, but an improved feature representation will be learned, enabling higher quality image generation.
30
+
31
+ In the first stage of our approach, a Vector-Quantized image model compresses high-resolution images to a highly compressed form, taking advantage of an information rich codebook [73]. A convolutional encoder downsamples images $x$ to a smaller spatial resolution, $E(x) = \{e_1, e_2, ..., e_L\} \in \mathbb{R}^{L \times D}$ . A simple quantisation approach is to use the argmax operation which maps continuous encodings to their closest elements in a finite codebook of vectors [73]. Specifically, for a codebook $\mathcal{C} \in \mathbb{R}^{K \times D}$ , where K is the number of discrete codes in the codebook and D is the dimension of each code, each $e_i$ is mapped via a nearest-neighbour lookup onto a discrete codebook value, $c_i \in \mathcal{C}$ :
32
+
33
+ $$\boldsymbol{z}_q = \{\boldsymbol{q}_1, \boldsymbol{q}_2, ..., \boldsymbol{q}_L\}, \quad \text{where} \quad \boldsymbol{q}_i = \underset{\boldsymbol{c}_j \in \mathcal{C}}{\arg\min}\; ||\boldsymbol{e}_i - \boldsymbol{c}_j||. \tag{4}$$
34
+
35
+
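A minimal sketch of the nearest-neighbour quantisation step above, with the encoder output and codebook given as arrays:

```python
import numpy as np

def quantise(E, C):
    """Map each encoding e_i to its nearest codebook vector.
    E: (L, D) encoder outputs; C: (K, D) codebook."""
    d = ((E[:, None, :] - C[None, :, :]) ** 2).sum(-1)  # (L, K) squared distances
    idx = d.argmin(axis=1)                              # nearest-neighbour indices
    return C[idx], idx
```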
36
+ As this operation is non-differentiable, the straight-through gradient estimator [3] is used to copy the gradients from the decoder inputs onto the encoder outputs, resulting in biased gradients.
37
+
38
+ <span id="page-3-2"></span>Subsequently, the quantized latents are fed through a decoder network $\hat{x} = G(z_q)$ to reconstruct the input based on a perceptual reconstruction loss [18,80]; this process is trained by minimising the loss $\mathcal{L}_{VQ}$ ,
39
+
40
+ $$\mathcal{L}_{VQ} = \mathcal{L}_{rec} + ||sg[E(x)] - z_q||_2^2 + \beta ||sg[z_q] - E(x)||_2^2. \tag{5}$$
41
+
42
+
43
+ The argmax approach can result in codebook collapse, where some codes are never used; while other quantisation methods can reduce this [14, 32, 47, 58], we found argmax quantisation to yield the highest reconstruction quality.
44
+
45
+ To allow sampling, a discrete generative model is trained on the latents obtained from the Vector-Quantized image model. The highly compressed form allows this second stage to function much more efficiently. Once the training data is encoded as discrete, integer-valued latents $z \in \mathbb{Z}^D$ , a discrete diffusion model can be used to learn the distribution over these latents. Due to the effectiveness of BERT-style models [13] for representation learning, we use the absorbing state diffusion [1] which similarly learns to denoise randomly masked data. Specifically, in each forward time step t, values are either kept the same or masked out entirely with probability $\frac{1}{t}$ and the reverse process gradually unveils these masks. Rather than directly approximating $p_{\theta}(z_{t-1}|z_t)$ , we predict $p_{\theta}(z_0|z_t)$ , reducing the training stochasticity [26]. The variational bound reduces to
46
+
47
+ $$\mathbb{E}_{q(\boldsymbol{z}_0)} \left[ \sum_{t=1}^{T} \frac{1}{t} \mathbb{E}_{q(\boldsymbol{z}_t | \boldsymbol{z}_0)} \left[ \sum_{[\boldsymbol{z}_t]_i = m} \log p_{\theta}([\boldsymbol{z}_0]_i | \boldsymbol{z}_t) \right] \right]. \quad (6)$$
48
+
49
+ In practice, continuous diffusion models are trained to estimate the noise rather than directly predict the denoised data; this reparameterisation allows the loss to be easily minimised at time steps close to T. Unfortunately, no relevant reparameterisation currently exists for discrete distributions [28]. Rather than directly maximising the ELBO, we reweight the ELBO to mimic the reparameterisation,
50
+
51
+ $$\mathbb{E}_{q(\boldsymbol{z}_0)} \left[ \sum_{t=1}^{T} \frac{T-t+1}{T} \mathbb{E}_{q(\boldsymbol{z}_t|\boldsymbol{z}_0)} \left[ \sum_{[\boldsymbol{z}_t]_i=m} \log p_{\theta}([\boldsymbol{z}_0]_i|\boldsymbol{z}_t) \right] \right], (7)$$
52
+
53
+ where components of the loss at time steps close to T are weighted less than earlier steps. This is closely related to the loss obtained by assuming the posterior does not have access to $\boldsymbol{x}_t$ , i.e. if the $(t-1)^{\text{th}}$ loss term is $D_{KL}(q(\boldsymbol{x}_{t-1}|\boldsymbol{x}_0)||p_{\theta}(\boldsymbol{x}_{t-1}|\boldsymbol{x}_t))$ . Since we directly predict $\boldsymbol{z}_0$ and not $\boldsymbol{z}_t$ this assumption does not harm the training. Experimentally we find that this reweighting achieves a lower validation ELBO than directly maximising the ELBO.
54
+
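Under the common parameterisation in which $q(\boldsymbol{z}_t|\boldsymbol{z}_0)$ masks each token independently with cumulative probability $t/T$, the forward corruption and the reweighting coefficient of Eq. (7) can be sketched as follows (a sketch; the exact schedule used here may differ):

```python
import numpy as np

def mask_tokens(z0, t, T, mask_id, rng):
    """Sample z_t ~ q(z_t | z_0): each token is independently replaced by the
    absorbing mask token with probability t/T (assumed cumulative schedule)."""
    keep = rng.random(z0.shape) >= t / T
    return np.where(keep, z0, mask_id)

def loss_weight(t, T):
    """Reweighted ELBO coefficient from Eq. (7): (T - t + 1) / T."""
    return (T - t + 1) / T
```

The weight decays linearly from 1 at $t=1$ to $1/T$ at $t=T$, down-weighting the heavily masked late time steps exactly as described above.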
55
+ Esser et al. [18] demonstrated that in the autoregressive case, Transformers [74] are better suited for modelling Vector-Quantized images than convolutional architectures
56
+
57
+ due to the importance of long-distance relationships in this compressed form. As such, we utilise transformers to model the prior distribution, but without the architectural restrictions imposed by autoregressive approaches.
58
+
59
+ Using convolutions to build Vector-Quantized image models encourages latents to be highly spatially correlated with generated images. It is therefore possible to construct essentially arbitrarily sized images by generating latents with the required shape. We propose an approach that allows globally consistent images substantially larger than those in the training data to be generated.
60
+
61
+ First, a large a by b array of mask tokens, $\bar{z}_T = m^{a \times b}$ , is initialised that corresponds to the size of image we wish to generate. In order to capture the maximum context when approximating $\bar{z}_0$ we apply the denoising network to all subsets of $\bar{z}_t$ with the same spatial size as the usual inputs of the network, aggregating estimates at each location. Specifically, using $c_j(\bar{z}_t)$ to represent local subsets, we approximate the denoising distribution as a mixture,
62
+
63
+ $$p([\bar{z}_0]_i|\bar{z}_t) \approx \frac{1}{Z} \sum_j p([\bar{z}_0]_i|c_j(\bar{z}_t)), \tag{8}$$
64
+
65
+
66
+ where the sum is over subsets $c_j$ that contain the $i^{th}$ latent. For extremely large images, this can require a very large number of function evaluations, however, the sum can be approximated by striding over latents with a step > 1 or by randomly selecting positions.
67
+
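The window aggregation of Eq. (8) can be sketched as follows, where `probs_fn` is a hypothetical stand-in for the denoising network restricted to one $win \times win$ context window:

```python
import numpy as np

def aggregate_probs(probs_fn, grid_h, grid_w, win, K):
    """Average per-token class probabilities over every win x win window that
    contains each latent position. probs_fn(r, c) returns a (win, win, K)
    array for the window whose top-left corner is (r, c)."""
    acc = np.zeros((grid_h, grid_w, K))
    cnt = np.zeros((grid_h, grid_w, 1))
    for r in range(grid_h - win + 1):
        for c in range(grid_w - win + 1):
            acc[r:r+win, c:c+win] += probs_fn(r, c)
            cnt[r:r+win, c:c+win] += 1
    return acc / cnt   # per-position normalisation plays the role of 1/Z
```

Striding the two loops with a step greater than 1 gives the cheaper approximation mentioned above, at the cost of averaging over fewer context windows per position.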
68
+ There are various options to obtain high-quality image representations including using large numbers of latents and codes [58] or building a hierarchy of latent variables [59]. We use the adversarial framework proposed by Esser et al. [18] to achieve higher compression rates with high-quality codes using only a single GPU, without tying our approach to the characteristics typically associated with generative adversarial models. Additionally, we apply differentiable augmentations T, such as translations and colour jitter, to all discriminator inputs; this has proven to be effective at improving sample quality across methods [33,81]. The overall loss $\mathcal{L}$ is a linear combination of $\mathcal{L}_{VQ}$ , the Vector-Quantized loss, and $\mathcal{L}_G$ which uses a discriminator D to assess realism based on an adaptive weight $\lambda$ . On some datasets, $\lambda$ can grow to extremely large values hindering training. We find simply clamping $\lambda$ at a maximum value $\lambda_{max} = 1$ an effective solution that stabilises training,
69
+
70
+ $$\mathcal{L} = \min_{E,G,C} \max_{D} \mathbb{E}_{\boldsymbol{x} \sim p_d} \left[ \mathcal{L}_{VQ} + \lambda \mathcal{L}_{G} \right], \tag{9a}$$
71
+
72
+ $$\mathcal{L}_{G} = \log D(T(\boldsymbol{x})) + \log(1 - D(T(\hat{\boldsymbol{x}}))), \tag{9b}$$
73
+
74
+ $$\lambda = \min\left(\frac{\nabla_{G_L}[\mathcal{L}_{rec}]}{\nabla_{G_L}[\mathcal{L}_G] + \delta}, \lambda_{max}\right). \tag{9c}$$
75
+
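A minimal sketch of the clamped adaptive weight in Eq. (9c), assuming the two gradient norms at the decoder's last layer have already been reduced to scalars:

```python
def adaptive_weight(grad_rec, grad_gan, lam_max=1.0, delta=1e-6):
    """Ratio of reconstruction-loss to adversarial-loss gradient norms,
    clamped at lam_max to keep training stable (sketch)."""
    return min(grad_rec / (grad_gan + delta), lam_max)
```

Without the clamp, a vanishing adversarial gradient norm in the denominator would let the weight blow up, which is precisely the instability the text describes.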
76
+ <span id="page-4-3"></span><span id="page-4-2"></span>
77
+
78
+ | Model | P ↑ | R ↑ | D ↑ | C ↑ |
79
+ |----------------|------|------|------|------|
80
+ | Churches | | | | |
81
+ | DCT [51] | 0.60 | 0.48 | - | - |
82
+ | TT [18] | 0.67 | 0.29 | 1.08 | 0.60 |
83
+ | PGGAN [34] | 0.61 | 0.38 | 0.83 | 0.63 |
84
+ | StyleGAN2 [37] | 0.60 | 0.43 | 0.83 | 0.68 |
85
+ | Ours (t = 1.0) | 0.70 | 0.42 | 1.12 | 0.73 |
86
+ | Ours (t = 0.9) | 0.71 | 0.45 | 1.07 | 0.74 |
87
+ | FFHQ | | | | |
88
+ | VDVAE [10] | 0.59 | 0.20 | 0.80 | 0.50 |
89
+ | TT [18] | 0.64 | 0.29 | 0.89 | 0.59 |
90
+ | StyleGAN2 [37] | 0.69 | 0.40 | 1.12 | 0.80 |
91
+ | Ours (t = 1.0) | 0.69 | 0.48 | 1.06 | 0.77 |
92
+ | Ours (t = 0.9) | 0.73 | 0.48 | 1.20 | 0.80 |
93
+ | Bedroom | | | | |
94
+ | DCT [51] | 0.44 | 0.56 | - | - |
95
+ | TT [18] | 0.61 | 0.33 | 1.15 | 0.75 |
96
+ | PGGAN [34] | 0.43 | 0.40 | 0.70 | 0.64 |
97
+ | StyleGAN [36] | 0.55 | 0.48 | 0.96 | 0.80 |
98
+ | Ours (t = 1.0) | 0.64 | 0.38 | 1.27 | 0.81 |
99
+ | Ours (t = 0.9) | 0.67 | 0.38 | 1.51 | 0.83 |
100
+
101
+ Table 1. Precision, Recall, Density, and Coverage for approaches trained on FFHQ, LSUN Bedroom, and LSUN Churches.
102
+
103
+ | Method | Params | Bed | Church | FFHQ |
104
+ |----------------|--------|------|--------|------|
105
+ | DDPM [26] | 114M | 6.36 | 7.89 | - |
106
+ | DCT [51] | 448M | 6.40 | 7.56 | - |
107
+ | VDVAE [10] | 115M | - | - | 28.5 |
108
+ | TT [17, 18] | 600M | 6.35 | 7.81 | 9.6 |
109
+ | ImageBART [17] | 2104M | 5.51 | 7.32 | 9.57 |
110
+ | PGGAN [34] | 47M | 8.34 | 6.42 | - |
111
+ | StyleGAN2 [37] | 60M | 2.35 | 3.86 | 3.8 |
112
+ | Ours (t = 1.0) | 145M | 5.07 | 5.58 | 7.12 |
113
+ | Ours (t = 0.9) | 145M | 3.64 | 4.07 | 6.11 |
114
+
115
+ Table 2. FID for various approaches on FFHQ, LSUN Bedroom, and LSUN Churches. Lower FID signifies higher quality samples.
2111.15362/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2111.15362/paper_text/intro_method.md ADDED
@@ -0,0 +1,65 @@
1
+ # Introduction
2
+
3
+ Convolutional neural networks (CNNs) have been ubiquitously utilized in almost every field of computer vision. Particularly, researchers harness the power of CNNs in image restoration tasks [@Zhang2017BeyondAG; @Zhang2017LearningDC; @Zhang2018FFDNetTA; @Guo2019Cbdnet], which refers to the task of recovering the original image from a corrupted version. The success of CNNs comes as a result of their ability to learn a mapping from a corrupted image to its uncorrupted counterpart. However, the ground truth labels are not always available to learn such a mapping for a given domain, limiting the applicability of approaches that require supervised settings. To tackle this problem, researchers have turned their attention towards unsupervised approaches. Recent discoveries have shown that the architecture of CNNs contains an intrinsic *prior* that can be used in image restoration tasks [@Ulyanov_2018_CVPR; @Saxe2011OnRandom]. This insight led to the *Deep Image Prior* (DIP) framework [@Ulyanov_2018_CVPR], which works solely with the degraded image and can produce competitive results for image restoration tasks without a supervised training phase. It offers an alternative solution to restoration problems by suggesting a new regularizer: the network architecture itself. In addition to this empirical discovery, Rahaman [@pmlr-v97-rahaman19a] investigated the spectral bias of neural networks towards low frequencies theoretically, which can explain the impressive performance of the DIP framework. Chakrabarty [@Chakrabarty2019TheSB] further explored the underlying reason behind the success of DIP in denoising natural images. The work demonstrates that the network tends to behave similarly to a low pass filter at the early stages of iterations. Finally, DeepRED [@Mataev2019DeepREDDI] merged the concept of "Regularization by Denoising" (RED) by adding explicit priors to enhance DIP.
4
+
5
+ One problem faced in the DIP framework is that the architectural design of the network has a substantial impact on the performance. Recent works have attempted to automate the search for network architectures for various tasks, which is referred to as *Neural Architecture Search* (NAS). In the context of DIP, Chen [@Chen2020NAS-DIP] applied NAS to the DIP framework. However, current NAS approaches come with substantial computational costs, as they require optimizing a large number of architectures to determine the optimum. Moreover, this cost prohibits determining the optimum architecture for every image; instead, existing NAS approaches search for the best architecture for a dataset of images.
6
+
7
+ **Our work.** In this paper, we propose novel image-dependent metrics to determine optimal network architectures in the DIP framework with minimal training. Unlike previous works, we apply our metrics to DIP for finding image-specific architectures since performance is strongly dependent on the content of the image that is to be restored.
8
+
9
+ We first motivate image-specific NAS, by showing that in a given search space, there is only a small overlap of the best architectures for different images. This is illustrated in Figure [3](#fig:overlap){reference-type="ref" reference="fig:overlap"}, where the matrices show the number of overlaps between the top 10 models (of a total of 522 models) for each image for denoising and inpainting.
10
+
11
+ To identify architectures that are fitting for a specific image, we propose image-dependent metrics that measure the property of how far the power spectral density (PSD) of the generated initial output of a network is from that of the corrupted image and use it as our metric. The intuition relies on the fact that the more these two are similar, the better the model will reconstruct the image since it is closer to the solution space.
12
+
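As an illustration of this idea, a PSD-based distance between a network's initial output and the corrupted image might look as follows (a hypothetical sketch; the paper's actual metric definitions may differ):

```python
import numpy as np

def psd_distance(out0, corrupted):
    """L1 distance between the normalised 2D power spectral densities of a
    randomly initialised network's output and the corrupted image (sketch)."""
    psd = lambda im: np.abs(np.fft.fft2(im)) ** 2
    p, q = psd(out0), psd(corrupted)
    p, q = p / p.sum(), q / q.sum()      # normalise out total power
    return np.abs(p - q).sum()
```

A small distance indicates that the untrained network already produces outputs whose frequency content resembles the target, which is the property the proposed metrics exploit, without any optimization.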
13
+ <figure id="fig:overlap" data-latex-placement="ht">
14
+ <figure id="fig:dn overlap">
15
+ <img src="Figures/overlap/denoising_overlap.jpg" />
16
+ <figcaption>Denoising</figcaption>
17
+ </figure>
18
+ <figure id="fig:in overlap">
19
+ <img src="Figures/overlap/inpainting_overlap.jpg" />
20
+ <figcaption>Inpainting</figcaption>
21
+ </figure>
22
+ <figcaption>The overlap of the best-performing architectures between different images is shown here. The numbers indicate how many of the top-10 models of the search space for image x are also in the top-10 for image y. This is shown for the task of denoising and inpainting in (a) and (b), respectively. E.g. the value at the intersection of chest and lena in the denoising heatmap, which is 2, indicates that there are 2 models in each of the images’ best-performing 10 models that are the same.</figcaption>
23
+ </figure>
24
+
25
+ We motivate the choice of metrics by examining the correlation between the metrics' values and image restoration performance. The correlation is imperfect; hence, rather than committing to a single architecture, we select a small cohort of architectures to optimize based on the metrics' values. A final selection is then made by choosing the model whose output is closest to the average of the outputs of all models in the cohort.
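The final consensus-based selection step can be sketched in a few lines (a minimal NumPy illustration under our reading of the procedure; `outputs` holds the restored images produced by the optimized candidate models):

```python
import numpy as np

def select_consensus_model(outputs):
    """Index of the model whose output is closest (in MSE) to the
    pixel-wise average of all candidate outputs."""
    stack = np.stack(outputs)                     # shape: (num_models, H, W)
    mean_output = stack.mean(axis=0)
    dists = ((stack - mean_output) ** 2).mean(axis=(1, 2))
    return int(np.argmin(dists))
```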
26
+
27
+ We conduct experiments on conventional datasets for image denoising, image inpainting, and single-image super-resolution using the proposed strategy. For each image in the datasets, we run our ISNAS algorithm to identify the optimal image-specific models. The results demonstrate that our method outperforms the state-of-the-art approach [@Chen2020NAS-DIP] in restoration quality.
28
+
29
+ **The main contributions** can be summarized as follows:
30
+
31
+ - We empirically show the necessity of identifying image-specific models to improve the restoration quality of DIP.
32
+
33
+ - We present novel metrics to be used in NAS that require only a randomly initialized CNN. These metrics allow ranking architectures within any search space, without lengthy optimization, as a surrogate for their success on image restoration tasks.
34
+
35
+ - We introduce two selection procedures among a subset of models for finding optimal architectures in an unsupervised fashion for DIP.
36
+
37
+ - We generate a *NAS Dataset for DIP* containing 522 models optimized for ten images from different domains, covering image denoising and image inpainting tasks.
38
+
39
+ - Extensive experiments on commonly used datasets and *NAS Dataset for DIP* validate our approach.
40
+
41
+ # Method
42
+
43
+ In a typical NAS algorithm, the main bottleneck, in terms of time, is the training of the models to compute their performance. If one can find an easy-to-calculate training-free performance predictor, this bottleneck can be eliminated. In addition, our experiments show that model selection for DIP settings should be image-dependent. In this paper, we propose several different training-free and image-dependent performance predictors and study their effectiveness.
44
+
45
+ Ulyanov et al. [@Ulyanov_2018_CVPR] observed that architectures with better performance in DIP tend to produce outputs possessing large spatial structures at the early iterations of training. A useful tool for capturing the distribution of spatial structures is the power spectral density (PSD): coarse and fine textures lead to a PSD concentrated on low and high frequencies, respectively.
46
+
47
+ Inspired by these observations, we hypothesize that if the PSD of an untrained CNN's output is *similar* to that of the image to be reconstructed, then the model is closer to the desired solution space; hence it will facilitate the optimization and lead to better restoration results than models with lower similarity. In this section, we formulate different metrics to quantify the similarity between an image and the CNN's random output.
48
+
49
+ It would be preferable to compute the distance using the ground truth image. However, in a practical situation, we do not have access to it. Therefore, we use the distance between the CNN's random output and the corrupted image as a proxy for the distance between the output and the ground truth image.
50
+
51
+ One straightforward way to measure the distance between PSDs is the mean square error (MSE). The PSD of an array generally consists of values that differ from each other by orders of magnitude. Thus, the MSE is better computed on the logarithm of the PSD, which we call the decibel PSD, rather than on the PSD itself. To that end, we first calculate the decibel PSDs of the output of a given CNN with randomly initialized weights and of the corrupted image. Then, we measure the MSE between them. A schematic representation of this metric can be seen in Fig. [4](#fig:metric summary){reference-type="ref" reference="fig:metric summary"} and it is formulated as $$\begin{equation}
52
+ \frac{1}{n} \sum_{i,j} \left( 10\cdot \log X_{i,j} - 10 \cdot \log Y_{i,j} \right)^2
53
+ \end{equation}$$ where $X$ and $Y$ denote the corresponding power spectral densities of the CNN's random output and a corrupted image, respectively, and $n$ denotes the number of pixels.
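A minimal NumPy sketch of this metric (the function names and the small epsilon guarding the logarithm are our own; base-10 logarithms follow the decibel convention):

```python
import numpy as np

def psd_db(img):
    """Decibel power spectral density of a 2-D (grayscale) array."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    psd = np.abs(spectrum) ** 2 / img.size
    return 10.0 * np.log10(psd + 1e-12)   # epsilon avoids log(0)

def psd_db_mse(cnn_output, corrupted):
    """MSE between the decibel PSDs of a random CNN output and the corrupted image."""
    x_db, y_db = psd_db(cnn_output), psd_db(corrupted)
    return float(np.mean((x_db - y_db) ** 2))
```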
54
+
55
+ Spatial structures and texture in an image are related to its PSD, but the frequency regions of the PSD do not contribute equally to the spatial structures. The very high-frequency regions of a corrupted image's PSD are heavily affected by noise. Hence, it is more suitable to focus the similarity comparison on a band of frequencies around the center. In light of these insights, the metric is calculated as follows: first, a mask is applied to the decibel PSDs of the CNN's random output and the corrupted image; then, we calculate the MSE between them. As the mask, we employ a strip with inner and outer diameters of 10% and 20% of the image size, respectively, to reduce the dependency on the image size. A schematic representation of this metric can be seen in Fig. [4](#fig:metric summary){reference-type="ref" reference="fig:metric summary"} and it is formulated as $$\begin{equation}
56
+ \frac{1}{n} \sum_{i,j} \left( 10 \cdot \log X_{i,j} - 10 \cdot \log Y_{i,j} \right)^2 \cdot M_{i, j}
57
+ \end{equation}$$ where $X$ and $Y$ denote the corresponding power spectral densities of the CNN's random output and a corrupted image, respectively, $M$ denotes the mask, a 2-D array of 1s and 0s, and $n$ denotes the number of non-zero pixels of the mask $M$.
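A sketch of the masked variant. We read the 10%/20% figures as diameters relative to the shorter image side, so the strip radii are half of those fractions; this reading, and the inline decibel-PSD helper, are our own choices:

```python
import numpy as np

def _psd_db(img):
    psd = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2 / img.size
    return 10.0 * np.log10(psd + 1e-12)

def strip_mask(shape, inner_diam=0.10, outer_diam=0.20):
    """Binary strip (annulus) centred on the spectrum: 1 between the inner
    and outer radii, 0 elsewhere. Diameters are fractions of the image size."""
    h, w = shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2.0, xx - w / 2.0)
    size = min(h, w)
    return ((r >= inner_diam * size / 2.0) & (r < outer_diam * size / 2.0)).astype(float)

def psd_db_strip_mse(cnn_output, corrupted):
    x_db, y_db = _psd_db(cnn_output), _psd_db(corrupted)
    m = strip_mask(x_db.shape)
    return float(np.sum(((x_db - y_db) ** 2) * m) / np.count_nonzero(m))
```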
58
+
59
+ To make our metrics rotation invariant, we use histograms of the PSDs. In this metric, we first calculate the PSDs of the CNN's random output and the corrupted image. Then, we discard the entries of the PSDs where the corresponding entry of a mask is zero; for this, we use the mask defined for the previous metric (PSD DB Strip MSE). Afterwards, the PSDs are flattened into two 1-D arrays, which are then converted to histogram representations. Finally, we calculate the earth mover's distance (EMD) between these two histograms. The range and number of bins of the histograms were determined empirically: in our experiments, we use 75 bins over the range 0-1. A schematic representation of this metric can be seen in Fig. [4](#fig:metric summary){reference-type="ref" reference="fig:metric summary"}.
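For two histograms over identical bins, the 1-D EMD reduces to the area between their cumulative distributions. A sketch of this metric follows; the way PSD values are scaled into the histograms' 0-1 range is our assumption about an unstated detail:

```python
import numpy as np

def emd_1d(hist_p, hist_q, bin_width):
    """Earth mover's distance between two 1-D histograms on the same bins."""
    p = hist_p / hist_p.sum()
    q = hist_q / hist_q.sum()
    return float(np.sum(np.abs(np.cumsum(p - q))) * bin_width)

def psd_hist_emd(psd_x, psd_y, mask, bins=75, value_range=(0.0, 1.0)):
    # Keep only entries inside the strip mask, then scale into [0, 1]
    # so they fall within the histogram range (assumed normalization).
    vx, vy = psd_x[mask > 0], psd_y[mask > 0]
    vx, vy = vx / vx.max(), vy / vy.max()
    hx, edges = np.histogram(vx, bins=bins, range=value_range)
    hy, _ = np.histogram(vy, bins=bins, range=value_range)
    return emd_1d(hx.astype(float), hy.astype(float), edges[1] - edges[0])
```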
60
+
61
+ In our evaluation, we also include an image-independent metric based on the PSD, inspired by the structural bias of CNNs as exploited by Heckel [@Heckel2020DenoisingAR]. This allows us to dissect whether the contribution comes from using the PSD of the CNN-generated image or from the image dependency of the metrics described above. The structural part of an image can be thought of as its low-frequency component, just as noise and corruptions correspond to high-frequency components. Relying on the hypothesis that a randomly initialized CNN whose output spectrum is concentrated in low-frequency regions tends to perform better in restoration tasks such as denoising and inpainting, we propose a metric that measures the low-pass characteristic of a CNN and use it as our image-independent metric.
62
+
63
+ A straightforward method to quantify the low-frequency nature of an array is to calculate its bandwidth. We define the $P\%$ bandwidth of a 2-D array as the radius of the circle containing $P\%$ of the total energy in the PSD of the array.
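This definition can be sketched by sorting PSD pixels by their distance from the spectrum centre and finding where the cumulative energy first reaches $P\%$ (a minimal NumPy version; details such as tie handling are our own choices):

```python
import numpy as np

def bandwidth(img, p=99.0):
    """P% bandwidth: radius of the smallest centred circle in the PSD
    whose interior contains at least p% of the total spectral energy."""
    psd = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = psd.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2.0, xx - w / 2.0)
    order = np.argsort(r, axis=None)            # pixels, nearest to centre first
    energy = np.cumsum(psd.flat[order])
    idx = np.searchsorted(energy, (p / 100.0) * energy[-1])
    return float(r.flat[order][idx])
```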
64
+
65
+ Obviously, the bandwidth of a CNN does not depend on the image to be reconstructed, which makes it an image-independent metric. To choose a value of $P$, we created several outputs from randomly initialized CNNs and selected the value resulting in the most variation in bandwidths. This was $P = 99$, so we use this value in our experiments.
2112.08544/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2021-11-01T01:32:12.061Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0.2 Safari/605.1.15" version="15.5.8" etag="Dx_P5tRwdqZtTHkNzKe2" type="device"><diagram id="_Qhvysq75VgtT0szBVpV">7Vxbk6I4FP41PnYXEEB57NvsVO1lpqp3a58jBGUGiBNiq/vrN4EECQTBNjrdTvdL6yHX79xPghPwkG1/I3C1/BNHKJ04VrSdgMeJ49iu47N/nLITlCBwK8qCJJGg7QnPyX9IEC1BXScRKpSGFOOUJiuVGOI8RyFVaJAQvFGbxThVZ13BBeoQnkOYdqn/JhFdVtSZM93TP6NksZQz235QPcmgbCx2UixhhDcNEniagAeCMa0+ZdsHlHL0JC5Vv089T+uFEZTTMR2cqsMLTNdib2JddCc3S/A6jxBvb0/A/WaZUPS8giF/umH8ZbQlzVLxOMY5Ffyy2WbuI1gs674FJfh7jVjZPEnTB5xiUk4F4jh2wrBu2XgS+XPf89kTsV5EKNr27tmukWQyiHCGKNmxJqKDNxVdavETzNjseelIiVwqfHRuPSFFQoIW9eh7kNkHgbMec/CBueNPJZIK6N7ZQHeHQV8w1FddgCIPzSJXB9DMmQP/IEDCAMG5nMM6Fjjbs1RhtbrCCuSwCm5Sqk9BzRtGbUA4YZoscvY5RTE1i20TQ1eP4UUw8jUY+SnfbJS8KFj5P9aYCoW9KUqNvWMNbG+1LTcqn7NPC/H/4DgMEXojAOYDfVsXNIl3ZU/r2EmKFcyPWm01QwyzJN1VjyBJYFo94/xKmMNsLm8OC5QmOdIu4yvBN78TlLEGckUM+GpRb2qhEy5Ib3mBGYoS+LaXyNWQqRxrhRBfx2pdLJN8wUctVavAhPLHOOZRYhLHqFR19jiPSjItuzHrSgmMkpDiUt1zFttBmryg4pbNBDNuivJ5sWpsnG+j7CkNzX5pzBNaFreue9IJuvv3EpUgMWBSxPdCl5DPXMBN+YU/nSeLBSo4FeULFjNm1SbzdTZHpKjRCFOYMKbux6h6vyRkzRttSiQzmK9jGNI1KVvOd3W7f3JmoDntmUJaQSNFo0KjFo2jEeN4xbEhxD4RnNWL/vfzl4nzUH0lqC0vOeb2N48xYQF1gisS4Ti+sMQgD3kHytsUa4mwAl2ONuWWCM6hhDHE67REruxMYF5kCaV7MDNc/FgnFB9E8JDOMXJpyA9TW96Vm3jVoaqeMselijXdqiC1HK/U0TtBzpIo4pNog8tmROmd6INlB03YYp3LJU+HwxaWeK34xzhF2zueErJtojwSHx+ZzhVFEvYH2qAbzNhzaCNHF8xYlv9096kGEkWd9HIwHGzg5mlgkzSC0tICqimvBkoxw1eclOotuOTO6mBbxpuzQB2kwGsSItGvmVm2hvJ8zVDtSJ5CskC0M1jJ33rzo1g+67D8L7Then1XmeAzaFbIeIbICbrFIgnxXSzLNpNsOa6aM/hd3bNdjRABA6oXdPjwwN0XIz0iylKh0lq3WMFGSFZFH2A9CujqONHSSGSzBGOq08jAnwJoKLd1p64q5xpbFzj9OnsK3HKqEzK0trEfAjWGURB5OlAhcm3ADaBavii+IxpyHLi41wUvy1RhwW+ZmTrJblYWbNDFX7LtJPzt3uzvcIzdE0q3gyUwkLWJ6Gw46uoONDaku/bo7M1FXMCUZgQtvZh29cKq63BNzfBNWKYRpeXLW6YFj+uE9WlZqaZhUkzWWaxUoCl/2kDHjJkJZuhqzscVqYYs0SXNnbn0vW/C95i+H5N8Xq3Rm03VWMzq6pkdaPXMiNHTHTO8wlP3KsD8NKnGK57T8jb1gMSI9jNg5pL2
hSQLXsUty3dGZlE0pkf8ByV6fNpGEFuTOLPhDmDF89NSKrz7iffIx1pTXAjJPUJZziDw06B1PqRJPvwDBYOTpF13PHSMnLo9/JYp4zNnVRUwOjpJO4Mde11Svw8lrEk3TTXAZt8L2tUUzam1rTvkMpHV2/2HXOM5PYbN9i/OZt9rsVl7UH4+Po+onErE/oBzlH5lZrDM8cDjHFPKcsQupBS34npZe822C35B6HYOiyS8XeF0t+Aj3fNPDxiTqLa599YtX9qDdQsAt8As+WzQnJpka77b7Q4dgnXrBg2aJWbqpZUugC+yOuDqCMO+Bty8d1EnGJpLGBFEs1h7CcMPZ2gem5GumaP6ihunI1lA5yuAzOVOEq1uhbZhQl5Z1ThkVobC5GPMyxuLHRzxXTliKLMfI3JiW9NWUOF2bZDuFMIxYYK6FeQOg+qbOmMt8On3cGZBK7EAOvc7PU+g5eiqvCZUheWUld9tFOfHxdevqtk7OlU6V80+OMjPG9XNakyh7jKaBwwwU1cybqH7c04n2Qy1w7Kq72LAKfe7jNXi6f5M+PBRZg8LLnOWWd+Uk/VPt336OPYo03ZV3XfGnWIybsFdo5lwBf0LdrTT7MWqGvC1J6RSzM5Zm2iXGXTOdl8REK1lNb+8AdR+eo2pv2Mo9Z8BVV40Bf7gTKn/mEvMlZc+n0sOWgo+7XrkunhuutCnuU9cZ7NVIfxpSwl8p+fdFXf7fafdyiPsrtzpIiED17jH3Ei+DufZx4PLOM9gppqWzs2d8b5TDbTaVQpjvlM7jTHfqat/GTd3A2pXvwrQTDnAuQzciErQUW+pKOerzWNY5ax1uIxypjcIKg4fVDdt1c0I1v2lEc2N0fEHrfWVXoYGKP1HefLaphq4XaJNLj/OOU855+wTSHnpTDV3Y4vERhLY0RWay8V+2nf4dMFffT520ht8I27dGXAHPelz/61O7Y5NhPug/5qbgVotC2XVePlKSlDA0jOwHUY3lVZ3J94zcFER6LL/qwyc+1D/qDr9jKoTuEihYEDTui/RdhVPF0gD7xYYcJhgxMvH1xRLg548Zqf0GDzPBgz85p9vgBP9N1bec6StvDxrtV8A/SXD5z4Z3PbLoOa8x5QBuEz54HC82N2yM9W8FWEioQW64oHBo33ry/wb/12X64waDyd+N/KVE1l8dbuirA0ipwb4qitUXGcQOVwO+ggiLxdEXqTiMKB4Y4JIR2NQuQ9xgv3fzEA2J9/e/GUiymNd28UiSre/MvKeI0ptTfaXjykPXynSlGBtYzEl+7r/VbjKhO5/XA88/Q8=</diagram></mxfile>
2112.08544/main_diagram/main_diagram.pdf ADDED
Binary file (78.1 kB). View file
 
2112.08544/paper_text/intro_method.md ADDED
@@ -0,0 +1,163 @@
 
 
1
+ # Introduction
2
+
3
+ The internet era has ushered in an explosion of online content creation, resulting in increased concerns regarding misinformation in news, online debates, and social media. A key element of identifying misinformation is detecting the claims and the arguments that have been presented. In this regard, news articles are particularly interesting as they contain claims in various formats: from arguments by journalists to reported statements by prominent public figures.
4
+
5
+ Check-worthiness estimation aims to decide if a piece of text is worth fact-checking, i.e., whether it contains an important verifiable factual claim . Most current approaches largely ignore relevant attributes of the claim (e.g., the claimer and the primary object associated with the claim). Moreover, current claim detection tasks mainly identify claims in debates , speeches , and social media , where the claim source (i.e., the claimer) is known.
6
+
7
+ [t]
8
+ \centering
9
+ \includegraphics[scale=0.21]{images/intro-ex-2.png}
10
+ \caption{A news article containing a claim regarding the origin of COVID-19 with the claim sentence in italics, the claim span in red, and the claimer in blue. Also shown are the claimer stance and the claim object.}
11
+
12
+ News articles, on the other hand, have more complex arguments, requiring a deeper understanding of what each claim is about and identifying where it comes from.
13
+ Thus, here we introduce the notion of claim object, which we define as an entity that identifies what is being claimed with respect to the topic of the claim. Figure shows a claim about the origin of COVID-19, suggesting that the virus came from space, which is the claim object.
14
+ We further identify the claimer, which could be useful for fact-checking organizations to examine how current claims compare to previous ones by the same person/organization.
15
+ In this regard, we extend the claim detection task to ask for the extraction of more attributes related to the claim. Specifically, given a news article, we aim to extract all claims pertaining to a set of topics along with the corresponding claim span, the claimer, the claimer's stance, and the claim object for each claim. The claim attributes enable comparing claims at a more fine-grained level: claims with the same topic, object and stance can be considered equivalent whereas those with similar claim objects but opposing stance could be contradicting.
16
+
17
+ We note that while identifying the claim span and stance have been explored independently in prior work , we bring them into the purview of a unified claim detection task.
18
+
19
+ To promote research in this direction, we release \datasetname{}, a new evaluation benchmark for claim detection.
20
+
21
+ We consider this in an evaluation setting since claims about new topics can emerge rapidly\footnote{harmful-content-blog-post}, requiring systems that are effective under zero/few-shot settings. \datasetname{} aims to study how existing NLP techniques can be leveraged to tackle claim detection in emerging scenarios and regarding previously unseen topics. We explore multiple zero/few-shot strategies for our \subtasks{} including topic classification, stance detection, and claim object detection. This is in line with recent progress in using pre-trained language models in zero/few-shot settings . Such approaches can be adapted to new use cases and problems as they arise without the need for large additional training data.
24
+
25
+ In our benchmark, all news articles are related to the COVID-19 pandemic, motivated by multiple considerations. First, COVID-19 has gained extensive media coverage, with the World Health Organization coining the term infodemic\footnote{COVID-19 Infodemic} to refer to disinformation related to COVID-19 and suggesting that ``fake news spreads faster and more easily than this virus''. Second, this is an emerging scenario with limited previous data related to the virus, making it a suitable candidate for evaluating claim detection in a low-resource setting. \datasetname{} covers claims about four COVID-19 topics, namely the origin of the virus, possible cure for the virus, the transmission of the virus, and protecting against the virus.
26
+
27
+ Our contributions include
28
+
29
+ (i) extending the claim detection task to include more attributes (claimer and object of the claim),
30
+ (ii) releasing a manually annotated evaluation benchmark for this new task, \datasetname{}, which covers multiple topics related to COVID-19 and is the first dataset with such extensive annotations for claim detection in the news, with 889 claims from 143 news articles, and
33
+
34
+ (iii) demonstrating promising performance of various zero-shot and prompt-based few-shot approaches for the claim detection task.
35
+
36
+ # Method
37
+
38
+ Our task is to identify claims related to a set of topics in a news article along with corresponding attributes such as the claimer, the claim object, and the claim span and stance, as shown in Figure .
39
+
40
+ \noindent Claim Sentence Detection:
41
+ Given a news article, the first \subtask{} is to extract claim sentences relevant to a set of pre-defined topics. This involves first identifying sentences that contain factually verifiable claims, similar to prior work on check-worthiness estimation, and then selecting
42
+ those that are related to the target topics. To address misinformation in an emerging real-world setting, we consider the following topics related to COVID-19:
43
+
44
+ Origin of the virus: claims related to the origin of the virus (i.e., location of first detection, zoonosis, `lab leak' theories);
45
+ Transmission of the virus: claims related to who/what can transmit the virus or conditions favorable for viral transmission;
46
+ Cure for the virus: claims related to curing the virus, (e.g., via medical intervention after infection); and
47
+ Protection from the virus: claims related to precautions against viral infection.
48
+
49
+ \noindent Claimer Detection: Claims within a news article can come from various types of sources such as an entity (e.g., person, organization) or published artifact (e.g., study, report, investigation). In such cases, the claimer identity can usually be extracted from the news article itself. However, if the claim is asserted by the article author or if no attribution is specified or inferrable, then the article author, i.e. the journalist, is considered to be the claimer. The claimer detection \subtask{} involves identifying whether the claim is made by a journalist or whether it is reported in the news article, in which case the source is also extracted. Moreover, sources of such reported claims need not be within the claim sentence. In our dataset \datasetname{}, the claimer span was extracted from outside of the claim sentence for about 47\% of the claims. Thus, the claimer detection \subtask{} in our benchmark requires considerable document-level reasoning, making it harder than existing attribution tasks, which require only sentence-level reasoning.
50
+
51
+ \noindent Claim Object Detection: The claim object relates to what is being claimed in the claim sentence with respect to the topic. For example, in a claim regarding the virus origin, the claim object could be the species of origin in zoonosis claims, or who created the virus in bioengineering claims. Table shows examples of claim objects
52
+ from each topic. We see that the claim object is usually an extractive span within the claim sentence.
53
+ Identifying the claim object helps to better understand the claims and potentially identify claim--claim relations, since two claims with the same object are likely to be similar.
54
+
55
+ [!htb]
56
+ \small
57
+
58
+ \centering
59
+ {|l|p{16em}|}
60
+
61
+ \hline
62
+ Topic & Claim Sentence \\
63
+ \hline
64
+ Origin & The genetic data is pointing to this virus coming from a bat reservoir, he said.\\
65
+ \hline
66
+ Transmission & The virus lingers in the air indoors, infecting those nearby \\
67
+ \hline
68
+ Cure & Vitamin C is an effective treatment for COVID-19. \\
69
+ \hline
70
+ Protection & Taking a hot bath prevents you from getting COVID-19. \\
71
+ \hline
72
+
73
+ \caption{Examples showing the claim object in bold for claims corresponding to \datasetname{} topics.}
74
+
75
+ [!htb]
76
+ [c]{0.35\linewidth}
77
+ \centering
78
+ \includegraphics[width=1\linewidth]{images/dist_hist.png}
79
+ \caption{claim counts per news article}
80
+
81
+ [c]{0.31\linewidth}
82
+ \centering
83
+ \includegraphics[width=1\linewidth]{images/claimer_dist_3.png}
84
+ \caption{claims by journalists vs. reported ones, along with claimer coverage for reported claims\vspace{-2.3em}}
85
+
86
+ \hfill
87
+ [c]{0.31\linewidth}
88
+ \centering
89
+ \includegraphics[width=1\linewidth]{images/claimer_coverage.png}
90
+ \caption{coverage of claimer within a window size based on number of sentences around the claim sentence\vspace{-0.2em}}
91
+
92
+ \hfill
93
+ \caption{\datasetname{} benchmark statistics: (a) number of claims per news article, (b) claims by journalists vs. reported claims, and (c) claimer coverage by window size within the news article for reported claims.}
94
+
95
+ \noindent Stance Detection: This \subtask{} involves outputting whether the claimer is asserting (affirm) or refuting (refute) a claim within the given claim sentence. We note that stance detection in \datasetname{} differs from the task formulation used in other stance detection datasets as it involves identifying the claimer's stance within a claim sentence -- whereas prior stance detection tasks, as described in a recent survey by , involve identifying the stance for target--context pairs. For example, given pairs such as claim--evidence or headline--article, it involves identifying whether the evidence/article at hand supports or refutes a given claim/headline.
96
+
97
+ \noindent Claim Span Detection: Given a claim sentence, this \subtask{} aims to identify the exact claim boundaries within the sentence, including the actual claim content, usually without any cue words (e.g., asserted, suggested) and frequently a contiguous \subspan{} of the claim sentence. Identifying the precise claim conveyed within the sentence can be useful for downstream tasks such as clustering claims and identifying similar or opposing claims.
98
+
99
+ # Dataset
100
+
101
+ In this work, we build \datasetname{}, a new benchmark dataset for evaluating the performance of models on different components of our claim detection task. Specifically, we release an evaluation set based on news articles about COVID-19, which can be used to benchmark systems on detecting claim sentences and associated attributes including claim objects, claim span, claimer, and claimer stance.
102
+
103
+ \datasetname{} uses news articles from the LDC corpus LDC2021E11, from which we selected those related to COVID-19. We describe below the annotation process (Section ) and provide statistics about \datasetname{} (Section ).
104
+
105
+ Given a news article, we split the annotation process into two phases: (i) identifying claim sentences with their corresponding topics, and (ii) annotating the attributes for these claims.\footnote{Detailed annotation guidelines and screenshots of the interface are provided in Section in the appendix.}
106
+
107
+ In the first phase, the interface displays the entire news article with a target sentence highlighted in red. The annotators are asked whether the highlighted sentence contains a claim associated with the four pre-defined COVID-19 topics and to indicate the specific topic if that is the case.
108
+
109
+ In the second phase, the interface displays the entire news article with a claim sentence highlighted in red. The annotators are asked to identify the claim span, the claim object, and the claimer from the news article. The annotators are also asked to indicate the claimer's stance regarding the claim. We provide a checkbox to use if there is no specified claimer, in which case the journalist is considered to be the claimer.
110
+
111
+ For the first stage of annotation, which involves identifying claim sentences (and their topics) from the entire news corpus, we used 3 annotators per example hired via Mechanical Turk . Only sentences with unanimous support were retained as valid claims. For the second stage, which involves identifying the remaining attributes (claim object, span, claimer, and stance), we used expert annotators to ensure quality, with 1 annotator per claim sentence.
112
+
113
+ Annotators took ${\sim}30$ seconds per sentence in the first phase and ${\sim}90$ seconds to annotate the attributes of a claim in phase two. For claim sentence detection, the inter-annotator agreement had a Krippendorff's kappa of 0.405, which is moderate agreement; this is on par with previous datasets that tackled identifying topic-dependent claims , which is more challenging than topic-independent claim annotation .
114
+
115
+ \datasetname{} consists of development and test sets with 18 articles containing 103 claims and 125 articles containing 786 claims, respectively. The development set can be used for few-shot learning or for fine-tuning model hyper-parameters. Figure shows a histogram of the number of claims in a news article where most news articles contain up to 5 claims, but some have more than 10 claims. Claims related to the origin of the virus are most prevalent, with the respective topic distribution being 35\% for origin, 22\% for cure, 23\% for protection, and 20\% for transmission. Figure shows the distribution of claims by journalists vs. reported claims: we can see that 41\% of the claims are made by journalists, with the remaining 59\% coming from sources mentioned in the news article. Moreover, for reported claims, the claimer is present outside of the claim sentence 39\% of the time, demonstrating the document-level nature of this task. Figure shows the claimer coverage (in \%) based on a window around the claim by the number of sentences and indicates that document-level reasoning is required to identify the claimer, with some cases even requiring inference beyond a window size of 15. Note that the 61\% inside-sentence coverage in Figure corresponds to a window size of 1 in Figure .
116
+
117
+ [ht]
118
+ [b]{0.48\textwidth}
119
+ \includegraphics[scale=0.12]{images/MNLI_topic_2.png} \caption{zero-shot NLI for topic classification}
120
+
121
+ \hfill
122
+ [b]{0.51\textwidth}
123
+ \includegraphics[scale=0.13]{images/MNLI_stance_2.png} \caption{zero-shot NLI for stance detection}
124
+
125
+ \caption{Diagram (a) shows the template and an example for leveraging a pre-trained NLI model for zero-shot topic classification; topic corresponding to the hypothesis with the highest entailment score is taken as the claim sentence topic. Diagram (b) shows examples for leveraging a pre-trained NLI model for zero-shot stance detection. Each example shows hypothesis construction based on the class label (in pink) and the topic (in blue).}
126
+
127
+ In this section, we describe various zero-shot and prompt-based few-shot learning baselines for the claim detection \subtasks{} outlined in Section . We describe a diverse set of baselines with each chosen to be relevant in an evaluation-only setting.
128
+
129
+ Given a news article, we aim to detect all sentences that contain claims related to a pre-defined set of topics regarding COVID-19. We use a two-step procedure that first identifies sentences that contain claims and then selects those related to COVID-19.
130
+
131
+ \paragraph{Step 1. ClaimBuster:} To identify sentences containing claims, we use ClaimBuster ,\footnote{https://idir.uta.edu/claimbuster/api/} a claim-spotting system trained on a dataset of check-worthy claims .
132
+
133
+ As ClaimBuster has no knowledge about topics, we use zero-shot topic classification, as described below.
134
+
135
+ \paragraph{Step 2. ClaimBuster+Zero-shot NLI:} Following , we use pre-trained NLI models as zero-shot text classifiers: we pose the claim sentence to be classified as the NLI premise and construct a hypothesis from each candidate topic. Figure shows the hypothesis corresponding to each of the topics. We then use the entailment score for each topic as its topic score and choose the highest topic score for threshold-based filtering.
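The template-and-argmax machinery of this step can be sketched as follows. The hypothesis wordings below are illustrative (the paper's exact templates appear in its figure), and `entailment_fn` stands in for a pre-trained NLI model such as BART-large-MNLI returning an entailment probability:

```python
# Hypothetical hypothesis templates, one per pre-defined topic.
TOPIC_HYPOTHESES = {
    "origin":       "This claim is about the origin of the virus.",
    "cure":         "This claim is about a cure for the virus.",
    "transmission": "This claim is about the transmission of the virus.",
    "protection":   "This claim is about protecting against the virus.",
}

def classify_topic(claim_sentence, entailment_fn, threshold=0.5):
    """entailment_fn(premise, hypothesis) -> entailment probability.
    Returns the best-scoring topic, or None if its score falls below threshold."""
    scores = {t: entailment_fn(claim_sentence, h) for t, h in TOPIC_HYPOTHESES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None
```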
136
+
137
+ Given the claim sentence and a topic, claim object detection seeks to identify what is being claimed about the topic, as shown in Table .
138
+ We explore this \subtask{} in both zero-shot and few-shot settings by converting it into a prompting task for pre-trained language models as described below:
139
+
140
+ \paragraph{In-context learning (few-shot):} This setting is similar to , where the few-shot labeled examples are inserted into the context of a pre-trained language model. The example for which a prediction is to be made is included as a prompt at the end of the context. We refer the reader to Section in the appendix for an example. We use GPT-3 as the language model in this setting.
141
+
142
+ \paragraph{Prompt-based fine-tuning (few-shot):} Following , we fine-tune a pre-trained language model, base-T5 , to learn from a few labeled examples. We convert the examples into a prompt with a format similar to the language model pre-training, which for this model involves generating the target text that has been replaced with a $<$MASK$>$ token in the input. Thus, we convert the few-shot data into such prompts and generate the claim object from the $<$MASK$>$ token. For example, given the claim sentence: Research conducted on the origin of the virus shows that it came from bats, and its topic (origin of the virus), the prompt would be: Research conducted on the origin of the virus shows that it came from bats. The origin of the virus is $<$MASK$>$.
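As a concrete sketch, the prompt construction for this setting is plain string templating. Only the "origin" phrasing is taken from the paper's example; the other topic phrasings are our own illustrative guesses:

```python
# Cloze-style templates per topic; only "origin" follows the paper's example.
TOPIC_PHRASES = {
    "origin":       "The origin of the virus is",
    "cure":         "The cure for the virus is",
    "transmission": "The virus is transmitted by",
    "protection":   "Protection from the virus is",
}

def build_prompt(claim_sentence, topic, mask_token="<MASK>"):
    """Cast claim-object detection as filling the masked span."""
    return f"{claim_sentence} {TOPIC_PHRASES[topic]} {mask_token}."
```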
143
+
144
+ \paragraph{Prompting (zero-shot):} We consider the language models that were used in few-shot settings above with the same prompts but in zero-shot settings here. In this case, GPT-3 is not provided with any labeled examples in the context and T5 is used out-of-the-box without any fine-tuning.
145
+
146
+ Given the claim sentence, stance detection identifies if the claimer is asserting or refuting the claim.
147
+
148
+ \paragraph{Zero-shot NLI:} We leverage NLI models for zero-shot classification. Here, we construct a hypothesis for the affirm and the refute labels and we take the stance corresponding to a higher entailment score. We consider two settings while constructing the hypothesis based on claim topic availability. Examples are shown in Figure .
149
+
150
+ Given a claim sentence, claim span detection identifies the exact claim boundaries within the sentence.
151
+
152
+ \paragraph{Debater Boundary Detection:} Our first baseline uses the claim boundary detection service from the Project Debater\footnote{Project Debater} APIs . This system is based on BERT-Large, which is further fine-tuned on 52K crowd-annotated examples mined from the Lexis-Nexis corpus.\footnote{http://www.lexisnexis.com/en-us/home.page}
153
+
154
+ \paragraph{PolNeAR-Content:} Our second baseline leverages PolNeAR , a popular news attribution corpus of annotated triples comprising the source, a cue, and the content for statements made in the news. We build a claim span detection model from it by fine-tuning BERT-large to identify the content span, with a start classifier and an end classifier on top of the encoder outputs, given the sentence as an input.
155
+
156
+ This \subtask{} identifies if the claim is made by the journalist or a reported source, in addition to identifying the mention of the source in the news article.
157
+
158
+ \paragraph{PolNeAR-Source:} We leverage the PolNeAR corpus to build a claimer extraction baseline. Given a statement, we use the source annotation as the claimer and mark the content span within the statement using special tokens. We then fine-tune a BERT-large model to extract the source span from the statement using a start classifier and an end classifier over the encoder outputs. At evaluation time, we use the news article as an input, marking the claim span with special tokens and using the sum of the start and the end classifier scores as a claimer span confidence score. This is thresholded to determine if the claim is by the journalist, with the claimer span used as an output for reported claims.
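+ The evaluation-time decision reduces to thresholding the summed classifier scores. A minimal sketch, where the score values and the threshold are illustrative assumptions:

```python
# Hypothetical attribution rule: the sum of the start and end classifier
# scores for the best claimer span is a confidence; below the threshold,
# the claim is attributed to the journalist.
def attribute_claim(start_score, end_score, claimer_span, threshold=1.0):
    confidence = start_score + end_score
    if confidence < threshold:
        return "journalist", None       # no reported source extracted
    return "reported", claimer_span     # emit the extracted claimer span

print(attribute_claim(0.2, 0.3, "the senator"))
print(attribute_claim(0.9, 0.8, "the senator"))
```

The threshold would be tuned on development data to trade off journalist vs. reported-source errors.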
+
+ \paragraph{SRL:} We build a Semantic Role Labeling (SRL) baseline for claimer extraction. SRL outputs the verb predicate-argument structure of a sentence, i.e., who did what to whom. Given the claim sentence as an input, we filter the verb predicates, keeping those that match a pre-defined set of cues\footnote{Appendix contains the complete set of cues.} (e.g., say, believe, deny). Then, we use the span corresponding to the ARG-0 (agent) of the predicate as the claimer. As SRL works at the sentence level, this approach cannot extract claimers outside of the claim sentence. Thus, the system outputs journalist as the claimer when none of the verb predicates in the sentence matches the pre-defined set of cues.
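+ The cue-filtering rule above can be sketched as follows. The frames are hand-written stand-ins for real SRL parser output (e.g., from AllenNLP), and the cue list is a small illustrative subset.

```python
# Illustrative cue subset; the paper's full list is in its appendix.
CUES = {"say", "believe", "deny", "claim", "report"}

def srl_claimer(srl_frames):
    # One frame per verb predicate; keep the first whose lemma is a cue
    # and return its ARG0 (agent) span as the claimer.
    for frame in srl_frames:
        if frame["lemma"] in CUES and "ARG0" in frame["args"]:
            return frame["args"]["ARG0"]
    return "journalist"   # no cue verb in the sentence

frames = [
    {"lemma": "fall", "args": {"ARG1": "crime"}},
    {"lemma": "say", "args": {"ARG0": "The mayor", "ARG1": "crime fell"}},
]
print(srl_claimer(frames))
print(srl_claimer([]))
```

Because the frames come from a single sentence, a cross-sentence claimer can never be produced, which is exactly the limitation noted above.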
2202.05420/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
+ [draw.io mxfile (host app.diagrams.net, modified 2022-05-19, version 18.0.7); compressed diagram payload omitted]
QHYDt+nN3X+92XLbmGBwrgc8UKpxvM8jKA0Kv1r7MbUmddfGxdyu2inw14Yl3bvh1MVXu0tiTDJk95KeNT6xiDMxAu0NHQGdB0WRb/1r/kBR3ZyOVOHzfPSj/MwSetf2kDuBjd5Ow2LAj4PQjudJWfGrbDoQciqdmtnyHL15zSTKSKF4Jb01MQ3lmMdf/jgr85lHqjSN3G7a/HO3h1CQXsot8jyeeVIST3WOHqrxBSwEo8xjU50ILGJLW4wHKsQEPKY+ituSE/BjQWINoD+f2aXuoVv23knTDuAdFz37LgHm1LX1SwWuJdf9zb/u9F/DIfFMVvCoXsCOvkoLV7Y9VV8vzYmlymNDFFtuYbWptJf3d0MIj76jjJK8cr/s/TF4dFihnSRyfMWfRxjyrXbUYiiVLQeL/QrMWwPSt3Fp2w9lEXI4mMB9P89FovbUIOD1LzmVjXHHqmmhlna4q2Kko6r/dvX+wyHqpj0OTatDy33bGTwl3pOuoO4KX15XbN4J8omSMG282L4BsLEcU30oA5fdu9Vmig30gXp5LsenGgjmUFB4wuCEb5e/n47dUHeFHjRmtkDURLSAjnfNKTVtZG0JD0trz176jrSUyunkuwkXgc9RG992A7hksgNRXNEWVDJ2un3kLAQZRTF7Kv32RlQvOepDOSLlpj4zsnm7Hi2CFnutncrUyVNI33L8W9VaBhafGDuqlDTEU8N1CsqvrfK1aSF1s1iWCn14BmoMYgEZCjHqYliyYq/jjT8stsE1gf0IjS1NRLolyl0lqTfBJOhM76U00P0Vja8bLjpswm7oEuS4O80vpWfumdKF3/Pu02HwlMhhLmLBrv21PMnO+DTLFpazOsQgij6VREefUilSI7MMK1LHinxY3JFCKSOeOXkJxhK5sGLyfKT+npb6eHuOIluzH9aG0h80vjJveXU8S6bT59eL3G1auDvtAMeAtikHQ3CwovZyeeGrpahDzXa1mVSIX+0vjysHfegcx2NuhLTy3lfxafr9gS9eK8l6KCNtHX/8hN0l/4y9MvV5BNco2gyzFNAbAyAgxbZV3Ikv4ztRCnJxuxXxm79bL+Jm9O4qyu2ZvijR7a7xBq60U4NweMXpkTLKdy9Vf7u2jI31p3zCjXej4OVeoyj89LSrZ674zLaw6JqpW/CY+KV2Hm3XdtnWW1Xqte3rTSZ83s+VCuAHocL6d9soxmlHZtkfktbTsYndn5mDxC/D2esShnopet21z1Uey89yVeHB7M93xfuyK4UrF0OPdEDtEr8yrIH5c7Td1dGxcEHl49lymqAt2bpR6HkhmkaksbbEb61JcyGXMm0iF3+koIssRC364vIGLlnO/1i3/VUb2WTeDkfyo1rUhjOzX4N6mCwb7IjGTKoU3Onyu6mBmfbBYs1vgex5dlQZljsn6xaOmR8uEsOhPzlzeHJWeqkE28PPUxo70615eSFOI7tWLoxrJSr1MEhUSDdvnYzwSdHOsni5KY5DvKmmYUn0Dg803wqmIM5TprCFuckh106U40YfbRBji3XbPuNl2UkBYZEe2rdeKb/fcMODUi7eOV2NRROUPqjQbQDWPnAOQTiWvJjExZi4SSn7Ag99lan3aB839e2VkbBK1PQ6LBA5eIcZRHFJyXvRzGUg1CvParVFuUyNkC3NATAhOlbDdVS7D6txUHZphTQCRi6yJE0RraXUIs6nUoVBPO9UqYpTVfZtc6UolCtP/3++1tfH8kKueyfa7c1Pgvq/qWHh34sQ0fvjHxWIj39UIKLkvxH3/y4F4v0/VyDmfUbN87CD3/qhhxrEan2Db2Fv4MevxhDqBb8KwaLuuj8Fh39BseKR5mkK/r6s89Dmf/MKAv7x/F9f8f8YAXjFYujXv3knz2PY8/mvZiDPyvxfjv/fHoaI/OP4/vm3OQeErf7k/+Hi/2zE//gGc6jBnfx7P0oc+bt+lMh/vMQybHOa//Gpf5+3//xCf3ax/PNCazyX+foPF/oawF8f
+5/ZBEw7DcP6t2+f47HShiyH7/h/AA==</diagram></mxfile>
2202.05420/main_diagram/main_diagram.pdf ADDED
Binary file (39.9 kB). View file
 
2202.05420/paper_text/intro_method.md ADDED
@@ -0,0 +1,250 @@
1
+ # Introduction
2
+
3
+ The problem of learning predictors that are immune to adversarial corruptions at inference time is central to modern machine learning. The phenomenon of fooling learned models by adding imperceptible perturbations to their input, known as *adversarial examples*, illustrates a basic vulnerability of learning-based models. We study the model of adversarially robust $\pac$ learning in a *semi-supervised* setting.
4
+
5
+ Adversarial robustness has been shown to significantly benefit from semi-supervised learning, mostly empirically, but also theoretically in some specific cases of distributions [e.g., @carmon2019unlabeled; @zhai2019adversarially; @uesato2019labels; @najafi2019robustness; @alayrac2019labels; @wei2020theoretical; @levi2021domain]. In this paper we ask the following natural question. To what extent can we benefit from *unlabeled* data in the learning process of robust models in the general case? More specifically, what is the sample complexity in a distribution-free model?
6
+
7
+ Our semi-supervised model is formalized as follows. Let $\H\subseteq \{0,1\}^\X$ be a hypothesis class. We formalize the adversarial attack by a perturbation function $\U:\X\rightarrow 2^\X$, where $\U(x)$ is the set of possible perturbations (attacks) on $x$. In practice, $\U(x)$ is usually taken to be the $\ell_p$ ball centered at $x$. In this paper, we place no restriction on $\U$ besides $x\in\U(x)$. The robust error of hypothesis $h$ on a pair $(x,y)$ is $\sup_{z\in \U(x)}\I\sqparen{h(z)\neq y}$. The learner has access to both *labeled* and *unlabeled* examples drawn i.i.d. from an unknown distribution $\D$, and the goal is to find $h\in\H$ with low robust error on a random point from $\D$. The sample complexity in semi-supervised learning has two parameters: the number of labeled examples and the number of unlabeled examples that suffice to ensure learning. The learner would like to limit the amount of labeled data, which is significantly more expensive to obtain than unlabeled data.
8
+
9
+ In this paper, we show a gap between the supervised and semi-supervised label complexities of adversarially robust learning in a distribution-free model. The label complexity in the semi-supervised setting may be arbitrarily smaller than in the supervised case, and is characterized by a different complexity measure. Importantly, we do not use more data, just less labeled data: the unlabeled sample size matches the labeled sample size a fully supervised method would require, so this is a strict improvement. Such a gap is known not to hold in standard (non-robust) $\pac$ learning; it is a unique property of robust learning.
10
+
11
+ The following complexity measure $\VCU$ was introduced by @montasser2019vc (and denoted there by $\dim_{\U \times}$) as a candidate for determining the sample complexity of supervised robust learning. It was shown that indeed its finiteness is necessary, but not sufficient. This parameter is our primary object in this work, as we will show that it characterizes the labeled sample complexity of *semi-supervised* robust $\pac$-learning.
12
+
13
+ ::: definition
14
+ []{#def:vcu label="def:vcu"} A sequence of points $\sett{x_1,\ldots,x_k}$ is $\U$-*shattered* by $\H$ if $\forall y_1,\ldots,y_k\in\sett{0,1}$, $\exists h\in \H$ such that $\forall i\in [k],\forall z\in \U(x_i)$, $h(z)=y_i$. $\VCU(\H)$ is the largest integer $k$ for which there exists a sequence $\sett{x_1,\ldots,x_k}$ that is $\U$-shattered by $\H$.
15
+ :::
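
For finite domains, classes, and perturbation sets, the shattering condition in Definition [\[def:vcu\]](#def:vcu){reference-type="ref" reference="def:vcu"} can be checked by brute force. The sketch below is our own illustration (not from the paper): a toy class containing all functions on a two-point domain, where a worst-case adversary that can reach the whole domain collapses the dimension from $\VC=2$ to $\VCU=1$.

```python
from itertools import combinations, product

def u_shattered(points, H, U):
    """Definition of U-shattering: every labeling is realized by some h
    that gives the required label on the *whole* perturbation set U(x_i)."""
    return all(
        any(all(h[z] == y for x, y in zip(points, ys) for z in U(x)) for h in H)
        for ys in product([0, 1], repeat=len(points))
    )

def vc_u(X, H, U):
    """Largest k such that some k-subset of the domain X is U-shattered by H."""
    return max(
        (k for k in range(1, len(X) + 1)
         if any(u_shattered(s, H, U) for s in combinations(X, k))),
        default=0,
    )

# Toy example: all 4 hypotheses (as dicts) on X = {0, 1}.
X = [0, 1]
H = [dict(zip(X, bits)) for bits in product([0, 1], repeat=2)]
U_id = lambda x: [x]        # no perturbation: recovers the standard VC dimension
U_all = lambda x: [0, 1]    # adversary may move any point anywhere in the domain

print(vc_u(X, H, U_id))   # 2: the standard VC dimension of all functions on 2 points
print(vc_u(X, H, U_all))  # 1: only the two constant hypotheses survive robustly
```

With the worst-case perturbation, a pair of points cannot be $\U$-shattered: the labeling $(0,1)$ would require one hypothesis to be constant $0$ and constant $1$ on the same set.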
16
+
17
+ Intuitively, this dimension relates to shattering of the entire perturbation sets, instead of one point in the standard $\VC$-dimension. When $\U(x)=\sett{x}$, this parameter coincides with the standard $\VC$. Moreover, for any hypothesis class $\H$, it holds that $\VCU(\H)\leq \VC(\H)$, and the gap can be arbitrarily large. That is, there exist $\H_0$ such that $\VCU(\H_0)=0$ and $\VC(\H_0)=\infty$ (see Proposition [\[prop:gap-vcu-dimu\]](#prop:gap-vcu-dimu){reference-type="ref" reference="prop:gap-vcu-dimu"}).
18
+
19
+ For an improved lower bound on the sample complexity, @montasser2019vc [Theorem 10] introduced the Robust Shattering dimension, denoted by $\mathrm{RS}_\U$ (and denoted there by $\dim_\U$).
20
+
21
+ ::: definition
22
+ []{#def:dimu label="def:dimu"} A sequence $x_1,\ldots,x_k$ is said to be $\U$-*robustly shattered* by $\F$ if $\exists z^+_1,z^-_1,\ldots,z^+_k,z^-_k$ such that $x_i \in \U\paren{z^+_i}\cap \U\paren{z^-_i}\;\forall i\in[k]$, and $\forall y_1,\ldots,y_k \in \sett{+,-}, \exists f\in \F$ with $f(\zeta)=y_i$, $\forall \zeta\in \U\paren{z^{y_i}_{i}}, \forall i\in[k]$. The $\U$-robust shattering dimension $\mathrm{RS}_\U(\H)$ is defined as the maximum size of a set that is $\U$-robustly shattered by $\H$.
23
+ :::
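
Definition [\[def:dimu\]](#def:dimu){reference-type="ref" reference="def:dimu"} is denser, but for finite instances it too admits a brute-force check. The sketch below is illustrative only (our encoding writes the labels $\{+,-\}$ as $\{1,0\}$); note that one fixed choice of witnesses $z_i^+, z_i^-$ must work simultaneously for all $2^k$ labelings.

```python
from itertools import product

def robustly_shattered(points, H, U, X):
    """U-robust shattering with labels {+,-} encoded as {1,0}."""
    for zs in product(product(X, repeat=2), repeat=len(points)):  # zs[i] = (z_i^+, z_i^-)
        if not all(x in U(zp) and x in U(zm) for x, (zp, zm) in zip(points, zs)):
            continue  # require x_i in U(z_i^+) and x_i in U(z_i^-)
        if all(
            any(all(h[w] == y for (zp, zm), y in zip(zs, ys)
                    for w in U(zp if y == 1 else zm))
                for h in H)
            for ys in product([0, 1], repeat=len(points))
        ):
            return True
    return False

# Toy instance: all 4 hypotheses on {0, 1}, adversary reaching the whole domain.
# Here RS_U happens to equal VC_U = 1; in general RS_U can be arbitrarily larger.
X = [0, 1]
H = [dict(zip(X, bits)) for bits in product([0, 1], repeat=2)]
U_all = lambda x: [0, 1]
print(robustly_shattered([0], H, U_all, X))     # True
print(robustly_shattered([0, 1], H, U_all, X))  # False
```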
24
+
25
+ Specifically, the lower bound on the sample complexity is $\Omega\paren{\frac{\mathrm{RS}_\U}{\epsilon}+\frac{1}{\epsilon}\log\frac{1}{\delta}}$ for realizable robust learning, and $\Omega\paren{\frac{\mathrm{RS}_\U}{\epsilon^2}+\frac{1}{\epsilon^2}\log\frac{1}{\delta}}$ for agnostic robust learning. They also showed upper bounds of $\Tilde{\O}\paren{\frac{\VC\cdot\VC^*}{\epsilon}+\frac{\log\frac{1}{\delta}}{\epsilon}}$[^4] in the realizable case and $\Tilde{\O}\paren{\frac{\VC\cdot\VC^*}{\epsilon^2}+\frac{\log\frac{1}{\delta}}{\epsilon^2}}$ in the agnostic case, where $\VC^*$ is the dual $\VC$ dimension (definitions are in [7](#app:prelim){reference-type="ref+label" reference="app:prelim"}). @montasser2019vc showed that for any $\H$, $\VCU(\H)\leq \mathrm{RS}_\U(\H) \leq\VC(\H)$, and there can be an arbitrary gap between them. Specifically, there exists $\H_0$ with $\VCU(\H_0)=0$ and $\mathrm{RS}_\U(\H_0)=\infty$, and there exists $\H_1$ with $\mathrm{RS}_\U(\H_1)=0$ and $\VC(\H_1)=\infty$.
26
+
27
+ - In [3](#sec:knowing-support){reference-type="ref+label" reference="sec:knowing-support"}, we first analyze the simple case where the support of the marginal distribution on the inputs is fully known to the learner. In this case, we show a tight bound of $\Theta\paren{\frac{\VCU(\H)}{\epsilon}+\frac{\log\frac{1}{\delta}}{\epsilon}}$ on the labeled complexity for learning $\H$.
28
+
29
+ - In [4](#sec:realizable){reference-type="ref+label" reference="sec:realizable"}, we present a generic algorithm that can be applied both for the realizable and agnostic settings. We prove an upper bound and nearly matching lower bounds on the sample complexity in the realizable case. For semi-supervised robust learning, we prove a labeled sample complexity bound $\Lambda^{\mathrm{ss}}$ and compare to the sample complexity of supervised robust learning $\Lambda^{\mathrm{s}}$. Our algorithm uses $\Lambda^{\mathrm{ss}} = \Tilde{\O}\paren{\frac{\VCU}{\epsilon}+\frac{1}{\epsilon}\log\frac{1}{\delta}}$ *labeled* examples and $\O(\Lambda^{\mathrm{s}})$ *unlabeled* examples. Recall that $\Lambda^{\mathrm{s}} =\Omega(\mathrm{RS}_\U)$, and since $\mathrm{RS}_\U$ can be arbitrarily larger than $\VCU$, this means our labeled sample complexity represents a strong improvement over the sample complexity of supervised learning.
30
+
31
+ - In [5](#subsec:agnostic){reference-type="ref+label" reference="subsec:agnostic"}, we prove upper and lower bounds on the sample complexity in the agnostic setting. We reveal an interesting structure, which is inherently different from that of the realizable case. Let $\eta$ be the minimal agnostic error. If we allow an error of $3\eta+\epsilon$, it is sufficient for our algorithm to have $\Lambda^{\mathrm{ss}}=\Tilde{\O}\paren{\frac{\VCU}{\epsilon^2}+\frac{\log\frac{1}{\delta}}{\epsilon^2}}$ *labeled* examples and $\O(\Lambda^{\mathrm{s}})$ *unlabeled* examples (as in the realizable case). If we insist on having error $\eta+\epsilon$, then there is a lower bound of $\Lambda^{\mathrm{ss}}=\Omega\paren{\frac{\mathrm{RS}_\U}{\epsilon^2}+\frac{1}{\epsilon^2}\log\frac{1}{\delta}}$ labeled examples. Furthermore, an error of $(\frac{3}{2}-\gamma)\eta+\epsilon$ is unavoidable if the learner is restricted to $\O(\VCU)$ labeled examples, for any $\gamma>0$. We also show that *improper* learning is necessary, similar to the supervised case. We summarize the results in [\[fig:results\]](#fig:results){reference-type="ref+label" reference="fig:results"}, showing for which labeled and unlabeled sample sizes we obtain a robust learner.
32
+
33
+ - The above results show that there is a significant benefit in semi-supervised robust learning. For example, take $\H_0$ with $\VCU(\H_0)=0$ and $\mathrm{RS}_\U(\H_0)=n$. The labeled sample size for learning $\H_0$ in supervised learning is $\Omega(n)$. In contrast, in semi-supervised learning our algorithm requires only $\O(1)$ *labeled* examples and $\O(n)$ *unlabeled* examples. We are not using more data, just less labeled data. Note that $n$ can be arbitrarily large.
34
+
35
+ - A byproduct of our result is that if we assume that the distribution is robustly realizable by a hypothesis class (i.e., there exists a hypothesis with zero robust error), then, with respect to the [non-robust]{.underline} loss (i.e., the standard $0$-$1$ loss), we can learn with only $\Tilde{\O}\paren{\frac{\VCU(\H)}{\epsilon}+\frac{\log\frac{1}{\delta}}{\epsilon}}$ labeled examples, even if the $\VC$ dimension is infinite. Recall that there exists $\H_0$ with $\VCU(\H_0)=0$, $\mathrm{RS}_\U(\H_0)=\infty$ and $\VC(\H_0)=\infty$. Learning linear functions with margin is a special case of this data-dependent assumption. Moreover, we show that this is obtained only by *improper* learning. (See [6](#sec:improved-01-loss){reference-type="ref+label" reference="sec:improved-01-loss"}.)
36
+
37
+ # Method
38
+
39
+ Let $\X$ be the instance space, $\Y$ a label space, and $\H\subseteq\Y^{\X}$ a hypothesis class. A perturbation function $\U: \X \rightarrow 2^{\X}$ maps an input to a set $\U(x)\subseteq \X$. Denote the 0-1 loss of hypothesis $h$ on $(x,y)$ by $\ell_{0\text{-}1}(h;x,y)=\I\sqparen{h(x)\neq y}$, and the robust loss with respect to $\U$ by $\ell_{\U}(h;x,y)=\underset{z\in \U(x)}{\sup}\I\sqparen{h(z)\neq y}$. Denote the support of a distribution $\D$ over $\X\times\Y$ by $\supp(\D)=\sett{(x,y)\in \X\times\Y:\D(x,y)>0}$. Denote the marginal distribution of $\D$ on $\X$ by $\D_{\X}$, and its support by $\supp(\D_{\X})=\sett{x\in\X:\exists y\in\Y \text{ s.t. } \D(x,y)>0}$. Define the *robust risk* of a hypothesis $h\in \H$ with respect to a distribution $\D$ over $\X \times \Y$, $$\risk_{\U}\paren{h;\D}=\E_{\paren{x,y}\sim \D}\sqparen{\ell_\U(h;x,y)}=\E_{\paren{x,y}\sim \D}\sqparen{\sup_{z\in \U(x)}\I\sqparen{h(z)\neq y}}.$$ The approximation error of $\H$ on $\D$, namely the optimal robust error achievable by a hypothesis in $\H$ on $\D$, is denoted by $$\risk_{\U}(\H;\D)=\inf_{h\in\H}\risk_{\U}\paren{h;\D}.$$ We say that a distribution $\D$ is *robustly realizable* by a class $\H$ if $\risk_{\U}(\H;\D)=0$.
40
+
41
+ Define the *empirical robust risk* of a hypothesis $h\in \H$ with respect to a sequence $S\in \paren{\X\times \Y}^*,$ $$\widehat{\risk}_{\U}\paren{h;S}
42
+ =
43
+ \frac{1}{|S|}\sum_{\paren{x,y}\in S}\ell_\U(h;x,y)
44
+ =
45
+ \frac{1}{|S|}\sum_{\paren{x,y}\in S}\sqparen{\sup_{z\in \U(x)}\I\sqparen{h(z)\neq y}}.$$ The *robust empirical risk minimizer* learning algorithm $\RERM:\paren{\X\times\Y}^*\rightarrow \H$ for a class $\H$ on a sequence $S$ is defined by $$\RERM_\H(S)\in\argmin_{h\in\H}\widehat{\risk}_{\U}\paren{h;S}.$$
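
When the class and the perturbation sets are finite, these definitions translate directly into code. The following sketch is our own illustration (the threshold example and all names are ours), implementing the robust loss, the empirical robust risk, and $\RERM$:

```python
def robust_loss(h, x, y, U):
    """0-1 robust loss: 1 iff some perturbation z in U(x) flips the prediction."""
    return int(any(h[z] != y for z in U(x)))

def robust_risk(h, S, U):
    """Empirical robust risk of h over a labeled sample S."""
    return sum(robust_loss(h, x, y, U) for x, y in S) / len(S)

def rerm(H, S, U):
    """Robust empirical risk minimizer over a finite class H."""
    return min(H, key=lambda h: robust_risk(h, S, U))

# Thresholds h_t(x) = 1[x >= t] on {0,...,5}; the adversary moves x by at most 1.
X = range(6)
H = [{x: int(x >= t) for x in X} for t in range(7)]
U = lambda x: [z for z in (x - 1, x, x + 1) if 0 <= z <= 5]

S = [(0, 0), (1, 0), (4, 1), (5, 1)]  # sample consistent with the threshold t = 3
best = rerm(H, S, U)
print(robust_risk(best, S, U))  # 0.0: t = 3 is the only robustly consistent threshold
```

Here only $t=3$ attains empirical robust risk $0$: any smaller threshold is flipped on a perturbation of $x=1$, and any larger one on a perturbation of $x=4$.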
46
+
47
+ When the perturbation function is the identity, $\U(x)=\{x\}$, we recover the standard notions. The *risk* of a hypothesis $h\in \H$ with respect to distribution $\D$ over $\X \times \Y$ is defined by $\risk\paren{h;\D}=\E_{\paren{x,y}\sim \D}\sqparen{\ell_{0\text{-}1}(h;x,y)}=\E_{\paren{x,y}\sim \D}\sqparen{\I\sqparen{h(x)\neq y}},$ and the *empirical risk* of a hypothesis $h\in \H$ with respect to a sequence $S\in \paren{\X\times \Y}^*$ is defined by $\widehat{\risk}\paren{h;S}
48
+ =
49
+ \frac{1}{|S|}\sum_{\paren{x,y}\in S}\ell_{0\text{-}1}(h;x,y)
50
+ =
51
+ \frac{1}{|S|}\sum_{\paren{x,y}\in S}\sqparen{\I\sqparen{h(x)\neq y}}.$ The *empirical risk minimizer* learning algorithm $\ERM:\paren{\X\times\Y}^*\rightarrow \H$ for a class $\H$ on a sequence $S$ is defined by $\ERM_\H(S)\in\argmin_{h\in\H}\widehat{\risk}\paren{h;S}.$
52
+
53
+ A learning algorithm $\A :\paren{\X\times\Y}^*\rightarrow \Y^{\X}$ for a class $\H$ is called *proper* if it always outputs a hypothesis in $\H$, otherwise it is called *improper*.
54
+
55
+ We define the supervised and semi-supervised settings.
56
+
57
+ ::: definition
58
+ For any $\epsilon,\delta\in (0,1)$, the sample complexity of realizable robust $(\epsilon,\delta)$-$\pac$ learning for a class $\H$, with respect to perturbation function $\U$, denoted by $\Lambda_{\RE}(\epsilon,\delta,\H,\U)$, is the smallest integer $m$ for which there exists a learning algorithm $\A :\paren{\X\times\Y}^*\rightarrow \Y^{\X}$, such that for every distribution $\D$ over $\X\times\Y$ robustly realizable by $\H$, namely $\risk_{\U}\paren{\H;D}=0$, for a random sample $S \sim \D^m$, it holds that $$\Pr\paren{\risk_\U\paren{\A(S);D}\leq \epsilon}>1-\delta.$$ If no such $m$ exists, define $\Lambda_{\RE}(\epsilon,\delta,\H,\U) = \infty$, and $\H$ is not robustly $(\epsilon,\delta)$-$\pac$ learnable with respect to $\U$.
59
+ :::
60
+
61
+ For the standard (non-robust) learning with the 0-1 loss function, we omit the dependence on $\U$ and denote the sample complexity of class $\H$ by $\Lambda_{\RE}(\epsilon,\delta,\H)$.
62
+
63
+ ::: definition
64
+ A hypothesis class $\H$ is semi-supervised realizable robust $(\epsilon,\delta)$-$\pac$ learnable, with respect to perturbation function $\U$, if for any $\epsilon,\delta\in (0,1)$, there exists $m_{u},m_{l}\in \N \cup \sett{0}$, and a learning algorithm $\A :\paren{\X\times\Y}^* \cup \paren{\X}^* \rightarrow \Y^{\X}$, such that for every distribution $\D$ over $\X\times\Y$ robustly realizable by $\H$, namely $\risk_{\U}\paren{\H;D}=0$, for random samples $S^l \sim \D^{m_l}$ and $S^{u}_{\X} \sim \D_{\X}^{m_u}$, it holds that $$\Pr\paren{\risk_\U\paren{\A(S^l,S^{u}_{\X});D}\leq \epsilon}>1-\delta.$$
65
+ :::
66
+
67
+ The sample complexity $\M_{\RE}{\paren{\epsilon,\delta,\H,\U}}$ includes all such pairs $(m_u,m_l)$. If no such $(m_u,m_l)$ exist, then $\M_{\RE}(\epsilon,\delta,\H,\U) = \emptyset$.
68
+
69
+ In the agnostic setting we may have $\risk_{\U}\paren{\H;\D}>0$, and we would like to compete with the optimal hypothesis in $\H$. We add a parameter to the sample complexity, denoted by $\eta$, which is the optimal robust error of a hypothesis in $\H$, namely $\eta = \risk_{\U}\paren{\H;\D}$. We say that a function $f$ is $(\alpha,\epsilon)$-optimal if $\risk_\U\paren{f;\D}\leq \alpha\eta+\epsilon$.
70
+
71
+ ::: definition
72
+ For any $\epsilon,\delta\in (0,1)$, the sample complexity of agnostic robust $(\alpha,\epsilon,\delta)$-$\pac$ learning for a class $\H$, with respect to perturbation function $\U$, denoted by $\Lambda_{\AG}(\alpha,\epsilon,\delta,\H,\U,\eta)$, is the smallest integer $m$, for which there exists a learning algorithm $\A:\paren{\X\times\Y}^* \rightarrow \Y^{\X}$, such that for every distribution $\D$ over $\X\times\Y$, for a random sample $S\sim \D^m$, it holds that $$\Pr\paren{\risk_\U\paren{\A(S);D}\leq \alpha\inf_{h\in\H}\risk_{\U}\paren{h;\D} + \epsilon} >1-\delta.$$
73
+
74
+ If no such $m$ exists, define $\Lambda_{\AG}(\alpha,\epsilon,\delta,\H,\U,\eta) = \infty$, and $\H$ is not robustly $(\alpha,\epsilon,\delta)$-$\pac$ learnable in the agnostic setting with respect to $\U$. Note that for $\alpha=1$ we recover the standard agnostic definition; our notation allows for a more relaxed approximation.
75
+ :::
76
+
77
+ Analogously, we define the semi-supervised case.
78
+
79
+ ::: definition
80
+ A hypothesis class $\H$ is semi-supervised agnostically robust $(\alpha,\epsilon,\delta)$-$\pac$ learnable, with respect to perturbation function $\U$, if for any $\epsilon,\delta\in (0,1)$, there exists $m_{u},m_{l}\in \N \cup \sett{0}$, and a learning algorithm $\A :\paren{\X\times\Y}^* \cup \paren{\X}^* \rightarrow \Y^{\X}$, such that for every distribution $\D$ over $\X\times\Y$, for random samples $S^l \sim \D^{m_l}$ and $S^{u}_{\X} \sim \D_{\X}^{m_u}$, it holds that
81
+
82
+ $$\Pr\paren{\risk_\U\paren{\A(S^l,S^{u}_{\X});\D}\leq \alpha\inf_{h\in\H}\risk_{\U}\paren{h;\D}+\epsilon}>1-\delta.$$
83
+
84
+ The sample complexity $\M_{\AG}(\alpha,\epsilon,\delta,\H,\U,\eta)$ includes all such pairs $(m_u,m_l)$. If no such $(m_u,m_l)$ exist, then $\M_{\AG}(\alpha,\epsilon,\delta,\H,\U,\eta) = \emptyset$.
85
+ :::
86
+
87
+ Let $\H\subseteq\sett{0,1,\star}^\X$ be a partial concept class. For $h\in\H$ and input $x$ such that $h(x)=\star$, we say that $h$ is undefined on $x$. The support of a partial hypothesis $h:\X\rightarrow \sett{0,1,\star}$ is the preimage of $\sett{0,1}$, formally, $h^{-1}(\sett{0,1})=\sett{x\in\X: h(x)\neq \star}$. The main motivation for introducing partial concept classes is that data-dependent assumptions can be modeled in a natural way that extends the classic theory of total concepts. The $\VC$ dimension of a partial class $\H$ is defined as the maximum size of a shattered set $S\subseteq \X$, where $S$ is shattered by $\H$ if the projection of $\H$ on $S$ contains all possible binary patterns, $\sett{0,1}^S\subseteq \H|_{S}$. The $\VC$-dimension also characterizes verbatim the $\pac$ learnability of partial concept classes, even though uniform convergence does not hold in this setting.
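
To make the shattering notion for partial classes concrete, here is a brute-force check (our illustrative sketch): a binary pattern on $S$ counts only if some hypothesis realizes it exactly, with no undefined entries.

```python
from itertools import product

STAR = "*"  # our encoding of the undefined mark

def shatters(H_partial, S):
    """S is shattered iff {0,1}^S is contained in the projection of the
    class on S; a hypothesis contributes only where it is fully defined."""
    projections = {tuple(h[x] for x in S) for h in H_partial}
    return all(p in projections for p in product([0, 1], repeat=len(S)))

# Toy partial class: the two total constants plus an everywhere-undefined concept.
H_partial = [{0: 0, 1: 0}, {0: 1, 1: 1}, {0: STAR, 1: STAR}]
print(shatters(H_partial, [0]))     # True:  patterns (0,) and (1,) both appear
print(shatters(H_partial, [0, 1]))  # False: (0, 1) is never realized, so VC = 1
```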
88
+
89
+ We use the notation $\Tilde{\O}(\cdot)$ for omitting poly-logarithmic factors of $\VC,\VC^*,\VCU,\mathrm{RS}_\U,1/\epsilon,1/\delta$. See [7](#app:prelim){reference-type="ref+label" reference="app:prelim"} for additional preliminaries on complexity measures, sample compression schemes, and partial concept classes.
90
+
91
+ In this section, we provide a tight bound on the labeled sample complexity when the support of marginal distribution is fully known to the learner, under the robust realizable assumption. Studying this setting gives an intuition for the general semi-supervised model. The main idea is that as long as we know the support of the marginal distribution, $\supp(\D_{\X})=\sett{x\in\X:\exists y\in \Y, \text{ s.t. } \D(x,y)>0}$, we can restrict our search to a subspace of functions that are robustly self-consistent, $\H_{\U\text{-cons}}\subseteq \H$, where $$\H_{\U\text{-cons}}=\sett{h\in \H: \forall x\in \supp(\D_\X),
92
+ \forall z,z'\in \U(x), h(z)=h(z')}
93
+ .$$
94
+
95
+ As long as the distribution is robustly realizable, i.e., $\risk_{\U}(\H;\D)=0$, we are guaranteed that the target hypothesis belongs to $\H_{\U\text{-cons}}$. As a result, it suffices to learn the class $\H_{\U\text{-cons}}$ with the 0-1 loss function in order to robustly learn the original class $\H$. We observe that $$\VC(\H_{\U\text{-cons}})= \VCU(\H) \leq \VC(\H).$$ Moreover, there exists $\H_0$ with $\VCU(\H_0)=0$ and $\VC(\H_0)=\infty$ (see Proposition [\[prop:gap-vcu-dimu\]](#prop:gap-vcu-dimu){reference-type="ref" reference="prop:gap-vcu-dimu"}). Fortunately, moving from $\VC(\H)$ to $\VCU(\H)$ implies a significant sample complexity improvement. Since $\supp(\D_\X)$ is known, we can now employ any algorithm for learning the hypothesis class $\H_{\U\text{-cons}}$. [^5] This eventually lets us robustly learn $\H$ with labeled sample complexity that scales linearly with $\VCU$ (instead of $\VC$). Formally,
96
+
97
+ ::: theorem
98
+ []{#thm:known-marginal label="thm:known-marginal"} For hypothesis class $\H$ and adversary $\U$, when the support of the marginal distribution $\D_{\X}$ is known, the labeled sample complexity is $\Theta\paren{\frac{\VCU(\H)}{\epsilon}+\frac{\log\frac{1}{\delta}}{\epsilon}}$.
99
+ :::
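
The reduction behind this theorem is easy to operationalize when everything is finite (a sketch under our own toy assumptions): filter $\H$ down to the robustly self-consistent hypotheses on the known support, then hand the filtered class to any standard 0-1-loss learner.

```python
def u_consistent_subclass(H, support, U):
    """H_{U-cons}: hypotheses assigning a single label to each whole set U(x)."""
    return [h for h in H
            if all(len({h[z] for z in U(x)}) == 1 for x in support)]

# Toy: all 4 hypotheses on {0, 1}; if the adversary can reach the whole domain,
# only the two constants remain, so VC(H_{U-cons}) = VC_U(H) = 1 < VC(H) = 2.
X = [0, 1]
H = [{0: a, 1: b} for a in (0, 1) for b in (0, 1)]
U = lambda x: [0, 1]
print(u_consistent_subclass(H, X, U))  # [{0: 0, 1: 0}, {0: 1, 1: 1}]
```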
100
+
101
+ The following Proposition demonstrates that semi-supervised robust learning requires far fewer labeled samples than its supervised counterpart. Recall the lower bound on the sample complexity of supervised robust learning, $\Lambda_{\RE}(\epsilon,\delta,\H,\U)= \Omega\paren{\frac{\mathrm{RS}_\U(\H)}{\epsilon}+\frac{1}{\epsilon}\log\frac{1}{\delta}}$, given by @montasser2019vc [Theorem 10]. For completeness, we prove the following in [8](#app:knowing-support){reference-type="ref+label" reference="app:knowing-support"}.
102
+
103
+ ::: proposition
104
+ []{#prop:gap-vcu-dimu label="prop:gap-vcu-dimu"} There exists a hypothesis class $\H_0$ such that $\VCU(\H_0)=0$, $\mathrm{RS}_\U(\H_0) = \infty$, and $\VC(\H_0) = \infty$.
105
+ :::
106
+
107
+ We can now conclude the following separation result on supervised and semi-supervised label complexities.
108
+
109
+ ::: corollary
110
+ The hypothesis class in Proposition [\[prop:gap-vcu-dimu\]](#prop:gap-vcu-dimu){reference-type="ref" reference="prop:gap-vcu-dimu"} is not learnable in supervised robust learning (i.e., we need to see the entire data distribution). However, when $\supp(\D_{\X})$ is known, this class can be learned with $\O(\frac{1}{\epsilon}\log\frac{1}{\delta})$ labeled examples.
111
+ :::
112
+
113
+ In the next section, we prove a stronger separation in the general semi-supervised setting. The size of the labeled data required in the supervised case is lower bounded by $\mathrm{RS}_\U$, whereas in the semi-supervised case the *labeled* sample complexity depends only on $\VCU$ and the *unlabeled* data is lower bounded by $\mathrm{RS}_\U$. Moreover, note that in [\[thm:known-marginal\]](#thm:known-marginal){reference-type="ref+label" reference="thm:known-marginal"}, when $\supp(\D_{\X})$ is known, we can use any proper learner. In [4](#sec:realizable){reference-type="ref+label" reference="sec:realizable"} we show that in the general semi-supervised model this is not the case, and sometimes improper learning is necessary, similarly to supervised robust learning.
114
+
115
+ In this section we present our algorithm and its guarantees for the realizable setting. We also prove nearly matching lower bounds on the sample complexity. Finally, we show that improper learning is necessary in semi-supervised robust learning, similar to the supervised case.
116
+
117
+ We present a generic semi-supervised robust learner that can be applied in both the realizable and agnostic settings. The algorithm uses the following two subroutines. The first is any algorithm for learning partial concept classes, which controls our *labeled* sample size. (In [12](#app:algo-partial){reference-type="ref+label" reference="app:algo-partial"} we discuss in detail the algorithm suggested by @alon2021theory.) The second subroutine is any algorithm for agnostic adversarially robust supervised learning, which controls our *unlabeled* sample size. (In [13](#app:algo-agnostic-robust){reference-type="ref+label" reference="app:algo-agnostic-robust"} we discuss in detail the algorithm suggested by @montasser2019vc.) Any progress on either of these problems directly improves the guarantees of our algorithm. We use the following definition that explains how to convert a total concept class into a partial one, in a way that preserves the idea of the robust loss function.
118
+
119
+ ::: definition
120
+ []{#def:partial-robust-class label="def:partial-robust-class"} Let $\H\subseteq \sett{0,1}^{\X}$ be a hypothesis class and $\U: \X \rightarrow 2^{\X}$ a perturbation function. For any $h\in\H$, we define a corresponding partial concept $h^\star:\X \rightarrow \sett{0,1,\star}$, and denote this mapping by $\phi(h)=h^\star$. For $x\in \X$, whenever $h$ is not consistent on the entire set $\U(x)$, i.e., $\exists z,z'\in \U(x), h(z)\neq h(z')$, define $h^\star(x)=\star$. Otherwise, $h$ is robustly self-consistent on $x$, i.e., $\forall z,z'\in \U(x), h(z)=h(z')$, and $h$ remains unchanged: $h^\star(x)=h(x)$. The corresponding partial concept class is defined by $\H^{\star}_{\U}= \sett{h^{\star}: \phi(h)=h^\star,\; \forall h \in \H}$.
121
+ :::
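
The mapping $\phi$ is straightforward to write down for finite perturbation sets (a minimal sketch of ours; `STAR` is our encoding of $\star$):

```python
STAR = "*"

def to_partial(h, U, X):
    """phi(h): keep h(x) where h is constant on U(x); otherwise predict STAR."""
    return {x: h[x] if len({h[z] for z in U(x)}) == 1 else STAR for x in X}

# Threshold t = 3 on {0,...,5} with a radius-1 adversary: the two points
# adjacent to the decision boundary become undefined.
X = range(6)
h3 = {x: int(x >= 3) for x in X}
U = lambda x: [z for z in (x - 1, x, x + 1) if 0 <= z <= 5]
print(to_partial(h3, U, X))  # {0: 0, 1: 0, 2: '*', 3: '*', 4: 1, 5: 1}
```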
122
+
123
+ The main motivation for the above definition is the following. Fix a hypothesis $h$. On any point $x$ where $h$ is not robustly self-consistent, the adversary can force a mistake, regardless of the prediction of $h$. We would like to mark such points as *mistakes*. We do this by defining a partial concept $h^\star$ and setting $h^\star(x)=\star$, which, for partial concepts, counts as a mistake. The benefit of this preprocessing is that it reduces the complexity of the hypothesis class from $\VC$ to $\VCU$, which can potentially reduce the labeled sample complexity.
124
+
125
+ We are now ready to describe the algorithm.
126
+
127
+ ::: algorithm
128
+ **Input:** Labeled data set $S^{l}\sim \D^{m_l}$, unlabeled data set $S^{u}_{\X}\sim \D^{m_u}_{\X}$, hypothesis class $\H$, perturbation function $\U$, parameters $\epsilon$, $\delta$.\
129
+ **Algorithms used:** $\pac$ learner $\A$ for partial concept classes, agnostic adversarially robust [supervised]{.underline} $\pac$ learner $\B$.
130
+
131
+ 1. Given the class $\H$, construct the hypothesis class $\H^{\star}_\U$ using Definition [\[def:partial-robust-class\]](#def:partial-robust-class){reference-type="ref" reference="def:partial-robust-class"}.
132
+
133
+ 2. Execute the learning algorithm for partial concepts $\A$ on $\H^{\star}_\U$ and sample ${S^l}$, with the $0$-$1$ loss and parameters $\frac{\epsilon}{3},\frac{\delta}{2}$. Denote the resulting hypothesis $h_1$.
134
+
135
+ 3. Label the unlabeled data set $S^{u}_{\X}$ with $h_1$, denote the labeled sample by $S^u$. (On points where $h_1$ predicts $\star$, we can arbitrarily choose a label of $0$ or $1$.)
136
+
137
+ 4. Execute the agnostic adversarially robust supervised $\pac$ learner $\B$ on $S^u$ with parameters $\frac{\epsilon}{3},\frac{\delta}{2}$. Denote the resulting hypothesis $h_2$.
138
+
139
+ **Output:** $h_2$.
140
+ :::
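
The four steps above can be sketched end to end on a finite toy instance. In the sketch below (an illustration of the control flow, not of the sample-complexity guarantees), the generic subroutines $\A$ and $\B$ are replaced by naive minimizers over a finite class:

```python
def semi_supervised_robust_learn(S_l, S_u_x, H, U, X):
    STAR = "*"
    # Step 1: build the partial class H*_U.
    to_partial = lambda h: {x: h[x] if len({h[z] for z in U(x)}) == 1 else STAR
                            for x in X}
    H_star = [to_partial(h) for h in H]
    # Step 2: stand-in for the partial-concept learner A: minimize the 0-1
    # error on the labeled sample, counting a STAR prediction as a mistake.
    h1 = min(H_star, key=lambda p: sum(p[x] != y for x, y in S_l))
    # Step 3: pseudo-label the unlabeled sample with h1 (break STARs arbitrarily).
    S_u = [(x, h1[x] if h1[x] != STAR else 0) for x in S_u_x]
    # Step 4: stand-in for the robust supervised learner B: robust ERM over H.
    rloss = lambda h, x, y: any(h[z] != y for z in U(x))
    return min(H, key=lambda h: sum(rloss(h, x, y) for x, y in S_u))

# Thresholds on {0,...,5} with a radius-1 adversary; the support avoids the
# decision boundary, so the target t = 3 is robustly realizable.
X = range(6)
H = [{x: int(x >= t) for x in X} for t in range(7)]
U = lambda x: [z for z in (x - 1, x, x + 1) if 0 <= z <= 5]
S_l = [(0, 0), (5, 1)]   # two labeled examples
S_u_x = [0, 1, 4, 5]     # unlabeled support points
h2 = semi_supervised_robust_learn(S_l, S_u_x, H, U, X)
print(h2 == H[3])  # True: robust ERM on the pseudo-labels recovers t = 3
```

Even though step 2 here may return a threshold other than $t=3$ (any boundary-consistent partial hypothesis fits the two labels), the pseudo-labels it produces on the support already pin down the unique robustly consistent threshold in step 4.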
141
+
142
+ The main idea behind the algorithm is the following. Given the class $\H^{\star}_\U$, we would like to find a hypothesis $h_1\in \H^{\star}_\U$ with small error, whose existence follows from our realizability assumption. The required sample size scales with $\VCU$, which is the complexity of $\H^{\star}_\U$, rather than $\VC$. This is where we make a significant gain in the labeled sample complexity. Note that $h_1$ does not guarantee a small robust error, although it does guarantee a small non-robust error. We utilize an additional unlabeled sample for this task, which we label using $h_1$. If we simply minimized the non-robust error on this sample, we would just recover $h_1$. The main insight is to minimize the robust error over this sample instead, which results in a hypothesis $h_2$. We now need to bound the robust error of $h_2$. The optimal function $h_{\mathrm{opt}}$ has only a slightly increased robust error on this sample, namely, at most on the sample points where it disagrees with $h_1$. Note that $h_1$ might have a large robust error due to the perturbation $\U$. However, a robust supervised $\pac$ learner would return a hypothesis $h_2$ whose robust error is similar to that of $h_{\mathrm{opt}}$, which is at most $\epsilon$.
143
+
144
+ In the first step, we convert $\H$ to $\H^{\star}_\U$. Then we employ a learning algorithm $\A$ for partial concepts on $\H^{\star}_\U$ with a labeled sample $S^l\sim \D^{m_l}$. The output of the algorithm is a function $h_1$ with $\epsilon/3$ on the [0-1]{.underline} error. Crucially, we needed for this step $|S^l|=\Tilde{\O}\lr{\VCU(\H)/\epsilon}$ labeled examples for learning the partial concept $\H^{\star}_\U$, since $\VC(\H^{\star}_\U)=\VCU(\H)$. So our labeled sample size is controlled by the sample complexity for learning partial concepts with the 0-1 loss. In step 3, we label an independent unlabeled sample $S^{u}_{\X}\sim \D^{m_u}_{\X}$ with $h_1$, denote his labeled sample by $S^u$. Define a distribution $\Tilde{\D}$ over $\X\times\Y$ by $\Tilde{\D}(x,h_1(x))=\D_{\X}(x),$ and so $S^u$ is an i.i.d. sample from $\Tilde{\D}$. We argue that the robust error of $\H$ with respect to $\Tilde{\D}$ is at most $\frac{\epsilon}{3}$, i.e., $\risk_{\U}(\H;\Tilde{\D})=\frac{\epsilon}{3}$. Indeed, the function with zero robust error on $\D$, $h_{\text{opt}}\in\argmin_{h\in\H}\risk_{\U}(h;\D)$ has a robust error of at most $\frac{\epsilon}{3}$ on $\Tilde{\D}$. Finally, we employ an agnostic adversarially robust [supervised]{.underline} $\pac$ learner $\B$ for the class $\H$ on $S^u\sim \Tilde{\D}^{m_u}$, that should be of size of the sample complexity of agnostically robust learn $\H$ with respect to $\U$, when the optimal robust error of hypothesis from $\H$ on $\Tilde{\D}$ is at most $\frac{\epsilon}{3}$. Moreover, the total variation distance between $\D$ and $\Tilde{\D}$ is at most $\frac{\epsilon}{3}$. We are guaranteed that the resulting hypothesis $h_2$ has a [robust]{.underline} error of at most $\frac{\epsilon}{3}+\frac{\epsilon}{3}+\frac{\epsilon}{3} =\epsilon$ on $\D$. 
We conclude that a size of $|S^{u}_{\X}|=m_u =\Lambda_{\AG}\paren{1,\frac{\epsilon}{3},\frac{\delta}{2},\H,\U,\eta=\frac{\epsilon}{3}}$ unlabeled samples suffices, this completes the proof for [\[thm:realizable-sample-compexity\]](#thm:realizable-sample-compexity){reference-type="ref+label" reference="thm:realizable-sample-compexity"}. For a specific instantiation of such algorithm ([@montasser2019vc]), we deduce the sample complexity in [\[thm:realizable-sample-compexity-cor\]](#thm:realizable-sample-compexity-cor){reference-type="ref+label" reference="thm:realizable-sample-compexity-cor"}. A simple analysis of the latter yields a dependence of $\epsilon^2$ for the unlabeled sample size. However, by applying a suitable data-dependent generalization bound, we reduce this dependence to $\epsilon$. (Full proofs appear in [9](#app:realizable){reference-type="ref+label" reference="app:realizable"}).
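The pipeline just described can be sketched end to end on a toy one-dimensional threshold class. Everything concrete here is an assumption for illustration: the hypothesis class $h_\theta(x)=\mathbb{1}[x\ge\theta]$, the perturbation sets $[x-r,x+r]$, the grid, and the tie-breaking rule; step 1 uses plain 0-1 ERM as a stand-in for the partial-concept learner $\A$, and step 4 a robust ERM as a stand-in for the agnostic robust learner $\B$.

```python
import random

def zero_one_err(theta, data):
    # empirical 0-1 error of the threshold classifier h_theta(x) = 1[x >= theta]
    return sum((x >= theta) != y for x, y in data) / len(data)

def robust_err(theta, data, r):
    # empirical robust error: (x, y) counts as an error if some perturbation
    # z in [x - r, x + r] is misclassified by h_theta
    def wrong(x, y):
        return (x - r < theta) if y else (x + r >= theta)
    return sum(wrong(x, y) for x, y in data) / len(data)

def grass_sketch(labeled, unlabeled_x, r, grid):
    # steps 1-2: non-robust ERM on the small labeled sample
    theta1 = min(grid, key=lambda t: zero_one_err(t, labeled))
    # step 3: pseudo-label the large unlabeled sample with h_{theta1}
    pseudo = [(x, x >= theta1) for x in unlabeled_x]
    # step 4: robust ERM on the pseudo-labeled sample; among empirical
    # minimizers, take the middle one (a max-margin-style tie-break)
    errs = [(robust_err(t, pseudo, r), t) for t in grid]
    best = min(e for e, _ in errs)
    cands = [t for e, t in errs if e == best]
    return cands[len(cands) // 2]

def sample(n, margin=0.1, t_star=0.5):
    # points uniform on [0, 1] outside a margin strip around the true threshold
    pts = []
    while len(pts) < n:
        x = random.random()
        if abs(x - t_star) >= margin:
            pts.append((x, x >= t_star))
    return pts

random.seed(0)
r, grid = 0.05, [i / 200 for i in range(201)]
labeled = sample(60)                        # small labeled sample
unlabeled_x = [x for x, _ in sample(500)]   # larger unlabeled sample
theta2 = grass_sketch(labeled, unlabeled_x, r, grid)
```

Since the data have margin $0.1 > r$ around the true threshold, the returned $\theta_2$ has essentially zero robust error on fresh data, even though step 1 alone only controls the non-robust error.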
+
+ We now formally present the sample complexity of the generic semi-supervised learner for the robust realizable setting. First, we consider using a generic agnostic robust supervised learner as a subroutine (step 4 in the algorithm); then we deduce the sample complexity of a specific instantiation of such an algorithm.
+
+ ::: theorem
+ []{#thm:realizable-sample-compexity label="thm:realizable-sample-compexity"} For any hypothesis class $\H$ and adversary $\U$, Algorithm $\grass$ $(\epsilon,\delta)$-$\pac$ learns $\H$ with respect to the robust loss function, in the realizable robust case, with samples of size $$\begin{align*}
+ m_l= \O\paren{\frac{\VCU(\H)}{\epsilon}\log^2\frac{\VCU(\H)}{\epsilon}+\frac{\log\frac{1}{\delta}}{\epsilon}}
+ \;,\;
+ m_u = \Lambda_{\AG}\paren{1,\frac{\epsilon}{3},\frac{\delta}{2},\H,\U,\eta=\frac{\epsilon}{3}},
+ \end{align*}$$ where $\Lambda_{\AG}\paren{\alpha,\epsilon,\delta,\H,\U,\eta}$ is the sample complexity of adversarially-robust agnostic supervised $(\alpha,\epsilon,\delta)$-$\pac$ learning, and $\eta$ is the error of the optimal hypothesis in $\H$, i.e., $\eta = \risk_{\U}\paren{\H;\D}$.
+ :::
+
+ ::: remark
+ Note that if we simply invoke a $\pac$ learner (for total concept classes) on $\H$, with the 0-1 loss, instead of steps 1 and 2 in the algorithm, we would get a labeled sample complexity of roughly $\O\lr{\VC(\H)}$. This is already an exponential improvement upon previous results that require roughly $\O\lr{2^{\VC(\H)}}$ labeled samples. The purpose of using partial concept classes is to further reduce the labeled sample complexity to $\O\lr{\VCU(\H)}$.
+ :::
+
+ The following result follows by using the agnostic supervised robust learner suggested by @montasser2019vc. A simple analysis of the latter yields a $1/\epsilon^2$ dependence for the unlabeled sample size. However, by applying a suitable data-dependent generalization bound, we reduce this dependence to $1/\epsilon$.
+
+ ::: theorem
+ []{#thm:realizable-sample-compexity-cor label="thm:realizable-sample-compexity-cor"} For any hypothesis class $\H$ and adversary $\U$, Algorithm $\grass$ $(\epsilon,\delta)$-$\pac$ learns $\H$ with respect to the robust loss function, in the realizable robust case, with samples of size $$\begin{align*}
+ m_l= \O\paren{\frac{\VCU(\H)}{\epsilon}\log^2\frac{\VCU(\H)}{\epsilon}+\frac{\log\frac{1}{\delta}}{\epsilon}}
+ \;,\;
+ m_u = \Tilde{\O}\paren{\frac{\VC(\H)\VC^*(\H)}{\epsilon}+\frac{\log\frac{1}{\delta}}{\epsilon}}.
+ \end{align*}$$
+ :::
+
+ We present nearly matching lower bounds for the realizable setting. The following Corollary stems from [\[thm:known-marginal\]](#thm:known-marginal){reference-type="ref+label" reference="thm:known-marginal"} and @montasser2019vc [Theorem 10].
+
+ ::: corollary
+ For any $\epsilon,\delta\in (0,1)$, the sample complexity of realizable robust $(\epsilon,\delta)$-$\pac$ learning for a class $\H$, with respect to perturbation function $\U$, is $$\begin{align*}
+ m_l= \Omega\paren{\frac{\VCU(\H)}{\epsilon}+\frac{\log\frac{1}{\delta}}{\epsilon}}
+ \;,\;
+ m_u = \infty, \quad \text{or} \quad
+ m_l+m_u= \Omega\paren{\frac{\mathrm{RS}_\U(\H)}{\epsilon}+\frac{\log\frac{1}{\delta}}{\epsilon}}.
+ \end{align*}$$
+ :::
+
+ In Section [3](#sec:knowing-support){reference-type="ref" reference="sec:knowing-support"}, we have seen that when the support of the marginal distribution $\D_{\X}$ is known, the labeled sample complexity is $\Theta\paren{\frac{\VCU(\H)}{\epsilon}+\frac{\log\frac{1}{\delta}}{\epsilon}}$. This was obtained by a proper learner: keep the robustly self-consistent hypotheses, $\H_{\U\text{-cons}}\subseteq \H$, and then use ERM on this class. The case when $\D_{\X}$ is unknown is different. We know that there exists a perturbation function $\U$ and a hypothesis class $\H$ with finite $\VC$-dimension that cannot be robustly $\pac$ learned with any proper learning rule [@montasser2019vc Lemma 3]. The same proof holds in the semi-supervised case. Note that both algorithms $\A$ and $\B$ used in [\[alg:generic-algo\]](#alg:generic-algo){reference-type="ref+label" reference="alg:generic-algo"} are improper. (The proof appears in [9](#app:realizable){reference-type="ref+label" reference="app:realizable"}.)
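As an illustration of the proper strategy for the known-support case, the sketch below instantiates "keep the robustly self-consistent hypotheses, then run ERM" for a toy class of one-dimensional thresholds. The class, the finite perturbation set $\mathcal{U}(x)=\{x-r,x,x+r\}$, and all constants are assumptions made for the demo, not the paper's construction.

```python
def u_consistent(hyps, support, U):
    # keep the robustly self-consistent hypotheses H_{U-cons}: h must give the
    # same label to every perturbation of each point in the known support
    return [h for h in hyps
            if all(h(z) == h(x) for x in support for z in U(x))]

def proper_learner(hyps, support, U, sample):
    # plain ERM (0-1 loss) over the restricted class
    cons = u_consistent(hyps, support, U)
    return min(cons, key=lambda h: sum(h(x) != y for x, y in sample))

def make_threshold(t):
    return lambda x: x >= t   # h_t(x) = 1[x >= t]

hyps = [make_threshold(i / 100) for i in range(101)]
support = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]   # known marginal support
U = lambda x: (x - 0.05, x, x + 0.05)                # finite perturbation set
sample = [(x, x >= 0.5) for x in support]            # labeled sample
h = proper_learner(hyps, support, U, sample)
```

Because every surviving hypothesis is constant on each perturbation set of the support, zero empirical 0-1 error immediately implies zero empirical robust error, which is what makes the proper rule work when the support of $\D_{\X}$ is known.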
+
+ ::: theorem
+ []{#thm:improper label="thm:improper"} There exists $\H$ with $\VC(\H)=0$ such that for any proper learning rule $\A :\paren{\X\times\Y}^* \cup \paren{\X}^* \rightarrow \H$, there exists a distribution $\D$ over $\X \times \Y$ that is robustly realizable by $\H$, i.e., $\risk_{\U}\paren{\H;\D}=0$, such that $\risk_{\U}\paren{\A(S^l,S^{u}_{\X});\D}>\frac{1}{8}$ with probability at least $\frac{1}{7}$ over $S^l \sim \D^{m_l}$ and $S^{u}_{\X} \sim \D^{m_u}_{\X}$, where $m_l,m_u\in \N \cup \sett{0}$ are the sizes of the labeled and unlabeled samples, respectively. Moreover, when the marginal distribution $\D_{\X}$ is known, there exists a proper learning rule for any $\H$.
+ :::
+
+ In this section, we prove the guarantees of [\[alg:generic-algo\]](#alg:generic-algo){reference-type="ref+label" reference="alg:generic-algo"} in the more challenging agnostic robust setting. We then prove lower bounds on the sample complexity which show that it is inherently different from the realizable case.
+
+ We follow the same steps as in the proof of the realizable case, with the following important difference. In the first two steps of the algorithm, we learn a partial concept class with respect to the $0$-$1$ loss, and obtain a hypothesis with error $\eta+\epsilon/3$ (here $\eta$ is the optimal robust error of a hypothesis in $\H$, not $0$). This eventually leads to an error of $3\eta+\epsilon$ for learning with respect to the robust loss.
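The factor $3\eta+\epsilon$ can be tracked step by step. The accounting below is a sketch consistent with the bounds stated in this section (writing $d_{\mathrm{TV}}$ for the total variation distance, and $h_1,h_2,\Tilde{\D}$ as in the algorithm); the formal argument is the agnostic analogue of the realizable proof:

```latex
\begin{align*}
\risk(h_1;\D) &\le \eta + \tfrac{\epsilon}{3}
  &&\text{(0-1 error of the partial-concept learner)}\\
d_{\mathrm{TV}}(\D,\Tilde{\D}) &\le \risk(h_1;\D) \le \eta + \tfrac{\epsilon}{3}
  &&\text{(relabeling by $h_1$)}\\
\risk_{\U}(\H;\Tilde{\D}) &\le \eta + d_{\mathrm{TV}}(\D,\Tilde{\D}) \le 2\eta + \tfrac{\epsilon}{3}\\
\risk_{\U}(h_2;\Tilde{\D}) &\le \risk_{\U}(\H;\Tilde{\D}) + \tfrac{\epsilon}{3} \le 2\eta + \tfrac{2\epsilon}{3}
  &&\text{(agnostic robust learner $\B$)}\\
\risk_{\U}(h_2;\D) &\le \risk_{\U}(h_2;\Tilde{\D}) + d_{\mathrm{TV}}(\D,\Tilde{\D}) \le 3\eta + \epsilon.
\end{align*}
```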
+
+ We then present two negative results. In [\[thm:dimu-labels-agnostic\]](#thm:dimu-labels-agnostic){reference-type="ref+label" reference="thm:dimu-labels-agnostic"} we show that obtaining error $\eta+\epsilon$ requires $\Omega(\mathrm{RS}_\U)$ labeled examples; this result coincides with the lower bound for supervised robust learning. In [\[thm:vcu-2eta\]](#thm:vcu-2eta){reference-type="ref+label" reference="thm:vcu-2eta"}, we show that for any $\gamma>0$ there exists a hypothesis class such that having access to only $\O(\VCU)$ labeled examples can lead to an error of at least $(\frac{3}{2}-\gamma)\eta+\epsilon$. (All proofs for this section are in [10](#app:agnostic){reference-type="ref+label" reference="app:agnostic"}.)
+
+ We start with the upper bounds. First, we analyze the case of using a generic agnostic robust learner; then we deduce the sample complexity of a specific instantiation of such an algorithm.
+
+ ::: theorem
+ []{#thm:agnostic-sample-compexity label="thm:agnostic-sample-compexity"} For any hypothesis class $\H$ and adversary $\U$, Algorithm $\grass$ $(3,\epsilon,\delta)\text{-}\pac$ learns $\H$ with respect to the robust loss function, in the agnostic robust case, with samples of size $$\begin{align*}
+ m_l= \O\paren{\frac{\VCU(\H)}{\epsilon^2}\log^2\frac{\VCU(\H)}{\epsilon^2}+\frac{\log\frac{1}{\delta}}{\epsilon^2}}
+ \;,\;
+ m_u = \Lambda_{\AG}\paren{1,\frac{\epsilon}{3},\frac{\delta}{2},\H,\U,2\eta+\frac{\epsilon}{3}},
+ \end{align*}$$ where $\Lambda_{\AG}\paren{\alpha,\epsilon,\delta,\H,\U,\eta}$ is the sample complexity of adversarially-robust agnostic supervised learning, and $\eta$ is the error of the optimal hypothesis in $\H$, namely $\eta = \risk_{\U}\paren{\H;\D}$.
+ :::
+
+ By using the agnostic supervised robust learner suggested by @montasser2019vc, we have the following upper bound on the unlabeled sample size: $m_u = \Tilde{\O}\paren{\frac{\VC(\H)\VC^*(\H)}{\epsilon^2}+\frac{\log\frac{1}{\delta}}{\epsilon^2}}$.
+
+ We now present two negative results.
+
+ ::: theorem
+ []{#thm:dimu-labels-agnostic label="thm:dimu-labels-agnostic"} For any $\epsilon,\delta\in (0,1)$, the sample complexity of agnostic robust $(1,\epsilon,\delta)$-$\pac$ learning for a class $\H$, with respect to perturbation function $\U$, is (even if $\D_\X$ is known) $$\begin{align*}
+ m_l = \Omega\paren{\frac{\mathrm{RS}_\U(\H)}{\epsilon^2}+\frac{1}{\epsilon^2}\log\frac{1}{\delta}}
+ \;,\;
+ m_u = \infty.
+ \end{align*}$$
+ :::
+
+ ::: theorem
+ []{#thm:vcu-2eta label="thm:vcu-2eta"} For any $\gamma>0$, there exist a hypothesis class $\H$ and adversary $\U$ such that the sample complexity of $(\frac{3}{2}-\gamma,\epsilon,\delta)$-$\pac$ learning $\H$ is $$\begin{align*}
+ m_l = \Omega\paren{\frac{\VCU(\H)}{\epsilon^2}+\frac{1}{\epsilon^2}\log\frac{1}{\delta}}
+ \;,\;
+ m_u = \infty.
+ \end{align*}$$
+ :::
+
+ An open question remains: what is the optimal error rate in the agnostic setting when using only $\O\lr{\VCU}$ labeled examples?
+
+ In this section we learn with respect to the 0-1 loss, under a robust realizability assumption. A distribution $\D$ over $\X\times\Y$ is robustly realizable by $\H$ given a perturbation function $\U$ if there is $h\in\H$ that not only classifies all points in $\D$ correctly, but also does so with respect to the robust loss function, that is, $\risk_{\U}(\H;\D)=0$. Note that in this section (only) our guarantees are with respect to the non-robust risk. The formal definition is in [11](#app:improved-01-loss){reference-type="ref+label" reference="app:improved-01-loss"}. A simple example of this model is the following. Let $\mathcal{H}$ be linear separators on $\mathcal{X}$, the unit ball in $\mathbb{R}^d$, and let $\mathcal{U}$ be $\ell_2$ balls of radius $\gamma$; the robustly realizable distributions are those separable with margin $\gamma$, where $\mathrm{VC}_{\mathcal{U}}(\mathcal{H}) = \frac{1}{\gamma^2}$ but $\mathrm{VC}(\mathcal{H}) = d+1$ can be arbitrarily larger. Moreover, we have the following example. (All proofs are in appendix [11](#app:improved-01-loss){reference-type="ref+label" reference="app:improved-01-loss"}.)
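To make the gap concrete, here is a tiny numeric instance of the two quantities for the linear-separator example above; the specific values of $\gamma$ and $d$ are arbitrary illustrations.

```python
# Complexity measures for linear separators on the unit ball in R^d with
# l2 perturbations of radius gamma, as quoted in the text:
# the margin-based dimension scales as 1/gamma^2, the ambient one as d + 1.
def vc_u(gamma):
    return 1 / gamma ** 2   # VC_U(H): controls the *labeled* sample size

def vc(d):
    return d + 1            # VC(H): grows with the ambient dimension

gamma, d = 0.1, 10_000
print(vc_u(gamma), vc(d))   # roughly 100 vs 10001: VC_U is dimension-free
```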
+
+ ::: proposition
+ []{#prop:vc2m-vcu1 label="prop:vc2m-vcu1"} For any $m\in \mathbb{N}$, there exist a hypothesis class $\H_m$ and distribution $\D$, such that $\D$ is robustly realizable by $\H_m$, $\VCU(\H_m)=1$, and $\VC(\H_m)= 2m$.
+ :::
+
+ Standard $\VC$ theory does not ensure learning in this case. In this section we explain how one can learn in such a scenario with a small sample complexity (scaling linearly with $\VCU$). Moreover, we show that this cannot be achieved via proper learners.
+
+ ::: theorem
+ []{#thm:improved-01 label="thm:improved-01"} The sample complexity of learning a hypothesis class $\H$ with respect to the 0-1 loss, for any distribution $\D$ that is robustly realizable by $\H$, namely $\risk_{\U}\paren{\H;\D}=0$, is upper and lower bounded by $$\begin{align*}
+ \O\paren{\frac{\VCU(\H)}{\epsilon}\log^2\frac{\VCU(\H)}{\epsilon}+\frac{\log\frac{1}{\delta}}{\epsilon}}
+ \;,\;
+ \Omega \paren{\frac{\VCU(\H)}{\epsilon}+\frac{\log\frac{1}{\delta}}{\epsilon}},
+ \end{align*}$$ respectively.
+ :::
+
+ This theorem was an intermediate step in the proof of [\[thm:realizable-sample-compexity\]](#thm:realizable-sample-compexity){reference-type="ref+label" reference="thm:realizable-sample-compexity"}, and the sample complexity is the same as in [\[thm:partial-sample-complexity-realizable\]](#thm:partial-sample-complexity-realizable){reference-type="ref+label" reference="thm:partial-sample-complexity-realizable"}, $\O\paren{\Lambda_{\RE}(\epsilon,\delta,\H)}$. We show that there exists a robust ERM that fails in this setting (Proposition [\[prop:erm-fails\]](#prop:erm-fails){reference-type="ref" reference="prop:erm-fails"} in [11](#app:improved-01-loss){reference-type="ref+label" reference="app:improved-01-loss"}). Then, we show that every proper learner fails.
+
+ ::: theorem
+ []{#thm:proper-fails label="thm:proper-fails"} There exists $\H$ with $\VCU(\H)=1$ such that for any proper learning rule $\A :\paren{\X\times\Y}^* \rightarrow \H$, there exists a distribution $\D$ over $\X \times \Y$ that is robustly realizable by $\H$, i.e., $\risk_{\U}\paren{\H;\D}=0$, for which $\risk\paren{\A(S);\D}>\frac{1}{8}$ with probability at least $\frac{1}{7}$ over $S \sim \D^{m}$.
+ :::
2203.16001/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
+ (compressed draw.io diagram XML from app.diagrams.net, modified 2022-03-07; content omitted)
vJPQO7JiUmdEfFvcnkFyizDxPe22jaEsaugEKG6OsXdDuyg3cVC1WmJ7Qi3O/FyXwjHxxXofb3qRQS0US2ZNvMtbFuIp5sAju/NyDwJL7CxlYisx7uYTzWriaKk8dgTXZQ9aq7U6J1YyuNNGf6cZk5QG1nd1seA50obGsYKmMTb4oudNnT/WMVcujVtshcUIO5ECU9TqC4lCjcrrKq3YmtjmVQeoQyRI17oTA4r1pCOv3RdrFG+Kz4EXT/RdQIAUaMeEKKq4GAjboBrg2XQWi8ECVAdpQIOEXs/ljFVJzTmegHLg5tJm7ghBWZGXO1tzAa36wtHqASKNfq9j9TR0YZepKj/FMordzdnYeDwXLnbhrMob2KcydvdCvAOBuvshGCtptBLkGWh2tIobuSUigQMvzZXCy1tyLyWa1+7S2xC5FNuTmHtOp8B7OVJv5dJazGq2OTRYC4yp7arNKm/mfmOrWR2uHbRoilR48JyZEqJbAS7wLs6FyrTaQckr+kU947TLWYidD2PTQettvY97kJRdfWZjOulZAuOAA5WZ5kpNTD2gm7kH+HZ3J4jK7sjMrLr8XkMkRDzIgcWD0q1W9YoZkelT550EjshgIF/2QgPBAIBRXUrQsgTaXh0tFY8oNmaKXTWcnpuah0eqnn0xN/o4c8YKICAVR1CcdKp03TSZ4nvaQUO1gZzvxaYBm964bUJ8xjFxDaQqllaB0hJDcQyYODnYeao0stzLwaQOOO4Ecb+jEjaO6DXxIlE3WUdmwl6zc4P4BrMZ0QzLaSzf6htY8jDewTZdo873gFd58qRuLuXMgitLWXFiaS5uucUR6/qDvQFwaE4BCAPkYuNKQBXm+7LGqOjtNoSUEKBRLorwEg8Pc2KH5tZZFlWszeezAwfebxSc7S1HnoYhjMk2e7VZr8K+AOw45nEmBHfMemABZ/C0sVLZ50bOqqFFtM1YvAR+LO1gNL1c7h7AikHF3oWRDMN2lcRHJ9DiPJe2csVDWV9WIf6UbHEusXjEzSLSC5HI3TZ0H+fSuZP8aL2CIrBJzOBkmMneY9yvMd1pwAn6YobZ6aVjk85lAlsRowsUVndPMUEC4Z6AFouTFn1kHGb2snElNVzYWDEElws9vu3DxmKlrNP7vZ/HOS79oGsS4nQNxaMj5HsZ748H0GJ1ll+2G5AsGMlrkSuRt2XZ6Z0ggLG0JE9rqjGT9PjJhzY/9D1xPYYB1JItEjsZ3lqir7TyZmVOGK6LXAg9A8VZtUqLWFLFwsxPdROO4l1mle2SM/PMdqAOPSMhRv/bJMgbPB7D8oCPcPggca3MHPKx2n80SI3V6aFenDUN1pXsyggijG+Q8Zu0XzB+RTF0srhn/OCtkntVDMLBQWQPrn3UWv6GmH3FnOrRYbS/zpYaqpfqZNS5VZ7nSzdPreVy6aQP03S5sWaLJD+s3IX2ULp+NcGsv0gG6/YwNjx9Ol5sJsZwNlLnRTIYFVV0urf6ylLKIDzw9fTW6amURVEpcxDNZHrFUEPABTz9agJ1mkFtNFW83anvkPGbzr2FiBmczdVF6Hqo6sN9zW4iNub1oMTHY+gK6LG4dFRofSno62OhbbHmnCauJX+jJpZ128XeEbl1rvTFv4npYe9S3kQZQt1SjAjK4YFVNKQFgIJpBFq3RyFllidR7d2sxRDb3o6PMhPd6jxK95zIieSnLRQ7UbWTmgyLzohbC205cCJC0NiRCdY9qnb3W+viui/WXKHq0UmSH8SLzCgq8UnS/T2Pi4NYxf0n1aBWhXu3xjLeaIXn3HpbZgKGQXEhbEMisabF7xrP73Xs5YoqM0hnCR7f2Rt6aHbNbJFVph5PYjRBTaMlONhYPVcpY4BSw6rFnj89cdjUk8yIlKpBHEQHlVrMHV8iyNDUdTs/9qZgNFQk/3o7M4yWjOzTto/CTPeyUoyji0u6eEz6bzATAWmduB5jPDwou9AXbxtMvPpec4N7DQH8oRKaCLktoydTV7pM8yux
/tvHItzjmxL0frK6UybXgOCm6/bSET8GV17Pv1ZRWSRxdE/tW67g5fF6tAY+Hu2k2j4PY8U1l6yMl3uCsodSvC5YCCBt4L2QXw2hSPfyuA+anX2DAR6b6SBQcaelpOmcxfGLo1h4ki8aBKV+eOTemb19hyphqlgX4f9j7027G8WZ9+FP0+f8nhd3DmLXSwHGeAU7NmC/+R+zesHY7Daf/pFI0tNZZiYz05l2MmJ6ktjGIKRS1VV1qUolVJG+c1N2L/cUYZ3DLVQbPXbXgs0FET9YXDMuMQMSEi0q7ISNUr2EZ0t/vun6JJPLWu/4vjzNj1LSMXlX13UYjHP5+ZjlquOCG8/uUbA4PviX6aJDADoRTJ2zieG+31iNuggTt+tv/MnZyC51u9QYjDDMeIoGpeicy5QjHy91TobLKcCIZCOW7pk7Mr5nXI7CyN6R8IR+NlCw54XAg0TjXcfA5+wL2d9+2DGux1MLB3Cp8ZLXj7ViAQnZo1c1nwYLKTt3lvWs16OZEZfKNbNKjV+5YtahVbCDDrhvZ108oyHxjCwgYSExEqsw95LjQUOTNgEkLitYNhPmAMxNZxMTn2BldL59vAAbzuYgLIKJ1ozmDwZcf4j8GpkFoqgq2znGgiaQZmSyENwiwTCLxlwXt9hoKyaCJBxSd6Ppjt3kRDZK58Pc7JwGeds8iKl+FYQNRtvcTAZRpUZOL8xrKTrtsA7gyj1/cuGuCjHSQYVUytwC9jqObhVl1vWCEYgai/FDdONkEf/ochUkt69y7XYHkYPyfEviQXFomqWXVhFhJk573D2aJhHt03sY8guJZpCAR4wmWv+MW49fTKJNPd0bZKKB7dAmXpcttVU8NawoEw/VHmN0VHDbKU96DXLClMTAixEJYWCbCioozUlcKA4T0C1k2GLUh4o221de1HBoAskczbD3qfnyOTI6T4Ydp5fxysTzcTVlK8PD3bcPyGzioxnxMyv71Aaz2rEY5aQthlifVXsgyNY5yjUSmAiJ4sK4LEZFX3Q10IUj9x6HHeN8QM5OGCa0WEuz+YglDvbUwx6fRXhgeRHDFba9sRJzc54LLc5TRNmSiJdt1uxxYnT+yYp4hsNZFzkJLU2EjRBi33pLfGvPnSFTG/KRoxraiQvx3M5PhDNUBVYe5MN5/V+NRNxWm7Q88A082vLE7UKxShexqHdNWH/rODoQ29jL7qtwvD2SObP1jbxj/WLC+pF43ASfKymx4iwIp+udKyyDxG8ytnCl+D2su9OJeIr646uxaACREL2LvixdG7sR9Qy3ItYAu+TswtfZC8SPE/ch76QOU/hWpK3Uxt/xSmPl8iq1T7LShd3I1Yfd1V1RLNx9KfRIr0rDtM1g0S0UIMrSMYnhjXFjSQhwDzqdQ8wTdCKy3MUvIxLFMM7nB5eGaDHX3PryCOmLiFhcbxqKxHPSGQvIR8dSG8V3A2K2lVFnXhMWiIODsjKGVnYUSLdNyNQrAtI74uCoWOE0jYlt011bCvBUTLDuCoUDuye8ggKx87/xCUBiic/HR3YMQ25szAsk1LbLkscgs5LA/YwDLVYYqMnX8CRfaonpFlgcwrN+vTABMtctdpPH++78Vk6zsmxaBSke8eUXmYX1mTyARMGxu7MSa8gzxGuV1xcypNxcjGu7mtWArHvx7RSUxDokJvbQ9Um/wecfA9cCXBdO3W2jLAckfpYyWoMkWLcZT+zMqIubs+pwTcJsUehmhAWRt9ySaOCsLS/EjjZazFvnMXBhFj0MCSOxC4BtR+15DFlBgFK7D/HIEGeT96oSEOqPnyKl4RdkLYeyjmoSk4iw6SVh+bORR7YW4y+6ug6X5F4BMb346u2MS1ZBvpEi91xgfcqZGRB8LiDmXdl0JGp4dsty2WB5aq/YBuShQIIkhhJWtZAGkcCy8Qr7+fd8lRnippoYMpCWXLDsWnUg3N3yIJYe4YGVJRQz0yZ/XROfMDtnuObslHA+bulZXma3aKzc2yDgpgYf
S6wLxWQldlyfR/BoTwuJrc4JpB2d3AYdoSh7lqBxxVEakeHjMUq72GBlTg35LNnkLUtGIXLLmj95Fxfa8pXo/358AWuja9UKisOq1UhsAY32mddxzhMVBnOxiljgachEcYfDpwZUxiDn7D0rlk6jNDPr+hDiO5/D0wJ4rre5LJHRRU85i8Q67mHhneMd1zGmyQjLASirAsNa1/NsrRnozAXbS6SX3MaaPa7MImi/3VvSsl6STszq1Ja82ahrxDLmmgWPQeCZZQvbH6q8MtEKbh+IqKnEc53XYEmw5WIVtY29nx4CDBZMsccUoqGBFdutHcaaiIsLJCl1xOyvFkoFLQjWDyNJ/ITNA4NVbzLONoPyzDLYVgqujWQGOzm17TH7wUw9ChceZePaizRi1jmMfdn7qWtD6NVeKrKVCtFEma1Bt6rB2loCV0LTRxDNlIkk5ln00NMmTDhvZ7dYQq8JWEQb4xJzh1oEsDy7GA4YI5doJEiUZ18LszbfCMlBQZoKG7jMsz1hSPSBFu6kLA2rs2PGPlJkNQEezPcnX4+8mRi1ZYl4VeODFjvFoDrW5RLLHVhWzhErWn7nV1kq5iEbbjg4mhjYxcg2wyrGvpi8O1V5yqZkTpXcHHGNB1duptp1SVYkrgZhJopZJ+F6LwBlzRrDGvXRiAMszHyAkadJIjhLKdtYNlEyCtaivXZXiXXk1RfEMQk0TXmMVgrpkFWPRZzLLAnaIa9P/hhig3hOh043e/Oq491ZEgIC1hm62AQgO71I0K4IzlPE1MHe6Dlm0GRE8IjDsYI8I/oQo1lzGmO/IVZZYTzOtO5KApntZXTMOs2XTzAcLdMujIIdzliKEh3byYI/uqUk+QRbYlA9b0KEZ4c9JRfYGQuuqEEKQeNa2kxbYDUtyTyLIaQnKDM8hkNtnJVdxFmfYd+XBGiEEbZn3epGPQr7zUyuwuMAnUanvNPsnKJfYWclXKNPvO4heZI1Nmn2Ieh5pt+39lOerLTbXS5EAK7EtG3DMlLXzRItarX/aNP0FbRdmR/EaBq3+86W4H/amEynWRXp9m4VNmQVhEB0i2fNj+2+qjYTbE7Ulp0RjN5LJYnPsFW1RFVic3nHm6TLrPiMVmSYWjzLSRM4lKyqqJfsJuSKqc1LYTwt69MB2YcTig1XTH3i/5EJo095y1HcB9bDtHMZg0E5VGdNPsSjEJ4IHh6UEBs9wq9hI2654tXUwPiCIWeOcSNZSSawGimCpO+6ay1iJowttD9zETv2e1Hl8sYJ2R1eXfFTl6jNqKfH56T57Pjsa7SJaFsPNGERdeBRKXqTbiUZ4i23ZrXuLWQ+8K3Y0xw4GmgC9h63PPDKGuNJqz/+8XvJAzwlUvF0XlTfY9mIzBTjwkGHCzkG+8V97NEo3HqABl1MH7Yx9B6mvLmf85qM74rwXc9yUFuCDoKJITXGCnUomGmJh9V5RAzGt/E0CexL4xEjcCFBQMLJ6ZdrJh0TWTVQOxhdJ/cNvloRZns8JeaWsyBxlFThsXe1ri2uITOSwNfcu1RQDbVlukhEJKsVWQa8tlDYZsRRXNbEVeINBnuEM+IjYgsTbayhx+/ITJ4hvnHYqdZoRrb3i8jGil0pQy7abyOxEZ2Ui5qjr6OBJB35zoMlGsI+i7kxTkuMVRBLWA0jIN8gTq0KwyDdo7xObGztJ8YVVvlahNUOe7ghI5ZSlpS9h3W4B2f6YLoJapKtgnRlLd4rDbapVZupD/fD7juCy3HSNkyhTSrka4rvLbolMWo04U77Ybtrw2MC5pyb82OyqsgAibZWVtaMKC4WsmaoJarAe2jkceBhPW4vzDi5kRDWOT0fEnw26hhGlyWLj2pJX8Rytwi7jDFmhCfAECLBiuz5rAqMRj3Csd+xgXPrHIpZlAsipw1LNFAFY5R36+NiJRLPXbTSycQqGRNHfZ2PIi6GjVk7hufyF9nEnvsIcMomZtPLA16pyjwqFYt0TOZlZIixYNcklJPASu5y
SgbHxp+iBVl/oCkYC9hzDOGHEYrDXJn2re1DBBVWvTQtDCk+IHhaen0SNyHhp0RyosDNM7ZimUWDnBGG22XhkdVF1VY8uckxIHHYwkEFEi0CeslNV/u43my9bBdh7bxERU+svbppjMQhD09mxAW7V1bmd0xVH/fS86gZOsHa5ywBuxk80MG9UMTKdCIR6Z/1Y40DaxaEl3q6x3J+vRohU82jHWy8GjUq4cxGbj+6nCLCBhIamCxh23pKVcd9ZHpsW+UrISWJYPpDNLmdmRpGFCLGUjUZgyaVtzo4j88NvDK92HDEwjpj4Y+uOyM/6/WJxDgVe+q0Khfifh6be5aZF2veOBLfaq/ONKR38WnGbbb8tHuXBBllXW/dpcATbMEx7Qo5Y3l8rKJ+M4JV5vjIV5Gfp+Jenk03D1HSdT9Ydgxaw6rjuIcf0OTzPGtlfyruvHOy41gVRD2yZu3AJQkk88MNmXm5bgaBQaKgMK9aM925ii5ewQz7BchgsSfywNiFlcjsPVe64Lv5p9zskJg+MeYxd02hl5UBtvbDGgOvNR51+WJ1CynrPMlio0FNQoJzhXeU5M5Q22GxvHozFZld9LsmC2kZT4HAS8fTJSIRuEIsOCyc+Xd/NRRP44TD3qWCAtfFOsGJpobER6IgCvnYUGaKGWAtHle5m0nFRpaIYhwSHBAr/HwQ5gII62wvOcXJesAeey+ajfdHrKekVnXtPSBiiR1IJXBDss5fvxjZtm6XrcxnSxOPjdZMdOxSgbVr1A+XuJJko8KPchnL1ubSrLCyO0KtOYa5I5acjZFn758zHjd4KPsdmQD6ORtc7EP/aAUdS6cM50uhlx+GcRyTVEby742Mzz/OK31/5vPLwiNvVC8R7jiZF4Hw+BO+UfDh7VN+fuLnO4pz08RPSgNTGvhdB6WBKQ1MaWBKA1MamNLAlAb+2m2iNDClgSkNTGlgSgNTGpjSwIjSwJQGvil89jXaRGlgSgNTGpjSwJQGpjQwpYHfdfxaGvg2eV72HbtU/KX6yH+tm/5y1eSnDQ9+rIrM/ttVkdl37DJAyfEbJMdpVeRbPGhVZFoV+RutikyrItOqyLQqMq2K/AWCk7QqMq2KTKsi06rItCoyrYpMqyLTqsi0KjKtivzV20SrItOqyLQqMq2KTKsi06rIX+P4Rqsi03TYDzloOixNh6XpsN9oOixNh6XpsDQd9ku3iabD0nRYmg5L02FpOixNh6XpsIimw9J02JvCZ1+jTTQdlqbD0nRYmg5L02FpOixNh33X8QHpsN+zSb9oVWQW0sTPT5n4SWngWzwoDUxpYEoDUxqY0sCUBqY08NduE6WBKQ1MaWBKA1MamNLAlAZGlAamNPBN4bOv0SZKA1MamNLAlAamNDClgSkN/K7j19LAv5znnbND6cqd/revJ5p9EgJ1UOv/A0/Vif+I6A3TAOX5qcGv/GRTFDsf99C2PCaEI8V/5qcqDQi/25G+v4qs5Z5ePzactKwo89MhdB67npyBxzO/uqSpd5Lw9Hr12PTuhXZ59ur6+OovFoAOgzh8tzy8Udb56b08TDblrv7xWm+LweMdrNMOt+/7qgPusYDz9Umsnl+g6Ebk8Tu/idIbl7kTfpBHGT67KoDCHfzxkJ7fpdzkcVi+uksnsN975B/s4PyzK3s/TYl/XMObk9hnHYXPv2N+PLgbqPH9JCJ0qccnW+pBa3zf4kFrfNMa399ojW9a45vW+KY1vmmN7y8Qaqc1vmmNb1rjm9b4pjW+aY1vWuOb1vimNb5pje+v3iZa45vW+KY1vmmNb1rjm9b4/hrHN1rjmyZ3f8hBk7tpcjdN7v5Gk7tpcjdN7qbJ3V+6TTS5myZ30+RumtxNk7tpcjdN7kY0uZsmd98UPvsabaLJ3TS5myZ30+RumtxNk7tpcve7jg9I7v6eV/pFa3w/XZgmfn6yxE9KA9/iQWlgSgNTGpjSwJQGpjQwpYG/dpsoDUxpYEoDUxqY0sCUBqY0
MKI0MKWBbwqffY02URqY0sCUBqY0MKWBKQ1MaeB3Hb+WBr5NnpdnXvO8gvp/9//vm6Sg/we+Sdo3oYf/LsOiJC9YMSlJHmz+jAoWs+pUPnKn/ys68hTLP8OC86Xry6fP8V9x91tQ/79XdDLuwPJ57fCHCt0vSjK/UaV580gW+3iwwvwNFvm4CwJyG6XZ7srw/rzpKOMm35xvpUI5C/5StenfryrNgjsJ/FBFmn9ejVtiXleVlt+oKs0xH1RUHr6jovRDQe6PK73NisyzXuGB+LpXGOFOEH7oSPDnk1T8qD4T6WKMT7kYg1bhvsWDVuGmVbi/0SrctAo3rcJNq3DTKtxfIBhOq3DTKty0Cjetwk2rcNMq3LQKN63CTatw0yrcX71NtAo3rcJNq3DTKty0Cjetwv01jm+0CjdNv/6Qg6Zf0/Rrmn79jaZf0/Rrmn5N06+/dJto+jVNv6bp1zT9mqZf0/Rrmn6NaPo1Tb++KXz2NdpE069p+jVNv6bp1zT9mqZf0/Trdx3/LP36j5NJ/3YV7qdMyl+enf3280k08fNTJn5SGvgWD0oDUxqY0sCUBqY0MKWBKQ38tdtEaWBKA1MamNLAlAamNDClgRGlgSkNfFP47Gu0idLAlAamNDClgSkNTGlgSgO/6/i1NPBt8ryAYMA/I3rDNEB5fmpIoetkUxQ7/3m17Nd1rJ8IVPjXuzIM4vDdHflG/ein9/Iw2ZS7+sdrvd1/j3ewTjvcvu90/f+edsl+rNQs8s+vUHTE8+OXfhuEV9d5cRn2xWXKTR6H5avLdGP5/Zn/wfA+FY++qaLX4uulDzK4Y6AMGfDwE/C/ruY1YFi69uFTrn2gRa9v8aBFr2nR62+06DUtek2LXtOi17To9ReIPdOi17ToNS16TYte06LXtOg1LXpNi17Tote06PVXbxMtek2LXtOi17ToNS16TYtef43jGy16TbOdP+Sg2c4025lmO3+j2c4025lmO9Ns5y/dJprtTLOdabYzzXam2c4025lmOyOa7UyznW8Kn32NNtFsZ5rtTLOdabYzzXam2c402/ldxwdkO/+WTfpFq14D5h3Z0DT1kxLBlAh+10GJYEoEUyKYEsGUCKZEMCWCv3abKBFMiWBKBFMimBLBlAimRDCiRDAlgm8Kn32NNlEimBLBlAimRDAlgikRTIngdx2/mAi+VaaXv73CyMIb7PhNFUYWKDv+KdlxWhj5Fg9aGJkWRv5GCyPTwsi0MDItjEwLI3+B6CQtjEwLI9PCyLQwMi2MTAsj08LItDAyLYxMCyN/9TbRwsi0MDItjEwLI9PCyLQw8tc4vtHCyDQf9kMOmg9L82FpPuw3mg9L82FpPizNh/3SbaL5sDQflubD0nxYmg9L82FpPiyi+bA0H/am8NnXaBPNh6X5sDQflubD0nxYmg9L82HfdXxIPiz/OzmhX6YwskhTPz9l6iclgm/xoEQwJYIpEUyJYEoEUyKYEsFfu02UCKZEMCWCKRFMiWBKBFMiGFEimBLBN4XPvkabKBFMiWBKBFMimBLBlAimRPC7jl9MBN8q0yu/Ynrv7u5ekb346XDLlG15TAgziv8syvx0CNVTcsrxO+kpJRxwtEuSF29tHqlcH3dlmL/B8R53QUBuozTbXRnenzcdodvkG0L95qcqDQh33BHKP5KzsPu03DxSyJB5c9j++g7Gkvy8SPMTFf7D+D3Vcf5xwDjmw0ZIejVCi02BvR9mGpbNKT8Un2C0pKfXj40EP2GoOB7cgWeDxUuvB0v4VwcLvGPhRJgGKM9PDenmZFMUO//5WP2xzP9ux4VBHL5bA73RKU/v5WGCJ1UdPrv4Wz31eAfrtMMt+W0C8cKLCfSis4tuzcbjt37r79cXkl5cSHhxoXKTx2H56kLdwH1/7H8ylq9V4398LHnI3LHw5wznW9f68BGF/3xEX45heNmVLhnfO/yADy9Xj6eSv7XL49h3L65PL1L8JO7TaeTFqruE8PTyt691
r56+95s4gT8Sn+JpWdSfLvB66PFHs+Y7A2E6s0yUhXbL66ODcvgfEMWbEsn/AfZP1MJ7BZJ/IZDSS6vwO9KIpWNz/eG0Mzmh+P0Wcwz77D7io0L8vXaBF5PuxfmEbOha8HOnBsu8MTXEpHyUePxJtHkUKTGrTuT9xzVz31/jc8gquf+R8/9X5P6zkx8W6JHlecQJeXtpXrcw7/uyvO+L8n68wW8T9enNh7t1ExJfnAHS+dKd+tuXxJj8noS4aea53B137QNofHy8x+Y8nfdJwNRPwE+/J2g/zmQgCP8mfHqa2+9U0I/9/Ka9fdLVv2LhKPddSJzHruR+VPpPpuJJ74M/0fq/O9Z/rr2fhu9GtDfHPAfsHPs3lbfMSs8u9N0n+8nKG0L5zQZ/sDJ+B/J8Wn5dYm0RtidyQeUc5jt8c6Jrfnvf+u3NN/XMj9iGrNN+EmL8OiYA6AcF9AvmEiu/0K3fWM4PQk/23vD/31TMP0FTCvzznZt48bWmlN+YNS+F8icqytdIdo67N8BvPQYHULA5l88MnZe/x4DirnrbgP6fHXZ9oocN/jnAAtVdvfj/btRi3oLwfrfcPzHswcvPhZGTwSthZKV/VRq5PwWPf47bhN8Ru/vN8ZyQpcufCa3dgOwB4SfpPuGF3y4Lr8TtLcQAPk7awBvS9kIMPnovP5F7aRFe9wr/BzjqA3qFfQd0TpLdufg9Ef5hzrzK3LoVgX6BBGSG/Efal2+CHR7mHz5juuP9qOBPhO39bpV092OmH/75er48TwZ8c5fHt0/5ALnh/mty8ygB7xeZIIw2VVJ+rNRwbw/4o36R77hnx43L1Dt2W/1PyNTj2/++OLHMnfBssD+zNL0nKPSlpOlWLRtz98yuyc+RIRDvxGcHf9ti9R6qlorVx4sVy39qMXq9POOLi9FN4ifxTv4Dg8e+4bPekgy9Z6UBlaEPB03SHfdswLlPJUTvWdzwpYToRu0Zd/fc/Zef2zeOe4G+b1qq+LdCu19aqm5SNXHMC/smf2aZekcAl8rUx8nUU8wafCqg/RRH/e9Iza3aN/4PwwCfTaxo/PsWDNyfxL9vHHvz/7mA943qpj+Oe9+6FL0j0P3RxDYPb43Y5mmc9iPm1ndh+6rENv+fC8zeqtx8JWqb/8+Fam9Uqr4Sw83T0O1tCNXXYriF/1zs9kbF6nMz3MJ/Llx7o2L0qTlugYZvb0OKPjfLLfznwrU3KkZfiuUWaPz2RqTqK/HcAl24/Iul6nNSkgKNeN+INvpSTLdAA+K3IVafm+sWaAT8NsToU3Pdwk8oxfY85/51R99WMT3hZco4/Ju1b75XPPu9C310GT3xPXHlLzV2IvhJY/eqbsC/PXbSezzdL13UUpRfDQFGefC3Q/x7Qyu9lpE/vO6Hj/Rfq6X2JyP9ZunLv1nF8reSme+vmPm7UvWntS5Z8bbkT34ShH9awFJ4IcgCkN4lYX+1BpoMnxdtE6D019r1/PwPqpkmveU0/+UCln+/vuTwsRMey2+NTwRsfqaaRT/O7p9UYVJ8ERR+s8Ik+8ZU+7gKk9IbTrCg/h/6f98ksiuICr5JGn7jVoua/SqXSP45AiGx8E56UbP9jWWE/26BfekNV+RHkWCpSPzLIvH0+peJhPyGh/OjSOCP9lQq/l2pEMBr4/GmVPwBIvuHUsG+korb3ivlVwiA+O1D9mWRuNcCwbxWE2/h9pc1hn+iQLxjMcJTnd7dcUPcD6X7jZ7CnswPgzzeeGFinYrdY9d5p7LEnf9aCsrT+RfP8MeH0Ui9d4yLH16y+jmNsXLc2Yo5b5hRPz6RDZam98ttb0n2Wur2WxogFa3wb40Jpwux24HJmKr39mygongQoe1h123XlDT3etLiP8wLObuYKcl9YfYb8oWEmdtbZsnCY2AEW/+4RAEXcONjUm246X7lKsn4CK9roa4sBbdBGTX3y7liGztfCkCe9+00CnBzL2T3vPha5r2gtxPZnWPt1Ya/AGZbModq
sBtsz5O5Cqq5Mhq0Tpll2dzJkul8PreT+3EyX00nszjbLZyZel84XjlCrDeLe8tmN9RdbTycrUZ6fzJQzDzuDbCgH66NtpjCwg92fDW+tFoipWFYyBzZiZHB4qazFf4rwI67Xo7IlnBEQOsyWm+U2b+/YdZHH2PTnZH9WVNTmQWOC0qT7HJWb0ZibZx3MNrvA0cAD/vYhbna7Tp3ftjTT6w4u44qyVsp8XR62UTuHjhVBrt9Bss2Y69ShmUXgHYuhqhB/M41c12arbvdmMkufd02BEWaxWHlXqazPrLcDR+mBrhUWZhsOZET8V9rYmAU9aDE/bzVo2YK1hzZCzUgPl1o4OFRwnJzvTQOqrp94UpQWn087XX5XjzJDFQaDSXbaxblZBfq7pOyVynCtV0iGa3U3LUvkOyEa5CdEUCUC+sAS6wx5Te163X72J3OoDT8ZBKj4ZW9gPt6U09maWlo0SgCI1DXaoz81TSwU1aSEdlLsl82yPXGBw4ZWpzqISxrQLZX12ElZrYnzfENNM3K9tKGbBXbh5J3vhwZRo0H1mHtkKcab2WYD8OTg7t4iPuvNxGxdtAPXLe95f093ASeeFkhA6lkx8Bus1z8NdSHgQGA0zBaPHak0zg7y/50/bDf3/ACfYN0gbKBo7OPjtPzmuzmKaHStckmkmVY5HEUXhPrkkE035/3056HBhupso79CDrGnJXRfDtAq74ULXOWbNi3Is8FPLJzNcJf2fr1xrqQAR4aSc9XUKsmuOnclONnezF3JU/UEdk5XiEjyGytKyktrIhVHpRQRfrOTdm93FOEdQ63UG302F0LNhdE/GBxzbjEDAqyLWCFCnWU6iU8W/rz/R0nmVzW5L05zNP8KCXFAJnl1XUdRl5N+fmY5arjghvP7lGwOHY9p6QLsm26rhPB1Dl7THZz31iNuggTt+tv/MnZyC51u9QYKWPMeIoGpeicy5QjHy91TobLKdmafSOW7pk7Mr5nXI7CyN5p3SbjBgr2vBB4JBKjX8fA5+wL2UpzSB5GOp5aOIBLjZe8fqwVC+iTjTurmk+DhZSdyWac6VmvRzMjLpVrZpUav3LFLFiSrt9BB9y3Mw31vUtDdqvPArIbshiJVZh7yfGgoUmbkP1GoWDZTJgDMDedDbYFrbky+i1RwQuw4WwOwiKYaM1oTraeJ0+tdU3PLBBFVdnOkaOYQJqRyYL/TyUYZtGY21ZouN9oKyaCZNvQuhtNd+wmJ7InIx/mZrebs7xtHsRUvwrCBqkrbiaDqFIjpxfmtRSddlgHcOWeP7lwV4UrC6FCKmVuAXvdTqmrKLOuF8lFaizGDtktWz9ZYBNml6sguX2Va7c7iByU51uyK24cmmbppVUkkxP3uHs0TSLap/cw5Lj3I7LFfEb2Lu2fV90er5NoU0/3BploYDu0GzW2bKmt4qlhRZl4qPbTpocKbjvlSa9BTpiSnXOLEdmyE9tUUEFpTja7jcMEADIHtmSz1aLN9mTHWw5NIJmjGUKN5svnyCD9y7Hj9DJemXg+rqZsZXi4+/YBmU18NBugfFrZZM/e2rEY5aQthlifVXsgyNY5yjWyK2tIFBfS4hgVfdEloSNWF/Yeh2YoH5CzE4YJLdbSbD5iyW7GU6+UV90ewPIihitse8ku03OeCy3OU0TZkshmx2bNHidGHj4oFj0fYgNolmxoaSJshNCA8RbNNMdzZ8jUhnzkqIZ24kI8t/MT2epXFVh5kA/n9a3t3HyLu0l/fJu0PPANPNryxCUaa6V0+0HXu4dtp9lQA7HNzFBfhePtkcyZrd9tbK0XWLC63a7dCT5XUmLFWeDWQe9cYRnE70+NLVwp3f7U6UQ8Rf3x1Vg0gEiITiREXLo230T1DLci1gC75OzC19kLxI8T9yHvpA5T+FakrdTG3/FKY+XyKrVPskKke0KuPuyu7opi4e5LoUd6VRqmbQaLDdk6nihLxySGN8aNxYBB34PHHeSVFXQiG//2y0hfoZNx
PuO24I+IFnPNrS+PkL6IiMX1pqFIdsPVGQvIR8dSG8V3A2K2lVFnXhMWiIODsjKGVnYUSLc97DIbkN7p9rsPp2lMbJvu2lKAp2KCdVcoHNhuj14FLm2w8QlA6nay5yM7hiE3NuYFEmrbZcljkFmpEB3BgRYrDNTka3iSL7XEECWoHMKzfr0wATLXbVDb4313fiunWVk2rYIUTyb7WGcW1mfyABIFx+7OSqwhzxCvVV5fyJByczGu7WpWAywoR99OQUmsQ2KWkaRP+g0+/xi4FuAC0pe7bZTl4EI0PqM1SIJ1m/HEzoyC7lnU4RpffxaFbiYUG0veckuigbO2vBA72mgxb53HwIVZ9DAkjMQuALYdtecxOQYwKLX7EI+Mi0eG96oSOGSn8ClSGn4hEnCwjmqsb5UIm94FMU1GHtlajL/o6jpcknsFxPR+I7stc8kqyDdS5J4LrE85MwOCzwXEvCsbsn9xG57dslw2WJ7aK7YBeSjkxKApYVULaRAJ2D1dIZG756vMEDfVxJCBtOSCZdeqg49G7fIgll5/TzY1h2Jm2uSva+Jjye2d4ZqzU56Prm7pWV5mt2is3Nsg4KYGH0usC8VkJSoNygOP4NGeFhJbnRNIOzq5DTpCUfYsQeOKozQiw8djlHaxwcqcGvJZsslbloxC5JY1f/IuLrTlK9H//fgC1kbXqhUUh1Wr2eTt0T7z2oDMIxUGc7GKWOBpyOw2+B6HUwMqY5Bz9p4VS6dRmpl1tbqZcz6HpwXwXG9zWSJDIaPPWVulXt7DwjvHOw5jVWQlIywHoKwKDGtdz7O1ZqAzF2wvkV5yGwsb9L76MOZsu7ekZb0knZjVqS15s4ddxpcx1yx4DALP2FO3/aHKKxOt4PaBiJpKPNd5DZYEWy5WUdvY++khwGDBFHtMIRoaWLFd5ABrIi4ukKTUEbO/WigVtCBYP4wk8RM23TOl9SbjbDMozyyDbaXg2khmsJNT2x6zH8zUo3DhUTauvUgjZp3D2Je9n7o2hF7tpSJbqRBNlNkaENeVtbaWwJXQ9BFEM2UiiXkWPfS0CRPO29ktltBrAhbRxrjE3KEWASzPLoYDxsglGgkS5dnXwqzNN0JyUJCmwgYu82wPiW4ZaOFOytKwOjtm7CNFVhPgwXx/8vXIm4lRW5aIVzU+aLFTDKpjXS6x3IFl5RyxouV3fpWlYh6y4YaDo4mBXYxsM6xi7IvJu1PV7cVNED83R1zjwZWbqXZdYsvhrwZhJopZJ+F6LwBlzRrDGvXRiAMszHyAkadJdg1fStnGsomSUbAW7bW7Sqwjr74gjkmgacpjtFJIh6x6LOJcZknQDnl98scQG8RzOnS62ZtXF+KAsWTncmCdoYtNALLTiwTtiuA8RUwd7I2eYwZNRgSPOBwryDOiDzGaNacx9htilRXG40zrriSQ2V5Gx6zTfPkEw9ESa1ylczhjKUp0bCcL/uiWkuQTbIlB9bwJEZ4d9pRcYGcsuKIGKQSNa2kzbYHVtCTzLIaQnqDM8BgOtXFWYm1HLAn2ffHFJWGE7dk4OxCyMew3M7kKjwN0Gp3yTrNzin6FnZVwjT7xuofkSdbYpNmHoOeZft/aT3ls5dvd5UIE4EpM2zYsI3XdLNGiVvuPNk1fQduV+UGMpnG772wJ/qeNyXSaVZFu71Yh1uGWJxDd4lnzY7uvqs0EmxO1ZWcEo/dSSeIzbFUtUZXYXN7xJukyKz6jbrP1Fs9y0gQOJWQ39GQ3IVdMbV4K42lZnw7IPpxQbLhi6hP/j0wYfcpbjuJi33Y1ZE07lzEYlEN11uRDPArhieDhQQmx0fON4ICNuOWKV1MD4wuGnDnGjbFaAIElm7fhEeiutYiZMLbQ/sxF7NjvRZXLGydkd3h1xU9dojajnh6fk+az47Ov0SaibT3QhEXUgUel6E1UbDE9xFtuzWrdW8hUZ2rjZdjTHDgaaAL2Hrc88Moa40mrP/7xe8kDPCVS8XReVN9j2YjMFOPC
QYcLOQb7xX3s0SjceoAGhB9SYBtD72HKm/s5r8n4rgjf9SwHtSXoIJgYUmOsUIeCmZZ4WJ1HxGB8G0+TwL40HjECFxIEHM60Rr9cM+mYyKqB2sHoOrlv8NWKMNvjKTG3nAWJo6QKj72rdW1xDZmRBL7m3qWCaqgt00UiIlmtXHsH1xYK24w4isuauEq8wWCPcEZ8RGxhoo019PgdmckzxDcOO9Uazcj2fhHZWLErZchF+20kNqKTclFz9HU0kKQj33mwREPYZzE3xmmJsQpiMSpWjIB8gzi1KgyDdI/yOrGxtZ8YV1jlaxFWO+zhhoxYSllS9jy0UjXp4EwfTDdBTbJVkK6sxXulwTa1ajP14X7YfUdwOU7ahim0SYV8TfG9xVTqPppwp/2w3bXhMQFzzs35MX7OgQESba2srBlRXCxkzVBLVIH30MjjSJSCWJ0w4+RGQljn9HxI8NmIoGrHZUWivSR9EcuIOPJljDEjPAEGazrFiuz5rAqMRj3CsU90szK3zqGYRbkgctqwRANVMEZ5SBRMrETiuYtWOplYJWPiqK/zUcTFsDFrx/Bc/iKb2HMfAU7ZxGx6ecArVZlHpWKRjsm8jAwxFuyahHISWMmQxLUGx8afokVBIgsKxgL2HEP4YYTiMFemfWv7EEGFVS9NC0OKDwiell6fxE1I+CmRnChw84ytWGbRIGeE4XZZYEy456qteHKTY0DisIWDCiRaBPSSm672cb3Zetkuwtp5iYqeWHt10xiJQx6ezIgLdq+szCfh1HUf99LzqBk6wdrnLAG7GTzQwb1QxMp0IhHpn/VjjQNrFoSXerrHcn69GiFTzaMdbLwaNWqfeEhuP7qcIjsKnITAYWK6PKWq4z4yPbat8pWQEvZIf4gmtzNTw4hCxFiqJmPQpPJWB+fxuYFXphcbjlhYZyz80XVn5Ge9PpEYp2JPnVblQtzPY3PPMvNizRtH4lvt1ZmG9C4+zbjNlp9275Igo6zrrbsUeIItOKZdIWcsj49V1G9GsMocH/kq8vNU3Muz6eYhSrruB0viByoNq47jHn5Ak8/zrJX9qbjzzsmOY1UQ9XJkKgcuSSCZH27IzMt1MwgMEgWFedWa6c5VdPEKZtgvQAaLPZF9F00OK5HZe650wXfzT7nZITF9Ysxj7ppCLysDbO2HNQZeazzq8sUiQVaxzpMsNhrUJCQ4V3hHSe4MtR0Wy6s3U5HZRb9riYAiT4HAS8fTJSIRuEIsOCyc+Xd/NRRP44TD3qWCAtfFOsGJpobER6IgCvnYUGaKGWAtHle5m0nFRpaIYhwSHBAr/HwQ5gII62wvOcXJesAeey+ajfdHrKekVnXtPSBiiR1IJXBDibjUFyPb1u2ylflsaeKx0ZqJjl0qsHaN+uESV7EaZ4Uf5TKWrc2lWWFld4RacwxzRyw5GyPP3j9nPG7wUPY7MgH0cza42If+0Qo6lk4ZzpdCLz8M4zgmPCb59yOt2y2R+DnE7ss95sFrVhe+weqCj9vOUH7HGmlK61Jal9K6bx2U1qW0LqV1Ka1LaV1K61Ja92u3idK6lNaltC6ldSmtS2ldSusiSutSWvem8NnXaBOldSmtS2ldSutSWpfSupTWfdfxN2ndn8Tpiq8KZb5O3f63Od33VC38rRrSY0L2H5ZC+lXZ1Q/Z5M5jV4JPUHJLkF9kbj9tKveXy6cJLy4EXgjMTyp59Oo+jw3+2BJG8ls1EZ+VMPr75Ynud0cSGbDy0/7pml7+j674OSsc/YpJ+7OqKYnPtarAia/rH3CvJ/HT7lI/X6nCv6ZU31li7lcMEfdKr3KvhPQby4m+HHrRC0nF7webUI78T6CJuReKjX2qh/WXqx0+FSl/uWzrJ2vilw3mHjcffW+7Xpz/MZqbfapr9kcT4bmg/0ld5Bsog8zKr+VcF8h/v6u+T8/qIj/u0PBTdB+E/AtxY1/pPu5NSMl/FKRkn8rPvGeZYIlHOGxP5ILKOcx3uAnEmv32vvXb
m38mG12p7CfFdbuy4gehJ3tvVAN6Z5nsv76FOPPkPnzfu1h6JSTyGzLyYV4Hy7yuGDVIoxA/JB6K2wRJtyBN38tRPnYO+BkaBLwwWMxrDcJK/650vFU+6i/ifeH38P7meE7IIrjPBNBvQPa64v8/RRlJL2Ig8LW8vQX0Xnq0P0/cuNfK6B+g9c9Qth0wr0rl/024i4fzT670weW8We4tZfG1Rw+8Kpb/d0cP/JkcfPjovU4psfIw2PkdMLpNbfxqS5SfbJEBRvXPR4V/nfrzFl4DL33Nn6gjf2rR/M+4PQL47sz+Nld+yv4IgH09nX/lBgks/4bT/qLI74EW+f3AIr+ABc+3d+SfCqr+qsLPuAV0/osvtDKUf9L8f6nu/+TCHz3/37Uj9SeNXn8GVlB+8n+f5EGQ74S/ifBk/uW14L8sTD8VnlNh+uvC9CJALEt3gviDcuH/rmC9vK78LwvWF+bYPqNgCaz8XLB+jlwJ7L+tsF4vNvjRI30iNyjm/blxQZl/6fO+3jT4afXUvwN539jN8ZkgvFY3VBA+QhCkXy4Ir/nMZ4Kwp4LwMYLw3AkWwGsn+F8WhHc4Rk/EdpSEl0fsoVAY8nEwhGdf0JeQv3sJRd+LPb7z4n9wrY+GH+Jf2DqHyti/I2OQ/3kyBuUbkLGfutfxbcnTm8sTFQYKzPf9dn/YYvf73rzf/nhn3hdLeELOl5nPQBXy4Hn4h5X+dpCSfyG30vv8sr+6rvFli7lHf+zdDeMeFzp+7MLGJ570j6ZQjOfI+f1AiGX8U5qGfrnxnq7A/CFAAszLQKEE7jj5FUji3wBJL/dW/nkg6V1rn5Nkdy5+D8X+oGU2T4UDu7V9N7QS5oVCkBnyH2lfvgl24bPlnkx3/J4P8Fo+/kTg/hw//zD00p0kclASgCCwEtaB7GtJ+J1TPkAw3srioILxLwrGo5qQ7wSe4SRWlHDTJP6J7rlNseHes4Scis1His0Tcr3j8VgLLAQyK0vwDUNzU2LzjkXoVGz+BbFh7oAkMFDC0BM7HgIELxhJVrjjRUHmZHwSB0RJvHG5ekcMiMrVvyBXAN6xIiNhDCxDsp5VeJ3TcFNy8x4enMrNx6Mfjrt7UjUSA2TxdVD5psTmPaEaKjYfr24wambwaIs8y2AR4sUXnvfdjyZMemMD9psSqvcs5qVC9fG6iGfvWE5kWF7meYl9GpZblRqRSs1NqKLfQz6fUqjeEUelQvXrvPtPKVM0An0bMvXHmImT7r6HIDkeiDeOmWj0+jYwE5DufpAZHjKvl4fdktwAGr6+DWWE/X5WlHmZxSLDSlj7PFNGgL3j2R9N3I1LFY1u34hUiXcsLwBOBDLx+3n2+ep1FtwBIDOAfRSu2wZOT7WDqFT9aqn6Q12FsbrAYMMiCJBhRPaNfSFvSaj4dwSbPnylDHgiub+vlGF/+UoZjqcBlQ+Ybr8J3GddKcPx/7mgyOMQv18mgjDaVEn5MWLxKdfJvCtDnwrNhwnNp1wlw9Ooxi82QV9zlYxAox63IVefbJWM8J+La9yWEfuUa2TeVa+GCs1HI5+vtELmXVVrvpRI3Zr9+ozEs0AX69026vmUQvWfW6x3k9btC62PEf5zceeblKivtDpGoBHrW/DbPtnamHeV8PtSUnOjeOlLrY15o4AflapfIlUv1sa8iGh/qqUxbxQD/OJCdVsG7gsujHmjCtNiUxzwO/chKXDzf097FO2eidnvbHiE+/vtDY/Cu/juoWj/fVc2jGXUrp5TtPMfdh/rPpuH/inF41n5v713d3f3wzZJu6cL0qL/zybOh2zK9big6btkP+3g92P9f/aDah/uVBscr7Ooz55Hp2ioTxO4/d87XManqnS744bUvlK63+hJ1TFv6b2n0R5vvDCxTsXucTs871SWeBRei0N5Ov9CFSk/PRXRdptyg2few0tWP6cxnjQ7WzHnDTPqxyeyq/j0frntLckG490m4wOkohX+rTHhdCF2244bU/Xeng1U
FA8itD3suj3Kk+ZeT1r8h3khZxczJbkvzH5DvpAwc3vLLFl4DIxg6x+XKOACbnxMqg033a9cJRkf4XUt1JWl4DYoY9yGuWIbO18KQNmOA84NcHMvELoSyDe52meCSZnf98FpMA+APJn691uUbNByuhsU8+K+3bSteh0PAnM1HpeXKbP2G3bDHkc7Zjn/xir8pOewxXysFsqAtTNV5ZRjUSr3ZTU17pdpkQJrRTZOUIgOccPrTvQ5G+N+vYmw/Cqy2GJdp+eJ20QGJLuxjdvUwmpdCbF6VObQQLN/f5f4jz7u2wNTxfpq55q5KOZ1thGrcZabGSOWRsaHuSdsOLc6SJlUZbkwizxJ9Aw7LT0Td5E3m6CK04VgaXoDD/dSe+EyLH5KogFJLKKslq5WxmtIi8018FwbAy99zsUR/sXAZWRb+NwqPBD7qORioSGFX54DO2WldRPBQ+DiE0HGgmq6vuKJr6gnJR7h9yS494jJ1QMyQHjOKxf84kDOLqCTXK+8i8a7osdavAEqIxPqjWU0A9Rv18cVK0gWLMSjZffxF8a5MdNWx6rwSGNUq2SFUoZ4noyRkUgR2d1IF8agx9n6rC6URpnwK6+Pb6lbAcgs24G13+BZtS9NwQv9xBKt3OeSWQ+pJisUGwsbXHsVeXXfbfYjruoh5M/G+Os+vt2kD6XovITrYeltB8jQcAdHLXKzQZhFEqvX/hAhcRleTkQ0h2EhitWizNceMhQUjbjLjMvP+DoZI+t4VnKs2kdjZcEF7VFvZCywpKf1Qr1Ar7bxeUG9nh61klPKQ3CWI87HSglPB6WQsggsXS3kNRP1BmjSVgdx4rH90LWJUhknM3WVxhqS+nyIv4US4NaZz6+5WYz2xpCFtraX8GRSrHCvNXAM6oGn9Ic8rBBaS+U4w2OjnKT5fnZp/HWVIywtuj3cItTos2pzmfTm5DnmezHjpJQZx559UWZaVuNxbiQoBarZdfmStS6GzJnbPuoT+5HaRsDUu1a2deByfKhO1MM4a4lYWYYW6nNYaWfQSgVWeTVTVFl1wfKp75ehLSCkHgPHaLmY590x2EkAn2WWTFi7izY6mvGEyJQdIz80eB318KtKDk8aWHH2FNiuvQmnXj+OUluGjsXVzIpB16f55JH5dHbtHazlyyiQ24w/hepWLPJswaywur5oaKBC07OnprarSYO9bFflOlhW2Qbk3JwLyMDgm2ZYzTKomZ78LBE29cYRy8jgYBoscpiLOR6V9UjMiDHlwMycGN2Q6oajXKK1iseN0bp5JLVTYnqDTnb3+8ypXHa2QsfptM3OuJV8pTFLMgfEMIP3LIOwFBSVWHt93J36KSUlaRWsbnOFV4epbcGzvCKnZ2EWXFkuwJN8vGZXZTMz7Svc+FMOWfXkONN2F246PgaFlpsSaZxFNAQIF+ms3Uw8tFI1aZxnjL+JOj3rRSALiup+h1aKuLxGobTS3HF+3q2m+HP/fuivYocokGOYzcW0NKbcaHIRc/yWmZ+GaLhXvHZBlJEFT9HGIdqh6R2QttviU0Zuti9Iu3XRAuLRcbEUKVfjQMZEu+4tkQzF4lxvw+uYr8veFE2USZS5lzX+iiyBUK7rHo94ZZ+W0sGw2hOS9Blu38VZ9XViXWZaLrsKsGtnxKzA0iDoWjGlrmW6L56MfirrfB+Mx5f5Kjn5aGVzVcYJQqBYdsADPGOUow/cfd5oPlaYiidh24v78szzjIXUSxANca+VjRKnrgADNC3zwwyNrkvUaBmMuJkGls6+3yjsolFREJ0PMHTxtZtufu6REceo6PESmPnE0ClojfT4gmaas8HPOMvDNdnk6fuZushpCzL2ZdOQt3NkoFgtwEU2rZzHk6g44j4S9y4/kEetcmkidobxBXTPjWzplsFd5BWSe/geNb7Hakr0yXWbul7fvRDzy0zjs90oGJQMVWxhJ2pPwYbpiBUfagZYjSJ1dSCmSpnc4x/xLB4g1LvgHypSZwfSNVtimYe7
F19GaNHDennVw6Dm4ctYqR9eXPbHLz9etq+oBCV1X/6vtWnUegPeJLODYbU9Hm69OODPRM6zeHMaTaw936j8ENuYyr2QoKluTQyD2N7VEDcBVFzV+ATWsaG2R1JxwHcSuUVKpsBI3kTGJU6aGXLGkfUgU8SY42mtK7o3NK5iwV7wfYmDQABZ7mghlmDRu57YbeJLIJGTmYam7XbJBdEKGSpfHcfn7WzsCfMVyqelCT1DsebXLDoKjSar7RA/NhFdRqxORgbdg4M7ggVOQUT90qJFantB4KmbZon2e4yzWeWY75GsmDDa6KA5IcOccn3SJHkPNxaetA4y1UHKtjKvwY0GiF4IgjRBsWLrLdHZsgI9CO7dsXTeoZGa7b+/G4FtUJm7UWxz7YRBvGkIezx/LV9PQROe9NOlUOKBVRBLbe/3WInpbcSO4dJ1z3FV9GPzCCv8Zo+ocPEYqt6Wbdyl1kznIYFWW6LhwAII0PESUIhYg+eCmbOdzZPOYq6ByLWzZVniXpiIW5mJStw9+hJyGyGx+ni0nFRPYBbpWudKZCPoRs4xtrAEjixo8MgljxLYFvD8ssAKZyL4bZZjjUo0iVkS81FErTRRsIykkUh6mdycW4MKujkQdhOst/QsWKZsLi+ncFp7kdhWoFZn2oTHel1hCOROA3HQJ4PRnv3as8R9xtVqjLHX5AF7mRDUVQ7AvVHmDYfRUGzh69Q8E/FldWLFKromfh+FKt911550jM+KkpEF4inKgMhfXEaNkZUmHbaqhmFmiufIU6ZMW7bONsZw38XmDrsG+no/l5x7GGngniACYTlAsrGs81os8COInRHMgQ+dMwsSAT+fEEizMJU31bAWuTC/z8PGx3PLt1hOXk0lq8orcbMrJKVBeSCgSE2v2NlTzlZ2yLjCwCqVVVO7uhDrf7TC1jrzlnbAeIngKWV/xM5J6e5FPiE9e80dVm2Q34dNDzUCWNedbGO5VCLJXkFHZ8GeZ7Wpi3rIyMgQFdx5Hfhu7NaednVYMg8jLST4pRmkcWLm5X4BNY6HGJKfUxMWfQK+ZyMMUJf8YmUp9gaeThYUPG6RxdAtsdVSkGIORwTHS0cyW45Rhrts6rFXBpk9tdqkZt8yzrMxcDiPDG2OTCU2N/X0GI1PKw8jbC/VI3wTMmGPU7D2PUu655wVLIJVEK+X/AabNjTQKmzbVNSDSwHcy2wXRIC7KJcJWrarcMnMGDQUr65NpkrGG8IIMvg7vDrnsPJMizbPGgj8AdFTYBn0pxg5D/mzBjyYyWAUM+volIiS6zkplnvNXmRJ5VmXmCOCch0dEdpYU+J8EHQv7LkMimzqMFpsLUXcs/gDno+uUensgU2k51rwGIBkgVtzXDcZyLBtiNIsz06AeNU2M0cY13VtEyyeL0quL2D5c3IWyrOp0DlJlViNyk5gpkQIN9YFcUQ8xGXqJtjfNtHUyxSh3pOnEgwRet9lCQxdG+GZca3LeQQkuDAlAbWq4Iu5NU35ZgwYyybY6xLWAlvGDNbjbL9raXoOc3aayGU94sUVKDwlVnLId3NQSUpwnJZuIjFYTlzP6EfI1JBJfJvpEv9Qu1m3YsXcy4wLXPcarKgMMSFTYMw5fHjsvLv9Ohphm5AWPZBHfhrx0JXEbYh7I0ArpEnDRcbg1gvj6hyAwPU4OMb3IRgSGp0uGKYOceBPeQI67aIcA9vF+nlGPgwrgozFIswPw8VKQ33vssRXE4uAeCCiK4Kjw2JccvTtNSCoscPPrWXXgePp5UnDqM208NV88v6R/E9u3c4K5HDTa7CM9gRtXxYgNLLcIY7sCRtYb3LAfSsNLLCEy5pr/ENhxEYaAGtLTmcX4AxtbuxtK3x9p5Rc4X4inVbEXetV+VRkVuNcOalEmNZW95RFCrZw2QJ4ZWaoqRisuysUOVJg69eruIrVFdd3pgbB9gYoOfu8qqr+dLfokG2/1fuMz/mDBhvBwCmxHsH236zizr0l/pK+GQOB3SxV
NLYFGEK3Q/18gHWik+5KNEHzHGsvfophv0wMkzOraj2I79ums7A52GFLRm4/mrCka7Ezb3lEni8krAI7D+diY3/WaINTmJEPYjKJLmI9TgB/QhZ5SjfI+eLB/ns6NveF4YTYD8DAYsggSKZTf8og2Ww6DHIiRocNyDOAuuhBNC5igjj2Mx5N8YCQrvaRNMAgyCxZGWMSrm0JFpmgCntGjZeRKEZZR8PG943gcECxYUtSRc7jfjwPY6lCrYz5Hl9tkpoYIklGM8iF2elr4L5P3qbVGJgG18qXxo8MiB1i7MuIQtsFhYho8+ZEC6/4Igb2wiZqN6ZuIQeRdTWUixyx2IHB+jPfFYZRc1Ejr3qwh9tdL1dGrMqWtRcbpRlg3RuMFaLUIn2mrCYd4j0exHOuWO5lK1t9jD1x64ds5dUXLdblFUG8GkG8o8Dds2JcpVh8G9QcieTmQey56mUdDWdaE9g+iOp95F7AJswM5JeNGluuUIkTL9OkYIh9sc7HG59kohbyObPE3uUlDdAYccQju3ab8+rbAtlzGdtKhFpzpE7rKJzDh47g2vRebdRG48RSJu603l1pwsSFjrRNlc94Drehtqd+XesrdNKyusqszhTAU+3ka2E2QHYIyRSdEuToJAS3LJKSrbBtXrQEdYgY6U6v2IAiLW7xY+/nRNNeoMcRLx7rOx8DKaIdY6yoorwnrP2yhybjSST6M6I6cANqIHR6LmOmpVQfoxFRDpwprUxb8IsSP9xxagpggS0wvuOORBq9TsdqSeBwBNHIj7GMfHOxVxYamsLJyu1FcSH2qYicrRBtiEBdvYAYK2mwEOQJ0exgEdVyg0UC+W6SwdzNGnwtGJqVM3dXWC7F5iBmrt1C8ly21Fm5pBLTiq13NVJ9fWw5Sr3IatOrLSWtgqUNZnWeCPeuPYEBuOTEBd5EmVAa06ZX8FA7KUeUtBlLYuf9yLDBcl1tow4kpWePWRl2cpSIcUC+wowzWGFTT9CN6RJ8u7liRGW1eGaWbXatSCRE3Mn+lCdKt1xUC2aAp0+VtRJxRHo9+bQVahIMIDCqTTBaloi2VwZz6GLFxoyRowTjY13x5JaKa52MlTZM7SEkCEhBoR0Fzhi27Tgeo2vSkoaqPTnbinVNbHrtNDH2GYfYNZDKSFr4sMGGYu8zUbyzsgTWstzJwajyOe5A4n57GNS26NbRLFZWaYtnwla1Mh37BpMJ1gzzcSRfqgux5EG0ITuH6FW2JXiVx3dqTSljZlxRyNCOJFNcc7M90rR7a0XAoTEmQJhALjYqBVAinmwdroC8s9skpAQIGuXCEM1Rf2diO2ROj7KoINU0JzuOeL+hf7TWHL4bImFMtt4q9XJBzI6yHkY8SgX/iliXWMAJudsQltaxltOyP8XaZiiefC+SNmQ03Uxu74kVS0mkRY/7QbOIo73tq1GWSWu55LGiM1iI/Sl5yjnY4mE3C0sviURu1oHzMJeOreSFywV+DD2OGBT3U9l9iPvVhjP2OUGbTRA7PrVs3DqMb0ExPGEPVnEOEUYCwRaDliknzbrIOJnZ89qRlGBmIagLDhe4fNOFjcUSLpPrtZvHGSo8v61j7HT1xb0tZFsZbfc7osWqNDutV0SyyEie8wyG7pplx1eMAIbSHN+tLodM3OEnj7T5vuuJ8z7we+Sv2Ir7lwbrK7W4TFM7CJZ5JgSuDqK0XCR5JClibmSHqg4G0SadFs2cM7LUslPiK0qA0f4xCXKDx0NYnuAjFNxLXCMzu2yodB/1En1xuK9mR1X9xinepghJhPEGGb9R8wPjl+d9O406xo88VXwt817Q24nszrH2asNfALMtmUM12A2258lcBdVcGQ1ap8yybO5kyXQ+n9vJ/TiZr6aTWZztFs5MvS8crxwh1pvFvWWzG+quNh7OViO9PxkoZh73BnkZHq6NtpjCwg92fDW+tFoipWFYyByJZjKdYqhIwIV4+uXIJ6+JDiuj9Ub5gozf2HRnImJQaiqz
wHFB2YX76s1IrI3zDkb7feAIoNKIZghzVSRm5lzADYmcVJxdR5XkrZR4Or1sIncPnCqDgBhtbHrYq5TVYQpAOxdDjHJ4wirq0oyAgnFItG6HQoo0i8PKvUxnfWS5Gz5MDXCpsjDZciIn4r/WZI2Toh6UuJ+3etRMwZojTkRANHZoEOselpvrpXFQtWDT8FCC8sFJku/Fk8xABfskyfaaRflOLKPuk7JXKcK1XSIZrdTctS+dLTMIhgFRLqwDLLHGlN/Urtfp2NMZlIafTGI0vLIXcF9v6sksLQ0tGkVgBOpajZG/mnZcpYwIlOqXDXK98YFDhhanegjLGnAkOggrMbM9CSNDQ9OsbN+ZgkEfSt75cmQYNR5Yh3UXhRlvZZgPw5ODu3iI+683EQnSOnAdxri/h5vAEy8rhL36TnMT95oE8PswMABwGkaLx450GmdnbP3XBG1b7vAC/c5PVjZwdPYxbjqvTy32Y1DpdvxrGRZ5HIXXxLpkEM335/2056HBRqqsYz+CjjFnZTTfYpTdl6JlzpIA0oo8F/DKvoc1w3y/9euNdSEDPDSSnq+gVk1w07kpx8/2Yu5KnqhjlPrtgXtnttY1JJFWscqDEqpI37kpu5d7irDO4RaqjR67a8HmgogfLK4Zl5gBCYkWFXbCRqlewrOla6G+wMCKFcUrrzaTTC5rveP78jQ/SknH5F1d12EwzuXnY5arjgtuPLtHweL44F+miw4B6EQwdc4mhvt+YzXqIkzcrr/xJ2cju9TtUmMwwjDjKRqUonMuU458vNQ5GS6nACOSjVi6Z+7I+J5xOQoje0fCE/rZQMGeFwKPlF3Sr2Pgc/ZFPFv2sGNcj6cWDuBS4yWvH2vFAhKyR69qPg0WUnbuLOtZr0czIy6V/7+9K21WFsnSv6Y/dgU75McERFxBrwL6TZZEuYjsKL9+zsHqmqqpqpiKmJ6O6mqNGzfeqwhJLuc8yyHfV+W2pnQKlGpCq/yN+PzXuJv0jAH1jCpGWUhhSpfUYX7/NulmzHnUZWXX45Ka5/eOf0mRE5zsidunB/4ieiIhTbwxh9X+ncCtt/JrVy7PWNeOe8CCDq/ucLEgblFJUrG1OOkWF/PEMYJySD+NZrAO8ge8f5aS2plIg3Yd3tPUesnyBdC2uNN41hnMnyV1r7LHDWKA2GbSIyC3LgGkQxu11cQDmU0e3YlV7usJCMRIlfStbjxc5EfPl6wGc0McrzdCfVrXV9SD0sRx2rDoGDoTjwy6xzRVjD6z95A/Uc1AwSOlG3NeQuvhjw279NvMxoXGX5cesi5PHbt0a7usUr67DDA6bcTrVsJeI6K8RQ28WaGEATmV74i6R10oTXJ+KmS4AuqjzVhlXcgGkW4IrtEK2KcZaSWzJyYjrIvn+uTAejxthc4OofuyGFeTxHbIMzvvMca73nc5/WEelhDPuoyXNbdktYnCRIKBC3BZSpu5Epj8JEdmoQjEuF7g0TnHJa7gmp7EBCTY2xAYn4s+sHZIyQlyb6qn4l4SE1cMdUVzVWTZTi/cN/bET07IDJe7STlJXFMhg5wAt74itw6DHXXMpcR8wzYfYgJru36gZ2jIgraol/v+P1WJ+HO1yazjyIbR1jbBJMXqk2LR34ak/9vk0fGpByx7bpD19Y5r5hrZ9eT6pej6oR63gWNVPdX9A3q6YdnBHETeZF/JSY9mELuLjfJg8/XLPgw8zhBrUl+OgQc0ot9BK1KTF46i10SW8CRwO+mcSH7hc03kMvNkDNFN0ge31k6F99D0SXbDsy+nsweK0gRZK8+wV9VlMVakmQoFMFj6DibeFBqLEmDGTzEH0xPxGZa7RC1DFcMuyzelwSgWONdIW1HrwDDjhttEQeZkcS6v3X3XGPQoiDFt66spveYCryy+9ZO9dKu7jN22waXXxNg7yuKuu8m2SDG3WYGnxrAUc4hdifwtZOgr6ATI/yVCgCQg55OYl5JEXNv7hsq9Fwh4G7gqEe5XIj9CwKBD
fSYP7dmr3FRg8Z2U1uvJxdQ5j0CT19l0/KgVVdsOo071ELn8oXIhnmkLggFOuJV6atLQVl5d3T9xSMW9kvZet+t5rHuJvIJvMTvkDjB0azMf4Ph7HLi8OMmptyurah71s4IzB6qSfqwkzDOrSTcXjOUZZTaWBBW6INpVPGIErsb2iXl0MFPJLdd8QCr2HhJOFQ485I4+DDmsIKCFNycwMkg2pbBrebT+pC3VB+mAtRz6mfWoSTBIvSjLl3bNPDOFLwaWRY54rRhTL5x93In5Ka4vKgvKBuKp6FS8HIkxpnf9MpmoSRm07XGA+TS+IAfUiYwiia0nXS8XMZMFIT0Bz/+SuspWLt3G1nj1KMbHqVXf6N0dv5U2RB9YPxKlcjz81yuP0NkpyVn0CvR8gjZ0w8ob6Vr/8vhY3NpSqgoBUfKTMnl9IeLRmZlgrq4R0q4ewUDvRNFCVzbF5q6ucPgkQGlPjz85W1srVQ/fcjWa0KDtpUf4DIinvTD+z9Mnf7anVp2IsuxGE7UFusqqcPKcNwaJ90rHBD40qUPTCYdvbaKv+Vr0MkFp/UEfdu7rLfGVZfI48GEQXp5Hak/qqeii1vFFmrBMb+LkmOYrmAd82zUAa4Mw9MxhYXFPyJfUasWLu/uxMgvR/pi56rE/YidWfeGp4W41NeKYisNBAhBYCkLjRUtD0jdmI2axQodOKfu654+ILQ8nNg5etv2OASw4yoxrFNvkT8JUOgyRSEwbquo947KXSwvZjOPzeySRJ1zeDlZ/qUTPidtS4CBXyoFHNQ5ITu+FXLbYGXf5KdFq3YfMxLQuAvYVvraBR0jYh4UidAahG3135qeqBvfqymJLnIgSutM3qlJX7N3TDsnF8OaNMENfOX9gF/uZit+9wpO2DAAO2KsAIxLB4Dk3k2qsL3L+rVPTIAM51lWGDom1MJObWhVJV/pOGlFdM3I+JHX2iCwW7hQ2ti2VDFOKRyDFfHfv2yPMO/7Y+XcItNIt6qpCqRMhuYhktbGBYlSXZZcCF9Nuj64uhALXVCvuqTiE5BRUhte3WJF4WiSVolTTDLdmMd/2gr3s6ZyuRF4gVcQD8nRQwTmq1cX1MMjoEEVn461Tehb2TypyOXEcbU1POnbIaSZQMeCOiHbw70e0JpAQy2LpT6u37ibfXUAJiHdLEkAKoF7xVInXIc7TlcIHNlqmHN2sEI/4oiBrO4yHgGadbQq8ITUEeb2uzOlMMq72lt2rKfLVG4CjbTHJKEA4U5XlFuTJRroHrapGiC0BVO+HhMLq8LZ4gpt9EJueLwg/BK65Mw8QplVNEgBChrK+gzFcmuuqnRRnawfcFwUaeQX5bKputFgyH3Zal9wX9LF61FNkF3XrRaYsEdhzZN1LvJMzpDTvO56FTjR3s62ElXa35xMnwAtT2zVpmXEejvTQG/Mfc5p1Il6gSYuUbtMxm3IJ/JhrXE67jlne7ZQMWAUhY2wJ3f19zLrusoF0YozCDjH6rFBVqYKs6iqGKtTaTXKwy9y0pCccphFWOTZBpPmpY7P8tsEzFp6kJum27R/f1Pt+0NQOlCJC/ocLxtpKrq8Hb9fD8WoNwKCWGLuhXsIoJA/Ew4uWQNJDfw2SuBsoL8fk10+AnDXgRqwkkwX8/7NhBKZzHVIuSV2alSIT1tGMdYFkP6g34dWTtA0wbLKZlZb58O+Oz/4abcJoG/JD0rAJPOrNbDNVklHJDXrBnN6izttvBaa58E1+iIUvaHkctj3gSXe+/vn38jc8xVnxj+NY/wVzgzkF4MLFhAtFDnjxHBiNLp4XdDFp+mRMSfhe8k62l0wNrkrhqqUW965s8fHGVgf7RCcUzI3IsCZGxAG+Tbd57D2HEJPAE0VA9OSs56tS77lm2HRcrF6brwHO1iRVBkti7/oH1FEKXQJ2de5dccAVifC1Dp8dMRLzWBxyhWpGh2XAZ5cmY4VE8dgjVZJsDhjhDjkiZBh2cZehdMOVvKPS4AtbczDtKosa
5kFg19tEZNmVKYPiFyIb7pFFF6p6lyYGixHCK5XaXhctYBUqoKthx/gNJLUGSeIio3Wfe5DtN/aLdPVZId0NGG7CKa1a5e3sXYf77W/fqRtRk+Y22JW98qUPkFO7sTLe1wP6TslxnY8D15ibjkamHoWHqSTGYBvxkS3H25jcc34vBrW0xqoim8/Ns35ydxi4BCI4iZkbshTSVSjy73rcWVKJ2qBSiDmziCA+W00OYyBg8VGvWodUm4qw2xQwI3nwHBoJLvP2uy62B+NO1tHkBu7dMlEqVsuKaC5bujBke1VP9XGpzpRyUiv9SunyNRL1c71iYkoGp/ftMJCemgPMfcWL+iUViucbr3RtzVrdxY6pwgqHGCZ2j1JOTjpteqZkcR+iLT1g/YGpAxbw9gDhl4ymSa1v5+71raCSblYUja2m35Q8juEcdROUn3LVZ3FQV0IncIeB+iuA220TYnVRd1UeQX6PUYdtfNpQxUXQixc9ZWl/uYbVjUF0PtJmpvRhPwx27uPN44p4Ar1yq2hyqubQS79UzeiD9JHoykAzJN7iv+Qm1bcbFWf/bp6aIn8W+OTZbzOY56+XnXDdnt3IEPZ0MNAzWwVz9nwwdAPRBsYStmuod306p04ojF19kgsVb++tJo87xwREoQCW6nEMhkK7Wny5Lgfy4map7SuNW8LkZ6+bXZdW/0CNU/e2/miICfTz2skEbt+cJfuO3Cozdia1Jn2aC4artJ3eRZFRs6wxOMoSYguRG0/UX2vre8fmw4p0lR/RyKBRXSiZttte3irpeR4fJwdtEIx1OoMbdKS6rkYt2iq3sMxvomDwbIY1a99inhNcH0HC7dvzsIhtVEFJ3Y1OcQt0S3nxO+AF1BaAibwdu6RTuCwM1CdcLXrUzoTErI29T8VXQcKqjSHbL3sAXmcYde3pToWUfZ1XqT3QIUdxrgnvqjYlai9pjq9wZ1BnUr97LKTlQp3wYbHeHikqcI3SiDA565/4aqI81rkI7FKncRBATPDZ1lYlpsiKXK9tfac7MUTxtKuDSm0umoqBcYk4INWl/SKpZT7pq0z1m4f7xh5ZyHbr7A5xSh2NwMt4nJZAIPU4SLDO33ra1bUfj6MmVUcHxsYcNhZQKv4c2P37FC982KiJWK3B3Lo8hxMEuzsxh3tS+0oreoA8Z/93x+NP+NKzGy4Aq6wWT+97fnfjyaXTl/ujPKu/l2ma4qOM+KP/zhObv/Fc5+/vQsUR+QeVlwRNfv/+5Z5BsqL8oAi/eq5Tln8QNUnh5R9/k18/5fk7h/zTH/r8A5u6fB76/FjAHwv4D70+FvDHAv5YwB8L+GMBfyzgjwX8127TxwL+WMAfC/hjAX8s4I8F/LGA6ccC/ljAfyp89tdo08cC/ljAHwv4YwF/LOCPBfyxgP/Q619uAavcv5np+1u75783gUZXFD5h7z2Y/3un6R9t05/tPM2hUfr39w7VdfSLg98eLTq0OA9/252dvNmfnNmffNmfX+B/3Q+bV8vnrzfDjpPi0SYNfH5/xAn2RZNAA362zfX7Jt9H/4k3uv7HZtPC/3Cr1f+veauSH8Rf76cuaP+cTahRlXngSP302Rzu/LqZBkmc/Rc=</diagram></mxfile>
2203.16001/paper_text/intro_method.md ADDED
@@ -0,0 +1,79 @@
1
+ # Introduction
2
+
3
+ Modern depth sensors such as LiDAR scanners can capture visual scenes with highly dense and accurate points, expanding the real-world applications of point clouds to traditionally challenging 3D vision tasks. However, while existing deep network architectures such as PointNet [@pointnet] are capable of consuming these dense point clouds for downstream tasks (*e.g.,* classification, reconstruction), it is standard to downsample the initial point cloud to reduce the computational and memory cost, especially for resource-constrained or real-time applications. As such, extracting a representative subset of points from raw point clouds while maintaining satisfactory performance across various tasks is a key problem.
4
+
5
+ <figure id="fig: teaser" data-latex-placement="t">
6
+ <div class="center">
7
+ <img src="Images/teaser.jpg" style="width:95.0%" />
8
+ </div>
9
+ <figcaption><strong>Overview.</strong> <strong>Top Left:</strong> We highlight the points sampled for classification (red) and reconstruction (blue). It is apparent that classification concentrates on more generalised features across the entire point cloud whereas reconstruction focuses on denser aspects for optimisation. <strong>Bottom Left:</strong> We evaluate the classification performance of two frozen PointNets on 16 sampled points from SampleNet. A large performance gap is observed despite the two models (one adopted during SampleNet training and one unseen) having an identical architecture, implying overfitting onto the model instead of the task itself. <strong>Right:</strong> An overview of our meta-sampler as a pretrained model that can rapidly adapt with a joint-training mechanism.</figcaption>
10
+ </figure>
11
+
12
+ Early techniques usually adopt Farthest Point Sampling (FPS) [@{qi2017pointnet++}; @pointcnn; @wu2018pointconv], Inverse Density Importance Sampling (IDIS) [@flex_conv], or Random Sampling (RS) [@hu2019randla; @hu2021learning] to progressively reduce the resolution of the raw point clouds. Albeit simple and universal, these sampling schemes are inherently heuristic and task-agnostic. Recently, Dovrat et al. [@learning_to_sample] and Lang et al. [@samplenet] explored a new domain of learning-based, task-specific, and data-driven point cloud sampling strategies. They empirically proved that leveraging the task loss can effectively optimise the sampler to preserve representative and informative features. Although remarkable progress has been achieved in several downstream tasks such as classification and reconstruction, there remain two critical issues to be further explored: 1) The learnt samplers are shown to **overfit to a specific task model** instead of being generalisable to the task itself --- this causes a significant performance drop when adopting another network for the same task even when the two architectures are identical (as exemplified in Figure [1](#fig: teaser){reference-type="ref" reference="fig: teaser"} Bottom Left); 2) Training a sampler to fit a particular task is both time-consuming and computationally expensive, which counters the original objective of sampling to improve efficiency.
13
+
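As a point of reference, the task-agnostic FPS baseline mentioned above can be sketched in a few lines. This is a minimal NumPy version for illustration, not the implementation used in the cited works:

```python
import numpy as np

def farthest_point_sampling(points: np.ndarray, n: int) -> np.ndarray:
    """Greedy FPS: iteratively pick the point farthest from the chosen subset."""
    m = points.shape[0]
    chosen = np.zeros(n, dtype=int)
    dist = np.full(m, np.inf)  # distance to the nearest chosen point so far
    idx = 0                    # start from an arbitrary (here: first) point
    for i in range(n):
        chosen[i] = idx
        dist = np.minimum(dist, np.linalg.norm(points - points[idx], axis=1))
        idx = int(np.argmax(dist))
    return points[chosen]

cloud = np.random.rand(1024, 3)
subset = farthest_point_sampling(cloud, 16)
assert subset.shape == (16, 3)
```

Being purely geometric, the selection ignores what the downstream task needs, which is exactly the gap the learning-based samplers target.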
14
+ To this end, we propose an *almost-universal* sampler (Figure [1](#fig: teaser){reference-type="ref" reference="fig: teaser"} Right) comprising two training alterations to address the aforementioned issues accordingly. First, we suggest jointly training by forwarding the sampled points to multiple models targeting the same task instead of a single model and updating the sampler through a summation of task losses. This kind of ensemble allows us to better simulate the distribution of different task models, encouraging the sampler to truly learn the task rather than a particular instance. Second, we introduce our meta-sampler to learn how to adapt to a specific task, rather than explicitly learning a particular task model. We incorporate a set of tasks, each with multiple task models, for the meta-optimisation. Our meta-sampler can serve as a pretrained module that adapts to any task through fine-tuning, while being *almost-universal* in the sense that it can be optimised for a given task within fewer iterations.
15
+
16
+ Extensive experimental results justify the performance and versatility of the proposed meta-sampler. In particular, there is a significant improvement in performance for several mainstream tasks with our joint-training technique compared to the best results from the conventional single-task training on SampleNet. Moreover, we thoroughly evaluate the versatility of our meta-sampler by adapting to particular tasks (both included and excluded from the meta-training), model architectures, and datasets. Our meta-sampler adapts rapidly to all challenging scenarios, making it a suitable pretrained candidate for task-specific learning-based samplers.
17
+
18
+ In summary, the key contributions of this paper are threefold:
19
+
20
+ - A joint-training scheme for the sampler to truly learn a task rather than simply overfitting to a particular instance (*i.e.,* a specific task model).
21
+
22
+ - A meta-sampler that can rapidly adapt to downstream point cloud tasks within and beyond the meta-training stage, models of varying architectures, and datasets of different domains.
23
+
24
+ - Extensive experiments validate the performance and versatility of our meta-sampler across various tasks, models, and datasets.
25
+
26
+ # Method
27
+
28
+ The goal of this paper is to develop a learning-based sampling module $f_\theta$ with trainable parameters $\theta$, which takes in a point cloud with $m$ points and outputs a smaller subset of $n$ points ($m > n$). Apart from the objective of SampleNet to learn task-specific sampling (*i.e.,* particularly suitable for a single task such as shape classification or reconstruction), we take a step further and aim to propose a universal pretrained model, which can be rapidly adapted to a set of different tasks $S_T = \{T_i\}_{i=1}^{K_T}$. Formally, we define the ideal adaptation of sampling to a specific task $T_i$ as capable of achieving satisfactory performance by integrating the sampling module into a set of known networks $S_{A_i} = \{A_{i,j}\}_{j=1}^{K_{A_i}}$ (trained with unsampled point clouds of $m$ points to solve task $T_i$). We split $S_{A_i}$ into $S_{A_i}^{train}$ and $S_{A_i}^{test}$ (*i.e.,* the task networks used during training are disjoint from those used for testing) so that our evaluation of $f_\theta$ is fair and does not reward overfitting to task models instead of the task itself. Note that while $S_{A_i}^{train}$ is available during training, its weights are frozen when learning our sampler, as suggested by [@samplenet].
29
+
30
+ To achieve the dual objectives of high accuracy and rapid convergence, we must first carefully evaluate the best training strategy to better learn each individual task, and then design a training strategy which is adaptive to multiple tasks. We build our sampler $f_\theta$ based on the previous state-of-the-art learnable sampling network --- PointNet-based SampleNet architecture [@pointnet; @samplenet] --- and then introduce our training technique in a bottom-up manner.
31
+
32
+ For an individual task $T_i$, we hope that $f_\theta$ learns to sample the best set of points $\forall A \in S_{A_i}$.
33
+
34
+ The conventional way of training SampleNet uses a single frozen network $A'$ as $S_{A_i}^{train}$, defining a sampling task loss $\mathcal{L}_{ST_i}$ targeting $T_i$ as:
35
+
36
+ $$\begin{equation}
37
+ \mathcal{L}_{ST_i}(f_\theta) = \mathcal{L}_{T_i}(A'(f_\theta)),
38
+ \label{eqn:loss_single_task}
39
+ \end{equation}$$ where $\mathcal{L}_{T_i}$ is the loss used when pretraining $A'$. We refer to this configuration as single-model, single-task training. As mentioned previously, although this training method has achieved promising results in several tasks, it still exhibits a large accuracy discrepancy between the results on $A'$ and on $S_{A_i}^{test}$. In other words, even though $A'$ is frozen during the training of SampleNet, the sampler overfits to the task network instead of the task itself.
40
+
41
+ To alleviate the issue of model-wise overfitting, we extend ([\[eqn:loss_single_task\]](#eqn:loss_single_task){reference-type="ref" reference="eqn:loss_single_task"}) and create a joint-training approach for a single task. Specifically, we take a set of weight-frozen models $\{A_{i,j}\}_{j=1}^{k}, 1 < k \ll K_{A_i}$ as $S_{A_i}^{train}$ and compute $\mathcal{L}_{ST_i}$ as:
42
+
43
+ $$\begin{equation}
44
+ \mathcal{L}_{ST_i}(f_\theta) = \sum_{j=1}^{k}\mathcal{L}_{T_i}(A_{i,j}(f_\theta)).
45
+ \label{eqn:loss_multi_task}
46
+ \end{equation}$$
47
+
48
+ It is critical to understand that all the frozen task models run in inference mode (*i.e.,* they add little computational overhead) and that only a very small number of task models (easily obtainable online or by self-training with different random initial weights) is needed to bring significant improvements to the sampler's performance. We further show in Section [4.2](#sec:single_task){reference-type="ref" reference="sec:single_task"} that a very small $k > 1$ allows the sampling network to generalise better across $S_{A_i}$, as $S_{A_i}^{train}$ becomes a vicinity distribution of $S_{A_i}$ rather than a single specific instance.
49
+
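The joint loss in ([\[eqn:loss_multi_task\]](#eqn:loss_multi_task){reference-type="ref" reference="eqn:loss_multi_task"}) can be sketched as follows. `ToySampler` and the toy task networks are hypothetical stand-ins (the paper uses SampleNet and pretrained PointNets); the sketch only illustrates summing the task loss over several frozen models so that gradients flow into the sampler alone:

```python
import torch
import torch.nn as nn

class ToySampler(nn.Module):
    """Stand-in for SampleNet: a learnable soft selection of n points."""
    def __init__(self, m, n):
        super().__init__()
        self.weights = nn.Parameter(torch.randn(n, m))
    def forward(self, pc):                       # pc: (B, m, 3)
        return torch.softmax(self.weights, dim=-1) @ pc   # (B, n, 3)

def joint_task_loss(sampler, task_models, task_loss_fn, pc, target):
    """Sum the task loss over several weight-frozen task networks."""
    sampled = sampler(pc)
    total = 0.0
    for A in task_models:                        # frozen, inference-mode models
        for p in A.parameters():
            p.requires_grad_(False)
        total = total + task_loss_fn(A(sampled), target)
    return total

torch.manual_seed(0)
pc = torch.randn(4, 64, 3)
target = torch.randint(0, 10, (4,))
models = [nn.Sequential(nn.Flatten(), nn.Linear(16 * 3, 10)) for _ in range(3)]
sampler = ToySampler(64, 16)
loss = joint_task_loss(sampler, models, nn.CrossEntropyLoss(), pc, target)
loss.backward()                                  # gradients reach only the sampler
assert sampler.weights.grad is not None
```

Since the task models stay frozen, the ensemble costs only extra forward passes, matching the efficiency argument above.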
50
+ In addition to the joint $\mathcal{L}_{ST_i}$, we also update the weights with a simplification loss comprising the average and maximum nearest neighbour distance and a projection loss to enforce the probability of projection over the points to be the Kronecker delta function located at the nearest neighbour point (identical to the SampleNet loss [@samplenet]).
51
+
52
+ Instead of restricting ourselves to a single task (*e.g.,* classification), we consider whether training the sampler over multiple tasks could lead to our vision of an almost-universal sampler. Broadly, we aim to extend the sampler beyond multi-model to multi-task, such that given any task $T_i \in S_T$, where $S_T$ is a set of tasks, a good initial starting point could be achieved for the sampler. In this way, adapting or fine-tuning to a particular task (which may even be beyond the known set) will be rapid and cheap.
53
+
54
+ To tackle this, we draw inspiration from the MAML framework and propose a meta-learning approach for rapidly adaptive sampling [@maml]. In essence, we aim to utilise the set of $S_{A_i}^{train}$ to mimic the best gradients in learning a particular task $T_i$ for meta-optimisation, such that given any task $T_i \in S_T$ or even beyond the known set of tasks, the MAML network can quickly converge within a few iterations and without additional training of the task networks.
55
+
56
+ The joint-training procedure discussed in the previous section motivates that a particular task is better solved with a set of task networks instead of just one --- we transfer this idea to the meta-optimisation such that the sampler is adaptive to a number of tasks instead of just one. Formally, we first optimise the adaptation of $f_\theta$ to $T_i \in S_T$ by updating the parameters $\theta$ to $\theta'_{i,j}$ for every $A_{i,j}$ through the gradient update: $$\begin{equation}
57
+ \theta'_{i,j} = \theta - \alpha\nabla\mathcal{L}_{T_i}(A_{i,j}(f_\theta)),
58
+ \label{eqn:meta_inner_update}
59
+ \end{equation}$$ where $\alpha$ is the step size hyperparameter. Similar to MAML, we can directly extend the single gradient update into multi-gradient updates to optimise the effectiveness of $\theta'_{i,j}$ on $T_i$.
60
+
61
+ With the inner update ([\[eqn:meta_inner_update\]](#eqn:meta_inner_update){reference-type="ref" reference="eqn:meta_inner_update"}), we then follow the meta-optimisation procedure through a stochastic gradient descent: $$\begin{equation}
62
+ \theta = \theta - \beta\nabla\sum_{i=1}^{K_T}\sum_{j=1}^{k}\mathcal{L}_{T_i}(A_{i,j}(f_{\theta'_{i,j}})),
63
+ \label{eqn:meta_update}
64
+ \end{equation}$$ where $\beta$ is the meta step size hyperparameter, which can either be fixed or annealed. Note that we apply the single-task loss in the inner update ([\[eqn:meta_inner_update\]](#eqn:meta_inner_update){reference-type="ref" reference="eqn:meta_inner_update"}) but sum the losses over all task models to represent a task in the meta-update ([\[eqn:meta_update\]](#eqn:meta_update){reference-type="ref" reference="eqn:meta_update"}). Section [4.3](#versatility){reference-type="ref" reference="versatility"} shows that our meta-optimisation design is sufficient for learning tasks with rapid adaptation. The simplification and projection losses are also optimised at this stage; however, they are updated directly rather than through the meta-update, as they are task-agnostic.
65
+
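A minimal sketch of the inner and meta updates above. For brevity it uses a first-order approximation of the MAML gradient (the meta-update in the paper differentiates through the inner step), and the linear "sampler", task model, and hyperparameters are all illustrative:

```python
import copy
import torch

torch.manual_seed(0)

def meta_update(sampler, tasks, alpha, beta):
    """One meta-optimisation step over tasks S_T (first-order approximation)."""
    meta_grads = [torch.zeros_like(p) for p in sampler.parameters()]
    for task_loss_fn, task_models, batch in tasks:
        for A in task_models:
            # Inner update: adapt a copy of theta to (T_i, A_ij).
            adapted = copy.deepcopy(sampler)
            loss = task_loss_fn(A(adapted(batch)))
            grads = torch.autograd.grad(loss, list(adapted.parameters()))
            with torch.no_grad():
                for p, g in zip(adapted.parameters(), grads):
                    p -= alpha * g
            # Evaluate theta'_{i,j} and accumulate the meta-gradient.
            loss = task_loss_fn(A(adapted(batch)))
            grads = torch.autograd.grad(loss, list(adapted.parameters()))
            for mg, g in zip(meta_grads, grads):
                mg += g
    with torch.no_grad():                        # meta-update of theta
        for p, mg in zip(sampler.parameters(), meta_grads):
            p -= beta * mg

# Toy instantiation: one task with one frozen task model.
sampler = torch.nn.Linear(3, 3)
task_model = torch.nn.Linear(3, 1)
for p in task_model.parameters():
    p.requires_grad_(False)
batch = torch.randn(8, 3)
tasks = [(lambda out: (out ** 2).mean(), [task_model], batch)]
before = sampler.weight.detach().clone()
meta_update(sampler, tasks, alpha=0.01, beta=0.01)
assert not torch.equal(before, sampler.weight)
```

The double loop mirrors the double sum in the meta-update: inner adaptation is per task model, while the meta-gradient is accumulated across all tasks and models before touching $\theta$.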
66
+ <figure id="fig: pipeline" data-latex-placement="htb">
67
+ <div class="center">
68
+ <img src="Images/pipeline.png" style="width:100.0%" />
69
+ </div>
70
+ <figcaption><strong>The pipeline of the proposed meta-sampling.</strong> The illustration exemplifies the pretraining with multiple tasks through our meta-training strategy, then fitting onto a single task with our joint-training mechanism.</figcaption>
71
+ </figure>
72
+
73
+ We describe the overall training pipeline of the proposed meta-sampler (Figure [2](#fig: pipeline){reference-type="ref" reference="fig: pipeline"}) as the following:
74
+
75
+ **Pretrained Meta-Sampler:** Our pipeline begins with training a meta-sampler. First, we take a set of tasks $S_T$ (*e.g.,* shape classification, reconstruction, retrieval) and their corresponding task networks $S_{A_i}$ for every $T_i\in S_T$ (pretrained on the unsampled point clouds). Next, we freeze all their original weights and perform our meta-sampler training as illustrated in Section [3.3](#Sec3.3){reference-type="ref" reference="Sec3.3"} to obtain a pretrained meta-sampler.
76
+
77
+ **Rapid Task Adaptation:** The meta-training attempts to optimise $\theta$ to a position optimal to learn any task $T_i$. Therefore, to adapt to a particular task, we can simply take the pretrained weights of the meta-sampler and fine-tune it with the joint-training strategy as illustrated in [3.2](#joint_task){reference-type="ref" reference="joint_task"} along with the previously proposed simplification and projection loss.
78
+
79
+ **Disjoint Task Networks for Pretraining and Training:** Realistically, one should be able to directly obtain a pretrained meta-sampler without the task networks and fit it to their own networks. To mimic such real-world constraints, we ensure that the meta pretraining and joint-training use disjoint sets of networks --- both of which are unseen during testing.
2204.03688/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2204.03688/paper_text/intro_method.md ADDED
@@ -0,0 +1,29 @@
1
+ # Method
2
+
3
+ The images in DAD-3DHeads dataset are anonymised without additional metadata. The results of labeling in the form of 3D head model do not contain any private or sensitive information. The data gathered is not being used for identification purposes or in connection with any other personal data.
4
+
5
+ []{#sec:dataset label="sec:dataset"}
6
+
7
+ As stated in [\[ssec:data_statistics\]](#ssec:data_statistics){reference-type="ref+label" reference="ssec:data_statistics"}, along with the dataset of images and annotations, we provide additional information per image such as gender, age, illumination conditions, image quality, pose, presence of expression, and occlusions (see [7](#fig:histogramss){reference-type="ref+label" reference="fig:histogramss"} and [\[fig:dataset_stats\]](#fig:dataset_stats){reference-type="ref+label" reference="fig:dataset_stats"}).
8
+
9
+ We use multiple sources to construct our dataset, among which are the WIDER FACE dataset [@wider], the Adience dataset [@adience], the Compound Facial Expressions of Emotions Dataset [@fce], WFLW [@wflw], AFW [@afw], Helen [@helen], and LFPW [@LFPW].
10
+
11
+ We provide more visualizations of the results of DAD-3DHeads annotations compared to GT scans on neutral faces from NoW dataset[@RingNet] in [8](#fig:now_scans_vs_GT_big){reference-type="ref+label" reference="fig:now_scans_vs_GT_big"}. This is in addition to (and in the context of) [\[fig:now_GT_vis\]](#fig:now_GT_vis){reference-type="ref+label" reference="fig:now_GT_vis"}.
12
+
13
+ We provide here more details along with visuals to prevent possible misunderstandings around the labeling process. The full video is available via [the project webpage](https://p.farm/research/dad-3dheads).
14
+
15
+ As was already mentioned in [\[ssec:labeling\]](#ssec:labeling){reference-type="ref+label" reference="ssec:labeling"}, the annotators do not explicitly control or label either the 3DMM parameters or the blendshapes. They also do not have to disambiguate between identity and expression. The annotators only have to \"pin\" a 3D head model - a mesh - to the head image. They do so iteratively, with the initial generic mesh re-optimized after each \"pin\" is placed (see [1](#fig:iteration){reference-type="ref+label" reference="fig:iteration"}). The nonlinear optimization is indeed performed over the shape and expression parameters, along with the pose.
16
+
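The under-the-hood fitting step can be sketched as a small nonlinear least-squares problem. Everything here is illustrative: the toy linear blendshape model, orthographic projection, and array sizes are assumptions for the sketch, not the actual DAD-3DHeads parameterisation:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy model: mesh vertices = template + shape blendshapes, then 2D projection.
rng = np.random.default_rng(0)
template = rng.normal(size=(20, 3))          # 20-vertex "head" mesh (illustrative)
blendshapes = rng.normal(size=(4, 20, 3))    # 4 shape/expression components

def project(params, vertex_ids):
    """Orthographic projection of selected vertices for (shape, translation)."""
    shape, trans = params[:4], params[4:6]
    verts = template + np.tensordot(shape, blendshapes, axes=1)
    return verts[vertex_ids, :2] + trans      # drop depth, add 2D offset

# "Pins": annotator clicks tying mesh vertices to 2D image points.
true_params = np.array([0.5, -0.3, 0.1, 0.0, 2.0, -1.0])
pin_ids = np.array([0, 3, 7, 12, 15])
pin_2d = project(true_params, pin_ids)

def residuals(params):
    return (project(params, pin_ids) - pin_2d).ravel()

fit = least_squares(residuals, x0=np.zeros(6))
assert fit.cost < 1e-8                        # pins are matched after optimisation
```

After each new pin, such an optimisation is re-run, which is what makes the mesh "deform" under the annotator's clicks.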
17
+ <figure id="fig:iteration" data-latex-placement="h!">
18
+ <img src="images/iter.jpg" style="width:45.0%" />
19
+ <figcaption>The mesh is deformed by the under-the-hood optimization after a "pin" is placed on the ear.</figcaption>
20
+ </figure>
21
+
22
+ <figure id="fig:holes" data-latex-placement="h!">
23
+ <img src="images/holes.jpg" style="width:45.0%" />
24
+ <figcaption>The rendered texture here has "holes" due to the occlusions, but can also be "torn" if the "pins" are placed poorly.</figcaption>
25
+ </figure>
26
+
27
+ After each step, the annotators can inspect whether the fitted mesh aligns well with the image in several ways: they might view (i) the reprojected set of selected landmarks that correspond to recognizable features on any face or head (see [6](#fig:191_landm){reference-type="ref+label" reference="fig:191_landm"} as an example), such as the contours of the eyes (eyelash line), lips, etc.; (ii) the image rendered onto the mesh as a texture given the fitting - this helps to inspect whether texture \"holes\" appeared due to poor labeling or occlusions (see [2](#fig:holes){reference-type="ref+label" reference="fig:holes"}); (iii) the mesh itself, visible in $360^\circ$, to inspect that the skull shape is not deformed (see [2](#fig:holes){reference-type="ref+label" reference="fig:holes"}, right). These measures help to partially overcome the limitations introduced by the absence of camera parameters for the input images, and thus the possible ambiguities caused by the effects of perspective projection.
28
+
29
+ Another important issue is that our annotations consist only of the 3D vertices and transformation matrices; we do not interpret the resulting mesh w.r.t. identity, expression, or any other feature that is ambiguous and ill-defined given a single image as input. The concern arises because there may be cases, e.g., faces with extreme expressions, where it is impossible to perfectly recover the neutral-expression head shape from a single image. Indeed, 3D scanners would provide an accurate 3D head model that manual annotation cannot guarantee. However, they operate under controlled capture, i.e., *3D scans are captured in-the-lab rather than in-the-wild*. We provide the community with complementary data. It has a *known* trade-off in accuracy, but its quality is sufficient for many applications that operate in-the-wild.
2205.09963/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
1
+ <mxfile><diagram id="DizzCbV0eDCYzfVY5jJ4" name="Page-1">7V1tj5s4EP41+/EibPNif+xur+2XSpX2pOt9WnEJTVCzISJsN7lffybBhJdJGsDYwbBSq2ASAzPPeDzmGc8DeXrdf4797eprtAjWD9ha7B/IxweMiYP4/2nD4dTgOs6pYRmHi1MTOjc8h/8FWaOVtb6Fi2BX+mISResk3JYb59FmE8yTUpsfx9F7+Ws/onX5qlt/GdQanuf+ut76d7hIVlkrctn5xJcgXK6yS1PsnU78689/LuPobZNdbxNtgtOZVz/v5tSwW/mL6P3UdHxs8ucDeYqjKDl9et0/BetUqkJiJ9l8unA2v+U42CS3/MBz3dNPfvnrt+y5sztLDkIQ/Ca36cf5W7w+PMb86QLe+eP7KkyC560/T8+9cwjwtlXyuuZHiH88CiBYZEc/1uH2i/gcrtdP0TqKj92TT8c/3r4IY67HMNrw9k0Up6J63CVx9DOXP0lbTkCxZg4/qD9vJoJfQZwE+0JT9vyfg+g1SOID/4rAqcCbgKmdieT9rHTHzb6zKugbZ9/zM5wt867PAucfMplfkj8F5G8/8381NfDnScoiPglHiPIIs7J0syZ/HS5Toc65lALe/phKJ+RI/5CdeA0Xi/QyoE7PirTS7qNNktkqlaOAXLhCARTVFADJn1jd5U/z272C/9SWt7c/aD4i+f+KHqyrAqCMlgRAbKsmAOwhQAKIzpzuQiDUhkC4f7EHA0Mu8cQPN2mvx9N1bdHmwEQWEeLNNINpLnAF4EQWODrchs7Lz9sYn3/g2+DIPbwMOPLHZhAesUFoLKj2djjaTgWOTt1Tkd6w6FmQUiikFP5TPkm7JLqCrvzd9jRz+xHuU2FWBXlRJTveZbhZ/hVt055c+TI/iNleHfvEAUQsBfce5IxSZ2CkiLOzmLglUHORA2Nsj0LHkNAds4XOoxi9Qid14S549JUdZkN0QaLBPky+Fz7/8yBCgPTo4z6TzvHgIA42iw9pHHjukLd8CtPbOp4vxxb42OLHSeU3x7bCr7jc48P34kHhXtLD880cjw43KG4XvcXzPPwVIy2/8DLIvytwmsrpqorjYO0n4a9yFAupK/vptyjkd1SABpq53LUQRDDjxmmXgMLIjHo2canrWZ5LKiHQ6Y6zDothZ+UahEG9CCxib4bqNyCucZJV7RpHzOVSuBWGDmT74LTXXNt3bJWG7zY1/H6MOBtOrBlznNKQQlLRXRtU+MG3IA75s+d6OQ8Js9RMS8OCx+F0fWA4HlW77DxaOGpGC1bGEq1MNW8eEKpDQKUfqUYPLrmYOpGtGr0aK2cy3PvdOHfUt2unaozVrliZ29JaK/1gVgGNTGulYEw0FmsVmvLqS6L9WS+tLzLdl492XdrJR9te2UcTC//Gwnvx0QLavftorwwmWunhZidNZg6yWf5HKyAldFZDoNShoHHQ2DcucxwKHJFOuMS0jEvXdfvFpUBCxR0pwiWqTB6R3dIfIYJmjlVAZjlmRZRwYFrnP9wjRsFXKd643FW+4qzGXTkSJpvouuE2GkicbitJPU82BUL1riNVVoBs1tIl3clCEm28rNFzwNMNg/ICHoE2PR4GW5I8TBlnNsd1uV8HVc73CDUP8jC7kb8cxbOqRmzohUZvb0cpuKiUjF0rhOrWC0glQCPXS4FKoEQLDOQOmEQvkkPoQFQlo4OBS2vgm+9Ra4XHKWqtBUN6IZNeqtZiUZVaAT2J4eQc5NWEbpO60PuK7ZEFOo547KbAKjrB6twGs8DRaf+CTVJKU43YNaqwi1VShSkG1x33L0bNdBvbiWatMJcAWqkoo7fsgqbZBJLo28wFoWjS8HDW6zDI28wFeW4DeoneROIaqNsMTKMb0uSwBaT1EreZC66DDoi33WYU0UrbZvkS/5hZ27naii868hG2+J4jh+hE2Zb1+oOJ1LAJgnUIehAEXUUQtNyZc2ZYIBEldn3xx
qdNReKGyBvJKUesOmWViDU4421A2QESPIzC5AAmgpKe+F03vxIfXm5AA58kQD0lBtTgV4/NNcPPLqHPs39DUvodu5CV0UcQ7RN9EPtInTtibtFvWKiCImfmIqd2ujnvEM/sIiP2OuZlYhVaWDI1oNeQwsI8QJCjymBp4lKIGqMeXvoKE2l4d0JZb8J4lcNNveAbiEbfgK1KEIFo2+H/SDuvsf4ErtLQRQUnkHng6uNIvIGGFCnmeXdl1gPKkGriVhSNB2akRzGv8eLslB111Ue5Gn2UealRTOSEDzYzSoKTUpkYxUT4eTd5UR0XynsNYZgiux5PUhSjjZdzRxFCC6hpin1MS4hiYrcCc/Oh2nAztKbdMAouT5qUDdWKL6M3GYrBuekmMUS7EfOU6OD2HedGqQMNiVB5oG5uHpQUnahNg2JwnrNJWVByLEVhEhSDs5zNprnWU6Ag5nZvQTy4ahKP2wqwpY9KzyATqGii74IRuG3BCFkpHnAuJDYJlUxCgqrS/fnhVMkhvQRsIHINOR4opwkN1/s1B7Xu3fnhbNMhZXm0GEg0b84v+BHj5tgzcD8rC2A1nlE6JXrI2+fKksLAMxSFGEShqn07zcv14MIbeikICY5GaSUI8Sbqfuj2g8n2aOSapkIQFxHYL4vP4ISPCwAEmBUKvZLZKR9ckEOvW9LFQ6lxSWMvW9LMs0xVSy4iCQ29aomMZQulRUvESuD90J8HQ8lvYvRIkTc3g5TP5TXVLJELSwFAbZPMMn4Gz8vnEgVpYUMi5nf3VWorlqB7q1iinJnfyOXYimx7PNx8LtR7K1hyJ6GOAJse92IePZ9L1Ph6JW1eq2sui4HAtSSjCPptuA66q5Ug46uVdONVKVECBolURlH0ZZDb1NYqweCCmlEcfRlKUV2qBGNILUaR9KXYitJKJRikMpjNU6yz9ClRSprDYBJRPG5LqBL1YZ30RdXH4CR3/2LS8NS4JoanuSYGsojOohjZrxzNGRRcCuCqKzYJmmdNDyWHgoCec0CvbRvJXEsSBQH95IAmJ21grTuLggy9VkarsURzGgW5v2oZHSvTtyCwnxVXXmcXIi6vs5OpYMbxtMz1d3J/FTPuCIZAzYyzj5oSKVrgzb59+dRcX6M0k8Lut25Gd+rb3WZSNPNOAtlTKkUdgkOvR9DJ6NVYuZR8SIleXDVjo6GxTkUJLkNp6PvFSwkHldLT7WnHeDU+eto0vtlQMO0aLxmXAoGaQkoDCeqCaztYgroMd6WWoe6Mfe/4hl5n2j5evtk7Y98//qKHAXaQV+dhTOSoOxjyMCZx1Fu9tdTMhnaM30W+3btk3Sx1sbhtLku9I3VFjRaM30leDoFILU/dAZfWTOKpy9GKaqK6mD2MinZUJ0Wr5Xq5oAOPR24MVVK00uQAZIE6GTcl2q5Som27N5XwwziKkmKwwp969TVaBOk3/gc=</diagram></mxfile>
2205.09963/main_diagram/main_diagram.pdf ADDED
Binary file (25.1 kB). View file
 
2205.09963/paper_text/intro_method.md ADDED
@@ -0,0 +1,242 @@
1
+ # Introduction
2
+
3
+ Given a graph with a start vertex $s$, a goal vertex $t$, and non-negative edge weights, we consider finding an $s$--$t$ path with a small total weight. The Dijkstra algorithm [@Dijkstra1959-ai] finds an optimal path by exploring all vertices that are at least as close to $s$ as $t$ is. However, it is sometimes impractical for large graphs since exploring all such vertices is too costly. Heuristic search algorithms are used to address such situations; among them, greedy best-first search (GBFS) [@Doran1966-pj] and A\* search (A\*) [@Hart1968-jm] are two popular algorithms. Both GBFS and A\* use so-called heuristic functions, which estimate how close an input vertex is to $t$. GBFS/A\* attempts to avoid redundant exploration by scoring vertices based on heuristic function values and iteratively expanding vertices with the smallest score. If well-suited heuristic functions are available, GBFS/A\* can run much faster than the Dijkstra algorithm. Furthermore, if A\* uses an *admissible* heuristic function, i.e., one that never overestimates the shortest-path distance to $t$, it always finds an optimal path [@Hart1968-jm]. Traditionally, heuristic functions have been handcrafted based on domain knowledge; e.g., if graphs are road networks, the Euclidean distance gives an admissible heuristic.
4
+
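The search procedures above can be illustrated with a compact A\* sketch on a toy graph (the graph and weights are illustrative). With a zero heuristic the score reduces to $g(v)$ and the algorithm behaves like Dijkstra; scoring by $h$ alone would instead give GBFS:

```python
import heapq

def a_star(graph, s, t, h):
    """A* search: expand the vertex minimizing g(v) + h(v).
    `graph` maps a vertex to a list of (neighbor, edge_weight) pairs."""
    g = {s: 0}
    frontier = [(h(s), s)]
    while frontier:
        _, v = heapq.heappop(frontier)
        if v == t:
            return g[t]
        for u, w in graph.get(v, []):
            if g[v] + w < g.get(u, float("inf")):
                g[u] = g[v] + w
                heapq.heappush(frontier, (g[u] + h(u), u))
    return float("inf")

grid = {"s": [("a", 1), ("b", 4)], "a": [("t", 5)], "b": [("t", 1)]}
dist = a_star(grid, "s", "t", lambda v: 0)   # zero heuristic is admissible
assert dist == 5                             # optimal path s -> b -> t
```

An admissible learned heuristic would preserve the optimality guarantee while popping fewer vertices; the question studied here is how many samples are needed to learn such heuristic values with generalization guarantees.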
5
+ When applying GBFS/A\* to various real-world problems, a laborious process is to handcraft heuristic functions. Learning heuristic functions from data can be a promising approach to overcoming this obstacle due to the recent development of technologies for collecting graph data. Researchers have demonstrated the effectiveness of this approach in robotics [@Bhardwaj2017-lk; @Takahashi2019-il; @Pandy2021-wn; @Yonetani2021-wn], computational organic chemistry [@Chen2020-xa], and pedestrian trajectory prediction [@Yonetani2021-wn]. With learned heuristic functions, however, obtaining theoretical guarantees is difficult since we can hardly understand how the search can be guided by such heuristic functions. (A recent paper [@Agostinelli2021-dy] studies learning of admissible heuristics for A\*, but the optimality is confirmed only empirically.) Moreover, learned heuristic functions may overfit to problem instances at hand. That is, even if GBFS/A\* with learned heuristic functions perform well over training instances, they may deliver poor future performance. In summary, the emerging line of work on search algorithms with learned heuristic functions is awaiting a theoretical foundation for guaranteeing their performance in a data-driven manner. Thus, a natural question is: *how many sampled instances are needed to learn heuristic functions with generalization guarantees on the performance of resulting GBFS/A\*?*
6
+
7
+ We address the above question, assuming that path-finding instances defined on a fixed vertex set of size $n$ are drawn i.i.d. from an unknown distribution. Our analysis is based on so-called *data-driven algorithm design* [@Gupta2017-ng; @Balcan2021-fy], a PAC-learning framework for bounding the sample complexity of algorithm configuration. In the analysis, the most crucial step is to evaluate the *pseudo-dimension* of a class of utility functions that measure the performance of parameterized algorithms. We study the case where GBFS/A\* is parameterized by heuristic function values and make the following contributions:
8
+
9
+ 1. [3](#sec:upper_bound){reference-type="ref+Label" reference="sec:upper_bound"} gives $\Ord(n\lg n)$ and $\Ord(n^2\lg n)$ upper bounds on the pseudo-dimensions for GBFS and A\*, respectively. The bound for A\* can be improved to $\Ord(n^2\lg d)$ if every vertex has degree at most $d$, and to $\Ord(n \lg n)$ if edge weights are non-negative integers at most $\poly(n)$.
10
+
11
+ 2. [4](#sec:lower_bound){reference-type="ref+Label" reference="sec:lower_bound"} presents $\Omega(n)$ lower bounds on the pseudo-dimensions for GBFS and A\*. We prove this result by constructing $\Omega(n)$ instances with unweighted graphs. Thus, our bounds for GBFS and A\* under the integer edge-weight condition are tight up to a $\lg n$ factor.
12
+
13
+ 3. [5](#sec:suboptimality_astar){reference-type="ref+Label" reference="sec:suboptimality_astar"} studies a particular case of bounding the suboptimality of A\*. We show that we can sometimes improve the guarantee obtained in [3](#sec:upper_bound){reference-type="ref+Label" reference="sec:upper_bound"} by using an alternative $\Ord(n \lg n)$ bound on the pseudo-dimension of a class of parameter-dependent worst-case bounds [@Valenzano2014-sw].
14
+
15
+ An important consequence of the above results is the tightness up to a $\lg n$ factor for GBFS and A\* under the integer-weight assumption. Note that this assumption holds in various realistic situations. For example, the Internet network and state-space graphs of games are unweighted (unit-weight) graphs, and A\* is often applied to path-finding instances on such graphs.
16
+
17
+ # Method
18
+
19
+ @Gupta2017-ng proposed a PAC approach for bounding the sample complexity of algorithm configuration, which is called *data-driven algorithm design* and has been applied to a broad family of algorithms, including greedy, clustering, and sequence alignment algorithms. We refer the reader to a nice survey [@Balcan2021-fy]. A recent line of work [@Balcan2018-pe; @Balcan2021-kv; @Balcan2021-fz] has extensively studied the sample complexity of configuring integer-programming methods, e.g., branch-and-bound and branch-and-cut. In [@Balcan2021-kv; @Balcan2021-fz], upper bounds on the pseudo-dimension for general tree search are presented, which are most closely related to our results. Our upper bounds, which are obtained by using specific properties of GBFS/A\*, are better than the previous bounds for general tree search, as detailed in [7](#app_sec:comparison){reference-type="ref+Label" reference="app_sec:comparison"}. @Balcan2021-jv presented a general framework for evaluating the pseudo-dimension. Their idea is to suppose that performance measures form a class of functions of algorithm parameters, called *dual* functions, and characterize its complexity based on how they are piecewise structured. This idea plays a key role in the analysis of [@Balcan2021-kv; @Balcan2021-fz], and our analysis of the upper bounds is also inspired by their idea. Its application to our setting, however, requires a close look at the behavior of GBFS/A\*. @Balcan2020-gm showed that approximating dual functions with simpler ones is useful for improving sample complexity bounds, which is similar to our idea in [5](#sec:suboptimality_astar){reference-type="ref+Label" reference="sec:suboptimality_astar"}. A difference is that while they construct simpler functions with a dynamic programming algorithm, we can use a known worst-case bound on the suboptimality of best-first search [@Valenzano2014-sw]. Lower bounds on the pseudo-dimension for graph-search algorithms have not been well studied.
20
+
21
+ @Eden2022-fl theoretically studied how the average-case running time of A\* can be affected by the dimensions or bits of learned embeddings or labels of vertex features, based on which heuristic function values and computed. The sample complexity of learning heuristic functions, however, has not been studied.

We present the background on learning theory and our problem setting. In what follows, we let $\mathbb{I}\prn*{\cdot}$ be a boolean function that returns $1$ if its argument is true and $0$ otherwise. We use $\Hcal\subseteq \Rcal^\Ycal$ to denote a class of functions that map $\Ycal$ to $\Rcal \subseteq \R$. For any positive integer $m$, we let $[m] = \set*{1,\dots,m}$.

The following *pseudo-dimension* [@Pollard1984-zp] is a fundamental notion for quantifying the complexity of a class of real-valued functions.

::: definition
Let $\Hcal \subseteq \R^\Ycal$ be a class of functions that map some domain $\Ycal$ to $\R$. We say a set $\set*{y_1,\dots,y_N}\subseteq \Ycal$ is *shattered* by $\Hcal$ if there exist target values, $t_1,\dots,t_N \in \R$, such that $$\abs*{ \Set*{ \prn*{ \mathbb{I}\prn*{h(y_1) \ge t_1}, \dots, \mathbb{I}\prn*{h(y_N) \ge t_N} } }{ h \in \Hcal } } = 2^N.$$ The *pseudo-dimension* of $\Hcal$, denoted by $\mathrm{Pdim}(\Hcal)$, is the size of a largest set shattered by $\Hcal$.
:::

If $\Hcal$ is a set of binary-valued functions that map $\Ycal$ to $\set*{0, 1}$, the pseudo-dimension of $\Hcal$ coincides with the so-called *VC-dimension* [@Vapnik1971-me], which is denoted by $\mathrm{VCdim}(\Hcal)$.
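
For small finite classes, pseudo-shattering can be checked by brute force directly from the definition. A minimal Python sketch (not from the paper; the function names and the toy threshold class are ours) illustrates that a single point is shattered by one-dimensional threshold functions while two points are not, i.e., their pseudo-dimension is $1$:

```python
from itertools import product

def pseudo_shattered(points, funcs):
    """Brute-force check of the pseudo-shattering definition for a finite
    class `funcs`: do target values t_1, ..., t_N exist such that the
    binary patterns (I(h(y_1) >= t_1), ..., I(h(y_N) >= t_N)) over all h
    realize every one of the 2^N vectors?  For a finite class it suffices
    to try, at each point, the values that some function attains there."""
    N = len(points)
    cands = [sorted({h(y) for h in funcs}) for y in points]
    for ts in product(*cands):
        patterns = {tuple(int(h(y) >= t) for y, t in zip(points, ts))
                    for h in funcs}
        if len(patterns) == 2 ** N:
            return True
    return False

# toy class of threshold functions h_a(y) = I(y >= a)
funcs = [lambda y, a=a: float(y >= a) for a in (0.0, 0.5, 1.0, 1.5)]
print(pseudo_shattered([0.7], funcs))        # True
print(pseudo_shattered([0.2, 0.7], funcs))   # False
```

Restricting the candidate thresholds to attained values loses nothing: a threshold only matters through how it splits the attained function values at each point.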

The following proposition enables us to obtain sample complexity bounds by evaluating the pseudo-dimension (see, e.g., [@Anthony1999-mm Theorem 13.6] and [@Mohri2018-zs Theorem 11.8]).

::: proposition
[]{#prop:complexity_bound label="prop:complexity_bound"} Let $H>0$, $\Hcal \subseteq {[0, H]}^\Ycal$, and $\Dcal$ be a distribution over $\Ycal$. For any $\delta\in(0,1)$, with a probability of at least $1-\delta$ over the i.i.d. draw of $\set*{y_1,\dots,y_N} \sim \Dcal^N$, for all $h \in \Hcal$, it holds that $$\abs*{ \frac{1}{N}\sum_{i=1}^N h(y_i) - \mathop{\E}_{y \sim \Dcal}\brc*{ h(y) } } = \Ord\prn*{H \sqrt{\frac{\mathrm{Pdim}(\Hcal) \lg \frac{N}{\mathrm{Pdim}(\Hcal)} + \lg \frac{1}{\delta}}{N}}}.$$
:::

In other words, for any $\epsilon>0$, $N = \Omega\prn*{\frac{H^2}{\epsilon^2} \prn*{\mathrm{Pdim}(\Hcal) \lg\frac{H}{\epsilon} + \lg \frac{1}{\delta}}}$ sampled instances are sufficient to ensure that with a probability of at least $1-\delta$, for all $h \in \Hcal$, the difference between the empirical average and the expectation over an unknown distribution $\Dcal$ is at most $\epsilon$.

We describe the path-finding instances, the GBFS/A\* algorithms, and the performance measures considered in this paper.

We consider solving randomly generated path-finding instances repeatedly. Let $x= (V, E, \set*{w_e}_{e \in E}, s, t)$ be a path-finding instance, where $(V, E)$ is a simple directed graph with $n$ vertices, $\set*{w_e}_{e \in E}$ is a set of non-negative edge weights (sometimes called costs), $s\in V$ is a start vertex, and $t\in V$ is a goal vertex. We let $\Pi$ be a class of possible instances. Each instance $x\in\Pi$ is drawn from an unknown distribution $\Dcal$ over $\Pi$. We impose the following assumption on $\Pi$.

::: assumption
[]{#assump:feasible label="assump:feasible"} For all $x\in \Pi$, the vertex set $V$ and the goal vertex $t$ are identical, and there always exists at least one directed path from $s \neq t$ to $t$, i.e., every instance $x\in \Pi$ is feasible.
:::

Fixing $V$ is necessary for evaluating the pseudo-dimension in terms of $n = |V|$. Note that we can deal with the case where some instances in $\Pi$ are defined on vertex subsets $V' \subseteq V$ by removing edges adjacent to $V \setminus V'$. The feasibility assumption is needed to ensure that GBFS/A\* always returns a solution, and $s\neq t$ simply rules out the trivial case where the empty set is optimal. In [8](#app_sec:additional){reference-type="ref+Label" reference="app_sec:additional"}, we discuss how to extend our results to the case where $t$ can change depending on instances.

We sketch the algorithmic procedure common to both GBFS and A\* (see [\[alg:gbfs,alg:astar\]](#alg:gbfs,alg:astar){reference-type="ref+Label" reference="alg:gbfs,alg:astar"}, respectively, for details). Let $A_{\bm{\rho}}$ be a GBFS/A\* algorithm, which is parameterized by heuristic function values ${\bm{\rho}}\in \R^n$. Given an instance $x\in\Pi$, $A_{\bm{\rho}}$ starts from $s$ and iteratively builds a set of candidate paths. These paths are maintained by $\texttt{OPEN}$ and $\texttt{CLOSED}$ lists, together with pointers $\texttt{p}(\cdot)$ to parent vertices. The $\texttt{OPEN}$ list contains vertices to be explored, and the $\texttt{CLOSED}$ list consists of vertices that have been explored. In each iteration, we select a vertex $v$ from $\texttt{OPEN}$, expand $v$, and move $v$ from $\texttt{OPEN}$ to $\texttt{CLOSED}$.

Heuristic function values ${\bm{\rho}}$ are used when selecting vertices. For each $v \in V$, the corresponding entry in ${\bm{\rho}}$, denoted by $\rho_v$, represents an estimated shortest-path distance from $v$ to $t$. (Although heuristic function values are usually denoted by $h(v)$, we here use $\rho_v$ for convenience.) In each iteration, we select a vertex with the smallest *score*, which is defined based on ${\bm{\rho}}$ as detailed later. We impose the following assumption on the vertex selection step.

::: assumption
[]{#assump:tie_break label="assump:tie_break"} Define an arbitrary strict total order on $V$; for example, we label elements in $V$ by $v_1,\dots, v_n$ and define a total order $v_1<\dots<v_n$. When selecting a vertex with the smallest score, we break ties, if any, in favor of the smallest vertex with respect to the total order.
:::

If we allow $A_{\bm{\rho}}$ to break ties arbitrarily, its behavior becomes too complex to obtain meaningful bounds on the pseudo-dimension. [\[assump:tie_break\]](#assump:tie_break){reference-type="ref+Label" reference="assump:tie_break"} is a natural rule to exclude such troublesome cases.

Let $A_{\bm{\rho}}$ be GBFS/A\* with parameters ${\bm{\rho}}\in \R^n$. We measure the performance of $A_{\bm{\rho}}$ on $x\in \Pi$ with a utility function $u$. We assume that $u$ satisfies the following condition.

::: assumption
[]{#assump:utility label="assump:utility"} Let $H > 0$. A utility function $u$ takes $x$ and a series of all $\emph{\texttt{OPEN}}$, $\emph{\texttt{CLOSED}}$, and $\emph{\texttt{p}}(\cdot)$ generated during the execution of $A_{\bm{\rho}}$ on $x\in \Pi$ as input, and returns a scalar value in $[0, H]$.
:::

We sometimes use $A_{\bm{\rho}}$ to represent the series of $\texttt{OPEN}$ and $\texttt{CLOSED}$ lists and pointers generated by $A_{\bm{\rho}}$. Note that $u$ meeting [\[assump:utility\]](#assump:utility){reference-type="ref+Label" reference="assump:utility"} can measure various kinds of performance. For example, since the pointers indicate an $s$--$t$ path returned by $A_{\bm{\rho}}$, $u$ can represent its cost. Moreover, since the series of $\texttt{OPEN}$ and $\texttt{CLOSED}$ lists maintains all search states, $u$ can represent the time and space complexity of $A_{\bm{\rho}}$. We let $u_{\bm{\rho}}: \Pi \to [0, H]$ denote the utility function that returns the performance of $A_{\bm{\rho}}$ on any $x\in\Pi$, and define a class of such functions as $\Ucal = \Set*{u_{\bm{\rho}}: \Pi \to [0, H]}{{\bm{\rho}}\in \R^n}$. The upper bound, $H$, is necessary to obtain sample complexity bounds with [\[prop:complexity_bound\]](#prop:complexity_bound){reference-type="ref+Label" reference="prop:complexity_bound"}. Setting such an upper bound is common in practice. For example, if $u$ measures the running time, $H$ represents a time-out deadline.

Given the above setting, we want to learn $\hat{\bm{\rho}}$ values that attain an optimal $\E_{x\sim\Dcal}[u_{\hat{\bm{\rho}}}(x)]$ value, where available information consists of sampled instances $x_1,\dots,x_N$ and $u_{\bm{\rho}}(x_1), \dots, u_{\bm{\rho}}(x_N)$ values for any ${\bm{\rho}}\in\R^n$. To obtain generalization guarantees on the performance of $A_{\hat{\bm{\rho}}}$, we bound $\abs{\frac{1}{N}\sum_{i=1}^N u_{\bm{\rho}}(x_i) - \E_{x\sim \Dcal}[u_{\bm{\rho}}(x)]}$ uniformly for all ${\bm{\rho}}\in \R^n$. Note that the uniform bound offers performance guarantees that are independent of learning procedures, e.g., manual or automated (without uniformity, the learned $\hat{\bm{\rho}}$ may overfit the sampled instances). As in [\[prop:complexity_bound\]](#prop:complexity_bound){reference-type="ref+Label" reference="prop:complexity_bound"}, to bound the sample complexity of learning ${\bm{\rho}}$ values, we need to evaluate the pseudo-dimension of $\Ucal$, denoted by $\mathrm{Pdim}(\Ucal)$, which is the main subject of this study.

While we allow heuristic function values ${\bm{\rho}}$ to be any point in $\R^n$, the range of heuristic functions may be restricted to some subspace of $\R^n$. Note that our upper bounds are applicable to such situations since restricting the space of possible ${\bm{\rho}}$ values does not increase $\mathrm{Pdim}(\Ucal)$. Meanwhile, such restriction may be useful for improving the upper bounds on $\mathrm{Pdim}(\Ucal)$; exploring this direction is left for future work. Also, our setting cannot deal with heuristic functions that take some instance-dependent features as input. To study such cases, we need more analysis that is specific to heuristic function models, which goes beyond the scope of this paper. Thus, we leave this for future work. Note that our setting still includes important heuristic function models on fixed vertex sets. For example, we can set ${\bm{\rho}}$ using learned distances to landmarks [@Goldberg2005-lj], or we can let ${\bm{\rho}}$ be distances measured on some metric space by learning metric embeddings of vertices [@You2019-hs].

We present details of GBFS and A\* and upper bounds on the pseudo-dimensions of $\Ucal$. In this section, we suppose that vertices in $V$ are labeled by $v_1,\dots,v_n$ as in [\[assump:tie_break\]](#assump:tie_break){reference-type="ref+Label" reference="assump:tie_break"}.

::: algorithm
:::

[\[alg:gbfs\]](#alg:gbfs){reference-type="ref+Label" reference="alg:gbfs"} shows the details of GBFS $A_{\bm{\rho}}$ with heuristic function values ${\bm{\rho}}\in \R^n$. When selecting vertices in [\[step:gbfs_argmin\]](#step:gbfs_argmin){reference-type="ref+Label" reference="step:gbfs_argmin"}, the scores are determined only by ${\bm{\rho}}$. This implies an obvious but important fact.
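
Since the algorithm listing does not survive in this version, the procedure can be summarized by a minimal Python sketch (our own encoding, not the paper's listing): `graph[v]` lists the children of `v`, scores are the $\rho_v$ values alone, and ties are broken toward the smallest vertex as in the tie-breaking assumption.

```python
def gbfs(graph, s, t, rho):
    """Greedy best-first search (a sketch). `graph[v]` maps v to its
    children; `rho[v]` is the heuristic value of v. Ties on the score
    rho[v] are broken toward the smallest vertex label, matching the
    fixed total order of the tie-breaking assumption."""
    OPEN, CLOSED, parent = {s}, set(), {s: None}
    while OPEN:
        v = min(OPEN, key=lambda u: (rho[u], u))  # smallest score, then order
        if v == t:                                # goal reached: rebuild path
            path = [v]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
        OPEN.remove(v)
        CLOSED.add(v)
        for c in graph[v]:                        # expand v
            if c not in OPEN and c not in CLOSED:
                parent[c] = v
                OPEN.add(c)
    return None  # never happens for feasible instances

# vertices 0..3, start 0, goal 3; rho steers the search through vertex 1
graph = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(gbfs(graph, 0, 3, {0: 3, 1: 1, 2: 2, 3: 0}))  # [0, 1, 3]
```

Note that only the *relative order* of the `rho` values affects which vertex `min` selects; this is exactly the observation behind the lemma that follows.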

::: lemma
[]{#lem:gbfs_total_order label="lem:gbfs_total_order"} Let ${\bm{\rho}}, {\bm{\rho}}' \in \R^n$ be a pair of heuristic function values with an identical total order up to ties on their entries, i.e., $\mathbb{I}(\rho_{v_i} \le \rho_{v_j}) = \mathbb{I}(\rho'_{v_i} \le \rho'_{v_j})$ for all $i,j\in [n]$ such that $i < j$. Then, we have $u_{\bm{\rho}}(x) = u_{{\bm{\rho}}'}(x)$ for all $x\in\Pi$.
:::

::: proof
*Proof.* For any $x\in\Pi$, if ${\bm{\rho}}$ and ${\bm{\rho}}'$ have an identical strict total order on their entries, vertices selected in [\[step:gbfs_argmin\]](#step:gbfs_argmin){reference-type="ref+Label" reference="step:gbfs_argmin"} are the same in each iteration of $A_{\bm{\rho}}$ and $A_{{\bm{\rho}}'}$. Since this is the only step ${\bm{\rho}}$ and ${\bm{\rho}}'$ can affect, we have $A_{\bm{\rho}}= A_{{\bm{\rho}}'}$ for all $x\in\Pi$, hence $u_{\bm{\rho}}(x) = u_{{\bm{\rho}}'}(x)$. Moreover, this holds even if ${\bm{\rho}}$ and/or ${\bm{\rho}}'$ have ties on their entries because of [\[assump:tie_break\]](#assump:tie_break){reference-type="ref+Label" reference="assump:tie_break"}. That is, the total order uniquely determines a vertex selected in [\[step:gbfs_argmin\]](#step:gbfs_argmin){reference-type="ref+Label" reference="step:gbfs_argmin"} even in case of ties. Therefore, the statement holds. ◻
:::

From [\[lem:gbfs_total_order\]](#lem:gbfs_total_order){reference-type="ref+Label" reference="lem:gbfs_total_order"}, the behavior of GBFS is uniquely determined once a total order on $\set*{\rho_v}_{v\in V}$ is fixed. Thus, for any $x\in \Pi$, the number of distinct $u_{\bm{\rho}}(x)$ values is at most $n!$, the number of permutations of $\set*{\rho_v}_{v\in V}$. This fact enables us to obtain an $\Ord(n \lg n)$ upper bound on the pseudo-dimension of $\Ucal$.

::: theorem
[]{#thm:gbfs_upper label="thm:gbfs_upper"} For GBFS $A_{\bm{\rho}}$ with parameters ${\bm{\rho}}\in \R^n$, it holds that $\mathrm{Pdim}(\Ucal) = \Ord(n \lg n)$.
:::

::: proof
*Proof.* [\[lem:gbfs_total_order\]](#lem:gbfs_total_order){reference-type="ref+Label" reference="lem:gbfs_total_order"} implies that we can partition $\R^n$ into $n!$ regions, $\Pcal_1, \Pcal_2,\dots$, so that for every $\Pcal_i$, any pair of ${\bm{\rho}}, {\bm{\rho}}' \in \Pcal_i$ satisfies $u_{\bm{\rho}}(x) = u_{{\bm{\rho}}'}(x)$ for all $x\in \Pi$. Note that the construction of the regions, $\Pcal_1, \Pcal_2,\dots$, does not depend on $x$. Thus, given any $N$ instances $x_1,\dots,x_N$, even if ${\bm{\rho}}$ moves over the whole $\R^n$, the number of distinct tuples of form $(u_{\bm{\rho}}(x_1),\dots,u_{\bm{\rho}}(x_N))$ is at most $n!$. To shatter $N$ instances, $n! \ge 2^N$ must hold. Solving this for the largest $N$ yields $\mathrm{Pdim}(\Ucal) = \Ord(n \lg n)$. ◻
:::

::: algorithm
:::

[\[alg:astar\]](#alg:astar){reference-type="ref+Label" reference="alg:astar"} shows the details of A\*. As with GBFS, ${\bm{\rho}}$ only affects the vertex selection step ([\[step:astar_argmin\]](#step:astar_argmin){reference-type="ref+Label" reference="step:astar_argmin"}). However, unlike GBFS, the scores, $g_v + \rho_v$, involve not only ${\bm{\rho}}$ but also $\set*{g_v}_{v\in V}$. Each $g_v$ is called a $g$-cost and maintains the cost of some path from $s$ to $v$. As in [\[alg:astar\]](#alg:astar){reference-type="ref+Label" reference="alg:astar"}, when $v$ is expanded and a shorter path to $c$ is found, whose cost is denoted by $g_{\textrm{new}}$, we update the $g_{c}$ value. Thus, each $g_{v}$ always gives an upper bound on the shortest-path distance from $s$ to $v$. For each $v \in V$, there are at most $\sum_{k=0}^{n-2}\frac{(n-2)!}{(n-2-k)!} \le (n-1)!$ simple paths connecting $s$ to $v$, and thus $g_v$ can take at most $(n-1)!$ distinct values. We denote the set of those distinct values by $\Gcal_v$, and define $\Gcal_V = \Set*{(v, g_v)}{v\in V, g_v \in \Gcal_v}$ as the set of all pairs of a vertex and its possible $g$-cost. It holds that $|\Gcal_V| \le n\times (n-1)! = n!$.
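
The procedure, including the $g$-cost updates and the reopening of closed vertices, can be sketched in Python (again our own encoding, not the paper's listing):

```python
import math

def astar(graph, s, t, rho):
    """A* with reopening (a sketch). `graph[v]` maps v to a dict
    {child: weight}; the score of v is g[v] + rho[v], and ties are
    broken toward the smallest vertex label."""
    g = {v: math.inf for v in graph}
    g[s] = 0
    OPEN, CLOSED, parent = {s}, set(), {s: None}
    while OPEN:
        v = min(OPEN, key=lambda u: (g[u] + rho[u], u))
        if v == t:                           # goal reached: rebuild path
            path = [v]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1], g[t]
        OPEN.remove(v)
        CLOSED.add(v)
        for c, w in graph[v].items():
            g_new = g[v] + w                 # cost of the new path to c
            if g_new < g[c]:                 # shorter path found: update g_c
                g[c], parent[c] = g_new, v
                CLOSED.discard(c)            # reopening step
                OPEN.add(c)
    return None

# with rho = 0 the scores reduce to g-costs and the search behaves
# like Dijkstra's algorithm, finding the shortest path
graph = {0: {1: 1, 2: 4}, 1: {2: 1}, 2: {}}
print(astar(graph, 0, 2, {0: 0, 1: 0, 2: 0}))  # ([0, 1, 2], 2)
```

The point the analysis exploits is that, for a fixed instance, the only quantities compared in the `min` step are the finitely many possible values $g_v + \rho_v$.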

Note that once $x\in \Pi$ is fixed, $\Gcal_v$ for $v\in V$ and $\Gcal_V$ are uniquely determined. To emphasize this fact, we sometimes use notation with references to $x$: $g_v(x)$, $\Gcal_v(x)$, and $\Gcal_V(x)$. As with the case of GBFS ([\[lem:gbfs_total_order\]](#lem:gbfs_total_order){reference-type="ref+Label" reference="lem:gbfs_total_order"}), we can define a total order on the scores to determine the behavior of A\* uniquely.

::: lemma
[]{#lem:astar_total_order label="lem:astar_total_order"} Fix any instance $x\in \Pi$. Let ${\bm{\rho}}, {\bm{\rho}}' \in \R^n$ be a pair of heuristic function values such that total orders on the sets of all possible scores, $\Set*{g_v(x) + \rho_v}{(v, g_v(x)) \in \Gcal_V(x)}$ and $\Set*{g_v(x) + \rho'_v}{(v, g_v(x)) \in \Gcal_V(x)}$, are identical up to ties. Then, it holds that $u_{\bm{\rho}}(x) = u_{{\bm{\rho}}'}(x)$.
:::

::: proof
*Proof.* If the two sets of scores have an identical strict total order, we select the same vertex in [\[step:astar_argmin\]](#step:astar_argmin){reference-type="ref+Label" reference="step:astar_argmin"} in each iteration of $A_{\bm{\rho}}$ and $A_{{\bm{\rho}}'}$. Thus, we have $A_{\bm{\rho}}= A_{{\bm{\rho}}'}$ for any fixed $x$, implying $u_{\bm{\rho}}(x) = u_{{\bm{\rho}}'}(x)$. We show that this holds even in the presence of ties by using [\[assump:tie_break\]](#assump:tie_break){reference-type="ref+Label" reference="assump:tie_break"}. First, any two scores of the same vertex, $g_v(x) +\rho_v$ and $g'_v(x) +\rho_v$, never have ties since $\Gcal_v$ consists of distinct $g$-costs. Next, if $g_{v_i}(x) +\rho_{v_i} = g_{v_j}(x) +\rho_{v_j}$ holds for some $i < j$, we always prefer $v_i$ to $v_j$ in [\[step:astar_argmin\]](#step:astar_argmin){reference-type="ref+Label" reference="step:astar_argmin"} due to [\[assump:tie_break\]](#assump:tie_break){reference-type="ref+Label" reference="assump:tie_break"}. Therefore, even in the presence of ties, we select a vertex in [\[step:astar_argmin\]](#step:astar_argmin){reference-type="ref+Label" reference="step:astar_argmin"} as if the set of scores has a strict total order. Thus, if ${\bm{\rho}}$ and ${\bm{\rho}}'$ induce the same total order up to ties on the sets of possible scores, it holds that $u_{\bm{\rho}}(x) = u_{{\bm{\rho}}'}(x)$. ◻
:::

By using [\[lem:astar_total_order\]](#lem:astar_total_order){reference-type="ref+Label" reference="lem:astar_total_order"}, we can obtain an $\Ord(n^2\lg n)$ upper bound on the pseudo-dimension of $\Ucal$.

::: theorem
[]{#thm:astar_upper label="thm:astar_upper"} For A\* $A_{\bm{\rho}}$ with parameters ${\bm{\rho}}\in \R^n$, it holds that $\mathrm{Pdim}(\Ucal) = \Ord(n^2 \lg n)$.
:::

::: proof
*Proof.* As with the proof of [\[thm:gbfs_upper\]](#thm:gbfs_upper){reference-type="ref+Label" reference="thm:gbfs_upper"}, we partition $\R^n$ into some regions so that in each region, the behavior of A\* is unique. Unlike the case of GBFS, boundaries of such regions change over $N$ instances. To deal with this situation, we use a geometric fact: for $m \ge n \ge 1$, $m$ hyperplanes partition $\R^n$ into $\Ord({(\e m)}^n)$ regions.[^1]

Fix a tuple of any $N$ instances $(x_1,\dots,x_N)$. We consider hyperplanes in $\R^n$ of form $g_{v_i}(x_k) + \rho_{v_i} = g_{v_j}(x_k) + \rho_{v_j}$ for all $k\in[N]$ and all pairs of $(v_i,g_{v_i}(x_k)), (v_j,g_{v_j}(x_k)) \in \Gcal_V$ such that $i\neq j$. These hyperplanes partition $\R^n$ into some regions, $\Pcal_1,\Pcal_2,\dots$, so that the following condition holds: for every $\Pcal_i$, any ${\bm{\rho}}, {\bm{\rho}}' \in \Pcal_i$ have the same total order on $\Set*{g_v(x_k) + \rho_v}{(v, g_v(x_k)) \in \Gcal_V(x_k)}$ and $\Set*{g_v(x_k) + \rho'_v}{(v, g_v(x_k)) \in \Gcal_V(x_k)}$ up to ties for all $k \in [N]$, which implies $u_{\bm{\rho}}(x_k) = u_{{\bm{\rho}}'}(x_k)$ for all $k \in [N]$ due to [\[lem:astar_total_order\]](#lem:astar_total_order){reference-type="ref+Label" reference="lem:astar_total_order"}. That is, for every $k \in [N]$, if we see $u_{\bm{\rho}}(x_k)$ as a function of ${\bm{\rho}}$, it is piecewise constant where pieces are given by $\Pcal_1,\Pcal_2,\dots$. Therefore, when ${\bm{\rho}}$ moves over the whole $\R^n$, the number of distinct tuples of form $\prn*{u_{\bm{\rho}}(x_1), \dots, u_{\bm{\rho}}({x_N})}$ is at most the number of the pieces. Note that the pieces are generated by partitioning $\R^n$ with $\sum_{k\in[N]} \binom{|\Gcal_V(x_k)|}{2} \le N \binom{n!}{2}$ hyperplanes, which means there are at most $\Ord\prn*{\prn*{\e N \binom{n!}{2}}^n}$ pieces. To shatter $N$ instances, $\Ord\prn*{\prn*{\e N \binom{n!}{2}}^n} \ge 2^N$ is necessary. Solving this for the largest $N$ yields $\mathrm{Pdim}(\Ucal) = \Ord(n^2 \lg n)$. ◻
:::
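
The geometric fact used in the proof can be sanity-checked numerically. The exact maximum number of regions cut out by $m$ hyperplanes in general position in $\R^n$ is $\sum_{i=0}^{n}\binom{m}{i}$, which is indeed $\Ord\prn*{(\e m)^n}$; a small sketch (our own code, for illustration only):

```python
from math import comb, e

def max_regions(m, n):
    """Maximum number of regions that m hyperplanes in general position
    cut R^n into: the classical count sum_{i=0}^{n} C(m, i)."""
    return sum(comb(m, i) for i in range(n + 1))

# the exact count stays below the O((e*m)^n) bound used in the proof
for m, n in [(10, 2), (100, 3), (1000, 5)]:
    assert max_regions(m, n) <= (e * m) ** n

print(max_regions(3, 2))  # 3 lines in general position cut the plane: 7
```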

Compared with GBFS, the additional $n$ factor comes from the bound of $(n-1)!$ on $|\Gcal_v|$. This bound may seem too pessimistic, but it is almost tight in some cases, as implied by the following example.

::: example
[]{#exmp:large_Gv label="exmp:large_Gv"} Let $(V, E)$ be a complete graph with edges labeled as $\set*{e_1,\dots,e_{|E|}}$. Set each edge weight $w_{e_i}$ to $2^{i-1}$ for $i\in[|E|]$. Considering the binary representation of the edge weights, the costs of all simple $s$--$v$ paths are mutually different for $v\in V$, which implies $|\Gcal_v| = \sum_{k=0}^{n-2}\frac{(n-2)!}{(n-2-k)!} \ge (n-2)!$.
:::
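
The example can be verified computationally on a small complete digraph (the code and its encoding are ours; it enumerates all simple $s$--$v$ paths and confirms that their costs are pairwise distinct under power-of-two edge weights):

```python
from itertools import permutations

def distinct_path_costs(n):
    """Complete digraph on vertices 0..n-1 with edge weights 2^k for a
    fixed enumeration of the edges. Since every simple path uses a
    distinct set of edges, the binary representation of its cost is
    unique. Returns (#simple s-v paths, #distinct costs)."""
    edges = [(i, j) for i in range(n) for j in range(n) if i != j]
    w = {edge: 2 ** k for k, edge in enumerate(edges)}
    s, v = 0, n - 1
    inner = [u for u in range(n) if u not in (s, v)]
    costs = []
    for k in range(len(inner) + 1):          # k intermediate vertices
        for mid in permutations(inner, k):
            path = (s, *mid, v)
            costs.append(sum(w[a, b] for a, b in zip(path, path[1:])))
    return len(costs), len(set(costs))

print(distinct_path_costs(5))  # (16, 16): all 16 path costs are distinct
```

For $n=5$ the path count is $\sum_{k=0}^{3}\frac{3!}{(3-k)!} = 1+3+6+6 = 16$, matching the formula in the example.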

This example suggests that improving the $\Ord(n^2\lg n)$ bound is not straightforward. Under some realistic assumptions, however, we can improve it by deriving smaller upper bounds on $|\Gcal_v|$.

First, if the maximum degree of vertices is always bounded, we can obtain the following bound.

::: theorem
Assume that the maximum out-degree of the directed graph $(V, E)$ of every instance in $\Pi$ is upper bounded by $d$. Then, it holds that $\mathrm{Pdim}(\Ucal) = \Ord(n^2\lg d)$.
:::

::: proof
*Proof.* Under the assumption on the maximum degree, there are at most $\sum_{k=0}^{n-2}d^k \le (n-1)d^{n-2}$ simple $s$--$v$ paths, which implies $|\Gcal_v|\le (n-1)d^{n-2}$ for every $v \in V$. Therefore, we have $|\Gcal_V| \le n\times (n-1)d^{n-2}$. Following the proof of [\[thm:astar_upper\]](#thm:astar_upper){reference-type="ref+Label" reference="thm:astar_upper"}, we can obtain an upper bound on $\mathrm{Pdim}(\Ucal)$ by solving $\Ord\prn*{N^n\binom{n(n-1)d^{n-2}}{2}^n} \ge 2^N$ for the largest $N$, which yields $\mathrm{Pdim}(\Ucal) = \Ord(n^2\lg d)$. ◻
:::
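
The path-counting step can be checked directly on a small bounded-degree instance; a sketch (the graph and identifiers are ours, chosen for illustration):

```python
def count_simple_paths(graph, s, v):
    """Count simple s-v paths by DFS; `graph[u]` lists u's children."""
    def dfs(u, visited):
        if u == v:
            return 1
        return sum(dfs(c, visited | {c})
                   for c in graph[u] if c not in visited)
    return dfs(s, {s})

# a DAG with out-degree at most d = 2 on n = 6 vertices: the number of
# simple s-v paths respects the (n-1) * d^(n-2) bound from the proof
graph = {0: [1, 2], 1: [2, 3], 2: [3, 4], 3: [4, 5], 4: [5], 5: []}
n, d = 6, 2
paths = count_simple_paths(graph, 0, 5)
assert paths <= (n - 1) * d ** (n - 2)
print(paths)  # 8
```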

Second, if edge weights are non-negative integers bounded by $\ell$, we can obtain the following bound.

::: theorem
[]{#thm:astar_integer label="thm:astar_integer"} Assume that edge weights $\set*{w_e}_{e\in E}$ of all instances in $\Pi$ are non-negative integers bounded from above by a constant $\ell$. Then, it holds that $\mathrm{Pdim}(\Ucal) = \Ord(n\lg(n\ell))$.
:::

::: proof
*Proof.* Under the assumption, every $g$-cost $g_v$ takes a non-negative integer value of at most $n\ell$. Since $\Gcal_v$ consists of distinct $g$-cost values, $|\Gcal_v| \le n\ell$ holds, hence $|\Gcal_V| \le n^2\ell$. Solving $\Ord\prn*{N^n\binom{n^2\ell}{2}^n} \ge 2^N$ for the largest $N$, we obtain $\mathrm{Pdim}(\Ucal) = \Ord(n\lg(n\ell))$. ◻
:::

Note that if $\ell = \Ord(\poly(n))$ holds, we have $\mathrm{Pdim}(\Ucal) = \Ord(n \lg n)$.

A\* is usually allowed to reopen closed vertices as in [\[step:reopening_if,step:parent_pointer_updating,step:reopening_o\]](#step:reopening_if,step:parent_pointer_updating,step:reopening_o){reference-type="ref+Label" reference="step:reopening_if,step:parent_pointer_updating,step:reopening_o"}. This, however, causes $\Omega(2^{n})$ iterations in general [@Martelli1977-ij], albeit always finite [@Valenzano2016-re]. A popular workaround is to simply remove [\[step:reopening_if,step:parent_pointer_updating,step:reopening_o\]](#step:reopening_if,step:parent_pointer_updating,step:reopening_o){reference-type="ref+Label" reference="step:reopening_if,step:parent_pointer_updating,step:reopening_o"}, and such A\* without reopening has also been extensively studied [@Valenzano2014-sw; @Sepetnitsky2016-zo; @Chen2019-nf; @Chen2021-zq]. Note that our results are applicable to A\* both with and without reopening.

We present lower bounds on the pseudo-dimension for GBFS/A\*. We prove the result by constructing $\Omega(n)$ shatterable instances with unweighted graphs. Therefore, the $\Ord(n \lg n)$ upper bounds for GBFS ([\[thm:gbfs_upper\]](#thm:gbfs_upper){reference-type="ref+Label" reference="thm:gbfs_upper"}) and A\* under the edge-weight assumption ([\[thm:astar_integer\]](#thm:astar_integer){reference-type="ref+Label" reference="thm:astar_integer"}) are tight up to a $\lg n$ factor.

<figure id="fig:gbfs-lower" data-latex-placement="tb">
<img src="fig/lower_unweighted.svg" style="width:100.0%" />
<figcaption>An illustration of the instances <span class="math inline"><em>x</em><sub>1</sub>, …, <em>x</em><sub><em>n</em> − 4</sub></span> for <span class="math inline"><em>n</em> = 8</span>. Each vertex is labeled by <span class="math inline"><em>s</em></span>, <span class="math inline"><em>r</em></span>, <span class="math inline"><em>t</em></span>, or <span class="math inline"><em>i</em> ∈ [<em>n</em> − 3]</span>, as shown near the vertex circles. The values in the vertex circles represent <span class="math inline"><strong>ρ</strong></span> that makes <span class="math inline"><em>A</em><sub><strong>ρ</strong></sub></span> return suboptimal paths to <span class="math inline"><em>x</em><sub>2</sub></span> and <span class="math inline"><em>x</em><sub>3</sub></span>, i.e., <span class="math inline">$S = \set*{2, 3}$</span>. The thick edges indicate the returned paths.</figcaption>
</figure>

::: theorem
[]{#thm:lower label="thm:lower"} For GBFS/A\* $A_{\bm{\rho}}$ with parameters ${\bm{\rho}}\in \R^n$, it holds that $\mathrm{Pdim}(\Ucal) = \Omega(n)$.
:::

::: proof
*Proof.* We construct a series of $n-4$ instances, $x_{1},\dots,x_{n-4}$, that can be shattered by $\Ucal$, where each $u_{\bm{\rho}}$ returns the length of an $s$--$t$ path found by $A_{\bm{\rho}}$. We label vertices in $V$ by $s$, $r$, $t$, or $i \in [n-3]$. See [1](#fig:gbfs-lower){reference-type="ref+Label" reference="fig:gbfs-lower"} for an example with $n=8$. We define $M = V\setminus \set*{s,r,t}$. For each $x_i$ ($i \in [n-4]$), we draw edges $(s, v)$ for $v \in M$ and $(v, t)$ for $v \in \Set*{v' \in M}{v' > i}$, which constitute optimal $s$--$t$ paths of length $2$. In addition, for each $x_i$, we draw edges $(i, r)$ and $(r, t)$, where $s \to i \to r \to t$ is the only suboptimal path of length $3$. Letting $t_i = 2.5$ for $i \in [n-4]$, we prove that $\Ucal$ can shatter those $n-4$ instances, i.e., $A_{\bm{\rho}}$ can return suboptimal solutions to any subset of $\set*{x_1\dots,x_{n-4}}$ by appropriately setting ${\bm{\rho}}$.

Let $S \subseteq [n-4]$ indicate a subset of instances, to which we will make $A_{\bm{\rho}}$ return suboptimal solutions. We show that for any $S$, we can set ${\bm{\rho}}$ so that $A_{\bm{\rho}}$ returns $s \to i \to r \to t$ to $x_i$ if and only if $i \in S$. We refer to the vertex labeled by $n-3$ as $m$, which we use to ensure that every instance has an optimal path $s \to m \to t$. We set ${\bm{\rho}}$ as follows: $\rho_s = n$ (or an arbitrary value), $\rho_r = \rho_t = 0$, $\rho_i = i+2$ if $i \in S \cup \set*{m}$, and $\rho_i = n$ (or a sufficiently large value) if $i \in [n-4]\setminus S$. If $A_{\bm{\rho}}$ with this ${\bm{\rho}}$ is applied to $x_i$, it iteratively selects vertices in $S \cup \set*{m}$ in increasing order of their labels until a vertex that has a child is selected. Once a vertex with a child is expanded, it ends up returning $s \to i \to r \to t$ if $i \in S$ and $s \to v \to t$ for some $v > i$ if $i \notin S$. We below confirm this more precisely, separately for GBFS and A\*.

We consider applying GBFS $A_{\bm{\rho}}$ to $x_i$. $A_{\bm{\rho}}$ first expands $s$ and adds the vertices in $M$ to $\texttt{OPEN}$. Since the vertices $v \in [n-4]\setminus S$ have sufficiently large scores of $n$, they are never selected before any vertex in $S \cup \set*{m}$. Thus, $A_{\bm{\rho}}$ selects a vertex from $S \cup \set*{m}$ with the smallest score. If the selected vertex, denoted by $v$, satisfies $v < i$, there is no child of $v$; hence, nothing is added to $\texttt{OPEN}$, and we go back to [\[step:gbfs_argmin\]](#step:gbfs_argmin){reference-type="ref+Label" reference="step:gbfs_argmin"}. In this way, $A_{\bm{\rho}}$ iteratively moves $v \in S \cup \set*{m}$ that has no child from $\texttt{OPEN}$ to $\texttt{CLOSED}$. Consider the first time when the selected vertex $v \in S \cup \set*{m}$ has a child $c$ (this situation is guaranteed to occur since $m$ always has a child). If $i \notin S$, we have $v \neq i$ since $v$ is selected from $S \cup \set*{m}$. Then, since $v$'s child is $c= t$, $A_{\bm{\rho}}$ returns $s \to v \to t$ with $v \neq i$. If $i \in S$, then $i$ has the smallest score ($\rho_i = i+2$) among all vertices in $S \cup \set*{m}$ that have a child. Thus, $A_{\bm{\rho}}$ selects $i$ and opens $r$. Now, $r$ has the smallest score of $\rho_r = 0$. Therefore, $A_{\bm{\rho}}$ selects $r$ and reaches $t$, returning $s \to i \to r \to t$. Consequently, $A_{\bm{\rho}}$ returns the suboptimal path if and only if $i \in S$.

We next consider applying A\* $A_{\bm{\rho}}$ to $x_i$. It first expands $s$ and adds the vertices in $M$ to $\texttt{OPEN}$. Since $g_v = 1$ for all $v \in M$, only ${\bm{\rho}}$ values matter when comparing the scores, as with the case of GBFS. Therefore, A\* iteratively moves vertices in $S \cup \set*{m}$ from $\texttt{OPEN}$ to $\texttt{CLOSED}$ until a vertex that has a child is selected. Consider the first time a selected vertex $v$ has a child $c$ (so far, $s$ is not reopened since $g_s = 0$). As with the case of GBFS, we have $v\neq i$ and $c= t$ if $i \notin S$, or $v = i$ and $c= r$ if $i \in S$. Now, every $v' \in \texttt{OPEN}\setminus \set*{c}$ has a score of at least $4$ since $g_{v'} = 1$ and $\rho_{v'} \ge 3$ for $v' \in M$. Therefore, if $i \notin S$, $t \in \texttt{OPEN}$ has the smallest score of $g_t + \rho_t = 2 + 0 = 2$. Thus, $A_{\bm{\rho}}$ next selects $t$ and returns $s \to v \to t$, where $v \neq i$. If $i \in S$, since $r \in \texttt{OPEN}$ has the smallest score of $g_r + \rho_r = 2 + 0 = 2$, $A_{\bm{\rho}}$ selects $r$ and opens $t$. Then, since $t$ has the score of $g_t + \rho_t = 3 + 0 = 3$, $A_{\bm{\rho}}$ selects $t$ and returns $s \to i \to r \to t$. To conclude, $A_{\bm{\rho}}$ returns the suboptimal path if and only if $i \in S$. ◻
:::
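
The construction can be verified exhaustively for $n=8$: a self-contained Python sketch (the instance encoding and identifiers are ours) builds $x_1,\dots,x_{n-4}$, sets ${\bm{\rho}}$ as in the proof for each subset $S$, runs GBFS with the index-based tie-breaking, and checks that the suboptimal length-$3$ path is returned exactly when $i \in S$.

```python
from itertools import chain, combinations

def gbfs(graph, order, s, t, rho):
    """Minimal GBFS; ties broken by the fixed total order `order`."""
    OPEN, CLOSED, parent = {s}, set(), {s: None}
    while OPEN:
        v = min(OPEN, key=lambda u: (rho[u], order[u]))
        if v == t:
            path = [v]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
        OPEN.remove(v)
        CLOSED.add(v)
        for c in graph.get(v, []):
            if c not in OPEN and c not in CLOSED:
                parent[c] = v
                OPEN.add(c)

n = 8
M = list(range(1, n - 2))    # vertices labeled 1, ..., n-3
m = n - 3                    # the vertex guaranteeing an optimal path

def instance(i):
    """Build x_i: edges (s, v) for v in M, (v, t) for v > i, and i->r->t."""
    graph = {'s': M[:], 'r': ['t']}
    for v in M:
        graph[v] = (['t'] if v > i else []) + (['r'] if v == i else [])
    return graph

order = {v: k for k, v in enumerate(['s', *M, 'r', 't'])}
subsets = chain.from_iterable(combinations(range(1, n - 3), k)
                              for k in range(n - 3))
for S in subsets:
    rho = {'s': n, 'r': 0, 't': 0}
    for v in M:
        rho[v] = v + 2 if (v in S or v == m) else n
    for i in range(1, n - 3):            # instances x_1, ..., x_{n-4}
        path = gbfs(instance(i), order, 's', 't', rho)
        # the suboptimal length-3 path is returned exactly when i is in S
        assert (len(path) - 1 == 3) == (i in S)
print("all", 2 ** (n - 4), "subsets realized")
```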

Given the results in [\[sec:upper_bound,sec:lower_bound\]](#sec:upper_bound,sec:lower_bound){reference-type="ref+Label" reference="sec:upper_bound,sec:lower_bound"}, a major open problem is to close the $\tilde\Ord(n)$ gap[^2] between the $\Ord(n^2 \lg n)$ upper bound and the $\Omega(n)$ lower bound for A\* in general cases. This problem seems very complicated, as we will discuss in [6](#sec:conclusion){reference-type="ref+Label" reference="sec:conclusion"}. Instead, we here study a particular case where we want to bound the expected suboptimality of A\*, which is an important performance measure since learned heuristic values are not always admissible. We show that a general bound obtained from [\[thm:astar_upper\]](#thm:astar_upper){reference-type="ref+Label" reference="thm:astar_upper"} can be sometimes improved by using a ${\bm{\rho}}$-dependent worst-case bound [@Valenzano2014-sw].

For any $x\in \Pi$, let $\mathrm{Opt}(x)$ and $\mathrm{Cost}_{\bm{\rho}}(x)$ be the costs of an optimal solution and an $s$--$t$ path returned by $A_{\bm{\rho}}$, respectively, and let $u_{\bm{\rho}}(x) = \mathrm{Cost}_{\bm{\rho}}(x) - \mathrm{Opt}(x)$ be the suboptimality. From [\[thm:astar_upper\]](#thm:astar_upper){reference-type="ref+Label" reference="thm:astar_upper"} and [\[prop:complexity_bound\]](#prop:complexity_bound){reference-type="ref+Label" reference="prop:complexity_bound"}, we can obtain the following high-probability bound on the expected suboptimality: $$\begin{equation}
\label{eq:supopt_bound}
\mathop{\E}_{x\sim\Dcal} \brc*{\mathrm{Cost}_{\bm{\rho}}(x) - \mathrm{Opt}(x)} \le \frac{1}{N} \sum_{i=1}^N \prn*{\mathrm{Cost}_{\bm{\rho}}(x_i) - \mathrm{Opt}(x_i)} + \tilde\Ord\prn*{H\sqrt{\frac{n^2 + \lg \frac{1}{\delta}}{N}}}.
\end{equation}$$ That is, the expected suboptimality can be bounded from above by the empirical suboptimality over the $N$ training instances (an empirical term) plus an $\tilde\Ord(H\sqrt{n^2/N})$ term (a complexity term). While this bound is useful when $N \gg n^2$, we may not have enough training instances in practice. In such cases, the complexity term becomes dominant and prevents us from obtaining meaningful guarantees. In what follows, we present an alternative bound of the form "an empirical term $+$ a complexity term" that can strike a better balance between the two terms when $N$ is not large enough relative to $n^2$.
184
+
185
+ To this end, we use the notion of *consistency*. We say ${\bm{\rho}}$ is *consistent* if $\rho_v \le \rho_{c} + w_{(v, c)}$ holds for all $(v, c) \in E$. If ${\bm{\rho}}$ is consistent, A\* without reopening returns an optimal solution. @Valenzano2014-sw [Theorem 4.6] revealed that for any instance $x\in \Pi$, the suboptimality of A\* can be bounded by the inconsistency accumulated along an optimal path (excluding the first edge containing $s$) as follows:[^3] $$\begin{equation}
186
+ \label{eq:inconsistency_valenzano}
187
+ \mathrm{Cost}_{\bm{\rho}}(x) - \mathrm{Opt}(x) \le \Delta_{\bm{\rho}}(x) \coloneqq \sum_{(v, c) \in S^*(x), v \neq s} \max\set*{\rho_v - \rho_{c} - w_{(v, c)}, 0},
188
+ \end{equation}$$ where $S^*(x) \subseteq E$ is an optimal solution to $x$ (if there are multiple optimal solutions, we break ties by using the lexicographical order induced from the total order defined in [\[assump:tie_break\]](#assump:tie_break){reference-type="ref+Label" reference="assump:tie_break"}). We call $\Delta_{\bm{\rho}}(x)$ the inconsistency (of ${\bm{\rho}}$ on $S^*(x)$).
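Given an optimal solution and the edge weights, $\Delta_{\bm{\rho}}(x)$ is straightforward to compute; a minimal sketch (our own encoding of the graph as a dict of edge weights, not the paper's code):

```python
def inconsistency(rho, opt_path_edges, weights, s):
    """Delta_rho(x): inconsistency of rho accumulated along the edges of an
    optimal solution S*(x), skipping edges that leave the start vertex s."""
    return sum(
        max(rho[v] - rho[c] - weights[(v, c)], 0.0)
        for (v, c) in opt_path_edges
        if v != s
    )
```

A heuristic that is consistent along the optimal path yields zero inconsistency, matching the bound's intuition that the suboptimality then vanishes.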
189
+
190
+ Given $N$ instances $x_1,\dots,x_N$, we can compute the empirical inconsistency, $\frac{1}{N}\sum_{i=1}^N \Delta_{\bm{\rho}}(x_i)$, at the cost of solving the $N$ instances, which we will use as an empirical term. To define the corresponding complexity term, we regard $\Delta_{\bm{\rho}}(\cdot):\Pi\to[0,\hat H]$ as an inconsistency function parameterized by ${\bm{\rho}}$ (we discuss later how large $\hat H > 0$ can be), and we let $\hat \Ucal = \Set*{\Delta_{\bm{\rho}}:\Pi\to [0, \smash{\hat H}]}{{\bm{\rho}}\in\R^n}$. The following theorem says that the class $\hat\Ucal$ of inconsistency functions has a smaller pseudo-dimension than the class $\Ucal$ of general utility functions.
191
+
192
+ ::: theorem
193
+ []{#thm:pdim_inc label="thm:pdim_inc"} For the class $\hat\Ucal$ of inconsistency functions, it holds that $\mathrm{Pdim}(\hat\Ucal) = \Ord(n \lg n)$.
194
+ :::
195
+
196
+ By using [\[eq:inconsistency_valenzano\]](#eq:inconsistency_valenzano){reference-type="eqref" reference="eq:inconsistency_valenzano"}, [\[prop:complexity_bound\]](#prop:complexity_bound){reference-type="ref+Label" reference="prop:complexity_bound"}, and [\[thm:pdim_inc\]](#thm:pdim_inc){reference-type="ref+Label" reference="thm:pdim_inc"}, we can obtain the following high-probability bound on the expected suboptimality, whose complexity term has a better dependence on $n$ than that of [\[eq:supopt_bound\]](#eq:supopt_bound){reference-type="eqref" reference="eq:supopt_bound"}: $$\begin{equation}
197
+ \label{eq:inc_subopt_bound}
198
+ \mathop{\E}_{x\sim\Dcal} \brc*{\mathrm{Cost}_{\bm{\rho}}(x) - \mathrm{Opt}(x)}
199
+ \le
200
+ \mathop{\E}_{x\sim \Dcal}[\Delta_{\bm{\rho}}(x)]
201
+ \le
202
+ \frac{1}{N}\sum_{i=1}^N \Delta_{\bm{\rho}}(x_i)
203
+ +
204
+ \tilde\Ord\prn*{\hat H \sqrt{\frac{n + \lg \frac{1}{\delta}}{N}}}.
205
+ \end{equation}$$ This bound is uniform for all ${\bm{\rho}}\in \R^n$, as with other bounds discussed so far. Thus, the bound holds even if we choose ${\bm{\rho}}$ to minimize the empirical inconsistency. Note that the empirical inconsistency is convex in ${\bm{\rho}}$ since each $\Delta_{\bm{\rho}}(x_i)$ is a sum of maxima of linear functions of ${\bm{\rho}}$ and zero; it is hence easier to minimize in practice than the raw empirical suboptimality (and suitable for a recent online-convex-optimization framework [@Khodak2022-sf]).
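Because the empirical inconsistency is piecewise-linear and convex in ${\bm{\rho}}$, even a plain subgradient step is easy to write down. The sketch below is our illustration under the same dict-based graph encoding, not the paper's algorithm; `instances` pairs each optimal path with its weights.

```python
def subgradient_step(rho, instances, s, lr):
    """One subgradient step on (1/N) * sum_i Delta_rho(x_i), which is convex
    in rho: each active term max{rho_v - rho_c - w, 0} > 0 contributes
    +1/N to the subgradient at v and -1/N at c."""
    grad = {v: 0.0 for v in rho}
    N = len(instances)
    for path, weights in instances:
        for (v, c) in path:
            if v != s and rho[v] - rho[c] - weights[(v, c)] > 0:
                grad[v] += 1.0 / N
                grad[c] -= 1.0 / N
    return {v: rho[v] - lr * grad[v] for v in rho}
```

Each step shrinks the heuristic value at vertices whose outgoing optimal-path edge is inconsistent and raises it at their successors, driving the active terms toward zero.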
206
+
207
+ Before proving [\[thm:pdim_inc\]](#thm:pdim_inc){reference-type="ref+Label" reference="thm:pdim_inc"}, we present a typical example to show that the inconsistency is not too large relative to the suboptimality.
208
+
209
+ ::: example
210
+ []{#exmp:inc_vs_subopt label="exmp:inc_vs_subopt"} Suppose that every edge weight $w_e$ is bounded in $[0, \ell]$, which ensures that the suboptimality $u_{\bm{\rho}}$ is at most $H = \ell(n-1)$ for any ${\bm{\rho}}\in \R^n$. For simplicity, we consider the following natural way to compute ${\bm{\rho}}$ values: compute an estimate $\hat w_e \in [0, \ell]$ of $w_e$ for each $e\in E$ and let $\rho_v$ be the cost of a shortest $v$--$t$ path with respect to $\set*{\hat w_e}_{e\in E}$. Then, ${\bm{\rho}}$ is consistent with respect to $\set*{\hat w_e}_{e\in E}$, i.e., $\rho_v \le \rho_{c} + \hat w_{(v,c)}$ for every $(v, c) \in E$. Therefore, it holds that $$\Delta_{\bm{\rho}}(x) =
211
+ \sum_{(v, c) \in S^*(x), v \neq s} \max\set*{\rho_v - \rho_{c} - w_{(v,c)}, 0}
212
+ \le
213
+ \sum_{(v, c) \in S^*(x), v \neq s} \abs*{\hat w_{(v, c)} - w_{(v, c)}}.$$ Hence $\Delta_{\bm{\rho}}$ is at most $\hat H = \ell (n-2)$, implying that the inconsistency does not greatly exceed the suboptimality bound $H = \ell(n-1)$. If empirically accurate estimates $\hat w_e$ for $e \in S^*(x)$ are available, the inconsistency becomes small.
214
+ :::
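The construction in the example, taking $\rho_v$ to be a shortest $v$--$t$ distance under the estimated weights, can be sketched with a backward Dijkstra. This is a minimal illustration; the dict-based edge encoding is ours.

```python
import heapq

def rho_from_estimates(est_w, t):
    """rho_v = cost of a shortest v--t path under estimated weights est_w
    (a dict {(v, c): w_hat}), via Dijkstra on the reversed graph. The
    resulting rho is consistent w.r.t. est_w: rho_v <= rho_c + est_w[(v, c)]."""
    radj = {}
    for (v, c), w in est_w.items():
        radj.setdefault(c, []).append((v, w))
    rho, pq = {t: 0.0}, [(0.0, t)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > rho.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in radj.get(u, []):
            if d + w < rho.get(v, float("inf")):
                rho[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return rho
```

Consistency with respect to the estimates follows directly from the shortest-path triangle inequality, as the example states.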
215
+
216
+ We prove [\[thm:pdim_inc\]](#thm:pdim_inc){reference-type="ref+Label" reference="thm:pdim_inc"} by using the general analysis framework by @Balcan2021-jv. To begin with, we introduce Assouad's dual class, which provides a formal definition of the class of functions of ${\bm{\rho}}$.
217
+
218
+ ::: definition
219
+ []{#def:dual label="def:dual"} Given a class, $\Hcal\subseteq \R^\Ycal$, of functions $h:\Ycal\to\R$, the *dual class* of $\Hcal$ is defined as $\Hcal^* = \Set*{h^*_y:\Hcal \to \R}{y\in\Ycal}$ such that $h^*_y(h) = h(y)$ for each $y \in \Ycal$.
220
+ :::
221
+
222
+ In our case, we have $\Ycal = \Pi$ and $\Hcal = \hat\Ucal$, and each $\Delta^*_x\in \hat\Ucal^*$ is associated with an instance $x\in \Pi$. The following definition will be used to capture the piecewise structure of the dual class $\hat\Ucal^*$.
223
+
224
+ ::: definition
225
+ []{#def:piecewise_decomposable label="def:piecewise_decomposable"} A class, $\Hcal\subseteq \R^\Ycal$, of functions is *$(\Fcal, \Bcal, K)$-piecewise decomposable* for a class $\Bcal\subseteq\set*{0,1}^\Ycal$ of boundary functions and a class $\Fcal\subseteq \R^{\Ycal}$ of piece functions if the following condition holds: for every $h\in\Hcal$, there exist $K$ boundary functions $b^{(1)},\dots,b^{(K)} \in\Bcal$ and a piece function $f_\bb$ for each binary vector $\bb\in\set*{0,1}^K$ such that for all $y \in\Ycal$, it holds that $h(y) = f_{\bb_y}(y)$ where $\bb_y = (b^{(1)}(y), \dots, b^{(K)}(y)) \in \set*{0,1}^K$.
226
+ :::
227
+
228
+ The following result of [@Balcan2021-jv] provides an upper bound on the pseudo-dimension of a class of functions via the piecewise structure of the dual class.
229
+
230
+ ::: proposition
231
+ []{#prop:balcan_pdim_bound label="prop:balcan_pdim_bound"} Let $\Ucal\subseteq \R^\Pi$ be a class of functions. If $\Ucal^* \subseteq \R^\Ucal$ is $(\Fcal, \Bcal, K)$-piecewise decomposable with a class $\Bcal\subseteq \set*{0,1}^\Ucal$ of boundary functions and a class $\Fcal \subseteq \R^\Ucal$ of piece functions, the pseudo-dimension of $\Ucal$ is bounded as follows: $$\mathrm{Pdim}\prn{\Ucal} = \Ord\prn*{ \prn*{\mathrm{Pdim}(\Fcal^*) + \mathrm{VCdim}(\Bcal^*)} \lg \prn*{\mathrm{Pdim}(\Fcal^*) + \mathrm{VCdim}(\Bcal^*)} + \mathrm{VCdim}(\Bcal^*)\lg K}.$$
232
+ :::
233
+
234
+ Now, we are ready to prove [\[thm:pdim_inc\]](#thm:pdim_inc){reference-type="ref+Label" reference="thm:pdim_inc"}.
235
+
236
+ ::: proof
237
+ *Proof of [\[thm:pdim_inc\]](#thm:pdim_inc){reference-type="ref+Label" reference="thm:pdim_inc"}.* Since there is a one-to-one correspondence between $\Delta_{\bm{\rho}}\in \hat\Ucal$ and ${\bm{\rho}}\in \R^n$, we below identify $\Delta_{\bm{\rho}}$ with ${\bm{\rho}}$ for simplicity. Let $\Bcal = \Set*{\mathbb{I}\prn*{\iprod{\zb, {\bm{\rho}}} + z_0}}{(z_0, \zb) \in \R^{n+1}} \subseteq \set*{0, 1}^{\hat\Ucal}$ and $\Fcal = \Set*{\iprod{\zb, {\bm{\rho}}} + z_0}{(z_0, \zb) \in \R^{n+1}} \subseteq \R^{\hat\Ucal}$ be classes of boundary and piece functions, respectively. We show that $\hat\Ucal^*$ is $(\Fcal, \Bcal, \Ord(n^2))$-piecewise decomposable.
238
+
239
+ Fix any $\Delta^*_x\in \hat\Ucal^*$; this also uniquely specifies an instance $x\in \Pi$ and an optimal solution $S^*(x) \subseteq E$ (due to the tie-breaking). We consider $K= |E| = \Ord(n^2)$ boundary functions of form $b^{(v, c)}({\bm{\rho}}) = \mathbb{I}\prn*{\rho_v - \rho_{c} - w_{(v, c)} > 0}$ for all edges $(v, c) \in E$. We below confirm that these boundary functions partition $\R^n \ni {\bm{\rho}}$ into some regions so that in each region, $\Delta^*_x({\bm{\rho}})$ can be written as a linear function of ${\bm{\rho}}$, which belongs to $\Fcal$. For each binary vector $\bb_{{\bm{\rho}}} = \prn*{b^{(v, c)}({\bm{\rho}})}_{(v, c)\in E} \in \set*{0, 1}^K$, we define a subset $S_{{\bm{\rho}}}(x)$ of $S^*(x)$ as $S_{{\bm{\rho}}}(x) = \Set*{(v, c) \in S^*(x)}{b^{(v, c)}({\bm{\rho}}) = 1, v \neq s}$; that is, each $(v, c) \in S_{{\bm{\rho}}}(x)$ satisfies $v \neq s$ and $\rho_v - \rho_{c} - w_{(v, c)} > 0$. From the definition of $\Delta_{\bm{\rho}}(x)$, we have $\Delta^*_x({\bm{\rho}}) = \Delta_{\bm{\rho}}(x) = \sum_{(v, c) \in S_{{\bm{\rho}}}(x)}(\rho_v - \rho_{c} - w_{(v, c)})$. This is a linear function of ${\bm{\rho}}$, and thus we can choose a piece function $f_{\bb_{{\bm{\rho}}}} \in \Fcal$ such that $\Delta^*_x= f_{\bb_{{\bm{\rho}}}}$. This relation holds for every $\bb_{\bm{\rho}}\in \set*{0,1}^K$, and thus we have $\Delta^*_x({\bm{\rho}}) = f_{\bb_{{\bm{\rho}}}}({\bm{\rho}})$ for all ${\bm{\rho}}\in \R^n$. Hence $\hat\Ucal^*$ is $(\Fcal, \Bcal, \Ord(n^2))$-piecewise decomposable.
240
+
241
+ Since $\Fcal^*$ and $\Bcal^*$ can be seen as classes of linear and halfspace functions of $(z_0, \zb) \in \R^{n+1}$, respectively, we have $\mathrm{Pdim}(\Fcal^*) = \mathrm{VCdim}(\Bcal^*) = n+1$ (see also [@Balcan2019-rr Lemma 3.10], a preprint version of [@Balcan2021-jv]). Therefore, from [\[prop:balcan_pdim_bound\]](#prop:balcan_pdim_bound){reference-type="ref+Label" reference="prop:balcan_pdim_bound"}, we obtain $\mathrm{Pdim}(\hat\Ucal) = \Ord(n \lg n)$. ◻
242
+ :::
2205.11028/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2205.11028/paper_text/intro_method.md ADDED
@@ -0,0 +1,17 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Motion estimation is a fundamental building block for numerous applications such as robotics, augmented reality and autonomous driving. The low-level motion cues can serve other higher-level tasks such as object detection and action recognition. Given a pair or a sequence of images, we can estimate 2D flow fields from optical flow estimation by either classic variational methods or modern deep learning methods [\[25,](#page-9-2) [51,](#page-9-3) [52\]](#page-10-1).
4
+
5
+ Different from the scene flow methods that extend 2D optical flow to stereoscopic or RGB-D image sequences [\[21,](#page-8-0) [24\]](#page-8-1), increasing attention has been paid to the direct 3D flow
6
+
7
+ ![](_page_0_Figure_8.jpeg)
8
+
9
+ Figure 1. Visualization of the results on the KITTI scene flow dataset. With the increasing number of alternate optimizations, the source point cloud (in green) is gradually aligned with the target point cloud (in blue).
10
+
11
+ estimation on point clouds, which has several advantages over image-based methods for a variety of applications. One of the most prominent benefits for autonomous driving is that it avoids image sensor readout and the additional computation of depth from images, enabling low-latency 3D flow estimation for a high-speed vehicle. For augmented reality, especially AR glasses, 3D flow estimation on point clouds allows the computation to be offloaded to a cloud server, since point clouds require far less transmission bandwidth than images. It also protects the privacy of surrounding people by not capturing images.
12
+
13
+ Therefore, some learning-based methods [\[2,](#page-8-2)[18,](#page-8-3)[27,](#page-9-4)[30,](#page-9-5)[61\]](#page-10-2) utilize the recent advances made for high-level tasks and customize scene flow estimation specifically for point clouds. These methods predict 3D flow vectors from cost volumes, which measure similarity costs between 3D points from the two point cloud sets. However, unlike the cost volumes in 2D optical flow, which search a fixed regular neighborhood around a pixel in consecutive images [\[25,](#page-9-2) [51,](#page-9-3) [52\]](#page-10-1), such a search window cannot be defined on point clouds because of their irregular structure. Therefore, previous works such as [\[18,](#page-8-3) [27,](#page-9-4) [30,](#page-9-5) [64\]](#page-10-3) designed complicated layers to measure point-to-patch or patch-to-patch costs.
14
+
15
+ <span id="page-1-0"></span>In this paper, we avoid this irregularity with a simple and effective method. We decompose the problem into two interlaced stages: the 3D flows are first optimized point-wise, and then globally regularized by a recurrent network in the second stage. The recurrent network therefore receives only regular point-wise information as input. Besides scene flow estimation, our method also enables another important motion estimation task: registering two point clouds with different 6-DOF poses. Since we only measure point-to-point costs, we avoid discretizing the 6-DOF solution space, which is difficult because the rotation and translation vectors are variables with different scales and ranges.
16
+
17
+ To evaluate the proposed method, we conduct experiments on both 3D scene flow estimation and point cloud registration. For 3D scene flow estimation, we compare on the widely used FlyingThings3D [\[32\]](#page-9-0) and KITTI [\[33\]](#page-9-1) benchmarks. For point cloud registration, we follow previous works and generate partially overlapping data pairs with large pose differences from ModelNet40 [\[65\]](#page-10-0). We achieve state-of-the-art results on both tasks, which demonstrates the superiority of the proposed zero-order method on irregular point cloud data.
2205.12006/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2022-04-01T14:58:24.339Z" agent="5.0 (Windows)" etag="Q_EjuA1AcX56pJrwnn1J" version="17.4.0" type="google"><diagram id="ShsJROpuopXBC5jiDDhl" name="Page-1">7Vxbc9o4FP41zHQfYGz5Bo/hlnbbJu0ynWb3JWNsAZr4wtoikP76lWwZbEkEd7GN08ID2EeybJ3z6ZxPRzIdbeTvbiN7vfocutDrAMXddbRxB4CBYpJvKnhJBX1TTwXLCLmpSD0IZugHZEKFSTfIhXGhIg5DD6N1UeiEQQAdXJDZURRui9UWoVe869peQkEwc2xPlH5HLl6xXhjKQf4eouUqu7OqsBLfziqzJuKV7YbbVJTU0SYdbRSFIU6P/N0IelR3mV7ShqZHSvcPFsEAl7ng863h7m6Vh686KTSW31cPw49d1sqz7W1YhzuAGEjfpT/syfFLpo4o3AQupC2qHW24XSEMZ2vboaVbYn8iW2HfY8UL5Hmj0Auj5FptOp3qI43IxSfPHgNGGO5yItaTWxj6EEcvpAorNftMzQxWQDfS8+3BSBqrssrZx2LVbAaL5b7lg+bIAVOeXJF377d3zr1nDmd/LgI48++DH9+kijQ9ctehi57pDT20DJIC898NNfnQIb2H0eGcHC3TX6r5+FHNTJA2Q54qaSmrdI5hsodhj1CrpVS1aCmNwbRgKaNniLYy67KVVbmt7n5NW+l9yahq1lYDia04dRLHuqaHyE98eV55tOuIOPMbpkQcrnPST/Ycel/CGGEU0tJ5iHHokwoeLRjaztMyMVVO24vkQ6okN7uJ12nMUaihspMF2lHjDtnzjFcY02B1QxUBpo4bqD1EwtUCERBEPYfckcDFxjb5ofKY/K5gFCbH3Q0ipxRZUzfEcTd79K4K+r11sKzC6IrVy9xiZnfJGNUl3jSTVe9NRXc6IwPAjlBIpBN/Dl0X0d4rdxBvw+jplTGm/HSgGhuT/livaEhxgUrTxCGlAolutdoiFfhVdWsoImwb1q122l3BwL2hxJT6dM+OY+QUFVbULtwh/MBK6PHfVE48cHo23uWqjV/YyVHVxuEmcmCJcQfdAisWLZDTsCFRcCaLoGdj9Fzk0jKtszt8CRF55L2BtQE3eCzO4aQdYlfluS/fkMY1ZGjFhrAdLSEWGkpAsO/2GbjQ24ALAofo5SF/kruKnh4uS84qwBPrZqre1yqa7QIeT1r7em+Q+/SrgaGhgGZhaEhgmGO+eVIL65yAcEFhNCIsdlpNUBCGOgC9cnNDszY2Y0rUnur4roZ5dp3a1PvWxbUpm779Dr60rCttVwzfzyXOjeGqxcdwrqG6nWf/98SdVRZ3oFW4Uw0u1vYr4o4m79mO4I4AwX7JVVvTCnEtyLwmSepOkgDOi108Q5Ld/2rz2myutc7mrcguHIlA9fEeo1VxpTI+oyuDnjFQlIGl06wgsHQObXpPyRVzKauayQ5occJCrQ9q7cpCNAY1w+gNlMFAAYZlmroFzGaxJstKnIe1qknySfILWoYcjvzqfEwqjZzBiYbqxoYsdXIsY/Um81U6t4ihqyLLaDS/AmTzXEHlhB3irr1cRnBpJywxt1z0k/q/7CKSofMsT5YvbHYZCZSY0LV5GSmDUFu8YVVx1LywN9RKTPpajYuWRcmqcCH4kKZxcWL3WXGzUscaQn/escZvMl7yO/8uHi81ICj/L/jpW6ee/RPT6Yh8qlGlxe9NkYc+2QjkR0p12mzxxFPpgUrS7K/txD0508jg1hIfaggr5f/Xh5pcQ1rDPrT6WWiFwLPqW9/Rrsi7MPJkc9w3xOp+VVwMhAXnfrO4KDURZyTOGC0i2yHUTqXEzhre0R9jFG98yvdQAgYqmWRlQPn6btcBI2rhR/RHY1xwYo6V8aQaAiNYCMh2p0gJTG10UBfpoKC/
N7cwp59cmIPPyOuylbkYRl1E31ZYUEyAKX3la7r2NvF+ca7WjI424HazS2YIZpOQMEpw2nPmBFVOr1RuTqyaovJkuTBQm/JK8LLzJlQVvj/DK0+7tPJKUIu2JGIF5Q0urbwSGw3bMpXnlbfXysWUJ5KXD8F6gwUNkg5ijsvCGP2w50mFJExtMIl4yYvB0hfi+JDpI9elFw/ZjityG2PYMcacBYIwoJViHIVPkBMWzVqFfYzTnkG2l6Q++5RYYjgH3FXSPAHcxoXBncXEArgJGEnriPAimExi1gTHpM9sdeyK+iNkYr+ttxnLiZlylqUlJFYJQjr9I/zVpe3QNc3PH76Qb5/9q8E7x6bCOTWwHc0RjuwIebTDhBOvPbj742roo7Fba9S/meI0bJ+OV3CYWREmrzty9r6a8AiD2Ju1GROK2x3vN/jKIV4Lg7JURxUGsqKPiv80/edRc+DHp6dZ//79o/Q/L66Zjp/LdAjml4CkPDHqN0eMpIgQne5NYYvQOx/SCJokHTd+egCx03vDcfM8A+rCfy/IbLh/0f1MI5LTw/8NpWnqw582aZP/AA==</diagram></mxfile>
2205.12006/main_diagram/main_diagram.pdf ADDED
Binary file (29.6 kB). View file
 
2205.12006/paper_text/intro_method.md ADDED
@@ -0,0 +1,125 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Mathematical programming consists of a gamut of tools to solve optimization problems. Under perfect information, i.e., when all the data is deterministic and known, many of these problems can be solved as a linear program (LP) or mixed-integer linear program (MIP). However, in many cases, there is a need to deal with problems with partial information. Stochastic programming is one such framework that allows us to incorporate uncertainty into decision-making.
4
+
5
+ In this work, we focus our attention on Two-stage Stochastic Programs (2SPs). A 2SP involves two sets of decisions, namely the first-stage and second-stage (recourse) decisions, to be made before and after the uncertainty is realized, respectively. Given the (joint) probability distribution of the random parameters of the problem, the most common objective of 2SP is to optimize the expected value of the decisions. For example, in a two-stage stochastic facility location problem, first-stage decisions consist of which facilities should be built whereas second-stage decisions involve assigning customers to open facilities to meet their stochastic demand, and the overall objective is to minimize the sum of the cost of the first-stage decisions and the expected cost of the second-stage decisions.
6
+
7
+ 2SPs are usually solved via Sample Average Approximation (SAA), which limits the future uncertainty to a finite set of possible realizations (scenarios). The SAA approximation of a 2SP is a
8
+
9
+ <sup>∗</sup>These authors contributed equally.
10
+
11
+ <sup>†</sup>Corresponding author: [khalil@mie.utoronto.ca](mailto:khalil@mie.utoronto.ca).
12
+
13
+ ![](_page_1_Figure_0.jpeg)
14
+
15
+ Figure 1: Overview of Neur2SP. The leftmost block is the input, namely, a 2SP. From the 2SP, we follow the data generation procedure from Section 4.3 to obtain a dataset consisting of tuples of (first-stage decision, scenario set, corresponding expected second-stage objective value). We then train one of the learning models presented in Section 4.1 to predict the expected cost given a first-stage decision and scenario set. The trained model is then embedded into a MIP using the procedure in Section 4.2 to obtain an approximate MIP (the "MIPify" step). Lastly, the approximate MIP is solved with an off-the-shelf MIP solver to obtain a first-stage 2SP solution.
16
+
17
+ reduction to an equivalent deterministic problem and can be solved by the so-called *extensive form*: a monolithic formulation where scenario copies of the second-stage decision variables are created and linked to the first-stage decisions. However, even for small 2SPs, solving the extensive form may be intractable as it requires introducing a large number of (possibly integer) variables and (possibly nonlinear) constraints. As such, specialized algorithms are required. If the second-stage problem assumes the form of an LP, then algorithms such as Benders' decomposition (also known as the L-shaped method) can be leveraged to efficiently solve the problem to optimality. Unfortunately, in many practical applications of 2SP, the second-stage problem assumes the form of a MIP, for which specialized decomposition algorithms might not be efficient. The existence of continuous first-stage variables linked to the second-stage problem significantly increases the difficulty of solving such problems. This is exacerbated when the second-stage problem is nonlinear, for which no general and structure-agnostic solution strategy exists.
18
+
19
+ In this work, we propose Neur2SP, a framework for constructing an easier-to-solve surrogate optimization problem for 2SP with the use of supervised deep learning. In a nutshell, a Rectified Linear Unit (ReLU) neural network is trained to approximate the second-stage objective value for a set of scenarios. Using MIP-representable activation functions such as the ReLU, the forward computation of the trained network can be embedded into a MIP. The surrogate problem is then confined to optimizing *only* first-stage decisions with respect to the first-stage objective function and the neural network approximation of the second-stage objective.<sup>3</sup> Assuming a small and accurate neural network can be used, the surrogate problem is much smaller than the extensive form, and thus faster to solve. The entire procedure is summarized in Figure 1. Our main contributions are as follows:
20
+
21
+ - 1. Novelty: Neur2SP is the first generic machine learning approach for deriving a heuristic solution for 2SP. We introduce a highly parallelizable data collection procedure and show two separate neural models which can be used to formulate a deterministic mixed-integer surrogate problem for 2SP;
22
+ - 2. Generality: Neur2SP can be used out-of-the-box for 2SPs with linear and nonlinear objectives and constraints as well as mixed-integer variables in both the first and second stages, all without using any problem structure, i.e., in a purely data-driven way;
23
+ - 3. Performance: Neur2SP is shown to produce high-quality solutions significantly faster than the solely applicable general baseline method, the extensive form approach, for a variety of benchmark problems, namely, stochastic facility location problem, an investment problem, a server location problem, and a pooling problem from chemical engineering.
24
+
25
+ <sup>3</sup> For a fixed first-stage solution obtained via this surrogate, an optimal second-stage decision can be obtained relatively quickly for each scenario if desired.
26
+
27
+ # Method
28
+
29
+ We introduce the 2SP setting and describe the MIP formulation for a ReLU activation function which is central to the surrogate model we propose in this work. Appendix A summarizes the notation used.
30
+
31
+ A 2SP can be generally expressed as $\min_{\mathbf{x}} \{ \mathbf{c}^{\mathsf{T}} \mathbf{x} + \mathbb{E}_{\boldsymbol{\xi}}[Q(\mathbf{x}, \boldsymbol{\xi})] : \mathbf{x} \in \mathcal{X} \}$ , where $\mathbf{c} \in \mathbb{R}^n$ is the first-stage cost vector, $\mathbf{x} \in \mathbb{R}^n$ represents the first-stage decisions, $\mathcal{X}$ is the first-stage feasible set, and $\boldsymbol{\xi}$ is the vector of random parameters that follow a probability distribution $\mathbb{P}$ with support $\Xi$ . The value function $Q: \mathcal{X} \times \Xi \to \mathbb{R}$ returns the cost of optimal second-stage (recourse) decisions under realization $\boldsymbol{\xi}$ given the first-stage decisions of $\mathbf{x}$ . In many cases, as the $Q(\mathbf{x}, \boldsymbol{\xi})$ is obtained by solving a mathematical program, evaluating the expected value function $\mathbb{E}_{\boldsymbol{\xi}}[Q(\mathbf{x}, \boldsymbol{\xi})]$ is intractable.
32
+
33
+ To provide a more tractable formulation, the extensive form (EF) is used. Using a set of K scenarios, $\boldsymbol{\xi}_1,\ldots,\boldsymbol{\xi}_K$ , sampled from the probability distribution $\mathbb{P}$ , $\mathrm{EF}(\boldsymbol{\xi}_1,\ldots,\boldsymbol{\xi}_K) \equiv \min_{\mathbf{x}}\{\mathbf{c}^\intercal\mathbf{x} + \sum_{k=1}^K p_k Q(\mathbf{x},\boldsymbol{\xi}_k) : \mathbf{x} \in \mathcal{X}\}$ , where $p_k$ is the probability of scenario $\boldsymbol{\xi}_k$ being realized. If $Q(\mathbf{x},\boldsymbol{\xi}) = \min_{\mathbf{y}}\{F(\mathbf{y},\boldsymbol{\xi}) : \mathbf{y} \in \mathcal{Y}(\mathbf{x},\boldsymbol{\xi})\}$ , then $\mathrm{EF}(\boldsymbol{\xi}_1,\ldots,\boldsymbol{\xi}_K)$ can be expressed as
34
+
35
+ $$\min_{\mathbf{x}, \mathbf{y}} \left\{ \mathbf{c}^{\mathsf{T}} \mathbf{x} + \sum_{k=1}^{K} p_k F(\mathbf{y}_k, \boldsymbol{\xi}_k) : \mathbf{x} \in \mathcal{X}, \mathbf{y}_k \in \mathcal{Y}(\mathbf{x}, \boldsymbol{\xi}_k) \, \forall k = 1, \dots, K \right\},$$
36
+
37
+ which can be solved through standard deterministic optimization techniques. However, the number of variables and constraints of the EF grows linearly with the number of scenarios. Furthermore, if $Q(\cdot,\cdot)$ is the optimal value of a MIP or a nonlinear program (NLP), the EF model becomes significantly more challenging to solve as compared to the LP case, limiting its applicability even at small scale.
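The linear growth in the number of scenarios is easy to make concrete with a toy accounting function (our sketch; first-stage constraints are omitted for simplicity and the counts are illustrative only):

```python
def ef_size(n_first, n_second, m_second, K):
    """Rough size of the extensive form: K scenario copies of the
    second-stage variables and constraints are added on top of the
    first-stage variables."""
    n_vars = n_first + K * n_second
    n_cons = K * m_second
    return n_vars, n_cons
```

Even a modest second stage with 50 variables and 30 constraints yields tens of thousands of variables once $K$ reaches the scenario counts typical of SAA.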
38
+
39
+ Mathematically, an $\ell$ -layer fully-connected neural network can be expressed as: $\mathbf{h}^1 = \sigma(W^0 \boldsymbol{\alpha} + \mathbf{b}^0)$ ; $\mathbf{h}^{m+1} = \sigma(W^m \mathbf{h}^m + \mathbf{b}^m)$ , $m = 1, \dots, \ell - 1$ ; $\beta = W^\ell \mathbf{h}^\ell + \mathbf{b}^\ell$ . Here, $\boldsymbol{\alpha} \in \mathbb{R}^{d_0}$ is the input, $\beta \in \mathbb{R}$ is the prediction, $\mathbf{h}^i \in \mathbb{R}^{d_i}$ is the $i$ -th hidden layer, $W^i \in \mathbb{R}^{d_{i+1} \times d_i}$ is the matrix of weights from layer $i$ to $i+1$ , $\mathbf{b}^i \in \mathbb{R}^{d_{i+1}}$ is the corresponding bias, and $\sigma$ is a non-linear activation function; here, $\sigma$ is given by $\mathrm{ReLU}(a) = \max\{0,a\}$ for $a \in \mathbb{R}$ .
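The forward computation above can be written in a few lines of plain Python (a sketch with list-based weights; each $W^m$ is stored row-wise so that row $j$ holds the weights into unit $j$ of the next layer):

```python
def forward(alpha, layers):
    """Forward pass of an l-layer fully-connected network.
    layers: list of (W, b) pairs, W given as a list of rows.
    ReLU is applied after every layer except the last (linear) one."""
    h = alpha
    for m, (W, b) in enumerate(layers):
        pre = [sum(w * x for w, x in zip(row, h)) + bj
               for row, bj in zip(W, b)]
        h = pre if m == len(layers) - 1 else [max(0.0, p) for p in pre]
    return h
```

It is exactly this composition of affine maps and ReLUs that the MIP embedding below reproduces constraint by constraint.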
40
+
41
+ Central to Neur2SP is the embedding of a trained neural network into a MIP. Here, we present the formulation proposed by [Fischetti and Jo, 2018]. For a given hidden layer m, the j-th hidden unit, $h_j^m$ , can be written as
42
+
43
+ $$h_j^m = \text{ReLU}\left(\sum_{i=1}^{d_{m-1}} w_{ij}^{m-1} h_i^{m-1} + b_j^{m-1}\right),\tag{1}$$
44
+
45
+ where $w_{ij}^m$ is the element at the j-th row and i-th column of $W^{m-1}$ and $b_j^{m-1}$ is the j-th index of $\mathbf{b}^{m-1}$ . To model ReLU in a MIP for the j-th unit in the m-th layer, we use the variables $\hat{h}_j^m$ , $\check{h}_j^m$ and $\hat{h}_i^{m-1}$ for $i=1,\ldots,d_{m-1}$ . The ReLU activation is then modeled with the following constraints:
46
+
47
+ $$\sum_{i=1}^{d_{m-1}} w_{ij}^{m-1} \hat{h}_i^{m-1} + b_j^{m-1} = \hat{h}_j^m - \check{h}_j^m,$$
48
+ (2a)
49
+
50
+ $$z_j^m = 1 \Rightarrow \hat{h}_j^m \le 0, \tag{2b}$$
51
+
52
+ $$z_j^m = 0 \Rightarrow \check{h}_j^m \le 0, \tag{2c}$$
53
+
54
+ $$\hat{h}_j^m, \check{h}_j^m \ge 0, \tag{2d}$$
55
+
56
+ $$z_j^m \in \{0, 1\},$$
57
+ (2e)
58
+
59
+ where the logical constraints in Equations (2b) and (2c) are translated into big-M constraints by MIP solvers. To verify the correctness of this formulation, observe that constraints (2b) and (2c), together with the binary variable $z_j^m$ , ensure that at most one of $\hat{h}_j^m$ and $\check{h}_j^m$ is non-zero.
60
+
61
+ Furthermore, since both $\hat{h}_j^m$ and $\check{h}_j^m$ are non-negative, if $\sum_{i=1}^{d_{m-1}} w_{ij}^{m-1} \hat{h}_i^{m-1} + b_j^{m-1} > 0$ , then it follows that $\hat{h}_j^m > 0$ and $\check{h}_j^m = 0$ . If negative, then $\hat{h}_j^m = 0$ and $\check{h}_j^m > 0$ . Thus, we have that if the left-hand side of (2a) is positive, $\hat{h}_j^m$ will be positive; if it is negative, then $\hat{h}_j^m = 0$ ; this is an exact representation of the ReLU function.
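This case analysis can be checked numerically: for any pre-activation value, the split into two non-negative parts with a binary selector reproduces ReLU exactly. The sketch below is a small sanity check of the encoding, not solver code.

```python
def relu_split(pre):
    """Decompose a pre-activation value as in Eq. (2): pre = h_hat - h_check,
    h_hat, h_check >= 0, with a binary z forcing one side to zero.
    h_hat equals ReLU(pre)."""
    h_hat, h_check = max(pre, 0.0), max(-pre, 0.0)
    z = 1 if pre < 0 else 0  # z = 1 zeroes h_hat; z = 0 zeroes h_check
    return h_hat, h_check, z
```

In the MIP itself, the solver chooses $z_j^m$ rather than computing it from a known `pre`, but the feasible assignments are exactly those this function produces.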
62
+
63
+ In this section, we present two neural architectures, the corresponding surrogate problems that approximate a given 2SP, and a data collection strategy. Figure 1 summarizes the Neur2SP framework.
64
+
65
+ We propose two distinct neural architectures for predicting the second-stage costs: NN-E approximates the expected value of the second-stage cost of *a set of scenarios*, whereas NN-P approximates the *per-scenario* value of the second-stage cost for *a single scenario*.
66
+
67
+ **NN-E** (Figure 2) learns a mapping from $(\mathbf{x}, \{\boldsymbol{\xi}_k\}_{k=1}^K) \to \sum_{k=1}^K p_k Q(\mathbf{x}, \boldsymbol{\xi}_k)$ . In words, the model takes in a first-stage solution $\mathbf{x}$ and any finite set of scenarios sampled from $\Xi$ , and outputs a prediction of the expected second-stage objective value. We embed the scenario set $\{\boldsymbol{\xi}_k\}_{k=1}^K$ into a latent space by passing each scenario, independently, through the same neural network $\Psi^1$ , then performing mean-aggregation over the resulting K embeddings. The aggregated embedding is passed through another network, $\Psi^2$ , to obtain the final embedding of the scenario set, $\xi_{\lambda}$ . This embedding, representing the scenario set à-la-DeepSets [Zaheer et al., 2017], is appended to the input first-stage decision and passed through a ReLU feed-forward network $\Phi^E$ to predict the expected second-stage value. Hence, the final output is such that $\Phi^E(\mathbf{x}, \Psi^2(\oplus_{k=1}^K \Psi^1(p_k, \boldsymbol{\xi}_k))) \approx \sum_{k=1}^K p_k Q(\mathbf{x}, \boldsymbol{\xi}_k)$ . Note that the embedding networks, $\Psi^1$ and $\Psi^2$ , can be arbitrarily complex as only the latent representation is embedded into the approximate MIP. Also, although $\Psi^1$ is trained using K scenarios, once the networks are trained, they can be used with any (potentially much larger) finite number of scenarios.
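Schematically, the NN-E data flow looks as follows, with $\Psi^1$, $\Psi^2$, and $\Phi^E$ abstracted as caller-supplied plain functions (this is our sketch of the wiring, not the trained architecture):

```python
def nn_e_forward(x, probs, scenarios, psi1, psi2, phi_e):
    """NN-E data flow: embed each (p_k, xi_k) with psi1, mean-aggregate the
    K embeddings, re-embed with psi2, then predict from the concatenation
    [x, scenario-set embedding] with phi_e. Because the aggregation is a
    mean, any finite number of scenarios K is accepted."""
    embs = [psi1([p] + list(xi)) for p, xi in zip(probs, scenarios)]
    dim = len(embs[0])
    mean = [sum(e[i] for e in embs) / len(embs) for i in range(dim)]
    return phi_e(list(x) + psi2(mean))
```

The mean-aggregation is what makes the prediction invariant to scenario ordering and independent of $K$, the key DeepSets property exploited here.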
68
+
69
+ **NN-P** learns a mapping $\Phi^P$ from $(\mathbf{x}, \boldsymbol{\xi}) \to Q(\mathbf{x}, \boldsymbol{\xi})$ for $\boldsymbol{\xi}$ sampled from $\Xi$ . Once the mapping $\Phi^P$ is learned, we can approximate the expected second-stage objective value for any finite set of scenarios as $\sum_{k=1}^K p_k Q(\mathbf{x}, \boldsymbol{\xi}_k) \approx \sum_{k=1}^K p_k \Phi^P(\mathbf{x}, \boldsymbol{\xi}_k)$ . $\Phi^P$ is a feed-forward neural network with input given by the concatenation of $\mathbf{x}$ and $\boldsymbol{\xi}$ .
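The NN-P surrogate expectation is then just a probability-weighted sum of per-scenario predictions, with the trained network again abstracted as a caller-supplied function (our sketch):

```python
def nn_p_expected(x, probs, scenarios, phi_p):
    """Approximate E[Q(x, xi)] as sum_k p_k * Phi^P(x, xi_k), i.e., one
    network evaluation per scenario."""
    return sum(p * phi_p(x, xi) for p, xi in zip(probs, scenarios))
```

Unlike NN-E, the network here is evaluated once per scenario, which is why the surrogate MIP below needs $\Lambda = K$ network copies in the NN-P case.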
70
+
71
+ We now describe the surrogate MIP for both the NN-E and NN-P learning models from the preceding section. Let $\Lambda$ represent the number of predictions made by the neural network. For the NN-E case,
72
+
73
+ $\Lambda = 1$ as we only predict the expected second-stage value for a set of scenarios. In the NN-P case, $\Lambda = K$ as we predict the second-stage value for each scenario. In this section, we use [M] to denote $\{1,\ldots,M\}$ for $M\in\mathbb{Z}_+$ .
74
+
75
+ Let $\hat{h}_j^{m,\lambda}$ represent the ReLU output for the $j$ -th hidden unit in the $m$ -th hidden layer for output $\lambda$ , for all $m \in [\ell-1], j \in [d_m]$ , and $\lambda \in [\Lambda]$ . Let $\check{h}_j^{m,\lambda}$ be a slack variable used to model the ReLU output for the $j$ -th hidden unit in the $m$ -th hidden layer for output $\lambda$ , for all $m \in [\ell-1], j \in [d_m]$ , and $\lambda \in [\Lambda]$ . Let $z_j^{m,\lambda}$ be a binary variable used to ensure that at most one of $\hat{h}_j^{m,\lambda}$ and $\check{h}_j^{m,\lambda}$ is non-zero, defined for all $m \in [\ell - 1], j \in [d_m]$ , and $\lambda \in [\Lambda]$ . Let $\beta_{\lambda}$ be the $\lambda$ -th prediction of the neural network, for all $\lambda \in [\Lambda]$ .
76
+
77
+ With the above variables we can define an approximation to EF as given in Equation (3). The objective function (3a) minimizes the sum of the cost of the first-stage decisions and the approximate cost of the second-stage value. Constraints (3b)-(3d) propagate a first-stage solution x to the output of the neural network for each scenario. Constraints (3e)-(3h) ensure the prediction of the neural network is respected. Constraint (3i) ensures the feasibility of the first-stage solution.
+
+ In this approximation, we introduce a number of additional variables and big-M constraints. Specifically, for a neural network with $H$ hidden units, we introduce $\Lambda \cdot H$ additional binary variables $z_j^{m,\lambda}$ . In addition, we introduce $2 \cdot \Lambda \cdot H$ continuous variables for $\hat{h}_j^{m,\lambda}$ and $\check{h}_j^{m,\lambda}$ . Lastly, we require an additional $\Lambda$ variables for the outputs of the network. Although the number of variables we introduce in this approximation is quite large, we hypothesize that the resulting MIP will be easier to solve than the extensive form, in particular when the second-stage problem is nonlinear. In the remainder of the paper, we refer to the surrogate MIP given in (3) as MIP-NN.
+
+ $$\min \quad \mathbf{c}^{\mathsf{T}} \mathbf{x} + \sum_{\lambda=1}^{\Lambda} p_{\lambda} \beta_{\lambda} \tag{3a}$$
+
+ s.t.
+ $$\sum_{i=1}^{d_0} w_{ij}^0 [\mathbf{x}, \boldsymbol{\xi}_{\lambda}]_i + b_j^0 = \hat{h}_j^{1,\lambda} - \check{h}_j^{1,\lambda} \qquad \forall j \in [d_1], \lambda \in [\Lambda], \quad (3b)$$
+
+ $$\sum_{i=1}^{d_{m-1}} w_{ij}^{m-1} \hat{h}_i^{m-1,\lambda} + b_j^{m-1} = \hat{h}_j^{m,\lambda} - \check{h}_j^{m,\lambda} \qquad \forall m \in [\ell-1], j \in [d_m], \lambda \in [\Lambda], \quad (3c)$$
+
+ $$\sum_{i=1}^{d_{\ell-1}} w_{i}^{\ell-1} \hat{h}_{i}^{\ell-1,\lambda} + b^{\ell-1} \le \beta_{\lambda} \qquad \forall \ \lambda \in [\Lambda], \quad (3d)$$
+
+ $$z_j^{m,\lambda} = 1 \Rightarrow \hat{h}_j^{m,\lambda} = 0 \qquad \forall m \in [\ell-1], j \in [d_m], \lambda \in [\Lambda], \quad (3e)$$
+
+ $$z_j^{m,\lambda} = 0 \Rightarrow \check{h}_j^{m,\lambda} = 0 \qquad \forall \ m \in [\ell-1], j \in [d_m], \lambda \in [\Lambda], \quad (3f)$$
+
+ $$z_j^{m,\lambda} \in \{0,1\} \qquad \qquad \forall \ m \in [\ell-1], j \in [d_m], \lambda \in [\Lambda], \quad (3\mathrm{g})$$
+
+ $$\hat{h}_{j}^{m,\lambda}, \check{h}_{j}^{m,\lambda} \ge 0 \qquad \qquad \forall \ m \in [\ell-1], j \in [d_{m}], \lambda \in [\Lambda], \quad \text{(3h)}$$
+
+ $$\mathbf{x} \in \mathcal{X} \tag{3i}$$
+
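The ReLU encoding used in constraints (3b)-(3h) can be checked numerically without a MIP solver: splitting each pre-activation into nonnegative parts $\hat{h}$ and $\check{h}$, with a binary indicator forcing one of them to zero, reproduces an ordinary forward pass. The sketch below verifies this for a one-hidden-layer network with hypothetical dimensions and random weights (not the solver-side formulation itself).

```python
import numpy as np

rng = np.random.default_rng(1)

# One hidden layer with d1 units and a scalar output; all weights are invented.
d0, d1 = 5, 6
W0, b0 = rng.normal(size=(d1, d0)), rng.normal(size=d1)
W1, b1 = rng.normal(size=(1, d1)), rng.normal(size=1)

v = rng.normal(size=d0)  # plays the role of the input [x, xi_lambda]

# Constraint (3b): the pre-activation is split into nonnegative parts.
pre = W0 @ v + b0
h_hat = np.maximum(pre, 0.0)     # ReLU output   (forced to 0 when z = 1, cf. (3e))
h_check = np.maximum(-pre, 0.0)  # slack variable (forced to 0 when z = 0, cf. (3f))
z = (pre < 0).astype(int)        # binary indicator, cf. (3g)

# The split satisfies (3b) and the complementarity enforced by (3e)-(3f).
assert np.allclose(h_hat - h_check, pre)
assert np.allclose(h_hat * h_check, 0.0)
assert np.all(h_hat[z == 1] == 0)

# Constraint (3d): the prediction beta_lambda is recovered from h_hat ...
beta = float(W1 @ h_hat + b1)

# ... and matches an ordinary ReLU forward pass.
direct = float(W1 @ np.maximum(W0 @ v + b0, 0.0) + b1)
assert abs(beta - direct) < 1e-9
```

In the actual MIP, the implications (3e)-(3f) would be handed to the solver as indicator or big-M constraints; here they are simply evaluated.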
+ A diverse dataset of input-output pairs is needed to train Neur2SP's supervised second-stage value approximation. To generate such a dataset for a given 2SP problem, we adopt an iterative procedure. We begin by generating a random feasible first-stage decision. For the NN-E case, we sample a set of scenarios with random cardinality $K'$ from the uncertainty distribution. Here, $K'$ should be chosen to balance the trade-off, within a given time budget, between the time spent computing second-stage values for a single first-stage solution and the number of first-stage decisions that can be labeled. Specifically, if $K'$ is large, then on average more time is spent estimating the expected value over a large number of scenarios, while for a small $K'$ the first-stage decision space is explored more, since expected-value estimates are obtained faster. For a given input, i.e., a first-stage decision and a set of scenarios, we then compute a label by calculating the expected second-stage value $\sum_{k'=1}^{K'} p_{k'} Q(\cdot, \boldsymbol{\xi}_{k'})$ .
+
+ | Problem | First stage | Second Stage | Objective | Constraints | Objective Sense |
+ |---------|-------------|--------------|-----------|-------------|-----------------|
+ | CFLP    | Binary      | Binary       | Linear    | Linear      | Minimization    |
+ | INVP    | Continuous  | Binary       | Linear    | Linear      | Minimization    |
+ | SSLP    | Binary      | Binary       | Linear    | Linear      | Minimization    |
+ | PP      | Binary      | Continuous   | Bilinear  | Bilinear    | Maximization    |
+
+ Table 1: Problem class characteristics.
+
+ For the NN-P case, at each iteration of the data generation procedure, we sample a single scenario from the uncertainty distribution. For a given input consisting of a first-stage decision and a scenario, we generate a label by calculating its second-stage value $Q(\cdot, \cdot)$ . Lastly, the input-output pair is added to the dataset.
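A minimal sketch of the NN-P data generation loop follows. The second-stage solver is replaced by a toy closed-form stand-in; in practice, each label requires solving an optimization problem with the first-stage decision and scenario fixed, and the sampling routines are problem-specific.

```python
import numpy as np

rng = np.random.default_rng(2)

def solve_second_stage(x, xi):
    """Stand-in for the second-stage value Q(x, xi); in practice this solves
    a (possibly mixed-integer) optimization problem with x and xi fixed."""
    return float(np.maximum(xi - x, 0.0).sum())  # toy recourse cost

def sample_first_stage(n_x):
    """Stand-in for sampling a random feasible first-stage decision."""
    return rng.random(n_x)

n_x = 4
dataset = []
for _ in range(100):                  # NN-P: one scenario per sample
    x = sample_first_stage(n_x)
    xi = rng.normal(size=n_x)         # one draw from the uncertainty distribution
    label = solve_second_stage(x, xi) # the label is Q(x, xi)
    dataset.append((np.concatenate([x, xi]), label))

print(len(dataset))
```

For the NN-E case, one would instead draw $K'$ scenarios per first-stage decision and label the sample with the probability-weighted sum of their second-stage values.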
+
+ This data generation procedure is fully parallelizable over the second-stage problems to be solved.
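Since each second-stage solve is independent, the labeling step can be farmed out to a worker pool. The sketch below uses a thread pool and the same toy stand-in for $Q$ as an illustration; for CPU-bound solver calls, a process pool would be the more typical choice.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(3)

def solve_second_stage(x, xi):
    """Stand-in for one independent second-stage solve."""
    return float(np.maximum(xi - x, 0.0).sum())

n_x = 4
inputs = [(rng.random(n_x), rng.normal(size=n_x)) for _ in range(64)]

# Each (x, xi) pair is labeled independently, so the solves parallelize trivially.
with ThreadPoolExecutor(max_workers=8) as pool:
    labels = list(pool.map(lambda pair: solve_second_stage(*pair), inputs))

assert len(labels) == len(inputs)
```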
+
+ The NN-E and NN-P architectures exhibit trade-offs in terms of the learning task and the resulting surrogate optimization problem.
+
+ **Training.** During data collection, both models require solving second-stage problems with a fixed first-stage solution to obtain labels. A sample for NN-P requires solving only a single optimization problem, whereas a sample for NN-E requires solving at most $K'$ second-stage problems. As this process is offline and highly parallelizable, the trade-off is easy to mitigate. As for training, NN-E operates on subsets of scenarios, which makes for an exponentially larger input space. Despite this, our experiments show that the NN-E model converges quite well during training, and in many cases the embedded model outperforms the NN-P model.
+
+ **Surrogate Optimization Problem.** As the ultimate goal is embedding the trained model into a MIP, the trade-off here becomes quite important. Specifically, for $K$ scenarios, the NN-P surrogate has $K$ times more binary and continuous variables than the NN-E surrogate. For problems with a large number of scenarios, this makes the NN-E model much more appealing: the resulting MIP is smaller and likely faster to solve. Furthermore, it allows for much larger networks, given that only a single copy of the network is embedded.
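To make the size comparison concrete, the counts introduced earlier ($\Lambda \cdot H$ binaries and $2 \cdot \Lambda \cdot H + \Lambda$ continuous variables) can be tabulated; the values of $H$ and $K$ below are hypothetical.

```python
def surrogate_size(H, Lambda):
    """Variable counts for the MIP-NN surrogate with H hidden units and
    Lambda network outputs (Lambda = 1 for NN-E, Lambda = K for NN-P)."""
    return {"binary": Lambda * H,                  # z_j^{m,lambda}
            "continuous": 2 * Lambda * H + Lambda} # h_hat, h_check, and beta_lambda

H, K = 128, 500  # hypothetical network width and scenario count
print(surrogate_size(H, Lambda=1))  # NN-E: {'binary': 128, 'continuous': 257}
print(surrogate_size(H, Lambda=K))  # NN-P: {'binary': 64000, 'continuous': 128500}
```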
2205.14794/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
+ <mxfile host="app.diagrams.net" modified="2022-05-18T06:47:56.105Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.93 Safari/537.36" etag="O1SVFm-3mRB4P2t7dKAA" version="18.0.6" type="device"><diagram id="aJ0njdGhQDEY5yaIV6Dt" name="Page-1">7V1bk6I4FP41PjYVCOHy2G1fZmpnqma3a25PU7SgMoPiInbr/voNEoSEoMEWiKI1VUNCCJDvO+ck55zQAzicrZ8iZzH9HLpeMNCAux7A+4GGfwDg/5KaTVpjAZRWTCLfTavUvOLZ/88jleS6ycp3vSXVMA7DIPYXdOUonM+9UUzVOVEUvtHNxmFA33XhTMgdQV7xPHICr9Tsu+/GU/IWqND6g+dPptmd1eyFZ07WmHSxnDpu+Fa4F3wYwGEUhnF6NFsPvSAZvGxc0o4eK87uHizy5rHIBb/Dmeuqmjb8471+/Tb8+/Xx8ceNZqTdvDrBirwxedp4kw3BJApXi/LdyAO8elHsrXlYOC+Bx74u5okXzrw42uB25KobnQwZociOM2/5gGPepHXT4mBnlQ4BebLrOx8HfECGos6wlAbBczEtSDGM4mk4CedO8JDX3uFRmrte0ivApbzNpzBc4EoVV/724nhDOO6s4hBXTeNZQM56az/+UTj+mXSlIFK6X5Oet4UNKYzDeUw61JJyGaa9sC/DVTTy9jQkiCZvX4C4DGXkBU7sv9JCw4OFXPol9PED7iigqjQFVB3RXcRONPFichUD7u4xjsc743JRCowgJgNMMcH4dxVmJ26W26G/xQ3woK/zk/hokv6vJ//QMNEGWKkMzLvHgXlPqskt8BOnd8muYakXBFjbJRR7m/qx97xwtpC9YYVLE8hZLlIVOPbXCRHvxn4QDMMgjLYdwTtogUc8cneTyHF9zJDCOfPh1ng4gkEl8S9zY8NgXBBrkyPVZlNCDXsB8nAIgGE0AmSVsHYNrH4F9iTAarIBi87FDGMkos2PYqFwVVLML9uW2jLfROWl1vOwBJ3OzL8Ldt6ctCF5fupOni0wHgPAt8b6vW2DZqwxq8S7km2NZ46ZsR6totetKG9lce7eJgs6XBwFznLpj+jxZmVpGUfhn92iTSsDMLZG3mi0a1k482IhHYGCLgAKBCalD2wd7tcIuPDFi3w8WF5E6k4t3YagdENB6S5QAXGokNW9d64PaSvDMix97dJUv9yPxlgrjemoYs2ASeRsCs0WSYPlnudlrKJFLbPxQdrhSdcjGm9KI51w7IyeYpgqbfiAqh8wfduSLBIiusxtSUIQ4jGuvoRAS9GBgYCh2tA0dd1iBEZXVAsiVUVAM7FBgo3Ij0GLj9GG+KDWxSe3FKZp05ZiZzkqXDjvEDxK6A4JnEQWSXS+2ZpFUjSkaoZmIEs3gUmLn2Erpm6Z0IS6ijRbY1xTotKIDMU2Qf6DrPVSbKtwWmvGmDHSmLmrGxVHXTssju+TwKQFI9DHLchqCsRBohtSEZ2ZecEjmWzS3QjOu45g0y/n64fgaW1+m39cfBkO7xeu9evG5JDp8rw9NXy1nAiMIIGldPFxQbeuoDcOunTuP40n6szYNx2cNDRW2ZUHBWqcUWkwOGmVGSinW/RY9ybB/eB0U7MFzeuZhyez97xw9Vc7wpHphzOKQYIrku9DUoIZCh/ZfuQQNI+sdNMQyFmtymlv2w5DitrpTOsdjlScPN/ofcgLZMd174bvJEZliiIKBBFty9WnKrpl734m7RI3DAUAu/Cj+7+GpjCgAouy7mWiu9CUqGAIr11aEgwVC0ayvDV1PIQqymAmzNJ1xTKRheUH2iZQM8V4jUiJSg1vBdes1Fx2RErY/kgW2jV0BcFSlCdjIlR0uxADYrq/RpoGB
9zEvLhvrzPHjnITm3wJ6SBzjAuyqrWH8gUHAypQliXio/YjXb8jlCXwovFR70cuf7eoS+dhyzYA1vKwUdPljgJa5SyU0yWetLX1DiCaDaW0kIoJYaknCxzoqGL+esTkb6V/XQXR8Nvy018PT0O4/A30mLuJj2FV09HiXapmJ1tZuaPSD4W6JyyxlytnEW/ivkGLS51LxrVLW8h/A97qhh3eA84hOviUO4Iy4/azcKZOJGln4ISN2D6QJHHJQMQnwBFO/31ZxGh/FvHpMkj4Qy6w26omq3IXO6rDkWPYWI9XxXDkPpMoCf9UvUID1SagTjOOVVFNU0xgz9KRFFMvXglpJyNBxW6FtrUQd3Xf4lz0Ityz1cSV1zvbj1TF0zlw6mAsjZeuH1mM3YAswYKTD3o/Ehw7BV2+1aiAU02SrXvH+HPLoHHWEGewsY9dQ+ymk3Wnj9A+zhl8skXEidxaW+2y0zPAdWLnJm0XjaiLp3GcfEnyNnmTVMUslUkYTgLPWfhLZRTOcPVoiZs8jp2ZHyTjcv95oN09O/MldY+xM6KfK2kG2GaZ+krf6SWr+Mdz3IKqe2EbHlJ/WA/FtNTRyTrzcO4xuo5UOYE/mSeii0UgycK5S7Saj9XwLTkx8103qFKs9IKtUpzE/XGM10TnONANjhzBxhTgiT5GdG6E/B5hsK+M5KjE7ilplgZb0v0ICUEeiyRN2UemBJmJTRmvoYTzKGE9quI9SpmPm2bcx4fbjhHhv8hehpYCq7pxwJQKb7eHHdtk3uZrhoBNB1ZtgW8EQ5Ujhc0FVtV+eDpqR+CsCsylcGdwHznbnXFF8lgkJfBZ8JHth8+ieWSlc0yc10cQ5JsE7eXF4e93tZWcxuRTtZxRJvKN0nPPxcjk6FDQPJvuSOLxYuOl7A7Li8jaEPkM6PlnbYgyMINYEgZeSt6GyMcyZcrbsKRiwcUlbuy19wWOKIpSookcPr93Tn5qTIrZzPn2HIPcCO31u/mnSK+x9mLefRi+H4nfTYfhK1CWJtmiRVnuH8oSuKv4qF+/bds86vK5snoaYr7mPFQE+DqPMHM/vNsDRl6THvJdwYZknOR8Qemcl6C1EblhtxZzzNaJAMHF/M8Spy6E/I87w4f/AQ==</diagram></mxfile>
2205.14794/main_diagram/main_diagram.pdf ADDED
Binary file (43.1 kB).