Eric03 committed
Commit 6292ced · verified · Parent: d8e5922

Add files using upload-large-folder tool

This view is limited to 50 files because the commit contains too many changes; see the raw diff for the full list.

Files changed (50)
  1. 2002.03734/main_diagram/main_diagram.drawio +0 -0
  2. 2002.03734/paper_text/intro_method.md +118 -0
  3. 2102.12092/main_diagram/main_diagram.drawio +1 -0
  4. 2102.12092/main_diagram/main_diagram.pdf +0 -0
  5. 2102.12092/paper_text/intro_method.md +105 -0
  6. 2103.10584/main_diagram/main_diagram.drawio +0 -0
  7. 2103.10584/main_diagram/main_diagram.pdf +0 -0
  8. 2103.10584/paper_text/intro_method.md +72 -0
  9. 2104.02847/main_diagram/main_diagram.drawio +0 -0
  10. 2104.02847/paper_text/intro_method.md +109 -0
  11. 2104.08677/main_diagram/main_diagram.drawio +1 -0
  12. 2104.08677/main_diagram/main_diagram.pdf +0 -0
  13. 2104.08677/paper_text/intro_method.md +158 -0
  14. 2104.14557/main_diagram/main_diagram.drawio +0 -0
  15. 2104.14557/paper_text/intro_method.md +90 -0
  16. 2106.02990/main_diagram/main_diagram.drawio +0 -0
  17. 2106.02990/paper_text/intro_method.md +62 -0
  18. 2106.15339/main_diagram/main_diagram.drawio +1 -0
  19. 2106.15339/main_diagram/main_diagram.pdf +0 -0
  20. 2106.15339/paper_text/intro_method.md +79 -0
  21. 2109.10259/main_diagram/main_diagram.drawio +1 -0
  22. 2109.10259/main_diagram/main_diagram.pdf +0 -0
  23. 2109.10259/paper_text/intro_method.md +136 -0
  24. 2110.06149/main_diagram/main_diagram.drawio +1 -0
  25. 2110.06149/main_diagram/main_diagram.pdf +0 -0
  26. 2110.06149/paper_text/intro_method.md +142 -0
  27. 2110.09419/main_diagram/main_diagram.drawio +1 -0
  28. 2110.09419/main_diagram/main_diagram.pdf +0 -0
  29. 2110.09419/paper_text/intro_method.md +71 -0
  30. 2110.14880/main_diagram/main_diagram.drawio +1 -0
  31. 2110.14880/main_diagram/main_diagram.pdf +0 -0
  32. 2110.14880/paper_text/intro_method.md +30 -0
  33. 2112.00061/main_diagram/main_diagram.drawio +0 -0
  34. 2112.00061/paper_text/intro_method.md +19 -0
  35. 2201.00248/main_diagram/main_diagram.drawio +1 -0
  36. 2201.00248/main_diagram/main_diagram.pdf +0 -0
  37. 2201.00248/paper_text/intro_method.md +147 -0
  38. 2201.12122/main_diagram/main_diagram.drawio +0 -0
  39. 2201.12122/paper_text/intro_method.md +54 -0
  40. 2205.00303/main_diagram/main_diagram.drawio +0 -0
  41. 2205.00303/paper_text/intro_method.md +33 -0
  42. 2205.13001/main_diagram/main_diagram.drawio +0 -0
  43. 2205.13001/paper_text/intro_method.md +74 -0
  44. 2206.05099/main_diagram/main_diagram.drawio +1 -0
  45. 2206.05099/main_diagram/main_diagram.pdf +0 -0
  46. 2206.05099/paper_text/intro_method.md +68 -0
  47. 2206.08564/main_diagram/main_diagram.drawio +1 -0
  48. 2206.08564/main_diagram/main_diagram.pdf +0 -0
  49. 2206.08564/paper_text/intro_method.md +55 -0
  50. 2208.10660/main_diagram/main_diagram.drawio +0 -0
2002.03734/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2002.03734/paper_text/intro_method.md ADDED
@@ -0,0 +1,118 @@
1
+ # Introduction
2
+
3
+ Automating visual inspection on production lines with artificial intelligence has gained popularity and interest in recent years. Indeed, the analysis of images to segment potential manufacturing defects seems well suited to computer vision algorithms. However, these solutions remain data-hungry and require knowledge transfer from human to machine via image annotations. Furthermore, classification into a limited number of user-predefined categories such as non-defective, greasy, scratched, and so on will not generalize well if a previously unseen defect appears. This is even more critical on production lines, where a defective product is a rare occurrence. For visual inspection, a better-suited task is unsupervised anomaly detection, in which the segmentation of the defect must be done only via prior knowledge of non-defective samples, constraining the issue to a two-class segmentation problem.
4
+
5
+ From a statistical point of view, an anomaly may be seen as a distribution outlier, or an observation that deviates so much from other observations as to arouse suspicion that it was generated by a different mechanism (Hawkins, 1980). In this setting, generative models such as Variational AutoEncoders (VAE, Kingma & Welling (2014)) are especially interesting because they are capable of inferring possible sampling mechanisms for a given dataset. The original autoencoder (AE) jointly learns an encoder model, which compresses input samples into a low-dimensional space, and a decoder, which decompresses the low-dimensional samples into the original input space, by minimizing the distance between the input of the encoder and the output of the decoder. The more recent variant, the VAE, replaces the deterministic encoder and decoder by stochastic functions, enabling the modeling of the distribution of the dataset samples as well as the generation of new, unseen samples. In both models, the decompressed output for a given input is often called the reconstruction, and is used as some sort of projection of the input on the support of the normal data distribution, which we will call the normal manifold. In most unsupervised anomaly detection methods based on VAEs, models are trained on flawless data, and defect detection and localization is then performed using a distance metric between the input sample and its reconstruction (Bergmann et al., 2018; 2019; An & Cho, 2015; Baur et al., 2018; Matsubara et al., 2018).
+
+ <sup>*</sup>Equal contributions.
+
10
+
11
+ One fundamental issue in this approach is that the models learn on the normal manifold, hence there is no guarantee of the generalization of their behavior outside this manifold. This is problematic since it is precisely outside the dataset distribution that such methods intend to use the VAE for anomaly localization. Even in the case of a model that always generates credible samples from the dataset distribution, there is no way to ensure that the reconstruction will be connected to the input sample in any useful way. An example illustrating this limitation is given in figure 1, where a VAE trained on regular grid images provides a globally poor reconstruction despite a local perturbation, making the anomaly localization challenging.
12
+
13
+ In this paper, instead of using the VAE reconstruction, we propose to find a better projection of an input sample on the normal manifold, by optimizing an energy function defined by an autoencoder architecture. Starting at the input sample, we iterate gradient descent steps on the input to converge to an optimum that is simultaneously located on the data manifold and closest to the starting input. This method allows us to add prior knowledge about the expected anomalies via regularization terms, which is not possible with the raw VAE reconstruction. We show that such an optimum is better than previously proposed autoencoder reconstructions for localizing anomalies on a variety of unsupervised anomaly localization datasets (Bergmann et al., 2019), and present its inpainting capabilities on the CelebA dataset (Liu et al., 2015). We also propose a variant of the standard gradient descent that uses the pixel-wise reconstruction error to speed up the convergence of the energy.
14
+
15
+ <span id="page-1-0"></span>![](_page_1_Picture_4.jpeg)
+
+ (a) Training sample, (b) Anomalous sample, (c) VAE reconstruction, (d) Gradient-based projection.
+
+ Figure 1: Even though an anomaly is a local perturbation in the image (b), the whole VAE-reconstructed image can be disturbed (c). Our gradient descent-based method gives better quality reconstructions (d).
34
+
35
+ In unsupervised anomaly detection, the only data available during training are samples $\mathbf{x}$ from a non-anomalous dataset $\mathbb{X} \subset \mathbb{R}^d$. In a generative setting, we suppose the existence of a probability function of density $q$, having its support on all of $\mathbb{R}^d$, from which the dataset was sampled. The generative objective is then to model an estimate of the density $q$, from which we can obtain new samples close to the dataset. Popular generative architectures are Generative Adversarial Networks (GAN, Goodfellow et al. (2014)), which concurrently train a generator $G$ to generate samples from random, low-dimensional noise $\mathbf{z} \sim p, \ \mathbf{z} \in \mathbb{R}^l, \ l \ll d$, and a discriminator $D$ to classify generated samples and dataset samples. This model converges to the equilibrium, over both real and generated datasets, of the expected binary cross-entropy loss of the classifier: $\min_G \max_D \left[ \mathbb{E}_{\mathbf{x} \sim q} \left[ \log(D(\mathbf{x})) \right] + \mathbb{E}_{\mathbf{z} \sim p} \left[ \log(1 - D(G(\mathbf{z}))) \right] \right]$.
36
+
37
+ Disadvantages of GANs are that they are notoriously difficult to train (Goodfellow, 2017) and that they suffer from mode collapse, meaning that they have a tendency to generate only a subset of the original dataset. This can be problematic for anomaly detection, in which we do not want some subset of the normal data to be considered anomalous (Bergmann et al., 2019). Recent works such as Thanh-Tung et al. (2019) offer simple and attractive explanations for GAN behavior and propose substantial upgrades; however, Ravuri & Vinyals (2019) still support the point that GANs have more trouble than other generative models covering the whole distribution support.
38
+
39
+ Another generative model is the VAE (Kingma & Welling (2014)), where, similar to a GAN generator, a decoder model tries to approximate the dataset distribution with a simple latent-variable prior $p(\mathbf{z})$, with $\mathbf{z} \in \mathbb{R}^l$, and conditional distributions output by the decoder $p(\mathbf{x}|\mathbf{z})$. This leads to the estimate $p(\mathbf{x}) = \int p(\mathbf{x}|\mathbf{z})p(\mathbf{z})\,d\mathbf{z}$, which we would like to optimize using maximum likelihood estimation on the dataset. To render the learning tractable with a stochastic gradient descent (SGD) estimator of reasonable variance, we use importance sampling, introducing density functions $q(\mathbf{z}|\mathbf{x})$ output by an encoder network, and Jensen's inequality to get the variational lower bound:
40
+
41
+ $$\log p(\mathbf{x}) = \log \mathbb{E}_{\mathbf{z} \sim q(\mathbf{z}|\mathbf{x})} \frac{p(\mathbf{x}|\mathbf{z})p(\mathbf{z})}{q(\mathbf{z}|\mathbf{x})} \geq \mathbb{E}_{\mathbf{z} \sim q(\mathbf{z}|\mathbf{x})} \log p(\mathbf{x}|\mathbf{z}) - D_{\mathrm{KL}}(q(\mathbf{z}|\mathbf{x})\,\|\,p(\mathbf{z})) = -\mathcal{L}(\mathbf{x}) \tag{1}$$
45
+
46
+ We will use $\mathcal{L}(\mathbf{x})$ as our loss function for training. We define the VAE reconstruction, per analogy with an autoencoder reconstruction, as the deterministic sample $f_{VAE}(\mathbf{x})$ that we obtain by encoding $\mathbf{x}$ , decoding the mean of the encoded distribution $q(\mathbf{z}|\mathbf{x})$ , and taking again the mean of the decoded distribution $p(\mathbf{x}|\mathbf{z})$ .
47
+
48
+ VAEs are known to produce blurry reconstructions and generations, but Dai & Wipf (2019) show that a huge enhancement in image quality can be gained by learning the variance of the decoded distribution $p(\mathbf{x}|\mathbf{z})$. This comes at the cost of the distribution of latent variables produced by the encoder, $q(\mathbf{z})$, drifting farther away from the prior $p(\mathbf{z})$, so that samples generated by sampling $\mathbf{z} \sim p(\mathbf{z})$, $\mathbf{x} \sim p(\mathbf{x}|\mathbf{z})$ have poorer quality. The authors show that using a second VAE learned on samples from $q(\mathbf{z})$, and sampling from it with ancestral sampling $\mathbf{u} \sim p(\mathbf{u})$, $\mathbf{z} \sim p(\mathbf{z}|\mathbf{u})$, $\mathbf{x} \sim p(\mathbf{x}|\mathbf{z})$, makes it possible to recover samples of GAN-like quality. The original autoencoder can be roughly considered as a VAE whose encoded and decoded distributions have infinitely small variances.
49
+
50
+ We will consider an anomaly to be a sample with low probability under our estimation of the dataset distribution. The VAE loss, being a lower bound on the density, is a good proxy for classifying samples into the anomalous and non-anomalous categories. To this effect, a threshold $T$ can be defined on the loss function, separating anomalous samples with $\mathcal{L}(\mathbf{x}) \geq T$ from normal samples with $\mathcal{L}(\mathbf{x}) < T$. However, according to Matsubara et al. (2018), the regularization term $\mathcal{L}_{KL}(\mathbf{x}) = D_{\mathrm{KL}}(q(\mathbf{z}|\mathbf{x})||p(\mathbf{z}))$ has a negative influence on the computation of anomaly scores. They propose instead an unregularized score $\mathcal{L}_{r}(\mathbf{x}) = -\mathbb{E}_{\mathbf{z} \sim q(\mathbf{z}|\mathbf{x})} \log p(\mathbf{x}|\mathbf{z})$, which is equivalent to the reconstruction term of a standard autoencoder, and claim better anomaly detection.
51
+
52
+ Going from anomaly detection to anomaly localization, this reconstruction term becomes crucial to most existing solutions. Indeed, the inability of the model to reconstruct a given *part* of an image is used as a way to segment the anomaly, using a pixel-wise threshold on the reconstruction error. In practice, this segmentation is very often given by a pixel-wise (An & Cho, 2015; Baur et al., 2018; Matsubara et al., 2018) or patch-wise comparison between the input image and some generated image, as in Bergmann et al. (2018; 2019), where the structural dissimilarity (DSSIM, Wang et al. (2004)) between the input and its VAE reconstruction is used.
53
+
54
+ Autoencoder-based methods thus provide a straightforward way of generating an image conditioned on the input image. In the GAN original framework, though, images are generated from random noise $\mathbf{z} \sim p(\mathbf{z})$ and are not conditioned by an input. Schlegl et al. (2017) propose with AnoGAN to get the closest generated image to the input using gradient descent on $\mathbf{z}$ for an energy defined by:
55
+
56
+ <span id="page-2-0"></span>
57
+ $$E_{AnoGAN} = ||\mathbf{x} - G(\mathbf{z})||_1 + \lambda \cdot ||f_D(\mathbf{x}) - f_D(G(\mathbf{z}))||_1 \tag{2}$$
59
+
60
+ The first term ensures that the generation $G(\mathbf{z})$ is close to the input $\mathbf{x}$ . The second term is based on a distance between features of the input and the generated images, where $f_D(\mathbf{x})$ is the output of an intermediate layer of the discriminator. This term ensures that the generated image stays in the vicinity of the original dataset distribution.
61
+
62
+ <span id="page-3-1"></span>![](_page_3_Figure_1.jpeg)
63
+
64
+ Figure 2: Illustration of our method. We perform gradient descent on $E(\mathbf{x}_t)$ to iteratively correct $\mathbf{x}_t$ .
65
+
66
+ # Method
67
+
68
+ According to Zimmerer et al. (2018), the loss gradient with respect to $\mathbf{x}$ gives the direction towards normal data samples, and its magnitude could indicate how abnormal a sample is. In their work on anomaly identification, they use the loss gradient as an anomaly score.
69
+
70
+ Here we propose to use the gradient of the loss to iteratively improve the observed $\mathbf{x}$. We link this method to the computation of adversarial samples in Szegedy et al. (2014).
71
+
72
+ After training a VAE on non-anomalous data, we can define a threshold T on the reconstruction loss $\mathcal{L}_r$ as in (Matsubara et al., 2018), such that a small proportion of the most improbable samples are identified as anomalies. We obtain a binary classifier defined by
73
+
74
+ $$A(\mathbf{x}) = \begin{cases} 1 & \text{if } \mathcal{L}_r(\mathbf{x}) \ge T \\ 0 & \text{otherwise} \end{cases} \tag{3}$$
76
+
77
+ Our method consists in computing adversarial samples of this classifier (Szegedy et al., 2014), that is to say, starting from a sample $\mathbf{x}_0$ with $A(\mathbf{x}_0) = 1$ , iterate gradient descent steps over the input $\mathbf{x}$ , constructing samples $\mathbf{x}_1, \dots \mathbf{x}_N$ , to minimize the energy $E(\mathbf{x})$ , defined as
78
+
79
+ $$E(\mathbf{x}_t) = \mathcal{L}_r(\mathbf{x}_t) + \lambda \cdot ||\mathbf{x}_t - \mathbf{x}_0||_1 \tag{4}$$
81
+
82
+ An iteration is done by calculating $\mathbf{x}_{t+1}$ as
83
+
84
+ <span id="page-3-0"></span>
85
+ $$\mathbf{x}_{t+1} = \mathbf{x}_t - \alpha \cdot \nabla_{\mathbf{x}} E(\mathbf{x}_t), \tag{5}$$
86
+
87
+ where $\alpha$ is a learning-rate parameter, and $\lambda$ is a parameter trading off the inclusion of $\mathbf{x}_t$ in the normal manifold, given by $\mathcal{L}_r(\mathbf{x}_t)$, and the proximity between $\mathbf{x}_t$ and the input $\mathbf{x}_0$, ensured by the regularization term $||\mathbf{x}_t - \mathbf{x}_0||_1$.
88
+
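+ For concreteness, a minimal PyTorch-style sketch of this projection follows; it is ours, not the paper's code, and it assumes a trained VAE whose reconstruction loss $\mathcal{L}_r$ is available as a differentiable callable `rec_loss` (the function name, step count, and default hyperparameters are illustrative assumptions):
+
+ ```python
+ import torch
+
+ def project_to_manifold(x0, rec_loss, n_steps=100, alpha=1e-2, lam=0.05):
+     """Gradient-descent projection of x0 onto the normal manifold (eqs. 4-5).
+
+     rec_loss(x) should return the scalar VAE reconstruction loss L_r(x).
+     """
+     x = x0.clone().detach().requires_grad_(True)
+     opt = torch.optim.Adam([x], lr=alpha)  # Adam helps, as noted later in the text
+     for _ in range(n_steps):
+         opt.zero_grad()
+         energy = rec_loss(x) + lam * (x - x0).abs().sum()  # eq. (4)
+         energy.backward()                                  # gradient w.r.t. the input
+         opt.step()                                         # eq. (5)
+     return x.detach()
+ ```
+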
89
+ We model the anomalous images that we encounter as normal images in which a region or several regions of pixels are altered but the rest of the pixels are left untouched. To recover the best segmentation of the anomalous pixels from an anomalous image $\mathbf{x}_a$ , we want to recover the closest image from the normal manifold $\mathbf{x}_g$ . The term *closest* has to be understood in the sense that the smallest number of pixels are modified between $\mathbf{x}_a$ and $\mathbf{x}_g$ . In our model, we therefore would like to use the $L^0$ distance as a regularization distance of the energy. Since the $L^0$ distance is not differentiable, we use the $L^1$ distance as an approximation.
90
+
91
+ While in our method the optimization is done in the input space, in the previously mentioned AnoGAN the search for the optimal reconstruction is done by iterating over $\mathbf{z}$ samples with the energy defined in equation [2](#page-2-0). Following the aforementioned analogy between a GAN generator $G$ and a VAE decoder $Dec$, a similar approach in the context of a VAE would be to use the energy
94
+
95
+ $$||\mathbf{x} - Dec(\mathbf{z})||_1 - \lambda \cdot \log p(\mathbf{z}) \tag{6}$$
97
+
98
+ where the $-\log p(\mathbf{z})$ term has the same role as AnoGAN's $||f_D(\mathbf{x}) - f_D(G(\mathbf{z}))||_1$ term, to ensure that $Dec(\mathbf{z})$ stays within the learned manifold. We chose not to iterate over $\mathbf{z}$ in the latent space for two reasons. First, because as noted in [Dai & Wipf](#page-8-7) [(2019)](#page-8-7) and [Hoffman & Johnson](#page-9-9) [(2016)](#page-9-9), the prior $p(\mathbf{z})$ is not always a good proxy for the real image of the distribution in the latent space $q(\mathbf{z})$. Second, because the VAE tends to ignore some details of the original image in its reconstruction, considering that these details are part of the independent pixel noise allowed by the modeling of $p(\mathbf{x}|\mathbf{z})$ as a diagonal Gaussian, which causes its infamous blurriness. An optimization in latent space would have to recreate the high-frequency structure of the image, whereas iterating over the input image space, and starting the descent at the input image $\mathbf{x}_0$, allows us to keep that structure and thus to obtain projections of higher quality.
99
+
100
+ We observed that using the Adam optimizer [\(Kingma & Ba,](#page-9-10) [2015\)](#page-9-10) is beneficial for the quality of the optimization. Moreover, to speed up the convergence and further preserve the aforementioned high frequency structure of the input, we propose to compute our iterative samples using the pixel-wise reconstruction error of the VAE. To explain the intuition behind this improvement, we will consider the inpainting task. In this setting, as in anomaly localization, a local perturbation is added on top of a normal image. However, in the classic inpainting task, the localization of the perturbation is known beforehand, and we can use the localization mask Ω to only change the value of the anomalous pixels in the gradient descent:
101
+
102
+ <span id="page-4-0"></span>
103
+ $$\mathbf{x}_{t+1} = \mathbf{x}_t - \alpha \cdot (\nabla_{\mathbf{x}} E(\mathbf{x}_t) \odot \mathbf{\Omega}) \tag{7}$$
105
+
106
+ where $\odot$ is the Hadamard product.
+
+ For anomaly localization and blind inpainting, where this information is not available, we compute the pixel-wise reconstruction error, which gives a rough estimate of the mask. The term $\nabla_{\mathbf{x}} E(\mathbf{x}_t)$ is therefore replaced with $\nabla_{\mathbf{x}} E(\mathbf{x}_t) \odot (\mathbf{x}_t - f_{VAE}(\mathbf{x}_t))^2$ in equation [5](#page-3-0):
111
+
112
+ <span id="page-4-2"></span>
113
+ $$\mathbf{x}_{t+1} = \mathbf{x}_t - \alpha \cdot (\nabla_{\mathbf{x}} E(\mathbf{x}_t) \odot (\mathbf{x}_t - f_{VAE}(\mathbf{x}_t))^2) \tag{8}$$
115
+
116
+ where $f_{VAE}(\mathbf{x})$ is the standard reconstruction of the VAE. Optimizing the energy this way, a pixel where the reconstruction error is high will update faster, whereas a pixel with good reconstruction will not change easily. This prevents the image from updating pixels where the reconstruction is already good, even with a high learning rate. As can be seen in appendix [B,](#page-11-0) this method converges to the same performance as the method of equation [5,](#page-3-0) but with fewer iterations. An illustration of our method can be found in figure [2.](#page-3-1)
117
+
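+ As a hedged sketch (our code, not the authors'), a single update of equation (8) could look as follows, assuming `f_vae` returns the deterministic reconstruction $f_{VAE}(\mathbf{x})$ and `energy` computes equation (4):
+
+ ```python
+ import torch
+
+ def weighted_step(x, x0, energy, f_vae, alpha=1e-2):
+     """One descent step of eq. (8): the gradient is modulated by the squared
+     pixel-wise reconstruction error, a soft stand-in for the mask of eq. (7)."""
+     grad, = torch.autograd.grad(energy(x, x0), x)
+     with torch.no_grad():
+         weight = (x - f_vae(x)) ** 2   # high error -> pixel updates faster
+         x_next = x - alpha * grad * weight
+     return x_next.requires_grad_(True)
+ ```
+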
118
+ A standard stop criterion based on the convergence of the energy can efficiently be used. Using the adversarial setting introduced in section [3.1,](#page-3-2) we also propose to stop the gradient descent when a certain predefined threshold on the VAE loss is reached. For example, such a threshold can be chosen to be a quantile of the empirical loss distribution computed on the training set.
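+
+ As an illustration of the second criterion, a sketch under assumed names (`loss_fn` computes the VAE loss and `train_losses` holds its values on the training set):
+
+ ```python
+ import numpy as np
+
+ # Threshold T as a quantile of the empirical loss distribution on normal data.
+ T = np.quantile(train_losses, 0.95)  # the quantile level is an assumption
+
+ def should_stop(x_t, loss_fn):
+     """Stop the descent once x_t is classified as normal, i.e. A(x_t) = 0."""
+     return loss_fn(x_t) < T
+ ```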
2102.12092/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2021-02-02T22:35:33.918Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.96 Safari/537.36" etag="-zMQAqE-jAKZkP__aizN" version="14.2.9" type="device"><diagram id="xSj_qT1uDAoFkBVDPaJj" name="Page-1">7Z1dd5tGEIZ/jS7tI3b5vIydpD2nbU5P3XOS9A6LtUSLhYpxbefXdxFf0gyxkQSaHUW5aC0QK3h2ZvW+C5qdyOv755+ycLX4LY1UMhHT6Hki30+EcAKp/1tseCk3SN8rN8yzOCo3We2Gm/ibqjZOq62PcaQett6Yp2mSx6vtjbN0uVSzfGtbmGXp0/bb7tJk+1NX4VyhDTezMMFbP8dRvqiuwpm2239W8XyRgx234eyfeZY+LquPmwh5t/5X7r4P66aq9z8swih92tgkP0zkdZamefnX/fO1SgqyNbXyuI/f2ducdqaWeZ8D/vBWeTD7GkTep2+zv67+/iV693Jhl638FyaPqr6M9cnmLzWf9SWqohFrIq+eFnGublbhrNj7pANCb1vk90m1+y5Okus0SbP1sfLjx+vrINDbq49RWa6ev3v+VkNFx5pK71Wevei3VAfYFccqzJwq7J7aPqsjarHRXZZfRUoVJfOm3RaV/qOitQM5Z0Ryy3SphoHmmAXNYwHNNQuazwKaZxa0gAU03yxo9YcZTi0wjJrFgpo1NQyb4IHNMgubjUe2+erRQuj0FefbfMIkni/13zN95UpDuiq4xFoIv6t23MdRVBx+lamH+Ft4u25qql+v0niZry/EuZo474u2HvP0oZTy1kBKJdjCLDFmuwNzLa5HSOoxpfFw4QlUse9Tj4Vj6uKBHYVjGjuXRci5pmHj4So807Dx8BW+adh4OIvAMGyCh7WAIpmeGxNzYZnGDbsLLZMxO+Yy2cWcjyuThWQRn0AmN6dNF5887IVjHDdG/sI1Dh4Pg+EZx42Hw/CN48bDYgTGcePhMaBYpgcnmbgMyzhw2GZouSwRPOZy2ZpSTytL7EtMjFCol2spQxegPHwG1Mv03Hj4DCiV6bkx8hlQL9PD4+EzoF6m58bDZ0C9TM+Nh89AepkeHBOjAfUyOTgbGw2tl20Ej7tetjHo4+plm8f9D6iXA5s6QHn4DKiX6bnx8BlQL9Nz4+EzoFSm58bIZ0C9TA+Ph8+AepmeGw+fgfQyPTgmRgPqZXpwnU8tOwged73sYdDH1csONiYmRijQy0JQPy/k8PAZjnHcePgM1zhuPHyGZxw3Hj7DN44bI58RGAePh8+AetkAcEyMhmUcOGw0tF52ETzmellMqZ9fdrAxMTFCoV52qQ2dy8NnQL1Mz42Hz4B6mZ4bD58B9TI9Nx4+A+plem48fAaUyvTcGPkMpJfp6TExGlAv04PDRkPrZQ/B466Xber5ZZfHHRColwNqQ+fy8BlQL5Nz83j4DKiX6bnx8BlQL9Nz4+EzoF6m58bDZ0C9TM+Nh89AUpkeHCejAfUyPT1sNLRe9hFB7nrZo55f9rBe/pRGSm+xesB+yLP0HwXSuSPD+3dKVxK0aTIdqBe8rV6oC2RudIJz1E7wxxSRA48VzSGG1Ju0fB5KUkjTuPFQksI2jRsPJSkc07jxUJLCNY0bj0r+wjONG4+5auGbxo3HMzEiMI1b5yMxfVS02ZYFzlWTV772eUxVQ6FMXnIy4OQypGnwmLgM2zRuTFyGYxo3Ji7DNY0bE5fhmcaNicvwTePGxGUEpnHrfCAGs+OulskLYAfYlhgZoEAt05ecDJjYDGkYODHl5DNs4+gxMRqOceCYOA3XOHBMrIZnHDgmXsM3DhwTsxEYB67zsRj+ZbBRfTrqOthiyuQuCJTN1JUnm1/Mmg4OymZ6cEz8BlTM5OBGXRV6aL8BZTM9PSZ+A8pmenBM/AaUzfTgmPgNKJvpwTHxG1A204PDfuMkqmEj2UxdDlswWSIayWbqApSCySLRSDbTg2PiN6BspgfHxG9AxUwObtSloof2G1A209Nj4jegbKYHx8RvQNlMD46J34CymR4c9hsnURQbyWbqqthi1FWjx5PN5HUoBZMVo6FsNgAcE79hGweOid9wjAPHxG+4poEbddHoof2GZxw9Jn7DNw4cE78RGAcO+42TqI2NitZRF8cWTBaPRrKZuhylGHX16BFlMz04Jn4DymZ6cEz8BpTN9OCY+A0om+nBMfEbUDGTg+tYO3o4cEP7DSib6ekx8RtQNtODw37jJEpkI9lMXSNbMFlDGslm6qqUgski0kg204Nj4jegbKYHx8RvQNlMD46J34CymR4cE78BZTM9OCZ+AypmcnCjLgk9tN+AspmeHvYbJ1EpG8lm6lLZomNJ5KpU9qcetJmWym4m7gyplS06F1h2k4L2XaqvabMb3H8f03rHRRme7/QbLHv13O7Uf82L/19eXtYN6TMr2yr3nG7vWugHBLh7j5tkneso7ti9067uDZPkIlPRo+bZs5fVMnqXZelT23sbHa6e4/zLxt9fi065dKpX75+rPlq/eKlfLDWiL5svNo4qXraHrV/Vx8EYuU3zPL3HX3SRo/zIRtGo9/jiVrpus+dz1bvFyS/SLP6mMYRNKOuYi5fzq+pD3ouGjYrm6vVA0/zSx2ymXunh+mdzeZjNVf7aG8uq+Dh2M5WEefzf9ql0RWJ16O/Ft1Ub8xIsFWLBJspLqI5q4xk1ZL/VUHmJqKF1YjTXc0iu4C+k9SAG41h326/hrUo6vsn3kgPlJ8zS5VLN8qqxSftY/OsD0mtpf0BfHzbqdJSULcaLeZgvNApxvT6TYuy4eJiFeYFnp8ECZ13RJzfV0Sq5TZ8+tBvaccLabZzQ5/AxTpoPXYarP9My+sr9cLgIlX836xou3Jmvbu/2HOW2BpSu4StPi++yMJvdlILRG3aAafRIjxHG6466N8RNve3Qgcja9hKyLnqx60AEG6rXCHtjHNLxGr5svK1S9a+cLxjvKi/UZl7Z4rCDXEfhXRNSc8ev8PFSczpkajbKAoqAabvtV3VXnPWFvZXE1nTYLG4W8nwzi23KHIYLtklLXDr7ZTFsynMvHV+2/7xxkhrktH+UnO5xA2rXrN0zMXdPvL5BTva1AudNwLdBf3n7ejtjq9s+lZzPQTJQkNTPtR0aJLCd0YOkx43Fc5AMFCTTvY3y6+2MHiQ9bqKeg2SYIGkWEj8wSFA7owdJjxvG5yAZKEjkMCMJamf0IOlxc/wcJMMESTDMQAKbGT1EejwGsGOInGJXW4P0NZySOnJnyz6l7s+dDTrbqn9ZeGhvo4ZG7+4ez3afuxt2976KEHX3kSWh7LOiwLm7QXeL/e9iQJsoj9zdw89K/gDdLf2Buhs2NHp3Dz+/+AN0d10q6eDuhg2N3d0dxaxO5sEiPg8IHfCER3dcWQI4AG+/
+BRvtDN6eOKJbKbP8lSZdkBPH/Qsj+wsCXZO9FNL9L1dJMz0o7vIjtJrbFPdpU31runYc6qfXKrvO4OAUv3YMwgdNe/YprpPmuqdRfDOqX5qqd6s4nloqqOGxk71jjqDXFNdHvrDiwNTvWva9ZzqJ5fq+84colQ/9sxhR4FHtqkuaFP9PCn3Q6T6vrPGKNWPPWvcUSiTbarTTst1Vs48p/qppbqc7vkkOUx11NDoqX4603KSdlruGJU3t/OkTYHDyiGA6SJcDMGSHfda5WjlEOSotTjHRCngj+NHZKlfZmnxndCOB/q6F7+lUZGlH/4H</diagram></mxfile>
2102.12092/main_diagram/main_diagram.pdf ADDED
Binary file (32.9 kB).
 
2102.12092/paper_text/intro_method.md ADDED
@@ -0,0 +1,105 @@
1
+ # Method
2
+
3
+ The dVAE encoder and decoder are convolutional [@lecun1998gradient] ResNets [@he2016identity] with bottleneck-style resblocks. The models primarily use $3 \times 3$ convolutions, with $1 \times 1$ convolutions along skip connections in which the number of feature maps changes between the input and output of a resblock. The first convolution of the encoder is $7 \times 7$, and the last convolution of the encoder (which produces the $32 \times 32 \times 8192$ output used as the logits for the categorical distributions for the image tokens) is $1 \times 1$. Both the first and last convolutions of the decoder are $1 \times 1$. The encoder uses max-pooling (which we found to yield better ELB than average-pooling) to downsample the feature maps, and the decoder uses nearest-neighbor upsampling. The precise details for the architectures are given in the files `dvae/encoder.py` and `dvae/decoder.py` of the code release.
4
+
5
+ ``` {#lst:dvae_preproc .python language="Python" basicstyle="\\footnotesize\\ttfamily" caption="TensorFlow~\\cite{abadi2016tensorflow} image preprocessing code for training dVAE. We use \\texttt{target\\_res = 256} and \\texttt{channel\\_count = 3}." label="lst:dvae_preproc" captionpos="b" float="tp" floatplacement="tbp"}
6
+ def preprocess_image(img, target_res):
7
+ h, w = tf.shape(img)[0], tf.shape(img)[1]
8
+ s_min = tf.minimum(h, w)
9
+ img = tf.image.random_crop(img, 2 * [s_min] + [3])
10
+
11
+ t_min = tf.minimum(s_min, round(9 / 8 * target_res))
12
+ t_max = tf.minimum(s_min, round(12 / 8 * target_res))
13
+ t = tf.random.uniform([], t_min, t_max + 1, dtype=tf.int32)
14
+ img = tf.image.resize_images(img, [t, t], method=tf.image.ResizeMethod.AREA,
15
+ align_corners=True)
16
+ img = tf.cast(tf.rint(tf.clip_by_value(img, 0, 255)), tf.uint8)
17
+ img = tf.image.random_crop(img, 2 * [target_res] + [channel_count])
18
+ return tf.image.random_flip_left_right(img)
19
+ ```
20
+
21
+ The dVAE is trained on the same dataset as the transformer, using the data augmentation code given in Listing [\[lst:dvae_preproc\]](#lst:dvae_preproc){reference-type="ref" reference="lst:dvae_preproc"}. Several quantities are decayed during training, all of which use a cosine schedule:
22
+
23
+ 1. The KL weight $\beta$ is increased from $0$ to $6.6$ over the first 5000 updates. @bowman2015generating use a similar schedule based on the sigmoid function.
24
+
25
+ 2. The relaxation temperature $\tau$ is annealed from $1$ to $1 / 16$ over the first 150000 updates. Using a linear annealing schedule for this typically led to divergence.
26
+
27
+ 3. The step size is annealed from $1 \cdot 10^{-4}$ to $1.25 \cdot 10^{-6}$ over 1200000 updates.
28
+
29
+ The decay schedules for the relaxation temperature and the step size are especially important for stability and successful optimization.
30
+
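+ For concreteness, the three decays might be implemented as below; this is a sketch under the stated endpoints only (the helper name and exact form are ours, since the text specifies just the cosine shape):
+
+ ```python
+ import math
+
+ def cosine_decay(step, start, end, total_steps):
+     """Cosine interpolation from `start` to `end` over `total_steps` updates."""
+     t = min(step, total_steps) / total_steps
+     return end + 0.5 * (start - end) * (1 + math.cos(math.pi * t))
+
+ # `step` is the current parameter-update index.
+ beta = cosine_decay(step, 0.0, 6.6, 5_000)            # KL weight (increases)
+ tau = cosine_decay(step, 1.0, 1 / 16, 150_000)        # relaxation temperature
+ lr = cosine_decay(step, 1e-4, 1.25e-6, 1_200_000)     # step size
+ ```
+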
31
+ We update the parameters using AdamW [@loshchilov2017decoupled] with $\beta_1=0.9$, $\beta_2=0.999$, $\epsilon=10^{-8}$, and weight decay multiplier $10^{-4}$. We use exponentially weighted iterate averaging for the parameters with decay coefficient $0.999$. The reconstruction term in the ELB is a joint distribution over the $256 \times 256 \times 3$ values for the image pixels, and the KL term is a joint distribution over the $32 \times 32$ positions in the spatial grid output by the encoder. We divide the overall loss by $256 \times 256 \times 3$, so that the weight of the KL term becomes $\beta / 192$, where $\beta$ is the KL weight (the KL term sums over $32 \times 32$ positions, and $32 \cdot 32 / (256 \cdot 256 \cdot 3) = 1 / 192$). The model is trained in mixed-precision using standard (i.e., global) loss scaling on $64$ 16 GB NVIDIA V100 GPUs, with a per-GPU batch size of $8$, resulting in a total batch size of 512. It is trained for a total of 3000000 updates.
32
+
33
+ The $\ell_1$ and $\ell_2$ reconstruction objectives are commonly used when training VAEs. These objectives correspond to using Laplace and Gaussian distributions for $\ln p_\theta(x \,|\,y, z)$ in Equation [\[eq:elb\]](#eq:elb){reference-type="ref" reference="eq:elb"}, respectively. There is a strange mismatch in this modeling choice: pixel values lie within a bounded interval, but both of these distributions are supported on the entire real line. Hence, some amount of likelihood will be placed outside the admissible range of pixel values.
34
+
35
+ We present a variant of the Laplace distribution that is also supported on a bounded interval. This resolves the discrepancy between the range of the pixel values being modeled and the support of the distribution used to model them. We consider the pdf of the random variable obtained by applying the sigmoid function to a Laplace-distributed random variable. This pdf is defined on $(0, 1)$ and is given by $$\begin{equation}
36
+ f(x \,|\,\mu, b) = \frac{1}{2b x (1 - x)} \exp\left(-\frac{|\operatorname{logit}(x) - \mu|}{b}\right);
37
+ \label{eq:logit_laplace_pdf}
38
+ \end{equation}$$ we call it the *logit-Laplace distribution.* We use the logarithm of the RHS of Equation [\[eq:logit_laplace_pdf\]](#eq:logit_laplace_pdf){reference-type="ref" reference="eq:logit_laplace_pdf"} as the reconstruction term for the training objective of the dVAE.
39
+
40
+ The decoder of the dVAE produces six feature maps representing the sufficient statistics of the logit-Laplace distribution for the RGB channels of the image being reconstructed. The first three feature maps represent the $\mu$ parameter for the RGB channels, and the last three represent $\ln b$. Before feeding an image into the dVAE encoder, we transform its values using $\varphi: [0, 255] \to (\epsilon, 1 - \epsilon)$, which is given by $$\begin{equation}
41
+ \varphi : x \mapsto \frac{1 - 2\epsilon}{255} x + \epsilon.
42
+ \end{equation}$$ This restricts the range of the pixel values to be modeled by the dVAE decoder to $(\epsilon, 1 - \epsilon)$, which avoids numerical problems arising from the $x (1 - x)$ in Equation [\[eq:logit_laplace_pdf\]](#eq:logit_laplace_pdf){reference-type="ref" reference="eq:logit_laplace_pdf"}. We use $\epsilon = 0.1$. To reconstruct an image for manual inspection or computing metrics, we ignore $\ln b$ and compute $\hat{x} = \varphi^{-1}(\operatorname{sigmoid}(\mu))$, where $\mu$ is given by the first three feature maps output by the dVAE decoder.[^11]
43
+
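+ The mapping and the reconstruction rule are simple enough to state in code; the following is a sketch (function names ours) of $\varphi$, its inverse, the per-pixel negative log of Equation [\[eq:logit_laplace_pdf\]](#eq:logit_laplace_pdf){reference-type="ref" reference="eq:logit_laplace_pdf"}, and the reconstruction $\hat{x}$, written in TensorFlow to match the listings above:
+
+ ```python
+ import tensorflow as tf
+
+ EPS = 0.1  # the epsilon used in the text
+
+ def phi(x):
+     """Map pixel values from [0, 255] into (eps, 1 - eps) before encoding."""
+     return (1 - 2 * EPS) / 255 * x + EPS
+
+ def phi_inv(y):
+     """Inverse map, used when reconstructing images for inspection."""
+     return 255 * (y - EPS) / (1 - 2 * EPS)
+
+ def logit_laplace_nll(x, mu, log_b):
+     """-log f(x | mu, b) per pixel, for x in (0, 1)."""
+     logit_x = tf.math.log(x) - tf.math.log1p(-x)
+     b = tf.exp(log_b)
+     return tf.math.log(2 * b * x * (1 - x)) + tf.abs(logit_x - mu) / b
+
+ def reconstruct(mu):
+     """x_hat = phi^{-1}(sigmoid(mu)); `mu` is the first three decoder maps."""
+     return phi_inv(tf.sigmoid(mu))
+ ```
+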
44
+ <figure id="fig:xf_embds" data-latex-placement="t">
45
+ <img src="xf_embds.png" />
46
+ <figcaption>Illustration of the embedding scheme for a hypothetical version of our transformer with a maximum text length of 6 tokens. Each box denotes a vector of size <span class="math inline"><em>d</em><sub>model</sub> = 3968</span>. In this illustration, the caption has a length of 4 tokens, so 2 padding tokens are used (as described in Section <a href="#sec:learning_prior" data-reference-type="ref" data-reference="sec:learning_prior">2.2</a>). Each image vocabulary embedding is summed with a row and column embedding.</figcaption>
47
+ </figure>
48
+
49
+ <figure id="fig:xf_attn" data-latex-placement="t">
50
+
51
+ <figcaption>Illustration of the three types of attention masks for a hypothetical version of our transformer with a maximum text length of 6 tokens and image length of 16 tokens (i.e., corresponding to a <span class="math inline">4 × 4</span> grid). Mask (a) corresponds to row attention in which each image token attends to the previous 5 image tokens in raster order. The extent is chosen to be 5, so that the last token being attended to is the one in the same column of the previous row. To obtain better GPU utilization, we transpose the row and column dimensions of the image states when applying column attention, so that we can use mask (c) instead of mask (b). Mask (d) corresponds to a causal convolutional attention pattern with wraparound behavior (similar to the row attention) and a <span class="math inline">3 × 3</span> kernel. Our model uses a mask corresponding to an <span class="math inline">11 × 11</span> kernel.</figcaption>
52
+ </figure>
53
+
54
+ Our model is a decoder-only sparse transformer of the same kind described in @child2019generating, with broadcasted row and column embeddings for the part of the context for the image tokens. A complete description of the embedding scheme used in our model is shown in Figure [10](#fig:xf_embds){reference-type="ref" reference="fig:xf_embds"}. We use 64 attention layers, each of which uses 62 attention heads with a per-head state size of 64.
55
+
56
+ The model uses three kinds of sparse attention masks, which we show in Figure [11](#fig:xf_attn){reference-type="ref" reference="fig:xf_attn"}. The convolutional attention mask (Figure [11](#fig:xf_attn){reference-type="ref" reference="fig:xf_attn"}(d)) is only used in the last self-attention layer. Otherwise, given the index $i$ of a self-attention layer (with $i \in [1, 63]$), we use the column attention mask (Figure [11](#fig:xf_attn){reference-type="ref" reference="fig:xf_attn"}(c)) if $(i - 2) \bmod 4 = 0$, and row attention otherwise. E.g., the first four self-attention layers use "row, column, row, row", respectively. With the exception of the convolutional attention mask, which we found to provide a small boost in performance over the row and dense causal attention masks when used in the final self-attention layer, this is the same configuration used in @child2019generating.
57
+
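+ The layer-to-mask assignment can be written out explicitly; a small sketch (function name ours):
+
+ ```python
+ def mask_type(i, n_layers=64):
+     """Mask used by self-attention layer i (1-indexed): the final layer is
+     convolutional; otherwise column attention iff (i - 2) mod 4 == 0."""
+     if i == n_layers:
+         return "conv"  # 11x11 convolutional mask, final layer only
+     return "column" if (i - 2) % 4 == 0 else "row"
+
+ assert [mask_type(i) for i in range(1, 5)] == ["row", "column", "row", "row"]
+ ```
+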
58
+ ``` {#lst:xf_preproc .python language="Python" basicstyle="\\footnotesize\\ttfamily" caption="TensorFlow~\\cite{abadi2016tensorflow} image preprocessing code for training the transformer. We use \\texttt{target\\_res = 256} and \\texttt{channel\\_count = 3}." label="lst:xf_preproc" captionpos="b" float="tp" floatplacement="tbp"}
59
+ def preprocess_image(img, target_res):
60
+ h, w = tf.shape(img)[0], tf.shape(img)[1]
61
+ s_min = tf.minimum(h, w)
62
+
63
+ off_h = tf.random.uniform([], 3 * (h - s_min) // 8,
64
+ tf.maximum(3 * (h - s_min) // 8 + 1, 5 * (h - s_min) // 8),
65
+ dtype=tf.int32)
66
+ off_w = tf.random.uniform([], 3 * (w - s_min) // 8,
67
+ tf.maximum(3 * (w - s_min) // 8 + 1, 5 * (w - s_min) // 8),
68
+ dtype=tf.int32)
69
+
70
+ # Random full square crop.
71
+ img = tf.image.crop_to_bounding_box(img, off_h, off_w, s_min, s_min)
72
+ t_max = tf.minimum(s_min, round(9 / 8 * target_res))
73
+ t = tf.random.uniform([], target_res, t_max + 1, dtype=tf.int32)
74
+ img = tf.image.resize_images(img, [t, t], method=tf.image.ResizeMethod.AREA,
75
+ align_corners=True)
76
+ img = tf.cast(tf.rint(tf.clip_by_value(img, 0, 255)), tf.uint8)
77
+
78
+ # We don't use hflip aug since the image may contain text.
79
+ return tf.image.random_crop(img, 2 * [target_res] + [channel_count])
80
+ ```
81
+
82
+ When training the transformer, we apply data augmentation to the images before encoding them using the dVAE encoder. We use slightly different augmentations from the ones used to train the dVAE; the code used for this is given in Listing [\[lst:xf_preproc\]](#lst:xf_preproc){reference-type="ref" reference="lst:xf_preproc"}. We also apply 10% BPE dropout when BPE-encoding the captions for training. The model is trained using per-resblock scaling (see Section [2.4](#sec:mp_train){reference-type="ref" reference="sec:mp_train"}) and gradient compression (see Section [2.5](#sec:dist_opt){reference-type="ref" reference="sec:dist_opt"}) with total compression rank 896 (so that each GPU uses a compression rank of 112 for its parameter shards). As shown in Table [1](#tab:cmp_rank){reference-type="ref" reference="tab:cmp_rank"}, this results in a compression rate of about 86%, which we analyze in Section [9.1](#sec:dist_train_bw){reference-type="ref" reference="sec:dist_train_bw"}.
83
+
84
+ We update the parameters using AdamW with $\beta_1 = 0.9$, $\beta_2 = 0.96$, $\epsilon = 10^{-8}$, and weight decay multiplier $4.5 \cdot 10^{-2}$. We clip the decompressed gradients by norm using a threshold of 4, prior to applying the Adam update. Gradient clipping is only triggered during the warm-up phase at the start of training. To conserve memory, most Adam moments (see Section [8](#sec:mp_train_guidelines){reference-type="ref" reference="sec:mp_train_guidelines"} for details) are stored in 16-bit formats, with a 1-6-9 format for the running mean (i.e., 1 bit for the sign, 6 bits for the exponent, and 9 bits for the significand), and a 0-6-10 format for the running variance. We clip the estimate for running variance by value to 5 before it is used to update the parameters or moments. Finally, we apply exponentially weighted iterate averaging by asynchronously copying the model parameters from the GPU to the CPU once every 25 updates, using a decay coefficient of 0.99.
85
+
86
+ We trained the model using 1024 16 GB NVIDIA V100 GPUs and a total batch size of $1024$, for a total of 430000 updates. At the start of training, we used a linear schedule to ramp up the step size to $4.5 \cdot 10^{-4}$ over 5000 updates, and halved the step size each time the training loss appeared to plateau. We did this a total of five times, ending training with a final step size that was 32 times smaller than the initial one. We reserved about 606000 images for validation, and did not observe overfitting at any point during training.
87
+
88
+ In order to train the 12-billion parameter transformer, we created a dataset of a similar scale to JFT-300M by collecting 250 million text-image pairs from the internet. As described in Section [2.3](#sec:data_collection){reference-type="ref" reference="sec:data_collection"}, this dataset incorporates Conceptual Captions, the text-image pairs from Wikipedia, and a filtered subset of YFCC100M. We use a subset of the text, image, and joint text and image filters described in @sharma2018conceptual to construct this dataset. These filters include discarding instances whose captions are too short, are classified as non-English by the Python package `cld3`, or that consist primarily of boilerplate phrases such as "photographed on `<date>`", where `<date>` matches various formats for dates that we found in the data. We also discard instances whose images have aspect ratios not in $[1 / 2, 2]$. If we were to use very tall or wide images, then the square crops used during training would likely exclude objects mentioned in the caption.
89
+
90
+ <figure id="fig:grad_scale_plot" data-latex-placement="t">
91
+ <img src="grad_scale_plot.png" style="width:40.0%" />
92
+ <figcaption>Plot of per-resblock gradient scales for a 2.8-billion parameter text-to-image transformer trained without gradient compression. The <span class="math inline"><em>x</em></span>-axis is parameter updates, and the <span class="math inline"><em>y</em></span>-axis is the base-2 logarithm of the gradient scale. Darkest violet corresponds to the first resblock, and brightest yellow corresponds to the last (of which there are 128 total). The gradient scale for the second MLP resblock hovers at around <span class="math inline">2<sup>24</sup></span>, while the others stay within a 4-bit range. The extent of this range increases as the model is made larger.</figcaption>
93
+ </figure>
94
+
95
+ The most challenging part of this project was getting the model to train in 16-bit precision past one billion parameters. We were able to do this after detecting underflow in various parts of training, and revising the code to eliminate it. We developed a set of guidelines as a result of this process that we present here.[^12]
96
+
97
+ 1. **Use per-resblock gradient scaling (Figure [4](#fig:grad_scaling){reference-type="ref" reference="fig:grad_scaling"}) instead of standard loss scaling.** Our model uses 128 gradient scales, one for each of its resblocks. All of the gradient scales are initialized to $M \cdot 2^{13}$, where $M$ is the number of data-parallel replicas (i.e., the number of GPUs). In our setup, each grad scale is multiplied by $2^{1 / 1000}$ at every parameter update when there are no nonfinite values for any parameter gradient in that resblock. Otherwise, we divide the grad scale by $\sqrt{2}$ and skip the update. We also disallow consecutive divisions of the same grad scale within a window of $125$ updates. All grad scales are clamped to the range $[M \cdot 2^7, M \cdot 2^{24}]$ after being updated. Figure [12](#fig:grad_scale_plot){reference-type="ref" reference="fig:grad_scale_plot"} shows the gradient scales in the early phase of training for a 2.8-billion parameter model. A sketch of this update rule is given after this list.
98
+
99
+ 2. **Only use 16-bit precision where it is really necessary for performance.** In particular, store all gains, biases, embeddings, and unembeddings in 32-bit precision, with 32-bit gradients (including for remote communication) and 32-bit Adam moments. We disable gradient compression for these parameters (though PowerSGD would not make sense for 1D parameters like gains and biases). The logits for the text and image tokens are computed and stored in 32-bit precision. We found that storing the embeddings in 16-bit precision sometimes caused divergence early in optimization, and using 16-bit logits resulted in a small shift in the training curve, so we switched to use 32-bit precision out of an abundance of caution.
100
+
101
+ 3. **Avoid underflow when dividing the gradient.** For data-parallel training, we need to divide the gradients by the total number of data-parallel workers $M$. One way to do this is to divide the loss by the per-machine batch size, and then divide the parameter gradients by $M$ before summing them over the machines (using all-reduce). To save time and space, the gradients are usually computed and stored in 16-bit precision. When $M$ is large, this division could result in underflow before the gradients are summed. On the other hand, if we attempt to sum the gradients first and then divide them later, we could encounter overflow in the all-reduce.
102
+
103
+ Our solution for this problem attempts to minimize the loss of information in the division prior to the all-reduce, without danger of overflow. To do this, we divide the loss by the overall batch size (which includes $M$ as a factor) rather than the per-machine batch size, and multiply the gradient scales by $M$ to compensate, as described in (1). Then, prior to the all-reduce operation, we divide the gradients by a constant that was tuned by hand to avoid both underflow and overflow. This was done by inspecting histograms of the exponents (i.e., base-2 logarithms) of the absolute values of the scalar components of the per-parameter gradients. Since the gradient scaling keeps the gradients close to the right end of the exponent range of the 16-bit format, we found that the same constant worked well for all parameters in the model with 16-bit gradients. When using PowerSGD, we chose different constants for the $P$ and $Q$ matrices.
104
+
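+ As a sketch of the per-resblock gradient-scale rule from guideline (1) above (class and attribute names are ours; the constants are those stated in the text):
+
+ ```python
+ import math
+
+ class ResblockGradScale:
+     """One instance per resblock; M is the number of data-parallel replicas."""
+     GROWTH, BACKOFF, WINDOW = 2 ** (1 / 1000), 1 / math.sqrt(2), 125
+
+     def __init__(self, M):
+         self.M = M
+         self.scale = M * 2 ** 13        # initial value
+         self.last_backoff = -self.WINDOW
+
+     def update(self, grads_finite, step):
+         """Returns True if the parameter update should be applied."""
+         if grads_finite:
+             self.scale *= self.GROWTH
+         elif step - self.last_backoff >= self.WINDOW:
+             # divide by sqrt(2) and skip, but never twice within 125 updates
+             self.scale *= self.BACKOFF
+             self.last_backoff = step
+         self.scale = min(max(self.scale, self.M * 2 ** 7), self.M * 2 ** 24)
+         return grads_finite
+ ```
+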
105
+ We use PowerSGD [@vogels2019powersgd] to compress the gradients with respect to all parameters except the embeddings, unembeddings, gains, and biases. In Section [9.1](#sec:dist_train_bw){reference-type="ref" reference="sec:dist_train_bw"}, we derive an expression for the reduction in the amount of data communicated as a function of the compression rank and model size. In Section [9.2](#sec:dist_train_impl){reference-type="ref" reference="sec:dist_train_impl"}, we present a detailed overview of our adaptation of PowerSGD, and the modifications we had to make in order to fix performance regressions, some of which only manifest at billion-parameter scale.
2103.10584/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2103.10584/main_diagram/main_diagram.pdf ADDED
Binary file (36.8 kB).
 
2103.10584/paper_text/intro_method.md ADDED
@@ -0,0 +1,72 @@
1
+ # Introduction
2
+
3
+ The recent performance breakthroughs of deep neural networks (DNNs) have attracted an explosion of research in designing efficient DNNs, aiming to bring powerful yet power-hungry DNNs into more resource-constrained daily life devices for enabling various DNN-powered intelligent functions [\(Ross, 2020;](#page-11-0) [Liu et al., 2018b;](#page-10-0) [Shen et al., 2020;](#page-11-1) [You et al., 2020a\)](#page-13-0). Among them, HardWare-aware Neural Architecture Search (HW-NAS) has emerged as one of the most promising techniques as it can automate the process of designing optimal DNN structures for the target applications, each of which often adopts a different hardware device and requires a different hardware-cost metric (e.g., prioritizes latency or energy). For example, HW-NAS in [\(Wu et al., 2019\)](#page-12-0) develops a differentiable neural architecture search (DNAS) framework and discovers state-of-the-art (SOTA) DNNs balancing both accuracy and hardware efficiency, by incorporating a loss consisting of both the cross-entropy loss that leads to better accuracy and the latency loss that penalizes the network's latency on a target device.
4
+
5
+ ![](_page_1_Figure_1.jpeg)
6
+
7
+ <span id="page-1-0"></span>Figure 1: An illustration of our proposed HW-NAS-Bench
8
+
9
+ Despite the promising performance achieved by SOTA HW-NAS, there exist paramount challenges that limit the development of HW-NAS innovations. First, HW-NAS requires the collection of hardware efficiency data corresponding to (all) the networks in the search space. To do so, current practice either pre-collects these data to construct a hardware-cost look-up table or adopts device-specific hardware-cost estimators/models, both of which can be time-consuming to obtain and impose a barrier-to-entry to non-hardware experts. This is because it requires knowledge about device-specific compilation and properly setting up the hardware measurement pipeline to collect hardware-cost data. Second, similar to generic NAS, it can be notoriously difficult to benchmark HW-NAS algorithms due to the significant computational resources required and the differences in their (1) hardware devices, which are specific to HW-NAS, (2) adopted search spaces, and (3) hyperparameters. Such a difficulty is even higher for HW-NAS considering the numerous choices of hardware devices, each of which can favor very different network structures even under the same target hardware efficiency, as discussed in [\(Chu et al., 2020\)](#page-9-0). While the number of floating-point operations (FLOPs) has been commonly used to estimate the hardware-cost, many works have pointed out that DNNs with fewer FLOPs are not necessarily faster or more efficient [\(Wu et al., 2019;](#page-12-0) [2018;](#page-12-1) [Wang et al., 2019b\)](#page-12-2). For example, NasNet-A [\(Zoph et al., 2018\)](#page-13-1) has a comparable complexity in terms of FLOPs to MobileNetV1 [\(Howard et al., 2017\)](#page-10-1), yet can have a larger latency due to its hardware-unfriendly structure.
10
+
11
+ It is thus imperative to address the aforementioned challenges in order to make HW-NAS more accessible and reproducible to unfold HW-NAS's full potential. Note that although pioneering NAS benchmark datasets [\(Ying et al., 2019;](#page-13-2) [Dong & Yang, 2020;](#page-9-1) [Klyuchnikov et al., 2020;](#page-10-2) [Siems et al.,](#page-11-2) [2020;](#page-11-2) [Dong et al., 2020\)](#page-9-2) have made a significant step towards providing a unified benchmark dataset for generic NAS works, all of them either merely provide the latency on server-level GPUs (e.g., GTX 1080Ti) or do not provide any hardware-cost data on real hardware, limiting their applicability to HW-NAS [\(Wu et al., 2019;](#page-12-0) [Wan et al., 2020;](#page-12-3) [Cai et al., 2018\)](#page-9-3) which primarily targets commercial edge devices, FPGA, and ASIC. To this end, as shown in Figure [1,](#page-1-0) we develop HW-NAS-Bench and make the following contributions in this paper:
12
+
13
+ - We have developed HW-NAS-Bench, the first public dataset for HW-NAS research aiming to (1) democratize HW-NAS research to non-hardware experts and (2) facilitate a unified benchmark for HW-NAS to make HW-NAS research more reproducible and accessible, covering two SOTA NAS search spaces including NAS-Bench-201 and FBNet, with the former being one of the most popular NAS search spaces and the latter having been shown to be one of the most hardware friendly NAS search spaces.
14
+ - We provide hardware-cost data collection pipelines for six commonly used hardware devices that fall into three categories (i.e., commercial edge devices, FPGA, and ASIC), in addition to the measured/estimated hardware-cost (e.g., energy cost and latency) on these devices for all the networks in the search spaces of both NAS-Bench-201 and FBNet.
15
+ - We conduct a comprehensive analysis of the collected data in HW-NAS-Bench, such as studying the correlation between the collected hardware-cost and accuracy-cost data of all the networks on the six hardware devices, which provides insights to not only HW-NAS researchers but also DNN accelerator designers. Other researchers can extract useful insights from HW-NAS-Bench that have not been discussed in this work.
18
+
19
+ - We demonstrate exemplary user cases to show: (1) how HW-NAS-Bench can be easily used by non-hardware experts to develop HW-NAS solutions by simply querying the collected data in our HW-NAS-Bench (see the sketch after this list), and (2) that dedicated device-specific HW-NAS can indeed lead to optimal accuracy-cost trade-offs, demonstrating the great necessity of HW-NAS benchmarks like our proposed HW-NAS-Bench.
20
+
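+ To make the first user case concrete, a hypothetical query sketch is shown below; the object, method, and key names are illustrative assumptions, not the released API:
+
+ ```python
+ def hw_aware_loss(ce_loss, arch_index, api, dataset="cifar10", weight=0.1):
+     """Illustrative only: combine the task loss with a queried hardware cost.
+     `api` stands for a loaded HW-NAS-Bench lookup object."""
+     metrics = api.query_by_index(arch_index, dataset)      # assumed interface
+     return ce_loss + weight * metrics["edgegpu_latency"]   # assumed key name
+ ```
+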
21
+ # Method
22
+
23
+ To ensure wide applicability, our HW-NAS-Bench considers two representative NAS search spaces: (1) NAS-Bench-201's cell-based search space and (2) the FBNet search space. Both contribute valuable aspects to our goal of constructing a comprehensive HW-NAS benchmark. Specifically, the former enables HW-NAS-Bench to naturally integrate the ground-truth accuracy data of all of NAS-Bench-201's considered network architectures, while the latter ensures that HW-NAS-Bench includes the most commonly recognized hardware-friendly search space.
24
+
25
+ NAS-Bench-201 Search Space. Inspired by the search space used in the most popular cell-based NAS works, NAS-Bench-201 adopts a fixed cell search space, where each architecture consists of a predefined skeleton with a stack of the searched cell, represented as a densely-connected directed acyclic graph (DAG). Specifically, it considers 4 nodes and 5 representative operation candidates for the operation set, and varies the feature map sizes and the dimensions of the final fully-connected layer to handle its three considered datasets (i.e., CIFAR-10, CIFAR-100 [\(Krizhevsky et al., 2009\)](#page-10-7), and ImageNet16-120 [\(Chrabaszcz et al., 2017\)](#page-9-6)), leading to a total of 3 × 5<sup>6</sup> = 46875 architectures. Training logs and accuracies are provided for each architecture. However, NAS-Bench-201 cannot be directly used for HW-NAS, as it only includes theoretical cost metrics (i.e., FLOPs and the number of parameters (#Params)) and the latency on a server-level GPU (i.e., GTX 1080Ti). HW-NAS-Bench enhances NAS-Bench-201 by providing all 46875 architectures' measured/estimated hardware-cost on six devices, which are primarily targeted by SOTA HW-NAS works.
26
+
27
+ FBNet Search Space. FBNet [\(Wu et al., 2019\)](#page-12-0) constructs a layer-wise search space with a fixed macro-architecture, which defines the number of layers and the input/output dimensions of each layer, and fixes the first and last three layers while the remaining layers are searched. In this way, the network architectures in the FBNet search space have more regular structures than those in NAS-Bench-201, and have been shown to be more hardware friendly [\(Fu et al., 2020a;](#page-9-5) [Ma et al., 2018\)](#page-10-10). The 9 considered pre-defined cell candidates and 22 unique positions lead to a total of 9<sup>22</sup> ≈ 10<sup>21</sup> unique architectures. While HW-NAS researchers can develop their search algorithms on top of the FBNet search space, tedious efforts are required to build the hardware-cost look-up tables or models for each target device. HW-NAS-Bench provides the measured/estimated hardware-cost on six hardware devices for all 10<sup>21</sup> architectures in the FBNet search space, aiming to make HW-NAS research friendlier to non-hardware experts and easier to benchmark.
28
+
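+ The quoted sizes follow directly from the combinatorics; as a quick plain-Python check:
+
+ ```python
+ edges = 4 * (4 - 1) // 2      # densely-connected DAG on 4 nodes -> 6 edges
+ nasbench201 = 3 * 5 ** edges  # 5 candidate ops per edge, 3 dataset variants
+ fbnet = 9 ** 22               # 9 cell candidates at each of 22 positions
+ print(nasbench201)            # 46875
+ print(f"{fbnet:.1e}")         # ~9.8e+20, i.e. on the order of 10^21
+ ```
+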
29
+ To collect the hardware-cost data for all the architectures in both the NAS-Bench-201 and FBNet search spaces, we construct a generic hardware-cost collection pipeline (see Figure [2](#page-4-0)) to automate the process. The pipeline mainly consists of the target devices and corresponding deployment tools (e.g., compilers). Specifically, it takes all the networks as its inputs, and then compiles the networks to (1) convert them into the device's required execution format and (2) optimize the execution flow, the latter of which aims to optimize the hardware performance on the target devices. For example, for collecting the hardware-cost in an Edge GPU, we first set the device in the Max-N mode to fully make use of all available resources following [\(Wofk et al., 2019\)](#page-12-7), and then set up the embedded power rail monitor [\(Texas Instruments Inc.\)](#page-11-6) to obtain the real-measured latency and energy via sysfs [\(Patrick Mochel and Mike Murphy.\)](#page-11-7), averaging over 50 runs. We can see that the hardware-cost collection pipeline requires various hardware domain knowledge, including machine learning development frameworks, device compilation, embedded systems, and device measurements, imposing a barrier-to-entry to non-hardware experts.
+
+ ![](_page_4_Figure_1.jpeg)
+
+ <span id="page-4-0"></span>Figure 2: Illustrating the hardware-cost collection pipeline applicable to various hardware devices.
36
+
37
+ Next, we briefly introduce the six considered hardware devices (as summarized in Table 1) and the specific configuration required to collect the hardware-cost data on each device.
38
+
39
+ **Edge GPU:** NVIDIA Edge GPU Jetson TX2 (Edge GPU) is a commercial device with a 256-core Pascal GPU and an 8 GB LPDDR4, targeting IoT applications (NVIDIA Inc., a). When plugging an Edge GPU into the above hardware-cost collection pipeline, we first compile the network architectures in both the NAS-Bench-201 and FBNet spaces to (1) convert them to the TensorRT format and (2) optimize the inference implementation within NVIDIA's recommended TensorRT runtime environment, and then execute them on the Edge GPU to measure the consumed energy and latency.
40
+
41
+ **Raspi 4:** Raspberry Pi 4 (Raspi 4) is the latest Raspberry Pi device (Raspberry Pi Limited.), consisting of a Broadcom BCM2711 SoC and a 4GB LPDDR4. To collect the hardware-cost of operating on it, we compile the architecture candidates to (1) convert them into the TensorFlow Lite (TFLite) (Abadi et al., 2016) format and (2) optimize the implementation using the official interpreter (Google LLC., 2020), which is pre-configured on the Raspi 4.
42
+
43
+ **Edge TPU:** An Edge TPU Dev Board (Edge TPU) (Google LLC., a) is a dedicated ASIC accelerator developed by Google, targeting Artificial Intelligence (AI) inference for edge applications. As with Raspi 4, all the architectures are first converted into the TFLite format. After that, the Edge TPU compiler is used to convert each pre-built TFLite model into a more compressed format that is compatible with the pre-configured runtime environment on the Edge TPU.
44
+
45
+ **Pixel 3:** Pixel 3 is one of the latest Pixel mobile phones (Google LLC., e), which are widely used as target platforms by recent NAS works (Xiong et al., 2020; Howard et al., 2019; Tan et al., 2019). To collect the hardware-cost on Pixel 3, we first convert all the architectures into the TFLite format, and then use TFLite's official benchmark binary to obtain the latency, configuring the Pixel 3 device to use only its big cores to reduce measurement variance, as in (Xiong et al., 2020; Tan et al., 2019).
46
+
47
+ **ASIC-Eyeriss:** To collect the hardware-cost data on an ASIC, we consider a SOTA ASIC accelerator, Eyeriss (Chen et al., 2016). Specifically, we adopt two SOTA ASIC performance simulators: (1) Accelergy (Wu et al., 2019) + Timeloop (Parashar et al., 2019) and (2) DNN-Chip Predictor (Zhao et al., 2020b), both of which automatically identify the optimal algorithm-to-hardware mapping for each architecture and then provide the estimated hardware-cost of executing the network on Eyeriss.
48
+
49
+ <span id="page-4-1"></span>Table 1: Important details about the six hardware devices considered by our HW-NAS-Bench.
50
+
51
+ | Devices | Edge GPU | Raspi 4 | Edge TPU | Pixel 3 | ASIC-Eyeriss | FPGA |
52
+ |------------------------------|--------------------------|--------------------|---------------------|--------------------|--------------------------------------------|-----------------------------|
53
+ | Collected Metrics | Latency (ms) Energy (mJ) | Latency (ms) | Latency (ms) | Latency (ms) | Latency (ms)<br>Energy (mJ) | Latency (ms)<br>Energy (mJ) |
54
+ | Collecting Method | Measured | Measured | Measured | Measured | Estimated | Estimated |
55
+ | Runtime Environment | TensorRT | TensorFlow<br>Lite | Edge TPU<br>Runtime | TensorFlow<br>Lite | Accelergy+Timeloop /<br>DNN-Chip Predictor | Vivado<br>HLS |
56
+ | Customizing Hardware? | ✗ | ✗ | ✗ | ✗ | ✓ | ✓ |
+ | Category | Commercial edge device | Commercial edge device | Commercial edge device | Commercial edge device | ASIC | FPGA |
58
+
59
+ <span id="page-5-0"></span>Table 2: Two types of correlation coefficients (larger means more correlated) between the real-measured hardware-cost of whole architectures and the approximated hardware-cost, based on 100 randomly sampled architectures from the FBNet search space.
60
+
61
+ | Correlation Coefficient Types | Datasets | Latency on<br>Edge GPU | Energy on<br>Edge GPU | Latency on<br>Raspi 4 | Latency on<br>Edge TPU | Latency on<br>Pixel 3 |
62
+ |--------------------------------------|-----------|------------------------|-----------------------|-----------------------|------------------------|-----------------------|
63
+ | Pearson Correlation Coefficient | CIFAR-100 | 0.9200 | 0.9116 | 0.9219 | 0.4935 | 0.9324 |
64
+ | | ImageNet | 0.8634 | 0.9640 | 0.9897 | 0.7153 | 0.9162 |
65
+ | Kendall Rank Correlation Coefficient | CIFAR-100 | 0.7373 | 0.7240 | 0.7470 | 0.3551 | 0.8593 |
66
+ | | ImageNet | 0.7111 | 0.8379 | 0.9163 | 0.5806 | 0.8064 |
67
+
68
+ **FPGA:** FPGA is a widely adopted AI acceleration platform, featuring higher hardware flexibility than ASICs and better hardware efficiency than commercial edge devices. To collect hardware-cost data on this platform, we first develop a SOTA chunk-based pipeline structure [\(Shen et al., 2017;](#page-11-11) [Zhang et al., 2020\)](#page-13-3) implementation, compile all the architectures using the standard Vivado HLS toolflow [\(Xilinx Inc., a\)](#page-12-9), and then obtain the hardware-cost on a Xilinx ZC706 board with a Zynq XC7045 SoC [\(Xilinx Inc., b\)](#page-12-10).
69
+
70
+ More details about the pipeline for each of the aforementioned devices are provided in the Appendix [D](#page-15-0) for better understanding.
71
+
72
+ In our HW-NAS-Bench, to estimate the hardware-cost of the networks in the FBNet search space [\(Wu et al., 2019\)](#page-12-0) when executed on the commercial edge devices (i.e., Edge GPU, Raspi 4, Edge TPU, and Pixel 3), we sum up the hardware-cost of all unique blocks (i.e., "block" in the FBNet space [\(Wu et al., 2019\)](#page-12-0)) within each network architecture. To validate that such an approximation is close to the corresponding real-measured results, we conduct experiments, as summarized in Table [2](#page-5-0), to calculate two types of correlation coefficients between the measured and the approximated hardware-cost based on 100 randomly sampled architectures from the FBNet search space. We can see that our approximated hardware-cost is highly correlated with the real-measured one, except on the Edge TPU, which we conjecture is caused by the adopted in-house Edge TPU compiler [\(Google LLC., c\)](#page-10-14). More visualization results can be found in Appendix A.
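+
+ As an illustration of this block-sum approximation, the sketch below sums per-block costs from a look-up table and computes the two correlation coefficients reported in Table [2](#page-5-0); `block_lut` and the architecture encoding are hypothetical stand-ins for the released data.
+
+ ```python
+ from scipy.stats import pearsonr, kendalltau
+
+ def approx_cost(arch, block_lut):
+     """Approximate a network's cost as the sum of its blocks' costs."""
+     return sum(block_lut[block_id] for block_id in arch)
+
+ def correlation_report(archs, block_lut, measured):
+     """Pearson and Kendall correlations of approximated vs. measured cost."""
+     approx = [approx_cost(a, block_lut) for a in archs]
+     return {"pearson": pearsonr(approx, measured)[0],
+             "kendall": kendalltau(approx, measured)[0]}
+ ```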
2104.02847/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2104.02847/paper_text/intro_method.md ADDED
@@ -0,0 +1,109 @@
1
+ # Introduction
2
+
3
+ 3D delineation is a fundamental task in medical imaging analysis. Currently, medical segmentation is dominated by [FCNs]{acronym-label="FCN" acronym-form="plural+short"} [@Long_2015_CVPR], which segment each pixel or voxel in a bottom-up fashion. [FCNs]{acronym-label="FCN" acronym-form="plural+short"} are well-suited to the underlying [CNN]{acronym-label="CNN" acronym-form="singular+short"} technology and are straightforward to implement using modern deep learning software. The current state has been made particularly plain by the dominance of nnU-Net [@isensee_nnu-net_2021], *i*.*e*., nothing-new U-Net, in the [MSD]{acronym-label="MSD" acronym-form="singular+short"} challenge [@Simpson_2019]. Yet, despite their undisputed abilities, [FCNs]{acronym-label="FCN" acronym-form="plural+short"} do lack important features compared to prior technology. For one, a surface-based representation is usually the desired end product, but [FCNs]{acronym-label="FCN" acronym-form="plural+short"} output masks, which suffer from discretization effects (see top panel of Fig. [1](#fig:intro){reference-type="ref" reference="fig:intro"}). These are particularly severe when large inter-slice distances come into play. Conversion to a smoothed mesh is possible, but it introduces its own artifacts. While this is an important drawback, an arguably more critical limitation is that current [FCN]{acronym-label="FCN" acronym-form="singular+short"} pipelines typically operate with no shape constraints.
4
+
5
+ ![**Top Panel**: [DIASs]{acronym-label="DIAS" acronym-form="plural+abbrv"} produce high quality surfaces without discretization. Its use of rich and explicit anatomical priors ensure robustness, even on highly challenging cross-dataset clinical samples. In this example, nnU-Net [@isensee_nnu-net_2021] oversegments the cardiac region (green arrow) and mishandles a [TACE]{acronym-label="TACE" acronym-form="singular+abbrv"}-treated lesion (red arrow), causing a fragmented effect. **Bottom Panel**: a 2D t-SNE embedding [@Maaten_2008] of the [DIAS]{acronym-label="DIAS" acronym-form="singular+abbrv"} shape latent space. Shapes closer together share similar features. ](images/embedding_wide.pdf){#fig:intro width=".7\\linewidth"}
6
+
7
+ Shape priors are critical for ensuring anatomically plausible delineations, but the techniques and concepts of [SSMs]{acronym-label="SSM" acronym-form="plural+short"}, so integral prior to deep learning [@Heimann_2009], have fallen out of favor. Despite the incontrovertible power of [FCNs]{acronym-label="FCN" acronym-form="plural+short"}, they can produce egregious mistakes, especially when presented with morbidities, scanners, or other scenarios not seen in training (again see top panel of Fig. [1](#fig:intro){reference-type="ref" reference="fig:intro"}). Because it is impossible to represent all clinical scenarios well enough in training datasets, priors can act as valuable regularizing forces. [FCNs]{acronym-label="FCN" acronym-form="plural+short"} may also struggle with anatomical structures with low-contrast boundaries. Efforts have been made to incorporate anatomical priors with [CNNs]{acronym-label="CNN" acronym-form="plural+short"}, but these either do not directly model shape and/or do not estimate rigid poses. Thus, they do not construct a true [SSM]{acronym-label="SSM" acronym-form="singular+short"}. Even should their interoperation with [CNNs]{acronym-label="CNN" acronym-form="plural+short"} not be an issue, classic [SSMs]{acronym-label="SSM" acronym-form="plural+short"} also have their own disadvantages, as they mostly rely on the [PDM]{acronym-label="PDM" acronym-form="singular+short"} [@Cootes_1992], which requires determining a correspondence across shapes. Ideally, correspondences would not be required.
8
+
9
+ To fill these gaps, we introduce [DIASs]{acronym-label="DIAS" acronym-form="plural+short"}, a new and deep approach to [SSMs]{acronym-label="SSM" acronym-form="plural+short"}. Using the recently introduced deep implicit shape concept [@park2019deepsdf; @chen_learning_2019; @mescheder_occupancy_2019], [DIASs]{acronym-label="DIAS" acronym-form="plural+short"} learn a compact and rich latent space that can accurately and densely generate the [SDFs]{acronym-label="SDF" acronym-form="plural+short"} of a representative sample of shapes. Importantly, correspondence between shapes is unnecessary, eliminating a major challenge with traditional [SSMs]{acronym-label="SSM" acronym-form="plural+short"}. Statistics, *e*.*g*., mean shape, [PCA]{acronym-label="PCA" acronym-form="singular+short"}, and interpolation, can be performed directly on the latent space (see bottom panel of Fig. [1](#fig:intro){reference-type="ref" reference="fig:intro"}). To fit an anatomically plausible shape to a given image, [DIASs]{acronym-label="DIAS" acronym-form="plural+short"} use a [CNN]{acronym-label="CNN" acronym-form="singular+short"} to determine rigid and non-rigid poses, thus marrying the representation power of deep networks with anatomically-constrained surface representations. Pose estimation is modelled as a [MDP]{acronym-label="MDP" acronym-form="singular+short"} to determine a trajectory of rigid and non-rigid poses, where the latter are defined as [PCA]{acronym-label="PCA" acronym-form="singular+short"} loadings of the [DIAS]{acronym-label="DIAS" acronym-form="singular+short"} latent space. To handle the intractably large search space of poses, [DIASs]{acronym-label="DIAS" acronym-form="plural+short"} make use of [MSL]{acronym-label="MSL" acronym-form="singular+short"} [@Zheng_2008; @zheng2014marginal] and inverted episodic training, the latter a concept we introduce. A final constrained deep level set refinement [@michalkiewicz_implicit_2019] captures any fine details not represented by [DIAS]{acronym-label="DIAS" acronym-form="singular+short"} latent shape space. At a high level, [DIASs]{acronym-label="DIAS" acronym-form="plural+short"} share many philosophies with traditional [SSMs]{acronym-label="SSM" acronym-form="plural+short"}, but they modernize these concepts in a powerful deep-learning framework.
10
+
11
+ As proof of concept, we evaluate [DIAS]{acronym-label="DIAS" acronym-form="singular+short"} primarily on the problem of 3D pathological liver delineation from [CT]{acronym-label="CT" acronym-form="singular+short"} scans, with supplemental validation on a challenging larynx segmentation task from low-contrast [CT]{acronym-label="CT" acronym-form="singular+short"}. For the former, we compare our approach to leading 2D [@Harrison_2017], hybrid [@Li_2018], 3D cascaded [@isensee_nnu-net_2021], and adversarial learning [@yang_automatic_2017] [FCNs]{acronym-label="FCN" acronym-form="plural+short"}. When trained and tested on the [MSD]{acronym-label="MSD" acronym-form="singular+short"} liver dataset [@Simpson_2019], [DIASs]{acronym-label="DIAS" acronym-form="plural+short"} provide more robust delineations, improving the mean [HD]{acronym-label="HD" acronym-form="singular+short"} by $7.5$-$14.3$mm. *This is over and above any benefits of directly outputting a high resolution surface.* More convincingly, we perform cross-dataset evaluation on an external dataset ($97$ volumes) that directly reflect clinical conditions [@Raju_2020]. [DIASs]{acronym-label="DIAS" acronym-form="plural+short"} improve the mean [DSC]{acronym-label="DSC" acronym-form="singular+short"} and [HD]{acronym-label="HD" acronym-form="singular+short"} from $92.4\%$ to $95.9\%$ and from $34.1$mm to $21.8$mm, respectively, over the best fully [FCN]{acronym-label="FCN" acronym-form="singular+short"} alternative (nnU-Net). In terms of robustness, the worst-case [DSC]{acronym-label="DSC" acronym-form="singular+short"} is boosted from $88.1\%$ to $93.4\%$. Commensurate improvements are also observed on the larynx dataset. These results confirm the value, once taken for granted, of incorporating anatomical priors in 3D delineation. Our contributions are thus: 1) we are the first to introduce a true *deep* [SSM]{acronym-label="SSM" acronym-form="singular+short"} model that outputs high resolution surfaces; 2) we build a new correspondence-free, compact, and descriptive anatomical prior; 3) we present a novel pose estimation scheme that incorporates inverted episodic training and [MSL]{acronym-label="MSL" acronym-form="singular+short"}; and 4) we provide a more robust solution than leading [FCNs]{acronym-label="FCN" acronym-form="plural+short"} in both intra- and cross-dataset evaluations. Our code is publicly shared[^2].
12
+
13
+ # Method
14
+
15
+ Fig. [2](#fig:system){reference-type="ref" reference="fig:system"} illustrates the [DIAS]{acronym-label="DIAS" acronym-form="singular+short"} framework. As the bottom panel demonstrates, a [CNN]{acronym-label="CNN" acronym-form="singular+short"} encoder predicts rigid and non-rigid poses, which, along with desired coordinates, are fed into a deep implicit [MLP]{acronym-label="MLP" acronym-form="singular+short"} shape decoder to output corresponding [SDF]{acronym-label="SDF" acronym-form="singular+short"} values. The encoder searches for the best pose using an [MDP]{acronym-label="MDP" acronym-form="singular+short"} combined with [MSL]{acronym-label="MSL" acronym-form="singular+short"} (top panel). We first outline the deep implicit shape model, discuss pose estimation, then describe the final local surface refinement.
16
+
17
+ Deep implicit shapes [@park2019deepsdf; @chen_learning_2019; @mescheder_occupancy_2019] are a recent and powerful implicit shape representation. We use @park2019deepsdf's formulation to model organ shapes using an [SDF]{acronym-label="SDF" acronym-form="singular+short"}, which, given a coordinate, outputs the distance to a shape's surface: $$\begin{align}
18
+ \mathit{SDF}(\mathbf{x}) = s: \mathbf{x} \in \mathbb{R}^3, \, s \in \mathbb{R} \mathrm{,} \label{eqn:sdf}
19
+ \end{align}$$ where $s$ is negative inside the shape and positive outside of it. The iso-surface of [\[eqn:sdf\]](#eqn:sdf){reference-type="eqref" reference="eqn:sdf"}, *i*.*e*., coordinates where it equals $0$, corresponds to the shape surface. The insight of deep implicit shapes is that given a set of coordinate/[SDF]{acronym-label="SDF" acronym-form="singular+short"} pairs *in some canonical or normalized space*, $\tilde{\mathcal{X}} = \{\tilde{\mathbf{x}}_{i}, s_{i}\}$, a deep [MLP]{acronym-label="MLP" acronym-form="singular+short"} can be trained to approximate a shape's SDF: $$\begin{align}
20
+ f_{\theta_{S}}(\tilde{\mathbf{x}}) \approx \mathit{SDF}(\tilde{\mathbf{x}}), \, \forall \tilde{\mathbf{x}} \in \Omega \mathrm{,} \label{eqn:deep_sdf_1}
21
+ \end{align}$$ where the tilde denotes canonical coordinates. Because it outputs smoothly varying values, [\[eqn:deep_sdf_1\]](#eqn:deep_sdf_1){reference-type="eqref" reference="eqn:deep_sdf_1"} has no resolution limitations, and, unlike meshes, it does not rely on an explicit discretization scheme. In practice, the resolution of any captured details is governed by the model capacity and the set of training samples within $\mathcal{X}_{i}$. While [\[eqn:deep_sdf_1\]](#eqn:deep_sdf_1){reference-type="eqref" reference="eqn:deep_sdf_1"} may describe a single shape, it would not describe the anatomical variation across a set of shapes. To do this, we follow @park2019deepsdf's elegant approach of auto-decoding latent variables. We take a set of $K$ [SDF]{acronym-label="SDF" acronym-form="singular+short"} samples from the same organ, $\mathcal{S}=\{\tilde{\mathcal{X}}_{k}\}_{k=1}^{K}$, and we create a corresponding set of latent vectors, $\mathcal{Z}=\{\mathbf{z}_{k}\}_{k=1}^{K}$. The deep [MLP]{acronym-label="MLP" acronym-form="singular+short"} is modified to accept two inputs, $f_{\theta_{S}}(\tilde{\mathbf{x}}, \mathbf{z}_{k})$, conditioning the output by the latent vector to specify which particular [SDF]{acronym-label="SDF" acronym-form="singular+short"} is being modeled. We jointly optimize the network weights $\theta_{S}$ and latent vectors $\mathcal{Z}$ to produce the best [SDF]{acronym-label="SDF" acronym-form="singular+short"} approximations: $$\begin{align}
22
+ \mathop{\mathrm{arg\,min}}_{\theta_{S}, \mathcal{Z}} \sum_{k}^{K}\left(\sum_{i=1}^{|\tilde{\mathcal{X}_{k}}|} \mathcal{L}(f_{\theta_{S}}(\tilde{\mathbf{x}}_{i}, \mathbf{z}_{k}),s_{i}) + \dfrac{1}{\sigma^2}\|\mathbf{z}_{k} \|_2^2\right) \mathrm{,} \label{eqn:shape_loss}
23
+ \end{align}$$ where $\mathcal{L}$ is the L1 loss and the second term is a zero-mean Gaussian prior whose compactness is controlled by $\sigma^2$.
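+
+ For concreteness, a minimal PyTorch sketch of this auto-decoding objective follows; the network width, latent size, and $\sigma^2$ are illustrative placeholders rather than our experimental configuration.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class SDFDecoder(nn.Module):
+     """MLP f(x, z): canonical coordinate plus latent code -> SDF value."""
+     def __init__(self, latent_dim=128, hidden=256):
+         super().__init__()
+         self.net = nn.Sequential(
+             nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
+             nn.Linear(hidden, hidden), nn.ReLU(),
+             nn.Linear(hidden, 1), nn.Tanh())   # SDF values scaled to [-1, 1]
+
+     def forward(self, x, z):
+         return self.net(torch.cat([x, z], dim=-1)).squeeze(-1)
+
+ K, latent_dim, sigma2 = 100, 128, 1e2          # illustrative sizes
+ decoder = SDFDecoder(latent_dim)
+ latents = nn.Parameter(0.01 * torch.randn(K, latent_dim))  # auto-decoded codes
+ opt = torch.optim.Adam(list(decoder.parameters()) + [latents], lr=1e-4)
+
+ def train_step(k, coords, sdf_gt):             # sampled pairs for shape k
+     z = latents[k].expand(coords.shape[0], -1)
+     loss = (decoder(coords, z) - sdf_gt).abs().mean() \
+            + latents[k].pow(2).sum() / sigma2  # L1 term + zero-mean prior
+     opt.zero_grad(); loss.backward(); opt.step()
+     return loss.item()
+ ```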
24
+
25
+ The implicit shape model assumes shapes are specified in a canonical space. However, segmentation labels are usually given as 3D masks. To create canonical coordinate/[SDF]{acronym-label="SDF" acronym-form="singular+short"} training pairs, we first perform within-slice interpolation [@Albu_2008] on masks to remove the most egregious of discretization artifacts. We then convert the masks to meshes using marching cubes [@Lewiner_2003], followed by a simplification algorithm [@Quadric]. Each mesh is then rigidly aligned to an arbitrarily chosen anchor mesh using coherent point drift [@Myronenko_2010]. Similar to @park2019deepsdf, [SDF]{acronym-label="SDF" acronym-form="singular+short"} and coordinate values are randomly sampled from the mesh, with regions near the surface much more densely sampled. [SDF]{acronym-label="SDF" acronym-form="singular+short"} values are also scaled to fit within $[-1,1]$. Based on the anchor mesh, an affine matrix that maps between canonical and pixel coordinates can be constructed, $\mathbf{x}=\mathbf{A}\tilde{\mathbf{x}}$. More details can be found in the Supplementary.
26
+
27
+ <figure id="fig:shape" data-latex-placement="t">
28
+ <table>
29
+ <tbody>
30
+ <tr>
31
+ <td style="text-align: center;"><img src="images/mean_mesh.png" alt="image" /></td>
32
+ <td style="text-align: center;"><img src="images/o_n.5.png" alt="image" /></td>
33
+ <td style="text-align: center;"><img src="images/o_p.5.png" alt="image" /></td>
34
+ <td style="text-align: center;"><img src="images/1_n.5.png" alt="image" /></td>
35
+ <td style="text-align: center;"><img src="images/1_p.5.png" alt="image" /></td>
36
+ </tr>
37
+ <tr>
38
+ <td style="text-align: center;">mean</td>
39
+ <td style="text-align: center;"><span class="math inline"><em>λ</em><sub>1</sub> = −0.5</span></td>
40
+ <td style="text-align: center;"><span class="math inline"><em>λ</em><sub>1</sub> = 0.5</span></td>
41
+ <td style="text-align: center;"><span class="math inline"><em>λ</em><sub>2</sub> = −0.5</span></td>
42
+ <td style="text-align: center;"><span class="math inline"><em>λ</em><sub>2</sub> = 0.5</span></td>
43
+ </tr>
44
+ <tr>
45
+ <td style="text-align: center;"><img src="images/0.png" alt="image" /></td>
46
+ <td style="text-align: center;"><img src="images/1.png" alt="image" /></td>
47
+ <td style="text-align: center;"><img src="images/2.png" alt="image" /></td>
48
+ <td style="text-align: center;"><img src="images/3.png" alt="image" /></td>
49
+ <td style="text-align: center;"><img src="images/4.png" alt="image" /></td>
50
+ </tr>
51
+ <tr>
52
+ <td style="text-align: center;"><span class="math inline"><em>α</em> = 0</span></td>
53
+ <td style="text-align: center;"><span class="math inline"><em>α</em> = 0.25</span></td>
54
+ <td style="text-align: center;"><span class="math inline"><em>α</em> = 0.5</span></td>
55
+ <td style="text-align: center;"><span class="math inline"><em>α</em> = 0.75</span></td>
56
+ <td style="text-align: center;"><span class="math inline"><em>α</em> = 1</span></td>
57
+ </tr>
58
+ </tbody>
59
+ </table>
60
+ <figcaption><span data-acronym-label="DIAS" data-acronym-form="singular+abbrv">DIAS</span> shape embedding space on <span data-acronym-label="MSD" data-acronym-form="singular+abbrv">MSD</span> liver dataset <span class="citation" data-cites="Simpson_2019"></span>. The top row shows the shapes generated from the mean latent vector and scaled version of the two first <span data-acronym-label="PCA" data-acronym-form="singular+abbrv">PCA</span> bases (<span class="math inline"><em>λ</em><sub>1</sub></span> and <span class="math inline"><em>λ</em><sub>2</sub></span>). The bottom row renders an interpolation between two selected latent vectors: <span class="math inline"><strong>z</strong> = (1 − <em>α</em>)<strong>z</strong><sub>0</sub> + <em>α</em><strong>z</strong><sub>1</sub></span>. </figcaption>
61
+ </figure>
62
+
63
+ Once the shape decoder is trained, any latent vector can be inputted into $f_{\theta_{S}}(.,.)$ along with a set of coordinates to rasterize an [SDF]{acronym-label="SDF" acronym-form="singular+short"}, which can then be rendered by extracting the iso-boundary. As the top row of Fig. [3](#fig:shape){reference-type="ref" reference="fig:shape"} and bottom panel of Fig. [1](#fig:intro){reference-type="ref" reference="fig:intro"} demonstrate, the latent space provides a rich description of shape variations. The mean latent vector, $\boldsymbol\upmu$, produces an anatomically valid shape. A [PCA]{acronym-label="PCA" acronym-form="singular+short"} can capture meaningful variation, *e*.*g*., the first basis corresponds to stretching and flattening while the second controls the prominence of lobe protuberances. Interpolating between latent vectors produces reasonable shapes (bottom row of Fig. [3](#fig:shape){reference-type="ref" reference="fig:shape"}).
64
+
65
+ The next major step is to use the compact and rich [DIAS]{acronym-label="DIAS" acronym-form="singular+short"} shape space to delineate an object boundary given an image, $I$. We assume a dataset of labelled images is available, allowing for the generation of coordinate/[SDF]{acronym-label="SDF" acronym-form="singular+short"} pairs: $\mathcal{D}=\{I_{k}, \mathcal{X}_{k}\}_{k=1}^{K_{\mathcal{D}}}$, where $\mathcal{X}_{k} = \{\mathbf{x}_{i}, s_{i}\}$ is specified using pixel coordinates. Note that we only assume a mask/[SDF]{acronym-label="SDF" acronym-form="singular+short"} is present, and do not require *explicit* ground-truth rigid and non-rigid poses. We need to define: 1) the rigid-body transform from the canonical coordinates to the image space, and 2) the latent vector that specifies the shape variation. We denote the rigid-body transformation as $\mathbf{T}(\omega)$ with parameters $\omega=\{\mathbf{t}, \mathbf{s}, \mathbf{b}\}$, *i*.*e*., translation, anisotropic scale, and rotation, respectively, where $\mathbf{t}\in \mathbb{R}^3$, $\mathbf{s}\in \mathbb{R}^3$, and $\mathbf{b}\in \mathbb{R}^6$. Here we use @zhou_continuity_2019's recent six-dimensional parameterization of the rotation matrix, where we actually predict deviations from identity, *i*.*e*., $\mathbf{I} + \mathbf{T}(\mathbf{b})$. We model shape variation using a truncated [PCA]{acronym-label="PCA" acronym-form="singular+short"} basis that only captures salient variations: $\mathbf{z}=\boldsymbol\upmu+\mathbf{W}\boldsymbol\uplambda$. Unlike explicit [SSMs]{acronym-label="SSM" acronym-form="plural+short"}, the [PCA]{acronym-label="PCA" acronym-form="singular+short"} is performed on the latent space and does not require correspondences. We employ an encoder network, $g_{\theta_{E}}(I)$, to predict the rigid pose parameters, $\omega$, and the non-rigid [PCA]{acronym-label="PCA" acronym-form="singular+short"} loadings $\boldsymbol\uplambda$. The parameters predicted by $g_{\theta_{E}}(.)$ are fed into a frozen $f_{\theta_{S}}(.,.)$ to produce the object's [SDF]{acronym-label="SDF" acronym-form="singular+short"}: $$\begin{align}
66
+ \mathit{SDF}(\mathbf{x}) &= f_{\theta_S}\left(\mathbf{A}^{-1}\mathbf{T}(\omega)\mathbf{x},\boldsymbol\upmu+\mathbf{W}\boldsymbol\uplambda \right), \label{eqn:formulation} \\
67
+ \omega, \boldsymbol\uplambda &= g_{\theta_E}(I) \textrm{.} \label{eqn:formulation_2}
68
+ % \boldsymbol\uplambda & = h_{\psi}(I).
69
+ \end{align}$$
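+
+ For reference, a common realization of this six-dimensional rotation parameterization [@zhou_continuity_2019] is the Gram-Schmidt construction sketched below (a row-vector convention is assumed for illustration):
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def rotation_from_6d(b):
+     """Map a 6D vector b = (a1, a2) to a 3x3 rotation matrix."""
+     a1, a2 = b[..., :3], b[..., 3:]
+     r1 = F.normalize(a1, dim=-1)                        # first basis vector
+     r2 = F.normalize(a2 - (r1 * a2).sum(-1, keepdim=True) * r1, dim=-1)
+     r3 = torch.cross(r1, r2, dim=-1)                    # completes the basis
+     return torch.stack([r1, r2, r3], dim=-2)
+ ```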
70
+
71
+ While [\[eqn:formulation\]](#eqn:formulation){reference-type="eqref" reference="eqn:formulation"} and [\[eqn:formulation_2\]](#eqn:formulation_2){reference-type="eqref" reference="eqn:formulation_2"} could work in principle, directly predicting global pose parameters in one shot is highly sensitive to errors, and we were never able to reach convergence. We instead interpret the encoder $g_{\theta_E}(.)$ as an "agent" that, given an initial pose, $\omega^0$, generates samples along a trajectory by predicting corrections to the previous pose: $$\begin{align}
72
+ \Delta^\tau, \,\boldsymbol\uplambda^\tau &= g_{\theta_E}\left( I_{k}, \omega^{\tau - 1} \right) \mathrm{,} \label{eqn:encoder_trajectory_3} \\
73
+ \omega^{\tau} &= \Delta^\tau \circ \omega^{\tau - 1}, \quad \text{if } \tau>0 \mathrm{,} \label{eqn:encoder_trajectory_4}
74
+ \end{align}$$ where $\circ$ denotes the composition of two rigid-body transforms and $\tau$ indicates the current step in the trajectory. An observation of $\omega^{\tau-1}$ is injected into the input of the encoder so that it is aware of the previous pose to correct. To do this we rasterize the [SDF]{acronym-label="SDF" acronym-form="singular+short"} corresponding to the mean shape, $\mathit{SDF}_{\boldsymbol\upmu}$, *once*. After every step it is rigidly transformed using $\omega^{\tau-1}$ and fed as a second input channel into $g_{\theta_E}(.,.)$. The agent-based formulation turns the challenging one-step pose estimation task into a simpler multi-step correction task. Note in [\[eqn:encoder_trajectory_3\]](#eqn:encoder_trajectory_3){reference-type="eqref" reference="eqn:encoder_trajectory_3"} we do not predict a trajectory for the [PCA]{acronym-label="PCA" acronym-form="singular+short"} loadings. Unlike rigid pose estimation, which can use the transformed $\mathit{SDF}_{\boldsymbol\upmu}$, it is not clear how to best inject a concept of [PCA]{acronym-label="PCA" acronym-form="singular+short"} state into the encoder without rasterizing a new [SDF]{acronym-label="SDF" acronym-form="singular+short"} after every step. Since this is prohibitively expensive, the [PCA]{acronym-label="PCA" acronym-form="singular+short"} loadings are directly estimated at each step. We break the search space down even further by first predicting rigid poses then predicting the [PCA]{acronym-label="PCA" acronym-form="singular+short"} loadings, as detailed below.
75
+
76
+ We first train the encoder $g_{\theta_E}(.,.)$ to predict rigid poses. In training we generate samples along a trajectory of $\mathrm{T}$ steps, which is referred to as an episode. The encoder is trained by minimizing a loss calculated over the episodes generated on the training data: $$\begin{align}
77
+ \mathop{\mathrm{arg\,min}}_{\theta_E} & \sum_{k=1}^{K_{\mathcal{D}}}\sum_{i=1}^{|\mathcal{X}_{k}|} \sum_{\tau=1}^{\mathrm{T}}\mathcal{L}(f_{\theta}\left(\mathbf{A}^{-1}\mathbf{T}(\omega^{\tau})\mathbf{x}_{i},\boldsymbol\upmu \right),s_{i} ) \mathrm{,} \label{eqn:rigid_loss}
78
+ \end{align}$$ where back-propagation is only executed on the encoder weights $\theta_E$ within each step $\tau$, and the dependence on $g_{\theta_E}(.)$ is implied through $\omega_{\tau}$. Note that [\[eqn:rigid_loss\]](#eqn:rigid_loss){reference-type="eqref" reference="eqn:rigid_loss"} uses the mean latent vector, $\boldsymbol\upmu$, to generate [SDF]{acronym-label="SDF" acronym-form="singular+short"} predictions and the $\,\boldsymbol\uplambda$ output in [\[eqn:encoder_trajectory_3\]](#eqn:encoder_trajectory_3){reference-type="eqref" reference="eqn:encoder_trajectory_3"} is ignored for now. This training process shares similarities with [DRL]{acronym-label="DRL" acronym-form="singular+short"}, particularly in its formulation of the prediction step as an [MDP]{acronym-label="MDP" acronym-form="singular+short"}. Unlike [DRL]{acronym-label="DRL" acronym-form="singular+short"}, and similar to [MDP]{acronym-label="MDP" acronym-form="singular+short"} registration tasks [@liao2017artificial; @krebs2017robust; @ma2017multimodal], there is no need for cumulative rewards because a meaningful loss can be directly calculated. At the start of training, the agent will not produce reliable trajectories but, as the model strengthens, the playing out of an *episode* of $\mathrm{T}$ steps for each training iteration will better sample meaningful states to learn from. To expose the agent to a greater set of states, we also inject random pose perturbations, $\eta$, after every step, modifying [\[eqn:encoder_trajectory_3\]](#eqn:encoder_trajectory_3){reference-type="eqref" reference="eqn:encoder_trajectory_3"} and [\[eqn:encoder_trajectory_4\]](#eqn:encoder_trajectory_4){reference-type="eqref" reference="eqn:encoder_trajectory_4"} to $$\begin{align}
79
+ \Delta^\tau, \,\boldsymbol\uplambda^\tau &= g_{\theta_E}\left( I_{k}, \eta^{\tau} \circ \omega^{\tau - 1} \right) \mathrm{,} \label{eqn:encoder_trajectory_3_noise} \\
80
+ \omega^{\tau} &= \Delta^\tau \circ \eta^{\tau} \circ \omega^{\tau - 1}, \quad \text{if } \tau>0 \mathrm{.} \label{eqn:encoder_trajectory_4_noise}
81
+ \end{align}$$ A downside to episodic training is that each image is sampled consecutively $\mathrm{T}$ times, which can introduce instability and overfitting to the learning process. To avoid this we introduce *inverted episodic training*, altering the loop order to make each episodic step play out as an outer loop: $$\begin{align}
82
+ \mathop{\mathrm{arg\,min}}_{\theta_E} & \sum_{\tau=1}^{\mathrm{T}}\sum_{k=1}^{K_{\mathcal{D}}}\sum_{i=1}^{|\mathcal{X}_{k}|} \mathcal{L}(f_{\theta}\left(\mathbf{A}^{-1}\mathbf{T}(\omega^{\tau})\mathbf{x}_{i},\boldsymbol\upmu \right),s_{i} ) \mathrm{,} \label{eqn:encoder_trajectory_1_inverted}
83
+ \end{align}$$ where $\omega^{\tau}$ is saved for each sample after each iteration.
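+
+ In pseudocode, inverted episodic training is simply a swap of the loop order, with each sample's pose carried across outer iterations. The sketch below is schematic: `sample_noise`, `compose`, and `sdf_loss` are assumed helpers mirroring the equations above, and detaching poses between steps is omitted for brevity.
+
+ ```python
+ def inverted_episodic_training(encoder, opt, dataset, T, init_pose,
+                                sample_noise, compose, sdf_loss):
+     """Schematic sketch: the episode step tau is the *outer* loop."""
+     poses = [init_pose for _ in dataset]          # omega^0 for every sample
+     for tau in range(1, T + 1):                   # inverted: steps outermost
+         for k, (image, coords, sdf_gt) in enumerate(dataset):
+             noisy = compose(sample_noise(tau), poses[k])  # eta o omega^(tau-1)
+             delta, lam = encoder(image, noisy)            # predicted correction
+             poses[k] = compose(delta, noisy)              # omega^tau, saved
+             loss = sdf_loss(poses[k], coords, sdf_gt)
+             opt.zero_grad(); loss.backward(); opt.step()
+     return poses
+ ```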
84
+
85
+ The [MDP]{acronym-label="MDP" acronym-form="singular+short"} of [\[eqn:encoder_trajectory_1_inverted\]](#eqn:encoder_trajectory_1_inverted){reference-type="eqref" reference="eqn:encoder_trajectory_1_inverted"} provides an effective sampling strategy, but it requires searching amongst all possible translation, scale, and rotation configurations, which is too large a search space. Indeed, we were unable to ever reliably produce trajectories that converged. To solve this, [DIASs]{acronym-label="DIAS" acronym-form="plural+short"} use a deep realization of [MSL]{acronym-label="MSL" acronym-form="singular+full"} [@zheng2014marginal]. [MSL]{acronym-label="MSL" acronym-form="singular+short"} decomposes the search process into a chain of dependent estimates, focusing on one set while marginalizing out the others. In practice (Tab. [1](#tab:tbl:MSL){reference-type="ref" reference="tab:tbl:MSL"}) this means that we first limit the search space by training the encoder to only predict a translation trajectory, $\mathbf{t}$, with the random perturbations also limited to only translation, *i*.*e*., $\eta_{t}$.
86
+
87
+ ::: {#tab:tbl:MSL}
88
+ Stage $\Delta$ $\eta$ $\omega^{0}$
89
+ ---------- -------------------------------------------- ------------------------------------------ -------------------------------------------------------
90
+ Trans. $\mathbf{t}$ $\eta_{t}$ $\omega_{\mathcal{D}}$
91
+ Scale $\{\mathbf{s},\,\mathbf{t}\}$ $\eta_{s}\circ\eta'_{t}$ $\{\omega_{t,k}^{\mathrm{T}}\}_{k=1}^{K_\mathcal{D}}$
92
+ Rot. $\{\mathbf{b},\,\mathbf{s},\,\mathbf{t}\}$ $\eta_{r}\circ \eta'_{s}\circ\eta'_{t}$ $\{\omega_{s,k}^{\mathrm{T}}\}_{k=1}^{K_\mathcal{D}}$
93
+ Non Rig. $\{\mathbf{b},\,\mathbf{s},\,\mathbf{t}\}$ $\eta'_{r}\circ \eta'_{s}\circ\eta'_{t}$ $\{\omega_{r,k}^{\mathrm{T}}\}_{k=1}^{K_\mathcal{D}}$
94
+
95
+ : [MSL]{acronym-label="MSL" acronym-form="singular+abbrv"} schedule used in [DIAS]{acronym-label="DIAS" acronym-form="singular+abbrv"}.
96
+ :::
97
+
98
+ The initial pose is the mean location in the training set, denoted $\omega_{\mathcal{D}}$. Once trained, the translation encoder weights and final poses, $\{\omega_{t,k}^{\mathrm{T}}\}_{k=1}^{K_\mathcal{D}}$, are used to initialize a scale encoder of identical architecture, but one that predicts scale corrections, $\mathbf{s}$, in addition to finetuning the translation. Importantly, to focus the search space on scale, the random *translation* perturbations are configured to be much smaller than before, which is represented by the prime modifier on $\eta'_{t}$. Finally, a rotation model is trained (while finetuning translation + scale with smaller perturbations). In inference, the rigid pose is estimated by successively applying the models of each stage, using the final pose of the previous step to initialize the next.
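+
+ One way to picture the schedule of Tab. [1](#tab:tbl:MSL){reference-type="ref" reference="tab:tbl:MSL"} is as a configuration handed to the training driver; the sketch below is purely illustrative (the noise magnitudes are assumptions, not our exact settings), with each stage listing the pose parameters it predicts and the relative scale of the injected perturbations.
+
+ ```python
+ # Illustrative MSL schedule; a scale of 0.1 stands in for the "primed",
+ # shrunk perturbations used when a parameter is only being fine-tuned.
+ MSL_STAGES = [
+     {"stage": "translation", "predict": ["t"],           "noise": {"t": 1.0}},
+     {"stage": "scale",       "predict": ["s", "t"],      "noise": {"s": 1.0, "t": 0.1}},
+     {"stage": "rotation",    "predict": ["b", "s", "t"], "noise": {"b": 1.0, "s": 0.1, "t": 0.1}},
+     {"stage": "non_rigid",   "predict": ["b", "s", "t"], "noise": {"b": 0.1, "s": 0.1, "t": 0.1}},
+ ]
+ ```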
99
+
100
+ Once a rigid pose is determined, anatomically plausible deformations can then be estimated. We initialize the weights and poses of the non-rigid encoder using the translation + scale + rotation rigid model, modifying [\[eqn:encoder_trajectory_1_inverted\]](#eqn:encoder_trajectory_1_inverted){reference-type="eqref" reference="eqn:encoder_trajectory_1_inverted"} to now incorporate the [PCA]{acronym-label="PCA" acronym-form="singular+short"} basis: $$\begin{align}
101
+ \mathop{\mathrm{arg\,min}}_{\theta_E} & \sum_{\tau=1}^{\mathrm{T}}\sum_{k=1}^{K_{\mathcal{D}}}\sum_{i=1}^{|\mathcal{X}_{k}|} \mathcal{L}(f_{\theta}\left(\mathbf{A}^{-1}\mathbf{T}(\omega_{\tau})\mathbf{x}_{i},\boldsymbol\upmu+\mathbf{W}\boldsymbol\uplambda^\tau \right),s_{i} ) \nonumber \\
102
+ &+ \dfrac{1}{\sigma^2}\|\boldsymbol\upmu+\mathbf{W}\boldsymbol\uplambda^\tau\|_2^2\mathrm{.} \label{eqn:encoder_trajectory_1_pca}
103
+ \end{align}$$ As Table [1](#tab:tbl:MSL){reference-type="ref" reference="tab:tbl:MSL"} indicates, the random rigid perturbations are configured to be small in magnitude to confine the search space to primarily the [PCA]{acronym-label="PCA" acronym-form="singular+short"} loadings.
104
+
105
+ Like classic [SSMs]{acronym-label="SSM" acronym-form="plural+short"} [@Heimann_2009], non-rigid pose estimation provides a robust and anatomically plausible prediction, but it may fail to capture very fine details. We execute local refinements using an [FCN]{acronym-label="FCN" acronym-form="singular+short"} model, $r=h_{\theta_R}(I, \mathit{SDF}_{\boldsymbol\lambda})$, that accepts a two-channel input comprising the 3D image and the rasterized [SDF]{acronym-label="SDF" acronym-form="singular+short"} after the non-rigid shape estimation. Its goal is to refine the $\mathit{SDF}_{\boldsymbol\lambda}$ to better match the ground truth [SDF]{acronym-label="SDF" acronym-form="singular+short"}. To retain an implicit surface representation, we adapt portions of a recent deep level set loss [@michalkiewicz_implicit_2019]: $$\begin{align}
106
+ \mathcal{L}_{r} &= \sum_{\mathbf{x}\in\Omega_{b}}\left(\mathit{SDF}(\mathbf{x})^2\cdot\delta_{\epsilon}\left(\mathit{SDF}_{\boldsymbol\lambda}(\mathbf{x})+r(\mathbf{x})\right)\right)^{1/2} \nonumber \\
107
+ &+ \lambda_{1} \sum_{\mathbf{x}\in\Omega_{b}}(\|\nabla (\mathit{SDF}_{\boldsymbol\lambda}(\mathbf{x})+r(\mathbf{x}))\|-1)^2 \nonumber \\
108
+ &+ \lambda_2\sum_{\mathbf{x}\in\Omega_{b}}|\max(0,r(\mathbf{x})-\rho)| \mathrm{,} \label{eqn:refinement}
109
+ \end{align}$$ where $\mathit{SDF}$ is the ground truth [SDF]{acronym-label="SDF" acronym-form="singular+short"} and $\delta_{\epsilon}$ is a differentiable approximation of the Dirac-delta function. The first term penalizes mismatches between the iso-boundaries of the refined [SDF]{acronym-label="SDF" acronym-form="singular+short"} and the ground truth. The second term ensures a unit gradient everywhere, guaranteeing that it remains a proper [SDF]{acronym-label="SDF" acronym-form="singular+short"}. See @michalkiewicz_implicit_2019 for more details. The third term penalizes refinements that deviate from $\mathit{SDF}_{\boldsymbol\lambda}$ by more than a margin, $\rho$; within that margin the refinement is free to deviate without penalty. Following standard level set practices, we only produce refinements within a narrow band, $\Omega_{b}$, around the $\mathit{SDF}_{\boldsymbol\lambda}$ iso-boundary, which is also represented in the loss of [\[eqn:refinement\]](#eqn:refinement){reference-type="eqref" reference="eqn:refinement"}. Finally, in addition to standard data augmentations to $I$, we also independently augment $\mathit{SDF}_{\boldsymbol\lambda}$ with random rigid and non-rigid pose variations, enriching the model's training set.
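+
+ A compact PyTorch sketch of this refinement loss is given below, using the standard smooth approximation $\delta_{\epsilon}(u)=\frac{\epsilon}{\pi(u^{2}+\epsilon^{2})}$; the term weights, margin, and $\epsilon$ are illustrative, and the spatial gradient of the refined SDF is assumed to be precomputed on the narrow band.
+
+ ```python
+ import torch
+
+ def refinement_loss(sdf_gt, sdf_lam, r, grad_refined,
+                     lam1=0.1, lam2=0.1, rho=0.05, eps=0.01):
+     """Level-set refinement loss on narrow-band voxels (weights illustrative).
+
+     sdf_gt, sdf_lam, r : (N,) SDF values and refinements on the band
+     grad_refined       : (N, 3) spatial gradient of (sdf_lam + r)
+     """
+     refined = sdf_lam + r
+     delta = eps / (torch.pi * (refined ** 2 + eps ** 2))      # smooth Dirac
+     boundary = torch.sqrt(sdf_gt ** 2 * delta + 1e-12).sum()  # iso-boundary match
+     eikonal = (grad_refined.norm(dim=-1) - 1).pow(2).sum()    # keep a valid SDF
+     margin = torch.clamp(r - rho, min=0).sum()                # stay near SDF_lambda
+     return boundary + lam1 * eikonal + lam2 * margin
+ ```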
2104.08677/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
1
+ <mxfile host="app.diagrams.net" modified="2021-02-01T11:52:00.254Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.104 Safari/537.36" etag="0bcI9aZL3rxGogUB3zi9" version="14.2.9" type="device"><diagram id="R2lEEEUBdFMjLlhIrx00" name="Page-1">1VfbTuMwEP2aSOwDKJfeXmlpuy9oC6wEfUImdhMLx44cp035+h03TpPUgZaVCrsIqfbxeByfmTm2nWCSFHOJ0vhWYMIc38WFE9w4vu/5wxH8aGRbIv2eASJJsTGqgQf6RgzoGjSnmGQtQyUEUzRtg6HgnISqhSEpxaZtthKsvWqKImIBDyFiNvpIsYpLdNR3a/wnoVFcrey5ZiRBlbEBshhhsWlBpFAzwZX5xAWRCeKEKxi5RfKVSKc/jZXSO712/Bn8r7T1VSRExAhKaXYVigTgMAOT2QollGmaG47GxhEsF0ydYCKFUGUrKSaE6VhVYSi/afbO6J4Hqf2eMOF5ztFyvU5HGV35S9ELC/rr0huUbtaI5YZgQ47aVoxHUuSpMSNSkaIrzuilMnftD/P224W0JCIhSm7BxDjqmRkmIf0qlJs6vMOBweJmaAMDIhOvaO+6pgEahonPsGKRco841pF1H4XE8DNNXgjGlEfQvrh/nP6wWAPSOCZ6FdcJxpuYKvKQolCPbqAyAYtVAl9140FTp5EpNYhIMH6X7SarHwTU5rpBptdF5uBcXPoWl3OUZxlFHNCFFDgPdX3d5Ygr+oYUFXrgYr64++c5NW4Ct53AQzt/v5bywKJ8ljO2vfwtEeXkIH3/C4oH7ndz2juuk4Tja33AQS9kCFI8/JgtUlD1ZMZ0e6lJv+qb3k1hYrDrbKsOh808NTuNWbpbT9v1qnnmeMPW2XpSXGCbIpchOS6ZCsmIfOQwOCpP/Y6wVpgkDBRi3d5EV6zNCgtBd6d3VamDdqX2DrKl3KWZ1DxKD/z0vLafS+/AUUmD5WiXeftd/30y2of2q5WNUIKqnX+ZkuKVTAQTEhAuONFJSRk7gBCjEddJDKkBN5VgrAuawg3s2gwkFGO9TKc8tAXk60R4YCtEryOT/HMJxOh8AlGLwrKpCUcEotaEZSUy3ykQ/okCMfxWgRh1H+WfFoig7efwpnpmfRiekIuMwYvtvSJuJCXK0vIZt6KFLmsrS5sK4ujLkP47a6kP+y1y/b59N+h6Lny+8qFbv8/K4NSP6mD6Bw==</diagram></mxfile>
2104.08677/main_diagram/main_diagram.pdf ADDED
Binary file (10.3 kB). View file
 
2104.08677/paper_text/intro_method.md ADDED
@@ -0,0 +1,158 @@
1
+ # Introduction
2
+
3
+ One of the main challenges in NLP is to properly encode discrete tokens into continuous vector representations. The most common practice in this regard is to use an embedding matrix which provides a one-to-one mapping from tokens to $n$-dimensional, real-valued vectors [@mikolov2013distributed; @li2018word]. Typically, the values of these vectors are optimized via back-propagation with respect to a particular objective function. Learning embedding matrices that perform robustly across different domains and data distributions is a complex task, and their quality can directly impact downstream performance [@tian2014probabilistic; @shi2015learning; @sun2016sparse].
4
+
5
+ Embedding matrices are a key component in seq2seq NMT models; for instance, in a Transformer-base NMT model [@vaswani2017attention] with a vocabulary size of $50k$, $36\%$ of the total model parameters are used by embedding matrices. Thus, a significant share of parameters is utilized only for representing tokens in a model. Existing work on embedding compression for NMT systems [@khrulkov2019tensorized; @chen2018groupreduce; @shu2017compressing] has shown that these matrices can be compressed significantly with only a minor drop in performance. Our focus in this work is to study the importance of embedding matrices in the context of NMT. We experiment with Transformers [@vaswani2017attention] and LSTM-based seq2seq architectures, which are the two most popular seq2seq models in NMT. In Section [3.1](#sec:rwe){reference-type="ref" reference="sec:rwe"}, we compare the performance of a fully-trained embedding matrix with a completely random word embedding (RWE), and find that using a random embedding matrix leads to a drop of about $1$ to $4$ BLEU [@papineni2002bleu] points on different NMT benchmark datasets. Neural networks have shown impressive performance with random weights for image classification tasks [@ramanujan2020s]; our experiments show similar results for the embedding matrices of NMT models.
6
+
7
+ <figure id="fig:RWEvsFully" data-latex-placement="t">
8
+ <img src="RWEvsFully.png" style="width:80.0%" />
9
+ <figcaption><span class="math inline"><em>k</em></span> is a hyper-parameter that controls the number of Gaussian distributions required to approximate the embedding matrix.</figcaption>
10
+ </figure>
11
+
12
+ RWE uses no trainable embedding parameters, so it might be possible to recover the drop in BLEU score by increasing the number of parameters through additional layers. We increase the number of layers in the RWE model such that the number of parameters used by the entire model is comparable to the fully-trained embedding model. We find that even in this comparable-parameter setting, the RWE model's performance was inferior to the fully-trained model by ${\sim}1$ BLEU point for high- and medium-resource datasets. Our results suggest that even though the embedding parameters can be compressed to a large extent with only a minor drop in accuracy, embedding matrices are essential components for token representation, and cannot be replaced by deeper layers of transformer-based models.
13
+
14
+ RWE assumed a fully random embedding sampled from a single Gaussian distribution; however, to control the amount of random information, and to better understand the importance of embedding matrices, we introduce *Gaussian product quantization* (GPQ). GPQ assumes that $k$ Gaussian distributions are required to approximate the embedding matrix for NMT models. The means and variances of the $k$ distributions are learned from a fully-trained embedding matrix trained on the same dataset as the GPQ model. $k$ is a hyper-parameter which controls the amount of information distilled from a pre-trained embedding into the partially random embedding of the GPQ model. GPQ has the ability to move from a fully random embedding to a fully-trained embedding by increasing the number of distributions $k$, as shown in Figure [1](#fig:RWEvsFully){reference-type="ref" reference="fig:RWEvsFully"}. Our results show that only $50$ Gaussian distributions are sufficient to approximate the embedding matrix without a drop in performance. GPQ compresses the embedding matrix by a factor of $5$ without any significant drop in performance, and uses only $100$ floating-point values for storing the means and variances. GPQ also acts as an effective regularizer for the embedding matrix, outperforming the transformer baseline with a fully-trained embedding by $1.2$ BLEU points on the $En \rightarrow Fr$ dataset and $0.5$ BLEU points on the $Pt \rightarrow En$ dataset. A similar increase in performance was also observed for LSTM-based models, further showing the effectiveness of GPQ for embedding regularization.
15
+
16
+ Product Quantization (PQ) [@jegou2010product] was originally proposed to speed up approximate nearest-neighbour search. PQ has since been adapted for compressing embedding matrices for different NLP problems [@shu2017compressing; @kim2020adaptive; @tissier2019near; @li2018slim]. Our method extends PQ in two ways: first, we incorporate variance information in GPQ; second, we define *unified partitioning* to learn a more robust shared space for approximating the embedding matrix. Our extensions consistently outperform the original PQ-based models with better compression rates and higher BLEU scores. These improvements are observed for Transformer-based models as well as conventional recurrent neural networks.
17
+
18
+ PQ is the core of our techniques, so we briefly discuss its details in this section. For more details see @jegou2010product. As previously mentioned, NLP models encode tokens from a discrete to a continuous domain via an embedding matrix *E*, as simply shown in Equation [\[eqn:wordembedding\]](#eqn:wordembedding){reference-type="ref" reference="eqn:wordembedding"}: $$\begin{equation}
19
+ E \in \mathbb{R}^{|V| \times n}
20
+ \label{eqn:wordembedding}
21
+ \end{equation}$$ where $V$ is a vocabulary set of unique tokens and $n$ is the dimension of each embedding. We use the notation $E_{w} \in \mathbb{R}^n$ to refer to the embedding of the *w*-th word in the vocabulary set. In PQ, first $E$ is partitioned into $g$ groups across columns with $G_{i} \in \mathbb{R}^{|V| \times \frac{n}{g}}$ representing the *i*-th group. Then each group $G_{i}$ is clustered using K-means into $c$ clusters. Cluster indices and cluster centroids are stored in the vector $Q_{i} \in \mathbb{N}^{|V|}$ and matrix $C_{i} \in \mathbb{R}^{c \times \frac{n}{g}}$, respectively. Figure [2](#fig:pqsimple){reference-type="ref" reference="fig:pqsimple"} illustrates the PQ-based decomposition process.
22
+
23
+ <figure id="fig:pqsimple" data-latex-placement="h">
24
+ <img src="structembnew.png" style="width:80.0%" />
25
+ <figcaption>The matrix on the left hand side shows the original embedding matrix <span class="math inline"><em>E</em></span>, after dividing columns into <span class="math inline">3</span> groups (<span class="math inline"><em>G</em><sub><em>i</em></sub>; <em>i</em> ∈ {1, 2, 3}</span>) and applying the k-means algorithm with the cluster number <span class="math inline">2</span> (<span class="math inline"><em>c</em> = 2</span>) to each group. The digit inside each block is the cluster number. The figure on the right hand side shows the quantization matrix Q, and codebook C.</figcaption>
26
+ </figure>
27
+
28
+ We can obtain the quantization matrix $Q$ (also known as the *index matrix* in the literature) and *codebook* $C$ by concatenating $Q_i$ and $C_i$ for all $g$ groups along the columns, as shown in Equation [\[eqn:pqconcat\]](#eqn:pqconcat){reference-type="ref" reference="eqn:pqconcat"}.
29
+
30
+ $$\begin{equation}
31
+ \begin{split}
32
+ & Q = \text{Concat}_{\text{column}}(Q_1, Q_2, ..., Q_g) \\
33
+ & C = \text{Concat}_{\text{column}}(C_1, C_2,...,C_g)
34
+ \end{split}
35
+ \label{eqn:pqconcat}
36
+ \end{equation}$$
37
+
38
+ The decomposition process applied by PQ is reversible, namely any embedding table decomposed by PQ can be reconstructed using the matrices $Q_{i}$ and $C_{i}$. The size of an embedding table compressed via PQ can be calculated as shown in Equation [\[eqn:PQparamsize\]](#eqn:PQparamsize){reference-type="ref" reference="eqn:PQparamsize"}: $$\begin{equation}
39
+ \text{Size}_{\text{PQ}} = \text{log}_2(c)\times|V|\times g + c n f_p \text{ (in bits)}
40
+ \label{eqn:PQparamsize}
41
+ \end{equation}$$ where $f_p$ is the number of bits used to store each floating-point parameter. In our experiments $f_p$ is $32$ for all settings. We use Equation [\[eqn:PQparamsize\]](#eqn:PQparamsize){reference-type="ref" reference="eqn:PQparamsize"} to measure the compression rate of different models.
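+
+ To make the decomposition and the size computation concrete, below is a small sketch of the PQ procedure described above using scikit-learn's K-means; it assumes $n$ is divisible by $g$ and is not our exact implementation.
+
+ ```python
+ import numpy as np
+ from sklearn.cluster import KMeans
+
+ def product_quantize(E, g, c):
+     """Structured PQ: split E (|V| x n) into g column groups, cluster each."""
+     V, n = E.shape
+     w = n // g                            # width of one group
+     Q = np.empty((V, g), dtype=np.int64)  # quantization (index) matrix
+     C = np.empty((c, n))                  # codebook; group i in cols [i*w, (i+1)*w)
+     for i in range(g):
+         km = KMeans(n_clusters=c, n_init=10).fit(E[:, i * w:(i + 1) * w])
+         Q[:, i] = km.labels_
+         C[:, i * w:(i + 1) * w] = km.cluster_centers_
+     return Q, C
+
+ def reconstruct(Q, C, g):
+     """Rebuild the embedding by looking up each group's centroid block."""
+     w = C.shape[1] // g
+     return np.concatenate(
+         [C[:, i * w:(i + 1) * w][Q[:, i]] for i in range(g)], axis=1)
+
+ def size_pq_bits(V, n, g, c, f_p=32):
+     return np.log2(c) * V * g + c * n * f_p  # Equation (eqn:PQparamsize)
+ ```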
42
+
43
+ # Method
44
+
45
+ In this section we first explain RWE, which stands for Random Word Embeddings. In RWE, all embeddings are initialized with completely random values, so no syntactic or semantic information is available.
46
+
47
+ Next, we move from completely random embeddings to weakly supervised embeddings by incorporating some knowledge from a pre-trained embedding matrix. We propose our Gaussian Product Quantization (GPQ) technique in this regard, which applies PQ to a pre-trained embedding matrix and models each cluster with a Gaussian distribution. Our GPQ extends PQ by taking intra-cluster variance information into account. This approach is particularly useful when PQ clusters have a high variance, which might be the case for large embedding matrices.
48
+
49
+ A simple approach to form random embeddings is to sample their values from a standard Gaussian distribution, as shown in Equation [\[eqn:univariate\]](#eqn:univariate){reference-type="ref" reference="eqn:univariate"}:
50
+
51
+ $$\begin{align}
52
+ S_{i,j} &\sim \mathcal{N}(\mu, \sigma^2) \; \forall (i,j) \nonumber \\
53
+ E^{'}_i &= \frac{S_{i}}{||S_{i}||} \; \forall i,
54
+ \label{eqn:univariate}
55
+ \end{align}$$ where $\mathcal{N}$ is a normal distribution with $\mu=0$ and $\sigma=1$, $S$ is a sampled matrix of the same dimensions as $E$, $E'$ is the reconstructed matrix, and $E'_i$ and $S_i$ represent the $i^{th}$ row vectors of matrices $E'$ and $S$, respectively. We normalize each word embedding vector so that all word embeddings have the same magnitude, ensuring that we do not add any bias to the embedding matrix.
56
+
57
+ Since this approach relies on completely random values, to increase the expressiveness of the embeddings and handle any potential dimension mismatch between the embedding table and the first layer of the neural model, we place an *optional* linear transformation matrix $W$ (where $W \in \mathbb{R}^{n \times m}$) after the embedding lookup. The weight matrix $W$ uses $n \times m$ trainable parameters, where $n$ is the size of the embedding vector and $m$ is the dimension in which the model accepts its inputs. The increase in the number of parameters is negligible as it only depends on the embedding and input dimensions ($n$ and $m$), which are both considerably smaller than the number of words in a vocabulary ($n,m \ll |V|$).
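+
+ A minimal sketch of RWE with the optional projection follows; the sizes are illustrative, and freezing is implemented here with a registered buffer.
+
+ ```python
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F
+
+ class RandomWordEmbedding(nn.Module):
+     """Frozen random embeddings, row-normalized, with an optional projection."""
+     def __init__(self, vocab_size, emb_dim, model_dim=None):
+         super().__init__()
+         table = F.normalize(torch.randn(vocab_size, emb_dim), dim=-1)
+         self.register_buffer("table", table)   # fixed: receives no gradient
+         self.proj = nn.Linear(emb_dim, model_dim) if model_dim else None
+
+     def forward(self, token_ids):
+         e = self.table[token_ids]
+         return self.proj(e) if self.proj is not None else e
+ ```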
58
+
59
+ RWE considers completely random embeddings which can be a very strict condition for an NLP model. Therefore, we propose our Gaussian Product Quantization (GPQ) to boost random embeddings with prior information from a pre-trained embedding matrix. We assume that there is an existing embedding matrix $E$ that is obtained from a model with the same architecture as ours, trained for the same task using the same dataset.
60
+
61
+ The pipeline defined for GPQ is as follows: First, we apply PQ to the pre-trained embedding matrix $E$ to derive the quantization matrix $Q_{|V|\times g}$ and the codebook matrix $C_{c\times n}$. The codebook matrix stores centers of $c$ clusters for all $G_i$ groups and the quantization matrix stores the mapping index of each sub-vector in the embedding matrix to the corresponding center of the cluster in the codebook. The two matrices $Q$ and $C$ can be used to reconstruct the original matrix $E$ from the PQ process.
62
+
63
+ The PQ technique only stores the center of each cluster in the codebook and does not take their variance into account. In contrast, our GPQ technique models each cluster with a single Gaussian distribution by setting its mean and variance parameters to the cluster center and the intra-cluster variance. We define $\mathcal{C}_i^j , (1 \leq j \leq c)$ to be the cluster corresponding to the $j^{\text{th}}$ row of $C_i$ in the codebook (that is $C_i^j$) with mean and variance $\mu_i^j \in \mathbb{R}^{\frac{n}{g}}$ and $(\sigma_i^j)^2$. Then, we define the entries of the codebook of our GPQ approach as follows: $$\begin{equation}
64
+ \begin{split}
65
+ & \hat{C}_i^j \sim \mathcal{N}_i^j (\mu_i^j,(\sigma_i^j)^2).
66
+ \end{split}
67
+ \label{GPQ:Gaussaian}
68
+ \end{equation}$$ Consequently, we model each cluster of PQ with a single Gaussian distribution with two parameters. The embedding matrix $\hat{E}$ can then be reconstructed using a lookup function that maps the indices in $Q_i$ to rows of $\hat{C}_i$. We illustrate our GPQ method and compare it with PQ in Figure [6](#fig:main){reference-type="ref" reference="fig:main"}. In GPQ, we only need to store the index matrix, the codebook, and the corresponding variances in order to reconstruct the embedding matrix.
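+
+ The corresponding GPQ codebook only differs in how its entries are produced: each row is sampled from the stored per-cluster Gaussian. A sketch, under the same assumptions as the PQ code above and with an illustrative variance shape, is:
+
+ ```python
+ import numpy as np
+
+ def gpq_codebook(mu, var, seed=0):
+     """Sample a GPQ codebook: one draw per cluster from N(mu, var).
+
+     mu : (c, n/g) per-cluster means; var : (c,) intra-cluster variances.
+     """
+     rng = np.random.default_rng(seed)
+     return mu + np.sqrt(var)[:, None] * rng.standard_normal(mu.shape)
+ ```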
69
+
70
+ The PQ/GPQ method relies on partitioning the embedding matrix into same-size groups $G_i$ across the columns and then clustering them. We refer to the partitioning function used in PQ as *Structured Partitioning*, and propose a new partitioning scheme, which we refer to as *Unified Partitioning*. The details of each scheme are described in the following.
71
+
72
+ <figure id="fig:main" data-latex-placement="t">
73
+ <figure id="fig:orig">
74
+ <img src="OriginalEmbedding.png" style="width:100.0%" />
75
+ <figcaption aria-hidden="true"><span id="fig:orig" data-label="fig:orig"></span></figcaption>
76
+ </figure>
77
+ <figure id="fig:comp">
78
+ <img src="PQvsEstimate.png" style="width:100.0%" />
79
+ <figcaption aria-hidden="true"><span id="fig:comp" data-label="fig:comp"></span></figcaption>
80
+ </figure>
81
+ <figure id="fig:rec">
82
+ <img src="ReconstructedEmbedding.png" style="width:100.0%" />
83
+ <figcaption aria-hidden="true"><span id="fig:rec" data-label="fig:rec"></span></figcaption>
84
+ </figure>
85
+ <figcaption><span id="fig:main" data-label="fig:main"></span>The figure on the left hand side shows the original embedding matrix <span class="math inline"><em>E</em></span> after applying the k-means algorithm, with 2 clusters (<span class="math inline"><em>c</em> = 2</span>) and 3 groups (<span class="math inline"><em>G</em><sub><em>i</em></sub></span>) (<span class="math inline"><em>g</em> = 3</span>), which results in 6 partitions (<span class="math inline"><em>P</em><sub><em>i</em></sub></span>). The figure in the middle shows the difference between <em>PQ</em> and GPQ. The figure on the right hand side is the reconstructed matrix <span class="math inline"><em>E</em><sup>′</sup></span>.</figcaption>
86
+ </figure>
87
+
88
+ The original PQ method is based on structured partitioning, which partitions the input matrix $E_{|V|\times n}$ into $g$ groups $G_i$ of size $|V|\times \frac{n}{g}$ along the columns uniformly, such that $E=\text{Concat}_{\text{column}}(G_1, G_2, ..., G_g)$. Each group $G_i$ is clustered into $c$ clusters along the rows using any clustering algorithm. If we choose the K-means clustering algorithm for this purpose, then we can obtain the centers of the clusters $C_i \subset C \text{ (where } C_i \in \mathbb{R}^{c\times \frac{n}{g}})$, their variances $\sigma_i^2 \in \mathbb{R}^{{c}}$, and the quantization vector $Q_i \in \mathbb{N^{|V|}}$ corresponding to $G_i$ as follows:
89
+
90
+ $$\begin{equation}
91
+ \begin{split}
92
+ & [C_i,\sigma_i^2, Q_i] = \text{K-means}(G_i,c).
93
+ \end{split}
94
+ \end{equation}$$
95
+
96
+ The total number of clusters in this case is $k=c \cdot g$. In our technique, we store the variance of each cluster in addition to its mean, which uses more floating-point parameters than Equation [\[eqn:PQparamsize\]](#eqn:PQparamsize){reference-type="ref" reference="eqn:PQparamsize"}, as shown in Equation [\[eqn:SPQparamsize\]](#eqn:SPQparamsize){reference-type="ref" reference="eqn:SPQparamsize"}. $$\begin{equation}
97
+ \text{Size}_{\text{GPQ}}^{\text{Structured}} = \text{log}_2(c) \times|V| \times g + 2 c n f_p \text{ bits}
98
+ \label{eqn:SPQparamsize}
99
+ \end{equation}$$
100
+
101
+ *Structured* partitioning has limited ability to exploit redundancies across groups. Motivated by this shortcoming we propose our *unified* partitioning approach in which we concatenate the $g$ groups along rows to form a matrix $\mathcal{G}\in \mathbb{R}^{g|V|\times \frac{n}{g}}$ and then apply K-Means to the matrix $\mathcal{G}$ (instead of applying K-means to each group $G_i$ separately). $$\begin{align}
102
+ \mathcal{G} &= \text{Concat}_{row}(G_{1} , G_{2} , ..., G_{g}) \in \mathbb{R}^{(g|V|) \times (n/g)} \\
103
+ & [C,\sigma^2, Q] = \text{K-means}(\mathcal{G} ,c).
104
+ \label{eqn:unified}
105
+ \end{align}$$ For better understanding, we illustrate our method in Figure [7](#fig:unifiedPQ){reference-type="ref" reference="fig:unifiedPQ"}. Equation [\[eqn:USPQparamsize\]](#eqn:USPQparamsize){reference-type="ref" reference="eqn:USPQparamsize"} gives the parameter size for the *unified* partitioning function. $$\begin{equation}
106
+ \text{Size}_{\text{GPQ}}^{\text{Unified}} = \text{log}_2(c)\times |V| \times g + 2 c \frac{n}{g} f_p \text{ bits}
107
+ \label{eqn:USPQparamsize}
108
+ \end{equation}$$
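+ The unified scheme differs only in stacking the groups row-wise before a single K-means call; a corresponding sketch, reusing the imports above (again illustrative):
+
+ ```python
+ def unified_gpq(E, g, c):
+     """Stack the g column groups into a (g|V|) x (n/g) matrix and
+     run one K-means over it, instead of g separate ones."""
+     V, n = E.shape
+     assert n % g == 0
+     G_all = np.concatenate(np.split(E, g, axis=1), axis=0)  # (g|V|) x (n/g)
+     km = KMeans(n_clusters=c, n_init=10).fit(G_all)
+     Q = km.labels_.reshape(g, V).T               # |V| x g, one code per group
+     var = np.array([G_all[km.labels_ == k].var() for k in range(c)])
+     return km.cluster_centers_, var, Q           # centers: c x (n/g)
+ ```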
109
+
110
+ ![[]{#fig:unifiedPQ label="fig:unifiedPQ"} *Unified* partitioning function with 3 Groups.](unifiedPQ.png){#fig:unifiedPQ width="100%"}
111
+
112
+ ::: {#tab:discreteparameters}
113
+ | Model         | Float | Integer |
+ |---------------|-------|---------|
+ | PQ            | 25.6k | 16.3M   |
+ | PQ (Unified)  | 50    | 16.3M   |
+ | GPQ (Unified) | 100   | 16.3M   |
+
+ : Comparison of floating-point vs. integer parameters for a vocabulary size of 32k, embedding size 512, 512 groups, and 50 clusters
120
+ :::
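+ As a quick sanity check on the table above: with $|V| = 32\text{k}$, $n = 512$, $g = 512$, and $c = 50$, we have $n/g = 1$, so structured PQ stores $c \times n = 50 \times 512 = 25.6\text{k}$ floating-point center values, unified PQ stores only $c \times n/g = 50$, and unified GPQ stores $2 \times 50 = 100$ by keeping a variance next to each mean; in all cases the integer codes occupy $|V| \times g = 32\text{k} \times 512 \approx 16.3\text{M}$ entries.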
121
+
122
+ ::: table*
+ | Model       | WMT En-Fr: Layers / Params / BLEU | WMT En-De: Layers / Params / BLEU | IWSLT Pt-En: Layers / Params / BLEU |
+ |-------------|-----------------------------------|-----------------------------------|-------------------------------------|
+ | Transformer | 6 / 60.7M / 38.41                 | 6 / 63M / 27.03                   | 3 / 11.9M / 39.58                   |
+ | RWE+linear  | 6 / 44.4M / 35.76                 | 6 / 44.2M / 23.04                 | 3 / 3.7M / 38.14                    |
+ | RWE+linear  | 8 / 59M / 37.11                   | 8 / 58.9M / 25.83                 | 6 / 11.1M / **40.69**               |
+ :::
141
+
142
+ ::: table*
+ | Model                | WMT En-Fr: Size / BLEU | WMT En-De: Size / BLEU | IWSLT Pt-En: Size / BLEU |
+ |----------------------|------------------------|------------------------|--------------------------|
+ | Transformer Baseline | 62.5 MB / 39.29        | 72.3 MB / 27.38        | 31.3 MB / 39.88          |
+ | PQ (Structured)      | 11.82 MB / 39.95       | 13.65 MB / 27.04       | 5.9 MB / 40.33           |
+ | GPQ (Structured)     | 11.92 MB / 40.04       | 13.75 MB / 27.11       | 5.95 MB / 40.16          |
+ | PQ (Unified)         | 11.7 MB / 40.02        | 13.54 MB / 26.87       | 5.86 MB / 39.34          |
+ | GPQ (Unified)        | 11.7 MB / **40.51**    | 13.55 MB / **27.35**   | 5.86 MB / **40.36**      |
+ :::
2104.14557/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2104.14557/paper_text/intro_method.md ADDED
@@ -0,0 +1,90 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ We study the task of learning personalized head avatars in a low-shot setting, also known as *"neural talking heads"*. Given single- or few-shot images of a source subject and a driving sequence of facial landmarks, possibly derived from a different subject, the goal is to synthesize a photorealistic video of the source subject under the poses and expressions of the driving sequence. This task has a wide range of applications, including AR/VR, video conferencing, gaming, animated movie production, and video compression in telecommunication.
4
+
5
+ Traditional graphics-based approaches to this task rely on a 3D face geometry and produce very high quality synthesis. However, they tend to focus on modeling the face area without the hair, and they learn a subject-specific model that cannot generalize to new subjects. In contrast, recent 2D-based approaches [\[1,](#page-8-0) [2,](#page-8-1) [3,](#page-8-2) [4\]](#page-8-3) learn a subject-agnostic model that can animate unseen subjects given as few as a single image. Furthermore, since these works learn an implicit model and do not require an explicit geometric representation, they can synthesize the full head, including the hair, mouth
6
+
7
+ ![](_page_0_Figure_9.jpeg)
8
+
9
+ Figure 1: Our framework disentangles spatial and style information for image synthesis. It predicts a latent spatial layout for the target image, which is used to produce per-pixel style modulation parameters for the final synthesis.
10
+
11
+ interior, and even wearable accessories like glasses and earrings. This remarkable generalization ability, however, comes at the cost of low quality and poor identity preservation when compared to their 3D-based subject-specific counterparts. Bridging the quality gap between 2D-based subject-agnostic and 3D-based subject-specific approaches remains an open problem.
12
+
13
+ Recent efforts in 2D-based approaches can be divided into two classes: *warping-based* and *direct synthesis*. As the name suggests, warping-based methods (*e.g.,* [\[2\]](#page-8-1)) learn to warp the input image or a recovered canonical pose based on the motion of the driving sequence. While these methods achieve high realism, especially for static and rigid parts of the image, they tend to work well only for a limited range of motion, head rotation, and dis-occlusion. On the other hand, direct synthesis approaches (*e.g.,* [\[1,](#page-8-0) [3,](#page-8-2) [4\]](#page-8-3)) encode the source subject into a compressed latent code, and a generator decodes the latent code to synthesize the target pose. These approaches learn a prior over the compressed latent space, and can generate realistic results for a wider range of poses and head motion. However, they exhibit a noticeable identity gap between their output and the source subject.
14
+
15
+ We posit that the identity gap is caused by the entangled representation of the source subject in a single latent code. This compressed 1D latent encodes multi-view shape information, identity cues, as well as color information, lighting and background details. In order to synthesize a target view from a latent code, the generator needs to devise a complex function to decode the uni-dimensional latent into its corresponding 2D spatial information. We argue this not only consumes a large portion of the network capacity, but also
16
+
17
+ <span id="page-1-1"></span><span id="page-1-0"></span>![](_page_1_Figure_0.jpeg)
18
+
19
+ Figure 2: Overview of our training pipeline. The cross-entropy loss with the oracle segmentation is used while pre-training the layout predictor $G^l$, and is then turned off during the full pipeline training.
20
+
21
+ limits the amount of information that can be encoded in the latent code.
22
+
23
+ To address this problem, we propose a two-step framework that decomposes the synthesis of a talking head into its spatial and style components. Our framework animates a source subject in two steps. First, it predicts a novel spatial layout of the subject under the target pose and expression. Then, it synthesizes the target frame conditioned on the predicted layout. This factorized representation yields the following key performance advantages.
24
+
25
+ Better subject-agnostic performance. Our subject-agnostic (also called *meta-learned*) model not only performs better than the previous subject-agnostic state of the art, but is also on par with the subject-finetuned performance of previous works when only a few source images are available (*e.g.,* fewer than 10 images).
26
+
27
+ Better fine-tuned performance with less data. Fine-tuning our model for a specific subject requires significantly less data and fewer iterations than previous works, yet achieves better performance. For example, we show that fine-tuning our model using 4-shot inputs outperforms previous state-of-the-art models fine-tuned using 32-shot inputs.
28
+
29
+ Robustness to pose variations. We show that our model is more robust against a wider range of poses and facial expressions, while still producing both realistic and identitypreserving results.
30
+
31
+ Improved identity preservation. Shape difference between the source and driving identities poses a challenge for identity preservation in reenacted results. The intermediate novel spatial representation learned by our model reduces the sensitivity towards such differences and better preserves the identity.
32
+
33
+ In summary, we make the following contributions:
34
+
35
+ - A novel approach that disentangles the spatial and style components for *talking-head* synthesis.
36
+ - A novel latent spatial representation that proves effective for few-shot novel view synthesis.
37
+ - We achieve state-of-the-art performance in both the single-shot and multi-shot settings, as well as in the meta-learned and subject-finetuned modes.
38
+
39
+ # Method
40
+
41
+ Our approach factorizes the representation of a head avatar into spatial and style components. It breaks down the novel view head synthesis of a subject into two steps. First, a layout prediction network $G^l$ translates facial landmarks for a target view into a dense spatial layout of the subject. Then, an image generator $G^s$ synthesizes the final image conditioned on the predicted layout. We first give an overview of our pipeline in Section 3.1. Then, we explain
42
+
43
+ <span id="page-2-2"></span>![](_page_2_Picture_5.jpeg)
44
+
45
+ Figure 3: Layout pre-training predicts meaningful segmentation maps despite the noisy oracle segmentations. Our latent spatial representation encodes more information than traditional segmentations.
46
+
47
+ how to pre-train the layout prediction network $G^l$ to predict semantic segmentations of novel views in Section 3.2, followed by the full pipeline training in Section 3.3. Section 3.4 explains how the layout prediction network $G^l$ transitions from predicting semantic maps to learning a more powerful latent spatial representation. And finally, we discuss how to learn a personalized head avatar through an optional subject-specific fine-tuning stage in Section 3.5.
48
+
49
+ Given K-shot inputs $\{I_1 \dots I_K\}$ of a source subject, a two-headed encoder $E = \{E^l, E^s\}$ processes the inputs and generates K layout latents $\{z_i^l\}$ and K style latents $\{z_i^s\}$ for $i \in \{1 \dots K\}$. The K latents are then averaged to get an aggregated layout latent $z^l = \frac{1}{K} \sum_{i=1}^K z_i^l$ and style latent $z^s = \frac{1}{K} \sum_{i=1}^K z_i^s$. Averaging the K latents cancels out view-specific information and transient occluders, and maintains implicit 3D information such as the head and hair shape for the layout latent, and color and lighting information for the style latent. We have two generators: a layout predictor network $G^l$ and an image generator $G^s$. The layout predictor takes as input the facial landmarks for a target view $x_t$ and the layout latent $z^l$ and generates a spatial layout $y_t^l = G^l(x_t, z^l)$, such as a semantic map, for the target view. The image generator $G^s$ processes the style latent $z^s$ and utilizes spatial denormalization layers (SPADE [30]), conditioned on the predicted layout $y^l$, to synthesize the final image $I = G^s(y^l, z^s)$. An overview of our framework is shown in Figure 2.
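+ The resulting data flow can be summarized in a short PyTorch-style sketch; this is a minimal rendering assuming abstract encoder/generator callables, with all names illustrative rather than the paper's actual API:
+
+ ```python
+ import torch
+
+ def synthesize(E_l, E_s, G_l, G_s, sources, landmarks_t):
+     """Two-step forward pass: average per-image latents, predict the
+     spatial layout for the target landmarks, then synthesize with SPADE.
+
+     sources: (K, 3, H, W) tensor of K-shot source images
+     landmarks_t: rasterized facial landmarks of the target view
+     """
+     z_l = E_l(sources).mean(dim=0, keepdim=True)  # aggregated layout latent z^l
+     z_s = E_s(sources).mean(dim=0, keepdim=True)  # aggregated style latent z^s
+     y_l = G_l(landmarks_t, z_l)                   # predicted spatial layout y_t^l
+     return G_s(y_l, z_s)                          # final image I = G^s(y^l, z^s)
+ ```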
50
+
51
+ Training the above pipeline end-to-end without any supervision or constraints on the predicted layouts results in a degenerate solution, where the spatial layouts and their corresponding spatial denormalization are completely ignored. All spatial and style information is then encoded into and decoded from the style latent $z^s$, which results in poor performance. Therefore, we opt to pre-train the layout prediction network to predict a plausible semantic segmentation
52
+
53
+ <span id="page-3-4"></span>of a target view, given the input facial landmarks x<sup>t</sup> and the layout latent z l . To supervise this training, we use an off-theshelf face segmentation network [\[34\]](#page-9-2) as an oracle to segment the target image I<sup>t</sup> into a semantic map St, and we apply a cross-entropy loss (X-ent) between the oracle segmentation S<sup>t</sup> and our predicted segmentation y l <sup>t</sup> = G<sup>l</sup> (xt, z<sup>l</sup> ). We observe that the obtained oracle segmentations are very noisy and have poor quality (*e.g.,* Figure [3\)](#page-2-2). This is caused by the domain gap, in terms of image resolution and the distribution of head poses, between the datasets used to train the oracle segmentation network [\[34\]](#page-9-2), and in-the-wild videos of talking heads. Thus, to regularize the segmentation prediction training, we use a mutli-task pre-training strategy where the layout prediction network predicts an extra RGB reconstruction R<sup>t</sup> of the target image It, which is used as a secondary supervisory signal. Specifically, we have
54
+
55
+ $$y_t^l, R_t = G^l(x_t, z^l), \qquad z^l = \frac{1}{K} \sum_{i=1}^K E^l(I_i)$$
56
+ (1)
57
+
58
+ And the objective for the pre-training is
59
+
60
+ $$\mathcal{L}_{\text{seg}} = X\text{-ent}(y_t^l, S_t) + \lambda_R \mathcal{L}_R(R_t, I_t)$$
61
+ (2)
62
+
63
+ where $\mathcal{L}_R$ is a perceptual reconstruction loss, and $\lambda_R$ is a relative weighting term that is set to a low value.
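+ To make this objective concrete, here is a minimal sketch; the perceptual loss is passed in as a callable, and the $\lambda_R$ default below is purely illustrative since the text only states that it is set low:
+
+ ```python
+ import torch.nn.functional as F
+
+ def pretrain_loss(y_l, R_t, S_t, I_t, perceptual_loss, lambda_R=0.1):
+     """Eqn. 2: cross-entropy against the oracle segmentation plus a
+     down-weighted RGB reconstruction term as a secondary signal."""
+     xent = F.cross_entropy(y_l, S_t)  # y_l: (B, C, H, W) logits, S_t: (B, H, W) labels
+     return xent + lambda_R * perceptual_loss(R_t, I_t)
+ ```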
64
+
65
+ Once the layout predictor network has been pre-trained to predict semantic segmentations, we plug it into our full pipeline. The predicted segmentation is fed as the spatial input to a SPADE image generator $G^s$ that synthesizes the final image as
66
+
67
+ $$\hat{I} = G^s(G^l(x_t, z^l), z^s), \qquad z^s = \frac{1}{K} \sum_{i=1}^K E^s(I_i)$$
68
+ (3)
69
+
70
+ We observe that the SPADE generator quickly utilizes the input spatial segmentations to resolve spatial ambiguities, and we no longer fall into a degenerate solution where the spatial input is ignored.
71
+
72
+ Our full pipeline, comprising the layout and style encoders $\{E^l, E^s\}$, the layout predictor $G^l$, and the image generator $G^s$, is optimized to minimize three losses: a reconstruction loss $\mathcal{L}_{rec}$, an adversarial loss $\mathcal{L}_{adv}$, and a latent regularization loss $\mathcal{L}_{L2}$.
73
+
74
+ For the reconstruction loss $\mathcal{L}_{rec}$, we employ a perceptual loss [\[35\]](#page-9-3) based on both the VGG19 [\[36\]](#page-9-4) and VGGFace [\[37\]](#page-9-5) networks, as well as an L1 loss. While the VGG19-based perceptual loss is a standard reconstruction loss, we follow Zakharov *et al*. [\[1\]](#page-8-0) and utilize a VGGFace-based perceptual loss to promote identity preservation. We also use an L1 loss to better match colors between the synthesized and ground-truth images.
75
+
76
+ The adversarial loss $\mathcal{L}_{adv}$ encourages the output to be photo-realistic. To achieve that, a discriminator network $D$ is trained to discriminate between real and fake images, while the generator network $G^s$ aims to fool the discriminator by bringing the output closer to the manifold of real images. We borrow the architecture of the discriminator network $D$ from [\[38\]](#page-9-6) and use a non-saturating logistic loss with gradient penalty [\[39\]](#page-9-7). Finally, we impose an L2 regularization on the learned latent codes to encourage compactness of the latent space. The full training objective is given by
77
+
78
+ <span id="page-3-3"></span>
79
+ $$\min \mathcal{L}(\hat{I}_{t}, I_{t}, z^{l}, z^{s} | E^{l}, E^{s}, G^{l}, G^{s}, D) = \mathcal{L}_{rec}(\hat{I}_{t}, I_{t}) + \lambda_{adv} \mathcal{L}_{adv}(\hat{I}_{t}, I_{t}) + \lambda_{L2} (\|z^{l}\|^{2} + \|z^{s}\|^{2})$$
80
+ (4)
81
+
82
+ where $\lambda_{adv}$ and $\lambda_{L2}$ determine the relative weights between the loss terms.
83
+
84
+ Spatial denormalization (SPADE) generates per-pixel denormalization parameters by feeding a dense spatial input through a small convolutional subnetwork. While SPADE [\[30\]](#page-8-29) originally uses semantic maps as input, we explore learning a latent spatial representation that better suits the image synthesis task at hand. To do this, we turn off the cross-entropy loss so as to give the layout predictor $G^l$ the freedom to diverge from predicting traditional semantic segmentations and learn other latent representations that better optimize the few-shot novel view synthesis objective. The layout predictor is thus supervised only by the training objective of Eqn. [4](#page-3-3). Figure [3](#page-2-2) shows examples of the learned latent layouts. Although they might look less interpretable than traditional semantic maps, they seem to encode more information and capture accurate details.
85
+
86
+ Training our full pipeline learns a powerful subject-agnostic model that produces high-quality and identity-preserving synthesis. Optionally, we can learn a personalized head avatar to further refine the results for a given subject. To do this, we follow [\[1,](#page-8-0) [3,](#page-8-2) [4\]](#page-8-3) and fine-tune the subject-agnostic model (also called the *meta-learned* model) using the few-shot inputs of the source identity. Specifically, we compute the layout and style embeddings $\{z^l, z^s\}$ and fine-tune the weights of the layout and image generators $\{G^l, G^s\}$, as well as the discriminator $D$, by reconstructing the set of few-shot inputs and optimizing the same training objective of Eqn. [4](#page-3-3). We observe that subject fine-tuning restores high-frequency components and improves background reconstruction when compared to the meta-learned outputs.
87
+
88
+ <span id="page-4-1"></span><span id="page-4-0"></span>![](_page_4_Figure_0.jpeg)
89
+
90
+ Figure 4: Qualitative comparison in the single-shot setting. We show three sets of examples representing low, medium and high variance between the source and target poses. Our method is more robust to pose variations than the baselines.
2106.02990/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2106.02990/paper_text/intro_method.md ADDED
@@ -0,0 +1,62 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Contrastive learning [\(Chen et al.,](#page-9-0) [2020a;](#page-9-0) [He et al.,](#page-9-0) [2020;](#page-9-0) [Grill et al.,](#page-9-0) [2020;](#page-9-0) [Jiang et al.,](#page-9-0) [2020;](#page-9-0) [You et al.,](#page-11-0) [2020\)](#page-11-0) recently prevails for deep neural networks (DNNs) to learn
+ powerful visual representations from unlabeled data. State-of-the-art contrastive learning frameworks consistently benefit from using bigger models and training on more task-agnostic unlabeled data [\(Chen et al.,](#page-9-0) [2020b\)](#page-9-0). The promise implied by these successes is to leverage contrastive learning techniques to pre-train strong and transferable representations from internet-scale sources of unlabeled data. However, going from controlled benchmark data to uncontrolled real-world data runs into several gaps. For example, most natural image and language data exhibit a Zipf long-tail distribution, where various feature attributes have very different occurrence frequencies [\(Zhu et al.,](#page-11-0) [2014;](#page-11-0) [Feldman,](#page-9-0) [2020\)](#page-9-0). Broadly speaking, such imbalance is not limited to standard single-label classification with majority versus minority classes [\(Liu et al.,](#page-10-0) [2019\)](#page-10-0), but also extends to multi-label problems along many attribute dimensions [\(Sarafianos et al.,](#page-10-0) [2018\)](#page-10-0). That naturally raises the question of whether contrastive learning still generalizes well in these long-tail scenarios.
8
+
9
+ We are *not* the first to ask this important question. Earlier works [\(Yang & Xu,](#page-10-0) [2020;](#page-10-0) [Kang et al.,](#page-9-0) [2021\)](#page-9-0) pointed out that when the data is imbalanced by class, contrastive learning can learn a more balanced feature space than its supervised counterpart. Despite those preliminary successes, we find that state-of-the-art contrastive learning methods remain vulnerable to long-tailed data (even while indeed improving over vanilla supervised learning), after digging into more experiments and imbalance settings (see Sec 4). This vulnerability is reflected in the linear separability of pre-trained features (the instance-rich classes have much more separable features than the instance-scarce classes), and affects downstream tuning or transfer performance. The main hurdle in conquering this challenge is the absence of class information; therefore, existing approaches for supervised learning, such as re-sampling the data distribution [\(Shen](#page-10-0) [et al.,](#page-10-0) [2016;](#page-10-0) [Mahajan et al.,](#page-10-0) [2018\)](#page-10-0) or re-balancing the loss for each class [\(Khan et al.,](#page-9-0) [2017;](#page-9-0) [Cui et al.,](#page-9-0) [2019;](#page-9-0) [Cao et al.,](#page-9-0) [2019\)](#page-9-0), cannot be straightforwardly made to work here.
10
+
11
+ Our overall goal is to find a bold push to extend the loss re-balancing and cost-sensitive learning ideas [\(Khan et al.,](#page-9-0) [2017;](#page-9-0) [Cui et al.,](#page-9-0) [2019;](#page-9-0) [Cao et al.,](#page-9-0) [2019\)](#page-9-0) into an unsupervised setting. The initial hypothesis arises from the recent
12
+
15
+ <span id="page-1-0"></span>![](_page_1_Figure_1.jpeg)
16
+
17
+ Figure 1. The overview of the proposed SDCLR framework. Built on top of the simCLR pipeline [\(Chen et al.,](#page-9-0) [2020a\)](#page-9-0) by default, the uniqueness of SDCLR lies in its two different network branches: one is the target model to be trained, and the other is a "self-competitor" model pruned from the former online. The two branches share weights for their non-pruned parameters. Each branch has its own independent batch normalization layers. Since the self-competitor is always obtained and updated from the latest target model, the two branches co-evolve during training. Their contrasting implicitly gives more weight to long-tail samples.
18
+
19
+ observations that DNNs tend to prioritize learning simple patterns [\(Zhang et al.,](#page-11-0) [2016;](#page-11-0) [Arpit et al.,](#page-8-0) [2017;](#page-8-0) [Liu et al.,](#page-10-0) [2020;](#page-10-0) [Yao et al.,](#page-11-0) [2020;](#page-11-0) [Han et al.,](#page-9-0) [2020;](#page-9-0) [Xia et al.,](#page-10-0) [2021\)](#page-10-0). More precisely, the DNN optimization is content-aware, taking advantage of patterns shared by more training examples, and therefore inclined towards memorizing the majority samples. Since long-tail samples are underrepresented in the training set, they will tend to be poorly memorized, or more "easily forgotten" by the model - a characteristic that one can potentially leverage to spot long-tail samples from unlabeled data in a model-aware yet class-agnostic way.
20
+
21
+ However, it is in general tedious, if ever feasible, to measure how well each individual training sample is memorized in a given DNN [\(Carlini et al.,](#page-9-0) [2019\)](#page-9-0). One blessing comes from the recent empirical finding [\(Hooker et al.,](#page-9-0) [2020\)](#page-9-0) in the context of image classification. The authors observed that, network pruning, which usually removes the smallestmagnitude weights in a trained DNN, does not affect all learned classes or samples equally. Rather, it tends to disproportionally hamper the DNN memorization and generalization on the long-tailed and most difficult images from the training set. In other words, long-tail images are not "memorized well" and may be easily "forgotten" by pruning the model, making network pruning a practical tool to spot the samples not yet well learned or represented by the DNN.
22
+
23
+ Inspired by the aforementioned, we present a principled framework called *Self-Damaging Contrastive Learning* (SDCLR) to automatically balance the representation learning without knowing the classes. The workflow of SDCLR is illustrated in Fig. 1. In addition to creating strong contrastive views by input *data augmentation*, SDCLR introduces another level of contrasting via *model augmentation*, by perturbing the target model's structure and/or current weights. In particular, the key innovation in SDCLR is to create a *dynamic self-competitor* model by pruning the target model online, and to contrast the pruned model's features with the target model's. Based on the observation [\(Hooker et al.,](#page-9-0) [2020\)](#page-9-0) that pruning impairs a model's ability to predict accurately on rare and atypical instances, those samples in practice will also have the largest prediction differences between the pruned and non-pruned models. That effectively boosts their weights in the contrastive loss and leads to implicit loss re-balancing. Moreover, since the self-competitor is always obtained from the updated target model, the two models will co-evolve, which allows the target model to spot diverse memorization failures at different training stages and to progressively learn more balanced representations. Below we outline our main contributions:
24
+
25
+ - Seeing that unsupervised contrastive learning is *not immune* to the imbalance data distribution, we design a Self-Damaging Contrastive Learning (SDCLR) framework to address this new challenge.
26
+ - SDCLR innovates to leverage the latest advances in understanding DNN memorization. By creating and updating a self-competitor online by pruning the target model during training, SDCLR provides an adaptive online mining process to always focus on the most easily forgotten (long tailed) samples throughout training.
27
+ - Extensive experiments across multiple datasets and imbalance settings show that SDCLR can significantly improve the balancedness of the learned representation.
28
+
29
+ # Method
30
+
31
+ **Contrastive Learning.** Contrastive learning learns visual representations by enforcing similarity between positive pairs $(v_i, v_i^+)$ and enlarging the distance between negative pairs $(v_i, v_i^-)$. Formally, the loss is defined as
32
+
33
+ $$\mathcal{L}_{\text{CL}} = \frac{1}{N} \sum_{i=1}^{N} -\log \frac{s(v_i, v_i^+, \tau)}{s(v_i, v_i^+, \tau) + \sum_{v_i^- \in V^-} s(v_i, v_i^-, \tau)}$$
34
+ (1)
35
+
36
+ where $s\left(v_i,v_i^+,\tau\right)$ indicates the similarity between positive pairs, while $s\left(v_i,v_i^-,\tau\right)$ is the similarity between negative pairs. $\tau$ is the temperature hyper-parameter. The negative samples $v_i^-$ are drawn from the negative distribution $V^-$. The similarity metric is typically defined as
37
+
38
+ $$s\left(v_{i}, v_{i}^{+}, \tau\right) = \exp\left(v_{i} \cdot v_{i}^{+}/\tau\right) \tag{2}$$
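+ A minimal PyTorch rendering of Eqns. 1-2 for one batch; this assumes L2-normalized embeddings (so the similarity is a temperature-scaled cosine, as in SimCLR), and all shapes and names are illustrative:
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def contrastive_loss(v, v_pos, v_neg, tau=0.5):
+     """v, v_pos: (N, d) embeddings of paired views; v_neg: (N, M, d) negatives."""
+     v = F.normalize(v, dim=-1)
+     v_pos = F.normalize(v_pos, dim=-1)
+     v_neg = F.normalize(v_neg, dim=-1)
+     s_pos = torch.exp((v * v_pos).sum(-1) / tau)                   # (N,)
+     s_neg = torch.exp(torch.einsum('nd,nmd->nm', v, v_neg) / tau)  # (N, M)
+     return (-torch.log(s_pos / (s_pos + s_neg.sum(-1)))).mean()
+ ```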
39
+
40
+ SimCLR (Chen et al., 2020a) is one of the state-of-the-art contrastive learning frameworks. For an input image, SimCLR augments it twice with two different augmentations, and then processes the two views with two branches that share the same architecture and weights. The two different versions of the same image are set as a positive pair, and negatives are sampled from the remaining images in the same batch.
41
+
42
+ **Pruning Identified Exemplars.** (Hooker et al., 2020) systematically investigates the model output changes introduced by pruning and finds that certain examples are particularly sensitive to sparsity. The images most impacted by pruning are termed *Pruning Identified Exemplars* (**PIEs**), representing the difficult-to-memorize samples in training. Moreover, the authors also demonstrate that PIEs often show up in the long tail of a distribution.
43
+
44
+ We extend (Hooker et al., 2020)'s PIE hypothesis from supervised classification to the unsupervised setting for the first time. Moreover, instead of pruning a trained model and exposing its PIEs once, we integrate pruning into the training process as an online step. With PIEs dynamically generated by pruning the target model as it trains, we expect them to expose different long-tail examples at different stages of training. Our experiments show that PIEs handle these new challenges well.
45
+
46
+ Observation: Contrastive learning is NOT immune to imbalance. Long-tail distributions break many supervised approaches built on balanced benchmarks (Kang et al., 2019). Even though contrastive learning does not rely on class labels, it still learns transformation invariances in a data-driven manner and is therefore affected by dataset bias (Purushwalkam & Gupta, 2020). For long-tail data in particular, one would naturally hypothesize that the instance-rich head classes may dominate the invariance learning procedure and leave the tail classes under-learned.
47
+
48
+ The concurrent work (Kang et al., 2021) signaled that using the contrastive loss can yield a balanced representation space that has similar separability (and downstream classification performance) for all classes, backed by experiments on ImageNet-LT (Liu et al., 2019) and iNaturalist (Van Horn et al., 2018). We independently reproduced and validated their experimental findings. However, we have to point out that it was premature to conclude that "contrastive learning is immune to imbalance".
49
+
50
+ To see that, we present additional experiments in Section 4.3. While that conclusion might hold for the moderate levels of imbalance present in current benchmarks, we have constructed a few heavily imbalanced data settings in which contrastive learning becomes unable to produce balanced features. In those cases, the linear separability of the learned representations can differ substantially between head and tail classes. We suggest that our observations complement those in (Yang & Xu, 2020; Kang et al., 2021), in that while
51
+
52
+ (vanilla) contrastive learning can to some extent alleviate the imbalance issue in representation learning, *it does not possess full immunity* and calls for further boosts.
53
+
54
+ Our SDCLR Framework. Figure 1 overviews the high-level workflow of the proposed SDCLR framework. By default, SDCLR is built on top of the simCLR pipeline (Chen et al., 2020a), and follows its most important components such as the data augmentations and the non-linear projection head. The main difference between simCLR and SDCLR is that simCLR feeds the two augmented images into the same target network backbone (via weight sharing), while SDCLR creates a "self-competitor" by pruning the target model online, and lets the two different branches take the two augmented images to contrast their features.
55
+
56
+ Specifically, at each iteration we have a dense branch $N_1$ and a sparse branch $N_2^p$ obtained by pruning $N_1$, using the simplest magnitude-based pruning as described in (Han et al., 2015), following the practice of (Hooker et al., 2020). Ideally, the pruning mask of $N_2^p$ could be updated every iteration, after the model weights are updated. In practice, since the backbone is a large DNN whose weights do not change much over one or two iterations, we lazily update the pruning mask at the beginning of every epoch to save computational overhead; all iterations in the same epoch then adopt the same mask<sup>1</sup>. Since the self-competitor is always obtained and updated from the latest target model, the two branches co-evolve during training.
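+ A sketch of the per-epoch mask computation, in the spirit of global magnitude pruning (Han et al., 2015); the 90% sparsity default is illustrative, not the paper's setting:
+
+ ```python
+ import torch
+
+ @torch.no_grad()
+ def magnitude_prune_masks(model, sparsity=0.9):
+     """Return {param name: 0/1 mask} zeroing the smallest-magnitude weights."""
+     all_w = torch.cat([p.abs().flatten() for p in model.parameters()])
+     threshold = all_w.sort().values[int(sparsity * all_w.numel())]
+     return {name: (p.abs() > threshold).float()
+             for name, p in model.named_parameters()}
+ ```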
57
+
58
+ We sample and apply two different augmentation chains to the input image I, creating two different versions $[\hat{I}_1, \hat{I}_2]$. They are encoded by $[N_1, N_2^p]$, and their output features $[f_1, f_2^p]$ are fed into the nonlinear projection heads to enforce similarity under the NT-Xent loss (Chen et al., 2020a). Ideally, if a sample is well memorized by $N_1$, pruning $N_1$ will not "forget" it – thus little extra perturbation is caused and the contrasting is roughly the same as in the original simCLR. Otherwise, for rare and atypical instances, SDCLR amplifies the prediction differences between the pruned and non-pruned models – hence those samples' weights will be implicitly increased in the overall loss.
59
+
60
+ When updating the two branches, note that $[N_1, N_2^p]$ share the same weights in the non-pruned part, and $N_1$ independently updates the remaining part (corresponding to weights pruned to zero in $N_2^p$). Yet, we empirically discover that it helps to let each branch have its own independent batch normalization layers, as the features of the dense and sparse branches may show different statistics (Yu et al., 2018).
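+ Putting the pieces together, one SDCLR iteration can be sketched as follows (all callables are abstract placeholders; this is a schematic, not the reference implementation):
+
+ ```python
+ def sdclr_step(N1, N2_pruned, head1, head2, augment, x, nt_xent):
+     """Contrast the dense branch against its pruned self-competitor; the
+     masks of N2_pruned are fixed for the epoch, its unpruned weights are
+     shared with N1, and each branch keeps its own BN statistics."""
+     f1 = head1(N1(augment(x)))          # dense target branch
+     f2 = head2(N2_pruned(augment(x)))   # pruned self-competitor branch
+     return nt_xent(f1, f2)              # poorly memorized samples diverge more
+ ```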
61
+
62
+ <sup>&</sup>lt;sup>1</sup>We also tried to update pruning masks more frequently, and did not find observable performance boosts.
2106.15339/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2020-08-31T01:02:34.857Z" agent="5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.125 Safari/537.36" etag="6XWHHaDY1zz-E5ewMZhK" version="13.6.5" type="google"><diagram id="LpUzXhhVu1oKbT2YixPe" name="Page-1">7Vxbd5u4Fv41fjRLd+AxiZ3mTDudrDhtc+alC4NsM8HgYtwk59cfiZu5yPgGttOO8xDYCCHr+/ZFewv38M389UNoLWZ/Bg73egg4rz086CEEdZOJf1LylkgogIlgGrpO2mgtGLn/46kQpNKV6/BlqWEUBF7kLspCO/B9bkclmRWGwUu52STwyk9dWFNeE4xsy6tLv7lONEukBgVr+R13p7PsyRCkV+ZW1jgVLGeWE7wURHjYwzdhEETJ0fz1hnty8rJ5Se673XA1H1jI/WiXG1bkG6ef0T36wwLfXWPxx/Xgaz+b5p+Wt0q/cQ8xT3R4PQlEv2LY0Vs6F+zHKsgu9JcxUleiATQWr+uL4mia/o97GWeC//iLlezv0RqL/tKrYrTj6h1Cljw6E6PSKFDEX6V8Fs09IYDiMORiNHG3eADEubWKgmR88WXLc6e+OLbFRPFQCH7yMHIFxFfphbnrOPLm60Xg+lFMGHrdowP5XV3Puwm8QNw28ANfNlpGYfDMK8IwWPkOd9IB1KFJ0ZJP5q8FUQrVBx7MeRS+iSbp1T7DQAOFT9pDqkaEIg2gRPSy5iXM8JwVOJkR0EpVYZo/bM0WcZASZh/ywI3kWS4s/3DyXE2nIZ9akZhQQRgx237/xV1K1ljzhbjuj5eLuDkYzsfccVx/WmBP8uwN7FkDJanxMnMjPlpYtrz6IixYmVhlqHsIM9vg40mFF0LuWNyY2C0hDwnQqFGGW6d1rIGh6bgOtw41wLpCHHWE+EPw0h9byxjw6+HDo4TWt4U3CU8DrKObYwDqwE4mnNltAUsyr5GhSkAdVaxrlNZRJUxDpCtUiQLVyvRy37mS7lTaUc9aLl27PKNiWsK3p9T+xSf/lScazU4Hr8WLg7dtppI7Jcdcn9XCpFGF1ctkIfesyP1ZdueqKUyfcC99QBE03SyBRnGlj2WwCm2e3lb0v7WeaMWGM13LFCrrLLLCKY9qncWw5t/9CKT1Le6+7IqP0+k7bsX6C66iSKDrBv5GXU5jhXDfWOA4pTfQGDOmsOaUGw45UunXOl5y45lpLrDXULGXaKArdUe0BXV/daOnwnFB2cXZWtflSabqVdNK5V8XpiPRyEQ0u3t6nl0/fTQ/PTvL/t/syb21+9lCIla29Av/43//ukIOos793XJ0Z0TQyqPjC7FFiFHNIIWQkJbsifAdwp6Y649+oKGCZTcFCdVIJaLo2E7hrpYloy9/imsPV58/DHsyLibDz4PRx+HjzV1PutdNdmhfCXmIuwMPYiEB5EoCgRtx2EfJsbg4Gt6rm8C8iRha2s3wr9t67Fs773b15PFJ9E7WTsjASpub+9162EWAQm3NFhZOzPjyGXz5wYn/N3Zh6FB4+10dcCX4RcnyGIyDUDrPIsFhkc6FwefX5XegpUa2GMzCShZIxaak1KquS2nnsR6lvSbY5POjuJQMuW8Hnmctlqka5mcbMwTROHDeasJQOa6xZT9PY77Ix0gSXcW6gi3dNgnKh1K+OIk/lXGmX0EaCeEwPNdJ2prGeIJgw2hnkhAp+dfNYu3YHAFJj5Prp+yijU5BB302j1MKa3ZvE1RHzbKzMY/0kBnJgrVsTCbV+9q1d9Bl5weNfC8ANukKtybIwS2pQ/O3BV3h9JiEbomFaxsjcDKIOpz6TlWkSw35VRXka+zspdfccXbyG+C+N6D950gKVY5YytOM/a4BZW2dbQP511Mk12K/jK8DEdtNvHi1KZv0Osmn65Xkm040s/gh9Vyc2VFQqM7NsI1B4fG51SR9Dm4C/2fgrRpzMe3mVdmYUUWKReCOWsur5phk0AKmUVVxBGsM1hEVy1vaUaCPNwf6x2EqpnI1938zWBnRFGWQM6Cq0tQTJNDyRFkfaADiXilbBpnZa86XxWf3PHTFLMRLy52TaI32qphFaywTXkgWrc80g5Dc7JcLbRShWq5r97wZqRDWqPiK9rJmal6qsvvnruMU6dFoIy+FHhhWzQ49kA9MQ3BNNBNWarpQeClYCEC64kpTCrz1BOun0aPMsA54tXjbYR3HZDq2FN6IQ4dyvR1vZJrlaFGvaLrwNrjmm5SVnRbiRyWiaAuibZb20vEMrMjq/d71PQrryeQOC3pK5Nso36fhCNAwI8WQpDkgESctBhToshxBXhPIlBxXVHf3wADV1p47GXsBmfVWaJbWWPYYMgDNI0NszxtqX6V8gzhIBt2qszp3vN1FYbpxf0YxaGpy35eiKvFexWKoU+IIUhSQd9YdE2u6YdbCpKxrZGpE4ZdPFEW1EXBfGjGbgot3x8tm8tDDeSl4h9mm3blI6INZfK7RFSuV21ZajAcuhpWNmYXtKQj9snnJSKvEpBsZf15itrHx7ECSHULodonZZFcvhZeVBBbGh/MQN/KQnJWHDW+9HLcuHj3zyJYbL+5D7rh2FISnyccDbgDDqK90x4BjztrJgKDacuikOQ81kF29gfJg+VN+ehwh0LHJFThCx5m0tTuugiNF9aLKaUFUZa5OEK+8w/3IlxVeN4Ya0Gg28TsvAIWZYbgQaZdDbagIlbr2HuZGo3Pk5uRnd9GLq7nyVdZyPnXPTb3KjbaKDbm7vxupMmXt79LA2FTuysgCEqau7OuGlqFSqgG3YLLcb8T/bgJXv7W+fHL/QT/44mNDrr2Vsv47eBOuhqkC+c0wV9+Eo9n56d6EU+KKW/BER3mPimXfWlHdzM7zrSH0Skn94LQ5qdbTddzhC3PKqXyXqZRdSLRzNWZ7boWdmW4VU/Ke6aZKkPwmWwCPcyjVvWI5C062U0yJZ1dbOpM44fdB09C1bP1wXkDPuMPq8AxmK0EFq/uNCww+qpu3qGFo5NCVZ/P+LVpdqpzUUxi/amCylYj0nRARaWKVs/6U39WmBtMOrYhX4xRq1vrqmHyqBMgJyLc7iYrkaNh0ejZ2QMY0bG78cSdhWdDBL++TcmcUgJNyQ1lb+SUs01aLQ+sWrKE6eKnkM3UNHbrjubYDvvq2VNfkU9WDzkg+/YTs2+91jAtlH2VI0w9mH644Rnhi9qnSh/8WBlpZEm4pDDCM8zj/nIWBdn9KTYNG0Z5ADRC4xaA0vOS13cqccd9JWXNZNWjZY6u1mCVg5GVJo/LrbYRppo4J3WvRdvQ+bIbNbcNuvGHfbdXidP3Dsknz9c/z4uH/AQ==</diagram></mxfile>
2106.15339/main_diagram/main_diagram.pdf ADDED
Binary file (34.9 kB). View file
 
2106.15339/paper_text/intro_method.md ADDED
@@ -0,0 +1,79 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Spreadsheets are ubiquitous for data storage, with hundreds of millions of users. Helping users write formulas in spreadsheets is a powerful feature for data analysis. Although spreadsheet formula languages are relatively simpler than general-purpose programming languages for data manipulation, writing spreadsheet formulas could still be tedious and
+ error-prone for end users [\(Gulwani,](#page-9-0) [2011;](#page-9-0) [Hermans et al.,](#page-9-0) [2012b;](#page-9-0) [Cheung et al.,](#page-9-0) [2016\)](#page-9-0). Systems such as FlashFill [\(Gul](#page-9-0)[wani,](#page-9-0) [2011;](#page-9-0) [Gulwani et al.,](#page-9-0) [2012\)](#page-9-0) help end-users perform string transformation tasks in spreadsheets using a few inputoutput examples by automatically synthesizing a program in a domain-specific language (DSL). Recently, several learning approaches based on different neural architectures have been developed for learning such programs from examples, and have demonstrated promising results [\(Parisotto et al.,](#page-10-0) [2017;](#page-10-0) [Devlin et al.,](#page-9-0) [2017;](#page-9-0) [Vijayakumar et al.,](#page-10-0) [2018\)](#page-10-0).
8
+
9
+ All these previous works formalize the spreadsheet program prediction problem as a *programming by example* task, with the goal of synthesizing programs from a small number of input-output examples. We argue that this choice engenders three key limitations. First, this setup assumes that each data row is independent, and each formula is executed on data cells of the same row. However, real spreadsheets are less structured than this. Data in spreadsheets is typically organized as semi-structured tables, and cells in different rows could be correlated. As shown in Figure [1,](#page-2-0) in the same table, different data blocks could have different structures, and formulas can take cell values in other rows as function arguments. Second, because spreadsheets are semi-structured, they also contain rich metadata. In particular, many spreadsheet tables include headers that provide high-level descriptions of the data, which could provide important clues for formula prediction. However, table headers are not utilized in prior work. Finally, programming-by-example methods output programs in a DSL, which is typically designed to facilitate synthesis, and is much less flexible than the language in which users write formulas. For example, the FlashFill DSL only covers a subset of spreadsheet functions for string processing, and it does not support rectangular ranges, a common feature of spreadsheet formulas. In contrast, spreadsheet languages also support a wide variety of functions for numerical calculation, while the argument selection is more flexible and takes the spreadsheet table structure into account. In total, these limitations can compromise the applicability of such prior efforts to more diverse real-world spreadsheets and to richer language functionality.
10
+
11
+ Instead, we propose synthesizing spreadsheet formulas *without* an explicit specification. To predict a formula in a given cell, the context of data and metadata is used as an *implicit* (partial) specification of the desired program. For example
15
+ <span id="page-1-0"></span>(Figure [1b\)](#page-2-0), if predicting a formula at the end of a column of numbers labeled "Score", and a cell in the same row contains the text "Total", this context might specify the user's intent to compute a column sum. Our problem brings several new challenges compared to related work in programming by example [\(Gulwani,](#page-9-0) [2011;](#page-9-0) [Bunel et al.,](#page-9-0) [2018;](#page-9-0) [Balog et al.,](#page-9-0) [2017\)](#page-9-0), semantic parsing [\(Popescu et al.,](#page-10-0) [2003;](#page-10-0) [Zhong et al.,](#page-11-0) [2017;](#page-11-0) [Yu et al.,](#page-11-0) [2018\)](#page-11-0) and source code completion [\(Ray](#page-10-0)[chev et al.,](#page-10-0) [2014;](#page-10-0) [Li et al.,](#page-10-0) [2018;](#page-10-0) [Svyatkovskiy et al.,](#page-10-0) [2019\)](#page-10-0). Spreadsheet tables contain rich two-dimensional relational structure and natural language metadata, but the rows do not follow a fixed schema as in a relational database. Meanwhile, our tabular context is more ambiguous as the program specification, and the spreadsheet language studied in this work is more flexible than languages studied in the program synthesis literature.
16
+
17
+ In this paper, we present SPREADSHEETCODER, a neural network architecture for spreadsheet formula prediction. SPREADSHEETCODER encodes the spreadsheet context in its table format, and generates the corresponding formula in the target cell. A BERT-based encoder [\(Devlin et al.,](#page-9-0) [2019\)](#page-9-0) computes an embedding vector for each input token, incorporating the contextual information from nearby rows and columns. The BERT encoder is initialized from the weights pre-trained on English text corpora, which is beneficial for encoding table headers. To handle cell references, we propose a two-stage decoding process inspired by sketch learning for program synthesis [\(Solar-Lezama,](#page-10-0) [2008;](#page-10-0) [Mu](#page-10-0)[rali et al.,](#page-10-0) [2018;](#page-10-0) [Dong & Lapata,](#page-9-0) [2018;](#page-9-0) [Nye et al.,](#page-10-0) [2019\)](#page-10-0). Our decoder first generates a formula sketch, which does not include concrete cell references, and then predicts the corresponding cell ranges to generate the complete formula.
18
+
19
+ For evaluation (Section [4\)](#page-3-0), we construct a large-scale benchmark of spreadsheets publicly shared within our organization. We show that SPREADSHEETCODER outperforms neural network approaches for programming by example [\(De](#page-9-0)[vlin et al.,](#page-9-0) [2017\)](#page-9-0), and achieves 42.51% top-1 full-formula accuracy, and 57.41% top-1 formula-sketch accuracy, both of which are already high enough to be practically useful. In particular, SPREADSHEETCODER assists 82% more users in composing formulas than the rule-based system on Google Sheets. Moreover, SPREADSHEETCODER can predict cell ranges and around a hundred different spreadsheet operators, which is much more flexible than DSLs used in prior works. With various ablation experiments, we demonstrate that both implicit specification from the context and text from the headers are crucial for obtaining good performance.
20
+
21
+ In this section, we discuss the setup of our spreadsheet formula prediction problem. We first describe the input specification, then introduce the language and representation for spreadsheet formulas.
22
+
23
+ Input specification. We illustrate the input context in Figure [1.](#page-2-0) The input context consists of two parts: (a) context surrounding the target cell (e.g., all cell values in rows 2–7, and columns A–D, excluding cell D4 in Figure [1a\)](#page-2-0), and (b) the header row (e.g., row 1).
24
+
25
+ In contrast to prior programming-by-example approaches [\(Gulwani,](#page-9-0) [2011;](#page-9-0) [Parisotto et al.,](#page-10-0) [2017;](#page-10-0) [Devlin](#page-9-0) [et al.,](#page-9-0) [2017;](#page-9-0) [Vijayakumar et al.,](#page-10-0) [2018\)](#page-10-0), our input specification features (a) tabular input, rather than independent rows as input-output examples, and (b) header information. Tabular input is important for many cases where formulas are executed on various input cells from different rows and columns (Figure [1\)](#page-2-0), and headers hold clues about the purpose of a column as well as its intended type, e.g., the header cell "Score" in Figure [1b](#page-2-0) is likely to indicate that the column data should be numbers.
26
+
27
+ Note that we do not include the intended *output* of the target cell in our input specification, for three reasons. First, unlike programming-by-example problems, we do not have multiple independent input-output examples available from which to induce a formula, so providing *multiple* input-output examples is not an option. Second, even for our single input instance, the evaluated formula value may not be known by the spreadsheet user yet. Finally, we tried including the intended formula execution *result* in our specification, but it did not improve the prediction accuracy beyond what the contextual information alone allowed.
28
+
29
+ The spreadsheet language. Our model predicts formulas written in the Google Sheets language<sup>1</sup> . Compared to the domain-specific language defined in FlashFill, which focuses on string transformations, the spreadsheet language supports a richer set of operators. Besides string manipulation operators such as CONCATENATE, LOWER, etc., the spreadsheet language also includes operators for numerical calculations (e.g., SUM and AVERAGE), table lookups (e.g., VLOOKUP) and conditional statements (IF, IFS). As will be discussed in Section [4,](#page-3-0) around a hundred different base formula functions appear in our dataset, many more than the operators defined in the FlashFill DSL.
30
+
31
+ In this work, we limit our problem to formulas with references to *local* cells in a spreadsheet tab, thus we exclude formulas with references to other tabs or spreadsheets, and absolute cell ranges. As will be discussed in Section [3,](#page-2-0) we also exclude formulas with relative cell references outside a bounded range, i.e., farther than D = 10 rows and columns in our evaluation. We consider improving the computational efficiency to support larger D and enabling the synthesis of formulas with more types of cell references as future work.
32
+
33
+ Formula representation. One of the key challenges in
34
+
35
+ <sup>1</sup>Google Sheets function list: [https://support.](https://support.google.com/docs/table/25273?hl=en) [google.com/docs/table/25273?hl=en](https://support.google.com/docs/table/25273?hl=en).
36
+
37
+ <span id="page-2-0"></span>![](_page_2_Figure_1.jpeg)
38
+
39
+ Figure 1. Illustrative synthetic examples of our spreadsheet formula prediction setup. (a): The formula manipulates cell values in the same row. (b): The formula is executed on the rows above. (c) and (d): Formulas involve cells in different rows and columns. The data value in the target cell is excluded from the input. All of these formulas can be correctly predicted by our model.
40
+
41
+ formula representation is how to represent cell references, especially ranges, which are prevalent in spreadsheet formulas. Naively using the absolute cell positions, e.g., A5, may not be meaningful across different spreadsheets. Meanwhile, a single spreadsheet can have millions of cells, thus the set of possible ranges is very large.
42
+
43
+ To address this, we design a representation for formula sketches inspired by prior work on sketch learning for program synthesis (Solar-Lezama, 2008; Murali et al., 2018; Dong & Lapata, 2018; Nye et al., 2019). A formula sketch includes every token in the prefix representation of the parse tree of the spreadsheet formula, except for cell references. References, which can be either a single cell or a range of cells, are replaced with a special placeholder RANGE token. For example, the sketch of the formula in Figure 1a is IF <= RANGE 1 "A" IF <= RANGE 2 "B" IF <= RANGE 3 "C" IF <= RANGE 4 "D" "E" \$ENDSKETCH\$, where \$ENDSKETCH\$ denotes the end of the sketch. Notice that the sketch includes literals, such as the constants 1 and "A".
44
+
45
+ To complete the formula representation, we design an intermediate representation for ranges, *relative* to the target cell, as shown in Figure 2. For example, B5 in Figure 1c is represented as \$R\$ R[0] C[1] \$ENDR\$ since it is on the next column but the same row as the target cell A5, and range C2:C6 in Figure 1b is represented as \$R\$ R[-5] C[0] \$SEP\$ R[-1] C[0] \$ENDR\$. The special tokens \$R\$ and \$ENDR\$ start and conclude a concrete range, respectively, and \$SEP\$ separates the beginning and end (relative) references of a rectangular multi-cell range.
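+ As a worked illustration of this relative encoding, here is a toy helper (assuming single-letter column names; for the C2:C6 example we additionally assume the target cell is C7, which is consistent with the offsets above):
+
+ ```python
+ def relative_range(target, start, end=None):
+     """Encode a cell or rectangular range relative to `target`,
+     e.g. relative_range('A5', 'B5') -> '$R$ R[0] C[1] $ENDR$'."""
+     def rel(cell):
+         col = ord(cell[0]) - ord(target[0])
+         row = int(cell[1:]) - int(target[1:])
+         return f"R[{row}] C[{col}]"
+     body = rel(start) if end is None else f"{rel(start)} $SEP$ {rel(end)}"
+     return f"$R$ {body} $ENDR$"
+
+ assert relative_range('C7', 'C2', 'C6') == '$R$ R[-5] C[0] $SEP$ R[-1] C[0] $ENDR$'
+ ```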
46
+
47
+ ```
+ <Range> ::= $R$ <R> <C> $ENDR$
+ ```
+
+ Figure 2. The full grammar for range representation.
+
+ ![](_page_2_Figure_9.jpeg)
+
+ Figure 3. An overview of our model architecture.
+
+ A complete spreadsheet formula includes both the sketch and any concrete ranges; e.g., the formula in Figure 1b is represented as SUM RANGE \$ENDSKETCH\$ \$R\$ R[-5] C[0] \$SEP\$ R[-1] C[0] \$ENDR\$ EOF, where EOF denotes the end of the formula. In Section 3.2, we will discuss our two-stage decoding process, which sequentially predicts the formula sketch and ranges.
58
+
59
+ # Method
60
+
61
+ In this section, we present our SPREADSHEETCODER model architecture for spreadsheet formula prediction. We provide an overview of our model design in Figure 3.
62
+
63
+ **Input representation.** Our model input includes the surrounding data values of the target cell as a table, and the first
64
+
65
+ <span id="page-3-0"></span>row is the header. When there is no header in the spreadsheet table, we set the header row to be an empty sequence. We include data values in cells that are at most D rows and D columns away from the target cell, so that the input dimension is (2D + 2) × (2D + 1), and we set D = 10 in our experiments.
66
+
67
+ Row-based BERT encoder. We first use a BERT encoder [\(Devlin et al.,](#page-9-0) [2019\)](#page-9-0) to compute a row-based contextual embedding for each token in the target cell's context. Since our $2D + 1 + 1$ rows contain many tokens and we use a standard BERT encoder of 512-token inputs, we *tile* our rows into bundles of $N = 3$ adjacent data rows, plus the header row, which is included in every bundle. Then we compute a token-wise BERT embedding for each bundle separately; the BERT weights are initialized from a pre-trained checkpoint for English. Specifically, in our experiments where $D = 10$, we concatenate all cell values for each row $i$ in the context into a token sequence $R_i$, which has length $L = 128$ (we trim and pad as needed). We combine rows in bundles $S_{rb} = [H_r, R_{3b-1}, R_{3b}, R_{3b+1}]$, for $b \in [-3, 3]$; here $H_r$ is the header row. We set the BERT segment IDs to 0 for the header tokens, and 1 for data tokens in each bundle. There are $2D + 1 = 21$ rows of context, so each of the 21 data rows is covered exactly once by the seven bundles. The header row is assigned a different BERT representation in each bundle. To obtain a single representation of the header row, we average per token across the embeddings from all of the bundles.
68
+
69
+ The number of data rows $N = 3$ is chosen to balance the size of the tabular context fed into the encoder against computational efficiency. Since the BERT we use takes 512 input tokens, we can feed at most $L = 512/(N + 1)$ tokens per row. To generate formulas referring to cells within $D = 10$ rows and columns, $L = 128$ is a good fit in our evaluation. If we further decreased $N$ and increased $L$, it would impose extra computational overhead due to more forward passes over BERT ($21/N$ of them).
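+ The tiling can be sketched in a few lines (token sequences as plain lists; names are illustrative):
+
+ ```python
+ def make_row_bundles(header, rows, N=3):
+     """Tile the 2D+1 data rows into bundles of N adjacent rows, prepending
+     the header row to each, so every BERT call sees (N+1) rows of L tokens."""
+     assert len(rows) % N == 0
+     return [[header] + rows[i:i + N] for i in range(0, len(rows), N)]
+ ```
+
+ With $D = 10$ this yields the seven bundles of four 128-token rows described above, i.e. 512 tokens per BERT pass.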
70
+
71
+ Column-based BERT encoder. As shown in Figure [1b,](#page-2-0) some formulas manipulate cells in the same column, in which case a column-based representation may be more desirable. Therefore, we also compute a column-based contextual embedding for all context tokens. We perform similar tiling as for the row-based BERT encoding, yielding column bundles $S_{cb}$ for $b \in [-3, 3]$. Unlike with row-wise tiling, where we include the header row $H_r$ with every bundle, for column-wise tiling we use the column of the target cell, $H_c = C_0$, as the "header column" in every bundle. After obtaining all token embeddings from this tiled computation by the BERT encoder, we discard the token embeddings of $C_0$ in its role as header column, and only use its regular token embeddings from the bundle $S_{c0}$.
72
+
73
+ Row-wise and column-wise convolution layers. Although the output vectors of the BERT encoders already contain important contextual information, such as headers and nearby rows and columns, they still do not fully embed the entire input table as the context. To encode the context from more distant rows and columns, we add a row-wise convolution layer and a column-wise convolution layer on top of each BERT encoder. Specifically, the row-wise convolution layer has a kernel size of $1 \times L$, and the column-wise convolution layer has a kernel size of $(2D + 2) \times 1$ for row-based BERT, and $(2D + 1) \times 1$ for column-based BERT. In this way, the convolution layers aggregate across BERT embeddings from different bundles, allowing the model to take longer-range dependencies into account. For each input token, let $e_b$ be its BERT output vector, $c_r$ be the output of the row-wise convolution layer, and $c_c$ be the output of the column-wise convolution layer. The final embedding of each input token is the concatenation of the BERT output and the output of the convolution layers, i.e., $e = [c_r + c_c; e_b]$.
76
+
77
+ We train an LSTM [\(Hochreiter & Schmidhuber,](#page-9-0) [1997\)](#page-9-0) decoder to generate the formula as a token sequence. Meanwhile, we use the standard attention mechanism [\(Bahdanau](#page-9-0) [et al.,](#page-9-0) [2015\)](#page-9-0) to compute two attention vectors, one over the input header, and one over the cell data. We concatenate these two attention vectors with the LSTM output, and feed them to a fully-connected layer with output dimension $|V|$, where $|V|$ is the vocabulary size of formula tokens. Note that the token vocabularies are different for sketches (formula operators, literals, and special tokens) and ranges (relative row and column tokens and special range tokens). The output token prediction is computed with a softmax.
78
+
79
+ As mentioned in Section [2,](#page-1-0) we design a two-stage decoding process, where the decoder first generates the formula sketch, and then predicts the concrete ranges. In the first stage, the sketch is predicted as a sequence of tokens by the LSTM, and the prediction terminates when an \$ENDSKETCH\$ token is generated. Then in the second stage, the range predictor sequentially generates formula ranges corresponding to each RANGE token in the sketch, and the prediction terminates when an EOF token is generated. Both sketch and range predictors share the same LSTM, but with different output layers.
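+ Schematically, the two-stage decoding loop looks like the following greedy sketch, where `next_token` stands in for one LSTM step combined with the stage-specific output layer (an illustrative placeholder, not the paper's API):
+
+ ```python
+ def two_stage_decode(next_token, max_len=100):
+     """Stage 1 emits sketch tokens until $ENDSKETCH$; stage 2 then emits
+     relative-range tokens, one group per RANGE placeholder, until EOF."""
+     sketch, ranges = [], []
+     while len(sketch) < max_len:
+         sketch.append(next_token(stage="sketch", prefix=sketch))
+         if sketch[-1] == "$ENDSKETCH$":
+             break
+     while len(ranges) < max_len:
+         ranges.append(next_token(stage="range", prefix=sketch + ranges))
+         if ranges[-1] == "EOF":
+             break
+     return sketch, ranges
+ ```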
2109.10259/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="Electron" modified="2021-12-13T16:44:29.758Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/15.8.7 Chrome/91.0.4472.164 Electron/13.6.2 Safari/537.36" etag="7leITCitzS4d1MlHkKHP" version="15.8.7" type="device"><diagram id="V_OCyBq2jp3I5MJ94G6E" name="framework">7V1bd9s2tv41eTQW7pfHNG06syaZdCZpJ52XLlmibTWK6ZHoxjm//gASSfECSaBEgCBtz7S1IIoysb+9sW/48Iq8+fr083r2cPc+XSSrVxgunl6RH19hjCET+j9m5PtuBDEodyO36+UiH9sPfFz+X5IPwnz0cblINrULszRdZcuH+uA8vb9P5lltbLZep9/ql92kq/q3Psxuk9bAx/ls1R79z3KR3eWjmMD9G39Llrd3+VcTnr9xPZt/uV2nj/f5992n98nuna+z4jb5pZu72SL9VhkiP70ib9Zpmu1++/r0JlmZeS1mbPe5twfe/Vf29v0/VPLun5/e//HxM0WLPx+yK6r47sv+mq0e82e7vi2eYZ3cZ+ffmovWrfMHy74X87idicTcBr0iP3y7W2bJx4fZ3Lz7TUNHj91lX1f527P1PEcC1a/af+SJJ0zWWfJUEV/+LD8n6dckW3/XlzzVQfa9/vJbVdDF4F1VyDQfnOXoui1vvZ88/Us+f/a5fM9u3/73j3f07tu///jbgl3za6Ufglimkq/09/5wk+rHr84p/99jWrxxtdnO12t9AZIPT/s39W+35r8fl1+Xq9l6mZnHf5duNsVt9Z+5u/PuussEd7Ncrd6kq3S9/SyZzxN2c6PHN9k6/ZJU3iGcKLLwKlxGWV26nLTFSyzSxdKTcIsv6124b/RF69kmW/6VPBfpStKULm5Ll9l015fqMuxLuqvZZrO8Wc5n2TK99yngw4JsiP7mZj5XyrNtpg3rzFFbwsKThN/8+enjIrn7Thjn9/j6N/nl0+qKtGY3WWhnIX+ZrrO79Da9n61+2o/+sJ9/qF/tr3mXpg/5rP+ZZNn3fL2bPWZpXSbJ0zL7bD4OWP7q98o7Pz7ld96++F680Obg++fqi9/3dzAv9x/bvio+t3s+81AN58dVvJv0cT1PjlyJFc2duNn6Njl2T8yIHR/rZDXbWrqaq2eR9fajr9fr2ffKBQ/p8j7bVO78ixmowE7WUYcFbOBmd8c9iso/zQlYx2flEtvBbbbjw3p5u9Rw0xdsPXQvZmPvrvG2qXjLzP/MVavl7b0eWyU35iuNJdAmbfU6H86MPvyw0V+yvL99t73mR7If+bRVlyvi1ebwpj8oAVWMQMj1j0Sk7UBgajFAqFibLrFAB1xtedrVNlHHw8WTVIZUs+vixvDo5OGG4jDm5m5JAjiCGEKqfxg9ptEXzR1h6vTcnYn7LkthKcLTsIxm6rjNc22uhauVDsoTh2nbPOwi9ZvlU2JxLt6+NermZ0ZL5wJABCs/uIZdBpRSCHMklLYBCPI2kiHABGEqdYSIJZJKWszAgWs8iAcFFE/p9nsWDxWES41pyHDxfIVpoQBBQTlCGvxM4z5u8dCg4nmrf6QcUnsoAYhJolWHCKWwJeqOSzq2uK036Sxmm7tyPfErEihVqTO8LhKOgdDSYFggqgSFlqU5Lpn0kAajNn/4H4mZtH+mi8TVE9YCyI5Fydu8bl0L86HC4Z1raSdri8v7dblYrA6Bqh441mBk/t7c8UB+10kigZJVTac1WBGgV0nFBJFQX0WlJcMmbf4xB0K7i1QjVkoFZZH09ICjHuIqK44+ZHdapC9IckcSBPLImoGxWfEZZ0IIaCxMC0iKgcKBHwhLzBOW3s82X7REXsDkDia9nDV8DgQgV4RAqj0PxFQ7VagQEGJQ/LQLcP3g58d1+vDwAqAuAMJKu0uq8iPr7hLUcbUgihFjmVR7VVMKoCGNUZ4cHTyX0T3FppWzYfhle3ZtqQ4KGNUKgCHkOiD0NK9liu/YvG7uZg/m1/njtUNEcL2TwrvrcqBsDPjwmK2W90k+vpitv3wwyc5sm44HkNUH8XbUUoczxZj53K/UeENqhAPalkdFhrZ0FReAE4wQ4Tqmx7BY5D1I0eb2vUiR0HFJ0eZwvUiR4DFJkYp28uu3ZfJNj/yc3CfrWZaaOKot2Qn5J91lXC6BRaCNFFCKadkjwaksWg2q+RtrIYoD7YVAvXhCRLScYQ/rpqUmfEbPGKa0JfGAxfTi90pZ/EgxffHaNBIaRG07Mea7wbfLVXnbi+rt1anMZ/dkHZ0UjQ+n6+g0d7AaZX3XOh460k1hL6ELUr+DZABThaTxi41n3KhS7Z41v0cVR83b8k633U1M67a9leyL8uYYAHweOh2Afz6AMXNtBCmhHgzAVMWENEIi6juCjkjbg6vsVvKEtCp+jrbUBoMPb7QVY26yBYhjRISQ0NH+9QYfGhF8XNvW6vCBQxqqMmg6CbTCKwsGNIQwA4dz6jqAAhS125dCAY/xeIAXn906DTzmauE8Au/C2M+W8m5A4kC3WhE19t2WZtvjYYuPxfE1v8c58rd5BnXplhTuQWo+lTokhQJSYjLeplPKcXsF01aJMKiIYgoJrISvmS1yGhcXXHahGjT5oaviSbbvGmyxdkHm1/vN40Oy/mu5MVWZ3Tder/eVmGdXoOmArWLzDgKCC8kEonj7b17XYgGUwlzfmChixNSCHqeAQLL/PGkDEWGAGEdYg5cRgvWNvCGxbx0/1JK3130zuR7lIwGhVfHU+0IQ1zPLdOCGTQ1NWwdL47QAlY/rf9v6jRiQRDIlsaCQKor9Cajv1uBD+2VCCqg2vXX9oQgoJiGBXOqQhBbIb8inJuEhxVPE472Xzj++2OnLYEYFQI16BMXaNmi1JwohBrVj0DbOggAmVYnNopejAS6NKUm46SzTdyl6f3oHF5cOJV/fTmoz44TbDZr2nSbYl+9eZt1635KUV3/QuLYiiVfNrUgdGnr3GOu05yO4yG0OczOCH2qXR5c5LNZAarIjDHOCmJCq4aRQDigXxXuCtM2UAghKjvU3a0eGUIuHYr/Cg2QcbNSYJGMSV+XmDi0B2egrhsDEidq5h0oIWnoWccoGtUVRSXPla/1YKz/5n79PatnrlseB4VDOzCXsUA1yTec77xO+UPwOvUYTUk0EIeBQO3ZEu2RESdr2VCJSzeM12r5VM3Ctw6qaF27W66KyxFllqaPKlhCsQU6CeiRLvcHleMHCpyX3w+cQEVjE5MDikNMbk91v2HkTohMiCWWSU84tGbyIzLx8MfP+NNfdMxMj0VwVFi7dwYJGChV3j2AkUJG2JJgvI3+2ZLo497V53G4L54LzbSGWYNTuHY/HysuwcfZZVn6kiisn58pLi54GsvHOSZk6WEbkzHeAy1js/NTyqwKBIq9tkjmwvr+VI
SCF4kphKhGGWEVt94/vFIkwvzoaRXZ32GTHJurBFDlsxi+GZHxAu++exMEjgUvYjN/zCQXdoSI77u4ZDCoTy/eZNp2Ki4BaDWKVwnncxXHpwIk6GL3eOZJRoMLkxup4FxJU3yvaouIUTNhE28X1lPFYV9fNv6VuxG5di/63mJM7o03hd1iMxwIXr4yq53B2nhOkcwIqMTpqkCsemMwoLb2Vv35s8jAbQiqNL0LVwU0YKBvSTFtyzCuvGlsY3GGnZSdjehF0+syEqpEkztXIemDGiZti/ZoQbsRwFicie+MBKnJyzlvIeD1EUV0dc+NG1TCrhmuNia1s6ncFoNA5hB9Jyb18olHHAEhIUDl3hDQnkwLt+itY7E6JWJcpjLlf5rhOnptyGcqbc6UyK3XEvy7nt2wwBplbSM4RZxQRs0W3nvWXCuTR7Q7ijWPMPBOd0YK/aNQWBEMEqkcX4fG6AxQepw4bMhh0NSDxmw93V6BUkMHMB1S1siASDWxzUIV2kSILZj/CdgrVvNczfNdLY1APSLRQ3V0Ya4bZhVn+5VOJKU3/nqycT1FfRBTT71YINCxHKESzhpzgyZ1iZa91jPaOndGLyjL3ODKujdPlXz6Vto3txmlR0dk6LY4AVFVUNuaOGsrCho7Dt9wG1Vj37vneNdbu1iENTqEkMfzESiDY6AUjCEiEkI4YNYJ1eAMbuPPt1rHAbt1zW0Hc8egafoRaQULu2gjg9DWpNhTYnupSLBlxrBgffvv89pds8d/Nx1SHg3///Nvr32+u0PTat5oceg5cz1209qKDPkLz3F9IsOfQHBScYM/CAx2Ybo3CHg6EPMawh58Vwx49AJU2JIYUedhW9a6bCNznsFyxujDsMWuuW1Su4PYlzHaJB+E4kACPSTgdSfasDC/xCGfA/qLYWfaK+Oh0fte1l6jUhVhcfRQ2WTS0brpVCaNRThRzq0GkuzfclVa5Ki3qtbvAW6MQsqhuJFVlP4Y8HFaQay5nNFiJm5jjDIe5No1OLHvxmPnnl4gNaOadfTPUMdEzlOoGLvx2x0pc3Ao+HIKRIMXhmLjQfX9nePS1uWtS60Vu2cNG1xPg1nN3ySbnvgdm2z0nExMbtZ4PtIzEtk8tqdqNWc/amReN2cdhy7bD9/lE6aS5plqHVWQ8up6wEZl97Jy3ga6N1wOjJWyOL9bg76LYo0/YFCFW7LCZWLqvG8le5M5ChBw/3QXSgVvPSoocjziGo/gZP1GLezzGXS1soR6xW9jAjLcR5/CjWZzHAp2QDLhnO9u++Pes3dfRrAYk7oNqOsumGxeflYovHtmMLXIenBzLQ7qUjCO5TsLGzbF6dR5WY1cO3NEgZThuDhiRTfHsxBXbdCfjxJEpRPF9MO7F4x4M1ycz/XoqcY7fR1J9J157ZS5mTTijntqJcC/uFCmJuY0mLOWeB1V2pskioYod+R1HRbJXVBkmkynog28vHgsS75E5IRn3hnQESKgK+wHr0Y1jTzW23Pk2HzQw00P3sLNXjj3P4SV1PQfAPbwMtCGz+MtHHUd2Y9mLux7Mnl9hrweWJGdNZc75w/63TtsXCkQwMB5kuYW4Bl4NPIklLKskBVdSMNKuqRULT7H7oSpVE4vczWRhe/iHb/wNaSq4O59zKFNRI/iDXLYI/nSkyncEfxqBRdNLMFsR+NSRZ7Z0dcBj76zOdjwqBDCS+yCnhkaqAxwBUWkFReAECY+bl/bSLedIh5eQ7L3auN1acqJ5Ym8GftqPDrVwnSQIPHejQhscrvSB2Jnds6SjO5M/8OoYgaDdDJSqVqxDmAKqjRDU2BISlQya/Wu+lcQSn9Z73zSFuEFTiCQgHEEMIdU/rGz6rSgrslLYQQm4IphDpqdSQW9U0bhIcB2btTP5BZV+tZpdJ6tf0s0yW6aGOHCu5zzRtrNkFHzXuOA6zbL066s9A2HrE3UOwrYwj2LDPS1djw909AAlgVyva/qHY9tpvhY5SgJq4veVzsEuW/A3d7MH8+v88dph5bveCf3ddTkwm3+53ULhw2O2WhqnaDu+mK2/fDC8j9nW3AHI6oN4O4osUd8iuZbXHmUo6jIkRFuninlqdx0Siwg5B1LqqyFhpoRZJPg9iNAh4n52ImTjEqFDbe7ZiZCMSYRUtKOHnI735+Q+Wc8yPW/YcjSfnoysLrp6iJhHkdXJz4dOLnRfl4vF6hBW6q5yLe/eZMDuXcAnvRgOhPaYKIRadgpKyj1JjeDjGQjPUUY9IXFOd2WnlqxD0clBv/Z0lOF8AmLpLw7IUm5FbQQc5ZLWn7V89pN6giGQ2r0Ukpv4HjVrwv0tUC7NIxd4+44WpnuPiCUbb42aICDEkIIz7aRTBn3ZG+yylSq6hf5wkaUfKTXc7Wbo22bQsi30wSImPEZfzbcI2bhE6MBN9OxESMYkQirahBE/3c/ThXaEp+thu8u09JoogFKHPdIsbBAVZPrROOD4RN9jlA54vSLYodPMgwNeOGYnTwnqurst1ClB5MTOx6gKPRGGYGX55nQI1rXB9fJCjyCNQg8yNV4lJBLaHgnUqPDuHtZbhZecOJL2BWpnGJsDM438Qe2icN+h3BE63EcMEMERYXqtZUJS50VaR/+QKMSpyYdC7C/6p1FE//yAAFxLcUgZZmaFKaZU6Im29D8MmhtwORd57FFJZxk2cgM6LIFcKEqxVBgSLHFLiINGloXde5Hh4eRA9DLELzI8lR2IXIZUtGU4/fSAu1Djif/pcTaToPG/945gDy45jSD+P9Tr27gDh4AqSTiUEEIkOK7fsb/gz6ocDoUX344/gmxknj+hzuVRZysl+prO7kuStMylQkAJZIydMXWKM29TyU5PZa9B1AmJOi/9Vw1FvjpdGnAKpHy14hA6wp7Gm5v5XPmVYzPtcGCNP+HCMf3nc26OuOCcC+yN8IbQEZbKA0ixsS6fV6YL1RVHXDJHz0mIHeMjb2xSor2uv9k6pzfLSYdIh5yPqEOkYaoWDruaAhY22vK9KEQ67qKFi5GaTRcMYKqQNLAywGr49QdCpPZtZafb9re96ub6X//biF9/53/ekhn+dPPrv9+hK4ejPXxHXkU6qBAUEkD7MEwxAvU/NtdHXwOgIIRpP4RRoootaTUTgKiJHThlesYFlf5KAyos+d/B41GOytc9p4+BhuF+F3BjGzABkiihTerueGTRFg6CABOkFVtHwlgWXND1YMN2hQfBhCRV8y4YhLT/UWVfb1CLMMC4oFxrhOFkRMWzxigZwsNSMgYu7lu5BbrBwqG537mzRHXsLLlqcKddSUAJx0pja7s/HXtCBVYOnZ+T0VfT/Sco1MYU7k6Zt2wijkZfYWB+uQi4QNxreBfqsasDXKrHST0uEAYw5IxjtmMxQdKX4hLLzrlY+DH9GPNA4CAQTwAceEpWvRGsCApMRYPr/2/PlLfUiOIx4mHPpnhWRtzVGSu1IUI9pWHh0R0caJTQcE5wRQwNh+poCNrIC014Pm8cA2TybBLtaDhJ1J73CAhkx6iXBE7B7w58ssU5aZQ6OMbjeU/BbE8r/8kUqJza
jWG9jEIVUFgSLiBCWFj6iyKy6WGZVYfn+Y3Q13LNiQZXWhQ21RZDajyUTS8y1w5LvivrZnh4hE21PZMozR0aZSwUITQmlWgjEJAjyz0RQFChEMsL0RHXoQlyaLIYQfj8VM585ZBPhhvnqGmxCUUVlXqNxQK2GVUjkkvYjNfFZYuR2NJCU0/b0kIxIrSlgY9/fT658g5LbcTwCNny5d2iI4ZqJzfXDPqhSY3ToofsIPEuFwwhqDaQKFx3gRDgTHAF9WMzKiVvby+IRzI4UFgb9Jj7c4Lo3ldb55xHIdz4zCkeWf/IWFZa56p0xNCwGPBQ6bAzoOHL0vQNjWIr2pidMDypsFrUXDDYoGUZVbcoHq6TZOplSewaW0dctcbT6CYp3uUcVJJhpHnwOAeUK4G16u50O2a9jbnXxPWg+25JkWCumHJdb7HH+kJ+k+bZ9ogDqZcUzigijBBVz+gqDBhTijLFtktPY9uz54MfiQsp/4ishVSgmjsnI17lhzutuC9bEaOlcF7hS80IZymQEqCyNVQUh5uVO3chMMfpYcwZJII1mY98mwoeeP3qHivWvM5LY8W+kcf7r9sMvKOwfKRpLB6mAU5WDruva5+SgDNUaDeEEWeD6QliwSmW17qcLX6RHlOKHfV4NDuDy0eahh5vdwaLiiLLmFU1bCV8+GbVgIrqfD7feBR1vCmEKOXe8fT34eTuQEU6HgMtEFDqoKOlrTcxRffC1SoeNUbzTVzOLTjMEdsju26D9WpHjog5RdRw15Snv8RCCOtyVkAAQthSfNMlhB0jo39nFsruchwZISxzKMY+QymOixCWOaziz0mI0RDCttNYUyOEPeRpHPEoGnz9erHSXptQTCmEECyaJQNQxeqX6zTNqtlsPSN379NFYq74fw==</diagram></mxfile>
2109.10259/main_diagram/main_diagram.pdf ADDED
Binary file (49.5 kB).
 
2109.10259/paper_text/intro_method.md ADDED
@@ -0,0 +1,136 @@


1
+ # Introduction
2
+
3
+ Graph neural networks (GNNs) [@kipf2016gcn; @velivckovic2017gat; @xu2018gin; @hamilton2017graphsage] are gaining increasing attention in the realm of graph representation learning. By generally following a recursive neighborhood aggregation scheme, GNNs have shown impressive representational power in various domains, such as point clouds [@shi2020pointgnn], social networks [@fan2019gnnsocial], and chemical analysis [@de2018molgan]. Most existing GNN models are trained in an end-to-end supervised fashion, which relies on a high volume of finely annotated data. However, labeling graph data requires a huge amount of effort from professional annotators with domain knowledge. To alleviate this issue, GAE [@kipf2016vgae] and GraphSAGE [@hamilton2017graphsage] exploit a naive unsupervised pretraining strategy that reconstructs the vertex adjacency information. Some recent works [@hu2019pretraingnn; @you2020sslgcn] introduce self-supervised pretraining strategies to GNNs that further improve generalization performance.
4
+
5
+ More recently, with the development of contrastive multi-view learning in computer vision [@he2020moco; @chen2020simclr; @tian2019cmc] and natural language processing [@yang2019xlnet; @logeswaran2018efficient], some self-supervised pretraining approaches perform as well as (or even better than) supervised methods. In general, contrastive methods generate training views using data augmentations: views of the same input (positive pairs) are pulled together in the representation space, while views of different inputs (negative pairs) are pushed apart. To work on graphs, DGI [@velivckovic2018dgi] treats the graph-level and node-level representations of the same graph as positive pairs, pursuing representations that are consistent between local and global features. CMRLG [@hassani2020cmcgnn] achieves a similar goal by grouping the adjacency matrix (local features) and its diffusion matrix (global features) as positive pairs. GCA [@zhu2020gca] generates positive view pairs through sub-graph sampling guided by structural priors, with node attributes randomly masked. GraphCL [@you2020graphcl] offers even more augmentation strategies, such as node dropping and edge perturbation. While the above attempts incorporate contrastive learning into graphs, they usually fail to generate views that respect the semantics of the original graphs or to adapt augmentation policies to specific graph learning tasks.
6
+
7
+ Blessed by the invariance of image semantics under various transformations, image data augmentation has been widely used [@cubuk2019autoaugment] to generate contrastive views. However, graph data augmentation can be ineffective here, as transformations of a graph may severely disrupt the semantics and properties it must preserve for learning. Meanwhile, InfoMin [@tian2020goodview] improves contrastive learning for vision tasks by proposing to replace image data augmentation with a flow-based generative model for contrastive view generation. Learning a probability distribution of contrastive views conditioned on an input graph might thus be an alternative to simple data augmentation for graph contrastive learning, but it still requires non-trivial effort, as the performance and scalability of common graph generative models are poor in real-world scenarios.
8
+
9
+ ::: minipage
10
+ :::
11
+
12
+ []{#tab-aug-compare label="tab-aug-compare"}
13
+
14
+ In this work, we propose a learnable graph view generation method, namely AutoGCL, to address the above issues by learning a probability distribution over node-level augmentations. While conventional pre-defined view generation methods, such as random dropout or graph node masking, may inevitably change the semantic labels of graphs and ultimately hurt contrastive learning, AutoGCL adapts to the input graph so that it can well preserve the graph's semantic label. In addition, thanks to the gumbel-softmax trick [@jang2016gumbelsoftmax], AutoGCL is end-to-end differentiable while providing sufficient variance for contrastive sample generation. We further propose a joint training strategy that trains the learnable view generators, the graph encoders, and the classifier in an end-to-end manner. The strategy combines a view similarity loss, a contrastive loss, and a classification loss, and it drives the view generators to produce augmented graphs that carry similar semantic information but have different topological properties. In Table [\[tab-aug-compare\]](#tab-aug-compare){reference-type="ref" reference="tab-aug-compare"}, we summarize the properties of existing graph augmentation methods, where AutoGCL dominates the comparison.
15
+
16
+ We conduct extensive graph classification experiments under semi-supervised learning, unsupervised learning, and transfer learning settings to evaluate the effectiveness of AutoGCL. The results show that AutoGCL improves state-of-the-art graph contrastive learning performance on most of the datasets. In addition, we visualize the generated graphs on the MNIST-Superpixel dataset [@monti2017mnistsuperpix] and show that AutoGCL better preserves the semantic structures of the input data than existing pre-defined view generators.
17
+
18
+ Our contributions can be summarized as follows.
19
+
20
+ - We propose a graph contrastive learning framework with learnable graph view generators embedded into an auto augmentation strategy. To the best of our knowledge, this is the first work to build learnable generative node-wise augmentation policies for graph contrastive learning.
21
+
22
+ - We propose a joint training strategy for training the graph view generators, the graph encoder, and the graph classifier under the context of graph contrastive learning in an end-to-end manner.
23
+
24
+ - We extensively evaluate the proposed method on a variety of graph classification datasets with semi-supervised, unsupervised, and transfer learning settings. The t-SNE and view visualization results also demonstrate the effectiveness of our method.
25
+
26
+ # Method
27
+
28
+ Our goal is to design a learnable graph view generator that learns to generate augmented graph views in a data-driven manner. Although various graph data augmentation methods have been proposed, there has been little discussion of what makes a good graph view generator. From our perspective, an ideal graph view generator for data augmentation and contrastive learning should satisfy the following properties: (1) It supports augmentations of both the graph **topology** and the **node features**. (2) It is **label-preserving**, *i.e.*, the augmented graph should maintain the semantic information of the original graph. (3) It is **adaptive** to different data distributions and scalable to large graphs. (4) It provides **sufficient variance** for contrastive multi-view pre-training. (5) It is **end-to-end differentiable** and **efficient** enough for fast gradient computation via **back-propagation (BP)**.
29
+
30
+ Here we provide an overview of the augmentation methods proposed in the existing graph contrastive learning literature in Table [\[tab-aug-compare\]](#tab-aug-compare){reference-type="ref" reference="tab-aug-compare"}. CMRLG [@hassani2020cmcgnn] applies a diffusion kernel to obtain different topological structures. GRACE [@zhu2020grace] uses random edge dropping and node attribute masking[^1]. GCA [@zhu2020gca] uses node dropping and node attribute masking along with a structural prior. GraphCL [@you2020graphcl] proposes the most flexible set of graph data augmentations so far, including node dropping, edge perturbation[^2], sub-graph sampling[^3], and attribute masking. We provide a detailed ablation study and analysis of GraphCL augmentations with different augmentation ratios in Section 1.1 of the supplementary. JOAO [@you2021joao] optimizes the augmentation sampling policy of GraphCL in a Bayesian manner. AD-GCL [@suresh2021adgcl] designs a learnable edge dropping augmentation.
31
+
32
+ In this work, we propose a learnable view generator to address all the above issues. Our view generator includes both node dropping and attribute masking, but it is much more flexible since the two augmentations can be employed simultaneously in a node-wise manner, without the need to tune an *"aug ratio"*. Besides the concern about model performance, another reason for not incorporating edge perturbation in our view generator is that generating edges with learnable methods (*e.g.*, VGAE [@kipf2016vgae]) requires predicting the full adjacency matrix, which contains $O(N^2)$ elements and is therefore a heavy burden for back-propagation when dealing with large-scale graphs.
33
+
34
+ <figure id="fig-view-generator" data-latex-placement="t">
35
+ <div class="center">
36
+ <embed src="figures/fig-view-generator.pdf" style="width:100.0%" />
37
+ </div>
38
+ <figcaption>The architecture of our learnable graph view generator. The GNN layers embed the original graph to generate a distribution for each node. The augmentation choice of each node is sampled from it using the gumbel-softmax. </figcaption>
39
+ </figure>
40
+
41
+ Figure [1](#fig-view-generator){reference-type="ref" reference="fig-view-generator"} illustrates the scheme of our proposed learnable graph view generator. We use GIN [@xu2018gin] layers to obtain the node embedding from the node attributes. For each node, we use the embedded node feature to predict the probability of selecting each augmentation operation. The augmentation pool for each node is drop, keep, and mask. We employ the gumbel-softmax [@jang2016gumbelsoftmax] to sample from these probabilities and then assign an augmentation operation to each node. Formally, if we use $k$ GIN layers as the embedding layers, we denote $\boldsymbol{h}_v^{(k)}$ as the hidden state of node $v$ at the $k$-th layer and $\boldsymbol{a}_v^{(k)}$ as the embedding of node $v$ after the $k$-th layer. For node $v$, we have the node feature $\boldsymbol{x}_v$, the augmentation choice $f_v$, and the function $\text{Aug}(\boldsymbol{x}, f)$ for applying the augmentation. Then the augmented feature $\boldsymbol{x}_v^{'}$ of node $v$ is obtained via
42
+
43
+ ::: small
44
+ $$\begin{align}
45
+ \boldsymbol{h}_v^{(k-1)} &= \text{COMBINE}^{(k-1)} ( \boldsymbol{h}_v^{(k-2)}, \boldsymbol{a}_v^{(k-1)} ) \\
46
+ \boldsymbol{a}_v^{(k)} &= \text{AGGREGATE}^{(k)} ( \{ \boldsymbol{h}_u^{(k-1)} : u \in \mathcal{N}(v) \} ) \\
47
+ f_v &= \text{GumbelSoftmax} ( \boldsymbol{a}_v^{(k)} ) \\
48
+ \boldsymbol{x}_v^{'} &= \text{Aug}(\boldsymbol{x}_v, f_v)
49
+ \end{align}$$
50
+ :::
51
+
52
+ The output dimension of the last layer $k$ is set to the number of possible augmentations per node, so $\boldsymbol{a}_v^{(k)}$ denotes the probability distribution over the augmentation choices. $f_v$ is a one-hot vector sampled from this distribution via the gumbel-softmax, and it is differentiable thanks to the reparameterization trick. The augmentation applying function $\text{Aug}(\boldsymbol{x}_v, f_v)$ combines the node attribute $\boldsymbol{x}_v$ and $f_v$ using differentiable operations (e.g., multiplication), so the gradients of the view generator's weights are preserved in the augmented node features and can be computed via back-propagation. For the augmented graph, the edge table is updated using $f_v$ for all $v \in V$, where the edges connected to any dropped node are removed. As the edge table only guides node feature aggregation and does not participate in the gradient computation, it does not need to be updated in a differentiable manner. Therefore, our view generator is end-to-end differentiable. The GIN embedding layers and the gumbel-softmax can be efficiently scaled up to larger graph datasets and more augmentation choices.
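+
+ A minimal PyTorch-style sketch of such a node-wise sampler is given below (our illustration, not the released code: a single linear layer stands in for the GIN embedding layers, and the choice order {keep, drop, mask} is an assumption):
+
+ ```python
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F
+
+ class ViewGeneratorSketch(nn.Module):
+     def __init__(self, in_dim, hid=64):
+         super().__init__()
+         self.embed = nn.Linear(in_dim, hid)   # stand-in for the GIN layers
+         self.head = nn.Linear(hid, 3)         # logits over {keep, drop, mask}
+
+     def forward(self, x, tau=1.0):
+         logits = self.head(torch.relu(self.embed(x)))     # (num_nodes, 3)
+         f = F.gumbel_softmax(logits, tau=tau, hard=True)  # one-hot, differentiable
+         x_aug = x * f[:, 0:1]       # dropped/masked nodes get zeroed features
+         node_kept = 1.0 - f[:, 1]   # dropped nodes are also removed from the edge table
+         return x_aug, node_kept, f
+
+ gen = ViewGeneratorSketch(in_dim=16)
+ x_aug, node_kept, f = gen(torch.randn(10, 16))   # 10-node toy graph
+ ```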
53
+
54
+ Since contrastive learning requires multiple views to form a positive view pair, our framework uses two view generators and one classifier. According to the InfoMin principle [@tian2020goodview], a good positive view pair for contrastive learning should maximize the label-related information while minimizing the mutual information (similarity) between the views. To achieve this, our framework uses two separate graph view generators and trains them jointly with the classifier.
55
+
56
+ <figure id="fig-framework" data-latex-placement="t">
57
+ <div class="center">
58
+ <embed src="figures/fig-framework.pdf" style="width:100.0%" />
59
+ </div>
60
+ <figcaption>The proposed AutoGCL framework is composed of three parts: (1) two <em>view generators</em> that generate different views of the original graph, (2) a <em>graph encoder</em> that extracts the features of graphs and (3) a <em>classifier</em> that provides the graph outputs.</figcaption>
61
+ </figure>
62
+
63
+ Here we define three loss functions, contrastive loss $\mathcal{L}_{\text{cl}}$, similarity loss $\mathcal{L}_{\text{sim}}$, and classification loss $\mathcal{L}_{\text{cls}}$. For contrastive loss, we follow the previous works [@chen2020simclr; @you2020graphcl] and use the normalized temperature-scaled cross entropy loss (NT-XEnt) [@sohn2016clloss]. Define the similarity function $\text{sim}(\boldsymbol{z}_1,\boldsymbol{z}_2)$ as
64
+
65
+ ::: small
66
+ $$\begin{align}
67
+ \text{sim}(\boldsymbol{z}_1,\boldsymbol{z}_2) = \frac{\boldsymbol{z}_1 \cdot \boldsymbol{z}_2}{ {\lVert \boldsymbol{z}_1 \rVert}_2 \cdot {\lVert \boldsymbol{z}_2 \rVert}_2 }
68
+ \end{align}$$
69
+ :::
70
+
71
+ Suppose we have a data batch made up of $N$ graphs. We pass the batch to the two view generators to obtain $2N$ graph views. We regard the two augmented views from the same input graph as the positive view pair. We use $\mathbbm{1}_{[k \neq i]} \in \{0, 1\}$ to denote the indicator function. We denote the contrastive loss function for a positive pair of samples $(i, j)$ as $\ell(i,j)$, the contrastive loss of this data batch as $\mathcal{L}_{\text{cl}}$, the temperature parameter as $\tau$, then we have
72
+
73
+ ::: small
74
+ $$\begin{align}
75
+ \ell(i,j) &= - \log \frac{\exp ( \text{sim} (\boldsymbol{z}_i, \boldsymbol{z}_j) / \tau )}{ \sum_{k=1}^{2N} \mathbbm{1}_{[k \neq i]} \exp(\text{sim} (\boldsymbol{z}_i, \boldsymbol{z}_k) / \tau ) } \\
76
+ \mathcal{L}_{\text{cl}} &= \frac{1}{2N} \sum_{k=1}^{N} [\ell(2k-1, 2k) + \ell(2k, 2k-1)]
77
+ \end{align}$$
78
+ :::
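+
+ The following minimal sketch (ours, not the authors' implementation) computes this batch loss for embeddings arranged so that consecutive rows form the positive pairs:
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def nt_xent(z, tau=0.5):
+     """NT-XEnt over z of shape (2N, d); rows (0,1), (2,3), ... are positive pairs."""
+     z = F.normalize(z, dim=1)              # so dot products are cosine similarities
+     sim = z @ z.t() / tau
+     sim.fill_diagonal_(float('-inf'))      # the 1_{[k != i]} indicator: drop k == i
+     target = torch.arange(z.size(0)) ^ 1   # partner index: 0<->1, 2<->3, ...
+     return F.cross_entropy(sim, target)    # averages l(i, j) over all 2N anchors
+
+ loss = nt_xent(torch.randn(8, 32))         # batch of N = 4 positive pairs
+ ```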
79
+
80
+ The similarity loss is used to minimize the mutual information between the views generated by the two view generators. During the view generation process, we obtain a sampled state matrix indicating each node's corresponding augmentation operation (see Figure [1](#fig-view-generator){reference-type="ref" reference="fig-view-generator"}). For a graph $G$, we denote the sampled augmentation choice matrices of the two view generators as $A_1, A_2$, and formulate the similarity loss $\mathcal{L}_{\text{sim}}$ as
81
+
82
+ ::: small
83
+ $$\begin{align}
84
+ \mathcal{L}_{\text{sim}} = \text{sim}(A_1, A_2)
85
+ \end{align}$$
86
+ :::
87
+
88
+ Finally, for the classification loss, we directly use the cross entropy loss ($\ell_\text{cls}$). For a graph sample $g$ with class label $y$, we denote the augmented views as $g_1$ and $g_2$ and the classifier as $F$. The classification loss $\mathcal{L}_{\text{cls}}$ is then formulated as
89
+
90
+ ::: small
91
+ $$\begin{align}
92
+ \mathcal{L}_{\text{cls}} &= \ell_\text{cls}(F(g), y) + \ell_\text{cls}(F(g_1), y) + \ell_\text{cls}(F(g_2), y)
93
+ \end{align}$$
94
+ :::
95
+
96
+ $\mathcal{L}_{\text{cls}}$ is employed in the semi-supervised pre-training task to encourage the view generator to generate label-preserving augmentations.
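+
+ In code, these two terms reduce to a few lines (a sketch under assumed shapes: `A1`, `A2` are the (num_nodes, 3) choice matrices sampled by the two generators, and `classifier` is any callable returning class logits):
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def sim_loss(A1, A2):
+     # cosine similarity between the two flattened augmentation-choice matrices
+     return F.cosine_similarity(A1.flatten(), A2.flatten(), dim=0)
+
+ def cls_loss(classifier, g, g1, g2, y):
+     # cross entropy on the original graph and on both augmented views
+     return (F.cross_entropy(classifier(g), y)
+             + F.cross_entropy(classifier(g1), y)
+             + F.cross_entropy(classifier(g2), y))
+ ```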
97
+
98
+ For unsupervised learning and transfer learning tasks, we use a naive training strategy (naive-strategy). Since labels are unavailable in the pre-training stage, $\mathcal{L}_{\text{sim}}$ is not used: it does not make sense to encourage the views to differ without also keeping the label-related information, which could lead to useless or even harmful view samples. We simply train the view generators and the classifier jointly to minimize $\mathcal{L}_{\text{cl}}$ in the pre-training stage.
99
+
100
+ Also, we note that the quality of the generated views will not be as good as that of the original data. When minimizing $\mathcal{L}_{\text{cl}}$, instead of only minimizing the $\mathcal{L}_{\text{cl}}$ between two augmented views as in GraphCL [@you2020graphcl], we also make use of the original data. By pulling the original data and the augmented views close in the embedding space, the view generators are encouraged to preserve the label-related information. The details are described in Algorithm [\[algo-naive\]](#algo-naive){reference-type="ref" reference="algo-naive"}; a minimal Python sketch of the same loop follows the algorithm block below.
101
+
102
+ ::::: algorithm
103
+ :::: small
104
+ ::: algorithmic
105
+ Initialize weights of the two view generators $G_1$, $G_2$
+ Initialize weights of the classifier $F$
+ Get augmentations $x_1 = G_1(x), x_2 = G_2(x)$
+ Sample two views $v_1, v_2$ from $\{x, x_1, x_2\}$
+ $\mathcal{L} = \mathcal{L}_{\text{cl}} (v_1, v_2)$
+ Update the weights of $G_1, G_2, F$ to minimize $\mathcal{L}$
+ $\mathcal{L} = \mathcal{L}_{\text{cls}}(x)$
+ Update the weights of $F$ to minimize $\mathcal{L}$
106
+ :::
107
+
108
+ []{#algo-naive label="algo-naive"}
109
+ ::::
110
+ :::::
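+
+ The sketch below restates the naive strategy's pre-training step in runnable form (our paraphrase with assumed helper names: `nt_xent` is the contrastive loss sketched earlier, `encode` maps a batch of graphs to embeddings, and each generator is assumed to return the augmented batch):
+
+ ```python
+ import random
+ import torch
+
+ def naive_pretrain_step(x, G1, G2, encode, opt):
+     v1, v2 = random.sample([x, G1(x), G2(x)], 2)   # two views from {x, x1, x2}
+     z1, z2 = encode(v1), encode(v2)                # (N, d) embeddings each
+     z = torch.stack([z1, z2], dim=1).reshape(-1, z1.size(1))  # interleave pairs
+     loss = nt_xent(z)
+     opt.zero_grad()
+     loss.backward()
+     opt.step()                                     # updates G1, G2, and F jointly
+     return loss.item()
+ ```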
111
+
112
+ For semi-supervised learning tasks, we propose a joint training strategy that performs contrastive training and supervised training alternately. This strategy generates label-preserving augmentations and outperforms the naive-strategy; the experimental results and detailed analysis are shown in Section [4.1.3](#sec-semi-exp){reference-type="ref" reference="sec-semi-exp"} and Section [4.3](#sec-joint-strategy-analysis){reference-type="ref" reference="sec-joint-strategy-analysis"}.
113
+
114
+ In the joint-strategy, during the unsupervised training stage we fix the view generators and train the classifier by contrastive learning on unlabeled data. During the supervised training stage, we jointly train the view generators with the classifier on labeled data. By simultaneously optimizing $\mathcal{L}_{\text{sim}}$ and $\mathcal{L}_{\text{cls}}$, the two view generators are encouraged to generate label-preserving augmentations that are nevertheless different from each other. The unsupervised and supervised training stages are repeated alternately; a Python sketch of one such round follows the algorithm block below. This is very different from previous graph contrastive learning methods: previous work like GraphCL [@you2020graphcl] uses a pre-training/fine-tuning strategy, which first minimizes the contrastive loss ($\mathcal{L}_{\text{cl}}$) until convergence on unlabeled data and then fine-tunes with labeled data.
115
+
116
+ ::::: algorithm
117
+ :::: small
118
+ ::: algorithmic
119
+ Initialize weights of $G_1$, $G_2$, $F$
+ Fix the weights of $G_1, G_2$
+ Get augmentations $x_1 = G_1(x), x_2 = G_2(x)$
+ Sample two views $v_1, v_2$ from $\{x, x_1, x_2\}$
+ $\mathcal{L} = \mathcal{L}_{\text{cl}} (v_1, v_2)$
+ Update the weights of $F$ to minimize $\mathcal{L}$
+ Get augmentations $x_1 = G_1(x), x_2 = G_2(x)$
+ $\mathcal{L} = \mathcal{L}_{\text{cls}}(x, x_1, x_2) + \lambda \cdot \mathcal{L}_{\text{sim}}(x_1, x_2)$
+ Update the weights of $G_1, G_2, F$ to minimize $\mathcal{L}$
120
+ :::
121
+
122
+ []{#algo-joint label="algo-joint"}
123
+ ::::
124
+ :::::
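+
+ A rough sketch of one joint-strategy round is given below (ours, under the same assumed helpers as before: `contrastive_step` runs the contrastive update with the generators frozen, `sim_loss` and `cls_loss` are the functions sketched earlier, and each generator is assumed to return the augmented batch together with its sampled choice matrix):
+
+ ```python
+ def joint_round(unlabeled, labeled, G1, G2, classifier, opt_F, opt_all, lam=1.0):
+     # Unsupervised stage: generators fixed, train the classifier on L_cl.
+     for p in list(G1.parameters()) + list(G2.parameters()):
+         p.requires_grad_(False)
+     for x in unlabeled:
+         contrastive_step(x, G1, G2, classifier, opt_F)
+     for p in list(G1.parameters()) + list(G2.parameters()):
+         p.requires_grad_(True)
+     # Supervised stage: train G1, G2, and the classifier on L_cls + lam * L_sim.
+     for x, y in labeled:
+         (x1, a1), (x2, a2) = G1(x), G2(x)
+         loss = cls_loss(classifier, x, x1, x2, y) + lam * sim_loss(a1, a2)
+         opt_all.zero_grad()
+         loss.backward()
+         opt_all.step()
+ ```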
125
+
126
+ However, we found that for graph contrastive learning, the pre-training/fine-tuning strategy is more likely to cause over-fitting in the fine-tuning stage, and minimizing $\mathcal{L}_{\text{cl}}$ too aggressively may have a negative effect on fine-tuning (see Section [4.3](#sec-joint-strategy-analysis){reference-type="ref" reference="sec-joint-strategy-analysis"}). We speculate that minimizing $\mathcal{L}_{\text{cl}}$ too much pushes data points near the decision boundary too close to each other, making it more difficult for the classifier to separate them. No matter how well we train the GNN classifier, some samples will still be misclassified due to the natural overlap between the data distributions of different classes; yet in the contrastive pre-training stage, the classifier is not aware of whether the samples being pulled together really come from the same class.
127
+
128
+ ::: table*
129
+ []{#tab-unsup-exp label="tab-unsup-exp"}
130
+ :::
131
+
132
+ ::: table*
133
+ []{#tab-transfer-exp label="tab-transfer-exp"}
134
+ :::
135
+
136
+ Therefore, we propose a new semi-supervised training strategy, namely the joint-strategy, which alternately minimizes $\mathcal{L}_{\text{cl}}$ and $\mathcal{L}_{\text{cls}} + \mathcal{L}_{\text{sim}}$. Minimizing $\mathcal{L}_{\text{cls}} + \mathcal{L}_{\text{sim}}$ is inspired by InfoMin [@tian2020goodview]: it makes the two view generators keep label-related information while sharing less mutual information. However, since we only have a small portion of labeled data to train our view generators, it is still beneficial to use the original data, just like the naive-strategy. Since we need to minimize $\mathcal{L}_{\text{cls}}$ and $\mathcal{L}_{\text{sim}}$ simultaneously, a weight $\lambda$ can be applied to balance the optimization, but we found that setting $\lambda=1$ works well in the experiments of Section [4.1](#sec-sota){reference-type="ref" reference="sec-sota"}. The detailed training strategy is described in Algorithm [\[algo-joint\]](#algo-joint){reference-type="ref" reference="algo-joint"}, and an overview of our whole framework is shown in Figure [2](#fig-framework){reference-type="ref" reference="fig-framework"}.
2110.06149/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2021-05-07T08:03:48.833Z" agent="5.0 (X11)" etag="n-IirJRnFRuXw3N6YCrh" version="14.6.10" type="device"><diagram id="iGyCLbtvGnDWooacof5f" name="Page-1">jLxXr8O60iX4a+5jA8rhUTknW/lNOVg5S79+xH3One4PjQFmY8OBkkWKrFq1VhXt/6Bcf0lLMtXGmBfdfxAov/6D8v9BEJSAiPcJtNz/tMAQ9W9LtTT5v23/u+HbPMV/T/y3dW/yYv0fJ27j2G3N9D8bs3EYimz7H23Jsozn/zytHLv/2euUVMX/1fDNku7/bg2afKv/acUg6H+3y0VT1f/2jP73QJ/899x/G9Y6ycfz/2hChf+g3DKO2z+v+osrOjB5/52Wfz4n/n8c/X/HtRTD9v/nA6VbErFawHs0+uT/is9q3or/Rf9zlSPp9n/v99/Bbvd/J+Ad9wReNv3fTLFHsWzNOz96khadPa7N1ozDezwdt23s3xM6cIBNsl+1jPuQc2M3Ln+XQsu/v//jGkzXVOCz2zi9rck6/bOCZXMV76jZvy6Z/7ZC/215X+fJlvwHZf55i4jTUP0H4RqftT4npEnVyLx/5terBa96X4XG+yDeHBOBdhlvR5Rh2K8mT14gf75RmFWZrB5x363xl4WKkO0UyceS4D1fbpMlhaffhmFbj+IDYWHOzTBjvVpMzPZmZCTlDIl7/dUeo6ZovPc7rsN3Hh8Y/OpJ2q4LmyzBuiBs9s63uOd/b8SmgORtu/B80vT3WPiD6eCQ+/fl+59YJTq/z9R74okOm/0+F0c/vM7DcsM/F3j/O7l98Pfi7FBW5Pu+fBjwCOXpQvzTpziz4fuYKdjfhcXSphf7oFwkI6QSnFLDZ/HNldee2NW+3scvSZXHP6OQid0HHzoaWkvf59D+XCg4BwUnlDLo7F7/6ee9vpir79DYieRs8h0NaxupjwS+v8eSlkVDHbq7TR8oa4AbqFH3yWn8fSW2pPxeT9ThLp8vcMnxfXApjXufLBQuD5geBmqHTCp4r5rn4QWuvoOB/TNQ0bbA52WCKh/i877sve2mqITKvH8G97OV95F8sAIDfe+y/E6k6LngQiJ1kvNBv2bPHmC6bZr20aPE0R30cs9rYvef2baxzV3K5EYC8MmFvM53fUTtn+vbpYiiJEUe+UYOsXGEVva2suTVEY31aAXmLIvMyebzkOT161u0FdVbm4wsVT9HT3f9smbfFCVKFS0k07S/MKInUTzcRnfUMOgAOiIaTDxZtgZELicnYwjbnP7bhvP/WJd9kCmwkz9r48kiJycB8w/j8CPKh7974Vzo077HcJFdv+hE0ze6s+Bk9P00UpIPT4mcbw7soRY4FemoleJ6yz7v0d+MQPQzXHv2lMM5iyNBlBzJCih/PAYnXQySqVqUfZ96avlIhFPpcZ7YlnMjZPnxmwHrU2oMDx1xvZSGJtKB6OrySftA08urTS7B7X3f7NOhxxCCHG+qs466enKr8BYcvu6nQBU7p7ouOX7eb9ykpSenYfFDYd+23C9IPE3gUFmWNA1eeCFJ5pf28jFtaVmAiUNlMIZYMukCOFH5mDkUZYl/tu2jz4lm2cWtAI/DJ7zruqFH0UIjCMhgrWXRl4TYt/foeYiVFpPL7dE4sVkAfLbzWI+kp9HlCWiS2BbQCNaKzsHD4IMZPuiSFBcsKENswyYbBhhAXktpBwudR31lZcMuuWuwUJzloxlTReSO5AOXe/67ZhBZQM6U7U69otuzWJy1RmEoICoY0T+uCjpEDmDcXUnmu4sc9AyWVX5bkj2ztn8chW7xk0DIDSw6+aGRtBgG/OjxDxk6pgu5yFpuBG3ZG/ws+n7sc/j1ujxI7ChEwl1ItqgY5judiv51exZ+4AfjkCPc5hndB+Awoj/i5YZ38hPmtH8eDjXDKtN0TU/FX7ACT2yAcdwzFfVdDJwpBr4S3qdI0fT1rpaO0SVDEVNHWIOQWw821csnjvQbnUMrj+O9qI0qeC1fDEzuLKattXlzocVJAZbaXGhfxS35TtZpLuXsf96FpOpkh006/4jbv9A2Srez9Of6q/OY3R5pQGiiGPDetS7j3sVHmVoVw4m2L9bZfbuliNZGT9TvCOmmn7Bgi9Svpa2W2/FjEXL6YDV1Jru7rPJnaark7SIwMPIKDrtVEzp+2iiOyyseRxdrV/QaCDHXBciEulYmrXNnxtQoafTJ1q5rg7XMCfxpD3N1IxT7Asj6EOxE49TuJqWP0m15f4wfgMmhPQt/Y4Y+FfZcEQab2V8QYFmCMCyS4wj03MSFZXp6yXxycbzv59CID66IQfQMxMJ7Kz0KJZZv1cbz+korOICIMTB3eiXhu4UXtT+6KarUmRuMsyhUJaXhSGJ3TrPt1ksiHSGOUOgAlK+NZMcu6oiwXlGWD4lP0Ciudmn8oKO4HJcwWkfjJicjxlRNEv58YIk4vV77AoKTdezTtdbzCCVVJJjPhtyTBot4UTfBHiUK5aOhSS77/kiWAXUSIS1txZlPfXQDW7AUfSh1SG67VvPRLYOIGwRc4+3ad1my2gbhd91cfLCLvcyDTbByNR43X5GHlkBz77te3tgpAgwPDLxUV6gBXCUpu/v90hYpNhNzQISUgWfNIwrD0IQPgZlrsoR9l4NhCfk0tnbz8QJENG2ymSioEl/Ds7Bmpto9sbzWGTVIUlIRuy3dxVZZg6w4Gy/5ID+V9tqoB7F+6tn+R3cv8CGn1cgeMFvrgBQTID5Pwm3vD/dgGHNvlJI4oxgsy6cvvB+V3sDIqu5PazN00PpNZNWH/S4eh7GJjGPI9CMgqTzQw5RUy9jNhEYvo7amxBAv9drO2bNozQarubSt5R5H/aLtauA/4HGrQVPbMtjolLowo9FUqKBtKVy2OcMoM7cr7VihdTcPzWuUd1HLKoWah7B69l2A91sjk0Uy4BkgbOacQ9NeRvE9CgL4lVy/s6dpEPe5iy8uGqaM0LkOaQPD2acyHY56bwuM1sNcOpVrUAqbfe2VrbJCed2BhWy5HATUAF2xmdSZjqcUV9oXDxb12Mt8sVBsFxMmjdKoY7/28+PaBYY6W2iudzgjpUcXMsP7/DyD3Fjt/D4nz1Xt8kbw+URFgO3dIbZbmSKK/ZwY5fas9BMQbnPBTS0io5xeQCwbHxFoepPIpOiMqH5qQFBufKTmnKwhEsyloEk9j5D0FdBVaXcjE0LA9Y6CyJIocofviYkcllOvAR1YGLFfsbkA4zp5/iVNGS+vha9bTjoWSqeO6fxZy8QCIcGiY/g58moO+WCfALmsCNbKfsB2PW02j0Iv1b5P14WdziMA4TEZ0OPOqFVnEyLsBD1/Izvz00f45BX1/EBrw6hkVppKWh42iTdxchSZhLano9nNLxpb+DrJXwZWgLjQUgc8kHKvi1IAPQx+InRyN+Ap/rqE8k0vQyFIeOaSx2EJ9wFf9Tqd3w2ERZ0/Wj1aUQlE400JcB
uEuIQnW+aNoHlOZyUrH5SKBLsRmjkqqlRQGn6W7SPHRu7Rdni03jVb4UORdAAEPgtayLPM2SL1xr7lWHmt9EbgJc1cDFUx1on5CZx/bFLOvBSQlcbToHgIzSrC1YmWCG7ZYzTnVKksRpE0SULdKC0uW+x9SvAnrM+GIpoKpxkuaNXqh9FJoC6nz0VQpIsaxkyeIQtjnrlb1mE28wzns0P4hRBnmVfiaaGxZwS/yQ1LZxM97/ayQxyt3MHQyT3R0hDkxdE1JrEWgrKhdZ09CEfkyeoWvHZDPrcN9I3YtFPAtjEBfQEC9ECeFj335CX47Ochy2UyZOjzZPaCqGr4lDPbtXUpAxQmMv/qbhmx8QRFNkYpuTiOEZ7MNFcRJRPNXuwYPPhii1dsiQZPQd8McaCduHZt+7EIpkwbtUUS35dwhSY4nnyRkw2EDm3AaCq70ruHQPt0JAppmMwL+4l+J6h0vOIoN+1Y3dm7iYXSYbg6jQ6SIfSujczA9SvYyZzOnDQUHXmXD0tsXvR3KFKl19j1CVsmXdycx+66Nlr+uCouZBmXlvnQkaaDEjKdPZXwyCgZe+SmbE8dkJCsxzMgsri6byUgRCLqy0WZ5yq2G7FVlRfwevwYJmKbSOQN0/pMP4TTVs4LaFb8MHPYbF1lJilTKk8bdnkOOZe+Q+4NC6QLPBwwTriNpPQBdxGTIcvKXOxFIqHdqrED0DMYjDkY48NymB8w7eJRUnNzqcJ4JhkWAVsIOCogpQghg4Jiku88MC8w6DAugf1lWLvWX65/RRhA/cq8rgCFHZFY2NzmuohxLf6GLyucq3qJPOhThGf6tXtTdbwdBlihDsHqk2LmeqwHref3D8PuHVEnILWex7ayjVZWN9l5hXCsyx8ypv9ym1NoJ9+YsRNbPNM7DJWhEi6QMCmReLTYCXKHIy9PNHL6y2lD7AtHRc6y1Jq2wuUWxSsCDt4chcj0fgzLvsCYknappSk9ajHGDR3v1VpBM3rDxBH/pRT4cygMyG9EBCkVAGlETrrZjtk6lpi8A9q0UKm50LclzgLENw7nF3DYNoePpTGIPpgV4XSQzwk4+1eiP5P33n2+H29w1kiDexkaSS1NSp5Ayi4QwdJHtl5IRHB0FobOjQz8DNEX0ZfXo9Tf4oyNajG4iEerslZOSC4F9cQpPbcluxW+0GOF33EpMJvLisQ3HxCb70v706ues3ylbSJnofJALMgtjHmorn18rTbdh+eZlrcCL6w0ayVm5mQArNvfH2aES3NVL0YsiirxrMlNV9JUblRlXImY/lN5b1z5VShQ/jdzVR/AeZapsWbM7Jm4qyT7Ww6YsmYrtNwo9Tul/rTIAaklW4IC4Spk+wcN2Y4oPKIqNW97WolokiRHOROltXPtvvaSo1LyGWKoFNW1dJjE5DLYmMthn3O36pLxKhoAb2CzZ5SPHoopnZOY0/FL0U6mfwqjvQEK/QCcjSIp51U+6gdkq1s1ZO0xQlt2EDSn9gO2+aTMjDJ3floO1CeR14qlQdpfIX7DRlHFHjdzGMP3FYg4BePyX8NWFjNrXhLBMp+Sfo/JYRVtx/P9pAbyfRGRuZ0kuMvzqV6GLAipY3TEe1KTc2TqCtgNPOm0LsFhjJRHPKYhA8gJToeRQuoTRluvS0VkUd8823rGnupPfP4ERnHewNnp3HZk9wbgpQZW2M/XPlXToWiG0JpvwONDNujk3VCZqVKikTRclnQplv9UUabxgU9EhMZjM99TjUmQaMOt+AaVr+FS3LjahlrZyKiioXZJtpIu761E/oOwhhY2mcIGXjVt9wQFXU2tKm1ivwQ11OYNHexHqYDYUoFhuCaLnAUynp9j94RAOIzXcNf8DeIM0A73yYRS2x0ejfBb4P8YDsE4tIq8xFzH3OY/IJmi+XB5EVv5PEkXYjQ1neRwyd1y3Ax9fr4Bm+L4QeTLTSE77+P7tBw0cfgYCDUFGXxl5C4qRsEGt/Arc4NAAqOD1Tk8hHyqzK8AH7y3vK6/sn31fP9I3od80CWTmEMz++f2xbhA5U7UHgxaqWw5ziEFaSx8E2oE4XWm5fhTfThpJzSKXmzGsTj1j9Hzp1KwN0GzLpSurE2ihcpdMm+htnfmTJq92HvXUfV5ye81oGYpsDevVTTvNQyGv/ptYBmy1tSF2rBK/nESfwPKGlejUGQhc3fXSTwkxEEmSA+NPD0s0dur1Eazf5ZQKt3kCMiNI8wPpREK4ck/IxipLu/QEpAd7dXrSeUJt6G+sMly3wCAlrLmetsCQS9qFPlg7HpLNbaHkEY1xAu47NbYzVwKfBiCixh80JSO9BliDqoJow4JclTDqGCaspwHxvzxEPbMTHsN24nYz3tPNclPUn0X+phBrqoYGrAFF8BTtjL0p5Skl5ulfPZlV4R2sha1TcaG2ThkcVNojAlGd2brRQpWq1CuiRfWBVZqhBhXl5nl4xQnvjMIMHoNu28UzA3VGBnYiaPT16ixSHi6jBjgfTP0odbc2RTQdxikgMpMU0e6dASNX6nFcpmDPkSb1QFZNhrGf+MHbZifyB/hhdbqG6AdxqKHgmITyhpDw1G5Dy6jBU9/rJpEiVXhI25tsCWumVfIPlIrpwQ/VEC0gdDSOYX0pcnqs573C3KQufoeSBocnRVGymLBmE4rriHSmMLaMSuMwsN71Kn3Yu6mudmLGXvDW1CCSDJcNW0UkabVDq5Kn4xAtopNVJsyKPn8VKpJvgrLqiGTL3nz4gOdlR25EgROzCyjEEQojoaarvevsjM8qcGOcCqRNKZ0ZPwms1IS6veZbtFk6XTJk5dPfQfgn1K7fVCYkSBFI731k9f0FuzIZcbCT8DFV7BDQmYVjMivX35jsp1hXWWUJF/jlNEymlP2uNM3nU8DoCyqJZFkfpAgKmh0wJpSbRDz2cdDIJ4q/r1W3+knFrRKTkcdypyvfTaxzE9F5rri7MiIoVOQT65KR8TNx9vQ5Mg5zktyfBaGuEIeG+PN284/HipmZgfWG/V/FmPWbldlmaXsPFTBsjfoW/QamNYf+DZU9sU5QJA58nZZOqFO+hABP6lHRuhnuWdqdbRbBdGMvP6Zxog/skCB3IjozdI+MkD4gFRzcBq+InQFCBkxdQbHWd0dfrCMrvOkveIY9gzm3ikRX7+60Okq/yZToMQ6pjJunihEujKohHIbpTSuzqm1kddUStEyTWLaYJuE1MBbmrs+qMlRqPjtmf1X7jJ7j19y+9ylLttK8gplPL1M/csUbPp57sRkDL/UU7PXA2yULfZJfL9R7oFKBw9T+WMh21KCDkAiD5NKBYqKOXVziT1D7CijK4LJmlH1VEfL4XdMvtPz1xT0dHx+YUZoSLn+V4XHeLZrr8Ik2VKycWdvGXRpTIW5WQHwDnyI5CBpdhNGsVo17PdO5n65eP0cXoWnqjXdGoz5OMrzwS99sDuM2TThk0VIJoenzztYszHa9wIUW9qi72GmmrF45CdvcwXGSVaKUJed2kQ5TW0K56nwS8xUdlWNd8w6T1Xz6jCZimTLaakppEpaMRwmNLYi6/bo8j1hF
wCvXiF+uC8LN2u9GdFL4f3tdMSq6F66EtkzCtNOOBa+oMeUQ/Eeinfhqzd+KqMuTh5Tj0EYkMscQmrOlYuBOOEQ+tFLK0CEb2DTk6SltJ6wkBxVJC5MUG2BFK9TUxXPt3ePcpgUy9zuSej8e72M4d0lBgUGv7qJyGf3oclQux+jsQ6DillZPxG+KIscYeqemA4fTxHxAP+0b0xxDU2tlPMuoBSH5P1BJuNkZI8pf4xJ0Sa1yJOXUVqU4NivGl6gDtqnOWOxuznYxdkYsA9M1dUXuN9XfYViR1R1mnKMVz0cB8iB11EEX0Byf9LZx4650kHxBcm4RBfHDAiADsEMaPyR6K2R3VL4pxHIOh/0z0vMH2Ng4Fkpaju+TPJTSUjWsI3eFYd9Li5m1dOeCV8aOuOCJFN+5qBImZUA8r/O58Ufbu2WOC96XBBJMIunNg/4UrHDG1F3kCnIXPQyBwlfT1qHuolgflkyjL/oY+Pqr7Mq3uRCrwErH2fmqKp76gdqwmned6KGYxFycRkCWs56TjqLCr7VWWS/MjzX8emR7V7gHSIpBQ45mcdixCPuksYgpb2BhEXiVU6FxdBsbi0UoyD6lh3WfhovNZfxJ4HJc4os+XjTK5hHrVG/DG82ijaK1Jki2VIFCDhJxjgnY9QKOT+6sktXKrXVB1LyznV/YIVEQAOyWKZTLFPWUw9kLU6ebR3O5uxXlqKnlGyHoeoqWfZWbnDKlUDl+8jl2wcL3GZQRcDT118uON1f0rBKKm7adD4gsZQTkBBiY4BfLSGVmUpCOR1oYmKAohilKmesbCbxCu0yOUkaPy2ZuwE7LAl5ulyQJDefCabQtvfq4EvhfOqT91nQQP4rzgGy9vlnXPMv9VLAySg4ST15/3TYTA+7Qlc45Y5CJ6hk8hX42lGYqImkwsllgQiymb9ckh3yLCuDQTAMqIuaGjInL/0DVP5YHbVP4D3TJVLoI4kFhzVTLEnMe/2hRVq/fdn3l6hVQSGz3zXoS1ms79oWzhtpOGgMYaFDW6KTh1N6wArmr4w6Xd5zPhTWBRqOY1WJNynLWpDSvIG3oiPZNW7MURFmpipcZwOx4WPjZQiq7h3P2sTF+RnZZembHQqgQmU81sFylNPECpF/8R3vl2LLDW0EDCzY27IKRWVNDigAUT/yXLSPGtpojvlf4p2mYI+dAQuR7Wq2cKwYSokwFwWUgQ/MNmGCZSpKc2HRgJWbAgIB5xMe5+RTUsKLtnKz4uvwGqSOBw9UXiIeZJXyqh0y2W1pRyzUxDwqkq3++NcVXScnq0o2ScE0V84g2KZez+A3fdI6WoLPj4v5nTxH1arpafsgR7nkt2qz0I9XQYh5V7deNbtR3khQcowWcWwEasLj16/gZfK1l6o7rmKaZz5+Vq36CfqGMEXwDVL7niN1dCfsrIBTvPPjfR6BZg6XYRyhzrxkhZ3Xe4M3FE1P2SSycP7EwDHTVVod+urxJBUHLfLGsMKQXG7CJtDQjNDUapYo9KPYPTOhn0nq1rOrLbxcK8P6eqAa5fEMqizSp5k5vyY/Ccrwh7FRUQgPWgKo6MwTM0hMU2efP6+GS06Z1IlIq3sKsd4wjhEAnHTVEQ2QnPU4IOMtjcMEYmG/GLqGzS/IudOwS/EcNSZO8H9qwqzk78b8S2P5xGLs+Ex+XugqtChvCHkMbrkkaalIedgTa/qJijz9Evj8KpXk1YC+eaQ34Zp++4w/IIqItrVbN0P/76XpKFic3Zk2aIVa0F/LVIULquMMl8lzxThhLxkck1tae3uh4BhvXDwlIn45AXOjiZ5X4D5MCbXSXKpwSm6rkRUvO2wDTKkec4lwMzs3+PNhGpStXHaZTAd9ORxb6DfNrJmt0kn/QsWw1fxWWQA0R4zA0dws/YetyWumSRA3fZyvj7mrVbUySkyrTCjO6ucq5AxqKseAGJzRXhB72YcR9vZPqNP2sH08d3qIscYPV6S7ijEFhQmUrDAQDggL1DQ7Bj2bQ0OwPPoVGZ7nXw30MQ6SgJ0yurLkjkYrI3COPjnU5kZNkcr2ZF098wPfBnLtTnnn6kVOY7+PYZJkBUQWqxTDvWyjUMZ4Z+/Ht/vw1W2xMwmY5Yt870Qgx2+2r5CoeWoy0cMzN8GgWqx65w/qG7Rr8voxOd6ku1JIQPb9B3tqPbBsz/FYIJ7lkFYB4KEy7nz0dEc0EuS1Ajc8TVgSTCM+gt2wkN60HINX2ONM6qe9hMuu9Nu77dNo68RLYydV5Pjpse/0FDFIt/Bfu53FIprwrTnzADbdlxGAklBOYkQl21IdZEuSZIGRQi9jOWd2x8c8EpCXOE7+cT1X7Lcz3Ztx6Eg33ld+AKY13f/Kqqa2emfTe4YQOOLufDVSZDp0U4eydbAqeWYt5uNupA1rqebgXOJgOfzgsLZItboRfZpAZeVfOIg0BSOy5OYgFdGD6u07X7O9pA+r1xV/vEwcb+Y1zKIYx44LJE7vQalPs4YLfgJQBLGEPwZNXUO5lPGvOy3c9VOQcvUlCATVj6mYgL18i6huV7Sr4hKZNNNwYpbVIdzkX4rX+yqG/G4XoyysRBvgcgwOGwfsDpOUD2xMI/LKtcY3WZMpuqviNKflXOnIsumg3NUxorfocWDI6uaNjTEEgcd1fEKklX/0Jhoifmetput5sg2fES1gd3OHazjXCEKUx+Ne0XB+7C+ZJiCOI8ZCreQgIBF8ruYbb7RHy21e8sR/80g+7E0hLSY8t9O6xfJaXTo8Vd7raB90Q0doZhLj8fa7sClN6ZJO3W3nLJw9muXz6rgST3xhl8UisWCIHs/mkq14Q795GnDi1ZelBn9UXwm+nbsF9m+Kgd2yJbBLdRcfRMV+l2GxWFfxL13NttoLKxHFTOiVHbhdzK/uvbFQ/FjV0TLc9w0sQ/YkT3U3/b1+ZLr6BSGozwXpNIiY0m+3ocr9DcFJo9DGlzQMmHiXokHH2agmfkwBFQ1ReFS5FR3+Kpot2NuSlf1XSPlIiWD600rsJ2S52GcKiYcrsqt1ldviV3+Ols0ADiFXkct/YBPu0XNR4pmyBIxVL4S7QI2nyDsJ7PEJSPe2j1APvgsDjyA/u8oS/XG1q+hKbzIYyzXWeM7PGmcK7Wd8KQPydK1o6fTnED+podvFicB+pAINHI7XSe96jdeo7UfybEgZGcqcyWzWkOmiKqzEhaQdgSELO58cDI4On7nw1b5j6lioKrWy1L+iXXreXslAFWQzXVKd7zw7aI8jlTNKmoibddccLSrA0DkNjf12zV8JswFgMafx94tPOMYV4uuwj80w2DT1isxnShPTN9OfsAuLdMY6qyEvMrD+yMJCe7k40x+Zh8BgBXMU5NF+/WdHmArPcVLm2gcmTJKKpA8Ji36SVQY99bm3zyfYiuA5GLsW4S1QPwoGCS/OpoS0QuYy7pjPSnfIh6WmPWShgJPrMA1vLio+Sy6mwdb9TKaCQBRphaPYkjDK8V4uhJyxYkO08b0VdYuP1yiI9OUdzRHmmzx9PjoiG6+WGyZZq4SoisMK58vgQvwMrYKQfySjVmnS7tcf
9v3gKDSDbEYTPuvxtERJ4j9DExdQJojyURYpRz4fCVo+ZpD/Nps5QUSN7wMkRUc91y2Fpjw4RIyEsfN3ehGOALUtQyoB3Bx8g0DnKPqt/SJVVgVT7IZ57EjM9KKJKWSZpGMSUfW2muo7wT6LSu52zRsaQyfu8bL0mnXoTnS5hOt045epJQu/UcQwRFTr43cSqEYSXcGqW+hcQ2o+CUlFFLgSRYz7GunIS0se8F5gfFbRl5y1IVuqZ5a0/F2BMuHgZiYsCumq+ySGcwirv7pV7lmLZB3VPvJbM2Mq3Bi9k1znxzp92ChG9Pp+leInpix2XUBx6ZXnMd9Iq84lj0Gl5XjF2f0t7cOI3FW6gg6iK/7Rn37OzSJx8s7D+861mDsYPxAxURVAlUYoKdh3E5TcBaQSXL4UQAFWG2Zv/gU2XFvRyOzOemb1JSK7ZMkKxGgQN5K2RFlirHv+5OzM4lYrw3D4RBmv4hallQGWqH72F7Ju2kT9wwP+/H1wGwjdWLckUhdBJsJE/BqSOPViM4dpzGVVXEdJckNmJOiQHNTYuv5KdknOK1UGlTLS3b/BjHJn5vjiyec9YAPKl82C12fBfh+t059k+TyYtrABy0J7jp6KVIkfiU424A9I4n5ge89bsXHs9cpFAFEFdz8p0jw92/MnrKENeZwnkFjCS3H0mFk9QPzh1rtPElPnEez2TD/FdGEg1JkcqCexLwD/YCQUAeXla5SHk/gKTlo9HNb927amJWYtNjQPbMAbEEYaMTaRJitSg7GWLxKuIqco6FH+mTUvNWCPJ+888WkLlYWZhPtQkr5WX3tdo8VvAVHqBENmzUVczvCBjaHafBGH63Zipk9LfFFJbyrT3VT/ROsabTjOEwXvWH/WPIoKblsJxpnYHf3iqM5YTC8/Ree47tgGn3P/Pp+yoq1w8uvvy/7imM/h+xtjNn1qVmv51f29bVA2Y+epoKlEzT8RV7ayo4GM3j30FtgHpz/DxJgwlnyx767bK3svPz1nUH41xOi3tLrk8dWnxvdXXp/TszHpqgx282jijg5YNjKn0W0hg1LSzm/tN5ntUyaS+4fAYOLFK6r47FXtVXuuC9hZwWnNmYANPmaKn8ttV5CH5D5hGvc9a0m1DPHX2wlYcNSMFzgsfpxiRhVmmwGEtJdX7DrSxmulESiRpTdzgZIHQmgUUFnTEgZEcV0swXDK62ZrIOV3CGs6L1lxXVevUsiVKMyrevXWUgNZkRvCVr/pkF+byOmMAJVwzvl38zeCBybFPLnTtllR7qRACoUj/lbbY/rrXFShXMRUBpEEOsjlxTeNJfZfl9U/iSLrKRs/KBOLtrgbCjz6NpLufCdOE9QXX95NxMCvKNaQ1lfiJwo5QlnWWp7/iHuREEWLYqISFgz8M0yhOdIdltfyvMGIlD+ux0RALytLx0AgpPYnSh7ydU4SGYjB99r6TqoQ+hEIpJ6b86Gqh1oL1lJWPojU2kRe82vvdll1bQd+zqc527TA7qE++M387IePVG2ez0HTNi19p2on9TXj3rk+J6y0k58G9Uc3yEopqmPzAg9jOpJl8yrU8W9E8cQZ/f5JnJ5KlQ7N9a5MCzaEcRSn0u65N7dwffiGgr7OM5MssI3kPkvNDp5E6QFSzEUIvFiW1ESR7qFpPm7K8xH1ytb6slMmlXmOstwrDt5bz4ahPk8hSHbLiNvPeKgfQ9MRr1BVuDLmWYzn836J0AOojIBMnU237WGYYsoB0HHGhzd33vxJwGFsA0O1Fj1OKZFgw36+jWZ5t/SEl+hbQ77yrp8q7b2IonLjVGS5Qk3il2ByYZzJBTlVVcQMYjWIzPpQ09l+ttfM19RVky1IKcDTCjlP52tOBp3jIgPL2+zkfZYgWsZ59QsItZxZ1WLYo7UJXx+6dvic47TEWRqTunrs1T+TxUOe1+vVMQ7rk2DEJ9OdZ6K/evyT/YJRmHWDjHUV4a7fDo5r0edGYWahrhU1olyrdqcW2WOg47hyJLAXLN0iSWoXIJLCdIME30B/LdJIbUwV4+GkdU0wICe13eS1LhBs0drE/qwEfX3CDWt5SGKrZydSXGzlB3LiSOTeQgbKRb7ACRXxbLMp+ZXi1xYz2o9IRGOARxD8Nfjm0w4WsOQ3Bl7MEh15f+4lC/2q5uLnkOtHGVI0mj/X7zZfmH6WG3eGQ5qKIa6497nYPdD8YQKU3DX/6mptj2+TPE26B0Sy/YxmFCWj/FOopkNWPceukoTGRC0FQVwFVoBdiIQL4RhKeE4pQh6sBJD/OvN0wUc1wtYMI/z7peybWdOLqBlYn6BNnW5ZymZqX+T5c2npS18iP6qb815sySMRIaG8QPZ4D2PrLz00HdiDbKEhRyLlybsPBrbU61GCzNN+JopAeHIR3G3Ehi3OiGKNi4wiCwXf4AOM/FVTGNdB9JNpSM5Lv6Z3k72slZaj+6koAsLOfqrvAXbGEeHU7oAjt5+8Pm4eAVVlJehbuY9OpwwwpBowACnSAI5APuVyzFThAS8Fz0dcmL0yiq9YZ9kQpZbLGx/kte1oN2jyyysMMzn5dAhfUAD9ydRSGUhVheUGIYtiFKRnT5GKlL8WqsSqpOuPRpZRMQg+wUcxsRhzHLdvdNSgkHtlLmsCkkREOZa1KzaEUoYXyaLZSNTLmyknrFdA1dJvTcuwqXAPAmF6Vj7g+ML9hNPM6N9vd1WyvLr1ssYqlKMzzvoNYn4WZfSUXTxRwxpi6IAZonXk1f3mUVf5NhRV311F6qgqWG1E3sKs1MsUy0JI1M7afWc2lb/MreALInMxsLKZoSEnjD47+oQOI2vNWIdQVAUGim1PMoPdUgnpYT7WFTvaQ81+m2l1l+L5a0OHy/Bo1q1DMqWHwlAFLBAafeio0vsjWX+u1PfEqHp6iraxwIzk5ox5djrj9t5gDro3kZUnNeyqzMLMg+ChVKlid6yHG8bLpPyVQXxdsV4oQWCn3s9iDNNG1hfJIuP+Fu4TsUfD77/S9vlQQQ9HyF3LLeb4K9yyfjyP1BDhN5nazuDWZCdTHcPD0598KIYezjhyfzVSUk5lIDW/IAx9g5+UpxYH8d37ji0toS3rklnhHhqkMjqCuRF+Clm0+P1KFt2d1S4jo668VNFGZRYrFOxz0Vlg+5jzw6j1JRKt4L6Lev+UCCstDOQeIyTJaSsGxW5vIQ5m0Qv4sJnMYMOoj35F/NoHyhQB1/gMHQgGw5h7TMWfs/6tsQFgYcsXTEPF4qjIjIFn1rxf7hQ/RE5CM9tFIJKDSgMAkGhDKOm+Hw0lcAvk11+OHufFSl3cElD6HcxuQU9jvmuY5P/YfHEbD843OA3fQMBPQY8VYKMrIT/lj87gZjg9+I14hSSgmSDvLk1RbLNLCqPdPTdJ8qyHL4riAyi00r2AQxH5WfcIUMj8QIYjM833yHOflYhG4KtKGhXWDywS7PxjmRKmjuC8r2Kv8i+24dKzVTEbrUEGzOWsa7tjpYmlqTxQA6q10xAOQMKW8a2ganw9wiDp/DEr5GNB5vdNWsIdaqE7/ZulNQSbEUwCxE2DJbJpcRB
Jy48vQFsRw3syUxYRGaEN435ZRqubtQhNquqOHipx+zLwoP1TkwucnxiZZmXQz3i5LpJhG8IqJaGB3OWiD418fEcKeIONJ6dl22J+Tp0kpMIaYQihaGfGVHm2P169FIJa+PlMnRraDuSrqK3PH1kLS0GrD1K7ORQ7phrj4NfSJyqB7eQ421RsgzMeTwxHJH4+JaHlju0J5x/YtVCpG9Tbf6VxtdcO+ugr6vrJd5Sr2+FSN0yhZ/KJWavJIKBIQsH37F81OMy1WvKYh2Zte7nhe8yk5B1IQVdCKEMnpTzQQ1sEyduXGKZ3qZClsFWLXbvmpj+jbdMs28HsR4ISGwNFp4Fl3U6vW/zHi1TEIIsJEqqBHGVDHCBWY8THK3YjqWC4syspZhLyvABff9Kemi6INRyfFhF+9FbXeuTNNot2mL+9EpOEl+3zEgmGp2LT3tlUkxu4Hvyv2ORNxKXOBFU+2oaJOAlVtEQBtdcLxHPqauOfEg31+AEuzxL32dzots/VmS+tJQjwkgNNMXQRfZpkFZWdaHY2L6tw9ohIz5T8eY4TT5tbuELm0mKLjGNf8hW1IuoIE5TcORxa8OPXqdxFuC1yoJ/5PAMs8lxhpvmLCtZpdx1ghYz8BoxzCjoLoU5GwLgWfMvmIB4VbLfsT+o+16mHPm3CXrxByPDwV/wnc+BDbHuDqTq1x1wUIX2ZaJV+jAVx/PopNB4Ok5P9Wk+m3459MPa3p7s9R46uMSvJOz9fb+VNW+xnOvnI2/5alIu2+lB2HZ/+FCisY+3iiuhK687c4laPpwPV8qtHvNN2U8pMdLK2+pS7BNFpP+Ujnf75fIU49GOEnGtLNvZGWAoqKIucBTvR6vVnJ2KzudcxRPlRl9g5rqdZ58zfZjDj/nw+x59oaejoxDV51VlV4X4kHRV/VGNVbgYh6w4EpVA6jyF9u34yElE+CvbyVUbhrRIemdS2WeFvLyhfrmLAnUd2HMh2wL8VCxIThBJJA1xlRTK7CKoT7KUVTSXcjnGkI/TM3Bqw9hhORTbmr0ev+HmpM+TLpk/wxUEegL3gPHqKBIlzZMyIyi++x4DsE/k9WbpqCHDD0Thl8B1AVQB9In1RkoAuXPgUX6bp+BX9IDlwUVfGbe2LY8o3+hjyQuk2aboIfFFQa8jJ6fTnKpFSPfCIZn7zFJ4x/UoU2V9X8+O8SoHXipxs0TRY8lplf1OUM1PaffY1CY49qYXTyl4J6KAZbme7ULn2BWZG74ZmZ5L+yLELCJYT8QTyiTjCZbgA4o6fCLtHxImcmlXmduzlEwHJcwqhjhol9swGmVwNliVkRRYVmu8Ijx5QWlziLj1y5KPyMzyM4j3Cs1mafq4hUooLzW5RRJHDEBptSjOXIVtxkYwISwr3KEEt8jxRkCC3M5IgIwITBuwyC0k09s+hhuuqvRUlsYMjaZDWq5avoVonxVwIutv6tQezjXxL64sHdKBwl6hZnKtgLHxvy0grr+xXnlcQk0+W7HL+RPRLSATce1yXFfCBd04mqKjCiJ+CWhn9DDTprFMUmT5H+r2oqoC3TPZU5evudGn/SFL7C174r7BQWWVBKT0V9O9PU3VlNgFsSsZwFhgkS6v3dztQ2wn4j9S4jjxRJNJycJJrEOZSdkgx8mb/9RfTOxAd+xr7Y88hMJAiZbAjc8WQWz4B5OCIyRgC86MypWEezevy1bkeyGhwaAdj+JVdFYJtkTo03p56+ES4+x/hXXHeJwquMLeug0EIjjWQWkJUnQ5Z0Wx4b4I7YX4GmRwYzPQTO3t5AWeOAuKCBEgaoORuTBATk1zJ3TuvU+KjtrO8/hR3Wc20DTCAaXpWLMK9aTfZpdF5aIor+q8n1Gnm1JVCl0BL6zjWD2ZrdtA8FrlfY4SBZKE/p6nPWiOPPDgy8IQliS8vl1Fk+wrC+jr59xMoKkE2ZaU4NMM+QN6tn/gl4Z6c1zEoGvRaPYv7YLOfCei7YBdP1lHkZoNozPD3HU6PIgrOQsI6J0wIzie3MvYSFCjZ7u8LdvDfNw3j/KUN66MoAC0AgEQqBWa3T9RlJ/hM3gFUBcoqFPm9JG0HcLuieYwZGhXF0LD2Gf1Y5VfVk2Uln9EWCdSSufZh07ATftT782wEyrTAMOTncI8FEM4HmQobweA3RsKFod994zxQvJoAcuJUrMTSImcGpO9ZOchzl8fJoMVM9edIfA+fJxczFcmySaUHlcQ2XEJXqe5w6+XlMgj1PeAWq2VYTS+VipLziJwgu7rbuCRdMj2cn1E8VQyZGeT8JfyVBunWXA2pM652GaEeKPSoMdVPLYiPfTZlcokVm+fRqz9vOp83vHjoV3LqFgkx7lwd1yaCr2fLbTw1zFwEsnaXBZykkB8Z4eiCPLuBCcjWLkXzSIiWLMX2dX0pVJ2hbSiI+yyr2pLLGakx6RLmpzq072sSLUefKQlhTE/bTRNJg21JT+KP9w04L0Mzn+bExfLVdwS5jFxxiwHKmpNZH3Hny4MA23LFvDqsJhnoEbDdgZc6N11QQyzHYkGyywYlagb1WHMzaGu99BDLUnh6lfC8P/6BSegi6uqCmqxrVieXtTEF3IlC2rqPTYyTONfBI6bTXwXIPSM3WWlu+mSKkzWpb2Ynvjypa4BOa6osKxsvzEbEmOWp2C750+Q0B3nc8wAEYKnS7StGOaqE91JT+wpjYLSGOUc4Y8y0rlgsoA2ZOfEmxGHLIUGwwcT+ONnEjzkd8jXjyM84+iU7eU08Ky2bKRtFaOqqhMc5Ek1wBwl9KfZWI1puRPCbBAiCK52r+RjFSz73ZQ6FnmJfsAgGfZX9r1JOuDTzytQHqZF+5BtWNKHEUsDSPYeS02Y+hqrt0MpS5o6oLPVn6If2clZu3JNNgbQ0tEBSjMRP+/Pbm8j4+wYNUo9Zgb8qQRwRL3NEfJkv6IgNOWgOtQxSWu2apUxFTJN4NoMCNTMd44uy2lejJk156c6iJs59mrDw55JX8blWzFKO4SfPK5wv4hYCnTc/tse5iGVUtkdGClL5isz1P66jHR2q6p/15YDqqdibvk3sdZhcMlBmNmOjDqyf8KXTfD4/+xZu6kBZ++MQZLAcFMb3QT1gFlswYlAhtM2UHBAdVMB8rvQy0TwqXroOlC8QRM88ogpxnFK50p2LzlbODM4zhHsUN7W1QzwteLLjmu3F/zLcSTqTZKcf9zTFtINvuokDk6eKdl23/cIxe8KNZ0gywk2ZnwNuyLQlCusKl9Laz5saJ5b0EuyCATi9UcBoIDIAToiO0rY+eEQROYFnS5bzSn83wtMedDztILvOmay5M7hTjtWPTOxrapks1svUDUYeIC3zWV9pw3oWATY0fnqqbTpa31Wl0dzscRYnpxeJuVnzOYkMaMPaX10wgj37y97/qsLuhnimCF1mZDn83QYzjIYDE4jxAQa5Rcp9UCilP1FhLpk15oLaASQsFx6WTfLuUegMFVQ+6JXPblhohS4SnzH2DBBF80dWzJmtK1M8QugLAaGkmViSV5tnkr3HQXJE30oV9UTydcP9WzpEcmgomepcD348wk
j2w2AliXgp1DlTdHAkuOsYeIfhtnM36RVs/cfCwgCDz6nnhh7rJ/byr7V+VWmTkpk3s3Zmx/vh5cpAbnn6q0qyzmDmG9rJCcHVEOswtCHLsiEOO/YFP3gt96EHv8O4oKyawjbPBkYGdpIdkPDHXbfce8tvHe/+2T8CAytU6iUb+E0Rprni3fn3mqQXw1/7CTvr5JZYadPTcDYTmY5VNJCOTQxGHSHLeEXj1SqFpSke2OfFBhPb6x/Ie0LGpBkHwikOsAa1CS4WbAUEb9gJx+OXADgps/VW+IZRnafP87LYnTI8LIHbnCG9v98wcW3N3LTmNRrDmfzjWdLPhsfdtrKFljPinBgnAMNlBvTdwCOv4umcdMeYpPwAfikQYlg6ZeHkb13YiwBLGu2wezEKlAiMfaNlQklB/dwg/8ciJ49cE8kdDJqt9CIaDls9spWFA1VAePqYf5kQNomed3JSO2+IbAmclMDQ5yQxqAdQ1FdyYF27s9bXUaMj2zEG5+2NjNjs46/NSoI6zeZfwR6NrFUK+eavfSgcmMndJU6jfewpASlxQ9v+PFg4I9LgLN1b8jxNI7peuRaIjHC7uXUXp8b/RL8+Q4pgvGqYtfl93a1H5d95zNW953bY/EtPPlVZV74qR5HrYQgfVojK/dWNsOvl84wbj7xYXBPzmrON/hqkeLY6njJ+3cj68ZtXbJ/z3+8yCN9TkMFeE5E7/x+WriJBbiQIfkkMRzG0qNXim5iZ9fpVjXcv9nrGY7UqISIS6tt+iILXGq9avo9cJpB5rEpkVo8lGcc0q4mMVkhJjMkEtHCjViyfx/TIXmNsJJROG66YYaEXij9jmg0M+TdkmfDDJ9kETLLOuM9NNrgR+G9E39aMZkBiv3PBQCQoZ37QCIWrAPI/b7bJLToJbkb3NkL4oQ+CQQSg5DbI7HoXmdfXSz6Ri1DqAFfTECWF9HTLwIAaphnuDjCjqp/C11hExaCDk6EOU9/cJP8Mw36XdCXxUgz4nzwzLM81iM3b48sBTNpgU92K/ma+3QmHtS2fJj6b6MtBnkgc28o6dQahy7B+s2hFWst2KAIq8wJ+zH90U0PpeOcDojLMC2nzJ6rRUG2pbDveHGeSzO5zB9/GSTXfZkYWKwK6P3fvhcMHunwXfjKsiluT4orT5Xs4jPuNfp3I0xfMRvffxJtdbWs57xhZnhbxp6EuJ442FOHdcSR7ZPbzcsoMn+vyr9qsLwqd/VXj5ULbvdNc8WyQG2i53KJaMGJRv3+taGqW2MdrpdyzgaxZXZcuGqEJXamMXN/sd+YzbNpowAU/3WdiGbBxOGbrzKM7qDJywITuVBe4kpRn1eWRs+tW9rRzKVKNkgy0gzvqWiAmacSHXGZvLmgquosYCPfZL57ByE4JDirIteA3g/88OeT5AqrXcubMLD39kpx2ekapN7qDkwbIBSTycrrFhPQnPeasg1O83sS89HsaknZ/I6mSdOhuz4Xw6oZzki+53ZDHKgi40riX6AGrA8Jf5JvB93GVZAk3VDOrIzf5Ooi7aC3DlPfjVBAOmY8FKh8oO+/3/iFx0yTFvRHpZwUDv156CX8zM5inaoIS37CHIgkrGEy5PV8f+qCwKGl/8qVAIyNiumf52mbM8XZcLUxuS0P5giqKnW7UQtUcTX4ryxDsyFEzMEXEBVBByP0fH8BsZUG8e/Llt5Ulu7WMbkRyynmjKkKRWv6MO2G2gTIdutzn2LyT+t5iCR1xP8J/RqAaoZHzuesiBLXS7/w9LHYeeFZm0m7aG9VfRgWdiFEiA8HQwyWQTgNzQTtEVS9KfDn33fBxHv0riEJxgEDUJo1CTpBmkeirFU2GcV5z8YQQ07SfrrysvwVN4OOjvGNwpk//GruM05z7svEqWOlVvrCXSZczf5xLhHbww3c9mXqghLIFYEiZx5ItXmzLjTmZLkVA4Ebb4pv8vn+NFnc0lpU60MKjbrQ4t4LLy+7rYHJzy5lM0PfnJSbHEZggiMisInhfEHqq2YlkjWlyEtmD3H69UMPip/5eMfXSZM7o+ZaEktaCIzh56h8Y6WU7lwwalgGEUQYpgF1ni7nXg1wyQg8obYsvZWu3XP2e+kEkNGzgBFz8FRFLZcwPpaK5cgPq/E4VGhSqL3cCgPSQtJxz9UhxZgCPIvDOFeL0Xwa/3q5IZ19buqj766CyULpKiatvVDFrqv+5i/tG6rCxokGX1aBqy1aAzSHL1j8/VX67g8AKuvsM2mIa35+lLFHxjkufrBGOpMQKm9GqC41DbQw3z0W5GmEmOXfmv15SOs9bjjtGnPid4nFycDluX+OaAjVEJL/kHD5uQOcv4zKy6VRfT9Svg1yhD9vdhwAj8IBHAZm57nLoyEBnakjtSpnbpmj6bjn6dnLErEnZeAUVq362YJRZYnDHiBuuvFITdAZCRCrZR6pE1ogerG+y9Gl+76lFYWdQUI+RdCPoSclu9ttbxd0UQIY71rNRmw6HfU4j/lVDRj/+vM8Ykz6l+DQpKrP3ssdNnKyw5TmzSij+fZhOUJbOV6+x+mKEx3zTM2wxX9SHjjAcSNB0VGnrsPhmTK+BeInsPJg3EJTsXzpDfghp034BQxoV5cnHImJ/MFJ0QfI6zs7iZw4v/38NZzoNSnMZcpyHZVVguQKYfpvUtgqLF48/MJVwWUiC6WLWkOwRGBFEc40Y586bxZj1+xetGzIlVVDtjz9TqsyZxoheyRXZiAWKpQxZADOHHbeKn0sGjnwicuoOC8YIyAZhcByS1pbSnlxiSVLbfChyVdJ7w5lyLPrg2JfvdankzOxw6cxFsWJ/2EwxjfsibeNDIj6A/30lOfq5+zhpUxiZbNbyCVFv8WOQ4HId1zOMk2UysEVE0V/ve/+KM3wtOvLTFFuzxRSDa9OGcdMXMKHElgIwg2Jl2zeDXYirkBe3ToMIbcc4cSCiIB/JRb8Yt3B+jmCY/4MNhGAcYC2VTsweQIgjXp+tZIJPmbVqnRew7WLrvXwvL+QHnnmY8npZ3U2nDsm4kKI6MZ6XROar5MTJmU10CSRJQJggYvZbnqVYYLhoqODZYBAAWZAA1i/xVyxsSYLhplA0VRI2VfgnbZ/cS8TUm7D013y2TT7X7w5IL5yU2VHdaViQ3z5iwsnsUnXdlpIhxV6jNrzOTRr1Vs1jURsid9kZ8IpiVNkw591qqeI+ik+ace0HNx50+nyN2+3INosuupNQ+2BprXtj1oersQNib/0kO94sGhB0iKPYc/NS9idLW6PFyNjAB+81NxXtHw45pwC9PYoOMrQDETIRtAg9P8zsMwYIuq1/Ou+JIwfeofPDX0JJUY+meLAxMJw5Ed8KtX8/vWYCLuMB8417OJJMwi5nqZ8sNd5sJAqbfKMjlyK5CV08iEQL04OM7Cez+F107mx1P+iSdzZiCZjRGPbHkwx1WmMu4SwC3h+UB5l2HJTLaPqEhAt1lUZ6tUspZTpUv6w1uNup0QEyFouIYcF8Kf9lu9pJc+wiE//LX9MRgOgN7BNvIs4+EUlJ+SHMyozdCq7KcJmxwH5Afro/KlqIo
PSSoMCWjOry5z+RhCeXSWUpkFfZ7tf8wTntVOmSynmKWRgpzggjITdfmI1le00nzkSbBVuM+mwi8JMM4PKTr2WMuRahbbgiTijt6PONFdXwsdKJX9IPlbWe7lacAc4vNHQ9P96PBadrXCbyQlIfP0BdsDGSbatZ5dnM/BHfeYh2ep1+9CdiqehiqvwYSsR8EZCSU8lVQIZqViWiyUnFOAUfKsLzEg6zPnoRXrTmBVNXQ3sjYcKSPTFkWigCdQt+yGYNS645MzKPZYMiL1uHttSTijq+9EhGQLbOqRuRMO9FuoJnfVlljgYtXqUvtJO/r/DbA4sdjcbNGzvm7UtsA14aXsRgbYnXV59vQA9cFJa/TEu076QgL5z0ni4bjP6a07PCSwkQIXtg92AptmnPMPKbHAmInUt3sogFiwvosZtQkYFGWHsxkFhIxtYSlB9IHPowL0Q8pUYvoO9XstlmicaT7YpLnGZUilqf9OIP3sf115myxW97YEbJesKRNg2/UwH9+UJ4KzFyU+9HDnoMCIcZBcfZRy3iUyoRwEHc5FQDHfG1bAIyRbA53MDfJYSflSRqb7AF8JPJPtAGlKYLd6ELZ+LuJlvuasEfMe3nQrlzU2XeX4PEtHa8kxwFCfRl5fxGM451ceTPNtIvHlDanZkX0o+mtfgTbFpoKvlm6/S73o3XNI7nVIayB7AC7X+dLju58S38tOGT03okZYK7zb6IXU4PEM0qbG173GnNofr2TLAxKxSYpnnoXa1nTmc0KcpIRK6sgSXumV4b89nTpmIiRFg5faZ9pTkJmBq/ynAotFD6mYXTECP13VquCT+vA0C7DJV6lSOwJhKJAN7h3msY9fTWTThgv+zlSjhZ8hDplx8XZyPkJzpIAgCfsmWO60moWCbzKh5e7cZ8EZGYKX7+bUpl4TWdpA52leI0MUT/QYRwpFqQ+qOp8fx+YxqV8KAvkpVyns9Cscehsjd31qZCBjpyXSJwJyPZ1rn63e1V3UwB+lPh6LcYS+biRNFU3RbZmLiYa/p1Hj/9B4pz/LjInrYsY13jN4l4vDUCCxq4XN5YZ5JFc+WvPbq/xCwiv/X7KzkN3kyFvI8n7zZZt0GpzVuvtXDYLCHgp1RX0/9DhUFv9ou/hsqKz0TANntez5p3b6P2qPhj1Gi9Xrw+6jedKBDzleKKQoQjprsuk24z0AyiUzlW4/LF5FmJ6MatxWdHThQO+SwXkdoV6MoekVgj8sGROtCT2v/In27mn0UrERELS1pLu590/F6b/Cy6HE0Uz+ui6kBb0aHfvUr4Iv7SgKT9MlfG3ny6r5lYb+Z+Ztt56Yf6NDM+ruGZg+J4KCFQG/GNRmmyPoG1gEID+kKM9wmRFIilIoN9+Gdwn2uSsc3BQdJN+hYE5EXmEC4Vfz2O98AwzJVBBTehGExqu/fbyfCLhggKUIGxyS994/2GDgpRkNRRxaYm8FWcgeLVF0Xafhkr+XL7krviwik+MiLx+JVUzdVvmcszc4VZLMI0gvzJdU7u24mNOj7ZTQVO71qcvEWPVDEOfBZqYx3orOx5r7MxRfyXQiRvh03KsWb39bwWQlnVqSdjXqynT98EKCeL+LficmjTqQTNE1gFYbl++pVw7J2Q3kx6EvuPik7Tz7/4IP/oRQ7v++V8rqrAAwg56kuiEWV63lOOnAlnDxIL37edb+cHCD17hNFkHt8FUlkK/rMQCMZ8DPYceOIi9kqf3LaHtnRkvxZX2FXj8FLlNnAO5UFjg8XOjUyqn6ZXLLkb45fi07+9e/UCNF9x4K0gGzpsoZIeA5290uas05pkNehSb+OcKQEEN0FpnK6jzgidBqk7O9gS+AX/UvYkUTBtg9WSDkL7KTheHD3DtcDbWP3s+X7uzcLtkhUHm/3hiJII7jiAv7Oz01mU9YU57aaMKYzvOMH6ELLt3Y3Fe9BaF1mnPVtE+i9QTDuG/1zlJ3aD/Isu/iNQ6+1QlJLz1zgj19MQmPh9j7od1Yz6eoL2CWEZotSvSqwHtdeVvFlVKvH+AD4Pj+0OTG7LoSIPauBNTgCLXfeKScyHxurlWkBoc0KPlzTry/BAmJ9M1BuQnEMIuj0sbgXFxDdo6IVidgxudUdtMeEPwRNUA/s4WIlLrZgcIsR0VK0P6UlDIQgLYXWxI6ewGMoTECPslexEDBI+XZ12GQIIYlNFCPsVAn3CN/BtG0K2pnJ4ejNAMYGoRP0lITO7IkvQVJxlPl+x61J6wnVXy+ZyXNFYFlpLmlJv98rkYIRdfiP3YTpZi8dmoWPAHeMLvzvZxRfnb11y9pVV703odf/F+XxoGLxtDc5dzkKykguBmg6B1Pw9nY+cBGRr7gu7ynv4y8CDrsqCGR3/JhoVEvDYdRVBgV8bnBj93o/UzkITgoP1RPEtlr8CqlCKawn5r1A3x/T49qjU5MynfjmtlHuhdjqJLVWtXSk+9kdgDuj6iImOUG3vlCaJSxREiegVQ5/QstuQoMEB0THEjBP6I8CrYwVfHITfJV1RwPicvCYp5SJlzgclhPDpmnv0JTvN6PNUtBeXWgfOyU1qfMcQVN4/XX3Rf4m/aM8DbzxO3TtyL4KxtGCCz0+KK1S+jnPWTEAQxvChqZHCxmt15bunXjDmk9gx0lJKCEort/rdpbMMZSPJwjeIKet26BdpwVI9U8Y4YSr6zUKMogT2tSQz9KUHBVE87kp8qZSeRwyMksnSh7vJaCSYtfbwSkQLrqoXJaHjjT9XHD7eOrfri6ByOEBXNxEk4XYEQvO2GsNJALUiKnK/sEYVfVM2n3V+yg4/QD69Kv4HpPGZwUiZ8eQE0T5KN/2MfTsgXx0C2NPUR4AIGxz2xfm7uVphFf5tgSECp4iWe5Mtpp1geLm2Poy7qeFhm/RIYwrpWQIiMshSKftYmDFlnofM8SdZt/DTbAAaDhHYh5gzYkHGPyeJnrBr2b68xA97tKsoy3/oEFsuK+C4uf86Vfb8xl+ZD4mm1Ra1fSWGHyzjlP+KdKBKhaZ6c1mUPYoiSsbS0g1uQpAmU1OZ8PI4VJspisDCXCOsKaouOlx91PxooHugLBMm5Nrb46NC1PbjNHnUqobyO9oGBAgDlLX+lrSlFxO/bO2TH8l3pI1HeBxHNvnQJ+jYEUOHE/oYxqYotfgN6awEHSAkTK0s+JI1fdb0BUVQRa2Gze4wissCs+W/qaE/efJymjRjk08PhNrrBQ+/7w+y8ps6a+3Fkdbg3YDCQKfGaKKobLoxxymNWodngEYpsbx5/3eS+ae4iAllo2RXKh+PU8pUerq5fqSKIfnW80LzmDjPdzDI/9hduKDKNkzBmBPZmT/DcMp7sx7amrnSI7Jq+WCvZ+m3b/1We5MzxhoWLwT4+qCAzW6oYQnq5Un3+VMTDPqakb2oeH9oKYJ8qPGRkgpm9pynldCQKzUm6xnDrox+OQ8U6Q/CeJfEEaHwWFa6M9JMgp+axAyBMYeK/ypQ52FhXW00vThrkZMTnn45UAKJJT8WzvaJ3H8ckf1O9nG0SC6OGcRnDxrdRWPnstiduTzr2bDGNXJj
ym4VZzAadqtZMnTz84oUHlF/biSolv5MZ3JxBCgk5T6UrOqJpJgMXzbGohD7OPxhZXKwsDQHigm6CUH2ItvKM7Csh8C8WAYdnjtHfAWs/hkEpCoXkmFtmj1B54GompJFqfx6fci/toKK3MD2Lfaja2pnr8GeT1B1mbSU/rgH7I84VebbUAWWAq9dicAcEkFwHLG30OXHTGCOTDlM0hQ2+jp97z4Z6KTQN66owme9rKymFqRTDdpxMVem1x7X/VNEqOHN4yBOVcyKk8HeMfvS8YEbuAYsEAhxskX+JCWuJqFfTP6BmRcYABzqfPkM4Hk7EsbvXzGjPRQoDHD2VnIsXZZC6/xWB13XT+RQ+o2TOJN6w2U5mJzktbaWrEoQCu8pSuYgW2ukIpkf5rYOYFjzOo1CXW9XQH3E6mgxQkJvwFbG/Jy0hCDpovzy8R91ENO5JE6fGL8PWuXDZCxq2OB2fl+Jc9EDfcT7gUq+Eg35h0hY3OoDu+yxpyo7MqdfFvjiOWLnvZUCNwaIiKpjj5wDPYqXsPClgwBsAJPaXGmoKRyi/KQ6U+Mg3hQ+FncfBymse/687byrJBnNljL4CHXmslhaqS6iaTsPt9LAE1P6oPOJ1wb2neLHz6zg3lm1svE7FaTpRVM1QmvrQU85LTamXRweEdqBXxY+8Rz7jKlCuabZX8sTK42tt2bslFsL4DIsXrciNNSCsIKMCPNhnFGQ9nhhcdgZTTYf7MggB0285lnoad8gulodPf4GL2MxyaU2eAhKTZhdP1Q2UlnnxVW/9UZSv6SUQ+X1aTbDPfXFspcA9CV5ZfflT6XTiYs4QrKUxu2wSvxLC1fSKj7vyGdOUXJDckKgWD2bvOAR0nUrs38GAMwrakRDcRC7I10GH9YkkTiWy241BTYiiUgxmAi8tBekSR/EC/bES8I3jKy6mXgsZtGHfpKgV4Bt5hoHHojbF0Un8lb+Rsnxc3VKMDJ9PNFvRjiY4QN6pCElRJgzrFzlCYygLf1kpEz0SCq16F/P4htOKwWNmN7ujhH9YYN5hX+olY9FiBSReH/nY1gRd59ktNMzf3mZ5C2RFzZ2oEtzw3Iy+ewly7zOMAqfDTZj5oubFbbHT9c/4U9yBSxbEdbvtaCpTS1bYfZPg5lj9aI/BnBt9loOLljWq+LSZFiC43RtfjWynYovBjKHPTaCDEzLTm6ccbRsvgfS/ZDYcp4NWkOX8vgVAYHTfDo0NF1LMubcT5Czr5HhZSPz4F/LIzDM52Mvc5BRORs5GjHoyEeV6Lh5bFiotHlj6KPKVZncEqOOfyCD0/szknJekJzR6MQusja6wk8RqOZh14+I8+fJZiy6HGg88YxDp294l8EJ5WaIXUVB/rmsCs/iI+NjnrzOmq5toqGD8igMGZgTou1bcuLWRJCV9Slt+bBw4CaXGzFsg4nC33bYbNZxZlbL5sbq25xxhJh6NeOoNchw49RJC+AJ9kXezoGQdMWWlq4xPdcXnMA2g4E7PhneBMNIwcB/5d7+QJGIVAancbOW1bUA6jA1qOU10ArxLL0RWW5Sy5YtjkM83UixnMYrL2aU23j8zChd3DECkn7N6MATgfAvyvkuNCSyF6YVhRvicQijFs/p5FwuHX1FHqFicjDlLMqeNi+Kk4uorhElvXq6k8dM7AcFRJ3MyMreE0RY7pE2DwDzyPCGdNWTKcffCv0xuhBfLeHuFLtxevvN22s0A5V/1E3M/3xTrQqG9YU/3lUcvVDwqYwxcj3NwIhOzUOOjLKDuUo2l1QFeyOFMqHCoFUGzXCtoNw4hGa0CyNhWsV4GY9EeGd2NOzQlCQt/OSjI5v7HRPY8qLTJDoRKCU+TqIxKYl7eP95QP2AgsheNjjAwbKIBgWel1CigTvnL7A2BgTPdD6QavscdXcow4Rw7tQMlpjkiW2ua8DSCK/lQS8UwhwoXkotrseWQakHDwQY0Z/Lxfb6O7iA5uy+p4PA7Isoh2AFHVxiluPaiUfEXMUUX8mOMma9I5SMxb4YrwIX5LDYBakRP/1mAeG6W335Z9Xb4kKYTPGhplWCkS9leYkinIe+dTi50/r341y5xXYQtDstClRm6txAIY7OXN5BnYeCI8wIfykB9KZb6B9sfFGSMCr14RIKBsyM447auE9Ez+o1UGEQvr3Pxm+lbWb6KKPx3/oTGbS9rzInfdU18eGyc+SVIhLShkYY+y6Ua/CdBTvyR5tRnn6+op/TIxSYv2ttlI1FP0FGF93v8OPUH2h6b4WtW2NvjuUUVEf5kE5gM61WFFDCxDqU0T+I1HGalAC4uo8gGqPtL8lwQPQZaAmvaEH/wz56i5sv5IyHLOi/7+mZNXyETWZxzJw3fFg4kseSslPuP8okmg4qqkAlU67RwQ+emcElqJowHF+98Jzln8SteTsNhz+6jk3Dxf8RsUr/nfUMawxfRGVNQrrdEqnxG8eJ+kahZpIAx0BYpBPSi0zLYd05rZkjJdfY78PXGGKrD3S8mPhOrK24qVv7h6UqsznRIwC8wV1UwAH8mcR0bi9/t7DxL7G4wgMoJK7Y5PR0IY6fpQSBQxCpVBSfIpwQpbSfj5wFLZsVJp4rhBaoPAA0Ku08ep+m0VkKO/fmfP6wEkb8fPoSsa5cMMMDHwb+ECeiN68LTY7ve3409kyZQdOSbKqB2IDDaDH6ZCGfWO2d0Ek/kodHI/qG223ctAtoc4s+phR1rKi7vrxykeInGRiyQv+GbGnKCeMyX1dvBCdschHyEH78TRVGlDsVYr/Dx56gbXAsDk9gHopCc6d44LskJVdD7ECH8eEgE0LvQDtu+Pj9wOsLSmO/YNSDK9uuXtJsrDQ0ei6/7I9ubICoeKjTanCrfcDMvOhfUkqHnypOHlAEYOEIWiBXmp6uFD7XcJtmBuBTOQNdO0uVM5EJubmJGEVSG5ge368y7zyyIy/Jz+ImLLNuHQ4P4rWbIpb0gp7GC020T+rZR8xOB30NBmxDR3QdPGeNuPfSExLEDFh6AVb5FDUIq0uPgbN/SWV7CJZsEOWPIcjul9014csvbve2gCqr0ya7PiKUP9V63pSHsPD5wh7xOgSePhvbkMn4ZraHMH/lYT4WQ72pso3BeyweHcu2lneST+0VduIuX12sIpFdzD8raIL0S5f0gMBpcA836QFmsHpGXyaQ0dQE9NBsrfBnEhvVYyl0G/0ZMOtOv7xs1QcmEgF8Bv2j+N6zWTPncSiOyvkGGZITaNezZ4udiM7j6liqLEHs6T1dvVwGzzBLaT5By6QHl75Z4caSIwt5vSmaEl0L5ZORQYCubJmUUhzNARGn0A+ptsqzIo6HcOdY8jlZSM1Ft5qRQuKvYyLqnpz/Zgifkc9pfdCCAZPNrCy4fCxEVODTw0FF0BYJGiFRoE4btWhmXOHqjNRtBPV5ViSdN5dC3sj+Zs0bjlD0GOXvc52QiwQ3EMTECfLIdgtOi0qI5hZi9ZRwHtlPotNrPfK/n9h88TePd4yWF3TrevuJMdafgXHuLIPLEMDJuwSrxF5BLEPUy3C6Xc9yu0fUb5Aw4SFZvHY
HAjxbQU/GNpCWBClpaXzGObuPUVa4kiYD7+J5XRyJ3TuBUTDjHjmC8hH2GCHF/0JwXHEqZN1yUHD5w9Hf9/VLkwzYaxXKgRknfATZalahen29079Qm/ha6H2osAM2NJmRR09Jc60feSK70juVqjNH59y+vnnMqTH1BULA2RxQP64qBoEoMe4F8V8ZZ+Kvyon8KixqcLH6fZMlFeVf5uHHxvtNzVlnoHF2eAiu2kKjYvqkwmhC3mKzIM9lk6qhwl2m+ahyT1Yxr++iVp9UDlQuleCjETmlw6kRLBwGazsjSP2js3ZplktSQKjRF8fEWJbiYIMHsm1LeslWa5WlYqtaOCaA/RZl+avXphvdIDPdZ6C56M2GPwSVG6fI2B8eBB3kJ1au+L7ms1M2P3gp9irZIgr08lZszlGgd42qNZuRn0S7Xhowx3Jj8NeicGzFhupvitD/rsfjnascbGq7Rp/j6WHMGurLsfUW5ZY9DFix47yBf8wwnF/7O1+iCXnnUfifYuGAHsueLt/BiUmT4fv/1N/qAQT6u/TwEIoA0PboYoBwtktpMKFr1BqbymqziN/vv2Xn01wGS5bo9W8PhSpU3S//e6xA7NIcVv+qF1ZPRXzJpWTQ/3WbVIIGX7dVUi8NqlSeS5WcGjHyt4N/DFGOM6BOJVjxZ/kRwYc+urxvjmPAQCUR5LDtWz0JOSvsGNRUtkrCZHAZSB6j6CN3+vP+cFAqXBYGOaBzo+nND/gVbfSXJ+c2XZqRcLznbov7hmWY+Wv94AVrbhfLvkmyjDfBfuucVNrXoGQPcrdwquwXPIGLmBhMtQQct37/t9gKYND1TUJOWMlZkrvgz7rraWkAdwSq/+1m1kXhrUO1Rz2gR8GGuBfFzV9cOD+UGC03bUW6wj9/t5UNEbkvicP5DPKto5pb9Oj7S3VX2s7wjkM3Fm9sLQJQsI26b7H+Fs/YhXZOL1/FiLLzklgthqyD+m9hM8/Mm8sJkT8Tyh/l05pys3dEmRW4CVN/us34MG7pyojCxj4onOBmRirOs7dQgpqcNjtTN/fP+S0rSdecXwgNTDZwdzYPHov8kt+CtQQo9rQvaKOkGf7plS0QaPHyZkrrtmMavFqeLeP/axzdKNqM3Xn0LcfnLo+mbmL7RvpycI75Hb19B5FlkxGbhzS0p00CNkO4uzxQM/bSdFI/v5AYTinfVHjeqv0nRBg3ynaTDEWiMeKC7kFiDJBvj1uSWb4Ay0h0y6Z9rqiZlBbU6G7xsPWFa04GJ2GXSNnprHv9iTxugWO02Pm7GWiS15MPu/eX+2Pjx8uDdkbBztiJeDNgMY1BrhfTXq2V2S8+FineNxZdqKgsCkRlLAQfFQs+HxEa3KInbglbcir9qAwhP3hbKfuBlxkoOVmWrs6wjsHl34Vix7ShsnIqa2HxTj1WZ/5P2jaqydJbyQAE9DWnatijHU7CokRqhcMUrTrnyyquWxnJqEojhQVdTelOSgCERjTSaTpfAvWX4j/fT0AceGBX4LpSSYxf4O0KHf+9X8TW0/0bszuGYbFBRmzLbfgv+4gBvC2T93VJjq/bj0D4tuCl2XcWPyI3s16cORxOC8YITVy78+VneU3KuuKFr0lzb2aMG7WSxeqkx826xh3sIJyvssY8hhLU32I7LD8LjED8vq/gpvxMkf0iX0H74OJPhuqaIaBwsyvO64nFrHvl7wIobZYbgkbTiOyKnLSgGG2db4KxphJN1eq/1PSCP3GX0tjQLJ/fxHKFgFFhJsi1TZCattCX1iwrjne2gpu5gmE/jnKRbtvD7nt7uapgb3iL8WJ/tHpPt6de8N+0oH9LInVVC7opx5xB//5+Vyn8FOH+VDWDeGFtmHPrK7VTrOr/l2cy2FWoamQsDHamFFExVe5gQEgVy4zM+tDXUicc2/J1uSSgdCYWs1GUHjJ/8whCpIwiDpgAqtXsb4NDa3ZocdLqC2Kt7GCuwNPavxLEff2pMVAf8HpvM41DV1/tLNqk6BiLSEb5nLctSUGpbuAhTtrbJWOYXD5lzl63lEI31EM3cCIRvoCA/GUoMfjYV5GBDnZgMExU/6BMYP7/D8REkyxO8Pvu/Ind8/7GCr8MwyMC+zcWPcRf8NUM+rmwCBoWXZGtmFNX1MOzFYbhGObn2qwnNzEpw3AvPh1S4HRkXFuUhTTZxsJX+HwF9op/m1BPqYYWP9T9kShNR6ZPz5v9a6hR+4rxwgnua1kV5/bAicllIU1Sb7b3kEHylG8Xu6Wsj9BMNvaYWo68Nq08pR90L3AUcRmdZcvX+JnSaQ2/ZExrBlje49Q5Oe2+NGU53UKe7UUcfz+SZ+UO/ECF+/1WGoPlX+7oKcmSm0Ztye7jmG7Wp1v3goi7TE+RU8sZaSlyzS14TEBegVPYCVMEVnvxolboFm/2fTGu3O1eKZ9IudEID8j4IZ0d4Xx+VVV+v3/f9eVJBkOcF6uBnH02SyteEa/I+t+zC8TFFxNgpwnzB7H5U7/akKesLQx9QdAnKpT1QnSiNOJJsdXp8vc5GcjL7Znh+oLpMAeNtr8LcY31dqDdATO+w9/YMcgazsqi6UuS1yMYTdzFBYzsIyqlc15+3zp7bezWRQ1XABi9loIw3xvoSmEHaA2OruGvx/8KgHs7yPo3HxLKL1n94URSy0a3AZmMRO2/pRbbbt+WHR9wnqeLSq8yFVQNcdrFlak/47ldroXYQTSXQQzWaztAziZAe9q2EP652W/q80QbPUB2c4QiolM7l/2KehEntxjxo1nq9bfvBeCPLOgf+gc+pQks5nf7TQ/UBf+5KuWXi41SvGS9pJ03s3tHhhji0iB9RtwuVAhXv3afNDwlbHAfAUWhXKxRWdY4Y1ISnYP2DlMVag+UtYIlTYbub3DuXjdGehvhA5nXpQzuWZT+7lV+0+PD9R1/VBEChgPZjI9xpf+KjNLrFaClQU53uqYfbHxIzUJDjudjjK8leSv3uPThX8Mzm5KUaQwZM943L9YKHk/lFnemgQwVP03Wr0zQFJbsKRGbALpvpfGAXJ2AXaIsqa0TSaoUepNP7Z0DNNSGr7+v8/Mm29++g2iDV07w4X/UUcp5NKKzg7DyJ7dA/QFB5Ik6BSZZwAtluRWRZOkKZ16rb5DXUPyvtbUDX+2o4J9pxeeWAaD3cFmRJ2lM/cjv9wdV7AW+jQGbjcW/LeagJ14cf9gLOv3tPbQ20O9pdHQpSnhQ2UaaVa6wG02KkTH3UPokkL6ff4pGw353g+uuaGlRzQ4J47XstrvcbILRM6cNjT8MP+7E/HOmeB254G0b9bcQ3vxfr3t+cfLLgOxsnSMK3+XIGbA3CofqiyptGStqhCkzCDOeNE0J6YKq36NxiHD/Hnap7aX/VW0LHb9eNeOlKVMeqP+2eZFwJ734IBuJeqZ64w6TU23RXtZz8F7ALAHbrXrhlE+JKOXfZlnjdV3w3mRz4LPIJRKucSbOY3RnGc/rsOhRNZiQclgdF5PYpLZ9mkFgXPo5+Pwi9kYYDQjcMLYzFnwlHK+ir++nFyY3rfZ7aX7WRk9/amr28M7HJF
/M/mc7Evhb0xonrg7TdKDX0GwdBnsR5i+PeL7Zckk3+K5JQDqq05GOHe1IreYkTAw0+oqgkZ0ifTN/IRsEMHs1DTypXN7F3fMHFI6VJZW+c5vzwFdLTdXaWnDNw1Jbg5xjNOiGLqcpCFOanThD6UTbKrl+Zhgk8t3JYfTSrOnlP/hsqudsVzo3HPBt659M79cn+9oJWeHn3yIFVsM5dbwcOyY9TED0O9Z6Zapz9SP5D88kzXqGQP744h/m/t0iShVMvAuoCiHO/KmCL1ifug+4UPfqXLUfiFNuglSwhJHDqrRG07S4r5y36Y/zW55xXw/iWqEcoY+WKhxXVJSsjCydOT4OGcvUEKoiqDqq9cQw34OEj3tPwTXHjW7NzVP7mbVv7UhF247YwOBK64phU1F9igj6GBIPdDmduGuBVEo7MrTxoJAWO+bKRyrIv5tOPYwiaBr6mqX7Yx1XA6rWTgej9T4nBcLkXCk8FP5DLoibPE8I6jHcmZcgrtJKLNkkEdSIsqLr7VE1NdG+MaGkK3XGPocaSG0AgE5lmjDtHYkRlOxkBXZ6iIfTELIKqr3i1+66e3/xMCzyLDypqH7jdo2KL5c3FE79fd3av0e+bX+K7gpRff3KK7QRC9SfOfFBEKnAEPfwZvWuSlE1TvjsTixJuKMdDvXmfFUjPkhJhW2RfPyCm7Bk4qakG8s4+DFTeGwEQw2jaLNKjcMnp+mpfMWKuPThZsabJlVjnfrh+r0TJQSDniRoQK7WtA2wZs1YyHUkXO8GGdDLmc41JakOwkStoolOX69TteRnEosXzx2ux/12Sjny2QBDA6HcYtw5lmfyuvLNqeQg+1V4WnW6aU/ROyBk/MoPaI9wFNejmh/IlbUsF005QH/bx9HXkyjm5swAxasRa0qLHxgot2s9r1d+jiF4OkC0cwIvRZp5u8hy3DrgdvaeXHe3yyVFauEeuMr3gW1nZYQDJ+1hTPUTk0iMxb+iKp7DYUCwJbMJYGI6NtPuy6N70wqY3FsA19WwN3PUdLNShReOI64qmR1uKmq9xG/ixfVA4K/9ZMl3Ix90Pf62wZxadez58uV+YPwGBLZLx/mh6WafgAY06Tx9SU0lAbYYNFXxrFffp8aAKSw//V0uDm4xZjOgUWOyOPhgyJ69MVXPxkfSSmLqPzIPzDoOviJk+89CwgX4p+NmA/uYxSH5+6o/PovErcPjUkETUhR20stfE80us3PSFyqE9vq4CN9Zr+OGfUZPYlk3Ep0aS3+2Mn4aesBcV4u6GcAHi31REHgoSL6FKXtxtY8fTQ8z/KjTwfFD8ozPsBGfPb83lP0NajarpvWGRUmA6Y42ftFcM+/p+lXuC2vGVgJhx0egkCMkRAsS5XBhZlwiiVWY8vx+SXqIyO8+5Ohu8I50kNFSh8FHGdELtmPpmLg65ptPmoRvYl9J386nKClhMDvOUz73CB7z7WKXACokHJovOCVC48sFgTQI+7Rq3zwKwTQsYqHXINFbKoVEZHlNOKZ/c/wzxP32ZI0sCZOAmnBTWmR/pBzKD0tMA6rsQFVimgUEJ1bR/qIvl5z0PlWYRuBG+yCugR8GnLzAeDetMfmikSmnyKYfx+wXyhp4qv01wY7iRrTkxszvreNRJSp45fiJI/VHO+3zVhwq3Qk6/aKoMeCZm/FkU1EYbRni2lIIMy1Xa4CjatySR/C/Pdisyf6Eae/GPWEh6ssIrW1GAFAUwQ8VfWQIZIbEuiJK9xP3EJKMfwWALNJadB1dlZoYKnuDqQZLoYosAUeTU0Qe77WbYLhS4HjP51FkhqfxHKvUJZ8hWQ5NmmfBgHZjkUKoa4USC0qtGe3pfFf3Y1LBKDU58px5Nnzb9+/hJE8uBTebCZgN/8CFxEDS0y/nlyFT210ALpRPm45DJyJNler/HLyzoYjJsqvhU6YhkAh0ukUvA/+b7Jxk/ugbv0gvzmD81m4tMzgG8UoDrVPqNz39/UxBRXQAyxUB/iuzjnFh/rERoFH086aK+LGPmH6RwsBwm4o/IF3si1DmPMD8vynNO5iN6udJnaeIdNmZqI8KSKT6xobFr7mBQskphj/142Fu30Yahmr5DMmNUHp5QYwDbs0pXf9PoEfmxa79fFGJ1Tf6gT/jyptgzFVMdA31fwguQyV1GOYniSnzKi9CnDwtTcHDewLXPATkb6cB8D0gdL9CsmdN06L4t5vF4v1eJ2EOoNaWmGUA8yB0JIFnr/pgG3F4lICL6+84tjhcYf9vT+XSAGUniSgr1JjL/IMRyNS7vWOm2WXy5MzPbiQvHwyS6qQwuDeicot2VrIib1N92+op/fyz9waX3+Q6FXLtupGQ/QUMz7sEZFj1tW7XBY3LxLKji79mdPwIyHqogNdG8vCG6PUjOhzLuXu9qaWAF2P1gdFihq3J4CJ7b2Q/vzb4PII4O/fvQZN1M4p4B1/lwYV2Ws7w8YhN3Y60kJvtUzyrJ6iLDJAo6/I51CAmL75v/xTLtgKhVz6bpH79MutxvprsApBj1f96C+jvOd/02Wp7ozve/btc9/fCFYidHRhA3LYPm64E4slOplp+ZjakBrog8Tc2XkEAQOBwTtQ2r/vULFL9/dsbUMD5p0PRpGr/ML2e0Si+MHzjpuluUSQc1bCt6r4fAMwfpqLs31Aw1LhTtt6SGRXWPe3ShO95Hw9GQXSCBZ5sQZsjKmJiAXeRqG7E07ofOl/ADut68eWUoI78lF5WxJQzN3Gcgp9eQ9GkqzVdzJRuN41dRoN2iSaS42yT0r463wS1BfLf5St462hUFEO3hOydKVcybZufnStEdTiIJWtiLPWSICpQ60FmaoZ8sV9PwDRNHuEVoLiLV1MaZ2LURvJP+t7OHYbt9bHcbmy+BS+KyhEQ/csrSJ6G39A86cY1Jzub4tEyGtqLlORYUK/mjL9uZLRePtnu3QNLRYflV/p6EaDsqka8pym7atbKjxXV36yq954MWGVDQq0+n6CRItWrYSisY8Qdv6wuL+7l4D9i6l62aZYWmtjEp7/1zhONJPOU5s6Pvl1u+rWjG3PHOJHrRiy/tARlaO20y9433ecR96rJ1/lPi9Jcqyy58UJz4crp0VEG0305ibLy5YlTeeGsEYqqjKzEtqpx1Q/bspk5Hi8Gj8hIFYARFg8RP1k30sqeviRqUNMozIqeTH/juL/Z9fTuDPV0NBUiAOmLHRRMzRmb0gYsZ31hMgnXti13V9t+jPiBbu0fryp293T9eEZki2ilKH7k2a/5PzWV5Z+Vl/H2o6g/yW6fOXjYr74eLIbgramoVntl3c+kncGWBDs8MkFQ14aN3jeh/ji1E+xqtm3tT1qx01Wxv3lY61/J+IY/D+xyGUJNUzoaRl/vO6PJUA8bzdfPE10KTmTpskqx88mTBfgyaZwEy+GivMfCrT0JkRYZ+eQNeiFFLxVzZDVO5ARYTq2K0WN04+Ao1ebEC03H0zbkk9qGgpRmYcpj2B4yjle66i/7B5deU0aek6s8AXQjTDaoN9L7NydRHFyf+tIH93u/iKmnTY2LLvTJl/bGZW8f7gmzm
g9Ew0C67kPAPhNoNN2E15q1KtWR1jovSw4UtUVql9wR7xp2fT2SPGoL/UA+epCzwpORVY32prrTx3nOwX8BVbGcNTjZvLbHYd971m3eLCro6xAsiNeQQC3l/UJLY6I/dwe7WDHqjR8tOD5sipUIEfpuvLiditYMWhlfnEtbgaW6iUkegRHtmbM29ieRkdaP6LCib58R3xAKs2vG+WZt+J5QXdbZtiDN5j75MF2OLXU+ocVLo62QQLhTFcbn1A/z4Hl0Yp3DJruXoHrMBYLcRTo9bdiuEsjq124b1vUX92++3Zi8iUr40pSiYhxqLTxC562qET6tso+aS0hAqSHI6vPB1E9Pr/dVZw/DA2kEVtbsKN4gBymgfGn2n7S0I37L6fybsBKDdaaL1nXmezE7PNhIov1gIK5Dpnr1RRluYk75x8AUWlcepZERMnB3SB23UxNSFvILhUFl7mMUkCVZnTfgYF41OmA+rMc6qozB+fJZjdziteMk2U7zqoQlEesi24MeUxY0ciBd8J3oZKV6YZ46yodDtsUKSdE/EoJNX68+rA/ZTv3KSc7lV4sQ++1tqdUwg8BqHce+8TYhLJj+KGIVb7uGae1iHN+Aax1R/hGt3NYhI1Go74NJrFIXmMQPz32qDQf3bjT7eUo7FX/Kh8WzZWtoIVQRtNLC+rSQcVt2L5Ado5RhqaSMZYtUukCpmPqslImze82LmzAWY+j1e8HiHkoewl7Df3ukcdAQEGBPsNNWssTguYgZDM+Ck/l9tRfBNJy0FJ2wP75iRWAXsHgLn0iyIbYSfO3yBzIhuK+AOy9JWw2mvWAzgF1mqAvaiSkoJrXqyjYnoFXNrenBI3W1KGjB/jUihFSfo1Z32nmDHlmvtsjjrGwdYTWjFp0HWcio2nDS/3S2vlt8bskyPe53Cgl3+NT6SPl1YliCpzVXsfxg8OeRoFLpHma8bmYwR+cTV8plJijXAvcy0nwTme7Jdp0j53BsVUO/gjGIkbRTPL9KwWQGj/mOnw6PchuspUrJSDaQvHB1jvsGcftfAyBOlWy5KlqkQ/ViWovw0Ow0MFbLIlfNclGoc3g9RrA82y+Q5Xo3J+yBoBb2NJDtBrhNRssSFinKWtoJh8bur/EHFEX8z8TPUZBeJxIaGOVkWZhBI4Vpup9z21NA6gXhN/N1O3d74NczQcgTtgJqptx0TXYHj7c8Rms2p3L/zNBlAsDm29lTNUB62oRZWLbzXClUlBw1DnDxbI161hK3CMUEsHDwqHsIWKv+TL8dre90KpcY1zcNGpFZhoQyFndbp5DMSxKLLHhzfPfzbcrGVsplviwbH3AsrerSMesWjsJgE/uk44zLHNN/XF3H2qPIsnyauwfhl3jvPTuE9wiEffpD6e9vFnfXPdOSMFWREZmRWQE/geMhBIifed6YnVkPNlBXCo9frEltkkV1mjeVA9fllFk2+/N92apau7oOl8oqsnDQc69e2Fy+ObOJXImt8D9i9Q3xSL2e4ED3gqf4gdwLp6PLFugLF4xSZSvbJZn+o2idwzH3ShaKZb1Q47YlyFR0K+oReHDl1exW6Ms3x0n7Mq9we9/2jM1a5oIeRr9FlUSvQ+Apum+DnRVK2fDVfjN3vXhmb5rmWYbTrPRylufLOkbFf1TIFeW1F7LvJnRoejzXZdwMhPmY/jUKREYEejreSPlRZpQybKFfetW8oS+Z95rh8B7sQ8kAclCTb5goiW8By8eY4jxo2RZwd7GMFxUJGil8sklyH3gml/x0sCIYUkKqnhgrxP4dgqAJdl8RvPcnHWiO/uYvjGVnoPCGamIOYo9IpTl0Wxmak65kVhN7FPmdUFRm9tWtz5Ng6MruLBKUdWhwt3MlTJWUvzO9Kns0qiKGr2yZpdVfB3d8Pvz6V6A49qIVJ8llbPC/2QcMC10ClaaHxYMvlQUStLbof3+LtTnGKvl5mKz9Rmiiupsjvt311W/1vAjo4mEjuMOMLpUIf3ZaaKr3+OrUhydLv2ztUI+6qjOgxPT8YieUlUmjGXPXq/2+4M0B4HwTnXpk8qU66Xp/YTFlb0122kd+m4ZcutC47Y1ay94HeCl2fq0zUG0TzrIotDjS3CIjH7zaFY1Anxj4sixTlxO8dEEV+z39+2VLl5m+e+J/S6/1s0AS2HiD3EqkFABbdOPefxCmWYiLD19kekvuiL0zXn9+zuY583h1Gvrpyu/k4VjhcSpQtcUtixEoK5/i6wL3m9aRNIOy6tYrNHYhXAyQE5UJI51vZMhytAtPJ0pG5L1RhPbLHpUKEcskGQK30cS7qVCO+LPuUGoRvmBUHNPC0wCKW/TEXdwVU5yOY7o+vkhzvJ1NU4Ia+rj4YhW5xIwQuoo6hG1ro8fDjw/2RQ5MIriRwT+F8JsCPL3rFR/6+9iUBl3SV7R8zXYR0kKHzcWs59yULLndfqd8wBf2sy7PvveNmDQY0tHRvVfonIevM+24FmGGKllW2NjDVRedF3i4K2Xz0pgcR5dvbkf3cV9S7LyARpn4ecEe6MvZt1x5+zv+vMY7NpCofkjDWISBvh2kIOenqFzQBAxSe3/wdmqDZ8HZz70XhHa8+DwZ3qvfGA7oqx1k50DoWn1t6H1mNbjT3v6w1br1bsS/3AK/p4Lz81eISCtKiPFy6p1t6Hz/gZBJTF/6kBX9/DK05Sb8+aDcnc7Dcw0/mrRhq7aGxRtkFTwUrBL8a+o0sKzxMdvD2OYyUb1kN2mWLatuJ77UwpWELgXCyauLop26OZI1OGKUl3i8CILwW1iXCDFD4uEjKr/jtApLer/vvKW8QFy01jXVcS/Mov+ryapUqZ/hvO6GTv6OVaEKCk3PHfvKB02PWGG4hYvV/R2RWV8d7pRrINjO8s9GjMy4NbXy/AaLtZO4uU2/M+Nh05e+UbMsXgUkNa9A53GMzjyU0wb+XBNSjjIbho3l45gCpFPR0ELyXC2J058+XH3LZwfN93sCpJW7KUrYWIiYxwnRHowLi2YzE28E8s0dtzDFN1ASK/EMwRRtSse4L1kyrjsD+Zzg1UrYGqh3UebWSZomzITSh/LLXaUP+isP7NarIulNhIs/sbs4lwNkCduFFBnoN1H3tjI/8GcvUgPGHb0PAvF+SVvIwdn7axd9bHuJ3l/i+A5BRWOuVEWr/k7GFTxa6dlBaGytsjlJIFDkeiIs8rmSir9tG/v7VzAUFf2CebLToX1dGOgXgAyh5yScf7K8aHunB048kOhpHylVcJhyzw03L0DLobi1wqZTaCuNEGRuW6BSLsAOQsq+4zIvAovbXUnNuPLhjfOEkzxXq508e82SC3Go3jJnNXbr4Lvshj/JFlyFLq43YuVU/ooMARPOUPt+o7jBLNzi+btsvXAgFNliVy9AOzyKOzEe49bCKcrJL4TSVxvW71xOQ6o9Ww8ooDLjbfgvbYgrBGIK1R8Rz7W32Pd8mzJHruKt5wEWcH3ddEYtYDs0q1icMrEq9GjLtcbQ3D5zCa80uufQZKwflw8j3FzaBs/3A3kcg2OOP3O54bz5Gt5E/lkNjxJjukmt0KuryJgt
G2LZwDfGtaw9mtBeBTb370XJ3rdOB1RnIyKLmuE2fiI+8TJ/NiMEgrThc2fpGipcwnEG9ooO0ArMvI47zSxIBG0Se/dwWxbz9xs8EmQ+sIVulVfgp+GYUSTy+ehl9Utp6vF3XSpUmccDHPOpM1hzRywHKjrOu8k5hcDKFxRLUwuZGZUsqw/np8Ol5G0M7xcwWKedBCq6rHDaQzit/cZt5/Wp9dSxiG7wtqAWpzE19e1zIIuSFrlDD0ZpeMqzG3t7cEIxMcyNEvxGtaUhlxT/Fthq6n3WpREcO1o5Sbdc7CbHtXFzipglyPHWPYFnXODrQYHqCaPoSHEoN48hCcoh9FtwUy4LxkPonmtKrxBsC+vRIom1pURLU1DBSREBarF1viuOgCw57SUz+jl5jxQL5J7eZV49z/eSfVdjGdd3G96dccohUk8BPOqsTAuGHOrtLufofjeV9FjYAyUoWHJC46qSYlaG9eL6Ur2IzWYj5HQwhzUAWmE6CbiStM+gjuuQwLJsl2uqJICzFmObz0XxAMJDOO+/RG8lNK6yqs/1lw8nap+NC7blh6cxUldmBCUdUyK8VgL/ebq3uRg4N+EP1xHErnyFsACdvBRZ7wrUFJj3HDOi/+78H21sEF9vdHtIunVKFJ963Qnc0iGga7nd5F6K8ntdeBuEmmlQFj/njZRZSxUaN9sPZ0W1siFu6UJbTCK4rjIaalAa1rlXqGWNGDIYeaTtywwECfSt3WZp/ZN5c+N2GLz2+fBv7LP+Mkn0hT97IaX0qd2SsYR2n6b7F3pN77PgHemQGORCM/3bLzS94n1cteHbU0xceY1asvEoNxL7FjMqCfMStaBIN8IPzROVVnZ8r0tgMr3TVL80z+mMw+Y0hiLp1xPiwPJocjRguRE5hQTXNQnU8z3pDdUuT3/mk9YdFasJ5+iDnlqutYMvKMulE4HiXwmk/mUQtVl0QwkXaHuCbzt/yN0VrzAe302CnGI2Nx1nSBZWu5hKh/xnwJKZkuo3gdSb8tx5at2cFaW8EaS/0ZDiMa3ceqF6uX2v83d6+HeC/UMAFWLu7Ld4GJuXN4Mug6kRDgI4fmEb08BwUOEDPZyfA2nSj8vfFTJQvpMhB8+PP58J55IR3BaWuZc292jamkxBGE7Tm02U8qee8I+VNqPav/t3cLN5Hh0Z2Hb4zFzQONK0bSTA9wACR+KMRmt9D3paZq5GPlRUZR/g36AIYEupKoQ0a69xd5+DQM273PsJZBoqJt+pB5D43WXPNqXUCEP8O+XI3ERIWDTp42XChql+9kcuTR9M4Baf2ntlqtiwbnudxN+yYG/loxPkoC1BAfQtEqnzklRoRJImN9iStPLgDthq4fHCeJdh/eagwrVwC7ASBJSpE0prLm96YItdeGjPHL6Bqpj9GPus84VGzaZDfA66RV+ga0pwSuN9NiELIo8hRWjBL5ppTdu3FA5pe2F2oNBqZraqOEX0xzO9nYEQwKYI8bzOxCdEk2EXdOcZ5mOZIrtx6oMfLVFfoeYmHIrjenZbL8oKPQANzxvvib1PkyAO5nRPWu/vmX9ZeHy4NKc5v6k7is12zEMSI6j/qK01etokE99vYf7anKqxzMkngn7upBeg6LzKt0NIFFyewE6WnfCj4OTWL9d/2aTZwsERxjccRAgs6upNBf1zF9BMl2AdPrqIrIyhqtiy+lsCjAQhwI2AkpL0gY19eOvr75jtnfuo92XDQ7PDXv9BogmZQgLaXQGtbnzN3oSV/A4/3bjYF7VEMjmNgjv766+M77qUvuTQ77U68PpNAUdtR0NDPpbfRNY4y18aIlDyFoZguIOvZWrM3/WQtjW9cfOUNjERlMjbjhK8PJFCbfFFgsOuGYdlw25pcZ30pyWJFXNEwYcFNPxV2eOWwl8vc8Pwemqb5tv9dovTTncPHum4W0zmnL9z5Nf47Ra5/pBUqjSCUKtFcO8G8XaqnOJzA314o+tp2G9EsZcZs7XhbxJkyxNVDOMd9tlh5hS32/Zbr1p0M+izABmL1ycN6D6PFN/Lo74E2/nyUs3CVYt62VWfiPHmV8YWf1hHmYpHNB2HLmNCjZZ4SMAzD/RFt6m3+FWTzw2KKf4b3pj1ToDJp0JRDEwoEDjPuJEi81hw47lRnrHCCZAHvUJVh+ZUmelaxppsStjaDcAHJOApF5Yqc9+llthd8Yac+a4fnq1lVWEQJ66iW9mWKo7b8Td2DTD1njHue8Z9+VIcH+4U2DWg9UPquEZTLYFUvx6LTvi+RqSnNeZR48GDjajZZZifwcDQPEkbebolh4KmHQHmrReBk+a2Gy8N/rpJloVfvJ23kaOW85cHz9g3UPD+5nDaZ/u1qBDS5ysXpWmjqB6VuSouze/4MtAsKPivCiYi+v4j0rIePgtrOFK4VPltyT08z6VblNsAlMXHY0bGB4nhrcBtBDsUQ4iTzqeQ31YflSDZajM5tXYC9xWgpv07EKwVHtU6enph5rgKHje1Byp7GjfIA4N3bviObUCkkNku38GSFFgq0JUpKioMKPjVO7aZJ2MwFDKpe1K/edR2cLjt3oHGhvhAwecLLkwA53Clc1p1EKL19pjo6pm0O8FPrtRUHfsZhKMNb6H449AlZbkH91IRZjCQrJv8QVEyLai233sGcV9ACuMEnr0Xu3Vr8LM/v9u2pa9pGK83j4E7X+vjYxTa2MYWs2zUr0jYbrysdvDAzh/dLAxkrWlSqq+XDDhCu+GvKVUYOoEVzaxRTZBdPqzjide+FutDRc2zZfv2DfNk1+EbT7o2xfIdQvTzmOBCWftJMPjkdJrYDkBDbvOilo8nqrbC8PCHP2zbBE+Tru21lEDO7mHWPacnDet4n1Os8NSTR9dWIiTA+V+1ey/3CNY8MmQrmpZpRnF4wT9nD9GBESZiduSt3dPwZhU/bCwJiakAXZ+1JPLQEZpwdl2yALaHVwf+uw17j7KUlClAwXdhZMCbtcljbuLAyAvvv+WZ/To0Ajczy6E28EW/Hw1+/g7sU+OFKODiFt8E0Td8m7ttbkpvpr1uk+IJbIdSH1+M3q/YrEhRLxqTqvwSltICAGHmw0Cs4JNkrWblJ2RlR6mq+f2qI3XKLWl201dLJiDOfEiQ8nm7FihKRcYeWKV6gniQ+gG633CzotP8bij28JkIoaDGPbQAESar5QVL6nHCU2CINNT+ZiDehphQJo1hD1gvg0fATL6zqANLi1BX9V0UC2ohLrJt/LTxtRXnPKmYypv7N8HPqwpiwCz43ca/mKSDMvsXLal3hl4ccD6nZRdWDXuoH0NZz3TqfscdRsmbFn/OHSz3Kem6sg9YpfDkJe4dKGWGdf3m+Gi2xKpGFfwR1G3mu5H1SuTq/p2eymHBCcUiBFIJ6Hu5W5fPzB9YuhUL2Iz6nuAYijoUKxsZvbXtEWQd6DR0V/xFzrUmxNVclISeVL3KTaPop5rwWaqb14otPZhFghdKgNMCVZ87WaFB+jm19JiYqPUTnLG6hsF4YWzqI9dRKoqJCR1bL3dw/QYvENL3y1iyrPQEYp9Z2zDPevCkMaDbPjB
sMNUf1rvzZVrCoxSc+vK0nnGmYWJujWmDr2eTKMhXT+Qn3nnROl9fJ/ehkKLrCG7KaP/uHnZ/3tgM0Kjr5TPb7NQE2Mm89GczYzCkkOVNsK1v2Dgf957p/yLzt2nxqNvo90j85rETr1Oz7AeUFidsEZVhIKuioqa9+gUgJXim8hmNGw7CLjq9XFuYHXkN1Uhrumdp05dZvi5t2MLGiGvH/wWeb5+JE6ZSfmf8zoJcvrNwKRASo2eCIfUitbAvoErVpAsI8j1Pf0NQmpXWm1pPfWvAf30TC9GK7c7jDAIITSFJpseutfXzOmAf0yxk1/R9ZzN350EBvtwR+Jzrk0EH/6St8pMlx/6vbsS8wnEyjfn4fK081p9frvlx3cHLotCD26BsLNnCs2Ole4/SWXCd8jLq7B57t/rc/VANo5LU3yM5sJmm8J6fRvshm0pz0yD9INlKJiXHs7xkiJ+q9KLooIH181Ejx1xR8XR8FI1KeayjuR8FlkCGduFnjDol8yWbkUd3G/IJ+TVW6m9ySXIxTAcBfaVHgwSsmitIo3Qzht6V+U7zDBWqbd6C9BcRrekJrKfKCIqSbiwC9mdLExgULaxatKhjlH2CCGh//yarkdYeFg58C01XdYuWkGEAh+4oqsTZMPCz3uF7+TUNctW/KaEctCxFyo02Y/vPpeS6FbPZkD3Xgv5tZUF0kJdgHpugqvUUfT6iZ45pOVc3XOyW917m89shy2G/9ufKYWx0LsSC5eKnq4HVSWTml7m29I1C0jvksrQc6LWMiFnP5ZM35pa4iTNwruQiYgkEAIRTeR3NMl6h8ZW9D6JbOWCaYu5ChYqURF8IrMxO2eo7HPJ7hEHf2x8GR4+6GHuo2ftYrXb8AzJ/oJyBZYx3HsltupIveJ186ws3YC2vF9nnH3+mF48/Je11kfpkvnweAp9E1yKh76+YRbb1Oy3oMmMFexVUqjYPkh9mr6q4LKG1jfu/dhFCfIQU1cSwdMkkgr9aGMVR4CCi2NzhiuHSkfaLJ7j9CHIPnUT6VkXUnubFGp5f8SOyNNnP4C/gFWPIZ5AdfC1q9NMKxMf5RJEeRE0ReyB3YXzh1ukFLN3cAuK0PR0V4eum8XrA9VRBMC7W0BneQ3ajflpPeHe1fQW9xmYUMbFOP1P+FrRX7LpQq4eKi4exYxYnO1rTz8O64ifNx2LJCXnoWBjqNmqlx9XVt+ep+3TrizxaTQmwOXlnA/nbVxhUJIosGy9bTO4mzlR5+XAaDEN0ygWfdfrhEQf8usAOOzsBk55O/WZd+6uLHyj+2gH7FhY9THv+NQy7uwEMJM3vbuhviL1C9nNjec2GEX6JDtYn7/vz3TXy0Y2IYi1LhvwO4qYeROBNn++NTRJ91hMFhHIFJAg/MtfJjuxtFvDVNZ+O1cpw48ZmwrSEtQFFFVHDq6HYvedq4UKVLMISboIsUAyVp93fMAnye6Qfw9nvV0jCCuM5Cru0rXo+6gULOc8UwtGyPxlzDIt0aczDp1qh12D2kS5hQXEahveN7JdMLbk2IQvFWpv5QJLeqBlus/S+uU0d50G4OeZhDAa2Alh+B+IyrYpp3o1jgsQy824gSWJjlQ54vP4Ilxg4m+FGcPhJ09o3jJmgfI8k7UZJJachuMbtDFdzv897YWufVhVNixbvleP2wYIZc4+WqKHFG9ervIxsD1WEw6fXkGaP2BRZQ5ZZGMZ6toMz3hGGkY1cYaqGdXg5dYtzLU8amZrpXw6L+rFY5N5ztw3PHoz1FFEZdmRoeYSJyEd+/eZ4XlEk4dseouSgH67HFxPlahk1hZevFih7QTddFA/aojfCZSePKcOyjeoJMoez3GU4rHC86YiqCSLe5AKE4SeDy3yhFhBn4rl818+kfqSSmx2z/u70NXH4nzL4CvQxouijx/bf9Dd61NHO3uAsyXCClrR6EH7Xor0eFXkeQddk514utG8TMNAQ5rPu7D4KcaRIdac3TkeUBF3w5EstqvC3QkijMJFZ2k7CIgTmbD5f9gcmvot0brvo3ccTjnnSeJYF1ZtgWBzXHiZD1NYywX1SzvnPd1TBFFBmRES8amylUUGnAVHXUNXR88BW/al4+AoDbPnym3WLx8K9Lvm54xzrW1CP1ARPqbgtwfYRy4KDsflfTZencdANiBB3JR7REco8S1dmf3zqIrRlmqZt+a8i+fuzmrAn/fvzr1MQfIHM0H8VW7rqtO7371mGk6N/n2VFe/v/3/PfZ3nWlNPQOD1oY77Ql5jqmUyeTfNQ2iwC7QUMQlvUWNcg+ZYAM1y0QU7Lqy7uJ3vUZedvruxx7FLI0j0CFTajLwumcCoECjzj+VrOGG+jgdB29fp49qDbycv3NgdmKM3FeX9f1GSmnTfUpKdxkv34G6XVU29ilbRxX4dUwulW0LHt1obF3szYmlJ7jJQ4TJ1uCN66UltN15M4XwAbDwGhcHwcORjlxeQKXIz3gFyLu42/ib+viFlGEhyqKPREMJAQusWrB4gBWCucaaWn5kal8iATyPsAuveWWUzdX5wmE/u3kGDA3IuBsAAJp1L/Woy6uvVJJMz9QGUQntPKjybIhOwOaaGAfNnirt+rPiCIT4pfDN9BKmTkyLgQl7+Q33dKVmAZRpANuZAmB7pk2D0mo+SDhTIk7f2WF+uWJV4P3G9gZpbwcjxkkUKaXg88Wo+22qh4DTiNi6l1qiThl+jMsddrHlziBWFo5mke8m6h7aurbK0sfmQDmUsw45K1xqVLk1oa49taiCjmyZx4ZHLN47FV/pyHCn5ZX/INvQwGe9E0Jzo7dDbe+TDc28i5J7QAsUGVx80Sz1Ou/N9Z1tbCzVFrUC/y/UhRb6dKFd+cuJz8unUL60p7yx/sgtSso4A7MFaMasS7A3l/ZkMz2q/7RGy70F0YnblMQlPqVS/e0EbmuKU0q7EBUHY3WA0MqN2sKn9bE3jRSnXfO5fm0PqVG9rjRcY1ftYI4q/H9WX9PXlXUL0L0wVwBkBafqRA0jMGYLrElUH0vpHZs5qC1C1LLBvS4+MBbIPQa9ecUukogXg2WxOepgboXWVFMrCoQJ93djd0p1eTb5Q+COA6Iklci06OT69607r3kIBmqS26ScMsK3IECi3lJJRo9jdpHQOvzc+TUb8+JI0XySwflmBOLTuCwkC3saRk1cXI/XfWDwyycfBwfCphlmbIPr/37TWg2myx+PD21MgLvCZfSzXO3THHsbW5OTSkrFwFBLW7crm9zZgUH5it/h2iJPILaAgQPFSJdY5TS62t70uWHmmCKsubRNgUnP8qELEzWnPHWTm5u9N+VrL43D3YS4BrfvFnT3jWVDlXrmn9W4NpfXarl9FUiXA/PNmtFjGu+q6ZvkRMv6XszRqMSWcsRGDbNY+qyso2UgWUvUxgmmJ75DIq6F58UTGgn3bade+GXb6AG8U8qtnenHnGBGoPHX8iSiWK7PE69qWTSJfCREQToHkD28eTvNeX6Nnju4d+CmuLlWFBOYP9ixb99AJRbDmMeqJp4m06UO2mzNDWcsDFHdWa9coK2ENKynHv/T
4VcdFmOPpsN833jPcQFZdG9PpvEhQi8NhRFsOZqy+u3O9+p5a7rtmBRtJfydHmJLgRavtB8JlFHw1+AdsSW/Toszf+D7h5AAfvEdDboiughQjS3CaIS0SvuRGbnNxwHCh+oaFTL7NSE8+aIw0LOhaJiM4EMt7rrU5UbG8qSLpDi52vH2UMqZjfiQxUUm017Mn8173V0s21OfhC9Z29+6uSA7wC6BiBGRXMYTz8KWlue9pBPqfFacgHC+beZkVjz+IDWYHGRgXyvcrua/lnyrX+jdQRYkh6PpsmwIWHmPdT04RATUGDUwRiItB+z9dAeL7o9KGtLcLovLwlChunnd8AC4sHIklNoKZVqblXZortsPbnqiYjEaOot7cz0TRK4COCcpn13ahaLXudPs2+5H0aOQu7pEYaJ2nYuOdMHDm2eUCDRv4qDRv4SXOGNMxNTk0g9SDepMnKyoMsjpKg1KOu60ZT8vamWm6FOy47aEyhR7knkPm99qkln1Rio+Ke0x7jHy9VbhjP/zTJRLps4C1rIUL23fKyOWOiy/g+JQeNps1+jbKl52wJc+irxaIiDRp1t5EkJozxhc16OSxwLYGENNTCq9Mqguc97BOxNkOtnejqaPvQ5je7SEP4NZJu/RT61PKMSRBJHTzydsa8Ht+Y9AtvVmICFuN+YJDYmtItnsi88kBigvtAN4R+m/zl7eOzE5YsE0gFYydV4ceqK6nox0s/FeLJhiP5IeAEgA0bOi5ZdSUAIkDOQMMBvEPPgXUegKJj1bBfLt0SBOx/DoAugSqvX0oHrs0FJG+tbYEDdvlwLDnpguST5XscJoGGtIBhNAhfyzsr8sLskDEcnzA1NYNj2B9xiG8GZEb0n8nYo1fnStt2YTUpNJn3nCImjU5SuvGtjDFJBwAWmulHOfhiL7EEkYMUiPgPxL7Vyu4hP7OLU/RpZLq82SuwDgf1V10XHhSJc87Fjy8xrvTbzlb1AYPuk1p285HCZlFR2HWER3fa05IILepMOa1zYxxxw8q99NQ6C2GAxtFt5n592Ln/JZJuVhEmDPTaDclYSo5Wng1pcp5lJ4P3/TZzIzodBrP1M53teJchumuHcMu+YrQlLxGZv3kPf8vNBkSXMPF+HZMf73lkk9KCSYqM9ehzQepKEBITRivviAXgB56Ye1AuRX2UV+voryzcHM/4vjHGLvLWJUgiKXLTlaLP8NVD6Qljz4L8SjpUrefba1vneQIKA4pUe92l+5WM8KJso/2FSMHmaGjLBRwucpKTRk9o3xlNgxyvSEbL0sC8mahhQEyXLsNb1XErqprzgapCDFvlmyffXRLum3ZozfbaPTUusp2MV277om8n+x6OXE+McOUQEz7KIKz7wMiVLA58yOKh2AphZT2LeXpBNxvw0dJm5LoY/A7HDN5AclPBnTc6lU1LL4SuFk2xFVatfcFIrkADbkUzPBl91GJucZ9nYCVnXPNrt7l7lBD1pGKx3SkLr0s9y8d8gp0P5X/fYrM6KAtIBQqV/lv7FgYdO5KWVDXYJc/SGSLxYLqsWlxNG8eEntQMzqeJIIbMj1R2Z7ZZcF9frY1z7SK9Sbq2FYTg9Tik+oLofgKUGnBcWYjajGvvr0vg1jx2s1RBc3MVvf/OaWKHP8rdxNG4Y2ehmDFRx1G5j4H+EhMfzHZ4twuUfVZNfP+qOlIyEnn7YanXJ9Iv5e2n1VkD4sv3X1i/mFzURLn1izKUxWivDQXCKP6LHbgRjtuqXxzIOZtAsAdx+Bny0dKpIjmltgM1wSgGETU+kBFU+gD2XzAwCDAGanAmqV48986CuEBgxhsQFRDz1Hw0ZHjBTsM4HjVCqlmjSn29aAR5IUyBTv27xDqxu9oF2duCo/qjZLH4xwOfm/bs5+sS34Eau5jY4bqtXwbw8+B+kpSxz4zd8aWkHL892OFnjjbzcfvVgCAn0Ck3WZzaexUeS0Hf93QAKlEwL08iODGDXFi/myYQ5ymV3KZVC7srNPcBqyoZNFhhZJe5S+N81jd6xx/aOeIvGRNGXAEf0aw60Zna4NmyNlPHs+gkUHZR7yUrP5UVeLCwMzdrtRwB7W/VzTu2mu9+WKvglYJBkA+m1TMHiFCCv549jp/cV27pR8unLeN8PTvLgQSvY8mgRhb/Whm3Ji8+49g8W4GhDswhSMyDMQyv9dKmDvN1TiANcdzjYTZZryBfdaEcW9GFb3TdEryVuP3wc6585dF+UzmFoJRSarGnaPavyefZIJNUAy30SKPp+lKe4ZuMh7xmTTSuE9nkZVnEWug1I6xIcPQ7zxuKKXISDumTphgN1siBq2h60fxOM7N/XbEzaT2aEhThjjNHeeC9KUw0O2qTnD9YdLQrfctlKEl7YJYvS3oBJvvycrRX3OBE2rNLq6v9kluxtPNfoMJ61KqQXxg4fyfjtm3SvVYc/ApTfFjzq2AI+puUg/tEg4GUMSkzBS2BGoangr8H4epPIV6wwDc6tvkKhrF2Ewqi6vNxaBrTTW0MAANZ3HKjm2U+ghXw/ZKVj2mCLjychhONTEQJBrUVWdDdOEpzHtRBywVMgdDZ61LGoSKD3p8d0JcJwYJ0c+kl6TtSOMSHSoFyGZvli+nKeRg/j+fC3Zy9msTbbbDYuY9KxQqRTvatkBFnpmH/FMItsp6Hs78lrQV+HmE/P2Ie1HeeZIG9XIDlgafBKzFDPiR6mGjch8lm6LeKgAtJ6PoLbt7JxzIjYZrvCAKG02GSryDYST/wvwBPXG74JFdX+5c95DdHx/m6VD8FmTTd1PnKyH+sI3L6uB7Iiie/7BclRd5yxvmrsFOa1CrjcUatxlDFGkBbPXqNDaXQ0N1M9l178D2qZN6PxDJ06WyZgXMfDCJAbXDHcgE9ecn9zPdPI1syXfuGRVtyYiT0/p1kdp6z+rbL7yR0jUd7GTa5TvUAFmuuLvMwiN2Gzg+REEnm+OIwifR77JtIlBPuOrox5URQB3e/sG9+J9eQodsdy1w5ezP/jW8uajASi+F0OVoz6rwqIx5ripCVrniD+R1hc1K/Xr6u7ndNoWfj2QCfXpFvMJ9jPZJn8w+ocQw4DMCwTp6HyfqdJegpjjbiAxhNaIjx7sv62CX+S0u8v3NEGcmIUoHJ5MyqW/w8S19fljsZFNEE3VGmE+6wCY4GFTiWkGDZeFg1ESTvUynv68eldlKV7y5GnCppBZTx78CtygXFwSJ3Q9Xt1s0/Hq4npd4DK1tRaafQ4N5MhWB+TnEZKDGS4GIklzKFCyW2hEWs+RB9YpVQSxvnamttAlsBx32z8qImbvXpjgnnWpxAyXH/RTjQqT4J6d4+TxgQFUuz8A/nNgU4G1mYAvFQOy4f7df2ZtAlwlxG7e7mLZC7LXB7gOo5G0USQiSC/c5Iwv9EkhroBTj3oKliAVSfuPCLmWeXe/i3twYMdlOXPN/B0B0dGHv2G16FxpR4wMdOg0yQ9ItZ1qSeZtsPGgTvk1yr3rO6qquI8FVIPTLkZvIKV8voHlzXK9Umbu0rip/aIF435DDGmKOrIL3h6Shx/4XDIQjvsvkjZ
71jBSZjJDv6CCNJi9l2e8F1M6BvsjDbxXn7dx82M/uBH2GD6Gj7aMbRyfXDMSiDm1P/LK+wXMa2qiiRr69Q3+AvKdXxaJ0j9+Jb9EDeL7+CdCdRNXct3u092nBxPfLDto3L6LFBhpHT+BBngw+zcst2QGp8DORm3sJYy4HcFnMGbYDzhbvBPCYPjiNWGOMMWi7Qsw9vC+XpwRrDfm99ww59dLCeMSjRVrgWIXDmFtWESO+Iw298vMAMJCQvJJ0j+2DRI9UxNwAgvjCLKPp9wpPxGy2qZPCbrMlSh5/X2m/sOoPubScxMPKWpbA7AEjNd7XeEpbi7wXS8lLD3K+6J22oly+SzU/Mb4EFG3ARCs8Ki1Lcrs2eK6Np7HjAGUUviprspIRsyLcFcjr3COkL7nuXMIqRIQPsjEJbKQ3orIj5fLqbA0UVz8H5TgndgruBR7gEgGgR2rtBkVxUWlqtwmih3GYJimdR3pCnIwnDa5L0oQaRptrLA5jB45G3sc1LZhiIdS/Ci8nfiIb5/p213bXki6UP8mvlUwHrQ5BoH9AyF+mcD5iD5H0LlqS6VUbQDCa2zgf5gmjELxE6Kks1mTtgcareLSqV4JkxGVPj312lbEVa6XEzKCDf2GzANo2zr8px5F8vDeaK6RXSBkGmICNW9YJkA+fZwvRDTsTFYBBEOg14ydcdGaBZtVmd/og03qD1i3BkIxAhBMUwirCxJ+hBadfrN2XVaVsRONApuPftGb39HSSg1CzNxBjPh4P10XzXRG/6wmlSWxmZp33TKXQ5ph/6LzLEUadpgbGfVWgpMsFCcqNCA551DyOmT8DYENMKfBgeO07IEvghXdIi6C9GH17dHDzUyzi9825Rlt7k/aqv4Hr4Hy3QAsOOhc4EDjbf38nAm1Lzo8eBZwbPV5Mq+4nqBwWsh4W/oLaavvn5vU0ZrnUPv7qv/BF86Gw3Cudvd45Dp7SiMIgXh4RXxf98EkcIFtzk9uX4qOuU6PXPe7DRFvJald7TJX5ubbk37MJkCThnEx+Ib7w5+quwcIXbXbggNTL4VzAGhNKfNBkqYUkkROueTxzCkGQT0LteyxS1/mZQ3DNG8vueQaMvRhIsZQ/6Azsc+I5LnkA+56sQVPuxfTMR0hjSuRsuaPgD8rJxZl8H+y4pKMkxaN8J3lZx8mU33hCeb/JESKT+xbVSJY2fz95Rixqt2vohP3iIonj8fqew88sbUIqOX4rts7SKUoV4rXI2XgiHAj9inkLGy7UOHVm7ZSDSIk+5XWAeZD0QAt3Dm6WhftR6tZ06npn91CeHK/32BiBwx9FOvtQfNcuiDRqRGfhG6JfORd4ade9JAbKL6Ly7a05Ka0IY66v4psj1q2RAmHlgqWD7QYPf029GSfrRA/wEDoEHkV+1puwviMR5zAWHoQuV4EdNIYXKh6o+eVxBIuTUfLjkYv1cXRH2BUvrrptxo9F76roRnd1lGEQ/agTM3PnNLNFF4uuDd4ipWgaqJ/RHNWgIjBeJm6ys8fW6pKTYoY+yRmE0LBrGVNuKBHNT3Ko3viKNrfG0apOGnogq7StrWL5C80JmNmRfgJsLPbQRnysQ/B9TIFayKTuMTbBK//j6v9lxelYIqIVEJyZv7ceUra4Q4bL0k+OUtwxtl7wxY5obicgvdNcZjYMnYByjpMlr1u+93NDm2urAN90LGcJCt8LvxquiF5Uaw7xKuqpYma/YCyeP+EEdyvaQemU/rZtkmlLxg23+fGD8ew8zjkvM5BBQzEalhbg2ZnXbG04MxkL3EdDdRsYbyjR0eIWfuGV8z5ARTEVh9euLwVcdHO5bYbkXiXYPK9hAxRDsXjEKv6Z4Mm/rAaRXXR0wa4Hczrhvpv2RsfQLk4at6gN1u3qTWWuLAZdgZKqUpTjnaQkLGzx8Qlh4e1YziCzJSdNpBA2B9yJjfz0CZr/mU4nhjMrATaUab+iRLGE79XgxOylamtGc8IzD0Is2azfJxR/oGrqcXI0Zw5GdqBME4TQq5B0vZCfBLmVmLRXc8WPRIqrsCGixo6DUFoqWhTErq38KJg1dyBzqncdNsp0Fb7h+E0Z84OUX3iZW+qoof68Gbi8ny8aqv5VfWzIpvQK4+dxan+fdu95fv01YpsfYozwhnsQb6JRJsVOPYbBPcbsfsSYLroUD5gP9qt97icB2cVI9CK8MlrcdDH9b/jfPJBXFfR4/vdEAXh611Vs1bVQxkPeA23JpPYv5iY5dDAxf5YP3UzUCt0VXCe5K7bVPV58TmkgYkRRIe2h/BLJaYDyESgr6YGZQhiTdIyyxQQBnkt/f5Q2OZKKGzlWuJ8QFkegbqu8FXudYHJ8oAujzYOH3TV5br9fypSSvkHMOfTVAdpfJEhw57y6JIJAaNKB6SoB2rz9H2sS3filTiYT6w+U8fCNO5cIRH+prrHf4zonp7pcteDRO8EB1gAf6S0UwPFRokXKTZx3Nx2K2qAniHEydDseizJa+22IoslvaU0tqdwkazdm1UG5E6QhOSRJ4rdfPxRN28+wvqDE5b6i1TpgUGhyYQnPPCn1z9XdVoQ2143o1iy+anbD51V6znGuwvi2wVV1PjE6i6wui3fE+SN5asmFmx9H8fP3UEUAQcq8nbvrVA0EV+5tgg2g3Nm9m++wKRRdFzWxmVEwUeo36P7X3PsPKpWW+SQ9xCZJVV5l6sj9PnAgTnRfCO/Cquco1O7mjHgDxZ9I0wKXvtTC+cCCJn1QJcqdIDZSlymk1MbFs6EtzE9aZaNrOMmFWWh1+YvCzgz32Bi34veM3o8aCWxkfYcBGfFhnU9yg8VgINkx2HP1uOshlQOcp31sScCt++8zI+vzlP0xbMcQ+/6S1lwjIVvLggvB3Q2d83QvqPKir1bhrGIDXSdmuzT06KMGizkF+bVYkOQAp0rqsraPThXzpEXL1aYS/GiYXK0Z0y7oghaavvXZnNaPpm1FDUSV3WF7LGT764NFZcX6+lDdWOYxinI4m5AzPe0UgjX2HH858fBiJBoNIWKzAcratA/b8Vffpg3t+ebPpX/0/YTH6bxrx38zf35/nTvtZAmi6+Go0T4+yIuklg70kLNfLW1YCx8zN8Vmx9L+Zw7RGAyTMVgxbz3Pm45F49KJR1eq88vTzXXWnNVyk597igJ66kUGtPOqr36zIpWtIh787guNdhv7NF24IGrjldysgd4K2/64UrUaSPSUDfKZwGG4vj6rOOjP4s0KIF3fEsSmcuicQzkl3slA6D64p2F2ZWpVxbTcafoJ3rF4HQ9HzGMILfOmOFx0wpJU1/PEbdCqgodQWnH+UIsT0YzMrsdup9TTqv1psx5vvJj9Sh4AyL6tE2dUrv8etCoIjsayhMaGgsQuMG9vD7leZRUPg0yUu5IROkE6K5MADHiVh4L+Gtaye9z6L4NdPXhhhyILydS9dJRjmaoN0wN4mrTlOqLFtYfJIYlxX4jSuRaewuDOtnT6WuOtIEGLcDYV+XgrN97wduG0hfW+LOrxbfVAbhxTQ/WWnhNn6RlfimfTjDUlIyV4vKvURTvl7/FSB
8XxrerefTIMpLOIfEC/7GnF//nTR5vJfOYBYqRVAqTjuk05+j+78aEY9Ql+tO4FfuAfEHA3lvxwon7H4uwNVV0c1hzAUSdYiwpXC4xsPgVqRmAN763yoehPT66O8oFZxZUUYfq4K0gQW8DWzd/GbQz4mSjEQzAiBhL6J8s28abJHBBmec099pG9xtvtGcTxafi2dsxuMfj2c79rs3xlXoy5G4p0Z1wPMA3mhy3c5UJDYKHtIykcx8TYIQicrqinWmqTWtFxo2yWSx14x19YlgA2NCVnvkfVUwXt8oJzWlbl/vsY/3+61fGEhsskVUQnplycn43JjgUM3HnxGvZSzNh9whG8wdYfzvCrF+YrqCg8CVc2dILSvZ/yORrrEY3HtrQmj+o9rf98gl2io2TW+2w/VKP6jz2ihQyhzPAum3wo8e8kIvUe4xlRfJ8XpUUhLMt+4QjUSZHeQc3wiXOW+pGYUiwAmvzS683cDs/kLMNpqFQuKTtrEn1SDOqNqk9Xh4ZtF2raSntfn6/vc57hVoPSnzX20gVlWgEXVhpw+klW2mPrR79zlHNVwqdK5SrD7smu9083EPjB5NjuNXrrswfU4PXTi9PzR5dmvEIuy7cm8nQKiuVdzoGP+3CWrkgElQEsLZHjUyE+GFJVPcPBkvSxX3Wyhrc/ih1OC7Tv6nCv6ncBMqX6mn/VOhcvQ9uclUyD5XVrQILNXC2w05Rw819xrht9bfa1AlpRDu2YptWeCPpFf5n/Iqt4uXog8LzNTE9Vi1PJ7m7lIgbaS+2VCy2pgafooaQyMtmV018p9cZ5Kix+/CbvEoFr5DpSE4G9IpvcW6snZEcMCGuAm+9WmqevO4tUUX3LXPthBLO4e6EqB4QunbVYi9ajkq+aMYYs2hg2/7RraYsVH3c4OzALMnoBVcDFLn9Kr7Tk60E9Hzl6vjbtJY8g1AVRcw/fNVih6LGiuc5orH8QpfyUtpgfg9D8rZ3BllvEggZ302Ujs+Ctrwqe1kWIRIVjil4xOmtO33c+YSfcuKs0dOplrJt25hXrukmSWfdgRdl5u6pyXEvvYdYfT+Al191ITWxIFQd923eT86i3yFZx/pPiVuLc3WeYT+Ut0HiYmG949c8WqmcCE9Bu9tdjyV5ZAVjwWWp8Wv0HvNwH3KMwMiKjUOsJopPy1c2MkE6OvrwVXe0CQ1XfqfJXnYDiJlhlrdLVCO3lLfUN1J6g4caKntrRuAfJErbUYJ6jtL8SttJQXiQp/+LH+Guj+fCTx0JBv99TUfGoBpM+I1sOsSH2JiTxozhb5kMBKZnykE908Qi+e9OQbXQ13uefPnq3/RkVSk3/BOuYlPrw93OQuHblazSnl73f+jpO4bxvj7q/mbJ6rh9Sy/tb+ax5g4NUG02IeZfk7jHxcwXrxud6rcsB9Ydp6Z+YJXe8oGsAEcDu5hBj5vPyJpLRX2rF+G/z6tj/C25l0hkPwnWm/mWSfz5eyyqNTtRQ/ZjtsTCnktQb0t5JhHzMiu10GgY/32B06/gX5kHrOikJoULuqin3TfMH6nNtuUdA7S2T0LwFaExAKHNWvh+CxdBBkbz1cTw+PVc5FQLKLDysUmoz2Dfqhgul3cWR5QEGi3VY5s6SLkrHgCkqi2grPg7gkGPi9bH3HyYXeET0LMuwYKGmmEStwByTDriLLbEJ+BcWRJy9U/cDe6AtphDzGhOmyXXyoJ3e1q0c9fCT/r/quf77QNB0Pq5ZB7/1kJ75VkR5qp/bVVHGz2rZ7NPLseh8M3261zTxMpHPqY45wjIjiQ19bn+dHd3TdakTDvBEP1GjklldaX/TzbybYDjt9hJxWP3mlziLv1eorBlwj/sRyRatIPUqo33e9E7JxkFtHA42iM/AeZTFBRoGOFMK7i5U6VmEkTY7UelxWRHoROvY786zLXPWgwub6pU1s/Hm+mWGwjxmkrfMKpqjdEz+/6bmfF7odVPUugCWKeTiu09CVBxrhhG7fx0l8ArbTFTumEiFwwb+2IAVhewIMkHgZHEsg/RAQaxll3TEXEzyNxwUiwbLCn0SckOr8IPz065UdkKj4DXFD52gZwZv241Bkis96vdY3BSZ8CDvV8RhcHZ8ZtRmLq9A/zKDdR/0j5b4JbQG4Es9rL8sguyC8ewnYFf0EWDAQeq0P1qrf2csuvY6iFrsgBTWyn708puZ/7vtFSztessntxbISLTNWHaMQuCDuTRA5Z1GYGKz8UezqNq9qZoZjA1v4C/5vklJ4X/3Ss14owd80nZIMBrHpkRt88SUW00NNXWg6k2lbEEtZxsWTL63wE9ieZq98AcfRtG+ghKmHweDZ84tV8MLVxS54ROZN4eJ5TYWuR3jdfsu2zj2hhqdXf6Nb/obrh4gzBg83UFAViTwiG6fIncwNy0k+5PcOcmBmZFq+ZM9qwsqZ+1e6rGfPALtkiJsuQTPFSliqu7NXo8c1SDm8oR7ysvOCAr2pByRXmPaddkq1k5GXZphQS9Hx/glsemRicjGwN5fpD661lZ629/9ouq71SJkd+ErkcEnOOXNHGnLOPP0B738u/Hl3xh4D3S1VSSWpqclFioTfvwo6imLkhnPzH+21Wj01IJBYQF5kAln4DGrh/n2BVlY8Evt1X/xJg367+WJyJ4p+h2A8SW6MKfxFkXNjTsCNBgnKaSZpdg12iM20gdAZIHp14YMjNMNIYGfy9XTDo3dDKK4Ke/bZUN4JGiYLoQcuwVPCy74q+bkTOfY3nZ7OFY+XtgerNX9cf1/YqFH6uO6e0qI/n53nn/iiQVkSOklzddQKqRyFAYIRf3fUtO5rkXG/PEvRVQwz8DV9speD1cF/WGFpb5jPCyBhqWk5dMPj9nc1Bn5AfjWJQPr5sZdYC/bfCb2cPQLrWroDP86SzPnrueXGGW5w/HYGNVU5ORuggJgq3Y/pddpXRzaQV1w9r/5lYfyYoPWIKjIH20y0UrCdA/L0jSu1v1gqRo9eqS2+kPmRbyeebc1974FrU66UX2K/gmeMlMnLkaN+DlA4jPMyX98BGAtzvqDtfHuUgxEwVrY6IpmtMGoR263Us3Bl0BJYi479pQpECEL2GGAKzQnlxkVGW8vlMrH0C5Qk1Opw2Lu8v7YOB8nJCyd4ZA7iFpFdjMrwtv7y6X85h/r9TM1Vy/+A7qoQCCRKjaTV89TORuuUcbNaTq3YFuRZH0JcTSw4O7Y1wGEtNoP5G8EzXjoO2+235YttJRzd8mUw/+tUqZgvfzi4EhiVZn2hn61HJqqoL/XlqidOowO/WJhoba/QEfNFJh6aUWPnsVtYfwoNt0FKO/ja5/6e674/N/ZVF627DjltkYd/CunBzsSX3PIdNKBfDvFLi5fd1jxujYLHa99JfkUjVRPrLZ21IKMUPQI77ARP46oAXYugQHhYqnh+eqm4QaQ4836lNfIiogvjY19awyQXiYcXYda1hyaC2QEcr/z0hboEb6toypMzXssVuzL+4u2ULT+nvRqiX7ICNAyfao1GV/s8TmKtyU/7MAnSTBtkypo1sig+yFkWXe8qmYESSn/amLR
0dYH8iB9CpsNqJRWY88mifFVnNP77nH4BErkMdOByQN6mHwCpNsu+sIa6sJQ67nh1WkxF08ytAvIaH1kr02r71V5TrS2zZcvpZ+53VRDxQ5iFv2kYkml+UYXnmjrJ4bBr5hoGK8UHBBMYAGUS2+yX8Rovg+kDKlsvj6li1kwRqZ9vm1M+MiBJXYiGr8GB86QE04yDFhRmle+OOit5ekDHGzvTKXkDjIxv+s1sVjFDo8jjncmx57ghWoUKWM3vmRJCNqqiQLXtHT0LrLGUpJtM391VLw3mDA0Q3SlylZZjYSQu4KAr/AnnTia0ALva3bEvYiTovdUpndI6zEmS4XgT2rSxnOI/QeRVuU623oDNY5MkbTT6WLEk8dO1MuVUKDP1EHOVxxw9jGdGpjFv8ie2vIaYdddPw3DHZSAWmu9Uqf8FFDzr9e5ivq2fYHxqSLPs9AoFzv7oP10M/dshKGZ/2O8wDE54xHb+mSGOFs1CGMo24UvF+uafBsqrPyL/CCi3b0jIbjmx2TtRW9wwyMr797L5G9T8boYYUSWVy37JXxyDjVEtVcg8RfY5HBulKKrevNw7HD73zaHoLkpAWnGU8q1FlY9V9OtTaNNKQu78CgejL2h2JA89OOCnRc1WCDXShhyqqUWB/MigFcE0+wtlYCvNvrBaCQkqBiG/+Tx34ccAUHylgF+Ckme+PUyGDjDH++BkwxnC4P3uK9LSgmFR0MRl7vgvwAHdNeOfIIh5MlwM62xB5jSFmvlybmL7Hl0CdZ+mgI4twSM++adniFABzd97051dWtfoNPHtNSYEtuGwgYIW/gZ4fmDLXh6HtM6+CiQ9fHzkct/3K6FAdLwsEzfAHMdA1/bTeKuvb2dHQ3Z1YJVAaOq6VctgyZmaUd/FpYpgZl2BJBear1qAr0fXVJALfUx2WsuAFmD7NXaluLEO3nZYzWa5K1s8ZgtLxytfGp9QIPUxEt5lByWhrdXhZB7SrcRvWj7ZbH/WeScMZzyn1htjzpDSKot/UdwJFB18pIfD2QEmS4Ae1pZDUy2/j05J3qOrvE6/8lVuWjajmgiMeZw4zgewSAA6VLBLtOGdFvIJudH5yWZOcmn8Xwdew/y4Hw/WqOHkfL/xy7fAe5W+FqFQn2/vgG6ykhxhfpM36eey8Gc67tnFx7OqJEf750S+THRCBDSeRnJEAO8a8fgPvXN+5rMz5FVGS+B0ABeJ/gZx8llou2T0lx8nQTUb3dF4beHBcji4nZmQj+GPTn8gpzsI73R1GNp8l/yAD87p3eWX1AeVgYgx7KPgoO3yCdhFwA0AEbHs00B5WXvmL+DBDTDpVZjYPBKx0veVDFG4/EDG1hfR8rXEZYG2PqM6I8s/nnDgT1W2xB2qlOwWIaJaajYbWNw5VR1ZFq6PSKPGmA9UWMaWTDjoBltePe2iyFHlL1wCqzq+Qo1t/po6nghufbE7+LH+CpCL7sNF5srESj6pn856P6HcTbih1VHIJV9C+iKHOk8LRJ6o0eANb3S0ptl4Cy044hOwAVhKubax0IT0Bf5SWeOrXnTqteLLgE2NU6vGtGMsakz9v3G4NX/8POaznN8VXIS+swGLG4hWZVD7YzdS9afEh6jPgU4FWH+WAJuTUSPOm5WTXfc0h7FxJA70SOozlO7HrQJN4yZ46t9wtw3sTQjnTvzHAIXxAC8XVMtmAHoLwnVw/7YGjYsaSI1ed5xhWUlWoS6J6DYvfMbGhjJvCunktWP505L/lDzyrqJJA0P205DFd/h54Uuu/66E1wKWABVRggLmNbK4WB2cCOXH3Ib9KMtJWmbdj/NSM6qZUEnSnDGbbxcDbjwYJpzUvYTKCoSQf+Xc6HICZS1UifEzHZM58fO8XdsS8jr7/VVOlDhqQtgvQmQb89vzT0BLGNYv6Km03cFdAgx6VlH4zmC+V8OIkgPBzVvm9zOmdHOP7slTF7X12C/WFWK3K+1klgSxdzl3ghr7qXu/0DusUQ0OKEYDl0vSvo0Bv5bsel4S1HJOLEg/bHrRDFiUyM+ExS4rN28OtYuTO9Mcop0MlOZbX5h2izahp6h7DV+WAXmca/PvtVwLpCDOWFKjHl+Kslijags1otr9nhBUpSVQOmbUvAxftI5YZtbkYDAd8nIZk19ANdSE5dg6yLqPalV9F4blz+Druj0Qg+SodIzqNRGUFXPqmpoxB3t0Ogay7c0gHSRAUAWh/LkXIHv2NmacUf9CdaoCpEIw4dHfsNwPH4mESa5Hmn+oWvC/GXVzDUvkXYOupIBJ81oucnnPobIb8XeuDI4zX0Rd8NA9mNnX3YZWiu3UpEtHL/vMmXXxgEfq5ObiJrV6jeWxLyWH0zYZPuTvz3H9PFt97EIxFih66X+PaGKzNWt2mEGgcvrrfh2kDzBReCGRZKpfFPwRTOPZzNo/jnmMnSHGSE7LmgWs9ZQE8/ZK+i6H0g9fi/v6wA3fwwD8wM/a3GsokHKdXJ8z+yzDcM8C7OO4K5g9mJ27kRha5XhzkpHo+amf0CYzx65w6gWMkL50aRfnLu0swwFQMkinL4DuwWgNo9uHtnhR11XEAhCaLQwJl8TUG7T0b9PYGRcRBVlzd97FgZKGAtA4aEWvu0875vtD4NPemImC/0H7Om5robP0oKm1YyYFBkF7r2cCyVZZcImEkkkC8rTD52cU4byIr1FaX0+/O0q9edekDQhkDAMAHaS2g/7Z/NUwCnKRP09XrVeMtfbUP3xjRETaO7sveS8ptxW+PoyHQycU9cv2h9rlp0iRb2DzNaShQ5Y+GTaIRP9LaFutnHM1IhfMMxlF/m+mFRR9KDruqdY5lCF0FMF/F5rr4xyJK+3Dfijjeb4wPBDgs9w89nYn0V1CgHJzG6FMyy9elrnVzFyBXOZQ5bZp4I3nuBzdw3CNzdHHT8aGyfWrDWjJYEKZvP4s6o6Y5y4oL2XEze7xVPKfD9DP+F303UWerlT+kkzXySPQ5bGMERbA2dmSWP7XdCGsFsQwCleJI4IkRBiKam0/Eq8tvgDsC39DktA+H4jrwFx9mxc7VFHchLBTRPZxsb30tQM8RmDCmLjRmM8+0w8EakEvspLhxWKc/KHfj0iIf3zjFhDAHSULAD61dfae6bXF0E6LXRzTRBJs/nqcmTmxPOqt9vlUrjT4wMdPKVWgGhqNJFxg6Cc3TJRsk7PpysKnKhqvG6iZTJc+FmVKtjtBH6SnGyRMeHEN5NoOyntsZ1boxcSZBKpc2ybs0JcbsTBAPLTvQzVTQYcg26apkOs0oNnAkkyKiRhX/mKZk8di8RKkgbogZbGRiME+9R9k/Vgxet1eKY3o3jNCFPT6u8tf7D4hRB8QOxGwbcCjCwOKw3vqAszvPZcXaSTuzvdaI0tF2ukzPbv4FP65286L68hCuOAnNWcWdqHxeFZRzvP1xc7mIwkxe54HvP39/dv9rcnHbLKfDzstZ4sRvq2FwdPP3B9w6bEKRiy2xcfBOCN3gTQC+S/c+/rR+tsIP3j9MX+2rO/8FyAjjiM+MQu3BaMHNGIzd8CyoBVaH1twp1memxtt8B
eL7Gw7z7EHdK8lytuMWOJQuh8vQPFDJlbF8C0PCOxgzjbeFYALA+sTjLJ+DCiOWVCCGyBFNAeoVDvBizHqynxKxCKQXgzmyhMghsJ90qjyaXvx2j5ta4ZmyZfLpTvGdi4L48towg+Yk3x/AjsmSDqv+m4pW2vEq2B817ZlcyvUAlAHfX676dEu4wnWi7NkLehyltHC5utV1VF1wJ0C50Yzxr44+1TSQI0fmRAr8T1kmlXIGA2vzuCze6tNKLHTF7OafrrxzAQltlcgYMqSNED9tTJXCjI3FDDvAr5Wh+61gsao81fScUWnTV8uCaA75L3jKZ2C0kFnphylmgBEeZCU99+KY1geT79fcqj9vcC0S//3Am3dBTvyLaJzL78pb7f1eNHevEY1JdUqMkh0v+AwUC2kt2utm6dywJS1dGV8yR9mjPg4w0gWE2Wo8/KrqCak8RhIgW6H1/PTCv1za+rlHiY7AAnUUlCYzp4H0kTg+jeK1NRrIw92J5IyJLP37BHeEGp6fk793wQsgq1403QWuWv+JvBZt8I0Dsv8MzjjpYN/fJg+K9tEO/oOnyxlvwQ0zAObEbjPrdcxW1rZOR609adlNMSnZuvzRGKq1Mr8575ETV0mqD3nPtbGG987oG8vfU13vSSLRpHaZE5xH1Qng1G4y/SxvmDXfr8LDYIND9SqdmU33hQTBARPg7DctAcNMrzr4WdwLdzB85kf5ubla46NwXpU9bYpo1v0+MSWJPCuXMIbpGqj8gFl4MpW9/t++7V5qO06buAnSs1jOxXDAdEg7yNymCDrr0V0LFYU/3mTD/EkrpcrvLV+1W+yXpVSCoFO2b5/uaQ7EIhkYCWW5d505tO2dRB8sIZ6xrf2HCtOdEmCbGyhdsUX9ylUxCR/x9ZCR82KrM16wdgrulrfabkGbHj14goHc75G51BxQi6dauZ23VmAMzl/4fqh/tP6BMGvlMzKLcDbsRUWrWmT/cXihgDR07y2kw+iTWvppf6SqAAXdWWhUT9c6sbbUCrA8fvyOnYC0/fhNtstGzmyxYGds5BUlNLtr5P896sLbEDxenTKHJg2VHAc1avK4IP6X/f9EhlGDvXR3pLXhLF0GviT9Kzo6LeUJ1N0ZvWubOyU2a3VZhZJQqj3QKGQfWEPT/dup8h6cqXMSy7gF4CL+BpMgEhdsaksAwZykINml0zjp4P/CVx0PvhgCY7OrPVl6lkOsABCCuDnJkvqhRXRj1dOZMAutyy3XdGUGvRbmYO+vuEhIbnH5x1pS/1lVRE66WUNmx3f0sCf0wm83PZE536Q4fzXZPGE4SKZakLxco+/3nOinIDP3w4PxUkUO39/ue73gUsGS8q4VI6hyib+QjJiXdL7L4nOVF9aAI/jLGxCRsBR/J48Fah/3CirbmM240yEdLw58TlhEA4ihFzo2c9UryjU5HfdPt4lsnEsQH9alvFJ7us5v5TFRfzOuts+9T0qEtTxPv/rGmdY51NtTgLMZQ7DF3jeH0deGvKZE3+8fMK98hVq2Kcevv6weLDXbfWShnWPKo2t5HMi9ZWMcOWHLbjYv+VscEM8DZol4/QByDVouF3gt8skeqnSzr7e7F+w91hIQx3TYbNOty2SCnpRRZFqwW5LjgZHXlsTYsc7Iuc0Qe8aDsfZ5lILr+ElQxTQh76fAuFLBsPnwX/GUT6Xp7tmCw8ofN0EMYUQvYIhLRoVWymR7VAapFrXMdtKtvgp00KV3J0aEZ7nml98so8LWQyzqn8S761q4t3y0kfw3wiIv8CQmuhyhYQmA47jtSycIaUKUGY54k0A3Fv4ZwQPk6YMxKFMlYl1odNtrAZFBSfTaridzyetxzSj1zcN0V+nrJy8joWcSjUOJWOyKUu+pfpavdGs2pxVrXoqa7exnG2OJRfFQrG4Be+7MHTaLT8HmDM3yIcgDTNN3iqJ6WX0MpGWk/+UF3jUdDMTSM4J0lndmEJNHsPfS4sJqmzgxl87vyZQ76bHOuVTWwTn6xWUJdaYyCtXroDPFkXhgU8mZ7h5YQAWVnpK4icG4sa+Z0m/I3dsyxMwAj53qBNxoW1oX7xfqd0Ke+OS8z1qDJFeVRT2cGHR7ARHouHcVREzWRSFVV7I3bbhdS6bYcjfrIR3n/+a48NOnnuNYFtfgK4uF8GCehQfdbrUGKUd5qNhCKfkGkVolA9DOC/7DxkIfx16QVjPbD/DOErZf3OWf6awCrP/BhhvZuq+5xJNby2dc6r6PHfu6kJVp/dl3r7cXWhwt4jZmCHaZz+2MMgvnpVzqRbn3gzTy5dhGynXmGHM8vt4ezq/gqqq9TPYthS99yAo5q1MGcQTa5Yc8YFwEAjvcszi/77yGs1d0gpgRDj/m44ZCN8mVtmW7dmBk31I7Ytk4W9TMlwdrPup4LGQkaHjiw1HOZef68perJ5k/WkON+fq+pJHrI/TFevlWmdIP6HX+cEIhEnIO7z57r41ftBGJyiipuro5jffB3daj51PuczZLU9AoZHuYwwx9VhCW1qu3biTcA/dBmfY5DAj+e+mUyS/Dpxfa7uVJP94RP7T1XxdTnjc0LgcoXcKkfdbkJCRs6mJZOhHLKWODUU1D9vs7FjNliSb4ibuJsB4atz0m0LOi1GgeNM4dAgdGeC3Vl613RUUfJ8vyoXKftDhKDPHN8Sx42K5k8/YfUzpcNX1BRkB1bbeM7XMnk6Zj4KX1zeG1bYB+8urA0pVQXhXDKHr88Ob6SG2D65Of6Pp3bFcWVTmpA1PZ/vXjgudfWFJeBjGmh5geWe7ZF1PxkpDd0K+myVQA3hZ1AiIuqh5TydWf0lqz5YDAm6G8IFGGCLMooCdFHVKZRRl/eBeF8d0P9jlUEOX4ChSTbOA8+U91o2aXLrsoISyNLDd24oAVaVmHKYP6EiMec8v6Grd+PAej0TNI/+Ky3hXHYzKCWZraLs/kKJ/MZKEP9S4HpxguzvLJI/mwr9Hl0gb+gvZ9MJC1B4D2fLnVf8CW+u05EE/vcZNr+z3PJktwfV60+ryZkXaCVAeIy6JyfHbNpu27ojAp3qlAxQgn7wlssZwP1oqtRL31RbxC2Pwj0OF2K/SUYhFynZAKEzo4MoFkJhWSPW1KmO6qdKUZyL4Gmaw345MeRmijEzoC14F2w6TL4NSM9z4eYyvt/VHO8c7mZuoUBOVqcARA42nmvy5a71O6Vy/K/rPsbyW2zSq90B8Zk0M3FaJjWIZ5aqKNQy43XvwJP9LJgPkZrO6Oq1mwx8igqPjpVVnZGbTZzS0M3VhT3QvaBjkXb1xNKbGJfqUseepw7ojnf3cSUOMyReQHWQvhtMslrcDu4QWuaFbQsHOAYPYjSQaV7jezn/1zaEvbNcFkbN3qZYlVdmy/NCV1et1C+cVNk78UImVSdpJptDzYUtpePDXn+NCSeoL65W0Vnt//X4d6rfDQc4TFxXA30ZWV5WRSe+Lf65oUrrpA+fNF5mJabdnR32wGSa0dxHICdAt1+HkmR+ONwP9tXen9TPvttf5Q1eaYiCAvNd0d5L92BxLiTDYI+nT2Zu16hw9u
BLZthlfiA9ORkz9Cag7AGkBn+7t5bXiRTNziguYnERZQwpXGCchJ6ublSbM78aHBlDkMFKiCz8D9R+JOeQZeawBPeM3moE2jiaSLS79fd1vU5zIvleLhh5mWwBrVP5VLeXciXM/KSO9D9xoBLgfW6qfZDz867yvj1m29JIjCCOItkz+zAenQAHnEDSXQU3AwPG+aT9QiHFNDB2gePhpdeg6ZhBu/GueIU120Cvzw60ao2i4bJvNdjGoZZUj8xtV2G4i4HiIh1aAvLw/CPxbSSfqjIm0GfkTCBT35PJNbvYXsQekwrpfat2WQPCTmGQDqZYHUymEWLY4/IVNM8+RfgaClqwQ8e5CtCw6cjE/OtJM6pGtDqaWPYjsrS96kcXcsOQo108l/uwZJjpZ23YZyLWceTm22pU9L+41qt16oN3/2JrLSsFFJjftAbjEj8eXubHXX3VNpCfNudS5r0d81waRBxwapf5bVqPyVQMhXsySSV/33L+MT/2FvtXNR/Dv5JMGbfOKby6WOCKwfQXUd/KI0QrZn2s8n//UFHr7vr+PNtoD6ZTvwk4wVqyw35HGsJMUD+wEXTMfDfJY/AdgX4pR9Ul0n08VrBRFMRTleDbts/xRWcPww73TUaI5D+R1ejCzYj7kDUgfiH7+kqgD9eI+TpZf3P6lolWE31YRCOWjWs28UR4U2z+wVQO4UboSvBxdZnhwTFgN2MiDzjDqnJg/RquqHtLxqkf5sqk0uuMx1nS6R9pJYriz4tb4aXPf7W5Y/gcVRcBJuW87rHwXAxnjLR69r860k0kr7sM1GnQ9h2EG7IVZX9qjeA4DBxTol/2W1Iz8TVIK0/2nucLOUQI2MBxXKgruyYL7GVp+bXUavKrVOj8Jc4gB+rEPIS/JxErHrofxstUd5be2551yV9h+pWGMh9in5XXrZlyAU/Rlgf0CXg7+8pCeT80XEabvAgJ52FQEYRcA53SR/0XofSgd5PmvbJAkyr5eXOTW0dhLQDqYAZ3unRjsm94u+exdFFf1/PwcM4ALl4L7lE3kdiUSGKRNVV2+TD71L++5vZvA/d13n+iCzZo0SGml0r6YFRTLsCadJow2FH/8RLmikkOkgM3UjG8dsXE8p6X2lG9+KE8eNgB+FDNA8JCcuUbd24SjxAMEv3xURfw3FSEWcjth//bgSsmxEc3a/NOfVCHY/Ld+P/qXNPEhscg6Xbk6Bve+3X7AXU9/gBSP6lHD09z4cg3kWfMBTsZh6IgK1Vgw8qc13ftaAwBDwmk+J2bbfzHY2DStINUEkrBs9URRppd/SQOnjr9v59jMtBWvxeKZ7H54QQRxFA/q6fLcDpMHy2KUE7tcUQShtjNn2YAY8/5bTpfRfixZ4XOPOKPhY/eDpIZuQSotLJHI7utxe26s/s13FuhyC/qAOgmyAZzvaGQSIGuoCt+LcHr+ux8WLP27viy+pqrlmb6hNU0RJ8O/TuviEvZlsySqxdPYf95FAPuggUQGJ4tKilh2P1MdQ8fl202GH6hflqqUUPqWGLoCiI3MJPAhWcPCE1OWTIWfgfOMza8JJC98T3dlXPBFRdzsUr85t1awg4qGlBO8aATRxVGnLzwQRNMj2Ovms87fjeWzmnUv8z8LI0YVrPUwv9S8s+YOJsquVm27Gy4KyOh+XLYIFM7b7tzV3kPGFeaPTz4gp0AHMUXQ5fInsTjYQGrrZaOporbWUMCd0tGlLeL/y8beUtk/PeNk68edPkclgf78qPauVe9zuv27jHvmVBa2hWMg7gzl6DGff7mdWCx5EkFYuwWCyJSdH7fofJ1lh2kzG2R8ReYlggyrYMO/JGrmti3d4AHzxc5CTmvW6FRLVlJ8tgZN/ZaeZT0CxDZIJGUZ8YkV+V4qnO/8ZYWg5DsCuCxLMRgFpn5ZMz8fxtrSA2qIj0OzfCaTVaRdQWR0iXY2NE23X7N5XjuMDNLWbWntL5tesSf3KwlvtVgxyx5PGKKKqVwvFtA8Y8wLxlNT8ZtVpbXyb9JbIPF5j/yu7t+Zm1mod3kHfhn3mIr485V18Grip0UY+Qlz6FT6uS6cXUaY5ZAFS0wrS4/hUA77k6SE7DFkPyZp+hD2FQZysvgEU7K+fEWO9i/1P69zfOzrGX8UPvatcIhvfl4GO85mRVp59wddtAseNYeewZJhihEZof9IT1PmhkG3s5A/CELMY1wMI14JBNFWoTB98ZwGBDLsZHaxg++L/+z7Px9a/+StYQldVci8RQhcJnpVcT+jEAJTSAKjwVOomkjzE0eZFnWF1cb8R0x9dzMLhIvh+sxYOMWlcACEWb43C8IuptRIs/xmqtCE0j+qscLC4eCol9OGWIq0WJocZrucE/Y7e0BRcMJ9VHPWX+f+RrTwCIrQ/jwViDE8kqteWPw6DEzyhB2dOWIva6pb7OtQDVORNesbfUJ3CboNA+cYu5nvZQh+o4qxczrnF/tvchNiuO7S0yUBUFkYiTeDOIn2hRIYgRHh9JQWnkV/OfBP8/Nx4uYztdTo19DftBXkI8pqTRRGhsedgJ977REFSKRf+Puv6vLD4OkeuqsMPMzlWrYkw1/EZfcxFOEBQQ6XJWjYTm1yJi1MSmGT18jcz6fKiVBBxz/2YqA18J8k63jBP3q8JmPLEJ98PaxR4q9jXC2cLSKM+8QMLmcvg7Xi8EmSzRY+ZCYyCWpWhILYol5VJ/wFthsAbTua5suS/GEULKsF5atsnIUu23k5vnA/sTjgwwYL29AAX7mEL78l700m0PdJDx8ROAQZDGZUb4GAZr921qrd++/WG2galaKIvYlBhOKRL+o/+duzIQcPgv7e1GDVobhOBOa+Oy5AQ7MOzC/C1Q2684Qq6utTVd2VNBL92qZWcRj13Lkc2IZyeO5F1gJWGiqNXbpRii8FBNX9SQzuHDRgQX62RLglJ+bbi5eoS+GjzVWJUv6TR32aDwzTT+NvJAz9gjnMYfiYoySuwHPh43Ywp1z1uNnvaWIdPg0dh6Vcglgugej0kXyx36JtRsLAUKXqThABBvjPFpzv/iNRNZWDaP3is5rwQ0ebVC9YIggprRf69O0IKEbsflE4Ijpfczy68xuX8X+MJXz/A9cU2qfucZlYLC6TFv7qsC86DfddjG4b0/pPHYOsyvND39UErFpKhLOP5i4hAIS8dx6QN+e6tPf+mxPtzFwJgLPFGTwGqxKsJVbZ03Fs58BTpkjhAzmz2LlhP9vRMT5urrcEgwVvXQFS1E4JaZC1KJIsSIGjnOz5Y+JhNchff6rEPdXvqV4IDY9zy31PJlif99grAJsWLkyHZ6EFN9c7lg0aIDiqORm1thWj2jDX5ZOhiBO7YVJpK3E9RrQTt4RXkRl8UlY0OubdW8Ea7CV23tOpjQO33yn6AP5EokH9XkXH4ZDH9T+NKIFtV1tMxY8YvYLpY46Iphwm3+p/8iyh2+1hHNF9sttpQqWEbUpb02iUuFosSetOD9qnsSQqUkNx2uJZEHWq+Q45H6DxuefpVMTnlQCdYT4xQ+M4jCbR0a8AgTe7SezfGNsdXVJV3dnfD4J+
WdYrG8M/XRkaR4JY3SDqgl8GGt97SWoigpTS62bqigRc++O43kKb9suykyoKFgJl/BFzzDFrLcUcMoPTbaeeAcbejqoXLO9FXf9xDQVqPJZtjsh3PyqD9qrp1bQ/NMco5/6TcRf5In1jK6P8qusJFDWyhAfZhdoc5vl89vx7mpwVehLuyXYKCfJ9HhRTRDpTy3nJcSxZobs5dylP0BCSn18mod3upwObrs4LVDlx8mBo/WK1JtVt9wLncPT6y9869cc7mzY2pecckYFVEIQ5finBXH5wRzPteVH+jI2Iy4zxZ1DRyeOOANvr7K9TU29+fSNvfHIqYr98NxGj3XWboJnJXkdr0zZwtVdZnjJvhG2DgB4YySeZdE5lRLFtL1HZ46VBBo77YppyQHBFlsAGAfhCLM1k/WbIqtYdhFo6MNWfzSbTX7eQ12TsVrPW51FkQj6d1C92LMyYnqJClAK0Rp7Xkf9AzdwaBakyWOjoDw6gzSMrsOxqiivpBFhErQz8gnKtdYQjnXkNMVF8HmyuJVgTnGCIWu4cfebe9wVYJxmxnkeTb9QvDSsJ4e50Ujp63oMY59ogcu9zar4QQR7YAoTHUCep34JT5tS+Zy1iOueUVbH05JivnVsl2tcDXI8oKw8Tg+Udv4ulYtTQgzY9B9KnLrnULGWANf/L0La/vow59O/fd3z/pW/LSfr3wkuqsnP8l+Tdo38vaOf0L8lbxx7476e0ufj7tVGw/6WCsz77e6FWc07/Jaoa0+aD4wlVc5eRY/9W6HxxLedmggsz7JjA7Ad73b7f+MMI+ly9JVtnVuFotNf8cOi6CQTwos9FpaZYdSuP7ACTao5MkODoaq4oeyAWNNxOTjHpFH8VyS1/PvRqnJWfO3Rh+5DavZcYe92JI3JxIelFFT78JJ9WoNdMPvkNlmHEWK1dCNOfAAKIg839/orKZ8JR8nxw5355D9HPkWWBk4un+n0R0+KHI6E3PQnylwzRnmOPOURiHMGR8GOFTHxHpPv4tnqaidqVxBQrvs4sIpn2UMaWK1v2hLycjbzD7Tp7WQU5z1uJD2O57S2O2p+AYsE/MoG1s0FDo858fsqml0KksqkOCIn4gWQptNF7vyL2qxjoIJHsuxw2SHj/C8JMzbDHzFcj4SQ7W5Nixf2OR6CQ8K/PpYgMB+DoXbGuiSf2mMM2wE/uEIAT4WODU8I1/+Tc4Udt+hiKa0PgMPaRuFDcK8rHtWHRaUXapKT8iE8ZwuR7epnjdZfQ+Culj+Hjm7+44Q6z3QZ8qt5N/YjVkW8EpLh/WsQIY8UOly7RnACaJogXApwcQ/11zLDikDSP7se2IrjZO8iYX+xSm8FF1E/0wzStlndPZ9YJnAeWMigvAv8N+AmWGgGz0S/fuUSgntJuLeQ2q+53HFUpf4OR9E+gcqIJEl+U20eERj+shwH60F5HvuQYx0ZLI9i2n9Cyk4NrTYtAbr5A6Rp+hLqs0kZo7ADkigLxd1M2Y3IIbU6d+9ee6BEt6j+d10sis7iDRfTny1ETJr1gPFeUsCnz8BTXLGK4FBI8RMcX52U+e/vhg8SfP6t9qrR3rVqFyD8LLkY52a3cDVDA/oAjvYAb9+WjvvybJqQi2AOL3Kg/Emn99Hls2hnon+I9mdkIrz0qJyQEXpuViTj8V8W629xy9cW2k/sX5Nmrhda5XZKrkvtXNnclVZYAY6oBJwRZ/2ZW0kSy96NEtPyCr8khfuh3ImBEVlIHZpMq7WUJ1hmOzurn3oKo8ONubPQbtWe+H3MaTbolgfofKFmEyX3MJyhRdFbcNpuarbV6zhycLYi9Zwt58G/aXzwdPKt22RIEFzlLxKxbdi+PNDEuflLwmstkcYptcPQjIeFkKeSyuw9Ya8DuKw5qFLHHaur16Nbc/RuCp3qcs2rUXXsq5VYMvGNfTvEvtqg7+IcYK+pUFdqtKr+UHOqvUjGzF89aacjFMcJeWULyL12VstM0758zeykjJ2UbHSU19rgSbc5PJgepjg5C4eSkiBF40rs+KH1V++vYCXhOmXAS24QvAxc/ASRvVjnOeVdvUjm0if33Y5tL9Huco3KI3OKmuQH+8gh55ovmwsNrxXpMeag4DBJR3HLiOBFvIyHV5PstURdhfjnfr/CrBTryrMAI5zoiTJOk7T3BHXee3AeEaS5rnCMMQha7wv6mmDLOmMFZq/LlAgukTW6QMAUBkmZ//je3dHm8Rt9owBIEnv64D1qRQ0Ry4888ttFQliMRP/47TXI6J2bYbGuCaftGau1POIWbEEOzILORKpdGKjh0WFj2fDkO9pcMeOGTFeVNgUlCs7A3ZxeGX4j/YhR3/5uijyGBU5X7CS/BrXOzRNclgna6KQ7W27ud5GQ+5TPx80Nq1qyvLf7lpdVIdrBcrRbwlYxhCMZ7q1NvQKKs/ngQwG6BXhN3Csg8Hk0KdlV6frAwzRounPPzp6oMFOV48dceUAMi4xa4f015s0Sm5kCxQ0wWIOfMq2Ehv9lJ9m1hlFYuniP8xcohRvvrA3VNmPms9eC+HkleO2Rj2UGZ+PePyPbuxftZweIc4ajKhj1ldDtRfA2RBtIWTSY9mTINYZ4LcPVT6mBHixpGQAwGBXq9Xwv2GtbhC54vOHyNQh+aqc419HRQuFZ4S5Z1md8pS1vMeK82+mszRhz+dc7cVQT6ra3S+ntbH31sBCcjHmSukafBOVniT+R25goYbw88PguQe5EBuscDDPcaypmuifJKQF8ccJ6E32cDmd/DOXPMXjNGpFM8vZfkcBHd16aFqzWrddlvk2PD+1A8PQKOwyi0TPPrbwZ+28odUV3xZcH+5Uo3y2x3Fbhxjc60M/F4gNU0yjXd+8vUX3P2YxNhSaJZjlH6riBY1ZzflyetX9byYD9gKG5DR/NOlVax/xzrVxnDFFopt/JROJhRO8MQmdtC4LUeYT6pDsHMl2shzuVO99vXsCrNPel+YmQ7r2Yke5XkX/yAmTXgt9JtZdGAIMFftjeqJ4Ay1hvbACfRZCzd0JyKi+REfTIBqojhntcmVyiDytRXMbBUKxWfWBmggqOtJX09phx10AbPZ5rANKljB2fVhNh2ORtjqwHlZ1QnCze7wzYrssyXPxUZuf3JkxJKOzmWVzPFvTuCPxaA5cuK6CQZ5ogc6+gJSMOnRTMOhwqFDsYeqMDupzE1kF4+r4BA5yNHBxOpXWiCtU2wCpOWx+eZYFoztHiaoKE8kZT647Wh5y7S5z5qJdFmnHTreUKLuvawtFjYIcUJszn6iov0vNddeB9GreQE/8UqVTKjozfgfO3+QFlBy0e9ZWOMmB/3U8eMGlEi/byL/VBAL84BCE1WVjmOPZr9mJGy9mRIzyIpPrbH88UrfQ0OR5xacw/NPUeKxIM2k1J7gctudYmu3b96pWAU2OCxBwNJ/HJYoM1/MEVRwaAZeUuf3JF0q3HfEQYqulzSOFpEAOcZNFEbcTvpcZc3o5TRSrBXn7+yxpRyqR8RlpfFl2NKsn0j8s7mKkYAPszhYJcT4o2yg4NsVWfvSJJTLhnC+2h1sbC
LI13yem+0fZavziYLRwwumr9ioIVF3leD4YkO8T0SGXH06I3cCFrkvhIPgMWxjHOS4DgTjTlxGr1ozJfuvU9tP+0aUIU1BaFD/MylTd77l4GcVod44izqOBddxb8mjX2RzXzgduEf7/hOMKNOCLiRzeurPrUW/FLfyimnLwvy37QVRzzk2MOjv/7xjSsale7LeBNVSH3Km5ljJpoZMvntJ9F+uAzE5fP1jzjqVgOUpmaFCSh+aSS8wcuJf9IVKCJ9jMGWHkM1aP0bGLoW6rlMXPyJs7XlT5l4YvWpUmqHDYDiC2OmSpabwlXH/XVMlGQEjD8db7GMcZGFMkvqt2E5BAmdptHn4z8EEyF0te+KxwR2Ho83FHUoaA/iz22+5czx6TkvQ7npT2+HLg3Y1ueLKgy+Xnk+vEnmvrjQHNBHMuQBdhB6eR6n9l+C6nGxewPxtBSa+MVJAzUhzFHJt0E93Uihvl6rpVzumTf/1SZ8ceZnS9qKeMTRteY1EAo6dxmKHMYQZpAr1V3q5qmXD0TGCJfA1dWNC+Dzb9gbAT+aVq29LjG5w/D6WfXXe8SaMqK4ATyY1QUh0RwycU0B2Sn4To9fbirnSZV08L1VD5EH+xeuPPk2YDtimSuGwoPAtcrYqRT6vgWuZwShGGDkp5vno5qAwDELAqpzacQoPx5hr4hBpjfkan9K/OZqoKP8gDtkcfot97OvJUNX7aZ3PflD9caPxZhrNI3EJ82P+VNXird2UgKw6HLfhI8CeGpCTQosUhdw5zLApb6YYgZECYUTM9YCIXcGtoPgBZZMemmH8J/pwRIPAiprpqsxd7/oKDGR3VeD1NNSZRz1xDofwAKpPvQQthzwcvyyBIb3XlsIukzE5+zoTaYU0/CJkPghouvVZ7tCfVGquQiXjFAMRfiC48R3RfC9axGc9PR3G+5PSC+ndgwq44KIyEeoOCEQBm8KD9HhjquRPk4ntAh2aWC/+uQt3Y+Lo96VGuS3p6MqypFEZezOCcF3Q6/d/hMk2hfcZcoaTGxO7x54w3VMZrnDnUANLmrpZ2ev/phKQ3odHY3qKwScTAaxVeY40rWnYHWQKnMhm6r35xJGGC4mSpva1L1XgTrXgy+pmaCBI3rsCWN5CZt2pPrT8LKNWd6Pi5XZSqTVZrW6R5h85wrE6sbUsmVSOp3wLzPr/8HlfOKqaFYYmVY9PDQ4mxMuVmAlZDNqBZWZ21HVsXVBqoE4ihNzc9giI4luc/Hl6RdE3+wCsTw/R8IUejFv8Uj9RA1rfjjaOCKTKbSZVpQ8m8Ys2tmLN9tSmBU1BCquDZdPGBqBfMI+7SUZ4hdFN9/PAP7lZlfFdX/N822wo/kLCGfSeEJpYV+PDXaQOdAef+ajKVrEV/mBjuf5p2Pgmb+QNqfPWCb9da3M+4yHqGb59lhhZ3+5u4tQ+n7OXHC605Lwl0dtvkUkcgNSIr/O0j/EMV2Z8ZNQJTLpphkUJLQw9q+vj5eDXnBs3xlHCMr4rh0j8hO9PQDuiRE4Fab6eBA//vyS4ghz7Q6z83T0htTHKhfg914lCY1iYlPBetwtNGZIVQli7+5pG5lDDvxrmcsHRvWBrV+vlanlSsilXuik5y2SYktK1CIaPrmogDvtsJHdaEHvUuyfAqc2lqdt3fNpNJJxcrbquTOBRQA5pxkto0KV9q9khi/R4beFCdj96UKpAY1n8rntz80kWCuzSvNBBlgqsCOMv0sxw3pSQmnCd02/yCvat5QCjR4hyDxe3VvDjet1Ews9vGz7V5Dsr+z+athRt/kxwMxGu5rWtfCs5dAQp9wWkD62laV5WQjys6E+3MKUL3Je3uMdOC97VLkK3S5RMlAHMh4qidz+5XRMlENNRl2115fJhBgZaYx76nQW5atWaOfjadPnlIb0w3CXsEOGFcyRvIDdCe7T+QWqXUC+SPuHcjF59F4pql6eC3RrIOp98O9OVfFCcy5JzpswMZEvBhJlf91MPwFNba/+KqYJ8XUp4qFiyCN1U+LCbLJoK+9jkiRRBxsTwGJEZI0tCi3Z155vSU/ug+3fbqR/YA4PHKde+T5hTOkwcPFLo7DAC6FbX2va9yTitbAlFRQN4zajHyUqzcLLreS/8qmBLvPX1X9KjF/Fzn9KbrMcJDA9OlR3ExJm82kXaqBm9tgAMsZQl1StCpw/vzkCXzEX5GRDCuL01+GCg0aHE4jdSdUAh2NizYxHlN43kOdFzeXUPLVoVruMqZf+VVNb3uxToH0Wrh33t/YvH7jRhCG6RjZunpcHPSbg16dX4t1clIFwrIvfGd86fcFkuMKHslkttUWrpihDMHFgDPMvDJP51pqIudDUoSW0ybgW9nlE9maSaO3GnGWiovuZQNYMWRbUR078VvCz/hF5pI8bvWfx+UF/BiU3ZGeF63baFQTFBOesaYVmXnRxmY6qh1M08c4vWFfB+pfF61BCrISvibnvr1g1+5P8R3BdBB3xu5OwbYmwbxVxEiSZ9Zw0OGD+6vt/FW4cD8roOOqfL20tXckhaJHQEhf2hXC2GxYat/uv3oivswlRuXlzM1pl37f1VT9Z7bPNU0l3W6pcsSkk36yz9grv3s8KzPtUcv0HgPFLSvPPsrlNv2bT1brtBEJRDQ9qIwhfLNQ1eXgGA1x/H4pP/0i9C8Xo60BDZjhW9tJd2BNtQAeLURF8rEnQz8RScYcpSzK8pGOVU44jf4GcL+yYYwIUHo9pIaf+3brOutj6jS20WFvC4hysGx94BKpSlWTxvWs3pBZ40eMhPSvKp7rastFrPPj1SlK5OY01ewDuV95oFXc0xexgQGnGSKcQeVYZbPTlqciW8VVKKru7aOGkRJXk0OlGcO/qfBWSAsSWKUOay5aavpjXwKJbf1qi9o/0vRiVN4STCUYE8dK/MI1/hLUl/NFywHjfc3WAOWzCtpabWHs3y3JFaupaJeVxrbHXVmgVt1cne8wSBE0Q6A29Jo0vdjgF2x5nO2lBRzdpvf29TKugkj6lwvfug2iDn7lwgNcHOLjF+Jp2Zw6hXz8IkXkSK1zct1D3RpDX+dI6ylBffIGVYbygmJ/pRpOrnH9K8OdacqVtw6GRnN/f3qS/nbMxZnmvhHidMdAYaiqKPWd/Bh8EkMvTjgNrnMoOZll8+RsnX0R+kIK5zKyucxXntpZ+81oC41hPA/GTNS8hj8NzRmlnzsyV1K1Oz6RZeXTQ+PSdkEMptORKX1yLj8yc9AnYGkUDDgOematuUacXYIDPTxd2X+G/HNC6MY8ls95miHLMc4NBoM55n7w2idu4xFjedRZtSw8rQNFd/yYvRFMa1gpvHkH6HOV1+YIvFQDtXKqyi1VkREDog6Z0ckkKM7YAoXTKpugVifoBV/HLCF+G0Xj/a++7tqXFjjSfpi+nF95c4k1ik8TekXhP4uHph52nVK2u6pnWSCX1zLT+Veb8HBLITZjvi4gdMR231TzbiPlA9Jx/5+TqvfXogLHnwqe1I6uRzq/0IxlZ9s4FEOFA7JvCa2N1WG5W4yhf0klgd/SjH6nsw7+aV88lUY2Kp/
5tU8L5aeq7bGTeiFdkfO67ax0uXzUe4sHUlMOibXTnpENnf3yj5o06D5XvmIOirKhcNA7P+sClb881dZop8pbbVGsGjxJ6VtOkQaP2azUfiNu9+tMTPu7RU9ARfIe829iPGHw4TBbzgTDgvtTVtnB/qVub1XMcXVZVv/UPoMMNlWu8nEgvamUgWU/7YIxlMwaA7bPqWi1KQgm9ZSwMc8/F8bN1Bp8rX43axE1yyGWhHy2qug+2Cyn8WewKk5MJ9/LxIhaWJMoyx/vwfIIDoBNPH9qdLyYXz5tlfjz2chVazOPRUcu6EdeGmedTD72r601PcVOiLbIPvrZP/0GUzTHcpvp+LAUwjU44Av52dTV7yewMecA7pLxaK+4sSqQAB+JH13AseJz1C9OiHA75GpfN0x4FgmkPOi0If6q/owRHCn21Ux2+TnCvHcOoSX9jjTyvrNA3LsQOGGpNmgTiMLN3+4zV6jpu3SYIMpg4bVyaES1JBB4EH2lfeS2W98Gw24L1aBe+iBCtuAI1N27fxmGNEtyz47RqDPUTcuVtMB9BOcsqnYIh7T2hSsXgdei76T1zqLZipobptaOXqDOn+yjeopfKGp9YihSgIzTPV9A/vjNqz6Rhd3H1F2DKzLNnS9PST9Iozqc04c3UcrseNUF89W53yhWhnLLxcH4Sfs1gHnj5cjvom008mIH7027Q32UORZAh/eVznWfcB7hEN/lRvH95MCm/vQhdFpPL0lKbdXVfS2Htm0ms1JqypDfYRFM3ZMpm9ZJS475PMLwN+C/5R0dOdHr7ovM3zBosjRTlEdQYZps19ZO1NBTfbjjOVEfTovnliL4lWlV7uDBzjgSvGPJ10hWLbDWlMBZPqsLU3A8pdU9TBSiDj2kSb+CQo2bY8elIdCDuBqALDbpWutmVjC9Fg8X+EPSc3AwjVonEJThudFicDZGZQUnohmR1OiGgF6bezq/A2jLsKdvQusgZJoD4CLAsNoHVAw9ydspHc/cInhLIRfxEUDRTBTGw5BoaOja8mO9c2K9ZwGHa7JHy801cY8ImJ5oId6qqZtg9QqZm1qg6NfaXYunhKWR1Rr2Ddbu13SvPrm5olR5K7DsjbJkVXG/F4vXdbsziee6Pw5ND35t6xRQdPNFT2vrAyxTeS/qJsKzbXyA3Dh88+lxB4yIPdXr5LMdTqEmQOoKwVSaRLp7HCAMx8W2Tc+/ZB/P1fEXRbmOW268LCyl6E7x2hlEYVn0KohtLilUuSqy9SXIjg0LHeH2FZ1Baq4M6CEMQYhxdI1BD4qN41r5qA+Xfw0F9K8crRi0EWym3jb5JV/S6qMRuM01C0tSLeGhp1leYJXKBFASpgH3/7FiPD29WHKsexheGTZZS+ZPm1+aEn84qvA5V4IGZpBAlZTo55lEfyiJuUlQ+9IJZLiZuu1zvUcs+dLR1O/ddEb0OO+bran/js8ccoplVtmlMvGWVI5eRyDypbz+e5JuLzBzubyeYm2GYeEfIlYpuPBSkh5PGsEmjg5an1aWvp205lX7dRL7bc3xewjQJnuGD9gmUHdClHF/TGnG4yvKEJcm6tps9gx/uoZcXEx2Uud/kAyS+QlJjstszHpg1yfgbswFQ5XdnY7FD+cxjyxzQsyS1a64kFRCLK+6Z3p/DV3m/SB6NKQN/7bxvoWI1M+Gjl2U5OWVc33okJ2JcJ2yNFiRzgrFUzg3dKIVDNfMHMDXo4NY9vdVYk0U82OX2rpEVCWzp5nfnqCqUDs5a4kcNPeGTZQkrjfIeADSPcOiJyG//KEIPBtO/MvvgSPhlLNE1tNTZ7Pr6lJ+E/qbLmri52fRUZMZcWfkbUbjPBvsXQWyRCXDs/b5KgVUg/epJkxJK76xOhUTdz8dIatM938lNo5Twph6sDhcYIVzJowoHyBjr57sLON+jnBeLHzcx1Cu176Pt+Nxg51uCD5iY7mlR7EnUJ7joyNEHSvYHgviZ4R5fcEmCk3pXad45vKu3KlqAXpA3232AMrBhui8PcHc8LZD9bgAIFyGIws+0c8g6/azMTy16H4mjMiKlq0m3L4ocgLuGUTtKlcex1jx/enzEiTYxoIBRSAH+vkVwHl9XiqGOlmVxKAcJ8lTQ5003iuAZBec09mNuT/UrrPogqrRP3xKbZW7yVtKQA1I8rLPqvsyeYg06PeV8nhOQki18T3/2N7ksxWjm2NElCQLdTzRfF91FJ9gvpv/kXqZnstgelFMXiP5VEHWwEG1qajby8Rl9TO+DgAbTA58dSgoXN+iHfUvs0rYs/fz1QOG9bsBaTzditLu8cSdsLxmeLTIEmXWkyGyXX6I33z886brNumnd6t1kAnTeDv/BoOgrk7rClcB6zRaa2mm1FG4Hq3goLvKQ4eCGMk4YW7QsCSjRlgqp9h7+Oj+SQZiqlGmnKTCKi9h2Il92zRtH23281dPM3BfbWcYxmKYszvWFzjp9XzJ6WdAQtINR944Con1xqC66B5038eEUeNtyGfEFYOfOyaDZvChfiQtkX1bVroaIdCmyWfgIJ0GvznMN6YvyQfU7sl2z0sIoU9nPui1UsLXhEfo9QmVHuVhpkyD00xkpHSE/H+GiOgbHHgvqNUxcZ7dhci5DZIw0qC+KQGLsRfJ9Uwy3BQtn1gF9bNliidr1ccbPA8K+RfFL5UJKtbURaKEuHCCgg4EK1vFmT1a83OReW93WprdPYJYfOmFyzK56V1P2o5bAmCU2ewrlWNpFye77DenHuX4iSsNUlwyK2IfHM5agXaDQVpCNZSqv5/rOK1oYbzgHi8noes/SeZeN0z+vKOHlEMpq5sJtKrwptvLgw2EyPHayqdYUmcok3wGPH7Icv5sZr3Z43By38YeoPhIpQ+FhQlMxNWLz9E4Kb6ebabFvgVyjlEwx0F2u7DNJ2uOcENyjbJF3ufGqjxnj9jaNOsjJDLPCJZCx0pBOjgEbnzMNC25Hkz7ynkYaV4I57DyNcV2O1EivmxqTVCHcb5T5pV/c+IRvTeusYZhuyHUqrHahJExAsvztMaGfDlARwJoOzASNoDD1fRExl0kZyCjs4D+zLsvkPl6t/WDdcqyKD0ZK1pZLBej68d1vmpmCWQQgCZQ4jxomPw4Od9bqm+LIfytIb8DysIFwbFiU47Ah9xfkgoo3YS6esRVm2rsaJfe2SCL7rAkJRbEN7bNGUAqnrOytquDTKks7IQaRf7AyCeoJhv6x9nqJ8WDn/fSdslVGpM/xF8eYaeJj3AN5TbtlmW2rG2NDNsP5CFl97g/cLFdzj4ncE9MXITYDb+3u47U/pA/K9mxRIoijvx/hRu/C5rS0cg2qEvDR07Y5d6i6h2138ieH1ZKQNMH7Dpoew3qQ1WHgebU5dxsdaOVz8jdAqqUKQvfIeb56a5mZfiBzvtoZ/H1ItuU6DLZusxbBqGEX5knBN0izxKKB900VjRYFceAgkeuAM3ZTfz6CAQUURK5djZdaYxWJW81pclj2nMcQs6BkvcAWln7k9GF9yDyoB/IT7yonamYV6lmt6ebcKaATIzU4RM8T0OtXNGdbEPukq
su8iaDPxhjZb+0DELXWQabwRiFBlf1I1ruMbwuxKgCjkR5AAFo2HpQegiiUPHj3tyawz242AU4ErcXkFAtOorbCpd9vsSXfKO7xGbK9TAVsXj47YwVAiwmnJlwTMKlOrPcjxW3THl5rv3VR0IYJhJmwNg3I6+gc5yFget2n/Y15VyG5qTPMN/7tVToilKM8DiHfb4yPXT7EQnH4HiMK3C+bDtUMu4Yfnyko2rgh99rNko7bI+zz4guRMMn9adw6/BkO6YSk2iFI/yS1k4tPyhtWjyCNI9USmrB2EMNiGkB6F/YAUWb7Y9WzxHmcN9LY0G0jhfBwZ64Vg49bJMsb60mc8hDA/lLduLb6asevnfMwWXo4Psy3XHPKA74d44XeX3zfrcMxnKknRDV+fr5z2bCtG2CLHEHG4pFmHc7yyeRLiGnTaf0ICgXaaAj9yMrgP+gomuUHLTNr7Ayp/Q0ol1OckfahevWcxQmq7tCnVTUd1tQ2In/Ciy9eya7sraAvnngie4IRZ1jFT/ud5HBSy9pbAZ3l+OJGOwmCGDJYa6lFK4AnViK3CWAZ+rqBptNnKh8w/z4c0b7r2JOiYbV5fqdlIXpJiisjguQoOfOmxtRJrz0mHczkizTHnSRjiLsdIihKYBL5bb0aEEtx+w1Yw/yj52m+zat1+dBopwYJhNPEKt6E8uJGkVVfkn6ASxZWjYs7ZzCZwaxsYF5Xvp7oc8gBS9Iqw2vDNArFzPHHKytryeQqCZtsJLukD0Yjz3h/aoWtBP25M4d3pVoNgpyfh2uwbeFnfNVhawn3LAY6nEAG9S2CHAriJK1vF9tXLfQXGztTDR5bHGfDfMsnbiG3vVlkHSPVra3A1hlDawwdBBc2frcS0M2okTBY5zaNTWp7XTVIZf3sQkV+5fWG85+hNMuMNxeXPl2G9cJv3mTeeKUnp5iboDRYXlsKPSkUYf1Qx50QUmB7RJ7F6eXc7ScUowzm6TZHNGF+7J5vdM8iunxfBQcjTUUgA82ElmdPvwQdkEUtMm/MIMsPzdMYC9AXFqk6CVLALF2W0+dT4RrMRe5zFBAaHgIJdDT51O9op6dgsSMzLm68fPKYb71af/CftWl9+p4IH3XXQoE07haZcOzcrvm3QO+gm0oqO0J1LufRuzdTetlVDHLyh88E8ZQn0/aZ3IR+vJVw37AzfFapONC3Zml1apy1ZR4axBddvT3u9fgchYEaSUzfBiUQQE5rvnXo+aZIPzFcehNqyhCNI6p3d3qN4hR1EA2DEtSAxg/nQ/2yxb7DvG8XgfcveX0IwSmQ6mD3UL8Rgsd9fNAl+YHX80qcOXdEMfXO82Wwl7bXvvJtZTnuMcMnAq7ahTkwS4UkcYQHikFvB4lb4We78dsT0k+C2qoaqNwBG+/6gEA61gQfVLBLlXXJqX2eFPebtH6n6QBvRCgNIoOqb4Gy+aUBxpdfCWh4Ldz3xy3SWGh6tp4TFrJXlZHOnHvYwsglBAo3b16D3AaM0+gQwSPvg/Fu7QUlra8X4zKmtLNg94204j2wSWpN4DMPyjCfqNhSzc5ZOs18/Pi4maoBghKNqZtNFWtMsVhXozEaIoNSQoVe81CHKwIezk8WfAZDuhzDZKFmE0PMSlXEsg5dHSyQZ3xrl0ePdtcZxip7Vt2ZTJy3P1zkkb112vyWcjQ35Jn2q/iky4ePolsOQFplsH583Lkf8PjMQh59ASvFdmVwKAzXRpO4etT4VMPWzP0xs30N7EjC9Fmxi0Jh/FuWYY70eVCzvPAZhqf1krx3DL4JTIzRNy9VN5HLvIuRcDMJOtDhcbttiK1m5APDH1G1WVGoXjieXdypme6w0fYpVzAlT108IKZjWAMMZkekeDPGvhHnTYi0gJA9+/Q9VUyMux87sdGfPjplwVLPQ63X7YldoKT6SUjTgSlibnTK8Cz1iHUD5/R2xWHrDx1mAgiAMg9xjLnPawXfXduryQKhu4+kTNAeSG/GBfEfd9xlQW9VQdwR28XeilSwSwudfNse6CpWjYf0NSi3Zo012rsmgIy8rL1CgrPZgYIjQZ9r2xUwIqfaZ307r9E9vSQE6KSrCmHiIdeF5Jtv8VsN9kSK9Fp8h4qxjhC03bb3AFa47LoYT0Htmz2xUNp0oaHyfFlGDNsLEzdC/Mqzeqn+9mU1cHKWav8KhaECSmr7yTvPvO+M3tMmBRCx2dfsXXbGSbe5Cd9ErYdttmZvC4MQZ/tozq0DnjulPpdd54J4S0ksCOb1jhqDLI4lW47qxJgHe5qvJNMrdqmniAC40yX2GZGaQq4YjZ0GmI+lZX+SdkvV5wsqh0M9FUVJwYjrK4xLKHa8FpQbMCA+MxNnEoqR/8G828aFdClgjvD2a5Bx0W+pPPM6VyOzKt2XMH0Twmp0fHcGSo343py5ikhmbuC4LXnj8QP8xXk/ZZe059cAjQ1jf6MCTzR9J9lXLj8g1i9LUmPPm+osu68qgIzymPpLCSNK9YtPdQXFtE4U3nZKtRSqd27wXbmaGGaosHKjZUNhWuIOHkOkHOjYNXv8dc391YgPu1Oc5RuRY5pGoHu4OjJs8aYREzqv48KclUEVxDek59WIMpLLIA/zR/7669JJe9VByYSeSfBVSwq6OFBWE5AErupMCaFfipPB+2uJEwki/MicLqpnzy2IZtPuy+MMekmVQF/cm/SEt8GFYLAXepIbgzf0h399mvudo/0llK8sI3XV1mD/bBqy+/ZGjgxFrPRPBWuy8vCc6n2zw/GVq9z1+LYUmNjb4Tf+ydFPIG/wbha6QY3NGjBY6QNjw78LpNgSTHxjp/ix1IenS/W3+3fnw+vHMVC8Wgqe0J5IZ4jdqVtTfPrvT1NvntD0kuVC7I3fgYeZM3sC/0enS24f61xXryy87MRNZND6h429IE71w/94YGIoaxqPyMy1YtachxqNA0LkX66WwVdEgC5Nr3iM6lmAU9wpF/Hznruc1XXOW3DPfDxC90rrlxg183e+CPiydqG9rth4NyP0juPkMpNVJ1Rq+ZwwiHXslqj7i89hciFtikJ4n+/kuHRddufiQRzLU4URI2QLm+Tufku7zJg+Cww619JicfGFabKemPkjD3pTHE6kwkiYnOpVSwFDCbTORWmnmj/kg8LdOElabO1z4SUo4CvLYR/Fib9zeohFcOP6Mi1aHziyG+npgBEJIqmGDq09SSr/RsuwhmjN06VQ84gnqMV5v366Yt+dDwXU1M8Z7qyOcE4IiActMvR+OJVjr6G+U6wzMJqmx1UyfrMEaFUkZ/eBH49R34i33Pb7/ZJSMxW5Eb79mQBLDuuZSDgrNKi9ykXjUpoFLtK0xXob7RatglvVMJ+f12M1cuYZuWVv8dJlpW7Mj95FO182nzweRJp20c7Z6EXc70P7BINjc1FQjfV432xZIoYphKELn6q0hjh5WGGKp8X+CqDMUVmhOJpJoZ4Y//TfYKElO6EQL+5AtwIo1EGYaIGBnaqj1M6qBGNOnrE61igSKXqzLQbsabPJsy9MtzzfMijgFi2vnVwlD674PISdcy+u+bPcDGu7
/KP/yeCwzasBB4SygGN3+HAKwCUAI3OJeqQ54T3GuBta45uJObnnnGuNBeqXyveBQGmmJenjFt0B/8kB1Y6cYE/GQeXeLIdU1wmBKfFvJMgjmOfze2e9EGgtIin9tfklWx/e/ktfUWc8Ftya+OqMd4dBRUkV6l7kXLkrWQ7rHaUJs9CZ0uhwvs3RnaICtYtcZd264VyyLW2F0I6WsnIBJvuqflDTBXETTj8QeggsBYs2ct4UNHg4qP9kxZvevApheeLIq93QNmPC2LIzUe+RF3H66m5Cth2/lAQUPD2Toy93zhdu1u/u+PvzmmIwee5+AlkIHvQw1komXkrwDAGLQZc5ToCOC4lWnwjhhSs3TGdol1gkkl3HpBVY6vwwfunKYSPuZ7bVixTodLqwUM/0CTbcIRHQvCW8pfKGhcTg4gFvfHNgQRyU1AMhP7jlS6itWz6UWdb0jvoqaQDybbzcAMJhTujjw/sjqGdkhZDWDuFG3JS68u6LuuQgf1D53GB7ZJtvJkHpDVTozMcNvVqUn4oDu8/GosDuVGbiAdl7v1TU3AOXDDymKYqm8bkC2TP5dXP0J/+aU4xVki6rMmkb+gTsnPD0NzcklNCE5JbzpqMJeJBBWU92K6qEYApJ2FXQhrwn3UsTvF8LBMtosA/M/cBiGPRE+3jCiH3f823WU4KldE3SVtE5nXUclf1g9yMGj2bMM0sdady8PIuH9Kgml1e9g04nkNwrUhvVJQOKIhoJ123n22rT/c6LBdqMGG+wOX62Hk57QjjX7Ihp8D6JUM0LpEB2nQUVnweBVopOPuFbXF0ZZjbEnevnok+JwVr8HNATUNcJ5DSzYLZZNNaxb1Smv52kDnzshL/rPjxnZKM/1ksPkHTPQcjPj6bgQeEgwiTKM0qPB2gEEIdeB4Iv4J1uvJ7fzFrJzhGXivOhurM2gXhDJ34mO0lvgvzTOUYkXHjTUZDk64XpJdPLdBAfI72Xkd8hQQ6+D6kQZCBau/CN1WXl60bmcZkN33lhnROkesq3O71ZNPMiPS50cPh4FzroL80EqTK8GEJVKOP9QrNb1wXzkchjBng7B8dxyHM9TGoEreglqGhjdz6c3lKK1zg127Sc0stBhajoPsaRanUCg245qAi5EHTy/RSC8P0sfJ0LR5t52XEYlbM0HzbYK+a9zujDimu3c4FwKIkHB6zKgr2M5jh3j8KhikHOVDl/dKC23mlv+YpZt6jrCfputScSme9PzVlh790Xs+7vTBCIRNCy67qzK3tUwasslLA7CoeuknzZCIKFSxZHFjuuG9LlOU7dtR/XByvTLT9p3HlFZwPcUOf0wnw+hiEmzEVwB4jT5zXoMrfz0YYy1V6/C5D8AT0f2RFco+acpxO1dPRxvf2AcuzDFbacLlNBnc9gZY6I+1xxKMiT/kwJZLqer/Dk9ljz/IAPma7AFH7VfSVmn3PKR9BNbDxuDCdxWJx25sTWWo90SWNCF/S3GdV5TLR7SFq+3D4lvksyCnfCnZEL0D8IMbNXhGRPySchbVoNpUB8lo3X0+XFN4dDZZ4oJGdXlFa0a4Q6ZBiA2KUw6b7kAenbFbm0voNRzr0pWT1HzIWPVH6TgXPul6oaILC7iil4PUs+7n0gWehdGdC2stGtrxSeX6F31X70XYfjhw6KebXXTIIqrGo7VfJa03AZrPUjAM9i5nDv9xXoChyTRFr2WfZQprhWg/hVJrTlVstN09mkoegcwfaCZsDzsvuZQC10k1NrJr6cq7Kq+XoVYWyS062gYBcM/Pw+M5hw8CHxtJtsJPVNcz9NUCYIphwclnzkgg56TDL29NCn+UhBB6U2T4MBRCRQmvCeU+r1a99ry4j60bt9pVhq7ovaP56z4tij9NJsx1bGIbCHt1PwslNdaTqchHtoyNnzoPFY33UdbyK2sEyjod98iiku1dlZu2k5zm0VR9JkysHft594L5On0TSJJ0+EHaptdJbo7GOr761zr/HrykEAJQNRXjSj8+47Vwn8J/Nlncn4vQikpCcgnntIC6geEi3UsyiYZVVmXdYPxL+G+eXGP91ukMx6YWKd9DoyX+8OLwZ8mz61e6MmeHt8SG7DexqOCzvQUq7JOx3zPYQabBQh2Q3037ktsfVA240Mkfc5iftoix85RKWn952uSCNdp2UrzNpbKxLkEX5MI+ya19tJTmPUFs/+zFafxe3LfXAemXBdvEsoCsdc54YeZ07Ua0AkwK713X4OKechz3BqnRdijYzmHwj+DK5Qe4yB8n6Tn5VdEp5ADaKs6U49T/iIPBiZ57phJbBpZrv/TmePhH2N5DNmTql6768mKBLXFVTSXnlgYWIQfQPzD2Q/rnxdrNs5bd1ZzaS0qrShJql4scNY/cyJKnjMG1fUsqlFW/RpZ7ODE3pGLwWXoTUEMTmHw99jVU8t2LDVyU9RWJQQKFi0SssAs/6a4jdeB/syMKVnDLbnqUMhqksqIXHzGomYVFZU3y3GNtaBfSwGsCFl/rQJPwyfbZbcB1QP1/sbRCVd1x/zz3BzVxuIiKiYbK2bpqqwu5t2eesUnwXkJxz7sX9ufBVrshBrgfb8orz9eYOgsrXQTRX8IolFf5GNZDZf23c3uEQZ5Ba47naAFdVIasrHsO+ApmgHhGU1ly3by0Ov8cBF94aRBCXab+CfRBm6X5kjJrIpe0hFEW/xsEqFA7yG/G6zFRzSCzGxLC4KY2n1utQArIm30HE9+e8pYtOofnH2821Zxr2yBG4TSE3Bmb7o6kutSAI5hlus79+yA18Azc9xuMMxfCxkYO4PilO/BoFDP3NxysC0n428W+lWt+itDELH63p5WNvL0l7ezkhrf1wy+OIGa1dcIajqEQZ7FWFVCXO28HIb5ZYPmsnt1B8khqdZIk/OZiQL7CrMkSZvyeTfNcZsj6WiXaAune2a73FlRRzllKfjiDe20Ryu7Ew8oaaaARZxVUAOURSfx/IUVIUk2tWsTFmz+TSY4y/u4WpkhvNWY2DGLaJgutT6UB+h8357j4f4ajZhnVhG163+IIUF7qPxvklLw/1Tgl/+xxa6faAgXE4ybVfFSmKo5Uy+A9sFpm/p7nKSq1M+/pEa+lXVz1Rik7xvjiUWtd15HjxlERKccHtuVCB6utWFXPgm3oqWmkcAuG7aS2vSPq7gusWvukC9XIICHclCEcHcgX5vH+r1fgRCkrAhhtnBcmu1PGR7JfjADyVYCORmqnl7bsMPRTyVRdndt8Bjy0xwtdLJuZqxNfXjyUXLBqfzquG2cJ6HjXyFLfxMHSEal6PcrHO2yeN62kzj+E9Xoaa4025HlB10fHu5VoAJl8ulQ8yl7oimOZdW57ZJonwj+RntJ6neIMmyrkHCiHfF1JS8quhWn3IDZeWYZ1qx4p3gSZ9v3TE2mQLVgHSv2uhIyTSzBYCqv5RvWNGejt2PQgjviLW5t16CfY2gc/oOYg/X0L5ItN1xOl8shbaYVM6JxGSDHdKkFZuWlH1z6kWH7EemB6SrLSfkxbL1aNqpV3+qn12bvSz/lpsaYzFjp+XpSdHRGwq
5oMD5zj4+mk+FMT65sXgOj3cd3IDCtaWscVO/YCtXC3sNYTolCh+8BsQtjaZQlhKJYqLX+zhCIilVUAPHlkYd5t++aZ5KzFlDG5XWCZHSFPmA1FnFxTXJXVWLmlJjPJ2wd6XoZUtrwYHKmuBBwommL+UIMAKD9ka2qki05DdwD12b1sSiy09i/wiMcBrdijlltctQzh1ZCR2+GV+eM7Li6wFbteLe5uTQ15b+yLZXd699zb1h0EXBxyi95OU3iaM2YtuM352q1HlX8JlxrKmPK6QwLkMppFCJa6ubKCn4MFES40PH7xDHqzlVgqTyVTKkIMy4jvh1kROJqsB0ja4IikFuvzJPMN4dpCuSe+aAF6kJwQoyV3P/bVTSwoFC5W6+JM4jnQPoFKikn1/adX2caW5BHJEVtpzmElfidePCqeiGYikIU/XtuspXUNLkWSLkVBRENAmhlvxMp+0rVF52vPOLV7RN0JvkevS5KiFWr7lgBqjboM7WEhQSphZCfLSBIu2+Ah+l0MbDxtO4XgrYeMQaCkp00kWRvvuWa2JGVXIoxfUaQpHhR+tV6UnYy0Y5PSha9bWttFWmVnDGWxehTumtUPL84sTmjRfQ5N5mdTuQFuvbFt6ecvsN4C+RGUW8wV9fXP02dLnud76KxW57zh8bW90gf7GvDURycYk+ug0zhYaTeC+rrVnzJ6X1Xyw72Gvh+7ZbAXk+ZR47a5Lc0RgyvwklVhp4oowGqlefaYTElScd6whfGvlIrfXN0pIt3PriehsnG+POM9G3F/EQlNvtOYLPCokPsCvjY19Gdb/SnOa7hrJUkUR7p0ZZvfzGRPTCyLBZeiAYFGVBGHWB/ggHxvFlk7tKOmUGWkbXg5TH2g1RXPXK6eZoLVlN+o1xQ9neuwAEsGp1Z+XXznp0/OGk+8OKwD5ZsfI72homNPlUg0H2YpHPz7fRWUufsZ2Tgk5QijwZ7Vi8aVE5Zr3lBKZTUZnWMq8Yju8MpI+huXyOEbyGr+/KRdcPtofP9gNBOI0wNy8zcQw2wHBOkcbOnsuUOZigTULVRSSvIb3PRC6cxHe2RzANdNdsqxpsfmJ6BDnE5YCaV8gUyqtkurB5URk2abW9cB+1rp1uwt6VPl0mx4K5gez0jEBxqLkSDzLa00VPxbzScmRwZIXa2HLw443fsRMeCJqrS12tYT83MrOUtbgaXi2SbKj0Ej2aXNhO0lep8nLd0Z4gTbUx7uKgIO3RhCQEPIiQmydZ7GjqUDSELpZEwBrObRxurI4BcN4irpsGxHJ62HoLif0of60dvgKUsMWPYV6d+e1WHKN4RnCwh0FU4givZ+Zp1tZKfuhyjwISM2KOt7aLAscsS0ZKqEywZQ3klTxj9LLcC2FxiOkyvP+NYKjq1fOQ1FnmETC7BuJ5mYrpZ53b8s6FTwdkgpzd28sS3sfJjcoC6qSuCr1eMVjzhvhcBBVc6aoxBjlmoS+rQTVcWn88jUNjkX9POj6+w7I2X7dXPioM0w1mASI7plBl2w8vLXPl80xCYndt3JQHbjYKd36+zJGkLhdtb3YuhQ8lexY2mzzsScM99+EZobdcjEZJSGGE8OelEHJIPTMioTidH8LLapgojdeR8JzCBjEThNMI6vyCd8z+cdMqPaHLiT2Km8Ic9FhbKPVtUY5isYVaspRnQYcyR/wGW2TEC1ce3MIt66rr3brmsfHUuBdJ3syZk872icVRwvdO1jUQQU/eeJvhmHxULk7fht/MrsgMmbJIkjEuJy8z+ae6fvv3lV/7pw2kkiAVLH6TkE3Bv9gIv4aN4nWrXC4qxYy5dtHL427rn5F4VF+izPHgo5irgdyRYoe5/BIK2pIxkE9h1gryzCwhaHStEcsLeeImO2XiGQjyiffwZBhGxtxWsD1ne7arvK01SPiyij+91AKzP7wjwtWc66S4OzgVLtXFdByjBmxwWFcsTRgszxkvIvobWiC+sIWN04Qk4BsJB2M6xdNut6cdnqhyQ/BEZUbVRO9HzCD1Ge3Vzzs4KJAMm3GyDKybrpHn5SF9tLB4Un5eU88/cg+BHLCfmvCKbxCSd1gqE213hNN80NWxVyK0h0VQZ43hPEOpuTE6PNI0I2xGQqPqIkxhDVWaVWPYSWVRRhU2VhNzIPAbCoqd84thfsSw3ej340VBidUT13RteyA8wQAjgxGjvqg+wyILXN0a45Uosp6iUuPWFFaRYRsHozQpkh0rpaoHPJfLClpfZSoJ1bfAyjyoBURy9RNJj9uSLQBj0Fo/R2tAaDzFltZP8UG6pMVgUKzeT3AbBFIr5+ONsknokl5hfRpM4JxGmjaTHzbUPO+nSPjBJxoZLn51lNZxBfqBVhyF6OfRH6elXH2/ntpKIAtsfzlwkMPxDebKkyRmP+9t/WN/0kJh4X7OP0og18mG4c93/zjfJAHfZkpJxceUAMGw0R4ILH+CJIsQnBWKivcLdB6ZGmyHZCdrzts152CJ/uZD0aqdb7VCuQjVCzYvGqa/Q+2pZFyFt/iCCmSR1p0O5bgzsk4/nts7zG+kpg22lzs6XRGekhrBDQMg5yEHglwVGQUfaJ/IfSJ88czQhiABquWEzjzBLMmX6teXM296P/ZysWYShp4IfQ3Ve4O5KLjBcWJaICqBPVGQZXy7aw9QsgoqDQY9JIul+EGAjlpp6ym9Agdfb2l9aQvGJ5Yt6Lxit+y1FTT1PtwyZ4xz1kEJw8phAIeSt7CMkrR4n0fnRvC1V2CDloFN5VYwEiAIUykr9ZaCHvEI3AnQ3j3278gi+QJ+UixCzHKQBMrQoSpqUda5jh+gw9ylKBA/dGAyCAXB8vOF60/1trCjENEdFJ6WKb8aWGRRJ/y2lO4A8iuCY27f7rGukN9kdRyJPqRWgZc5ft4Z0EOpjsHnnaP4wG2FheJbtRf+ckk3zUdAzcu3EW4V58BDYtKM/ewkV5C1rcaVfsga4za0JP8uEuKlH623Kz+QBx8wyzxdTDXzgcv48UkuNUGNTe6jGv2urzBYMrkZQmJ719+UDtaxAjz2N7poARiDiq9JFMwW1ovsQrH95ioW7z+yxmM79t1xh/pjPt5PaL6xUtPZrScVE/iwDPG15J+tpn1eL+fngUZYwpJ5OqCFAoapfcoyPdnjgxA25sysWLWaw6pm6kzYa8cqTM5qEsE3eIa6hyA81QIp0e/eXIpgZRPbsowahmcG7M8VrjDrmKvZW3qxg1m9rIvCI3cbGFca1Qn061iKyV0+9WXcxopx+9bcSORm0zeeZXBTUFh1RTIC2yG4I46hx9nQjr5dxbABhC93iAveNBZSa9jehAFc75i2rX5k+00M+TZNY4if3BKB2LZ9JuRYQZoslY/7VX8q2BPHiAu/+xT3m8SB1WVlGTU0H06xvPRdTu4XkER9lShYPsN5DV1IWSX9WKvYYZZoXT4S6kcaGvjBd1aUFIeTOBvO82E/3OpbsVQdyQjhObpZiV3Jt9gAKHMbZ23Hckmnda/4HJRI9816a/sJi9L5/ph1x0+jfvk5srxAqY+Y9hbd42FAsr7vKrzYeGlarI
1XVVFFP8PRilgbIkrJi6AHDtHfjvXXB8QbF8eKyXesSgn/VD7alKKEJlaT15z69EmH+Nm7Jwjhjxabx8LAzgFjsqzhaIjKQvbTlYrh5kxt+aGAjyk92/3Qzu4qy8LTbxB1eTcfdZ5KGBB74LfZokDh99BRt1z7b9atEQQVkxcYMH2JIFzLptxecIgVF3IIaUuk4DwLTy9NyjJvlEQFVZwAuG3H5E1gN47DcJTjwb/44Xa6GwlxAMur0HOkdrCY9Op97q+01lAge8foo/DHJK40xpedcTV2Wx6qJMULg8CbO8POvAIrzYD+eyy/NVhuqiEPoCRPY1HrtOLbf/iGlfP3yTrrPCIa1UhLuT8RI5NdPF64MH2OFx4czvvE7TgucCs/U5RarQdXi8k73j1uFNzX5RqfuV3ptBp4S8WngKrsQYK19DOoB/ydkxjtVyuHig8MIfkxxA+QYfipoqGlsO1lCt9mkfD1aLYUaNxFXOh4oH6uqwBtTuhJKg69NKenMMe9hmYZhue4HPzOP59amLQUHI+bZ+agwAU6psuI1jkWz+YA5aligg1II9ywzfnuXErt7WFcvIA3vis9T6ZcEcRXy6Qz32NXSKgYT4cWy3EvU5PbJv0DATtnRO9cns91W3S/RZdbsXoF1x1qU8qHwNJsKzLYw7Bdo0O255ZEGkbQePleStCt8h0OiKYxFVcY3bc4exd6F4+QeoMeD9rF9GrSbmyhC4i2aGVno8LDdp6q/TBroYoht5HGYeGOo3CuJa/K/iHqdTcjPs/cOqYpBV+v7GBE5ehNl8dPsdJEQJuISohAiVnLLQ/B1VUAYlU/fqqfPmDoanwrkcYmtq+Jtf+29eLR1+goBGSpTXVcbA3y/rGxulTI22zAWYszz24oj85i5SeZKR/Dc6F6vb2wEgQChQzLzQUY9X6zVbeyJIV24+sCXQIKDjr6sQD7YZuGwWbAcFdDkBEZzKf6ZgU04eNojlKJKRRjGXTzutMDpCXMeE0+ssY1GozZ6nsZQLCguFmwHFGUYxxmYD76khKVHPNBOXtmTI+ugBfVBkud3yitelSs7zhnUYq+/RCq97eDAOOt0UMIuIb23fV4F03bijwnhpyjA6dRlqz6WBmVYdDLHEPbcaqqLK1OBkIcvwvuOc7CMAzgN/O3P+ROZ0lIKyQnGh4D/rDq0721qFGLovgXlP/+czsR5F8QaMumJTvAEQSBfw6N8ZT1y58dQoV/QbnukLKhyxbQOgb65QMY9K84Qf/Zn58LnL98FoKgnwN7lS7lz0EUgf6VIn4Ol1lVlMuvx38OxvPPgeLXu4Gs1M8z3D90B5e17Z8e6fszAlXpz2fyV05Eagav4eCR/yPai8+S/Q+E/OWbxu2a/Zz3c2BezvaXA/dlqnG+/8LuZbVkzhgn4Df7FI/3sXLp7jvy8P1jPI9ZAh45r47sviubV23LDe1tZ1G+H3pwiXmZhib708F/ua3x98+vv/F/WQ30PjLcd6oWsGAk9Me8Exz+zUugf/8S/rTWf/4C/n7rT/0F69+nzDQN+/23BMx3q5J/v+x/6ZLiv1/SZJ028Ka+17nv41QXuOm/rXaWFtn/6Vr/2Vri/8Fa/unYlLXxUm3//vL/0QL/cgdrqO4b//oqid++yt++o3lYpyT75VP/9pr+0wvB0G8utMRTkS2/u9D9TuLzz04bwQnz//qBfyd7BPQb8fm54r8J069r+jfIF/1P+fpj5Ot3YvGXytc/TMDg36kE9Q+QMBT6p4T9dRL2dxSx3zmsP0jEfv3qvz7yP8KIofB/K5DyO09xa/J/LUpBkX/q+B+k4381TPm9jhN/Jx2HiN/oOPaP0HH0nyL2B4nY7+zAXy1ivxPWP0rEqN96PvgfIWLYf6mI/b8uVb81N3+pVKHQX2m4/rD3jv/3gg/0b9cb/6+GD8Q/Fe8vVrzfqwtO/ytFQL/++c0V/2I1xP7Prvv3Vsr/XoFHFP2/Tin/GXn8azX0d5r017pG+Lc46O+G6X/nzf8hoaG/JPj4/4+Ow+T/bTqO/TM291fq+H+umX+1jkN/N97+W1KF/wN0/E+28J8i9jeL2O8E4693I38n3v67DMM/JDSEIf+lIvbvperfCdz/cyKGwH8Ml/gdR/nfX/ePsnB/8me/3vYfIn5/SWTy/x8U83um8l+dfcD+Gbf7W8IHf5DK/470/FUq/4cp5V8S0/sn7PiPX+R/9N7+5uTBf3Jd9O9U9ICSv/UJf1tG+v7rNAzLn59+G/FSH9IMnPE/AQ==</diagram></mxfile>
2110.06149/main_diagram/main_diagram.pdf ADDED
Binary file (40.9 kB). View file
 
2110.06149/paper_text/intro_method.md ADDED
@@ -0,0 +1,142 @@
1
+ # Introduction
2
+
3
+ Decision problems with an underlying combinatorial structure pose a significant challenge for a learning agent, as they require both the ability to infer the true low-dimensional state of the environment and the application of abstract reasoning to master it. A traditional approach for common logic games, given that a simulator or a model of the game is available, consists of applying a graph search algorithm to the state diagram, effectively simulating several trajectories to find the optimal one. As long as the state space of the game grows at a polynomial rate with respect to the planning horizon, the solver is able to efficiently find the optimal solution to the problem. Of course, when this is not the case, heuristics can be introduced at the expense of the optimality of solutions.
4
+
5
+ Learned world models [@ha2018recurrent; @hafner2018learning] can learn to map complex observations to a lower-dimensional latent space and retrieve an approximate simulator of an environment. However, while the continuous structure of the latent space is suitable for training reinforcement learning agents [@hafner2019dream; @chua2018pets] or applying heuristic search algorithms [@schrittwieser2019mastering], it also prevents a straightforward application of simpler graph search techniques that rely on identifying and marking visited states.
6
+
7
+ Our work follows naturally from the following insight: a simple graph search might be sufficient for solving visually complex environments, as long as a world model is trained to realize a suitable structure in the latent space. Moreover, the complexity of the search can be reduced from exponential to linear by reidentifying visited latent states.
8
+
9
+ The method we propose is located at the intersection between classical planning, representation learning and model-based reinforcement learning. It relies on a novel low-dimensional world model trained through a combination of opposing losses without reconstructing observations. We show how learned latent representations allow a dynamics model to be trained to high accuracy, and how the dynamics model can then be used to reconstruct a *latent graph* representing environment states as vertices and transitions as edges. The resulting latent space structure enables powerful graph search algorithms to be deployed for planning with minimal modifications, solving challenging combinatorial environments from pixels. We name our method **[PPGS]{.smallcaps}** as it **P**lans from **P**ixels through **G**raph **S**earch.
10
+
11
+ We design [PPGS]{.smallcaps} to be capable of generalizing to unseen variations of the environment, or equivalently across a distribution of *levels* [@cobbe2020procgen]. This is in contrast with traditional benchmarks [@bellemare2012ale], which require the agent to be trained and tested on the same fixed environment.
12
+
13
+ We can describe the main contributions of this paper as follows: first, we introduce a suite of environments that highlights a weakness of modern reinforcement learning approaches; second, we introduce a simple but principled world model architecture that can accurately learn the latent dynamics of a complex system from high-dimensional observations; third, we show how a planning module can simultaneously estimate the latent graph for previously unseen environments and deploy a breadth-first search in the latent space to retrieve a competitive policy; fourth, we show how combining our insights leads to unrivaled performance and generalization on a challenging class of environments.
14
+
15
+ # Method
16
+
17
+ For the purpose of this paper, each environment can be modeled as a family of fully observable deterministic goal-conditioned Markov Decision Processes with discrete actions, that is, the 6-tuples $\{(S, A, T, G, R, \gamma)_i\}_{1...n}$ where $S_i$ is the state set, $A_i$ is the action set, $T_i$ is a transition function $T_i: S_i \times A_i \rightarrow S_i$, $G_i$ is the goal set, $R_i$ is a reward function $R_i: S_i \times G_i \rightarrow \mathbb{R}$, and $\gamma_i$ is the discount factor. We remark that each environment can also be modeled as a BlockMDP [@du2019provably] in which the context space $\mathcal{X}$ corresponds to the state set $S_i$ we introduced.
18
+
19
+ In particular, we deal with families of procedurally generated environments. We refer to each of the $n$ elements of a family as a *level* and omit the index $i$ when dealing with a generic level. We assume that state spaces and action spaces share the same cardinality across all levels, that is, $|S_i|=|S_j|$ and $|A_i|=|A_j|$ for all $1 \leq i, j \leq n$.
20
+
21
+ In our work the reward simplifies to an indicator function for goal achievement $R(s, g) = \mathbf{1}_{s=g}$ with $G \subseteq S$. Given a goal distribution $p(g)$, the objective is that of finding a goal-conditioned policy $\pi_g$ that maximizes the return
22
+
23
+ $$\begin{align}
24
+ \mathcal{J}_{\pi} &=
25
+ \displaystyle \mathop{\mathbb{E}}_{g \sim p(g)} \Bigg[ \mathop{\mathbb{E}}_{\tau \sim p(\tau | \pi_g)} \sum_t \gamma^t R(s_t,g) \Bigg]
26
+ \label{eq:return}
27
+ \end{align}$$
28
+
29
+ where $\tau \sim p(\tau | \pi_g)$ is a trajectory $(s_t, a_t)_{t=1}^T$ sampled from the policy.
30
+
31
+ Our environments of interest should challenge both perceptive and reasoning capabilities of an agent. In principle, they should be solvable through extensive search in hard combinatorial spaces. In order to master them, an agent should therefore be able to (i) identify pairs of bisimilar states [@zhang2020learning], (ii) keep track of and reidentify states it has visited in the past and (iii) produce highly accurate predictions for non-trivial time horizons. These factors contribute to making such environments very challenging for existing methods. Our method is designed in light of these necessities; it has two integral parts, the world model and the planner, which we now introduce.
32
+
33
+ The world model relies solely on three jointly trained function approximators: an encoder, a forward model and an inverse model. Their overall orchestration is depicted in the figure below and described in the following.
34
+
35
+ <figure id="fig:model">
36
+ <div class="center">
37
+ <embed src="images/architecture.pdf" style="width:80.0%" />
38
+ </div>
39
+ <figcaption>Architecture of the world model. A convolutional encoder extracts latent state representations from observations, while a forward model and an inverse model reconstruct latent dynamics by predicting state transitions and actions that cause them. The notation is introduced in </figcaption>
40
+ </figure>
41
+
42
+ Mapping highly redundant observations from an environment to a low-dimensional state space $Z$ has several benefits [@ha2018recurrent; @hafner2018learning]. Ideally, the projection should extract the compressed "true state" of the environment and ignore irrelevant visual cues, discarding all information that is useless for planning. For this purpose, our method relies on an *encoder* $h_\theta$, that is, a neural function approximator mapping each observed state $s \in S$ to a low-dimensional representation $z \in Z$ (*embedding*). While there are many suitable choices for the structure of the latent space $Z$, we choose to map observations to points on a $d$-dimensional hypersphere, taking inspiration from @liu2017sphereface.
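+
+ As a minimal sketch of this design choice (assuming PyTorch; the convolutional trunk and layer sizes are our own placeholders, with $d=16$ taken from the margin discussion later in this section), the hyperspherical constraint amounts to L2-normalizing the encoder output:
+
+ ```python
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F
+
+ class SphericalEncoder(nn.Module):
+     """Encoder h_theta: maps an observation s to a point z on the unit hypersphere."""
+
+     def __init__(self, d: int = 16):
+         super().__init__()
+         # Illustrative convolutional trunk; the paper does not fix these sizes here.
+         self.trunk = nn.Sequential(
+             nn.Conv2d(3, 32, kernel_size=4, stride=2), nn.ReLU(),
+             nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
+             nn.AdaptiveAvgPool2d(1), nn.Flatten(),
+         )
+         self.head = nn.Linear(64, d)
+
+     def forward(self, s: torch.Tensor) -> torch.Tensor:
+         z = self.head(self.trunk(s))
+         return F.normalize(z, dim=-1)  # project onto the d-dimensional unit sphere
+ ```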
43
+
44
+ In order to plan ahead in the environment, it is crucial for an agent to estimate the transition function $T$. In fact, if a mapping to a low-dimensional latent space $Z$ is available, directly learning the projected transition function $T_Z: Z \times A \to Z$ can be largely beneficial [@ha2018recurrent; @hafner2018learning]. The deterministic latent transition function $T_Z$ can be learned by a neural function approximator $f_\phi$ so that if $T(s_t, a_t) = s_{t+1}$, then $f_\phi(h_\theta(s_t), a_t) := f_\phi(z_t, a_t) = h_\theta(s_{t+1})$. We refer to this component as the *forward model*. Intuitively, it can be trained to retrieve the representation of the state of the MDP at time $t+1$ given the representation of the state and the action taken at the previous time step $t$.
45
+
46
+ Due to the Markov property of the environment, an initial state embedding $z_t$ and the action sequence $(a_t, \dots, a_{t+k})$ are sufficient to predict the latent state at time $t+k$, as long as $z_t$ successfully captures all relevant information from the observed state $s_t$. The amount of information to be embedded in $z_t$ and to be retained in autoregressive predictions is, however, in most cases prohibitive. Take, for example, the case of a simple maze: $z_t$ would have to encode not only the position of the agent but, as the predictive horizon increases, most of the structure of the maze.
47
+
48
+ To allow the encoder to focus only on local information, we adopt a hybrid forward model which can recover the invariant structures in the environment from previous observations. The function that the forward model seeks to approximate can then include an additional input: $f_\phi(z_t, a_t, s_c) = z_{t+1}$, where $s_c \in S$ is a generic observation from the same environment and level. Through this context input the forward model can retrieve information that is constant across time steps (e.g., the location of walls in static mazes). In practice, we can use a randomly sampled observation from the same level during training and the latest observation during evaluation. This choice allows for more accurate and structure-aware predictions, as we show in the ablations.
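+
+ Continuing the PyTorch sketch above, the hybrid forward model simply concatenates the current embedding, a one-hot action, and an encoding of the context observation (the separate context encoder and the MLP sizes are assumptions, not the paper's exact architecture):
+
+ ```python
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F
+
+ class HybridForwardModel(nn.Module):
+     """Forward model f_phi: predicts z_{t+1} from (z_t, a_t) and a context s_c."""
+
+     def __init__(self, d: int = 16, n_actions: int = 4):
+         super().__init__()
+         self.n_actions = n_actions
+         self.context_enc = SphericalEncoder(d)  # assumed: its own encoder for s_c
+         self.mlp = nn.Sequential(
+             nn.Linear(d + n_actions + d, 128), nn.ReLU(),
+             nn.Linear(128, d),
+         )
+
+     def forward(self, z, a, s_c):
+         a_onehot = F.one_hot(a, self.n_actions).float()
+         c = self.context_enc(s_c)  # time-invariant level structure (e.g. walls)
+         z_next = self.mlp(torch.cat([z, a_onehot, c], dim=-1))
+         return F.normalize(z_next, dim=-1)  # assumed: predictions stay on the sphere
+ ```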
49
+
50
+ Given a trajectory $(s_t, a_t)_{t=1}^{T}$, the forward model can be trained to minimize some distance measure between state embeddings $(z_{t+1})_{1\ldots T-1} = (h_\theta(s_{t+1}))_{1\ldots T-1}$ and one-step predictions $(f_\phi(h_\theta(s_{t}), a_{t}, s_{c}))_{1\ldots T-1}$. In practice, we choose to minimize a Monte Carlo estimate of the expected Euclidean distance over a finite time horizon, a set of trajectories and a set of levels. When training on a distribution of levels $p(l)$, we extract $K$ trajectories of length $H$ from each level with a uniform random policy $\pi$ and we minimize $$\begin{align}
51
+ \mathcal{L}_{\text{FW}} &=
52
+ \displaystyle \mathop{\mathbb{E}}_{l \sim p(l)} \bigg[
53
+ \frac{1}{H-1} \sum_{h=1}^{H-1}
54
+ \displaystyle \mathop{\mathbb{E}}_{a_h \sim \pi}
55
+ \Big[
56
+ \| f_\phi(z^l_{h}, a_h, s^l_c) - z^l_{h+1} \|_2^2
57
+ \Big]
58
+ \bigg]
59
+ \label{eq:loss-fw}
60
+ \end{align}$$ where the superscript indicates the level from which the embeddings are extracted.
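+
+ As a hypothetical single-trajectory estimate of $\mathcal{L}_{\text{FW}}$ in the same sketch (the outer expectations over levels and trajectories reduce to batching and are omitted here):
+
+ ```python
+ import torch
+
+ def forward_loss(f_phi, z, actions, s_c):
+     """L_FW for one trajectory: z is (H, d) with z[t] = h_theta(s_t); actions is (H-1,)."""
+     s_c_batch = s_c.unsqueeze(0).expand(len(actions), *s_c.shape)
+     z_pred = f_phi(z[:-1], actions, s_c_batch)         # one-step predictions
+     return ((z_pred - z[1:]) ** 2).sum(dim=-1).mean()  # mean squared distance
+ ```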
61
+
62
+ Unfortunately, the loss landscape of Equation [\[eq:loss-fw\]](#eq:loss-fw){reference-type="ref" reference="eq:loss-fw"} presents a trivial minimum in the case where the encoder collapses all embeddings to a single point in the latent space. As the embeddings of any pair of states could not be distinguished in this case, this is not a desirable solution. We remark that this is a known problem in metric learning and image retrieval [@bellet2013survey], for which solutions ranging from siamese networks [@bromley1993signature] to triplet losses [@hoffer2015deep] have been proposed.
63
+
64
+ The context of latent world models offers a natural solution that is not available in the general embedding problem: additionally training a probabilistic *inverse model* $p_\omega(a_t \mid z_t, z_{t+1})$ such that if $T_Z(z_t, a_t) = z_{t+1}$, then $p_\omega(a_t \mid z_t, z_{t+1}) > p_\omega(a_k \mid z_t, z_{t+1})$ for all $a_k \neq a_t \in A$. The inverse model, parameterized by $\omega$, can be trained to predict the action $a_t$ that causes the latent transition between two embeddings $z_t, z_{t+1}$ by minimizing the multi-class cross entropy $$\begin{align}
65
+ \mathcal{L}_{\text{CE}} =
66
+ \displaystyle \mathop{\mathbb{E}}_{l \sim p(l)} \bigg[
67
+ \frac{1}{H-1} \sum_{h=1}^{H-1}
68
+ \displaystyle \mathop{\mathbb{E}}_{a_h \sim \pi}
69
+ \Big[
70
+ - \log{p_\omega(a_h \mid z^l_{h}, z^l_{h+1})}
71
+ \Big]
72
+ \bigg].
73
+ \label{eq:loss-ce}
74
+ \end{align}$$ Intuitively, $\mathcal{L}_{\text{CE}}$ increases as embeddings collapse, since it becomes harder for the inverse model to recover the actions responsible for latent transitions. For this reason, it mitigates unwanted local minima. Moreover, it is empirically observed to enforce a regular structure in the latent space that eases the training procedure, as argued in the Appendix. We note that this loss plays a similar role to the reconstruction loss in @hafner2018learning. However, $\mathcal{L}_{CE}$ does not force the encoder network to embed information that helps with reconstructing irrelevant parts of the observation, unlike training methods relying on image reconstruction [@chiappa2017recurrent; @ha2018recurrent; @hafner2018learning; @hafner2019dream; @hafner2020mastering].
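+
+ A matching sketch of the inverse model and $\mathcal{L}_{\text{CE}}$ (again assuming PyTorch; the two-layer MLP is a placeholder):
+
+ ```python
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F
+
+ class InverseModel(nn.Module):
+     """Inverse model p_omega: logits over actions for the transition z_t -> z_{t+1}."""
+
+     def __init__(self, d: int = 16, n_actions: int = 4):
+         super().__init__()
+         self.mlp = nn.Sequential(
+             nn.Linear(2 * d, 128), nn.ReLU(),
+             nn.Linear(128, n_actions),
+         )
+
+     def forward(self, z_t, z_next):
+         return self.mlp(torch.cat([z_t, z_next], dim=-1))
+
+ def inverse_loss(inv_model, z, actions):
+     """L_CE for one trajectory: cross entropy between predicted and executed actions."""
+     logits = inv_model(z[:-1], z[1:])
+     return F.cross_entropy(logits, actions)
+ ```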
75
+
76
+ While $\mathcal{L}_{\text{CE}}$ is sufficient for preventing collapse of the latent space, a discrete structure needs to be recovered in order to deploy graph search in the latent space. In particular, it is still necessary to define a criterion to reidentify nodes during the search procedure, or to establish whether two embeddings (directly encoded from observations or imagined) represent the same true low-dimensional state.
77
+
78
+ A straightforward way to enforce this is by introducing a margin $\varepsilon$, representing a desirable minimum distance between embeddings of non-bisimilar states [@zhang2020learning]. A third and final loss term can then be introduced to encourage margins in the latent space: $$\begin{align}
79
+ \mathcal{L}_{\text{margin}} =
80
+ \displaystyle \mathop{\mathbb{E}}_{l \sim p(l)} \bigg[
81
+ \frac{1}{H-1} \sum_{h=1}^{H-1}
82
+ \max\Big(0, 1 - \frac{\|z^l_{h+1} - z^l_{h}\|_2^2}{\varepsilon^2}\Big)
83
+ \bigg].
84
+ \label{eq:loss-margin}
85
+ \end{align}$$ We then propose to reidentify two embeddings as representing the same true state if their Euclidean distance is less than $\frac{\varepsilon}{2}$.
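+
+ In the same sketch, the margin term and the $\varepsilon/2$ reidentification test take only a few lines (using $\varepsilon=0.1$, the value quoted below); the total objective is then the weighted sum of the three losses:
+
+ ```python
+ import torch
+
+ def margin_loss(z, eps: float = 0.1):
+     """L_margin over consecutive embeddings z of shape (H, d)."""
+     sq_dist = ((z[1:] - z[:-1]) ** 2).sum(dim=-1)
+     return torch.clamp(1.0 - sq_dist / eps ** 2, min=0.0).mean()
+
+ def same_state(z_a, z_b, eps: float = 0.1):
+     """Reidentify two embeddings when their Euclidean distance is below eps / 2."""
+     return (z_a - z_b).norm(dim=-1) < eps / 2
+ ```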
86
+
87
+ Adopting a latent margin effectively constrains the number of margin-separated states that can be represented on a hyperspherical latent space. However, this quantity is lower-bounded by the kissing number [@kissing_number], that is, the number of non-overlapping unit spheres that can be tightly packed around a central $d$-dimensional sphere. The kissing number grows exponentially with the dimensionality $d$. Thus, the capacity of our $d$-dimensional unit-sphere latent space ($d=16$ in our case, with margin $\varepsilon=0.1$) is not overly restricted.
88
+
89
+ The world model can be trained jointly and end-to-end by simply minimizing a combination of the three loss functions:
90
+
91
+ $$\begin{align}
92
+ \mathcal{L} = \alpha \mathcal{L}_{\text{FW}} + \beta \mathcal{L}_{\text{CE}} + \mathcal{L}_{\text{margin}}.
93
+ \label{eq:loss-total}
94
+ \end{align}$$
95
+
96
+ To summarize, the three components are respectively encouraging accurate dynamics predictions, regularizing latent representations and enforcing a discrete structure for state reidentification.
97
+
98
+ ![Overview of latent-space planning. One-shot planning is possible by (i) embedding the current observation and goal to the latent space and (ii) iteratively growing a latent graph until a vertex is reidentified with the goal.](images/header.pdf){#fig:tree width="\\linewidth"}
99
+
100
+ A deterministic environment can be represented as a directed graph $G$ whose vertices $V$ represent states $s \in S$ and whose edges $E$ encode state transitions. An edge from a vertex representing a state $s \in S$ to a vertex representing a state $s' \in S$ is present if and only if $T(s, a) = s'$ for some action $a \in A$, where $T$ is the state transition function of the environment. This edge can then be labelled with action $a$. Our planning module relies on reconstructing the *latent graph*, which is the projection of graph $G$ onto the latent space $Z$.
101
+
102
+ ::: wrapfigure
103
+ r0.40 ![image](images/fringe.pdf){width="\\linewidth"}
104
+ :::
105
+
106
+ In this section we describe how a latent graph can be built from the predictions of the world model and efficiently searched to recover a plan, as illustrated above. This method can be used as a one-shot planner, which only needs access to a visual goal and the initial observation from a level. When iterated and augmented with online error correction, this procedure results in a powerful approach, which we refer to as the *full planner*, or simply as [PPGS]{.smallcaps}.
107
+
108
+ Breadth First Search (BFS) is a graph search algorithm that relies on a FIFO queue and on marking visited states to find an optimal path in $O(V+E)$ steps. Its simplicity makes it an ideal candidate for solving combinatorial games by exploring their latent graph. If the number of reachable states in the environment grows polynomially, the size of the graph to search will increase at a modest rate and the method can be applied efficiently.
109
+
110
+ We propose to execute a BFS-like algorithm on the latent graph, which is recovered by autoregressively simulating all transitions from visited states. As depicted in , at each step, the new set of leaves $L$ is retrieved by feeding the leaves from the previous iteration through the forward model $f_\phi$. The efficiency of the search process can be improved as shown in , by exploiting the margin $\varepsilon$ enforced by equation $\ref{eq:loss-margin}$ to reidentify states and identify loops in the latent graph. We now provide a simplified description of the planning method in Algorithm [\[alg:simplified\]](#alg:simplified){reference-type="ref" reference="alg:simplified"}, while details can be found in .
111
+
112
+ :::: algorithm
113
+ **Input:** Initial observed state $s_1$, visual goal $g$, model parameters $\theta, \phi$
114
+
115
+ ::: algorithmic
116
+ $z_1, z_g = h_\theta(s_1), h_\theta(g)$
+ $L, V = \{z_1\}$
+ **repeat:**
+ $L = \{f_\phi(z, a, s_1) : \exists z \in L, a \in A\}$
+ **if** some $z^* \in L$ is reidentified with $z_g$: **return** action sequence from $z_1$ to $z^*$
+ $L = L \setminus V$
+ $V = V \cup L$
117
+ :::
118
+
119
+ []{#alg:simplified label="alg:simplified"}
120
+ ::::
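+
+ A plain-Python sketch of the simplified search above, reusing the components from the earlier snippets (tracking the action sequence per leaf is our own bookkeeping choice; the paper's efficiency improvements are omitted):
+
+ ```python
+ import torch
+
+ @torch.no_grad()
+ def one_shot_plan(h_theta, f_phi, s_1, g, actions, eps=0.1, max_depth=50):
+     """BFS-like search over the latent graph; returns an action sequence or None."""
+     z_1 = h_theta(s_1.unsqueeze(0))[0]
+     z_g = h_theta(g.unsqueeze(0))[0]
+     leaves, visited = [(z_1, [])], [z_1]
+     for _ in range(max_depth):
+         next_leaves = []
+         for z, plan in leaves:
+             for a in actions:
+                 z_next = f_phi(z.unsqueeze(0), torch.tensor([a]), s_1.unsqueeze(0))[0]
+                 if (z_next - z_g).norm() < eps / 2:  # goal vertex reidentified
+                     return plan + [a]
+                 # Reidentify against visited vertices to keep the search linear.
+                 if all((z_next - v).norm() >= eps / 2 for v in visited):
+                     visited.append(z_next)
+                     next_leaves.append((z_next, plan + [a]))
+         leaves = next_leaves
+     return None
+ ```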
121
+
122
+ The one-shot variant of [PPGS]{.smallcaps} relies largely on highly accurate autoregressive predictions, which a learned model cannot usually guarantee. We mitigate this issue by adopting a model-predictive-control-like approach [@garcia1989model]. [PPGS]{.smallcaps} recovers an initial guess at the best policy $(a_i)_{1, ..., n}$ simply by applying one-shot [PPGS]{.smallcaps} as described in the previous paragraph and in Algorithm [\[alg:open-loop\]](#alg:open-loop){reference-type="ref" reference="alg:open-loop"}. It then applies the policy step by step and projects new observations to the latent space. When new observations do not match the latent trajectory, the policy is recomputed by applying one-shot [PPGS]{.smallcaps} from the latest observation. This happens when the autoregressive prediction of the current embedding (conditioned on the action sequence since the last planning iteration) cannot be reidentified with the embedding of the current observation. Moreover, the algorithm stores all observed latent transitions in a lookup table and, when replanning, only trusts the forward model on previously unseen observation/action pairs. A detailed description can be found in .
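+
+ The replanning trigger reduces to the same $\varepsilon/2$ reidentification test, sketched below (the transition lookup table is omitted):
+
+ ```python
+ import torch
+
+ @torch.no_grad()
+ def needs_replan(h_theta, z_pred, s_obs, eps=0.1):
+     """Replan when the observed embedding cannot be reidentified with the
+     autoregressive prediction z_pred of the current latent state."""
+     z_obs = h_theta(s_obs.unsqueeze(0))[0]
+     return (z_obs - z_pred).norm() >= eps / 2
+ ```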
123
+
124
+ In order to benchmark both perception and abstract reasoning, we empirically show the feasibility of our method on three challenging procedurally generated environments. These include the Maze environment from the procgen suite [@cobbe2020procgen], as well as DigitJump and IceSlider, two combinatorially hard environments which stress the reasoning capabilities of a learning agent, or even of a human player. In the context of our work, the term "combinatorial hardness" is used loosely. We refer to an environment as "combinatorially hard" if only very few of the exponentially many trajectories actually lead to the goal, while deviating from them often results in failure (e.g. DigitJump or IceSlider). Hence, some "intelligent" search algorithm is required. In this way, the process of retrieving a successful policy resembles that of a graph-traversing algorithm. The last two environments are made available in a public repository [@environments], where they can also be tested interactively. More details on their implementation are included in .
125
+
126
+ <figure data-latex-placement="!htb">
127
+ <p>ProcgenMaze<br />
128
+ <img src="images/maze_solved.png" alt="image" /> DigitJump<br />
129
+ <img src="images/dice_solved.png" alt="image" />  IceSlider<br />
130
+ <img src="images/ice_solved.png" alt="image" /></p>
131
+ <figcaption>Environments. Initial observations and one-shot <span class="smallcaps">PPGS</span>’s solution (arrows) of a random level of each of the three environments. ProcgenMaze is from <span class="citation" data-cites="cobbe2020procgen"></span>. DigitJump and IceSlider are proposed by us and can be accessed at <span class="citation" data-cites="environments"></span>.</figcaption>
132
+ </figure>
133
+
134
+ The ProcgenMaze environment consists of a family of procedurally generated 2D mazes. The agent starts in the bottom left corner of the grid and needs to reach a position marked by a piece of cheese. For each level, a unique shortest solution exists, and its length is usually distributed roughly between $1$ and $40$ steps. This environment presents significant intra-level variability, with different sizes, textures, and maze structures. While retrieving the optimal solution in this environment is already a non-trivial task, its dynamics are uniform and actions only cause local changes in the observations. Moreover, ProcgenMaze is a forgiving environment in which errors can always be recovered from. In the real world, many operations are irreversible, for instance cutting/breaking objects, gluing parts, or mixing liquids. Environments containing remote controls, for example, show non-local effects. We use these insights to choose the additional environments.
135
+
136
+ IceSlider is in principle similar to ProcgenMaze, since it also consists of procedurally generated mazes. However, each action propels the agent in a direction until an obstacle (a rock or the border of the environment) is met. We generate solvable but unforgiving levels featuring irreversible transitions that, once taken, prevent the agent from ever reaching the goal.
137
+
138
+ DigitJump features a distribution of randomly generated levels, each consisting of a 2D 8x8 grid of handwritten digits from 1 to 6. The agent needs to go from the top left corner to the bottom right corner. The 4 directional actions are available, but each of them causes the agent to move in that direction by the number of steps indicated by the digit on the starting cell. Therefore, a single action can easily transport the player across the board. This makes navigating the environment very challenging, despite the reduced cardinality of the state space. Moreover, the game presents many cells in which the agent can get irreversibly stuck.
139
+
140
+ In recent years, several other authors have explored the intersection between representation learning and classical algorithms. This is the case, for instance, of @kuo2018deep [@kumar2019lego; @ichter2018robot] who rely on sequence models or VAEs to propose trajectories for sampling-based planners. Within planning research, @yonetani2020path introduce a differentiable version of the A\* search algorithm that can learn suitable representations from images with supervision. The most relevant line of work to us is perhaps the one that attempts to learn representations that are suitable as an input for classical solvers. Within this area, @asai2018classical [@asai2020learning] show how symbolic representations can be extracted from complex tasks in an end-to-end fashion and directly fed into off-the-shelf solvers. More recently, @vlastelica2021neuroalgorithmic frames MDPs as shortest-path problems and trains a convolutional neural network to retrieve the weights of a fixed graph structure. The extracted graph representation can be solved with a combinatorial solver and trained end-to-end by leveraging the blackbox differentiation method [@Pogancic2020Differentiation].
141
+
142
+ A further direction of relevant research is that of planning to achieve multiple goals [@nair2018visual]. While the most common approaches involve learning a goal-conditioned policy with experience relabeling [@andrychowicz2017hidsight], the recently proposed GLAMOR [@paster2020planning] relies on learning inverse dynamics and retrieves policies through a recurrent network. By doing so, it can achieve visual goals without explicitly modeling a reward function, an approach that is sensibly closer to ours and can serve as a relevant comparison. Another method sharing a similar setting with ours is LEAP [@nasiriany2019planning], which also attempts to fuse reinforcement learning and planning; however, its approach is fundamentally different and designed for dense rewards and continuous control. SPTM [@savinov2018semi] pursues a related direction, but requires exploratory traversals in the current environment, which would be particularly hard to obtain due to procedural generation.
2110.09419/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2021-05-16T17:47:03.485Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36" etag="W4m8obO47reXAZ5VHKXA" version="14.6.13" type="google"><diagram id="q4w610eKK704HTmwibl4" name="Page-2">7V1bc9u4kv41rt19MAoXAgQf42Q8U2eSMznj2U3maYsSaZsnsqih6LG9v34B3gmAIimTFGUzSVUkXiBK/TX63n1BPj48/xy5u/svoedvLjD0ni/IpwuMsQOJ+E8eeUmPXFKaHriLAi89hMoDN8H/+dlBmB19DDx/X7swDsNNHOzqB9fhduuv49oxN4rCp/plt+Gm/qk79y77RFgeuFm7G1+77FvgxffpUY7t8vgvfnB3n38yYk565sHNL84W3t+7XvhUOUR+uiAfozCM01cPzx/9jfzx8t8Ff//fp2+3n2+vvn5Gl5+//Lq/+i2+TBe7brhle//w+292/Kv/gTrf7D9D73q7u8Q4o8Hf7uYx+1r/vY2DeON74uhn98WPskeMX/LvnX+9yN/GQz+OpT2O9vFR+Lj1fLkMvCBXT/dB7N/s3LU8+ySAJo7dxw8b8Q6Jl8UvK6+927j7ffZ64678zZW7/nGXrPcx3ISROLUNt7687Ycfr++zS2/DbXztPgQbidOf3ch9CLdedjxDpXhucVMchT/8fKULTFzfQkQQ8yoUzxfE8nYiF3Q3wd1WvFmLH1D8vmKlYLOp3Hfreo5HxXH9l26hoh/F/nMFndlv/7MfPvhx9CIuyc5eEpphL2M+RHK2eiqxbFvZsfsaju3sqJsx0F2xfElw8SKjuZn+jy/k9823v27g97++Xj79A//xj/CfGWTeCPUpo8j2+1Pfg2vfxwX1NZp2B0Qj9YnTgfgOmZb4+C0RH/occt6f+CvoE5+NSnyxJXWgvpBW05Lfbie/oOhOvly/bAKBg4i0g2CVIubzqjhQ0P23x1gs4+doSalJdZL4yKO+bSKyw2ziMhUmv/ibv/04WLsKTsalqgNZjaiSzBpRLaqT1KLAZq+nqVEFQBoJN+H6R8LAyCjehlIkkAFMbBNL+teeh/31KHWshE6XKQQ+iAsI3D0nz5efF6/u5P//kyyYLSWeaZWfeN1W1QU/3bYP5Hm30IRVBG3i+N1/dNQA1eZtBSJlW6GYAmQ55R99k6EccBMmOUD49Zg0M1L7PuNvvQ/SPiiFQk9aYUkC/zmIv2c3ydd/SiwAmr379JxBI3nzkr/Ziq/4vfqmcpd8W96WvMvv60fUffgYrf1Dv5HF0ytjN7rzD21L2Yq+l5tKDSCpEJ0a5Ep+LPI3bhz8XTewTBDIPuFrGIgvXGCwsL0KCFoOoMoy6bfP7qxuLMpijgMLcFbWqy+W/kDaYgI+7kvlsp28YH/gwe3mBy/Bnq5aQr/4fY/nBta4UQav2ijzVfY7d2tcaJ1uTXKR6G71n4L+4gvA/L//SpaAyafdZsz2IcFMyW/5ecPTpCeeMoDJUxZM2QRu/FjsnJd7qZZt79KT2zB6cDfpebHbxZdSu9hmdxZrJmfiyN3ub8X1+Z3bdE+FT2Hk1VctbixVjkvlWyd8Jr8wlrhKXtD8u3vBfrdxs+8dbFNVJflum9CNlY9Xf/4b342EEisAuF6LB5PPJJ0j4rT7IGXPdrXfVagkoJMSqk48s6wcl3LnQh/1F0/vNGoGxbGgUVuQz95NTahqA42iSFMTxhL6VDEkhdZJALM0QY8ZApzo+z4ZwJow7msma2LZ12bAN6/b134X4Aj8v/22jStaaDPAnraA/aRgV6W1sJ2FchxKUlz2lOQnouC7odMAroUvgedtGnwLB2n7JhULx2bAUWwiTh3gVPwJXFczLLM/gY6lZvBXOhNSN00ePezlXuj3y8/EQBf6IoCVP6hOYQdiQO3jzPWWpW1HX7rBeB/KtnYWcAwIDoyYA5BVniZj4ARDKj9lUpzkrk6zt7q+2UtCdxYzmoz5xXc9cRJVREe6oCZRujq2dYzeRa4XCNh9CiJ/nagrEtuRPH8VhbGbHbq0aOXiarjFZSuWBFWUoDhf++v1hcG1veLUogfFlsoeFSYzScaxhJqlxtyll6+KRT1ig0xBWIqBzUeSaEhPCBlGvzkKwaaFMhCTOYPYozZxPB3EPvOg55tA7DhMKDgdQUxOCWKmunwQBFzXxRA2IXeASKMZth0Sh0YQtuRshS1iQtfKZWi+HzECAVEiHF3lqnFBivUFx5amr9XJ3zjduZr7Qxk7juIYWkhbCkDWidzNEbLBAdFPD1/LTKFg/cd9sG3bIcRd18GmwI2ir1D5983jqRsIOkMKqUkEDMNOeBoKLfnHnQotmmLAkj8X1UyyzsruAqsmWNGJYUWa1eolIHXGvt8l0L4E2i8m8IczbQPjpMhsqnnAEUAGD/hogXar2Xn1uo2tEuqtsVKaBJyEwOAwnomPj6umuEsTE7cCTNMElCzzqvjPDmlYkugQMNt8yE48pBEiM3rrHo2RlIPueMWQq4ocdQBmGmAJQ4CZMkMIYHgszDYL48w7tS70rxIsJPXRVQ+9AnT/DBMJ4z8Hq2Aj1bomb1YrvrrjJvLFg7mrZClJ/MzmEuvSqwv6Sa71GIdZJnwzdLtujvJERZG9Sv6OiDmOGLDqWfCIQmraJhE3qK7WaHukyTk1Md5a9cSuC10nmFUg2xCXXpB8ZC69o4l7xI04NoUHxsMx101hYS7eZG+lfzy8C7fu5qfyqCKYyms+h+EuI86/hSr6khFC0q2OijyPvsyd/7NyxpxH3zkSNAxlWxPqedd8+vzCVjO8s339KopTk6Rs9YaY9J/341nnlmBVXGNfoc04AB0ZsEZQjbiQiV3qtFV+HS9Ovt27coFgL7njXir2uYIf3haHRolAGmXu7e2AMjeSRfQ9gpQwXP07aVRwfbwMPZW1MYVRcYltRSxajlEsUpMJrHr8htsk6Xjs8bH0kwywZlfM5xZ4G3wb8qy7frmbtJ5XXlO1wV+56uc8TTa57A+hZZwdM53UdFfjg0yYUIaSDpTnX07DY82FaiPymJFdOF4RGTIdRET8HPn+djouk3Adg8u05OUzYbTcdJkF46nCzUEMcKwxHjY1ZhiP8ZorqaZmvKzdwjCM9/Ux2h1A7OCcl7WsGIX7rsI4Dh/Oj/vmJOYEu8nKRS29uMp4JmfLeIxnymMazeiq1BYNbnetK1LnUHzljK2fkyL5EisOAsShBYgOYGpq5zUegE15VycC8GRiZRiO2anCyRTkrrPGOfsO5sU9MnQJ8j4fVe6ZVO/KP8zAPfe4E7jYUIn4TQvdxO7WcyNP6v+Pmzi4lJn5Sd/ID3EsqJRwY5v7S36Z8wcoGxGgjCGtFFJmWelOL17kX08D0eYuW3OB6MfwYRfuAwlFd7MAc+AopY2UdH/CdVhazJ4
Sk82+2Hs0D0x+cmNXnL/x4zhJF2wFIlqA2CLCkdbM1gDEac03ZnJYNkdOl+pftZUQYpgDfHyRb30xKRzVxUaOmtpHxc67VBJ47v6+aKL52rKCt40qR3WqMkSO7giIoNo5V67GSGNN+tgI04XdT8/BXpZRwUBawmW992wFR+6Bx3OwBRkXKrStdSwt8i6oBYhzyEWIMJ5WB7d1MWOEgL71LBAwK7UIU0CpQnjSSniCCyNtGsL36Jf98HwnJ3CAh3D943EHHtxI/hf53neDxEiTAZo7qTdmUp7Wmy9+f06bOTepQz5EQJNyOF53Qt23/8Xd7dLqIS/099v/kGLHl5y8MG7H7G8Kdb7lZdpd1RKAU1oCvNmTd9AtncG4t1u60XItilaaSmtKH7qp0GXxKvepf0Fq/YuFdZN00ioEfmzVi+PMrwphmDjL7/7j3k98MVkVZ7UzkinmMkydwyk4oibDy1JyaxbcIpiDAGLpShemuqZllXXgtdEkDqBjcc4o+c/9kjG7cU61EtvOK7E7l6T5SfBGOie3LZgf8Sn8p4Sw0mpZ2G5ctlP7kSVlCi12DrOnNXC5yaHf6kUzEf4oV+o+dqNYcdL2I0drbRCC2fYyE5cZ4rYKC4oAURrOdW8SpCZaEVXDGdlLxttcJFXpP8tNZHa2FmIaUSkBtkFYY6foujrNdqG7RerEziY4LbTubskQW6E1MXT7YwaVbDwqH/Kf+Bm9F/J2YmXOUaFm5xQ2pZdM7DTp0LStpYmLu9+lU39vg2df7cWZD3usaQazIMglUrM5KYXGAhzjgEY2FkUckxurweG8ltVcw81m9Nzox2+F7gwBpPWDODmqs3S1N3DeCwAqQBDXXF9nLu9SP2ez0M/1vAZqO8W0xmqOjVN0YK25kkqZPDwcTMlf74ZBgTIXjlsYIOfQaIKpuRW3kycXY5/lPNyvWZqcOLVKy0l0OReHCgXVANPKFbYXWOuSNGmJ7HPPMvFoVlA3C9JCopGWE8AOktYyqD4WH5H1OiSWnPWo3HFpTG2FxNIkNbWmGnFarpmuHXpYv5stVZiO3BS7m3of1d1PB9wFZz4boUtb+aoBg8bCA7KIpgNzQM1pF7alA4IiMBqXNleivy4YNVhxVJnyM98BCbe3LoUGXfz2du1hbkKyZzsrOCSSxwKvNuWDC/GS9zJsHe1haZ7W4ZDbXFHavVXnEA2Iy062ZTNbXA0TVfvZFkkKFzIiKMlakFw9U19EffDqObVjcX7yQFPc/BJjX9zaSVNr3PwCc3fc/KypFXHx3Eo34gMNU8u63Sg/aeqa2je2NzNCnjOt+kuMPl1rW8KmfZrXqntq4+arOUk30qq8KkwIRQmdckPW07ap3dDSZsokQMfk4lxaws+g5fjSEn7e9FF/8Uy4vfeW8Ml8VD0GiwmbtiU8gqZQwbKzzYBzlunr86XNAvZZgb05OX6Zvj4zOg3gfNNaqA1pT52dauFwWKRtlf54GzC9Kw5mE49cL9Ilu4S/ggdX5lRWf21zULM1FtpgziYuS/lH6jbywz7kERhJSS0ckz3Pp/s43u0TSF6Lf2tvC0GwDre3SawOrCUSr72kT8S1PL6XLlw3XodP4gXBz0T+huJ5f4CdZMEjQzpHVHX3GWGgdS1HdjFZuxplw8AUZ8OjZfMj2C+l+IStGVrHBZxB7rCa7dtjumiHNOSxp0EaR7c0xdrfTdYTHrObiwNtAOv1u4hyB+SpvafNeyqyApa2HifcZ7jYCjCvY4QhViSy9t1sHGhej9in6u2BreZo3SuiBv969KOXCy08pBo0v/ovZl34YJj4mCHamk6rQt31bHfdrIBl0jk9KwwTP+oQaahVZd1JDs1e73/48fr+wtTpQG6rFqPQGnHrYxbTWv1RRADCJQwNQ6BMKTE2KiIcI+yBzaGL15lhWXVGb+R1sLAGQeMKed6tERsI2sTxx1SoIbTUBgfIdoShRsrkRF1ECtFptM8sLlA1Fj6oyQHcR8PuLL3y4WD5OLA/L6qDwszDwQRhopfv1TeVu+Tb8rbkXX5fP8q21v1h2lXLL/htLtIXIw2JHFmAV9JklQlVnQUxp8DSVya8ceUGOSyw5b5ULssmBDZ/J7vrdyo5I/2MYcV+DopF7L8nsY/UiXHIts0DHyeX9fmDLbJ+SlmvxHYxZBDAShnCXAR9vx6X71XQW10Ffc5ssxH0ShAAcgtYR/ZkdYSuyqm6HgFOpXBKSdoeTLY3f42RpXlz55hFmr9Zaa4Z8RiKXRgaEg4nl+bGnv2L93LipsTMKrzZZZmhA7hC9u59iZF5QYedzH3Jlu7XpwcaF1sRsjStsYCIgwHlJUTwcfDjTuunVNzokE4MxA4RvLOshB45COcUfb4KQjIEsCbDpq5+xsYRH8vOcgIRVi2Lr+s7iAodiDYKn36C7dDH2ABW9pZu5sNwW0u/GR8LEqeQcUyFiAOY0yh9jpVxhk/hSI3RTYVD2xRuWfLtzz61dakkmjrnPr3zvVUSIUaR6ppDDEGAsCZ6q6m/Fpq2sAjbS3Ds3bnTEHMUO0TO8qKlsM3N1pP61exRHL2VZqXnHCZT+l6PhZR0+pIeJCsmMEivf0WHMzRpO0UMzTDFa0Do4POGzohYUUe5QQRsWwPPydHRYc7TYmOObGMKNuLAxgpgiF2oP/2LG2wlw53ZitY0utFoyi9WgLVUNgyryjiMAo5UPZuYEn1OUdqQY3LZa0641yDpeq8mHaplCUQg48jsSuTQw2tbghkrmrWiLo+9JXGTedeMv6EK845obTkXrDAoNF6sKS05PR1CATkWKy1rc6yvPTpA8CsB8sbhwNWx19xGx1EfQwtpS4GOAZbm/KzhETFamkE3KfW2AdUNBZ0xhdSORY5arzf6BtIhGWBMuDQrNaWu2q9d74IrA6745Lhq9uEM0Fo16whsnBe4Sv4e7NVU82KrvdIaOgP36RH5Pge8QrVFtZNPc61OlWYIMMO4PEKKjsDDm3GG6Ugn7Fk+g07PJ+pZzhmZSc9ybBzeswTO3lngjGMyu8CZcY7QEjgbPfpxMFJmc2eOkTLjUKMlUjZ1kFXVeyCbY+BskGkHS1LcxZIU996T4t5ne20k6/jUpDg793ifsr82HmRywDLIZcThIC18ByusByvcN/AcmLGaFC9knyHZp+lP/Z4n/pySugPYOV9S7+rS4jrz36pj2mxs7Lhj2QaXyGjdrYlxdsYgEaolB6dHfIoczJNJ2u4dWayPoRqkt9XppSNHqwg0+dwWjE2e54VrGDO1drReXZyPHNr6MazSJmLawCnJv9ZSnd8vkdTGB/enfHM6YZ0+ga/tDbZk89WoKl2vgI6TzSdnJGprj877/ZJx3n02n6xgApVO0Md2VdCyb9KF55bbR3rO6Vhy+3rquF1QcHQOlqyZOQTW0TeX0XpzLJl+M0IZOjHKTNV5S97fe8r7k4NYga0HvU+R+kfgwciQNie+7jOGFSdjj6l4C2RaIWOrbW8xA3kbs9ooNMNGrM7PGg4sSHc1njBP1HfZKrWuFYnK1/56bZKoRS3oeeeJiiMQ2LPIEy
WoORtvSZpZkmZOQLlzoc/SSSp1M+Wex0LUoVLfOWXvKJJnGi4p8O84BV5O0QJ2JQVeV8ImT4EnyOQoX1Lg59Q7CkNm1NhPkdlMkMlvvqTBT9wwCiM5u2QelREELV2iTu/rRFyVNnK8DTq297VjGVbDVJNdU/k6UYfuQEXDqJdNMsqetG8OnbtGZR2fqI48H3nUt03Ic5hN3LmE7NWRXvmBqgfItHFQYI9m8GOTwb9sHJOnm6nY4BDkQz/7h3e5uprlAGeo3UK8jUKpOZSXy5yaL6Hnyyv+Hw==</diagram></mxfile>
2110.09419/main_diagram/main_diagram.pdf ADDED
Binary file (87.2 kB). View file
 
2110.09419/paper_text/intro_method.md ADDED
@@ -0,0 +1,71 @@
1
+ # Introduction
2
+
3
+ Attention mechanisms have become integral parts of machine learning models across a variety of domains. The modern notion of soft-attention was first introduced in [@Bahdanau2015NeuralMT] for machine translation to allow recurrent networks to perform well over long sequences. Since then, attention has taken center stage in several models that forego recurrent networks altogether (i.e. Transformers [@DBLP:journals/corr/VaswaniSPUJGKP17]), and has been leveraged in a wide variety of applications, like natural language [@Bahdanau2015NeuralMT; @DBLP:journals/corr/VaswaniSPUJGKP17; @devlin2018bert], computer vision [@dosovitskiy2020image] and physical reasoning [@ding2020object; @locatello2020object].
4
+
5
+ At the core of this success is a simple idea: enable task-driven *flexible* connections between elements of a sequence to extract and merge information. This process is implemented by attention (or alignment) functions which, in their simplest form, take a reference or query entity and "pick" (i.e. attend to) the most relevant input entities in a set of other entities. Modern attention systems refine this key principle in two meaningful ways. First, they utilize *key-value attention*, where the attention function takes "queries" from the reference entity, matches them to "keys" attached to input entities, and returns "values" representing a transformation of the selected entities. Second, they allow multiple attention mechanisms to run in parallel, often called *attention heads*, allowing the model to attend to multiple entities jointly, leading to increased expressivity. Despite these advances, Transformer-like architectures still struggle on certain tasks [@fan2020addressing; @nogueira2021investigating], including context-sensitive associations and out-of-distribution (OoD) generalization [@pmlr-v80-lake18a; @DBLP:journals/corr/abs-1802-06467]. They are still far from human-level performance on physical reasoning and object-centric tasks [@webb2020emergent]. In an object-oriented world where entities have several attributes, current multi-head attention mechanisms learn rigid search-retrieval associations which lead to various limitations, as illustrated in and .
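+
+ To make this coupling concrete, a minimal PyTorch sketch of standard multi-head attention follows (biases, masking, and dropout omitted); note how head $i$'s search matrices $W_q^i, W_k^i$ are rigidly paired with its own retrieval matrix $W_v^i$:
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def multi_head_attention(X, W_q, W_k, W_v, W_o):
+     """X: (N, d_model); W_q, W_k, W_v: (h, d_model, d_head); W_o: (h*d_head, d_model).
+     Head i's search (W_q[i], W_k[i]) is bound to its own retrieval W_v[i]."""
+     Q, K, V = X @ W_q, X @ W_k, X @ W_v          # each (h, N, d_head)
+     A = F.softmax(Q @ K.transpose(-1, -2) / K.shape[-1] ** 0.5, dim=-1)
+     heads = A @ V                                # (h, N, d_head)
+     return heads.transpose(0, 1).reshape(X.shape[0], -1) @ W_o
+ ```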
6
+
7
+ Addressing these shortcomings, there are several recent attention-enabled systems developed to allow better decomposition and re-composition of knowledge [@goyal2019recurrent; @goyal2021neural; @goyal2021coordination], some of which we discuss in . However, most of these efforts hinge on purpose-built architectural components that remain niche and often are difficult to implement at scale. To complement these efforts and build on the proven efficacy of Transformers, our goal is to develop minimal modifications to key-value attention to enable flexible decomposition of computations found in attention heads, and eliminate some parameter redundancy. Crucially, we aim for a mechanism that is easily implemented and plug-and-play for existing Transformers (and all the models based on them).
8
+
9
+ We propose *Compositional Attention*, where the search and retrieval operations can be flexibly composed: the key-query search mechanism is no longer bound to a fixed value retrieval matrix; instead, the retrieval is dynamically selected from a shared pool of value matrices accessible by several *compositional attention heads*. This results in increased flexibility and improved performance.
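+
+ The exact parameterization is developed later in the paper; as one plausible reading of the mechanism, the sketch below computes all search-retrieval candidates and lets each search soft-select its retrieval (the per-search selection queries `W_s` and the use of candidates as selection keys are assumptions of this sketch; the output projection is omitted):
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def compositional_attention(X, W_q, W_k, W_v, W_s):
+     """X: (N, d_model); W_q, W_k: (S, d_model, d_head); W_v: (R, d_model, d_head);
+     W_s: (S, d_head) selection queries (an assumption of this sketch)."""
+     Q, K = X @ W_q, X @ W_k                      # (S, N, d_head)
+     A = F.softmax(Q @ K.transpose(-1, -2) / K.shape[-1] ** 0.5, dim=-1)  # searches
+     V = X @ W_v                                  # (R, N, d_head) retrievals
+     cand = A.unsqueeze(1) @ V.unsqueeze(0)       # all S*R pairings: (S, R, N, d_head)
+     scores = torch.einsum('srnd,sd->srn', cand, W_s)  # second soft-attention
+     w = F.softmax(scores, dim=1)                 # each search picks its retrieval
+     out = (w.unsqueeze(-1) * cand).sum(dim=1)    # (S, N, d_head)
+     return out.transpose(0, 1).reshape(X.shape[0], -1)
+ ```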
10
+
11
+ <figure id="fig:main" data-latex-placement="t">
12
+ <embed src="Figures/main-fig-v2.pdf" />
13
+ <figcaption><strong>Motivation behind Compositional Attention.</strong> In a visual question answering setting, we see that the “rigid” mapping between search (query-key) and retrieval (value) in multi-head attention leads to redundant parameters being learned (middle row; (c)). In contrast, when the search and retrieval mechanisms are disentangled and their pairing is set dynamically, they can be composed efficiently without learning redundant parameters (bottom row; (c)). For details, refer to </figcaption>
14
+ </figure>
15
+
16
+ **Contributions Summary.** **(a)** We formally describe the shortcomings of rigid search-and-retrieval coupling in standard multi-head attention and empirically analyze them through experiments on an illustrative synthetic task ( and ). **(b)** We present *Compositional Attention* to disentangle search and retrieval, and validate its advantages with a number of experiments ( and ). **(c)** Through a series of analyses, we demonstrate how our proposed attention mechanism decomposes relational task structure as expected, and facilitates OoD generalization (). **(d)** We discuss the computational complexity of our proposed method, which can be scaled in either of the components (search and/or retrieval) independently, and is an easy drop-in replacement for multi-head attention in standard Transformer-like architectures ().
17
+
18
+ # Method
19
+
20
+ In this section, we provide additional details about the general motivation, architecture setup and our argument for using parameter sharing across layers. We further provide details about computational complexity of the proposed model and some ablations that we consider.
21
+
22
+ <figure id="fig:toy_task_perf" data-latex-placement="t">
23
+ <embed src="Activation_Plots/Toy-Task/Perf/perf-v2.pdf" />
24
+ <figcaption><strong>Performance on Contextual Retrieval Task.</strong> We compare our proposed model against standard Multi-Head attention (lower loss is better) on various setups of the task. Our proposed model outperforms the baseline across various model capacities (low and high) and number of heads.</figcaption>
25
+ </figure>
26
+
27
+ The standard transformer model [@DBLP:journals/corr/VaswaniSPUJGKP17] has a number of layers, where each layer is composed of two components: the multi-head attention, which is followed by an MLP (multi-layer perceptron) with a single hidden layer. There are residual connections at the end of the multi-head attention step as well as of the MLP.
28
+
29
+ In this work, we follow [@DBLP:journals/corr/abs-1807-03819] and consider models that share weights across layers. For ease of experimentation, we do not consider adaptive stopping criteria. We make this choice because we want reusable pieces of computation, and Universal Transformers are one step towards that goal.
30
+
31
+ Our view of transformer models is that different heads perform parallel information retrieval with not only different kinds of searches but also different kinds of retrievals. Information from these parallel retrievals is then jointly processed through a linear layer, followed by another MLP. There are residual connections after the linear layer and the MLP.
32
+
33
+ For our proposed Compositional variants, we replace Multi-Head Attention in the models with Compositional Attention while keeping all other details the same.
34
+
35
+ <figure id="fig:toy_task_perf_ood" data-latex-placement="t">
36
+ <embed src="Activation_Plots/Toy-Task/Perf/perf-ood-v2.pdf" />
37
+ <figcaption><strong>Performance on OoD Contextual Retrieval Task.</strong> We showcase that our proposed mechanism outperforms standard Multi-Head attention (lower is better) on the out-of-distribution (OoD) variant of the various setups, across various model capacities (low and high) and numbers of heads.</figcaption>
38
+ </figure>
39
+
40
+ A number of works demonstrate that Transformers with weight sharing are competitive with standard transformer models [@DBLP:journals/corr/abs-1807-03819; @bai2019deep]. We also believe that reusing computations puts more pressure on the system to learn meaningful and multi-purpose parameters (e.g., it is easier to learn a redundant head if it is used only once vs. if it is repeatedly used). One might be tempted to think that increasing the number of layers or removing weight sharing might compensate for the flexibility provided by our proposed system. However, we argue otherwise.
41
+
42
+ Let us assume we have a Transformer model without parameter sharing which has $l$ layers and $h$ heads. Then, the number of unique search-retrieval pairings that can be computed by the model is $lh$ ($h$ with parameter sharing). Contrasting this with compositional attention, we see that the number of unique search-retrieval pairings is actually $lsr$ ($sr$ with parameter sharing), where $s$ is the number of searches and $r$ the number of retrievals. So, if we use a similar number of layers, compositional attention still allows for more combinatorial possibilities to be learnt. Viewed another way, at scale, the proposed mechanism has the potential to reduce the number of layers needed for tasks calling for flexible searches and retrievals.
43
+
44
+ Another important point is that even if we have more layers (with or without parameter sharing), multi-head attention can still only learn a rigid combination between search and retrieval. So, if the task requires dynamic choice from all possible pairings between search and retrieval, the model will have to learn each pairing in separate head combinations, whether it be in the same or future layers. This is because adding more layers does not change the way searches and retrievals are combined, which is what we focus on here.
45
+
46
+ <figure id="fig:toy_task_conv" data-latex-placement="t">
47
+ <img src="Activation_Plots/Toy-Task/CRT.png" />
48
+ <figcaption><strong>Convergence on Contextual Retrieval Task.</strong> We see that the proposed mechanism converges faster and works well even in low data regime (low iterations).</figcaption>
49
+ </figure>
50
+
51
+ []{#appendix:comput_complex label="appendix:comput_complex"} **Number of Parameters.** We keep the parameter counts within 5% of each other for the compared models, and the same parameter count of 140M parameters for the language modelling experiment. We also stress that our proposed models with fewer retrievals are even more tightly matched, often with fewer parameters than the baseline, and still outperform it on a number of tasks.
52
+
53
+ **Training Time.** While Compositional Attention increases the complexity of the model, we note that the training time of the proposed models is generally within 10% of the baseline's, and hence the added complexity does not impede the model much.
54
+
55
+ **FLOPs.** We estimate the FLOPs of the proposed model for the Equilateral Triangle Detection task using an off-the-shelf library [^3] and see that they are within 10% of each other and of the baseline. In particular, we also see that for fewer retrievals, the FLOPs are either the same as or lower than the baseline's.
56
+
57
+ **Parallel Computations.** Transformers allow for efficient implementation using GPUs due to parallel computations for each word in the sentence (or each object in the scene). Further, they allow for parallel computation of each head for each word. Correspondingly, in our proposed model, we still do parallel computations for each word in the sentence, and compute the output of different searches in parallel. The only additional complexity is another soft-attention for choice of retrieval for each search. This is also done in parallel for each search and hence we retain all the major efficiencies that Multi-Head attention enjoys on GPUs.
58
+
59
+ **Amenable to Different Variations.** We note that many of the current advances in standard multi-head attention, e.g., sparse attention matrices, can be incorporated into the proposed model too. We can also have sparsity on the retrieval end, where we restrict certain searches to pick from a smaller set of retrievals. We believe that these analyses are important future work but out of scope for this paper.
60
+
61
+ **Complexity vs Combinatorial Advantages.** While we sometimes have more complexity than multi-head attention, this small increase in complexity is often offset by the combinatorial advantage that we gain. In particular, for $h$ search and retrievals, multi-head attention can only compute $h$ possible search-retrieval pairings while the proposed model can compute $h^2$ possible pairings.
62
+
63
+ **Controls.** In all our experiments, we control for the number of layers, searches and parameters between the baseline and the proposed model.
64
+
65
+ Suppose our systems have $h$ heads for multi-head attention, and $s$ searches and $r$ retrievals for compositional attention. Let us assume the input to the system has $N$ tokens. Then, we can see that the number of parameters in multi-head attention is proportional to $3h$, while in compositional attention it is proportional to $(2s + r)$.
66
+
67
+ Further, focusing on the highest computational cost of the attention procedure (the terms associated with the coefficients of $N^2$, ignoring terms linear in $N$), we see that the coefficient of the quadratic complexity is proportional to $2h$ in multi-head attention and to $s(1+r)$ in compositional attention.
68
+
69
+ This shows that, depending on $r$, the proposed model can have fewer parameters, but its time complexity is strictly higher. This is because we allow for combinatorially more search-retrieval pairings, and this cannot be obtained free of cost (no free lunch).
70
+
71
+ Importantly, if a task requires $h$ searches $(s=h)$, $h$ retrievals $(r=h)$, and a dynamic choice of any search-retrieval pair out of all $h^2$ possibilities, then multi-head attention would require $h^2$ heads, leading to $3h^2$ parameters and $2h^2$ computational cost to do this task well, while the proposed model would only require $3h$ parameters and $h^2 + h$ computational cost, which is significantly smaller and also more computationally efficient when compared to multi-head attention. We use this exact motivation as best we could in the Contextual Retrieval Task to showcase the fundamental differences and the parameter efficiency.
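+
+ These proportionality claims are easy to sanity-check numerically; a small sketch using the constants from the preceding paragraphs (relative units, not exact parameter counts):
+
+ ```python
+ def mha_cost(h):       # multi-head attention: ~3h parameters, ~2h quadratic cost
+     return 3 * h, 2 * h
+
+ def comp_cost(s, r):   # compositional: ~(2s + r) parameters, ~s(1 + r) quadratic cost
+     return 2 * s + r, s * (1 + r)
+
+ h = 8
+ print(mha_cost(h * h))   # h^2 rigid heads to cover all pairings -> (192, 128)
+ print(comp_cost(h, h))   # s = r = h with dynamic pairing        -> (24, 72)
+ ```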
2110.14880/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2021-09-22T17:07:54.656Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.82 Safari/537.36" etag="fXW1wH1Ib_ff4lZQxayy" version="15.2.9" type="google"><diagram id="9njYctX7kwDwtwLbNcq0" name="Page-1">7L1X26NIliD8a/py5wEECF3ivffcfA8ChBfe/voNlJlVmVnZMz2z1bPVs58ypRfCnjj+RATB3250u/Nj3Bdql2bN3xAo3f92Y/6GIDCKIH+7/kPp8SXlfrt/ScjHMv1a6PcEuzyzr4nQ19SlTLPph4Jz1zVz2f+YmHTvd5bMP6TF49htPxZ7dc2PvfZx/rVH6PcEO4mb7A/F/DKdiy+pBPZdaSEr8+JbzzD0NaeNvxX+mjAVcdpt3yXd2L/d6LHr5i9X7U5nzYW8b3j5Uo/7O7m/ATZm7/kfqXBvODfBcP7/8x/9q73t3dC8/tfXVta4Wb4O+Cuw8/ENAwDu/ros2w+qqDUb5xIgSImfWWN0UzmX3RvkP7t57lpQoLkyqDip87Fb3indNd34aer2+ny+a4NsyvyqO3c9SI2n/gsJX+WeAaipT5fkt1ToWwq4TuM5/tuN/HKLcP07/xtClx6lWxsk83lHgo9muwXr5uBKZ8EPU9CkeqVLT1sSr4vYo1SPDS6Uff4TV6kc1ZkYmfsi9Ln1CXBB0XknL+Nwp7dJEm22arElvi/3Nb2lN+Sxxud01+26l0Ex+tlis+7UaC/bLbooeS3R5qocoPGDWOW2H6J7dlvUStw9z/PTdwS9hJ24aWfKmJBqsRYbcPtLOYh5gtXJvwCS64Y1PQt967cIh8fYpCw+dCNKJWuJ2gB85O4a69WDPFawRxJ9MS134sGlL/wZPNvRVQ9E5clSBP8ETPdSRC3euNauHaEhLaiY9LnLxI9jg/GnVihZjM39c8ZylTLwoO/fj8JTVIdu7l6jh/T7tkbOjuJV1vmPBsEba+NPtqVvuN/Hmp+btHt0B60/mCKZUZTJ3VYTh4DKQRN1vOgDNi7rjhnG0GK4E00CvbZZJAwa7syj7JipZ0QpfEMMP0oe0eSBEb6xCg6aZNTdSAe3HdaRMI7f12mWAVoNHIxh2h2+tuTleCgqUeapOm3zTIbZId3jm1g/5ohrOZiYFBHePLogc0usbZGlBfp5RIQvIt5oxVhhrdgaDcQ2ZWQLw28uROnnO6ZciqPhnZLNJY/NuhYDV+3omBtp2i1SEikAXyxiZLMQ47N1QeNFQ9rmnJh2rXR8LUk0BzMy2RbxljdiYGqdbbNjybuFRsKgsrl0kcnWSsCyPY1TPW2bRWoeudT5kyjZHMTEoFkZLRrRMbUQVFYG3qWk/16YLkGjrYTRiqblhkh5DVCBFXGbxC0B28+Wy22OLSmy2HqiCKTUc11oWd6g1tZu4/v+yFanUVBf37RFQ7LOYBg8u/eYaL4qQNWCexzBy74Vu8ZrZzgpz25vHGI7H9rdS7ESRhBHqc9qqGOqLY/1XG93U229RRB0ggnTJ1d5c9toYVG7zMmM89A6Qr8uR14qhHR/KsvEBUqlcLs1zzoMu5F3T5fWZx/vOgb9KxJ+n4fbQqQP4cnrFDYa5x1XJYwIpQmhi4ff4K/TyTcnL/j2glgKMMKZDhW+J4+BcFJeSlGQzA7Qmh8B4TL3w79dekWDQwVFiekJv41eJWCgcqh1ZUTwB8BiMKTwmIb7CW5NG/zchi14TGt7C1fsiRYveQ2hBKOw7ob3y3sqFqC6b0oCBLZOcCEd5g50ko/DGzv8Rwda5YCN5eA77l3qAYdfA1e01oIEsHvrcO7V4SEQI26U2EvhWNz7TsJPznhnpwSSu+LSjSTbcE5tL2ZL00DvfjUTQG1n+9+1P/BvVg24A1nXZvMINB/0tcLthn2p8tUTQAnk3x7El6TtO9P61VwW31nVb2nxV2Oe/9b47/YOXHw1ef8J84f8zzB/VZ9d9g9cPQBfcihJAimPJCsnKdIEtDTBXxJAwCEbDUxhzVM76bAU+G6Ex+6uw1luwFNuwJmEx1kJ+Lo+V7ghSAtlOs95qmsEy/VEhy0kux6Vkli4nTh1hpi1I5+VcvqYScp0OaDwOaacInavR/mY/s++XwwzSVKseFlx1FQpMSdp1mQZ1hRA1k6SCWBWkgTWijRZ8vsPteUUZYYsmRufkpQH8ilyY8HQXY0hE+6rRwCwIzIgTWAd1hZ9bqdqgIWHKLKH2Gic5NaRvk+jYXFUUquVd+SRXHR3p+CssDIbn2kYjbESZbM4rYxihwFqgOOqbvoKofszZN8+l3IwSY4CAP3qc4Edk+TGAAqCMasWyRBk+1AzizkUfhPVmjDNd4nCleKyB9JQ9ZvJCp3em1lyFD+B+CRRoR09pqfdQp2aE5HJhedNaZRtEViuBXJFXR7AhAVoKbMCuYpncl99xitdHuTNJRJnTanClnrw6mvVdV7C+ErbEYYdpIUFAsapju1DnhypF39Nu8KjRFc9EYsnZrm4FcpWDzgenp7aZ/egm4+Ih97oyGOagLyZoPQGly08mIBnY4ciFE0yWeevgZNKW3sYnj2QG/9m4e41pYwTvkKIS+SSDkdmxYe2podexPzHgw2slUTP/G4z1XTn7H2z0RR5xUwiZ89HX5RhgKqNoCDGfeP1p20O5fgQpduUwXdhCbGeirFeCTTxEEU475S5RqWqaLtyea/+A1oi2xtz6RzKFiZtohISJ9JNTD/Oszljl08ytWP2JkQa9REdZkSZgpBM0X3V2wyjanDTGuIU4Xn7G9UAlQY6ekK8/eIndMQOpB5Fns9iBnC+IjOYhOZTFOdTTbTx3qwy1ayOEtutDzXqHJ3m9ePELtJovan1jpGArtTH7W1mNqLYVY9KPRbEaqLQD8ACfCUylXSQ6OG5ThFtZTvii9+3RmTojB+ipa6+2PAWUuhhZ5pjwFa6co3muBVTwA+sfjVjxBXdrG9rcqkSxSVFWiyBW5kDx1KdEtP06zxH88o1rTrJJL9aCjR69Xx5cfGyD/KTqtsgupiebbZLT+U0M8eZK4/xncerQkXyWZZtCzBeRYmCdQq+yS/wnOa6hcZJ5nBUfDP8pD5vWZpQsUeklVXnztpuaIe+30ICLAxVMiiMDpda5A/hTZ1xKyYj2h0fnpL15DWZdvReHMPluieFRkjqo1o/HNr9sulY6dsvC/A+R90gVHk9yfxJtn4nfSEWo2Rf8N/LUXy2+fUz5Unka71v6mqnjvJeTp1xCeIUPW2vflPhAb818voBVLroRQqye96k9xJUcwmLcQx8q8y0douxk8uBEElAKZ0k+anORSN0qunCGYw2+2O+vRHDWRQBroKF15UwxwNFjZ1sF6DGDDIuizykfJz8klI2+qhoCOVQLCqNisVwY7eGU9wNutPPNUB9y28CrS2YnT
2k6WkSJIvqCXbOLIvGt1e4gHiEYi3SAtr2t6soRgGRUTberEGXGxAIAYD57K2EMnb2KG1SahLYe+JLKOZXtRElrU7SZcFyyBB34aMS9oVB1ljQEiifE3jbmdxqae5SRACxpgdfdGKkM2jPPgjnRDB6IpO2PZWE6mDDhYd0zNMXf8I6Hvz0s/zRbTw0ijAOV4A9mjq+pODly2LJYProA+03vRt5n2VqlqN7/cE9gVPXz/iAzYCeocY98ioFUJw5+RKBv0M9VDScKUSLrYdyHpRPnTf50pX0Manwyz9qFE8t2VqgM9c2nrw8WGpwf4GyL1fUXRFm7mG4pjOBshojHmZMqEBR6ZQhJqrReycqmdgY5PibZJ+lag42FwioEFsTfl4hJHpb/PcOKqdw0xpIp61gOPhZt9mE/T6is6oW/9IDfpJZ1EOcwkuZiMAIRPemLsJLeMjrR30cABkIRb2Fku7HbehOJoAnixd8vPQWfjPSPN35CxdNICGX0JOavL5BiMk5nKeOrbZXA8kf7Gs5NZ6UME3qBGhOj06sf4GBr1fCMC1FagCv90ShqeKZ7LHFRgcPOKcBlg+hFxdCGomAFokjj0EAmUTSGatjIqKpNOmiEdl1qGe6N5q28b5MbOFekguUbf5F497P/sIEULc5+E4oYfpa7qe5GW2+V0TmS+J17LKDEnxvqpBlQhZQJL/8p0vOKfAl0gnxKHezXYII66rhmfa95zlhUaRosCZ3V2ccyLfL+c/tOcfL+V4yvo2b/c5H3a5Yh52/OIOraqBdEn1cuW7T9VXwis88Q0ujy+053MXBxs4hGJVbtORJeM1kQBRC6TfL9DzLF7RtfsfXAD2XmlOnBsM0zOBAUP+8OBEf9y3lqKDnQm8GRUE2BwdVjsv0lxIkuQxEIoDWcMvnHiCRyPAvtawimze1mI6hKMvmJSK24twNE8pT7bRVP2cYV5eoXVa6cQXtMs7TGtHBg/oqJowkIMk3Vdl3krw3SjXmuHOFGk+8tmnSBdTg9IXuDQZbB0t85sPPuRnB2WF6O19uGeBbQLtXKGT4HsVv5mxUokyvZV3eS3zox0BtB3/J+9TaRQg4tYdKwUV/J4hlFMq2ym9pH3m8CRrQG+ZWMbQdthYiohuDKM5M0vjdSDFIQF1NEIBJkKW8KquYn0MoJMlIeRBAVJyryAlTJPhquXOxA4lQET96+8j7XQR0y/MA/EHVPtdQxLu++MnORAVkiuCrNGO2MtNStinlelIkQ5r0hIcuHRLVTKXRVqtjHJIZH7nXpN8uFGgXSkCwHlHZI7woj0A1sXsP6rrPXguEfu68+LJ6ZrD6wvs+G7W8dqJyGxkUh+7L5L1Mz4eA4X088O18++8ULkec5q870OTcb9h1PyAlDVUjqEZouqCCWn5wtdjc5ytarD/N1Kbvoubt9CPBsZdeWlYH19knw7nqsxe8El1iKlcJNc14elxjeDhhd+yHMgJCPN10AyEjjd9CUtm7/KG9PQJiRymMAstWdSWYJQpIKrU4lVcslEZojGLPWqojdPdEFJCz1UqkUkN2AxK2o/vOCuXmJazpzxzula+dag0NBpwc9L7UPs3oYXe4PNQlDXerPTnAdzjxxoLcx0PedR3iX+6gl8A5zOf4Cd1Z47Kr6oXzyE05JnuHlySCVoVTrrphsEXFbx/6rq7Q14Y0PC0jGuIz51tDa9SgXg2zU8JyMUzUxRZD6puXNulUEjX3Lm8aMMvQw9DtgKNLCgT4CT8F4LXlH2eAYQzt4w5EwA/LQYBDf7jqCyc9LHjb1lcv82PdrwkxnSihAWHcRxPNdciqRp0U/er5utsZ6byC0nQY4Oh74fgoEPyNLQIcBpm2PnqDlNZxl6XnlrEjkFssbmqrih8jX4wlnW0xj9YV7gpp6vSOZQwDtFaXmrGKKA+0+M5w2L7tx74JpUvQZ6ielPCw3pMXKYPOQ1xzegwWRhKl8HCH37own9NqZErNn233CQY0MJveg78obtOuFU0U/AirC/nhiYiKnhuPXSiIvV3vHHeOXyrYXyvgTP7M3iZSuRnWWsd2cH0C2bRx2aFSDag+dkaS0gWRzu/CQ7iiQAr91rh0+Y4dM7DsC+azPXsbLpBlCqsiiDfzy8HrAARPPT++lrCw7b6+ysskjy976m7OWjHZoqt5GcNnW2zknUOaIdGyUUtQfbiCbMy55rQo7/qlNUoW6DEvbG9WJGct45sDzxlckVi1Ex66YboDnFkU9tai2oJB0Ye3S1xQVl1xDzQgGntdUxu6j7GVp9mIe3KVfbrZ6EtPbIz3uuaarDUTtkUbvGdcjYKv8dKUFf0jAXkydp9ShVpfWJjYjEcICjnEePd0IChrCfwRkTHciVSGvkWGfbWjQnD36a71sa/LflMMEakGx2FZS4Lt8JuxqS1Wc91MFjm3r7nVIuKiOcG7pnyJ0LZWW9ThvvJWEwaxyO0u3D15XR/+q7oJF8RJKlG3EGV9r32qvSFeqnmnNhEE0A/efdcicC1ZDRT03uhjh42bYPsEKgaTq3WeE3tTAjm3Yo/gzcKVA+L2Zwm9SZR8BwP8rC/XXCDL46JB0UjFyRqT87AV6ZJw3m0F9dMf4ELuq1KnOgMWpxrzI/vizemTzwB43iJIysziUV7GgxcYv5igK/rAJYE8QlzxNeXGatuuC6wWngGIcAETcs/w6qm7puvsS5FTsISyiFMrvw3Wil73LeuGuO1u7kNiSAwaAYX3KbQ1RiBxPZtTV7k/BUEwMIWAhZkwGLycebqt+mVh90PP0DXkK3LeeH4lSDjPsRIPH8toYB+7kY0hBkKPIUvoexxZ8ENx1xC7hwFuxTcGKDP75lXDDg+x9YLNjPpg2bThWT6ovbGka4LQo34nWHYcyn3U34PYvc1zT5OEL+4FC5zeKtcUK7Gpakfjmu3CURbzIcpOYvaLM39Q5U2gyivumkfZlvunwbCIy84xenDA6ph0yoaMcP1TqBhoICyembeFM1yY4SPGT8Q1P5xcQvUWxbe2NWlxiy7avrLdANpr0u/y635H5M2H+veo2/Q6PotrXmg73h96UI9SW9hxZb6V++L+mV4CopjAa94uOgineex4GKbkyuRBJFAbL8JhJr20w7xnkI+/CJKxh/AyRCfS1ohz0TEZhHu9Nkh5+zBb73J9mjvXVAtMAfePHmaiu2feFygUmL244ZV7aer7lcN3VVuWUd9UJrEqb27h64TPUSYoRox5GGhwVDMW3NHVteRIyUugvXQgraiZ8DhqL4QK1CG3rmGx/ISEGThH60p7iJA2uqFkO5VIwYf3QYXbbfLy6REdH4+Gi67p6Mf5eFpQPbsRKvE7yZ1+a+qQbdJWz5BoiKIhFt6bUXlTCNEZ72sGuqu9/Ak7VQh3ydt/EacX4NiWriCQgNZBYrmBeJjOF6/ZNCXYDoWlzbHBgbZFRalb5nmD1r44heMRcY5tYt2VN+B8qtNHptL39A3bF4ZZSNDsrOWuILJfjM1eMOw9QIK+/E5R8Qd0jNQVcoOAT74iRpCz3aevWe+vSDEIEjCPT3/mYkzvnPUrq
F6LvGk4x1lxZD8fJO/qCV+xOVsBsATjzvGP+gGZyV3ATW9/vmCo8Y40bK7Vp0eLufh0cdstxw4DyvNlKYEUBF+loANScF9z4yXYbkM5LgQrKkYAEeSN0Yqtns0ftWkm9MKt10KDP5oYlOXAOF0RRxrETeBYMOuIO7BpPp6AmAeAJQDaMK7Lvi+FNwC/HZPHHfiSQgA0LSp5mcz0Sl4k76eQbRcTchJ/k5D9NgnN6zBTHP/UuyYyvlRkXh+XJG88tNPrheaOdErvFBYQ9QPTEf2RE5cltpkoaIMBmjvXWGK+bUIAGIi+4YmkWgmmel2StsAIpJcyc5/VHcYej0S+L4rViZbhqhV+66leEQqkdKoOrbycKdo1SBpi+UiaJuez8tNcluetjfA81tuedWVgB2EwuPDbDZY4fjeZJT7OwHo8xe1VlLTAhgLLJbeXctnz96ULXhxeSq9q8xZ05tdcfeiHMYaJHwNDskg8r5MwXpRaTF94euHTZUMWeyBcoBRvH8v1S4RSP9zcxgs/o0Gww8pRUJ3+hNDn368KGcxlY+x6uBuDHMtNezFC1MWCgUN8HpuSEdZvUSmfPwhhcNvyZZrtqsfk0qufHUpcjoFZM2+SzUM0ZNiwVtAl2UGqAwITPIBZ4BFnGwhj+TiKi/bCMEW+PYlBHTOBJTz0m6Vd+Xxa9L5q87pmx3aNccPcQWQriok1HDA6pqlbG4VGne67LxJuM1Eahru5SR0NhLcsdb5XBrDCLgI/NiHzhviOK0Xlg6b9Xn/w1GzB+pUv18MMvmfR5w8F/4DQ7zj0R9wi3hxgkNsYhiDYHn+FxdqdxwPfvHGnDYW1Lm0wYboCYhsFK+jXnEnLIXMXGMjKcIqM69IZIktmIi6HMFZ5J8P4LKoh4HwHBCDn9OoLKk1Zi3KPRwfMroltQGJ9bhl4rfJfR3oBeupeNiIOggOTRuFzWRu5lD1Md367sGYA53+bFafTqgTeoNI0bZSNqybP29s1VdvJavVqIiPzgCLkGivVGHCr833jGpgbOKb9nOgozrvjAfLlI9Hc1MNko+8UYpSEx83L7AqPGSh4VPT+Jh4MbrnGTRLmu/W8NOIGDXj2kC9fsxhxEoINQeuXOkSXOgYhBR++y8R2muE2tkpdIY9uhq45ptij4uBaO7jmlZ3Y5bwn5iDE+CCCQWzUZjPvzoQ4Z5bI/P27HDiFG+zBmZfWAIwFtceaMDxcIQtDMOMOWEs0EPFaYNzIrxE3zDsjGMv9GIhGgCDgElIe0EUec11wEMS22DVhfiHq/R1mnBuotMHqsErWQ6OgFF8PLSY/vnLxaf9jrk4s+V2QXol+v43GMOpDHNNycjgaYwZTFUPhQkkSUyZ56gviQi2J8SXSTxW8tKh4/WipCeubUY+fdp47Q52/0q+S8VyHZ2IkTXqehjfaqdtU977T4/qhKl5gBoXbtDuseo4gAx02IguwNvGmsXleA+20ZvlUvEf5mrTq9t+G/6HrgX+onhFfRyktH7dm4O1Kgn9G1B8wJ7wSkQF6Kge2MLlQMCZD6l9ezHP7mDiXQG6vKKpwEK9ynKtnV4hxB7ZMRGboDTy+8m4DpUkULvObRFC/ScT7Ny/ukHH8h1uyVB64XDbtq7MsA3hzVTcBx3TSA2qR0O2yeNQ12Ts7jm2tisTrVXLTxnaJ0eeWhgEm0vcWEx9rrKQfj74JGGV5/uyG9c5nHQB+S58Vj8t8GL+ZD26cle84c48voOT4/RIJNdzPm37EcJ+gRNZ3cHP5bjGAGSmbMu1ANFxeMANnupp0EVcmphguC0ORQXTThRkzX3rzsMRF/fE+n4MvM9HASZVpSh9sY/vqEXWJuz/c+v5NAbu6SIvrIGtnSQI4QKjNmLRtJzmpQsJRPrw0ggiJBLq5VKZC3O90ojs2uZ9CFTQmh4/yvcsxxAChmLx6UZB+03AsB0MWyzkK8moGSE+EAVFqD4bvxgHHgs5AvFK7lI8syMTT1YQ0TxZeGoUTJJEIw8M2YoFtbjxVgeCP8iR3uaW3vaz71oUjRpIRkZqSOsuD4pIPeT8DlswjMsdChvHKvKY/DtLlEBhRQtyR3HE3BrFDwPAidaqZ1lP62GzyyPb6qq2B1l0uIDZkEuQyJSMGuBEIuLTHN2oRY4gl375tUzVphrvJ3bQjOL5pNmJAo/RMav321QUhSWigoeW3id/PLHAnysPtmjdXmtdaPix2UYI4+DHB5KIDo7dOVJTiDbiiHXLsfiMyMjPh1mAhqoNBGDzyvbZsq6e9Lb4dxMDeyOakTwvH+EDgzkQIqdO2nZamnXc10PF4KGQ+3LXBt2cqFThJ4gk0nOoQWigKLYZN34aevnzLy55T1uw5tgsL17XqkHPTNBB5HyNYD/fRNjUxLu1kSv2Vtchcme5v5nsfgb9SjOYHpWX47pcZ/q9Tmyyd/sCA8hXCDs53LGpyoTBkI1U8rcNrS8EB7FAxpZYkT1IHjqlt21ngHfJ63JE7TZT2cUrXCrF6QFABlRtL3QWUjB06tXaSVJyD8bVM9C3U9BXKg4hGUzcjEmktoxRnN+JM2k7K0yR05u4tmuOSklh1whQlp73p4Zo60x7XXi8duO2RTLBp2uCIPueV78+x0+uYmpDMCGMyy1LZcCy4qKcV0LrHCPveH0x4nX4z4UnZ0+sT1eAVD4Zb2TTZ4Z3BlB3aHVliX+OfFiAjze3YRM8EPRjiCSKs0fUPLffW3LMw/vmZZcrl2yZ3RKXdWX62nsojwH9LuILrS8f9Xt+7/R5pavISCtT3+SmWvLcYxAoEe+Kxr+vc1GePuzAJ0W5UMRkw/p1D4SSX6YPPOmmij8wEv/TQ/BSBorOdvrjNGm/+6joWlEBQslXJTrhJnd95qRJIgKky8d9uWqI1wd6voLWYr02QzjW9GqfcwmzOfPlwPPx6GG62/3ifclQP+dg1Rh1a0eflkkEP68uov0+BXL69lCTzm5KcL8QNaf/akPGzGpt1j/LuY/e8ipmn3/e9rMeyLozVe22qugKqKNvI2ORrO2BclquTD0O7alMTpVDTNPlh4+0BfY0jmbJ9DTWe3oLhM12YMmNSpQqJxn4NoSGx8LO55WIZdrRUluQP2pPDEu9+J/vb4XnqXtmPsw83Pg/rB3AnqVT6eSkrpBX5uxXw64rLQZIWbZcXMgmfzaNp0k5dvk1dyVzlVSVOhNc9eWlJKeoDaOZ1rTrV1wahP2WXFQL99XZZ3f5n7LL6r20yzn+1yVg91WObFCZEtKrBMm9LBCoJuP3ah7UTL8GGZtFmCxBU35emjwPxUC83aetkuvSbPrxp5+t2xQWvEb9WVeSy6y3X47AsaM1aotqoDUuxlGhwg6TvKPDgjM47hdoSTnXYLeKt+gRAACcYnp8+XIatDM2KdW1CBmBkRRezjWQH7G0dagCw8HS/bUJGUhiRK72gSBjF7vONl65QdkPP+XbN6V77HfPAVq0Tyhm5zN07tt3YsNcZumMY5XGZDPNkVPLf+bIPdUHeRqef1yTPmxoWhRobGH5FMJ5Q1KKFae2ly7VoZhEaf83db/Fy
lzYtOtzA1wLEoHpGyVIdzVJZg9nYjuTA42IYWHEF6gU3klJg87rZtXr3Ym6mt8vZ8uvBbmarniGCh0Ng7bfYfLztkeOUabu5UO04XtHkG9QkgApGw8alxK0+0JuxOGnISh8HwU/N6t4tS61Moa27vrTP7eFcK0M3iB3v9aybcGAH7VN+VkkRIzCNsyp/n0FQfglXT6mQNBURXxljv/eL+XiGXFBTgnfctkPHW810Ukve1gUjjtUQk6eOjcyBr7OE14Jx6cOxP20a97w0y+5uscyuc/dnnXbdBngmdVBtyJpxD+jaMV741ZzfxrNm4cC4V945Tru5muIyNYU3XBNPjTQOAUukzxNrqSTW1+Fab58JL3jBYRUT0qQ+LlOA9hRkGMGNr+5JM2VCq9JbP4fuR2wCd2AROQmr8LIRxDVxDvD9aHUERZ7RW28GCz3pCjWhRdiThtuv63x5Ymng+hfDFO/iik2BpDgBtPOLWJiRoMU2rRZtWQoC4nmw9qrEqFqU3Qtq/6Fa44IK4hbeVxDhWdfSEry4j0GELDkYfctpirGCCKy6XxuRmkqNNvgo3ronWfX7EN9KBWNOt9oOj84aYTdvIi1hI3q883sO5Sj+vC+lJExObBfX+lPq3p6CU4XBtT5QWJd31fh8NWXFBJPL21WN4vlCkSxt6kcTCPBDqAncet1uMU+xblz1GpUJejpP/ahWUj88dYIvygy1bUYork0n16R/W7MD3a0DazO4SSvCQhyjlRUVAoMwFNDUZxw79/13A/zvozgTA/YpjQjkcyi69MUDD6gGSmFriw42kT4Rb4cIK5RGMeF4r4omHlK2xqvtLeaC3s9XQMgVC5vxwVmrC+zMVlt66TF0TNQ7T0LTpmGNgtvzuTPkIlgGc8WQ9RgnMydH7QgwsAzaWNAwNz4vT46kH4KRMMu2uEt2BjKk7ZPNhNBB4qRm+tULOOLXBoaMvkpbTU5MqLUuyXbtbr5HKP9CgOa4dXIfQb44DoNLkQXOY0pQlA6HWI9b7pGfyUC08EzxVYZRsT8eRBRiGgIrqWXhz2snQ9lt4zIvC53dCNVMwjlc6Qwq6ieP2PPLIoz+fq0VU8tt3Pucg4zXrl/bCziXrwLsNsTsZ/nH5LEz5jHqFvXrJSnvDcYCtt9zy7/ic1THxQF/VBkv+7l3sztWLR52rN/o0ipg/HmjpnLRpNShCeZk2CuIWc/4MS7nqFhV8hyUZxTzBt4AMGZewhj9ErPWk49B6T7LWDqfovnRz5W53qvbhTKYYd7Csm8avG1ohMBqfzF3KyqlUgXHzEC7UV4zBL5aafnBL4jwps2btLhubkL7NaMV8/2+Pd6+5u6MtbYq99k19yocntCdR4x6cWt0/L2BKn0IECn0IzHvZ4aY9idjqib0nEr2WnkbdD7rj4txuyf3eBXNI3wZBh4V5hkRmxgS853hIYt57ReECYq8XpcPluzOC9JotC8bvgpf1ozV2YqBuM7Aqy9rb2HMneXGhI9XbklyXtwq4zUjFKO1DoOd/vQ44SsCNZvy9qIPUjnWvACaQEFTtQqh7o42D6BefefCFwiritUyLuL1OQRfC38w5uY15gso1J8hZJoY7/cNTrpPfi2kDemIYB99QrNe5NpTTMZnDcY9qoPQLpd3yfIkELeRls9bo5joRCyph5/zGqm6y7FQ1TDB9HrbH3eTq9UHrqEn5cMzMPcOIop81zZ1HzajzPXPQtBZFZrXlRlHOG9HXZgWAduaEbdMqGfod1sbaXVxWpfrw/3+oq/rJ8Mfoh5pz0yRVAO9+xUuuOv56BGDEZe+kbNReSa3eH7N1ua099K6jTaZyenHN6GKUtP29XneVwEf63fw2R4sJ6+C666tcRLNX+H30vD3YMNA4OQewjoxgD0uQhfn7OAhDsmHwdyRm9JdijELOET1UFpboJh+JqZzwGWBLli3UWotofJKrmOjcaPitPL9ks6UNoj63hLFEUR0+X7tTwlaPN2yG0l8Mhc3ViaV+ZP7UHk8NLOFTtEAmG/ALPHzvESzwviV6h1HrAwiV0N4KWk03ZEXbAcouu+UvEQ16VR3jSBnjgmMVivfdK3uIblKWCyGysw/F4bZOJ6SGaBD+rMlxb0ePM22eWdkZjh+MI0gctglS/1YcrIkoIKUMDYTCZLWelhvnl5sN7oWi8+t3WWVS/3ChPd3UQJ1dg7ExWvR+aDeudxPiFIQzLUg+4ybyI5nx6+xV0mKriy3NzR9rN7A+Ol57dSJL5yW2J5xJ1J0Y07wMo9HpXGZ1HqFuZKIkAgpKvvI3GktPDepsDUjdPcuMm2jv9crMsnPsUqttmvtJ0ksXWkx91qz7LKr7On+QmF500rbYWYQqC1e+YzKmPS4dY0HbrR5ENVhNjYl1PNJQjfeZal5CcRyHKroVkiO1sFX8A6VQG4yxubSPLvBhtX1tqqpJ1qnI6RNq1d2TXy29V4rZ6oFse4Z0LRsJ6n+e9+auHwP/E3PNz8Gcbz52SNvu4A7ZIwORfFPC6JQHPoxiLr/3w+i0F8EUXgDeqVeHRjV99EUPizdt4z/NX0eqQWOIQSj/f57JrjKv/79tPL8lkA33QKkHbKzESDxW/bFnj9XuUzDp+9vyT8FdYAAV3oxt2CkDAwup3ns6uxbsPbu3les9yqb5qek+GvIlgCCARj+GMu1ZZpe3VBbUc6Z3cfJ1ec2xleI94kIr9juE839CfwAEz8G1Rj+S36AkV8wBPLPYgjs7zLE1MfvP5Eh7DYeLwYzios4/z4/fOn6fzo/3OC/ID/g/138wKbXdAjEZGuZ/P/88LEPKPTX44f7/4xZt2m9FjZ2wCIIbQgaEh0U+vT3JTmhMhYsKGG6Vbmlt/TAbuqBrUmbrGpFbir9ONM2KUUh7SPB6gxbPFUHBMA8CHX9tEsZqFSrfBNL6kyFZoocqAxv3pEgj0Mvqfl5ixr9EPO0bZoUktYMlNcZdxcZEdZp8mor/9SvUALABCUttyRIBPqHHuLbOkIfO588B0V2PuuOuys0BOmO2YuCVoX+3siM2F91E1AmpqkKlD2vMT35ZolBv0+fO2LEOxTfWyLQN4CzNmxJ0x12VxgV1cot1518UxgT+Gtintws7Mm7oG8NwPIYIxu+xb4FxUyXqzQK68c1TmmNEReMjSpCpIEyB+Q501X3eCJzA8Z9RIjYqfSWf/nWd7G8dhAYAlWkfJ6HyF4kN/Uhtnv/bCeATbhI2rlJSrh5tuZDBJBc0DpOvav2BoGoaxKZGgWtQSoDsC9oUAZGD0byGzQp0tSgbVxkyEX7DhrFf2yhr/WpUIM89n7V/YwMQJ20aSkzJGj78+0BpVaj2rcwsDqR16Yw0E7QxxT72AgwN4eBNEaBOEc+BrAIoOQBrDWEqQ47aUyIKDSJ6aAtjSGR7zniN1iCL1ykl+SmVO7vsNjwCihYxYGFgbz9OyzjoiC9wxJQiCY/X9mZSuX8r1Lc3f4lKH7qzO8U1/6yFBe/o7j5F6W4iXxHcegvS3FY/ReguHrm31E8/6tS/PiO4uhfluKQ9p1W1/+iFNcc8i9O8W+
aHFDEISH9i+78y+Hym44EY91VxsS++B5/Qc78xo1gtOymMfVH5v8FqH75SP8CVL90078C1S/b/q9Ade1fg+rIvwrVw38FqsN/xRjoj1R3/5rxxR+p7vzzozUwqgPA2IiC+BAr9pov+J0D2kcd2WIe+wDOinqptJiHtxxcX37SdV2D65/sZRv1T3771t+uHQArFQmDPg9gsfp/BKaU99D0O078cg/wJID+SvTUfG8GcKKAyqh2Xhx3weFCP8GxPt/mnCANHn2oJm0Z818Ys/2rMf9kLf6EMf+nYCp/CRPyfxWm41cw/SRp/60wcd/zK/rl+oOnPx2m/wS/Yhrj/hP5lfueX78f8899/ffSofwlTH+63vjHYZK+442PrgV6WcOuOdSfYpQ29vfpNy3vJEA3i5tmX7CpqPwPWgKryQTzd0vw/tyDHq2rx0U9veYJILtmYIEd3TUAWQa0veb8OHMTtY8lpeEzDvrm0u7qm0K/2kSg8SUwUjdPqk8b5Xf2cXkiVgPa6S8rozbEfmHla1/AZv8VNJr0HYf8QI2fvcg/gxq81ietNkU2dYAe9g8f/GajsTXigXzyzeVBHMGpXpCgn3LAc/hg7kMZ8ifKNNOT+eaxsLcPlhj3gu7Ur57/Q69mWyOAsYj32gT8/eZxfYFDKpK3tKY/zdWrNDlrNuCCy6uhf/fSvC+eFsBk8ruvUf++JqBdHt1N6oGXBF1tp4G1ihe2/Ci8MA7a+s57+eKZkJPieCrgl0llN+Dzub/y8b7xweXF5PpndPnFC/mP2PnjyL9B/sHCVwwoNw0BGPnTdjzcfto2jjx+vYKF/mIFC0bQf/tW/U9fxSL+41Ws7J2S10nf13pgE09TmfxqUfHb6d3IbwjL0m9Hf//j6PoOF9gvUPEtbcyaeC7XH5v/FW6+9mB05WcPx9+hBnr7nRrfWpm6ZUyyrxW/P/b7p7aQ+09t3f/Y1hyPeTb/oa0PwX4b/H+dho//J2l4R//tJ8zD93+Dbv81Kt7+QMU/tvV3qAjwGh/fFeuvAtO/B/hPXeFXV9Dvn9tPPPKlgz+VY76d/v//Fsug6J8n9uj/fbGH/4GXC/y+FQP+O5s1/i5Fb7/eGvLjFpK/ITc8IbLn68/aCPbTxsBfP131mwj9YCahf9ZOD/hXx1j/tGnnz9ksyGR90x1/d//fr7YC/U/d83ND/r6wfscKv3KY/mlbfuBfPWn3T2EEpyjHa5OoEY/zhRRS/NWOsP93uePx0wZB+JfM8fgFc6D/NOb41Q7in20qsI3219tunIsu795xw/6e+hOmfi+jdNc+rw/Jqmyej69v8omXufuRoNlezsF31+HVFPCUvtwx+9eWPzfH15s/mvIO0K+cv+V/5wp8ZYd/1NSD0X8M6j9gxr5Yy/9Q+v7IEv+HXgEA4Cdm+sdN+R8aw7Cf2vrHXYw/zS341cblf8GNif/Jly50Ak25Hkfl18sVIt7KI4FmU36fYr6YUokiCnD95PfuKVrTU7Shmj/EAVzHUpU0conuGmNCgq3ixtmdRgmdGsNS9DYpLEmYwi7awqkOEt1dz/V+/mkO+DLf7tT9t6sf/0Hf31111L99/9IF8tcvXXiCEdLkxn4Z6+8fljJFhgI4oL60wpIMCkqw9PV2BvDlyE3kQb2rKGiRZESHZdmas6hrHsfSqTpmeXUUNrNRKChj3XBlPRHWjryXna3SD6h0vkAQg56ZL43++NYHMlfBz/X+guvlCvJ+vRxCpMmcZchN5X4vffWvFmzFUewhWRTP7XnE72otF9uLvJ5cfXSgA/V6OYRFglaMjQkIZ49rbuNtxVvC4Mgk9i0nuzsSmiYGYZkNSWHZh9t7Z4Lki38dfU+GqdxHPme0ihjfEYbdiVTgu/1xQ/WduTOS3V3n94i4wdY2XcuK+jRX9FVgpRlFoj3fbkmKSjsvugVTF56f4uvM1kNjh1afw83aUuWdIbG4Thm6UI+gBkLwuO8Sr2psT8abVI2788pP73qEzuygBMJNt0jECa8n2vJ5dnjGU7bm71iyS40cJNNQAg6toNCwePt67weUMuW7yBWWp2Ge7HG0qbOyZqlzgHKG7Wy2Xt34uVm6b2eEMnmqcq9sQzsOB0pZV54cFRuZWNblIznIhl0x6mRfK4sdB+ZY590cglIqk+1zEk/BEwnUIfcgT+g2Mfw7VfpZkXYmmnMo7J4MZ+j95jiyljWi7WCvUnYPtRNqyN8w7c7VkjfkWRJKbzFsqahAuBS9h7XPyKxFi/Ug6uEgH0Mom/qDz3uXccKI5Ftxd+tDslkbLmP6xfq9A7JCSn/PTw8tSjfyVF6jYrjzB8803XI+SRSKhJHXy+AtWJUAOzSaCgXaFSYpSZREAYg5xbweif+cH7TRbd03/SGdXXifLF8dKn6Qm95sTGqRoVtipgVh9u5b7FjRDujtTt5ERKr2/uXA83UWVPSsn7IdrY+E5Z75cdPWycs533DLSDWdVij59ytmEf467PCAZJe220LiPMY2mN2jm5OrWc5KSKm8zo1p4wxGu0iFdHGXaKgW1PZ8uiUelHRd4IIZ9LSc7rY/bgeJBFidX2ef7hKssiFTSZQovgjSNG+9beOCNfaZeD38+xDnJrwetW7CExHQvFKzEPNTJCnpRCvfpB7jw+6wmKXtE4wIbVtARV4++VdQTh2dIO+RI0YuDR8C18xsH96FLWA9J/T7XTXbcNALjI53qsZT+mTD3PKMPRVddOWsE1OLjFeldzjFngIRqIu+OdmeijsvOVOce7I9IrlD2oz+Jkk9mJzrzICqRIn2wUC1L6F5vaDYLMcPkdzcVVPpBqFLbk60OMf3gTAIRUoPm+y3YxCyikMeNF/BPr1BZeK0ricvter2ON0YWBGFkqyMh4P1Dz5EbxVMBYW94YzLBmcMV3r4ph3T9npxvI6tJ15uNK00oFfkh+Uu3ap+ETYcauRe9qjbUx0fqC4wsmx7r+SaBF6UhvXkd2LObvU8q+ucsHIt7tQZ2BFwa2dO9LsZPuopPIXrefsijgpXaTV00OjK8eDiDTA6wehd1PonFxGXEZfKeYN4dHBvp8rxi0tvWlwRhqUjsEC2RYdzk9XEo+Wo5HPkSpgqOHs0cZPZ8igb9ywvayYaNMomTYpSbcLS96rDFigFem6dmOuB10DZ1SNSgHQMvh6PrOFmk42+dinaaiRAQ1lw7Skm67rozFAo2hdL+0PYkHKyaTty36wCgNCcEZBqMUEmHxCZyDJeeCy38VHr7X6X3YImDDWl31R30vDJCEyosUdUlniy27zoWC2UqbO8Y20ZKD7b09aBubBr9dezrzxaWm/JN/uSR6DnSDv7Gafp9cDhFKmhgwF82nZfY9uKpTavOXeTL2iYimspkk18qTZJ8d9i/lI7edBrNqxKkfJZ1+5yBMEtW41kaH0fassequnTT2MINXNpQ58q5FFogAAPe6b0mdR1zwbtOmPq2jj2ysMWNDFG4xdvbzUOaYRWhmkGvSZoWlnL099Q4VdsW1vvsK2MkdEdDi3YO+vmVLJxN6QsXBwEZuJDr+16myuFFmVoZJ
CjrbyRb1AjLyQag4781dxV6pk0eErx1/kF3MQO8YRNW9Mzg3uni4R6oEDqBV3p6WTwRQjhb8nDcVaq+BxueZlls5KI924u6Lo3cZOvDamRDJ5lEx2mlm9PN51QoqHs7Vj3uAUZymxbGYYaSaZVcRsz5saY8np0NZGriSSOGu85cwFLBKooNhJzHUogzjjjq+Hi6cDrkTvBhYdgdWf8aRLibdle12GQ/TGZHKNeWy/ahiJPNeAXQtLpMC8ELdmD/UbyqaKbmJcn/ef4EAkyfB7zBxbtjc+hr4Yu9BRhKaQyJLOgj1VwEA90UzESqnBtj+wmNG0sT0NyvJ5gJ/eH+eqFc8iGBXI3DXYI2vHIY/cNL9fYUigVzdbz1pUzBvgnMm6yvctTnmKa7IvB97vwLl/IDoUisZd2mTRnS1sADCEWxZ3SU8CqgzI9q7O60YntlmEbggbMB0dBbcFZ5LQknMOxmNuUSJ7cSb9qXIGgJsXSGSkui5mrl6HyyhSlGsfN+O6N3RgtmoOqUpXJDMQezgkSURNJc2vOzBpB8PVAY9qZFhnH0E16vltHcx3U3JDoc5JDv4ximnVEt/ODoijw7Sxdt1iLfu702umco0d8xcAXDmWG66QWp9SnTHnDhVrxu5SpRUAaqtailS0RStNGGg06JCIGojpMnLGZjp9lp6Jkgsg19aZaMUnJzsBNjvXdCjd2L1npY2XMze2om/JYpYe4Z/obIzouBmYK9MjXQ1ix8imfkndKhn1mFpO5cqLm706fRSZd9s6yy8+JdY4gP7bGAfztmgZQkfn11sRSQh3NcAU8rx4vmyGkMotmtDE2bpzkXV09ZYXy1Padj8s8CuEKzEgXoCl8Haa/kTjQL6VXbM0CXQcxQzRCiv4shToFVVNa5Q3l9hHDmPDLTNG1tRFOtBPP5ERR6pGo1MIXP4aSYXrFoontcSotNb+P5npMHtXu1iV0sk1MsKmwhlkMlQ25aCPRuYlp+e3m02qae/ZWqTReY+Vjs+3NqG+jEHd7BSVEq9zG2JCG/EFxeB0MBEM+RJmVo3yMOtVpobIM92dGeycpxqewFgpuYUJmCFb+tpsmF6ZQ2nDf7e1c7kUSb/mMQ+0BdcIt0LZ36dDu7BYAxrtX1c3D4nEtAtoK4pQYMm4ZP10nhlGSas621dLJmOabqkOJZDeG+m41E/KchCT3zznR8CWTlw+lZJV8GjYhL7nKV9TyinlqG05LfHl3l0P1hRV7e3w+hMo+jS0i69vn/QouF1mn9gzjFS5LK29YtrczYPAqbpSnjqIpqwCBkmdDS69wo4qcXEVnoyZqSIS5dT49d6yGYjqqocaSOHwLctVvpKG59ZgVYZIUmMAJfRdlpZzYs5W5g+/MoLIPwgburhIcZPcwlc9xf2jsRsWEiSHpGbZ+nbnhGml1BW0MvkUbNHWcTNo0Kkmv1cytTnoBVHrUdcA3ZKbqTsJ4qyxXEEazD3mTSj9ZcLtxNBgYRADK9RIKDqHC+ya7hNUMjpYAHeMO18lA3rYLLnsdAeDEZEEoco01Dlc4L5FkPJxs7mkHX1MnFCfWJPYcohPBfTh/oaCTqqyET1AYrrIWLs6dqdqrSaW7DhRBoULFJHx66XVwcI1oBA9N9K/wjzJKNn0tkzppJTvauruubMOjHd6nrYfexMOhr8O/4+uQFw+NEwMte/W9qztwRaRl64S7l6/lQgMfld8HZgqP3QMhmghpNEMM+XPPNKfsQET07m+oxmgoWrwl43JCJ6W/cZDD1AkMR6s8d4W8J01kiG7UkBQHlRXXxNSDiQV5LRDf5CiF9VRvRwcVasfJSWQ0mRMIkKMnlbW9InlSQIp19z1F3RtyMWk7jt/mXYU0nU0MQo6LwlzuDX4jn+ZFMKR4E3YC1FdfNgMl+45iKy1KntQDb/P7xAiaDCgnYpwNYi2C7CXtWc4MyyV6w+8z9XmdiGP2Bz7Bq2KLb2Z18W041Ou0ffMKrntodPeXjGqKxx091y3N5dujzMUDPv+Iupv4OS+TIi34IT7yu/x5DUqIa/hdOzQBqHBXok2SvTnw0+ZwnWcldRcV7M2kz1KQe6aO7j494nsyS2Z75JvI3cu+H4XSbCG/s7pGdjdqAJ7uLb/ThOo++2QNb5ygBonuc++dhUiORsa2yiY9sqdhs2XcGG2YATpW6g5z3l6M2dXzyW3wHPoKGgdt0iVS58MUhFSggwEYpzB6R7fl/ugeXumpiBs5Ybs8lxRVo5nqtnhiO4cXxQ6jYDhmMNse7AMYQNtO/nd777Elu5Gl6T5NDS8XtANDaK01ZnBoreEAnv4CwWRlkTzdlZ1FMjvvDQ7OooeHI+CwLf5ttu2zY6PH5zkWMaOai74eF7uirI8jNG7JPWmP9srrQKomdbi6pNRmggq4ZO5mlqomygeVYXG4sy7AJj090eYuex5HKg9wEbMIElE+BzuKclPrWXTCD62sgGIhUJ4WETnNE82VtmcegonZW2WdyxBDIz/vQNnoSpjZ6jKSr4J0YWkLH7KHuKitJJ+j3fIzW5Bf82sGVOtitwXMQ/iZsvZWg32dKjv6ioAQdyMZXvKwr1kosqRz1OHXB0aE0mRRkG0wDtRe3XJrDlxTD1O+bFTArAF37yGXBoLnoREvucR+57p/qHVAqDZyfZSO6Go5f1IoC3Bj6N2qpm/yyaIMDk8bkdTuAtYkXQGewiNg282eDmqHyqTzTmIvcq6yic9yZAJ72Fz+zg/KRnVTv6WygvqSmAgm2jxlsQhggNTZH7dHO/EpQz6MVA4xcKDL1cn+GMpWxm6aPdJVUev1IyKGVAR4sWlZUIl15MGPPMgYC7PouKWvdgiWPLFSucrFM4HLww1a8FbzsVHsvFS5lae1s8Sxky3kO07uZM/iYhQHnJKhZMZspO1x8l0o82Wo57vfuL7tpgWWlV0tqCtJRg5wqdBnCdeDNzqHnI1T1adAKwqSbC0tbla6ZmejIosF9N8zVgmYmnmD6JND2UkQjiD7FMjuZo3jSp+e5E6+VyokINePQXUgtK5cjLNj4W8UBbyq0id71B2hd/YiaRrXbq0lke7nYtn+0QCgX/eEbb90b1SdqWGU94vNRJMw5XxmcuEk75p4rurc56YoFGQnOjOcfx6YvVYaJ5QjzKaU7kwiUc6cVqvhaJmsuNx37tJ0hI8iU3rQ/naC1620AtVNRtmGZtaFbmFc3yWFOnl4ucxKvMGxYTj3E5VALvhCBumWnaxcXo7WI469spqZMdTrLn34dKubHxzQlxhqB57qtDxBUugBsd1dqj00zCZ51dmazxxYNb33TMRtl3L7eFBB1EOxifoHlkOyROghJWnbw8VTgAK+x/ABd7latAkt9aaysE8rQkOCsGdE/yTXenH2Jz42TqZI2ZeesED+MvDgsG368jRa8xnjZXmftlrcyTy0bt7HZx6TzUs/LJkTpELQHourNqtWvIXtJwudvhNvyehZOST48s6D3HzKg7aAPQ1swAewH+HydfQktT/+/MwIcp7stXkLCv0rNuMpdm1Q9rLTZ0hqZtad9w8MFsACabABG
0wbSutMcryRprSL5M4GPF+5KkTr5FdXgR93gs5XWnKmW2a05vShNM3mAbi8cMN+YMduD8AvC7w8S0jd93MakUKnoADYBlw4FW8Yteg+wccKBbN9bs8u3lfJHW+OlSls0T/FfgSmx/WL8Zwa3bO6Oz2TQUbRLv5TKYboHryk0qrvGkNf+NuMLmEz0MIPm6zxj2MYIBkdks/x4TW3lcz80x7Q9TzXmaPv4jIWHSCXB/8RSCVZvepIWojnRAO2t714VOqFupjLxlnaLtWVhpkJbYjWaF6BsNN2WDaKNCGyjzWNh6FBdSs3RfNJVLJnDSAZMi1kklasemguP9z81vNyWTB2Z5MfX2jkh0mrDlIse89ZSlm/4cFdXapn06xWCTy6ZHo96inxQ4HC+0reHiN9jtbmZjJWqyke21qbDqHPYap4MQptX5vGcJnYRg1hdSTe6iJtLo8QzUHW6QXTPuKTa2I9h1khdPiGwYR3RkqvNo7dZeixND7MKlavvJRw4vRvtx7qKjWs4wQgw8A+beSOliRtzvSp7ohD2d4tlymsIB+MnHrLXXw5Iiq8k8mcwK6fFaQxiYOjndW8XSs8GftMT6Ly8yFTw8sCPPX23lsXMIf6FnHTbkreJQf70PRIgXMXEjJGCrgxWZkSHyl5kkhGm3LtZ2qYr4I6XawRNdm9YhcCAitPBX1AisVUezXbLqlzku8W9MdpRBus0BQD811gjnWO3k0cIgiyKG0Zq/WdH87eMkVpyNxiBZ81Eu/EgZU6l+RsJ814j0RoIzmLAmVksjKflrf1MXSA5p0h659YCvfq/NTyaEqKgFaOSS0fYHrmBMw6QzEPe0HaNQA639iF2zie7PJaBz3lWuDMzJvCjWa+HBsVtee4F6+QNeHarKoKXH7j71D8BJi2z9Z69rV3aVTOBxzaiHbv+Oa/RpcWkncgdHjaKdLRvtqpmJwWMACcdxx5C6lspCgi3I/zFl4R59Dl5yOKFOGX2lEj5SnSbq7OE6ghz4yZ3l1tL3uTq6AT9OTHUfCwRpjkJ/yFmcoTa1cjQDO2qLnI9iGKYAbbl1/LdJgzPqcSXGbqz7QMqR2Y6VTR0J4kutjk5aa3+gqilrprL5/d5ON9SfOdNIF6V2Rugbp+p4+iB/SljWW0v4RBzsYH8MjtfIVwekTayur2xHCVwcMe4z1V964NuUhhEMr6s4V3HEoAHHMJJBfnyrhumS87tKfBMW8w0nVCkLD7jwMaXX1wFMa0FCahZq59HYpTFPf3bieGO1+YxgYCNChZkbP36D4UxPGrkhpMT5ZXtuI3/TPaI0HLed7TMH0lUyvBpDHEXxFGVAamHTBH1j27xg87z8RzNM4R36EvpCC1aM3DwL6DFRpTi1uL6nN6N236mOGmh2A3dlKqiZhgn0EtafsJCPghm4aOP8ruQVay5+gPdXfHDAIfmNxzWdyyORQY778sYWByp2OBHmySlqn+RW5PUqSGqvGdNxcc7DaxtycH6/su7LxCbIf2duT0hMIakvChqZyuj8DAOOuQIUiZuG7VMkJeckZu5fcyta1vtt+gACxOjeW7C6/ogp9MwXAPuyAb7ykxoZGcYLeV94uUjZGNfI1UXB0wzxzAaNR1jsMeoxOatE7cfKwIMIt/G+xnlPeD7VaMOh14EYW7jJzdpjyf8EcqekBhrtA7958BPVmcPmQYp07iE7WgHXochK/A0D11VSLf9Za37B7ElIWFnt2lHdk68JW+H3Rx35KQYI+UvxEm+N592B/wQZaPNq0EMeFnV2FemLkL1WFeYo0/s93P3H2I0QbDP69w8nzWVzK9wIuSl3vW3YToCHDvGVSz3Ro+fFeFORU5LiWUE80Fdbgq/arh0qtZkr1N2LxTvuP6eScQO6i6o/xMCUZyZzdFwMXPFEawIS/WOgy2h+9Moyae0dZqwYitFHISd5J2s33dFUCBmNUvt+StPssoXX76eXncTA34MZ2j3LKeaVuP2HNLOIKlr5qSefjJLnXnt6gI2mQm7VnZ2pBi6Na0L8Zy04xheB063kuYsBZSARKN1wma4LPe3GOwE5sIPYK6d0+B1Gwq6jgf69/5SL+LndpJO1o+ZCOJ5HXX6PZp8wkngqt5BFUEAE9W4lvTbPDjvnum2OERs0ZdHTnRRoZ0agbjmmFJ6T57+KblxZzVlm6WuaPGO4Ky0+sWJUCCmWs7eS3tNfrcMljRgnQRncer9ml2V0xheXYM2gdJe9R+or7QgqrZGgiNfqH0NaeZrvMzSjJjSmn1uZNOwcaT9tpEO6NnqbM6v7WkwB/kBZHeYVEYIFC7Ds4hcTEwhmI+X18m0Y5zgMcpz2WaSZow40Uqm2TBXKzhaDBi3FGfbGqQ1xzNBvGl3hVYiLFhcdg70ykm09G0W7TTavbglTWxg4TCdcfgxnLZ3TP5FIvzTRyP/PPJ2UcDedEgRHY1Oo3+pkRSSs9rYbieB0d1FMq7NhZtv33iWdlxDLsnjWd4WIdrhsTbdo3ytTYMn1kJuq1hox4KJ+89NEABl8uzE0NGJ6/bE1AqGfkoHSXhq+qikbtolXtM0OEmjBLmsNQPVejjDL9nOFMRYtl5b4By1j/BuzyDJP/c0FjYeTw0eeeuPo2jU0SlVVghozhfE4anPCYPZkFn/AmshTEx2GFVuHehGv0Iw6e7gZTNDXmmBMqQH14P2LWxSQuk4FvPKF8Z5sNC86toMoRNyRrwc5C0W2uzSQpOi1tvMAYtnqVMB+40VTDiriZjsquzg7Vhb3FIvsVn0mJLHzArgUQ2npBDk6s+XghhhhpdGCuD9RIo8WpLr8uH1AiF2WZ7Dneb/JVNbdTBAFnUdLiwLAlGZ/7GaJWouelOxcMTBW8fmySblPuLUrmvFSBevMsssmT0VqZhLtDhB11p05nYRTKZgEb4Ecqo9Ecxkdw3CyYYWYcQDpeZMt3V8m1mgJzoY3PXMbJGKFmTrLFpD+zy1n3j1lP5zEq6fHu6k7VW4BfcI0O6rvGRgKFuqaRn8x62T/qECKRBiLXpglvaTJwtEPI+r1TTha0fEsqb/yjtZKAVBYLoU1G1U6ONMHY44h15Gq4T8zkIzlKgNHpQ8TS764avg5aorI6+ag8+vwSzH8shH5tRPLlXrAakbpa2A0VETb+YUndZh53IZEmF+M4WwZ1vq3jJPu7ANqybIQqb3zXOEjKeRLZN0ZS+p1W6R0qhkZ4HPiCD9EKZ29UKeXJH6s7eAToDbqAx4Z3+lJLn2kFPy96xPj6Z31VtPAMERcOUgnqcNTHxpR7TQ7RHasiqCqtbm2zdQny+v/gYzYeHr50joIdptzYwpdZdMKxmN6KO6YpEeD7ugnSjpedoZV16/oqr+4NOQoirVy2zrKg0iPVq67eXy0AFcwjuM68cjpMz0XkPJ365+JKyXCHcXGAp2RwOah8mkAUhZDI+jk7Qk+YQSMiFjvvBJHqZidWX2O/qpBC28dlA7eniwPXo/LSyVHlcbZnR1ZumvDrdJSlEUEYf2GoEhHTdJpo461n9DZs71Gp+2oVsbKwcMJkkO1LkpUsVOdyB3kbI
qUELwiS89+GVphvRzYb1z/ySO/tNqZtNs70AcuC1u5ojL7Rox0tJ4wWru+loN8QBaNPBglia45RhtEdxhatujkMyTq/xcm3Tuqi6o3C4NvD08wUybnDNbobpNVxPIhwEvSnyRyxtxQc4bAu5i4M7IEf8z/IfGhpytZ6uAs4bTOk4wZldo8geLdQzzkkPqQi0bX4Nef5zxg2Gk1GX8FItXtLtKDZHLRfqWK+6494eZg0JWYhuAdCcZ6/MMlA9CYlpRR/8fF/7UsdYPdVJiklOcc+WM5CC7VAZx0yTnzQhVvua5mbas/HJZe8ArNxJySUnm27fxe1BxSFJ2YMXd5IBb2qrtUY1rdTjnIZbEk6mdlrPcp6GuEK5jNEzt+0R6YvBPky5uO++kxT80NKt7YrVYH2SEpnuwAApv2Vx2TZM/dl/PngbHpBN9gWb1GXl9nzTtuXbV8jttcYlWHwdWcRZG2pg5O2ZE+7TbaCCTNYjecxXGI/6tSYh20AWdKkRDgRxRqZIvMgeUuR16WT4+6PrMsEXkbiKAqWRlrHzACXfg0G0HZWf7Jx1zZFx/UZ96oVOwDBg5rz7drZDkj2fO3Fbij3Jdjo5KeD7PZ812Yakt24NCjqt2CmEFJ3WnjVwvd2e4e0FEi3YDzoBCdeP+9esBlzY70GV/Jj2Pligm6zjohvxycpNPGBOZ3saKU5QUBuWk/TJcV3uGjIIXFHFr3GJWe/aHywtPBVJnLJn/LSz4jmamyqSrzl8OiggB1XLikrjXmkm2gIlSdKLHMRMS+yUU48N1AVC9bI8sQMfn4r7k1Sxlq+5F70eXhHiXveefEyl6VcPEGI6o2KDAkcX7/l7S8XVUSlKjOB4QzTrWbTIyCZMJiU6/Y2aN3glwvetUJnp1tMj9RxprLRxcbFkA2jKPr0VeQJC4nK063W8hK+nYjwaL3oWTJr2S6Ni9dAvIPNM4s7mKFKgFA5jgpXWGozCJnbJEJlxQSYY6JnYTk/2+mnrTgx5QtvTVVyDWq260ZuZuJhH56qbO9+UMx1lEkVXbuk0HvbmXkARsGzBJHtYgBq2BYMRxoeld4LKS/Er5k6eFLo8oxZ/2tb7U6nPAPfAh+ErKpZS4ueGIOYnSQHPjYrqc4jq005EyEWWeOezlqALHO+sW//CkdFuqqpMmjwCYXxYhndI9QUre2pCCm1av9v3mfE0bZvFhHrsRMWY29pOYB0EcAcylZ3dsTAJWT2ioBtWqmOP0p/1ATBXySV5ynEpOX7GM3o0aouUTQWvlEbV6J13P6dp0nJrB4os8qEeCy3w+pxnNB5mcMo2O7mxTI/jJcv2R0R9J8lVkUbeMn2r8MorBZ8wXQ6GrpyAFh2deNdoKYpyzZTESGdBvurChEVxjziA52RBUK4Nin9LjdeQyUjJ4EDpCNpHV8hz6iWX7hEXzWwpKV2Rdw6UaHrtbym70K+As76mEHG3ZNdbHjTjxSmP6nWLgjovbjJXe+StyPc9NTqbSV66cKArxuUC1ecRsuqUeMe/NBZNFT77efTa7Z3XodwRIbLoR/uqbBMVTKBaJ6TdVbbP21L1sW+PE6eAc6z2TS/xm0JwG0ApS+oEcZSbAnWG2GMK5olytanU0qT6lnqXiSx6P0CFqjiZEXDolRLCey21kYgxEpjUo0Ixd6LdRwFzqkcygChljdNeMOllI0l3Wie9nyPLQw1ZizHObH+VJEMda79WaRfQV/rIOPsz+cnIPAvnL8PXN9mpZIB1VNO97fDxmX0zMnrRKretbPsC9VYrYy6vjOCdb1eKLI0De63II2O6m9jR32oSqh/nBYvO17SVMJz85HHeruQG0ntMA2vZXlwp4s2tN3WUgwUfmYGVyRnctgd0tfBhviUOhjGv2AVxLryYLw213/Kc5jLeSy5+oObQld/GnQBEDnl/slzyzdMb9cioJCnUb01bYrX2LJrjnu62UwG/nnW6fWcNnHQdJEojhno956bdah+I9rG8C2X4S1+LpSb3zxpHHj/WZ9iH2o626Q/udKdPedSRh6RLiUcnHs2nad3bsedJi58G7YoPaNxT2dW6mgHS0uXAqJpetZTGXy8ykUfRu4wVUvIPEIRp6HXOkAevlYOkdvsaAm20y1cMEHCpLhNlX8XyIpwR0zh2s+SKiJPyLNJaEP1n/vyIMN8IcrjwMtG1CccrKZ/XPN2TlGDOcxAqVnctpRyU95OBa+ixXdbb8q8KmajSZ26L4WQPNc3PnT6BKTbYS3caTkja03Etr5dsOgbQ0tztWnitrMfKd1XbjxMirpj5rLKZZRWldMOD1pPbnoKFHLjzcFtx80mmhGGweJLgLUYTlluPjdbu/OlewWjyR+4uAHXEUdWz0rnB6BS/tPcjtu0pXO6EXjpol6hX80Tv8RaS4tS0zVIkBymmEyTlToqI/l1Pm2UkuVav8eeGdeL+zAvFfDL729cpBOIJSKjdmqky7vhUnjqtoH3uITFi26W72p4kyWjMqW3PW0BEdIykkqHAyn0mKc/bG2eD9+XNtmqP7czVPqM37qqO9jJ3tbTHO3PvLerxIzQLWrNa81kMaZglUigx/LuhbXUedU+u2tg37ugPAhUsiOQHozryCbu2iVDSlr9MEeV5yh7JLvJISnIpfH7Uis85ARxLzqIp7Mp+FBoYU7kBjI4AoFjXnhB0YCp6PBmBFdM2Al0olZ+5sMvNN9wPD7tuJriaTLqdxnPkHHQmtkVVo48Wn4P9hHXRE21WzVv66JKia5uqQyvefst+F640IDny2Si0l2BmKxcEnI1hfJc2smc9ZkTFXPQZWjQZUomjP3nvDvy4WCnxCflb9d0ViO1Kbiu5LjWPTEzPm43HDjOqZ9B2otPb9DSRzWQORqSRcxeV1gUXUcacaFRFFSZNTDt2xomalz/gBnZ6MMQNl43GBXur7tbem7b9CE/rIATWjf1ZFtstIA3SD+ZORG5C8zur6mR5Z2t9KTgp0VnOGkFIeZv9Z1Cm+6Zlcp+E9f01AyQEd3KXVlrlD5/xX0Ws+tPBPpXeCLJt0xXiC1A+eHN9pm6u+qSxwoVs42Z1WsGajBfPZ3ZvMsa5sAdaBV/qEqxwJLXNpgxUgeF7N89jmmskTdgxYoerRUpkaaEIpb5irpX2UGtQXnSP610Il3YLXkMNnp5fS0xez2htQDLqYVfST2rIoQ0qx1jqEjpmy9mYtIluIuXk1kbVW4azt5D0dpB6LxcRVDGK9XGtuXQ3chvuXpKekJJm+2H6uSWvsthkm75VdtGwpuPDwd+55BbvwS1NvmYaKbrngzsXjJXZkBDvbdCK8N04esVxR4QVtYb0/i3MzdI8f5HFSi9XkyruRDhccsjsGOZqABL2Vz2MHZRwUuQ2PvqsJnNfJkolZ7EeeKwKTzmheGIwtaQ0UMfM2MV+a/o0oruFNsQwZQzWyN4xH3RdtMRH5S76hv1Y8Ist7XlU2FdZPEtFKJBRTFmMzHIb4sBoaU+NTjpU6xza8ue4iy7aWfPrrolm91UUm32MqOVV5Nt4Y2m4fIrkWd+1lCe128LURgV+0VLIPBB
sapgMqRJxJOw19STHsJrbbtz0JmBYGxzCkTbfpJ75bbwN3tdXKSLbjSr3LjgRjlc23gGy9b4Z2I7nqJyaKlAC5C1op3QFZIhfQbJ5GlRcSHToEe059HBpdKyb2NcZSBL52umdRRzdHSjPc6bw8XqXfhFf8U885Ch2baDWruhrcd728yGo61OyS/uVHc1x7vxsFxc5qJ9xk3d4Ds4y/3CW6I82x0khB5MyjVJkVMUE0n62oHMtlW5qmXA3+pgOcXmB5bti0bE86FZlb5sGsjMbFM2WjnI88tdX750YLU8y5XjB77zehoKWGSdX5DS5aIoVLpXPXDHSNPqm/Kz2YcUzHeBBU9x8kE/AjwdJna13cgzaLH6S4WYfYtHjECZF6u5sNSQJkbq/XvPIifqgs19Ds4dvBZlWF9TznnZVWoHsdoGf5isuiCk1L0G7sNUt8hg3VzqJU0p4hgrPNzrxlsnR8jnLaBLmSTFV6QTtD2aq8DlJSXtEhLwZNvOxx/LtcQtKu+34npxsR4xLyueMA/DNzsporcDb0e9o4/pSVPiQ/Pg/FhR6aA7JXHgUlX4traa6JCvPoxJriAttH6EhUlqxzUgse+n8ooixPCDCtkBnkZS+cndAiGIzK7NGW4piMHbrvEkFRyUDrNoEPANtDoAB1HFHXoMvc1gsu4yoi3W13SCGoZ2ur+ayg/ZYyQV9AMJeHwMR3UX1TyLbfTwt0SmIRMZIusWhIbTjboG3AuGLuUXxWEZKJ4MzIyVs016AlmcKcJlny6MMd3jrwHga3dsEuxJBMd4ilAtpF1SCanwKGTdQFmh3BhHvVXckOQMtepqVJrgLVc9WaWdRF5kUJ8rf51gplfNWZNO1glphyIptr1GOUgkTjI97a90kVx0PEbyYCY98QpLMchB5v54VMtx6MUBNYbhN0L18qzZVefPT7byizALPEXyegGssTtCag8YWG55hEbmLsJs8+D5keS2pVpK0Cr7Dj7s2cshGkK1B5FMO42zCmVu2tnIkTmP2wREDmUtcHlD3WqqmLXjp+TUfp90Vt1oODFerMNnzfMk+8hwaBgEGniq0Tfo0S7JAweey6Bd30GxtkniiTWVkEm65s85KDDa5t7txE1A3DazbMF71m3bYz9wK23gU0ySQe3+3l0mpXSOaQ7lMkxpTTwtBK3Ss1Cm0lZJ6qYAzxL07BhXuJOHNvs1KdBs9AflZQI4/q5vh2iuMw+yY+867rqsbTk+WzjEFxHs84XSgkI2WMu+WLjZzMWZeyTW0iD5S8yg4+iPvUnwkW5b4Dmb0yA8/G2HblzElvhYjrFhfm+wFLNnN5URLcyyb7tWzZ8jlWXqJRkoXjtMtFOksLbkpTxtNGu2tnECekRtTyp5zV1K03bCSFzZu+gqKq4KENmJtNL9G6uJq1w9jg2waVH/ELsoo07mT4eeOFHv0Ma+7Es/VVeiMnvYXLdAu9arKMFih1+HdacR6ULU0ctLF7Slw5Yte5z9dmr0iU7w3NSo6L7E5VZtXUM9peJTUSaJ9pGLaKxAE4SWZSd1knx53DFRIYg2cPfW7YLuKvzjeZPa61YeN4VbwGy65il6ED8hyt7DdR0AHUl/yD7PKDVzd2srMfQVAllauqFRKUiRQXDqFFiMaQIHM2PG2ji0cr+Zi39HKMFcSHjAUyZc++xgJEj1cfYw71uZYKg6HIamBAvmmgD2lma3z4onPIL97wwR+5WDnvsnIbs9LAPGhlyRSDDkphe6gmLIfXZhSpBWDCX1pjB00pFVxTSlycxbWFzM6rhqzxhvO5FzQI+2rnaOotve5+6VCIGy41ZR7BjjIk3dZ8hSr2QkmH6haQY7ir6wDMtH/1Kn4yVXfbIkTl+8iaqAN9ZDfVU9d/OVjmCWodO9fFV/z6/LR0WfRsskO6pm3j/XyesoEyHz2mv5RLCbwhf2E47/apgvhr58A6Hf7xzHsJ+gHpAkM/YnA/qxd5D86ZuZ/CfUA/qdQj/+A4BgCAAj9EdmDIAAAYf6GLPjRlt8/YDCg32Ae4B9jHsAfcbH+tJ38/8DRLv/ndJxfb6n/RzfQ/6tYObeR/xq/gf7zrJzfHueEAH89K+dP4Zx9D+m/ckj/FOzZr4d0ub/D+k+iL/5lwwwhPyE48Z//vX6d6hDoXzLq/2tK2h9tF7/Qf/7/xTaDfsM2gyH4JwT7L1Q55J8ccuC31/3LHf2Xh/EPDujf3PT/414O/g5hif8EIyCAEwCCoCiO/JMgRBz5CQP/Hjxw4jd/hfgJByEcf93aEHzh/1NC4h9uKtCf4fv3iM9n8B+/sJDuF/8FqvS8/DtV6evV/6kg+AfYR9DrX2lsyG+oRjBI/NM5BP1Ftv9vrvVnB5QfYdy+CUnfhKRvQtI3IembkPRNSPomJH0Tkr4JSd+EpG9C0jch6ZuQ9E1I+o9vQtI3IembkPRNSPomJH0Tkr4JSd+EpG9C0jch6ZuQ9E1I+iYkfROSvglJ34Skb0LSNyHpm5D0TUj6JiR9E5K+CUnfhKRvQtI3IembkPRNSPomJH0Tkr4JSd+EpG9C0jch6ZuQ9E1I+iYkfROSvglJ34Skb0LSNyHpm5D0TUj69yEkwb/AQ/61eCQI+YY73INB/ElwB+KvhzugPxhQrL2fH3WnC6xYv4bn5x88GKpfDTU2bcMvb/w/S3UnP/jOjgCIjMff3/zlKmI/bs/H//Py8y/v+F/LCIAzV0WRzb+8/0yEfP3BX9/E/eP/cmO/Mb3b5dYfGdsvu8H/Bpf4rwSuv/0o/tue8OS2uPsefr9ZvKvS9PkzP+R9/ZoI9gdEAPy3m/SB33k/CPzA4H+Br/zxjv8jLtq/wE70b0P5taH8JoLgrx/j2/5SW/kRv+1PsZUvGsT9Lh2v//7WMA9r/DeqxR9kHCj4W27I78PI6y+1jB9h4P5Uy2CG4t/fMv4MU/hxoPhrreFPIcj93y4mERT+jeh7/YT80zCf3wrI31/rTxaQ8D+Ae/u7NYP/PTD1B770e4bq71ipOPSGMeyP8RXi1w8Vgn8fNVHoR47yA6DjH+Ys8I8obP/T0In9KHRSbZw07+F4uFjD80+Z3f92Q/oVUN3lUV4ck+W3a36LsJ/f/QcsBkb+ytAKQ/+9V/47ILbGZzMAXXmUbn0AmS+GB9uk2W7JusX9f+rzDyPSpPrAqOKyxZbnFwqPUj02+Corua/um/vdecK2fsRSxwW1wAOfVYLHPLd6wAyAMMz7KvWIZY3DHrpRIzhLkRZjAnrQovn9EbORaJMwHADWrvSVwUsiUBf9WcSAO/IMXl/LSz+pImQ/KkPdv/ZM0lVhE/FWU4o2N8fQCqV9lAfgz5sO9vkln8s4xHyV9NJnkqs5gtJWtNkKWudINMgSuW9bTpqWNT0L6XUojYNtViia18IVrNIeHj3e0dj+5y/5IXDoM0z0MpMD0+9tlju6E8rrJSIMqcVtzRE1A71dh9m6c9eexc0r5XAQ9olBEA2DfL2NukzBzzlIT2sZs+7p/erTSQdNUU8TqS2iLS8fglCj3TPpXbovzIZPu8CVs8CED7QHaeSYo1
+3eyY9DSDDs3dPkO/ruF/rbwM1Wo1o6siVztswDOBLOha3WGvwheSh136gF7OYugXC3gII25JKDBgqYbpammXtTb1rLyO8OJETqPR4dp8j7M6dTqcz4DgKTW34W04khOSliRsD87jm3ZrTkBQ1+Gev522cvaL3Zx39+ZHtOVUycxI/lpZfwvMdy+mc34h0CPI4BpVBdMvc1su85i3PqivRtKqvpBnordaYE1I2UT2bTL3k4Jd3frjJhg64nZuUreinWkXvYCa8Tbwzw3nf0c1x3HaMgwwRT1GLuCX65ETfJo8pUglFaq5uuXwOioU3HQ60ce9KFQ7VpjHuwFBBLN/PY8dsgGM/5rpn3dMkvj7PmNXPkAsFeGivOcTrFYxe2HyVzXjVUANG6fKz0YG+Xjxt6400vaV7nF3Dl0N0dbO3pBB4gCRmY2XUpSsuzvGsmIbn+0R7UdY3i3OP2tnK/WhJCtSk12e+Px/+7ENmAQopDtAc6FLI4QA269iiNUvUOqdi8YQyp+C9adFs0VP2hTsMXfpQaTKemQ7Fhi1S0djl9GA3ZiwFu0vkhTG17es1QBZ1NeNaOIoTbfQBRmjXvjZqxhLLmV7ugG8MIMnGIilOXxv4Iu2JCp4Gj7zbT3YALdKRUqpe0Qx6woyb5QTAVc6uNSLMbK0El/nsRz1YqbSgL3qN61cbk/GqkE6rxwKHP8T3WNzB5unhp5oxSADUGNKIFfBuHiA92sVGpym+X7IdXcA2RCT1YAIL9LJZlLq0hKrKQx3Iqymy4ojK+jDvtyQZOYm+lKChqaI+7FMiDdtjbRazDTwhxRShkN1AuJxnoxxoGIq0FzPy7stv2eqXqIofJv8aL5TtLyL+UMNCmf2cU9nESguMNYbnqCyZjaIe9fk46Cu79dZeq6ZVYWGUxLW1KSVgofygW9r8SYRz3e3maYrNPqXEPAtI7JpyFsrhzJvkvdLwoPD2koi2uHQQAQ+vmIML1KT0TFI/7L49hQv6dHyNs22BiEP5vmTh8BQxrBbA4VuLoXie94BRYpS84mJb8OTGNN1iiY3wDZTNsNn7Vh94yqg9IhxVY34sKYSW64Dl/iwxwoCDAVVFOa3CrJL0TNJXCDyrdCePUsJ4xmZWIRzZQtI/zSG/uTpM5pPFa+bYQz8sCkkpWh0v/KouN4coo4qmmnejld4QWPV8aIWgNhX9LO1NIdft/ZYEi8tEXVlvVdXQ/Yy+umfr8xJIuWYZd4BV7We36sfOgMKXaZderY6pFc3LTpY3TaA6XYGNsMqjjwMguSFmsTo6CHU/nw0V6ImIlOGx+MqRIJjTx94M7aQ52iX5Vl8WI/kZgZiWkp7b7l88SLd8tv8BQmTrUdOQoyFiKg0oeTTHsJ1QdbOBHxPADcRHdGUoPV+VLgp884prlZyGsifNcAjdf6ic8HTOK8tCjA5xvqCCTg9A/Nh+YJF1LDZAShc8mufFUpWBYpn2OVugqhZea9Nvd1195TAdiSHN1wE0je/ALn19bLHN3vOFPs1AvmbJaGwlVkOL8geJn610rVxffMBa5Ol/pLMT5klI47wWKVzmytfQdh1rMbbDxCRpRSk+IcmbcWlzNkDSSgorHvbFERmzeO+77jI5D0refGFFC1wi7UyxhTf6ieU7axJEYibqyUTnSjOeeivy/FljomuRVSd3TOhNelYcfe5a+1GRE4CSxD3Z/biboesynhFNpRjoPoqUq3TeTzU0xNcV2R95yyv0vgxSJkLlMhW/X4iyE+J4lR/xcZ0nwqKTlHTBUe72Q+bRLQriY0PYvCp38iWMKa9nJtuNMtTDW2QX803ZwOg4GuohuBSWII7P/hue9Xd1un8qzjSGRnmWOnkYRUnOVMoSU2dZi1zz6IuRgx+NYyyDd52CvfAM4oQrwJpA1xN7Y/iZrWK9Fr/gYMpnyH2+ev/VxNFu9/cVebOe1h1m6vUi5+u+G+/p8RYXgh2voxShrrNKSrQqN6teXRNXYAxH5sSm3AC0NdPe2VWbhK4fbHp9L4tL6x0g7o8NEOC7rRTxoyGtjwnW+7aFxAbszOSFbdrKJzPGE6If9bh28fHiJ2vaKQq+Mxv6MU8u2nnf1N9HcpkhCqXsrbtWyzSjL6Io3jnL1p+S9ewTiiTjIYLpGU8E1yXoMDLWQnGY8ftVeqAtMkHXgTFkoGc+iBqnUFQvBOprfPbhSEOGeRsC14IffmSaVv3IaWprwLwsaXIO3JqOGAvijvTPunm2S0Oirc++J2x61ZGmjsQdftgP+vwO20XIdN92nTBXxNXQgoOvTzkWGw6A6S5voBCCzLX5DcOGQkTBV7sV/Ad+xUI/c/YE2UA0oWuOti1lw+Mu2JABbyFOvfY94+FQcE3T6pZwkE+CnIUxgo3hzJ8uSu3K5XKZPTNM6z5fnx1U+rOabBX9Q++4xvPypXzrAAOLwR6dXPiE3VGlypKhQn250znYTAJycL3vpVqNW26jZRf+WNDzeeT6uKq6ybf0vl+FHAGMBjSyuK9YZ3Ay9LtAdYA0I5BGX9WMzT3ovNgYZPzPMfWgl5BAh1imSFGdr7TNQWNINpwGXFpNkmXQMWzbubOuL0GDQUK8XxmMZk7W+oKtl7BgFIEZxS7UmVPDImT7zJTFaMGScgzHOOLmSgaQF9Ua2lj4FEBm55b679LuCZ48iTbyBLIQy/B11CWxtAAKjPLBBixZRwyWO++IeNpuOZAIaaYprOrW4o79bMyVPQbQSJwKc8wuiQZ0fWX1KUv3OHTrSyrFptTyrcQ10IQzGbz66j/NpSkbdLzbX7nIcLgdjPD0LGNQRBz3PhqjWR77hBmh2hukONLJXyVUEA2lj29Gf5rBkT7kBkZPmzslocTAuKbTfuJ95c7aG5gpkNwSW7C4gFj1ZJv9kYx9eX9i8940kExK+iqnmcjjYzug18UWYXWn4We7Ogcgahwj+ZuDM1Bt+3WMPBNWYgQLHfnBQ5wv9vGlV/qe46e1dqrsbqh8z2mZh+XhGkBvtnGdu0LoL0Nj4mxS6SvvRj0Y1OGzAShVeSjC86fxg9qi8QTd3ckFSHin0hK82WUV4Cd9ZbtRvRcI196mtzt9Rk0vcnaAJxxs4rtaCQRWkGC03v4AI1SzPZtAdkOn/BdUvo7l6YiDuU2z8rawF/ndUZo6z36rASX3Pshkd9kpKwCz5HrkcDdykJu5oSHMugyWeRvi6UExJHs4DM8mIm5BEp5Fm94lrU+v2tPeZPlxjh23Dv3Uie5VUER3S4JcL30Uuq+CgTIg1WtOTg2RKBfuOBjMHBmPhwvxqgiV8VaoTgdGO8iZIfIpd4zvn8avCQoX6Lor0pkPbdvUIJjyqLreaPWO5rszLmJ84B2Xt1IaTKj0NFs2pytZxToRz24IMfhMhV9nQdRKyOzuD2yOmrDPuFV5qOMyWS/PdrBB9MuCjh1gZRxIQ0yompxMtHncdGKeAShPfj/Srh/g7Ofy51jz83JNT7IDeEgvX/XSonse+jlUlg9iHhyL/iSz8f7S1NbhoFl+ulUZw+kBjJX7VSTAp/ik8vjFrQ7QwT/jlyXLR
dm5kYpbNf+BXREY/BvkPv7Do4p+NIX9+tPmWX6EMv9TFjTU7/m3/41l/Hq5HMV+aBn4XzoD96d0yvzm7LAcT7Ik+dF8+BtHEfTf4VSk340dCP7zZxtg//21/uzlkB/10/zRw56iGZ4i/8AyyP+9w/67oUL/+WHHkP/+Wn/2sP8F7TFA8p/j/PcfwlmMve8R/12+MLfsef7f2QFCX7/RDcQPs8Nfuz7zF7TI/NheACBJfmQvXNXHz23/rXHi227uMAL99GvLQX98OCb+gybcf8Jy7pfz8IzJ3wPT/c3Kn4UfzP6/</diagram></mxfile>
2110.14880/main_diagram/main_diagram.pdf ADDED
Binary file (57 kB). View file
 
2110.14880/paper_text/intro_method.md ADDED
@@ -0,0 +1,30 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Deep Neural Networks (DNNs) have been used pervasively in a wide range of applications such as facial recognition [@masi2018deep], object detection [@szegedy2013deep; @li2022few], autonomous driving [@okuyama2018autonomous], and home assistants [@singh2020voice; @zhai2021backdoor]. Meanwhile, DNNs have become increasingly complex: training state-of-the-art models requires enormous data and expensive computation. To address this problem, vendors and developers have started to provide pre-trained DNN models. Similar to software shared on GitHub, pre-trained DNN models are published and shared on online venues such as the BigML model market, the ONNX zoo, and the Caffe model zoo.
4
+
5
+ Since the dawn of software distribution, there has been an ongoing war between publishers sneaking malicious code and backdoors into their software and the security personnel detecting them. Recent studies show that DNN models can contain similar backdoors, induced by contaminated training data. A backdoored model can sometimes even outperform a regular model on untampered test inputs; however, on inputs tampered with a specific pattern (called a trojan trigger), it suffers a significant accuracy loss.
6
+
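+ For intuition, a trojan trigger is typically realized as a small patch stamped onto the input. Below is a minimal sketch; the blend-by-mask form is the common convention in the backdoor literature rather than something this paper fixes:
+
+ ```python
+ import numpy as np
+
+ def apply_trigger(image: np.ndarray, trigger: np.ndarray, mask: np.ndarray) -> np.ndarray:
+     """Stamp a trojan trigger onto an image.
+
+     `mask` is 1 where the trigger pattern replaces the original pixels and 0
+     elsewhere; all three arrays share the image's shape.
+     """
+     return image * (1 - mask) + trigger * mask
+ ```
+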
7
+ There has been a significant amount of recent work on detecting backdoor triggers. However, those solutions require access to the original poisoned training data [@cclustering; @spec; @huang2022backdoor], the parameters of the trained model [@chen2019deepinspect; @guo2019tabor; @liu2019abs; @nc; @wang2020practical; @dong2021black; @kolouri2020universal], or the predicted confidence score of each class [@dong2021black]. Unfortunately, it is costly and often impractical for the defender to access the original poisoned training dataset. When DNNs are deployed on safety-critical platforms [@fowers2018configurable; @okuyama2018autonomous; @li2021invisible] or in cloud services [@fowers2018configurable; @chen2019cloud], it is also impractical to access either their parameters or the predicted confidence score of each class [@chen2019cloud; @chen2020stateful].
8
+
9
+ We present the *black-box hard-label backdoor detection* problem, where the DNN is a complete black box and only its final output label is accessible (Fig. [1](#fig:motivation){reference-type="ref" reference="fig:motivation"}). Detecting backdoor-infected DNNs in such a black-box setting becomes critical due to the emerging deployment of models on embedded devices and remote cloud servers.
10
+
11
+ <figure id="fig:motivation" data-latex-placement="!t">
12
+ <embed src="figures/motivationff.pdf" />
13
+ <figcaption>An illustration of the black-box hard-label backdoors.</figcaption>
14
+ </figure>
15
+
16
+ In this setting, the typical optimization objective of backdoor detection [@chen2019deepinspect; @liu2019abs] becomes impossible to solve due to the limited information. However, through a theoretical analysis, we show that the backdoor objective is bounded by an adversarial objective, which can be optimized using Monte Carlo gradient estimation in our black-box hard-label setup. Further theoretical and empirical studies reveal that this adversarial objective leads to a solution with a highly skewed distribution: a singularity is likely to be observed in the adversarial map of a backdoor-infected example, which we call the *adversarial singularity phenomenon*.
17
+
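+ To make the hard-label constraint concrete, the sketch below shows a two-sided Monte Carlo (zeroth-order) gradient estimator of the kind such an optimization can rely on: only function values are queried, so a surrogate loss built purely from the model's hard output labels suffices. The `loss_fn` interface and the sampling parameters are illustrative assumptions, not the paper's exact formulation:
+
+ ```python
+ import numpy as np
+
+ def estimate_gradient(loss_fn, x, sigma=0.1, n_samples=100, rng=None):
+     """Two-sided Monte Carlo estimate of the gradient of loss_fn at x."""
+     rng = np.random.default_rng() if rng is None else rng
+     grad = np.zeros_like(x, dtype=float)
+     for _ in range(n_samples):
+         u = rng.standard_normal(size=x.shape)              # random probe direction
+         delta = loss_fn(x + sigma * u) - loss_fn(x - sigma * u)
+         grad += (delta / (2.0 * sigma)) * u                # finite difference along u
+     return grad / n_samples
+ ```
+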
18
+ Based on these findings, we propose the *adversarial extreme value analysis* (AEVA) algorithm to detect backdoors in black-box neural networks. AEVA is based on an extreme value analysis [@evt-pot] of the adversarial map. We detect the adversarial singularity phenomenon by examining the *adversarial peak*, *i.e.*, the maximum value of the computed adversarial perturbation map. A statistical study reveals that there is around a $60\%$ chance that the adversarial peak of a random sample from a backdoor-infected DNN is larger than the adversarial peak of any example from an uninfected DNN. Inspired by univariate extreme value theory, we propose a global adversarial peak (GAP) value, obtained by sampling multiple examples and taking the maximum over their adversarial peaks, to ensure a high detection success rate. Following previous works [@dong2021black; @nc], the Median Absolute Deviation (MAD) algorithm is then applied on top of the GAP values to test whether a DNN is backdoor-infected.
19
+
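+ As a rough illustration of this pipeline, the sketch below computes per-sample adversarial peaks, aggregates them into a GAP value, and flags abnormally large GAP values with MAD. The perturbation maps are assumed to be precomputed, and the anomaly-index threshold of 2 follows common practice rather than a value fixed by this excerpt:
+
+ ```python
+ import numpy as np
+
+ def global_adversarial_peak(perturbation_maps):
+     """GAP: the maximum, over sampled examples, of each example's adversarial
+     peak (the largest absolute value in its perturbation map)."""
+     return max(np.abs(p).max() for p in perturbation_maps)
+
+ def mad_outliers(gap_values, threshold=2.0):
+     """Indices of classes whose GAP value is an abnormally large MAD outlier."""
+     gaps = np.asarray(gap_values, dtype=float)
+     median = np.median(gaps)
+     mad = 1.4826 * np.median(np.abs(gaps - median))  # consistency constant for Gaussian data
+     anomaly_index = np.abs(gaps - median) / (mad + 1e-12)
+     return np.where((anomaly_index > threshold) & (gaps > median))[0]
+ ```
+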
20
+ Through extensive experiments across three popular tasks and state-of-the-art backdoor techniques, AEVA proves effective at detecting backdoor attacks in black-box hard-label scenarios. The results show that AEVA can efficiently detect backdoor-infected DNNs, yielding an overall detection accuracy $\geq 86.7\%$ across various tasks, DNN models, and triggers. Rather interestingly, AEVA yields performance comparable to two state-of-the-art white-box backdoor detection methods, even though it is a black-box method with access to far less information.
21
+
22
+ We summarize our contributions below [^1]:
23
+
24
+ 1. To the best of our knowledge, we are the first to present the black-box hard-label backdoor detection problem and provide an effective solution to this problem.
25
+
26
+ 2. We provide a theoretical analysis showing that the backdoor detection objective is bounded by an adversarial objective. We further reveal the adversarial singularity phenomenon: the adversarial perturbation computed from a backdoor-infected neural network is likely to follow a highly skewed distribution.
27
+
28
+ 3. We propose a generic backdoor detection framework, AEVA, which optimizes the adversarial objective and performs extreme value analysis on the optimized adversarial map. Thanks to Monte Carlo gradient estimation, AEVA is applicable to the black-box hard-label setting.
29
+
30
+ 4. We evaluate AEVA on three widely-adopted tasks with different backdoor trigger implementations and complex black-box attack variants. All results suggest that AEVA is effective in black-box hard-label backdoor detection.
2112.00061/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2112.00061/paper_text/intro_method.md ADDED
@@ -0,0 +1,19 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Recently, there has been a growing and widespread concern about 'fake news' and its harmful societal, personal, and political consequences [1, 21, 28], including for people's own health during the pandemic [7, 8, 34]. Misusing generative AI technologies to create deepfakes [14, 24, 40] has further fuelled these concerns [6, 16]. However, *image-repurposing*, where a real image is misrepresented and used *out-of-context* with another false or unrelated narrative
4
+
5
+ <span id="page-0-0"></span><sup>&</sup>lt;sup>1</sup>For code, checkpoints, and dataset, check: https://s-abdelnabi.github.io/OoC-multi-modal-fc/
6
+
7
+ <span id="page-1-0"></span>to create more credible stories and mislead the audience is still one of the easiest and most effective ways to create realistically-looking misinformation. Image-repurposing does not require profound technical knowledge or experience [\[2,](#page-8-9) [31\]](#page-8-10), which potentially amplifies its risks. Images usually accompany real news [\[49\]](#page-9-2); thus, adversaries may augment their stories with images as 'supporting evidence' to capture readers' attention [\[17,](#page-8-11) [31,](#page-8-10) [57\]](#page-9-3).
8
+
9
+ Image-repurposing datasets and threats. Gathering large-scale labelled out-of-context datasets is hard due to scarcity and the substantial manual effort required. Thus, previous work attempted to construct synthetic out-of-context datasets [23, 43]. A recent work [31] proposed to automatically, yet non-trivially, match images accompanying real news with other real news captions. The authors used trained language and vision models to retrieve a close and convincing image given a caption. While this work contributes to misinformation detection research by automatically creating datasets, it also highlights the threat that *machine-assisted* procedures may ease creating misinformation at scale. Furthermore, the authors reported that both defense models and humans struggled to detect the out-of-context images. In this paper, we use this dataset as a challenging benchmark and leverage external evidence to push forward the automatic detection.
10
+
11
+ Fact-checking. To fight misinformation, huge fact-checking efforts are undertaken by different organizations [37, 38]. However, they require substantial manual effort [53]. Researchers have proposed several automated methods and benchmarks to automate fact-checking and verification [36, 51], but most of these works focus on textual claims; fact-checking multi-modal claims has been under-explored.
12
+
13
+ Our approach. People frequently use the Internet to verify information. Mirroring this practice, we aggregate evidence from images, articles, and different sources, and we measure their consensus and consistency. Our goal is to design an inspectable framework that automates this multi-modal fact-checking process and assists users, fact-checkers, and content moderators.
14
+
15
+ More specifically, we propose to gather and reason over evidence to judge the veracity of an image-caption pair. First, we use the image to find its other occurrences on the internet, from which we crawl textual evidence (e.g., captions) that we compare against the paired caption. Similarly, we use the caption to retrieve other images as visual evidence to compare against the paired image. We call this process the 'multi-modal cycle-consistency check' (sketched below). Importantly, we retrieve evidence in a fully automated and flexible open-domain manner [9]; no 'golden evidence' is pre-identified or curated and given to the model.
16
+
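+ A schematic sketch of this gathering step is given below; the two retrieval callables stand in for an open-domain search backend and are hypothetical placeholders, not a real API or the paper's implementation:
+
+ ```python
+ from typing import Callable, List, Tuple
+
+ def gather_evidence(
+     image: bytes,
+     caption: str,
+     search_by_image: Callable[[bytes], List[str]],  # placeholder: reverse image search
+     search_by_text: Callable[[str], List[bytes]],   # placeholder: text-to-image search
+ ) -> Tuple[List[str], List[bytes]]:
+     """Evidence gathering in the multi-modal cycle-consistency check: the image
+     retrieves textual evidence (captions found at its other occurrences) and
+     the caption retrieves visual evidence (other images)."""
+     return search_by_image(image), search_by_text(caption)
+ ```
+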
17
+ To evaluate the claim's veracity, we propose a novel architecture, the Consistency-Checking Network (*CCN*), which consists of 1) memory network components that evaluate the consistency of the claim against the evidence (described above), and 2) a CLIP [39] component that evaluates the consistency of the image-caption pair itself. As the task requires machine comprehension and visual understanding, we perform different evaluations to design the memory components and the evidence representations. Moreover, we conduct two user studies to 1) measure human performance on the detection task and 2) understand whether the collected evidence and the model's attention over the evidence help people distinguish true from falsified pairs. Figure 1 depicts our framework, showing a falsified example from the dataset along with the retrieved evidence.
18
+
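+ As a loose illustration of the CLIP component alone (the memory-network components are beyond this sketch), the snippet below scores image-caption consistency with an off-the-shelf CLIP model from Hugging Face; the checkpoint name and the use of cosine similarity as the consistency score are our assumptions, not the paper's exact configuration:
+
+ ```python
+ import torch
+ from PIL import Image
+ from transformers import CLIPModel, CLIPProcessor
+
+ model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
+ processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
+
+ def clip_consistency(image_path: str, caption: str) -> float:
+     """Cosine similarity between CLIP embeddings of an image and a caption."""
+     image = Image.open(image_path).convert("RGB")
+     inputs = processor(text=[caption], images=image,
+                        return_tensors="pt", padding=True, truncation=True)
+     with torch.no_grad():
+         img = model.get_image_features(pixel_values=inputs["pixel_values"])
+         txt = model.get_text_features(input_ids=inputs["input_ids"],
+                                       attention_mask=inputs["attention_mask"])
+     img = img / img.norm(dim=-1, keepdim=True)   # normalize both embeddings
+     txt = txt / txt.norm(dim=-1, keepdim=True)
+     return float((img * txt).sum())
+ ```
+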
19
+ Contributions. We summarize our contributions as follows: 1) we formalize a new task of multi-modal fact-checking; 2) we propose the 'multi-modal cycle-consistency check' to gather evidence about a multi-modal claim from both modalities; 3) we propose a new inspectable framework, *CCN*, that mimics the aggregation of observations from the claim and world knowledge; 4) we perform numerous evaluations, ablations, and user studies, and show that our evidence-augmented method significantly improves detection over the baselines.
2201.00248/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2021-10-02T19:21:07.243Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36" etag="vrzPfo3ZNvV33LoxKrdB" version="15.2.4" type="google"><diagram id="swKlHjNR9AgqnxMq6xx5" name="Page-6">7Vzdcps4FH4aX8aDJBBwmThJO7PtNm120mZvOhhkmyk2XkwS20+/EiB+hGwwBhO3zkwSS2AJ9H3nVwcGaDRffwis5eyz7xBvABVnPUC3AwgBBIj+Yz2buAebatwxDVwnOSnreHS3JOlUkt4X1yGrwomh73uhuyx22v5iQeyw0GcFgf9WPG3ie8VZl9Y0mVHJOh5tyyOl0767TjiLew2oZ/0fiTud8ZkBNuMjc4ufnAyxmlmO/5abC90N0Cjw/TD+NF+PiMcWj68L3gbg1f559fBjayxu1t//ms+tq/gq7w/5SnoLAVmEjYe+Xqhj7cODu1S/rv9eLyfjmRleoQTdV8t7SRYsudlww1cw8F8WDmGjgAG6eZu5IXlcWjY7+kY5Q/tm4dxLDqeLpNDGxF+ECSMga6/CwP9FRr7nB9HQaGxoqhad6Xpern9i2MS2ab9jrWbp1PHXOYpq8rUv9FrccMPnSO6HBCFZCwyoWD6QYkqFgfhzEgYb+r1kFMT5lcgB1PBQi3veMl6ltJ8VOJWQzUq4PE1Hz/CiHxLIDoBPqUaPArJkH0MKFdn6bLibJQlcegEkyPc/ZJ1VIE/cNeGivhd0TQY6tg0ynpRBdyxiTBjogR9aoesvaLcZ0SYerUt0gaZzNBN8gQol+GJNAi9SOoIXSODFXpgscqQ0+erh/16YHqILhBRFtU0z34Wn7P/dwqbaPeBDjAN+gPfQa4zH5d0Cleiah4K0F7Bd+AsiwJp0WZ47ZXDaFKOIYAxBlyrp6+TA3HUcNo2UeZkCUqo0DGvnKBX/dMkbvagVgKGUOKNKVALsijKwTBms+PT33NDUekATGUUwzbICOCmYBxlnpblxBmX06+jpzmDQBRx0NNTLUABNggXGQ6h1BIfaqjp+8D3Xphd1/xSNd9HBTemiKqZouyVa2NBPKLhateCShXPNwhuGiGetVq4t+Fg1V7XoFKM94rpbwOOLIw6PpI7EJbfqMhHlfQHxqJv3WpxTBkUyw4PvRlLGtYQqOOQZDfggK/8lsEnyvXxoJA4lWPHUi+cDhVYwJWFpoIgb6Y03pwu+0KVzukAM26ILNGGvdNH7pEtTN+HM6KKrIluacQXAioE65opRzRX7JXhNcywXPXOsWQJinkhpxpySfTNBybXdQR6KoLXJnbZkJ6z26EZTmEpPp8roGA/aKjnNanL+lvGNtnO5K+MbHYpWq71skyyb2Dy+ud0srLlrry6RTXOiGECIbHL49xXbAFlW8uJ+tOytChoZAaOpFdGEgXRRgXTsgQBpStK6pCRrqQCo6UX8lJ5zkkCSlKRQaqOZFQ70m2Cg3zaBlkksHmMNlyV5MpnAaDPwZCB3BqYQOKqgnKdSjTKYCKT5z/bxlGU133Hyga5/sPkx4NEKbTwzFIcab96uE1Tj1maQ3zlsaAagEivJPesI+406jGLKHIl7krXthbDNnabGWg45ShesIYHKHcQbQJKXxcr2rPVVpzUPCiiABHRd4n0iCa1xh2GKJFvKtji1O2qElqyG4eJaNNnthFhijE7qWdTIVb0rS7R2wx+Z7aGtZ26W6OfMDLFGS1Yo1uL7rJAih/w0VkhVxSoapKKS69I0AaZC3IkpUrEqnadbUyTJfeXc6IcdbnQ76TD8ztJhxfVHSlkRYVwmrdqVIoKSRFgOm29/EDYqUkSJ7h2dyqq4DBOek2QHruIywmt6Al2WdTlhmcd4yzAWPIp20ph/hOOYVo1z0sAyaVDqIhYcx67cRu5MdEGb7YUqjakCyhrmHZClRuWd1BVdUZ8krOuhRrpe8FDv7zG+kSa6hSN535V7otz3zGVEdnqiaR4lbdTLo9DbTi4f8fvNtTv3bxMc3nk6BugiqdWmXjCNt8Shau4Ct+WqwqYZwoswHCEMve08qVBk2xAZSvoDGtJYTH3vH7ZrSksTgcElcVTTuxMCc6AN+RZ0b889SLOCn34+nB+mwNijCTtzw7BYRI0kJSmnhVRS58gg/XaBtJ6YqsUEryqJ3E8LqCzB2yAEi2IvafC+nLktxl/vr+IotpWdqgFhU0BRh7xwujfayMoED6MNMPfTJmaNwq6T/sXKU45F7DnJY2b4LcgIzB7IKCl/UzQZHcFpn8qr8dR1dTyUgPfHBENR+95lS338TlhfwZGGip4v9YSHinJ0cKQZBw3bcXCEdifYx7VUobFb2X76OdBvxtaK8D2UVDmOz1Uz9uHaaagYTQOjZxvNr+fA2nHbjkxIrusIej3O6KI6B5rtXYNd24G/WrU02D/W6ldbY52xP1F4mUuFCHXuXGjCQxiyfc2uROgZBtdP/379PELbreqvv0z/+ajWeZdLneqXe2vueuyenkjgWAureqlHo2SpRZMtGnRxb0o3x0r2Wp9j3Y7SDhfB9tH1+zW2FOJ6sr48CuFVB2nd/+HPGQoDifvv7TkNUva29FDIO2av3oy+mTedOdDPBf+5k3rlSton9KjeSIN9iodQJQyNpuKhCeXGymnF47D9ZHlsuF82ZOmiprKRGt+ibIBzDduA+CitWNRem0biiyLwaWl02E7shUYt00ismG6sjcQsa82HLNqikez1NHUe9G0QrKmykqlV6C8H7IWhluOSqDb3bGMazxoT78ayf02jk8QrOU7eBh3FPkB8AF1S+m+YsoqrBtEPbWbvSI0ZnL1pFt39Dw==</diagram></mxfile>
2201.00248/main_diagram/main_diagram.pdf ADDED
Binary file (32.1 kB). View file
 
2201.00248/paper_text/intro_method.md ADDED
@@ -0,0 +1,147 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Deep Reinforcement Learning (DRL) has the potential to be used in many large-scale applications such as robotics, gaming, and automotive systems. In these real-life scenarios, the ability to utilize knowledge learned in past tasks to facilitate learning in unseen tasks is essential for agents, a setting known as Transfer RL (TRL). Most existing TRL works (Taylor & Stone, 2009; Zhu et al., 2020) focus on tasks with the same state-action space but different dynamics/rewards. However, these approaches do not apply when the observation space changes significantly.
4
+
5
+ Observation change is common in practice, as in the following scenarios. (1) Incremental environment development. RL is used to train non-player characters (NPCs) in games (Juliani et al., 2018), which may be updated frequently. When new scenes, characters, or obstacles are added to the game, the agent's observation space changes accordingly. (2) Hardware upgrade/replacement. For robots with sensory observations (Bohez et al., 2017), the observation space can change (e.g., from text to audio, from lidar to camera) as the sensors change. (3) Restricted data access. In some RL applications (Ganesh et al., 2019), the agent's observation contains sensitive data (e.g., inventory) which may become unavailable in the future due to data restrictions. In these cases, the learner may have to discard the old policy and train a new policy from scratch, since the new policy has a significantly different input space even though the underlying dynamics are similar. But training an RL policy from scratch can be expensive and unstable. Therefore, there is a crucial need for a technique that transfers knowledge across tasks with similar dynamics but different observation spaces.
6
+
7
+ Besides these existing common applications, there are more benefits of across-observation transfer. For example, observations in real-world environments are usually rich and redundant, so that directly learning a policy is hard and expensive. If we can transfer knowledge from low-dimensional and informative vector observations (usually available in a simulator) to richer observations, the learning efficiency can be significantly improved. Therefore, an effective transfer learning method enables many novel and interesting applications, such as curriculum learning via observation design.
8
+
9
+ In this paper, we aim to fill the gap and propose a new algorithm that can automatically transfer knowledge from the old environment to facilitate learning in a new environment with a (drastically)
10
+
11
+ <sup>†</sup>{ycs,rzheng12,xywang,furongh}@umd.edu ‡andrew.cohen@unity3d.com
12
+
13
+ <sup>∗</sup>The work was done while the author was an intern at Unity Technologies.
14
+
15
+ ![](_page_1_Figure_1.jpeg)
16
+
17
+ **Figure 1:** An example of the transfer problem with changed observation space. The source-task agent observes the x-y coordinates of itself and the goal, while the target-task agent observes a top-down view/image of the whole maze. The two observation spaces are drastically different, but the two tasks are structurally similar. Our goal is to transfer knowledge from the source task to accelerate learning in the target task, without knowing or learning any inter-task mapping.
18
+
19
+ different observation space. To meet more practical needs, we focus on the challenging setting where the observation change is: (1) unpredictable (there is no prior knowledge about how the observations change), (2) drastic (the source and target tasks have significantly different observation feature spaces, e.g., vector to image), and (3) irretrievable (once the change happens, it is impossible to query the source task, so the agent cannot interact with both environments simultaneously). Note that, unlike many prior works (Taylor et al., 2007; Mann & Choe, 2013), we do not assume knowledge of any inter-task mapping. That is, the agent does not know which new observation feature corresponds to which old observation feature.
20
+
21
+ To remedy the above challenges and achieve knowledge transfer, we make a key observation: if only the observation features change, the source and target tasks share the same latent space and dynamics (e.g., in Figure 1, $\mathcal{O}^{(S)}$ and $\mathcal{O}^{(T)}$ can be associated with the same latent state). Therefore, we first disentangle representation learning from policy learning, and then accelerate the target-task agent by regularizing the representation learning process with the latent dynamics model learned in the source task. We show by theoretical analysis and empirical evaluation that the target task can be learned more efficiently with our proposed transfer learning method than from scratch.
22
+
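+ To make this concrete, here is a minimal PyTorch-style sketch (assuming vector observations and vector-valued actions) of the training signal we have in mind: a latent dynamics model learned in the source task is frozen and used to regularize the target-task encoder. The module shapes, the squared-error form of the penalty, and the weight `lam` are illustrative assumptions, not the paper's exact objective:
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class LatentDynamics(nn.Module):
+     """Latent dynamics model: predicts the next latent state from (z, a)."""
+     def __init__(self, d_latent: int, d_action: int):
+         super().__init__()
+         self.net = nn.Sequential(nn.Linear(d_latent + d_action, 256), nn.ReLU(),
+                                  nn.Linear(256, d_latent))
+
+     def forward(self, z, a):
+         return self.net(torch.cat([z, a], dim=-1))
+
+ def transfer_regularizer(encoder, source_dynamics, obs, act, next_obs, lam=1.0):
+     """Penalty aligning target-task representations with the source-task latent
+     dynamics (freeze the dynamics beforehand via p.requires_grad_(False))."""
+     z, z_next = encoder(obs), encoder(next_obs)
+     z_pred = source_dynamics(z, act)
+     return lam * ((z_pred - z_next) ** 2).mean()
+
+ # Usage sketch: total_loss = rl_loss + transfer_regularizer(...)
+ ```
+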
23
+ **Summary of Contributions.** (1) To the best of our knowledge, we are the first to discuss the transfer problem where the source and target tasks have drastically different observation feature spaces, and there is no prior knowledge of an inter-task mapping. (2) We theoretically characterize what constitutes a "good representation" and analyze the sufficient conditions the representation should satisfy. (3) Theoretical analysis shows that a model-based regularizer enables efficient representation learning in the target task. Based on this, we propose a novel algorithm that automatically transfers knowledge across observation representations. (4) Experiments in 7 environments show that our proposed algorithm significantly improves the learning performance of RL agents in the target task.
24
+
25
+ # Method
26
+
27
+ Basic RL Notations. An RL task can be modeled by a Markov Decision Process (MDP) (Puterman, 2014), defined as a tuple $M = \langle \mathcal{O}, \mathcal{A}, P, R, \gamma \rangle$ , where $\mathcal{O}$ is the state/observation space, $\mathcal{A}$ is the action space, $P$ is the transition kernel, $R$ is the reward function, and $\gamma$ is the discount factor. At timestep $t$, the agent observes state $o_t$ , takes action $a_t$ based on its *policy* $\pi : \mathcal{O} \to \Delta(\mathcal{A})$ (where $\Delta(\cdot)$ denotes the space of probability distributions), and receives reward $r_t = R(o_t, a_t)$ . The environment then proceeds to the next state $o_{t+1} \sim P(\cdot|o_t, a_t)$ . The goal of an RL agent is to find a policy $\pi$ in the policy space $\Pi$ with the highest cumulative reward, which is characterized by the value functions. The value of a policy $\pi$ for a state $o \in \mathcal{O}$ is defined as $V^{\pi}(o) = \mathbb{E}_{\pi,P}[\sum_{t=0}^{\infty} \gamma^t r_t | o_0 = o]$ . The Q value of a policy $\pi$ for a state-action pair $(o,a) \in \mathcal{O} \times \mathcal{A}$ is defined as $Q^{\pi}(o,a) = \mathbb{E}_{\pi,P}[\sum_{t=0}^{\infty} \gamma^t r_t | o_0 = o, a_0 = a]$ . Appendix A provides more background on RL.
28
+
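+ As a small worked illustration of these definitions, the discounted return $\sum_{t} \gamma^t r_t$ of one sampled trajectory can be computed as below; $V^{\pi}(o)$ is then the average of this quantity over rollouts that start at $o$ and follow $\pi$ (a sketch of ours, not part of the paper):
+
+ ```python
+ def discounted_return(rewards, gamma=0.99):
+     """Discounted return sum_t gamma^t * r_t of one trajectory."""
+     g = 0.0
+     for r in reversed(rewards):   # backward accumulation avoids computing powers
+         g = r + gamma * g
+     return g
+ ```
+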
29
+ **Representation Learning in RL.** Real-world applications usually have large observation spaces for which function approximation is needed to learn the value or the policy. However, directly learning a policy over the entire observation space could be difficult, as there is usually redundant information in the observation inputs. A common solution is to map the large-scale observation into a smaller representation space via a *non-linear encoder* (also called a *representation mapping*)
30
+
31
+ $\phi: \mathcal{O} \to \mathbb{R}^d$ , where d is the representation dimension, and then learn the policy/value function over the representation space $\phi(\mathcal{O})$ . In DRL, the encoder and the policy/value are usually jointly learned.
32
+
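+ A minimal sketch of this decomposition follows (layer sizes are arbitrary choices of ours, not the paper's architecture): the policy head consumes the encoded state $\phi(o)$ rather than the raw observation.
+
+ ```python
+ import torch.nn as nn
+
+ class EncodedPolicy(nn.Module):
+     """pi(a|o) parameterized as head(phi(o)): a non-linear encoder phi
+     followed by a policy head over the d-dimensional representation."""
+     def __init__(self, d_obs: int, d_latent: int, n_actions: int):
+         super().__init__()
+         self.phi = nn.Sequential(nn.Linear(d_obs, 256), nn.ReLU(),
+                                  nn.Linear(256, d_latent))
+         self.head = nn.Linear(d_latent, n_actions)  # action logits
+
+     def forward(self, obs):
+         return self.head(self.phi(obs))
+ ```
+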
33
+ We aim to transfer knowledge learned in a source MDP to a target MDP whose observation space is different while the dynamics are structurally similar. Denote the source MDP as $\mathcal{M}^{(S)} = \langle \mathcal{O}^{(S)}, \mathcal{A}, P^{(S)}, R^{(S)}, \gamma \rangle$ , and the target MDP as $\mathcal{M}^{(T)} = \langle \mathcal{O}^{(T)}, \mathcal{A}, P^{(T)}, R^{(T)}, \gamma \rangle$ . Note that $\mathcal{O}^{(S)}$ and $\mathcal{O}^{(T)}$ can be significantly different, e.g., $\mathcal{O}^{(S)}$ a low-dimensional vector space and $\mathcal{O}^{(T)}$ a high-dimensional pixel space. This makes policy transfer challenging, since the source and target policies have different input shapes and would typically be very different architecturally.
34
+
35
+ In this work, as motivated in the Introduction, we focus on the setting wherein the dynamics ($(P^{(S)}, R^{(S)})$ and $(P^{(T)}, R^{(T)})$) of the two MDPs between which we transfer knowledge are defined on different observation spaces but share structural similarities. Specifically, we assume that there exists a mapping between the target and source observation spaces under which the transition dynamics of the target task coincide with those of the source task. We formalize this in Assumption 1:
36
+
37
+ **Assumption 1.** There exists a function
38
+ $$f: \mathcal{O}^{(T)} \to \mathcal{O}^{(S)}$$
39
+ such that $\forall o_i^{(T)}, o_j^{(T)} \in \mathcal{O}^{(T)}, \forall a \in \mathcal{A},$ $P^{(T)}(o_j^{(T)}|o_i^{(T)}, a) = P^{(S)}(f(o_j^{(T)})|f(o_i^{(T)}), a), \quad R^{(T)}(o_i^{(T)}, a) = R^{(S)}(f(o_i^{(T)}), a).$
40
+
41
+ **Remarks.** (1) Assumption 1 is mild, as many real-world scenarios fall under it. For instance, when upgrading the cameras of a patrol robot to a higher resolution, such a mapping f can be a down-sampling function. (2) f is a general function with no extra restrictions: f can be many-to-one, i.e., more than one target observation can be related to the same source observation; f can be non-surjective, i.e., there can exist source observations that do not correspond to any target observation.
42
+
43
+ Many prior works (Mann & Choe, 2013; Brys et al., 2015) have similar assumptions, but require prior knowledge of such an inter-task mapping to achieve knowledge transfer. However, such a mapping might not be available in practice. As an alternative, we propose a novel transfer algorithm in the next section that *does not* assume any prior knowledge of the mapping f. The proposed algorithm learns a latent representation of the observations and a dynamics model in this latent space, and then the dynamics model is transferred to speed up learning in the target task.
44
+
45
+ In this section, we first formally characterize "what a good representation is for RL" in Section 4.1, then introduce our proposed transfer algorithm based on representation regularization in Section 4.2, and next provide theoretical analysis of the algorithm in Section 4.3.
46
+
47
+ As discussed in Section 2, real-world applications usually have rich and redundant observations, where learning a good representation (Jaderberg et al., 2016; Dabney et al., 2020) is essential for efficiently finding an optimal policy. However, what properties constitute a good representation for an RL task is still an open question. Some prior works (Bellemare et al., 2019; Dabney et al., 2020; Gelada et al., 2019) have discussed representation quality in DRL, but we take a different perspective and focus on characterizing the properties that make a representation sufficient for learning a task.
48
+
49
+ Given a representation mapping $\phi$, the Q value of any $(o,a) \in \mathcal{O} \times \mathcal{A}$ can be approximately represented by a function of $\phi(o)$, i.e., $\hat{Q}(o,a) = h(\phi(o);\theta_a)$, where h is a function parameterized by $\theta_a$. To study the relation between representation quality and approximation quality, we define an approximation operator $\mathcal{H}_{\phi}$, which finds the best Q-value approximation based on $\phi$. Formally, let $\Theta$ denote the parameter space of function $h \in \mathcal{H}$; then $\forall a \in \mathcal{A}$, $\mathcal{H}_{\phi}Q(o,a) := h(\phi(o);\theta_a^*)$, where $\theta_a^* = \operatorname{argmin}_{\theta \in \Theta} \mathbb{E}_o[\|h(\phi(o);\theta) - Q(o,a)\|]$. Such a function h can be realized by neural networks, which are universal function approximators (Hornik et al., 1989). Therefore, the value approximation error $\|Q - \mathcal{H}_{\phi}Q\|$ only depends on the representation quality, i.e., whether we can represent the Q value of any state o as a function of the encoded state $\phi(o)$.
50
+
51
+ The quality of the encoder $\phi$ is crucial for learning an accurate value function or a good policy. The ideal encoder $\phi$ should discard irrelevant information in the raw observation while keeping essential information. In supervised or self-supervised representation learning (Chen et al., 2020; Achille & Soatto, 2018), it is believed that a good representation $\phi(X)$ of input X should contain minimal information about X while maintaining sufficient information for predicting the label Y. However, in RL, it is difficult to identify whether a representation is sufficient, since there is no label corresponding to each input. The focus of an agent is to estimate the value of each input $o \in \mathcal{O}$, which is associated with some policy. Therefore, we point out that representation quality in RL is policy-dependent. Below, we formally characterize the sufficiency of a representation mapping in terms of a fixed policy and in terms of learning a task.
52
+
53
+ **Sufficiency for a Fixed Policy.** If the agent is executing a fixed policy, and its goal is to estimate the expected future return from the environment, then a representation is sufficient for the policy as long as it can encode the policy value $V^{\pi}$. A formal definition is provided by Definition 9 in Appendix B.
54
+
55
+ **Sufficiency for Learning a Task.** The goal of RL is to find an optimal policy. Therefore, it is not adequate for the representation to only fit one policy. Intuitively, a representation mapping is sufficient for learning if we are able to find an optimal policy over the representation space $\phi(\mathcal{O})$, which requires multiple iterations of policy evaluation and policy improvement. Definition 2 below defines a set of "important" policies for learning with $\phi(\mathcal{O})$.
56
+
57
+ **Definition 2** (Encoded Deterministic Policies). For a given representation mapping $\phi(\cdot)$ , define an encoded deterministic policy set $\Pi^D_\phi$ as the set of policies that are deterministic and take the same actions for observations with the same representations. Formally,
58
+
59
+ $$\Pi_{\phi}^{D} := \{ \pi \in \Pi \mid \exists \tilde{\pi} : \phi(\mathcal{O}) \to \mathcal{A} \text{ s.t. } \forall o \in \mathcal{O}, \pi(o) = \tilde{\pi}(\phi(o)) \}, \tag{1}$$
60
+
61
+ where $\tilde{\pi}$ is a mapping from the representation space to the action space.
62
+
63
+ A policy $\pi$ is in $\Pi_{\phi}^{D}$ if it does not distinguish $o_1$ and $o_2$ when $\phi(o_1) = \phi(o_2)$ . Therefore, $\Pi_{\phi}^{D}$ can be regarded as deterministic policies that make decisions for encoded observations. Now, we define the concept of sufficient representation for learning in an MDP.
64
+
65
+ **Definition 3** (Sufficient Representation for Learning). A representation mapping $\phi$ is sufficient for a task M w.r.t. approximation operator $\mathcal{H}_{\phi}$ if $\mathcal{H}_{\phi}Q_{\pi} = Q_{\pi}$ for all $\pi \in \Pi_{\phi}^{D}$. Furthermore,
+ - $\phi$ is **linearly-sufficient** for learning M if $\exists \theta_{a}$ s.t. $Q_{\pi}(o, a) = \phi(o)^{\top}\theta_{a}$, $\forall a \in \mathcal{A}, \pi \in \Pi_{\phi}^{D}$.
+ - $\phi$ is **$\epsilon$-sufficient** for learning M if $\|\mathcal{H}_{\phi}Q_{\pi} - Q_{\pi}\| \leq \epsilon$, $\forall \pi \in \Pi_{\phi}^{D}$.
66
+
67
+ Definition 3 suggests that the representation is sufficient for learning a task as long as it is sufficient for policies in $\Pi_{\phi}^{D}$ . Then, the lemma below justifies that a nearly sufficient representation can ensure that approximate policy iteration converges to a near-optimal solution. (See Appendix D for analysis on approximate value iteration.)
68
+
69
+ **Lemma 4** (Error Bound for Approximate Policy Iteration). If $\phi$ is $\epsilon$-sufficient for task M (with $\ell_{\infty}$ norm), then approximate policy iteration with approximation operator $\mathcal{H}_{\phi}$ starting from any initial policy that is encoded by $\phi$ ($\pi_0 \in \Pi_{\phi}^D$) satisfies
70
+
71
+ $$\limsup_{k \to \infty} \|Q^* - Q^{\pi_k}\|_{\infty} \le \frac{2\gamma^2 \epsilon}{(1 - \gamma)^2},\tag{2}$$
72
+
73
+ where $\pi_k$ is the policy in the k-th iteration.
74
+
75
+ Lemma 4, proved in Appendix C, is extended from the error bound provided by Bertsekas & Tsitsiklis (1996). For simplicity, we consider the bound in $\ell_{\infty}$ , but tighter bounds can be derived with other norms (Munos, 2005), although a tighter bound is not the focus of this paper.
76
+
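+ To make the scaling of Lemma 4 concrete, here is a quick instantiation of the bound (our own arithmetic, not a claim from the paper):
+
+ ```latex
+ % Bound of Lemma 4: 2 \gamma^2 \epsilon / (1-\gamma)^2
+ \gamma = 0.5:\quad \frac{2(0.5)^2\,\epsilon}{(1-0.5)^2} = 2\epsilon,
+ \qquad
+ \gamma = 0.99:\quad \frac{2(0.99)^2\,\epsilon}{(1-0.99)^2} \approx 1.96\times 10^{4}\,\epsilon .
+ ```
+
+ As $\gamma \to 1$, the tolerable representation error must therefore shrink on the order of $(1-\gamma)^2$ for the guarantee to remain meaningful.
+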
77
+ **How Can We Learn a Sufficient Representation?** So far we have provided a principle for deciding whether a given representation is sufficient for learning. In DRL, the representation is learned together with the policy or value function using neural networks, but the quality of the resulting representation may be poor (Dabney et al., 2020), which makes it hard for the agent to find an optimal policy. Based on Definition 3, a natural method for learning a good representation is to let the representation fit as many policy values as possible via auxiliary tasks, which matches the ideas in other works. For example, Bellemare et al. (2019) propose to fit a set of representative policies (called adversarial value functions). Dabney et al. (2020) choose to fit the values of all past policies (along the value improvement path), which requires fewer computational resources.
78
+
79
+ **Algorithm 1** Learning in the source task
+
+ **Require:** Regularization weight $\lambda$; update frequency m for stable encoder.
80
+
81
+ - 1: Initialize encoder $\phi^{(S)}$ , stable encoder $\hat{\phi}^{(S)}$ , policy $\pi^{(S)}$ , transition prediction network $\hat{P}$ and reward prediction network $\hat{R}$ .
82
+ - 2: **for** $t = 0, 1, \cdots$ **do**
83
+ - 3: Take action $a_t \sim \pi^{(S)}(\phi^{(S)}(o_t^{(S)}))$ , get next observation $o_{t+1}^{(S)}$ and reward $r_t$ , store to buffer.
84
+ - 4: Sample a mini-batch $\{o_i, a_i, r_i, o_i'\}_{i=1}^N$ from the buffer.
85
+ - 5: Update $\hat{P}$ and $\hat{R}$ using one-step gradient descent with $\nabla_{\hat{P}} L_P(\hat{\phi}^{(S)}; \hat{P})$ and $\nabla_{\hat{R}} L_R(\hat{\phi}^{(S)}; \hat{R})$, where $L_P$ and $L_R$ are defined in Equation (3).
86
+ - 6: Update encoder and policy by $\min_{\pi^{(S)}, \phi^{(S)}} L_{\text{base}}(\phi^{(S)}, \pi^{(S)}) + \lambda \left(L_P(\phi^{(S)}; \hat{P}) + L_R(\phi^{(S)}; \hat{R})\right)$ .
87
+ - 7: **if** $t \bmod m = 0$ **then** update the stable encoder $\hat{\phi}^{(S)} \leftarrow \phi^{(S)}$.
88
+
89
+ **Algorithm 2** Learning in the target task
+
+ **Require:** Regularization weight $\lambda$; dynamics models $\hat{P}$ and $\hat{R}$ learned in the source task.
90
+
91
+ - 1: Initialize encoder $\phi^{(T)}$ , policy $\pi^{(T)}$
92
+ - 2: **for** $t = 0, 1, \cdots$ **do**
93
+ - 3: Take action $a_t \sim \pi^{\scriptscriptstyle (T)}(\phi^{\scriptscriptstyle (T)}(o_t^{\scriptscriptstyle (T)}))$ , get next observation $o_{t+1}^{\scriptscriptstyle (T)}$ and reward $r_t$ , store to buffer.
94
+ - 4: Sample a mini-batch $\{o_i, a_i, r_i, o'_i\}_{i=1}^N$ from the buffer.
95
+ - 5: Update encoder and policy by $\min_{\phi^{(T)},\pi^{(T)}} L_{\text{base}}(\phi^{(T)},\pi^{(T)}) + \lambda (L_P(\phi^{(T)};\hat{P}) + L_R(\phi^{(T)};\hat{R})),$ where $L_P$ and $L_R$ are defined in Equation (3).
96
+
97
+ Different from these works, which directly fit the value functions of multiple policies, in Section 4.2 we propose to *fit and transfer an auxiliary policy-independent dynamics model*, which is an efficient way to achieve a sufficient representation for learning and knowledge transfer, as theoretically justified in Section 4.3.
98
+
99
+ Our goal is to use the knowledge learned in the source task to learn a good representation in the target task, such that the agent learns the target task more easily than learning from scratch. Since we focus on developing a generic transfer mechanism, the base learner can be any DRL algorithm. We use $L_{\rm base}$ to denote the loss function of the base learner.
100
+
101
+ As motivated in Section 4.1, we propose to learn policy-independent dynamics models for producing high-quality representations: (1) $\hat{P}$ which predicts the representation of the next state based on current state representation and action, and (2) $\hat{R}$ which predicts the immediate reward based on current state representation and action. For a batch of N transition samples $\{o_i, a_i, o_i', r_i\}_{i=1}^N$ , define the transition loss and the reward loss as:
102
+
103
104
+ $$L_P(\phi, \hat{P}) = \frac{1}{N} \sum_{i=1}^{N} \left(\hat{P}(\phi(o_i), a_i) - \bar{\phi}(o_i')\right)^2, \quad L_R(\phi, \hat{R}) = \frac{1}{N} \sum_{i=1}^{N} \left(\hat{R}(\phi(o_i), a_i) - r_i\right)^2 \tag{3}$$
105
+
106
+
107
+ where $\bar{\phi}(o'_i)$ denotes the representation of the next state $o'_i$ with stop gradients. In order to fit a more diverse state distribution, transition samples are drawn from an off-policy buffer, which stores shuffled past trajectories.
108
+
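+ A minimal PyTorch-style sketch of Equation (3) (our own illustration; `encoder`, `trans_model`, and `rew_model` are placeholder modules, and a stop-gradient stands in for the stable encoder of Algorithm 1):
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def dynamics_losses(encoder, trans_model, rew_model, batch):
+     """L_P and L_R from Equation (3) on a batch (o, a, r, o')."""
+     o, a, r, o_next = batch
+     z = encoder(o)                        # phi(o_i)
+     with torch.no_grad():                 # bar-phi: stop gradient on next-state encoding
+         z_next = encoder(o_next)
+     z_pred = trans_model(z, a)            # \hat{P}(phi(o_i), a_i)
+     r_pred = rew_model(z, a)              # \hat{R}(phi(o_i), a_i)
+     loss_p = F.mse_loss(z_pred, z_next)   # transition loss L_P
+     loss_r = F.mse_loss(r_pred, r)        # reward loss L_R
+     return loss_p, loss_r
+ ```
+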
109
+ The learning procedures for the source task and the target task are illustrated in Algorithm 1 and Algorithm 2, respectively. Figure 2 depicts the architecture of the learning model for both source and target tasks. $z = \phi(o)$ and $z' = \bar{\phi}(o')$ are the encoded observation and next observation. Given the current encoding z and the action a, the dynamics models $\hat{P}$ and $\hat{R}$ return the predicted next encoding $\hat{z}' = \hat{P}(z,a)$ and predicted reward $\hat{r} = \hat{R}(z,a)$ . Then the transition loss is the mean squared error (MSE) between z' and $\hat{z}'$ in a batch; the reward loss is the MSE between r and $\hat{r}$ in a batch.
110
+
111
112
+
113
+ Figure 2: The architecture of the proposed method. $\hat{P}$ and $\hat{R}$ are learned in the source task, then transferred to the target task and fixed during training.
114
+
115
+ In the source task (Algorithm 1): the dynamics models $\hat{P}$ and $\hat{R}$ are learned by minimizing $L_P$ and $L_R$, which are computed based on a recent copy of the encoder, called the stable encoder $\hat{\phi}^{(S)}$ (Line 5). The stable encoder helps the dynamics models converge, since the actual encoder $\phi^{(S)}$ changes at every step. Note that a stable copy of a network is widely used in many DRL algorithms (e.g., the target network in DQN) and can be directly regarded as $\hat{\phi}^{(S)}$ without maintaining an extra network. The actual encoder $\phi^{(S)}$ is regularized by the auxiliary dynamics models $\hat{P}$ and $\hat{R}$ (Line 6).
116
+
117
+ In the target task (Algorithm 2): the dynamics models $\hat{P}$ and $\hat{R}$ are transferred from the source task and fixed during learning. Therefore, the learning of $\phi^{(T)}$ is regularized by static dynamics models, which leads to faster and more stable convergence than naively learning an auxiliary task.
118
+
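+ In the target task, the same losses are computed against the transferred models, which are simply frozen; a sketch continuing the code above (`base_loss` is a placeholder for the $L_{\rm base}$ of whatever base learner is used):
+
+ ```python
+ # Freeze the transferred dynamics models: only phi^(T) and the policy train.
+ for p in list(trans_model.parameters()) + list(rew_model.parameters()):
+     p.requires_grad_(False)
+
+ def target_task_loss(encoder_t, policy_t, batch, lam=1.0):
+     loss_p, loss_r = dynamics_losses(encoder_t, trans_model, rew_model, batch)
+     return base_loss(encoder_t, policy_t, batch) + lam * (loss_p + loss_r)
+ ```
+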
119
+ **Relation to and Difference from Model-Based RL and Bisimulation Metrics.** Learning a dynamics model is a common technique in model-based RL (Kipf et al., 2019; Grimm et al., 2020), whose goal is to learn an accurate world model and use the model for planning. The dynamics model can be learned on either raw observations or representations. In our framework, we also learn a dynamics model, but the model serves as an auxiliary task, and learning is still performed by the model-free base learner with $L_{\rm base}$. Bisimulation methods (Castro, 2020; Zhang et al., 2020b) aim to approximate the bisimulation distances among states by learning dynamics models, whereas we do not explicitly measure the distance among states. Note that we also do not require a reconstruction loss, which is common in the literature (Lee et al., 2019).
120
+
121
+ The algorithms introduced in Section 4.2 consist of two designs: learning a latent dynamics model as an auxiliary task, and transferring the dynamics model to the target task. In this section, we show theoretical justifications and practical advantages of our proposed method. We aim to answer the following two questions: (1) How does learning an auxiliary dynamics model help with representation learning? (2) Is the auxiliary dynamics model transferable?
122
+
123
+ For notational simplicity, let $P_a$ and $R_a$ denote the transition and reward functions associated with action $a \in \mathcal{A}$ . Note that $P_a$ and $R_a$ are independent of any policy. We then define the sufficiency of a representation mapping w.r.t. dynamics models as below.
124
+
125
+ **Definition 5** (Policy-independent Model Sufficiency). For an MDP M, a representation mapping $\phi$ is sufficient for its dynamics $(P_a, R_a)_{a \in \mathcal{A}}$ if $\forall a \in \mathcal{A}$, there exist functions $\hat{P}_a : \mathbb{R}^d \to \mathbb{R}^d$ and $\hat{R}_a : \mathbb{R}^d \to \mathbb{R}$ such that $\forall o \in \mathcal{O}$, $\hat{P}_a(\phi(o)) = \mathbb{E}_{o' \sim P_a(o)}[\phi(o')]$, $\hat{R}_a(\phi(o)) = R_a(o)$.
126
+
127
+ **Remarks.** (1) $\phi$ is exactly sufficient for dynamics $(P_a,R_a)_{a\in\mathcal{A}}$ when the transition function P is deterministic. (2) If P is stochastic, but we have $\max_{o,a}\|\mathbb{E}_{o'\sim P_a(o)}[\phi(o')] - \hat{P}_a(\phi(o))\| \leq \epsilon_P$ and $\max_{o,a}|R_a(o) - \hat{R}_a(\phi(o))| \leq \epsilon_R$ , then $\phi$ is $(\epsilon_P,\epsilon_R)$ -sufficient for the dynamics of M.
128
+
129
+ Next we show by Proposition 6 and Theorem 7 that learning sufficiency can be achieved via ensuring model sufficiency.
130
+
131
+ **Proposition 6** (Learning Sufficiency Induced by Policy-independent Model Sufficiency). Consider an MDP M with deterministic transition function P and reward function R. If $\phi$ is sufficient for $(P_a, R_a)_{a \in \mathcal{A}}$ , then it is sufficient (but not necessarily linearly sufficient) for learning in M.
132
+
133
+ Proposition 6 shows that, if the transition is deterministic and the model errors $L_P, L_R$ are zero, then $\phi$ is exactly sufficient for learning. More generally, if the transition function P is not deterministic, and model fitting is not perfect, the learned representation can still be nearly sufficient for learning as characterized by Theorem 7 below, which is extended from a variant of the value difference bound derived by Gelada et al. (2019). Proposition 6 and Theorem 7 justify that learning the latent dynamics model as an auxiliary task encourages the representation to be sufficient for learning. The model error $L_P$ and $L_R$ defined in Section 4.2 can indicate how good the representation is.
134
+
135
+ **Theorem 7.** For an MDP M, if representation mapping $\phi$ is $(\epsilon_P, \epsilon_R)$ -sufficient for the dynamics of M, then approximate policy iteration with approximation operator $\mathcal{H}_{\phi}$ starting from any initial policy $\pi_0 \in \Pi_{\phi}^D$ satisfies
136
+
137
+ $$\limsup_{k \to \infty} \|Q^* - Q^{\pi_k}\|_{\infty} \le \frac{2\gamma^2}{(1 - \gamma)^3} (\epsilon_R + \gamma \epsilon_P K_{\phi, V}), \tag{4}$$
138
+
139
+ where $K_{\phi,V}$ is an upper bound of the value Lipschitz constant as defined in Appendix B.
140
+
141
+ **Transferring the Model to Obtain a Better Representation in the Target Task.** Although Proposition 6 shows that learning auxiliary dynamics models benefits representation learning, finding the optimal solution is non-trivial, since one still has to learn $\hat{P}$ and $\hat{R}$. Therefore, the main idea of our algorithm is to transfer the dynamics models $\hat{P}, \hat{R}$ from the source task to the target task, to ease learning in the target task. Theorem 8 below guarantees that transferring the dynamics models is feasible. Our experimental results in Section 6 verify that learning with transferred and fixed dynamics models outperforms learning with randomly initialized dynamics models.
142
+
143
+ **Theorem 8** (Transferable Dynamics Models). Consider a source task $M^{(S)}$ and a target task $M^{(T)}$ with deterministic transition functions. Suppose $\phi^{(S)}$ is sufficient for $(P_a^{(S)}, R_a^{(S)})_{a \in \mathcal{A}}$ with functions $\hat{P}_a, \hat{R}_a$; then there exists a representation $\phi^{(T)}$ satisfying $\hat{P}_a(\phi^{(T)}(o)) = \mathbb{E}_{o' \sim P_a^{(T)}(o)}[\phi^{(T)}(o')]$, $\hat{R}_a(\phi^{(T)}(o)) = R_a^{(T)}(o)$ for all $o \in \mathcal{O}^{(T)}$, and $\phi^{(T)}$ is sufficient for learning in $M^{(T)}$.
144
+
145
+ Theorem 8 shows that the learned latent dynamics models $\hat{P}, \hat{R}$ are transferable from the source task to the target task. For simplicity, Theorem 8 focuses on exact sufficiency as in Proposition 6, but it can be easily extended to $\epsilon$ -sufficiency if combined with Theorem 7. Proofs for Proposition 6, Theorem 7 and Theorem 8 are all provided in Appendix C.
146
+
147
+ **Trade-off between Approximation Complexity and Representation Complexity.** As suggested by Proposition 6, fitting policy-independent dynamics encourages the representation to be sufficient for learning, but not necessarily linearly sufficient. Therefore, we suggest using a non-linear policy/value head on top of the representation to reduce the approximation error. Linear sufficiency can be achieved if $\phi$ is made linearly sufficient for $P_{\pi}$ and $R_{\pi}$ for all $\pi \in \Pi^D_{\phi}$, where $P_{\pi}$ and $R_{\pi}$ are the transition and reward functions induced by policy $\pi$ (Proposition 10, Appendix B). However, using this method for transfer learning is expensive in terms of both computation and memory, as it requires learning $P_{\pi}$ and $R_{\pi}$ for many different $\pi$'s and storing these models for transfer to the target task. Therefore, there is a trade-off between approximation complexity and representation complexity. Learning a linearly sufficient representation reduces the complexity of the approximation operator, but it requires a more complex representation, since the representation has to satisfy many more constraints. To develop a practical and efficient transfer method, we use a slightly more complex approximation operator (a non-linear policy head) while keeping the auxiliary task simple and easy to transfer across tasks. Please see Appendix B for a more detailed discussion of linear sufficiency.
2201.12122/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2201.12122/paper_text/intro_method.md ADDED
@@ -0,0 +1,54 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Large pre-trained language models have shown impressive performance in natural language [@devlin2019bert; @Radford2018ImprovingLU] and vision [@dosovitskiy2021image] tasks. Furthermore, Transformer-based autoregressive language models [@vaswani2017attention; @baevski2018adaptive; @radford2019language] have been shown to be powerful sources of zero-shot and few-shot performance [@brown2020language], with notably rapid adaptation in low-resource settings, demonstrating their easy adaptability and transferability to a number of tasks in their respective domains. Adapting autoregressive language models has also been extended to the multimodal setting [@tsimpoukelli2021multimodal] for tasks such as visual question answering.
4
+
5
+ Concurrently, offline reinforcement learning (RL) has been seen as analogous to sequence modeling [@chen2021decision; @janner2021reinforcement; @furuta2021generalized], framed as simply supervised learning to fit return-augmented trajectories in an offline dataset. This relaxation, doing away with many of the complexities commonly associated with reinforcement learning [@watkins1992q; @kakade2001natural], allows us to take advantage of techniques popularized in sequence modeling tasks for RL.
6
+
7
+ Pre-training, in particular, is an essential technique for alleviating the higher compute costs of using more expressive models such as Transformers. However, such a concept is still relatively fresh in RL [@singh2020parrot; @tirumala2020behavior], due to the difficulty of parameterizing different scenes and tasks through a single network [@wang2018nervenet; @jiang2019language; @zeng2020transporter] as well as the lack of large off-the-shelf datasets for pre-training [@cobbe2020leveraging; @zhu2020robosuite; @yu2020meta]. Adopting pre-training as a default option for recent Transformer-based methods [@chen2021decision; @janner2021reinforcement; @furuta2021generalized] appears far away -- if we only look within RL.
8
+
9
+ Unified under the umbrella of sequence modeling, we examine whether Transformer-based pre-trained *language* models can be adapted to standard offline reinforcement learning tasks *that have no relation to language*. In the setting where a single model pre-trained on natural language is finetuned on each offline RL task individually, we demonstrate drastic improvements in convergence speed and final policy performance. We also consider further techniques (e.g., extension of positional embeddings, embedding similarity encouragement) to better take advantage of the features learned by the pre-trained language model, and demonstrate greater improvements.
10
+
11
+ We demonstrate that pre-training on autoregressive modeling of natural language provides consistent performance gains compared to the Decision Transformer [@chen2021decision] on both the popular OpenAI Gym [@brockman2016gym] and Atari [@bellemare13arcade] offline RL benchmarks. We also note a significantly faster convergence speed, with a 3-6x improvement over a vanilla Decision Transformer, turning hours of training into tens of minutes and indicating long-term computational-efficiency benefits of language pre-training.
12
+
13
+ Our findings allude to the potential impact of large-scale pre-training for reinforcement learning, given its surprising efficacy when transferring from a distant sequence modeling domain such as natural language. Notably, unlike other work on multi-task offline RL, our model provides consistent results in terms of both reward and convergence regardless of environment and setting, indicating a foreseeable future where everyone should use a pre-trained language model for offline RL.
14
+
15
+ We consider a standard Markov Decision Process (MDP) with states $s\in \mathcal{S}$ and actions $a\in \mathcal{A}$, specified by an initial state distribution $p(s_1)$, a dynamics distribution $p(s_{t+1}|s_t,a_t)$, and a scalar reward function $r(s, a)$. The goal of reinforcement learning (RL) is to find the optimal policy $\pi^*(a|s)$ which maximizes the $\gamma$-discounted expected return as the agent interacts with the environment, $$\begin{align}
16
+ \max_{\pi} \mathbb{E}_{s_{1:\infty}, a_{1:\infty}\sim p,\pi}\left[\sum_{t=1}^\infty \gamma^t r(s_t,a_t)\right]
17
+ \end{align}$$ In *offline* RL, the objective remains the same, but it must be optimized from a fixed set of trajectories $\tau_i$, with no interactive data collection; each trajectory has the form below with horizon $N$, $$\begin{equation}
18
+ \tau = (r_1, s_1, a_1, r_2, s_2, a_2, \dots, r_N, s_N, a_N).
19
+ \end{equation}$$ Common approaches include value-based or model-based objectives with regularization [@fujimoto2019off; @levine2020offline], and more recently, direct generative modeling of these trajectories conditioned on hindsight returns [@chen2021decision; @janner2021reinforcement; @furuta2021generalized].
20
+
21
+ In this subsection, we briefly review the Transformer architecture [@vaswani2017attention] used to model sequences. The Transformer consists of stacks of identical *Transformer layers*. Each of these layers takes in a set of $n$-dimensional vectors that are fed through the two main building blocks: a multi-head self-attention sublayer and a feedforward MLP, as shown below: $$\begin{align}
22
+ \text{Attention}(x) & = \text{softmax}\big(\frac{Q(x)K(x)^\top}{\sqrt{n}}\big)V(x) \\
23
+ \text{Feedforward}(x) & = L_{2}(g(L_{1}(x)))
24
+ \end{align}$$ where $Q,K$ and $V$ represent linear projections that parameterize the projection of input $x$ into the query, key, and value spaces, while $L_1$, $L_2$ and $g$ represent the first linear projection, second linear projection, and activation function that comprise the feedforward MLP. Each sublayer is followed by a residual connection [@he2015deep] and layer normalization [@ba2016layer].
25
+
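+ For concreteness, a single-head, batch-free reading of these two blocks (a sketch only; the multi-head version splits the projections into parallel heads, and GELU is merely an assumed choice of $g$):
+
+ ```python
+ import math
+ import torch
+ import torch.nn.functional as F
+
+ def attention(x, Wq, Wk, Wv):
+     """Single-head self-attention over x of shape (seq_len, n)."""
+     Q, K, V = x @ Wq, x @ Wk, x @ Wv
+     scores = torch.softmax(Q @ K.T / math.sqrt(x.shape[-1]), dim=-1)
+     return scores @ V
+
+ def feedforward(x, L1, L2, g=F.gelu):
+     """Position-wise MLP: L2(g(L1(x)))."""
+     return L2(g(L1(x)))
+ ```
+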
26
+ Although there are now multiple techniques for language model pre-training (e.g. masked language modeling; [@devlin2019bert]), we will review autoregressive language modeling given its correspondence with the sequence modeling objective we employ for our offline reinforcement learning tasks.
27
+
28
+ Given a sequence $\mathbf{x} = [\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_N]$ comprised of tokens $\mathbf{x}_i$, we look to model the likelihood of the sequence $P(\mathbf{x})$ by modeling the probability of predicting each token $\mathbf{x}_i$ in a step-by-step, or autoregressive, fashion (commonly left-to-right). Naturally, it follows that each token's prediction is conditioned on all the previous elements in the sequence $\mathbf{x}_{<i}$, as shown below [@bengio2001neural]:
29
+
30
+ $$\begin{equation}
31
+ P(\mathbf{x}) = \prod_{i=1}^{N} p(\mathbf{x}_i | \mathbf{x}_{i-1}, \mathbf{x}_{i-2}, \dots, \mathbf{x}_{1}) \label{eq:LM}
32
+ \end{equation}$$
33
+
34
+ # Method
35
+
36
+ In this section, we discuss our proposed methodology and techniques for better adapting pre-trained language models to model trajectories for offline RL tasks, with minimal modification to the architecture and objectives, as shown in Figure [1](#fig:main){reference-type="ref" reference="fig:main"}.
37
+
38
+ Following @chen2021decision, we model trajectories autoregressively by representing them in the following manner: $$\begin{equation}
39
+ \boldsymbol{\tau} = (\hat{R}_1, s_1, a_1, \hat{R}_2, s_2, a_2, \dots, \hat{R}_N, s_N, a_N)
40
+ \end{equation}$$ where trajectory $\boldsymbol{\tau}$ is modeled analogously to sequence $\mathbf{x}$ in Equation [\[eq:LM\]](#eq:LM){reference-type="ref" reference="eq:LM"}, and $\hat{R}_i = \sum_{t=i}^{N} r_t$, $s_i$, and $a_i$ denote the returns-to-go, state, and action at each timestep $i$ out of $N$ timesteps, respectively.
41
+
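+ A small sketch of this trajectory construction (our own illustration of the representation, not the paper's code):
+
+ ```python
+ def to_dt_sequence(rewards, states, actions):
+     """Interleave (returns-to-go, state, action) triples as in the trajectory above."""
+     rtg, acc = [], 0.0
+     for r in reversed(rewards):          # R_hat_i = sum_{t=i}^{N} r_t
+         acc += r
+         rtg.append(acc)
+     rtg.reverse()
+     seq = []
+     for R, s, a in zip(rtg, states, actions):
+         seq += [R, s, a]
+     return seq
+ ```
+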
42
+ We observe a lack of alignment between the state, action, and reward input representations and the language representations, which partially holds back further extraction of the language model's capabilities. To this end, we use a similarity-based objective to maximize the similarity between the set of language embeddings $E = [E_1,\dots,E_V]$ with vocabulary size $V$ and the set of input representations $I = [I_1,\dots,I_{3N}]$. The input representations are parameterized by linear projections $L_r, L_a, L_s$ corresponding to the target-return projection, action projection, and state projection, respectively.
43
+
44
+ Given the following cosine similarity function: $$\begin{align}
45
+ \mathcal{C}(z_1, z_2) &= \frac{z_1}{\|z_1\|_2} \cdot \frac{z_2}{\|z_2\|_2}
46
+ \end{align}$$ we compute the negative (as we use gradient descent to optimize this objective) of the sum, over the input representations $I_1,\dots,I_{3N}$, of each representation's maximum similarity to any embedding $E_1,\dots,E_V$, as follows: $$\begin{align}
47
+ \mathcal{L}_{\cos} = -\sum_{i=1}^{3N} \max_j \mathcal{C}(I_i, E_j)
48
+ \end{align}$$ This encourages the input embeddings to become more similar to their language counterparts. However, due to the computational cost of computing this loss for large values of $V$, we propose to use $K$-means clustering over the embeddings to reduce the size of $V$ to the number of clusters $K$. We then treat the cluster centers akin to the original embeddings when computing our loss. Furthermore, we vectorize this computation.
49
+
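+ A vectorized sketch of this loss using cluster centers (illustrative; `centers` would come from $K$-means over the frozen language embeddings $E$):
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def cos_alignment_loss(inputs, centers):
+     """inputs: (3N, d) input representations; centers: (K, d) K-means centers of E."""
+     sims = F.normalize(inputs, dim=-1) @ F.normalize(centers, dim=-1).T  # (3N, K)
+     return -sims.max(dim=-1).values.sum()   # negative sum of max similarities
+ ```
+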
50
+ We also experiment with continuing to train jointly on language modeling and trajectory modeling. This encourages the model's Transformer backbone to handle both language and trajectories simultaneously.
51
+
52
+ We now combine the objectives into the final objective below: $$\begin{equation}
53
+ \mathcal{L} = \mathcal{L}_{\text{MSE}} + \lambda_1 \mathcal{L}_{\cos} + \lambda_2 \mathcal{L}_{\text{LM}}
54
+ \end{equation}$$ where $\mathcal{L}_{\text{MSE}}$ represents the mean squared error loss used for the primary trajectory modeling objective, $\mathcal{L}_{\text{LM}}$ represents the negative log-likelihood-based language modeling objective, and $\lambda_1, \lambda_2$ are hyperparameters controlling the weights of the cosine similarity loss and the language modeling loss, respectively.
2205.00303/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2205.00303/paper_text/intro_method.md ADDED
@@ -0,0 +1,33 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Images with overlaid texts and embellishments, also known as visual-textual presentations [@yang2016automatic], are becoming more ubiquitous in the form of advertising posters (see Fig. [1](#fig:teaser){reference-type="ref" reference="fig:teaser"}(a)), magazine covers, etc. In this paper, we generally call them posters. To automatically generate visually pleasing posters with input images, determining what and where to put graphic elements on images (namely graphic layout design) plays an important role.
4
+
5
+ ![(a) Poster layouts annotated from manually designed posters; (b) poster layouts generated by automatic algorithms of baseline [@contentGAN] and ours.](pictures/intro_single.pdf){#fig:teaser width="8.5cm"}
6
+
7
+ Layout design for posters is challenging in the sense that it should consider not only graphic relationships but also image compositions, and it has attracted continuous research efforts. Recently, deep-learning-based algorithms [@DBLP:layoutGAN; @attrGAN; @DBLP:conf/cvpr/ArroyoPT21] have been proposed to automatically generate layouts. However, these methods focus more on graphic relationships while ignoring image content, which is essential for visual-textual posters.
8
+
9
+ ContentGAN [@contentGAN] is the first to introduce image semantics into layout generation. Benefiting from content information, it can yield high-quality layouts for magazine pages. However, we note that this work neglects image composition, especially spatial information. Such problems can negatively impact the subject presentation, text readability, or even the visual balance of the whole poster. As shown in Fig. [1](#fig:teaser){reference-type="ref" reference="fig:teaser"}(b), human faces or products are occluded, and texts are wrongly placed, leaving a large blank region. To tackle these problems and better learn the relationship between images and overlaid elements, we combine a multi-scale CNN and a transformer to take full advantage of image composition. A structurally similar discriminator is used to distinguish whether the images and layouts are matched.
10
+
11
+ To overcome the lack of data in poster layout design, we construct a large-scale dataset. Considering the high cost of acquiring image-layout pairs by manual design, we just collect posters and images from websites following [@contentGAN]. The posters are labeled for training as shown in Fig. [1](#fig:teaser){reference-type="ref" reference="fig:teaser"}(a) and the images are for test (the first row of Fig. [1](#fig:teaser){reference-type="ref" reference="fig:teaser"}(b)).
12
+
13
+ Note that there is an obvious domain gap between training posters and test images due to the overlaid graphic elements. To address this, ContentGAN masks graphic elements on the training data and uses global pooling after a frozen image-feature extractor, sacrificing image composition information for the sake of domain adaptation. Here we design a domain alignment module (DAM), which consists of an inpainting subnet and a saliency detection subnet, to narrow the domain gap while preserving image composition.
14
+
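+ As a rough sketch of how such a DAM could be wired (our own reading; the subnet interfaces and the way their outputs are combined are assumptions, since the text only names the two subnets):
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class DomainAlignmentModule(nn.Module):
+     """Narrows the poster/image domain gap: inpaint masked graphics, add saliency."""
+     def __init__(self, inpaint_net: nn.Module, saliency_net: nn.Module):
+         super().__init__()
+         self.inpaint_net = inpaint_net    # fills regions occupied by graphic elements
+         self.saliency_net = saliency_net  # preserves composition cues (subject locations)
+
+     def forward(self, image, element_mask=None):
+         # Training posters carry a mask of overlaid elements; test images do not.
+         x = self.inpaint_net(image, element_mask) if element_mask is not None else image
+         sal = self.saliency_net(x)
+         return torch.cat([x, sal], dim=1)  # image + saliency channels for the generator
+ ```
+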
15
+ Existing metrics [@DBLP:layoutGAN; @DBLP:conf/iccv/JyothiDHSM19; @DBLP:conf/cvpr/ArroyoPT21] only consider the relationships between graphic elements and ignore those between graphic elements and image compositions. Therefore, we additionally use user studies and three novel composition-relevant metrics to verify our method. The experiments demonstrate that our method can yield high-quality layouts for images and outperforms other state-of-the-art methods. Besides, our method can produce layouts that fit input user constraints (incomplete layouts).
16
+
17
+ In conclusion, our main contributions are as follows:
18
+
19
+ - We propose a novel method to generate composition-aware graphic layouts for visual-textual posters. First, a domain alignment module (DAM) is designed to run the training process without complete image-poster-annotation pairs (poster-annotation pairs are enough). Second, a composition-aware layout generator is proposed to model the relationship between image compositions and graphic layouts.
20
+
21
+ - We contribute a large layout dataset including all different kinds of promotional products and delicate designs. To the best of our knowledge, it is the first large-scale dataset in advertising poster layout design.
22
+
23
+ - On this dataset, we demonstrate that our method effectively tackles the image-composition-aware layout generation problem without paired images and layouts, and the experiments show our model outperforms state-of-the-art methods. In addition, our model is capable of generating layouts under input constraints.
24
+
25
+ ![**The framework of our model.** The training and test data first go through the DAM to be less distinguishable in domain. Sequentially, a generator, consisting of a multi-scale CNN and a transformer, is applied to yield layouts based on outputs of DAM and user constraint layouts. Besides, a discriminator structurally similar to the generator is used for training. The whole model is trained with a reconstruction loss and an adversarial loss.](pictures/pipeline.png){#fig:2 width="18.1cm"}
26
+
27
+ # Method
28
+
29
+ As mentioned above, ContentGAN generates layouts considering image contents and is our main comparison here. Based on the released code, we reimplement ContentGAN by adding content-feature extraction and post-processing parts (for fairness, excluding the alignment and sampling strategy). The quantitative results can be seen in Tab. [\[tab:contentGAN\]](#tab:contentGAN){reference-type="ref" reference="tab:contentGAN"}. Ours is much better in the user study and on composition-relevant metrics, which indicates that CGL-GAN improves the modeling of the relationship between image compositions and layouts.
30
+
31
+ The left part of Fig. [3](#fig:3){reference-type="ref" reference="fig:3"} shows that CGL-GAN captures the locations of the displayed subjects, so its predicted layouts help rather than harm the display of the products or models. The middle part shows that CGL-GAN has learned some practical aesthetic rules: texts are better put on flat regions for readability and the visual balance of posters, and underlays should be collocated with texts when the backgrounds are complex. The third part indicates that our model also excels at handling the internal relationships between graphic elements. More qualitative evaluations are shown in the supplementary.
32
+
33
+ Moreover, we also do our best to implement recent content-agnostic methods [@DBLP:conf/cvpr/YangFYW21; @DBLP:conf/cvpr/ArroyoPT21] and use nucleus sampling to generate varied results for comparison. As shown in Tab. [\[tab:vtn\]](#tab:vtn){reference-type="ref" reference="tab:vtn"} and Fig. [3](#fig:3){reference-type="ref" reference="fig:3"}, it is not surprising that our model is superior in the user study and on composition-relevant metrics. However, ours performs worse on graphic metrics, especially alignment. We infer that this is because the task of these content-agnostic methods is simpler, and they are not influenced by changing images.
2205.13001/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2205.13001/paper_text/intro_method.md ADDED
@@ -0,0 +1,74 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ The capability of synthesizing long human motion sequences is essential for a number of real-world applications, such as virtual reality and robotics. Beyond early attempts that consider body movement synthesis in isolation [\[1,](#page-11-0) [2,](#page-11-1) [37,](#page-12-0) [39,](#page-12-1) [42\]](#page-12-2), recent works [\[4,](#page-11-2) [12,](#page-11-3) [34,](#page-12-3) [35\]](#page-12-4) begin to explore the influence of surrounding scenes on human motion synthesis for different actions. Limited by the 2D representation of scene context [\[4,](#page-11-2) [35\]](#page-12-4) or the reliance on manually assigned interaction targets [\[12,](#page-11-3) [34\]](#page-12-3), these approaches mainly focus on modeling body movements and fail to comprehensively investigate the inherent diversity of scene-aware human motions. In order to synthesize long-term human motions guided by the scene context and the target action sequence, we propose to model the inherent motion diversity across different granularities, each contributing to different aspects of human motion.
4
+
5
+ As shown in Figure [1,](#page-0-0) the diversity of scene-aware human motions can be factorized into three levels, given the target action sequence (*e.g*. A man lies first. Then he sits in different places. At last, he stands somewhere.). Firstly, given the surrounding scene context and the target action sequence, there exists a distribution of valid locations to <span id="page-1-0"></span>realize the actual human-scene interactions for each of these actions (*e.g*. we can sit on any chair or bed and stand on the ground). Different locations can be sampled from the distribution and serve as the anchors of the whole synthesized motion sequence. Based on those anchors, we can then follow various paths to bridge them one by one. Finally, our body poses also differ from case to case as we move along the paths connecting all anchors. We demonstrate these three levels of diversity in Figure [1.](#page-0-0) Existing attempts at scene-aware human motion synthesis [\[12,](#page-11-3) [34\]](#page-12-3) only emphasize the last level of diversity (*e.g*. walking to a pre-defined object or position in the scene) by manually assigning the interaction locations and motion paths. Consequently, the importance of scene semantics is substantially muted, as it mainly affects the distribution of valid interaction anchors and the distribution of valid motion paths. To faithfully capture the diversity of scene-aware human motions, we propose a novel three-stage motion synthesis framework, each stage of which is responsible for modeling one level of the aforementioned diversity.
6
+
7
+ For diverse human-scene interaction anchors, we design our pose placing framework for the given action sequence. Different from [\[41,](#page-12-5) [43\]](#page-12-6), which only consider the influence of scene context, we first synthesize scene-agnostic poses according to the target action via a conditional VAE (CVAE) [\[28\]](#page-12-7). Then we follow the practice of POSA [\[14\]](#page-11-4) to place these poses into the scene. To be specific, the 3D scene is uniformly split into a set of non-overlapping grids, each of which is associated with a validity score that measures its compatibility as a candidate for placing the poses. We make two modifications to the original placing method used by POSA. First, we introduce the position relationship between poses with the same action label, which enhances placing diversity by avoiding placing them at nearby positions. Furthermore, we leverage another CVAE model as the placing refiner to produce diverse offsets for each discrete grid. Examples of generated anchors are depicted in Figure [1](#page-0-0) (a).
8
+
9
+ To produce diverse obstacle-free motion paths following the sampled anchors, we employ an adapted A<sup>∗</sup> algorithm over the discrete 3D grids as the path planner. The standard A<sup>∗</sup> algorithm used by previous works [\[12\]](#page-11-3) only generates deterministic paths, as it only considers collisions with objects and distances to the target locations. To model the inherent diversity of motion paths, we amend the original algorithm with a trainable stochastic module learned in a data-driven manner. The new module, named the Neural Mapper, provides dynamic scene-conditioned probabilistic guidance to the A<sup>∗</sup> algorithm, so that the algorithm can automatically produce diverse yet natural paths given deterministic scenes and location anchors. We show several examples of diverse paths generated for the same start and end locations in Figure [1](#page-0-0) (b).
10
+
11
+ Lastly, we propose a novel Transformer-based CVAE, called the motion completion network, to synthesize diverse body movements guided by the paths generated in the previous step. Inspired by [\[26\]](#page-12-8), we leverage a Transformer as the basic architecture for synthesizing continuous and smooth motions. Differently, we focus on diverse motion completion between poses that are far apart and belong to different actions, rather than synthesizing motions for a single action [\[26\]](#page-12-8). Therefore, this motion completion network first generates diverse moving trajectories, loosely following the paths sampled by the aforementioned A<sup>∗</sup> algorithm. The body poses are then produced by taking the scene contexts, action labels, human-scene interaction anchors, and synthesized trajectories as inputs.
12
+
13
+ To summarize our contributions: 1) We analyze the inherent diversity of human motion and decompose it into three components, namely the diversity of human-scene interaction anchors, paths, and body poses. 2) We propose a novel three-stage framework to faithfully capture the diversity of scene-aware human motions. This framework can automatically synthesize human motions exhibiting these diversities, conditioned on the given action labels. Qualitative and quantitative results on datasets such as PROX [\[13\]](#page-11-5) demonstrate that our method significantly surpasses previous approaches in terms of diversity and naturalness. 3) In the proposed framework, we make several technical contributions for this task, including the action-conditioned pose placing framework for generating diverse human-scene interaction anchors, the Neural Mapper for planning diverse paths, and the motion completion network for producing diverse and continuous motions. With our decomposition of motion diversity, these technical contributions achieve our goal efficiently and effectively.
14
+
15
+ # Method
16
+
17
+ We first formally define the task of scene-aware 3D human motion synthesis. We use triangular mesh $S=(v^s,f^s)$ to represent the scene context, where $v^s$ and $f^s$ stand for vertices and faces. Our task is to synthesize diverse 3D human motions in the given scene context S, driven by a sequence of target action labels $A=(a_1,a_2,...,a_N)$ . Each label stands for one scene-related
18
+
19
+ human action, such as sitting or lying. The synthesized 3D human motions are represented as a sequence of SMPL-X models [24] described by their parameters $\{P_0,...,P_T\}$, where $P_i$ is composed of $(t_i,\phi_i,\theta_i)$: $t_i\in\mathbb{R}^3$ is the global translation, $\phi_i\in\mathbb{R}^6$ is the global orientation represented as a 6D continuous rotation [44], and $\theta_i\in\mathbb{R}^{32}$ contains the body pose parameters, represented in the form of VPoser [24]. We use mean values for the remaining SMPL-X parameters, including shape parameters, facial parameters, and hand poses.
20
+
21
+ The overview of our framework is depicted in Figure 2. We aim to solve this challenging problem in a hierarchical manner by exploiting the inherent properties of scene-aware human motions. Our framework first generates diverse human-scene interaction anchors for the given actions. In this step, the framework first produces scene-agnostic poses corresponding to the action labels and then places these poses into the scene considering the compatibility between the synthesized poses and the scene. In the next step, we leverage a path planning module to produce diverse obstacle-free paths under the guidance of the synthesized anchors from the first step. Finally, a motion completion module is adopted to synthesize diverse body movements that fill in the missing motions between consecutive anchors while roughly following the planned paths from the second step. In the following, we introduce our modules in detail.
22
+
23
+ We first synthesize human-scene interaction anchors. Unlike previous works [41, 43] that only condition human motion synthesis on the scene context, we use action labels describing interaction types as an additional condition. To be specific, we first synthesize scene-agnostic poses corresponding to the action labels. Then we follow the practice of POSA [14] with several modifications to diversely place the synthesized poses into the scene. This design affords us more control over the final synthesized motions.
24
+
25
+ <span id="page-3-3"></span><span id="page-3-0"></span>![](_page_3_Picture_0.jpeg)
26
+
27
+ Figure 3. **Human-scene interaction anchor generation.** There are two steps to synthesize human-scene interaction anchors. The first step is generating diverse scene-agnostic poses conditioned on the target action, as shown in (a). The second step is placing synthesized poses in the given scene, as depicted in (b).
28
+
29
+ **Scene-Agnostic Pose Synthesis.** As shown in Figure 3 (a), we follow the standard CVAE framework to synthesize scene-agnostic poses $\theta_i$ conditioned on the target action $a_i$. To be specific, we first sample noise from the prior Gaussian distribution and encode it with a fully-connected layer. Then, we use the one-hot vector $a_i$ to represent the action condition and encode it with another fully-connected layer. These two features are added up and serve as an additional input besides the noise. The model outputs the synthesized pose $\theta_i$, which is directly used as the body pose of the $i$-th anchor $P_i$.
30
+
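+ A minimal sketch of such an action-conditioned decoder (our own reading of this description; layer sizes and the additive fusion are assumptions):
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class ActionPoseDecoder(nn.Module):
+     """Decode a 32-dim VPoser body pose theta_i from noise + one-hot action a_i."""
+     def __init__(self, n_actions, latent_dim=64, hidden=256, pose_dim=32):
+         super().__init__()
+         self.noise_fc = nn.Linear(latent_dim, hidden)
+         self.action_fc = nn.Linear(n_actions, hidden)
+         self.head = nn.Sequential(nn.ReLU(), nn.Linear(hidden, pose_dim))
+
+     def forward(self, action_onehot, z=None):
+         if z is None:  # sample from the Gaussian prior at test time
+             z = torch.randn(action_onehot.shape[0], self.noise_fc.in_features)
+         # Sum the encoded noise and encoded action condition, then decode the pose.
+         return self.head(self.noise_fc(z) + self.action_fc(action_onehot))
+ ```
+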
31
+ **Scene-Conditioned Anchor Placing.** In this step, we place the scene-agnostic poses $(\theta_1, \theta_2, ..., \theta_N)$ into the given scene. Two aspects must be taken into consideration here. The first is how to place poses at locations with compatible scene structure and interaction semantics. The other is how to efficiently find multiple reasonable locations for a given pose.
32
+
33
+ Therefore, we first select our placing candidates following the practice of POSA [14]. Specifically, each candidate consists of a translation parameter $\bar{t}_i$ and an orientation parameter $\bar{\phi}_i$ for the anchor $P_i$. We split the given scene into uniform non-overlapping discrete grids as translation candidates. For each discrete grid, we then uniformly sample eight different orientations parallel to the ground plane to build orientation candidates. Each translation candidate is paired with one of its associated orientations to form one placing candidate. For each scene-agnostic pose $\theta_i$, we then rank all the placing candidates by their
34
+
35
+ <span id="page-3-1"></span>![](_page_3_Picture_5.jpeg)
36
+
37
+ Figure 4. **Map building.** The map for our planning algorithm is built based on collision detection (a) and the Neural Mapper (b). The Neural Mapper provides diverse moving probabilities for each neighbor grid to enable planning diverse paths.
38
+
39
+ compatibility scores with the pose, following [14], which considers both affordance and penetration. An intuitive idea is to select the candidate with the best score. However, our empirical study shows that candidates with the same action labels tend to be located close to each other, since the same action usually matches similar physical and semantic structures, as shown in the first row of Figure 7. To increase the placing diversity, we introduce an additional penalty on locations that have already been occupied by anchors with the same action label. As shown in Figure 7, this new penalty helps produce more diverse placing candidates for similar poses. In this way, we can sample an initial placing candidate $(\bar{t}_i, \bar{\phi}_i)$ for each pose $\theta_i$. The initial anchor $\bar{P}_i = (\bar{t}_i, \bar{\phi}_i, \theta_i)$ is then constructed subsequently.
40
+
41
+ In practice, we further adopt another sub-module called the Place Refiner to improve the micro-diversity of the placing candidates. The Place Refiner is implemented as a CVAE model that takes the latent noise for pose $\theta_i$, the scene context encoded by PointNet [27], and the initial anchor $\bar{P}_i$ as input. It outputs an offset $(\Delta t_i, \Delta \phi_i)$ to the sampled position and orientation $(\bar{t}_i, \bar{\phi}_i)$. The final position and orientation are obtained as $t_i = \bar{t}_i + \Delta t_i$ and $\phi_i = \bar{\phi}_i + \Delta \phi_i$. The framework of the Place Refiner is depicted in Figure 3 (b).
42
+
43
+ In this step, we discuss how to generate diverse obstacle-free paths from the human-scene interaction anchors. Previous works such as SMAP [12] often use standard $A^*$ search [10] for this purpose. In practice, the $A^*$ algorithm tends to generate a deterministic shortest path. However, humans usually move stochastically in a given scene. To reflect the diversity of human path planning, we augment the standard $A^*$ algorithm with scene-aware stochastic information that captures the diversity of human motion.
44
+
45
+ <span id="page-4-3"></span>To begin with, we first discuss how to apply the standard A<sup>∗</sup> algorithm into our scenario. We first divide the whole 3D scene into the same set of non-overlapping discrete grids as in Section [3.2.](#page-2-1) We then define and calculate the cost function f for each grid in the A<sup>∗</sup> algorithm [\[10\]](#page-11-18) as:
46
+
47
+ $$f(q) = g(q) + h(q); q \in \mathcal{N}(p), \tag{1}$$
48
+
49
+ where g(q) measures the cost of moving from the beginning point to grid q, and h(q) measures the cost between grid q and the target grid, when searching over candidate next steps q in the neighbourhood N(p) of the current grid p. To ensure obstacle-free paths, we further filter out inaccessible grids that might cause collisions with the human body. Collisions are detected by placing a cylinder model that approximates the volume of a human at each grid. We show an example on the right of Figure [4](#page-3-1) (a), where red stands for valid and blue stands for invalid. After calculating f for each grid and excluding invalid grids, an obstacle-free path connecting two human-scene interaction anchors can thus be obtained using the standard A<sup>∗</sup> algorithm. It is worth noting that a path obtained in this manner is deterministic and fixed for the same pair of human-scene interaction anchors.
50
+
51
+ An intuitive solution for incorporating diversity into path planning is to append a random noise term to the cost function f defined in Equation (1). This strategy sounds feasible but fails to generate reasonable paths, as demonstrated by the examples shown in the top two rows of Figure [5.](#page-4-1) To this end, we replace the random noise term with a controllable signal m produced by another CVAE, referred to as the Neural Mapper. For each grid p, the Neural Mapper takes a sampled latent code and the local scene context feature obtained via BPS [\[32,](#page-12-16) [41\]](#page-12-5) as input and outputs a feasibility score for each neighbor grid q ∈ N(p). Based on the Neural Mapper, the cost function is updated as:
52
+
53
+ $$f(p,q) = g(q) + h(q) + (1 - m(p,q)); \quad q \in \mathcal{N}(p). \tag{2}$$
54
+
55
+
56
+ The score m(p,q) indicates the feasibility of moving from the current grid to the adjacent one, so we build the cost as $1 - m(p,q)$ to reflect the moving guidance from our Neural Mapper. The Neural Mapper is trained in a data-driven manner, so it helps the A<sup>∗</sup> algorithm generate diverse and reasonable paths. We show several examples produced by the Neural Mapper in the bottom row of Figure [5.](#page-4-1)
57
+
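+ A compact sketch of this modified search (our own illustration; `neural_mapper` stands for one sampled forward pass of the CVAE, grids are coordinate tuples, a unit step cost is assumed, and the $(1-m)$ term of Equation (2) is folded into the step cost):
+
+ ```python
+ import heapq
+
+ def diverse_a_star(start, goal, neighbors, h, neural_mapper):
+     """A* over discrete grids with the Neural Mapper term of Equation (2)."""
+     g = {start: 0.0}
+     parent = {start: None}
+     frontier = [(h(start, goal), start)]
+     while frontier:
+         _, p = heapq.heappop(frontier)
+         if p == goal:                    # reconstruct the planned path
+             path = []
+             while p is not None:
+                 path.append(p)
+                 p = parent[p]
+             return path[::-1]
+         for q in neighbors(p):           # obstacle-free neighbor grids only
+             m = neural_mapper(p, q)      # sampled feasibility score in [0, 1]
+             gq = g[p] + 1.0 + (1.0 - m)  # step cost biased by the Neural Mapper
+             if gq < g.get(q, float("inf")):
+                 g[q] = gq
+                 parent[q] = p
+                 heapq.heappush(frontier, (gq + h(q, goal), q))
+     return None
+ ```
+
+ Because m is sampled anew from the CVAE's latent code for each query, re-running the search over the same anchors yields different yet scene-consistent paths.
+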
58
+ Without complex manually designed conditions and constraints, the proposed Neural Mapper equips the A<sup>∗</sup> algorithm with the ability to find diverse obstacle-free paths in a flexible and generalizable way. In Neural Mapper, we can easily change the characteristics of sampled paths by restricting the latent codes, without hurting their naturalness and coherency.
59
+
60
+ <span id="page-4-1"></span>![](_page_4_Picture_7.jpeg)
61
+
62
+ Figure 5. Examples of diverse planning. We sample different paths from ① to ② with different strategies. Random Noise means sampling different noise weights for each discrete grid. Shared Noise means all discrete grids share the same noise vector. Neural Map refers to the probabilities generated by our Neural Mapper. The results demonstrate that our method is more effective at generating diverse and natural paths than simply adding random noise.
63
+
64
+ With the obstacle-free path obtained from path planning, we are now ready to complete the missing motions between consecutive human-scene interaction anchors. As shown in Figure [6,](#page-5-0) our motion completion network consists of two components, namely the Path Refiner and the Motion Synthesizer. Although paths between human-scene interaction anchors are planned in Section [3.3,](#page-3-2) the Path Refiner accounts for the gap between diverse real human motions and the path formed by straight lines between the discrete grids. Both the Path Refiner and the Motion Synthesizer follow the CVAE framework. Specifically, we apply a Transformer [\[33\]](#page-12-17) as the basic architecture for both the encoder and decoder of these two networks to synthesize continuous and smooth motions. Our motion completion network synthesizes all M frames of paths and body poses simultaneously as in [\[34,](#page-12-3)[35\]](#page-12-4), instead of one by one in an auto-regressive manner [\[9,](#page-11-8)[11,](#page-11-6)[12\]](#page-11-3).
65
+
66
+ <span id="page-4-0"></span>For the Path Refiner, we take the scene context encoded by PointNet [\[27\]](#page-12-15) to synthesize the refined path. The refined path is composed of a sequence of translation-orientation pairs {(t<sub>1</sub>, ϕ<sub>1</sub>), ..., (t<sub>M</sub>, ϕ<sub>M</sub>)}. Following [\[26\]](#page-12-8), we introduce a positional encoding formed from sinusoidal functions that take the time steps t ∈ [1, ..., M] as input, to ensure the continuity and smoothness of the refined path. Moreover, we leverage one more positional encoding obtained from the planned path, by encoding each step (t<sub>i</sub>, ϕ<sub>i</sub>) of the path planned in Section [3.3](#page-3-2) with a fully connected layer, to ensure the refined path stays within the obstacle-free regions. The effectiveness of this additional positional encoding is illustrated in Section [4.2,](#page-6-1) where our Path Refiner further improves the diversity of synthesized motion.
67
+
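+ The two positional encodings could look like the following sketch (the layer sizes and the 7-D path parameterization, 3-D translation plus 4-D orientation, are our assumptions):
+
+ ```python
+ import math
+ import torch
+ import torch.nn as nn
+
+ def sinusoidal_pe(m, d):
+     """Standard sinusoidal positional encoding for time steps 1..M (d assumed even)."""
+     t = torch.arange(m, dtype=torch.float32).unsqueeze(1)
+     freq = torch.exp(torch.arange(0, d, 2).float() * (-math.log(10000.0) / d))
+     pe = torch.zeros(m, d)
+     pe[:, 0::2] = torch.sin(t * freq)
+     pe[:, 1::2] = torch.cos(t * freq)
+     return pe
+
+ class PathPositionalEncoding(nn.Module):
+     """Time-based sinusoidal PE plus a learned encoding of each planned step (t_i, phi_i)."""
+     def __init__(self, d_model, path_dim=7):
+         super().__init__()
+         self.proj = nn.Linear(path_dim, d_model)   # the fully connected layer from the text
+     def forward(self, x, planned_path):
+         # x: (M, B, d_model) frame queries; planned_path: (M, B, path_dim) steps (t_i, phi_i)
+         m, _, d = x.shape
+         return x + sinusoidal_pe(m, d).unsqueeze(1).to(x.device) + self.proj(planned_path)
+ ```
+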
68
+ The motion sequence with M body poses {θ<sub>1</sub>, ..., θ<sub>M</sub>} is completed by our Motion Synthesizer. As with the Path Refiner, we take the scene context encoded by the PointNet as
69
+
70
+ <span id="page-5-1"></span><span id="page-5-0"></span>![](_page_5_Figure_0.jpeg)
71
+
72
+ Figure 6. **Completion Network.** The motion completion network is composed of two modules, namely the Trajectory Refiner and the Motion Synthesizer. The Trajectory Refiner reconstructs the motion trajectories under the guidance of the planned path. The Motion Synthesizer takes the scene context, the planned path, and the pose and action label of the human-scene interaction anchors as inputs and generates body movements. The two modules are trained together in an end-to-end manner.
73
+
74
+ the condition to complete these scene-aware motions. The completed motions should fulfill two requirements, namely matching the paths produced by the Path Refiner and transitioning naturally between the given human-scene interaction anchors. To achieve this, the Motion Synthesizer first takes the refined path as an additional positional encoding to guide motion synthesis, similar to the practice of the Path Refiner. For the motion transition, we need to model, within our Motion Synthesizer, the relationship between the two given human-scene interaction anchors and the potential motions that could be completed. Inspired by the action token in [26], which helps the transformer decoder build up the relationship between synthesized motions and the given action, we encode the action labels and poses of the human-scene interaction anchors with additional fully connected layers as learnable tokens and add them to the beginning and end of the positional encoding, respectively. With these tokens, our Motion Synthesizer can directly build up the relationship between the two anchors and synthesize reasonable and smooth motions. Following these two steps, the motion completion network can generate natural motions for the given human-scene interaction anchors along the planned path.
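+
+ A sketch of how such anchor tokens might be attached to the decoder's input sequence (our illustration; the token construction and dimensions are assumptions):
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class AnchorTokens(nn.Module):
+     """Encodes the (pose, action) of the two interaction anchors as learnable tokens
+     placed at the beginning and end of the M-frame sequence."""
+     def __init__(self, pose_dim, num_actions, d_model):
+         super().__init__()
+         self.pose_fc = nn.Linear(pose_dim, d_model)      # fully connected pose encoder
+         self.action_emb = nn.Embedding(num_actions, d_model)
+     def forward(self, seq, pose_a, act_a, pose_b, act_b):
+         # seq: (M, B, d_model) positional-encoding sequence for the frames to synthesize
+         tok_a = (self.pose_fc(pose_a) + self.action_emb(act_a)).unsqueeze(0)  # start anchor
+         tok_b = (self.pose_fc(pose_b) + self.action_emb(act_b)).unsqueeze(0)  # end anchor
+         return torch.cat([tok_a, seq, tok_b], dim=0)     # (M + 2, B, d_model)
+ ```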
2206.05099/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2021-11-11T03:06:36.161Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36 Edg/95.0.1020.44" version="15.7.3" etag="A_E7ZTWk3VOk3ME8jaxx" type="device"><diagram id="x2VqXs5KyUVAhxwup3OJ">7V1bk5s4Fv41XZV9CIXu0uN0Z7P7MqmpzdTs5GnKsUm3d9xNl9uZdM+vX2ED1s2AMQgMSqUSg4VkdL5zpPPp6OgG3T2+/mu7eH74OV0lmxsYr15v0IcbCAHnVP6X3Xk73OEkPty4365XeaHjjc/rv5P8ZlHs+3qVvGgFd2m62a2f9ZvL9OkpWe60e4vtNv2hF/uWbvRWnxf3eYvx8cbn5WKTWMX+u17tHvK3gOx4/9/J+v6haBlQcfjmcVEUzqt4eVis0h9KW+ifN+hum6a7w6fH17tkk3Ve0S+HH/TxxLflD9smT7smD8DDA38tNt/zd8t/1+6teNn7bfr9OS+WbHfJq6uLF183Zo8dfwIoX0wiIkkfk932TRYpKmL5IzkYYHH949i1AscRwuL451DiQelkAkXESC7jXL73ZWPHLpAf8l5w9wiq7xEptOfs406iO/k7zZ68fU62a9lWslXv/3K8efvjYb1LPj8vltmTP2QJee9h9yh/xgcgP35bvyYF0vfX6dMuv5Rdim5fdtv0z+Qu3aTb/Y9AdMmTr9/KbwocHurabJSSq0XCvy3l/W26W+zW6ZO8/V7sKz00ICJI5NVJGauyhJWyRJRGkOviJCyKgSVRyiOEbCky1okQsSXE/3z6ZMlRAvtplWSPxA0EVCcQQRla0CYCScCKJKybHgdQ1x5gKw+P7X7G8eWdTGbTyWV3+e9kOptORnSwTmb1Nj95Wv2UzRzk1XKzeHlZL/VO1SVgdnF2rXRbvP9jdbDsnNvkdb37Pa8z+/wlqzEi+dWH17yB/cVbcfEk3/d39UJ5Krs8Pra/Kp47vGSyKiY7p0QmOyL9vl0mmm3dLbb3STH2ugWrSI44JFfc2yYbOSz9pf8IlzjzFn5J1/LnnVROZgDi8Nvzh9SpkFmPMOYhRj2HF7bq2WOrfOlGcOMBbs3hRmy40SHhhpABExHxWPljVNgUfchAMeoNfWJE6GuJpDaobYe+QaGGLUxEQIVa4aucizXMLQgLtV6E+8JeMaYH8J0Cn2rn8JDgc2AEqSBh7bBn+iuZ+VTrFbw37AEH9nD2l9x9TTerl7dH+d8Nu329YR/+APl3nc60q8FZfPOUPiW1k++80M3l825iTLtzICooQw6UmeNTm2l30VStROT1w2InP7xJ0cxLOqat9ikemwqbqu9pWiXhz/cE8+GqTE/NZy/Ph6wyeRSfvWyzVVYXB8+28C0K+6pO+cCgcz6L5yQGKNpyKdJt0SvqcFoXyLtzMOdg78Cg9J3Jp4C4H0IFgP4gOCZCb5RerQbBQeFmcioSFgap0g5u1jw91kkV2Bv4Ap93BqVyGHNHwqnsMaKSH7Qd9uw1YJ1UAX1hr2i3GaUCZ+O0G5SK8Oezw6Ykl0WpzEc6pqn2KR5oiWeqbqhllBxhXX35oY4grql2s+Wy+ezm+VBXVnCKz262uSurj4OHW7gXrlkfHHTWZ6qoNZK0ZlW4UVGHM7vA5J2DOQeTN+zqrcWqcOkStEOdRaSI/lA3Ji5vlL6shrpBEWYRKaInIoV7IlJGFZY3SvBp5u3EfGkYIoX3RKRwX0SKi8Q7SaSg2bjqZmwK8+eqF3Wcz6TMRzyWrfYpHxfTNbi5PlroL5qBrjbXRxP9RTHeXc4VXMt+gzpFhiuDTANuhKA2tuDGvNf0v7sz2QgG/J1nNDx520xEjGogaB3FYFfVn++DGmw8DXgawJMWneHJUVWPeLLJ2YCnceIJtCUEXXWZ2986RNSYeOiAqKphqjWiXHX1iKgxscwBUcdZNOkOUa66ekRUpwzy+Y54/boFa7dwAeuov5c/k93yobgo0spkF//7/vj8OX99jSBQXuPjx/w1VouXh/Ldx45Ta4Mkj5QcMaLMM3QpDyiMekUj+EqQLd6UYs9ZgZeKt6GmT1z5I83itLp4GYLdrrjQissPh7drraedku1BT69LTwGoVqjWfD3T6/WjpwCcp6gAn6d6Z5dnHetqgwjjy5OEnQBoXhGNsfaO7+2MUpiXqaM0CHeTFqxA8MTzgrVNC3bAyEnxvScR0eRXXqsGSDgFqMj1IgHa6wV3XQerDbRIU9P5MOJ6SjaAI84cdlJVJkduti5WbTC0pDDVkEEqdJNld3JfAYO4AZV8MNh6TsAsnqzi1S+34rzY8jiYFW9Aik7AirfO7ohPrAmWZlwTH/Vtwm0CciomvKbjUYSYGluFdUHsDbr6vT249mbQR0nh6T7ZkFFTQ7lchOqjD+QRhSo1olfY1OMy5+GyWqKu25uMS3fMHg6xoaMEGiWoNO5urDHaFmzCVbWKN9gb2kIw6CjRxhiO1IEO9gI8Dmpa8YPBsLN7lBjkFEiz1DcGRVzTihcMkga01wArtBwBHYdxLe/eeheR4uZ9cPpqcXx3156TN1I+DrqngxJoDrgARLFCFDFB2sGZCmfVqpGFrDcUN4gV9s1gMwfp1iv3QWB9J0yA+2hLfZDqU0pGwGATm9abCv1R0/ljYrDJfDa9mww290dhkwaxhqOgsP2b8QaE2wTMeGsKm5xILzYOCpvYLNZkbHh1x4+XwiaB6xmln21S2CA2XOuWKejNmXhWrx9nOhA6owTagcI+RbV0BjtR14oXENJRMjoBhNXMdlcgrGa2/YFwlJu3AwhNapv0AkKT2rZb8QNCOEYQTpfbJoMepuXgtoXObbdepXZWrBpZ1BuEG0f69ceIWNNp4psSoTYlZ/XCBCiRtowIrQvqG5zZptMN7Kvp/DEx23Q+Zw2bzDag/qht2iCQbRTU9gCGvAETNwFD3prbPkBnrNw2temtyVjx6o4fL7fNAtszSkfb4rZ5T9w29+RMs8DojBJoNdx2V7Cr5ra9gXCUjE4AYQ233REIa7htbyAcZSK/AMIabrsjENZw295AOM7sf5Pltg+e64i4bcD6Irf3Natmtjd2mzUIAjwvSZCWb0cTu5aWp53XqabnsbXAkd3Hb+KhsdtnxnTXBeFuknlZeRiwj2RezGg1T5556keaxWl1ccouKi604hcnCGKd7gEPenpdegoA6SSZl8XJM+IhmZepeCA/w76pogJcXd5UvbPLM6385brqWn3Ap04aAbM5yoIgI0epnZqmr5MsmGvV46RI4GxFIjyKxBWpe1Ik8znwxdIS5k8m3LWU4ZKJdSDPfKyYMOl/j+IJJ0+fLR6PFo3DtuKZj3WztMendRtrzj5EdfoHUuyKD2NREU6r9gzdRzdc3jk2mzmVyAJeHR8GQMSJiLP1YrD/1xh/G0QWiAg
wjAgXxb89AXiUp4OEZQ9oZIeGgkREYYapoZ+NTwoxT7WW9WLlAFioV9sdK8xttum39a/dGoORhCmafYwIKdl4D5GKPKTcuwqVRqwflc7q9aPSYQf2KIGGoA0IpkSnU9QOaMg86UTWS7kPoNm8zlzGDjn5K/caeBg7hIutCSo9uEqbYweWUwqKuh87snoJ9qDSwsU7BaANDjRz7MgA0cfYsQewj7FDwC6BNsJV7mOY1nGN+kuBykYnIAKONARLKVdiOLsww7Sa4lo/3zwHk3a+OcBD4h8zY46DRdQ26zph8tEjxJFOUyKGI/O89K6OuuLGSwBWvMTJ067MR47zjpMqXeR2b/6IeeZVm0eI+S4XL66LTkN/J2Qimiu1rsEnUooNdLqkIK01GKBKFcboeEhq3yosajXFfARnNHP1I4Jd+gTqXhvPi4HOMa4pmaKXuoKgak1tpZwOhct+zC+LnRwdn/bNwBhewdSPGFM/Upx9dfa4Z57D2khD2kDFXrx4t/iHvPF5t1j+KWUMY9dWeOmd73SULDbr+0xUSymJ/c7jzIdfLxebn/IvHterVfb47TZ5Wf+dr8tl0s/1WNZLbm/Ih6yu77v05bgp2bkQ5Vh2agq3m8s5BiB0+cgbLn66PO+56zUnYS8FvPuaie3u06f3UmDvXauEcxcaRcgQGrMlJvqSmL2k8G5ZSOy39a9BYk4105fgMc84G49qZtPz71aF0O6CmrnVzFiqxxy6+Nf+NC0kGx3l5MiYW2dny3NljS3u6Mj6fb2Gs9LdfKn0pUMYuREga0iBxy6l7yuIDMSBDL8GpUesH6Xf19uj0sNzlB7OVekR9az0IeHANSg9Jv0o/b7eHpXexaKFrTCm0mPsWelDuO0old5cNICCtV40MHn4rC6mLCIYP65LpXft6A57rTQDYEmHCb8WIETnXoUFQKw7C5DV5ckCNN2UPOPtfJZ0iGcLECi+q7AAmHRnAbK6/FgA0Hq/9XycAEs6yK8FADbfN5d4dsiwz3h2AFzkVzC2gxtbM84424doRkO2jS3e75XEXoxtAw5vApnv2ya+rzYNAJKIK1k1CzkVUiR2JnzKI+RIuZ5NrzvYr56HHgdx9iFOhGhU3PIo0QZ8W5BoO4liMIhEw0HaF4gUUSlSI4gZsCHE2IAGC2I8Q4wI2fGAfcuwwf7vUSS/QRhHxIZ4v9lvAJjuwTq56E+PHFeT/6Zclh09iDEcAsTQpiomA+KDTCYB4gYsxyAg3k8hlfMZipjk0jBDV0h035i2vfbpYLpGHleE6QbO+BgxPZCdnu5ZrDkSJoHpBs7rGDE9RPZIAG0XcTqQPnGo+hVCOiQBa+fDM6ALlSDuWhmrWoM556ghah4IhGLRSfo5KpxVe8lAB6DtYwf0jQ59jLFeoMeBXa8f3KEGrEHA3dC4c1k91p/VY97QF/aLXQH6LKvXEfQsq+cPdzDgbvy4c1g9jHlfVi+r2hP6woa5K0CfafW6gp5p9TziziYiO+chulh2vmnCPFQjhRrpvaAALqQU52ypSAHm0cut2AQ0v3gdCKKOogIIMkKu5EDgWF0pcwlril4Rwdlceja9ORXGrkZvQKzlCTTkQLErwgY7QjM6oeTQDGNrpBZR5fz2WHQjWIbjCGHrNFNFwQoGqlbBOpGsTbbOQ8E4EBGip+XgTGDXn4J1yjr6SBgbx3d355+cXExFm8xSyxlxmVS22Yz4puHsVk8cXU4UtMSzB8M3jnlwmT2zjM+z8z00TySt1wV549wR5+adpcYcQk4JK38aN/f36OUvzgYLcGBar8DrI5RE6jBphPnEMELcst5nsw9xXSNQbaRZYuZWmAz86/VjkuuYbE2JVYNStgKFD5YCw4DJa8ckpj4wmbXiCZMuxjakm5QQY5HOost5VHmSgI8d6Pis9GBwxpLJyBuvknExoCFxm0MyGHuWTIPgzTDCDj3CckojZbnJZIpi2TS7fIQVMY3UpJDAaoX5ONwM4JBN7EowqaLF8BFYL5gEViOeIBkCgq8ekhiz/iGZNeIJki6+PuTF1NAn5JiFyClZZYEA0F5h6W2uRVysb8hjeIbEEPUsMRcnGvLOnSExjOIeJSYvt2m6U42o7MSHn9NVkpX4Pw==</diagram></mxfile>
2206.05099/main_diagram/main_diagram.pdf ADDED
Binary file (28.6 kB). View file
 
2206.05099/paper_text/intro_method.md ADDED
@@ -0,0 +1,68 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ A wise person can foresee the future, and so should an intelligent vision model. Because spatio-temporal information reflects the inner laws of the chaotic world, video prediction has recently attracted much attention in climate change [@xingjian2015convolutional], human motion forecasting [@babaeizadeh2017stochastic], traffic flow prediction [@wang2019memory] and representation learning [@srivastava2015unsupervised]. To cope with the inherent complexity and randomness of video, many interesting works have appeared in the past years. These methods achieve impressive performance gains by introducing novel neural operators like various RNNs [@xingjian2015convolutional; @wang2019memory; @wang2018eidetic; @wang2017predrnn; @{wang2018predrnn++}; @wang2021predrnn] or transformers [@weissenborn2019scaling; @rakhimov2020latent], delicate architectures like autoregressive models [@ranzato2014video; @srivastava2015unsupervised; @xingjian2015convolutional; @kalchbrenner2017video; @ho2019axial] or normalizing flows [@yu2019efficient], and distinct training strategies such as adversarial training [@mathieu2015deep; @saito2017temporal; @tulyakov2018mocogan; @vondrick2016generating; @saito2018tganv2; @luc2020transformation; @clark2019adversarial; @acharya2018towards]. However, there is relatively little understanding of their necessity for good performance, since many methods use different metrics and datasets. Moreover, the increasing model complexity further aggravates this dilemma. A question arises: can we develop a simpler model that provides better understanding and performance?
4
+
5
+ Deep video prediction has made incredible progress in the last few years. We divide primary methods into four categories in Figure. [1](#fig:cmp_archi){reference-type="ref" reference="fig:cmp_archi"}, i.e., (1) RNN-RNN-RNN (2) CNN-RNN-CNN (3) CNN-ViT-CNN, and (4) CNN-CNN-CNN. Some representative works are collected in Table. [\[tab:previous_works\]](#tab:previous_works){reference-type="ref" reference="tab:previous_works"}, from which we observe that RNN models have been favored since 2014.
6
+
7
+ <figure id="fig:cmp_archi" data-latex-placement="h">
8
+ <embed src="fig/cmp_archi.pdf" style="width:47.0%" />
9
+ <figcaption>Different architectures for video prediction. Red and blue lines help to learn the temporal evolution and spatial dependency. SimVP belongs to the framework of CNN-CNN-CNN, which can outperform other state-of-the-art methods.</figcaption>
10
+ </figure>
11
+
12
+ In this context, lots of novel RNNs are proposed. ConvLSTM [@xingjian2015convolutional] extends fully connected LSTMs to have convolutional structures for capturing spatio-temporal correlations. PredRNN [@wang2017predrnn] suggests simultaneously extracting and memorizing spatial and temporal representations. MIM-LSTM [@wang2019memory] applies a self-renewed memory module to model both non-stationary and stationary properties. E3D-LSTM [@wang2018eidetic] integrates 3D convolutions into RNNs. PhyCell [@guen2020disentangling] learns the partial differential equations dynamics in the latent space.
13
+
14
+ Recently, vision transformers (ViT) have gained tremendous popularity. AViT [@weissenborn2019scaling] merges ViT into the autoregressive framework, where the overall video is divided into volumes, and self-attention is performed within each block independently. Latent AViT [@rakhimov2020latent] uses VQ-VAE [@oord2017neural] to compress the input images and apply AViT in the latent space to predict future frames.
15
+
16
+ In contrast, purely CNN-based models are not as favored as the approaches mentioned above, and fancy techniques are usually required to improve their novelty and performance, e.g., adversarial training [@kwon2019predicting], teacher-student distilling [@chiu2020segmenting], and optical flow [@gao2019disentangling]. We admire these significant advancements but seek to explore how far a simple model can go. In other words, we have made much progress against the baseline results, but have the baseline results been underestimated?
17
+
18
+ We aim to provide a simpler yet better video prediction model, namely SimVP. This model is fully based on CNN and trained by the MSE loss end-to-end. Without introducing any additional tricks and complex strategies, SimVP can achieve state-of-the-art performance on five benchmark datasets. The simplicity makes it easy to understand and use as a common baseline. The better performance provides a solid foundation for further improvements. We hope this study will shed light on future research.
19
+
20
+ <figure id="fig:overall_framework" data-latex-placement="h">
21
+ <embed src="fig/overall_framework.pdf" style="width:90.0%" />
22
+ <figcaption>The overall framework of SimVP. Both the Encoder, Translator, and Decoder are built upon CNN. The encoder stacks <span class="math inline"><em>N</em><sub><em>s</em></sub></span> ConvNormReLU block to extract spatial features, i.e., convoluting <span class="math inline"><em>C</em></span> channels on <span class="math inline">(<em>H</em>, <em>W</em>)</span>. The translator employs <span class="math inline"><em>N</em><sub><em>t</em></sub></span> Inception modules to learn temporal evolution, i.e., convoluting <span class="math inline"><em>T</em> × <em>C</em></span> channels on <span class="math inline">(<em>H</em>, <em>W</em>)</span>. The decoder utilizes <span class="math inline"><em>N</em><sub><em>s</em></sub></span> unConvNormReLU blocks to reconstruct the ground truth frames, which convolutes <span class="math inline"><em>C</em></span> channels on <span class="math inline">(<em>H</em>, <em>W</em>)</span>. </figcaption>
23
+ </figure>
24
+
25
+ Video prediction aims to infer the future frames from the previous ones. Given a video sequence $\boldsymbol{X}_{t, T}=\{ \boldsymbol{x}_i \}_{t - T + 1}^{t}$ at time $t$ with the past $T$ frames, our goal is to predict the future sequence $\boldsymbol{Y}_{t, T'} = \{ \boldsymbol{x}_i \}_{t+1}^{t + T'}$ that contains the next $T'$ frames, where $\boldsymbol{x}_i \in \mathbb{R}^{C, H, W}$ is an image with channels $C$, height $H$, and width $W$. Formally, the prediction model is a mapping $\mathcal{F}_{\Theta}: \boldsymbol{X}_{t, T} \mapsto \boldsymbol{Y}_{t, T'}$ with learnable parameters $\Theta$, optimized by: $$\begin{equation}
26
+ \Theta^* = \arg\min_{\Theta} \mathcal{L}(\mathcal{F}_{\Theta}(\boldsymbol{X}_{t, T}), \boldsymbol{Y}_{t, T'})
27
+ \end{equation}$$ where $\mathcal{L}$ can be various loss functions, and we simply employ MSE loss in our setting.
28
+
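+ Since the loss is plain MSE and training is end-to-end, a training step reduces to the minimal sketch below (PyTorch-style; the (B, T, C, H, W) tensor layout is an assumption consistent with this section):
+
+ ```python
+ import torch.nn.functional as F
+
+ def train_step(model, optimizer, x, y):
+     """One end-to-end step: x holds the past T frames (B, T, C, H, W) and
+     y the next T' frames (B, T', C, H, W); the only loss is vanilla MSE."""
+     optimizer.zero_grad()
+     loss = F.mse_loss(model(x), y)
+     loss.backward()
+     optimizer.step()
+     return loss.item()
+ ```
+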
29
+ As shown in Figure. [1](#fig:cmp_archi){reference-type="ref" reference="fig:cmp_archi"} (a), this kind of method stacks RNN to make predictions. They usually design novel RNN modules (local) and overall architectures (global). Recurrent Grammar Cells [@michalski2014modeling] stacks multiple gated autoencoders in a recurrent pyramid structure. ConvLSTM [@xingjian2015convolutional] extends fully connected LSTMs to have convolutional computing structures to capture spatio-temporal correlations. PredRNN [@wang2017predrnn] suggests simultaneously extracting and memorizing spatial and temporal representations. PredRNN++ [@{wang2018predrnn++}] proposes gradient highway unit to alleviate the gradient propagation difficulties for capturing long-term dependency. MIM-LSTM [@wang2019memory] uses a self-renewed memory module to model both the non-stationary and stationary properties of the video. dGRU [@oliu2018folded] shares state cells between encoder and decoder to reduce the computational and memory costs. Due to the excellent flexibility and accuracy, these methods play fundamental roles in video prediction.
30
+
31
+ This framework projects video frames to the latent space and employs RNNs to predict the future latent states; see Figure. [1](#fig:cmp_archi){reference-type="ref" reference="fig:cmp_archi"} (b). In general, these methods focus on modifying the LSTM and encoding-decoding modules. Spatio-Temporal video autoencoder [@patraucean2015spatio] incorporates ConvLSTM and an optical flow predictor to capture changes over time. Conditional VRNN [@castrejon2019improved] combines a CNN encoder and an RNN decoder in a variational generating framework. E3D-LSTM [@wang2018eidetic] applies 3D convolution for encoding and decoding and integrates it in latent RNNs for obtaining motion-aware and short-term features. CrevNet [@yu2019efficient] proposes using CNN-based normalizing flow modules to encode and decode inputs for information-preserving feature transformations. PhyDNet [@guen2020disentangling] models physical dynamics with CNN-based PhyCells. Recently, this framework has attracted considerable attention, because the CNN encoder can extract decent and compressed features for accurate and efficient prediction.
32
+
33
+ This framework introduces the Vision Transformer (ViT) to model latent video dynamics. By extending the language transformer [@vaswani2017attention] to ViT [@dosovitskiy2020image], a wave of research has been sparked recently. As for image transformers, DeiT [@touvron2021training] and Swin Transformer [@liu2021swin] have achieved state-of-the-art performance on various vision tasks. The great success of image transformers has inspired the investigation of video transformers. VTN [@neimark2021video] applies sliding window attention on the temporal dimension following a 2D spatial feature extractor. TimeSformer and ViViT [@bertasius2021space; @arnab2021vivit] study different space-time attention strategies and suggest that separately applying temporal and spatial attention can achieve superb performance. MViT [@fan2021multiscale] extracts multiscale pyramid features to provide state-of-the-art results on SSv2. Video Swin Transformer [@liu2021video] expands Swin Transformer from 2D to 3D, where the shiftable local attention schema leads to a better speed-accuracy trade-off. Most of the models above are designed for video classification; works about video prediction [@weissenborn2019scaling; @rakhimov2020latent] using ViT are still limited. More related works may emerge in the future.
34
+
35
+ The CNN-based framework is not as popular as the previous three because it is so simple that complex modules and training strategies are usually required. DVF [@liu2017video] suggests learning the voxel flow with a CNN autoencoder to reconstruct a frame by borrowing voxels from nearby frames. PredCNN [@xu2018predcnn] combines cascade multiplicative units (CMU) with CNN to capture inter-frame dependencies. DPG [@gao2019disentangling] disentangles motion and background via a flow predictor and a context generator. [@chiu2020segmenting] encodes RGB frames from the past and decodes the future semantic segmentation using CNN and teacher-student distilling. [@shouno2020photo] uses a hierarchical neural model to make predictions at different spatial resolutions and trains the model with adversarial and perceptual loss functions. While these approaches have made progress, we are curious about what happens if the complexity is reduced. Is there a solution that is much simpler yet can match or exceed the performance of state-of-the-art methods?
36
+
37
+ We have witnessed many terrific methods that have achieved outstanding performance. However, as the models become more complex, understanding their performance gains becomes an inevitable challenge, and scaling them to large datasets is intractable. This work does not propose new modules. Instead, we aim to build a simple network from existing CNNs and see how far such a simple model can go in video prediction.
38
+
39
+ Our model, dubbed *SimVP*, consists of an encoder, a translator and a decoder built on CNN; see Figure. [2](#fig:overall_framework){reference-type="ref" reference="fig:overall_framework"}. The encoder extracts spatial features, the translator learns temporal evolution, and the decoder integrates spatio-temporal information to predict future frames.
40
+
41
+ The encoder stacks $N_s$ ConvNormReLU blocks (Conv2d+LayerNorm+LeakyReLU) to extract spatial features, i.e., convoluting $C$ channels on $(H,W)$. The hidden feature is:
42
+
43
+ $$\begin{equation}
44
+ \centering
45
+ \label{eq:encoder_layernorm}
46
+ z_{i} = \sigma(\mathrm{LayerNorm} (\mathrm{Conv2d}(z_{i-1}))), 1 \leq i \leq N_s
47
+ \end{equation}$$ where the input $z_{i-1}$ and output $z_{i}$ shapes are $(T,C,H,W)$ and $(T,\hat{C},\hat{H},\hat{W})$, respectively.
48
+
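+ A minimal sketch of such an encoder (our illustration, not the reference implementation): T is folded into the batch so each frame is encoded independently, and GroupNorm(1, ·) stands in for LayerNorm over feature maps, which is an assumption on our part; strides and widths are illustrative.
+
+ ```python
+ import torch.nn as nn
+
+ def conv_norm_relu(c_in, c_out, stride=1):
+     """One encoder block: Conv2d + norm + LeakyReLU."""
+     return nn.Sequential(
+         nn.Conv2d(c_in, c_out, kernel_size=3, stride=stride, padding=1),
+         nn.GroupNorm(1, c_out),          # LayerNorm-like normalization over (C, H, W)
+         nn.LeakyReLU(0.2, inplace=True),
+     )
+
+ class Encoder(nn.Module):
+     """Stacks N_s blocks; (B, T, C, H, W) -> (B*T, c_hid, H', W')."""
+     def __init__(self, c_in, c_hid, n_s):
+         super().__init__()
+         blocks = [conv_norm_relu(c_in, c_hid, stride=2)]
+         blocks += [conv_norm_relu(c_hid, c_hid) for _ in range(n_s - 1)]
+         self.net = nn.Sequential(*blocks)
+     def forward(self, x):
+         b, t, c, h, w = x.shape
+         return self.net(x.view(b * t, c, h, w))   # each frame encoded independently
+ ```
+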
49
+ The translator employs $N_t$ Inception modules to learn temporal evolution, i.e., convoluting $T \times C$ channels on $(H,W)$. The Inception module consists of a bottleneck Conv2d with $1 \times 1$ kernel followed by parallel GroupConv2d operators. The hidden feature is:
50
+
51
+ $$\begin{equation}
52
+ \centering
53
+ \label{eq:encoder_inception}
55
+ z_{j} = \mathrm{Inception}( z_{j-1} ), N_s < j \leq N_s+N_t
56
+ \end{equation}$$ where the inputs $z_{j-1}$ and output $z_{j}$ shapes are $(T \times C,H,W)$ and $(\hat{T} \times \hat{C},H,W)$.
57
+
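+ A sketch of one such module under stated assumptions (the kernel sizes, the group count, and summing the branch outputs are illustrative choices; c_hid and c_out must be divisible by the group count):
+
+ ```python
+ import torch.nn as nn
+
+ class InceptionModule(nn.Module):
+     """Translator block: a 1x1 bottleneck followed by parallel GroupConv2d branches."""
+     def __init__(self, c_in, c_hid, c_out, kernels=(3, 5, 7, 11), groups=8):
+         super().__init__()
+         self.bottleneck = nn.Conv2d(c_in, c_hid, kernel_size=1)
+         self.branches = nn.ModuleList([
+             nn.Conv2d(c_hid, c_out, kernel_size=k, padding=k // 2, groups=groups)
+             for k in kernels
+         ])
+     def forward(self, x):
+         # x: (B, T*C, H, W) -- frames stacked along the channel axis
+         x = self.bottleneck(x)
+         return sum(branch(x) for branch in self.branches)
+ ```
+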
58
+ The decoder utilizes $N_s$ unConvNormReLU blocks (ConvTranspose2d+GroupNorm+LeakyReLU) to reconstruct the ground truth frames, which convolutes $C$ channels on $(H,W)$. The hidden feature is:
59
+
60
+ $$\begin{equation}
61
+ \centering
62
+ \begin{aligned}
63
+ \label{eq:decoder_layernorm}
64
+ z_{k} = \sigma(\mathrm{GroupNorm} (\mathrm{unConv2d}(z_{k-1}))),\\ N_s+N_t<k \leq 2N_s+N_t
65
+ \end{aligned}
66
+ \end{equation}$$ where the shapes of input $z_{k-1}$ and output $z_{k}$ are $(T,\hat{C},\hat{H},\hat{W})$ and $(T,C,H,W)$, respectively. We use ConvTranspose2d [@dumoulin2016guide] to serve as the $\mathrm{unConv2d}$ operator.
67
+
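+ The decoder block and the way the three parts compose could look like the sketch below (again our illustration: the readout layer, the group count, and the reshapes are assumptions consistent with the shapes stated above):
+
+ ```python
+ import torch.nn as nn
+
+ def unconv_norm_relu(c_in, c_out, stride=2):
+     """One decoder block: ConvTranspose2d + GroupNorm + LeakyReLU."""
+     return nn.Sequential(
+         nn.ConvTranspose2d(c_in, c_out, kernel_size=3, stride=stride,
+                            padding=1, output_padding=stride - 1),
+         nn.GroupNorm(8, c_out),
+         nn.LeakyReLU(0.2, inplace=True),
+     )
+
+ class SimVPSketch(nn.Module):
+     """Encoder -> translator -> decoder; the translator must preserve T*C channels."""
+     def __init__(self, encoder, translator, decoder, readout):
+         super().__init__()
+         self.enc, self.trans, self.dec, self.readout = encoder, translator, decoder, readout
+     def forward(self, x):                        # x: (B, T, C, H, W)
+         b, t = x.shape[:2]
+         z = self.enc(x)                          # (B*T, c_hid, H', W')
+         _, c, h, w = z.shape
+         z = self.trans(z.view(b, t * c, h, w))   # temporal evolution over stacked channels
+         z = z.view(b * t, c, h, w)
+         y = self.readout(self.dec(z))            # upsample and map back to C channels
+         return y.view(b, t, *y.shape[1:])
+ ```
+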
68
+ SimVP does not use advanced modules such as RNN, LSTM and Transformer, nor does it introduce complex training strategies such as adversarial training and curriculum learning. All we need are CNNs, shortcuts and the vanilla MSE loss.
2206.08564/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2022-05-18T23:31:31.728Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.64 Safari/537.36" etag="kdOFtTeoA_nmFZmqs-Zo" version="18.0.2" type="google" pages="3"><diagram id="njX9GM816sZrOZjtLQRZ" name="Page-1">7VxZc9s2EP41mtqZUYYE70fbcZqHpPU0nWnzlIEISGJNESpJ1ZZ/fQESFA+AhyIekmN7PCYXB0Hs7reLxRIz7W7z/GsIt+svBGF/BhT0PNM+zABQgWPTf4yyTym2pqeEVeghXiknfPVeMCcqnLrzEI5KFWNC/NjblokuCQLsxiUaDEPyVK62JH75qVu4wgLhqwt9kfqXh+I1p6qmkxd8wt5qnT3aNPgLbmBWm79KtIaIPBVI2v1MuwsJidOrzfMd9tnsZROTtvtYU3oYWYiDuEsD+2X76eXuUYee7hP/5tZ+2Vtz00i7+Q/6O/7KLh9uvM8mISS7AGHWjTLTbp/WXoy/bqHLSp8o2yltHW98eqfSyyUJYs5Hld1HcUgeD5MHWA3P9++IT8Kkd+3eZL+UjmC0Tp6SNytU00zN0WjpLR8wDmP8XDsV6mGCqWhissFxuKdVeAPT5q/NpVI3OY+ech4DhdPWBfaCrCLkcrU69J3PPL3gk38EI3TtZ2SElinHGTFCFRjx+vmglvXBABI22BI2qPZQbNBMYdYxotDMb0kYr8mKBNC/z6m3Zb7kdT4TsuXT+A+O4z1nB9zFpMwr/OzFf7Pm7w1+961Q8uGZ95zc7PlNlb8lrrGbBxjHOAwSClDUA8fY+zTzi74+2YUubgJwbhRhuMJxQ72Uw6IAhNiHsfdfeSAybiZNb8IQ7gsVtsQL4qjQ8wMj5HIFtArQ2hUbVanvKE3V6UU6gFysDm9ygqRZlyJpVDjCfaERu/1WLMubJXd1EjqF+Nknit9JYGIKkP7Rw9QTZNoITJ8O/HYR0qsVu3ogVKDptN2kxbXQr3aA/jKwL5dL4LqUvgoh8uhcl8qQY5oynEfmwjTMfnD+4FdzBdMcEegNCc4bQ8G8KhrXEZVPLaherohtyldSvVwTa5SvR2VTsxVQm7aZUyqbKjpQmbaBZm1rcrR+QNtsF9dpG4ZUqaTatrAN3VD60TZVOzdt0y9F20pOVJ8qpHU1WDWsHUmHxNVgpkPol2YlEkuv1Pnm+h3qVbkQxPayRrksql1YkSmX6dp4sexp6XhuuuVcim6dgyUzLsJvtCb1TvpYGqgtPC3hbHGdAIRIRd9I3FUE9FNF4IdWrpWVqKU3L1xNtbH+MCtXS7QS1BJ4scfEjdLvNwuMkBesmHVYUhAGyhcYPWJmRu6DmFbEkVTAP8MF9stCCX1vxUIYLmU0pnB+y8Dbc6F/wws2HkKp/OPIe4GLpD8mYnymaefG7cz4IJGzWmvAo/q8s9khlF6UtwbFrbUdc+W94hhOiWNz/szjxCoXhKwbrdyCLJcRFe8qEvXAfVUM3R9sf8ExOIhD1S0oSkdaRodxaCnWzyhegQA3zE0IFhH7d4WSGab4sFlQAPyOvM11oetCu4rIRWu4ZZdRjFk/Wxx6dHKYlCWkh/y+3Ut5xtkuksxrUVyMc2QrlBimoVq4p0V2JZhqi57JYWVQdE2GC2krwpS/MtekOcbet+WyO1ouMEnMVauYosw01Zmuan1jDNOVzWETeN2RwIUxDuifHKFqYGQbEhdHUTtaLKD7uErk/Pdd7HtBDWosIXKQIUMNiHVVa7RgRwQLKnuSpiKBDelW2FCwAfR2HmUEpoAlbpj/7khWMI8S1UwCqvr2OW3GywVrRCt9hnscFliedl5+YKMkUC7EZVaXWRcQxusSnzmpu5sjk60yZrZsBPUkN05FbCyJ2ACJ2GiDic2kC+EL308BXVdFwDrRtpzG4w6+558hDCK66tgwXVbms8Jmy/GO5nEgo9aAzBVmKMMc0xm4Vbm3Guw239cYoui6HXLqfdjjgmkFAdIFd0WSErBcYjOJZYu7Q5azUHqKV2t6exqAKguq6UNhiTap5zoFlgwV+AZWV2CZNOKWDfM4YAFvwHJZwCJbEo8KLJYxJbCMsSSujeyKcnEM1Jy6mhWWn7oDyg5sNaUshTbeqmEda5hWWcicSkcp9AkdHbu+NhRLOuDacSnycQ26vgYd1tcijgZvOHpROGpK0mVHdtDELYgBYwSBS1AirG8RgmOkxiwH+GRJ1uOGCIyfyq0ffCs1+6Zm+K3U07BCjCdmSS2bTGmj3UFni9GCArk5++Wojk6yEBX8N9jvTJoPs0h+ZbbBTH760XLD1t9Xkq0lEeRRU2IMcHFqrp6vko+WL3GakouRQVHJtzDo5AgoNY6ARMlBM1owavLU40DkDAb6qkDKrDiwmiQQMC5EiRn+5w5R5TjAOUGUfhl+iN7FD+mo+fZ7jf10V3/UI05dbd6h61eGEAY4M4QwJ3Vixk6eGR4juibPpOl5YyfPHLbP5cG69vpjJM/ojcG9Du4KaMOrOvQTm3zBlI8d0KtTZz/w/JsATfj0P/AcUv4GrVPQGIyiLPe2UV3IqJiEG23TQy6SPMdZQ5C/04fmCxWhpfRDDVWxNAf3hOhVFTFERJd9Wz7Yp+Wmdnk+3xlHn7I86vavEhW5nIzj9WXDfEtZeU07IlY1eXXylBVz0m8szwNdhkpiyTYu2qFm0q83s2G+JbG8aqiZPoll0jNyxliaJneF73+GzGzpgj9dD2Cw9CkWr3bF027LTKnWHyUzJZvDt8yU1wyO02emmOKhP4KcHS0ddZkpH/BbZsqPSA1oP4dx3MwUy74Yk0pvqpaxjwjw4EbU6rpLZE36iYsl7hINiR9BhE9BjezbR9LhI+kjznyQBQ2hayPpNhDQdN3o6UBKB7QHDc0xg4aW+GHCQ4iR58Zz0Q1JTljge4o97s4hA9uI6adkd86xkELHKGGLDRZaX7tzqlJdBGUHCE+1PWeJYb3fSDA/nHORcEE85aLNyiK8hLtUt2sOZx3M2rZ7eD0wspISqukSw2uNaXhtMWbyxsOWjRUdtDMxy8c6kYn0Nj+pPF0K5ge+a/f/Aw==</diagram><diagram id="OEZTtWLqELejxfH6CQRH" 
name="Page-2">7Vxbc5s4FP41nk064w43YXjMdfvQ7GY33Wn71JGRbLMF5AW5sfvrV+IOEhcn2Dhp7MkEXcH6jr5zdI7ERL/yt7+HcL26Iwh7E01Zhi6a6NcTTVPZH8tYwyWuZPAaD+7PLFNJczcuwlGlIiXEo+66mumQIMAOreTBMCSP1WoL4omP8eBADwu5n11EV2muatpFwQfsLlfZnUxgJCU+zGqnTx6tICKPpSz9ZqJfhYTQ5MrfXmGPD002MEm724bS/MlCHNA+DX7ST/rdbHrvO7fbz18+/HO1xX9NtaSXH9DbpL84fVi6y4YgJJsAYd6JMtEvH1cuxQ9r6PDSR4Yoy1tR32MplV0uSEBT0FSejmhIvudDp/EaruddEY+Ece/6jcm/LB/BaBXfpWhWqqabuq2z0sv0eXFI8bZxINR8eJnUYeJjGu5YlbSBniGyS9KGlaYfC4Q1Jc1blcDVjDQTpkK0zPsuxp1dpEO/Bwz6LwiDCiooAE2CgiVBQbUOhYIhDDpGjA3SJAnpiixJAL2bIveyCktR5yMh63QU/8WU7lI04IaSKlR469IvvPl7kKa+lkqut2nPcWKXJurwVkDjiXtIKQ6DOEdT1Bww/nva4WI/n2xCB7cMk5nSLgyXmLbUUw05/iH2IHV/VB9EBmbc9CIM4a5UYU3cgEalnu95RiFWml6Vq2x23zbUt+VkUMhR8gCFVOW/5OmCBl6KoDHZCHelRjz5tVxWNItTTQI6hvSZz5S+Z1GJKRD6rYuZqcEno2Z67MEv5yG7WvKre8LkmQ3bRVLcSPxqD+Kv0vpisdAch+UvQ4hcNtaVMmSbpozlkTk3gTkMy+eGWzq/dFukeSBheXAokrcakdHakWlTyU9AxnJwEzIYMgCkyMwtYABlIP1rnBgymaRIoEG/tWMjlp6pU//8HRoUMwSxtWjAbMZAw4oMM9Ox8HwxkOl6apCJdHVETZZdf61otS5NVtFjhVpr0GQDai7VeBGqSxWJ7oVZJ2oHphVzuWyqaMJSaWDjpbcI6NoYtnPNFp4Z7aazqbbWP4ztrIprZaYJXOpycWP5N/4cI+QGyyh27zCFoNzB6DvmauQmoKwijqQC/hHOsVcVSui5S76IchjQmNH5JSdv14HeRVrguwgl8o8j9yecx/1xEUtHmnUOLifgWiJnjdog9VylnU1y/1FZ3lombqPumCrvFRvYFcSm6T33E6tCELJu9GoLslhETLzrTDQE+oaAfq77S4ZBLg51s6AsHUkZe4y8pVg/y3FLGdDnZkIwj/i/MxSPMOMHf84I8Bty/fNS16V2NZGLVnDNLyOKeT9rHLpscLiUxVn3RbrbStnizFMqs1oUB+OC2UolwATqDA9k59e8ORKfmqpLTBPzUKYJGFWNHcM0KTOKcXDN1XvdbY9qvIgLb4EfrkjgQIoD9icngYaZug6Jg6Ooe0LOofN9GYvSnxvquUHDxFxAZCMgm5gQG6reqiT2WOZZ1ZlpKpKZKfV2H2zRMOvGKMvgMl5Bw/xvQ7KCaRRLf+w2MdbbpFlaLhA+q/QR7nBYgjzpvHrDVklgKNAq1FXoAsKxruCcZvW3JGSyVaWlDm/vQHJjd7vnVU0iNvrBCH32RugV6Idkd/tY65LnMYfdzRwPlLFvbO1/CmEQsRWAj8NnWHv70ZDaQENnmPMQtw4n2iUbHvb7lWDjf1thiKLzbk5qtiP3c2i1SpgkLrhYYDN2U4pO4pk9VwZyRepGjWyyGE6ZbGSOLfVgni2tx0JjOCUVOARx1fSmovYSG1MeuxtPR2m/VGTv4O4yre+iY1y1pDVH+/xs0kabfM6qpYlcym6PcOzV0YAxjwXg34k05jGPvzLlYMafYWY5sIz3tZC+ZAlz1LCHNqop+qRprp7uJLdexiRvDhwXc3MNg16GgNJgCEgmudbOFjw3vut+JHICD/qqSMqsWbC6xBQ5LkXZL46iqqvlU6KozF48cYrKHnMIirLe6/zTf/qjAXnqzH+Hzl8ZQwDtxBgCWGMyxGn404bkCK0nRwAwKkdoAkeU/Bo9LAKtixKaCEZscofZUPUgiF6dPeH+FwEa8e5/4ykMQxh0DkGrv4dB7q6jJq9MeS9DtE7Ow8Th4knLLpheBwbmKkIL6X43VZnpNh6INE27tvYDM5E2ZacEDnZIwBz1lMBetMkSpW0Ex6dEIzug1UmJilwKjkOJ2WMexdV7jYOoPOH3dvBm0WjSY2fIHhvdZFMcOhaS2kWabhhgoGNAdu00lmyCm8ec4EC0oltM0ddyGEtRjAoMhiWGYGQ8ezj71BA3+v1Bgmm+mS9e2ohb+boCIAgv4CaZyw2jfrBASHf0bQAkazER3ZDERGbHjIkYYiTtDcMOs8fQukHMYqTHAREIIN6HGLkOnYpL+QLaydAHmhDAFjLk63h7hpTZTMaUljbXh1rHq0rt4KphGwI4R13IG2L4qSc2bYrtVWADgDkyNuJet57YoNeODfesjYuNGNERxhwH6IK/NWKSb5OoaIPSgFd1Rl810b3OEge053g9dU9/jlftfKBZByJZ/6XNCix69FTfWposEIWehjo0YMi2jT0ZaHE7iy4FuF0gTh3++rH4bPLuDb6u1jrKo/lHQh/IdnK99vWcacvfgjDiO07AqBHJY8Qb4tSoHjeQot7pcTMaxOc4Hjcgribe9vG+gn28s7ovb/x9vKDHgaABnbtv+3ifIjYNr+wZbx+vOep7DZ76KqjDq5e++2DGDeiYL2YX9q+FHksWrztMzPvilZD6zf8=</diagram><diagram id="mzobUtI8enx6LSECruy2" 
name="Page-3">7V1rk5s4Fv01XZNslbsQDwEfuzudzO6kNz2bqZ3JpynZkm2mbfACTnfPr1+BkdELA20wtuOaysRIQsY6V0dHV/eSK+tu+fIpRqv5Q4TJ4so08MuV9eHKNIHpe/SvrOR1U+JZ9qZgFge4aFQWfA3+JkWhUZSuA0wSoWEaRYs0WImFkygMySQVylAcR89is2m0EL91hWZEKfg6QQu19PcAp/OiFEC/rPiZBLM5+2roFD9wiVjr4qckc4SjZ67Iur+y7uIoSjefli93ZJGNHhuYzX0fK2q3TxaTMG1ywy+fRl889PTt18XDrzPogk+z+c2o6OU7WqyLX1w8bPrKhiCO1iEmWSfGlXX7PA9S8nWFJlntMwWdls3T5YJeAfpxGoVpgSLIrpM0jp62Q2dmLYLF4i5aRHHeu3UPs/9oOUbJPP+W8jaumQUt36K1t8XzkjglL5UDAbbDSw2TREuSxq+0SXGDY7ibWwqbtL0CoecSYdMoyuYcuKZdFKLCqmbbvstxpx+KoW8Bg/kDwmAapgCDY2pg8DQwAK8vGDxl1AmmdFBcRnE6j2ZRiBb3ZemtiEvZ5nMUrYph/Iuk6WsBB1qnkYgVeQnSP7jP37Kurp3i6sNL0XN+8couQvpz/+AvuLuyy/K2/Irdt/l92Y/aDRodg2gdT8gu5mB0jOIZSXc1tPRmEJMFSoPv4pN0Dqk/JKQljN+4Gj2kwsTj56+pzF8gWEAJ+je+7ogsABp7WkB+600co1euwSoKwjThen7MCjh+cUWad21pfZTa+76xqz39sHkC/d3A8MTboSP9oM0wFbdJVr0dlLcbOlNM3BryGCVBGmSWS8vvl2OCcRDOklwHxfT/Dyh5oiZH68KUNiSJdq58RmMq6gT7RotgFtLPE2ozhC4Nt9lCEFDVdFNULAOMN1OJJMHfaJz3l5lfARrt3Lm9cj5oTL1yZSkkXtFZqat4062mgMplaGRcG77jC9gxWdTOQEubYN1Y4h3RdJqQfsDX6Ti4oKNyO6YfZtkH3hqKqpjV8caxqaOPsb1Tbc9KAq4ALTMREo6T7K93OB9hyjTLMaXSP3GwfM91zd0nWRxVyavsY5KSrJ8ViQM6OJmR5UWP5XWtBgpeCNtRAFXxYGNCSMmtXI1Dp65LOhI5QBI5Gq0JLI3IgX1pHPZA5ytyeEaxtYtnl4ug1VQGwSFlEHvMXQRxF4UTlJKQ/tGzQMVUXcXRhCRJ/Ywco8nTLDelL+t0EYQVM3OKsI8d3cxExAbWzkWi+cy0XHFmQkMzM7W7wN5mpl2PESvIbFxAA/5vHbGKUZJb/w1tAOzVy+a2ol5hfNroM3olMQf5pnPxC3daAkUhFaEWoQujDGsB56KouZLQ2ZZIS4KCzi4eUUo7DfOSbLfZDaN79dtWYGrsxurNbpwLowvYd0nv+7K2frfgOvDaEekHSObR93YB1pPN15QSdr5B+C1GYUI3DUsS76EQ2zEXqGCudySjrkxRXpm3dOTp7zfC9fLPOUE4eV9PY9Xas5wToIG83GmTGhfbdErgZKJb1rDrj41yc76fc9OS6MlyVHpydF41pzd+cg+5roWTCGer2WVVa2U2dr1P/LCrmu1orKYlQphM0XpjVxXu7t6QqqeHDlCDosvJdlTQfJ2C9ZyeQHNOZnNZM61kCC2t6iAhvskOGcuZT0s+BtmYde1stc1ehMh2/raUIW2dsUDyjjqmxXdX2549l9YZ25Umss3mC9VWRfw3b5w7uzY1yXrbGHDrDFesqqXHKPPKFkubXqs0WLDa6RdZnUzNXJ3MYoQDap5CHfYh1CoXOIYO7IbMXInMWOgAR2ZQQ2a9yRbW8fFz2Vu3VZ1wYJcc19iXVmFJ/R4oAVviMA/s5jB7CA5r4OhrxWHmWzjMHIbDvAmp4jCCKFVpOWzsOZlw6oPDHAcOzGFAGeAj5bCT02P2aesx+Xv83Yfjih5r2/4g3NfCgd6I+/BPbyE/tfYdGC3f/wMPQooYEW9aQYouZUVi6EgRTjwynvZCihAOTIq2zvUps6TCJAJBccOtcpXKTabCqrJ7oCkzqYPMjaLOrcfK9uQlYEjuahOKXWwIU+Gl+o4cqaOe/d62V8kRzPfIgpCZaxHwXkaNa1J1QnKs08CVaVW4Mv8dhaMlC9H5GJAF3ul7V1ycP7A7U3GMqU5wnWOsN2emD4fUYW3CEKviCsvdY/0RnSVKsfEimjz9Ng9CQY+pYckd7yZhoQaONDwRGEBkQsdpF58ote9HUkGdpLp44QWy8SRgbMY+fCC7bm1mkQSd0w3LTvmh8gkAgKaIw/B5Hf7JZBQcIX8bDfnb3dcbuN9cU+Ou28k/s0L+bfejKxQ21J0PhA4Vv2XNb1X3rK1EbIvvvwnxgN/+HzJCdAUMa4dg516bQh6skqq1hg+CT1abjMM80HjXytOIMccA46l2Lw4M1/JJNyTpO5KIcFWO1OVcdZFytX6c/vwYjMLpL/+0jS/G7PFf338fnU7sGr3gos9FsZwlEeTZC/zpC7RqNHN+Jfd5hERsNhXSg2Z6scc8SJjRBxImPM20Di5iwdNRg0yGFqlhOmJBEw+XPimuxrRs2+lKfVmeGEGkYxadm6+3ZE6oOwk7exFsu0AUwY55rTo/dBTfIxJqKNdjTHAwSUeqr7z0dl3lsRBdusId4uFsG6hxhfsuNlxXB49nji3YUYwDsD0ozhLD1sCzy5HbPTqqO7whOrvm03mgA93B0VGDZxuig88dHegOP3d0pwmynm18lLT7iKjKd3XEh0YOMOQ0B7hdjlqfHGl6s5Teej4+YqLmBPYrbwwKq92UqLbY8X6DKZj6/Yapt+7D7DfYY14yaM4rgwYAeSczfAoNy1U90N72kkLzFruRX2g0fA6Naw25WrU5dj7oAgObxhkP69By7Qt6p4seO9w9BqXoNpSK25dHtU/LllzV2vDbfpMHmoZ77B25u9+JsHs8ZnFqJ8LewSJ69pv5l5iZWrHkNkhd1b99xbPkLX93wHXqzfkxAoM9w1ecMpKcberg0XQlvwWrZ++Op4ubOs7g4Pw9B8Lb/EI+XHh6CRduxUai3dlQzT09bLywzrd0YaKaFAUIJP5wDOfN3mbDbtBbz3zkV7t9Gnt5qghE8R89nHWqQePXjLaIpZIz1jXZnloN0x9tVK9f9Qlw23CWE3+BQb8vKZDfY8FepzlUMtvWwDrD3HwL5uYwmLOE736TumXMG0Y29Yh5g6OnVpifVaJrv8mscmLz8MZQLRLqjWG5G+GlxlZq1oSDYk4Adog28sSHroU6In3fkBZ6zcHOgTHf5y0fb8C8Zk04R8xlcTc85vu83aAx5lXpFvK2wqjYVgjkX5F8cW62AowBjUWbVVGtD47Nj1UEM57vxrO/eAPbNa9lv5Uu2Jtlg+65BaWX5T9WtvFwlP/mm3X/fw==</diagram></mxfile>
2206.08564/main_diagram/main_diagram.pdf ADDED
Binary file (45.8 kB). View file
 
2206.08564/paper_text/intro_method.md ADDED
@@ -0,0 +1,55 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Recently, self-supervised pre-training (SSL) followed by supervised fine-tuning has emerged as the state-of-the-art approach for semi-supervised learning in domains such as natural language processing (NLP) [\[5\]](#page-13-1), computer vision [\[3\]](#page-13-2) and speech/audio processing [\[1\]](#page-12-0). Given that there is an extensive amount of raw, unlabeled data in various settings such as healthcare, finance, marketing, etc., most of which exists in tabular form, extending SSL to tabular data is an important direction of research.
4
+
5
+ Broadly speaking, there are two dominant approaches to SSL: (i) reconstruction of masked inputs, and (ii) invariance to certain augmentations/transformations, also known as contrastive learning. Several prior works [\[23,](#page-14-0) [19\]](#page-13-3) have adopted the second approach of contrastive learning for designing SSL methods for tabular data (tabular-SSL). The underlying structure and semantics of specific domains such as images remain somewhat static, irrespective of the dataset, so one can design generalizable domain-specific augmentations like cropping, rotating, resizing, etc. However, tabular data does not have such a fixed input vocabulary (such as pixels in images) or semantic structure, and thus lacks augmentations that generalize across different datasets. Consequently, only a limited number of augmentations have been proposed for the tabular setting, such as mix-up, adding random (Gaussian) noise, and selecting subsets of features [\[23,](#page-14-0) [19\]](#page-13-3). While reconstruction based SSL methods have also been proposed earlier for tabular data [\[25\]](#page-14-1), their performance is suboptimal compared to the contrastive based approaches [\[23,](#page-14-0) [19\]](#page-13-3).
6
+
7
+ In this paper, we build upon recent advances in reconstruction based SSL methods in computer vision [\[7\]](#page-13-0) to design MET, a purely reconstruction based approach for tabular-SSL. MET achieves new state-of-the-art (SOTA) performance on several tabular datasets. In particular, MET gives an average improvement of 3.2% accuracy (averaged over five standard datasets) over the previous state-of-the-art approaches.
8
+
9
+ There are two key ideas behind our approach. Similar to the reconstruction based approaches for SSL of images [\[7\]](#page-13-0) and text [\[5\]](#page-13-1), we use a transformer architecture to efficiently learn the relationships between different coordinates. However, unlike [\[7,](#page-13-0) [5\]](#page-13-1), using average pooling or a special output token for the representations leads to a large drop in accuracy on tabular datasets. So, our first idea is to instead concatenate the embeddings of all tokens for the final fine-tuning step. Surprisingly, even though the fine-tuning step trains a large model, it generalizes very well. We conjecture, and empirically demonstrate on a simple toy dataset, that the highly separable representations alleviate the risk of overfitting during the fine-tuning phase. Second, our proposed method MET searches for adversarial perturbations in the input space by performing gradient ascent over the reconstruction loss, and adds them to the input before passing it to the auto-encoder. We observe that adversarial training along with the masked reconstruction loss gives superior representations, which yield better downstream classification accuracy.
10
+
11
+ We conduct thorough experiments to demonstrate the effectiveness of MET in the standard tabular-SSL setting [\[23\]](#page-14-0). On standard tabular-SSL benchmarks like permuted MNIST, permuted FMNIST and permuted CIFAR-10, as well as on other popular tabular datasets such as Adult-Income and CovType, we show that MET can be up to 10% more accurate than SOTA tabular-SSL methods like DACL [\[23\]](#page-14-0), SubTab [\[19\]](#page-13-3) and VIME [\[25\]](#page-14-1). Furthermore, in some cases, MET trained with about 20% of the labelled train-set can be as effective as standard supervised learning methods trained with all the labelled points in the train-set. Finally, we demonstrate that MET indeed learns non-trivial representations, which are significantly more powerful than random kitchen-sink features, known to approximate the universal RBF kernel [\[15\]](#page-13-4).
12
+
13
+ To summarize, in this paper we design a novel algorithm for tabular-SSL. The algorithm is based on three key insights: (i) masking is a natural technique to bottleneck the information in tabular data, (ii) utilizing the embeddings of all coordinates without average pooling, and (iii) an adversarial reconstruction loss. Experiments on five different datasets demonstrate the efficacy of our algorithm in practice, showing that it significantly improves over SOTA methods.
14
+
15
+ # Method
16
+
17
+ In this section, we formalize the general task of learning self-supervised representations and introduce the notation and assumptions used throughout.
18
+
19
+ Notation: We use $x_i$ to denote an example and $x_i^j$ to denote the $j^{\text{th}}$ coordinate of $x_i$. For a set $S$ of coordinates, $x_i^S$ denotes the restriction of $x_i$ to the coordinates represented by $S$.
20
+
21
+ **Task:** Consider access to a corpus of unlabelled data given by $\mathcal{D}_u = \{x_i\}_{i=1}^{N_u}$ where each datapoint $x_i \in \mathbb{R}^d$ . Further, every coordinate of $x_i$ , i.e., $x_i^j \in \mathbb{R}$ , can be either a categorical or a non-categorical value, without this being explicitly specified. The general goal of self-supervised learning is to learn a parameterized mapping $f_{\theta} : \mathbb{R}^d \to \mathbb{R}^m$ between the input $x_i$ and its representation $f_{\theta}(x_i) \in \mathbb{R}^m$ , such that the representations are well suited for a downstream task as described next.
22
+
23
+ Evaluation of learned representations: In this paper, we evaluate the quality of learned representations through accuracy on a downstream classification task. More concretely, we have access to a small labelled training dataset $\mathcal{D}_{\text{train}} = \{x_i, y_i\}_{i=1}^{N_{\text{train}}}$ where $y_i \in \mathbb{R}^k$ and each $(x_i, y_i)$ is drawn independently and identically (i.i.d.) from some underlying distribution $\mathcal{D}$ on $\mathbb{R}^d \times \mathbb{R}^k$ . The task is to learn a classifier $h_{\phi} : \mathbb{R}^d \to \mathbb{R}^k$ which minimizes $\mathbb{E}_{(x,y)\sim\mathcal{D}}[\ell(h_{\phi}(x),y)]$ , where $\ell$ is a loss function such as the 0-1 loss or cross-entropy loss. Given the learned representations $f_{\theta}$ , we train a shallow classifier $g_{\mu} : \mathbb{R}^m \to \mathbb{R}^k$ (we use a 2-hidden-layer MLP in our default setting) and use the resulting accuracy to evaluate the quality of $f_{\theta}$ .
24
+
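+ A minimal sketch of this probe, assuming a hidden width of 256 (the width is our illustrative choice; the paper only specifies a 2-hidden-layer MLP):
+
+ ```python
+ import torch.nn as nn
+
+ def probe(m, k, hidden=256):
+     """2-hidden-layer MLP head g_mu: R^m -> R^k, trained on frozen representations."""
+     return nn.Sequential(
+         nn.Linear(m, hidden), nn.ReLU(),
+         nn.Linear(hidden, hidden), nn.ReLU(),
+         nn.Linear(hidden, k),
+     )
+ ```
+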
25
+ As described in the previous section, our goal is to learn a parameterized mapping $f_{\theta}$ . We take a denoising auto-encoder approach for this [24]. In this approach, we have an encoder represented by $f_{\theta}$ and a decoder represented by $h_{\phi}$ . The task of the encoder is to take a noisy version of input example $x_i$ , e.g., where some coordinates of $x_i$ are masked, and reconstruct the entire example $x_i$ . More formally, if we choose $S_i \subseteq [d]$ coordinates to be masked in $x_i$ , then this approach can be written as minimization of the following function:
26
+
27
+ <span id="page-4-0"></span>
28
+ $$\mathcal{L}_{\text{rec}}(\theta, \phi) = \sum_{i=1}^{N_u} \|x_i - h_{\phi}(f_{\theta}(S_i, x_i^{S_i}))\|_2^2.$$
29
+ (1)
30
+
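+ A sketch of this objective for one batch is given below; the `encoder`/`decoder` call signatures are hypothetical stand-ins for $f_{\theta}$ and $h_{\phi}$ (taking the visible values together with the visibility mask):
+
+ ```python
+ import torch
+
+ def masked_recon_loss(encoder, decoder, x, p=0.7):
+     """Eq. (1) for a batch x of shape (B, d): each coordinate is masked i.i.d.
+     with probability p; the decoder must reconstruct the full vector."""
+     keep = torch.rand_like(x) > p        # True where the coordinate stays visible
+     z = encoder(x * keep, keep)          # f_theta sees only the kept coordinates
+     x_hat = decoder(z, keep)             # h_phi fills in the masked coordinates
+     return ((x - x_hat) ** 2).sum(dim=1).mean()
+ ```
+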
31
+ While the high-level denoising-autoencoder approach was proposed in the seminal paper [24], instantiating it for various domains has required several domain-specific insights, including the architectures of $h_{\phi}$ and $f_{\theta}$ , which coordinates to mask, etc. ([5, 20, 7]). For the downstream evaluation task, we discard the decoder $h_{\phi}$ and only use the representations computed by $f_{\theta}$ . We now describe the details of the architectures of the encoder and decoder as well as the masking strategy.
32
+
33
+ Encoder-Decoder architecture: Motivated by the success of transformers [21] across various domains such as natural language processing (NLP) and computer vision, we use a transformer as the backbone for both the encoder and the decoder. Recall that $f_{\theta}$ and $h_{\phi}$ denote the encoder and decoder respectively. The input to a transformer is given as a set of tokens, each represented by an embedding. In the current context, given an example $x_i$ , each coordinate $j \in [d]$ is masked with some probability p. Recall that $S_i$ denotes the set of masked coordinates for $x_i$ . The input to the transformer consists of $|[d] \setminus S_i|$ tokens, each corresponding to one coordinate $j \in [d] \setminus S_i$ . Note that no token is passed for the masked coordinates. The input embedding corresponding to the $j^{\text{th}}$ token is a concatenation of $pe_j \in \mathbb{R}^e$ , a learnable encoding corresponding to the $j^{\text{th}}$ coordinate, and $x_i^j$ , the value of the $j^{\text{th}}$ coordinate of $x_i$ . So, the total dimension of the embedding is $e+1$. The encoder takes these input tokens through several layers of multi-headed attention and
+ <span id="page-5-1"></span>feed-forward layers of dimension $f_w$ and outputs an $(e+1)$-dimensional representation $w_i^j$ for each token $j \in [d] \setminus S_i$ . We then pass these to the decoder $h_{\phi}$ , along with representations for the masked coordinates – again a concatenation of $pe_j$ with a learnable mask parameter $u \in \mathbb{R}$ for $j \in S_i$ .
38
+
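+ A sketch of this tokenization follows (our illustration: in the paper the encoder receives only the visible tokens and the mask tokens enter at the decoder, whereas for brevity this sketch builds all $d$ tokens at once):
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class CoordTokens(nn.Module):
+     """Builds the (e+1)-dim tokens: a learnable per-coordinate encoding pe_j
+     concatenated with either the coordinate's value or the mask scalar u."""
+     def __init__(self, d, e):
+         super().__init__()
+         self.pe = nn.Parameter(torch.randn(d, e))   # one learnable code per coordinate
+         self.u = nn.Parameter(torch.zeros(1))       # shared learnable mask value
+     def forward(self, x, keep):
+         # x: (B, d) values; keep: (B, d) bool, True where the coordinate is visible
+         vals = torch.where(keep, x, self.u.expand_as(x))
+         pe = self.pe.unsqueeze(0).expand(x.size(0), -1, -1)
+         return torch.cat([pe, vals.unsqueeze(-1)], dim=-1)   # (B, d, e+1)
+ ```
+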
39
+ Adversarial loss: In the context of supervised learning, several papers have demonstrated that adversarial training can yield more robust features [18] that are better for transfer learning [16]. While an adversarial loss function has been observed to encourage learning of robust features in contrastive SSL [2, 11], to the best of our knowledge, it does not seem to have been explored in the context of masked autoencoders. In this work, we demonstrate that an adversarial version of the reconstruction loss works better than standard reconstruction loss. More concretely, the adversarial reconstruction loss is given by:
40
+
41
+ <span id="page-5-0"></span>
42
+ $$\mathcal{L}_{\text{rec}}^{\text{adv}}(\theta, \phi) = \sum_{i=1}^{N_u} \max_{\delta: \|\delta\| \le \epsilon} \|x_i - h_{\phi}(f_{\theta}(S_i, x_i^{S_i} + \delta))\|_2^2.$$
43
+ (2)
44
+
45
+ In this paper, we constrain the adversarial noise $\delta$ to an $\epsilon$-radius $L_2$-norm ball around the input data point $x_i$ , where $\epsilon$ is chosen by a grid search over $\{2, 4, 6, 10, 12, 14\}$ . The overall algorithm for MET, which minimizes (2) and (1), is given in Algorithm 1. For consistency of notation, we present a non-batch version of the algorithm.
46
+
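+ The inner maximization in (2) can be approximated by a few steps of projected gradient ascent on $\delta$, as in the sketch below (the step size, step count, and `encoder`/`decoder` signatures are hypothetical; Algorithm 1 in the paper is the authoritative version):
+
+ ```python
+ import torch
+
+ def adversarial_recon_loss(encoder, decoder, x, keep, eps=4.0, steps=1, lr=1.0):
+     """Gradient-ascend on delta over the reconstruction loss, then project
+     delta back into the eps-radius L2 ball around x."""
+     delta = torch.zeros_like(x, requires_grad=True)
+     for _ in range(steps):
+         x_hat = decoder(encoder((x + delta) * keep, keep), keep)
+         grad, = torch.autograd.grad(((x - x_hat) ** 2).sum(), delta)
+         with torch.no_grad():
+             delta += lr * grad                                   # ascent step
+             norm = delta.norm(dim=1, keepdim=True).clamp(min=1e-12)
+             delta *= (eps / norm).clamp(max=1.0)                 # L2-ball projection
+     x_hat = decoder(encoder((x + delta.detach()) * keep, keep), keep)
+     return ((x - x_hat) ** 2).sum(dim=1).mean()
+ ```
+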
47
+ <span id="page-6-1"></span>![](_page_6_Figure_0.jpeg)
48
+
49
+ Figure 2: (a) We analyze the representations learnt by MET on a 10-dimensional binary classification toy dataset, where 10-dimensional points are sampled by concatenating five points sampled i.i.d. from the respective circles. (b) 2D projection of the source data. (c) Mean distance between the learnt representations for the two classes as the SSL using MET proceeds. (d) 2D projection of the representations learnt by MET.
50
+
51
+ Before moving to the experiments on standard tabular dataset benchmarks, we first work on a 10-dimensional toy dataset to visualize the kind of representations learnt by MET. We generate a binary classification dataset from two overlapping circles as shown in Figure [2a.](#page-6-1) We generate 5000 samples for each class, where every 10-dimensional sample is a concatenation of five points sampled i.i.d. from the respective circle.
52
+
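+ A sketch of this data-generation process (the radii, noise level, and interleaved x/y layout are our assumptions; only the "five i.i.d. points per circle, concatenated into 10 dimensions" structure comes from the text):
+
+ ```python
+ import numpy as np
+
+ def make_toy(n=5000, r1=1.0, r2=1.3, noise=0.05, seed=0):
+     """Two overlapping circles; each 10-D sample concatenates the (x, y)
+     coordinates of five points drawn i.i.d. from one circle."""
+     rng = np.random.default_rng(seed)
+     def sample(radius, n):
+         theta = rng.uniform(0, 2 * np.pi, (n, 5))                # five points per sample
+         pts = radius * np.stack([np.cos(theta), np.sin(theta)], axis=-1)
+         return (pts + noise * rng.standard_normal(pts.shape)).reshape(n, 10)
+     X = np.concatenate([sample(r1, n), sample(r2, n)])
+     y = np.concatenate([np.zeros(n), np.ones(n)])
+     return X, y
+ ```
+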
53
+ Figure [2c](#page-6-1) shows the mean distance between the representations from the two classes as the self-supervised training using MET evolves. Notice that the separation between the learnt representations increases as the training progresses.
54
+
55
+ Further, for better visualization, we project the generated data to 2D space, where the x and y axis projections for each 10-dimensional sample are obtained by averaging over alternate coordinates (since the data was generated by concatenating the x and y coordinates of five points sampled i.i.d. from the circle). Figures [2b](#page-6-1) and [2d](#page-6-1) show the 2D projections of the source data and of the learnt representations, respectively. Observe that MET learns representations that are much more easily separable than the source data.
2208.10660/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff