Eric03 committed on
Commit
343ff53
·
verified ·
1 Parent(s): fb5e0ef

Add files using upload-large-folder tool

Files changed (50)
  1. 2003.08413/main_diagram/main_diagram.drawio +0 -0
  2. 2003.08413/paper_text/intro_method.md +39 -0
  3. 2105.03491/main_diagram/main_diagram.drawio +0 -0
  4. 2105.03491/main_diagram/main_diagram.pdf +0 -0
  5. 2105.03491/paper_text/intro_method.md +7 -0
  6. 2105.12774/main_diagram/main_diagram.drawio +0 -0
  7. 2105.12774/paper_text/intro_method.md +104 -0
  8. 2108.02479/main_diagram/main_diagram.drawio +1 -0
  9. 2108.02479/main_diagram/main_diagram.pdf +0 -0
  10. 2108.02479/paper_text/intro_method.md +23 -0
  11. 2109.09133/main_diagram/main_diagram.drawio +1 -0
  12. 2109.09133/main_diagram/main_diagram.pdf +0 -0
  13. 2109.09133/paper_text/intro_method.md +15 -0
  14. 2112.00735/main_diagram/main_diagram.drawio +0 -0
  15. 2112.00735/paper_text/intro_method.md +117 -0
  16. 2202.01085/main_diagram/main_diagram.drawio +1 -0
  17. 2202.01085/main_diagram/main_diagram.pdf +0 -0
  18. 2202.01085/paper_text/intro_method.md +51 -0
  19. 2203.11894/main_diagram/main_diagram.drawio +0 -0
  20. 2203.11894/paper_text/intro_method.md +126 -0
  21. 2205.05871/main_diagram/main_diagram.drawio +1 -0
  22. 2205.05871/main_diagram/main_diagram.pdf +0 -0
  23. 2205.05871/paper_text/intro_method.md +6 -0
  24. 2205.06688/main_diagram/main_diagram.drawio +1 -0
  25. 2205.06688/main_diagram/main_diagram.pdf +0 -0
  26. 2205.06688/paper_text/intro_method.md +164 -0
  27. 2205.13346/main_diagram/main_diagram.drawio +1 -0
  28. 2205.13346/main_diagram/main_diagram.pdf +0 -0
  29. 2205.13346/paper_text/intro_method.md +151 -0
  30. 2206.14754/main_diagram/main_diagram.drawio +0 -0
  31. 2206.14754/paper_text/intro_method.md +198 -0
  32. 2207.09944/main_diagram/main_diagram.drawio +0 -0
  33. 2207.09944/paper_text/intro_method.md +155 -0
  34. 2209.10222/main_diagram/main_diagram.drawio +0 -0
  35. 2209.10222/paper_text/intro_method.md +105 -0
  36. 2210.08884/main_diagram/main_diagram.drawio +0 -0
  37. 2210.08884/paper_text/intro_method.md +126 -0
  38. 2210.15088/main_diagram/main_diagram.drawio +1 -0
  39. 2210.15088/paper_text/intro_method.md +116 -0
  40. 2211.05568/main_diagram/main_diagram.drawio +1 -0
  41. 2211.05568/main_diagram/main_diagram.pdf +0 -0
  42. 2211.05568/paper_text/intro_method.md +36 -0
  43. 2211.14646/main_diagram/main_diagram.drawio +0 -0
  44. 2211.14646/paper_text/intro_method.md +76 -0
  45. 2212.01026/main_diagram/main_diagram.drawio +0 -0
  46. 2212.01026/paper_text/intro_method.md +129 -0
  47. 2212.13545/main_diagram/main_diagram.drawio +0 -0
  48. 2212.13545/paper_text/intro_method.md +66 -0
  49. 2302.03251/main_diagram/main_diagram.drawio +1 -0
  50. 2302.03251/main_diagram/main_diagram.pdf +0 -0
2003.08413/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2003.08413/paper_text/intro_method.md ADDED
@@ -0,0 +1,39 @@

# Method

We first show the 3D bone structure in two rendering modes, as in Figure [5](#page-5-0): volume rendering shows the reconstructed surface, while maximum-intensity projection indicates the restored density information. We then summarize the evaluation metrics in Table [2](#page-6-0) to compare against other methods. *Oral-3D* achieves the best performance of all the models. Comparing *Oral-3D* and the Auto-Encoder against the Residual CNN and GAN shows the importance of decoupling the back-projection and deformation processes. Notably, R2N2 achieves the worst performance: it learns only the overall shape of the oral cavity and loses the details of the teeth, which indicates the drawback of converting the PX image into a collection of multi-view images. Additionally, the Auto-Encoder comes closest to *Oral-3D*, although the latter produces a cleaner surface, which demonstrates the improvement brought by the adversarial loss.

We next show two of the most common cases in dental healthcare, *i.e.*, dental implants and tooth extraction, to examine whether *Oral-3D* can provide dentists a useful reference. Both cases require locating the operation site before surgery. In the first row of Figure [6](#page-5-1), three wisdom teeth are clearly visible on both sides of the PX; these features are also present on both sides of the reconstruction. In the second row, the patient is missing two teeth on both sides of the mandible, and the missing locations can likewise be located accurately in the reconstruction image.

<span id="page-6-2"></span>![](_page_6_Figure_0.jpeg)

Figure 7: A workflow for obtaining the dental arch curve with *Oral-3D* in real-world applications. We first take a picture of the patient's mouth and segment the dental area semi-automatically. We then fit a cubic function to points sampled from the skeletonized image of the binary mask.

Table 2: Quantitative Evaluation of 3D Reconstruction

<span id="page-6-0"></span>

| Method | View | Prior | D-Net | PSNR (dB) | SSIM (%) | Dice (%) | Overall |
|------------------------|------|-------|-------|------------|------------|------------|---------|
| Residual CNN | 1 | No | No | 17.46±9.58 | 72.90±2.09 | 57.95±7.43 | 73.54 |
| GAN | 1 | No | Yes | 17.71±1.04 | 69.96±1.91 | 57.80±7.76 | 73.78 |
| R2N2 | 3 | No | No | 18.06±0.94 | 71.94±1.36 | 57.71±6.52 | 73.32 |
| Oral-3D (Auto-Encoder) | 1 | Yes | No | 19.04±0.85 | 76.78±1.65 | 69.68±4.98 | 80.56 |
| Oral-3D (GAN) | 1 | Yes | Yes | 19.22±0.83 | 78.27±1.74 | 71.28±4.69 | 81.89 |

<span id="page-6-1"></span>Table 3: Evaluation results of different combinations of discrimination loss (DL), reconstruction loss (RL), and projection loss (PL).

|         | DL only | DL+PL          | DL+RL          | DL+RL+PL       |
|---------|---------|----------------|----------------|----------------|
| PSNR    | 8.06    | 18.06 (+10.00) | 19.14 (+11.08) | 19.22 (+11.16) |
| SSIM    | 46.61   | 73.02 (+26.41) | 78.41 (+31.80) | 78.27 (+31.66) |
| Dice    | 35.50   | 64.53 (+29.03) | 70.89 (+35.39) | 71.28 (+35.78) |
| Overall | 40.79   | 75.95 (+35.16) | 81.66 (+40.87) | 81.89 (+41.10) |

<span id="page-6-3"></span>Table 4: Evaluation results on real-world images

| Dataset | PSNR | SSIM | Dice |
|-------------|------------|------------|------------|
| Real | 17.36±0.70 | 69.30±2.03 | 71.44±3.66 |
| Synthesized | 19.22±0.83 | 78.27±1.74 | 71.28±4.69 |
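The metrics reported in Tables 2-4 (PSNR, SSIM, Dice) can be computed directly from predicted and ground-truth volumes. Below is a minimal NumPy sketch of PSNR and the Dice overlap; the [0, 1] intensity range and the 0.5 binarization threshold are our assumptions for illustration, not values stated in the paper.

```python
import numpy as np

def psnr(pred, target, data_range=1.0):
    """Peak signal-to-noise ratio (dB); higher is better."""
    mse = np.mean((pred - target) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def dice(pred, target, threshold=0.5):
    """Dice overlap between binarized predicted and ground-truth volumes."""
    p, t = pred > threshold, target > threshold
    inter = np.logical_and(p, t).sum()
    return 2.0 * inter / (p.sum() + t.sum())

# Toy 8x8x8 "bone" volume with one mis-reconstructed voxel.
target = np.zeros((8, 8, 8))
target[2:6, 2:6, 2:6] = 1.0
pred = target.copy()
pred[2, 2, 2] = 0.0
```

A perfect reconstruction gives Dice 1.0; the single wrong voxel above drops it to 126/127.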
2105.03491/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2105.03491/main_diagram/main_diagram.pdf ADDED
Binary file (85.2 kB). View file
 
2105.03491/paper_text/intro_method.md ADDED
@@ -0,0 +1,7 @@

# Introduction

Neural networks have achieved astonishing performance across many learning tasks, such as computer vision [@he2015deep], natural language processing [@devlin2019bert], and graph learning [@kipf2017semisupervised]. The theoretical understanding of the generalization capability of these models, on the other hand, has lagged behind and can so far offer only limited insight into the inner workings of these algorithms. Almost every work concerning generalization is based on the paradigm of uniform convergence as a tool to bound the capacity of the model [@arora2018stronger; @bartlett2017nearlytight; @bartlett2017spectrallynormalized; @neyshabur2015normbased; @neyshabur2018pacbayesian]. Recently, however, @nagarajan2019uniform cast doubt on the power of this technique. By constructing a dataset consisting of two concentric spheres (referred to as adversarial spheres), they showed that a neural network misclassifies a specific projection of the training data entirely. The existence of such an adversarial dataset renders any generalization bound based on uniform convergence vacuous. This surprising behaviour has been shown to hold empirically but, to the best of our knowledge, neither a mathematical proof nor a theoretical account of its origin has been given in the literature.

In this work, we revisit the aforementioned dataset and study the phenomenon mathematically through the lens of infinitely wide neural networks. We leverage the analytic structure of the Neural Tangent Kernel (NTK) [@jacot2020neural] to prove the observed behaviour and to unravel its dependence on parameters such as the sample size and the magnitude of the bias of the output layer. These theoretical findings suggest a very simple fix: sufficiently increasing the output bias. We validate our theoretical results with numerical experiments on the adversarial spheres dataset. Moreover, we explore the hypothesis put forth by @nagarajan2019uniform that there may exist a decomposition of the model into a clean part and a noisy part: the noisy submodel should encapsulate the observed degeneracies, while the clean submodel enjoys good generalization and robustness, making it amenable to uniform convergence. We investigate the most natural decomposition, induced by the eigendecomposition of the kernel, and show that even a restriction to the optimal set of eigenfunctions does not eliminate the adversarial effect.

Our mathematical analysis suggests that the failure of uniform convergence in this particular setting does not point to a deeper problem in neural architectures; rather, it results from the specific dataset and from an architectural bias that encourages the network to rely on angular features instead of radial information. This calls into question the relevance of the observation of @nagarajan2019uniform for more realistic datasets containing angular structure.

We structure our work as follows. We first discuss related work in Section [2](#related){reference-type="ref" reference="related"}, followed by an overview of the mathematical setting and notation in Section [3](#notation){reference-type="ref" reference="notation"}. In Section [4](#prevres){reference-type="ref" reference="prevres"}, we summarize the main results of @nagarajan2019uniform and @jacot2020neural, as we build upon their findings. We then present our own theoretical and numerical results in Sections [5](#ours1){reference-type="ref" reference="ours1"} and [6](#ours2){reference-type="ref" reference="ours2"}, detailing the origin of the adversarial effect and its behaviour under a decomposition of the model. Finally, we discuss the implications of our work in Section [7](#discussion){reference-type="ref" reference="discussion"}.
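The adversarial-spheres construction is easy to state concretely: sample points uniformly on two concentric spheres, then rescale each training point radially onto the *other* sphere, keeping its angle but flipping the correct label. A minimal NumPy sketch follows; the radii 1.0 and 1.1 and the dimension 20 are illustrative choices of ours, not the paper's exact values.

```python
import numpy as np

def sample_spheres(n, d, r_inner=1.0, r_outer=1.1, seed=0):
    """Sample n points uniformly on each of two concentric spheres in R^d.
    Label -1 for the inner sphere, +1 for the outer one."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(2 * n, d))
    x /= np.linalg.norm(x, axis=1, keepdims=True)          # project to unit sphere
    radii = np.concatenate([np.full(n, r_inner), np.full(n, r_outer)])
    y = np.concatenate([-np.ones(n), np.ones(n)])
    return x * radii[:, None], y

def project_to_other_sphere(x, y, r_inner=1.0, r_outer=1.1):
    """The 'adversarial' set: rescale each training point onto the opposite
    sphere, preserving its angular position while flipping the true label."""
    target_r = np.where(y < 0, r_outer, r_inner)
    unit = x / np.linalg.norm(x, axis=1, keepdims=True)
    return unit * target_r[:, None], -y

x, y = sample_spheres(100, 20)
x_adv, y_adv = project_to_other_sphere(x, y)
```

A network that relies mainly on angular features classifies `x_adv` the same way as `x`, and therefore gets every projected point wrong.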
2105.12774/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2105.12774/paper_text/intro_method.md ADDED
@@ -0,0 +1,104 @@

# Introduction

The problem of dynamic points occluding static structures is ubiquitous for any visual system. Throughout the paper, we define points falling on movable objects (e.g., cars on a road) as dynamic points; the rest are called static points (Chen et al. 2019; Ruchti and Burgard 2018). Ideally, we would like to replace the dynamic points with their corresponding static ones. We call this the Dynamic-to-Static Translation (DST) problem. Recent works attempt to solve DST for the following modalities: images (Bescos et al. 2019), RGBD (Bešić and Valada 2020), and point clouds (Wu et al. 2020).

Copyright © 2021, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

![](_page_0_Picture_13.jpeg)

Figure 1: Static background points are shown in black. 1) Left: dynamic LiDAR scan with occluding dynamic points in red. 2) Center: static LiDAR scan reconstructed by DSLR; although DSLR reconstructs the complete scan, only in-painted dynamic points are shown in red. 3) Right: reconstructed static LiDAR scan shown from a different angle to highlight the reconstruction quality of 3D structures such as walls.

DST for LiDAR has also been attempted using geometric (non-learning-based) methods (Kim and Kim 2020; Biasutti et al. 2017). To the best of our knowledge, no learning-based method has been proposed to solve DST for LiDAR scans. We show that existing techniques for non-LiDAR data produce sub-optimal reconstructions on LiDAR data (Caccia et al. 2018; Achlioptas et al. 2017; Groueix et al. 2018). However, downstream tasks such as SLAM in a dynamic environment require accurate, high-quality LiDAR scans.

To address these shortcomings, we propose a Dynamic-to-Static LiDAR Reconstruction (DSLR) algorithm. It uses an autoencoder that is adversarially trained with a discriminator to reconstruct corresponding static frames from dynamic frames. Unlike existing DST-based methods, DSLR does not require segmentation annotations to identify dynamic points. We train DSLR on a dataset consisting of corresponding dynamic and static scans of the same scene. Such pairs are hard to obtain in a simulated environment and even harder to obtain in the real world. To get around this, we also propose a new algorithm to generate appropriate dynamic-static pairs, which makes DSLR the first DST-based model trained on a real-world dataset.

We now summarize the contributions of our paper:

<sup>\*</sup>denotes equal contribution

- This paper initiates a study of DST for LiDAR scans and establishes that existing methods, when adapted to this problem, yield poor reconstructions of the underlying static environment, which leads to extremely poor performance in downstream applications such as SLAM. To address this research gap we develop DSLR, an adversarially trained autoencoder that learns a mapping from the latent space of dynamic scans to their static counterparts. This mapping is made possible by a *pair-based discriminator*, which learns to distinguish between corresponding static and dynamic scans. Experiments on simulated and real-world datasets show that DSLR gives at least a 4× improvement over the adapted baselines.
- DSLR does not require segmentation information. However, if segmentation information is available, we propose an additional variant, DSLR-Seg, which leverages this information to achieve an even higher-quality reconstruction. To ensure that DSLR works in scenarios where corresponding dynamic-static scan pairs are not easily available, we use methods from unsupervised domain adaptation to develop DSLR-UDA.
- We show that DSLR, compared to the other baselines, is the only model whose reconstruction quality falls within the acceptable limits for SLAM performance, as shown in our experiments.
- We open-source 2 new datasets, CARLA-64 and ARD-16 (Ati Real-world Dataset), consisting of corresponding static-dynamic LiDAR scan pairs for simulated and real-world scenes, respectively. We also release our pipeline for creating such datasets from raw static and dynamic runs.<sup>1</sup>

# Method

Given a set of dynamic frames $D = \{d_i : i = 1, ..., n\}$ and their corresponding static frames $S = \{s_i : i = 1, ..., n\}$, our aim is to find a mapping from the latent space of dynamic frames to that of their corresponding static frames, preserving the static structures in $d_i$ while in-painting the regions occluded by dynamic objects with the static background. Our model consists of 3 parts: (a) an autoencoder trained to reconstruct LiDAR point clouds (Caccia et al. 2018); (b) a pair discriminator that distinguishes $(s_i, s_j)$ pairs from $(s_i, d_j)$ pairs; inspired by (Denton et al. 2017), the discriminator is trained on pairs of latent embedding vectors obtained from the standard autoencoder; (c) an adversarial model that uses the above 2 modules to learn an autoencoder that maps dynamic scans to corresponding static scans.

We also describe 2 variants of our model: (a) for unsupervised domain adaptation to new environments, and (b) for utilizing segmentation information when available. In all following sections, $x$ refers to a LiDAR range image (flattened to a single column vector), $r(x)$ is its latent representation, $\overline{x}$ is the reconstructed output, and $x^i$ is a component of $x$.

The details of our model architecture and pipeline are illustrated in Fig. 2.

<sup>1</sup>Code, Dataset and Appendix: https://dslrproject.github.io/dslr/

![](_page_2_Figure_0.jpeg)

Figure 3: Adversarial training along with UDA for domain adaptation on KITTI data. Purple indicates trainable weights. Refer to Section 3.

**Input Preprocessing:** We use the strategy described by (Caccia et al. 2018) to convert LiDAR point clouds into range images.

**Autoencoder:** The encoder $(E_{\phi})$ and decoder $(D_{\theta})$ are based on the discriminator and generator from DCGAN (Radford, Metz, and Chintala 2015), as adapted in (Caccia et al. 2018). However, we change the input to a $40 \times 512$ grid instead of a $64 \times 1024$ grid, discarding the outer circles of a LiDAR scan because they contain the most noise and the least information about the scene. The bottleneck dimension of our model is 160. We define the network G as

$$G: x \xrightarrow{E_{\phi}} r(x) \xrightarrow{D_{\theta}} \overline{x}$$
(1)

The autoencoder $G_{\phi,\theta}$ is trained with all $s_i$, $d_j$ to reconstruct the given input at the decoder. We use a pixel-wise reconstruction loss on the LiDAR range image:

$$MSE(x, \overline{x}) = ||x - \overline{x}||^2$$
(2)

The autoencoder learns latent representations $r(x)$ of static and dynamic LiDAR scans that allow efficient reconstruction. For ease of notation, in the rest of the paper we omit the parameters $\phi$ and $\theta$ of the encoder and decoder of G; they should be understood implicitly unless specified otherwise.
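The pipeline of Eq. (1) and the loss of Eq. (2) can be sketched with toy linear maps standing in for $E_{\phi}$ and $D_{\theta}$. The real networks are DCGAN-style convolutions; only the 40×512 input grid and the 160-dimensional bottleneck below come from the text, and the linear weights are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear stand-ins for G: x --E_phi--> r(x) --D_theta--> x_bar.
H, W, Z = 40, 512, 160                      # range-image grid and bottleneck
E_phi = rng.normal(scale=0.01, size=(Z, H * W))    # "encoder" weights
D_theta = rng.normal(scale=0.01, size=(H * W, Z))  # "decoder" weights

def G(x):
    """Eq. (1): encode a flattened range image, then decode it."""
    r = E_phi @ x           # latent representation r(x), dim 160
    x_bar = D_theta @ r     # reconstruction
    return r, x_bar

def mse(x, x_bar):
    """Eq. (2): pixel-wise reconstruction loss."""
    return np.mean((x - x_bar) ** 2)

x = rng.normal(size=H * W)                  # one flattened range image
r, x_bar = G(x)
```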
**Pair Discriminator:** We use a feed-forward discriminator (DI). DI takes random scan pairs $(s_i, s_j)$ and $(s_i, d_j)$ and transforms them into latent vector pairs using G. It is trained to output 1 if both latent vectors represent static LiDAR scans, and 0 otherwise:

$$DI(r(x_1), r(x_2)) = \begin{cases} 1 & x_1 \in S, x_2 \in S \\ 0 & x_1 \in S, x_2 \in D \end{cases}$$
(3)

The discriminator is trained with the Binary Cross-Entropy (BCE) loss between $\overline{y}$ and $y$, where $\overline{y}$ denotes the discriminator output and $y$ the ground truth.

Corresponding static and dynamic LiDAR frames look exactly the same except for the dynamic objects and the holes present in the latter, so the discriminator finds it hard to distinguish the two from the minimal discriminative features present in the pairs. To overcome this, we use a dual-loss setting in which we jointly train the (already pre-trained) autoencoder G along with the discriminator; the total loss is the sum of the MSE loss of the autoencoder and the BCE loss of the discriminator. Using the reconstruction loss not only improves training stability but also forces the encoder to output a latent vector that captures both generative and discriminative features of a LiDAR scan. We denote this loss $L_{\rm DI}$. For simplicity, we present $L_{\rm DI}$ for a single input triple $(x_1, x_2, x_3)$, where $x_1, x_2 \in S$ and $x_3 \in D$:

$$L_{DI} = MSE(x_1, \overline{x_1}) + MSE(x_2, \overline{x_2}) + MSE(x_3, \overline{x_3}) + BCE(DI(r(x_1), r(x_2)), 1) + BCE(DI(r(x_1), r(x_3)), 0)$$
(4)
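Eq. (4) can be sketched with a toy linear discriminator over concatenated latent vectors. The real DI is a feed-forward network; all dimensions, weights, and inputs here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
Z = 160
w = rng.normal(scale=0.1, size=2 * Z)       # toy linear discriminator weights

def DI(r1, r2):
    """Probability that both latents are static (sigmoid over the pair)."""
    return 1.0 / (1.0 + np.exp(-(w @ np.concatenate([r1, r2]))))

def bce(p, y, eps=1e-12):
    """Binary cross-entropy for one probability p and target y."""
    return -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def L_DI(x, x_bar, r_s1, r_s2, r_d):
    """Eq. (4): reconstruction MSEs plus BCE on a static-static pair
    (target 1) and a static-dynamic pair (target 0)."""
    recon = sum(np.mean((xi - xbi) ** 2) for xi, xbi in zip(x, x_bar))
    return recon + bce(DI(r_s1, r_s2), 1) + bce(DI(r_s1, r_d), 0)

xs = [rng.normal(size=64) for _ in range(3)]          # x1, x2 static; x3 dynamic
xbs = [xi + rng.normal(scale=0.1, size=64) for xi in xs]  # noisy reconstructions
rs = [rng.normal(size=Z) for _ in range(3)]
loss = L_DI(xs, xbs, rs[0], rs[1], rs[2])
```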
**Adversarial Training:** For adversarial training, we create two copies of the autoencoder, $G^{1}_{\phi_{1},\theta_{1}}$ and $G^{2}_{\phi_2,\theta_2}$. We freeze $\phi_1, \theta_1, \theta_2$, leaving only $\phi_2$ of $G^2$ trainable. Inputs $x_1 \in S$ and $x_2 \in D$ are passed through $G^1$ and $G^2$ respectively, yielding the latent representations $r(x_1)$ and $r(x_2)$ of the static and dynamic scans. We concatenate $r(x_1)$ and $r(x_2)$ and feed them to the discriminator. Ideally, DI should output 0, as in Eq. 3. In the adversarial setting, however, we want to fool the discriminator into thinking that both vectors belong to S, so we backpropagate the BCE loss w.r.t. target value 1 instead of 0. As training progresses, the encoder weights of $G^2$, i.e. $\phi_2$, are updated to produce a static latent vector for a given dynamic frame, learning a latent mapping from dynamic scans to static scans. Additionally, we backpropagate the reconstruction loss MSE($x_2, \overline{x_2}$), which ensures that $G^2$ generates the nearest corresponding static latent representation $z$ for the dynamic frame $x_2$. The corresponding static scan reconstructed from $z$ via $\theta_2$ in this setting is qualitatively and quantitatively better than what a simple dynamic-to-static reconstruction autoencoder would produce, as shown in Table 1. We denote the adversarial loss $L_A$; here $M$ is the total number of latent vector pairs used to train DI:

$$L_A(s_i, d_j) = \sum_{i=1}^{M} \sum_{j=i+1}^{M} -\log(DI(r(s_i), r(d_j))) + MSE(s_i, \overline{d_j})$$
(5)
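The key mechanics — freeze the discriminator and drive $-\log DI$ toward target 1 through the dynamic encoder only — can be checked numerically. Everything below is illustrative: a toy linear DI, and a gradient step taken directly on the dynamic latent as a stand-in for updating $\phi_2$; only the loss shape follows the first term of Eq. (5).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
Z = 8
w_s, w_d = rng.normal(size=Z), rng.normal(size=Z)   # frozen toy DI weights

def adv_loss(r_s, r_d):
    """-log DI(r_s, r_d): small when DI is fooled into outputting 1."""
    return -np.log(sigmoid(w_s @ r_s + w_d @ r_d) + 1e-12)

def grad_r_d(r_s, r_d):
    """Gradient of adv_loss w.r.t. the dynamic latent (the only trainable
    path; the static branch and the discriminator stay frozen)."""
    p = sigmoid(w_s @ r_s + w_d @ r_d)
    return -(1.0 - p) * w_d

r_s = rng.normal(size=Z)        # static latent (via frozen G^1)
r_d = rng.normal(size=Z)        # dynamic latent (via trainable phi_2)
before = adv_loss(r_s, r_d)
r_d_new = r_d - 0.01 * grad_r_d(r_s, r_d)   # one descent step
after = adv_loss(r_s, r_d_new)
```

A single descent step strictly decreases the adversarial loss, i.e. the dynamic latent moves toward "looking static" to the discriminator.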
Due to the unavailability of ground-truth static scans S corresponding to D, training **DSLR** on real-world data is often not possible, and a model trained on a simulated dataset usually performs poorly on real-world datasets because of the inherent noise and domain shift.

To overcome this, we use Unsupervised Domain Adaptation (UDA) to adapt **DSLR** to real-world datasets where paired training data is unavailable.

In UDA, we have two domains, source and target; the source domain contains the output labels while the target domain does not. We take inspiration from (Tzeng et al. 2014), which uses a shared network with a Maximum Mean Discrepancy

![](_page_3_Figure_0.jpeg)

Figure 4: DSLR-Seg. For a given dynamic frame, DSLR gives us a static reconstruction, and we use a U-Net to obtain a static segmentation mask. We substitute dynamic points with reconstructed static points and take static points from the given dynamic frame for the final reconstruction.

(MMD) (Borgwardt et al. 2006) loss that minimizes the distance between the source and target domains in the latent space.

The MMD loss is added to the adversarial phase of training, as shown in Fig. 3. Latent vectors from the dynamic-source and dynamic-target scans are used to compute the MMD loss; for more details refer to Section 1.5 in the Appendix<sup>1</sup>. Latent representations of static-source and dynamic-target scans are also fed to the discriminator with adversarial target 1, instead of 0. All weights except those of the encoder shown in blue in Fig. 3 are frozen. The adversarial loss with UDA, $L_{\rm U}$, is

$$L_U = L_A(s_i, d_j) + \lambda \sum_{i=1}^{M} \sum_{j=i+1}^{M} MMD^2(r(d_j), r(k_j))$$
(6)

where $s_i \in S$ and $d_j \in D$ are in the source domain, $k_j$ is a dynamic scan in the target domain, and $\lambda$ is a constant set to 0.01.
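The $MMD^2$ term in Eq. (6) can be estimated from two batches of latent vectors. Below is a minimal sketch using an RBF kernel; the kernel choice and bandwidth are our assumptions (the paper defers these details to its appendix), and the latent batches are synthetic.

```python
import numpy as np

def mmd2_rbf(X, Y, gamma):
    """Biased estimator of squared MMD with an RBF kernel; near zero when
    the two sets of latent vectors come from the same distribution."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(0)
dim = 16
gamma = 1.0 / (2 * dim)                     # simple bandwidth heuristic (ours)
src = rng.normal(size=(32, dim))            # latents r(d_j), source domain
tgt_same = rng.normal(size=(32, dim))       # target latents, no domain shift
tgt_shift = rng.normal(loc=2.0, size=(32, dim))  # domain-shifted target latents
```

Minimizing this quantity through the target encoder pulls the shifted latents toward the source distribution.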
Although **DSLR** achieves high-quality reconstruction without segmentation information, it can leverage such information to further improve reconstruction quality. High-quality LiDAR reconstructions are desirable in many downstream applications (e.g., SLAM) for accurate perception of the environment. While the output reconstructions of **DSLR** accurately replace dynamic objects with the corresponding static background, they add some noise to the reconstructed static points, which is undesirable.

To overcome this, we propose **DSLR-Seg**, which uses point-wise segmentation information to fill the dynamic occlusion points with the corresponding background static points obtained from **DSLR**. To this end, we train a U-Net (Ronneberger, Fischer, and Brox 2015) to segment dynamic and static points in a given LiDAR frame; the U-Net outputs a segmentation mask.

We take the static points from the input dynamic frame and the dynamic points from the reconstructed static output and

![](_page_3_Picture_10.jpeg)

Figure 5: Dataset generation framework. Dynamic scans are shown in red, static scans in black. (a) A random dynamic run and a static run that overlap for a few LiDAR poses. (b) Zoomed-in portion of an overlapping region. (c) Zoomed-in region showing paired correspondences with a bidirectional arrow. (d) A dynamic-static paired correspondence, which often has a significant mismatch (highlighted). (e) The scan pair after applying the pose transformation.

generate the reconstructed output, as shown in Figure 4. Mathematically, we can express this as Eq. 7:

$$Recon = Mask_i * \overline{s_i} + (1 - Mask_i)d_i \tag{7}$$

Here, $Mask_i$ is the segmentation mask generated by the U-Net, with value 0 for static points and 1 for dynamic points, $d_i$ is the input dynamic frame, and $\overline{s_i}$ is the reconstructed static frame produced by the model.
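Eq. (7) is a per-pixel blend, which a few lines of NumPy make concrete; the 2×4 range images and the mask values below are toy data of ours.

```python
import numpy as np

def compose(mask, static_recon, dynamic_in):
    """Eq. (7): take dynamic points (mask == 1) from the DSLR static
    reconstruction and static points (mask == 0) from the input frame."""
    return mask * static_recon + (1.0 - mask) * dynamic_in

# Toy 2x4 range images; the U-Net mask marks two pixels as dynamic.
mask = np.array([[0, 1, 0, 0],
                 [0, 0, 1, 0]], dtype=float)
d_i = np.full((2, 4), 5.0)     # input dynamic frame (ranges, metres)
s_bar = np.full((2, 4), 7.0)   # DSLR static reconstruction
recon = compose(mask, s_bar, d_i)
```

Dynamic pixels take the reconstructed value 7.0; all other pixels keep their measured value 5.0.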
A set of corresponding static-dynamic LiDAR scan pairs is required to train **DSLR**. We propose a novel data-collection framework for obtaining this correspondence. We collect data from 2 runs in the same environment: the first run contains only static objects, while the second contains both static and dynamic objects, along with ground-truth poses. Using these, we create pairs of dynamic frames and their corresponding static frames. Finally, a relative pose transformation is applied, as shown in Figure 5, so that the static structures in both scans align. All these operations are performed using Open3D (Zhou, Park, and Koltun 2018).
2108.02479/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
+ <mxfile host="app.diagrams.net" modified="2022-09-15T15:20:40.853Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.0.0 Safari/537.36" version="20.3.0" etag="ZI53ODXow33KFwJuTGBG" type="google"><diagram id="BLs0-39YQgALcJqZN9ca">7V1Zc9s4Ev41qprdqqBwEsCj7Tixd3JV7Jl5liXa1kQWtZQ0SfbXL0CRNA/wkASKZEhPjSNDFEGhv74bjQm5evnx3p+unz96c3c5wXD+Y0LeTjCWRKrfeuDnfgARhvYjT/5iHo69Dtwt/ueGgzAc3S3m7iZ14dbzltvFOj0481Yrd7ZNjU193/uevuzRW6ZnXU+fwhnh68DdbLp0c5f9tZhvn/ejAvPX8Rt38fQczYyc8Bu/TKOLwxtvnqdz73tiiFxPyJXvedv9q5cfV+5SL160LvvPvSt4N34w311t63wA7z/wz3S5C79b+Fzbn9GX9b3dau7q69GEXH5/Xmzdu/V0pt/9rsirxp63L8vwbd/bTrcLb6X+fCOhGnhcLJdX3tLzg5uRd8GPGg/ndf2t+6Pw2VG8IgpKrvfibv2f6pLwA5hzDqAj459w+UNUUYmBRK/vSmf/9vdXihEKiNiPPifo5YTEmYYweYqnfl1J9SJcTPPCkuqFVbRf65eznb/8eelPZ9/cbZ0VTpLj0VttQ/7AMPw7sdww+FHjm63vfXOjd+bu43S33JaQJ0/H5WL9p/oren0TPoANOirOATBNPEwxIDl6RVyTJJbgFqhFDdRy9gukFjSQJNEaOf/daea8fF3dxJDzpP+9u794f60+pN6Ck2s8EXAi1QzwYTd/cvXdghvBr9Ec6un20+w/Hg1v1tNVCjHRPOHDXKg3/aeH34Jp8FU04f7Vv/KPNbm6nlzy+O67dfTGG5J4ksR4cjR4lvTwMUsTffbBN0ySXoQMtyh8bdOckMb0ylu5GUCHQ9Pl4kkDeabw6KrxS43WhZLmF+EbL4v5XE9jZL1Xfov4K+Q3JPL8ZlW8ISXe0lzhEAckBJoUeQ5xZMxKSS4hFpiE9VhXIAFhaik5E0AZIiVLiaRB2thQDU6VsMlzvH7jzSbAneZ6JNY/krxl5sHH4CfPg1coz4VX2DBGDGPUMMYMY45hjBvGRIP8r9YAXej/LAoBSzoPxWorYm0pK/DIDHBUsD4dj9yq8stiOTLeo+tQiVKog32M09jf3wIAkADSQ6VyGbVQfS1EYiM5kp1qqByrxIBVGypI9FkFSdwZFSRHFTSqICk7o4KiwI5FQBYDzRa5q73oX5PikQF7EsFRjyU5hgQDh3RFmCNTEG+U5sOS5kqE5jDZpkA3xT+bEugmrA1WyNsDArUh502R1d7IeSYYYIUJhjZFvikWd0726pco7x8TO6wMeW0K9sroZc1oUY1ID1QIDaCZAek9TILlhBv1NupjyvXFWA6uCx8eWQAjElxkQ0AOJAAmflAOgU2FgJCtcOUpAEQjAM8JQCXhOgRAUxDy3ADEIwDPCECMUJcAWBm9PQMA2QjAcwIQkw4BEJuitRl69a646WbyWtz0p0VLHnIKOE1RjiAJaHllmqnSiSp/kwkL5DPFXq1UOznTF03Q1cNmXcbaJySXC6upGp/6a8U9x2qtTovrpvPkSDoc4DSfM6b4HOd5m7CG5HJlIuAMhkEquDMaBk0bBkRKIHBXDIPKqP8ZAMhHAJ4TgAwK0BnDtEa2obeGqQ1iUQ4B5kVhZUopwChBOVrLLsVQWiBdZUbjKKMUF3LxaISeZISiyWiEds4IVZYlK+HvInvUlCyyIo5NuaIMjZLEifbC6VWbTzfP8RLWJ0ck3V9+POmth2C/2w+Dv3cv69vVhf5L31wvCIKA6td67Y
gEaq3URPqCG3eqJ2YUwFZEP2cZyZ/ce2UFJYQAlIYGUaKf5sMQkgAH58GBEI92Z52ED1MqJ4OPiNIfpg/u8ou3WYSL9OBtt97LYahJYC0LoK23jokWbelkBjy9eLNvuzV49PyXDXj2/LvlYh7cO03vCQ6zs/qd4JLI5nqYbhazeFR9If2wimf1ZM/TlUJxdOVs4c+WKfmE93eez1338VF/24XvzsL1WHm+fugj0ZpBZTyeNEh0+tiSq4wVfHjOQIyg6MCq6JjJfkTcQrmK9QLz48RFss4sjUlcoK/8WTiLLRqJqEwlpIqM9k8lyCAYAtFe7iQppA3dYb9EfW8lfHCf3NU8uKKnVkQpkOzQ3gFUZMif9+KchsyG6B4J0l9Z40KTCXYwFyIzvZtae4PNZlx7BISF9AA5Lj1QWBxcl886RVHLLKVcZCcWlS2R1RQc7o6xRc9lbFEMBOPDsbYCIz4dIJbK9Tk8QGejzwIpDhAXhi5qaXX3x1qtp6JELlgw81aPi6edv1/ZovCEdZ2/dB+3/dH4kpRZ5EpSAZmPBSJEADN4h1ZsgOKOHKcBZbcaoXJaKZRDAeet46M4XHwaPvzF5pt601OWDNRBo8XqaaK7S+lfz67+PfX3Ed/NVneJGlFSEGISAFWbscgYerRh7dgrU7fewICacHfzc+36/1GASyDqYQx4VyoumKu6IBADlJdGjXmr9urRz4i0S2X8jkg7yDGXaRMps8Mu05lJng9/ve6JIWDaFUk7KhRVhoIb22JH7Acgx13VfdtVrds0leCTMFSFz8Y24tEaRdhDSJ5hBKQcUjwH0nxBeYspM9rnTh5IorLAR5v6J/KW2tM/Jh1yik4a9c8R+BSlqfI29U+NDsdD0D8UAyzkgPQPYhqS3dE/ve4wAklX/R9qv8WIBf1j0g3n0EkD1T+6Grw7Gsd+4+RipNmid+9azzRjctjoGUgrI60W6W+SNIPFhC0xYKOjGLXVfsO4M2FsV93vML3SNCxtQjF8jGlqIzJPTRHkAXpIggBH9KW+fe36C0Vp9Y0UonXy3962C33MTYccJ1YjgNxdxwmxDJ+36CoxUwj0l0sV1b2f06By6rKrFG9h7YCrFJVdtWYqGxFpQpXIjw3XzLYFIWihgoqN8d0hbs4LmgCI7uzIYzXCu0OAoaAASTogHDKWPmTI6Q4kaxyEN+guIxw7QKaLf9tsLcJs1QKnW4to27e9fneVp4eOrTaaDTQ5GAOeLgIt6q+BBYhcROtnctZoodBdJ95RUpyzVyGekRkSAcewN625Q5tZZWy5aUfe4CIN1J12GC7R/wxy4CSQE3W6qUpD2XCMutB+OeU2j03uMrLJepM7x1HGaNo3b6/LnWMK3J4bgHIE4DkByIWOQ3QFgPbj3c2rv/5FA7kjAS60jhisbCJu1IDcAv1N0eXeGJ1EoGyAi3IKUJ5/HAqiVbV+Brz98/FOtzRN6ZyBWp9EKN6TOZAImhC4pBbD2ejY7/S6xpXK0qaz3KmSYw6Lw0nW+bD1KteRD8uxIwBOnIOXSfrzo7xAKyzZWDHqaANl6Z8VGMe5/hYi/04nzp4bTz88q+sldTYDpyDYoutVY7f/kHNPBCIMElHDTNiQYh4306tKPVEEHBvOkq1wYTr7hFrNPg2o0T1OPMnY6L4r2TchSLZNalH2TZkLgjYjjbkpEpuhzRCKU4ZWIyW58ki7UyPFj+vYWyWHxi0xv4asVFgEIlF7k+4dQmmlO9WUMctrxJHd1Tw8tSMiQywiUZEBm3UXaKUBG0lMEi+5q8ydQxe8KuwQjvnuUsmkf9K3N61jOMMXbxEwYqz7Mt4IE8Y95NEdN97On7nhTV6JY7wvUA53QawFc61ecz5PNMu+LWZulgAC8RLVQ4UxNJ4yOD9d/6UuuPx6cfX79X0OMgcdKVPVVrPoQJlp8igZCJxJeJIM1TdfedvZczhRsT6rhmXer8or+CYjtxIKIGUGBnp3cEJa5MUFRspzMlhckg
JmQ9s11Ro4bP26ffbdzbO3nBt0zDvlDb2dyMuJ0I7Ev9X/S2+z+W2xmu1eHtSq/qtUL40dYhPYYpSAzHYHc4dYEXfzTOkfByBsAUw1anm7m1lBmUNauFqsZM1UPtLR2B44bj8e3sE9cGO7qnJAKh5OtUHNlExJWQHPxjbE8cZ6R5wBdsNN/zSCJygBZBYg1efGu5hgBGhheW2bimTsuzsqEkxkGTxbVCTCFPLtiyIxwW64yqURjGE7ykWYQrq9US6UpIPhnVEuAo/KZVQuDJbBs03lYr8quHuwG67CaQR3UauXk3BXo764l1VFtignlULLJHypJFUJX1NZEXFkHFI/iWSVhcs9LCuq3NQ+lv0MO5UtUHpLVOYQJgoBhmUSFHMMomoJ29lsYYqA98Zk5zTXnJBjCKL2O0njHIo4j2fdPrcfpj3QPq9rFJX2MhuYPc25TvhatmugDTj1OkQrhASZAwHaYMnW47EjSx5hsCIBcMI4zWwdb48lZWX8dKxVHLCBJ/Vh9YW45YSZxF9T9Ymyz0HYYCeMIIVLeX5NEpUt2o9uGTTEcKNMWvQjG/Lexo5J2VhE04pVMOJFyQnl+UWldB2wD2qEIvM14ZbLwJ0cWQMCpqofu1sgjiSWWV+eCgQozVH00BpxJBk78Nb2CsNljVrNERrlewewoA1BQzlDsD1o1Ii9jdAohwaKGhtaxkXuRJ6zgaJGb9QRFOWggNQBPG8JWIYI1GF52hJKasQIR5SUo4QqlIhE4uUghq+NEqpQcvQsFgFT41ivETDlgJGcA5nYkprpow1LtxrWBYxaO5LN2lfc2h5KYmdohMnxMGHK25CJlntZjsemcpqD5QpTkuPoWWwipka4cURMBWIgBySzq5AJANkreY+GCSEH3tomNvBB2Jgtp5t9mw5L8KgHnK7DAwmHxIcHxL28EICJDc3yOHgQxEhqY/Qhk9gESmXA1W6LOkN/ulQU9tjb9DZFdtbmdAQyDEim3qK15nQoai0yJnf7iNwzJHeVi5YFK2wNrHYraEew/lpgRdxgKsAzwrO4ocJ88U+WfrYLo+9+v/2i3rq/vru//fRevfr8LrhJYXG4qRpaDRse9bSnDx89fO7AUHmYzr49BUB5k7h+sVpsF9PlfhZDBbrkE/FOXXf1+dO72/d/fL24v/38Sf9NDvxW1jnFah+XxvmEEQHSbIKFBI5hNwR2lA1ssO1Z3F73NIax1Zw5J5xDw/b243WZrB2FarW5irCuFC6K+zkoL185ijMD9kVsjdD/ENp3Kk0nMZ8Mpn8ngVKAqPlgB/p3IjimFE4N7RCkPIkMTW3lEZDSXDyBjYOyFRZjO8iUUhg9llG5hjBFBKBC5coJrajpa8yVQabExojbEbeReNXauAi3LfYnRlFZbJlerij5zxpwgf0FgsX+qmynq+ly6e0Ct1gvh+D6pKuwqywKDMA9EfTLhz2FPjxUb67HlVq/Xi9kA8oSFpmtVqCcONnKQUJ16r7UHEOm7fVIeZMW2u8gVJxGCZx/gxTRjPsm5EEtRgI3vlAS2Q7Y7Fsi3366+uPj5fWn+6MiM6cIYzurYhSXRTEkSzTYj329vftdB7xuvl7f3Xz+8FYv5p369ceXtxf312+PWs9R1NfYZU4cWuV4IchB1BbQ+oYDhIqTVmsjyJaLlfsmeo5ghwrAxSBT8+e6R9iHb9Px2ubknY3orWbUCQ/2BF2FIdzgX2fCqzg3L3EaetK/lFTRHVaSnVYOezAzeUytSRJjfRQ5sUFiQ+RgJrPb4YoC1E7AyFkpw60ImRp7V4YQahzaSUEcvh560IVIIxo3ypxcY8iho2j66q9mNy7YqUrlBAOW8HvbqkpFNXbRjJLr15Ncgons+RxEqdIjQjGWJNe4T+f0k6OU6snQlEGRoqnIRE/qnx4lnEPvbVNK1cigdbZVR7B2TibsyXjc/OlM/TnigNqBcfqixk2VDlW9Rg8y7xFdIVM/W4RNg7Da6R1WY6jc0ZWUOF
UGaWPtHhA25Yb6wrhBjx3o5BRimzxsSlmMPPwr8TCBFAFSeFBju+zcXAdzEzLq4soWUPrY3Qdld60dCRArcTBco73PQed3Vq9z0Qmef+9e1reri+Q5nggC7QsEqVgigdQ1ccHlN+5UT8wogK10NQ88z6ipedTg/M+JvRQsk8osyMSpMAaGKhFJgGOIkiLEATkYIOpP39N8/mr3axp99OauvuL/</diagram></mxfile>
2108.02479/main_diagram/main_diagram.pdf ADDED
Binary file (73.1 kB). View file
 
2108.02479/paper_text/intro_method.md ADDED
@@ -0,0 +1,23 @@
1
+ # Introduction
2
+
3
+ Hyper-parameter tuning is a crucial phase to optimize the performance of machine learning (ML) models, and a notoriously expensive one, since it typically requires repeatedly training models over large data sets. State-of-the-art solutions address this issue by exploiting cheap, low-fidelity models (e.g., trained with a fraction of the available data) to extrapolate the quality of fully trained models.
4
+
5
+ HyperBand (Li et al. 2018), henceforth referred to as HB, is one of the most popular solutions in this area. HB is based upon a randomized search procedure, called Successive Halving (SH) (Jamieson and Talwalkar 2016), which operates in stages of fixed "budget" R (e.g., training time or training set size): at the end of stage i, the best-performing $1/\eta$ fraction of configurations is selected to be evaluated in stage i+1, where it is allocated an $\eta\times$ larger budget (see the bottom diagram of Fig. 1). By restarting the SH procedure over multiple so-called brackets using different initial training budgets, HB provides theoretical guarantees of convergence to the optimum, incurring negligible computational overheads and outperforming state-of-the-art optimizers (e.g., based on Bayesian Optimization (BO) (Brochu et al. 2010)) that do not exploit low-fidelity observations. However, the random nature of HB also inherently limits its efficiency, as shown by recent model-based multi-fidelity approaches (Falkner et al. 2018; Klein et al. 2017).
6
+ 
7
+ Copyright © 2023, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
8
+ 
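The SH loop described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: `evaluate` is an assumed user-supplied function returning a loss for a configuration at a given budget, and ties are broken by sort stability.

```python
def successive_halving(configs, evaluate, R, eta=3):
    """Sketch of Successive Halving (Jamieson and Talwalkar 2016).

    configs:  iterable of candidate hyper-parameter configurations.
    evaluate: assumed callable (config, budget) -> loss, lower is better.
    R:        initial per-configuration budget (e.g., epochs, samples).
    eta:      halving rate; the best 1/eta fraction survives each stage.
    """
    configs = list(configs)
    budget = R
    while len(configs) > 1:
        # Evaluate every surviving configuration with the current budget.
        scored = sorted(configs, key=lambda c: evaluate(c, budget))
        # Keep the best-performing 1/eta fraction (at least one config).
        keep = max(1, len(configs) // eta)
        configs = scored[:keep]
        # Survivors are allocated an eta-times larger budget next stage.
        budget *= eta
    return configs[0]
```

With a toy loss that ignores the budget, e.g. `evaluate = lambda c, b: abs(c - 5)` over `range(9)`, the loop narrows 9 → 3 → 1 configurations and returns the one with the smallest loss.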
10
+
11
+ We introduce HyperJump (HJ), a novel hyper-parameter optimization method that builds upon HB's robust search strategy and accelerates it via an innovative, model-based technique. The core idea of HJ is to "jump" (i.e., skip either partially or entirely) some of HB's stages (see the top diagram of Fig. 1). To minimize the risks associated with jumps, while maximizing the attainable gains by favoring earlier jumps, HJ exploits, in a synergistic way, three new mechanisms:
12
+
13
+ - Expected Accuracy Reduction (EAR): a novel modelling technique to predict the risk of jumping. The EAR exploits the model's uncertainty in predicting the quality of untested configurations to estimate the expected reduction in accuracy between (i) the best configuration included in the stage reached after a jump and (ii) the best configuration discarded due to the jump.
14
+ - A criterion for selecting the configurations to include in the HB stage targeted by a jump, which aims to minimize the jump's risk. This is a combinatorial problem<sup>1</sup>, which we tackle via a lightweight heuristic that has logarithmic complexity with respect to the number of configurations in the target stage of the jump.
15
+ - A method for prioritizing the testing of configurations that aims to promote future jumps, by favouring the sampling of configurations that are expected to yield the highest risk reduction for future jumps.
16
+
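The EAR described in the first bullet can be illustrated with a hypothetical Monte-Carlo sketch. The independent-Gaussian predictive model and the function name below are assumptions made for illustration, not the paper's exact formulation: we sample accuracies from a surrogate's per-configuration predictive distribution and average the gap between the best configuration overall and the best configuration kept after the jump.

```python
import numpy as np

def expected_accuracy_reduction(mu, sigma, keep_idx, n_samples=10_000, seed=None):
    """Hypothetical Monte-Carlo estimate of an EAR-style jump risk.

    mu, sigma: per-configuration predicted accuracy mean/std from some
               surrogate model (assumed Gaussian and independent).
    keep_idx:  indices of the configurations kept after the jump.
    Returns E[ best accuracy over all configs - best accuracy over kept ],
    which is >= 0 because the kept set is a subset of all configurations.
    """
    rng = np.random.default_rng(seed)
    # One row per Monte-Carlo sample, one column per configuration.
    samples = rng.normal(mu, sigma, size=(n_samples, len(mu)))
    best_all = samples.max(axis=1)
    best_kept = samples[:, keep_idx].max(axis=1)
    return float((best_all - best_kept).mean())
```

If the kept set contains the (certain) best configuration the estimate is zero; discarding it yields a positive expected reduction, which a jump criterion could compare against a risk threshold.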
17
+ We conduct an ablation study that sheds light on the contributions of the various mechanisms of HJ to its performance, and compare HJ with a number of state-of-the-art optimizers (Li et al. 2018; Klein et al. 2017; Snoek et al. 2012; Falkner et al. 2018; Li et al. 2020) on both hyper-parameter optimization and neural architecture search (Dong et al. 2021) problems, for sequential and parallel deployments. We show that HJ provides speed-ups of up to more than one order of magnitude on deep-learning and kernel-based learning problems.
18
+
19
+ <sup>1</sup>The number of candidate configuration sets for a jump to a stage with k configurations from one with n configurations is $\binom{n}{k}$.
20
+
21
+ ![](_page_1_Figure_0.jpeg)
22
+
23
+ Figure 1: Search methodologies of HJ (top) and HB (bottom). The figure illustrates the 3 key mechanisms of HJ: i) skipping HB stages based on a risk model; ii) determining the order in which configurations are tested so as to minimize the risk of future jumps (this is depicted in the figure through the different set of configurations explored by each approach); iii) dynamically updating the risk threshold based on the quality of the current incumbent.
2109.09133/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
1
+ <mxfile host="app.diagrams.net" modified="2021-05-04T19:33:35.562Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36" etag="mHRIY1XXaEgKPixnVLn1" version="14.6.9" type="device"><diagram id="bOBUNajieU8sP3a2mLa0" name="Page-1">7LzXkuRMkjX2NHNJGoBMqEtokdAygRsatNYaT09EdX9idmf/3TEajTQjYV1VmZFAhEeEi3PcI/sfL6Y7hTkaS3VIs/YfCJSe/3ix/0AQ+IWizx/Qcv1qwZHfDcVcpb9v+qvBru7sdyP0u3Wr0mz5pxvXYWjXavznxmTo+yxZ/6ktmufh+Ofb8qH951HHqMj+U4OdRO1/bvWrdC1/tRIo9Fe7mFVF+cfIMPT7ky764+bfDUsZpcPxt6YX948XMw/D+utVdzJZCxbvj3X59Rz/X3z6p2Bz1q//kwdEZM8NJwpiP7mZ1sbf/wdz/G+/e9mjdvs94d/CrtcfK/DIPYKXVfezVPSezWv1LJASxVlrDEu1VkP/fB4P6zp0zw0t+ICOkqaYh61PmaEd5p+uXvnP9bc+qLYqwLPrMD6t0TL+2sK8OrNHavpnSOqPVuiPlud1Gq3RP17Ur7cIX49Z8Q+EeV6R9fPrTVGm3YSyVVA0ZXJUSNENRT0S8DhFWZbo1dHraTApjnou9/kBTy6UeVASXQQiXbgSfSQiYzYSUxwuax4KcxwhV6oGT0OReL5ToTwSiTE3+aIqhVGfm4vXxz4O1TEhjSZ63YHean1s6m1ef/TOyU+P6tNTwJpQoFZcIjnSpdb/136e3gmK4syCtgpRow6uoijJokzVovLnM46il2euHE1R6rMa1N8vni4klqY4EfRhUsb5rBfFsZRp8gxV8NTh/r6RY6jDtGgrKBszsFWPd7lLpt2OP4uMc6VbZoZYpCHHc4LeONVnJXhfa4rJeDo+KSoCu/BI9h/Hf9pMnio44df4n5P7Nf5h0o9U/F93/xqfZzlb8njbLNXK9AWLdqR7iAUarO9AsWB+YOYnRxkHSz6t1P/LLvGKhkruQqji8A+ClgFxO3zeRgPJ74/i0kaAjeM09XYn8unVVVo0Qm7Z4mAqXHN9yGBBJiVJiefeVDm+6YjC+HdGtdvZxt42Dv+T2FuWXbPXI+83vq6aayE3l5FH9/QR1esGWcAMzCP1Ui9Hpmnac32uRzFpaoaApfD5dDKqHowovKAv7pItUuWpx/bcaYRn3iNNerdOMg+PDPfXc/CxYxLldEG/bq49O06/MEp1ImTu/VSfG/gWU8/wn481c0nlmNSbL67xSUqqpO+mvpzU9dYN9+wSZNdxz308cpeh3lgJ0X+NV9yz6EeYds/RVMxhXs7X+QqrihVsC/2PV/79jVe+e3nJ5lmO5QjLLTl6r3RtCORVBHmOV+w7L1klvy0ctbUxjjfUvaOTjNN+6fLdy5dmhmcgnxZrO/o61+flQe1ei1vXgoTEnPTT09voHGPm8sgnGGtbiD1Pc7FbosjslyQs8s5gdWY6aX3pq4W1uWA0hElMYzVpcgbBnnvPjl4HgYAsuQYNXS5/gZS4Z8T46PYotsYqgn+DroPbW0v3pdpB1AFmsJfW9t76rBx9Ff/icU+spq7xV4dafMq7ngtbM3diX9mO/qMl/suLRT2P11TYUS3OmtWLLfMdIbLbSzf41zpe5zMytI6vdPhm+qp1Xa49qsH7eYTiPtJ7/eu5YbbUMxj6OwCqHPWaIi4pl3pfRbyWHmtI3hB/dJzyW+mfBKAt5Lv6aPHVR2/2gqmvHWxfy3n2roqMx83HhmmYal+uoYWaYx4ZdTGt7rTHqu3xfjQH3MDmqdUJ56lz7UHnQQ7kEVrAOTN1d9t/vwaSaIuR
6I1PbyWCvp8VTDodh8wF9S2nesMX+UVcax7BPM9yXpHtGTKdt5cMz48JhD7hw7CXuoQ5oCs+jJiEtgcVKdJ/HirKAwEyFCQf1mvGrpfY3a9nZ3k5DhTh+hBfeX1hOlKlqf4MpqPbrCHAsMNhg5zZ+XIOt7w+a8r7S2Ij6oeEoVQlDrT4TNi/npwwxT0enITo19+rxeFPiBu5MYS3iswtutx8ek9QNlpHjY54Zny/r3fp7WHK7+utOKgOE4rQBdNueZ74igy9VTdSHnBF14gI26GyL5PxVJKEhJld3+AkbaN95HvPQRd/MlJnfuYRz2Dv9T02n7+9C6yKZTjIPsMVKBfHoTySt5/RgYr73dffeCJJFkK2BkYM8is+2//cVU2BWaJa9TLeqdW8gqfJ8d/AuIkPOfKp501grE0XdV6vUCt5zK2aPlOIy1Ua5D2+ncRjvXwFk9GZTkOPpM3zlqa/5UmEfpzlnhhtmnePShu5yOJ32et+Zvjc4/XftA29W73k/Zvqq2qJ7PLW22CBUp5zpweF0Shpo5pJNda/sDEnFdV5fsOZ4RnC4xOMrUeXa8C5/XnQRMYITCNOiCuvkbwqr48+vT5n9dIrnFsr6vL/5d46ehNddwen2+1pXs1OUL1M24nlKfAl9bpH0OszLfbpoU2itG7WZZgRlAoIQwBE6WsX+l99t4Bjl5F7orblv/G3yZwmyLzyl45002dzKgehj+dpMwqrqZPrpdO+MBR43++RZPO+gsVrcERWIwdEk2bsfamhMuM/uiZaRl6rkDhCBce9TSZ1qeZpDV2TB1uWpXkuZnHRrbAo4nld2AJEZIQqWDQxFQc/iWAYju6TGbvvPkKRdSt3oWlV8b+ezXMpmwaV40nYcFiL0zwNQ9WF3e54dz0a+i5ytQP2luhyTn2TW0Yci9W+htqWAQqFYNGOThU4w4C8+pfD7XGAnAUqNO7rPU495D1PR2fxnh4TIdKm1dO07oH7MIPe1wJchEviGwleeYtNvJBfWH5DRFcd2oaoU7C+VkVDHvxL51r/sqevGm31LmY8dnfuV4fI0Sv4NP77hOx1R41G6524yvXthvMS821/a1p8DtcN5ojPR8trEMM5b3rDhhPX5OKoMzYvj1QD5UzDf7tovy4GXuY49Q1Nqz9TN8GI9TVisFqEZjge76mepTrcOXNAQYx2Djo4PDEsiUiP9LzHj9wkkiMuUYsPR6Gxdgi2r/rjuU3qRqbfoxCZtTrsGULbd8pWbDWE/MhO54m4FttNU9cBBJUdJt4/OwFCt4co7XfvpNQnAFV6/I70g3U0o2lrcbt/+Zu0M5oOj2bseQuDkLV00vObEYsU8UBYaVAPUC04m2FsTbU0FW34I0FjJWjJhE1oZ0gUlvwPl+q5tO+iRBzqG6iAenO6PqCNxHrInhelDD1f7HycSFug0qmYi2PcPfaFBdNjMF0AV37YIFs1+7oc6yLQh9EaDdUYmOUJb+wddmlwxN/v7iUAR9yOGHt4L0ZCGh9qFK+5MGU8b8z2uTnNKwH656W7l0ZT5mYXX51Bh7760r2SSfki50V26drLt9x1n3H1d3fsV5IYD03vqmfBP0FApqmXYLAzUcb3X0S5/9UFb4faX0DKYejSaHZJ552VTTsPrwzr4mR1DGarv+8nMPAt3UCkt+XQGqS5k9bkalJXhxjNA024W8UsUQhzd48RRzcWbKq69DMvpG2k2NfjVfgBgmyFfvnnJdBLIUX67u7D1ssWEDzGt2vhRkcED6TpPgFUEiSAcrf3F5T7n1/sKxZnol9jsd8nbNfFMqvp3okETc3vKL8T+JwdVccTzDsNPB7oah/KLnsiXHjN2vwtHUv76nkD3XIq5e/ZILee9Hq+9t8jCmF+M9DjEvy7UlFUVlP1HFsnQGz4aqC7TgIUM+etNvsS6YcTaqKbMzrqhj9u9BTxWJXwH+marTEIDxhPCJZv9pcWKkEABZ4CwXLYTxxgWczHBzS0AiTNLoTuPP5t
KekE0/xsbb8zdB7DgC0/Po+Q4iBmz/2HiETyVNsTigKrFoEE9Peatu41ov7yqZ+4Z5Q2FcaxEGA2AsTt/Ljrmal2WuOOUhkzA+i1PrOh3bH2JxjwFVoMP7E4YXHzQb4W7I9QqNKzqv7b8te13jxKcyd28tKGNcpCctoyTIojqZKgSpOctY6B0y2DbU3y71d//AE+uN23+Wbbvp5sfCoGhGkh2lAePOQfLIowefzBvXaAvyC6JIcg9dn2URgY22PLyNlrHVcjJIW8Ju2wUlpDqOZHazp9f6ZupNP6jcJxfmhj0E6IAn0/IJOisFfMIdP24jaKSf1/a5pSFXZSpYV9yZkUJBxk8xCZFuX1flwh4FmTfbvWMnKc/P5kahLFAOsBVHab2Ee3mlwox2CT9f37/UXeP4oDOharm+yQBIC58St6c41rSbrkMm7hjveQwgP9IU+fTZDvtAt11jPaD+BWYNIPuYpJupM6TX4fPaqfqih10S7f8ROAEZDI49nlO2fANX7eIfyi+u6liKz51WtDxPbP0857Iq7vaS8RVoTafS2vQOU9P7FJ8TTP5BVeBudvbofr1ruCr+bFzxeBrDj8X30Y+yT2hLR4QcVK+X6cP/qOKxjV5ox01a+GJPXJz6283U0mrcmonRHRwXs2Ccpfff3R+XvNnOi1eyx//9nXH50n/i6v8WsqK/TPvv7sXPk2/7+g//cIGv/R13bVw+3NZNi6IHCtAxh4D04FaN9DfZBUwjN0VgwLZwYII74K4B7Ho2FNXpBkzZ5L/MOPY54X10o3NN7rXOsUpqimZuC1sFTwTq1CO091NLGZDH9+5wXks0svHulnirL9wf189wT2A+933kcbDyb3Y/cX/xXjxEVJfPNvO7b/py/6F4ZDTJBfBv/o3/nsbF6z879MlMN/pt+7U8iGLlvnB3JAvx9AiN8Z+98lCxj7/f74qwAAk/j/ThK/msu/5//fv++Nftcdij+7/ys1D/bhV3b+38jUv/4/kKmnQE6e/pWpRw6Go6hGZoriw1CLwpjLhzkWhS0plSkWjTsTj6NNgy8lmy0Dmz1N4/mxebpw+NIMJXrIBKuYJetouPNI1Uq6P3XSSjVF0wXoC9KZ41DZ4q2yFqWLlqmxjOTyVqKLRfFlacrnaO7LnJTDU4slWFzGWI3KMFLw3GuIFqWxFOFz1OLytuoLINXNUDQnUTpFD4dKSwXFcCbHcqb4KwsO+AD7k4n/ycb/7SpUmqZcgT7AmiAFJf3ky6mfukQClkVlvvI93pWd4S9sSvU/qSvdZeL71QGuiwIPsZVaBbmTgAOQU2I9AEcrPv/OEIB1l3MgjjFucmEUnygeBHbBNTyHDXhdf+663zUxoZMxz/n0KntTu03qQVJZ5L4EAfBdQFBxcKeQtzvqrh8vq9c53gpXHh92Rp1cN5IWiOup/0hKEwDbpNkXTXhxtmsoqlVt0obP9K7kz4fPfnzd92UgfPFrzoCpKaHuVGgwdxv58QY83j6VrKuh5eYLaUB6XRA7QQ09/DzCXvuZvsf1Y7e6+55IMSws+eylMYJx0kkqYs1iEvBw2wQpTYc4DnllYwQ3LPILfTB6LKKL2BsQ08HkxNNBMSSoSYAJrkckEZBUFIFLXUmvnZ8NxMVeo+9c9WW+N+9jovhC6gPgiOaPfweTVgjG/fZpZNtCLIV61GlOF8rSOM27B914ReM9DlxuDmZMf3DF0cNNXzjdZ3zO9ZL2SyfDTZb5WPZ7viLl8MiRowTpdMsS4MLHTNDgvnpleEu3b3ep6PchcIfDjgwBRLHvnEAwyvFs677z6ptFH4l4k74DWduoYd8ZQNyQBHtzCctBiWeOtml8zkqbeVKiBfJlYnrqQf4muR5KzmnbPhtD788Q38cDQpTRGVdJlq8TfaGsrgEllNHTbd1WgPw4hMdQLyJ8JEEWb9zudStBWNPBBuN3LK1NhwLFCBU9kb9fvs98Q3sJeUzyQ49ZJJzdGwhtTqsQQTFrDBtq
0nxf8u2JX1bGSDlltLDDUNEgqLUfSMPKYOEVnOhe74VwLWqdNxKRvj9BU75hOMB9dnOX+uW8ciifsG0e9hRBEv0LeItqfNAIflbTFiXA/9AXd0UWVj9ziLeN7A0oBkn2+3muHvcQeWQ9sl+K7eud9JEqdRfrQnaqaMAoHzAqi4xxI4fJrwWRPMhR1s076YGxKkRJvGtF0exFjAt/6SNb6CRMhZn958HsxMQvRGo4yePIcF3lSYhNRhPz+/5c1mtstRAXU1jzxiDccKzfsx3LU/hnlH7A9IrsA65ne9QwiBeAHN3OE69y9uPVEFF4KmWUcbDcIvOX4ZDijm+vGcuqNXp8VzXqqh63IOlolM/ThvTOjK6WVTWqSeP7ykZcHAGYSBGy/0I+YMvrAFhdNcrSVQtqcLYyd2tR+4HXiJy92a8XpBB+nFBRTZU0KV1Ad8ut4tutzRFFxcMvZxZ2QRAI9NbVFOjwIcK/IvyfNcRC/WfPCS4T1DBd4VcVkKE45pfvlG2GoShFpqjSpY3JUAtZYgyTMCjmlYPMDttF/upLihUHr61eE/z2TL/tXlqXxqIt93J/Rc9tbWG5/CjZz/wG5flN86b/2C3zKVaJoniKfyT9/Px+JrE9L/+82F/Cc088oP+sgP4HBPPM6Alj8Z9+X/1JftKMqZYmVQPu21L9/dpf3BObCANJD7EEhraDGfT5ErSOcxF+vZG4xDnOSWz9N+0yXdB9fMRT0uUk1R2u4QmiJlO4z4pQHyDEhyqA8Bz49URd9y+R8j+kZoEs0iMhqNGawrMj7B9Sc6ZUUqd0ms/FXbKlCpyrNvIpMSpk8nJFO9plwnTx556Auq5E7YgQlREZ00alPl6Plt+A1H0O530/TQT3FQLhdWZMzr+lTT7YmdDTmOK/5f6To2PVCE6sQsoFN7efB9UzH3OOMO8pGGfAI+lNT93PvMoJcNjHzJ38sEknpNRqnILkFy8nTrF4nvYFSeJ1tN2de78YUNux94rIhjj1sC0ZmqbNZuOTuQ2B6dI5pR9dHX8K7bLztfztJtuaiK2SXAW5encPR+RvGSR96czSRYmwx96dr1V1FQfCiW2uktXLe1maul7T+XPxTwiFyLzK5NROdfZqnquH5oZL9h1LwBv2DWfM1W3d1nouSFiciIQlhuwE00CA2MrY4MpNGCIF23ZU5/YUCfkyweaIqTgHqzOf0VRhIMnMt5EDX+wQtpJEZja4Pazs/Pi0iy+QUFPjaIe7eUyd+ly4ICs91nCYCGu9AGVjAHbNaanrR7q1GgtC0s+ApVYb0v90tSGE3cO7hD71MOcjJHhxFKGTHfXjdLv5D6ehYi/mRtKpDTLip48ipvzwyggN9TVr1CL49r4qqWPzcXouI6mtDBEq5nOpSOtl0ZsHOwQ0pFKYwICmthTsogiwiYbfAWEXmXS+WxN9rTLefM15AiG9VTOwdTWeOFgv5ooF0ovE1F7fUVQPmN30fCbQZOoYjzhHu/dYBBeTFnWRCSO5JeaWe35Dh1KpoKgkn867F4xqiVDCqeOce58wkQBj1KbU4SGVdJpmHxAEHzfdrlOhkqoLSjo0y750Pk2G+8Yd1khcrNGZ2LFIRYbL0T0izzNhp/2pZY0I1vhWrtFoBPlui44K9okstXeb9ltFqYPIWq2lIBsMIfJLExkwSxnr5ohXX36JjGYWykQiArUBhvAzJOevHZKF2AsfCzxoo68SKinUahBs7RJ5r9fqhiuGX5PtcdAmFX2K+S4iVxi8OOoaqzEUolYHh0vrIqijzbEJp+3qcbCZrrb8igixcpAc+OsJk0mjXj/Me4bWNWaJdXYTdAN62CkWAmv+pFzEXH98aMLR5GO0YwsCcPzt8Hqt24xwbCjuoHGVyGlDV37dmRS6Lj6V9a+nTpKCdhkb/qzU7iI6QLUelI2PTX0YDnK2KLKhJhPmXW3nrLxgBHmQxcdJxVS57UbX9G8MjqEBWTUQyJTYXYkb4JdXlNmwHMXhGh9l
3xDSvkah5oEpWDy5csbKB+FhYO0zGqj3jpwnSx1Y4/T7vPU1mFFWmL1gd0nIM/o02mpWSJVYrv9y0Q7B0zISQEHtEy23VHVwN4hq81G+q5rb0a7ZuLvRtyI//uJOEHxDPSbV3XTmzyj9qVOwQ/KITQ+kU8E6fS14DQnN3u25NlLEcvzGzjbkoFD6UcLAoEeChHVf1SR8Trp95DmtHfa9ycZUhEwd35dIbleG3rLvNvtyuGUy3O4vNH+TefdAYIf3Y2a2tpgfXR3iXjpudp8oIKYOcfHRjRAjm9DRhWL+Z3uvkThjhfbUF9Eu685hmL/cG5SSJHeebZOV3i6tw7eNu1lRfKzX0K9S8ALvPW5Xs9rza+e525/L8gADHLmQxkNHb/XRJTZH79x6rNcTsoksYyEHA++XnKC+uaGYr3C7kMtA2T280AR7tHu0V6PNi6QTlk50zMf2edexepvcEDQCRvEs48R47bp79muFlV0tf9LgbOKvC3B47xaKEHmoHPzV2ZD5kHgC6iMsjnvS+tohqQF3v1NzHynBSj1CSXuAPFzrSjSrO9W7h33gMMQmgXS5gWCkmnx8HePI4YckY496feHr7bWDnHakiJzAePnhel2wgIucGXnJkBVvSAtB/divcuY6vgomjBruQts4ta7sQqi7e6GHvzcOCy9sw0i4C/fOX4ZlmCTYijf7joCfcKtOhw+YVr856b6jhaiBtuN3i7sgMQ8rCNL4X7O+UWvIOP1GMy56ob4g68Ng20uZTecwWd8yPx8Mias3sooKIpA/bFAlGnLMQUq+WltjT3bWJN14WVomVoDO6pcGldzsVpDYPkrT3grEuVAWRZb0nVfWxOydbaajWzXpjpU26CYLM2Nui+8g8dF4kx3zYiDphWrJmCAI8GosooDAhUUSb8vM9y2BixToIZUw+h16BykRIu/X2ATfY4S2K2BmtKxs1xm0PSJcWUKsuh/YU7B89qynzUX1qIl2mYE3Q1YKbcbwiqK46lEtCDVjqrlMtPpt9/Hb38O3oKdKxcp1EwBqE6XAmQR5xwVjc9ZjgEIqwDXCqcFH3yaKrVA25DMKw5fSJXkfdaRFPGIY9hM6HzAcO/aO3jzCW1A9io7epeolmJfgXp37iFGT6N+CrDq20h/Xz1mJ6k/ExiQWOD9H/5yh+xuko36QM0db3MlVVGnS1GXa0mG2P+X536j5QagSpfcgySJ6ZeDzNalCocRMk1hc6vE2ivSahDqV3lvbBoKRHanpGcn3pNTsC9INA/gFqwZsGWtravooI7iaFA+DuM/PAbpkNtmsc0Ibwz4K46hEEL8XsyeMvnW03+KeWirDN7xl/ulpTV0NUlqgqdZs488z3h15F3MtZXKUKiR5uWht6gBh+8SOEFTvKdk8nl4RWxAbzF+rQpiUbD3zN8GZQomm/364wHwmfnLPujxI1jIL7uL+eU3YnzUxfDha/dmbfv95PoZOjWqBdX+3XJIpC8afD563ZKF8ruKgZ6lvcUE3dqNYGlnRupA9NCsxBnIbzL4JXdlsg0w7/LeS8NGLjttNDJuASzou2e63T+gw2NsyC9UhNaJPLhkP0qy3b5DQzmpUmN6z8zHS53u5OO74nvZw2POJtu+kRIX+dWlGhcfPRmtVHQP+95NjYX+YFM1yP7zDZOgi4WkzEClK/cVFaI6RHJ52fc4GaL7IKVYahXJJREvl5avYuNvktIqwKEditHqoLdN96/ayyAdQeOVv2SvGIQAseVtmXQ3vx8PzLxmUFiObyw0+stDX7PjNUCMfYzOGrSndBJBpyypwgMaZsvqUJVPFyfBziAeg/RGyLbCxNzh4V9UV7onIbH0KnPNfQ4HvG04F89x/+qgrhC9syfoTHRgsx1cQE09QspnKOKkYY8u/1V6eroOCY4kVzokm+wbIxct2cAAGASn1q2PNIT+4vgcnZqAlcY1e7HgDL8LvQ4Q3SzR4Tm5zvNSZN8l9aXL19Q11vecj4BexJTUqkC9Z
N3P13ySP5Civ2cyYGdCX9qR+J6goUDfaQYP6InkpqGs91TDsQBWUpu73UD4b9ugsD6jSi350xpHGa3BNDvU9QDYsJzHq1wSKZa7+iv0pv8Fi2YvF6QaWQTu3ZBb2ub13+qr6W+fTmEM/t5gSXuU0i7xL8MwHMQDJCpg+69vt0M9qxO6pKFdyS4mWYmlxVrgR0O6pYiTTlXCeZoaH7A4eA/6BV5L3+GSJJRv386E/xXakqiJ1IcWXYUYqIjp/kCbB4BJdFmvYUcDEPEf0oJh5fExsV6sTgqxk6PdG6X1Lo8kjYF1V74BV4v19QZI6jX6OZHmgRA33ObenXslLs6aeRvKWNJZzhuLV+urV1IqXRJA1QRq7O3oshncF6Bs9jZwogzOaGCA02DhCNAw0QE3sybKr+h1LOsMzSP26/ZrnWXjjM9gYzvg7rKogLOI0APKgwkIqNNZ9pQ8obOykXtxvmgqdl15Sb3+rmlVkLm01TJ35CWAKTraZWq+X/oE4WE+3o8VB5VZ/BDCVD/RV39sJNhmjXr6AumWj/RzqgY5AtzCZ+I4IgTgOhkbgLIrFtUV09YWLQj5dloSV02rYKTzZ+bSP4g2VqZ8p3DPgxh41H4wv6fXvDLqJFqQzMt2KT895P8gEtbj5eaiKfe1UJe6AOWmvHKfolUvmgzdmc9XK8X2/1IyTlBQm6S3NaHOpsJu/iBVUyVTgO5i+08DUF2yGer9ojvK1VIfh1vV+oEuRTelXfk2d3hoDITlFx9qYr2ztHer0nApU+qMMFBmePFuS7ldVTx5Lp9jadFIbcQHXAGCmIecnS6znVZfl+J2UsNbD7+HWobId4vETzPx9W1oJhY6NQX6ELAodb5XRrPxbboI7s5Pilg8nsvC3i1viHLL96VeEv/aMjpMuMZa4cDSTTn9gUknGjJCAS9ts9HKCM4is+1BKNkX2Y05FoKsz7vAxUCr/O4xMQlDExYKzmElPxzMS+GVbuZoo3KYAaX2qS7ixrJ3QjUcEccbe4lC7W2UmJMkXVRXGIr0xomApwsx7WKnz8QyVVsR2YGDA0l9hz3w7YBIoZWL5O6NPRbEqzI3O9bWQ2UevoWxlBTLoQsZjoXy+5XZ2S12wwdN110P5QZDYAwA36QeTIRZcwFk45Uwl7zCRhWklXglxgQIlLbmIaDVy+bXdI4yM/jU0x6avfdFBdS6zE6QKqTJsn/2nxQ2uy28e94UwExRepf41hlWxDBIUHyZKtXqomN5WAhVVhQeFZfCaXpOEEu/8t+17hANVj2q+827Z6bwjWNzR4BN96WVn1YVKjqLQsqItIQ1zsZhqjRkwj3UL+bUfWRaHl89gVnqIqZotRlwnKpJolj2bO90t9rLGQ3Xkykfxc7T0ywuKyfbOZr+UwbxbyFVco8Tkl+FhyT6hJKI17ECK+au8kaXDbowTt9NFD0cc5XpM4XaM+PzVG9moSYc2P20OZiidMzj9GLcfmohHUt4Gnrhyc93d4c5lt3xwoNRd6WwpfjVu/szqagj8EFzw0nGVmujksCHY6tzLHhUuHQGb0HU+QL803sbUnwBoO2bfazadWBZzvmB20LTKij6j8rgBg4QJgEHt8SZNcPolCsa7xjJ29tf9JwXwUz2uQBhtYoxiud2Z6e2M4Kx8ab6v6rI/TkcIR9IDQj7E+73fW5h2UcCBSGixZs19Q4Nduw8OeEOqpiDWs9V7dPtGuiuDuD5bFbG9HDXqAw99eBeTgjBBGEVXVH5VcP01u+osU/LzNrHuZj57x5MzG8v7/JJRWIG0LWne0Dc2+sufwojynKmnPgNcP7ypdJoua4u19+7DiwTxdN8GQUH116v45XkK+BBey/HOxeXugeZKtnYdsHYdqcO30+rf7NHVT/4gBXO/q1G330d9viX9YIs7Exxt//pQjo8IyCIflk5X1+M3yAldMjIQkR4cEcB7aVhDYvziOsnq8vI5yg2SMDxyE7mTDUomq0AXBFXVgMUy3gS9uthi
5vlbKeknwmQcQ95TYXgi6QgykZHG2LkBiUSBWeOxPSXA8jomwSShpMiH+3g27RePtFdsiUj3bNNNVkbmvgGoaFksefkWdN+Pin2suteATTh925LjSuOfQAv6wZNY7/b3AoO1ZwI8ycoq+YZkv8B7T8AQF/uAqF57YbOac5SeBa9v5StQ/Gur4S+GkJ7AYzajVhGMXSscrPDF57hxle5Mim9NDz3l4XmpYKOoWOypsSocnxC3mXiSwG9I6Hzf56ZjoGj06ZnPlYJ8Ok/YlYGVk0gpNcmhMl0WBT+EuLxdEhSkmYT7okHT70qALNU8Fn0teDN/F51VcZPxqZKKS21hLp1aEsBKXCqDWBYGPDG0grOiX/+iBzMjDOPRguiTCN9gjbIuyUKpSTmiIuHHMRZPCIxZTPFJXOw3h2Gj0mM/1DP3Fi6tBlp7EXavL2ce/qCNnmZMT0RQ7XF/0dEX1bBJVJ2Xcz+yfQl1K/U6eLfqqZin141w+z4Tz1UD7VhPD0dhmWjNfs75HZ023fXOjsxbz1BaONBwh05Q8sQ9DwFgj4hDm/A6uh5WXT6Vmaqw3jVNEgp31/n6kctectWdx1KrtwpwX/MRgJEdxTUb73zqOL9iOR8RxghLiqjRUilybvzo1IeAcEivIW2pucj12dvi8TEPK20S9Ua3qa0h63iTVecoI1s+sflIQV3nIVQy3xXxZ+rFpScect9/kXQcNHEBw0ZdOier7b62NWCnuIFf/qxM+TQ+0IyKB+LZDn7LzWSZm7XdHUyrUbnPZ7ydTldnmujAi5Eir+mjZZyh9MkSsswnE5ifLCIARAY311tmtf37sxS59P5dDrLfc1wRBkzO9s5MbQi+5OEbQt6M8WxWJg/SsbTnIIzwU0FEgeNnQJFP6ZhYemHnPvroAxlF7lvSI1vJ2vd+3BtAMQ3aw0jHzzGl30g4rGMenYEV0m5JcWbs322PTEmcXyQSb5YCeYBcqVrKoZmPunx+04zrN8eBdiOpZElnNY9Gw6kd1V9mX4JPGG1MO2BINEoRo6/uOx9R+VASAvL3lc5D3Ce/3PIDc8HaKdD3rqlPK/HBZezhB5w9Hkne4cbC0nVitJmOVXO+PLweOtukmIZOg02deR34XCkF6uk+ennwUu/k/LBvHJNg0DdYQzm+t4F+5cici9hXj2JbwRyIW79o+RpsYQTq6IcJNN5VDQhQV5etjjDc6gui8ULDcNh5nhwFUs9lbViMV6ERn6SNjpi8wdbdLlh6W199iTGvB9c3XRwL6g+KA3NbyjS4oSFmV1IZ3vnNT1/mlYps4elG9FhdGmzT4+MvmBw/GXU7kQBlH27EqMc995misF1FHxzlncDmX/3isC/5M6Q3jtg31HtyRbeuq6uwGptAEHCMzza469l2u5NW8r3ssrfmEyLqaML2+brsVamnJxa34KhiNdIGusfYdrceg5Mwb6aZvOTj23Y0X4He/uUwH0UohKzGn0AvzCnxMLHEr+OXRmASaTPLd69L3QZlKf7nHAdi5+86lqP9DWCiUk0FaxXnO7c8RXWWB5xNInTFDwLb6nV/D44w002pPtxiFbK+Ko7mZa0efUU12+8WNiAdviXuOyrOz52JlNSTLzdsI70P0ISPwoj4WWmXAFSnZTBI2Zks+0zyj04db1bOlVbT2XctVqsJ6Qx3CNXLbosKcJIafdtuEYVW6L+U7FGiZnvJrrhiijc+sFkhtpH9Kmmfoup0eld5tK8QFFmuAGhXADgPz/eGl/N2idC9G8DOzs/CQ/kn+UTRcvtwb9N4BeerjCBzXyaQrOH9USVF0mV/chVflHh/Qo2Jw4+JEPCElBsWO4hTTFD96aF08Pv7Pa5SPfFer04ZJzAD8SaI1iuHD7JWXjaQcAS+WnSBDS18cEBHduj5Na31S12yFZtg7TV5hCka2oNXiilP0A+uRFrSGNdh3NcFW+qZvF/8TxlJimTyRfKPpcYuDloqzX+CxHuZHXfP4+glkQ+LzeEt
VmxZvxQ6+qkQ3Kc7pPMTH0Ea9wnyT2/ZvFohKiMjta9KicqkGAP1dLDJ9d6N0VjYBiVE80KFtfPcWP/AzowT1OxWRPuwRegL7Os7bYbsmSOYHGHHfTKxMx9B3/K0sOZBpwBunt/3py1fytcIXtPey/OLKOnSW9LrZRZhRoxAEyF/xuEXSnje6onvZb0SE9Fg/1j2TtBRrX0c945fP0noayyxufDOV/gh1dbNmFeNo0pSg3S5ZQqecDo1n1bvM3uQ4aisRrNP8dTxLdJD35CMlRWdQFhJWvu4v7CT2XsI+HeISBPI0aA80zP1R3I+bL+6ssvZRfXrF+0ypfORuUH60Q4GfM/0d85z+KlSg9wedZjsX2erOEsyqdP0+DIJBGvImfNIxRJ8k5hwf5KDf1WmnzdSTlAplGcUz4lIBYzmDeo0vHLLeF7V60QlWU+CgHZaZqGxJZfBGQuxBMS9k2lVif00tdpMAwqtMTisSC7vOjYofmPvZyS1oJQGJtzvMsQviyVg0gfesFrpn5KOaBn5FBOQHZdGajkjhNnJXGMkU3wDFo5xyJ4ent2KpkyCgpe91V5BHYf2cNwXuDulsOkQh/32ZjllScvU+soia3MpcAMrvOw2wJyrD2NtVw791LJ1mzaVJiCxovspKb+QiaXfXe7tUf16CBxYhQOLdj6XgKSmOy6GmW+flmKjX4uzmTrIOO2E3jsTmzDD9KlNIIzejMCPtIqDD1rz+PECo62gwHlMRoKTeGZoUvLdGPaPl4qCB/Ua8AU2lcPUNsiZzNyhT3xHb72MStNNBO8CmTni0YOU6EnLmQcf0ev5AVMyJ3+0XestrW8pB6MGsapiy++546YIposo5uf0rPn+GYsaHFE68k+NjIR47LCbkdU+OsdrBYBGLdjxK0q1SOglG/OFJ9rsC+CsYDBYVLMejX7M06VMmxJMKyAfrSkqQeJ7sb69gQcpTODsB/ZZhOcz7xoymoC/JUF+YnIYXjhY50iSru8dfMahoBa2HOd0Mwjx2cTnSaztQeTt+xGHKysA1DimpuV8FOBRIku1/MxjPWEE3vZFpTqosDD0Js6kuzkEPoFQvVLDwcW8A5igIZKg50eYEVROvK5gIn2gl5eDqhgE0Au75Nh7mASROgrpW8n48DLptpak7XsSu8VYKZqZ0UInVT2YEXVaYB9CYZyiZ/QfzQN/ftaWibradUYsnCJK801ho+a2DZoPQwfXhx91FVdYSbucd1VaH0qsaQDcQDZ3SKyMvXmaXezYkXTOQvuhYKguXTjOVxtymiDqbcnywVKfuC6ATVMd5gH3FdjO1JTVxhzPZFfvenWbx47SiEQwlx+VSHjxTlSlUP5sC85hPEXu1tzXW9k7t8mTzeNcKK2q3ErHJNOW8IZ5RPQWiT4a666XbvcLXuvKomJ0Jjil9IEyzsF+yt4KCyiEbZh0A1OignWqlzcNWSTwJ5BUgJOuNECsSEfrkgMEEM0t4Jm1BFrA8pj0Xt9euHUWE9iMGVsyM5tk6FVUhNP+p6EDblFLwS8+gELR2AhxRcExkcw0dAWyU6TaCBhdyhJIrZdeL8Gd8F7A8TVSkowrUubZrnGoHAZAj7wj90euXPh9Ga7XDI2L/SMh4QL/Pe6nA8g9KMUDFgwsPk2M10/V+UuTIH0evfcpv78/23ZQMOH8vIjyE9oOg2n5MvuUSeHKOTyhueRPVsCs8a0/BiOH4hOF6NVtOVORbKQJv8hQI1SzxFZMT++2sKOwqaj3jrsC53JuQxQWXlT+ZEqNwrHsA5jz5Hm84Lt9C/qd2snvUepMK78pLE6/R9rIVxGi1QG/hw4TLBwXUxJYh1O8crEbw0gSbrM6y44/JqlyaLWD6IareBXfPxUVjjrKpvJNwYu3MIEgOCrbtAoBJzqvloNDXebtB8W4Wx9a8RP2U6c0TXOHKlFvrrJuYxp81eZq6VQDtpYhlZGcuOCKklLZCl8wuSKbhudtkgKRdJU6Q30p13p2yQg9
oJorJQpvpM/SlHmu3UqbfSjf4iiuVxZw1Almmi7obMrORF4OpBcScWY2Ub6vu3LWKK9hKWmpGQr3025ioO5HHT+MiPWvfSFB1ldiYaKrF1rbrNANFfNjfs0ezapkfMnmoQ41BjnU9MS+hUquj3yx3zaM3hSXx9YYFAHFSv6aA3NLBbNZy5//b4Axbl6jCx2vGWGs9xclWPJGhy/OydXEzViTgUYsaVFjoSw1nICrN/1bFKvZLH5OKdhmuMRBNOBUWdgCPXwbLrc7kyvElTPE9tLN47MxJmOjmx/SVd3fbtPICk/NFFiT9+O6bKVZFCrcKrR9pabBLGVZSXwRF6IkShD3BN4tOalrfkn4SFk7sKPhdz3p8YSiUMs0V1BsKkkcxxYaJtdfc6NknjmlKrHO63MeZc81ktVSVNt5U9Nw1qbSbUGNjaB/NonDj01aqIJnc3noRJn6DE5RZvKKzV5kZKlsJ6Ly6Ekvl4Xuqj1jC+JqVG1QyCtjOfan5i/lg7HKoUtmJcsFl/q8bq+qysl8IEqBgI9NcMqRzHXMC7KdxxMPzsOoXKbVcDLKIzfQXi00m+XmudwBNQxF+S9kLHWfph/SurLhQyzeicjTtPUi2ZepdpT6BDdbKZbcrs07itUeqpDkM1PF2kvGdxa47QFHMhReW/9s+xawTHCUzbfWM0ySVe6rNRQivc6XE+mX5L9hyYoqNYWSwlweIHZlJKK4TmC3sWlbfH8YofUsqclWd/4hmcTxHn6hPfGbcM1k6GKmqBpFG4lCqCqfkndrCWv48brpmx1sRJseDx0XdO3tQtS9CcRko8K/xNMjAse23rj7psSUc5bm8gLZt+kj9CfEf0xH02j6TFTrfUGJJXFE1JjNy3QqYmqcMFUZlPMu1znA+YiQligemw87PG4/okyeA4WegYYKx7yJCtRdbFmxe2gQ1rdGUMF2+lAlhEhrVVcFx25TjL4cjF4hhaKGTJzBSbTOph3TORDJzLQmW4VHh71Eod29mw8xf3Bk9FErv/Q9qYmpxRtzV1270tYKjrjbXjDZWw88ngm304bshNBRfXVcJTYpJhS+Lx+6/MfBBBZFDXoVH5bBFsg2HuCLePSsMbctFWln0onJ38pIfU2afnPHG6mbCxk7yvTCgLr8ByILkOraDw6uiOpUpAh94EPTmfyhf60OQyUCimuOs9dmYUXcHCorPfVLGJuwbsnyQT/cIMjKEjZhPNwc/YTypSHnO6q8xGpYSjSFD3V0gilgjvgqXi7LiIyPKAA/OocGJwpDQsUdvJmSeRSx8tVi2I5yFreooqrD/7I2Vj/A3XgYY2ccpMHzjSIoy7kr2ltluy6UqEJRex6EohYrKLOM3fJBDCBwfZiTb0KHXRZUHsy0ONiNBzGON3eWbRbb0k0i+PAMU8XpoMREKhSWoTzrxinM6wGDXsgUFyt20qg70Twwj2dzl7d+kOI1Nk1oCVX9oQ82Z9XpsFlLu9MNl/uoPyuZ7X6+/cq5IUqp08elEDm8aEVBgzOFrLdAmEUn8Ttvck1DmSdDS5o+ScXjCdtVZuSxOeBq3DgI/I9UVEw5D+6LSrkKzVTJ5bYOTyrDy0OFqF1KmsA9a8PoVf241qrPqNX+KFbZF+GX4blXU5cMHR384YjdKyi8uQ1RvKk8n+LPz0V2YZH0tKDZgfxKiprp0KKGI+MofMVSvm0L0T470KFs0myRcMTU04PkFJ/wllUKlyeesYnhlIVkHPzHTRo4xdM/ucG2aurEFb1GfGHqoI6U2qoAmlRkblWO7sQjyaL5MLJMgiSCC6Az5DcPwNHm3uhPh/5QrN+rNfiaAi2WixtY7M85nsB4P/vx6Rhni/n4Pdnwx6Fb+QPPXDy9J4Bp7HeWpe5bOs0qOZSrElN8xKReEGQU6EnUnOe+EDUFqhgUbVKBWg0qAwnMgSuhnFJSwWO2BEAt20IuFegQVxfgazSfWDaX5rCoWheXDOPhxk1a5nAo021C3GX4L8RRzZem
D+uKJEqQ10EgjqNIoeNxoP30SZpmbObd+hmlepNs03wWBxB6kJjDWJrbxaqkKLEL3C+rUunUcC+6u7USYnajtit7Ye6GsfgqmS2bapI0VM3sgz7BO3t8CYoGX/UIB/1jyhzCLterwULXWgaJopJQNHkdcW1RuqsYZIpZwcbUwlTBMe6G8iuXotOpXVxLvyXqAeIFt78phaWkEaDBoKFPYdblD1PnkhLRg416mKWUSUk3iS9bR3tZjZ5TVtPSgTAprUhDRCmkDatxb1Rt1LBytW9OSlDh3qPvCOM33yrK+LQx3tT3eEiUQdNXewYVPyNjY3LBub0Jt7jl4SzCSrmV3cku85EcwN1MrhZKfwtU14Hah1jKhnSH+GiOyElxDN7Qdw89rrIAGQdqRVQnKjwmOyRtYUkNmpfhvn5SMgTkI8xi5gXMneh01x/jR83eXakTTUHOGuJL5ERrryZ5QpRxsC5RibUUOBuqpOZJcN27mGMPM1PzcJK5x3CPWjlRbIsLoH9IcxFTJsQH0NmGPdkxQ6vDKRQgQcd8Lq57+H+j0i+Uix5Ww4XEIEhNqkLzuRM/+ilX/k9GzJRxi4W5WSgK6pWuwMiYQBctytSJ5MCTzlM3KnHHtLVQ2rnLzgqrPq3znbbapGImSa2EmIoYWxqagp1q9WEzMvc+yhDkkCZV5kuKVQMLt0+GcjKnODrzLIULeTEZdQQUdXeCKxscBNBT2jJ4URBNyvPjCPIXp2BSXtm+LTPcAsyk75CBXu9x6PhbHZUjc6WqurpOugMnKO3wQ12e9YkU5sMtRcB6IItrov3DeGyGHIP7ZckLRY2K+hIKU/lsnuXWhyaoogGy9FUrSGFlcwyT9KnfVtJOi+A8Jv8tC6sek9H25NVmgmLmG78w130qzFdFpWy2v3w/LZZna+K+auSFRd9RRLJVWSrSUb1eVM7Ln00Y5aLfp3vimE5rDqSyYzqDGUemCNUg0q0yuUMvenHcb5uW+Iq96sRps3w35T5jH4f4RuMXBvkH8rVb+luTjw9ZnIVeDZ7k3cLN6cgWtNrBJsylGBOlx5Lv9FhYOFn6P9v7sm1JjWPtp/GlvZiKgstknueh4I55LuYC6ul/KHVLaqlly8fWsf91tLtX94YqEoj8IvKLyMgMdgJN30hrd9Bx14j0DY44hF5HULCP8jaC1aL4pIFDtjA/I4yg2BM7meY0M1fABr8M6Ua+QohBCsDfH9z95ozLJhYyyQtjh3FsUolia9MrEBvphqQ44KkqOo6Skc+hMobsUJBEu/Qb0LIZduvZek62t3StLSqinspZWoijvtb7k7gDFWO8okJO82zhhCUbFcVhxbsRKCYXSYIZyfduAoYNSKYEG1XhsEmxlNExUSjOXTrRmr5JOnDkMhWLpLLwSpWojK2SflFX01sHgRZHcxZNmTjpCGBZ4IlTzJrGA5JOTIN0n4kk8UTZUmfRyHeNnRXaB+UWX17WCfX9ddDKNTfe8jbZdniXT4RJgcbhRmCXuMC6NFvQsg6jJDejw2Kakk9tbVyvlaVQHa2Ec9UbBcQexkkeOuxa6oH0jW+5ZnF69emVPuuwp/NQeTB3xb2lGYfunJNOT/dEHUWJJiSC8uEoxNpUwasYBpGAG45ajhvb1BzoV5Y3K7ozbS5DBJtiEkqG2J2WPiu+dXopKVDEQroHz3gsSh731dlaxMEQR9oU5ROPSQDkkJSlwutts4VkcFLZa4dAeT2KeepXYTYfZMqGS4s0xoQX8s5lk5QjnxiqKnszpFsafWxQytknCMRRRbJtp1QSrENXUd6Hq51NrkLDnKgazaxPQCVaM6Kx0uBJq7IqmBNV5jK879pzDbanURX35PDmt8qDzII3jWUFFt6id8QEZcqS+jFJyxSy1Es6mUOxcqK32xLt1lEf+IxiBLrdJtthCtrmF5IR8MUhlU/pcAFXiW2155GvFA32Euem16WJiozodEVvp2uyDE/mddl12jqflD3NAfNI5NSLeKY8
+qSCHpnnnz1aIxwJbCprxYIn+HjCpL7ZmtMvUq4AM9WrskK5beFHWzEJSN9u4vzoVB9g/JU8RnXIzPXABuSVk+OvyvzO1cvPLmCvMCeWhzNvxVjX3IEt7azgFLYGPUAwiTcqj5cFKkygOFgW8a9E5vEIhEh8I91GKpnmWLcF92QIoOx6SJ1COCeP1jYQAHIXJjskpnYnTcXSRpY9qGA/dcU5KUMaApkuAMU0KAGFLXiG1ASkU7wRkynPxjTpduzqueH6EFCSVO9usdRckZ4ObcEukesoa9wi1Y5YS7c+6024yUVEXRQI45+aL3eFgSd5VlTPYF6UAS0eLCNXBeuzDTtYRcGIZqKxrcY3KLOwrsQQEutyKwIOEEhx0JBoqjHiqRCnw46BeRhJqWILyuKfY1wvpxU41fougKhE09VNeqq9BbZpn4YwrFeZxveiW9UG0SEBcKOqNjrGxG2bR1VKo1u0iHRODW3FVmGxg0VkjjbzuDg6aYeM+FaqdDUDHswV4sWQuUYXzJQWSj0d1QFSQjcE4TUT2hUVI7H2lhyMNqGK4oWlpNdnI9eMWD9XRDGbUMkRUkeGWqPfrc3iYblfTpsY+PDO+yebDwD14G0az1mpKE1nl1YJemNkqQSisJ1OmJtArMI21ScX93WtrmoDJKiWIglfp1eZj74TGqNuZSLmWUGzDGZR2fSDvLtSaFMhWznP0yN2mLMHw/0m4G5gjg+fABae6L0iiXiDCUc0FyffU+1ZsWUHiC4f7Gp2i09/1t6KPvRRmUdPOuuBzVdt2kn8HYNK1rbYNcQsHWgvz9QgAE7bAShFRBI40572Lp3O92VVKS5i7vW9OS0NxJjaFa4eeiiq2i0XYhxmOxo+KarmM2onhyXGj1t1DijnuFNaW9BGYueJsyTOXNbRrnoX65Pve70T0NW8KbGoHKXIu2+QAr5kgeOWUsGqV+yIkdZwFin5rZ0gOrVA0jTZTAKB4qhtiMOolmhgPtgDsbrHtptSQN3Mk+29qVkE8pP3ZDepi53Je5bf57pNTOslMiR7BSSXplmoQg8Kn3vQ3dSLy8bdqxe1trTCKj0Hrqj6m4C2ujLfXVzdwPMOuYhC649eoZgFzqqSvoz/1Z83cnxts2SuqcWQd48tYpYOSQrN3mQ8lwmLsRVyLSx/JrsFVBjty6LjLFXs+CSt77NlJq1UbsAWQO3xANUgm78yg0i9A94m5BEmRvBNrkzQU8+F2oXs2VVvWShWhW70w3lFlXKOc0lmqXk3UeKsmH5nESnoxNYr+Ct1ka7v7gEKgVk6lbfFyg4WR9CrUD0tlYbzLqj4YjIzN+ZLkYaBlB3PcMXNkFJRNjJ9EChF7agPhlLF6hCDeaX6gHnvSQ1nDy2ma0S8fJm1NPnTgvTnPwLJWD1M0PX7nQ2SAJNiD85RAxKpE9P9JpYwpT/sijLNp0KvHiNwj2PjWTlshM3F55K2TeBQgXfzZIopWd4rDlScma5ggXCEAEVpy3RTeg4YgfadJKh6nRvY0+E97afP00t1+Z0FJdHW1itHvzVi6GB0GxkqOz9PJfEiXUZQeYlOX5ulBPGkgzfFhDiW0IVTWoWCBNNFtmJYW+1RMgzAZ0M8MNwVy9WtGyHR0xax8eMIMF6F9kCmmzA4asP0QnEEosW70emxlBnhoDdlrk+7/PLro3tKszXUqUNobL+f+MZchIPJzwwRu59kELcXFoJ4XQr8Y94/gYuZPUyTIyEhirPZtZvGbREmy5wJGlkmYXnCts3X5tyVTr2dnA/ws9wpotIUVZiNVbe94zjZCuTu8lcfMaxKkC+Oa22tn/V4bpQJGujTLAApLV6yNU5TRWYBL57M0bNFiF4KtqT17c5CtMQVLPt6F+7loJelG8jESfTORjUX5C0zk5BHl6Z22YgrnSfSKLMoCt7bolBimc2vIySG7nWWUJK47ESsNg96rYVL1akVw8O5LM1UFf3KLEcpKmUKO+3T9r74V7XRTVHolBg18ovd
8N0qX0VpQfS8ki5kzIyZqZtURYvEUmZoLpWGcu57LuqV3uxCH62KBVyxs9w1eo/pyfciiTp0geBwiWPH+81UfYkYiKYFTk917tzCwgxsmDK5w6frMoHpUSGhGh9rQFvK1JcoeYdP9mH1swywaAtUtOpMN2z0hM3Np9Oz7ibWgZTXR78WdM8l1BgTaYQGPfC5RLawBzK0jRykCho8lngAYWvr3CSQpQMKLTezwBA6mDZPVmLTN9C8Xd1TCvJK7cxkkYdr2lHY55ZN0+nCb6dDzpTMwdyGUvOBNpf9UThr7g+qAZclBdEXXdxN05GkMMWTBKoaGjDt0RQra2cgUivmObDU6fO5RcQpVbL42rpqCGp5QBp0ZX0k5427Zrrmw99nL/euzF3jQNlX14x3Z9mhk1t9w4XNe+8I1z9C04H391O8vy2pfZ6DxB1T8cQ9B95ZccFTtFpTD3f3VMgosU3mcpxx78FWE9ts1NSdUkAICDDFc7arlYMdk4c2bTQnLEMoVzRZqYFA3Dsv17EooRHvXdI8NboT+V0c0s0N5bfFcAo+cuL2XLQaVgtQsmEyF0vr1FVd0w8orLgiEz31sKuAKzPxcrBJ3ypW/i1bMOrbFqqgDt/lWWU7NA8J6mXG6eL0l+R3RtOC4MBmVVnscCPJfkdtdnN1XGjV/aaBRuL2t6q1NZu1Fgtp22cKJZzEpniLQnFkeLCBmuKXw06ny8FeIBMzKX0vd8u65gs5rD7Fjnlqg5oVP8kHxo5wxFCh+2Lf17zGCTgkb9BMU1LdLV+jpF9hAx3eLW50xSJect3I6jbyhRd57f4sCoTpUp+uu3owh8T+Wl1TYuCUlCBoar/vieuw0jX8cA2NGXoUtwdoXrRxHIsphSJGn7QCU3nFjChqK6hyfr1Y/hYFVQVcHjlcDls6nvPIg3ZY/niITRZkuESrQPa2zGzxgOguvDAKbsWUpWPQullW+yDkQclbvGl6yMfasMN7TFqZ0xEaMddPGClCxE98Xn9AItuovIcnReKKaARY6JpjI+5a+AhurycB3SvndV+Il89aIha5vXgj5ZWRQIU7YhlVOLS+HUp0GKa5EzBiapO607ieZCS19ebK976CvcmxCI+nyIN73HWHGZC6goJh9ETK2eadj4ylmP1yeqrplWho2jV+pRpo5sJgLM44lPJCrQUgzLWfb1Nw4JS6XKRXsoR8zba6VSUbBDCbOyf4lxYZUPWkD9J4fXLHfGs2Zior3O6uSkQuCpOpJJQCeP34JJdQUBlK12U3AiWwKWAxxEj7e+P3y5xYQdpsFmptVBxoeAdVlWsGiVkfVEjLLaA3GdFuBn3rqr1tntw213hNesD/pGBeq62uHLJrqaxRb1Or9s7MEq2fwayiBJ2ORFL75k7jcfaa4VAeGrxJcZIY0Fxpb7Qey1UyE3MV5SD1jWTs2DlU7p54E6j3OBkSteOamyYUTOKngUIfV6oMbVeuX0Gna8G3J1+98rc8uTYhzTE1Njp5hAVE2AfZNaS3Dyu9VEPsS7K65tRx5l1fzPpmMcXrKQ8axl9hNs8moOp2+W0n6A102tcL6RwOE4WB5YO9dZiBTRaugaX6pLWWA13En4US4yWImSHh2XiP5TU2X97A4m/pZ3HW2W+A81uTLgzienqstYd8O8/yOUZrrHytCpfddXQ8qXe70sY5aWxhG2oVGwI9Y/YGZrnXRgzXQ8PntVd2VP4Z6n84uOXb6Mlnq+7P//vEA6+A11U1wfrsncWAHzKg6B93GbFY8GU1n/nJgIoF6topBcqZkriWjEYAFNeVlLFfMgOCt4PguueTIaPblROwfHzhXXghVo5lPJ416peEDSQnh67kyTnsrlc8XQ4dl3O4A6+kHI11MpqBfBHAJwjjs99UoDIlfjFkVA241wtc+FApmbqr1zzNGwmb0WTuL0bv6u1KSscu4Stdf1nYK6dwfvGR/JmAPpF8Dla4md9PylWXSPEmT1XwBORal/lqSUY8jJ7u
U7AH+tg+rYStrPslXi/J2MUAwt0OaGnnzNF9LNTb53EP3c3nK6RgQucrhmxGhcMtdzrdUvFIasDjxgxhzCUK800yrJkZV4pPmu7zyXPy0dwT+n3kd1VpyO6KhJLhYr2sfIw+UhVvveUWIPZSOkxb8hIG2hgVRr260rnAiRbXCm2//pReQOXcLDzrngVFFGWX4RVDKm8y5o5RngKpxIuBkMu6nNLYzYf8jFT2sxUMij+9fXGN44E/U5Anx+vqSZBfAtQ+CYMMGWtN/Ezs6lpwcz3ofBLuz0IO40GlE2ZjgvgWE5u5S8W9pN38qvHA1cquXNOf5O0CvMprvtm+eqYnpfa6A0PMcjZfGWmhPAqrP6KR/qr5ANy6etWATLLXfPsmxDfVpK6clfGFMyRFanLUYqwbucqi8DUUHNzMX+/RKAySJQkaB1eK5UeCRhCsb/19kgVjQuxuFp2yHlvxNEgXfZNRxn8opNLmD2h9KO2BVaL0bghkUAgQqt4tv7/uOghczNOlOUxYQOk8RmZCKWkWvzQFspHsMWvXchx0Xbz3Q7Ynpa+X1sJTQ03yvMzr3c3fulHCbU9nV+dfg1ZOrPSEPvoM+tS+2Cj8pd10MSZ6aFNJB702u6JKqiRmVZhIHuGpy17bnUmPaLw80zisUyL/bLhOSrWf58/Luhgtgbuice0gTg0bubLSOKylK8vXdOJp3drjJkDvKrHuu0RS/txpUIyjyEzV4ZtizUd3Z8lKb9Lq+n66LFMW+mvyZozTTyFzqVHS18PBkmA/oC0sm+1kOsUW0tjpzFnX1FtPjHheo0V9mVNP+AEoSVbEpkMjgt1fHo87NMRnN4qL7fe6Kd0VDrl71xYA8NE9WF4XGFI7aipLrLqIcetwRMbFnnS3I2tJ3y9M8v0tJ0DQSy6Qp5rzFghQMJxPYfaiqdjksJgohKwONqmVW70oPiAinR3Vi+mTfRRtF0u6EuSoS4ueelFfMIWIhMJ23yMTnQoMRmq23FX86ZknTqWefwXjfKQXlzzI26uaI8RYynTLWwOfmktfqSIfLlL7ELxyQC8ZGFckWU2SAHWcj833m+s0Ai4KTPNdX0mZmYnkS2Y5DAsSVs4v1M6U5xZGfCOjxyXBWGXxlqyDMkFuwOJYnVPs+VA0SUnpmwvPbrTiATs1ZHmIEO96ctd1ZjaUCCK8kZL2YxFfjn1AxPhROYt2hRqT+GasbPiOtD3PP6mTQEqyKF/1s4+vbew2ersGEuuTBTUkOIrBlUFdu7Fx+FZvmMBQzZE73hXCHplym+BP+EJM8EuQBvXpe+6D9/Y0cOg1u1BrvYFcQUE5hN93pCZn+Ja3lw3Rud67AzvpeMARI+WAE0ybhmmC6ObCM1GYyBEa501bWQ2JzBzWOVO5FTDEiBtEquc1Rw0pulrc9dBV7oU2HlIScW8R+V148dp4LxBPw+zdXqVy5Je0qGyA8LKuMQFbc1pzeGno0/xJv42+nxC2tQhQPA88Nf1F1SFh2CIrnBs68AccfosBBVA53KGmoXRA4bgowFFZbterZVFxD0bXp56hfG2BD8mc6hfeIJfzNUsFQn7PkP3kzo5LI0cYPl5zpqTXxjMUo9wD4jExfIBm1qU83LaWGXTyAaeEH3fsmptpMoU0CEf+bKG2f0byCROgobldiHpcZ4ipyo2jE64uKC8fL7i4BRTcn7145eEx0stJRz6HeRLV9btwGrCTtUTydOc/O2UBVKzm0SnpYQIkZtHNyZGfIjbchvTdGSFw3dmXcQLd0FM95qe2PouE+yTAqoFZgVtQFFJvv807zpufYi5aIUK90RZOCNmcXuK2ApsR3pYCmrJaD7qZKG8FhoSQSNMaiJkQ3kU4F7mGhNTQc5HTCdhoMuZWGLyaSy4Hvfe7fRcl6lFLCWPv5liJrrHBCwLHc3V3TqfQ8jmUEgGP0t6sxacJlzAGxnR1lv1+KxVAVyVfP81W9ERatvQnHBYia3jLta/UGyJ3o1WZA+7DIjKX
9vEe4TFrrIdt2ZqTaSfSlAcSuu3QmHIP7708Q1bt9rd96CPPJkWzMTVhQAKzTWSFEDz6LYeDpCjz4RoyZe0uy2wmY09BvbK5ahnD+/RfkYi927givVFp1suA6xO98+Jq9hRr1SJtOdjq8irHZ889a7Re7I40vfXUC6JsuoavrU5pMX9YIDVKNMQ97rC9kUWQThbS3qSO125ImPUu4uldf60yNPheBmHBwYxnOJOLx77lt1G2pTkw40lsl+RdBjRhkcKyR653R/jOTu+s2EHHhBdKLr8Wf7DqWd68mEXLO0+fDBaE5X5QFR3zZaDwBJSUGUDku0iU6yyTQXn6sCMpQoFI4TZJnUBmm6jFxdyudHl88SsrL7Y0U9OtCAq16CJTLxY3QNRA47zHeJ9rLbjZhdcugdihaMRoOV0ZnhPefIhvjoTji9OLkWDF65hEZAbgla41x20ve5zo1LFdPVuhnpYfLJwqwy1+1Q2pFAin45Tegl2C4RsD7v1rTavVhHV0Fu4QV8vGS+XhKT8/gCU62Z6AX0gPX1DcGn04sa5EsKVVQmqcmXEq9isxw0lOaOoiG2WLDXSCteArRndnCHqotQe/+ZwqN5yMk7uUNS4K8MnqIxKqV+BH/FRdyzr61omammVWpVRd+bHQ2IJFIh8d7n1JdH5QNxPmnm7MBI5JmuGcUfTavFo4i5KSSlyPLwElbidzpWYbtv1RWseZB5KSFfBpNMvrFs+zj9mLg40idLK6fuLpjEuEvASvcNldNVrHUS5EGoPfJrq8mWiJ5O2UfgtOs6oCrHi7uCWaw8lyB4YTqfYouGfkCllDwPw66acyWIelBTlvOUwbXdl+zkydzj6oQAhK+04JkGBu/MbJ3faG3OmNP0Vdgn0I0Cc9XIwI8W6cz/r8OZjhej3joyg+xulaG2vyPdikYc9NjkG0GjxLT2iSFIqQdrl2OKY2UATzfcJPzI/3181Otnd6u6sdO1htKYqrCzOCvvAcs5GioJgyKyVN93olUs3dni8lTg5IG9JyuTVSWdp0yyL6npbFApnTNPDjc9n1eyeOa2RfFhkxYjwP1xsRtUhIL7EZCUUhv0DwKsvYyFKPBWhv5bTCr53ZiMQsxXWxPe2IKC+vbLW7+jbL9+IgGKuSsEyzZ+tZKN3pFfqsVUYTGzLPqLR1OB1PJ9iFYNduZG9dwPCoejEK7MZfeQ/0QsdRDWTFr5XWwqXuHQvBalYl3p1HR0brwAfg3xbA1LNl05aLFSgaP5Q0dF97LlIRfgl3fnBlnGvu8eN8d4OCPDAGoyGZ7IPNbKUOXyKHsSBZuRjVZWgqLHrksbilKpjlWFQnlkdP7YlfZa8qvTW9hYzzSpkVz7Wy1ZyuKJlixuaIJyeS16oAr4piP7WcqNZKb1jPb2I9uPUxFPzZUpO45VKUkKptsSnswHv7kvjKArqS9W7qSRi8J6ikMuVl1j4HJVkVqq4ytrEHQ9Otpw2Qgapr0EWWOYG/wbdVuvY1HMR5P4isnEbD3L2h6sjmtjGZJ/EVsWhI4jWP4DUHjDNiCA1NtPSwzHBsNW2Z4pqQSE1ALdNoF2lfGQiyrzUYYS/jXTmkhUbYg4R57EpKbOGlBhSBWi6UpfK2vq/XoxGcaJZjs75cVolv2c+ysGpv0ngwrJTkjtMHHnqSurey26XpMB+4FjZyibYrTY2nmG0PsHd9Vw5zMMXd5Mn7x/GGaBilpAKCHdQUaseiomEPsA7CRTaOXinwUO/1qT9He8Eteo4VR6eXT8L1FAXU+SbpiUjZJHZTcuxKeKMfYyqPGcy7z8hrCMzUqH00NzdyzFsgxaiVqJIqliFJp8Ew0i+T3lWNAaWFatS6WAOy9ZocYCQTQ1jwvPGKFQp68qpNs2h2KkC7VyM9opeUjkvPuhN3CvplBfqSWmVVs3YDnm5XGa1XDEuzWcu6UKN/v2/xwTwOuuIxlAn9ZgMY0gyisTI9HXumw4UlQsZGCVJQq0j05kap9Sgqb6c6
tMU4YEyzOzTWLEIB8G0TTJ6lzLFVThHWSIYBbSLlRRYE0XswmxEcyLHGLaQwhoHWKyxpnQPUcJMKUfLJbUUsosF0qR74oMSncDax5dp+4KZp0TENIhpEVNY8QDMyPaiUGb2BIGqMMJp6DRI1kZXj9D29+H2k+vSlHiKTmtnpG6TyLa1Y+/48rfdDSJmZOseS3WSyqO8XNhAda/dYvcEnry6Dkt3c3cx2y2yBJWLBgMmDVPT3Ka1z6i6DLAHDMg/78BBWPrp4IQTsRntmZKdYgghVPQw0WihQUCFrN854WUneTYjePZH6JCM9OcjYWA+vSOuZZ34trwSBBW4fSc+kY23AUb40lDfDZLGiqotuxCCltEV4GHK5XDs+QsY0fa2MLDl4B9+oeRLNArL0PTJdTHHvO8ZuPvACv0LN7ZEFzCoahEokBAkXZoH6LFX004OrIYCcz4kAFNOWE2l3R7OkNMxLixcLOmahXMbGY0EunhPrBGb4BtVLPowouYHbuGU0onipKRhP2ra46taboOKbmumNuXPYDXZDdi6Ul1/aMHG3itJ+eZpnXl4ZASwUH1x7oOUssXXIEVp5l+eCiXClWhlMLU15IQMfuaua6aNhDfE3atm2lakKRJvheJwWR1ltJyJr0FqnCRFgc3nQPaR6c9WInv/eni1kwTfpEU7Ondt8RZTdIJa95SRdiepKgTaYIwTWclif28GJGtXHJtNzUmrGHvOwG5NG0WWnuepy8kq0cPwnYPmqJyS7sQY69EpTJzzc8kdkAtiRKWYqlL0FG/Y2LIvJLp2tfzzkCX2wb3mzELZ+aRbO4pG7SHVLL3s+ErSMgj1rq2shRS+J0W7jgeaKAKUFY7+5M4deK344y7XNIXGUTeyA6UMFBVhnbTgzL1seQgaWYebXzYzUOHMBdOtLV18y5NGVBQ5YiyM0KOo9dQ7f7bJylGLHXG7sPko7IjVA3RFYFfx2MYI4bXzdUKcOG0NHDfxGRtcEHrfY9kmnl8BzdZZwe5PlFRd6PhrMZmyiVGC3JFp56TdOna2NxKG3g1TWyPm+Zl1JnyLkOpAquOYycr32mrerZu0IS8pzgEfEXdJGf9MwvN6t1FBVXWvxy1/uxYcRFBWvl4haUuZCpSNt9lMW1IjaG7JoX9RrsOeJxDwhlPq4zTT5mCrWmfRl7Bih9VId0pTorom52snWMK2D+5K0J4VO+Wxno4P1TJqdyJJYuhsOL+QoqLRVW2DMxdY9q6SV0DCdK+5pFKICZug13CDCsX3FMgk+Gh+FCfW4mj0olZ0hjoXhOb7fU5jFnmtKk3NWDuKY1hXLra3lPy09hvDVku5jBdXOKsF9Ihc0zHnnXX0niKvmQRmmDeSqlRsWxmwMNI3ZRygEvHobhpZs+2NXm9rOGbFiP8FpPtiJewYIjT9sqgSlX7vQUD8kUwxSxqveAc7BXNmynNm2DSD4sThWjPuUS9zuJKe01zBnJqpBtNt4Mq7aHQalMHFF65/+810wsQYF7SMNbTuMrLGz5qN7Pm9aaW5vmMc2fLrmr6tTCTURLsaRfAWh1GQkrQ+WdfmsAXhGMs1J99mRWxuhYHnteplt0LjaLfI0VY6vpCaO6UV64336tJ3iHL8rXeEQkgpVXIgQHx1vnPRarhHUHowHzWci3lMQrXki028Lb4jtWOZDVwyzyswryQx3rF+h1IlMzXkhchThbGwZ7buk5YhfiMeE2tpBL2QUUZQpt9Sr9o7MtUKUc85Rl1k76TE7vtsUGIq59StTLgJ8gqrgkf0ut3kJeQsYK2SaF4VlDV9Q+Hilo4Et1qMWj6dvGr5dxY7rwBTkQA/ahM8xUa7fD73wFgnkDVogLUz7MalQLt+3qw5jyx7SVssPA01HNDiG8mWusvjc+povRYzxXNNqnFWuREs/9QiBGrJKlUcATrdtTMxbUg/7Vl0LxDnf4PbT37Jk3KPvMndyNVSUVvHWB6K8LwE17NrDgwUKsWXpNbxoEgSJilTM
i+LG2mdOoojcXzxGuqzM03c7c99vUHGV9kz5yPJMGX9rjvZot81q9dSVGhYSm0L0bGi4KTerPs3jhDFJm5W0411bzTxF/kFtB3bFp4pV6faYQGSHGMq0AIqrRYKpfcJbio+ovKx59MwYGagrQG7maRlMi65ZU5QeNPUYBljbbTAVWKT1QnoXEd/1MU9WI3WOWdZFoBfNEHehJycXMTPpmefjc5hYXCRLzxoDhjMkn7ZcanNVTSq5t9HRlQ7JIm/ip0Myxo5sGd0hKJWiiApNx3Ldy0yslLcFj08zdtqqPtoawTRm+Ip3ipVtponCcidpgxZo0su1PBbsSYg8dDqBOS1ck6wjexQe83qFEy0TrvF+piKkdyxFjNB0ZbICfcBecTY+ubGsriXZ0bX7AnUEz5eWsW8NogGrpF1RkaVmJTmTSLBj6vTgcpwLPAEawBNV7XAveiV68rdhFuRliBepckttiC2BqETSlWLx2HCWsjoASdr75QGJCb2ip3SKZB5zdDogQ8Rr1RZtFzEEmvzEM6rkr1AzD+F6Qxd6b1O7Zq6IuvoWv508q8NqPt58RjGh0V80lzaLmh5YS7vXtia8zIo9xi17Dbiwa53/tLV2NcvZd/XTGwxmn8dCiUpQXJ+UKiB6U3QNuuzPISsrEEbszQc2q9yMJJ0cOuCQHGBmxYLcFr6nT51q3OZ1iM4Vx9ZhInKuNfC9u+lMe7zgxI+g4JrpmG5VRz02Z7y7+hwORyzaQt3MLRnSnFnKwAqvbTmpED7k3vcl22Ir/YiercjZHeoRof4uh0dj2ED8VCnj2+EcASlSgouX7WqxWDz4LnAY7xaoU5wC3BeLMEsiLFJhvm2R7aoRrMhWZFvPOgqcIFQB58uCE5c4nMZWgTUNE4u8V4rVPcRyhpUg2Xqh0i4VbtXTssftomjSuBUNTSOeUPDYQ2IbnrZ9zILX8x7h0BPlY9Gut70VO6BbWHCer4QpOumV8jAXpsitEGG6kfDXupJBKt52XsNYFx6Y/TBL2+uOEYgUCkvY3jPUA4kNJiUVp+jlcwAaE2nKmGy5lUopmfe8eRasT9J0YNIZNxgMRwVH8fZ2Xs6lR5JDXJ/Or1oyNd5kKklu73aNd0wELT4/vUr6EXgQW1UH29xMbaji7hwPA1OHl2XFzZNQuFHsbJdXZSpHD0EmpxMRzF71OurF3Gh+oV5PrXAFTCtyCrGgwYTp7R34xnHwUMW+3vPgKxGv94Gz+idpYMpbvGCmiM1Ecce6OrVJ3DC5HkJfKbSBt0Mieklbg01+JtsfPCKUEmHSD4Q3gWUtOGtTc8/cawtR/DJvcyJzqi1RJ9LesdKWAD69J0Qhn0Xhwdw83begRBjTd6luKmxTMdFJWU56v1yv0GX1Fq+jOImRcZX4fM6ZdhJA4Dumac0R5e5cNhSpWjSIvHFXjHjH5IgaDgksTpFQhSja+L0ICFkuEty6N97JIFXX6s19uSJnKLvA9FIYoIwKHxEiuuUMJyM3rxCNcxAuzVNcYDlYrTvphgMqVmu3k+k5eVCZuTi6diAXh6i0YaTfg8PnkaJvzgHS712WLc6Pd55binOMQk6yzdxYuQGpfFiTq9BsImc2PHYvTuvg8v4gGs96AZ+L8LnhWWnVMVfRog7wtsmmk9bXJRVeVi4PTQqA3pwdpl3Qx6Hc0vcGBINxIkvnC+uabT6/jPaeQtvoamgbSk3qCPfPRC7pwqdxJKEHf9bcDPZMZVQ4TufmSkCYQHRpGlALzEtRs/ENp9eINAvajaFSILnKmuWJM86sJVw6ahvdwxzWRnVoNW0ts1K7gwFh427lrHhm9Dgd5AOei2uFPSTSLDNw59h4c2zKrw3HFG/2Aqa1pwKykJtY4mF+KOqAZ90rOcTVCUtUcAryJuBd2wgy1vsdFTNcg3CEK/mC+4FTgX65R5Wn9Bbr7Wwl4lWgswgk6wANPe3+KMT75N549D36zHZrRnMEC5m2AeZXhLfPYUQ+tme/MK9H
Ytn30X7KE7iDhxv1NuG7YUqnFSNSYtDves4bWs6mJif5pzGwn3eNbmooVMUEqk/3uYrH0BzwPXwCHWmz2+lGMFITK5pkpLbMwy+5kctSx9ib71gcB0SFAotQcvAbE2MowhGHsihzjKbqrhU8nbDt5su8BNNslWGkZ2C9K7w/uyR27dtTm4XWqxoLTduD+4fQz72+4vJKCmWhndzb58n5WXjXFlIPpTc3MxnF+xBD9cprTXMXX591WI0F3RfwMF8PMdWYWy6uewbSh4tPTrZtuNTsI17wZgTitJhuemKPVxVSLvM2x3vkFC+Xn91JimshW1SA3lC46XTGZ3m/Wx2sjfV0H59TRPPkmMG3jeWo+ztvZLN1ibsa3HdJ5tKl91k4senJrMObu28YGoOHYNKNEVC2ZZIx8NCQGvrFO90NBkkna8sb/j26Y/I47atI20+zwSm/4CfnCEwJmvr9dO+eJ+sSOaY2THqT7AjvAy9rebvdmPcrfkXLUQ034UW18JX7BHIz+xRCKwNdgBJewM0n/U7Ye4sOQXS9MmigtStufmlaPUBGa9zI15P+zJ9Tt4FtlGQRej3QIaSKPSv2IKiH9IwreMakZRVajmnUuV7YFYYrt4w/IOfACYnz+nQ/DXTwTEL4Jhamfy83R7UEm4bFiYpaxZOM+fmi+V5tUbMvGejF355UiyoJFYHJTxxfpMp+Qg5tiKYxTTuKc8jSRSZZn2BxZdxO2UPqFq01vBcBmhs3z72GPRNWlKZm1mJ8BpYZj7iNb3Tulo/YCxlR6u/JJOPSvfJ5mBJNid1hrVlAwb9DIHf45JmUXrGYMkXX/uOvY2G9VN2oQFiXUp4K6nDoG7CEhqHV3DMXRD87p41FDWM+OenUYG52ZXiV8ZQpinoFzQ3qqU/ujEkajjXbmgZJhmhjNs81EYt4NQTlKQQXqc6N0NMZDVOl981fI5qy9NN2Sdvs8rD7bAInPSkUVL0jtY/6yGjoRW9EGKFDty57kDSGDtbmaF2PQ8MWmGb6eNU00sBzv71NvdDyedcJmcboEQBpfLJ5sNsljw9rKxHPt4vSu3klfFwBiKtuI1eg9Y7N/gkbAeaL2wK3PPrDIkdnSGSUX3pR2zA6Qt6K4UqQ4yF8oaeNX5VrRyUNK/ROuscIoGp/FzHhJPVlVWzCQYV9M9ItwOFdgtBJ6N98lIKCHgNu82DXxrpRFqf47h4CBLLTecRViKzBlVuCJdrjuDVIlTXRukK3slcr1Y7QucHJAUJ8p7oD77g5TKOZbgFg/enSZ09uw6r3BTOoLD12LKNWGw4GuqX5DijgnTLr7HXbkzOaNOmrk0t5RQmiDh0RQX7FyFhj1Ky1zvCeCtZYIYvxAPFw4lmayRK4kerliO47kFszbuMm2fSq3rmFcigcWwObLu28WRpjTkiQLsjw7Gt7yuKTMQSQmtkE2MdcOMZ2tsRYQDYNBHjhkjTFm7ShlLADCnHpGMdD56ECYnAeS2G33SU64AvuMk8j7bZjhj6QoCkCs9Vrcbk0n+maCNU9GT/vAPfd9uBSnh2sKE7atOJim6YkKxq7IY4i/aIfs0qPBxCStSLJ+aoLJky4mg/Cp2bUBqgHrF8TnEXo32K2b3DdbFwmGWXolim1WSRAZuasOjYRcEKp2ZCnpNyzlpEO+Vz/BLQQ7fjIClR/9pqMV9XguEtNm/HbYhDhdJ74EdyjKoRAS4FiQmV4cZG2x+BwTeUHasNBKyypKNIQhajMiJnPdOZXfBQHDkJvcHHSE7tOZPHePGelMTd+fJixi7BWc2V0xSY/4VCjRRpy+hsF845RisY1iLdOBhm8NJ2TAzblKS6sG37thml+FZW9mek9OC2E8XjuM5Mer3KivJGGLSrVOycQHAy26GJdZ77k+moa7vO9zJY5aBnGPhntGgWzU1nAB7HeIFdaPsLZOxgGt+xhuY2l2Y+3k4mESzcLdEucfogHdWR5eqbawGwc2qs3OrMjKetXBBCkpsTMFN8M
werPlwugkKue05POqOKR9ZMzMDkEcpS5r13nZfUn9e1iGMeWNESjUbj4uPOnw7LdH/drupaqr8Q56lP/Dn/DRn8lAvT1tht9J5H0CxozVSMCBTNqhjk9fPwR5WWA52uYErBkvKUrf2pTodFgbC0pUKet25PbXBM2d/lt0bjMk1R9n2/r/TOeCEiE0qzHva25vPn3kb8n9UCbeUgu55eDcmNt8VnwBZ5G/EaB9fbiRSEWJdNXUTmK0Y0dEmSLqXV5k3s5sl0nRnWkgdjfUkHUh9bUkEJx+QxBOSV5VLtZm4gcuXZi91HK9lSmkwp+5anOcfZq4OxguXxdlBWTKbFKNda1i6bU4iFh0h2mRWo0S84L7hkf78i6Ob26DBHiFUtJV/3gx25PFjY2sU9QQCUdS7m30ax7PhXoWvcxB/dNuZRHicvX7eyh81dt53caIScIRUmSj+XAtCUSsgnGtzGDWWoIQzAhuvTsBg5BJDBk0SPkaThvKB/7l09tBq5KjUHSAZZPqqrtj9rwVQvbmya1HL0xPnlGYGjJPL1SSs0DZaNKmrqbWgDtCmB6/bxz97v/fM7Yg5UTAOj3vcbiAN9KyT60QlML8lnlUD7X9OOxffJXuLzAYBhNrsnE00Mnaiq+QnqnJB0pPwIjwOHRwKUn3dxWCyL5ZK+HWB5zaI1k5YqOjHbcwTVx7TM6tMiVbBpcuU27b7fTcVFeV1LV4nWla3I8HxggUw1se5RlTj17Hhg8D47QIEDMsPt2S17bG1OdIqjhZx/iVI1f61yoOfdO9zIHV9B4TdB9NtyNpEq8ZuAgK96QSirUpjKkReHbEXz2Ayn7UM379cF8Fg6J15XTUzqSWjzU6/8nCAK1c6jPhpPn/dvtyC1jfH+ql+LG9WrYoP7Q1F++Vkac2auOickytPmLEs6maIFdvqp6bJ99EMNrQRdbq44EXIG5Wvix9kvzybTsuKzqbwhCLmQrbKP+EtJN8CTyjUaMUcK9W16Vqal8UDwXyUdlnm1GnM0ydKUqpB+iXVd+ZcK6veOZOTWpA0NQ0lVQLKuVsvS9M6yS3dZXouMFFxbQhfWlxuD59L+ojFieT1tzZWOqm2uJhdpQlCtQ+1VN9kvu8sZKxsagJ+0gDBgiGpywV78YtUl5NEHoj14rpHEmXlqBlB4L2Y88lFjs3oasXCXekqt5lDvQdECdZc9a3xyhfBNj/vCOUIxDfuCU2yVzsvm3VZeG4W+rS6Nfj39eXRqBfl1aGsf+Bt/+oOLS2HeKS+PteWMq78/X+nmVaXxc+68f/HWu3ufZ06hDMDHsP314/lZc/y9l9DzxAp0DPvSLFpMfi03/dNFppCDoWhfzy3aGaKi+Xn9l3n6a+OGj3zz9i+LYnwrXV63qT3XqrayWzB6i5Pp0m6KrqHW5dKfQGPjf08+/KCKO3X/dzcR3evn2R9UPh4nfX0B8ieLr1M8l8j2BzUs0LfYHAh+hJqf8o+qZTV+uObu4jYa5+jT2wzfKqk2V6OjX5ettvh79Wyq339C/YQj00w/ybR98R9WQG/y325386Qf9TlF3/I/qFOSfqOp+imGpotbKkiV6Fr+nf34t/3TqByeaiuxrlfahr55LNrGvU8Lzl3N51bZfS8E/++d1o0/V98+HbZZ/vfZrGfnPwfRFXD82+hHVjTr/nsKjob/d/nKtH6DPY/in4xvz+fq00P1zXqYTPFcbWTQvWzb/blD8Hbz/Gir/ZN9jf1jX/xtsLvY9m/u1lfjrCdb+mY2M/1m7+a/A7/m8vv0T/P5lZPUnEvK2387DskrT7PkvQeQH7fu7EMGQ/01I3H4TEvMQPX+6/GuXp30y//WjwM+o/WuxVulf7zcERXMU++s9z/O/YimR/zXBMPSvEITn2Z2I7lF2/23MDOc90urbYbldpp9f8B1YtqfN/+tXCX2A+bfrNX74eneam+r5108/X59Bw/LLz752948f/9bjfZHC/1AxiilKqrPTLmR8Qyd+aPYX
2jN9R1GG75z79uI/lWf/VnF+zmqh7/CdP06Z8D/Ivn7ltL+Hz14cJM9/3cZJeS5Ni9L0+Bdp7f9dhMHfgxj8vwqx+5/s7X+ZvR3fdul/DZn7nnP1h5A5zvqTzP0dZfwvInPkHwSJLpuS6tck5s9+/+/kISj0385DfgNC/+mA3f9dzP7nmQ0K/8ls/jPMhvgvYzYo8r/FbCz3T2bzd5Txv4fZoN8LWv9bIHH2EQH/5RQhxX3+hX52hvj8+8Pv7DU6ffO1H38n/gTMbwLmv4YS/VGh7w8lmv+kLv85bP0XUJfvBdH/pC7/C9Tlx6yF/xru8keFgH/FXULhT+7yd7Txv4i7fC9k+2+BBEtcRAQgP/sF+wvF/AVAf2HvfyGIi5r8+MufwPgtYPzXcJQ/KqLr/Dunj/7kKf9DfP3neQr2PRL8i/7JnimYpo9Ivog0jebyk8QGf9tH9doNXzKxYOhHwWVpkf2m2P6xV3n7jjy+npuyNlqq17fNf09IX+5gXATmZ/LHsb99ver42vIvRD3365RkXy78Sdq/auvHZ/qthpYPPftVQ59u+/HN/4We/G2a8SMVoKLkVHo86i7lecbz8K2Wxr+poifwl2+7Omqr4tQNJskumnmeuNSjSqIWfPmgO7Xno8NTdhqjn6notyTy4onRuvTzF9j85Urom/om+4UW/y4d/AdeAXH7poew70S5vgc25A9Tvt+mAT92hTNFz/kCef/8P9RT99/QpX+QtvqH9dRXWPwedy5vs/2LvaR+ZjqTNprnKvm2by7TaH9pImvjfmN/OvFFuv6XV/61vM+hGWOuP78Y987z+OfnusHV6Hnua5//E0YZgZavDuVvieU/a7x/AZIfU9j/WcuNot829KvR9jcs99mz0fGzr31Rl9984B/b/XqfLwm9PwHzhxb/rcPC7Z8IRPwzyP1tKP4atr8Jzr1aHpet+dv99uUw+GJ6rt+Z/ecHx5eD3w/fH/r/79nf/yh8f8EXsF+i7vfCF/4FrNBf5on/wcSD+INs468QhhAo/PsQBkE4RpOfe5zv9nOIXccfjP0NR7CvJ34C2ufo+PmRkU3VKaHPagLoH1ns34vN/xTmMOQXmPufmkwEQr/FHIb9DUJI/OuKB/h3IfCfNaC/fHz0i/fyhxpQ4n8tfCc+8xNm34/gfUPZr8ju586ftXdQejrNfXHJMbmk9316/+tM4h+br377vv/SFMY6X81CV/T381z/Sozg17T2u9zzO+7/76e/34swfLte699AbVHoFyTgO+HqH1f6fLPw7o/itsQ/DEb+U73+9dp/LUs+24d+zuZfYfVnkPw5Un+A2q+/O12d+f1L+vx7FyineX1ei3wB+Aex9O+l6///geJvh0/k34Xr748s30S2br+GNfpHwZr8HbTk90e2rvNGtFxrXD5nrgHwe5L8LSLyKzLDfX7+f+AM33Ys/j+Nj92Qf9DQ/5imnodTf1mPn75+wr5U+zS7vvH/AA==</diagram></mxfile>
2109.09133/main_diagram/main_diagram.pdf ADDED
Binary file (61.8 kB). View file
 
2109.09133/paper_text/intro_method.md ADDED
@@ -0,0 +1,15 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Data collections of natural language utterances bear the risk of disclosing sensitive information about the recorded participants, including their gender, race, or political preferences. Unlike explicit mentions of private information, such as a user's name or location [@Tang2004PreservingPI; @AdelaniDKK20], such user traits are often encoded rather subtly in a user's speaking or writing style. Nevertheless, they can be predicted with high accuracy by deep learning-based classifiers even when they are not obvious to humans [@elazar-goldberg-2018-adversarial], enabling third parties with access to the datasets to profile users without their knowledge.
4
+
5
+ A common method to alleviate this problem is the application of an intermediate transformation step to remove sensitive information via text style transfer. While a number of different style transfer techniques exist [@Shen2017_cae; @Fu2018StyleTI; @madaan-etal-2020-politeness], they require large amounts of text data labeled with user trait information to perform well. Additional annotations need to be provided for every new user trait that the model is expected to handle, multiplying the associated costs and effort. Furthermore, the impact that such transformations can have on the utility of the resulting data is often overlooked. We argue, in contrast, that the privacy-utility trade-off should be at the heart of all research on this topic: it is fairly easy to optimize for one of the two, but difficult to improve both at the same time.
6
+
7
+ In this paper, we explore a simple yet effective zero-shot text transformation method based on multilingual back-translation (BT), which does not require labeled training data. Sensitive user traits can be significantly obfuscated when text is translated to another language and back [@rabinovich-etal-2017-personalized; @prabhumoye-etal-2018-style], since many concepts cannot easily be mapped across languages. For example, in languages such as Japanese and Korean, the speaker's gender can be inferred from the choice of certain pronouns. When back-translating via an intermediate language that does not mark such distinctions, such as English, these gender indicators are largely obfuscated.
8
+
9
+ Results from extensive experiments show that our simple zero-shot transformation achieves comparable or even better performance than popular style transfer methods with respect to both the privacy and the utility of the transformed texts. In summary, we make the following contributions:
10
+
11
+ 1. We propose using multilingual back-translation for hiding user traits. We experiment with six high-resource pivot languages: German, Spanish, French, Japanese, Russian, and Chinese. This provides more opportunities to pick a language that can hide sensitive information represented in the original language. Our approach is zero-shot and requires no additional data to train style transfer models.
12
+
13
+ 2. We show that our simple approach is competitive with style transfer models under automatic metrics and achieves better performance under human evaluation in terms of content preservation and fluency.
14
+
15
+ 3. We perform a comprehensive evaluation on three datasets against popular style transfer methods. These methods have been well studied in the style transfer community, but they have never been evaluated for both privacy and utility preservation in downstream tasks.
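The round-trip idea behind the approach can be sketched with a toy example. The `translate` stubs below are hypothetical stand-ins for real MT systems, not the paper's pipeline; the lexicon only illustrates how gender-marked Japanese first-person pronouns ("boku" masculine, "atashi" feminine) collapse into the genderless English "I" and come back as the neutral "watashi":

```python
# Toy sketch of back-translation-based trait obfuscation (illustrative only).
# The lookup tables stand in for full translation models.
TO_PIVOT = {"boku": "I", "atashi": "I", "watashi": "I"}   # source -> English
FROM_PIVOT = {"I": "watashi"}                             # English -> source

def translate(tokens, lexicon):
    # Word-by-word "translation" via a lookup table; unknown words pass through.
    return [lexicon.get(t, t) for t in tokens]

def back_translate(tokens):
    pivot = translate(tokens, TO_PIVOT)    # source -> pivot language
    return translate(pivot, FROM_PIVOT)    # pivot language -> source

masculine = ["boku", "wa", "ureshii"]   # "I am happy", masculine style
feminine = ["atashi", "wa", "ureshii"]  # same content, feminine style
# Both styles map to the same neutral sentence after the round trip.
assert back_translate(masculine) == back_translate(feminine)
```

Real pivot languages behave far less deterministically, but the mechanism is the same: information that the pivot language does not encode cannot survive the round trip.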
2112.00735/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2112.00735/paper_text/intro_method.md ADDED
@@ -0,0 +1,117 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ <figure id="fig:intro" data-latex-placement="t">
4
+ <img src="intro.png" style="width:85.0%;height:85.0%" />
5
+ <figcaption>A conceptual example highlighting different cases of how unannotated samples could be integrated into a network's feature space and decision boundary. Color intensity represents network prediction confidence.</figcaption>
6
+ </figure>
7
+
8
+ The acquisition of detailed annotations for semantic segmentation is a complex and time-consuming process [@cordts2016cityscapes]. It becomes increasingly difficult in the medical domain due to the required expertise [@menze2014multimodal]. Considering a doctor's obligations in the clinical routine, gathering a large amount of detailed medical annotations becomes almost insurmountable. These obstacles make it desirable to perform accurate semantic segmentation while minimizing the amount of annotated data required.
9
+
10
+ Semi-supervised semantic segmentation addresses this problem by combining small quantities of labeled data with large amounts of unannotated data for training. In recent years, several directions have been investigated, such as student-teacher frameworks [@gou2021knowledge; @chen2020big; @Xie_2020_CVPR; @pham2021meta], consistency regularization [@ouali2020semi; @sohn2020fixmatch; @rebuffi2020semi] or pseudo-labels [@lee2013pseudo; @iscen2019label; @rizve2021defense]. Most pseudo-label methods employ network predictions on unlabeled data, either saving them for retraining or using them online as targets in the same iteration. They are often paired with the generation of predictions from different perturbations, e.g. data augmentations, of an input image [@berthelot2019mixmatch; @berthelot2019remixmatch; @sohn2020fixmatch].
11
+
12
+ However, because such methods reinforce the network's pre-existing prediction biases, incorrect conclusions can have a snowballing negative effect: we run into the issue of confirmation bias [@rizve2021defense]. Yet, we argue that the positive properties of these methods can be kept, while reducing the adverse effects, by taking a different path: comparing the embeddings of unlabeled samples against those of labeled reference images to instill the supervision. For all pixels in a given unlabeled image, we find class-wise nearest neighbors among the pixels of images in the small labeled reference set. From this, we compute class proximities, attribute importance via confidence-based weighting, and then infer the pseudo-label. By taking this detour and matching predictions to a known reference set instead of using the class predictions directly, the bias towards large classes can be regulated and we bypass the problems of direct class predictions as supervision.
13
+
14
+ We illustrate the characteristics in Fig. [1](#fig:intro){reference-type="ref" reference="fig:intro"}. First, while the *U*nlabeled *P*ixel (UP) depicted by grey diamond 1 would be predicted as orange, it is more similar to the misclassified green sample, and due to this proximity we would instead pass a green pseudo-label. In the second case, we would assign a green pseudo-label to UP 2, but as its distance to both an orange and a green sample is nearly equal, the weighting would be minimal. In the third case, we assign a green pseudo-label with a high weight to UP 3, as it is close to a green sample and far from the second-nearest class.
15
+
16
+ We view this approach as a straightforward support mechanism for pseudo-labeling in semi-supervised semantic segmentation. It can easily be combined with other semi-supervised learning approaches. We demonstrate the effectiveness of our method with extensive experiments for multi-class and multi-label semi-supervised semantic segmentation on the RETOUCH [@retouch] and JSRT [@jsrt] datasets. We achieve competitive results across these datasets, excelling especially for minimal amounts of labeled samples. We summarize our contributions as:
17
+
18
+ 1. We illustrate a different view on online pseudo-labels in semantic segmentation. By enforcing consistency between predictions and the feature space, we cover cases not handled by standard pseudo-labeling approaches.
19
+
20
+ 2. We show the effectiveness of our method on different datasets and various low data settings. Thereby, we demonstrate its use for handling the challenging segmentation of overlapping labels from scarce data as we reach fully supervised performance from six labeled images.
21
+
22
+ 3. We provide a detailed ablation study investigating different aspects of our pseudo-labels in various settings.
23
+
24
+ # Method
25
+
26
+ In this section, we propose a novel strategy to generate online pseudo-labels based on label-wise feature similarities from a pool of references. We first define preliminary information, then elaborate on Reference-based Pseudo-Label Generation (RPG), and finally extend RPG with augmentation-based consistency regularization.
27
+
28
+ In the setting of semi-supervised semantic segmentation, a small set of labeled images $\mathcal{S_L} = \{(x_i,y_i)\}^{N_l}_{i=1}$ and a large amount of unlabeled images $\mathcal{S_U} = \{x_i\}^{N_u}_{i=1}$ are provided. An image is defined as $x_i \in \mathbb{R}^{ch\times h\times w}$ with $ch$ image channels, height $h$ and width $w$. Labels are defined as $y_i\in \{0,\dots,c-1\}^{h\times w}$ in the case of segmentation into $c$ classes, or $y_i\in \{0,1\}^{c\times h\times w}$ if more than one class can be present at each location (multi-label segmentation). Thus, the task reduces to using $\mathcal{S_L}$ and $\mathcal{S_U}$ to find a model that correctly predicts labels on unseen images. For later purposes, we define the segmentation model as (1) a dense feature extractor $f_{\text{feat}}:\mathbb{R}^{ch\times h\times w} \rightarrow \mathbb{R}^{d\times h\times w}$ and (2) a subsequent pixel-wise classifier $f_{\text{cls}}:\mathbb{R}^{d \times h\times w} \rightarrow [0,1]^{c\times h\times w}$ that transforms the $d$-dimensional features at each location into class predictions. $f_{\text{feat}}$ is parameterized by a neural network, and for $f_{\text{cls}}$ we leverage a $1\times1$ convolution and a normalization function (sigmoid or softmax, depending on the $y_i$ formulation).
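As a shape-level sanity check of this two-part model definition, the following minimal numpy sketch uses a random linear map as a stand-in backbone; all sizes ($ch$, $h$, $w$, $d$, $c$) are illustrative, not the paper's architecture:

```python
import numpy as np

# Toy shapes: f_feat: (ch, h, w) -> (d, h, w), f_cls: (d, h, w) -> (c, h, w).
ch, h, w, d, c = 3, 4, 4, 8, 2
rng = np.random.default_rng(0)

W_feat = rng.normal(size=(d, ch))   # stand-in for a neural backbone
W_cls = rng.normal(size=(c, d))     # the 1x1 convolution of f_cls

def f_feat(x):
    # Per-pixel linear feature extraction.
    return np.einsum("dc,chw->dhw", W_feat, x)

def f_cls(feat):
    logits = np.einsum("cd,dhw->chw", W_cls, feat)
    return 1.0 / (1.0 + np.exp(-logits))   # sigmoid -> multi-label outputs

x = rng.normal(size=(ch, h, w))
probs = f_cls(f_feat(x))
assert probs.shape == (c, h, w)
assert np.all((probs > 0) & (probs < 1))
```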
29
+
30
+ We propose using image-reference pairs in the pseudo-label generation process for semantic segmentation. Contrary to directly deriving pseudo-labels from network predictions, we search for a best fit in feature space among a pool of labeled reference images and transfer their semantics. We display our approach in Fig. [2](#fig:overview){reference-type="ref" reference="fig:overview"}.
31
+
32
+ **Reference Pool.** We utilize labeled references to generate pseudo-labels. Therefore, we project both labeled and unlabeled pixels into the same feature space using $f_{\text{feat}}$. Since available memory is limited, processing $h \times w$ $d$-dimensional pixel-wise representations for each image in $\mathcal{S_L}$ is unfeasible. Additionally, solutions like a memory bank [@wu2018unsupervised] are difficult to integrate due to the sheer amount of pixel-wise representations. We approach these issues by randomly sampling a pool $\mathcal{P}$ of labeled images from $\mathcal{S_L}$ in each mini-batch iteration: $$\begin{equation}
33
+ \mathcal{P}=\{(x,y) \sim \mathcal{S_L}\}^p
34
+ \end{equation}$$ As we later generate pseudo-labels from $\mathcal{P}$, all classes have to be present; otherwise, the missing class labels cannot be recovered. We thus sample $p$ images such that each class occurs at least once in $\mathcal{P}$.
36
+
37
+ We generate our reference set $\mathcal{R}_{\mathcal{P}}$ by extracting the pixel-wise features of each image in $\mathcal{P}$ to get pairs of pixel-representations and -labels: $$\begin{equation}
38
+ \mathcal{R}_{\mathcal{P}}=\{(f_\text{feat}(x),y): (x,y) \in \mathcal{P}\}.
39
+ \end{equation}$$ To further reduce memory constraints, we sub-sample the pixel-wise representations and labels to a feasible size $s \times s$ using nearest-neighbor interpolation. In the following, we dispose of the spatial relations between pixels and only consider $\mathcal{R}_{\mathcal{P}}$ to be a set of $d$-dimensional feature vector-label pairs with $|\mathcal{R}_{\mathcal{P}}| = p\cdot s\cdot s$. By sampling $\mathcal{P}$ continuously during training, the labeled images can experience a large variation of data augmentation techniques, leading to more diverse pixel representations in the reference set $\mathcal{R}_{\mathcal{P}}$.
40
+
41
+ **Label Assignment.** We build pseudo-labels by finding, for each unlabeled pixel, the closest labeled pixels in feature space from the reference pool $\mathcal{R}_{\mathcal{P}}$. For each unlabeled image $u \in \mathcal{S}_\mathcal{U}$ in the mini-batch, we extract pixel-wise features $\hat u = f_{\text{feat}}(u)$. We then assign the target of an unlabeled vector $\hat u_{\text{x},\text{y}}$ with the spatial coordinates $\text{x},\text{y} \in \mathbb{N}^{h \times w}$ based on the contextually closest feature vector in $\mathcal{R}_{\mathcal{P}}$. The clipped cosine distance $\mathcal{D}$ between the labeled pixel-representations $r\in \mathcal{R}_{\mathcal{P}}$ and the unlabeled pixel-representations $\hat u_{\text{x},\text{y}} \in \hat u$ serves as proximity measure: $$\begin{equation}
42
+ \mathcal{D}(r,\hat u_{\text{x},\text{y}}) = 1 - \max(\frac{\sum_{i=1}^{d} r_i\cdot \hat u_{\text{x},\text{y},i}}{\sqrt{\sum_{i=1}^{d} r_i^2}\cdot \sqrt{\sum_{i=1}^{d}\hat u_{\text{x},\text{y},i}^2} + \epsilon },0),
43
+ \end{equation}$$ with subscript $i$ indexing the $i$-th dimension of a vector and the small constant $\epsilon=10^{-8}$. Under $\mathcal{D}$, two feature vectors have a distance of zero if they are identical and the maximum distance of one if they are orthogonal or opposed to each other. For each unlabeled pixel $u_{\text{x},\text{y}}$, a pseudo-label $l(u_{\text{x},\text{y}})$ is assigned based on the label of its closest sample in the reference pool: $$\begin{equation}
44
+ \label{eq:pixellabel}
45
+ l(u_{\text{x},\text{y}}) = y: \mathop{\rm argmin}_{(r,y) \in \mathcal{R}_\mathcal{P}} \mathcal{D}(r,\hat u_{\text{x},\text{y}})
46
+ \end{equation}$$ The whole image is labeled by $l(u) = \{l(u_{\text{x},\text{y}}): u_{\text{x},\text{y}} \in u\}$. Note that this way, $y$ can be either a one-hot vector or a multi-label vector. This is in contrast to classical pseudo-labeling, where assigning a multi-label vector requires the network to hit manually designed thresholds for every class. Our nearest-neighbor target assignment is related to previous methods [@iscen2019label; @liu2019deep; @mechrez2018contextual]; however, we operate online and access multiple reference images at the same time.
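The clipped cosine distance and the nearest-neighbor assignment can be sketched in a few lines of numpy; the two-pixel reference pool and feature dimension below are toy values, not the method's actual configuration:

```python
import numpy as np

def clipped_cosine_distance(r, u, eps=1e-8):
    # D(r, u) = 1 - max(cosine similarity, 0), giving values in [0, 1].
    sim = float(r @ u) / (np.linalg.norm(r) * np.linalg.norm(u) + eps)
    return 1.0 - max(sim, 0.0)

def assign_pseudo_label(u_feat, ref_feats, ref_labels):
    # Label an unlabeled pixel feature by its nearest reference pixel.
    dists = [clipped_cosine_distance(r, u_feat) for r in ref_feats]
    return ref_labels[int(np.argmin(dists))]

# Toy reference pool: two labeled pixel features with d = 2.
refs = np.array([[1.0, 0.0], [0.0, 1.0]])
labels = [0, 1]
assert assign_pseudo_label(np.array([0.9, 0.1]), refs, labels) == 0
assert assign_pseudo_label(np.array([0.1, 0.9]), refs, labels) == 1
```

In practice the argmin would run batched over all pixel features rather than one pixel at a time.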
47
+
48
+ **Density-based Class Entropy.** Overall, for an adequate pool size $p$, this nearest-neighbor label assignment proved beneficial for semantic segmentation. However, we noticed a potential pitfall: for features with similar distances to several classes, direct assignments mislead the network during training. To avoid this issue, we apply a weighting mechanism based on the ambiguity of an unlabeled pixel's surroundings. With the feature $\hat u_{\text{x},\text{y}}$ we compute the closest distances $\delta^j_{\hat u_{\text{x},\text{y}}}, j\in \{1,\dots,c\}$, to each class among the $k$ nearest neighbors $\mathcal{R}_\mathcal{P}^k$ in feature space. $$\begin{equation}
49
+ \label{eq:dist_entr}
50
+ \delta^j_{\hat u_{\text{x},\text{y}}} =
51
+ \min_{(r,y) \in \mathcal{R}_\mathcal{P}^k\land y=j} \mathcal{D}(r, \hat u_{\text{x},\text{y}})
52
+ \end{equation}$$ If class $j$ is not represented in $\mathcal{R}_\mathcal{P}^k$, its distance $\delta^j_{\hat u_{\text{x},\text{y}}}$ is set to one. We use these class distances to model the class probabilities $\mathit{P}^j_{u_{\text{x},\text{y}}}$ via class-wise normalization: $$\begin{equation}
53
+ \label{eq:class_prob}
54
+ \mathit{P}^j_{u_{\text{x},\text{y}}} = \frac{1- \delta^j_{u_{\text{x},\text{y}}}+\epsilon}{\sum_{j'=1}^c 1- \delta^{j'}_{u_{\text{x},\text{y}}}+\epsilon}
55
+ \end{equation}$$ We then calculate the weighting factor $W_{u_{\text{x},\text{y}}}$ through the normalized entropy of the class probabilities: $$\begin{equation}
56
+ \label{eq:entropy}
57
+ W_{u_{\text{x},\text{y}}} = 1 + \sum^c_{j=1} \mathit{P}^j_{u_{\text{x},\text{y}}} \frac{\log \mathit{P}^j_{u_{\text{x},\text{y}}}}{\log c}
58
+ \end{equation}$$ With the factor $W_{u_{\text{x},\text{y}}}$ we put a lower weight on pseudo-labeled pixels that lie in highly ambiguous regions of the feature space. On top of that, this weighting nudges the pseudo-labels towards including more classes instead of opting just for the most common one, since the distances to all classes influence the weighting of a pixel-label assignment. Further, the weighting handles extreme cases where, e.g., $\delta^j_{u_{\text{x},\text{y}}} = 1$ for all classes: here the entropy is maximal, which in turn leads to ignoring $u_{\text{x},\text{y}}$ since $W_{u_{\text{x},\text{y}}} = 0$. We illustrate further cases on the right-hand side of Fig. [2](#fig:overview){reference-type="ref" reference="fig:overview"}.
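The class-probability normalization and entropy weighting above condense into a small function. A numpy sketch under the stated conventions (per-class distances in $[0,1]$, missing classes set to one):

```python
import numpy as np

def entropy_weight(class_dists, eps=1e-8):
    # class_dists: per-class minimal distances delta^j in [0, 1]
    # (distance 1 for classes absent from the k nearest neighbors).
    sims = 1.0 - np.asarray(class_dists) + eps
    p = sims / sims.sum()                       # class probabilities P^j
    c = len(class_dists)
    # Normalized-entropy weighting: W = 1 + sum_j P^j * log(P^j) / log(c).
    return 1.0 + float(np.sum(p * np.log(p) / np.log(c)))

assert abs(entropy_weight([1.0, 1.0])) < 1e-3   # all classes far -> weight ~0
assert abs(entropy_weight([0.5, 0.5])) < 1e-3   # fully ambiguous -> weight ~0
assert entropy_weight([0.0, 1.0]) > 0.99        # unambiguous -> near full weight
```

The assertions reproduce the extreme cases discussed in the text: equal distances to all classes yield maximal entropy and a weight near zero, while a single clearly closest class yields a weight near one.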
59
+
60
+ Ultimately, our method is formulated as the following loss function $\mathcal{L}_{RPG}$: $$\begin{align}
61
+ \mathcal{L}_{RPG} = &
62
+ \, \mathbb{E}_{(x,y)\in \mathcal{S}_\mathcal{L}}
63
+ [\mathrm{CE}(f_{\text{cls}}^c(f_{feat}(x)),y) ]
64
+ \nonumber\\ \quad
65
+ & +
66
+ \mathbb{E}_{x\in \mathcal{S}_\mathcal{U}}
67
+ [\mathrm{CE}(f_{\text{cls}}^c(f_{feat}(x)),l(x)) \cdot W_{x}],
68
+ \end{align}$$ with $\mathrm{CE}$ denoting binary or multi-class cross-entropy depending on the type of segmentation task.
69
+
70
+ To showcase that our approach works complementarily to consistency regularization methods in semantic segmentation, we expand the formulation of FixMatch [@sohn2020fixmatch]. We generate pseudo-labels from network predictions on weakly augmented images and use them as labels for strongly augmented versions of the same image, thereby enforcing consistency between them. While weak augmentations are commonly used perturbations such as random flipping, for strong augmentations we follow RandAugment [@cubuk2020randaugment]. We adopt a setting similar to Sohn *et al.* [@sohn2020fixmatch]. Since we handle segmentation rather than the classification task of the original work, we generate pixel-level pseudo-labels and set the designated label for areas affected by the CutOut augmentation [@devries2017improved] to 'background'. For one-hot targets $y$, we use the standard pseudo-label formulation [@sohn2020fixmatch] $$\begin{equation}
71
+ \label{eq:multi-class}
72
+ l'( u_{\text{x},\text{y}}) =
73
+ \begin{cases}
74
+ \mathop{\rm argmax}_{c} f_{cls}^c(\hat u_{\text{x},\text{y}}) & \text{, if } f_{cls}^c(\hat u_{\text{x},\text{y}})>\tau
75
+ \\
76
+ \text{ignore} & \text{, else}
77
+ \end{cases}
78
+ \end{equation}$$ and further extend the FixMatch formulation to enable multi-label segmentation as follows: $$\begin{equation}
79
+ \label{eq:multi-label}
80
+ l'( u_{\text{x},\text{y}}) =
81
+ \begin{cases}
82
+ \lfloor f_{cls}^c(\hat u_{\text{x},\text{y}}) \rceil
83
+ & \text{, if } |f_{cls}^c(\hat u_{\text{x},\text{y}}) - 0.5 | \\ & \quad \;\; > |0.5 - \tau|
84
+ \\
85
+ \text{ignore} & \text{, else}
86
+ \end{cases}
87
+ \end{equation}$$ where $\tau$ is a scalar threshold value separating labeled and ignored pixels. The whole image is labeled by choosing the respective $l'(\cdot)$ based on the task: $l'(u) = \{l'(u_{\text{x},\text{y}}) : u_{\text{x},\text{y}} \in u\}$. We denote the final consistency-regularized loss term $\mathcal{L}_{{RPG}^+}$ as: $$\begin{equation}
88
+ \begin{aligned}
89
+ \mathcal{L}_{{RPG}^+} = & \mathcal{L}_{RPG} \,\,+\\&
90
+ \mathbb{E}_{x\in \mathcal{S}_\mathcal{U}}
91
+ [\mathrm{CE}(f_{cls}^c(f_{feat}(\mathit{a}_s(x))),\mathit{a}_s(l'(x))) ]
92
+ \end{aligned}
93
+ \end{equation}$$
94
+
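The multi-label thresholding rule above can be sketched as a vectorized numpy function; the default $\tau$ and the ignore marker value are illustrative choices, not the paper's settings:

```python
import numpy as np

def multilabel_pseudo_label(probs, tau=0.8, ignore=-1):
    # Keep a pixel-class pair only if its sigmoid output is far enough from
    # 0.5 (i.e. |p - 0.5| > |0.5 - tau|); round confident outputs to {0, 1}
    # and mark the uncertain rest with the "ignore" value.
    probs = np.asarray(probs)
    confident = np.abs(probs - 0.5) > abs(0.5 - tau)
    labels = np.where(probs > 0.5, 1, 0)
    return np.where(confident, labels, ignore)

# Confident positive, uncertain (ignored), confident negative:
assert multilabel_pseudo_label([0.9, 0.6, 0.05]).tolist() == [1, -1, 0]
```

Unlike the one-hot argmax rule, each class channel is thresholded independently, so several classes can be positive at the same pixel.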
95
+ ::: table*
96
+ +----------------------------------------------+-----------------------------+-----------------------------+-----------------------------+-----------------------------+
97
+ | Methods | $N_l=3$ | $N_l=6$ | $N_l=12$ | $N_l=24$ |
98
+ +:=============================================+:============================+:============================+:============================+:============================+
+ | Baseline | $0.59\pm0.04$ | $0.73\pm0.02$ | $0.81\pm0.01$ | $0.85\pm0.01$ |
+ +----------------------------------------------+-----------------------------+-----------------------------+-----------------------------+-----------------------------+
+ | Pseudolabel$_{\tau = 0.8}$ [@lee2013pseudo] | $0.56\pm0.04$ | $0.73\pm0.04$ | $0.81\pm0.03$ | [$0.87\pm0.01$]{.underline} |
+ +----------------------------------------------+-----------------------------+-----------------------------+-----------------------------+-----------------------------+
+ | Pseudolabel$_{\tau = 0.95}$ [@lee2013pseudo] | $0.57\pm0.03$ | $0.74\pm0.03$ | $0.82\pm0.02$ | [$0.87\pm0.01$]{.underline} |
+ +----------------------------------------------+-----------------------------+-----------------------------+-----------------------------+-----------------------------+
+ | Nearest Neighbor | $0.64\pm0.05$ | $0.76\pm0.02$ | $0.81\pm0.02$ | $0.84\pm0.01$ |
+ +----------------------------------------------+-----------------------------+-----------------------------+-----------------------------+-----------------------------+
+ | FixMatch$_{\tau = 0.8}$ [@sohn2020fixmatch] | [$0.71\pm0.05$]{.underline} | [$0.79\pm0.02$]{.underline} | $0.80\pm0.01$ | $0.85\pm0.00^*$ |
+ +----------------------------------------------+-----------------------------+-----------------------------+-----------------------------+-----------------------------+
+ | FixMatch$_{\tau = 0.95}$ [@sohn2020fixmatch] | $0.67\pm0.05$ | $0.77\pm0.02$ | $0.81\pm0.02$ | $0.85\pm0.01^*$ |
+ +----------------------------------------------+-----------------------------+-----------------------------+-----------------------------+-----------------------------+
+ | RPG (Ours) | [$0.71\pm0.02$]{.underline} | [$0.79\pm0.02$]{.underline} | [$0.83\pm0.02$]{.underline} | [$0.87\pm0.01$]{.underline} |
+ +----------------------------------------------+-----------------------------+-----------------------------+-----------------------------+-----------------------------+
+ | RPG$^+$ (Ours) | $\mathbf{0.77\pm0.05^*}$ | $\mathbf{0.85\pm0.01^*}$ | $\mathbf{0.87\pm0.00^*}$ | $\mathbf{0.88\pm0.01^*}$ |
+ +----------------------------------------------+-----------------------------+-----------------------------+-----------------------------+-----------------------------+
+ | Full Access ($N_l=123$) | 0.85 |
+ +----------------------------------------------+-----------------------------------------------------------------------------------------------------------------------+
+ :::
2202.01085/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
+ <mxfile host="Electron" modified="2021-05-18T19:45:55.034Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/14.6.13 Chrome/89.0.4389.128 Electron/12.0.7 Safari/537.36" version="14.6.13" etag="CSwx4w5w0w3gmvA3mzRV" type="device"><diagram id="aTX786unrvDcIZlR-gwy">7H3XkutIkuXXlNnuw5RBi0dIQpGQhODLGDQhCE2or98IZpauHrHTPT1tPWn3ZhKBUHB53OEAf8CF136Z4uF57bO8/QFDsv0HXPwBwzAaRcAf2HL81EJ+t5RTlX21ob80uNWZfzf+1O1dZfn8m45L37dLNfy2Me27Lk+X37TF09Rvv+1W9O1vVx3i8ntF5JcGN43b/A/dgipbnt+tNPKr7kpelc+flkZ/OvOKf+791TA/46zffrUYLv2AC1PfL1+fXruQt5B8PxHmayL5L5z9eWdT3i3/kQHY9zaW46eLyzNwrd+H/bQ8+7Lv4lb6pZVP39Oaw/EoOJj6d5d9jhBw9MsAo++H7y51vizHNxfj99KDpufyar/PFn23fJ/Eke9joW/76bMdXJYR8APa52Xqm5/pTYKWr63D/f6WpP17Sr+b8G/piKcy/6YHjv+RRujPlAdCm/evfJkO0GXK23ip1t9OH38LT/lzv1/ICz58U/jPqf299Bq37+9J/0B+IBAD/Fi9PkLIx/PwJcJFtUMy898nxCxe4h9w7usQk4eu/AETKp83nQ3RL2XPgZ+be39K9xJ8Sgjw63IIXAT+ColHXVzYwQwdOVAcL8EeSIbJx8Pmb1HobMmFRZJLW6nKo02725BgxGnU9vvqbmWsOEiqXCnjYJdf9zVe7PEA5JAfKPcDxsNVOSHk1SC8gk+zB34Z0iZxr2GDmwjHeyvZvkN0Jp55WJk+SPrCgEvhp6XCj/3Y+ovAPAKRM49Ie/pbLL4ey3m8lFdshm/7qnFsLWzPs7vaOSmK3RpVCCumC8eapf8SO1OvsXfEe1YkbM3ObXOyD7W2p7UUxQ6QPf/Q9MKPxxc90gtYdgX/we67aHH8qSOKsBkt9ML6Y+bEjWsPFenLIiK4g0A+Ck1vfGEDI1xfHeWBaXQnMDxSv1fzvXPvk0wM12YIjN19aO5clI0bt9G/O6o9dVVRL37wfN/HlDcJQdwuy3t5rQnYmW0/UPvJHY6jVVYepD1oq4F682O4tMGN4JdeMTah3zgl5kmGqyP99kZHJrgxRu2kpe1GHkkJo7i7GTqJGElvSndJ8HTmUoaPiZU681Hgm6ryuCIMYrhR3TXbQCGvuCBDzlp0xnDmbCVu3bHdEwo+2IPHYmbSqOAjsQ9dbuIR7BuR9mE75NvsxRvDT9sFNHLDdOQoh1M0azLoI8cFDR1ljH5fTz7FGAWseFGks7zl0SMMzX2KSflBdvZl9QxvnV+xql01pr6uG8cgJNKWPF5N4lzTNwxv94p1l9fpTPIJlorBfwRf0Qb8xaaUKygsYelsVzJbawOR4NGqSAu3OR1ee1rz44VZk6ZTK8d3QA/lxYSykOA7sHq0a9yeVlTy6cu8RV61CM+mrE4p7y4uJNLov604Jt0XY7+Zi2tG52v5iY7dNx2dd5Ld0leNaSQ4BIaAP47H7+e5W4uXVFask+YFSy1IjeiRnIfZCBdXsCwns2PDgyqS8zPOulKekdaK6zReYNHTNBoddDOuDpY+DJEaIrjSbWFz/skIJ0cuGEYxxXy18ctsqaK1Tlca1yTpJYGODthtMEDK7cR5vLkbfVOFNgDk38jianKzQt9kC25+ekv0CeRO7uVXP5v5kILLqM/UrUUlI+vnUaBw4WF5zDm8UN2T9TFM+bhC8baeXd1rrwQYfm4EQ2n31xAohzvI9mw1ras/40ITG
r+Cgu226iRrSKvvue5pul8x9xdQE4XQpH+cUZ5pKD13fz3aQKeOvbpfOmEVmfK1hV4xATrcA4s6PH2WNzkp1bBMC+QqXRspT+q7GqHGpIgN2eQrG1HAvPI37dVwj7bLuSTnjoDe9E1AgTsqOBUT6mSJV7kuBGLHdU5fOc4affWhqswpHdv+sLH8Kiq1DlhF3/UwAKrcrZyfl0aCX3oLpwSwOTxghjotgcUtUK2BO3nUHhf1kwh++LhADXQhxF3pcGaii8Qy99U57qH6LUl5OZfMvcviZ/am6yuTF7mw8sHlkaBvG1zv+BS0phI8oVDcbAN0e3eB93ijlTSu3P48lbTtnWsDzRVBnQE0Vxh1XC9oBQZLl029k/U7QNSX+7EGmTbxxGBS7KN+udOi0VdtNsA2zLyxy4faN5tCoq/ZqprkhXuhDESPr/ICmHx5nl92D4gz78qwN7M1tiId4VJNCGyEJxlHk4/9+bZU7KeV8M9KyyYxZF55hQ7myKCdIDv1HMlfdD74tp3LHdLjzREMrxEZZnhJsSLc26CvHttETQDNZxFH+MXVtSvP1tfasZKhffJP9qFVqH15PFB65WSOoY+rvyIM/dKpExJx8qGPfQOUme1fqz5PIyU8FMtluK2JzV5ZdxgeOs73echKeZrqNDAdGbgNnjnLL9utP71mcHVAYuCmwqo53DZaZa0BAg26CY42+k+mGZ27IZLaveqD7rAHOVqsf5xRttC19cNkpvWjPoLb+shIisVBtpaBudSX28cTaNryXamRbgeeWnNjZFkkvITmVtG/qHUbwK8V2wuc9hhiEgmg8WhveRj9KAogc3K1s9bjUTTy1wxL8baex07d8ZLdt/caIsX7nYTFHheD5oxTyIh4GYAFugJix4J7MxBMQuMIpFnWjtcSgFGe/PhskqdCnC2ArIav8+tK0CF5OMCks8OllYkCjkDZ93Tz4BQPOjPDBPftAm2ZZtEo6hHLqU1n1Agnm0GXshkyJ++8VqNyPDHgrCv13kiApFz3jfsvuPANs+yFkZZLCEaIx8vPUBYS4nt/yxIpdTkU2HT7ohLnUdDnKkZCj2B48bF/zeSoS5Ykg+P0CZxWTtNFmIpwus8jfYMbUopXMuy+PE7RnJekE+v5/W3rOLq/NKVp54GMUEfBrjmbjNQ1+tmDRE0Y44S3UjS8qvU/iLX+/xDa/4xRVwIQiHcfLz+aJuOi2XGqM9YhJHx2gyxkHUidSbuOW8j5b+6Z8zsEkNsXIrvZz/r+nlPTezmQVWg9GQSITW3+FWQARhKilqzcxBnhH3GkkvbcyHAQ4Q/3j9+/eoYot+KD2K75dbxei8Uesnhj3znD+xS3rGkMjU3R6+nHXibH63aIfA+l5f767Oh1u/JVNV/Bjwwl674GeGlJeVEc68LODH1hFe0XBHt8EOw3btxJltAZOXq0a0CKEEVJv3iGL3udPW4vOlDdkREvyjU7gqd0fFBfPd0iS9bwZFK3JDAYO+Ge3sskE7t6qGWzdTvympWqoSf0ZT70N9LN22tbCZJy/hzfhTX17+C7OnWt0kDrbEHEte0uF4Kpf1qrS+FaI855Z0tpN2Zi4TzFK63NqVG/fARoyB6JSAwqhQJaEGK9Sd0Tj5bb8mXIbIFY593CnMqJQpYFjvzLB2LX4yax71cgXRS0eRk+xBDhm+Hu4rmqO7BCsne+gTzdIBGXkBJISufxn1dVrsv5tMIHcZ+giKHYAyvUxQrG3T1ajAve79sekVJQ06cJATQzyub/MCT2Nxt1PBZJbER12nDdv7nETsjbowWRB8/RyfCgD5YKhcz5Jao7Xs+fvbY+p1GTdTOE1LwRBoFDWkgJ8KCxiQmI0LFc87kLtQvnpiillGQJ1PElKR3lzdp8W8r8AgClJksvFeL4SxJzx5MoGI0DrkiovbGt6bRlRboUkrxB+Tce+gRP0yFpxHf3Yifv4fa9m+jUuCi6Px7Atw0muUA1T6FAr3sSJxDq4MtL9SryW
7KzX0k2NvttXcMufi/gxyTMJ47a2m4/Pzo+WgA+3hKCeG89N+xpA8JZl8yPVbG2K4BTKvSfGnQfIqMycJ715sHERgEDc2lWNvJML0NKnCd7Wy3mePRQa44rTDnwIcGfG6BzjtnAS/E9Rgc4SWN0HMPddda1fI/rQYtI1Eg+8XZV9TqkinapuXdo19BN3cy04qBPvMY41dxM5eOSH+n6baHSZ2cyhUpcS2711ht6XyEazQJ1Y6jOelg5BybEHisqy4cj863xitRr2gwrrWVPOsYOSMaNuRgiQWHtz1F2HrE1THpsxNtZtxtWGvcQQ3OuAG0iXELJybAgEDJcbycDGkGI/dotvMBVJJXueVKN/FkgxUd6xrdWr3pKPvfktgALfraMkw9OW/uixiuvnr1eB5oiHrlTkRx9C+bQ4KdT0D9ump3wVzyE4aUQdcea3ylBqP18ym9+g6G0BiwUO39s1i1FMU9l+72uTeQ7Js809PmCmJpIFn2G1mSH+YkzZwMYUz6WDfI2NB4w5OEVgE/rCQYLEH4tIqCGqIYRfpu90IWJhIg9cUAw2Qd2tywZqEpNS79+knLZvDb7aTI3WVtnRn7F20VslXsIgtnL/e0Nqvh66srjjVeXQuA0HwKX5JphSvhUr2CuJMva2/oVY1DUCQUTPyxsk2/vZ0kiWL6cvnBXeLBa9PbItVwJhe2Z1Xu1PY5dWCgai5XdkwwI602ffta+N9iXV/BS4AuyWZdVdV59LYXUSEKa2OtMh3mFghKelBw8rVeUEQ+vYO1HdP9FLkHw83S8cFGZGYv5KKnPCVpgWiiFDggXD4RrNJrmqj/G9XkxpORNMZigz050UYnqaIFOZApTPYKc1Ljq9Q5Oc8nJbU+v5nVVktul5rMAWvlp2xQJAwSr7sRGrxARQDCKscubf7pBd2OBAZCdtjXe3OLECXPOvayWXWW5BBCEyv2fhfT/dqOG7tEF5bXagcJeXY6h/KhoJEynErag5hq6Xh+eLR8r5+TcbqWIA82bqTl1fQcievGciFQPnQ4DyiY2w2Q4jOHbpSv9Ul5naHh43mH4Cx7vhed15tt73zk05rnCXp3NARZIa+oKK+c5VSzd7szY6Yd1BAqK7fQmh72AsfsosjAqHLaJRVcQVqrXoMRZdwCCrLyiZjN1/bSbfXedm/lmAkQkmTR9UCE07OvrTp+39Scd6t+uvdUtyiThJA9pZ16jb5QVeuuSMk7afK6w0jSxeo0xQCry842Me+JdVB/T6mrNo8HtoQ3Z5ZbfXfd2vePI3Zza9/W2cW/PknNmt9vP1VWaDqZp4mharynmZ4MVPeOAfYqL/tEiKbgrsqYiQLluM96SjxuBvyF2rXWjqQRRDyVPUyMfod7alBwOJKe5ZZCsc85csQPvylS7Q4sZVW5JDOOrXR9syXWdgQgJHSYdIJ18rPljOUpS7ctDRuXDNeWrH57L/QBBPHR0/GNLMTp5FNYmvKHHWj39njQw7uELyg9MQTeu+6NMz8ctyQdIm3p0yJg1b8I2KaZi7M+thZbx0cPcj+X7RwfGBh3LzajM0/hVul/+DY2crJJzqxbngnG5AQx2Fxr73EH0UByn848RvfznR3EX/74QBGACQF6dLCPbQHrJRqKGjsGYnv3kamH490blWgK9+hD0eibxsHIvRgi/wucQBrDr0n4B5fFFuukAQqm14sx2wb5aocel+ZoKcOi2dBAZJ2Amgc7W4llJKHCqcK3U7KDOsxxqIc0jCDceleEJYFGNQlyfwHvLTZAbBVt8XVvogDg2PoB4Q4DkwhAB+sGe1BsysHY+0wPsXii0FTvD964aKxlPmD+ysBz3WoxFAbGuhkGU4TS00WMXl3NQ73UElhcX9PHy2tt3MqMwzrEhNQrYMC5xwgdOYNFi4pvAPIpUMR4o7esuMCeQduq/HcG7qEo99ojlrqg5FT6m20H8YtsCN0DUCTtRraW+zzR/dmhLNbU6gSj/ocTH/b0JKPqcT5N6m
pPmHftYWdcIIOEM3xVNCk4Qu3PnOOLCxrqJARB3p/2PRPd/7VGv7JUBiG7I2YPYVfEc/CcYb3vRNKwoFMzDxMk7OJ/zL4aXCVFizw+ibIj2Yt6g8Jzb4ybX39QexJqbaRAsbJcAy/mAgxmlTTgJWdlAuDDJLMC2FGmStuj3Es8K7Jc/eLmmEQvHvqXMIKTkcKleEENWp5Uof5IThqFkucOcsN5AO/oAAJbop5vgAcssIRS9J7bV7fveDRn+pXAPb71irPOF9amK1ETBDMN9yeegPVJiL6fNf74CNgOgiEeBGftGzmrTmHo8TBAEX2h3kpdr9Ayjl6mzZNX+wSM4NvUOiA3nBPLK+PQpC/7nCtWmB9M4Q/Vm9vPhYw1byUOCSSYyfuy087DTe1N78o3BzjxsG7zOaaiW1/5XSOwZeiPxWlZNISSax04K+NszMfKc337tu0bgrjqME3IQMWjrkr0L6BLBhl+4/o1vgW9VifvuFybqaSB0Fwm5489kWVoGKp1OwEiHvQ7O24tCiH4HP/rC4C/+vDXoQn3oYg5KOyEEYghzSqTrRRGCmn1VK8P6oV6QcYEI5/2urOn2jUG8WnyDmfeNtKgiwTDiImtxikHnyPG6Dm/NjAj+6B627oiWFHQwWWTcpH8khPa/uO5vg+sqdPmgOhhzlTTBZSV1FACYDN5aVfgB7/S9cgiEyMC8uuTGsp64msw7/uhPfV13eKNCTHs/C0obh5nA6VC0GIZJckaaAsWn/ATTlVGeKjHQl7HY7sDZ9nRRnO35iXbmpDWvhUEyDfcWxYK7N1SsTVWO7OeQYRZZ5BMMCouMT5LxseKLbDI4hGTYItMQmJKOWiTB92wN5WzsnAwceUtxxfg748nTm+QEJjqs8ppztxxilLy5/5pf6jBRTXBfxcVLK9rhUyKMJUhIOrz6KPebqPFyQgQ1wltEzIy4f10Pd4UY55P6rbqBS5FUgyYE/kit7DXu234Jwm9rejD050qdNZ+WfP+LhUp/oQjnpwEIwXyN+a4aI9AfMfKrZfulCIsgqK+256/qrwic+K/X7hD/W7vzc+1Ovkq/qt0hlnbHO4Xx2Zu6+shV0/dGknV+CNRrKtmRyzAP0Swx5+XmmskK9lGXMIUpOoPB8pOh280Vy+3XXm6NNIiCISndk5A41e5b3RN0HsRT/TUHAIV6H289796QawUsjyjSO1WMYzytPHE82LXG6GzBqS7PB5KgUOYe8TbPGRTlF6UKSOZIDCdx78bYOFug7JUDJwfhb9K2GsnwYDhEXDtcYhrLhYwuWcPiPbHXSZgsAsgDAnD2LkrdzuUpC29bN8agbVLUFmfBz5LSWjmvXe9ReOCMxfFnp1jr0JFXdgVMAiHOlwKGrgKEhM/U4j/Yqni8maGbN+UpsmFGTWTMW7kapRndiNCxkNZ0izeWrs+8VtqBtC7vS8ewhlDhbAiUE84kIttl9u8vhYCRlwwDG1rogPNiciMgJHPAtRtqBJkcjw99+BqhSjW1+BHP5uvzRN+30Sxflk+265UUcfsr+Hj5OBrEoyt4BHdGiyOSuMeeEPUWeY5CTwFtH2HJozQ511GZF0mvFBXExdlFUgdaMpJMyHdoRysC9evN56xgwoKQvCGG2SBIfjMdwKlNmy46PbJtWaOEF3yI36Kum7t0PXYeMjJLaYPQ87Tjxg8x7dI4x5TSL3eUsEpt44ak8Cqryua48o6024CQi3HvLgHjvf0odmWPsZWFuUxIIrdVOO3mnrr981RGO30ZYqRasZTbpfdw+UgjumS4jPutf2p3rOnAHJVIlGq0VV5pTqhVxMpdgLK123lpPmFG0QpEonlmi0vsQKov/bEM+1PPFeiMWV2RcM41bIrLvAv0axuxPXMFfnD2GyfV90LiRwnSnA8vkKA7BP/kHDq8+j3fCuYzFSNsEgjnP/HtzbrGUrhx42Q9TRWEgKaCxWK8gxiCg+ZDqdhX/ZKUg1PDfbQVU3iLWGwmYB/ura/oo
Htm73pJnopbAMHsgOFmAkboUFVjv7LQj37yqSuS9xLNVZVhP5KVn9Ke8iqO2PvexS5kaaO5KF/AHi/obQtsfIlXp6y/pDtgMPYiz0U9SZshWRvnidpW6lruRdedPcJMzuUndletkXMM25fPVye0LYGX9Pb2js/Nxy16tRvMOPecdKsHlH74ZXnBz2ziXpTFn2eEtAVENfLzArmommHprsNTNQ6Pv1FLEVCvRBsAsiIL18A++2qkm2dj1kvmC+u1Lcw46kp4xgZKOKXRMEDRWYHz58LUQtQYiWkOkDeiOhCA4gknblC6Zt3PLlp+yUZbTMg3pt6Strxsel9iGv/gdiCs2m20A27ieUaYwwuqnquBPnOgeEaYn6vqMFCilSeaDvG6kGLxfEsBZyAG9crVfVz5k82Tqu8t+5ZSF0rVrk9Ak2x88/znXtbqHy0Uk6ERmWaTRiy0Pze5zKdKDUIhBXz+LB4BrDHiruczyypwNvKF51LNvrLxZT1yIjeeDV6BPW4cgWkbp9MB7kLLeYmHxKtUC+h9A9hyYegBrcWIk125griR06w3XF07Pe/eFa3l1RfLz/s41RIolI9aj5kZFr09eao4rGE5PHavWxbWKYqi4yRv+T6BfuUYW/MdP3NUmGCQ1q4uvZGMZjXVkVKsXX6tpnxWU4iZ8mAtE7k+a0Z80NsV54WU4K4blkQDaES7zqLqpa437MHzb8KHV+NeevVNgRAYu1g85kdvVyIAF8sXFeI594TicxwpRFUWwZles2RcfpJPW3PKJ7wYphziBOYJlRgoQzA0faeuWwdbSo4Ub1s3M3ayiGiwiNdbdNd0TMqwRlIIYW6Xqb44FyAKsOzqYud8Q4gtR5cly13xWKbGR5lJFO7Luc3/6tztjKQIFS+q1iX+5B+KzmNfnE/e92vTOotvrwoXDArDKX0wRyUTj9OU9bPVA+dkqWrUwTiBFz4O4lrBuwdeQwsiwR0XW7WQ7jhrdJgP9fysh/SFP0KbhF8ED/YKH6Vnlo2IEDMP03QqMOsCy4h9mfFaGN+l65t3Pqs7Gf/sc9MfpaHervcLnE/GYslrRUHvEZEy707BCdTstgj7xot7HKAfByhG0mQe97XLZMulSM3UFAxLYKbhzR7jworOHEMtES1MfOhNqArKjg8Z8mShy25ZeBuLkmA8hucLn3KR4dC8agHz1FPWYFrT+X5dSPveIpSxLkFgk1cVgkv9aKBoFrBw0qlRfD7XLNZ8SkRqyi9p9s3Goio4K9/T5Xxhzf4i+P06HAerGfRWUhm1vbXhjDJYR9ufuIWcXZYCLoorwEdTZDVdtdjvzy3aeHJDaMrIvPS1Dmt9qsDoAd55WgVPX5mNJ2LggEVCjU1C44FPuZOuj3vewE6WJaOm4S+3XIWsoUhgdHL7oZc+cIK1/PCx+YWRpFLmGOKiQBjrm1KlkNM8T9r+YFVkpjgAkBder1wACYrrslSY/dCeXAHzk0Z51DQYPtN0uECnkgSlcG2DkdiZPh+jeW/XKecEm2RPCCo0Hi3anWQh6NMq85U4AdwYMVgeHclR4IEokFJW0Szy9wcpITga6jVRmprpjYEHU6gwVc9rFVztufuIrykCuZ8nwIcKgbNmlZzXWm02nDNK3anQ6zViU55qIBxpOmXxEGjmshRls68E6xA+v1OtJgTw+2PDmUUw1IMnBhkGi9xUYk8QZG47nqcpndW1yVh9kZsDn7ZrfknvxLmf+cuCrhXZK2me6YfsAsTQo+3Uu/HbzpmUZJmUEmnkTPYdCk/Yobl9VkK1ZhxQDVsDczOs9p7WZ4EPDlKyqFl4Jv6pPynQq3QJuj2K/EAjuPfulIWZZ8AoDe3C4ErzIi0ldDQk1+qmvi9fVk+ikWW+3nIcqJW5vVkW3vylrT6z+klTXjy1eKXgDgsWTIIvvgwMIyEl2OxV9xpwy8pLdr9sxhUlMmMSRdZMEkgkOoaucFEUT9Wjntl0f8y4KyFlScJNT/SBmf6MrVaZv
bO7zK5m06M1pyPlJ0ulzl1o6n32NDtrjtQFUTpqNRSkaw8MZ/FxI61iFmLU5CQ2MoPc33Y+Z6kPouQNmFVlhtA7T/oex3KLW6y1HPRGJQCdQ9RO3R6bRsb4CSXkOC1WsyCXaerxhUnvrLaENpxu2SgyZd7AlQ0TzFpmEkBtHIrp1+SccKK23MNnVTi6V6iIyV273/c+Det6Ss/9SeXgylYHod0IszRb1WGuVqv9U8hS6IrQJ6vhKwVE0FAwygaXtU4aAM2BAKCN9L7BPnh4wrQLJuGzaBzXgkbVngV45llY+00mDAiRyTfwmWzPGzlGNbd1R6jABuEDXQgVn8Xy4FEywPaI2csYf5OEg6CxtMOg2Vb5i3JTheiACTNlmYzMNQASfVzj6LLHQvFEQ54GLDJUQrUorQgrKaDR9L42MafReRCH8zswiA0gLL6wwzmgH1G47u5r0dKpEAA6G9GX4FHl/biGSQzAGs3eNWCwVncgjSj/FLfqzcCG5cx7PI87866r2a47vd5I/qo9JoC9YVX/XXRzJp4QpAFegk9Sv6S6aMhhRadstHlnibAOKSx3HjiMpJoVm2qTI7Ad/z50V2DiBU47xtHj2AuM2Izx1VxrJFqOTeQUVwZueYO8z+tdCtB1Z/vMNqIx1UVI/cQHDjvMXIxRqBp0PevMo6GYL9722M/e7sS0Cd11GycN1qAAG+6Psnu9KxFMmpT7XCRXaD9GkiPewebcVlh9OYz26ABsQV31HEXnG5EDTCEDiVIsqFSMqAYsZxfR7YQhQMzdffZYarro0Ksu71cPt0lh74UCTHF1l+jF1DK6sxTVCdvLsO/Q13LJg4QlxPyiXm4hblbGEnPdxzUm5Su+tPN9Shb7uSeSUSEpLog0N8vuMysuOtRgTQBi4S63ZvE6R+d5tgznHmuyQBYEMDMU6HPAKxDR1Zy4enIFDJ+fLsRF/FSELORWzZlaA5RJSW97i+4qLAtcY0RsJrZqF/aFW3CdTyIeAOPs6o127zl9HvtzeCeGGZ6l8TaB8Ky6w8gIj1pHxKw7hxjWp1iXTfKqtJ8hXT/jycfCJT0+mMTgnjB1xvdtPEUs6qD2I0DPD9rnRL5MgJuLnyiVBK8dXaDp0T6Aa+Mq6EGwi9jzqN/lSrc6CkCwIpCHeEyeuSBSpZShO+8mvp8ls11InPw2r3rD0col4qhcvmPXkowcGPfY1ZhcbGaJDpya9RhtSToYUI73+8QYKt1/MaSR0Jiv7JzmfZwd0YdTRrorP9D0F+YW7yo1nX6A8nZEMex0JzhyCmNqxJ+VNBrTifM8xenGA6kbPFk7vzH22yPPgdZy12nDPk+5HHusuPH+rDeGO8o6XpXihWYliDr9O9iovakUO1Pi5dm8ZphF6DX+lmPpIg63CCoqx5fiZREft2jbiYLTP6EH1wuAbZtQ6htQMlPQ0JrSIse8BBsIhJaJ0dTHbQTxuq9jilDd62kjzBO1di5+Bsbu604jZQoIib8u87yscKsuZViPVWdDcOINmg2zFKvx3ik9YUkfuf6MvutIZRAgZlieW9/y2LvgkrKBvo3vXch5A3Y2huReqUBJRiRbq6d+MOazd4nnVtxh/msVZzLqo6qs+1gaAbbLSwddxPLZsdTlyX0m43XWuNPqSKVPFUSBo/S0KgCcLhJ1lCDiAjITeZMuEk7E5QTGSp6N1zdagxGR1Egfm7EhS6s/+6WZKhO0bR9XZy1E7a31LZ9WNpmsPRRcxbaLUsGEvb2P0MSwOY3enxvSYIDMXPYckjE6mRz4W9yVJPh8g/fmnoARVhSDWPT6MeQ8rVjP8ql9IQcxmVNMEidm/zqW4bF8u5vRm7tKigrsp3amexg1X8d693WcE1/HYrByg5nI1HoVYFWaWD1vIdzNvVz19ZKf/utz6zo8SZ8yqp2LYIRL4NayUPiMbbhJLIWIs+e4Xl+MGeKL1fGhxAPkjRg8TPZgaJef0WOFHs8eF50e/aBlkWX3Fy7VkZvFf
6pdbyMDuYohpzxuDXy4k69CZKJcpFUiaG8sUVpEb4PzxuWrWJbrns9YBrPzms76Q1gEeH3od5uBlqsliFrLHYwF0YMkitKAOUw9bFs6EE71VgNKfsiY12SIt3ERaqRSNMbXWGhz89r2ftL73WOlDX2DOLwXAaUiITMENnprcHb58YIB0tGpXEC3mZLC88wkkI9nWCbYdGzyfAlDt2ZATH0nh301qSze0otOqfxK+yjdccV5Zy6F5wIFh5J5tlJ7KuZzCZ8FGbE2f2tUQpb1tcjSWlEGepNL/pMpLZCOKkcHhiLF9bqu7sxeVQN5tzWVVxR7bTpkhvchofm8pXF3YzQDZ5q3+3YL+aRS0xo0y2NgiSTM5hkDwkw1k7JthyrrCiIpKNmBxay+V7RRb30q3bx5+cJgpCWPiEdR6zqtDZO1HcLePnWXq/1mZwIWHPBhTc4gXDQsIt+dCceLUYHKisQf0y1EXekiiYH6Giu9unGS8oMdGLHrisoY8ntTaMpi9wJaNIUz1JYE868q8haq3qOGgMStmPZDzj2rQOypdqQKV2WtQZpi81FsNHXdFSvoP0/XQUVEBDfqj1XryNRWjLfbLtrVj1FCrDu8wlmgAnkaEVbmVzI+pa0Vv+8P0k3fh9IlJWaJFCd6Siis602n0Vabth0WhdxB1HFUi0+KJvATPDdUJiFdb6ucPFkY2IlQPlgHWZiXoL/OfcNhRTgPAv3R4TDhfAOs8ujc542mCBJ6hmMLuBuIdiJI5Css6gvTBmEU9xaboTns3KMKYFXHM8iPITdEPYpXmHxwYcSbB/mnhw9zhmWkaZVSbRHnClq354WsXuGkr003v1ofHcOF7aVy+8WbYqXj81C0OZs3yBI+mhWqwFmjDyUke85wu9C5m4vM2VzTdDmhpUxZ+jmL+oo9AFyaZbLNInfAVs8hustFvSzdzDyCIJmFZCZlGHZ0UPdpY1zdkisFLxZOcnrQ5stvXwMUHcPAv+D5QMRL7leEHjehcdHBtTJSF3jvhPQD7Ginr14wQ2izorlHUCHMtpbtxYvFk4zLecRDNv/qpvQrAgwbUqvyKIW7obMoZRQ4i+1OkzjMY7uUUrhRyiKTHl0pj3jFg5qYtf7SSx0SzmrWFAqA39xYBluQez7D0i5nf3BEuOdgxtSkLp0xo4YpKII50CLTgbC7Ki4rceWk1mbiIOjb4Skerx4WNXHKLAiCcWKrfMH1bs1tQSE44TAI+CqEYHit5HevSjBoP0OjCXXTe8w8euGl1mXA6K+497W9sM38jmjuyNNANl8E5pPAgUKUc+m1sL3lyOc27ypYQwjtt6wTPPB/9SEkOsWi2XpzeoT2QpRaW0bjeI2XofFfqtRfXULZGoQksXDNfQIG6979NnfTzrnAA0O316F3dXF92jx8oec1RNkWE94VaU53Qh2R9TKY8lksbLdzzuM+Q/QDkwVG2Ya0hXi3mRfODC+S93xxnm2FKw9Du+TePQrfiuPTlito2BfWpcK4RKNZ4m7BKYfnYuAPHl3zqH1tije9eybRqMYWyniKBSgfoTwNETtUYFh2XeNen8PD5kZNRwluGYtEmTL4TJUWM4PsuUgo2MIYHSsRmOdrphflUntoPyZBtbn0ul25+flg7hNmdHYQsMiTfscVrKoQZ+Bu26SdwgvMl2sLEmjMyJfFjSekcIDhJr9x6jpH0pLsSjAXrKPeYKHVEwH4XAyp2XsGL6JDF01IrBbnRFcgjNuLfvHSG3834XRPs6DDs41Dylw4By0eh34rg77kpEokEusyDvGk1Q6AN5KtrsOJLedFHC8j1hN3VeUugkGme5Lu4vjsTXezOI1zCqsla2x4euNzAqIml9eZ3knHvK89PuyBPHBaJVWQZeEjis5yKT0QSWqCQaX3ku2uZqNy2tUpbo+r4j+eyFPitEnyCwO1dFGf79BT5zi1cHu+SQrxKVfhFMxsd3aCJWOCI0h6JQxxaxpHdC/3ZzBGCqzrC1c0uL9Wl
+5r6kHvHkFwqjT7CgHv1W/HhAoiTIVNO2mYT9k3W+5mMoryqb4UZnVgW5jLm0Ds+Y5XTABs5DZhozc33yn9msufs2eHVdWhKEXtl68RLzlsQNjIuFWH1G7CbmwHyumERuEUcxF14JZRrV73fqtzXsTjzWwSvhAfT5nR5VzUJM6EobhePeQzZTmRc7Yoj3lPlrgXB/QZY+epx2+lL2ocZ9qWiI56wF6jh8pJxx1ExgPOIWRGzAlU3em5Z0RUSsQCb3ldUF9GZgNWvNPXqGKvtpAdizKypoF5xdquAGMl287tuAqMm+jvzjM3XcbG+FlE9K4toZSP0XiUCp+qBvBXRt+FPPbMu7E1x35l03YrZ3j/lrpQjIVwd3OW387AGhtWJ7cjd5fdsCeEC03Gf9kTq6rY4phDppIS6R74/UxkURjWNz+lp7IftRLNE5a93gUdkPmBp6fMX/ggTJWQfqlue1mPrPgsB2FssdJ3owk9GvHJ1OO4F+r4zOQSMBS0H8mt/kIb/XeeDrffjqyqxF6akS+vG58APCxs0Fx6/avnaHjnsgWU66tmE6V80w+qBHPJRbRoMXYg2YTdBX2sfYGU7uY1El+yEwLnksi7FSfTLBmObPvindNbp2AueOdkyzIfdxCa9Bf74gPjQpTYi04kn+NU/dUSurXwJRf1D7yEcdx+v9nng1P5R2IHBf/1ecB1m7Jc9p2ByBOKDWXf9wd35R6Ak56KPAFk3m9bX5ZYIEyMLQBc6Syi1txJM2CbaFmdlA81CRpnlWpVFM03k5zYJGyTWgmQ0XgYFBMkjcjlu7QlRFe+Kfrk831qWipkpXqqEDHmU3163oMzQeSFVlGK8QnJk+a8EAo3IpE1uuaTgaEpb5yu4fB84h6x69Vd18+79ZzozLFizJlvxISoL+MhMRqqm+3KeVWYoflDVoXa5xbF7OtXAm+FkDc3zLOvbCkL8QlwQ1qC18BXFhF+gBDRApFOwN4zyiiV5r0P9Xt77LRPalWtQ//NAykvvOPCaWoMc+H8jYiYAtiGLPrU8xqW3axezSQwhfAQYVXTJSk2nKuEiYyYE4ANHSqVpONEhSLaLVCL3IR7I1AXPtQC/LTdUDbedeooKXtKc9yQrG9DJAaA7ADCHl1KwU6icPG+pUgaXkUjeOgNl2CFEnspLZ1IkXk4F8xO6fxy1Z6toH+y4H34uFyyBbkdotHS6VmiXJQd5BDZ8tn1txx/4VnYrAl57XIQOa3w5iBfDk+Cl7lo8/MLumndGiVbjuCuiBfMkqX60yG8euOv/c1/OAsntCmzrUWHddD+Yewq4kdf7z3wQrZZ8roa6BNhy/rA3Jw9xszHihm38M2rEFqQrvSk24seUTvh5a/E2o5z9QWbuyDmMG88n7NnwXj751b687Z0a8xAPF3yFK/wKFE/r9aQyDSJ3CaczOiBQlWJXjaxl0fcfM52hOOe6cYr3Z/0Y0NnmnqJeDggXMPJePJq7RY6SBP5pKtqW2yRfJgR47FB2tpcaXIwc3ch5EXfumXfagj0ToRHLxfysZUGTtTuwBJMRL1DXBTdr4pXuEW/2MMR/bgEhew7VkEm5K0P1ERv7xWFoNyyVRPGIbe4fYPoJl5ZTzC24H27YMFjzWUcZ7peeQGhCPMUzRBrJWRVL9fng/3gC9yJNstcS5ZjPpUYOqG706XU6A1cYIM+Q+B+0qr08k/C+nvFK7EZNAys7hqpv4EiPzMsMYq8H2xTQXhN0kUI7vmZ15fHe/eyrvEH/7qrn2mC5rXXqDvdQze8KzdduRHSfPam4UOcJ1y04GUJtneLfRBrvWwxloVke6SvRTWoy0Jtu7C0bZj0YeV1rMq9Fmdg7PAmjp+aOvH+Fu4tiSg3qmJ1Y+TJ0QMo1hzcK5khcwvtsHzv/VDsGsrwo1dWEUQ5giDUvL9EGi2pvG0pN37T+cPJnpdZnTyXCcZdNyyYCqU5eytVx+VZFl3o/ha0IAZKKjZridH27fagJXvjBNd9D
ozGGrWuYG9UXTYIQjDG1586gDIuepVhTmBUkCaGYNNfBhnTVxnO4gt37ipPB/GIPMq3d6upmVm6IAzJeVMYlEF4Tx+UkD3JvZ/s2208LEK7DeRxfRloPdmGSKnyVI7VLaIQHcCFmihldAJdr6o1eQsiSH7wXj5PFfamVSU4Nse20HPQnZWwpE6e73kuXhruxd9O5rqE79wgWgtGgXx20n2cvs+7pDO9HjWlvGkw8NZpopWbYHTlGQChy2ZDK8YvGOtYF2jjTmINXhVyHW0+QiAO39xsDvf0Oi0HH8cbXwbLZwSJh+rjnkweoAJUoBI7zh6Mu6HPnntJyZbcjZYvxZu6Vlgfqkm9zzynl+oWZaR5VfleVFkFOOVLSxo4V3Ic/xCYWyiRLfos61rgYq5axGGKkqA9BM4ur71dWOE2d0N44WIAI13Jf6Vs2Q69ng0zuhm9AFTR8MGxPyyf41kOG1Jdds/p2OSYl9fMZ6pVbjg96FNQyssIoNdwYS7SddpdPyjXJShF/3NBm9MKo9sj2p0TZCk8oxrpb7YEuNPv4mDHpfQ4a/t8pZR9szn+Xhxv/qoQvDBaT7t65gVVCiXMjw1ASzrHbX17KjnDpu5pkrq+HA82ZzqW2DbmAwvpKGNqy3ztHkqsj47jSI4RsMW/BO4VO2GIre78O4CF3ehsygLLOeWLJcOpmeDtjJ0ntfB8nJvVr2+3npxGvcku/5jE9Loe5WoUr1sz0YzT3G/0SJFUY34wazOeAMXfKW7wFRCt4iY9UVxbvp7JAUOMz22Fyl8I1SSe17bUI7sAGrkueCg1FwRm2YOQxtKr+/QleOsoeb8lZkD4Q5jQcCiZ2tieTjtWQjx3hVLcR3548sxFWFV3nlfP4cnAfh7crXEKvu4QaTKAj+B6zeET4Gh4Q7gQ1xtxB6w7kJS/v+ksKAiS808+taP5SNfjHnGRckwXfmuFY16Pc+tK6bnyO4Ndgt1vHj67BXf5k7LbOYkBzuLVeEv8DvtN+G5VmV5R2BvKPprjuCnGPeLzoqFKzokFA0Cwu4f1CCffp+7cEDceayf1zV7QSfuDFHkvQA6scchBbCe5cWVx5Ks8Mq1b25CjT9J6Z3PtfaE6U47MdXRZ8zCjrc07JR13d/VjyNb3xxZzDy2F2EM7aCYxEKRcXQFwKrEhsFRvQwOkacfKG4e0cZexOgozpa/9rN76sYf8S4we7LmnfIXqnCbcb+tTaEUIy5uRnHwk2GNSkK2Ub0+nqYhelWzBcnn/k/85WYQGAbtfcNmlfeb1dUq27a6VatQoAUECgAhRUIcGhezmn7czvcjZt6Wev7Qwy84VwL8XbjU2x126OqUITmBlw+rpo84q/DEE5XnhLpW65kJiME83ywz9zj1xdFPsyoqFE9VQe5l97roPn6xS47KBJmNoyImivFf5thyreoc9bny/B7k8JNHEvIY22bNeul7wm7gs8GI8Aa/I8jKipioiq7hunwDKVgaeNhEBImFe1upkT0GUxYntLObGiwumO1nq2TYrHjdkaYjL97ZIrxT7+tz6L+M+dZF0dQ+GpPzxbsECA/sdseYg37SQLJCK0SyYRXvGVD+tMAdysS6LsGc3cGEI1e4IRd263RwSj4KiJw9Hnl7YKy1fJGjneYpjKawgxZXdHqWgAavD3acnLWCcu9kpB9TU62yBm2B+XhXeMGde6sTnbuRfvW2FWbSywv08OKsev35Ktd27bzo6KUSqCsuW/0bV2jj5I/mbeu1/IQjmR5b6b6zYJv/jFdvpe2oPforTBr4mkt+e1ZK7Q/x5jeQ2xcNvX1n5yzsvPy+wbKtB+fnMEi9V34FD9o8vrcT+RsRGcfa3pEZR9keCYn/5of9Adpz8I9UxBPuJaf8VulN/QncC/iMF+P7TNG5/oHnpB1r8bv49UwAllt8S/IuMP70VtOs7WF1fVG37u6a4rUpI+xRQLgftPKRrBRbkvk+8qiz7vMP0zzj821eZ/m0YRf1OK9AfE
Rz55Qf7MwVB/oxT/3U20f8s6oETv7NEOEoCS/T3Ug/mf9XjLzIK+73TQFnq37Nkfyv9YP8yn5ICsGgHLPrX9J+OSejvnsPCMRLYsP82rvz0DvC/yJbjn5MtOPlbtqD/zWxB/xPeZOrnOYmn/7Qn+R3Z/sR7/IpvP2C4wMqkABFuOcVZlf/yXvFvnv416E799Gr3n9XhjxbqZ6H9q1Md+wvKgP7TyT/NIr/zHfhP3yrw3yH+f/Zmd8iBf/nn4wRJo39PTvzZc7pfHKBaSPFk+g0jqPENv2vhQ4l/mT+k4EAHDB32Dz1+Og8+lfDv/6n+FYUP6/4r9lea8P8CxwX+/R84K/p///mkBQOhKv1bcSF+RNk/CAxN/FFgGOpnH/dfkhnyDwT/978F47/8xRf5Xi0hHP4j+X0U/eqMuH/P/Dk4/k0O/JtfgfGTQvz6OzB+avt7fAcG+hfTA/8ACor9Eyoow/6I/Ta/hNM/Ysgf372A/hnMYf4aNv3PUhX/CCKD/ZPadBz5kcZ+IzJ/IRz5BSr8Wmbov4bM/Fma5a8ckGTVlKff+SxgcuFF/c8MUhj094kVFEN+/Ikiv45U/kSDUZb9kWb+Chz5SwmVf0KMzOC/ZwiO/BHy/K0wMvaXcij/jIwgfq8ZKPojy/5dMo7YfyKJ8ld+x9CNAb/Ex/c7hsrevO9/53cM3fWfvx8My7aISXc8UOAL8jQ9aNVJ7QX/lktcfX2XkaA6nSaEzIVTZ0XTuNIpS/M8Lq9NEjxit96Pynhk8F5oKfHPqtKyz1PXZyN1F4xnJIFrno30bARLeDicvT1F7Old2jDQscKscvyBPeiBHql8Gsf3hHORTrxPhGWLyafZYPLZMDva0VC5+ml6N4TbnOek1gR/kUpHOYQrL1pv52jYxfEj7k1w4iawbwxWC9wDi85HOTRPtfJ3Zu77bBy7sl+T+yPgijfnzhy2crxYckopJtkCd06O1l48dEPxXs7nzVwUJkvTuuqNslndeCetmuC6zQgiA1Xf2duRjJoe4qbwustXXSwRe95JEor1Oj2CU9Q9otLjeu+6/OgVq4P1iBa8913yKSyOEJUN1pPxkxzkm2Jd1K8qWx0+9Q/FO2EW8kEe93fKh+1drnTse74sTBhxKbkV1uWeGFVwcP9FVyr3MFLnqjhwwgr5KKW8SuSXeXk9MVNMokxRj/2CKt7bq5qzp8WqQ67zTM0ifOLjoIeR3AK9Ciw+u3rJGOC6gyfhuoQ2Ksk23GvPEioSKETn1sOp7m6OZsuxWMp2k/ubGtoO5ciXPQ0kwVIi5la7JKZJHl1Tu3JTfA0ZMkfvwrs5VLB6/DojU7uiUbt2lHanZo3Pw+NKXjTFq2vaUULUdS8E0NLLaKSsQPMnTvX38Gz2QuId/YVXglUyaBM6kNNCkBeBSs1Ex8Ci5sYWz82EBQqwYOKd6sSjsNxl6Lv1Nt9jJ8O1AZujGxmh2YU/Zvfatesi5cVF4x6kU5kk/JaW5GaZa7AE68WvCCMsP4854sqqP3UNyeFFqDIiP8+eqvDsylhuQ6cvg9foBKmKcJawueCmnGcJJSHhs8V5bD4T4sIrsPIkPgjCMkdYphRFmpkUkr2njzNb0KRBnv22ljiwYoPnH1WFug8/K2v+SO/djsfMthFe1108isSx7EkUw82zOP1AcWolCNz5vM70lBGTYLfyKei1Mz5solh2Z8iH6V73sB73eMvCLtbZfbIlGz4S42HPq+sG7DQGEyzXNuOyvdimSl3gWX1nK1nHKX++j2EBV1gfk1erSlhQ+LvFBp4qHSKwmhcaw1KTLkb2V9qJ8B0Lr/eKXEKyFalSHh9W6YR7ee0zI7lNlXVT45HtipsiwToa+EBXD1yc26g0aVMBkmD9hng5WtzCWzSJXqGmEde6/efrcsSh4lhhADqMGPYjBqIPx9M0ejRB6wU7rz/vwkMUhiP4KkkSuavJqnPRsQznDv5NxDoGlgrDN2hkAnxbAyxACduaEl9bt
VbUPA8qgV/fqnvLrM1Wk09v5Po8bHFClFvdVFn4RKKdNUlyHrZbQ14WiyU952omLcd3ZL0r+/L/2vuyJkaxY81f44iZB3ewCnhkXwRiR8CLg33fJbZfPxxVl91L9Yx9Xe17e+yKiioJEBxO5sntfJmZCNKO81XlBghd0NU6h0Mu2UlipQonamBxuWSJdW/JdEb9gTiNAcU8jEVdXT1Qnqid8O2A4mzqBB7/GG3YF5vqQRsTfPqlZk72Bn/G9anppTBdBrIVHHyeESDcdp0gaxnfUu6UrvXNp51JrPPUdENTzRUJSaRlWg9zRI/OiH3YYk5e2bldz9Rh0uhIMSut4COnnluW5Hw3G6tQRJaxgXdlxQqdN1BrS+5MDBaIOQsDCpNAJEL1Lk8rhowFM1hkeuOjT41FDIlE+f0uTQArPck7ZrVVJH5qz0V3SxPb/l2Cb4+yrnqE/HDQWCM7WMq6MfLyTepyZXBBHvn+skUggCEvib1dt5UVIMuXwDKx2uCNY4QUzP0glRDITR+ZLV+kkhtxG4q19PiSydR0fQrD/XBGfhN9sZUD4Sl3eN3bU6Pc3y6FsbJ9nGOiX4PobbZRJxvJ6ElaCHeTANQNP0fJfkq1mbWj1+D8W0D1VNai3Kv2gmR42fKhuenTcQya2wyg0dr1EjNmlsr6cmSRY7sWtjB2CIpVELvWyyTFiJD3xsPkjTh5ZnxNDgFJBqumLRXmTBzRLGajJv64P8unKX5aP+lrqCSvvEVk/KHWRunqyHgn15v3fqTcczR884PQDxjfDfeSwWgloWmSVuPXvKJauwlU8twk/65gDUlG6/uSzK0q5q+UC2QVUy5LIn3tLRfQaT+rfN3Oe2vnlMjVBuorStDl3aXrC9anX/iwQAGUw1Azojp8CK907gAKMq/5Q5SdDfRWylMSIL8Fr5xfkTD3lx4DGQENQFdlO4EtEnIWu8pyA6q+wZUwR0iIvwBlLUANWd/MFprzyWidbsaTyOnn20fyD4bvC2P0LNlnJW0gm6l1SgB+FgJ+lnt4pgrKiwWpwezbQlNvfceCrmZQwZaCOPuUPU/OlrMnD01v1tqmlqcf1GW1XYfLabv55RJpaoj5C2MAnLgNTA771dTzbkT3wVC7JKdlMu12PtLvx4BAjJKQqepL+CgT7foeFwtGo4uvLLL3y4EDBcP7N4+c11oethcuwFzEy5Rx67nTeybEffF8FaQQMcNlaz56ns8fN6zjygJIwMdkoQkkWqGJlPfwk2lrnsZrigtuD9zUGZ9RHAxIRbwd5xFssU3giYYcPYl4krftuNAyT9mKclgUAyp1Gdh8yMr5mnYvebMtxhmLr/fSxhq0/8aIQAY6FdiXd9JD4uX+GKj7JwcXHm46l3WtPA4NlFfNaxDOtZqm1lwudYMK8Me8EXCD6PWhRItAzmMTNYxf6L/ngSrn3e6tuX0uC2Jlz3G1EDbF1e3IZH2sV7q7AXtnneA36dy301kXTefDB9FnEcgm6Q8/V/nbW2j8x41m+y/GGRmNOdBixivP7ht1wPKcuOzF2COeUlWfHoeCLkVJJ+/wkfJ0zG9bIGf+FPJcPhoPcXk46sHmIhCvkYaGuukETeSQEVbwIWlqWKSlm6cgY1M3z3268YCqL9hl5JkgDxHJWj19eS7nvzijtluHjbh7BpIQfP5WPfK4FJGYvLfFEGk41OCQ7b9greSCBySLUXH0R95aTL6LtsO+J1n5HKvydtfnqtnqmGl02VQ8tb0sFnkeRTSaUr6TUpYRh5AGy6LXqndjH0d5MNrr4bue07X3GhYwYrFRXYqOesxRzFZfQzRdDHhEHLkxgMHGnHrCscuT2LAMn9otmtLY++jcYC43qHtGf6Fsmix+++qxksnRlxTy92HyOQikEwookWyHnL8XtyzAWGr3Qepu3ifa83lbdeCArDDOQCCf1xPQGCt5x1nMymgMPcJCyhu81FLViEJeBxeARPJbxsW6ZqMqN/edQWAmpRs2veEMa
coV/yBdhmIUAw7LiTeapOiq102x0BOOWIeaovTLsRZXTH8TbSAjIj7kGzs/K8COl3+hXC6Ome+WFIA5oIvMvsas2awgkfy5zFIkBRDZX75BJdeXsau5eWEb9mMeq9K6bveuHide9lEFzA758iKdDZHBne6W7PSPzkFR++mffdXmdT/jWHqA9BWYopBF2RKlKw+EPcsFgJgRPqVCpyKyfgO1bRjpRkA0cHuL1yXDOJohx+aMzhO2n/mem22e+Op+lmApTN1Bv8jIqkmdHEbrhngz8aTy3IJAeYcEibjxJEKr3y4Xx6kkjyV59TbRKRa1uMStd9F9F09yCq0bev1ygvUvlD7jzb35rvRcX3MkRTBU6Ne9jNxTwghPcs942tQtxH13F5phG8o4BdkUayGQ6gRTbepg5qf+04q/VHhgwubWet08UK9Tf1fV07y4sOb60rRFqD+mFi3UDHctjFZf9/u1Yl1Zed/hAKiLwCrudy9jekKZNfZJXuQUSKkrDF14lGUbxtPD62ypQqhbxofdPeL8u+m0W222nNeSUFIjbziWuTLLKbV4X8L25XcpEre5QHqnCZKLM2RjWe/mRaBSnXFUN19Il9K55oUxxAInoo0r7ad3Y2znRcnvElR4KnOv9nPfVwMAWfd2LJYQOY0zMOsiLtt67qATIGbpq6zvadON4UYFZ1nIItd3W3bRc0nnQY7J0S10cepdZVZ0WCsEhU6PyzZnKNrfKNjyP26ZTOqKuCuJjAcqjk5t7kRvkN1MEpU2OEvCN7TtLM7WjoS9+yvdV45LLpdSWkwyI16CGJmSjSnN20o5+8Gkc9k4I0pbUjnKmeyUMm+SEcKu+oP0DaeJjw578c/sfmrvbHXQN8hwYIxd0DQMtDNjzvvNoSQIB+yy1xLIlZDHoqQolAh2NJ0QUnysTgMSnkT5pJIUGOZGsEZkiMBGfSsUHbi2WAbWYqEZoNTk0kpp/9DNz9oCyReTuWBG3hKGfo/r5+XI88GolQj+HqXophM94ojiRvO9k4uvbBZVp/DqHlRtgUYQurGMOOq7gjSTh+x8yoOrs9XHORsgQT+BQghP2WQfge+2+wRJJnCIrFl8gCxXhQuAg07OcOkB2ZExMn8xacqpt3cDTAIFFhISw099D4K6nrIoomO9oDSnN89j9NjX6YPleUom41O3tNzYY0DtlW2l6SGyynZ/q2Ndxf4n3ex6/w1jUdlGzxYYUXtINoeMnlXC7hjF4FNOaR1tJoyfPz3XDbdPAxKUzZJPaagROGCC4M2lomIrZi/qJZ+ESt/UXo/8zTbRoaPkO2AqGrwcF5MMuqQr84gbfUosrMuL+b6mxym+1iz/NFyDjnEXRVm+LxFVtczYO6yml+LlHoj9LQU2SABtNR0wy+1WXmtLpneGETq/4slA3R69b7fvWbjphkQ1BussoLjYLq5FNT15ueDXFK4rPf+Uo+irw01C5yEAXmAd3JCB65pPU1C5WegYQmPQt8QnqktHCc8e9gsoRDDSEhjGV9W32RtTtQa9G6/mplhUu9h+leqf2rxw9AjT3Lob2nAnHR4hre5mDHIF2cW8axm2s1nd7rSF7XJ+6BXM9iqXgMnVHHRndcRWRidPdjXXbeTYpjx2M9DXEzDAGwGVFSAkBkE5V1qn+0wLPsuU+navb76oKFaWS4/3fjzdTroPVPp8hlIyRgjqsAcJfgXKCtNTVRsLzbFl9sgZGuFi4jUbOdObAplecskrJK+oH/WrcjK0s05i01a5xSxNEt4HduwDrZ97W1kIjFUv4FDVwtbPa/uY7bRzdb1gc/oVFis0bCmcrmM5Z/W4EW8OuFCMbHa5LjDjTqFHjXlP8Q0PyOuNg7r66EVNAjEMBqgtJJAU0yUenLS69pm5aG4EECNvdvJO1XtNNg8wG/00YpHdGz+GuZD2cdnpQ9RdEkqT94aCdtboer7cYViijurQHumTzhAQSHKs8SK+cY+Dqd/Ip5fMIahqEUJ5lJeNHqKv6
f5C/IE5sLlKHk/OIdPaHMWHxL3fRrws9u2ySLsER2AZJuujmJwD1wot+9j+l51620FWN1WNc5tQaZI/1LC+7HnCu6wUAcLuH9nfeOdyY4D7QdyaUwXGG4G/787Zh241UYkk7bRrjy+sqDHjEQrQdT0rc6JF+F6Q3aIKokjkIbhT2nXPuz4A8dhCXGaIyxDoDosjlNwzOLlM7UGpuH53nD7+3FlQNzGXp1TPGz6jZ9AhQFAEV6nOAanRB08Zx4BnrLe/zsTtOT+iix15IORb585pNjqRc57EwSc+r6U+lcLJMLzcjk3fFiPfiKBX9yPmU4LguFLME1ya+jVEJeboYW1jZiLPbg/YkPtXhYJQsyeLNxSk2eslHVzrTQM2jl9tptVT9KUzLlkNpDSL8huQKlruY+dYrQl65w8wkVuAIOb7UqRP7DJtWPUT1TCzJ5DtYVmUEQfez1ZFOYafYq2dpRWRirpZ3CT1tckL7q4mt67iK/Jkq8e2laWoliDpqET9M9BF3hLy+6sW71E1bg3I6G45Q7/BmUSc/oFm6cA0w2r5+5yMGwcEH0wZyVNpkFR9Su8H5TT9g9DlSyVsJY1uRgC4vC+maotys2fDY0MxQdIsH5sxgYC6cpkq5xAh1i44w/XonU1ph3uB1Fi6ZbDzXWT3271aKz6uHvoqvlNmJcf5kbu+ZvbCfTuwxTw9+ARyCd51DG0aIecVP+V4TcgW0NSCSs43jT8yys8NsVNkOedUSRPyd1fexgEYQ3dVt10g3DkWaTH4Ng8zoufKGDS4UkpzFLIbbug7KhuD2yP8cndb3dYCuzYI/kFErsKQUDPz/AaiCdjUyp7t9iwGCi1iYchqigmm4ZKK4zFz2Grc7MV0uJsp3HGEo6KPbQokh7Bda8yZhhBUTRMMYPCjZDi2HJOs/JjmQkxq8Lpx2ZBjnRh43YS9k5fOyzDTNF5Dai2RmvezPGkP33k47eQ3u3tNbEP+YXkOI/uk1QAtxry2dooGpN0TyC2+dG+/CWraKJRcqEyWv0zgzc+ptZ6lXz3Ri//XUq5uNlBRfHY0i7BNPQj0smaab4tYCI4zKhJSNtHe8Jp6SUWGOl2y7iPT4IT7Iga2G4MOOGy+17s1M5WWkokf2P49LMhRH3lBttDN9I7M0qOLWf1WQPRxCIXMgGjeAsmBQxDS2M2vzwXhh5kIDkXUs8IsC/Ja8L/SJ1APA7/B8wqWqGjufFXdoHVVn+Ulz5+AuY+Wepk5Db+WZGyUEhkN32u7klCf8ErtDgyJ1NrRHiwmgZnRrE/TBs0iRERQs6pdFz8WBt5YeOBhfjZrBVeRI+os4CwnfpaLYGFh/SpxhNRv6oyK7qNgkOwSqjqIiwNhntYYt6MH0ZfnspUWSTuYMEEC1uD6be7ffB801gNMM2+PFTAHb3u3fNEDAniDmHFjR0cwYBTdKIifasmFhBZFJ5UkoOVjYnFwOmbQl0g9tGbK68fOBjiw5jB470s1VQbAFU/pqE/SXR6Kg/74kPjZz3UB6xz72ZF45/W39YyNIzhvvo1Xz6azXs+vEaeTSZFJbi34h06+2hDu63s9T+36XoNGYsCc+w7VW466g992Rt2dR0imXtrvK0mrLEGl9Bn4ZyucRJO4bZSxdTI4fc2Vz+dDLRGVDZK1xohr7s9LEueaM1VvT4W5IQwDhZRAia0WdWv00u/AMmAEWiVyRL71m+EGMBLfDWPtHR0elHYgkkr2e+RMXenjqeE5fmTC6XQraBcZdZMwTpnob/cFco7KKIwX2CAVbq25aaJsJV9+fwPhMtdAUcyac2uCO4vSpVfVP+8yFjnrwL91KULkr3tccl+CInc7d6s9Iio8n2iGjEaxVlSxnsSYT5zDoLAW52OzEc7XIV9W4SVs68RfQ9IVhMsHfrixgDuXuiH2+K97ZxNVlwqHM7yx1WAta8rWsECJ0NblllhkUdNmvrHUo2WMF/aEs082JqNYLs7PjVIUBdjhRX+vjEycQn+10
w39AP30z6/xOQiCfmOr+yse65/a6/4WdP0L+Aq8/d8Dv0Khcf819uqv2SAAOjDHMwAUEMz1vb2E7J9w8Gl5d3+5jv0N9QUwDSDf6h8Bf/3G03E2vx755YlGBR5Tf+ALyDUj0LU8oOt4n01gJv7yeSjOX9fMf/F/+vQf7wXOfob9v67/kw+w4ro0S8EJkH/0+Wn64+j/0lz//PknFxZzlvVfLl2+jAN8BrecAS3/95dT//AwvzmoXz7rxx/94wP6PDhJh9eXT3//XCLfay6R78QJvx89/kuv+msKIf88hdZrGMjPl9J3WkbILeoAAqiPl/HLsvqM4E8495cvk6QCOPNnOvYfR0X8/BTORuM4D/uXAYPMyk962Pq9Rvi3/MwvNwN7Rh/B9fUN/qdjqFDoO2kWFPolhgqDfp3wAUNfs5V+Di78HrrkW9k3/7a4qb75G26KwJYURnvJgPI5EzLb83ShDgd+PzKvckdevLwSdpe1x9sShujZLWlICXClOVmhQnku+KDE7PnYy3sorGZL3TpXBM2NGeWZTqw7pO1Yx8wgTXjrvvRWPHR6aGLLJj5NIbpHR1DULe38tTvH5sTxHTcqjieWW5rhGI6jqw6CGYZWc5cLTVdsvhEwajKMyA6V0UgDa4m5bDh87gTS5MKwUBdGf3kEhY2cQUrMi4CkjTyk1/Du/bFXl3+pvu/lIzXngG4wEJ+yNVpBivtKd69mk4h2yJWsCZr0Pj7kPs6OQRCoHapWWkvmFILRQunpNy6/3aqExsad8tdl+6nvOJ7qfkS7zKCfJBT5tDGKimRPjq3c0OUuoDgLvA/Yx+iaNrYXKaxFpjUhp75AVWUgHhZSeXcqiIInoUUUjJhek961jQ7LlefcRm3SHYytNynr8n4BsQYtXDihhOnkBrzAG4mvm//2ymPUNcvyygVCdkp6xv4U0nVcj5Y8gl2YiVBLdR7TsR1EGPGoZ+i8jIbOdN1fZrQfW5fCIIBDiPFuipHiNbrzphqPy99spKix2JFyu14aU5UD0RGuUAQZphHECZF9EWnJ/3SYgB17REbenC7jOKqreY9Dsbc7NLgHmNF4zmLL7Wc0NixFfnXfVA4ESe0RqkekCChODabuKx3U9zGw6Okf4uJ6dp5PCE9qgIVKI9Tvkt1ZNih9BS3Tw855zyKZc5Of8XQpywPWL99GwPjSagiyj8EOXdZaSMjbU24ptQUXgZu/LgdyN+118oPbcRR+ptBkElDAn0ix8HKWhjaoX/aXCRTTdWffb+2tJULfTfALfZCNoKz0XtZ9V+fO5Tm3Rxg13vvO3PkeYPwgRqvWgrGqIXOxXmwztmoSzLvPXBih7ie6dNcCkyXZfpPed7Dx1bMvAtUFe9mQYxl8aRxJWfSBo0jMb4zrznx6nNY1S/R+Df8eHeTgScxrqyXIQauRUpewhueE9DSr0+PchrthxuhFC+aqF0Dw1+bu4ercwpl6eiTYOCBj6XHufitZzXCLu/c8sjuTFLvJHrfs0PT4U5gMfQj2liDhe8/B/LyZ8+gF84b16tEdFkSUctfYftrfhrl615tgNBRZZQAM0b1xyg4Z0tx77iGyN5V87LavUXljLuuA6bwH4kxVRbhvaEuP2b8UHanzbZ4FgN2jWw1qYDfP9XIpt242wCYC4+QTrMDpgAszbI/pO/9sChwQKDlNve07p7bPI1NTwtpZImm8ZELtcUxu6M5Mzu1mRrj7drt54LXWZXtf5sZMv1+f/QrcKJlFrN7kAnsv9unfOnJaaZakH6NoFoJcx4gFgWgA3t83knmBow5Lzz6UayE7ooWemc3Ab0VQreYRnK077/Ymr2JSdDGI/gP/j9usbGnp16ziFgIOgNYJzCMW/dPujN3OoOnpKM0df/ni3KXYODX2dqmG5119hquYM4NxEp/fVBL0Vm1dZDJx7PC+nIvOHMtBHl6heIPJe3x9n5XGUoPtwWLwrh8Rq/c2nKhmaG1j6mClmA5vTn2wt3PSBUOEF
TXQOpjRu5QbdE0K4E13H3SjeSMZvkb7/bwxA1cMLxl/9i35DLjxxu2NMSqJNAuVHGqQOuuLNACx5juj4PanwTpBqjdpirADNA6+HQREYqPnJWFDz3tNbcbEFgjcRT4sfiIGxSvqdfqdMTxhnzReT4uWJy+VtWuttm9qhBGjhrwY4MfjnlNL1XzJrA/oMXTz95uZn6NTCPGq4aMgpwGDMHwOKihG3pN7xEpliPXzKKmyjeksUUDBa3fb5RdhVXlGBu2uKwDOVwHIkYU/5PORtuDndVMwiP+klaL3Vm2JHkcJvS5d8TJZID4vRULLCTNhbrn3dUahLHSGTZwSU14693PpIKkOnqFk63tDUnLjR++NZs91eo3pi07Sqa6BhBQM5FF2KOJcExayzsT3kVzfkYkDLSwvod9FDUi2AhqkFOEpIYLXw0SN9zZ4090v0nw4dE9jrTaGUHLAzgOXEILTZqbLon1dH3UJHZ6GSfhd9BbBS3udh05iRt2Fjt5PsG88dILzpkN9pM58bhrqS7RqO4jskoI03Olz9PZjZSFJxKHZuV2yG/5yiUfJ8VT70g4/oMjurbUnCMQtBl1t4pu6o5I3+thrKZ21jhdByySbHRFC4a0cVaW1oRCDMNQcPWcLM3EoNrPn7dIT2MR5mJTqdp4yDvGejuJWw86CsLqAyzjc7yUgU4KueTKCDRrkLphsFB1+He9pNYGT4mG1BDfxU0sUMlo8kwV5Qf2UFowiga1sPAdRajc9GwNs7OJqDqRPEk97oPDX0vBQIz7duNu9senUad1wjDe0Vh2ljUWLT1wruamLOb5PGSEujQqlrQWN1yyAnb91DTgY7hvW4imbeARurTkxlQe0FdYTGhJIXhVC7r4tuG9pgFCyGexGETE1m7AwP23DMHhvtqAZI0q9zZQ9yZgOHwTsLJfl2d/W1KorBNv9p+iLBnueXgQgg4Bhy8uO30RNA1ylGV5h0lP88Nvt4smn4r0tPe0E86W20ok+STWTHaZJQK+xBqWe5iYEi/wUbT/vuzjx7GrJaAsT0FqXJMPxHJHjlAqg2yLrIN1tXYiqXLA2U2O5YUtOat84ojQekOBCNlpJZpCPXJWwZ2L2DisZhHeYute9UrJQh/bTRwKMWcq3t8A+2IZ67HoPW+0rOMsduuwJ2+kACmFWbP2J97ZgsXfcTu8NtfDl4FzqgQg2VjqzVzXX9QB4oU8LzHACIIvWAcGs8ByBvJVn5WbEHRzAj0zRcYm3tEHSzMTyiyHzCzF2K/jlyMstYHpt4qHKr4xiuj1U/7Uz6D5smLyfmtiDabbYy1YcbwOPZqEurwQkytHJ9PunZVHnQhZZeHJ1FodREN0LbP+vTBqQGe8B5mCYU5vbM3FFImUt1/IhC5O0I3oLZ3IZEre8xsvAuUTFe0meecfTUEXiRWL1MHOXXZqeUyfALTeR37LaMOz9ZklPIXpwlxYPrgVEASUhTMol8+82AN+SD3rR9SKd4SBs+ks8M0KeieZivAu4boPZji9zDCtWTQBbfExxXJa86x7xSwB2QRgYmTJGDmwPb0vaVOwSx4htPwGJx0lw7x2AtaCbEhgK51Mmg9XGrkQnKdSdBpgEYgXIeqItv6tJ0L0HiXxjj4LOmO2iL+qEhnWsOnVa27LWKKB1zCcz/zCHdJCryXjqxMtQN20HQBUNCyHp9ab6yGiyEjunXQ46gcIVDCxIWoDH7LGhCZN/erZxPo8Sidl0cWcF2GutyU6r5Kkm908Cwp0hP9F304KqHH9c1rY+EMGZp90zaKhaLjSDJjbgeDId4cRFrB90bif55TE83svQv3PINbfP25Gqsr9R950zCDEDfFmFhVOrxToR09Q83iMguJXLKJzXT0sDkwU7ZM6zAJsLlhm6MWjkF2G2U+NVWK57+GSQwO8EUswt7rLxwMZAhJV2lVUG7Ov6RFpm7EKqnsU57WkwwO0I6yzXH1ODfZ0YSTG9my4h4IBwGQR1WsgoEAmw7yXg0
36T1pgEUfBo3eNIewtHtiV/+lIG0xbN56DBl7Ah5wNBSswXm2lsMQ5tPSW/HBPKw+Z8JRzsnXntULih1Cnd3fF4ixWXOlOlF27Zzj7WeRvIxIEWl8krusVYn/z6OGs0IBeTyTVfI8tKsZXCkt4Wm0HLSEX5woJVZNSm9zQ1aDt2HTggBqtd3GcNinNTDSY5wi9DvbwsByLVGCP6KiHChlNpErLzKm0iWssi0X2PCjfhm5dvBinaDi4vs3esnaWE61A6Oc+9sUqHF0Yt+lUhgamL4QhudoqIL1afLvjljhsNmF5MdPduLjmXRDU9QzELc1pWRsX6kYrKZQGjl1EBMOyPW7IRGPrE4s2WY75OY/4mhR2nslxIDkqa7a1ppYeRLhsuwq9qZwE3ynGwuf4TquOQXhfAdKyW6zB+D1YOyAzSFLR4SqEM15+hxZpMAmRd541gHRYnWTmqyiQU+6mBr/ll/Hk7tkVm8Xx/2VUK0ZjkjXzjujxjlLF4YohvS/a2QDSGMsihVhVWXlIpWfFEBVqcyssbkTB+YiHZpkJ359gqTWhPpLhmxSO3mXptYNcmZWDyti9WATToRPvdnbSzWMAElZigmBf2Hg/AaG0Upg/EeN/6Z6k3jetA/sPL2eu5OCdrwxDSx/ZA0s8OuxxhxvMYOJWHQvmFkm6jadxWPW8dBbkZ4h76ePNu5iw+AkoXkLApNOwS07qmcuB1q2gvaFs6R65B66cWRzvUrOzuBGLz6KO71TwMmbgk/eWOHT7kDoaYzJEexxw2Dr3EZw9aWe6e4Tt76NUaPPK68CahgxsnuVxlSylVVhhJ7hnOjn9wytM85LKXmx0zdsEzuOvcpf/jEDKPpygX7aJis5Zwq3gSod6YrICTj+VzrmqWO3atHUPZ+t10bM7yGdVazMw51YrHQolgbkYFxKLcjQXVT847VDKqguRwCG5qKADvaMpdJQxMUm0/bQYd63hh1mm2ECxOtJIKWwMUrWxlWTBbvJI9rMmEfAYs98c0dfKJBQtFMsHQYfUQqdLm12GiKEe9v0J4DxacZNxRpF84v6FiSTpOqTOPHMAo+AZ7X6/tslRl7gaPYQ7pNHZdhYdwpEL5DoEzqeybf4fJbo3Zw63p3ZKqncp4DBqvJXkZKTDtWygrV8fRnvj76IPaaH26C/cD12B4KIbV3NE827ye3TPPVpPGD0weIlwtB3ydbcrLTNVxpNPDlanLCpfCRynms/vS1pvEYvK0YKgZBI1/zbTf1gWCvnfORyfk7RLEzoL1xTo4X0fP7aVgagmHZX2Luj7G1TvE3x0s8Hto1GrnLEPfAii11F5OFUUmXCnK5yVFzDDZwq2ncNW+461kA4tjcESEJkqJgFynEl9hiZXGUy5NpqcFMRCxtveDD942ZM76Pn2JwMSbEocagNo8pDK8Bd7+mmaDzjKaf/gByLl7VxVANI2bmqTILN3vvJ97Yz2c54a39b0hMLstKirB60w2bxurFQxa3C9P4LHskms0NHTQvK4GnFOo6sbIXloAnIR9sTv3KvfKdml47gWJW6ZPuotgRxQtxzvVDnS9iX1h93zCCEPQE5dr/JkLMkomHbN1O/KMr/5NOpfLz/2bS1lomoEuYx/IaPoYe7ROnqlxr1+yIeTEfIST4/RA8PDxczxL840vKx1tTpVUOPDTmXWtASqMMXHUTkNKKzlwZ7qiENgk38S9vmFSexjUIYZP+kE2nC+yN0lWD7bM9kAQyFQ625TCehHMYO0gAzaRIVwgAHT/CrY2eDnn80EQL4g2A2oLRRhFbMzOyegJS17Ult15nqFkHLzPpUsJHEDyOmON5R5DJYRUSXZsx4uISrAO41h2CQwhGEe6aO/r0F3LYLTUlPi18iRZgnjei7ALz4f5ZKXn5az7UV2ZfZL96fLtZE3HX44HLQhsu9ocI1OyujPNoxEEwmIfSCbV+c2DCFuX0AvHcZvlkuNMy4OI5b12+0sa5VhCskJ3e5s1xsQb7dNvS
oBkF40bwsKu0/O9HgnlLlPtw1yaTlDHZh+ITbvryBtuMezxhNUlCUNXXfkm0/kIV4j78IoOK8Rv1SvSjGONX+9XaXfXc2sdvt7VE1FdYx3Cj5Cz+XBXh9LBtjuEtV4L1R4A31+iHHLmpHhZrjzTIDoG38WSzf0plFn7knqkmTUqr2wFBXC9Q+UjOlQ0mK3wtjmtgsJ656HvJ2UzFzeSXKkAq8A5rK3ZUpHpLBTiK2zVJet1I2/YYUr9jbh/tuNzxOKN9jrz3oT8khUKfA8m84leNj/Jry9NMqeRYFNYnXFXHTZTqZp2Ho+7enNVebNVZZT4Nltu/REfyLCYBtMpyh1P3jgUVt4WyO5u2+2GCx51F6BAbmHT9kjMUac26Vi+th3gg9r3lun4ux2uuBx1OkwkYYtkTFJBkR22uBy+n2cfC204UmpZOY9VYQEiWP0SZS7SB2VjFq4Al6QplZxpEXDXOME+ICi9me0TGKNKpVHIPgekMxqyWsyMEd3IoXN4YJUG52vxdtFSUZZ9YpQ82fxIzoqOWdGLxshh5HPGk1H24Ta6JN/3Cfpo5wiyzOd+8/yHUO+cg9R3+9Dv120nC3pzqQQtyqNO++xBqEGoDg9g42RFOcDdgbQY3zeP50UduzZKcgeJgOfar1a5JMAX74lHeo8XN7IzXahYXqByMs0yIBkI4AWcxuHl0byJUiGQdDgqWCBx2LM4OL68GZEdLCr+dggcFpdXmTECxsXbJSasJx0ecSGClA5B0ZlLt8e2tl4+w/KIcxRjWoyOCbMlkcvHTnOAYjIlGZGOtNRzbSE3QXjCUXgT0lok3XWWUk9/SNq7e4xuiEnnRjcdYbDkcHjc6/Icp2xy/PINm8BQfRBe3Qqs8Z4aGA5HkEBROimpOBsTrck+f0JWIs0Tm+AV5/MI6uPQbuEZtgA3hHjEbZd81s51LxwaMEtMRbxBkAaleOGWS+LlwjjuLdCSti2YAN/Dk7gXro9K9p2kd7X04c14jiYaMR3J4tu1BLu4LFw0cuHzyS2Xo3urgjFGYHnkqTsRl0Bqc5JvT1Cauj6iRsN6vvD+snTXh0ghj73mHk8dMzg9GGa9Dk6HMo6gah18DN18LEG4W6CezsO2ky/BJKyOPDFdnq/pOc4vxpyfdXgqt5prawI3dtE41fGLqcq3fegq3ql/Yinlo9q4iGSIdknWXVbDU78GrGjKizgNsomg3JFhZqVW/LkEbSs8pQOBh8u6UG60LNHAeKKz20GQlL6Q5gt+eeqtfD1PrwowELFm8ZM0dDQtMtP74lYk6T3PlZFw06eePC3WWSfyaUn+vmdlBZwmTh/MTgAFykGfUqGghtshnue7UkdDWeKC+9SFr8F+hYsYwEbfDEjK54WftU+ArJsVlI0NQ3cT1UFLozLecByHbJxfsuDRZSDJR/jyN0rfFhkmcW55aLUPB6kDUemwCNF9oM0UNoZpZz2gnkceWwosroFSOOLI3eM0K8xYV3UavRU4FeWM0QWN3toXZF9WAVCqcBcFI7mowEJvc+TSolK3wxTP35+fVrkroMGnKYQe4zJ+folC1lQRkon+jqLuhD0n6otc52HKVBs1F2nkfCSL+KLVKa3LouSmDGf0nrBhEDrCOSCdTLh6IVFDXbJgFmYQvWLAjtx8b4Y9D5n7dfUxwAppedImGXT58dhBoA5INr0aARXkS/ff71ZtQVMDVemld+vUr4FFfDdyByaL8/FKFDB0Kg3g+y4fnn0LHwKh0nZ/0DJR9ocGb/zgpXwuq+YXvK+ZVQDQu+VFkozpxmei8S9Fad0w6hdb6RT2rbpJvyMu649cP/HfrzzbDf1l2VMc+zX04vZ7Faz5VouFH+F3XzFFMTj1BVL0Jxykmv7D2JtvUf0LvuwPQuzvhrRBflF7G/m1ZEB/L1J/q2zjz/CayV9rmv2NWJ/eBnn+00P/BNG/0PvDRn9B/oO++oq+Qn/OE7+u6va78cRv1WX8QiL0D7U+wfeflOUDj
CsI34lGGET+t9Hot9pNfKER9h8afaUR/t9Ho9+qjAhotP/lD1OV71s0EgRApe9EIwjFf0Ej+F9GpK/3/Q0iIf8h0lciUdAviPRrMPDvRqRvVVD8G5H+0BrpuxIJpn65kv6FREL+r0T6Q6uk70okBPulafcvJNK3MPX/hfys37bjf5qVcf+SZfK3xJ+f5WsADQh9ngT9JHvrwwQIW//px+ytv2XEsPNfgi9JIc03s32+8Fn1uYT9bVf1L/V/LVfkm1GK9cuYwXg/Y4XBbX75nv9gTOT/NcH/7r4RRcE/Wz848ev1g38rNgJ/jwrkX1tYfmMFxX8PdbHfpu4/tg5/80bID7/JIn89HP9x2Ab7bmzzc0/gz+iv2eZbJaDh7yJ3vxVS+0KNZYz678I4/xz7oT98mz3+euzLOP/duAbGqNsvux78GcV++Dvrh8NfGxb8U8zzrSDdLwjw05kvo3TYfpySNFrKv87P3z/XX/Pqur24Jrr8IZrnYVuQH+p3N8o9Db6Bm4MJgUGfmxRMF0qCZ4BzUhaBZ+L4z4nyfVon/KTdKPHdwm0I9QP8c9v5z5ex9MM3yvejBP7D1047P2tuDFE/YN+hvzH6d3Rw/fek9p9vn0X+k86z4LP34+t+BzYgkNsPJPFLLiB++Krzf2pgXFxAfKPF9UW/H75mxP5TXPCtAN+/tpBC86f/AQUUgm/nvf/Npj/+O+ol/Mao/mfVSwj+3iIC35zMP1bBhL/7XX+PggnfvzBCfM3a/6U2wk+rMvy1QsLH8fxaJOGvDqgKTgffdsJjcINv+KvwfyopfE9/9a/79F/LKMC/djxg6Ftd1r5LGQX0W9sQv5j7/x97+N5+Me0oDH8Tc/N7dfHFfmtn4Q+z8fO90Cwk/qsmvug38U+/Vwcm7Lf2D/79epTd0F+18f0X0+K3tgn+CEi0f7+ujzj234lE+7pT8v9lC9+vK+GnLXy/HvsHCAT/XM3dvse0/1HRosh/1uiP8axvyvTfb51+K7z9nW3MP0pbVpL4VVdWCPvhK6b2X9SVFfutdtz/fhYPSf2SHgiF/kstnt/CU/4bkuKXJQWp27+UEr8dVP0fsncL/2fv9luMg6E/N3L+/I0Qyje336B/nGuur/MAiPLXcyLYFNGGNANX/B8=</diagram></mxfile>
2202.01085/main_diagram/main_diagram.pdf ADDED
Binary file (78.1 kB). View file
 
2202.01085/paper_text/intro_method.md ADDED
@@ -0,0 +1,51 @@
1
+ # Introduction
2
+
3
+ []{#sec:intro label="sec:intro"} Kernel matrix-vector multiplication (KMVM) is one of the most important operations in scientific computing, with core applications in diffeomorphic registration, geometric learning [@charlier2020kernel; @10.3389/fnins.2020.00052], numerical analysis [@https://doi.org/10.1002/mana.19921560113], fluid dynamics [@BELLEY20091253], and machine learning [@10.5555/559923]. For a dataset of size $n$, KMVM using direct computation has complexity and memory footprint $\mathcal{O}(n^2)$, both infeasible for modern large-scale applications where $n\approx 10^9$ is becoming increasingly common. Pioneering contributions presented in the *Fast Multipole Method* (FMM) [@10.1137/0909044] reduce the complexity of these problems to $\mathcal{O}(n \log{(\epsilon^{-1})})$, where $\epsilon$ is the chosen error tolerance, with varying reductions in memory footprint for data restricted to dimension [$D=2$]{style="color: black"}. Subsequent developments in [@borm2019variable; @greengard2020fast] mainly focused on extending approximations to a broader set of kernels for a fixed dimensionality $D\leq 3$, tailored to physics problems with narrow data such as electrostatics, stellar dynamics, Stokes flow, and acoustic problems, amongst others.
4
+
5
+ In this paper, we introduce the *Faster-Fast and Free Memory Method* (F$^3$M), a novel algorithm built upon the FFM [@aussal2019fast] framework that performs KMVM on a GPU for *tall and skinny* ($D\leq 7$) data of order $n\sim10^9$ in under a minute with user-specified error tolerance, providing a $2\times$ to $8500\times$ speed-up over existing methods.
6
+
7
+ **Notations.** We use capital and lower case bold letters to represent matrices and vectors, respectively. In this paper, we will work with matrices $\bfX\in\RR^{n_x\times D}$, $\bfY\in\RR^{n_y\times D}$ and vector $\bfb\in\RR^{n_y}$. For a kernel $k$, the goal for KMVM is to compute $\bfv := \bfK\cdot \bfb$, where $\bfK := k(\bfX,\bfY) = \{k(\bfx_i, \bfy_j)\}_{i=1, j=1}^{n_x, n_y}$, and $\bfx_i, \bfy_j$ denote the $i^{th}, j^{th}$ row of $\bfX,\bfY$ respectively.
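For concreteness, the KMVM target $\bfv = \bfK\cdot \bfb$ can be written out by direct computation. The following is a minimal NumPy sketch; the Gaussian kernel, lengthscale, and `kmvm_direct` name are illustrative assumptions rather than the paper's implementation, and the point is the $\mathcal{O}(n_x n_y)$ cost that F$^3$M avoids:

```python
import numpy as np

def kmvm_direct(X, Y, b, lengthscale=1.0):
    """Direct KMVM: v = K(X, Y) @ b with a Gaussian (RBF) kernel.

    Cost and memory are O(n_x * n_y); fast methods such as F^3M
    approximate this product without forming K explicitly.
    """
    # Pairwise squared distances between rows of X and rows of Y.
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-sq_dists / (2.0 * lengthscale ** 2))
    return K @ b
```

For $n_x = n_y = 10^9$ this dense product is out of reach, which motivates the hierarchical approximation below.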
8
+
9
+ # Method
10
+
11
+ We present a summary of the FFM algorithm of [\[FFM_aussal\]](#FFM_aussal){reference-type="ref+Label" reference="FFM_aussal"}, with the **modifications ${\text{F}^{3}\text{M}}$ makes shown in boldface**.
12
+
13
+ ::: algorithm
14
+ Initialize treecodes $\tau_x = T(\mathbf{X}),\quad \tau_y = T(\mathbf{Y})$\
15
+ Initialize near-field interactions as $\mathcal{I}_{\text{near}}=\{0,0\}$\
16
+ Initialize output $\mathbf{v}=\mathbf{0}$\
17
+ Compute remaining near-field interactions $\mathbf{v}\mathrel{+}=\text{NearFieldCompute}(\tau_x,\tau_y,\mathcal{I}_{\text{near}})$
18
+ :::
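To illustrate the treecode structure behind this algorithm, the following toy sketch partitions points into boxes and splits box pairs into near-field interactions (computed directly) and far-field interactions (admissible for low-rank approximation). The uniform grid, the `eta` separation threshold, and the function names are simplifying assumptions for exposition, not the paper's implementation:

```python
import numpy as np

def leaf_boxes(points, splits=2):
    """Partition points into a uniform grid of 2**splits boxes per axis.

    Returns {cell: point indices}, {cell: center}, and the per-axis box width.
    """
    lo = points.min(axis=0)
    width = (points.max(axis=0) - lo) / (2 ** splits)
    cells = np.minimum(((points - lo) // np.maximum(width, 1e-12)).astype(int),
                       2 ** splits - 1)
    boxes = {}
    for i, cell in enumerate(map(tuple, cells)):
        boxes.setdefault(cell, []).append(i)
    centers = {c: lo + (np.array(c) + 0.5) * width for c in boxes}
    return boxes, centers, width

def classify_pairs(boxes_x, centers_x, boxes_y, centers_y, width, eta=0.5):
    """Split box pairs into near-field (direct) and far-field (approximated)."""
    radius = np.linalg.norm(width) / 2.0  # half the box diagonal
    near, far = [], []
    for a in boxes_x:
        for c in boxes_y:
            dist = np.linalg.norm(centers_x[a] - centers_y[c])
            # Well-separated boxes are admissible for interpolation.
            target = far if radius < eta * dist else near
            target.append((a, c))
    return near, far
```

Far-field pairs would then be evaluated through interpolation at $r$ nodes per box, while near-field pairs fall through to the direct computation of the last line of the algorithm.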
19
+
20
+ Instead of comparing the average box size to $\zeta$, we compare the maximum box size. When points are non-uniformly distributed, taking the maximum ensures that we do not compute near-field interactions on boxes containing many points, which would be inefficient.
21
+
22
+ We conduct a scalability analysis over $N_{\text{GPU}}=1,2,4,8$. We parallelize the KMVM product by considering the $k(\mathbf{X},\mathbf{X})$ case and divide the work onto multiple GPUs by partitioning each subproduct of the KMVM (see [5](#parallel){reference-type="ref+Label" reference="parallel"} for an example when $N_{\text{GPU}}=8$). We take $\mathbf{X}$ to be uniformly distributed and 3-dimensional. We present results in [6](#scal){reference-type="ref+Label" reference="scal"}.
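One simple way to realize such a partition is as independent row-block jobs, each computing its own slice of the output so the blocks can be dispatched to separate GPUs and concatenated at the end. This is a hedged sketch of one possible scheme (the paper's actual tiling may differ), again with a Gaussian kernel for illustration:

```python
import numpy as np

def kmvm_jobs(X, b, n_jobs=8):
    """Split v = k(X, X) @ b into independent row-block subproducts.

    Each job needs only its row slice of X plus the full (X, b), so the
    jobs can run on separate devices and be concatenated afterwards.
    """
    row_blocks = np.array_split(np.arange(len(X)), n_jobs)
    partial = []
    for rows in row_blocks:
        # Gaussian kernel block K[rows, :] applied to b.
        sq = ((X[rows, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
        partial.append(np.exp(-sq / 2.0) @ b)
    return np.concatenate(partial)
```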
23
+
24
+ <figure id="parallel" data-latex-placement="H">
25
+ <embed src="rebuttal/rebuttal_3FM-parallel_jobs.pdf" style="width:70.0%" />
26
+ <figcaption><span>Partitioning a KMVM product into 8 jobs</span></figcaption>
27
+ </figure>
28
+
29
+ <figure id="scal" data-latex-placement="H">
30
+ <img src="rebuttal/scalability_plot.png" style="width:70.0%" />
31
+ <figcaption>Since the V100 cards we use have very high throughput, we only get a performance boost when <span class="math inline"><em>n</em> = 10<sup>9</sup></span>.</figcaption>
32
+ </figure>
33
+
34
+ We also use `nvprof` to analyze the percentage of peak throughput of the V100 cards that ${\text{F}^{3}\text{M}}$ can utilize. We run `nvprof` on 3-dimensional uniform data for $n=10^6,10^7,10^8,10^9$. We present our results in [7](#flops){reference-type="ref+Label" reference="flops"}.
35
+
36
+ <figure id="flops" data-latex-placement="H">
37
+ <img src="rebuttal/FLOP_eff.png" style="width:65.0%" />
38
+ <figcaption><span>We used the <code>flop_sp_efficiency</code> metric in <code>nvprof</code> to generate the plot. One GPU was used for this experiment.</span></figcaption>
39
+ </figure>
40
+
41
+ The performance of ${\text{F}^{3}\text{M}}$ is tuned by choosing $\eta$ and $r$ to trade speed against accuracy. In [8](#param_impact){reference-type="ref+Label" reference="param_impact"} we plot how different choices of $\eta$ and $r$ impact computation time for ${\text{F}^{3}\text{M}}$ on 3D data.
42
+
43
+ <figure id="param_impact" data-latex-placement="H">
44
+ <figure>
45
+ <img src="rebuttal/eta_plot.png" />
46
+ </figure>
47
+ <figure>
48
+ <img src="rebuttal/r_plot.png" />
49
+ </figure>
50
+ <figcaption><span>We see that a larger <span class="math inline"><em>η</em></span> (a more aggressive <code>smoothness criteria</code>) and a smaller number of interpolation nodes <span class="math inline"><em>r</em></span> improve speed.</span></figcaption>
51
+ </figure>
2203.11894/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2203.11894/paper_text/intro_method.md ADDED
@@ -0,0 +1,126 @@
1
+ # Introduction
2
+
3
+ Vision Transformers (ViTs) [\[9\]](#page-8-0) have achieved state-of-the-art performance in a number of vision tasks such as image classification [\[40\]](#page-9-0), object detection [\[7\]](#page-8-1) and semantic segmentation [\[6\]](#page-8-2). In ViT-based models, visual features are split into patches and projected into an embedding space. A series of repeating transformer encoder layers, consisting of alternating Multi-head Self-Attention (MSA) and Multi-Layer Perceptron (MLP) blocks, extracts feature representations from the embedded tokens for downstream tasks (*e.g.*, classification). Recent studies have demonstrated the effectiveness of ViTs in learning uniform local and global spatial dependencies [\[32\]](#page-8-3). In addition, ViTs have a great
4
+
5
+ <span id="page-0-0"></span>![](_page_0_Figure_9.jpeg)
6
+
7
+ (a) Recovering data from vision transformer gradient unveils intricate details.
8
+
9
+ ![](_page_0_Figure_11.jpeg)
10
+
11
+ (b) GradViT improves noticeably over prior art. Example within a batch of size 8.
12
+
13
+ Figure 1. Inverting gradients for image recovery. We show that vision transformer gradients encode a surprising amount of information, such that high-fidelity original image batches of high resolution can be *recovered*; see the 112 × 112 pixel MS-Celeb-1M and 224 × 224 pixel ImageNet1K sample recovery above and more in experiments. Our method, GradViT, yields the *first* successful attempt to invert ViT gradients, not achievable by previous state-of-the-art methods. We demonstrate that ViTs, despite lacking batchnorm layers, suffer even *more data leakage* compared to CNNs. As insights we show that ViT gradients (i) encode *uneven* original information across layers, and (ii) *attention* is all that reveals.
14
+
15
+ capability in learning pretext tasks and can be scaled to distributed, collaborative, or federated learning scenarios. In this work, we study the vulnerability of sharing ViT gradients in the above-mentioned settings.
16
+
17
+ Recent efforts [\[10,](#page-8-4) [38,](#page-9-1) [44\]](#page-9-2) have demonstrated the vulnerability of convolutional neural networks (CNNs) to gradient-based inversion attacks. In such attacks, a malicious party can intercept local model gradients and reconstruct private training data in an optimization-based scheme via matching the compromised gradients. Most methods are limited to small image resolutions or non-linearity constraints amid the
18
+
19
+ <sup>*</sup>Equal contribution. <sup>†</sup>Equal advising.
20
+
21
+ <span id="page-1-0"></span>hardness of the problem. Among these, GradInversion [\[38\]](#page-9-1) demonstrated the first successful scaling of gradient inversion to deep networks on large datasets over large batches. In addition to gradient matching, GradInversion [\[38\]](#page-9-1) is constrained to models with Batch Normalization layers to match feature distribution and bring naturality to the reconstructed images. However, vision transformers lack BN layers and are less vulnerable to previously proposed inversion methods. Naively applying CNN-based gradient matching [\[10,](#page-8-4) [38\]](#page-9-1) techniques for ViT inversion results in sub-optimal solutions due to inherent differences in architectures. Fig. [1](#page-0-0) compares reconstruction results obtained by applying the current state-of-the-art method GradInversion [\[38\]](#page-9-1) on the CNN and ViT models. We clearly see significantly degraded visual quality when inverting the ViT gradients.
22
+
23
+ Since ViT-based models have a different architecture, operate on image patches, and contain no BNs as in CNN counterparts, it might be assumed that they are more *secure* against gradient-based inversion attacks. Contrary to this assumption, in this work we quantitatively and qualitatively demonstrate that ViT-based models are even *more vulnerable* than CNNs. To show this, we first study the challenges introduced by ViT's architectural differences, then propose a novel method, named GradViT, which addresses them and obtains unprecedented high fidelity and closeness to the original (hidden) data (Fig. [1\)](#page-0-0). Specifically, in GradViT, we tackle the absence of BN statistics by using an independently trained CNN to match the feature distributions of natural images and the images under optimization. We use a ResNet-50 model trained with contrastive loss and its associated BN statistics as an image prior. That is, another model can serve as an image prior instead of the exact BN statistics and their corresponding updates. Moreover, we discover that the proposed image prior generalizes to unseen domains (*e.g.*, faces), which makes it universal.
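To make the gradient-matching core of such attacks concrete, here is a toy inversion on a single linear layer. The model, shapes, finite-difference descent, and function names are illustrative assumptions standing in for the full GradViT objective (which adds the image and patch priors):

```python
import numpy as np

def model_grad(W, x, y):
    """Gradient of the toy loss 0.5 * ||W @ x - y||^2 w.r.t. W."""
    return np.outer(W @ x - y, x)

def match_loss(W, x_hat, y, g_true):
    """L2 distance between the candidate's gradient and the shared one."""
    return float(np.sum((model_grad(W, x_hat, y) - g_true) ** 2))

def invert(W, y, g_true, x0, steps=300, lr=0.05, eps=1e-5):
    """Descend on the matching loss to recover the hidden input."""
    x_hat = x0.astype(float).copy()
    for _ in range(steps):
        grad = np.zeros_like(x_hat)
        for i in range(len(x_hat)):  # finite-difference gradient, for clarity
            e = np.zeros_like(x_hat)
            e[i] = eps
            grad[i] = (match_loss(W, x_hat + e, y, g_true)
                       - match_loss(W, x_hat - e, y, g_true)) / (2 * eps)
        x_hat -= lr * grad
    return x_hat
```

In GradViT, this matching term is combined with the CNN-based image prior and the patch prior, with the three terms balanced by the loss schedule described below.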
24
+
25
+ In addition, while a gradient-based optimization attack can lead to a legitimate reconstruction of patches, their relative location will most likely be incorrect. This happens due to the lack of inductive image bias and permutation invariance in ViTs. To address this problem, we propose a patch prior loss that minimizes the total pixel distances of edges between patches. In other words, we enforce spatial constraints on shared borders (*i.e.*, vertical and horizontal) across neighboring patches as we expect no significant visual discontinuities between them. Minimizing all three losses simultaneously leads to sub-optimal solutions. Therefore, we propose a tailored scheduler to balance the contribution of each loss during training, which is observed to be critical to achieve a valid image recovery.
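A minimal version of the patch prior follows directly from this description: penalize pixel differences across the shared borders of neighboring patches. This is a NumPy sketch; the single-channel image, the patch size, and the `patch_prior` name are assumptions for illustration:

```python
import numpy as np

def patch_prior(img, patch=4):
    """Sum of squared pixel differences across the shared vertical and
    horizontal borders between neighbouring patches of an (H, W) image.

    A smooth, correctly assembled image incurs a small penalty; permuted
    patches create discontinuities at the borders and a large penalty.
    """
    loss = 0.0
    # Vertical borders: last column of one patch vs. first of the next.
    for c in range(patch, img.shape[1], patch):
        loss += float(np.sum((img[:, c] - img[:, c - 1]) ** 2))
    # Horizontal borders.
    for r in range(patch, img.shape[0], patch):
        loss += float(np.sum((img[r, :] - img[r - 1, :]) ** 2))
    return loss
```

Minimizing this term during the attack discourages reconstructions whose patches are internally plausible but arranged in the wrong order.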
26
+
27
+ We validate the effectiveness of GradViT across a wide range of ViT-based models over changing datasets. We start with batch reconstruction of training images from ImageNet1K dataset [\[8\]](#page-8-5) given the widely used ViT networks (*e.g.*, ViT-B/16,32, ViT-S, ViT-T, DeiT, etc.) as the base networks. Our results demonstrate new state-of-the-art benchmarks in terms of image reconstruction metrics. Furthermore, we demonstrate the possibility of detailed recovery of facial images by gradient inversion of a ViT-based model [\[43\]](#page-9-3) from MS-Celeb-1M dataset [\[12\]](#page-8-6). Our findings demonstrate the vulnerabilities of ViT-based models to gradient inversion attacks and specifically for sensitive domains with human training data. With these concerns, we perform extensive studies to analyze the source of vulnerability in ViTs by investigating both layer-wise and component-wise contributions. Our findings provide insights for the development of protection mechanisms against such attacks, which can be beneficial for securing distributed training of ViTs in applications such as multi-node training or federated learning [\[15,](#page-8-7) [26\]](#page-8-8).
28
+
29
+ Our main contributions are summarized as follows:
30
+
31
+ - We present GradViT, a first successful attempt at ViT gradient inversion, in which random noise is optimized to match shared gradients.
32
+ - We introduce an image prior based on CNNs trained with contrastive loss and show scalability across domains.
33
+ - We articulate a loss scheduling scheme to guide optimization out of sub-optimal solutions.
34
+ - We formulate a patch prior loss function tailored to ViT inversion that mitigates the issue of patch permutation invariance in the reconstructed image.
35
+ - We set a state-of-the-art benchmark for ViT gradient inversion across multiple ViT-based networks on ImageNet1K [\[8\]](#page-8-5) and MS-Celeb-1M [\[12\]](#page-8-6) datasets. Our method recovers high-resolution facial features with the most intricate details.
36
+ - We study the vulnerability of ViT components by performing layer-wise and component-wise analysis. Our findings show that gradients of deeper layers are more informative, and MSA gradients yield near-perfect input recovery.
37
+
38
+ # Method
39
+
40
+ Table 3 shows the performance of GradViT given varying architectures and changing patch sizes. We observe that transformers with (i) a smaller patch size, (ii) more parameters, and (iii) stronger training recipe with distillation, reveal more original information and hence are more vulnerable in
41
+
42
+ <span id="page-6-5"></span><span id="page-6-0"></span>![](_page_6_Figure_0.jpeg)
43
+
44
+ Recovery from Face-Transformer [43] gradients with GradViT (ours)
45
+
46
+ Figure 4. Qualitative comparison of reconstructed images from MS-Celeb-1M dataset using batch gradient inversion of Face-Transformer [43]. GradViT is able to recover detailed and facial features identical to the original. Recovery at batch size 4. Best viewed in color.
47
+
48
+ <span id="page-6-1"></span>
49
+
50
+ | Loss Function | $\mathcal{L}_{\text{grad}}$ | PSNR ↑ | FFT<sub>2D</sub> ↓ | LPIPS ↓ |
+ |---|---|---|---|---|
+ | Random | 8.143 | 0.706 | 9.964 | 1.351 |
+ | $\mathcal{L}_{\text{grad}} + \mathcal{R}_{\text{reg}}$ [38] | 4.190 | 11.431 | 0.071 | 0.498 |
+ | + $\mathcal{R}_{\text{image}}$ | 3.127 | 11.291 | 0.078 | 0.504 |
+ | + $\Gamma(\cdot),\Upsilon(\cdot)$ | 3.047 | 13.404 | 0.049 | 0.412 |
+ | + $\mathcal{R}_{\text{patch}}$ | 2.326 | 15.515 | 0.032 | 0.295 |
60
+
61
+ Table 2. Effect of each loss term on reconstruction quality of final synthesized images. Results presented among a batch of 8 images with total variation and $\ell_2$ priors included by default in all runs.
62
+
63
+ <span id="page-6-2"></span>
64
+
65
+ | Network | Distilled | PSNR ↑ | FFT<sub>2D</sub> ↓ | LPIPS ↓ |
+ |---|---|---|---|---|
68
+ | DeiT-T/16 [36] | No | 12.243 | 0.079 | 0.489 |
69
+ | DeiT-T/16 [36] | Yes | 13.212 | 0.076 | 0.454 |
70
+ | DeiT-S/16 [36] | No | 12.664 | 0.059 | 0.461 |
71
+ | DeiT-S/16 [36] | Yes | 13.092 | 0.055 | 0.419 |
72
+ | DeiT-B/16 [36] | No | 13.252 | 0.058 | 0.413 |
73
+ | DeiT-B/16 [36] | Yes | 13.708 | 0.041 | 0.407 |
74
+ | ViT-T/16 [9] | - | 12.521 | 0.062 | 0.483 |
75
+ | ViT-S/32 [9] | - | 12.365 | 0.063 | 0.505 |
76
+ | ViT-S/16 [9] | - | 13.658 | 0.042 | 0.412 |
77
+ | ViT-B/32 [9] | - | 13.599 | 0.048 | 0.436 |
78
+ | ViT-B/16 [9] | - | 15.515 | 0.032 | 0.295 |
79
+
80
+ Table 3. Quantitative comparisons of image reconstruction quality from gradient inversion of various ViT and DeiT models on ImageNet1K.
81
+
82
+ gradient inversion attacks. In addition, we observe that ViTs reveal more information, and are thus more vulnerable, than their DeiT counterparts.
83
+
84
+ <span id="page-6-3"></span>![](_page_6_Figure_8.jpeg)
85
+
86
+ Figure 5. Effect of increasing batch size on the quality of image recovery. ImageNet and MS-Celeb-1M images are reconstructed at $224 \times 224$ px and $112 \times 112$ px respectively. Representative sample reconstructions are presented for batch sizes of 8, 16, 30 and 48. The maximum batch size is limited to 30 for the ImageNet dataset due to GPU memory constraints.
87
+
88
+ <span id="page-6-4"></span>![](_page_6_Figure_10.jpeg)
89
+
90
+ Figure 6. Visual comparison of reconstruction quality with different batch sizes on ImageNet. Although GradViT recovers major visual features, the quality decreases with increasing batch size.
91
+
92
+ In Fig. 5, we study the effect of batch size on reconstruction image quality, as gradients are averaged over a larger number of images. Considering GPU memory constraints, we experimented with maximum batch sizes of 30 and 64 for the ImageNet1K and MS-Celeb-1M datasets, respectively. In both datasets, we observe that image quality degrades, as expected, at larger batch sizes. For facial recovery, GradViT is still able to recover identifiable images even at the batch size of 30 (see examples in Fig. 5). In the Appendix we also study the likelihood of person identification as a function of the batch size, and the potential of auxiliary GANs to improve fidelity. We observe a similar trend on ImageNet1K, as shown in Fig. 6. Reconstruction at a batch size of 30 still reveals major visual features.
95
+
96
+ To give guidance on future defense regimes, we trace the source of information leakage among all shared gradients: (i) *where*, i.e., which layers, and (ii) *what* exact components leak the most original information?
97
+
98
+ Answering these questions is key to targeted protections for enhanced security. As an attempt, we ablate the contributions of gradients from varying sections of the ViT architecture to input recovery. More specifically, we conduct two streams of analysis. *Layer-wise*, we study the changing effects of removing gradient contributions from transformer layers of different depths. This hints at the possibility of sharing gradients separately as a remedy to prevent an overall inversion. *Component-wise*, we reconstruct using gradients from either the MSA or MLP blocks across all the layers in the target model, and analyze the strength of their links to original images. This gives insights on which exact transformation retains the most information. We base both analyses on ViT-B/16 and present our findings next.
99
+
100
+ More specifically, we remove gradients from the initial, middle, and later stages to ablate their impact on recovery efficacy. To this end, we reconstruct images without including the gradients of layers 1-4, 5-8 and 9-12. Table 4 shows that reconstructions excluding the gradients of earlier layers are more accurate than those excluding deeper layers, whereas dropping the later stage alters the recovery the most. In other words, gradients of deeper layers are more informative for inversion (see Fig. 7(a) for qualitative comparisons).
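Operationally, such a layer-wise ablation amounts to dropping the selected layers from the gradient-matching objective. A hypothetical sketch, where per-layer gradients are simply arrays and `masked_match_loss` is an illustrative name:

```python
import numpy as np

def masked_match_loss(grads_hat, grads_true, drop=()):
    """Gradient-matching loss over per-layer gradients, skipping the
    layer indices in `drop` (mimicking the 'w/o Layers i-j' ablations)."""
    return sum(
        float(np.sum((gh - g) ** 2))
        for i, (gh, g) in enumerate(zip(grads_hat, grads_true))
        if i not in drop
    )
```

Running the same attack with, e.g., `drop=range(8, 12)` then measures how much the deeper layers' gradients contribute to recovery.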
101
+
102
+ We next perform two component-wise data leakage studies on the ViT-B/16 model by only utilizing the gradients of MLP or MSA blocks for the inversion attacks. We present results with Table 5 and Fig. 7(b). Table 5 demonstrates the importance of MSA gradients, as its reconstructions have significantly better image quality than images synthesized by MLP gradients. As illustrated in Fig. 7(b), reconstructions from MLP gradients lack important details, whereas utilizing the gradients of MSA layers alone can already yield high-quality reconstructions.
103
+
104
+ <span id="page-7-1"></span>![](_page_7_Figure_8.jpeg)
105
+
106
+ Figure 7. Reconstructed images from layer-wise and component-wise ablation studies using a batch size of 8. Later layers (9-12) contain the most critical information that leads to data leakage. The component-wise studies show that gradients of MSA blocks have more critical information than those of MLP blocks. See supplementary materials for more visualizations.
107
+
108
+ <span id="page-7-0"></span>
109
+
110
+ | Layer-wise Gradients | PSNR ↑ | FFT<sub>2D</sub> ↓ | LPIPS ↓ |
+ |---|---|---|---|
+ | All (baseline) | 15.515 | 0.032 | 0.295 |
+ | w/o Layers 1-4 | 13.982 | 0.047 | 0.412 |
+ | w/o Layers 5-8 | 11.086 | 0.086 | 0.555 |
+ | w/o Layers 9-12 | 10.284 | 0.091 | 0.598 |
117
+
118
+ <span id="page-7-2"></span>Table 4. Effect of layer-wise gradients on ViT-B/16 reconstructions.
119
+
120
+ | Component-wise Gradients | PSNR ↑ | FFT<sub>2D</sub> ↓ | LPIPS ↓ |
+ |---|---|---|---|
+ | All (baseline) | 15.515 | 0.032 | 0.295 |
+ | w/ MLP, w/o others | 12.256 | 0.066 | 0.568 |
+ | w/ MSA, w/o others | 13.559 | 0.047 | 0.408 |
125
+
126
+ Table 5. Effect of component-wise gradients on ViT-B/16 reconstructions.
2205.05871/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
1
+ <mxfile host="app.diagrams.net" modified="2022-01-13T20:07:08.742Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.93 Safari/537.36" etag="G4YogWj6XBCGWMkeXo5q" version="16.2.6" type="google"><diagram id="UZNOI7EwSWnj6lydf3-V" name="Page-1">7L3XsttYsi36NRX33ofTAW8e4UEYwhGEebkBT3gQnvj6g7kkVcl1lbq7SqXeWwyFFgGCMJkjM0fmnDn5C8q1uzRGw0Pv06z5BYHS/ReU/wVBCIg8/wc7Xu92oCj0bkcxlum7XfBvO5zyyN7v/HDYUqbZ9MmBc983czl8ujPpuy5L5k/2RePYb58elvfNp1cdoiL7YoeTRM2Xe70ynR/v9lII+dt+OSuLx4crwwT97pM2+nDw+1NMjyjtt3e73h4OFX5BubHv53fv2p3LGiC7D3J5JwHxn3z6642NWTd/yxeqfDra5Oroah1Z7FqF6H37PzD+/kHWqFneP/IvCNGcZ2Tz/jzxed/z670wiOfSf/jg/0xvqmLOA2B02H/78HxXvPuLnf+e//8vJPsLzg2P8heS/3/Pd0AucX7uXc8dv5Dcx7t2sAsXqv/v3Zc/3Mj5TO/u5cOZkU9uC5mzHex/zG1z7oDPt9M89nXG9U0/nnu6vsvAbZdN89muqCmL7txMThlm5352zca5PLXPvP+gLdMUXIbdHuWcOUOUgGtuJ9TPfWO/dGkGxAu9l4oYtWUDkH4r2xO1CHTNtvN/u2+j7v0h7yEOo++3P9zRLwgqiuj5+vUJP1bte22D28v2j3a9V7WU9W02j6/zkPefYgT6D+w9FD+YHkm9295+AzLx3sweH2OYeG8+702n+PXkv6HrfPMeYP8K2FDkK2D7TJnneU7b/mci/0jH/6K8v4TFKXH67fUZNt40AV6/fuOD5UP/wP8c7Zwe5FPV4F+qhqT+QeJfaueD//zTtUN/RRfp6Qvfb763mGQZ1zfIA3n24/zoi76LGq3vh/c7q2yeX++lHi1z/6nWsr2c/V9FCbaC9/YD3vP7xxuvDxvd+XwffQlsBh/OBzZ++9rb1m/fSxkQAoCBN9E0lcntUXbvPhDL5jdnEY2/wuTdpx9tfYqZNMujpZl/DwVTv4xJ9ntmgL07EEj3d8EyZk00l+unkehren//VbMv3zz2e5DhBPIPgqJ/fX0KOQz/DEjv7vv9OT6OI//SaVEc/Qf08eszZ3LKusjmL67yhthfH/0/ADH6E8TfB8TodwIxBv0lIP7d0/7NICawnyD+PiB+H1/f6fN3DqTo74R2Gv0HRfwGy88IHAZ9VxzCX8sNPgPmpxz4Dyjbn8CccJz8B0V+KhYa+oI8wV/jtX8Vc/oBxYSdQRr9VEwY9DeL6UNe8afJ6Z+R9j8DZij+Ocz+fvl9RXyfZOp5lHyaqH8tK/oiUW/6M98Fvq1L+vRMgr8x6f5NU/DfqymY/kJTyJeaoogvNYX8VZpCvimAf4r17xTB/70Q++/G/X8/NCPfGJnJ7xOYMQr7IuyQ3zcafyg5fjdQId8PVT8x9TdhivrOmCK/G6bg/8WY+i1xhT+FF0H/g/74hX1ftP01BKZo+vgvZzB/BidHvrD3rzKVrxR9/zqmAn0hlu9fSkD+JU7xqTf6Z37g973Hv2/vH+pef2jwyJ9t8P/h0MtXbO/vTuW/kmN9GI78gxzrLxsE+YpYfhzm/u8g/fsz9w/DG9/fQv5JTQ3G/kGiFIWf6KIhmMA/Q9x3rqlhP6AhfllT+/sN8ZsGcv47DZH8Xpb4reXt72iJ/5ycfndLxH9ASySpH84S0T+bOvyVxcCveLK/u2yLUf9F8vsK/v52+dH/FXb6t49CfXCfP5Scv
qT4f7+c/qJxlDT7oasPYJyEQD5VBfG16gNIx79fAQLHv5DMV3jef8DsPrwPPmF5f1ST+JigfZQ5/ROK9u+VKj+ZwJFG0+PXjY+mS/RD9uk8if+MEr63xT9khB+cyZ9MCc+nil4fHTAAcjd9dObPGSP0ZYmcxj8D3buT/qncECd+ovIHRCX811TR/2VUwl8WcmHoO6Dym0YAf6LyO6PyA0H9AVGJfAdUEt80hvinzVb8BKLk72P0XwLKnzz58C8vplDU704VRD894TdPjP3XTvsX12jIr00Ze2sN+qxrCLQZzaB76H3j0GeA/HE6hD5uR/kzWD11snr0U6MnviwYId8zwaK+VjD6mtL+n59a+82yPpTZ/jatfa0c+rnW1v+FysKwb5pK+32V9QNOOf5arQf5EtTftXZN/dm11z9DTjj5Dxj5weT0X1Jj/bvlRH+txvq5kzz+V/MR6vMq49/uLOkv5f+zneovaaeC3rvbvzr3QT9rQf2g4f+0K/APzvt397Z+U6n8J47/BBwT3wnHn3Whfj7y/+/C+HdP+3ej+JtK6z9R/J+j+ENA/eOy+ocRv78c7z9Ueyv9AyZR6A/Y3kr/gEkU8gP2t9J/dhL12UQf7u31J+HsR+xvhb6WXf13drj+qbr6ATtc4Q+TN3/I+bn/HS2uMPStM+Xp75RXIX9/RyIMfek0f7a5/sTVf46r791Q8D+u1fVHxdUP2u36q7j+t/a7Ij9gvysMfVlE+9nw+k3FsD8uI3wA/I/S8wr92QsO/Rlk/gdseoWhb5pY+N/ZbPfdyDz195nJP6m2fdb4iv1Befkvj4c/YB3pK/W2H8Ac6f+55vjdel+/vfr9Pe3xB2p/hb9Wbfu7zfEH7H+F4a81jP2odd0fsAEW/jDb6b9CgD9gByz8I67e8hVB/e0jVfCPuLrGVxj/DyCor832/Z/fBIv+mE2wMPyz3/B7dnbR35yr/UW/1fCv9nahf1MfLAx/U2XgJzC/NzA/zF35u3H5N3XCwvA3NR3+xOX3d5h/ze+C/BnA/B7NsDD8TbWbn92wf3Zt5Y/aVn93yus3T6z9j67yVxd0kK8VdH42y36svh+vWRZGvlZd+tkt+/tq+9u7ZeEPhYWf7bKfj/r/gO2yMPIjFtB+wH5ZGPkRC2g/YMPsry0TP5SgfsCOWRj5liU8/je3zKI/YMssjHzDWP1HKcR7Sf+nv87756bDYMOM5lPn3dseBML+oxTlI2XgX1HGh33/6S94fCVz/dCe/6/mK185Fwp9hpm/OitB/3CkIfl1JO+30QTwQ9honn+862/4IfLyf8sPkf8JbgyBP5/gjX3hw3D4V2r4iRsj/yo3Rnzfasy/Muf7nxQN/6Bk+K/plvrli9+Yf1eZ+OBtP/Pfb/s+KgJ95OHjpk/qL/paf79ShH1hAB//yP2nP60OvgnMxziRXc7g6ZD/cALQB6T9cc3yB1nsFP0Qzt9bEEJ+j4LlBzH9XqSfHtEA3pZtBCT0q3PSojhrzH4q57IHTiru57lvzwMa8AEbJXXx5pq+jNqfu7cZmBr7dn5mGrJkfg+u91fk02iOTif/bhMRh674BeHKO2vYG6RKRc+cr6vjPgS3ON+57vkf73JMAPZrxk0wz78q4jW8BbOKBemFKytr2DZTaDGMc2nvzZZaNV17hQEEfl7hSFXhwVlC2/dFZ3EO0z4eKRE/L1Zdi2x5Y4o+be87m2ZcWZQ993qqqlYSzHjx9svlvgchbdmBGN6hCRH5UEZLAfXRAx1mVC9ztMWvx41sDpLaFEaxrK5xIjaa2JpDldp1z1tgecYRBj1YpIRh2pLxLsK9OvHE+lOpl9aDkepzY83xHE53drE9OqymnARFSZF63i+idRQJ49QBdnf4LnyZSbkUlv8aNcZiYUkT4jMys/fLGTLFAtk6K3RLHoDIMxJzEvO9esDdVXxiB7a79ku4WTxTozXl3XomFFfhmKpmeF33FytcEKgXgqLj1J1H8MuMMb1jr
xIrpWtM6TYuIOyxN+eD9o6pOyusHaZzXmh36fu+5Ds0uBSUZhgmo2LS1l5My+en2e0GboZUD1HR8YM9oYI4rJuUz0d45Gty4Jv79IZLJN5IJdCNM9yIR6mIgbHVJ3tlvWQQao+SX3zjL8dWX1rDRacD9qVXJks3TbHIR0K/Yg13kyGJbhcjwxV8erZUOe3BcIdqnBsrHJtLrEzw+77516l7HUZVqPZ5egMq8ZvNdaMI9VyuxkAz5UweeAzQc6f2O1mj9m7dnsYrvcLeudfV1awsG9O9Bq1gjsQt7wR1RRrjRqr+VZ04FRylczMThXlnlyuKl5yfIMP6MC+6jj5YIdT1/CLknCAAgLAMS16EamKM5cKyW9JbWkJFEZa8fG1QFQhycZ6PX1hv6WNvrzp2fcTIvPIUH5xfxxFMaM8LrlVMaw6rn29Dd70yFPMQKsm7ze0LLZSNHbvqrk2DtF/xnbBURiA6yvZeKFPH2bJpDPZgxJRjrzMj7xOTeJwJG9YVxS6wKzM3zrL2CyvrrHwedx041tQYYWQcsWcfEyvOrJEWYtqbNMYyGDOMDOczks+w/sbAG3tsAhox5srYPMZDQeEwFxYWCulpXSWXa2v2XvOkUNyowuR6qewFq2fdp6woDOcV6hnJnw/hWSTPXnn24fnMBUYzIcFqLpcicvZkWIJBIcmN2VApDIISW0p4BWxsPgK+kLrC6LZtLXKtYPwCWN6W+psVF7ZZXPM3R8SgOatlbCG6nOMqU2+FAVz2t7JPRMtRC6ixXjAvRqLqpl6fNZb2fMAwR3tXqbZpqzXtK2/xcQAj9paVAS1SNKNB+moRKd8QJtnbuYXTHI3wcq2cl9tgZlsexVixi1RCHTNI5ku/eIFbb1f8UbWmERhDAQxB7teTJ6ihx6bThQ/sFhPTSkRkNBDnTQtLg5RoRBIS/pldJ0rBMV3ZTXEzxgeHyn5ipfuACnR2GiKLFR1moRvdPQD/ZZWsOKMKK2qUm2/gpjiUsoA1Y5QFnx6a5VPOEETGE26BdX3wEScm4DDLtvvqaXOxXewP/uC4RVECm8C6rgRovMRQXdcPWWlsWxGGq1frw+UB1b5zQ+obYj2RUNMvTnY7HQOMy2h9XxWaaib14SkSFQW161964tXCjWa4Cu6YamXdpd1bypptFvcpOk9TvYV3/1nYdxf2Tkp/3PPr3YEj4wmfUIWtxzNs++QakpE+uuqcXZ++N/JEZcMgKNw32jvgNOtl+PSS7EDD1/3OG08ZvjNaxC/DQDT0rJzA1KMHFXVjFM99iHR0Ku0pF6UOcEqFSOsmdFt7ax9c3s2uxgURpYcbKYdHXO9SFA5DcjjHlQ8H+WrQz3wg2isG0bfbYybjGxHdsi4/Tzcu3qyk1hNqR2t5mrAXLg8yytJnDCsL1JGThUc7fOncMjVDIjeyEGWJ/DaQs4lD2NLKeHfejsXm3pw6V6hBpgVe0edCbN2IZ4WCuzG5Zl4Ms7kL08CnQcBBOCgxAF3KvATsIyyLFyQNG5zpvBXEQuD12L5XlVg/pSmI1GIbeW3S2W0PHqMoPaFXVLFELGxQ+Gg7yaP2uNIzvRI27nYPg0bpqIlzA71/iWGz6Aho2hQ9XrpeJtRDTC+8xiYsKa8273RjCaoEe+InGMWao5DxQPYw4PBQHCbXKPfFm1TvNiTla1tVQvSOJuY8BGnlu2Hpx4Z6UijEcMXh41T5sTFnxDGOsj3rJ6+K7rfQvD7xutXxaxsPY2iuRjyG17lZoAzBUuT0iMUC3xHND1eSQ08ax+bBnR6TVGVI4JunKW0fcGbAPc49YNSY8aOvWvHE2h51JkEsR6nJxpLu+Kb7onJyIYjwNNuzpSmNXtr4ePgynTdjmSYmWeMLiGiY7MvkRqORmZp0Ia/nk2tGMGHRRk9oT1f8jsmNm6Ehph/7zZQPhFytF52J5N0HZIGuYkhEKz87TqZDMgdi5cEow1PeJtTebVRypXDn9pzWV5UzM1WiGwiqT0BGZ
W2Dp+VTfqObTYzrPQYVAT46Q289FZoljNk1m4jKjS7dQo1c3JqGZzfw4tJLcMCoGgaA73Bu73Axj0PgUodguguPrz+QD0XFgz2Y7EMYN/9c8YkBzxUjL930/qOLGiBW0xkj57oKs/UusQtjRjblBvvxBcP0IY4MpB9aCKwmGfAKjnujuRncDxfjYehzr3N6CFiPLifPfCTz3bIi5yVPvKVw2eUvnQ/SMi2f6KiboPNp2/YdT56fUklGIZ5qG+i7GiFR5khe/XkPUkT9ufWuARGRMxi+t0QvPsuEtu2HHAzuQoMUfPIOlsihhA8SQ7QyzKXR5WF5pCY73ZHnKFqpj1djXoMLeNm2gQohncAbVXZv+eCPMiRxKtw+OFNSuUvrn7gtbVZ4PyisJUK6KBnk8eD6/VxwQ42oq/iVll/GHXuXeBbFA/wj5O1ncsQ6FMAoBxXF+xCi5Kvq0ONWqyfCEeJDYByd0Wme/IpNvCQ84QB/v4T5xNPdXj33yGlnVS1o8kVuZ5JbT+sOjJOKQUtU31J+AG/s1AcEXbNg23JrPNYm66gpEc9CHUHa+bnBbEnjuOR3un6OR5OLKjDmfFGY5OU8pXBgLE7ltr8kKNYczAIlPFNgrNSTk7vMH1QD+cGLejgXDjt7Un8Z3O9qZdFAwZGEiddj0ugwDGHuGaze8gzFBKr4Debgmw4NXdKCOijS8WMhrDC82Ln4Eb574RYB6p2O/nAJVezBoSxjzgiZLOk5cwa+IJ/OtUULRLXnyVS+EfIlYKUZ/sIuubI/+Uequ8tSUfiktB9iGA6xgaL/UHshtTbQLcCXPAHsU7HMz47lU/gEoND9KU6hbFEJtJ4wc0CNi8aZ6NcHIiqygvAPqYUE9ismQriFnzWoA3v3RoG5dIYXuV0vmUh2v/4z5nz07meOFrwOKRS0D9viYLyuFsaS8LWOVELRUrcPmBDH/1MVvUKwD5D5LKmz7pMbWFH8Dr8+PPtnT0tDIAaW9VkOA7QKvECm9Tgfo3khowcd3at7rKrUsXSaE/izXl+cqx4eJLe39QyFEnF5/PG/cOxqVJ7rJvKgR/qJHdaDFo4LCgAc1iL6Yojry7HvoXsQDJpkkF1t/gQwXNb0MSTkxTPmcqNm7/6Kydd2nOlimT25rBxz6Izc0Wc0+njxzD2cXT+KMk5c0tP4p9rXrjt3f4THKxPnDwDAGO5UivirO6XWIgvFWIALAZwFKxydx8DftjLHxvnwPE3+Rg06KI3ly9nP1ZSW1hj5edgzumzvuRQ4llACrmGdDzhyqUT1AOnW+xV6H/ebjfMty6nSCTvtHv1tkn3zggiFeVggXaH7G5elFqBl98e9k+JKosW+r+9LHzZO6eaXRh8eK7PcTVG6eeEPHFLnrgGllebxo7n/fqCvBVj21KYujlGq8+CoxzqITGxArsjVe5BzwizeankakHS7B2vmfSz3CRFvxkL1Gc59JVZ2fSgZsiXJP2b72SxWyfo1uNPmLmmlUhR6fq45STtKLuNalVShfTNzgD+O+FOxAZXymONSkMwtTPP9hkYjp1G158SmXSKZjRyHNon0Dh4Mi5Apry7vW+By9ILBWyEEFtimUQBDuxzoPdOvV2a7dWkA9XT1Sv963uuK2CS3LK5UGmZB3fULJ0iR8v1KrcgdHyVkM5PUMq/6dlmF+4gblK8zFzC8H9sqQ81Ky4dN07gPJTGl2q5ITP+T30o310Ate+JbMHTl0h7P2J52sOTKMKBIkHTZhkbOE506RWEo+tu5E57YMpR+xYD/4PoV6z9RWTGAR2Pz+xjv88Ll6lSyhJAtZDcry6kZh+kbf853uIqASFm+hzLstfYLCa9Go/HFWsbXs/YAy/v0q+6xhslBEMPRbtMRvVZnxsCwC01E7XDQwwMtwPCbmqZRmy66hv4lleG+MIk7Ma8QnnhCY7zOW/yUdBpAarFZbKivtg7Dv+rspcg6m8cWZ3U1/JnqSg3p6dP5jkd7XPt1+IF+4hx33
0cpiQXIyjtLbkjuEEcnowQgG92+tv7tTpAqcukeN80N9Z/sQWFP/JEjQeap2lGiRRfvWFcflxKmRKw/XDC6rANE/GLX2KVmqc+2sVd8nRGVfqvyhgB0PvrSK93cDH2fAK0tk6p/WI/Qv776J48rYO6oaVB5BstmHQ85K3H+RlVQk1b7x6PLTMYanFnoV6SnXwIFWaByQNrOcOyRdk25OyjSfxjR80ya8heYkAoYC7krbp3u+K7lzKFBfKRgQfZaPUfogz+DP4zVciTaQW5nrKzKGKL3WWWZIUWH5Pl8T4+E9PGkEg2L4jkVTp1rV+aLQpoJE4cvNQK1GsYsqywNCIjIYhzH7Y3eylZmIY+m8XU7Pq9jAYm5tha4aqpd0UweJw9SNhgm8j+umXg4v7JLw91wD8rCjKqRZYE9pwl0/rwZZkTnCz91lxf5UDh2LzPQ+NAxn1FMns7xz6kroHpZfL61KvGJMvSjaujM2dsfnmKQUL5g4lAK03lC5bOgVpAwmp3iNCr64SnWi0ZYpQ/9dzx6sjDfVtHdqsKIp4I9xJCczCcopnMc6QrQk3x86fO3x9uVxRY0HKZy0+dnG1RKtIcIlh5hcn6yDaHiUJB15bl/jYYElWNKtN3+xK1/AX2x0ZRv+vPu75LUqudJsOdJouC9N+gDy1XJJMcRuLtRZZFUlIF7pbuKZOPijX8R66eKmEJOtCRxO1LFQ3NeGg+EWR/uGk9d8nmbkl9xJ44fM8i5Nq1Mu6x+FSaQlfnC6rOC6ZBz10+YGc/TjK5K7cBD0ft1R21qzJo5tk33hBb+AihBJ94FFO1QW/blQFc5EYYoMfm0QP6UFcIZVDFaPXwWn7aCFDtSnHD3+PtodK6iHqZEBWALeVcX1/I6t+Zfga4mT14DMvFGKDeBtaOsT8d5V+nhE8hVwAECu6wPOWPpFmtbtnOVRboMzTS/8hJqx7ysoYICETMq2Kn8XgrshWJF2kbjC61A/lIvXnRdRev3i5wne2CPA0sfF7Y/NpWAfFfSOXia+Kmzva1TOTxPsJ2wJBpJu/B0d8gnZARusQ+QKDiN7MYLcmV/KMqrSZvG4p2UHRmNzdgYkZiy9FUhk2Yb7L5w0yl/MDGw9cin6YDi9PoxE3NHIe+a1Y2kzKo6DVrflH0fehHid+g6ov31XJlI8avwK/iaNYBor+L5A/qRKZpjROQzwckVvHlbOYxSWsiphRP8k+Z/jOtld/6OanWFeBEGLIVI2/yDlOxaHKTIHZ8HHNdYDX/xDGQIBmcKTfKDaQ8CaZmefS9w8vW9YVt0jHg3vud9Q5WtL3Uj4G5D5zPum3j9NtbJ0bf+4Pew7tTqGJvPQ07W+FlfZGx/Z7g+rfUhp946xCf2nTs1qA6qgKf7CUo+eLEec70Ht7uVNXOJlsPIYGQH9L5q3/n6fFdRZhiGxybO2bHNQ/s2/2BSZhecJRCEm3n4zotuQfPJK59qU4cO9lObB/G3nsZ+ge7au+qW3lXIPaE0b86nz5GYBUVRL/LhkU/kWl0v/hbMGCsHr9NVEwWaP7agk9BHq7Ia7ufJfLWflrrZXvFRl6JghI8f3/ysRx9G0em9qwpMI3vq+IQDAWKBvhyJjff9er1NtSqRxaISHuwTDL8wmKKcUFRrRLdgoR9PIQmNBnlKQY1RRfCdHBfU1Q3vxwGNodZf2Nv7JLcAbw9TfKCqLNkLirKJ4cefaB/LjH3YNvsPq5sb2maCBxLN60SRxOPRaQeAvJcimlLTeduzTuFvc6OoZLCd4z+MP/065Ekclwy5PBHjyd9+vWRZwvmMIun7Bn2eGLAr5t2jiaqRUmoH+mej/SJDiKzAO2xkb0lNl8hyymf6ibA0bcC1oejGojllK6B9w9Es4Pmt9qfLcQyjXPaYJl+oQ1Idi1ANfoqjfgHbC6XvZ07qcrZxgmCFeA2+vmk2ZXngX6NLBEZXtEhnadrHVOr10ws3+zUSm+GK5PTilp8Ft7s2N0KFD6AcL3aYY
AJD4Wu/BfnlcIzYTtEjoLXjKNbDb0eRq66xSI8heskST/QFeJmakXXnw+ZHh2wg7W8X7c4coLI0yUoVR9jLsVJZZp0HPwukVB3QZ9oszvtpuTbQPACjwcZXafhiXUxoN/G9y9DgrdqMlN4O+TCMj/tCu/6OG28AFvyEvPbx9GC4Kf3jb4GcpucEPVOjBaMJ9VDTtaj3qq+H1oJOqBT+QokhdqomlZJmPlDYApUMGD3PGispc55nZHG/dnvHnf3dEFfhF8g04vtleH3tdQP7tRWi+T64u1C+NY69cmQkU7I2oHJ581BZErRr1fB1xyMa4uyAdMF7JNXtcVmqfibqmB+IVDl0RGjwc5uaQ4mDO61B2jxWPF/toHsyeCqH7gtjUESRs/dnVUCf3dSkY//pBxV+yt85bFpL5IbjmmU5j4HXrX2xLxXWATup84uawN1otHNoP29p10IYF6fyjLDEUztfLfameiKkp/gWWltTV8cQFtAqXyRTUMaiPqAk4QTcNPA/94M/i50QG2CMu5mI3Y2/8q9XnhiWQa+YPij6yPYXP4X1jzYz9CcxJZAokOE3H6NbQvyQ5lchYSw23CL7fVP3g44Q9AkY9BMVticq7ug5dEQLk6YW4517UZBWR72yffPwh7khLUaBUl7YmOKGYz/MyOfw6idPjJ3IA5wWGMGDhAHaXWSlHYhYywSwefwc9GcM8VfVNNxuUHCMvQ71UyFXp9FSqFN61l1rqE2vwyNAVALnBFsGKOzGJ7VH3h3l1TQjWezQwXCaBtpP9wTuqsZAfHhDmLN/0OQre4PNYwikxNYDPAeuPOeWZeidZzU/30LllhpSWK2PUgJVLso26Sd2oLZ1/bxkagqAf2rYen9HEUhS36SuaiauyxLWj8tu6S89eouPaYqUeTAP/yFEn8i2gc/DSLGZ0v9AFPtwlpQe0wedfmePyA0XCxT8RHZShpJ3NN+IPunkF2GLAeY7I0TIbG9M/glijdJez72+5b8zOFeP35+F8INTRl778Ri1Fq3aSs7FkV0MSwZZgETq5hk9Lj50PPONRzK8Azj/ol4feXfAOD48F/RSDMDfU9PQ9QfDQsuzVLNg3Wc0yUywsUKC1AbVVdamAYLBhEGRQddfrPwJUsX805GGEXC9Pt++VUoCNUYIjYQ+g9NTWm2WJnhf6B8KsQYgnzPB7VJhqAIe8IQXwb1eLoK7xeT9ebMpa3SnbLYyd7M8vUdinsdQBTaH7sFeXH50gJ1xFCgdCXXuoaHJSrTtHZgQZFbngFOo7rUjsvp+cUxb+hDs8KucboWh5WiyxuZz4YW820mx9rDXgeJbHNA4AofQCTU2RkL6xq7D3g3aT2f5rsxVlJ/mDIuNvdSod8VakMqTaGIKwCDfc4iqfNUDdkeB6wJokz7mq7IfLwAQrSNL1Ia57TNL8iZHXJhuh1hWLfaRfMM12En47Ni3S1Z153K21nNKonLXFD5BOIpBEKLkitbBh2iKgDkEEpa/J6pU9DwEwdKJUobhsf/z2N+MZ8tcPngHouvSIuMRcpR1Lo08qCTUP8ZAY6sHECY+9PU19rgYDNNfJN+9Mu/lBTkZL1D6+XlnyyK2KNRlU6j4DJ0LM8+8ML+FJytE27t54uHBQXzEMwPoJUeqU/6I13JwXTVK2t82pkwDvTC7fwJ4q7zN8Q99EHpFQkjThBSQhV4lrY7PovyR2pwa7eGo0PveE7St2Zim2dSWZH4LNGXqdL8MD3zZqh0IszG2ABH7Yc/6zCBi5ShvMVdc7f0jbVoMD3OAerB4RdEWfBJsl5iTnJeUtg6F5ZwLnOwedsM7D1WzeLPpOpiCcGsYKmZ5nJYjspvwPuKsxCvIgEvZ35+ITGZfeM3xffRUYTUBMkXUD0fcgaeIya06YMeJXSqH+oxkcD0s2qDsJtXzONaZ/i3VvSjIS6DZ05EWsNOBvv5X3uOFqLoOz/A1yySsRZ/I9zk+Ydd4UvIn9DtwJ8pnHfQWMFXnCdb0V
1M9aaE0vT5+vnPAG+b1evByGlKelfO3E4xUggA8odiCzKtLj3DY6qxi0AS9U1zikMkCraXvtP/HtPaw/0JrDyP+X9faw9FfPZZfPZZfPZZfPZZ//eqx/Oqx/Oqx/Oqx/Oqx/Oqx/Oqx/Oqx/Oqx/Oqx/Oqx/Oqx/Oqx/Oqx/Oqx/Oqx/Oqx/Oqx/OtXj+VXj+VXj+VXj+VXj+VXj+VXj+VXj+Vfv3osv3osv3osv3osv3osv3osv3osv3osv3osv3osv3osv3osv3osv3osv3osv3osv3osv3osv3osv3osv3osv3osv3osv3osv3osv3osv3osv3osv3osv3osv3osv3osv3osv3osv3osv3osv3osv3os/4/qsZDQfxDo/6zHgsP/SY0Fwf+DRP4LQRbk/5ggy+s/6a/kWZk7fz8O4wAkWNJt3nNwHvj5MM5rNZbjkHTaCIRUfr5s8nW9nPoGByXbOj5fVWvf/f1pftZrCERW/gP/+yn6K7kC/s2d//7h+ufD8Dzgvx0EPkb/nA98+L8O+/n0z3HFOKxC0tcd+MKte1Dwhd758fxtj30y/P2Vv/dK/f34j2jMvxAU+vnzc96MnufxeL6ukq5g6zkFgwK+F2owzj/XW9Zk/udsf+4t+7dPyzqPbf6fJWl+jvovz/7zk387///S+JZxm9P8v5tcjPrzm88Zy3z9b83g7znB3P+35jznj3uu9/x/upP//Wb5z53//0onKAU6Qd4fnSDavA3++tUJ+tUJ+tUJ+tUJ+tUJ+tUJ+tUJ+tUJ+tUJ+tUJ+tUJ+tUJ+tUJ+tUJ+tUJ+v+CTlD+f9MJsn7+2azUTD3/Vzrm33SCeqrLTOIZb5I8zY5/uIst2qpsKwoshArPq4lyhu82/XxmRXNSZvLEnriJIRHSr16HdvqeTMpvO1aM9dIDcLdOd2ATJjLdbvBjsPEBShmkhCIAHd4BjppGuK0Z+eCm9UOaAdj+QOKbp9UW//7JLMcsbaXMvhllSYMkqi3LPMFc8TM34HPplPyXuRrL4LsfeaHnQ/zA739+tnyVFLs8R2Ui9k9eO5EofQ5kLt5YRAKtNpdAFXsjJ+eWMquz48sbBiIakz1HY6F440MPjTcjf7kSbLds4DRgesBjZBlnXfo6iyFan8hm/lQLrzkNcUEPk5WkDNotxTRIsRBHYqKFmC8r0Pj02L2FdZzMpWHPyTtDQuFFXScoDWhpqLAoofsEHDFxQa3saU1eW/oVtocerzKXS8NSVancoZOZ5bub/BpPKJ7qRuPYsBTKqEyL/HH7jPsi79y0x4GqWJ/GhUpOJWw1LRbaRc6DGcgoBHmDxq+ReBhvdAcDzzvXcBo/gyravh9KlMmf6zAj9y1RpTA71FXoBphR6Juxn4OsP748WMP6dd40HpdEV54KqEpwnw9Fr6D6LDRTjiane82wmSQhres7WBS0WhXivM8vuUUPJF0vSmachS5uegUeojmU/D6IMt7DCAsqE+JHlrXFMm0PmUdearWwz6DJ2BYZ+m5cWqHOXyZlk8VvxwKKryKKmW/YGYPzUtNnEp2gr0ToA2G7whTr7Trv+fNmHQvsTm7qlr0C0Gslm1y8V/R3ZaQfsRNDqOUDA+Bq7+zX0tNF7UDYKloSaMBkCSHfz5N6KcIpxvShoO41hpl0t8TdquLd0s4o0YvSjC+B35tvfzJGchEqlM2UZqHLMD0xpAq0kQlgmOiSWH3ZRtwgT+SptSAW3ZMiVkJlV5GtlpN+zPrQ6AgxbUYDznJJLbc/+noXPhAAj7Npch3KXqnPPmS6480iHqEF3XWeDiYCqvA7VplidrASgpfBDtfLNp7xvHIRMPyDNxu8oJhTC3xg49bujlDcYYo3OBs3jtQTdUNnDaZce3C86vtywcXpz9LxwM6aoPKGY7y65HoJVeKPr7QdNg36bi2RuWN21uveH0Jo0kW0OeyrbqS9AUUfS3Czt411CCxu1lhl6C2DsRUnLGSGF9iCSVMAeyy2U
Rn1wsqq29p1JQw0cNemZpPCl1Vn+9sNZsApV6PW4cf/aAOGfcVdnxRN7G/YOBS1MXmCu8CGP/VQvmHaCOwNUmWea4O0KIP5sWFGoOT+YDXkXmDugck59LOXYbyi8JWuDDLSFRbxbe0Da1/0DCuUnbPRitm8EH5tPeER7peMkU9VveVA8X39Nif0XrTVx5cdIXmN4NaAGafRYRmF3s7bZeTocxTe3YyduCz0LaQJl4kgAqIz3+DrfhlHeakCZ6k9qQR40aPCx3wRZj57t3inqz1bREkzc1NiVrsitSHWbMJnme6NfHV/0Ng/TrXFOudQyKmI1MBPog9hIwH6OloTAU3OHjM6kUV3gV53UmWao/D+HNH5+YZ5Vybf0Cp1EgmKOsuGrsgqMmgUZJhBn0KXl0H4sEiQgRMcb03pzjFSWzqr5mCThmzeBdyJQhM2vsLlKM76kxg860AYqLlg1lSC1J6Zq4cpZRt9OtvKkm+AtfxExJ2I9WV8a/skgV5aF+WDthBXXAzfQyLy/MN+qzefNljODtz0SWb5ygniA1cPjygIf85eSJhVMgYqbpRVakgvWzmPGnnXU9DnroZ1piULXQ2qrvwtXoarOSHUwJ+RDaxXTb1rX/Fvi83nRMGX16zIHDd27CYFyuRWlHZLcIRvZIQKJ7IAosIFeAkjOHCPHFy/hgWee2zZsJfYpMuFscZRPJTOnZIPvNuDKWAfokaRej177hQQbmvne/uegBYhSFPC9IfIVMvi+Z+VLy/mHFnKwzJzeQqiIkzN7yHDMPne0nUcNwd1m27lnZCEWDX6nvddIjOBLASHs6U+nyy9J2PFjG++9MB+UrebHfYkkJt0MlUQjfzbiWDkwZ49IQuhfhcoVvwTL6erMF5qoSHH49E0aAXIpZ7818aQ2bmijdrUYI/u0L2DOVLBCQYfcXGvCru3UqtMWAfYZ7A2aLF5nfWBz+2iOpNT5cuLEUJlcdKFXWmuHwPIFAQ9WzjQUuCLdpl9/kMljm0pFILT0R7U6aWNL5XueeBruHOdV5bLxmeQXkZlwRNWKgpn9wbR9KJXvV/p+Ly6qnc3KpVGD9cmlmcGi/VHiOBeDc1k4VwevN8HpHGSUgxGwdeG+Mzg7Z0lXqGJ2LOOD9EAcJEjRcpJJXx7v96cf4yyabKjSk+Zj5Vb75LylsKTesrwi5SmNX8CpCvX84HJ4yUyJv08+9ACisIs/WLsnKx8KASzzllpzQlH65vnI/3ypVDQxXrWwtcHpjjzO6FRPDrGc4Jnfd30+IRCvNGctYr5Yi8/qjK/N10cfmowSjBRTLYxb4dyq9HxQBx6zk6V3YLaJH8CjtAblBEwGUolLvSCpPNDfSYiO9muwG7ErfqUpOLsuL0SnWPHCCeQu2g/3/4+v30m1yCE4wnqtFlTY3DSGyoSo2955hAt6xesQjWBTcIJJL2UukXpm+lPkgj2+ugXNK41MMR5xIbXtv7ovgCsmFuvFgYJ+nz/9G0/b3MIy4hQj+nSMMKVv1yoPh259wjISpwKULDQviOwK3ItOdMjp4XO+DdBQJKxE2RZ0BegTx/zrZ2fMZ4+xzP2exK4pMSTWxtOXL9KUNl9UHgrTX0o2FTVHio1JSDiQymU8Reccdo4kDFWWkY4si4fDnzlJ+Q5f7tO+3ZY/BWGh8J1nGuBitAWA+ayovMBAg1oPRVghhGm80YB3G0GWyuLrg3UMayy1JNuPhteLu9X+TC7XZ71A+GZr1VHEDTG24J7qaFGWs/8vaENs5vJI0NxI/KY/TK94HbvRR7RhQrcPUXMA9Digs7XIsqh067ia41T1MHE2Y4nDJvwRD7Cwuu3FBgJO8kP3rmLDoVnMmmweeiNHguCr4m+NT02ZF/Ag8a/7NVJ0i/FQ8uaxolviTC5Nlq8iRy+pNWD3gNpRhKJPDN+Pr6ZN6A68gWz2C3vkj7VXBuCni9q5LtThlmk5jstdgpa6cdqNBjeFI/rCOGjnG0dfpswB1W0hDi6I
kJCX4wR710GgfjcFQqpuzYLOONxGgmn1sh2hxOwLhvFmCeKGgUNyp5ePW70M5mlazBj8v3hSffy6dvMrzJMc12ZqZIY1hfaxDU6YrvBqBdZZbBmcvJG9Pftfadu/eB7+jLXKTspRj9yTENhvY1CmkAo/k/nUx4KPfeKEtdRAObPN/4Khh7OcDeQFqj+NB87D8fP3omTSszTY+lzyY6WVSYtxiTv4u0JuKmp91JOt3i9+tpJgefTOg0nIar3WEuA7R10fWV4O3YLSJQ2rmuzrr2pWijxN/CKYlomkdljiUZA23jQDvbRGGacDUt/QK6luYe/zbreZ570kEv4jJz8rtWToF81GpJW2+SL0isKWoTqq6cY0yxf1R2vPcjTFwWOZQ7NWQKPVGeFHhSu1nLBGiuxCg/UwgDmjJ/ryi3dJcZnYtpGS979tQl2PoaD8e7fOPIJxF4vWXQDnsDGhrUBFUyZM4Oq4DjLhp8ww40kZIFIb3TC10kLcqtOrt9FwODfWgBzxFxClDVAWt8NnH93UlIUYn7hpIR8GeGc4lCFYr9+2zpM5C+nNYNLE/MhE4+eibEn6qudU0qvz/2emwvAHTFV9v0KIUN8XeoDrG3sWfmrMEr3yDfSnHxo2fhsJ3fFqlItHCnIIBAOiZPMWQ7ZrmSEKX7PK5IaRPkssTNz/A2jjQWW2KHkXsTcoWpIQ5VNtdgXgwzS6OMStROBNxzIPg21hs+Hd2DO1HhTMhoKOe7voDeknZq93TZ15avBxufb8GeT2J+xziRApa6uJcGmufklZGKBwf4AuG6mflJ+nGFi951U9Wj8XCZE1smFoM4Byp1uHP1Crchv0Vn4/QSdaXpQkFGng2Nq7NuJagT7PiG3SOL8FXVYxRQ1/1gbkl8OAi4XirDGW3kcfa5JOrFxyxbX5xqNfOEzGtAbQnhrKJfOtq2XOZtaP0TvYekfNDKIUk3NthSquiMH8KGO7GyNiJvximbAZTqcFYDxHC4gn+KzSsVR0Zebsl76jRuQKMiHSqa/X3WmiT6ObWxKcJj/Ee2AUCd7J37VdOJL1CIkOV9TYx6XF3RVHbKCYp5riXvdg8TGw8NN2gCFmOm9MnlSCi/1Tt0+BkhmfZkeKNGBDLxglYXCILD8ICa7auXJmew/KSsriNbuEzYS5cVCUCa+ImEmFujbVFjlvuscQ6pOLiSQlX6QwE4/h3xbrh7WkeDT03efeNBkDHbvAUDVUlBt43HVlAr2vrOpeWJjKihYvWuRXC2zLfSK7Dqul4yQNDT23GQQyOhK7LJvfpB0pODJNo0epPMimhlgRob28z0CsbQObl7qCCZWnGZmv7dnJGXvlCtgMuPd7iD2Gt0alfwFVmVlYaZ2ewzILSuIe7RQfQ++bbjTg575k7ylUFNKNKSTNPCIHWkt6ihXQeG4y9zN5kf2OMdAa2r+zn3rFIqDsg77AYo6b1bejn6kuwZZOTQwRKPTVoU3XL+U4W/b2ipIozBRU6FskzPG2xbwvLIOv2/MVCbun4kexRiu10JVbXSLc/h9Lw/w/IAtS5H5eU2b8eI448U8TzPtWcZx8JuhcUqgFLenblog8ybhoZoIzayIdg8Sn3W/vuTvPZFmmfXU0GKwLju4wDFEJuZLS6vjSr4Vdw3fCMorc2ioCoxk2c1f2q7vK3R+oR8GXmTQPvnVhC8PczIINkkBdtf1a0LFNk6KPQoLotWDqQna/nN8ijmCR8eESYXbDjBgrZ5gVK8EOQ6WsQJmWEZY/KNYvsX2YMcMHw2IRwf+1D14lxvyCk/g28+RkLBDHAIp5W07TWCtAF2oqpjqCEtBLKJZwah9oOkCyIoejWU5DPCvjvwkyxfUVkRTeJ9JQCt+E6VDjGnDGyHcknTTRl+si6RIaHK2jGEx4sYoTZ26XeSujxtsbi29U18mjdb3eVfOJzRMIZALIPBD90JPtZK2V1F3dpfXpzk/OXsX8R2AZXuxjs5rwWOaNMe7VnifsdXM67TuQ
1/Hn4e3QZ2XMiJJ/OiVoRLANC+ncOtc8jdDRhuQPzPaBxtBFhYaHETWdwwyBlKQV5d8fSysyLTMeNwFkqckHnlWsUMwdEugtxXXO8sIIo3jP6j0ku1v+sAu8ZOMFPyKBPK9TvSDCIRmgVtpAvfbed8+oOBn8qCcdMTsFN/Mw/wx9bWnW8h2Ame/ypn3TnNLllc8Coc4abm2fZHiwUXLynpiJkNkIwHcvxC279b9d+Sy/VUusY1+6W9g6sfWHLjWbVtpeMIuEI6GP4v+BJHqJDNFLb/2Yx0sohBU/IZWAbPkDRJ0mpEKDxmMWEqoSPbVuzxegX9nqbmKxwSA6OzEjzN9IivyLGowQfR1uSDfiWOtWnpt8KoreaClMRQxye6qBzWQkFloTww0vTBAXGz8Wg9Aa/vGOW+QIbvxt2y/SXzef9rt83ql26h6hWiwtUmIOALwpHb9U5HBUla3F/cx3mgJsF1q3he2Zcow+Sl1lRw8g/hON/ygmA+HmINMdMh3ANZF1mEP09kqbgXbNgXxZG36AaPXosLaELWUR+EivCCQ0BAj3VZNeGgrCSgKg6kGqGjwg5BmscMt52iL5zw9AGEOGr3WYdsmyK/myni6h3l+l1YEq9dArK3jtjZVf272FaNG9DpWkBVr5f7N43ajnFIcwoo4c9Q3vrR5Kr66OQW2G5wzCcBudc+InzTnXW3D+EqwF9Kdlz++oE4xBbIYdU/RNxvYNOr1aQs1NWORg1UQnXeDGsdcmhK6gujQSd0NyR43JNaBnWAjgGjhzt0aU3Fc7yA6RHAA8+FAAd+kX9Jh7jtPCYn3tnEvV66GLyoXWkkUsk4VM+9P/cQ8C4LwdDNOWR4Oa1Ig7RvXLpqv9onouIEHJdnGpS323ivdfQ97f2eVS9y2eaf165uk+y4lpBY5HmFMGj/Q/jt5jVcTI48zK9JKUzx2UhahK1WAHV/C9jm092cCphYNkOX6s4/FekPa8HpTiXGtkjiChyU8Oq5fvPDu7Q8AhlJknO8qijmxmVqC99mMI9okk9TOaFXnlI4Hq4HMtKIwR+MHzDMhzifkdeA5YYbFm8IxlAwEG9EGaehi6XNU3wof0Fe6GdFPQGhGHDSuExCF5nmihKKJIng3GRPfjzKdTL/2tOtdMffGI1tGP3Z2CMI8RFlGCo6RTSKv2MSK44uAV9jkib75Vta2a4sxo2vmyCJX3jwZmO1reW/h+tw39iAur2kijyTWz6QyVOLFS236evw2PlqZ2l/pp7zKiBt99+uJRs4kqdDgVS0Fz9qLgWFhUH32ar8i9v1CgqSSvatSpI+CeoSA9A/pqiwi5YiEL9m+6mhXseSJxYy3aXW0YLtFUH5dEspxnW6KoC3wBgnlDcWqXOLeDwFuWxTLA44Ry3T5EWCUlR/Zj3JRHYZgPFn1tLIUCoue782xpHo/X3EPkpRayqW0yaBhZdzKNI8t9LCBt622alWP5Ta1FRNV5Ht7MKf43h+j7M2KWCcv7VXu2vD54yCy/DAIfT1NRlJ0WbAu6uMNHMuklZgGtue1RfnMEchaQHFPhtJRV8yFaJMh5VZd8W4X0Db2DU/PLaaA8XT0u7GPX3CbsFRfsE+Yed17W4KkgoiZeb7Wi5frJvPVFICblMG0Mq0pczwX/Vh5OfCY9jBeTx/n/YHV4b6C9nKAcsTHpA7kFTKJmMWF6U6NNgWnl4p76opvSVh2bRIRjktQkA2t9yVGOFHwZQxp9S/YaoEDeWBWq42Gx7KIHdHBJWqMFXB0sG8PL+AD5W6SEVydaKA7+trHbWXodXdEUHbcS9441qmcNxMb82DrBBgK6bPRGZrZVhA7BHsxfWLt5Is4dazONgqWGy31hLcRak3qfIW3W3+VJVavz+ia0fqOcUHLs2sWsdrf8zdaE5PFyf4+hyGEnAHH4uh36JeanCJ9euFBDNpY4DTt096jeHiK2TGQBnQHsWZlA/Mhe0XVPVAFqL6tWheXyluEBpDPrV8J+/V9M
/z2eG5MeWYkIcgDbkMn3ibUOnNDvGD1PBQURTTETD29VIhNrmgL+mloRZWroswHyknk46ZDyaXr5PF1LmwEGWjHgKxjO8Lbuwo0HSXfY8ad38rkWmPv2+yvmpjjjzYsRRLV9gliuELsmkGFegtSlysedh8Wfb9Ytkze09Gm6TP45J3cr6ZK/ZZ8PUBzWAKrpXzHEFMQC7/XYQP2YuhBsKRyme5dbHGrWMkjUFZPHmi9W1FVD5qhs8Tjd5DMMuwITcOhaNjAgYBQXhjWh4B5InxOicjIDm2URFP4kGsXuSDn7lezULcnMOhP5c/0Al78Sn6aB70WiM2CUqqkU0Hsql0UvSTAccpAyKZoFe7zZ39QLnDw6yqy99Y+dnj2D1gBtgjoJMgXgf9enWofIgMA4prQkDrKgxA+WHOb++tM7k0VBTaxX9p3lN6mBh/lS32/EWNBbbCAMDp7eQ/6Rjb/FHX7p3qny+JpPrxY+wACWawesX4RTYVWMAJhEg7m0gXOYr2+2umA7TliZTYdM9ti47NqxcHBFEAOIb8icpo5JLHf2jAOii7BSxNr3toHY0DkH/HCBFXwuxlWButTqRG0rE7+OsxKyk4dxKJGPaCHJN8uzflJa0h9q2Frnx0QOs3neKNygmO9Z6LOxRkbAmG3Sao5M7niKej4VAmPB2e/8LdOLPILHrZXcztB71GDSxpoykJvcTP/kLqUnREIpmCTcV/GsPxU/1ZNG4cB7glvb+OJ27pnNYUcgRJQizGGZaCOb0xg0qzZkuOF06/EhrVtlATThOnKhu1p7/uf6ABSriPzdkZ7t0SVm2q+a2I0Pfbtxvj3DE/4d4vojL4TS+6pxCrwwzxZ9SZX9K72mwOVrX3j4sZUXmqgMaysoEIlUcsR47tntPDjpeW+hduw64Vvbvf1F17Lw/miZ19vCmoxgg2Dmo6Z4ijIczLeZ5XBvScRfELVh2QmAR0jsyrftFYFG0jUQ1j+PIIuhVHrRgZ0pn4ybYqp6o9bHzGXCXrdU7XTU2h1Me1ICV+y6d0Ab14n80ooxYSZmfN6ZlxRxTvKYENOPGwQ04l0XW4/DzJ2SyUc52QOcmNp08u2Hwu651uwy/naZVqzk/H1ML04zydPXWqmrz6wgftVlDbiT4zLvlfSFD2XLqS+2Mmky2HkdaP4NR/+E7mv9HPfYahMYr0RZVpTpMXywuq8p+XbdbOvLFIqEE0Z9YZcMdaaFoBddaCkuDZK9jF9Aq4IreTC2nNmiWVtYbuZvQpcSyK9Q2FNk+gL5VJGdHZsS6RVOM5C3c7z2H0RuRS5iTBS/RtkgQWT0+1bJVENspJ0uN/X9K5PPwBE0G559ss6F1QMdhfhRneTGtKzs6FpwSTXQ+Uq2BBf3LihpzaHwgU6xsiw5zWe48dmlD7LN0SUrBWXObE14CCYijqnrAK1fb/EPpUAwHlIH5WEX6yibP1a9MVPEWVyXYTtvw2q2jeo6wr0s/wcqkjisQa9GhqnseYhR++uB9IhwhiCnn1tOSVieSeRrGH6VPJcB7IJCIbDD3UEfb8s8PZXcVo9CvQUS0ZdMl7VA/DOiZfs3XiHt8ASX+R2PCiIAO0zkCFLVn9YquMwnyJyYR3ED4A2T5AzqELErFIHhKyz9qObUI9uJRrF7xsOM9LbOiZty4NjKd4T1ZZhybugvu1XN3npWfx+baP12iDUfEIKWPjoCXorzj71KIxAs9I9SeZzd9sNJO3RWeni291HeZQ8vmoc1yVFqpMMkNlVT/L5wQ/Ju3PbZa3Nj1ixwhui6srBlxzzDeoHEBsy8xeO/Fcj1eCKAjAtBtTHdLUHvtMTHiAfRlsH71Blm+H7voR6luS2cjRbB+OotvEh6ODgWBDQoGLHZI1Y+SenuGTzjKvmqwsvyo+KNJ5b8ETKQgFarTuVRSo6pKm4rft3qUdvJMsiR7WpWS55Z3KsOApfmucnXeVt4TIwfaqWa0wvLsPlhfiBMuPx9t6NhY1H0Vq4P
YGK19uw7IaR8TWULqTto3oSRY0r3iQkxqhlit2GuvHGmx7xJPrEPrK6nb7wsqddFm8BMCWyySqQMPDBx6j3lMb0gSOMzlt/dYOJrq/vV7SgPfHqO+6BXWSDTbxbPfoYfAg3SgOR8O18gJMSMxYxoZNBcoe3g8cmVAh/kwNgLw+VBeDiXV0fGVYQ95rqGPpcnPYE+bDUGtgkKhsZdkGDseuijbuveKz7TpKh18agcuLo4PnH7Qh4CYh3kK9p1HRifdMY7zr7ciCoFrxywizFYH6xiSkFyvdx15bhRBUIX+IX/eFQpCVOmgZa7VKaZrgnUnpiDbMYb7iWKas/ugHTEvAyjV/ox1Mmiw67QGci9QrBoi3HWuPn2xOjaU3Zd7K4Y+z5aBGVoxgb+xMb9/UkGrBAoKLCnQuGJ6xSyisitGcZ0c8ippaZggD7soK1cAf3R/gEHyzQUyVVGlN23p7WptYUE1KJ0XKS0c6j5s30+ijDhZsHUSnReYjT7mIrvd+AMpTb3MwtWQqj+x9H5SI2umcV4JPTW6cZD94FBZoFcY/bQ+i09fJBMBEXV8Of15UJrCpJmqbjYcLXkMbHxjk3o2qgrlWCfMdZlIgnBuOOvfSHKd75GsmzbNVoANQ8n4uAe/gidKwyL+5ojV3qUr9L2dx4kQY3CmCZBrLMiulE2J/OAmip2fcF7Vf9Ja0Ptie4aH2V8dFuJvz6WWrNJ4Qw/DqQCYK+Gdczh6rY0Hwht4QsSJ2GRjFL6NEqahJhXtMGtPl1b+g9alCWiVsMjTw1cIscuUZ1cLIIWheF7/HV8u6SqDenTDRpV++aiR0oIE+08eEXmP83On36VeESPY5vcQZyaZePTcmbJqoc8W7XELv8Z8yTWFVSXyZs1+SimbkXFFkJ862JYOCNwXVsyIhZut93DBWNAJ+9ZI+TKVkv/DytsUtMRX6djBmAqTfc6XV1HaiCUfjidqsF5Z1Ff/Oc1N8uvEpzSTEvkMsPG/1Eko2xxZ6a7BIIBDOD4JNBHZM1RKZ6w2vFpDnOW/pm9MMDKpvG8KlXp3hh3u72aoD18SJrw6jCfjHpwouT2S4uQFz0ZZir3rTExzeqi8YkPNntBG+577T1kcyBlXOKB04UeL50Oghu9bDm6XpThdWDWpqFhqU4/o/23qx5XmTJEvs0bSY9TBvJziP7nuzri4w92Ul2+PQisur23O57Z2wkTUumsfq/VEH+yIQIj+PnuHs4Z7lAisGI9DZZ81qKTkkEP+F5cUO3Uee1DR+ReRiK+lFgoiKa0OiRwRBZIRfuG6Yf7VkFvQrCNvTbL3E9+b7mDyJg14JZ5ftDAV+CW/0DY9JwUlAZ71vPn7uKdTQxlaxRNiJ800RNqjP5w/ZzzByImSx+S0M9MpAkdvzbulESzpENwQp3gkBYqdcptHKtD6d7IK52aYRpk/7YfQw85E96jVDYVUCEzEZeN9t2O9gAKzC7FnLLh2NP+1qc0oCyz40GNA4TjxF8JUIc1juGUZsFUeRF+b69h1mh/erZuYx+il1VJK3CSIxExCkiWMruaCD2WBVD5JOoew0yQrnCs6HrgYGMB67mC2l9hZBr4C7XTNjUZ2RF940yPK5xigDFM6giHhGZOIRc7Qflws6nU/0JjXvniBynYrjx1UTanZC3F94Um3PtmJkfN6beElIQJpdVv9cWFLyOzVoJA4Rvk5KcheyPqqmAAs8+LER/Gy/W5ywRZgvEdsUEKeKb8H8U5HFKyDDEfj18R6x87k/VY0NYvJGcyK9rtV971ivI8HO6zRUQqZ+1IObdUwcQoaMWdhHXXS3RfXVivj1i0C1cOTfztCya4m7W8teBMyiSiBQtBcxg6RMLbnV+ppDf1WhlBcMOe1Ghj/lOhrDItS4AwVUpn/CkSvFKnJXVrratd50Taj0Q9KrivATKVN13ByE4r/aTFQKpGWYqgXjBNvWdzSdpOnmoCftzv+Hbx/aG3LVIbBqSMBnCsWi64+WUG7/+B
YlOxvXm3D6yVpshSLFU+MiS9UzJiY++E4ippzMhf7zGrSJD0+s6Gux57hPca78V5UPMw5j2vpCbLuuSPVQ/8YCjr/QTGMm3pwzfiaibnFsfUg6u1PSZJdNHMY0R77D0Sc/Idmfa4U/RJb8Q9EVWLRlSe/4t2Fz1L99cLr/HHpACea+IecSMKxNRwAcvcaQfIV5VhSU/wi7+WPm2Ylb+9aSgcOkQf1EncrFvHHOaYJshYN+JtBR5A0IXTNdhL3PKYvTs2wD1p23EdN5RmYr+AG8qdC/mvVXh+VYGJPGqMPEe2xrTY+nekwoSacilZSgeihtraTGWNNHsWGOs822tVu9vH73ZSqOx57EmFZqFheQXN+r9P5KtSqa/vqsLaoRm1UZ40FtMSSw3Djf89NsH0k+hZ1FbVt4MPbW1Lk5fbowWaAQ9BITHbZ9HSt8M3/ES92gsKlQf4grWZQriN3VbKzzIBWZGqx7PCATO5zYZv5mq0QJsY93oRg8xnNN8tiOEzwsj5u6iVg3SlNM75667KUn1skcwck2rM/FW6xedjBqy4/Eh9Xor+uinyi2gWN+FtqWfF0jOzwJsZM1D3PuWCaXk1JF9l9QlezkCDQA50aGZHLONwpdn2QNb1/jjbQV0FIt05bmFCUpoLP+Ow9RtUiwmJrGl9mQxB3/BCLKy7WuWTme9IX+C1Sh5ES9AAjzzDeqAKGmTxhzJu+YSsEd+0xYbIkdsNQNQELdQW4IF3l3A5E5k+GiM+Kfhjdi3e73g6jaOeETG3vLhKD/B11n0PN9674eoyg61YiuHJEvlZKRgMx8a7sK3U6uSvsGU3PKPZcYvuwhFPfs1PlE/5VuazGScw0Rr1IdInNWIXcxhS5PK9E4HVRV92L2mTwzYWdBeTm+uDzVD3yqq3BICfMzsy/C1Sg85XsmTx54RolRGDZxNjdaGgIDXu8gf8I7f4rGHgfarEowJayj1O0U2+GqUtnqV2JAiI/GCqnx4/XKDqboUQMcdmX5lGENts6y60+elKaAUooIwXm3wKThFm+zwnbfYtWarfh1y+3y8huk5UrZCA0LzLpZNJqlzX0F6l1sY5ctkH/GFy3ROvQrbWbws6t3UnwjeHBHwsuJX5PI6q4i/RjNfamMTDY7UDmihT0XNg7e3L2ktKyELdLBInPUZhI6fTnfQspWhhvojwG5Vf5vWL8RVp9/BQ/Hy6iU1l1z1lkVbjJ9FUgPCY7Y4OSrIM79COngTzMAyXGy35xQ23JFzl4KH9SlcZht6/VdhNRNyF2LGoUSYKm2FbdjCwppm4lMC8xHylhuVQ7TvBJM9hFF0VnXh0Pm519ILe7WVYw8lUmSKBXx/30ugurQ8XuPnWfXdhSOFy8kExKk6QgnHLjvpqdML2vICFoOkTzSXZwYYxKokwztkG+phd9cn1peYSkr6628heqcgH0OvIAOxfa042aOvLXaiOuOocEJfCnD872CE8RLLaAgBEcfTCgNFL22Y4kZUEw1LssgxnAowiegVl37aHsz3VdLhdXwQ+Bmb15H/yF+RPOwiFUnN4K7wHD+Z9uZkZJVeeSTW4YMXlDL5dRXfo2XJm4zRDQvv3jWXcIlSbvtyp66F5+2F7Ba+giRN2nzJ3Zquh34Btud1nrEsPgzJ0e7MY9X/suYu743Vd/GVXFo5+aEsOY0Sxenp3efRlo+OOBherVUMdxvnxG/rJaZdlWv+JxbDMsRKJbRfaXEJ5O9lY/R3GRTv8/qs5VXIXKgap/5d5fTCgpdcga1NXhst35J7uBiRFfn7CKwzMRoeksQCamj9sOHSbvgP7TP3G63ERreHRE56OGDcqSQy8f2WcNxvTwm0E3/YwNtrp/MFFAiTE4WYWbAXxSlSrZ/R/77IOSSdADqfKSAyoPWDR2Ib0FDS02CyL6lq2LAOKvVLm+myCadRsr12u4toNRpuU/kd+pa67VQt4buC2D4gNABwas8W0k5RrQxBgGyu1em6PpXhLG79jsWm9
hkYGAcWvtAiaMprLDoJQF1V13s8h6AvHMK/abd8EdjoVfEylMrXG1lsnvh6V/S5uSvxnL8yRtSi/+d7nAkslhp+k9mY8HnucZzzAKgK1PGEazcZ72kCiyYb/HjQwQa7ezzSMVQR0oijG/Nv4Gl+GyIoJUB/fJ+kO5kWyY1nQ50buv56DqYi2LcvS+g8ZZbiKnYTOmo4QEb23nwT7OcBZGuPpN/T2IINkb0GusetdXhgefZJpYqqjQxhUgmVor6Hd7au2BRFx98eXsXBWkLQj+V7Ol8QFLnRR64DGJ2HI/T2nJVUfEczGdEjlkKMfhMXh3Obw5ZHs1IkbuS/uteDzW/2vqCXPb+rNSt/pU+t/SGGA5QeNfP2UcIGWShWZXUCtW+zc2ZBg8F1LehryXp89g1McjTM6gYwIIEEXAadvOuP6FTbLQf6TMq0KPF6k317XKJFGXIimt/5nr5WMc364guO0Q9bZePB1W+0tWitUEywHe04IulZWbE5jf95faZQ+F9J4t/3mcKxf8X+odMUgvwrgf+TTlP/aS19MPR/wZY+3t+39NEMlzf/aunzV0ufv1r6/NXS56+WPn+19Pmrpc9fLX3+aunzV0ufv1r6/NXS56+WPn+19Pmrpc//H1r6BP+0pU+wGcT7oGlFev2Tlj7k5hQfEauuB3KGNyI6S/OQJkHxIFntvHZLJ0f1MUtOvICCPPWLz1776OBT/jjfSpc905EOEB0UKBKfCMT8NfMBOoJwTaPcQVTMAlD22wGFDQRcxARClHmJzzswyOdoD7Y1BpddJj22ci3Tcs3TWwWy0nJ11HJF17LFoYyR+eTDmxRwTBuWRzm6Rdu8RUc7nXZx4tiSzMnSC0d5BKmsQLOkogTuOFpqAOPbQBIcV89ix97MwKhCwuh9eZCTZ4GUCgx5jMbUzcwv3L4fb5DRs7KoaMgczR6pXLdSrrUG/qYDQ74Nr5ZZ0aGrCauY65f9OBmwfGhgliaDzCUL31nAH16przdihXRrYjSWzYIpg/pCpVInPmZcV4bdph6Nyg/mSk0e0bhjvZ3Gkt2kNFG/hJIOdwsz2ZeaT8/sG9mjpERr1C3+psIwUlUuTRRcbUhYwRSnqfwZhJML3CIO0jSbZq/FF7kSzRKm2I5pu/iLvgowFvMgrWcRjxXQ3knZ882fkt9pDs8YS2TRVOiZVfJKqyBdLhfCEOz98WViRnwr3AwdRnXLBSrbTqfuF7R2QK6DWe9bNjns/YpFTWtD0oi9UZr9nhdLeaIDelDVJBNHgN2/VKDBS6gkGw0LqM3CspZz94lpgpAz+mZAwIWJ/HgfbhfmMkbl3haDdY/gHspvpuuSrmdSQB0LXpkMP/IVj52O9oEWQWHDG1N8rmE3dk5vWMpoG7sqtnKcd9W9ILLwfOv36owsebdGYXTrjeXove8EbSAo0ck13beVrBYRM9a2SQYil7FxdzzYxNw1amUMC2esfcK+1Dr9i6pvDSo+0vVC96rnKmZNBc3V5Ep5boUdAt6ujPXkULUoDaG5nqfpjKZosHagZ7ag64+fF/cXyuK7hu6vFOOq29DPvPHUBYO4PPfbBxgonlMZtNsK8sq5wBP80VhJ0yBmNRtQYb7mcpctHnaYgFPRcKhteww9nLond6hjEJcuYuaE7SCy0BIDMaQ4URzFcmnlqhePB9G9fu+DFuHi9RpFg6lxDcYz5i5Fa3OjLNpwJZZD/W0Tai7V5qB/H+8HnJJN5NUjx1tWekyZLhZGbnyBbt+iIDmyH5HSsHlNdkXZWPe9GAZFAVKgw8Fcn/1bKYvYu+eF+YwgN2DcxTrLYxtsu9KsCjPu1B163WCFCRUpQyKPltJIpqc9nxiiZUI5E30lS/WedsSAuBeTMIrsQsb7zppUdTy7uu2voQx6vsxmEoPKkwq46mESYd/I3wibGrG99/4cPXKKwptifT/cgfAVHhVzXmHcZ3FWsb/wruqkyeDIBH7BDXO/BvchXoZVX1giYEq0hs936
fONfEP/fPRQNw/fqUyxznvJPFsXFaBYJTBspwmVhqkWPfvog2kReYfUaqIXj2AyxsXgzbQxkM3yiRBE1KCQszH5ot1B6AQBTJPOCekhIGZhs5t69JD5Mh6P+SZKjqvKoQloDeZF41GD+ghTzOlAVmVbzyzXPVvv9iduHoAJYmPkXsdhE/F+1svo2BoFjRxeBa/KwzO1d7VVGmieeq81Jdet4tBI3fZ2YuiGLMCE2FKkaGe3B5HGiY8F1s0kuvby7nuGnUd3GUZZITVUnu+YnE01tfk8LYaV9FW1iprCStWEVf8UwxS/qRKZIR3C3PJWxCKhciFDbTnkFhvCGS3sGJaJge0rxqN6kGj/vEZkQltDpO0WNZ8RiszmJydA3cDNWprDmQXpTOBtOvLLjclOSWSL5XmMsbTKNrGZEXW/qMhq0uTuLcYDzuSInI4Iw6O41yAwBBH0ldCzZ/Hth7U88IKsXel1qCcY1GThVWu9jWONORUAmg65aHSEPGTmyhe/iufDxqUC2clthfrAr+g4Wca5PM9flJ4hQB0Ao86VEdBcQLLMeVawpTqEzhVoiLizrl6grBU4FQkUFgtZrPzsiKzqccI/pdXyrFJfIXoy2AVKd5kjdORpiDhQ1NfIISm2eVUPwoqIll3cpkDX9Adls3EsZEUI74JRewzdPts4ZP3rzKgsUNQXWQQ4m0dUioKWDxHODHPoEJ9nGbcT7TZizQEvD8G9gSVLGJsgQjVbJ5oRM6XAbTu3rANyUV1Ib0mB9sELp+HH0iCL1wDOfBL9Ysv3I5Nye26jYLUEt9n91fJt/Vr3s8Kaq2zj9s9X6YHnxyhZTSfvsC82q6bH0iwXZ8wSj7DLbbdGlCn4/WHlPt2iejVMregAGuYv3V53sPqZg8fsEvKekVvipj4YuMDfBo5xbi2D+JUqOFYKRzCMYiy5ldK9EchQijrlyuP4WIzTwMZXwR//pnXftaJJKdt8VSGMRAnsevo47/0Nr693FVdKH6o2z/AT4750kg60Ue9N0XFgBZrJe10LpdhK57V+ISrA/yCjdGmpULCxUJ1FeL3LapTghW8sIVMaVup2H9Zg8HNOuoHt3IyVvUewGu2LGDqRQJB8hWpFsXj0bJfX5v+IeIFWGkLwuisTOuq++Usik9edibcYp1kh8MQXo1BsdCDHs0jgwZmcWeAp/qaCDed3xnJKUWQEcC8rkxx1502Xj0Y+4Doi4PL3JTrh3q2+ig08mkg//yFMoGrNCan3EA8j/CD88OGUtFrhJsETCkM+3bt9n1+F+AwE3lW8g1u42F31KswkcSRgj90NCRtRbMVtI+d3/sK6CFgLA4KoO0GlR55iYWiyV4LpgQrwbBK+6Xe74hXZ6zkIqqxn+PAaciBZBD6+9yCV3Ecqabb/bVm9muzvF8tfbkAxClGw0RIiXpFJqq++tco8TBM3l1T6ZQxxObTjhysmGFYrEgne4quVmhvM5ZxzMmr5bk8oHUfkWglPcwHedb+16GfEa9DkRWpe3u05D9M2B7DiUeIhefpC34DsCLazewF6jwQEdouNbxNejEVScz9YjHfrYN1ZNZkfvUcdjXUH2XW4kL/QpjH0w3K8CiJRTO07rG6qvoy9ltrb1QtMlJTfxLPk9p1+iBF7xMpEP+7US6YtQ6ojVU3EnhHyOpEMMZopD5awoa0UbDt7CV/zJb8Yuhbr159vGf2kA7SPto6+bOMNN1BddN9frAbPb+cb5lhBvPv+o3zZIvrW6Qufsx7a0I/+2BoQiWv8gauYxRljDmgIpV6R9at+4t5nhgcPQrFZtFTJW+xeO2ADTIy8jfHTZa1W8Tf7DcuhwNfpOXoYcqhsEMSolnecK/QaAdNFOdfBsDRm+oji0H6bB/v1xVGJa+BMXhgGr0QuzSEle+tgfdwqlH6pveJTwUO2bLIPPXw96AjtstK/iTqMkTuPjcRk8min8JdjLR3wuB6XzH7S7ntFF9HB+CATNRd8/JINKaFqTNkK+
lWtvSOz+PtuiHGf69YOq8c6N2GFiStpXAsBYScBW93LE19CnntAJs+dXdN1WkQzBacrcZyo8O5fV7q+H68m6HoYvxMh/6K83VCT6FdLTe25ZHIO8uCp6N3SbIS5fQKJM+wH762hPj5Qx7gcrH5Po4GO5tutegSbt7Rwd/zMHDT67pfJjASlk40vICw3MmfLx8uBmEdZHdQXCoiILp0EXdOdUPpngA0KI8RYcyC2Ku0lAVFMe0jfpP6LRpylAE8QI4KZyQG3x8dW684/csXAkvztBTPUXERbp06tU0VtUJWWiUC0AMpF4zkoAMudsXbUDGDfEY1Eulb3JG3S50Pw7/HRVWn+eviYg75o7EDJe5781wC9bzPQcTWHlQ/kP0o3cI6iWtmpTiqn6I+HR97ZlhoRAS/ZS4kmeZSUbhSFlYJW7pxr/BlpC4y06z8jXUrFYiSiuckyftBdDV0VeFZk88kDz6pAzZSet5W3y/Edd37XT9rZO0qXO2cDNhnIvcf6FkXjn4fa76huekHocB/oN64i2EyZhgfybXq7iZBVdVtGm/Dw+Xks3IWZQEHIK+xeTEc+s29Cfmh4C8f1TOMh3/rhoyzLKCzEz99D87nZMj6+NOr5oQijowJr64XvvS9FLMLL11F6sKWJATERVlE4FIg/UkqnuIjWoFF1kGWm/gjVZN6lMh0E9JkFjJsJzO6P6bO9SK/hHgsReFe828ldXpssJ6lfG0oP8MoTrFOq7tTVeNvyw8gtPc5CEfC+VLuQc71ijejiWJYkaFfaiUWxLjLx5oKsFfXCWemtDKliK6ProSekwwiIBTF9nnkUtkpx+8XJrCyu6uyiwvsGjc6shCylLdH/iVBJdCC11nZ/zKJ1G8nLDsoSNv2PDicbnIZ63cdmpUpSNULbSlNgsXBE6L+SiB5k1UpgPLlY7tZP/oKxtcIOWBQD60EynvyAarC+yKnCR35lksJCnyozPVJWVKzgwgLi45V0tZd10GHDa3ZAtEMeuQLRIVT4eTppEPcvtGjKD7lpuH+FI6I3jzcn+szjXt8pZi5fbiDTgynKATFIgThBsI3KesKlQaSAEy43sEApKmqVFZT3j59rN8I/82Br7a+UdRGvaudzl6gqnZkHK9EfSPkLvAYAK3vetZUIidpbJY53FgL93eOpAbHxZ5XS8c5UcqWv7ITNFoZxBTqA57CS5Sg0AUnRXDTZfVO+KS61zfJet7l9Jfgzc69YGzQM7ItpErAK6Mf9Y9uqMhH+jJJW9f0pVQEIvN7pkPT47ZBkQn48zG67FighgkGQJXUNgBh97xoJbGtTUnutrOaRNTLxHzxwOlzYXf0uR5Ge6ubAtTeEk9bHF3UIyWrCvMBfS/iMj0/p2eSq57H9hjR8UilTR0QScQITKwzh1mAdAttuxVJC7mqWI0mAvrEMEHcRQf5mKaXNA/sBfV7KYs3U+Tu+UEKJ9s13V2LaSj2gP79CCh8qLAfFfitnI2diQK5N37dvuM0CXrOXpcGLn1FdifHGzoYyuIjqBc2/5RHgbUUpzhJqPdKDDPLJufRjhiRlKGSYDAv//nQCzqQgGlsmFZ6fun4ZoWH3YPXqA+12QknEGtqp8aMapP7DfLrBnqe4fkABX5RpzW9yuWNhzUTDvHR3Ep2Eli+2Fb9pW5SY2q2JTnaHf1UYKn5t+aN9F+UVptIaoMLn1oJx7aXglKM/0KH9ZrARi+QVNBVMk5v/KEMtVlb2w3k7DaRqH81GXF2oGkYY4KI0xDyqgaeb2E5PIS+ZG3sYVRC8X8TGBXXc6WC7fZV9/crZzSxbTqh5p4cO+1ijA60CouMY254POgdKGTIyiqvb8LmWqq7chYSQd68d8h9Idl0ak6sPciu0O/PjqikvQpteB2fsW1eGDzKGH1DaA5HUxzUVlDrn5nhlBRXsYwi0uXR/HiC9z2VckRoLsPI6x1lBztbYIXzt1djcT/6AEAN/4El7B5G4SP2dYGbvHM+FM7YI39WKg
T/78EHqwbHymGmjIFL1K+tRRughOCj8gUItYkGAmsxfmHtoP4UCwuauTyjZnG1YefObcYB+CQ74CzXiKQNg3jnhRc4AoHT9Lyi9SB2cCCRVfVeg4G3KkGCKdApCWHyxcmee7tL5zpfCatbdnGxAlECxVQgN8Ex3JKdPm+MTaUXSeN/20VZOMbyoA6e0zIwyQX0op9p57DzvtFZ85SVBtzvMHpZhU6/LDiovxtMvLa7pQ74ib+D1zhi2FmOxiJEQNKpVRVofiKsHDGSCY2+u7LLI2n3YMykLRVAwoYgM7Dd3E4BsIyDIAvFQxMRvexALwqxYLoZhb0Iz0TMmfj+Hc3WUYPS4beFAvlPoGTiaXLR91v8ITqRYlpVVCJIMubcHUwLPzrEYVbB/MxTCKURV2DvLLwpW9ACzjvbPJ2KacxxPouOHwuD4yT+EMZN8HYJLi1WqXFheRv2gyWA372Y8HOR8eA9wlClgPvYWBjOcEUmbQ/s2doOo11Vs0CpmXK8kDT7PPd6nb7BfPvLY6xrwOQ7ArFcUpwxf+MGBFQ65PIX9oAgBx1OEtBWIKdyzhc6zMB53oGFAakyOZ17WqejUke4IYRFU9QPWQkM7LZJuM+QhjaPXrx8M7wOkMO9QEmGSvW4MBvDvUG3+jLLH15l5n0yas1KI7a04WnWqqWr5nfqoeGfe2lAfsL5Izw2VGoke1gJlOkhCDa+Cfgfr46ecBSje+5c2BMJdmV+H1jOzJBSHGRfeKLUkK+QR03Czqd4Tc4CIsb1BlupQiZNFa5JGfMbY07Pu/FS3JbPpmVXma5RVNjvNAXBm0C25lYfAj2tFv58Lgpc/xkISbvr1SaszvWW46eUWbku2ncFy0NcVSb8LCaOMZASYi1Z/s5BELgsRlSGbR8yHZK0S36JsnC1A+sVfIrf3gzU5gsCxd9BFD2Z2PZOuLyi/FXg+PinHI83DcxWD7yAzCl4jNIeBeiWJ3mdkHsVm57qNpHER9+vnAOyjdnM+lOoGe6NHFulO5NjHQUu0G19csqz03aAADJFVW/6ItYBMiA69db26XRzy/RuORChyykrJjXn6tcKi36SmDC8oRVixjWWWHXpakh0wwSf6aIbTH+f+MtwcmfvF7zFnTPRDkiQ6Lv2cXR737cwOuaBA6TSsgWhbTdHFt2Dse2a9z1dKsJalO+hXKR0kNRa+QwRqg83tOh7gc3wGrzylfQ5q/vDUJycfePpiEOGcwBMS60Cr3qMVS2NLJeWeYu8XyfSqJKNMm7OJLlZ6lzKKY3K+inDop0AUPoALod7hjEYQyNEqlPqqeBE1WYqS7VqR3Y1cDUS+0SRVL0RnVHwDLuDcbUjD6fPzCrHPTBgbfjldPyKd2H9IEsTlqtP4bZwbFXR4dejiAtL2UQhunluufPco9XnIW7kT8eoNGvtTJGe11viVOW+C9qiIqeMbrsz+nBi+I5m35jeImSB4engFxd/fFVF9jYOe0fl2552efQGyJLWUmu2MBu8LxOr8NR35d+1ys1CyBMgLlVnE/Eq7M+XhRgD5nPz6g4dnz7hp+tQ5HpdWKQw/NOBh7DG9IW+ZcF4NucN3eCMmG1yGpvjZM7qu8tvjzqg7Na6vM4dfwAyZmi/7Fg6T2oL00DqIEkQWqc8eUvCjyX0aZFF8le531lMqRb27Cvk+dDViKMKEaeQxc57Nz4dgCBGA8xB/7Du++cvRLRloh+jMdWwM+31ESRMy/RXCFkmVut2OLcV6uABIGQ6i1GhDHFPM9rEYfmt9zX/QVxPAMBjtH8+MKRQzTNLWiWf7mXy5e13xm6EvZhgKXOKwhCXv8xbXV4Oxgf3TPz3fl7anY6RoznmoMgwxX5jCvXLxpPvozr6bAV4mIkAgpiQX4wJ+2QW/zBWwwWP7ERnpeWKpKByCJAV+2jHRo4wAJBBfkASnloKuk7FVQ+83u01ekdTUSNJrxb9YPj7yFKcjyXVrGAU5GDsn2JfqTDBQDJPcJNSpXB/iE
2ymkBNGuvIT6zWQVqRA0Zhq1nN29ACqVkBAx7vNauFoeYWVGswabbilCwIoOZeHfdW0/M3YLUngNng9yg6JW/qFVkAroYOAPtDw1hA+rMx8eVEe2Fmfv7PetSxVxjiwVp0SP7qdjnZMccY/7nAMXyddcoGzlyCtE27Y1lj2w+rAq9GZhfYjCgGXglb/TFACzT7L1mMztsy6ZvOgCcKLq+dINd6wrPXgpmZxu0vqt2fDlJ+68neZmO6qa6/rjYaYpgLFWB2xiRffoQX0ovtxxSr8e1uZmoD8W2Ooua3lEiM7ls5QdurSnPTyyGqaFvS7OuawJ0wR9d82F2ybZd2vVibo9/oMkMZPwCu5jOA5YxGHvRz+4k9+wzUPVv926Uq2JVuWQ9lRqNF6sHea3GG2/0dEM/h020Eoy/MV8J4hvESFApywjBNWRrsOHQLsNExDSYWVfOy9QHm7GiexkpMdlfesajVWez6k+qmItEXGwci1K7d0pxF/phC47HcGI47T2g9fFUBKVT12847egWYW+k5Lplrvk7NMIB+ZSfEbFTXzpaLAFA+9NxWo1Sy3Fd4Gk+UL/PhpKsGufYfseXibjL9loyMyR9MvN+3lvWmEycQEaaWc9ZILcYhccvASgEh8oxcvAp+OTMk20XXbRqwyCiA63g07C6o5SHHtKSTNVdX2GGPxm2hmqRWubEgSvbjXrm7f5G5rnXaKV5r4LjEIcKcgyNDsFVpK096O7Q3ndhO4lPMipiN7Yw0ggCuJ2bZGifUYWBPv1gWOC/u+jFQRrltRSvcNvPDEK49yCcR4eH2cP3wNQ8fmUp06tYHmnHeV5dj729YeM/FaTXBghNWSVAC1j4IGwtDxFxcmlQdPAMKq344/f4buXhuijYiiDet9qL7rvo2BVaytinQzHm6jf7QCp4zyrOWDlTYxNMQGMHwMPRyJkEgz4JyUK/fKIWhkcHT+I8kCLd0u1ZA6imrMgxCJ/pzhs5N/tNQZ0eLyzJVkQ7/2Lf1vZQ9yKZka0Bl8VulVqXVVKhaazw83LPH5hTId8/jCZ/6ylI2x8JlrgEpgdhDMqPPP+qZ+a/yyMo7UcYysJfDeJWF+bw6Ugxjk0Gl1ez72Jo6XFi90QLxmwZluBxLnuSa/YumQjmpJldIZ7+aCOGTbsvvar6Lf7nhSbRHgi9IioVVzEk3dwB+q5ICv8a1NXm192ppkO+EwwtfA7mW7GDG6xxAZZqnYKyCDo2yJA4uTSIFshYAaySvBm65kUTvJwfbl6JiJE/TMWz0QriGtKYKjYk6skZFSTMK9MUQbaLHgNBuRyTkT9qHHJiL0a9SG2vS15FiRaCm/qZQCsmu2tMMss4FF2T98CwVfqt9GdOl5LP5usBtELkAWT7d6UA4q2DjX8eKD3H2Rgu4yTKHpQuHXLQpSW3HgrquNkHntDR/nWbmHOfRYPLDdFJTJAXUSuZSBOvk7flb2xVwM+/HQkkjKgkAwBxVjl1jgxGuBPfd28O76x+Lw++HHD/S5Bktfj6QDZGaZfai1L/bLiDgqJRPRCl8h1mDujoGXAAUbWcAH7N0oGEkbFoh2xOnLKr4+8vje1qW1w9Vd+QNKB3kyfW3I8+zJdoFlkGpXCOVuC0GCEpFYg+hGXuBTP95l9jIfW3tWN7KI5Nc9nMvFGY2DOVRRlyxXEm04xO8QDHFnWs5i4n6hGS2K8Yo05rLy0bHMXwwPRLMMmeFk3Hp5i0lhJEaREJHQtW3tjgc4Lh/33z/EO92NZi6958GKfr2t52SSezo8huVMiuQbY1b0YAqxPSQwRNcFTKw6ZeVg66/qwZb5XXG9pinXFtvfO9vp1suTd+0LMk3zP39l34O4+T5t8CoeTZAnPRqd+MZ47TpGMOxf9Lz4K7rdQdHHz1k1kw5VPNF4FEg/RQlKamvBbEhcf6y7Fz2BpX/y4yOQixeWACpZ3NrgQeKXTFhSGFchlnakQyIIOiLNbNkJ5FvStKYvniF3o54PPdrEB
lHCWEbP1wXNHdSvqGpKtfPnvYs0f7PbR8VJIZ5fmxCf1Lh4eoWXdiJvio5JyuLl7vYrwJE+l+TQR7PFisA1YrKj2fHKZz2Mj6NrgkvCzZgNGsuZBMg04AXAYha1NIBJRkRLcTUvcyGpT76FyJY+49/RHt1mBcsdN1sjlECRLHWDmHJ0OQbNU+Zn8ZQtzrrSYG4CV6tDUultQeHh5vk4fjs2L9ORGEjiLxqtnmkbd6Qzf/Gh37Tv4+LHfkDzCSK5SYQd/3CUKIugfKkfvH8u6G4+AOhWVvWFhMgDmWGwpvK0BR52XM90PHd+Z0lIi7G9exZ/tA2dSulLE/IZDrbm8D6Wf5dh8skePUbDQGwLr2AO1BvwG34gzNYzCBW0dkJ/sRSqkn3++tD29zEUsci0z57qCa4nXWWLIa79ymbMxye2xcKEIJnXlJ6Lmk6awxkzYecCx5ZXxbSjv9+QNi9ib259QIqTJJ9TqbzmGqODO8sHHQz/eUVQ8SE/QUmwpiSqfLlf9ox/46kxOz4jnG/B1slb9gMTLxJ0s4QWHmAwA+Cn4V9m0U3uAsw/iOv0aai00DMOBD1LMRY5ynyVoZXlrfESBwB/UbmEzt70iWDaOjmZO6RnTDFI1COoJzGD+S9QnRno5yGOfPKJf+lLO6N+lchAPzWeQtqjI0dxfSbEaYX7EXqRaaUkR31CruFKCtaeaXUxXOBjuodAcEQYjLexlo2+FRiEap2cf7YGEkCFJNHQuN0cjmHB7is8xx3I86ZgVrV2NsA7ef+QyujXrYbsb7o0cNWTHS7JR6nyv1lsvXfiA6YQoQ0aYg4lWnmGt2hYeiE34BcZ87nQHVqaD7z1CkSdqSMFx0Z6t4X1AhVJNzOZATHn7PuqK9tL61yun3X6BzBN4cQIkvHJRdJ41sctCauch2AsPcIm3jnJ5/m9b0kcDtz6yVjokYcWiDwBkGUdiUKSKZk8pOrygibP/VYHKvKp8V1I/GXj8Ryk6aFk5iyk8a2rgMkgl4BLiRZzSHZhcKrIUbq/8COZfo2EiqI4UJJBGo0ZYvNxtG+aLaQ3sIov0Ry3IUBXnySW9TaUZ0h+e3SuToSy0FlToS3fY6AqNIjpyBQ3BjU2Xibv1Xse3F6ku/sWnCdavMPTgNzjCsm2c3aKegVwNx6oIAysgf6ZIMLXb1dHtKyw+aGipsVtxHIVGeV/9c9AyiR70aR9+D4X5+5b4m3gSnXylHoNKSO0Vt8TcPNQqGsmojhy9YB2A8kvY3GK/LmpRY8JLUfRXQudJfI5Fj3dd/t1byMXoMNVkdCZzrcITTYJGUbmvJxTZ54nyQ+ZzRj5EUYg2LsndKoX7PwiD7e6JPr5En8W9ZqRyUkTrVnvZsoft4cdahyXqrvzzD1rO37VLbSxy6v2qJo0Q7hjCgQi0m/KduEmffQtCLc6meJtDN6n30rMa2oQJFr9dSo8z6MCSbfPF6Zrnq/Y4/wks7EdVQuE8vBITRDsoT4p3RIb7zKA/50bG8ZmVijtMwTzlTcND4xwO/qW+7UuAa6oJGYQc+xZJ6G19Q8bjf5YHkV+Szg70Gd8hKmQWIxfs7CvQ9lJE7kVig+mWHvcR2kRfZJ1eltuctU8jbPZSUl7M2ju0TCv16JnC49BVMcbIIMLv1N64uDNrLgunJfAMPb6pPHTQTeyc6arpoXRYbyTpK0NC2GU4Ci6L5XNd2OSw27dysjfFj1Fz3/vHHBSBq9bSmfO7/lB61Vazqxs7I6VWKp4OB1v8Q/nqZmZIMV5Y3/F6anEAI+fjjRxsHT+odZ+iBTM1DLhqLcot6OMqLISHVZ6Fsq0C1nQTuy6G+oN1L6RP4BQeYcN61+jXttvQStE0xExMORyBLHk7OIWXkA+Z4nvICRUKgT2NotxjqVI9rGM82vKowm7u8n1IcwOox9gl4LAm6RIVt8XIs7lLDb2RT8oPcL2jOJOhtxokZTIpir1ZzlhVPd/7JLuj4wEQSgBOIHaGDij0x4dS
iqCwmjPcjd8lenfE5ucmU/QFrYjzUfDaOgCzGgPS+qR3AD7SyexPPORIQWE6rY1Y9Xr5nR0G+7km6MZDtbUgH4iBh9+LX4YGxBYka+9KOUeWUMnqUiLWZFOpfM59g8gwdk48mhaqQM8nh9WK9+kpf06JtYtgpY/xIYJDPLke6Esp7L5QyV5ZGMLqEWSzCZbbonadozJ+67pezej6DS7eAkebpyPg8oreY6tisgVpuM4EEp8sKbNDvcbt4QMam/ZhwVUxCDeN7lV9mvYzFP1BhsusClj9scCgGJjiptQ2vvqNnhxt58v5f7QhMpe4GohZ8OSrL8CdTreNgFjotQkLUrq7ITjGY6gOu0tG0x0xC/R8RCBylPkrhhcvzpzvVlX6jQUe8jqiRQhqBgFK8+Fs1dPpNHwDfh4/iKt0c73V7iKBrxotpaBauqV7F4l8YypcLiMqR8OFEVoKnsxZhqBaHx3TjHrikmJmkGBusjmNDtzRmY7fHyHG8lpBWRVJKx75qqD3DDu88NYRpnpASFZl0QfWDLOYodF6zuxH6TSfDKDMOpd38UYiym/mc+3I2FJ6yTJQueQUWXfzZz1UZgQ5rj9MCHt8VVfznDaJo4+NEsP5P1p+/jrAy9PSfT5Ir1wEThuY+dVm+l3GTDfvvD68a0PLx0awCaLeKMO7uMjTVAa2Id8S7PIsmNYE3APioquhfXviEvTyvv6s32DaBoPdiqnbxHeayMqM59jnLzW34VQfsYwWhjVIWphiV/Q/NUX82OAuIZZgXV2gxdOCDNixEiyQh+AJAeukb06YGC3idBnEVUzELxt07ssm2ILfh7lmRWZYPC3lX9OxGAYQC520t/1YKkGBq7BV8uIQQaniNzkQNxSSjzG4PLDWy0JsRqQQ46hioGGYnLemU0XD769i7ZDSl8NL9lj54QaqDQ5JUxwYtyt7JqTGaAoRTCW9jWgENLZg723vFPj7qnQSjvc1HfIaNBrXIBQygAcy0z53jmqL3pMSP1BHL6jELqOy5LUR6KwbcPnIveHgXaB6oDv0EQ1MW9X7dgaIqsePGtmKT0LnMJi2FdTyHEjBLHqEmqsov4JT8aO358qImJf5S21RKKXTHJK/JUHU+ziz4f9st1tYMjhJvjrwiJbupyK/cMjKtboKUXC06600uSWEpe4Usze0kBd2uoRD7AKQgzjaypr41Nm3p2EsDlKAKq2vuPRDoPDltnw5PVzkSVOXKTgZAyCkR/P9Tv+4G2SE5mU+MXGMFkz0vTEOw3s5gi9JVUUKbVNlniP4sePe+xZn15D4P0Ae9RvC5EI0+UNl5CU/RJUeduA6Aak6PMD/g8ntC635zRhY3PMvpFCLkrwU8LNgJENl8+HxAT8lanlPEvl9uOnwyzp7rss9oFDG+CbeJmv+L3mzCw3zUo0zOE7bAeyIWwr8hGnajTYzKIJis4dAiPbPP27JgJZD5HuqHp8/oCmH/U/fOOL/o2uUPGYbW1i+9v2ogG7ICHqowfWiSq/Zh003wlu62xWz7L/k1qvYPg/tF4h0H/WeOVvnU/+vvEK9Z/WdwX/J31X8F9zlHJ8HhaGsj+7pjynv9u4/kYDAf/K8u9P/V2jlr+dBF/wX5YHJ/8FeUAdeiHT+fdX4NUf/30GAf3+H/9CMP+CsdOn/heC+9+e/+uT9ZOWz9n9OfEvBPv3p05wCuPr//2Pi/92wz92Ae75b9/8H/rHPDMFzn/W/hkj7vX877LOY1v8rS/MMA6grUxZd91/OJX82R8me+a/mP9J45i+znPwM8zxqdfCmZIM/OYxJ6CfzK/5TJH/2UsG3KKQ9HUHrMCt++IRx9AbdA+E7LFPhj//xPmNG/dC/ieZ3+v1rzjyHzr/EP9gfjj0j7aHQP9Zxof/Y4+fIq8K58/DPwc/2+b9N3pgxsZ5/YzVOCSdNoJmPb+TTbGu158Dlmzr+O8nuTjrNQSD/6/Yn0fR333CnX/Oy+/g+tvB8Dzg310EDqO//+y/X
vY7uv5vTS755yH7b0sMgX7/fqaZzA+9nsfj+eCTdCVbz9nPxH6fCDUY6z/vIP+nf/ec/7u/+l31N6P649O/O/r3C+G/Nkj6b1reMm5zVvx3keVPa3t+tyrW/5HeT2Du/7u2PBddstZ78e/u5H++WRLQ/6tm+XdG+V9N9J+bZZ4sn3/7TXBgJusDSMPvDAyh/3kW+OdD/3dsL+3GrHU/9fB/wfTQfzC95wYEAXiXf/sk+BOdXv8PzRH6HzZH6P8rc0xuyuAyvxQbrn+fg7p5nvxf/gb1f++ggddj/+b8/lfwcvB/xw7/G67un9jBf9P7wSjxr8h/IF8v6h+83z9jXi/0P2lesX+Y12Ias8/y14T+j0wo8fDk/0BnXuR/1oQCKTwC1vpvn4nPEH30MS/AX/yf</diagram></mxfile>
2205.05871/main_diagram/main_diagram.pdf ADDED
Binary file (43.8 kB). View file
 
2205.05871/paper_text/intro_method.md ADDED
@@ -0,0 +1,6 @@
1
+ # Introduction
2
+
3
+ <figure id="fig:system" data-latex-placement="!ht">
4
+ <img src="figures/TS-DSAE.drawio.png" />
5
+ <figcaption>System diagrams of the Two-Stage DSAE. Left: the constrained training stage, in which the local modules are frozen. Right: the informed-prior training stage, in which the global latent is regularised by the posterior learnt in the first stage. Dashed arrows denote broadcasting along the time axis.</figcaption>
6
+ </figure>
2205.06688/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2021-08-31T16:05:14.850Z" agent="5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.1.2 Safari/605.1.15" etag="SzmrQ6L7dLoxGAEqcd4z" version="14.9.8" type="device"><diagram id="axKbjaB0FkojJKYSv3i8" name="Page-1">7VxZc6O6Ev41qbr3YVyAWB9ZDV7B8YJ5ucVuvIADGLB//RXetyTOnDjOmUlqGEMjCam/T61WS/AE+FlRjc35qBk57vQJQ5ziCQhPGIZSCAJ/SslyIyFRYiPw48DZJjoInoOVuxVu8/mLwHGTk4RpFE3TYH4qtKMwdO30RGbGcZSfJvOi6elT56bvXgiebXN6KR0ETjraSGkCOchlN/BHuyejuwbPzF3ibRHJyHSifCNapwHiE+DjKEo3Z7OCd6el8nZ62RQkvXJ3X7HYDdNbMgwxryFMY3qeooBUhy+jKHJ/oQizKSczp4ttk7fVTZc7HcTRInTcshj0CXD5KEjd57lpl3dziDqUjdLZdHvbMZPRPq0XhekWUwyH10kaR5O9IrEyRTCd8tE0itePAi7qEC61T7m7E0YhLIO7bPJWC5kbp25xJNqqoOpGMzeNlzDJ9i6zhWdLRwzfXucHcMGOe6NjYMFWaG4J5e+LPugcnmzV/gEIGPKeCJzqEWqYISlgkmea3+r3FB70czSOMsSpygmisjMCR1pHEeKK1hG8QhN3UjxKXOM+OU23xD2BgHxZRLsbv5I1pVmYAMXmxeEmPPM3v3j5j9jUYn2+LRdWdFP0LuEZ0FCn6Sma5jTwQ3huQwRciBVXaj6ABord3pgFjlNm52IX1su01kUh8HoeBWG6VhrBPRFCWdYijTZ1v9o7PwFsnD7rXzSoEAxy+AMXwJNXeht+r86GXkG8xEcJ54t0B9UjMbmX3ftFnQHD4BUGuwCDugIGdi8wwPuGDw6a8/I0mK3H6b2iG6blTlWouDSISoVbUZpGsytIpFFpHdfZ2WS+8RBKIMzdhRcUpbHktk8QHDM1Yd/eXGLSPPSfMD7oc+1OjtSrfsTCv9ZzbyT2fHjG5/A/SedZBf5yA0fISinLgU6t2+vJhObr/LLZVl7cgtbgjeG81hGlnltbpdhYqmuo2Os4oveEcWa3pvDLNOxTMzFye2BSV/r9mjnq8KJF6VzVKvRYwLjFaowCA21jtCc3Y2Fs6xPDtWiff1FuOOozrb3ErE6azqrNWqAo0ftHPap3hJiwarSH072sAVqJ1hG1zg2Ho/WCFmCyMeIq+UDOi6Khcbfl7Q6loQfIAaTfBFf76Djo3djGxoAEPSJE7LrKxhBBnrwll+KbmUNbMENpo5Vcp+BP52P1zShysKJhCR0/80DbMNm6EvA3HCabiXIYaGPBtfuddM0x9rnXb3fqBD9UlLIvwHE9iCFlN3wPo7jssp8yPiOnlgFgpcm+sAzEFctAVnD0TraB+Ftsg9o3ec4okw6cvW0AjWqLHYtspydmC9+MZkreDmpyJHJiVhvF83bMCxAirmtHBU3SqgTPVzQTNtoFKRtun1kuRC0I3j9mfqNowabErQ6pTocNyhvS05tywiOdsDaxCkOdn+G6agyVqAPZ/P5R0j3xRh13uCyWHWUkibccNUtpWmkrcYauQakLu/esdDr8LUcBwjYF3Gmk0iFs66h/U657HSqTCx2f7veZl9a6o4tTqTt5Xmgznv/CXo6DCkIeOWbko3v8NWd860aXc2nLe6I4/okS/mQvDUXPvDSKqTDHf/iVedNXumy7+r2J0m6yU46mm3mO+afjhp3OcQECcbuc6lzzru8WV0Bfm+u8DZX1t0EF0ApGHnWxB6NG//sdn75/7PhEg6bMX3F8NKweNC
cwKccfJkXoqtoeD8ThaCFTTcSZBaHEzymlE/eJkDMMKVVq1ZhqPtfE2ngShYHQNMKqCasULMZWrtozD8jlGM94jS6GOh4Y6yEkMqfqPskIqee6siUMZiM0N2Aqg8Bl2ucUVmNFha2xHVbz2QkncjzLa6L2pTKuB5vB0UymiUMiUNgRC2sVwB9O4RVeZEVf8RWNff5q2UuiIr48CzEiM9iOExJUE3KQk2NSHWe+k9GOp6L4QtY6rIy32AfWdC974Ux3M4+bC9oSTuS4EGHa8EfggKsmrJSLPsc+EOsjmTwYyqmAhkTpwbM4KS+ZCYcDil7MI8eGvqIFLHYwS7Fc8HXYtnXNea2qiSwszIAcgYVNvlQm9LEwUxvhuIB1tkHit101TJC2L4sLdgI7/ffigMGZnjWDWpUwL0xytarSkmmCIYuj34gHDR0DadOqqhasNYZ7BtujPXU+cXwlYUWY/qFGYCt7UXHOkxPceRZ0qadcC1P88wF677zuQ8iXSzTXpiY4c8elghtWyb77qPwTqvwJVT4iVOmaSfo5rvtmNny2mngsO15V2gcqjm0EcS+3Hbs2L/4jDcRPvPInXvkF8cpzLwAAskJedvMvjlFi15Yl3o+pPD9Rwv9gkqk5sxzziRBhqv+k//3TIy30+caPy/VmFPvK4ApFXmjadXz3eXtZsjfyo9Ccigcpd5A2otIAr3U2dtN0ud07UarzFKzzTU3AphHk2mYcDmGI9Z3TDT6vKr+s7Zuqj92pmQbZ6ea1a3rcZlVLchwgo+izMCZ1WkJqxr6bbjMdwLgoB0fB2wUl0SK23YuC2Dg2l0fJttx9tb7nz8G3+0teq9dZ+86Sw5NNBQ4M26v290lH/wHbHG6L6O1dA643OkT06EVMgKZdwKfIIUWsvFBuzVaNcZWQFJMn0/lU6Rv8cLYUu0kjCjSq1dMHDcmPxoPauMiHsTStPbcmL3W+EXr8ornsmspqSPiEQZsebRm4qpIcQ2YZnobhYjVaQd9i2rUGNCAoi2R8xo2BbtOqkPoF05ON3LWxbo6t40H4EtpeqSXEY5zm3RW7MryZl/Nk7iz00YSG0EiFypdBF+URkZZTGeo2emOCScs9ZF0CV+qrsnq5l0lEKjeKlZNwQRlGogxW38wN4P/eKvHrmJcRvpO0gkIdpxBoyYkt7gXezagUhB1NwFva46MLpcyMpwxVd7o6hVOLqqALuq3qGey1UluVySbVFFo+5ba7ZStgCzlfTXIVA6o6oRR+NUfwtj6lE5TKFbZBVx8fK4OyPpuugSoI9RlQOb3A1NWUpOUYNUgqhLjY6oslYEOZasRo7ky4AmUYM7RYctLXVywx1GNoYyRajV9q3oLm2dIMcdHj+chrktbo2FRramBh18Xx1ooSctjdYC9y9QLLhRmQiWjNu9Y485PUa698CqepiWz4bqLragNXdamY4Rmb8FkZ3XxY/G0rS/tFQkkkgbca86rRwvTSAdcht1IFLUkopCOadmSDdT3bm42wvFVOw6sQvYZk6hPG0fvx9+Jev+zubqpTPpGI0pxamwXJVjnScestA7PlUIUCjV/H9aUwzFDIQQOh6Naul0leBvGy4YmqWzkcDssQJbxCFF8oG/rYyOkOOSffmISkXTaDwhloP9A8SWBHcywV0Oja4g8Vxlvbe3LIzDiioAlSN9humkH/C8sbtJet+rhnSatxYXN4y2ZF9uFGccTWy+gQqyS5nQdee6G1LyZin79DBCOJCkId1kVp6uFhWOavcaUOi6NC/+BKjaWXqSQOR6kcyoidr56FIScV7CLipEGQUlB/HPuiK4PYZkOsPzfMQLYWOT/URETu2CbVKBN0VczmmIDOhBQAytEdULpApF1gXuiTjD4HA1O2BMuJm4IeFnS7m/mM0dKTvN2el0upiV8FVm3KZOX4m5RI5sqIhVx9eFd5EUx0GnerODdq+DUWKvbhy6eiX/exCa2SLYCOmAnXLRhnpZEsgr4IMUYvvsMS70
bGv1gmgmZDadSimMzIbTqTIcJDecg3WA6a2u9Q1XqApY6V+HBOyAEdDrs6hy8M38lVDg1L619ANnPlEup3WdyTuhSpVcvOIk2Q9RC0tAWR8lV0Ucb2Nl468V1qu16KHKTpYKiXw/14ZQOLxSxETvKWH36npciGiaKZVh3FoJzzhAaWq+rKgE6MzYlle5SHOV3HU7Y+3SeFoDFrjXv+Jppe6/QIMZ7UfN9fr5V8xpIpwL/fkilD3RAqVf/wEChAz0Ogl07UF2/gRPH3fSg3dNjy3dlS2VMzSQL7LL559bXOo7jmZksxvOMWQaqXKFQwYns53IJSngvF8cVydxHClurHF+s8FWJ3eci2vtrlsxdx9n4MdRN6fEtF2yDhJth5Q8J3o7LvdMOd7B8Gb7GLFznOSPRK0PWyIOJ8F/JZQa+EgT8avb3Y6UEhb9frYnf0O+nPZzTMSfr7hHvR3V6RDy0yHOL/yOcsOZy+R338Su/rbwS/2mXuvexw/orwb687oGdvOd5r3QGlL55TQYmjTcTUFxBt13t+iHYz0cDFPv3fJBo4t7V3IhpAL4kGsC8gFyC+N7keZ6vwy55P/h6Jfl0B97ysz1oofe1Br67gvpPhTrTDwQ/truLBvGNvbuUc8TV2i7zkD3RHD3/Em5X8UO57MfHaVp+Tj4PYe8Ycvv8BrM2Ojtc/CVLOa4MUzn2homEhlmlPcjN21qxKkqP58Df8YMi9pskYzpwAjiOX8Yu9y/U102T8mztXDzNEOHOK1T8ZAHHqwh272wBIX1b7AxbpY9nvZJKIyyjZDyfX6ODvO1I3c/LSs78bJ6/4kjh94BT6dkU/kvtejLz2jvEHv6BFXfuCVqVS+atGQJSpHBkTBMOoE2yxKy+Tgy8dD4lr7yX/IP0bX0jD8VOovxnQ5O8MMn/THmn8s2JIxHk4mwH7T7J98khz8Sj67cj5eRvP0t9pNCGpH+pdp96easTvUY08pyxRwbFzV+HTYwBvcuiicZ/NsVYYOqN6TvIqSLlqrjPaqHP1+4/vT+ppgqTKT++8P6l/DsLJKIpDWNLUXMIh6d86n78Y0K4Q/tUx7vxbkzhyMuRd+Qgoeu0roL8xyJWvBu6/qLwhzeG71ED8Pw==</diagram></mxfile>
2205.06688/main_diagram/main_diagram.pdf ADDED
Binary file (28.1 kB). View file
 
2205.06688/paper_text/intro_method.md ADDED
@@ -0,0 +1,164 @@
1
+ # Introduction
2
+
3
+ Optimal transport enables us to compute the distance between two probability measures on the same domain $\Omega\subset\mathbb{R}^d$. In this work, we consider discrete probability measures $\mu:=\sum_{i=1}^ma_i\delta_{\vx_i}$ and $\nu:=\sum_{j=1}^nb_j\delta_{\vy_j}$, defined over the sets of points $\{\vx_1,\dots,\vx_m\}$ and $\{\vy_1,\dots,\vy_n\}$, where $\delta_{\vx_i}$ is the Dirac measure at $\vx_i$. Such measures are fully characterized by the probability mass vectors $\va\in\Delta_m$ and $\vb\in\Delta_n$ that lie on the probability simplex $$\begin{equation}
4
+ \label{eq:probsimplex}
5
+ \Delta_m=\bigl\{\va\in\mathbb{R}^m|a_i\geq 0,\va^\top\mathbbm{1}_m=1\bigr\},
6
+ \end{equation}$$ where $\mathbbm{1}_m\in\mathbb{R}^m$ is the vector of all ones. We can then define the distance between $\mu$ and $\nu$ as $$\begin{equation}
7
+ \label{eq:otdistance}
8
+ d(\mu,\nu):=\underset{\mP\in\Pi(\va,\vb)}{\min}\langle \mP,\mC\rangle_F.
9
+ \end{equation}$$ The transportation plan $\mP\in\Pi(\va,\vb)$ determines a discrete probability measure on the product space $\{\vx_1,\dots,\vx_m\}\times\{\vy_1,\dots,\vy_n\}$, whose marginal distributions coincide with $\mu$ and $\nu$. Consequently, $\mP$ is contained in the transportation polytope $\Pi(\va,\vb)$ defined as $$\begin{equation}
10
+ \label{eq:polytopepi}
11
+ \Pi(\va,\vb):=\{\mP\in\mathbb{R}_+^{m\times n}|\mP\mathbbm{1}_n=\va,\mP^\top\mathbbm{1}_m=\vb\}.
12
+ \end{equation}$$ The cost matrix $\mC\in\mathbb{R}^{m\times n}$ specifies the cost of transporting an individual point $\vx_i$ to $\vy_j$. Choosing, e.g., $$\mC_{i,j}:=\|\vx_i-\vy_j\|_2^p$$ for $p\geq 1$ yields the so-called Wasserstein distance $d(\cdot,\cdot)=W^p_p(\cdot,\cdot)$; see [@villani2003topics].
13
+
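As a small sanity check of the definitions above: for uniform marginals with $m=n$, the optimal plan is a permutation, so the distance can be computed by brute force. A minimal sketch (illustrative only, not from the paper; all variable names are ours):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.normal(size=(4, 2)), rng.normal(size=(4, 2))
C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # C_ij = ||x_i - y_j||_2^2 (p = 2)

# for uniform marginals a_i = b_j = 1/n, a minimizer of the LAP is a
# permutation, so W_2^2 can be found by enumeration (feasible only for tiny n)
n = len(x)
idx = np.arange(n)
w22 = min(C[idx, list(perm)].mean() for perm in itertools.permutations(range(n)))
```

For larger $n$, one would use the Hungarian algorithm mentioned below instead of enumeration.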
14
+ Evaluating the distance $d(\mu,\nu)$ in practice requires solving the linear assignment problem (LAP) from [\[eq:otdistance\]](#eq:otdistance){reference-type="ref+label" reference="eq:otdistance"}. This can be done via specialized algorithms like the Hungarian algorithm [@kuhn1955hungarian] or the Auction algorithm [@bertsekas1979distributed], as well as recent solvers [@rubner1997earth; @pele2009fast]. However, most approaches are computationally heavy and slow in practice [@cuturi2013sinkhorn]. A popular alternative is augmenting the LAP objective in [\[eq:otdistance\]](#eq:otdistance){reference-type="ref+label" reference="eq:otdistance"} with an additional entropy regularizer, giving rise to the *Sinkhorn operator* $$\begin{equation}
15
+ \label{eq:sinkhornoperator}
16
+ S_\lambda(\mC,\va,\vb):=\underset{\mP\in\Pi(\va,\vb)}{\arg\min}\langle \mP,\mC\rangle_F-\lambda h(\mP),
17
+ \end{equation}$$ where $\lambda>0$ weights the regularization. The seminal work of Cuturi [@cuturi2013sinkhorn] shows that the additional entropy regularization term $h(\mP)=-\sum_{i,j}P_{i,j}(\log P_{i,j}-1)$ allows for an efficient minimization of [\[eq:sinkhornoperator\]](#eq:sinkhornoperator){reference-type="ref+label" reference="eq:sinkhornoperator"}. Specifically, this can be done via a scheme of alternating Sinkhorn projections $$\begin{align}
18
+ \label{eq:sinkhornscheme}
19
+ \mS^{(0)}_\lambda:=&\exp\biggl(-\frac{1}{\lambda} \mC\biggr)\nonumber,\quad\text{and}\\
20
+ \mS^{(t+1)}_\lambda:=&\mathcal{T}_c\bigl(\mathcal{T}_r\bigl(\mS^{(t)}_\lambda\bigr)\bigr).
21
+ \end{align}$$ The operators $\mathcal{T}_c(\mS):=\mS\oslash (\mathbbm{1}_m\mathbbm{1}_m^\top \mS)\odot (\mathbbm{1}_m\vb^\top)$ and $\mathcal{T}_r(\mS):=\mS\oslash (\mS\mathbbm{1}_n\mathbbm{1}_n^\top)\odot (\va\mathbbm{1}_n^\top)$ correspond to renormalizations of the columns and rows of $\mS_\lambda^{(t)}$, where $\odot$ denotes the Hadamard product and $\oslash$ denotes element-wise division. As shown by [@cuturi2013sinkhorn], in the limit this scheme converges to a minimizer $\mS^{(t)}_\lambda\xrightarrow{t\to\infty}\mS_\lambda$ of [\[eq:sinkhornoperator\]](#eq:sinkhornoperator){reference-type="ref+label" reference="eq:sinkhornoperator"}. In practice, we can use a finite number of iterations $\tau\in\mathbb{N}$ to achieve a sufficiently small residual.
22
+
23
+ # Method
24
+
25
+ Integrating the Sinkhorn operator from [\[eq:sinkhornoperator\]](#eq:sinkhornoperator){reference-type="ref+label" reference="eq:sinkhornoperator"} into deep neural networks has become a popular tool for a wide range of practical tasks; see our discussion in [1](#sec:relatedwork){reference-type="ref+label" reference="sec:relatedwork"}. A major contributing factor is that the entropy regularization makes the mapping $S_\lambda:\mathbb{R}^{m\times n}\times\mathbb{R}^{m}\times\mathbb{R}^{n}\to\mathbb{R}^{m\times n}$ differentiable. To allow for first-order optimization, we need to compute $$\begin{align}
26
+ (\mC,\va,\vb)\qquad&\mapsto\qquad\mP^*:=S_\lambda(\mC,\va,\vb) \label{eq:forwardpass}\quad\text{and}\\
27
+ \nabla_\mP\ell\qquad&\mapsto\qquad(\nabla_\mC\ell,\nabla_\va\ell,\nabla_\vb\ell), \label{eq:backwardpass}
28
+ \end{align}$$ which denote the forward pass and the backpropagation of gradients, respectively. These expressions arise in the typical workflow of a deep neural network with a scalar loss $\ell$ and learnable parameters before and/or after the Sinkhorn operator $S_\lambda$; see [1](#fig:overview){reference-type="ref+label" reference="fig:overview"} for an overview.
29
+
30
+ A common strategy is to replace the exact forward pass $S_\lambda(\mC,\va,\vb)$ in [\[eq:forwardpass\]](#eq:forwardpass){reference-type="ref+label" reference="eq:forwardpass"} by the approximate solution $\mS_\lambda^{(\tau)}$ from [\[eq:sinkhornscheme\]](#eq:sinkhornscheme){reference-type="ref+label" reference="eq:sinkhornscheme"}. Like the original solution in [\[eq:sinkhornoperator\]](#eq:sinkhornoperator){reference-type="ref+label" reference="eq:sinkhornoperator"}, $\mS_\lambda^{(\tau)}$ is differentiable w.r.t. $(\mC,\va,\vb)$. Moreover, the mapping $(\mC,\va,\vb)\mapsto\mS_\lambda^{(\tau)}$ consists of a small number of matrix scaling operations that can be implemented in a few lines of code, see [\[eq:sinkhornscheme\]](#eq:sinkhornscheme){reference-type="ref+label" reference="eq:sinkhornscheme"}.
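Those few lines might look as follows. A minimal NumPy sketch of the scheme in [\[eq:sinkhornscheme\]](#eq:sinkhornscheme), rather than the paper's actual (PyTorch) implementation; the function name and defaults are ours:

```python
import numpy as np

def sinkhorn(C, a, b, lam=0.5, tau=200):
    """Approximate S_lambda(C, a, b) by tau alternating Sinkhorn projections."""
    S = np.exp(-C / lam)                       # S^(0)
    for _ in range(tau):
        S = S * (a / S.sum(axis=1))[:, None]   # T_r: renormalize rows to a
        S = S * (b / S.sum(axis=0))[None, :]   # T_c: renormalize columns to b
    return S
```

After the final column renormalization the column marginal matches $\vb$ exactly, while the row marginal only approximately matches $\va$; the residual shrinks as $\tau$ grows.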
31
+
32
+ The goal of this section is to derive the main result stated in [3](#thm:closedformbackward){reference-type="ref+label" reference="thm:closedformbackward"}, which is the key motivation of our algorithm in [3.3](#subsec:algorithm){reference-type="ref+label" reference="subsec:algorithm"}. To this end, we start by reframing the optimization problem in [\[eq:sinkhornoperator\]](#eq:sinkhornoperator){reference-type="ref+label" reference="eq:sinkhornoperator"} in terms of its Karush--Kuhn--Tucker (KKT) conditions, see [8.1](#subsec:prooflemmakkt){reference-type="ref+label" reference="subsec:prooflemmakkt"} for a proof:
33
+
34
+ ::: {#thm:kkt .lemma}
35
+ **Lemma 1**. *The transportation plan $\mP^*$ is a global minimum of [\[eq:sinkhornoperator\]](#eq:sinkhornoperator){reference-type="ref+label" reference="eq:sinkhornoperator"} iff $\mathcal{K}(\vc,\va,\vb,\vp^*,\bm{\alpha}^*,\bm{\beta}^*)=\mathbf{0}_{l}$, with $$\begin{equation}
36
+ \label{eq:kkt}
37
+ \mathcal{K}(\cdot):=
38
+ \begin{bmatrix}
39
+ \vc+\lambda\log(\vp^*)+\mathbbm{1}_n\otimes\bm{\alpha}^*+\bm{\beta}^*\otimes\mathbbm{1}_m\\
40
+ (\mathbbm{1}_n^\top\otimes\mI_m)\vp^*-\va\\
41
+ (\mI_n\otimes\mathbbm{1}_m^\top)\vp^*-\vb
42
+ \end{bmatrix}
43
+ \end{equation}$$ where $l:=mn+m+n$. Here, $\bm{\alpha}^*\in\mathbb{R}^m$ and $\bm{\beta}^*\in\mathbb{R}^n$ are the dual variables corresponding to the two equality constraints in [\[eq:polytopepi\]](#eq:polytopepi){reference-type="ref+label" reference="eq:polytopepi"}. We further define $\vc,\vp^*\in\mathbb{R}^{mn}$ as the vectorized versions of $\mC,\mP^*\in\mathbb{R}^{m\times n}$, respectively, and set $\log(p):=-\infty$ for $p\leq 0$.*
44
+ :::
45
+
46
+ Establishing this identity is an important first step towards computing a closed-form gradient for the backward pass in [\[eq:backwardpass\]](#eq:backwardpass){reference-type="ref+label" reference="eq:backwardpass"}. It reframes the optimization problem in [\[eq:sinkhornoperator\]](#eq:sinkhornoperator){reference-type="ref+label" reference="eq:sinkhornoperator"} as a root-finding problem $\mathcal{K}(\cdot)=\mathbf{0}$. In the next step, this then allows us to explicitly construct the derivative of the Sinkhorn operator $S_\lambda(\cdot)$ via implicit differentiation, see [8.2](#subsec:prooflemmaqp){reference-type="ref+label" reference="subsec:prooflemmaqp"} for a proof:
47
+
48
+ ::: {#thm:qpjacobian .lemma}
49
+ **Lemma 2**. *The KKT conditions in [\[eq:kkt\]](#eq:kkt){reference-type="ref+label" reference="eq:kkt"} implicitly define a continuously differentiable function $(\vc,\va,\tilde{\vb})\mapsto(\vp,\bm{\alpha},\tilde{\bm{\beta}})$ with the Jacobian $\mJ\in\mathbb{R}^{(l-1)\times(l-1)}$ being $$\begin{equation}
50
+ \label{eq:qpjacobian}
51
+ \mJ:=\frac
52
+ {\partial\begin{bmatrix}
53
+ \vp;\bm{\alpha};\tilde{\bm{\beta}}
54
+ \end{bmatrix}}
55
+ {\partial\begin{bmatrix}
56
+ \vc;-\va;-\tilde{\vb}
57
+ \end{bmatrix}}=-
58
+ {\underbrace{\begin{bmatrix}
59
+ \lambda \operatorname{diag}(\vp)^{-1} & \tilde{\mE}\\ \tilde{\mE}^\top & \mathbf{0}
60
+ \end{bmatrix}}_{:=\mK}}^{-1}.
61
+ \end{equation}$$ For brevity we use the shorthand notation $[\vv;\vu]:=[\vv^\top,\vu^\top]^\top$ for stacking the vectors $\vv,\vu$ vertically. Note that the last entries of $\vb$ and $\bm{\beta}$ are removed, i.e. $\tilde{\vb}:=\vb_{-n}$ and $\tilde{\bm{\beta}}:=\bm{\beta}_{-n}$. This is due to a surplus degree of freedom in the equality conditions from [\[eq:polytopepi\]](#eq:polytopepi){reference-type="ref+label" reference="eq:polytopepi"}, see part (b) of the proof. Likewise, for $$\begin{equation}
62
+ \mE=\begin{bmatrix}\mathbbm{1}_n\otimes\mI_m&\mI_n\otimes\mathbbm{1}_m\end{bmatrix}\in\mathbb{R}^{mn\times(m+n)},
63
+ \end{equation}$$ the last column is removed $\tilde{\mE}:=\mE_{:,-(m+n)}$.*
64
+ :::
65
+
66
+ In principle, we can use Lemma [2](#thm:qpjacobian){reference-type="ref" reference="thm:qpjacobian"} directly to solve [\[eq:backwardpass\]](#eq:backwardpass){reference-type="ref+label" reference="eq:backwardpass"}. However, the computational cost of inverting the matrix $\mK$ in [\[eq:qpjacobian\]](#eq:qpjacobian){reference-type="ref+label" reference="eq:qpjacobian"} is prohibitive. In fact, even storing the Jacobian $\mJ$ in the working memory of a typical machine is problematic, since it is a dense matrix with $\mathcal{O}(mn)$ rows and columns, where $m,n>1000$ in practice. Instead, we observe that computing [\[eq:backwardpass\]](#eq:backwardpass){reference-type="ref+label" reference="eq:backwardpass"} merely requires us to compute vector-Jacobian products (VJP) of the form $\vv^\top\mJ$. The main results from this section can therefore be summarized as follows, see [8.3](#subsec:closedformbackward){reference-type="ref+label" reference="subsec:closedformbackward"} for a proof:
67
+
68
+ ::: {#thm:closedformbackward .theorem}
69
+ **Theorem 3** (Backward pass). *For $\mP=\mP^*$, the backward pass in [\[eq:backwardpass\]](#eq:backwardpass){reference-type="ref+label" reference="eq:backwardpass"} can be computed in closed form by solving the following linear system: $$\begin{equation}
70
+ \label{eq:closedformbackward}
71
+ \begin{bmatrix}
72
+ \lambda \operatorname{diag}(\vp)^{-1} & \tilde{\mE}\\ \tilde{\mE}^\top & \mathbf{0}
73
+ \end{bmatrix}
74
+ \begin{bmatrix}
75
+ \nabla_{\vc}\ell\\-\nabla_{[\va;\tilde{\vb}]}\ell
76
+ \end{bmatrix}=
77
+ \begin{bmatrix}
78
+ -\nabla_{\vp}\ell\\\mathbf{0}
79
+ \end{bmatrix}.
80
+ \end{equation}$$*
81
+ :::
82
+
83
+ In the previous section, we derived a closed-form expression of the Sinkhorn backward pass in [3](#thm:closedformbackward){reference-type="ref+label" reference="thm:closedformbackward"}. This requires solving the sparse linear system in [\[eq:closedformbackward\]](#eq:closedformbackward){reference-type="ref+label" reference="eq:closedformbackward"}, which has $\mathcal{O}(mn)$ rows and columns, and thus amounts to a worst-case complexity of $\mathcal{O}(m^3n^3)$ [@flamary2018wasserstein]. We can further reduce the computational cost by exploiting the specific block structure of $\mK$, which leads to the following algorithm:
84
+
85
+ ::: algorithm
86
+ $\mT\leftarrow\mP\odot\nabla_\mP\ell$.[]{#alg:lnT label="alg:lnT"}
87
+
88
+ $\tilde{\mT}\leftarrow\mT_{:,-n},\tilde{\mP}\leftarrow\mP_{:,-n}\in\mathbb{R}^{m\times n-1}$.[]{#alg:lnTt label="alg:lnTt"}
89
+
90
+ $\vt^{(a)}\leftarrow\mT\mathbbm{1}_n,\tilde{\vt}^{(b)}\leftarrow\tilde{\mT}^\top\mathbbm{1}_m$.[]{#alg:vtavttb label="alg:vtavttb"}
91
+
92
+ $\begin{bmatrix}\nabla_\va\ell\\\nabla_{\tilde{\vb}}\ell\end{bmatrix}\leftarrow\begin{bmatrix}\operatorname{diag}(\va) & \tilde{\mP}\\ \tilde{\mP}^\top & \operatorname{diag}(\tilde{\vb}) \end{bmatrix}^{-1}\begin{bmatrix}\vt^{(a)}\\\tilde{\vt}^{(b)}\end{bmatrix}$.[]{#alg:gradagradb label="alg:gradagradb"}
93
+
94
+ $\nabla_{\vb}\ell\leftarrow\begin{bmatrix}\nabla_{\tilde{\vb}}\ell;0\end{bmatrix}$.[]{#alg:gradbresidual label="alg:gradbresidual"}
95
+
96
+ $\mU\leftarrow\nabla_\va\ell\mathbbm{1}_n^\top+\mathbbm{1}_m{\nabla_{\vb}\ell}^\top$.[]{#alg:Umatrix label="alg:Umatrix"}
97
+
98
+ $\nabla_\mC\ell\leftarrow-\lambda^{-1}(\mT-\mP\odot\mU)$.[]{#alg:gradm label="alg:gradm"}
99
+ :::
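In NumPy, the steps of this algorithm can be sketched as follows (an illustrative transcription, not the paper's PyTorch implementation; the function name and variable names are ours, and dropping the last column/entry plays the role of $\tilde{\cdot}$):

```python
import numpy as np

def sinkhorn_backward(P, a, b, grad_P, lam):
    """Implicit backward pass following the algorithm steps above (a sketch)."""
    m, n = P.shape
    T = P * grad_P                                   # T <- P ⊙ ∇_P ℓ
    Tt, Pt = T[:, :n - 1], P[:, :n - 1]              # drop last column
    t_a, t_b = T.sum(axis=1), Tt.sum(axis=0)         # t^(a), t~^(b)
    K = np.block([[np.diag(a), Pt],
                  [Pt.T, np.diag(b[:n - 1])]])       # reduced block system
    g = np.linalg.solve(K, np.concatenate([t_a, t_b]))
    grad_a = g[:m]
    grad_b = np.append(g[m:], 0.0)                   # pad the residual grad_{b_n} = 0
    U = grad_a[:, None] + grad_b[None, :]            # U <- grad_a 1^T + 1 grad_b^T
    grad_C = -(T - P * U) / lam
    return grad_C, grad_a, grad_b
```

Note that every step operates on $m\times n$ (or smaller) dense matrices; the vectorized $mn$-dimensional quantities of [\[eq:closedformbackward\]](#eq:closedformbackward) never appear explicitly.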
100
+
101
+ See [6](#sec:pytorchimplementation){reference-type="ref+label" reference="sec:pytorchimplementation"} for a PyTorch implementation of this algorithm. Most methods listed in [\[table:relatedsota\]](#table:relatedsota){reference-type="ref+label" reference="table:relatedsota"} consider a special case of the functional specified in [\[eq:sinkhornoperator\]](#eq:sinkhornoperator){reference-type="ref+label" reference="eq:sinkhornoperator"}. The gradients produced by [\[alg:backward\]](#alg:backward){reference-type="ref+label" reference="alg:backward"} are therefore largely consistent with these specialized approaches. We now show that the resulting gradients $\nabla_\mC\ell,\nabla_\va\ell,\nabla_\vb\ell$ from [\[alg:backward\]](#alg:backward){reference-type="ref+label" reference="alg:backward"} are indeed solutions of the linear system in [3](#thm:closedformbackward){reference-type="ref+label" reference="thm:closedformbackward"}.
102
+
103
+ ::: {#thm:algorithmequivalence .theorem}
104
+ **Theorem 4**. *Let $\va,\vb$ be two input marginals and $\mP=\mP^*$ the transportation plan resulting from the forward pass in [\[eq:forwardpass\]](#eq:forwardpass){reference-type="ref+label" reference="eq:forwardpass"}, then [\[alg:backward\]](#alg:backward){reference-type="ref+label" reference="alg:backward"} solves the backward pass [\[eq:backwardpass\]](#eq:backwardpass){reference-type="ref+label" reference="eq:backwardpass"}.*
105
+ :::
106
+
107
+ The main idea of this proof is showing that [\[alg:backward\]](#alg:backward){reference-type="ref+label" reference="alg:backward"} yields a solution $\nabla_{[\vc;\va;\tilde{\vb}]}\ell$ of the linear system from [\[eq:closedformbackward\]](#eq:closedformbackward){reference-type="ref+label" reference="eq:closedformbackward"}. To that end, we leverage the Schur complement trick which yields the following two expressions: $$\begin{equation}\label{eq:gradientla}
108
+ \nabla_{[\va;\tilde{\vb}]}\ell=\bigl(\tilde{\mE}^\top\operatorname{diag}(\vp)\tilde{\mE}\bigr)^{-1}\tilde{\mE}^\top\operatorname{diag}(\vp)\nabla_{\vp}\ell.
109
+ \end{equation}
110
+ \begin{equation}\label{eq:gradientlb}
111
+ \nabla_{\vc}\ell=
112
+ -\lambda^{-1}\bigl(\operatorname{diag}(\vp)\nabla_{\vp}\ell-\operatorname{diag}(\vp)\tilde{\mE}\nabla_{[\va;\tilde{\vb}]}\ell\bigr).
113
+ \end{equation}$$ In [8.4](#subsec:algorithmequivalence){reference-type="ref+label" reference="subsec:algorithmequivalence"} we further show that these two identities in their vectorized form are equivalent to [\[alg:backward\]](#alg:backward){reference-type="ref+label" reference="alg:backward"}.
114
+
115
+ [4](#thm:algorithmequivalence){reference-type="ref+label" reference="thm:algorithmequivalence"} proves that [\[alg:backward\]](#alg:backward){reference-type="ref+label" reference="alg:backward"} computes the exact gradients $\nabla_\mC\ell,\nabla_\va\ell,\nabla_\vb\ell$, given that $\mP=\mP^*$ is the exact solution of [\[eq:sinkhornoperator\]](#eq:sinkhornoperator){reference-type="ref+label" reference="eq:sinkhornoperator"}. In practice, the operator $S_\lambda$ in [\[eq:forwardpass\]](#eq:forwardpass){reference-type="ref+label" reference="eq:forwardpass"} is replaced by the Sinkhorn approximation $\mS_\lambda^{(\tau)}$ from [\[eq:sinkhornscheme\]](#eq:sinkhornscheme){reference-type="ref+label" reference="eq:sinkhornscheme"} for a fixed, finite $\tau\in\mathbb{N}$. This small discrepancy in the approximation $\mP=\mS_\lambda^{(\tau)}\approx\mP^*$ propagates to the backward pass as follows:
116
+
117
+ ::: {#thm:errorbounds .theorem}
118
+ **Theorem 5** (Error bounds). *Let $\mP^*:=S_\lambda(\mC,\va,\vb)$ be the exact solution of [\[eq:sinkhornoperator\]](#eq:sinkhornoperator){reference-type="ref+label" reference="eq:sinkhornoperator"} and let $\mP^{(\tau)}:=\mS_\lambda^{(\tau)}$ be the Sinkhorn estimate from [\[eq:sinkhornscheme\]](#eq:sinkhornscheme){reference-type="ref+label" reference="eq:sinkhornscheme"}. Further, let $\sigma_+,\sigma_-,C_1,C_2,\epsilon>0$, s.t. $\bigl\|\mP^*-\mP^{(\tau)}\bigr\|_F<\epsilon$ and that for all $\mP$ for which $\|\mP-\mP^*\|_F<\epsilon$ we have $\min_{i,j}\mP_{i,j}\geq\sigma_-$, $\max_{i,j}\mP_{i,j}\leq\sigma_+$ and the loss $\ell$ has bounded derivatives $\bigl\|\nabla_{\vp}\ell\bigr\|_2\leq C_1$ and $\bigl\|\nabla_{\vp}^2\ell\bigr\|_F\leq C_2$. For $\kappa=\|\tilde{\mE}^\dagger\|_2$, where $\tilde{\mE}^\dagger$ indicates the Moore-Penrose inverse of $\tilde{\mE}$, the difference between the gradients $\nabla_\mC\ell^*,\nabla_\va\ell^*,\nabla_\vb\ell^*$ of the exact $\mP^*$ and the gradients $\nabla_\mC\ell^{(\tau)},\nabla_\va\ell^{(\tau)},\nabla_\vb\ell^{(\tau)}$ of the approximate $\mP^{(\tau)}$, obtained via [\[alg:backward\]](#alg:backward){reference-type="ref+label" reference="alg:backward"}, satisfy $$\label{eq:errorbounds}
119
+ \begin{equation}
120
+ \begin{multlined}\label{eq:errorboundsa}
121
+ \bigl\|\nabla_{[\va;\vb]}\ell^*-\nabla_{[\va;\vb]}\ell^{(\tau)}\bigr\|_F\leq
122
+ \\
123
+ \kappa\sqrt{\frac{\sigma_+}{\sigma_-}}\biggl(\frac{1}{\sigma_-}C_1+C_2\biggr)\bigl\|\mP^*-\mP^{(\tau)}\bigr\|_F
124
+ \end{multlined}
125
+ \end{equation}
126
+ \begin{equation}
127
+ \begin{multlined}\label{eq:errorboundsb}
128
+ \bigl\|\nabla_\mC\ell^*-\nabla_\mC\ell^{(\tau)}\bigr\|_F\leq
129
+ \\
130
+ \lambda^{-1}\sigma_+\biggl(\frac{1}{\sigma_-}C_1+C_2\biggr)\bigl\|\mP^*-\mP^{(\tau)}\bigr\|_F.
131
+ \end{multlined}
132
+ \end{equation}$$*
133
+ :::
134
+
135
+ We provide a proof in [8.5](#subsec:errorboundsproof){reference-type="ref+label" reference="subsec:errorboundsproof"}, as well as an empirical evaluation in [7.1](#subsec:gradientaccuracy){reference-type="ref+label" reference="subsec:gradientaccuracy"}.
136
+
137
+ In comparison to automatic differentiation (AD), the computational cost of [\[alg:backward\]](#alg:backward){reference-type="ref+label" reference="alg:backward"} is independent of the number of Sinkhorn iterations $\tau$. For square matrices, $m=n$, the runtime and memory complexities of AD are both $\mathcal{O}(\tau n^2)$. Our approach, on the other hand, has a runtime complexity of $\mathcal{O}(n^3)$ and a memory complexity of $\mathcal{O}(n^2)$. We show empirical comparisons between the two approaches in [4.1](#subsec:computationalcomplexity){reference-type="ref+label" reference="subsec:computationalcomplexity"}. Another compelling feature of our approach is that none of the operations in [\[alg:backward\]](#alg:backward){reference-type="ref+label" reference="alg:backward"} explicitly convert the matrices $\mP,\nabla_\mP\ell,\nabla_\mC\ell,\dots\in\mathbb{R}^{m\times n}$ into their vectorized forms $\vp,\nabla_\vp\ell,\nabla_\vc\ell,\dots\in\mathbb{R}^{mn}$. This makes it computationally more efficient, since GPU processing favors small, dense matrix operations over the large, sparse linear system in [\[eq:closedformbackward\]](#eq:closedformbackward){reference-type="ref+label" reference="eq:closedformbackward"}.
138
+
139
+ As discussed in Lemma [2](#thm:qpjacobian){reference-type="ref" reference="thm:qpjacobian"}, the last element of $\tilde{\vb}$ needs to be removed to make $\mK$ invertible. However, setting the last entry of the gradient to zero, $\nabla_{b_n}\ell=0$, still yields exact gradients: By definition, the full marginal $\vb$ is constrained to the probability simplex $\Delta_n$, see [\[eq:probsimplex\]](#eq:probsimplex){reference-type="ref+label" reference="eq:probsimplex"}. In practice, we apply a $\mathrm{softmax}$ to $\vb$ (and analogously $\va$) beforehand. In some applications, $\vb$ can even be treated as immutable, if we only want to learn the cost matrix $\mC$ and not the marginals $\va$ and $\vb$. Overall, this means that the gradient of $\vb$ is effectively indifferent to constant offsets of all entries, and setting $\nabla_{b_n}\ell=0$ does not contradict the statement of [3](#thm:closedformbackward){reference-type="ref+label" reference="thm:closedformbackward"}.
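The offset invariance is easy to verify numerically. A sketch of the softmax parameterization described above (variable names are ours): the softmax Jacobian annihilates constant vectors, so shifting all entries of the gradient of $\vb$ by a constant, as implied by fixing $\nabla_{b_n}\ell=0$, leaves the gradient of the underlying logits unchanged.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# b lives on the simplex via logits z; the softmax Jacobian is
# J = diag(b) - b b^T, and J @ 1 = b - b = 0, so constant offsets of
# grad_b vanish when backpropagated to the logits
z = np.array([0.3, -1.0, 2.0])
b = softmax(z)
J = np.diag(b) - np.outer(b, b)
grad_b = np.array([1.0, 2.0, 3.0])
assert np.allclose(J @ grad_b, J @ (grad_b + 5.0))  # offsets vanish through J
```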
140
+
141
+ <figure id="fig:computational_complexity">
142
+ <div class="center">
143
+ <p><embed src="figures/computation_cost_bigger/runtime_10.pdf" style="width:31.0%" /> <embed src="figures/computation_cost_bigger/runtime_100.pdf" style="width:31.0%" /> <embed src="figures/computation_cost_bigger/runtime_1000.pdf" style="width:31.0%" /></p>
144
+ <p><embed src="figures/computation_cost_bigger/memory_10.pdf" style="width:31.0%" /> <embed src="figures/computation_cost_bigger/memory_100.pdf" style="width:31.0%" /> <embed src="figures/computation_cost_bigger/memory_1000.pdf" style="width:31.0%" /></p>
145
+ </div>
146
+ <figcaption><strong>Computational complexity.</strong> We compare the runtime per iteration (top row) and GPU memory requirements (bottom row) of our approach (blue) and automatic differentiation (orange). We consider a broad range of settings with square cost matrices of size <span class="math inline"><em>m</em> = <em>n</em> ∈ {10, 100, 1000}</span> and <span class="math inline"><em>τ</em> ∈ [10, 2000]</span> Sinkhorn iterations. For the runtime, we show both the total time (solid lines) and the time of only the backward pass (dashed lines). Both our approach and AD were implemented in the PyTorch <span class="citation" data-cites="paszke2019pytorch"></span> framework, where memory is allocated in discrete units, which leads to a large overlap for the minimum allocation size of 2MB (bottom row, left plot). </figcaption>
147
+ </figure>
148
+
149
+ <figure id="fig:manifold_bary">
150
+ <div class="center">
151
+
152
+ </div>
153
+ <figcaption><strong>Manifold barycenter.</strong> We compute barycenters of two circular input distributions on the surface of a sphere (first row). Specifically, we compare the results of minimizing <a href="#eq:barycenter" data-reference-type="ref+label" data-reference="eq:barycenter">[eq:barycenter]</a> with AD (second row) and implicit gradients (third row). The sphere is discretized as a triangular mesh with <span class="math inline">5000</span> vertices. On this resolution, AD is out of memory for <span class="math inline"><em>τ</em> ≥ 200</span> Sinkhorn iterations whereas ours is still feasible for <span class="math inline"><em>τ</em> = 1000</span>. The obtained interpolations produce the slightly elongated shape of an ellipse since the surface of the sphere has a constant positive Gaussian curvature. </figcaption>
154
+ </figure>
155
+
156
+ <figure id="fig:number_sorting">
157
+ <div class="center">
158
+ <p><embed src="figures/number_sorting_bigger/number_sorting_200.pdf" style="width:31.0%" /> <embed src="figures/number_sorting_bigger/number_sorting_500.pdf" style="width:31.0%" /></p>
159
+ <div class="overpic">
160
+ <p><span>figures/number_sorting_bigger/number_sorting_1000.pdf</span> (45,48)<span>(OOM)</span></p>
161
+ </div>
162
+ </div>
163
+ <figcaption><strong>Number sorting.</strong> We show that we can improve the Gumbel-Sinkhorn method <span class="citation" data-cites="mena2018learning"></span> directly with <a href="#alg:backward" data-reference-type="ref+label" data-reference="alg:backward">[alg:backward]</a>. Specifically, we consider the task of permutation learning to sort random number sequences of length <span class="math inline"><em>n</em> ∈ {200, 500, 1000}</span>, see <span class="citation" data-cites="mena2018learning"></span> for more details. We replace AD in the GS network with implicit differentiation (blue curves) and compare the obtained results to the vanilla GS architecture (orange curves). Our approach yields more accurate permutations while requiring far fewer computational resources – GS runs out of memory for <span class="math inline"><em>τ</em> &gt; 200, 100, 50</span> forward iterations, respectively. For all settings, we show the mean proportion of correct test set predictions (solid lines), as well as the <span class="math inline">10</span> and <span class="math inline">90</span> percentiles (filled areas). The curves are somewhat noisy, since individual results depend on a finite number of (random) test samples. Also, note that the log scale of the y-axis exaggerates small fluctuations for <span class="math inline"><em>τ</em> ≥ 100</span>. </figcaption>
164
+ </figure>
2205.13346/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2021-11-09T06:23:52.162Z" agent="5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.81 Safari/537.36" etag="gbfhNqb2e07gU2zOLk1c" version="15.7.0" type="google"><diagram id="J6hQ28SPMxmkDTaU3fn3" name="Page-9">7V1bc6NGFv41rso+WNV34DG2x5NsMpupnU3t5mkKS8gmkYUj4Znx/PoFiUb0BWgQ3WBLmq2NhVBL6nPpc75zu8DXj9/eb8Knhw/JIlpdILD4doFvLhCCAPvZf/IrL/srJID7C/ebeFHcdLjwKf4e8XcWV5/jRbQVbkyTZJXGT+LFebJeR/NUuBZuNslX8bZlshI/9Sm8j5QLn+bhSr3633iRPuyv+j44XP8piu8fik8muHjhMeT3FitsH8JF8nV/aXcPfneBrzdJku7/evx2Ha3yvePb8jl+WaZz+GH77vtPT8EXeA/Yn5f71W+7vKX8BZtonfZe2n8Gf/59M8e///0T/Pcyvv98c/lb8RbwJVw9F9tV/Nb0he/fJnleL6J8EXiBr74+xGn06Smc569+zTgmu/aQPq6Kl7fpJvkruk5WyWb3buxfg+xRvsIpkG3e1SLcPpTrGv7EYiu+RJs0+lahTvGT30fJY5RuXrJbilcJKH5iwb2Qc+/XAy8ExY9+qLCBTwoOLLjvvlz5sMXZH8Uu63f859CnN354hX77Pv/lPSPX//xEL5mywdEi49fi6TpZZ/+5Oux5vnXJJn1I7pN1uPo1SZ6KDfszStOXQtrC5zQR6RB9i9P/5W+fBUFQPP9j9xwyWjy/+Vasv3vywp+ss1/5v+qT/fsof3p42+4Zf98yWae34WO8yi/8J37MRB6Bf0Vfs///d/IYrovP+RhtstfSaFO8rRvd861qpPomWoVp/EUUfR0Ni7d+TOLsc0tuwYjMAoyD8uGJzEPJDFFx0W3yvJlHxTpV8ZOWzhixfG/Dcmm4uY9SZbkdq5W/sz/34YHl3YDqgpirWqBeY9hSCYgGIhm4xq+oBMZUlQBxoJCrj1rQ/godYUj+P3q9je8fw+KZTKzsJ6dNGrjQJ8t4tZIuhav4PiPPzTzb3Vwar/INjLNj88fihcd4scg/RssCooIy4IL8lgqRAbBMZIqRQGTPn1GFygSoVEbAEoVV4tlU/FBQ+nZUviK6JVWjNTcEmRl7dGODvZJsug8zNwcGCYjAZghjo+Ohhz7X/lDmXp/bklhIRYnFwFMtNZcC6w29t6oSFs7GEfeeAdHQIYFGW0JKNYdiZsPYOhP9Ro15UI7vDlev5s+bL+WOviptOjj596qn4b7AVJtCMrQ2PYovgnbBzFaJn7Z11kuFwuH2aQ86LONvOae8SouWYtHJRQHRyS/UaM/y4uBUgnBAMomKUlWkmyTNGC/J7dZLqqMByx63t+NqWZoZoZ7oe2AAZ0BFJJCvUsr3M1/eFq0M/EJjWjWTh+WCsIg3mdTtL2RqKv/ZOqmq0O/2tpChUemXaXGRfihgs6D6IKrYlWdplZrEmtgRDSmrnuRndPYlO2KIxJuJyA32mEJnp84kpPVUfnzuQGJz0m2ibfw9vNstle/2U+7T7H4XvbqgN/lamf203ZtSI9uzBCOZYlqT1nNKM52zJkgmPEvm0ZIJRwd6oM5xPMimOZFPRTZRgEeXTb/dADouEHa1+ydJ6c6wyR/jkgQHov+gAi+Q6nwHbI0cJi7eevFjHh7OBWQVbrfxvMFlaCDH0Nve6iRXtlW3q/zascgklMWM0RmiVVtVXNI4kMVEZiGeuM5wESztLnMV0AkVOgNBjUAQl65WJIgL/ESAoDIzYwK84Jkyw4H+fwjk1zPDWFpsOJ7h+mHgUEym+8OX
yg2FyVOrDzGGgtrymcR++wWH1VVoYKtizPBMsOPx6hZSnakNkeZMw149NY/TALhhQ4/OnaEVCYdWxHsEufUN5TYYXNfrBRNJxqevBJRMLRN5JSiznW3TRAe99UVR1cCE6DcsaOQviDbcgO4wsxJ375BjA4BsfepDiqqugLbCiUiHmZ0oeRCcGnEMMhlOhTgETk92DJIhToU8jEyNOAbg1WDEuQMRjpiOOCDyge+PTJyD9TAd8qhg1s/rp+fccNnmyO46o4RMrleD+LszILDO2wj80ilxghNjtWDi4ybO9h+xVU6wu/yv+/yvdZR+TTZ/KZQVd7hDKH3yihFjSfKQQi4d3mkNRMbjoENTQf2s10r4Irk9ydkyxpMhaFzHstPG88QNQw2Flq2X0SXN/51oyAELlIQQ9WUJcSGC3LKESTZUE0sISajZnbdx/gVacubfLldIVU+gN8AjL2UYeuoKFcsf43vN34ppFaHpj2DC7XaAaGyCTJ3jqV2UW+/jzhv3uLMKgknug3gy2tN0HZJPoIyt+GqyA1aZCNezy3E2qlXY6xWSg4MrI5HDBOY668kWPSlLWO6jD593AoFjzalD2fZef05L7vfPw/UiXoRppEIC4u0VnmJ/Pyf8hct9lt6P2Q0QPn07vMhXOWBGoPwC2Q+qfof6j1ZurEUnOha0VTPZHDJ9B1UjnbsadII5RSd0KW4d+QPp+KOBASusg888ouERScUEqm3glEfIOLluNbkM5WsNCFaxKCyKwNWD8uANF75xpU7cbjWsQdG4HeSsq/uJqC9wIU9rqE97kNOpgcR5FY9SfTfG2ne7qmAnQ5YJtgWvIrigkadTPwHzcDgygF6ppS7LA0ePXnHs7UwfBD00OeoMWbj5yqnjBxOUHpcpYdOmD5J6TLiljrYblkoLS8mjZb5op+TRNoPLrr3UmklKqKlh5eOhDavjhNJpImAY+cu5TijZ3I/uliMLJZqiyaGCou93gnmZbvJeAQgkz+kugUYi2slnzeQt6qQkfYx0xZUQlnR30/xMBVY/Jttsq53lziyXSzTXyuGC3TE6du4MYJIUqvCUDrT1fVsEa26+NHyfUvu1FiMfl5x8rcelN7HTMnDKCW4aF4abtAI8dUaqJsw9kGsO2/lfDI0ayKZqJua+scL3mv4KQ4VRlfPjzqeEagKsS38e7U6cMVsW+VKhh4po6+r1rNkB1ATtawmwnl4KFSWSqEkrmIZQqY8a17EtsiZQ4pn4LcSXO9P3Jb68jm3iq0jlTTRPFpm3dXbjZK0dyGkx0PPL3uMVtuPJS250d0NnucfnE+9qRJEoXRRpvG5d8NgeuXQoV8cEA6ZLMCj99cOFnd116GxV5hHcdc0tOEkjrZyoc4DgRrbSTLISbR7UFRcfMF907mCAW9y73bPjBo20ullcuiZiJBAqoX5QSpE0z7LDzQvZNhM8ldPO7Z2OxA58Q149tiJMBjE9x7wzdBO/KU1D4HI51jQE3vjYSdBs2mC9Jk+H0bGDZkyHh50ofeQ8nQlQx2WW28Spo+bpTIA+JtDUadBHydOZAHVcZrlNnDpqwsYE6KMiQ/+K7vem5DlZo1YRapI1iGrluU7V4NXMbyjeO3Lknw+TbAUVOEI/kcg/08FAVhFEdEYQh0EQmXoiOEUQmZrvpVLKFYJIiKguUL6VoyOIzHRE5UgIIutdp4ubF7KMAjGTbniuWE9MUJv5bSfVmfHyXmYDMZ68kG3Ga06UO0PXfUynoQ0iPesgJpZcYshmAQGHR8+GRQoi3rKuZQ71TDBCZ6qRiczn+RPQjVMP66H+vbN8OMNU6cJRv7BtXnSKiHIbvcGqb+decdQr3KnLbvzXIZ9E0Ujaka66qUzW5mGfMdLDvGy5XENbZOOUOk4x0tcnTz6dIQnY1gyYd0syl8XBExcoqZgA8wlwo9HGZY3o1GkjGSFUG3BwSh2X0yKmTR0qdYvBvOJjNNq4HBUxcdowZWA45v1URiOPy2EREycP
lhXb2LQ551iVh47UFR5zY2ks2vhO/dXXRRsS6IpT3JLHqXP6+twfJUCo7fLhlGIWRzeeQFIBRwbb2wkM3tXwOLKfnd5DZaeM7moHqjoVShO3Vxch2LVU6BY4uL0tAgeH3gpycwap3UcgxwywSU9Sk5iEHFbgiuQSzAAggjbxIGrWJ9mT46IUrbLPsb7hZN9NOIMBBeUpM+C6xjMYkPxeOZfFcvzC71sj10NSWPbYdU1ukZQevee6Be06tacbmOm5hTdc/xxH9ftMgp25J9OZ46G00KGzlyueb24K5TTDATVbde4NMl6N1cqfvF34RAwy/r0riuyv6OVrslk02OfdO7/fwcViqU2shMDDQc4pSbZWnOYE80dOzUaHnHregEGTmc1bWggZlcgWmXTxvXHHT8Dz+IneowWQGnt0Olog0DTIalXlwly7V+WsG3R4G+HEMK0LCAYv4D6Od5orRFzyjou+kT3mWlQtc7V7YH0KnQQJTpsrIZiYIVNfrrJ9ynbK5MgjTeUqsViusvh84V2l2b56N0rZSqw5+fZfwsLJN+0W00zyfxiakaCSAquGq7HTg7BPw5HxldnbUkq287s9DGZSFgsEbMZb27a4zybLBUhZzfLgnKB+Hp0ldZdORM0Zx9Z0mtCWmvPksb5oZL3Wpxpluga+Z1Kb8nYUYquVVhpfIzgPu7d2nWrm8b7UXP8WwH+tjpXu96kkEhbGZJebqrS4fFFtvQE127QH98gGHKS63AGnyg1yq3EKSPRbqbUrO8e3q5TBy/KO5Abk2hQCEzGFpunxScn8Y5tCEKipRhPVFmNIPTGUeghNiyNdib0ulcii2L+M7QFN207IPCDFyc5MBe6IjCf9ak7Tx/3cKbCIs52M757TOFkrlDr5BlU48KUoKNOOEitbnTtpLQOhKva//HoTf8kurZLttkHiQLvEqYmCXnAHdG1/lhEbu+0P86UEQgI0BPJ1CYSenDoyIH1Uefvw67vTo47n05nUkgQyXZ2wjkAIWSOPGiaZJ+vFWf0p6g9CuT8fxKopy9wqP7W6Ll7v2ivKORdFEk+TxHW3QJY0/6d1PHaPiz5ZPPktlZX2D4uE9SWh5CnpLfmA9iwUOKG8Omv5F8X3ION4Ocg0ZbqUsJHxUu4083yh4gfUwqVH3Q4LI90uugr7hCTObG6HzTmEfUpsjoiLIAI//Mdh836D57tj06cUazOWKm61TwUhQ6OGSfol551ZcRBWnFiIhkvGmRNPjhOHHz91JCuqlRT/2YTxWnWhb8I03Ebq6ILtQ/iU/7nIX0+TTR2U0W2UVJ2rXece17vgthxn2WkgurYPujIGag1sRCoYzOdNnAERY0AESTh/iXWMBokg1nhevIkpFB1PC9PT4dVl52Loz2glI1xqrol4+L57Q/C8er/MM5cSNJVlLRfPQqQm7MaPT8k21YzAtpwcMulKy0DF2J1WWkLUjFMNpnu6iDy5EKzZ/VBcbs/OMuZuVFmC6tE1qJhMlpoBcOVomqkCTRnqC3UhxGYEEUi8AHmYUXlZNAuqD7PuAINpJayiVedT1cWp2u5hYWOZoHZiFupJHUDZUiR9D2duudctZPs45rtrue/BKz+NtZ6A09OYh9wNG+0Up3Mfj81en4CJtKIhWBZfjL2e4qsshQJpKesCbNLxvokxhGqnw6kA3J4Dk+ENPBxvyEuZ8kbX6KHuS4Pm70ZlrIM3DjP/NRgK77AURsQmjRLPes+MtxWiB33NFnUp6tpw6daj8az3WnhDGajcnzeUpQx5o7veUz6p6EdYr/cULea1vEP9DCa8w5beG2DqLGqq6qitDZnv2T5fYXN/90OelZfxWPYzQP5f/if6h7r0h/AhXIXr5C7eNja/qlaG7LOhu/bFemtJ0lRq44l4LmRF+j1Nhq29yAO3esaKVAdBFdsDM0ia+82adPA8uiWtdWCBagYlQqVJTI/Tusuy1k/ucfN8ZwBggbX89h4zb4+zPDwDpIK39p2nKLNW
y7rWectkdkjLbE8NfFrBRkVGFHDXEzQbiQhfYgB68pG0EOL8OrTJWPOF7RpzRNdQY0hjrlhlXrLe4UYMESR5tZP83l84zMvCx9x2Wt9tn0Q7rWqMne20HfsAtYMWobr8n0OIwFXBIVGRfIUy9xlpnpxsYc6M63U0T8M7/uGgcWs9gNXmZH7ZTqyqz4hLQ5hPKmva1k4M370j9IDUIt2DIaJeVuMgTMPm9qhhgHa/ZWpAqWjPiBz26jD5aXqq5ODR2+b9t6icDEDQt7z/0uhCTRquRhrsncAGw1DeMjWI6GOOrpxMRrA2RQjKZJzyyR/VJzWpOHrivLNDnLH9wNKNEt9h6vZR2RsTl7GNFpA+SNS56/HryqvO2xRD74BIiUh7tTdTnwbIDSvjwHUzZEhU+Ot9tI42YZrxaMbn5dQSietPvsuJ53lKL2vdGMKAzXRxGHuHGFWho4+7ZPlX07rL7oTQDraix+RGNjggGhpDPpXGDYX95hofW+EQd2NAxPR9Q8S8PBeFUM3U2kYaTzuDzFFuMhMPoUtqOlqyxxn0c+jTGz+8Qr99n//ynpHrf36ilzqPtB2k9uc7utcC3He1TSl/gP/IFi3A7MtV9CVaqZA2uM4+exNu04vmnpXKx7RB3dNQsSNqVRjIiBTTYbaQ13sMjYtsNu82/jN5+uu/v794yXL++/vl50sdEt7GR8dw6Q8450Eh/UXHcbrpcO2MeOY46Rxn3kxmOkZnUJ2gAwkpWw864TtdqMCq7suzsMDP68y7y6x6rvzOys4W61EQzPgA3tKEZBplB2bYabYWaM7Weo114lXEgnU1NKdWxOaZmonAtKyT+61SeWbZt68DPiIvgQ6usCvcAuimONUi5yblZ2M2vAVKw1t4cD2rYIKu4609HWGSGrVaxU/bOj1c2d9w+xTN86+8jL/lNLHTo/1QEuh1T6nrQjE8kwOrMNBldUBdToe9FtK6tKEzxS7yXtEaGZsCxQxycE6UYtCfpozpbPYzxXKKYbVx+yQoZlLwepoUo9M8x5jdc6xbT7nCF7JFA0iY3MnH87UkAPoM0cNlC4QY8niSIwGaTRaw/k2ShrswFb7J83ZGRi3ZLBCpFGghJKQhkdyiY0D6aMgxGTe+Y5zHfZ5AmSIpJpVU5c/cK4ZMKkgASuq2aZqA/IU0S9n2r/k5fmas0RkL8RIZzg2EVtrzHdr+decypX1ly8rWmc6kdcNQhpo9cK/DqcIy+0s+VnRTYjWHir0znyqbPlzSWlXOzVDgTqmUUuM/JVdT7NowQm+yohWASfzfdNBET2VzJJMYZC6/LVH1FFFFYHRRNUlYHooKbpPJO9MngDPMgtrcyvHVanPqsFWLykzTgo6atlG3jj7Sp4OmNc20GkfTuozCTFXTaieoupRer3k4kU3pNR270S0t8y3JbzBl+fVcRngmIb+UzgIJwIC6tGmn8usWKAPd5Vfwc4xG5r0V+fUswypHco7L6NEU5BcBIONCCI4tvmT6xy8GwgF8CWYAtVvR2TO5EdRwrdmJzluz17bUQCkY57VxsZuoUjhjZPaYxDM/OabNJKeGkSF18jsi3thHh0uMbBpUoHJlak6YkakwIvrVL9+8RYueD+mKbE1U/54acobkpsA8hjSS0PtuQbN+02rfGGzWwXiaNGzmnxxs5kvCS8fGvH3Njk8ZMzsd0fUnjZj5LhEzgzlhY57IKJBT4jNtM7Zcjwem2ar9nCKKYTrnvZSXiUqzCnVNDXod8hQYg1OM/a1pcwpf+Zim5UDXtLycLwnus/Ph4fV1HxCqHkY0MSGZcWbjp5FHtIXGZBY4bb1JTWwGHaqyq/rvOZ0BXIhdA7TJbbe3OHs0uIxtePsIs56N59oWjEF4IQvXE76miXpDM9AO8x+QXO2kLGI+CVdZimDDkovBUtapScr6MHxb35FT2xjNBd+qzT4H5+SO7S0ccjJTZzr3HfypLgWIc04+djzia5ibWaoOIHEJ1flkw/CJL3Ik
ogppjfkEti5lnUtMYppDDdG8vTWH5SRVtFN1b4B5PCmR7jCGtzvzsLalhhqZROs+qX6Yk/QOcciSZohYyydc2BnLRK3GkiW/YXoQNUHyaFOE1eNVM2yH1PP9kW6DQRuot0wQGiiGg1qSPghBsqebJPfKDwKV++IfkkWU3/F/</diagram></mxfile>
2205.13346/main_diagram/main_diagram.pdf ADDED
Binary file (86.9 kB). View file
 
2205.13346/paper_text/intro_method.md ADDED
@@ -0,0 +1,151 @@
1
+ # Introduction
2
+
3
+ Generation tasks such as storytelling, paraphrasing, and dialogue generation aim to learn a correlation between text pairs that maps an arbitrary-length input to another arbitrary-length output.
4
+ Traditional methods are mostly trained with ``teacher forcing'', which leads to an ``exposure bias'' problem.
5
+ Combining generation methods with contrastive learning has achieved impressive performance in tackling this issue by additionally contrasting against synthetic negative samples.
6
+
7
+ [t]
8
+ \centering
9
+ \includegraphics[scale=0.62]{pic/intro.pdf}
10
+ \caption{
11
+ The semantic meaning of the sentence ``what are the best books on cosmology?'' would be greatly changed if the keyword ``cosmology'' were changed to ``astrophysics''.}
12
+
13
+ Existing contrastive mechanisms are mainly focused on the instance level.
14
+ However, word-level information is also of great importance.
15
+ Take the case shown in the upper part of the figure as an example: the keyword captures the gist of the input text and determines the text's position in the embedding space.
16
+ The text representation changes significantly if a slight perturbation is applied to the keyword, \ie changing ``cosmology'' to ``astrophysics''.
17
+ In addition, as shown in the bottom part, the classification is sometimes too easy for the model because the semantic gap between contrastive pairs is huge.
18
+ Thus, the model fails to capture the actual discrepancy, which causes a ``contrast vanishing'' problem at both the instance level and the keyword level.
19
+
20
+ Based on the above motivation, in this paper, we propose a hierarchical contrastive learning method built on top of the classic CVAE structure.
21
+ We choose CVAE due to its ability to model global properties such as syntactic, semantic, and discourse coherence.
22
+ We first learn representations of different granularities through two independent contrasts, \ie instance-level and keyword-level.
23
+ Specifically, we use the classic TextRank method to extract keywords from each text, which contain the most important information and need to be highlighted.
24
+ On the instance-level, we treat the keyword in the input text as an additional condition for a better prior semantic distribution.
25
+ Then, we utilize the Kullback–Leibler divergence to reduce the distance between the prior distribution and the positive posterior distribution, and to increase its distance to the negative posterior distribution.
26
+ While on the keyword-level, we propose a keyword graph via contrastive correlations of positive-negative pairs to learn informative and accurate keyword representations.
27
+ By treating the keyword in the output text as an anchor, the imposter keyword is produced from the neighboring nodes of the anchor keyword, forming the keyword-level contrast, where the similarity between the imposter keyword and the anchor is lower than that between the positive keyword and the anchor.
28
+
29
+ To unify individual intra-contrasts and tackle the ``contrast vanishing'' problem in independent contrastive granularities, we leverage an inter-contrast, the Mahalanobis contrast, to investigate the contrastive enhancement based on the Mahalanobis distance , a measure of the distance between a point and a distribution, between the instance distribution and the keyword representation.
30
+ Concretely, we ensure the distance from the anchor instance distribution to the ground-truth keyword vector is closer than to the imposter keyword vector.
31
+ The Mahalanobis contrast plays an intermediary role that joins the contrasts of different granularities by combining the instance distribution with the representation of its crucial part, yielding a more comprehensive keyword-driven hierarchical contrastive mechanism that improves the generated results.
32
+
33
+ We empirically show that our model outperforms CVAE and other baselines significantly on three generation tasks: paraphrasing, dialogue generation, and storytelling.
34
+
35
+ Our contributions can be summarized as follows:
36
+
37
+ $\bullet$ \textcolor{black}{To the best of our knowledge, we are the first to propose an inter-level contrastive learning method, which unifies instance-level and keyword-level contrasts in the CVAE framework.}
38
+
39
+ $\bullet$ \textcolor{black}{We propose three contrastive learning measurements: KL divergence for semantic distribution, cosine distance for points, and Mahalanobis distance for points with distribution.}
40
+
41
+ $\bullet$ \textcolor{black}{We introduce a global keyword graph to obtain polished keyword representations and construct imposter keywords for contrastive learning.}
42
+
43
+ # Method
44
+
45
+ \centering
46
+ \includegraphics[scale=0.68]{pic/model_V2.pdf}
47
+ \caption{
48
+ The architecture of hierarchical contrastive learning, which consists of three parts: (1) Keyword-level contrast from keyword graph; (2) Instance-level contrast based on KL divergence for semantic distribution; and (3) Mahalanobis contrast between instance-level and keyword-level.
49
+ }
50
+
51
+ \paragraph{VAE:} The variational auto-encoder (VAE) is a typical encoder-decoder model with latent variables.
52
+ Given an input $x$,
53
+ VAE models the latent variable $z$ through the prior distribution $p_\theta(z)$, and the observed data $x$ is reconstructed by the generative distribution $p_\theta(x|z)$, which is the likelihood function that generates $x$ conditioned on $z$.
54
+ Since $z$ is unknown, it should be estimated according to the given data $x$ as $p_\theta(z|x)$.
55
+ While the posterior density $p_\theta(z|x)=p_\theta(x|z)p_\theta(z)/p_\theta(x)$ is intractable, VAE introduces a recognition posterior distribution $q_\phi(z|x)$ that approximates the true posterior $p_\theta(z|x)$.
56
+ Thus, VAE is trained by optimizing the lower bound on the marginal likelihood of data $x$ as:
57
+
58
+ \log p_\theta(x) \geq \E_{z\sim q_\phi(z|x)}[\log p_\theta(x|z)] \\ - D_{KL}(q_\phi(z|x) || p_\theta(z)),
59
+
60
+ where $D_{KL}$ is the Kullback–Leibler divergence.
61
+
62
+ \paragraph{CVAE:} The conditional variational auto-encoder (CVAE) is the supervised version of VAE with an additional output variable.
63
+ Giving a dataset $\{x_i, y_i\}_{i=1}^N$ consisting of $N$ samples,
64
+ CVAE is trained to maximize the conditional log-likelihood, and the variational lower bound of the model is written as follows:
65
+
66
+ \log p_\theta(y|x) \geq \E_{z\sim q_\phi(z|x,y)}[\log p_\theta(y|x,z)] \\
67
+ - D_{KL}(q_\phi(z|x,y) || p_\theta(z|x)).
68
+
69
+ Assuming the latent variable follows a Gaussian distribution, the first term on the right-hand side can be approximated by drawing samples $\{z_i\}_{i=1}^N$ from the recognition posterior distribution $q_\phi(z|x,y)$, where $z\sim N(\mu, \sigma^2I)$; the objective of the CVAE with a Gaussian latent variable can then be written as:
70
+
71
+ \mathcal{L}_{cvae}(x,y;\theta,\phi) = -\frac{1}{N} \sum^{N}_{i=1} \log p_\theta (y|x,z_i)
72
+ \\ + D_{KL}(q_\phi(z|x,y) || p_\theta(z|x)),
73
+
74
+ where $z_i=g_\phi(x, y, \epsilon_i)$, $\epsilon_i \sim \mathcal{N}(0, I)$.
75
+ The distribution $q_\phi(z|x,y)$ is reparameterized with a differentiable function $g_\phi$, which makes the model trainable via stochastic gradient descent.
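As a concrete illustration of the reparameterization and of the $D_{KL}$ term for the diagonal Gaussian case, here is a small NumPy sketch (the function names and shapes are our own assumptions, not the paper's code):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Draw z ~ N(mu, sigma^2 I) as z = mu + sigma * eps with
    eps ~ N(0, I), so gradients can flow through mu and log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_diag_gaussians(mu_q, log_var_q, mu_p, log_var_p):
    """Closed-form KL(q || p) for diagonal Gaussians, the D_KL
    term appearing in the (C)VAE lower bound."""
    return 0.5 * np.sum(
        log_var_p - log_var_q
        + (np.exp(log_var_q) + (mu_q - mu_p) ** 2) / np.exp(log_var_p)
        - 1.0
    )
```

With both distributions standard normal the KL term is exactly zero, and it grows as the posterior moves away from the prior.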
76
+
77
+ Inspired by , we add keyword $u$ as an additional condition to the prior distribution to control the generation process, which turns $p_\theta(z|x)$ in Equation into $p_\theta(z|x,u)$.
78
+
79
+ In this section, we introduce our hierarchical contrastive learning method, which comprises three parts: instance-level contrast based on KL divergence (sec.), keyword-level contrast based on a keyword graph (sec.), and inter-contrast: the Mahalanobis contrast (sec.).
80
+
81
+ To tackle the ``exposure bias'' problem and discriminatively exploit the different quality of references, instance-level contrastive learning is introduced to learn discrepancies of targets.
82
+ Specifically, in addition to the observed input data $x$ and positive output $y^+$, a negative output $y^-$ is added to construct a contrastive pair $\{(x,y^+),(x,y^-)\}$.
83
+ In this case, the prior distribution $p_\theta(z|x)$ is learned from a prior network, which is denoted as $f_\theta(x)$.
84
+ The approximate posteriors $q_\phi(z|x,y^+)$ and $q_\phi(z|x,y^-)$ are learned from a posterior network and represented as $f_\phi(x,y^+)$ and $f_\phi(x,y^-)$, respectively.
85
+ The objective here is to make the distance between the prior distribution and the positive posterior distribution smaller than the distance to the negative posterior distribution.
86
+ Thus, the instance-level contrastive loss function can be written as:
87
+
88
+ \setlength{\abovedisplayskip}{1pt}
89
+ \small
90
+
91
+ \mathcal{L}_{ins} =
92
+ -\E_{f_\phi}[\log(1-\frac{e^{h(f_\phi(x,y^+), f_\theta(x)) / \tau}}{\sum_{y^* \in Y}e^{h(f_\phi(x,y^*), f_\theta(x))/\tau}})],
93
+
94
+ where $y^* \in Y$ can be the positive sample $y^+$ or the negative sample $y^-$, and $\tau$ is a temperature parameter controlling the push and pull forces.
95
+ The function $h(\cdot)$ denotes the distance between elements, which is set as Kullback–Leibler divergence in instance-level contrast, $D_{KL}(f_\phi(x,y^*)||f_\theta(x))$, to measure the difference between two distributions.
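A toy NumPy sketch of this instance-level loss (the scalar KL inputs and the function name are our own simplifications for illustration):

```python
import numpy as np

def instance_contrastive_loss(kl_pos, kl_negs, tau=1.0):
    """Sketch of the instance-level loss: h(.) is the KL divergence
    between a posterior and the prior, and the loss pushes the positive
    posterior's KL to be small relative to the negatives'.
    kl_pos: scalar KL for (x, y+); kl_negs: KLs for negative pairs."""
    logits = np.concatenate(([kl_pos], np.atleast_1d(kl_negs))) / tau
    p_pos = np.exp(logits[0]) / np.exp(logits).sum()
    # -log(1 - softmax weight of the positive pair's KL)
    return -np.log(1.0 - p_pos)
```

The loss is small when the positive posterior is much closer (in KL) to the prior than the negative posterior is, and large otherwise.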
96
+
97
+ Since the instance-level contrast focuses on learning high-level information and fails to discriminate the contribution of each word, we complement it with a keyword-level contrast that pays more attention to the specific keywords.
98
+
99
+ \paragraph{Keyword Graph:}
100
+ Given an input-output text pair $(x,y)$, keywords $k_x, k_y$ can be extracted from $x$ and $y$, respectively.
101
+ For an input text $x_i$ with keyword $k_{x,i}$, input texts that contain the same keyword are gathered into a cluster $C_i=\{x_j\}_{j=1}^n, k_{x,j} \in x_j$, where $n$ is the number of texts in $C_i$.
102
+ Each text $x_j\in C_i$ has a positive-negative output text pair $\{(y_j^+, y_j^-)\}$ containing a positive output keyword $k_{y,j}^+$ and a negative one $k_{y,j}^-$, respectively.
103
+ Thus, spreading to the entire cluster $C_i$, for the output text $y_i$, there exist positive relations $r^+_{i,j}$ between its keyword $k_{y,i}$ and each of the surrounding positive keywords $\{k_{y,j}^+\}_{j=1}^n$.
104
+ Likewise, negative relations $r^-_{i,j}$ correlate the output keyword $k_{y,i}$ with the surrounding negative ones $\{k_{y,j}^-\}_{j=1}^n$.
105
+
106
+ Based on these keywords as nodes and their relations as edges, the keyword graph $\mathcal{G}_k$ is constructed.
107
+ Each node representation $h_i^{0}$ is initialized as the average BERT embedding of texts in the cluster $C_i$ with the same corresponding keyword $k_{x,i}$.
108
+ Then, the relation edge $r^{0}_{ij}$ that connects node $i$ and node $j$ is learned via a feedforward layer $r^0_{ij}=\text{FFN}([h^0_i; h^0_j])$.
109
+
110
+ Then, the representations of nodes and relation edges are iteratively updated with their connected nodes via the graph attention (GAT) layer and the feed-forward (FFN) layer.
111
+ In the $t$-th iteration, we first update each edge representation by paying attention to the connected nodes, denoted as:
112
+
113
+ \beta_{r*}^t = \text{softmax}(\frac{(r_{ij}^t W_p)(h_{*}^{t} W_h)^T}{\sqrt{d}}),\\
114
+ p_{ij}^{t} = \beta_{ri}^t h_{i}^{t} + \beta_{rj}^t h_{j}^{t},\\
115
+ r_{ij}^{t+1} = \text{FFN}(r_{ij}^{t}+p_{ij}^{t}),
116
+
117
+ where $h_{*}^{t}$ can be $h_{i}^{t}$ or $h_{j}^{t}$.
118
+
119
+ Then, based on the obtained edge representation $r_{ij}^{t+1}$, we update the node representations considering both the related nodes and relation edges by the graph attention layer, $\text{GAT}(h_i^t, h_j^t, r_{ij}^t)$, which is designed as:
120
+
121
+ e_{ij}^{t}& = \textstyle \frac{(h_i^{t} W_q)(h_j^{t} W_k + r_{ij}^{t+1} W_r)^T}{\sqrt{d}},\\
122
+ \alpha_{ij}^t &= \textstyle \frac{\exp(e_{ij}^t)}{\sum_{l \in N_i}{\exp(e_{il}^t)}},\\
123
+ u_i^t &= \textstyle \sum_{j \in N_i}\alpha_{ij}^t ( h_j^t W_v+ r_{ij}^{t+1}),
124
+
125
+ where $W_q, W_k, W_r$ and $W_v$ are all learnable parameters, and the $\alpha_{ij}^t$ is the attention weight between $h_i^t$ and $h_j^t$.
126
+ Besides, to avoid gradient vanishing after several iterations, a residual connection is added to the output $u_i^t$, and the updated node representation $h_i^{t+1}$ is obtained.
127
+ In this way, the new representation of each keyword node consists of the relation dependency information from neighbor nodes $N_i$.
128
+ We take the node representations from the last iteration as the final keyword representations, denoted as $u$ for brevity.
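One iteration of the two updates can be sketched as follows (a NumPy toy with made-up weight shapes and the FFN reduced to a residual identity; purely illustrative, not the paper's implementation):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def update_edge(r_ij, h_i, h_j, Wp, Wh, d):
    """Edge update: the edge attends to its two endpoint nodes
    (the beta / p / r equations); the FFN is replaced by a
    residual identity for brevity."""
    scores = np.array([(r_ij @ Wp) @ (h_i @ Wh),
                       (r_ij @ Wp) @ (h_j @ Wh)]) / np.sqrt(d)
    beta = softmax(scores)
    p_ij = beta[0] * h_i + beta[1] * h_j
    return r_ij + p_ij                  # stands in for FFN(r + p)

def update_node(h, r, i, neighbors, Wq, Wk, Wr, Wv, d):
    """Node update: attention over neighbors j with edge-aware keys
    and values (the e / alpha / u equations) plus the residual."""
    q = h[i] @ Wq
    e = np.array([q @ (h[j] @ Wk + r[(i, j)] @ Wr)
                  for j in neighbors]) / np.sqrt(d)
    alpha = softmax(e)
    u = sum(a * (h[j] @ Wv + r[(i, j)]) for a, j in zip(alpha, neighbors))
    return h[i] + u                     # residual connection
```

Iterating these two functions over all edges and nodes yields the final keyword representations $u$ described above.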
129
+
130
+ \paragraph{Keyword-level Contrast:}
131
+ The keyword-level contrastive learning arises from contrasting input keywords against positive output keywords and negative imposter keywords.
132
+ The input keyword $u_{in}$ is extracted from the input text as an anchor, and the output keyword $u_{out}$ is extracted from ground-truth output text.
133
+ The imposter keyword is calculated from the negative neighbours of the output keyword $u_{out}$, written as $u_{imp} = \sum_i{W_i u_i}$, where $u_i$ is the representation of a keyword node obtained by the keyword-graph learning procedure described above.
134
+ \textcolor{black}{In this way, with the help of neighbour nodes in the graph, we can obtain a more indistinguishable and difficult negative sample.}
135
+ The loss of keyword level contrastive learning thus can be written as:
136
+
137
+ \mathcal{L}_{keyword} = -\E[\log\frac{e^{h(u_{in}, u_{out}) / \tau}}{\sum_{u_* \in U}e^{h(u_{in}, u_{*})/\tau}}],
138
+
139
+ where $u_* \in U$ denotes the positive output keyword $u_{out}$ or imposter keyword $u_{imp}$. In keyword-level contrast, $h(\cdot)$ utilizes cosine similarity to calculate the distance between points.
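A minimal NumPy sketch of this keyword-level loss with cosine similarity as $h(\cdot)$ (function names and the single-negative setup are our own simplifications):

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def keyword_contrastive_loss(u_in, u_out, u_imp, tau=0.1):
    """InfoNCE-style keyword-level loss: the anchor is the input
    keyword, the positive is the output keyword, and the negative
    is the graph-derived imposter keyword."""
    sims = np.array([cosine(u_in, u_out), cosine(u_in, u_imp)]) / tau
    return -np.log(np.exp(sims[0]) / np.exp(sims).sum())
```

The loss is low when the anchor is closer (in cosine similarity) to the ground-truth output keyword than to the imposter.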
140
+
141
+ Note that there exists a space gap between the instance-level contrast and the keyword-level contrast, which disturbs the completeness of this hierarchical contrastive architecture.
142
+ Besides, the contrastive signal vanishes when the distance metric can hardly capture the actual discrepancy between positive and negative samples using only instance distributions or only keyword representations.
143
+ To mitigate such problems, we design a Mahalanobis contrastive mechanism to correlate the instance distribution and keyword representation, where the objective is to minimize the margin between the output keyword $u_{out}$ and the posterior semantic distribution $q_\phi(z|x,y) \triangleq f_\phi(x,y)$ and maximize the margin between the imposter keyword $u_{imp}$ and the posterior distribution $f_\phi(x,y)$:
144
+
145
+ \mathcal{L}_{ma} = -\E_{f_\phi}[\log(1-\frac{e^{h(f_\phi(x,y), u_{out}) / \tau}}{\sum_{u_* \in U}e^{h(f_\phi(x,y), u_{*})/\tau}})],
146
+
147
+ where $u_{*} \in U$ can be the positive output keyword $u_{out}$ or negative imposter keyword $u_{imp}$.
148
+ In Mahalanobis contrast, $h(\cdot)$ utilizes Mahalanobis distance to measure the similarity from keyword point to the instance distribution.
149
+ In the univariate Gaussian case, $z \sim p(z|x,y)=N(\mu,\sigma^2)$, we have $h(f_\phi(x,y), u_*) \triangleq D_{MA}(p_\theta(z|x,y)|| u_{*}) = (u_{*} - \mu)^{T}(\sigma^2 I)^{-1}(u_{*} - \mu)$.
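For reference, the textbook Mahalanobis distance from a point to a Gaussian uses the inverse covariance; a sketch for the diagonal-covariance case (our own helper, not the paper's code):

```python
import numpy as np

def mahalanobis_to_diag_gaussian(u, mu, var):
    """Mahalanobis distance from point u to N(mu, diag(var)):
    sqrt((u - mu)^T Sigma^{-1} (u - mu)) with Sigma = diag(var)."""
    diff = u - mu
    return np.sqrt(np.sum(diff * diff / var))
```

With unit variance this reduces to the Euclidean distance to the mean, which is why the measure is sensitive to how spread out the distribution is along each axis.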
150
+
151
+ Finally, we equip the CVAE model with the proposed hierarchical contrastive learning framework to unify hybrid granularities by adding $\mathcal{L}_{ins}$, $\mathcal{L}_{keyword}$, and $\mathcal{L}_{ma}$ to the reconstruction loss of Equation .
2206.14754/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2206.14754/paper_text/intro_method.md ADDED
@@ -0,0 +1,198 @@
1
+ # Introduction
2
+
3
+ The composition of the training dataset has key implications for machine learning models' behavior [Fel19; CLK+19; KL17; GZ19; IPE+22], especially as the training environments often deviate from deployment conditions [RGL19; KSM+20; HBM+20]. For example, a model might struggle on specific subpopulations in the data if that subpopulation was mislabeled [NAM21; SC18; BHK+20; VCG+22], underrepresented [SKH+20; STM21], or corrupted [HD19; HBM+20]. More broadly, the training dataset might contain spurious correlations, encouraging the model to depend on prediction rules that do not generalize to deployment [XEI+20; GJM+20; DJL21]. Moreover, identifying meaningful subpopulations within data allows for dataset refinement (such as filtering or relabeling) [YQF+19; SC18], and training more fair [KGZ19; DYZ+21] or accurate [JFK+20; SHL20] models.
4
+
5
+ However, dominant approaches to such identification of biases and difficult subpopulations within datasets often require human intervention, which is typically labor intensive and thus not conducive to routine usage. For example, recent works [TSE+20; VCG+22] need to resort to manual data exploration to identify label idiosyncrasies and failure modes in widely used datasets such as ImageNet. On the other hand, a different line of work [SDA+20; NCA+20; KGZ19; LHC+21; HSN+18] does present automatic methods for identifying and intervening on hard examples, but these methods are not designed to capture *simple, human-understandable* patterns. For instance, Liu et al. [LHC+21] directly upweights inputs that were misclassified early in training, but these examples do not necessarily represent a consistent failure mode. This motivates the question:
6
+
7
+ *How can we extract meaningful patterns of model errors on large datasets?*
8
+
9
+ One way to approach this question is through model interpretability methods. These include saliency maps [AGM+18; SVZ13], integrated gradients [STY17], and LIME [RSG16b], which perform feature attribution for particular inputs. Specifically, they aim to highlight which parts of an input were most important for making a model prediction, and can thus hint at brittleness of that prediction. However, while feature attribution can indeed help debug individual test examples, it does not provide a global understanding of the underlying biases of the dataset — at least without manually examining many such individual attributions.
10
+
11
+ <sup>\*</sup>Equal contribution.
12
+
13
+ <sup>1</sup>Code available at <https://github.com/MadryLab/failure-directions>.
14
+
15
+ In this work, we build a scalable mechanism for globally understanding large datasets from the perspective of the model's prediction rules. Specifically, our goal is not only to identify interpretable failure modes within the data, but also to inform actionable interventions to remedy these problems.
16
+
17
+ Our approach distills a given model's failure modes as *directions* in a certain feature space. In particular, we train linear classifiers on normalized feature embeddings within that space to identify consistent mistakes in the original model's predictions. The decision boundary of each such classifier then defines a "direction" of hard examples. By measuring each data-point's alignment with this identified direction, we can understand how relevant that example is for the failure mode we intend to capture. We leverage this framework to:
18
+
19
+ - *Detection:* Automatically detect and quantify reliability failures, such as brittleness to distribution shifts or performance degradation on hard subpopulations (Section 2.1).
20
+ - *Interpretation:* Understand and automatically assign meaningful captions to the error patterns identified by our method (Section 2.2).
21
+ - *Intervention:* Intervene during training in order to improve model reliability along the identified axes of failure (Section 2.3). In particular, by leveraging our framework in conjunction with off-the-shelf diffusion models, we can perform synthetic data augmentation tailored to improve the analyzed model's mistakes.
22
+
23
+ Using our framework, we can automatically identify and intervene on hard subpopulations in image datasets such as CIFAR-10, ImageNet, and ChestX-ray14. Importantly, we do not require direct human intervention or pre-annotated subgroups. The resulting framework is thus a scalable approach to identifying important subpopulations in large datasets with respect to their downstream tasks.
24
+
25
+ The presence of certain undesirable patterns, such as spurious correlations or underrepresented subpopulations, in a training dataset can prevent a learned model from properly generalizing during deployment. As a running example, consider the task of distinguishing "old" versus "young" faces, wherein age is spuriously correlated with gender in the training dataset (such that the faces of older men and younger women are overrepresented). Such correlations occur in the CelebA dataset [LLW+15] (though here we construct a dataset that strengthens them).<sup>2</sup> Thus, a model trained on such a dataset might rely too heavily on gender, and will struggle to predict the age of younger men or older women. How can we detect model failures on these subpopulations?
26
+
27
+ The guiding principle of our framework is to model such failure modes as *directions* within a certain latent space (Figure 1). In the above example, we would like to identify an axis such that the (easier) examples of "old men" and the (harder) examples of "old women" lie in opposite directions. We can then capture the role of an individual data point in the dataset by evaluating how closely its normalized embedding aligns with that extracted direction (axis). But how can we learn these directions?
28
+
29
+ **Our method.** The key idea of our approach is to find a *hyperplane* that best separates the correct examples from incorrect ones. In the presence of global failure modes such as spurious correlations, the original model will likely make consistent mistakes, and these mistakes will share features. Using a held out validation set, we can therefore train a linear support vector machine (SVM) for each class to predict the original model's mistakes based on these shared features. The SVM then establishes a decision boundary between the correct and incorrect examples, and the direction of the failure mode will be orthogonal to this decision boundary (i.e., the normal vector to the hyperplane). Intuitively, the more aligned an example is with the identified failure direction, the harder the example was for the original neural network. Details of our method can be found in Appendix A.
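To make this step concrete, here is a minimal sketch of the core computation, using synthetic stand-in embeddings rather than real CLIP features (the planted `hidden_axis`, the array shapes, and all variable names are illustrative assumptions, not the paper's released code):

```python
import numpy as np
from sklearn.svm import LinearSVC

# Synthetic stand-ins for the normalized embeddings of one class's
# validation images. In the real pipeline these come from a shared
# vision/language encoder; here we plant a "failure" component along a
# hidden axis so that the base model's mistakes share a feature.
rng = np.random.default_rng(0)
d, n = 64, 400
hidden_axis = np.zeros(d)
hidden_axis[0] = 1.0

correct = rng.integers(0, 2, size=n).astype(bool)  # did the base model get it right?
x = rng.normal(size=(n, d))
x[~correct] += 4.0 * hidden_axis                   # mistakes share a feature
emb = x / np.linalg.norm(x, axis=1, keepdims=True)

# Train a linear SVM to separate the correct from the incorrect examples.
svm = LinearSVC(C=0.1, max_iter=10_000).fit(emb, correct.astype(int))

# The failure direction is the unit normal vector of the SVM hyperplane.
direction = svm.coef_[0] / np.linalg.norm(svm.coef_[0])

# Alignment with this direction indicates how hard each example was.
alignment = emb @ direction
```

Because the errors were planted along a single axis, the recovered `direction` concentrates on that axis; in the real pipeline the analogous direction encodes, e.g., gender in the CelebA example.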
30
+
31
+ <sup>2</sup>We can also detect this failure mode in the original CelebA dataset (see Appendix B.1).
32
+
33
+ ![](_page_2_Figure_0.jpeg)
34
+
35
+ Figure 1: A summary of our approach, as applied to CelebA. Consider the task of classifying faces as "young" or "old", where the training set contains a spurious correlation with gender. (**Left**) Given a trained base model, we evaluate it on a held-out validation set. (**Middle**) For each class (here "old"), we train a linear SVM on a shared vision/language latent space to predict whether the base model would have classified an input from the validation set correctly. We then extract a *direction* (gray) for the captured failure mode as the vector orthogonal to the learned hyperplane. Here, the SVM learns to use gender to separate the incorrect (red) vs. correct (green) examples. (**Right**) Images farthest from the SVM decision boundary exemplify the hardest ("old women") or the easiest ("old men") test examples. Furthermore, we can select captions that, when embedded in the shared latent space, are farthest from the decision boundary and thus capture the pattern of errors learned by the SVM.
36
+
37
+ **The choice of latent space: Leveraging shared vision/language embeddings.** Naturally, the choice of embedding space for the SVM greatly impacts the types of failure modes it picks up. Which embedding space should we choose? One option is to use the latent space of the original neural network. However, especially if the model fits the training data perfectly, it has likely learned latent representations which overfit to the training labels. In our running example of CelebA, the few older women in the training set could be memorized, and so their latent embeddings would likely be very different from those of older women in the test set. As shown in Appendix B.1.6, these discontinuities reduce the efficacy of our method, as the resulting featurizations are likely inconsistent.
38
+
39
+ To address this problem, we use an embedding that is agnostic to the specific dataset. In particular, we featurize our images using CLIP [RKH+21], which embeds both images and language into a shared latent space of unit vectors. Using these embeddings will also let us automatically assign captions to the directions extracted from the SVM. We consider other latent spaces in Appendix B.1.
40
+
41
+ Having captured the model's failure modes as directions within that latent space, we can detect, interpret, and intervene on difficult subpopulations. We now describe implementing these primitives.
42
+
43
+ How can we tell whether the extracted direction actually encapsulates a prevalent error pattern? To answer this question, we need a way to quantify the strength of the identified failure mode. Our framework provides a natural metric: the validation error of the trained SVMs. The more consistent the failure mode is, the more easily a simple linear classifier can separate the errors in the CLIP embedding space. Thus, the cross-validation score (i.e., the SVM's accuracy on held-out data) serves as a measure of the failure mode's strength in the dataset. It can then be used to detect classes with a clear bias, which will be useful for the subsequent interpretation and intervention steps.
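As an illustrative sketch of this metric (again with synthetic stand-in embeddings and hypothetical names, not the paper's code), the cross-validation score separates a class with a consistent planted failure pattern from one whose errors are random:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
d, n = 32, 300

def make_class(shift):
    """Unit-norm embeddings for one class; 'incorrect' examples (y=0)
    are shifted by `shift` along a hidden axis. shift=0 means the
    errors share no feature, i.e., no consistent failure mode."""
    y = rng.integers(0, 2, size=n)
    x = rng.normal(size=(n, d))
    x[y == 0, 0] += shift
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    return x, y

cv_score = {}
for shift in (0.0, 4.0):
    x, y = make_class(shift)
    cv_score[shift] = cross_val_score(
        LinearSVC(C=0.1, max_iter=10_000), x, y, cv=3).mean()
    print(f"planted shift {shift}: cross-validation score {cv_score[shift]:.2f}")
```

With no planted pattern the score hovers near chance (0.5), while a consistent failure mode yields a high score, mirroring the trend in Figure 4.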
44
+
45
+ Since the trained SVMs are constrained to capture simple (approximately linearly separable) patterns in embedding space, we can easily interpret the extracted directions to understand the failure modes of the dataset. We explore two approaches to extracting the subpopulations captured by our framework.
46
+
47
+ **Most aligned examples.** The examples whose normalized embeddings are most aligned with the extracted direction represent the most prototypically correct or incorrect inputs. Therefore, to get a sense of what failure mode the direction is capturing, we can examine the most extreme examples according to the SVM's *decision value*, which is proportional to the signed distance to the SVM's decision boundary. Returning to our running example, in Figure 1 the images of the "old" class that are the "most correct" correspond to men, while the "most incorrect" ones correspond to women.
48
+
49
+ **Automatic captioning.** We can leverage the fact that the SVM is trained on a shared vision/language latent space (i.e., CLIP) to automatically assign captions to the captured failure mode. Just as above we surfaced the most aligned *images*, we can similarly surface the most aligned *captions*, i.e., captions whose normalized embedding best matches the extracted direction.
50
+
51
+ Specifically, assume that each class has a reference caption r, which is a generic phrase that could describe all examples of the class (e.g., "a photo of a person"). We then generate a candidate set of more specific captions $c_1$, …, $c_m$ that include additional attributes (e.g., "a photo of a person with a moustache"). Our goal is to pick the caption for which the *additional* information provided by c — beyond that which was already provided by r — best captures the extracted failure mode.
52
+
53
+ To do so, we score a caption c by how aligned the normalized embedding $\hat{c} = \frac{c-r}{||c-r||_2}$ is with the direction captured by the SVM. This amounts to choosing the captions for which $\hat{c}$ has the most positive (easiest) or negative (hardest) SVM decision values. In contrast to previous works that choose the captions closest to the mean of a selected group of hard examples (cf. Eyuboglu et al. [EVS+22]), our method avoids this proxy and directly assigns a caption to the captured failure mode itself.
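A minimal sketch of this scoring rule (the function name and toy vectors below are hypothetical; in practice the inputs would be CLIP caption embeddings and the trained SVM's weights):

```python
import numpy as np

def score_captions(caption_embs, ref_emb, svm_w, svm_b=0.0):
    """Score candidate captions c_1..c_m by how well the *additional*
    information beyond the reference caption r aligns with the SVM
    direction: the decision value of (c - r) / ||c - r||_2.
    Most negative -> hardest subpopulation; most positive -> easiest.

    caption_embs: (m, d) embeddings of the candidate captions
    ref_emb:      (d,)   embedding of the reference caption
    svm_w, svm_b: weights and bias of the per-class SVM
    """
    diff = caption_embs - ref_emb
    diff = diff / np.linalg.norm(diff, axis=1, keepdims=True)
    return diff @ svm_w + svm_b
```

Sorting the candidates by this score and taking the two extremes yields the "easy" and "hard" captions reported for each class.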
54
+
55
+ **Directly decoding the SVM direction with diffusion models.** Some of the recently developed diffusion models (e.g., retrieval augmented diffusion models [BRO+22], DALL-E 2 [RDN+22]) generate images directly from the shared vision/language (CLIP) space. In such cases, we can *directly* decode the extracted SVM direction into generated images. This enables us to visually capture the direction itself, without needing to generate a set of candidate captions.
56
+
57
+ Specifically, let r be the normalized embedding of the reference caption, and w the normalized SVM direction. By rotating between r and either w or -w via spherical interpolation, we can generate harder or easier images, respectively. Here, we expect the degree of rotation $\alpha$ to determine the extent of difficulty<sup>3</sup>. As shown in Section 4, passing this rotated embedding to the diffusion model indeed lets us directly generate images that encapsulate the extracted failure mode.
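Spherical interpolation itself is straightforward; a self-contained sketch (the function name is ours, and in practice `r` and `w` would be the CLIP reference-caption embedding and the normalized SVM direction):

```python
import numpy as np

def slerp(r, w, alpha):
    """Spherical interpolation between unit vectors: rotate the
    reference-caption embedding r toward the SVM direction w (use -w
    for the easy direction). alpha=0 returns r; alpha=1 returns w."""
    r = r / np.linalg.norm(r)
    w = w / np.linalg.norm(w)
    theta = np.arccos(np.clip(r @ w, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return r
    return (np.sin((1 - alpha) * theta) * r
            + np.sin(alpha * theta) * w) / np.sin(theta)
```

The resulting unit vector can then be passed to a diffusion model that decodes CLIP embeddings, with larger `alpha` producing images further along the failure direction.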
58
+
59
+ Now that we have identified some of the failure modes of the model, can we improve our model's performance on the corresponding challenging subpopulations? It turns out that this is possible, via both real and synthetic data augmentation.
60
+
61
+ **Filtering intervention.** Given an external pool of examples that was not used to train the original model, we can select the best of these examples to improve performance on the hard subpopulations. Specifically, if the goal is to add only *K* examples per class to the training dataset (e.g., due to computation constraints), we simply add the *K* images with the most negative SVM decision values. In Appendix B.1.2, we discuss an alternative intervention scheme, upweighting hard training examples.
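The selection step reduces to a sort over SVM decision values; a minimal sketch (function and variable names are illustrative):

```python
import numpy as np

def select_hardest(pool_embs, svm_w, svm_b, k):
    """From an external pool of (normalized) embeddings not used to
    train the original model, return the indices of the K examples with
    the most negative SVM decision values, i.e., those most aligned
    with the captured failure mode."""
    decision = pool_embs @ svm_w + svm_b
    return np.argsort(decision)[:k]
```

The selected indices are then used to pull the corresponding images into the training set for the given class.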
62
+
63
+ <sup>3</sup>Our approach mirrors the "text-diff" technique in DALL-E 2 [RDN+22], which uses spherical interpolation to transform images according to textual commands.
64
+
65
+ ![](_page_4_Figure_0.jpeg)
66
+
67
+ Figure 2: For each CelebA class, the fraction of test images that are of the minority gender when ordering the images by either their SVM decision value or model confidences. We include additional baselines: Domino [EVS+22], LfF [NCA+20], and confidences after early stopping. Our framework more reliably captures the spurious correlation than all other baselines.
68
+
69
+ **Synthetic data augmentation.** In the absence of such an external pool of data, we can leverage our framework together with text-to-image diffusion models (e.g., Stable Diffusion [RBL+22], DALL-E 2 [RDN+22], and Imagen [SCS+22]) to generate synthetic images instead. After automatically captioning each failure mode, we simply input these captions into the diffusion model to generate images from the corresponding subpopulation. In Section 4, we show that fine-tuning the model on such images improves model reliability on these subpopulations.
70
+
71
+ # Method
72
+
73
+ In Section 2, we presented an approach for distilling the failure modes as directions within a latent space. In this section, we validate our framework by evaluating its performance on datasets with *known* pathologies. In Section 4, we apply our framework to discover challenging subpopulations in datasets where the bias is *not* known beforehand. Experimental details can be found in Appendix B.
74
+
75
+ We focus on two settings: (1) detecting a spurious correlation in a facial recognition dataset, and (2) isolating underrepresented subtypes in image classification. In Appendix B, we consider other settings, such as Colored MNIST [ABG+19] and ImageNet-C [HD19].
76
+
77
+ We revisit our running example from Section 2, where a model is trained to predict age from the CelebA dataset. The training dataset contains a spurious correlation with gender (which we enhance by selectively subsetting the original dataset). We use a validation set that is balanced across age and gender, but also explore unbalanced validation sets in Appendix B.1 and Section 4.
78
+
79
+ **Capturing the spurious correlation.** Does our framework identify gender as the key failure mode in this setting? It turns out that even though only 33% of the dataset comes from the challenging subpopulations "old women" and "young men", 89% of the images flagged as incorrect by our framework are from these groups. Indeed, as shown in Figure 2, the SVM selects those hard subpopulations more consistently than the original confidences do. Because the underlying linear classifier is forced to use simple prediction rules, the SVM flags a more homogeneous population than confidence-based ranking does. We find that our method further outperforms other baselines such as Domino [EVS+22], Learning from
80
+
81
+ ![](_page_5_Figure_0.jpeg)
82
+
83
+ Figure 3: The images and captions for each class with the most extreme SVM decision values. Those scored as most incorrect are in the minority subpopulations ("old women" and "young men"), while those scored as most correct are in the majority subpopulations ("old men" and "young women").
84
+
85
+ ![](_page_5_Figure_2.jpeg)
86
+
87
+ Figure 4: Our SVM's cross validation score (corresponding to its estimated ability to assess the strength of the extracted failure mode) compared to the strength of the planted spurious correlation. The SVM scores are highly correlated with the strength of the shift.
88
+
89
+ Failure [NCA+20], or using confidences after early stopping (as in Liu et al. [LHC+21]) (see Appendix B.1 for further experimental details).
90
+
91
+ **Automatically interpreting the captured failure mode.** In Figure 3, we surface the examples most aligned with the extracted direction using the SVM's decision values. As shown, examples with the most negative SVM decision values are indeed from the hard subpopulations. We also automatically caption each captured failure mode. To this end, we consider a candidate caption set which includes attributes such as age, gender, and facial descriptors. The captions that are most aligned with the extracted direction in CLIP space capture the spurious correlation on gender (Figure 3).
92
+
93
+ **Quantifying the strength of the shift.** As mentioned in Section 2, the cross-validation scores of the SVM quantify the strength of the spurious correlation. In Figure 4, we train base models on versions of CelebA with increasing degrees of the planted spurious correlation, and find that the SVM cross-validation scores strongly correlate with the strength of the planted shift.
94
+
95
+ Above, we considered a setting where the model predicts incorrectly because it relies on a spurious feature (gender). What if a model struggles on a subpopulation not due to a spurious feature, but simply because there are not enough examples of that image type?
96
+
97
+ To evaluate our approach in this setting, consider the task of predicting the superclass of a hierarchical dataset when subclasses have been underrepresented. In particular, the CIFAR-100 [Kri09] dataset contains twenty superclasses, each of which contains five equally represented subclasses. For each such superclass, we subsample one subclass so that it is underrepresented in the training data. For instance, for the superclass "aquatic mammals", we remove a large fraction of the beaver subclass from the training dataset. In Figure 5, we find that our framework correctly isolates the minority subclass as the most incorrect for each superclass more consistently than model confidences do.
98
+
99
+ **Validating automatic captioning.** For each superclass, we construct a caption set that corresponds to the possible subclasses (e.g., for the superclass "aquatic mammals," the caption set is "a photo of a beaver", "a photo of a dolphin", etc.). We find that the caption corresponding to the minority subclass was the top negative caption for 80% of the superclasses, and in the top two for 95% of the superclasses.
100
+
101
+ **Filtering intervention.** Having isolated hard subpopulations, we now apply our framework downstream to improve performance on them. We use the trained SVMs to choose a subset of images from a larger
102
+
103
+ ![](_page_6_Figure_0.jpeg)
104
+
105
+ ![](_page_6_Figure_1.jpeg)
106
+
107
+ Figure 5: For each CIFAR-100 superclass, the fraction of the top K flagged test images that belong to the underrepresented subclass (averaged over classes). We compare using the SVM's decision value and the model's confidences to select the top K images; the SVM more consistently identifies the minority subpopulation.
108
+
109
+ Figure 6: For CIFAR-100, the accuracy on the minority subclasses after adding K images per superclass from the extra data (reported over five runs). We compare using the SVM's decision value or the model's confidences to select images. Relying on the SVM's values provides the most improvement on the minority subclasses.
110
+
111
+ pool of data to add to the training set. In Figure 6, we find that using our framework to select this extra data improves accuracy on the minority subclasses more than using model confidences or choosing a random subset does, while maintaining approximately the same overall accuracy.
112
+
113
+ Above, we considered datasets with known pathologies, such as spurious correlations or underrepresented subtypes. In this section, we apply our framework to datasets without any pre-annotated difficult subpopulations. Our approach discovers new subpopulations representing key failure modes for the models. Specifically, we apply our framework to CIFAR-10 [Kri09], ImageNet [DDS+09], and ChestX-ray14 [RIZ+17]. Experimental details can be found in Appendix C.
114
+
115
+ We begin with the CIFAR-10 [Kri09] dataset. Since the accuracy of a ResNet18 on CIFAR-10 is very high (93%), the original model does not make many errors. Thus, we instead consider a weaker base model trained on 20% of the original CIFAR-10 dataset, where another 20% is used for validation and the remaining 60% is used as extra data for the subset intervention. (See Appendix C.1 for additional experiments, including applying our framework to the full CIFAR-10 dataset.)
116
+
117
+ **Finding interpretable subpopulations within the CIFAR-10 dataset.** Figure 7a displays examples of the failure modes identified by our framework. We identify white cats on grass and black dogs as hard, and brown cats and white/brown dogs photographed indoors as easy.
118
+
119
+ Do these directions in latent space map to real failure modes of the base model? Without manual annotations, we can no longer directly report the original model's accuracy on these minority groups. However, as a proxy, we evaluate the base model on the images that are closest in cosine similarity to the captions chosen by the SVM. For example, to get white cats on grass, we rank the CIFAR-10 cats by how close their embeddings are to the caption "a photo of a white cat on grass" in CLIP space<sup>4</sup>. Using this proxy, in Figure 7b we confirm that the surfaced SVM captions represent real failure modes in the dataset.
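This proxy evaluation is a cosine-similarity ranking followed by an accuracy computation; a minimal sketch with toy stand-ins for the CLIP embeddings (function and variable names are ours):

```python
import numpy as np

def subpop_accuracy(image_embs, preds, labels, caption_emb, k):
    """Proxy evaluation: take the K images whose embeddings are closest
    (by cosine similarity) to a caption embedding, and report the base
    model's accuracy on just those images."""
    img = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    cap = caption_emb / np.linalg.norm(caption_emb)
    top_k = np.argsort(img @ cap)[::-1][:k]   # most similar first
    return (preds[top_k] == labels[top_k]).mean()
```

Comparing this quantity for the most negative versus most positive SVM caption gives the accuracy gap reported in Figure 7b.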
120
+
121
+ **Decoding the SVM direction to generate challenging images.** Since the SVM operates in CLIP space, we can directly decode the extracted direction into an image using an off-the-shelf diffusion model [BRO+22]
122
+
123
+ <sup>4</sup>In Appendix C.1, we validate that this approach surfaces images that visually match the caption.
124
+
125
+ ![](_page_7_Figure_0.jpeg)
126
+
127
+ ![](_page_7_Figure_1.jpeg)
128
+
129
+ (a) Most extreme images and extracted SVM captions
130
+
131
+ (b) Model accuracies on each identified subpopulation.
132
+
133
+ Figure 7: (a) The images with the most extreme SVM decision values for the CIFAR-10 classes cat and dog, along with the most positive/negative captions according to the SVM. (See Appendix C.1.3 for more examples.) (b) For each class, the accuracy of the K test images closest in CLIP embedding space to the most positive/negative SVM captions for that class (averaged over classes). The images closest to the negative caption have a lower accuracy than those closest to the positive caption.
134
+
135
+ ![](_page_7_Figure_5.jpeg)
136
+
137
+ (a) Examples of hard and easy generated images.
138
+
139
+ (b) Model accuracies on the generated images.
140
+
141
+ Figure 8: To directly decode the SVM direction into images, we spherically interpolate the extracted direction and the embedding of the reference caption before passing this vector to a diffusion model. (a) Examples of synthetic hard and easy images of CIFAR-10 cats and dogs. The generated images match the trends found in Figure 7a. Further examples can be found in Appendix C.1.4. (b) Base model accuracy on generated hard and easy images (100 images per CIFAR-10 class) over varying degrees of spherical interpolation ( $\alpha$ ). Neutral images were generated using the reference caption. The base model performs worst on the hard generated images and best on the easy ones.
142
+
143
+ which also samples from this space. By spherically interpolating between the embedding of the reference caption (e.g., "a photo of a cat") and the extracted SVM direction, we can generate harder or easier images corresponding to the extracted failure mode (Figure 8a). For example, interpolating with the hard direction for CIFAR-10 cats generates white cats on grass, matching the extracted SVM caption in Figure 7a. Across classes, we verify that the original model performs worse on the "hard" generated images and better on the "easy" ones (Figure 8b).
144
+
145
+ **Targeted synthetic data augmentation.** By leveraging text-to-image generative models, we can generate images for *targeted* data augmentation. Using an off-the-shelf Stable Diffusion [RBL+22] model, we generate 100 images per class using the corresponding negative SVM caption (e.g., "a photo of a white cat on the grass") as the prompt. After adding these images to the training set, we retrain the last layer of the original model. Fine-tuning the model on these synthetic images improves accuracy on the hard subpopulation — defined according to similarity in CLIP space to the negative caption — compared to using generic images generated from the reference caption (Figure 9a).
146
+
147
+ It turns out that this procedure for targeted synthetic data augmentation is particularly effective for the identified challenging subpopulation. Generating images using the easy caption (e.g., "a photo of a cat inside") does not improve accuracy on images within that subpopulation any more than augmenting with
148
+
149
+ ![](_page_8_Figure_0.jpeg)
150
+
151
+ ![](_page_8_Figure_1.jpeg)
152
+
153
+ - (a) Evaluating images closest to the hard subpopulation
154
+ - (b) Evaluating images closest to the easy subpopulation
155
+
156
+ Figure 9: We fine-tune the model using 100 synthetic images per class generated via stable diffusion based on the hard, easy, or reference caption. We measure the accuracy of the K test images closest to the (a) hard or (b) easy SVM caption in CLIP space. Fine-tuning the model on the images generated from the hard captions boosts the accuracy of the model on these corresponding, real test images more than augmenting with generic synthetic images does. The analogous phenomenon does not hold, however, when we target the easy subpopulation.
157
+
158
+ ![](_page_8_Figure_5.jpeg)
159
+
160
+ ![](_page_8_Figure_6.jpeg)
161
+
162
+ - (a) Most extreme images and extracted SVM captions.
163
+ - (b) Model accuracies on each identified subpopulation
164
+
165
+ Figure 10: (a) The most extreme images and captions by SVM decision value for ImageNet classes tench and red wolf. Our framework identifies clear biases on color (e.g., red wolf) and co-occurrence of unrelated objects (e.g., tench). (b) For each class, the accuracy of the top K images that were closest in CLIP space to the most positive/negative SVM captions for that class. The identified hard (negative) subpopulation has a lower accuracy than the identified easy (positive) subpopulation.
166
+
167
+ generic synthetic images (Figure 9b).
168
+
169
+ It is also possible to augment the dataset with images generated directly from the SVM direction, as in Figure 8a, using a diffusion model that samples directly from CLIP space. In doing so, we could skip the intermediate captioning stage entirely. Currently, the diffusion models that operate in CLIP space are either not open-source (DALL-E 2 [RDN+22]) or trained with less data [BRO+22]. However, with further progress in CLIP-space diffusion models, this may be a promising mechanism for tailoring data augmentation directly from the SVM direction.
170
+
171
+ We also apply our framework on a larger scale: to find challenging subpopulations in ImageNet [DDS+09] (see Appendix C.2 for further experimental details). In Figure 10a, we display examples of the failure modes captured by our framework with their associated SVM captions. These biases include over-reliance on color (e.g., the coat of a red wolf) or sensitivity to co-occurring objects (e.g., the presence of a person holding a tench). We include more examples in Appendix C.2.1.
172
+
173
+ As in Section 4.1, we validate that these directions correspond to real failure modes by evaluating the accuracy of the test images that are closest to the positive or negative SVM caption in CLIP space (Figure 10b). The identified subpopulations indeed have a significant difference in overall accuracy.
174
+
175
+ ![](_page_9_Figure_0.jpeg)
176
+
177
+ Figure 11: The most extreme examples by SVM decision value and by confidence for the "no effusion" class. Notably, the SVM distills a failure mode that is not easily seen in the low confidence examples.
178
+
179
+ | Statistic | Value |
+ |---|---|
+ | % of overall that is AP | 38.18% |
+ | % of overall that is incorrect | 12.62% |
+ | % of overall that is flagged | 28.11% |
+ | % of incorrect that is AP | 55.351% |
+ | % of flagged that is AP | 59.826% |
185
+
186
+ Figure 12: The fraction of examples from the "no effusion" class that are taken in the AP position. Notably, the base model disproportionately struggles on AP images, and the majority of images flagged by our framework are AP.
187
+
188
+ ![](_page_9_Figure_4.jpeg)
189
+
190
+ Figure 13: The fraction of the top K examples from "no effusion" class that are AP. We compare using the SVM's decision value and the model's confidences to select these images.
191
+
192
+ We conclude our analysis with the ChestX-ray14 [RIZ+17] dataset, which contains frontal X-ray images labeled with 14 different conditions. ChestX-ray14 is a multi-label tagging task (i.e., a single image can have more than one condition), so we treat each condition as its own binary problem. In this section, we focus on the condition Effusion. Results on other conditions can be found in Appendix C.3.
193
+
194
+ The trained SVM identifies visually distinguishable failure mode directions in latent space. As shown in Figure 11, the representative images flagged by this SVM as most incorrect are blurrier and less bright. Moreover, this trend is not reflected by the least confident images, indicating that our framework is isolating a different trend than the one corresponding to ordering the images by base model confidence.
195
+
196
+ In fact, we find that the SVM may be picking up on the *position* in which the exam was conducted. While the majority of the X-rays are Posterior-Anterior (PA) radiographs, a little over a third are Anterior-Posterior (AP). PA radiographs are usually clearer, but require the patient to be well enough to stand [TB20]. Examples of AP and PA radiographs from the dataset can be found in Appendix C.3.
197
+
198
+ As shown in Figure 12, the SVM for the class "no effusion" flags a large number of the AP radiographs as incorrect. This indicates that the model might indeed rely on the position in which the radiograph was taken to predict whether the patient was healthy. Moreover, the SVM selects the AP examples more consistently than ordering the radiographs by the base model's confidence (Figure 13).
2207.09944/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2207.09944/paper_text/intro_method.md ADDED
@@ -0,0 +1,155 @@
1
+ # Introduction
2
+
3
+ []{#sec:intro label="sec:intro"} Despite remarkable successes in recent years [@lecun2015deep; @silver2016mastering; @jumper2021highly], machine learning systems often fail calamitously when presented with *out-of-distribution* (OOD) data [@torralba2011unbiased; @beery2018recognition; @hendrycks2019benchmarking; @geirhos2020shorcut]. Evidence of state-of-the-art systems failing in the face of distribution shift is mounting rapidly---be it due to spurious correlations [@zech2018variable; @arjovsky2020invariant; @niven2020probing], changing sub-populations [@santurkar2020breeds; @wilds2021; @borkan2019nuanced], changes in location or time [@hansen2013high; @christie2018functional; @shankar2021image], or other naturally-occurring variations [@karahan2016image; @azulay2019deep; @eastwood2021source; @hendrycks2021natural; @hendrycks2021many; @robey2021modelbased; @zhou2022deep]. These OOD failures are particularly concerning in safety-critical applications such as medical imaging [@jovicich2009mri; @albadawy2018deep; @tellez2019quantifying; @beede2020; @wachinger2021detect] and autonomous driving [@dai2018dark; @volk2019towards; @michaelis2019dragon], where they represent one of the most significant barriers to the real-world deployment of machine learning systems [@ribeiro2016should; @biggio2018wild; @maartensson2020reliability; @castro2020causality].
4
+
5
+ Domain generalization (DG) seeks to improve a system's OOD performance by leveraging datasets from multiple environments or domains at training time, each collected under different experimental conditions [@blanchard2011generalizing; @muandet2013domain; @gulrajani2020search] (see [1](#fig:fig1:train-test){reference-type="ref+label" reference="fig:fig1:train-test"}). The goal is to build a predictor which exploits invariances across the training domains in the hope that these invariances also hold in related but distinct test domains [@gulrajani2020search; @schoelkopf2012causal; @li2018learning; @krueger21rex]. To realize this goal, DG is commonly formulated as an average- [@blanchard2011generalizing; @blanchard2021domain; @zhang2020adaptive] or worst-case  [@ben2009robust; @sagawa2019distributionally; @arjovsky2020invariant] optimization problem over the set of possible domains. However, optimizing for average performance provably lacks robustness to OOD data [@nagarajan2021understanding], while optimizing for worst-domain performance tends to lead to overly-conservative solutions, with worst-case outcomes unlikely in practice [@tsipras2019robustness; @raghunathan2019adversarial].
6
+
7
+ In this work, we argue that DG is neither an average-case nor a worst-case problem, but rather a probabilistic one. To this end, we propose a probabilistic framework for DG, which we call *Probable Domain Generalization* ([3](#sec:qrm){reference-type="ref+label" reference="sec:qrm"}), wherein the key idea is that distribution shifts seen during training should inform us of *probable* shifts at test time. To realize this, we explicitly relate training and test domains as draws from the same underlying meta-distribution ([2](#fig:fig1:q-dist){reference-type="ref+label" reference="fig:fig1:q-dist"}), and then propose a new optimization problem called *Quantile Risk Minimization* (QRM). By minimizing the $\alpha$-quantile of a predictor's risk distribution over domains ([3](#fig:fig1:risk){reference-type="ref+label" reference="fig:fig1:risk"}), QRM seeks predictors that perform well *with high probability* rather than on average or in the worst case. In particular, QRM leverages the key insight that this $\alpha$-quantile is an upper bound on the test-domain risk which holds with probability $\alpha$, meaning that $\alpha$ is an interpretable conservativeness hyperparameter with $\alpha\! =\! 1$ corresponding to the worst-case setting.
8
+
9
+ To solve QRM in practice, we introduce the *Empirical QRM* (EQRM) algorithm ([4](#sec:qrm_algs){reference-type="ref+label" reference="sec:qrm_algs"}). Given a predictor's empirical risks on the training domains, EQRM forms an estimated risk distribution using kernel density estimation (KDE, [@parzen1962estimation]). Importantly, KDE-smoothing ensures a right tail that extends beyond the largest training risk (see [3](#fig:fig1:risk){reference-type="ref+label" reference="fig:fig1:risk"}), with this risk "extrapolation" [@krueger21rex] unlocking *invariant prediction* for EQRM ([4.1](#sec:qrm_algs:eqrm){reference-type="ref+label" reference="sec:qrm_algs:eqrm"}). We then provide theory for EQRM ([4.2](#sec:qrm_algs:gen_bound){reference-type="ref+label" reference="sec:qrm_algs:gen_bound"}, [4.3](#sec:qrm_algs:causality){reference-type="ref+label" reference="sec:qrm_algs:causality"}) and demonstrate empirically that EQRM outperforms state-of-the-art baselines on real and synthetic data ([6](#sec:exps){reference-type="ref+label" reference="sec:exps"}).
10
+
11
+ **Contributions.** To summarize our main contributions:
12
+
13
+ - *A new probabilistic perspective and objective for DG:* We argue that predictors should be trained and tested based on their ability to perform well *with high probability*. We then propose Quantile Risk Minimization for achieving this *probable* form of domain generalization ([3](#sec:qrm){reference-type="ref+label" reference="sec:qrm"}).
14
+
15
+ - *A new algorithm:* We propose the EQRM algorithm to solve QRM in practice and ultimately learn predictors that generalize with probability $\alpha$ ([4](#sec:qrm_algs){reference-type="ref+label" reference="sec:qrm_algs"}). We then provide several analyses of EQRM:
16
+
17
+ - *Learning theory:* We prove a uniform convergence bound, meaning the empirical $\alpha$-quantile risk tends to the population $\alpha$-quantile risk given sufficiently many domains and samples ([\[thm:simplified-gen-bound\]](#thm:simplified-gen-bound){reference-type="ref+label" reference="thm:simplified-gen-bound"}).
18
+
19
+ - *Causality:* We prove that EQRM learns predictors with invariant risk as $\alpha\! \to\! 1$ ([\[prop:equalize_main\]](#prop:equalize_main){reference-type="ref+label" reference="prop:equalize_main"}), then provide the conditions under which this is sufficient to recover the causal predictor ([\[thm:causal_predictor\]](#thm:causal_predictor){reference-type="ref+label" reference="thm:causal_predictor"}).
20
+
21
+ - *Experiments:* We demonstrate that EQRM outperforms state-of-the-art baselines on several standard DG benchmarks, including `CMNIST` [@arjovsky2020invariant] and datasets from WILDS [@wilds2021] and DomainBed [@gulrajani2020search], and highlight the importance of assessing the tail or *quantile performance* of DG algorithms ([6](#sec:exps){reference-type="ref+label" reference="sec:exps"}).
22
+
23
+ <figure id="fig:fig1" data-latex-placement="tb">
24
+ <figure id="fig:fig1:train-test">
25
+ <embed src="figs/train_test_domains_1a_final.pdf" />
26
+ <figcaption aria-hidden="true"></figcaption>
27
+ </figure>
28
+ <figure id="fig:fig1:q-dist">
29
+ <embed src="figs/q_dist_1b_final.pdf" style="width:95.0%" />
30
+ <figcaption aria-hidden="true"></figcaption>
31
+ </figure>
32
+ <figure id="fig:fig1:risk">
33
+ <embed src="figs/final_fig1c_cropped.pdf" />
34
+ <figcaption aria-hidden="true"></figcaption>
35
+ </figure>
36
+ <figcaption><strong>Overview of Probable Domain Generalization and Quantile Risk Minimization.</strong> (a) In domain generalization, training and test data are drawn from multiple related distributions or domains. For example, in the <code>iWildCam</code> dataset <span class="citation" data-cites="beery2021iwildcam"></span>, which contains camera-trap images of animal species, the domains correspond to the different camera-traps which captured the images. (b) We relate training and test domains as draws from the same underlying (and often unknown) meta-distribution over domains <span class="math inline">$\bbQ$</span>. (c) We consider a predictor’s estimated risk distribution over training domains, naturally-induced by <span class="math inline">$\bbQ$</span>. By minimizing the <span class="math inline"><em>α</em></span>-quantile of this distribution, we learn predictors that perform well with high probability (<span class="math inline"> ≈ <em>α</em></span>) rather than on average or in the worst case. </figcaption>
37
+ </figure>
38
+
39
+ **Setup.** In domain generalization (DG), predictors are trained on data drawn from multiple related training distributions or *domains* and then evaluated on related but unseen test domains. For example, in the `iWildCam` dataset [@beery2021iwildcam], the task is to classify animal species in images, and the domains correspond to the different camera-traps which captured the images (see [1](#fig:fig1:train-test){reference-type="ref+label" reference="fig:fig1:train-test"}). More formally, we consider datasets $D^e = \{(x^e_i, y^e_i)\}_{i=1}^{n_e}$ collected from $m$ different training domains or *environments* $\Etr:= \{e_1, \dots, e_m\}$, with each dataset $D^e$ containing data pairs $(x^e_i, y^e_i)$ sampled i.i.d. from $\Prob(X^e,Y^e)$. Then, given a suitable function class $\calF$ and loss function $\ell$, the goal of DG is to learn a predictor $f\in\calF$ that generalizes to data drawn from a larger set of all possible domains $\Eall \supset \Etr$.
40
+
41
+ **Average case.** Letting $\calR^e(f)$ denote the statistical risk of $f$ in domain $e$, and $\bbQ$ a distribution over the domains in $\Eall$, DG was first formulated [@blanchard2011generalizing; @muandet2013domain] as the following average-case problem: $$\begin{equation}
42
+ \label{eq:domain-gen-average-case}
43
+ \min_{f\in\calF} \bbE_{e \sim \bbQ} \calR^e(f)
44
+ \qquad \text{where} \qquad
45
+ \calR^e(f) := \mathbb{E}_{\bbP(X^e, Y^e)} [\ell(f(X^e), Y^e)].
46
+ \end{equation}$$
47
+
48
+ **Worst case.** Since predictors that perform well *on average* provably lack robustness [@nagarajan2021understanding], i.e. they can perform quite poorly on large subsets of $\Eall$, subsequent works [@ben2009robust; @sagawa2019distributionally; @arjovsky2020invariant; @krueger21rex; @ahuja2021invariance; @robey2021modelbased] have sought robustness by formulating DG as the following *worst-case* problem: $$\begin{equation}
49
+ \label{eq:domain-gen}
50
+ \min_{f\in\calF} \max_{e \in \Eall} \calR^e(f).
51
+ \end{equation}$$ As we only have access to data from a finite subset of $\Eall$ during training, solving [\[eq:domain-gen\]](#eq:domain-gen){reference-type="eqref" reference="eq:domain-gen"} is not just challenging but in fact impossible [@krueger21rex; @ben2010theory; @christiansen2021causal] without restrictions on how the domains may differ.
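To make the tension between these two formulations concrete, the following sketch (with illustrative, made-up risk values, not numbers from the paper) compares a predictor that excels on average with one that is robust in the worst case:

```python
import numpy as np

# Hypothetical per-domain risks R^e(f) for two predictors on five
# training domains (illustrative values only).
risks_f1 = np.array([0.10, 0.12, 0.11, 0.13, 0.95])  # fails badly in one domain
risks_f2 = np.array([0.30, 0.31, 0.29, 0.32, 0.33])  # uniformly mediocre

def average_case_risk(risks):
    """Objective of the average-case formulation: E_{e~Q} R^e(f)."""
    return float(np.mean(risks))

def worst_case_risk(risks):
    """Objective of the worst-case formulation: max_e R^e(f)."""
    return float(np.max(risks))
```

Here average-case selection prefers the first predictor while worst-case selection prefers the second; QRM's $\alpha$-quantile interpolates between these two extremes.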
52
+
53
+ **Causality and invariance in DG.** Causal works on DG [@arjovsky2020invariant; @krueger21rex; @peters2016causal; @christiansen2021causal; @rojas2018invariant] describe domain differences using the language of causality and the notion of *interventions* [@pearl2009causality; @peters2017elements]. In particular, they assume all domains share the same underlying *structural causal model* (SCM) [@pearl2009causality], with different domains corresponding to different interventions (see [1.1](#app:causality:defs){reference-type="ref+label" reference="app:causality:defs"} for formal definitions and a simple example). Assuming the mechanism of $Y$ remains fixed or invariant but all $X$s may be intervened upon, recent works have shown that only the causal predictor: (i) has invariant predictive distributions [@peters2016causal], coefficients [@arjovsky2020invariant] or risks [@krueger21rex] across domains; and (ii) generalizes to arbitrary interventions on the $X$s [@peters2016causal; @arjovsky2020invariant; @rojas2018invariant]. These works then leverage some form of invariance across domains to discover causal relationships which, through the invariant-mechanism assumption, generalize to new domains.
54
+
55
+ In this section we introduce *Quantile Risk Minimization* (QRM) for achieving *Probable Domain Generalization*. The core idea is to replace the worst-case perspective of [\[eq:domain-gen\]](#eq:domain-gen){reference-type="eqref" reference="eq:domain-gen"} with a probabilistic one. This approach is founded on a great deal of work in classical fields such as control theory [@campi2008exact; @ramponi2018consistency] and smoothed analysis [@spielman2004smoothed], wherein approaches that yield high-probability guarantees are used in place of worst-case approaches in an effort to mitigate conservatism and computational limitations. This mitigation is of particular interest in domain generalization since generalizing to arbitrary domains is impossible [@krueger21rex; @ben2010theory; @christiansen2021causal]. Thus, motivated by this classical literature, our goal is to obtain predictors that are robust *with high probability* over domains drawn from $\Eall$, rather than in the worst case.
56
+
57
+ **A distribution over environments.** We start by assuming the existence of a probability distribution $\mathbb{Q}(e)$ over the set of all environments $\Eall$. For instance, in the context of medical imaging, $\bbQ$ could represent a distribution over potential changes to a hospital's setup or simply a distribution over candidate hospitals. Given that such a distribution $\bbQ$ exists[^2], we can think of the risk $\calR^e(f)$ as a *random variable* for each $f\in\calF$, where the randomness is engendered by the draw of $e\sim\bbQ$. This perspective gives rise to the following analogue of the optimization problem in [\[eq:domain-gen\]](#eq:domain-gen){reference-type="eqref" reference="eq:domain-gen"}: $$\begin{equation}
58
+ \min_{f\in\calF} \: \esssup_{e\sim\bbQ} \calR^e(f) \quad\text{where}\quad \esssup_{e\sim\bbQ} \calR^e(f) = \inf\Big\{ t\geq 0 : \Pr_{e\sim\bbQ} \left\{\calR^e(f) \leq t\right\} = 1\Big\} \label{eq:domain-gen-rewritten}
59
+ \end{equation}$$ Here, $\esssup$ denotes the *essential-supremum* operator from measure theory, meaning that for each $f\in\calF$, $\esssup_{\bbQ} \calR^e(f)$ is the least upper bound on $\calR^e(f)$ that holds for almost every $e\sim\bbQ$. In this way, the $\esssup$ in [\[eq:domain-gen-rewritten\]](#eq:domain-gen-rewritten){reference-type="eqref" reference="eq:domain-gen-rewritten"} is the measure-theoretic analogue of the $\max$ operator in [\[eq:domain-gen\]](#eq:domain-gen){reference-type="eqref" reference="eq:domain-gen"}, with the subtle but critical difference being that the $\esssup$ in [\[eq:domain-gen-rewritten\]](#eq:domain-gen-rewritten){reference-type="eqref" reference="eq:domain-gen-rewritten"} can neglect domains of measure zero under $\bbQ$. For example, for discrete $\bbQ$, [\[eq:domain-gen-rewritten\]](#eq:domain-gen-rewritten){reference-type="eqref" reference="eq:domain-gen-rewritten"} ignores domains which are impossible (i.e. have probability zero) while [\[eq:domain-gen\]](#eq:domain-gen){reference-type="eqref" reference="eq:domain-gen"} does not, laying the foundation for ignoring domains which are *improbable*.
60
+
61
+ **High-probability generalization.** Although the minimax problem in [\[eq:domain-gen-rewritten\]](#eq:domain-gen-rewritten){reference-type="eqref" reference="eq:domain-gen-rewritten"} explicitly incorporates the distribution $\bbQ$ over environments, this formulation is no less conservative than [\[eq:domain-gen\]](#eq:domain-gen){reference-type="eqref" reference="eq:domain-gen"}. Indeed, in many cases, [\[eq:domain-gen-rewritten\]](#eq:domain-gen-rewritten){reference-type="eqref" reference="eq:domain-gen-rewritten"} is equivalent to [\[eq:domain-gen\]](#eq:domain-gen){reference-type="eqref" reference="eq:domain-gen"}; see Appendix [2](#app:sup-and-esssup){reference-type="ref" reference="app:sup-and-esssup"} for details. Therefore, rather than considering the worst-case problem in [\[eq:domain-gen-rewritten\]](#eq:domain-gen-rewritten){reference-type="eqref" reference="eq:domain-gen-rewritten"}, we propose the following generalization of [\[eq:domain-gen-rewritten\]](#eq:domain-gen-rewritten){reference-type="eqref" reference="eq:domain-gen-rewritten"} which requires that predictors generalize with probability $\alpha$ rather than in the worst-case: $$\begin{equation}
62
+ \begin{alignedat}{2}
63
+ \label{eq:prob_gen}
64
+ &\min_{f\in\calF,\, \thres \in \bbR} &&\thres \qquad \subjto \Pr_{e\sim\bbQ} \left\{\calR^e(f) \leq \thres \right\} \geq \alpha
65
+ \end{alignedat}
66
+ \end{equation}$$ The optimization problem in [\[eq:prob_gen\]](#eq:prob_gen){reference-type="eqref" reference="eq:prob_gen"} formally defines what we mean by *Probable Domain Generalization*. In particular, we say that *a predictor $f$ generalizes with risk $t$ at level $\alpha$* if $f$ has risk at most $t$ with probability at least $\alpha$ over domains sampled from $\bbQ$. In this way, the conservativeness parameter $\alpha$ controls the strictness of generalizing to unseen domains.
67
+
68
+ **A distribution over risks.** The optimization problem presented in [\[eq:prob_gen\]](#eq:prob_gen){reference-type="eqref" reference="eq:prob_gen"} offers a principled formulation for generalizing to unseen distributional shifts governed by $\bbQ$. However, $\bbQ$ is often unknown in practice and its support $\Eall$ may be high-dimensional or challenging to define [@robey2021modelbased]. While many previous works have made progress by limiting the scope of possible shift types over domains [@eastwood2021source; @robey2021modelbased; @sagawa2019distributionally], in practice, such structural assumptions are often difficult to justify and impossible to test. For this reason, we start our exposition of QRM by offering an alternative view of [\[eq:prob_gen\]](#eq:prob_gen){reference-type="eqref" reference="eq:prob_gen"} which elucidates how a predictor's *risk distribution* plays a central role in achieving probable domain generalization.
69
+
70
+ To begin, note that for each $f\in\calF$, the distribution over domains $\bbQ$ naturally induces[^3] a distribution $\bbT_f$ over the risks in each domain $\calR^e(f)$. In this way, rather than considering the randomness of $\bbQ$ in the often-unknown and (potentially) high-dimensional space of possible shifts ([2](#fig:fig1:q-dist){reference-type="ref+label" reference="fig:fig1:q-dist"}), one can consider it in the real-valued space of risks ([3](#fig:fig1:risk){reference-type="ref+label" reference="fig:fig1:risk"}). This is analogous to statistical learning theory, where the analysis of convergence of empirical risk minimizers (i.e., of functions) is substituted by that of a weaker form of convergence, namely that of scalar risk functionals---a crucial step for VC theory [@vapnik1999nature]. From this perspective, the statistics of $\bbT_f$ can be thought of as capturing the sensitivity of $f$ to different environmental shifts, summarizing the effect of different intervention types, strengths, and frequencies. To this end, [\[eq:prob_gen\]](#eq:prob_gen){reference-type="eqref" reference="eq:prob_gen"} can be equivalently rewritten in terms of the risk distribution $\bbT_f$ as follows:
71
+
72
+ ::: mdframed
73
+ $$\begin{equation}
74
+ \tag{QRM}
75
+ \min_{f\in\calF} \: F^{-1}_{\bbT_f}(\alpha) \quad\text{where}\quad F^{-1}_{\bbT_f}(\alpha) := \inf \Big\{t\in\R : \Pr_{R\sim\bbT_f} \left\{ R \leq t\right\} \geq \alpha \Big\}. \label{eq:qrm}
76
+ \end{equation}$$
77
+ :::
78
+
79
+ Here, $F^{-1}_{\bbT_f}(\alpha)$ denotes the inverse CDF (or quantile[^4]) function of the risk distribution $\bbT_f$. By means of this reformulation, we elucidate how solving [\[eq:qrm\]](#eq:qrm){reference-type="eqref" reference="eq:qrm"} amounts to finding a predictor with minimal $\alpha$-quantile risk. That is, [\[eq:qrm\]](#eq:qrm){reference-type="eqref" reference="eq:qrm"} requires that a predictor $f$ satisfy the probabilistic constraint for at least an $\alpha$-fraction of the risks $R\sim\mathbb{T}_f$, or, equivalently, for an $\alpha$-fraction of the environments $e\sim\mathbb{Q}$. In this way, $\alpha$ can be used to interpolate between typical ($\alpha\!=\!0.5$, median) and worst-case ($\alpha\!=\!1$) problems in an interpretable manner. Moreover, if the mean and median of $\bbT_f$ coincide, $\alpha\!=\!0.5$ gives an average-case problem, with [\[eq:qrm\]](#eq:qrm){reference-type="eqref" reference="eq:qrm"} recovering several notable objectives for DG as special cases.
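A minimal sketch of this quantile function for a finite sample of risks (an illustrative helper, not the paper's implementation):

```python
import numpy as np

def empirical_quantile(risks, alpha):
    """F^{-1}(alpha) of the empirical risk distribution: the smallest
    observed risk t with P(R <= t) >= alpha."""
    r = np.sort(np.asarray(risks, dtype=float))
    m = len(r)
    k = int(np.ceil(alpha * m)) - 1  # smallest k with (k + 1) / m >= alpha
    return float(r[max(k, 0)])
```

With $\alpha = 0.5$ this returns the median risk and with $\alpha = 1$ the maximum, matching the interpolation described above.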
80
+
81
+ ::: proposition
82
+ []{#prop:average-case-equiv label="prop:average-case-equiv"} For $\alpha\!\! =\!\! 1$, [\[eq:qrm\]](#eq:qrm){reference-type="eqref" reference="eq:qrm"} is equivalent to the worst-case problem of [\[eq:domain-gen-rewritten\]](#eq:domain-gen-rewritten){reference-type="eqref" reference="eq:domain-gen-rewritten"}. For $\alpha\! =\! 0.5$, it is equivalent to the average-case problem of [\[eq:domain-gen-average-case\]](#eq:domain-gen-average-case){reference-type="eqref" reference="eq:domain-gen-average-case"} if the mean and median of $\bbT_f$ coincide $\forall f\! \in\! \calF$: $$\begin{align}
83
+ \label{eq:dg-average-case}
84
+ \textstyle
85
+ \min_{f\in\calF} \: \E_{R\sim\bbT_f} R =
86
+ \min_{f\in\calF} \: \E_{e\sim\bbQ} \calR^e(f)
87
+ \end{align}$$
88
+ :::
89
+
90
+ **Connection to DRO.** While fundamentally different in terms of objective and generalization capabilities (see [4](#sec:qrm_algs){reference-type="ref+label" reference="sec:qrm_algs"}), we draw connections between QRM and distributionally robust optimization (DRO) in [6](#app:dro){reference-type="ref+label" reference="app:dro"} by considering an alternative problem which optimizes the *superquantile*.
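As a point of reference, a simple discrete sketch of the superquantile (CVaR) — here approximated as the mean of the risks at or above the empirical $\alpha$-quantile (an illustrative helper, not the paper's code):

```python
import numpy as np

def superquantile(risks, alpha):
    """CVaR_alpha sketch: average of the risks in the upper (1 - alpha)
    tail, i.e. those at or above the empirical alpha-quantile."""
    r = np.sort(np.asarray(risks, dtype=float))
    k = int(np.ceil(alpha * len(r))) - 1  # index of the alpha-quantile
    tail = r[max(k, 0):]
    return float(tail.mean())
```

Since the superquantile averages over the tail beyond the quantile, it is never smaller than the quantile itself and hence yields a more conservative objective.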
91
+
92
+ # Method
93
+
94
+ We now introduce the *Empirical QRM* (EQRM) algorithm for solving [\[eq:qrm\]](#eq:qrm){reference-type="eqref" reference="eq:qrm"} in practice, akin to Empirical Risk Minimization (ERM) solving the Risk Minimization (RM) problem [@Vapnik98].
95
+
96
+ In practice, given a predictor $f$ and its empirical risks $\hat{\calR}^{e_1}(f), \dots, \hat{\calR}^{e_m}(f)$ on the $m$ training domains, we must form an *estimated* risk distribution $\widehat{\bbT}_f$. In general, given no prior knowledge about the form of $\bbT_f$ (e.g. Gaussian), we use *kernel density estimation* (KDE, [@rosenblatt1956remarks; @parzen1962estimation]) with Gaussian kernels and either the Gaussian-optimal rule [@silverman1986density] or Silverman's rule-of-thumb [@silverman1986density] for bandwidth selection. [3](#fig:fig1:risk){reference-type="ref+Label" reference="fig:fig1:risk"} depicts the PDF and CDF for 10 training risks when using Silverman's rule-of-thumb. Armed with a predictor's estimated risk distribution $\widehat{\bbT}_f$, we can approximately solve [\[eq:qrm\]](#eq:qrm){reference-type="eqref" reference="eq:qrm"} using the following empirical analogue: $$\begin{equation}
97
+ %
98
+ \begin{alignedat}{2}%
99
+ \label{eq:qrm2}
100
+ %
101
+ \min_{f\in\calF}\ F^{-1}_{\widehat{\bbT}_f}(\alpha)
102
+ \end{alignedat}
103
+ \end{equation}$$ Note that [\[eq:qrm2\]](#eq:qrm2){reference-type="eqref" reference="eq:qrm2"} depends only on known quantities so we can compute and minimize it in practice, as detailed in [\[alg:eqrm\]](#alg:eqrm){reference-type="ref+label" reference="alg:eqrm"} of [5.1](#sec:impl_details:algs){reference-type="ref+label" reference="sec:impl_details:algs"}.
104
+
105
+ ::: wrapfigure
106
+ r0.29 ![image](figs/kde_smoothing.pdf){width="\\linewidth"}
107
+ :::
108
+
109
+ **Smoothing permits risk extrapolation.** [\[fig:kde-smoothing\]](#fig:kde-smoothing){reference-type="ref+Label" reference="fig:kde-smoothing"} compares the KDE-smoothed CDF (black) to the unsmoothed empirical CDF (gray). As shown, the latter places zero probability mass on risks greater than our largest training risk, thus implicitly assuming that test risks cannot be larger than training risks. In contrast, the KDE-smoothed CDF permits "risk extrapolation" [@krueger21rex] since its right tail extends beyond our largest training risk, with the estimated $\alpha$-quantile risk going to infinity as $\alpha\! \to\! 1$ (when kernels have full support). Note that different bandwidth-selection methods encode different assumptions about right-tail heaviness and thus about projected OOD risk. In [4.3](#sec:qrm_algs:causality){reference-type="ref+label" reference="sec:qrm_algs:causality"}, we discuss how, as $\alpha\! \to\! 1$, this KDE-smoothing allows EQRM to learn predictors with invariant risk over domains. In [3](#app:kde){reference-type="ref+label" reference="app:kde"}, we discuss different bandwidth-selection methods for EQRM.
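A minimal sketch of such a KDE-smoothed quantile (Gaussian kernels, Silverman's rule-of-thumb, and inversion of the monotone CDF by bisection; illustrative code, not the paper's implementation):

```python
import numpy as np
from math import erf, sqrt

def silverman_bandwidth(risks):
    """Silverman's rule-of-thumb for Gaussian kernels (sketch)."""
    r = np.asarray(risks, dtype=float)
    iqr = np.percentile(r, 75) - np.percentile(r, 25)
    return 0.9 * min(r.std(ddof=1), iqr / 1.34) * len(r) ** (-0.2)

def kde_cdf(t, risks, bw):
    """CDF of the KDE-smoothed risk distribution: an equal-weight
    mixture of Gaussians centred on the training risks."""
    return sum(0.5 * (1.0 + erf((t - ri) / (bw * sqrt(2.0))))
               for ri in risks) / len(risks)

def kde_quantile(risks, alpha, lo=-5.0, hi=5.0, iters=80):
    """Invert the (monotone) smoothed CDF by bisection."""
    bw = silverman_bandwidth(risks)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if kde_cdf(mid, risks, bw) < alpha else (lo, mid)
    return 0.5 * (lo + hi)
```

Because the Gaussian kernels have full support, the estimated quantile exceeds the largest training risk for $\alpha$ close to 1, whereas the unsmoothed empirical CDF can never do so.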
110
+
111
+ We now give a simplified version of our main generalization bound---[\[thm:generalization\]](#thm:generalization){reference-type="ref+label" reference="thm:generalization"}---which states that, given sufficiently many domains and samples, the empirical $\alpha$-quantile risk is a good estimate of the population $\alpha$-quantile risk. In contrast to previous results for DG, we bound the *proportion of test domains* for which a predictor performs well, rather than the average error [@blanchard2011generalizing; @blanchard2021domain], and make no assumptions about the shift type, e.g. covariate shift [@muandet2013domain]. The full version, stated and proved in Appendix [4](#app:gen_bounds){reference-type="ref" reference="app:gen_bounds"}, provides specific finite-sample bounds on $\epsilon_1$ and $\epsilon_2$ below, depending on the hypothesis class $\calF$, the empirical estimator $F^{-1}_{\widehat{\bbT}_f}(\alpha)$, and the assumptions on the possible risk profiles of hypotheses $f \in \calF$.
112
+
113
+ ::: theorem
114
+ []{#thm:simplified-gen-bound label="thm:simplified-gen-bound"} Given $m$ domains and $n$ samples in each, then with high probability over the training data, $$\begin{equation}
115
+ \sup_{f \in \calF} \left|
116
+ F^{-1}_{\bbT_f}(\alpha - \epsilon_2) -
117
+ F^{-1}_{\widehat{\bbT}_f}(\alpha)
118
+ \right| \leq \epsilon_1,
119
+ \label{ineq:simplified_generalization_bound}
120
+ \end{equation}$$ where $\epsilon_1 \to 0$ as $n \to \infty$ and $\epsilon_2 \to 0$ as $m \to \infty$.
121
+ :::
122
+
123
+ While many domains are required for this bound to be tight, i.e. for the empirical $\alpha$-quantile to *precisely* estimate the true quantile, our empirical results in [6](#sec:exps){reference-type="ref+label" reference="sec:exps"} demonstrate that EQRM performs well in practice given only a few domains. In such settings, $\alpha$ still controls conservativeness, but with a less precise interpretation.
124
+
125
+ We now prove that EQRM can recover the causal predictor in two parts. First, we show that, as $\alpha \to 1$, EQRM learns a predictor with minimal, invariant risk over domains. For Gaussian estimators of the risk distribution $\bbT_f$, some intuition can be gained from [\[eq:qrm-gaussian\]](#eq:qrm-gaussian){reference-type="ref+label" reference="eq:qrm-gaussian"} of [1.2.1](#app:causality:discovery:gaussian){reference-type="ref+label" reference="app:causality:discovery:gaussian"}, noting that $\alpha \to 1$ puts increasing weight on the sample standard deviation of risks over domains $\hat{\sigma}_f$, eventually forcing it to zero. For kernel density estimators, a similar intuition applies so long as the bandwidth has a certain dependence on $\hat{\sigma}_f$, as detailed in [1.2.2](#app:causality:discovery:kde){reference-type="ref+label" reference="app:causality:discovery:kde"}. Second, we show that learning such a *minimal invariant-risk predictor* is sufficient to recover the causal predictor under weaker assumptions than prior work, namely @peters2016causal and @krueger21rex. Together, these two parts provide the conditions under which EQRM successfully performs "causal recovery", i.e., correctly recovers the true causal coefficients in a linear causal model of the data.
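For a Gaussian fit $\calN(\hat{\mu}_f, \hat{\sigma}_f^2)$ to the risks, the $\alpha$-quantile objective has the closed form $\hat{\mu}_f + \Phi^{-1}(\alpha)\,\hat{\sigma}_f$, which makes this growing weight on $\hat{\sigma}_f$ explicit. A small sketch with illustrative risk values (not numbers from the paper):

```python
from statistics import NormalDist, mean, stdev

def gaussian_quantile_objective(risks, alpha):
    """alpha-quantile of a Gaussian fitted to the training risks:
    mu_hat + Phi^{-1}(alpha) * sigma_hat (sample std over domains)."""
    return mean(risks) + NormalDist().inv_cdf(alpha) * stdev(risks)

# A lower-mean but variable predictor vs. an invariant-risk predictor.
variable_risks = [0.2, 0.4]     # mean 0.3, nonzero spread over domains
invariant_risks = [0.35, 0.35]  # equal risk in every domain
```

At $\alpha = 0.5$ the objective is simply the mean, preferring the lower-mean but variable predictor; for $\alpha$ near 1 the $\hat{\sigma}_f$ term dominates and the invariant-risk predictor is preferred.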
126
+
127
+ ::: definition
128
+ A predictor $f$ is said to be an *invariant-risk predictor* if its risk is equal almost surely across domains (i.e., $\operatorname{Var}_{e \sim \bbQ}[\calR^e(f)] = 0$). A predictor is said to be a *minimal invariant-risk predictor* if it achieves the minimal possible risk across all possible invariant-risk predictors.
129
+ :::
130
+
131
+ ::: proposition
132
+ Assume: (i) $\calF$ contains an invariant-risk predictor with finite training risks; and (ii) no arbitrarily-negative training risks. Then, as $\alpha\! \to\! 1$, Gaussian and kernel EQRM predictors (the latter with certain bandwidth-selection methods) converge to minimal invariant-risk predictors. []{#prop:equalize_main label="prop:equalize_main"}
133
+ :::
134
+
135
+ [\[prop:Gaussian_QRM_invariant,prop:KDE_QRM_invariant\]](#prop:Gaussian_QRM_invariant,prop:KDE_QRM_invariant){reference-type="ref+label" reference="prop:Gaussian_QRM_invariant,prop:KDE_QRM_invariant"} are stated and proved in [\[app:causality:discovery:gaussian,app:causality:discovery:kde\]](#app:causality:discovery:gaussian,app:causality:discovery:kde){reference-type="ref+label" reference="app:causality:discovery:gaussian,app:causality:discovery:kde"} respectively. In addition, for the special case of Gaussian estimators of $\bbT_f$, [1.2.1](#app:causality:discovery:gaussian){reference-type="ref+label" reference="app:causality:discovery:gaussian"} relates our $\alpha$ parameter to the $\beta$ parameter of VREx [@krueger21rex Eq. 8]. We next specify the conditions under which learning such a minimal invariant-risk predictor is sufficient to recover the causal predictor.
136
+
137
+ ::: theorem
138
+ []{#thm:causal_predictor label="thm:causal_predictor"} Assume that: (i) $Y$ is generated from a linear SEM, $Y = \beta^\intercal X + N$, with $X$ observed and coefficients $\beta \in \bbR^d$; (ii) $\calF$ is the class of linear predictors, indexed by $\hat\beta \in \bbR^d$; (iii) the loss $\ell$ is squared-error; (iv) the risk $\bbE[(Y - \beta^\intercal X)^2]$ of the causal predictor $\beta$ is invariant across domains; and (v) the system of equations $$\begin{align}
139
+ \notag
140
+ 0 \geq
141
+ & x^\intercal \text{\emph{Cov}}_{X \sim e_1}(X, X) x
142
+ + 2 x^\intercal \text{\emph{Cov}}_{N,X \sim e_1} (X, N) \\
143
+ \notag
144
+ = & \cdots \\
145
+ \label{eq:causal_recovery_equations_main}
146
+ = & x^\intercal \text{\emph{Cov}}_{X \sim e_m}(X, X) x
147
+ + 2 x^\intercal \text{\emph{Cov}}_{N,X \sim e_m} (X, N)
148
+ \end{align}$$ has the unique solution $x = 0$. If $\hat\beta$ is a minimal invariant-risk predictor, then $\hat\beta=\beta$.
149
+ :::
150
+
151
+ **Assumptions (i--iii).** The assumptions that $Y$ is drawn from a linear structural equation model (SEM) and that the loss is squared-error, while restrictive, are needed for all comparable causal recovery results [@peters2016causal; @krueger21rex]. In fact, these assumptions are weaker than both @peters2016causal [Thm. 2] (assume a linear *Gaussian* SEM for $X$ *and* $Y$) and @krueger21rex [Thm. 1] (assume a linear SEM for $X$ *and* $Y$).
152
+
153
+ **Assumption (iv).** The assumption that the risk of the causal predictor is invariant across domains, often called *domain homoskedasticity* [@krueger21rex], is necessary for any method inferring causality from the *invariance of risks* across domains. For methods based on the *invariance of functions*, namely the conditional mean $\E[Y|\PA(Y)]$ [@arjovsky2020invariant; @yin2021optimization], this assumption is not required. [7.1.2](#sec:additional_exps:linear_regr:risks_vs_functions){reference-type="ref+label" reference="sec:additional_exps:linear_regr:risks_vs_functions"} compares methods based on invariant risks to those based on invariant functions.
154
+
155
+ **Assumption (v).** In contrast to both @peters2016causal and @krueger21rex, we do not require specific types of interventions on the covariates. Instead, we require that a more general condition be satisfied, namely that the system of $d$-variate quadratic equations in [\[eq:causal_recovery_equations_main\]](#eq:causal_recovery_equations_main){reference-type="eqref" reference="eq:causal_recovery_equations_main"} has a unique solution. Intuitively, $\text{Cov}(X, X)$ captures how correlated the covariates are and ensures they are sufficiently uncorrelated to distinguish each of their influences on $Y$, while $\text{Cov}(X, N)$ captures how correlated descendant covariates are with $Y$ (via $N$). Together, these terms capture the idea that *predicting $Y$ from the causal covariates must result in the minimal invariant-risk*: the first inequality ensures the risk is *minimal* and the subsequent $m - 1$ equalities that it is *invariant*. While this generality comes at the cost of abstraction, [1.2.3](#app:causal_recovery){reference-type="ref+label" reference="app:causal_recovery"} provides several concrete examples with different types of interventions to aid understanding and illustrate how this condition generalizes existing causal-recovery results based on invariant risks [@peters2016causal; @krueger21rex]. [1.2.3](#app:causal_recovery){reference-type="ref+Label" reference="app:causal_recovery"} also provides a proof of [\[thm:causal_predictor\]](#thm:causal_predictor){reference-type="ref+label" reference="thm:causal_predictor"} and further discussion.
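The following toy simulation (a hypothetical linear SEM, not the paper's exact setup) illustrates the premise behind these results: the causal predictor's risk is invariant across interventions, while a predictor regressing on a descendant covariate has domain-varying risk:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_domain(n, x1_scale, spurious):
    """Hypothetical linear SEM: X1 -> Y -> X2, with per-domain
    interventions on X1's scale and on the Y -> X2 mechanism."""
    x1 = rng.normal(0.0, x1_scale, n)
    noise = rng.normal(0.0, 1.0, n)      # N, invariant across domains
    y = 1.0 * x1 + noise                 # causal mechanism: beta = (1, 0)
    x2 = spurious * y + rng.normal(0.0, 0.1, n)  # descendant covariate
    return np.stack([x1, x2], axis=1), y

def risk(beta_hat, X, y):
    """Squared-error risk of a linear predictor."""
    return float(np.mean((y - X @ beta_hat) ** 2))

domains = [sample_domain(100_000, s, c)
           for s, c in [(1.0, 0.9), (2.0, 0.5), (3.0, 0.1)]]
causal = np.array([1.0, 0.0])      # uses only the cause X1
anticausal = np.array([0.0, 1.0])  # regresses on the descendant X2

causal_risks = [risk(causal, X, y) for X, y in domains]
anticausal_risks = [risk(anticausal, X, y) for X, y in domains]
```

The causal predictor's risk stays at $\text{Var}(N) = 1$ in every domain, while the anti-causal predictor's risk varies sharply with the interventions; equalizing risks as $\alpha \to 1$ therefore favours the causal solution.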
2209.10222/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2209.10222/paper_text/intro_method.md ADDED
@@ -0,0 +1,105 @@
1
+ # Introduction
2
+
3
+ Fairness in machine learning (ML) has become a critical concern. Owing to biases in data collection, model predictions are often spuriously correlated with demographic attributes, which are thus undesirably incorporated into the decision-making process of machine learning models. For example, some abusive language detection systems have been found to classify texts containing mere mentions of certain minority groups, *e.g.,* homosexual groups, as abusive content, even though the texts themselves are not abusive at all [1, 2]. Despite recent advances in fairness-promoting learning methods [3–7], existing mainstream approaches mostly require retraining or finetuning the entire model towards an extra fairness objective. However, this is often infeasible in practice, particularly for well-trained large-scale models, due to the huge computation and storage costs. In addition, for machine learning models deployed as a service, retraining is hindered by limited access to the model parameters.
4
+
5
+ <sup>∗</sup>Equal contribution
8
+
9
+ Recently, model reprogramming has emerged as an alternative to model fine-tuning. In particular, model reprogramming keeps the pre-trained model fixed and instead modifies its inputs to re-purpose the model toward different objectives. For example, it has been shown that a well-crafted input perturbation can re-program an ImageNet classifier to solve the task of counting squares in an image [8, 9]. It has also been shown that by learning task-specific embedding prompts concatenated to the inputs, pre-trained language models can achieve better performance than full-parameter tuning on natural language understanding tasks [10–12]. Compared with fine-tuning methods, model reprogramming enjoys lower cost, better scalability, and requires less access to the model parameters. This raises our research questions: *Can model reprogramming techniques be applied to fairness objectives? If so, why and how would it work?*
10
+
11
+ In this paper, we revisit model reprogramming and propose a novel generic fairness learning paradigm, called FAIRREPROGRAM. In particular, FAIRREPROGRAM perturbs the input by appending a global constant vector/feature, called the *fairness trigger*, which is optimized toward the fairness objective under a min-max framework. FAIRREPROGRAM is a generic framework that works across tasks and domains. We further introduce an information-theoretic framework that explains why and under what conditions fairness goals can be achieved using a constant fairness trigger. We show theoretically and empirically that the fairness trigger can effectively obscure demographic biases in the output predictions of fixed ML models by providing false demographic information that hinders the model from using the correct demographic information to make predictions.
12
+
13
+ We perform extensive experiments across various NLP and CV datasets with in-the-wild biases. The results show that FAIRREPROGRAM consistently achieves better fairness improvements than retraining-based methods under two widely used fairness notions, with far less trade-off in accuracy. For example, with comparable accuracy, our method outperforms the retraining-based baseline with 10.5% and 36.5% lower bias scores under the two fairness criteria on the CelebA dataset, with hair color prediction as the task and gender as the demographic attribute. In addition, our method demonstrates strong transferability and interpretability. Our theoretical analysis and empirical findings provide useful insights toward more practical, scalable, and flexible fairness learning paradigms.
14
+
15
+ # Method
16
+
17
+ Consider a classification task, where $X$ represents the input feature and $Y$ represents the output label. In addition, there exist some sensitive attributes or demographic groups, $Z$, that may be spuriously correlated with $Y$. There is a pre-trained classifier, $f^*(\cdot)$, that predicts $Y$ from $X$, *i.e.* $\hat{Y} = f^*(X)$. The weights of the classifier are considered fixed (hence the superscript $*$). Unfortunately, due to the spurious correlation between $Z$ and $Y$, the classifier may be biased against certain demographics.
18
+
19
+ Our goal is to improve the fairness of the classifier by modifying the input X, rather than modifying the classifier's fixed weights. In particular, we aim to achieve either of the following fairness criteria.
20
+
21
+ **Equalized Odds:** $\hat{Y} \perp Z \mid Y$, or **Demographic Parity:** $\hat{Y} \perp Z$, (1)
24
+
25
+ where $\perp$ denotes independence. The following two subsections explain how to modify the input and how to design the optimization objective, respectively.
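In practice the two criteria in Eq. (1) are commonly quantified by group gaps over binary predictions. A minimal numpy sketch (the function names and the toy arrays are our own, not the paper's):

```python
import numpy as np

def demographic_parity_gap(y_hat, z):
    """Max difference in positive-prediction rate across groups (0 = parity)."""
    rates = [y_hat[z == g].mean() for g in np.unique(z)]
    return max(rates) - min(rates)

def equalized_odds_gap(y_hat, y, z):
    """Max over labels y of the group gap in P(Y_hat = 1 | Y = y)."""
    gaps = []
    for label in np.unique(y):
        mask = y == label
        rates = [y_hat[mask & (z == g)].mean() for g in np.unique(z)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)

# A predictor whose outputs do not depend on z has zero gaps.
y_hat = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y     = np.array([1, 0, 1, 0, 1, 0, 1, 0])
z     = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_hat, z))   # 0.0
print(equalized_odds_gap(y_hat, y, z))    # 0.0
```

Either gap equals zero exactly when the corresponding independence in Eq. (1) holds on the (binary) sample.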
26
+
27
+ Input modification primarily involves appending a *fairness trigger* to the input. Formally, the input modification takes the following generic form:
28
+
29
+ $$\tilde{X} = m(X; \theta, \delta) = [\delta, g(X; \theta)],$$
30
+ (2)
31
+
32
+ where $\tilde{X}$ denotes the modified input and $[\cdot]$ denotes vector concatenation. As can be observed, the input modification consists of two steps. First, $X$ is fed through a transformation function $g(\cdot; \theta)$, where $\theta$ represents the hyper-parameters of the transformation function. The actual form of $g(\cdot; \theta)$ depends on the application and modality, but a general requirement is that $g(\cdot; \theta)$ should largely retain the information necessary for classification. The second step is to append a fairness trigger, $\delta$, to the input, which is a vector that can be optimized over. It is important to note that $\delta$ is a *constant*: the same trigger is appended to every input. Although this may not seem intuitive, we will soon show that a constant trigger suffices to achieve fair predictions across all inputs.
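A minimal numpy sketch of Eq. (2), with $g$ taken as the identity and all names and sizes our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def modify_input(x, delta, g=lambda v: v):
    """Eq. (2): X_tilde = [delta, g(X)] -- prepend the shared trigger."""
    return np.concatenate([delta, g(x)])

delta = rng.normal(size=4)   # trainable fairness trigger (a constant)
x1 = rng.normal(size=16)
x2 = rng.normal(size=16)

x1_mod = modify_input(x1, delta)
x2_mod = modify_input(x2, delta)
print(x1_mod.shape)                               # (20,)
print(bool(np.allclose(x1_mod[:4], x2_mod[:4])))  # True: same trigger on every input
```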
33
+
34
+ Below are specific forms of transformations (Eq. (2)) we use.
35
+
36
+ **Text Classification** In text classification, X represents a sequence of input token embeddings. To modify the input, we simply append a fixed number of embeddings after the input text. In this case, $g(\cdot; \theta)$ is the identity mapping, and $\delta$ corresponds to the appended embeddings.
37
+
38
+ **Image Classification** In image classification, $X$ represents the (vectorized) input image. Unlike text classification, where the input can have a variable length, the length of the input to an image classification network is fixed. We thus apply the following two approaches to append the trigger, as shown in Fig. 1. The first approach, called the *patch approach*, removes a patch from the original image and places a trigger of the same size at the patch location (as shown in Fig. 1(a)). In this case, $g(\cdot;\theta)$ is a function that removes the patch and retains the rest, with $\theta$ representing the patch location; $\delta$ represents the trigger feature that replaces the patch. The second approach, called the *border approach*, shrinks the image to a
39
+
40
+ ![](_page_3_Picture_2.jpeg)
41
+
42
+ ![](_page_3_Picture_3.jpeg)
43
+
44
+ (a) Patch trigger (b) Border trigger
45
+
46
+ Figure 1: Demonstration of the border and patch trigger applied on an image from CelebA [56].
47
+
48
+ smaller image, and then appends the trigger at the border (as shown in Fig. 1(b)). In this case, $g(\cdot; \theta)$ is a function that shrinks the image, and $\delta$ represents the trigger feature at the border.
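A minimal numpy sketch of the border approach; the nearest-neighbour shrink, the function name, and the sizes are our own stand-ins, not the paper's implementation:

```python
import numpy as np

def apply_border_trigger(img, delta, border=4):
    """Shrink the image and place the trainable trigger in the border region."""
    h, w, c = img.shape
    # g(.; theta): naive nearest-neighbour shrink to leave room for the border
    sh, sw = h - 2 * border, w - 2 * border
    rows = np.arange(sh) * h // sh
    cols = np.arange(sw) * w // sw
    small = img[rows][:, cols]
    out = delta.copy()  # delta has the full (h, w, c) shape; its centre is overwritten
    out[border:h - border, border:w - border] = small
    return out

img = np.ones((32, 32, 3))            # toy "image"
delta = np.zeros((32, 32, 3))         # trainable border trigger, shared across images
out = apply_border_trigger(img, delta)
print(out[0, 0, 0], out[16, 16, 0])   # 0.0 1.0  (border pixel vs. image pixel)
```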
49
+
50
+ Our optimization objective is as follows
51
+
52
+ $$\min_{\delta, \theta} \mathcal{L}_{util}(\mathcal{D}_{tune}, f^* \circ m) + \lambda \mathcal{L}_{fair}(\mathcal{D}_{tune}, f^* \circ m), \tag{3}$$
53
+
54
+ where $m = m(\cdot; \theta, \delta)$ represents the input modification function as in Eq. (2); $\circ$ represents function composition; $\mathcal{D}_{tune}$ represents the dataset used to train the fairness trigger. Note that this is different from the dataset on which the classifier, $f^*$ , is pre-trained.
55
+
56
+ The first loss term, $\mathcal{L}_{util}$ , is the utility loss function of the task. For classification tasks, $\mathcal{L}_{util}$ is usually the cross-entropy loss, *i.e.*,
57
+
58
+ $$\mathcal{L}_{util}(\mathcal{D}_{tune}, f^* \circ m) = \mathbb{E}_{X, Y \sim \mathcal{D}_{tune}}[CE(Y, f^*(m(X)))], \tag{4}$$
59
+
60
+ where $CE(\cdot, \cdot)$ denotes the cross-entropy loss.
61
+
62
+ The second loss term, $\mathcal{L}_{fair}$ , encourages the prediction to follow the fairness criteria in Eq. (1). According to Eq. (1), $\mathcal{L}_{fair}$ should measure how much information about Z is in $\hat{Y}$ . To measure this, we introduce another network, called the discriminator, $d(\cdot;\phi)$ , where $\phi$ represents its parameters. If the equalized odds criterion is applied, then $d(\cdot;\phi)$ should predict Z from $\hat{Y}$ and Y; if the demographic parity criterion is applied, then the input to $d(\cdot;\phi)$ is just $\hat{Y}$ . In the following, we focus on the equalized odds criterion for conciseness. The information about Z can then be measured by maximizing the *negative* cross-entropy loss for the prediction of Z over the discriminator parameters, *i.e.*,
63
+
64
+ $$\mathcal{L}_{fair}(\mathcal{D}_{tune}, f^* \circ m) = \max_{\phi} \mathbb{E}_{\boldsymbol{X}, Y, Z \sim \mathcal{D}_{tune}} [-\text{CE}(Z, d(f^*(m(\boldsymbol{X})), Y; \phi))]. \tag{5}$$
65
+
66
+ By plugging Eqs. (4) and (5) into (3), we can see that the entire optimization objective becomes a min-max framework, where the discriminator tries to improve its prediction of Z while the fairness trigger tries to make the prediction worse. As shown in [33], when the discriminator cannot predict Z better than chance, the aforementioned fairness criteria can be achieved.
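The alternating updates implied by Eqs. (3)–(5) can be sketched on a toy problem. Everything below is our own simplification, illustrative of the update structure only: the trigger is reduced to an additive scalar on the frozen model's logit, the discriminator is a one-parameter logistic model, and both players take plain gradient steps:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-np.clip(t, -60.0, 60.0)))

# Toy data: the frozen model's logit leaks the group z.
n = 2000
z = rng.integers(0, 2, n).astype(float)          # demographic group
y = rng.integers(0, 2, n).astype(float)          # true label, independent of z here
score = 2.0 * y - 1.0 + 1.5 * (2.0 * z - 1.0)    # stand-in for the frozen f*'s logit

delta = 0.0                                      # scalar stand-in for the trigger
a, b = 0.1, 0.0                                  # discriminator: p(z=1) = sigmoid(a*logit + b)
lr, lam = 0.05, 1.0

for _ in range(500):
    logit = score + delta                        # "modified input" through the frozen model
    p_z = sigmoid(a * logit + b)
    # Inner max over phi: one gradient step reducing the discriminator's CE on z.
    a -= lr * np.mean((p_z - z) * logit)
    b -= lr * np.mean(p_z - z)
    # Outer min over delta: utility gradient plus lam times the fairness gradient
    # (ascending the discriminator's CE, i.e. hiding z).
    p_y = sigmoid(logit)
    grad_util = np.mean(p_y - y)
    grad_fair = -np.mean((p_z - z) * a)
    delta -= lr * (grad_util + lam * grad_fair)

p_z = sigmoid(a * (score + delta) + b)
acc = np.mean((p_z > 0.5) == z)                  # discriminator accuracy on z after training
print(round(float(acc), 2))
```

In the full method the gradients flow through the frozen network into the appended trigger rather than through an additive scalar; this sketch only mirrors the descent-ascent structure of the objective.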
67
+
68
+ It is not immediately straightforward why a *global* trigger can obscure the demographic information for *any* input. In this section, we will propose an information-theoretic framework that illustrates one of the mechanisms through which the trigger can remove the demographic information.
69
+
70
+ Our theoretical framework builds upon the data generation process as shown in Fig. 2(a). Specifically, we assume that X consists of a set of features, i.e. $X = [X_1, \dots, X_T]$ , where T is the total number of features. In text classification, a feature can be a word or a word piece; in image classification, a feature can be specific shapes, colors, patterns, etc. Assume that these features can be divided into two groups. The first group, denoted as $X^{(y)}$ , consists of features that are directly governed by the output label Y; the second group, denoted as $X^{(z)}$ , consists of features that are directly governed by
71
+
72
+ ![](_page_4_Figure_0.jpeg)
73
+
74
+ Figure 2: Illustration of why the fairness trigger works. (a) The data generation process. (b) The information flow from data to the classifier through the sufficient statistics. (c) A fairness trigger strongly indicative of a demographic group can confuse the classifier with a false demographic posterior, thus preventing the classifier from using the correct demographic information.
75
+
76
+ the demographic information Z. Z and Y can be spuriously correlated, *i.e.* there can be common confounders, C, between Z and Y. As a result, both $X^{(y)}$ and $X^{(z)}$ are indicative of Y.
77
+
78
+ To further simplify our theoretical analysis, we consider a bag-of-feature scenario, where each feature in $\boldsymbol{X}^{(y)}$ is drawn from the vocabulary set $\mathcal{X}^{(y)}$ , and each feature in $\boldsymbol{X}^{(z)}$ is drawn from the vocabulary set $\mathcal{X}^{(z)}$ . There should not be any overlap between the two vocabulary sets, *i.e.* $\mathcal{X}^{(y)} \cap \mathcal{X}^{(z)} = \emptyset$ . Otherwise it violates our assumption that demographic-related features are biased features.
79
+
80
+ It can be shown (in Appendix C) that the posterior distributions, $p_Y(\cdot|\mathbf{X}^{(y)})$ and $p_Z(\cdot|\mathbf{X}^{(z)})$ , are the sufficient statistics of $\mathbf{X}^{(y)}$ and $\mathbf{X}^{(z)}$ respectively for inferring Y. In other words, these two posterior distributions summarize all the information about $\mathbf{X}^{(y)}$ and $\mathbf{X}^{(z)}$ that the classifier needs to know to predict Y. Therefore, we assume that the classifier takes the following generic form
81
+
82
+ $$\hat{Y} = f^*(X) = h(p_Y^{tr}(\cdot|X^{(y)}), p_Z^{tr}(\cdot|X^{(z)})). \tag{6}$$
83
+
84
+ Note that we add a superscript, tr, to emphasize that the probability distributions are over the data set where the classifier is trained, because the classifier has never been trained on inputs modified with the fairness trigger. Eq. (6) encompasses many common decision functions. For example, it can be shown (in Appendix C) that the posterior distribution p(Y|X), which is the minimizer of the cross-entropy loss, is a special case of Eq. (6).
85
+
86
+ As illustrated in Fig. 2(b), $p_Y(\cdot|\mathbf{X}^{(y)})$ and $p_Z(\cdot|\mathbf{X}^{(z)})$ provide two sets of information from input features. $p_Y(\cdot|\mathbf{X}^{(y)})$ provides the *unbiased* information, because a desirable fair classifier should rely only upon $p_Y(\cdot|\mathbf{X}^{(y)})$ to make a decision. On the other hand, $p_Z(\cdot|\mathbf{X}^{(z)})$ provides the *biased* information, because it conveys the demographic information. In other words, the fairness goals can be achieved by cutting off the biased information path. Therefore, our research question boils down to: is it possible to cut off the biased information path with a global fairness trigger $\delta$ ?
87
+
88
+ Without loss of generality, assume that $\delta$ consists of only one feature. Consider the case where $\delta$ is a demographic feature, *i.e.* $\delta \in \mathcal{X}^{(z)}$ . In this case, we assume the transformed input as defined in Eq. (2) can also be divided into two groups:
89
+
90
+ $$\tilde{\boldsymbol{X}} = [\tilde{\boldsymbol{X}}^{(y)}, \tilde{\boldsymbol{X}}^{(z)}], \text{ where } \tilde{\boldsymbol{X}}^{(y)} = g(\boldsymbol{X}^{(y)}), \tilde{\boldsymbol{X}}^{(z)} = [\boldsymbol{\delta}, g(\boldsymbol{X}^{(z)})].$$
91
+ (7)
92
+
93
+ The following theorem states our main conclusion:
94
+
95
+ **Theorem 1.** Under the assumptions in Eq. (6) and (7), and some additional regularity conditions<sup>2</sup>, if the fairness trigger $\delta$ is indicative of a certain demographic group z, then
96
+
97
+ $$\lim_{p^{tr}(Z=z|\boldsymbol{X}_{0}^{(z)}=\boldsymbol{\delta})\to 1}MI(\hat{\tilde{Y}},Z|Y)=0, \tag{8}$$
98
+
99
+ where MI means mutual information; $\hat{\tilde{Y}} = f^*(\tilde{X})$ is the classifier's prediction after input is modified.
100
+
101
+ $p^{tr}(Z=z|\boldsymbol{X}_0^{(z)}=\delta) \rightarrow 1$ means that the fairness trigger is very strongly indicative of the demographic group z. Therefore, Thm. 1 essentially states that if the prepended trigger feature is very strongly indicative of a certain demographic group, then equalized odds can be achieved. A formal proof is presented in Appendix C. Here we would like to give an intuitive explanation. When $p^{tr}(Z=z|\boldsymbol{X}_0^{(z)}=\delta) \rightarrow 1$ , it will also happen that $p^{tr}(Z=z|\boldsymbol{X}_0^{(z)}=\tilde{\boldsymbol{X}}_0^{(z)}) \rightarrow 1$ . In other words, the fairness trigger $\delta$
102
+
103
+ <sup>&</sup>lt;sup>2</sup>Formal assumptions stated in the appendix.
104
+
105
+ would overshadow the rest of the demographic features and 'trick' the classifier into believing all the different inputs belong to the same demographic group z. As a result, the second argument in Eq (6) would reduce to a constant (1 for demographic group z and 0 elsewhere), effectively blocking the biased information path, as shown in Fig. 2(c). Note that the premise for the fairness trigger to work is that the classifier has never seen the modified input. Otherwise, the classifier will be able to learn to ignore the constant trigger and still elicit the true demographic information from input.
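The saturation mechanism behind Thm. 1 can be illustrated numerically with a tiny naive-Bayes model of the demographic feature group; the vocabulary, likelihood table, trigger repetition, and function names below are our own toy choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# p(x | z) over a 6-feature demographic vocabulary X^(z), for z in {0, 1}.
p_x_given_z = np.array([[0.40, 0.30, 0.10, 0.10, 0.05, 0.05],   # z = 0
                        [0.05, 0.05, 0.10, 0.10, 0.30, 0.40]])  # z = 1

def posterior_z1(features, prior=0.5):
    """Bag-of-features posterior p(z=1 | X^(z)) -- the 'biased information path'."""
    like0 = (1 - prior) * np.prod(p_x_given_z[0, features])
    like1 = prior * np.prod(p_x_given_z[1, features])
    return like1 / (like0 + like1)

# Draw demographic features for individuals from both groups.
inputs = [rng.choice(6, size=3, p=p_x_given_z[g]) for g in (0, 0, 1, 1)]
before = [posterior_z1(x) for x in inputs]

# Prepend a trigger strongly indicative of z = 1 (feature 5, repeated).
trigger = np.full(10, 5)
after = [posterior_z1(np.concatenate([trigger, x])) for x in inputs]

print(round(max(before) - min(before), 3))  # posteriors vary across inputs
print(round(max(after) - min(after), 6))    # ~0: posterior pinned near 1 for everyone
```

Once the trigger dominates the likelihood, the demographic posterior is essentially constant for every input, which is exactly the "blocked path" of Fig. 2(c).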
2210.08884/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2210.08884/paper_text/intro_method.md ADDED
@@ -0,0 +1,126 @@
1
+ # Introduction
2
+
3
+ Contemporary generative adversarial networks (GANs) show remarkable performance in modeling image distributions and have applications in a wide range of computer vision tasks (image enhancement, editing, image-to-image translation, etc.). However, training modern GANs requires thousands of samples, which limits their applicability to domains represented by a large set of images. The mainstream approach to sidestep this limitation is transfer learning (TL), i.e. fine-tuning the generative model to a domain with few samples, starting from a pretrained source model.
4
+
5
+ The standard approach of GAN TL methods is to fine-tune almost all weights of the pretrained model. This can be reasonable when the target domain is very far from the source one, e.g. when we adapt a generator pretrained on human faces to the domain of animals or buildings. However, there is a wide range of cases where the distance between the data domains is not so large. In particular, the majority of target domains used in prior work are similar to the source one and differ mainly in texture, style, or geometry while keeping the same content, such as faces or outdoor scenes.
6
+
7
+ For such cases it seems redundant to fine-tune all weights of the source generator. It was shown in prior work that after transfer learning of StyleGAN2 to similar domains, some parts of the network barely change. This observation motivates us to find a more efficient and compact parameter space for domain adaptation of GANs.
8
+
9
+ In this paper, we propose a novel domain-modulation operation that reduces the parameter space for fine-tuning StyleGAN2. The idea is to optimize only a single vector $d$ for each target domain. We incorporate this vector into the StyleGAN2 architecture through the modulation operation at each convolution layer. The dimension of the vector $d$ is 6 thousand, which is 5 thousand times smaller than the original weight space of StyleGAN2. We apply this parameterization to the state-of-the-art domain adaptation methods StyleGAN-NADA and MindTheGAP. We show that it has almost the same expressiveness as the full parameterization while being far more lightweight. To further advance the domain adaptation framework of GANs, we propose a new regularization loss that improves the diversity of the fine-tuned generator.
10
+
11
+ Such a considerable reduction in the size of the proposed parameterization motivates us to consider the problem of multi-domain adaptation of GANs, i.e. when the same model can adapt to multiple domains depending on the input query. Previous methods typically tackle this problem by fine-tuning a separate generator for each target domain independently. In contrast, we propose to train a hyper-network that predicts the vector $d$ for StyleGAN2 depending on the target domain. We call this network HyperDomainNet. Such a hyper-network would be impossible to train if we needed to predict all weights of StyleGAN2.
12
+ The immediate benefits of the multi-domain framework are reduced training time and fewer trainable parameters, because instead of fine-tuning $n$ separate generators we train one HyperDomainNet to adapt to $n$ domains simultaneously. Another advantage of this method is that it can generalize to unseen domains if $n$ is sufficiently large, and we observe this effect empirically.
13
+
14
+ We provide extensive experiments to empirically confirm the effectiveness of the proposed parameterization and the regularization loss on a wide range of domains. We illustrate that our parameterization can achieve quality comparable to the full parameterization (i.e. when we optimize all weights). The proposed regularization loss significantly improves the diversity of the fine-tuned generator, which we validate qualitatively and quantitatively. Further, we conduct experiments with the HyperDomainNet and show that it can be successfully trained on a number of target domains simultaneously. We also show that it generalizes to a number of diverse unseen domains.
15
+
16
+ To sum up, our main contributions are
17
+
18
+ - We reduce the number of trainable parameters for domain adaptation of the StyleGAN2 generator by proposing the domain-modulation technique. Instead of fine-tuning all 30 million weights of StyleGAN2 for each new domain, we now train only a 6 thousand-dimensional vector.
19
+ - We introduce a novel regularization loss that considerably improves the diversity of the adapted generator.
20
+ - We propose a HyperDomainNet that predicts the parameterization vector for the input domain and enables multi-domain adaptation of GANs. It shows encouraging generalization results on unseen domains.
21
+
22
+ # Method
23
+
24
+ In this work, we focus on StyleGAN generators in the context of domain adaptation. We consider StyleGAN2 as the base model. As state-of-the-art domain adaptation methods, we use StyleGAN-NADA and MindTheGAP.
25
+
26
+ \paragraph{StyleGAN2}
27
+
28
+ The StyleGAN2 generation process consists of several components. The first is a mapping network $M(z)$ that takes as input random vectors $z \in \mathcal{Z}$ from the initial latent space $\mathcal{Z}$, which is typically normally distributed. It transforms these vectors $z$ into the intermediate latent space $\mathcal{W}$.
29
+
30
+ Each vector $w \in \mathcal{W}$ is further fed into different affine transformations $A(w)$ for each layer of the generator. The output of this part forms StyleSpace $\mathcal{S}$ that consists of channel-wise style parameters $s = A(w)$. The next part of the generation process is the synthesis network $G_{sys}$ that takes as an input the constant tensor $c$ and style parameters $s$ at the corresponding layers and produces the final feature maps at different resolutions $F = G_{sys}(c, s)$. These feature maps move on to the last part which consists of toRGB layers $G_{tRGB}$ that generate the output image $I = G_{tRGB}(F)$.
31
+
32
+ \paragraph{Problem Formulation of Domain Adaptation}
33
+ The problem of domain adaptation of StyleGAN2 can be formulated as follows. We are given a trained generator $G^A$ for the source domain $A$, and a target domain $B$ that is represented by a single image $I_B$ (one-shot adaptation) or by a text description $t_B$ (text-guided adaptation). The aim is to fine-tune the weights $\theta$ of a new generator $G^B_{\theta}$ for the domain $B$ starting from the weights of $G^A$. The optimization process in its general form is
34
+
35
+ \mathcal{L}_B(\theta) = \mathcal{L}(\{G^B_{\theta}(w_i)\}_{i=1}^n, \{G^A(w_i)\}_{i=1}^n, G^B_{\theta}, B, A) \; \rightarrow \; \min_{\theta},
36
+
37
+ where $\mathcal{L}$ is some loss function, $n$ is a batch size, $w_1, \dots, w_n$ are random latent codes, $\{G^B_{\theta}(w_i)\}_{i=1}^n$ and $\{G^A(w_i)\}_{i=1}^n$ are batches of images sampled by $G^B_{\theta}$ and $G^A$ generators, respectively, and $B, A$ are domains that are represented by images or text descriptions.
38
+
39
+ \paragraph{CLIP model} CLIP is a vision-language model composed of text and image encoders $E_T$, $E_I$, respectively, that map their inputs into a joint multi-modal space of unit-norm vectors (often called the CLIP space). In this space the cosine distance between embeddings reflects the semantic similarity of the corresponding objects.
40
+
41
+ \paragraph{StyleGAN-NADA}
42
+ StyleGAN-NADA is a pioneering work that utilizes the CLIP model for text-guided domain adaptation of StyleGAN.
43
+
44
+ The proposed loss function is
45
+
46
+ \Delta T (B, A) = E_T(t_{B}) - E_T(t_{A}), \nonumber \\
+ \Delta I(G^{B}_{\theta}(w), G^{A}(w)) = E_I(G^{B}_{\theta}(w)) - E_I(G^{A}(w)), \nonumber \\
+ \mathcal{L}_{direction}(G^{B}_{\theta}(w), G^{A}(w), B, A) = 1 - \dfrac{\Delta I(G^{B}_{\theta}(w), G^{A}(w)) \cdot \Delta T(B, A)}{|\Delta I(G^{B}_{\theta}(w), G^{A}(w))| |\Delta T(B, A)|}.
50
+
51
+ The idea is to align the CLIP-space direction between the source and target images $\Delta I(G^{B}_{\theta}(w), G^{A}(w))$ with the direction between a pair of source and target text descriptions $\Delta T (B, A)$.
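With unit-norm stand-ins for the CLIP encoders, the directional loss can be sketched as follows (all embeddings here are random placeholders of our own, not real CLIP outputs):

```python
import numpy as np

rng = np.random.default_rng(0)
unit = lambda v: v / np.linalg.norm(v)

def direction_loss(e_img_b, e_img_a, e_txt_b, e_txt_a):
    """L_direction: 1 - cos(Delta I, Delta T) between image and text directions."""
    delta_i = e_img_b - e_img_a
    delta_t = e_txt_b - e_txt_a
    return 1.0 - np.dot(delta_i, delta_t) / (
        np.linalg.norm(delta_i) * np.linalg.norm(delta_t))

dim = 512                             # CLIP embedding size
e_txt_a = unit(rng.normal(size=dim))  # E_T(t_A), e.g. "photo"
e_txt_b = unit(rng.normal(size=dim))  # E_T(t_B), e.g. "anime drawing"
e_img_a = unit(rng.normal(size=dim))  # E_I(G^A(w))

# If the image embedding moves exactly along the text direction, the loss vanishes.
e_img_b = e_img_a + (e_txt_b - e_txt_a)
print(round(direction_loss(e_img_b, e_img_a, e_txt_b, e_txt_a), 6))  # ≈ 0.0
```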
52
+
53
+ So, the overall optimization process has the form
54
+
55
+ \mathcal{L}_B(\theta) = \sum\limits_{i=1}^n \mathcal{L}_{direction}(G^{B}_{\theta}(w_i), G^{A}(w_i), B, A) \; \rightarrow \; \min_{\theta}.
56
+
57
+ In StyleGAN-NADA method the $\mathcal{L}_B(\theta)$ loss is optimized only with respect to the weights $\theta$ of the synthesis network $G^{B}_{sys}$ which has 24 million weights.
58
+
59
+ \paragraph{MindTheGap}
60
+ The MindTheGap method is proposed for one-shot domain adaptation of StyleGAN, i.e. the domain $B$ is represented by a single image $I_B$.
61
+ In principle, the StyleGAN-NADA method can solve this problem by replacing the text direction $\Delta T(B, A)$ from \Cref{eq:direction} with an image one
62
+
63
+ \Delta I'(B, A) = E_I(I_B) - \dfrac{1}{|A|}\sum\limits_{I_A \in A}[E_I(I_A)],
64
+
65
+ where $\dfrac{1}{|A|}\sum\limits_{I_A \in A}[E_I(I_A)]$ is the mean embedding of the images from domain $A$. However, as noted in the MindTheGap paper, this leads to an undesirable effect: transferred images lose the initial diversity of domain $A$ and become too close to the image $I_B$. So, the key idea of MindTheGap is to replace the mean embedding in \Cref{eq:im_dir} with the embedding of the projection $I_A^*$ of $I_B$ onto domain $A$, obtained by the GAN inversion method II2S:
66
+
67
+ \Delta I''(B, A) = E_I(I_B) - E_I(I_A^*).
68
+
69
+ So, MindTheGap uses the modified $\mathcal{L}'_{direction}$ loss, renamed $\mathcal{L}_{clip\_accross}$:
70
+
71
+ \mathcal{L}_{clip\_accross}(G^{B}_{\theta}(w), G^{A}(w), B, A) = 1 - \dfrac{\Delta I(G^{B}_{\theta}(w), G^{A}(w)) \cdot \Delta I''(B, A)}{|\Delta I(G^{B}_{\theta}(w), G^{A}(w))| |\Delta I''(B, A)|}.
72
+
73
+ In addition to this idea, several new regularizers are introduced that force the generator $G_{\theta}^B$ to reconstruct the image $I_B$ from its projection $I_A^*$. This further stabilizes and improves the quality of domain adaptation. Overall, the MindTheGAP loss function $\mathcal{L}_{MTG}$ has four terms to optimize $G_{\theta}^B$.
74
+
75
+ For more details about each loss, please refer to the original paper.
76
+
77
+ Our primary goal is to improve the domain adaptation of StyleGAN by exploring an effective and compact parameter space for fine-tuning $G^B_{\theta}$.
78
+ As described in \Cref{sec:stylegan}, StyleGAN has four components: the mapping network $M(\cdot)$, affine transformations $A(\cdot)$, the synthesis network $G_{sys}(\cdot, \cdot)$, and toRGB layers $G_{tRGB}(\cdot)$. It was observed in prior work that the part of StyleGAN that changes most during fine-tuning to a target domain is the synthesis network $G_{sys}(\cdot, \cdot)$. This is also confirmed by the StyleGAN-NADA and MindTheGap methods, as they adapt only the weights of $G_{sys}(\cdot, \cdot)$ for the target domain.
79
+
80
+ So, we aim to find an effective way to fine-tune the weights of the feature convolutions of $G_{sys}(\cdot, \cdot)$. In StyleGAN2 these convolutions use modulation/demodulation operations to process the input tensor and the corresponding style parameters $s$. Let us revisit the mechanism of these operations:
81
+
82
+ \text{modulation: } w'_{ijk} &= s_i\cdot w_{ijk}, \\
83
+ \text{demodulation: } w''_{ijk} &= \dfrac{w'_{ijk}}{\sqrt{\sum\limits_{i,k}{w'_{ijk}}^2 + \varepsilon}},
84
+
85
+ where $w, w'$ and $w''$ are the original, modulated and demodulated weights, respectively, $s_i$ is the component of the style parameters $s$, $i$ and $j$ enumerate input and output channels, respectively.
86
+ The idea behind modulation/demodulation is to replace the standard adaptive instance normalization (AdaIN) with a normalization based on the expected statistics of the input feature maps rather than forcing them explicitly. So, the modulation part is essentially an adaptive scaling operation, as in AdaIN, controlled by the style parameters $s$. This observation inspires us to use this technique for domain adaptation.
87
+
88
+ The problem of fine-tuning GANs to a new domain is closely related to the task of style transfer, where the goal is also to translate images from the source domain to a new domain with a specified style. The contemporary approach to this task is to train an image-to-image network that takes the target style as an input condition. The essential ingredient of such methods is AdaIN, which provides an efficient conditioning mechanism. In particular, it allows training arbitrary style transfer models. This motivates us to apply the AdaIN technique to adapting GANs to new domains.
89
+
90
+ We introduce a new domain-modulation operation that reduces the parameter space for fine-tuning StyleGAN2. The idea is to optimize only a vector $d$ with the same dimension as the style parameters $s$. We incorporate this vector into StyleGAN architecture by the additional modulation operation after the standard one from \Cref{eq:modulation}:
91
+
92
+ \text{domain-modulation: } w'_{ijk} &= d_i\cdot w_{ijk},
93
+
94
+ where $d_i$ is the component of the introduced domain parameters $d$ (see \Cref{fig:training_diagram}a). So, instead of optimizing all weights $\theta$ of the $G_{sys}$ part we train only the vector $d$.
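For a single layer, the composition of modulation, domain-modulation, and demodulation on the conv weights can be sketched in numpy (shapes and the index convention $i$ = input channel, $j$ = output channel, $k$ = spatial tap are our toy choices):

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 1e-8

# Conv weights w[i, j, k]: i input channel, j output channel, k spatial tap.
w = rng.normal(size=(8, 16, 9))
s = rng.normal(size=8) + 2.0   # style parameters, one per input channel
d = rng.normal(size=8) + 1.0   # trainable domain parameters (same shape as s)

w_mod = s[:, None, None] * w   # modulation
w_dom = d[:, None, None] * w_mod   # domain-modulation (the new operation)
denom = np.sqrt((w_dom ** 2).sum(axis=(0, 2), keepdims=True) + eps)
w_out = w_dom / denom          # demodulation

# After demodulation each output channel j has (approximately) unit norm.
print(bool(np.allclose((w_out ** 2).sum(axis=(0, 2)), 1.0, atol=1e-4)))  # True
```

Only `d` is trained during adaptation; `w` and the style pathway producing `s` stay frozen.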
95
+
96
+ \includegraphics[width=\linewidth]{images/training_diagram2.pdf}
97
+ \caption{Detailed diagram of proposed method. (a) Revised ModulatedConv block with introduced domain-modulation operation. (b) Fully detailed training process of the domain adaptation with the proposed domain-modulation technique.}
98
+
99
+ We apply this new parameterization to the StyleGAN-NADA and MindTheGAP methods, i.e. instead of optimizing their loss functions with respect to $\theta$ we optimize them with respect to the vector $d$ (see \Cref{fig:training_diagram}b).
100
+
101
+ The dimension of the vector $d$ is 6 thousand, which is 4 thousand times smaller than the original weight space $\theta$ of the $G_{sys}(\cdot, \cdot)$ part. While the proposed parameter space is radically more constrained, we observe that its expressiveness is comparable to that of the whole weight space.
102
+
103
+ The CLIP-based domain adaptation methods StyleGAN-NADA and MindTheGap use the $\mathcal{L}_{direction}$ (or $\mathcal{L}_{clip\_accross}$) loss (see \Cref{eq:direction,eq:clip_across}), which was initially introduced to deal with the mode collapse problem of the fine-tuned generator. However, we empirically observe that it solves the issue only partially. In particular, it preserves diversity only at the beginning of the fine-tuning process, and the generator starts collapsing after several hundred iterations. This is a significant problem because some domains need many more iterations to reach acceptable quality.
104
+
105
+ The main cause of this undesirable behaviour of the $\mathcal{L}_{direction}$ loss (and likewise $\mathcal{L}_{clip\_accross}$) is that it computes the CLIP cosine distance between vectors that do not lie in the CLIP space. Indeed, the cosine distance is a natural distance for objects on the CLIP sphere, but it is less meaningful for the vectors $\Delta T, \Delta I$, which represent differences between CLIP embeddings and no longer lie on the unit sphere. Therefore, the idea behind the $\mathcal{L}_{direction}$ loss may be misleading, and in practice we observe that it still suffers from mode collapse.
106
+
107
+ We introduce a new regularizer for improving diversity that computes the CLIP cosine distance only between CLIP embeddings. We call it the indomain angle consistency loss and define it as follows:
108
+
109
+ \mathcal{L}_{indomain-angle}(\{G^B_{d}(w_i)\}_{i=1}^n, \{G^A(w_i)\}_{i=1}^n, B, A) = \\
110
+ = \sum\limits_{i,j}^n (\langle E_I(G^A(w_i)), E_I(G^A(w_j)) \rangle - \langle E_I(G_{d}^B(w_i)), E_I(G_{d}^B(w_j)) \rangle)^2,
111
+
112
+ The idea of the $\mathcal{L}_{indomain-angle}$ loss is to preserve the pairwise CLIP cosine distances between images before and after domain adaptation. We observe that this loss significantly improves the diversity of the generator $G_{d}^B$ compared to the original $\mathcal{L}_{direction}$ or $\mathcal{L}_{clip\_accross}$ losses.
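A numpy sketch of the pairwise consistency term, with random unit vectors standing in for the CLIP image embeddings (names and sizes are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
unit = lambda m: m / np.linalg.norm(m, axis=1, keepdims=True)

def indomain_angle_loss(emb_a, emb_b):
    """Squared mismatch of pairwise cosine similarities before/after adaptation."""
    sim_a = emb_a @ emb_a.T   # <E_I(G^A(w_i)), E_I(G^A(w_j))>
    sim_b = emb_b @ emb_b.T   # <E_I(G^B_d(w_i)), E_I(G^B_d(w_j))>
    return ((sim_a - sim_b) ** 2).sum()

emb_a = unit(rng.normal(size=(4, 512)))   # stand-ins for source-domain CLIP embeddings
emb_b = unit(rng.normal(size=(4, 512)))   # stand-ins for embeddings after adaptation

print(indomain_angle_loss(emb_a, emb_a) == 0.0)  # True: identical geometry, zero loss
print(indomain_angle_loss(emb_a, emb_b) > 0.0)   # True: geometry changed
```

Minimizing this term pushes the adapted samples to keep the same relative CLIP geometry as the source samples, which is what preserves diversity.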
113
+
114
+ \includegraphics[width=\linewidth]{images/Mapper.drawio.pdf}
115
+ \caption{Detailed training process of the HyperDomainNet. During the training phase, only reference descriptions are included in the CLIP-guided training.}
116
+
117
+ The proposed domain-modulation technique allows us to reduce the number of trainable parameters which motivates us to tackle the problem of multi-domain adaption of StyleGAN2. Our aim is to train the HyperDomainNet that predicts the domain parameters given the input target domain.
118
+ This problem can be formulated as follows. We are given a trained generator $G^A$ for a source domain $A$ and a number of target domains $B_1, \dots, B_m$, each represented by a single image or a text description. The aim is to learn the HyperDomainNet $D_{\varphi}(\cdot)$ that predicts the domain parameters $d_{B_i} = D_{\varphi}(B_i)$, which are used to obtain the fine-tuned generator $G^{B_i}_{d_{B_i}}$ via the domain-modulation operation (see \Cref{sec:domain-modulation}).
119
+
120
+ In this work, we focus on the setting where the target domains $B_1, \dots, B_m$ are represented by text descriptions $t_{B_1}, \dots, t_{B_m}$. The HyperDomainNet $D_{\varphi}(\cdot)$ takes as input the embedding of the text obtained by the CLIP encoder $E_T(\cdot)$ and outputs the domain parameters $d_{B_i} = D_{\varphi}(E_T(t_{B_i}))$. The training process is illustrated in \Cref{fig:mapper_training}.
121
+
122
+ To train the HyperDomainNet $D_{\varphi}(\cdot)$, we use the sum of the $\mathcal{L}_{direction}$ losses over the target domains. In addition, we introduce the $\mathcal{L}_{tt-direction}$ loss ("tt" stands for target-target), which is the same as $\mathcal{L}_{direction}$ but is computed between two target domains instead of between a target and the source. The idea is to keep images from different target domains apart in the CLIP space. We observe that without the $\mathcal{L}_{tt-direction}$ loss the HyperDomainNet tends to learn a mixture of domains.
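A minimal NumPy sketch of the directional objective, assuming precomputed CLIP embeddings and the standard StyleGAN-NADA form $1-\cos(\Delta I, \Delta T)$; the function name is illustrative. $\mathcal{L}_{tt-direction}$ is the same computation applied to a pair of target domains instead of target/source:

```python
import numpy as np

def direction_loss(img_a, img_b, txt_a, txt_b):
    """1 - cos(dI, dT): align the shift between two image embeddings
    with the shift between the corresponding text embeddings in CLIP
    space. For L_direction, (a, b) = (source, target); for
    L_tt-direction, (a, b) are two different target domains."""
    dI, dT = img_b - img_a, txt_b - txt_a
    cos = dI @ dT / (np.linalg.norm(dI) * np.linalg.norm(dT))
    return float(1.0 - cos)
```

The loss is zero when the image-space shift is exactly parallel to the text-space shift, and reaches 2 when they are opposite.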
123
+
124
+ In the multi-domain adaptation setting, the regularizer $\mathcal{L}_{indomain-angle}$ becomes inefficient because each training batch consists of samples from different domains, so the number of images per domain can be very small. Therefore, we introduce an alternative regularization $\mathcal{L}_{domain-norm}$ for the HyperDomainNet that constrains the norm of the predicted domain parameters. Specifically, it equals $\|D_{\varphi}(E_T(t_{B_i})) - 1\|^2$.
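The norm regularizer is a one-liner; this sketch simply applies the formula above to a predicted parameter vector:

```python
import numpy as np

def domain_norm_reg(d):
    # ||d - 1||^2: keeps the predicted modulation parameters close to
    # the all-ones vector, i.e. close to the unmodified source generator.
    return float(np.sum((d - 1.0) ** 2))
```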
125
+
126
+ Overall, the objective function of the HyperDomainNet consists of the $\mathcal{L}_{direction}$, $\mathcal{L}_{tt-direction}$, and $\mathcal{L}_{domain-norm}$ losses. For a more detailed description of these losses and the overall optimization process, please refer to \Cref{appx:hyperdomainnet}.
2210.15088/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2022-08-15T12:41:34.382Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.0.0 Safari/537.36" etag="0QRGUGDsFPQH7hFsRHog" version="20.2.3" type="google"><diagram name="Combined New" id="SXKHzWekjN8-1scbhIkA">7V1Zd+M2lv41OtN5MA+xA48uV1Vn5iTpmlQylfRLDiVRsqokUdFStvvXDygREklAJESCi2ypO2UT4mLifvfi7high8XzP9fB6vHnaBzOB9AfPw/Q+wGEAEM4iP/vj18OIwzww8B0PRsfhvzTwOfZf8LkSjW6m43DTTJ2GNpG0Xw7W2UHR9FyGY62mbFgvY6esqdNovk4M7AKpmHmz4gHPo+Ceaid9mU23j4eRjlkp/Efw9n0UT0ZUHH4ZhGok5Mbbx6DcfSUGkIfBuhhHUXbw2+L54dwHk+empeffwfst4f7j7/g71++8Onk46dRdHe42cdLLjm+wjpcbt3eOiHu92C+S+ZrAOlcPuTdJJLPki+9fUlmkv69i9QXd5s9ne/lCZCvJFbenb6Xv03jn/8Y/iC//u0x3BNy9DjbSvLu1vFhNJH/fArXm2gZ3N2Pg9V29j0ev99u5RvOouUAPsjDH9XfstkN1V0/3d+rUfmqqS/kz2Cxkr8sh5v4x2wTI23/9EU03s3D/4oHot12tdum7nB4TXULmHljuA2f4/HH7WIuB0D8l2zX0bfwIZpHazmyjJZhPCOz+Tw3FMxn06U8HMn3CeX4u+/hejuTqLxPvljMxuP4Me+e4on5vApG8TOfJA/KsXW0W47DmG5+MuEJX0GeHH8MFrN5zJG/zRZh/GK/hE/y31+jRbA8vkkpYBJgxX9b+JxilwRA/wyjRbhdv8hT1LeA0MM1iTTgDHuICUyFDynBPOGqpxOnUT95ymOKy1DCUkHC3NPjk04Alr8kGL4Az8iA5xxRp3J2V01MYyLBguE8K5Auml6WmV454hGizSkRvj6nhECPoobmFZfP6wm04AysU3yUwjTA8ngcbB6P17qjDLKmQKezS7TZfZBTEAsf6MsFZrPJykZXE89zsmsAUQjGJGSaoJPfCMpQQDuiznNWbCTcgX2NMwDlOu0w9LhoiHRUI12yrt1IlyWdoL2jHdNo9zGUxID+x2j9FKzHjZJrHIR8MjKRi454OJx0Sy75vcexwMcPylAPAOExBIC0BgiWdKNUpyYBHkh/DFoAlc+QhgKBFMmVrzE6c42Q4VjaCMlhtN4+RlPJsfMPp9F3WVKfzvkpilbJ4Ndwu31JKBzstlEJ+V0TcxPt1qOwXDRtg/U0LLpfwgbxnBRCYx3Og72injHRDMTaX3q/XgcvqRNW0Wy53aTu/CkeOCEOK7tLQQwxTyqVR3QIksPB4QEnVBz/0upAEZpAuB+PZ8upW0EgjSF1SA1ygYR8jE1ygcMhoh2L8RK5gP2YaL7PMJEqExes13JBuSgM1u/RrMzaoOXmMMDSHNZs4SKrVY0Or8cM3avsXaKQCs9ngJ4+WRSKGIVUyBMg4wQarFJCPZr+MB2FkHkJ/DBDlGPl93IPQ2CAYY0Fy6++YLW7QAnLBUrxqbsVqh69imzfipMvJ3T98kd8vUfU4Z/J7fYH758zRy/q6Hm2TV0mj/5Ud5S/ny6KD9Q13WsmAHWlmtQjvO5d+hzOJ3ft2HeTyQSOjAbDmA4p6VoxAMjDJKcM6It/2yYe0B1XX04r+spqRT/n4F4pQz+1sK/OLuJuoRDSM1BgYuj7HUMBUrk6Z5V5LldkHQ2S8qaVd6//NQUI0sXaWlW8dy+qme0ajfslqnW/3Jc9wpxbc3nWJPH/jG6dg6J6uENq/PDpmmULzTpIYoU65e7RQxJ9Mu
tYLRav6O85sXiGwU/8/qo0OGIrFmi/xEInrsATiZk9jY94ApfhqXts2Pod+4YNcdMMLjLi8HXS+ZjOU53OX3eLlTo/WI/qSgVYQSqoq64HL8qrVYoX0S+4gL7Bpcoioq66waVpuEDN8Pg52Hy7mR0VzA4EhYf5yezQ3fg9sjog0gjcptVxoZaoi6TuOd5aoVA81nXEGmTTkaBPCiLWpZcjnAl48xwiGwh3Q9yJxvvG9B5gaw+5h3U9gdaNp7S2moOyktBHqAQi+6NP4VqCYR8kv+GmHm5016u7moLgVFMQST3j+2xPgX09gZxP+e9kHSzCp2j97VBCsE+88GfL8WwUbMNTSUBx8cGtOqBSdQCU+prvQwGYQAggXwX7VK0AFZ4QXCDuE+QLhjV1jjLhEeJLrU4wQjkSBnWOe4wCn4Kk5oA2hGKkJ4dejGJ4BsWpJKMadzmliH9YjqKxxJExl+hasQy7xTKQpgmOs5YpoQRhkSvM4FwiHUPMGN/DUccyhMTzOfQplbhHvi+gDmaMPIIEh/JrLu0g0hSYlV3VXzC/nwXzaLqL5fD78IZm12jGFHiC8BOa6RWjmejlMj1D86kE5Caa3YOZ+zAD5ivBcrAbPf7y56//+/XfH+6+/t/vj3/9iv91J0xYzgGgxwWLGOXqQgw5X5AjfX4pFh5oKscHNVitCOPj+tWKxWCwnn9p3gJx+nDAQYYemHokzsdXH6RTh52SrTIEkkymBK17AvHmCFTdWndEE5BjCW5gCWKoMMWkqQRzrNc5xHmrA3eViZUyVftCMEq9mEwcY8Ex95X0OJZgI48iBigHGBABhSH3CWMPI0a5EFRQH3PaoaqvFxM4LWqsVMR4HZQmwPeYoJj6gFEhVDCpr4SGGqHvxzGdj40wDv/GsxutF27Jblej1leyZ9PUCSNxEJLIn7HZr5xO3ZPd+PYl6QiJ5p8l7uZbuB09DvTIX3J2LTe9vZf+2D+ncn56IR5KHepKNJY61BVrte9QN5O8OGHpRvICStqGlBU2ehJDwboFs49kuFTS+Cg0K2lDTjBxFgNrZenGGHhMGqNUGpuAE476raRRXYifzxib7gnZQaCdUzhIhVLvpLHp142lwkOeyvg+7uwmj0fzYLOZjQ6DH2fz498+H+6/Vx6rbsQHhbbio1/VL7ReLlIDCKwWzAcI5BEI6dtCoPUC5hyBlXKieK4ENFlXz+VAFZ/eTNYT0/NazkO/XXXJgLk69QEtY5XZJqw0hFUNXYAgL25GAgXmCHLIc54dwrzE0ic+xjDnjj68b3LLAgyXPUUAT3VHpAzmdYrDZGlPcQZ2PRnn5iWo5AbMk7HfGqbyYvRHA6ikAFzWC6MXK7Zytpf3tuisLMr4gnog8r+X+1a5/ofFMGygDVaPYjQMeqqpS2mURjKwg7il8T1APbW9Ib9PxodTiZ/lgUkhb44xzbPLbdUT5+2GzigOKJsfCZidBlJBNyickKIorJKlPwXDcP4p2sz2wUL0fhhtt9FioKeUbKOVBp9VfLPF8zRuMO9N5tHT6DFYb73NbrGQUuWvyW45Oty2W5kAc+Qw5TIYJAJ00MvaTB+9PKP/Tt7Ki/c5s71lKWFobFDIPK0LCZQTEu4MCHPwwZQHnUNlrQSlQnI5aKnOmEdJKkcm15gIyfVcY3NkyFhSY+7DOxY96w8WlNq/AlpLWAmO7V/wjG7mjlxHkFzUZ73dWaZ91K2O4jYlbE+i116dqloGapS6OXl+9JQmr3lyk8bfj3br7w2kwilElcprxT2lAlud2JWDVAkcXzW7O+tPYh7IprRWuQZizy++Jpe6qV1Q2/tqJoOF0mn0zqe4JseWabhmeDSDTQPDOuUa94ufOx0k3fjWtLdFMlZXVWG5tZXkNGNrjypHxTdqWufR+4Ar0yeIL94n9Zv8If1J1S9BYtoXQ5o0qmLZVah+6coXNSDUxX5BxjdCphbgDYkjlxKoVHVpVhzVc2i2JI7ylhOvKo4EyN0IWo
mjS2On+T8Y+n7h3wUxqHU+ShwMja73yKK5eS/tx7IKlsZEkoVEysp7C7+7m1k90rLHs2eC22W1gWf7PX/S+j0X1u1dFZGAybPZGJFgc0QakHfv/vV5QN7HMxduVtFyE74eij3nqJVprd4mmyELp1WTetPRTa28KAe9KdYvK/tOfA+LjM86zjcDPTEGyx0hupZW5ODuiZKmgKwCsTn3n62KxnKduHK3adhepCZ5dg06jiIC8XDaRsuqoUi1cdFKqFOFpYYK0kOqy+lDc4ELd9mu+vS73nmx0k6LmWSH+Wz1o3PhcHxz+5UDmwmXoi03cLujrIfCFNcb+eykcdv0YTp9aiz1ZxK19GSzTMc/LQ0cVCpEOKkNnOb0BnGx3tDnDFuFrJ6s8kAgDwqMVZKs6rihVhhEPAhPm4jnkiLt/TT7RNzjJ+ftoMyjOK7uTT655gZNJ+I6VZircNFlDaBTGjYiOV7Br4tXbLPR2+GVXOwDYejls4RtGYJLrkvrdUJjCAIkQ7TKB1Svvrg0Du8mMbmm1XemUqMwlp6xaqvu++K0A1ApFyGDWVmPi5qtP4K5tH3Ki33iSKBa5+OS83HOmMqd30yFEzV1YK3lnLlcY7torWkmmlUE/GtbFTD1jTi6eE3AOY8LAXa+koubo7NC4JeeT3y3DdDNwYLznQTlreP+fXEe3SYBinI177aTeEv5otaByVPVKeNotLmbxZF9ieu76W42lhr6KJD/0Ts2mUzuwpEQdwEXkztMQ06GPgIjmHnGOZf4U4K1+2W0XuwZ8/xftlkFy9TYofJrOdysUufIecydZr0/dRMOc0cdA7VO/YO81wObrOqmvOfCIsbX465/jGUXWeuufyRfY+lsRqlFmnIvW/5dPPnHfGUlWOMWf/r0M1PxIWMeLlh66lGgiZ5+XU0yoB4T5ORKYFmtD3DDjGNTt3IXm8Yb34npIuQV9/S7mIBU5NgkjlgwgqDqEgM1AgJsDGI0Rb9+9PE6l+t9riuGe5o2XkCDgYeQHvE4+Veyd7T2NpbcVyXDthSDZFCTB2+k0L++bEDKVdyZNKDF5fpvRxqUpjmoda80z0ExRE/K6w1RtFfbl9UBRyrp2d36rLsGbvLUknqYQA9QSKjqm0y7JmZxY4WbeNXEVLl4tc0ja0m86vb/9bQvqWKeZjmuJw1MlA+3X0W2fW9gYs2bwjXLnbdsBExZNrmaLU6PYHNsNB2M9FSv/cZi04UT/DobolwuZVAhsTrvj8L0kv5bf5TGhRCzVRB6Iq2wLq0aFiO86ehOIQUdtEzhwCMslTaUc3533zJF7XJ0xS1TuD1ZOptlfVJ7oM3dWqaUIKpUhHNDDkzhibeWKZ22TFGR9FfXMsUVK3egBlnzkLUa1FImPvcLtSUKq9t20i688NZNK2KGXVRufVwq2YJSjy7WCTvu48JNmXzXoHN31iiBm5KE31SnBG7fzag7KjFttvqkgzvo6r7f1OW40ZALpbtH4RTuvMd7gRMkvRNwVrNFuRte4q4tuC3bh/kAzC34ba3uLXYSLNSEPd+ng4w2HFd91C15aMnh5y6PoiUFtjiCAIRfDerxNr+CC8IwptznJNeLC0HknTaGJX5+a2xXLcBK4iOgrMMXbvBylDDY+Sm85PKGbGWLEM5b7+dnzfn9ahVTmsiZS7V2xPn7pPnOOR9RXPwSJaxb73LMSi6XU2h/eTOcr3oKXGlNjYAeZakZhNllrdUKG+Ob2mwE0MsKG3gpKeRKny6a51mlt4N6GzM9elxvc/GU5+tt8t78VuttzNOtKxavuN7mYgL2vt5G5by93RRQWzvOhZoGz3eTq1VvU3Df5uptzI0x9CSPN5IfXl82dF9vQ958QrhSFsoawKh1r7RhhmKInmyBbug7/mrrbRxwZJv1NoWNhm7y9HLq9a7ehhQ3+bqJV01MlYtX5+pbvf7f17xdcBXztJf1NqiXm9r1vd7GwJvm2WWuWa5SlLV2vc15o6njehtk4VK83n
qby6UMKiRW5/U2SHeh9DLT5CrqbayFkO1mgH2RVu3X2yALT3gDuX/JQ95CvQ2yiKr3vN7mCJL+1tso+d4vbe5Wb1OCqHIRbnChFZ54q7fptN4G665DnTFv9TbtqkHWPGSrBrVYb1OgLdWtt7ns1g0rYiq0d6u3qW0L9r3eRrmOr07n7qySQzngUzP2fhbMo6k8gv6DnKOYCc4b2G5KY/JeVcuJP5K7xxNM+q0+v9VSGdsVXvFH6/6LjGCtVSkDz+ZvJpUyILdUt7U/ZV82ub/OShmoq571MiBazJcvcCc7q5TJQr3NSpmCyIZNqUtjl1tVylhf3oyVS1u0cq+1Usaa8+E1cb67Spks57dZKVPwdjalLo1dblUpY315bc43b21lEXZtdnswxlherwWV9Npe7hzWL2HAMPdy+/VU3jyMUu6J9CdzW8nxXrzNd7sbSuL6OTBprAbrUW1D74BlZep58VazVcy90hC1YQ9KV95a200jCwtH3bFKs5tGCpBtVVS2CSQjvNb5ZXvhUWzmV9vzC/fOM7w9lMu2tF2pz4R/LGRT78aEBxEXiPsE+UJ1AyiRHK6UVNPK1EEiae0slPY3bsXNsJaGHqI2Mkrwwv2cculO/IvF8xSDn76Sr39+//fT37+zhf/1rlj6twaQkwhPpH3KokGity67WmJeGHbYNm+z6TsHY73tF/XWVz/NlmGw1rDUfsaymw0vsfA9gX0GBeYIcshzPKqX0wLlD8ikEpKGKAB8C+9DOn7U6LzWT+Gi/JTlfZxkakjbAmpf4VZCIkfGa6LJ26/hZhUtN2HPu7ydoHZR7KoxQv1N/ngJfv/8x+bx8/98+vgfyH57Wd2ZHPTpPXsNZBodAuExidbT4T9iX4x8th+nzOx/4h/2c+SfI6cfxxvvkvj5/f4RwfpAOH8YjL5N9/S6yz0GkuT+6V9+MOLDAdD8STDKXmOCgv7kBzQQLD7rOIu7ofFP0K8dpzdC3g3zL6OB/MyuyTns9yPBoanVhuVavWKkry9NZScY2cm0uNzYqQt2Cm/sdDE7AR+LfvGTKdnnxk83froOfkK0teVJHq6jmGAnF0dc7/BzNA7jM/4f</diagram></mxfile>
2210.15088/paper_text/intro_method.md ADDED
@@ -0,0 +1,116 @@
1
+ # Introduction
2
+
3
+ Persona is essential for building a trustful and confident conversational system. Recently, there has been an increasing interest in incorporating explicit persona into dialogue generation models  [@transfertransfo; @mutual_persona; @bert_over_bert] since the release of the publicly available datasets  [@PersonaChat; @convai2]. Typically, persona information consists of several sentences describing the facts or background of the interlocutor. An example taken from the ConvAI2 dataset  [@convai2] is shown in Figure [1](#fig:personachat_example){reference-type="ref" reference="fig:personachat_example"}. In this example, the system should consider the information in the persona sentences and generate consistent responses based on both persona and dialogue history.
4
+
5
+ <figure id="fig:personachat_example" data-latex-placement="t">
6
+ <embed src="Figures/PersonaExample.drawio.pdf" />
7
+ <figcaption>An example from the ConvAI2 dataset.</figcaption>
8
+ </figure>
9
+
10
+ One challenge in the persona-based dialogue generation is that the related datasets are usually small. As collecting dialogues in persona-based dialogue datasets requires crowdworkers to chat with each other based on provided persona profiles, building such quality datasets is expensive and time-consuming, which in turn restricts the size of those datasets. For example, the ConvAI2 dataset [@convai2] only contains 131k utterances with less than 5k unique personas, much smaller than open-domain dialogue datasets such as Pushshift.io Reddit [@reddit_dataset] with roughly 1.2B utterances.
11
+
12
+ Another challenge is to choose the weights between the persona and context. Unlike open-domain dialogue models that generate responses by considering the dialogue context alone, persona-based dialogue generation systems need to additionally take the personalized background description into account along with the dialogue context. The weights between context and persona should be dynamically adjusted by the dialogue system in different situations. For example, given a user utterance *"How are you?"*, the context-preferred answer is likely to be *"I am fine."*, which is safe but bland. Meanwhile, a persona-preferred answer would fuse persona information into the response, such as *"I am spending time with my four sisters"*. Under such circumstances, the persona-preferred answer would be more informative and meaningful. On the other hand, sometimes the system needs to focus on context to make the conversation interactive and engaging. For instance, if the user says: *"I have two greyhounds. Their names are Tom and Jerry."*, then the system would focus on the context and answer: *"That's cute! How old are they?"*, which encourages the user to chat with the dialogue system. From the above two scenarios, it can be seen that the weights between context and persona should be adjusted accordingly, which is important for a dialogue model to build long-term relationships with users.
13
+
14
+ Most existing works on persona-based dialogue generation tasks have primarily addressed the data scarcity challenge by utilizing external data or sophisticated training processes. For instance, @bert_over_bert use the MNLI dataset [@mnli] as auxiliary tasks, @d3 augment the data through text manipulation, @blenderbot add other dialogue datasets in pretext tasks, and @mutual_persona adopt multi-stage training with reinforcement learning. Those works obtained decent performance, but few of them considered the second challenge.
15
+
16
+ To address the aforementioned second challenge, in this paper we design a Persona-Adaptive Attention (PAA) mechanism to dynamically learn the weights of the persona and context information in the proposed framework. To enhance the persona information in the PAA, we prepend the persona to the decoder input as a prompt so that the weights can capture more persona-related information. To balance the context and persona information, the PAA takes the two cross-attentions and the self-attention from the persona-prompted decoder to compute the weights for combining the latent representations of the context and persona. Moreover, inspired by findings in [@welleck-etal-2019-dialogue; @d3] that not all context and persona information is useful for generating the response, we apply two dynamic masks to the weighted latent representations, which not only remove redundant information but also act as a regularizer in the PAA.
17
+
18
+ As a byproduct, extensive experiments on the ConvAI2 dataset show that the proposed framework achieves comparable or even better performance than existing works without using external datasets or sophisticated training procedures. One reason is that our framework explicitly learns the weights between context and persona in the architecture design, which performs well in a low-data regime. This observation indicates that the proposed framework also alleviates the first challenge, killing two birds with one stone, and demonstrates its effectiveness.
19
+
20
+ Our contributions can be summarized as follows.
21
+
22
+ - We propose the PAA in an encoder-decoder framework. This framework models the persona and context information by two separate transformer encoders, which are then fused in the persona-prompted decoder by the proposed PAA mechanism.
23
+
24
+ - Extensive experiments on the ConvAI2 dataset show that the proposed model performs comparably to or even better than strong baseline methods, with about a 30% improvement in terms of the perplexity metric.
25
+
26
+ - We demonstrate that our framework is a data-efficient architecture that can achieve comparable performance with 20% to 30% of the training data compared with a larger model such as GPT2 [@GPT2] trained on the full dataset.
27
+
28
+ # Method
29
+
30
+ Suppose that we have a persona-based conversation session $C=\{P,U\}$, where each persona $P=\{p_1,\ldots,p_e\}$ is composed of $e$ profile sentences that describe the background of an interlocutor, and the dialogue context $U=\{u_{h,1}, u_{m,1},...,u_{h,n}\}$ includes the utterances spoken interactively by the first interlocutor (e.g., human) $h$ and the second interlocutor (e.g., machine) $m$. In the persona-based dialogue generation task, $P$ represents the persona for $m$, and the conversation session always starts with $h$. Therefore, the objective of this task is to generate the response $r=u_{m,n}$ given the persona $P$ and the dialogue context $U$.
31
+
32
+ As depicted in Figure [2](#fig:overview){reference-type="ref" reference="fig:overview"}, our framework consists of two encoders and one decoder with PAA to perform the decoding process. The encoding layer uses a transformer encoder architecture to encode persona $P$ and dialogue context $U$, respectively, into latent representations. The encoder layers are randomly initialized, while the decoder layers are initialized with the pre-trained GPT2. The persona information is fed to the persona encoder as well as the decoder as a prompt, offering strong guidance for GPT2 to decode the target response. PAA handles the cross-attentions from the persona and context information to balance and regularize the two parts by weighting and masking.
33
+
34
+ Before presenting the proposed PAA, in this section, we introduce the decoder's self-attention and encoder-decoder cross-attention as the inputs for the PAA.
35
+
36
+ Firstly, the persona $P$ and context $U$ are processed separately by two encoders. Let $I_P=\{t_1^P,...,t_l^P\}$ denote a concatenation of all sentences in $P$, where $t_i^P$ is the $i$-th token in the persona $P$ with total $l$ tokens. Meanwhile, $I_U=\{t_1^U,...,t_k^U\}$ represents the token sequence for the concatenated context content $U$. Then, we use the bi-directional transformer encoders for encoding the text span. Generally, we get the encoder results from $I_P$ and $I_U$ as $$\begin{equation}
37
+ \begin{aligned}
38
+ h_P&=\text{Encoder}_P(I_P),\\
39
+ h_U&=\text{Encoder}_U(I_U),
40
+ \end{aligned}
41
+ \end{equation}$$ where $\text{Encoder}_P$ and $\text{Encoder}_U$ denote the bi-directional transformer encoders for persona and context. $h_P \in \mathbb{R}^{l\times d}$ and $h_U \in \mathbb{R}^{k\times d}$ are the hidden states before the last pooling layer from the encoders, where $d$ is the output dimension of the encoders.
42
+
43
+ Since our framework adopts the encoder-decoder structure, we process the persona-prompted response in the decoder. Specifically, to model $t_{r+1}^y$, the $(r+1)$-th token in the response, we calculate the self-attention on $I_R=\{I_P, [BOS], t_1^y,...,t_r^y\}$, where $[BOS]$ is a special token indicating the beginning of the sentence and $t_i^y$ is the $i$-th decoded response token. Formally, the self-attention result from $I_R$ can be expressed as $$\begin{equation}
44
+ \begin{aligned}
45
+ h_R&=\text{Self-Attention}(I_R) + M_R,\\
46
+ \hat{h}_R&=\text{AddNorm}(h_R),
47
+ \end{aligned}
48
+ \end{equation}$$ where $h_R, \hat{h}_R \in \mathbb{R}^{(l+r)\times d}$, and $M_R$ is the decoder's mask to make the self-attention calculation uni-directional.
49
+
50
+ After obtaining the encoders' hidden states $h_P$ and $h_U$, as well as the decoder's self-attention output $h_R$, we then calculate the cross-attention based on the $(h_P,h_R)$ and $(h_U,h_R)$ pairs. The cross-attention is calculated in a similar way to the self-attention, where $K$ and $V$ are provided by the encoder and $Q$ comes from the decoder. In detail, we can formulate the cross-attention as $$\begin{equation}
51
+ \begin{aligned}
52
+ o_{P}&=\text{Softmax}(\frac{Q_r K_p^\top}{\sqrt{d}})V_p,\\
53
+ o_{U}&=\text{Softmax}(\frac{Q_r K_u^\top}{\sqrt{d}})V_u,
54
+ \end{aligned}
55
+ \end{equation}$$ where $Q_r \in \mathbb{R}^{(l+r)\times d}$ denotes a linear transformation of $\hat{h}_R$, $K_p, V_p \in \mathbb{R}^{l\times d}$ denote linear transformations of $h_P$, $K_u,V_u \in \mathbb{R}^{k\times d}$ come from linear transformations of $h_U$, and $d$ is the dimension of the attention head. By calculating the cross-attentions, we obtain the correlation results between the encoders and the decoder, which serve as parts of the input for PAA.
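The cross-attention above is standard single-head scaled dot-product attention; a minimal NumPy sketch (shapes as in the text, projections omitted):

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(Q, K, V):
    """Scaled dot-product attention: decoder queries Q ((l+r) x d)
    attend to encoder keys/values K, V (seq_len x d)."""
    d = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d)) @ V
```

With all-zero queries the attention weights are uniform, so each output row is just the mean of the value rows; in general the output has the same number of rows as `Q`.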
56
+
57
+ To fuse the cross-attention results, the proposed PAA will use the weighting and masking mechanisms to utilize the persona information.
58
+
59
+ Specifically, we take the self-attention result $h_R$ and cross-attention result $o_P$ as input to generate the initial weights $w_{persona}$ for the persona information. The motivation behind this operation is to enable the model to consider the relation between persona and the response in both self-attention and cross-attention fashions. Formally, this operation can be presented as $$\begin{equation}
60
+ \label{eq:fc}
61
+ \begin{aligned}
62
+ m_p&=FC([h_R;o_P]),\\
63
+ w_{persona}&=\text{Sigmoid}(m_p).
64
+ \end{aligned}
65
+ \end{equation}$$ In Eq. ([\[eq:fc\]](#eq:fc){reference-type="ref" reference="eq:fc"}), $[;]$ denotes the concatenation operation, and $h_R,o_P$ are first mapped into $m_p \in \mathbb{R}^{(l+r)\times d}$ using a linear layer $FC$ followed by a $\text{Sigmoid}(\cdot)$ to obtain the initial weight for the persona cross-attention. The weight is then applied to the persona-response and context-response cross-attention results to form a complementary relationship, leading to the weighted cross-attentions $\tilde{o}_P$ and $\tilde{o}_U$ as $$\begin{equation}
66
+ \begin{aligned}
67
+ \tilde{o}_P&=w_{persona}o_P,\\
68
+ \tilde{o}_U&=(1-w_{persona})o_U.
69
+ \end{aligned}
70
+ \end{equation}$$ To dynamically remove the redundant information and to regularize the two input sources, we transform $w_{persona}$ into $m_{persona}$ and $m_{context}$, which denote the masks for the two input sources, as $$\begin{equation}
71
+ \begin{aligned}
72
+ m_{persona}&=\mathbb{M}(w_{persona}>\tau),\\
73
+ m_{context}&=\mathbb{M}(1-w_{persona}>\tau).
74
+ \end{aligned}
75
+ \end{equation}$$ Here, the masks $m_{persona}$ and $m_{context}$ are produced by the binary indicator $\mathbb{M}$, which outputs 1 or 0 according to the given condition. $\tau$ controls the strength of the masking and is defined as $\tau=|I_U|/(|I_U|+|I_P|)$, where $|I_U|$ denotes the length of the context input and $|I_P|$ denotes the length of the persona input. The intuition behind this setting of $\tau$ is to increase the masking strength when the context length outweighs the persona length. After obtaining the masks, we apply them to calculate the weighted sum: $$\begin{equation}
76
+ \begin{aligned}
77
+ \hat{o}_P&=m_{persona} \odot \tilde{o}_P,\\
78
+ \hat{o}_U&=m_{context} \odot \tilde{o}_U,\\
79
+ H_{PAA}&=\hat{o}_P+\hat{o}_U,
80
+ \end{aligned}
81
+ \end{equation}$$ where $\odot$ denotes element-wise multiplication, which performs the masking operation. The weighted masked results $\hat{o}_P$ and $\hat{o}_U$ are then added together to form $H_{PAA}$, the output of PAA.
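Putting the weighting and masking equations together, one PAA fusion step can be sketched as follows; this is a minimal NumPy version where `W_fc` stands in for the linear layer $FC$ and the residual/AddNorm plumbing of the full decoder block is omitted:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def paa_fuse(h_R, o_P, o_U, W_fc, len_P, len_U):
    """One PAA fusion step: weight the persona/context cross-attentions
    by w_persona, then mask both branches with the tau threshold."""
    m_p = np.concatenate([h_R, o_P], axis=-1) @ W_fc  # FC([h_R; o_P])
    w_persona = sigmoid(m_p)
    o_P_w = w_persona * o_P                 # persona branch, weighted
    o_U_w = (1.0 - w_persona) * o_U         # context branch, complementary
    tau = len_U / (len_U + len_P)           # masking threshold
    m_persona = (w_persona > tau).astype(float)
    m_context = ((1.0 - w_persona) > tau).astype(float)
    return m_persona * o_P_w + m_context * o_U_w  # H_PAA
```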
82
+
83
+ The balanced masked result $H_{PAA}$ is then passed to the feed-forward network in the decoder, as depicted in Figure [2](#fig:overview){reference-type="ref" reference="fig:overview"}(a). Such transformer blocks are repeated $N_d$ times to obtain the final output.
84
+
85
+ In the training process, the objective function utilizes the widely used negative log-likelihood loss as $$\begin{equation}
86
+ \begin{aligned}
87
+ \mathcal{L}_{NLL}&=-\log(p_\theta(I_R|I_P,I_U)) \\
88
+ &=-\sum_{i=1}^{|I_R|}\log(p_\theta(t_i^y|I_P,I_U,t_{<i}^y)),
89
+ \end{aligned}
90
+ \end{equation}$$ where $I_R$ denotes the response input, $t^y_i$ denotes the $i$-th token in $I_R$, $t^y_{<i}$ denotes the first to $(i-1)$-th response tokens, $p_\theta$ denotes the model, and $\theta$ denotes the parameters of the model.
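Given the per-token probabilities the model assigns to the gold response, the objective above reduces to a sum of negative log-probabilities; a trivial sketch:

```python
import numpy as np

def nll_loss(token_probs):
    # -sum_i log p_theta(t_i^y | I_P, I_U, t_<i^y), where token_probs
    # holds the model's probability for each gold response token.
    return float(-np.sum(np.log(token_probs)))
```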
91
+
92
+ ::: table*
93
+ Method PARAMS PPL $\downarrow$ F1 $\uparrow$ BLEU-1 $\uparrow$ BLEU-2 $\uparrow$ Dist-1 $\uparrow$ Dist-2 $\uparrow$
94
+ -------------- -------- ------------------ --------------- ------------------- ------------------- ------------------- -------------------
95
+ Encoder-GPT2 182M 20.06 11.95 41.00 13.95 0.11 0.23
96
+ GPT2-SMALL 124M 18.10 11.83 48.90 20.40 **1.31** **6.30**
97
+ GPT2-MEDIUM 355M 17.65 11.45 54.59 23.39 1.13 6.07
98
+ GPT2-LARGE 774M 16.98 10.93 40.91 17.09 0.42 2.62
99
+ Attn-Routing 254M 17.94 12.77 54.26 20.91 0.70 2.39
100
+ PAA (Ours) 254M **14.03** **17.36** **57.25** **23.94** **1.31** 5.21
101
+ :::
102
+
103
+ ::: {#tab:baseline2}
104
+ Method PPL $\downarrow$ Hits@1 $\uparrow$ F1 $\uparrow$
105
+ ----------------- ------------------ ------------------- ---------------
106
+ KVPM \- 54.8 14.25
107
+ DIM \- 78.8 \-
108
+ LIC \- 17.3 17.79
109
+ TransferTransfo 17.51 82.1 19.09
110
+ $P^2$ Bot 15.12 81.9 **19.77**
111
+ BoB **7.80** \- \-
112
+ GPT2-D3 15.69 \- \-
113
+ PAA (Ours) 14.03 **93.9** 17.36
114
+
115
+ : Automatic evaluation results on ConvAI2 over published work.
116
+ :::
2211.05568/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2022-04-11T19:52:03.400Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.75 Safari/537.36" version="17.4.0" etag="cHRXJ0CIJMbk6lZVHFwJ" type="google"><diagram id="USbQ8CN89hjpLD7XSek-">7ZlNk5sgGMc/jdcdEN9ybLLb9tDO7EwObY9MJEpLxCFko/30xQgqJmkz3aDTziYH5c/773nggdFDq131QeAy/8xTwjwfpJWHHj3fh8AP1aNR6lYJo6AVMkFTXagX1vQnMTW1eqAp2VsFJedM0tIWN7woyEZaGhaCH+1iW87sXkuc6R5BL6w3mJGzYl9oKvNWTcJB6Y+EZrnpGQKds8OmsBb2OU75cSChJw+tBOeyfdtVK8IaeIZLW+/9ldxuYIIU8pYKvh6GrM3cBD8UKWmygYeWXMicZ7zA7BPnpRKhEr8TKWttFXyQXEm53DGd27ZI0syGtecHsdGStrbEIiN6mNH5yGHHQzkS4TsiRa2KCMKwpC9261hbNOvKdVWfOVUt+kB7X2DQa9/zjdVME+2odK0enXoZDKOXTkAvw0VncBsqa50seKEey5l4h/fnbZH6DRY9mhfMDrrRc06MqbXc4DnmVJJ1iU9TOartxJ483pftAt/SqoG43FLGVpxx0RPW3REhSeVdWyZXJq8r+IntNMg40bFf/oGW8sHKD8DrcYUucZVEUDUiIpo6tMhUVuiOIrApwmg6ipFLinsp+I8uFMDrXDOBU6pojdgOcXs+gu+af9fuIAecfvexBorms0Z8/7Bj28C/bWOE6HxnhA62xsuhKA4mC0XJJKHofjYI5gtPi0lQqWmJ+usw8a1p7CE0ycdKN96map36W6CLqZw6jEehcrxf3M+pzXVgFlNBR4ZycBC+1fMhnJYneABBMHL/RfIHrqfUswmvr4SdzAj7wgHEKewpnDeekSdyecJzcyCOwHzXCuj0GjbpvSL05zvJQqfXs3/yYhElM5rD6T1vUq8ORhg7PlNgjN+8emSOGIS2OdCE5kj+G6+OR3s1AhNiXLx59Th0wpE5oDNzqGT/XaO97fVfh9DTLw==</diagram></mxfile>
2211.05568/main_diagram/main_diagram.pdf ADDED
Binary file (6.78 kB). View file
 
2211.05568/paper_text/intro_method.md ADDED
@@ -0,0 +1,36 @@
1
+ # Introduction
2
+
3
+ Deep learning models have become the predominant tool for learning representations suited for a variety of tasks. Arguably, the most common setup for training deep neural networks in supervised classification tasks consists in minimizing the cross-entropy loss. Cross-entropy drives the model towards learning the correct label distribution for a given sample. However, it has been shown in many works that this loss can be affected by biases in the data (Alvi et al., 2018; Kim et al., 2019; Nam et al., 2020; Sagawa et al., 2019; Tartaglione et al., 2021; Torralba et al., 2011) or suffer from noise and corruption in the labels (Elsayed et al., 2018; Graf et al., 2021). In fact, in the latest years, it has become increasingly evident how neural networks tend to rely on simple patterns in the data (Geirhos et al., 2019; Li et al., 2021). As deep neural networks grow in size and complexity, guaranteeing that they do not learn spurious elements in the training set is becoming a pressing issue to tackle. It is indeed a known fact that most commonly-used datasets are biased (Torralba et al., 2011) and that this affects the learned models (Tommasi et al., 2017). In particular, when the biases correlate very well with the target task, it is hard to obtain predictions that are independent of the biases. This can happen, e.g., in the presence of selection biases in the data. Furthermore, if the bias is easy to learn (e.g. a simple pattern or color), we will most likely obtain a biased model whose predictions rely mainly on these spurious attributes and not on the true, generalizable, and discriminative features. Learning fair and robust representations of the underlying samples, especially when dealing with highly-biased data, is the main objective of this work. Contrastive learning has recently gained attention for this purpose, showing superior robustness to cross-entropy (Graf et al., 2021). For this reason, in this work, we adopt a metric learning approach for supervised representation learning. Based on that, we provide a unified framework to analyze and compare existing formulations of contrastive losses<sup>1</sup> such as the InfoNCE loss (Chen et al., 2020; Oord et al.,
4
+
5
+ <sup>∗</sup>Corresponding author: carlo.barbano@unito.it
6
+
7
+ <sup>1</sup>We refer to any contrastive loss and not necessarily to losses based on pairs of samples as in (Sohn, 2016).
8
+
9
+ ![](_page_1_Figure_1.jpeg)
10
+
11
+ Figure 1: With $\epsilon$ -SupInfoNCE (a) we aim at increasing the minimal margin $\epsilon$ , between the distance $d^+$ of a positive sample $x^+$ (+ symbol inside) from an anchor x and the distance $d^-$ of the closest negative sample $x^-$ (- symbol inside). By increasing the margin, we can achieve a better separation between positive and negative samples. We show two different scenarios without margin (b) and with margin (c). Filling colors of datapoints represent different biases. We observe that, without imposing a margin, biased clusters might appear containing both positive and negative samples (b). This issue can be mitigated by increasing the $\epsilon$ margin (c).
12
+
13
+ 2019), the InfoL1O loss (Poole et al., 2019) and the SupCon loss (Khosla et al., 2020). Furthermore, we also propose a new supervised contrastive loss that can be seen as the simplest extension of the InfoNCE loss (Chen et al., 2020; Oord et al., 2019) to a supervised setting with multiple positives. Using the proposed metric learning approach, we can reformulate each loss as a set of contrastive, and surprisingly sometimes even non-contrastive, conditions. We show that the widely used SupCon loss is not a "straightforward" extension of the InfoNCE loss since it actually contains a set of "latent" non-contrastive constraints. Our analysis results in an in-depth understanding of the different loss functions, fully explaining their behavior from a metric point of view. Furthermore, by leveraging the proposed metric learning approach, we explore the issue of biased learning. We outline the limitations of the studied contrastive loss functions when dealing with biased data, even if the loss on the training set is apparently minimized. By analyzing such cases, we provide a more formal characterization of bias. This eventually allows us to derive a new set of regularization constraints for debiasing that is general and can be added to any contrastive or non-contrastive loss. Our contributions are summarized below:
14
+
15
+ - We introduce a simple but powerful theoretical framework for supervised representation learning, from which we derive different contrastive loss functions. We show how existing contrastive losses can be expressed within our framework, providing a uniform understanding of the different formulations. We derive a generalized form of the SupCon loss ($\epsilon$-SupCon), propose a novel loss $\epsilon$-SupInfoNCE, and empirically demonstrate its effectiveness;
16
+ - We provide a more formal definition of bias, thanks to the proposed metric learning approach, which is based on the distances among representations. This allows us to derive a new set of effective debiasing regularization constraints, which we call FairKL. We also analyze, theoretically and empirically, the debiasing power of the different contrastive losses, comparing $\epsilon$-SupInfoNCE and SupCon.
17
+
18
+ # Method
19
+
20
+ **SupCon** It is interesting to notice that the non-contrastive conditions in Eq. 5: $s_t^+ - s_i^+ \le 0 \quad \forall i, t \ne i$ are actually all fulfilled only when $s_i^+ = s_t^+ \quad \forall i, t \ne i$. This means that one tries to align all positive samples, regardless of their bias $b$, to a single point in the representation space. In other terms, at the optimal solution, one would also fulfill the following conditions:
21
+
22
+ $$s_{i}^{+,b} = s_{t}^{+,b}, \, s_{i}^{+,b'} = s_{t}^{+,b'}, \, s_{i}^{+,b} = s_{t}^{+,b'}, \, s_{i}^{+,b'} = s_{t}^{+,b} \quad \forall i,t \neq i$$
23
+ (12)
24
+
25
+ Realistically, this could lead to suboptimal solutions: we argue that the optimization process would mainly focus on the easier task, namely aligning bias-aligned samples, while neglecting the bias-conflicting ones. In highly biased settings, this could lead to worse performance than $\epsilon$-SupInfoNCE. More empirical results supporting this hypothesis are presented in Appendix C.2.
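
As a toy numerical illustration of these metric conditions, the sketch below checks, for a single anchor, whether a set of cosine similarities satisfies the $\epsilon$-margin contrastive condition and the non-contrastive alignment condition (which holds in every ordering only if all positive similarities are equal). The function name `check_conditions` and the default margin are ours, not from the paper:

```python
import numpy as np

def check_conditions(z_anchor, z_pos, z_neg, eps=0.1):
    """Check the metric-learning conditions on cosine similarities.

    Contrastive (eps-margin): s_j^- - s_i^+ <= -eps for all i, j.
    Non-contrastive (SupCon-style): s_t^+ - s_i^+ <= 0 for all i, t != i,
    which holds in both directions only when all s_i^+ are equal.
    """
    def sim(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    s_pos = np.array([sim(z_anchor, z) for z in z_pos])
    s_neg = np.array([sim(z_anchor, z) for z in z_neg])
    contrastive_ok = bool(s_neg.max() - s_pos.min() <= -eps)
    non_contrastive_ok = bool(np.allclose(s_pos, s_pos[0]))
    return contrastive_ok, non_contrastive_ok
```

Positives that point in the same direction satisfy both conditions; a positive rotated away from the anchor still satisfies the $\epsilon$-margin but breaks the alignment constraint.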
26
+
27
+ **EnD** The constraint in Eq. 9 is very similar to what was recently proposed in Tartaglione et al. (2021) with EnD. However, EnD lacks the additional constraint on the standard deviation of the distances, which is given by Eq. 10. An intuitive difference can be found in Figs. 3 and 4 of the Appendix. The constraints imposed by EnD (only first moments) can be fulfilled even if the bias features still affect the ordering of the positive samples. The use of constraints on the second moments, as in the proposed method, can remove this effect of the bias. An analytical comparison can be found in Appendix A.3.
28
+
29
+ **BiasCon** In Hong & Yang (2021), authors propose the BiasCon loss, which is similar to SupCon but only aligns positive bias-conflicting samples. It looks for an encoder f that fulfills:
30
+
31
+ $$s_j^- - s_i^{+,b'} \le -\epsilon \quad \forall i,j \quad \text{ and } \quad s_p^{+,b} - s_i^{+,b'} \le 0 \quad \forall i,p \quad \text{ and } \quad s_t^{+,b'} - s_i^{+,b'} \le 0 \quad \forall i,t \ne i$$
32
+ (13)
33
+
34
+ The problem here is that we try to separate the negative samples only from the positive bias-conflicting samples, ignoring the positive bias-aligned ones. This is probably why the authors proposed to combine this loss with a standard cross-entropy loss.
35
+
36
+ <sup>2</sup>The same reasoning can be applied to negative samples (omitted for brevity).
2211.14646/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2211.14646/paper_text/intro_method.md ADDED
@@ -0,0 +1,76 @@
1
+ # Introduction
2
+
3
+ While deep learning methods have become extremely successful in solving many computer vision tasks, they are generally opaque and do not admit easy debugging of errors. Many novel interpretability methods have been developed in recent years which attempt to analyze the rationale behind a model's predictions. In particular, it is natural to analyze the dependence of the model prediction on its input by perturbing parts of the input and observing corresponding changes in the output [22, 16, 38]. Common perturbations include adding Gaussian noise, Gaussian blurring, replacing with a baseline color, etc. However, many of these perturbation methods come with certain downsides. *Partial perturbations*, like Gaussian noise or blurring, slightly corrupt parts of the image while still preserving much of the information present in those parts. While this has the advantage of not changing the input distribution drastically, we can only measure the local sensitivity of the model: if the model were robust to these perturbations, it would be locally insensitive to perturbations on certain parts of the image, but it might still rely heavily on them for its prediction [27, 31]. *Full perturbation* methods remove the parts completely and replace them with a baseline color like black or grey. In discrete domains like natural language, this is often the most popular method, as it is easy to remove words from the input [17]. In images, however, this creates a large shift in input distribution, leading the model to perform poorly on such inputs [29, 30]. For example, if we randomly mask out 16 × 16 sized patches from the image, ResNets are more likely to predict that the image is a maze or crossword [12].
4
+
5
+ In recent work [21, 25, 8], it has been observed that vision transformers [6] are highly robust to many kinds of large-magnitude input perturbations like occlusions and domain shifts, maintaining up to 60% accuracy on ImageNet even if 80% of the input is randomly blacked out. Jain et al. [12] argue that this property can make interpretability methods based on full perturbation especially effective for transformers. They further propose to simply drop the tokens corresponding to masked-out input parts instead of blacking out or greying out image portions, just like dropping BPE tokens in a transformer-based language model. This makes the transformer model completely insensitive to the choice of baseline color and the shape of the mask.
6
+
7
+ Motivated by the same intuition, we devise a new masking technique for CNNs, which we call layer masking, that mitigates the drawbacks of full perturbation to a large extent. Layer masking (as depicted in Fig. 1) works by
8
+
9
+ ![](_page_1_Figure_0.jpeg)
10
+
11
+ Figure 1: An outline of layer masking, our proposed method, for a convolutional layer. The image is first masked and then padded using neighbor padding. The convolutional layer then acts on the padded image, and a maxpool of the same kernel size and stride acts on the mask. These are then propagated forward through the CNN. The mask boundary is highlighted in the padded image for illustrative purposes.
12
+
13
+ running the CNN only on the unmasked portion of the image, thus avoiding any large distribution shift. This is done by carefully masking and padding the input of each layer to make the model focus only on the unmasked input regions. Using this technique, we are able to randomly remove up to 50% of the input to a ResNet-50 (in the form of 16 × 16 sized patches) while maintaining a top-1 accuracy on ImageNet above 70%. We are also able to mask out objects from images precisely without leaking any information about those objects via the shape of the mask. In addition, layer masking operates at the pixel level and is thus much more flexible than token dropping for vision transformers, which only acts at the patch level. We also find that LIME [22] scores obtained using our masking method are more aligned with the most salient features of the image as compared to simply blacking or greying out the masked portion.
14
+
15
+ # Method
16
+
17
+ We design a novel feature masking technique for CNNs which we call *layer masking*. Given a model, an input image, and a mask for the input image, we aim to compute the model output such that (1) it doesn't depend on the masked out portion of the input and (2) it only depends on the unmasked portion of the input, and not the mask itself.
18
+
19
+ Modern CNNs primarily consist of convolutional layers, along with other layers like batch normalization, max pooling, average pooling, ReLU activations, etc. We can categorize these layers according to the size of their receptive fields. Layers with small receptive fields include convolutional, max-pooling and average-pooling layers with kernel size much smaller than the size of the image. Fully connected layers, on the other hand, have large receptive fields, as each output depends on all the inputs. Layers with a small receptive field are in general more interpretable because they have fewer parameters and are implicitly hierarchical: for example, a stack of convolutional layers with a small kernel size processes local information first and then progressively expands its receptive field to encompass the whole image. We exploit this structure by devising an algorithm which carefully masks the input and output at each layer with a small receptive field such that the information loss and artifacts created by the masking procedure at each step are minimal. We propagate both the input and the mask at each layer so as to simulate running a CNN on an irregularly shaped input corresponding to the unmasked input features, rather than substituting the masked inputs with a baseline color. We are careful, however, not to propagate forward any information in the masked-out input regions.
20
+
21
+ Let the input to a convolutional layer with small receptive field be $\boldsymbol{x} \in \mathbb{R}^{c \times n \times n}$ with output $\boldsymbol{y} \in \mathbb{R}^{c' \times n' \times n'}$ and binary input mask $\boldsymbol{m} \in \{0,1\}^{n \times n}$ ( $\boldsymbol{m}[u,v]=1$ implies that cell (u,v) is unmasked, else it is masked out). Each element of the output of this layer with kernel size $k \times k$ depends on at most $k^2$ input values. These input values may either be all masked, all unmasked or partially masked and unmasked (when the convolution is over the mask edge), depending on the values of $\boldsymbol{m}$ over the receptive field.
22
+
23
+ It is clear that our masking procedure should propagate forward the outputs which only depend on the unmasked input, and discard those outputs which depend only on the masked portion. However, it is not immediately obvious how to handle the outputs from the convolutions over the mask edge. The challenge here is that edge convolutions contain valuable information about the edges, and if we discard them at each layer, the unmasked portion of the image can quickly vanish to zero. Thus, we choose to propagate forward the edge convolutions. However, there is the danger of them distorting the natural distribution of the layer activations, as the output unavoidably depends on the masked out region which is filled with zeros. For example, in the third figure (bottom row) of Fig. 2, we see a slice of the activations obtained after applying the 1st residual block of ResNet-50 on the image with the central square region masked out at every layer, but including all the edge convolutions in the output. We see that the convolutions at the top edge of the mask result in a brighter top edge which indicates high activations. This is undesirable since this is an artifact created due to the masking method. We hypothesize that this is because the abrupt transition between the unmasked input and zeros trigger the filters sensitive to edges, thus creating a large activation.
24
+
25
+ To mitigate this issue, rather than just filling the masked-out portion with zeros, we pad the unmasked portion using a variant of replication padding we call **neighbor padding**. Specifically, we iteratively assign each masked input cell adjacent to the mask edge the average value of its immediate non-zero neighbors. This process is continued until the width of the padding is at least k, the kernel size of the layer. In Fig. 3, we see that after the cells near the edge are progressively filled using the values of their neighbors, the resultant image looks very natural and there is no sharp discontinuity near the edge. In an ablation study (see supplementary), we find that this works much better than padding with zeros.
26
+
27
+ ```
+ Algorithm 1 Neighbor padding algorithm (\operatorname{Pad}_k(\boldsymbol{x}, \boldsymbol{m}))
+ ```
+
+ ```
+ Input: Input to be padded x, Mask m, padding width k
+ Output: Padded input x'
+ Initialize x' \leftarrow x \odot m, \epsilon \leftarrow 10^{-8}
+ Initialize f \leftarrow \mathbf{1}_{3\times 3}, a 3\times 3 filter filled with ones
+ for i=1 to k do
+     n \leftarrow x'*f \text{ // Numerator of the neighbor average}
+     d \leftarrow m*f \text{ // Denominator of the neighbor average}
+     e \leftarrow (1-m) \odot n/(d+\epsilon) \text{ // Fill masked inputs}
+     x' \leftarrow x' + e
+     m \leftarrow \min(1, m+d) \text{ // Update masks}
+ end for
+ ```
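
To make the procedure concrete, here is a minimal NumPy sketch of the neighbor-padding loop for a single-channel 2D input. The helper names are ours, and the 3 × 3 neighborhood sum is written with explicit shifts rather than a convolution routine; the authors' actual implementation may differ:

```python
import numpy as np

def neighbor_sum(a):
    """Sum over each cell's 3x3 neighborhood (zero padding at the border),
    i.e. convolution with a 3x3 filter of ones."""
    p = np.pad(a, 1)
    H, W = a.shape
    return sum(p[i:i + H, j:j + W] for i in range(3) for j in range(3))

def neighbor_pad(x, m, k, eps=1e-8):
    """Neighbor padding sketch (Algorithm 1): iteratively fill masked cells
    adjacent to the mask edge with the average of their already-filled
    neighbors, repeated k times so the padding width is at least k."""
    x = x * m                        # x' <- x ⊙ m
    m = m.astype(float).copy()
    for _ in range(k):
        n = neighbor_sum(x)          # numerator of the neighbor average
        d = neighbor_sum(m)          # denominator (count of filled neighbors)
        e = (1 - m) * n / (d + eps)  # values for cells adjacent to the edge
        x = x + e
        m = np.minimum(1.0, m + d)   # mark newly filled cells as unmasked
    return x
```

On a constant image whose right half is masked, two iterations propagate the constant value into the masked half, so no sharp discontinuity is introduced at the mask edge.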
43
+
44
+ We also have to propagate the masks forward, such that for the output of any layer, the corresponding mask is of the same shape as the output and indicates which output values need to be masked out by the following layers. Since edge convolutions are not discarded at any step, the propagated mask must contain 1 for all output cells which depend on the unmasked portion of the input, and 0 everywhere else.
45
+
46
+ We now describe our method more formally. Suppose we are given a CNN f which is structured like a directed acyclic graph (DAG). Each node of the DAG represents a layer or operation which acts on the outputs of the nodes with which it has an incoming edge. We replace each layer with its masking version (subscripted with m) which acts on an input-mask pair. Let $g_k$ be a layer with a receptive field of size k. Then, we define its masking version:
47
+
48
+ $$g_{k,m}(\boldsymbol{x},\boldsymbol{m}) = (g_k(\operatorname{Pad}_k(\boldsymbol{x},\boldsymbol{m})), \operatorname{MaxPool}_k(\boldsymbol{m}))$$
49
+
50
+ In this equation, $g_k$ could be any convolutional or pooling layer with kernel size k and some stride s. $\operatorname{MaxPool}_k$ is a max pooling layer with the same kernel size and stride as $g_k$. $\operatorname{Pad}_k(\boldsymbol{x},\boldsymbol{m})$ is a function which neighbor pads $\boldsymbol{x} \odot \boldsymbol{m}$ with padding width k (described in Algorithm 1). Here, $\odot$ is the Hadamard product (with suitable broadcasting), and $*$ is convolution with zero padding. The max pool layer ensures that the output mask contains a 1 for all convolutions where even a single input was unmasked.
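
A minimal NumPy sketch of this masking version for a single-channel k × k convolution with stride 1. The helper names are ours; the padding function is passed in as an argument, and the usage example below substitutes simple zero-fill for neighbor padding to stay self-contained:

```python
import numpy as np

def conv2d_valid(x, w):
    """Plain single-channel 2D cross-correlation, stride 1, no padding."""
    k = w.shape[0]
    H, W = x.shape
    out = np.zeros((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + k, j:j + k] * w)
    return out

def maxpool_valid(m, k):
    """Max pooling with kernel k and stride 1: the output mask is 1 wherever
    at least one input cell in the receptive field was unmasked."""
    H, W = m.shape
    out = np.zeros((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = m[i:i + k, j:j + k].max()
    return out

def masked_conv(x, m, w, pad_fn):
    """Masking version g_{k,m}(x, m) = (g_k(Pad_k(x, m)), MaxPool_k(m))."""
    k = w.shape[0]
    y = conv2d_valid(pad_fn(x, m, k), w)
    m_out = maxpool_valid(m, k)
    return y, m_out
```

With a fully unmasked input the output mask is all ones; with a single unmasked pixel, only the output cells whose receptive field touches that pixel stay unmasked.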
51
+
52
+ Layers which act independently on each element (like ReLU and BatchNorm) can be considered to be a special case of the above with k = 1. In this case, the above equation is greatly simplified and becomes:
53
+
54
+ $$g_m(\boldsymbol{x}, \boldsymbol{m}) = (g(\boldsymbol{x} \odot \boldsymbol{m}), \boldsymbol{m})$$
55
+
56
+ In models which use residual connections, two input mask pairs can be added together as:
57
+
58
+ $$(x_1, m_1) + (x_2, m_2) = ((x_1 + x_2) \odot (m_1 \odot m_2), m_1 \odot m_2)$$
59
+
60
+ We lose some information here by taking the Hadamard product of the masks, but this is negligible in practice.
61
+
62
+ The penultimate layer is generally a global average pooling layer which averages over the height and width of the activation maps and returns a single number per channel. If h is a global average pooling layer, then we define its masking version:
63
+
64
+ $$h_m(\boldsymbol{x}, \boldsymbol{m}) = h(\boldsymbol{x} \odot \boldsymbol{m})/h(\boldsymbol{m})$$
65
+
66
+ The layer's output is rescaled by the mean value of the mask, which ensures that the output's magnitude is comparable to when there is no masking. Such layers may also be utilized for recalibrating channel-wise features by multiplying the activation maps with the output of the average pooling layer (like in Squeeze Excitation blocks [11]). Layers after the penultimate global average pooling layer act on the input as normal.
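
The masked global average pooling above can be sketched in a few lines of NumPy (illustrative only, single channel):

```python
import numpy as np

def masked_global_avg_pool(x, m, eps=1e-8):
    """h_m(x, m) = h(x ⊙ m) / h(m): average over unmasked cells only,
    rescaled so the magnitude matches the fully unmasked case."""
    return float((x * m).mean() / (m.mean() + eps))
```

Masking out half of a 2 × 2 map leaves exactly the mean of the two remaining cells, rather than an average diluted by zeros.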
67
+
68
+ We can now create a new model $f_m$ which has the same DAG structure as the original model f, except that each layer $g^i$ or $h^i$ has been replaced with the corresponding masking version $g^i_m$ or $h^i_m$. $f_m$ acts on an image-mask pair and produces an output which depends only on the unmasked portion of the image.
69
+
70
+ ![](_page_3_Figure_14.jpeg)
71
+
72
+ Figure 2: Top row: A dog image, sample activations after the first residual block, and the mask to be applied to the image. Bottom row: The same activations when using grey-out masking, layer masking with neighbor padding (our method), and layer masking without neighbor padding (using zero padding). Neighbor padding helps in eliminating undesirable edge artifacts encountered in zero padding and greying out. Layer masking completely zeros out the masked-out region, unlike greying out, which has non-zero values after a few layers.
73
+
74
+ ![](_page_3_Figure_16.jpeg)
75
+
76
+ Figure 3: A visual depiction of neighbor padding on a part of the image as k increases. The grey line is the mask edge (added in for illustrative purposes), the cells near the edge are filled progressively with the average of their neighbors' values
2212.01026/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2212.01026/paper_text/intro_method.md ADDED
@@ -0,0 +1,129 @@
1
+ # Introduction
2
+
3
+ Semi-supervised and supervised Graph Neural Networks (GNNs) [\[Velickovic et al.](#page-8-0) [2018,](#page-8-0) [Hamilton, Ying, and](#page-7-0) [Leskovec](#page-7-0) [2017,](#page-7-0) [Song, Zhang, and King](#page-8-1) [2022,](#page-8-1) [Zhang et al.](#page-8-2) [2022b\]](#page-8-2) require full access to class labels. However, unsupervised GNNs [\[Klicpera, Bojchevski, and Günnemann](#page-7-1) [2019,](#page-7-1) [Wu et al.](#page-8-3) [2019,](#page-8-3) [Zhu and Koniusz](#page-8-4) [2021\]](#page-8-4) and recent Self-Supervised Learning (SSL) models do not require labels [\[Song et al.](#page-8-5) [2021,](#page-8-5) [Pan et al.](#page-7-2) [2018\]](#page-7-2) to train embeddings. Among SSL methods, Contrastive Learning (CL) achieves comparable performance with its supervised counterparts on many tasks [\[Chen et al.](#page-7-3) [2020,](#page-7-3) [Gao, Yao, and Chen](#page-7-4) [2021\]](#page-7-4). CL has also been applied recently to the graph domain. A typical
4
+
5
+ <span id="page-0-0"></span>![](_page_0_Figure_10.jpeg)
6
+
7
+ Figure 1: Our GCL with spectral feature augmentation by the incomplete power iteration implicitly performs three steps. Let blue and red ellipses represent spectra of feature maps H<sup>α</sup> and H<sup>β</sup> of two views. Singular values of unaligned views (top left) are firstly rebalanced (top right) and the noise is injected into singular values (bottom left). Given rebalanced singular values, corresponding singular vectors are aligned with equalized emphasis (leading unbalanced singular values would emphasize the alignment of leading singular vectors, sacrificing the quality of alignment of other singular vectors).
8
+
9
+ Graph Contrastive Learning (GCL) method forms multiple graph views via stochastic augmentation of the input to learn representations by contrasting so-called positive samples with negative samples [\[Zhu et al.](#page-8-6) [2020,](#page-8-6) [Peng et al.](#page-7-5) [2020,](#page-7-5) [Zhu,](#page-8-7) [Sun, and Koniusz](#page-8-7) [2021,](#page-8-7) [Zhu and Koniusz](#page-8-8) [2022,](#page-8-8) [Zhang et al.](#page-8-9) [2022c\]](#page-8-9). As an indispensable part of GCL, the significance of graph augmentation has been well studied [\[Hafidi et al.](#page-7-6) [2020,](#page-7-6) [Zhu et al.](#page-8-10) [2021b,](#page-8-10) [Yin et al.](#page-8-11) [2021\]](#page-8-11). Popular random data augmentations are just one strategy to construct views, and their noise may affect adversely downstream tasks [\[Suresh](#page-8-12) [et al.](#page-8-12) [2021,](#page-8-12) [Tian et al.](#page-8-13) [2020\]](#page-8-13). Thus, some works [\[Yin et al.](#page-8-11) [2021,](#page-8-11) [Tian et al.](#page-8-13) [2020,](#page-8-13) [Suresh et al.](#page-8-12) [2021\]](#page-8-12) learn graph augmentations but they require supervision.
10
+
11
+ The above issue motivates us to propose a simple/efficient data augmentation model which is complementary with existing augmentation strategies. We target Feature Augmentation (FA) as scarcely any FA works exist in the context of CL and GCL. In the image domain, a simple FA [\[Upchurch et al.](#page-8-14) [2017,](#page-8-14) [Bengio et al.](#page-7-7) [2013\]](#page-7-7) showed that perturbing feature representations of an image results in a representation of another image where both images share some semantics [\[Wang](#page-8-15) [et al.](#page-8-15) [2019\]](#page-8-15). However, perturbing features randomly ignores
12
+
13
+ <sup>\*</sup>Corresponding author. PK was primarily concerned with the theoretical analysis (*e.g*., Prop. [2](#page-3-0) & [3\)](#page-3-1).
14
+
15
+ This paper has been published at the Thirty-Seventh AAAI Conference on Artificial Intelligence (AAAI 2023).
16
+
17
+ covariance of feature representations, and ignores semantic correlations. Hence, we opt for injecting random noise into the singular values of feature maps, as such a spectral feature augmentation does not alter the orthogonal bases of feature maps by much, thus helping preserve semantic correlations.
18
+
19
+ Moreover, as typical GCL aligns two data views [Wang and Isola 2020], unbalanced singular values of the two data views may affect the quality of alignment. As several leading singular values (acting as weights on the loss) dominate the alignment process, GCL favors aligning the leading singular vectors of the two data views while sacrificing the remaining orthogonal directions with small singular values. In other words, the unbalanced spectrum leads to a suboptimal orthonormal-basis alignment, which results in a suboptimal GCL model.
20
+
21
+ To rebalance the unbalanced spectrum and augment the leading singular values, we present a novel and efficient *Spectral Feature Augmentation* (SFA). To this end, we propose the so-called incomplete power iteration which, under just one or two iterations, partially balances the singular values of feature maps and implicitly injects noise into these singular values. We evaluate our method on various datasets for node-level tasks (*i.e.*, node classification and node clustering). We also show that our method is compatible with other augmentation strategies and contrastive losses.
22
+
23
+ We summarize our contributions as follows:
24
+
25
+ - i. We propose a simple/efficient spectral feature augmentation for GCL which is independent of the choice of contrastive loss, *i.e.*, we employ InfoNCE and Barlow Twins.
26
+ - ii. We introduce the so-called incomplete power iteration which, under just one or two iterations, partially balances spectra of two data views and injects the augmentation noise into their singular values. The rebalanced spectra help align orthonormal bases of both data views.
27
+ - iii. As the incomplete power iteration is stochastic in its nature, we derive its analytical form which provably demonstrates its spectrum rebalancing effect in expectation, and captures the variance of the spectral augmentation.
28
+ - iv. For completeness, we devise other spectral augmentation models, based on the so-called MaxExp and Power Norm. operators and Grassmann feature maps, whose rebalancing and noise-injection profiles differ from those of our method.
29
+
30
+ # Method
31
+
32
+ Inspired by recent advances in augmentation-based GCL, our approach learns node representations by rebalancing spectrum of two data views and performing the spectral feature augmentation via the incomplete power iteration. SFA is complementary to the existing data augmentation approaches. Figure 2a illustrates our framework. The Notations section (supplementary material) explains our notations.
33
+
34
+ **Graph Augmentation** ( $\mathcal{A}_G$ ). Augmented graph ( $\tilde{\mathbf{A}}, \tilde{\mathbf{X}}$ ) is generated by $\mathcal{A}_G$ by directly adding random perturbations to the original graph ( $\mathbf{A}, \mathbf{X}$ ). Different augmented graphs are constructed given one input ( $\mathbf{A}, \mathbf{X}$ ), yielding correlated views, *i.e.*, ( $\tilde{\mathbf{A}}^{\alpha}, \tilde{\mathbf{X}}^{\alpha}$ ) and ( $\tilde{\mathbf{A}}^{\beta}, \tilde{\mathbf{X}}^{\beta}$ ). In the common GCL setting [Zhu et al. 2020], the graph structure is augmented by permuting edges, whereas attributes by masking.
35
+
36
+ **Graph Neural Network Encoders.** Our framework admits various choices of the graph encoder. We opt for simplicity and adopt the commonly used graph convolution network (GCN) [Kipf and Welling 2017] as our base graph encoder. As shown in Fig. 2a, we use a shared graph encoder for each view, *i.e.*, $f: \mathbb{R}^{n \times d_x} \times \mathbb{R}^{n \times n} \longmapsto \mathbb{R}^{n \times d_h}$ . We consider two graphs generated from $\mathcal{A}_G$ as two congruent structural views and define the GCN encoder with 2 layers as:
37
+
38
+ $$f(\mathbf{X}, \mathbf{A}) = GCN_2 (GCN_1(\mathbf{X}, \mathbf{A}), \mathbf{A}),$$
39
+ where $GCN_l(\mathbf{X}, \mathbf{A}) = \sigma(\hat{\mathbf{D}}^{-\frac{1}{2}} \hat{\mathbf{A}} \hat{\mathbf{D}}^{-\frac{1}{2}} \mathbf{X} \Theta).$ (1)
40
+
41
+ <span id="page-2-0"></span>![](_page_2_Figure_0.jpeg)
42
+
43
+ ![](_page_2_Figure_1.jpeg)
44
+
45
+ (b) Simulation: spectrum obtained by Alg. 1.
46
+
47
+ Figure 2: Our GCL model. Two graph views are generated by data augmentation and passed into graph neural network encoders with shared parameters to learn node representations. The proposed spectral feature augmentation rebalances (partially equalizes) the spectrum of each feature map, and implicitly injects the noise into rebalanced singular values. Such representations are fed into the projection head and the contrastive loss. Figure 1 explains the role of our spectral feature augmentation.
48
+
49
+ Moreover, $\tilde{\mathbf{A}} = \hat{\mathbf{D}}^{-1/2} \hat{\mathbf{A}} \hat{\mathbf{D}}^{-1/2} \in \mathbb{R}^{n \times n}$ is the degree-normalized adjacency matrix, $\hat{\mathbf{D}} \in \mathbb{R}^{n \times n}$ is the degree matrix of $\hat{\mathbf{A}} = \mathbf{A} + \mathbf{I_N}$ where $\mathbf{I_N}$ is the identity matrix, $\mathbf{X} \in \mathbb{R}^{n \times d_x}$ contains the initial node features, $\mathbf{\Theta} \in \mathbb{R}^{d_x \times d_h}$ contains network parameters, and $\sigma(\cdot)$ is a parametric ReLU (PReLU). The encoder outputs feature maps $\mathbf{H}^{\alpha}$ and $\mathbf{H}^{\beta}$ for two views.
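
As a concrete sketch of Eq. (1), the forward pass of the two-layer GCN encoder can be written in NumPy as follows. This is our own transcription: ReLU stands in for the paper's PReLU, and the weight matrices `Theta1`, `Theta2` are placeholders:

```python
import numpy as np

def gcn_layer(X, A, Theta):
    """One GCN layer: sigma(D^{-1/2} (A + I) D^{-1/2} X Theta); plain ReLU
    stands in for the PReLU used in the paper."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))  # D^{-1/2} as a vector
    A_norm = A_hat * np.outer(d_inv_sqrt, d_inv_sqrt)
    return np.maximum(A_norm @ X @ Theta, 0.0)

def encoder(X, A, Theta1, Theta2):
    """Two-layer GCN encoder f(X, A) = GCN_2(GCN_1(X, A), A), as in Eq. (1)."""
    return gcn_layer(gcn_layer(X, A, Theta1), A, Theta2)
```

On a two-node graph with a single edge, identity features and identity weights, each propagation step simply averages the two nodes.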
50
+
51
+ Spectral Feature Augmentation (SFA). $\mathbf{H}^{\alpha}$ and $\mathbf{H}^{\beta}$ are fed to the feature augmenting function $\mathcal{A}_{F^*}$ where random noises are added to the spectrum via the incomplete power iteration. We explain the proposed SFA in the Spectral Feature Augmentation for GCL section and detail its properties in Propositions 1, 2 and 3. SFA results in the spectrally-augmented feature maps, *i.e.*, $\widetilde{\mathbf{H}}^{\alpha}$ and $\widetilde{\mathbf{H}}^{\beta}$ . SFA is followed by a shared projection head $\theta: \mathbb{R}^{n \times d_h} \longmapsto \mathbb{R}^{n \times d_z}$ which is an MLP with two hidden layers and PReLU nonlinearity. It maps $\widetilde{\mathbf{H}}^{\alpha}$ and $\widetilde{\mathbf{H}}^{\beta}$ into two node representations $\mathbf{Z}^{\alpha}$ , $\mathbf{Z}^{\beta} \in \mathbb{R}^{n \times d_z}$ (two congruent views of one graph) on which the contrastive loss is applied. As described in [Chen et al. 2020], it is beneficial to define the contrastive loss on $\mathbf{Z}$ rather than $\mathbf{H}$ .
52
+
53
+ **Contrastive Training.** To train the encoders end-to-end and learn rich node representations that are agnostic to downstream tasks, we utilize the InfoNCE loss [Chen et al. 2020]:
54
+
55
+ <span id="page-2-4"></span>
56
+ $$\mathcal{L}_{contrastive}(\tau) = \underbrace{\mathbb{E}_{\mathbf{z}, \mathbf{z}^{+}} \left[ -\mathbf{z}^{\top} \mathbf{z}^{+} / \tau \right]}_{alignment} + \underbrace{\mathbb{E}_{\mathbf{z}, \mathbf{z}^{+}} \left[ \log \left( e^{\mathbf{z}^{\top} \mathbf{z}^{+} / \tau} + \sum_{\mathbf{z}^{-} \in \mathbf{Z}^{\alpha\beta} \setminus \{\mathbf{z}, \mathbf{z}^{+}\}} e^{\mathbf{z}^{\top} \mathbf{z}^{-} / \tau} \right) \right]}_{uniformity}, \quad (2)$$
57
+
58
+ where $\mathbf{z}$ is the representation of the anchor node in one view $(i.e., \mathbf{z} \in \mathbf{Z}^{\alpha})$ and $\mathbf{z}^+$ denotes the representation of the anchor node in another view $(i.e., \mathbf{z}^+ \in \mathbf{Z}^{\beta})$ , whereas $\{\mathbf{z}^-\}$ are from the set of node representations other than $\mathbf{z}$ and $\mathbf{z}^+$ $(i.e., \mathbf{Z}^{\alpha\beta} \equiv \mathbf{Z}^{\alpha} \cup \mathbf{Z}^{\beta}$ and $\mathbf{z}^- \in \mathbf{Z}^{\alpha\beta} \setminus \{\mathbf{z}, \mathbf{z}^+\}$ ). The first part of Eq. (2) maximizes the alignment of two views (representations of the same node become similar). The second part of Eq. (2) minimizes the pairwise similarity via LogSumExp. Pushing node representations away from each other makes them uniformly distributed [Wang and Isola 2020].
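
A direct NumPy transcription of Eq. (2) may help; this is illustrative only, omitting the batching and numerical-stability tricks (e.g. subtracting the row maximum before exponentiating) of a real implementation:

```python
import numpy as np

def info_nce(Za, Zb, tau=0.5):
    """InfoNCE of Eq. (2): the positive of anchor z = Za[i] is z+ = Zb[i];
    the log-sum-exp runs over z+ and all samples in Z^a ∪ Z^b except z."""
    n = Za.shape[0]
    Z = np.concatenate([Za, Zb], axis=0)   # all 2n representations
    S = Z @ Z.T / tau                      # scaled pairwise similarities
    losses = []
    for i in range(n):
        mask = np.ones(2 * n, dtype=bool)
        mask[i] = False                    # exclude the anchor itself
        alignment = -S[i, n + i]           # pull z and z+ together
        uniformity = np.log(np.exp(S[i, mask]).sum())
        losses.append(alignment + uniformity)
    return float(np.mean(losses))
```

As a sanity check, two perfectly aligned views yield a lower loss than views whose positive pairs are shuffled.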
59
+
60
+ **Algorithm 1** (incomplete power iteration).
+ Input: feature map $\mathbf{H}$; the number of iterations $k$; $\mathbf{r}^{(0)} \sim \mathcal{N}(0, \mathbf{I})$.
+ For $i = 1$ to $k$ do: $\mathbf{r}^{(i)} = \mathbf{H}^{\top} \mathbf{H} \mathbf{r}^{(i-1)}$; end for.
+ Return $\widetilde{\mathbf{H}} = \mathbf{H} - \frac{\mathbf{H}\mathbf{r}^{(k)}\mathbf{r}^{(k)\top}}{\|\mathbf{r}^{(k)}\|_2^2}$.
61
+
62
+ Our spectral feature augmentation is inspired by the rank-1 update [Yu, Cai, and Li 2020]. Let $\mathbf{H} = f(\mathbf{X}, \mathbf{A})$ be the graph feature map with the singular value decomposition $\mathbf{H} = \mathbf{U} \mathbf{\Sigma} \mathbf{V}^{\top}$ where $\mathbf{H} \in \mathbb{R}^{n \times d_h}$ , $\mathbf{U}$ and $\mathbf{V}$ are unitary matrices, and $\mathbf{\Sigma} = \operatorname{diag}(\sigma_1, \sigma_2, \cdots, \sigma_{d_h})$ is the diagonal matrix with singular values $\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_{d_h}$ . Starting from a random point $\mathbf{r}^{(0)} \sim \mathcal{N}(0, \mathbf{I})$ and the recursion $\mathbf{r}^{(k)} = \mathbf{H}^{\top} \mathbf{H} \mathbf{r}^{(k-1)}$ , we generate augmented feature maps $\widetilde{\mathbf{H}}$ by:
63
+
64
+ <span id="page-2-7"></span>
65
+ $$\widetilde{\mathbf{H}}(\mathbf{H}; \mathbf{r}^{(0)}) = \mathbf{H} - \mathbf{H}_{\text{LowRank}} = \mathbf{H} - \frac{\mathbf{H}\mathbf{r}^{(k)}\mathbf{r}^{(k)\top}}{\|\mathbf{r}^{(k)}\|_2^2}.$$
66
+ (3)
67
+
68
+ We often write $\widetilde{\mathbf{H}}$ rather than $\widetilde{\mathbf{H}}(\mathbf{H}; \mathbf{r}^{(0)})$ , and we often think of $\widetilde{\mathbf{H}}$ as a matrix. We summarize the proposed SFA in Alg. 1.
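
A NumPy sketch of Alg. 1 and the rank-1 update of Eq. (3) (our own transcription; the `rng` argument is only for reproducible seeding):

```python
import numpy as np

def spectral_feature_aug(H, k=1, rng=None):
    """Incomplete power iteration (Alg. 1) and rank-1 update (Eq. 3):
    run k iterations r <- H^T H r from a random start, then subtract the
    corresponding rank-1 component from H."""
    rng = np.random.default_rng(rng)
    r = rng.standard_normal(H.shape[1])
    for _ in range(k):
        r = H.T @ (H @ r)
    r = r / np.linalg.norm(r)       # normalizing r absorbs the ||r||_2^2 in Eq. (3)
    return H - np.outer(H @ r, r)   # H - H r r^T / ||r||^2
```

For several iterations, `r` approaches the leading right-singular vector, so the update suppresses the largest singular value most, which is the spectrum-rebalancing effect described in Prop. 1; the randomness of `r` supplies the implicit noise injection.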
69
+
70
+ <span id="page-2-3"></span>**Proposition 1.** Let $\widetilde{\mathbf{H}}$ be the augmented feature matrix obtained via Alg. 1 for the k-th iteration starting from a random vector $\mathbf{r}^{(0)}$ drawn from $\mathcal{N}(0,\mathbf{I})$ . Then $\mathbb{E}_{\mathbf{r}^{(0)} \sim \mathcal{N}(0,\mathbf{I})}(\widetilde{\mathbf{H}}(\mathbf{H}; \mathbf{r}^{(0)})) = \mathbf{U}\widetilde{\Sigma}\mathbf{V}^{\top}$ has rebalanced spectrum<sup>2</sup> $\widetilde{\Sigma} = \operatorname{diag}[(1 - \lambda_1(k))\sigma_1, (1 - \lambda_2(k))\sigma_2, \cdots, (1 - \lambda_{d_h}(k))\sigma_{d_h}]$ where $\lambda_i(k) = \mathbb{E}_{\mathbf{y} \sim \mathcal{N}(0,\mathbf{I})}\left(\frac{(y_i\sigma_i^{2k})^2}{\sum_{l=1}^{d_h}(y_l\sigma_l^{2k})^2}\right)$ and $\mathbf{y} = \mathbf{V}^{\top}\mathbf{r}^{(0)}$ , because $0 \le 1 - \lambda_1(k) \le 1 - \lambda_2(k) \le \cdots \le 1 - \lambda_{d_h}(k) \le 1$ for $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_{d_h}$ (sorted singular values from the SVD), and so $(1 - \lambda_i)$ gets smaller or larger as $\sigma_i$ gets larger or smaller, respectively.
71
+
72
+ <span id="page-2-5"></span> $<sup>^1</sup>$ We apply Eq. (3) on both views $\mathbf{H}^{\alpha}$ and $\mathbf{H}^{\beta}$ separately to obtain spectrally rebalanced/augmented $\widetilde{\mathbf{H}}^{\alpha}$ and $\widetilde{\mathbf{H}}^{\beta}$ .
73
+
74
+ <span id="page-2-6"></span><sup>&</sup>lt;sup>2</sup> "Rebalanced" means the output spectrum is flatter than the input.
75
+
76
+ <span id="page-3-2"></span>![](_page_3_Figure_0.jpeg)
77
+
78
+ ![](_page_3_Figure_1.jpeg)
79
+
80
+ Figure 3: Toy illustration of Prop. 2 and 3. Let $\sigma_2, \cdots, \sigma_5$ be 1.5, 0.9, 0.2, 0.01. We investigate the impact of iterations $k \in \{1, 2, 4, 8\}$ . Fig. 3a shows distribution x(z) given $\sigma_1 = 2$ . Fig. 3b shows the expected value $\phi(\sigma_1, k) = \sigma_1(1 - \lambda_1)$ where $\lambda_1 = \mathbb{E}_{z \sim x_1}(z)$ for $0 \le \sigma_1 \le 3$ . The deviation is indicated by $\phi_{\pm \omega_1}(\sigma_1, k) = \sigma_1(1 - \lambda_1 \pm \omega_1)$ . Finally, Fig. 3c is obtained via Alg. 1 (the incomplete power iteration). To this end, we generated randomly a feature matrix $\mathbf{H}$ and substituted its singular values by $\sigma_1, \cdots, \sigma_5$ . Notice that for $k = 1, 1 \le \sigma_1 \le 3$ , push-forward $\phi(\sigma_1, 1)$ and $\phi'(\sigma_1, 1)$ in Fig. 3b and 3c are around 0.8 (the balancing of spectrum) and the high deviation indicates the singular value undergoes the spectral augmentation. For $k \ge 2$ , both balancing and spectral augmentation effects decline. Note theoretical $\phi$ in Prop. 2 and real $\phi'$ from Alg. 1 match.
81
+
86
+ **Push-forward Function** (*Partial balancing of spectrum*). Prop. 1 shows that, in expectation, our incomplete power iteration rebalances the spectrum according to the push-forward function $\phi(\sigma_i;k) = \sigma_i(1-\lambda_i(k))$ where $\lambda_i(k)$ is the expected value of $\lambda_i'(\mathbf{y},k) = \frac{\left(y_i\sigma_i^{2k}\right)^2}{\sum_{l=1}^{d_h}\left(y_l\sigma_l^{2k}\right)^2}$ w.r.t. the random variable $\mathbf{y} \sim \mathcal{N}(0,\mathbf{I})$ (see Eq. (15)).
87
+
88
+ Alg. 1 returns an instance governed by the currently drawn $\mathbf{y}$. In a feed-forward step of the network, the push-forward function realizes $\phi'(\sigma_i; \mathbf{y}, k) = \sigma_i(1 - \lambda_i'(\mathbf{y}, k))$ . Thus, below we study the analytical expression for $\phi(\sigma_i; k)$ and its variance to understand how drawing $\mathbf{y} \sim \mathcal{N}(0, \mathbf{I})$ translates into the variance introduced by the implicit spectral augmentation of the singular values.
89
+
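This per-instance push-forward can be simulated directly; a sketch of the incomplete power iteration (our own reconstruction for illustration; matrix sizes, seed, and the toy spectrum are illustrative):

```python
import numpy as np

def incomplete_power_iteration(H, k, rng):
    """Sketch of Alg. 1: k steps of r <- H^T H r, then subtract the
    rank-1 component of H along the normalized r."""
    r = rng.standard_normal(H.shape[1])
    for _ in range(k):
        r = H.T @ (H @ r)
    r /= np.linalg.norm(r)
    return H - np.outer(H @ r, r)

# Build a feature matrix with the prescribed toy singular values of Fig. 3
rng = np.random.default_rng(0)
s = np.array([2.0, 1.5, 0.9, 0.2, 0.01])
U, _ = np.linalg.qr(rng.standard_normal((100, 5)))
V, _ = np.linalg.qr(rng.standard_normal((5, 5)))
H = U @ np.diag(s) @ V.T

# One draw of phi'(sigma_i; y, k=1): the per-instance push-forward;
# the leading singular value is damped, and the rank drops by one.
s_new = np.linalg.svd(incomplete_power_iteration(H, 1, rng),
                      compute_uv=False)
```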
90
+ <span id="page-3-0"></span>**Proposition 2.** Analytical Expectation. Let $\beta_i = (\sigma_i^{2k})^2$ , then the expected value $\mathbb{E}_{\mathbf{y} \sim \mathcal{N}(0,\mathbf{I})} \frac{\beta_i y_i^2}{\beta_i y_i^2 + \sum_{l \neq i} \beta_l y_l^2} = \lambda_i(k)$ can be expressed as $\mathbb{E}(x_i)$ over the random variable $x_i = \frac{u}{u + v_i}$ for $u \sim \mathcal{G}(\frac{1}{2}, 2)$ and $v_i \sim \mathcal{G}(\alpha_i, 2\gamma_i)$ ( $\mathcal{G}$ is the Gamma distribution) with $\alpha_i = \frac{1}{2} \frac{(\sum_{l \neq i} \beta_l)^2}{\sum_{l \neq i} \beta_l^2}$ and $\gamma_i = \frac{1}{\beta_i} \frac{\sum_{l \neq i} \beta_l^2}{\sum_{l \neq i} \beta_l}$ . Since the PDF $x_i^{\bullet}(z) = \frac{\gamma_i}{(1 - (1 - \gamma_i)z)^2} \cdot \mathcal{B}(\frac{\gamma_i z}{1 - (1 - \gamma_i)z}; \frac{1}{2}, \alpha_i)$ , where $\mathcal{B}$ is the Beta distribution, has support $z \in [0, 1]$ , we obtain $\lambda_i = \mathbb{E}(z) = \int_0^1 z \cdot x_i^{\bullet}(z) \, \mathrm{d}z = \gamma_i^{\frac{1}{2}} \frac{\Gamma(\frac{3}{2})\Gamma(\frac{1}{2} + \alpha_i)}{\Gamma(\frac{1}{2})\Gamma(\frac{3}{2} + \alpha_i)} \cdot {}_2F_1(\frac{3}{2}, \frac{1}{2} + \alpha_i, \frac{3}{2} + \alpha_i, 1 - \gamma_i)$ where ${}_2F_1(\cdot)$ is the hypergeometric function.
91
+
92
+ *Proof.* See Proof of Proposition 2 (supplementary material).
93
+
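As an illustrative numerical check (our own sketch, not from the paper), the Gamma reparametrization of Prop. 2 can be compared by Monte Carlo against the direct definition of $\lambda_i$; the agreement is only approximate because $v_i$ moment-matches a weighted sum of $\chi^2$ variables:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = np.array([2.0, 1.5, 0.9, 0.2, 0.01])   # toy spectrum of Fig. 3
k, i, n = 1, 0, 400_000
beta = sigma ** (4 * k)                        # beta_l = (sigma_l^{2k})^2
rest = np.delete(beta, i)

# Direct Monte Carlo estimate of lambda_i from its definition in Prop. 1
y = rng.standard_normal((n, len(beta)))
lam_direct = ((beta[i] * y[:, i] ** 2) / (y ** 2 @ beta)).mean()

# Estimate via Prop. 2's x_i = u / (u + v_i) with matched Gamma moments
alpha_i = 0.5 * rest.sum() ** 2 / (rest ** 2).sum()
gamma_i = (rest ** 2).sum() / (beta[i] * rest.sum())
u = rng.gamma(0.5, 2.0, n)
v = rng.gamma(alpha_i, 2.0 * gamma_i, n)
lam_gamma = (u / (u + v)).mean()
```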
94
+ <span id="page-3-1"></span>**Proposition 3.** Analytical Variance. Following assumptions of Proposition 2, the variance $\omega_i^2$ of $x_i^{\bullet}(z)$ can be expressed as $\omega_i^2 = \mathbb{E}(z^2) - (\mathbb{E}(z))^2 = \int_0^1 z^2 \cdot x_i^{\bullet}(z) \, \mathrm{d}z - \lambda_i^2 = 0.56419 \, \gamma_i^{\frac{1}{2}} \frac{\Gamma(\frac{1}{2} + \alpha_i)}{\Gamma(\alpha_i)} \left(0.4 \cdot {}_2F_1\left(\frac{5}{2}, 1 - \alpha_i, \frac{7}{2}, 1\right) \cdot {}_2F_1\left(\frac{5}{2}, \frac{3}{2} + \alpha_i, \frac{5}{2} + \alpha_i, 1 - \gamma_i\right) + 0.28571(\gamma_i - 1) \cdot {}_2F_1\left(\frac{7}{2}, 1 - \alpha_i, \frac{9}{2}, 1\right) \cdot {}_2F_1\left(\frac{7}{2}, \frac{3}{2} + \alpha_i, \frac{7}{2} + \alpha_i, 1 - \gamma_i\right)\right) - \lambda_i^2.$
95
+
96
+ *Proof.* See Proof of Proposition 3 (supplementary material).
97
+
98
+ **Note on Spectrum Rebalancing.** Fig. 3 explains the consequences of Prop. 2 & 3 and connects them with Alg. 1. Notice the following: (i) For k=1 (one iteration), the analytical form (Fig. 3b) and the simulated incomplete power iteration (Fig. 3c) indeed both exhibit flattened $\phi$ and $\phi'$ for $1 \le \sigma_1 \le 3$. (ii) The injected variance is clearly visible in that flattened range (we know the quantity of injected noise). (iii) The analytical and simulated variances match.
99
+
100
+ **Choice of the Number of Iterations (k).** In Fig. 3b, we plot $\phi(\sigma_i;k)$ using our analytical formulation. The best rebalancing effect is achieved for k=1. For example, the red line (k=1) is mostly flat over $\sigma_i$. This indicates that the singular values $\sigma_i \geq 1$ are mapped to a similar value, which promotes flattening of the spectrum. When $\sigma_i \geq 2$, the green line eventually reduces to zero. This indicates that only datasets with spectrum falling into the range between 1 and 1.2 will benefit from flattening. Note also that the spectral augmentation (variance) in Fig. 3b is highest for k=1. Thus, in all experiments (including image classification), we set k=1, and the SFA becomes:
101
+
102
+ $$\widetilde{\mathbf{H}} = \mathbf{H} - \mathbf{H}_{LowRank} = \mathbf{H} \left( \mathbf{I} - \frac{\mathbf{H}^{\top} \mathbf{H} \mathbf{r}^{(0)} \mathbf{r}^{(0) \top} \mathbf{H}^{\top} \mathbf{H}}{\|\mathbf{H}^{\top} \mathbf{H} \mathbf{r}^{(0)}\|_{2}^{2}} \right) (4)$$
103
+
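Eq. (4) amounts to removing the rank-1 component of $\mathbf{H}$ along the normalized $\mathbf{r}^{(1)} = \mathbf{H}^{\top}\mathbf{H}\mathbf{r}^{(0)}$; a small NumPy check of this equivalence (shapes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((64, 8))
r0 = rng.standard_normal(8)

# Eq. (4): H_tilde = H (I - H^T H r r^T H^T H / ||H^T H r||^2)
r1 = H.T @ (H @ r0)
P = np.outer(r1, r1) / (r1 @ r1)       # rank-1 projector r_hat r_hat^T
H_tilde = H @ (np.eye(8) - P)

# Equivalent view: subtract the rank-1 part H r_hat r_hat^T of H
r_hat = r1 / np.linalg.norm(r1)
H_lowrank = np.outer(H @ r_hat, r_hat)
```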
104
+ Having discussed SFA, below we show how SFA improves the alignment/generalization by flattening large and boosting small singular values due to the rebalanced spectrum.
105
+
106
+ **Improved Alignment.** SFA rebalances the weight penalty (by rebalancing singular values) on orthonormal bases, thus improving the alignment of two correlated views. Consider the alignment part of Eq. (2) and ignore the projection head $\theta$ for brevity. The contrastive loss (temperature $\tau > 0$ , n nodes) on $\mathbf{H}^{\alpha}$ and $\mathbf{H}^{\beta}$ maximizes the alignment of two views:
107
+
108
+ <span id="page-3-3"></span>
109
+ $$\mathcal{L}_{a} = \underset{\mathbf{h}^{\alpha}, \mathbf{h}^{\beta}}{\mathbb{E}} (\mathbf{h}^{\alpha \top} \mathbf{h}^{\beta} / \tau) = \frac{1}{n\tau} \text{Tr}(\mathbf{H}^{\alpha \top} \mathbf{H}^{\beta})$$
110
+ $$= \frac{1}{n\tau} \sum_{i=1}^{d_{h}} (\sigma_{i}^{\alpha} \mathbf{v}_{i}^{\alpha \top} \mathbf{v}_{i}^{\beta}) (\sigma_{i}^{\beta} \mathbf{u}_{i}^{\alpha \top} \mathbf{u}_{i}^{\beta}).$$
111
+ (5)
112
+
113
+ The above equation indicates that for $\sigma_i^{\alpha} \geq 0$ and $\sigma_i^{\beta} \geq 0$, the maximum is reached if the left and right singular vector matrices are perfectly aligned, i.e., $\mathbf{U}^{\alpha} = \mathbf{U}^{\beta}$ and $\mathbf{V}^{\alpha} = \mathbf{V}^{\beta}$. Notice that the singular values $\sigma_i^{\alpha}$ and $\sigma_i^{\beta}$ serve as weights for the alignment of the singular vectors. As the singular value gap $\Delta\sigma_{12} = \sigma_1 - \sigma_2$ is usually significant (the spectrum of feature maps usually adheres to the power law $ai^{-\kappa}$, where $i$ is the index of the sorted singular values and $a$ and $\kappa$ control the magnitude/shape), the large singular values tend to dominate the optimization. Such an issue makes Eq. (5) focus only on aligning the direction of the dominant singular vectors while neglecting the remaining singular vectors, leading to a poor alignment of the orthonormal bases. In contrast, SFA alleviates this issue. According to Prop. 1, Eq. (5) with SFA becomes:
116
+
117
+ <span id="page-4-0"></span>
118
+ $$\mathcal{L}_{a}^{*} = \underset{\substack{\mathbf{h}^{\alpha}, \mathbf{h}^{\beta};\; \mathbf{r}^{\alpha}, \mathbf{r}^{\beta} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})}}{\mathbb{E}} (\tilde{\mathbf{h}}^{\alpha \top} \tilde{\mathbf{h}}^{\beta} / \tau)$$
119
+
120
+ $$= \frac{1}{n\tau} \sum_{i=1}^{d_{h}} (1 - \lambda_{i}^{\alpha}) \sigma_{i}^{\alpha} (\mathbf{v}_{i}^{\alpha \top} \mathbf{v}_{i}^{\beta}) (1 - \lambda_{i}^{\beta}) \sigma_{i}^{\beta} (\mathbf{u}_{i}^{\alpha \top} \mathbf{u}_{i}^{\beta}).$$
121
+ (6)
122
+
123
+ Eq. (6) shows that SFA limits the impact of leading singular vectors on the alignment step if SFA can rebalance the spectra. Indeed, Figure 3b shows the spectrum balancing effect and one can see that $\phi(\sigma_i;k) = \sigma_i(1-\lambda_i(k)) \leq \sigma_i$ . The same may be concluded from $0 \leq \lambda_i \leq 1$ in Prop. 2. See the Upper bound of $\phi$ section (supplementary material) for the estimated upper bound of $\phi$ . Finally, the Analysis on Spectral Feature Augmentation section also shows empirically that SFA leads to a superior alignment.
124
+
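The expansion behind Eq. (5) can be checked numerically: the trace decomposes exactly over all singular-triplet pairs, and the diagonal $i = j$ terms kept in Eq. (5) recover the full trace once the bases of the two views coincide (a sketch with random views; sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
Ha = rng.standard_normal((30, 4))
Hb = rng.standard_normal((30, 4))
Ua, sa, Vat = np.linalg.svd(Ha, full_matrices=False)
Ub, sb, Vbt = np.linalg.svd(Hb, full_matrices=False)

# Exact identity: Tr(Ha^T Hb) = sum_{i,j} sa_i sb_j (va_i.vb_j)(ua_i.ub_j)
full = sum(sa[i] * sb[j] * (Vat[i] @ Vbt[j]) * (Ua[:, i] @ Ub[:, j])
           for i in range(4) for j in range(4))
# Eq. (5) keeps the diagonal i == j terms of this double sum
diag = sum(sa[i] * sb[i] * (Vat[i] @ Vbt[i]) * (Ua[:, i] @ Ub[:, i])
           for i in range(4))
```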
125
+ **Improved Alignment Yields Better Generalization Bound.** To show that SFA achieves an improved generalization bound, we quote the following theorem [Huang, Yi, and Zhao 2021].
126
+
127
+ <span id="page-4-1"></span>**Theorem 1.** Given a Nearest Neighbour classifier $G_f$, the downstream error rate of $G_f$ is $\operatorname{Err}(G_f) \leq (1-\sigma) + R_{\varepsilon}$, where $R_{\varepsilon} = P_{\mathbf{x}_1, \mathbf{x}_2 \in \mathcal{A}(\mathbf{x})} \{ \| f(\mathbf{x}_1) - f(\mathbf{x}_2) \| \geq \varepsilon \} \leq \frac{\sqrt{2-2\mathcal{L}_a}}{\varepsilon}$, $\sigma$ is the parameter of the so-called $(\sigma, \delta)$-augmentation (for each latent class, the proportion of samples located in a ball with diameter $\delta$ is larger than $\sigma$), $\mathcal{A}(\cdot)$ is the set of augmented samples, $f(\cdot)$ is the encoder, and $\{ \| \cdot \| \geq \varepsilon \}$ is the set of samples with $\varepsilon$-distant representations among augmented data.
128
+
129
+ Theorem 1 says the key to better generalization of contrastive learning is a better alignment $\|f(\mathbf{x}_1) - f(\mathbf{x}_2)\|$ of positive samples. SFA improves alignment by design (see the *Why does the Incomplete Power Iteration work?* section) and empirically (see the *Analysis on Spectral Feature Augmentation* section). Good alignment (e.g., Fig. 6) due to spectrum rebalancing (e.g., Fig. 5) enjoys $\mathcal{L}_a \leq \mathcal{L}_a^*$ (Eq. (5) and (6)), so one gets $R_\varepsilon^* \leq R_\varepsilon$ and the lower generalization bound $\mathrm{Err}\left(G_f^*\right) \leq \mathrm{Err}\left(G_f\right)$. The asterisk $*$ means SFA is used ($\mathcal{L}_a^*$ replaces $\mathcal{L}_a$).
2212.13545/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2212.13545/paper_text/intro_method.md ADDED
@@ -0,0 +1,66 @@
1
+ # Introduction
2
+
3
+ Scene representation is a crucial step for any scene understanding or manipulation task. Relevant scene parameters, be it shape, appearance, or illumination, can be represented using various modalities like 2D (depth/texture) maps, point clouds, surface meshes, voxels, parametric functions, *etc*. Each modality has its strengths and weaknesses.
4
+
5
+ Project Page: <https://rahul-goel.github.io/isrf/>
6
+
7
+ For example, shape correspondence is more straightforward between point clouds than between surface meshes, but point clouds compromise rendering fidelity. Thus, choosing an appropriate representation has a major impact on downstream analyses and applications.
8
+
9
+ Neural implicit representations have emerged as a promising modality for 3D analysis recently. Although initially proposed only for shapes [\[28,](#page-9-0) [34\]](#page-9-1), they have been extended to encode complete directional radiance at a point [\[30\]](#page-9-2), and other rendering parameters like lightfields, specularity, textual context, object semantics, *etc*. [\[1,](#page-8-0) [9,](#page-8-1) [11,](#page-8-2) [12,](#page-8-3) [16,](#page-8-4) [19,](#page-8-5) [51\]](#page-10-0). The representation was extended beyond static inward-looking and front-facing scenes to complex outward-looking unbounded 360° views, dynamic clips, occluded egocentric videos, and unconstrained images.
10
+
11
+ Radiance fields have also been used beyond Novel View Synthesis (NVS) for other applications [\[5,](#page-8-6)[26,](#page-8-7)[35,](#page-9-3)[44,](#page-9-4)[47,](#page-9-5)[49,](#page-10-1) [53,](#page-10-2) [56,](#page-10-3) [59\]](#page-10-4). Segmenting objects of the scene representation is a first step towards its understanding and manipulation for different downstream tasks. There have been a few efforts at segmenting and editing radiance fields. Recently, N3F [\[48\]](#page-9-6) and DFF [\[21\]](#page-8-8) presented preliminary solutions to this in the neural space of radiance fields. Both use distillation for feature matching between user-provided cues and the learned 3D feature volume, with N3F using user-provided patches and DFF using textual prompts or patches as the segmentation cues. These methods struggle to segment objects with a wide appearance variation. The NVOS system provides segmentation with strokes but has poor quality and non-interactive computation [\[38\]](#page-9-7).
12
+
13
+ <sup>∗</sup> Equal Contribution
14
+
15
+ <span id="page-1-1"></span><span id="page-1-0"></span>![](_page_1_Figure_0.jpeg)
16
+
17
+ Figure 2. *ISRF System overview*: We capture a 3D scene as a voxelized radiance field and distill semantic features into it. Once captured, the user can easily mark regions using a brush tool on a reference view (green stroke). The features are collected corresponding to the marked pixels and clustered using K-Means. The voxel grid is then matched using NNFM (nearest neighbor feature matching) to obtain a high-confidence seed using a tight threshold. The seed is then grown using bilateral search to smoothly cover the boundaries of the object, conditioning the growth in the spatio-semantic domain.
18
+
19
+ In this paper, we present a simple and efficient method to interactively segment objects in a radiance field representation. Our ISRF method uses an intuitive process with the user providing easy strokes to guide it interactively. We use the fast and memory-efficient TensoRF representation [\[7\]](#page-8-9) to train and render. TensoRF uses an explicit voxel representation that is more amenable to manipulation. We include a DINO feature [\[6\]](#page-8-10) at every voxel to facilitate semantic matching from 2D to 3D. DINO features are trained on a large collection of images and are known to capture semantics effectively. We condense the DINO features from the user-specified regions to create a fixed-length set using K-Means. A nearest neighbor feature matching (NNFM) on this set in the 3D voxels identifies a high-confidence *seed region* of the object to be segmented. The seed region is grown using a bilateral filtering-inspired search to include neighboring proximate voxels in a joint feature-geometric space. We show results of segmenting several challenging objects in forward facing [\[29\]](#page-9-8) and 360 degrees [\[2\]](#page-8-11) scenes. The explicit voxel space we use facilitates simple modification for segmenting objects. We also show examples of compositing objects from one RF into another. In summary, the following are the core contributions of ISRF:
20
+
21
+ - An easily interpretable and qualitatively improved 3D object segmentation framework for radiance fields.
22
+ - Interactive modification of segmentation to capture fine structure, starting with high-confidence matching. Our representation allows a spatio-semantic bilateral search to make this possible. The framework can also use other generalized distances to grow the region for specific applications.
23
+ - A hybrid implicit-explicit representation that is memory-efficient and fast to render also facilitates the distillation of semantic information for improved segmentation. Our results show improved accuracy and fine-grain object details in very challenging situations over contemporary efforts.
24
+ - An easy-to-use, GUI based tool to interactively segment objects from an RF representation to facilitate object replacement, alteration, *etc*.
25
+ - Consistent 2D/3D segmentation masks for a few scenes and objects created manually using our method to facilitate future work in segmentation, manipulation, and understanding of RFs.
26
+
27
+ # Method
28
+
29
+ We first provide the basics on radiance fields and the feature distillation strategy related to our scene representation. We then detail our proposed interactive segmentation workflow comprising 2D-3D feature matching, region growing, and manipulation techniques on this learned representation.
30
+
31
+ A radiance field (RF) [63] $\mathcal{F}$ maps the scene radiance values as view-dependent RGB color $c \in \mathbb{R}^3$ , given a continuous point $x \in \mathbb{R}^3$ and viewing direction $d \in \mathbb{S}^2$ in space as inputs: $\mathcal{F}(x,d): \mathbb{R}^3 \times \mathbb{S}^2 \to \mathbb{R}^3$ .
32
+
33
+ NeRF [30] and its variants [2, 27, 60] encode this mapping as a neural function using an MLP, with a low memory footprint but high training and rendering overhead. They also store a scalar point density $\sigma \in \mathbb{R}$ which is used for differentiable volumetric rendering to train the network:
34
+
35
+ $$\hat{C}(r) = \sum_{i=1}^{K} T_i \alpha_i c_i, \quad \text{where} \tag{1}$$
36
+
37
+ <span id="page-2-0"></span>
38
+ $$\alpha_i = 1 - e^{-\sigma_i \delta_i} \quad \text{and} \quad T_i = \prod_{j=1}^{i-1} (1 - \alpha_j). \tag{2}$$
40
+
41
+ Here for a given point i along a ray, $\delta_i$ is the distance to the sampled point, $T_i$ is the accumulated transmittance, and $c_i$ is the view-dependent color for the point. Later efforts like Plenoxels [10], and DVGO [42] stored the field variables in <span id="page-3-2"></span>a lattice structure akin to a 3D voxel grid, significantly improving the training and rendering times at the cost of high storage requirements. These quantized values are trilinearly interpolated and decoded to render color value at any point. The grid structure provides easy spatial context and explicit representation leading to higher efficiency. Recently, TensoRF [7] proposed a matrix-vector decomposition representation of this lattice, reducing storage requirements while facilitating efficient training and view generation. We use TensoRF as the basis of our work. The top part of Fig. 2 shows our radiance field capture step, with the volume represented using TensoRF. In the case of the quantized representation of radiance fields, the radiance is obtained as follows:
42
+
43
+ $$\sigma_i = \psi(V^{\sigma}, x_i) \quad \text{and} \quad c_i = \mu_{\theta}^{rgb}(\psi(V^f, x_i), d). \tag{3}$$
45
+
46
+ Here $\sigma$ is the density of the volumetric space, $V^f$ the radiance feature grid of appearance features $f$, and $\psi$ indicates trilinear interpolation. While rendering a given sample point $x_i \in \mathbb{R}^3$ along the ray direction $d$, a small decoding MLP $\mu_{\theta}^{rgb}(f_i, d) \to c_i$ is evaluated. The final color of a ray is calculated by combining all sample colors $c_i$ at every point $x_i$ along it using Eq. (1). This is used to minimize the photometric loss $\mathcal{L}_{rgb}$, optimizing both the radiance feature lattice $V^f$ and the parameters $\theta$ of the MLP $\mu$.
47
+
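The compositing in Eqs. (1) and (2) can be sketched as follows (a minimal NumPy illustration; densities, step sizes, and colors are made-up values):

```python
import numpy as np

def composite(sigmas, deltas, colors):
    """Alpha-composite samples along a ray per Eqs. (1)-(2)."""
    alphas = 1.0 - np.exp(-sigmas * deltas)                     # Eq. (2)
    # Accumulated transmittance T_i = prod_{j<i} (1 - alpha_j)
    T = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = T * alphas
    return weights @ colors, weights                            # Eq. (1)

sigmas = np.array([0.5, 2.0, 4.0])     # per-sample densities
deltas = np.full(3, 0.1)               # per-sample step sizes
colors = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])   # per-sample view-dependent colors
C, w = composite(sigmas, deltas, colors)
```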
48
+ Object segmentation requires knowledge of scene semantics. We include an additional feature into the radiance field for this. In order to attribute semantics to the radiance field, we distill contextual knowledge from a large pre-trained teacher model similar to the prior art [21, 48]. Specifically, our teacher is a vision transformer model trained using self-supervision and is shown to pay attention to semantically meaningful objects in the scene in a class-agnostic manner. This knowledge from the teacher is distilled into the student radiance field in addition to the color and density values as point semantic features $\phi \in \mathbb{R}^m$ . Thus the mapping now becomes: $\mathcal{F}(x,d): \mathbb{R}^3 \times \mathbb{S}^2 \to \mathbb{R}^3 \times \mathbb{R} \times \mathbb{R}^m$ . More concretely, we use 2D semantic features from the DINO ViT-b8 model [6] for each input posed image. Recent efforts [21, 48] also use DINO; unlike them, we directly optimize for the features on the voxel grid in the TensoRF representation without a neural network. We also do not encode direction dependence in these semantic features since the object semantics are direction agnostic. We trilinearly interpolate the distilled semantic feature $\phi_i = \psi(V^{\phi}, x_i)$ for a point $x_i$ from the learned feature lattice $V^{\phi}$ . We combine the $\phi_i$ along the ray using Eq. (1), like the color $c_i$ . The TensoRF representation is optimized to minimize the total loss
49
+
50
+ $$\mathcal{L} = \mathcal{L}_{rgb} + \lambda \mathcal{L}_{feature} \tag{4}$$
51
+
52
+ to obtain the final radiance field with $\phi$ , $V^f$ , and $V^{\phi}$ . Both losses $\mathcal{L}_{rgb}$ and $\mathcal{L}_{feature}$ are calculated using $L^2$ norm.
53
+
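Eq. (4) in code form; a minimal sketch (the weight $\lambda$, array shapes, and values below are assumptions for illustration):

```python
import numpy as np

def total_loss(c_pred, c_gt, f_pred, f_teacher, lam=0.1):
    """L = L_rgb + lambda * L_feature, both squared-L2 (Eq. (4))."""
    l_rgb = np.sum((c_pred - c_gt) ** 2)          # photometric term
    l_feature = np.sum((f_pred - f_teacher) ** 2)  # distillation term
    return l_rgb + lam * l_feature

# Toy call: 3-channel color, 4-dim semantic feature
loss = total_loss(np.zeros(3), np.ones(3), np.zeros(4), np.ones(4))
```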
54
+ High-resolution feature rendering results in high-frequency feature fields similar to N3F [48]. (See the supplementary document for distilled feature field visualizations.) Explicit semantic features at every point open the way to adapt traditional 3D analysis techniques to radiance fields in a semantically meaningful fashion. Segmenting objects in 3D voxel space and using bilateral filtering inspired search are examples that go beyond what prior neural representations have shown.
55
+
56
+ For object segmentation, the user picks one (or a few) reference views and annotates the regions of interest using brush strokes. Semantic DINO features associated with the marked pixels are collected. DINO features were shown to fare well with 1-NN feature matching for 2D semantic segmentation [6]. However, a single DINO feature will not suffice to segment complex objects with diversity. We cluster the input features using K-Means to obtain a fixed-size exemplar set of features for matching in 3D space. We use nearest neighbor feature matching (NNFM) on the exemplar set to label each voxel as foreground or background. The result is stored in a 3D bitmap. In this step, we use a tight threshold to identify a high-confidence seed region, which is processed further. Prior methods [21, 48] used a single averaged semantic feature from the user-specified patch to match 2D to 3D. Their implicit neural representation can only be segmented after $\phi$ values are rendered. Feature matching methods like NNFM are too costly to evaluate at every point on the ray using a neural representation.
57
+
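The seeding step above can be sketched in plain NumPy (a toy K-Means plus NNFM; feature dimensionality, cluster count, threshold, and data are illustrative, not the paper's settings):

```python
import numpy as np

def nnfm_seed(voxel_feats, stroke_feats, n_clusters=4, thresh=0.5, iters=10):
    """Cluster user-stroke features with K-Means, then mark voxels whose
    nearest exemplar lies within a tight distance threshold."""
    rng = np.random.default_rng(0)
    centers = stroke_feats[rng.choice(len(stroke_feats), n_clusters,
                                      replace=False)]
    for _ in range(iters):                     # plain K-Means on strokes
        d = np.linalg.norm(stroke_feats[:, None] - centers[None], axis=-1)
        assign = d.argmin(axis=1)
        for c in range(n_clusters):
            members = stroke_feats[assign == c]
            if len(members):
                centers[c] = members.mean(axis=0)
    # NNFM: distance of each voxel feature to its nearest exemplar
    dv = np.linalg.norm(voxel_feats[:, None] - centers[None],
                        axis=-1).min(axis=1)
    return dv < thresh                         # high-confidence seed bitmap

# Toy data: strokes sample the object's feature cluster near (1, 1);
# the first 10 voxels belong to the object, the last 10 do not.
rng = np.random.default_rng(1)
strokes = 1.0 + 0.05 * rng.standard_normal((20, 2))
voxels = np.concatenate([1.0 + 0.05 * rng.standard_normal((10, 2)),
                         5.0 + 0.05 * rng.standard_normal((10, 2))])
seed = nnfm_seed(voxels, strokes)
```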
58
+ The segmentation results can also be precomputed and stored, facilitating downstream tasks like view generation and editing on the fly without repeated processing.
59
+
60
+ The high-confidence seed region $(M^0)$ from the previous step is grown in the volume space to delineate the complete object volume. We do this in the joint spatio-semantic space to include proximate voxels that are also semantically close. We adopt a *Bilateral Filtering* [46]-inspired search, dubbed *Bilateral Search*, on the voxel grid, using the spatial position $x$ and semantic feature $\phi$ values as the filter's domain and range kernels, respectively. We iteratively grow the current bitmap region $M^r$ till convergence, as given below.
61
+
62
+ $$\begin{split} M^{r+1}(x) &= \mathcal{T}_{\tau}\Big(\frac{1}{W}\sum_{x_i \in \Omega_x} M^r(x_i)\,g_{\sigma_{\phi}}(\phi_i^2)\,g_{\sigma_s}(s_i^2)\Big) \\ \text{where} \quad \phi_i &= \|\phi_{x_i} - \phi_x\|\,, \quad s_i = \|x_i - x\| \\ \text{and} \quad W &= \sum_{x_i \in \Omega_x} g_{\sigma_{\phi}}(\phi_i^2)\,g_{\sigma_s}(s_i^2). \end{split}$$
63
+
64
+ <span id="page-4-1"></span>Here $M^r$ is the $r$-th iteration of filtering; $\phi_x$ is the distilled semantic feature at point $x$ in the volumetric space; $g_{\sigma}$ are Gaussian smoothing functions with variance $\sigma$; $\mathcal{T}_{\tau}$ is binary thresholding against the value $\tau$; and $\Omega_x$ is the set of immediate voxel neighbors of $x$. We find that $\tau = 0.2$ works well for our scenes. The seed region expands to the boundaries of the desired object in a few iterations of bilateral filtering.
65
+
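One bilateral-search iteration can be sketched on a 1D row of voxels (the paper operates on the 3D grid with full voxel neighborhoods; kernel widths, $\tau$, and the toy features are illustrative):

```python
import numpy as np

def bilateral_grow(M, phi, sigma_phi=1.0, sigma_s=1.0, tau=0.2):
    """One iteration of the bilateral search on a 1D row of voxels."""
    g = lambda d2, s: np.exp(-d2 / (2.0 * s ** 2))  # Gaussian kernel
    out = np.zeros(len(M))
    for x in range(len(M)):
        nbrs = [i for i in (x - 1, x, x + 1) if 0 <= i < len(M)]
        w = np.array([g((phi[i] - phi[x]) ** 2, sigma_phi) *
                      g(float((i - x) ** 2), sigma_s) for i in nbrs])
        out[x] = (w * M[nbrs]).sum() / w.sum()      # normalized by W
    return out > tau                                # thresholding T_tau

M = np.array([1.0, 1.0, 0.0, 0.0, 0.0])     # current seed bitmap M^r
phi = np.array([0.0, 0.0, 0.0, 5.0, 5.0])   # distilled semantic features
grown = bilateral_grow(M, phi)
# Growth reaches the semantically close voxel at index 2 but is blocked
# by the large feature distance at indices 3-4.
```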
66
+ Region growing results in a stable voxel content based on the input strokes. The user can add or remove parts interactively if the extracted content misses out on a few details or when some extraneous content floods into the segmented region. We use positive and negative strokes to add and remove the content in the image space, as followed by methods like GrabCut [\[39\]](#page-9-17). The mask of the negative segment is subtracted from the mask of the positive segment to get the final segmented objects. We find practically that even complex objects can be segmented well with a few positive and negative strokes, as shown in the results in the paper and in the supplementary material. Additionally, our method provides interactive feedback for every stroke (as can be seen in Tab. [1\)](#page-6-0) that allows users to segment interactively unlike methods like NVOS [\[38\]](#page-9-7). Implementation details have been reported in the supplementary document.
2302.03251/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2022-09-27T04:13:46.579Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36" etag="hNBol8Mo322lJ6Xdjg20" version="20.3.6" type="google"><diagram id="9q725NHlcDOGuBxYEodi" name="Page-1">7L3ZtqTItSX6NXqsM+ibR8AdcHqnd17uoHX6vv/6i/mOSGWmsqSMkHRqnCpFZuzAMRyMZauZa9oy239BuWYXxrDP1S5J678gULL/Bb39BUEQCIWvf8CZ4+sMDFH015n3WCTfzv31hFWc6fcLv51diiSdfnPh3HX1XPS/PRl3bZvG82/OhePYbb+9LOvq3z61D9/fngj99YQVh3X6N5d5RTLnX2cp/FdXi2nxzr8/GYa+tTTh94u/3WLKw6TbfvUs9P4XlBu7bv46anYurYH0vsvl60b8/6b1l46NaTv/mS/gUCVtd6TYyWjF11NFef//+1/o113WsF6+vfC3zs7Hdwlc/e7BYdF8RMWu6TgXl4CUMEpro5uKuejaqz3q5rlrrgtq0MCGcfUeu6VNuK7uxqs9SbNwqedf3YGpizf45tz119lw6r8GMCv29Ooz+3kg8/0s9P0MuFU4h39Bma+PCN+3778gXOGyurlBsvDumOuPZjn53XlfRy/wWZA55gXOW9n79QQHum/ynmjaERJACcIfwZPVXr65RQINRUJdPMSgjlutjxDsVEp10bjtHYomFIsqoRz0/OtrlYY+goN6uupBgHtzPvvwfPU6ovPrh/F6Cp1BJP11HA73+v50TazV0QTBH0mOtgmVeG0WjIGKqfbTkZiQn83dfw2uX00GJmQ7d71mwVM5r7RyULNahDuFJz0j581xzBacSBf/BWHxSRAzSbqOrv9zU1SGRQHyoRdDRGf0sjwWgxfYoj31PA1mYt7Gdcqg5uuiT78z+v3g3ixTyNeHx/PvHg8GGWcoF0o1nPrNQKJh1xXgTq36vDNZczsNfV3IClv9cX96cXg1CTZGyhy2gl75Axyu1/P5OGYZkWL8EfRvDOLVx15nu0POpYgs7FXr08+uy2rfK0j/8KkB3aNj2ZXlkRmLcrDt1dhlseEg9pxMc7DXdXYnS6/qLfoJXpCQzua6RmwD0rs+TmiQrFmDauCu1//XY3moNdb1MgZWJNCETDEgFn/BFlJCRjhqiaQVeZh+LWhKEtfl9kRP9RmiKey1rThN1+VZTY5INA/EWWKiFPgpeJusIcasR3yxc1rD1vVWO4+vh+Judakumx5v8CiL8Q7S9wYMOfzrs9yJ5cO4i/vnWvZ13+1bvVZWpj2WWnMp1PYHZOT6cT61NQPdHsD77W6KURpOlgnqeokB3mu5/k6dyi3r6wzZ7VzuVEoDVaG2s7jaqE34R+P862NC0bLFmKtmiKgC4XgqkIG2RcpL04NsKTqZwhyHhIopTNN6UZRD6TPSqU/zUgy2DV5mQaNp46uDHxg6GQCBz+1sH3vS06sIr+d8KRVEqn4A0+GCJkSN40mpkUqmrvYlZt6cavQ8aRjoGf25J3hFKALalRBHfp1CpZtfb32JrGt7tgat0zwQkeLPUgj04KQQ4+jSB6UCreKZJ/O4m0/mzbLsDx1r3E0+N6SARsUMsWWT60ZWT9CxkbXdk9dIoJkMYhfeNizIdOuolukcK/Jx0B9/JkFvrAnLEijpk/r+USzwNvulBHz1tsKs9QmzxWmZHEmcuk7Ocogi5+Fa4ApHXP2ABlKwVDrNDJ9DWghDZbclCWIdNWSMMiK2DfdNRgSpYXQdB0N6CThFZihJPtILM3Nw5N69bjdS2mWTZ3kdun2nPm6MwZLP+hr9++POXi9s8j92TOvccil/EAb+dOJNky
nJQ/V72ubdOTFyLLGpoViP/TRkol1eOLBVCmag2q6/XFhahZfCnHd6ISofXki+1a+zgjwhZFy3tY428qOlDGt+XBEGYenW1yN0pI6lh73zI8rUv1wNuydrAjzTOjoFeq4V94SBGW7+2Bv+OzGn1VOBLp6v6yzwOm4reRU46CcZR6t6cM96BbF1zlKcNDAYMoCPq6g/bTm/sSL17oqLVzhK/XDr8wrbrD3coaBwe43HEBKbVnMQcQt4yaWF8JR0VAi3VkrORxJVkmTOLj/Gy6LmoqSzurX7mvbUEOuBNAkxM9wr9oziGAtjzedAqOa6BwdwRx45oW6Ey3gO4ibVkeTQLn47oJoBBCLqGZ0oc5LOsbtRQodoum+8fKMRnfmcq9a4dTpQXTSHa5xI6LE9Sfc5UK+JY76/4uvxE8cskWqpLfUuP5Z6Nay6ZkBo4KmJWQN7Wmrhjnnj2sZxaTRzHblGW47NsIpXZ4JqIq/4A9RlpPVTW455vFw4aV1ujvdKCtFbfzDbHm6Ajiy9J46JH7o7kDrQEDxaYEm4vIqYhDcH+JGqsGZ4rccVxkmUQiI5o3CJVMcRCE0lkBoyHDSNwvyoMwp0oV/o2Z9Cqqcv82cvnIictQ4ta2y3uMkwLMcyzpfT+JFjw35zHS04vUYRJB4omkYuB+4DKCBJjU+dPuZRwBHiZAfUoVSLDr/exPx4CqlXyXBNesx9Ti1CGhSlm0kLEAAIOOhM464bjUqGI11qj36i51NDugS+UrTT+Onq6vIGoqW/jkRQ74rlwQd68Du+oDVNLnpNtVONAy9zpi9wyyjDU8FfV6RZ9exIm45MyDmGMxFxMtkBatjF1foz9vJb2+ETG9OaWzzC/W6W63upx3FbOq9V6YPTXsg6j11PGdRG3eZoWafVuOGGTydPx19oRwV6QubUuJzzcMWiJ5mp0bFrYyLgEDyjWaqnpyggCwjDmYGS4Wvnpq7yYxQ/h6fvo3BNZgS5foVxkhY2FKhFBsRagIBNxivwJfQML8aLqKCoFOynhy5pXGpoTYquQ78yajZpZHHbMdV0m3kwmfSTdhM/fNFgrueFFQAmUEJD++onQQo+GQreU6cTkCefuJb2gPBm4opZUdB5jbK7e5cIED1etT6PWcsbNdU7iZ2SzRCsJxk5X14YWY0Bs6gvLELlWeYpE5q7WU6l0tQE+CwtFBABocgovJIdnNgQ6uPA3/jCAOeBi60QqQNDeelQeqMxXHwcchu2KFEJhNZS+kCFIyVy+kL0J2yjtvlwREa/ofk/qyn0os13gPr4adeT1xhMp6SBnlU94qDwiKUWiTK4qBx6m11jymolTCqGlHiB4pQ1kZ2iSlsxhUhf7hMMO307aeOk8DmrQ4tbjLUNR3Rc0V4HIEd8QLU7XXBv7DP8VphS2pK7coMjbQDuN26RT5gH2PNCLCiyJChAiJ/T7Wv+4L1yVGm0vLzhuq5qG3zCojdYIOQjliameIDwuciAEz/vc7+Ou2eCGgE8hQ2ELLZLAw9BjbqfuzY9Jvi5AB8wXqB4zGblko0h9olNGCZA90Akamu9PEoZhOKcKR2IZyDBq5Eg3tYAglLnB+VQK0fAGXG8lhUatS/rMT4Dg4bZWH8imiDBVU6BuIuSchARRug5KY3SWGVTKxmcq4XL1ddXw3GFUAMF4O9ET1fyGoI6iRYME0MELWoMQGGzGZdRDxGjHrHEgE6fMTNSIB3M3+8Hc+ceP3Ys9lv+yoRnpm79PSTpPk8XMG4o3PCqQxLea78C5ILlX30cKUmmUvxWtmcPVXwa1V7fnhjOACUaFhSFC/KEI3ztxra8pMZuCQhjIANIEOUYJjE0AyVECfhUqQzgsnZ9H3Zy5Y1tcGUiYPi3qrVx/LJCOl1fNoheqbJ/7PZSMnDPYDWWIVSOpUXax4knQv+CKtGPcBwCoJzv0CuLb/Eg8Sp8CHZoQqOBfk7rcaXOnMAxb+b5KH7iWL6LyYkmYrHCZuCFhkkCPSD7XB
8F/QVfusuqSDXENQSC16lkoyafMdmeCZDe0dqnKCdUfVmTQn35IQZa3qlXpjh4S1LCVh5P1wAiM2ccgtLuEDTyhgkFoHpfLLt74eRtXUOipftyxYqxusAy3qbfHBkAB9FMRwtQoZXC6SOOQO6XCGQ7zXkVkPA47o5NReeYyQgMBTySgDySHXahB2kryLkISmSM7fXz+cOJZKHQK7V8OvjHJHx6WPksNgEKzXCXWijwnN2CJRsviUzikCvF9x4Sj1be3jplgi83F+dNMfuEfXsi4mFE0qLNZPceBvtqeV2IwXkTgQwpz5BMK1EQjSOCxumF3Mdp9pebpSUY0OR9RGHyctdAxlniZ1hstCu+ax3t0pdEHhdYf19YMy5FGUAzEdoz4Onfu+QVwXghpx7VXykBgdyK4LOgBYQgr2+3t4k1QEXuHNN9V5c/f6yXrO+AxCSVoQGxaucaJcflcf+8kPmCjA2Szlvz2uV2bOWElB7OKF6wjSD8uiFzUrhgC0h3lAsg7hkpXxAOX+22DHTEa5m1Im1yAtKD6Kp8vnyHrnH3Gpi4RJkDNuBLS+JWT9sanu0RTYD3qpbLZwH2gtRh4ALgKNnPE/d6eiY1xN/3S4aGTYoOrcVRayghDgACCWxWad/89hN48NvxjSA3mbd6S77iqKHCt0azPTXLL9U4aKGL9wLPMJJN8WWSQQJRvz4aZNokTKISdOXYZdILMYpsACeLZCOR822s5Qbkn4Lqghg2aReKSmCXdL1DkV4tkTQFZeB6g40UTuyGMpxJFghr1bY4wB3sjAL/na0VaozTl7le3itrJPrU9D5nFxAsquHZe55TWS/jqB39imRfKCFapNL3kxCpLqFpVPfPZxOxltlHUp1jkknlwLprudQieaS82biG7DYRmgbklxMIblm7SFeK11+ZqC1pqP0m5y3oM9ho0mH2gwcCiCF+wQjDnq8MOXOhrIiD1Kcj30LoG9l2E9DxPo24r1sOWvxxDYmsoeWeCbp/EoR2ApMKE+D7l4+hbmuddh2Abl50Luf5SW6fMjDD6W05np1BJFTRLTyrZzZjs0b2+FxSESqVkMQYuflTfvnbcdrefIhyB6gJE4Jg+6PdVov0N72NPBCAP5gEWjo7/dBidrugKuogybpZPPSc6jS7bGV+AjaMTagRg6rT+xp3K6a9hgJYSdx8f0RwwcxJG8MvxMM+Egv/wKy8E8avy9twVcQs34gQTc8DjSf35X+YStfwSfzDp4Qu8E2sfwWv0AWQogmObI33kI+B1sF+Q0fhfiqqnqayd6WsaEZ116vmzFv+JYD/6eNbt5lO6rrNILcaPsGVQ0QoyOvmJ1EOjx2vlhVZFEgz6CErYXzBU5CoyyvVJt7+SKHxY+RpOQEXIHauFWPz8+N9ULohPcJdTyLCjw8TmSwfRGmF4J8VOBboNJKcWsYEHUm/Jua2PebIJfUl+hAQUbNGHikYip/VFAwvZBpSp2mwqytVADiklitRuLUGUqvqPbEfAHb7y8mjKI/442nk/4zv+Z6XjkNaRLXtLD7hub6kV20gZBhV2Zk6GNEc401ch+m4QKNxQ0MEBunB+5NL3L0rJWFLaqUm99DQ6wWIE0RTw4wjILchNgzoMKKFOlRahsZSpo9ZglalBwTFuaDPZJxRAwdQO2oVYShxAijUfvmrNV6N9sTJYIEXfv8gypvdHoQgbOl0ClQEXLxmN9fwWTBbGmftXXnSgV6ZK9Avx2GZD4HFMff7/YeOoYc6GhyBeDgSt7LZAMhuXKk5GxvaWQ76ay7agF72GQ3GA7gIBbGDLLdYCXig0iBGcb4yadt5UoUVpi2QSJuTyRhmgOjb5RtCem2D4mkrSRcKzxaFjhJ7Biz2DfKitkYSgwwQ9MLKLdBVfYHmkSZ3uu3x5Apda5+abm1bNk1CACNmwQcr4i9cp3dgUrMtKNoNRxGgRMfllnn/qHFUd/YPhK/+NVqTFGTtq0tCWFCR35boC4qNvhODfAhwS/
1yhSKfuw/qg8cYYzEPHUXtIsiZBPHPD5y+c9KTGMhT9OUyFvl9EAuRbX6MTL0bfWOp/pUEt/Qb+gVYzywMHNdlg0N+wvlswDN/4ifmvsyTeWSRI3j49lNWxASu+d32zB8SmxG50x5XJ4ckPhzZTQS1qeYdk84aqhvZGhpri6pVBSyaGLsZizEvDumv2iweDMHRoeFbWUdkn3V+wb2leQCHAZIcVrTHk8BbLCbRZVVrYWiTEbsho1SjWCadX4pwBF9XUd+XpG2oOiBIoD8weQ1zZrVG1vn0eba0pc6cDBOPhFkVevuRBbbuk/xG3PkWw20Fcq9Tb9QVs8q5maLHr3LY/DxG1zQuIj6QPcfRT/uGdryPuPPUijqVM7aq9Dd90vgMjyG9HPrrj0j58t3CkvfUOerPkJcEh74mmIRjZKA9HRooo4cC/lK8Tv2F/nQQpEBLQxxp2liyJY6sCmrt2uK+/qv1NP2YFA6ljKw9/wawbL/qbFB8hf1lQeJyGrWLOrDeSm5eRjZF9hGuAV2in+pG0zXuq+d3CDEXRPHXa/VfYbp0NIP9OJpnDWFbLU/VJavn+WhWiJj9JT0/Zg8j8PgrgxhdAMDopz8joEJeNoD00L+uBy44vV+AOsq7eYNc0o7S7TvKQ99ECBaIb/1xvcR15ZkUxXaRWggVCpIeoD+kdbo7iGYT6LqI5ndK7HzDWG5dm+EkO20GPsX64pThJS15LtrRu6SiURT7corDtgqSYEBeXWhDx2ZD5Hzw26r4Jicv4PyzVnqDsOUHw5r9rh4JXLHt0sShIj6M5y1bx18L2X38eN3gdZ15LbWZw8Hju7qdJQqkOfZhQ2ttJVK84tNaSaPHLHzFB3BeNS3rjI1j+Ja1Mxcrvu4LnHqMdyVAYo+/iT3uHobLklMq1u3hyoub+FO95IR1yaJ3OLezjh+SOfoRea9DIYIs7Ln0BofUxk7WhcH4mNibQmHSVY4lWLwW92LeA6v+Xo24eJOLdSq5jV85qeZ9ieWvoLv8hqvu/LGiV3DMshdHlFepTIfvu/y1mZGyDQpjx/U8RXULjEJYGB7tN8iGy1BVm8LlK/jYnYPc6q8Voa4iCWkDLQtp2Mm/MbdoFUw9+F278FUhaqwLWGbmWRsPf+OcvnqjlkXqqBDjx7UCc7NXO2Fcr6dD4rTzfkjIvoNgz7HJiTuRzpe4qczoPNeNmNkKmPM6WQWg1fTUsrdECtqKqXGmSNBv4zzMROa3s+mU6qeRP0mk6x68HtHFEec1eSZOFps7M/LAfhjNfGDp4ASvVtBsUw5Hj4xnZPfr732yyZLPaBVsEhRqqOBIw0Qv/02vJBQMfCQIqa3Tl3ydq4Ibveysjwl4pdK5uaFeSLGTZ+aWo/WbIP06/fnXlWgLysUWH7H6HcS/EcPXOkQtie3E9LDJkPxg+r4uzOTOiS6aBma+J6z+WJ4CDURBkL80ag5KDkLP/h0fDzvSuq4b39RSBX0OHi2T4MCQ5XRH7u+moK9DrPrqH3FuAC7zEUXCCg8yxUnFebneAAxLGPq5FsWdZaOUEvjKrR+2YXDLsKdWYazd2sSnC1d+cA7VNRFSoPxAGzAs0NDH3cbatsQyfdVD+gnJxL8+kznpT9BZgJsC0ocHrQmhOfPVyuThuaXp0+VNB047IuVlDXQ+GcQhULa35znfAEwuqv6wNe8t6/HH2NdC+wUyruaxkJHTVzy7XRrlMboZwKcwsDX71mKS+MUNSRLkE5KmGRJGr4e61j6kwT7+9UkCNn+UyGDjWqW9mt1+9UfloKKokuGls3sM9GqnB7evcPLf/BPhj70jXCo9zpqiNhd1NTlR5UxN4Wnqi0ak3NFucex9dXgk1fxZUPqNUX9ihUDmWwJ1VG1SijxQnt9R/2oTp1hLP31qgADNcAV1NjJdidUU+BB7DC8RJvtWii7AurFC6esD2/hmoDOruLijNhxmgJo7I0HMnOWRZtyIFdKL4lCVtL6dzqkjGqrNU+FgBl
zvluNlerc6nvaJHnptbwOsfGDrWLpZ8NQ9PheRl0H6nGVbqlkv+tu/8AtVQFf5127Ee+tGRrNntvak6bywUB+IeLt1sYh9R/bk2/vrXl0v2pDOMUSb+vzAw1y9Wi36dLwbqWiaqrxsY78IQrm1iA04MEUW6Of2IEbluOljn/R9Xnfc9GrPcfXr9IPz8zDjGPKEz+mDoxqUGMrWLVo3nUDsTGl4yww1lvjbiycESRO3YRptTjreb65yMOGHYQZosUhgZqmpQUYJdiY1tYbdnrjUr3ODzNQ8jR/Gk0ELNurwTqNL0PtRDkUSEDnWyVED5Q0px1M8wxZMarQmO6bKxFoChYSzhqGwjfSFn96XDoM22IpzaB02r3Tdi1EYzURoNdYGQbCoLFt1f70KNtPk6oDE+AmyXLWyxVFB+xrggqGUangJGCC4JK9qmHNf7EfjA4kfVW1pcngoq96UTQCACccH8DagWp/Qud0wHR2X4YcLXQyd5siKK1d4ScrH9b/N0HmmUgZ07ztB44QF5OV15KEuAmXgMpxfGLLG2brrKiOSst4jUGekzK5nDSG1A55Y1rT+cI4efb260SOlwMysIDcdXB3gMLwuURMfgN70i0+jWFM9NvwBfOMgQ+PSJKnd2iYerH+GeOjFnsVm3qySZmSajzJGx/iobf3DzcIRlDqj1MP2Jyc7a7eG2i5RfVaR/CLRaRo10Xow5wyLlMo2aM0oZyGxNRz/pdqIV+gKVEicZjNzTljjecrixj0+gipNTWVpR3FYo95yMxzCKyReZHscMK3LZqg/55fTK4UfEtUtBAjycWYStH+Za88osJtGVHU8KlRteSY/1fdsXEFTLZgNBwdnVF9Bg88+Bu2621qissizWuw3rWDY/g/JU+V5RXX7V0GqKIEZO3oH9JnCpM7DsqwwpgLhecRwA7vQT46yuzPbbWZj3m3l1bZtbnV2O6u44imPB6ih6rEnr0dennFCvp4IYUgHn86Ion2BJD7EfzVFo4Z1cMMeMoz3z8oJsR/P5jKtCcAOXwnpNEZWeTO1mBxJctVfVfH1D7ARXU+Fe1FewYa1xGMbyF7brucSzMw0oMVuMfEKi/VqO3IsS14Sy/ZKjjMX5+eA1G5eNOiG956xGgFdDgBu7ZSk49aJvLy+kNXJYAYUx4v1JFvWeG4L75nRF/tWJYh0poIrLyTIzGSdWa+C9kSKuxKkSWj8MkQGvaxp8uTJ7e11lkAqKVnTdDZCEXh7Gz8OumgitGlVUBwUojnBp6xPKrbQ1/qqZgxn21cc1srPIUw74SX9v5d9DQrLyYs6ob0Z/SSCsIYb5FwW7W5U9SGhT57pF0oXbwI7AyBsNcWvJkJFL3S/KzFeFy9+0ZI2bFLjMdIjUJc5OXgSVESKJdt4qkNpq6sZpgnQ8QavlxVKsp41IS8V2eTK0Wo7W3WY7M+Sguq6Gxoh5pd+JQQlpzRZsJ1J7wuOknY7D8xwa9bLpqu5BOO5cvukjE1S5Go9Wq2jSu7doVDTUEMRID7oBhuduXve5TZS9K65r549BOdF1lGJqS1cCx4Ob5aj9uFYxVAyewSpYQtBUDauteB4KD6peepigelumTzARSmUiJl7SDQCXfWNAvv1E+Eelf6NFp5fLe+8RDYwPEZUyfQwCFI8j5c2SSPYTZVci27g+DQ9OVKCXSDNzC0bGB0bcgSFtGA8xsysE/zM8xeA37qbBy++FqNwaEf+N3iBWKwp8MqpQLoT8dKfR9ZaoSRh+CgA5s+TQ8NeLaGQGnxtXQKugGTJjiUgXDw1f+2AynDcF49bqR2Kdyvv6/nKZNHnEuTjp9R0YXg4x1RheoyAW9tSfWXLwa0ZHu4yfYGGMGK1c/jUkijfn39SYx8WhqoK9MO6cU/Hpld7JgenCMMqvfA3GWPu634xqup4yUsIZYn92qNIwiRZ6+tAaXGCWtwpIhb+o1Ffsiu7xatLeGls42ZDHGov5DY9PXKHeOky7EEo2yIpMZxzBU
/WDcgzrodhj6q439L6qFM5tIOvKmvmlHaX12XXLFEdqF+GCOE5p6sepsn3fIbEr+AnRGAmoJB+X6LToY991dArJ0fsheVS6w82M4Kuvo4vFjsZxMXI8KtbwznpRhkn8W8CMDclAsstCfXKAGcRzaZ5pbDRxSaV9yO/zhdbqq8p9yjQ9a9nRBnK7r6gg9rI7sbZPTfTHMX3AuCRjojyaK9we85jR1vh/rZp2w/RL+hiDQaKfHktGB1ns6yaiH289MzEs6RHhb3/jL0bJRHn0NemFkeva00zq2cRJZcte3JveypZ4rQwXeRrPxKBOml78W/kL8JFcHN5Dhyf9eDMJTt0Yn++PrNjqiyd/LP4zd+EhlfTgTPYzfO9/GJLIu5jeX4qOcO3B7wMem7QfkuxTa57ckLjzFa9Jr/g80X0xSgOgeKx3hN/Vqyya9cHAWabP8mZUgtpDzLOw977QcKWOuFPXRqW6gQD4/7x6S+6JBXoF8vAc/U6CMLBvn6FqIGpBOBBb7r0F1nkIbc3q+PGzvyb7yvV1A0yvlZK8AU/CVqgRzAuwQ0KWIBpWMoNU+cnWIa3dz2cTe9S8AbcvmvF9mc+lue4XlXJZFA7atmyvGJasdMUGnD5lL0506BgphwD0Gt67pe9isYEq1/sa09pD3Uld7tvRMvd2ZOVdljpWVnm5LK/yU3i2jY97U7RppmuEMYN566NzjFNR1+3AhpcmjB9FyVQKhcj03dj4RDvtDTrE6UPLqlRJD9ek0IWC/tSyq2QYpObmwx094mxQ8JCYZA3UiM/XLSRI1pJdziOuw7rvMDf7NS9eTk+oR1paUZSSYgg6B2c7nOaZSObJYk0VQmGQVh/JfDFP1ncL937uR5p06YnDCQpwEj5blzpkoG9aukShygfon9oaeiMIsrJy/ZhWtdaV96DdjhEL4Lc4jtFVn2q1bW41OyPk2DazA+Si8XLLJUkM6p+1g6q3WQFt0k8qpOkzJKdlvtlzMJWdYLbsB+jA3g/+KVrXnqf6KBVZTO2wCEpkH3u7+FENwdAcvgVYW/9IE0xcw5k5VhX6D2jAahOnL2Q1cyfONt05S7pL9bZDIdcDCIZDbpqfuHtCGqW4Y3AEZjZTq5EkXpXZ+Oq4QzwS6WWTyn+3kEOgTviRrwv+odjffwK7JPwPsOa64iwS5zWkD5BoFvjm26uzMPRwzSXPhExNSjvINlU+teJu7rdsZNLcK+KN7iZdeZDpb++9qZSH07WKH5k+6TWCzU8NZgGrGK77JiAXW++1yBJLcDHnW88CaWrD+/bk2dhTTy95GVDfB6FAGJ3v7AS1aeP3AmcemuuXPxpNvK/6K0Y2/eyc66abWfZFsNUEi2/l4OJ60Gzd+LTB9P0Pun9mv0UJUPKsnReSieDVBepOQ/HO2S55J+WAdoLq75+1Oy8AEi6NcENAa9lSLzn5d6wlXfBFNSU4MVm5hwoPOqjbn9DjfktQ2cbvjZugDUpcztHhHlFwGU39syKuMSpYyhBexHQZYArUG2ARrWSenRFXL4CIM56z6rpUNpWj66V3JL0UzmR3Saz2QTXokHNtm1HFEtZL7EmZ4AT+FWBjEY0LxKfm/0EHGec55wd8vHLl2aahAXhzvtXE9B4384TIYZShIg1p1YojH/y+y7UzV5pR8LadsDTBMR++mATFgmSzfx+HRoy7i5C7N8VFwOWpU69z/3rgRuzsRWWzz6GTA1LrctJasOJzwIbOd8lFjX2Qmo5TRoyU7zaC7kGx7FYTdSb6VHJV100N/ZL1dwHdUSwTYX81HhxlIU3zmas+ecxhjhzS2gAGfvHc6mNzO+Ax3Bh/rox2F1ayc71Xj9QH9VLdFgLXlSP2iUagkpe5fXL4Be6J7r6yVCRZU0PuG1Hgldqe84jiuv1sOLWT+dMgl+ZAKsXyMO68nLaXzdnqTpmvN+bAoVyfdnG+Qp/xKHDHliFWritUkXnQuFHfynBInwEBd2+3vKxgA8UeYQBxMVvU1
4n8p5LYInLPciH+V3ZGW+z13cKRtJH2K3dQrxF/Fz6svZ1BakO++Teb/mNZqQidcrc7BcCLBzky0S8XyMTRgka8vn78zFWgxEYNTJpYeDKEzgrUxt2G0zZhpVZDarxK0U4efaoz85fkQZtR9UhXsdceW1ZxhPJBpDIJw6AmqvNUieiz6ESqwgm4qm8nOxvTjaOVxuGrA/Kxa67Pl//JxAM0m8YaibAvEa8TcNS6y5GkC+QvhAtXuDAl4n+shRf5p4U1Ngz2kpSB012UOBrSYVTez54FQH9cdPngn5ImgrJYU7VIWx0D+ai+AWmaedQfzMU8EL6Uy6n79MIkmaZf3lU5788yF8MJ+QKgGO7YiD7P4NqW+Yh51xYpCvFdma71fAjETWX2LanrvjpIO5f1sSSpDrpVbmVkp9csGTuviaCqB2/wLLXeIU3bXbIeCxmSWX/erIcgV2cATFNBeULkMfBXj8e+lvYKhR0B7hgZDxn9JvIRWn1mPUyWeYSS1EtSUdV0dQJNGn+Zun7irWaTrykuU8uzCHUV6aHKzFVNbpMshEfyu+yLVBMXIAHWJ5OYO0vw3Q9tx39Wn/eyI53zsEf20C91B4kKOIXgMDg+8PR0rXCUsGXxQVqK3l4IWlujrzrsRZZUlacwGe1LZQvypxfBKSSJw2zSDBk2yc7cg4i2FizIO2j/nPpPhkNrfMl0nA2qQNyhZ1p/Tk25bAtPykWfhUJs/ZeVEP9wrSrKA8O0nunTaMxNam5W78OjCjGAoyPKbzgSPIOdgSaKKxpRwmEHUWEokvWUYoR45UJcqwGYVypJi290khHgw+vksvTLuEviKqywlAmUCoAmvHnTDXeGaguFDjbv+POu5HXGSklyWT2pDrI9nwZWUUwtKyCHqWyerIfv5jd00v6DOh6Lm6OQqJ7jfbQnPxIWhTkv5qTEye4BIuM4nddj1/y5ypJvDu7R5tFPqMMJE+1wq4dxOck+A6SH5MuI6hBsjpdI+7rPj+O/fSeGv1imZtwZPRfHqWXWAmUhs4IXjm1eCboa4XzYxLPjUxerqw4EwlTaoOjvBOUZprtdZGFVHiP5hzxnpjb9H0OX5riV5DAfcr22yjuqD29O7F5d8N+RPlRF10zWj31YpMnL2pUGrDp0STLy1pxYru4AlrMfV8akVqc5QkcYQq5OL8bphfZFCbNbpvCfEz4hH/XyW05zQB1em4L7A8FOXaOCXUlhfQkqeFk68yCWX6g0Ru5kY1Vx1qOlTWuy1Vw2VyvuHbG6ooZHmhYYwvhJ+8cTi8oF8c9kQS5Oizp90GEaRZcVhZ43dB67176qPvZI4265SOidcQdkGkJWlGuhv0ZWe1ChI67sdyxWucLuedOZb9O5WqSAD2kAbHtLrI2eta7syPV3wWJVC7DiGZ7OzYvFmDeOVVHjGxqKaSu1gyhCEckyORNKKxgckPHHW1r0GC+DkQlcPiqtmnquSW82vmRhdDvpy4QJbj6Ji825dLljzxG3sGB4pc9iV/CDATv43WulJU8+TGau5lhgtqL2twJBnF6gqB6FfQXIE6CCWAT8wraePX97Hy1DS2FfUqCT9N2qf+NTeSspIC/QPrHtYZGNQ+Mdw38Uf/wEdT+Ytsc8lC5WHjRRs/G+qffjoBlYjROzAxVyeI6Lw3KJ1nCT9uUNuO4WMQZXMcuJcIHx8/jGZBAvmfbPiNsfqWtdlT/dUgKkPvdL4w6ZnA16CTRePyNI8iWbSAG04sihGJ4oj5hadP5BZU1flArUeQsSxwK5491X+7BEndIqBdFkX+d2+JffqcouI/hiY7bN70DzstXsFZdH9XYuxTtdPn3/xgg9/LSDE3tyjN0Lt7CMYyH3o+aR8TvuwHLB2uYDJIeGpT7LFPk+G8uS9+Gy1B3FvDwEfESyMXlv5qe/VVMYXa0U9o3geJLc5PD2IghhEvXR6i535dGryjAgrxfMW2ZuWv6XA11wPnn/GUUmTsTJCtFlKgYOaSV+hwvBz
k1bJ9YQ2cegtsfTzPPzblLshnwQ0mAsw4HTgKM2F23s1CN/vL0vnGm8REBm29JFtqt53XGzwUi/1S4oWS4NcsiOzkhTSdnln6cKIqhFzi6KmBeDhkPojjZmfYdkpg/c829WBqceEOX0UICICOXy1nVuhDrCcTFafDLnih0BrnYyn0TYwZZEX96b0XBmq9raMxWfeVitHmLqlFyOzcQM2cB2LGrQuvmLp0iHqOSnPvjvlQlmNomrYl0T0kvmSz8UljqZFDRbsKxIZji8yWu86p/V0lTKguXAckGrwQZUtJtQ4n+C1EYFhyLm/hMlPuk/Ze+Bqy/xuwy6OcYwPJkKnVe08eHmoDatMmtpF/gFoenYbX1Z89C4BYKdkUOPKyXQn43Fo5/10n6QAg8As/FjWdP/qTewOAaM2i4Ctl/J+EWBWVqSG4S3eDTPIUne2wKVcV3Ehk81Xr/CTFkwaVV0/irs/nhbr/jWFTrGL32Ep3AZFqbevMzVa1rgzTUsRYXa/QaC/vj3Tx82VqVe/chyfi7I/K0EcoTnEeMurc6vl/obSLcOCROU927QSc5mCf4aw9xJ53PPa8L41KGWP6SfNfmV4c+3SsR1nHuO+fDBIYlNdFo6ewovqPh6kHKqlJ7J0KHr3mXHhNQ+rUR6vB5esJE06z4BSfomdZsFRso0xE0U8Ra/5+1P6eOO6MxOGRv3qKm+tv8vmVSTYy7BNqCC4QqwnG/G0OjM9+kh1rs09CqYiEy4Q3thAXN5HifmxcS2dphL8TU05ZXfOqoD6Li/QcUycEWvSW96UaAVxo+rlCPZwzcLZBUn7Cs4flAt4r98CeknlAv2fjEXvGyUvqsV1wFnmZimh34eoAOO7W4/c7KxrZJsZCho39oRqWo/cgLAS8aHPEvE1LPVMEpRlZ4KvVFnQEZTUAfk/Sun+L0Xm3wdNRrKMoiOxeDvk/I162P8PFsH8U0DDqi1pRmr6Y2zfnl/L9OBYd4jppa5gfLxIBR0+cPr6V5lBRTbwLVYhMILvPO337+x0CI4ryQYDkV93U2sfFd3kwlec8O8suYUuRhg4NBiy8x76snp7yyDELxKANdvG6EnNYSnpkklAwo6f3gEg/WDR3UaNTHCOusuRLdeoyY5NrFTcL8c25qTZ68mV/lIX/POwWqLdv2moQJjNPntNFzX4WQpKocQPk0A/WhA4gkVA7P9SXA1yEZ+yakSt+2issmhnNwjJEIwLqFU3YBMX4E5XkfRVBOWidt1aQt00ZfYkj+G6qi78GMprwmcu5o5V7UfZD/fHk9g8FPqI0lKhuSs7HmPXFQ228WqRmOetFhPdMidXFlLDOXXi/xk2Rgksxx4YFKsIWmFH0wkYvIZZg7zArB8u+ukP7In5cXLp5sTd57cAE/8/eFPW3SSnPJd2ut1r7LcziOldEKlHpeFTpqF38Xz8OovkKbfrgfX7jlTwxxBEfIB38bA1MW8+r+qRtyvT/X0HlpUP9uANw2K74hnf8bM+Yym4W32MSaNJOHqDBU7sxDwrN2jpCLifDSdiIA7FD71c4GEARfusXrbbCt23EcGC2Xbcyo1s4kqVMhv3yrni/WGr9whQAhmuD5SOVMkwOOI8iWWFnP1s6/TuNhUwTFEZzgkRyLBnXFKu8jZqT6XU+B/WV2rhv9ajITv+bH2ogYnFjkCw3OEfC7fNDFDTjRbJ3t7GScUJtzgmXJOEDTbTwHQrvROFq53XZIEm5bqCnITsoVuYnuryD8Uu9ZXQc7eUUrXIO/Yz2tQFzgOa59x80btpIGyNrE0pZ21U6J28xYay6FmSSZ5OmRpisz2I7F20+OtQ4ilfVk8fOm9fADVz14cS9AMlcKNyO7BVdyBv8TBioVvjawh/A5ZX26nnn7GwnrYyZRQg31T2unRj1cLS/nUjuJ5tztiwzbtoamePg068yA1+JesO3lDEDOM9R5imeAATibbEwzEtpVx326ebUQL3NP5FWeAsvmWH7E6U5rKaZVwo
zWA4OhzqHCXyd3cb8GFSivi3J2ndfbwl90VB0M2sQj6ZrFuYnZP8IWH8Y8TL6EBxTimi1rsPUb7+ol1O1fkSACP7oju8jnL18ZOjVPYOvS7WGJCJEhXdOuyATUD/2KUr/YQy8lhJcoUcUF52OLYO0L9TxaS9mY86U+qSAwrzlI/H6S/XuY7tLzx5c5r5I4uqUnlS/ncyTiqOrFOV7F5O5jQ2qVv51Mvbuz5HH0Bi81iR/6hvsajjlSSYM4w+qb47WLXO+BvJH09gTA6ymNdbiGfT2Flz1oJvf6DJVBQhanMagH0opX3Ee+J9JrUBI2Wkn83nhbXf6c3BWNv7iCDUouIwbMPXXvkQP9TcWR7N4mHZJpXEkDCe9oXEafTKw8iQTYkrNy8oUr7mNA8ifQQy0eHKaLFbmgX3Fgqdh2iEzXvIi6Tl8CWfmtgb8KU8u/p2RHhUZwCdRQJiLVefuxPcz24qgL2GbBhIdaX1Pc9ZT3B5zBOpbP4iYlf9J14UIEHc7jcbHDZtjldZIimmQkkbAID7gpXb33Gt2QIY+Q8CGazjkepGSigOWvcebGX5iQbds5b2ijwglGrS56eQFNjuSN720yEsaIQpOfquUx7kWNOk15WwM2SZcoFpZcPem/6TsAOwljRnCp2yu7PixMWFIY0jEaplg6F09NfxCK8ib12vpYECT7U2/jHLZfrsLFPPUOgvff/Jtw/AHOXMWhZWdKEte52eLbvQHXF1rw6FIuHtPfOxdalZ0OBdG2i6Ip5NR7QaTr/KlPuhi/LXLTh0VCf+IZ3kWkEh6/L97sNpbeFlAZalTLXjzE1GV4lSPpYcw/5hET7a1lMtaUbcsAwjjjr0TZXjeN/TTAGD0dHhISrpnCMXQ0/Et5fSRN4J5/d6LdQ4dNRoGhvpjXtHKlJiJqG4oOH1376sVOssU0Tyl0n0om+09UoHW2JuGmYZ7jxduR1ReACO50LPfPYpx2Q15vD8S29iK2aAyvCm3dqf3SBLtnEFPx88XM3mVSigDMvn96OeBrEt+tj1Mv9ciPOYyxhx600yCnsP5KyxaF/P4C66BPCTCC7S0Y+0Xv3DGXvBZSVAd3mdAR/kb8GuJvvZyWbuNWfeKGbi7z/fWVEo5LRKwhzizzp9KWCtW0lVuA+veORBZj9TekP3yKvJ8cLwuylnwPKEntpaG2ecqQEwh23ZyfNkXXFrlfStoch36eIcKclXntV0yaV21aD9T0qZMrk33f/IiSf1Fwr3336b+rEGgB/12K2XHLDyLL29WAdlokOS4jz6JKB1R75SU92hyZpeJYbfiernr/RtV22/WRto7XMTRuRX/yup4wJf+rtPaWxA5/qY11nGziQJxJLqsd8rUzQIOEtiVXsz8HRWULbj1/YoYlx6awASZXnaNj2lRbxBv0XsO+fi/7m0p6bOMdleaf2AwsNJBa/GxLlJE0T/vpQgk9mQd6vut5Nq8O7Fle2rJ4B+g9nU3axN3iAEgcdG4knDUgHUqkZbv6E3DsxM4LQKWrjnMxaU9UtP8++sZ+3WJULuAvFEL9L4rD2B4bN+KnchqcL0L7r8CpqEJkOAW/7rzRnXSsvJ2fT3608NivDcqQFAoeTVE9uSwunR6HwjzVW4LUi8MZHndyWldQgUaGQ0tT7gNSPXWtFUApYBW/5Y1Li0iYBE/jY5CHOpRHt+kv6fdYxsVruUyQk4dV88vk1FXyScaow6R9+cH//NUPU5qL63IcAtHVqIx9K7Pgh+k3+5OraouQlIXeI53fyPRTuq/bWtB6gBvyVXylKQkfcTkghhTQIN2f6OeXHT99E7qpZzHgLuX3x4guyh8O+1EkrbrfpZouCXnFb59rRkU2jlpfYo+gNVM1HwTxoJwj937OPWdtoTtbE8PExGpc5PWFQcJjy+wF4CYw4OR1/IJm6imUNM9YLH5/AdWAxBXiReHH+fzMhwkld805kPCXX+TGeU0d/ZyBxONSKi5KH4c0luPL64vkqPEAca
WVB11qNAcrC7PLLsuX86UIdUR9FdNm62Ch2yEvh21ikndLsrziqD6Fb8e/7eB0CpQ3r05hs2dFknApSAT+kIle2AF8vx6WEgjP3TVN5nXTfRGkGqlOLK5RWSsHkO7TFxBnNb28AC9m3FMusKsG1kBVbNf1HiZ9JNlEMxhix6FtS/F0TVz9dgAsajSZJJ/CViiVDvuXhDDANdIs2frLh4T237DPn/NDiqZRJYMgTfD4yz2QC5FG5TaZUKOgcuV6vgupttYbUyy+8APdoMvvQkQ5YRWTGOlIKg0rIDFeJJL1EE2+SCs2uXgI6ifohC4dTotbvHFuMY77q/Nzajq07Rv3NG6tGsk1ajyKLIdTuZDvyofcHWEzJ7FOHNHoEsYhumH1fgizkNLnBp7mUVFHWUPrYtNL2gBDEYLVvueu2O8QnBnDN0Rkqeu2iZyngDdG0G/b5MZJE1iKYnKc/A3d71U2dJGKPz1AB/bqFVt7uCUlrFYGaXupKYTTRvXVEsSFr8z2xAeznLXuLlWFNgUnEMXNbIE/csxMv9q7xtalq98j/C6EKQi6sbkpAOiVZr2H0j/SeHcRsYqNyCbgc1x5ACQ5VMkZ/Ls95INWUoXUw5c9mgNJ2w5JtPlarCmYjbdLX0KVJfmzlvrkPj6HYixNC8URGPlxb2YSiJm827SLr6a1GtbeSLDPplnu3N71ZFTlg3QsTSQDfUETAsg4VNhZZ6YLpAhh8ifSFGG4A+DriJmjQdkvTXsZRlh1ZUMQVfMKF9oNPpPSMPJAOCWFXgC8ht52SqGcWosvPE4HNG32K04CponvcTt070JMj4DuAgQNPMdUhj0qpcQeVZisIZR7l+RlJ5t0zuOmwbtr89KQUibz3bdAkUVUrMiejmjJQOBaDWGrkUb3kgwXG1BG9nrfW6hI1EzWSStHLiEqaMNhQb/ZuJL7ehgWWE+KCej7FAHbIntx+KYVUOLAyVtcyrd4ufVEqslZ1nTuPkRq8ks/ufbnr8KeWxWFxR9E7MU+de1aJXO3BdiZ157a2bnIClR6gGv66A3+ZssupF7PjpG00hR97ksbI85y79ObvqHtaSYkE1ps9q2gQS+Dl6ia0MyUz75i9IOL9rWj8ps4RzEHW9cA6HezOSnuKVFuHBFfsgapSZEnFaQfMCaibv3o2N9Snk1WCg34ABuxXd33fvg+2SpmuSFUScYKlXZxNpQWd6z5UHfBhD8Gd1khPGk9WxXeMZ5GeQVCtMWMa0dhK+TOszWd5c1cosidt2PkTkfYFnNrcuEVhjSMacdYwDxkc0y+AMUThTWcXbozQBesUjYuk/pnn+b3M/liaIb9JQzAVv5hi6A55i/uEbrr+MOVgGcO26+UjD2D2RKmKkBawXvgqnuFz/suKdznOT02lKcPdUoKhehUUPw3HPTlCB1/1ZLENcaAIW6uocwuP+5Nwur7dlEDI102vP4sLieQp8vhNITaa2I6D78Z1C8G60yRxN/FVTclpoyHsgIu8XBXNaegrpnCt1s7/ib3HvmpiIVagJEMG953sJ9eSa+bZSAv6H2dLeTX+J6/y0U+vyiov+ElxVJrimoTM8/PKkdcPOsu2kUT35ir8Dy3bOmkYBjhXBoZ83bz5HAcetFudXam/nqG5FSXQVI6jIA6qiEiOjNaTfGvSlkonPw9NyX53fzsqUYOGjiMHz9ePb+jFEu/tPhqx15cZ3Zj3ODt17Kc8JK/bUah2ALL8nUxx3k6iDcM0wyqQKB8/nbypL8Lqk7oyvTkKR2bmQmrtG7iKWbTKGrmemrcNTX6h1w7R70Q33/3dFM6EXZtF58cavOxexCHPM+VO8uhqAO9C/dJ/lC1iOHEK+9D6xd7YBibPOuXAfWHZdWARFdyyEIxQElrPeQvj173VkMr4Jen6RSprRdQQ10P9WTPu57UZDBFpJs7n1gdyzw7cx9/e+qLMEOMopKzdDTMDnsr5ldEryIYSpwN+XqYsrKz0gmLhJc1PAMJwuYeS5
wwCN+8Ugqf9RV+0Ew4gsGUtA9vZ71SYNAFeLfea2djQ680ROpWf31/ayDFL+Fo2/KhfqoMMyqmYzOaxWI2Xz7BuJMfAkJ3FNwB7gA+Vqb7i/wF9gm2inaaLPBd+VkFPewQQA4qj3qghcugb84/qQ3z+ys4vwupyhXiev2cbA353UyOJZuKnuwoRfYKZeYmjLXwtr5Ku4inZqJJZaIzVKnJp+49V7Cv2HHdb6wPZ/DRsdlToojf9VttPk0iaHIhcIZGRXhzCzUJHLLH6G/kY/tFiyH67qfYkUSiZBiXEjp1dOLVFxSb+MXLQper6cTROuCXCEXHk4Pw1+aq5yIqU5mMYGwpebMQJzn7HRQkVLNDBzdDVdPNdBcXNMHQ2tjBNKPdYIw+zfaT9kaKEoWcfYj4HnI0LIX2VIU26ouszIiPAvh4F2Meb5Lr3mJV/tXXcVm9f4K6a7GtUNb4vH/xe7jn9ZxqZkav17c39WEaCqVlubs92TQFEdQv7tOn9xY7Fn62lK1AZB9BMBs+MuGP4zwvOudzZOWEZKlIgkzlxE9VBHL+IotanMlz4A+uu04UUNvO5KovZxFmX/Dp59fDEUDOWFtga4G/rIh30+mVen1RGNIccJf+WiJIuE3yZeu/OJbiR488+dGrgOomoR2FqiVQw2VYrQGaos7s3qT06dlJvPRR3adGYuXrEGSSeto+SejgS04vEzcBccjYSnnzvfTT8+tazqOesZl0ryDyP1wl+dCWhtZY2/DaX89RTS8cFhGbKTv53IVlz0Z4H9aEUVNKh/LwSDuTK0iVTTqr2bO80T49rt1Whwc3vhCVovWczUvsa2eoPZtgCkMzBA/okgrcUKaEWkesF5ehQ1IB7sWnHzqdr5/w78qXBaUefwLOvRepmKHqUBJrwgI+MOMZY5wKTOOBk3w/GEO018vdNONRLJ8UVzMVTAV5S7zuC3nTHzNyUGSsQWYaqokLA70iqB3DPQsdZNXiq9HAv/PJCPnQqw8cpGswS6IIzvbZtDws/qLz69Fzk6YrDojjd2xsONFtTS9wi0h75JV6CGNotjNA7ce6E60ncbATyla4nm2OGW6B96+QxgW1ZiVLYpQCrnk99MgWGQlO2jGGVhEiEYwMNlt6M8X3fUHNpg1NPBv37Lz+XRTPZFe8Pii9Ptas+4XHAz8IuCRtgi6/D3Alqg9n0sdpHq1ompEjZDdtaYm9Dc5rbVVfCsrgKmgTuW2dP0NB2v7CYKwX1+rY/xB+6dCCoRaZcCyLnzhgvkiFfUkaWVq7/90yMSMC+pR9MP6Ac9Kc3LzCknfU8gt957QRD3u56eNPE5QCas2/WkqdHAZVLRMPP3AJzwzk5bhZjt0fjyXQ4AceBLmHD+3A2SebceG8+s9len+57XP+/F5j/92+5mFp8JQgEGdTPBbli0w/hPz9BUmaV7ptc15XyQ7Ffe42HinCq00VQwFcKATgYO8yNu+4xld6RonMXGSVLNDzPZ6uJZesQvbMy215WpX7Srk20VX1MTS0XYDPxUUYJO3jRLBP68qvLkL/MsDS4nnz0s2zyUWZ3Yy8bRqygYQTmk59Db1gmeJlMNO8+uznmu+KOKFbunTE4qQxZPeOHNuKL1ISf8USRiKyVoWA9uOmVAqE+K+HPwUuKprK/CbPTRIPsAF09tiM8++2cTcnO3tFGUfie72rQ1I4zuwZ4ri1eU1/8dmW8z5cHQQdYET4wB3SuMjrFqphW7Gr2FL5SYCODoUw1S63ySTmm/reffvgLKvjjCQY9/3kQ9I3G4j69LKo/kSdKhJDZRTFxyetyGdelNQguXx4dwwlB6wSOXjw6OUkg0pSIkDXolCuVBL8ZQ/evXj0lq8xtJ2kbMvOS2dEN1OINXdkZEtpL5qC2sl8WVMqaHnOKbuiLEgBaz2FTQnNd+3Xz2v4tO9NFHF0MTTKsmc1UwNN5QWunAGHH2ORlW72+MmKsMTzs6+ELI5D4Dvxd7R/AVO/DUOd+8IfaW
cBGBbOstxSU+NKta1chtNVyZexMzmXOvq7c3T0dNc7Dvnl5EC9D7xe8dkk58ev6BuZBkGoBgbr0h+6cIYXhXthk4lww9dxNeWD2vPP6jW7R4DZTqqG6ymidF5o/FHNcia1TZV4LlhsIH18LnAfniaZ2ecgCDAeyAoU9VxU3tsgtIUl6q8VQoaDu0QcOaCxRv5ZLkGzD9jc2FU/OcC7OI7l40PPBO+TVZ4qsZKpFDEMOzZzAukDZhElz0Dl0iaNytaj45bpAmqxdyZP1szNCyX3pW3N7nmJPjTm9HXwvCECxMn+3dw5Kx7Zmbw5gfLDdS3fT/Q8UaEmmM8mL6e8hhxlsoQThJQXQpbJPNpVhPH6sUqB72tv1iKIrlsWlAE1AoWzZE9qDuPsf9u7jmW3kav9NF78C08hg1giExlEJLD5CzkTRCDS0xtNSTOWdGfGY2tcLpe1uCJIsBvs/vqkPn0+UoSCLiOq/EFC7jZltOtieFu6IcUJg9jOuiayEawxAdP5TVkb8Gnsw+TpRft94xJeG7oDUw88PWwuC/EFa+e3+zSEo/xQpVawVwgqOy/YX808yDTgunTp4oDWg3oYkHgfw6x4lfD5s1229MtqnyW4IG42o4E0BQksBDaByLu3TsyJb+odrZy7AWPum74YW1E4xpuyyJde4aH6Ru9k9s3iuQiuqcuFxNARQhol1eim4iZOqGIHWk4nvh34gRCRjagfs4wHFXcbbx5EQseSWcTdQB9EDLbxTKFMT2c1TJiafB+NCMaxpHgKdwScSwP8qDYipZd2qNgt9F0LH6+nZUq848JMtARD7j12pYUPFvYPOrYGpd9TLNYuTXc53TRnN7aauRvXRezxNGk2VSrV6OG5su/0mjbSiEBZKF8nqpcXbe4+db1ES9aF5drLGK1aKxnHotehG5N2WE8FS4eYQlK7GHixnwRGKjIY3+3A2KjKiVguR+7dlITXbqLQwztX9QALWOoB6R/5tetmhVfQzlFHuSpkFAiLqrqzE3GpG52zVwbI5BZoGZuvM0trz45WaVfpUHXV56rJGEipofa0IiCmWK/n2jC7mLVEWfeeC0xH5g0ZUmC0RXa0IF5RV54xVP0g2c4Q9afUJOqEDzCqn442ikZnfKKtZam6TVVG77OpxYv6YrILeKwr7Pm9VFQQ73N3Uzv0eXH4cvaL3BDsofd1R8VaQpXCkL7gc9H3JdfiI0KaC6m0B3tThSs0ZznUYsA3SfAa5C6d0smrUTqnlDAmZo7aH7Xm+VsMKlUgXogKGXyr1fYyO/ni19sVyekDeUdXRd2E9q5YLsfzXKLZvaJQleYqrE3a69vT5S66wzfhtnSku/Gn7440DoWc7q9+35erNPnvbdREerQGrAwzA592puNBxZrOtwCZc8SqrO0OBs+s2MB57d6U4uzjWV6PK3+O6qO54WatG9JVYgo7z95Z/9exbB1bAWIVAJJSsI28XU8XO9FGZIln3WezV+zaD0R+IgH8eDyUosLqa5qtp/nLKiX4bavR3BtUPOffu3pETdSUtFLPuQxOa1oLHxp6e6r5InRriaLUEVzTezSkBrWdIjSX65vSpJf18XT4Glqo5VG+3sd0SoGmEzJlinw1tUYimoaI6sK2HbFHmi7R7sJLls/5gx9maYWmS9Gm1iFaxsiUH/lzQo6zN1/GnI5YaND9gYrXyRWLob+4fFt58WkyoliScWkHOSrk2j1uRVFpr+Ig8c6cae7skx48wf4AR6QoyGEUaE8HOm0lpdskGZWWgwhM80IraTe8vCcEtwp2hVrvxt5rJFVOSy0hs46P/ROxHgPs+dIC24A4MF0ZZRbr03UeXoIkneLqkGpoLaHLbPmd7zdzK5J8Z9/kRxw89rk6Xmirt7vvCUnXE7Nu27Qfv6IJpWm+gyfvNDN6jq/g8irZIdBE6dixWX9ncZ80gW4ITWgKGgYFJQfuL+9QZ1vQGksWQNRyDQmHQBCUri7uyNwh3fFKWsfxQkTb5e
qbbjFushI8+1np+kBDnqsNzth7ZfCi1UrqVIs/KtU7Hbn7InhFVXTpXNjsLa/Y0DQSBcQrtaSGTgHFxTPdg+D6DuJ3LoYl5yxZ1kyHjHzZsGexWrxPPHg+seoFIfvcaXb8dYP2s7nMgjbIVPOrWoOj/gNDJZX5Tq8Cy52HkJ2oUsePRPkWVU9JgEWMt4xLnBycP+uFekyvm0Np8OkZXcy+O50rvJniKV71cJ7nylm5aneVPKmqq9e11dHZQWHF62WKvWm+PVfHfXaQIFyszjnYTqXjR6dPbALfO4wWSL9AkA7/FF3iekRvMsgQWSu3WRuxulcROTcDMioHZbuwrfDu053qBVl9p8KfEDg1ycQ3tpfDTiv4RrFTrcuxAE8E7O4IvPcw+jYNaKoZZ6Z1Xqc/jqwORth79E4suRDOadjD8kxePHuNXJupfQrfPXaPXaFMQJFwoWQYZOBP88zBDZBnp9X7eknBNkCnWJLnnRN3WjPZHU7knE/allA4X4l6Bo4Xzny8N69K43rFVYIwWz+0WKISeCsQJmCgdhD5tsFFpN4yrw6fFRlRcE0qzraci9nyB1t0m6JnJPllTwcz7zi6GpiMRPssvkboRXFVECJBMaX2GqI6GCHRuEZkVUSnk3vQcOdI/E00n5l0auTCQcwRfoDsLmbOiHROCMpiil3X0ruOT+EMO1OcQo858S8yieFOEGaTqET+c3qCAC/LPhmNvwXo5l/pTGwDeYRj+urT87PSTUNcaPpcD5fJz5Ha8VqiNV1TrFXT9qK3tL4KPtVmLMzDEecNkgAxSOjZt/1qP00DqzPVB1lpifKwzZqjxtflTl2exY7cni4SJbDfACJZ5vS1lvjBr7hY+5Bb+/rxLKCb7h4R5fTBnHfSpmFqq0TKjbLEWW5AFQaRm0D4HIg0DSHqU+Y9IYe3wpbR++udvcHNxqG1uRylvxRlhh9+c9qNTt4b7f5+djYrqSTHgV4jO20FG/UZXn/K6uXA0ubEIM1fHn6vMtzdqz6A5rvSSNQFnPiAavHBnzfS+f35QNva/EyTLCLZ7gWcQkTePbfRItopxMnghR/QS49jl1yCZ6ZrPVmFjtN1vjn7m5SnmkwhztsrduT0xYwfKcifBDSnQkeYzu7PN6SBTiPGzy80ndNBBCeM0pAtdiXBeZKyEGws9ckFaL4bqGBhQrFfmBo4bFOajEJ5V2jwygUuLG14rTjmyNS7XMFilAwKkpbAvrANhFSoFdApWlcSJq8BugYjHhXYxWM4prlrwMfpmAXE2Yfa0U9lEJG6MZek1bAGCSCVV/4pWiW6Khpwthsr5wjsgFrYPXYvc8ZGN7Kc8w1uCe/6zigCSgFdmO7wT6P+XayvobknpKoZR1QWuZVxn8iHpE7RTXwiMi1rdpyUpsrUuPVS4cE4f7YKRbA9+l7DuURsKEFBN8Qt0HQsxA9oawvkMTGCUkx1R9caK3PvMjrvEPVfPlGOk8xjUhulvVeCWBqPczCrxeIG+brSj0NkZIx5xDlsvwKw+VwMFTi772taHryeD63tlgez+0Ei6JyKCnYqFw871Wul3lHiKCXRSzBMV5C8VW9XVdUstEAv7/ppdTWhYtWstE+OHG9jhR7ou/C2sISk+vR4KoiyeGLrDaHO2c/XRm6+Utjr3Yb2xWylzadMNaxGp34pe+s3dl3F1vxsuHZAKwvH7to1k1/E80QEUonjEXWhqg6wYgSy6fQc8SbwJN0HPc4GriocDUENXd6KgstWgM1ogVxnNmMpHJrKSBYPamKlVH0Z9eIhQE4Dorj4WNwpkzGz8yDRCiGiwsiPtHmO6VY6vhFd0NUDfrr1DBT90jsbYyeHJ3EzF1W82x6a776JhEk0IKjnlZZsLxhSWVIm50lfq0AZbUeWJBTvtAx4Njz4Q6XdPqfts5VBdRDbr+j9xRjNaVZqaXjl6sw6VNgPwugVLrGqzUdSEG7R8NuAb1VVR5eel7xxW+x4Y43Fzid9ivP+8fZJWA
J/E6EPOZE4Bp8IB+c1zI4E9JJAdxoxtbGEwOb0ALbXwUpyX8APgtD8nXwClNGRX2litAKppnmVXUVUhuqAjz/NKtp4R+YKt9pdlSEMNFmmQezJEHKdegxIs7XEZmClzzAiJKrXYxXFO9bTO6FdnF26lbcsG1wQX8awCc4ewz29ou0tis7/HSgmI48QG2mTo9DypEWoHBHE55e2BGeAndzYmp2jyyq4VH6N+gavlma4aIVAdKeRiG0XRxPD7nJIMxHkr6MVR7tvnZYo3zVp025rj8wy4TiI8O5wijsaCSMFbX0q26QNN7EGbYP0ynJtlxbNkXmsS6uk8cf+bpReW9Dp7sSN6siXJ40HDQUiq4d31xpNEcK7sAlytbxo/lT7xu6Hz53EHc10lfLmPvkbM0nBiesHqVnKbJq8hpRjGHUsNwyrhUfXDsKk5L7bOrsPJTdI0mnzq/kjZtWVuqwJ2jrurkfdAKtzP5ns2qwvgKYQx6ZFw3Ww6mZL26TL+7BHuqAWvAkjI08c0LK3vNXqhHY7l+6BjGa5Z2HrlS2cPthegCmcspe+dadt398F2maGsBfm+yPC0ffuKcIJ98LzKry8+cEs6cd+lawApBCremIOEAa5SGD7RsfasRXlIcOvpsOJmlRNYDEWOcttOCfvN/PI9TGnVMgkgW2LJmH9uKxDjnoOdhUD8SAeVNth2ZAe77ou3tjiNQNeNpK/bniNioEgLlzNgfe86rJ12tYZ/ghh75fbnQlOmJ/fTLwBeX+o+aoWnP1okVBifr/N5zeLhvaf2/QC31WwZDxbPcRgExctw8CL5EsP7c89gHRf5PaXH8gChn6hL/jClvPl+iuKpQ85lv60+uO/TnzycxV5JnuAMuEIZEfdswXF2r/jJ6CXqGqjuM3+7/sP/1eY/ndgceLiW+4LjPoAGcgHyPjzCtNffr8wfZYWmf35csqK7hwC/pe3Ttik9Dj2K5icNpqmKvl6hv/lsf+aXg79rdkAj/qbc/F3A41/MM5f3huzNpqrJfuq8Y8G/3MPZl+9iUF+hS8Lh6if8K8bmfrXmGSfv/fLJH7XFIV809S3RGxzNBbZ/F1D55SAdI+fb3uCG6Zff2QK+rifX8D1qcVfoPbzqP4L6PuIWudXaBHyNts+4+wPQu57esIvKIDhn4iv/l0+Nf4ZfZefUAz+u09/UxD8h0CPhLCfCORrKYOiPwuePwo/8hsGr38Ufj8KIV/Q/1uKy57PAUuAHLFfXReB0fmfKvptjhTqGwsF/UAPUf9OPYR8RAP5Z+qh75XKPzEV2VbNd9DgTxQFf74O3tcYRn2+5rbPPb4v9r+7MLPx7OUNod/mTvu0WL+IiUZeeaTayHjBl0NDhfv//xUn/6NkEAwR3+q/yz+r/2D4G6yi37K7/iAF+F1HOET+wSej/g0qE/kDTEL/U5n/GFxR9JuJJL6B2D8MVgz6nZZ+nLo8Uvk6KX6HHcJ9lQoLJl7jX+EPwPHjqez+acLLN6vZdzxm/w20dmzfnSsK+M7gyCyk2T8/93da+YOl8Ot8iQj20xf+si98ZhT6na7+s/gSP8QY8uMxdkrn38eYGI1R1z/SXwPW9l8JrD9iE/5LUPtWCH6EM/TH4Oy8HHswg7+IvnMcSq1PM3DH3wA=</diagram></mxfile>
2302.03251/main_diagram/main_diagram.pdf ADDED
Binary file (81.6 kB).