Add files using upload-large-folder tool
This view is limited to 50 files because it contains too many changes. See raw diff.
- 2007.13143/main_diagram/main_diagram.drawio +1 -0
- 2007.13143/main_diagram/main_diagram.pdf +0 -0
- 2007.13143/paper_text/intro_method.md +50 -0
- 2010.08258/main_diagram/main_diagram.drawio +1 -0
- 2010.08258/main_diagram/main_diagram.pdf +0 -0
- 2010.08258/paper_text/intro_method.md +16 -0
- 2105.08997/main_diagram/main_diagram.drawio +0 -0
- 2105.08997/paper_text/intro_method.md +99 -0
- 2105.09803/main_diagram/main_diagram.drawio +0 -0
- 2105.09803/paper_text/intro_method.md +35 -0
- 2106.01425/main_diagram/main_diagram.drawio +1 -0
- 2106.01425/main_diagram/main_diagram.pdf +0 -0
- 2106.01425/paper_text/intro_method.md +52 -0
- 2106.13948/main_diagram/main_diagram.drawio +1 -0
- 2106.13948/main_diagram/main_diagram.pdf +0 -0
- 2106.13948/paper_text/intro_method.md +205 -0
- 2107.01396/main_diagram/main_diagram.drawio +1 -0
- 2107.01396/main_diagram/main_diagram.pdf +0 -0
- 2107.01396/paper_text/intro_method.md +66 -0
- 2109.04518/main_diagram/main_diagram.drawio +0 -0
- 2109.04518/paper_text/intro_method.md +99 -0
- 2109.05361/main_diagram/main_diagram.drawio +1 -0
- 2109.05361/main_diagram/main_diagram.pdf +0 -0
- 2109.05361/paper_text/intro_method.md +52 -0
- 2111.00295/main_diagram/main_diagram.drawio +1 -0
- 2111.00295/main_diagram/main_diagram.pdf +0 -0
- 2111.00295/paper_text/intro_method.md +184 -0
- 2112.00712/main_diagram/main_diagram.drawio +1 -0
- 2112.00712/main_diagram/main_diagram.pdf +0 -0
- 2112.00712/paper_text/intro_method.md +146 -0
- 2201.07745/main_diagram/main_diagram.drawio +1 -0
- 2201.07745/main_diagram/main_diagram.pdf +0 -0
- 2201.07745/paper_text/intro_method.md +105 -0
- 2201.12126/main_diagram/main_diagram.drawio +1 -0
- 2201.12126/main_diagram/main_diagram.pdf +0 -0
- 2201.12126/paper_text/intro_method.md +83 -0
- 2201.12426/main_diagram/main_diagram.drawio +1 -0
- 2201.12426/main_diagram/main_diagram.pdf +0 -0
- 2201.12426/paper_text/intro_method.md +21 -0
- 2202.08205/main_diagram/main_diagram.drawio +1 -0
- 2202.08205/main_diagram/main_diagram.pdf +0 -0
- 2202.08205/paper_text/intro_method.md +117 -0
- 2203.00048/main_diagram/main_diagram.drawio +1 -0
- 2203.00048/main_diagram/main_diagram.pdf +0 -0
- 2203.00048/paper_text/intro_method.md +11 -0
- 2203.00725/main_diagram/main_diagram.drawio +1 -0
- 2203.00725/paper_text/intro_method.md +97 -0
- 2203.06107/main_diagram/main_diagram.drawio +0 -0
- 2203.06107/paper_text/intro_method.md +71 -0
- 2203.12892/main_diagram/main_diagram.drawio +0 -0
2007.13143/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="Electron" modified="2020-07-14T12:29:04.499Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/13.3.9 Chrome/83.0.4103.119 Electron/9.0.5 Safari/537.36" etag="in_2BzO807bCWPEwPo7q" version="13.3.9" type="device"><diagram id="0r-299aNJVtl6sIg2gOo" name="第 1 页">7Vxbk6I4FP41Vs1u1Vgh4eZja8/MTlXPZae3dnYeI0SkBomLsbX3108CAUnAaXQQ0dUHhZNwgJwv37kk3QM0WWzfJXg5/0B9Eg0g8LcDdD+A0HCRw3+E5FlKEEKZJEhCX8p2gsfwPyKFQErXoU9WSkdGacTCpSr0aBwTjykynCR0o3ab0Ui96xIHpCJ49HBUlX4NfTaXUit/PtHwBwmDuby1CUZZwwLnnWXP1Rz7dFMSoTcDNEkoZdnRYjshkRi9fFyy697uaS0eLCExa3IBWb//+u3Pp7u3K+Px7/dT7yN6IK+lliccreULy4dlz/kI8OdeikNvPeU/4808ZORxiT0h23Crc9mcLSJ+ZvDDKV3HPvEfpoUAe9+DREg/rVkUxkTKfZx8/8TVhExAAwyBpQphKk17hgm3bEhjLlnRtRjW8SoDChIXzcIomtCIJunzIgAsQGaiD0vod6K0GPfjibiCxqwkn6UfLq8OaT4+JGFkWxLJIX5H6IKw5Jl3ka0jaf7NDiyuBMC8BJNchiU8g0LRzoL8QBrxAIPC6zPoDNsuAHUGnTjISls6MGhOYDY4r4HRFRp4RmzPqzOw74ymhYHf4kUYiVv/FS64S4DgI9nw7y90gWPZRfoP6PJzHIWBeACP254k7YABuhoaYDM0FKhpHQ7WFcLhrPPdgj2b8PaFWli3KsAI4Dqrju8dcGqrIo3GC3OVrDqqsaptn8iqzoVatbeBl22eOfJyX7Yoif07kZMIo0Z4tQo91YZkG7J/xPDL42+l43vxnqaVnz2Xmj6ThHvE1Mllspi/T6pnaCEjF3zLbyJOUm1F3502caary8yZZz/oKMLea2COq8QjPxlWOYoMJwFhL8dFxFfyuSpcSvCwauCRyxISYRY+qVlgHWbkHT7TkL/ZLlAwVL5Blga77L3lVeWsTVdka4qApigbmIqiFMLFax+P6lENqu2IybmswNv+d03zhtcZT9zxDggtt7tGfhSI3wmNn3JN/MEyZVlTZdpwBmDqRFGBFlNBbgoqpagSAwo+CXl2fycbFqHvR/vYNckoVU6DI0LQhDIs+XP001lwQATqmKojA1VH5lhVXJunor28ENI6QgLMyNUjpAVEmMDQGMKsIMLuFBENikolTygNUbGd4nAOH9mdEyw7wJ3LE6MnYvuyC7ScobXPC7bo8y7MlyEDqgCD1nG+DOlOcdStL8vf41dCtFNicwe+PeFZlxjMeb03INRYDppHglDLBJHTMQgbFPA6yhNOGtj3BTda/AyBNQQm2H3gcTAyYUVvtzAyG8AoisLlqkHtAK+W2drZLNyKaOZATktzzBcDKUaXLQU8eoVuVA2BUQ2i0MkCngYluiP8SodJtt2vOetoc8vQ5qx75Jw1tdwJdkz9DYp+N+pvDUam5Q5HpY+rEbY5RLDafCio9PUCCMxuQdVC3fFXg1pUA6AG1eCWsHw87fULrxCoSDLRkYi0LFVPx4CsKxn+PyITfeTRmQMTeKraXK5luhOMS7W6qd7vGut3Kem1AZp8gblYKqiABtaARo9d2gNNtXz3+z6DvTB7Ky7i5QFuYUBNpIZ5httsRE83DU9Tdzo4YCuvCw4OWBXse6BnOeoMquy2aew5NUW6nhN7TthCaehcoVwJWo6y5AyGI9d5AWC1Bffeo879eZjVFHS2BYZAZSwTDW2nlJp0W1qC1dLSjGC2TshAbGteCvzQGf9icyEpYoGkWLkTe7Z90Zn63D+z533uo30n/8sOfSU2wcSBVFCaWBGekmhc7KnRnrAFtwWHKp6MmpVd6KAuI4Hq5sLDgODR2A/FAjjm97LxQoQH8XQlfm7QOAAa6tIHdM4OjGrB8xX+jQve8ZmPY0+g4QP113zcb+ZtMPO1xB/VbE7s2MANikqXtj3xcnaZOyM1HKhZ3+h2kzmsK+ncdoGdbxeYo62QGDUuodNdYKiu0nThhOFbxPXNOsJw4RTZdn8Iw0U9IwxUtwWsDcJ4+PLxxhdHAMToG180KIldGl8QgzOGU8cXI9tBuE984fSNL+oqX23wxQJvl5RGN844AiRW3zijWpd6NRVZ52SOo4jEAXl9t8FpcWKc8Cx0vs+8t+SzHEoiRys8nT/9RNf4V60Xk34axc6D3riHug12t/zzfL7BgFDdTXV+53CFf1B7OQmowVPQnlFGXQXzloGekTHsnjGGWa1RvPJEOHnn42W6LswPgyAhQTYWEDzgZ26MW1DZwNqG1d2SBj/d/RetbBF898/I0Jsf</diagram></mxfile>
2007.13143/main_diagram/main_diagram.pdf
ADDED
Binary file (18.1 kB).
2007.13143/paper_text/intro_method.md
ADDED
@@ -0,0 +1,50 @@
# Introduction

As discussed in the previous section, the failure to learn target appearance representations under different challenges limits RGBT tracking performance. To handle this problem, we exploit the challenge annotations in existing RGBT tracking datasets and propose multiple challenge-aware branches to model the target appearance under certain challenges. To account for the properties of different challenges in RGBT tracking, all challenges are separated into modality-specific and modality-shared ones, and we propose two kinds of network structures to model them respectively. Moreover, we design an adaptive aggregation module to adaptively combine all challenge-aware representations even without knowing which challenges are present in each frame during tracking, and it can also handle situations with multiple challenges in one frame. To exploit the CNN's ability of multi-level feature expression, we add challenge-aware branches to each layer of the backbone network in a hierarchical architecture. In summary, our challenge-aware neural network consists of five components: a two-stream CNN backbone, modality-shared challenge branches, modality-specific challenge branches, an adaptive aggregation module for all branches, and a hierarchical architecture, as shown in Fig. [2](#fig:network){reference-type="ref" reference="fig:network"}. We present the details of these components in the following.

Following other trackers, we select a lightweight CNN to extract target features of the two modalities for the tracking task. Specifically, we use a two-stream CNN to extract RGB and thermal representations in parallel, where each stream is composed of three convolutional layers modified from VGG-M [@chatfield2014return]. Herein, the kernel sizes of the three convolutional layers are $7\times 7$, $5\times 5$ and $3\times 3$ respectively. The max pooling layer in the second block is removed, and dilated convolution [@yu2015multi] is introduced in the last convolutional layer with a dilation ratio of $3$ to enlarge the resolution of the output feature maps. To improve efficiency, we introduce the RoIAlign pooling layer to allow features of candidate regions to be extracted directly from feature maps, which greatly accelerates feature extraction [@jung2018real] during tracking. After that, three fully connected layers (fc4-6) are used to accommodate appearance changes of instances in different videos and frames. Finally, we use the softmax cross-entropy loss and the instance embedding loss [@jung2018real] to perform binary classification that distinguishes the foreground from the background.
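
The following is a minimal PyTorch-style sketch of one such backbone stream (the class name and channel widths are assumptions for illustration; only the kernel sizes, the removed second pooling layer and the dilation ratio of 3 follow the description above):

```python
import torch.nn as nn

# Hypothetical sketch of one stream of the two-stream backbone described above.
# Channel widths follow VGG-M conventions but are assumptions for illustration.
class BackboneStream(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=7, stride=2), nn.ReLU(inplace=True),
            nn.LocalResponseNorm(2), nn.MaxPool2d(kernel_size=3, stride=2))
        self.conv2 = nn.Sequential(  # second max pooling removed, as in the text
            nn.Conv2d(96, 256, kernel_size=5, stride=2), nn.ReLU(inplace=True),
            nn.LocalResponseNorm(2))
        self.conv3 = nn.Sequential(  # dilation of 3 enlarges the output resolution
            nn.Conv2d(256, 512, kernel_size=3, dilation=3), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.conv3(self.conv2(self.conv1(x)))

rgb_stream, thermal_stream = BackboneStream(), BackboneStream()  # two parallel streams
```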

<figure id="fig:sub_network" data-latex-placement="t">
<img src="sub_network_refine" style="width:80.0%" />
<figcaption>Structures of three subnetworks in our challenge-aware neural network.</figcaption>
</figure>

Existing RGBT tracking datasets provide manual annotations of five major challenges for each video frame: illumination variation (IV), fast motion (FM), scale variation (SV), occlusion (OCC) and thermal crossover (TC). Note that more challenges could be considered in our framework; we only consider the above ones, and the tracking performance is already improved clearly as shown in the experiments. We find that some of them are modality-shared, including FM, SV and OCC, while the remaining ones are modality-specific, including IV and TC. To better exploit these properties, we propose two kinds of network structures. We first describe the details of the network structure for the modality-shared challenges. For one modality-shared challenge, the target appearance can be modeled by the same set of parameters to capture the collaborative information in different modalities. To this end, we design a parameter-shared convolution layer to learn the target representations under a certain modality-shared challenge. To reduce the number of parameters of the modality-shared branches, we design a parallel structure that adds a block with small convolution kernels to the backbone network, as shown in Fig. [2](#fig:network){reference-type="ref" reference="fig:network"}. Although only small convolution kernels are used, such a design is able to encode the target information under modality-shared challenges effectively. Since different modality-shared branches should share a larger portion of their parameters, the number of modality-shared parameters should be much smaller than in the backbone. Specifically, we use two convolution layers with a kernel size of $3\times 3$ for the challenge-aware branches in the first convolution layer, and one convolution layer with a kernel size of $3\times 3$ and $1\times 1$ in the second and third layers respectively. For all modality-shared branches, Local Response Normalization (LRN) is used after the convolution operation to accelerate convergence and improve the generalization ability of the network. In addition, max pooling is used to make the resolution of the feature maps obtained by the modality-shared branches the same as that extracted by the corresponding convolution layer in the backbone network. Fig. [3](#fig:sub_network){reference-type="ref" reference="fig:sub_network"} (b) shows the details of the modality-shared branch.
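
As a rough sketch of the first-layer modality-shared branch described above (channel widths and stride settings are assumptions; the two 3×3 convolutions, LRN and max pooling follow the text):

```python
import torch.nn as nn

# Hypothetical sketch of a modality-shared challenge branch for the first layer:
# two small 3x3 convolutions, LRN, and max pooling so that the output resolution
# can match that of the corresponding backbone layer. Channel widths are assumed.
shared_branch_layer1 = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(32, 96, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.LocalResponseNorm(2),
    nn.MaxPool2d(kernel_size=3, stride=2),
)
# The same module instance would be applied to both RGB and thermal features,
# so its parameters are shared across modalities.
```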

<figure id="fig:guidance" data-latex-placement="t">
<img src="guidance" style="width:80.0%" />
<figcaption>Differences of our guidance module from FiLM <span class="citation" data-cites="perez2018film"></span>. Herein, <span class="math inline">+</span> and <span class="math inline">*</span> denote the operations of element-wise addition and multiplication respectively. We can see that our guidance module only uses the feature shift to perform guiding, as our task is simpler than <span class="citation" data-cites="perez2018film"></span>, and the experiments justify the effectiveness of such a design. In addition, we introduce a gate scheme and a point-wise linear transformation in our task, while FiLM uses a channel-wise linear transformation.</figcaption>
</figure>

As discussed above, the modality-shared branches model the target appearance under one challenge across all modalities for collaboration. To take the heterogeneity into account, we propose modality-specific branches to model the target appearance under one challenge for each modality. The structure of the modality-specific branch is the same as that of the modality-shared one, as shown in Fig. [3](#fig:sub_network){reference-type="ref" reference="fig:sub_network"} (b). Different from the modality-shared branches, the modality-specific ones usually contain the complementary advantages of different modalities in representing the target, and how to fuse them plays a critical role in boosting performance. For example, under IV, the RGB data is usually weaker than the thermal data. If we improve the target representations in the RGB modality using the guidance of the thermal source, the tracking results would be improved as the target features are enhanced. To this end, we design a guidance module to transfer discriminative features from one modality to the other.

The structure of the guidance module is shown in Fig. [3](#fig:sub_network){reference-type="ref" reference="fig:sub_network"} (a). Our design is motivated by FiLM [@perez2018film], which introduces feature-wise linear modulation to learn better feature maps with the help of conditioning information in the task of visual reasoning. It is implemented by a Hadamard product with prior knowledge and the addition of a conditional bias, which play the roles of a feature-wise scale and shift respectively. Unlike processing text and visual information in FiLM, our goal is simpler and only needs to improve the discrimination of features in a weak modality with the help of the other one. Moreover, for visual tasks like object tracking, spatial information is crucial for accurate localization and thus should be considered in feature modulation [@DSFT18cvpr]. Taking these into consideration, we use a point-wise feature shift to transfer discriminative information from one modality to another; the differences of our guidance module from FiLM can be found in Fig. [4](#fig:guidance){reference-type="ref" reference="fig:guidance"}. Moreover, we introduce a gate mechanism to suppress the spread of noisy information during feature propagation, which can be verified in Fig. [1](#fig:motivation){reference-type="ref" reference="fig:motivation"} for the case of TC. In the design, a convolution layer with a kernel size of $1\times 1$ followed by a nonlinear activation layer is used to learn a nonlinear mapping, and the gate operation is implemented by an element-wise sigmoid activation, as shown in Fig. [3](#fig:sub_network){reference-type="ref" reference="fig:sub_network"} (a). The formulation of our guidance module is as follows: $$\begin{equation}
\begin{aligned}
&\gamma = w_1\ast {\bf x}+b_1,\\
&\beta = w_2\ast ReLU(\gamma)+b_2,\\
&\tilde{\beta} = \sigma(\beta)\ast \gamma,\\
&{\bf z} = {\bf z}+\tilde{\beta}\\
\end{aligned}
\label{eq::guidance}
\end{equation}$$ where $w_i$ and $b_i$ $(i=1, 2)$ represent the weights and biases of the convolutional layers respectively, ${\bf x}$ and ${\bf z}$ denote the feature maps of the prior and guided modalities respectively, and $\sigma$ is the sigmoid function. $\gamma$ and $\tilde{\beta}$ denote the point-wise feature shift without and with the gate operation respectively.
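
A minimal PyTorch sketch of the guidance module in the equation above could look as follows (the class name and channel handling are assumptions for illustration):

```python
import torch
import torch.nn as nn

class GuidanceModule(nn.Module):
    """Hypothetical sketch of the gated, point-wise feature shift in the equation above:
    discriminative information flows from the prior modality x to the guided modality z."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=1)  # w_1, b_1
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=1)  # w_2, b_2

    def forward(self, x, z):
        gamma = self.conv1(x)                     # gamma = w_1 * x + b_1
        beta = self.conv2(torch.relu(gamma))      # beta  = w_2 * ReLU(gamma) + b_2
        beta_tilde = torch.sigmoid(beta) * gamma  # gate suppresses noisy responses
        return z + beta_tilde                     # z <- z + beta_tilde
```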

Since it is unknown which challenges each frame contains during tracking, we design an adaptive aggregation module to combine all branches effectively and form more robust target representations; its structure is shown in Fig. [3](#fig:sub_network){reference-type="ref" reference="fig:sub_network"} (c). In the design, we use concatenation rather than addition to aggregate all branches, so as to avoid diluting the differences among these branches in the adaptive aggregation layer. Then, a convolution layer with a kernel size of $1\times 1$ is used to extract adaptive features and achieve dimension reduction.
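
Assuming the backbone and the challenge branches all output feature maps with the same number of channels, the aggregation could be sketched as follows (channel bookkeeping and names are assumptions):

```python
import torch
import torch.nn as nn

class AdaptiveAggregation(nn.Module):
    """Hypothetical sketch: concatenate all branch outputs and reduce back to C channels
    with a 1x1 convolution, as described above."""
    def __init__(self, channels, num_branches):
        super().__init__()
        self.reduce = nn.Conv2d(channels * num_branches, channels, kernel_size=1)

    def forward(self, branch_outputs):            # list of tensors, each (B, C, H, W)
        fused = torch.cat(branch_outputs, dim=1)  # concatenation keeps branch differences
        return self.reduce(fused)                 # 1x1 conv: adaptive fusion + dim. reduction
```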

<figure id="fig:feature_maps" data-latex-placement="t">
<img src="feature-map-refine" style="width:50.0%" />
<figcaption>Illustration of feature maps in different layers and different challenges. We can see that different challenge attributes could be well represented in different feature layers in some scenarios.</figcaption>
</figure>

# Method

We observe that the target appearance under different challenges can be well represented in different layers, as shown in Fig. [5](#fig:feature_maps){reference-type="ref" reference="fig:feature_maps"}. For example, in some scenarios, the target appearance under the challenge of thermal crossover can be well represented in shallow layers of CNNs, occlusion in middle layers and fast motion in deep layers. To this end, we add the challenge-aware branches to each convolutional layer of the backbone network and thus obtain a hierarchical challenge-aware network architecture, as shown in Fig. [2](#fig:network){reference-type="ref" reference="fig:network"}. Note that these challenge-aware branches are able to model the target appearance under a certain challenge in the form of residual information, and only a few parameters are required for learning target appearance representations. The failure to capture target appearance changes under different challenges with limited training data in RGBT tracking is therefore addressed.

In the training phase, three problems need to be addressed. First, the classification loss of a training sample with any attribute will be backward propagated to all challenge branches. Second, the training of the modality-specific branches should not be the same as that of the modality-shared ones, as they contain additional guidance modules. Third, the challenge annotations are available in the training stage but unavailable at test time. Therefore, we propose a three-stage training algorithm to effectively train the proposed network.

**Stage I: Train all challenge-aware branches.** In this stage, we remove all guidance modules and adaptive aggregation modules, and train all challenge-aware branches (both modality-shared and modality-specific) using the challenge-based training data. Specifically, we first initialize the parameters of our two-stream CNN backbone with the pre-trained VGG-M model [@chatfield2014return], and these parameters are fixed in this stage. The parameters of all challenge-aware branches and fully connected layers are randomly initialized, and their learning rates are set to 0.001 and 0.0005 respectively. We adopt stochastic gradient descent (SGD) with a momentum of 0.9 and a weight decay of 0.0005. The number of training epochs is set to 1000.

**Stage II: Train all guidance modules.** After all challenge branches are trained in Stage I, the modality-specific challenge branches need to learn their guidance modules separately to address the problem of the weak modality. All hyper-parameters are set the same as in Stage I.

**Stage III: Train all adaptive aggregation modules.** In this stage, we use all challenging and non-challenging frames to learn the adaptive aggregation modules and the classifier, and fine-tune the parameters of the backbone network at the same time. Specifically, we fix the parameters of all challenge branches and guidance modules pre-trained in the first two stages. The learning rates of the adaptive aggregation modules and fully connected layers are set to 0.0005, and that of the backbone network to 0.0001. We adopt the same optimization strategy as in Stage I, and the number of epochs is set to 1000.
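
As an illustration of the Stage III settings, a hedged sketch of the optimizer configuration might look like this (the placeholder modules stand in for the real network parts; only the learning rates, momentum and weight decay follow the text):

```python
import torch
import torch.nn as nn

# Placeholder modules standing in for the real network parts (assumption for illustration).
backbone = nn.Conv2d(3, 8, 3)
aggregation = nn.Conv2d(8, 8, 1)
fc_layers = nn.Linear(8, 2)

# Stage III optimizer sketch: lr 0.0005 for aggregation modules and fc layers,
# lr 0.0001 for the backbone; SGD with momentum 0.9 and weight decay 0.0005.
optimizer = torch.optim.SGD(
    [
        {"params": aggregation.parameters(), "lr": 5e-4},
        {"params": fc_layers.parameters(), "lr": 5e-4},
        {"params": backbone.parameters(), "lr": 1e-4},
    ],
    momentum=0.9, weight_decay=5e-4,
)
```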

In the first frame, given the initial bounding box, we collect 500 positive samples and 5000 negative samples whose IoUs with the initial bounding box are greater than 0.7 and less than 0.3 respectively. We use these samples to fine-tune the parameters of the fc layers of our network for 50 epochs to adapt to the new tracking sequence, where the learning rate of the last fc layer (fc6) is set to 0.001 and those of the others (fc4-5) to 0.0005. In addition, 1000 bounding boxes whose IoUs with the initial bounding box are larger than 0.6 are extracted to train the bounding box regressor, with the same hyper-parameters as above. Starting from the second frame, if the tracking score is greater than a predefined threshold (set to 0 empirically), we regard the tracking as successful. In this case, we collect 20 positive bounding boxes whose IoUs with the present tracking result are larger than 0.7 and 100 negative samples whose IoUs with the present tracking result are less than 0.3 for online updating, to adapt to appearance changes of the target during tracking. The long-term update is conducted every 10 frames; the learning rate of the last fc layer (fc6) is set to 0.003, those of the others (fc4-5) to 0.0015, and the number of epochs to 15. The short-term update is conducted when tracking fails in the current frame, with the same training hyper-parameters as in the long-term update [@jung2018real].

When tracking the $t$-th frame, 256 candidate regions are sampled from a Gaussian distribution around the tracking result of the $(t-1)$-th frame, and then we use the trained network to calculate the positive and negative scores of these candidate regions. The candidate region with the highest positive score is selected as the tracking result of the $t$-th frame. In addition, bounding box regression is used to refine the tracking results so as to locate the targets more accurately. More details can be found in MDNet [@nam2016learning].
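
A rough sketch of this candidate sampling and selection step (the Gaussian spreads, the box parameterization and the scoring function are placeholders; only the 256 candidates and the argmax over positive scores follow the text):

```python
import numpy as np

def track_one_frame(prev_box, score_fn, num_candidates=256, std=(10.0, 10.0, 0.05)):
    """Hypothetical sketch: sample candidate boxes (cx, cy, scale) around the previous
    result from a Gaussian, score them, and keep the one with the highest positive score."""
    cx, cy, s = prev_box
    candidates = np.stack([
        np.random.normal(cx, std[0], num_candidates),
        np.random.normal(cy, std[1], num_candidates),
        np.random.normal(s,  std[2], num_candidates),
    ], axis=1)
    pos_scores = np.array([score_fn(c) for c in candidates])  # positive-class scores
    return candidates[np.argmax(pos_scores)]

# Example usage with a dummy scoring function.
best = track_one_frame((100.0, 80.0, 1.0), score_fn=lambda c: -np.sum((c - 100.0) ** 2))
```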
2010.08258/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2021-06-05T00:05:59.872Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.77 Safari/537.36" etag="7Ibm4GMuFTBLa1UBtPqF" version="14.7.6" type="device"><diagram id="fTsFn-0dbi-QJpJLrMOW" name="Page-1">7Vldb5swFP01kbaHVsGGJH1sPrZJa6tpmbT10QUH3DmYOaaB/fpdgwk4NFmqpQvrqjzE99i+2OfcYwLp4ckyey9JEl2LgPIe6gdZD097CDl95MCXRvIScQduCYSSBWZQDczZT1rNNGjKArqyBiohuGKJDfoijqmvLIxIKdb2sIXg9lUTEtIWMPcJb6NfWaCiEvX6/Rr/QFkYqa2OJanGGmAVkUCsGxCe9fBECqHK1jKbUK65q2gp573b0btZl6SxOmTCDRNX8gat7+ckzpX3cfrFyc9MlgfCU7Pf2ZseGnDIOL7Ti1a5IWLwI9UrHS9ErM5WhUyXMAC5CUg9rvuhFZrvIgv7oywPVRrYF9tODdhdjU26tO7o0HW/NcWxuRIC1aC2IRivI6boPCG+7lmDvXRiteQQOdAkq6Qs+AXLaGAWZwyE3OI6hbZUKprtLBpnU4pgYSqWVMkchpgJGxvmtivXtRVGBooaLqgwYswXbhLXBQoNU6NPqNdBm64A7GpCIVUkQhETPqvRsRRpHGiCpn2I6jFXQiSGynuqVG6oI6kSNtE0Y+qbnn7umejWJNPtadYM8iqIYbuNSTq8rfLpoJ5WRNW8bQ1XSorvdCK4kMV+8YWrPxt19fb3awtsiVT6dA+ryJyrRIZU7RmHH68VSTlR7MFex9GlR62jqkuGf8JB1WW/X9h2xyPvtH4ftuh6dfPv3IwPdLN3SjfjTrt58SLdjAb4tG4e7b17xyJ+vV0fYnD3XzC422mDH/z7vNsGx7hjDr94dfgxHO4d6PAd1fF3HO512uGLF2Lx1hM4PvEjeLWehu4hkHH5ma5uoFi3yYSdK5s120HmTFgwzrcgwlkYQ+gDU1TqZyzgkfmEX5qOJQsCvksm+5BpCOOMjiOMsyUM9oYtYdxHhEHPJkz7ZV6ISpeQIGDFgaDfk6YJZ3EITU5y4PX/EQw558iz75ePvM9C3vDca8vmPZts7RcbIS5lu7769P/IgxxbG6ff1mZ4HD9BWL93L/oaf17g2S8=</diagram></mxfile>
2010.08258/main_diagram/main_diagram.pdf
ADDED
Binary file (19.4 kB).
2010.08258/paper_text/intro_method.md
ADDED
@@ -0,0 +1,16 @@
# Introduction

BiSM [@bao2020bilevel] first approximates the score function via variational inference: $$\begin{align*}
\nabla_{\vv} \log p(\vv; \vtheta) = \nabla_{\vv} \log \frac{ \tilde{p}(\vv, \vh; \vtheta)}{p(\vh | \vv; \vtheta)} - \nabla_{\vv} \log \gZ(\vtheta)
= \nabla_{\vv} \log \frac{ \tilde{p}(\vv, \vh; \vtheta)}{p(\vh | \vv; \vtheta)},
\end{align*}$$ and then obtains the gradient of a certain objective by solving a complicated bi-level optimization problem: $$\begin{align*}
\min_{\vtheta\in \Theta} \gJ_{Bi}(\vtheta, \vphi^*(\vtheta)), \;\; \gJ_{Bi}(\vtheta, \vphi) = \E_{q(\vv, \vepsilon)} \E_{q(\vh|\vv; \vphi)} \gF\left( \nabla_{\vv} \log \frac{\tilde{p}(\vv, \vh; \vtheta)}{ q(\vh | \vv; \vphi)}, \vepsilon, \vv \right),
\end{align*}$$ where $\Theta$ is the hypothesis space of the model, $\gF$ depends on the specific objective, $q(\vv, \vepsilon)$ is the joint distribution of the data and additional noise, and $\vphi^*(\vtheta)$ is defined as follows: $$\begin{align*}
\vphi^*(\vtheta) = \mathop{\arg \min}\limits_{\vphi \in \Phi} \gG(\vtheta, \vphi), \text{ with } \gG(\vtheta, \vphi) = \E_{q(\vv, \vepsilon)} \gD \left(q(\vh | \vv; \vphi)|| p(\vh | \vv; \vtheta)\right).
\end{align*}$$ BiSM uses gradient unrolling to solve the problem, where the lower-level problem $\vphi^*(\vtheta)$ is approximated by the output of $N$ steps of gradient descent on $\gG(\vtheta, \vphi)$ w.r.t. $\vphi$, denoted by $\vphi^N(\vtheta)$. Finally, the model is updated with the approximate gradient $\nabla_\vtheta \gJ_{Bi}(\vtheta, \vphi^N(\vtheta))$, whose bias converges to zero at a linear rate in terms of $N$ when $\gG$ is strongly convex. Gradient unrolling requires $O(N)$ time and memory.
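
A hedged sketch of such gradient unrolling, with generic differentiable losses `G` and `J` standing in for $\gG$ and $\gJ_{Bi}$ (the step size and toy objectives below are placeholders):

```python
import torch

def unrolled_grad(theta, phi0, G, J, N=5, inner_lr=1e-2):
    """Hypothetical sketch of N-step gradient unrolling: approximate phi*(theta) by N
    differentiable gradient steps on G w.r.t. phi, then differentiate J through them."""
    phi = phi0
    for _ in range(N):
        g_phi, = torch.autograd.grad(G(theta, phi), phi, create_graph=True)
        phi = phi - inner_lr * g_phi                       # phi^N(theta)
    g_theta, = torch.autograd.grad(J(theta, phi), theta)   # d J(theta, phi^N(theta)) / d theta
    return g_theta

# Toy usage with scalar parameters and quadratic losses (placeholders, not the BiSM objectives).
theta = torch.tensor(1.0, requires_grad=True)
phi0 = torch.tensor(0.0, requires_grad=True)
g = unrolled_grad(theta, phi0,
                  G=lambda t, p: (p - t) ** 2,
                  J=lambda t, p: (t + p) ** 2)
```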

Gradient unrolling with few steps has a large bias, while unrolling many steps is time- and memory-consuming. Thus, BiSM with 0 gradient-unrolling steps suffers from an additional bias besides the variational approximation. Instead, VaGES directly approximates the gradient of the score function, and its bias is controllable, as presented in Sec. [2.2](#sec:bd_bias){reference-type="ref" reference="sec:bd_bias"}.

[^1]: http://www.cs.nyu.edu/~roweis/data.html

[^2]: https://github.com/bioinf-jku/TTUR
2105.08997/main_diagram/main_diagram.drawio
ADDED
The diff for this file is too large to render. See raw diff.
2105.08997/paper_text/intro_method.md
ADDED
@@ -0,0 +1,99 @@
# Introduction

Let us first recall the general procedure of neural network training and the rationale behind it. Assume that we have a training set $X = \{(x_{n}, y_{n})\}^{N}_{n=1}$, where $x_{n}$ are our (i.i.d.) dataset instances, $y_{n}$ the corresponding labels, and $N$ the number of dataset instances. We also assume access to a similarly designed, non-overlapping test set. We want to optimize a defined loss to measure and minimize the discrepancy between our network prediction and the ground truth. For that, we ideally want to integrate the loss $L$ of our neural network, a function $f_{\theta}$ with parameters $\theta$, over the dataset distribution: $$\begin{equation}
\int L(f_{\theta}(x),y) dP(x,y)
\label{eq:approximation}
\end{equation}$$ In practice, we only have a limited number of samples from the dataset distribution. Hence we compute an *approximation*. Typically, a noisy gradient estimate is leveraged for such empirical optimization, by presenting our data in mini-batches over several epochs $t$ and shuffling the dataset after every epoch, i.e. when the network has seen all the data (at least) once. Of course, when the networks have fully converged and learnt a sufficiently large amount of data, the dataset instances they have learnt trivially overlap. It is, however, not self-evident that different approximators would learn the data in a similar way, or in other words, that despite shuffling and mini-batch updating, neural networks would learn the data in the same order. We first provide a definition for such agreement and then consider when networks necessarily start to agree on the dataset instances during learning.

<figure id="fig:agreement" data-latex-placement="t">
<embed src="{diagrams/agreement}.pdf" style="width:90.0%" />
<figcaption><strong>Agreement visualization</strong>: For each image the classification results are compared across networks and, in addition to the <em>average accuracy over networks</em>, the <em>true positive agreement</em> is calculated, which is the ratio of images that all networks classify correctly per epoch to those that at least one network classifies correctly. Images are taken from ImageNet.</figcaption>
</figure>

Hacohen *et al*. [@Hacohen2020] define one form of *agreement* as "the largest fraction of classifiers that predict the same label" for the same data instance, as well as *true positive agreement* as an "average accuracy of a single example over multiple models". In order to take the next step and try to quantify the difficulty of the images for training, we step back from agreement as an average and define the *(true positive)* agreement of an instance per epoch as an *exact match*, such that all $K$ networks classify the same instance correctly in epoch $t$, where $K$ is the number of networks we have trained. *True positive agreement per epoch* can then be computed as the sum of instances classified correctly by *all* classifiers in that epoch, normalized by the sum of instances classified correctly by *any* classifier in that epoch. Formally, the true positive agreement $TPa$ per epoch $t$ can be defined as: $$\begin{equation}
TPa^{(t)}(x,y) = \frac{ \sum_{n \in N} \prod_{k \in K} \mathbbm{1}_{f^{(t)}_{k}(x_{n})=y_{n}}}{\sum_{n \in N} \max_{k \in K} \mathbbm{1}_{f^{(t)}_{k}(x_{n})=y_{n}}}
\end{equation}$$ During training, we now monitor the true positive agreement in every epoch for each training instance. Suppose that we train $K$ networks, as in [1](#fig:agreement){reference-type="ref+label" reference="fig:agreement"}. In the first epoch, some models classify some dataset instances correctly (the indicator function $\mathbbm{1}_{f^{t}_{k}(x_{n})=y_{n}}$ being the condition for a prediction match), but for no instance is it the case that all models classify it correctly. In the second epoch, one dataset instance is classified correctly by all models. As more models classify instances correctly during training, if they learn the same instances first, then as soon as all models agree, it will be reflected in the agreement scores. Perfect agreement of 100% can be reached if all models learn the same instances. So agreement, the fraction of instances classified correctly by all models relative to those learned at all by at least one model, can be higher than the average model accuracy per epoch, which is the fraction of correctly classified instances from the whole train set, averaged over the trained models. False positive agreement (all models misclassify an instance in the same way), as in the case of the partly white cat in the first two epochs (blue shaded box in [1](#fig:agreement){reference-type="ref+label" reference="fig:agreement"}), is left out of the true positive agreement. However, one could also analyze the false positive agreement in future work.
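
As a concrete illustration, assuming a boolean matrix `correct[k, n]` that records whether network $k$ classifies instance $n$ correctly in a given epoch, the per-epoch true positive agreement could be computed as follows (a sketch, not the authors' code):

```python
import numpy as np

def true_positive_agreement(correct: np.ndarray) -> float:
    """Hypothetical sketch: correct has shape (K, N); correct[k, n] is True if network k
    classifies instance n correctly in this epoch. Returns TPa for the epoch."""
    agreed_by_all = correct.all(axis=0).sum()   # instances every network gets right
    agreed_by_any = correct.any(axis=0).sum()   # instances at least one network gets right
    return agreed_by_all / max(agreed_by_any, 1)

# Example: 3 networks, 4 instances -> TPa = 1/3.
correct = np.array([[1, 1, 0, 0],
                    [1, 0, 1, 0],
                    [1, 1, 1, 0]], dtype=bool)
tpa = true_positive_agreement(correct)
```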

To assess how trivial it is that neural networks agree *before* convergence, we consider the minimum possible agreement, the *lower bound*. When do the models necessarily start to agree, i.e. learn the same instances? Let us assume that we train 2 networks and each one is 50% correct at some epoch $t$. Then each network can learn the fraction of data that the other one did not. The same applies if we train 3 networks and the accuracy is 2/3 per network, because if we split the data into 3 portions, every network can learn 2 portions of the data such that there is not one portion common to all 3 networks. The same occurs for $K$ networks with accuracy $\frac{K-1}{K}$. In all these cases the sum of errors $err^{e}_{k}$ that all networks make is $K \cdot \frac{1}{K}=1$, or $100\%$. As the accuracy rises higher than that and the error gets lower, the networks will necessarily start to learn the fractions of data the others are learning. Hence, the lower bound is 0% when the sum of errors the networks make is greater than or equal to 100%. The lower bound fraction, reported in the remainder of the paper in percent, can be defined as: $$\begin{equation}
LBa^{t}(x,y) = 1 - \min \left(\sum_{k \in K} \underbrace{1 - acc^{e}_{k}}_{err^{e}_{k}},\ 1\right).
\end{equation}$$ In particular, the difference between agreement and lower bound shows us the portion of agreement which could not have been predicted on the basis of the lower bound alone.
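
A corresponding sketch for the lower bound, given the per-network accuracies of one epoch (again an illustration rather than the authors' implementation):

```python
import numpy as np

def lower_bound_agreement(accuracies) -> float:
    """Hypothetical sketch: accuracies is a sequence of per-network accuracies for one
    epoch; the lower bound is 0 while the summed errors reach or exceed 100%."""
    errors = 1.0 - np.asarray(accuracies)
    return 1.0 - min(errors.sum(), 1.0)

lower_bound_agreement([0.9, 0.95, 0.97])  # -> 0.82
```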

Lastly, since one of the datasets we test our hypotheses on is *multilabel*, in this scenario we extend the definition of agreement and lower bound such that agreement and accuracy are calculated on the basis of an *exact match*, meaning that the prediction must match for all present and absent labels in an image: the presence and absence of a label should be predicted correctly for all labels. This criterion is non-forgiving: if only one of two labels has been predicted correctly, the exact match still classifies the image as wrongly predicted. On the other hand, it is a strong criterion to test agreement on, since what we are after is *full* and not *partial* agreement. It is also a much stronger criterion than the one investigated by Hacohen *et al*. [@Hacohen2020]. Notably, our TP-agreement is thus also different from *observed agreement*, i.e. the number of instances the given estimators classify correctly (true positives) and incorrectly (true negatives), divided by the total number of instances [@Hallgren2012]. Our choice to separate out true positive agreement is intended to avoid potential confusion, as the number of true positives and negatives is not equal during training and can complicate the assessment of reliability across estimators. The alternative Cohen's kappa, for example, suffers from the prevalence problem: it is non-representatively low in case of uneven frequencies of events. An alternative PABAK measure counteracting the prevalence bias [@Byrt1993] is a linear function of observed agreement, which takes both true positives and negatives into account. We provide a deeper discussion of alternatives in the appendix, in the context of the preliminary results shown in the upcoming section.

# Method

To reiterate, the general procedure is to train several (in our case 5) networks on the same dataset and to track which examples have been classified correctly per epoch per network. Upon training several networks for the same number of epochs, *agreement per epoch* is defined as the sum of images *every* network classified correctly, normalized by the sum of images *any* network classified correctly. We have seen from the example of the *lower bound* that quite a high accuracy is necessary for the networks to unavoidably learn the same instances, and that this relation depends on the number of networks trained: the more networks, the higher the accuracy each of them has to obtain before they start learning the same portion of the data. Hence, it is not self-evident that agreement does occur during neural network training.

If networks agree on what to learn, on which factors does this agreement depend? To provide an intuition for the above paragraphs, as well as to replicate prior insights obtained by Hacohen *et al*. [@Hacohen2020], we first test for the overall presence of agreement. Then, we extend our experiments to different batch sizes and architectures. The presence of agreement in all these diverse conditions supports the hypothesis that agreement is not dependent on the *model*, but on the (sampled) dataset population itself, in terms of the nature of the true unknown underlying distribution and the intrinsic characteristic of the difficulty of classification [@Caelen2017]. To further investigate whether the *joint* dataset and label distribution leads to agreement, or whether it is independent of labels, we test the presence of agreement in the case of random labels.

<figure id="cifar10_ablation" data-latex-placement="th">

<figcaption>Ablation study on <strong>CIFAR10</strong>: training DenseNet121 with and without randomization, as well as with different architectures: LeNet5, VGG16, ResNet50, DenseNet121. The <strong>blue area</strong> demonstrates the difference between agreement and lower bound. The <strong>red area</strong> is the epoch-wise standard deviation from average accuracy across trained networks.<span id="cifar10_ablation" data-label="cifar10_ablation"></span></figcaption>
</figure>
As hypothesized, mirroring prior hypothesis of Hacohen *et al*.[@Hacohen2020], but using our strict agreement criterion, we found visually clear agreement during training of CIFAR10 [@Krizhevsky2009] on DenseNet121 [@Huang2017], presented in [\[fig:cifar10_without_random_labels\]](#fig:cifar10_without_random_labels){reference-type="ref+label" reference="fig:cifar10_without_random_labels"}. Particularly in the first epochs, as the accuracy grows, the area between agreement and the lower bound, shaded in blue, is the most prominent. It shows that throughout training we observe growing agreement on the learned instances, i.e. certain data instances are labelled correctly in earlier stages than others, which is most remarkable in the training epochs before networks converge and the accuracy plateaus.

Agreement also persists when training different architectures. We have chosen 4 diverse architectures, ranging from simple ones like LeNet5 [@LeCun1998] and VGG16 [@Simonyan2015] to more complex ones like ResNet50 [@He2016] and DenseNet121 [@Huang2017]. Training CIFAR10 on them for the same number of epochs, we could observe visually prominent agreement as well, see [\[fig:cifar10_archs\]](#fig:cifar10_archs){reference-type="ref+label" reference="fig:cifar10_archs"} (training details and additional plots in the appendix). The agreement is similar to that observed when training the same architecture. The differences in learning speed are reflected in the standard deviation across the different models' accuracies, shaded in red. Note that the accuracy deviation was almost negligible when identical architectures were trained.

We have also replicated the presence of agreement for different batch sizes in the appendix. A comparison shows that the agreement curves are similar for simpler architectures with smaller batch sizes and for more complex architectures with larger batch sizes, suggesting that model capacity and the enhanced randomness when training with smaller batch sizes slow down learning and, in effect, agreement.

The above experiments suggest that batch size or architecture type are not the underlying causes of agreement during training. Next, we test whether it is the structure of the data itself, or the relationship between the training data and its human-assigned labels, that accounts for it. To test this hypothesis, we assess agreement under label randomization. State-of-the-art convolutional networks can fit random labels with ease [@Zhang2017]. Maennel *et al*. [@Maennel2020] argue further that during training with random labels an alignment between the principal components of data and network parameters takes place. Hence, if the dataset structure is responsible for agreement, it should be visible also in the case of random labels. [\[fig:cifar10_random_labels\]](#fig:cifar10_random_labels){reference-type="ref+label" reference="fig:cifar10_random_labels"} supports this hypothesis. We observe that the accuracy grows at a slower pace and that agreement grows more slowly than during training with ground truth labels. This may reflect the fact that it takes longer for the network to start learning the dataset structure without label guidance. Nonetheless, there is still sufficient agreement once accuracy starts to rise. Interestingly, here we disagree with [@Hacohen2020], who did not find agreement with randomized labels.

To conclude, we have observed a clear gap between the theoretical lower bound and the observed agreement, independence of semantic labels and architecture, and even coherence between them. This supports the existence of a fundamental core mechanism linked to more elemental dataset properties. To further support and strengthen our early results, we show in the appendix that the observed trends of true positive agreement clearly surpassing the lower bound persist even when comparing it to the *expected random agreement*. The latter is computed as the product of accuracies for a given epoch. It is based on the assumption that networks classify instances independently of each other, which does not seem to be the case, partly because, as we show, the dataset structure plays an important role. In addition, we also compute the standard deviation of agreement for Pascal in the appendix, showing that it is negligibly small.

In the face of the insight that dataset properties are the probable cause of agreement, we proceed by choosing several diverse datasets, as well as dataset metrics, to establish a correlation between them and training agreement. We have chosen the following datasets to validate our agreement hypothesis: the tiny-sized CIFAR10 [@Krizhevsky2009] for ablation experiments, a diverse dataset with objects differing in illumination, size and scale, Pascal Visual Object Classes (VOC) 2007 and 2012 [@Everingham2010; @Everingham2015], the large-scale ILSVRC-2012 (ImageNet) [@JiaDeng2009; @Russakovsky2015], as well as the texture dataset KTH-TIPS2b [@Caputo2005]. CIFAR10, Pascal and ImageNet have been gathered by means of search engines followed by manual clean-up. In the case of ImageNet, the classes to search for were obtained from the hierarchical structure of WordNet, and the manual clean-up proceeded by means of the crowdsourcing platform Amazon Mechanical Turk. On the example of the person category, Yang *et al*. [@Yang2020] elaborate that a dataset gathered in such a manner is only as strong as the semantic label assumptions and distinctions on which it is based, the quality of the images obtained using search engines (e.g. lack of image diversity), as well as the quality of the clean-up (annotation) procedure. In contrast to the above three datasets, KTH-TIPS2b, though quite small, has been carefully designed, controlling for several illumination and rotation conditions, as well as varying scales.
**Prior work**: What surrogate image statistics may be used to study the agreement between classifiers on the order of learning? Let us look at some prior works in this direction. First, some dataset properties make learning difficult in general, namely the diverse nature of the data itself: the difficulty of assigning image categories due to possible image variation, e.g. rotation, lighting, occlusion, deformation [@Pinto2008]. Second, learning algorithms also show preferences for particular kinds of data. Russakovsky *et al*.[@Russakovsky2013] analyze the impact onto ImageNet classification and localization performance of several dataset properties on the image and instance level, like object size, whether the object is human- or man-made, whether it is textured or deformable. Their insights are consistent with the intuition that object classification algorithms rely more on texture and color as cues than shape. They further argue that object classification accuracy is higher for natural than man-made objects. Hoiem *et al*.[@Hoiem2012] analyze the impact of object characteristics on errors in several non-neural network object detectors, coming to the conclusion that the latter are sensitive to object size and confusion with semantically similar objects.

In an attempt to mimic the decisions of a human learner, several image features have been related to image memorability and object importance. Isola *et al*. [@Isola2014a] investigate the memorability of an image as a stable property across viewers and its relation to basic image and object statistics, like mean hue and the number of objects in the image. Spain and Perona [@Spain2008], on the other hand, establish the connection between image object features and the importance of an object in the image, which is the probability that a human observer will name it upon seeing the image. Berg *et al*. [@Berg2012] extend this analysis to encompass semantic features like object categories and scene context.

Knowing which image cues the learning process is sensitive to gives room for improving it. Alexe *et al*. [@Alexe2012] design an objectness measure, generic over classes, from image features (multi-scale saliency, color contrast, edge density and superpixels straddling), which can be used as a location prior for object detection. Extending the latter work, Lee and Grauman [@Lee2011] use this objectness measure, as well as an additional familiarity metric (whether an object belongs to a familiar category), to design a learning procedure which first considers easy objects. Liu *et al*. [@Liu2011] fit a linear regression model with several image features, like color, gradient and texture, to estimate the difficulty of segmenting an image. In [@Vijayanarasimhan2009] an active learner is designed which, partly on the basis of edge density and color histogram metrics, proposes which instances to annotate and estimates the annotation cost for the multi-label learning task. Notably, the question of whether networks learn the same examples first is different from the question of whether they learn the same representations [@Wang2018; @Li2016], but if the former is correct, then the latter is more probable.
**Our choice**: Several works have correlated basic image statistics to various human-related concepts, like memorability [@Isola2014a], importance [@Spain2008] or image difficulty [@Ionescu2016], or directly attempted to find out the influence of such metrics onto object classification [@Russakovsky2013; @Alexe2012]. Inspired by these approaches, we have chosen 4 image statistics to correlate agreement to: segment count [@Felzenszwalb2004], (sum of) edge strengths [@Isola2014b], (mean) image intensity entropy [@Frieden1972; @Skilling1984] and percentage of coefficients needed to reconstruct the image based on the DCT coefficient matrix [@Ahmed1974]. First 3 metrics are shown in [3](#fig:img_metrics){reference-type="ref+label" reference="fig:img_metrics"} (DCT coefficients matrix in the appendix).
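
For two of these statistics, a rough sketch of how they might be computed (the histogram binning and the retained-energy threshold are assumptions; segment count and edge strengths would come from the cited methods):

```python
import numpy as np
from scipy.fft import dctn

def image_entropy(gray: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy of the grayscale intensity histogram (binning is an assumption)."""
    hist, _ = np.histogram(gray, bins=bins, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def dct_coefficient_fraction(gray: np.ndarray, energy: float = 0.99) -> float:
    """Fraction of 2-D DCT coefficients needed to retain the given share of the
    coefficient energy (the energy threshold is an assumption)."""
    coeffs = np.abs(dctn(gray, norm="ortho")) ** 2
    sorted_energy = np.sort(coeffs.ravel())[::-1]
    cumulative = np.cumsum(sorted_energy) / sorted_energy.sum()
    needed = np.searchsorted(cumulative, energy) + 1
    return needed / coeffs.size

gray = np.random.rand(64, 64)  # placeholder image with values in [0, 1]
print(image_entropy(gray), dct_coefficient_fraction(gray))
```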

The choice of the first two is inspired by [@Ionescu2016], who correlate several image properties, including segment count and (sum of) edge strengths, to the *image difficulty score*, defined as a normalized response time needed by human annotators to detect the objects in the image. Their hypothesis is that segments divide the image into homogeneous textural regions, such that the more regions, the more cluttered an image might be (and the more difficult an object is to find). The same line of reasoning goes for the sum of edge strengths: the more edges, the more time might be needed to get a grasp of the image. If the way humans search for objects in an image corresponds to some degree to the way a neural network learns to predict their presence, segment count and the (sum of) edge strengths might be predictive of the agreement during training. Mean image entropy and DCT coefficients have been chosen following a similar line of reasoning. Both entropy and DCT coefficients, similar to edge strengths and segment count, provide a measure of variability: how uniform the variation in image pixels is in the case of entropy, and how many (vertical and horizontal) frequencies are needed to describe an image in the case of DCT coefficients. Recently, [@Ortiz-Jimenez2020] have analyzed how certain dataset features relate to classification, stating, for instance, that for MNIST and ImageNet the decision boundary is small for low-frequency and high for high-frequency components. In other words, the classifier develops a strong invariance along high frequencies. Hence, frequency-related information might influence agreement.

<figure id="fig:img_metrics" data-latex-placement="t">
<p><br />
</p>
<figcaption>Visualization of selected metrics on Pascal examples. For intuition, we have chosen an "easy" and a "difficult" image according to Ionescu <em>et al</em>. <span class="citation" data-cites="Ionescu2016"></span>. Implementation details can be found in the appendix.<span id="fig:img_metrics" data-label="fig:img_metrics"></span></figcaption>
</figure>
In addition, for Pascal, which contains further annotations, we considered the *number of object instances* (since it is a *multilabel* dataset such that several label instances may be present in the same image) and *the bounding box area* (ratio of the area taken by objects divided by the image size), as well as the *image difficulty scores* described above, computed by [@Ionescu2016]. KTH-TIPS2b, is constructed in such a way that for each texture type, there are 4 texture samples, each of which varies in *illumination*, *scale* and *rotation*, which we also considered. Finally, for the CIFAR10 test set, Peterson *et al*.[@Peterson2019] have computed soft labels, reflecting human uncertainty that the given target class is in the image, to test the hypothesis that networks trained on soft labels are more robust to adversarial attacks and generalize better than those trained on hard one-hot labels. *Soft label entropy* shows weak negative correlation with test agreement (see Fig. 5 of the appendix).
Our initial investigation of section [3](#ablation){reference-type="ref" reference="ablation"} suggests that neither the labels, nor the precisely chosen neural architecture or optimization hyper-parameters seem to be the primary source for agreement, eliminating all factors of [\[eq:approximation\]](#eq:approximation){reference-type="ref+label" reference="eq:approximation"} other than the data distribution itself. As such, we investigate the question *"do image statistics provide a sufficient description in correlation for agreement?"* on four datasets: Pascal [@Everingham2010], CIFAR10 [@Krizhevsky2009], KTH-TIPS2b [@Caputo2005], and ImageNet [@JiaDeng2009].
{#pascal_densenet_entropy width="65%"}
To make the full upcoming results easier to follow, we first start by visualizing and discussing one example metric, namely entropy on Pascal in [4](#pascal_densenet_entropy){reference-type="ref+label" reference="pascal_densenet_entropy"}. In a given epoch, the agreement (blue) and accuracy (red) curves can be compared to the average entropy of the agreed upon instances (in purple). Again, the shaded blue area accentuates the difference between lower bound and agreement. Since we have chosen a step-wise learning rate scheduler for training, in order to reach roughly 50% exact match accuracy on the test set, the step-wise learning is reflected in the accuracy and agreement curve. As a new addition to our previously shown figures, we now notably also observe a strong positive correlation with the dataset entropy metric. This correlation between network agreement and the entropy of the correspondingly agreed upon instances is further quantified through a high Pearson correlation coefficient of 0.88.
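
Such a coefficient can in principle be computed from the two per-epoch curves as sketched below (the arrays are placeholders, not the reported values):

```python
import numpy as np

# Placeholder per-epoch curves: agreement and mean entropy of the agreed-upon instances.
agreement = np.array([0.10, 0.25, 0.40, 0.55, 0.70])
mean_entropy = np.array([6.1, 6.4, 6.6, 6.9, 7.1])

# Pearson correlation coefficient between the two curves.
r = np.corrcoef(agreement, mean_entropy)[0, 1]
print(round(r, 2))
```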

<figure id="agreement_datasets">

<figcaption><strong>On DenseNet</strong>: Agreement, accuracy and lower bound as in <a href="#pascal_densenet_entropy" data-reference-type="ref+label" data-reference="pascal_densenet_entropy">4</a></figcaption>
</figure>
We continue our analysis in this form for all other mentioned metrics and dataset combinations in [6](#results_grid_pascal_kth){reference-type="ref+label" reference="results_grid_pascal_kth"} and [7](#results_grid_imagenet_cifar){reference-type="ref+label" reference="results_grid_imagenet_cifar"}. To better evaluate the shown correlations, we have visualized the distribution (in the form of a histogram) of the values for each dataset metric on the train sets in the appendix. Since the metrics fluctuate a lot in the first epochs due to predictions being primarily random, we omit plotting them for the first 5 epochs. For reference, we provide the agreement, accuracy and lower-bound curves in the style of our previous figures in [5](#agreement_datasets){reference-type="ref+label" reference="agreement_datasets"}. To clarify potential correlations, these curves are then followed by visualizations and quantitative Pearson's r values of only agreement to the set of chosen image metrics in [6](#results_grid_pascal_kth){reference-type="ref+label" reference="results_grid_pascal_kth"} and [7](#results_grid_imagenet_cifar){reference-type="ref+label" reference="results_grid_imagenet_cifar"}.

<figure id="results_grid_pascal_kth">
<div class="minipage">

</div>
<div class="minipage">
<p><br />
</p>
</div>
<figcaption>Visualization of correlations between agreement and dataset metrics on <em>train sets</em> for <strong>Pascal</strong> and <strong>KTH-TIPS2b</strong>. <span id="results_grid_pascal_kth" data-label="results_grid_pascal_kth"></span></figcaption>
</figure>

For the **Pascal** dataset, shown in [\[pascal_densenet_metrics\]](#pascal_densenet_metrics){reference-type="ref+label" reference="pascal_densenet_metrics"}, the correlations between agreement and the average dataset metrics are apparent (apart from the sum of edge strengths). The correlations suggest that as the models learn, they first learn dataset instances with lower entropy, segment count, number of significant DCT coefficients and number of object instances, meaning that fewer labels are present in the same image. The distribution of the number of instances (in the appendix) shows that the number of labels in most images is only 1 (equivalent to single-label classification), with fewer images having 2 and 3. This is reflected in the correlation, which goes up from 1.1 to 1.4. There is also a correlation between image difficulty (how much time humans need to find the objects in the image) and agreement, which is consistent with the other metrics, namely that easy examples are learnt first. This result is further supported by the bounding box size, where we see an inverse correlation, such that large objects are learnt first, in agreement with insights presented in the dataset metrics section. However, note that particularly for segment count, frequency coefficients and "image difficulty" the dataset metric curve first goes down before it reverses its direction for the remainder of the epochs. What we observe is that when accuracy is low and the model is trying to find an optimal trajectory to learn, there are more random fluctuations due to the stochasticity of the learning process. In the appendix we show the correlations for Pascal on ResNet with the same training setup. There, the trend of metric values first going down in the first epochs (until the agreement approximately reaches 20%) and then up is even more pronounced for some metrics.

Correlations are also present on **KTH-TIPS2b**, visualized in [\[kth_tips_densenet_metrics\]](#kth_tips_densenet_metrics){reference-type="ref+label" reference="kth_tips_densenet_metrics"}. However, for this texture dataset, the tendency observed on Pascal is reversed, such that entropy, summed edge strengths, segment count and frequency percentage are inversely correlated with agreement. In addition to these metrics, the way in which the dataset has been designed allows us to extract additional ones, namely several illuminations, rotations and scales. The corresponding correlations are visualized in [\[kth_tips_densenet_categorical\]](#kth_tips_densenet_categorical){reference-type="ref+label" reference="kth_tips_densenet_categorical"}. For illumination and rotation, instead of building an average over the metric values per epoch, we calculate for each illumination kind and rotation direction the fraction of agreed-upon values, normalized by all metric values of that type. For frontal illumination, for example, we count the instances that the models agree on and divide by the number of instances of that type in the train set. We observe that texture patterns *illuminated* from the front are agreed on more slowly than other illumination types, while texture patterns *captured* from the front are agreed on more quickly than other rotation directions. Correlation for texture scale seems absent.
|
| 91 |
+
|
| 92 |
+
<figure id="results_grid_imagenet_cifar">
|
| 93 |
+
|
| 94 |
+
<figcaption>Visualization of correlations between agreement and dataset metrics on <em>train sets</em> for <strong>ImageNet</strong> and <strong>CIFAR10</strong>. <span id="results_grid_imagenet_cifar" data-label="results_grid_imagenet_cifar"></span></figcaption>
|
| 95 |
+
</figure>
|
| 96 |
+
|
| 97 |
+
We have seen that the correlations on Pascal and KTH-TIPS2b diverge. In [\[imagenet_densenet_metrics\]](#imagenet_densenet_metrics){reference-type="ref+label" reference="imagenet_densenet_metrics"} we see that correlations are also present on **ImageNet**, albeit on a smaller scale, and that they appear congruent with those on KTH-TIPS2b rather than Pascal: images with higher segment count, entropy and number of relevant frequency coefficients are learned first. Pascal and ImageNet are object datasets, while KTH-TIPS2b is a texture dataset, so why the difference? If we follow recent hypotheses that neural networks exhibit simplicity bias and primarily use texture to discriminate [@Shah2020; @Geirhos2019], then the correlation directions on KTH-TIPS2b and ImageNet would be the same, which our results indicate. On both Pascal and ImageNet, random-crop is used to obtain train images of the same size, which has been argued by [@Hermann2019] to enhance texture bias. The way objects are presented in Pascal and ImageNet differs, however: ImageNet objects are centered with little background, which is not the case for Pascal, making classification on the latter more challenging and the eventual effect of random-crop different.
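For reference, the random-crop preprocessing mentioned above typically looks like the following torchvision pipeline; the crop size and the additional flip are common defaults and are assumptions here, not parameters taken from our training setup.

```python
from torchvision import transforms

# Typical random-crop training pipeline for object datasets such as Pascal
# or ImageNet: a random region is cropped and rescaled to a fixed size,
# which tends to emphasize local texture over global object shape.
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
```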
|
| 98 |
+
|
| 99 |
+
On **CIFAR10**, the direction of the correlations is more consistent with KTH-TIPS2b and ImageNet (see [\[cifar_densenet_metrics\]](#cifar_densenet_metrics){reference-type="ref+label" reference="cifar_densenet_metrics"}). The correlations are marginal, however, given that the range of metric values is very small. Agreement accompanied by little correlation can also indicate that other metrics explain it, possibly humanly non-interpretable noise patterns [@Ilyas2019].
|
2105.09803/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2105.09803/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,35 @@
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Much progress has been made recently in the task of remote 3D gaze estimation from monocular images, but most of these methods are constrained to largely frontal subjects viewed by cameras located within a meter of them [\[46,](#page-9-0) [20\]](#page-8-0). To go beyond frontal faces, a few recent works explore the more challenging problem of so-called "physically unconstrained gaze estimation", where larger camera-to-subject distances and higher variations in head pose and eye gaze angles are present [\[17,](#page-8-1) [44,](#page-9-1) [8\]](#page-8-2). A significant challenge there is acquiring training data with 3D gaze labels, in general and even more so outdoors. Fortunately, several 3D gaze datasets with large camera-to-subject distances
|
| 4 |
+
|
| 5 |
+

|
| 6 |
+
|
| 7 |
+
<span id="page-0-0"></span>Figure 1. Overview of our weakly-supervised gaze estimation approach. We employ large collections of videos of people "looking at each other" (LAEO) curated from the Internet without any explicit 3D gaze labels, either by themselves or in a semi-supervised manner to learn 3D gaze in physically unconstrained settings.
|
| 8 |
+
|
| 9 |
+
and variability in head pose have been collected recently in indoor laboratory environments using specialized multi-camera setups [\[43,](#page-9-2) [8,](#page-8-2) [44,](#page-9-1) [28\]](#page-8-3). In contrast, the recent Gaze360 dataset [\[17\]](#page-8-1) was collected both indoors and outdoors, at greater distances to subjects. While the approach of Gaze360 advances the field significantly, it nevertheless requires expensive hardware and many co-operative subjects, and hence can be difficult to scale.
|
| 10 |
+
|
| 11 |
+
Recently "weakly-supervised" approaches have been demonstrated on various human perception tasks, such as body pose estimation via multi-view constraints [\[35,](#page-9-3) [14\]](#page-8-4), hand pose estimation via bio-mechanical constraints [\[37\]](#page-9-4), and face reconstruction via differentiable rendering [\[6\]](#page-8-5). Nevertheless, little attention has been paid to exploring methods with weak supervision for frontal face gaze estimation [\[42\]](#page-9-5) and none at all for physically unconstrained gaze estimation. Eye gaze is a natural and strong nonverbal form of human communication [\[27\]](#page-8-6). For instance, babies detect and follow a caregiver's gaze from as early as four months of age [\[38\]](#page-9-6). Consequently, videos of hu-
|
| 12 |
+
|
| 13 |
+
<sup>\*</sup>Rakshit Kothari was an intern at NVIDIA during the project.
|
| 14 |
+
|
| 15 |
+
human interactions involving eye gaze are commonplace and are abundantly available on the Internet [\[10\]](#page-8-7). Thus we pose the question: *"Can machines learn to estimate 3D gaze by observing videos of humans interacting with each other?"*.
|
| 16 |
+
|
| 17 |
+
In this work, we tackle the previously unexplored problem of weakly supervising 3D gaze learning from videos of human interactions curated from the Internet (Fig. [1\)](#page-0-0). We target the most challenging problem within this domain of physically unconstrained gaze estimation. Specifically, to learn 3D gaze we leverage the insight that strong gazerelated geometric constraints exist when people perform the commonplace interaction of "looking at each other" (LAEO), *i.e.*, the 3D gaze vectors of the two people interacting are oriented in opposite directions to each other. Videos of the LAEO activity can be easily curated from the Internet and annotated with frame-level labels for the presence of the LAEO activity and with 2D locations of the persons performing it [\[26,](#page-8-8) [25\]](#page-8-9). However, estimating 3D gaze from just 2D LAEO annotations is challenging and ill-posed because of the depth ambiguity of the subjects in the scene. Furthermore, naively enforcing the geometric constraint of opposing gaze vector predictions for the two subjects performing LAEO is, by itself, insufficient supervision to avoid degenerate solutions while learning 3D gaze.
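A minimal sketch of the naive opposing-gaze constraint referred to above is given below, assuming PyTorch and illustrative tensor names; by itself, as noted, this objective admits degenerate solutions.

```python
import torch.nn.functional as F

def naive_laeo_loss(gaze_a, gaze_b):
    """Penalize deviation from anti-parallel 3D gaze directions for a pair of
    people in LAEO, assuming both (batch, 3) vectors share a reference frame.
    The cosine similarity should approach -1 when the constraint is satisfied.
    """
    cos = F.cosine_similarity(gaze_a, gaze_b, dim=-1)
    return (1.0 + cos).mean()
```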
|
| 18 |
+
|
| 19 |
+
To solve these challenges and to extract viable 3D gaze supervision from weak LAEO labels, we propose a training algorithm that is especially designed for the task. We enforce several scene-level geometric 3D and 2D LAEO constraints between pairs of faces, which significantly aid in accurately learning 3D gaze information. While training, we also employ a self-training procedure and compute stronger pseudo 3D gaze labels from weak noisy estimates for pairs of faces in LAEO in an uncertainty-aware manner. Lastly, we employ an aleatoric gaze uncertainty loss and a symmetry loss to supervise learning. Our algorithm operates both in a purely weakly-supervised manner with LAEO data only or in a semi-supervised manner along with limited 3D gaze-labeled data.
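As an illustration of the aleatoric gaze uncertainty loss mentioned here, the sketch below uses a generic heteroscedastic regression loss under an assumed Laplace-style likelihood; our exact formulation follows the cited aleatoric loss [18] and may differ from this sketch.

```python
import torch

def aleatoric_gaze_loss(pred_gaze, pred_log_sigma, target_gaze):
    """Heteroscedastic regression loss: the network predicts a gaze vector and
    a per-sample log-uncertainty, so noisy samples are automatically
    down-weighted while being penalized for claiming high confidence.
    pred_gaze, target_gaze : (batch, 3); pred_log_sigma : (batch,)
    """
    err = torch.norm(pred_gaze - target_gaze, dim=-1)
    return (err * torch.exp(-pred_log_sigma) + pred_log_sigma).mean()
```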
|
| 20 |
+
|
| 21 |
+
We evaluate the real-world efficacy of our approach on the large physically unconstrained Gaze360 [\[17\]](#page-8-1) benchmark. We conduct various within- and cross-dataset experiments and obtain LAEO labels from two large-scale datasets: (a) the CMU Panoptic [\[16\]](#page-8-10) with known 3D scene geometry and (b) the in-the-wild AVA-LAEO activity dataset [\[25\]](#page-8-9) containing Internet videos. We show that our proposed approach can successfully learn 3D gaze information from weak LAEO labels. Furthermore, when combined with limited (in terms of the variability of subjects, head poses or environmental conditions) 3D gaze-labeled data in a semi-supervised setting, our approach can significantly help to improve accuracy and cross-domain generalization. Hence, our approach not only reduces the burden of acquiring data and labels for the task of physically unconstrained gaze estimation, but also helps to generalize better for diverse/naturalistic environments.
|
| 22 |
+
|
| 23 |
+
To summarize, our key contributions are:
|
| 24 |
+
|
| 25 |
+
- We propose a novel weakly-supervised framework for learning 3D gaze from in-the-wild videos of people performing the activity of "looking at each other". To the best of our knowledge, we are the first to employ videos of humans interacting to supervise 3D gaze learning.
|
| 26 |
+
- To effectively derive 3D gaze supervision from weak LAEO labels, we introduce several novel training objectives. We learn to predict aleatoric uncertainty, use it to derive strong pseudo-3D gaze labels, and further propose geometric LAEO 3D and 2D constraints to learn gaze from LAEO labels.
|
| 27 |
+
- Our experiments on the Gaze360 benchmark show that LAEO data can effectively augment data with strong 3D gaze labels both within and across datasets.
|
| 28 |
+
|
| 29 |
+
# Method
|
| 30 |
+
|
| 31 |
+
Our goal is to supervise 3D gaze learning with weak supervision from in-the-wild videos of humans "looking at each other". Such scenes contain the LAEO constraint, *i.e.*, the 3D gazes of the two subjects are oriented along the same line, but in opposite directions to each other. We specifically target the challenging task of physically unconstrained gaze estimation where large subject-to-camera distances, and variations in head poses and environments are present. We assume that we have a large collection of videos containing LAEO activities available to us which can be acquired, for example, by searching the web with appropriate textual queries. We further assume that, by whatever means, the specific frames of a longer video sequence containing the LAEO activity have been located and that the 2D bounding boxes of the pair of faces in the LAEO condition are also available. We refer to these labels collectively as the "LAEO labels".
|
| 32 |
+
|
| 33 |
+
Acquiring LAEO data is a relatively quick and cost-effective way to curate a lot of diverse training data. Nevertheless, Internet videos with LAEO labels cannot provide precise 3D gaze supervision. This is because, for such videos, neither the scene's precise geometry nor the camera's intrinsic parameters are known *a priori*. Moreover, trivially enforcing the simple LAEO constraint of requiring the predicted gaze estimates of the two individuals to be opposite to each other is not sufficient for learning gaze: it quickly leads to degenerate solutions.
|
| 34 |
+
|
| 35 |
+
To address these various challenges, we design a novel weakly-supervised learning framework for 3D gaze estimation from LAEO data. Specifically, we propose a number of novel geometric scene-level LAEO losses, including a 3D and a 2D one, that are applied to pairs of faces in LAEO. For individual face inputs we also use an aleatoric gaze loss [\[18\]](#page-8-24), which computes gaze uncertainty, along with a self-supervised symmetry loss. We further propose an uncertainty-aware self-training procedure to generate 3D gaze pseudo ground truth labels from pairs of faces exhibiting LAEO. Our training framework operates in two configurations: (a) a purely weakly-supervised one with LAEO data only and (b) a semi-supervised one, where LAEO data is combined with 3D gaze-labeled data.
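To give a flavor of the uncertainty-aware pseudo-label step, the sketch below fuses, for one member of a LAEO pair, its own gaze prediction with the negated prediction of its partner, weighted by inverse predicted uncertainty. All names, and the specific inverse-uncertainty weighting, are assumptions for illustration rather than the exact procedure used here.

```python
import torch.nn.functional as F

def fuse_laeo_pseudo_label(gaze_a, sigma_a, gaze_b, sigma_b):
    """Pseudo 3D gaze label for person B in a LAEO pair, combining B's own
    prediction with the negated prediction of A (both assumed to be expressed
    in a common reference frame), weighted by inverse aleatoric uncertainty.

    gaze_a, gaze_b   : (batch, 3) predicted gaze directions
    sigma_a, sigma_b : (batch,) predicted uncertainties
    """
    w_a = (1.0 / (sigma_a + 1e-6)).unsqueeze(-1)
    w_b = (1.0 / (sigma_b + 1e-6)).unsqueeze(-1)
    fused = (w_a * (-gaze_a) + w_b * gaze_b) / (w_a + w_b)
    return F.normalize(fused, dim=-1)
```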
|
2106.01425/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2021-04-08T02:58:51.451Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.190 Safari/537.36" version="14.5.3" etag="dK8MhZqEB0Ig9DvFwkms" type="google"><diagram id="g6KOr4V-XzFqJltIxw9E">7LzHkuRacij4Nb19Bi2W0BoBLWIHrQNaxdcPTlbd7ttUxuGQbNrwlVVmRhw4DhyuFfAXlBsuaUmm2hjzov8LAuXXX1D+LwiCIRT5/AEr968VmCR/r1RLk/9e+9uC23yL34vQ79W9yYv17wC3cey3Zvr7xWz8fIps+7u1ZFnG8+/ByrH/+6tOSfX7itDfFtws6Yt/BhY2+Vb/WqXwP0HLRVPVf1wZhn4fGZI/gH9vsdZJPp5/uhYq/AXllnHcfn0aLq7oAfX+oMuvjcR/5ehfEVuKz/bvOQH5jcZ2/3FvD0YT+Fj2xcUAWv0FZZdx/+QFOAV+vo3LVo/V+El6fRyn34ttsW33b04l+zY+S/U29L+PFlezRc9n6P/gv7/F4Nvvz/z15y/3H18+23JHf/4S/20H8PVvp/18++O8dUuW7TfmfNqPWQdA8oeBv2+y6NPxFP62wP6iAQD5e96M+5L9XkJ/S1myVMUfXEX/ObHhv7LwEf5iHIoHsQdkKfpka46/3z75LYTVX+F+n/rgntx/ApjG5rOtf9rZAgsPwG+Fenj8W7x+6xMK/xOm/7MT4L8/ASH/7oTnwy8c/vj2p5v529KPJP3LUvWbMkfS77/v+V8Vs2b40TX2KJateTRMT9Kit8a12ZrxAxg4bts4PAA9OMAmWVf9SCM39uPysxVa/vz70x5M31Tg3A1IJ5us0y8bUDYXkGH255LMH6vQHyvP5zzZkr+gzK+viDh9qr8gXBOwL+eENKkameef6fq14FfPJwP84hWOiZ+/3BZqRPF8UDGoF+zAwZgCyY1Iny3WFY/Y9RfqdNfKDTtVuV2CNRzoowq9NkyaY7h+rw5vpfetuRwQcO3TuvmF2/c8jHAIO716re5XS5Tt/o2noVs06ujXHlrF5vuSpq+F33MGB57TENvL4qlnhw1CvZWvF8pC6TINoulYXVu9zYA7DEZ8VFfguF6VHpyVynsZHGGvFfjMiMrIxpJ1Oa6uBSkj7+OzW/ZmCokJFivmX5D7F4QdQmkMKeJiqomp+g5rbb+x/ZjoXZbVPQZNlU5UBMEgN9dWOF/jWKVJLJc7FgIgVyzwwIXNGc6oL/CyyPX859nVLTfxdbQYZX0xCG8QREXId3cwXA/Bp6Y8Z8YYK8ibiQajIMjHxGRO4ZNZCP4WFXfBtLs5Nnf1F86QfAAnl8iQmi//zz9nzoSGvU3CGvJHV9ljuakosmXBKd+X2Y69/XJIj3fRtZoMuQoYS7TL5L0ycUOtlVkvxBY3z1+dv3lkWGaY4/jHU76UI3nvuq0pR2kz8fzsPIXYz1n0/8SzoGGYzXc5vCfaJZBnCxFIH3vvqXO/8eejSB36PS5TTezo9g1dsqZM1gkETdGiPaXQjzCJ+7H7oa28aR8z32+eXtwKpuGRd+AztKdhgLlA40Ob6xp/2hlGsmeOm+KAH31I/koQw7AClzzYIx2nPKqtMOxdYq69v/z4rVbO1De5M33FYczfL6GyeHyhs3jGqvxFydeDIx+I32CWBIh1GKbxZvO6vKU5vsx9jVLtYYGp06JQawLLDBVV5hZjqurFy8oolom0fjmRO7xHD+hnrxztkIbTCXwhzIN8FJt1DPoINc30zspVEN8u5C9FGmOK3GiI+8fFbq48TU3iA9hrR14XMk0+zX357zd/dkxFegn6lO15k7ILJreoipnYeiD8rtEuz68kOx2ec23FShBWZdRh3wve5rtPq0/kaSiV5pI4JsAv+43b/Lefu/t6rY+2a8FXmWdPYQ0IJlGaa3i0Fx7HIMoPTVUTonniq4qvlaXe7mKs0cxyJtsbhDoLBOeZbAZuOF4q8/LUA6YBZ3Iro1bu7S1MKfgERjTuJ6Sien2cKLvBNKdHX6TSuUQNUr1+1trKe5NR6hw1Y7kRbA62XWX9IOOfMTbSHWGUOlwLmXjZnAPg1engZNi0A+a1+IwjvdJdH5lu7TPVaDbF66QHKA95Vvqkn7tQEknztTzdWg+6f7AdVaEBlutZES69gzesuzWqUJlKpfZdIuuQecCE8ZEi/iawdO9dd8YXF/0on0sTHspt4Tv+fOLlAbs2pmqY/kW7F0QpzxlNoj/Lt3lCj13WgIG8HlctKjHbMcNjhkWUzG+8eSOhS0SPuzVK5wYE7x5TrTJ8Z8CD8JwF6VT8rPow7qiRNDdEwM6RzapB8QTebNF2zJ7S8EBSjKtlbWVWMbGMAaN9lOiR/TREUvh89IB/v+blIdXDF3eGN+h+2OowqQpHCfc4R1eLwm3vwuoBbd/GnAJQzXETeoO4B/Qy3mqfxkCf3DkNtr16/Kikt5MwpcybNSqZtQALHy9VCcpGX/cNcIcIv6X6fmfd3ng0k4HKZpj16uOSMXEB8jht04c+SRhA9R6FdXXnjfeVYzNs1y6ERbYvhhEaKCzeseAzzFshwGWeH+xbMayyiZ8hZMXnttzXAczNbmD/8i5vRu9KnkCpz90gr+B8Cd5x6WSDPHiLK6ln34mmcIYfcm8DgiNYtFL6aL0HTR5WXLP6cIY6N3PJiEuL42PtOll5dcMTaA6V6HR6xeHvZysLok9BsE09NzADZmyHeQ/8l+YwjlO4bkcyKTkVpTK+DL73Ocs+2EalxZuxIDDcu8pQbz+rilH1/VMPK+eCYOWVTmjm2LzCZu1ZgGgvIOQUXjf4gKPdL+twUdm+VJis3jfsQOqoVojh0FQOg+cinNRKIzieDJxodRIGZwoOD50MWORupup0moYnYH4Ck/iRgzJEzJcX1opgK7HMYqLmQ5UtzG58cSJbV1IkxZp/PHYLFgZ75yWLZTtEYj6uqcHG4LmilifnGikZgjNVVm7sISCcxgkR9EQZcfxyFM5RjCzzkc2unwUWv5ruBSHLx01cRZaHn+iDVTpKf6wbI9hAZjP1mqe7fEReLMkPI2avbHiFeKRNL/+D+Gi5+nRgrJlWEE+kwcobM9xIeDeWKZPOPi134+5XrkKa7u8Vc6a74fK06dFyuhaV7Gi4zXT5XPp+91zBHYHiSYj5fE5Ytk+18xEkU6DYhgmGeBXWInvQ4gW094PNroB+O58CIZmGYeL9OXdu+hHwc7oNEonsB1gX0cD3f2xEIDnA6hAYUMt4J+ZGrB6J4eZ7ReGI4WxP4aMs++x4xKWkpX0f4FeAUSJyYEUb7IMwU7E79aaC0GXTR4ejHMHuXhbuP846Z7DuXUJ+7wEFh8VSaCf/sXRVhjMK5n0Dx9pCQN6veHY3BG87kNFxzjik3/bk2+jsI3NMw7EjtIItHqVjUjZQJDboZ
f85fgXDSijiZ9HNqEz37Ion3balbIWb7HLGjHjkwpu1C9inclB4eMdfk/x8WcTnV3ezj59lb5Gsntj3TQewZ05LPTt8DL0KY5pf79Lz824VcNkm5vSsaS8UgWvGoWTylcZKmCC12qkg4/t93pryzd8ELIH4F3Iz1u1OeykkmLYFIDqdzSDgdh5BqhkGGASFod1H68CdcW7FAKOzMpkKQJ4flrWByOnIEwADi8WoGsMAS3Oy2B8hhSCCnZhPXJ02AAF7cj/xd7Tbv0AY12eei3BOxV8KwOP5UbSfC7wu9pEYBez5XOqhLsdYfvULhOlGsJOwMDJn/8TziaL9RJTPL5g4XHwuoolo2kSz5NpEsDJI5mCOyLgmsHsCwuuw1MXgkN9+C39Zn9yTTb8ujG4nKYre3VnWe4tW+DU+Nk4sMBe1Z+fOGCIlJ7F/hTE1u3QVPjdTz0mfuEVkUmL1ixAFuKNd7yRNjp2t916PVgBBwp674OKWJ0QSIVS8BSnQc3+PG2XsxysJXCZzho/cFTkB2gqNXTEZo3LMa0SP94juL6DVwAPYaqq/0y3WxfahBbWKqy6TaRAD1/Kz6xnYmEpx5pUBl/vZI/F+2NZSD2VryVYeITgo1s4+Xw7adfwn6QOsCF6PhnOxooCs84+iAUg5i+tfLb78K/WAP/JwGPsnifsf38+/VZXgP2pQ9Z8rSn8s/ktFhD+l8f9mlo79L87S9Xf3tyxdCfjOq02586tO+HRu492qwz2BC2e/CkdTHRUXlOk5Ao6PHyNAUdhD4Rzl9ROhcOOLc5E2ziz9ujoUaI++TqaaD3nzaKRgNe4JwnYhCbrnyvWL6xjb1h6n3TFKFnQCY2cVyz3htiJULD+vVcXxDyQAeUCVPBB+QJgfEH/kea36BVLZygPyYWLHFwTlARE4lnlAMJ3RmKp+QLgHAQDCxqz9C8TnuMda+djyRJdP5s4LD4itMpwsvllGkBXGBiDs85f4AcEfQwJApgckcH6BMABE1JjzHh+c65s/vjiFlARdHFEuq7dRQ4XcYUAhnx8i+64gpJnInQRuzd5GtGsrjlWLkOEGliFf3xUA1mcmCze6kbkaQs2XMxWMpckCnZN0wA9VWKjuOjb4lxofZIlRN6glsEebFjHeNpxpNCwO/MWHKPU725YUwS2hAIqggYvn8hsxW4i2vChOpvF8aOQaNojcKOoxiGGvNEfhfmWqlC/aAtF9iYppUfRQYrttX32u/dWOj6vlBd04QH4IfX0yTS3PxNOeixghjJup9IgTclvy4JlPSh1lfkzDhIrIay1umREELtWg72kgurSXPDxqfrjo3n6stIL1XsWy2+CMINF9/iOvhPNn87OQvGvFhbIBI9eAgKKqGkq3vqWwY/KXBikKh0rFV3irdQxoEkBjtZz8un+sHi8iJ35P6J3vBEm91Y30AR8ikEf4DQsiA6vFwO0u/Ax5ux+QPGtEr414YitGjm/yN8kREvHYjbt6LfVYw7JKAt8ohaqpSCIS+liIcMIkvwO5Y5fbp/ccekK6NB/1vEaG+5K4FHCHmFe6am/K4s8cm/C81RGUJ/mL29PmG82GhCCOtBZqT4pC5koJyMT3ZdOGKrRBzCPq24JKMfi06saj/KJHt7b51vruGxH59ly5hDaWlDB6R334hX1CSCdN7WhfK/YgJ7Yug3j6YaLRtLgvJSRRoVCX1AOFNxbiH3lhFT43i8cxCCT9XjrfWZLCIHnHsDSoNR/qiVHYrQ2MEpkp0Ptaq1KE2e6FpG5p2UiXryUgaw3qcE3oewMuyU2c2ZeBL8YvaoYI7U0gM3y+epfcvhNNN8O6nUbHzDlEqUi3K3jTmu+8IU/NMAQ0m26mRkH5LCwqzVyjXzvxlwkp6w7zsP9WWT35lqy35u8Ki978kS1ZwUy26FstZAp3plYtvxlr36vqw7oIMYRvFqkMjCYQ6j+cODTkfVKOyapdOc+AyJv++XiDlSI1xRpYos7nwqHRsnVItWFL/CxgTcNCF8kg65Sa2AftSKZeh21bk6RBdA5RZd4neqpWhzbVn1Cj2Bzica1jbXJB36rPLUhFhmaaOb/Ia9EwRb0S3E21sAthkEU5s66s+jnBFKcqtRYr/rE/tgcyx5WgciZWKg5ssnFtd7Ycj5SgyrFG5ZHsBFFb9/dImlsvz+SaXWchWL6IYYqsgqlM9BlTpORKk5OeNRSK0SJEnDMVZuRwXvtk3eezlS7QWhF6S9JD/Brcu7LWEY8GCH+2aH9bitp0m/4DuU65bxdfE/m6rBfzGEou4uz1tKMKe+Twaagfh8zJoR57R/ndN2KIJrvmyBln9Japm66vCtpyoKvBhKHMQrq78gE2286q3+JpLTkwdyDrn5KdfXUmv63840xq9ga+galYrwHu43Ebxp88jFi9BHl10cctzdfmkMO6PApkjluOpNHhj0DqU9yaYY0SVunJqciQTcGlvnBqEcgXg5D5fSN6OC9WghLJ3hPV2d+vqMdDJRL1AndyC1T7mC3pUk2lp0hcqk8cf7s2zWCYzPGNdmY+ysWC2OUaIT2QNvoxHGOxe6l4ZUQ5QusZHQpq3k9XlkRHu9yX/sUIhIRzp5/D0kg4ZEGupc2bmhU/d+J+Uo7VAmgDzmNuuNDX8WLTzmE2kLBN04kgHncB1QTu+Y6j8f3rOyVOzbClpIxV1Z4jrD9O6gmkm0qgDMdh2AMEv2fC9ixrcepoV82p0BpWrO0TIT6gD9Bz5DE57JmYD5DMqFjc/BwTtTE82pvhfsDs59gPmGv1tSgwKgXAfjxR5R8twXEvxrMvUahelGeJkyQwIgDhOItrfwFojHc+AAzQAEAvdpQeN08pQ81xOtNCS03cL4XhsQdIAFaEJ9kReUAy5lM32nOEmmtgOy2F0R+gx/UUDwjIbEPBt5nPBYB0amZBOd96XDnGmrzguwx5GVTYxTYTARBFoTTzD5CRtXhw8iMulFS9bds7m0RQlJWz3F1eFKxirPYBsD3MWqX2AXBOrgD0Zqx7FhfsycaPB+BJFx8AocXjh+zcD0OY4x7F3wxpKwF6aK3wh/C9/i/P/i/P/iE8i+zPTVs2ZHknVVrN96f8LjqO+V1AjHME4bRAj+0maw244ZOyDvrZgX0Crh5UcMT3tm1FSpu/Aii2BktI/vFAg0cs0E2HX3I5LauNWxGceyPx6mdIwd9K3OPthA4l+UI3L30gH3h6SDbg5qDKv17hdbithnhUUjx+onHTsRKvLLp1iZfVrdsCaye5IBD4jSBf5K5UDZcaJ4ug/aATA4eaaJ7a7jcJnw0Bteg32uuiBxA9+8BhyO/tXTBeYiDGQL8V9uJTnhrYlwxN256Lino+/gd5gYCO6AnqvaETS75U38s4it6aJ/ItQSuEBbWDK1XREgWRedu4GX4gHvNNvbiM+MOyduyhlvMGtQnKlCDOAuU5ynPzLq8oqyVf8xuUfUDkG/DFMZPv06gopnvkoxME7Yi8RwbgDeRdh7VZHMcrKsbGR4hxnu2+LcwrUfMyAjgHAa/BVvN1Y8cC7vSFt172cZe5YRCplJ/EwC0V
79497KLLRJNBRGyRNTwbee8HtbpXgiYNnvz+BgXSul7ADkqjyHzUDyBaT789O+ubL2C5rdlF3C3HJyWbYi4cmjR96XFhLPdDn6NqrC+QpeC7K0/iKlyATiLlp5N9OrVRU20V+xnIyz6ijGTD9TlcY64aRnFwluMbkJ/hJP6ZSLyXY8DdjLEn3ODs6uAoII3pusQ4Yz+xmskaQuLKyspmWhqTxp116nQmoo8ZIVe6OUJy+a8ecajXVsrBLuYHGzxH9cm1dlW5nGWbvDE8ccZ7/Wl5EF38UN52EnV8dyfHtWZ/pLLfMlXrcrxntsKT4WbpapIJITx555M9Je817p6cr6XrLUXj6gG1Oe8HlLONcjeXEBcea5dUSfI27K661Za++hhVxgfU572vWgmC5Bnl8FqkxzB2ADR8G0xV3T/Cc8ExBDLoFuLtX8CpVSKvSQJZtBA2iQSAx+87RS5YORUFqiroG1/qk9JKJF8gey2wrNQJw5BK6hNLgYZJnIY0ojw5NcR0v4A7XibbBN6dFwi9yRFSbAaNMMb4r6sS4f/QKhH+z4pCf56l+dss0J8Hav7tGSLov2+G6F+cBfp3jPvAf0yV/RfP+/x7mUD8vxzT+v9IYvjfTeD/1CGt/yBjsH8cX8h/Uzk+4+e/XBv+jlX/h8T/x3MLJv9x7KL+cez6K+X/Ruz472j930B5/B9HefofrijQf0hR4H+covxt9vi/n11/zIf+T+HX/3hm/QNjgz/Q+d/Qx+M+v/p4LJhtqw0wV/DC/tTJ03EvsFhe/ED8arsS6OC5TfC7m8eebuPbza8uXi8s3fO5oMiDKiLVWzsv1yCSCPMBtR4BXZE0Mj/pEIMRpNzijbf5q9peTjDoZrUX6G/JMEGX8hu09woizS9eJKhClilqD3Rnm1AepKUkvaMYsYVLlK2ZZRI/fSWQ7u7pgIdo+0UpmQI1zAVF0w29iH0BxVSrzkz2eF9NJYrWBBG5RQ4O6J9EMMjCrM/3fWnh6/RwqrBkqviAdgKLMVRvO74lA5gDImf7zEqrPUf/wC20PbGXfYUhu3OlDtpGaP5u3/C7ub5Xlogul2ZET5U6leYHShMs3tl2MtkfDycwlir0J1sTIkvf7RBQP64/dnvjK5R22UulXX40puBcSYQIE4m7xEoAwy82o7wkYTtW021YiPiSXaNoolLoIh0dtPAecujI7C9f2Ws25a5h9SeWWRvy1Zhkae2B0zmB4z4vnFdMjRXEch9BUaG0EJD1ofL0UQLOPFVQeclwk18+0ksmxo61khfD4d23Y216fbUOqEG54ml7K1owAU+WnFJVxx4eUJ09WaqgMa8BSk9H+EH7DkAbSwcwbMxCSJtxyliDrl9yQKztPDmuyrx2KGUcWQD1mQucUCmc0lWsoqJIGzMK1oMTtA/EMo7UMSaHe19DHjrx5rgYJvEdh6arFbiuXd5R6PiMEDplOS/rl4mfe8Go0oKwF0ENJlsomI1J8flb6or5DYawG7N6MOETSlygZArwu64x3kW2SO0HSOFBn2+w65sp5MZp4V9dxeX7Drvr3R1VxkD9l6LGwzcv04IV4ZVa2CAxoYaas0zsXGzVtZAyEOYUL/hijA1CJv4UnIn5pQzE9jXiJHU+riJcMIuJu64voCbkFzXrRfDwScZx/ahbUnMiHF0o2WCMQtWFDpdvGxKvtEJTXqjgVTpmvMhtMNzYrQur4SHowqFlyDjvMOZw9W3GVpXoa/QIqdY/+l4jW9B8ppwG2iJmnh0koWxBITzDXPig278Bhdi4sYdmNkX70GH3S391tPBO5+2n2iSG4llRpKOOjKJF6u1pXQH4JM/7oGEdKLpmGwsQYTJcqUvT9HflrQ2BUjCcqMmPERLNAF6Dmk9F3VTtehJhrPomWqRBRBePVe5VcJQFRJy/CZ2VpuQzq4SkS6n5sYKCB7Lx9h+BkzWJD3I7g9cL9hzxEepLbrZBt22nl7a1BzxWJS/I7H0JtYL5RPeCQcsU/0iqSk1hJzgqO5XqYiE9uUHDMn1BZ3u8JT4aHqJpPtQqimpgtAmJR/kzDzS23/zDNOOXKju4Aq3VvnkEkB0ltC0NO0HGNw0HTRlUDxPYuZwW44hIggJMwuLMczfZhw2SnFy5ei6v0CZ19ekLo7Ai68HEgzAsNKhy66Htvi+wwQuxpVskendiZCloo3uwKqZxLyLFWwliWnpKD8dvzHOravtsbkGr7DHB3mO8jndMAhscukd5t3HtFe5pq67s7FmSN1XP6i0+rSHUCbe/aazRtxB6DuraKscC5a7+GVDVGQSm1BVFWTWWIXvJ8lefRKyMolmZZm8R+KHebm7F8MINlC+v94a7IdKzUAYcQW9Xt0SCtq1BT1eonZjRz9ymC8tLVtPkMS2OyohTyLOTlNOb7or5dT5E36JI/5yfYzRy5QWlL9TQDd3pBqRqkbZ3VtVgxT6WlthLZ7u3JWgcFiwxc+4jPjZhyBLAnOTKrUgSkNitBfkdKiVUu5uhNBwPTB/vlleoC2WzkArqCUKkITYiQXvXvw9OVaTEGjTkhDjylxGA1mHvvIh7wVDS4cunuRlY44PCRqwc3pLoozZ+OL/0O4C1t/1L/laCAOanJXa4iYb6+8ivWCF1QlqpWmyuHjDudBHL5H4A6SRaIICLUS9Ha9cXwr+d/Vunp462bKEr3M59Ig2WmBxuN+bo19f+xfygC49A/TCPhUEOfD6mRjlyMGWWTFderqNiCPCMNvs7X9mvz5tQJQmFbVRrw35Fzlf9ksJ67fIVu1fBwJf1en173mRR61Y+ry/TJ0mtmpQDKqQDGwlnr7V8wHI66mVw5+uDDxcMiQb4LFdMdfNk/t6L6vxy3XURnz6L6fnONJg2v2n9qQ1r8fH3+JnNyVCy6hCLKI55JPwcTCLy/b5yW3naEF+Dyv/qtx/T3HkCVt/ukoig9O4JoV1W1weNH/44Qe82PXl/pfV+j2wlDiIaMYNQeNw5GGDuhQuVs4XH+q42YEfX5i7WX60FicBCDkEG0qdj3sRGCheDhH0s9ZuFM+AFaO19a2bRn1dtuFYZE+YmqVQOCfKBCYqxNAwtzxHwQLsLRheKY6mA76vObSTyTF/qfZfAyHSfAGmQempYsTs1nJfH6etdN6MaF76poz4VtXOh2R+LDuubF2F1jx3Rl2+Ed3PlPfnEcrdAg+5McHMC58TPcmSgNzCxeQRTaJ+X+uXqCjPpUhAPQRqOpdBK72RHA+dVl6u/G94CRTiUUsoWc9g+ELXduizoWJdfy5EvY4zqce4idToj74skoxi9TISjOFqzjiD7rPQplscLS6HdKLbHMGlH5lDOxJoBiKyKKpLO2qG/2BsEXGILrCUQadqDeH7u4IF6W1hIXodROz58WGCkQ3muVHyD8U32Bhnqird3cCdCAnGFzoJtFwoFYlcQUO8eYGQl5DjFSVdRKeUWHaRz7KO14wv4Ey7kxL2hJGu1iPBtu8QbKL2jtSb6xAzcmKBdR+flRmuDkLMTQhYLVKfd13thkeHRhsThMznNRD6A00SjK+Pw4qpv1IzcYkR
QiDQWmED94ugrXPU3VZD9i1W7yX17mjb0ZvqGCVmC2HMR1peYz7G5feyq8aVs9bebdr94R1wtuh5FEQL/OuLhd/1U6+0PIkRnWQcXsngoBuqDMKBjG7jKEgoJddM0BtCzK0BXRomV/smnjAvMKoWsu3WL3W66QasPLnWZj6/VVZxPPunrJIBU6Mgyug5YqfVTGsvOVLhtKw9vrTjGV3EuVWu7F/8eaMSmmdoRQKinaJWeqDDJHRyrPQZVS9eaxMho03vTeGWWnfzsuNhpQFnhIh/wF4HfmbvS9QcWO2HmbMZO2ScQTSEcT2Mrj+kGdVk+Da9r9pX4KoQZcUZ/LUi/eDgxAX9OY6CPCKeqUget8GwA8W95wA5r+WyCCbc+p/qc1m+dy3ZxW8MKA9q9hIrnzeUotNTI1GKIzE07Mujfss3by17BWxSeSFZjNG39FKhShf6WqJAoT1rQddTmBytSvDShRjHqEgSPnm1NO7bf40EgTg69tIgrkQ5Up2kY1d5qPl8IYaPCWvHNLCNBZ7EFTSgl9PoM+zzawKgw6LrFGsSffQIJ1SNEwKs/EZK9WRtMSJQTafGMKJtr7MLCzEpNwgKrdbaXPDFEYdjdZftffq+/juDVkjBGk6L4jTiVMp/O2O0KZpRmlHojrfgWjECXL+5e3CdOaVNfqTgEZDRyvtnTa/y6u5MBRbsXHSJ1MsnhpX0dI+0zI7lwiQ8iDrNAtXWuZmcYfCHUYckImX6Lv9DIB7uEK3wQrK/yNl+7dt2RDtSa5FAh0u1GZyrnxAZU52ri6gYYB13HEM3uoPHMVj47rdODKrir15RcST7CV4H0yNXWsRp/0GNkuaLozfY0a9mPdV0qWSX1C1ibgeFgxt4j481/JVV32Pe2psourB6iPCH4EaPyunKwZnZjMIYIGMdBId7SoPAemUUL5OA7h/lbqEyjXTu5tODGAKZYRKupb4zF0IBDp+gteDOl7cSk1A5US8O3nOfk5tTh0YVczXx9q78pmmYH7QVSJb1AXuGW92mycegJhaMRBzXnYn5OSJDpKbs6VqZXemZE2bvmBGME+wkFVV845ripdHVH7N6FZ8VBMPJf8aPSOdpFWsuZOUEAtxRrSOy8Ofxuyz5Z7MWx6ZHbfYntmEf7ERiyLz1pN3yo2kQjwGMv+uOugAiKH5A3KlvApSDGbsMy4byAFhvBCumRcDil7kWFZBpVeSGJjocNrC1yAx58imthh5zPLEAy8H48jjQgdMpiOHBVScIaCwz5Nbcv+pBvJ7Nujxjx7k8uFA05L2C5FRCsd8QjhSCIrBU8pQM0uyINKAg2Cm6CmI7c2FJHT6XtjcDpqHJ1wXXtUGaukHodTBH7UBzXlO4CAhvUBl8IbuWqfP7Y3lx37XaV7Qe7F+UGx0AILMvuO+vhC4jRnxQnU+boykAmGZXGTOVPTtrpsB6+osXuGIGIVWsoi9cTOd/5OSylCD1RIJtT3OeL1sLjM+I9focHhwmNnSmV4213z+KI6krns9cN8qWFeUICVnHDUH4B8dEL7gRjc9HjMXHJgWgz/mLO21gEg/2m2Hemjy2SgDvs3sub/H5axvaEihRiF5cpsUnk1RarvGYY6pS5gqO0fPDDl+NgS+vYZSMWuKU7Uljjn/tKT2ZEtflJoWVsSVEyyHf/a+iTx7qSRBExl2lorQjBubu3qimiiStCtujY50Kaw8irjnFRDXXNr87xqR8MZVIxkLtm1Q78t2DYrqA0DHY3H8j+6eKyspxxufYJ2jfzuS0ITLwJtrYLDstGylbWLlUriuYLporDZ6MozK58cN8NA8qwaDQNECWqQBg6hlhVfQr18ft2eqfSRRu4rFmrCHZ6rcxHlUhVKEmrLIsWQ2WtqIQnNWJekJZJRU82BcYYj3DYkdtDyBMR5HAvdgzhjDjPxlu1gVAj+XmOQYTnsgLipRA16YCCF5gKPlY86BpXyTrnFpQPlCi3IyODi3xgkJWaGovelbDmcSsotmr7Ew2iOLlsTLvmuRsTfCpIF+5jK/RPIsCuj5TJ9aNJLFqltztzOVNr7Ktj0tt8O2/mZccPQMFuqrYqUCOMQmLzh/LENr5APBEplysc2iXng1NRgTul+MQNbG8SVFuBnjOdRdMUFuGETiBOc1OC+vEf8JP0MTkmUd1cPbi4lTcxEtWGfmjznfQLGdddOFN5kJE6hnyQYQP2Vdl99XGZDZcevBv50ccHGQbp+gcZh/MfePZB5t4bW+Hu8cy/6wp7JhpmyKRfXn6gv7Kfmy2I7yGRgLx0kbWkAudBAJPahTHmq1RQNUFVWWBjyJblpvGj5AMTgkTeZ18T3gtTH6ux2hDmKFhmfgzvHILTxfe0e8JhULuzbfBsKTopS7UwhjdPCKwLRPOTDUmuGLGajMRwbwuJ9hJ48RVNuZf7mZ10P0NhhqvJpMAlPG1KH1Yr34vREEffNmzBatutjxZTIu54Wtpr2/XmZ/RA6EWvc3d74Lj/pLkD9J/MHWDEv3fuAIL+D/KfUNj+53Xsf/M1Hv+lby75+z7rv7fN+t/U7PsX3k7yX9AU/4+9nIT4py8nQf5z3zUC/29+2ciL/VPzQ8M93605qfPbzm3sPzc70j83PpRJVFReB6ZQNg80QyF1QnfvU7rtw1nxQxqfznyD6slMEt0MoioyllSWcZw4KHiWqIXXg53mMRwnEBzjnAJDxS2vODYjyCfDeqfGKLUsM0TNKHzF3G3l2yxjvdSTY3yGZWmGiSmOeSlODRkjw/EYw0wYeCCy/VaZXQnMaef2qRlM9cRjmMOAxydBdZ1RCtbmBeUkZPBoJMU87ox7Vln8eik2xyiKNSqszNj81RhGDXYsGUa0mJFphCzjZ4GpXgyvvViHeZDGPs+pAGmIYdtTYbBaku372c+qGO5TPZfgrBd7cc9+LAvzzPjgXxheLTAAaYJh6gdpw275MQdPYc6UIh4oiYMq5XHceLZBBbqSZfnz2oZU/Z2Q/fK7YPrvYKMYx7mTvUfzd3G+HOGyXKAkWr74ikK4MXhEJvEsBQ7yF9joo/48WoViLPE6ouWybFkyMIYbVgo0DG6csWSSerGN5MuM8DUKSybAc9qgxYT5+hpi0UlVbPj9KyLlQWQvmafFowmYs19xki6WQPW89vvWo+deXR/E4T3f0OWxgvLQ1FdcPKDlCBXWtBVHBFHaCbeIdj4E/kbUT4Pmo8auGxMYTgYfMkXwfVoeDwxhGgW3rshwNms+MXjzQVoIgb6nKh7E9lDrr2glrsZM7/Ph0JNXbHqr0WAmD7pkgLmcfnGMVsjO2vidSZm6V94e1zvfV5hvqCblLvmESKD98jBW16qGmMzw3Gt61D55nbi/LnG8GI75ernror6hbE4WHi4aDEL/BIkozxCK7bY+fcRvu0BEJwOR864TxkBDBhc3sg/acjt1LnSnXfkC9TSxiBLiv598C/40g+AvN41X3gpKA2s57Q5Q9WwiQMWWIFnIrmF9Q8mzqZV2kq727HS9B4OqW9pCjV+JnTKpSnWZeyEdHhrmr/HaC0oTBG0sbVA9wY8KZMl0QO
5x2R2fngn0vRG0SQDhvOKD9PPnZSY4Nxudt7kN9GAPcWPNapyr1LAMfxYWtcWbLkw1yswzVOymgele0WEHi/Wlucdw9BNQ9d/nMxeUt9OEvjbSs3gqqbgBUhK/5YsuIj+I+5gb0x56WJL4w9iAZPAMurTfdLL36YgyXdHlkis27KQ6bbzn1jJ43wsh6+87RafT5B47YScYb9Dwdi/nnCh47jG6YLtjWDQU+4nNz10lL0BGyGB9RxsEbZGofEvrXcTKktda2t4iQsTnxMcUZq5NNknCuSVk7fwQr6BMWv71yc/ouRAQnUvoFPZyugejS192G7RmbuYhsqcpmM21vMWE1mSaTjjeLgPvOWU22lKJggabTPTqJn4lrm0GtMACJLNXN2Zs+4g2TweE07kbD1CC2h5DGNfGJUAK6zKUTSRIQxk7mjJag7zQsXnXpKBS92YZNhUH+YwxY2uqq0woRkzmbSqdQXpvgZBQoAQ8mo01tXWW4MjlgaTBH6J+DRlbepkVZI5Sj/jnfG3byTsSZdUC5kkbmkAQPNWgToWzWH2DMr72oqvT/Ejd47CX+bALJTGV02UcS7i+ObIpG/ckOOOAtTinbpKzYqDcAsHpVUOmP0qP+b1hR/gWVHNTGhmnZtF3F3kOA64ksbEtI4wwiFp+WbfInU9pel768y4aJ+X6JYac7JMqbDw/IVpaaggQJRN2oRFBs5cxK2GyPjiprFmj5aCBvuhzdRJbQQVyU0Fnhh+zN/JtX2xgNeAJqd77Gsby04ggaLEKurIxkMcgzfHAXh+PR7fPZdhk2WekpBdgB5kJFfRe3OrxS0Sc4JvjjklBOh+p4dGowuRL4B/uD7RTV7P1cY5Wghnu8Fk+RxE6bX5yiS8oiooZ6j3yXi4P5pLlV5kKQ3qwJeTxOJYP3/kfQlZ4+3t7iK0rUl2+UwSKZslSNgsFrYkdH9Hb1EyD9sdVbfl5hzlQmyuiaP+iqQjBCpdIn/o9UwV4Ccx3xERPatO67NV2Cdm4WAdG7SKUVclrQVEO2laEijLwlhJAMrNlV9Q5q+wUFdLa2bcAWHHl54f+YIghfXOTY4a6WlhaC7ZzVs4EPDcJ9md/778/++snGGO4U9vwv2Hz874nuEmByaU/J2SEaCY3TN81HtwG+8g6g+RL4/u7wUj8jV4rb19pfh5KxJ1AbVCj9p3Mhn0tTJY9RtDleJ8WD3cGE8/a5BA5kW9qU57nKR3q6Zi7iHiljKzcSQePEWcC//RAmlxpJFaEokNHh7IrO66HC5FKfFz0NYN3yXWZuBA7kNU60xVT7/xrhQeRmktxcu8wLgWrJgxYFiKDHsBgefX8K1LQHhWR9YTVh1dh5cbmi8GZaijw17o6BV0me6GZG0xPSBYWYB7gTWfDpTZOWM9NCSsMeSWJvivGSx5mm7u1QI65FPrQDQJHsfPWX+fKMB9PK6uVL/mZcChljpFxSxM9lgzxw1QV42r9Z1C+G+mFIih1oDP8MUBYoW+BCCf7jKW2eXOXhD3BKQa768+r2uj58xaBA9/yaxMx5fHvoHcfrBHTxcfiwKvVZtEbBU97UlgalVm132/4uOgd3dK38EkURD0czNAiU1ryHpTxv1FbYt1uMXzj0/A3fWzZMWz9W454fkLUL/rS6ZOcip8m+rFRR+6fcIqOCql237tBbKl/6y1lgIEO+HzBbjt45Uw2VGDm60B++ubhIneYd0seNlDoLeTIhQvM1wKzE8OM4undW99p0aF7n8BTPkL0hOQTf3wenn+pZmQyD1jeTD7Uzj8ODf/EM/2wdj60UBKLr2m+cWJa9uydpVwYVCUoB5f08n45LnjOCNhHbQeDDn2ct8DBTT9FwOBUvzaC0H3WeKReLrNfiwJvKJO86DYokFNLA25bHxb3tvpbVok2Rd6M6zmbkHGCNT2OsFoCaZrmqFee+Lcrem2EcByO1o9g9dJd4MWUz5uAQMHra8vdZ/tw+W5D9v45tC1oZCL16aAPr/yr+ZLB7Om7pMT3NN0keMdJpGsr9V5CNLibSPspiT3hoZ+C+sPquk77aXYQOlkhksiom54cTh9HtbgvgL7SlisfG3Cs94nxOrfpJ2ZZ17N6wvMsQp9AGicv8ADUqHbXa4xw4IV7k6zN9TV+YCTi0OHVg/KSESP6G7c33MtY3FyOiszd65UkdNzFyeZLtqI5O09cwWjeEdrVrMouB2gxF2ctf1+DthWVqhisHqn0e/Zf2KKTFZPtKHoGKxb3uLyIgcUwvYQdiyAP25p8Ivf3A9aLK/C+SkEbQb+rFVnkSS/dIFOCzwvBTJxKAoVOTPkbbzEtecccJpVcR5Auv94T0HsxQczii89CcGhwFm3+gV8b7HNYldzbl1tNqDTFtfYaHDPZ9/AFUZvv6zsSMLuSfubNfn0P9rJxspb0/Zh/SuIwoRHnBttaI92pQVSFoBa5X5JaK2405BkTCuPJzORi9/YPbp0pK1OwZEi6zCYH6Qvy5pCcg9kWPFiITCCysdeGoJeEXmSteMv9eUKAx9pCno2/pO6tfmEPJv0PPWkeuSEfWXotizozKTJMcs3R8tqDcLtSux/7Xd+BjJLMtvnwmiywQSusKoHnD3OOIz/w9OKxPeWmOOcjcmHkWgsWvHRrcytHT84npubzTR/VlAuS0VdPtB1sjVWwuV8imYcGR6RnOKQkgQ1isCs6jmj4I3Y0PXMVFzJT/VVFfXnnyZcuNWWjbybENEEZRgkUXiWYHdtSliJeQUKjnKVHBT9PEM/qvcqLVr34nyf30qvHg0+c+W2PIXUNrEie4IqZRdUWTqEOvn6mBijVVx9gIfoZp9pHpD8G3ooL7tBEZ6TUXkBfzzabePi4pkhHVW3zGQRRBTPRdBZjCn7eEdXQZKXYAhRKUHZLONde3wX9VAkjRGLYvbHAfR+USxAYY8Rip7GDkTZTbwcSPBt69pmynXl3qcckvlJm1CM2CN96ATcMtg/RNWxeJQrCupxBFLHvxB4x6xlk5jwmnsQjbhOBRjEHkWVEe4/LZsQ+ZodkaTY0PaPShtgjOiFSkv03P3H4GATW0hOZVWVv6t0+UcJovfQ9yQdiWb+YxceNusxklI2VE7+bKAzp4FueV1jXfblwEDtxb22Ya5ji3gO+p2P4RES0t8Wt5L2LxmJ0tSp730twoIBfAoXELs5fETvZxTLnp6zU48d/pdon+My0XIYhbO7uO1hZxvyET2guzeMnaBM9eI6LfjQu6wrePMJeRY8QPtEExxTM8YLSsPgkKz7EMsLHIyM1WBIU4mbLVzHoTMYcWEFy9W3rvRRSd/h8dCEPBPHZB0NbZG803l/Vg0Sbc1XzuJ+wDigvf3BQHgncbPpDxrnRKTB3Auw+7mz6Ruvoy0qajuuR0kUYUlV/XgGi9Le20Pck+xrmeY2eFxeMrzP6oST7yetf6YwgHm8LL0ZwmERvFqaoaTRLmPf4wTRb0zvt87qTafdBztFY/Q7d0gyCSQ3jOm3N4Z+KhPCKB18btIqp5my6UFATMSr7PSf2bI9PYDDdEV8ITMXctdpNb23kGFWbZTMzR
5tjeEkcuGmYGeX/7vkP3/OgfuILu9/gEapz+7/u0cF/WsJH/lsfHYSx/58V8OH/qgI+/t/znNR/TgX/ZwD0P7OCj/8vruAH4Z8fX8j5zWRfzAL9k2o9q8kOpx2/X0r2q7Kftbwlg0SjNK00SXfyLjAk9bIvSO5cCkxOsGlqpSBnabytKF9PpqKMXHFpmIHunTfZQlOpBzPb/qi5pyII3MTwWO7WXcewfvscbxvbZsRbUU8iYJXRFkbZrRRZaGqGHx2vhkbuF+CnshnlAXyikp5VnzufHkD1AawYvrMfwJlz7Gqyjb6yDVXkGPmce16Nn0v+AD7Jra13Nl/DP4DMdz7VXw1q8fgQZPpTFf558811gXVeJK6jJTz2wb7N8dEFx9r2+2sa8jj+qCGzdBZF6BeUgY7v9wvTB7qcIDpAUQQJy+NYDxQAkwRxlSCCt1AU/ZZ7uDwWxOBbxhfffj4fx7HMdC7KMt6xhCINpZ2Pz+K+09nPxb6ggrTvx4GkfkuVERFIwMZbDwxB/yCPwp8gYEDFUsQwxfQVdowLeAVQRZKmNIEnoKwffYVOyOSNqV7654OCAdr3LQ5P/GeJ1vsXRSxZXpC/9hLKRNzhsreYTyZM3GI/idCXK8rS0gkszsq7gsjqbaZMpo4Ntt2qv18sFQGBbf66h6UGQ6pQ/oCxnCDsQxLTU7q05aRoYt7KFaq+bYvEcZxGETenaG/lRgny5sLgVJvw71PkSkFM6NWwLPSEJGlnmFf0BxteEO7K0dE/jObMYlnEUDFBfxyUut5dTg78VZhuwLvoELL0DkpNLMeseNlC0E/xNP54Bi6trLII7EtzQ5TE6RA1AB8R80mO2FN6Y2EprcFrXyl/vbrvyA/N+z2EHftLDvbh07QGp/vxETMc32fSbBGiwUL1my937hbxHrzvAS7Ca/riNJi3nm2mjrXvCss+E9oqGP8wVAo2UU1UOHsffKnkPidoyFymbzQ+5pDehGbjdFpcDa7Zeqd4ee7PAF6FouDFbcM8g0q2Kyz2xqGParJeIp+bWbcbeJMxTL5pbzQFXakkB/a7F8zXxwVKdkJdPlmJ+C189WaaN/Zkstlb7rWYr0Yax68QEaouyFdIuvMxwKBaHQf5yeS4PSTBgynTxBicLEp5lJf5dr3ldRuelKC/C5KQ6AArgppa8ncAx8STRbAXzX3uDhVJWRWqRJduVYvZ/tLV06LT8BuQLsN1GD6GrR2nbjglXsyEceBQLJvzytvAk8fmTa9XIo+QyMHrMK/aq8oM+h1hvs9X9bAxgM/bljF1FqpW6Ps924E2icgy7ms3B5PiiPauiif3Sq+OrfMoNSiUq0+J615LPR2Gj0rmw3m0ii+x/ZJfzVM5gTEUBeKMuIWIDQTvNduUfhklg64PnLlY0M3ddl2ZHmd/qX5z5xDYk466i09N1ZwQgkbcdi/IxsVmsaH8AWwswRjWMaN+Dx/Ri8XoHm4x7e7oBKZf5Ru8t9muCLo61G+fD01pTl8isKw8cH9eBQx9qSHnsGpS9AC8F0a+TlNp6GiGu4btXzuSYEfqf7CQiy8EhaUNIZhe2uDo2yTUZ8bDDp6ydFg7lcm/7rlXqrPkEYVVhpfCvu4sb76aQjsUIV7bolFSBN59UaAuOJlO6OrnhgRyXldibczDGzPBiCFzeValQ5OKszG+EnfUrhCvtGJ93wxZiXavHnyp/D/svceu7VyXHfY06jNthiZzzpk95pwzn17k+f6SLdsQhIKrZEAGLs49YZNcXGHOMWY8ICsU/C5NNUvfIFqib+HIPqIioVQMm9E9obVfMxQyVGq4Y15VwFnJ0ddaYQYVfsbH5l7G7og6obkNtutRMuc9pQEaB4YHo6M432LdofFKqB2GW5gA0lQqcHBBKo2Ik68I3vIVTdzLIiP1lg0DL4mwZcnepzhr6hBdAkwH7EbtJkVyXe7d7KnuwkFkzKhaPyaenuW84oz8yR3WNOBNQAOPKNW3u0+LjHnDPyJKyEfVtqrIjfvV1yQSxVFI5C3+uA1mr5bGhVv5Yer7F28EhhUp2Zv0iAzgZj3XJIrdF1H5WeJJ4C+pxzIMGRDriERLoKPSMq+oTgESl9n4LI1ImSUwZ6fjBjKuPWeYVIlROSpudce70U9OiWXv0pNVnJV82cMImYg/H98lTqMnkE8vtK5Gf5oj+5zzX1eMz7rbfjs3Qq3HH+XlhEMjj0ZQpE+qw5uZWYbdegfhsgKngukgZp57ajYDuMRnIOTB8vcbUu1erwzxUZSeQKi5TdIVGUwb5c4YDGqvF8E6SiB2iOpLQaQ6kqM/55ERqF8l3UQPAuKvpK7AmkzoE8o0LuEPrfZH3FWOcst+c2LXDhZIMEq8rFXFnLfVFlVcATkqrHu2DKj0JhF0P9YAge2Iy4RycD1RW7yOzekZMgySb9ro0lAL7qagt1rkHcXNn07FO+4zI84QKNDzHKcvLxCQ2SsZLkAMJ1ZF0kR9VZxJ1uw62lhmESQiBkIOHYQfYIZyaZx8js+KlYYPsKPay5MVmPOgb8+Agt3L1+LcVA5M98hXpl0MeMehAsDnHdLn7ry8XeIxXF3DGZtyHqDEnbAEp9z2wMLDnjB36c9lJvv/6CMNtorEgos/cZNH6MBbzchDq0VmnP75dXFphj6vvd3/rJr3Gz5nwXNS5ioP4hVclH9uAxJEN/0AfY1dGl+HyeS73DFT+FHOHZZu/68KVT3TkaSD9Qs5hN9zsLnjZw/NcfN+fVklpEWlnoYLpXSqTVn2sjHReRJQ1bjuKljj3kYUTgj0RKJvAaQWVFyfpWh8FmDMLGyfGkZ5w3N7lEpiGioDVy3BCJQnYNCd90AMXRp48LDluDMGC97txDeecrHm48IwxQfZZ89S7IRsrQxYYy+LwMYop3CF6VfHEtahQD4slV8hqrgBpl7hpmJ7t0knKJ/dELm46G5s0pJdQbGRWuXBbrZPGnamAID5TcW48l29Hf1hS5GaHLjyNZj2qGGunxxrUXl3a9QXR7E2XPgXP3SiZwLd+QlkjVJN+1WMaTJYoi62rPiwu5rPFlNkgYIBrJHFr6laumXD35ZVrM8lCdEnT+AvRIFCvyWKkHV99XpGhyXfFTaw4u9CDMsqLHYCb/KqeMUKokJr53esfsZvKA5F7uNX0fJKes85aQysbv9Dp9N6R08s9wXNvnuiSFSIPyXXUHEbtZYvqnVs92Be83Dd/S4/9gQhXl4P+5FogC0/q8+8KZ5KhjUXQxgD5p5yRpXwRQdsy6cs7+8hOi1hW/xLqjY73wkAvGsWFUHOyRdClO4YDVRL3glVQ6/M50ffaIZNXWeM2o7zeAIiCjvEekFkTLDVaOHpXS5z8+ffWxG4m2OvgzLqZtveayXbsEEqIjV5e5+4glNJtuEg7bJHos4rJV07R20uVklmRm0ezSW60PDkBW221W2KQYg1Gd2XXVsAqnGJHcIW0oFzFIo/T2mB4QtQyoBb/avFSozk4G16B7CWtl/t9sGXJN04P4RmN28T3tSqNYxdCEwfzg4uIHswmu4PNGgbJX7kVqRlQ2/fY9a0
gRaWei7Jj0+1G2QSmoRZVysrXbCYKPGJxBjIL3BTQRl33Z1vthnM3EeO+N5Ra7GEuSfZY8jJsUhtWy9k482OBdhZXCyC0FTip/QGkN/+gfQi+brgUJHJHf5YZA8hO4e37RDR9lfXRM1Mii2Kuho8cvdHLi7Wf8BhRdppUYkX8X6dEaQ2YaQeENn5royw3Klx+NkbgMeE3NLtIShHrBigYePFoDXSPNWb0aFlU5f9MoXBrI3honxaIpLWCw1W1mp2nx6rnsxq0dWKQghdVoMGV3EdUjxnmYqY3siSLfgspIaWGT3622pFKdzPYTzVy76V/woYWe1U8Yu+o0HNpyOjP278V3KfOpXsgik2V/jhwy8ZQMEUulxItHIc5W+nU9JKQI7L1x5H+hcE+PcDKWwg85sM97N0rx5pLxGJP4BRapzdTLJ5jpdJx6gY8Z7QykLK2yhJVcEkY6M3CDd0aO7K07gfkLxQApbpidwjxoCV0ptz87k/AMVdyaTNFGpi51yrzg7C7lO1Y+PmRaZsP5C1r7vDFElZEJBnIp9qNl7dZ8edNn/vXl1owymj/ROwgBUkbQ8Mf0qF6Hi0OIzpX8q5zFgCJajEeeZCf8kjZzgbEuPjZyQA4PvGYyES6TlFg2fAukXoiZeIPGITKNBHHgQSoMOKaBquAYZ/sGC4uc8dHX5VJLlItdzJ3UmNRrB+9ALqUz+H7x5xYWjcJ9IdkXx5eL3L/qicqx+e+moFAzzAE6OuowTPBWOHFELTkmZM+QN7R/xn/IC1wxTd8qCbbTTJc4SNZ3ELLZn1wTSOREe8vF40G+orbelCmGtuzJ2kBxTszzH2ieIR3jiz7W3q0te+F/lJk4nnlEi3cTZbvixrJh5N5kmi/GYyaumQkRS1ux1XHxDjZhuMuM+RMKUKe6aQAgZhk/f2CSlT9ttp9ezf9zmT+6inxuvWOkO+l75IkqLdLlIKE/8oIFVxQbSG9ZUOcWTHwPLqiaOuO8AN2eCv0rbw/HwPfyGZmubKF6+tZqVe51nmysWC1xuWWJiK+8ur/CSDWpv7gkpAPIN5e5LxINmibGU7ivNgZytJ6zb/VhQ8YemxW6ISZ19ott+/FGp6dsqlvrCTrZCtBOyDn4lQYbBOkKe3aFaBYTQiRMlRcYl62ZXtpFDKbP0zvT383fl5sSse5qis8JibFYrvhq2i5ajOWFDgajzx+ECW1zWFDWy3iYU/cgVVvgAiMWZidZ18Yf0v3s7QPxBhIZqC51MEUQuYAadrPtkc+M5OoLPPQI8EA5K55Pe6lYmksi+e3U+GvhwH6Bpl7gQW0mMOUVH0F7eL6u1a7uffzaElyEiWJz+SxdGDC1Rzgx3lcz3+kk5rH5PgcpMrBx1JBpBIqPzhpTsQF/SoGUNGznnnknHYvZxA/S4w2fqJ3rkqgKjbuz1bdiomzTyS3QH6yzLddlKul/hxe+ZjQRXk/8p+555+yCviwXyFxETODFfwko0xtz9uGDPvymrFp8OamJFHEcG56qToVy//KgXIvG9XKC8pBFD5VtgsNeM9Ka0BA+XSNPmZizxvkIUvdnD1qvZZDa/NiVeL/xNk8OIzoOwz14JUlTRdxm5B2H/m40EQ/lbIwCh1ZhSmCSxlfQ/rsmy54wDCKYg/vUCNUFpjbk2JCDEtZHYjyW2x5ShyuQHKh1eCxW3itKzLl699XiXlcsfuNKmX4rgVpdZ4K2u7V9fRUxWBeRzHPOvAVHIk1STv4Mafvby7xzyDEtAzdnjOFxxWYZeMskm539HVUMmFaJjsITa8DW6VuQQ0VOivIYdy+785pcqdnEMJF9r6oMn086lRTmAQvj6VfO1TvK0N7T949rmCTC6jsIk2fWH2ePem+PQsUpN6Qbq33dr2o5vB+Z7nLDeiPTiFO6pQWLpmU8bBR2OYMKxYYKmCIVcz9+VeDKkKG8JCoXeDMfjcdRP3JEgvRcFvXLhajbGT666YBhJ0Kt2j4CsiTTT2csDGeADFjJvfiBrPXxlygw3sVX3Z9QF9GKfwRHCptFP987FL+wu1cRH8UQ2dH4rUpwjw927J72m6juA24o93Gw33GiwvvnvqE40iQK7VKDVWQMP788BPK8jmnuyPSTn+Mc5ZkRqVrAbnGkuGnLd8Ho4XODKqd2Jsj7mNFR+azwhhtJg/xmK0GgNkgVoRmWydkEC/nCE0PiHvmKZz/8jHJIDm2AKmPgaz/8gHaPPLdzLa56sbesfyyWvaXGcpT0T4bqBTytMi/B7oZiyKejoE3zAg0JeUSX/10ODlKOgenqgmsnEEMObNbuEWavZZTUCNBJyHGFvr8j6m2gHUkYShhOq1KEOJE7NEqw+aYSYKFBPnEknM5swh1c6+67dufdIaNzPBEp/0h6frbDzeH7xSj65RAY7j0tJ/6DuRgiJZFtNYpbLnd7t6uOFB/LRMfEHc9Nc2kDK1OFBF7LIqpfcJAqyt6xmaxi7HEupzMb8penZe4jLHgdN/YT9ietRKtqmFyr3KCE5gUDG/KLzCKn4i+JUBom77HMtGvxSrYNCHLpPNjIVmpdv0swt+S9j/MuqXm9RLVbG08ucNw2z/AHnF/rsWrtpVWBvj0a3R+1k7mTB5de+ijuRdLonK5HGrE9O5EOYtqcvgQ8Fd3TXPF6DS4mpb2zRGv4gALgL5pR+bD8Ro9NjSCfQ5uJV2/UwvCOYxIQj1b8CLxkRH0/k0UIOZjw1kFmZg23WyHLbBF+P00Sy/SdpSEjG8VspJ6Ad+aC8bnqkE6TgV4V/+mUa/e6iL9g4y7TSFMZATJc2zWFIdcRdhziNxd8HhmBfbInN/xv4EN2RUp+mEeiLjov9zWdIDyRyq4x+l7tDyQdcnsEQy8oT8AiBbtkJ8Rkwm7Pr2Vh8IzORMVjKBHaWFYC8gqjv5NwBA0sU+dAF6TUfg9fKbR7E2i08flmcIsEu+WLUvV/bVDOw9DL7YioagOV5ugSGEghZ5s2XZKKu7kEMk16v77dvU+TACtw8loGCxXKRkfuE7ePlKggsXu7yc3Adnp5JDkrRGy4xOVsNRglB7xGhmmHjk3BRYCmLj30ginOTo2EcKEstnQ5ePZqxyHUtqyROWr9sk1kmkVHD89nI0vkzmB/V7efMQAHsKAgQvW321fHbqXTQ5gwPMDLJitgLfhlHybOs5+FrBgEqkGeW6T2y+4s9201Yv+UAX0IvOBkX4sWnSHiRPgx3y96CsZTi+O5GgAmSXrqXkRoxXrPsg35gRvcMAM9nZtrwDHBvUJH27OsG2SEf9HBTa45oGy3/DuaoTRj32RCoM/JJSlNKmvoJndwHcMUvfkRvtTlvr1FvJFlFh/QBHlr/nI0aMePVpl65mbWGrbPotP1GsrqyCdCvpXvA2H1aZEUyavLKZ53LI9qQ7cQTXv93u08TGPvHcq6ii7X5GtwBCX5Zk0hI7Py0WtRXvC6Fuoe7lH/bCi3WGu2fsO3cnJv52PJogIWDVWOJSRF4CR2Ck2DqITv2om5kHZCpQ3I7q7bCHOsrhfsuWImfJOp4WWk8Y2xd3qJ/x/lGmma0aB14xtCc
OnLbo/VUHt7CfIo6ZRMDkwcDAWfdcE6KFOFlQtpvZ9OEXjKxbfEeKvRl38APHhjWiSACWMMx2LwA8xZ24WMeKTFpi5TlLQZitPw15iuLlJ5Ub7dsQ0KekjT/hvU1rzhP4FGlbkl7ZBZox8DaF2mVCdCjHGLj7nqwKjLuGrWkTXK/Fl6lw6PeofDH5RDKWPS96+lNfwnRazamb6RHu74rBlr6hcdmbnnVoOQ1stNSiL7Ihc3YHlv5r4dPFcQp77Gi/aN0X/cIBq3gTT9poPEcQvWDG0K3dSEfKjZcwPkFt2XuMwbOXKl+MpQQeXiqxjAxOYgx3c410vEWuFlMPvqaSLUlXUZdgomVfOviIFs/ZL2nz1oCYTBKrNL/cg0thhap5em9mfXnJEc6WNDZxV5q/xNFkYtZdtlchAjTl1o7FgwpZrbbMklDFyZhdShpZemILWJt9isnn3dTGd5DASDum+B90LwVhPp2W6bHerB7//CclcaL/uREg//fCxf/rIkD+fc3n/5MiQLD/lBTO/+l1w/53DskI/s8hGemZWxYjBCA7mhZPx/Qo8qJElrTeklvPUhIlRrSvS2PJV6aWsudtT3L4F6Ih/zhRMsTuwPLsr+XDUqTP+uzYhqVA/5cCln9dETCI6JP+c5Zr34/glb40JyO/X+D41yagE75kTTwtvMoAv1BFTmnG6G4GHh/+gguIj4Lm/TCAvzzgvnZzHPZAL4wIYPh5ntz/L/9KO+R+mDFKn/1MNalWgkJ8d6QW35INQggKCGX2SA54gTAAb94FbDmyJifNWvbvKUPXIUfJKnRpx8pmIyx5kxhgwatIK/FfC9znPE+kaV4SSJliaY49D13/CuPg2JfBMACK/sNrtFfZw/Tcfe0qJzdmCut5wL/ClrBpmoJR/FseIHHmTc11qkyxdM/JAzJYzMctP12Ih6cs9DxN/uT/Fsowj8i+V2hTilTL9LNn1NqqQ4IAoyFlIhTtsYfG0UBWvQNk7JII/tvUBKm3VR6lEH1dmmLLRA2EUS8ZUWeA2MjUBb5GwDQsN+pOcnJxqg+wO5jAUY6DfS5+HE8jL3YXYnoaWHBlxtM/k5v6XLRBeX9pkziCLX/L1e2dhFT76JLkJIopK0lczrziQ0sC51MLhKez07iupE26UW40Z+iEp3Es42e0uJbxvjuy0ZjPilpanF/g/zavTxWddaVL2fad9j/LN8bcMzOCT96gBcDRsU8+/yqDYBgGmC6pVujil/thCcChQOzMVARiFcjKuBEIsCY5jb6zNA/CMdXLkNlj+RxZlQUAPbEm1juPWjMSxbXKg/jPfY+mSQRVFK34QYd+aH+AwFyIS3zmyF7IAV+JcJqMSBXqeDqo5i4ck0B7iRrpGRKF4f0mvjvnC08p2JP69e92O/5ieL6NU3a4nxEXYmBdKdqJpVQIG/jvV7Gkb/u3uhTXmaMWF+/Wqn7YsmY9O+a1+1fA73P1sh8vY7YXCZOR6NGdOI3BCHs9g4ufgaTFhOLdFi+ItVSELBYLpEuLjPaK6d1q4uiWe4rj+CpJUFIqrABenQgBfX5kqTgAz5zSD58W8/bCvZsxdFZR/uaaPSdPpi6X75N/BsIboDB3qNNYjFHxgF/jzjw4JW4+dM3mPCANnW1uxntPeOuteveO0QQL60d+1eRiFTcoZ5ROWxRpuq87w71dTzNu3+s5r4GsTj8iQCBU08m9TujQov/Ciy7MMcnQiYfxY7j9BCjl7uH2ZI/eJxr8c+83lWV3uDjg2q1+19h/O2N4wK/qIWWLDGnSIvAImPqbUCivRwB1UK4cDmPjy+3bbhT9d4FTJOZBiMHfGVg3zyTpEaYQOQrK8kfeOaYZjIY6s+FwR0E2rCbHnzWpcm2j1q152WKwGiz0ZbLP53r44clXxs7aHdbI2L+Mv/WT8GJYIN98k7rlDkeV/3VVaTyELGUtxklT5MPELRrIjiroJWvGHQ7V0bss5ADtJJDlFBuDnluFwTB3VCUQ3JMTSUT1uaTYo1fQoPozlKYkTqqHDoHzlRNKAp2nKRfYFasuiaOjvVpasxT6hWHXINc/7jOcmdT4cIyyKGcwOPBen8ndq7ZMcsqdz63AegtwprcSUmwsvBxCM+hNIvTlzzotHZKnx9pLEL8NORI+SRDs7kaHXRtDg4BeVjApKGhsJ/nMqIgyfXNQEbHAibMgHx6pDi6O+00b7dw/zhXRtHUaDD1JQUq49qskORK1YngUSpVc7bkBozg/Ri6/kJHDlNt3VyEsd9wbDDQqU4mN120MWFQP1v+oEqT27bz0SiyVPkjpXwXfZrFWUzTS5XJEyClJmDpqpCg9jhs9m8DHQSBqT/PpHf4lRWzLsHdLGZ5aGix7gKUriWE8+/duEapERTrRuKRrysrPf6d27fYRgar83P9U4S46XE/1lQaoL6ePAAT7YmxUM0UazhL1Uxe9UYSHMJN2PyUNkaAaLCBT8kdppE8lLdhmNvJ3hiBK9ByjRy7f0LtC/G4ji7/0qgsgFw0E3KL+bEv/wJnHY+MC+06CLX1RZ86LJNrKWuNLQ5qKqMmyfIhsfJ9R6UFilya9yrEB1pskg71J0TL3Mhol4jDXVejFrkk6SfvmkVeTEmUEY4tkrt8D0spwwobC3HzfawiCNEhOUl1gqdLCwtzDXkfvm7AbkFQvD1laUD2j/kxcrIGwxZl56xaEM6Xwj+EzxYbKdPVnADH6O7lN3kysMcPLS5Ommo3/JdnAvXtu2Ygneu+9//53LuV2eNGDCuPtQGaOrKklU4rXDTRbC8nbyfGve2LjSX7slFP+/qM2/e6nkxaFe32/oAAmxwMploV8vl8WZ53m9P0gvvx7Py3dIAkPGIb5GA/r29dpnEjrqAkkO3Q5ovMtiI9o7ogjiRjAVRQhNTRBwCaZFtEvbu6ZKgCXnx8tEtC8WFUe+caIPZLU7bLdSa2huzJXU7QUpJ9WdGrsGLCR/STsrMRFD9rJEfpe+lTAtS109MTEq3EAyaiSz5HFctIV4vbpI5eDHd30vAcWIG8pcrNBTiMXBetf45ErEAUYKZeCSZeKV4W6DPgDgdDgKT5SF5Oy9SSlrh9OIHCMvTaD7HMlM/Wa7m8+P5jdg+X4b/5VkJMP3EhGFAfs3+4kpe+kbDMzEC3uYualUKdHEmtBYWRpGl18P5dQeuDvZwVB1v8TSkrlhQ+TzgAj0vgskpqSUut9UOQvqui9laCyHWO8aPIT8ssyeTCw9svPGH/jnTqvqvs7H3zHRtSzijJTSRiO/HzI59h3MQ1yIeh1+vm6YueMYqTY2YQkUPPBonZkL9VQIDImddNPNZlcr7ehtbVGKTiina6rUbpCA77YQ/4J7wQ1PNNTPMlMzFc6BG2lFfuiFyINdcnSthP5VZgIquDHZPkn3arvLPGhTOIHJgSGStIGGJOl33CN9blQEhbwU/iyYFQ5INKxqxqCBsIPiv2wPgxC+ooTsd2Lcgism98d/2wN7UgvKAEOTOXRKTf5L4F6KvYPA0NfXvstRqvpkfquz9oosrQUxxJAA8jVg0a9kmttfSFjHLRzae
WTOuvBcg/1xpqT4ucWKn2E5oNb0U3+O8pbIl+a0kdsRIz2CNiUfoGvRuCfoT7koI9f9FN8niSvEmE8gxCqpd3pW9YOxsJbs4bh/qnDKjMAdZawP4/zhFYAWK7+QwFUb/lf3RcBC2jGpcfjNy1Qg50ZZm3JzItf/tmVJCXFyjevPbXxUuvAngxMhMVN0UyybgD8i0zeS+8w5DV5aZXsuCiUZXxTknLIS5ADbtr7S1hPv9oo785mlH7lAsaURFpQ5YPAlgXOw14CbTnszXMmNZp/MDsU9yYsxJqq7DPYv7rwNbr9kljN/tmzJ2nLBT4FZsMF2BdA223HEfuo1jiidqiMqK0AtANq4mIsOWEArO3wP+iLgxI2/6Um3YYAdiGdrGhBJREa9K+4hPdDwP0yifpF6JEIoIL1OJEiOYtHo8zNSMvvnDrl4H8lI/KDi+puGII/MZFNXwpYIoPkqsO4xX+XAui095iwMpwjJLvyND1Y7nta6YzKiDuH5kP53M6HNC1kjB9JM0nLGj7Yrae8SWK8Yf4VfwPnnXsYuyiZ0Z2E9nTrie8R0mL9b8O0GF1nn19tmFZ5Vz9/zrT9Usb79JxHj4X7vukgxhixkINr0IrV41TLsogjuRvZbrebbMgWphBvAe0rSShS+tIzqVZZxOKJjTnxLr0LmwUpXhmKUlDBL6crXiGEK14zf3iMdhpj8sOXVlor/W3uSfVFvr/E3t0+eopl9/3bRC+9w3InGbXxBJEU+aBuVCdXHXkgYhog1WoSBC+Cq9tBuCj04IMLrONKtpISX41jkVOVgVUO+5LXTVRI8mGfPD+0u1Q7bldVZpdXYuh/8ztZ5HPCllq2omGz7V792Coi3vUiRTo0Xoxg6nVh2TyQRzzofaUNh0+CwUT+yozs/HWN/OXwlfzonZqtkV9lTu6unJJszZPCDtT0Tfp6VYwDW6+mEtwSEz3fzngbCN1mQlEUJFmZ6Igw4G13YEsmYm2KzeHPH/FK7pfc3b+oHbeSJeclGjixIpuNZMSme1J/iwXJrR5zDkEyEcGoFDBgVWL9J2Io8bJS+qWAHsGmES4CxAhNoAxJ714rvtI7JFXdFeVSEDIrPpLCkkNmX7ioVMakwTdT3A6JYYKPLQjpPLXxKbGfxhBxKsT/RuB0jxtXNBkeHhOX71/RUS4bj6YRZr3Yz0dEcDJwaYnPH/FGWfQG485jSRXsY+S+tcQvDWN/jvUxWWxam7NSyWATaCQizEdO+iXwRiIoqbaSUj9FnuDN6ArQiAHohDCk770iyGRN+yeaCHdoZwVHq7Asr2CZ83zMQWaZSdF6qatHe6daQscphBfQUacCkzcFqgkT77j/dct4mb7oEUfJN1iIXLjV8AdT9ubzT+0jruCCsRkSU23UpqCWwbtfRFPhbJgVqD3eBgjbYL8j4mEs+18VbfeMLMM4hk/evzf6cmvgJMEIj8mdLCNwkEGdgYa/6My4h5ZnbRVRm/64tf7uxSl5z3XIE9swwFDfrgcMD9uBHcqrTM+PkRUvpgFdZg+E5urIh6Mc91dFtuk6D8I0hhaDpQK1Zl2bG2uv0V+TeG1ioBIz06KQaduGqzGx47/kE0VR0PmVkdcSrCSKa8mAP2cdSeGGfQYAo/fOjmT5xvfSp3V44yesJYkwxG+FEnnD0j0wBIUJ+1IhkZKi0BjdQG8FGvhgaRuwFrsi+qSLRE4Ea8os4xI//5Gt7oOBA1L8teTw6pH5hinAAMpx6OwvBfXeC2RkEQei5eGSOLthv5LCGfonIeX7x4odyr+g+vMaLw6sqkVZIt8RFRUF46bhs4153gdNNLWGNVJj9Zrlv0hSOt0NSqPnF8t/q5CewZ9vsWYBLMxzn5hWKuy4qSfNRhvExjpZCcGsL/zgI6qscocLZtuujGx9yZNKfOZeTYyNqrcv0WZhtZtfeuNqNuaBS/meTcR8VxqsecauYXtT0aptLaUfrQr9udnc9PH16SWdXUu+WoKrwpd+xOhej0jFPGS+D9GA1TMv/tVAm1b5T3W6Ke5cy11PquZfqZvu4eiwi2BnFZR8bUu/kKNH19XHUtrFYR/1pGnyPitmU6ihbKNZ0xbBS6CuO9HnugWlg4iJe9loKW3Qtjy/ErDr8V1rrk4j0uwbFaQk79ylWPL/eodm0gfxencwyPK2RHt0ur78IEICs4f++b0gk6q2zZRJPdcw+VQQNPo4L0nfv6cDA07jp8+UZ8FHTUWoOWlKFCpYyUg2kLxE0A1bpJT5bKeZr4GFHlL9asq8TJ49p+UFZ5m1fn037O4qiO2Vrmlv8ilplmFY/AJpKBjslb/Y5QtrxF4SN7//EOCLn8K1LnUG9+VJJj0bmK7PPlV2pEcoMREH+p7eey2TgNh2MJO8h/SiL+naSNXStXCTPtk2/5mK3g+FkrawLogODHDJrJn1L7jR+howgobbcEa3N/DZg/0y6WokyNpsMUIr6Xq6vMFfbKGC1X2EVaqmgmx03X15lDw+0V4Qj/LaPqxYXhkaK3F+zIhivnTnutLo8CIxmuku8WtTSufPoKGsHYbzv2EFNJELN2T32NySm8jDKZaCQCVu2U3cB6Kn2aSkKxXZs5r/cBtS7YK+xEId+xULxrwUQp9kEFrYQ6hwC+vhr0Cx1msLUZlJ9EKQTg4bCBendLK15z3VmY+ZIgMHPlvZGxrQPnh69ujbxS9sAWbpUxfQMJX8we92ugfn1VwusOFt8QpBd1MgPVu8TvvMs1VgwptLmnXmb7mhz/u0FQ4M1F0ZN58fMCKoyIn3mLQImDGVanG8mFbNyHlF8L0EMtwY9sVPsRto3orLP7p9aV8WSl3W2h9A2tX19h3aGmHKnJVwvCFLrvhPo3vqolbR44WaSD7xy+8omX9f7vcjlxcVw8fRF1YeqUyQcMcv8kK6ZO2bayiq+yk+kFeFeaTZ8HLKfuvbbpNLmeVRWoihiHmPStxMGb7TFxbiNC0AMwUd2OYCy+j+wFIgG8yVHtnlYtYlRTY7CDmkpSFdSGBAFACYR8A0yx8bruups6djJoXvkaErrQMokZ+aifbIGyXnN5KVtEUjOJMVNXQJe8Q0b+i75rZ9ILQ+FvWkzlcaY7CeDPA2N8tfMDkGkyWllBFAPDrakzTPzpj7uCjJ2kYYwFLYiMYkadWfARsSih25sr84LI1M5TI9zk9A6lIgtv6S8LmHuPEn8tZffLjbK2nc9MvQA8jaKpLxwjcqCCla1J/lr5AxWXvk2HvboGdU+XLgc6c3L8FpTp/95zihsSya8cSz9yqOZnXg6MnulbJ5sca0UYEJcp8Hvb8TQ4+I4rgIWTHRVUl0MItfYef0skIhJ1W5sluJKsaQGE2TO+jBfM8FIpN5ob16UgVrOACH67qm4wQy/JJffWHSZJCb2KLf5tQalOJPaS2IhqamU/pe/3LWd/wmSZX38nv1GeVLO412ppCSbMS0nVoP0Ghy163IXSKixKCOAO2ROUXh7fTUV/XKgld40H8FgV7q4C+QcCO+i8y1xINbkn3RWe1vUx2Ro89syy8fuwVZW+l4M7pT75K/gpdNV
qPUWbETfv4mOFwoc8AsZdHxr4HGBq+KTKkaq6LeBzEquEgwC6JFisw4GyVimO2EYTK2AZNfEF070pF5C35m75Q1gGry5GB2kC1/wKn5YqRu6/61cF/i7I93qsGM5cQfTZKWYR4xitvm6PuV1Fm68YjO99Vc45GI/lm7Yz7xkZcJOqJvuVB23fWzZJjj+K9Q/UzubWd7c5W9yyIpNqPkkWXHfKMKPnpZbec32idwKQUKE6mOhknXZO2dls/OYHoSLq5Xzhynq2hN9aJBsP5CKekUoI+MhkvUMYbii1z+BV3wqa/fgpknyZIrhzS4U/cDVfan7o83yFUCwbnri4IYvvvUKBmqNosxLNVyKwvPDos5pUvnP6BzDy7KS1Vtz6A5PxJ1ap/i6zWd/PKrwz+j9gkIrMjQjhqraMSlA6XLe4afeFRxwnMhqeS/UNRPxgowXVrCYqWKpz0B+pjdHmKZdd+lEwXI2JqiLT6yo42dUvCzwVqEWu5Mw9CUQztM+A9YTc8OvGpsEDX/xy0/8mU8L5/yC4fq+AyPoI0kWTOqyR3QOhqqdFy6yCoa7ivPyJqkOnGtlWDiQ7FRB3Is8Y2uXW9BZbYwWU29wnbOHF9//9pAHUlb1Z1atlZDVmYt8nATRZ1J9sg0CoPyt6rYxGaSdti/J/Mu0i7eyiUseYrZ9HcGRa2YUs/Lyy6s3M1ZFB0HlbbDqyUOaVa43+Nns7yO1mfH+1ISbd6Xo3q6x5cjxY2Sy96/w0AIUpH0AdqE291NI8MfCmM/pPbHVlQwtmupMsa7AJdGUFB3JSeRNkkyFrgSeghnmduSwv7YcVRBzkNeJNewwM6EZHvRTdblJwwmUnukbQHSHysz6U3MKpQkN8f60vm0d6tqWGGSL1gQI/DHOa0hWlAh1+1kmxO3kdKen4P9jtCq6ICXW5NkWPvFyx8+Pd+JoNP6Q0LXxobf3vykik2+SPlzNFl4en4OaUqy3B+7tFJZln+1P/6jOof//nODP/7H7Xb//+CPfwv++LfJ/v9KQ9H/cbPe/5XrBv5Prtt/TiPY/6d1+49osvzvK9sC/QeXbfm3t//fMkbIiID/I0ZITEhXv3RuAf5KeP13pdZZRKJk6f9SuOWH/jlRDm1cdbyTpF/bGnbmfSFCif6svxWfPQNoW3SF86Qta/GkUWongEevWVpcKEl8B/QOz2xZ0k4FS6Zo04RCUpTIShTJFtFrl2bHajVJ06Zo26SoO2NLVrKYsCTLlmbbkm5aVCpfOsZfNEWRrCSRnDABFklOrPY+hi5J2y5lzflay9xkxIosR5LtQAJ2a7XslwMqmyJNU5LOjN33XEWk/l8dk8jVZfD88D74ot2LorkQgs0G76sn2Ln0be7Yji15UXxFuJvbL5NLf68SJopUrr/gl5/647rhHQpyVqocUTBGIJkFMIX0DatT5Z8hvHdtV/RnisIr0+xRIJ9i+ceKVWwXZ1jaO99MyM3UIQxPVBdK3q1sU6vn4pK5IVSnQHpEezYrK1W+KNDPMWDEPmnbYQP0dZEjV1bp8NXw4Jwfgt8xOhzC+1ZV+UeKwxewc6Yi0R/ruj7M0WQBAovKB0eKA/4znX/fXkXhX7lo0it9njZN9nTaWob0+yu17v88j7h4KiwlhioUlMj31fynTDz4S2UOKMkZEBnKFcFmuo0Xk/yllrbR4s9JBg29+jJ9m4a/Kb7HpUXw3LjMwKFsxSxX6nyn9hvHEy4sGyDbXM1wqtGhUpGsMig718u44rkLik6SQ6Xiu3fWO/ksRgeMQYsy5xU7Ss37ojyD796PZROMy+JV4Ss6NGv6n1eNVW73WKSHkCce0wyqa5V0yZbXv+iT7p1cLVVgCMv+6vVMV6oHQAYg/AKihFx95VkogwRe1O8mMyLkRXOGaQUD9kF7xVe65kPsX5SX8a0vpxEDUJgtSoBfMNMIkD2vvXSj+sf8qLzso2/xU4XzakKeBK6OVdc21JooUVFVIFoxe7QA0GX8LN8h+HAt0TMki4TOrbDZhSQ60T6XioHQ8te4/P5CVXKkWCUlzjGGl6+lJcWIhGeU+lGMpr3D9ymYdGhLeJjXvsQUSkXwgcYhhTHcZ69+hA8EzoEVbEE3GZ/+PFQ/P09tW1y17Aub0BQu43+6pIVigCfa2Yb7EhXq4aH2+nGpdAOwz4XxfjrRVmhyKdQ6/vXChznJDyqaUJeT4x2QIQ/kSvIKWYmy2X6vQjqJwuGsooOrFC/EGuiwi0Hm42j7HJSJFY8H+306i0XvAfjzvXPNf0aFv8y+9b6KTj7xZDs9bWzj0LNVXAzueLX8n/+tvzYX3xjZRjdj7Xx0/+W0VCpDnPS+DdZFS/p8NXmM9fQxezNjCLYbfTMlJhr+Kpmni7b3qZFiD29xW3yhHiZNVygi5xSJ8B3l+KFBjZ6oOf4w9mcVAh5JUywKClPcRuwVnlE3IhI8iCo/gtwCUGvYAd83MqOeM/KdbrV77ebEJUBMhabG64GmK4SZUgV0yLKITrpE1vQQvU778puMm9wRPVzazM1Yl1I5yBJ97SHQRgRfl3CKNXCGQDvOsM+6jRDzQKDEs0bnnmbKgGuZ9AohhcYUPI/DJ75eONSGWGSvlQnGdEID0jHP5IuXVXScYV9kjQ8254iUwzRprj3o9lmyVF56SLojfgKhGOhwqHYW5FSTd+HmPvOsXRRaNKMrYQQw3e/wbuyZsfHg6+vjYcRgU93FigDhxT48//zA/6ItZl5AdwKjUbC/8+/v9GYuZ5e9YmDdLxj5gjPx7iEXfi8TtvPdsXRE+9Cci02qaFHXDG+mPiyDKk7ESe78cioTpqKmd6XREAOhy78nJ36FEmR5bXL+5FR5FScn3hGL5+TRmBdSBL7q/qZg/QzieRzugLADUGV7yAKlplRSm+P21fXqgKt3PNbPUOvG5HS+sS6QU6enVFKbNxr1K9vgn0TSpSjzH6n9l/fX7mfKWX6mTZ8NaR1lZcguqIInAqNI7c+PcwQNZlByXpPOM4/WpNCj9TNnETyn7Pri2JheLtxK5SXHk8fWPkeW8iXfp6EkukqYBg3a3gFpVXS3W6sthtCAM0tnhN6tIdlJ5PifWNr3MKWVhRJPMMNXmdK2FQn/6mPNV0UL8rPokZtdcMMVTshJgUwkLwGOOdQqGvlx58iRiAeT43E8k2GMLIU+HcMAFlBkqqGPYq+/AhXBqmDacmhXb1DyqlhkgKaQWhl412EpBmKP0Ml6LKibiwDUozMO+RJnH/YzObnRMchP3ELK8UOjVO9/sj1732Z0OaB+JS/08+/+iYdUr+VHRbbHupVA/h0sE+6BpnzRSVwTVYClann9ok+CIjTHh9LG/1m/PGRJsjfErqyK6ZB4hroSzzhX4q/RO5qMnbgGZN16f6cn1hINtLnkoPUgcjo6ODcu+AXmP6cLeNaMLxZz3qVhV/G1RyVb4U6RIoOVy9lFu7fOOKLH+XavOmfwn9O92ylV+gxFuhLEwOpIJXfXP+5egXlKtk397rIHacNStt3bKGAZ5e2AutX/3PvbmpyhRnutCNuXWneg/QtpUzq0
olRUXp09h5uVaK9IfQXP6nbp/JMR4W/avuvHQnpWCJLtKSA7lzJdOqFBTEakJ10iN8Hft6xAUBIKGkMcbHRoyqhiorVkJlc0dHGlX6e5USJ1mgWD0/VIF9mrm3MX9OE/qTJ0RiiplDpZn2iRoU83btvSOi/TeNeFKItYBVQf82+FFoFD/Osg00Ua4He6Urq0ruq+9Uvu7AziO/Tx2J3vdP1J9UIhBwb9dM0WxROjWHXPwiIE+CflIbVaqPBTv7ZAOjukQ5TtzbX+dTnmy4aq7Xel1lNWvyQ81fgA0h0CoxU+xy1sdn7S3AMFbbbVzeyZUiPCHUV15TihbkB/AKX8PIKFHZ183p9BazbxmlMagERzSad64uHHaksJILZnRvx2rFBpFjAWtfNOqUKGdW8svtheucmlrarT/yAv+59okCH/cZH0JCkt2lkYytU45Vfzwhn/ySWJh0/JBOhp9wbHRli3EXdTJioo+kY1n2LXDrMVaj7uBXA6z0ZMgKdYvconBELJ2oAbeUXf6Et97E4x6Gg4N3DveVmZbJdoUb12s7P4GOZSugQXoas45Qe0uqiay7qK1lw/L5yS3rPA1jn+i0tWrhE8hB+qkrOc/3AgEDoaLXY0ZgZNxFr/5d967BwsHUqQrgqlRQ9STRv7UdijjHA7c5O9wAu/pnIrkaIRoaNSScWuV4J27Wxqher41K/0SV7Oc+x5fo665NLAbC+U2dk+ZdHsFSAE47JdlYr4bTtcHDrxPVnfIoXNqrwbUqOg3WsIRm2wIUoD2tBGgGsZJDmTDiluxcbNh2M6au99+B4AV3mf/LzAx/TpV4X2B6krGTvWLjuV+SuXTq0ZP5iwcSwTxOoVL5zyl4l7hhT3rvFVs2YrocWORNYzWa8AkKgHf0EsK7abtXfRPL7T94Wpk1ZKk3FsF7koBou1bjlYCMXMQ2aNsGaFsnlCgT6/liNLnJ0NWI2c2CnqCc97nZf0HhrTJv0rVhMI8is4yxscigHz6v4risJZFuKdMmVLTXm524V+zkC0yN2GL+I8F9iBFetP7oS830b+jEHIYJfDSybFWpe437ms6hZBVdOLU2mVbNmuZ8xnf06Azb3y6b2gcsiGlSXu5oUwQyUWT8LfZYEveVwNW/aeEzAVSHJ+nChSAb0riFsjKnnPKPoQVS1/dtYLoH0zvrcjE88zzy/+MsavSmJtx5BB+WwsskBbmvsIiD5ZIwGZX6VGguJAKyVp3TWy5CzyXcA+NWFS9FrL3ZeyQJP6VcuBi0tVISon535JGOm0UI3MDsOcerSopHZXJKwN4Mg/G3VMvyqZoCpyrM+4jYwEiDCgr4Y+8r47l3rP+Ke4HJ3qklBbEb47u1r6oS3rLvKELqOdXGBTH8Ln8CTu6Hcrxdd7FB5P3LnZl5gITyjmRN3uRENgY8MSO0b5kUodhvtKBMUz5Js/jNFTDFAyALbNx+rXIXuViTGWzp6ugDI8jypqN7mK3V/HJ+6Va98xK/IlBvhdRyINc9riGXiBBxSPVJhVUQtf9K0OwFGvNp07iMBiaZgno0tnEeSbOjHySoJD2dl9qPxsJIIFCGcNQdp39n9aUvpUN1CsSB2MqlDmXC9tAbYlPm0hjpufs/c3DV+3AQQMyOzK6u591ZMxHrt0AuyirONAhrC+c6wZpc8I3wnS5NLhx1cDekkU3GDYR6uGjVtMaJoQvPc8hDDkpm1f5aivOfgXYhYGqEHRyt0Jy20p1PgMAXep/fhur/LhL8UJnD1KnWNJF1ArselVtHB/6bh1+wG59j8zCUg+yUpPDD59pGG5QbVUU83jfpmif1CJEmwI83nD68XtX6zC/eXJ9DuWQl+cSRy8GBGPsrs3A3I3+BiNO36zvpg4omixNP+kTtLf7lkRdxl8bkvqXR8eTURL8zltp49eDX9kPCztFcSgksjgEOiZLbI3MAcBx9/HcnpB8qWvoPcA/tSBzaD2PZwsKPpX+8DNV/sPqj17yyiVo9go9DBMAWsz4xx0dZ1GUHa5qphR1u37Qtjq1crlSeZHT0jhqmnxdmvtcaZE1r1vz+3rVoiB8klv/BXrgTmy1aqr5isE0gaQGNH0Kqy4rIupZOkXilcfweTM4j7IyKcH0zBHgaAu1SxHc/TReKOH+WkKicgXA/A1QMUHc2E5E/0mfORehbFc4svgemz/x+Ekx/j3Rrv5pY81L4zQWAwWYgndqVDym6tXKPKlr653IE+bPYHOC9G8U69ijlx9jnGfAnPykehXgvzCbvnqn2b1RitHsFMkdTp3RzlzwEjWstKH+bTgAEe0uS4u8BDPqjwcKVIJH183KFcsmT9fgA7vv/wZD2aC/2irHNZ5OlGUqLL5izQfQfKzez6/6EOUM3dRCfqajfbud1adEXmky5eqpgnzy2aUlITq0s128oQ8DX91Fzf12ykpBy/GRRZKzaZKElV6Xp9HsqYo9WieAD9+vkcXIA1258gJ84fo21c/lDF0NH1ZLSkVV+5V6sQ9l4RyoPnR3zF/ZW4FH2EZLlu2OCqZPISqSPIvF8KOdFiS/q/tvceO7Mq1rvs0al6AzGTSNOm99+zRJb1Jkkn39IesuaS9pa0DXFxsTeFqRaOAqigykybMGPHH90cV9INNmhVZSmLJqoJYkk7DUpVQD3YRViRFKSUksuJOmk0jXcWNWdwzcxZzFZO/il8ftmbDQrRp8ip+iCa7k+TQvlC2YMWr2BYthjqu4hdFhiWCsgMvFqRpIxlN0arJShJ9Fb+uhE4sCte8islfxXpcijsv0mxRsFdxxVKqScpSnt+BwVXcXMPqll7FumnyYq6XMlWJJMt+BbMwr+CYts3uKrZoshKv/lu/ismfYqcbbNWq7hlC6ioOy/8Xz6K+O7WNPHMZE7MkXR//Wg0N/rdqaH+94D/lJLuT/LdJdiVxructX/HcpjL0Nt8/Cvnzc/09z8z1O/PH38qv/+HJ8/HsnD0TvM5XRKh9wMvU5zcX5RPDefdqav3I3+VYiWypktXPmgvFelrkVVOpbisq6fqVNy22ITdEr0zWopixoJibuaMjlbFE+2oT9wbaLMmT92EIotPXYRYz/BxW0WSUCtZPi4IKtmxYjqTUjhwgjTZJ22KKimJIsbLJ8Ndh91oB8jqMoij1ywwNRZpmbDJmRTPFvWe2mwvl1d7FPw6zJFLV78NY29ycacOwX1338yfjfN/LPDEiy97J3zb7pHSjxtD8/e5v1x9ufT6fD+K99lsUqNpkwAjL2/eCyT9c0e9BEyXyNZkfbyMprmgAuUeHe7fIp3Yzp+9gxV6vW+OmlgB6DLhnBvH9aNHM6B/oGkxwLKmm7duM3Z3Zakl29rJ9GiMdZEfwt0A8esmUXWxIh19O6vdakOcf3uFc/smT4D1sIlLdc/rIUAUKTPg+st32OcM5y9jz7UTCHeWLdLxJ673cEsPqOQjnq9fwWHw06rfhX1WvkxTr+7luGMVYrxbs6RS0SdB+TaHrWCQuu0Tp34Lk7fBw7pU7b2Mn6FoqcbEZhCFAidKs7YL2qbTpPlqbFGsSlnTbnG0O3e86mVV
d237m1m2CS3pcJV9/9bGfunJMRpaSe5OMbNW9BqzSe6D9UnDROzpg13bp7ocU9VxWTmlVDbf3e62p2iqbL16ij8S8coBH1/eFXTrxONFqUXTmFhZn/beFs/cYe6g8cqhvo9/DieTZ611/e2/xaz1XZHHdjB0xTVvCHboh9pPasjeGIHeUMRzpN/GqfCUUXpyHg6AN0eOf2rRKtvmVVfizWZZBpsUcRaET3UG7+LxSwJrmN9silSf+DkdmD9foeyJilcPNC6u+97BP58uxy6xqbm72eNzVxlP42L+rZzBpqxXZ+TPFiNV+Cex126kQ+JxwTqm9WbxdpksNKVuCBSNbp84ocl+hDORXWtK8GlJqlnA/O6Z+286t5MOtNNnYP2lfyW/alMaIhNcYc7xH+HpSzzY+euIT3OhWB1GGAJ02yx99fl2bZRxnL/f2Hvys8DkdyDXv31i6Ujm3OQw93q9cfRRajrR31TwNT38ad+ZG4FWfLqI4Bk2h4HdTi6nKVZ2SM5dwr/JP9jnXz7Pv0mci4z5x2okbBJ5Y1MzwqvufjVo3AomkuKJL5TCO+8HAJPt4T1KvuO5kQb1+hSgz/BbP2fqEpt6XTWBIVtJ+pARhSYcK+0oPU8c5faP9ZvJDgtnimfGiWsMUTX+IosN5mHF6mb0DFhEZg7eQukOza9cTj/JxRL/we8kMHaJbuJ6gZ0fQoqx8n2rkfWiRKiuyNT7CvPpoHp40IWPJ+GK3aLpyAGLyBwU6NbcurgznTCAvXRLM3O2rO+EtKNxbe1DuxA+LYkZduZ/1tXc3XJQIo2K6Lni9PYh3G1ZyTyQdSETjK9l7UyFx7BLMbR8/O5c4bA4pij5MwmRMTNao0GFJMGsm8V0+Z9WS9kinGJx3FZauOW4ei3UwjO7JYX1FMsG3za8cdzzFgUyX516QZfYaazEzTbrPeNp8DvHLoq//wwyWzC/VfT46ia0XGdWbkvXdr9v+7C1XYPfUYLRf97RhjT92u5+PiHAH1W5Fz8s9edVUpDJXkrDWQyj32KJ/sOBnQ17UR4TWFfPnOsoFL1vmyfjRRLt0ZVdcKKT49L66Kt7f6cGjQiW/03AlwG41UKZKVxrGetM+CHHA+NAZd/FTzXGLifCDqtkinzHBLH/WjZ6aHmHMQ+iFCEc0mG+6FSZpesk+qrds9ZjEr9djIzzbhcx7Dedbn2OraXwkwSIrY+n8MUimGdq7Fs9eMRIxfJS4cI8ZwxWb37MDXn7m+dKkQfZwYDVvII/9hgZ2mGj3FW7Ivjruvq9kR0Hn9eg5KOxAcUyulFlDxlVguPXpCHrC6DzybtZ1OmxSQ84WV61dZSBJnbxoC0954wMJi6E1SZbAguY7czvbmTT3kLkv/OO3bOe3DUxMy/x2cLRSGv0rY/pw9QF0J0swr8tczYy+LWkZdjANOlpOd6DXCy/GojKpn91DfqUXMCd+nrcA/AeLML83uPy28tFLwx0dqOJg3ElRp2qmO30hAuILQmqsklwQ660XoYpJHKUm9+4D+XP+wp35LQNJWCaqmWfpq4vsHh0ijDUqX72OmYMjdbC24VHFGrLFpyAoaWsubDm86dEs8qA5Vr8s8zka1udYsaKc+jEhQxzEXENbOqk8x3X5LetRLWVOJRYwaAWN2fdeRpjKbm2XrujuxrvQPXXej04kiyIpvULxODO8b7Ia+DrNt7jymcX6ciE9yli/DlcWr7hel0RqMDiu+nJKihTElqQ+BMS6jmCNqul/3u6PL0Csi8i3taE+RK/A9P0yGDxoGlul9qwsRRa/q0EioWimFZIVLwRyddPS7XXCPRWseRth8H50+rqkzUx4soqwgQpZaljucrOb9kpbdgUb1fH6LNjXNtmZerF4tFukmviIfXJ5b27B0tL9gONWjzyuL6UO/jPLQWcg2z07IHPc+AjVYbH7ydOeo+/AzpLTPvqg8SCzTGI20QEftqIv+KPikWXASNvmA675KkkD5Q9BD9q2NJ6yHJv3iKbzbLQupz0j8IdAWebRlrAfqR0N5/TzlW0lxQqvlTsiV6tGwnG5iuXTwi6GMU0D2NVwuN3JBJ9juPLhYNTJyr8ClvrBikZdyRbzeF49ocyhRPaYoIdzZdV0DrcdlDjm6VQeNz4PlhXteCs/+KJnb+cJPyc9N/KdWLXN8psFd9arb+zQp5Bapam/PPzTj7FzDg1yRQLB3gpi7JqRWVL549gsZRDV4+WL7iAZmNOGK3YHceVKdla5P83MtrOa9KKfM2jWI0V5db+fu+3AtFJzy632LzxloQan5LhsZz92qyX2WVa7HduFreWIUxrF+ZifLl3G19MWpTDEIH6EGfO2JuVStCS/q3fXA+sa1lP+MWyhMLHeCRPKHM7FYEZEkOMa3TijlkCld2XCRXSsfVfnAyzrC13fs4X8V7aTx/6mMHbQHy81Ms2ZQFFWQ3+2R/2UiluohZZBNCYSMcQtwfUIXEtCoaiEguWL9Alv9OgnaFGf4fn5SkdQg6bEOp1mghqG1i6UwkeGp2uWTbx9iK+dWSmZKWOnkf6dniplGfzQmK7N2nZpB18K+fghpchQaFvWKX88ibLxlKEqVs1mJVjHbUnQPp04v1AoG+If7CfgtqU5b1dRscKe8ZNVtUYKxZCduYOj61xp4mVraBfqTXhauPizhEVSccW5unC6RLHzWYK4bmUOlmA7f2Al6f+skkC8kX/BekA7OdlJGKeG4skQFCl7zqRA4jUKpudDCh3RMs1DCplGkTnkDGHheHS7/jO2wJ82jx9IaSHz6166y00TlRjD1ipECRcm/sVf7gMTXMZZkbo1Xw3knohgK/kNpfgPy9O/qRqfn4qhWXGNv5aovNPiIHn96knr8l6UkH1euo8ge41U5Nd77z3/OSMzomxsQYKi674SXh+LyW7pVc/f9Km+59zi2bnk7pUeMcp9R3P0ht4p9dREJbuvp8B7fz8oZNiKuEH6IcENDfd3NsOavl0Ok/udMSJluIm+YuNhuhcJGBiSh5NtLlOY4e+mssVTxN5huKkx7S+jHijYpBSSO449JBXTU6HhZg7u7rjfXy+J4WfMlkXRYmf7cSVBjKbgNiXbcagsaN5/t3S3ybfhNvbVqGzdpqiJKsZEXsz286ix2bwegmc2THfb/jwe/dOxH9L7EVbl997Wwasata1YyiigSQzcsLjFsDsmONp2IhpBDcYOkpLPt7/bMi1JJDYXVyVAdK4g2wkWYpw75kOTSEGBu5zHNL0QiPMK0wXW/Xn5/dAUcLXRzw/MijGUcsfuMrYwB/cUG3UkVT5l85WWKpS6K2xXtFZ8L0KkrqFVtVEY251jU/L+zk+UFkuQuaBTx3Cu+1QKbkeTkkOHORt/LJS3QHbPVYa/8wPiWNaGpoI2/fJhn4vC2Vn7tmTTwb7TIj3lsITmCvar1Cusanv67Wymod3CEExv4p5aEs0T7YjAabKMGdIUKHrVZksMK5t54virMXNkp+PHN6CvPu8WWlB2akNna+piKApxvtLtjpKu7JpLpddwpuwVMuszDW3X+xLr3aaMq0tEryyuIF
7FQLXdxlfsRgdIFo3pfCifIbrugDdY2mUr+ilMFT4bM9sIvuUFKBTcxHd7RZf8W+Ld4VnV5BV1n9r06FuPe6yPRIJEaJi16v0tCKafeApL7JFxHZ1ivTjJrn6gsTjDmY/wa+Dbl/WtzKMlkaxU68W8anFbjhpqh9DsO2u3r34dHQ5EnotjVpqZHvZBQt4M+XJtPwiu9qB/tustyjV/xWhT866yxfj+XK2t0xJD2maOEVhS1DLnplHC2vQS3XuHcFeML2+IkK3fd7VE1wmifc40lTcKBL2uD1gm9hE9I1bw7aw8ymoX6ecwwUQUNkahVKj39iiL8gbHzLmCjU1zy56NZoa1cvUlqHW9M4m173WR2CBCt9DgJM/UNZMXNSdVdQXtVJ1IIfHkOUW94rCOn0ebIZCGR2lapt+K2s4V8zwqmn7yakVjUqG5VzsdlY5zrzrGdaYjpsXPOx8KEW8gIWG9SGHVyqcH/pvjS+QKX9VsM8xd2tkqYKb8vv2TOBj1jhdvJDWIufJ6O+UdFkpfQSyTuzbTAhf6/c/8y/dhRgUSyfIWmcx8mBEzNDNzt1Nvlrb6zdft3vXnbURbTh50sKMPWZJOVHU02iJy9R8qIr54fKVfCmkdmP/28ZF+yobtl9f1XanJh5bNtywPe9FApvZ80m7wOfTw43251ae+gRU2z10Qi6teGzoXfHz8e089iuw12CXXE8rEAEa31+nR901J93FvTU2pvP/rs7OzijpmK0ono8wwH591vJRl3Un1HpfYKAwUGLm6+yuxSd1PkuctbSjt8UorJNbMhEBx+h10qT6/3MERwsa9Uxcq8O/h+qtTGtOfdzioEDnzx4LHjOnqsKE+fDu6wvAz+zSfjO5/UA19p0jChVCq1d93gHwrLvaM1tSb+hM4g3GHZ4nATcnRRMuV4ccm1PX6ECibAulvo7+GUq2wOCSBzys4qkYE087ND868quPAiAyxm1p59Hf/kcKmoIdbgREoIZ/xKnr0jOOPgdnvKaNWHVtUaPdnlD9CyHvr9xfnAow/F4VVmCPXI2VAnw1150wvaVHUx6BeIeZb8Qj5QfJyDVkQBt91pJBxOPAW3l7Rx9o+DOX1bWm57df1Z4rszk7y57d7ZdDY6fKsiuZgjF4S01yLCJD3M9dXw21PKb6Kpbn8nL5PgwrfGjRYidKzLW660pvveC5xv6vILdnsUa3MUjdLu1vEFbb090oq7e26JpfZQ+lPmNdOejm9iPV2XlOut/BOQgvZyX0VfqbMskyv4emJ1nPrs4+WptyiYA8SNXp2wr1XzAk83MqP3bzTqSWZiA6ZbEu3JTSAJ6ddYf77EMOEoV3pyogxP+dNKac3N84lK+o/Y7DeveSw3+Yq1PTrK2WO55kTXfwJiaDkWzecGRei4XFrdiy+eQ1X1Of9UOSAfUIf00i0l8RqYtXXSnzdVe4O651DtbEejNj7yc10HueKHQruZObiK6Cb8h69XpOTGbyQBbbWaXWO2T11D8AjzYve81Tsb2GhyU9Kbm+TpUIwHomyidHM2KT6RliFnw1Wq8Fw4kMvqVEYO98q6RSXoFvLgVJYKEkhqKGhZnNXsrUGj0dCtG2/776i/DyTRVngRwcjuJjbmauaO+edi15DUJITDebsvRxKyTLkk/8wyJ2aaNaXpfATVEcBddChqSY3+X0F9R+9ajixXs14VuHyga21Gjy3oET1W82JBOqGkvB5lbAznb0oCLn4CorZDF4wJTOf8CYYBPSTaW/CB9lwrjn9dSZhFhIe84Remcn6ChPlGYwoLZKJHfQIhLpFh4y+y5vVcceGx+MWkz84Zcz563XI13FiMDZhewN1L3NqkMi2qINk7mz0NXsEN9jbg+FOuNTtiDKvDritom5on59xfDpm+viazrrSZNu92mcet90ZHHSDh9FMwBBhsKS2ToRQ4DIs9G0fx3j69R8dytl+i3Cv79v7rouIphlmQlN6L3S82494UAxPKhy5YFk7tXSKt6e1n45R6b4IcwMdHsKCx0oS0OxxZfarwl6ZKMe9WR2C0qaKn8KVQOb3hMR+aAFXt1281NHJPSScMDyFxW12ga6oWTK/S2rSOxkc44xl4d26/Bt2j4LO1/FVibiGpKIJe1fwaKKa72ehMkNM22Hvu9fZJ75LFbIuvrn9uYZ884v6kg3r0Gh+Qv0OPfO8ajWHD+fMfa5vn6tEUxDGIJdEeq0kbG34G55tpCaie2ks1jBEfDlvYcOUJSlFFVF0q2WuHe88GMzFr0h18hwZ8c1aklzEMe2Je/JffEvD1S5Ro41biY/T2U6fUCeJaJC7KRtbFFTY0GLh342a2AIRWy+kwyv/Hq8WM99ZzXi2gcY2JGqPQ/b+1eubGveh4LgId9tsxjTWtnumVt6vHEJbs225AjtdrDKtJsXz9WMm5BirTuVSItxLA0T9cArdNi1fkQi46Rj/yX1WF2Fj1zPhFx6TltDAuuQh8GBdw3/MZg48QhFJMR2caz5yhZNsw4Vs9n6MrEVRTnUVx4hHWnxzRQxX8cBa0h/FkEdSfMu5Wv7mC9Z6aVdifhXf5/ml4BJxxpuk+VqC+hqHw/s8nxIC4mOxpukiV7H/5UzWGimPCq7iW8P5VbxwJmmNlscE/T5QbO64SHsVl9d53mDBht9bDcnqyXUbSenf513XfYdjcXMHJBSuSSVL4JWP7+S/VByF8L8XRxHof4qj6D/RRtH/DWn08SeWRu3677aNdlKt1Llb+fwv9VP9UUe3bWDoTWXIX6qp8utHf76Ifu12lKiqh0HkhncFED1mcGiSrkefdLGA93F0pp5ok6xXcsjSjyyFXH8olSRyIi16FHlV7rGQNca8KmNjImJhXuVNQQkUZJOVLGYsKXK3gGlOBSqRJsvyDQnJV6xNiu51WHl9Hk3/cdhVia8+7Trsapd6R7L0ddjVEW0aW4o2TdtkzpokS105wnUYch3GsXp1H1aJIultGXsvh2Bs0uZMkyVZOaRrE5Kuqs/qtMjS5HXZpPXrsEqwSJq6DmNZ+UfVpShqJDV6uJ4yzYlkdIca5X8TQJPbd4X72ajw7l7XZCXweV6X19+2qH5sP6bLk/CHDe87SR6Y1p/wzW7/OJXc4a7y8PM73EPR+I29kyp4vtcfAeme/IIeR/ZFeJJWRKqhYJ50iPs6BCo4tHwNCPxOkrl1XWHCw0alEp6uSpYjJWb8j1j6QiXiD//dewyqazRYqOKWliPTLmy3gjFcFm/pAq+RNZgO5NYW3ssDy59/hYyeyBWHp9+AIZ9LMA7FLyc0jiAI6HPzT6/JYh/sjFaG8bPZNpfboZFc1eeQQ6Xw+xMl9P77vRvxPQvznD0UPfsQEkaTef36DqGqTewKwByYeq8/n70v310/d7O06auFZfdM4PsaFbCBS4eRDqXJpCPJ60TCJGKm4YpAH/Ezwl+6uZO0o0gqpYVEKeMoMbLKAcnbGwtVP6hokzztdrMD9WjhrpO5RTykhh/vZZJtjcJpeLiK3f4sKdOEZbGXx26dUnfrN69lXafuluHsc7fbioY4vVTJurHJoHxXR
VdcDR7Pr2ziXl82QvV/yet6mdTdtx4F9dHFTOvm/eeOXa+Hi62cLDrSFey4am1xsy4WohQxT5y/n78wpNZro27TokGyLAoVnyJ0z0FtvHwL8INwJYQmH1abd3CvpLQj3xTriZGiXaypN1XRjYwtslOw5c8GGeGnxGuTQpqta+40q/bEnHuoEWJ/QrOgZZJOhm+Rql34VDNyiCMX4zJLL0W0pdAxRYpgdHeD5eTnkjok0pCyH942C+hQTRZ/+myRVu1WtvEgFw/G9wsY1T25mfeg8HZlCs5WFlllN+zrLeETtFMPlf5y5OeKjsgrdkfzu6nAPfFAHWg2HaUIEnYjprGWkZ0jkfeblQPro77Sqw7IBnq4Qpzh09h5vVdNsVUx3zYXVK8n5KImZYxkn0WBY99DNpK7nvm7rnb3dGN4xK/OFiW0cA7Mm+gJXjhYXYlWXgQKhwdHbkknQns/pFDXvp72vVvN9EBWhw3tPFK/RLe0fR3rjx1z3ddde2w0PqNJb0SkYE+oqefUHxDs9QgD+kNnkmyUB42Eih6/WGS7YvVYHY58kfG54oxGGBTzNmu860sB1RVleNQIsTq2LZk4lI7NobRJiTwttfCnFt/ksLe5fQhfD6kqTufzrawQRlEyLYSGt747TzVOOzKNjy8cY/I5mXXliWxcFOQBDzZl7gJHv20S2qvk0z9gb7N2+mP7ohY6pmUfgXQrOEhh47hjoiMVVZ22Xz1jWvjFNcTxRsV7FE6wonzOqZ7tw059sAX+0N/D1yYZEREvJgPTk6nQiCPBPg5d9Le0/Bxb+MzY59GnVUlwEOeb6t2DCVf46TEMlCshomKP4qgHepUMhoa5EFmmUJzvXaquE1g/ejPNodd0JCp0ytekUVtQlm15ITW7+dp5QkNmJawn1tbfybAWdcG9Kl6MsGilVzTHdpq/Q0RN9Fw59s2gLwUD1rRdO1TqVJdnqqP0k4JmDIaN58RUQi/OXKeeK25Sq9lNmVJyPTEWPLfLc11JdF3zIRJLL/1K3SPFXfPnWCiv4u52oap6seMtEWJF41N3i4uE99CZ7cC+T/vu/VrnTjbyUEko18I+/N3UlyDN8WiWSc3n8Cl/eB8Yf2yMv1DpShZ0MVb0Z13WJYjubonk7IhjgsY2BYn/cfwMt2/wcuVJFtjxsTDV1pOYUaxtFEfJ6GP7fVCPx9egwnP3kmRUrNUzsqVbpWAwlqMx4hWepBWxDSY/8YSywzEwzkp8KujXlbjn443WA5S19sl1cykcgbO86zvWNDCXk02ey2xBFBI/zn3LVtmHkJNSAPtze/Waikh7UFa+ME4qm6oppaaHaT1EclmY3eGtQERYP+MdaaHX47M8azSyn6JYDso90iasGbP5mG7LNfRpc2lC7WqIqZle18rv8d3xyhSdfClk4gfa98sN35lhEqO79ijm6bKzMFsiXCSzaD6SVpyuCwmsSPrxzjkOkWl+/IQP54qOOKLbrVktOUpvhjUti2xwSo/zuPbTzIfcummKCqzLokflHh11Zaac5zX3xNtT/HjReYaubda9LDF8JPDsy/N8OzoGObXLYBKQRO0+/sTb71H3yaFYHOUVz5OIGC84VX/W+bOvuPlOzUnXg5pNh8S1cjHbltgKmqVNrvUpIvJOmlrbrGz2PVpmfIrbIXhk4PfL3ZdY3zaxu59vhWwGQ5b09BWvMfQtlvwxgrjr6uVmbFRmpVO29HwyrLcsfkTZiEVSLxflUVW9kVtXb9gOszgPTyu+NTyzwtzlMZKsmZAfzKyLH2W/zZ7aszp6XAnd6Hh2d7/oYP3JlTT7sQS05/jQOSonPKiX+omxVad9dXnbFpPmcEJzLJ9Ca8pmZGXuaRtTejaflaeHm505VdpSC5FQ21BSz6AhbGypDkr33qpNScz49D6HjCNdkItQSzVtFELTsynn4N3lm+sbL4c+KprStWxhWJVsLYT4WcKsQOcnGTU6pWNYC2pyV/xdYOn8lRdlHU6sfCz60eJYiE6mSstMvx7Sol41g7t6QzK+esOtIh/blUrq906cYxmHY51aFN2v3CCFA+lkiiLnn3nF/PPxcpF6n2kYS1y1Ka4+WeZyiuGrWQ2I8Huw3zb4sCHpnsZrWax4TPQDo3XZPbaK7igmhL76jjPS19dnKPmmdek8tJ3MdsH8kne4gb1ZS2A/dnz1j7Jd5rgoFB7JRqb0JWBnTaouh2VIj8Ofjeok0UJkNWDSYE8ojy7u2O/cJyZkYMv+iqKiD+KCd+Jcm2KcZrOMSFf0XBZNtz5JHqdj7dntMqux9qZjue1cjetsWnMK2tMwblLKN1HfecQfQ5tfoTljnFpTlivWtvzSDym1YeQoaMMh33mnJSKRuyrxeDT0Z9kx9I5XtM3QCCFOeUeYQ1Mi0EzTykBEzuQ0GN2HSEsiQ3cQ4ccquYGZQL9CU+rF61W3l8+iE0W567T+tVKWdjrdT5zNYZD8dai8E2Ebn9kl/UBu7MfKzzdSXksX6eaS78wy1syzVe7EZyJ9lo0izpWdtbdXsFaMal7XOBEJUVerTkmbHHrX9tjnede5Dc20rB2xsIihuhBfBm3ldE3hdNUMEXLOsaBoZb49jSnbuYZG2WzfcUq68rwwrXBEEjPDrR8xpVh2Th74+0DGBO55ZqfHenGnvv5INiyykdCI5D0T3IUV4+WkZhwG7d5dQlLZtORON7xRm2SnjB/NfFG59RWQQThZnc2xxL7n1uqQC++x9TD0ru0/yAJRUfc1YnU+qOEQAkIwQ9i04e9hxkmY0EIG9f3Df+DBR9drpQ29IdOSxKiXrNQoeGQsjIDurA41gw5bMX3nEsdu0Ik1XZJF7Wo6eOpxdZWu5/r+xztmF+2rnsTrordd+2FuOi5tOec2dFkoYSqWvMPydPAl6F2LxkksSnStrjrbbFh8QwoVdz6Ez1wcxxVlWa+IIbGtp0IvurM0ky+1qrH9j32KYTwgT+QRSgntqbmJkC92sUxrNO/nhWm1HD5iTV3xpXFMgtLlrNUY9HCuQGds33Vqwml1PnxbM6poaArBd0M+PXSV2m0u6j6SI960ZBilcbM0XRhDIgMdMhbQxqLBp9HGYnOPYDyKr8tuGZOmVYM3xUvvLbaobEJvKTUWcsfo6znxIoorNyuzihMZvA8S5I5jLGIs6433SPuw9qIdkDoVSKgQA0UtTf+4Xs31+T82aLrEPtzZvALcwkaXUeeJO9TINDMSK/OHueBj5vN1/QeVU5gSRhFLCWy1hmlBVdh49Nsifd9Lxo09JZEMhjlz4Lb8dYw8DGERNXD85r0evwYD0hQ2B32YmU/n05WukUxIeOxssptII33gtAO/LMwTDZqQrAiOZiT96uh/Nnm7G1I9mea+McJ+y0HQPeN5Lwkcsvl5rx1heaV6RJSp4a94uftB58dUelQV+lOTQ7aYH2F+CUyQxnPHqvGNam99+rlDSzaNJ4YW5ab4CEK4hVPSViuxP9J6T74aFQsIA1mKwC2JxFX3WtEr8alrLzwtKq3YMqdM0jinOL7Ca1t79GzyjgIU1WZKfBE3bk3Ruug2
SFqIiCuEVsHY0vudOiEB8x3SvWjqfuSWZt4WwWTwmdznF8/zsCmcNdhZ0tz49GpR78ihvKI6v9vO2MgVI8JcSZnrWuumjZ9VG7eOZSKEklvppNjPzU3zx3ttzTtoTMZXDsHmU8IW8hz2A8ry7rTwSeXa6lOc1R/7H1EvWP7o9rM2u4x88Njmme9IvCNHI/VrZpQ8wZRCYVzQO7OnGFdmzs9nTIXYH9vvRx3KEteShtrakuWN/ONgmjuV4mIrPBJcb9JlrYhK7X195IgsxbjYrJo0w2Yp69ycXBnpvdbqrenRdxuhLfE+0wRlVNQwpofqxl7tDcV2AvTlELOAIE/1Ij1kWcFfFKUiUIO0NFIUVOHZVcjuQ8T7saMdajEQ/spgw0N6nf10XP555GnbKmrhs9WPeYgJUcdXkJ5Z+kHYM3jl/SQMVyO9L7YxY6JLGfXIdFgn38tLnIWNhacujuUOvx8utdoP3/nE673s3XmJ3CPU/Kx6fflh5Gd+qlnTXrUSPQjUL/l5qYhDfsx1VND+M5X49TCfEdeI6GZtiUNv2rk2x5XUzVvlbdUBm76kKB3ldxitSZYrCRRG1swn/sgKp0UNp4/IuBniNd627PIrOhNKb2d9P6EZBXaMjIe7bq7oH+NbCdceNo1kUiHn2vUtSMVCV1Rab7+iUnn7TKw7rumJ2PYiifPicl9zuyLU1aqqb4QevBrFvnjFiIOUZfzeqxjHd8+H+71CdnRU3/MrzuMSGWYX7kx+6N4vaphN6ENDa+4PV1tD42JrUmGmVbX0cqi0VQjKnl/xYOTsHESTlSJUHZDdcPVWMPm1gvyrZ88nISe+mr9ZpZG6njZYA5+97Xv1zd3AGJWaUXdbRyX4mX8JgqVdvsF7WjLt6rBeAwpD1yhssx9C1kleLb7JvX82BZ1rAjdiz3+fL7Un/C9m1bOEMp5ql8ZbdA92ebvZJs4dGhppOoTZOy8er+zqy8Z3Oh+p895UOB4rW9wPi+GXjdltjfDEOSSPfQ+e3XvUghYJPsbLpfWdlc+2aiSyrfJ+Tmf6xI6JjlBv5o24zFgEd5dzIxt81o7FNkmrfgiwMy+mW0x8weI+QnfSa4gS0iAmbq265IpmVGI2IsZZSLtMoxJjzt5OD1KGsUAXzut1rj/bdtefhu/E4rpYt0tg1bEY8nHqYWvJHAafNxd9DweWQG6jpuKxLaVYt9XbnG3ZaDg7RSY0K20NLdKSUQSom5NuTRnPYMli7ShGGq5K8XQa3GSPtwJfWcAdNm6OoX2qmH46ErpnEYHF8WgSHJHXM7IUunt8Q7OkC+J9EM41EvjfTakKEoN8aKxDDZbLZB99UcILWRkeCs+HsH9+uopRC7Nk2T1VIREnYzHovmZEz1roKWVDRvxZJhuLG2dvWLJPR3mG5SVb7XsnCUJmatbOTH3CdlfiLLuD4L3vDEoyH4NN5ep5txJmtOXjOo4t95cXlq4gHPQ1SJo7VLGi0ZFrz75ntsdMrn3AY+tqjq8t39XXqJp3+ld7B/DB98dNwHj7b6oIEeWOryBVuh6lQxYDK04vmleGHydyTN+89JPoLewSDEurMm6K3z0SvvDLlbZ+m8LJiwL36VTEtKBXAkc2nF9VjKs/+u9xI4XUi+gkxS2rgOa0d/ehR96h5pLcxKX2ixghjt7mU5xXTprbd6+5EoZhQt9vhaUtIlaQOlbXV3VS92TyS9baOyKnVjzmfyZ35o5e4UOARBO9uoyhztMxViG+UCw9vMIgJr1yy9VoqxglylZxC33Xbs2e7bnPrdFWmrdGtw2C36OdeziVQnZ4fGsXV1Zm4FkFR4dh+xSWataKidpDMgdMHyUOR5gXjmkoYjlRl05hRCpf9l6+3TZsbDoNE3iDSncnuyafLWA1q9LXJmT58LGhxU/oTRa23+93zb4VNomxaPleWA7dIKEscCMeG7cUBs648ieFeBANfPpt1Hn/WibyH2W/5z9hIv9lst/zTyz7ASISEJGAiAREJCAiAREJiEhARAIiEhCRgIgERCQgIgERCYhIQEQCIhIQkYCIBEQkICL/AohIQEQCIhIQkYCIBETkv5GIRH8nEYn8iaVRQEQCIhIQkYCIBEQkICIBEQmISEBEAiISEJGAiAREJCAiAREJiEhARAIiEhCRgIgERCQgIgERCYhIQET+x5/xbyQi/9kukf8y2e/1J5b9ABEJiEhARAIiEhCRgIgERCQgIgERCYhIQEQCIhIQkYCIBEQkICIBEQmISEBEAiISEJF/AUQkICIBEQmISEBEAiLyNxKRfzOH/UMaxdDfKI2if2JpFBCRgIgERCQgIgERCYhIQEQCIhIQkYCIBEQkICIBEQmISEBEAiISEJGAiAREJCAiAREJiEhARAIiEhCR//Fn/EYiEoH+jbIfBmQ/IPsB2Q/IfkD2A7IfkP2A7AdkPyD7AdkPyH5A9gOyH5D9gOwHZD8g+wHZD8h+QPYDsh+Q/YDsB2Q/IPv9557xG2U/FP172e+3GqHif2LZDxihAiNUYIQKjFCBESowQgVGqMAIFRihAiNUYIQKjFCBESowQgVGqMAIFRihAiNUYIQKjFCBEepfgBEqMEIFRqjACBUYoQIj1N9ohPqP0igK/UZplADSKJBGgTQKpFEgjQJpFEijQBoF0iiQRoE0CqRRII0CaRRIo0AaBdIokEaBNAqkUSCNAmkUSKNAGgXSKJBGgTQKpNF/izSK/EZp9An9iaVRYBYLzGKBWSwwiwVmscAsFpjFArNYYBYLzGKBWSwwiwVmscAsFpjFArNYYBYLzGKBWSwwiwVmscAsFpjFArNYYBb7H3/Gv9Es9vkbzWKf8J9Y9gNEJCAiAREJiEhARAIiEhCRgIgERCQgIgERCYhIQEQCIhIQkYCIBEQkICIBEQmISEBE/gUQkYCIBEQkICIBEQmIyN9IRP5NCv1DGoXx3yiNPv7E0iggIgERCYhIQEQCIhIQkYCIBEQkICIBEQmISEBEAiISEJGAiAREJCAiAREJiEhARAIiEhCRgIgERCQgIv/jz/iNROTfjE//HbLfE8h+QPYDsh+Q/YDsB2Q/IPsB2Q/IfkD2A7IfkP2A7AdkPyD7AdkPyH5A9gOyH5D9gOwHZD8g+wHZD8h+QPb7zz3jdxqhwq+/k/3Q3yn7IX9i2Q8YoQIjVGCECoxQgREqMEIFRqjACBUYoQIjVGCECoxQgREqMEIFRqjACBUYoQIjVGCECoxQgRHqX4ARKjBCBUaowAgVGKECI9TfaIT6j9Io9jv3iHz9E2kUba9voN7DnYSgxfJzm7/K5vFeRP/fdFP08x3+evD/M1fnVXql7ldfPu7/9c9//JTkrwXGNIxTlS/xfdF//G/66z/VIcvbvxbf0cg/fsxV9ut6/kfx31/6Pyi915u6y8ulux4KA9+K7B/6bHq9sXz6J8JtV2XZfTo15dc9xsnPR91C7ThU/fLz+F/UX153hYi/y/DrOfx89P9K/cD+oX68/mf9gOF/UkGQ/40Kgv5fK8j/eCG/Hvv/19phTNUa3/P+/1gRmFsF/7+93H9WPf5
//8KJv0ek//b3f3vh+P/O+75nnYb7Nfztf/wUj+Wvtvdk/w8=</diagram></mxfile>
|
2106.01425/main_diagram/main_diagram.pdf
ADDED
|
Binary file (34.6 kB).
|
|
|
2106.01425/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,52 @@
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
One of the main challenges in harnessing the power of big data is the fusion of knowledge from numerous decentralized organizations that may have proprietary data, models, and objective functions. Due to various ethical and regulatory constraints, it may not be feasible for decentralized organizations to centralize their data and fully collaborate to learn a shared model. Thus, a large-scale autonomous decentralized learning method that avoids exposing data, models, and objective functions may be of critical interest.
|
| 4 |
+
|
| 5 |
+
Cooperative learning may have various scientific and business applications [@roman2013features]. As illustrated in Figure [1](#fig:al){reference-type="ref" reference="fig:al"}, a medical institute may be helped by multiple clinical laboratories and pharmaceutical entities to improve clinical treatment and facilitate scientific research [@farrar2014effect; @lo2015sharing]. Financial organizations may collaborate with universities and insurance companies to predict loan default rates [@zhu2019study]. The organizations can match the correspondence with common identifiers such as user identification associated with the registration of different online platforms, timestamps associated with different clinics and health providers, and geo-locations associated with map-related traffic and agricultural data. With the help of our framework, they can form a community of shared interest to provide better Machine-Learning-as-a-Service (MLaaS) [@ribeiro2015mlaas; @DingInfo] without transmitting their private data, proprietary models, and objective functions.
|
| 6 |
+
|
| 7 |
+
<figure id="fig:al" data-latex-placement="tb">
|
| 8 |
+
<embed src="AL.pdf" style="width:80.0%" />
|
| 9 |
+
<figcaption>Decentralized organizations form a community of shared interest to provide better Machine-Learning-as-a-Service.</figcaption>
|
| 10 |
+
</figure>
|
| 11 |
+
|
| 12 |
+
The main idea of Gradient Assisted Learning (GAL) is outlined below. In the training stage, the organization to be assisted, denoted by Alice, will calculate a set of 'residuals' and broadcast these to other organizations. These residuals approximate the fastest direction of reducing the training loss in hindsight. Subsequently, other organizations will fit the residuals using their local data, models, and objective functions and send the fitted values back to Alice. Alice will then assign weights to each organization to best approximate the fastest direction of learning. Next, Alice will line-search for the optimal gradient assisted learning rate along the calculated direction of learning. The above procedure is repeated until Alice accomplishes sufficient learning. In the inference stage, other organizations will send their locally predicted values to Alice, who will then assemble them to generate the final prediction. We show that the number of assistance rounds needed to attain the centralized performance is often small (e.g., fewer than ten). This is appealing since GAL is primarily developed for large organizations with rich computation resources. A small number of interactions will reduce the communications and networking costs. Our main contributions are summarized below.
|
| 13 |
+
|
| 14 |
+
- We propose a Gradient Assisted Learning (GAL) algorithm that is suitable for large-scale autonomous decentralized learning and can effectively exploit task-relevant information preserved by vertically decentralized organizations. Our method enables simultaneous collaboration between multiple organizations without centralized sharing of a model, objective function, or data. Additionally, GAL does not need frequent synchronization of organizations. Moreover, GAL has low communication and networking costs. It typically requires fewer than ten rounds of assistance in our experiments.
|
| 15 |
+
|
| 16 |
+
- Interestingly, for the particular case of vertically distributed data, GAL generalizes the classical Gradient Boosting algorithms. We also provide asymptotic convergence analysis of the GAL algorithm.
|
| 17 |
+
|
| 18 |
+
- Our proposed framework can significantly outperform learning baselines and achieve near-oracle performance on various benchmark datasets while producing lower communications overhead compared with the state-of-the-art techniques.
|
| 19 |
+
|
| 20 |
+
# Method
|
| 21 |
+
|
| 22 |
+
We first introduce the derivation of the GAL algorithm from a functional gradient descent perspective. Then, we cast the algorithm into pseudocode and discuss each step. Consider the unrealistic case that Alice has all the data $x$ needed for a centralized supervised function $F: x \mapsto F(x)$. Recall that the goal of Alice is to minimize the population loss $\mathbb{E}_{p_{x,y}} L_1(y , F(x))$ over a data distribution $p_{x,y}$. If $p_{x,y}$ is known, starting with an initial guess $F^0(x)$, Alice would have performed a gradient descent step in the form of $$\begin{align}
|
| 23 |
+
F^1 &\leftarrow F^0 - \eta \cdot \frac{\partial }{\partial F} \mathbb{E}_{p_{x,y}} L_1(y , F(x)) \mid_{F=F^0} \nonumber \\
|
| 24 |
+
&= F^0 - \eta \cdot \mathbb{E}_{p_{x,y}} \frac{\partial }{\partial F} L_1(y , F(x)) \mid_{F=F^0} ,
|
| 25 |
+
\label{eq_2}
|
| 26 |
+
%\nonumber
|
| 27 |
+
\end{align}$$ where the equality holds under the standard regularity conditions of exchanging integration and differentiation. Note that the second term in ([\[eq_2\]](#eq_2){reference-type="ref" reference="eq_2"}) is a function on $\mathbb{R}^d$. However, because Alice only has access to her own data $x_1$, the expectation $\mathbb{E}_{p_{x,y}}$ cannot be realistically evaluated. Therefore, we need to approximate it with functions in a pre-specified function set. In other words, we will find $f$ from $\mathcal{F}_{M}$ that 'best' approximates $\mathbb{E}_{p_{x,y}} \frac{\partial }{\partial F} L_1(y , F(x))$. We will show that this is actionable without requiring the organizations to share proprietary data, models, and objective functions.
|
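As a standard illustration (ours, not additional notation from the paper): if Alice uses the squared loss $L_1(y, F(x)) = \tfrac{1}{2}\,(y - F(x))^2$, the gradient term in ([\[eq_2\]](#eq_2){reference-type="ref" reference="eq_2"}) is $F(x) - y$, so the negative gradient direction to be approximated is simply the ordinary residual, $$-\frac{\partial}{\partial F} L_1(y , F(x)) \Big|_{F=F^0} = y - F^0(x).$$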
| 28 |
+
|
| 29 |
+
Recall that $\mathcal{F}_m$ is the function set locally used by organization $m$, and $x_m$ is the correspondingly observed portion of $x$. The function class that we propose to approximate the second term in ([\[eq_2\]](#eq_2){reference-type="ref" reference="eq_2"}) is $$\begin{align}
|
| 30 |
+
\mathcal{F}_{M} =\biggl\{
|
| 31 |
+
&f: x \mapsto \sum_{m=1}^M w_m f_m(x_m), \forall f_m \in \mathcal{F}_m, \nonumber \\
|
| 32 |
+
x &\in \mathbb{R}^d, w \in P_M \biggr\}, \label{eq_3}
|
| 33 |
+
\end{align}$$ where $P_M = \{w \in \mathbb{R}^M: \sum_{m=1}^M w_m=1, w_m\geq 0\}$ denotes the probability simplex. The gradient assistance weights $w_m$'s are interpreted as the contributions of each organization at a particular greedy update step. The gradient assistance weights are constrained to sum to one to ensure that the function space is compact and that solutions exist.
|
| 34 |
+
|
| 35 |
+
<figure id="fig:gal" data-latex-placement="htbp">
|
| 36 |
+
<embed src="GAL.pdf" style="width:100.0%" />
|
| 37 |
+
<figcaption>Learning and Prediction Stages for Gradient Assisted Learning (GAL).</figcaption>
|
| 38 |
+
</figure>
|
| 39 |
+
|
| 40 |
+
We propose the following solution so that *each organization can operate on its own local data, model, and objective function*. Alice initializes with a startup model, denoted by $F^0(x) = F^0(x_1, y_1)$, based only on her local data and labels. At each round, Alice computes 'pseudo residuals' $r_1$, which approximate the negative gradient term in ([\[eq_2\]](#eq_2){reference-type="ref" reference="eq_2"}) evaluated at the current model, and broadcasts them to each organization $m,\,m=2, \cdots, M$, which will then fit a local model $f_m$ using $r_1$ as the target. Each organization will then send the fitted values from $f_m$ to Alice, who will train suitable gradient assistance weights $w_m$. Subsequently, Alice finds the $\eta$ in ([\[eq_2\]](#eq_2){reference-type="ref" reference="eq_2"}) that minimizes her current empirical risk. The above procedure is iterated for a finite number of rounds until Alice obtains satisfactory performance (e.g., on validation data). The validation is based on the same technique as the prediction stage described below. This training stage is described under the 'learning stage' of Algorithm [\[alg:gal\]](#alg:gal){reference-type="ref" reference="alg:gal"}. Note that the pseudocode is from the perspective of Alice, the service receiver. Each organization $m$ only needs to perform empirical risk minimization using the label $r_1^t$ sent by Alice at each round $t$.
|
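To make the learning stage concrete, the following is a minimal, self-contained sketch of one assistance round for a squared-loss regression task, written from Alice's perspective. It is our illustration rather than the paper's released implementation: the ridge regressors standing in for the local learners, the grid line search, and the helper names are placeholder choices, and in practice the residuals and fitted values would be exchanged over a network instead of in memory.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.linear_model import Ridge

def fit_simplex_weights(fitted, target):
    """Least-squares weights w constrained to the probability simplex P_M."""
    M = fitted.shape[1]
    objective = lambda w: np.mean((target - fitted @ w) ** 2)
    constraints = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]
    result = minimize(objective, x0=np.full(M, 1.0 / M),
                      bounds=[(0.0, 1.0)] * M,
                      constraints=constraints, method="SLSQP")
    return result.x

def gal_round(F_current, y, local_features, eta_grid=np.linspace(0.01, 2.0, 50)):
    """One GAL assistance round (squared loss), from Alice's perspective.

    F_current      : current model outputs F^t(x) on the training set
    y              : Alice's labels
    local_features : list of per-organization feature matrices x_m
    """
    # 1) Alice computes pseudo-residuals: the negative gradient of 1/2 (y - F)^2.
    residuals = y - F_current

    # 2) Each organization fits a local model f_m to the residuals and
    #    returns only its fitted values (no data or models are shared).
    local_models = [Ridge(alpha=1.0).fit(x_m, residuals) for x_m in local_features]
    fitted = np.column_stack([f.predict(x_m)
                              for f, x_m in zip(local_models, local_features)])

    # 3) Alice trains the gradient assistance weights w on the simplex.
    w = fit_simplex_weights(fitted, residuals)
    direction = fitted @ w

    # 4) Alice line-searches the gradient assisted learning rate eta.
    losses = [np.mean((y - (F_current + eta * direction)) ** 2) for eta in eta_grid]
    eta = eta_grid[int(np.argmin(losses))]

    # Alice keeps (local_models, w, eta) so that F^{t+1} can be evaluated on new data.
    return F_current + eta * direction, local_models, w, eta
```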
| 41 |
+
|
| 42 |
+
In the Prediction/Inference stage (given in Algorithm [\[alg:gal\]](#alg:gal){reference-type="ref" reference="alg:gal"}), other organizations send prediction results generated from their local models to Alice, who assembles them into a prediction $F^T(x)$ that implicitly operates on $x$, where $T$ is the number of iteration steps.
|
| 43 |
+
|
| 44 |
+
The idea of approximating functional derivatives with regularized functions was historically used to develop the seminal work of gradient boosting [@mason1999boosting; @friedman2001greedy]. When there is only one organization, the above method reduces to the standard gradient boosting algorithm.
|
| 45 |
+
|
| 46 |
+
Organizations in our learning framework form a shared community of interest. Each service-providing organization can provide end-to-end assistance for an organization without sharing anyone's proprietary data, models, and objective functions. In practice, the participating organizations may receive financial rewards from the one they assist. Moreover, every organization in this framework can provide its own task and seek help from others. As a result, all organizations become mutually beneficial to each other. We provide a realistic example in Figure [3](#fig:gal){reference-type="ref" reference="fig:gal"} to demonstrate each step of Algorithm [\[alg:gal\]](#alg:gal){reference-type="ref" reference="alg:gal"}. We elaborate on the learning and prediction procedures in the Appendix.
|
| 47 |
+
|
| 48 |
+
We also provide an asymptotic convergence analysis for a simplified and abstract version of the GAL algorithm, where the goal is to minimize a loss $f \mapsto \mathcal{L}(f)$ over a function class through step-wise function aggregations. Because of the greedy nature of GAL, we consider the function class to be the linear span of organization-specific $\mathcal{F}_m$. The following result states that GAL can produce a solution that attains the infimum of $\mathcal{L}(f)$. More technical details are included in the Appendix.
|
| 49 |
+
|
| 50 |
+
::: {#thm_main .theorem}
|
| 51 |
+
**Theorem 1**. *Assume that the loss (functional) $f \mapsto \mathcal{L}(f)$ is convex and differentiable on $\mathcal{F}$, the function $u \mapsto \mathcal{L}(f+u g)$ has an upper-bounded second-order derivative $\partial^2 \mathcal{L}(f+u g) / \partial u^2$ for all $f \in \textrm{span}(\mathcal{F}_1,\ldots,\mathcal{F}_M)$ and $g \in \cup_{m=1}^M \mathcal{F}_m$, and the ranges of learning rates $\{a_t\}_{t=1,2,\ldots}$ satisfy $\sum_{t=1}^{\infty} a_t = \infty$, $\sum_{t=1}^{\infty} a_t^2 < \infty$. Then, the GAL algorithm satisfies $\mathcal{L}(F^t) \rightarrow \inf_{f \in \textrm{span}(\mathcal{F}_1,\ldots,\mathcal{F}_M)} \mathcal{L}(f)$ as $t\rightarrow \infty$.*
|
| 52 |
+
:::
|
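As a standard side remark on the step-size condition (not an additional result of the paper): any schedule of the form $a_t = c/t$ with a constant $c > 0$ satisfies both requirements, since $$\sum_{t=1}^{\infty} \frac{c}{t} = \infty \quad \text{and} \quad \sum_{t=1}^{\infty} \frac{c^2}{t^2} = \frac{c^2 \pi^2}{6} < \infty.$$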
2106.13948/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
| 1 |
+
<mxfile host="app.diagrams.net" modified="2020-12-06T19:40:53.634Z" agent="5.0 (Windows)" etag="PJ6U2vpDIg-_LpF3a8Wm" version="13.10.9" type="google"><diagram id="FrB4kNl13vsM50-HC0W5" name="Models">7Vxbd6M2EP41ftwekLj5cdfJdnu6bfds2tPmqYcYxWYXIx9ZSez++gojbhIm8gKS7dh5CAwSFjPzzYw+CU/gbLX9mYTr5W84QskEWNF2Am8mANgWDNi/TLLLJa7v5oIFiSPeqBLcxf+hoieXPsUR2jQaUowTGq+bwjlOUzSnDVlICH5pNnvESfNb1+ECSYK7eZjI0r/jiC5zaeBalfwTihfL4ptti19ZhUVjLtgswwi/1ETwdgJnBGOaH622M5Rkyiv0kvf7eOBqOTCCUqrSIfw9TP/6FdDP9rdtDL5+/GTbyTtunQ3dFQ+MIvb8/BQTusQLnIbJbSX9QPBTGqHsrhY7q9p8xnjNhDYTfkOU7rgxwyeKmWhJVwm/irYx/ad2fJ/d6ieXn91s+Z33Jzt+ko8zG9zBx+eiDX4ic9TxzIC7UUgWiHa0m5ZGYt6N8ApRsmP9CEpCGj83xxFyN1uU7cquX3DMRggsDglYuA4HhB1YzVvk4+K9Knuyg9owKtHeykdYnD/9c5g88Uf4E22p5AZNI78sY4ru1uFery8M6U2DPsZJMsMJJvu+8NHN/ph8Qwn+jmpXvP0n64FTWpPnn9LMz4hQtO02tGyYogMQFFwg8qWGXy5a1qALrMOmbBjhWI1DS9KuRpBVwLqvXWkHGdMw2dU6Zaf39WtVt/3ZCOCcKoLThkOg8z0h4a7WYJ2hbtMBXk8EL2zGXLH9tLM9O8hHMCjCoX31t+H9DZyGv1nd/uZ3th/J38DV34b3twNJTtnfeqWsqVQkTICXsPF+iOJndrjIDvO6wbpN56zqJ0UD9n21Nh11hX18XfH4CObztroi8h481xumfoBWE0OBXD6UJUa9fvDGqh+KqVDDGn3UOoKSQEuRpVdLoM1nDYSlMsQ0AkwVb8YPMXYxT34txhyysZ4YUwxzMLc+iWhhHgi2PMM7R73agl5t43qFF6FXYXYMgGtar56ROG2Wg/IVQ7TtGA3R/rmY5gTSrqdq0r7MRT+TmmF8zZvHVTVP34l+P/NMr4hTNqmjalKjda7TUjZIc+lfVtk61LFT6LOj5h0gFMty8dFGzbtj1R7u1TalbVzBNrZh23hX25S28QTbQHkypNU2/tU2pW18wTaOYdwUuD19On7AUkCZVu87n9KyjONYne3HWcYBR/Fxx8MU2ZGL/DaYTj0fhgPxG870dT7O08pLH0UbnapaXZHmbKHj9Kq1raw+P7WKLGcLG6dVrUFb1aU/mwyYGYDqxD9we6aGfv7cNhVxwn9ZVnDO0bFFmtkxHS+Ck3DsE2BNivWh15cHjRKVoG2O8TYNBlUNZnSxALzFdRxblYMERrOLLVdLN4hv2zmYW052CdN5PbfoXcL0zYSmAVl7X+dCmWoC8nyTmCmGWcPMl+ydjiGLMdO0otMAktOy1c0PZByNRysa3blsKIV4g+xklzknIBh36jdvMfJ7Jp7Me1wafFxBw95Ugo9eVt7oiyam4KNagRnOJnIFdmlwEBZ3HU/OJnoXEM9mQnKoKgM6qzJVnqw3jtrzFQjEaAr05iuZgLs0gEIBoK5pgL7Ft49Hgo8zFWt5zeWeTNd9DdPv50kwiKWz60tA0UwwGF3TN4UU1UV93yhRXQyz5vmEef5m0Myha/H8dcfXumrjy5PIGcEbplvrlhVCeL0bNj8Hc9QeXh4CN0uXg2jZE9bSzWvZaKFsmYkuvuq80Wx08eV54x8perfEdO+Gc4q7MuzxEGBhJoicNggE4AF6AwUa3z4xCAQjb37SpFaFlxH1qhVchFoV3kXUq9aR95RpUqvCq4h61Wr0dbdj8mBJGJVbFLTvSghU86fZfVVB2z7BMfZVacIMFDBjel9VINNmF1ChiwHfuJZVdq+l0fvsBw/ZWYrTLDRF4WZZEi0/sLtS1k/t+d0OprAnl2ULZQwU9ZoHJ4nLkm4UFG+7HrrRD5Ni7LT65ca8efX7l/D2fw==</diagram></mxfile>
|
2106.13948/main_diagram/main_diagram.pdf
ADDED
|
Binary file (20.8 kB).
|
|
|
2106.13948/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,205 @@
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
With recent progress in the fields of artificial intelligence (AI) and robotics, intelligent agents are envisaged to cooperate or collaborate with humans in shared environments. Such agents are expected to understand semantic contexts of an environment, e.g., using visual information perceived using sensors, as well as auditory or textual information intended for humans, e.g., presented in natural language. With the goal of developing intelligent agents possessing these capabilities, embodied AI is a field that studies AI problems situated in a physical environment. Recently, the number of papers and datasets for tasks that require agents to use both vision and language understanding has increased markedly. In this article, we conduct a survey of recent works on these types of problems, which we refer to as \EVLP (\evlp) tasks. We aim to provide a bird's-eye view of current research on \evlp problems, addressing their main challenges and future directions. The main contributions of this article are the following:
|
| 4 |
+
|
| 5 |
+
- We propose a taxonomy that unifies a set of related subtasks of \evlp.
|
| 6 |
+
- We survey the current state-of-the-art techniques used in the \evlp family to provide a roadmap for researchers in the field.
|
| 7 |
+
- More importantly, we identify and discuss the remaining challenges, with an emphasis on solving real-world problems.
|
| 8 |
+
|
| 9 |
+
An ``agent'' in this article refers to an entity that can make decisions and take actions autonomously \parencite{wooldridge1995intelligent, castelfranchi1998modelling}. An embodied agent is situated in a physical or virtual environment which the agent navigates in and interacts with. An embodied agent is generally equipped with sensing capabilities, e.g., via visual or auditory sensing modalities. In this article, we focus on existing works that use visual perception and language understanding as the two major inputs for an embodied agent to make decisions for various tasks in its environment. We note that the topics covered here are mainly for a single agent in a static environment, i.e., \ann{Reviewer C, ``Claims'', item \#1} \rev{topics on multi-agent planning are outside the scope of this article}. \ann{Reviewer C, ``Claims'', item \#2} \rev{Moreover, as the majority of tasks discussed in this article are conducted through simulated environments, real-world physical challenges in robotics (e.g., visual affordance learning, proprioceptive control, system identification) are not considered.} \upd{\evlp problems have previously been studied in the fields of natural language processing, robotics, and computer vision \parencite{TowardsReasoning, boularias2015grounding, duvallet2016inferring}. While the focus of this survey is on contemporary works, we will discuss how concepts from classical approaches have inspired recent methodology and how they could be used for future directions, e.g., as in the use of mapping and exploration strategies, search and topological planning, and hierarchical task decomposition (Section ).}
|
| 10 |
+
|
| 11 |
+
This paper is tailored to accommodate a broad spectrum of reader backgrounds and perspectives, as it is positioned at the intersection of Computer Vision (CV), Natural Language Processing (NLP), and Robotics. For readers who are new to these topics, we provide in-depth coverage of existing works and methodologies; for readers with significant experience in one or more of these areas, we offer a taxonomy of the broader field of \EVLP in \Cref{ssec:taxonomy}, provide an analysis of core challenges, and discuss future directions in \Cref{sec:openchallenges}.
|
| 12 |
+
|
| 13 |
+
For readers who are less familiar with implementation and evaluation of embodied agents, it is recommended to read the sections in order, as each section builds upon previous sections. The rest of the article is organized as follows. In \Cref{sec:prob-def}, we formally define the class of \evlp problems discussed in this article and present a taxonomy of the field. After describing the key tasks that compose the \evlp family in \Cref{sec:tasks}, we present modeling approaches including commonly-used learning paradigms, architectural design choices, and techniques proposed to tackle \evlp-specific challenges. \Cref{sec:evaluation} presents the datasets and evaluation metrics currently used by the research community. Finally, \Cref{subsec:challenges} discusses several open challenges in the field.
|
| 14 |
+
|
| 15 |
+
We review existing survey articles on relevant topics, in order to provide readers with pointers to other papers on more specific topics and to clarify how this article differs from them.
|
| 16 |
+
|
| 17 |
+
- \textcite{IndoorEmbodiedAgentSurvey} discuss the datasets, methodologies, and common approaches to Vision Indoor Navigation (VIN) tasks. VIN tasks do not necessarily require language, are limited to navigation, and only occur in indoor environments. Our survey touches on planning tasks, of which navigation is a subset, that use language to establish the goal state. Also, \evlp tasks are not limited to indoor environments, but also include outdoor settings.
|
| 18 |
+
|
| 19 |
+
- \textcite{RLLanguageSurvey} focus on existing approaches of using language components in the context of reinforcement learning (RL). The authors divide the language-related problem into language-conditional RL and language-assisted RL. In contrast, our work briefly discusses different training paradigms commonly used for tasks in the \evlp domain, in \Cref{sec:approaches}.
|
| 20 |
+
|
| 21 |
+
- \textcite{MultiModalSurvey} cover popular tasks and approaches in multimodal research. Our survey, in comparison, specifically focuses on \evlp tasks, and provides more details, such as commonly used datasets, training paradigms, and challenges on this specific domain.
|
| 22 |
+
- \ann{Reviewer C, ``Related Works", item \#1} \rev{\textcite{bisk2020experience} explore the current research in language understanding and its shortcomings. To categorize the differences between human and machine understanding, the authors define the notion of a ``world scope.'' A worldscope defines the type of real world information that language models capture, such as physical relationships or social concepts. Five worldscopes are used to define a framework that discusses what aspects of language are harder to grasp and issues with the current approaches. We do not set out a general framework to identify the complexity of language. Instead, we survey the low-level research progress of \evlp tasks, discuss the role of language in planning tasks, and further point out possible improvements.}
|
| 23 |
+
|
| 24 |
+
- \textcite{tangiuchi2019survey} discuss techniques at the intersection of language and robotics research. Their work focuses heavily on the syntactic and semantic structures of language information, and how it can be tied to the low level robotics actions. Instead, our survey focuses primarily on the task level, as opposed to low-level actions, and is suitable for readers with no prior knowledge in the robotics field.
|
| 25 |
+
|
| 26 |
+
- \textcite{uppal2020emerging} provide discussion of Vision and Language problems, dividing Vision-language tasks into four categories: generation, classification, retrieval, and other tasks. Here, the authors only discuss one \evlp task; however, due to the increased popularity of embodied vision-language planning, we feel that the \evlp family warrants its own specific treatment and careful discussion. Moreover, \textcite{uppal2020emerging} further discuss changes in representation such as the use of transformers, fusion of multiple modalities, architectures, and evaluation metrics. While both surveys discuss changes in multimodal representation, ours focuses on \evlp research, providing a unified problem definition, taxonomy, and analysis of trends for this exciting field.
|
| 27 |
+
|
| 28 |
+
- provide a detailed analysis and framework for the rearrangement task, which involves using an embodied agent to change the environment from an initial state to a target state. The authors argue that the manipulation problem can be viewed as two-fold: an agent-centric problem and an environment-centric problem. They further analyze a more fine-grained split for each problem. Several evaluation metrics and existing testbeds are presented. While they do introduce a framework which includes one \evlp task, it is not the principal focus of that paper. We pursue a broader discussion of emerging trends and core challenges in the field.
|
| 29 |
+
|
| 30 |
+
# Method
|
| 31 |
+
|
| 32 |
+
In this article, we discuss a broad set of problems, related to an embodied agent’s ability to make planning decisions in physical environments.
|
| 33 |
+
We include both stateless problems such as question answering and sequential decision-making problems, such as navigation, under the umbrella of ``planning''. Note that stateless problems belong to the special case of single-step decision making. More generally, we refer to sequential decision-making problems as those where an agent must perform actions over a countable time-horizon, given an objective. In the context of navigation, for example, an agent may choose actions, according to a pre-defined action space, that enable it to transition from an initial state to a goal state. In the context of manipulation, the agent executes a series of actions, in order to effect a desired change in the environment. More formally, planning problems are defined as follows:
|
| 34 |
+
|
| 35 |
+
[Planning] Let $S$ denote a set of states; $A$, a set of actions. A planning problem is defined as a tuple of its states, actions, initial and goal states: $\Phi = \{S, A, s_{ini}, s_{goal}\}$, where $s_{ini}, s_{goal} \in S$ denote initial and goal states, respectively.
|
| 36 |
+
A solution $\psi \in \Psi_{\Phi}$ to planning problem $\Phi$ is a sequence of actions to take in each state, starting from an initial state to reach a goal state, $\psi = [s_{ini}, a_1, ..., a_T, s_{goal}]$, where $T$ is a finite time-step and $\Psi_{\Phi}$ is a set of possible solutions to $\Phi$.
|
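As a concrete reading of these definitions, the sketch below encodes the tuple $\Phi$ and a candidate solution $\psi$ as plain data structures. It is our own illustration: states and actions are strings, and checking admissibility against environment dynamics is left to a simulator, so nothing here reflects a specific benchmark's API.

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class PlanningProblem:
    """Phi = {S, A, s_ini, s_goal}: states, actions, initial and goal states."""
    states: Set[str]
    actions: Set[str]
    s_ini: str
    s_goal: str

@dataclass
class Solution:
    """psi = [s_ini, a_1, ..., a_T, s_goal]: an action sequence from s_ini to s_goal."""
    problem: PlanningProblem
    action_sequence: List[str] = field(default_factory=list)

    def is_well_formed(self) -> bool:
        # Checks only that actions are drawn from A; admissibility (actually
        # reaching s_goal under the environment dynamics) needs a simulator.
        return all(a in self.problem.actions for a in self.action_sequence)

# Example: a toy navigation problem with a discrete action space.
phi = PlanningProblem(
    states={"hallway", "kitchen", "living_room"},
    actions={"move_forward", "turn_left", "turn_right", "stop"},
    s_ini="hallway",
    s_goal="kitchen",
)
psi = Solution(phi, ["move_forward", "turn_left", "move_forward", "stop"])
assert psi.is_well_formed()
```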
| 37 |
+
|
| 38 |
+
\evlp problems require planning in partially-observable environments, i.e., the entire state space may not be known to the agent in advance. Instead, an agent needs to use vision and language inputs to estimate its current and goal states in order to accomplish high-level tasks in a physical environment. These inputs can be given at the start of a task, e.g., a task itself is described in natural language in the form of a question or instruction, or become available as an agent moves through the state space. Visual inputs are an example of such an input as they are primarily online information that can be perceived through sensing. Note this is not always the case, as they can also be provided as part of a task specification, e.g., a view from a goal location.
|
| 39 |
+
|
| 40 |
+
[Embodied Vision-Language Planning (\evlp)]
|
| 41 |
+
|
| 42 |
+
\ann{Reviewer B, item \#1 (``Definition 2'')} Let $V$ and $L$ denote sets of vision and language inputs available to an agent. Given an \evlp problem $\Phi$, state $s_t$ at time step $t$ can be defined in terms of vision and language inputs up to the current time step, such that, $s_t = f(v_{1},\dots,v_{t}, l_{1},\dots,l_{t})$ where $v_{t} \in V$ and $l_{t} \in L$. The objective here is to minimize the difference between an \rev{admissible}\footnote{\rev{Admissibility condition: a solution that is possible, under the constraints of the environment's transition dynamics as well as the agents' own system dynamics, which satisfies the task specification.}} solution $\psi \in \Psi_{\Phi}$ and a predicted one $\bar{\psi}$.
|
| 43 |
+
|
| 44 |
+
This definition broadly captures the crux of \evlp problems. A customized definition would be needed for each specific task where additional constraints or assumptions are added to focus on particular subareas of this general problem. For instance, Vision-Language Navigation (VLN) is a natural language direction following problem in an unknown environment, which can be defined as a planning problem where an agent is given an initial state and a solution (or a sequence of actions) represented in natural language, and is equipped with visual perception, e.g., first-person view images.
|
| 45 |
+
|
| 46 |
+
We propose a taxonomy of current \evlp research, illustrated in , around which the rest of the paper is organized. The taxonomy subdivides the field into three branches: tasks, approaches, and evaluation methods. The Tasks branch (corresponding to \hyperref[sec:tasks]{Section }) proposes a framework to classify existing tasks and to serve as a basis for distinguishing new ones.
|
| 47 |
+
|
| 48 |
+
The Approaches branch (\hyperref[sec:approaches]{Section }) touches on the learning paradigms, common architectures used for the different tasks, as well as common tricks used to improve performance. The technical challenges that underlie all tasks are discussed in \hyperref[sec:common-tech]{Section }.
|
| 49 |
+
|
| 50 |
+
The right-most branch of the taxonomy, in Figure , discusses task Evaluation Methodology (\hyperref[sec:evaluation]{Section }), which is subdivided into two parts: metrics and environments. The metrics subsection references many of the common metrics and their formul\ae, used throughout \evlp tasks, while the environments subsection presents the different simulators and datasets currently used.
|
| 51 |
+
|
| 52 |
+
[!tp]
|
| 53 |
+
\centering
|
| 54 |
+
\includegraphics[width=\textwidth,keepaspectratio]{images/Taxonomy-V6-JF.pdf}
|
| 55 |
+
\caption{\small Taxonomy of \EVLP, aligned with our paper organisation.}
|
| 56 |
+
|
| 57 |
+
\ann{Reviewer B, item \#2 (``Missing citations'')} Many \evlp tasks have been proposed, with each task focusing on different technical challenges and reasoning requirements for agents. Tasks vary on the basis of the action space (types and number of actions possible), the reasoning modes required (e.g., instruction-following, versus exploration and information-gathering), and whether or not the task requires dialogue with another agent. In this survey, we only include tasks where clearly-defined datasets and challenges exist for evaluation benchmarking, i.e., Vision Language Navigation (VLN) \parencite{R2R,misra2018mapping,StreetNav,R4R,ku2020roomacrossroom}, Vision and Dialogue History Navigation (VDN) \parencite{devries2018talk,nguyen2019hanna,CVDN}, \rev{Embodied Question Answering (EQA) \parencite{EQA,eqa_matterport}}, Embodied Object Referral (EOR) \parencite{qi2020reverie,TouchDown}, and Embodied Goal-directed Manipulation (EGM) \parencite{ALFRED,ArraMon,CerealBar}. In the following sub-sections, we compare and contrast these task families, with the hope that readers gain a solid understanding of the differences in the problem formulations. In this manner, new datasets can be contextualized by existing tasks, and new task definitions can be later positioned alongside those mentioned here. We summarize the attributes for existing tasks in \Cref{tab:CompVLN}. We further refer the readers to \Cref{ssec:datasets} for more information about the datasets and simulation environments pertaining to each task.
|
| 58 |
+
|
| 59 |
+
Vision-Language Navigation (VLN) requires an agent to navigate to a goal location in an environment following an instruction $L$. A problem in VLN can be formulated as $\Phi_{VLN}=\{S, A, s_1, s_{goal}\}$, where a solution or a path $\psi_{\Phi_{VLN}} = \{s_1, a_1, \dots, a_{T}, s_{goal}\}$ exists, such that each state $s_{t} \in S, t \in [1, T]$ is associated with a physical location in the environment leading to the target location. The action space $A$ available to the agent consists of navigation actions between physical states, and a stop action which determines the end of a solution. Navigation actions can be discrete, e.g., turn\_left, turn\_right or move\_forward \parencite{R2R, VLN-CE}, as well as continuous \parencite{RoboVLN}. Finally, the goal of the agent is to predict a solution $\bar{\psi}_{\Phi_{VLN}}$ consisting of a sequence of actions $a_t \in A, t \in [1, T]$ that closest align to the instruction, and thus, to the true solution $\psi_{\Phi_{VLN}}$.
|
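A schematic episode loop makes the discrete action space and the role of the stop action explicit. The `env` and `agent` interfaces below are hypothetical placeholders (loosely inspired by discrete-action settings such as R2R), not the API of any particular simulator.

```python
def rollout(agent, env, instruction, max_steps=80):
    """Greedy VLN episode: the agent acts until it emits 'stop' or times out."""
    actions = ["move_forward", "turn_left", "turn_right", "stop"]
    observation = env.reset(instruction)          # first-person view at s_1
    trajectory = []
    for _ in range(max_steps):
        action = agent.act(observation, instruction, actions)
        trajectory.append(action)
        if action == "stop":                      # the stop action ends the solution
            break
        observation = env.step(action)            # transition to the next state
    return trajectory                             # predicted solution \bar{psi}
```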
| 60 |
+
|
| 61 |
+
VLN is the most established \evlp task: a number of datasets (see Section ) exist in both indoor \parencite{R2R, R4R, VLN-CE, RoboVLN} and outdoor environments. Overall, VLN models have seen considerable progress in improving the ability to get closer to the goal and to the ground truth trajectory \parencite{fried2018speaker, tan2019learning, FineGrainedR2R, VLNBERT, R4R}. Nonetheless, \textcite{zhu2021diagnosingVLN} show that it is unclear if models are actually aligning the visual modality and that recent work has experienced a slow-down in performance improvements. They suggest that understanding how VLN agents interpret visual and textual inputs when making navigation decisions is an open challenge.
|
| 62 |
+
|
| 63 |
+
In Embodied Question-Answering (EQA), an agent initially receives a language-based question $L$, and must navigate around collecting information about its surroundings to generate an answer. An EQA problem can be formulated as $\Phi_{EQA} =\{S, A, R, s_1, r\}$, where $R$ is the set of possible answers, and $r \in R$ represents the correct answer to the given question. The actions $A$ available to the agent consist of discrete moving actions as those in Subsection . However, as opposed to using a stop action, the ending of an episode is given by an answer action \parencite{EQA, IQA}. Lastly, a correct solution to a given question can be expressed as $\psi_{\Phi_{EQA}} =\{s_1, a_1,\dots,a_{T}, r\}$.
|
| 64 |
+
|
| 65 |
+
Unlike the aforementioned tasks, there are some challenges unique to EQA. First, there might not necessarily exist a unique or perfect solution for any given question. Second, EQA also requires an agent to understand the implications of the given questions and to translate them into actions that lead to the correct answer. Furthermore, the EQA task requires an agent's awareness of its environment and commonsense knowledge on how those environments function or are spatially organized, e.g., food is often located in the fridge, or parked cars are often found in a garage. The level of ambiguity, reasoning required, and knowledge required make EQA one of the more challenging EVLP tasks to date.
|
| 66 |
+
|
| 67 |
+
In Embodied Object Referral (EOR) tasks, an agent navigates to an object $o$ mentioned in a given instruction $L$, and has to identify (or select) it upon reaching its location. EOR can be framed as $\Phi_{EOR} =\{S, A, O, s_1, s_{goal}, o\}$. Here, $O$ is the set of possible objects in the environment, which are specified by a class label and a bounding-box or a mask, and $o \in O$ is the object of interest. The set of available actions $A$ includes discrete navigation actions as those described in Subsection . An EOR solution can be expressed as $\psi_{\Phi_{EOR}} =\{s_1, a_1,\dots,a_{T}, s_{goal}, o\}$, and represents the path leading to the object of interest $o$. When the final state $s_{goal}$ is reached, a bounding box or a mask is used to indicate where the object $o$ is located in the viewpoint given at that state. Finally, the goal of the agent is to predict a solution $\bar{\psi}_{\Phi_{EOR}}$ that follows the provided instruction and correctly selects the referenced object, and thus, closely matches solution $\psi_{\Phi_{EOR}}$.
|
| 68 |
+
|
| 69 |
+
Similar to VLN, EOR relies on instruction following. In this setting, instructions can be step-by-step \parencite{TouchDown, mehta2020retouchdown}, or they can be underspecified \parencite{qi2020reverie}. The latter requires the agent to reason and understand the context of the instruction, as was discussed in Subsection . In addition, EOR introduces the challenge of identifying an object of interest. This object might be visible from different viewpoints, and like previous tasks, multiple instances of such an object can exist in the environment. More importantly, while an object might be explicit \parencite{TouchDown}, an agent might also be required to predict what object it should find from a given instruction \parencite{qi2020reverie}.
|
| 70 |
+
|
| 71 |
+
A language instruction can be ambiguous and might require clarification or assistance \parencite{CVDN, nguyen2019vision}. For example, when navigating through a home, an agent might find multiple instances of an object referenced in an instruction. Moreover, an instruction may not specify the goal in the level of granularity that the agent needs to plan and execute actions; the agent may require clarification on intermediate sub-tasks. Unlike VLN and EQA, VDN allows an agent to interact with another agent (e.g., a human collaborator) to resolve these types of uncertainties. This interactive element has been added in two ways: through the use of an ambiguity resolution module \parencite{JustAsk, nguyen2019hanna}, or through dialogue \parencite{CVDN, nguyen2019vision}. In the case of the former, if the agent gets confused, it may ask for help from an oracle who is aware of the agent's state and its goal. In the latter, the agent is given an initial vague prompt and is required to ask an oracle for clarification.
|
| 72 |
+
|
| 73 |
+
More formally, Vision and Dialogue Navigation (VDN) requires an agent to navigate to a goal location in an environment, but may do so through sequential directives $L = \{l_1, \dots, l_N\}$. These directives may come in the form of sub-instructions upon an agent's request, or as dialogue-based interaction. Here, each instruction represents a sub-problem $\Phi_{VDN}^{(i)} =\{S, A, s_1^{(i)}, s^{(i)}_{goal}\}_{i=1}^N$, leading the agent closer to its ultimate goal position. The actions $A$ that the agent can execute consist of navigation actions, as in previous tasks, but may additionally include other forms of interactive actions (e.g., to request help). Akin to VLN, a solution to a VDN problem can be denoted as the set $\psi^{(i)}_{\Phi_{VDN}} = \{s_1^{(i)}, a_1^{(i)}, \dots, a_{T}^{(i)}, s_{goal}^{(i)}\}_{i=1}^{N}$, and the goal of the agent is to find $\bar{\psi}_{\Phi_{VDN}}^{(i)}, i \in [1, N]$, a feasible set of solutions that best align with the true solutions.
|
| 74 |
+
|
| 75 |
+
\upd{We note that past works in VDN \parencite{CVDN, nguyen2019vision} use static, multi-turn question-answer pairs to simulate conversation between the ego-agent and another agent. Still, we accommodate, as part of the VDN family, problem definitions that feature active dialogue context as well.}
|
| 76 |
+
|
| 77 |
+
\ann{Reviewer C, ``Taxonomy'', item \#1} \rev{Unlike previous tasks, Embodied Goal-directed Manipulation (EGM) requires the manipulation of objects in a scene, posing unique challenges for agents \parencite{ALFRED, padmakumar2021teach}. EGM may combine these manipulation-based environment interactions with requirements from aforementioned tasks, such as navigation and path-planning, state-tracking, instruction-following, instruction decomposition, and object selection \parencite{ArraMon}. Due to shared properties between EGM and, e.g., VLN task definitions, EGM also encompasses the mobile manipulation paradigm from previous literature \parencite{tellex2011understanding, khatib1999mobile}.}
|
| 78 |
+
|
| 79 |
+
EGM may require solving multi-step instructions and, as such, it can be framed as a set of multiple sub-problems $\Phi^{(i)}_{EGM} =\{S, A, O, s_1^{(i)}, s^{(i)}_{goal}\}_{i=1}^N$ where the $i_{th}$ sub-problem is associated to the $i_{th}$ sub-instruction. In this setting, the action space $A$ of the agent includes navigation actions as in the previous tasks, in addition to actions involving interactions with objects. These interactions can be sub-divided into multiple types of interactions, e.g. pick\_up, turn\_on, turn\_off, etc. \parencite{ALFRED}, or used as a single interact action \parencite{ArraMon}. Then, $\psi_{\Phi_{EGM}}^{(i)} = \{s^{(i)}_1, a^{(i)}_1,\dots,a^{(i)}_{T}, s^{(i)}_{goal}\}_{i=1}^{N}$ represents a solution to an EGM problem. Here, a state $s_t$ not only includes the visual observation of the agent's location at time step $t$, but also the state of objects that the agent interacted with. Lastly, the goal of the agent is to predict $\bar{\psi}^{(i)}_{\Phi_{EGM}}, i \in [1, N]$ that follows the provided instruction(s) and correctly selects and interacts with the object(s) referred in it, and thus, best matches the true solution(s).
|
| 80 |
+
|
| 81 |
+
In EGM, there may be constraints such as which objects can or cannot be picked up. To interact with an object, the agent specifies where $o$ is in its view using a bounding box or mask. Furthermore, agents not only need to interpret instructions and recognize objects they are meant to interact with, but also need to understand the consequences of interacting with their environment. Immutability is an important constraint: in real life, you cannot un-slice a tomato. This means certain mistakes lead to ``un-winnable'' states and that there may be an order of operations for certain tasks. This makes EGM one of the most difficult \evlp tasks.
|
| 82 |
+
|
| 83 |
+
This section provides a review of technical approaches used in \evlp. It discusses how current works tackle the different facets of \evlp tasks, namely vision, language, and planning. It then discusses different learning paradigms and tricks used to improve performance.
|
| 84 |
+
|
| 85 |
+
Modeling vision is an important part of \evlp tasks, as it is the principal way in which agents build a representation of their environment. Visual representations include explicit features \parencite{DuvalletThesis,tellex2011understanding,duvallet2016inferring} and neural representations. In \evlp models, neural representations are most common, with Convolutional Neural Networks (CNNs) being used as encoders in most works \parencite{MultiModalSurvey}.
|
| 86 |
+
CNNs are neural network architectures suitable for processing structured data such as images, and thus are broadly used in computer vision tasks, e.g., image classification \parencite{CNNsforImageClassification}.
|
| 87 |
+
|
| 88 |
+
In existing works, pre-trained neural networks such as ResNet \parencite{Resnet} are commonly used for extracting meaningful image features. One limitation of using such pre-trained networks is that these features may lead to over-fitting. propose using the logits of the classification layer--i.e., a higher-level feature--in order to reduce the performance gap between the seen and the unseen environments.
|
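As a minimal sketch of this pipeline (assuming a recent torchvision; the surveyed works differ in backbone choice and in whether pooled features or classification logits are used), a frozen ImageNet-pretrained ResNet can be turned into a per-view feature extractor as follows. Keeping `backbone.fc` instead of replacing it would yield the logit-level features mentioned above.

```python
import torch
from torchvision.models import resnet152, ResNet152_Weights

# Frozen ImageNet-pretrained ResNet-152 (torchvision >= 0.13 weights API).
weights = ResNet152_Weights.IMAGENET1K_V1
backbone = resnet152(weights=weights)
backbone.fc = torch.nn.Identity()        # drop the classifier: keep 2048-d pooled features
backbone.eval()
preprocess = weights.transforms()        # resize / crop / normalize as the backbone expects

@torch.no_grad()
def encode_view(image):
    """Map a PIL image of one view to a 2048-d feature vector."""
    x = preprocess(image).unsqueeze(0)   # (1, 3, H, W)
    return backbone(x).squeeze(0)        # (2048,)
```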
| 89 |
+
|
| 90 |
+
Neural feature approaches perform well, but lack the interpretability of using explicit features. proposed a hybrid approach, extracting salient features using a pretrained object detector \parencite{maskrcnn} and encoding them using a pretrained CNN. use this technique in VLN to represent objects in an image rather than the entire image. This lends itself well to the use of multimodal transformers, which we will discuss in Section .
|
| 91 |
+
|
| 92 |
+
Goal understanding is another critical piece of the \evlp puzzle. Goal information, often in the form of instructions or a statement, is provided as language. Like vision, hand-selected features \parencite{DuvalletThesis,tellex2011understanding} or neural representations can be used. Modeling language is primarily done using Recurrent Neural Networks (RNNs) or Transformers.
|
| 93 |
+
|
| 94 |
+
RNNs are a commonly used neural network architecture for processing sequential data such as language. For more information on RNNs we refer readers to . When used as language encoders, RNNs take in tokenized instructions and generate a vectorized representation of language. RNNs have some shortcomings, for example issues with long-term dependencies. To better handle long-term dependencies, gated RNNs such as Long Short-Term Memory networks (LSTMs) \parencite{LSTM} and Gated Recurrent Units (GRUs) \parencite{cho2014learning} are widely used for general NLP tasks as well as \evlp \parencite{EQA,VLN-CE, fried2018speaker, nguyen2019hanna, CVDN, tan2019learning,EnvBias} due to their superiority over vanilla RNNs in this respect. Other approaches to tackle this include the use of attention, which allows models to attend over the entire context, or memory modules \parencite{CrossModalMemory}.
|
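A minimal instruction encoder of this kind might look as follows; this is our own PyTorch sketch, with the vocabulary size and embedding/hidden dimensions chosen arbitrarily rather than taken from any surveyed model.

```python
import torch
import torch.nn as nn

class InstructionEncoder(nn.Module):
    """Embeds a tokenized instruction and encodes it with an LSTM."""
    def __init__(self, vocab_size=1000, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer tensor of word indices
        embedded = self.embedding(token_ids)             # (batch, seq_len, embed_dim)
        outputs, (h_n, _) = self.lstm(embedded)
        # outputs: per-token context (useful for attention); h_n[-1]: sentence vector
        return outputs, h_n[-1]

# Example: encode a batch of two padded instructions.
encoder = InstructionEncoder()
tokens = torch.randint(1, 1000, (2, 12))
per_token, sentence = encoder(tokens)
```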
| 95 |
+
|
| 96 |
+
Transformers operate on a different principle than RNNs, using stacked attention to combine information from different inputs. Transformer-based architectures have been shown to be effective across various communities, such as CV \parencite{khan2021transformers}, NLP \parencite{vaswani2017attention,devlin2018bert}, robotics \parencite{Fang_2019_CVPR, dasari2020transformers}, and multimodal tasks \parencite{shin2021perspectives}. Earlier work has leveraged transformers as language models to improve performance on downstream tasks \parencite{devlin2018bert}. Several works in \evlp have used transformers. \textcite{li2019robust} use a BERT encoder to encode language for the VLN task, which improves upon past models.
|
| 97 |
+
|
| 98 |
+
Representing language and vision cannot be done independently of one another. First, there is overlapping information between the two modalities which could help correct for errors in one representation. Moreover, information from each modality must interact with the other for the overall task, for example when grounding landmarks from the text to the environment. In contemporary works, combining modalities is done either through the use of attention or transformers.
|
| 99 |
+
|
| 100 |
+
Attention was originally proposed to improve the performance of Seq2Seq neural machine translation \parencite{bahdanau2016neural, MTSurvey}. It primarily serves two purposes in \evlp: fusing modalities and aligning modalities \parencite{MMMLTaxonomy}. Attention mechanisms come in many forms, but typically can be framed as a process that takes in a query and a set of keys, and produces an output as a weighted sum of the associated values \parencite{vaswani2017attention}. The attention weights are computed from the query and the keys, which can originate from the same or different sources. In the multimodal context this allows the model to combine vision, language, and the agent's current state \parencite{wang2019reinforced, tan2019learning, fried2018speaker, EQA, qi2020reverie}. This is often done with the use of multiple attention mechanisms.
|
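For concreteness, the core computation can be written in a few lines. This is a generic scaled dot-product formulation in the spirit of \textcite{vaswani2017attention}, not the exact variant used by any single \evlp model; in a cross-modal setting, the query might come from the agent's state while the keys and values come from language or image features.

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(query, keys, values):
    """query: (batch, d); keys/values: (batch, n, d) -> (batch, d) context vector."""
    d = query.shape[-1]
    scores = torch.einsum("bd,bnd->bn", query, keys) / d ** 0.5   # query-key similarity
    weights = F.softmax(scores, dim=-1)                           # attention distribution
    return torch.einsum("bn,bnd->bd", weights, values), weights

# Cross-modal example: a state query attends over per-token language features.
state = torch.randn(2, 512)
lang_feats = torch.randn(2, 12, 512)
context, attn = scaled_dot_product_attention(state, lang_feats, lang_feats)
```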
| 101 |
+
|
| 102 |
+
More recently, multimodal transformers have also been shown to be effective for vision-language tasks \parencite{LXMERT,lu2019vilbert}. For example, \textcite{lu202012} learn language representations together with image information over 12 different multimodal tasks, outperforming most of the tasks when trained independently. In VLN, two works have used transformers to combine different modalities. \textcite{PREVALENT} pre-train on instruction, image, and action triplets across different tasks and then finetune on a target task. \textcite{VLNBERT} use VilBERT \parencite{lu2019vilbert} and pre-train over the Conceptual Captions dataset \parencite{ConceptualCaptions} to learn multimodal representations using a masking objective for both text and visual streams. Fine-tuning on the VLN task then yields significant improvements.
|
| 103 |
+
|
| 104 |
+
\upd{\subsubsection{Modeling Action-Generation and Planning}}
|
| 105 |
+
|
| 106 |
+
\upd{At a high level, we measure agents' understanding of a given goal, its scene context, and its understanding of how to satisfy goals by its ability to generate appropriate sequences of actions to execute in the environment. In fully-observable (known) environments, where the location of the goal position is known (whether in a global or local/relative coordinate system), we expect the agent to generate a sequence of actions for reaching the goal, with allowance for re-planning at arbitrary frequency, based on newly-acquired information. In partially-observable (partially-known) environments, e.g., where the agent only has access to information about the adjacent admissible states, the agent must generate actions over significantly shorter horizons, such as one or two time-steps. Inspired by classical motion-planning techniques in robotics, several planning approaches have been employed in \evlp tasks, such as: mapping and exploration, search and topological planning, and hierarchical task decomposition.}
|
| 107 |
+
|
| 108 |
+
\upd{\paragraph{Mapping and exploration strategies.}
|
| 109 |
+
In the robotics literature, mapping is a general concept which refers to a transformation of the agent's observations into a more abstract state representation, wherein, e.g., planning can be performed more efficiently \parencite{filliat2003map}. Multiple choices of map representation are possible (metric maps, as in occupancy grids, or topological graphs), depending on the required level of expressivity for agent pose, obstacle locations, and goal position \parencite{cummins2008fab}.}
|
| 110 |
+
|
| 111 |
+
\upd{Early mapping strategies relied simply on exploring unknown environments. Here the common objective is to prioritise visiting unmapped states at the extent of the currently-explored regions (frontier nodes), in order to facilitate quick and efficient coverage of the environment . Agents must estimate the cost of visiting each frontier node and assess whether to proceed in its current exploration heading, versus back-tracking to other frontier nodes that promise more information at lower cost. \textcite{FAST} propose adding an exploration module to VLN agents, allowing agents to perform local decision-making while utilising global information to back-track when the agent gets confused (i.e., revisits previously-visited states). \textcite{ProgressMonitor, RegretfulAgent} propose the Regretful Agent (RA), which adds two differentiable modules to the standard VLN architecture: the first, progress marker (PM), estimates the agent's progress towards the goal, while the second module, regret module (RM), decides whether the agent should back-track by comparing the agent's current observation with its historical information. This module attends over both image and language inputs, and if the weight on the previous image features is larger it will back-track.}
\upd{Because the overhead from mapping an unseen (unknown) or partially-observable environment can be significant, classical robotics approaches moved quickly to Simultaneous Localization and Mapping (SLAM) techniques \parencite{thrun1998probabilistic, cummins2008fab}, wherein an agent projects its observations onto a map that it maintains, while also tracking its location in the map. This serves as a basis for registering objects to specific locations of the map and for generating more efficient plans. Some \evlp works use a navigation approach similar to SLAM, consisting of three modules: a mapper, a Bayesian filter, and a policy. The mapper generates a semantic map, and the Bayesian filter uses RGB images and depth to find the most likely path at each time step. The agent builds up its understanding of the world and assigns probabilities to different locations. By mapping, an agent can navigate efficiently by avoiding redundant visitations and can eventually plan using global rather than purely local information.}
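
A minimal sketch of the mapping idea (not the specific cited system; the grid resolution, frame conventions, and update rule are all assumptions) projects egocentric observations into a global grid that the agent keeps up to date for later planning:

```python
# Hedged sketch: project egocentric depth points into a global occupancy grid
# that the agent maintains, so planning can later use global information.
import numpy as np

def update_occupancy(grid, agent_xy, agent_yaw, depth_points):
    """grid: HxW occupancy evidence; depth_points: (N, 2) points in the agent
    frame (metres). Cell size and evidence increment are assumed values."""
    cell = 0.25                                   # metres per grid cell
    c, s = np.cos(agent_yaw), np.sin(agent_yaw)
    world = depth_points @ np.array([[c, -s], [s, c]]).T + agent_xy
    ij = np.floor(world / cell).astype(int)
    keep = ((ij[:, 0] >= 0) & (ij[:, 0] < grid.shape[0]) &
            (ij[:, 1] >= 0) & (ij[:, 1] < grid.shape[1]))
    grid[ij[keep, 0], ij[keep, 1]] += 0.9         # accumulate occupancy evidence
    return grid

grid = np.zeros((40, 40))
pts = np.array([[1.0, 0.0], [1.2, 0.3]])
grid = update_occupancy(grid, np.array([5.0, 5.0]), 0.0, pts)
```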
\upd{\paragraph{Search and topological planning.} A popular planning approach is to utilise search algorithms, such as informed search \parencite{hart1968formal, stentz1995focussed}, commonly employed for traversing a graphical state representation (e.g., a tree or topological map). Greedy search algorithms \parencite{black1998dictionary}, wherein the most likely action is taken at each state, are of particular interest for partially-observable environments, where the agent has limited information to use for deciding on the next state transition.}
\upd{In the context of \evlp problems, it is common to implement greedy search by way of neural sequence-to-sequence (Seq2Seq) modeling---serving as one of the earliest action-generation strategies reported for these tasks \parencite{fried2018speaker, EQA, R2R, ALFRED}. One shortcoming of this approach is that it does not account for the likelihood of the entire sequence, meaning that it can generate sub-optimal output sequences. Beam search \parencite{holtzman2019curious} is an alternative, which accounts for this by tracking the $k$ most likely output sequences. Other search-inspired sampling algorithms, such as top-$k$ sampling or nucleus sampling, have shown additional promise, primarily in the natural language processing community \parencite{holtzman2019curious}.}
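
To make the contrast concrete, the following hedged sketch compares greedy decoding with beam search over a per-step action distribution; `step_probs` is a purely hypothetical stand-in for any learned Seq2Seq policy:

```python
# Greedy decoding keeps one action per step; beam search keeps the k highest
# scoring partial sequences and scores whole sequences by total log-probability.
import heapq
import math

ACTIONS = ["forward", "left", "right", "stop"]

def greedy_decode(step_probs, max_len=10):
    seq = []
    for _ in range(max_len):
        probs = step_probs(seq)
        action = max(probs, key=probs.get)      # most likely action only
        seq.append(action)
        if action == "stop":
            break
    return seq

def beam_decode(step_probs, k=3, max_len=10):
    beams = [(0.0, [])]                          # (sequence log-prob, actions)
    for _ in range(max_len):
        candidates = []
        for score, seq in beams:
            if seq and seq[-1] == "stop":        # finished beams carry over
                candidates.append((score, seq))
                continue
            probs = step_probs(seq)
            for action, p in probs.items():
                candidates.append((score + math.log(p), seq + [action]))
        beams = heapq.nlargest(k, candidates, key=lambda x: x[0])
    return max(beams, key=lambda x: x[0])[1]

uniform = lambda seq: {a: 1.0 / len(ACTIONS) for a in ACTIONS}
print(greedy_decode(uniform))
print(beam_decode(uniform))
```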
\upd{Other approaches couple mapping with greedy search strategies. \textcite{EvolvingGraphicalPlanner} propose the Evolving Graphical Planner, which creates a graph of all admissible nodes (states), where each node is represented using learnable embeddings of the agent's scene context. The agent's global planning module attempts to generate actions based on the current version of the map: the map, trajectory history, and language components are fed into the planner model, which returns a probability distribution over the set of the next node candidates; the agent then samples the next location and builds up a new graph based on the new information at this node, keeping only the top-$K$ locations at each time step.}
\upd{\paragraph{Hierarchical task decomposition.} Early robot planning approaches considered multiple levels of abstraction, defined for an arbitrary planning task: the highest level of abstraction need only contain the goal specification, a lower level of abstraction would contain the sequence of subgoals needed to satisfy the task and reach the goal, and even lower abstraction levels would include the primitive actions required for satisfying each sub-goal \parencite{sacerdoti1974planning, nau2003shop2}. An agent is said to perform hierarchical task decomposition if it is capable of reasoning at multiple levels of task abstraction, whether through formal predicate calculus \parencite{nilsson1984shakey, kaelbling2010hierarchical} or simply by virtue of the agent's architectural specification.}
\upd{Architecturally, it is common to define at least two modules, e.g., a global planner and a local planner, where the former is responsible for generating intermediate sub-tasks towards a given goal, and the latter is responsible for generating satisfactory low-level actions for each sub-task. These ideas manifest in more recent works as, e.g., hierarchical waypoint prediction and navigation \parencite{chen2020learning, misra2018mapping, blukis2018mapping} and hierarchical reinforcement learning \parencite{li2020hrl4in, nachum2018data}. Certain datasets \parencite{misra2018mapping, ALFRED, CerealBar, RoboVLN} pair sub-goal sequences with a high-level goal, allowing hierarchical approaches to learn to reason at multiple levels of abstraction: agents can predict abstracted actions corresponding to sub-goals (e.g., a waypoint to navigate to), and use a lower-level module to control the movement actions that achieve each sub-goal (e.g., motor movements to transition between waypoints). \textcite{misra2018mapping, blukis2018mapping} propose a two-part modular architecture: the first part is a Visitation Distribution Prediction (VDP) module, which predicts waypoints on a probabilistic map using RGB images and instructions; the second stage receives this map from the VDP module and learns to predict low-level actions through reinforcement learning.}
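
The two-level decomposition can be sketched schematically as follows (an illustration under assumed interfaces, not any specific cited model): a global planner proposes the next waypoint, and a local controller emits primitive actions until that sub-goal is reached.

```python
# Schematic hierarchical decomposition on a toy grid world.
def high_level_plan(goal, state):
    """Hypothetical global planner: next intermediate waypoint toward `goal`."""
    step = lambda a, b: a + max(-1, min(1, b - a))   # move one cell toward target
    return (step(state[0], goal[0]), step(state[1], goal[1]))

def low_level_control(state, waypoint):
    """Hypothetical local controller: primitive actions reaching the waypoint."""
    actions = []
    if waypoint[0] != state[0]:
        actions.append("move_x")
    if waypoint[1] != state[1]:
        actions.append("move_y")
    return actions

state, goal = (0, 0), (3, 2)
while state != goal:
    wp = high_level_plan(goal, state)
    for action in low_level_control(state, wp):
        pass                                          # execute primitive action
    state = wp                                        # assume the sub-goal is reached
print("reached", state)
```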
\ann{Reviewer B, item \#4 (``interactive navigation'') and item \#5 (``task and motion planning'')}
\rev{Generalising beyond the above task decomposition paradigm, where sub-tasks consist of only navigation or only manipulation actions, early work also considered settings where agents were required to dispatch multiple capabilities (e.g., manipulation, navigation, dialogue) within the same procedure or episode. This integrated setting, manifesting in such applications as manufacturing assistance and surgical robotics, represented a new challenge for planning-based approaches: agents quickly needed to contend with heterogeneous action spaces, longer task horizons, and combinatorial complexity in configuring the space of low-level actions. This setting was formalised as the problem of task and motion planning (TAMP) \parencite{cambon2009hybrid, choi2009combining, wolfe2010combined, kaelbling2010hierarchical, kaelbling2013integrated, garrett2021integrated}, wherein algorithmic solutions define hierarchies across the different environment interaction types, with each type having its own unique execution primitive. These execution primitives were configured by way of a task decomposition, between a task-planning module (symbolic planning) and motion and/or manipulation planning modules (geometric planning). Many such solutions were plagued, still, with high search complexity, as early symbolic planners needed to enumerate all possible operations in a state, in order to expand the search tree. On the other hand, motion/manipulation planners were able to deal more effectively with geometry, but struggled when given partial goal specification. Notably, \textcite{kaelbling2010hierarchical} utilised goal regression, to recursively decompose the planning problem, through aggressive hierarchicality---limiting the length of plans and, thereby, exponentially decreasing the amount of search required.\footnote{\rev{\textcite{garrett2021integrated} further formalise a class of task and motion planning problems and survey solution methodology.}} With the advent of more sophisticated simulators (e.g., AI2-THOR \parencite{AI2THOR}) and benchmarks (e.g., ALFRED \parencite{ALFRED}, Interactive Gibson \parencite{iGibson20}), recent approaches extend the original TAMP problem in the context of mobile manipulation, interactive navigation, and navigation among movable objects. \textcite{li2020hrl4in} pursue interactive navigation by proposing a hierarchical reinforcement learning framework, for learning cross-task hierarchical planning. \textcite{sharma2021skill} produced a framework for learning hierarchical policies from demonstration, with the intention of identifying reusable robot skills. Modularity as a general design principle in autonomous systems can be most closely attributed to the principles of cross-task hierarchical planning, as studied in this section, which serves as a strong foundation for improving the agents' sample-efficiency and the tractability of sometimes-conflicting learning objectives \parencite{das2018neural, chen2020learning}.}
Supervision in \evlp generally refers to a demonstration of a possible solution to a problem. Learning-from-demonstration approaches typically focus on matching the behavior of their demonstrator or expert \parencite{hester2018deepql}. As such, these approaches can be suitable when data from an expert is available, or when it is easier to collect expert demonstrations than to specify a reward function for training an agent under reinforcement learning. Furthermore, this learning paradigm has been shown to help accelerate the learning process in difficult exploration problems \parencite{syed2007gameil}.
Different approaches within this paradigm have been used in all \evlp tasks. For example, in VLN, \textcite{R2R} use the Professor-Forcing approach \parencite{lamb2016profforc}, where at each training step the expert action is used to condition later predictions. Here, expert demonstrations generally correspond to the shortest path from a start location to a goal location for any given instruction. Furthermore, this approach is coupled with Student-Forcing, a method which samples the agent's action from the output probability distribution in order to avoid limiting exploration to only the states included in the expert trajectories \parencite{R2R}. Another commonly used approach is DAgger \parencite{DAgger}, which trains on aggregated trajectories obtained using expert demonstrations \parencite{FollowingNavInstructionsQuadIL, VLN-CE, RoboVLN}.
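
A hedged sketch of these supervision schemes is given below (the `env`, `expert`, and `policy` interfaces are assumptions): teacher forcing executes the expert's action, student forcing samples from the agent's own distribution, and DAgger-style training aggregates the expert-labelled pairs collected along the rollouts the student actually visits.

```python
# Rollout collection for imitation learning: the expert always provides the
# label, but teacher/student forcing differ in whose action is executed.
import random

def rollout(policy, expert, env, student_forcing=True):
    dataset = []                                      # (observation, expert action)
    obs = env.reset()
    for _ in range(env.horizon):
        expert_action = expert(obs)
        dataset.append((obs, expert_action))          # always label with the expert
        probs = policy(obs)                           # dict: action -> probability
        if student_forcing:
            action = random.choices(list(probs), weights=probs.values())[0]
        else:                                         # teacher forcing
            action = expert_action
        obs, done = env.step(action)
        if done:
            break
    return dataset        # DAgger: aggregate these pairs into the training set
```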
In EQA, agents have been trained via behavior cloning using a synthetic dataset of demonstration episodes, e.g., the shortest paths between an agent's location and the best viewpoint of an object of interest. In VDN, an approach known as Imitation Learning with Indirect Intervention (I3L) has been used where, in addition to learning from expert demonstrations during training, an agent can also request help from an assistant during both training and evaluation in order to navigate to an object specified by the instructions.
In general, agents that learn from demonstrations do not generalize well, suffering from distribution-shift issues due to the greedy nature of imitating expert demonstrations \parencite{SQIL}. This tendency leads the agent to overfit to the seen environments, generally resulting in poor performance in new, unseen environments.
Although learning from supervision can lead to a faster learning process, agents trained under this paradigm often accumulate significant error because their exploration is limited to the expert states \parencite{SERL}. In Reinforcement Learning (RL) settings, as opposed to relying on supervision, an agent learns through interactions with an environment, e.g., by taking actions and receiving feedback from them. In this setting, agents often learn more general behavior and are capable of overcoming erroneous actions that may arise in unseen scenarios \parencite{LookLeap}.
\upd{Policy gradient methods are frequently used in embodied research. This type of algorithm directly models and optimises a policy function, which provides the agent with a guideline for the action to take in a given state. Within embodied AI research, two popular types of policy gradient methods are REINFORCE algorithms and actor-critic algorithms. The former use episode samples to update an agent's policy parameters, whereas the latter combine policy learning with value learning. Here, the policy plays the role of the actor, choosing the action to take, while the value function plays the role of the critic, which criticizes the actor's decisions.}
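
The two update rules can be sketched in PyTorch as follows (generic illustrations under assumed tensor interfaces, not the objective of any specific \evlp agent):

```python
# REINFORCE versus actor-critic losses for one collected episode.
import torch
import torch.nn.functional as F

def reinforce_loss(log_probs, returns):
    """REINFORCE: scale each action's log-probability by the Monte-Carlo return."""
    return -(torch.stack(log_probs) * torch.as_tensor(returns)).sum()

def actor_critic_loss(log_probs, values, returns, value_coef=0.5):
    """Actor-critic: the critic's value estimate serves as a baseline (advantage),
    and the critic itself is regressed toward the observed returns."""
    returns = torch.as_tensor(returns)
    values = torch.stack(values).squeeze(-1)
    advantage = returns - values.detach()             # the critic criticises the actor
    policy_loss = -(torch.stack(log_probs) * advantage).sum()
    value_loss = F.mse_loss(values, returns)
    return policy_loss + value_coef * value_loss
```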
\upd{Actor-critic algorithms generally achieve smoother convergence and show superior performance, especially in continuous-space problems. One such example is Asynchronous Advantage Actor-Critic (A3C), an algorithm designed for parallel training, where multiple actors, coordinated by a global network, interact asynchronously with an environment.} \ann{Reviewer B, item \#6 (``PPO, DD-PPO'')} \rev{More recently, Decentralized Distributed Proximal Policy Optimization (DD-PPO) \parencite{PointNav}, an algorithmic extension of PPO, has gained popularity within embodied goal-oriented tasks such as PointGoal Navigation and AudioGoal Navigation.}
\upd{Although the aforementioned techniques are used in \evlp research, they have typically been used jointly with supervised learning methods, which we discuss in the following section.} Nonetheless, some \evlp approaches employ RL-based approaches exclusively. For instance, leveraging REINFORCE algorithms, \textcite{LookLeap} propose Reinforced Planning Ahead (RPA), an approach that couples model-free and model-based RL to train a look-ahead model that allows an agent to predict a future state and, thus, plan before taking an action.
For tasks that require goal understanding and planning rather than instruction following, such as EQA and IQA, the Hierarchical Interactive Memory Network (HIMN) has been proposed, in which a model is decomposed into a hierarchy of controllers: a high-level planner chooses among low-level controllers, each of which operates on a particular sub-task, e.g., navigation, question answering, or interaction. Subsequent work proposes a Question Answering (QA) model that trains RL agents with a notion of predictive modeling, which stimulates the agents to visit places in the environment that might become relevant in the future. Although RL algorithms do not require labeled data and can endow agents with general skills for solving a task, they suffer from several difficulties, including reward specification, slow convergence, and high sample complexity.
Combining reinforcement and supervised learning is commonplace in \evlp literature.
By doing this, learning agents can leverage both the expert demonstrations and the direct feedback-based interaction with an environment to achieve more generalizable behavior.
In VLN, \textcite{tan2019learning} use both imitation learning (IL), as weak supervision to mimic expert trajectories generated using a shortest-path algorithm, and on-policy RL to train more general behavior. Other work proposes an RL model, Reinforced Cross-Modal Matching (RCM), where intrinsic rewards are used to encourage the learning agent to align the instructions with the trajectories; RCM is then combined with a self-supervised imitation learning approach (SIL), in which the agent explores the environment by imitating its own past good decisions. RCM and behavioral cloning have also been used together to develop a generalized multitask model for natural-language-grounded navigation tasks, including VLN and VDN, in which learnable parameters are shared between the tasks.
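
A minimal sketch of how an imitation term and a policy-gradient term can be combined into one objective is shown below (the mixing weight and the tensor interfaces are assumptions, not values reported by the cited works):

```python
# Joint IL + RL objective: cross-entropy on expert actions plus a
# policy-gradient term weighted by (assumed) advantages.
import torch
import torch.nn.functional as F

def mixed_loss(logits, expert_actions, log_probs, advantages, il_weight=0.2):
    """logits: (T, num_actions); expert_actions: (T,) expert labels;
    log_probs / advantages: per-step quantities from an on-policy rollout."""
    il_loss = F.cross_entropy(logits, expert_actions)                 # imitation term
    rl_loss = -(torch.stack(log_probs) * torch.as_tensor(advantages)).mean()
    return rl_loss + il_weight * il_loss
```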
In the context of EQA, both \textcite{das2018neural} and \textcite{MultiAgentEQA} use IL to pre-train a model and RL to fine-tune it. \upd{In particular, \textcite{das2018neural} propose the Neural Modular Controller (NMC), which uses A3C to fine-tune a set of sub-policies, trained through behavioral cloning to mimic expert trajectories, from a global policy. In contrast, \textcite{MultiAgentEQA} use REINFORCE to fine-tune an EQA agent trained on multi-target questions.} This strategy allows agents to learn to recover from errors, which is not possible using expert demonstrations alone.
\ann{Reviewer B, item \#6 (``PPO, DD-PPO'')}\rev{Supervised Reinforcement Asynchronous Learning (SuReAL) is a framework that learns to map instructions to the continuous control of quad-copters. SuReAL asynchronously trains two processes that share data and parameters: a planner, trained via IL to predict a plan, and an action generator, trained using PPO to predict the control actions that execute a given plan. More recently, waypoint models have been introduced for VLN tasks in continuous environments: motivated by the recent success of PointGoal tasks, DD-PPO is leveraged to extend the VLN-CE task to support language-conditioned waypoint prediction models at varying action-space granularities.}
Another line of work proposes an exploration module which enables an agent to decide when and what to explore: IL is first used to bootstrap an initial exploration policy from an expert, and RL is then used to explore the state-action space outside the demonstration paths and to reduce the bias toward copying expert actions. To overcome the issue of reward specification in RL, \textcite{SERL} propose a VLN method based on Random Expert Distillation \parencite{RED}, known as Soft Expert Reward Learning (SERL), to learn a reward function by distilling knowledge from expert demonstrations.
Another challenge that has been tackled by using RL and IL jointly is the accurate alignment of instructions and visual features. \textcite{BabyWalk} propose the BabyWalk approach, which uses IL to first segment paths and then learn over these finer-grained, shorter instructions by demonstration. BabyWalk then uses curriculum-based RL to refine the policy by giving the agent increasingly longer navigation tasks \parencite{Curriculum}. This was shown to improve the model's ability to extrapolate to longer sequences, although generalization to unseen data does not appear to benefit as much from this approach.
The following subsections discuss common improvements beyond architecture, such as: data augmentation, pre-training, the use of additional training objectives, the use of different input representations, and optimization. Note that these can, and often are, used regardless of learning paradigm or architecture.
Data augmentation is commonly used to improve performance and robustness \parencite{ImageDataAugSurvey, tanner2012tools}. In \evlp, back-translation is the most commonly used data augmentation technique. Back-translation was originally introduced by the neural machine translation community \parencite{tan2019learning}. It augments an existing corpus by generating synthetic training samples from mono-lingual (i.e., single-view) data. Because creating a paired corpus is difficult, it is beneficial to train a ``backwards'' model which creates additional training pairs.
In VLN, paths are sampled throughout the environment, and a speaker model is trained to generate synthetic instructions for a given path; these synthetic sentences supplement the training data \parencite{fried2018speaker}. Subsequent work improved on this idea by introducing Environmental Dropout, in which entire frames from ground-truth paths are omitted to create ``synthetic'' environments, and new instructions are created for these synthetic environments using the back-translation approach. This led to improvements in task success rate and narrowed the gap between the success rates on seen and unseen environments.
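
The augmentation loop can be sketched as follows (a hedged illustration; `envs`, `sample_path`, and `speaker` are hypothetical stand-ins for the environment pool, the path sampler, and the trained speaker model):

```python
# Back-translation for VLN: generate synthetic (instruction, path) pairs by
# running a trained speaker model over newly sampled paths.
import random

def augment_with_speaker(envs, sample_path, speaker, n_samples=1000):
    synthetic = []
    for _ in range(n_samples):
        env = random.choice(envs)
        path = sample_path(env)               # e.g. shortest path or random walk
        instruction = speaker(env, path)      # "backwards" model: path -> text
        synthetic.append((instruction, path))
    return synthetic                          # added to the real training pairs
```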
Other improvements to back-translation change how paths are sampled. One work proposes using longer paths that are not the shortest path between two points: shortest paths may favor certain transitions in the training environment, which limits the agent's need to learn language, so a random walk that favors similar transition probabilities at each point in the graph is used instead. Another work proposes using counter-factual reasoning when sampling augmentation paths: an adversarial path sampler (APS) proposes increasingly hard paths for the model to train on. This module is model-independent and uses the training loss to choose paths the model is struggling with.
Additional tasks can also be used to increase the amount of training signal and improve performance. In VLN, a completeness objective can be added: based on the observations so far, the agent predicts how far, in meters, it is from completing the path. This progress monitor (PM) implicitly adds a prior over how instructions are processed, encouraging left-to-right attention over the instructions as the agent progresses through the path \parencite{ProgressMonitor}.
To further improve performance, \textcite{AuxiliaryReasoningTasks} propose four additional auxiliary tasks:
- Trajectory retelling. Generating text describing the actions taken so far, from the hidden state. The ground truth is generated using a speaker module \parencite{fried2018speaker}.
- Angle prediction. Generating the angle, in degrees, of the agent's heading at the next time step.
- Instruction-path matching. Shuffling states within a batch and predicting whether they correspond to the given instructions.
- Progress. Predicting the percentage of steps taken so far.
Respectively, these auxiliary tasks have the following reasoning objectives: explaining the previous actions, predicting the next orientation, evaluating the trajectory consistency, and estimating the navigation progress \parencite{AuxiliaryReasoningTasks}.
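
In practice such auxiliary objectives are typically added to the main navigation loss as weighted terms; the sketch below illustrates one way to combine them (the weights are placeholders, not values reported in the cited work):

```python
# Weighted sum of the main navigation loss and the four auxiliary losses.
def total_loss(nav_loss, retell_loss, angle_loss, matching_loss, progress_loss,
               weights=(1.0, 0.1, 0.1, 0.1, 0.1)):
    losses = (nav_loss, retell_loss, angle_loss, matching_loss, progress_loss)
    return sum(w * l for w, l in zip(weights, losses))

print(total_loss(1.0, 0.5, 0.2, 0.3, 0.1))  # 1.11
```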
Pre-training is done by training either a component of the model or the full model on another problem entirely. This technique is used across most, if not all, neural \evlp approaches in the form of pre-trained image embeddings \parencite{Resnet}. Image embeddings are used not only for RGB images \parencite{CVDN, ALFRED,R2R,EQA,IQA}, but also for depth information \parencite{eqa_matterport, VLN-CE}.
Pre-training can also be used for language inputs. One study ablates the use of pre-trained word embeddings, finding that GloVe embeddings slightly improve performance \parencite{Glove}. More recent developments in language representation include pre-trained contextual models such as ELMo \parencite{Elmo}, BERT \parencite{devlin2018bert}, and the GPT models \parencite{GPT2,GPT3}. Encoding language with either BERT or GPT-2 has been shown to give significant improvements over a baseline LSTM encoding learned from scratch.
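
As a minimal sketch, a pre-trained contextual encoder can replace a from-scratch LSTM encoder via the Hugging Face `transformers` library (the model name and the mean-pooling choice are assumptions, not the configuration used in the cited works):

```python
# Encode an instruction with a pre-trained contextual language model.
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def encode_instruction(text):
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    outputs = encoder(**inputs)
    # Mean-pool token representations into one vector per instruction.
    return outputs.last_hidden_state.mean(dim=1)

features = encode_instruction("Walk past the sofa and stop at the kitchen door.")
```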
Fusion of modalities can also benefit from pre-training. Recent approaches have taken advantage of large multimodal corpora to pre-train BERT-like models \parencite{LXMERT, lu2019vilbert} that capture cross-modal information. One such approach leverages pre-training to improve agent performance on VLN, using VilBERT \parencite{lu2019vilbert} as a base model to re-rank paths proposed by other models. \ann{Reviewer C, item \#3 (``Related Works'')} \rev{Other work integrates contrastive language-image pre-training (CLIP), a model designed for learning visual representations from natural language supervision, into vision-and-language tasks such as Visual Question Answering (VQA) and VLN. CLIP leverages a large number of image-text pairs for training, achieving state-of-the-art performance with zero-shot transfer in a variety of computer vision tasks. Using EnvDrop as the baseline model and CLIP as the visual encoder, this improves the performance of VLN agents on two datasets, R2R and RxR.}
A topic closely related to pre-training is multitask training. In the age of transformers, this approach has grown in popularity for multimodal tasks \parencite{lu202012, MTLHVR, MultitaskMultimodalEmotionAndSentiment}, including \evlp tasks.
\textcite{nguyen2019hanna} train over three task datasets: two VLN datasets and a VDN dataset. The model proposed, PREVALENT, is a transformer that fuses modalities in a later layer. It is pre-trained over speaker-augmented R2R \parencite{fried2018speaker} by converting each step in the path into an (action, image, text) triple; PREVALENT is trained to predict the action and with a masking objective over the instructions. The models are then specialized and fine-tuned over the R2R, CVDN, and HANNA datasets. This approach benefits CVDN in both seen and unseen environments, while improving performance only on unseen environments for R2R and CVDN. \textcite{MultiTaskLearningVLN} train over both R2R and CVDN, but achieve lower performance on both datasets.
Task transfers have only been attempted between instruction-following tasks; there have been no transfers between tasks that require following instructions and those that require interpreting instructions. Doing so may be difficult, but could provide the fine-grained understanding of language, vision, and action required to interact with the world.
Another way to improve performance is to modify the loss and optimization functions used. In \evlp tasks, there are two ways to pose the loss function: as a sequence-to-sequence (Seq2Seq) problem, or as a re-ranking problem. In a Seq2Seq problem, a series of inputs is used to predict a sequence of ground-truth actions. Current approaches treat this as a classification problem, with the goal of learning a maximum-likelihood estimate using a cross-entropy loss conditioned on the agent's observations. In a re-ranking problem, a Seq2Seq model first proposes candidate solutions, and a second model scores them. One implementation does this through negative sampling, creating binary labels which reflect the validity of a path; binary cross-entropy is again used as the objective.
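
The two formulations can be sketched as follows (generic PyTorch illustrations under assumed tensor shapes, not the losses of any particular cited model):

```python
# Seq2Seq loss: per-step cross-entropy over ground-truth actions.
# Re-ranking loss: binary cross-entropy over candidate paths labelled by
# negative sampling.
import torch.nn.functional as F

def seq2seq_loss(action_logits, gt_actions):
    # action_logits: (T, num_actions); gt_actions: (T,) integer action labels
    return F.cross_entropy(action_logits, gt_actions)

def rerank_loss(path_scores, is_valid):
    # path_scores: (K,) raw scores for K candidate paths; is_valid: (K,) in {0, 1}
    return F.binary_cross_entropy_with_logits(path_scores, is_valid.float())
```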
Reasons to choose one formulation over the other include performance, computational cost, and whether or not the environment is known. Re-ranking approaches are, at the time of writing, state of the art in VLN \parencite{VLNBERT}. However, they are more computationally expensive and unable to explore the environment without the use of one or more additional models.
Two optimization algorithms are commonly used \parencite{MultiModalSurvey}: Adam \parencite{adam} and RMSProp \parencite{rmsprop}. However, the advantages of, or reasoning behind, choosing one over the other are not discussed in the \evlp literature.
\ann{Reviewer B, item \#7 (``Reward Shaping'')} \rev{\subsubsection{Reward Shaping}}
\rev{Reward shaping is a widely-used technique for solving Markov decision processes, as in Reinforcement Learning (RL), where domain knowledge is used to define a shaped reward $R_{shaped}$ for improved agent learning and optimisation. Typically, this shaped reward is linearly combined with the original reward $R$, as $R' = R + R_{shaped}$, or is used in its place. Reasons for incorporating a shaped reward include situations where $R$ is sparse, or where it is deemed not sufficiently informative for the task or for encouraging the desired agent behaviour.}
\rev{The \evlp literature generally follows this standard formulation and adopts similar usage considerations for shaped rewards. In EQA, several works use a reward function $R$ of the form $R = R_{terminal} + R_{shaped}$, where $R_{terminal}$ is a sparse reward assigned for correctly answering a question; some additionally assign a penalty for incorrect answers. $R_{shaped}$ consists of a dense reward determined by the agent's progress toward its goal: positive if moving closer to the goal and negative otherwise.}
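
A toy illustration of this decomposition is given below (the magnitudes are arbitrary and not taken from the cited works): a sparse terminal success bonus is combined with a dense progress term measured as the reduction in distance to the goal.

```python
# Shaped reward: sparse terminal reward plus dense progress-based reward.
def shaped_reward(prev_dist, new_dist, done, success, success_bonus=2.0):
    r_terminal = success_bonus if (done and success) else 0.0
    r_shaped = prev_dist - new_dist          # positive when moving closer to the goal
    return r_terminal + r_shaped

print(shaped_reward(prev_dist=4.0, new_dist=3.2, done=False, success=False))  # approx. 0.8
```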
\rev{VLN and VDN works have used a similar reward function, where $R_{terminal}$ is given if the agent stops within a specified radius of the target, and $R_{shaped}$ is given as a progress-based reward. One work uses the aforementioned rewards to define $R_{extr}$, an extrinsic reward measuring the agent's success ($R_{terminal}$) and progress ($R_{shaped}$), and further defines $R_{intr}$, an intrinsic reward measuring the alignment between the instruction and the agent's trajectory. The latter is obtained as a probability score of generating the original instruction, given sequential encodings of the instruction and the agent's historical trajectory. The total reward function is defined as $R = R_{extr} + \delta R_{intr}$, where $\delta$ is a weighting parameter.}
\rev{In addition to the sparse extrinsic reward, some works use the fidelity of the predicted path with respect to the reference path, based on the Coverage weighted by Length Score (CLS) metric, discussed in Section , to shape the reward function. Similarly, SERL uses two intrinsic rewards, $R_{SED}$, a soft-expert distillation reward, and $R_{SP}$, a self-perceiving reward: the former is learned by aligning the VLN agent's behavior with expert demonstrations, and the latter by predicting the agent's progress toward its goal.}
2107.01396/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2021-05-16T12:12:16.076Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4497.0 Safari/537.36 Edg/92.0.884.2" etag="vVqWiDmmXE5maioL2CFe" version="14.6.13" type="device" pages="2"><diagram id="R3HklLem4m65TqUdJgqv" name="Page-1">7VxbU+M2FP41TNsHMrIl3x5JAmzbXaDLMi1PjIlF4uJEqaMsCb++si1fZMmXJHZgWzIMWLIi2+c759O5yJzA0XxzGbrL2Rfi4eBEB97mBI5PdF3Tbcj+RD3bpMdGVtIxDX2PD8o7bv1XzDsB7137Hl4JAykhAfWXYueELBZ4QoU+NwzJizjsiQTiVZfuFEsdtxM3kHv/9D0640+hW3n/J+xPZ+mVNdNJzszddDCfYjVzPfKSdMUPB89P4CgkhCZH880IB5HwUrkkErioOJvdWIgXtM0XyN3V3eXw4cFfa0+3D1d35DwYnxoomea7G6z5E/O7pdtUBHjhnUWSZK1J4K5W/uQEDmd0HrAOjR2uaEieM+mw5xo+kQW9cOd+EIH+zZ8zAHVwhV/Y769k7i74EI62nn5lRAISxteEIP5kkwtnHiceYmfk508FTdbhBNc8dPqM2BOw52K7xGSOabhlA15yxA2uj7MC2GlfiAOX+t9FjXG54k2z6bIr3BCf3bIOuJFYFp+HmwhEaGCIkySPxL9XxLg0la2bpalKE1E3nGIqTcQOCg+ed8UqpFan3xY309/x5T0F1tfPt2d3i2/3T6ea3qxOIVkvPOxx5XmZ+RTfLt0YsBfGIaJutdAkWf08dzXLLlCpJt9xSPGmFn9+FukiQIZhpAAVNER3FCqipdiq1EGQfo2olUpsH264e5ipLGwm1nD7F2uAQda85xeIG+NIiiBrbYutGxyyy1Ic8s5KsBK9rbPpd2XS0AADIJqibqL9bBo6irngwDFB/ilN3Z2VK2Wtae9P92wTCdo3AEDfRwPxxqdcm22dt+/jNgAGb+fzRY1toSHNxtTxlguFhHRGpmThBud57zBnw1r9b1zT/jN2gnQ0sFFBt2FnZhNPXZgZWK3Mhumxuy0MW0YDVjUPVVrPTc2qv9PSeIRgyWqTO+jUhlss1MxhXUaHk/Ujbl6pHxNF/vyYdbiT52ms3tdrGvgLzPs9N3y+ZtP4dJvYlSF26nFvtWNZpATPD5nT7xNGF2OmGNG4NszShU+gYPg0ZinYDlLYjt6bT9DCmf8AtQ5U23h/oBofoB4EqqG/raWqAyUoe0w7+gr5mM+ELDlkf2NKt1zu7poSEfncXyr6SmAA9cx52sVXaoi8DoG36O+YD9d/PONrfzz9bTi/oae/WsNXlb9TG5E2Ojy7eTK7+gTMoRE10CklaxrGm9zn7sonqJOpwDQo+jFGjyTwVts5+3NiDTcn1pifqQ7uQTMRVXFCvdMvZYS0Jzix7aPxCSMFiU+gJfGJreATuzc6kZE4Jp2cRtGRJXKKZRoNnCKQRUVwV6acFhDnEZyFyhFc2m4fwbViqLYR2TEZShFr6SKlGCVtrIisDg5/DCGcaR5vm51SXR04zVT30/viuouLsyFoaQj9cJ3hoLflOvstqG6vGsf+3AFl7jisknEgd5hQ0gOptFBBH11ZLJQsdhTnFZ98xtW6GdAokImOptHRFaYvJHyusdkWxYfmqKZtFpPfAOjAJJEinHHkcCYrPAjVCK23zIMiA1ywygVZdORxKLLKrPPCD/qtRpomQlJ8wy5aGHQOL0bnFyrS9jx9AorJZU1wS6z6vPL+LGK1ZJHUEeibRTSASlGQ3q48Kdc5mybqmYyMenVnSPl0+zWWFln06HNnLnaiSabl7OJzv5k18dp+awPKav5qA9IbCjM7VRq7syqta6s6iKGtCo/X68K77WSl7GBxNOPFMS/waGJkkU5YWCdNZ2Db8kpZjo06g0GT/RiZSrqvnTbE2q2sQemN6y2twWppDMepOxpy9cZEZqslRPaNQfNcPRfk0/0pBaUa+yvqLhh0ZeVilkRL6lSgYe6ouYE/jSLVCdOHiDKHkQn6Ezc44yfmvucFVbUGkTr2XDByn3knhd2BK4AhgGY4MjtoQFUV6IsbUipozEpMq5MSaSmIMijwK4muN1wW1r6sv7AgNm7C8jc43ZrZmVMQEuryfAioyI90syAIIMN0O0FT9gL1hbHM/5eh6/mYM8OHqapQ1EQUkQLF45oqetuEOxgInn/zXru3DZvVbntfYW/qIzT6JJ076FVhL7AHpmkZDnKgaQFH9EoRBAPEfkHLsZBpw3bbBHfNymtAL6fl7Vq3RvqCZteXLDPfbM/xJn/yXvP+UA6DxvHKI6cQl5iZJt22YOV3QMEFqxuP9RHojYqtEhWbpiL3qCmo2OyLiqG8EzpfUcugrpeeS39cp1gg1+FojHqDubxfwVbAnCbdhAAN9gQzkvdB3XEwAfOPf1ZX7ozRL9HEDK+YwYM25iymnzj4708fKnIu/MkUrwftXv0rMzqSlcBS1RlQX7ZuyIHu9ZL6c/81jiVOH90VEym7MKXupK76s3dOa1ejFVOcF/FHleLMagQd4Ja9nZZtHrJl4y1Udrsu26qhM2o95h4qRHzK3M9VL4qdr7hiqvv47u/R3NpybG2020/fWRFG3sSRLvzvkAikGohaX5pebtyfBaAmb1LrkwXWD86V8+qfb8L7zezLJdnMzv5WvHZwg5k6L+najb57y6g8cMNow7EOvhBvHbTx1X6QBbvr5RnpYgLTgsrXEm2Fl9bFAq3ezCYv0BKA0VbzZTsRtpdS9ga4+5heB9RLL331OU3/WrJ5mM7AKX6QLElNt3oS5ce+fwH4enVrNpoCrFC1NaY3HM0PHLvEMY1NxEJvtLW3bL3HhVlVX/+A+VCYFRHncWGVHZY0qbRaskcv4mv+s47+l0csr9NVLNQzNkCzl5tYROn5NBd1tt74ge9GN1m5fTE5we48uVpytsad3XtX4xFeroPigquKSp2K3Rjm4eie408v08e1s7y5ppfrZ2dlTwzFNvD/eeqwUJU9tToKRCwoxolImUtUbfbYPZfImvl/1EkCzfz/EsHzfwE=</diagram><diagram id="vmSD_qrJBkFz_t-oXexj" 
name="Page-2">7LzXsus8sib4NHU5E/Tmkt57J/Jmgk4UvRfN0zeof1fVqTgnOnpmuqP7olbsrSVBIAhkJjK/LxNcf0O5/pSWdPoYY1F2f0Og4vwbyv8NQRAMIcCvp+X6qwVGMfKvlmqpiz9t/2zw6rv80wj9ad3rolz/peM2jt1WT//amI/DUObbv7SlyzIe/9rtPXb/etcprcr/1ODlafefW6O62D5/tVII+c92uayrz9/vDBP0X9/06d87/xli/aTFePzV9FscKvwN5ZZx3P56159c2T3S+7tcvIHiCh6HIvGVdZ93WEuX9H/9JRbx/80l/1jCUg7b/+ehVwRbDUndkJU2SfH/mfMNDf5cAn3Tbv8jrz9r3a6/CxAse3re5nsGfrHHp95Kb0rzp+0ARgPaPlvfgU8weJuN+1CUhZ79oyHN22p5Wq196+qh/NNepEtrgWHq7bEs6P+G8H9tRH6tT891W8b2H8oD7ex7HLY/loZBz2X1AkynHgfQsI770+/XR0z7unuG9+se2CACmeUBXt2xT0FX9n9QrH/E/y2XrTz/g0X8EbNUjn25LRfo8udbEqb+uuTPnqHwPyZ1/NMA8T+S//wH20OhP43pH5uv/jH0P/UK3vxR7X+t5o/q4RAkjEuMMS2OaYqw6P9Ywr/V/D9Tzfj/aWpG/q3m/wVqpv83qvm/XAL6P67muv+FR/ZZbg2Cop5mZWePa/1Hvtm4bWMPOnTPF+w/tMuN3bj8hkLfv5//MAbT1dVz7TY+1pKu019h+12fJZg1+7sl8/dW6O8t4H2RbunfUOavj4jYTGX1N+QRAd2AF4xhHK9NVLdiWMZhwCfwmwEzEJGD5Rmmktgql1inktkjVzmn0njWMXlm1Xm29YSDCmR3fMsfKAUdKpXDBoVTVuU8co3PJ51nTpN3IPnKcduvLpvDbpPX/z6604IRDzBSZfJudSnu8TEa5fr/9/8Z/WAYl2P4irEYnvEUhvkILHMKwFpFymHk41lnANYrPP+Zf/4IHHM4Ilixwjmj/lfvzQVfVALPHIrAHIH4l5hEmnEUIB/HZV3lYwSCJMDih73UU+Q1tk0FRYG083BDD3ozrYGoV1W1mvjJY8kdO7XOR80bIdOPUYtvCQdyRbctFC+Y/FAMoxD+JFFvtkmUDKnUzTnqwsVQYOVr5j9KJH2wWq0nze/UKOrwOulnrZm0qJ+IepoX7d70CNnIGt+Bmk7jJZ1Uo16b7sPmK4LpJkF2ucb+hrB+h+/vkSiXk/raEP3OgJGz9kD8XU5ANofwdzn9tfD/jpyER04c6MazTP6XnBwV/fU+BYH1hJP9qGzgFJ/cNWpjFBRR0Nwxk1nOM+7gy4TxoLvcJ25NS+go9yerHkhu0poAMe8Kt1pTcINE9oROC2A3DLsijsKpSaSwT5DPlA4mVLwStJQ7ohot4ROncq30WjtZQZekr17t03G22ilNh1nr53WxoC1L0V3viWO1hTPP5MsYNGizA7jIXog5pNhut3iZDYQ1zP9JSvQIDMwAO0ZxwRoZ++AOofjAEn86yj1Y5cAy+CSC7i+gAOkwoNK5tqH0Wxhzb1U7dRnGUH74Btd6y6wiv1pudCJHrnhbeqdDmXyqvfRPOp1UxlZCIxt4E7osrW6HF7g9I8bB+s7qcbqb7sjBfXaDgdRvaqsKJruz/E1quWKBm2JTO7XO24nHCL9bqEI/3pJ5dhzPikZuPhTdAqnkQjCkiwD0gLyNr6qc8KzG5kwXB0HRc4v2QowMrB31x+6TCuQIfF5izCyigVDhUplDw9Ah58GTAeYb1uhPRKjHNMc5vsgJiUtY+F0Ph9cnfRDzvKwQwgBNUX/tto6T8hbk7x5EGFHDWDI56znYmH0RnoY1RDWvtNi+a2p+LhC3GuQJLCp0xlvzPsCUxWioA70Vm8FUb4Lp7VQzPUMexlKLW05s8CSzR79uMynvepZsTKVxZ4/xOBdSb2dAeWcwpHI3aEtznA/ftQoYvomnx1FaDKbOzp3dFTDvfmLzSrNkVN84+jiutD5u/lOFtKftttoAF8MGDBWT/Zj4ssJw02FVNNipU6x3ubuPali+va5TFMVI2PNIy8ri3n1QrMPieXmFZ7bobWxkEB6byGHTYtbbG/AGYQUZ9Ym8XwRXu7WBReXRDZZJSNZ9PB5ZEBb/GmPgyfc76htjyMvoAnPigfFgd4CJnzZLGXaomnoHvfttVd5oiRV91uq0dpKkJ+7syhfuZbwLfCmjMV/2zLwkr4ga1JT8tqG1tu6DZdd5y5rURzBjmGy6FyYe1SYnDkajqIZOFac7VT1fuZC1jJwn4uVA+l5DPrlwJE6eSBZsRk7bpKM9pvVnZw3pGDUdPcPahJKK+So0i86d0sbJK5cbfAV2FROaQfOnJ0n5S8p1PMqxu4KaiKwGfO2XXayLMGyGhkimw0nuSszD8NVIt1T2rg9HLqSXcTHTJRzewjpYkLldx6znn3h23lPdk6WDYLKd03OUeCeQD/gnHOWmYYZbcaV9306BFF8ZC6Uxq5IAu1t8DvlGlVAbGlFUzMUazQfLwC/e3yY0Yzjv+za9DEbEYsaJXBoZLyL4bJDSbMscP8Yua8l2FBopOMy6w0fHyxxteen7lhms98GMayued6KifBrZs+bwuiwx7yD4dh9P4h+T57pGztSjBNs230cwa/GZdlzJw+JXY/LuMEWeAsvhPy6YFfjajPUBwCq27MRWSQnJbltBHtpC+yQb1LNn08am7/m6MdrW2wnq2USZ8276bqRZRAjhwt3lu9zM+KxmBozDE0XaGfHzNlKE8mNHc9PFdfhSVDp9BUm2AcBzdk/YReFnxkbgavWVCwJ2GXLfc5iRPxcHhgS+rBXMdJV7udQAwXBZek+0Rnttd6kSZQr+pbyCoVkqXgyWRvEtVJUDOI7wLC/qLvTbNi/eU1g6uJJd9KHeFcXUW2ak5QSWzsLIhuu3rlDjIyIFs+DKJuCi+h6JOXSLzG+4ZCTqRM5FEGYlW5vVlVcM84gL9XovzXJkWzdkSl3s3pQgsezp1SDbEr+1PspK3Ug0FyxCHSodMr2uqlcEqhrkDszhDcbI0dZbw+41DEu9TOwLr0OthUpSjzQDV8+iD8GKbllWCYzWUn5v7xaTiYBGXI9vZEh0gcvQVcUqTwLqVvc72wrjxATUE04VY5Z93IFFtWAC78NvGDKZyFXURvKJEXzht4Wl7dYMibuHFaLlAX8hSuOr+Chi2wRjjp/PpR0yB2SxRwojS+7Q92XW1cYAtjm8V1ttgK+Y89GlLW8erxJ7ztD7Bu7ctoq7EVQ8Gi5inBbHQ3ffYmaNs4dESniey628+O73NV65vKc10+sif50yw0zx10h64GdnFrYl373WsPfOMeGQiCXj517RBIy8p6g2z0N245hBgppdy7dz87VBZXFVVDUEWyIVcVjPORlCkPsWGZeGkeVyDNugrVqKJhjBqarVHSbwpdEmVUQk0turPTvPlSux5/TTJnWu08C4hcppcuRNSJ/vxXQNsGD7iQV7CZsXwSuMIvtD35avcpQwDMRCz
zheQnwdtB9Yl5wHLzWsOcBoUt7+xnzKTUSCzYA32vcxc5Z3SjlE8ZpIVfxWhYWmd9oZBOgY9QACVgEIdRI7MOxsKeaahJgsOQaI6yJ++vwuWFjm7SWG25uvv4tGYVGB5oDfX2Z9iwk9clXNyLPia5V59YPB8+yP11SJ9cd+FcG6GA9qgDOTEBhYgq+oFlTmMECbUubckqbCZ309aOFY18hcnkARd7hs2y9ohszXvkioF2GtsAKL+YTnaqaW4TKjO4AISTy7LIZvVsw9Uxrf6jtniQI0DturBoBHWQZ4f2ELIuHb+QSnpaLaknZnKl/EtqidHFNvEH57I3c/pemhCsmv6R0zrW8K0ITywERZfMoBUFSu9iUzaILg0uOn0LR3zxO5oXmHYQm3ovOuGLlrmikEK/MyAFBZKKvmxNZcQTeABkr0G0EKwIRxHUSbp7+lJZ8kjj9c+A6ChGtYJbPKr6Uh5wQ3TMLAsGIbzgg1s93dqPz4UBpYhfghkMKDt9nzLiHhBpk6UpTu7MzRTVLSPMepuK783KePCFk+Q1lXjk6F0U1YlnpOnp6+JRHSeWlTbzHcqo2z3ENFpyfeybn3zqEa15YBf1Na3XmOYfZMXxbP5gUT2IIgfVsRqrktnDbO6U3flyyRaTm89jTtEdMrgG2YHGej1T0sTZXMYZcUh/W+samxDt2zgpesCwWjdS4NlcXwXq3Nv7+zRioG5mAOToCpNMKj2NgkgSrFpZ9Qd1As8DYdRWeT5m3IKuwc5GhsEYAVjGutJPRJ0bCbicaFOeMBnQYIGaMmTMgqCbVGnuyOFLOPVRvf1UondycZ+x3b/eHS1mfWXEpWjpzqx9tI1ntF4AQE09XqbTygjHbM2tVH5XTu2DmrAjhBEmHtS/MICmsw3T503q0dQCsOAIBRWo4BLwSZuIct5o474ZkmFI+j5hDLvsHvySF4XFar0Zw26QV85hGhxvtdL6nR1qb0Af637ZqQUaWb7a/yOCHRl+x0uFbHfldwKfo4zUdtjyhYbl1ZVewGzoqRAp2Sg94afwEOms7zTbIV/loAggzNTBAj/bsx9uw2ABxBSLtF6YWw4wjnAM9kEh+WnqT2SdfYV6MqrUyE+cDHr0w54xJl5aWliXFFkBceAGO8IKQO95e4H9/L81xbO1iukqlda9Xjuj6sbDYDZG9hLyoS/vGznWu3jblybszA+nM9XEnmSzFl+uW6SZ3aC1XUrKviCQLjK5HbsdhrAwo1XTNzvwZpnPyKWz4hTXrKSm149mb0kruFoCl4PQk1eyfiYwB4a6w9COys0wAUzVZn0w9JqKLRhbJIBlng7uwNeEGnxNXT/wEw5+JBCJ5e+0cflFh8yeDG4iBmhvzRtNjW2XwX4zjERckpn6havB+rhGviO4noZGAhSpG3FRbu4RMxgS9j4w+cbG2ta1HeVLx8dK0/oeoix3g53PtYxiXzSd9fivjkpZvvBq1q+z2s6yEyNCLjvhq4zWimNPm+XsJYb5C+OiZqMxjZT58oCPE6WmBdhGP3BS0eQn6yQeYEyibf0frhFTFfd/hNMSk2uXO7zfl6qA7ONbCYmhdUoDpfIrnukDDvKahfEHDU+a/ZL0QNTQ0Vb6fLKcbgfU9qtvlCVW8Tqxys0mbTfbfvwbui5Wurau4jlEZlrmR07yCW36btEVswcBGk7q+72tSDeY/aW8PmYy+RWoHr1B10bxVV7BExK1ruBXhxogS5mNPpQKEdnkisajsKM8oj68u5djBBPgzKwvkp0exIOAyI4kNkPIooXnH4ERJUqFwfMBrYX6wiuhKjf7Dzs3/+cljfKO0Sst2E+q7DWjqJwdSAYNDB4OJW8IUnITHYRCjNb8trKoEfwxkVrdjUv/jY8V6tp5XtrCmx2iQcqk095BcZPK6IZMcuyWfAZ3hYIcPYdwuvld5re386C8IZqUSh+cZyKNT1LPcvEtUavDzs/CmXicgkFTAqgn36jr7RJ+WzZFE1H7L43bS+S3iLefa2VKapw/WxUzq7L+8FQEb0wd/JHoB4vFjwRO4n3PNB7K2E+4iU8nemNOqtR5smixpdeOLjJNwP0jraek3XTvOsupPRKL8zf7Mfh5iMkH4nT4wu0UHG6PTrlfHOIGE2HlpeD0ACrR/prNjsZPhFZ4Z2LEweU7SIuxrmHYz2tmr2fcU8z8FWbFoTnQ+L1okDJTe9iyHEapySx+ftqdmYTPKthDnOooV9TIyneGgPqM5d10irj7beN+QQg8kETxjfwvIO76g8TRWvP0V3sOdVMeAOM6q3bbZNNL1rz8IflfSbYuuKTTGFGWQo2zsYY1sWbdFtsuSEqlzbuvaEKbt1E9tY81iHGE74eDxv09mqnJZ+zKGcabxECIVQqFe/tV2IEeX3ewfu1hpct4znpo3BfKBGTufAwKn0YaWoBjfstTtRgem1n0leYn5Ra8AGZFJNv3/04EUI1dtMID7cAPA0deQQHa+M7Ygk27AllohFN9vs3VEV9fO4L/MTwrRlDChNTJreXKcDj8idQssLWVXx4+cUI9/z48OCEOxrM0vpyfhsvd2PU9vc8MUOYq7a0woFPMpdYEXmbTQJJuewPkoTKg4mPVJnonS2kFj8YOWAScOT37H5ujV+jzf6sENDLgzdylDwfLzW4ptu0l5gDd+g8kLSWqbHSJV7L235opsyamSe2Nzw2KKFvNNRXTY9wmsUuacsK0b7RTBhP1rW2dLXd7pd9FHz4mHqY4kaV/IPffC2kDyDXTDqOR4R3JErzPZmyKmpz6RHbBCvEYfjyrQthZ1rYsmA6LyGFb/TqUTqqJcVTQZ7knzmVHgnhh7CuWuRwwdmWLI4h3rKeHexPdLkZ/PG8xXwM7HQxWpr1ghEuQiCiiGJ6H0MXXwx0Ad0SXMFM7d3XUmwtRUzrPx5h85iBQiu4fmivmRUt9lnF2lQHHjVwDX4Z5BfC4QDoBZkERtOPIpggE5Hc9a9aqM4gHNje062VfyxoKV1wZTTEe+OvNxLlCLqhz4vWXC/4qjkdbJ6eKHNKvYGj00oXaIVXnmzBfWxH9o15F6xozIeStC7697DcdHFiPFlRRN3/iqxl+RrzF0wzMEe1rAFzaHA/Sq8L3K9PHNowL1SxdxLZ13kFsPnqbxeQbj6snS698o3U7+jDLXMNiQnL0gXhzn3/NIj+aJlwokY5m6Z6HW5MF5Wg4KCm5c0IGY3hV1Owg8a1q1PFw+7fxnvZ8ELAMrsQ9BXMjCTNdhhXquK7UGa3Nfruzlzsu8YQx21j+Y3coRihmhpfxs9NbsfvB4YlWjDD0Enh/72Xn1zj8oXG2k0oC37nOGHdLfprebua+3G6xwPFfgVtnsylo01hMoXjcC9/M3tnFmKDDHCsOLJWNDpxh0V9T0wKnlrTtt5JYFdfJQs7TCgQTbgHKboj93y0WcUsbXvUCEOSIcbUWBwMxo3neFOQVurQbedlnTloSpIkRlEL5Z5cg9Jji3KlrXXGPBswuRMTJZ0ql1GyLKe2WGx9NHRpfU6SI3JrTGVjyslfJOXBszmIhVb5okfPBiosBtH
rKGE1YsyD+V8Rbt5l3z27AxAMLly11QAJ5K+E59s3RDhbb57xxvs6/NnTIv/MaXAhPUuYTzxsELZBx7gyNKY+MzM9figF8A/Q4VPZvbVa7h3w1cmIBGD4WZZaC2ud3V8ViRiEvBXSXA1RnShrSpxmw4L0Gd/t/n0JZzGIy4h9fFKtQz4E3l2QXSMoleTqsJoIVoaUcsHPW9IQa21a3yvT949eVksGJ94mez07U4n0dIfLK/XpXRe63dQiCd+hzUbUFZP5Prky510cYKeqHQe3hQOc9fXze7zMlC3mFT+7W3c3XXNN/Zzd5Miix1g0w+yVRy+eTIbU912T7h1cUEy5xXhGP94l+PJZmjtmIbx8alZH1R1iepmyy4rvXrkYvabMS+3pZvoMbDezjwGIT1rltUm6z60Ha1jmO0wafSZcUGKD6yBt0VD5YBbNZaq39J8m6hvfzqPbSSGITwu/cng7ADFKTM6lpPrXOjsR0QGmBCZVVsVn+HLW6QIWRs7vzx2mjjiIPeaGihjoNmXrx7NfGYt1G4fUWtAPMxodOc650HHGXwCThEm/abXlll8alc6uzzbNgOu1aXSDbVSHUTyH/7Fk+8mgJvE6Z+d88o/mDeiCi7RPhG2mJf66jbdfCJ/2p1TMNj8Zhfbt3G+nQffIM9OZ0LWXFTxUoPAoCpt+qE/7Ls8FQ3TJzwq0MytS6DILxG3Ue9maRayrEbYVEb6wXWRaFg9JUtW2cnro55XKqCQnl63c+j0+H7C9qkkd9WVVnXCtjfiU5ZKwoW1ZVj6n8/2oAkRxNyIPkzbVZN6DZoEYfT90djE0O9y3jBh14thizqi9dRcY8q+NMIwsl2hvT0df+tZbwcCPUO6lloYdFGbeMiW/AIsDFuG97ZR8ajnm7y2x7t/MAXBsZAdTVUSv6JUc3N+qrDFaswgrZMQ4H7oAXbgP3IB1gllO2PskYpfloVowGGKgtWjaY2CKBS/d8Ttg3dxQ78My5ObkRNE3LKX+RULThSL80KsnH58wvaahJrhBvZGFAImF6+TCKI7ehJ1cRtLoQir9gvBJ2+Su5MBofWXh8PJkPgmaPzM6eBePGakSdEcVvckMvBVObVaqU0C+PalBbDpuJHRz50HeCfNQ9arpWm/pYXo0B7vI0oV9Cx4Lq74tvsNYLgthy1OZdmqc4vHB9nKw+NJZZRnisEB6i8fq2Ll6TWbtMZPnJO/jUPS0q50Jt2hvFqAyiVzjK+pby8FcI/zNYsb5kQUYBO7Au+OweMAjMfw1Z0TIRVJlwRd1aIM7Rny95Wn7/n6Qd2EORjCvCFAK7slxom6uJtJiTFVjV/Bogqne4jK4QZuy2kcCG4yk3iIvfTLPr94oIuXeM1Z0OZkXpwE7LR1GlY79vncLd9PbZc0e9p3hAHPeLvFX4DlJj7rkQad+lGdvXvJJXgF2CxY8Mh8Igh6k7AKyGQSPczUnfpcrh76Zw7fNg8fFoP4szUk5pJfI5m38/nsKG6dOxVRP/yMztneog3ROlC3ZafNHU95OKd/4PkrwHDJvYAvvfakGVJ+k2yfWbQt+9Xx5rNPo3jKFTTweUXejbyC020LeXS9h/TZo4njTUOMWJXW5u/P+ZJJ8eiKTGo+CYU68Gas5M77aOoRSXQeCcexZA7312utiUZkEqFDBymwZXiG52hAkTUKHGF2SMcYp2uybZv4lRRWj7uqVV/N/NW4In9SfT4QLTxZ7ZLfqDAuxfoFcSEuXsDZzqcqm4XT4YCTBl/qUOIcU5YG2XQXR3Z4xxNVyjMhNDD5S4SFuGhEGbcA5bLdB6DhLW3mCYhueNzgudwveH48QC/Fog3kLpZZc+8y/CsANuGTLiDaocSZ2NmLWeAZ5mF/bApHxPY1di5G5KvdzgEa1wkEbWS75su1aYh2Hr9lhpq/9rQXT8oUwVdpZMyBo0IOFxvZTuoeDsleMZeXVm9fTyFjxbt8XniuSk5XXubTGKvPa9jdS4Pr8QohzfsuAtxFoZ7g06FkeZtbv9rCg5nnfXIlMr3bmj/d7/7K46vPY0Km3vAsvGLYwpLN3aMU4sQuUjAja6bzRuc0OdVUJD1857nHI8fxoDJvBldf6vH2icB0P6qFdRKCq5nbn6kLpyrWMXoZn8n6eFXAQepFiZjopJq7gk49UTD8h1W7pH1sfL3JDy/uRc1ctGPk3HxfL3MlnD0q2WYLo4yV83oqzO+oHm2UkO/kfTzZMoNTIvyphZa26OvcaUFUIqM3JD8+kC1HTP3q+9A3uqpdpCg0Mp7/voGohuLD2vFOl/NKX8/FemJVwAufgLpOM7yxVwUtiF4uRJOnMqY1/Vbdu0o3bRGw1zPKtdS0QkUv+F6KvTaVo83ZyvO4Jwt5cF0RtQRabGrmd0E9vPjHcp7g/QuRdF5kkfp46jD0kPUBtObd5CjqkhVthEQnfW/HI0UIU5dXTptf/TY7Xobj70BQYY/m2msb3+arGABxNZv91WqnriSqwPJWiav4SYbrEUHEeS4KI8a5OdTjbJFWz631Nz7dqOIpzaIzKphRQ/vCkUF+l3pn4SS7UMCE/IvsiJk9jAQzC3jk8PU+4dcUqCaIcp0qa0QjZ00ACFavNEkdyGuZVk3+8cAmYnwarSr4JUwp1tXB1vlDHqmMYo/vaNgT7ijo77eKl7FGEOTuWw1gXBwZCYx2657Veycb16wsPf/sB8Mh5y9/Vsk9qKI/ZzGDhz1uo88RoRc0XWnuavQNsM6IFbe2R++uJ55Q1fiTAikAq8I3tOh+5EkhD/j+UXqCjOVF2rZfZWkr+1t/ZLQKrYheYa84J6Nut9c+KA2gPHluOPa3POYnvN3d89JLuuY2+LsjBoqARxIBOncmo8d46badMLHb8ZNnzNzAY1gBHLfJEPH4idLvnRiNWVneAmYbaWcPdjU215LF0916DGKaoVF8rOB7hRtsiTL9s9MnERxi/EUJ83Re9uLhIcTH88K0j6WSDaXNr4IQGkSVvw98ZiaZ1r5oB7WG7lmLJrI2S/Sz+ck8H1h3ln1yz6R/1TJOK6ovKlSrJI7iYOH9r0Y7ADcj2h1EiFgoJYxoYaJ8OOl3R0Q5j/EzjKi92+LtvLz3PluTPXn0wK/a2frR484QVZQLz4he+EE0Ekpb6cN3YxEVzggdPtlgJWyD0nw0pU4ZSG3n049/178P/atuMkyDaeC6D4pjgFzWeyVb+gijAH0b3K3eQ6JZ9jQAHkY6HucgrbGm2hl6KECELU7C733Z2REBNL8RYglKzEiPsxhr3VwZD6NeB+9jEikgPZ/6cDg08CQR1q4EiZ+K0FOXytHuZYIIwblvbSdhi/N/uQbC5YJXyzTffi3xrsTgFF0xhzEk71XJuIsskaGLZ5flISe2kjtAp98O7W46mnEaeco6Oe0BnM6EaVtxfLE61TDmMKP4YtcybzD5pp5THSfJ+hXu69efeThae8TDn6TQV7h3KH0mmOPY8pNy7aUp6r7YmrlR5n5OEHojJnuGFeVYstL8HewDS47968EJH5NuhVTE8Hz/KjcqT9E2PfC6O0VcWGJFjNG+dpq
U6NybrCcRIgPg92ELbYJZlV7XoxoqlCUsFOZ0GwWV2oHBbeWaJHotYQx6AgPLi69vEzA0fODfViFLA8PFfMpQCkYlSK+q5XLKB1WASY/Y5TdLQOvrZpFrfgOWKAKfFl1r1j75yME0NwaiHk/aBYkH4CENrJdbpIl/NkmhlMCMETJZrC2RU6xv1IfjMczIAOgNXK7gTQSqNvLANjIbxIztN0TQVrEmcG0iSxDVwnnkX7ByTPQzEbUD+F0hKiZWfmeihgead+E8UuPAoV+h8bMuSwHzPOTD8wNI0MpAQA32HBAzEUIQL3Z8aJum52XcH9J0W+bwmE499dRmzBOZCLA5ouqRRlJhaxJfEIF4Bss5XuxNu0FGvNlqEnWdk4h57cmHtQPKSbrncDM89Iu4Z4ANdcskL/c7HvMiIY12Vt8RYSpY2E6oc+S/Yl46SCQc2RZhId1TVE3i+Q1j83VP1KQJzfc+g7w7OeAp45y2ctKlS9rXvHpZr+5a8nGbee/1FBJbxwOXP6XWtx4Wx0209ZKXO8JfsUEAPrUopli33CmII+7gV8KUkhklH1GV8ez8wM85nAb/OZp+KF4O7PvwM8M0y+S+TqTDKstxsT4szD3rRRjhVKk1rZKnEg9z9xrH3rEJqFyxxuyViBEQF9SQdqrwtrXuxpApboNMgCucK7MGSsB+EHsKzLNwnVO+Iw3DCEwMXbZQEIW8nAXlG+t9zpOpK0uXTOKRMwbo89TyWzT+nCkhNm5LEVFKt3z8xuOKh4OnjPNavCyfLsYl4O6pQY8/qVbyNnDLIYWe44KQHNcwiGXvWLWHR2rYlYbH6gMwzwv395gpXDaTJ6k0AwCUfpQRRfksfOp83DNc5xGT1rZBt7ejS+WN9JA0Kd0RVo/G4GPWGKO1hdZ5fua83LR9Za3FTJcNDMFWagcoSXTD4HdWItyOQ74F2fbtPGxTaOkvqleEPqn6nGbRJ5tl44HZlQk/rkbbtfAVombuifdOW9UMUUxo1tNmz1aZQgoW1lvrGVWlsY3tb/AsnpKgXux25cm4a/DNwMySRSuIpYL2BuL/5P6JCPwNQgDCs3fDkDw195Rz9t7lGdobXeZRmcvoNAztPGevrvhh1SQO0qXeoh5PA4Y/w9oHMoWfsweJ0WJinv1qgpQ/A8gpOYdiLxVwK/S8OnlrSQ6/EIFm7+3++soqsaRike30iZA9C5BWvSLSaQgttOxLGTsOe1YxtjyQSwSsIkL34Hgbrxixs/F92UT40blXY2EzZ3s9At2vMZxe3adxVgypMCkikh9vwtr9k9t4jfsMgCeiATvvXQH7ZK4dWaX3nNtUt+JZhHqX4+5qZFgjnJEwlx1ACNbXVW30Xs2WLqzOs/DFRN3Us1uxA/E54/wJ8kAnemxXoVQVLLEJsPt3ZkZYHCiDxI0U9dpddvwhIpeC2vKcFmocAbN0SOONGo//AqH64zmnxeX8118NJ8DUgARw+hnHfrkJt0Gu066JoiBd9U0UmMHQSuDmfiBC23rt2Y7F3Xc/FxD3YGUm8eUlQkrHIC617fazsw9+gQu0Cxmclb0Wm8jGewWsKHIiz2D4WRVauWiCxRk6dAbvXmiHSjMSMaHxoo9yEDLjLudVOS5uf7XoD6U0X9Mpy8jRZbFyl0HyrMlMXW1lN0UhMXxenrlL7oTVXM8YF991lmR/W2nezimP3xmbo0FkPcgot+GJ/yZp30Zs/oTM6WCecILBTqnhyS2vtSae16FqVY5LRk5/z3nTp32WzLb+doU/KK3P/jiT7qZ35mUqJn6V8xr1fHafkxYiTMHCntLdCrSLWENwnhXT5c+5BccFTFywn9NKc7nFCYzGx5Pdjloo2+7ObcEeA6JrofF1977sa9j1LmdTgK4zJ+TsMqSw/tqW/+zrIagjg8jqlnBjFpmkiqOeWg5sbSRvMfEU9Cx28f3keNg9sOpiQhgqX25JGAC2dakruLdqjVZ+mdXvOOpMnzOMZ2jg9rWnRjHDF/v1JsJNg8CC1woDjmpeN3JeXFEAIcbwEP6GJBueVahTQWCdDwKl3ION2kOuGcuTLvfBFGFvB6RE1wYUzJt87nQTGH0XQY3kX6GSVWzJn/AdGMDDUfqhcMoQiyOM/QLESA+AnyX69N31fF03OiN5UZX4/f6em3IS4cQLvXiZp8TWqyXd6wLWpe/t5/4wx5Mjwmltc5EqKq1UxG0DhODossd0198B8Icll0L7p3Tdbqgz5fU78UcxwHRXM6nNJTdzf+Hzi6zXxAL4G3s9mGkezCdafXpdg0qbHCjrfc0Dn7NZNxWeEAT7nY7YUeYQGmwSNrnbfJdf+oGLjO5vC9iKxZq/7+RJP/+cFwOoHnpFGvu50ZiCw3cYY8erNGMMVn6wpgNIjn6f02rXvJ9UjBq6NEpj8ek1qsl/37EfKV8rYovdUxm3Kj5PdcQh421HmZgf4HkwsH16G1vPJiHtP/R68BZOzt2u8hGGxek57UrfNU+Z+aJGzFIdVzxp5O/wYIwu4m521Z/kaRArDCa6zr4acQ0+X9dcU5rrx8sM56t5fW74Pl9O2s6zan738uv3o4qvghOR769EMjQbFGSuClUbbiehzfTtAadGD42xTvpLGHRxP2jW4y/WFLtG3zJ0bbMWz7T5UKdzo/2PwqZcPsyMt7y7qH1iJJRbN70Pv+pL4ZibcpT726fmODIQ/KtikQZI330epSS9j1l24Wrr2mFXw19ddaPwONdh7I52WatmpzhVjLZghRr6xKj17WzwjsVEBjCrNm/f1ypJeVVVFARMj+JbOocfqzij9abc7jniEVgaYOY6UqEOuhJZf8yGAzN9+uzZC9aWPehHc3q34opf/s9XvuZV2XyWLzNe4asuIiA664nPAW0tKTwOpeudHOIyFmJ3ET1/fMTW+fGdrs2Lri8HWQPvMSbFbQVu+2zuVpXeZRLec8ITeOC565iOnBs0eeFKQ33OCqdTX04EeBu3jzR/8fIKNGNtYpykuHbfFPzm36jg8+jFR92KMpXwoO/vhkw4K3nFdN51faaGRbzCzVqR38Lr3rPXKzZtKJSjU3wcm28bSJSBsMd6v0I3tBrmIjG8Ub7zJ5wRdBoEvnNyBqbNu9F8vdVDTkp/6gnvni+M0LjIymCKUqv4daYSEJ/kxf9oh2QeYBV/sjktFA35jQQX0WK+TK5PIB7SBXMqpkrQwifhLtpG/Xurvq5KPzocmqkQDeira93XZI8qGyDCAOt6O7sVQPsYrR10WkdpCdcZvnsHdof49PB9lW5XI6hpLsz6uYOoAQFMo9KUSSrxkfMh1Z2KpaGJEZPY/Sq1xU/l+POEXZKreIuSxWjcvA2XhHdfiU2j5/GvgCsRmmdkr2AYvYV52F2nd87GW0YwP/vVvx55But53Az9CZXzdl+v4UP2TkPO2WdV6VxIxKyEo4bLPuDG3wDmHCXUt8vG3ovXOH437YmPKy4+rKnSUyeaRLfC3y01mI4HTGhCjbltlir4DrJlzExOlbC2DonfNES4ZwA70fon/1QSgCZaa3oS+o0vF1DP471/L/ueqYcd5I
L4zqy0nIB/zTcIUxgCYhOpzBDqI6nLq2TRxkqXJooNIwn6+mUrxuMKRWmzolKCG4pIxJZr4/Aq3+mI89lrrdFqFbf6SZ4G4kp+39D4mGNK1+yjkmPneNbo31iIJsoMLWNpsK3KHu4gBgCwronjqszIWt4hm6bUzHVGAiwh2w7zq51Xpu8nh0JtXfjahorf04t80njIZ+Zvi4vjXQAy/FK/fUd8YJYkLCP6Il3CaofsP7CefnyAmHYfZlvS3mbziiHXJ41OA4N1VnPjqMImAVWBpSB2V/ID4acfnQOfHP5NhsxpOaPN3BcPmQyXDnktAZKN9xnu7pKXFhP8xcWxQlpPbin6RhbZuF/lpgqGKUtGotyQbQNYDNn9fOXNdJRGHnwKt3S7ELes3bInd48s2ahxFXuhk1TkrKAoluXsPDRb9H4pjRI4ujDdMP7n6JPPosBa1mSLKUih2jxWXIOO5Keu+MbBPpnr8xI8x3MoXkVwGZozd7z0yEtqnN+9UcWYP7E0V0LnQFqKapspqNrbeIWk3xTGjGNWaQTdWVTpW6OyzwEQKIrNp5lF+aIkSyIDlrD6gUx3kW+EB+RjhIZmVCrnIEqcnItoTOacuEqGsdAo3HNDlVlMa0C1Oap39DnmL2oLIwTHdxYed8Pw50gsg6oqKj5KwxTDMYCK2ymVHwU4uuZJh/dNgPcUEUwCcJLR0tuk3zc90wNpyy8ur7floJdtUaCVfaeI1fCy9KYYqfUtfxt/58jImfdNV5zSqDVSZbR7qyoGe2nr1Ug/toYFV6Ja7MG+YqKT07RhlKU2jGeT/yrS1Bcygfcj5BzRTTu7OKx6DXgpvYh6J5rX5DYp3JdFON6dRtP8TvsxoY70BMhWnl5XQiF4wXgGrz5/oUD0duc7fQ03+j22wGRu9JzWYhVdD2nLQ7snB+m1vJFYnPT60pb7slDjisgtu6/q0vLnDDOa1kMQ3SjYW5MeZkLJpugbya8L1T0NxGrv+yvFSL0HkM8GvbZF9GUoFok1TZIvK+MQpvZ5ktMOS4RFgp4H5z6JFJHbVRf+5rTfER1nR2LobDBrMwfx+bTh/BS1RoQ6WC+R4mZPm9S/tvMlLFYvBWdVJbbfr3cobIc5I1TcvseZykPXK+oDMf19q5IP/xxCkGRoCfquqEW065wsxC+GjVi2sEEEoHKXa7r7V9FaHA9vE6eNeQvsyIffE0Hbf9Rx3SohSQ5jIHcuLfChPbpmP2ayTmv51in0yWgsUTgXGNZ6HAq4LS9PVwmb3Wv108e9IzXRfpNQ4dpF2kmLJ35xS0M62XlU7gAKDS36NmIjWcQjymp/1SLg7OxNsHqLp9G+Crwo7C225dG5e3/pYYuKvKPixyviM/HW/fTAMgKw2otsI+rbbh2LqNQ9yD48cuEiRgj0wZdQfdul+SQGd/pD0nxQI+XsGEWHrTO/oj6dwl3f6DozxszAtfTXN80yPHbonCe/GuBJFUjRL97u2U1jm3zJWV6iYu6nzUz1bFzb193kEREvRUDWjKRKlgErCj1SW7Of0MUBrBhplEimC9g53cHMGsMY3MLdM1EpCDlIFpLOV5plHq8f817u4tnPX4qJKUNobCGJeyvPeGgSc7h5NRy1QnFy8BWEGdK9xeaTMvrQ8xWT7yArzaojFpUwPjZAGvXnu9AwujB6I7OaPFBzVL1gcxSBA8gFXLKjsVYVGlfD6v2e+jmP1lpM2AKBTwml7AftprUfklGxfQe1sd69pKWwOPov/73TR/ypDAxs4WUoTONCM9Rb2updvWMYEk9lWtWXk1WCyhdvotwSZKCQHna+N9/GCj6ZHNHfaTrwtrRfuGA0eRngv/RXuUWqDpjecD8nyXqZ8uB73cZ24KuBwaZw//UaIjdB2oBn6856qofigM4vUaAWXZwShEj3DJEChpELeWjUaV/Tgv4+R3Dn48aQ7cnlEcc85SkWZkWOeKJkyx2AVwAmThfxEa3TOS53ahrZAhDaSr3ipH675sl4/RfP5v+vesj+H39V5u8P2VPQf3rIHqb/i6fs/9H4P/0pe+zfT9n/+yn7fz9l/++n7P+7T9mzGvi5JN31gk1hATKcXIcJNtWDeK4bx5GJ6vb3WtcVm2pe3SvPFQz3a+Cfq9KYTx2IaE3m85m0PM3qxeEjIuRycg0t1+cGC3ShfkWDqoFnuWgqCI/uO8ocAWZMk50O02OAul1/ZmtoM6eUkYOgExkOCjrdY547xqwH/WkRDTb16sZkWC+Q2pHRHGhTQ04wzQlWQ0Z16+70gkPby3JP8yWntfpuK26Y+cb02Pg1+Z8gNRE7mtjA+K05dc7fL4ZrTFFKf6urDw4fpz01oIgpXD99mWIFsHPWNya4SPNNFdiQCj6pfa+8wGt7u77DgUmKEROAHyawHAsA1Q+Lk5AQQHw/Mat2nP3vzxhsDjeJcwTkn2uxJEZZKdxeTMZXGn7GRhRTDskdFn1K7eLHoStrXh0LKWmxZwcoIFwO4CaJ05qDSR1+dqBO9nom+WvCjHW7fY8DvRVgfubkXkCgyWgqYMsxnZICXvAJ4FMQmTggOQ0WJU1LgZS9en70DeSgudhkQEI11K06O/Kc9gE3E/KwhIRlcqOatVDPwCGdro4naX4xIKbnIm2mdy8VEDut2WOf/2zI82yROHlpKnlmbhrOSIQlN9eYTMA198rOoQ3TXScr1r89R+xLNfH356kHw4bn1nEH5Qzz11qRVNn5prUuLHzfMDYhO5MwgDPrZTUq5Oqwvnfc1HiTECOBkbKv1wIwUaa/o3xEU5y+AQWMfffDbOCzNmEsYEApyuSjyoi24Jedd7qf0r/Sbj677Lr4G55RxYQyvdkCvaGxSc624Ijfky8keFBh500EAAZ3z1MPSuEGBoCjjGBKYItUbA4FfaoxmkXiTpq+wBq+hwV4grFbm3aIw23kdUfRn3Oo5093o9jkIHY5xO6C9GPPkJ4xl37UqbmLmv2zbwb1HAhpj56kQuc7L/8QPt77niKpdgFrxE1FzRU30AGBs33TQCsKGO06ojfVT7atWOn0CIJxg+qNzhmTZZf7ar2dXRVswurKU8SpkmgDIFfVCEOHP9Hp7HWzBaNsBjo1Uups4fvdxsAW6op7mYzpVtD+LruY85bA5rjZMf1zRdBmAmQC3GtpyB0IC4l1eN45gNmbCZ5Ig6+e/MiTIhkBfvXSssmLdAGhhs0HytI630ZnRh5orWrlC5uLuX6374+jzFI7+V4AfJL4eAerTN2mKtG0F3ASVjBtGr5NC12mc5zoXEndypvaihPx9LIERfsdzA8ijJE5XLQdLpz8QcHGyGnptJGgpRWCHowHLZx6nPZz9vKuKFiSHq7ldWieQ5XaqQd/+s8hhKdaozUuXbcMD2batEAuRDyuxsYFMTZSKNBq7cW2ggFvc4Q0oTMvoWVt0audYQtN+Uh7HhCTQxztKu+HrAIAX8yY2yOPslO4gyfhbEXCdTxuAh40HJODUJW1IDesccqYlHM8mnuO4Hy/4UnmNHu5Z7Dqi8qtNQYWSDFTWAXY8k4jlYgtSxtOiIhFW8Wst
MnaGB/ZRlLUs+/XnSmttJB2v6W18AOPzA1sHVbxKLyHvt8NmZCGzwlULlBS3cV43AN/e/SsZbQxGXJEM3rlNwS2ikcYdv5OAaRCTORvb8jdbt/5o/yW3XMa0IXo9LvxcsO8KOLJOgKAURgYGZBVMBx3wlU8IiozkLjiV1W9HBvi7uNJSCMuZJK5P+vtTIP7PipwhEKLpq5+XwfzJYL5WHMqL2l1Y0JKXoa6Y9hsWAK0MyxHqwuOb9+NszA/JMfY82TIi/tNVv2k01xpYHWMz1Bl5W9Y24rojjtnvJfgmYuWB5UqWjwJUTl0cEOVTJFgydHAU4TDizoymUyai1BM2lybiFb9nQqML7RbTHZ3WHzsehInsu3126enXpfxeH51MMOgw5g7wGIhYLy6AlZ05bAOi4bbe5QHvKPcdy3wVz0+8eLjzKGNEQDCcYKPLFrj6bBCXOA3JFrUGBpG8BUZMRnfXuehrBTJWBhzOPFcEx385hm724heG1QiI9hIwdm9lMZ0JxwnEHSolKzDDmasSMdk+aip0p+Yfsl4+D69eGR+hwcmJqd+j93puC7kzIPFulcejBILbGNY7Vwa5l1bsanh+4qzQUB51+3pMKjoKXlwQCM2MoPn98tqVHHYV3BSx1UiMlK1eTg2L7y6XwYMQrPD9tebOL9LUKZth7VxSbvyAiIt2PfHyGSul4w3+Pjf2nuvbceRY1v0azTuvQ/SgDeP8N47gi8a8IT3JICvPwCrqlVVvdpoq7qlc7d6dPda5CJBMCNyxozIzJjq6YjnT0SLdFI/TvjvqRpGe5jpE6CNKJmowOH8NrhPuYVXnqFzofiyHtUsuqXWRXrCcxxh5uQ0YbYoATIsFG3bDofwFOuCw0VxeDEYfEN50K8oYy06XQSzLbIlkiZlEaiOj4LrvCCl8Acx4xlpcQR4f5pPBUZ8/2RMDT00xvuO27YGqJMpyP2gUGwSWZ3On+ThpAgPnaIi5QymcYV7lNEfbgrKqzid1AoUXIdCxeD8Nd16+PSAM4ARe9vs0fVPyBXX3rpRlM8gjh4WrXZtR4H8aijeSaZme/VCERhhMWEjZ0NJ5UhJ5X1I4mnRSdMSh2A0NjpKprvAo0KJ1eSOJb1nSk33FBPd3fEMZCezwtr6XfSnmDs5GidHKCiEPGOneWbRCiW0J/fVpXywKKgeYF4+B4hiIz8hc6eKbYCn6KtQtyjCBtHDFRshy4V5PZ42YLAxACqo/hPLbE6fobjBSu79S9G8M47bqznZi34apICUTGY9a3id3EzlZYc8b+7Ey+QkqhbV0nZZtvW7qM2cUN4IDgv2sPi0Tq6T0Q7K62+quJzs0ROa7eSGJ0d0zvGUknDQMvJMOWPrqslGuVNT6KDVKWuvOsWOJ7PBymY/fWF0ZApPOoq37+Lk1XGIAZQqUcK14ZHWte2MdFyIe4d5enAY0tLr/OKk65lZce3QV4uypo/TSkZCRoaWtE148rKe8ioAHVYlKDxKlzc3yuyu0/Vz3kBlg3k9JZ0uDlQiL7vV+6arthkMKKKopQDuN2DQvO6aHE9eoQ8338+BDK3zO1zHopIUYyD7uC7V5Rfu1UD4dqyFVxeSbRvohB59OM4kyIMr0/M0I6L4/Qz0zn2za4+iQ2Qrm8CNDLvOQImjTXqwBfvodI39VGCBrCv3v/790+osBPxNmQUh/8wyi59T7Jqbi1OzDsT61t/98vFhB1r6Z5WW8+st37amzMum+VI/6fqrESUdfS6ZJOegZdMHtZS2TNPml9peTp96XX6upvyOhpPf2+z9+PMtf9zp8o+yKkRC35gVxYifmRX5wKrIDzDqHCK6rfKMbttkV8l/J5q/5x8aFWsuC85nzv6NabFxvRoov0fvr/nnET/B5LwEFrXDe9RgGDl/Lp+t0L2tML2t8P1rrve9//++3Py2znUxBBi2f3zW+Vvx+Wfz5bN/+aZ+6yrGiQplFzVfrhZPX/4ifarBAX9Bmf/3/C/um3Te2/PHX3B6+wvOns/9f1/edY7zp9v49taui/1Pb+yrS38a9i9P/+bk+uS6302vP2HG/Sml5i/NWj9PFoz4eT9XkEB+PluIHzBbPu7ODP/MJFlaZF+wpJ+WR1/0p4dx/3j2u8H7x2vU/ioYv61YZcuyfx7SaF367wD0u/H+Fq6uyZRt5XL7/AHX7+G7iS/6+RG7ffUndv/84J+z2tyvU5L9dqvb5cxxs8/XU46deondfBvdjWJKmELj8cvrrmH7VR+YsiZayue3veE/Muj7rdQ0RftXLxj6slvmr+3dbub13FdQjH3rXSSOfvshn77K53d95zg/3ejv8iXbiGCB7hv0AaPKSvyVCTX0r+APhd7PL/g15P2COb+Ium8c+hHA+9GFqPSc+HM0lRf8Atx23lPzi4j7//w+yP0vXH7LLchvHBoEPliagwjs53j5IyhjTNNYxjbEisjgSESv7AE0/4xoQd5kG3WpSFw41aWff2WTJprnMvnIit90Jv/KhCfDYDmO5vlPV/rpZeSnx59NQfwNwFECg4AzqmAoCl5//TJOIP43CAFhnIRBAEaQk4p/7zl/uZqGoxDD/Mv2/U0g/I3m5V+e++fw8mdoiCDg36Bv/Qf/3i0+xYCfAeLvuRaBfXetHweuKoobrdlGCbrjyl5Meo2KH8oo/P8XW80TI9Ypjt6L2D9Bapo1S/RfFP0nURTFvp0EwAcYinzAOZFfmXK/O0P7SPjl5xoCwTVYf42v7r3/dOrxkfswb4zNy2z6h/Ncmjqnoc5gzP8Ui7/zi3/YB/wFC/46ZH+QlP+ePP6nvP0PcgDsOw8gYfTnHoB+4AHglyf/FRcgZrULZsl4CH/v+vRR4Q8a+ZUk/TN05FHyrek/xqqf7P7DeJzaz/NHTqOeTvP38+fnJ6f2/D1Kn5cr/Qaz+03w+ZGg8gte+VVp6DtiQRA08yYWfxYckfh3WcoX7PnGGz+qA8LYH+SNvxxb/3xvZMt5ibok+yddMT3f9n+9L/I8wePwn+mL32XM6BfJua99Ef/IF6E/ChnB/yRnlD/TYGM4P6M8PtOx/yt9CwUYBif/jTiHQz8vjP8CzkH/um8VkcIHaflXLoKAtLf3bXeRv/5ban0/r+39UYnn1xW9XyOeX1f0fo2d/OCK3s+yShCAvnUREADIvwFf/fNd/e73pqsk/P11we9c6sclq6bxkAqH3SA+8cYAfMbLTfwiBvfvdrSfish/I0niL18XkmHoy98/LiVfD84c9ATWN1J9BTnfhCsUg//1TO5rt/1wNP8kd8TIb/fBo+jvc5rfrlT/KLf6J4pv/90W/99t8f/dFv+/cls8s83pLErkSAtW/kwNjW1bKX3uqIHoxV2nizQ4nvJ+tZMM54rt3a3rqsTIhjEV0vbl8BhR+NSkFcYlFCZCA9yGodrVRHgJ0N0YVH9421p/kqCr6vXaTgOGfmsum8ehEpo6Lla8ZXuUxjKAwLFfxqPPyFb1GopdCCKjjMdho54qpp2aB7VASH4c2eebRk3Bn0fHqzZF0ojtpONs0X63vw9pHVx1CYwU0FQZSJpl
oMk9HoKWcIQwMmhErDpHbzVx3zzV6GWvqMzs6rKW3WKRknHgsxgdzCsW3hc1h1oPwUu1S41ukOSneW8QlqUOSDYH61NT19ZU0eOVgfa1OTS8TmalLxYQklD3hptfrBPfyX65IndTpA5n2054QgXmZPeoT1XGxDMX2AVdGb4V6UxBEc2+8BYat+3RPo6OFpTz1tqrvXEQHBiHDFN41PNLxipDcfQeNGlJFq1O1/XUvtkmKuBTZbrUYEqMqNp1bs/eIxc0g4lqDW71ApF43fFfq99WiRM30CNDwrvu75NhCPQNfO9Qlb9WpxuJw94pZe8ojAX62J8CBlOezaVPl4jAMG80TVeSYSyuwhYd2oixPNRPyQiOzRsCab57yzpX46YmojnYrbm/ShYfEAQ1ETCp5ODuIQZCMkpCJHa7YxXT4T2iZ9ODawR0cm/hhg8b6g7KiLuv82MSfqzpsrtbdCEWpEKq3WLWJj4lxqHvdMCPQqkm3CigEhZiPrD7L/SWwHG91pgprpFcXfJ0XbFySrwjRzVeO9yX++6ag+JM2rFvXYwGSIIeu5JHcwPKamowvtdR4nWSt73at8UNP5SYsMqvvUBgqlrgYXTwVgCt3ELHJYbDlg0tnsev07nOlHh1rFG8YW8ADii30MoivBqOtmNikbjOkPN25hpqZ/DaJ7G6KcBR87jrt0TlB95fMxq/D1jP7hSzGrQvm42IPwVHix/eqEoY/9QN0HhG3aGGQpD0AoRKn3aEo8zmXeebkc3VksfyMnKkNBqj9WabQFwaA9bQ3wRKvu7T1BUHShKfW5FQ8OO4kw9vFEVzL8joOhGMTOXEr6iR3E9/X4xXCgFUPcGH96hv9wWauuEVFSy3XI1LboYGBs1CraZWjVAGu4ZCa+OyI2r6UlwJqgYMABD5qHOL6c4sLMmSGn8sqgjyEt1UFWWOCivwr1dOKoEJ7A4LSWiPWrKxChJORp4yqcSTFnPeQi+rYIaVkYZ2KGt+oJgfFOzNGnaHYa9T+X4KwTtyq6jdiLqmde42eGBMO9LrUVGfNOs4/tGnNy/eHis6VK3EJ+bj6U7eZlH3QBKhAxxrNJBq9d0L72pUNfkBSC0Wo4eQ9jJdfCfyVrlzoyq/LLeSXllkLnT1EPdHczXYk6oCPOf7yUWmZdsNrYOuPafZ5jgoV6eczjm7fboagp1EZ2wS6pHgSZ26fn9j49LzxjGpq3cHKX9giddXsnXT1TD+OtywqHveZo4T5hqXiNFLJCNgcr3iMsexn6yMQbdKcKJHIvpFs8TaC1IE4/w4kUYQURo2YMWX2I83h5i2LrSKPfGRXAVn3Xb1CKgOzqkz7qjp7ItIi18bPKvPUiiKru6RBmhebQ/pwzW81u264q3SgoADopxIM3jdErhogG9+Kb9V7M6rLGwY1klNIGLUePig4VWRD9Zhj5Zj7sOKBrSWFCieYl3vppDs0Gi4gYcl1tJNpKXj2pnqk+h1ChxsmSQUY91ZX95Y7bkOGjaAkFGS+v0jrl9s3x28oFtPCdvpuwQqXDfURPqMBneB0LCg0q1rr53gxr4JOMbqfgRbPs6yC4SwjdstR3Vy9qNqjkIclfzRqRonQWSmeYL0EPITOnEC6NxkLVa7LnaIekHEW8vOurn4JL8yrR0Rgpg9E54TVsSAWa7vLV1+lrNzUUE5VC5qrU6lyKU3nQG8xOxQwzCUkbcKeVvOKZEt4BcxO3FyvDg+U7VF65n6MbP9iSqaNFUU5vvB3XEEGcXJkQoBSbqXGlHprOBqWDMju0Y+bFVcDe5u5B54q9MyUbetExLv5dCMftxForJglJF7PBxPzDFvZtTynXWnaDHu2vapPz02ScTR9pJp5jdBm5n86RslgyA3+0zbkqmt9eMZUsonSTtKl1yCUIpKVkOs7Iq4VlFKjKkpejDeIPO2mz3iuxfoX4naKZ9E7QS7ZkVdM/A9fTFNQm4HqcQxvqyW2comLdLvaQQLqjOnql+bjcdKEO52htuZxDi6kn+j1DPIrthwvxmftvVSfCwC0uHFNzqSrDq5TqBdonaL0JZe6N3IIUrDHKMjenXusmi4IKk8LYzMopMiL4c0A+krCf0kZoNyr62it7vIbD6r2qVbbN+4Yxue+w1WTPcJkGtECAnuKCiut2iMVXQCvdZcTf1XwlXT4SKDhJYnA4IkI9GSihhLLrGLTHYgx3Qrehyik0trrtMlhnPyvUZx37p2+IqO84lhy64lBMESiqd8Ebcr4105PSl2a+qTut0lmfFiBFIy1hvc6FLYA5W6QhqCh++e45E9OCdJx8okWJxerOzVq2iP4NIE5MWHXdR6Z97aFSoJDTMvUhjJ0xUHtVHQdoMXYBkrm9Wk0k56ZAfoGajg+0R+9ZGWphzifUyB4V7VJws5gYyhfYVku7J7om6nLAKugtyd3nChxzHwHpcB1FXEgkBn7MydsfY7XlSC0q63K/CEilOtsn5io/ZQ1A7NwXOaF63jaaYqMXcUunTAyKjv6xcaJ4Pg+qOeS00F93QdkwPG3fF28u0FZ70utx6l93iSkYaiJ8s6vOkBwm85RLa/8bauvODd4flt324F5auiJJQwT7HdOWnHLCU9eakz5MES5+CYQf9IJvRqSx7COYHVaVD/pH43l9fzVstiCKLf91iDAiEVLTZ+vBIyimRqLmNvP2PiC8KpuMVItiaxAK6qDPNvOZNvKkZdXgqMIIM7WOHO3kv41PuJBnTbeRZEaGQntnOHi3b3Zdy5JskK/Op7MS41ao6G5idY7ME07/jdvZZqOKGL/t3X1VqUsWP03fUA1LfWcGYTujOfNztTFqlcTHnFI8PcW0+WetysruGpLMJU66QDaCt2IyMuGy9U4zaGBWVRdkcCIp6XDCfQPZsXjkcLizNjNK7OL2cEMC4xvKVAlgNQh1vMJZqofCuGt8FfxPCUT2J4gETzbrDRfonjFovVb8EOhcz2YNLRmGkV/RYnHDkLI1Fr9r0NLp1KdXdkGiDH5XjMCl+HsxkXFbhnpNfG3HMfutVshoVTk5EhGPEuyxxxUJz6KWyNURJ8K4mnxyl00OIC004AFXcD8G8g3wZALFlqkRuv6TpFEtRt6xp7WN1iJRwQkPI1VTo5IkM+t+NqO80Mz7n26mM+OOM4DdNn8AlTkKfol6SIfvMKe9QIkgAhfkDSOZglaI2h2zEUV4sn+1EkCXUCorz5QQrOywuIASN0joCiDk0Mxa5Prs5cFD31OvNEdQXk40JvAm+QjO3dJMoH/IowcQetsgx4otoZPRznmuDn2/gNnrr8ktF+QCRhuuOxl0iQcNPVqbma7eA5Y40hGsvVvMbf7/Im4XAgL7fWlsBDlCBQq4tPb59W/ObGrBPbofewu7d8DiQEb4kwEJXjZ2JLczxKkIU3GFWQWZO6061sHTSVYktwrJGBDOopJqCuzP1ITjOHYzOUBkAJg9F9KoGGelorVYnKHuLZNlUbelLhmzXS23USmx77rlyJGdqJstuPcyqi4jOAz9EWBlgThhPKCbFVn2cSxNL5ZRljjCo/vdidDni9fCIGsd/nzgB
9as+MYRSuRM3uw42N9izaRxetSCM74Rqru0Wou7pd8RZcAEQptftbChCqnt2ZGMBgvbwkXKGn1ZXoukEoE+DNOvFjOxtMM7Y5q8pslAEOOoUZImEJkyCA0dXqeNv4YiQumb0my9FBHm4eajKTQI0VQIlOf5etqaDp/bwNcOoaeRoN/M6RMBpxnYSHg51nYX9LiwEkseh1PUdaSkUUauloYdzDT0/xu9UCgnp9aKg/hyRW86u44DWE5U+b2c5AiZqo7/lmp8oSZRmPYMndoX0AnTf0zaFJS8mafotM0+v59PvhhCy0g66WzRCWhd496S+pPViMpjl/wD5qXtn26zAEr7mCEzhkvMkCQRfHI9pw9eDdbzML62lOTjZ81zcvvaT24ktqD2sJYDAwU8M7e7Hm4QSm5dnNPuc5etfB0zwtg27dPFwUblx4p/Hp2AJx2p6qesNvJxTOmG2WgUYmYiVNg3rbA7AZ+g5aZLG7pM9EaTIunI4cHfP4fUVRPN7N817uEnISbvYQl1eSgyOsyJwh7lMqsPJuviYPbLmYAw+3DozT4641P4kFKbzIHpNLWI6FPXouG6CxnwEgelTVe11wcALxwZwMRzPT+14L4M3iRumxVaKJAaUl/UNx7/QxxdJs7siu6WOaZyofxw4P3Ha0J5IuhD+J7t0+i+6FtfYW3TM6paDeQeNMjR1LUDozj8vJbJ7o5LD7bkAA6DwTwWcL6jpNCM+VHjeNXZ4sF7Z7jOsh9orfFxzm5Zgema1MWcRlmhScjh9OynNxFgTZy6eIorQBZuaxwWP5cndehV17UfpbZNdnbPzLJTSzxRkQN9F25iLJDLXDZKqlXKHF7WFvFWwuLgxiOA+Q/T3QAqp/6wPm+v0FNLE+v3Q5iZvXnd2KrKVdv72yGoSqHx3nIsT8AEUvvdrx0mSkaKLYFEVQrRXAvhaEqAHjHt99bTmzJDt4aWeOmZVRM7aXzdUns+zgQ15XCTkJuIHc7auucx0w03Oz2e7OBuEwl4y6KOiHUaEImfg5oWSyCtto2znhCVWZVELsldHxNwxM9aMSm2aOxpu+epcw8omrcj07R0jLph41ywikU2hyjCvNVsKB7tqrtyPXV59KVqAsRv/FnLNipIED98+kkFEYtFShQ47jhCIz48XfleaqdjF3na+coccPn9M2Wl2Icc7MMYJvYLVPJyb5waPMWy1RXjSWO7MgBFKAtovkV3G6Mo+5EwcXf3MUxFNltmllYtY0lIxKZ36m2JkitXN71bUkT1/yQNDNpkqslXggKUW5AF6Z4uR37pNdq0gdvBgbTuvfuOd1PN6R0zNV6gPRNKcbAwm9+knLrBkfjIpOVwL5pi/tVW8ilyW9cfOm4ATu0zgllmPlJcOyVwlQA00+RiCukM97t0MlFx0UoWjriovw+nTb8rx55SVFVodDMAVd1Y4lxF1V5sLp8VoBumW6z1p9i0Jf53RHovXo8qRsVn5M19fXp/gWHi7Gt5motYS5HK+NJtflzGj0WoEsJ6kWlyvrkL8n82YAatk2NOdv8uN0WfslkDQOtohBAvxGjMXFPawwpBDRhSZvQa52D1cHV4MxUWfT5geeFZOkUM0gM/ucuddGM3pR1eRW+63Lcrl6HQUPRNE6iQcKwmi+Z2De6/INMKq3PgnCt/xM9DBK3dol98nboqsnE42pkKSxdg3bWTdm/Yyvr/kJtcbdNIwMNbR8CqbMHNoe2wbCJ3iln1YqD5ln0PECII4h0faV16mTXKcjdPKHXGs5bTozf4PKUD5ZpmW62ubG8Epy3W1fHyV+v7r700AeoHEperdF4By9rqvlZp7pVudLoHocugkKwZkpOKayannufcnCxua83v5k1RdO7lbduNhQSWdCn7/F3LDqhLY7ZQV4xwc6v4ItMCKWko7jXpmmYBIURkZLG5VJJSt1bUP1ssovyC07G28q329eFmW2N0PZ4wwjQKZFVEk4+XC6QtFzkg7lRG9hJzTWanBwvzsYB/jI1tZuOGmPbTWF6NHdT1qtdR1luuAQAnT4FCwHvtFnVNqv4kPQv3bE0IZKFk/2GcmR5OMa2sTBXqja3ezYu1wj5kJGV+UGkHYBq2WuMD4p9pVzZtg4hNOSqrmd4J6pjR11iWs5SiFAlxNpNAa+1hrGR016pdcTUHg1yFD8gQtXx2GoM1GiUNwPauDMmkJc3fx3AwCWcpI02tR7sMa7LA3pXF1YJxEUeOax7AnlDXwP96eg1ejVqjB6XDZ5i/VNDdY+7pgs3i0nj+Vct8GNmE4LqBkw0+noxWHGcPf5Oo28njYwo/XeRsSi1paroHHf0/I7Jb+pTuYYRBndBatGhDwbRnmJy5nEm2vCr+JJOs8I11h56peWxWRMHvRpzUO16hD4pdcH6jc8x/V0OyAPZdG0vaWg4EJmNKM4PaWvkMGVx+6FybU8KMyS78Hxvd7T/RnqMyiDrM7rTyg7WXxP67FH4FGso7N2gc6eJnx/vNuK0gn5HLzugftIbE+YcSZd6s7l2tWiOCTC7QjSMCCWJcryxJSGJs9mKe5N7G5izFrepC4tYf2Ow27f2WilwfisjT+J9RUp7UcBtqiNtKH3R2U/EZuAYNOZO6+NYm6rxfb+2vH10gDADn9/FUXkomptCj59F7arUwzd3NcWqrcAaztyLJq1lPpGf8arENkgmleFKC6XL56MkrBMcuR2acgsPkSIHFvyG5FFfaSaIsUu7R1w3AwOOXQDt8rMxcGfvIoXR4o5kQyACwQnlQpyihZnYkwgxfR4BViJ1UeiOWJCeohIka57n06Wi7iSrRAvU2Yp6RwNlMQmqIF0vXKitD6BUgyplSwNlMI7+ORkc50eseAAUTzu+yXSlyDHVpYT2z+EBt1L1cATl0TupILdXnhq3dr2BhYWg8c3CicSUXzeuj1qcoJqAnI+pBDVRFYmAREwgwl8pi6UbmLy2qfG3QrkCHzBeZG+g7FrfRUHw3a8WmbXj8wepOFdxr98UA3j8zv7YFfGV8fyrAjIjZruXGzPV3h9odMz5auZtAEIBhmD0W+txXWidU0ZcHJrR6VuV7gBcyvwevjmgZW+2kh3GlmF+SF7SviiLyI+SorDh8TdRHycjW4Olg7YkOx0rlQ7ntLAmFZ0aGfJ2I9iVaTHTcjU6YHG5k4eNJMeCZlb/r4HAyTD9mXrEXvwOdbwPN8UbXYS5PWqSM/bciYW6kPaCKw7DCNBpkuKlhHaJcFWEJXWi8oVXtWVT490UrfHXtydnfnN5uFacQCz0/m2As853KBDOqIHbChkuDwpDIba8bhpERzS1+DdlGv6P6ZaPdxwSyZtkmrwqa40hkhmDp3DpoOLOddcf7c07pNsXz1BRtZ3CWIEtJoFgWAzc+3yIuVVNG9nIedxDSQY1N7MInz4uI8P05Us4iPm66XjcCMNX/0uZe7eMjjEsp1SfZLuc0FNdUmuXmAWwpgoUY81jswXGfXYxI7MUd7xDGgcjc9Vly3EMyebejIHuuO0kcixBguwiXx6mV+YSPZcDFpfTGWIm2WQ9nzw3GEuOWPKBnoAr0
7QvSZE8ILA5TWaflisGWlhsJlmmacRk+CixZqwpNIUNejsKSo3AWJf1fKuqFmYxsruTGH3Wz0tc62qTa65xgnKvGOcNEnHzabJ+wAJLkJfBHqX2JBXV27DVsVSFNkJNY914J98pSqovBEDMvWpBbBe6iySJfd942fhKxldDDzeVax7nVZX4eiJ19s5N0rTJG1/vfoXyXg9KnAY+BAQDGzaOMErwhp1Z7yHAx/U43EU5RoLra23iIM0dfOOLEw/Z5npXXtbaP/mASvSWE8BgUezpY1ruTlNVmmwY2+6tPrQqH2q6eJqHuRx+YGXGv4I/LHBkwN2YUPgXJFJQupMDgGobZHlqXWwvbKzHD/9iHIsg3wmkljmfbaNa2w7sFzWZcXqL+qcZ0ddtVP6TBZRfnqzC22nA1V8PBNdq/k0IhiNY0e95Jhy5Z3sJQ68JuXNWyDPVhkZsDYjxKX5i2MVK2K+mtpLUp7XZDd7jxu21YaOFq2r1nWxMy+8N6CAyU/8UUVjpY19TgYvURTfOoR4AW36kMCs1SP53sVUTKEGLL/ycy7oo3ZX/N2bVG+BCr0gqjHsKo9anuEMP68xlQg/8+McV1wa3Z9aEMsCxWSImaQGGILPbNLlldMyFHoJt5GuCg9o0QV2+P14aScbOkIJr/MhOK3eAKCcyPwasPeROwecmihYuuFXl6cDHg+RCWR7tXKkgj6v4Hj+IoXTMPTtXQ5uYX8V1viT/fnmtX5IgVETRIJ8v8NyXMy+ZLhXoApmpY4DpaGH25zeL0+V8OkFE3o4qFrr2PJeP25jIcvv9XHzmN7yJu+u1HYarDnHL/wzGSSEVdyaS99RWJlfp+WnIAC6NETihyNwnVUt7eq9l69aAA9qfrlf4HdOLvS5VvABbLcJUGDddXChUbRgSQ+MfpIOC2kKGdyXl4j4DZw4N7C3lRU3palTY++c8gnjnLdJWXi3bqZhxsvNDQDvVZZcQbHsYyZ83obvwAZYbbfZwafabY+3a+HYcUs5bBMeV+PWVSFwMMbjGNtnj3gHa6IXKajrI5IB6BR8kD6WP7xBd5TYq7MoOsR4mnyAUDl7RhUWMqWxEhVr8LKokNGbJOnW4fITlOkOhH6l0deuwCO82nLBFIYICiatbhwbWYCQMmgfiw2+MmRzqeWQGMDQn+Y19nRQ3wuiVF9EKCTobvuNsBTCPul7El8+fTMHWx42boJ1BZ7PtOhQCtjcj/DSRqD1Mx8U9hNjB/BiXHoo3c8cMOkuchc+rvCpnuQBTvT9Fbg5T5lParTO2EyOz3Zl3DM/a575Lu6Q5vfso+w/hVycubq9QapIdtfymV3EeYPIJgICWkoOxFTScg1TIfE0LT8ya1m40zMzmsEjmDV0Z53urd1HZ/lBL3C4axU0KqITpvbKrVMJQdJ1e7i9rrlXLifpGjm4FUvIQ7z88twc83s6V1/O2InSO+lqbpZeacFeLQ3iirkAjVcdNHmEaWnHcP6wrlXKIQAzWLFeuoQ1Vi8jeDvjLh44kYunyVTaz1U6Q3sfPS+5QBr1FpKu9JZfC1mM0fY4WcrmqtNrC10I8ZjAPK24PfertmJEGA+9CrwSo9En0yGy+KO+NKPSE9Fzp8ENMWx3Q1tzlgufQFGsaHB/XNVXzePi/LUdnkxwNyDmrNuywoaNllyOgCA4XGvOt7VpreySf/iHlt9dOAxerZ414Skdn1IG676eo39xco1LsNMYjUtd5fVCv/WLEvl39g4RQBLN1mToxx0T60Rs13qdWhqBCtZ4i/rFZbRAY3wtK90YRWMlDKwxT8ZkiCPd+kxwoL30p8x25jZ9iO5VIPDd7Caa6SVM5zsq6sPZ8HJOA4FEfVfUipLQXXlNu41XPp7tD4qitPrxYokez/fJtubIrsHVkguHVVBS80j5rlYeU2lOHdH28dY6uyzsYw70VvcbXcCICo4u6TvPokZyqZzb+cKRggX7Y70hJxkfh/FxV1MGz/ju3gNYZybkuLYEe7gBpiBaoJEvjXwCQ7NItZWs6OsakHZonIeacjt6A3TFUPeG2ORPun7kiMEWGVRPKZ+JKZREPjFmbLXn2huNkx9O7qKy3jZ05BTsZwAXbvuDREZDv/jojRhbXw9IwHur+xkluG5Ns0pqdaUXuj2rBWkklYmBIPyA7Hxe4UoS9YQfLrK2m/Gzu7qZa1a2pHFP3dmjE4Meu92jFA4JFDxTUVDA3byRUIGWrsUBIJqC9t5fkKz0e11lDyxH/BxcnHaC5zu6XVlpvNRgCRHVkhz25apKQXlUBkZn/l3KHpyRe7I9ZkFKeLugSMMcLCSPRtrRqirKTqeelEsxJZhYpgPl84a2Myv7SuivyKt3NuPbqG3wz4yP4mRgxomcaYKrQZ1mipI5URg5cGGOFBAMLmBZuDpJ+hgfjpMUlFhnXEp/60Nczsjo296C8hu8nvc7x0ZYPOOtRneGf13MyPNAnBFpxwgCTgf9uw7lx+FF0RAZGv76h9LfSWsoRuZiTFPS6qSz8iXFV4YM6zmw6PhUcunSBaOFPcY5Swn/qV1evm9XZl1nSgbU+X63anY6Nk8DYXqt9/1Vk1pnAPnt6TSy7UN3B/VXHwK9/XY1Pa9f38v9VfrtCsyc0JlycUzCW/LPVyWLkYvbyZeS1hIf40mAvDpvnoLoXWm/WYuEwuFYHw3DiPYCsaTPtkQqgz52rfYbW4OG2/CkmPCK/k3psZJ8QZyHB1ux7MjisSkP3b2anMMhNYFlCo5z5mjYTrpLY9C4txk0UKg8MxaasZkxBvIJ7GmazU28ltyPwm/GBanC6nlnnomhXbU4YZzBQ5KZmSjrMzk1DOMJDjok1e366CNZRIEZJBXhmgf7FuKjcuZgSlJw2ROYhCW98QvD15/l/tyiIozmJj05vbmCJODkJkHBlnaC0Waqxsy5V5WPmY46OXHJEpu6Axn7xG0Bmrr5rfbntdONS2qtWevXV3J/S8Dyk6dYXTMV1GgDoy/fIj7lR19fLYTpLmsw40zU0U+Kf1ftCgxgmMq1l4EP3apsS8qWGzF+lvwjihw9nFgBYGWtrd0JqpsBJ3FprSk8AHxwK5je9/WBtBICzKvSUSBb2hmqEDR0g8YL2VSGQykINTQAf0To47YF0IKK8vZw3UVfByQ1iyiwy7R4JcjLeL3nM5l7z0gWeON+O6fQVDdt2PuV7XgelZAPJwTfen6UFLXn6DxCyHB2LTCn7dZD9caLsoxgCYj6iNVRxAmHq6M9iqtQubLHbSJHQEqaO7VTae487Cf32K/2ChvSzAePdE8S3s+c7S3lbAeseAlh2Kqu4CPcchx15gB8uhBWfzngog+ZfKCloM/sVPXvtTlntTg50/Cyl/CyhYAoZZjACQSdOaqILe7x0PYhvzarSGzJsAhnBNJ90hOkt/zfNYXZmiTOhDjwQuqVPkV4VGG5ScftzOpXS6tAM2thbvY4O5fta5NQM078VWuDvRk485Jr0ftJ9VHhyKSb47ZW2f3wGpXkTr4JwqOHuNMfNwkEExuFWdSL3/xCvgdDWFjN1wKAmSC/3CLFbizHG
4z/Br3wna2BQMCHvRPUNRc7V9UkCk24uHZLvIA1Q+5nLrwKsgmHIEKTu7K2Hvy1/N8Ir8poOIzRyJsL03Jq+1X5YK4149MLGtMHfcayH1lrgHZrl04SeuGuHDFEaDUyh8EmvQTnGaaHHyjnDx718VE99p0OkZnRk+IuXfLu3PgSgKLqE9++ig7jK5CU0DoUI/CQ8E3zaNuPEIDkL/G/m5Z9K/43BwJhn7AnvIUvhfntCU2nZZDW38RDY5xelE0RzSvfCDbQ3SFHIf0i61NN5O/ozcev8KR2jwCzkpFJcMdAJuOmwfdYSKZniA2ueOZP/ssGqJ4pkdogOQnnwMuCEMFfMQYIZiB5CHmDUry7ReRdl3gCrLeTT1Iyic2AVwdqZyNiqE18liX1taPEGLhleuibgzjSmbHWEcV2GKgsOKsvmje6FHGn0QGRkzTh2hl2qySXebwfqEvxL3TuMnimMfYsihjorUv8DNVR2jeZGwxKh65AhCT+9Biklr3TLyPamR1ux0VcrEmgnABDyecl9wdecn/9EsBlQNGd8kBmUWbv15fqR1Aclyqz28DwnU7AJZS/49idwr9V+/Palu670pWqtHCFJ3UyV3FGVVvroBhvWjS9TxjMtIqjiWn9rCK7Ojn3OOYaNNVxqbFitkaPZTLg2maOQUkQS9OuYHUG9toT9kvuryM2Ix3i664anzNOU/LbQGINyYprxM8Pdntr0GDkNUeuSpIiCfM04Ce5ZbBW5cPWhPPm9SRmcK/ndRT81q+vJPbdKvxCaz+A19eCAGtfGoYnL14SmpdAUqMfi4BqPQTeoWzGrkAT+0uksvXAiwSUacu1Gb4J7nHd7EU8dU0VRGn83ll5BwKdsphrsdJjzRjEUixlkQqAQioLufzyukzQPLtuywmGROwpoq/iFjmv+oSbWQFFA1Lj5SVQ6KQ0DQ/raDO+7vXN9AqxnQj3HLyZyc6E2Fys4VYDbhKrV30u3CjOGrokxzfSHLyH+lnw71ZUIjXU7sQKV5SZgcK2H45yF9PgiJ+tZ2RrpmCsVTOMG2M1GZ0BDGLUDI7V5VobHZa33F8awByhjN0YOfvs0sL97gSrbAQyzgK8cO37lVkPiAJtBnhUqmo0lmhaVwBhsDglKFqzZ/yYGIGXSYrsxdZh9yo2EtQ0rohX0adnmnvPbC2s4yazeVFunDmdybKnYwXAEEq0rJvKE2rtZ6eyjzzWYyocmYHenSt6wFG7m2j3Re9PjFo3rNhtv7BFeUlHzXFCHWougJo3XdO72NU3wT+Oxi3HBuXAfsUUvHXVkOEKgIqEiy8Ayhi8pf7imkOPuu1pcZSk+Ag7entr/J1AqrJOEHPvrV6cqFPtrgNiN61ure4vTIQmBXwp8+w/ea6vE9SwzNcUXvvelleW9JBRl3X+AMbhHmBwOzzTOl+Rg3gKLBk11bw+9Dgy71K5zMpesKoCGXl/JjUXnWsPNju2+lqGsIMHmll2/Ungryu7mqNq3K3aYpC2+KJuYGourALiolKm7Y3ugcctOAiycHUFCuI1t7z16QqOIMAWJXgX80A+C/NNwKXuR7PFOCK7QVKnfzFEBrBRCuFDObRu0r+o7C3u1yM7la1m0DF+80ncj5f7wUAtZVFHlWbOeHOQBhhRVkFVnem9S7/qolq3W+u0gc0+X8ww2eVFlTFiLg1CJilkjJmr5n5eR+k+6/ptelgFTgZVIf7W9QuKAyZc/B6GKI9xI+jhw0Cnk0SwUlnfychysoeka8ucP33B0ljD3fzsZuPaGU0zgLrD56ecoMdxDy5vqfioVFYwjzFPeVnQpzPhk/2LofBOubPWQN9r+swtw1wJ5NdG40xPEYQPXHRKW2XGXE98c/Jbfeej0A8qX3/Bd1MW5VH3OQeH0AsYTvDWIiZZ9IuqS2qbbRfJj9HK2JfmlTkuePhZAFVn0Gm8Ok2Yy4hJ1toHRI6WkzbGmQ+VyxFbk3lObSSM67vbOMJwTtaTi9uwkOSvt6Lf9WEXb+H4zGgToFdmgW3WjqbEoFClA3qqK5c/4zi7sp/jGGf99nid3h1Id5SuTBEBh/14QXPeOX1deMXjJgt51MvtgkN+vMR0HCsJtXTptWmiodWkxjkHBisjeR3qddccv9oekrLba+cKRDrK0QOr+35nXyPl1vPVjhdXoeFZtHV5AMI1zp3FQ5G60mIzjyO5vsRsHq/dYDGODh2teTfmfsnY8e9je0Z/bSANiMoKttWvuYcY129cAZ8ExSzK9jg5bdkyqMCu5Ht1Rpx691s9P3/GH5VoKMJ2JUmxwWSZ0fiQaQCiJGQ2pGv0OtyocxqqxHZJ+qHymcBc0VxefAF4HHuWobhfL0o8+UAlakl4kgDSeGLgZddQcOukKsixvp1oaz3lDaT6fU6MO787Ggv39+bFWUWBoAxiOj0OXeX7vaRD0LxWiJ7bWKSHWOfHciMA5cwRqId3gPFFaDtvbgdVlB/IXVrpGk8KUB0uaXO6eipnvtPKntjT8AgWCqMPhrs7RuOOjKuRqbes2l7c3xp+Nbmbszb1SCgW7428StssuGPXGrhOBfrKHrpVubHdKkikE+mZuzvpa97J8CHhPKXR1Z6QykxDFkevDt3OT5xPFAZId5wSH4Fgl8UOA2puCifYJzXBEQDrAnM03+kdQirTqu/0FBJmZKzz7TJqJFciK6yODcYnXchxv6kasYWO8XkJ+K0FcVFqSbUrncJxtqqSMz18V4kjnXZrPMM0wof55xCMzD5PhmLovCDfLOPowP19wOqiAG/lySyyfa0NQkS8UodxmM9cFHys4Yu4OCh2hfnshgmrBLK2Wm7pgEcubsfD7jzZibRvsT0B/bbEHoFNdqf4OCZeNcr3t6HvwdKx3EysubF2dhCejlB0Tyx3VvGS9PPPLMILX2xwXBFBiVWfVJxLsUN23vVH+WXt1UQaTQKbRq3qtKkItYfKG4R71qje8dpXvOY+hfPlij0IonTTrEHqWJlz1Rtv5+wVZXWMYDljDm8jGYYMFnMRH+5xL1wN2TesYIEROPSq4Iqukt8pkmyDazbspFOJ4z6O0zmTrdnYRD7SnyP/1ItBqzQb1vXIsu8K0qjDo9FCQ3uX9A8PeKv5Ycy1WFVOm8+u4OgYL1DATqdEvROs/N51jI1p9r3XRwJ90YHKJmasSRJyLKJ64dVya7qkUAo6HC9Vc/oJ9M4s69wU0kIQsvJ7saTj1eeZDc/p+VVlXRcYs31L67K3ST0/Ls5mfXUrQTDdmhyFfbmkPboFJQESlQaKlqpbCk83P3tsxEMcML+DiERLVwlBKedJAif3Q6ubVr2je4r2PJcrh0jtNA46z5YooTu436TXQB7+FTG9FSwSHUYwv73VJdwgNYNqUbLmLtiLaji8D+o89VhW5Na+Fk4u9b6T+E+jpuZqytgXVckwf5RlwZkZvGAS5VIXDlpTgfThdNOcNrJFtuUlHHEc5FRnG6/vuw5XBqTMVxFqsbVlnBlV5okXzK+tAm2PkjpHoNJN4hkxlX4hXL8rYaB4OhxPXKka1KiegZUPaGNSqOmkA6/D09ou
OFB1k3qz74VoPOG/v90RqC5NQx8w70GAlfNZvO/R0cNLig3wOnBN59ZFOO5z+swh4s7cSq3rd/QtL9mH8U4mfPPgoDNh1YVLvC9X+VIjOmnDQMHC6D2drlSYbqNxi2apqXmQNqr8leNnANWdBb3OoNNR8Q4UmubJMGvlyZyrNMtxrHESEOUV00OIN9PDrkfzZc4RrJY2G+7GwLKRI0zG/GjBF/U82K6st9SNF90vu+q247W1X2X7/UiuSBGDYxbgTSkTkMpeZRw5QtcAvPxaRCmRVxY0DwCk5nBHOREQdWvOV8sy82tyoql2FIiy9qj78Rbs073lucRMRoNlS71S2H74RuwTOz/ggRluhqli1QkhZmrvqxXCtpZWna629XVW45Oi679Vpu/P7R//YT+Cj1r1fN/34jcaf/52X4wP2lo08fuCX/oS/EJznB/fqeI/pLMnCn/rCST0nYF/b5+U7zteYN9f6J/uePELnwMCH97wL97Xd6/Hvn39xTPfd/A/ba7xYXOgH+DMv9yBBfwbAIDfdmAhfkAHlh/UOOjrhkC/1jjoP8f//wZ+3xkZBP9Gftcf73dPg+86VYEAQPxR7YIGcrydsSj6O9jIccCwrryT/0xfl2SNf6lB2FeOGH+CQzX+6YmfGrwY69KU3ZcmYWk01cZ5mXJ5+93fAPTbJ6H3s7+v3WdaTlnyub3MOfTX635PC9A/Kl7iIPFtvPyg6+dHTgoDPyBcfmjm34Ew/zXzP21m9D/NzNB/zfwHmJn8TzPzz7s4n+E6yYZlfffRdcpznKLpMsWP6emsZ8urn+q/fNAQVf9f3tAZIT9oW/phQ+cv2jL/ijd8KILyvzqEp9H8+MnL/iijE9+SNOLLCP8ZAPChyf9Xh/M/xeQg9J9m8//Vsf3PsTn2n2bznwd6ujmN9AfJNfyHRfA/yerId0YnP+gVDX5U2/wRCg0fWh39mSn+tBa+HxUuf6257xdVuPDzVX65qPTjGvX+mo7Ab/aXBn9vPfVfLBSR35UAUPK7us7vrRCBwPeN8r+/0o+rEH3ojdi/zxt/X+fy04Gm/St3vB7+5I/Xg3845PvR12XOn3sx8O/0YuA/3ItB4EtI/Jfd+OeX+oP9GP+P9WMCR6n3X6Jp+VLk75/vxaX3c3zZfLneNxGxWtvhy/1/Fsn6pbkA/MZc+IMXp36U/uYv+Bb83aINTiK/7ozfvwEBfn1VCMSJX3v9t6tCv/3u71dFf8Hzf34hCPr4i/7WdPxRc4j8T2cmPw6ff63w9ofDLvAdeSD+h+ThZyI731/of4y558OpvzKJf7z8zAkeWp9m1yv+Dw==</diagram></mxfile>
|
2107.01396/main_diagram/main_diagram.pdf
ADDED
|
Binary file (40.8 kB). View file
|
|
|
2107.01396/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,66 @@
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Precisely crafted perturbations added onto input data can easily fool DNNs [@Szegedy2014IntriguingPO; @Goodfellow2015ExplainingAH; @carlini2017towards; @Kurakin2017AdversarialEI]. This vulnerability inherent to DNNs is often exploited with adversarial examples, which are maliciously modified samples with imperceptible perturbations. Adversarial perturbations are crafted such that they are constrained within a bound that is *as tight as possible* [@Goodfellow2015ExplainingAH; @Kurakin2017AdversarialEI; @Dong2018BoostingAA], so that the perturbation remains invisible to humans. In light of this, the majority of attacks measure image similarity with $\ell_p$ norms --- $\ell_0$ [@carlini2017towards; @Papernot2016TheLO], $\ell_2$ [@carlini2017towards] and $\ell_\infty$ [@Kurakin2017AdversarialEI; @Dong2018BoostingAA]. However, we argue that $\ell_p$ norms have limitations.
|
| 4 |
+
|
| 5 |
+
<figure id="fig:demiguise-cw-demo" data-latex-placement="tpb">
|
| 6 |
+
<img src="figs/demiguise-cw-demo.png" />
|
| 7 |
+
<figcaption>Demonstration of adversarial examples and perturbations crafted by Demiguise-C&W and C&W (<span class="math inline"><em>ℓ</em><sub>2</sub></span>) against ResNet-50. C&W (<span class="math inline"><em>ℓ</em><sub>2</sub></span>) crafts spottable perturbations with arbitrary noise. Demiguise-C&W crafts much larger perturbations with rich semantic information while maintaining imperceptibility.</figcaption>
|
| 8 |
+
</figure>
|
| 9 |
+
|
| 10 |
+
Classic per-pixel measurements, such as the $\ell_2$ norm distance, are insufficient for assessing the structured data contained in images because they assume pixel-wise independence. As such, $\ell_p$-based perturbations in the RGB color space often have high spatial frequencies, which inevitably changes the spatial frequencies of natural images, making adversarial examples easily spottable by humans. In addition, the adversary often needs to generate larger perturbations in order to achieve higher adversarial strength. This naturally leads to more perceptible changes to the original image that are, again, noticeable by humans. This is the trade-off between adversarial effectiveness and perturbation imperceptibility, which will always exist as long as $\ell_p$ norm bounds are applied.
|
| 11 |
+
|
| 12 |
+
Many works on adversarial examples have also recognized that pixel-wise differences measured with $\ell_p$ norms align poorly with human perception. Many of the non-$\ell_p$ solutions execute an attack by modifying the colors of the original image and alleviate the unrealistic factors of the changes with various mitigations, for instance Semantic Adversarial Examples [@Hosseini2018SemanticAE] and ColorFool [@Shamsabadi2020ColorFoolSA]. But most of the time, these practices are not enough, because many of the color changes remain easily perceptible. An optimal unrestricted adversarial attack should exhibit powerful adversarial effectiveness on the premise that the perturbations are perceptually invisible. However, this unique challenge of comparing image similarity is still a wide-open problem: not only are visual patterns high-dimensional and highly correlated, but the very measurement of visual similarity is often quite subjective when it aims to mimic human judgments.
|
| 13 |
+
|
| 14 |
+
In this paper, we propose to attenuate this conundrum by using Perceptual Similarity as a measurement for image similarity when crafting adversarial examples. Perceptual Similarity [@zhang2018unreasonable] is an emergent property shared across deep visual representations. It is a metric that utilizes deep features from trained CNNs to measure image similarity [@Johnson2016PerceptualLF]. Specifically, we optimize perturbations with respect to Perceptual Similarity, creating a novel adversarial attack strategy, namely **Demiguise Attack**. By manipulating semantic information with Perceptual Similarity, Demiguise Attack can perturb images in a way that correlates extraordinarily well with human perception, such that the perturbations are imperceptible even though they are of large magnitude, as shown in the middle column of Figure [1](#fig:demiguise-cw-demo){reference-type="ref" reference="fig:demiguise-cw-demo"}. Our perturbations with high-order semantic information are even more likely to be classified into the same labels as the original images or adversarial examples, which we further demonstrate in Section [4.2](#sub:comparison_metrics_perturb_imperceptible){reference-type="ref" reference="sub:comparison_metrics_perturb_imperceptible"}. Larger perturbations make the adversarial examples we craft more powerful and robust, and even able to transfer from one task to another, which we discuss in Section [4.5](#sub:cross_task_transferability){reference-type="ref" reference="sub:cross_task_transferability"}. We also demonstrate that the effect of our approach is additive and can be used in combination with existing attacks to improve performance further. What's more, as shown in the rightmost column of Figure [1](#fig:demiguise-cw-demo){reference-type="ref" reference="fig:demiguise-cw-demo"}, by using Perceptual Similarity, our adversarial examples are able to simulate, to some extent, the illumination changes that occur in natural situations. This could be used for exposing the blind spots inside the target model and thereby helping it improve under real-world scenarios.
|
| 15 |
+
|
| 16 |
+
Our key contributions in this paper are:
|
| 17 |
+
|
| 18 |
+
- We propose a novel, unrestricted, black-box adversarial attack based on Perceptual Similarity, called Demiguise Attack. Our approach manipulates semantic information with the HVS-oriented image metric to craft invisible semantic adversarial perturbations.
|
| 19 |
+
|
| 20 |
+
- The perturbations generated with Perceptual Similarity can simulate the illumination and contrast changes of the real world and enrich the semantic information lying within the original images. This phenomenon points to potential enhancements for DNNs.
|
| 21 |
+
|
| 22 |
+
- Extensive experiments show that adversarial perturbations crafted by Demiguise Attack both manifest excellent visual quality and boost adversarial strength and robustness when combined with existing attacks. We demonstrate an increase of up to 50% in black-box transferability and a success rate of nearly 90% under cross-task black-box scenarios.
|
| 23 |
+
|
| 24 |
+
# Method
|
| 25 |
+
|
| 26 |
+
The challenge of image similarity comparison has been a long-standing problem in the field of computer vision. It has also proven to be of significance for generating imperceptible adversarial perturbations. As stated in Section [2.2](#sub:comp_img_sim){reference-type="ref" reference="sub:comp_img_sim"}, classical measurements like $\ell_p$-norm distances, PSNR and SSIM are all inadequate for expressing the similarity of high-dimensional structured data like images, which possess rich information. It would be ideal to construct a "distance metric" that represents human judgments when measuring image similarity. However, implementing such a metric is challenging, as human judgments are context-dependent, rely on high-order image semantic information, and may not constitute a distance metric. As such, we propose to utilize Perceptual Similarity [@zhang2018unreasonable], a novel, HVS-oriented metric, to better craft adversarial examples by manipulating the semantic information lying within images. Here we briefly address the concepts of Perceptual Similarity under adversarial settings.
|
| 27 |
+
|
| 28 |
+
Perceptual Similarity is a novel image quality metric that extracts characteristics from the deep feature spaces of CNNs trained on image classification tasks. The metric itself is neither a special function nor a static module; instead, it is a consequence of visual representations tuned to be predictive of real-world structured information. Hence, by utilizing Perceptual Similarity we can better cultivate rich semantic information when crafting adversarial perturbations, thereby calibrating our perturbation to be in line with human perception. Notably, Perceptual Similarity is calculated based on a predefined and pretrained perceptual similarity network. For an original image $\boldsymbol{x}$ and its distorted partner $\boldsymbol{x'}$, with perceptual similarity network $\mathcal{N}$, we compute the distance between $\boldsymbol{x}$ and $\boldsymbol{x'}$ as $$\begin{equation}
|
| 29 |
+
\label{eq:lpips-distance}
|
| 30 |
+
\mathcal{D}(\boldsymbol{x},\boldsymbol{x'}) = \sum_l \frac{1}{H_l W_l} \cdot \sum_{h,w}\|\omega_l \odot
|
| 31 |
+
(\hat{\theta}^l_{hw} - \hat{\theta'}^l_{hw})\|^2.
|
| 32 |
+
\end{equation}$$ where $l$ is one of the layers in the $L$-layer feature stack of network $\mathcal{N}$, $\hat{\theta}^l, \hat{\theta'}^l \in \mathbb{R}^{H_l\times W_l\times C_l}$ are normalized features extracted from $\boldsymbol{x}$ and $\boldsymbol{x'}$ as they pass through layer $l$ (with $H_l,W_l,C_l$ representing the height, width, and number of channels of layer $l$ respectively), and $\omega_l$ is the vector used to scale the channel-wise activations. For adversarial attacks, $\boldsymbol{x}$ is the input image, and $\boldsymbol{x'}$ is the resulting adversarial example. In this setting, the adversary attempts to make the classifier mispredict by modifying $\boldsymbol{x}$ with a negligible perturbation, producing $\boldsymbol{x'}$ that is as close to $\boldsymbol{x}$ as possible --- the optimization goal. The *closeness* between $\boldsymbol{x}$ and $\boldsymbol{x'}$ is, in our case, measured by Eq. [\[eq:lpips-distance\]](#eq:lpips-distance){reference-type="ref" reference="eq:lpips-distance"}. Hence, Eq. [\[eq:lpips-distance\]](#eq:lpips-distance){reference-type="ref" reference="eq:lpips-distance"} also constitutes a perceptual loss function. By optimizing against Perceptual Similarity, we can manipulate deep semantic features that exist within natural images to create perturbations that correlate well with human perception.
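To make Eq. [\[eq:lpips-distance\]](#eq:lpips-distance){reference-type="ref" reference="eq:lpips-distance"} concrete, the following is a minimal sketch (not the authors' implementation) of the layer-wise perceptual distance, assuming feature maps from $\mathcal{N}$ and the channel-scaling vectors $\omega_l$ are already available; all names are illustrative.

```python
import torch

def perceptual_distance(feats_x, feats_x_adv, weights):
    """Sketch of Eq. (lpips-distance): channel-normalize deep features, scale them
    channel-wise with w_l, and spatially average the squared difference per layer.

    feats_x / feats_x_adv: lists of (B, C_l, H_l, W_l) feature maps from network N;
    weights: list of (C_l,) channel-scaling vectors.
    """
    dist = 0.0
    for f, f_adv, w in zip(feats_x, feats_x_adv, weights):
        # Unit-normalize each spatial feature vector along the channel dimension.
        f = f / (f.norm(dim=1, keepdim=True) + 1e-10)
        f_adv = f_adv / (f_adv.norm(dim=1, keepdim=True) + 1e-10)
        diff = (w.view(1, -1, 1, 1) * (f - f_adv)) ** 2
        dist = dist + diff.sum(dim=1).mean(dim=(1, 2))  # average over H_l x W_l
    return dist  # (B,) perceptual distances
```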
|
| 33 |
+
|
| 34 |
+
<figure id="fig:figs/demiguise-extend-method" data-latex-placement="tbp">
|
| 35 |
+
<img src="figs/demiguise-extend-method.png" style="width:100.0%" />
|
| 36 |
+
<figcaption>General optimization procedure of Demiguise Attack. For each iteration, we obtain both the <span class="math inline">ℒ<sub>dist</sub></span> from Perceptual Similarity network <span class="math inline">𝒩</span> as a penalty, and the <span class="math inline">ℒ<sub>adv</sub></span> from the classifier. Both of them are then optimized as <span class="math inline">ℒ = <em>λ</em> ⋅ ℒ<sub>dist</sub> + ℒ<sub>adv</sub></span> directly in the back-propagation procedure for crafting adversarial perturbations.</figcaption>
|
| 37 |
+
</figure>
|
| 38 |
+
|
| 39 |
+
Demiguise Attack is a universal strategy for integrating Perceptual Similarity into adversarial attacks. We start with Demiguise-C&W, which, as its name suggests, is a variant of Demiguise Attack with its optimization solved using techniques from C&W [@carlini2017towards]. Classic adversarial attacks often use optimization procedures to craft perturbations. To satisfy the adversary's goal, one optimizes an objective function to find minimal perturbations as $$\begin{equation}
|
| 40 |
+
\boldsymbol{x}^{\mathrm{adv}}=\underset{\boldsymbol{x'}:\mathcal{F}(\boldsymbol{x'})\neq y}{\arg\min} \|\boldsymbol{x'}-\boldsymbol{x}\|_p.
|
| 41 |
+
\end{equation}$$ for non-targeted attacks, where $\boldsymbol{x}$ and $\boldsymbol{x'}$ are the original image and its (intermediate) adversarial example respectively, $\mathcal{F}$ is the target classifier, $p$ is the norm used for distance calculation, and $\boldsymbol{x}^{\mathrm{adv}}$ is the final adversarial example. This is essentially an optimization against a loss function that perturbs the input image until it is adversarial, while also ensuring that the perturbation is minimal. Formally, current adversarial attacks are optimizations against the joint loss of $$\begin{equation}
|
| 42 |
+
\mathcal{L} = \mathcal{L}_{\mathrm{adv}} + \mathcal{L}_{\mathrm{dist}}.
|
| 43 |
+
\end{equation}$$ in essence, where $\mathcal{L}_{\mathrm{adv}}$ is the adversarial loss that guides the optimization procedure toward making the adversarial example *adversarial*, and $\mathcal{L}_{\mathrm{dist}}$ is the distance loss that minimizes the distance between the original input and the adversarial example. However, this optimization problem is often thought to be NP-hard, and as such, various mitigation measures have been proposed to solve it. The C&W attack, one of the most effective approaches, uses a margin-style $f$ function over the logits as $\mathcal{L}_{\mathrm{adv}}$ $$\begin{equation}
|
| 44 |
+
f(\boldsymbol{x'})=\max(\max\{Z(\boldsymbol{x'})_i:i\neq t\}-Z(\boldsymbol{x'})_t,0).
|
| 45 |
+
\label{eq:f_function}
|
| 46 |
+
\end{equation}$$ where $Z(\boldsymbol{x'})$ denotes the logits of $\boldsymbol{x'}$. Demiguise Attack optimizes against Perceptual Similarity with an approach similar to C&W's, formally expressed as $$\begin{equation}
|
| 47 |
+
\underset{\boldsymbol{u}}{\mathrm{minimize}}\ \lambda\cdot \mathcal{D}(\boldsymbol{x},\boldsymbol{x'})+f(\boldsymbol{x'}).
|
| 48 |
+
\end{equation}$$ where a change of variables is applied, making $\boldsymbol{x'}= \nicefrac{1}{2}\cdot(\tanh{(\boldsymbol{u})} + 1)$, so that we optimize over $\boldsymbol{u}$ instead of $\boldsymbol{x'}$ directly. We take five convolutional layers from the VGG architecture, with pretrained weights, as the perceptual similarity network $\mathcal{N}$. The perceptual similarity network $\mathcal{N}$ exposes the distance $\mathcal{D}(\boldsymbol{x}, \boldsymbol{x'})$ along with gradient information $\boldsymbol{g}$, so we can fully utilize it for crafting adversarial perturbations. The general Demiguise-C&W strategy is expressed in detail in Algorithm [\[alg:general_algo_of_demi_attack\]](#alg:general_algo_of_demi_attack){reference-type="ref" reference="alg:general_algo_of_demi_attack"}.
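For illustration, below is a minimal sketch of the Demiguise-C&W optimization loop under the change of variables above; it is not the authors' implementation. The `perc_dist` callable stands in for $\mathcal{D}$, the $f$ term is the non-targeted counterpart of Eq. [\[eq:f_function\]](#eq:f_function){reference-type="ref" reference="eq:f_function"}, inputs are assumed to lie in $[0,1]$, and the hyper-parameters are illustrative.

```python
import torch
import torch.nn.functional as F

def demiguise_cw(x, label, classifier, perc_dist, steps=1000, lam=0.05, lr=0.01, kappa=0.0):
    # Change of variables: x' = 1/2 * (tanh(u) + 1) keeps the adversarial image in [0, 1].
    u = torch.atanh((2 * x - 1).clamp(-0.999, 0.999)).detach().requires_grad_(True)
    opt = torch.optim.Adam([u], lr=lr)
    for _ in range(steps):
        x_adv = 0.5 * (torch.tanh(u) + 1)
        logits = classifier(x_adv)
        # Non-targeted f: push the true-class logit below the largest other logit.
        onehot = F.one_hot(label, logits.size(1)).bool()
        true_logit = logits[onehot]
        best_other = logits.masked_fill(onehot, float('-inf')).max(dim=1).values
        f = torch.clamp(true_logit - best_other + kappa, min=0)
        # Joint objective: lambda * D(x, x') + f(x'), as in the equation above.
        loss = lam * perc_dist(x, x_adv).mean() + f.mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (0.5 * (torch.tanh(u) + 1)).detach()
```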
|
| 49 |
+
|
| 50 |
+
:::: algorithm
|
| 51 |
+
::: algorithmic
|
| 52 |
+
:::
|
| 53 |
+
::::
|
| 54 |
+
|
| 55 |
+
More importantly, we demonstrate that the strategy of Demiguise Attack is additive: it can be used in combination with existing attacks to improve performance further. We showcase our strategy's simplicity and universality by illustrating how to combine Demiguise Attack with existing attacks.
|
| 56 |
+
|
| 57 |
+
In Demiguise Attack, Perceptual Similarity is used directly as the distance penalty for optimization-based attacks. Besides the actual distance, we can also access gradient information from the Perceptual Similarity distance as $\boldsymbol{g}=\nabla_{\boldsymbol{x}} \mathcal{D}(\boldsymbol{x},\boldsymbol{x'})$, which can then be used in gradient-based attacks. We address our general Demiguise Attack strategy (perturbation optimization procedure) in Figure [2](#fig:figs/demiguise-extend-method){reference-type="ref" reference="fig:figs/demiguise-extend-method"}.
|
| 58 |
+
|
| 59 |
+
With this, we can use Demiguise Attack in combination with existing state-of-the-art attacks, creating Demiguise-{C&W, MI-FGSM, HopSkipJumpAttack}. Here we briefly address their implementations. For C&W and HopSkipJumpAttack [@chen2020hopskipjumpattack], we combine their optimization procedure with Perceptual Similarity distance $\mathcal{D}$. For MI-FGSM, we update its loss as $$\begin{equation}
|
| 60 |
+
\mathcal{L}=\nabla_{\boldsymbol{x}}\mathcal{J}(\boldsymbol{x'}, y) + \lambda \cdot \nabla_{\boldsymbol{x}} \mathcal{D}(\boldsymbol{x'},\boldsymbol{x})
|
| 61 |
+
\end{equation}$$ where $\mathcal{J}$ is our usual cross-entropy loss. For all Demiguise variants of these attacks, we introduce a new vector $\lambda$ to balance the joint-optimization of perceptual loss and adversarial loss. Using Perceptual Similarity, Demiguise Attack crafts adversarial examples that dig deep into the rich semantic representations of images, achieving superior adversarial effectiveness while maintaining compelling imperceptibility.
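A minimal sketch of the Demiguise-MI-FGSM update following the equation above (again, not the authors' implementation): the combined gradient $\nabla\mathcal{J} + \lambda\cdot\nabla\mathcal{D}$ drives the usual momentum step. The `perc_dist` callable and the hyper-parameters are illustrative assumptions; the $\ell_\infty$ projection is inherited from vanilla MI-FGSM and may be relaxed in the unrestricted setting.

```python
import torch
import torch.nn.functional as F

def demiguise_mifgsm(x, label, classifier, perc_dist, eps=8 / 255, steps=10, mu=1.0, lam=0.1):
    alpha = eps / steps
    x_adv, g = x.clone().detach(), torch.zeros_like(x)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        j = F.cross_entropy(classifier(x_adv), label)
        d = perc_dist(x_adv, x).mean()
        grad = torch.autograd.grad(j + lam * d, x_adv)[0]  # grad J + lambda * grad D
        # Momentum accumulation with the L1-normalized gradient, as in MI-FGSM.
        g = mu * g + grad / (grad.abs().mean(dim=(1, 2, 3), keepdim=True) + 1e-12)
        x_adv = torch.min(torch.max(x_adv.detach() + alpha * g.sign(), x - eps), x + eps)
        x_adv = x_adv.clamp(0, 1).detach()
    return x_adv
```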
|
| 62 |
+
|
| 63 |
+
<figure id="fig:figs/metric-comparison" data-latex-placement="tbp">
|
| 64 |
+
<img src="figs/metric-comparison.png" />
|
| 65 |
+
<figcaption>We compare the perturbation imperceptibility of our Demiguise-C&W with C&W-PSNR and C&W-SSIM. We find that while all three attacks achieve a 100% fooling rate, only the adversarial examples crafted by Demiguise-C&W truly maintain perturbation imperceptibility.</figcaption>
|
| 66 |
+
</figure>
|
2109.04518/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2109.04518/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,99 @@
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
<figure id="fig:proposed_vae" data-latex-placement="h">
|
| 4 |
+
<figure id="fig:illustration">
|
| 5 |
+
<img src="fig/illustration.png" style="width:85.0%" />
|
| 6 |
+
<figcaption>Illustration of proposed concepts</figcaption>
|
| 7 |
+
</figure>
|
| 8 |
+
<figure id="fig:proposed_dag">
|
| 9 |
+
<img src="fig/proposed_dag.png" style="width:90.0%" />
|
| 10 |
+
<figcaption>Causal DAG</figcaption>
|
| 11 |
+
</figure>
|
| 12 |
+
<figure id="fig:proposed_vae">
|
| 13 |
+
<img src="fig/proposed_vae.png" style="width:90.0%" />
|
| 14 |
+
<figcaption>VAE model</figcaption>
|
| 15 |
+
</figure>
|
| 16 |
+
<figcaption>(a) Binary concept <em>middle stroke</em> and some global variants. Border color indicates the classifier output. (b, c) The causal DAG and the proposed VAE model</figcaption>
|
| 17 |
+
</figure>
|
| 18 |
+
|
| 19 |
+
Deep neural networks have been recognized as state-of-the-art models for various tasks. As they are applied in more practical settings, there is an emerging consensus that these models need to be explainable, especially in high-stakes domains. Various methods have been proposed to address this problem, including building interpretable models and post-hoc methods that explain trained black-box models. We focus on the post-hoc approach and propose a novel causal concept-based explanation framework.
|
| 20 |
+
|
| 21 |
+
We are interested in an explanation that uses the symbolic expression: 'data X is classified as class Y because X *has* A, B and *does not have* C', where A, B, and C are high-level concepts. From the linguistic perspective, our explanation communicates using *nouns* and their *part-whole relation*, i.e., the semantic relation between a part and the whole object. In many classification tasks, especially image classification, the predictions rely on binary components; for example, we can distinguish a panda from a bear by its white patched eyes, or a zebra from a horse by its stripes. This is also a common way for humans to classify categories and organize knowledge [@Gardenfors2014-rq]. Thus, an explanation in this form should excel at providing human-friendly and organized insights into the classifier, especially for tasks that involve higher-level concepts, such as checking the alignment of the black-box model with experts. From now on, we refer to such a concept as a *binary concept*.
|
| 22 |
+
|
| 23 |
+
Our method employs three different notions in the explanation: *causal binary switches*, *concept-specific variants* and *global variants*. We illustrate these notions in Figure [1](#fig:illustration){reference-type="ref" reference="fig:illustration"}. First, *causal binary switches* and *concept-specific variants*, which come in pairs, represent different binary concepts. In particular, *causal binary switches* control the presence of each binary concept in a sample. Flipping this switch, i.e., removing or adding a binary concept to a sample, affects the prediction of that sample (e.g., removing the middle stroke turns E into C). In contrast, *concept-specific variants*, each of which is tied to a specific binary concept, express different variants within a binary concept that do not affect the prediction (e.g., changing the length of the middle stroke does not affect the prediction). Finally, *global variants*, which are not tied to specific binary concepts, represent other variants that do not affect the prediction (e.g., skewness).
|
| 24 |
+
|
| 25 |
+
<figure id="fig:demonstration" data-latex-placement="t">
|
| 26 |
+
|
| 27 |
+
<figcaption>Explanation methods for a letter classifier. (a) Saliency-based methods. (b) Disabling the most active latents of class E in VSC model. (c) Controlling the causal and non-causal factors in O’Shaughnessy et al. (d, e) Proposed method: (d) Encoded binary relation of discovered concepts and their intervention results; (e) variants within each concept and other variants of the whole letter.<span id="fig:demonstration" data-label="fig:demonstration"></span></figcaption>
|
| 28 |
+
</figure>
|
| 29 |
+
|
| 30 |
+
Our goal is to discover, in an unsupervised manner, a set of binary concepts that can explain the classifier using their binary switches. Similar to some existing works, to construct conceptual explanations we learn a generative model that maps each input into a low-dimensional representation in which each factor encodes an aspect of the data. There are three main challenges in achieving our goal. (1) It requires an adequate generative model to express the binary concepts, including the binary switches and the variants within each concept. (2) The discovered binary concepts must have a large causal influence on the classifier output. That is, we avoid finding confounding concepts, which correlate with but do not cause the prediction. For example, the *sky* concept appears frequently in images of *plane* but may not cause the prediction of *plane*. (3) The explanation must be interpretable and provide useful insights. For example, a concept that entirely replaces a letter E with a letter A has a large causal effect. However, such a concept does not provide valuable knowledge due to its lack of interpretability.
|
| 31 |
+
|
| 32 |
+
In Figures [\[fig:proposed-discrete\]](#fig:proposed-discrete){reference-type="ref" reference="fig:proposed-discrete"} and [\[fig:proposed-continuous\]](#fig:proposed-continuous){reference-type="ref" reference="fig:proposed-continuous"}, we demonstrate an explanation discovered by the proposed method for a classifier of six letters: A, B, C, D, E, and F. Our method successfully discovered the concepts of *bottom stroke*, *middle stroke* and *right stroke*, which effectively explain the classifier. In Figure [\[fig:proposed-discrete\]](#fig:proposed-discrete){reference-type="ref" reference="fig:proposed-discrete"}, we show the encoded binary switches and their intervention results. From the top figure, we can explain that this letter is classified as E because it *has* a *bottom stroke* (otherwise it is F), a *middle stroke* (otherwise it is C), and it *does not have* a *right stroke* (otherwise it is B). We were also able to distinguish the variants within each concept (Figure [\[fig:proposed-continuous\]](#fig:proposed-continuous){reference-type="ref" reference="fig:proposed-continuous"}, top) from the global variants (Figure [\[fig:proposed-continuous\]](#fig:proposed-continuous){reference-type="ref" reference="fig:proposed-continuous"}, bottom). Full results with explanations for the other letters are shown in Section [\[ch:experiment\]](#ch:experiment){reference-type="ref" reference="ch:experiment"}.
|
| 33 |
+
|
| 34 |
+
To the best of our knowledge, no existing method can discover binary concepts that fulfill all of these requirements. Saliency methods such as Guided Backprop [@Springenberg2014-dc], Integrated Gradient [@Sundararajan2017-gt] or GradCam [@Selvaraju2020-ap] only show feature importance but do not explain why (Figure [\[fig:saliency-maps\]](#fig:saliency-maps){reference-type="ref" reference="fig:saliency-maps"}). Some generative models which use binary-continuous mixed latents for sparse coding, such as VSC [@Tonolini2020-cm], IBP-VAE [@Gyawali2019-wm], and PatchVAE [@Gupta2020-zc], can support binary concepts. However, they do not necessarily discover binary concepts that are useful for explanation, in terms of both causality and interpretability (Figure [\[fig:mixed-vae\]](#fig:mixed-vae){reference-type="ref" reference="fig:mixed-vae"}). Recently, @OShaughnessy2020-yz proposed a learning framework that encourages the causal effect of certain latent factors on the classifier output, so as to learn a latent representation that has causality on the prediction. However, their model cannot disentangle binary concepts and can be hard to interpret, especially for multiple-class tasks. For example, a single concept changes the letter E to multiple other letters (Figure [\[fig:cc-vae\]](#fig:cc-vae){reference-type="ref" reference="fig:cc-vae"}), which gives no interpretation of how this latent variable affects the prediction.
|
| 35 |
+
|
| 36 |
+
Our work has the following contributions: (1) We introduce the problem of discovering binary concepts for explanation. We then propose a structural generative model for constructing binary-concept explanations, which can capture the binary switches, concept-specific variants, and global variants. (2) We propose a learning process that simultaneously learns the data distribution while encouraging the causal influence of the binary switches. Although VAE models typically encourage the independence of factors for meaningful disentanglement, such an assumption is inadequate for discovering useful causal concepts, which are often mutually correlated. Our learning process, which considers the dependence between binary concepts, can discover concepts with more significant causality. (3) To avoid concepts that have causality but no interpretability, the proposed method allows an easy way to implement the user's preferences and prior knowledge as a regularizer to induce high interpretability of concepts. (4) Finally, we demonstrate that our method succeeds in discovering interpretable binary concepts with causality that are useful for explanation on multiple datasets.
|
| 37 |
+
|
| 38 |
+
# Method
|
| 39 |
+
|
| 40 |
+
Our explanation is built upon the VAE framework proposed by @Kingma2014-bb. The VAE model assumes a generative process of data in which a latent $\mathop{\mathrm{\mathbf{z}}}$ is first sampled from a prior distribution $p(\mathop{\mathrm{\mathbf{z}}})$, and the data is then generated via a conditional distribution $p(\mathop{\mathrm{\mathbf{x}}}\mid \mathop{\mathrm{\mathbf{z}}})$. Because the true posterior is intractable, a variational approximation $q(\mathop{\mathrm{\mathbf{z}}}\mid \mathop{\mathrm{\mathbf{x}}})$ is introduced and the model is then learned using the evidence lower bound (ELBO) as $$\begin{align}
|
| 41 |
+
\begin{split}
|
| 42 |
+
\mathcal{L}_\text{VAE}(\mathop{\mathrm{\mathbf{x}}}) = &-\mathop{\mathrm{\mathbb{E}}}_{\mathop{\mathrm{\mathbf{z}}}\sim q(\mathop{\mathrm{\mathbf{z}}}\mid \mathop{\mathrm{\mathbf{x}}})}[\log p(\mathop{\mathrm{\mathbf{x}}}\mid \mathop{\mathrm{\mathbf{z}}})] + \mathop{\mathrm{\mathbb{KL}}}[q(\mathop{\mathrm{\mathbf{z}}}\mid \mathop{\mathrm{\mathbf{x}}}) \mathop{\mathrm{%
|
| 43 |
+
\,\|\,%
|
| 44 |
+
}}p(\mathop{\mathrm{\mathbf{z}}})].
|
| 45 |
+
\end{split}
|
| 46 |
+
\end{align}$$ Here, $q(\mathop{\mathrm{\mathbf{z}}}\mid \mathop{\mathrm{\mathbf{x}}})$ is the encoder that maps the data to the latent space and $p(\mathop{\mathrm{\mathbf{x}}}\mid \mathop{\mathrm{\mathbf{z}}})$ is the decoder that maps the latents to the data space. Commonly, $q(\mathop{\mathrm{\mathbf{z}}}\mid \mathop{\mathrm{\mathbf{x}}})$ and $p(\mathop{\mathrm{\mathbf{x}}}\mid \mathop{\mathrm{\mathbf{z}}})$ are parameterized as neural networks $Q(\mathop{\mathrm{\mathbf{z}}}\mid \mathop{\mathrm{\mathbf{x}}})$ and $G(\mathop{\mathrm{\mathbf{x}}}\mid \mathop{\mathrm{\mathbf{z}}})$, respectively. The common choice for $q(\mathop{\mathrm{\mathbf{z}}}\mid \mathop{\mathrm{\mathbf{x}}})$ is a factorized Gaussian encoder $q(\mathop{\mathrm{\mathbf{z}}}\mid \mathop{\mathrm{\mathbf{x}}}) = \prod_{i=1}^P \mathcal{N}(\mu_i, \sigma_i^2)$ where $(\mu_1, \dots, \mu_P, \sigma_1, \dots, \sigma_P) = Q(\mathop{\mathrm{\mathbf{x}}})$. The common choice for $p(\mathop{\mathrm{\mathbf{z}}})$ is a multivariate normal distribution $\mathcal{N}(0, \mathcal{I})$ with zero mean and identity covariance. Then, the expectation term is trained as an L2 reconstruction loss via the reparameterization trick, while the KL-divergence terms can be computed in closed form.
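For concreteness, a minimal sketch of such a factorized Gaussian encoder with the reparameterization trick is shown below; it is a generic VAE building block, not the authors' architecture, and the layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class GaussianEncoder(nn.Module):
    """Factorized Gaussian encoder q(z | x) = prod_i N(mu_i, sigma_i^2)."""
    def __init__(self, in_dim, latent_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.log_var = nn.Linear(hidden, latent_dim)

    def forward(self, x):
        h = self.net(x)
        mu, log_var = self.mu(h), self.log_var(h)
        # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I).
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
        # Closed-form KL divergence to the standard normal prior N(0, I).
        kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp(), dim=1)
        return z, kl
```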
|
| 47 |
+
|
| 48 |
+
Next, we introduce the measure we use to quantify the causal influence of the learned representation on the classifier output. We adopt Information Flow, which defines causal strength using Pearl's do-calculus [@Pearl2009-pa]. Given a causal directed acyclic graph $G$, Information Flow quantifies the statistical influence using conditional mutual information on the interventional distribution:
|
| 49 |
+
|
| 50 |
+
::: definition
|
| 51 |
+
[]{#def:iflow label="def:iflow"} Let $U$ and $V$ be disjoint subsets of nodes. The information flow $I(U \rightarrow V)$ from $U$ to $V$ is defined by $$\begin{align}
|
| 52 |
+
I(U \rightarrow V) = \int_U p(u) \int_V p(v | \text{do}(u)) \log \frac{p(v | \text{do}(u))}{\int_{u'} p(u')p(v | \text{do}(u'))du'} dV dU,
|
| 53 |
+
\end{align}$$ where $\text{do}(u)$ represents an intervention that fixes u to a value regardless of the values of its parents.
|
| 54 |
+
:::
|
| 55 |
+
|
| 56 |
+
@OShaughnessy2020-yz argued that compared to other metrics such as average causal effect (ACE) [@Holland1988-xs], analysis of variance (ANOVA) [@Lewontin1974-wq], information flow is more suitable to capture complex and nonlinear causal dependence between variables.
|
| 57 |
+
|
| 58 |
+
We aim to discover a set of binary concepts $\mathcal{M} = \{m_1, m_2, \dots, m_M\}$ with causality and interpretability that can explain the black-box classifier $f: \mathcal{X} \rightarrow \mathcal{Y}$. Inspired by @OShaughnessy2020-yz, we employ a generative model to learn the data distribution while encouraging the causal influence of certain latent factors. In particular, we assume the causal graph in Figure [2](#fig:proposed_dag){reference-type="ref" reference="fig:proposed_dag"}, in which each sample $\mathop{\mathrm{\mathbf{x}}}$ is generated from a set of latent variables, including $M$ pairs of *a binary concept* and *a concept-specific variant* $\{\gamma_i, \mathop{\mathrm{\bm{\alpha}}}_i\}_{i=1}^M$, and *global variants* $\mathop{\mathrm{\bm{\beta}}}$. As we want to explain the classifier output (i.e., node $y$ in Figure [2](#fig:proposed_dag){reference-type="ref" reference="fig:proposed_dag"}) using the *binary switches* $\{\gamma_i\}$, we expect $\{\gamma_i\}$ to have a large causal influence on $y$.
|
| 59 |
+
|
| 60 |
+
Our proposed learning objective consists of three components, which correspond to our three requirements: a VAE objective $\mathcal{L}_\text{VAE}$ for learning the data distribution $p(\mathop{\mathrm{\mathbf{x}}})$, a causal-effect objective $\mathcal{L}_\text{CE}(X)$ for encouraging the causal influence of $\{\gamma_i\}$ on the classifier output $y$, and a user-implementable regularizer $\mathcal{L}_\text{R}(\mathop{\mathrm{\mathbf{x}}})$ for improving the interpretability and consistency of the discovered concepts: $$\begin{align}
|
| 61 |
+
\mathcal{L}(X) = \frac{1}{|X|}\sum_{\mathop{\mathrm{\mathbf{x}}}\in X} \left[\mathcal{L}_\text{VAE}(\mathop{\mathrm{\mathbf{x}}}) + \lambda_\text{R}\mathcal{L}_\text{R}(\mathop{\mathrm{\mathbf{x}}})\right] + \lambda_\text{CE} \mathcal{L}_\text{CE}(X). \label{eq:all}
|
| 62 |
+
\end{align}$$
|
| 63 |
+
|
| 64 |
+
To represent the binary concepts, we employ a structure in which each binary concept $m_i$ is represented by a latent variable $\bm{\psi}_i$, which is controlled by two factors: a binary concept switch latent variable $\gamma_i$ (concept switch for short) and a continuous latent variable representing concept-specific variants $\mathop{\mathrm{\bm{\alpha}}}_i$ (concept-specific variant for short), combined as $\bm{\psi}_i = \gamma_i \cdot \mathop{\mathrm{\bm{\alpha}}}_i$, where $\gamma_i = 1$ if concept $m_i$ is on and $\gamma_i=0$ otherwise. Here, the *concept switch* $\gamma_i$ controls whether the concept $m_i$ is activated in a sample, e.g., whether the bottom stroke appears in an image (Figure [\[fig:proposed-discrete\]](#fig:proposed-discrete){reference-type="ref" reference="fig:proposed-discrete"}). On the other hand, the *concept-specific variant* $\mathop{\mathrm{\bm{\alpha}}}_i$ controls the variant within the concept $m_i$, e.g., the length of the bottom stroke (Figure [\[fig:proposed-continuous\]](#fig:proposed-continuous){reference-type="ref" reference="fig:proposed-continuous"}, top). In addition to the *concept-specific variants* $\{\mathop{\mathrm{\bm{\alpha}}}_i\}$, whose effect is limited to a specific binary concept, we also allow a *global variant* latent $\mathop{\mathrm{\bm{\beta}}}$ to capture other variants that do not necessarily have causality, e.g., skewness (Figure [\[fig:proposed-continuous\]](#fig:proposed-continuous){reference-type="ref" reference="fig:proposed-continuous"}, bottom). Here, disentangling concept-specific and global variants is important for assisting the user in understanding the discovered binary concepts.
|
| 65 |
+
|
| 66 |
+
The way we represent binary concepts is closely related to the spike-and-slab distribution, which is used in Bayesian variable selection [@George_undated-ah] and sparse coding [@Tonolini2020-cm]. Unlike these models, whose number of discrete-continuous factors is often large, our model uses only a small number of binary concepts with a multi-dimensional global variant $\beta$. Our intuition is that in many cases, the classification can be made by combining a small number of binary concepts.
|
| 67 |
+
|
| 68 |
+
**Input encoding.** For the discrete components, we use a network $Q^d(\mathop{\mathrm{\mathbf{x}}})$ to parameterize $q(\mathop{\mathrm{\bm{\gamma}}}\mid \mathop{\mathrm{\mathbf{x}}})$ as $q(\mathop{\mathrm{\bm{\gamma}}}\mid \mathop{\mathrm{\mathbf{x}}}) = \prod_{i=1}^M q(\gamma_i \mid \mathop{\mathrm{\mathbf{x}}}) = \prod_{i=1}^M \text{Bern}(\gamma_i; \pi_i)$ where $(\pi_1, \dots, \pi_M) = Q^d(\mathop{\mathrm{\mathbf{x}}})$. For the continuous components, letting $A = (\mathop{\mathrm{\bm{\alpha}}}_1 , \mathop{\mathrm{\bm{\alpha}}}_2, \dots, \mathop{\mathrm{\bm{\alpha}}}_M)$, we use a network $Q^c(\mathop{\mathrm{\mathbf{x}}})$ to parameterize $q(A, \mathop{\mathrm{\bm{\beta}}}\mid \mathop{\mathrm{\mathbf{x}}})$ as $q(A, \mathop{\mathrm{\bm{\beta}}}\mid \mathop{\mathrm{\mathbf{x}}}) = \left[\prod_{i=1}^M q\left(\mathop{\mathrm{\bm{\alpha}}}_i \mid \mathop{\mathrm{\mathbf{x}}}\right)\right] q(\mathop{\mathrm{\bm{\beta}}}\mid \mathop{\mathrm{\mathbf{x}}})$. Here, $q(\mathop{\mathrm{\bm{\alpha}}}_i \mid \mathop{\mathrm{\mathbf{x}}}) = \mathcal{N}^\text{fold}_\delta(\mathop{\mathrm{\bm{\alpha}}}_i; \mu_i, \text{diag}(\sigma_i))$, $q(\mathop{\mathrm{\bm{\beta}}}\mid \mathop{\mathrm{\mathbf{x}}}) = \mathcal{N}^\text{fold}_\delta(\mathop{\mathrm{\bm{\beta}}}; \mu_{\mathop{\mathrm{\bm{\beta}}}}, \text{diag}(\sigma_{\mathop{\mathrm{\bm{\beta}}}}))$ and $(\mu_1, \dots, \mu_M, \mu_{\mathop{\mathrm{\bm{\beta}}}}, \sigma_1, \dots, \sigma_M, \sigma_{\mathop{\mathrm{\bm{\beta}}}}) = Q^c(\mathop{\mathrm{\mathbf{x}}})$. Here, we employ the $\delta\text{-Shifted}$ Folded Normal Distribution $\mathcal{N}^\text{fold}_\delta(\mu, \sigma^2)$ for continuous latents, which is the distribution of $|x| + \delta$ with a constant hyper-parameter $\delta > 0$ where $x \sim \mathcal{N}(\mu, \sigma ^2)$. In all of our experiments, we adopted $\delta=0.5$. We choose not the standard Normal Distribution but the $\delta\text{-Shifted}$ Folded Normal Distribution because it is more appropriate for the causal effect we want to achieve. The implementation of $\mathcal{N}^\text{fold}_\delta(\mu, \sigma^2)$ can simply be done by adding the absolute and shift operation to the conventional implementation of $\mathcal{N}(\mu, \sigma ^2)$. We discuss in detail this design choice in Appendix [\[appendix:fold-shilf\]](#appendix:fold-shilf){reference-type="ref" reference="appendix:fold-shilf"}.
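A minimal sketch of this reparameterization (the function name is an illustrative assumption):

```python
import torch

def sample_shifted_folded_normal(mu, log_var, delta=0.5):
    """Delta-shifted folded normal: draw x ~ N(mu, sigma^2), then return |x| + delta."""
    x = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
    return x.abs() + delta
```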
|
| 69 |
+
|
| 70 |
+
**Output decoding.** Next, given $q(\mathop{\mathrm{\bm{\gamma}}}\mid \mathop{\mathrm{\mathbf{x}}})$ and $q(A, \mathop{\mathrm{\bm{\beta}}}\mid \mathop{\mathrm{\mathbf{x}}})$, we first sample the concept switches $\{\hat{d}_i\}$, the concept variants $\{\hat{\mathop{\mathrm{\bm{\alpha}}}}_i\}$ and the global variants $\mathop{\mathrm{\bm{\beta}}}$ from their posterior, respectively. Using these sampled latents, we construct an aggregated representation $\hat{\mathop{\mathrm{\mathbf{z}}}} = (\bm{\psi}_1, \dots, \bm{\psi}_M, \hat{\mathop{\mathrm{\bm{\beta}}}})$ using the binary concept mechanism in which $\bm{\psi}_i$ is the corresponding part for concept $m_i$, i.e., $\bm{\psi}_i = \gamma_i \times \mathop{\mathrm{\bm{\alpha}}}_i$. If concept $m_i$ is on, we let $\hat{d}_i=1$ so that $\bm{\psi}_i$ can reflect the concept-specific variant $\hat{\mathop{\mathrm{\bm{\alpha}}}}_i$. Otherwise, when the concept $m_i$ is off, we assign $\hat{d}_i=0$. We refer to $\hat{\mathop{\mathrm{\mathbf{z}}}}$ as the *conceptual latent code*. Finally, a decoder network takes $\hat{\mathop{\mathrm{\mathbf{z}}}}$ and generate the reconstruction $\hat{x}$ as $\hat{x} \sim G(\mathop{\mathrm{\mathbf{x}}}\mid \hat{\mathop{\mathrm{\mathbf{z}}}}) \text{ where } \hat{\mathop{\mathrm{\mathbf{z}}}} = (\bm{\psi}_1, \dots, \bm{\psi}_M, \hat{\mathop{\mathrm{\bm{\beta}}}}).$
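As an illustration, the aggregation step can be sketched as follows; the shapes and names are assumptions for the sketch, not the authors' exact interfaces.

```python
import torch

def conceptual_latent_code(gamma_hat, alpha_hat, beta_hat):
    """Build z_hat = (psi_1, ..., psi_M, beta) with psi_i = gamma_i * alpha_i.

    gamma_hat: (B, M) sampled switches, alpha_hat: (B, M, A) concept-specific
    variants, beta_hat: (B, G) global variants.
    """
    psi = gamma_hat.unsqueeze(-1) * alpha_hat              # (B, M, A)
    return torch.cat([psi.flatten(1), beta_hat], dim=1)    # (B, M*A + G)
```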
|
| 71 |
+
|
| 72 |
+
**Learning process.** We maximize the evidence lower bound (ELBO) to jointly train the encoder and decoder. We assume the prior distribution for the continuous latents to be the $\delta\text{-shifted}$ Folded Normal distribution $\mathcal{N}^\text{fold}_\delta(0, \mathcal{I})$ with zero mean and identity covariance. Moreover, we assume the prior distribution for the binary latents to be a Bernoulli distribution $\text{Bern}(\pi_{\text{prior}})$ with prior $\pi_{\text{prior}}$. The ELBO for our learning process can be written as: $$\begin{align}
|
| 73 |
+
\begin{split}
|
| 74 |
+
\mathcal{L}_\text{VAE}(\mathop{\mathrm{\mathbf{x}}}) &= -\mathop{\mathrm{\mathbb{E}}}_{\hat{\mathop{\mathrm{\mathbf{z}}}} \sim Q^{\{c,d\}}(\mathop{\mathrm{\mathbf{z}}}\mid \mathop{\mathrm{\mathbf{x}}})} \left[ \log G\left(\mathop{\mathrm{\mathbf{x}}}\mid \hat{\mathop{\mathrm{\mathbf{z}}}} \right)\right] + \lambda_2 \left[ \frac{1}{M} \sum_{i=1}^{M} \mathop{\mathrm{\mathbb{KL}}}\left(q \left(\gamma_i \mid \mathbf{x}\right) \mathop{\mathrm{%
|
| 75 |
+
\,\|\,%
|
| 76 |
+
}}\mathop{\mathrm{\operatorname{Bern}}}\left(\pi_i \right)\right) \right] \\
|
| 77 |
+
&+ \lambda_1 \left[ \mathop{\mathrm{\mathbb{KL}}}\left(q\left(\mathop{\mathrm{\bm{\beta}}}\mid \mathbf{x}\right) \mathop{\mathrm{%
|
| 78 |
+
\,\|\,%
|
| 79 |
+
}}\mathcal{N}^\text{fold}_\delta\left(0,\mathcal{I}\;\right)\right) + \frac{1}{M} \sum_{i=1}^{M} \mathop{\mathrm{\mathbb{KL}}}\left(q\left(\mathop{\mathrm{\bm{\alpha}}}_i \mid \mathbf{x}\right) \mathop{\mathrm{%
|
| 80 |
+
\,\|\,%
|
| 81 |
+
}}\mathcal{N}^\text{fold}_\delta\left(0,\mathcal{I}\;\right)\right) \right]. \label{eq:binary_vae}
|
| 82 |
+
\end{split}
|
| 83 |
+
\end{align}$$ For the Bernoulli distribution, we use its continuous approximation, i.e., the relaxed-Bernoulli [@Maddison2016-ys] in the training process.
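For illustration, a minimal sketch of a relaxed-Bernoulli (binary Concrete) sample; the temperature value is an assumption, and PyTorch's built-in `torch.distributions.RelaxedBernoulli` could be used instead.

```python
import torch

def relaxed_bernoulli_sample(logits, temperature=0.5):
    """Differentiable surrogate for Bernoulli(sigmoid(logits)) samples."""
    u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
    noise = torch.log(u) - torch.log(1 - u)           # logistic noise
    return torch.sigmoid((logits + noise) / temperature)
```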
|
| 84 |
+
|
| 85 |
+
We expect the binary switches $\mathop{\mathrm{\bm{\gamma}}}$ to have a large causal influence so that they can effectively explain the classifier. To measure the causal effect of $\mathop{\mathrm{\bm{\gamma}}}$ on the classifier output $Y$, we employ the causal DAG in Figure [2](#fig:proposed_dag){reference-type="ref" reference="fig:proposed_dag"} and adopt *information flow* (Definition [\[def:iflow\]](#def:iflow){reference-type="ref" reference="def:iflow"}) as the causal measurement. Our DAG employs an assumption that is fundamentally different from those of standard VAE models. Specifically, the standard VAE model, and also @OShaughnessy2020-yz, assume the independence of latent factors, which is believed to encourage meaningful disentanglement via a factorized prior distribution. We claim that because *useful concepts for explanation* often causally depend on the class information and thus are not independent of each other, such an assumption might be inadequate for discovering valuable causal concepts. For example, in the letter E, the middle and the bottom strokes are causally related to the recognition of the letter E, and the corresponding binary concepts are mutually correlated. Thus, employing the VAE's factorized prior distribution in estimating information flow might lead to a large estimation error and prevent the discovery of valuable causal concepts.
|
| 86 |
+
|
| 87 |
+
Instead, we employ a prior distribution $p^*(\mathop{\mathrm{\bm{\gamma}}})$ that allows correlation between causal binary concepts. Our method iteratively learns the VAE model and uses the current VAE model to estimate the prior distribution $p^*(\mathop{\mathrm{\bm{\gamma}}})$ that most likely generates the user's dataset. This empirical estimate of $p^*(\mathop{\mathrm{\bm{\gamma}}})$ is then used to evaluate the causal objective in Eq. ([\[eq:all\]](#eq:all){reference-type="ref" reference="eq:all"}). Assuming $X$ is a set of i.i.d. samples from the data distribution $p(\mathop{\mathrm{\mathbf{x}}})$, we estimate $p^*(\mathop{\mathrm{\bm{\gamma}}})$ as $$\begin{align}
|
| 88 |
+
p^*(\mathop{\mathrm{\bm{\gamma}}}) \approx \int_{\mathop{\mathrm{\mathbf{x}}}} p^*(\mathop{\mathrm{\bm{\gamma}}}\mid \mathop{\mathrm{\mathbf{x}}})p(\mathop{\mathrm{\mathbf{x}}}) d\mathop{\mathrm{\mathbf{x}}}\approx \frac{1}{|X|}\sum_{\mathop{\mathrm{\mathbf{x}}}\in X} p(\mathop{\mathrm{\bm{\gamma}}}\mid \mathop{\mathrm{\mathbf{x}}}) \approx \frac{1}{|X|}\sum_{\mathop{\mathrm{\mathbf{x}}}\in X} \prod_{i=1}^M q(\gamma_i \mid \mathop{\mathrm{\mathbf{x}}}) \label{eq:estimate-prior}
|
| 89 |
+
\end{align}$$ In the last line, $p(\mathop{\mathrm{\bm{\gamma}}}\mid \mathop{\mathrm{\mathbf{x}}})$ is replaced with the variational posterior $q(\mathop{\mathrm{\bm{\gamma}}}\mid \mathop{\mathrm{\mathbf{x}}})$ of the VAE model. Here, the factorized posterior $q(\mathop{\mathrm{\bm{\gamma}}}\mid \mathop{\mathrm{\mathbf{x}}})$ only assumes the independence between latents conditioned on a sample but does not imply the independence of binary switches in $p^*(\mathop{\mathrm{\bm{\gamma}}})$. We note that we do not aim to learn the dependence between concepts but only expect that $p^*(\mathop{\mathrm{\bm{\gamma}}})$ properly reflects the dependence between binary concepts that appears in the dataset $X$, for a better evaluation of the causal effect. We experimentally show in Subsection [\[ch:experiment-quantitative\]](#ch:experiment-quantitative){reference-type="ref" reference="ch:experiment-quantitative"} that using the estimation of $p^*(\mathop{\mathrm{\bm{\gamma}}})$ results in a better estimation of the causal effect on dataset $X$ and more valuable concepts for the explanation. We also show that, in the proposed DAG, the information flow $I(\mathop{\mathrm{\bm{\gamma}}}\rightarrow Y)$ coincides with the mutual information $I(\mathop{\mathrm{\bm{\gamma}}}; Y)$.
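A minimal sketch of the estimation in Eq. ([\[eq:estimate-prior\]](#eq:estimate-prior){reference-type="ref" reference="eq:estimate-prior"}); `encoder_d` stands in for $Q^d$ and the data loader is assumed to yield batches of inputs only.

```python
import torch

@torch.no_grad()
def estimate_concept_prior(encoder_d, data_loader):
    """Approximate p*(gamma) by averaging the Bernoulli posteriors q(gamma | x) over X."""
    probs = torch.cat([encoder_d(x) for x in data_loader], dim=0)  # (N, M) Bernoulli means

    def prior(gamma):  # gamma: (M,) binary configuration
        per_sample = torch.where(gamma.bool(), probs, 1 - probs).prod(dim=1)
        return per_sample.mean()  # average of prod_i q(gamma_i | x)
    return prior
```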
|
| 90 |
+
|
| 91 |
+
::: proposition
|
| 92 |
+
[]{#pro:mi-as-iflow label="pro:mi-as-iflow"} The information flow from $\mathop{\mathrm{\bm{\gamma}}}$ to $Y$ in the DAG of Figure [2](#fig:proposed_dag){reference-type="ref" reference="fig:proposed_dag"} coincides with the mutual information between $\mathop{\mathrm{\bm{\gamma}}}$ and $Y$. That is, $I(\mathop{\mathrm{\bm{\gamma}}}\rightarrow Y) = I(\mathop{\mathrm{\bm{\gamma}}}; Y) = \mathop{\mathrm{\mathbb{E}}}_{\mathop{\mathrm{\bm{\gamma}}}, Y} \left[\log\frac{p^*(\mathop{\mathrm{\bm{\gamma}}})p(Y \mid \mathop{\mathrm{\bm{\gamma}}})}{p^*(\mathop{\mathrm{\bm{\gamma}}})p(Y)}\right]$.
|
| 93 |
+
:::
|
| 94 |
+
|
| 95 |
+
We prove Proposition [\[pro:mi-as-iflow\]](#pro:mi-as-iflow){reference-type="ref" reference="pro:mi-as-iflow"} in Appendix [\[appendix:proof\]](#appendix:proof){reference-type="ref" reference="appendix:proof"}. The detailed algorithm for estimating $I(\mathop{\mathrm{\bm{\gamma}}}; Y)$ is described in Appendix [\[appendix:algorithm\]](#appendix:algorithm){reference-type="ref" reference="appendix:algorithm"}. Since we want to maximize $I(\mathop{\mathrm{\bm{\gamma}}}; Y)$, we rewrite it as a loss term $\mathcal{L}_\text{CE} = -I(\mathop{\mathrm{\bm{\gamma}}}; Y)$ and optimize it jointly with the training of the VAE model.
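For intuition only, here is a sketch of the resulting Monte-Carlo estimate of $I(\mathop{\mathrm{\bm{\gamma}}}; Y)$, reusing the empirical prior from the sketch above; `class_probs`, returning $p(Y \mid \mathop{\mathrm{\bm{\gamma}}})$ through the generator and the classifier, is again a hypothetical helper, and the differentiable estimation used during training (Appendix [\[appendix:algorithm\]](#appendix:algorithm){reference-type="ref" reference="appendix:algorithm"}) is not reproduced here.

```python
import numpy as np

def mutual_information(configs, prior, class_probs, eps=1e-12):
    """I(gamma; Y) = sum_gamma p*(gamma) sum_y p(y | gamma) log( p(y | gamma) / p(y) )."""
    p_y_given_g = np.array([class_probs(g) for g in configs])   # (2^M, n_classes)
    p_y = (prior[:, None] * p_y_given_g).sum(axis=0)            # marginal p(Y)
    log_ratio = np.log(p_y_given_g + eps) - np.log(p_y + eps)
    return (prior[:, None] * p_y_given_g * log_ratio).sum()     # training loss: L_CE = -I(gamma; Y)
```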
|
| 96 |
+
|
| 97 |
+
Finally, we discuss the integration of the user's preferences or prior knowledge to induce highly interpretable concepts. A problem in discovering meaningful latent factors using deep generative models is that the learned factors can be hard to interpret. Although causality is strongly related to interpretability and can contribute to it, due to the high expressiveness of deep models, a large causal effect does not always guarantee an interpretable concept. For example, a concept that entirely replaces a letter E with a letter D has a large causal effect on the prediction, but it does not provide valuable knowledge and is hard to interpret. To avoid such concepts, we allow the user to implement their preference or prior knowledge as an interpretability regularizer that constrains the generative model's expressive power. The proposed method then seeks binary concepts with a large causal effect within the constrained search space.
|
| 98 |
+
|
| 99 |
+
The integration can easily be done via a scoring function $r(\mathop{\mathrm{\mathbf{x}}}_{\gamma_i=0}, \mathop{\mathrm{\mathbf{x}}}_{\gamma_i=1})$ which evaluates the usefulness of concept $m_i$. Here, $\mathop{\mathrm{\mathbf{x}}}_{\gamma_i=0}$ and $\mathop{\mathrm{\mathbf{x}}}_{\gamma_i=1}$ are obtained from the generative model by performing the do-operations $do(\gamma_i = 0)$ and $do(\gamma_i = 1)$ on input $\mathop{\mathrm{\mathbf{x}}}$, respectively. In this study, we introduce two regularizers based on the following intuitions. First, an interpretable concept should only affect a small number of input features (Eq. ([\[eq:compactness\]](#eq:compactness){reference-type="ref" reference="eq:compactness"})). This desideratum is general and applies to many tasks. The second is more task-specific: focusing on gray-scale image classification, an intervention on a concept should only add or subtract pixel intensity, but not both at the same time (Eq. ([\[eq:directional\]](#eq:directional){reference-type="ref" reference="eq:directional"})). Furthermore, we desire that $\gamma_i=1$ indicates the *presence* of pixels and $\gamma_i=0$ indicates their *absence*. We show the detailed formulation of these regularizers in Appendix [\[appendix:regularizer\]](#appendix:regularizer){reference-type="ref" reference="appendix:regularizer"}. Using these interpretability regularizers, we observe a significant improvement in the interpretability (Subsection [\[ch:experiment-quantitative\]](#ch:experiment-quantitative){reference-type="ref" reference="ch:experiment-quantitative"}) and consistency (Appendix [\[appendix:regularizer\]](#appendix:regularizer){reference-type="ref" reference="appendix:regularizer"}) of concepts.
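The exact formulations are given in Appendix [\[appendix:regularizer\]](#appendix:regularizer){reference-type="ref" reference="appendix:regularizer"}; the sketch below only illustrates, under our own simplifying assumptions, what such scoring functions might look like for gray-scale images, with `x0` and `x1` denoting $\mathop{\mathrm{\mathbf{x}}}_{\gamma_i=0}$ and $\mathop{\mathrm{\mathbf{x}}}_{\gamma_i=1}$.

```python
import numpy as np

def compactness_penalty(x0, x1):
    """A concept should affect only a small number of pixels (spirit of Eq. eq:compactness)."""
    return np.abs(x1 - x0).mean()

def directional_penalty(x0, x1):
    """Switching a concept on (gamma_i: 0 -> 1) should only *add* pixel intensity,
    never remove it (spirit of Eq. eq:directional)."""
    return np.clip(x0 - x1, 0.0, None).mean()   # penalize pixels that get darker
```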
|
2109.05361/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2021-03-24T14:14:55.850Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.66 Safari/537.36" version="14.5.1" etag="q-34wqAlJ_PoZZCmkDFJ" type="device"><diagram id="H3BM3NG19yaPuz3sEnJb">jZNNb4MwDIZ/DdcJSFu1x46W7bCemLRzRlyIFjAKYdD9+jkl4UNTpeWA4sd2nPg1AUuq4UXzprygABXEoRgCdgrieLfb0teC2whith9BoaUYUTSDTP6Ag6GjnRTQrgINojKyWcMc6xpys2Jca+zXYVdU66oNL1zFcAZZzhX8CfuQwpQj3W8X0a8gi9JXjkLnqbgPdqAtucB+gdg5YIlGNOOuGhJQtne+L2Ne+sA7XUxDbf6TEI8J31x17m2f8i17v7jbmZt/ssauFmCzwoA996U0kDU8t96eNCZWmkqRFdH2irVxqkXW5koWNRk53Qo0gUJzIck4SU36SLTOFjvbnMmXoEJ9L86u92UPlkoteJomtIi3RuMXLDynzfFwPJDHvQ+0geFhj6Kp8zSxgBUYfaMQP657J5ab1o1Xs5+199KXS9kPT6693M1bMZ09S0Ibp4o3Z/XvvsUvxM6/</diagram></mxfile>
|
2109.05361/main_diagram/main_diagram.pdf
ADDED
|
Binary file (6.31 kB). View file
|
|
|
2109.05361/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,52 @@
|
|
| 1 |
+
# Method
|
| 2 |
+
|
| 3 |
+
COMBO's architecture (see Figure [2](#fig:overview){reference-type="ref" reference="fig:overview"}) is based on its forerunner [@rybak-wroblewska-2018-semi], which was implemented in the Keras framework. Apart from the new implementation in the PyTorch library [@NEURIPS2019_9015], the novelties are the BERT-based encoder, the EUD prediction module, and the COMBO-vectoriser, which extracts embeddings of UPOS and DEPREL from the last hidden layers of COMBO's tagging and dependency parsing modules, respectively. This section provides an overview of COMBO's modules. Implementation details are in Appendix [6](#sec:implementationDetails){reference-type="ref" reference="sec:implementationDetails"}.
|
| 4 |
+
|
| 5 |
+
<figure id="fig:overview" data-latex-placement="!h">
|
| 6 |
+
<embed src="pictures/COMBO_architecture_new.pdf" />
|
| 7 |
+
<figcaption><span id="fig:overview" data-label="fig:overview"></span>COMBO architecture. Explanations:<br />
|
| 8 |
+
<embed src="pictures/cnn.pdf" /> <embed src="pictures/fc.pdf" /> <embed src="pictures/emb.pdf" /> <embed src="pictures/bilstm2.pdf" /> <embed src="pictures/optional.pdf" /> <embed src="pictures/required.pdf" /></figcaption>
|
| 9 |
+
</figure>
|
| 10 |
+
|
| 11 |
+
Local feature extractors (see Figure [2](#fig:overview){reference-type="ref" reference="fig:overview"}) encode categorical features (i.e. words, parts of speech, morphological features, lemmata) into vectors. The feature bundle is configurable and limited by the requirements set for COMBO. For instance, if we train only a dependency parser, the following features can be input to COMBO: internal character-based word embeddings ([char]{.smallcaps}), pre-trained word embeddings ([word]{.smallcaps}), and embeddings of lemmata ([lemma]{.smallcaps}), parts of speech ([upos]{.smallcaps}) and morphological features ([ufeats]{.smallcaps}). If we train a morphosyntactic analyser (i.e. tagger, lemmatiser and parser), internal word embeddings ([char]{.smallcaps}) and pre-trained word embeddings ([word]{.smallcaps}), if available, are input to COMBO.
|
| 12 |
+
|
| 13 |
+
Words and lemmata are always encoded using character-based word embeddings ([char]{.smallcaps} and [lemma]{.smallcaps}) estimated during system training with a dilated convolutional neural network (CNN) encoder [@dcnn:2015; @strubell-etal-2017-fast].
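A minimal PyTorch sketch of a dilated-CNN character encoder of this kind follows; the layer sizes and dilation rates are illustrative and are not COMBO's actual configuration.

```python
import torch
import torch.nn as nn

class CharCNNEncoder(nn.Module):
    """Encode a word as a single vector from its character embeddings."""
    def __init__(self, n_chars, char_dim=64, out_dim=128, dilations=(1, 2, 4)):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.convs = nn.ModuleList([
            nn.Conv1d(char_dim if i == 0 else out_dim, out_dim,
                      kernel_size=3, padding=d, dilation=d)
            for i, d in enumerate(dilations)
        ])

    def forward(self, char_ids):                      # char_ids: (batch, max_word_len)
        h = self.char_emb(char_ids).transpose(1, 2)   # (batch, char_dim, len)
        for conv in self.convs:                       # stacked dilated convolutions
            h = torch.relu(conv(h))
        return h.max(dim=2).values                    # pool over characters -> (batch, out_dim)
```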
|
| 14 |
+
|
| 15 |
+
Additionally, words can be represented using pre-trained word embeddings ([word]{.smallcaps}), e.g. fastText [@grave2018learning], or BERT [@bert:2018]. The use of pre-trained embeddings is an optional functionality of the system configuration. COMBO freezes pre-trained embeddings (i.e. no fine-tuning) and uses their transformations, i.e. embeddings are transformed by a single fully connected (FC) layer.
|
| 16 |
+
|
| 17 |
+
Part-of-speech and morphological embeddings ([upos]{.smallcaps} and [ufeats]{.smallcaps}) are estimated during system training. Since more than one morphological feature can be attributed to a word, the embeddings of all its features are averaged to build the final morphological representation.
|
| 18 |
+
|
| 19 |
+
The encoder uses concatenations of local feature embeddings. A sequence of these vectors representing all the words in a sentence is processed by a bidirectional LSTM [@lstm:1997; @bilstm:2005]. The network learns the context of each word and encodes its global (contextualised) features (see Figure [3](#fig:global){reference-type="ref" reference="fig:global"}). Global feature embeddings are input to the prediction modules.
|
| 20 |
+
|
| 21 |
+
<figure id="fig:global" data-latex-placement="h!">
|
| 22 |
+
<embed src="pictures/Global_features.pdf" style="width:50.0%" />
|
| 23 |
+
<figcaption><span id="fig:global" data-label="fig:global"></span>Estimation of global feature vectors.<br />
|
| 24 |
+
<embed src="pictures/biLSTM.pdf" /> <embed src="pictures/global.pdf" /></figcaption>
|
| 25 |
+
</figure>
|
| 26 |
+
|
| 27 |
+
The tagger takes global feature vectors as input and predicts a universal part of speech ([upos]{.smallcaps}), a language-specific tag ([xpos]{.smallcaps}), and morphological features ([ufeats]{.smallcaps}) for each word. The tagger consists of two linear layers followed by a softmax. Morphological features form an unordered set of category-value pairs (e.g. Number=Plur); morphological feature prediction is thus implemented as several classification problems, with the value of each morphological category predicted by an FC network. Different parts of speech are assigned different sets of morphological categories (e.g. a noun can be attributed with grammatical gender, but not with grammatical tense). The set of possible values is therefore extended with the NA (not applicable) symbol, which allows the model to learn that a particular category is not a property of a word.
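A sketch of how the per-category classification with an explicit NA value can be organised is shown below (a hypothetical illustration; COMBO's actual layer shapes and category inventory differ).

```python
import torch
import torch.nn as nn

class MorphHead(nn.Module):
    """One classifier per morphological category; every value set includes 'NA'."""
    def __init__(self, feat_dim, categories):
        # categories: e.g. {"Number": ["NA", "Sing", "Plur"], "Tense": ["NA", "Past", "Pres"]}
        super().__init__()
        self.heads = nn.ModuleDict({
            cat: nn.Linear(feat_dim, len(values)) for cat, values in categories.items()
        })

    def forward(self, global_feats):                  # global_feats: (batch, feat_dim)
        # One independent classification problem per morphological category.
        return {cat: torch.softmax(head(global_feats), dim=-1)
                for cat, head in self.heads.items()}
```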
|
| 28 |
+
|
| 29 |
+
The lemmatiser uses an approach similar to character-based word embedding estimation. A character embedding is concatenated with the global feature vector and transformed by a linear layer. The lemmatiser takes a sequence of such character representations and transforms it using a dilated CNN. The softmax function over the result produces the sequence of probabilities over a character vocabulary to form a lemma.
|
| 30 |
+
|
| 31 |
+
<figure id="fig:tree" data-latex-placement="!h">
|
| 32 |
+
<embed src="pictures/arc_prediction.pdf" style="width:100.0%" />
|
| 33 |
+
<figcaption><span id="fig:tree" data-label="fig:tree"></span>Prediction of dependency arcs.</figcaption>
|
| 34 |
+
</figure>
|
| 35 |
+
|
| 36 |
+
Two single FC layers transform global feature vectors into head and dependent embeddings (see Figure [4](#fig:tree){reference-type="ref" reference="fig:tree"}). Based on these representations, a dependency graph is defined as an adjacency matrix with columns and rows corresponding to heads and dependents, respectively. The adjacency matrix elements are dot products of all pairs of the head and dependent embeddings (the dot product determines the certainty of the edge between two words). The softmax function applied to each row of the matrix predicts the adjacent head-dependent pairs. This approach, however, does not guarantee that the resulting adjacency matrix is a properly built dependency tree. The Chu-Liu-Edmonds algorithm [@chuLiu:65; @Edmonds:67] is thus applied in the last prediction step.
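A sketch of the dot-product arc scorer described above follows (the Chu-Liu-Edmonds decoding step is not reproduced); the projection size is an arbitrary choice, not COMBO's.

```python
import torch
import torch.nn as nn

class ArcScorer(nn.Module):
    """Score head-dependent arcs as dot products of projected global feature vectors."""
    def __init__(self, feat_dim, arc_dim=512):
        super().__init__()
        self.head_proj = nn.Linear(feat_dim, arc_dim)
        self.dep_proj = nn.Linear(feat_dim, arc_dim)

    def forward(self, feats):                 # feats: (sent_len, feat_dim), incl. ROOT token
        heads = self.head_proj(feats)         # candidate head representations
        deps = self.dep_proj(feats)           # dependent representations
        scores = deps @ heads.t()             # adjacency matrix of dot products
        return torch.softmax(scores, dim=-1)  # per dependent (row), a distribution over heads
```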
|
| 37 |
+
|
| 38 |
+
<figure id="fig:labels" data-latex-placement="h!">
|
| 39 |
+
<embed src="pictures/label_prediction.pdf" />
|
| 40 |
+
<figcaption><span id="fig:labels" data-label="fig:labels"></span>Prediction of grammatical functions.</figcaption>
|
| 41 |
+
</figure>
|
| 42 |
+
|
| 43 |
+
The procedure for predicting words' grammatical functions (aka dependency labels) is shown in Figure [5](#fig:labels){reference-type="ref" reference="fig:labels"}. A dependent and its head are represented as vectors by two single FC layers. The dependent embedding is concatenated with the weighted average of the (hypothetical) head embeddings; the weights are the values from the corresponding row of the adjacency matrix estimated by the arc prediction module. The concatenated vector representations are then fed to an FC layer with the softmax activation function to predict dependency labels.
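A companion sketch of the label classifier, where the dependent embedding is concatenated with the head embeddings averaged under the arc-probability weights (dimensions again illustrative):

```python
import torch
import torch.nn as nn

class LabelClassifier(nn.Module):
    def __init__(self, feat_dim, lab_dim=256, n_labels=37):
        super().__init__()
        self.head_proj = nn.Linear(feat_dim, lab_dim)
        self.dep_proj = nn.Linear(feat_dim, lab_dim)
        self.out = nn.Linear(2 * lab_dim, n_labels)

    def forward(self, feats, arc_probs):       # arc_probs: rows of the soft adjacency matrix
        heads = self.head_proj(feats)
        deps = self.dep_proj(feats)
        soft_heads = arc_probs @ heads         # weighted average of hypothetical head embeddings
        return torch.softmax(self.out(torch.cat([deps, soft_heads], dim=-1)), dim=-1)
```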
|
| 44 |
+
|
| 45 |
+
Enhanced Universal Dependencies (EUD) are predicted similarly to dependency trees. The EUD parsing module is described in detail in @klimaszewski-wroblewska-2021-combo.
|
| 46 |
+
|
| 47 |
+
::: table*
|
| 48 |
+
:::
|
| 49 |
+
|
| 50 |
+
COMBO is evaluated on treebanks from the Universal Dependencies repository [@ud25data], preserving the original splits into training, validation, and test sets. The treebanks representing distinctive language types are summarised in Table [\[tab:statistics\]](#tab:statistics){reference-type="ref" reference="tab:statistics"} in Appendix [7](#sec:data_statistics){reference-type="ref" reference="sec:data_statistics"}.
|
| 51 |
+
|
| 52 |
+
By default, pre-trained 300-dimensional fastText embeddings [@grave2018learning] are used. We also test encoding data with pre-trained contextual word embeddings (the tested BERT models are listed in Table [\[tab:berts\]](#tab:berts){reference-type="ref" reference="tab:berts"} in Appendix [7](#sec:data_statistics){reference-type="ref" reference="sec:data_statistics"}). The UD datasets provide gold-standard tokenisation. If the BERT tokeniser splits a word into sub-words, their last-layer embeddings are averaged to obtain a single vector representation of the word.
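A small sketch of this sub-word averaging step under gold tokenisation; `last_hidden` and `word_to_subwords` are assumed inputs, not COMBO's actual variable names.

```python
import torch

def average_subword_embeddings(last_hidden, word_to_subwords):
    """last_hidden: (n_subwords, hidden_dim) last-layer BERT vectors;
    word_to_subwords: one list of sub-word indices per gold token."""
    return torch.stack([last_hidden[idx].mean(dim=0) for idx in word_to_subwords])
```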
|
2111.00295/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2021-05-28T00:40:09.383Z" agent="5.0 (X11)" etag="0B6FSwfg3wmZ1_2rURZK" version="14.7.0" type="device"><diagram id="mcmM5te_Vwr_j3e1n-j4" name="Page-1">7V1pd6JMsP41c869H+Y97MtHQFDcEFzxG5vs+yL66y+daCaimTGZGGfmOieTQINFU/X001XVBX5DubDuZlrijGLTCr4hkFl/QzvfEITGkOY3aNg9N+Ak+txgZ6753AT/aJi6e+vQCB1aS9e08pMTizgOCjc5bTTiKLKM4qRNy7J4e3raJg5Or5potnXWMDW04Lx16ZqF89xK4dCP9p7l2s7xyjB0OBJqx5MPDbmjmfH2VRPKf0O5LI6L562w5qwA6O6ol+fPCW8cfelYZkXFNR+wfGFHjOGejagYlY7E+XiRfceepVRaUB5u+BtCBI08Vm82bLBxbNjEzXWa2yh2B90QaRkfD3zPnyzHNCfAWFL/OHiU0j+KycsXybNjW9PpV82vWp+vedqcl8mLAE48EZFcEPFKLHLSfaSwatDuFGHQNMDNpha4dtRsG41KraxpqKyscBswMIcDoWua4ONsZjU3rOlPoqBmP4ndqHiCGs5+wztAVlnEz0p5En1urYMBwSWs+lXTwXpdKw6tIts1pxyPksTzRw5DiTiMpO0PXB5h6byCJHpo0w4jwX4R/AMszcYBL+/ADvEmdky3alvixQwjrXZDoJdmIGfNBy9a6qXtgqSfCTcCLc+BlY04exGtZz+HxeWL/eVgwQjkBCwIRJyhhf5KtJD3ZJrpz5nmnZRyiZX+KaZB4DZ48PtSDfL2PPV7YDnDYGN9obE5JIbAMfgJX1xklVNk/OUowNDT+QbGyTMUUBdQgNwMBW/POPpVEMAvQWB4gS+UWC/z4pfuyb8z4jGqNeKJ8xFP3sjWO1T01htX2brRDEPWTFdltO/w2/PF1SOeumrETxwtB95IoyDoe/P/yR6hFRU/G/3/2lhHSOrE/ujR2K/sD1/yFz4DACQzgpj+tOvF+mC4I4MRI8Pf0Rv4CxfHv2JtrEZzhvWmbd+Gwc980c/p3dgqtnHmv2PW+bCLm8VlZFrmAVRbxy2saaIZ4Oi2ie1P4ZwXWey/hMQouB03CLg4iLMnaShhUiSBvZz56gjK4BgKHxTwqn3z9O+lZ2fgvQDxN/GMHvnjgGeMPoczcgHOxK3gjF+Ac0v/lmlb08NubtmAgvgfTawVmQxIaAAOAWGOa7zTIiRBQJR2ySIYinSayf2oedCR9+r9F57hsS2zAq1wq1Pxl5R9uMIEcN8rmsKg02kKoU5F5HGZGdbhU6+TIS1BbXwgREtQoWW2VZwJaiyg7V6dduDm6ztM0C0oPUv8AawXnX4cax8LzD+Hr0aaG30CNT3o9Vf0ipEERV6kVxhDcexr6RUm4TvzK3UHfrVqt1g129+h/yCgracGFXht/2Ekftjv1Ac37mln92pnYmVuc/PAGXxqixs0uAU4A4P+Ai7GCKK57RYdYx+jYwSn2oL+w3+TkE/g9AmsSt8RYdAJuH4OrWtx0/T8yTq/8sHvhS+0BS6SakPianQhp2xF0ldB67OAc1zK+lrkRE2nX6ADdp6x88xLYP8Hep72jvD5B51G6NT8NPQxGJ35jG1Bn+UzXu7vm+iGLnfrpi7mMUF8nY/5ez7amevXcXOjmTrdSCvi7F1Zmnc6m5/c7//ZXEg1ms3NXE40/u9PvdKP3uxNvdIr2OILvVIEO09ifq1XCl9at7g19f/pfIyhLTMdS0LeS8gY1RZE3ISQzztMfKhft2Xke2SY/nioYa049WjJd8/9ZDvgvW7u/zTrXsrpPKx7K+vCX2zdS9mLRxnWzZbKfsvLII5rCcelha8rw7oMnkuJiUcd1h+CFhrFT6mF+ro6rItwQS5lIx6FWH8mekiYaqHn6wqxLqPn7bhf/3ik/K+U4PweU+CnTHEper1VCc5lWz9KMP7yNcK7lmCQx2ntJS95BuevTcZcLCK9dQz176XTSay1VtcOlq6Nus4Agt2mBuOsw4f9m6ZjkEvpmEcRxj9GsHctwiCplnOI37sI42KB9q0Z9i8rm2gHhE+Fxx+iz3bNBAT9cTUTyKUq7lsD4s5FE8hdqyZI8hRe+IerJkjodHrGv7ZqArlLRdejauIlzUmdmr+94PFhL+/KlZP3enlv9PetbrVPb51/I6fwXanaR9XEo2riC6omzgYocu+qiWP28i5eA3wnr4G+q1MKtzCAf7Akgybagm5TknHe4S8osUCPj9U9FuFfWQI6XQ6F4ZafeC10KLwlCL/OUfg06yIP616IVrHbWPfK2tlPM+7baxkPz+tdnpdRZtn3f8r92mwswjAuuV8mSevQj4n+k5dDzgsBvtjNevulGlcWjnB88yuIQanH/+cSD6TFkdfmYW/37Pw9akH/tjws8pIZO9qNgD+cK0Ootqirprevy8Sil1Lzf2DZD6CUG1X9/GKe+AQmIOgWE2Dnb8whLkAauxkR3CP//tHp9c/lCoI8dV5J5GM00S4dhduG/6xcKNmOyX6RDG33C/6CZCh676Jnzv85K31R4fMX0FJ7drpQsvyltITdI8H3D9JSq7AFJj/4/EO7kgC9FS+1O/yLR1vb5x/7dVNewu6R5fv3wHlWutx+OO/D4IRbKP+sMjH48nVuC7ZLealbg+2WSx33W8GAT62HfDBRSaMtGLRr6T9tBQO+eJ3bwu3tlI9+lX91dfn+U5rws2r4b+8i0a33dKHHJ7fv9ZgO9nb96I3efwmuBt5/qVgbN7JevwDzg6nce9mSbOXj0OOrsF/ZEoG+1JhX1EXmjpaATffpxcM/UpxDTbeCSZy7hRuDVKceF0UcNicE4ACrGb79lA6/VB3QTpMWMdC2lifP35SwcWuQRGefLskcW6FjS7NtaoXWgOl5FxGSyG7CJnfBSsoWGnTtmGn+jadzh5/bzRYHdhmbY8TmT4fqSojXbPT7fMDLC0V05RU7Vb8hrL9GeAbrmZsqW6ZllhDhit+v7f2y5rX+UBVl11EYPOl7e8/3DGvaXNkwYsdVWHHIrHMF6mgYHpVLPY1SskR0tEL7BtXZNOdJ4Fc9V/1im/qi77hzhsDyuutZWdcW0cxlIy0mpT6O9UNyFUiDKJjHTiJ0ibGynS6bm2TNmcuHUbwx99HQYdw0i/vsaMT7K7M56EiCE3btwU4ItTllwt6AZCcaLzmjgS+XXZVOOxnHOgROdlGS5aaMHcx4lPcsZaIN5uyE5llUUZLugBzsuktl7MQBn25cVWXLmbgesro9YFKf7es4wqdTxGQzvm66P5rwm95kFLOjoPKV3KJ2mC13m1st50bOUJaj9ElFTpJ5yROSW1PKxElMOW46vBR8ld1HY2bTYzb6dBfURLVE+UlaKcKmGsRDUdpM8brXHYbWimJKzncVu6oSm8OnyliM+Q7vs1NMmXFDl5cEngo9Wx4spxMV2YPrj7q45
U2aCxFU4BRWZzXuocNIoJIOKjEOBo8FJtENpiBVn8h5biypsjiCRB/zzIm74sZ9XJJddkYHg92Mk/lNxap2R9q5u9kMHiqLPcLlRgMFl8+FsrnKfGvZCIxIQx7d9+YxH/f6BSsGHaubJ3Ex3ghmsuvst6XZwRdaajCrgJvPOU62Ko3UZjDT05kFsPHewFSGSbANpaj4pExUbdFx4J6UxqlfaSXTRYXcRV1mhxNztUBmzWcowsUHXchw626+plh5nnR8c4nLjOgtfWpsqWVXqzs+to0ddbgMmXW4UFlhx3qlZ3PhUhroaoNkViiYNZ9v9+x2spKDHp3jOCoNMczZKwG7EmXa9Dy0m85sbiJNRa8MCjS1kICzhlw61NYY7/GT+WBtZqhFSx2PHQ+HYYKmjS0WUDO0MGIlxxqtTTaFF8wBE7JLj6NEcZEJylB3RWbrOpNZqjpE4Vci2ViLx4c9zh15WOKhOUsOgX62PukwTN1dzkrZJdZN08BGywDEY6y4RaVKyhZ57M9mWpiO9lRCGCi6XPF0jE5GzSm2lBb5DjNtihbcmTINM24rSuCzJT1Mqm2cGsU44raEvEtkWpORsswAx/qqKrGQMGEEHk0YHRnsgM37hKSP87BTM4ICuACtcEuoG/+OnaDoKI9tXTOXe5lcavN8OrT0fT9EyV7DNIIyHYoiUWow63s8peCO4Y3hNbsxDbbs8fsRPzfjMvVcLh84xEifSz0Jx7q828ybwoSfbMhBKhlgsvOsXOiEMx5Wgm3lisOl3+v0Apd3EqgclgQagI42P5YZ2ZAjLJJNJNWGx6ILVK2CTO9AuFTqnWgJtNtv5lOB3AG6IpPm93riGXDRH6HiDqopajNVF8jOHvYxGFtM6f2iP+eicMYpWM3yvpVZyorFTXu8LnbrfYjuKCeaDFjEmqFAbCOftkYLwIMEPZjsl0pvTKe7vAdeIcGi6XZsN8TBSvLKB1y88eAd0aGoVLYwbrvo6PzM2CziBLIHmMMPetmSU7KOEaeQXm6nPYGi0WBv6TMkLDVX05qrOAATk6xQIwNZqUNoQw+UVUKVm81kXWloCTpFE0IzIY6xZUpXOk1Qhg8mVETQbKrEGGHIK4EUDAZmjKB9LuQGmrQTlxKq1IlnTdAFHZiAa1gkwopotpn04yruVAt9bEQbB452GPA9SN8qqmaDcvUcm6BC0jcSQplkxN5E6J5hO003G+0IADem5FFEf59BMy5m+W4t+zN+ZBgCok0rRoAKPGS2vv1s0uYjbB8tBkOFKgZ5Evh8XU0jcKWELhfP99H8ZIi6mK4ar0g4NAizLV2bHO7I/VTFgap7PYqQao+B6z3atws8XnbsLWdg9V5cyFSJBzs78lJ3GinGpEpXe/CtCyxbRjDAyFry7LGNWt6i3IXP3QJje6UDq6N4bc2DpR43TCxULAvRHDxBS7Uh/O4Q3HQGbD3UyTmlwrmZQYWMcyJcrN2M7NhOyjPKxNyXcx+x51mj8NKqYUtfEJsas0dddWfuZ7OpNUImhawSW1Ii9rY+6XiLSiAGfN8CYKugHluTFYqjPDeTC3+b9yQ+HCFmiIKB31/1OZ3dI8JKFnxM6+1HNYHoMA3ty3qTMfByi3mCm831RBkhOWWCcYJ30hjRteWQUOVgMuYBzsZYJWoV2aifXYeDbTdpzCngdIR6eVVJo60Xp9PA4WWidMpCXbsGVUsaZoCzNgdDoc+qq7hNlqZktffpKO6AyQxDU9McW3OJlYWJpcndiq1HEr7rNces7ra71CECEC1JjatnGVZSj+V8NJGqTlVTZobjlM8z0HzXr8gyXO2B5YBLxmbrTS+s+Wij0yOrAp8m3WwL5M4IoDy6YT+208O0/nhKj1KHJrKtRYTLWhiD4xshllla1zdlBu58n0RyBnoCh3CY+wYEA4meO95QIZj02SJqZq89RQN4SkpqsVWKF5AKRlEs2eI8ympsGq7EHUvCLqmYK5mgELFf9YOZZOu+AXq2QXdEtS5nZZLYY3TZjZIIx2UNX87JygbXA9KW3akND8X91t3SqjjvoEQNphxiQeeNmlnYrFYeuOhmRpGhs/H0Sc8fs+x+OBQEtVQhG1MYTZ8FhlrDOT5rdDip6NxvHCtkxPR8qNS7xc63JrOIbabHvrzgMXsLtEpRpZVDu3FfXwSwujOGoctIllehyS7pePU8aWib5QV5FHW0kE01dd4VFwOmM41Fk3F6zThlC2421hYCp+iBYeKL/gjOwzBQRFFkqgFXjPztoEiRsJgBxw9e1gllKIwSrF0qnMu9vir4iSu6zRS8UEk79tP+VBT7RpV25QE73IUub69GkZjmq7Ey7nU0SmQaZ89lfSYXOYvpu6jWJ1M+mwUDr+tJC3HqxCqs2qtQZTAutAasQRcpV6wdJUxTKg67PXHVH0u7SJ7m+D6ICFjqD2pDmzEpave2PRZRPZZfLpORHG89diq6yyrZOiPKR00ewKIX9Wy8z0F0wKwEZ6porLybr7mdz5Smy7LwBIncvmwTDrslNCbtqqtxv0yzYTAJ8kVZe93YpRK48ZgZUTXtirTLna1QM94jkpRpEDrQmV229BM9jUVl0B246zquV+OqL24lcbSOh4ns1p0Byk8VEuWWU7MZq3kz0AXZLZpBZ+81ZrQUlabrzLw/B2CbMAQyneOdesYKo5GhGZ6ae+5C8UfFksG7DJQk4na+nQ66gurQi61op1NxRFGsIva3ETCblE7hHgKWe9i0ry2ZeBwGATOccwS/xhedwXwkR2tnlTbRgTvdV8sRFO7GjJ+CUdQMJsfha9FYThmaEmzIXC5nibOYjCkdWdjTrc02WvamE3vkTGdMh5+LjbGIeMfJeM+vZ8Ygp1GJTsIlnwSMKvrRaGgTcRwoaEU9kxGLUKbU2w163lPsxAfCzJ+WcshxnxMLYzjSrnKAzxd/jsvUJ4s/5I2CYfyK/PojGH4Ew49g+BEMP4LhRzD8CIYfwfAjGH4Ew49g+BEMP4LhRzD8W4v8rWCYvncsjPz+Kv//x++1/dTndS58d9Wtvtb2MgiuqAH7jSfe4FZ5YRRHQPOmljsvAj9Bq+1vp0fPH31AKfiCWin0Vnq94n1/f75eCeS0Mgkm/zt/PwOGXipnwc7e/fNpuiWuqOB+rTlHM58qFKHXGoLeRRXHnGBY241lnP80UPSYI89/gdinUsX/CLAJdIfQzVYUF8aTsfBr7fcJJsOPT3+9PA12/qjnJ9XbAxc+Bsz/o0oQKGcUmxY44/8A</diagram></mxfile>
|
2111.00295/main_diagram/main_diagram.pdf
ADDED
|
Binary file (52.5 kB). View file
|
|
|
2111.00295/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,184 @@
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Ever since deep neural network (DNN) models emerged as the de facto technique for many vision problems, adversarial robustness has been a critical need. Goodfellow et al. [@goodfellow2014explaining] identified this serious issue by showing that a DNN model can produce very different predictions for similar-looking images. Many efforts have followed since, either to devise fooling techniques [@goodfellow2014explaining; @carlini2017towards; @madry2017towards; @moosavi2016deepfool; @croce2020reliable; @tramer2020adaptive] or to defend deep models against them [@madry2017towards; @xie2019feature; @singh2019harnessing; @chan2020thinks; @chan2019jacobian; @cai2018curriculum; @wang2019convergence; @wu2020adversarial; @wang2019improving; @sitawarin2020improving; @zhang2020geometry; @zhang2020attacks; @shafahi2019adversarial; @kannan2018adversarial; @balaji2019instance; @wong2020fast; @vivek2020single; @sarkar2020enforcing; @pang2019improving; @qin2019adversarial; @tramer2017ensemble]. Both of these research directions remain important and require more attention. In this work, our effort lies in devising an adversarially robust training technique.
|
| 4 |
+
|
| 5 |
+
Adversarial training (AT) [@madry2017towards] is the most widely used method for adversarial robustness and most of the improvements have since come by adding regularizers without changing the min-max formulation. The regularizers are added either in the inner maximization [@zhang2019theoretically; @singh2019harnessing; @sitawarin2020improving] or outer minimization [@kannan2018adversarial; @wang2019improving] term. Though AT-based methods are shown to perform well, they incur additional cost due to an iterative inner maximization step. Our proposed robust training method circumvents this fundamental issue.
|
| 6 |
+
|
| 7 |
+
To avoid the costly min-max optimization of AT, we revisit adversarial robustness in terms of standard model training. Adversarially robust models are shown to exhibit better alignment between saliency and object features in [@etmann2019connection]. We enforce such alignment through training to achieve model robustness: the saliency of the main model is forced to follow the object features provided by the saliency of a pre-trained reference model. We hypothesize that only perturbing the most discriminative part of an image should be able to make a model change its decision about that image; in other words, perturbing the remaining, non-important pixels should not affect the model's decision, and a model with this property can achieve adversarial robustness. We realize this by progressively narrowing the object-discriminative region according to its importance, in a curriculum learning sense, and forcing an adversary to perturb only those pixels while changing the model's decision. When attacked, a model trained with this approach restricts the perturbation to the object pixels. This diminishes the range of possible perturbations and limits the attack strength, reducing the chance that the model changes its decision for the perturbed image. Being non-iterative, our approach not only takes considerably less training time than AT, but also significantly outperforms iterative methods (such as [@madry2017towards; @xie2019feature; @singh2019harnessing; @zhang2019theoretically; @zhang2020geometry; @wu2020adversarial; @wang2019improving; @cai2018curriculum; @wang2019convergence; @zhang2020attacks; @sitawarin2020improving]) and non-iterative methods (such as [@babu2020single; @chan2019jacobian; @chan2020thinks; @shafahi2019adversarial]) in both natural and adversarial accuracies.
|
| 8 |
+
|
| 9 |
+
Our key contributions are summarized as below.
|
| 10 |
+
|
| 11 |
+
- We propose a non-iterative novel robust training method, which outperforms recently proposed SOTA techniques, irrespective of their type being iterative or non-iterative, in terms of both natural and adversarial accuracies against a wide range of attacks.
|
| 12 |
+
|
| 13 |
+
- Being a non-iterative method, our approach is easily applicable to any large dataset to obtain an adversarially robust model. It takes 10-20% of the time required by adversarial training techniques.
|
| 14 |
+
|
| 15 |
+
- Our method attains a clean accuracy much closer to that of naturally trained models than other robust models do.
|
| 16 |
+
|
| 17 |
+
- We perform extensive experimentation on CIFAR-10, CIFAR-100, TinyImageNet datasets and report comparative results against all other, including recently proposed, adversarial robustness techniques. We also present various studies in detail to analyze the effectiveness of our method.
|
| 18 |
+
|
| 19 |
+
**Adversarial robustness**: Naturally trained deep models are easily fooled by image perturbations that are imperceptible to humans [@goodfellow2014explaining]. Extra measures need to be taken to make them adversarially robust, i.e. to make them stick to their decision even when an image is perturbed. Consider an image classifier $f(x;\theta):x\longrightarrow\mathbb{R}^c$ with parameters $\theta$ that maps an input image $x$ to a $c$-dimensional output. The network $f$ is called adversarially robust if: $$\begin{equation}
|
| 20 |
+
\underset{i \in c}{\mathrm{argmax}} f_i(x;\theta) = \underset{i \in c}{\mathrm{argmax}} f_i(x + \delta;\theta)
|
| 21 |
+
\end{equation}$$ where $\delta \in B(\epsilon)$ i.e. $\delta:||\delta||_p \leq \epsilon$.
|
| 22 |
+
|
| 23 |
+
**Adversarial training (AT)**: AT [@madry2017towards] was proposed to achieve robustness against adversarial examples and is represented by the loss function below: $$\begin{equation}
|
| 24 |
+
L(x,y) = \mathbb{E}_{(x,y)\sim \mathcal{D}}\left[\max_{\delta \in B(\epsilon)}L(x+\delta,y)\right]
|
| 25 |
+
\end{equation}$$ $\delta$ is found by projected gradient descent (PGD), which performs the iterative gradient step below: $$\begin{equation}
|
| 26 |
+
\delta \longleftarrow \mathrm{Proj}[\delta + \eta\, \mathrm{sign}(\nabla_\delta L(x+\delta,y))]
|
| 27 |
+
\label{eq:delta}
|
| 28 |
+
\end{equation}$$ where $\mathrm{Proj}(x) = \underset{e \in B(\epsilon)}{\mathrm{argmin}} ||x-e||$. Effectively, AT comprises an inner maximization, which generates a perturbed image within an $\epsilon$-ball, and an outer minimization, which tunes the model parameters on the perturbed images.
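For reference, a compact PyTorch sketch of the PGD inner maximization in Eq. ([\[eq:delta\]](#eq:delta){reference-type="ref" reference="eq:delta"}) under an $l_\infty$ ball; this is a generic illustration, not any particular codebase, and the step size and iteration count are chosen arbitrarily.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, step=2/255, iters=10):
    """Approximately solve max_{||delta||_inf <= eps} L(x + delta, y)."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(iters):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()   # ascent step on the loss
            delta.clamp_(-eps, eps)             # projection back onto B(eps)
            delta.grad.zero_()
    return (x + delta).clamp(0, 1).detach()     # keep the adversarial image in valid range
```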
|
| 29 |
+
|
| 30 |
+
# Method
|
| 31 |
+
|
| 32 |
+
Adversarial robustness aims to keep model predictions stable under adversarially perturbed images. Moreover, for adversarially robust models, attribution maps tend to align more with the actual image content than for naturally trained models. We study this connection below.
|
| 33 |
+
|
| 34 |
+
**Robustness and alignment**: Consider an $n$-class classifier $F(x) = \underset{i}{\mathrm{argmax}}\, \Psi^i(x)$, where $\Psi = (\Psi^1,...,\Psi^n):X \longrightarrow \mathbb{R}^n$ is differentiable in $x$. We call $\nabla \Psi^{F(x)}$ the saliency map of $F$, and the alignment [@etmann2019connection] with respect to $\Psi$ at $x$ is given by $$\begin{equation}
|
| 35 |
+
\alpha(x) := \frac{|\langle x,\nabla \Psi ^{F(x)}(x)\rangle|}{||\nabla \Psi ^{F(x)}(x)||}
|
| 36 |
+
\end{equation}$$
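To make the quantity concrete, here is a small sketch (ours, not the authors' code) of computing the saliency map $\nabla \Psi^{F(x)}(x)$ and the alignment $\alpha(x)$ for a single image with PyTorch.

```python
import torch

def alignment(model, x):
    """alpha(x) = |<x, grad Psi^{F(x)}(x)>| / ||grad Psi^{F(x)}(x)||."""
    x = x.detach().clone().requires_grad_(True)
    logits = model(x.unsqueeze(0)).squeeze(0)
    top_score = logits[logits.argmax()]          # Psi^{F(x)}(x): the predicted-class logit
    grad, = torch.autograd.grad(top_score, x)    # saliency map
    return (x.detach() * grad).sum().abs() / grad.norm()
```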
|
| 37 |
+
|
| 38 |
+
The connection between robustness and alignment was studied in [@etmann2019connection]; it is formalized in Theorem 2 of that work, which states that a network's linearized robustness ($\hat{\rho}$) around an input $x$ is upper bounded by the binarized alignment term $\alpha^+$ as: $$\begin{equation}
|
| 39 |
+
\hat{\rho}(x) \leq \alpha^+(x) + \frac{C}{||g||}
|
| 40 |
+
\label{eq:bound}
|
| 41 |
+
\end{equation}$$ Here $C$ is a constant, the linearized robustness $\hat{\rho}(x)$ is given by $$\begin{equation}
|
| 42 |
+
\hat{\rho}(x) := \min_{j\neq i^*} \frac{\Psi^{i^*}(x)-\Psi^j(x)}{||\nabla\Psi^{i^*}(x)-\nabla\Psi^j(x)||}
|
| 43 |
+
\label{eq:bin_align}
|
| 44 |
+
\end{equation}$$ Also, $g$ is the Jacobian of the top two logits i.e. $g = \nabla(\Psi^{i^*}(x)-\Psi^{j^*}(x))$ and binarized alignment i.e. $\alpha^+$ is given by $$\begin{equation}
|
| 45 |
+
\alpha^+(x) = \frac{|\langle x,\nabla(\Psi^{i^*} - \Psi^{j^*})(x)\rangle|}{||\nabla(\Psi^{i^*} - \Psi^{j^*})(x)||}
|
| 46 |
+
\end{equation}$$ Here $j^*$ is the minimizer of Eqn [\[eq:bin_align\]](#eq:bin_align){reference-type="ref" reference="eq:bin_align"}. We also have $\alpha(x)=\alpha^+(x)$ for linear models and binary classifiers. Eqn [\[eq:bound\]](#eq:bound){reference-type="ref" reference="eq:bound"} quantifies how far the linearized robustness of a neural network can deviate from the alignment term, and a small error term in Eqn [\[eq:bound\]](#eq:bound){reference-type="ref" reference="eq:bound"} implies that robust networks yield better alignment, i.e. more interpretable saliency maps.
|
| 47 |
+
|
| 48 |
+
Adversarial robustness can also be viewed from the angle that the model's decision on an image should be changeable only by perturbing the pixels of the object, and not through any pixel outside the object. We incorporate these ideas through our two-phase training method to achieve an adversarially robust model. Current methods for adversarial robustness in the literature have two key drawbacks: (a) adversarially robust models are shown to perform poorly on clean data; there is a clear trade-off between adversarial and natural accuracies, which is explored in [@tsipras2018robustness]; (b) almost all SOTA adversarial robustness methods rely on iterative adversarial training frameworks, which makes them very costly to apply. The proposed method aims to solve both issues: a model trained with it raises the bar for both adversarial and natural accuracy, and the training strategy does not rely on the iterative adversarial training framework and is thus very fast.
|
| 49 |
+
|
| 50 |
+
**First Phase - Enforcing Alignment.** This phase enforces the alignment of the attribution map with the object in the image, as explained before. Let us assume we have a pre-trained teacher network (represented as $f_{T}$). We also consider a student network, represented by a neural network $f_{S}$ parameterized by $\theta$, and a discriminator network $f_{disc}$ parameterized by $\phi$. Given an input image $x$, we obtain the saliency map from the pre-trained teacher network, denoted **$J_{T}^{TCI}$** (TCI stands for true class index). We then take the gradient of the student network's true-class prediction score with respect to the input pixels, i.e. the net change of that score per input pixel, which is represented as $J_{S}^{TCI}$.
|
| 51 |
+
|
| 52 |
+
Now for an image of dimension $h\times w$ with $c$ channels and $d=h\times w\times c$, $J_T^{TCI}$ can be considered as per-pixel gradient and represented as: $$\begin{equation}
|
| 53 |
+
J_{T}^{TCI}(x) = \nabla\Psi^{f_{T}(x)}(x) = \left[\frac{\partial \Psi^{f_{T}(x)}(x)}{\partial x_1} \;\dots\; \frac{\partial \Psi^{f_{T}(x)}(x)}{\partial x_d}\right]
|
| 54 |
+
\end{equation}$$
|
| 55 |
+
|
| 56 |
+
Similarly, $J_{S}^{TCI}$ is represented as: $$\begin{equation}
|
| 57 |
+
J_{S}^{TCI}(x) = \nabla\Psi^{f_{S}(x)}(x) = \left[\frac{\partial \Psi^{f_{S}(x)}(x)}{\partial x_1} \;\dots\; \frac{\partial \Psi^{f_{S}(x)}(x)}{\partial x_d}\right]
|
| 58 |
+
\end{equation}$$
|
| 59 |
+
|
| 60 |
+
We visualize the first-phase training process in Figure [1](#fig:training){reference-type="ref" reference="fig:training"}. Here we enforce that the set of pixels responsible for increasing the true-class prediction score is the same as the set of actual object pixels highlighted by the reference teacher saliency map. This amounts to imposing similarity between the two saliency maps $J_{T}^{TCI}$ and $J_{S}^{TCI}$. Inspired by [@chan2020thinks], we adopt the concept of a discriminator from GANs [@goodfellow2014generative], which tries to differentiate between real and fake images and backpropagates a signal that forces the model to generate realistic-looking images. Here, our sole purpose in using a discriminator is to push our model towards saliency maps that match the teacher saliency. Hence, we minimize the following objective function: $$\begin{equation}
|
| 61 |
+
\theta_{optimum} = \underset{\theta}{\mathrm{argmin}} [\mathcal{L}_{CE} + \beta \mathcal{L}_{Robust}] ;
|
| 62 |
+
\end{equation}$$
|
| 63 |
+
|
| 64 |
+
where the cross-entropy loss is $\mathcal{L}_{CE}=\mathbb{E}_{(x,y)}[-y^T \log f_S(x)]$ and $\mathcal{L}_{Robust}$ is defined as below: $$\begin{equation}
|
| 65 |
+
\label{eq:l_robust}
|
| 66 |
+
\mathcal{L}_{Robust} = \mathbb{E}_{J_{T}}[\log f_{disc}(J_{T}^{TCI})] + \mathbb{E}_{J_{S}^{TCI}}[\log (1 - f_{disc}(J_{S}^{TCI}))]
|
| 67 |
+
% + \E_{J_{S-down}}[\log (1 - f_{disc}(J_{S-down})]
|
| 68 |
+
\end{equation}$$
|
| 69 |
+
|
| 70 |
+
Now it can be shown that the global minimum of $\mathcal{L}_{Robust}$ is achieved when $J_{S}^{TCI}$ matches $J_{T}^{TCI}$ [@chan2019jacobian], which justifies using the discriminator loss for our purpose. We also add a loss term that minimizes the $l_2$ distance between $J_{T}^{TCI}$ and $J_{S}^{TCI}$, denoted $\mathcal{L}_{diff}$ and defined as below: $$\begin{equation}
|
| 71 |
+
\mathcal{L}_{diff} = || J_{S}^{TCI} - J_{T}^{TCI} ||_{2}^{2}
|
| 72 |
+
% + || J_{S-down} - J_{T} ||_{2}^{2}
|
| 73 |
+
\end{equation}$$
|
| 74 |
+
|
| 75 |
+
Hence, we minimize the complete objective function as below: $$\begin{equation}
|
| 76 |
+
\label{eq:alignment}
|
| 77 |
+
\theta_{optimum} = \underset{\theta}{\mathrm{argmin}} [\mathcal{L}_{CE} + (\underbrace{\beta \mathcal{L}_{Robust} + \gamma \mathcal{L}_{diff}}_\textit{alignment loss})] ;
|
| 78 |
+
\end{equation}$$
|
| 79 |
+
|
| 80 |
+
At the same time, the discriminator network is trained as follows: $$\begin{equation}
|
| 81 |
+
\phi_{optimum} = \underset{\phi}{\mathrm{argmax}} [\mathcal{L}_{Robust}]
|
| 82 |
+
\label{eq:disc}
|
| 83 |
+
\end{equation}$$ The model obtained through this phase of training can generate saliency maps that localize the object nicely. This model by itself performs on par with AT [@madry2017towards] in terms of both adversarial and natural accuracy, as shown in Section [6](#sec:ablation){reference-type="ref" reference="sec:ablation"}. To attain better performance, we continue with the second phase of training, explained below.
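A schematic sketch of one first-phase update combining Eqs. ([\[eq:alignment\]](#eq:alignment){reference-type="ref" reference="eq:alignment"}) and ([\[eq:disc\]](#eq:disc){reference-type="ref" reference="eq:disc"}) is given below. `student_saliency` is a hypothetical helper returning $J_S^{TCI}$ (e.g. via `torch.autograd.grad` on the true-class score with `create_graph=True`, so the losses can backpropagate into the student), `j_t` is the teacher saliency $J_T^{TCI}$ precomputed from the frozen teacher, and the discriminator is assumed to output a probability; the actual training loop may order and batch these steps differently.

```python
import torch
import torch.nn.functional as F

def alignment_phase_step(student, disc, opt_student, opt_disc, j_t, x, y, beta, gamma):
    j_s = student_saliency(student, x, y)   # hypothetical helper: J_S^{TCI}, create_graph=True
    j_t = j_t.detach()                      # J_T^{TCI} from the frozen teacher

    # Student update: minimize L_CE + beta * L_Robust + gamma * L_diff (Eq. eq:alignment).
    l_ce = F.cross_entropy(student(x), y)
    l_robust = torch.log(disc(j_t)).mean() + torch.log(1.0 - disc(j_s)).mean()
    l_diff = (j_s - j_t).pow(2).mean()
    opt_student.zero_grad()
    (l_ce + beta * l_robust + gamma * l_diff).backward()
    opt_student.step()

    # Discriminator update: maximize L_Robust (Eq. eq:disc), i.e. minimize its negative.
    l_disc = -(torch.log(disc(j_t)).mean() + torch.log(1.0 - disc(j_s.detach())).mean())
    opt_disc.zero_grad()
    l_disc.backward()
    opt_disc.step()
```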
|
| 84 |
+
|
| 85 |
+
{#fig:training height="5.5cm" width="90%"}
|
| 86 |
+
|
| 87 |
+
**Second Phase - Model Refinement.** In this phase, we want to ensure that the decision of the model can be changed only by perturbing the object pixels. Here we bring curriculum-style learning into the picture by gradually shrinking the set of pixels that are allowed to be perturbed in order to reduce the true-class prediction score. This set of pixels is selected according to the discriminativeness of the object parts, which is decided by the teacher saliency. At every step of this phase, the training enforces that the attacker can change the image only by perturbing a fixed fraction of the top pixels of the object, i.e. of the highlighted part of the saliency map given by the teacher. Put differently, an attacker who wants to change the model's decision has only very few object pixels available to perturb, compared to all the pixels in the input image, which drastically reduces the effect of an adversarial attack on the input image. The search space of pixels is therefore very limited during each iteration of an adversarial attack step.
|
| 88 |
+
|
| 89 |
+
During the first step of this phase of training, the top 90% of the pixels from the teacher saliency are considered, i.e. the set of pixels the student model is allowed to modify is reduced to that top 90%. The training thus enforces that only this set of pixels is responsible for maximizing the loss, i.e. decreasing the true-class prediction score. We follow a curriculum-style training: after training the first step for a few epochs, we consider only the top 80% of the pixels from the teacher saliency in the second step. We continue in this fashion, training every step for a few epochs (pre-decided and kept the same for every step), and stop after training with the top 50% of the teacher saliency. We explain the training for one step below, with $k$ denoting the top percentage of pixels kept from the teacher saliency.
|
| 90 |
+
|
| 91 |
+
As stated in the first phase of training, we obtain $J_{T}^{TCI}$ in exactly the same way. Then we keep the top $k\%$ salient pixels of $J_{T}^{TCI}$, denoted $J_{T,k}^{TCI}$, and set the remaining, less salient pixels of the actual object to zero. We obtain $J_{S}^{CE}$ as the gradient of the student network's CE loss with respect to the input pixels, i.e. the net change of that loss per input pixel. Another discriminator network $f_{curr-disc}$, parameterized by $\xi$, is also used.
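A sketch of the top-$k\%$ masking of the teacher saliency used in the curriculum, together with the schedule described above (hypothetical code written only for illustration):

```python
import torch

def top_k_mask(j_t, k_percent):
    """Keep the top k% most salient teacher pixels (J_{T,k}^{TCI}); zero out the rest."""
    flat = j_t.abs().flatten()
    n_keep = max(1, int(flat.numel() * k_percent / 100.0))
    threshold = flat.topk(n_keep).values[-1]
    return j_t * (j_t.abs() >= threshold).float()

# One curriculum stage per value of k, each trained for the same fixed number of epochs.
schedule = [90, 80, 70, 60, 50]
```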
|
| 92 |
+
|
| 93 |
+
Now we have $\mathcal{L}_{CE}=\mathbb{E}_{(x,y)}[-y^T \log f_S(x)]$. Also, $J_{S}^{CE}$ can be considered as a per-pixel gradient and represented as $$\begin{equation}
|
| 94 |
+
J_{S}^{CE} = \nabla_x\mathcal{L}_{CE} = \left[\frac{\partial \mathcal{L}_{CE}}{\partial x_1} \;\dots\; \frac{\partial \mathcal{L}_{CE}}{\partial x_d}\right]
|
| 95 |
+
\end{equation}$$
|
| 96 |
+
|
| 97 |
+
Following the same motivation as in the first phase, our training objective for this phase is as follows: $$\begin{equation}
|
| 98 |
+
\label{eq:refinement}
|
| 99 |
+
\theta_{optimum} = \underset{\theta}{\mathrm{argmin}} [\mathcal{L}_{CE} + (\underbrace{\beta \mathcal{L}_{Robust} + \gamma \mathcal{L}_{diff}}_\text{alignment loss} + \underbrace{\beta \mathcal{L}_{curr-Robust}+ \gamma \mathcal{L}_{curr-diff}}_\text{curriculum loss})] ;
|
| 100 |
+
\end{equation}$$ where the *alignment loss* is the same as in Eq. [\[eq:alignment\]](#eq:alignment){reference-type="ref" reference="eq:alignment"}, and $\mathcal{L}_{curr-Robust}$ and $\mathcal{L}_{curr-diff}$ are defined as below: $$\begin{equation}
|
| 101 |
+
\mathcal{L}_{curr-Robust} = \mathbb{E}_{J_{T,k}^{TCI}}[\log f_{curr-disc}(J_{T,k}^{TCI})] +
|
| 102 |
+
%\E_{J_{S-up}}[\log (1 - f_{disc}(J_{S-up}))] +
|
| 103 |
+
\mathbb{E}_{J_{S}^{CE}}[\log (1 - f_{curr-disc}(J_{S}^{CE})]
|
| 104 |
+
\end{equation}$$ $$\begin{equation}
|
| 105 |
+
\mathcal{L}_{curr-diff} =
|
| 106 |
+
% || J_{S-up} - J_{T-top90} ||_{2}^{2} +
|
| 107 |
+
|| J_{S}^{CE} - J_{T,k}^{TCI} ||_{2}^{2}
|
| 108 |
+
\end{equation}$$ Note that we set the coefficients $\beta$ and $\gamma$ of the discriminator losses and the $l_2$ losses, respectively, according to their importance. As with $\mathcal{L}_{Robust}$ in [\[eq:l_robust\]](#eq:l_robust){reference-type="ref" reference="eq:l_robust"}, we can show that the global minimum of $\mathcal{L}_{curr-Robust}$ is achieved when $J_{S}^{CE} = J_{T,k}^{TCI}$ [@chan2019jacobian], which substantiates the use of the discriminator loss. At this stage, apart from the discriminator training in Eq. [\[eq:disc\]](#eq:disc){reference-type="ref" reference="eq:disc"}, the other discriminator network ($f_{curr-disc}$) is trained as follows: $$\begin{equation}
|
| 109 |
+
\xi_{optimum} = \underset{\xi}{\mathrm{argmax}} [\mathcal{L}_{curr-Robust}]
|
| 110 |
+
\end{equation}$$
|
| 111 |
+
|
| 112 |
+
The second-phase training strategy is visualized in Fig. [1](#fig:training){reference-type="ref" reference="fig:training"}. The curriculum is imparted by gradually pruning out the least-attributed pixels, retaining only the top $k\%$; $k$ is reduced in uniform steps of 10. For example, the model is first trained with $J_{T,100}^{TCI}$ for, say, 10 epochs, then with $J_{T,90}^{TCI}$, $J_{T,80}^{TCI}$, and so on. Proceeding in this fashion, the model is ultimately trained such that only modifying the most discriminative part of the object can lower the class confidence. We now justify the need for two-phase training and for curriculum-style learning.
|
| 113 |
+
|
| 114 |
+
::: {#tab:adversarial CIFAR10}
| **Type** | **Curriculum** | **Methods** | **Clean** | **FGSM** | **PGD-5** | **PGD-10** | **PGD-20** | **C&W** | **AA** |
|:---|:---|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Iterative | No | AT(PGD-7)[@madry2017towards] | 87.25 | 56.22 | 55.50 | 47.30 | 45.90 | 46.80 | 44.04 |
| Iterative | No | FNT[@xie2019feature] | 87.31 | NA | NA | 46.99 | 46.65 | NA | NA |
| Iterative | No | LAT[@singh2019harnessing] | 87.80 | NA | NA | 53.84 | 53.71 | NA | 49.12 |
| Iterative | No | TRADES[@zhang2019theoretically]$^*$ | 84.92 | 61.06 | NA | NA | 56.61 | 51.98 | 53.08 |
| Iterative | No | GAIRAT[@zhang2020geometry] | 85.75 | NA | NA | NA | 57.81 | NA | NA |
| Iterative | No | AWP-AT[@wu2020adversarial]$^*$ | 85.57 | 62.90 | NA | NA | 58.14 | 55.96 | 54.04 |
| Iterative | No | MART[@wang2019improving]$^*$ | 84.17 | 67.51 | NA | NA | 58.56 | 54.58 | NA |
| Iterative | Yes | CAT18[@cai2018curriculum] | 77.43 | 57.17 | NA | NA | 46.06 | 42.28 | NA |
| Iterative | Yes | Dynamic AT[@wang2019convergence] | 85.03 | 63.53 | NA | NA | 48.70 | 47.27 | NA |
| Iterative | Yes | FAT[@zhang2020attacks] | 87.00 | 65.94 | NA | NA | 49.86 | 48.65 | 53.51 |
| Iterative | Yes | ATES[@sitawarin2020improving]$^*$ | 86.84 | NA | NA | NA | 55.06 | NA | 50.72 |
| Non-Iterative | No | SADS[@babu2020single]$^+$ | 82.01 | 51.99 | NA | 45.66 | NA | NA | NA |
| Non-Iterative | No | JARN-AT1[@chan2019jacobian] | 84.80 | 67.20 | 50.00 | 27.60 | 15.50 | NA | 0.26 |
| Non-Iterative | No | IGAM[@chan2020thinks] | 88.70 | 54.00 | 52.50 | 47.60 | 45.10 | NA | NA |
| Non-Iterative | No | AT-Free[@shafahi2019adversarial] | 85.96 | NA | NA | NA | 46.82 | 46.60 | 41.47 |
| Non-Iterative | Yes | **OURS** | **90.63** | **67.84** | **63.81** | **61.44** | **59.59** | **61.83** | **54.71** |

: Results of natural and adversarial accuracies of different adversarial robustness techniques against different attacks on CIFAR-10 dataset. Here $*$ and $+$ with a method represents that the corresponding results are with WRN34-10 and WRN28-10 networks respectively.
:::
|
| 167 |
+
|
| 168 |
+
[]{#tab:adversarial CIFAR10 label="tab:adversarial CIFAR10"}
|
| 169 |
+
|
| 170 |
+
**Why do we need both $\mathcal{L}_{Robust}$ and $\mathcal{L}_{diff}$ for the alignment loss?** The two losses together serve the overall objective, namely generating a student saliency map that matches the teacher saliency. Each achieves its global minimum when the two saliency maps match. A theoretical justification of attaining the global minimum of $\mathcal{L}_{Robust}$ is given in Theorem 3.1 of [@chan2020thinks]. Adding both losses provides stronger signals and attains better performance than using either one alone, which is also supported by our experimental findings in Sec. [6](#sec:ablation){reference-type="ref" reference="sec:ablation"}.
|
| 171 |
+
|
| 172 |
+
**Why is $J_{S}^{TCI}$ used in the first phase and $J_{S}^{CE}$ added in the second phase?** Our robust model training is motivated by the alignment of the saliency map with the object features. This idea helps the student model learn better object localization through saliency, thus improving robustness, and should be maintained throughout training. Once the model has decent knowledge of the object after the alignment phase, we include the concept of $J_{S}^{CE}$ in the training. Incorporating this helps the model learn the allowed set of pixels to be perturbed to reduce the class score, which further boosts robustness. If we were to start the curriculum-style training from the beginning, it would prevent the model from acquiring knowledge about the whole object.
|
| 173 |
+
|
| 174 |
+
**Why curriculum learning?** By introducing curriculum-style learning, we enforce that most of the pixels that are allowed to change belong to the most discriminative parts of the object, and that fewer pixels are taken from the less discriminative parts. As the discriminativeness is decided by the teacher saliency, we consider a decreasing number of pixels, namely the top $k\%$ most important part of the object. The value of $k$ is reduced at every step of the curriculum, and the model is trained to learn the degree of object discriminativeness gradually.
|
| 175 |
+
|
| 176 |
+
{#fig:effect gamma_k cifar100 height="6cm" width="\\textwidth"}
|
| 177 |
+
|
| 178 |
+
:::: wrapfigure
|
| 179 |
+
r0.7
|
| 180 |
+
|
| 181 |
+
::: center
|
| 182 |
+
{height="3.5cm" width="70%"}
|
| 183 |
+
:::
|
| 184 |
+
::::
|
2112.00712/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="confluence.voyagerlabs.co" modified="2021-09-07T21:51:02.284Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36" version="14.5.0" etag="n0Up_6A1ZWiqHlX41bRJ" type="atlas"><mxAtlasLibraries/><diagram id="fLgTXsuK7XiiZNAcTXol">7V1Zc+O4dv41XZU8CIV9eZzeklTNVLoyqeTeR7WtsTUjmx5Jnrbz63MgERRBgiQskQTpkX3ntk3RXIAP39kPPrBPDy//tl0+3f+S3a42Hyi+ffnAPn+glChB4B975PV4hBt8PHC3Xd/mJ50O/Lr+v1V+0J32vL5d7bwT91m22a+f/IM32ePj6mbvHVtut9kP/7Tfso1/16flXX5HfDrw681ys6qd9r/r2/398agWpbP/fbW+u3d3Jjj/5GHpTs4P7O6Xt9mP0iH25QP7tM2y/fGnh5dPq40dPDcux7/72vBp8WDb1eM+5g/o8Q/+Wm6e83fLn2v/6l72bps9P+Wnrbb71UtoiJffN9UROz0CKV4MELHKHlb77Suc4i5E2fFPcjDo/Ao/TiPLHD7uS6NanLjMZ/OuuPTpheGH/J3D78+63x9e//F2Zc8nH9jHH/fr/erXp+WN/fQHwBuO3e8fNvnHjeNUHg/aOh5uBF99zJeHg4eGQ/UwHDwwHHIDd/j4WwZPXx4X+edz5j5Y7A5L9Cc4gfCnl9OH8NNd/u/hKrun5WPUVWToKr9+/uYuBC9yvJZ//e/b6pHLnvtffqxh0IFd7lfw/78sXxafnu3lsu+/A+jXf61Kz3O8k3/3hsOXPdS/Nl49MXKZv5BFHbmEBZDbB3DFGDzW/vZE+6/PA68vDeKKSc1hWQsqWWAZ4x5GQ/bMar+tN5tP2SbbHv6W/Sbst13O+232x6r0iTx85UAuHT9+xaJMhMfZG8gAjogxyJS/1OUjqcbAVcP75hfSyKGFCKUNVx7KJEeCl75FXVowdAIcEZoERo5oRHTp21w+cro2cv9TSIHn746z4GE+kTKpnz6qDTQM7d7H5XKzvnuEn29gKFcAtY92Atagp/2Uf/Cwvr21f/5xuwJWzScBw+9P2fpxf3g78fGD+Gyv9bzPjswbT4eqdeIWrAJH6c0cRYqWv6WsTV1QzvfADyZ6bujfdG5gzbDyNxttbpy+V5ocVFdvwodmPinCZzqf6DhBZYbStDYjNDAjoo8ZIdHL5fHvuVwMRgaXvsdbLKPYr+1jQzHy5LPhtdcnFBXS22p/Ab1vCBFMIqzbzfpxVWhyzpkBb/vxdr21Jk5msbfLnu3xMl632X6Zf7ogGsfCjLTr0NwfSlWn3ao2JEYaypBlXBnK1ePtT9a5BL89ZodRLY3X6mW9/0e+Yu3P/7Q/I5H/9vml9NHn19Iv31bbNTytpYTPp1Fe3ToHVeMYw6Nlz9ubVRkKzetboRM67fiS+hKmyJsaZ2SUR54Dg3vaaPPA54/yzZJX6TmA5Wl5qTDhE01Fw6/M7PGN82vSktfL3eY1DiH75fZuta9d5wCSYvDicBNhmEbhhlxx04YbIpH2iOGNwBhg5iOM8KNoehOZ9iCvEDOMY0WwMVQroVcLMh2RFWFw9yGymIwXWTI8nBMQSnUbe7pCSQbIpUPppKDym7K/zJMFqmpAp6EewfyH9AVWddb1G3mpiJH4q5EOJq/qroHpyqv3Cila8RITH1IdTHIhpkZXkWjd4VGDHJC5Zf1PRTwX133M8P3V3uHj3XZ5u16dPstBWnFiF6f7Tuz85Nvl7v7gHcclU/zn5ffV5huY1rkU+Z7t99lDwFbfZ08hk77sUYdnz+PahLrf8/e1t1zuno4v+tv6xT7H0cpfbb/8tToa+/Yiu/vlk/2Dh5c7G2dHyx87jp53h3tFibZ2xzNhyLPFpPYQAijF7VY/LDYjJaaSKCkE5aa+VhpOuUQq0rq75oqnKeBJCFSmGzIPNI3iYeoYOBCg3jr0dYyIEJAvUKhzUQ2toAbSK0KeUzrrIBBtd2kt+FSjQIFsj6bZoX/X2UkXBwqkNLyfOFDHrPBpBoICeRVNC2a+gaDLFkyySNA4mRodgzPdUFAgHaM2PCOHgjrclJMNBQWyJ2pDORWvm1sVZRfJEQrTd+m/t1AQi/BzTMW1NmfcTC8UxCI8EgOFgroEVlQsqGo+qnGIlkWY3iPHglhDKUN6qcQicj2mIpXceiizC+tQO2fhuH9fsSDWV87LCALr3ULq7xULYhHpMlfffYpYEKgKng/a98lM1XvPopNwLlGyOoaOqDaemWwBB6t7VELersdZO+9ZQ3LP5Es4WFx9zeOsnfcXzk465z2r+4jej/O+Y1YmWsXB6+6XpgUzX+f9ZQsmmfOeRztILhHTHYMzXee9U8cn5Lzn7T0dJuu85zNyk/CATcs7bNqpOGHfm/Oez8gXMmfcTM95zyNcEgM577sE1pQLOXiE5T2y855395VIJZUikhcmI5VEgF061M5ZeFrfl/Oe91UbNIbAeq+Q+ns573nd6VE4OEr+DZz/V5jYczKwO1r1VBIXo5Tesqd3IPtahFJP3tiKLNyR7Wm1/APmgeIvD99Xt7frx7vY5mRzm1tSWcr+3DLfcxLwnQRbVvUyuSHnSXly+2zB920JU3LQvUZuxPfdTsUS/tsuH2+zBzuOr0+r7dNm+bhqbog3iz55xDXA62qU58ILF6EloAaAbHbxxmy7v8/ussfl5svp6MfTANnVdjrn58wGNw/D8vtqv3/No5h28fWgTDzCq/2j/Mvxr8DGyX8//d3hN+8Pq2rIDuTc3mk7N5vlbre+cYe/rjfFg9ZUIjiSfx6bOxee++1qszx0jCz/XbwqQaQPGkoEEpWrRCsMun4tXRZMFeW2PxVBsCv6HLDK0MM+9E74PKGPTAl9UnmqjFb9AFFqxMpAVIPhMMKdN3Tf44Wj+EI9rEuBUDCM4MryPEsMRHi1OgRhLS3FF6Nj9wzl7YMN5p6PtQXBSCkhtZRUSS4ormdeWNvJOycwG+FTLpqbcZJWGgbMNe9ViGImBWFCMaGdSCgHainSGmvKhNCMGIMDRaUUeOIwwYYTm9vTQjzRoxPjK9ts1k+7VbcyV8NwHNa6O8KAhSukgf9hwqTChgY8iU3nXDQ6EW6fawLdMAl07R4rhjRhXGFhGBaMET8vwgbmsRGUU0xhUQldzyQiDCkwC7jE2BDCTMg3HT7lIkBFVDsNR0YunGwQZpgxCSTCQC1woXnn7TMwtsA/mmMhScDZl4iqZMjtMjJVmclSlbz26UhGVQ2omDdVSVoDFPng1uCodMWQ0FSApi4Zl7LSsZ1LWEsEvmA5Gc5YffhS0VVMbsywdCUbzNcp0FWEyXilq0HoqgkVM6erUfYs6aAqKhHhRhBY/Boro+s+iFRkFGEED01G3fkiycgowgi+
ktEwZNQeJpwrGdX9Bn+SGqQGpyNDkSRUCwUqqvWoGW/04L3BCiSKK8YpLKl6VnEqsoppCTIwWTV0Z5gAWakIM/hKVsOQVXvPjpmSlYrwHLTmttUzThvCfV2xvjMCi8cnbY3huehUOR/O6UPHGFiZdlrXvAgEkNyxC2OCSiJqOZRoCYaurGT4a4kAWHBUKaoEqU57dIQQgy2NNddKUKEY1T5EBUMUdFfQW5nCSrsdDPoPGDqnYk+QIx7kjohxoCORoDsB7Z9lnPUJOmWuoEsJukuLlQKgCyWKt0KngCnxmPGE2QaY9gm5hk1IxoFcR/CBEIPUQWWVABnAy5mZ4ZQgZQBWIIiJYqxoXVDchiGsAHQgjwk1lbfoEXOXFjr5mGPNsrUTQWNKV1xHnUpLdBhpSoSRgC3CAH8+GphBUjNmlTejGcaVzMBY0BmNsMRwD2bgfY2o2FsY+BToziitOBGyagz0iLpLt0lqE6+Xo44MgTmXyOAxHf6QEHMLgTgHgSYIowbAhavmN1BQ3fzumegM0KlimmCBFaakWnTRI+ZiPIxnWhFvlJXnlG+dy3SsjjrR0In1Hal0TKMWS1cYpCWHe4GdCqpdteq0R8xdWugXK10nZEQEea47g2pInmtHA8jWst/jTNFq8wu1EQApKoWWUhJ/+xSJpCBC21Za9lnkcDwXkRI2vGOXtykaiiEiFYf1z8DqcvsSpPfrqvR+XTVdv66++nVT+XWbUDFvv66+1K8b1r8mJAtD2ldaWVgpQjjXe6H9y9DBKohcx7CklRtFfzenpNaz44mrs/bXTA/1njrCKzir0o3jlLYugPBo9tF6Q49SCdT0ii+FLmiYErj48m0gyhFzHhJvUAg6OWXKIwM2NOlFwo+SoqTbVUdiECaSMUGPvRyqtYuISUa4FIIoygMNcBSGMzDnmHFbmB3onioNokQKA6qnAmlIexi4UUp4dEMqipPzEikOSjXFUhvbUbeOIIYYIZzB6JmGsSGgVXOBOQPdnNsB6qH4TKcv4dHdQtcgpW3XD5uIImGd0cA6C59y0dhcC3hSKdVNmCjabVFiwzwc1kKpA1DRClgcnFSWP+x6U7XFJpGgtg2thLOUwqE2zg2nXISnUep3mpjIDZ2GFwMWAaovd8dw3jeCqCI5fzNlAhnxiYjKpC/g0d1RqDREZSKMtStRDUNU7eU78yQqM4ph10VUAjEgIlK0DK9tc2CwrTJ0VFUfvFRMlb52x3SbcYmY6lq5k4qpmjAxb6Yaxyru2BNcIcwNxkLqwNBxiYzRhuYKlUtRmwBPpS/rMd1lPYl46lrUk4ynOtSCefLUKIHfDp4iAlm3tyz6j9aLdlJRUfrgrukO7qahIoKvsd1kXNQe250nFxEc4S4YXmliSCsjieTqGErwU7m13YmNuqGTga3YElEVwRHG8cBclc/gJMkqwvi9ktUgZNWIipmzVb9VEC0VhqmqIFxot5yd4jSicnZKTj2p0lO4DX0T4FGd++N8xhYCKJsDrnKuPbMPqbRbf2lm3G38mzCOTneox4f7y20heMgyiElUGQZhFyhtvcJuTNj1WwkxuTrDWNCZpHWG7UFSJZHBitFcVpoKGt6yVwjIW3Gyx/272K1EuDQ54qqdhvvE3FiVENOSsIE6Q5M0/9Nu1IcpFsZZNBWTyBa3ClpE589MD233Vtu234XNZe8zHOou3RVp4nWGIczpQP2NSVtnaMs9DNOG4nxnJb8EUNt9l0p2+JkVOB1cJznC2nDmQFcpZ+wTdTGexllXGga5LpDrrpNWGo6j1YFV65u8leoLDeAGa/UQl4M7VcDdI+xITPbSjIsNo8kuaSH/ogMPVCDPByLPFLELiRjTSjntsaI/cgTKnaSmcEsOB7t+uzOxRrabEOxCXJcYdrW0+UrbEoo0UVorh4iK0hUNuw5Zvmhwqg8BPNoNvMFDDMdxF8x6MI4D74/7gikkiZDSiZl66RDo3LRUC4JDQQYMrNFHhQch6RPH8ombYmTBWUPXyML4kYUmVMw7skAmkTzWHgdddBdVpQqEkvT5Y/kUTpKurhlk6ejqPaaQEdKvw8wPhPoRqZMC/xZ1fhCXWUEzUwpJtTtqGyi7J0ftENr6OIVpZ9UWtwg6F7dolXS8eXKjVx6N8d4MLeneXnzWLej6GJtr7Vk6Ofe24jNVaWVQEXSyrlpWxVi3oOsDURG+g7fKuZ5C4F1urhgZF2hFE0z2oUnjQkwjzKhUYU8SsCzQ76lKuHL5aAlXtX28myzG85PSAbo7Dw0gDywkbboERqDiUp9eSN0rFh22YYhTe8FwD5aFwIgyJZRwYn44YIzTkqajYZ9GVBAsRd41hcSsxUnoTTE5dAPrTbRhbJPrTRHek6veNIze1ISJmetN/aZxNSfUxAb7+k7iCmhORTWGrzp1ewUHlIbVVFLq606cIk+rOTPGrCQqJIL9pj5G210QfUrIcSobG1asy1IiiNhWti5tbUq+g/SljfkcTU8Gsgi/ylUGDiQD31baOBMZyCK8Uelk4CASkAacB2lz+2ppKB50jO15rHRDKUV0QqlAmkvDpUvt8OWfBKNI25bKuVk0nIXIAgCDOXQrOtvu77O77HG5+XI6Wurxamf+dM7PmaWPA2h+X+33rzlPLJ/3mQ/Tc1Ssc7Oj60uiAape8l9obwsR4KEwUKMRGM8MIReP3OxzBvZmUP75nLkPFrvDHPwEJxD59HL6EH66s//++rRa/rHa7tzVvm/dJ/9haXt5Y0VG8Sk85/F2x1Nq2AHO3ocYqUWi5YdqMqcqmh7Wt7ebJiXDR2ScCGkVIAuKLbTKi1LUQ6k8wEJO/7tsukOOmz6m+1O2XS2uc+2p34GJlYNN7ChJOx3YrnRqpzzQJpMGRoD0o+REuI5m1XU8n9QWLkGkEhuwO1OBuSclGMJcUBzoD0G0f05gOsKnXDY5EV6YHvDZNGLFikQUMyngPMWENvXM1kTbuBRla8PYwxXgfv1q4Cseht1qc6oNXgiL8CNcbeVhbOUmXBT2ziy3eCE8wv0yIFW5ZKr2PV2VgdEFdrKtXiVxZt4EiIzHuBoSERnvzh1IRmScXoksFZE14WLuRFa37ckHZ4GMS2YMCW2L9LHt3SC5P4Jcwnoi8AVLynAWaK6fjMximiz1RWZGfcZKxYO2O7clHZlFWKNXMhuIzBpwMXcyG2W/oi4ioxIRbgShkmmsjNu6ZApUNeiORRdSVXf/mHRUdd3NKB1VtW9nNFuqqvsk/iQ1UA1PVoYiSagNVwubneAioq/Fm4N9SRRXjFNYVvXkkFRUJiIM8GQmZHe9STIqE9eak3RU1l5zMlcqc77/4bvMfkiSPFnEzbzcER5o0ZOzUqrkESURtRxLtAQjWjI/hKwlAnAdUo+oEqQ69dHZIxjsdKy5VoIKxWilD5BgyCZog9bLFFbaNTMZIHtEDFh4Mo0us2HgOefMFXipgNdvV+3J9Zl9A+x4d4XIkLXk7eEPQmzXT6vaSoANYObMdHFKkDIALRDKRDFQZSoFfSBTsQLggWwmxX6SQ+Cu37baE+w12yBpSQB5Mi3
hYaRhQowEfBHGsJ9hAXoVkpoxq8wZzTA+s/On0QhLDPdghtoU/YpthoFXgfaM0ooTUasY7BN5A3TWnlK/2TDuXLqFz3jd24cM2Q5PIM5BuAnCqAGA4aq5DlRUN9d7JjxjE4iYJlhghU9+/CFw129ZFr1Abg7UcbaB8UJ1yqy7EmXuKh7TqMUCFgZpyeFeYL/aIgwzHO76bRc0uZazb+G7pOUQi3ZEMLuR8ckncqaYXRCQ5kYArKgUtkKr2mEbSQGQUEwq+yxyQL4bpZlQp0uYt6kdypaoKA4swMAWc0X+E/AIyzE9wm8MbokJe4Tl1SOczCPchIuZe4RdZdrZ8vO8DjDDSstUMlDABIFCj4VQmGGiqQeCWl7UmR3zDtlXEtR6TKjiuNLumSgEt7fPAswP1E+Ga/cvI9y6gxepFMVVrkolkG9WtP7zV1MfhToywsM4ryoV2Z2IFh5PYvoYz3EKn5pe0jW3QNIwVWqJXqkLt9t6hGpxCDo5d7yuAgaRfjSAcdKmZMcW2+Emlu1dOUoDpTCcgTnHeaun+oBJg+xTmHD59HlDN07JkmxIgnGagAQRASq525uA1lGUqNG1nHDJkuw2bRM1wJbXgqVkKnkTKhqae8ykAbYap16piafc4LXuDSkIoqrYok+ZkM6ViMbUhAuWVHckLBGNuYqzK42NT2NNqJg5jY1jHHbRmEAMaIowmX9XtkKzXmlbc+mIrD58yXhswrVKqttETMVj10qldDzWblPPlcdGsrjbeax9O2MukTHaUNcbeTpGpZpwGZPqLmNKxWLXIqZ0LNahUMyUxcYJV3ewGBHIut1P22N3d7Adi6j0hEPSqjsknYiodISxfSWqgYiqPSA9U6JyMdPE6lb7dpUNOwxPgcYG3Vz3Mu/XcWonSWPXTXeT0VgTKmZOY/0WgLQUWiYrAHGhaC8x1alRXmKqTrptE7fBesLdPhWski8jBJA5J25zdGewvjUpp7odhn8TxtHpDvV4do85OXrI+o9plFoGgadCNb5X4I0IvAH25ZlUqWU87FTSUsv2yK2SyGDFaC4yTQUQ0RW+FEkQu+Jkyvt3gY9B9zU56Kr9oPuE3Vj1HxMTtC7M7QMvaQVIx37R2pb4ClpkDZxZ4tvuCZcascIqs/cZDngRHrVZF1oGYSdDhUcqbaFlbSMWvwZSC+QZ62eWHnUwnuQIa8OZw12lnrNH4JkYT+W8Ky3DjBeqtJRJKy3H0fDAyvVN4EptiQZ8g/V6iPzBnYbL9zcR3t5Z11rGU55O2tNg0QEJKpC/b9mZsnYhEWNaKV3sv+jdZrydq52aOgTyJlG3FM94iZFXS/2vVB9RpInSWjlQnFnj1CXUFw0u+CGwN4lEtuPAC2b3gj2OvD/wC6aQJEJK7W+4WoIFKOC0VNKCQzEJLOr7mZ/lAHUNfqYYTzWTTV8z1/S1ZIGIJlTMOxBhppG+1h5PXXSXjKUKqJoJJ7CZySawmWsCWzoee5cJbKZfd5sfUPXjWiet/y02wEAONzPBuFa7n7eBy3vy8/av4FM8UsHdWTXVLSIQqwgZyJtnN3btURzj90mTVJRP3ptkYLcI7GPMIjwWVwk4iARsxESDBFSV5g4VESjrymhVwHWLwD4Q1W8Tbdrkv2+VY+d5zaKkX6jPXSidKKejVNKPaYQZlSrslwL+BWI+1UVXLh8t/Kr2kneTxWiOV4pj3DhnRjkHg5CnK5m0yRgYgf5LfYYhdR9bdCyIIU7tBcONaRYCI8qUUMJpAMMhY6RGPR0NEDWigmAp8k4yJGY1TkKnisnTS+NXyKd2gjpVhC/mqlMNpFN1bKQ3U52q3+yx5iSe2Lhi77ljQa2KhbSqbmfigHKymsVKfbWKU+QpPGfGs5VEhaiw39QHabvjok/ZOVI1ZsOadYlRBBHbNNgly03I40CmW42ZT970pCOJ8NJcpeNA0vFttZgzkY6uj8Q0peMgsrFgngllGdZSYTzwGNtbWumGAo/o7FaBNJeGS5dd4otGCYaUtq2rc1NqOLOSsDrEYBbdms62+/vsLntcbr6cjpYa5tq5P53zc2YJ5ACb31f7/WvOFMvnfeYD9ZxE1p5rTfz9HWQIhTwA4jAwoxEXzwU85bwsMMKYe5NDCW2dnh2M3d5R081mudutb9zhr+vNqX15lb/gSP65u+a31XYNI2blQewMBjRsl6KWZvpEkukbsBzrwpFroEHuO5CoObNaQCskOWjWUoLRAeqxqVxW2K2cD+l6kqiqytxAp4DT5WvptIMqs2t+Ga2D73KCzPGC51P1OKlFrZrVgru/cZIK13SnYgNvb38O3suyivBI+d3cO2yVkZu953PYrLhWeusrRBmRTHOisTIBdyo2oCiw01dgQxmiQVXF3AhqiMEUqx7STSkJOXLkZp8PkDcj8s/nzH2w2B246yc4gZCnl9OH8NOd/fc/v7nrwDMcL3X8oGWecfc89zI7zKDyWLPK5m+21TjDhhNJjNDCdaouT5cBo4FqMPBhQpQB9ASmiyMDmqyWRgnCYdmoPqbLzE7Le2Pxu6d+5JpFWffAEbpHhBgsSNgzYmgobEoajJj+lQ0a4b0Zf7UIgwgWWjNLO5JqP6YqOAhsAhaW0QB0KSa0WAr98rpYBlksNLBYaEMKyQCLJcLJMv5iUXbDEsMIyA+DOXHbVLjyfoaoURyWAUBdcxlQBJItlvn5D2a1WAIlaZdLlrPMG25QxY9K7O6QSgKPH8q0SR6u6M3gcS8/raUqMWJCMbvBI4WXrgwJZohwJZSWQhtpeL0sLN1STeOT+Nss1YAPMYfwKHItwjuQRK5pjYmxHnVqG+H4q4USpDEzTDGlQA/kge0Akq0WdV0tQ66WQJX/mKslIhFlcquFY2QDRkZiIzUNdUFNtliu/oUhFwsL+RcuXixnaYHK91C6Xf560/nYJH0ZHQvTujqIlIJrIpRgoU0GU61MdnVmDLoyQ86M8Tx/7vbTWi3SICZBnyPssGmy8WN8RBMbj6BaUKW5VBNyZrCrM2PQxRJyZhwRO8pimaQ7wWgkbZdGkB5GnKJzLtuIICOYkoxTJQkNZKqlWyyhgpRj6G73tHz0xrUpBChDIcDP693N825ncxXz633fus/++367Wt6WQoTHWzWECGFe9v6U1XIbqymQD+vb2+OyXsFz5pFyi4VcMYLrio8fxGd7LVjJuzwXMhYNHdF1ZvzpDxjEJpD45rZkv2w+ZW38RiS/tyTih5Mau9Nc3kh+pySmE/+dJro1HalIVChTnYuAJUlIYvXMCbr4lG1X9TX2X6vb55tDqvDs1xOptNsgvG40q8CCYr0sqLqP4ds2e1reLfeBUT8kae/e35AzXFfwehpy+HWbWdFxMjZtPvkv2e3KnvH/</diagram></mxfile>
|
2112.00712/main_diagram/main_diagram.pdf
ADDED
|
Binary file (58.7 kB). View file
|
|
|
2112.00712/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,146 @@
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Stance detection is the task of classifying the approval level expressed by an individual toward a claim or an entity. Stance detection differs from sentiment analysis in its opaqueness. A favorable stance toward a target opinion or an entity $E$ can be expressed using a negative sentiment without any explicit mention of $E$. For example, the utterance "I did not like the movie because of its stereotypical portrayal of the heroine as a helpless damsel in distress" bears a negative sentiment ("I did not like..."), while one can conjecture that the speaker's stance toward feminism and women's rights is favorable.
|
| 4 |
+
|
| 5 |
+
Understanding the stance of participants in a conversation is expected to play a crucial role in conversational discourse parsing, e.g., [@zakharov2021discourse]. Stance detection is used in studying the propagation of fake news [@thorne2017fake; @tsang2020issue], unfounded rumors [@zubiaga2016stance; @derczynski2017semeval], and unsubstantiated science related to, e.g., global warming [@luo2021detecting] and the COVID-19 vaccine [@tyagi2020divide].
|
| 6 |
+
|
| 7 |
+
Recent models for stance detection rely on the textual content provided by the speaker, sometimes within some social or conversational context (see Section [2](#sec:related){reference-type="ref" reference="sec:related"}). These models are supervised, requiring a significant annotation effort. The dependence on language (text) as the primary, if not the sole, input, and the need for domain (topic)-specific annotation, severely impair the applicability of the models to broader domains and other languages [@hanselowski2018retrospective; @xu2019adversarial].
|
| 8 |
+
|
| 9 |
+
Online discussions tend to unfold in a tree structure. Assuming a claim $E$ is laid at the root of the tree, each further node is a direct response to a previous node (utterance). This tree structure can be converted into an interaction network $G$, where the nodes of $G$ are speakers, and edges correspond to interactions. The edges may be weighted, reflecting the intensity of the interaction between the specific pair of speakers (see Section [3.1](#subsec:trees2networks){reference-type="ref" reference="subsec:trees2networks"}).
|
| 10 |
+
|
| 11 |
+
In this paper, we propose a novel approach for stance detection. Our method is unsupervised, domain-independent, and computationally efficient. The premise of our approach is that the conversation structure, emerging naturally in many online discussion boards and social platforms, can be used for stance detection. In fact, we postulate that the *structure* of a conversation, often ignored in NLP tasks, *should* be studied and leveraged within the language processing framework.
|
| 12 |
+
|
| 13 |
+
The main contribution of this paper is threefold: (i) we introduce an efficient unsupervised and domain-independent algorithm for stance classification, based on structural speaker embedding; (ii) we show how multi-agent conversational structure corresponds to speakers' stance and correlates with the valence expressed in the discussion; and (iii) the speaker embedding induces a soft classification of speakers' stances, which can be rounded to a discrete output, e.g., "pro", "con", and "neutral", but can also be used to derive other interesting parameters such as the confidence level of the result, which we discuss in Section [7](#sec:discussion){reference-type="ref" reference="sec:discussion"}.
|
| 14 |
+
|
| 15 |
+
We evaluate our model on three annotated datasets: 4forums, ConvinceMe, and CreateDebate. These datasets differ in various aspects, from the number of speakers and discussions to the variety of the topics discussed and the culture and norms shaping the conversational dynamics. Further details about the datasets are provided in Section [5](#sec:data){reference-type="ref" reference="sec:data"}. Despite these differences, our method consistently outperforms or is comparable with supervised models that were studied in other papers and were benchmarked on these datasets.
|
| 16 |
+
|
| 17 |
+
# Method
|
| 18 |
+
|
| 19 |
+
<figure id="fig:stance-main-flow" data-latex-placement="t">
|
| 20 |
+
<img src="figs/Stance-maxcut-flow.png" style="width:80.0%" />
|
| 21 |
+
<figcaption>The workflow of STEM. First, the discussion thread (tree structure) is parsed into a weighted user-interaction graph. Then the 2-core of the graph is computed. Next, the max-cut SDP is run on the 2-core graph, generating the speaker embedding. A random hyperplane partitions the core speakers into two stance groups (red and green groups). Finally, the labels are propagated to speakers not in the core using a simple interchanging rule.</figcaption>
|
| 22 |
+
</figure>
|
| 23 |
+
|
| 24 |
+
A naive view of the structure of an argumentative dialogue between $u$ and $v$ is that they hold different stances. While it is tempting to assume that a simple tree structure, reflecting the turn-taking nature of a discussion, lends itself to accurate classification, this intuition does not hold for multi-participant discussions, as we demonstrate in Section [3.2](#sec:greedy){reference-type="ref" reference="sec:greedy"} and the results in Section [6](#sec:experiments){reference-type="ref" reference="sec:experiments"}. The reason is that engaging discussions tend to induce complex user interaction graphs, which are far from being bipartite. Therefore, a more subtle approach is needed. We present two algorithms that build upon the same intuition. The first is a simple greedy approach; in Section [4](#sec:speakerembed){reference-type="ref" reference="sec:speakerembed"} we discuss the more sophisticated method, which is based on a speaker embedding technique.
|
| 25 |
+
|
| 26 |
+
A discussion can be naturally represented as a tree, where nodes correspond to posts (comments, utterances) and nodes $v_1,v_2$ are children of a parent node $r$ if they were posted, independently, as direct responses to $r$. Discussion trees capture an array of conversational patterns -- turn-taking (direct replies), the volume of direct interaction between pairs of users, and of course, the textual signal, including content and style. However, converting the conversation tree into an interaction network may better capture the conversational dynamics.
|
| 27 |
+
|
| 28 |
+
In the interaction network, a node corresponds to a speaker, rather than to an utterance, and an edge $e_{u,v}$ between two nodes (speakers) $u$ and $v$ indicates a direct interaction between the two. The edges can be weighted to signify the intensity of the interaction. We use the following edge weighting $w_{u,v}$: $$\begin{equation}
|
| 29 |
+
\label{eq:InteractionEdge}
|
| 30 |
+
\begin{split}
|
| 31 |
+
w_{u,v} = \alpha \big( replies(u, v) + replies(v, u) \big) \\
|
| 32 |
+
+ \beta \big( quotes(u, v) + quotes(v, u) \big)
|
| 33 |
+
\end{split}
|
| 34 |
+
\end{equation}$$ where $replies(u, v)$ denotes the number of times user $u$ replied to user $v$; $quotes(u, v)$ denotes the number of times user $u$ quoted user $v$; and $\alpha$ and $\beta$ are parameters denoting the significance assigned to the corresponding interaction types (a reply or a quote). These parameters are platform-dependent and need to be adjusted to reflect the conversational norms of the target platform. For example, quoting other speakers and posts that do not directly precede an utterance is common in *4forums* but scarcer in the other platforms (see Section [5](#sec:data){reference-type="ref" reference="sec:data"}). We experimented with different values to confirm robustness.
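As a concrete illustration, the following sketch (not taken from the paper) builds such a weighted interaction network with `networkx`; the post fields (`id`, `author`, `parent_id`, `quoted_authors`) are assumed names for whatever the discussion-board parser provides.

```python
import networkx as nx

def build_interaction_network(posts, alpha=1.0, beta=1.0):
    """Convert a parsed conversation tree into a weighted speaker-interaction graph.

    `posts` is assumed to be a list of dicts with keys 'id', 'author',
    'parent_id' (None for the root post) and, optionally, 'quoted_authors'.
    """
    by_id = {p["id"]: p for p in posts}
    G = nx.Graph()

    def add_weight(u, v, w):
        if u == v:
            return
        if G.has_edge(u, v):
            G[u][v]["weight"] += w
        else:
            G.add_edge(u, v, weight=w)

    for p in posts:
        G.add_node(p["author"])
        parent = by_id.get(p.get("parent_id"))
        if parent is not None:                      # reply interaction
            add_weight(p["author"], parent["author"], alpha)
        for quoted in p.get("quoted_authors", []):  # quote interaction
            add_weight(p["author"], quoted, beta)
    return G
```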
|
| 35 |
+
|
| 36 |
+
Recall the intuitive assumption that two speakers, $u$ and $v$, that intensively engage with each other, inducing a heavy edge in the interaction network, hold opposed stances. We therefore begin by proposing a simple greedy algorithm based on this naive assumption. The algorithm receives the interaction network $G=(V,E)$ with the OP, $v_0$, marked with an abstract stance label, say $+$. It initializes the set of labelled speakers $S=\{v_0\}$. In each consecutive iteration, it finds the heaviest edge $(u,v)$ that connects a vertex $u \in S$ to $v \in V \setminus S$, adds the speaker $v$ to $S$, and labels $v$ with the stance label opposite to that of $u$. This is essentially Prim's algorithm (run for a maximum-weight spanning tree), and it runs in nearly linear time, $O(|E|+|V|\log |V|)$. We call this algorithm $GreedySpeaker$.
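A minimal sketch of $GreedySpeaker$, assuming the `networkx` graph built above; for brevity it rescans the frontier on every iteration rather than maintaining the heap that the stated running time requires.

```python
def greedy_speaker(G, op):
    """Label speakers greedily: repeatedly take the heaviest edge leaving the
    labelled set and give the newly reached speaker the opposite label."""
    labels = {op: +1}
    while len(labels) < G.number_of_nodes():
        best = None
        for u in labels:
            for v, data in G[u].items():
                if v not in labels and (best is None or data["weight"] > best[0]):
                    best = (data["weight"], u, v)
        if best is None:            # remaining speakers are disconnected
            break
        _, u, v = best
        labels[v] = -labels[u]
    return labels
```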
|
| 37 |
+
|
| 38 |
+
A more sophisticated approach still builds upon the same intuition. It creates a speaker embedding that allows a principled comparison rather than an iterative greedy assignment. A desired property of the speaker embedding, let's call it *$\tau$-separability*, is that speakers with opposing stances are assigned vectors with an angle of at least $\tau$ between them (it is instructive to think of $\tau$ as close to $180^\circ$). We say that an embedding *$\tau$ respects the stance* if it satisfies $\tau$-separability for every pair of speakers.
|
| 39 |
+
|
| 40 |
+
Suppose $\overrightarrow{u}$ and $\overrightarrow{v}$ are unit vectors. The separability property can be mathematically encoded by requiring that the expression in Eq. [\[eq:summand\]](#eq:summand){reference-type="eqref" reference="eq:summand"} takes a larger value on pairs of opposing speakers. We use $\langle \overrightarrow{u}, \overrightarrow{v} \rangle$ for the cosine similarity between the two vectors.
|
| 41 |
+
|
| 42 |
+
$$\begin{equation}
|
| 43 |
+
\label{eq:summand}
|
| 44 |
+
(1-\langle \overrightarrow{u}, \overrightarrow{v} \rangle)/2
|
| 45 |
+
\end{equation}$$
|
| 46 |
+
|
| 47 |
+
The maximal value Eq. [\[eq:summand\]](#eq:summand){reference-type="eqref" reference="eq:summand"} takes is 1, which is attained if the two vectors are antipodal, namely, the angle between them is exactly $180^\circ$, and the cosine similarity is -1. Multiplying Eq. [\[eq:summand\]](#eq:summand){reference-type="eqref" reference="eq:summand"} by the corresponding edge weight $w_{uv}$ ensures that the larger values are attained for relevant pairs.
|
| 48 |
+
|
| 49 |
+
Given an interaction network $G=(V,E)$, with $|V|=n$, and edge weights $w_{uv}$ for every edge $(u,v) \in E$, our goal is to find a speaker embedding $\mathcal{E}$ which respects the stance for as many speaker pairs as possible. The proposed candidate speaker embedding $\mathcal{E}$ is the solution of the optimization problem given in Eq. [\[eq:SDP\]](#eq:SDP){reference-type="eqref" reference="eq:SDP"}, $S^n$ denoting the unit sphere in ${\mathbb{R}}^n$.
|
| 50 |
+
|
| 51 |
+
$$\begin{equation}
|
| 52 |
+
\label{eq:SDP}
|
| 53 |
+
\mathcal{E} = \mathop{\mathrm{arg\,max}}_{\overrightarrow{u} \in S^n \text{ for } u \in V} \sum_{(u,v)\in E} w_{uv}\frac{1- \langle \overrightarrow{u},\overrightarrow{v} \rangle}{2}
|
| 54 |
+
\end{equation}$$
|
| 55 |
+
|
| 56 |
+
The optimization problem in Eq. [\[eq:SDP\]](#eq:SDP){reference-type="eqref" reference="eq:SDP"} is a semi-definite program (SDP), and it can be solved in polynomial time using the Ellipsoid algorithm [@sdpsolver]. This SDP was suggested by [@Goemans1995] as a relaxation for the NP-hard max-cut problem, which is in line with our intuitive hypothesis about the nature of the interaction between speakers. Note that $n$, the dimension of the embedding, is always the number of speakers in the conversation (part of the SDP definition), unlike the tunable dimension hyper-parameter in other embedding frameworks.
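The following sketch solves this SDP with `cvxpy` (one possible solver choice; the paper does not prescribe a specific implementation) and factors the resulting Gram matrix to recover one unit vector per speaker.

```python
import numpy as np
import cvxpy as cp

def speaker_embedding(G):
    """Solve the max-cut SDP relaxation on G (networkx graph with 'weight'
    edge attributes) and return (nodes, vectors), one unit vector per node."""
    nodes = list(G.nodes())
    idx = {v: i for i, v in enumerate(nodes)}
    n = len(nodes)

    X = cp.Variable((n, n), PSD=True)          # Gram matrix of the embedding
    objective = cp.Maximize(
        sum(d["weight"] * (1 - X[idx[u], idx[v]]) / 2
            for u, v, d in G.edges(data=True))
    )
    cp.Problem(objective, [cp.diag(X) == 1]).solve()

    # Factor X = V diag(w) V^T to obtain the vectors; an eigendecomposition is
    # numerically safer than Cholesky for a near-singular PSD matrix.
    w, V = np.linalg.eigh(X.value)
    vectors = V * np.sqrt(np.clip(w, 0, None))  # rows are speaker vectors
    vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)
    return nodes, vectors
```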
|
| 57 |
+
|
| 58 |
+
The speaker embedding $\mathcal{E}$ gives a continuous range of stance relationships, from "total disagreement" (antipodal vectors) to "total agreement" (aligned vectors). However, in some cases, we want to round the continuous solution to a discrete solution, say "pro" vs. "con".
|
| 59 |
+
|
| 60 |
+
In addition, the separability property is relevant for pairs of speakers. Even if the embedding of every pair respects the stance, this still doesn't lend itself immediately to a partition of the *entire* set of speakers into two sets, "pro" and "con", that respects the stance. If the interaction graph is a tree, then pairwise separability immediately induces an overall consistent partition. But when cycles exist, things are messier.
|
| 61 |
+
|
| 62 |
+
We now describe how to round the speaker embedding into a partition of the speakers. To gain intuition into the rounding technique, let's assume that the obtained embedding pairwise respects the stance, and further, that the embedding lies in a one-dimensional subspace of ${\mathbb{R}}^n$. Namely, there exists some vector $\overrightarrow{v_0}\in {\mathbb{R}}^n$ s.t. for every $u \in V$, $\overrightarrow{u} = \overrightarrow{v_0}$ or $\overrightarrow{u}= -\overrightarrow{v_0}$. In such case, the rounding is trivial: all vectors on "one side" are "pro", and all vectors on the "other side" are "con" (or vice versa).
|
| 63 |
+
|
| 64 |
+
Building upon this intuition, a random hyper-plane rounding technique is commonly used [@Goemans1995]. A random $(n-1)$-dimensional hyper-plane that goes through the origin is selected, and the vectors are partitioned into two groups according to which side of the hyper-plane each vector lies on. In the one-dimensional example, every random hyperplane will round the vectors correctly into the two opposing stance classes. More generally, the more the vectors are clustered into two "tight" cones, the more accurate the rounding will be (by tight, we mean that the maximum pairwise angle is small).
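A sketch of the rounding step, continuing the `numpy` conventions above (the rows of `vectors` are the unit vectors returned by the SDP):

```python
import numpy as np

def round_embedding(vectors, seed=None):
    """Random hyper-plane rounding: the sign of the projection onto a random
    normal vector splits the speakers into two stance groups (+1 / -1)."""
    rng = np.random.default_rng(seed)
    normal = rng.standard_normal(vectors.shape[1])
    return np.where(vectors @ normal >= 0, 1, -1)
```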
|
| 65 |
+
|
| 66 |
+
<figure id="fig:PCA_cones" data-latex-placement="ht!">
|
| 67 |
+
<img src="figs/angles-conv-207-4forums.png" />
|
| 68 |
+
<figcaption>PCA projection of the 19-dimensional speaker embedding for the core of the interaction network. Colors correspond to the speakers’ labels. The black arrows to the left and right correspond to the average vector in each color class</figcaption>
|
| 69 |
+
</figure>
|
| 70 |
+
|
| 71 |
+
<figure id="fig:PCA_cones2" data-latex-placement="ht!">
|
| 72 |
+
<img src="figs/angles-conv-34-4forums.png" />
|
| 73 |
+
<figcaption>PCA projection of the 35-dimensional speaker embedding of the core of an interaction network also from 4Forum. Shorter vectors have a larger component perpendicular to PC1 and PC2. The induced cones have a large diameter, and therefore the confidence of having a correct prediction on authors within this conversation significantly decreases. Black arrows are cone centers (again shorter).</figcaption>
|
| 74 |
+
</figure>
|
| 75 |
+
|
| 76 |
+
Figure [2](#fig:PCA_cones){reference-type="ref" reference="fig:PCA_cones"} illustrates this point: two tight cones are observed, as well as some "straying" vectors that are liable to wrong classification. The accuracy of the hyperplane rounding on that conversation was 75%. On the other hand, Figure [3](#fig:PCA_cones2){reference-type="ref" reference="fig:PCA_cones2"} demonstrates wider cones, and accordingly, the accuracy this time was only 64%. Further illustration of how the diameter of the cones corresponds to an accurate solution is given in Table [1](#table:cones_example){reference-type="ref" reference="table:cones_example"}.
|
| 77 |
+
|
| 78 |
+
[]{#\"tab:corr-acc-cones\" label="\"tab:corr-acc-cones\""}
|
| 79 |
+
|
| 80 |
+
::: {#table:cones_example}
|
| 81 |
+
**Diameter** **Accuracy** **Authors**
|
| 82 |
+
-------------- -------------- -------------
|
| 83 |
+
2.0 0.79 2440
|
| 84 |
+
1.0 0.80 2403
|
| 85 |
+
0.75 0.80 2341
|
| 86 |
+
0.5 0.81 2258
|
| 87 |
+
0.25 0.82 2127
|
| 88 |
+
0.1 0.83 1921
|
| 89 |
+
0.05 0.84 1761
|
| 90 |
+
0.01 0.85 1332
|
| 91 |
+
0.001 0.85 917
|
| 92 |
+
|
| 93 |
+
: Accuracy of speaker classification for speakers whose vector falls inside the cone, for various cone diameters. Evidently, as the cones get tighter, the accuracy increases. The dataset used is the 4Forums conversations.
|
| 94 |
+
:::
|
| 95 |
+
|
| 96 |
+
It is important to note that the vectors that the SDP assigns to the speakers lie in ${\mathbb{R}}^n$. This dimension provides a lot of freedom in vector assignment (freedom which is necessary for the SDP to be solvable in polynomial time). Therefore, while the one-dimensional intuition just described is clear for a two-person dialogue, it is not a priori clear why the vectors in ${\mathbb{R}}^n$ should *simultaneously* respect the stance of all, or most, speakers in a multi-participant discussion.
|
| 97 |
+
|
| 98 |
+
We now explore the conditions that may lead to the desired phenomenon where the SDP solution is such that the vectors are clustered in two tight cones. These conditions are rooted both in the network structure and in the content of the conversation.
|
| 99 |
+
|
| 100 |
+
From the perspective of the *network topology*, it is easy to see that the optimal solution to Eq. [\[eq:SDP\]](#eq:SDP){reference-type="eqref" reference="eq:SDP"} is the antipodal-vectors rank-one solution we described above, where the assignment of vectors corresponds to the max-cut partition of the graph. However, crucially, Eq. [\[eq:SDP\]](#eq:SDP){reference-type="eqref" reference="eq:SDP"} does *not* contain a rank constraint on the solution, as this would make the optimization problem NP-hard. Now enters the assumption that edges represent antipodal stances. If this assumption is correct, and the structure of the network is rich enough to force a unique max-cut solution, then we expect a "tight-cones" solution which is aligned both with the max-cut partition and with the stances.
|
| 101 |
+
|
| 102 |
+
The assumption of a unique max-cut partition may be too strong to hold for the entire graph (think, for example, of isolated nodes or very sparse structures). However, for a special subgraph, the 2-core of the graph, this uniqueness may hold. Indeed, we have found that most of the SDP vectors of the speakers that belong to the 2-core of the graph (the subgraph of $G$ in which every vertex has degree at least 2) are arranged in a tight-cone structure. This phenomenon was also observed in other papers that studied related tasks such as community detection and other graph partitioning tasks [@PhysRevE.74.016110; @Newman8577; @Leskovec:2010; @coloringIsEasy].
|
| 103 |
+
|
| 104 |
+
But why should the graph contain a large 2-core in the first place? Here enters the *content/linguistic* aspect. We expect that captivating or stirring topics will lead to lively discussions that result in a complex conversation graph that induces a large 2-core. Together with the basic assumption that edges connect speakers with opposing stances, we arrive at the premise that in such discussions the SDP will produce solutions that have the tight-cones structure, and that this tight-cone structure will respect the stance. Thus, when rounding the solution using the random hyper-plane technique, we expect to detect the stance of 2-core users accurately. Section [7](#sec:discussion){reference-type="ref" reference="sec:discussion"} elaborates on the relationship between the spirit, or valence, of the conversation and the accuracy of the algorithm.
|
| 105 |
+
|
| 106 |
+
We now formally describe our main contribution, STEM, an unsupervised structural embedding for stance detection. The steps below are also illustrated in Figure [1](#fig:stance-main-flow){reference-type="ref" reference="fig:stance-main-flow"}. Given a conversation tree $T$, STEM operates as follows:
|
| 107 |
+
|
| 108 |
+
1. Convert the conversation tree $T$ to an interaction network $G=(V,E)$, as described in Section [3.1](#subsec:trees2networks){reference-type="ref" reference="subsec:trees2networks"}.
|
| 109 |
+
|
| 110 |
+
2. Compute the 2-core $G_C = (V_C,E_C)$ of $G$, i.e. the induced subgraph of $G$ where every node has degree at least 2 in $G_C$.
|
| 111 |
+
|
| 112 |
+
3. Solve the SDP in Eq. [\[eq:SDP\]](#eq:SDP){reference-type="eqref" reference="eq:SDP"} on $G_C$ to obtain a speaker embedding $\mathcal{E}$.
|
| 113 |
+
|
| 114 |
+
4. Round the speaker embedding using a random hyper-plane.
|
| 115 |
+
|
| 116 |
+
5. Propagate the labels to speakers outside the core, $V \setminus V_C$, using interchanging labels assignment.
|
| 117 |
+
|
| 118 |
+
In Step 2, the 2-core is computed by iteratively removing vertices whose degree in the remaining graph is smaller than two, until no such vertex remains.
|
| 119 |
+
|
| 120 |
+
Step 5 does not lead to a contradiction since, by definition, the vertices outside the core do not induce a cycle. Therefore, the propagation of labels in the sub-graphs connected to the 2-core is consistent.
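A sketch of Steps 2 and 5, again assuming a `networkx` interaction graph and the $\pm 1$ core labels produced by the rounding above (`nx.k_core` implements exactly the iterative pruning just described):

```python
import networkx as nx
from collections import deque

def two_core(G):
    """Step 2: the 2-core of the interaction network."""
    return nx.k_core(G, k=2)

def propagate_labels(G, core_labels):
    """Step 5: extend the core labels to the remaining speakers by flipping the
    label along every edge leaving the labelled set; this is consistent because
    the part of the graph outside the 2-core contains no cycles."""
    labels = dict(core_labels)
    queue = deque(labels)
    while queue:
        u = queue.popleft()
        for v in G.neighbors(u):
            if v not in labels:
                labels[v] = -labels[u]
                queue.append(v)
    return labels
```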
|
| 121 |
+
|
| 122 |
+
Finally, note that our algorithm produces a partition of speakers, similarly to the problem of community detection, without a label for each part (pro or con). One simple heuristic to obtain the labeling is to label the set containing the OP as "pro". Another option is to use an off-the-shelf algorithm, e.g. [@allaway2020zero], and noisily label a few posts on each side before taking a majority vote.
|
| 123 |
+
|
| 124 |
+
To evaluate the performance of our algorithm without the additional noise that this last step may incur, we checked the two possible ways of assigning the labels and took the one that resulted in higher accuracy.
|
| 125 |
+
|
| 126 |
+
We evaluate our approach on three datasets: ConvinceMe [@anand2011cats], 4Forums [@walker2012corpus], and CreateDebate [@hasan-ng-2014-taking]. These datasets were used in previous work, e.g., [@walker2012stance; @sridhar-etal-2015-joint; @abbott2016internet; @li2018structured], among others. We briefly describe each of the datasets and highlight some important aspects they differ in. A statistical description of datasets is provided in Table [2](#tab:datastats){reference-type="ref" reference="tab:datastats"}.
|
| 127 |
+
|
| 128 |
+
::: {#tab:datastats}
|
| 129 |
+
4Forums CD CM
|
| 130 |
+
------------------------- --------- ------- -------- --
|
| 131 |
+
\# Topics 4 4 16
|
| 132 |
+
\# Conversations 202 521 9,521
|
| 133 |
+
\# Conversations (core) 202 149 500
|
| 134 |
+
\# Authors 863 1,840 3,641
|
| 135 |
+
\# Authors (core) 718 352 490
|
| 136 |
+
\# Posts 24,658 3,679 42,588
|
| 137 |
+
\# Posts (core) 23,810 1,250 5,876
|
| 138 |
+
|
| 139 |
+
: Basic statistics of the three datasets: 4Forums, CreateDebate (CD), and ConvinceMe (CM). We also present the number of authors that belong to the 2-core of the interaction graph, and their posts.
|
| 140 |
+
:::
|
| 141 |
+
|
| 142 |
+
ConvinceMe is a structured debate site. Speakers initiate debates by specifying a motion and stating the sides. Debaters argue for/against the motion, practically self-labeling their stance with respect to the original motion. The data was first used by Anand et al. and incorporated into the IAC 2.0 by Abbott et al.
|
| 143 |
+
|
| 144 |
+
4Forums (no longer maintained) was an online forum for political debates. It had a shallow hierarchy of topics (e.g., Economics/Tax), and discussion threads had a tree-like structure. The 4Forums stance dataset, introduced by Walker et al., provides agree/disagree annotations on comment-response pairs in 202 conversations on four topics (abortion, evolution, gay marriage, and gun control).
|
| 145 |
+
|
| 146 |
+
Similarly to ConvinceMe, CreateDebate is a structured debate forum. Unlike ConvinceMe, the user initiating the debate does not put forward a specific assertion. Rather, she introduces an open question for the community, and speakers can respond by taking sides. Authors must label their posts with either a *support*, *clarify* or *dispute* label. A collection of debates on four topics (abortion, gay rights, legalization of marijuana, Obama) was introduced by [@hasan-ng-2014-taking]. This dataset contains many degenerate conversations -- speakers responding to the prompt question without engaging in a conversation with other speakers. We filtered out these degenerate conversations, keeping 541 conversation trees (see Table [2](#tab:datastats){reference-type="ref" reference="tab:datastats"}). The root of each of the trees is an original response to the initial question.
|
2201.07745/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2021-12-12T07:22:56.256Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.45 Safari/537.36" etag="6wXxmLDAj7hUl5mxOqaL" version="15.9.4" type="google"><diagram id="Q04oZwc6RLO-DQTOgG0R" name="Page-1">7VxZd5u6Fv41Xid9MAsQ46MbZ+hp2tXb5LanT3fJIBv1YEQxTuLz668EYjICQ4ztuCdJVyo0D3v49tYwApfL55sIht4n4iJ/pMru8whMR6qqaKo6Yv9kd5PG0Dges4iwy3MVEff4H8QjZR67xi5aVTLGhPgxDquRDgkC5MSVOBhF5KmabU78aqshXPAW5SLi3oE+qmX7jt3Yy3onl7LfIrzweNOqzROWMM+cRqw86JKnUlvgagQuI0LiNLR8vkQ+m71sXtKKrhtS845FKIi7FJjGi83H/+r+n1cButO/4fH3CIwVK63mEfprPuKRavi0wvcebcFYsFAWMye0pa24VQgDNrh4w2fM+LUmWebxKlnPCc1Ae/pcJGa1XD18yiqaRdtV06GktdeiRR2ZdemFoop6UR1fh1qAcCzPtLMucmmGBxzTGlT5EwxDHCx29JxGzwRx9fl38aNg1C8cg9Y2E8LF6NF3QU9FnRe18lb0rWhbUbVC3WpE1gnTgalCk588HKP7EDos9YmqJCbH4qXPk1dxRP7O5bjKuAL7/iXxSZTUBubzueo4ec5SimvMDN3Ie1AWu1wSP6IoRs+lKC6GbxBZojja0Cw81TS5juBKcUz1JI95KumYTJN4ZfWSZYRcry3y2gvRTwNc+vfQBGqrIjiIQClLuK1lpTMZi9autCJG8iNYQ539pqrJYdIXTEtfD4TSxHRM4YWYWghdxrmf6GkPuy4KaFxBYzIfd6m59KdGXKzLLlx5OWny9u/QnC2lVsR85aur8RJfYByjKEiK0WkdhuSULhSnKYeiuOvLn/r69pr8mk6gcWP+wFD9NQaGgOS26GBB5z7sPv4cB8KZX4Z1jfNiybWJUfQ6K+qiiZH1Q7GiXJsH5FJUyj9JFHtkQQLoXxWxW0Ra5LkjCcUzOvqJ4njDITZcx6TKYOgZx3+Vwj9YVZLOv6bPvObkY5N9BHS8f5U/SqXYZ1Es+crK1Ve0lUJWZB05qC0jh7ExjBaorUaNTy2bz1a+iZAPY/xYtQKGZ4E6/P78dT8dB328YMLDoVOLov5z3V2maKpkGRXuMeqsU8pUZh5FOZRUyVZYoMj2xcYa+6dfejAeme8nI3PKo5ogck+dFpAAbakzHlVbU7ZImBqpE56wpKoqkQIi4qhKhoORgwJ0yaR6xdL5X7VCG4qiSppiGapsWLah64pZF7OaVJRmf0WEo0kKMG2gyEDTLNXUrUMJYfPfJ4QzT0VJ3IrnRquLW3FG/VTitrXf+8gGXSQbPqLNE4mY6X/1HEfQiTEJuoqFKs30smAUAfq15yarrI6XgQFs4A4DJzWjC560BKhJH0Due8aj9dP+/h/7f9Gfzvpp+c9M0XNSe/38yqjhGi6xzyJukf+ImCznCbwlRc8Xaot3XsixAoDUwiInY9Bmd+ShHFs191WjR9NJeYk1Ei1mFzozz0YqHahcCb5jYVadnPRszld6kmQx4DJMEgFg9p6Xr/52Sl4JQw1jDgBYHT4zIGvj4x7HVsfpWQ6M/k9L2lXHwe85ut94nBcP75Lk1pEJfUQi77LIEd3gpDrNpPiIuXDGuQOK1ROQaAn9HfNWTsRUMzExz1KTzZNyIgUZwWpOq8wqD1CegSGRatvl4r3EZbvQfeJ6XTC8Xq3gLOIunYRvDKdiZ+1DShHy7SakuCQiocf0EGbE/4WqIppnlcANijZU+TMKvYgE2KHhCUUxEU46k5ZFwaqKx3BHomr3UfYyy6u+wBqKq3on1dG2v9Aa1fyF1jBQTu3iAAMiX7Q5AJRrtRr2QQFGKwroR+e7qv1YlW0d9slSuRHMVmEXyd5LJskBWiOBaNru9O3dt9tU/F16N5/T0ORWkqQdUnpQcyZzbCTSbxBy1ut7KyKHrimgZ+1Q9EzV+pmYJsd3JQC5o2ECtKFdCUnRSRTBTSlDSHCiWPKav7CIgr4A2KYv1dS3KCSts6CXvHN7WLf1rZGTkZB6GBrqZBaDnNgGMYu7U596SrMYNPu0k71xgc7Yxpc/16sYzzeHgIG9YV4r4O+BzQTnB4bpae9JHX7IgrH1s3x2EUBvy6irUfP67ZaXEKwQsb0cRwrbncxWifNY0Bz3DVxM3vXGmnsaPWlkeBwaq5mVGtuffyPAYar9EDiYTo3DRgwDtlvBNVRiNoeIWtwYRpja1syyJnOGjyvGdxIVe2iU+yseU1M9Oe1YSvUQjFjyBbMy2PI+oQglHV+7ODkiyexxnnkGVzhvcE5Xj04/+55HZJlnQo5HHBi5mCzYSeOkxzh4RJTgFjDdbpHddDETFwHbOxQz7RbZhzVfgpN7EhY+nZ5o7dN6Q49NDMt1weyld0V2WLgbvKq7oan5HpwY7q2aXkK9nb1zDQbiYAf2dnhNfDhD/nvo/L1Iat3aNa/4VE7qUgHAlLK9x11eFWpP5HmH3yTTXo8Z0dGKUEbDWxFavs7DWBFqRytCaziMdiQrQm20Ik60qdDkq2qQWXpNG6f9bfI/7wskTuGed/Eq9CGfOxz4uFRy7hMYl2ts8W/uXks5X79SaKiNkL1xkfCYxX0KfruodREYPh7+aXOjik+FdTr3kSk4F83hOmXcmk4cQH9pNR8qyM57lLSXagi01xDHO8TCyzo7zXU8H2rX41jAHNqHut+aDnAcS4hjH67HH6bXNPnDMqSrDlOWv3dI1HYb63c4l6XVvNO2KRlKjXmPezQLnJJ3C379UUrZxbuSqVfY9zXCTvM8YKfZyOUv958J92G/n8M+bAO2FV4pHenvZUlmJ6UTcCbJsp0HmSmXBa0ig5oHAcd0FIro065o5NS7t7VzpQ3yS7iBezDw0XKn4DwsJ+3IltObcXRQ40gIeu7/GM46arOLKibRaWyc4QWPVt/WB3WTB8gCqQMGkDrim0xKo9QZ4ibTwx8JOdPC8kNCwYywP57jlaaIxOkWAJiO7VZTqscdbYvdZipdZ9q2iGVV0q3SVaS6inpNt5mUkxxB6oRtW2l/54XOzC/wSi50as0+1v149uYhTW0GuK+SNQfgRKtu06pAUusMp1mSbop57FD7Kc0OjGO8nvP1/vd5PecrctdOsjN8z8guhSFv7+e0d17UylvRt6JtRffZjh/u/RzByx01JdGoEewTv58jVAV1L9eBXnLYa+oUOXmQQc5/lG1cq2mSbZsq/wV1LdvwoMMQr6EIJ/ZwV0Ir7zk8nPV7DvsRBbVmJFkuPd2wDbgSJ5xqW7YmG7oud3i6AYhIRFKBaSmqQjGdYumafSCK+Zdct+i7
idAmtsp2VjMXHsGqEjauNPtfh5AB9yP9isqBMMJLdI4ioOT/GEgi2Du8H2cmEBSltnJH834MwqUZA+xkU/OkbPrm/BhcNcsa2wHcZj/FpJxjAWDpumnbQOwLMeziRwTZ2DnTsvexP/fRz+L94vSqW/EMNLj6Pw==</diagram></mxfile>
|
2201.07745/main_diagram/main_diagram.pdf
ADDED
|
Binary file (78.5 kB). View file
|
|
|
2201.07745/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,105 @@
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Information retrieval (IR) is widely used in commercial search engines and is an active area of research for natural language processing tasks such as open-domain question answering (ODQA). IR has also become important in the biomedical domain due to the explosion of information available in electronic form [\(Shortliffe et al. 2014\)](#page-8-0). Biomedical IR has traditionally relied upon term-matching algorithms (such as TF-IDF and BM25 [\(Robertson and Zaragoza 2009\)](#page-8-1)), which search for documents that contain terms mentioned in the query. For instance, the first example in Table [1](#page-1-0) shows that BM25 retrieves a sentence that contains the word *"Soluvia"* from the question. However, term-matching suffers from failure modes, especially for terms which have different meanings in different contexts (example 2), or when crucial semantics from the question are not considered during retrieval (for instance, in the third example when the term "how large" is not reflected in the answer retrieved by BM25).
|
| 4 |
+
|
| 5 |
+
|
| 6 |
+
|
| 7 |
+
Since these failure modes can have a direct impact on downstream NLP tasks such as open-domain question answering (ODQA), there has been interest in developing neural retrievers (NR) [\(Karpukhin et al. 2020\)](#page-7-0). NRs, which represent query and context as vectors and utilize similarity scores for retrieval, have led to state-of-the-art performance on ODQA benchmarks such as Natural Questions [\(Kwiatkowski et al.](#page-7-1) [2019\)](#page-7-1) and TriviaQA [\(Joshi et al. 2017\)](#page-7-2). Unfortunately, these improvements on standard NLP datasets are not observed in the biomedical domain with neural retrievers.
|
| 8 |
+
|
| 9 |
+
Recent work provides useful insights to understand a few shortcomings of NRs. [Thakur et al.](#page-8-2) [\(2021\)](#page-8-2) find NRs to be lacking at exact word matching, which affects performance in datasets such as BioASQ [\(Tsatsaronis et al. 2015\)](#page-8-3) where exact matches are highly correlated with the correct answer. [Lewis, Stenetorp, and Riedel](#page-8-4) [\(2021\)](#page-8-4) find that in the Natural Questions dataset, answers for 63.6% of the test data overlap with the training data and DPR performs much worse on the non-overlapped set than the test-train overlapped set. In this work, we found this overlap to be only 2% in the BioASQ dataset, which could be a potential reason for lower performance of NR methods. We also discovered that NRs produce better representations for short contexts than for long contexts – when the long context is broken down into multiple shorter contexts, performance of NR models improves significantly.
|
| 10 |
+
|
| 11 |
+
In this paper, we seek to address these issues and improve the performance of neural retrieval beyond traditional methods for biomedical IR. While existing systems have made advances by improving neural re-ranking of retrieved candidates [\(Almeida and Matos 2020;](#page-7-3) [Pappas, Stavropoulos, and](#page-8-5) [Androutsopoulos 2020\)](#page-8-5), our focus is solely on the retrieval step, and therefore we compare our neural retriever with other retrieval methods. Our method makes contributions to three aspects of the retrieval pipeline – question generation, pre-training, and model architecture.
|
| 12 |
+
|
| 13 |
+
Our first contribution is the "Poly-DPR" model architecture for neural retrieval. Poly-DPR builds upon two recent developments: Poly-Encoder [\(Humeau et al. 2020\)](#page-7-4) and Dense Passage Retriever [\(Karpukhin et al. 2020\)](#page-7-0). In DPR, a question and a candidate context are encoded by two models separately into a contextual vector for each, and a score for each context can be computed using vector similarity. On the other hand, Poly-Encoder represents the query by K vectors and produces context-specific vectors for each query. Instead,
|
| 14 |
+
|
| 15 |
+
<span id="page-1-0"></span>
|
| 16 |
+
|
| 17 |
+
| Question | Answer | Retrieved Context (BM25) | Retrieved Context (DPR) |
|
| 18 |
+
|-------------------------------------------|------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------|
|
| 19 |
+
| What is Soluvia? | Soluvia by Becton Dickinson is a microinjection system for intradermal delivery of vaccines. | The US FDA approved Sanofi Pasteur's Fluzone Intradermal influenza vaccine that uses a new microinjection system for intradermal delivery of vaccines (Soluvia, Becton Dickinson). | Internet-ordered viagra (sildenafil citrate) is rarely genuine. |
|
| 20 |
+
| Is BNN20 involved in Parkinson's disease? | BNN-20 could be proposed for treatment of PD | Rare causes of dystonia parkinsonism. | BNN-20 could be proposed for treatment of PD |
|
| 21 |
+
| How large is a lncRNAs? | lncRNAs are defined as RNA transcripts longer than 200 nucleotides that are not transcribed into proteins. | lncRNAs are closely related with the occurrence and development of some diseases. | An increasing number of long noncoding RNAs (lncRNAs) have been identified recently. |
|
| 22 |
+
|
| 23 |
+
Table 1: Illustrative examples from the BioASQ challenge along with the context retrieved by two methods BM25 and DPR.
|
| 24 |
+
|
| 25 |
+
our approach Poly-DPR represents each *context* by K vectors and produces *query-specific vectors* for each context. We further design a simple scoring method that allows us to employ MIPS (Shrivastava and Li 2014) during inference.
|
| 26 |
+
|
| 27 |
+
Next, we develop "Temp-QG", a template-based question generation method which helps us in generating a large number of domain-relevant questions to mitigate the traintest overlap issue. TempQG involves extraction of templates from in-domain questions, and using a sequence-to-sequence model (Sutskever, Vinyals, and Le 2014) to generate questions conditioned on this template and a text passage.
|
| 28 |
+
|
| 29 |
+
Finally, we design two new pre-training strategies, "ETM" and "RSM", that leverage our generated dataset to pre-train Poly-DPR. These tasks are designed to mimic domain-specific aspects of IR for biomedical documents, which contain titles and abstracts, as opposed to passage retrieval from web pages (Chang et al. 2020). Our pre-training tasks are designed to be used for both long and short contexts. In both tasks, we utilize keywords in either the query or the context, so that the capacity of neural retrievers to match important keywords can be improved during training.
|
| 30 |
+
|
| 31 |
+
Armed with these three modules, we conduct a comprehensive study of document retrieval for biomedical texts in the BioASQ challenge. Our analysis demonstrates the efficacy of each component of our approach. Poly-DPR outperforms BM25 and previous neural retrievers for the BioASQ challenge, in the small-corpus setting. A hybrid method, which is a simple combination of BM25 and NR predictions, leads to further improvements. We perform a post-hoc error analysis to understand the failures of BM25 and our Poly-DPR model. Our experiments and analysis reveal aspects of biomedical information retrieval that are not shared by generic open-domain retrieval tasks. Findings and insights from this work could benefit future improvements in both term-based and neural-network-based retrieval methods.
|
| 32 |
+
|
| 33 |
+
# Method
|
| 34 |
+
|
| 35 |
+
**Dense Passage Retrieval** (DPR) (Karpukhin et al. 2020) is a neural retriever model belonging to the dual-encoder family. DPR encodes the query q and the context c into dense vector representations:
|
| 36 |
+
|
| 37 |
+
$$v_q = E_q(q)[CLS], \quad v_c = E_c(c)[CLS]. \tag{1}$$
|
| 39 |
+
|
| 40 |
+
where $E_q$ and $E_c$ are BERT (Devlin et al. 2019) models which output a list of dense vectors $(h_1,\ldots,h_n)$, one for each token of the input, and the final representation is the vector of the special token [CLS]. $E_q$ and $E_c$ are initialized identically and are updated independently while being trained with the objective of minimizing the negative log likelihood of a positive (relevant) context. A similarity score between q and each context c is calculated as the inner product between their vector representations:
|
| 41 |
+
|
| 42 |
+
$$\operatorname{sim}(q, c) = v_q^T v_c. \tag{2}$$
|
| 43 |
+
|
| 44 |
+
**Poly-Encoder** (Humeau et al. 2020) also uses two encoders to encode the query and the context, but the query is represented by K vectors instead of a single vector as in DPR. Poly-Encoder assumes that the query is much longer than the context, which is in contrast to information retrieval and open-domain QA tasks in the biomedical domain, where contexts are long documents and queries are short and specific.
|
| 45 |
+
|
| 46 |
+
We integrate Poly-Encoder and DPR to use K vectors to represent the context rather than the query. In particular, the context encoder includes K global features $(m_1, m_2, \cdots, m_k)$, which are used to extract representations $v_c^i$, $\forall i \in \{1 \cdots k\}$, by attending over all context token vectors.
|
| 47 |
+
|
| 48 |
+
$$v_c^i = \sum_n w_n^{m_i} h_n, \text{ where} \tag{3}$$
|
| 50 |
+
|
| 51 |
+
$$(w_1^{m_i}\dots,w_n^{m_i}) = \operatorname{softmax}(m_i^T \cdot h_1,\dots,m_i^T \cdot h_n). \tag{4}$$
|
| 53 |
+
|
| 54 |
+
After extracting K representations, a query-specific context representation $v_{c,q}$ is computed by using the attention mechanism:
|
| 55 |
+
|
| 56 |
+
$$v_{c,q} = \sum_{k} w_k v_c^k, \text{ where} \tag{5}$$
|
| 58 |
+
|
| 59 |
+
$$(w_1, \dots, w_k) = \operatorname{softmax}(v_q^T \cdot v_c^1, \dots, v_q^T \cdot v_c^k). \tag{6}$$
|
| 61 |
+
|
| 62 |
+
Although we can pre-compute the K representations for each context in the corpus, during inference a ranking of the contexts needs to be computed after obtaining all query-specific context representations. As such, we cannot directly use efficient algorithms such as MIPS (Shrivastava and Li 2014). To address this challenge, we use an alternative similarity function for inference – the score $\operatorname{sim}_{\operatorname{infer}}$ is computed by obtaining K similarity scores for the query and each of the
|
| 63 |
+
|
| 64 |
+
<span id="page-2-0"></span>
|
| 65 |
+
|
| 66 |
+
Figure 1: Overview of Template-Based Question Generation.
|
| 67 |
+
|
| 68 |
+
K representations, and taking the maximum as the similarity score between the context and the query:
|
| 69 |
+
|
| 70 |
+
$$sim_{infer}(q, c) = \max(v_q^T \cdot v_c^1, \dots, v_q^T \cdot v_c^k). \tag{7}$$
|
| 71 |
+
|
| 72 |
+
Using this similarity score, we can take advantage of MIPS to find the most relevant context to a query.
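A toy `numpy` sketch of Eqs. (3)-(7): the K learned codes attend over the context token vectors to give K context vectors, and at inference the score is the maximum inner product with the query vector (random matrices stand in for the BERT outputs and the learned codes).

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def poly_context_vectors(H, M):
    """Eqs. (3)-(4): H is the (n_tokens, d) matrix of context token vectors,
    M the (K, d) matrix of learned global codes; returns K context vectors."""
    attn = softmax(M @ H.T, axis=-1)          # (K, n_tokens)
    return attn @ H                            # (K, d)

def sim_infer(v_q, V_c):
    """Eq. (7): inference-time score = max over the K pre-computed vectors."""
    return float(np.max(V_c @ v_q))

# toy usage with random numbers standing in for BERT outputs
rng = np.random.default_rng(0)
H = rng.standard_normal((32, 768))             # token vectors of one context
M = rng.standard_normal((4, 768))              # K = 4 global codes (assumed K)
v_q = rng.standard_normal(768)                 # query [CLS] vector
print(sim_infer(v_q, poly_context_vectors(H, M)))
```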
|
| 73 |
+
|
| 74 |
+
In sum, Poly-DPR differs from Poly-Encoder in two major aspects: (1) K pre-computed representations of context as opposed to K representations computed during inference, and (2) a faster similarity computation during inference.
|
| 75 |
+
|
| 76 |
+
In this paper, we also explore a hybrid model that combines the traditional approach of BM25 and neural retrievers. We first retrieve the top-100 candidate articles using BM25 and a neural retriever (Poly-DPR) separately. The scores produced by these two methods for each candidate are denoted by $S_{\rm BM25}$ and $S_{\rm NR}$ respectively, and normalized to the [0,1] range to obtain $S'_{BM25}$ and $S'_{NR}$. If a candidate article is not retrieved by a particular method, then its score for that method is 0. For each article, we get a new score:
|
| 77 |
+
|
| 78 |
+
$$S_{\text{hybrid}} = S'_{\text{BM25}} + S'_{\text{NR}}. \tag{8}$$
|
| 80 |
+
|
| 81 |
+
Finally, we re-rank candidates based on $S_{hybrid}$ and pick the top candidates – for BioASQ, performance is evaluated on the top-10 retrieved candidates.
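A small sketch of this hybrid re-ranking; the min-max normalisation and the dict-based input format are assumptions of the sketch, not details given in the text.

```python
def hybrid_rerank(bm25_scores, nr_scores, top_k=10):
    """Combine BM25 and neural-retriever scores (Eq. 8): normalise each score
    list to [0, 1], use 0 for articles missing from one list, and re-rank.
    `bm25_scores` and `nr_scores` map article ids to raw scores."""
    def normalise(scores):
        if not scores:
            return {}
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0
        return {doc: (s - lo) / span for doc, s in scores.items()}

    b, n = normalise(bm25_scores), normalise(nr_scores)
    hybrid = {doc: b.get(doc, 0.0) + n.get(doc, 0.0) for doc in set(b) | set(n)}
    return sorted(hybrid, key=hybrid.get, reverse=True)[:top_k]
```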
|
| 82 |
+
|
| 83 |
+
We propose a template-based question generation approach – *TempQG*, that captures the style of the questions in the target domain. Our method consists of three modules: template extraction, template selection, and question generation.
|
| 84 |
+
|
| 85 |
+
**Template Extraction** aims to extract unique templates from which the questions in the training set can be generated. We first use bio-entity taggers from spaCy (Honnibal et al. 2017) to obtain a set of entities from the question. We replace non-verb entities having a document frequency less than k with an underscore (\_) – this prevents common entities such as "disease" and "gene" from being replaced. For example, given the question "Borden classification is used for which disease?", the entity tagger returns ["Borden classification", "disease"], but only the first entity meets our frequency-based criterion. As a result, the generated template is "\_ is used for which disease?". This process gives us a
|
| 86 |
+
|
| 87 |
+
<span id="page-3-0"></span>
|
| 88 |
+
|
| 89 |
+
Figure 2: Poly-DPR is pre-trained on two novel tasks designed specifically for information retrieval applications. This figure illustrates the sample generation pipeline using the title and abstract from each sample in BioASQ.
|
| 90 |
+
|
| 91 |
+
preliminary list of templates. We then use a question similarity model (which returns a score in [0, 1]) to compute the pairwise scores between all templates. Templates are assigned to a cluster if they have a minimum similarity of 0.75 with the existing templates of that cluster. Once clusters have been formed, we choose the template with the smallest or second-smallest length as the representative of each cluster. These representative templates are used for question generation.
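A sketch of the entity-masking step described above; the entity list and document-frequency table are assumed to come from the bio-entity tagger and corpus counts, and the non-verb filter is assumed to happen upstream.

```python
def extract_template(question, entities, doc_freq, k=5):
    """Turn a question into a template by replacing rare entities with '_'.
    `entities` are tagger outputs; `doc_freq` maps entity -> corpus frequency."""
    template = question
    for ent in entities:
        if doc_freq.get(ent, 0) < k:
            template = template.replace(ent, "_")
    return template

# extract_template("Borden classification is used for which disease?",
#                  ["Borden classification", "disease"],
#                  {"Borden classification": 1, "disease": 870})
# -> "_ is used for which disease?"
```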
|
| 92 |
+
|
| 93 |
+
**Template Selection.** Given a text passage, we create a text-template dataset and train the Poly-DPR architecture to retrieve a relevant template. After the model is trained, we feed new text inputs to the model, obtain the query encoding, and compute the inner product with each template. Templates with the maximum inner product are selected to be used for QG.
|
| 94 |
+
|
| 95 |
+
**Question Generation (QG).** We use a T5 [\(Raffel et al.](#page-8-16) [2020\)](#page-8-16) model for generating questions by using text and template as conditional inputs. To distinguish between these two inputs, we prepend each with the word "template" or "context", resulting in an input of the form: {"template" : template, "context" : text}. Figure [1](#page-2-0) shows an illustrative example for the template-based question generation method abbreviated as *TempQG*. The contexts used for generating the questions are any two consecutive sentences in the abstract. Given such a context, we first select 10 unique templates and concatenate each template with the context independently. These are used by the question generation model to produce 10 initial questions; duplicate questions are filtered out.
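A minimal sketch of the generation call with Hugging Face `transformers`; the `t5-base` checkpoint and the exact prompt string are assumptions, and the model would first need to be fine-tuned on the text-template-to-question pairs before its outputs are meaningful.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

def generate_question(template, context, max_length=64):
    # prepend the two conditional inputs with their role markers
    prompt = f"template: {template} context: {context}"
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    outputs = model.generate(**inputs, max_length=max_length, num_beams=4)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```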
|
| 96 |
+
|
| 97 |
+
Our aim is to design pre-training tasks specifically for the biomedical domain, since documents in this domain bear the *<title, abstract, main text>* structure of scientific literature. This structure is not commonly found in documents such as news articles, novels, and textbooks. Domain-specific pre-training tasks have been designed by [Chang et al.](#page-7-5) [\(2020\)](#page-7-5) for Wikipedia documents, which contain hyperlinks to other Wikipedia documents. However, most biomedical documents do not contain such hyperlinks, and as such, the pre-training strategies recommended by [Chang et al.](#page-7-5) [\(2020\)](#page-7-5) are incompatible with the structure of biomedical documents.
|
| 98 |
+
|
| 99 |
+
Therefore, we propose Expanded Title Mapping (ETM) and Reduced Sentence Mapping (RSM), designed specifically for biomedical IR, to mimic the functionality required
|
| 100 |
+
|
| 101 |
+
for open-domain question answering. An overview is shown in Figure [2.](#page-3-0) The proposed tasks work for both short and long contexts. In biomedical documents, each document has a title (T) and an abstract (A). We pre-train our models on ETM or RSM and then fine-tune them for retrieval.
|
| 102 |
+
|
| 103 |
+
**Expanded Title Mapping (ETM).** For ETM, the model is trained to retrieve an abstract, given an extended title $T'$ as a query. $T'$ is obtained by extracting the top-$m$ keywords from the abstract based on the TF-IDF score, denoted as $K = \{k_1, k_2, \cdots, k_m\}$, and concatenating them with the title as $T' = \{T, k_1, k_2, \cdots, k_m\}$. The intuition behind ETM is to train the model to match the main topic of a document (keywords and title) with the entire abstract.
|
| 104 |
+
|
| 105 |
+
**Reduced Sentence Mapping (RSM).** RSM is designed to train the model to map a sentence from an abstract to the extended title $T'$. For a sentence $S$ from the abstract, we first get the weight of each word, $W = \{w_1, w_2, \cdots, w_n\}$, by normalizing the TF-IDF scores of the words. We then reduce $S$ to $S'$ by selecting the words with the top-$m$ corresponding weights. The intuition behind a reduced sentence is to simulate a real query, which is usually shorter than a sentence in a PubMed abstract. Furthermore, $S'$ includes important words based on the TF-IDF score, similar to a question including keywords.
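A sketch of how ETM and RSM training pairs could be assembled; fitting TF-IDF on the abstract's own sentences (and the naive period-based sentence split) is a simplification for illustration, since the paper computes the scores from corpus-level statistics.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

def build_pretraining_samples(title, abstract, m=5):
    """Build one ETM pair (extended title -> abstract) and one RSM pair per
    sentence (reduced sentence -> extended title) for a single document."""
    sentences = [s.strip() for s in abstract.split(".") if s.strip()]
    vec = TfidfVectorizer()
    tfidf = vec.fit_transform(sentences)
    vocab = vec.get_feature_names_out()

    # ETM: extended title = title + top-m TF-IDF keywords of the abstract
    abstract_scores = tfidf.sum(axis=0).A1
    keywords = [vocab[i] for i in abstract_scores.argsort()[::-1][:m]]
    extended_title = f"{title} {' '.join(keywords)}"
    etm_sample = (extended_title, abstract)

    # RSM: reduced sentence = top-m weighted words of one abstract sentence
    rsm_samples = []
    for row in tfidf.toarray():
        top = row.argsort()[::-1][:m]
        reduced = " ".join(vocab[i] for i in top if row[i] > 0)
        rsm_samples.append((reduced, extended_title))
    return etm_sample, rsm_samples
```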
|
2201.12126/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2021-12-30T17:05:53.129Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36" etag="i2IoKXrtrpqHLJUmf7GU" version="16.0.0" type="google"><diagram id="vfVqGsnfRSShgpZTx2Sr" name="Page-1">7Ztdb6M4FIZ/TS6nAh+bj8tpOjujXY20Ui9me7Wi4CZoHRwR2ibz69cUOwGHrzBpbKVz0+KDbeD1gznn2JnBfLX9mkfr5XeeUDZDTrKdwd0MIeR4SPwrLbvK4jo+VJZFnibSdjDcpz+pqiitz2lCN42KBeesSNdNY8yzjMZFwxblOX9tVnvirHnVdbSQV3QOhvs4YvSo2o80KZaVNSC12t9ouliqK7uOPLOKVGVp2CyjhL/WTPBlBvOc86I6Wm3nlJXqKV2qdn90nN3fWE6zYkyD5R359yn7Ufx8+cSzhz+/P/71jXySvbxE7Fk+8Ax5TPR3+ygOFuWBqyyi671RPlKxUzqJa4khEYXb12Va0Pt1FJdnXgUWwrYsVkyU3H3L+m2re6B5Qbc1k3yMr5SvaJHvRBV11peSSqhAaf56GCGQpmVtcNRIRJKJxb7ng2ziQCp3goowrCLYp2KoqQiGVcTDKhLrVNxPVEpFz7CKZFhF/7wqPqWMzTnj+Vs7eApiGsfCvily/h+tnXkMCBYT53l0B0330LDu/rDuyDp6XWLZHBAOq4jtU1H/HpmeA9Sc1JBRFyhLPpf+kShlPKNNUXL+nCW0vET5ugpl8t0/9cKDrPdWuNvWT93tVGmbFm+NbogsPdTOHBqVBdWmczA2/DmP6fDrV0T5ghbDDg9NlNvXMbS1oSMtQ6dsOWVRkb40ncW28ZRX+Jun4sn25GCn4/1TXVTPLVvVnTu9I+1FRoHWUSXMUUdveO0f+xeIa3Mkz0FcBY9izhnJnFsjzn0n4lQ0MkQcWEWcQK5JnDuROETCGwgBwgAFYiZzPY0//8bxXcfzA9cNPELgsjiiD4cjjMQRW4Uj6G6cNxFHwH04ApjFsS1ANIejYxGOoVU4uhqOGE/E0e3FEbtmcWyLtK97dsQjcSRW4QiBhuNU9xDCXhwdszgGLThan4QEPehDpoO+EbGzfVnIowjIdOyM2mJn69OQRzOF6ZQ4agsINRm9K8hDYmSb8G2hj/WJSNDDUuPTwIg1HfsykYCJZTS+l6tpbyZSPeFwKlL6PZY4m17XStapzqbnaR3pKaZ39idR2xLYdYc3CqVh5uyKt/W0NUyNt7HXm/4xG28jzyog6+mfw5R4diTDkUiqb48tSLrNTyjoKyljkSS9CXLwzCLZtlz9G0nN+bMESd05xv5EJKEXSYzNItmWBvoFJE/+/E6F+BJAWrZsrQW9MBVI/fu/j1cuxBy0BcvdzMUs2mzSuA+7KtmgtopCHx6WDCbSE5lBOHF20dfy9I7eezBHhOxzzli03qTZQlRk0Y7m4r+IUcWde9GqDMirv8IyLsknAvOiyUMz3SSnqXpuSpoili6yEirBhbgNuC3D/DSO2Gd5YpUmCevKGzSp62TshA11Xa54jcKwhUJ9SeJ8m2pHLAsEV5BB1IXHphcSYMRCwjWkbo82QJveQoo/3iYqMtIBArtyZfo3k+h+y+hdVEGvR+43PXLfu+gHFZ/mHV3DtpWx+wTArjyavqsPT83d9u/qw2CWxw/y2wbsdURFxj5MbQnMK3TFiGOb8G2u2HlzIkPpNouzItiu3VpYi3+JyiSfnhXpm4SJmIRD4TcAcoPA3a/4XmgOVlr9JrJvorSFSG1JFwdTiYReIp0mkdg/E5KiePh9dFX98DNz+PI/</diagram></mxfile>
|
2201.12126/main_diagram/main_diagram.pdf
ADDED
|
Binary file (17.5 kB). View file
|
|
|
2201.12126/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,83 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Deep reinforcement learning (DRL) has enabled us to optimise control policies in MDPs with high-dimensional state and action spaces such as in game-play [@silver_go] and robotics [@continuous_control]. Two main hindrances in bringing deep reinforcement learning to the real world are the sample inefficiency and poor generalisation performance of current methods [@survey_generalisation]. Amongst other approaches, including prior knowledge in the learning process of the agent promises to alleviate these obstacles and move reinforcement learning (RL) from a tabula rasa method to more human-like learning. Depending on the area of research, prior knowledge representations can vary from pretrained embeddings or weights [@BERT] to symbolic knowledge representations such as logics [@logic] and knowledge graphs (KGs) [@story_completion]. While the former are easier to integrate into deep neural network-based algorithms, they lack specificity, abstractness, robustness and interpretability [@boxology].
|
| 4 |
+
|
| 5 |
+
One type of prior knowledge that is hard to obtain for purely data-driven methods is commonsense knowledge. Equipping reinforcement learning agents with commonsense or world knowledge is an important step towards improved human-machine interaction [@HI], as interesting interactions demand machines to access prior knowledge not learnable from experience. Commonsense games [@wordcraft; @twc] have emerged as a testbed for methods that aim at integrating commonsense knowledge into a RL agent. Prior work has focused on augmenting the state by extracting subparts of ConceptNet [@twc]. Performance only improved when the extracted knowledge was tailored to the environment. Here, we focus on knowledge that is automatically extracted and should be useful across a range of commonsense games.
|
| 6 |
+
|
| 7 |
+
Humans abstract away from specific objects using classes, which allows them to learn behaviour at the class level and generalise to unseen objects [@yee2019abstraction]. Since commonsense games deal with real-world entities, we look at the problem of leveraging subclass knowledge from open-source KGs to improve sample efficiency and generalisation of an agent in commonsense games. We use subclass knowledge to formulate a state abstraction that aggregates states depending on which classes are present in a given state. This state abstraction might not preserve all information necessary to act optimally in a state. Therefore, a method is needed that learns to integrate useful knowledge over a sequence of more and more fine-grained state representations. We show how a naive ensemble approach can fail to correctly integrate information from imperfectly abstracted states, and design a residual learning approach that is forced to learn the difference between policies over adjacent abstraction levels. The properties of both approaches are first studied in a toy setting where the effectiveness of class-based abstractions can be controlled. We then show that if a commonsense game is governed by class structure, the agent is more sample efficient and generalises better to unseen objects, outperforming embedding approaches and methods augmenting the state with subparts of ConceptNet. However, learning might be hampered if the extracted class knowledge aggregates objects incorrectly. To summarise, our key contributions are:
|
| 8 |
+
|
| 9 |
+
- we use the subclass relationship from open-source KGs to formulate a state abstraction for commonsense games;
|
| 10 |
+
|
| 11 |
+
- we propose a residual learning approach that can be integrated with policy gradient algorithms to leverage imperfect state abstractions;
|
| 12 |
+
|
| 13 |
+
- we show that in environments with class structure our method leads to more sample efficient learning and better generalisation to unseen objects.
|
| 14 |
+
|
| 15 |
+
# Method
|
| 16 |
+
|
| 17 |
+
The problem of integrating knowledge present in a knowledge graph into a learning algorithm based on deep neural networks has mostly been studied by the natural language community [@commonsense_overview]. Use-cases include, but are not limited to, open-dialog [@Open_Dialog_1], task-oriented dialogue [@task1] and story completion [@story_completion]. Most methods are based on an attention mechanism over parts of the knowledge base [@Open_Dialog_1; @task1]. Two key differences to our setting are that the knowledge graphs used there are curated for the task and therefore contain little noise, and that most tasks are framed as supervised learning problems where annotations of correct reasoning patterns are given. Here, we extract knowledge from open-source knowledge graphs and have to deal with errors in the class structure due to problems with entity reconciliation and incompleteness of knowledge.
|
| 18 |
+
|
| 19 |
+
State abstraction aims to partition the state space of a base Markov Decision Process (MDP) into abstract states to reduce the complexity of the state space on which a policy is learnt [@li_state]. Different criteria for aggregating states have been proposed [@bisimulation]. They guarantee that an optimal policy learnt for the abstract MDP remains optimal for the base MDP. To leverage state abstraction, an aggregation function has to be learned [@block_mdps], which either needs additional samples or is performed on-policy, leading to a potential collapse of the aggregation function [@neurips_bisimulation]. The case in which an approximate state abstraction is given as prior knowledge has not been looked at yet. The abstraction given here need not satisfy any consistency criteria and can consist of multiple abstraction levels. A method that is capable of integrating useful knowledge from each abstraction level is needed.
|
| 20 |
+
|
| 21 |
+
<figure id="fig:example_abstraction">
|
| 22 |
+
<div class="minipage">
|
| 23 |
+
<embed src="images/wordnet_wordcraft_abstraction.drawio.pdf" />
|
| 24 |
+
</div>
|
| 25 |
+
<div class="minipage">
|
| 26 |
+
<embed src="images/class_tree_for_wordcraft.drawio.pdf" />
|
| 27 |
+
</div>
|
| 28 |
+
<figcaption>(Left) Visualisation of a state and its abstractions in Wordcraft based on classes extracted from WordNet. Colours indicate that an object is mapped to a class, where the same colour refers to the same class within an abstraction. (Right) Subgraph of the class tree that is used to determine the state abstractions.</figcaption>
|
| 29 |
+
</figure>
|
| 30 |
+
|
| 31 |
+
Reinforcement learning enables us to learn optimal behaviour in an MDP $M=(S,A,R,T,\gamma)$ with state space $S$, action space $A$, reward function $R: S\times A \rightarrow \mathbb{R}$, discount factor $\gamma$ and transition function $T: S\times A \rightarrow \Delta{S}$ , where $\Delta{S}$ represents the set of probability distributions over the space $S$. The goal is to learn from experience a policy $\pi:S \rightarrow \Delta{A}$ that optimises the objective: $$\begin{equation}
|
| 32 |
+
J(\pi) = \mathbb{E}_{\pi}\left[\sum_{t=0}^{\infty}\gamma^{t} R_{t}\right] = \mathbb{E}_{\pi}[G],
|
| 33 |
+
\end{equation}$$ where $G$ is the discounted return. A state abstraction function $\phi: S \rightarrow S^{'}$ aggregates states into abstract states with the goal to reduce the complexity of the state space. Given an arbitrary weighting function $w: S \rightarrow [0,1]$ s.t. $\forall s'\in S'$, $\sum_{s\in \phi^{-1}(s')}w(s)=1$, one can define an abstract reward function $R'$ and transition function $T'$ on the abstract state space $S'$: $$\begin{equation}
|
| 34 |
+
R'(s',a) = \sum_{s \in \phi^{-1}(s')} w(s) R(s,a)
|
| 35 |
+
\end{equation}$$ $$\begin{equation}
|
| 36 |
+
T'(\bar{s}'| s',a) = \sum_{\bar{s} \in \phi^{-1}(\bar{s}')} \sum_{s \in \phi^{-1}(s')} w(s) T(\bar{s}|s,a),
|
| 37 |
+
\end{equation}$$ to obtain an abstract MDP $M'=(S',A,R',T',\gamma)$. If the abstraction $\phi$ satisfies consistency criteria (see section 2, State abstraction in RL), a policy learned over $M'$ allows for optimal behaviour in $M$, which we refer to from now on as the base MDP [@li_state]. Here, we assume that we are given abstraction functions $\phi_{1},...,\phi_{n}$ with $\phi_{i}:S_{i-1} \rightarrow S_{i}$, where $S_{0}$ corresponds to the state space of the base MDP. Since $\phi_{i}$ need not satisfy any consistency criteria, learning a policy over one of the corresponding abstract MDPs $M_{i}$ can result in a non-optimal policy. The goal is to learn a policy $\pi$ or action-value function $Q$ that takes as input a hierarchy of state abstractions $s=(s_{1},...,s_{n})$. Here, we want to make use of the more abstract states $s_{2},...,s_{n}$ for more sample-efficient learning and better generalisation.
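For a small tabular MDP, the abstract reward and transition functions above can be computed directly. The sketch below assumes a uniform weighting function $w$, NumPy arrays for $R$ and $T$, and that every abstract state has at least one member; it is an illustration, not part of the method.

```python
import numpy as np

def abstract_mdp(R, T, phi, n_abs):
    """Build (R', T') for a tabular MDP under an abstraction phi.

    R: (S, A) rewards; T: (S, A, S) transition probabilities;
    phi: length-S array mapping each base state to its abstract state;
    n_abs: number of abstract states. Uses a uniform weighting w(s).
    """
    S, A = R.shape
    R_abs = np.zeros((n_abs, A))
    T_abs = np.zeros((n_abs, A, n_abs))
    for s_abs in range(n_abs):
        members = np.flatnonzero(phi == s_abs)
        w = 1.0 / len(members)                      # uniform w(s)
        R_abs[s_abs] = w * R[members].sum(axis=0)   # R'(s', a)
        for s in members:
            for sbar_abs in range(n_abs):
                succ = np.flatnonzero(phi == sbar_abs)
                # T'(sbar' | s', a): aggregate transition mass into sbar'
                T_abs[s_abs, :, sbar_abs] += w * T[s][:, succ].sum(axis=1)
    return R_abs, T_abs
```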
|
| 38 |
+
|
| 39 |
+
The method can be separated into two components: (i) constructing the abstraction functions $\phi_{1},...,\phi_{n}$ from the subclass relationships in open source knowledge graphs; (ii) learning a policy over the hierarchy of abstract states $s=(s_{1},...,s_{n})$ given the abstraction functions.
|
| 40 |
+
|
| 41 |
+
A state in a commonsense game features real world entities and their relations, which can be modelled as a set, sequence or graph of entities. The idea is to replace each entity with its superclass, so that states that contain objects with the same superclass are aggregated into the same abstract state. Let $E$ be the vocabulary of symbols that can appear in any of the abstract states $s_{i}$, i.e. $s_{i}=\{e_{1},...,e_{k} | e_{l} \in E \}$. The symbols that refer to real-world objects are denoted by $O\subseteq E$ and $C_{tree}$ represents their class tree. A class tree is a rooted tree in which the leaves are objects and the parent of each node is its superclass. The root is a generic entity class of which every object/class is a subclass (see Appendix [8](#app.A){reference-type="ref" reference="app.A"} for an example). To help define $\phi_{i}$, we introduce an entity based abstraction $\phi^{E}_{i}: E \rightarrow E$. Let $C_{k}$ represent objects/classes with depth $k$ in $C_{tree}$ and $L$ be the depth of $C_{tree}$, then we can define $\phi^{E}_{i}$ and $\phi_{i}$: $$\begin{equation}
|
| 42 |
+
\phi^{E}_{i}(e) =
|
| 43 |
+
\begin{cases}
|
| 44 |
+
\text{Pa}(e), \mkern4mu \text{if} \mkern4mu e \in C_{L+1-i}\\
|
| 45 |
+
e, \mkern4mu \text{else}, \\
|
| 46 |
+
\end{cases}
|
| 47 |
+
\end{equation}$$
|
| 48 |
+
|
| 49 |
+
$$\begin{equation}
|
| 50 |
+
\phi_{i}(s) = \{\phi^{E}_{i}(e)| e \in s\},
|
| 51 |
+
\end{equation}$$ where $\text{Pa}(e)$ denotes the parent of entity $e$ in the class tree $C_{tree}$. This abstraction process is visualised in Figure [1](#fig:example_abstraction){reference-type="ref" reference="fig:example_abstraction"}. In practice, we need to be able to extract the set of relevant objects from the game state and construct a class tree from open-source KGs. If the game state is not a set of entities but rather text, we use spaCy[^3] to extract all nouns as the set of objects. The class tree is extracted from either DBpedia, ConceptNet or WordNet. For the detailed algorithms of each KG extraction, we refer to Appendix [8](#app.A){reference-type="ref" reference="app.A"}. Here, we discuss some of the caveats that arise when extracting class trees from open-source KGs, and how to tackle them. The class tree can become imbalanced, i.e., the depths of the leaves, which represent the objects, differ (Figure [2](#fig:class_trees){reference-type="ref" reference="fig:class_trees"}). As each additional layer with a small number of classes adds computational overhead but provides little abstraction, we collapse layers depending on their contribution towards abstraction (Figure [2](#fig:class_trees){reference-type="ref" reference="fig:class_trees"}). While in DBpedia or WordNet the found entities are mapped to a unique superclass, entities in ConceptNet are associated with multiple superclasses. To handle the case of multiple superclasses, each entity is mapped to the set of all $i$-step superclasses for $i=1,...,n$. To obtain representations for these class sets, the embeddings of each element of the set are averaged.
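As an illustration, $\phi^{E}_{i}$ and $\phi_{i}$ can be implemented directly on a class tree stored as parent links. The toy tree and entity names below are hypothetical; the real trees are extracted from DBpedia, ConceptNet or WordNet as described above.

```python
def depth(node, parent):
    # Number of edges from `node` up to the root of the class tree.
    d = 0
    while node in parent:
        node = parent[node]
        d += 1
    return d

def phi_entity(e, parent, L, i):
    # phi^E_i: replace e by its superclass if it sits at depth L + 1 - i.
    if e in parent and depth(e, parent) == L + 1 - i:
        return parent[e]
    return e

def phi(state, parent, L, i):
    # phi_i: apply the entity abstraction element-wise to a state (set of entities).
    return {phi_entity(e, parent, L, i) for e in state}

# Hypothetical class tree with parent links (root: "entity").
parent = {"oak": "tree", "tree": "plant", "plant": "entity",
          "rose": "flower", "flower": "plant"}
L = max(depth(o, parent) for o in ["oak", "rose"])  # depth of the class tree
s0 = {"oak", "rose"}
s1 = phi(s0, parent, L, 1)   # {"tree", "flower"}
s2 = phi(s1, parent, L, 2)   # {"plant"}
```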
|
| 52 |
+
|
| 53 |
+
Since prior methods in commonsense games are policy gradient-based, we will focus on this class of algorithms, while providing a similar analysis for value-based methods in Appendix [9](#app.B){reference-type="ref" reference="app.B"}. First, we look at a naive method to learn a policy in our setting, discuss its potential weaknesses and then propose a novel gradient update to overcome these weaknesses.
|
| 54 |
+
|
| 55 |
+
A simple approach to learning a policy $\pi$ over $s$ is to take an ensemble approach by having a network with separate parameters for each abstraction level to predict logits, that are then summed up and converted via the softmax operator into a final policy $\pi$. Let $s_{i,t}$ denote the abstract state on the $i$-th level at timestep $t$, then $\pi$ is computed via: $$\begin{equation}
|
| 56 |
+
\pi(a_{t}|s_{t}) = \textnormal{Softmax}\left(\sum_{i=1}^{n} \textnormal{NN}_{\theta_{i}}(s_{i,t})\right),
|
| 57 |
+
\end{equation}$$ where $\textnormal{NN}_{\theta_{i}}$ is a neural network processing the abstract state $s_{i}$ parameterised by $\theta_{i}$. This policy can then be trained via any policy gradient algorithm [@TRPO; @a3c; @Impala]. From here on, we will refer to this approach as sum-method.
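A minimal PyTorch sketch of the sum-method; the per-level encoders are plain MLPs here, and `state_dims` and `hidden` are placeholders for whatever feature extractors the environment requires.

```python
import torch
import torch.nn as nn

class SumPolicy(nn.Module):
    """Ensemble ("sum") policy: one network per abstraction level,
    logits are added before a single softmax."""

    def __init__(self, state_dims, n_actions, hidden=64):
        super().__init__()
        self.levels = nn.ModuleList([
            nn.Sequential(nn.Linear(d, hidden), nn.ReLU(), nn.Linear(hidden, n_actions))
            for d in state_dims  # one feature dimension per abstraction level
        ])

    def forward(self, states):
        # states: list of tensors, one per abstraction level, each (batch, d_i)
        logits = sum(net(s) for net, s in zip(self.levels, states))
        return torch.softmax(logits, dim=-1)
```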
|
| 58 |
+
|
| 59 |
+
There is no mechanism that forces the sum approach to learn on the most abstract level possible, potentially leading to worse generalisation to unseen objects. At train time making all predictions solely based on the lowest level (ignoring all higher levels) can be a solution that maximises discounted return, though it will not generalise well to unseen objects. To circumvent this problem, we adapt the policy gradient so that the parameters $\theta_{i}$ at each abstraction level are optimised to approximate an optimal policy for the $i$-th abstraction level given the computed logits on abstraction level $i-1$. Let $s^{n}_{i,t} = (s_{i,t},...,s_{n,t})$ denote the hierarchy of abstract states at timestep $t$ down to the $i$-th level. Define the policy on the $i$-th level as $$\begin{equation}
|
| 60 |
+
\pi_{i}(a | s^{n}_{i,t}) = \textnormal{Softmax} \left( \sum^{n}_{k=i} \textnormal{NN}_{\theta_{k}}(s_{k,t})\right).
|
| 61 |
+
\end{equation}$$ and notice that $\pi=\pi_{1}$. To obtain a policy gradient expression that contains the abstract policies $\pi_{i}$, we write $\pi$ as a product of abstract policies: $$\begin{equation}
|
| 62 |
+
\pi(a | s_{t}) = \left(\prod_{i=1}^{n-1} \frac{\pi_{i}(a|s^{n}_{i,t})}{\pi_{i+1}(a|s^{n}_{i+1,t})} \right) \pi_{n}(a|s_{n,t}).
|
| 63 |
+
\end{equation}$$ and plug it into the policy gradient expression for an episodic task with discounted return $G$: $$\begin{equation}
|
| 64 |
+
\label{eq:normal_gradient}
|
| 65 |
+
\begin{split}
|
| 66 |
+
\nabla_{\theta} J(\theta) &= \sum^{n}_{i=1} \mathbb{E}_{\pi}\left[\sum^{T}_{t=1}\nabla_{\theta} \log \left(\frac{\pi_{i}(a|s^{n}_{i,t})}{\pi_{i+1}(a|s^{n}_{i+1,t})}\right)G\right],
|
| 67 |
+
\end{split}
|
| 68 |
+
\end{equation}$$ where $\pi_{n+1} \equiv 1$. Notice that in Equation [\[eq:normal_gradient\]](#eq:normal_gradient){reference-type="ref" reference="eq:normal_gradient"}, the gradient of the parameters $\theta_{i}$ depends on the values of all policies at levels equal to or lower than $i$. The idea is to take the gradient for each abstraction level $i$ only with respect to $\theta_{i}$ and not the full set of parameters $\theta$. This optimises the parameters $\theta_{i}$ not with respect to their effect on the overall policy, but their effect on the abstract policy on level $i$. The residual policy gradient is given by: $$\begin{equation}
|
| 69 |
+
\label{residual_gradient}
|
| 70 |
+
\begin{split}
|
| 71 |
+
\nabla_{\theta} J_{res}(\theta) &= \sum^{n}_{i=1} \mathbb{E}_{\pi}\left[\sum^{T}_{t=1}\nabla_{\theta_{i}} \log(\pi_{i}(a|s^{n}_{i,t}))G\right].
|
| 72 |
+
\end{split}
|
| 73 |
+
\end{equation}$$ Each element of the first sum resembles the policy gradient loss of a policy over the abstract state $s_{i}$. However, the sampled trajectories come from the overall policy $\pi$ and not from $\pi_{i}$, and the policy $\pi_{i}$ inherits a bias from the more abstract levels in the form of their logits. We refer to the method based on the update in Equation [\[residual_gradient\]](#residual_gradient){reference-type="ref" reference="residual_gradient"} as the residual approach. An advantage of the residual and sum approaches is that the computation of the logits from each layer can be done in parallel. Any sequential processing of levels would have a prohibitively large computational overhead.
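A sketch of the corresponding loss for a batch of transitions, assuming `level_logits` is ordered from the base level to the most abstract level. Detaching the logits of the higher levels inside each term restricts that term's gradient to the parameters of its own level, which is the restriction made in Equation [\[residual_gradient\]](#residual_gradient){reference-type="ref" reference="residual_gradient"}.

```python
import torch
import torch.nn.functional as F

def residual_pg_loss(level_logits, actions, returns):
    """Residual policy-gradient loss (sketch).

    level_logits: list [logits_1, ..., logits_n] ordered from base to most
        abstract level, each of shape (batch, n_actions).
    actions: (batch,) long tensor of sampled actions.
    returns: (batch,) discounted returns G.
    """
    n = len(level_logits)
    loss = 0.0
    for i in range(n):
        # pi_i = softmax(sum_{k >= i} logits_k); higher levels enter as a
        # detached bias so the gradient of this term only reaches level i.
        upper = sum(l.detach() for l in level_logits[i + 1:]) if i + 1 < n else 0.0
        log_pi = F.log_softmax(level_logits[i] + upper, dim=-1)
        log_pi_a = log_pi.gather(1, actions.unsqueeze(1)).squeeze(1)
        loss = loss - (log_pi_a * returns).mean()
    return loss
```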
|
| 74 |
+
|
| 75 |
+
<figure id="fig:class_trees">
|
| 76 |
+
<div class="minipage">
|
| 77 |
+
<embed src="images/objects_per_level.pdf" />
|
| 78 |
+
</div>
|
| 79 |
+
<div class="minipage">
|
| 80 |
+
<embed src="images/collapse_layer.drawio.pdf" />
|
| 81 |
+
</div>
|
| 82 |
+
<figcaption>(Left) The number of different objects/classes that can appear for each state abstraction level in the Wordcraft environment with the superclass relation extracted from Wordnet. (Right) Visualisation of collapsing two layers in a class tree.</figcaption>
|
| 83 |
+
</figure>
|
2201.12426/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
| 1 |
+
<mxfile host="embed.diagrams.net" modified="2021-09-08T09:42:33.426Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.63 Safari/537.36" version="15.0.6" etag="JhMKD0CWmasFfd2EDZ0i" type="embed"><diagram id="wjbCRQYZMGPPWOCUL7bD">5b3XkttI0jZ8NRPxficKeHMIEI4A4QhDECd/wHtL+Kv/Ud2SZjSt3dXOamY3djskNpAAilVpn8wssn9BL80mjkGfq12c1L8gULz9gnK/IAiJ4ecrIOzvBIyG3gnZWMTvpN8QrOJI3onwF+pcxMnrM+2dNHVdPRX9t8Soa9skmr6hBePYrd/elnZ1/A2hD7LkA8GKgvoj9VHEU/5OpXDoV7qUFFn+5Z1h6POVJvhy8+chXnkQd+tvlozyv6CXseum96NmuyQ14N23fBH+xtWvExuTdvqRB5D3B5agnj+v7fO8pv3LYs8p9uDwNSVtVNT/1/to+MpnlHce57j3gXqhl/PAKBc1HcdrNATy81Y3p4gFDwrixyAat1m6ILfatsmkoe5ZxocoWYWHed5C9sc5QxbM8BxD2sy1Pw/S5XxB6IRK3y+gOPx2E5saswWu82VPfaG0JHmSNPQ8oS3p/YEL+E1rixdu7wR6RkcD/XycIO8H57+8vsbJ++Eiwd45xssGx4aogAk9oEapKe0kowtxDxNj4Tz6vLB6B/n5KTBIAN5cKskIF33K9bYqFVzK9LCwSNJDKK0oM+blKF8S1kk5GKy/JyuUxBJlGQht40hykY4XJaL0cdxf1jk0m6wx5UoSYE57nC8I2uFHHBgw6qXk5ibj+b5xCSZxMoSdW8CKuxOk5KzJSxHxYFFii2Tnr6me52JIaajBWSrk2WUiiLJIzVR4xjbCkk5DHBcUW5diOEeB0FK5mSSrwePbClm3nGN0r2ncc73ODbv0YZcnuTn/Y4eRQ6QQg2nkPLEj/WKiUt7OYnxeNVgaWZrrpUjoRqX6WWVtwF+wGsMxA4m4tFUcpnK2YDc97pFTZVhRphPRoZI3dZBPgrAlaxqHp/yFx7FGKXQZybhYJiq5nSSfvFahQeyoZ1jU6yQ4Ko4teUVhLRmcT0ch3HrmKpCNpk9EOk0aix/LBjTvmdINsgMJbpSB6Ga6cJv9JLFgSUYMfd1SPOUgAyzsQY2ITeI2kAVqKy+WMlrFyPXE0IBuvujWyA5KJY/nioHF5bNUp5f1VFChJUbAG96F31jk26iDrBzZ+MOco+H4kskJo+5Ac2eDSa/PBYgeDXUuhLSYnO6HhtFGjXhAFClJ84jKwUxk4sV6zfPFZfK7ZMqlzAl9KHD8eVPwvDktbCZQ2Vwuklcc0stjNf2qRxV8MaJz9M7vHLPGbhglg5lVrUz2KnZ9lVqO4/gunsTdUp/nnXDCD9ebm2g8ino0SVT8JcMnxlDpp1cfFLOJvEij2MkyMzr6pDRWqdvNJr8ix87Zc7EO5DWFm7Gz9Nzg7xcivLRLcFGqMXvuKv+UKfMAehtVEFbGK9fFDKpxqFg67JazYpHymmACUyWD3awhvq7zS7/MOQLmZusX5OKU7iW8Ph4Or/Zuk4fdOQ/Uc2ppQbKMG41kP1csXDXmfEKLX4RF3M00kC2jE3romd0n1hDM0mYJK78eDcs+gj6wkNira42f5PbBHu++iaFlw9lLjcsbzr+kQBrYwYr1k46u5x0TI0sJf544PuPHonubHZUckHvPaD5jRxzN0zGH62ST8IqmBeqtT8Yqf5iDh/KSBeMdHEcLgd+dXe7LR2eKFe+MF+Be8YcDicKV2lgMuKW9v9ooSxMtxvJpYTAWzR6bfFu0J8vQmD4Bs5QyRJSpmhiyOlRtUTe2UIAofLNfjQIlAp7vPZMkHMVhKMPxlBg1Bf9MCx2jI0GjhZjQcVzGgsi7nxNg+YzQ6Kw6jxag7LCcyFgzE2vVAp6e/kIYGznhe2Kzzd5qLog1DLocXqqxmA/kGrbVDVvPOy9A9QT1SLurl0fGWu3mNb7TN39tC51fU0q+F7s85oqpazQL3nl5WunMEjOlW6E3SzKGpphYcsGi3mzTX+FYOba6YWgde7yi1yhSOog4PoRKavpS0e1uQI4J3LLaSYbqdo8Edo9HZlA77G6+xlfnr1F/XCmfzPBwXQ2+Ih73B/YoRd1SOkYMCStTFNZ+QM/24tDeuhmm51pskCP1qwTrmerr4xZ1BKpGKggK/I0SO+xR0XfSM+JAqSTNBbraAS2/T6bqZPPz1a2BPIZXe5jniXr2TyXqNOzFYxIkHB4TtC/i1rGNXNd8q+cawqGFNXeKyAVQpAD7lb0ZOPeOox6wRCBJUxBAN9gCPWdI7EmX2hplknv37IJW4FVsNBA53Mu8zVqJPfbauN7mjQ7vXg/JrRlReyiwKu5PHQfJ4QIZkxCzbJIh0B1PMKtW4zRLtHmRMJgK0oM+BuD+r+Weo54uyefaBAnLDTI/vfFUJNeDfTGJh5B5n1b943orqLs3UdsjofKYVWCW0FG5O8CMWSq5LzO2nQFtIuVU9M53O4fmlD6gO2mL44vaJFKfSne8uJHhsvNdu2x4vtER04pCj5Y8dlRCenXalL6VEBWCwHnsfYMAP3qnyWcEV0a3re3+Zqc2pQNWzQYIFUT+dGZPv5+064zqNCUBXySw9On7BBvE/M0D/t1gnBKyYBnRs3YOqYPiezQofUhtWxdW6HVhqxTeKDVRG6m3lLavTdVziTwR/BRnsaB5zpQ9vLZVT6gMdjFc8CljP4EgIrjkqhk4AttL6x99UI+bb2QiPfrAw3CvEHnp8w6CFVeRGKqxGGporaHiPbnKaoIt93DFNTraPLsC8XEb0rElb3n6LMvL6EQXfN3FK8zdPewYopmSn4WBnJIKtN2ktfTJW4G9XMJ+mTJziuPKpF6PG7JxYkDRpqaFIYgo/WgS/awhlAwAGyvjV6Kh4uXV6jxxOhzhVWZ3BcCtx3V6PggQg7Ae4x+lRCeGtTI6uw/+A3mmIYdmHbcjevOIgZgP/SIrvJc9rGrwJJoU1lYdnSK4yWu4Wu/Yg+NwwkHrVe/tEYWeTOjdO7UVkzu2HCtAUpZkEwnankOZhdeOucPcegVamJi5K6pUCXGGO4cc3+SIWudoBFgJNu0I2gGHi2jMHJtxnQirHY5c47UyTinW1V4NL2e7D9zAeLqzGTzKlNVl9112izgXDprxRmr1LB5zPUs7dkwiV1BHV7walBOtGUmQJ1lQuO/fvSZ0aih1JwEAgDOuBVAIl/ENOSa3uZ7pw3bA41Dz1/DokVDidPweuohEw9zFwpSmPe7KUQC0QtxU7IzNbGFrPg5PztMnjKsC52hs7+xtm+KGRwLZS8LAoZ645V6We2512XRiTSRseAzJxuTADWAFUXZDsYXTH1OiyJZmMsJDpJDtqdY7qmpQvpATIvZ3xnzUm+uK9gu30a6TpWVVk3k
cqJ0KjsE6bdZfJ4RnRd6YQCDI6bqvwzxbdIvwL4sTeuvu9nDp91EDr3uA2QPMpAo52V3dRiRryKNZ3+gIvdI2hOorcUgat5aDKN8TYbDrhfEzz4+OiIj1E3oJpb9aWn7IM+QP8KmDTQrALW7xV4oXc8MFqnfc4/sWoOLjBlwUHppWz8kvtnWxcud1owgq6npYLR7LcK6Pen7w6+ySMUhkqNLh/EUIlquGXvBuE/jbhmnd5pQWIzq8VebT3OeyrpkxSD0yCB5XYeTvm0AF4ok1BUGE+BcKZU+I9gY0U1G3zgXDVeVZSUKlErnxCL2+TgHMfZkhXwWm9eL02ktGiM+tatVrjO4dAFtxKiXEBJopLguskhh6w0UDV6s5j+rKxOG6Pavwcd3n48QY/F2hX+XtBWTQwkcnmai1urdD702P0FqtsTnDVFvPfWo1tI1rMbREA9xgW7PuqJDzwkcIM+ArmhWOKNZsAgsvpYQDSF1KI2HEoBbJirTkyfE0IVRht5MDvel6+yUzbijiww5PmIA88Zd1ilsKwoDMECZzHnnPYmVWOhrnoIp3bEWVXLEsD1ZY6+9C46y3KYfdEs/ySnvume9V7oG05KJnV4HFwpPRkEARcN8riG2qfQcYB7uWU4n9bSxHdrggFTenT/hesN6cwbBdj+JFdKeaYHhUM7gbgI9soT0iSqOsx3wuusklqsQk4jm8TgTo9DtCm+SlQ0OrdU+kfMOgMbrLOtuhfHm9GGNxLccRjV1Yu5EOTKPxi4fyM84IiTvloreV/YKzC/ZsFCNsKl5jlNG9MUPmWpg0oTs6HbPlnjkt6zOWVZDdrdwWrz/xbS+HyroAEHUnNuWqL7EX3kMZDyBaWeGDD7Km52YkD6Zr6xSC4MTEy3ywTmgGG2/zlCYqOua/LiTPBjSMqVJSwIO2F6ejD71iZGfW1zd3dhvX5KsKhL+yblr+DOLU9Vl6gwlILxeB0adpnfNEhLpNAVDJa/aGcysb3eak604bvC/g1uz0V3w98k9zIR091nzPCtYWuuz443bf3ccZjeNBNZn+EeNo5ua5QBLiOCI8r1QmlBXYOcb4sic0QbCyF4qYGom5n7gBfWA7zMc3UvXNmOa29U4DgyStcxG4xSoU3AWWi9OBKSa8GaoQ+kSQrQqZ2FH60kO0JOVVXHjonYjKZDuaKibziuyEwhPN8pc1rOhlI9jbjrEmbLYOr+QmAFCz/1idUa1csxzgYzBwuYnxVNzX+yRQD4/D9gzv8YVfAa5Ub+wOUmz3KpbUM3se265GoLZQXTJIu17njrf6fpYW+CbfHutjjSG8vx/KfBuqonstw87WrKF45YV1A9whexpysSy6+Okmxq8n9swu5/DEPNz4V3ZonZxMJkL1MHrpsiCR3SR9XEIlroLnmbxVShW7ZjCs2ejOLFWYbD0DRRpv8V6kZ86Y3VVGnq9yrCssZJzJDERdpmizmyDYRxM5QaoS4s6Z8UkQfobbMyKcPA9y6jaUm6iciKGJw1x/PN2l64j8bnU15ePIlMzCg0I0zCfY2jfyYYJvvmD0uHK1IEsNnfdkiatVxiDdga+pS0toNkFHsqYT8fKMNBxKaoWoI4OfOxYDyTcSXVojFpTbiRi2IyhmcyOjvIiS4R0BsKbWsIQzDqXPZSWnX+LX2DeCCmWTSfhXLPWnsKo34doJMFBcnENhznFiNL7rRe8KDvMiIsPHT8x/qDSlq1eR7PrKsGPuwTX2CwAR5SkX8LpS8hQTxPNAcSGN7WJ2tkA+FV0oBnBXReL5IFcb03kYhLeKbep7Arcwg/ERmtQEcOvJNYupgZknpZPYbJ5BzNeDy6L60HZk/ELQ0MgzoD61OMZyZt08JtaF2hauZ/b5U/T0WHGnwdO4cag2SOzDXaUyHcHhI3F9voEazd6Qfl0Yd9diqb+9cth+ojw1UbfSOwTKkoaBtyFfxNBRsLR9p5831kqZ8V5xYaxKz0lqYWdzWgRaULVQPKPCPU9ql9fSQ+aZOTu3h+aRynb6BFLrnNIxPB8ZYmty6TuLA1Tdwx7WDRVqt8IxVAfal3r6IIAsSWR87JYPMqPdbNEhAf4WTdRUGbAKleJxMD18HZhnxQnpEaKQth7oWu5S8VxkPyVwzAePrrqVlYDjQrcoDb0+kYULTPPyuQRYEfVNXMkJfZQZJPu0rOTY6TSfR/D02eXOEykMuXhoDLqhZSbIPZWxX0o+ectUiuzxXDd3jBLvSlwwe82gY4YN9GHEE9Gw60G7d88p1GukWQVrGVbn25vZoZRk1Tu8yC5EtCXj13cOWtQ98O6weRweKUQAcGwrQJV3SIMXv3GoZZnM9vCPbUhIbog7A4WHG6J2KI1qa7TL4ZSgCaYuSsnOlNAuGYKu1nNhAKouOmIzNsfS+VtMTGfwIsuGDhcjIPCxV3F8mqh45JIn8JShDVzTFdQpdA8m+XczbFp1suf5ZZdJKIHC3jgWQ7Avgzd4M18sxLJW8HVVb28gam0mZQwAogYlj+aFTx0ln/FJbTeoBegLw7kxUg+CaI/bE4s1sF6Fg3FhpkaA/sURGUb3rSrqZSMMoFmZP5fTEAdK2qoX/1IlFKZxj3R7fIoeEkgBgjZwJUwAziuNUY3cOghUfK/SoDfEwUvFwoxlOXgBdKbKK9ZsUOy1j5xB/aFYEgsv48da06lLUKpeWKhZkyVx3BK7f6tcDANBXUACrhDiAOrRlUfaG3xGT5DXyarYy0vYPqsFprqlLnzw1jBVvLlSioxsudFGWtdbsZfcBQYpI8gQqZwd4jMDcjZQlBY2kHP6D3gVEGqbd684ShS/0CdRhQvyAq/UHQwo6JSHY/REJRjq2il2PCgIOd5y0bWkAUqJFo8GPgQgs/G9as2+DEubU7TFKR3UmSnjrVWAcv/vF5RNi7q+dHU3vpXp0cuFxwXhpHd9EBUT6Kzg0Hn6msauSr7c2HZtchI/V/2TcUq2v9k5gL/2I5pNTLommcZTz6AvD2DwJwxBv/4gxPsIn1s6BIR9oiAaR9D3V/L96vpruwQl8U9frr69Yu+35L/pnCAU8um3t6CfG0fB5wZO9nVav/Y4gOq/tzm+3/JA//mWR+2jfpUfEvfW8sgHyn3rQhivZU2LnK3Varcv3QhAxYRfHdqzU1egqwVCWlWaxmHjmTwOqRp0CYgnZQBZfq4s1pCSosBq2NlbPrcUaCqOlzMhrhdyuxGJ0XLvnu+tMbI1BoneDHDrDVSnUQRPFnJBUfqt/m0sgNUC/fWJm9rC0PH1VJzotX0/w2AAqJ/13gnGWwvFTpHiXckFYCAHDPz33pbAdBY8VjccWfYOmyiiKNP8VbbZcoULb5cVUKjfdzAdnInTnboXBDr55pIFdzCaSVFmruGXiLaPMgBA8PS+iOWUUFsFAEsPhpyiMuj85IkDXDbX8wslkQNkQBOOKcnuxHSbkYQDpnI/02cbj/QynZ5l1a5kkIf4YVzzol2FNCDdFMXA29oSPhhKmY07vRQS8JESDuIKMK46dMTKIMLZSkx7p6Keyj1otp60h6GeiL2bH2hUtKDdo8otVU
k15bWX6kwQObRIt4j03lZiXN8KR027qteJZlKqw+DkcUbYLLU5HK2C9ADZ4W03DP1JcweNyrhtdEG5Xp9+x2EJ6HrkIZOHV0apxu4qKbUXqie+tCFY6DgHnqzSmnrbgVyz4zhpGO0Ahm58VGuhDDoqZKm97oS8vCTVEjhGVt0jL8p9tzZXs85IfzxphrCqjDlNlF0ZLofuvqAfOxRF3kRzpYYeBlPdIqhc7WbNuOLIDtNyJ5CcoN3FqS7SE2MtIQYxhn1gOhgmuAaGrDj3eM54gvDlOycVVdzYz5k4sOdT40Q/da1iDa2tedS3E3ioAUMhswjpnuQvsU84c9wyhqSyTQ5J060vHhRDj45I8BBbg3H8Cgk391rTt37RKVBvZuD7uhqHiwlJcOzGviEZ9Ji1+yYxvqhGB3CoKXMf4hzlAiJ/1RySmhSzZj7dMRmFczqIkjfufjHs60vvN1AzZtZ1VdEmuRmV+LqoiojHS7hQusnbIqPcksbSl1AJUKXhLlteZfuROBTNJgy2jZa+PYFxlYPosFGdSddpDppQBUDGJEMX1M/2hnmVEmJq0MM2bTfTr7T9ZFTnThbeVz0zg/VKdUwUvsTrDQf1dngoz+emZujk5pCmy2h3sazEOnRhr8go3e90/8y3XobL8smpl5ckNO71ngnpCQcojB5ewe4ql8lv+9BdmbuXqiLpSXcfeIlNJLrGwYNb5zghjDOwL96rHYM99s6mj9d2EV6QzOnqvPaR4m+Falct8/KyhtGwJ5ojTFTfRb0D7VKBIQcuYOX6CdaMdPlBmF1HCWcqCgJYIJaDI1jQtQPdJgAk7jVjiQpOJGdepxSpN/JNKza3wb9yvPS4en3aiF4cA3Er/kBWHH/iXuQWkCHBQUXGV62AyazJqq4b2DL82FmbmDvpChn2PX4uSgWCqQrNaLr7sgbA6MAloq7XqZAoD7yVYg6ltQf/gmiURBLr0EK4pub7lIbD2onxSEy5Ki2Gnbx6tX3diGHaEfdFGoObjJJqY7ToAS8dQIHrOogneHa0g5Jx8RjSKPG1pdXnO1FT0iO60FhDnUik8bB2CKTNkWlFGS82i18auxYQQstXoYXSxMdeT9jy6RZBOdPBpzmq2v6mo7a2xwZ3NWlT1AhokgEDocbGm+ugJ7VgWYGTJqNrH4tzb7zdi6m2OAwIwhS6AtXave3IbTbyQx6j/a7BIJJ0C/Jinjc7p5tLVKaX60KkWAIJmACnVAK6PFcLQOUHAJK5DqVa7qOY8tDQkfX4GyhHJ31Urr3SlGNbXmWRtMD6HwKZGFDimQ9DMEpm5G82hdhNyFO4SftmeioNoWut3tr7okEFnt/VyevPrKpPjeZ67NRpKbS+1IhqAZ9Lifd8UtGKiDqNyGkpX5HZ0EYqR42CLo01fUCdXrkVERNpay/ppu4DB/kNlrNkwqsiSuV0W4llQ8JpREwpRA0ocqEftu5XsKreRmGhctPVWwxWt8RF9bTtYJlGbkiatfsSWDISVcTexiRl99hlHfC6cJv7QkPKU0p32qhmwRb9kJywCFVoIyZ2n4uZDdvMmDkujOzqpmS6BIrNRF5ne7kUr5kf2JUSoAuICPXTWBjZdtWOfYOjg3CHk2fLruim9HwsyXkKu6hkMlMHx649tGIXb3BU7EcwnN7pRBZUE7qMlQWXG0ZmacsrHSLCG9dzDGguiDIPk+FdEBRPfkC2a4rzLRC7YVpLu2KTNBh3tVF6R4dMnagV5KJFNs3oV8q6XWgbnnm3PpZ64iGXICToQs0dSVDlEDhjNq29N01dkWuSdYt0T5OCKzYsGFcYUgBQw82lCqgYfC+5LsqII5rrCzil1FGEXJSALfrwrsVm9Hw9xXKb+NSoDZ9WMCm5EmeqitH3x45ozqqcDlFQRjCiVIFemoA9kqOIH7XKwCCPLhGKbO8bCgc9Pb2CTrmJqNggLZP1KNWNcKsdgdNPVh41eEV3cRNT0IA+Pc9jUdN7CFvU0heAHX3ytmDZLNMMlNnVKnZM6OqgMR/dBswfQ78J63b3XqT+5NFl9xI0Pb3hoA57NCcbtF50os/M6oiEdi4eHrxVoL3OskWjitr+mFfzNEHO1vxbsLiisAan1PmkmD02HJ2i8I0NCTCAxwpfyqrSMI2uGysKqXd6GPVeuzJQmNosa0OoqnEkYqYwp1wMmGMECN4rmKy1PWMtWm8EJoZgyDO46bzEPWt0bub4FVNA5XyK3dhiWTPAu6qxLyeGgqeA6AVTM4cg0ybayEceWDPrZBtIB42c9TOlu+088zCDgzlxrpswd2NaWYeNqYxFSDlibtoZd4NcTJkm6w1V6EQNpQqHKVPezW5yV68MFbkRMymwdw4xnS+Y1jry1RjtPPPh8j43zRs4U1dWG4+79JL9M8uNQOIaK8grxrl+Dx4bAAGlMhxWqt2ayRmDU0y2111y5tITHqN0AONUViZCkFSfMNMx2HUxhauoO7FZZV3zYoYAeXldpl0vDYMY0GUWCb2MPZPur5q3jUGJeNW8yDivponOgXdUAVpuuxMYuG3ClGZqSYUyi/bTY+gHSx3CNA9Ec2SS5F408y0vdRvMKDJIhfWjQB2bq9a4KlXGbnBDw+og8u9rdUe9BxQQsAQf8t2IyfrG1d4snWneeCkuJCzA3cOuMTLAL45tDNUkBJKd6w+hDwwuO1fQdyC1nNdxWg8IRjWfc2louIH2bHRHGb3bHxBEWqbgwPMxIx2nxB7PcQLoI2VXKAftZHlrw7ESTrXn02lgn+HxZBh/UKrIIPbuCJ3jeYjELJJ0YYsm41nN8yHrT8lBZ9Clz9TmsSAle5czwPmx4Z6pqdY9s2fBVjxvcz8zWsXUWnQCy8x5A3GsWI6HO2bjBKLzDT8thstC0zkhhYQF45lGM2IuSBAzIn7HZncRmrytppkzww903BUVsnFLoJui0VZUGzHR+CqBeiOv6LY0ikjey5i2HFfzZ0ZYYc44gYz0JJ/EIL0yxjdqEbxxwm4wIhooHxnYPpcTxNRit5nSg7mDTTD9Boo551TuoUvPfnj6ZRA8j9wfUIV4KAlj3elHNiBcR/YHh6Jc3p/wCFuiQ2Rm5biYvsrGV0tDEF7rlGB/snA354G5ZnAMUWghdGTpvrTYZTgfnqsn4vePHkoUvZvJfnIBKnR9vCtmhq9vNvdUFszTgox8NUN7SYmTTau+IRH1gjux4J0yx9oE3/Ptuo2HSOOJ1sErx6Wm5pdxEznr1j2Q+HhhETuu0Z7xh1J3TH6AQitbYW1dJPdT7ycPLtYTeysk9nC08IzAM3v4fiPjeq37SZK3aFaG0vPyolGrUVsjpHUwQkQavS9M3uK+xNVrV58nUELaiOURMZeIOWD40szYiQxJKB32hUEdmTHjM73b2JVuIlgpt9nTr1ffFHemy+TblYvYUnqiAIXQ6wWfYOk1xE4RzaAQyPQMO9zk3B7wQH4kzmXnndxThnisXrXgbbcYgl7xmgX9bbv44sHMvLxeUypYY/QqvURhe7z4MsiTq3nVLRcrFU6Ay+ByJJ14lW5TekUZDMtubJde3a5Ri
Z2YZ8bKc7yjU1zmHyFhFjS9keY2o2jlYvHDGhZ6vxusnF7JivKFJ+OZ1z3HlbaRGPN5uNIe+dMuGGgay88AX2Ltcc+V6YEBuH9PZp+hGooQI/omEMr1dfrQB45AFi7fF13sDjRykxPC38L+6Vs4/xAueQ1S87T3+wjR8zSSrANLX6HzzI2296BrW54QQkBDpVds7V61XPAg7et8umtieD5sN0dZ5MEhdo37z2XwB1mAlhIfJncoBqo3sTYeur1/kGTV08AkmhlOZy0VPGT2+WUfM5SqK5C1O2C7GOHBDxGtUOJm3WPq0dmJCwWF6NBo4moxTIBsoXeeEjySM6b7xaGhD4nT9mY2spo20Hvi5jpnnGmaKuugUd0SK6MPAQdalUMNB5TRAHT6iAUeDkLIa82q0BF3t2LagEHl4gZ2ZgG7CYkRdqg8Z9gxeUFw4xn3l+igyVU+8huZx7EWYw+/5BHfRhFiTmRDahZCMIwW1A1cb6FfRmikYM1pekBE+oIpEpR31t3bnQMqXbN8HHPo4bdp4F4OY226o/uX1ltuisQFgeKPHI1T8+EK+N495tZ3Q03ir8vY4hBp275eNQe9YU29KyGayksTPE5HeQwHRR6hmdISRb0Emn6rBr3vbHzLVP9m+U8QeOJy+ab8R/yp5T+c/ASjJAHTEPb++m35jzgvf710vn4o/yHUD5T/IPoT/s1N/3r5D/tQ7UviLLE+n3bjlHdZ1wY1/yuVHbu5jRMwAGDpr/fcuq4/ifBJLJNp2j/vVA/mqTtJ+dTUn68mWzF5vzl+gqE+4Z/PuO3zyG8n++eT93mCyX27Y7ybx+gzCf9hqY1JHUzF8u1Q32Pj50eNrjhH/CptAoE/QTT09Qf+RtgUCn074BSMWTJ9HuN3ovk6qR+SFv6hWKv3U9GcfD6tkajPRZ/4+TzKpjeOEUFzCoRtwxf49X8qqJBCb7YDXbp20ZLp/30Q/6/CBdJZ82JKrNOGwNV1DPpvBfnB6sDPT7Iokjh5/C1jv3xu4Dd2A3/5aMRvDYWA/rZEf9QwiA+sviXB2J6cQaDuM9NPVkP/d7Ns9T+XjSgE/56NBP0XspH8t/iX9pyl98WLgJPfeBhw+quLeTv7Bz7m3X7/Yh+Dkh8ER2E/5lg+jAX/zl9B6N93WO9O9Wc4LOrfKn74nxP/t+DgzRABs35MMYi/SjEw+qNiUH9QMf5+ICORP00v6A/e1RK5f82J/knOEvlBZ/l72/wjzvLLB/r+B8yF/svMBf0gU/IHAdqHsfAvkfPPNxAY/mAhH3Wjrov+lfxj0whe/fvnPtNiA5ryM2zlFOEn6ltuQMgn/IOxfNmj8Ftb+Rn7FuDv8OOvRBa/msfzG+v4Z23lSy705fgn5UL0R5ODf1zGP9/miD+KXfCPPvmD/f5Es0P/Ozzwn6RVxL9Tq863/6BVv09RflSrUPjvAZ8PyvoTFezfU3D51W1hFPrnOS747yrY28hGMhYnv5Lxx7UO/nfmYQj6QevIPwq3/77W/Zlu7WPh6JfTdPBLOgbRLyR7Hp2cnIoAPHSegC9miM4Tkr39QnJv/z7cs/5/BbjwsexEfESxJ8yYvtXI7xZif1sH+UwK6iJrz9PoFDPQGRaAluKcG/P5QlPEcf23MNC3hvMTUA+Of4sAv1dLwb6THfweKP4hxPOxJPUuRC6ppwDM4ZTIfzPzkd8zH/4LmU/+zzMf/YT/40Lin8Z/6m/w/43BJNuPSVxEU9G1wCn9NwsCIz7Wf75TGCeoTzT8J8niY/XmG1l04SsZl+B/QRgwBLb9/0Nh/FlW8WWMbyXxX++KfswC/jSmfyzNiHMwxv+9DIdBc/p3YBX6+BGWP43hyN/wN0cCQu//FABFP1b//8ow/N2PC30WwBf+3xD9C+0cLvy1Hf0fKZPffT4s7drpm2yXRQniJ4kOoj9BKP3rz7cfEftLXRj2j8UovskR+rqL4P31pNzE/yYJEwSL/Kxm+3eg2dcK1V8h1e9m+QAN4PwJy8DXYcFvKTv0dvHtCvL+dYP/S5kM/CVt/BLKyO+1+Kiv2c5PF9N38vjfM/uHWzs/A06hH6M78rGRQ35V7d8yhPwJ/PhOav2BH23MgG8q/VWp/nH5/B9UNv9Iifyfbnn+toz5vZ7MP7Ft4DeiwL/jQL7Q/sVyJ3y6sA/K8GN1yY+lU/jDWAT5u7F+3t445DsVgn+rWWHEpy8g+Sto/mhWyCeM+I6f+Ur9lyzrO4n6v5UlvysgkTD1ifxrWYJ+J2P+Y87mm71l8B9wNn+/WfIjDuVfa8f9NQ4FIbBPMPJry+NLA+yfbHp8HBcnPyE09rfQNIn+LjL9PEeD/sjWjB9ToX9uE8FXJfoapZ6/1Zw/qkbfa699cab/OWr0D8SNYX9cjdDfDUV++t3+lJ+oO98pZfwMrPMPtwr8SVvuv6s7P76366/Tnd/JGEZ+L+N/SmG+xPGvw/0uMv1EhfmB72f5M8HxN67m70esvw2O4+CVf93G+X7blz8UAH++bATTmRu2bxQEgr9J29EfjYfUd5TxPw1gv8VD+KsbIz/Erf/0eHie/voHGd5v//WvWqD8/w8=</diagram></mxfile>
|
2201.12426/main_diagram/main_diagram.pdf
ADDED
|
Binary file (38.4 kB). View file
|
|
|
2201.12426/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,21 @@
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
An unambiguous trend in machine learning is that different parts of the pipeline are being automated. Automated architecture search is now the state of the art for neural networks [@efficientnetv2], learned dynamic learning rates work better than learning rate schedules [@MetzAdaptiveLR], and even reinforcement learning algorithms have been learned [@evolveRL; @metalearncuriosity; @Kirsch2020Improving]. There has also been quite some work on learned optimizers (L2O) (cf. section [2](#sec:background){reference-type="ref" reference="sec:background"}), though so far, due to computational limits, they work only on small datasets, only beat hand-crafted optimizers for the first thousand steps or so (because that is the horizon they are trained on), and do not generalize well. Some, like Richard Sutton [@bitterlesson], argue that as compute becomes cheaper and more accessible, "learned things" will become much better than "handcrafted things". However, even if L2Os can outperform designed optimizers, they will still have two flaws: they will not be provably convergent, and they might fail totally out of distribution (a different dataset, a different type of object being optimized -- what we call optimizees in this paper -- different learning horizons, etc.). A guard addresses those shortcomings.
|
| 4 |
+
|
| 5 |
+
{#fig:lgl2o width="\\columnwidth"}
|
| 6 |
+
|
| 7 |
+
Our main contribution is a guard which takes in as input any blackbox L2O and a provably convergent optimizer and blends them to get the best of both. We prove that our guard keeps the convergence guarantee of the designed optimizer. Then we show in practice that our guard has the desired behaviour, that is, it uses the L2O when the L2O works best (in distribution or in settings where the L2O generalizes well) but correctly switches to the designed optimizer when the L2O underperforms.
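Purely as an illustration of the general idea, and not the paper's algorithm (the actual guarding rule is given in Section [3](#sec:lossguard){reference-type="ref" reference="sec:lossguard"}), a loss-based guard can be pictured as follows; `l2o_update`, `sgd_update` and `loss_fn` are hypothetical callables.

```python
def guarded_step(params, loss_fn, l2o_update, sgd_update):
    # Propose one update with the learned optimizer and one with the
    # provably convergent optimizer; both can be evaluated in parallel.
    cand_l2o = l2o_update(params)
    cand_sgd = sgd_update(params)
    # Keep the L2O candidate only if it does not do worse than SGD,
    # so convergence is inherited from SGD whenever the L2O misbehaves.
    return cand_l2o if loss_fn(cand_l2o) <= loss_fn(cand_sgd) else cand_sgd
```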
|
| 8 |
+
|
| 9 |
+
In our experiments, we use [@andrychowicz_learning_2016]'s L2O that we train such that it beats other optimizers in the first $\sim$1000 steps, but our contribution, the guard, is independent of the L2O: it can take any L2O as input, and as L2Os get better, so will LGL2O. Thus the goal of the paper is not to show that the combination is better than any other existing optimizer, but rather to show that the guard makes the right decisions and that it preserves the advantages of both its input L2O and the provably convergent designed optimizer. We show this in our experiments by demonstrating that LGL2O performs as well as L2O (or better) when L2O outperforms SGD, and as well as SGD (or better) when SGD outperforms L2O. All unguarded L2O approaches currently work only for $\sim$1000 optimization steps in practice when optimizing neural networks. In contrast, we show successful optimization up to millions of steps.
|
| 10 |
+
|
| 11 |
+
The main reasons we developed a new class of guards, even though a class of guards [@heaton_safeguarded_2020] already exists, are:
|
| 12 |
+
|
| 13 |
+
- Our guard is conceptually simpler.
|
| 14 |
+
|
| 15 |
+
- Our guard requires fewer hyperparameters.
|
| 16 |
+
|
| 17 |
+
- Our guard requires fewer SGD calls, and those can be done in parallel rather than sequentially.
|
| 18 |
+
|
| 19 |
+
- In practice our guard converges better for neural networks.
|
| 20 |
+
|
| 21 |
+
The first three points are detailed in section [3](#sec:lossguard){reference-type="ref" reference="sec:lossguard"} while the last point is detailed in section [4](#sec:experiments){reference-type="ref" reference="sec:experiments"}.
|
2202.08205/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2021-11-22T02:15:48.946Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.81 Safari/537.36" version="15.7.4" etag="ChSRbW-RKX2P0KLeynkH" type="onedrive"><diagram id="rx-b8OC9H6gtMtvbXYDZ">7V1bc5tKEv41KttbFRdz4/IYy/Huw+ZsqvKw5zy5MCCJRBIqwBv7/Pqd4SLBTIOwPINkRamkIg0wgv66e7q/6RkmZLp6+WfqbxZfkzBaTrAVvkzI/QRj7DKX/ydaXssWZLle2TJP47Bq2zV8j/+O6hOr1uc4jLLWiXmSLPN4024MkvU6CvJWm5+mya/2abNk2f7VjT+vftHaNXwP/GWknPbfOMwXZauLnV37v6J4vqh/GdnV8638+uSqi2zhh8mvxm+RLxMyTZMkLz+tXqbRUkivlkt5Qw8dR7c3lkbrfMgFuLzgf/7yuXq26r7y1/ph52nyvKlOi9I8eoFE7D8tZYntbgFtH4yrRJSsojx95adUHX2qpFYpA3bqLn7tRMs1xC4bFw25EpvdsgrVCtH5tvvdQ/MP1XPDMiBjyAD3ysDxHOWZcW0UBh6Z7n9k/sTrMBLnWxNy92sR59H3jR+Io7+4WfO2Rb7i/d8j/nGWrPPKTBHj30M/WxTXFgfj5XKaLJO06JiEfuTOAt6e5WnyM2ocsQM3eprxIwn/oTgXCkHEj3cKvilg0itg7KrytZEqX2xZ75cuA6TLJU79PJ+wLxPn7vrpZuLcP/JP8QRPf/DP/LSv//72SK+jZutNeZ2CDZdD3gbAX8bzNf8ccHlEXJx3Qlox91ifqwOrOAzF5XdplMV/V6oqhLtJ4nVePCy7m7B70ddznmQVmApM62QdSZjWTZIOiO8NcK3ijx40P6GWy0AEqegSC0L3/eDaRzSd2SyyA9B0Qsd7KqRrwHS8tn+2VPeMbUDYyNYgbafDlDZpspHspwgCvoTz6FpYEr5ePMY3FwPqwNRqYUrZMEh12I/bgWiWBgLQhjNEHMIfFwRhBBFtu0AVQeQaQtDbO7z5wPDWsE1ysc0+ZNu26VHVNh1DyNY/DUF7JbtbNp2lfsAb+afoZcM/8LNaJ1V/xQnZ80oc+sk759/idfm/SIc4jPzAH0JhxEVVHz8bfcBq8qZBFoZZxlSFXQOcNm4Pn2gbwDdtFQOIUh2Iog5Ewyzn8i5GzR2SqMLLmYLgTAs0rB2c8V44xe3LulOcm8crnsFzQcoD+cdGG1mohnebz44LOJTRC5Eurjgg/FcLyIUXZtdiyC3aBLK8Z0soRTUGd3nnjwKEI+NA8Lg4DGAVonX4WbBRuwdvBf8NQYr2b9yQonRdtGCLbMVbM1AYkK2aYtt3+OFhK+IorImut+dgLlE5CwZIk/VIs+r9mximm8l6O+Vw3HYPWfKcBlF1UZPckvohVm83uZ/Oo1zppkB2+7zDwIb4lGrclIbNviAJ3/UxBJdoCaYCSAtkVI+3o8RLENOjkwzoEpxi19sjABqKQrS8zCAM+tlLytheDAjDKgaeqwGDNxEy6AAMRqEv+yWMdrTvloZRRQyyMLXPeJeMIRrGoIxnsxnu4LnsJ5vZhmRsSzJGDkB1IVNUFwKYEbuI1MT/mv3Km3Vah4AdWcCeKl/bUcWrJR4DWItSuuhcpUvUcNeUdDFEHHRHu8HSz7I46Jdi9BLnf1bHxOe/BBi39ZH7lwqb4str/WXNb/zP5pfGVeLr7rLiW33dZH80XD9QEXtWkquGnjKQbFpxd7hCiAKKnqAZEeeWers/TntaFSGpx6FBNHLxrdNWLEfqqiOQ5mj7r43TqvCy+wksWYNdq//WyLsvqMWy0/Pyrg/NBDDEt/yehuAdzRBkkJk0Yz5c9ZWezCj++/W4ekRtaox/RzWuI7CmGpfh8xHUGHtY9lVyyjpUj7Gix8pooI8LGVJOc4aq4wGqw46lOoz0hQIHekNVi2S/qlGJTBcoHSVEJ8rkgAsw0kChW58KDA7RjXNVxxAp9Yic9QDVg6ZEOoB6erOrM+62mi6K9kvXnIsiaO+QNNQvUSyrgMnRTWXCysIGBfjzJ+b7dQfTAc7OVJERVGVUsj3ENNszoEZPg3RVtgcBJcTG6B6oBOhIju+Q2HC4s9yX5eJ+ukf1clQtRNbjUT85shuUSenBgZ5ChHtkkD8dIe+V1gpUExO60l5yIjTmYfr5Zl30sCFd3E61Szi9XRU9GHADIzsBmLtyyKDnMmTIC23UqnlTAwbBp2FYI9OijjpgkP7FUEhi7S3bkIkqrtRD7S6Gk6RSR1jqSKOJ/pbUUu2YWlrUTy2NpkWCafK6mSZFF4ZzTXu0U6NSjUw1jePsiSUtkQLGelO0CBmZaRpNonIJjq2alTGZDqCasoW/ER+T9MtyGW8ykfBvojTmPybohvuobP22a9on+Cc/+DkvoPrPc76M1zXd0K71OQ14WLsOEFOVuoJqfXWwDWRAedS449JuKPqrNRJpHZcoMC7Z/ePSeOmwrA/IwbeutfuDDq0p9uw39aspVSa2pN66M19oWd4JqTAyo8LspFVYWqWJbYmGGayzsvLIHZlS0ipq06akJmjH04//AdqRWpMTUVJqua0MQCJ+GL517J13rJ/v7XM88hNhNkyHD1AzOoAF/HjxK8WScQJbYpiKXinIrV2vJni64P/QDcSwnfwEmqVOoD083DFHF2BSCoddgK8zFNFSvN8E6nQje14dN9/QYhxSugxtcmFK1gZZrd1Its0DTiT0qp3siVYjKPsADY+1lLFXvh+NI9UA7ur8AqLaObVU51i1dlStrZRnSwcHOeqcvmuM9qS/STnYiBV2VOXo/kjCiLfMIj9/TsUnHugsHsWSXvH7jz/6ZxYPWEjnp0H1lcmrQc2DINUdjLHMnqrUW1mNtZO57a+E0NZP2aZ4Pktpsq3Wrmsjg2I4/O+Z7TWHisomVagEXDRpslxyURY7g/C7j5O1Iu6Tj/2B4jmdsT93/HJZK7DHnbEdnqhKtHyez88CJZ0ljpS2swZgrKGGEGIqR3Efp1GQF4a1irLML+xtIyK+9XlAJ+1d8r7qVAaYk2cKrAGLAo1sEwHIWP+6eeq6cvIETL0BMReS50oOku0ApuL8UqBao5opEDvWSjXm8cGqMQUmVaEeuNyIMaU4FLd70pcMMZWCkaNIEblHV+cbJlYX2HLhDbAFhrGwkQF0RlmCyc5jjwbmtDl4YJ8cUxWYTM33weT0apudXp1betqh4vaIiREzsWIMHK2o4/aPV/xLg5k/xhgG1I+4e14VYGwMs6VyWJl4Gzpq2ay/H41jloGio6448gOszQFKrdmeneOMKZOn7MRz6Mp9RJWJCVObr3TUgw9dikNY/9Idar/v/HqvKF21JAxcIbidkg6vzmZOWmfaTAiVNwNCSK2CNTUtzcBNvK7XArAL
XmDRsjyjZKnRDuT0tLwpA6qjEZuYFlYWftj93w3XfWBp7EAqYqb2M7UBoqpyimfkEHWC5dTByBYtNV82Zl+40xsuJsUm60GYiCH1jLDT6Ryp5yneUWXwTZHCNlD2c0HvPegB+10aAw9kqspA5AIXvB+/vCZxxECk64VsWRqcURCidWrTkSNHYD2VqTDfhliy6qUlF7wG4gWVsBoDrOs1beJ9BhfABlV311WNY8Clch/iNTyT+u0w6ySMHutpAXYXPwo/ye67MNQ0Wa3gr0PKUoUGBgI8sB6w3rDiXWKGlivR6uWFH9UsTO8aZjuyHwM29DNV9+SojEUx5FR2kQXFK3geV5G/vhYg/qPwb9P4sThrGsYrIfqbj2goTDKU2gCahlLvIdAyFB1iV2mH0ufUghezAZnQYvKZi9n6qL5IWjnM1A3bEN3WL2sXssoWVHoLChmdiZCBRVMmhQyt5Xngfy+bRHbQZ9Ls1ohkpwMtnhFYXUpdAaAku8IDX/6mBSg1ff8ZiXvbvOYLUTxuBUUlywUyqc4Ft10hUG0EFrpowQxI4S8rfvcFv45CSDsqZqayQqcviW+8svCjZi6GE3r1jaPbirExMpeu16pLb5S8gNcBnvyyDWja1Rh4KlFwiRdhnCQ+B3kqSqb8o6uSAwo+57HL15sqMreGo3+LUixz2q7Ux+CqTKmYTN4Z21AN3cDdrLvvWu9LnVygukNW4PH2EGyYQrESqfuVtsBrrg98ZfiegFmq3d1tqN6MmAHmAElLXA7yLio9o827WGKHppaHEdWVp1cd3ire3bOhA38m7LW2nm1hZ90i1tiUilFjG2DZPHSwrPa6Dte6dWljT8ADl0DZrndryfXk0oMNcmT7nQX/miZJ3jydW/LiK0+sxRn/Bw==</diagram></mxfile>
|
2202.08205/main_diagram/main_diagram.pdf
ADDED
|
Binary file (86.7 kB). View file
|
|
|
2202.08205/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,117 @@
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Retrosynthesis prediction [@corey1969computer; @corey1991logic] plays a crucial role in synthesis planning and drug discovery, which aims to infer possible reactants for synthesizing a target molecule. This problem is quite challenging due to the vast search space, multiple theoretically correct synthetic paths, and incomplete understanding of the reaction mechanism, thus requiring considerable expertise and experience. Fortunately, with the rapid accumulation of chemical data, machine learning is promising to solve this problem [@coley2018machine; @segler2018planning]. In this paper, we focus on the single-step version: predicting the reactants of a chemical reaction from the given product.
|
| 4 |
+
|
| 5 |
+
Common deep-learning-based retrosynthesis works can be divided into template-based (TB) [@coley2017computer; @segler2017neural; @dai2019retrosynthesis; @chen2021deep] and template-free (TF) [@liu2017retrosynthetic; @karpov2019transformer; @sacha2021molecule] methods. Generally, TB methods achieve high accuracy by leveraging reaction templates, which encode the molecular changes during the reaction. However, the use of templates brings some shortcomings, such as high computation cost and incomplete rule coverage, limiting the scalability. To improve the scalability, a class of chemically inspired TF methods [@shi2020graph; @NEURIPS2020_819f46e5] (see Fig. [1](#fig:global_structure){reference-type="ref" reference="fig:global_structure"}) has achieved dramatic success by decomposing retrosynthesis into two subproblems: i) *center identification* and ii) *synthon completion*. Center identification increases the model scalability by breaking down the target molecule into virtual synthons without utilizing templates. Synthon completion simplifies reactant generation by taking synthons as potential starting molecules, i.e., predicting residual molecules and attaching them to synthons to get reactants. Although various TF methods have been proposed, the top-$k$ retrosynthesis accuracy remains poor. Can we find a more accurate way to predict potential reactants while preserving scalability?
|
| 6 |
+
|
| 7 |
+
To address the aforementioned problem, we suggest combining the advantages of TB and TF approaches and propose a novel framework, namely SemiRetro. Specifically, we break a full-template into several simpler semi-templates and embed them into the two-step TF framework. As many semi-templates are duplicated across reactions, the template redundancy can be reduced while the essential chemical knowledge is still preserved to facilitate synthon completion. We also propose a novel self-correcting module to improve the semi-template classification. Moreover, we introduce a directed relational graph attention (DRGAT) layer to extract more expressive molecular features to improve center identification accuracy. Finally, we combine the center identification and synthon completion modules in a unified framework to accomplish retrosynthesis predictions.
|
| 8 |
+
|
| 9 |
+
We evaluate the effectiveness of SemiRetro on the benchmark data set USPTO-50k, and compare it with recent state-of-the-art TB and TF methods. We show that SemiRetro significantly outperforms these methods. In terms of scalability, SemiRetro covers 98.9% of the data using 150 semi-templates, while the previous template-based GLN requires 11,647 templates to cover 93.3% of the data. In terms of top-1 accuracy, SemiRetro exceeds the template-free G2G by 4.8% (class known) and 6.0% (class unknown). Owing to the semi-template, SemiRetro is more interpretable than template-free G2G and RetroXpert in synthon completion. Moreover, SemiRetro trains at least 6 times faster than G2G, RetroXpert, and GLN. All these results show that the proposed SemiRetro boosts the scalability and accuracy of deep retrosynthesis prediction.
|
| 10 |
+
|
| 11 |
+
# Method
|
| 12 |
+
|
| 13 |
+
Center identification plays a vital role in two-step retrosynthesis because errors caused by this step directly lead to final failures. Previous works have limitations, e.g., RetroXpert [@NEURIPS2020_819f46e5] provides incomplete predictions without considering atom centers, G2G may leak the edge direction information [@shi2020graph], and GraphRetro [@somnath2021learning] provides sub-optimal top-$k$ accuracy. How to obtain comprehensive and accurate center identification results is still worth exploring.
|
| 14 |
+
|
| 15 |
+
**Reaction centers** We consider both atom centers and bond centers in the product molecule. As shown in Fig. [3](#fig:center_definition){reference-type="ref" reference="fig:center_definition"}, from the product to its corresponding reactants, either some atoms add residuals by dehydrogenation without breaking the product structure (case 1), or some bonds are broken to allow new residues to attach (case 2). Both these atoms and bonds are called reaction centers.
|
| 16 |
+
|
| 17 |
+
<figure id="fig:center_definition" data-latex-placement="h">
|
| 18 |
+
<embed src="figs/reaction_center.pdf" style="width:3in" />
|
| 19 |
+
<figcaption> Reaction centers. Products, reactants, and residuals are circled in blue, green, and red, respectively. We label atoms in reaction centers with solid circles.</figcaption>
|
| 20 |
+
</figure>
|
| 21 |
+
|
| 22 |
+
**Directed relational GAT** Commonly used graph neural networks [@defferrard2016convolutional; @kipf2016semi; @velivckovic2017graph] mainly model edges as binary (present or absent), ignoring edge direction and multiple edge types, and thus fail to capture expressive molecular features. In molecules, different bonds represent different interatomic interactions, resulting in a multi-relational graph. Meanwhile, atoms at the ends of the same bond may gain or lose electrons differently, leading to directionality. Considering these factors, we propose a directed relational graph attention (DRGAT) layer based on the general information propagation framework, as shown in Fig. [2](#fig:DRGAT){reference-type="ref" reference="fig:DRGAT"}. During message passing, DRGAT extracts source and destination node features via independent MLPs to account for bond direction and uses a multi-head edge-controlled attention mechanism to handle the multi-relational properties. We add shortcut connections from the input to the output in each layer and concatenate the hidden representations of all layers to form the final node representation.
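A single-head, edge-list sketch of the kind of layer described above; the real DRGAT is multi-head, stacks several layers, and concatenates the hidden representations of all layers, and the dimensions below are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DRGATLayer(nn.Module):
    """Directed relational GAT layer (single attention head for brevity)."""

    def __init__(self, dim, edge_dim):
        super().__init__()
        self.src = nn.Linear(dim, dim)     # source-atom transform
        self.dst = nn.Linear(dim, dim)     # destination-atom transform
        self.att = nn.Linear(2 * dim + edge_dim, 1)  # edge-controlled attention score
        self.out = nn.Linear(dim, dim)

    def forward(self, h, edge_index, edge_attr):
        # h: (N, dim) atom features; edge_index: (2, E) directed (src, dst) pairs;
        # edge_attr: (E, edge_dim) bond-type features.
        src, dst = edge_index
        m = self.src(h[src])  # messages from source atoms
        score = self.att(torch.cat([self.dst(h[dst]), m, edge_attr], dim=-1))
        # Softmax over the incoming edges of every destination atom.
        alpha = torch.zeros_like(score)
        for v in dst.unique():
            mask = dst == v
            alpha[mask] = F.softmax(score[mask], dim=0)
        agg = torch.zeros_like(h).index_add_(0, dst, alpha * m)
        return h + self.out(agg)  # shortcut connection from input to output
```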
|
| 23 |
+
|
| 24 |
+
**Labeling and learning reaction centers** We use the same labeling algorithm as G2G to identify ground-truth reaction centers, where the core idea is comparing each pair of atoms in the product $\mathcal{P}$ with the corresponding pair in a reactant $\mathcal{R}_i$. We denote the atom center as $c_i \in \{0,1\}$ and bond center as $c_{i,j} \in \{0,1\}$ in the product $\mathcal{P}$. During the learning process, atom features $\{ \boldsymbol{h}_i \}_{i=1}^{|\mathcal{P}|}$ are learned from the product $\mathcal{P}$ by applying stacked DRGAT layers, and the input bond features are $\{\boldsymbol{e}_{i,j}| a_{i,j}=1\}$. Then, we get the representations of atom $i$ and bond $(i,j)$ as
|
| 25 |
+
|
| 26 |
+
$$\begin{equation}
|
| 27 |
+
\label{eq:center_representation}
|
| 28 |
+
\begin{cases}
|
| 29 |
+
\boldsymbol{\hat{h}}_i = \boldsymbol{h}_i || \mathrm{Mean}(\{\boldsymbol{h}_s\}_{s=1}^{|\mathcal{P}|}) \quad \quad \quad \quad \textcolor{gray}{\text{// atom}} \\
|
| 30 |
+
\boldsymbol{\hat{h}}_{i,j} = \boldsymbol{e}_{ij} || \boldsymbol{h}_i || \boldsymbol{h}_j || \mathrm{Mean}(\{\boldsymbol{h}_s\}_{s=1}^{|\mathcal{P}|}), \textcolor{gray}{\text{// bond}}
|
| 31 |
+
\end{cases}
|
| 32 |
+
\end{equation}$$
|
| 33 |
+
|
| 34 |
+
where $\text{Mean}$ and $||$ indicate the average pooling and concatenation operations. Further, we predict the atom center probability $p_i$ and bond center probability $p_{i,j}$ via MLPs:
|
| 35 |
+
|
| 36 |
+
$$\begin{align}
|
| 37 |
+
\label{eq:center_predict}
|
| 38 |
+
p_i = \mathrm{MLP}_{6}(\boldsymbol{\hat{h}}_i) \quad \text{and} \quad
|
| 39 |
+
p_{i,j} = \mathrm{MLP}_{7}(\boldsymbol{\hat{h}}_{i,j}).
|
| 40 |
+
\end{align}$$
|
| 41 |
+
|
| 42 |
+
Finally, center identification can be reduced to a binary classification, whose loss function is:
|
| 43 |
+
|
| 44 |
+
$$\begin{equation}
|
| 45 |
+
\begin{aligned}
|
| 46 |
+
\label{eq:center_loss}
|
| 47 |
+
\mathcal{L}_1 = -\sum_{\mathcal{P}}( \sum_i { c_i \log{p_i} + (1-c_i) \log{(1-p_i)} } + \quad \textcolor{gray}{\text{// atom}} \\
|
| 48 |
+
\sum_{i,j} { c_{i,j} \log{p_{i,j}} + (1-c_{i,j}) \log{(1-p_{i,j})} } ). \quad \textcolor{gray}{\text{// bond}}
|
| 49 |
+
\end{aligned}
|
| 50 |
+
\end{equation}$$
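A minimal PyTorch sketch of the atom/bond center scores and the loss above; `mlp_atom` and `mlp_bond` are assumed to be small MLPs ending in a single logit, and the dense-tensor layout is an assumption for readability.

```python
import torch
import torch.nn.functional as F

def center_identification_loss(h, e, bonds, atom_labels, bond_labels, mlp_atom, mlp_bond):
    """h: (N, d) atom features from stacked DRGAT layers; e: (B, d_e) bond features;
    bonds: (B, 2) atom index pairs; atom_labels: (N,) and bond_labels: (B,) in {0, 1}."""
    g = h.mean(dim=0, keepdim=True)                     # Mean({h_s})
    h_atom = torch.cat([h, g.expand_as(h)], dim=-1)     # h_i || mean pooling
    i, j = bonds[:, 0], bonds[:, 1]
    h_bond = torch.cat([e, h[i], h[j], g.expand(len(bonds), -1)], dim=-1)
    p_atom = mlp_atom(h_atom).squeeze(-1)               # atom-center logits
    p_bond = mlp_bond(h_bond).squeeze(-1)               # bond-center logits
    return (F.binary_cross_entropy_with_logits(p_atom, atom_labels.float())
            + F.binary_cross_entropy_with_logits(p_bond, bond_labels.float()))
```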
|
| 51 |
+
|
| 52 |
+
In summary, we propose a directed relational graph attention (DRGAT) layer to learn expressive atom and bond features for accurate center identification. We consider both atom centers and bond centers to provide comprehensive results. In Section [5.2](#sec:result_center_identification){reference-type="ref" reference="sec:result_center_identification"}, we show that our method can achieve state-of-the-art accuracy.
|
| 53 |
+
|
| 54 |
+
Synthon completion is the main bottleneck of two-step TF retrosynthesis, which is responsible for predicting and attaching residuals for each synthon. This task is challenging because the residual structures could be complex to predict, attaching residuals into synthons may violate chemical rules, and various residuals may agree with the same synthon. Because of these complexities, previous synthon completion approaches are usually inaccurate and cumbersome. Introducing the necessary chemical knowledge to improve interpretability and accuracy can be a promising solution. However, how to provide attractive scalability and training efficiency is a new challenge.
|
| 55 |
+
|
| 56 |
+
**Semi-templates** The semi-template used in this paper is the partial reaction pattern of each synthon (see Fig. [4](#fig:semi_tpl){reference-type="ref" reference="fig:semi_tpl"}), rather than the whole reaction pattern used in GLN [@dai2019retrosynthesis] and LocalRetro [@chen2021deep]. Different from GraphRetro [@somnath2021learning], our semi-template encodes the chemical transformation instead of the residuals. Similar to the work on forward reaction prediction [@segler2016modelling], a semi-template splits a binary reaction into two half reactions. Notably, we use a dummy atom $*$ to represent possible synthon atoms that match the semi-template, significantly reducing redundancy. We extract a semi-template from each synthon-reactant pair by removing reactant atoms that have exact matches in the synthon. There are two interesting observations: 1) the top-150 semi-templates cover 98.9% of the samples; 2) reactants can be deterministically generated from semi-templates and synthons (introduced later). Based on these observations, synthon completion can be further simplified as a classification problem. In other words, we need to predict the semi-template type for each synthon, and the total number of classes is 150+1. The first 150 classes are the top-150 semi-templates, and the 151st class indicates uncovered classes.
|
| 57 |
+
|
| 58 |
+
<figure id="fig:semi_tpl" data-latex-placement="t">
|
| 59 |
+
<embed src="figs/semi_tpl.pdf" style="width:6.5in" />
|
| 60 |
+
<figcaption> Predicting semi-template for synthon completion. (a) A full-template can be decomposed into several simpler semi-templates based on synthons. (b) We propose the self-correcting module for more accurate semi-template prediction. </figcaption>
|
| 61 |
+
</figure>
|
| 62 |
+
|
| 63 |
+
**Learning semi-templates** For each synthon $\mathcal{S}_j$, denote its semi-template label as $t_j, 1\leq t_j \leq 151$, and the predicted reaction atom set as $\mathcal{C}$. Assume that $\mathcal{\bar{S}}_j$ is the dual synthon of $\mathcal{S}_j$, i.e., $\mathcal{\bar{S}}_j$ and $\mathcal{S}_j$ come from the same product $\mathcal{P}$. We use stacked DRGATs to extract atom features $\{ \boldsymbol{h}_i \}_{i=1}^{|\mathcal{S}_j|}$, $\{ \boldsymbol{\bar{h}}_i \}_{i=1}^{|\mathcal{\bar{S}}_j|}$ and $\{ \boldsymbol{\tilde{h}}_i \}_{i=1}^{|\mathcal{P}|}$. The semi-template representation of $\mathcal{S}_j$ is:
|
| 64 |
+
|
| 65 |
+
$$\begin{align}
|
| 66 |
+
\label{eq:semi_tpl_representation}
|
| 67 |
+
\boldsymbol{\hat{h}}_j = \text{Mean}(\{ \boldsymbol{h}_i \}_{i \in \mathcal{C}}) || \text{Mean}(\{ \boldsymbol{h}_i \}_{i=1}^{|\mathcal{S}_j|}) || \\ \text{Mean}(\{ \boldsymbol{\bar{h}}_i \}_{i=1}^{|\mathcal{\bar{S}}_j|}) || \text{Mean}(\{ \boldsymbol{\tilde{h}}_i \}_{i=1}^{|\mathcal{P}|}).
|
| 68 |
+
\end{align}$$
|
| 69 |
+
|
| 70 |
+
Based on $\boldsymbol{\hat{h}}_j$, we predict semi-template $\hat{t}_j$ as:
|
| 71 |
+
|
| 72 |
+
$$\begin{align}
|
| 73 |
+
\label{eq:semi_tpl_pred}
|
| 74 |
+
\hat{t} _j= \mathop{\mathrm{arg\,max}}_{1\leq c \leq 151}{\tilde{p}_{j,c}}; \quad
|
| 75 |
+
\boldsymbol{\tilde{p}}_j = \text{Softmax}( \mathrm{MLP}_8 (\boldsymbol{\hat{h}}_j) ).
|
| 76 |
+
\end{align}$$
|
| 77 |
+
|
| 78 |
+
Denoting $\mathbbm{1}_{\{c\}}(\cdot)$ as the indicator function, the cross-entropy loss used for training is:
|
| 79 |
+
|
| 80 |
+
$$\begin{align}
|
| 81 |
+
\label{eq:semi_tpl_loss}
|
| 82 |
+
\mathcal{L}_2 = - \sum_{j \in \{1,2,\cdots,|\mathcal{S}_j|\}} \sum_{1\leq c \leq 151} \mathbbm{1}_{\{c\}}(t_j) \log(\tilde{p}_{j,c}).
|
| 83 |
+
\end{align}$$
|
| 84 |
+
|
| 85 |
+
**Correcting semi-templates** Considering the pairwise nature of synthons, i.e., dual synthons may contain complementary information that can correct each other's prediction, we propose a self-correcting module to refine the joint prediction results. For $\mathcal{S}_j$, we construct its features as:
|
| 86 |
+
|
| 87 |
+
$$\begin{align}
|
| 88 |
+
\label{eq:calibrating_representation}
|
| 89 |
+
\boldsymbol{z}_{j} = \boldsymbol{\hat{h}}_{j} || \Phi_{\theta}(\hat{t}_{j}) \quad \text{and} \quad \boldsymbol{\bar{z}}_{j} = \boldsymbol{\hat{\bar{h}}}_{j} || \Phi_{\theta}(\hat{\bar{t}}_{j}),
|
| 91 |
+
\end{align}$$
|
| 92 |
+
|
| 93 |
+
where $\Phi_{\theta}(\hat{t}_{j})$ is the learnable embedding of the previously predicted class $\hat{t}_{j}$. Then, we use a multi-layer transformer to capture the interactions between $\boldsymbol{z}_{j}$ and $\boldsymbol{\bar{z}}_{j}$, and get the refined prediction $t'_{j}$:
|
| 94 |
+
|
| 95 |
+
$$\begin{equation}
|
| 96 |
+
\label{eq:calibrating_representation_updated}
|
| 97 |
+
\begin{cases}
|
| 98 |
+
[\boldsymbol{\hat{z}}_{j}, \boldsymbol{\hat{\bar{z}}}_{j}] = \text{Transformer}([\boldsymbol{z}_{j}, \boldsymbol{\bar{z}}_{j}]) \\
|
| 99 |
+
|
| 100 |
+
\boldsymbol{p}_j = \text{Softmax}( \mathrm{MLP}_9 (\boldsymbol{\hat{z}}_j) )\\
|
| 101 |
+
|
| 102 |
+
t'_j= \mathop{\mathrm{arg\,max}}_{1\leq c \leq 151}{p_{j,c}}
|
| 103 |
+
\end{cases}
|
| 104 |
+
\end{equation}$$
|
| 105 |
+
|
| 106 |
+
The correcting loss function is:
|
| 107 |
+
|
| 108 |
+
$$\begin{align}
|
| 109 |
+
\label{eq:correcting_semi_tpl_loss}
|
| 110 |
+
\mathcal{L}_3 = - \sum_{j \in \{1,2,\cdots,|\mathcal{S}_j|\}} \sum_{1\leq c \leq 151} \mathbbm{1}_{\{c\}}(t_j) \log(p_{j,c}).
|
| 111 |
+
\end{align}$$
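A sketch of how the self-correcting module could be wired up; the feature dimension, the number of transformer layers, and the single linear layer standing in for $\mathrm{MLP}_9$ are assumptions rather than the paper's exact hyper-parameters.

```python
import torch
import torch.nn as nn

class SelfCorrecting(nn.Module):
    """Dual synthons exchange information through a small transformer
    before the semi-template class is re-predicted."""

    def __init__(self, feat_dim, n_classes=151, emb_dim=64, d_model=128,
                 n_layers=2, n_heads=4):
        super().__init__()
        self.cls_emb = nn.Embedding(n_classes, emb_dim)   # Phi_theta
        self.proj = nn.Linear(feat_dim + emb_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)         # stands in for MLP_9

    def forward(self, h_pair, t_hat_pair):
        # h_pair: (batch, 2, feat_dim) features of the two dual synthons;
        # t_hat_pair: (batch, 2) first-round semi-template predictions.
        z = torch.cat([h_pair, self.cls_emb(t_hat_pair)], dim=-1)  # z_j = h_j || Phi(t_hat_j)
        z_hat = self.encoder(self.proj(z))                         # pairwise interaction
        logits = self.head(z_hat)                                  # (batch, 2, n_classes)
        return logits.softmax(dim=-1), logits.argmax(dim=-1)       # p_j and t'_j
```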
|
| 112 |
+
|
| 113 |
+
In addition, we filter the predicted pairs based on the prior distribution of the training set. If the prior probability of the predicted pair is zero, we discard the prediction.
|
| 114 |
+
|
| 115 |
+
**Applying semi-templates** Once reaction centers, synthons, and corresponding semi-templates are known, we can deduce reactants with almost $100\%$ accuracy. This is not a theoretical claim; we provide a practical residual attachment algorithm in the appendix.
|
| 116 |
+
|
| 117 |
+
In summary, we suggest using semi-templates to improve synthon completion performance with the help of an error-correcting mechanism. Firstly, reducing this complex task to a classification problem helps promote training efficiency and accuracy. Secondly, the high coverage of semi-templates significantly enhances the scalability of TB methods. Thirdly, the deterministic residual attachment algorithm improves interpretability. Fourthly, the proposed self-correcting module can further improve the prediction accuracy. In Section [5.3](#sec:synthon_completion){reference-type="ref" reference="sec:synthon_completion"}, we will show the effectiveness of the proposed method.
|
2203.00048/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
| 1 |
+
<mxfile host="Electron" modified="2021-11-17T04:48:15.691Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/15.4.0 Chrome/91.0.4472.164 Electron/13.5.0 Safari/537.36" etag="gTRBQm1Rkdvh9i9NqfUK" version="15.4.0" type="device"><diagram id="H8VO79Oax6fUfhMO7ahf" name="Page-1">7Vtbk5owFP41Pu4OEEB47Lrb7c601c7uTLuPEaJmi8TGuGp/fYOEi4AalZtr8UFycsIl5zvXhA7oTVePFM4m34iLvI6muKsOuO9omqoqJv8LKOuQYtvdkDCm2BVMCeEZ/0WCqAjqArtovsXICPEYnm0THeL7yGFbNEgpWW6zjYi3fdcZHIs7Kgnh2YEeyrH9xC6bhFTLSHF/QXg8YfELi54pjJgFYT6BLlmmSOChA3qUEBaeTVc95AWTF81LOO7zjt74wSjymcyA7/f4CaAfzP7uav6nLxb803+/UYV45mwdvTFy+QSIpk98/ndHycJ3UXAdhbcIZRMyJj70vhIy40SVE98QY2shPrhghJMmbOqJXrTC7Ffq/DW41K0hWvcrceVNYx01fEbXv9KN1KigmQzbtKJxc0bJ71hYfJrvRtjzesQjdPOCwIXIGjkxZ6rHdCw0HPGe/NyK6Z6TBXXQvgkVGIV0jNg+PoGKYLZTdxCie0RkivhLcQaKPMjw+zYcoUD1OOZLBM9PhOyPwUH3CnFgIMvVi3BgaUNgmvXgQG0XDsLrvkNvIe7U0UyPv8DdkJ+MgxM8DUwmN6UIsgVFUT+/XcySw1ICnEDyywlm6HkGNxO45H5jGyQZMfU2R6FAd4rnHVGGVnvnM+qNzLPwT3qkmMvE2sc8k5SlN5WqRABqVUWlEVU8Q7EuVbP0/xa2VAsLLhQHxhXioMqISxoH7Yq4wGFP6/BsakjI72BiKGGErWc8Fara337mh23X5m/tpv2truamry0KmVMX2+wCaBbKJlHeW13X0gqs3io8vN+vwpvWAFHM5xRRQcwAA6ncnnfPc96KpLICu2xlFUMHBPOHjuGoZ+AIuhmYhU8qRmWQFj/GGeArMgM1oDGHrKFl6EaxMU8hy9jC1SFMRRFmjPLXNMgLEZ+B3MhykOPUAzlda5V/iJ57n4N4ClKxm0HkGzj3C4X+fMYBwc8HHvTL9xYHnf1ohEyn0Nm7XXuoKOX4Ec1qW94GrA/hRyow+qUH4HK23LBqtuX1ZnjHmNVTTfgxbqIC032pxVSJKlredM/xFHuQYrZuwmgbwa8wQ9scwQjisxQ9PCoy5o0nBVq9xvykIpya1uS2Z+mabBim6a3SZaC0FgjNlt/kBWq0SqCaRFz9glasxhWOBzP4VVVxMZW2RcpavRWXS1vh0ExZzTJbpVngEhauTol/ZdKj3SnWGUAwJIFQfrVMFghv/def/VH/Zug+uvr66ZnZg8VNHgcVJT3qSUJXzhV6nNrsTo8K5XEiCnRJFOhmo35Vl/Or11mu0vW2ZTig3nXI4wPbIltamUqXUOaS1lPQLdta76iHKbXVwwrdwP4oL8HSQ0Ktqjp2HY5CNlwIkdqYozBOcBRXXBzLuY7m87d6N4u2ozhWaS1FOuNrWS3Fbi0QGq52lp6ZF7t4w8y4eK3eJa/oPeV2MYkOmpjtIlOeGX5o51OGXfYuJZbvqt0wlSvfNZ85dA8LPdokzlWZZ3YpR+6RefVb2CR0uxJZAdMojribk5UtF2xdvaiA1rSoov1Hx60a5+1d/6Uy2R10qvFKRgkSij8BbI+EJLxdLnP5uAKKg40arB1vJl9KhtFL8r0pePgH</diagram></mxfile>
|
2203.00048/main_diagram/main_diagram.pdf
ADDED
|
Binary file (22.6 kB). View file
|
|
|
2203.00048/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,11 @@
| 1 |
+
# Method
|
| 2 |
+
|
| 3 |
+
Our goal is to learn explicit alignment between image and text features to facilitate multimodal interactions. We illustrate CODIS in Figure [1](#fig:framework){reference-type="ref" reference="fig:framework"} and propose a pseudo-code implementation in Algorithm [\[algo:frw-algo\]](#algo:frw-algo){reference-type="ref" reference="algo:frw-algo"}. It shares some similarities with self-supervised contrastive learning [@he2020momentum; @caron2020unsupervised]. We treat image and text modalities as two views and adopt a teacher-student distillation paradigm [@grill2020bootstrap; @caron2021emerging] to enforce unimodal and cross-modal alignment. To overcome the gap between multimodal distributions, we also learn a codebook, which serves as a bridge to help align features between different modalities. We organize the content of this section as follows.
|
| 4 |
+
|
| 5 |
+
In Section [\[sec: code\]](#sec: code){reference-type="ref" reference="sec: code"}, we present multimodal codebook learning, how it's optimized and how to leverage it to resolve distribution mismatch between multimodal inputs. In Section [\[sec: itc\]](#sec: itc){reference-type="ref" reference="sec: itc"}, we introduce how to achieve unimodal and cross-modal alignment under the teacher-student distillation learning formulation. Finally, we explain how our proposed two components integrate into the V&L framework in Section [\[sec:pretrain\]](#sec:pretrain){reference-type="ref" reference="sec:pretrain"}.
|
| 6 |
+
|
| 7 |
+
We simultaneously optimize the codebook and the student encoders within the framework in an end-to-end manner, employing the losses discussed in previous sections as follows, $$\begin{equation}
|
| 8 |
+
\Lcal_{\text{final}} = \Lcal_{\text{mlm}} + \Lcal_{\text{itm}} + \Lcal_{\text{ica}} + \Lcal_{\text{code}}
|
| 9 |
+
\end{equation}$$ among which the MLM and ITM losses have been widely used in many V&L methods, particularly "early-fusion" frameworks. The ICA loss is the main objective function for "late-fusion" V&L frameworks. CODIS combines the merits of both "early-fusion" and "late-fusion" approaches by explicitly learning alignment along with fusion.
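As a rough illustration, a training step that combines the four objectives could look like the sketch below. The loss callables and their argument lists are placeholders standing in for the MLM, ITM, intra-cross alignment, and codebook losses; none of this is the paper's actual code.

```python
# A minimal sketch of one end-to-end training step combining the four losses.
# `model`, `losses`, and the batch layout are assumptions for illustration.
import torch


def training_step(batch, model, losses, optimizer):
    """losses is a dict of callables: {'mlm': ..., 'itm': ..., 'ica': ..., 'code': ...}."""
    outputs = model(batch)  # joint forward pass over image and text
    total = (losses["mlm"](outputs, batch)
             + losses["itm"](outputs, batch)
             + losses["ica"](outputs, batch)
             + losses["code"](outputs, batch))
    optimizer.zero_grad()
    total.backward()
    optimizer.step()
    return total.detach()
```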
|
| 10 |
+
|
| 11 |
+
Intra-cross alignment ($\Lcal_{\text{ica}}$) loss described in Section [\[sec: itc\]](#sec: itc){reference-type="ref" reference="sec: itc"} can be viewed as an instance-to-instance alignment loss, similar to the one in [@li2021align]. The difference is that we consider both intra- and cross-modal alignment. We assume that a stronger unimodal representation can lay a solid foundation for cross-modal representation. Empirical evidence is provided in Section [2.4](#sec:ablation){reference-type="ref" reference="sec:ablation"}. The codebook loss ($\Lcal_{\text{code}}$) designed in Section [\[sec: code\]](#sec: code){reference-type="ref" reference="sec: code"} measures the distance between the transport plan and the similarity matrix. It contrasts features at the prototype level and can be interpreted as distance metric matching [@caron2018deep; @chen2020graph]. Combining these two helps avoid the prototype collapsing problem, as online prototype clustering requires careful tuning [@caron2020unsupervised]. Finally, the supervision signals for both the intra-cross alignment loss and the codebook loss require features from the momentum teacher, and we adopt a teacher-student distillation approach. This can be seen as a generalization of unimodal SSL to the multimodal setting, under the V&L framework.
|
2203.00725/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="Electron" modified="2021-10-05T02:07:16.437Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/15.4.0 Chrome/91.0.4472.164 Electron/13.5.0 Safari/537.36" etag="lKSOSmECgXUah0zRJPKz" version="15.4.0" type="device"><diagram id="Ax7lfOzb-_v3f2za0uHY" name="Page-1">7V1tc5s4EP41nrn7EEYviJePdZLe3VzTdpq76d1HYojNFIMP4zrprz/JRjZIcsAYsFyTDxkssMC7z7PaXa3ECN/OX35LvcXsIfGDaISA/zLCdyOEXNOi/1nD67bBstG2YZqG/rYJ7hsewx9B3gjy1lXoB8vShVmSRFm4KDdOkjgOJlmpzUvTZF2+7DmJynddeNNAaniceJHc+jX0s9m21SFg3/57EE5n/M4Q5GfmHr84b1jOPD9ZF5rw/QjfpkmSbY/mL7dBxGTH5bL93vsDZ3cPlgZxVucLD+OU3H+a4z+/LbL12rcen/6wb/JevnvRKv/B+cNmr1wCQey/Y4KknyaRt1yGkxEez7J5RBsgPaS3T1//oR+AQfjHf4vn7hgWwO7Ta/5Jfn6uXy+dBtkbD50/Y+CXVJf/6t+CZB7Q+9AL1nuFcX3NCrribWkQeVn4vaxwL8fNdNfd7g6fk5A+MQI5xDHMv8IR7oJyF8tklU6C/FtFBUkdAcMt/Dnlbi1XdZbfZCs06Sb0oCCSfdMGG0fgBEk4+frl9uNHCSxpsor9wM/Vv56FWfC48Cbs7JrahzJyDiLge5Bmwcubus3PIiIIn9OvoHuIFMq3wGE9l4R3rKRwC4x6CbMCoeinfzmD6PGeTuwDZ1PrLNyithoSlWw1tWLr7iscMJA0ZWu5I+IIHXXMSFPC2YcwDrxUO0oSk9SjJO6KkuR8g1wTKjenpFmTknyE0oSTEpXMxiOo0JEIqY45abcJtCLMCqhr3ZvSDAyS+UDIwE5DPIDqvjqGhFMDElFEo5mg2jh7y8U2xHkOX5hBb8Vac4pwEUEkWWvea1Hbbyn2JGPt1hAYRepj/jFcbu4bTu73jeMCyeIkDspSXGZp8i24TaIkpS1+8OytoqxgqmHRUO/MdoXXBQ0AocBYy60aHNinz0Ea0l8QpFVEPpWfu8j0ZIICw4L7QMQSKAYNW+izLl0pFg3sCt1hA/XLWJ55OMWKV2Gs7A7cAAMgWA9oDfyP2gir9C/4UFE5psCWMNuVgwFbcjBMp18Hg9++gM3bJH5O0jnVLgL38STx2ZG1QdsTO5qyo3GUTL5JKO49MrAE6RFFsA7egEHrgw0f7PrjerMoHhiuY7Y6mlRyHdb1HyHWmusmclviOhA66prrciZJ0wgfuzWTbp1F+FDOhmjJ4+6ycRwt1XQlWtMVE7sdumJT6KhruspZpi/Bh7/1I6tq0FWRFXVGVkuS1F2aLJJVpp+wgH1uYbWQU7pwy0bqWjZLL8smzDRguy3LButZNooJ77Vw2YJdsOzE9slZLk1dFUSceoTuzlWpk+BqSuiLmKXgPK0mtKMXoQUeIrdhnkvqSLQMXU/myxmuz8kyzMIk9iI5ebDJKoTxVIIp5WBWBmYaLMMf3tPmAgYSb5XRjjc1PZsEdhROYwZqihkWoo4ZkcOJF73LT8xD399kcHNbRW9DxiNyR1uewyjiiM+TumUa5I17C/MmSuubEGgJGXKFT2D3aUG4wdDFJTjBEmhOcIxbInjPKWwkxyLv35+/WMfE2LBQSTKqeh3HNYisY3FOuD06yR72AyVDePN74PkjZitFg/gYPEc377KMSoKaTB3FqprF61usNWY+lzNvwQ6T9H43B7ooZDDzmdFCUrNKnE/e5Nt0o4BPqywK46BFMdvYgIKYbVnMEKjELE44tFduJo/lt0n8PYlWh4bzh8RfUQVoiFoTyZVCPaMWy7mJo8fW5yTO3vRULs1h51SuLsy1tRrPTdsxHFtgbMNyP9OFYl8Q91vxh1tIBR2DzYvIDnHIVWKT6DVNRYCEJwKaYpPiHJf7MsUi9K6xWafO6dqwSerOyfBBXBNsqvBkC3iqi01iSjg3Sc+V0nWwWSgpC6KnZF0qJ2MN9MQsScMfFKTUqyrVmLXmBYgVZE2RWT1Q6xV4KwZqSwRJbcABedCvWaVzOGF+4FZYfuzchd1DeNtpq4DmOe3B2CpyHpeGfeweBNHxjoBZ7fB2bGwJGoyt3p6nKiqymgLOkYN41zTqQa4Fc8vXFh0cDBRfwTyj0K2NrrEO8NLTX6aTl12dL/1ly+mvYSjk4USlZbL1s0wWArs/sZDzhPBYNFJAmNnpemHY4LEdRl81TPVaQqzAExYjjBOwiXv22PgcxuCx6WwXxRyK3XBJJPPYHGu3eItTa2cXzVrYO951cyXnIa/EOuy6Kb6C+nDd7J/RdRPyeZgv2zuj6ybX7OtSpCCQDeOzz0vackXH3xnFlRdPgpt1SPEnzfN+8F6DNE7SuY4i3e2/UinSzsqo7GE67TApK4dIR68pCwKQOFpgsejqFJ8M9hwvDNNphyFX7b7ptcRJgSfkNqwsJKaEcyS6gl1j88gdGq4gXtDMGJq25Dpj55R4QXCHoN1Zhld+cOvoMAFbvYQJTo3U48WHCeDsYYJTJz3R9xaV+hC9pK7GW0+yIYrYhd0ly92KYU3Xeya1sGFbx95Pcy9GQ+dEjM3Eusb6KHKFjnrGjcpr3obFDA70zLOX68b6b8X23R3/Re0uHZDAx2BN/39J5l68P8kjaSrU97uelgt2SQGK/OrJFljvmE6mT784VBb059NfAkqHv7Jj1jdgz3SzXR3FvgTB4kW+9+gWj1yb35w+yPb++cl98/YH8maBLNexUMukroC4DsKR0wymgkydLd92VM7yAMkrhiSGskvVKyTdn81vxYAYfFTdrX/ahSjn81xd1TxCC9znnYS73b0KpAvFq3QyDljHh3KAjk8lKfmvt5Q8WNvctBqobAgsxTyD1au1Ve1MddVG4Dhk6/TkeL8YeCDcAcIRnq85G+FUyYOBcBdJOESsgXBVhOMrIM9GONWS6Usk3KEHyH+F3HUzgJdhPDD26hhronP7pKrKl0tk7DBEjiFA5sC4CsZh+8yMg3wtw0C5y6ccAiYaKFdBOcR3hDgf5VrPvNRwAR0lmqpAvOllnUuF9cPKdjd7N1XR/WOX/m/NeR5VNvInnRLaXsJfBAu3cp+wvTHxHcoVyeveYAvE2k1OcNdRkc5UTR51t3stOK42I5ei7y1nu4LzZlUaB6WpSyGFoCrLIkbTmjuhK+n9J52/Aem40vjrVTIyG6qYmK5hYqEv23Cd/epXgHpW+nE159ejdEIq6Fhb6UQ2EdCCyAV4U6pi9azw43Yivx6FK0x5awrvV8WK94X9ES8UL9q4AN/rIGqOcKf4ju1cH86Za3Gg4iVP16wgk0hLZTTQkR
xIflpl16wkadMfBORoX2WgO1SSYsvzcLP/MbhqZSEbVmrK6VVTSJUKFXRTfFPBzPM3/gcoOh/gKH3wcsb5yzT1FjPDYy7NEhnZOvnqveYOztjfFPezt9fSYyY+24AuPY68pyAa7+oXBY0p/J4W1IZtedpIkcTm7zbspZ4RIlVGbXAYCxEBV1XjF9jbFR117TAq3hwxqFil4savEK7ESucqPm6L++tRsUi+5m+JPjuLh/xdxypWmAODhiomsbBNHMyX/ZyscPoxTdiszP5y5sE8JH7Arvgf</diagram></mxfile>
|
2203.00725/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,97 @@
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Riding on the tremendous success of deep learning, automatic speech recognition (ASR) technology has seen rapid advances in recent years, and is now widely deployed in voice-based human-computer interaction. On the other hand, in far-field noisy environments, ASR performance degrades substantially [@haeb2020far]. Designing robust ASR systems in such conditions remains a technical challenge in real-world applications [@barker2015third; @vincent2017analysis; @barker2018fifth].
|
| 4 |
+
|
| 5 |
+
Modern ASR is mostly based on deep neural networks (DNNs). ASR systems consist of acoustic modeling and language-model-based decoding. Hybrid systems usually use a DNN to estimate the state of each time frame of an input signal, such as senone states, and hidden Markov models to decode the state information into final transcripts [@trentin2001survey; @hinton2012deep; @dahl2011context; @rabiner1986introduction]. Recently, end-to-end (E2E) models have been used to directly estimate a word transcript without HMM-based decoding, usually using connectionist temporal classification (CTC) or a recurrent neural network transducer (RNN-T) [@graves2014towards; @graves2006connectionist; @zhang2020transformer; @rao2017exploring]. Toolkits such as ESPnet provide platforms for E2E ASR, as well as strong benchmark models on various corpora [@watanabe2018espnet; @watanabe20212020]. However, on the widely used CHiME-4 corpus [@vincent2017analysis], E2E methods in ESPnet do not outperform Kaldi-based hybrid systems [@povey2011kaldi]. In this study, we focus on developing a robust acoustic model for the monaural ASR task of the CHiME-4 corpus, with modules that are primarily used in E2E ASR models.
|
| 6 |
+
|
| 7 |
+
The wide residual BLSTM network (WRBN) achieved the top-rank performance on the monaural speech recognition task in the original evaluation of the CHiME-4 challenge [@jahn2016wide]. Subsequently, adding utterance-wise dropout and iterative speaker adaptation led to a considerable improvement over the original WRBN in terms of word error rate (WER) [@wang2018utterance; @wang_filter-and-convolve_2018]. The WER was further improved by employing an LSTM language model (LSTMLM) [@wang2020complex]. However, this system is inefficient because of the recurrent nature of BLSTM, requiring a long model training time.
|
| 8 |
+
|
| 9 |
+
The Transformer model [@vaswani2017attention] uses an attention mechanism to represent temporal contexts of an entire sequence, and it has been shown to outperform recurrent neural networks for ASR tasks [@karita2019comparative; @zhang2020transformer]. Although Transformer is capable of leveraging temporal information over long sequences, overcoming a drawback of BLSTM, its ability to leverage local information in a sequence appears limited. To deal with this issue, a convolution-augmented Transformer model, named Conformer [@conformer], was proposed, and it produces better results than Transformer-based models on ASR and other tasks [@guo2021recent; @chen2021continuous].
|
| 10 |
+
|
| 11 |
+
In this paper, we propose to integrate the WRBN and Conformer encoder into a new Conformer-based acoustic model for monaural robust ASR. We apply utterance-wise processing to all normalization layers in our model, which leads to more reliable computation for each utterance and avoids inter-utterance interference. The utterance-wise Layernorm (LN) used in this study can also be applied to all LN layers that process batches with padding, when the knowledge of feature length is available *a priori*. We also utilize iterative speaker adaptation for post-processing. Evaluated on the CHiME-4 corpus, the proposed model outperforms WRBN by $8.4\%$ relatively in terms of WER. In addition, the size of our model is reduced by $18.3\%$, and total training time is cut by $79.6\%$.
|
| 12 |
+
|
| 13 |
+
The remainder of the paper is organized as follows. Details of the proposed system are described in Section [2](#sec:am){reference-type="ref" reference="sec:am"}. Experimental setup and evaluation results are presented in Section [3](#sec:exp){reference-type="ref" reference="sec:exp"} and Section [4](#sec:results){reference-type="ref" reference="sec:results"}, respectively. Section [5](#sec:conclusions){reference-type="ref" reference="sec:conclusions"} concludes the paper.
|
| 14 |
+
|
| 15 |
+
In this study, the wide residual convolutional layers in WRBN before the BLSTM layers are utilized and denoted as WRCNN. WRCNN passes the input signal through a convolution layer and uses three residual blocks to extract representations at different frequency resolutions [@zagoruyko_wide_2016]. Afterwards, an utterance-wise Batchnorm (BN) and a linear layer with an ELU (exponential linear unit) non-linearity are utilized to project the signal into the proper dimensions.
|
| 16 |
+
|
| 17 |
+
A Conformer encoder builds upon a Transformer encoder, with an additional convolutional network, and macaron-like feed-forward layers [@conformer]. We employ a Conformer encoder to leverage sequence information via convolution-augmented attention. Modules inside the Conformer encoder are described below.
|
| 18 |
+
|
| 19 |
+
The feed-forward network (FFN) contains two linear layers with an activation function between them, and a residual connection over the entire module. We adopt the pre-norm architecture in [@xiong2020layer], so the FFN is defined as
|
| 20 |
+
|
| 21 |
+
$$\begin{equation}
|
| 22 |
+
\begin{aligned}
|
| 23 |
+
\text{FFN}(\mathbf{x}) = \mathbf{W}_{2}\text{Dropout}(\text{Swish}(\mathbf{W}_{1}\mathbf{x} + \mathbf{b}_{1})) + \mathbf{b}_{2},
|
| 24 |
+
\end{aligned}
|
| 25 |
+
\end{equation}$$ where $\mathbf{x}$ is the input signal, and Swish is an activation function defined as $\text{Swish}(x) = x \cdot \text{sigmoid}(x)$. $\mathbf{W}_{1}\in\mathbb{R}^{d_{\text{ff}}\times d_{\text{attn}}}$, $\mathbf{b}_{1}\in\mathbb{R}^{d_{\text{ff}}}$, and $\mathbf{W}_{2}\in\mathbb{R}^{d_{\text{attn}}\times d_{\text{ff}}}$, $\mathbf{b}_{2}\in\mathbb{R}^{d_{\text{attn}}}$ are the weights and biases of the first and second linear layers, respectively. The first linear layer expands the attention dimension by a factor of $4$, i.e., $d_{\text{ff}} = 4d_{\text{attn}}$, where $d_{\text{attn}}$ is the attention dimension.
|
| 26 |
+
|
| 27 |
+
By applying utterance-wise LN on the input as pre-norm, each FFN module processes the signal as
|
| 28 |
+
|
| 29 |
+
$$\begin{equation}
|
| 30 |
+
\begin{aligned}
|
| 31 |
+
\text{Output} = \mathbf{x} + \frac{1}{2}\text{Dropout}(\text{FFN}(\text{LN}({\mathbf{x}}))).
|
| 32 |
+
\end{aligned}
|
| 33 |
+
\end{equation}$$
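A minimal PyTorch sketch of this macaron feed-forward module follows. It assumes a standard `nn.LayerNorm` for the pre-norm step (the utterance-wise variant introduced later is not reproduced here); layer names and the dropout rate are illustrative assumptions.

```python
# A minimal sketch of the macaron FFN module: pre-norm LN, expansion factor 4,
# Swish activation, dropout, and a half-step residual connection.
import torch
import torch.nn as nn


class FeedForwardModule(nn.Module):
    def __init__(self, d_attn: int, dropout: float = 0.1):
        super().__init__()
        self.norm = nn.LayerNorm(d_attn)          # pre-norm
        self.w1 = nn.Linear(d_attn, 4 * d_attn)   # expand by a factor of 4
        self.w2 = nn.Linear(4 * d_attn, d_attn)   # project back to d_attn
        self.dropout = nn.Dropout(dropout)
        self.swish = nn.SiLU()                    # Swish(x) = x * sigmoid(x)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Output = x + 0.5 * Dropout(FFN(LN(x)))
        y = self.w2(self.dropout(self.swish(self.w1(self.norm(x)))))
        return x + 0.5 * self.dropout(y)
```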
|
| 34 |
+
|
| 35 |
+
In the multi-head self-attention (MHSA) network, absolute positional encoding is employed to encode positional information in the sequence. In this study, the same positional encoding as in [@vaswani2017attention] is applied without scaling the input signal. Instead we divide the positional encoding matrix by the scaling factor. That is, before the MHSA module, instead of $$\begin{equation}
|
| 36 |
+
\begin{aligned}
|
| 37 |
+
\text{Output} = \sqrt{d_{\text{attn}}} \mathbf{x} + \mathbf{PE},
|
| 38 |
+
\end{aligned}
|
| 39 |
+
\end{equation}$$ we use
|
| 40 |
+
|
| 41 |
+
$$\begin{equation}
|
| 42 |
+
\begin{aligned}
|
| 43 |
+
\text{Output} = \mathbf{x} + \frac{1}{\sqrt{d_{\text{attn}}}}\mathbf{PE},
|
| 44 |
+
\end{aligned}
|
| 45 |
+
\end{equation}$$ where $\mathbf{x}$ is the input signal, $\mathbf{PE}$ is the positional encoding matrix, and $\sqrt{d_{\text{attn}}}$ is the scaling factor. Our modification here essentially avoids the scaling of the original input feature.
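The sketch below contrasts the two formulations; the positional encoding tensor `pe` is assumed to be a precomputed sinusoidal encoding, and the function name is ours.

```python
# A minimal sketch: instead of scaling the input by sqrt(d_attn), the
# positional encoding matrix is divided by sqrt(d_attn), leaving the
# original features unscaled.
import math
import torch


def add_positional_encoding(x: torch.Tensor, pe: torch.Tensor, d_attn: int,
                            scale_input: bool = False) -> torch.Tensor:
    """x: (batch, time, d_attn); pe: (1, time, d_attn) sinusoidal encoding."""
    if scale_input:
        # original formulation: sqrt(d_attn) * x + PE
        return math.sqrt(d_attn) * x + pe
    # modified formulation: x + PE / sqrt(d_attn)
    return x + pe / math.sqrt(d_attn)
```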
|
| 46 |
+
|
| 47 |
+
For each head $h$ in the MHSA, self-attention is computed as $$\begin{equation}
|
| 48 |
+
\begin{aligned}
|
| 49 |
+
\text{Attention}(\mathbf{Q}_{h}, \mathbf{K}_{h}, \mathbf{V}_{h}) = \text{Softmax}(\frac{\mathbf{Q}_{h}\mathbf{K}_{h}^{T}}{\sqrt{d_{\text{attn}}}})\mathbf{V}_{h},
|
| 50 |
+
\end{aligned}
|
| 51 |
+
\end{equation}$$ where $\mathbf{Q}_{h}=\mathbf{W}_{Q}^{h}\mathbf{x}$, $\mathbf{K}_{h}=\mathbf{W}_{K}^{h}\mathbf{x}$, and $\mathbf{V}_{h}=\mathbf{W}_{V}^{h}\mathbf{x}$ are query, key, and value linear projections of head $h$ on the input sequence, respectively. $\mathbf{W}_{Q}^{h}$, $\mathbf{W}_{K}^{h}$, and $\mathbf{W}_{V}^{h}\in\mathbb{R}^{ \frac{d_{\text{attn}}}{H}\times d_{\text{attn}}}$ are projection weights for query, key, and value, respectively. $H$ is the number of heads.
|
| 52 |
+
|
| 53 |
+
After the computation of self-attention, all the heads are concatenated and fed to a final linear layer,
|
| 54 |
+
|
| 55 |
+
$$\begin{equation}
|
| 56 |
+
\begin{aligned}
|
| 57 |
+
\text{MHSA}(\mathbf{Q}, \mathbf{K}, \mathbf{V}) & = \textbf{W}_{\text{out}}\text{Concat}(\text{head}_{1},...,\text{head}_{H}),
|
| 58 |
+
\end{aligned}
|
| 59 |
+
\end{equation}$$ where $\mathbf{W}_{\text{out}}\in\mathbb{R}^{d_{\text{attn}}\times d_{\text{attn}}}$ is the weight matrix of the final linear layer, and $\text{head}_{h} = \text{Attention}(\mathbf{Q}_{h}, \mathbf{K}_{h}, \mathbf{V}_{h})$.
|
| 60 |
+
|
| 61 |
+
Therefore, each MHSA module processes the signal as:
|
| 62 |
+
|
| 63 |
+
$$\begin{equation}
|
| 64 |
+
\begin{aligned}
|
| 65 |
+
\text{Output} = \mathbf{x} + \text{Dropout}(\text{MHSA}(\text{LN}({\mathbf{x}}))).
|
| 66 |
+
\end{aligned}
|
| 67 |
+
\end{equation}$$
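A minimal sketch of this self-attention module is given below, reusing `torch.nn.MultiheadAttention` for the MHSA computation. The optional padding mask anticipates the sequence masking discussed later; the wrapper itself is an illustrative assumption rather than the paper's implementation.

```python
# A minimal sketch of the MHSA module: pre-norm LN, multi-head self-attention,
# dropout, and a residual connection. key_padding_mask marks padded frames
# (True = ignore) so attention is computed only between valid time frames.
import torch
import torch.nn as nn


class SelfAttentionModule(nn.Module):
    def __init__(self, d_attn: int, num_heads: int, dropout: float = 0.1):
        super().__init__()
        self.norm = nn.LayerNorm(d_attn)
        self.attn = nn.MultiheadAttention(d_attn, num_heads,
                                          dropout=dropout, batch_first=True)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x: torch.Tensor,
                key_padding_mask: torch.Tensor = None) -> torch.Tensor:
        # Output = x + Dropout(MHSA(LN(x)))
        y = self.norm(x)
        y, _ = self.attn(y, y, y, key_padding_mask=key_padding_mask,
                         need_weights=False)
        return x + self.dropout(y)
```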
|
| 68 |
+
|
| 69 |
+
With a similar architecture to that in [@conformer], the convolutional network consists of a pointwise convolution followed by a GLU (gated linear unit) activation function. After a 1-dimensional depthwise convolution and an utterance-wise BN, the Swish activation is applied. Finally, a 1-dimensional pointwise convolution is employed. Note that all convolutions operate on the time dimension.
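A minimal sketch of this convolution stack follows; the kernel size is an assumption, the utterance-wise BN is approximated by a standard `BatchNorm1d`, and the pre-norm and residual wiring around the full Conformer convolution module are omitted.

```python
# A minimal sketch of the convolution stack:
# pointwise conv -> GLU -> depthwise conv -> BN -> Swish -> pointwise conv,
# with all convolutions over the time dimension.
import torch
import torch.nn as nn


class ConvolutionModule(nn.Module):
    def __init__(self, d_attn: int, kernel_size: int = 15):
        super().__init__()
        self.pointwise1 = nn.Conv1d(d_attn, 2 * d_attn, kernel_size=1)
        self.glu = nn.GLU(dim=1)                     # halves channels back to d_attn
        self.depthwise = nn.Conv1d(d_attn, d_attn, kernel_size,
                                   padding=kernel_size // 2, groups=d_attn)
        self.bn = nn.BatchNorm1d(d_attn)             # stands in for utterance-wise BN
        self.swish = nn.SiLU()
        self.pointwise2 = nn.Conv1d(d_attn, d_attn, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, d_attn) -> Conv1d expects (batch, channels, time)
        y = x.transpose(1, 2)
        y = self.glu(self.pointwise1(y))
        y = self.swish(self.bn(self.depthwise(y)))
        y = self.pointwise2(y)
        return y.transpose(1, 2)
```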
|
| 70 |
+
|
| 71 |
+
Inspired by the utterance-wise dropout in [@wang2018utterance], we modify all normalization layers into utterance-wise normalization. BN in the convolutional network is substituted by the same utterance-wise BN as in WRBN [@jahn2016wide]. Statistics over the time dimension are collected to normalize the feature dimension for each utterance, treating the batch as an independent dimension. Utterance-wise BN obtains a more reliable estimate of the statistics from each utterance without including other utterances, and it also makes the test stage more independent of the batch composition used during training.
|
| 72 |
+
|
| 73 |
+
In this study, LN in each module of the Conformer encoder is modified into utterance-wise LN, which is defined as
|
| 74 |
+
|
| 75 |
+
$$\begin{equation}
|
| 76 |
+
\begin{aligned}
|
| 77 |
+
\mathbf{y} = \frac{\mathbf{x} - \mu}{\sqrt{\sigma^2 + \epsilon}}\cdot \boldsymbol{\gamma} + \boldsymbol\beta,
|
| 78 |
+
\end{aligned}
|
| 79 |
+
\end{equation}$$ where $\mathbf{x}$ and $\mathbf{y}$ are the input and output of the LN layer. $\mu$ and $\sigma$ are the mean and standard deviation of the hidden layer units. $\boldsymbol{\gamma}$ and $\boldsymbol\beta$ are learnable affine transform parameters, and $\epsilon$ is a small number for numerical stability.
|
| 80 |
+
|
| 81 |
+
<figure id="fig:layernorm" data-latex-placement="h!">
|
| 82 |
+
<embed src="matrixlayernorm.pdf" style="width:70.0%" />
|
| 83 |
+
<figcaption>Illustration of original Layernorm and proposed utterance-wise Layernorm.</figcaption>
|
| 84 |
+
</figure>
|
| 85 |
+
|
| 86 |
+
Fig. [1](#fig:layernorm){reference-type="ref" reference="fig:layernorm"} shows how utterance-wise LN works. For illustration purposes, we assume the feature dimension to be 1 and the maximum utterance length in the batch to be 6. Given an input batch of size 4, blue cells indicate the features of each utterance and white cells indicate the zero padding. Each utterance starts from left to right. Dashed lines surrounding each utterance enclose the features subject to LN processing. For LN, during the normalization of a hidden layer, i.e., the feature dimension, zero padding is included when updating the parameters $\boldsymbol\gamma$ and $\boldsymbol\beta$. In this case, the entire normalized feature output will be biased due to the zero padding, degrading the reliability of the collected statistics. The proposed utterance-wise LN overcomes this bias by processing features while taking the utterance length into account. For each utterance, only the valid features are included when updating parameters. As a result, zero padding does not affect the estimation of the learnable parameters in utterance-wise processing, resulting in more precise computation and normalization.
|
| 87 |
+
|
| 88 |
+
In addition to utterance-wise normalization, we apply sequence masking in every step of the computation. For example, after each linear layer in FFN, we apply an all-zero mask over zero padding to remove the bias introduced during computation. Also, for an utterance with zero padding, when computing attention weights, we mask out the padding parts such that attention is computed only between valid time frames. Therefore, we believe that utterance-wise normalization and masking benefit training stability and convergence.
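One way to realize utterance-wise LN with masking is sketched below: LayerNorm is applied as usual over the feature dimension, and padded frames are then zeroed with a length mask so they contribute neither to later layers nor to the gradients of $\boldsymbol\gamma$ and $\boldsymbol\beta$. The mask construction and function names are assumptions for illustration, not the paper's code.

```python
# A minimal sketch of utterance-wise LN plus masking: normalize over the
# feature dimension, then re-zero padded frames using the utterance lengths.
import torch
import torch.nn as nn


def utterance_layer_norm(x: torch.Tensor, lengths: torch.Tensor,
                         ln: nn.LayerNorm) -> torch.Tensor:
    """x: (batch, time, features); lengths: (batch,) valid frames per utterance."""
    batch, time, _ = x.shape
    # mask[b, t] is True for valid frames of utterance b
    mask = torch.arange(time, device=x.device)[None, :] < lengths[:, None]
    y = ln(x)                          # normalize over the feature dimension
    return y * mask.unsqueeze(-1)      # re-zero the padded frames


ln = nn.LayerNorm(256)
x = torch.randn(4, 6, 256)             # batch of 4 utterances, padded to 6 frames
lengths = torch.tensor([6, 4, 3, 5])
out = utterance_layer_norm(x, lengths, ln)
```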
|
| 89 |
+
|
| 90 |
+
# Method
|
| 91 |
+
|
| 92 |
+
Our proposed acoustic model integrates the WRCNN and Conformer encoder, and uses utterance-wise normalizations. Fig. [2](#fig:model){reference-type="ref" reference="fig:model"} shows the acoustic model architecture. The input signal consists of $80$-dimensional mean-normalized log-Mel filterbank features extracted from the original single-channel noisy speech signal, coupled with its delta and delta-delta features. It is processed by WRCNN and projected to the dimension of MHSA by a linear layer. After $N$ blocks of the Conformer encoder with absolute positional encoding, the signal is projected into $1024$ dimensions, followed by a ReLU (rectified linear unit) activation and dropout. Finally, a linear layer projects the signal to the final per-frame output, i.e., posterior probabilities over $2042$ context-dependent states.
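A high-level sketch of this stack is given below; `wrcnn` and the Conformer blocks are placeholders for the modules described earlier, and the feature dimensions and dropout rate are assumptions.

```python
# A high-level sketch of the acoustic model stack: WRCNN front-end, linear
# projection to the attention dimension, N Conformer blocks, and an output
# head producing per-frame scores over 2042 senone states.
import torch
import torch.nn as nn


class AcousticModel(nn.Module):
    def __init__(self, wrcnn: nn.Module, conformer_blocks: nn.ModuleList,
                 wrcnn_dim: int, d_attn: int, num_states: int = 2042):
        super().__init__()
        self.wrcnn = wrcnn                        # wide residual conv front-end
        self.proj = nn.Linear(wrcnn_dim, d_attn)  # project to attention dimension
        self.blocks = conformer_blocks            # N Conformer encoder blocks
        self.head = nn.Sequential(nn.Linear(d_attn, 1024), nn.ReLU(),
                                  nn.Dropout(0.1), nn.Linear(1024, num_states))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        """feats: (batch, time, feat) log-Mel + delta + delta-delta features."""
        x = self.proj(self.wrcnn(feats))
        for block in self.blocks:
            x = block(x)
        return self.head(x)   # per-frame scores over the context-dependent states
```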
|
| 93 |
+
|
| 94 |
+
<figure id="fig:model" data-latex-placement="htbp">
|
| 95 |
+
<embed src="model.pdf" style="width:95.0%" />
|
| 96 |
+
<figcaption>Model architecture of Conformer-based acoustic model. <span class="math inline"><em>B</em></span> denotes batch size, and <span class="math inline"><em>T</em></span> denotes number of time frames of the longest utterance in a batch.</figcaption>
|
| 97 |
+
</figure>
|
2203.06107/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2203.06107/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,71 @@
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
{#fig:teaser width="100%"}
|
| 4 |
+
|
| 5 |
+
One of the fundamental goals in artificial intelligence is to develop intelligent systems that are able to reason and explain with the complexity of real-world data to make decisions. While explaining decisions is an integral part of human communication, understanding and reasoning, existing visual reasoning models typically answer questions without explaining the rationales behind their answers. As a result, despite the significantly increased accuracy achieved by powerful deep neural networks [@updown; @ban; @lxmert; @vilbert; @visualbert; @oscar], existing methods commonly take advantage of spurious data biases [@explicit_bias] and it is difficult to understand if they make decisions by truly understanding the causal relationships between multi-modal inputs and the answers.
|
| 6 |
+
|
| 7 |
+
An important line of research to tackle the issues is to improve the interpretability of visual reasoning models with multi-modal explanations [@vqa_x; @vqa_e; @vcr; @faithful_exp; @transformer_exp; @generative_vcr; @competing_exp]. While showing usefulness in highlighting important visual regions and providing user-friendly textual descriptions, these approaches suffer from two major limitations: (1) Existing explanations are typically defined in the forms of attention maps or free-formed natural language. Attention maps capture the salient regions for generating the answers but fall short of explaining how different regions contribute to the decision-making process. On the other hand, unconstrained textual explanations could be highly diverse and often inconsistent when explaining the same decision. Both of them lack the capability to illustrate the reasoning process behind a decision. (2) The explanations of different modalities are loosely connected and modeled with separate processes [@vqa_x; @vqa_e; @faithful_exp]. This not only undermines the capability of explaining models' rationales with multiple modalities, but can also result in contradictory explanations [@faithful_exp]. For instance, textual explanations "The apple is above the pear" and "The pear is above the apple" have opposite meanings but could share the same attention map. We address the aforementioned challenges from two distinct perspectives (*i.e.,* data and model), and propose an integrated framework that consists of a new type of explanations, a functional program, and a novel explanation generation method.
|
| 8 |
+
|
| 9 |
+
From the data perspective, instead of independently modeling explanations of a single modality without considering the reasoning process, we introduce a new Reasoning-aware and grounded EXplanation (REX) that is derived by traversing the reasoning process and tightly coupling key components across the visual and textual modalities. As shown in Figure [1](#fig:teaser){reference-type="ref" reference="fig:teaser"}, it is constructed based on the consecutive reasoning steps (*e.g.,* select, common) for decision making, and explicitly grounds key objects (*e.g.,* comb, heart) with visual regions to elaborate how they contribute to the answer. The structured reasoning process also naturally alleviates the variance of natural language, and enables models to pay focused attention to important information for reasoning instead of the language structure. To automatically construct our explanations, we develop a functional program to progressively execute the reasoning steps and query key information from scene graphs [@scene_graph; @visual_genome], and collect a new dataset with 1,040,830 multi-modal explanations.
|
| 10 |
+
|
| 11 |
+
From the model perspective, unlike existing methods [@vqa_x; @vqa_e; @faithful_exp; @transformer_exp; @generative_vcr] that model key components in different modalities with separate processes, we propose a novel explanation generation method that explicitly models the correspondence between important words and regions of interest. It takes into account the semantic similarity between features of the two modalities, and incorporates an adaptive gate to ground words in the visual scene. Our method improves the visual grounding by a large margin, resulting in enhanced interpretability and reasoning performance.
|
| 12 |
+
|
| 13 |
+
To summarize, our contributions are as follows:
|
| 14 |
+
|
| 15 |
+
\(1\) We present REX, a new type of reasoning-aware and visually grounded explanations. Our explanation differentiates itself with its strong correlation with the reasoning process and the tight coupling between different modalities. We develop a functional program to automatically construct our new dataset with 1,040,830 multi-modal explanations.
|
| 16 |
+
|
| 17 |
+
\(2\) We propose a novel explanation generation method that goes beyond the conventional paradigm of independently modeling multi-modal explanations [@faithful_exp; @transformer_exp; @generative_vcr], and leverages an explicit mapping to ground words in the visual regions based on their correlation.
|
| 18 |
+
|
| 19 |
+
\(3\) We demonstrate the effectiveness of our data and method with extensive experiments under different settings, including multi-task learning and transfer learning. We also analyze different visual skills and their correlation with the reasoning performance.
|
| 20 |
+
|
| 21 |
+
# Method
|
| 22 |
+
|
| 23 |
+
Answering visual questions would benefit from capabilities of reasoning on multi-modal contents and explaining the answers. This section presents a principled framework for visual reasoning with enhanced interpretability and effectiveness. It advances the research in visual reasoning from both the data and the model perspectives with:
|
| 24 |
+
|
| 25 |
+
::: enumerate*
|
| 26 |
+
a new type of multi-modal explanations that explain decision making by traversing the reasoning process, together with a functional program to automatically construct the explanations, and
|
| 27 |
+
|
| 28 |
+
an explanation generation method that explicitly models the relationships between words and visual regions, and simultaneously enhances the interpretability as well as reasoning performance.
|
| 29 |
+
:::
|
| 30 |
+
|
| 31 |
+
The goal of our proposed data is to offer an explanation benchmark that encodes the reasoning process and grounding across the visual-textual modalities. Compared to previous explanations for visual reasoning [@vqa_x; @vqa_e; @vcr], it has two key advantages: (1) Grounded on the reasoning process, it elaborates how different components in the visual and textual modalities contribute to the decision making, and reduces variance or inconsistency in textual descriptions; and (2) Instead of modeling textual and visual explanations as separate components, our explanation considers evidence from both modalities in an integral manner, and tightly couples words with image regions (*i.e.,* for visual objects, their grounded regions instead of object names are considered in the explanations). It augments visual reasoning models with the capability to explain their decision making by jointly considering both modalities, resulting in enhanced interpretability and reasoning performance.
|
| 32 |
+
|
| 33 |
+
Figure [2](#fig:data){reference-type="ref" reference="fig:data"} illustrates the paradigm for constructing our explanation. To answer the question "Is the plate on the table both dirty and silver?", one needs to locate the table, find the plate on top of it based on their relationship, and investigate the cleanliness as well as the color of the plate. We represent each reasoning step with an atomic operation, *e.g.,* *select* and *verify*, and leverage a functional program to sequentially construct the explanation by traversing the reasoning steps and accumulating important information (*e.g.,* visually grounded objects and their attributes). Upon finishing the traversal, our final explanation not only elaborates the decision making with concrete textual description (*i.e.,* the plate is dirty but not silver thus the answer is no), but also supports the explanation with visual evidence (*i.e.,* grounded regions for the plate and the table).
|
| 34 |
+
|
| 35 |
+
::: center
|
| 36 |
+
:::
|
| 37 |
+
|
| 38 |
+
**Decomposing the reasoning process with atomic operations.** We define a vocabulary of atomic operations by characterizing and abstracting functions for question generation in the GQA dataset [@gqa]. Given the 127 different types of operations in GQA, we first follow [@air] and represent each operation as a triplet, *i.e.,* $<$operation, attribute, category$>$, and then categorize the original operations in GQA programs based on their semantic meanings. As shown in Table [\[atomic_operation\]](#atomic_operation){reference-type="ref" reference="atomic_operation"}, we define 12 atomic operations that cover the essential steps for answering various types of visual questions: some require localizing a specific type of objects (*select*, *exist*); some require reasoning on attributes of the objects (*filter*, *query*, *verify*, *common*, *same*, *different*, *compare*, *relate*); and others require logical reasoning (*and*, *or*).
|
| 39 |
+
|
| 40 |
+
**Traversing reasoning process with a functional program.** With the defined atomic operations, we develop a functional program to traverse the reasoning process by performing the corresponding operations and sequentially updating the explanation based on the collected information. Inspired by [@clevr; @gqa], we represent the reasoning process as a directed graph, where nodes denote the reasoning steps and edges represent their dependencies. As shown in Figure [2](#fig:data){reference-type="ref" reference="fig:data"}, starting from the initial reasoning step (*i.e.,* *Select (table)*), we recursively construct the partial explanation for the current node (shown to the right of each node in Figure [2](#fig:data){reference-type="ref" reference="fig:data"}) and pass it to its dependent nodes. Our final explanation is obtained at the last reasoning step (*i.e.,* *And*). To construct the partial explanation for each node, we design a set of templates based on the semantic meanings of the atomic operations (see our supplementary materials for details). The proposed templates dynamically combine the information extracted within the current node and those transited from its dependent nodes in the previous steps. For example, template for the *relate* operation locates a new object based on its relationship with objects selected in the previous nodes.
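A minimal sketch of this traversal is given below, assuming the reasoning graph is available as node objects and that the operation templates (specified in the supplementary materials) are supplied as callables; the data structures and names are ours, not the released implementation.

```python
# A minimal sketch of building an explanation by recursively traversing the
# reasoning DAG: each node combines its own arguments with the partial
# explanations passed up from its dependent reasoning steps.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Node:
    operation: str                               # e.g. "select", "relate", "and"
    arguments: dict                              # operation arguments, incl. grounded regions
    dependencies: List["Node"] = field(default_factory=list)


def build_explanation(node: Node,
                      templates: Dict[str, Callable[[dict, List[str]], str]],
                      cache: dict = None) -> str:
    """Recursively assemble the partial explanation for `node`."""
    cache = {} if cache is None else cache
    if id(node) in cache:
        return cache[id(node)]
    # partial explanations passed up from dependent reasoning steps
    parts = [build_explanation(dep, templates, cache) for dep in node.dependencies]
    text = templates[node.operation](node.arguments, parts)
    cache[id(node)] = text
    return text

# usage: the final explanation is obtained at the last reasoning step
# explanation = build_explanation(last_node, templates)
```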
|
| 41 |
+
|
| 42 |
+
The aforementioned paradigm allows efficiently traversing the reasoning process and constructing explanation that elaborates how a decision is made based on the visual and textual modalities. It not only enables the construction of our new GQA-REX dataset with 1,040,830 multi-modal explanations (data statistics and qualitative examples provided in the supplementary materials), but also plays a key role in improving the interpretability and accuracy of visual reasoning models, as detailed in the next subsection.
|
| 43 |
+
|
| 44 |
+
Explaining the rationale behind a decision requires reasoning on visual and textual evidence and elaborating their relationships. Existing explanation generation methods [@vqa_x; @vqa_e; @faithful_exp; @transformer_exp] model textual and visual explanations with separate processes, and pay little attention to how key components in each modality correlate with each other. As a result, they have limited capability of generating explanations that jointly consider both modalities and ground words in the images. With the overarching goal of improving the interpretability and accuracy of visual reasoning models, we propose a novel explanation generation model that couples related components across the two modalities and generates the explanation based on their relationships.
|
| 45 |
+
|
| 46 |
+
Figure [3](#fig:model){reference-type="ref" reference="fig:model"} illustrates an overview of our method. The principal idea behind the method is to explicitly measure the semantic similarity between words and visual regions, and leverage it to generate multi-modal explanation with enhanced visual grounding. Specifically, unlike conventional methods [@vqa_x; @vqa_e; @faithful_exp; @transformer_exp] that generate the explanation solely based on textual features $T_i \in \mathbb{R}^{1 \times D}$ (*e.g.,* LSTM hidden state for predicting the $i^{th}$ word), we further measure the similarity between the textual features $T_{i}$ and visual features $V \in \mathbb{R}^{N \times D}$, and compute the probability of linking the current word with different regions $S_{i} \in \mathbb{R}^{1\times N}$: $$\begin{equation}
|
| 47 |
+
S_{i}^{n} = \frac{e^{T_{i} \cdot V_{n}}}{\sum\limits_{j=1}^{N} e^{T_{i} \cdot V_{j}}}
|
| 48 |
+
\end{equation}$$
|
| 49 |
+
|
| 50 |
+
where $N$ denotes the total number of image regions, $D$ is the dimension of features, and $n$ is the index for an image region. $T \cdot V$ is the dot product between two features and corresponds to their cosine similarity.
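A minimal sketch of this word-region similarity follows, assuming a single textual feature $T_i$ and a matrix of region features $V$; shapes and names are illustrative.

```python
# A minimal sketch: compare the textual feature of the i-th word with every
# region feature by dot product, then softmax over regions to obtain S_i.
import torch
import torch.nn.functional as F


def region_attention(T_i: torch.Tensor, V: torch.Tensor) -> torch.Tensor:
    """T_i: (D,) textual feature; V: (N, D) region features -> S_i: (N,)."""
    scores = V @ T_i                 # dot product T_i . V_n for every region n
    return F.softmax(scores, dim=0)  # probability of linking the word to each region
```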
|
| 51 |
+
|
| 52 |
+
{#fig:model width="60%"}
|
| 53 |
+
|
| 54 |
+
To incorporate visual grounding with explanation generation, we leverage a transformation matrix $M \in \mathbb{R}^{N\times K}$ to map grounding results to the prediction of the next word: $$\begin{equation}
|
| 55 |
+
y_{i}^{g} = S_{i} \cdot M
|
| 56 |
+
\end{equation}$$ where $K$ is the vocabulary size, $M$ is a binary matrix, and $M_{ij} = 1$ if the $j^{th}$ token denotes the $i^{th}$ region (*i.e.,* we use the token $\# i$ to represent grounding a word in the $i^{th}$ region). Since not every word in the explanation can be grounded in the image, *e.g.,* words like "is" do not have an associated region, we further develop a gating function to determine if the current word should be grounded: $$\begin{equation}
|
| 57 |
+
\hat{g}_{i} = \sigma(W_{g} \cdot T_{i})
|
| 58 |
+
\end{equation}$$ where $\hat{g}_{i}$ is the probability of grounding the $i^{th}$ word, $W_g \in \mathbb{R}^{1\times D}$ denotes the trainable weights, and $\sigma$ is the sigmoid activation function. We use a balanced binary cross-entropy loss to train the gating function: $$\begin{equation}
|
| 59 |
+
L_{g} = - \sum\limits_{i} \frac{C^{-}}{C} g_{i}\log \hat{g}_{i}+ \frac{C^{+}}{C}(1-g_{i})\log(1-\hat{g}_{i})
|
| 60 |
+
\end{equation}$$ where $g_{i}$ is the binary ground truth, $C^{+}$ and $C^{-}$ denote the number of grounded and non-grounded words in the current explanation, and $C = C^{+}+C^{-}$.
|
| 61 |
+
|
| 62 |
+
Upon obtaining the grounding probability $\hat{g}_{i}$, we adaptively combine the grounding results $y_{i}^{g}$ with the probabilities of different words derived from textual features $y_{i}^{f} = softmax(W_{f} \cdot T_{i})$ to determine the next word $\hat{y}_{i}$: $$\begin{equation}
|
| 63 |
+
\hat{y}_{i} = \hat{g}_{i} y_{i}^{g} +(1-\hat{g}_{i}) y_{i}^{f}
|
| 64 |
+
\end{equation}$$ where $W_{f} \in \mathbb{R}^{K\times D}$ represents trainable weights.
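A minimal sketch of the resulting grounding-aware word prediction follows, combining the region distribution $S_i$, the binary map $M$, the gate, and the vocabulary distribution; all tensor names and shapes are assumptions for illustration.

```python
# A minimal sketch: map the region distribution to the vocabulary via M,
# gate between the grounding-based and text-based word distributions.
import torch
import torch.nn.functional as F


def predict_word(T_i: torch.Tensor, S_i: torch.Tensor, M: torch.Tensor,
                 W_g: torch.Tensor, W_f: torch.Tensor) -> torch.Tensor:
    """T_i: (D,), S_i: (N,), M: (N, K) binary region-to-token map,
    W_g: (D,), W_f: (K, D) -> distribution over the K vocabulary tokens."""
    y_g = S_i @ M                          # grounding-based word distribution
    y_f = F.softmax(W_f @ T_i, dim=0)      # distribution from textual features
    g = torch.sigmoid(W_g @ T_i)           # probability that the word is grounded
    return g * y_g + (1.0 - g) * y_f
```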
|
| 65 |
+
|
| 66 |
+
We train our model with a linear combination of the balanced binary cross-entropy loss $L_{g}$ for the gating function and the conventional cross-entropy loss for question answering $L_{ans}$ and explanation generation $L_{exp}$ [@vqa_e]: $$\begin{equation}
|
| 67 |
+
\label{eq_loss}
|
| 68 |
+
L = L_{ans}+ L_{exp} + L_{g}
|
| 69 |
+
\end{equation}$$
|
| 70 |
+
|
| 71 |
+
With the aforementioned method that couples key components from both modalities, we significantly improve the model's visual grounding capability, which leads to enhanced interpretability and reasoning performance.
|
2203.12892/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|