Add files using upload-large-folder tool
- 2005.06606/main_diagram/main_diagram.drawio +1 -0
- 2005.06606/main_diagram/main_diagram.pdf +0 -0
- 2005.06606/paper_text/intro_method.md +19 -0
- 2010.01402/main_diagram/main_diagram.drawio +0 -0
- 2010.01402/paper_text/intro_method.md +24 -0
- 2012.03500/main_diagram/main_diagram.drawio +0 -0
- 2012.03500/paper_text/intro_method.md +78 -0
- 2103.04503/main_diagram/main_diagram.drawio +0 -0
- 2103.04503/paper_text/intro_method.md +71 -0
- 2104.05832/main_diagram/main_diagram.drawio +1 -0
- 2104.05832/main_diagram/main_diagram.pdf +0 -0
- 2104.05832/paper_text/intro_method.md +47 -0
- 2106.06795/main_diagram/main_diagram.drawio +1 -0
- 2106.06795/main_diagram/main_diagram.pdf +0 -0
- 2106.06795/paper_text/intro_method.md +63 -0
- 2108.13655/main_diagram/main_diagram.drawio +1 -0
- 2108.13655/main_diagram/main_diagram.pdf +0 -0
- 2108.13655/paper_text/intro_method.md +16 -0
- 2110.02027/main_diagram/main_diagram.drawio +1 -0
- 2110.02027/main_diagram/main_diagram.pdf +0 -0
- 2110.02027/paper_text/intro_method.md +17 -0
- 2110.06539/main_diagram/main_diagram.drawio +1 -0
- 2110.06539/main_diagram/main_diagram.pdf +0 -0
- 2110.06539/paper_text/intro_method.md +669 -0
- 2110.13059/main_diagram/main_diagram.drawio +0 -0
- 2110.13059/paper_text/intro_method.md +208 -0
- 2112.10149/main_diagram/main_diagram.drawio +1 -0
- 2112.10149/main_diagram/main_diagram.pdf +0 -0
- 2112.10149/paper_text/intro_method.md +82 -0
- 2201.02233/main_diagram/main_diagram.drawio +0 -0
- 2201.02233/paper_text/intro_method.md +106 -0
- 2201.02263/main_diagram/main_diagram.drawio +0 -0
- 2201.02263/paper_text/intro_method.md +145 -0
- 2202.09852/main_diagram/main_diagram.drawio +1 -0
- 2202.09852/main_diagram/main_diagram.pdf +0 -0
- 2203.03937/main_diagram/main_diagram.drawio +1 -0
- 2203.03937/main_diagram/main_diagram.pdf +0 -0
- 2203.03937/paper_text/intro_method.md +65 -0
- 2204.06283/main_diagram/main_diagram.drawio +1 -0
- 2204.06283/main_diagram/main_diagram.pdf +0 -0
- 2204.06283/paper_text/intro_method.md +27 -0
- 2204.07316/main_diagram/main_diagram.drawio +1 -0
- 2204.07316/main_diagram/main_diagram.pdf +0 -0
- 2204.07316/paper_text/intro_method.md +67 -0
- 2205.05678/main_diagram/main_diagram.drawio +0 -0
- 2205.05678/paper_text/intro_method.md +103 -0
- 2206.08476/main_diagram/main_diagram.drawio +1 -0
- 2206.08476/main_diagram/main_diagram.pdf +0 -0
- 2206.08476/paper_text/intro_method.md +138 -0
- 2207.05315/main_diagram/main_diagram.drawio +0 -0
2005.06606/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="www.draw.io" modified="2019-12-05T11:40:47.178Z" agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36" version="12.3.7" etag="z1-GvEQUDG62gUhHxnQk" type="device" pages="1"><diagram id="--T-MirjGDhsE9dOhFZp">7Vxbk5s2FP41nmkfsmMkxOVxb2ke0jYz22mbp4wMsk2DkQs4a/fXVwKJiyTvsjbgXXu9mTEcXSzOd76joyORCbxdbX9J8Xr5Kw1JPAHTcDuBdxMALBs47ItLdqXEtdxSsEijUFSqBQ/Rf0QIp0K6iUKStSrmlMZ5tG4LA5okJMhbMpym9LFdbU7j9q+u8YJogocAx7r0ryjMl6XUA24t/0SixVL+suX4ZckKy8riSbIlDuljQwTvJ/A2pTQvr1bbWxJz5Um9lO0+7imtBpaSJO/SAIhh5Dv5bCRkjypuaZov6YImOL6vpTcp3SQh4R1M2V1d5zOlaya0mPAfkuc7gRve5JSJlvkqFqVkG+V/8+ZXSNx9FZ3x67tt82Ynb5I83TUa8duvsj9+Uzcr7mS7LMdpLkbisPs5TeQtMzp4Uz4/f+iWBjO6SQMhgsLCcLogQqk20BVtVfAxuyd0RdgwWJWUxDiPfrS7x8IAF1W9GiN2IWAyQyaG8wPHG9HpBDhxLp6tBabz74bKgg9Z8dTXrILlrbd1Ibta8O8/Upxkc5quSMrq3JGA0TaVXbNBlb2XdTWrqW2CA/K4jHLysMaFBh+ZD2jjP4/i+JbGNC3awhATbx4UWKX0O2mUOIFHZvPq936QNCfbyT5L3wOAaABcbjlFI+F1fCRI+FhzGEIhWzb4K+sdg5r9VojWgRCof+MXTb/QqDDhrYLGTjpSoEBRklI0U9CoxtEJIKTRKtAhi2M2xZDnLRxn63LemUdbDqAwbTlZmEiAiBfaJhJ4YAYd1XWhfkgBLU9RsA01TsjZukkJKTuGEs5boYQ6h3SgiNsZiWMpIkOLESjiahTBF0AR5J6OIp5hrlcUziLINb+cx2R7zWPbIgQKxeVdEOMsiwJlAjaqSTPoDkpq6AA9MXMeaeTAt1sYeFCx8ZJ6mo0bOoJKEOD6g9HFP2P/Jldiwzs44I8XA8inavDt2/l7OOCe0MNZlsHFvWw5w57VsJy5Yp+uqxemu1zBLo4WCXefTIdsFQRvuIajAMfXomAVhWHJWMJGgmdFV5xMa26HhULQzQTd8Z9j3TGeZgI1A4jFvRiO1QeiAD2PqG3w1yq1DkL0zeQUuvg5OJafsyUTxvBzeg4hP38/B301VB7Tz72ZBMAh0cBoGQEbwfFYoqcELiAasKF/1WH2GIwn55wVsEZLC9g+GI8nel4gk5FXtsaJMYYLSmvm8Vu6mOGf2OjYP/ZbU+PVz/yS63haBH9zvIriXdl8RROaFbRrVanjwyI8FEUFST+U1YvChBZU1cLHCUDqBhZij8+lxe5MdSfVgQqFMMkdv+YDR1wriCn2ubpWVVdaxkHdgLqbEoqqJK9LSiSqEum+mKDpwHh5qRckeMIlVnErHBkXlK6MC5vOrK5aOzT5g9Kp1W2UcuHaitbSuRUdokLWeMTSyXVRYlVU6a8mGuKUqWra0K672TV6KFygLHisHrPUu1MXCXdoLGPMq+SLxgBUeIvbCuOmsG15op5motWapyTfnjXP+U1cnpooGHPa8s552vLHmrbQiBs+8qkuK52NrBMme6pTD+fIkhdgcSxL3PEWQXJ0l5UqQOiEqQIANJW/p0SPRtQHXZa1QyVF+zi145owlb3MpCDAeQPjmVrvNeNen8c6cj+jvVvoGs75mHBWNxUPwtk24Kyo+BJ2bG2rPUUdvGNrWypvB9yxNRwC0vj1Yldsm2j7bS8fTdQ9P45C31NhNZ3GG4qlzjtLC3IhR0HhcJ4iZzSW6vlGlZTyxLiknfXkxHno3JvhC59pbXkO+hQcfj8bJVIm6pGmgznMuhqNw3ry5TQcvvRgGYHng2UwEIVlHxdPYVfdcj2cwm47cTEgheH+k1uHB8vwPVg2cNRRYqsRp1moU/KVpmxP+YaY4RUxOMCmSWfQhnpH7H41I2EYJYsx3gubz4kTGN8LC11/Np32Qy6gbhtOHU9nl/WE9z8KKVO+6EgnahkzDr9N+DRuRG37Kh1pD9g6FlLf+XOQBu5QOV9oyiadwJfW0RJdk+Q62yVBKf4Y8bF3PQoLnf4d2p5cgq1k6qeSJgNEMR1SQYdvb/Vgw7anKgNYV7oRy9Rly4hRD0ZsSrbsidHz4us5JelbfmuSRmxgJOWN+OQipgOm9U+N6z/FdUpzZl00aZluzzqGwKRjW9dxD/t98M2cHTlliOUbQqyBHBJzl3jXqCBmq/3+Cjqa+SgGUHZ5sJMyZUv2kJCm95W7qojF5MKJfalFzxF1hoPvi8LOft/kcZTIiVkn8CB+DyivJk+BY+CkZfB7Vg9+zzblJvqIw7Tw7gsLerg7w3FVlFbxdhJQJdx+9UmqHrBHUI3brKl8gaKBvWcI3OweAjfb0pR4Zg7ZmEGrI8IXJNGaPlruBrT+p5QBnHRnHMHxHDYugz8zZ4jTMdbAxAoRcU1rYN9xIXb64Zu6BvZ8Q4LJtATuI8K09ycr9r490AmlBzrPV7i5uH3yQPRRqQrE/0wwOcVHDLkhLz8DwWcPB99EnEJvhDL1+XN4/z8=</diagram></mxfile>
2005.06606/main_diagram/main_diagram.pdf
ADDED
Binary file (27.7 kB).
2005.06606/paper_text/intro_method.md
ADDED
@@ -0,0 +1,19 @@
# Introduction
The segmentation of rare words into subword units [\(Sennrich et al.,](#page-9-0) [2016;](#page-9-0) [Wu et al.,](#page-9-2) [2016\)](#page-9-2) has become a critical component of neural machine translation [\(Vaswani et al.,](#page-9-3) [2017\)](#page-9-3) and natural language understanding [\(Devlin et al.,](#page-8-0) [2019\)](#page-8-0). Subword units enable *open vocabulary* text processing with a negligible pre-processing cost and help maintain a desirable balance between the vocabulary size and decoding speed. Since subword vocabularies are built in an unsupervised manner [\(Sennrich et al.,](#page-9-0) [2016;](#page-9-0) [Wu et al.,](#page-9-2) [2016\)](#page-9-2), they can be easily adopted for any language.
Given a fixed vocabulary of subword units, rare words can be segmented into a sequence of subword units in different ways. For instance, "un+conscious" and "uncon+scious" are both suitable segmentations for the word "unconscious". This paper studies the impact of subword segmentation on neural machine translation, given a fixed subword vocabulary, and presents a new algorithm called *Dynamic Programming Encoding (DPE)*.
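To make the ambiguity concrete, here is a minimal sketch that enumerates every segmentation of a word permitted by a fixed subword vocabulary (the toy vocabulary below is a hypothetical illustration, not taken from the paper):

```python
def segmentations(word, vocab):
    """Enumerate all ways to split `word` into units drawn from `vocab`."""
    if not word:
        return [[]]
    results = []
    for end in range(1, len(word) + 1):
        prefix = word[:end]
        if prefix in vocab:
            for rest in segmentations(word[end:], vocab):
                results.append([prefix] + rest)
    return results

# Hypothetical toy vocabulary; a real subword vocabulary holds tens of thousands of units.
vocab = {"un", "uncon", "con", "sc", "scious", "conscious", "ious"}
print(segmentations("unconscious", vocab))
# 5 segmentations, including ['un', 'conscious'] and ['uncon', 'scious']
```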
We identify three families of subword segmentation algorithms in neural machine translation:
- 1. Greedy algorithms: [Wu et al.](#page-9-2) [\(2016\)](#page-9-2) segment words by recursively selecting the longest subword prefix. [Sennrich et al.](#page-9-0) [\(2016\)](#page-9-0) recursively combine adjacent word fragments that co-occur most frequently, starting from characters.
- 2. Stochastic algorithms [\(Kudo,](#page-9-4) [2018;](#page-9-4) [Provilkov](#page-9-1) [et al.,](#page-9-1) [2019\)](#page-9-1) draw multiple segmentations for source and target sequences resorting to randomization to improve robustness and generalization of translation models.
- 3. Dynamic programming algorithms, studied here, enable exact marginalization of subword segmentations for certain sequence models.
We view the subword segmentation of an output sentence in machine translation as a latent variable that should be marginalized to obtain the probability of the output sentence given the input. On the other hand, the segmentation of source sentences can be thought of as input features and can be randomized as a form of data augmentation to improve translation robustness and generalization. Unlike previous work, we recommend using two distinct segmentation algorithms for tokenizing source and target sentences: stochastic segmentation for source and dynamic programming for target sentences.
We present a new family of mixed character-subword transformers, for which simple dynamic programming algorithms exist for exact marginalization and MAP inference of subword segmentations. The time complexity of the dynamic programming algorithms is O(TV), where T is the length of the target sentence in characters, and V is the size of the subword vocabulary. By comparison, even computing the conditional probabilities of subword units in an autoregressive model requires O(TV) to estimate the normalizing constant of the categorical distributions. Thus, our dynamic programming algorithm does not incur additional asymptotic costs. We use a lightweight mixed character-subword transformer as a means of pre-processing translation datasets to segment output sentences using DPE for MAP inference.
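The exact marginalization can be phrased as a dynamic program over character positions. Below is a minimal log-space sketch, assuming a hypothetical scoring function `log_prob_subword(prefix, piece)` that conditions only on the character prefix (the property that makes the exact DP possible); this is an illustration, not the paper's implementation:

```python
import math

def log_marginal(target, vocab, log_prob_subword, max_len=10):
    """log p(target), marginalized over all subword segmentations of `target`.

    alpha[t] = log-probability of generating the first t characters, summed
    (log-sum-exp) over every segmentation of that prefix; O(T * V) overall.
    """
    T = len(target)
    alpha = [-math.inf] * (T + 1)
    alpha[0] = 0.0
    for t in range(1, T + 1):
        terms = []
        for k in range(max(0, t - max_len), t):
            piece = target[k:t]
            if piece in vocab:
                terms.append(alpha[k] + log_prob_subword(target[:k], piece))
        if terms:
            m = max(terms)
            alpha[t] = m + math.log(sum(math.exp(x - m) for x in terms))
    return alpha[T]
```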
The performance of a standard subword transformer [\(Vaswani et al.,](#page-9-3) [2017\)](#page-9-3) trained on WMT datasets tokenized using DPE is compared against Byte Pair Encoding (BPE) [\(Sennrich et al.,](#page-9-0) [2016\)](#page-9-0) and BPE dropout [\(Provilkov et al.,](#page-9-1) [2019\)](#page-9-1). Empirical results on English ↔ (German, Romanian, Estonian, Finnish, Hungarian) suggest that stochastic subword segmentation is effective for tokenizing source sentences, whereas deterministic DPE is superior for segmenting target sentences. DPE achieves an average improvement of 0.9 BLEU over greedy BPE [\(Sennrich et al.,](#page-9-0) [2016\)](#page-9-0) and an average improvement of 0.55 BLEU over stochastic BPE dropout [\(Provilkov et al.,](#page-9-1) [2019\)](#page-9-1) [1](#page-1-0) .
2010.01402/main_diagram/main_diagram.drawio
ADDED
The diff for this file is too large to render.
2010.01402/paper_text/intro_method.md
ADDED
@@ -0,0 +1,24 @@
# Method
We propose to solve the depth estimation problem for night-time images by posing it as a domain adaption problem in which a model pre-trained on day-time images is adapted to work for night-time images as well. The overall approach is shown in Figure [2](#fig:arch_diag){reference-type="ref" reference="fig:arch_diag"}. It consists of three steps. First, an encoder-decoder type deep network model ($F_d, G_d)$ is trained on day-time images to estimate depth directly from RGB images by using one of the existing methods as in [@godard2018digging], [@vankadari2019unsupervised], [@yin2018geonet], [@godard2017unsupervised], [@zhou2017unsupervised]. This is shown in Figure [2](#fig:arch_diag){reference-type="ref" reference="fig:arch_diag"}(a). The second step involves training a new image encoder $F_n$ with night-time images using adversarial discriminative learning that uses $F_d$ as the generator. This is shown in Figure [2](#fig:arch_diag){reference-type="ref" reference="fig:arch_diag"}(b). The third and the final step involves using the new encoder $F_n$ in conjunction with the day-time decoder $G_d$ for estimating depth directly from night-time images as shown in Figure [2](#fig:arch_diag){reference-type="ref" reference="fig:arch_diag"}(c).
The above three components of the proposed ADFA method are described in detail in the following subsections.
Estimating depth from monocular day-time images is an active field of research where deep learning methods have been applied successfully and several new benchmarks have been reported in the literature [@eigen2014depth], [@cao2018estimating], [@luo2018single], [@zhou2017unsupervised], [@yin2018geonet], [@vankadari2019unsupervised], [@babu2018undemon], [@luo2018every]. These deep networks have an encoder-decoder type architecture as shown in Figure [2](#fig:arch_diag){reference-type="ref" reference="fig:arch_diag"}(a). Such an architecture allows us to decompose the entire pipeline into two sub-networks, one for encoding (or extracting) features from input images and another for mapping these features to depth information. In unsupervised methods, the image reconstruction error is used as the loss function for training the entire model thereby avoiding the necessity of having the explicit ground truth depth information. The images are reconstructed by using spatial and/or temporal cues obtained from stereo or monocular sequence of images. The methods that use only temporal cues (such as optical flow) incorporate an additional network to estimate pose or ego motion required for image reconstruction [@zhou2017unsupervised],[@yin2018geonet]. The Depth-Net as shown in Figure [2](#fig:arch_diag){reference-type="ref" reference="fig:arch_diag"}(a) is composed of a series of convolutional and deconvolutional layers with different filter sizes. Given a monocular day-time image $I_d$, the image encoder $F_d$ generates, say, $L$ number of convolutional feature maps with different shapes and sizes, one from each layer. This feature map is represented as $F_d(I_d)= {f}_d = \{f_d^i\},\ i = 1, 2, \dots, L$, where $L$ is the total number of convolutional layers used in the image encoder. These feature maps are then passed to a depth-decoder $G_d$ to predict per-pixel depth map $\mathcal{D}$ of the input image $I_d$. One can use any of the existing methods (supervised or unsupervised) to learn the functions $F_d$ and $G_d$. In this work, we have used the state-of-the art depth-net model [@godard2018digging] as our $F_d$ and $G_d$ which are trained on the day-time monocular images. Since only monocular sequence of images are used for training, an additional pose network is required to estimate ego motion of the camera required for reconstructing images in the temporal domain. The encoder network $F_d$ is used to train a new encoder $F_n$ for night-images using an adversarial learning as explained in the next section.
Once the day-time image encoder $F_d$ and depth decoder $G_d$ are learned, our objective is to learn an image encoder $F_n$ that can generate the features maps $f_n$ from a night-time image $I_n$ which are indistinguishable from the day-time feature maps $f_d$ obtained from the day-time encoder $F_d$. There is no direct supervision signal available for computing the loss function from $f_d$ and $f_n$ as the input day and night images are *unpaired*. Here, the term *unpaired* means that these two images are not taken at the same time or at the same place. The encoder $F_n$ is trained to reduce the distance between the distributions of day and night feature spaces by using an adversarial training approach proposed in [@tzeng2017adversarial]. In this approach, the image encoder $F_n$ acts as a *generator* trying to generate feature maps from a night image $I_n$, which look similar to the day-time feature maps $f_d$ obtained from a day-time image $I_d$ using a day-time encoder $F_d$. These generated features maps are then evaluated by a *discriminator* network $D$ that tries not to get fooled by the generator by assigning correct labels to them. In this way, the generator learns to generate day-like feature maps from the night-time images by playing a zero-sum min-max game with the discriminator.
Unlike a regular GAN discriminator which assigns a single scalar value for a given input, a patch-based discriminator [@isola2017image] assigns a grid of $m\times n$ scalar values for a given feature map. Each value of this grid is a probability ranging from $0$ (night) to $1$ (day) and it corresponds to a patch of the input feature map. This allows the discriminator to evaluate the input feature maps locally thereby, providing superior distinguishing ability compared to normal GAN discriminators. In addition, the patch-based discriminators are fully convolutional and hence, are computationally much faster compared to the other discriminator models that use fully-connected layers along with the convolutional layers [@vankadari2019unsupervised].
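A minimal sketch of such a patch-based discriminator (a fully convolutional stack; layer sizes and names are illustrative assumptions, not the authors' exact architecture):

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Map a feature map to an m x n grid of probabilities, one per patch,
    instead of a single scalar for the whole input."""
    def __init__(self, in_channels, base_channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, base_channels, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base_channels, base_channels * 2, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base_channels * 2, 1, kernel_size=3, stride=1, padding=1),
        )

    def forward(self, feature_map):
        # Each output value is the probability that the corresponding patch
        # comes from a day-time feature map.
        return torch.sigmoid(self.net(feature_map))
```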
Instead of training a single discriminator network on the feature maps obtained from the final convolutional layer of the image encoder, as is done in [@nath2018adadepth; @tzeng2017adversarial], we train multiple discriminators, one for each layer of the encoder network, to constrain the solution space further. Hence, the proposed multi-stage patch-based discriminator is composed of $L$ discriminators, where each discriminator $D_i$ takes the feature maps $(f_n^i,f_d^i)$ obtained from the $i$-th convolutional layer of the encoder networks $(F_n, F_d)$ as input. This multi-stage discriminator is shown to provide superior domain adaptation performance, which will be discussed later in the experiments section.
The proposed method is an unsupervised learning approach which neither uses any explicit ground truth nor paired day-night image examples to calculate losses for training. Instead, we entirely rely on adversarial losses calculated using the discriminator module. The loss functions to learn $F_n$ and $D$ can be expressed as follows: $$\begin{eqnarray}
\label{eqn:gan_loss}
\mathcal{L}_{GAN}(F_n,D) & = &\min_{F_n}\max_{D}V(F_n,D)= \mathbb{E}_{f_d \sim F_d(I_d)}[\log(D(f_d))]\:\nonumber \\
& &\qquad \qquad +\mathbb{E}_{f_n \sim F_n(I_n)}\left[\log(1-D(f_n))\right] \\
%
\min_{F_n} L_{{F_n}}(F_n,D,I_n) & = & \frac{1}{L}\sum_{i=1}^{L} -\:\mathbb{E}_{f_n \sim F_n(I_n)}\left[\sum_{m,n}\log\left[D_i(f_n^i)\right]_{m,n}\right] \\
%
\min_{D} L_{D}(F_d,F_n,D,I_d,I_n)& = & \frac{1}{L}\sum_{i=1}^{L}-\:\mathbb{E}_{f_d \sim F_d(I_d)}\left[\sum_{m,n}\log\left[D_i(f_d^i)\right]_{m,n}\right] \:\nonumber \\
& &\quad -\:\mathbb{E}_{f_n \sim F_n(I_n)}\left[\sum_{m,n}\log\left(1-\left[D_i(f_n^i)\right]_{m,n}\right)\right]
\end{eqnarray}$$ The details about our experimental setup and various experiments conducted are explained in the following section.
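A minimal sketch of how the generator and discriminator losses above could be computed, using the `PatchDiscriminator` sketched earlier (a simplified interpretation: per-patch sums are replaced by means, and all names are illustrative, not the authors' code):

```python
import torch

def generator_loss(discriminators, night_feats):
    """L_{F_n}: push every discriminator to label night-time feature maps as day-like."""
    losses = []
    for D_i, f_n_i in zip(discriminators, night_feats):
        pred = D_i(f_n_i)                              # grid of patch probabilities
        losses.append(-torch.log(pred + 1e-8).mean())
    return torch.stack(losses).mean()                  # average over the L stages

def discriminator_loss(discriminators, day_feats, night_feats):
    """L_D: label day-time patches as 1 and night-time patches as 0."""
    losses = []
    for D_i, f_d_i, f_n_i in zip(discriminators, day_feats, night_feats):
        real = D_i(f_d_i.detach())
        fake = D_i(f_n_i.detach())                     # stop gradients into the generator F_n
        losses.append(-(torch.log(real + 1e-8).mean()
                        + torch.log(1.0 - fake + 1e-8).mean()))
    return torch.stack(losses).mean()
```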
2012.03500/main_diagram/main_diagram.drawio
ADDED
The diff for this file is too large to render.
2012.03500/paper_text/intro_method.md
ADDED
@@ -0,0 +1,78 @@
# Introduction
Text-to-Speech (TTS) is an important task in speech processing. With rapid progress in deep learning, TTS technology has received widespread attention in recent years. The most popular neural TTS models are autoregressive models based on an encoder-decoder framework [@Tacotron; @Tacotron2; @deepvoice3; @clarinet; @transformerTTS; @Flowtron]. In this framework, the encoder takes the text sequence as input and learns its hidden representation, while the decoder generates the outputs frame by frame, i.e., in an autoregressive manner. As the performance of autoregressive models has been substantially improved, synthesis efficiency is becoming a new research hotspot.
Recently, significant efforts have been dedicated to the development of non-autoregressive TTS models [@fastspeech; @fastspeech2; @flow-tts; @ParaNet]. However, most existing non-autoregressive TTS models suffer from complex training procedures, high computational cost or training time cost, making them not suited for real-world applications. In this work, we propose EfficientTTS, an efficient and high-quality text-to-speech architecture. Our contributions are summarized as follows,
- We propose a novel approach to produce soft or hard monotonic alignments for sequence-to-sequence models on top of a general attention mechanism, with almost no increase in computation. Most importantly, the proposed approach can be incorporated into any attention mechanism without constraints on network structure.
- We propose EfficientTTS, a non-autoregressive architecture to perform high-quality speech generation from text sequence without additional aligners. EfficientTTS is fully parallel, fully convolutional, and is trained end-to-end, thus being quite efficient for both training and inference.
- We develop a family of TTS models based on EfficientTTS, including: (1) EFTS-CNN, a convolutional model that learns melspectrograms with high training efficiency; (2) EFTS-Flow, a flow-based model that enables parallel melspectrogram generation with controllable speech variation; (3) EFTS-Wav, a fully end-to-end model that directly learns waveform generation from a text sequence. We experimentally show that the proposed models achieve significant improvements in speech quality, synthesis speed and training efficiency, in comparison with the counterpart models Tacotron 2 and Glow-TTS.
- We also show, at the end of this paper, that the proposed approach can be easily extended to autoregressive models such as Tacotron 2.
The rest of the paper is structured as follows. Section $2$ discusses related work. We introduce monotonic alignment modeling using *index mapping vector* in Section $3$. The EfficientTTS architecture is introduced in Section $4$. In Section $5$, the EfficientTTS models are presented. Section $6$ demonstrates experimental results and implementation details. Finally, Section $7$ concludes the paper.
# Method
The overall architecture design of EfficientTTS is shown in Fig. [2](#model-arch){reference-type="ref" reference="model-arch"}. In the training phase we learn the IMV from the hidden representations of text sequence and melspectrogram through an IMV generator. The hidden representations of text sequence and melspectrogram are learned from a text-encoder and a mel-encoder respectively. IMV is then converted to a 2-dimensional alignment matrix which is further used to generate the time-aligned representation through an alignment reconstruction layer. The time-aligned representation is passed through a decoder producing the output melspectrogram or waveform. We concurrently train an aligned position predictor which learns to predict aligned position in output timestep for each input text token. In the inference phase, we reconstruct the alignment matrix from predicted aligned positions. We show detailed implementation in the following subsections and pseudocode of each components in Appendix D.
<figure id="model-arch" data-latex-placement="t">
<figcaption>Overall model architecture. </figcaption>
</figure>
We use a text-encoder and a mel-encoder to convert text symbols and melspectrograms to powerful hidden representations respectively.
In the implementation of the text-encoder, we use a learned embedding to convert the text sequence to a sequence of high-dimensional vectors. The high-dimensional vectors are then passed through a stack of convolutions interspersed with weight normalization [@wn] and Leaky ReLU activation. We also add a residual connection for each convolution to allow for deep networks.
In the implementation of the mel-encoder, we first convert melspectrograms to high-dimensional vectors through a linear projection. Same as the text-encoder, mel-encoder consists of a stack of convolutions interspersed with weight normalization, Leaky ReLU activation, and residual connection. Note that mel-encoder is only used in the training phase.
In order to generate a monotonic IMV in the training phase, we first learn the alignment $\boldsymbol{\alpha}$ between the input and output through a scaled dot-product attention [@Transformer] as given in Eq. ([\[sa\]](#sa){reference-type="ref" reference="sa"}), and then compute IMV from $\boldsymbol{\alpha}$. $$\begin{equation}
\alpha_{i,j} = \frac{\exp{(-D^{-0.5}(\boldsymbol{q_j} \cdot \boldsymbol{k_i}))}}{ \sum_{m=0}^{T_1-1}\exp{(-D^{-0.5}(\boldsymbol{q_j} \cdot \boldsymbol{k_m}))}}, \label{sa}
\end{equation}$$ where $\boldsymbol{q}$ and $\boldsymbol{k}$ are the outputs of the mel-encoder and the text-encoder, and $D$ is the dimensionality of $\boldsymbol{q}$ and $\boldsymbol{k}$.
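A minimal numpy sketch of this alignment computation, assuming `q` of shape `[T2, D]` and `k` of shape `[T1, D]` (an illustration, not the released implementation):

```python
import numpy as np

def alignment_matrix(q, k):
    """alpha[i, j]: alignment between text token i (from k) and mel frame j (from q)."""
    D = q.shape[-1]
    scores = -(q @ k.T) / np.sqrt(D)               # [T2, T1]; sign follows the equation above
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    alpha = np.exp(scores)
    alpha /= alpha.sum(axis=-1, keepdims=True)     # normalize over the T1 (text) axis
    return alpha.T                                 # [T1, T2]
```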
A simple way to compute IMV is to follow Eq. ([\[defini\]](#defini){reference-type="ref" reference="defini"}). However, since scaled dot-product attention has no constraint on monotonicity, in our preliminary experiments, a SMA loss was incorporated for training. But we further discovered that HMA is more efficient. We follow Eq. ([\[step1\]](#step1){reference-type="ref" reference="step1"},[\[step2\]](#step2){reference-type="ref" reference="step2"},[\[step3\]](#step3){reference-type="ref" reference="step3"},[\[scale_func\]](#scale_func){reference-type="ref" reference="scale_func"}) in implementing HMA. In experiments we compare the effects of different monotonic strategies.
In the inference phase, the model needs to predict the IMV $\boldsymbol{\pi}$ from the hidden representation of text sequence $\boldsymbol{h}$, which is challenging in practice. There are two limitations: (1) $\boldsymbol{\pi}$ is time-aligned, which is in high resolution but $\boldsymbol{h}$ is in low resolution; (2) Each prediction of $\pi_i$ affects later prediction of $\pi_j$ $(j > i)$ due to cumulative sum operation introduced in Eq. ([\[step3\]](#step3){reference-type="ref" reference="step3"}), making it difficult to predict $\boldsymbol{\pi}$ in parallel. Fortunately, the limitations can be alleviated by predicting the aligned positions $\boldsymbol{e}$ of each input token instead.
{#shi width="7.56cm" height="5.24cm"}
We define Eq. ([\[defini\]](#defini){reference-type="ref" reference="defini"}) as a transformation $m(\cdot)$: $\boldsymbol{\pi} = m(\boldsymbol{q})$. Since both $\boldsymbol{\pi}$ and $\boldsymbol{q}$ are monotonic and continuous across timesteps, the transformation $m(\cdot)$ is monotonic and continuous, and thus invertible: $$\begin{equation}
\boldsymbol{q}= m^{-1}(\boldsymbol{\pi}),
\end{equation}$$ The aligned positions $\boldsymbol{e}$ in output timestep for each input token can be computed as: $$\boldsymbol{e}= m^{-1}(\boldsymbol{p}),\qquad \boldsymbol{p}=\{0,1,...,T_1-1\}.$$ We illustrate the relations of $m(\cdot),\boldsymbol{e},\boldsymbol{\pi}$ in Fig. [3](#shi){reference-type="ref" reference="shi"}. In order to compute $\boldsymbol{e}$, we first compute the probability density matrix $\boldsymbol{\gamma}$ utilizing a similar transformation as Eq. ([\[recons2\]](#recons2){reference-type="ref" reference="recons2"}). The only difference is that the probability density is computed on different dimensions. The aligned position $\boldsymbol{e}$ is the weighted sum of the output index vector $\boldsymbol{q}$ weighted by $\boldsymbol{\gamma}$. $$\begin{equation}
\gamma_{i,j} = \frac{\exp{(-\sigma^{-2}(p_i - \pi_j)^2)}}{ \sum_{n=0}^{T_2-1}\exp{(-\sigma^{-2}(p_i - \pi_n)^2)}}, \label{recons_new}
\end{equation}$$ $$\begin{equation}
e_{i} = \sum_{n=0}^{T_2-1} \gamma_{i,n}*q_n. \label{e_calc}
\end{equation}$$ As can be seen, the computation of $\boldsymbol{e}$ is differentiable, which allows for training by gradient methods; thus it can be used in both training and inference. Besides, $\boldsymbol{e}$ is predictable, because: (1) the resolution of $\boldsymbol{e}$ is the same as that of $\boldsymbol{h}$; (2) we can learn the relative positions $\Delta \boldsymbol{e}, (\Delta e_i = e_i- e_{i-1}, 1 \le i \le T_1-1)$ instead of directly learning $\boldsymbol{e}$ to overcome the second limitation.
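A minimal numpy sketch of the two equations above (array names and the value of `sigma` are illustrative assumptions):

```python
import numpy as np

def aligned_positions(pi, T1, sigma=1.0):
    """Aligned position e_i for each of the T1 input tokens, given the IMV pi (shape [T2])."""
    T2 = pi.shape[0]
    p = np.arange(T1)[:, None]                       # input token indices, [T1, 1]
    q = np.arange(T2)[None, :]                       # output frame indices, [1, T2]
    logits = -((p - pi[None, :]) ** 2) / sigma ** 2  # [T1, T2]
    gamma = np.exp(logits - logits.max(axis=1, keepdims=True))
    gamma /= gamma.sum(axis=1, keepdims=True)        # normalize over output frames
    return (gamma * q).sum(axis=1)                   # weighted sum of output indices
```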
The aligned position predictor consists of 2 convolutions, each followed by the layer normalization and ReLU activation. We regard $\Delta \boldsymbol{e}$ computed from $\boldsymbol{\pi}$ as the training target. The loss function between the estimated position $\Delta \hat{\boldsymbol{e}}$ and the target one $\Delta \boldsymbol{e}$ is computed as: $$\begin{equation}
\mathcal{L}_{ap} = \Vert \log{(\Delta \hat{\boldsymbol{e}} + \epsilon )} - \log{(\Delta \boldsymbol{e} + \epsilon)} \Vert_1,
\end{equation}$$ where $\epsilon$ is a small number to avoid numerical instabilities. The goal of the log-scale loss is to accurately fit small values, which tends to be more important towards the later phases of training. The aligned position predictor is learned jointly with the rest of the model. Because we generate alignments by leveraging the aligned positions, as a side benefit, EfficientTTS inherits the speech-rate controllability of duration-based non-autoregressive TTS models.
In order to map the input hidden representations $\boldsymbol{h}$ to time-aligned representations, an alignment matrix is needed for both training and inference. We can alternatively construct the alignment from the IMV or from the aligned positions. For most situations, Eq. ([\[recons2\]](#recons2){reference-type="ref" reference="recons2"}) is an effective way to reconstruct the alignment matrix from the IMV. But because we predict aligned positions rather than the IMV during inference, to be consistent, we reconstruct the alignment matrix from the aligned positions $\boldsymbol{e}$ for both training and inference. Specifically, we take the aligned positions $\boldsymbol{e}$ computed from Eq. ([\[e_calc\]](#e_calc){reference-type="ref" reference="e_calc"}) for training, and the ones predicted by the aligned position predictor for inference.
We follow a similar idea to EATS [@EATS] in reconstructing the alignment matrix $\boldsymbol{\alpha'}$, introducing a Gaussian kernel centered on the aligned positions $\boldsymbol{e}$. $$\begin{equation}
\alpha'_{i,j} = \frac{\exp{(-\sigma^{-2}(e_i - q_j)^2)}}{ \sum_{m=0}^{T_1-1}\exp{(-\sigma^{-2}(e_m - q_j)^2)}},
\end{equation}$$ where $\boldsymbol{q}=\{0,1,...,T_2-1\}$ is the index vector of output sequence. The length of output sequence $T_2$ is known in training and computed from $\boldsymbol{e}$ in inference: $$\begin{equation}
T_2 = e_{T_1-1} + \Delta e_{T_1-1}.
\end{equation}$$ Although the reconstructed alignment matrix may not be as accurate as the one computed by Eq. ([\[recons2\]](#recons2){reference-type="ref" reference="recons2"}) (due to the low resolution of $\boldsymbol{e}$), the effect on the output is small because the network is able to compensate. As a result, we enjoy an improvement in speech quality owing to the increased consistency between training and inference.
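A minimal numpy sketch of this reconstruction, the mirror image of the computation of $\boldsymbol{e}$ above (again, names and `sigma` are illustrative assumptions):

```python
import numpy as np

def reconstruct_alignment(e, T2, sigma=1.0):
    """Rebuild the [T1, T2] alignment matrix alpha' from the aligned positions e (shape [T1])."""
    q = np.arange(T2)[None, :]                      # output frame indices, [1, T2]
    logits = -((e[:, None] - q) ** 2) / sigma ** 2  # [T1, T2]
    alpha = np.exp(logits - logits.max(axis=0, keepdims=True))
    alpha /= alpha.sum(axis=0, keepdims=True)       # normalize over input tokens, per the equation above
    return alpha
```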
We illustrate $\boldsymbol{\pi}$ and the reconstructed $\boldsymbol{\alpha}$ of same utterance in Fig. [4](#imvpng){reference-type="ref" reference="imvpng"}. As can be seen, $\boldsymbol{\alpha}$ is diagonal at first training step, and it quickly converges at $10$k$^{th}$ training step, which is significantly fast. We map the output of text-encoder $\boldsymbol{h}$ to a time-aligned representation by making use of $\boldsymbol{\alpha}'$ following Eq. ([\[mapping_alpha\]](#mapping_alpha){reference-type="ref" reference="mapping_alpha"}). The time-aligned representation is then fed as input to decoder.
<figure id="imvpng" data-latex-placement="t">
<img src="imv_2.png" style="width:16.5cm;height:4.8cm" />
<figcaption> IMV and reconstructed alignments in training phase of different training steps. The training steps are <span class="math inline">1</span>, <span class="math inline">10</span>k and <span class="math inline">270</span>k respectively. The whole model converges at training step <span class="math inline">270</span>k.</figcaption>
</figure>
Since both the input and the output of the decoder are time-aligned, it is easy to implement the decoder with parallel structures. In the next section, we develop three models based on EfficientTTS with different decoder implementations.
We first parameterize the decoder by a stack of convolutions. Each convolution is interspersed with weight normalization, Leaky ReLU activation, and residual connection. We add a linear projection at the end to generate melspectrogram. Mean square error (MSE) is used as the reconstruction error. The overall training objective of EFTS-CNN is a combination of aligned position loss and MSE loss of melspectrogram.
To let the TTS model have the ability to control the variations of generated speech, we implement a flow-based decoder. In the training phase, we learn a transformation $f$ from the melspectrogram to a high dimensional Gaussian distribution $\mathcal{N}(\boldsymbol{0},\boldsymbol{1})$ by directly maximizing the likelihood, conditioning on the time-aligned representation. $f$ is invertible with a strategically designed structure. Specifically, it consists of several flow steps, and each flow step consists of two elemental invertible transformations: an invertible linear layer and an affine coupling layer. To improve the diversity of generated speech, we sample the latent variable $\boldsymbol{z}$ from the Gaussian distribution $\mathcal{N}(\boldsymbol{0},\boldsymbol{1})$ during inference, interpolate $\boldsymbol{z}$ with a zero vector $\boldsymbol{o}$ using a temperature factor $t$, and invert the transformation $f$ to produce the melspectrogram. $$\begin{align}
&\boldsymbol{z}' = t*\boldsymbol{z} + \boldsymbol{o}*(1-t),0 \le t \le 1 \\
&\boldsymbol{x} = f^{-1}(\boldsymbol{z}').
\end{align}$$ For the sake of simplicity, we follow the decoder structure of Flow-TTS [@flow-tts] in implementing our flow-based decoder. The overall training objective of EFTS-Flow is a combination of the aligned position loss and the maximum likelihood estimation (MLE) loss.
To simplify the two-stage training pipeline and train TTS models in a fully end-to-end manner, we develop a text-to-wav model by combining EfficientTTS with a dilated convolutional adversarial decoder. The decoder structure is similar to MelGAN [@Melgan] except: (1) the input of the generator is high dimensional hidden representations, not 80-channel melspectrograms; (2) a multi-resolution STFT loss is incorporated at the end of the generator. We adopt the same structure as the MelGAN discriminator for adversarial training. Similar to ClariNet [@clarinet] and EATS [@EATS], we train the MelGAN part by conditioning on sliced input corresponding to $1$s audio clips, while the other parts are trained on the whole-length utterance. We add a linear projection on the whole-length decoder input to generate a melspectrogram concurrently, which allows EFTS-Wav to learn the whole-length alignment at each training step. The overall training objective of the EFTS-Wav generator is a linear combination of the melspectrogram reconstruction loss, the MelGAN generator loss, the multi-resolution STFT loss, and the aligned position loss.
2103.04503/main_diagram/main_diagram.drawio
ADDED
The diff for this file is too large to render.
2103.04503/paper_text/intro_method.md
ADDED
@@ -0,0 +1,71 @@
# Introduction
Human-Object Interaction (HOI) detection plays an important role in high-level human-centric scene understanding, and has attracted considerable research interest recently. HOI research can also contribute to other tasks, such as action analysis, weakly-supervised object detection, and visual question answering.
<figure id="fig:12shot" data-latex-placement="t">
<embed src="fig/shoutu.pdf" />
<figcaption>Comparison of recent approaches to HOI detection. (a) Two-stage methods typically use pre-trained detectors to generate human and object proposals and enumerate all (human, object) pairs, followed by a multi-stream architecture to classify the interaction. (b) One-stage methods detect interaction point/box and object proposals simultaneously, followed by a complex matching process to assign interactions to object pairs. (c) Our end-to-end method takes an image as input and predicts HOI instances directly.</figcaption>
</figure>
The goal of HOI detection is to localize humans and objects, as well as to recognize the interaction between them. Previous studies [@gao2018ican; @eccv2020dualgraph; @kaiming2018detecting; @eccv2020keycues; @liao2020ppdm; @eccv2020uniondet] present promising results on HOI detection by decoupling this task into object detection and interaction classification (Fig. [1](#fig:12shot){reference-type="ref" reference="fig:12shot"}(a)). More specifically, human and object detection results are first obtained by a pre-trained object detector, then interaction classification is conducted on the pair-wisely combined human-object proposals. The limitations of these methods are mainly caused by the *separated* two stages. The independent optimization of the two sub-problems may lead to a sub-optimal solution. The generated human-object proposals have relatively low quality for interaction classification [@liao2020ppdm], because only object-level confidence has been taken into account. Moreover, all pair-wise proposals need to be processed, which brings a large redundant computation cost.
More recent approaches [@wang2020irnet; @eccv2020uniondet; @liao2020ppdm] have introduced a surrogate interaction detection problem to optimize HOI detection indirectly (Fig. [1](#fig:12shot){reference-type="ref" reference="fig:12shot"}(b)). Firstly, the interaction proposal is pre-defined based on human priors. For example, UnionDet [@eccv2020uniondet] defines the interaction proposal as the union box of the human and object boxes. PPDM [@liao2020ppdm] uses the center point between human and object as the interaction point. Secondly, the human, object and interaction proposals are detected in parallel. Finally, each interaction result is assigned to one (human, object) pair based on a pre-defined matching strategy in post-processing. However, such definitions of interaction proposals are not always valid under different circumstances, and they make the pipeline more complex and computationally costly.
For HOI detection, the main problem is how to capture the dependencies, especially long-range ones, between human and object in the image space. The above methods used complex but sub-optimal strategies, i.e., decoupling into two stages or introducing surrogate proposals, to give models the ability to capture such dependencies. However, the transformer network [@transformer] is designed to exhaustively capture long-range dependencies, which inspires us to address the problem with a transformer.
In this paper, we propose a new architecture to directly predict the HOI instance, i.e. (human, object, interaction), in an *end-to-end* manner. Our method consists of two parts, a transformer encoder-decoder architecture and a quintuple HOI matching loss. The architecture first uses a CNN backbone to extract high-level image features, then the encoder is leveraged to generate a global memory feature, which models the relations between the image features explicitly. Next, the global memory from the encoder and the HOI queries are sent to the decoder to generate the output embeddings. Finally, a multi-layer perceptron is used to predict HOI instances based on the output embeddings of the decoder. Meanwhile, a quintuple HOI matching loss is proposed to supervise the learning of HOI instance prediction. Our method achieves state-of-the-art results on different challenging HOI benchmarks.
# Method
Different from previous methods, we solve human-object interaction detection in an end-to-end manner both in training and inference: input an image and then output the HOI relations directly, without any post processing. The proposed method consists of two main parts, an end-to-end transformer encoder-decoder architecture and a quintuple HOI instance matching loss.
The proposed architecture, illustrated in Fig. [2](#fig:frame){reference-type="ref" reference="fig:frame"}, consists of three main parts: (i) a backbone to extract visual features from the input image, (ii) a transformer encoder-decoder to digest the backbone features and produce output embeddings, and (iii) a multi-layer perceptron (MLP) to predict HOI instances.
**Backbone**: A CNN backbone is used to extract visual features from the input image. First, a color image is fed into the backbone, which generates a feature map of shape $(H, W, C)$ that contains high level semantic concepts. A $1 \times 1$ convolution layer is used to reduce the channel dimension from $C$ to $d$. A flatten operator is used to collapse the spatial dimensions into one dimension. After that, a feature map of shape $[H \times W, d]$ is obtained, denoted as *flatten feature* in Fig. [2](#fig:frame){reference-type="ref" reference="fig:frame"}. The spatial dimension transformation is important because the following transformer encoder requires a sequence as input; thus the feature map can be interpreted as a sequence of length $H \times W$, where the value at each time step is a vector of size $d$. We use ResNet [@he2016deep] as our backbone and reduce the dimension of the conv-5 feature from $C=2048$ to $d=256$.
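A minimal sketch of this projection-and-flatten step (shapes follow the text; module names are illustrative assumptions):

```python
import torch
import torch.nn as nn

class FlattenProjection(nn.Module):
    """Project a backbone feature map (B, C, H, W) to a sequence (B, H*W, d)."""
    def __init__(self, in_channels=2048, d_model=256):
        super().__init__()
        self.proj = nn.Conv2d(in_channels, d_model, kernel_size=1)

    def forward(self, feature_map):
        x = self.proj(feature_map)            # (B, d, H, W)
        return x.flatten(2).transpose(1, 2)   # (B, H*W, d): one d-dim token per spatial location

# Usage on a dummy conv-5 feature map:
tokens = FlattenProjection()(torch.randn(1, 2048, 25, 34))
print(tokens.shape)  # torch.Size([1, 850, 256])
```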
**Encoder**: The encoder layer is built upon the standard transformer architecture with a multi-head self-attention module and a feed-forward network (FFN). Theoretically, the transformer architecture is permutation invariant. To enable it to distinguish relative positions in the sequence, positional encoding [@parmar2018image; @bello2019attention] is added to the input of each attention layer. The sum of the flatten feature and the *positional encoding* is fed into the transformer encoder to summarize global information. The output of the encoder is denoted as *global memory* in Fig. [2](#fig:frame){reference-type="ref" reference="fig:frame"}.
**Decoder**: The decoder layer is also built upon the transformer architecture. Different from encoder layer, it contains an additional multi-head cross attention layer. The decoder transforms $N$ learnt positional embeddings (denoted as *HOI queries* in Fig. [2](#fig:frame){reference-type="ref" reference="fig:frame"}) into $N$ output embeddings. They are then decoded into HOI instances by the following MLP, which will be detailed in next section. In general, the decoder has three inputs, one is the global memory from encoder, one is HOI queries, and one is positional encoding. For multi-head cross attention layer, the *Value* comes from global memory directly. The *Key* is the sum of global memory and the input position encoding. The *Query* is the sum of input position encoding and the input HOI queries. For self-attention layer, all of the Query, Key, Value come from the HOI queries or the output of previous decoder layer. The output of the decoder is denoted as *output embeddings* in Fig. [2](#fig:frame){reference-type="ref" reference="fig:frame"}. This architecture design follows [@detr].
**MLP for HOI Prediction**: We define each HOI instance as a quintuple of (human class, interaction class, object class, human box, object box). The output embedding for each HOI query is decoded into one HOI instance by several multi-layer perceptron (MLP) branches. Specifically, there are three one-layer MLP branches to predict the human confidence, object confidence and interaction confidence respectively, and two three-layer MLP branches to predict the human box and the object box. All one-layer MLP branches for predicting confidence use a softmax function. For the human confidence branch, the output size is 2, implying the confidences for foreground and background. For the object confidence branch and the interaction confidence branch, the output size is $C+1$, implying the confidences for all $C$ kinds of objects or verbs defined in the dataset plus one for background. For both the human and object box branches, the output size is 4, implying the normalized center coordinates $(x_{c}, y_{c})$, height and width of the box.
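A minimal sketch of these prediction heads (the class counts, hidden sizes, and the sigmoid on the box outputs are illustrative assumptions, not the paper's configuration):

```python
import torch
import torch.nn as nn

class HOIHeads(nn.Module):
    """Decode each transformer output embedding into one HOI quintuple."""
    def __init__(self, d_model=256, num_obj_classes=80, num_verb_classes=117):
        super().__init__()
        self.human_cls = nn.Linear(d_model, 2)                    # foreground / background
        self.obj_cls = nn.Linear(d_model, num_obj_classes + 1)    # +1 for background
        self.verb_cls = nn.Linear(d_model, num_verb_classes + 1)  # +1 for background

        def box_mlp():  # three-layer MLP producing (x_c, y_c, h, w)
            return nn.Sequential(
                nn.Linear(d_model, d_model), nn.ReLU(),
                nn.Linear(d_model, d_model), nn.ReLU(),
                nn.Linear(d_model, 4),
            )
        self.human_box = box_mlp()
        self.obj_box = box_mlp()

    def forward(self, emb):  # emb: (batch, N queries, d_model)
        return {
            "human_cls": self.human_cls(emb).softmax(-1),
            "obj_cls": self.obj_cls(emb).softmax(-1),
            "verb_cls": self.verb_cls(emb).softmax(-1),
            # sigmoid keeps the normalized box values in [0, 1] (an assumption here)
            "human_box": self.human_box(emb).sigmoid(),
            "obj_box": self.obj_box(emb).sigmoid(),
        }
```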
<figure id="fig:me" data-latex-placement="t">
<embed src="fig/m3.pdf" />
<figcaption>Illustration of the matching strategy between HOI ground-truth (black) and prediction (other colors). An HOI instance is represented by a pair of boxes in the same color. <span class="math inline"><em>h</em></span> and <span class="math inline"><em>o</em></span> represent human and object respectively.</figcaption>
</figure>
The HOI instance is a quintuple of $\left(c_{h}, c_{r}, c_{o}, b_{h}, b_{o}\right)$, where $\left(c_{h}, c_{r}, c_{o} \right)$ denotes human, interaction and object class confidence, $\left(b_{h}, b_{o}\right)$ is the bounding box of the human and object. Two-stage HOI detectors first predict the object proposals $(c_{h}, b_{h}), (c_{o}, b_{o})$ with an object detector, then enumerate the detected (human, object) pairs to predict the $c_{r}$ by interaction classification. In other words, they are trying to approximate the following probability in a given dataset, $$\begin{equation}
\begin{aligned}
p(h, r, o) &= p(h, o) p(r|h, o) \\
&\approx p(h) p(o) p(r|h, o)
\end{aligned}
\end{equation}$$ where $p(h)$ and $p(o)$ indicate the confidence of human and object bounding box, respectively. $p(r|h, o)$ denotes the probability of interaction $r$ given human box $h$ and object box $o$, often implemented by a multi-stream interaction recognition model. In this method, the object detector and the interaction classifier are *separately* optimized.
On the contrary, we treat HOI detection as a set prediction problem of bipartite matching between predictions and ground truth. Our method directly predicts the elements in HOI set and optimizes the proposed HOI matching loss in a unified way.
As shown in Fig. [3](#fig:me){reference-type="ref" reference="fig:me"}(a), suppose a ground truth (human, fly, object) is in the image, and the model predicts two HOI instances: the yellow one (human, fly, object), and the blue one (human, hold, object). The yellow one not only predicts the interaction correctly but also localizes the human and object more accurately. To minimize the matching cost, it is more suitable to assign the black one to the yellow one, and assign $\emptyset$ (meaning no ground truth) to the blue one. A precise and complete matching strategy is formulated in the following.
Assume the model outputs a fixed-size set of $N$ predictions, where $N$ is chosen to be larger than the number of HOI relations in one image. Let us denote the set of predicted HOIs as $P=\{p^i, i=1,2,...,N\}$ and the set of ground truth HOIs as $G=\{g^i, i=1,2,...,M, \emptyset ,... ,\emptyset \}$, where $M \leq N$. $M$ represents the number of ground-truth HOIs in an image. By padding $\emptyset$ to the ground truth set, we make the lengths of the two sets equal.
We denote the matching as an injective function: $\sigma_{G \rightarrow P}$, where $\sigma(i)$ is the index of predicted HOI assigned to the $i$-th groundtruth. The matching cost function is defined as: $$\begin{equation}
\mathcal{L}_{cost} = \sum_{i}^{N} \mathcal{L}_{\operatorname{match}}\left(g^{i}, p^{\sigma(i)}\right)
\end{equation}$$ where $\mathcal{L}_{\operatorname{match}}\left(g^{i}, p^{\sigma(i)}\right)$ is a matching cost between ground truth $g^{i}$ and prediction $p^{\sigma(i)}$.
In each step of training, we should first find an optimal one-to-one matching between the ground truth set and the current prediction set. We design the following matching cost for HOI:
::: small
$$\begin{equation}
\mathcal{L}_{\operatorname{match}}\left(g^{i}, p^{\sigma(i)}\right)= \beta_{1} \sum_{j \in \{h, o, r\}}{\alpha_j \mathcal{L}_{\operatorname{cls}}^j} + \beta_{2} \sum_{k \in \{h, o\}}{ \mathcal{L}_{\operatorname{box}}^{k} }
\label{eq:matchcost}
\end{equation}$$
:::
where $\mathcal{L}_{\operatorname{cls}}^j = \mathcal{L}_{\operatorname{cls}}\left( g^{i}_{j}, p^{\sigma\left(i\right)}_{j}\right)$, $j \in \{h, o, r\}$ represents human, object, and interaction, and $g^{i}_{j}$ denotes the category label of $j$ on ground-truth $g^{i}$. We use the standard softmax cross-entropy loss in the paper. $\mathcal{L}_{\operatorname{box}}^{k}$ is the box regression loss for the human box and the object box, for which the weighted sum of the GIoU [@giouloss] loss and the $L_1$ loss is used. $\alpha$ and $\beta$ are hyper-parameters for the loss weights, which will be discussed later in the ablation study.
We use the Hungarian algorithm [@kuhn1955hungarian; @detr] to solve the following problem and find a bipartite matching. $$\begin{equation}
\hat{\sigma}=\underset{\sigma \in \mathfrak{S}_{N} }{\arg \min } \mathcal{L}_{\operatorname{cost}}
\end{equation}$$ where $\mathfrak{S}_{N}$ denotes the one-to-one matching solution space.
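A minimal sketch of this matching step using SciPy's Hungarian solver (the cost entries below are toy placeholders for $\mathcal{L}_{\operatorname{match}}$; in the real model they would combine the classification and box terms above):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match(cost):
    """cost[i, j] = L_match(ground truth i, prediction j); returns sigma as a dict i -> j."""
    gt_idx, pred_idx = linear_sum_assignment(cost)   # Hungarian algorithm
    return dict(zip(gt_idx.tolist(), pred_idx.tolist()))

# Toy example: 2 ground-truth HOIs, 4 predictions (queries); unmatched queries get no target.
cost = np.array([[0.2, 1.5, 0.9, 2.0],
                 [1.8, 0.3, 1.1, 0.7]])
print(match(cost))  # {0: 0, 1: 1}
```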
After the optimal one-to-one matching between the ground truth and the predictions is found, the network loss is calculated between the matched pairs, using the same loss function as Eq. [\[eq:matchcost\]](#eq:matchcost){reference-type="ref" reference="eq:matchcost"}. Although these two processes share the same formulation, their hyper-parameters are theoretically different and may have different optimal values. However, in practice, due to the considerable computation cost brought by the large hyper-parameter search space, we keep them the same, just as DETR [@detr] does.
Different from conventional HOI detection methods, which optimize the object detector and the interaction classifier separately, the proposed HOI matching loss takes both classification and localization into account. Human and object boxes are produced simultaneously with their interactions.
2104.05832/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2021-03-31T15:30:12.765Z" agent="5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36" etag="VEa9DpEjXCO5aKcwzz8W" version="14.5.1" type="device"><diagram name="Copy of Page-11" id="O13ithbbQ09D1neqteTO">7V1bc6qwFv41nTnnYTsQ7o+Cl1brFW1tX84gIKIgFrFaf/0JCq0k6a53ulFnTzeEEHR9K+tOcsco7rLsa9NhzTNM5w5QxvKOKdwBwFM0/Bs2fGwaGAZsGizfNjZN9FeDaq/MqJGKWue2Yc4SHQPPcwJ7mmzUvcnE1INEm+b73iLZbeA5yadONSt6IvXVoOqaY2Ldnm0jGG5aRW6r971pW8P4yTQVXXG1uHPUMBtqhrfYamKKd4zie16wOXKXiumEtIvpwo4KYFT5aJctqlOVHx6VCf3yZzNYaZ9bPn+Cb06C0w4dYfmuOfOIXneAd+BD5D48sMKDR21izUMiAypijqiDH/eIW+CzP+/6T0n+ozT+G1Eu+Ijh8L35xDDDr0TBXouhHZjqVNPDqwvIf7BtGLgOPKPXA66/nOkH5hJB8wdS0J/4QL42PdcM/A94XzyKFEEa8TQtROeLLw5h47Zhkjsizoy40voc+4vy8CAi/h5AMBgQyqN6HPUGtuMonuP563sZQzPFgQ7bZ4Hvjc2tK7wumv3BGekNqCS9GQqnN0MgNzgXtVmM2mqxmRVq8+CXUZvLMrVZgftd1OYxarforBCbYX9mbXBJYgs4sUFmiC38MmJLOLEnWSE2y0oosXNcuuSObZ2EcYhQ25wY+dBIh2cTbwIbZUObDdfkp5OkDtubWhCY/mTdAijmk9CxYQ6OgMQ0Yj/gSEC2yM0RyB23+aajBfZ78pkkDKInND0bfpsvvFGTCLUsZ97c183orm2zHh0IFYnoQIHmW2aADbRmic+ffQSX0Lh2z4y+4ehfJgJp3GFTM6NwOCYpA4EkpS4DCZ6Cmxl6S7+Nu3FP4aZxTqVxOP5EGocTUY/nc5peSueIe/GJ7mizma0nuQNDV1Eo+PlX0QX0N27ovujCgXLJoYB4WYsC7Gd3XgO6DO4dHIYuHChtdHF78erRFVBQDsdXpNJFF49jXzu6LCfi6KIKc2dvj5NQfC+sewFuEV89wuLJ5i8rpTx/97PArwFdTjoZujyVMrp4buDq0aVPhi4HUkb35hVhkHAIJBLmqu6MLkehlrNwab8X4BmQa0eYZ9iTSWcmXc8ozhJvoUth8P6joUasAiQWVmllyRncDc1M1gIrAEmd2CC7nI3Vf6RObNwFzwyxsfoPArEvmrFgcG84O8RG6z9SJzbuvmSG2Fj9BxDTzn0yuD+RGRWJJfZT523ctM8OsRmUt4W0eZvNsPmHpfXT5m0WN/8wWt/S+olnHpHWj8X20Wl9RrhweIPFLdcOhGoCm4pu3zQMe2JhjAMnT/C3CEfETdvcEDVpjm2FLKRDdE3YLodT0dY1Jx9dcG3DCB9DnPpJ4XCuuUwzSXRBPLe331S56FwmFESZlmuuOemG0vcS97Io7ZDGmQ21aXioz33nQ/Y1fWwGP6u5L4ISg4kUFQUTfS+AUs0LAfojnZP2jMTleDpJforNiexOOo8GVI45FwiENzHm5mxNleuaH5gnKeJSTLzo/MDf28jq/OBYPscJSfLTbE7YbXpQVI4H5wIB9y/VwIsGvp7JwaGVe2lPjhjwLVwetVmo3x3tA1IxNFmnvjmDP1q7QmGGKnviy8Nnwqtcdv2AC4bKg9xQnwRvOmiAP7dySbyGVSBDtH8xrIBkF2kOGep0PhER3Fu1JIYJR58IXI5OGdz94iJXAa6IYXIwvCJIFdxbJSxWnEyfDFyGThfcWxEshojEnwxcSUgV3FsNLFaWDE4GLgvSBfdWAotXJZ9MLPNUumJ5h7DOlYHLcujrfIfPXI5PFdxbeTOWsqNOJpY5Kl2xfKtsxt8qYk40b0Um7SDGLUSFF5ufbOryTLpTl7ACzKPaqWEI71VOg8HdFzmWC28ceJMgWmaUJhTeDETd1PVzxn/RxSPjwO5Pi0cy1PdMclQAmLAozN+mVxRr365dQkgKyeN/9EKgcpTIxQ0vEXLrk8IycfaxfdY0fRv+sjCAv2lc2kE8GhudrwfLSbwQnX+NF558bJ2go32H9/ccc5SAgFRcz8MdJsBmnv3YMT1TEKklQTMSu9sK32RALiVw9gut7cXvNE8fze8X59BfzniA/8Fx2Dmky0g5iaO+PogGRSXsuflwvyjgXnwo0cfL3f358EtSR8+PJDW9t5S+ydwNxwoI6/OHsr7A5Xjhi/ORHNilOX+/KNpenC+yIFXOj55/4/xjHRxo7iHy+VDmJ411Po6XtVZLHPUKXf2FN4p12aVrPUKRhbL2Ugd2WBFzUk/H4Ps8xxO4eDAAZ/Vs0FJyQiHS5+4Ip65sIRL9rMUPWyJHSOha+geJc0Jj7XtWS2/aIvPsUCeBZ5GoBDr5TzdhRel90CrTRvNj1Om9jmuFl1Vtl6qouMJz4JjLiInkn/gJVVxHMUN6GR8pt21GgwRSwoEvkPx9VFY8F/xSv1EqDD7+Rz9+NFc89TKumioR/s0uJSGCCT7g3+ZefOHPbI1tHnagueny6yJsGGh68oYO1PszeKFuLuDftudqk+0bNhuhKJ7vh/vaACo/mS1CZcFrbqgCJv3Z9C6xecrmm8V7qhyjU+I6SsccBJgQJJRfQrlYWn/OqV0oMSclQ9qEyBl9rnfliVyyg4KJhYTtrjca+qxIfdT6ptP0ZnZUxd33gsBzYQcnvCBr+thaQ5ZU4PBzh1e1Bl6InzabbjZAGtjLEGh5/ch83ErFLfDY0AINcunmFJSm4Rs3iv0kN9oLqlq2vDz81NXusNi14FFpFp735HwN/qc8WzYvhQ21klx7KvZCCq7/ifkjPu+MJgvhQYFpqx2nlrf7QdewK8XXUdV3darFwC8qL0VBXYW9lJ788NwLv49QhH8ay3z5qbpg+/BYnnedYuupxRafabXZnRVabVlyg5WjT1eVdjvf7rYqz9q0/lx8AM7LWPEdWQDuSFWGrwVBFu6fF++qDRQLtPyp9dApW6O2XCsL8gO7MG22Bue0zNU79UWNDgWd3A+YumDmrWbhudMZt93hw/NE1l5nIldb9ZwO7GFW/Oqi9DZrBTXBHEGmK0kvilca34/q1ebKq9h1Z/A2Uq0gX+5aJcZ4y/eLQskwVL/lQmaSgeiVoYAoyUZV7pcV672vvM9dWYDcKcPRIKvAi2y5xi+F+mt3vHpVpLFkVarzrnbv67xb9F/unb7UGj0I/r3UGrvt99eR9/Lkeg5XlrpDyQok4aGYZweio8s9z1wpDThsxaqG8qg0Xt03C2a/0W+9+KYI2q/Ca6MnPgKj/tqy+
GVZVwVDyysjSVQEjl/ZNdrPz+vj2WO9IY5KtjjRJdWG40y03tQbiiCvLLwl0wsmDSvorRzGUq2XQaXJPFWeqMc57MjInGq/1dWy8G7JjX4RNvHz/KPWsudPitly1PLLW0MrD2eK0QZzpzyecAVldb9kqvS0IalLinWhVC357Ep/G3fUt9Zsahviqtl96vSsVc2t0CutJT316wP4Kz3Ppfus5NtdmtGfjHLRrdb1QNNcSyncPz+EhKiPxlrlVSiWm6EzLhfq7d5b4N0DSy0+5bkKbMoXHrt9t/ReMvL8mpuLTqkzVuctV1HOKBF5msYkIsAkoiDl4oDnqWXiosp22blf0ZRmtVSavt0/V3cynG4v6x5oeGG7PrAHWtfYrg/oQKczr4hMgivODG3pxFDYimKE9QrOZagQ6Q0weiuZWa8AMLuQ+1wrFhDJfavdxuLP8Yvfx69hDIfC1yA/31IDRIAzvGwTNp0uupIQkdp4QbUyzgy5RfTl89Sl163EGRM5+E43B0svCVsH9uLSCy9zzo70QqdT+tJrv7rjm0O0X2kIYlYcuikREH4Y6MxTcr/65RuT7Mck6MpUB2YoALpWKzrQub3mPZJS//iyI4CTchxAqM3lpN2W5YE3C9z3bHKUOCeUKyvaxLANLQj3uPemV7imBUAXsLrgGiRkkPCAR2ZMHCy8RDRxLhpeIlTUFuPE5Gze/8xlPqpb6cqtC8dB48ErdhBSgwl7h1OtkWw6G9+jG7JwpB1HLwsFHpogQbEOt2YJCnTR+t8ABR63uAoo0CXtfwMUeEyDBEWLzhYS6Hr3ZCQu6g3TePCBiATIGBLC70MCD0wQkZhkCwl0pXzA47bqZXFIxfffFYB/zNdnKSq3VUOIvOcA+IP370Iz5oShzuz9E/Y1JpoTGdNh6FYLv0ByEvYgJiKRMR2G7sMA4s1kU8MB9/KJOLgZwwHd5PUXzIiDXqm96TCiqkHfozlCa6GbMsChLquzrjQagcboSEbmZQvAdotFKBkzHtBiltSNfcIWykQcxhnDQUQN8bRx2G+pt5vC2icLf4TCQvPwl3ezSLsIXWMug7Qu/UVVFmGrIBIQmbMd0ExG+kDgtsNVAIHmMdIHAjceSEBkPYtBwuGixgNh+x8iDhmL/6A5jPRxwDMYRBwynsGAOKQd/4m/0c2cPoE5/fccBi0iIB6awcAGOrNhHVPnJzMiY9oLzV+kLjXj/W9+wiFj2gvNXtB86lHzuMD1JyQynr9If0bcshen015o9uJgfYXmLmjh0qEg7kojEGj2gmxgXtT15XaLQWQ9f/ELTH1utyBE1jMYvwGJWw7jdGoLzWEcrLbQDMbp3Cx46nvh4nBf3eEEGdY8wwx7/B8=</diagram></mxfile>
2104.05832/main_diagram/main_diagram.pdf
ADDED
Binary file (35.9 kB).
2104.05832/paper_text/intro_method.md
ADDED
@@ -0,0 +1,47 @@
# Introduction
|
| 2 |
+
|
| 3 |
+
Spatial reasoning is a cognitive process based on the construction of mental representations for spatial objects, relations, and transformations [\(Clements and Battista,](#page-9-0) [1992\)](#page-9-0), which is necessary for many natural language understanding (NLU) tasks such as natural language navigation [\(Chen et al.,](#page-9-1) [2019;](#page-9-1) [Roman Roman et al.,](#page-11-0) [2020;](#page-11-0) [Kim et al.,](#page-10-0) [2020\)](#page-10-0), human-machine interaction [\(Landsiedel et al.,](#page-10-1) [2017;](#page-10-1) [Roman Roman et al.,](#page-11-0) [2020\)](#page-11-0), dialogue systems [\(Udagawa et al.,](#page-11-1) [2020\)](#page-11-1), and clinical analysis [\(Datta and Roberts,](#page-9-2) [2020\)](#page-9-2).
|
| 4 |
+
|
| 5 |
+
Modern language models (LMs), e.g., BERT [\(De](#page-9-3)[vlin et al.,](#page-9-3) [2019\)](#page-9-3), ALBERT [\(Lan et al.,](#page-10-2) [2020\)](#page-10-2), and XLNet [\(Yang et al.,](#page-11-2) [2019\)](#page-11-2) have seen great successes in natural language processing (NLP). However, there has been limited investigation into *spatial reasoning capabilities of LMs*. To the best of our knowledge, bAbI [\(Weston et al.,](#page-11-3) [2015\)](#page-11-3) (Fig [9\)](#page-13-0) is the only dataset with direct textual spatial question answering (QA) (Task 17), but it is synthetic
|
| 6 |
+
|
| 7 |
+
and overly simplified: (1) The underlying scenes are spatially simple, with only three objects and relations only in four directions. (2) The stories for these scenes are two short, templated sentences, each describing a single relation between two objects. (3) The questions typically require at most two reasoning steps due to the simplicity of those stories.
|
| 8 |
+
|
| 9 |
+
To address these issues, this paper proposes a new dataset, SPARTQA[1](#page-0-0) (see Fig. [1\)](#page-1-0). Specifically, (1) SPARTQA is built on NLVR's [\(Suhr et al.,](#page-11-4) [2017\)](#page-11-4) images containing more objects with richer spatial structures (Fig. [1b\)](#page-1-0). (2) SPARTQA's stories are more natural, have more sentences, and are richer in spatial relations in each sentence. (3) SPARTQA's questions require deeper reasoning and have four types: *find relation* (FR), *find blocks* (FB), *choose object* (CO), and *yes/no* (YN), which allows for more fine-grained analysis of models' capabilities.
|
| 10 |
+
|
| 11 |
+
We showed annotators random images from NLVR and instructed them to describe objects and relationships, though not exhaustively, so as not to sacrifice naturalness (Sec. [3\)](#page-2-0). In total, we obtained 1.1k unique QA pair annotations on spatial reasoning, evenly distributed among the aforementioned types. Similar to bAbI, we keep this dataset at a relatively small scale and suggest using as little training data as possible. Experiments show that modern LMs (e.g., BERT) do not perform well in this low-resource setting.
|
| 12 |
+
|
| 13 |
+
This paper thus proposes a way to obtain distant supervision signals for spatial reasoning (Sec. [4\)](#page-3-0). As spatial relationships are rarely mentioned in existing corpora, we take advantage of the fact that spatial language is grounded in the geometry of visual scenes. We are able to automatically generate stories for NLVR images [\(Suhr et al.,](#page-11-4) [2017\)](#page-11-4) via our newly designed context-free grammars (CFGs) and context-sensitive rules. In the process of story generation, we store the information about all ob-
|
| 14 |
+
|
| 15 |
+
<sup>∗</sup>Work was done while at the Allen Institute for AI.
|
| 16 |
+
|
| 17 |
+
<span id="page-0-0"></span><sup>1</sup> *SPAtial Reasoning on Textual Question Answering.*
|
| 18 |
+
|
| 19 |
+
We have three blocks, A, B and C. Block B is to the right of block C and it is below block A. Block A has two black medium squares. Medium black square number one is below medium black square number two and a medium blue square. It is touching the bottom edge of this block. The medium blue square is below medium black square number two. Block B contains one medium black square. Block C contains one medium blue square and one medium black square. The medium blue square is below the medium black square.
|
| 20 |
+
|
| 21 |
+
**FB**: Which block(s) has a medium thing that is below a black square? A, B, C
|
| 22 |
+
|
| 23 |
+
**FB**: Which block(s) doesn't have any blue square that is to the left of a medium square? A, B
|
| 24 |
+
|
| 25 |
+
**FR**: What is the relation between the medium black square which is in block C and the medium square that is below a medium black square that is touching the bottom edge of a block? Left
|
| 26 |
+
|
| 27 |
+
**CO**: Which object is above a medium black square? the medium black square which is in block C or medium black square number two? medium black square number two
|
| 28 |
+
|
| 29 |
+
**YN**: Is there a square that is below medium square number two above all medium black squares that are touching the bottom edge of a block? Yes
|
| 30 |
+
|
| 31 |
+
(a) An example story and corresponding questions and answers.
|
| 32 |
+
|
| 33 |
+
[Figure 1b diagram: NLVR image → choose some objects and relations randomly and add relationships between blocks → described image containing blocks A, C, B]
|
| 34 |
+
|
| 35 |
+
(b) An example NLVR image and the scene created in Fig. [1a,](#page-1-0) where the blocks in the NLVR image are rearranged.
|
| 36 |
+
|
| 37 |
+
Figure 1: Example from SPARTQA (specifically from SPARTQA-AUTO)
|
| 38 |
+
|
| 39 |
+
jects and relationships, such that QA pairs can also be generated automatically. In contrast to bAbI, we use various spatial rules to infer new relationships in these QA pairs, which requires more complex reasoning capabilities. Hereafter, we call this automatically-generated dataset SPARTQA-AUTO, and the human-annotated one SPARTQA-HUMAN.
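The grammars and rules themselves are described in Sec. 4; purely as a hypothetical Python sketch of the idea, the snippet below verbalizes a toy stored scene with fixed templates (standing in for the CFGs and context-sensitive rules) and applies two simple spatial rules, inverses and one step of transitivity, to derive relations that are never stated in the story, which can then serve as FR-style question–answer pairs. The scene, templates, and rule set are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): template-based story generation
# from a stored scene, plus simple spatial rules to derive unstated relations
# that can be turned into question-answer pairs.

# Toy "scene" recorded during generation: directed relations between objects.
scene = {
    ("block A", "block B"): "above",
    ("block B", "block C"): "above",
}

INVERSE = {"above": "below", "below": "above"}

def generate_story(scene):
    """Verbalize every stored relation with a fixed template (CFG stand-in)."""
    return " ".join(f"{head} is {rel} {tail}." for (head, tail), rel in scene.items())

def infer_relations(scene):
    """Derive relations NOT stated in the story via inverses and transitivity."""
    facts = dict(scene)
    for (h, t), r in list(facts.items()):          # inverse rule
        facts.setdefault((t, h), INVERSE[r])
    inferred = {}
    for (a, b), r1 in facts.items():               # one step of transitivity
        for (c, d), r2 in facts.items():
            if b == c and r1 == r2 and a != d and (a, d) not in facts:
                inferred[(a, d)] = r1
    return inferred

story = generate_story(scene)
for (head, tail), rel in infer_relations(scene).items():
    question = f"What is the relation between {head} and {tail}?"  # FR-style question
    print(story, "|", question, "->", rel)
```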
|
| 40 |
+
|
| 41 |
+
Experiments show that, by further pretraining on SPARTQA-AUTO, we improve LMs' performance on SPARTQA-HUMAN by a large margin.[2](#page-1-1) The spatially-improved LMs also show stronger performance on two external QA datasets, bAbI and boolQ [\(Clark et al.,](#page-9-4) [2019\)](#page-9-4): BERT further pretrained on SPARTQA-AUTO only requires half of the training data to achieve 99% accuracy on bAbI as compared to the original BERT; on boolQ's development set, this model shows better performance than BERT, with 2.3% relative error reduction.[3](#page-1-2)
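The training code is not part of this excerpt; as a rough illustration of the transfer setup described above (further pretraining on auto-generated spatial QA, then moving to a target dataset such as boolQ), the sketch below uses the Hugging Face transformers library with a BERT yes/no classifier. The example data, input layout, and hyperparameters are placeholder assumptions, not the paper's actual configuration.

```python
# Hypothetical sketch (not the paper's training code): further pretrain a BERT
# yes/no classifier on auto-generated spatial questions, then fine-tune it on a
# target dataset such as boolQ. Data, labels, and hyperparameters are placeholders.
import torch
from torch.optim import AdamW
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

def collate(examples):
    # Each example: (passage/story, question, label with 1 = "yes", 0 = "no").
    passages, questions, labels = zip(*examples)
    enc = tok(list(questions), list(passages), truncation=True, padding=True, return_tensors="pt")
    enc["labels"] = torch.tensor(labels)
    return enc

def train(model, examples, epochs=1, lr=2e-5, batch_size=8):
    opt = AdamW(model.parameters(), lr=lr)
    loader = DataLoader(examples, batch_size=batch_size, shuffle=True, collate_fn=collate)
    model.train()
    for _ in range(epochs):
        for batch in loader:
            loss = model(**batch).loss
            loss.backward()
            opt.step()
            opt.zero_grad()

# Stage 1: "further pretraining" on automatically generated spatial YN pairs (toy example).
spartqa_auto_yn = [("Block A is above block B.", "Is block B below block A?", 1)]
train(model, spartqa_auto_yn)

# Stage 2: fine-tune the spatially warmed-up model on the target task (e.g., boolQ).
boolq_train = [("BoolQ pairs a passage with a yes/no question.", "is boolq a yes/no task", 1)]
train(model, boolq_train)
```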
|
| 42 |
+
|
| 43 |
+
Our contributions are as follows. First, we propose the first human-curated benchmark, SPARTQA-HUMAN, for spatial reasoning with richer spatial phenomena than the prior synthetic dataset bAbI (Task 17).
|
| 44 |
+
|
| 45 |
+
Second, we exploit the scene structure of images and design novel CFGs and spatial reasoning rules to automatically generate data (i.e., SPARTQA-AUTO) to obtain distant supervision signals for spatial reasoning over text.
|
| 46 |
+
|
| 47 |
+
Third, SPARTQA-AUTO proves to be a rich source of spatial knowledge that improves the performance of LMs on SPARTQA-HUMAN as well as on different data domains such as bAbI and boolQ.
|
2106.06795/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2020-06-02T19:04:38.163Z" agent="5.0 (X11)" etag="WpLiRj_KVdmcCGMWN-II" version="13.1.9" type="device"><diagram id="vn26Zu4MOHFGYvUG41LS" name="Page-1">tLzH0qNAty34NGfYEXjEEG+FFXaG9yC8efpO9NV/7j1xoyO6B60qlaQEMpOd26y1c1P/hbL9Kc7xt3qPWd79FwJl53+h3H8hCIziOPh4Wq6/FpIk/hrKuc7+mqD/1eDUd/7vyv+0bnWWL//a/prWcezW+vs/G9NxGPJ0/R9t8TyPx/88rRi77H80fOMy/z8anDTu/s9Wv87W6q/1hUP/q13K67L6z8gw9O9IEqdtOY/b8G+8YRzyvyN9/J9u/p26VHE2Hv9bE8r/F8rO47j+fetPNu8esf5PiQn/D0f/e8pzPqz/by5IB913xYPJW/z/Grjy9OCv/X/962WPu+2fKP5Ndr3+Ixsw7+/zte5/QmR+n/Ty/VsHCLTE//lR1GcORmP2fF5rIF0tTvLOHJd6rccBHE/GdR37/+0EuqvL58A6fv/TM/iVxWv8Xyj99xMRvkP5Xwhbe4xhH5AqliMNXrrjVrxbgm/d848EsXQIPrnKFUz3OYGRdNbxLJmlS7mgq7Z+GunucITuBl/M0AJnIzwzgXtkhtj6XQzZXgW5CNVnUlalvUsnvgCl4rln4vkNawaJAuWOfWp7X9ipNfwiixWUSTShXdQaBvaRiBSUiF2t9dQVXS8rqtII9FzSfMdbno0Nxp0VgZXgkQvu7GVQocJGhsat34+e6JfYMDx/OFa7JlYrMXjtwpCMaGopG0XW++v8ydxvS/nkTEGerGpxug8c6gvluGba+nWG2zdk7fLKHPHlpfxYW1UxzFvOPmLTcAz9Wa6Ka5RYAvYqXOJ78UlFhBqjATeMphBuLJzcmFZnZGaQArlQxzzF01SOtQz+8KqzI/Tf97QqeZ+3ebv16ffytMglH2KMAJp4hSk5jX0aNUYU6afJZtOQrrGnl/Jguupp4stGon+XVuqbtkPQJNohfR2/EWi7KXkPnGW14n+GWPgwZH6XKmfJak+Tw9HibyK8w75T+m+i48EIzxCR8J+JyIxo0Lb7TMT67yEOG9zFc6k1iLS8PROR33QU/iYsy+fxd6nT0P/7vTbyW31kQ1Uz+Hjdj8khAn9mWcYIhrl3rSd4XpgRr7xgiP3TkKSmn8cBoZjRDwOVUAVX5Dn7bh13xtMcuBPQTwx9x7QohkZMbwvR+/i1z+KZFiohIs8w77gI5ElzTuu570+yoy8w6r6LekKw4IS9IpmQ4TI0gzE88gJeBoelNG8sObkRpN/DtYeSZCbxaLiX8KS4FH3fwdBUVXLmE2E0NJSb0upR4Lo01kXwoROSo9YBc35I2rXaxABDMgZQOxHfzo0rMcNuzvO8OSHegxFtmvuV43nQ5SYKw6SwInfCYyhPH3q8+qeo2BJnmkVRQvhPZkzGE/v6uKzC7O9sq/wgkBQF7lfJhooTk98mvocJ0CKVRvOj5utQgKkXn+dp+sLblPOGuzpf+L0Xh73uuyyMujwXoL8cR0H3pr8XGPS3PAZv5qap66RwYlDiwjow+HcineQzD3A8mEGfL2rryBkWdHMo7h10IZDISs4bkd/LqQpchSTpBwL6WAI1qLQaAm2beV/XzV/vUqYntR4P0TDNMAKSbgUJ9NDQwGukAYznzUJR5GCGW/AePs38VWiGlr8SDXtZ0URZUTTDTBBwutnq7E3AoXH3cdBvLDUYGh42iDLY1AhMC1mn6W/SmWgr1vxYk/K21pW6HLfN4Th794O0b5s37BOBrKf/zChRr3akD23f93HZHyENGrjb+/W3FoIpM22qFxHyGWbgfp/+n7/1hCTZItJag2HjduamUAcxxYBxzWdcT2M55gCX19S67gP4chPu0yMhmOejleBNks2fjTC5eZcM+M6Ez4CTouAkHTxSvjriRpOsPug0Tdvr6O4o7Adznp8e9xRHnb0Jy1E6gOGqVUlROKlR+H9PElXmv5uAn95e8HOkwF7f/Cd/VxPaZy5FmmYZ4lDRq3cUryazGqIxFVhkqxQBMEEhJmDbO+NI7w+ars9wPer0b+Jpn8JQxnf08vgad1GIfFieuVFD+XqZ3cv4PNZWlq4G6Zrz3jtYYTQ7i8C1iEj4U7xE2SyWkFF1ryAgcTwqyQwhDeJ611fpPk5qQhTdNJsBR5E8z/0kv28TRnfymfli10t8o6vmQKM6TWxJ9xnR9yE6WfX7/vkudk/imKD0R2C5Kjj7VXzJ72PMCUohOYVTVdO8AoLKF0ah92hA52mK7MJEPyOvTcQGwqiCU693B2z5iz72Azw9JarAa1pCnRaNGqGucOFpb2b3J6HQuEODRE02p4VNnvqnRuAv8tHsC8shj4968Xe1aC+JA450xDO7G88faQUwq6C0YmTo/MZx/LzTt6woByPr1TuTfgsGzuL8lcLd1imaJXo0J6GoSa4ljfnd86TBM6W+/rSAIP/6Nf5+vlrwQ4Uf54n6RfzyTRkxmk1lmBOjae0+z2rFSE2SQOAfhmHZiG2eXh2WgKCzPVHLdaU02bNWu750AY+7FEPaLib22PoECSGBMt/guq1U39M7uNL6XY1/62hKkvRoPI7CBrnl2WOtM5zPcIHuDvBdtGYYzwfPoNmdixWWDjSBqHU58vQovmAguY79p2maaZrDfu1/ogW+7jFc8P4Anww+PGL3aiITO4NCNtLjgQdfPBiGEQnHqZ9KMgcR+8HdVh1f8VFtPZHMX6a4+74eo0HJFdEUHMfkrRhyI8+yR+jk4/sesZKBOa0BSeXVCaIYT5fmn0GQ/wy6UO6v5ciXeDwReg6jKMu6Vamxz98J2E1dxUiQ5DzO37ShpsfOqHghTdfzvCoIZnhyX9TCm0VFA+/PMPdCCbZgyX/ehcOfWRQNAf9HvZjtE/yTBkWiUgFe3/1xM+CuSY77FOnbHOC+8eaXJpqFc+2rexdzMl9XvdreEj3OaWzk4J21O+YAEPX0Rv1399JPk8ib+BfraJkzCyN8dO/9fjRypOGYSt75+koS8mJWfw8+G57f0ROPJbTIMq/ICaTakiX5fO4X6JoJPKoI0Pz+rezTMLNvgCvGSd3p/7abN3i36JvhTuxV08u20VwQoC8KHeZ5todAuvGXqP3u2wddnPHqTvsTfQ6WXmb5enyqGuyvD/Crj8+i7m+qVQ+Me31F82MS4zyRj3GQvVybmiA5IKT2iVBXHvZa2iHNBEFApB2dISzdAGyYhw5YK/R9dE7w/TyrchBJThzMgxQgnbbWHk30ZmAB3FOv13Y+kSeQkihyg5eU3tvd7Ptpoij62P3jyL7rtvWd+LFLu7NCoOFWC8ujp5JMcx7PUj9y2IuTglEUQxobeyILoYtNzce/d5xT0YaGazKjsDed6ZZpLZxHPUriFKJlTwRiyJlFesJpP8gfRrSA9jVN+k4f3Wa45NFtD
CH/hYxisNEMAUbWwNjrBaFp/pH57OPCPOtH0YmiQzfG/sDYjxQIyJf3Pqxkle88LwnC+noQp3LxVYzz5/9XJPkPNZ+vZbBVhRH++dDAc70uYtvJrjUsl+t3iXhCICB6GZp7w3B48n5Jruhz06O2+zy/pPJ1epj44OfEggyuVX0czPQ4rfciOdOsHnxr3n2q7ZntYfpzXpF+mjQoXyCOoVCn0TS/DJqDjtm8RD+MnfvAZXwXp37bTvtAc6v+/wu3p55lPH0cRD/VtS41NWs7hv1K4C4S8+Tde7oTIjgfyg+/+LSuPNXhP08E/rXLRHwZtgwVIOZh+KPxcQkswqnPdK+WvePnisWRJ5I5bXvIgutaZ8VK7liFf7HCoFkUueW+soUj50r5QYbwA+JdgT0TGg+0bmWMbqpH1055kiQvDsoEezcBjgcTYDCTvq5MZH8jgL4B6ZL4YLuygn84HcO+3u8d59zwUAT5b9LFfj7oBDOMeSIaBljdxQ79zpSlP07TzB5vmqa258Tm8rSuBCw8MM5LQWRVXo6jLMuWUVLapd9vC7yYqpIBahOqys70+C2LCU9/+o0VvFd1A67JdLZne02IGKD3cTK0xrmVBeUQAhURvQlhxvPS6jaqFsQwtfadyuJ5y1L7BwqOfm2uMa+zX+cVvIw3LePqhfONjXLtxfCLD8IxYeo+0FrDbAq9ebhhxi/j2BisdPXuJNS8bMjydw5e+GIfSieUoSzLW0UzlyojoqZ/Jj8TwORVwqWCmxh4d9EflRq79h26zZhl70hK2ahyOiM+tcwfxVXc/3RNl9cYpicqaaaGvsv7/fYUnXn/3UnRlOVxRPyqPMTmm5aH9TYLk+uo4hQZlI1qfh4MZxAcx3L+RuwxjAezs9iQ9+1vQOaNmVMI24ZSavTyx0hxJfUUhRl/I2jmfYzHmvny1KkPstVaXRTmUSNCps0OHhqMLzpUjDUAM/rZAZGz0sMN0NE+KGJh+gQK3zmlCqaj6LZ8GNgUnjJ+uvPpdm2W3+2nfTR32XRVs1ujQA7s5a9ME18fJMgUMkeJ3MfGo+qa6ABLwlT1AT3XMg9ilsvlWEqaa+h+iPPOoN+yJasu31tRm8vtUm49K/fyt2LObSEm7pPrzpZH3XOsMrdnxh+bPiG9f48Zh78bNYugWG1flbHJumDT1vl9AvOCDtyHYAwNaPAtxPBoswYIDN/vDxYZjmII8JC9dd31O+m762pNzMYTNAkI2t+4aD3e4ovL328nrt3xZEic1Lz05POl4U5xO5TCPNu/qQcARS9k1z/RdDPypDpgbRkFGoAq6KQPPWHDyd4OuvZPvNtaih+Eqcfw5YrKu7vxjc0yimQevF5x6P4YAC6ZH5KpWb6TFB4ViXssP/n3vm3QF+ALdA+U2wVcjKbRYd9NRzIfAosDV8DnDWFcS6U6HbgTnkRMAwbYxVT4QcVubHRyHpjXFFhLMB8hSaKoVIYiszHIBguRb/XBhXufVHUm9fE/WROKhnYa71m7SAGss8x5YCgo/Y8UYyX62kCK99431lylGuAg0nYDh0IB+36Hko3qTxgc+lvpOWL7TppAvEQWCz5YMR1P/oBu9oHANfhlTFKOEMT35M135g6mLlWBigdCfUV0bzzCwzCJ6y5G7vgHIarsMh/4gad46d+K2L6Wq9N2AZOAs/Q+1YHRXJoM+dlBOs90Yinox+K+7Dt+wE1TfbIZmQHWyw/cjiK/Bv6NqdA3X4M1Z0+jJVyngAA4Ni8Nx+ovK19vLRSVjzzMNWe7dptGYEUQ68GoT16Htj5e+IYFe+P4tdV+IWiUWFZNKduu2NpoX2OlsWIN2lsRWj7Jp8f0KUTQa7Bh2TYT7onWDs3ZIQK5qgzcaolDwPblh4DwFmIdfwMFchTjui6KAoa57qefaUakWF4ZmkoLSqnh+aQN2p91sw4zzTJsx27clE1YO+OLESrRpllNRMbKeCPfUISi1iwffGA9yRuD5Tipvx4i/YX42sYwmTf38Tblj6ZPHD/FrFEjn1/UdSxmnWmEEVvVQ2cZGXVDdwGgWgEnz34SaHlV8pdGXiBssAVGPIXuhOIGxNovrm/6HrGn3k3xO4xC5OvkDN1euDB8Gv0d8Yllu0H4C9vfuodE9FmiEEEgXQQBya0td5SX3R9EIjzA2OVoEKFch3WvmrGyQRroxVAmGmrG9VRAHHvsm6+49wugDRCAvFyiS0s/NgoVghKzRZdl6PqBqGK4s2uG5OZuRRJYeamqs6QrDPoRSgylWASCEuYOhxmFbOde3QNxT9bry7KOeeUDbysBKXrAzlxo09lqS5OHDcsYk2D52PBxCamd7bNXATmRPyTDmEtuqMYGKzy8WHV2qTXrqMQSbcrr8JVZXsC6STbDbdnG7kjspnVfQT+fr7oO6nxoFj8Wx7ep+Z+8WpFcovTb7peqIOOmlE906uTY+1DHA+4Sq2nt92G7gIpx+uMoZE8cf1qDhHf2aRanc+QfLtJ7Y4AwnP+G4097uGyoVla31fy9WF9YTUPM4b2hoHhyk16FtFhookkD0brfQE0I6VVZYBBR/3zQVQRh5/GQGt8xofHuHZ4H9y18JQU4/lYeZDobwmGpwr+Rl8Ig249/HMBTvJs+3Ybl0376H/oS7Zs0TiPioJ+OJDKRoWG/VAeI+JGkL8mbbAsXXKpctODZb1uOjoQwCrtWGp7xEJ3YOo29DVyCF/rltymkV1h+KmQpYeguKedr7l5g8Yiv1CRgyn3NLs0b6cEob/neag4OcQTwRcC2W8CtV7fu4PEqJvF4maGU1boYF7dyh2c3pGuEM144RWjWp0NGquyjPbiF+a2LfOtxpI4PdJfXUoJxopInxSEpX+5aXyIOrUXfmvqbzE/Dfgfc40H6c9KbKW4ZP7PKQmLS8BTtF5wIYPVoH2A/304AlzdYEjFquQEyn6xLTXp4wbz8C2UKYwk0xY/nGIuu0CX1PRUc4m5wldp/MlOvyf8Wejp36XhL52oGoTsoEPbx9n7+wEjMowMdoilfYsNemJc0dL0LVZAyvkyDlxghYmEApLmw25yPWz3Ibdi/zZPGe/UFse6z5EklWi3GeL8B2Ok3mqXzmuBvnmd2Q7ja7nESaUrXr0aM9wP2dQKWDE+74Heu4YUmZeDWYn4kaEwsAyuLnBBwXXy2IMX+CFjEe0L/YXgH8S59P2LNP6NG5yb1gAzpnWSINn0pOCTDzVLp0q+gc8E9tvOEQqKtRza7jOjfCdFqsUQexyvxzOUA2LppXHjdHI/oLOA3gF+9b1pgedv3vl0SoxaB1fL9pjNwyeXFxCps35c5V51YCKEmQVlXfxgt3caHYrZgPV3b9TFgd6IdF8p39bg+nXhcAP5cphnZ7SdLcYL5VuYEoVqzxsoF4E5WbgB8jyFffLA78/NtUwkkDH0/Epqob8qDM/JBRErYTP5EmEEez5U1lLahtB3tVtqd32rKKBRwMAkgxYwf5DXy7rNLwdXXB3FVF/ggq+3KUBrpdyyeHRP5WEsEJ6oG+t0KruBOGPB+2LaKciMDvF/yljRyRV4bVP3EfZ3E
cFHfkBDtXe79cAIaFgKlvRfdZfnIjQGhGNo1TcjWG0bu3W20SufsZdwtPeSLQdneFKUk59GV6IkgRnm67Dillh05CR24PEjbuhFQxDoRwG+9oqn0TMintBAAzReX7byYMp4zn8227860Pftk4PRqOtw6La+eVfuWcy0p2s7067EP+xLpFKw9kzMbBdZ+7k3l+41YfQJLp0nAO817yVWxt2pi9Y6lAPGgarYdYFofz+pfWKNwQY3B+seqdkRm9Y8LyXkCCXPYyxfbpEesOl7XJWphEQa0jOw8cUfxyaDO9VrWhlbSq3x31dB5FbngCRTd7EO4dmhf64e8qLxnAnKpXI/3ljfQse83SzBiE120VXmIZbSSmisbJ5Y781Ch5Ze5cpVxtiN/nRMWoepRvSDsYVK0NGrbOuc08XbkurWr0nXbOcX9ysicJ9ksKkTLBWcW6GQKDdBBKsB4/JtuXiDAiM8ZBM50vM/thzRvVMLtHwfeSCbJep16kmSTQ4R6B3uh6pUouKjqj3MErtrhZtFdL2giWnL3nZoHdu+ZJ4S8l1sku1CTj4eZDjwQ+A9IsbG7pV/1W3mcMSmxmh79cDCSZzCDbS6KPClvZysawxY6zy/nKhtaBIkYAGeEg77F050a83iTGsWAsGF81em7r7KjGx5W24faIbWL9BFSnyMdabnZlXc20S+pAaD9RREJ8Ce8qRDvs1A/YFoXzRQ70bNY+27L7z6m16GrX0F74TVgqWLOtfwDMGH70kJEd9wsQoon4dQzK8mfBAqisLY6BU11AOnZojMFerlHdHWtFRxuOT3ZtTwxugdCplrVzF35yYdi7DcpPlmy99eyaRfbJled1++rlPv8LkIORiXTxDIvCLBAw12Fsw7D17ytJ0iywnwpBa+MlukkmRvmYl2n+wAe5RIbaSfvmAP9c06WhHu6ImRW99u2WfG+79/p2oxhCMK42J3myUdKdX092VaqILMNzRo8nYDmWko9YlE0eeHIp+21WV+rZtZSNVDGtarvKd1m6Mm2DFhkkVn+EYVDhDwpsPc5mKUuuB/mSUVmGm+Sn/+VgnzkqK/bdqA3vsDak2l8krvN2/Nr+lERGZKbDedPT3pwBlKTNDJnu4SHq49FmEnjSrtxz/4BJbmw/DlfgfRC/2VspfPfro3Ij3WpKO3Ong0Kvbknu/Z6M6CP5tPkz75s7u4oxHYtzUqkHcukurgopK8epsM4DqNyRZgAGjLeFaYbkHqNJivazjBMkknw22Pbd6fTEElqKhS7/vLHTNl4M7yx53ninOa+ZN7q7OtdR4U+LrH8pcSvKKtMjsSUcWT56juKymvP5PHrGpjOT7yp8zBi6odUB4wHArSn+CDUpj47kkyB/uV2Hsi8dOm3qpvvWW3A8wEenTBe+SSqheOi30/W97qKbgvMAZuAviyqWexWM6mT+uT0/klJeu5wNz+Og4Qr4NsnP04s2f8wsTvQdQqCLzWB4CtcME7s5qkx84gZFJ/W/pOY+nxq0+j5fGwdBzNtAvEvunZIJisZ3zwPngbOEvafGEHdT+EKU977ttEPVZfgZ1takH6ZdP2877OnbFaRrslt8jvBOQnz3wWIAe9yWcvtEOozpGkGIwgEzYRfmh9YRkgZ/Sdg8UCvn1TWs8o/Vvl6Jfq/vbu07PimdyZmGq1Vff95YHjI4COBoTidm6QbfDT/VlN3olBYy5SmZG2wUgVARKReMxGO48yWZxlObDn8Tfz7X0J9ONNiHzqRgbIzm+DnNiXuybuQFqa4jhFusjjVS+xGijVkiYjH8Cp+gOjmR+ykuHzjoMzKTRCM5cP7XrOdyNKgdyvDjCdYL4oSGIMi4GQ9MNMsLAtffpsVaFEUdhEEw7ZaztKmdWjrHltD4x3HqePNMXroTdIL3uFNcF3U/MSRlrqwj42/eJo7nh1ozP/bk2Dg1aDwf5nhfQi3wZ0n8GqEMm5jwLVk8bKmRYDSgLD5yPYY325tASL1TXo2pQVpg2fnxJ816r9b8g7+ui3Ueqqrdd856Uma/9uhfT35q+aVpgUMu+0Hcr7shDBLq85MJD0ntLbIWi/zIFfKwL8IuTgi2mEQmR3KsyeUrCz9MgV0/X4Vc41nrr+jTUqyDZFtwhyJY3ksoDkoXTdPMlqRk+tMguENZ2sBa7aO9FXbKxyGdq7HHIDTPs9nTxZUpLRXtRTCyUjJbq/FhRfM3b64KwHKTKec2jmLVozY59OUjIKfRzlyHq/4ecFtV4wMIkA3Ze9F78fFlbiLf21PEAkhSOyaU4DUAsYM1p4RUnqkVYvWPisSGv4Z7tXVWaYTReV2AkaVdYnZd2LrBt9EqFSvCjMpDfJXIb53AmNF5ihs1IZg7RRcD1+20Ojgl8g9XkF6v18vmtbjxffMFrZZW/mm0Ig1Pte9vWVcxUhzqgPLhaAtBOyxRnp6QOLOYYjeOOWsvkx7hHJtt69rGS2W43OGwmI6Tzy6sV4jdm5LWzi9GLy/zpon62ojUKb2kr4niJVr9RQhZHdVMCydBjfT0FS+kG4kXmXvMotJNzVzpO8pyWX3PTA8a0pXIzhBQQ9+U6+KoDtH57SZ7bm2ZN8C9GynMBP+IoSZ/MbeC3FncozTocXDzWzifDsUmrnyT/eCs4CFobPuXIe2HbceIOb7pk+efnFQ8RqN86s++TRyDCO6swAFJh4JgxY7mfkX1Jvc9cYUjFITz336t/YA9t0/o9gcSxmLL/FoRjV+3ALvoZd6UfsXE8gmoeUhu5gatoymXLvYEpxR/9YT/XEaubzQKmderLFuNhy17XhomAw9nulJ++nDG74jebSCyQbwPz3CSnS9efmwlCvA8xKzQ4P7T86mjLILz5o46PBr0587TnS9lDOSe6pwZll0OvlyhTluv+MmyaTBveseRdfjLQ1Z28aVA5FaGXcqD8Yaxx34JsNqAB88Q/XXf7jWZzGA+PPWb1gJB156kibH3gE5TnsGwds97jFmPSjhppGQHeetF7IvF6T0M+sOqg8/UFThxFwV0rqePUdXyyxPIWlml7Kbkfdd02E2/pJ6bz/B+ylKSr8o88JEm4lHAZ5cbZ4zmhKg0yfOgyYPeNNGKb67C0+b1q3oMECExwW6iaoHWFu3WGR+XZXmvQHI117+01s+xlDwVeE1wXx4udnWwsqTfXfj6V9n60Av9iqsloHi5AS8QPj6ckEb3bvLr73tyWgfKVpixJ9bIO0uUj3r239j8jvwgmyveqy970s7+Rcze2FIZxiBiLRdRIGjcu0LQK6357TwHpdWE+92qHvu1xUhenbpFCCd9t76BYsq0YLm1GVbUyjkBfTJWKTptreRyGKMnAsnwVTx7DxRhidtVihZhUVPPvR9cpNx9EQQ1zTxtvCM05i72UBpKYSqorrcrqDQ0VM9ZtZlOMhWvX5lNfGzlm7z8/Bt1ZMM93IVPAuxZBVMLaYmcDvny0RrxIP+t92NYnb+57ZpDyDIqdvQjVT4nji3hsdQ4IC8oDpknnv2/SqH8BpR89pPEcFNEL0YW0bg2aIt2Hy2utFW2zV8FQwAClQkfzWdzWp
2Qn+/XcSZezhNx7xSORUaLwmZvTPvIvf9IHFCXhhHoP1nb3x3hlfOctyhycnHMZ65ZHOnvqKNY3QpQa95zSJyAbLoJoXJvX7YqaDESpxexojeXvr4YFjq8ZZVPdTy8iBX+7DDlXqgqkEQOF6N9KHI3VNPwgoaeoD4f55MBZDj6FzLdnbRluc5QvoxNJ899uYvzlEVyjQ6wUMx9mzT6leR0VCZkZ9sneG6fMqVi29CQKttRovxw+456MsQT5KRIaea5uabM80TRC0Tgz+fG/Igv3TZM90EE4u/RaO7nvpmxcNhd5bO8NDnGAP28p0Q+c+3E/5xfwPylW0aywGmOKUcK3vIbmvVWXRCkxeF8Q0bCMrLM8g+wIZtlLE5zdGVlN3W2fecIoFxBIrEiR+/PvupyZmj7GDclwzl3beak/QxZddgEavp3t/UV8qZO0oD+n5jKRxHFZIo3elhLBIiYPHH9LdPMPl6o9WD3IV7fYmgA5ILNUnjm6F+s7OiG0bZgpieJMmZ4Ypvzb5buDLX18pXGhSaligynYQ9vsbuCqEMTmFxcZcXwoquMKqBwpdSIAheRnue+6J0hgMzUJFwADOYlvc4S1Z8u2IEyDN2b/ik5CNcZRjX1eddG6zG/sHEFN7BCk9e0oDXYuH2cULLPgoK7tk7j9WKSxy4CfeRgDxM0I3PfGTvkyfPQeBLwcuDkVh9sqgbzjQO2Q2CsZnneTHuG0UlGsuM7rN8I9HrI4ldXvIojYVtLAbZEb80ZPLt+hBm1dCN4k9uz4QAlx4VB1/y2oQzT7rSnBAtKJo+kqkR2tTo5XeeJ3iywX8wcovy4/qEqu8mbi3WzWHi/jGLgQBQ5aMEMow1HaW/qoh81dj1HsVhuKc7oOIz7rICkGHoZ2tjiwLe2Nj0B2BmV9Aw1+UcE7t7NUGXuAQxs9dmu+RV1MARwGijye9QSyUzpCBEGD+QBZGB7iVRK/BPAOlXql6Tkq0JalpcIG76wRskuj8AflEiJhM+ctvXpeZ5/ClWqIDFEtcLCylQTxFK3JYbVblXew5chy0nWLcPjBYr/my25p0qR5Zff9PWUSH66jYNjtxvw8ov8Zev5amLeyXor3xH9B/4648cNWWVgtE0S9OOazOe7jdn96vncvzuKokMlXSFxebHhifszBfbeHjoe+GcyX68ghGWo/kyqzA/DPKNYeWgPHxR9D/WF4adwg28j13G1bcoc2IsXdkWbS+u1bpBGTMaSBia6WEOKV+/rt4YWHnWVF+pXu4Ai0Nxa9nLMr55IdPGLreOjlFrN7xSWdOgE8meqAnFjODPQs1wv1KTAy/vCCJsD6r4T+PGgBUr7Iso6c3gsLPWm16p5Afsv+tWapO3wkKYeutF77bifQGbxdL0ewuz+SQcmWGYgDfC3iK3mUPfo1IQBFkViVZLflxNuEjeI8lLCG50A/zEdn3hejjISPDtwtAZzTQWYoyzCvND/DIiB5dWPZwvg6+KZBHz6dUsruVVyG564cHc943/YpnBJrTIlNFxWG6A6V4Q4Aj8SYFbeNIs+q/ys1Wo+oqs/c9i30+yyvpGHQ9nCk5fFObhEEMuNnpuRfE+1g3QVEBhOOAjZfV69pm2JCCyEoUA/q1xbW06mZAtFi08hnqqPdaek298k5ruYpin2loC/K/lAA8/r7ofMPZftaYFkIdV5fnWdj/ujFW8vDffz8POVRd6UUI8TdgX3MMiCAKMtNOrHfkE2XXq0u6WGZyBFmhTCCUFYT67I+bYBr37baQ6vwlY4olJ3RanKfwhN+1b6bBQhupxYlqbBNlr0uVDXbb/+k91qkgi1B7RLZtlRXHDT6QWEhSdfbi+oxL+VYwP0n1d3vP7mSSZ/d0Il/4ribqvv64odKdLJi9MQUAlFEVdLzcjgVIUi99GM+utDaMDB80co1L4M+twT9ZAsBmLSdEePyaEu/qwvT0h2ho/PNjy2oi3M+NOqlcj12LPVbzrVfDaHZrxQ54JNcF5E1HuBAjyaL0fq/PyKtmjmelcF5MXlnjqcZ8YZe7+98nWSX2PUIYYQXfgdZHoEwThRGm+9Z0LQMlz9oy1c2bjgmuziq1zNhQKskIrghtOlMJpliXZ5W9voPJl7hWePAMdCQ0ptvuIQ5GVTLLRsYEcazE4NJCfInJm2chu0l9xPm/W3KVJd2kVYEmkvDzpFhtVhtN8nBb6Ztqphf/4aqxPOH+uRE6JUHrlL4RMG5M7HxgrbppaEig0MgOZ2wWBy10sXFgrdgyLCjyXhZpGHjRtNvWvum0Eb5wAUWVUKQoGC5SsSB+4S3XsIMI0In3WJEWiBsFpy3haKv3sYz/LihHQ2li9zs3p6ibc9CZF3n5de576eGJL9DdKlupukpdYw4vfU6ZSfNg0tL/MJWgflrf83sg4TxIx3jLyOz/YZxWi46RJv7TeEk3TDzozaCnLqA+A3JqsD69GkgNuTG0WMLKyXuj67dUsbbGsXHC2N0Zi3F9rp41ZMoxcWDwFYO3Dj39PFggQ08qjnrMoajjOSwN8wguheeTZcwnEKktt4CQP27ckUc4556FTqKXwsAN51uIYvMdgcf8ivvuUXjbPwJMi1ECURQBT2wPs7XkNNqWAXNvlD5E/Sb/TyVlF1ibvWIaYbzjl1w1JV0yKSFRtbtjqKDR06pFbQkZXH7joYi94Rf1o6aAx5h1n0IZ4jo/IzRMlU4Vxm3y29mtDON2RSygjGqb0jCRGKH9YlpXGeJyS+ZfKWp+KaMGQgx2jn2pkq7rxFntPqWpP+r3WeN9+Pc86DK6W4yaaMLVRuD5e3ncs9TiTRmjXxwjAmNTrHdc8CDPCBawh4AHjPl1xLlLA7fvPGqjep8RE7tpq+Z72FTPbzyKavCshQVPT7LwmrkczwGpMDqBfzCbBK1mi+Vc1ylIU4a5PAs2Unh21RO+PIwph491YkGJaxHsEro+plUZkivMm3jYfl2qvONrx+B0GwEa/fJYKWm08BFSrKbyMONBvefv9C10VEvWNV0Qav4rOnDjOds3JRh7id69PDfKp68tEVfU5yg9Bkdc+MIJeas4RxnBFkBaJpPLviePfFnuqMmW+dagNTar7P2WwzOPER9K4+dfb/naJaZrVGgRD17VetpvwTAHgOzvffprk0LUexitVJD9sbNZF/IYHMkm4GmLxyPHUwpfhEsNd79cF6nzRwkGnPcVO3Bj+MrUg3t2tYJDNmZHFl/KienY+ikz41HqLZxuJKd918MU+eXf6YFtOp0qkfyQ0awBHtm/TDB3B2Km6M+T2xR+uHUoBAbmL0K5JdXmAAtmR3VEFShzrG5oy15WsVCrDDYCTCE+6j3l5b/XUS5kHyqtaMLZimvLsFXHt5mBK1DHmTrhCpKLoW2GChu6vc7xPB/HeDb01RDmb1VN0tv5xJwoVe3iQm3SKa8idTxMEK9WVS8M3j8vJi/P9r9Z5eEPA8dJe5kPmsyU9cADip5U4ohM3YWOX7p1Vly7d0HerP24z6an9c/Qbwr7mzg75j/1STQZLHrZE0kl5SQ+PXJQvO05U+UEBZLl3NVsbko
FGb1ADFoNAqxkWk/TF+h10lzgXiFhfhYFOwEsvIqo3zYi1jE9p/0jattW8OisGho7RrrK719l/4Sab2V4EMz7KZd9l4IBQLLFMFcKeSuInOhg+7sWUCo3spNmqHItVxyT+sRX3nTX69c2k6zJPzARhOXbNrllAnFQ9z14ohCh479Tbrwzo4056W55q1fVYywC9iAUm0MkPfaEpErJURITDE+kpHkiQbNnh46N4SIOHQ28fUfzAa58Z56/mL4scvdR4VP8ijuhSXbwN/r76s1bopDoHqPvmGDqKxm/x0Vt3pTsWsp4nl77dp3ghEBo/DycQTk3K70aWkTU/ORJKluXFJnqTlqy/OqhCJNMqq/Qojp345iykWubQzz54QeRw+o10q51SAT1XFe7uZ2OTve4UceHTndbamie5fjcgGj4F8s8ukTV0/lx3Y6F37ncPs0kSpPazKS071hAzaR/6UoMLC8RziaEb99cADwFEr+b4eQQGWhXYHzNpVVBOQPfMM/U581C63cH3yMUHjQFmxAjkPDneFtJdkwJUkbvehG+vS+UMp4Z2lTMlWGRszLVQXnwybbPpvDlZnYrMHbvXOiSWHZN1J60plSukL9sAMOjoYFOe4gOdYYism54IuwFEQjwbAMYrSeayInMT781v+nWuoGSbz7x6Lqnk3lCFfJMW4vskX+q3uFlMbjvsQ7gSegmqyK98Dj91FIC0RVZvn1vgfASgbdeZuI5bipmDaI8TWw3hnmA8DD4IO9PbLonilc0ZEQKq8S7uI+Vm9mPTtkRGsTdkDIhVj89kXhMgnN5Kr/L+LXHPsFrAOqtcUx9nJQAWFfGZ/R3Rkd3Q7PFhddriMA8QILHSIIDGmR4sN/wQhbE16t/+kYTY0b2O+4MhixsACkO5aYJV5xNGmPNCQJx6Sr3OfbOD/BB/yXbyoMzstW66nMdcPiyfr2OG08dv6HlalTAg8olAajRkiKe8z6k+k3fQiyozkxJDn8sSSyd41QM7hfeEi0SPDvRulEYm2lsiNk89cnVVgT9sXKK2cJtDesORZ4RnCE8Eh9SM2RPAqa/bYLqt/R4l+9XLAjNgocnT3JMIomRon+qd3sFTRCstB56CUJQaa10E72i7MsdLirj0T7In21ew+Cne6qvMtMQVI/HW1OaCv6un3YG2MQrGEpPH34LVhjdHj5UtMvyTfvuVaT8uswCj5wODL0gCZeanJH+p+a9szt+zlLnJaqs4taygLlH+bdPF2z8/tva+N+Vhr+9RJpYjCGFftF77c3Nm9439WUHT9BaS/bqumfyrbwDggAlgg+sWX/u427p4xKRzbX1w3wunAa8BVkK/WUXBCfnV98XnyV6Mn3kmX/mzw4toz3QfwkSez27Xy362fPYYRjldz1I3JHvkkFmZUutltm1iyPin8rGua/492jH8evOSAE3D5SDUB9rWPg0WMHv1BigcHoC7DQsf7sLFn60xeUQ0OI9qUYL870GsrWrbwgPYGOHXOXyguQHvxUtfVwruqUFVJNU0vCNHST58PQljIYRey7rB0jDhO5yn5/iyGVHkeX6xs4/dMoxdlk8501uAdHswSZU16mWxda8CEJsQpmnGYPWp+9d9EyU2UZJeiT97C6MYm6zNKOTWZ7ZVzjRt6m//kaSfZ1fqWWjT78TT26XRJpRd0esBzvOcITASTcetj3nSCmerfpQ56ZFU74aZ260Kt6gmXAHw4HxhSfRJbrmcRwcO6cMvUdmCSMuii9EjHY5lVdIcy9IrqTlyfbBAhjPANOWqtAO3bXnxTEf7tyk3/YVmumTTwtQURINJnYwWJJQBAioZzkKV54kFHSwCAkLR0jHn+KYPCis/L4TOFkSRHtU6lPWF8qfA7vJf/U+Ikm5OTv7XfrxMtjTrXB/xeqYmZO0dpsmkYam5JfMbU9Ys6+r6AnjGX81/OzYqG9bpXdZ20QA/+mJpYxw+/Xfkx1/Zxd5IJqQ6inDe98cfZ/72y1+tJ/971qZ8XMrmPZi69P7uczWKj/DajbQqRbkU3+yzK5c1rF/od8AE1dvhXno18pmVc2XHIDFbtZiMfFTiLjCRZJfI64w4IRbNJtpI3qTIQTUw08CuxGwJjCe6dfLENKKsqspoi2Y+E+/AoLBPZz0IvyEfOb0k+n0l+vPkVPcCdpUf5Jkzl6MYouNcTw6LAWhbhZ//WSCUmBccKVxnWm0r6okusNwOwLO7WkrPVJnls4WmBmJw6MMXDnBKq7EsHYackPMypePg3lr1wG7YJgKOP694JdtRWzkbKAPPl+JnhQtJ+nswP6a+WzfJh+YyuDgSPVk1+kjLEoObgeh4cytnYA4F0zfs7Te3a22prCpogKfbO1PLQ/G+H4d1HPmQFy1oiP884zuYGlYej5WyfO8sIqAH1YDuJoogB0uL5Y5KMO8ykSAzDJRJYcXL5SHOeUgNnyXPH+3UqtNnd+6K4FxjmJNqL4W+KF315TfPcehxJHXcz5859wm/b99N/9Cpp8rA8rxn8c17AIa2P7kI5YPRAD/LDGD+Mr0BFl0jgAa9ZE07OMskmzAxhEMUiF2AYP2T1UdoGXH/nUsqGG1ufxdGuoXPfotO/veOVcKwrZIOcxpwCafzqEI+OnKlqSjwbCRaPEy+c3TOZitBHqaj6RW6r8HnplQmc+70vljNuR9G2gRBIBu/p4mK+v4Uxfn3pBr28pX/zmWA0LOd8V/eScjbiLpvglANS6H7YQgACvTCl4HA9Gt38mAWQk4Gxg2gjJd9MKCu7QK5shCRbeYmtkieANeJsk3TGhmqamyKZXf+SnRUHkxKWIEfbDNV0l4cHq5bhT2ZaR1647xnOBptbH3bkswXuiLXEwCy9/wnfVQkyTNTOAs8wmjGQ7/wFDa24knq6Q9F4sIQezH0GwNM8qw1rm1sj7c4GgBZQjG+H9SH10QEc/A6rXceeVt0JB/W3ZfoutCUj7MDvZYmxj1WzChTTKuvYBFYOmhrrYJvHHP/ZeZWkZP6W4x2p2RQdHwTgTKxEAG6wVD3bO0Qw2yrpDsYRsQUoyj4UVbB5LjcAHOFAYrHVpigogqKFeFN/B5x3Ydh8SWOKbkvhBWCIMMiwMsNTNRTvUiM5TqNGxIX5Ha2oAq34OkPP353n7zvEkWzkgL9mk/VmlrTjgNwWc/kvlaPLvyRRlIoeZomjJHVOGUx8JfPPc7iOnxPVbHka0v+S8cdHXrBvLc5wFEsdNPsmyHdFfvk0OV8RePV30D4gWMejt2N4KvBLdTMwwPgv6q3rDEhP1vqslzWxD8pIVVmuSeWCkj4rQzl+3rhXdBTZ/R+Mhj7PH+P/ULb8Nqkb3/N/sS2pQPpobhqFj2Wmk5vQYwtwrZK9hgRzG1Xx2JUSKxfuyswMl6zulPxRL0+22D5tAS41yuT5m0RCzcOK1VJ+3vWWvDOrynhZfChjqKknrl5aFkdXNwy3zT2LE9kDhAEbrh/D+9MrPJZYD6OYZXtd9oQOevrd/HUNE3HyD81EAyDbKRZtGwmuDkH55WonZnmh27vMELvQ1kRKBepd07GuchLgHYsf
RNeMhefQRVOiblZm34U0MMXmI6J/SQI3u02sjXXHhLamrz9xI9o5ZI0M7cIlHedjqWlZp6T4vg9x0qxwX9nGqjXqTdxP53Nq6rW7S0KD1J7nqgZbVa2LJZ2a6djlNbJNL4tNISagA+O4xYtvxYE4EU+pOhmrbGgREZ1xuT3y8mWfd2PYwlApAQfNdJvu+1lgYBhuRnNnz2Y2LdmNCRqMQ2QjhWsZE7xnMiisL7DiX0s9EZORA3gYDMSxhW+3wvVv/KPHg9JrI1Xo0CYevqcnBkK1NKJ/KZMr5sXwelES039chTDBi+U+Atf8kdRPjVsg1iABzX3DRt6uuzWFtM8j6PcDDyPMno4URRFU9DPRYo6lnouyojfDH9iO4w1rQVc+ne6grX4PCUfxktyjhIJx+qtSFNffNGwluVhd93wU74Z2aThB3xKS3nx1jfEc7fiqU5VaTtyGbLZtm11nUgBb839oHf9MOUZoLX9iQ/Fjk5Yx1trNUtcdVi0X0oN+NSrG+kE8/OUxaiRJIkzRwE4/oWeshFgr97q+lH0Ib9Z52cI8IB7TtwnQanldJEy4JhaTAEt+0rjS9qDEYp1fMeSLgEqOs3qMi2t9eQOBg6D2gecrwvVArTE4czzPHW/48Vif/ppenI/0CTOBb59zHp8blTrOLsD7HNi9WmR/j0ZT1EAsdtpurRdbrAMg02UxKcf1A1rl/zaD+jrg4ScbQ3oURx26u/Bfurywnf+pTQ3kskfiEFKB3XQVlAAQjdsjof08RNf3YwRw3m8A9/bXsJRNdPMG9tS0/Zpy8DYJpl9BGU2ueHYT+6VVYCk2VbPD2iP+rdnSHKn79H3hhZUXn14Fj5vAOAXYEvUAFE0+0wnecGThM0ocNsLt8l7wX+Za0/frabLRZloZvMTxth/ju+zWUDkhXIEiiFEBHf7mp1iooAkz1N2ANB6WlfZuJKGM1Ux4/B6SX4UDhkm9u1lKc+ymN3j8oKmxwfzo3U/326jPy6Dfol8ZvAXoF6U6ZTzg2LNDdA8AB3igJb2DH5KTH//3ciJbjG8wFDA1alGKlvcTlDNyOjAFrnI5+X7S9sBzbDaeXIjx3JpjKOniSdiHur899kt1rwLd0f6ELd995K2sxuALY1yzaykZDrGm4ivThoEPK+p1liIPqnwavt6E6L+x9qN7otx7dbiMpJZwjrnR0zTrtuOrBRKYxwq7PyREOCRRr3EVL4c2cJ+inxncQ7PgniS67Cs6q2Jbo7wOmLlYxjPXuP0qwUTAdcJV7H7LFFWi+yil8WzAyfkzQSiifN1148EbMCI81gN4nW45Hiw+/AjeiKF+oWG98at0STrtqypANzxf5P3J9uOKsvaKPg0p3nHoC6agARCIEDU0KOu65qnv+6asff57+1kJ0d2MtaIFVNMEOBubvZ9Vnn6CSWBl3hKvtOJtPqSmPhNf1QcPUACqZngAWHqJswo1EdufLAWqoPbspcKOU/tIPEzJb7nEnU15EUwa01jCaikPwkRvx/KLxrGZrtPMMM8MybLstiv+hXGRk/Gzerq9bGUvP029mR7FoXk4XPUE4X05TaiMa5zJI/tXfrXC4xSis/jVJ5nb2+pPjEqS8Tlw+/NVX2+JUaOEdU6hZT6rlc5nit5qPfw6IN6r6OEeQ2/yIXZDyLPNklKFI3XJEAkj8oC4F90fT+Ans69JhNIV3AUxkIwlS85hCBJ04S4qtmhBdG62ulmVT7c8hDmyM+Euc/XKmdNVVPASEMBZZtaQbEyBxqebN40dxhPrJlSnUSoedWmLYp6iXajeaJRPwrcW8gcXp0eeNirg2UVG4XEFzp5lbm1A4PEBl/CXmVl5jUwJqFDeeV97VlofoC9q0ug3/V50Aumig+UxgCL+fXwipGgf486wioalcSbZ0RNf0oIK14YvW03nQXuRCOPQKo4Stnvrx+r1cf9y957Vl/cfjNp/HnxaTttlilH+rp9me0SWy0yjvPJY/HiP9FY1JVH8gWfGUtDq/Z+u8tpcprcl+PnAy3hUb9CLLYS6+CfvDgIjD7XcRWG+nUHT69hqNsbUgImHvAGOkZedaHjoEKCyZcDFbeczLefMYNx1b5qWJ4of71x+OtOKSWwB5PzlG4MLfIpDbqccmHBfce3I1jvQRHYLcbmpwi1bVKntNRrWTj987dQSUgWiNUlT4tDU8OG3qmjlZ6ko0/4V05QifcycSyXdSB57g0VEtK/QirGx2Ig8azDkOVuLZGWOVeq+I6Ak38M2Mq6LAwJejqu0fFsZdN73r6j67kL0PBTkhk3h5SxLl3fqxE+UR3OvNLgltB1iZdVX7JRHh8mV0WA2rucwV2L4us6DnRX57eQ5u91jszs4kZ/yWosx7jH81X65gW1svlCVNM0SSOBijwLlkF3HV6WM0f/VbbK4UW9kqXc+X0SCKm6Knl+KRlVhZL0GEPF3ev+xJrvvHYtOSR6SAyFkvpyP+wVMX20b4B+OtZ5jkJh583tzipq5ssHw5m6j5Tn8xElU4362W4SEnOEFvo0Z9XKqBJ2RFJfcrm/gapt5e5IZjYTrllS0cWaEfZNQHMVbRBoTIB5ACsrFtVWGOm8BQ88ohNkS82rSu0iZh7UvLuHiqDsgb/UuAh98xH3FT8t05dIdFnY5opmKAdO2QFznWXpX7cqDepM8vErC/15MN4R+p/uG96nHugKk3Tj/DZ9Lz6eaDj1zSMwtq/Zr9NUW1VdRa2goEk2IZpGb19LSneo8zpCkr2KNqX90Dwtuh6cYXLBAWCmdK3lG93rqeODFsgLYJgdhuNB/PPb0fR9A2vUSDTTWH6N435qz8MK8xY+4U2BB2VYsd9xegHqINiUxT1tMVW7IxjYjRsw6UMeVTU5+NVMjYcoSdOdoj9SNmIWkSY5gQC5Il+zUTv7jZ+sbfl4+11ceo+n2QgBTVaGzdI4kx8e9w0wre4eK64uBPhzfVCUPE/TJ8pfPgtPXEaNGWpbId1cyyxki5Vjf5FU4i3BrCxlWhuFHutt5zWz1NlCiozRWAPhhmUNEpe9gJaW7M/tRvSjKfZxLyi5s9nhePH9QPeKL9WINr3R0FqiPUBJ3k09aU4fJh81+sLd933V7ei5E+w4kY39LzkkTH0TgZ333NBpLAuZFVTDaZZggebe812Pp8p8U53n+1E8a1A+FE5wOCYB8Ayh3Q2LRbH13MhUX6LLRdWlKmu8dgd0LdRzFCEAPT4nqzXeFNbpob8bAEXqk/pZbnpcqZ3JAc9kMsMamTuRRxP1UaNtXRqRpBepTPiWvv0xmyFSdUPFMkiVtpImxzoDRlxGdmKhzLPun6ub9ehomKqrx7f8DKG51AYks1nKpf0MQw3oM7BUP8K12F+v6tqZcqILE/I2sfnJtqtaDAUlP/24XFXjjb7e/JP21DxNS/AMiRuGJLBx//PrcgfY8rSM1I22ff91OEx155Z9qRdS0J2dsXvVAIX8/AoCt2R8UuDXS/MKZuMeGnZIB450AaH3Rq8MjvLiz48/JvMMawnc9cHmgI5hDV6J4ghVtuj8+p9AoosQeqxd8P2hKp8V
eHBZcZ65OSIhvZHA4y6lEJffcIe4wrIxGyMgBwR/VMBiVzvN5Cx73g+KIohhfp/MrlwV/3728YoNLszTRMmQXjE6r4nzb3nrfLB4Mk/k7/AUTe35up/rKqHCAEOuqU0b328/Hp4DH4qn+RWS3ng2ZiJk1BiQ8JlEgUmJ1PRbiTftkt3ht7DOeUjPTCoPEij0M3f3bpTn4hi9tzbMJnGlwXc3ATuQ62U47UCrq/JCiWluLzQe9y37re8CIISa3K73dClZ/N5fF1hEazV+EKa33Vc2seqbTc4P8jjtSpySEUYII4OnSsZ4AKuZFH+xm78XBSfnqgOldzIJaIDf0H/Xix9xIvBIvtQheXoPrbcX/48rwDoZ1MeNvcdpjnu4aOpKqrd6aY2FcDQ2U6shOiKlni9usYnfbCpb2f4hdf4g3+h1OYf2ElgYcxJLmqOXfoeu3/A+sHrFeNytszU+h28Yu+eGfBvfIYcimueJ0Dnj8Zc1MOxA3s+TeTXatrVIienvtj+3VkJIjwRIPgBPZiMIYvtQXZvxTn7W5q3L1qh8Ot9Hwhfj/SVFiyN+7wpusQvKuaK/CRmrvvZSN2ivjznUMX22K0jULOLI13c6oebrU5H4L1/1L46RpQ/X9QFTztJtoozTUwk0gv44HjDJ5RWv3qD5LUqUsfm5gBr9lolWZiqaaQ8HDRJdenjlFA2qT51ulomM84KmuaAOeRpLr3aZ+g2F0hgL1BtyedcYDLMdqaRLR41h6wmpRDULMIqBU2sYZ/4EdTbncxrUOL9JvzIFEWGCLubK/asZ0rsvH3jZiBRNUC/Mwg3lY+/PSOAhqQDvDCX8srfClL5kXVTHr3so9tB7gJBhxRzsDkq2HSzRSX8tIoe4IzvwJi+OEwH43Yx5mhA0Cq3ZhAmeAH82ZtdjxCIMTO5pifFLTbdVCUrifqQBimYJ8Zxs6mqfcRAXrAjTNj1gCTIF+oBh3WGfQZ1XUsCkM6lYD6k/N0k1f6tNw6pPFQGyrzw5fttz1/NfQHvBXDqIEljqmZc/mnjS/3Xq0CT4rZ0nCWf8+p+MJEkJw7/IVFVY1ffLEeK/vPnS27ZtmPyE3VGukZ9WfpscWalbPJrjVJCnWe2L3tZmqajxL6kKKRv+scVpF7Fx6xtKdS7+u/Yj5HJpqucSiSXOuUhrdrXX15M2xdCvpkR+mRqsUjLyPbhzIwOG1jNlJLnGlXTcwFfJRFlXFuDYhwAhI91Bx6weyyuGCTBbXZQLATx7xP8k5kX3I5FlZ0Zjv/6Gsako+JeoBNQ1lxSjX5l+MsHZPz9zm0z35l1YEzp6YxBKV9twBnBoJ3KYzKc5w/v9VR/uVjK1YtUvLlm+43fB8H131txQ7J6xfs9WnmAkP8bjuKATBGg9twyWfS9r6EpOSHyRVLGqlDWZQsHeone8pwXK6tpa8VfY/k1AbL6fK6yk4aKSBPd9UaPRV4rT8GNjJ6bNZjh67GMp+vLkAK3/RSynnj6CyvFDJNXcDM0VGx0PDvbHwvv7tq1KKQqWIbgPQQRf2xHjrlKV24rkh1xvCQDebSojTGi3ofImjYYNGeyy5Ghi4JLak9sM/jd7MN8L1tnWgvHsw7Ea95m0low0rYdriNezquUE1+F16MvmY05SLjZy8h4xEGatQ9T4AjxlATUxmqzfUIiksDldGzQ7SqL0xrOVrxavRkStIbUmxIPqE93UqxuaEMP7VniYtEkkmp9/heGKsDDdO8IUsGsLdFn7Jk0b/D+i3I0hkQbvyPwzLL+Sd5uN+dQdGuAE61lMh0oqiR4Xe7NfVKeo5jNCU/bLcTLHv82n6NxCjhWS+f3UVbALJFR5cfhWFIstRqi1+0s9oRDW5hoTqsXi9r7+wMQ8ezA9VQR3frWn+xwPAEt/1RmwFxpP8ho/i/h/12NKXMBsuOtate9nS7Js6i03hqO0g9R8/fSTeToVLn4cMlEkL17hRK583xsuUjAN0HXS1zKKvTYuOEchYDnnOnsTd16z4tHqvPn5FWv02/A0Dzt4uUiq8zMk6Et0LrHECTx8apejmsoeqb7cSozRscN9ygUmFV7BfX6Odri070mM1gfklxUmcFGCf7b3mS6M/kShsiORzqRfD8BhBqg7YdWkyCTuCaTOs7+pD6wWNKMIQtCP1HeBXdcfOVjIUDW9swYZ894IzAaM2i50dHu1HV8Xu0JD8Mp3D3mYFS1660D5W6Pcw6dKhQXZUOWilY8jlA1tf1nHnP5Tk/IItL8Yf01J0MU+T7AcPc+LYVg2ZY+s63k/ELcTlIP/FoH+uLLN094TOlXWOj2Or8UFuUPoli/FWGhP5EhgOALA5MxyhZ3au3lx/bgHl+3l37N1w72MrOp9AlJJAwMM/Zs2WHUwshO8eLdaaRuDXUlxrxLt6bZZ+218d0P+XD5evbR0kgdhUPlAGy6oCbAT0FesGCLySb7O1/tZzTBOUO+Apel9jxPRr1L3KA+UrF01wWdLy81XUd/zVCz0/vGZV0SNjIN/KmokFl3WD+SN9LFT36xBul6/iBtJcKRaTMn0hrX/qCnPx8lrxvmh0l/jMZhqygPAcLXO4wGxTSjj7mEW12MW5uZt+GIIeA70S5crUZ/a3CEuK9Bn/7PbyC+AKLbYdRRoiX+BiRyFb8zRnRqQEze4z8pC0jgrQ7dTmZdVIoHfv8RUJLlLt3gxSxLHhbE5j043NQ+tia8V/9N6D07gqAQZ3jREZgOLM1yfmqZ84U+7eyyAXd+r8ZdfsD8DbkR4P+uANeSRc0ftyeVOiWef6ZNfDIo8dO7aUWLrxipcMKBIbQfVmNfECvqqwQ7mM5e47vPeBOhl/K0dhrJrHAUEAocRIh3wBGJkAbaXYJk+HwqbVHUt+jXdi3EHpNLV+51quBTP6JeUufpKSd+KqrDkPEXb9echgFFI+SuVi3XkyjvvLQHmV/cPNPeFhRV5t6vGW5dh5qBZyTWmZefj0oHOqhTaWj8Vd9cVx89HIIs/jzYO0GKUFnc4oJtBUxA4vhgGrpka9uQ+XSLqkr7m2Rh9Y9sK0PuvJbROGf0CdMSjJDCE+zhSxClvrF+ODOnOjohb90SdXz8iPF5SnQQkok8fKh5gdGJiNB58SiK/s5qQIFkwEagHOEXY5oKINYx8hp7T5vv969mg/SrNh3814Q8yV7VsMwICYAq+iaJkTC7njcXYfoX01vFRMrM6oDQCr3Hik7b/g+B+KG/4f2G4TgYzPOA6pnaRBezwgyN0uRoH8/1wz/bvDXSai2FHJeckAPlB6xQa/UbyKEnQLhlgf0wQv5WU5Ps8/ssYQk33zppXY0rPHIFdD3ji9cdGN3M8EuwXFRRxNmt3oCF5tpiqdKQ/r/Wd1J/3/ZfV9D//uhewho0EBsBgWT1Sjpj/8Pp4hN40aYWNv42/b/7UzwDfI3ar/z6XQTrzYthaOhafsLBfTKBVwVBOFq6lVon8QS3e7CrpleLy+t86IpupQksds9poLUPqf32wGzY5Pg3LfZpimrw20jVsBdPtPS5vLYBeKOxQzfYoSxVJUxK6TusMFyjEdPQTSGW
qtrnp3dLHbNKef35GV+KZhL9qxH0usXgQFK61jzRqQlXANsk532ImpY/ItG0gFsYc3jghcWJFpUIDKMTXnMWLXJZiHlqly50f162iKnfeOT57zCIIs/NqpMiSqbFbaJt46Zxgyoojeka9wnEtIKH2xgiVNPrtWwtTM24V+Rn7djpB/ayiYh84eJTyYJTteXbL0+1S6H3/1wt6lBsiiuK4ex+mA4gLjO8NZR6tXTMj96jPt+wCW6DpcCIBpLwPg+eU71to2ksYwfpoQnKNddGP0gtKkT4kDQTwtr1RPzkyXeLdxmNiTK+4DGCFspsaJEQT9wDj5lENheDDYjow4G/aYClux6R8vLfVA3fDLdrbpc2F2WTJ4SXn8FaMzBnY0e9zjzS5cDZF6N/DlgYhKCH5h+qvUzKbSNnN75Q4NF1KZHPi3zsdyedvjwQ2ktrxE1v2xv6nHmZssHYVqI1eTI13cZTaIyT1BDaWYH7VJywvyoFpQZOgWcqcMF/d7dZZgf3hcxzHf9Vxoytsz2aEhhxSUWgLepVDujkhx54VVNR+pddsCwwbey6ME9UDtWkqdC9L27FLC3mNmTfeZ6YCbOCK+sB7+3/rdOR+T55RkzpkUORz123b10v7N06Tv14mFcx+PuGgssjNuECBNSfGBRj/fD4akuRgrOjbYNoea0OCSl2OyLv5CbFlSsL81+XsfkavxC+HRA+8e/HRmJ56+VOqbqlmzITtCW+vzdcjhoKLsHIQIPOC9cY8M3o7ob1VlfHbFuoj/forTxJfQpbnD3lSqunCW9Z3sVxQql4M+Erma64W6iawbhI6d8UnJq9gTXUACODWpS1NK9h9+r7GQKbly/JdbgQ4fZnVd3Nf09RFCRF6+vvZENluArW/V4xdTdM0HsQHkEMN33YtVSwlgB28xT+E1bxl2st2Kta2t+EhuQp1KGtMz1n4+lnzSGecgmwogFiVB8L1XOVnIXfL9Ks7hsHEgnMKUtswrtNeYTnp91+HiL9cLNVrkAh1BT1z7G37yXvYyTGn8YBhi9A6QL8jfpq0AJQTgNAenJaKnRteosNLtoC8dsBQI7SDPTUNv9Qv9hTxeTvL5hnwTX93dI5VJbJJsqg8Q6sXaeR+wIKmvKPfeYpTBmGvAf7ozZsoOfe58mE9jZ3EKM3WSuT9y67krXAobQjiCsq1FPy8pC66+ap61Y692NHMXXR4DpPF9ODfgA2h8hxhrEkYBzyENAnLfoRxoMem/XRlN0iB1iWhEX3TVwVXNV/N8t+oUV/58uI/HKM9uU9JocDupghsBctFkdsZLVQaVIUhgmw/gB5nP1H/gM0eAAgwngEqXUEAF273ncthdhZLH5onOiIuNz1sC4VX0zZEYOlQUNnrYDTRazvregRfLHh3nFTP8aBXA9mtz0znlQervx6sOCaip6MFjvu2TUIhyZPPR4vgwjVO1u6XehGFSZGDonxMT8h9Hzwf7NvxqplDUzzo9/pvtw0YWXPQ4qjShlr6mrqOC7Iin0SMpd11rIgF6NXIYf+k9iL5D3aP5qpGfnJL19a0I4Llw6iUFCLM/LrvL+k6Wd8ZpHr6dSA7rIFzTdhIWn36DipHDS8KwosMNMU1yWpUFCIUgOAszYNWyN+MLQ5BV93xkK05VQRA4e9tHt9RqvWUYg356gwWWC/xmlAO2y+M8T6KuuY4QUmeLgebSB8cX36AIWvt6B567F1TKc812Xrl6z20KdKk0NmQWfsaiAEFSKzIrNsv0uy+RnMgXs9Ri9zScWXyadL6HpTkSwYoo91xdl5z0ZZyanovqZFlx/339ii28Argo5TRHoMI+IHcrG9OQudvsKLIkPQXbncnbPQxyzYMsEC5+EKlswrnrFh1q1Rz6ulzXQC63jb+ltft+pN597GCB9XEj+OYPS2e2uiooswFZhkj6tr4KgxF5U978c903xv6Gu8hfcUVeIUZsHRtC5JBGvkPwYU6qZMjv6hYiiJFo6jXgBCe8tQqgcy9kb4ytU3uMtBCscOwRapYOSo+lFcy9Gg1265XJrC914e+wjx644zpZ7kmFWzjBl7TDgjj6ZLDuJqjl4cZCJ1STECPlrMZ8d9/HaPMJpQ+D6De5zyuHU8Vu1e4dA4B981g3pKRc2YbJnS8Kpt4j5YELKpSx+E9DPnAvl/mmavR+4+QI5Q51XzD9K2iJA/Nk61HEhkmULrF+FbNlV0+8khQ2aBpFQVDCUDdwq3efqYWwwk2jGPGn2fosc9YfzJOQtvzbIUBJn3092PylV8BK6PgSelG//q7VpLkYaUhU89PHtGcMsoodrlxP2AZUpFdSj4ROq6ZgO+Dp0Ok5M87OssSewd3sbpx4hw9sf9o+lQH9IrpTFoSZ/NUSV8m3T1b+/0FV+U9Y4OfPmI29qP/rY2JRLaL25R91J4ac/q2riwtjGGaVRMAZ8XApW9EQsWB1MAoEytJ0wyMtK3ITtLufneIQ3w5ZAqQbeVb3A00bfqIZiNIBrakRSiaMPexUqKaY5AQ2rfjPuwneIG2Vswy8+wB67hV6Vv75Rq/9SHdUQdQMSSX3kI0pFPFLyqjCjN+N7tzfsUWPS8hJClq6t+HNK/Y7BN/sczqt2XFPx+L+C+hjXLpG+5x8VePnlU1RNTyLzbWvFWwQLji8f4Kh1V5j3TrPV3v8aE82ZT6vC1bAKt1uQur5sz46ymedezUXjLVYFm42ive63gEzzRm6sGye/BCMSmSfkB7T/Dl4r4FBj78dkgQWJrGDaeyStgr48RircOFKcqybof8WkQBJf/Fe3A9P6DD4hvKDf5SVTWvsRXK1Erd/vrs8XgaddrMWk04PcvY5gANnhrxRcOZ3RecuAGg/+0jdRLXJwJUQSVk5pMvX3I984Y1pWWhMM9UVNRYkesYDRs3em9XPlA0UxWXsY2+XlVn0X91/Po3BGqC/QdkyptGZPsmyfcHjGdZp7AeRaQ0ruxfOIbl9fwvY18hCIJ+/CKzznli1s7CFgRiTJBWKEYvtoG1efwTiZcjaX6Rvv2ZBtCvqqnIR5opDt93Qg/EHSv4Z44fJ3F+vU/4wRBfGzm8M4rjO127+GYetJtWO7BJ0PQ9GYM73s9eoHEa38hLUA7u96SL19vvBoy//ioD+0aQJ7hLaC/ncui0F+PeMlI5T7+hldHI8dh7yr7KaU8oZMGUV77C3JKHG+AxE0346A2eeHNojM4fddpX5zNPVYMJNJft6vt9PDkRPQ8OKqIQwIfNcBwo/LCRfyZwgmwOe/58K2AWtlda8AOFUQAfVpIlApqkIkpsUqEbIHvmjse6KbEAFc6ll6oylTi9hoHyX3/nHs3D+K63g59Pgkhh/8M/XpM3LwN6oWiSRDTXze5gcYwvIIt61YqICrBKf1ISjSvOFiqrxD0+D1Xu9pqq8RRiPE92FYXrEQgEpnPCYIc0vjJYS6g9g7cfRy9XEf1OB/bFH86TO6Su7934y2A1DhdiHEWHt1fZtXvcLyZj2yL60R3/SxkG2ykyQJ7zK4xXr/7/3KdVaz9jWp09U+iws9nLLg9goo9EEssznOFZ6X
w7XnNnjLBjX2RZXUyPP0gNFl7LGBU/WqhIZ2Y2af9yn2zItHtdOfUH+OTW+ZqS19cNTWCgbSQ6FxF7iATTGi23eKpQD1kWG/3BcBK/xApeauGLjRC/wx4bUlULrelTQ7RRR78kZ8PosLKE4p1yGR2SWIuyuFd7znqhj25pI5H3chSwUHXNKRF5iub5CKZUVG4nmXLP8ftwJB0gwzk5Sm0ocBRA7F7/fIrEoJ2P5/d1i6vzaMugX6bvOI9PfW03nthl5Qn3/+A/KOa0klCq8abN3yW6n4uflGm9dtoj62MkDemX6RmBg6KVb11x8EAryJ4YbknSFL2zX7CUJMdFcBENECHbzCyiFx0HFRH0BeGOdwxABx5PHuEkYBuoapkdK50Z2nHFs40jMMtUQXxq30Wd4JUkmXdLD2hh/rrnD2ypyoN7tmrzEXpJSV31qyrkncCgjuEeFkY/Ia0XPychJZ91Q41+ssXGIC1ydlBkxFwTT5K1otfLGrzTLw6n4e8I4kqoe2fNUNbpDZjhnu2YnfbKrrHnfNWz4kvu8Kidfmfq7z1t5J62F5/RAKDqMVWqY6gu6Qx3MuvkvbCKHc7Fyutzn54z80iLPM0pVl1HOo3nAvHG+VFQ1XOyYa9CuFfeCl/tM6lpK58I33E89BM83hUwbyn5el83uw+a52qPnJBUaQ3d1aQ+n8lwneTlZN3pB7wYSqnUo1/Hdh23lkkpyevjqcf67LgoDL3z1q/zRwxjAXuh6ANdb3p+sR9T+7LPR2nJz2Vz5KKrhs/b0d5hpX59x3p30ytaP0nAeQKPqueXFLE/1lANg5rq5+x/n30UnPbTG5jR+thcNAHALXSPIRS+o2QJ3td5WE/oH0ZyJiAYXES2lsFXesNpLdE+ybxBD5mLChv3P/+6foo55HcG/nkMIurTk9eM1XdBPk9G8LYJ1hiAtcI5jacoGARU3T5OoYKmye66ezxaj4ENaIfSxJk09FA8zHD7JoCbO19aVjjeLYtBDKZu8sbJVV4PhjXZC6CKObD1/iTqJdKmm1tDVbweWAg4ySPgvk/1y2cS4f4MmPsWp3vaZ+jacBvhhXcvhf6o5TXjp54l35CnshVXdao/02Eq9vkZc1z6enW3Tmzr4dKf1fbY+aX3Wrcega7aeRddsXndB83u2oO/GxRt8kP2OIHaXly5NlG/hIQ/YWawmflCsi7LwHzS4MbW7XRPQdw4ixcOzvfM5tRigDpj6GRe1DfMx84XdGHUQvUJ9Zt/TY/e3FfHyqx3t373NVFy3/v3jafOd7VTlsoSUTaaszi8QiCY3o20FzRn7SV6L2LaDXNAsxj/ue0f+2j3EukZF6OQ8qv5fjjhJVWFRDyoXHAijSKqL6w5XNAoxo5YgTtvlVHfsS+jOt7gkpZ4vAK2blrVItYr6d8Iu51h6KW9dyPJJlaVNPBKlfVNXGRmteqPFAFo0CGg5uK06ION0P8uYkxbkmAZhNI1HCf3ittzT7/kiPD80WCcjoOLXwGnWuOUqg702OfnKM0HIcm/XTojji+YzV7GA/XWc6snBTngLb6aPfPGSlrZGO5AKgzopbnP4sEZRRkUTwPBC96ZJmZvweg872Ebb/3WH3hwXyttar8EAC8zbpmgOMTjjTfPTJ+n8PooCj/xAPOFHTT2L4Sh1blcE45ocq6odnSlY7lahwZYU+49bzwmXdTfm2IGF1kP4qmqpZ4T+/OJtg0+du7DEezORLCOjh8SJwTVOSzbExK+ivnavmdn0hfleAQCKDCSdgTI6v1xdwW96h3xK7fMQv7cuHxvQ5bSrWZO+Ej48jnCaegXoVh9KDX3i58qrqgrXaUPIwzgdwA2MvMVuQTpa+nabV4BX8FzLFsJTUMtUTXVX8qSoT13wgt92tIKQQsFIUrWqazoIT0LIbgDfB9SszgAUpkGgYFPlFbNBZdcq2iSLRvfZl7mtF5YUbbtNrZnwMifJInDmik+XcsFK8ia06Ae4YU7uBrOa4wMt7RTN7OY5yfucUc/FdKVVJzMI+XVip5F2Se5GU7l/nVunjmtiKhChQvczN6bciv4MsH2N9I4XlzPCVwZLrL5FEkDASg7tZu4d/kcN5H31y74zxfu9ZlnrYWX0LvqE71UR9sXvbdfC2oN2I9Kk9DcpDDs18qVzhVTRRkCQBW74XXuiQIQIZp89GXpVDY0USTb6LCQX+7Vw27hKfANv5OsXsYav3kaKlk/TGIz2fXTh816l6uFVRgjfdd3d/fN07XtgsO+iSCo3696aXtkV7W1yravswrgoSbmS7j/wqHvNPtacTLEawFUKXxCMsgduE6U4mEiUoGYHbk0Yzgcrju//FG32Ao9t97afjsBKmtL4yoAtdBz9phGICJPVC88u4OrkmujD25U/nGMOoW+N+lFr7i9TFYjEidyG+Snfb38jkak5heMsmRU/yLZa7NiD+P69SZey7B+1wqs0Rk206bVKK64uDhI6ORw875HUqBuOnS9k1Fm9H5Vv4wUxPkK6ZxfOIrtHMIpdw8+LVV3TI9P2parxj5Q/ySNqSIzQ8Fqk2TlrpzuzPDay+pj/PvYzv7n0HxRRza7qTxPK0Z7NVDCQ1IMhVt8lwFS/AV/EP0zxT06vU38OyD61/kqQKUPMlGAyUnm8NP7qHsnuP8pEmBA/I+FeHh6l3g6m/ySlENrXwvdX1HPcUDTvKdImV2cvopUfwl9XZ6kwyU6rnRshuRiru/qtKH3s4VOuTBIos+9vLW6c0WhNWBS5NcpBEvB4QYV/hjw2aYGBSq7E7Sh9tNmW9ZBwz5sSnhH/TKMtjiT7mPUmN2zPOm1TJDv6UrlTJRX0KyU7su8eEzt4ZzCifK46O7YvSpMYKA8dKxD3xkB0wncGlVNDnEi176UhAN2BQJGsZCW9oujprxBHDcrw8kdXD8cgTqPzsI63nsy2MsbUhaszYj7bpuYnSXqriziPJDl8Y7lec6BnX3s5frljjiQjidQot4Cgd0YmFkJjJvgGoYrcrRrDC5Gyn6RTPZh5aVfS21wcn3MK4scbP5TLMXtiTPL7lPA4tFRsy/15/uwIYI67ag2N+EkX4p8TbnDkjey+6r2LW+4jXXQ3d4qMv2jzq6CV5AQQq3PXr77KCRtuDOIPzGmy7wRCgiwg8M4JM8ZFIcObAhr3iPkfA7tSaZdb91pz2EaDXefWWRKfLNIzJcPrAa2SlWP0Q27a6LjX9urAvqXC42EEQUpw1tqgcrOP/hTptRZY1djqy9dVJriGsns4+WxTQ/MVxMaPgD6g+MBnzWf9M6krh8XTTYH2DqOxCvQue/CcfHnsXzfjm7MNHlASv6Qt3pWUePDPfhDEDvuERZZFQLgJFobeCt2Hluqpt+v0Aj3ice7hjbOxbcNtBssXUB44793/Twzn31lVYxn+TOHAipPoke8X1Y3KXRL3imT9ISfbMiKzTQVA5PO8Xmx8NLN0U+1XR/IE4d7IPPn2/ugK6ZsS5MiftC5hi3Tihsq3NIJWoGcyfaxnMv2xjhQnLnqoEWUEDfm4xpCTV6cA8PtKYxuqdR4YUex3l+H9
98HASzS11okIOLqkVjz72zlDZ10vh+PTOgMPDQLXA51UjNz1fiEyhNXOWl+SvKcotE3L3yJZB6Ys38XvgvW1tFdxWc73kb6cnvT4zoWC8eXMi/gQoCPm5W1Zu5TQwkFj3rrFndKR8CRkwwAHhcs5GZ1cys+DgCDgbwnfHF3pALsGStwz8IBtqn4JMYx0R3pVamxT3NHbmN2EFdoO326dknJCV9FcyuuzJQx/w7Y20NnnXodH4QBxBWpA2xUqDlghQN8YVjzXHVY8K0xu3PxCI/DXv5yjVA81eUJrW1EssxeAiONJuWMp8+WyVh8FNlofRVrjagAnZrb05E9hdtXGLszs04lIUL8gNfNd28TFkpKc9thqjoRa/y9Sn9PCVb9UYgdatNP5J2hWvIIllxAg2QjAdXlHnXAid6H31aH7W454DmBKAPu6X1en3YQ+0/HIQ/uZRVAUR3FG0zR98mxjQG3pq7QZuQ4+cEm8gKmRkDtQ/j2UNd+vCEHtyY/n/VbcBz3/phgNXDcE0wEJ4O7cuFvfXz4BqJujhNsiPE5jocngsfiDglwhv89keMa9vV+JYTLPFdto3xDovzPdPk5YEui6gUcpwV18JFCY6Ic9hcEu8s/RsRqZStT3juiTEZrGhgNVz3CPPqBmUdAv7chmK2fW3/YjLO+tlUyE44TmSr4iCG3U3pnLQCn/dsNIYUmEBnoG3pEgMCo96ogEDzkH9fMpNSg63r/cEDjytx315m+q7cs4cGrcIJYHu1yzbeNGDAF7vsRYAqKWHXpdHitkAuCSY/LfunMAWu4C15gKKtZZ7NqROsmHyiZJbbLfgzh4AvlK3u9AHNIzfcYyS6CDdtetwS+ZG+Y/eyBUatYcRA/lONfirbJqU6D2fCud3HEreVMbnm6kcEVAveaSxEOe/CQBRGcU3waGV4PQDAQkCeYweoDOCPHSdwFJhjO+vt3Isd1MsbxvxNlTgQzWHwKeKnAXV94IrxUK7gvPPHW+k5rJDlopshrz0wxwQwX6ueF3xtPTyaQIkl1sHWYrGQ2bjol3uB6XdTmiILcRlleTShk3IGfCUxAg30CwT+bZduk5baWZ5yYgYK54xLufnw0O8KmuTyIZH4nToo6DRQp8EA6KSMBfN3myf2/pAwIaCHYFePXDfiLwD0QPBtzxs8DfyICI4gVZ0bTPCHaTryh73tUvFaxmN0+HJGPZ5ZdNzTWmmvP4yiix4MIVPdLxA/+K/Ow+0GU4xTrTrB1wLbvbQeYYPbr9c7CPRInIpHfHE6BMcegbzSn0fROZj3uwtpBNKnG6RWNdn9Cc0AQ2+oMhyrLsicQgPyeho0FuqBtjVyVZTQwOJz7paA7Uiw9ymNwo1UqTcrQbpKhxi1uQ/5gMhTmodFufR5w+yJ3hPu6YPNJLixJEAdO5iqRodRrrubOovP9F7DyGlgLOymlTGf4qvf2AOSl5kbZXo+Cl86CSpVCEY71kV+h0U/nL5EbNtzg197IpxCgvmhc59DTwF2Gyx5hnz7Ex3HoZoKm2Q7ZNM0hz8zyHCVpwGpNC9nUv91oZ1KxntP24q1vy5VxwWlpDsZqirpRkKOtG5vof/6zozW+qybh9nrkeRhtEounan5/hRwSaZ96Ir78G+pA/Wl24RRmcK8ORAy1G/yptTnG6C15YrgLPTczSiHGC3Z9gxCSt3GaVvVIkzoL8O6rfWe/zQLUKujwcJl8lYH7gMB47K8DDfn3rrRyL6QXd9Q+IlQ6g5lIPwFG7zOZXWSwOqhoNv77SkYr7CK46faq1/d9VUa8Ymd+ZiHic19LnZldvW53xWqbzB9kG4Knv9/F6YFv3BwqN0oUffAsw8hPT0Wp5VjeQui5QFY/9ZeBoejsKZgfJGYJGM3M0D1HtdhDN4NkvKuqwdKS6kdZYNHH0yXwGJPSuU34qWFSKN/rf/vbi86//EiquGuHYVnXHiKktkSUZYZYuT4riiAIrlzLCBZo5P12857QdEMDHzwVpJ3lwGPywR/823VdT3o/dQumVk1jladABFLCy1wglxDguMGqbyyrvdUh+8DC/APuwI2ujx7Pmf+k4dxl9wzOWaxP4jwJJhKNJK8Zs4i17pL1NbqWWq4NG2GJFYzm45au0ZmhFv31vNUuEnoToI5naSZn7Kk2CQaTyiU+UKrf1Tu5PrVcGXVCvFnwCDSaSeRuwBiQYew7fCNj9veTRrfOeNW1E0nlLBgwT9edWq9kbpplRa0+hu/7GTF3xAbL8TeWOPT9w6jySRm3pEzT1Py2U99475Y4/rEbWowTN2tI9G8neKh+ZdRpn1/X3vB4r7clrOv71xky3/GIlSrz12Q+j3eMTrdfFA4f7XP45QQUDQV7/aDpGkeoYjUzQgRJTg1N37dAVcU+WHhpOrsU69UNqdvy53ssvrpaIxAvIR8wmAnHsrDjlej6K70xkfGiWNgZZYSxEbgfU3+HG64U4IpY/nA3S7VwpJHdMIyTRXdh9efVkh7misAdaadZuZwdJnLnsBMD2mcsY7JK68XqZo8EMzlJnrtfX23vL8ywZJMVqpVWjdcOt7M/AWBHcoqkHUd/Wfru/upvuDLhuOb7AONuO8pc4Qd1befS3tumQ1kW7V8UhU3/piJeobwBBU2RNE2jftPbzcWLWneHGlyxFT6xzShbP42edNb7f8cZBu92FyVJ0wPzPzpNnFKMZ/3PfxsxK/l59JzcBr3xEVZixHkMSCuMAZVw83k+1ihcXrDdoW6MJw2erBEq1IyEIBNd4JpQsQJAR6D+J3KXqmMU3lIMiJjZPNVGu+0eaLD8j0gPJzQaXbPivtRL0qF8M8btjCnM+D233h7pJYF+pZttPBveeoWzQ/eS9gWmtVtF1UKW2wZ6Cs18ov9TpwwJ1rr7BMNKMvg9Mv9WjnFGMP3DWL3dL4d/Fa3G4xHA0HLdIPDrJaDmf6LXGgGJ9zWq2Ahj6ISBYF6WS7BWyf40UgbGDX4BIW2q+OCD4/MGCLq8u6va9scLT3DtSNDb7+4RibvuNcbGTTDxvK7r0lgOUBbWsKqpB6PrEUEf4QeLB9gBpYVdKHhxQ39zvaXYusea11tvBY2gWAesGVrNpPh9XfJ+x/ojQqcI4YqmY573ehLNl/3lntiGSeNAd1AXjITzctNRmSFV7HbDqDvvEyOMCfJK04AlUQGesagy+mv4c2t2B1Q1jLbwryd6NoRhkOsL6PZZ2Jms63s29vc9QK1UUVWaCNaFmOGXnNawqRt9ZiKZ9SUJdIj4RibxUeI27D8Ovq42dQ7waYC+tQTO8BCwQJ9DrQCb22ds9UmBsEtS9yFiA6eYt94/fjueLYl33+SCWSNQYPvNLgxXK3eCuo4rmW/DkU0S+ZVWCA+osZJMfSKfPD1xFUXdVKWKqvqb7vplXO9sp/6p4uxrv+z2MsX3y/sAG1CMtA/bDVZFotkJToEVjw8rcQdJllV1tWT8rHHnA+jbrPbzgOMV2Zx0gPa+upv2IeaLKUH9Vrw3PwPAzjYfoWZ8l3U8Xro3MP6VCadyQWCPLEyOHFzAlPAuVJEj+mzrYe90OEfX
gmto2Nwhbud5no2bWgYLFmvIhuE4RbyfVzDAbmHjLKIubDsFuQGBc2cekgBggmGNlslRvJcBXzxB05JU0z/8xzDLVsMyOZE7BKChv/T9v1UocIGYp/HxQ6BPwjBMReMdAXyv/AotoB8pJ/OXfHq/fUZi5kcqzJpxKHwj6y1XFeg28qlfqvlcfJMMkRv7jZquG3nTVI3vVyYi2GyOXQ10bxfDCEm/+OEm3CJCacJAoiSYM7fsgWXdFebUuU9gt8DwpnVcF5z0Aow0eddgYuATAUsTfLzP8nq2RP81OBWnT9jpdoOPp1F+89DSz5Vmo4OmgHDTvu+vxk9zs+u6/9yeJ4GNY5RKnR1uSJR+Opijg6PvNvR8p+G19ypuAL1dKmAXxA0xDhAt/tgtOjLvp09SWT87xP5+SrY5NDxJUYr47dPfDlRomCzenj8E4TccBaypR9AW3Dn71wuN7n7VQBTZKbQPiPlBfjr71wZszfXlxsv+FclHg12MafGBs30/utP+5pufih6CuXxGOvvdNswwUr86DYIiM7uxRGUwSTsV+a9Dff7dbX79LGZtHqTTvl+wkUwZpF9K+/drf6eUX+oqQdOEyb6GRwPZ7ktXAdaEE3v8Go293tOZGfd5kp4XzuybAqLofrDg4+jWKuHxuvKTfu8ARtIe7AinKu0cu+teFkz+6gbOa8nsNUvvSfvZuj/cegE5lieCiiKIweu04GSSB1w8GRRqs2fw2O0G+5Js019nyu8xOKpqkm4orN4s8lM0DUIRPGA1Yw0WqGHetw9tRzoDjKXBMeXfAIrte/TrxSufUHrBaVsUWMOg6LW9wmQVZDjBG22Fv4JhbeZHbxg4glCAUZEfsCDh+/sota8Im3MEf2N/Rl3cCU4F9gqdlvX6b5yTpXF2ijvaO4LPa/WN3u7wfK8RJLldkunAqhC/g2HjKSAKGFgeg/VhGIa4frtjbvNAiWbgUTPAQOsHaMin9jAXzBYBZg4gapuQkGEy1zWdSZVOBxagfud0rg311wMHykgL8EDXU6zegTVJ0S6Y70m9GEP1sHWasLUNwdDTbmV++vezUcXqHhDa8cEPRW0ueLT8xsifxzFMEVorThUwQ4sDmsQbl1gzf6RJ9l039KwR+klh1nOk+0obRT8qVYgha0Hx3/E9J35dAgFK1Fj/i/AAIBj2gMTRwhx6UgB2jn0EKs02BLYrnfQcVjG47RhOfzZ5nQ8U4M8xipoN2sfzasPIo4C+XC64VYcG6EMfJPcHAGCw/BSOl5yUzugNA983ze6Gm/XZw+qYp5eVctNqdurEMX2VQLX2ExV15gz3GFU2P16p/pFitFiz/+kKSWtygTYzf045TFbSA2ij+R7O9s/oMtxHEEvuX8d0Zn7E5Qfa6gVgxvat2wMG+yQF9hsMMKq0tuYbnS0BxiIW7P6gQuNvV0w+/I35/LfV6N9Og9HaYdDruOv539EaaAnjymYEcB/lWQ2Tor1wKtXQ/3JK8JvOkqm/RqIi3OoJsgaAlGh3Ri8iNV7ltw4wDbYsc8DfCRAH5nbbq7RZRob6on6sT1L3WyoDNoeNpdrcUYhvKAMoWDYHmCtzgc1bh0mubmCrfgnC/Y3nyX2g0GeKNx90LwToeamqN/4f7wX89+dMK/7XvfbPmfb/HT/c/y/v8UtVQ7ZazUrjcBgHHuQsx9VNhRQCWf4f/PE/OF91UZHxUdIU87D1KTjYD332318Mc5rN/3vwfzBA5ZE9m9fshNdjGPp3aIzmrF//j0P4839woTulbOiyFWYrIP+5AKX+Lrn+PmP/+XxU6Vr+HSMx5O9YmVVF+e9rCbh/AzwIVv7vQPHf74b+hb87gh+6U8ja9j8P8PsZQ6r075qk1zxHOvisIf+v/lGcLjqa/xf277Widsv+Tvs7APRl++/AUkYj/PE3Kv8ZHW4ZswQ+HQKORP/5kFdnBu7Gw1GqkqhVozhrjWGp1mrowe/jYV2H7v84gWurAv5iHcb/fDP4lEZr9D849/cRE8ce4CKhcnndPBBFKgY4m5rllE8HUtkW/u+FCBz0Gj5KRzR+882/NMFyv7LAFXLOlU31E4L2sMT2Bj8YwRecjT35CYZy+uj7uxgx3RJxMLZLX2mZdA4XeyKSSOeeSucYVDwW+u878tjtcxGnWj8XWSqR9MVR6sWugW8escQisdRWasde4cV8wzIJOSjwT0i3TaLX7zRHz5iN4cJntKQUu2NVPPDylRV4rKSscRzZ0QLsmYltIip+mFBCmSzZ5n4kGyGNZ0NPVxwZXAIn8glmJqx12ot7CBhY6ENtr1OtVyHp6E4U5hJW4Kq1LQJj/x0UhLcXGq9Prmne/N2MBoz0vgr2Bb05fGE83kT6iQydZQzoMIOZGBcqN5XFVzL4T7TsNmveT/5pPs1Ggzttw8Ot/eLgv3Irh///c+wNCMGsyFwp3FCGyC1uBrUiJdNxRHqm6XgvGaPtfT9ONzy0/d4mWb0hCOL7XXyaSFb2vASWBH+wX+IoDewhnA3fz9O/5gRV379kBXrsmjeW1oEicNUZrdyvacEa4cz75x/lAthR4Jzv5/Up5fNm0yy78DEzJTP5Vtb72YV98TUjlw09s/21YMcyHVCJTdK83SNg81ziWwiP1wua40f9X+aRMO6deb90YWR69fvySa7McHeT+Tl5scWLP9IcYeuyxD2pmwWbza6n/mopHYJhEkumCexTS49oldraBxTAr858b28zl2t5k+k3y1my9WL+i856gyDP+x5agjYe5cFWZHIBk58OIscX2WuJABRZjs9D4PiTJL5aiNr3fZriSZKhA5AEQiaFRf/zx0gSzIJkwHVl8H1wEMcVN0qz2+NX/CVKS5JkcMns0d+mRzSFkEwlcZduhJHdENp9183rTZKEPJ2echCZIdcb3VH/mhMLXPT4F04AZjUf9G6sYsjYfTI0iAxoH654mq93oksiALHcm3tY2OzOAUFCFJz7/pipFTFLJ6d8gQU/bnKv0IjN9Drej6NY63rbN3qMlWsZqrnf/wYJoPduX9aVPSwmvmy7HmG77xWAueqXKme8Xk6S5PqL57LtOk/K+p3DVXyovsN3QESv8tI3KnepMMHE6nPxuls+R2UGlILJ7xtI78fCCF03DJJG0gzF9DtcMTCyuE1jsL2YYcZ4Bp0mee3OOIGyu+PDpyvFVfVQIprmeXmf54n4rN3eJEOL/zdr77HkOs50Ab4SvaQlvfeeOzrRe1E0Tz+A6vYX/2piFlMR3VFVVyWBQCLznLQF+Po14XpR6Cv+YEShgeUgX32UR6Ou+IBf+v1K6BE86yp+xwd0EwR7+YCw6nz8RkpDXH+v3P0gaOb1egmS/oxzS3omKIoi+OsbubO/n2lhDoMsBY4BoFDQPIoGwbJigQFQ1VUuxY5jRnDc5ky2wdueDyARVAbQ628q67/iA+0gcp2uwFo0694LmNyvufwVQQANgHS8vG3EG86K1RpBgYHDrzfH5L+x1/swqlkP3nD793PTxjAO+I20zoUjP+thvWVWTdlK3+RuUZ7uR/tz4rnPHp0fKwYbPyEFXlj
vDF16wI6VjQNb9OQ0FtwXv5UvA9CyMDpXsvwMV5J+3iV0WmJmDthlGFuw+SiDSkK53u+Gwi3LCkrUdGjnBM/VELVfPJ+vF8WOM/beIOkWFXLrCTIJoqd+3DcBu3cLCZApthj4woQzXM4bwNmBSIA09ULvdTafj4y9GPxMw1QSHFb8Fyb+bhA/f8OuosSrbY0XivAXRZKKyq/4F6HATSgkt1rS7q+qR+DfP7cdbAJCH5bWX7gEBPAtvQ2qtom3aM1PcCdSHy1dpTE8kQaaQ7C+5I0/y88DN3Ctv/Ny/8CCf02C7mrqfJi/9gjbtOa2WT/Mc1sF6iVWiMEFZhiesF/H/XibmT5k2mjzbsdUSeyoNtU8Q/Y/p7AAdSmFL+jjMQwDuJG0bp0i5nRA20P3bwbOEm7Tak1PYBiGA8oD42aY/D2IX4TEGQv8bwwCbBTPOOBufjL8mQFdQ8gloH/XI0FJPG5wZkrpGuimm4HNl38df2xd4gC8MYY7ruh/7jGazYwBSwNNqb8cuFmAGQP2esR+454iysrdlDE9uMtGE32/dr8KO+D+WivhOMKH/xhomZ6Sej2A2s+aM/86I6D56b2Oo6aqUTtNcT5Y0QOclPQk//xQjIJRf4cVxXyFiE8YTvkNcbysJeLxLuda0aqfpdQlknPkHh6DhUSN2sm1nz9lBSoo2vrNUeC+31bQCOJVfHGAieE+fmFgYqYQGIBC/j7nK1rATEIhBlQb7gANC2AlWPetkxeZf6GZaqnHPz9ZiSrgRiluVfEhnwKGBy6udz9IUoBZT8zP5cnheFSu+Y79BsrZo2l9t+0Jz3psFua7uJ2iyudLTw0RDRP6FB0Jmny+Ro15M2BflBj9ObgMhvPvpO+bPbd+lWVZsf1GhfiATneQmn7XdYWlLo83bCPwbv+k0nkxl9wAOwm1nRpJmpaV+lh6DzhSPi+ARqOA8LuvJ2HvBCHL2vmrVDtduA18ZG0wuMlR34CtaJmZLAKFISjGCBLJJbooGmu4Gh/oOOOfjtuAvXY2HB9b2JjCst4xdCF/VyrQRgxaovP+Y9J/BXLPAthexC96sozqA8mM6SfeZZplFNjRWpHa+nJekSnWU3q8vygVSVmS5L0xyLVeH3DvIx26lywh657PxoGHBi0EeLDR/+6l6zYWJfoU2qCF509aihX7vf4r8h/KPLfTEpbjA/39c9A8c9uDDW64f8U5cM1yvDTqVx/kr1ENdty48/Bv1AG8s/9j231borpuWcSdZev2mbokmD7pHsUfTXsgvm02PEH/xnAFKlqIUvB5w25yQtvez7J4nSdzOYo9FKPyeJA3jMg2sLuAQHIbbO9/5aWpKpuwfEypPWRtgbmgggR43UtihKhtyEzMv1EQIAB3RVTzKx8DVvn9uLen73baAWSLbfpfcLOFxU4A+iiG9QTWG7auMBnmpOJ8C5R1WlR2BifTjvFnQLLKoJdgOWj2YH4uFxSLwLnUvxGc6N+IHR7gDp1d1weRPu9RkqI8Bz9Q6Qf2pUBP8mU5SCpA111ncDYAIDxsdcDk8m43grpMKl0Rrff+G7jRyCJDB+FueY+jLAGPfXEp1q4wl4Le9p2my/Z4w2Aa97Nxqx+QZfuFkR5j+YQawH6jfP98lENYlg3sW2b/30D7sxB+TyzBoRy87YsMON1e/uo2wK6u3iLKD4E3Vs3WMWAeonPVADCA7/h00Gnn/9vvaj/p2SrlMO2Da7hy5xcv20s62ojDz8zuKqJw5Aw9jmPVbKumb1nG0fw+fKMvfgwUnuBTCt9q/0QecpjZnWRzmtoCfV3JseJyOc5UrAWpfh8ZXC3EadxvahqXPTKNKuvp77WfnQ2WPfitMeDZW3GamKAMnwV/tchIPIGLCz5n5mQx/C3eXmLi/4/fqdjV8A0A3oSbyIfFXfq/lm1an4TBEEO7shRrvsg8I9SA4fmiMk0VxxwkMJe0fTwtjg1eIwwVXDpuOLAvCmM1R9gQ022wBPhMt3v7vENZOuvgO0PFAV05lgyLX4WuSeOmaWDVzgQjhQcMYt/KQfJnqjmFoPaso/C870hQpz91OxaZkYNWKH5EsRh68nGAPcrKz0m/PmvlGg9r/Gx1hcZYxcVRMmQCKYm/sjNZoGk9Es5fA4B3NLsoOQEmkQx3sgiPndxhHgANdXNgzP2WCYqUN8hClDCnhv001bXTJivDugmHaO/tfl6aa0Jp5CqiEcDbNH4RyYEx3kQRLFtQESbXgfX34ZaKNkB1H3WYmDrdfRXqLVK0rDV9aa1rNqysc8erLaUq1rG3GVghtWcDGb/KYfljlh/qa5jROgB2KUhlp7UNW7M/BFpha+8S5dbn9epkDJDuhMekEiDl83LYtC/CSezF3WmhhcFa9+Ub6AlOfz5pgEw0dz6YjtLiOn6QlRDL8cuQKdkVdWAtRMcYgqTSfo3Tylp5LAaDvhQKeWUc30gsfTxTxK4si+GYRHo2WijSju+lqKmL+F+Tom8xYDQzbti4xtoI7p5u9qqeLw9aJd1Gog+KAHcjcISVoflvqwgC+6Vgt3LmXPFfkN3jkL57OPxVBS26PuEcX2YN54ralkDkLfn7+rwtsBm9r29ThVRzlBvpbeFxaWIVwb8qT5S2/SRTtptsW1gnWb+nY6rTdHsfPk1/OfOOuZf7MT4cMdl+/EldAeqezHVknmUrbgwnh/U33YC+Jy6RwpwONKgo1/wdNms6bfJaAe1I7xX4wXiXEFNxOqaregDdwtc0aI/LC5aH2F1szdz4cYwwTmN0Tnqc/sYlCs86GCplvBbSH9pBEbCb31sO6rh7egjJJSzafbnEgDqyWbro+X2Y+ZY7fjkdh6FWNFwaPSvhyjKyTdF9dXA7jYXhjdz5idQU2SYcinJs8kpfjTq54A2sPPdd3/wcDaqyNd4IwUkKzBVi/KqV4gU70QlConvep66qk2txJKVp7rvlugI1953w4abTLmFe7MMIAL0UFS13nlsDs67qaUjQnlWxjVaqIiEXiCmSmBc4u31hyKFvm/IGSK9kSwJxZs23omF/fUO9pcWQ2X3ySTLtRY4XFwHmbRfkvV4EiqkJQ5jucLTlM4b3mH9mn7Vni+XshQ7imOxrEQHsL2q3ZurHnvw+gdYWa1FHj+PxeBpvN15V3HkISIDi9yWsJ8RLAFiKEnsYfHOwbj2riJAFQrVc51bwEyJEboQqOO3g/dgdutYp+GQ+0AwtzU9f2ILYW7V2I89DEK7ZYWGKxpPi/S/b87JCnvmeyidMvTgezq/JLQdY8KsltytZehQHtON4GBNi2Eks9lGlbhxLTpFDUx4UJFpwd/3mTgWrc2WQ6hzRgOKbh+LgH6/Lg6vY4S2V+n5IKEHp+8p2RhXYSANpaoZX2GwgdqayJX1ofQ2hCgvGLJYUhsfrXxqLg5JA9biMArgZ7zr7oq8NhDXbCfWxxJFPUUhXI4RrD06556RK0g37UZs3m5n2L9lE3SN+aGoVM9dMsORqqW0RTchbDcuCXSe6faKOoAYQ/AULZDMi925xbodOGe+X5vH4KmaWEjrgsAij8GKTqjJsDM
oAYBuqaEia2XAM6qnSiwlQAKvKUywxFw/jMS/J6wjzAcyCxiT9lyiTfQIctS/BkVLARMyannF6dQw8/xRfQdyQ7eNCpLYawhdBaHb8KZvpY6cOI0ybUKxbwQC2f5hcJdPA5tTQI4QCG4eQbNQT7x9yuYUjCRI9py9ZSUhd7g95bH1Zhdh0uxj+sda8on1ZwfySRCG7PPoujsXvPeUxeY0bnaHGQH6ewxAtZoAHbZkYyIYBXTst88FgncO18rS+06GZMY5pD9WSN7F3J0KaxL9Q7wN9cpmBWWpQAUCOQPosXdj+ZYS5Y2nYpWP4w1SW13h9rO89zE/Gqw0Q4ahReuw4j83tlk1Is+MzmE8hjWcYLBPuN9JFn1tmxq4aX5JtBfamhU5t7/rJUfKu4Mf56gRrqABx3/3AjE//27+Np5k5cTipcsrYnMag8gyUAfwbrxfM09TJT32FqcoBVYhe4vIh0uIeL8oLkFoBAI02+z3u/etCDiGE4Sl9fp6hX14m63r+Fe247Kfo7UIbyrSzG+gYRKiWy9FJKi4BvxkKf3EPvvNnUzP41ECq4goIsmkDE3ku7v37HNUknLR2uqapp0fSXq0btOtifiw/fEjAeKSfVsXT8IF4j2XMNmgCGWWNqQ8F/nUuI6AHVrYpfQ96GN7sbL8o3FJ36FCidSYMPCPf8x+zeh2NB0XUyZ8HSoFPdjJOoxG3G05EsLWVcoyRfxFKjDJ9RXQDCfVatLayGrPxfDVDOIh/SV/C5S0Tp+2IOv7GSkM3CFG6JacCJMJA7TGHe7xIJ7836GdWRDZRs0EjVynjLB5Ki83W+deLXMEtH/fxQGTJC+v2dYQKqxYxTYXOErah2uSMtGC4x8t7PPZZ9oTleIno9WSwPQvFF7hnL39kGSw47cu3bhPER8HT06c5j12S+uJK/t39rZmYnHu3RyTTPi3Pz5q41R/XMcrYn31AtUfwPlNWCwuqeILd5cbrwNp3Fl21pPcYTm47ptu5Y2kKqsY9EvAOw8dATcarqNaIWyumUF4Ppyjd5LscQvk/W2EBbJfqK//s20agj15e1FgUmTmSxRXcKKofhJPC1Pw1vT1dRavw6zJsrZl9eD+2EjOr3FFhgy1G0sJeBjZKqRZoZCUCf7w2ycNhrqWjRYlCn3VF+SwRirXp2xQkTUTScqST7J00f2AQOnvUjYy/g4SKTaF41g+u/7qdnP5VAghrhWlTQHofPO6Dh/mc8cfktBSaQu8ZCuWhblXNLQlq6jO2BJq/bHjfAu/4GqxMsnVFIePIlCUTJ3wXG8wXbzRu3Xo6XpTSsqaYm77K6GLpZY0xg33lrJAz+MG4yQjMC/L23ba2CgMwX0XGViwuk/ltJCPbF88u5NhIlQXR9Y2voxdBIhUOWkjso+NUEdw5c1H64/6VztIz8zwCtgHSZnIiY8lWUkt2T3sXTT8QrsFdKsaQSCGYtG4nJ4LTJ7BBw8BNd3OgGKxDoyuYLNO5DxTK2EDKVHYZeFS/FrOdquqeU68iUpcfiEfXABTBOTy/hMThkwTp9gvzzih+82FUnhE+1i2TwvfoRnfU+zm/5Oidl8KqPhwZYAaKod+2yaX8YZ5f88udMaMRClrpAIBM9YyhhigAbqq32yj15GZG3QHY8MktDR18TG4+yWtYkTP/fviaUx/pqt/Ii5x0IzAktOXmWKyNb+jB2VOMmH6IRbkCALgBv1rq+uMU7VtOfVKvQ9tmODesj3s626GpvovD5wBwtp9jN3N1TRlr5HtSiM9OfZS6GkwZQCdbW8uLRPbvwX4AgtCnVENf43mg23qEJqBBm+4NxpIPdkT7QipQhG5Oi4dj7HJf871jro3sa6rYBLjFybw4nztvaj7ojGHq9CXPi7SjX0+noDPwNnrjRLbNL9DwGcGFGmoSUlHEzyGSz4NOFOECbkGStBuxwAsgRzfRHa9+xW1H9GebU9Hce/vapiF7wSzufL6M85LlpfLTdC8wJVG7dQoCg5/K/EFfnvg4EfX3TkW3Rr4rQnVfGjiT5rWMza6DLuQY99iWSCz1mXruAKfWDR+8ngAmsfhnXfCu5QpZZ8yvIR56r1KbloZ+/Btzvbes0dC4bJ9PT0Zx31F2GXVhRqwxXQI7rrDDC2iLfjAuzHD0dd3yC42XWOHXCSIjp4VdVfQf8vhScjwAG3pQcvauDebjAduQvyOMAExG4n9Z7k3F7dBvNyj7oQYZs78DBbNny2kq1Wfyzxoqxjb8cmCLmgIYqqiTb219ltdybvgxftoF85qJAQhFYfKHnebi/K6EATPAs80yFsBDhg53aXKWXtrAE8ifS96f2mc+cmOVQtZNyRRxn8QiKR+9U9DX2FC+5a4R06bDM4i/BLtxPLr4rMHj56Kkw/r2VReZNqafMj5Fd89VtnyOQqzh7BVXKiqtsa2ck2lSGn80HU++a1dH+uJ2TLcXCH2THDFNtlwBf4Mpps0zLoq/3aRNe0VOo0/yseW215JzKtfwDRngxvUajKf4hVqFUVpoCVSBiob6bli01Nywcc0V+EDoL0C6PzcdnTnZJLC8DFsy1zAGMxccP4oxh2aNP+2aCFzksW45B1mIncZqBwo13BoFOR6ryIiTxmX7/k26Syo0PWNTugl9eYULNgYe/l4G/GVaY+k/CILMcG2JVbQehOWjlt8aO9h1HurYh5maXEnm0RsmeVLmRJRm26L7J+gdJGAZvy9mZigjMZW7IxPQ9Hk+ly+9+Ht5JYnOjhFTMd1hto1N5wRgP1UhORtY2RlNDr2rD3pAnnEXqHICqHTZvVCvsB+hToM/XqL6PHnotuH+wF64KMJTmzoXyo2809qvnxfHdzKDZ5qZ5DejL3vFWneg+37tXSLSjQGpPN6NobYFHpMGiuEsEhuSgr/ul1BS+zqmBS+G8+P5Jr1v+330W5QdRDoBu2UyUzzQgdGHFTT7Fxa1KjPNaQzR8eM1n/UWGFaXwyUC9Br2wUUB5Llz8hl0Y7RGCW6SMHsMqe0XWC7cQpj16Pqfkp2YWsefNiIbfVcEBUNnSwj44kpsURwKC8xkvfHb42+rGR/aqC5Y4NXKN8xKS/GMcUEIVlBbkjZ6pjI3Dliyt0IAdR5il+7ESoBF1SDbWuZ/njYQO+jgkVqM/MKI8TTOrrAo6m5xBLFVE/3yD+ez2DOCmqSm8pNrWfgrNS6cg+FbKG7NuJaBcA2mW8plb1p9AVnczOg1oJFFOmbYAZ1Zv04e2zM2X9U15O07nd6J/PUPLEbdmzx4mBogQ10PlIq8FeSRE4UpPbF1iKMsyegkEnR5obKlM0RO1fLTQPwLcUsU08l+OdQzjJ5KVzaGG9dAM/vNtcKOZXLAmHtXERUJBCMoZNRBRjrRgswvj83bgUJkh6Rode/z0szx9lcZrsiufHiumOHKYy0MoqgYFlV3NbqCrTGfkRn39Sf4/qZX7I2U5DI/wRRj5lg9dGOU2QD3TqXwATqxjfNXZGyla/jpraMY6nxvuWk0Xf+NijCGYz+X05MNPU0xLoEelN9RxNOcOeCyIM1H+2ZI4AWCNTUX9CMqUHMyR
btt5fd5IoqzuDx0lQwo5Jtb0Xo7Tr6QLP3wirBeUyS9reVcMuL5ANZtVsMVev3aZTbjzT+4AFigVh/HmR22biHWOSXdgCuiiQoxxdXq93vJKa/BX/1BoHdt7LgaNFKdpbZKwwQZ/N2+8HGctk8NUeL4DQNjGhFF/3VFSMYY4hbaLbdMXrTT7bE+FFiyAnwIKApcxXCNyE0K6xYvp+Tr/ZrgfgEWXFrrAysNW7FhnKA6cf7ht+ypCfXZt91pyRjqom+b8Ck/9oFCxKdfj+Vs/pZMs+L48w00LL7IPvTlzjP5FhHRJpb07hPT/XyERnl+BB21r4w09VQHr+ExQrcy8XXXwpuy+LLAssGJ04jqZzP9hDmyrOfjX+84/+8uiqM3TWRL2Qmq65wr+4OPwIDefD4e4rts8Rf7LlaSkT8LHYONT5dFM1/rapusHCjBqrl+v4paqHXY7JgCxAAqt2UMkUoxQbbtE+iR+GMnlBoWGddPqsskDjJxiY3UezI7eghoE6dQLWCA2zZ3Jc4n4sek4sB5PjGkpsWw/boQeSOq4Lrlk7a/BD00azpYbdJNL2NKzAFoJPAKygEkNc2ARTGE9GyZ0gcW+KSCUbuaU54BHcOdjcCRA+ouo5AVN/Fsv9hOMid16MX7HNv8KarYbCJgAvTRgGz8L5YrnNMyzOx+QwbZ+U3rnzIMnKfKegVBa3ylbR7uO/Fz2o+t10tnIc/IXbq4mCbEaZe1W4ThuIqMCHJ26Mymtl2lM8f/bBeZ7GO/TiGjB1W4l73hTyzdRHCaeaNYOaBFl52RyQwIOhIvdYR/VQKXBJ4NhTY1ArR8bOManEJHw5W2v+60+A7YsL9wPW3zQ0588+8bhZZMhXxlc5fXmPqP16Gcr/TjJq8rELbrzujZ2hPAjxxsyu4+z8a2YTTx/LCLuB9FIIV5l/eGrb+6d89fO+wym8Ry8twQ8Liq75XQGBr1xVu0MA3PnlUlr5Z68zEXnvipC1vge/UxrLKWH8ErOJ0mH2WF88QtDavv0ZgZolP3RPnK8F6pjlY+fFnfNprV0lJDd7Wh2Pp3FU4oaIBQe37/YBRDDaydE5HpypAuFwJ7Wooz49JVMIdPMH6C2n40L53SLkzrCbZc3kejlbAoqIti6JF16D5nhDb+OjzpkBJJtg0mAb0kF+yqdVOo1tNqC7fYVPFKsZ+M87t6fs986pZDcn+RJiv5cMoJf6cXOyIHiwaCWo+L1AlY3HonoCQl0LTuhgqhSu3ZQknhQfWdpKLC9SS+PgUlTcZ6GRkbRhHb7bH08JZ+YXXInWw4C3SMFGcwOoUUcitXwahYX7RWqJK3Pb5396ZWcvEuxPaNL7iV6B3ex2rIidBhWkc638gfVdb4LZTE0PRgGJ4zDAD9IBQa332IfnyBJqWuQ+f4kh4ECmNgPjILfjuvLsrHwYOVG73x/Tn078/sAmmGsQ5XeJvTizUDsBeb5vv6Bv4wMLQ9m35R75VllENEVUEXXCA9MAqSqg1fCY+SxAzBR71yWinrTUIbUUsJXhLJ4KziQXhiz1f/IjF0uz0PYDNS0YLlRToxzf9uj/uN8YAS9U/hInKziQQs+RuvQhQi4iKRMlubdnl3ujsU/hRkNe92lofvoo514u0lnrZm2I3OpLuwjFVDXGm2WLHl2/HkPmd+1n679HRfwjyCrPxc9Oearp1WafnxsIkb32/9XNkJIANk+E13Txpyi9YMaTqB/q3e/MWRAGtbMfQxlinG4jqAAHDWcobJtTCzx9/rfjmbXZ9uMj+ynQKAOdD3f5Ks6Pec6/whAjPxqRPL7DN3jaPmh2RFcDOAJOePIMAjZQbMSLGRgIYC3mhinrnQuE/L9MpYFw+ITh7Q5i++JP7iS2ngLWoXoPkdbk4iGpmKYCq1ZAumD0tT3l+/5rEN4DTenuVpar4C8S3Hz8MP6phV3x4bf4Vu7+c02Bkdd9KnXFkIMFK1weEsec/j3G3QKmtqnU9q45Z4/UCge+aGqupYQwx3PmcJjPUHkTDug2k8t+PvL8ChjR32s+rEW8hyaxmhr3fUIoifM7T0LsswX2akovqhsUkpBev5ZM2NPqxtG7dPv6ViPVX63n89mRtmoWeToHkKcf3Rcsza5RgPP1lYP9MZFZJ+ueLZqZs2fU0Lz3hBYXXZM0uixm4ejty58YqN3Xj/bELZwrbZjPgvT5ExS+vWf8N8+F/2p/t6Uf7Hs2Ad+2e1MU1oMFamK/t8V2P3u9QW7E47RRzEHAI44EiUINygeuy/Z4EjruCHhZs2bXC34Dg0MXwL/BobgEIHkbhcXJ+eWHFvWJTlinPAErpf76zPPe/mfV5oJTW4Z7VWYUd2EsPxK7Au3PAdp5XgAAXmleLfKZSkcZJhtKHFJ4PpK6T3XU4ReJ+IaEKe1KuBNxfHwnt6mHdXDPzT9gQk/DJTdrXjsrEzFSXG0hGwBwTt+g4TKOHqyMEa81EhVY79Dxmez0cpv3RVf6FcwQTt47XjXFX5GmG9H7JdsRFLRNf21Xi74p/6tMJNZEmSpDhVnmxw8OMo6eY4fnczvxIde7uFU1e+eysM38FMhp4sA4BakzW9vjAX0R/YxxyCyzgBistrH/3AoGchCNFRy3LZLd56Cd0LwW9y8lZXSAbXEmdC/DT/FXmdo2VZup6tbc18TljcTlx/dSFMtQJJrD5af1GCwv9leATLGVcFWWYOPisDfpcrclbXRsPGsbAI/SlNI4qp7i9Q0aakTKMb3eitBnakyYxfxEWmW+Ieofquntv3i9/u68y/0XRHEf4sUJR8aK6j2OAemmT8jQhA+hZg88kEUZOuWjLGjr4dvZcAaFpOSBNfY5HHDnkh8wFrel8U+qR1MXRzKcGMmSSIatTH+/lOX+lmB0UkkKepoOTz6Yd58ZlnKpkWKrAvO9E8KvaMZLIsktz97xE8PzD8Ky73TbN9T5JLrPGJkQCE0H+Z5/fDfV+HwNjB+UupQmbo3RamX8462/H2Jr2/EdixzZYfxX7KL8dRZsRxav3dVvAARhv/zdDZzrNqODpbC4lL5E2o4BVfxbNDn1LF+egrv48EoRWjaBQSj1tlls0CEeCkybuyaYk7Cd/IiF/bYrylqLQAKqb+DXGIJM9xioYqwu2nU+AdhLLvC2TZfsIVQh0L3GDDOZ+d0vXHLpBLNqCCHzDeuevoEBOd2LRM35wI3cns/cVRLNxzhE8+Dn8FYgEdp36n0CYLR9rBvlDJlQ71hBh33ipXLtGHzoVAjpSABGsLfC8REKW/3Dhd+VfLkmDnNdJXouAs0hDdz7w0xI+WYtMKruSTDzxHXU+fihMKkbiYEIMgVfhtqOMGusb9UEECKqvr+rMfcmyID+bXZU+F/9+XtZkfD6TbvkQM7tzGwhGBdM1T3OZf3J5385Ww75F4Ak1R0XAYeSYVnOro//lS4mBcw/dlIEX0AUQ3dWprant/P2f8yHBVer+pdUJ2TZ2T3uDyy2r/u3X2GipJkdMV8+t9EkXP/F8SWG85+V/h36+EXtnw9Uhgya67Rb4rv7te9CZI2D+cH/AOzOf3GCNv
HK6u7NJIZVuSJvjwwpXy3feN8dgLdq5AUjfEJYUUOGBbxLfLfxABKIRA8CrYF6C6HCXUin0stXHE/5deF7UkmRS9kg/eMxN2NHRgYTX0TDL+dF7LZrejSsQ0l23O+0w5fg6nVft0uboiHVEbc1s+VIO/YJOO7DNczF8sNOiT5lmKruLvR5LvIWB8zRWc8edaXBEj/cle3VrbT+7+fQ+eKtZu1tBqgJdOQXZ3Plj9Gnda7jc7D0Ow6UpTTwKItqK3cPaeFmTdf2XFJ0ydJJ9CmHDlllJB8nTkVA4/icmqB2vLvEhX3OpPjhpsOp2HVDT36vIxOZ/94HTz/tj5Cs75cFXVmcEnzEJ6jOG6gC9YSeEoTP9Z5yAam5NBrW5CBJbD16p2il7lv++TP5ovE43wfIc2+3mJX03g73U4AvAQUs0KLvUkUFaCxQ8G57VB2N0alhX0TVOJ0Ii8BonrL0b9n1wA0UWIpNOWMxMKWRve/aRvpaP3Vnjyc6MiD0NSp8094zyxiW3pO94hB+5wIkegr1f1icCHDdfWKl0SBnUi2kBL6jUXmWSkNtsqexGFBN/sesLWNkZAqjXPaTcSuia8V4G7OZEksUtzG0whuUePfdZ0E1tH/gzouiTNWU9lwhJALNuqov8tWupjuvLimNdHcx64PlQYPW2poL99s68xsmeccm5PaVgnwT2abnuy7LkC+zj7NmytG9rKhGzFK63HXPFFIkUIwxOlpmleEA8mcTU185p1o8fUYYxMZ4ry9vg1PrXd4kuYGiwJIOLuMHZBX1fDSewDT5vPGmOTPPwc7wVnIwp3ztIUReCRlew+prp8qbxdjxJzkfEiuzQP89RmYSB2y5NeSPm+p3RZ3FofhQs7UjX2bZT2kRJH45XFhccg1OA8TYDGpk2XHNWxYEY80lW+U3gh2sWftJ53qyejgHgD673o2VksdvbN2iRLw28rrW8nybzENzi3Qlmv14laHo+rwDOgyX2Zr55i5520u/gBtN7MUI41heW+PO0iENNpkteKo39AH07IVR/hA3MgyvMUpOL9rmlkYE2eqXh+62dUXdJw3XtmP+1A4F0M+fAqpt5m3WpRgLKWZvAnuAUA6evNhWMHJh2EdD1HdQtXXyXLqM+5Ks+B9idjy5jLaH6N0x9bZz2ixFf7TmAcGkZWYeO6pOJswoQanSdMrnoARSSkW6iIYhjwVOqFv4rQ+76JtwBj/0U4nKHE/QlVWY55kKX6Vd8MRJM7uNVNwRl+ZUU8D4N+Vefn5ZGSxU7JA+/nLUtrGcU0el9byge7E759TwLiO+EjASgA1utPw+yLpN6/D1ImWrmbcBs7nGZRZZWiCBoOPHng3/fxWL9fua50wBE6DHqZ5FBzc1cnck0a1NEnY3BqdGOIdcgw1af4OA8JkDJM4R1OBuIarAcRwyiA+Wv2cEz0wBVf6b+q62Haomz6VZxnuh2koRwqst9xmm2f92Z6ytbXNlKw9AwO2VfgsDoBLDU1mMCHBbtWCbPqhUFktT+98UtSf9MHY8LcbhSBw5NvhXiZT/2eutvNa64tuLneXmP1mwjUrDOhVhWb3XY8BlhiU6Na72tHljUGy4u/i4/mQhMHgCRPeiWGrHWLC0rMhmVtEiz1PT/pC85xhY9HXMVhBSqE2JJMEbZ/jglWiM04iJCc04yjn5YoYAniy31xoYqxM85zPEKjCVTOs0r7QIJYhDQJ1QDoAADieRDEh0tJG3FeCcWDjRQoqci64vX2ME49/ZlNWrb2kREtiPSdoz5AHvfFKmrHjN8pjuVNifcongoTz6rTKYC+9qMocd0Rk0SBKEPFkPMw/TAeV9svq39S5ne1EYGhYzh3K9ZFviMVW/B8xnMIiePXFelZjUsztQUw1cFkmC8rVZbh29CDY/yblj0GQKz+1WYnf7+63o6v0hG7oLznHdmOZboEKFGg27u0A/RVSAE6ZSEqOBuedQ7A+htl3tv9HeVAGymq/AYLLnme924VD6dYm8hNPavdrlDUJ3tubucsOuLSr1yH0lazmgDzaMQ/VHrV5M60RuC5W/WxUByPouhMj08COIEvJGE9L6kIe1uFPtLIMM6TaiOWXsd6OyqNxeCQ/MZwhoamV4jshrsAh6/2LkAdSvKuPh+AwQjiNwiUm2fyNZTD4s9Sha42VxG6yO3mvu8vGZmeHq/ihEqD5y2c8duHgOpU+mDYe8Z0PrBhhFW0PjZt3zbOXrw865cjMz3AtXYoNDJsPkYjBvfrNAvz+IXZ3yNibdt2sn4a5L7hkGvFrUNJWYMoQAy2okqGv56B1fr1fKhOGBcG9jk3g6vKUoVZ+lL58hdLCMq2SiVo0tB8dO1L8nU+ZMx1a/aZGChakl5ch+4r9WFrm5xMb3UgnjdcuacToBnUgShfhkR/eQ0YC29Z3jsHNOQP8gdCJMjyF5Ztnm/NwB7FrHuE4J5mj8QblZtejVACl2/B/31ffun4ETlr5FErr1il+3WiNGDFSZjeN+uHowxiwXEtH5ifv0rHb3pU+xqiQeKLl78umTuQ8wzO85yDDzfH+jGpm1ll+JOZ+yRwGoJYVP9Ipe+3dqHMQHXy/LwB0LWCFPvkSkM0Xpn0J7hT9nUknNY2iL46gsT/ZZBx1ZG27+DWCTf1yn5W1vQxEksgChUFFRMsa8G/OCfhXHfI4vm4Am/20PTJpBaMtxqipi6z/77uxvJMyO2h5XCqLhMSjfmNL/u2rZ89I657/PTnJ2BT0g39jAGQpt6zLoEI/jf8Icu4WAyOwS7R6XthGoP4prRpYS27KkHXmtWClWVnaOGm8wBWCkZ8KBJYqacFo70Yjj+TB2AOeL9ud/Ik8hGIk+i1j4emn1kxIWeRI/CD3I+RDtvNPMpj+qBI90tOyBQ/48WhAwQeGENsGstDIEKyjujQxCTl2s9nn3m0n33U8nWvy5u+Wz52FfskmJvzTcLbjuHuzLdehaKc2yzD6bv+QoH+RGcettcSWA0ao4OFdaaM9SvIoUxz8NCHsVCPR7vm4KsYOKGt93OFHhdJmPxwZjtAdiC5Ua8n2M5eX29k++ImuqNa5se/zAJcSWc0TsWmSgm3NW9iSZEEY6OTMMfEuvA+2d7Ku6CeQByEEd+t0fOLhj2Um8NCmfgwl2+AmweYoXMks2Mm6yWJQ6pt2v4QkT3UdOH1emES1F6neyu52dpyccL9Jijquok6PRQTVYIwwPpejG8D350eEAPfORgb3svv8QI/BseH+jX5kjlXaBv4D32U9kFyf17zNlfUji1rKgNoRNPtX3wF7Ol+vTj9MlQL3mo4fP3RIkj3dZCCZ9xEObBhGImCfACiLIG9AquCJPn1+tUCnRQsrRLeRXG+/llYV1H5X+2nUpRl04tOF19WhLu0kjgNa4tO4WIor7Bwq39JpsGncBhOTfFd3BrLtoLVUPdEue+2/RxBuK1puz08p8/dVOXbz2Q6Uqe0Hh76Rc64nqcr1ewwPFg9/6/6zvVS7LXVAPWEqzBwUQbrWOFQsVAFAnpQC3iij+N5v5+/xwQdFi/W/B/XuB6A8+kmrdP/AAR
vWdbYdedQWmGoqvJ6lNw5ZeLFq/JyYOHZum/dY2OZ3l+ImZCnindZ/XgnZrh2qUeovDQCgmbgs1ZS6cUuXvg7GeqbGQhl6oDRKv0NzpLDJ0FR0dxRWM7mnyyMLL26S1FlBJ7O8/k8Dt7O6SBALwZK/N0IqLGLGARx0k/RhQSspwSgyR3HcRg8z+w1j31EGUdEbPXVjnxB089R/7oV0X5uRROAw47BrppvqR3Rv3lmfL7bK2zX8UUyQ7APgGnNSiIEShsuUDkw4+sbIXYt9zDXZktf528UMRSJSIjGKaFqzCmgN4Ug5BU5HZlD8fH7pW1238s4xSDAI1+feQW/eRRwBHlyTDB3x+bVTFL1GDpMc28uzW2aPPomoK8Qemq6hjC6tGUjOAa21aTFjRw0rY8+ytaakZOmTKQz+5bVNqv2HTwf4/9b5ijE5ImdOMVwPRYMRtvKwZR9wiZNr4tFZ2RC6KV71Zllmt03zGEngBP5YVSsGGG7VMv7Vb6ZJnxTXWE1zPEvx/faQYTtXSABIfHv898w4vfX87u4lqxfvzAh0hNacHW3ZZGSpe8/6dOGKe9tmTPSzhiz8J26d6qPwOhNa0YP7lw1fdpLMbddO3qt37eVQLdZEPkKm1Jq4N/t8aBhC+/AuBc/bSM+MqA118t6tvkm2IrwFA6mlEfwOTVPeNr+yc9ZNpSBE0uqqunlZo9O2/AHrDIH99A8Mv/dDMUmKwrGffEVPU/ye54pP+aT7q9J0Lri+6WlETsfFeMAYnrsTfHJBspMzV5sYkGPkgeuUMy2UF4iBbji+t2HdVIzDrbwTlmH0GPTaVtdYHOZ7Z9NzdCJQgfKXbXELf3MX0g444gjGBxK9tdOS7M2EjZQZWmH6TS8cYqpbxplSbG9fX0rzUKTKDgKR+CCqN2k9eWTida57s89q3OrFz4v02YJt0endSDFLgpYMhpWLbUYweHSafl+qE0J1hQ9suIgqMwtR3b9bulyKuhaWAnv8v3kLOTNIgoQep0PeK5wvjkTVHIE2Lfg9Yx1lsAK1E8+EAviHBKKFgcALb/sr1UhYkjTU0LR8+Yc8vnk+RxWNfp5XpqRzEOdetcwh0i6LqCpeRp2UPWXO9N728DYyV9YQcWC+MWkB1c6y8sVrioMhuXbcZMLRSdg5vBWax8XGZn/GF4AwG0/l/fHbvxe0Tz/KY1qXk2Xl4RTBIObEtlbzU7C/JIO0AWOOtufh9pGMmPgAA9BCahDn5ChUia11ytQNbA33h+4L+mGrST88dqlX+uL5wM+LmLkkbtdT1gx+5TR75u1uO0u3w4zLd4Z2qrdu8uyr3FmJbBx4rSQRJ29O6QPTTvcmOh2Q43+qpLoJ3+XqVdPY2lY6qWC101+zetzOttpugpamlf24+SLnG9iKUseKgyTMbNUAmJI85XNDHfyOeZ/zEQMAL2Ygv+u6PnULes80etmnv6Nf7+WdTw0y5rHfZnrfA2uNfe5PrXH8lnxp9KJ6hkC8I3ILOxEJoMbdLa4G8Rna1GRs6g75+kXPLVOrj/2HijT7iaBIuo4c43WsHh5LU8AtdJEyJvQGjHG/IuI/NVapmE/30JMzuCb0mxq2PucQ8tWeBiflzMnphupwxGyfnBUYgIVFtAmqNYJVY/uAbBTQLKR2G5fezedN+OYJq3duPn0aOcZtkWQzaEbpL3P8MYi3I7RA5WgyFAbGf53cR9t4ZyuANSi43ysDCO+OD5uX6CNHDEs8+7NhM3ZW7ZaG5QNZUZx3xu4qFgYuIb0waxPMwVaxKhBaU0O72RzZmWPtPmuyOvR2OmnmYvsk+4nHxKJmYR76+znpk0pZ8Pp8AJHvl+M/6UNa7zBBf07MuiQaBZ4Cw1W7q2uF3Qi2Yjz4iaMhcktzI2mj3w/SFXtxwNL3ZO4cV/XxVCcbsigsxgz2kw2hfDOd4ZHDS9U6Vt9MVefst8GcbrHL++/v8mdnsVfS4kFnTPCrdbmRHQvZZFMNlLF9VzXJ0jGTRf/1vSSfvUVG3pi8MzTXk91tjMumhZVH9w7PbLT3gRw5sVURjAcBFJIUXuQitGuyGK0QN88mDPr4ink+OAhLcz2cvuUDtTP50kpQVVqC+p2QmuVcNjyBT5GIN7pASfSMk9HpQ7ERz/ghveG2yo0Nz/HuluRo7AiR346SWFCj+q6Z1MKAJ7HNqnCz0fiZ44Kx1nLbRg/HWpZUJOnXuUXbv/3ebffb2vNhLwl7wZVpL9THlf6VhbAJMkQWnPh4/uEtZDgrWE3VOYEH9ZvLqfFU1dEtJiuodgbc+LJgmqtfXrA5m7n8OAJmEkFrBRQfQWf0t2rO/fqESRe506Bxdva1b0KZT+VWUmNjnE3yAHZFmh605W111Rwe9qY9k3cqA+jLyFHx/+tVj3j+9vW4HyGvTcoD716PmsewSlixFbm86WirjrzVTu1HUQf+isQgOpu0uSZh7XinyX6eHXVXPv9/ZKdk5ESMRm+8ntRwUnc+qfZPNaJytJdErKkRk24UwwoOSROkSs8Y1Wtp8TmgUnzY5NovjPF9KX/XmS51FA6u152SFfEaDPv+gFAubzLSKMl1D70Wp2HgZCIXMJ1iv6QxEsfG9ZQ1UeeLSoinNB7u2LYSduHRaOUNcAMLKXV9+xUZzWIslySXfKQZG95RPGBrHrD4iLZNkb8HZIeARuVPutzag5mKqUtliT8gCMjSFIxnY7UvecbtaD1Qfy/WKHh8QcgKPKfU435DeCFKC2KIuKZZY/TeW3QN7VzroguJKopbnJbUomQZlode0JMMWI7OVc1AfUQAeyw3T4ZZ4zC2CdKcHfAQ/fav0gs9A1YoeZMvEs9n7o64dCVC4fPOhH6tH59yiWbZel/PAQP0OLdMhxO2eTcuUPwIcQnQU2FiKFhoqmCUHiTNR6AD8ZVEU2YxRopJRu6tLYOmtMpjys38poisQvjTD6K5Vl6eufmMkDax5m+owU2AoSpg8AEPkvVHyEN+jU4lD4DnkGL9TSbtsW0YOn9fz2S71u/nVv+mok6F6qaDbBMk4FHJ/SCS2oEzP9M1sCA7EX8n8PSi3k7bqjqG/w1LRSCNy0znncjAfZIPjZG7lpFvK3p/nUCgozsV082A/zQdWLIeworz/R45gRWfXTtsNWR50JUF9x+Cxu3ReY9Hm8qI0I/0LpvnZHDx0egx/fx+VuFBJil38vuM6oOUXdgLwbedhhwvggutzkuHPSvMhziW1qyCLpzo7ZtvbdrAfAG96k+afyF0rAiOqDRT1h7cMdQR6AsGvvzFGeZxY/Pka+8DhtuufUsoCbKdmCxyGX3/O5ZE3Xcb1gA1vc7GQBGaCCDcDIFR77KzWthe+Dq1wyVOX6Rzvzzbr9tCS0TbXMTVfbKcVwKp7YSFzVrxepe3oXQQPCqNV+SzH7KL91yZNJ61Cv1DS6rkLtuw51VjC2o2dlp6EF9JajxXpVNKlb6cGI5utdNZYJPA7s3qLTGJRnXzrOiN9+6KO9E82qSlZWqUQ8rSWmDN+X4l7KxIrvBPgSS9UxwSXKPdC6tLydBN3n0OjfjgwUpR69/vp
4xArblv74kzL++Uq/HoyWI6StJ8KyQ1NEBI2QBk90Gq3zbk6/Tlayy1eoZ3RM2zmXWywsoonF11hk/LFGG/jwSVH5KHQo1+ddT3UpESvxzDtxXlmQtR6VNA7jjdLhv63Zlf+zTKtp9QSLQ7xhEPWXCIJJltVX1aH/3MuwXFa7wgcI0aoawaMC/cXx2SbN8vsjH4+Bpdu46V/VyPxlG2JjCNX1CBOsY0G7/enOBF+2cpcK3hUFTPS+317ItYjPZ6rAasEj84Z4hTRffMJuBvSdE3AMylzfmvWX/Q29nP46y9KsHPfOyDAOYjwE9YfA95+jbarP5qwd+0jLcuSQhqaRyOo8qTmQIXzopU61brMUjbJb0/0aSWvCj8igfGKwkBzbnAHuBXs0wEuYwjs9UFz29ch3ZlDyguBKAaMlSOjdgM8RidAiYEEGppvU0VU07ZFpb12nqTn3w2zc73zTH7AAmlLlM0/dDFU6gtGwOvBQhYAvnr00C7nEZtBwOczZ3gH5R/WoHtD6PYTmzisNwRzlp/PlSJPKQk7ZhUZa22iWAF09lYYz9yevP1+v8G0rA7FUrMHFIfDVNe5a5GelDC8d0ROWZfvyFNkScI7qpbQ5t0+y6ECabbuS2KsQqxfoXLs2xICFZzmcMx9yzeWoP19EquFdii702wyG79pXoXZGJynYBVFMXtTk3Z3sqTsThOyKO7NMef5MVrprQV1NE/CcR4vTCfgJ9rbB9ZadkFqmO0ny9r7zkMQKE9URvegr5MKRx2/cDd9lJnqqzXIwxgr5IJxuWNq/9KZ/O9oEqxFL36WeizsUV6OWidrBbR0H6PYfDKKofqZcmdjMQLQQvHrtrj9tWDTVCeMI9KKiCREqTvdGZchUdeZrU/AAc+eIYaP4aoPPKQ87f76llDl9Iat+DjME09EtvGEeZAO8eqxvjmgw7DbGY5b/sUmRW4hD/JWeA/9qTHOKKVxhc85LTqDyOEdr/DQv3vDfYuGxIpktv2ZrSYMbkhCnD59OzVxSKl4tDkX5g+HQ/Ra++CM3kC+prNNNMoZvkClqfveHBvNYMqC6PamWnag5z/QKSf5H5DmvuhWLXxfLtmoCb2cSoNKVQngc1HIiTMP6jwUQ0LGHfDya1xjhGd/r1iHMXZrmNnLuTqTK+V+6oP+J7fiKxftp2/RoRLOdy3v5SQLQk3TLbiZL5zg0WMm+SNBS+Hvryed+xXe7IYGlmAwCZYXitboSXJgxTqlby2xPP0WiI/6e399h2VFm2QL/m9kE40cQLI0B40cN77/n6l6lV59zbeOM1X429R9XSkgSkiZgzMmJGnLD1AmtW2OF5sZbjazhX3m+ttuW5jbN9xxa8uyurhh2wxSD8xUZEmGX1n272KWUjq8S31+h/DSzZSrnZv5xm65VL88Wj+MveDSdfXqqaROPYC10dTW8JZXZCokbRiq4qNhRJUv8iuAcP10H2k/R7vm/ZHZNNGbGWTDTGYIaEKPfTVXnzQzhzankVYOccXX9+qDlxOi/OH3cVMxM8s+zu8VIgrslyswuDYbKYdkyptEYwP05Mp0Di12uH9TDId58TMq6XZ7K/F0IpkZJ8+eF38ToHfK6d/AHtl68NFa+mafosMPOQ5ZgZXb95dxdxX70yuUIjUR1/+qL/nAv2DPuis5IN29Laoke7qTZmoICPHLqZqIYFEiHqJcr+Jj6YA3+5Im7M4o89b+VRsqRZQjYi6rIAI0l7vmcUtMAONH90qkb9Z7JklwHPTEEyV9379vevN5LWGAaW3sGc71/tF2RVmnc70aYpx4vVJ7LjZhXfBs31Hlpx0pp0N0/Js62mOtRpm5fvT+wbTkVv5s8vgIeXQ2bN5wIUE0Dnot8LUTlyDNjV+EE8JClxg7sMYjQutCDEN/oDy7K2a3Wr7iWVis9ITsy3UlHw3zVAjkFGY1v1kg+5m17gniZ5aceE3IrArxEUb1IjeBRDi+Mj7X9aT1gS1ylseiMiCE7xjg6RDnw61gUvWmXb2A4YlK4P+DY+fXSj91/GM5JpYjQb1dC82mPMl8StwqP/PkZWAbahbEPftRSre8S/9g1UumEXNm0yUwGO8HQm11MWmnjDoQjXlMqA7xpHd2imHZocc5yGjk4WNVLv12BbqmQyxlctMU7vxEdS3kQjHQsLz/WrNUq8h17aL/kD4Jd6F848uhR3SnOFYFQdlvDvAUfp4vFXs7VkcP65htWCzwIWO36Er441+yQ3uoES9Pr+tUwhXuJiQGUP1wDW1ODf9stTo3bUrQxXLF5egH/9Qt3SLsQ511WhLcBXQ7T116ONkvFs7zDp1hczpnv8ioXVkBsB8oL9CTOLJJJKPlDQjuIfqBeHBnfhzF5o0aRX5xIsRXEhlYvOtnpuiEEEko49JuThqu1LqYEhgvuR2XaT52AePcuqktTOrQKP9THgC/jxy/CWwyIrzOceE/+5OK4iNzZ8xrG9CJKs518kbU601SY9PcIkjqCZnq2Aa3oWS2fzEbTBwPZsGIPGJOonYwbFyolgNHkJ5YPiJc9fUSEk5iM8GYbG2Zsj4z4mlcZRVI9wAIt5VGb000vpI0Y0nsJQcYJTsb7XqxTC/BTmRkv5tlB7+r33UCSHgHliokmLn04Nvv7xmO4DPsVtwXwKXXDVHsDTUCG+6odC+Wkga14oEcvurLT+Pgu3riWOYoz+xt/KFP9O034qrmLxd7J+/DuV43/NsXBfIVAEWasvVOn6dmzBAGyXXXd303l2f085ouTFj8phYqpULj8fybd1CixXVEyHL6BHTgpWZTQZCDUVEm/5ztNqPMJ219ksOfcTnSjDos3bNL8fNKLj4k/VkS0Z0rcLYPwpyGrlxCJNllyD+Rmd50lqZrA/vm8yMcLuq43S1mpgLaZo8ooRw5sxNY0ZRIakNCzPM+YLGDr5vC9vgz/tapL5Xd7Rs+1t76VPVXqYHJ9LbCUDWKzFnCdaOT8d2aLHuQqQa5lhFUsQ3V8G+/PFfnyrDx/miEzueWIUTcLRf3F2lhQCUxTmP6INzaaBX5dCq5jrgR0xt2OCBoMXsGu6qTOqrtYD3WaMLa8XmO0P0jIQ6y2BEONQ7qLV7YDkKdb07ho4LgBnOP1NY4+jubGq1XBWlT+QsLoFJ/OpubvKKXcu3Xya22GFQvOghcHi5/MXL2UZbglmJPppVcbh+jjDOxdDXwTGDww+11Q2XAIA45Lh0DW22PixVqgvvXATpLptpxxRlZ0ORuvOpBovMDq7kPg4Xtq1I8yBJLkycgd5zDm6r7rVXQp4zRAcq91FL1SWtlZuFX1lGGx+3xSCCMZHgYJwQZLlPwKCPUx/Ftuzx0zA2WEVQbw+xmAiLcm6Sx/nZm6JzsZfL0Iik9DWJCKtJT+9sE8eMC81Gs+xrgrudy78AdiMNDSYrfPlRr81ZKVHmKVbWQ15hI43yeUjJOUWbIDrLZJ06gUwvz3fOf71hJHaejFLhuHxXMPd+zvJtWAfnPxdzoiPpS697OrZANIxud8O2KtLVk4FvU9m9rVfAjsECZpYXTC1Izzr+kZ8hmVkR3zV5
VDL+MsKceB+0LGNVGRUoWVBV1swq0HH+6Jy8+S9fFXYbkCHuKL4Ciw8fQNO4fHMAUXb/2Xk9DleDm9Az94OX5wp+9Eo9eYIKSDwT/E0HM/SpjkJQ5n8uC24UmTEUkDtx+mOSOnNnFr9SgqhpYSN6V4LQd3P57IwrYqSQghNKG36xBL4M3OvWDsZdG55D6q6X+iEGLTpnmpQVQmYe4YXTbubpnPs7Dk2T0I9hijrhaCKrxNcTNo73U4oUUSfb8Cyjm/yBnRfp6GyKQyYnCcnq0xrr8HYlw/gJI08QLdV8zzZ4uSdaV5MTSKr556XPFSBYb79ztYz/ZwWEkA/b4xDpwSf9rvV0qlopOVR+zgK10fGBPVqKZ2kJTLL83qUrCY64hWBsqvG2ZvH+6MuScExyoNKocf/0dAPf/zrFMQKKOyEV87RUwc26PH6NWl6CowrPt0njJXksLLpVgaUwxPzhcdeNCBUP9S2QyjCblKkKV/vl3ycn4Y16gLT2t67tptRA8+6aKzkVGQ/Vy2GmkfA8oKZVL3o8t6uIM92iKbVN0C4X0aLVaxdSm89NfRDEYVPi5gq9lr+tfMK8jS10l9bHOQB9Wn5BTLHM8FRAodxJ14N9Q6Pf3UKGnUfzKCxMmOxnxdK79FdVEON4P+ablAkPhbVwjRg5dISWxzyHi7P3dZtLOX0BEco50DH3hRvxFTP462mDWk/M8WMXH18mq6hiLLkSzJsnJrvdv1CJFuBA3hiv0oMrL9v51PVF/C5KPnMpStVhOrw4YD+U8TFiAR6yQcKSNQmRC1Y6s7ySN8FxLP3RxDojPS+E9iNAxaE7laCYXub41t1acFtHB3AxY4445hCPBhzOnQvCHDsL8ogN6UDJnNChynq2J+k8fMgpuix3KYHQAbAKAWbgM2aNnWCGcTSzIFnULSs2gK2qnBviOFbtxFekXjRXbH0zh8UDR7c+pywnIZI7p/A2wttwyrx3XCFlEPZ37acyJqGzVaFyO5RNL2m8492gfAq/6JDjD69lD7O83mLrJUv34P0haj03EpKitPb/6Ts9yOapiszOTgUifXZN3cU7rHRu11Ooqjw5mHTqm9/bwmMd/IwEloRqt0MNq8ig/k7v4ZhPthAScq04SnIOlhOiQIraQrG/MkLzoJMXJvEMA9i+KO1doqJ/jhMcg6Y/sUNSyuKH08cIk5Tn9gSuoVh5498d2dcjeJWpYXF54bIeVWygB4rtZfRi/vl7rTX/FL3z946RMAghsFu0JWGOubXSaK+VS+t+Eu28hd2DZ/R7wryeppceiXd9Q3e9/h+Jsn7TIyTmBHnO9r280F6ufR4fRZVgHtePAE4HheXqW7NwDJC8ry3R+9Fj+kObbYf1HNesl2+LbkbLe5qVQO6yuOL+UMB+PAM2ElRuK2SCQ10piWLDrJYYyZ6VaXtxrlKfIDlabQ4tRasrIkHwD2Zg33aqvB4J1k/hle8+PJm/tE03MIqOgD4C/8mP8U1ScoIn2UEaJtgcJNE7x+X7hmicmW7PhxhBDzOkd9st0bSvZaGr+n41302PP8cLBPNGjFChg/aTwphF3uNY0Y9TvIx6cTmfAGdsGtchiAy/ehD1TXTg/Kyd9+4g88D6k5OSQowg+NyPMxaRZ1xks7FekH7QLNFyr/ggUF7AsM2vxKv1biysYXnc/3Wjxc66AiZgUUO945+6BePd/Apxm9MJRhURCrF0Mj2g36+QsrNvoNNS2/uw5Jy9ky39Tw7T/ngAoT7eRVQAjuGxjb6TvhT2Yiwg+4MxGgLUj1L9VkCtviaCSLMzmwPpiXip1HRjWNs+37onjt/dOuUhzGs3snK9m+1Pv1NRrpkLm+axN0MnnHFV5ykz5ZGkAi7l+FJfrfkPNq55K+k4Zt6WZpPzB2NWlmhrZym3pM+JahloZaoI/iC4yaQ4wCzevNQxRiGC3cKP6FPqGCVzp3brmN0v/p0kc8fN1EjnVkRMk7vtSCry/ErztaD7Ivb4/GnC/kvuPff2gP57dysX/IALSXeqUzftPFIuROLU1+1gSK/cpnMbHcTuL7/mkLAkPBIYq+6ru2ekGDgAQ2rSPrIME5HZkIsXnooAvbGy5QLCGKx82B9/g3T+cHSLCxGF8NFTmIq85kdXW1nvBHhrgL1cJZktyNtXPdAGJhyX8+MLihkuLmHsNC1glJNkmyrfyY7CxAh40bwhrzpnqgi1q9b+08q5CNnvyzDJG9BFB8Wrmwxeomb6bBtwkSSNws5MFNO9XY3n/0g3hDq+THYDoy2STt57VOW6opwpkkkfO+qZAlPRyQfmfiJgpOrU+GW+nx/9yqyQifC3SPWPrTZK2DzCBYn8CIEwEF5V6N/+nvg4td48ryKCqgeSaY9vGevTzAW5tVxAC3wMoMrMEu8P5z3OdPkC1dusa4Dw9tWNsG+R0XidgweXSK2sduESX9ehMGj5zF81jQS3Fnv5NpfJdXp7U7XyikiN9Uw3RcjSwxguav3/b6XtoosJlDW966kMlcrQpen7wVYkPLfadIcLL754rs729nWQEtNqRSz75XbNcRs5AhXU+c2HUrB93XFrhrJtl4WgjKKtQ4pXx/9QKoln/dk+42/2hlx8mOMv3TyFTi1nCfXbItm+lsNm6G0TpHFiDs+PEO+fNxa5olSrbJisw1wRoohoyyv95nM5htF2y3sB3zEcoA8H5ySHAbCsXHy1Jr4+2ukeOHyZZ3tJK5+cQPW2eGLwjMnrpBoYoXyhzOt4vtmcBjytr2W+5aSpYVzs3SHKNVZfc0bsXEyQGmdowPmAXjDMx1H4iGJ9up/349ID2oMC/IT3lfYj9M4irTDh9uvbrtTkzzJdUp/Z2vZOI3OJx2FJE06HsniRDJbfb541PpmiHmYewP2xPiCVC+/0JflQD3m08EwTDbuux6080SxF3f4FwHZcl33kqBtTdfzuZLnJbrrex/bdmgJy2YGT580/cNW7obeCj/DcdNnR6FAZeXNWu8HlC22aKS+eVYsdP6Dfpfuo/DlgTNCZ4smUtLpiV0Bgif7YpkmD1OaRe4PgDI8BjiHdUH//ql4FR+mfNA7xmmIdHNaKcvSXTO/XLh7+ta7sVIwTBABQ05UkDleqW5nFiPk62laTbNbinhcZ26aT9i6WazHcd0ABGpCP2DJwKqemSbOUI0n7V+hqm++9+hGRCAFifpgeuiXHvfRc1yHmXWl5KGKHROFAJEQdyOVBVNfP7Jgr/ozthWpK1apHNRScwQEsxUUOSz1ZT135dpNTVWvYlKXvzaMrAmLzKccAFixaGq7TG+9Kh4ptlkdzPd1uI1GJwtDRdPdnFExEwF/KfprAXSY4HlLLO4QocTjzTNYfV12ekK+B+XxrL4zX+IntIzZcAyksqOxjdFMeV8w7UQQNwHvShsP5da+5adAal9gEdOubGJdTrWGYAxF38Fl7pcAGNNInBdXcMVOweE77ndZDcTqfot3IffjKDPw/CHqylEGGOxvUxvu3oW4itjWUiFKI3N4gOiud81BtPEvZIkN57A8/TsUyWy8yq9TpQ5sJFOvQ9xRUhim
6bjP5htm8bJLKrXf5wrs+3jofFl5G+7/FzmMZlxG5ruTVzF8WT+MKBfZvFJJ8Mu+VmG+9fg7eYNmdA72IdrzDJ0NM3BdYPHOAQzK9dy0huI1odWGESuWtl0jtRxc0xVaQn2o71usAj9C5Mhuipjvq28jd7DqgYe1VkXNfUN7VIV/ba/hInNgwTPekFTHyRJYa5ztIZehc7DeO3T86yHR5BWRxOZVmPxtFxgPz2cSU0104WBIfTuA+Zht1PNf5QGr8F27kfBYPj7yxw39tE2PUayFYZijNm2Vg/X5ZYkn3xhKVsqQLZTYWHwBCu0Ce3khHdtmNXa4TPpy3E+owetlNAf46FS29e/8Dd1zYh/VkDXG165p6+GfWY5ZH2w66vYbwkQB7V3zRZ0Wfi8AWqtzfPEB3ssdCEGBvEGyNWbbpmm6ZZpNVSWF1aHhYj9UU9D5507SxBev5WVAjscB7EhnVTZCeWSj3Wmgq5oioQt5+v5jIvp99IPosb/ofcsMueI1XeDWr/VS2mjZDQ+6hq+ier1qynDRmVV4JNIhsy9S67+fXr5ioYQNcsTEJ1thZOkEWCvnb/yZkF6Rr99my7JhbNSGraBf5dnyKiUJn48GyER0tDQMV3M4MwBy0e/+UQkFlorekJoYSftGjmHAXntC7jwdPHp61Vkpo1VGKz/GadSktBVg0rpdrsjjaATVyniowUJxxTj5mcJWcvp/snza0Efbip4wjifwzlQe2nP15RPHd54U5XTkKjA3GvcTv5yacRhnwzJC/vzl/2gLo3JHxLsWbZ5mOZ++Jol4Yo3+3loPAFhV7IiCWkv6o9oVFaq7R98kpCYV52bhuOBVIl1Fn9O74UTNns5fifE52I1NAi5OqcorucqA/7xvAefi/JXEGeXlDWFXu6he7XPq7KVvOruC/el/KnPxGiHQj+x47j3wVm7Rqis+xx6sTP0R2NQ+kUlEP92eq/UqKk+1mHJyBo8TLrRAiNYvN7JxrZiiCGU+M+3vNB+e95XwmBCq58NOGh2v93wJDCnYixeR1z9WSAW6RvgTDFOR/mUOt1XYTVC6FPbQffsOL/kSWuTlc+9axmrAW9LzHzHHvn9YAUDx1wtmL6IP7DXPc15nNIW/Ktwl1C6y7h1g3OLdjV/PrNM5eL/IOzyEFDAM+Yb4rAFEp0KKINW2JDFfxUdE0f/tpuPHC6DtQVM5v0NA00HwX03Yzr1+1fgfPDVGZ9wPtbWdOcbxDz8Orei83wKnsTHl62yjgNVANEszjpM9DrRwvsbCxnlNFNxn5u45WBowAImTx5AUHkkkVb5v2/E//+nU9i9Pp8IAMv0keW4yLNZTVBzfx1FsOE0T1PMLvocRGHhCarK4qcFkimfS9q3kwohny3zpyy3+1gzNDfkHCykFwEIhCkJr3rZt2f7FovB3wYnVFTJdlu/j2L8okgykHF4FoiPpF5s/unS7v4wrfBIzT6GHNf/1ZpoVejwro7NKmPBvfYri2y+rl/u1MDUuTLcIVWrH6Nd2J75FYJgBfHpY0Nt1o8IlV17Yn69pohhpxTO2eHQH8mOwvp+lZZpm2YG9Xv22UEb6P//px/XHPsQVC9DDM4L03MV5QRqfq4Sy4apPLYijaNtyUUw14vGCU7X72ZU06ZWAKUPswkxiPaVVGRkQc2ne1x4dOX3gQ30pthRadtnGaePw+kfx5KKc+stmawZAWIBdPRVT7p4niFR/9gCIpvcyr7+uAHn9VszFCNYrRPv0omm6hxU7jwdGJhzLPGcdWFOluTKvW+fbrT/UXne2E13YiXQPQbcD/m3Ueb6on4569Mr34elpsqDDhgLQvx1v//vMTfe9iJqWx3Oq2e7kN+OvCG9vYe/tND31dIik8vP1pVKRetjUPXo/UqOG4boTJ5Zact046zTa5GF/brYiODXg4eSV0yod7Lxkej8ZwSbv92+O8dKOwT6bt4prM1uX4rckPGK9QX6KN4T705FhBO1KpBAHPlt0DeSVcU6e0Aw8Fgma1qhJ8tooOgGsVv/qDKumhBURKB14VyYekTG7U/wg79Ue3Yf7LWF/suaIpHzixif48BJOsXM93dTX0Rg3SE2w40+O+y/fcO4vYdg+MJKtpdPnZ0ylGr2WNfarz/uiwWPgmRaZjoFzQqh1TJppAuf29vZ61Vnvpr+MHfMFpy8jb6mMqcrcRO/jRsw4IxSUqmHJl24EnmpPgyqXoSG3yIPYM/Mn921Yp8FbRDIDUyH4I9zY9y10sThiCe+bH676xvCqVGKAq8Jqzgz7d1UaRfSqChJAk80S+e77XtYe1oMF4xjHTPTeNgkYE3XtGaH077gMeRqM+3L6q2HwrFeV36uzizSyKDTs0LYbn+YUMDK9fcgCuJKfMp+3ZCta9FhH0rSjr0qEFHt87chAXVVEqOOdXQl2XDbyhBJFojhPpk4uVFJCh5wwJINpXsw8X72pPe9P7Dj3OTOMOBGXiMwun3MHdV7ZPatHtXkn4Y6MtIQKr7UwjrrMMqfjNcpO9gajrBHN2O1HzOY5WhMpz2sYfLpqHK1h1bR4yJ9uczlxVPKdG5nBHvvi48w/KTvt3l/KbYDVstoDAOBY8KXVpgETD3f0lyf3fv88KJYEuBqH5zC2XKPBwS0XpjiliO97bttJr9/DTz8mwYy7igy2yxSPyaYx2b47qkMXhW3oZdeZd1Xi63hTp7ci91uPuNUncfONBq/eVrVRpXlgC7UCJ19p4awbzgry2TJ0HK3+XpsfuBqeIqk7hHtIs/3BiZZPhp+Uu+pyH2+JTzazBep47RjP3C5LbruxJD0Y10QokNRogU2h4wGBzaOjxCQpycmuj0YIerAhspYSyJcxtzN8c5dv6oTFlZkYsQfAxPNO6VH1IELUME3tBBx97YN7ipzt6J6ztXNi0sDJCMk5+IgvRUlmDZ5AkTPiVI2boFMrOV1tvQ/UbuS6F7OTHyLiq/mi53m5kDGdXLxKv0i4tce/n3l+FORJmtjO7A3cL+hKMqh8J49Ynk+SNL28Sj7rSoRD5VIqRT9LImo50R4sz7MLDdB57angSwZRlz00BdrvRJY9q5a1j6QNifwKnIxCG0kh5XHfKjE49+MJswdaGn9w5GNFnRFPhPmLYj7xbGC7QBEZ3HhuvWmF0UKRMosMnnbOzciJnfaqw6tgVeR85C/z60UQVPlmxz/bhgq/Wkar7gW9X4ryTSXhEC6z+cctJxQdI1dHb8efiUDVSU0HXiEyTW78fKJcbSEVfb3eo9MP1CjphlgxDZUPX3t5K815Z68eoN6k+74OGPl1SfS0uCqOrjbB72PDHp3rodHsi6aXtGCht9/Rw0zAZDYsfDqmm3TbUYhlQYU2kW2ERqS8NTzcyOitB6BgbvccAwPMwvBAXk1URaL03TeHN5mTIZgyYswyHYMLNpJY0ulSxvEhyO3UvbVFsfIu+zCV9WQ4xMgqLK6IlfB3WwgivlIsMWrlLJkd9evjMhoSaqtVK9EwszytScF98W2FqU18NAHTSyL3F0F8KnkJfcuh9H1NjRqkRDcZm+/ERJZUYGc7+6c
ky4LJKrRfQ9lK1eCCUVSktWlZsUdrtAeeXoTJu5tVGuAcWYWrfVQaR5ESnR2LQJEKXbWqxuaCeCxDAO7JqqzZA37DIwOfnivwd1gZQkQyEscWzLeM7scGfKb+uh6f7y83Ns0qKqgEoof262NlU3atOeYaYYu8N4/C2xUMlUhmidpG52u1ebX9ngiuMDXPihzHTk0KVojRpdlKqfBczqc4FlFR1VS5dvcZjcnh2E+Cf1vVbMPzT9+e3cRjXkvxZSTZn27f5jynvgaGbngG8HbvJwUnv/lTdRLM9+DXplyg+Li28IDVLWY0MA/mzXIILzCz15IyjroFv1Q4I8oAiuoPbKt5z2aM610wbhGrr/jdXn43TisJcXA+qtfyrcaAkS7NbAvmWMPbP+JfKfOwwBi5oBYqAju0C3WEzgzHPGH79q9vzlLByPwhaB+9/m7gGY7GBHfNPSI+XdmCo2SfYQErQ7r7yxc69hkLkTn1imQLlpMf4HfMGKMHyyTCYcEW8p18MeyHYb682nHMiylrsGSEL2BnqlN23bdqP82IljxLVCJ4GkbfDN9HNsTicpSOw1eWjE4JwIO84Ryb3IzICcxDLuN49KdVNPBmshxj8uw2GdaUOCW2+gRM+QXvsV6vCrMAbry2ErvNNurefd9UefApVIuH7es7srScGO5tmi8baiil0RWeusUknfK7l1VqlNaEeyyB3etjSf2WY0u+Bh4ga7DkWF1UWzV7bs6wti8shL5ApZ9i1Axn9eabAg4ruxxV16vtp7RTFCybnErGwOGP5WA+ggbmqLE4f/JUT4oEuz9/oZwRGT2kSLQ30+lW8WLCZnSHtYscO8UG5glmjmfK8bymthafFi41mv2VC9gsnQnhnYLl08AZZRjOEeJfp/U34MoY88KkN3+QGRVK7YWjqoPSnaZe1WYdiZpGKksQm7aYNuAfGsPSwxSIc6S/4OatNQ2CmyzgizWOqn+pCY+gVVYqOM9XO9cJsGJK2ztRw7dmFo0fNn+MjLyWVckxpDdGmDkXBa0YuDvZjxUCAw8xIXXNZ8jFUMQfCaHV5hzsSG8cMaa1oxSxJ3dqvX0zZARg8B6QDzD3nhB7ETplma3QBPcJ4aQLZyuxeGrGo9CUL6Rn3703dcTmKI8CGY9X3yGWsDbsq9b3n6jTZJ/lB+e9YxjjuPSHj/hGfhqGo+xilsj6Q8EymntKB6PUE6V3EVh384enyEgV3jIF0UglGa1VB6nawhZpPNuQg2VMrou8rdPNJtz11MGXxQ/YOaKebuvHRft+pWyEzO74ZJr3U+e6J3rdyfOhiVH3EsnR43xmYTmcG5HVeJUBum2SsDw+qJOpdLgZuViUAV+N7lzGTAYf3X14rWg3UOryCwCEfY5K2Hwnd7wPAjgIHOOk13P0d7kMxExyWN+RFl/zbE5aKjBwR5uYJcr/1UOynmoSVPvLT51Ve7wZQaANw/NLrukncVafGJHNpvoiv15zzmJ1ukU9yx1YZCK6UT2m/lI9w4p/J1LtT70a1Tlcwc3oRYMlpa/h6WDd5EkBRyfS9W2rE9zqB93cg+dMfE+dw/inYL3z8v3Ggg3jG12qPwO2vFlNlYjNk/M8tj5Uhqz7kiZzf+lS62+9cgtccSlHNaUYjwFbw4jAPjhMmpZGZn36XUGcd2JH+52VeWcbAxZL1GC/k6NjfrrPZakeR+e89CMt2DnWbI/weXtjjS3uoRMQWxXmy8pP+8sbGCkJM8+F7wRjD56qYpsLV/8Azl39xuqBP5KBsdd32p83o3FBZKacol6FgsVPnvOKMumM+MCZKsHeX8NIqW0j+2q2oB9fCz4A7zLc7VNwhwyuNWlihaBMgVhgPb7Z6tbJ6hxG/YOLAvTuDLOP1WN0gtvV6aSqo9abhzua/NUDUJWZoUVYlF+jXT5GrVT/vMnkYGy4AiSmzC1xTfqF39LOeQic1zgMsODaO94jQkMcJFheX0gdsE/nsObrz9Job+cdjZBN6iJJXJWnqB7MBRArLeUIQXw+2XaCyBJ8Vy7ghBcJwNmOLl83XNwKjjO7o68/5/gRjMH6mBFrWvF/hS4KetKrz2QLO0zA7RRvwIrdyJ10V66NGor+uiPKQ339cEVnnfT6fJyHwoejB3ZI3zPVVjev6kulkyA2gJJRmZXN/qgjoX3kCN5g6UE/p5Jhnal1D4HvhbdB2K15YvabCqzllMl88Gww2lzVv9TJf3ukC+EK/FfJEqEMo4NJ5Yf31ieqixqL+MQ5D9rcQ4qqLiS1EGzoGIxz4TuM3CdgX/d7/63ZcQcQI85JL5LtOw0X5I5qdrqkn+cpzclZ7MwSNfrZECXTx8LP84ncEcVnIW9Yx69Fmp4S2D4E9mFYOd8JTO7JBkaJbKQfo6YwH6jedcDPDoOLkXLHQnGwD9FULMPE5D3pOhrWq2tpRu/VN2pkr5Xh/WlqMYvcXcbX0EGAeeni9QWYF2foNHvsaCT7OUzXzikq9FUCMgxfYzC4F9xFAE7eqQI6uPQC19e+uz8H6U3udevhPoIFY1Y0gAeCdGbO+/OCVpFjBr17SgNEGgWw4Vms//cX2pli1L6pvxBkJcef17EPxcCIDFOr8FDcpT0tx5B7WmHo2JJ4IREVHofuuRiBvZ5tIRJ2G6cRgqvv43yx/ucBDabMfaLWrpF7SIH3x4RCS3+ekhNP5RI+xt8NNFT1EeQmB3fGCahupCXz5+Ph6+CCf062BJymafXPGr5gLqyGJgQs7HOZvIBXYrI2KoU2ml1Oxgb6VWXPtV6kT8XcIc4ykjC637aLqBCggHen+923nFrZ8uwNAKjfbSx833qapOI38pYN4bQ+4DcSc8EHAXhNgY8Li+IgAgOP8K1lOEIsPC7yoVTv9YFvZBil0gvm839Rmfz7aPFu4Bt/HwXeAL7xfYcUH7KVpBi/astcP19FwzB6DDUtshvJxzgCCGZ2JHSNw59NNO+nIT/Bx9k56tbZfYQd1LoRrLdxIMS5lAlp1DNy2Vmucu8arIyXTFM88+GZd3HzrrjGHf41XyTpiq9tcSrl727fR/a2ZHB3nPT/9qQKK72OC/xPSn2P4STW6Ao3XxmjKuLBaNWoTeRjT1hqf/a643lQuUOq22s0SBw/8DWIRxdqd6GTp0GkI9OhWCYWkr1gA+1VolKjQpNNZDAMu+6AIwOKQoPAQ1LqEjj7C9mtL3DcMyieUYeunr/OE5SHQT3HJc3r2WstYVgDPPXMdk1RCD+6YHk/WennPFOPzjppWtfBSEgX+2O4POfzbFF40fbifnLliyZe5LDGHlSyTZGNSjfKm9GLBCzJGs7v4D3mEZ1ogiQvDF4N2Q4i9Z6XF8PN0//Pf/RNnXaMugO4gTXOMEgX+PLAwKYJuaTjZEFo2C2eKKkuMXiHtz4ilUXT1J6j/9Fr7c0VABWehkmVO0x79RQK/Jk3kiBOsuNZKRVWJgKzBHtPCK3w8ezz7ZsSKbtBsPLZHqzQj+4ngaaBN0V2o8BnH2Wb+QAWzIv1OVWPdf0GSzfOsGlI8GsgD5fhPRkcmp7XdaFgZipLaR7R+PoVSi
0eZQisDVeGuMJjsPwnvQpVZvbpEf20pGMfW7GEVGgrhRHs3Z/zdxo/zgVVf3MvOs/nk57xxVau5X57r/CxDpNWUd76oELjhqKftP5TtVhjDFa8WgMSKdy1zDICdbColKQNEsDhGyU9YQu01iu/kXC9b9PbsZ9AToChaDAjiJuh+X+PBqD4jXjvR8/AoEZJwt6kXQ1mtFeuZLV/TwPM7y+D0Z/DpUvIxLD0i2f/asRYVQBwbmpgZodagb0FJ30kLZhql2u1BPtFsPsWmD1Z/c0j+93+zl1MTYuXp0ZcFFcPuDH/08EIVqN36MzzUHKEjciJzGmi9+SqwTyOYcp55ZTDc4VfCdtn5P7nP+pJggpoOmbDmrVRvt685DWhUfn+g7K62pvpX+Zk8Hx9hEZidCYQLc+zOpsXeukCiPbAqAwL8pwK1iz20XV8GOW856oKFytel89npnM4nYZxnv5qunKYZfCMy074nvgz18l/GZ/7xBB3kfCd+dNFcsuTGifUHmrNRlbyTZ8Ge0tV6c5Zf9IpHPYszu+CvZ9yNqJgXr8hjIAzvwTRKH138o3QZlbee/A0XjeVUDFM5hPzR2L2WLQZj8cDUbJfg+QBccP6+zAZgqSTLVUwbMryvJveL9hinSVfXneP55jxV9aoNKab98sVPS/MZ5SJtwrs/O8tohiWBdQ2RZ0yjbNpUjhOTvNEm1z52VAIz1Pjnzi9GPw2CH4+0dwsDzuVHee+yr9Mrja0XdIwTJOOwKvkSG7jtOd5ll3DrqFpF07f90FzHPvZXUw/kqeULf4MSAAcxPCmnQybCDC04QiAFAorGuBOiXeMfOrSHQ6A/yrv4uT/t/o34H7VpsILJvGpor0h+Y6Sd9jc1oPEfFitn8IMoTyGa5CmsN3OJ9IG5i0kaHpckixDLrB3UDvuyC0H2zR49K+6RP+TzIt53hpPaJrnnncTT8IDJv1x/mDCT3ffqtnm9sAeVZGOfD5x6rd3fmrTwX+qKoPu8S9TzmlvAIrsujyIYWodiOlmMMZLOaMpwJyqVBbfNZjPr4Hlmyeiae8Rf98W+PqrRqNmtlBY/PC3m4Tv0/kAHvIrMmfT0ltZfw/mFl6OeD7ApOXO4vHBk6QLwzqM73PztRQzjKA+BkWym06R33DfF1dwPYLlQWX7A+vjtcNmb8Mu+im8YB5oMXkXuK36A7zqRPaZeWav05u+As+eY3+P95jqf0xMbEwoN0g/NiR9WXi0D2MUddKC6Xf4+Je235r5MYZDrpKZKVVJ/qpOnzRLAFhQ8gzxG8wue3GGyZpH2mPzpKZP4iK+K4nFyY5Nbjt+/UVWIPjfrRHHh3mumUYCX1Z/dtJAbI0r448O44LW9w7fxQejM2BOUSvIzP8aDzi5nlc/a2vBnDnP83lppshvLd9rw3iGGWJU76RwCnsrTVPPawolQICD2OBr4x6jqXClBvoGYApD59q82lF1Bqhz2r73FhlcTUSt0LdaZGffnaNDXQXMo4IffvidCNt9M9n7vo8qXDP5RgXO8y+Tv7U/YL7P7/eZGXwww/neMWrC/mPGxUYyf8aYvmymV5B0RqcDBtx4eF72fB43IHjAQ1vKL5byTBYWDVoS8DrUTgLxXJ/ZvVAm+pAM0w5lYGVIKHfC5oblWAZjiuzByEu48UwRk/uDniMwyhJzPJ+C6Xx5J08EPSTwZ6Q3l/Vbja8X9iVWLB69pnf2PU8kniUyOJhqdSytam14Uuef40mL5ocUrvD9fGFpat98WTyclO/kHfp4RNpfZAsWUStYOJ40wYeYpbABFPCp3S6KWpnht2tMA05AHRlNLFBB6UY6M8V05uRPHE8rLAY0U5WtyWDOvHpNDbiVxkNrn6YBimFrqGHneggw+x9oQ288iX1kFSq7reh8hq4UT1AF5sSgqsXuol18/gbdc4IkuaAaRQgejMzucKGgItXqBuL1y/MjB5yiVh5/Zmab8jMvtWsA8cs+PXkv/52gQoGofYb2JSDAsmAIPKDg9A8mEW3WgCfTB+yr4ZO8cOjqPgnDaUb8yyoTsdzCdLw74UzDCtTyhfjCqNqN02fOlTnD8wWJs5M1+BJox5cWNwLfAJG4FMAddlP/1Sr/MnfFCVVbByqtu63ntZGj4YEDXusiA8u2cLm+Qir++q+tHOy+XdY3dlXbzpvVc540+hUj3iFwUAKbc+qFI1+LoL3ZCuYB4uZOPrptrzNdZva3ycAEjydh08DRYFiZ+899mhCRdT0yA7iGnp5ZbPzVl24pTRDWlZr96vx6i2domCx+nHaXBSeLuPu8SeeVL8ez+DQPlpmZnSnD4u2sgMBZNOU9Pn1TP6GlkQA59cVAPDIIEDGSTk3+l4a4QNaUIpRenHG6PRz7v2ZBlf9hFZbfc+wBmd8sahhZAhQYlJgL1/pzLqH3EHcF1jeRvRlxmzFYpbK2z6rhzi/BEF/7V50Cd2TWw9PCvQqNnq17IfAwmMun3u/bJZSnCVl5UllQCdX/ZZadAkVhWL9r17OyUqqUcw06PAi03v+QcW02W/bfe35QDHGIclx8lgPml8HX/r2RHMXXaxc9TYEdObp2rMJMSOcxlWYAHzt7ZAGQ3OttbVahOL46siCAelzjk54efXHk5noHgGc7Y4OyHV7+yhp9QFOQzfMFgLtS0dzZ4gasqvwWjIvovDPQ67rTJBy5bTcNg/LAF8t4fU04kWwBdNctMk0G8CZ+Ozoj8eycT3IjWQ22Exiuq5d+wDGP46gbJ66fn5jW3nsxO5ci1DI1P+ofBvkiHUyxxl3gmsCa42FPFeMNmZQi/LIWHHgtXyWlhFlU6lcYRueHDBwSsVCYDw3jH10RnxYJALczIqQCRpMm1x6sJpjZMAbNFepu57VjCtDcJqMYi3tiGf9UehBabW06+CAqmPRnPj2TJEepf/7fR9MkcX/V6qqm0c73oVY0/F02LbNiQ8QH61oXeDBI3tUR6w/KvZXm1n0s0nWUpmkE7MVoWWbjRtPYH2Hu8oSGCxL1zviwlfcjN2GuaRX9eLOJPmO9uz/YjFJ08tfppgaWSP9xu59lAY+u7BgBa8T+BCcfviKi9+140eKr4hTT8aodnh79Wz7Zmu/Tf33ODu8eULYTyXN6EqVa6NX09WBgcKJgojPd0HBGSVzPXMJwBgrq7UMhv6tUjohSwWMLByCb58f3uvAeEWoEDMPreqhjxN4Iajrozw8CvBey9wrQebT3D2qFoY78tds/B2X2LIlybKYjxh4ArE8D6nKG0/RY0RCnM8oCU08Aq4dkpjf+AQ56z43al2s73XFaZ/LdiDHMgRZ4+KcCxQksh5c2vtOBNHNhq8A0TKtRebBHAYJMQ7dzlKqAOpt3CB7gj/PkQhY0TneHW++DxyQ///iODGZSr/6TBE9h90jR05loMLGErUrIiR2w1B2IjGH+jMSXKASpCw0b16Bj84jHGszhVn8GVweofLTB0tN6geyfu3nrQX+FJjZYcO5Vo7Ga8JHPfZBnyPerQz7qkD9qMsMMDqM3u5oBr08OLBABCzmcsPkk2n2lGJql+EGnYB4YI9wij
Br+35MceHDHwAAjPD/675EPwzTwnAL8+b2x+N9DoH9vZBgFfhT8EeAb5f/P06L/P65B90/PXPu+Y/ZuZ36vKZZLCHOjFEXxPxj/+w+YTbAlkD2b1+yErzwe6N9LYwSo2/p/XsKE/8G47pSyoQPIEpgZ5N8HSAL7+8j19/MDEgn481Gla/n3GvFA/l4rs6oo/30tjv37YLT8vVD897uhJf27IvhHd3JZ2/73x3kY1v/zO2mOxvI9pBl8x/8D</diagram></mxfile>
|
2106.06795/main_diagram/main_diagram.pdf
ADDED
|
Binary file (37.1 kB).

2106.06795/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,63 @@
|
# Introduction

Deep neural networks have achieved promising results on various tasks. However, these models suffer from the problem of catastrophic forgetting [@kirkpatrick2017overcoming]. The most prominent reason for catastrophic forgetting is that the model is not trained to retain previously acquired knowledge while acquiring new knowledge. In general, the model is trained to optimize its performance on the current task with no consideration of how the updated model will perform on earlier tasks. This greedy update overwrites the parameter values that may have been optimal for previous tasks. *Continual learning (CL)* (also referred to as lifelong/incremental learning) is a learning paradigm to address this issue in deep neural networks and has been gaining significant attention in recent work [@Parisi2019].

We present a novel approach for the *class incremental* online learning problem in a limited data setting. This problem setting is more challenging than standard class incremental learning [@javed2019meta] due to additional constraints: (1) Data in each class appears in an online fashion, i.e., the model sees every training example exactly once; (2) The number of training examples in each class is very small; and (3) We do not use any replay/memory to store the training examples from previous classes. This is the most general setting for class incremental learning, and various practical usage scenarios can be obtained from it or from relaxed versions of it. For instance, in face recognition, it is common to have few examples per class but usually not in an online learning fashion, whereas for a robot navigating in an environment, the setting would also be online. We empirically show that learning a robust representation that can accommodate future tasks is a potential solution to the problem mentioned above. Our proposed approach achieves this by leveraging the meta-learning [@finn2017model] framework with knowledge consolidation.

Meta-learning [@finn2017model] has proven to be an effective approach for learning generic feature representations that can be rapidly adapted to new tasks by fine-tuning using very few examples (and in some cases, even *without* fine-tuning [@vinyals2016matching; @raghu2019rapid]). While such use of meta-learning might seem appealing and does indeed show some promising results in continual learning settings [@javed2019meta], in practice this approach is still prone to catastrophic forgetting. One of the reasons for this is the *overparametrized* nature of deep neural networks, in which only a few neurons are activated for all samples. As a result, the network relies on only a small set of parameters. Although this may not be a problem when learning only a single task, it can be an issue in continual learning, where we are required to learn a sequence of tasks and, while learning a new task, any change to these parameters can drastically affect the performance on the older tasks.
<figure id="fig:my_label" data-latex-placement="!htbp">
<img src="main1.png" />
<figcaption>The figure shows the various steps in knowledge consolidation. The original model is learned via a meta-learner. Thereafter, we transfer the model’s knowledge into a subset of parameters and partition the model into important and unimportant parameters. Then we retrain the model by allowing <span class="math inline"><strong>A</strong><sub><em>U</em></sub><sup>*</sup></span> to change freely according to the loss function while constraining <span class="math inline"><strong>A</strong><sub><em>I</em></sub><sup>*</sup></span> to preserve previous knowledge.</figcaption>
</figure>

To address this, we present a *knowledge consolidation* based meta-learning approach. In our approach, during training, we identify and split the network parameters into two groups, called important and unimportant. The network's existing knowledge is squeezed into the set of important parameters, and the unimportant/dead parameters are freed, thereby expanding the network's capacity to accommodate future learning trajectories. Briefly, a learning trajectory is a sequence of examples in which examples from a particular class occur together (we discuss this in detail later). The proposed strategy ensures that the knowledge from old learning trajectories is preserved in a compressed form within a small set of important network parameters, which we identify and isolate, and *then* moves on to adopt new learning trajectories. The extra knowledge obtained by learning from the new trajectories updates the unimportant parameters, which in turn become important. In knowledge consolidation, we rejuvenate the dead neurons in the model and consolidate the knowledge of the previously preserved parameters and the newly rejuvenated parameters. This helps to learn a robust representation. Therefore, the model capacity is fully utilized, and the model strongly resists the catastrophic forgetting that would otherwise be caused by changes to a small set of parameters. An overview of the knowledge consolidation process is shown in Fig. [1](#fig:my_label){reference-type="ref" reference="fig:my_label"}.
Note that we use knowledge consolidation and meta-learning in the training phase. Training is done on a base class set using multiple learning trajectories. We strictly follow the class incremental online learning setting within a learning trajectory. However, across learning trajectories, the class incremental setting is not enforced (since multiple learning trajectories can contain the same set of classes). Therefore, we do not follow the incremental learning setting during training on the base class set (since training is done on multiple learning trajectories). We follow this to make the model's representation robust and to facilitate future continual learning at evaluation time (Section [2](#sec:probform){reference-type="ref" reference="sec:probform"}), where we perform class incremental online learning with limited data on a *novel* class set. The novel class set contains entirely different (disjoint) classes from the base class set, and during evaluation on the novel class set, we only use meta-learning for quick adaptation.

Our approach outperforms other incremental learning methods by a significant margin. We show that a basic online updating strategy on representations learned by our approach, Knowledge Consolidation based Class Incremental Online Learning (KCCIOL), is better than memory-based rehearsal methods. Our approach can also be integrated with existing continual learning approaches such as MER, EWC, and ER-Reservoir, as shown in Section [7](#rreclcomp){reference-type="ref" reference="rreclcomp"}.
# Method

Let {$\tau_1,\tau_2 \dots \tau_k \dots \tau_l \dots$} denote a stream of learning trajectories and $\forall i$, $\tau_i \sim P_{test}(\tau)$, where $P_{test}(\tau)$ denotes the trajectory distribution during testing on the novel class set. Each learning trajectory $\tau_i$ is further split into two sets -- train and validation, i.e., $\tau_i=\{\tau_{tr},\tau_{val}\}$, where $\tau_{tr}=\{x_n,y_n\}_{n=1}^k$ and $\tau_{val}=\{x_n,y_n\}_{n=k+1}^{k+s}$ are the labeled samples, $\{k,s\}\in \mathbb{N}$. Here, $\forall n$, $(x_n,y_n)\in (\mathcal{X}_{test},\mathcal{Y}_{test})$ denote the input and label pairs, and the trajectory distribution $P_{test}(\tau)$ is defined over $(\mathcal{X}_{test},\mathcal{Y}_{test})$, which is disjoint from the base training class set $(\mathcal{X}_{train},\mathcal{Y}_{train})$, i.e., $\mathcal{Y}_{train}\cap \mathcal{Y}_{test}=\varnothing$. Moreover, we assume $class(\tau_{tr}) = class(\tau_{val})$, i.e., the classes of $\tau_{tr}$ are the same as the classes of $\tau_{val}$. The goal of continual learning is to minimize the loss on unseen examples of classes learnt earlier in an incremental fashion, which can be written as: $\mathbb{E}_{\tau \sim P_{test}(\tau)}[\mathcal{L}(f(\tau_{val}^x|\mathbf{\theta},\mathbf{W}),\tau_{val}^y)]$, i.e., the model is evaluated on $\tau_{val}$. Our evaluation protocol (Algorithm 4) is similar to the class-incremental setting [@javed2019meta], but samples within each class also arrive in an online manner. In particular, here are the key differences: 1) We assume availability of very few samples per class; 2) For a particular class, each sample is seen exactly once; and 3) We do not use any replay mechanism. These differences make our problem setting considerably more challenging than the standard class-incremental setting.
Following a similar notation as in Sec. [2](#sec:probform){reference-type="ref" reference="sec:probform"}, let {$\tau_1,\tau_2 \dots \tau_k \dots \tau_l \dots$} denote a stream of learning trajectories and $\forall i$, $\tau_i \sim P_{train}(\tau)$, where $P_{train}(\tau)$ denotes the learning trajectory distribution during training. Following the model-agnostic meta-learning (MAML) setup [@finn2017model], we assume that the $i^{th}$ learning trajectory's data $\tau_i$ is further split into two sets, meta-train and meta-val, i.e., $\tau_i=\{\tau_{tr},\tau_{val}\}$, where $\tau_{tr}=\{x_n,y_n\}_{n=1}^k$ and $\tau_{val}=\{x_n,y_n\}_{n=k+1}^{k+s}$ are the labeled samples, $\{k,s\}\in \mathbb{N}$. Here, $\forall n$, $(x_n,y_n)\in (\mathcal{X}_{train},\mathcal{Y}_{train})$ denote the input and label pairs, and the learning trajectory distribution $P_{train}(\tau)$ is defined over $(\mathcal{X}_{train},\mathcal{Y}_{train})$. Moreover, we assume $class(\tau_{tr})\subset class(\tau_{val})$, i.e., the classes of $\tau_{tr}$ are a proper subset of the classes of $\tau_{val}$; therefore, $\tau_{val}$ contains all the classes of $\tau_{tr}$ and some additional classes. Note that $\tau_{tr}$ and $\tau_{val}$ are random trajectories of length $k$ and $s$, respectively. In $\tau_{tr}$, samples of each class occur together, i.e., samples from class 2 occur after samples from class 1 and before samples from class 3. We sample learning trajectories multiple times for training on the base class set. Each learning trajectory is sampled by randomly selecting a proper subset of classes from the base class set. Therefore, every learning trajectory has a different class order.
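As an illustration, such a class-ordered learning trajectory could be drawn as in the sketch below. This is only illustrative: the helper name, the `data_by_class` structure, and the per-class sample counts are our own assumptions, not details taken from the paper.

```python
import random

def sample_learning_trajectory(data_by_class, n_tr_classes, n_val_classes,
                               tr_per_class, val_per_class):
    """Draw one learning trajectory (tau_tr, tau_val) from the base class set.

    `data_by_class` maps each class label to a list of (x, y) examples.
    The classes of tau_tr form a proper subset of the classes of tau_val,
    and within tau_tr the samples of each class occur contiguously.
    """
    assert n_tr_classes < n_val_classes
    val_classes = random.sample(list(data_by_class), n_val_classes)
    tr_classes = random.sample(val_classes, n_tr_classes)
    random.shuffle(tr_classes)  # every trajectory gets a different class order

    tau_tr = []
    for c in tr_classes:        # class-ordered: all samples of one class, then the next
        tau_tr += random.sample(data_by_class[c], tr_per_class)

    tau_val = []
    for c in val_classes:       # tau_val is evaluated jointly, so its order is unconstrained
        tau_val += random.sample(data_by_class[c], val_per_class)
    return tau_tr, tau_val
```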
We follow the MAML setting of continual learning [@javed2019meta]. In the inner loop, each class arrives in a sequential manner, and the boundary of each class is available in advance. In the outer loop, the aim is to minimize the empirical risk over the unseen data, provided the model is optimized on the seen data in an online fashion with the continual learning constraint, i.e., when learning from the training data of the current class, we are not allowed to access the training data of previous classes. The overall loss function across all trajectories is defined as: $$\begin{equation}
\mathbb{E}_{\tau \sim P_{train}(\tau)}[\mathcal{L}(f(\tau_{val}^x|\mathbf{\theta},\mathbf{W}),\tau_{val}^y)]
\end{equation}$$ $\mathcal{L}(f(\tau_{val}^x|\mathbf{\theta},\mathbf{W}),\tau_{val}^y)$ denotes the loss of the model $f$ on the validation trajectory of $\tau$. For notational brevity (and slight abuse of notation), we use $\tau_{val}^x$ to refer to all the inputs of the validation trajectory of task $\tau$ and $\tau_{val}^y$ to refer to the corresponding true labels.

The function $f:\mathcal{X}\rightarrow \mathcal{Y}$ is defined as $f(\tau^x|\mathbf{\theta},\mathbf{W})=g(h(\tau^x|\mathbf{\theta})|\mathbf{W})$, where $h_{\mathbf{\theta}}:\mathcal{X}\rightarrow \mathbb{R}^d$ is defined by the parameters $\mathbf{\theta}$ (representation learning parameters) and $g_{\mathbf{W}}:\mathbb{R}^d\rightarrow\mathcal{Y}$ is defined by $\mathbf{W}$. The classifier parameters $\mathbf{W}$ are learned using the meta-train set ($\tau_{tr}$), and the representation learning parameters $\mathbf{\theta}$ and $\mathbf{W}$ are jointly learned using the meta-val set ($\tau_{val}$).
In the inner loop of the meta-learner, which learns $\mathbf{W}$, the model is trained on the meta-train data $\tau_{tr}$. The outer loop, which learns $\mathbf{\theta}$ and $\mathbf{W}$, is trained using the meta-val data $\tau_{val}$. In the outer loop, the model's loss is also computed on novel classes not seen during the inner-loop training, since the classes in $\tau_{tr}$ are a proper subset of the classes in $\tau_{val}$. Evaluation on both sets $\tau_{tr}$ and $\tau_{val}$ makes the model perform well on both current and previously learned classes. The optimization problems solved by the inner loop and the outer loop are given by: $$\begin{equation}
\label{eq:2}
\small
\mathbf{W} = \mathop{\mathrm{arg\,min}}_{\mathbf{W}}l_{tr}(\mathbf{\theta},\mathbf{W})\stackrel{\mathclap{\normalfont\mbox{def}}}{=}\mathcal{L}(f(\tau_{tr}^x|\mathbf{\theta},\mathbf{W}),\tau_{tr}^y)
\end{equation}$$ $$\begin{equation}
\label{eq:3}
\small
(\theta,\mathbf{W}) = \mathop{\mathrm{arg\,min}}_{\mathbf{\theta},\mathbf{W}}l_{val}(\mathbf{\theta},\mathbf{W})\stackrel{\mathclap{\normalfont\mbox{def}}}{=}\mathcal{L}(f(\tau_{val}^x|\mathbf{\theta},\mathbf{W}),\tau_{val}^y)
\end{equation}$$

In Eq. [\[eq:2\]](#eq:2){reference-type="ref" reference="eq:2"}, for notational simplicity, we write the loss over the entire training trajectory, but during training we perform an online update over $\tau_{tr}$. The above two optimization problems are solved in an alternating fashion using $\tau_{tr}$ and $\tau_{val}$, respectively, with the most recent parameters $\mathbf{W}$ obtained from Eq. [\[eq:2\]](#eq:2){reference-type="ref" reference="eq:2"} used in Eq. [\[eq:3\]](#eq:3){reference-type="ref" reference="eq:3"}.
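The inner/outer structure of Eqs. [\[eq:2\]](#eq:2) and [\[eq:3\]](#eq:3) can be sketched in PyTorch as below. This is a simplified, first-order sketch under our own assumptions (each example already carries a batch dimension, `h` and `g` are the representation network and classifier, and `meta_opt` optimizes both); the actual algorithm may differ, e.g., by backpropagating through the inner loop.

```python
import torch
import torch.nn.functional as F

def meta_train_step(h, g, tau_tr, tau_val, inner_lr, meta_opt):
    """One meta-training step on a single learning trajectory (first-order sketch).

    Inner loop (Eq. 2): the classifier parameters W (i.e., g) are updated online
    on tau_tr, one class-ordered example at a time. Outer loop (Eq. 3): the
    representation parameters theta (i.e., h) and W are updated jointly on tau_val.
    """
    # Inner loop: online updates of W only.
    inner_opt = torch.optim.SGD(g.parameters(), lr=inner_lr)
    for x, y in tau_tr:                      # each x: (1, ...) tensor, y: (1,) label tensor
        inner_opt.zero_grad()
        F.cross_entropy(g(h(x)), y).backward()
        inner_opt.step()

    # Outer loop: joint update of theta and W on the whole validation trajectory.
    meta_opt.zero_grad()
    xs = torch.cat([x for x, _ in tau_val])
    ys = torch.cat([y for _, y in tau_val])
    outer_loss = F.cross_entropy(g(h(xs)), ys)
    outer_loss.backward()
    meta_opt.step()
    return outer_loss.item()
```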
While the meta-learning based approach for continual learning described in the above section is promising, it is not particularly effective for our problem setting. One of the reasons for this is the *overparametrized* nature of deep neural networks, in which only a few neurons are activated for all samples. As a result, the network relies on only a small set of parameters. Although this may not be a problem in a single-task learning setting, it can be an issue in continual learning, where we are required to learn a sequence of tasks and, while learning a new task, any change to these parameters can drastically affect the performance on the older tasks. We overcome this by using a knowledge consolidation based meta-learning approach. Our proposed approach identifies the important and unimportant/dead parameters, rejuvenates the dead parameters, and consolidates the knowledge of the important and reborn parameters (Fig. [1](#fig:my_label){reference-type="ref" reference="fig:my_label"}). Therefore, the model capacity is fully utilized, and the model strongly resists the catastrophic forgetting that changes to a few parameters would otherwise cause.

A simple way to assess the importance of a parameter is to use its absolute value [@han2015learning], as often done in deep model compression. We can discard weights/parameters having small absolute values without sacrificing the model's performance [@han2015learning]. We leverage this simple idea to effectively identify the important parameters in the model. The proposed approach modifies the meta-learning framework by introducing knowledge consolidation. We define $\mathbf{A}=[\mathbf{\theta},\mathbf{W}]$ as the joint set of parameters of the complete model. We partition the model parameters $\mathbf{A}$ into two disjoint sets. The important parameters are denoted by $\mathbf{A}_I$ and the "less important" ones are denoted by $\mathbf{A}_L$, s.t. $\mathbf{A}= \{\mathbf{A}_I, \mathbf{A}_L\}$ and $\mathbf{A}_I\cap \mathbf{A}_L=\varnothing$.
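A minimal sketch of such a magnitude-based split is shown below. The helper name, the per-tensor boolean masks, and the `keep_ratio` value are our own illustrative choices; the paper does not prescribe a particular ratio here.

```python
import torch

@torch.no_grad()
def split_by_magnitude(model, keep_ratio=0.2):
    """Mark the top `keep_ratio` fraction of weights (by absolute value) as A_I.

    Returns one boolean mask per named parameter: True entries belong to the
    important set A_I, False entries to the less-important set A_L.
    """
    all_abs = torch.cat([p.abs().flatten() for p in model.parameters()])
    k = max(1, int(keep_ratio * all_abs.numel()))
    threshold = torch.topk(all_abs, k).values.min()   # smallest "important" magnitude
    return {name: p.abs() >= threshold for name, p in model.named_parameters()}
```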
Most of the model's knowledge is contained in $\mathbf{A}_I$. Our goal is to *preserve* the knowledge present in $\mathbf{A}_I$. We apply a weight-constrained regularization on $\mathbf{A}_I$ to ensure minimal changes when a new trajectory is learned. On the other hand, we let $\mathbf{A}_L$ change freely in order to accommodate new trajectories. Therefore, while learning a new set of trajectories, the following regularized loss function is optimized: $$\begin{equation}
\label{eq:7}
\sum_{\tau \sim P_{train}({\tau})} \mathcal{L}(f(\tau_{val}^x|\mathbf{\theta},\mathbf{W}),\tau_{val}^y)+\mathcal{R}(\mathbf{A}_I)
\end{equation}$$ One way to define the weight-constrained regularization $\mathcal{R}(\mathbf{A_I})$ is $\lambda||\mathbf{A}_I^{t+1}-\mathbf{A}_I^t||_F$, where $\mathbf{A}_I^t$ denotes the important weights after the $t^{th}$ step. A large value of $\lambda$ ensures minimal changes in the important weights.
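Such a penalty could be sketched as follows; it applies the Frobenius-norm term of Eq. [\[eq:7\]](#eq:7) only to the entries marked important by the masks from the previous sketch. The `prev_params` anchor copies (our own name for $\mathbf{A}_I^t$) are an assumption about how the reference values might be kept.

```python
def weight_constrained_penalty(model, important_masks, prev_params, lam):
    """R(A_I) = lam * ||A_I^{t+1} - A_I^t||_F, restricted to important entries.

    `prev_params[name]` holds a detached copy of the parameter tensor from the
    previous step; `important_masks[name]` is the boolean A_I mask for it.
    """
    sq_sum = 0.0
    for name, p in model.named_parameters():
        diff = (p - prev_params[name]) * important_masks[name].float()
        sq_sum = sq_sum + diff.pow(2).sum()
    return lam * sq_sum.sqrt()
```

In this sketch, the total outer-loop objective would simply be the validation loss plus `weight_constrained_penalty(...)`.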
Naïvely partitioning the model into $\{\mathbf{A}_I, \mathbf{A}_L\}$ (based on absolute values) often does not show any significant improvement, since techniques like dropout and batch normalization force the model's knowledge to be shared across all model parameters, which causes $\mathbf{A}_L$ to contain non-negligible knowledge. Ideally, the value of the unimportant weights should be zero. However, in reality this is not the case, and the set $\mathbf{A}_L$ usually contains non-negligible information. Therefore, we first distill the model's knowledge into a subset of parameters $\mathbf{A}_I^{*}$ (the important parameter set) such that the remaining part $\mathbf{A}_U^{*}$ (the unimportant parameter set) contains negligible information. The weight-constrained regularization can now be imposed on $\mathbf{A}_I^{*}$, while the set $\mathbf{A}_U^{*}$ is free to be adapted for the new trajectories. To transfer/distill the model's knowledge into a subset of the parameters, we *finetune* the complete model with the following $\ell_1$ regularized objective: $$\begin{equation}
\label{eq:8}
\sum_{\tau \sim P_{train}({\tau})} \mathcal{L}(f(\tau_{val}^x|\mathbf{\theta},\mathbf{W}),\tau_{val}^y)+\gamma||\mathbf{A}||_1
\end{equation}$$ We can maintain the model's performance by using an appropriate hyperparameter $\gamma$. The $\ell_1$ regularizer forces the model's knowledge to be squeezed into a subset of the model parameters $\mathbf{A}_I^{*}$. The rest of the model parameters $\mathbf{A}_U^{*}$ contain negligible information and are therefore free to change. Now, the set $\mathbf{A}$ can be split into the important parameter set $\mathbf{A}_I^{*}$ and the unimportant parameter set $\mathbf{A}_U^{*}$, i.e., $\mathbf{A}= \{\mathbf{A}_I^{*}, \mathbf{A}_U^{*}\}$ and $\mathbf{A}_I^{*}\cap \mathbf{A}_U^{*}=\emptyset$. Given this updated set of important and unimportant/dead parameters, the outer-loop optimization of the meta-learner is given by (akin to Eq. [\[eq:7\]](#eq:7){reference-type="ref" reference="eq:7"}) $$\begin{equation}
\small
\begin{split}
(\theta,\mathbf{W}) = \mathop{\mathrm{arg\,min}}_{\mathbf{\theta},\mathbf{W}}l_{val}(\mathbf{\theta},\mathbf{W}) & \stackrel{\mathclap{\normalfont\mbox{def}}}{=}\mathcal{L}(f(\tau_{val}^x|\mathbf{\theta},\mathbf{W}),\tau_{val}^y) \\ & \quad + \lambda||{\mathbf{A}_I^{*^{t}}}-\mathbf{A}_I^{*^{t+1}}||_F
\end{split}
\end{equation}$$ $$\begin{equation}
\mathbf{A}_I^{*^{t+1}}-\mathbf{A}_I^{*^{t}} \approx \nabla_{\mathbf{A}_I^{*^{t}}}(\mathcal{L}(f(\tau_{val}^x|\mathbf{\theta},\mathbf{W}),\tau_{val}^y))
\end{equation}$$ To preserve the knowledge contained in $\mathbf{A}_I^{*}$, we apply the weight-constrained regularization on $\mathbf{A}_I^{*}$ as above, which ensures that $\mathbf{A}_I^{*}$ does not change drastically when new learning trajectories are encountered. The rest of the parameters ($\mathbf{A}_U^{*}$) are free to change. Therefore, we "rejuvenate" the parameters in $\mathbf{A}_U^{*}$. Representing this rejuvenated set of parameters as $\mathbf{A}_R^{*}$, the consolidated knowledge from both $\mathbf{A}_R^{*}$ and $\mathbf{A}_I^{*}$ provides a robust representation for our problem setting. The model strongly resists catastrophic forgetting because its capacity is fully utilized and its predictions do not rely on a small set of parameters. As demonstrated by our experiments, such parameter rejuvenation and knowledge consolidation significantly enhance the performance of a meta-learner based model for class incremental online learning. For a summarized algorithmic description of our approach, please refer to Algorithms 1, 2, 3, and 4.
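Putting the pieces together, the $\ell_1$ distillation step of Eq. [\[eq:8\]](#eq:8) can be sketched as a simple additive penalty; after fine-tuning with it, the magnitude-based split and the weight-constrained penalty from the earlier sketches recover $\mathbf{A}_I^{*}$ and the regularized outer-loop objective. As before, the function name and its arguments are our own illustrative choices rather than the paper's code.

```python
def l1_distillation_loss(model, task_loss, gamma):
    """Eq. (8): task loss plus gamma * ||A||_1 over all model parameters.

    Fine-tuning with this objective squeezes the model's knowledge into a small
    set of high-magnitude weights (A_I*); the remaining low-magnitude weights
    (A_U*) can then be rejuvenated and adapted to new learning trajectories.
    """
    l1 = sum(p.abs().sum() for p in model.parameters())
    return task_loss + gamma * l1
```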
|
2108.13655/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
<mxfile host="app.diagrams.net" modified="2021-11-13T06:42:23.065Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36" etag="f4vPrn8VGPPg_sADdBrM" version="15.7.3" type="google"><diagram id="xiRQhqu7uG5AYisLSZY3" name="Page-1">7VtLc+I4EP41VO0ckpLkB+YYyGMmk2wyyU7t5rQlbMVoYluMbAYyv34lWwY/BDFgCJsih2A1rVaru9X9SRYdYxDOrjgej26ZR4IOAt6sY5x3EIIQdMWHpLxmFAeijOBz6immBeGR/iaKCBR1Qj0SlxgTxoKEjstEl0URcZMSDXPOpmW2ZxaURx1jn9QIjy4O6tS/qZeMMqrlGAv6Z0L9kRrZQWq+Ic551UTiEfbYtEAyLjrGgDOWZE/hbEACabvcLFm/yyXfzvXiJEqadLgOLscvQYxR3/8Ov3kevPsRnygpv3AwUfPtIBuH447Rj4ax/NC0AzFe/5mJYaXZA8bTnvbPiZyKYDAAsG3XLZJsX35eTDgbExzVZYLvEWU6+rLhxTwzDTLJ2OeEeGlsyH8jIk0jBmMxDvIuQ55zN5nD87MDAKjPoYGO/ZO7h6sGfF8a8jUj3TXr1oStoaiGknT+Uksiec2XWUJmkj5KwkAQoHiME85eyED5JmIRkR6jQVAh4YD6kWgG5FlK+EV4QsXyPVPkkHqeHKQ/HdGEPI6xK0ecimQlaJxNIo/IBQJUOFzikAYyVf1FQ5F2EPiTTMX/BxaKsM1YVI6C5nwexUWYryihBpkVSGpRXhEWkoS/Cpb8255KECpBWkYva08X6QaZimdUSDU5DasM589FL7KAeFCJYI2kgDRJoeIuEnlnMrmKlhvgOKZu2XkbmJLMaPKP9MSppVpPyi/y+XxWbLzmjUjMt9BJNp+UCmlj0S1t5f2Wui1mE+6StxNmgrlPkhV8VsZHvFJ1qQdBwcmWxsc5jZMAJ/RXuSbpHK9GuGc0TW0qxgxQjjGj2yuLyOatehWLSFUQrAQrNMqCMsPUBKVxOJ/25qFp1ELzhkYEc/pbmEcWkNbTiitChPC9JxbZHszrkvBfrwdWRm7zhGOgqg+7tYTT08Qi2lW+sXQgZPsCnUoo1+JS9a+VJF012wDrFOIvJ8owPFERdSY40lq1BjxqhIFycjyWIjRaKP2lAtwf/oEsK52jcBqAyJk/g0+bGXUN6JapuG+1vzRRey2d9xQYS8DxgUVF++ZdqnMzszXZFNRG/igFpIVCYZfrhOnUgakF9lko7CMwfauIvglMncMCpmYFmDotAVMTVgTtGJh2a6F5e3FzWwvPxbKGSxb+drFaSzoBHpKgj90XPx26kqOybxn3CF+R0EQx8yzieGYt+4lvHDQ0bDtXRc1Uczi1bfYx7IbZZ2fbYudAYaq+pG5S78Ea1f727PHrUoXfESsdPEw6gudjhB52hB6B/AcD8lYFG2lOmC20TyDfq6MlHL/QyP84btvpAZ5Z9qdt1PxpO3t0J6z7c78bMxPZnQ22Zqa0cmFzdgq6893akg1a2ronXCiYRtVcfpW21M+HsvMCvVNHQlP1B/Voe+19mBDbQ0vFmmivuzKkOzH4/2P2BQqCAM2fETA/yUYKTpoV1ksidn4raugRFx2R+xY6txanAxaGVNSM9BT8GKoH5fMjhN8LhDfKhRS+N4RH9RPP/WK+dzyMPyAI14ULrAVMfYxsguFWyEXVM/pdgzjd7qK9S4lzDARKVWaT/P5+txCbXjnUkNq9haghtXubcJP7jM2lH+8qrqxC0CmfJCFTc1fR1mS8Nl7KPGDaj59+XHcH5/cP5/juGkxn2gvMW71zqzstc25++VueGHg4Hs0F1t6ItXnUA7vmmwaHBtQY3NyRwc23DZ6H9Y18u3jPYppezTPOhyxJWKiJ+4RVvBCP8FgKC2e+/FXB6RAL7HAq+rzorZ87p+rBoje0a7QNF1l2xUVO3UVW3UPQ2d5Dn09GYzq4u3GHr/+emeyr0TV+NlkSR2C2a2BWCgmzmv7WQGJlQbAiaGPoJZqLn55k7Ivf7xgX/wE=</diagram></mxfile>
|
2108.13655/main_diagram/main_diagram.pdf
ADDED
|
Binary file (35.2 kB).

2108.13655/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,16 @@
|
# Method

To demonstrate the effectiveness of the proposed MELM, we compare it with the following methods:

**Gold-Only** The NER model is trained on only the original gold training set.

**Label-wise Substitution** @dai2020analysis randomly substituted named entities with existing entities of the same entity type from the original training set (a minimal sketch of this substitution is given after this list).

**MLM-Entity** We randomly mask entity tokens and directly utilize a pretrained MLM for data augmentation, without the fine-tuning and labeled sequence linearization used in MELM. The prediction of a masked entity token does not consider label information but relies solely on the context words.

**DAGA** @dingdaga first linearized NER labels into the input sentences and then used them to train an autoregressive language model. The language model was used to synthesize augmented data from scratch, where both context and entities are generated simultaneously.

**MulDA** @liu2021mulda fine-tuned mBART [@liu2020multilingual] on linearized multilingual NER data to generate augmented data with new context and entities.
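For concreteness, the Label-wise Substitution baseline can be sketched as below. The per-token BIO tagging scheme and the helper names are our own assumptions for illustration; the original implementation may differ.

```python
import random
from collections import defaultdict

def build_entity_pool(sentences):
    """Collect entity mentions from BIO-tagged (tokens, tags) pairs, grouped by type."""
    pool = defaultdict(list)
    for tokens, tags in sentences:
        i = 0
        while i < len(tokens):
            if tags[i].startswith("B-"):
                etype, j = tags[i][2:], i + 1
                while j < len(tokens) and tags[j] == "I-" + etype:
                    j += 1
                pool[etype].append(tokens[i:j])
                i = j
            else:
                i += 1
    return pool

def labelwise_substitute(tokens, tags, pool):
    """Replace each entity span with a random entity of the same type from the pool."""
    new_tokens, new_tags, i = [], [], 0
    while i < len(tokens):
        if tags[i].startswith("B-"):
            etype, j = tags[i][2:], i + 1
            while j < len(tokens) and tags[j] == "I-" + etype:
                j += 1
            repl = random.choice(pool[etype])
            new_tokens += repl
            new_tags += ["B-" + etype] + ["I-" + etype] * (len(repl) - 1)
            i = j
        else:
            new_tokens.append(tokens[i])
            new_tags.append(tags[i])
            i += 1
    return new_tokens, new_tags
```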
::: table*
:::
|
2110.02027/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
<mxfile host="app.diagrams.net" modified="2022-01-28T10:24:02.380Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36" etag="GzhdfDLPsouzxO5gJHR6" version="16.4.11" type="device"><diagram id="BJ6yeSJXpyAwgI4iSOwI" name="Page-1">7Vxbc5s4GP01eSyju8Rjkqbd6bQ7mcl0mj7tYJBttth4MUmc/voVNmAkY4wdbnbTdBIkQMB3jr6r4ArfzlafI2cx/RZ6MrhCwFtd4Y9XCFHE1O+k43XTgW2y6ZhEvrfpgtuOB/+3TDtB2vvke3KpHRiHYRD7C73TDedz6cZanxNF4Yt+2DgM9KsunEl6RbDteHCdQO4c9sP34mnaC5m93fGX9CfT7NKMpg84c7Kj05GXU8cLXwpd+O4K30ZhGG+2ZqtbGSTCywSzOe/Tnr35nUVyHtc54dX98vc/43j17flXzO+uH8X3e+dDJudnJ3hKHzm92/g1k4EaRolbNW5epn4sHxaOm+x5UYCrvmk8C1QLqk1nudhgMPZXUl31xnOW02RjvXf3ftNHeJZRLFeFrvT+P8twJuPoVR2S7kUsvd2UTSRrvxSwyeQ9LcKSdTopHyb52FuJqY1UaEcI0H6X35vkx9uU3zKOwl/5xEV5z20YhNF6cHyz/mlIvNgQLy4RrygTr2hLvJDWkO/cu05UpWrNw7nUZRqFT3NvzUFQKb5dUWvklSs/fkwGsTjmafun0f64Sq+ybrwWGvcy8pU8ZJT27cVKehNZiVQBCVoCRNYXycCJ/WfdCJSBk17hPvTVnRSIQCwbUIJtzJVFYFyjBYXUIhgiSDCGqoH14ZfhU+TKdMSiPjcvYk5m2yBR7EQTGe8MtOZRLpI3UIsNhFpz9RyPxcbPYmPLqnWrSKvHwvbPIvdKibiXdBs5V0gK0UGxkwCdOBTYll38dxofFZktRAllgCGBCeY669VkUFcRXFDGBBGEd0vWOnamS7KCk8hq0QJd4Wl0HQgLMcIWJ7agygAgDCHS2MIgsCDniqocqf+CnMZJdb4FwJaTQic+0ihJu2WkuAxGEg6LnFTitpuz5xucD7uPQ+G0oVkZAKfxFhm6VKDhEBfVcSnf6LIXeMxcIUfjHR4nJ4/9ICgc6TlSjF3VP4kcz1dUKuzj0mGykmr13XwMhMWpbtzQrqdfFke1FkahGs5Y8oS+6wRfnZEM7sOlH/vhXO0ahXEczpRssgOuA3+S7IhDA63l1Fkkg81WkyTbY42cpe9aS0W4MuRGQjEY1EFuLFzp7kFOATdirCnkuKUDR2EJcCg7qBvoOgiBBz2fdvzRMkw6nUx1THNziAxxnuxouNyU9QZKq5m2MwAlF20+TXI91RcmWX7rPDFpPLtKjYSM6HnK4KGEGGmwUAwUqnMtpwUlezE8GEZkRD63VA5iSaKxkLoxci4iyTTqg9ZO50CDzaTb9CI+a33ftm6htGfdQmCn8DTstTYND+YDU/28RpA+Uep9Uf/h8yK7M8pGAJVCyRMTlUFyqVRaY22rnv0BVlUDdUS4pEu19xIu6bMGfqlCpR241qcWxhuSueFglET9HRfLaR2bdgHF8kr8BuLZQqOCbtSHbKOEjvTx69eHtFF5ty4uRQPhW28V9EomHgzG6MAoK3Q2mesx6pISwsriD9Tr6EywbkmLB0baHirpZ6FAgVFfxwaLjAL7iWkCRXq9wm6QFehstbtd9pEtxj13srZTZK+k8cGcWcbTgfDdUL47XmJ97WsoXzooQl9Y8b2hUEJs61V9ZVzon1iBbwo+syxv78aCHVfl6YVV5ZuK2ff4uL3Nugsr1TevD0172BtS3dZzBj+hsgnUexE/e43gPIHpJj1svNhT8tpUt5ANLFHZfWH/baFLht+5ZZhIZbWfnV7st3V+m8RtOahhdfKg7xpIM+clrxZ2q4LqpAH/cMwgHZjZqJMNe8dsWBON1whE217JAQ37kC93KZo9XCIVc81Yc1SukQfbOjnwMJ01dh3jENVlIt8DRrUvgYTFuMCAJF/OALAkA9KeiN8TWsdjlxdfLMoxZQwIwoUARu2lRKVgi9uAKqVpA84x4F2ahTINw4I4EWq49hK3iLP/nsJsx4fl+nMt1+oAyBar7U61NUn+Xs/dqZJ5Opa6t81wm507XFKijQ1uaPCnUUwR57TLScnlKhiTcscO62a+5wX7bJoRC2VfaTnZ5BxNFoIsZBPECMY2pMhY6VBClj0ndGOOyrIFTZDlQQbKIilpqwdxouTPXE7WwdLySAYd0PTJEOmHhqBoVQkQakEsILSJjQlRsGnAlqR9IFJaAAPICWWY2nAXV9warmX+fCO4vs7jqVTT8QIQxZxbECaLkjBR2v0goJAq8w0wsSHiUCBAStR6e4jWSLjucRyPE9fx7iSAlo0AIhwTyiHOhqnUeIIDkPhDNoHCxl2ax25LDD0lru094Fa6p91Gr2f9QkjzOOSvBVpE3SqwbUCUo5kvhewLJlHmIAxC7SilbSHCKBaEQfXbXNgxKK0jLvn9mmriDFcFZYvbzlMFtQxK//ro7v7H7y+Lz/x2+fU7+3F3s/j3E/7Q7Twa+GuE2FjwTksSeQ2ho5rbb5tuKkXbL8Tiu/8B</diagram></mxfile>
|
2110.02027/main_diagram/main_diagram.pdf
ADDED
|
Binary file (53.1 kB).

2110.02027/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,17 @@
|
# Method

<figure id="fig_merit" data-latex-placement="ht">
<div class="center">
<embed src="fig6.pdf" style="width:39.8%" />
</div>
<figcaption>The performance of ProGCL for another graph contrastive learning method MERIT <span class="citation" data-cites="Jin2021MultiScaleCS"></span>.</figcaption>
</figure>

In addition to GCA and GRACE, we also evaluate the performance of ProGCL on another GCL method MERIT. The results shown in Figure [6](#fig_merit){reference-type="ref" reference="fig_merit"} demonstrate that ProGCL brings consistent improvements over the base method, which verifies that our ProGCL is readily pluggable into various negatives-based GCL methods to improve their performance.
<figure id="fig_estimation" data-latex-placement="ht">

<figcaption>The histograms of estimated probability.</figcaption>
</figure>

To verify that our ProGCL can alleviate the sampling bias, we plot estimated probability histograms of negatives in Figure [7](#fig_estimation){reference-type="ref" reference="fig_estimation"}. Compared with similarity, the estimated probability serves as a more discriminative measure to distinguish true and false negatives, which can help us select hard and true negatives together with similarity as in Eq. ([\[measure\]](#measure){reference-type="ref" reference="measure"}).
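As a rough illustration of how such an estimated probability could be combined with similarity when selecting negatives, consider the sketch below. The exact combination of Eq. ([\[measure\]](#measure)) is not reproduced in this excerpt, so the product form, the variable names, and the top-k selection here are purely our own assumptions.

```python
import torch

def select_hard_true_negatives(similarity, p_true_negative, k):
    """Score negatives by combining similarity with the estimated probability of
    being a true negative, then keep the top-k candidates.

    similarity:      (num_negatives,) similarities of negatives to the anchor
    p_true_negative: (num_negatives,) estimated probability that each negative
                     is a true negative (e.g., from a fitted mixture model)
    """
    score = p_true_negative * similarity      # assumed combination, not Eq. (measure)
    topk = torch.topk(score, k)
    return topk.indices, topk.values
```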
|
2110.06539/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
<mxfile host="Electron" modified="2021-10-05T13:57:18.758Z" agent="5.0 (Macintosh; Intel Mac OS X 11_5_2) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/15.1.3 Chrome/89.0.4389.128 Electron/12.1.0 Safari/537.36" version="15.1.3" etag="5JTqfaFVWc76DwM28t_y" type="device"><diagram id="N8dZ0DD2Tv5NJ1AHghkW">7Vtbb6s4EP41eVmpEXfIY5M2uw+72iNVqz19qlxwgnUIzoLTJOfX7zjYXAxJKYEW9bStGjw2Np5vZjz+TCbmYnP4PUHb8C8a4GhiaMFhYt5NDEPXNQc+uOSYSTxdzwTrhASiUSF4ID+xEGpCuiMBTisNGaURI9uq0KdxjH1WkaEkoftqsxWNqqNu0VqMqBWCBx9FuNbsXxKwUMzCLrX+A5N1yPIJi5oNko1FF2mIArovjWXeT8xFQinLrjaHBY648qReso6WZ2rzB0twzNrcYGQ3vKBoJ+Y2MZwIbp2vKPQAD8iOYtbOfzsqK27SEya30ABGOxSVcLU+fTraxF7w6T6vJu4cPcE/MnHv+B9UiTHgubJhxE2ZSvIRjYTu4gDzR9Wheh8Shh+2yOe1e7AskIVsE4nqlCX0B17QiCYgiWmMxcMK+wFNQJlEkdJEaAAnDB/OalHPsQGjxnSDWXKEJuIGcybwFfZsS0vYF9ZhClFYMgwpQ8Ie13nPBWRwIVBrRtB8HwTTz42gYVcRtLQ6gk4DgrZzPYJWA4KKGnEc3PK4BSU/QmlK/KrmykqZGGaAsLfyaxqFGsf3MOApa2T0snIt4mBdDXIMJWvMKtGihVpLarOb1CZkCY4QIy/VEZt0KUb4RsnJpiVqit9ZtuJQKd0lPhZ3lcOg2pGnOLBmVTvKtFDr6ARtPu1WaNvXo13HDuBIjt9F9anwCAVtasviHZ+olpeOotQCc3NkmLuqpxpeN8xNQ+1IGwpzp455Y3Q15nopwn6SUOo47UKp08Ni6A7iXAfCvkv3getH6WdwXfgVL7ziVpllCpE1Mr9yVL9S8ejqV85sML/yzvjVlsDnbzXowaZZGzepuwWKyDrmFgNYYZDPuYcQ2BDciooNCQI+TKNfFp6rNXkhlMsr9MK7ny97yksdBdNZ3RWtBpNSI2EXV5xd74rcgR5Ec5qwkK5pjKL7Qjr3d8lLHhIvLIta78uiMzL3hX6mWulHrtsSeXWr0Tozutyv03LRBJzRsdRsyxuk56ejq5mdqV18TLW9DDqFsWZP0DXSSOahxyR95fnYb0zSM02/JUkfjRmaaiLtdV1FNAVQd7BVRJJPV4WqPNIUweWxEluaI00d4RQmxuRYAUpDKVsSPoNL8Wk0VgDbqWrQqEBpds0sLnebb9kHsBDjdQtJQ7Tll2RzIg/LxqEmC4xuS9I/0TOOvtGUMEJ57TNljG6gQcQr5sj/sT4lEOXQcfqBJqfBbtNtRnJy00CysCIHvjLOxfPchYxxdvSW68FY+kGsTYlP4xWB1CSZ+jCisQwQQ/DB5RCblz9pHMhrHueXAWXpTUgTAjUMRTcsIdsI3+iGN93G6x4SFkNJWEy5D38lYbF6SFj0Jibt6s3D25mZM3v561OWsbE3rpqdqii2XitU+nW4nbzctl1Bt1rn6NbDR/Gq1nBkgOsqDj1zag7dxIz3QQboTVRbT1j983RibcgHsuHnUOO73L+hV8K4zm2tHyBVHzMksO8BZAN/dn1k5rlVjgWtKqxBh2r2bvPf86u0ACOmp+VdjFLe6osGObk0Nexe+aXZ2KK951UjgWQI3hrt1Y4sTTmD6THaD8InjsPwztjQaOzFVFeOrgyGupO09OGygwZC8ste3sdebF2xl67ZpKV0ZKsd9WgvTSRpbxnK6PLJYTIT11Mzk/qecajMRG40vqij96GOXFdZ6/uhjjx9sBTCaEEuflFHPYQBlTpyjfoGZSjqSGJaO4hMQvqEmw7z60eRAx0xmmeyhB407mnqilvf2w91utjDa2+X9/boF9rb28YHrqDXE2qt3l9MPvn7i45dTVptsxWEfbyB2scrbb/k2ah6pNl1R1vrqMf0pQfqrQ9sC5qsPUvWNbE+Y0dldk04V+V0ZWwvdNVORTofxasbYrWjzuYGxeKrDlnz4gsj5v3/</diagram></mxfile>
|
2110.06539/main_diagram/main_diagram.pdf
ADDED
|
Binary file (16.1 kB). View file
|
|
|
2110.06539/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,669 @@
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
A common approach used in (non-confounded) imitation learning is matching the policy's stationary distribution $d_{\rho_{o}}^\pi$ to the offline target distribution $d_{\rho_{e}}^{\pi^*}$. Consider a source distribution ${p \in \Delta_N}$ and target distribution ${q \in \Delta_N}$. GAIL [@ho2016generative] uses the distribution ratio objective $\log\brk*{p / q}$, which can be estimated using a GAN-like objective $D_{R}(p || q)
|
| 4 |
+
=
|
| 5 |
+
\sup_{g: \mathcal{Z}\mapsto (0,1)}
|
| 6 |
+
\expect*{p}{\log\brk*{g(z)}}
|
| 7 |
+
+
|
| 8 |
+
\expect*{q}{{\log\brk*{1 - g(z)}}}$, to match the distribution $p$ to $q$.
|
| 9 |
+
|
| 10 |
+
This technique can be generalized to $f$-divergences [@csiszar2004information; @liese2006divergences; @kostrikov2019imitation; @ke2020imitation]. Specifically, we wish to minimize a discrepancy measure from $p$ to $q$, namely $\min_{p \in \mathcal{K}} D(p || q)$. For a convex function ${f: [0, \infty) \mapsto \R}$, the $f$-divergence of $p$ from $q$ is defined by ${D_f(p || q) = \expect*{q}{f\brk*{\frac{p}{q}}}}$. DICE [@kostrikov2019imitation] uses the variational representation of the $f$-divergence, $$\begin{align*}
|
| 11 |
+
D_f(p || q)
|
| 12 |
+
=
|
| 13 |
+
\sup_{g: \mathcal{Z}\mapsto \R}
|
| 14 |
+
\expect*{p}{g(z)}
|
| 15 |
+
-
|
| 16 |
+
\expect*{q}{f^*(g(z))},
|
| 17 |
+
\end{align*}$$ where $f^*$ is the Fenchel conjugate of $f$ defined by ${f^*(y) = \sup_x xy - f(x)}$. The convex conjugate has closed-form solutions for the total variation distance, KL-divergence, $\chi^2$-divergence, squared Hellinger distance, Le Cam distance, and Jensen-Shannon divergence. Using the variational representation of the $f$-divergence we can estimate $D_f$ from samples of $p$ and $q$. [\[table: imitation methods\]](#table: imitation methods){reference-type="ref+Label" reference="table: imitation methods"} presents examples of various $f$-divergences and their respective dual formulations. We also add the distribution ratio to the table for comparison, though it is not an $f$-divergence.
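To make the variational form concrete, the following is a minimal numerical sketch (ours, with made-up discrete distributions, not the paper's code). For the $\chi^2$-divergence, $f(t) = (t-1)^2$ with conjugate $f^*(y) = y^2/4 + y$, and plugging the maximizing witness $g = f'(p/q)$ into the dual objective recovers $D_f(p \,\|\, q)$ exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two discrete distributions p (source) and q (target) over N atoms.
N = 10
p = rng.random(N); p /= p.sum()
q = rng.random(N) + 0.1; q /= q.sum()

# chi^2-divergence: f(t) = (t - 1)^2, Fenchel conjugate f*(y) = y^2 / 4 + y.
f = lambda t: (t - 1.0) ** 2
f_star = lambda y: y ** 2 / 4.0 + y

# Primal definition: D_f(p || q) = E_q[f(p / q)].
primal = np.sum(q * f(p / q))

# Dual (variational) form: sup_g E_p[g(z)] - E_q[f*(g(z))], attained at g = f'(p / q).
g_opt = 2.0 * (p / q - 1.0)            # f'(t) = 2(t - 1)
dual = np.sum(p * g_opt) - np.sum(q * f_star(g_opt))

print(f"primal chi^2 divergence: {primal:.6f}")
print(f"dual estimate:           {dual:.6f}")   # matches the primal value
```

In practice $g$ is parameterized (e.g., by a neural network) and the same objective is estimated from samples of $p$ and $q$ rather than from the densities themselves.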
|
| 18 |
+
|
| 19 |
+
# Method
|
| 20 |
+
|
| 21 |
+
<figure id="fig:3_states" data-latex-placement="t!">
|
| 22 |
+
<embed src="imgs/3_states.pdf" style="width:90.0%" />
|
| 23 |
+
<figcaption>A contextual MDP with state space <span class="math inline">$\mathcal{S}= \brk[c]*{A, B, C}$</span>, action space <span class="math inline">${\mathcal{A}= \brk[c]*{a_B, a_C}}$</span> and context space <span class="math inline">$\mathcal{X}= \brk[c]*{x_1, x_2}$</span>. We assume <span class="math inline"><em>ν</em>(<em>A</em>|<em>x</em>) = 1</span> for all <span class="math inline"><em>x</em> ∈ 𝒳</span>. The actions <span class="math inline"><em>a</em><sub><em>B</em></sub>, <em>a</em><sub><em>C</em></sub></span> transition the agent to states <span class="math inline"><em>B</em>, <em>C</em></span>, respectively, after which the agent receives a reward <span class="math inline">$r \in \brk[c]*{0, 1}$</span> depending on the context. We assume <span class="math inline"><em>B</em>, <em>C</em></span> are sink states. </figcaption>
|
| 24 |
+
</figure>
|
| 25 |
+
|
| 26 |
+
To gain intuition, we start with a simple toy example. Consider the three-state example depicted in [5](#fig:3_states){reference-type="ref+Label" reference="fig:3_states"}. Here, the environment initiates at state $A$ w.p. 1, after which the agent can choose to (deterministically) transition to state $B$ or $C$. The agent then receives a reward depending on the context. The optimal policy is given by ${
|
| 27 |
+
\pi^*(a | s, x)
|
| 28 |
+
=
|
| 29 |
+
\mathbf{1}\brk[c]*{a = a_B, x = x_1}
|
| 30 |
+
+
|
| 31 |
+
\mathbf{1}\brk[c]*{a = a_C, x = x_2}}$ for ${s = A}$, and any action is optimal for ${s \neq A}$. Without loss of generality we assume $\pi^*(a_B | B, x) = \pi^*(a_C | C, x) = 1$. We turn to analyze the marginalized stationary distribution, which uniquely defines the set of optimal policies [@puterman2014markov]. Denoting $\rho_{e}(x_1) = \rho$, we have that ${
|
| 32 |
+
d_{\rho_{e}}^{\pi^*}(s,a)
|
| 33 |
+
=
|
| 34 |
+
\rho d^{\pi^*}(s,a | x_1) + (1-\rho)d^{\pi^*}(s, a | x_2).}$ Then, ${
|
| 35 |
+
d_{\rho_{e}}^{\pi^*}(s,a)
|
| 36 |
+
=
|
| 37 |
+
\brk*{1- \gamma}\mathbf{1}\brk[c]*{s = A} +
|
| 38 |
+
\rho \gamma\mathbf{1}\brk[c]*{s = B, a=a_B} +
|
| 39 |
+
(1-\rho) \gamma \mathbf{1}\brk[c]*{s = C, a=a_C}.}$
|
| 40 |
+
|
| 41 |
+
Suppose ${\rho_{o}= \rho_{e}}$, and $\rho = \frac{1}{2}$. Trivially ${d_{\rho_{e}}^{\pi^*}(s,a) = d_{\rho_{o}}^{\pi^*}(s,a)}$. We define the (suboptimal) policy $$\begin{align}
|
| 42 |
+
\label{eq: suboptimal policy}
|
| 43 |
+
\pi_0(a | A, x)
|
| 44 |
+
&=
|
| 45 |
+
1 - \pi^*(a | A, x) \quad, a \in \mathcal{A}, x \in \mathcal{X}.
|
| 46 |
+
\end{align}$$ It can be verified that $d_{\rho_{e}}^{\pi^*}(s,a) = d_{\rho_{o}}^{\pi_0}(s,a)$ still holds, yet $\pi_0$ is catastrophic ([\[eq: catastrophic policy\]](#eq: catastrophic policy){reference-type="ref+Label" reference="eq: catastrophic policy"}) with value zero. A question arises: can we show that $\pi_0$ is a suboptimal policy given access to the expert data (i.e., access to $d_{\rho_{e}}^{\pi^*}(s,a)$) and a forward model $P(s' | s, a, x)$?
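To see the claim concretely, the following small sketch (ours, purely illustrative) builds the marginalized occupancy measures of $\pi^*$ and $\pi_0$ in the three-state example and confirms that they coincide when $\rho = \frac{1}{2}$:

```python
import numpy as np

gamma, rho = 0.9, 0.5
S, A, X = ["A", "B", "C"], ["a_B", "a_C"], ["x_1", "x_2"]

def occupancy(policy, context):
    """Discounted state-action occupancy d^pi(s, a | x) of the three-state chain."""
    d = {(s, a): 0.0 for s in S for a in A}
    first = policy("A", context)               # action chosen at the initial state A
    d[("A", first)] += 1.0 - gamma
    sink = "B" if first == "a_B" else "C"      # deterministic transition to a sink state
    d[(sink, policy(sink, context))] += gamma  # remaining discounted mass stays in the sink
    return d

# pi* picks a_B under x_1 and a_C under x_2; pi_0 flips the choice at state A.
pi_star = lambda s, x: ("a_B" if x == "x_1" else "a_C") if s == "A" else ("a_B" if s == "B" else "a_C")
pi_0    = lambda s, x: ("a_C" if x == "x_1" else "a_B") if s == "A" else ("a_B" if s == "B" else "a_C")

def marginal(policy):
    """Marginalized occupancy d_rho(s, a) = sum_x rho(x) d^pi(s, a | x)."""
    out = {(s, a): 0.0 for s in S for a in A}
    for x, w in zip(X, [rho, 1.0 - rho]):
        for k, v in occupancy(policy, x).items():
            out[k] += w * v
    return out

d_star, d_0 = marginal(pi_star), marginal(pi_0)
print(all(np.isclose(d_star[k], d_0[k]) for k in d_star))  # True: indistinguishable for rho = 1/2
```

Running the same check with $\rho \neq \frac{1}{2}$ shows that the two occupancy measures differ, which is why the argument below instead relies on the shifted distribution $\widetilde{\rho_{e}}$.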
|
| 47 |
+
|
| 48 |
+
Unfortunately, one cannot prove that $\pi_0$ is suboptimal. Informally, notice that $\pi_0$ is an optimal policy for an alternative reward function, ${
|
| 49 |
+
r_0(s, a, x) = 1 - r(s,a,x),
|
| 50 |
+
}$ yet is catastrophic w.r.t. the true reward $r$. Indeed, since $r$ is unknown and $d_{\rho_{o}}^{\pi_0}(s,a) = d_{\rho_{o}}^{\pi^*}(s,a)$, we cannot reject $r_0$ (i.e., we cannot conclude that $r_0$ is not the true reward). In other words, one cannot use the data to differentiate which of $\brk[c]*{\pi_0, \pi^*}$ is the optimal policy.
|
| 51 |
+
|
| 52 |
+
Next, assume $\rho_{o}\neq \rho_{e}$, and define $\pi_0$ as in [\[eq: suboptimal policy\]](#eq: suboptimal policy){reference-type="ref+Label" reference="eq: suboptimal policy"}. Let $\widetilde{\rho_{e}} = 1 - \rho_{e}$ and recall that $\rho_{e}(x_1) = \rho$. Then, we have that $$\begin{align*}
|
| 53 |
+
d^{\pi_0}_{\widetilde{\rho_{e}}}(s,a)
|
| 54 |
+
&=
|
| 55 |
+
(1-\rho) d^{\pi_0}(s,a | x_1) + \rho d^{\pi_0}(s, a | x_2)
|
| 56 |
+
=
|
| 57 |
+
(1-\rho)d^{\pi^*}(s, a | x_2) + \rho d^{\pi^*}(s,a | x_1)
|
| 58 |
+
=
|
| 59 |
+
d_{\rho_{e}}^{\pi^*}(s,a).
|
| 60 |
+
\end{align*}$$ Indeed, the expert data is incapable of distinguishing $\pi_0$ and $\pi^*$, since $d^{\pi_0}_{\widetilde{\rho_{e}}} = d_{\rho_{e}}^{\pi^*}$, and $\rho_{e}$ is unknown. Unfortunately, as we've shown previously, $\pi_0$ achieves value zero. Notice that, unlike the previous section, one cannot distinguish $\pi^*$ from the catastrophic policy $\pi_0$ for *any choice* of $\rho_{o}$.
|
| 61 |
+
|
| 62 |
+
:::: algorithm
|
| 63 |
+
::: algorithmic
|
| 64 |
+
Expert data with missing context $\mathcal{D}^* \sim d_{\rho_{e}}^{\pi^*}$, $\lambda > 0$, sensitivity bound $\delta \geq 0$. $\Upsilon = \emptyset$ Sample $u(s,a) \sim U[0, \delta], \forall s,a$ $L^*(\pi; g_0) := \expect*{s, a \sim d_{\rho_{o}}^{\pi}(s,a)}{
|
| 65 |
+
g_0(s,a)}
|
| 66 |
+
-
|
| 67 |
+
\expect*{s, a \sim d_{\rho_{e}}^{\pi^*}(s,a) + u(s,a)}{ g_0(s,a)}$ $L_i(\pi ; g_i) := \expect*{x \sim \rho_{o}, s, a \sim d^{\pi}(s, a | x)}{
|
| 68 |
+
g_i(s,a,x)}
|
| 69 |
+
-
|
| 70 |
+
\expect*{x \sim \rho_{o}, s, a \sim d^{\pi_i}(s, a | x)}{ g_i(s,a,x)}\quad, i \geq 1$ Compute $\pi_n$ by solving $$\begin{align}
|
| 71 |
+
\min_{\pi \in \Pi_{\text{det}}}
|
| 72 |
+
\max_{\left|g_0\right| \leq \frac{1}{2}, \left|g_i\right| \leq \frac{1}{2}}
|
| 73 |
+
\brk[c]*{
|
| 74 |
+
L^*(\pi; g_0(s,a))
|
| 75 |
+
-
|
| 76 |
+
\lambda
|
| 77 |
+
\min_i L_i(\pi; g_i(s,a,x))
|
| 78 |
+
}
|
| 79 |
+
\label{eq: confounded imitation}
|
| 80 |
+
\end{align}$$ Terminate and return $\bar{\pi}(a|s,x) =
|
| 81 |
+
\frac{
|
| 82 |
+
\sum_{i=1}^{n-1} d^{\pi_i}(s,a,x)
|
| 83 |
+
}
|
| 84 |
+
{
|
| 85 |
+
\sum_{i=1}^{n-1} \sum_{a'} d^{\pi_i}(s,a',x)
|
| 86 |
+
}$ $\Upsilon = \Upsilon \cup \brk[c]*{\pi_n}$
|
| 87 |
+
:::
|
| 88 |
+
::::
|
| 89 |
+
|
| 90 |
+
[\[algo: partial imitation no shift\]](#algo: partial imitation no shift){reference-type="ref+Label" reference="algo: partial imitation no shift"} describes our method for calculating the ambiguity set of [\[thm: ambiguity uniqueness\]](#thm: ambiguity uniqueness){reference-type="ref+Label" reference="thm: ambiguity uniqueness"}, and returns $\bar{\pi}$ of [\[thm: ambiguity policy selection\]](#thm: ambiguity policy selection){reference-type="ref+Label" reference="thm: ambiguity policy selection"}. At every iteration of the algorithm, we find a new policy in the set by minimizing the total variation distance (written in variational form) between $d_{\rho_{e}}^{\pi^*}(s,a)$ and $d_{\rho_{o}}^{\pi}(s,a)$, while regularizing it with the distance between $\pi$ and all previously collected $\pi_i \in \Upsilon$. [\[algo: partial imitation no shift\]](#algo: partial imitation no shift){reference-type="ref+Label" reference="algo: partial imitation no shift"} also uses a sensitivity parameter $\delta \geq 0$ (defined formally in [10](#appendix: bounded confounding){reference-type="ref+Label" reference="appendix: bounded confounding"}) whenever bounded covariate shift is present. For this section we assume $\delta = 0$.
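As an illustration of the total variation term being minimized, the following tabular sketch (ours, not from the paper) evaluates the variational form $\sup_{|g| \leq \frac{1}{2}} \expect*{d^{\pi}}{g} - \expect*{d^{\pi^*}}{g}$ in closed form: with witnesses bounded by $\frac{1}{2}$, the supremum is attained at $g = \frac{1}{2}\operatorname{sign}(d^{\pi} - d^{\pi^*})$ and equals the TV distance between the two occupancy measures.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tabular occupancy measures over |S| * |A| state-action pairs (made-up values).
n_sa = 6
d_pi = rng.random(n_sa);  d_pi /= d_pi.sum()     # occupancy of the learner's policy
d_exp = rng.random(n_sa); d_exp /= d_exp.sum()   # occupancy estimated from expert data

# Variational form of TV with witnesses bounded by 1/2.
g = 0.5 * np.sign(d_pi - d_exp)                  # optimal bounded witness
variational = np.sum(d_pi * g) - np.sum(d_exp * g)

# Direct definition of the total variation distance.
tv = 0.5 * np.sum(np.abs(d_pi - d_exp))

print(f"variational objective: {variational:.6f}")
print(f"total variation:       {tv:.6f}")        # the two values agree
```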
|
| 91 |
+
|
| 92 |
+
In practice, the functions $L^*$ and $L_i$ in lines 4 and 5 are estimated using samples from trajectories of $\pi, \pi_i$, and $\mathcal{D}^*$. We then solve the min-max problem of [\[eq: confounded imitation\]](#eq: confounded imitation){reference-type="ref+Label" reference="eq: confounded imitation"} using parametric representations of $g_i$ and online gradient descent. The following proposition states that [\[algo: partial imitation no shift\]](#algo: partial imitation no shift){reference-type="ref+Label" reference="algo: partial imitation no shift"} indeed retrieves the set $\Upsilon_{\pi^*}$.
|
| 93 |
+
|
| 94 |
+
::: restatable
|
| 95 |
+
propositionalgooneconvergence Assume $\rho_{e}\equiv \rho_{o}$ and $\left|\Upsilon_{\pi^*}\right| < \infty$. Then there exists $\lambda^* > 0$ such that for any ${\lambda \in (0, \lambda^*)}$, [\[algo: partial imitation no shift\]](#algo: partial imitation no shift){reference-type="ref+Label" reference="algo: partial imitation no shift"} (with $\delta = 0$ sensitivity) will return $\bar{\pi}$ of [\[thm: ambiguity policy selection\]](#thm: ambiguity policy selection){reference-type="ref+Label" reference="thm: ambiguity policy selection"} after exactly $\left|\Upsilon_{\pi^*}\right|$ iterations.
|
| 96 |
+
:::
|
| 97 |
+
|
| 98 |
+
<figure id="fig: rooms" data-latex-placement="t">
|
| 99 |
+
<p><img src="imgs/experiments/rooms_1.png" style="width:49.0%" alt="image" /> <img src="imgs/experiments/rooms_2.png" style="width:49.0%" alt="image" /> <img src="imgs/experiments/context_free.png" style="width:50.0%" alt="image" /></p>
|
| 100 |
+
<figcaption>Results for the rooms environment with covariate shift affecting only the distribution of walls. It is evident that whenever the reward is context-free comparable performance is obtained. Runs averaged over 5 seeds. </figcaption>
|
| 101 |
+
</figure>
|
| 102 |
+
|
| 103 |
+
We tested [\[algo: partial imitation no shift\]](#algo: partial imitation no shift){reference-type="ref+Label" reference="algo: partial imitation no shift"} on both the RecSim environment as well as a four-rooms environment with random instantiations of walls. Experiments for the RecSim environment are readily provided in [5](#section: experiments){reference-type="ref+Label" reference="section: experiments"}. Here we describe our simple four-rooms environment and show experiments w.r.t. [\[thm: context free reward\]](#thm: context free reward){reference-type="ref+Label" reference="thm: context free reward"}.
|
| 104 |
+
|
| 105 |
+
The four-rooms environment, as depicted in [6](#fig: rooms){reference-type="ref+Label" reference="fig: rooms"}, is a $15\times 15$ grid-world in which an agent can take one of four actions: LEFT, RIGHT, UP, or DOWN. Each action moves the agent in the specified direction whenever no obstacle is present. The agent (shown in blue) must reach the (green) goal while avoiding the (red) mine. When the goal is reached the agent receives a reward of $+1$ and the episode terminates. In contrast, if the agent reaches the mine, she receives a reward of $-1$ and the episode terminates. The state space of the environment consists of the agent's $(\text{row}, \text{col})$ position in the world. The rest of the information in the environment is defined by the context $x$. Particularly, the context is defined by the position of the green goal, the position of the red mine, and the specific instantiation of walls (two instantiations are depicted in [6](#fig: rooms){reference-type="ref+Label" reference="fig: rooms"}).
|
| 106 |
+
|
| 107 |
+
We trained an agent with full information (i.e., observed context, including goal location, mine location, and walls). We generated expert data w.r.t. the trained agent. To demonstrate the result of [\[thm: context free reward\]](#thm: context free reward){reference-type="ref+Label" reference="thm: context free reward"} we executed [\[algo: partial imitation no shift\]](#algo: partial imitation no shift){reference-type="ref+Label" reference="algo: partial imitation no shift"} with both a shifted distribution and the default distribution of walls. We did not change the distribution of goal and mine. Note that since the distribution of walls only affects the transition function and not the reward, we expect, by [\[thm: context free reward\]](#thm: context free reward){reference-type="ref+Label" reference="thm: context free reward"}, the optimal solution to remain the same. Indeed, as shown in [6](#fig: rooms){reference-type="ref+Label" reference="fig: rooms"} after training an agent with no access to the contextual information of the walls in the expert data, the agent achieved comparable results both with and without covariate shift on the distribution of walls.
|
| 108 |
+
|
| 109 |
+
This result may seem surprising at first, as the walls are essential for solving the task at hand. Nevertheless, since the distribution of walls *is observed* in the online environment, the partially observed expert data suffices to obtain an optimal policy. This is consistent with [\[thm: context free reward\]](#thm: context free reward){reference-type="ref+Label" reference="thm: context free reward"}, which indeed states that this information is not needed in the expert data in order to obtain an optimal policy.
|
| 110 |
+
|
| 111 |
+
In this section we discuss the imitation learning problem under bounded hidden confounders. There are several ways to define boundedness of unobserved confounders. In [3](#section: imitation){reference-type="ref+Label" reference="section: imitation"} we showed that, under *arbitrary* covariate shift and context-free transitions, the imitation learning problem is impossible, i.e., one cannot rule out a catastrophic policy. We begin by considering the effect of bounded covariate shift, i.e., $\frac{\rho_{o}}{\rho_{e}} \leq C$. We then consider almost-context-free rewards, showing a tradeoff w.r.t. the hardness of the imitation problem.
|
| 112 |
+
|
| 113 |
+
A common approach in causal inference is to bound the bias of unobserved confounding through sensitivity analysis [@hsu2013calibrating; @namkoong2020off; @kallus2021minimax]. In our setting, this confounding bias occurs due to a covariate shift of the unobserved covariates. As we've shown in [\[thm: impossibility\]](#thm: impossibility){reference-type="ref+Label" reference="thm: impossibility"}, though these covariates are observed in the online environment, their shifted and unobserved distribution in the offline data can render catastrophic results. Therefore, we consider the odds-ratio bounds of the sensitivity in distribution between the online environment and the expert data, as stated formally below.
|
| 114 |
+
|
| 115 |
+
::: {#assumption: sensitivity .assumption}
|
| 116 |
+
**Assumption 1** (Bounded Sensitivity). *We assume that $\text{Supp}\brk*{\rho_{e}} \subseteq \text{Supp}\brk*{\rho_{o}}$ and that there exists some $\Gamma \geq 1$ such that for all $x \in \text{Supp}\brk*{\rho_{e}}$ $$\begin{align*}
|
| 117 |
+
\Gamma^{-1}
|
| 118 |
+
\leq \frac{\rho_{o}(x)(1-\rho_{e}(x))}{\rho_{e}(x)(1-\rho_{o}(x))}
|
| 119 |
+
\leq
|
| 120 |
+
\Gamma.
|
| 121 |
+
\end{align*}$$*
|
| 122 |
+
:::
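For intuition, the following small numerical sketch (ours, with arbitrary made-up probabilities) computes the odds ratios of the assumption above for a binary context space and reports the smallest $\Gamma$ for which the bound holds:

```python
import numpy as np

def odds_ratio(rho_o, rho_e):
    """Odds ratio between the online and expert probabilities of a single context."""
    return (rho_o * (1.0 - rho_e)) / (rho_e * (1.0 - rho_o))

# Binary context: probabilities of x_1 under the online and expert distributions.
rho_o_x1, rho_e_x1 = 0.6, 0.4
ratios = [odds_ratio(rho_o_x1, rho_e_x1),             # context x_1
          odds_ratio(1 - rho_o_x1, 1 - rho_e_x1)]     # context x_2

# Smallest Gamma >= 1 satisfying Gamma^{-1} <= ratio <= Gamma for every context.
gamma_min = max(max(r, 1.0 / r) for r in ratios)
print(f"odds ratios: {np.round(ratios, 3)}, minimal Gamma: {gamma_min:.3f}")
```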
|
| 123 |
+
|
| 124 |
+
Next, we define the notion of $\delta$-ambiguity, a generalization of the ambiguity set in [1](#def: ambiguity set){reference-type="ref+Label" reference="def: ambiguity set"}.
|
| 125 |
+
|
| 126 |
+
::: definition
|
| 127 |
+
**Definition 4** ($\delta$-Ambiguity Set). *For a policy $\pi \in \Pi$, we define the set of all deterministic policies that are $\delta$-close to $\pi$ by $$\begin{align*}
|
| 128 |
+
\Upsilon_{\pi}^{\delta} = \brk[c]*{\pi' \in \Pi_{\text{det}} : \left|d_{\rho_{o}}^{\pi'}(s, a) - d_{\rho_{e}}^{\pi}(s, a)\right| < \delta, s \in \mathcal{S}, a \in \mathcal{A}}.
|
| 129 |
+
\end{align*}$$*
|
| 130 |
+
:::
|
| 131 |
+
|
| 132 |
+
Similar to [1](#def: ambiguity set){reference-type="ref+Label" reference="def: ambiguity set"}, the $\delta$-ambiguity set considers all deterministic policies whose marginalized stationary distribution is within distance $\delta$ of that of $\pi$. The following result shows that $\Upsilon_{\pi^*}^{\Gamma - 1}$ is a sufficient set of candidate optimal policies, as long as [1](#assumption: sensitivity){reference-type="ref+Label" reference="assumption: sensitivity"} holds for some $\Gamma \geq 1$.
|
| 133 |
+
|
| 134 |
+
::: restatable
|
| 135 |
+
theoremdeltasufficiency\[Sufficiency of $\Upsilon_{\pi^*}^{\Gamma - 1}$\] Let [1](#assumption: sensitivity){reference-type="ref+Label" reference="assumption: sensitivity"} hold for some $\Gamma \geq 1$. Then $\pi^* \in \Upsilon_{\pi^*}^{\Gamma - 1}$.
|
| 136 |
+
:::
|
| 137 |
+
|
| 138 |
+
The above result suggests that [\[algo: partial imitation no shift\]](#algo: partial imitation no shift){reference-type="ref+Label" reference="algo: partial imitation no shift"} can be executed over $\Upsilon_{\pi^*}^{\Gamma - 1}$ by adding uniform noise with sensitivity bound $\delta=\Gamma-1$ to $d_{\rho_{e}}^{\pi^*}(s,a)$ (see Line 4 of [\[algo: partial imitation no shift\]](#algo: partial imitation no shift){reference-type="ref+Label" reference="algo: partial imitation no shift"}), executing the algorithm for a finite number of iterations, and finally selecting a robust policy from the approximate set.
|
| 139 |
+
|
| 140 |
+
When bounded covariate shift is present, one might attempt to learn an inverse mapping of contexts from observed trajectories in the data.
|
| 141 |
+
|
| 142 |
+
We denote by $P_\rho^\pi$ the probability measure over contexts $x \in \mathcal{X}$ and trajectories ${\tau = (s_0, a_0, s_1, a_1, \hdots s_H)}$ as induced by the policy $\pi$ and context distribution $\rho$. That is, $$\begin{align*}
|
| 143 |
+
P_\rho^\pi(x, \tau)
|
| 144 |
+
=
|
| 145 |
+
\rho(x)
|
| 146 |
+
\nu(s_0|x)
|
| 147 |
+
\prod_{t=0}^{H-1}
|
| 148 |
+
P(s_{t+1} | s_t, a_t, x)
|
| 149 |
+
\pi(a_t | s_t, x).
|
| 150 |
+
\end{align*}$$ As the true context is observed in the online environment, we can calculate for any $\pi$ the quantity $P_{\rho_{o}}^\pi(x, \tau)$. As the expert data distribution was generated by the marginalized distribution ${P_{\rho_{e}}^{\pi^*}(\tau) = \sum_{x \in \mathcal{X}} P_{\rho_{e}}^{\pi^*}(x, \tau)}$, it is unclear if knowledge of $P_{\rho_{o}}^\pi(x, \tau)$ is beneficial.
|
| 151 |
+
|
| 152 |
+
Fortunately, whenever [1](#assumption: sensitivity){reference-type="ref+Label" reference="assumption: sensitivity"} holds, a high probability of reconstructing a context in the online environment induces a high probability of reconstructing it in the expert data. To see this, assume that there exists $\delta \in [0,1]$ such that for all ${\pi \in \Upsilon_{\pi^*}^{\Gamma - 1}}$, $\tau \in \text{Supp}(P^\pi_{\rho_{e}}(\tau))$, there exists $x \in \mathcal{X}$ such that $$\begin{align}
|
| 153 |
+
\label{assumption: delta identifiability}
|
| 154 |
+
P^\pi_{\rho_{o}}(x | \tau) \geq \min\brk[c]*{(1-\delta)\brk*{\rho_{o}(x) + \Gamma\brk*{1-\rho_{o}(x)}}, 1}.
|
| 155 |
+
\end{align}$$ That is, we assume that for any policy in $\Upsilon_{\pi^*}^{\Gamma - 1}$, and any induced trajectory of $x \in \mathcal{X}$, one can with high probability identify $x$ in the online environment. Importantly, this property can be verified in the online environment. When Assumption [1](#assumption: sensitivity){reference-type="ref" reference="assumption: sensitivity"} and [\[assumption: delta identifiability\]](#assumption: delta identifiability){reference-type="ref" reference="assumption: delta identifiability"} hold, we get that $$\begin{align*}
|
| 156 |
+
P^\pi_{\rho_{e}}(x | \tau)
|
| 157 |
+
=
|
| 158 |
+
\frac{P^\pi_{\rho_{e}}(\tau|x)\rho_{e}(x)}{P^\pi_{\rho_{e}}(\tau)}
|
| 159 |
+
\geq
|
| 160 |
+
\frac{P^\pi_{\rho_{e}}(\tau|x)}{P^\pi_{\rho_{e}}(\tau)}
|
| 161 |
+
\frac{\rho_{o}(x)}{\rho_{o}(x) + \Gamma\brk*{1-\rho_{o}(x)}}
|
| 162 |
+
=
|
| 163 |
+
\frac{P^\pi_{\rho_{o}}(x | \tau)}{\rho_{o}(x) + \Gamma\brk*{1-\rho_{o}(x)}}
|
| 164 |
+
\geq
|
| 165 |
+
1 - \delta.
|
| 166 |
+
\end{align*}$$ In other words, we can reconstruct $x$ with probability $1-\delta$ for any trajectory $\tau$ which satisfies the above. This allows us to deconfound essential parts of the expert data, rendering it useful for the imitation problem, even when reward is not provided. We leave further analysis of this direction for future work.
|
| 167 |
+
|
| 168 |
+
In [\[thm: context free reward\]](#thm: context free reward){reference-type="ref+Label" reference="thm: context free reward"} we showed that whenever the reward is independent of the context then the imitation problem is easy, in the sense that any policy $\pi_0 \in \Upsilon_{\pi^*}$ is also an optimal policy. Here, we relax the assumption on the reward, and instead assume bounded dependence of the reward on the context. The following definition upper bounds the confounding effect of the reward w.r.t. the context.
|
| 169 |
+
|
| 170 |
+
::: {#definition: reward dependence .definition}
|
| 171 |
+
**Definition 5**. *Let $\epsilon: \mathcal{X}\mapsto \R$ such that $$\begin{align*}
|
| 172 |
+
\min_{r_0: \mathcal{S}\times \mathcal{A}\mapsto \R} \left|r(s,a,x) - r_0(s,a)\right| \leq \epsilon(x) \quad,s \in \mathcal{S}, a \in \mathcal{A}, x \in \mathcal{X}
|
| 173 |
+
\end{align*}$$*
|
| 174 |
+
:::
|
| 175 |
+
|
| 176 |
+
Using the above definition, we can now show that any policy in $\Upsilon_{\pi^*}$ is still approximately optimal, as shown by the following result.
|
| 177 |
+
|
| 178 |
+
::: restatable
|
| 179 |
+
theoremcontextdependentreward\[Context Dependent Reward\] []{#thm: context dependent reward label="thm: context dependent reward"} Let $\epsilon: \mathcal{X}\mapsto \R$ of [5](#definition: reward dependence){reference-type="ref+Label" reference="definition: reward dependence"}. Denote ${\epsilon_{oe} = \expect*{x \sim \rho_{o}(x)}{\epsilon(x)} + \expect*{x \sim \rho_{e}(x)}{\epsilon(x)}}$. Then for any $\pi^* \in \Pi^*_\mathcal{M}$, $\pi_0 \in \Upsilon_{\pi^*}$ $$\begin{align*}
|
| 180 |
+
v(\pi_0)
|
| 181 |
+
\geq
|
| 182 |
+
v(\pi^*)
|
| 183 |
+
-
|
| 184 |
+
\epsilon_{oe}.
|
| 185 |
+
\end{align*}$$
|
| 186 |
+
:::
|
| 187 |
+
|
| 188 |
+
A direct corollary for the above result states that for $\epsilon: \mathcal{X}\mapsto \R$ of [5](#definition: reward dependence){reference-type="ref+Label" reference="definition: reward dependence"}, if $\epsilon(x) = \epsilon$, for all $x \in \mathcal{X}$, then for $\pi_0, \pi^*$ of [\[thm: context dependent reward\]](#thm: context dependent reward){reference-type="ref+Label" reference="thm: context dependent reward"}, it holds that $v(\pi_0) \geq v(\pi^*) - 2\epsilon$. That is, $\pi_0$ is an approximately optimal policy.
|
| 189 |
+
|
| 190 |
+
<figure id="fig: causal diagram" data-latex-placement="t">
|
| 191 |
+
<embed src="imgs/causal_diagrams/causal_diagram.pdf" style="width:50.0%" />
|
| 192 |
+
<figcaption><strong>Contextual MDP Causal Diagram</strong>.</figcaption>
|
| 193 |
+
</figure>
|
| 194 |
+
|
| 195 |
+
Our work is focused on the problem of hidden confounders in expert data for imitation and reinforcement learning. We have chosen to write the paper in terminology familiar to the RL community. In this section we address and formalize the problem in Causal Inference (CI) terminology. We begin by defining Structural Causal Models (SCM, @Pearl:2009:CMR:1642718) -- a basic building block of our framework. We then show how the confounded imitation problem can be formalized as an intervention over a specific SCM. Generally speaking, the causal view casts the environments, namely the expert environment generating the offline data and the online environment, as confounders.
|
| 196 |
+
|
| 197 |
+
::: definition
|
| 198 |
+
**Definition 6** (Structural Causal Models). *A Structural Causal Model (SCM) is a tuple ${\mathcal{M}= (U, V, \mathcal{F}, P(U))}$ where $U$ is a set of exogenous variables and $V$ is a set of endogenous variables. $\mathcal{F}$ is a set of functions such that $f_i \in \mathcal{F}$ are functions mapping a set of endogenous variables $Pa_i \subseteq V \backslash \brk[c]*{V_i}$ and a set of exogenous variables $U_i \subseteq U$ to the domain of $V_i$, i.e., $V_i = f_i(Pa_i, U_i)$. Finally, $P(u)$ is a probability distribution over the set of exogenous variables $U$. We assume that the SCM is recursive, i.e., that the causal diagram associated with it is acyclic.*
|
| 199 |
+
:::
|
| 200 |
+
|
| 201 |
+
Every SCM $\mathcal{M}$ is associated with a causal diagram $\mathcal{G}$, as depicted in [7](#fig: causal diagram){reference-type="ref+Label" reference="fig: causal diagram"}. Our framework relies largely on the formulation of stochastic interventions, as proposed in @correa2020calculus. We consider stochastic, conditional (non-atomic) interventions, specified by regime indicators $\sigma_Z$ [@pearl2000causality; @correa2020calculus], as defined formally below.
|
| 202 |
+
|
| 203 |
+
::: definition
|
| 204 |
+
**Definition 7** (Non-Atomic Interventions). *Given a SCM $\mathcal{M}= (U, V, \mathcal{F}, P(U))$ and a subset ${Z \subseteq V}$, an intervention ${\sigma_Z = \brk[c]*{\sigma_{Z_1}, \hdots, \sigma_{Z_n}}}$ defines a new SCM $\mathcal{M}_{\sigma_Z} = (U, V, \mathcal{F}^*, P(U))$ in which the set of functions $\mathcal{F}$ is changed to ${\mathcal{F}^* = \brk[c]*{f^*_i}_{i : V_i \in \brk[c]*{Z_j}_{j=1}^n} \bigcup \brk[c]*{f_i}_{i : V_i \in V \backslash \brk[c]*{Z_j}_{j=1}^n}}$.*
|
| 205 |
+
:::
|
| 206 |
+
|
| 207 |
+
Non-atomic interventions are a generalization of the classic atomic $\textbf{do}(X=x)$ interventions, defined by the SCM $\mathcal{M}_z$ and causal diagram $\mathcal{G}_{\overline{Z}}$ in which all edges incoming into $Z$ are removed. We have that $$\begin{align*}
|
| 208 |
+
P(y | \textbf{do}(Z=z)) = P(y | z ; \sigma_Z = \textbf{do}(Z=z)).
|
| 209 |
+
\end{align*}$$ Atomic interventions replace functions in $\mathcal{F}$ with constant functions, whereas non-atomic interventions use general functions. For notational simplicity, when a single intervention is applied to some $f_i \in \mathcal{F}$, we denote it by $\sigma_Z = \textbf{do}(f_i \gets f_i^*)$, indicating that in the interventional distribution $f_i^*$ is used instead of $f_i$. Next, we define the identifiability of a causal effect under an intervention, as follows.
|
| 210 |
+
|
| 211 |
+
::: definition
|
| 212 |
+
**Definition 8** (Identifiability). *Let $X, Y, Z \subseteq V$ with $Y \cap Z = \emptyset$ in some SCM with causal diagram $\mathcal{G}$. Given an intervention $\sigma_Z = \brk[c]*{\sigma_{Z_1}, \hdots, \sigma_{Z_n}}$, the causal effect $P(y | x, \sigma_Z)$ is said to be identifiable from $V' \subseteq V$ if it can be uniquely computed from $P(V')$ for every assignment $(y,x)$ in every model that induces $\mathcal{G}$ and $P(V')$.*
|
| 213 |
+
:::
|
| 214 |
+
|
| 215 |
+
Following the model definitions of [2](#section: preliminaries){reference-type="ref+Label" reference="section: preliminaries"}, we define the contextual MDP SCM as follows.
|
| 216 |
+
|
| 217 |
+
::: definition
|
| 218 |
+
**Definition 9**. *A contextual MDP SCM is defined by the causal diagram of [7](#fig: causal diagram){reference-type="ref+Label" reference="fig: causal diagram"}. For some horizon $H > 0$, the SCM is defined by the set of endogenous variables ${V = \brk[c]*{s_i}_{i=0}^H \cup \brk[c]*{a_i}_{i=0}^H \cup \brk[c]*{x} \cup \brk[c]*{r_i}_{i=0}^H}$ (denoting the states, actions, context and rewards, respectively), a set of exogenous variables $U$, and functions $\mathcal{F}=\brk[c]*{f_{s_i},f_{a_i},f_{r_i}, f_{\rho_{e}}, f_{\nu_0}}$, where $f_{s_i}$ correspond to the transition function, $f_{a_i}$ the expert policy, $f_{r_i}$ the reward function, $f_{\rho_{e}}$ the context expert distribution, and $f_{\nu_0}$ the initial context-dependent state distribution.*
|
| 219 |
+
:::
|
| 220 |
+
|
| 221 |
+
Relating to the formal definition of our model in [2](#section: preliminaries){reference-type="ref+Label" reference="section: preliminaries"}, with slight abuse of notation, the functions $\brk[c]*{f_s,f_r,f_a, f_{\rho_{e}}, f_{\nu_0}}$ adhere to the following relations $$\begin{align*}
|
| 222 |
+
&P(s_{i+1}=s'|s_i=s,a_i=a,x) = P(f_{s_i}(s,a,x,U)) \\
|
| 223 |
+
&P(r_i=r | s_i=s,a_i=a,x) = \delta(f_{r_i}(s,a,x) = r) \\
|
| 224 |
+
&\pi^*(a_i=a|s_i=s,x) = P(f_{a_i}(a,s,x,U)) \\
|
| 225 |
+
&\rho_e(x) = P(f_{\rho_{e}}(x,U)) \\
|
| 226 |
+
&\nu_0(s_0 | x) = P(f_{\nu_0}(s_0, x, U)),
|
| 227 |
+
\end{align*}$$ where $\delta(\cdot)$ indicates the Dirac delta distribution.
|
| 228 |
+
|
| 229 |
+
We are now ready to define the confounded imitation problem. We define the (non-atomic) intervention $\sigma_x = \textbf{do}\brk*{f_{\rho_{e}} \gets f_{\rho_{o}}}$ which replaces $f_{\rho_{e}}$ with $f_{\rho_{o}}$ in the contextual MDP SCM defined above. The goal of imitation learning is then to identify the quantities $$\begin{align}
|
| 230 |
+
\label{eq: identifiability}
|
| 231 |
+
P(a_i | s_i, x, \sigma_x = \textbf{do}\brk*{f_{\rho_{e}} \gets f_{\rho_{o}}}) \quad, 0 \leq i \leq H-1,
|
| 232 |
+
\end{align}$$ where, importantly, we assume we *only* have access to $P(s_i,a_i)$, $P(s_{i+1}|s_i,a_i,x)$, $P(s_0 | x)$, and ${P(x | \sigma_x = \textbf{do}\brk*{f_{\rho_{e}} \gets f_{\rho_{o}}})}$. Notice that in our setting, $P(s_{i+1}|s_i,a_i,x)$, $P(s_0 | x)$, and ${P(x | \sigma_x = \textbf{do}\brk*{f_{\rho_{e}} \gets f_{\rho_{o}}})}$ correspond to known quantities of the online environment, whereas $P(s_i,a_i)$ corresponds to the (partially observed) offline expert data. We also emphasize that $P(s_{i+1}|s_i,a_i,x)$ is not dependent on the intervention $\sigma_x$. That is, $$\begin{align*}
|
| 233 |
+
P(s_{i+1}|s_i,a_i,x, \sigma_x = \textbf{do}\brk*{f_{\rho_{e}} \gets f_{\rho_{o}}}) = P(s_{i+1}|s_i,a_i,x).
|
| 234 |
+
\end{align*}$$
|
| 235 |
+
|
| 236 |
+
::: remark*
|
| 237 |
+
**Remark 1**. *Our work studies a slightly different version of the identifiability problem in [\[eq: identifiability\]](#eq: identifiability){reference-type="ref+Label" reference="eq: identifiability"}, as we only wish to identify an optimal policy from the set $\Pi^*_\mathcal{M}$, as opposed to the single specific policy $\pi^*$. This requirement can be formalized by defining an extended SCM which includes all optimal policies in $\Pi^*_\mathcal{M}$, with the assumption that only one is observed (corresponding to the expert data).*
|
| 238 |
+
:::
|
| 239 |
+
|
| 240 |
+
:::: algorithm
|
| 241 |
+
::: algorithmic
|
| 242 |
+
Expert data with missing context $\mathcal{D}^*$, ${\lambda,\alpha, B, N, M > 0}$, policy optimization algorithm `ALG-RL` Policy $\pi^0$, global bonus reward network $g^*_\theta$ Generate dataset of rollouts $\mathcal{R}_k \sim d_{\rho_{o}}^{\pi_{k-1}}(s,a)$ Initialize local networks $g^m_{\theta_m} \gets g_\theta, m \in [M]$ Sample weight vector $w_m$ uniformly from $\Delta_n$ Sample batch uniformly from $\mathcal{R}_k$, i.e., $\brk[c]*{s_i, a_i}_{i=1}^B \overset{U}{\sim} \mathcal{R}_k$ Sample batch according to weights $w_m$ from $\mathcal{D}^*$, i.e., $\brk[c]*{s_i^e, a_i^e}_{i=1}^B \overset{w_m}{\sim} \mathcal{D}^*$ Update $g^m_{\theta_m}$ according to $$\begin{align*}
|
| 243 |
+
\nabla_{\theta_m} L_m(\theta_m)
|
| 244 |
+
=
|
| 245 |
+
\frac{1}{B}\sum_{i=1}^B\nabla_{\theta_m} \brk[s]*{
|
| 246 |
+
(1-\alpha)f^*(g^m_{\theta_m}(s_i^e, a_i^e))
|
| 247 |
+
+ \alpha f^*(g^m_{\theta_m}(s_i, a_i))
|
| 248 |
+
- g^m_{\theta_m}(s_i, a_i)}
|
| 249 |
+
\end{align*}$$ $m^* \in \arg\min_{m \in [M]} L_m(\theta_m)$ Update global parameters from the selected local network $g^*_\theta \gets g^{m^*}_{\theta_{m^*}}$ $\pi^k \gets \text{\texttt{ALG-RL}}(r(s,a,x) - \lambda g^*_\theta(s,a))$
|
| 250 |
+
:::
|
| 251 |
+
::::
|
| 252 |
+
|
| 253 |
+
Our experiments were based on the recently proposed assistive-gym [@erickson2020assistive] and recsim [@ie2019recsim] environments. In this section we discuss further implementation details, hyperparameters, context distributions, and the generation of the expert data.
|
| 254 |
+
|
| 255 |
+
::: table*
|
| 256 |
+
**Name** **Value** **Comments**
|
| 257 |
+
----------------- -------------- ---------------------------------------------------
|
| 258 |
+
Batch size $128$
|
| 259 |
+
Learning rate $\num{5e-5}$
|
| 260 |
+
Rollout size $19,200$
|
| 261 |
+
Total timesteps $\num{5e6}$
|
| 262 |
+
Num epochs 50 How many training epochs to do after each rollout
|
| 263 |
+
$\gamma$ $0.95$ Discount factor
|
| 264 |
+
kl coef $0.2$ Initial coefficient for KL divergence
|
| 265 |
+
kl target $0.01$ Target value for KL divergence
|
| 266 |
+
GAE $\lambda$ $1$ The GAE (lambda) parameter
|
| 267 |
+
Num workers $40$
|
| 268 |
+
:::
|
| 269 |
+
|
| 270 |
+
::: table*
|
| 271 |
+
**Name** **Value** **Comments**
|
| 272 |
+
------------------ ------------------- ---------------------------------------------------
|
| 273 |
+
Batch size $128$
|
| 274 |
+
Learning rate $\num{1e-4}$
|
| 275 |
+
Imitation Method $\chi$-divergence
|
| 276 |
+
Num epochs 50 How many training epochs to do after each rollout
|
| 277 |
+
$\alpha$ $0.9$ $D_f$ regularization coefficient
|
| 278 |
+
$M$ $10000$ Budget for CTS optimizer
|
| 279 |
+
:::
|
| 280 |
+
|
| 281 |
+
A complete description of [\[algo: ogd\]](#algo: ogd){reference-type="ref+Label" reference="algo: ogd"} is presented in [\[algo: complete ogd\]](#algo: complete ogd){reference-type="ref+Label" reference="algo: complete ogd"}. Specific hyperparameters used are shown in [\[table:ppo-params,table:cts-params\]](#table:ppo-params,table:cts-params){reference-type="ref+Label" reference="table:ppo-params,table:cts-params"}. We implemented the algorithm using the RLlib framework [@liang2018rllib]. We used PPO [@schulman2017proximal] as our policy-optimization algorithm. All neural networks consisted of two-layer fully connected MLPs with 100 parameters in each layer. We used the same rollout buffer (of size 19200 samples) for both our PPO agent as well as our imitation module, which estimated the augmented reward.
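As a rough sketch (ours; the key names follow RLlib 1.x-era PPO and may differ in other releases, and the environment id is a placeholder to be replaced with the registered assistive-gym or RecSim task), the hyperparameters listed above translate to a configuration along these lines:

```python
from ray.rllib.agents.ppo import PPOTrainer  # RLlib 1.x-style import

# PPO hyperparameters from the table above, expressed as an RLlib config.
config = {
    "env": "CartPole-v1",           # placeholder env id; replace with the registered task
    "num_workers": 40,
    "train_batch_size": 19_200,     # rollout size
    "sgd_minibatch_size": 128,      # batch size
    "num_sgd_iter": 50,             # training epochs per rollout
    "lr": 5e-5,
    "gamma": 0.95,
    "lambda": 1.0,                  # GAE lambda
    "kl_coeff": 0.2,                # initial KL coefficient
    "kl_target": 0.01,              # target KL divergence
    "model": {"fcnet_hiddens": [100, 100]},  # two-layer fully connected network
}

trainer = PPOTrainer(config=config)
for _ in range(260):                # roughly 5e6 timesteps at 19,200 samples per rollout
    trainer.train()
```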
|
| 282 |
+
|
| 283 |
+
Motivated by @kostrikov2019imitation, we regularized the expert demonstrations with samples from $d^\pi$. Particularly, we let $\alpha \in (0,1]$, such that $1-\alpha$ corresponds to the probability of sampling an expert example and $\alpha$ corresponds to the probability of sampling from the replay. This leads to minimizing an augmented version of the $f$-divergence which can be written as $$\begin{align*}
|
| 284 |
+
\min_{g: \mathcal{S}\times \mathcal{A}\mapsto \R}
|
| 285 |
+
\expect*{s,a \sim d_{\rho_{o}}^\pi(s,a | x)}{g(s,a) - \alpha f^*(g(s,a)) }
|
| 286 |
+
-
|
| 287 |
+
(1-\alpha)\expect*{s,a \sim d_{\rho_{e}}^{\pi^*}(s,a)}{f^*(g(s,a))}.
|
| 288 |
+
\end{align*}$$
|
| 289 |
+
|
| 290 |
+
Our imitation module consisted of two networks $g_\theta$ and $h_\theta$ as proposed in @fu2018learning. The "done\" signal was also added to the state for training the imitation module. For training CTS we used the Nevergrad optimization platform [@nevergrad] with a budget of 10000 and one worker. Here, a copied version of the networks $g_\theta$ and $h_\theta$ was used for initialization and then to approximate the minimum $D_f$.
|
| 291 |
+
|
| 292 |
+
For choosing $\lambda$ we used an adaptive strategy which ensured $\lambda$ balanced the RL objective with the imitation objective. Specifically, we used the following tradeoff between reward $r$ and bonus $g$ $$\begin{align*}
|
| 293 |
+
(1-\lambda_{\text{adap}})r(s,a) + \lambda_{\text{adap}} g(s,a),
|
| 294 |
+
\end{align*}$$ where $\lambda_{\text{adap}} = \frac{r_{\text{mean}}}{r_{\text{mean}} + g_{\text{mean}}}$. Here $r_{\text{mean}}$ corresponds to the average reward in the replay buffer and $g_{\text{mean}}$ to the average bonus in the replay buffer. By normalizing with these two averages, we maintained a similar scale between the two terms and could effectively use the expert data in all the evaluated environments without tuning $\lambda$.
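A minimal sketch of this adaptive weighting (our own rendering of the scheme above; the small epsilon guard and the toy replay statistics are our additions) is:

```python
import numpy as np

def mix_reward(r, g, replay_rewards, replay_bonuses, eps=1e-8):
    """Combine environment reward r and imitation bonus g using lambda_adap."""
    r_mean = float(np.mean(replay_rewards))
    g_mean = float(np.mean(replay_bonuses))
    lam = r_mean / (r_mean + g_mean + eps)   # lambda_adap from replay-buffer means (eps avoids 0/0)
    return (1.0 - lam) * r + lam * g

replay_rewards = np.array([0.0, 1.0, 0.0, 0.5])   # toy replay-buffer statistics
replay_bonuses = np.array([0.2, 0.1, 0.3, 0.4])
print(mix_reward(r=1.0, g=0.25, replay_rewards=replay_rewards, replay_bonuses=replay_bonuses))
```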
|
| 295 |
+
|
| 296 |
+
For each environment we used a varying context distribution in the expert data, with increasing distance to that of the online environment. The context distribution for the RecSim environment is formally described in [5](#section: experiments){reference-type="ref+Label" reference="section: experiments"}. For the assistive-gym environment the context was defined by the following features: gender, mass, radius, height, patient impairment, and patient preferences. The patient's mass, radius, and height distributions were dependent on gender. The patient's impairment was given by either limited movement, weakness, or tremor (with sporadic movement). Finally, the patient's preferences were affected by the velocity and pressure of touch forces applied by the robot. We used default average values that were provided with the simulator. Particularly, we used the following distributions for each feature $$\begin{align*}
|
| 297 |
+
&\text{gender} \sim \text{Bern}(p_{\text{male}}) \\
|
| 298 |
+
&\text{mass}(\text{gender})
|
| 299 |
+
\sim \mathcal{N}(\mu_{\text{mass}}(\text{gender}) , \sigma^2_{\text{mass}}) \\
|
| 300 |
+
& \text{radius}(\text{gender})
|
| 301 |
+
\sim \mathcal{N}(\mu_{\text{radius}}(\text{gender}) , \sigma^2_{\text{radius}}) \\
|
| 302 |
+
& \text{height}(\text{gender})
|
| 303 |
+
\sim \mathcal{N}(\mu_{\text{height}}(\text{gender}) , \sigma^2_{\text{height}}) \\
|
| 304 |
+
& \text{velocity weight}
|
| 305 |
+
\sim \text{Unif}([\ell_{\text{vel}}, u_{\text{vel}}]) \\
|
| 306 |
+
& \text{force nontarget weight}
|
| 307 |
+
\sim \text{Unif}([\ell_{\text{target}}, u_{\text{target}}]) \\
|
| 308 |
+
& \text{high forces}
|
| 309 |
+
\sim \text{Unif}([\ell_{\text{high forces}}, u_{\text{high forces}}]) \\
|
| 310 |
+
& \text{food hit weight}
|
| 311 |
+
\sim \text{Unif}([\ell_{\text{hit}}, u_{\text{hit}}]) \\
|
| 312 |
+
& \text{food velocity weight}
|
| 313 |
+
\sim \text{Unif}([\ell_{\text{food vel}}, u_{\text{food vel}}]) \\
|
| 314 |
+
& \text{high pressures weight}
|
| 315 |
+
\sim \text{Unif}([\ell_{\text{high pressure}}, u_{\text{high pressure}}]) \\
|
| 316 |
+
& \text{impairment} \sim \text{Multinomial}(p_{\text{none}}, p_{\text{limits}}, p_{\text{weakness}}, p_{\text{tremor}}).
|
| 317 |
+
\end{align*}$$ The values for each distribution are provided in [\[table: assistive params\]](#table: assistive params){reference-type="ref+Label" reference="table: assistive params"}. For setting covariate shift, we used a set of distributions that were shifted w.r.t. the default context distribution. We then sampled a shifted distribution w.p. $\beta$ and the default distribution w.p. $1-\beta$. That is, when $\beta = 1$, the user sampled a context only from the shifted distribution. [\[table: shifted assistive params\]](#table: shifted assistive params){reference-type="ref+Label" reference="table: shifted assistive params"} shows an example of one of the shifted distributions that were used.
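The $\beta$-mixture sampling of contexts can be sketched as follows (our own illustration; only a few of the context features are shown, with the default and shifted values taken from the tables below):

```python
import numpy as np

rng = np.random.default_rng(0)

DEFAULT = dict(p_male=0.3, mu_mass=dict(male=78.4, female=62.5), sigma2_mass=10.0,
               p_impairment=[0.1, 0.4, 0.3, 0.2])     # none, limits, weakness, tremor
SHIFTED = dict(p_male=0.8, mu_mass=dict(male=88.4, female=72.5), sigma2_mass=20.0,
               p_impairment=[0.1, 0.1, 0.1, 0.7])

def sample_context(beta):
    """Draw one patient context; the shifted parameter set is used w.p. beta."""
    params = SHIFTED if rng.random() < beta else DEFAULT
    gender = "male" if rng.random() < params["p_male"] else "female"
    mass = rng.normal(params["mu_mass"][gender], np.sqrt(params["sigma2_mass"]))
    impairment = rng.choice(["none", "limits", "weakness", "tremor"], p=params["p_impairment"])
    return dict(gender=gender, mass=mass, impairment=impairment)

print([sample_context(beta=1.0) for _ in range(2)])   # contexts drawn only from the shifted set
```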
|
| 318 |
+
|
| 319 |
+
::: table*
|
| 320 |
+
**Name** **Value** **Name** **Value** **Name** **Value**
|
| 321 |
+
-------------------------------------- ----------- ----------------------------- ----------- ------------------------------- -----------
|
| 322 |
+
$p_{\text{male}}$ $0.3$ $\ell_{\text{vel}}$ $0.225$ $\ell_{\text{high pressure}}$ $0.009$
|
| 323 |
+
$\mu_{\text{mass}}(\text{male})$ $78.4$ $u_{\text{vel}}$ $0.275$ $u_{\text{high pressure}}$ $0.011$
|
| 324 |
+
$\mu_{\text{mass}}(\text{female})$ $62.5$ $\ell_{\text{target}}$ $0.009$ $p_{\text{none}}$ $0.1$
|
| 325 |
+
$\sigma^2_{\text{mass}}$ $10$ $u_{\text{target}}$ $0.011$ $p_{\text{limits}}$ $0.4$
|
| 326 |
+
$\mu_{\text{radius}}(\text{male})$ $1$ $\ell_{\text{high forces}}$ $0.045$ $p_{\text{weakness}}$ $0.3$
|
| 327 |
+
$\mu_{\text{radius}}(\text{female})$ $1$ $u_{\text{high forces}}$ $0.055$ $p_{\text{tremor}}$ $0.2$
|
| 328 |
+
$\sigma^2_{\text{radius}}$ $0.1$ $\ell_{\text{hit}}$ $0.9$
|
| 329 |
+
$\mu_{\text{height}}(\text{male})$ $1$ $u_{\text{hit}}$ $1.1$
|
| 330 |
+
$\mu_{\text{height}}(\text{female})$ $1$ $\ell_{\text{food vel}}$ $0.9$
|
| 331 |
+
$\sigma^2_{\text{height}}$ $0.1$ $u_{\text{food vel}}$ $1.1$
|
| 332 |
+
:::
|
| 333 |
+
|
| 334 |
+
::: table*
|
| 335 |
+
**Name** **Value** **Name** **Value** **Name** **Value**
|
| 336 |
+
-------------------------------------- ----------- ----------------------------- ----------- ------------------------------- -----------
|
| 337 |
+
$p_{\text{male}}$ $0.8$ $\ell_{\text{vel}}$ $0.225$ $\ell_{\text{high pressure}}$ $0.009$
|
| 338 |
+
$\mu_{\text{mass}}(\text{male})$ $88.4$ $u_{\text{vel}}$ $0.275$ $u_{\text{high pressure}}$ $0.111$
|
| 339 |
+
$\mu_{\text{mass}}(\text{female})$ $72.5$ $\ell_{\text{target}}$ $0.007$ $p_{\text{none}}$ $0.1$
|
| 340 |
+
$\sigma^2_{\text{mass}}$ $20$ $u_{\text{target}}$ $0.016$ $p_{\text{limits}}$ $0.1$
|
| 341 |
+
$\mu_{\text{radius}}(\text{male})$ $0.9$ $\ell_{\text{high forces}}$ $0.035$ $p_{\text{weakness}}$ $0.1$
|
| 342 |
+
$\mu_{\text{radius}}(\text{female})$ $0.9$ $u_{\text{high forces}}$ $0.06$ $p_{\text{tremor}}$ $0.7$
|
| 343 |
+
$\sigma^2_{\text{radius}}$ $0.2$ $\ell_{\text{hit}}$ $0.4$
|
| 344 |
+
$\mu_{\text{height}}(\text{male})$ $1.1$ $u_{\text{hit}}$ $2.1$
|
| 345 |
+
$\mu_{\text{height}}(\text{female})$ $1.1$ $\ell_{\text{food vel}}$ $0.4$
|
| 346 |
+
$\sigma^2_{\text{height}}$ $0.2$ $u_{\text{food vel}}$ $2.1$
|
| 347 |
+
:::
|
| 348 |
+
|
| 349 |
+
For the assistive-gym experiments we used a dense reward function for generating the expert data and a sparse one for our experiments using the expert data. Specifically, the dense reward function used the environment's default reward function, defined by $$\begin{align*}
|
| 350 |
+
w_1 \cdot \text{distance to goal} + w_2 \cdot \text{action} + w_3 \cdot \text{task specific reward} + w_4 \cdot \text{preference score},
|
| 351 |
+
\end{align*}$$ where the preferences were weighted according to the context features. Specific weights are provided in the implementation of assistive-gym [@erickson2020assistive]. The sparse reward function did not use the distance to goal (i.e., $w_1 = 0$).
|
| 352 |
+
|
| 353 |
+
We begin by proving two auxiliary lemmas.
|
| 354 |
+
|
| 355 |
+
::: {#lemma: Pi0 equivalence .lemma}
|
| 356 |
+
**Lemma 1**. *Let $\pi_2 \in \Upsilon_{\pi_1}$. Then, $\Upsilon_{\pi_1} = \Upsilon_{\pi_2}$.*
|
| 357 |
+
:::
|
| 358 |
+
|
| 359 |
+
::: proof
|
| 360 |
+
*Proof.* We show that $\Upsilon_{\pi_1} \subseteq \Upsilon_{\pi_2}$ and $\Upsilon_{\pi_2} \subseteq \Upsilon_{\pi_1}$.
|
| 361 |
+
|
| 362 |
+
Let $\pi \in \Upsilon_{\pi_1}$, then $d^\pi(s,a) = d^{\pi_1}(s,a).$ By our assumption, $\pi_2 \in \Upsilon_{\pi_1}$, then $d^{\pi_2}(s,a) = d^{\pi_1}(s,a).$ Hence, ${d^\pi(s,a) = d^{\pi_2}(s,a).}$ That is, $\pi \in \Upsilon_{\pi_2}$. This proves $\Upsilon_{\pi_1} \subseteq \Upsilon_{\pi_2}$.
|
| 363 |
+
|
| 364 |
+
Similarly, let $\pi \in \Upsilon_{\pi_2}$, then $d^\pi(s,a) = d^{\pi_2}(s,a).$ By our assumption, $\pi_2 \in \Upsilon_{\pi_1}$, then $d^{\pi_2}(s,a) = d^{\pi_1}(s,a).$ Hence, ${d^\pi(s,a) = d^{\pi_1}(s,a)}.$ That is, $\pi \in \Upsilon_{\pi_1}$. This proves $\Upsilon_{\pi_2} \subseteq \Upsilon_{\pi_1}$, completing the proof. ◻
|
| 365 |
+
:::
|
| 366 |
+
|
| 367 |
+
::: {#lemma: unique optimal policy .lemma}
|
| 368 |
+
**Lemma 2**. *Let $\pi_0$ be a [deterministic]{.underline} policy and let $\mathcal{M}_0 = (\mathcal{S}, \mathcal{A}, \mathcal{X}, P, r_0, \gamma)$ such that ${r_0(s,a,x) = \mathbf{1}\brk[c]*{a = \pi_0(s,x)}}$. Then $\pi_0$ is the [unique, optimal]{.underline} policy in $\mathcal{M}_0$.*
|
| 369 |
+
:::
|
| 370 |
+
|
| 371 |
+
::: proof
|
| 372 |
+
*Proof.* By definition of $\pi_0$ and $r_0$, $$\begin{align*}
|
| 373 |
+
r_0(s,\pi_0(s,x),x) = 1, \forall s \in \mathcal{S}, x \in \mathcal{X}.
|
| 374 |
+
\end{align*}$$ In particular, $\expect*{\pi_0}{r_0(s_t,a_t,x)} = 1$. Then $$\begin{align*}
|
| 375 |
+
V^*_{\mathcal{M}_0} \leq (1-\gamma)\sum_{t=0}^\infty \gamma^t = \expect*{\pi_0}{(1-\gamma)\sum_{t=0}^\infty \gamma^t r_0(s_t,a_t,x)} = V^{\pi_0}_{\mathcal{M}_0}.
|
| 376 |
+
\end{align*}$$ This proves $\pi_0$ is an optimal policy. To prove uniqueness, assume by contradiction there exists an optimal policy $\pi_1 \neq \pi_0$. Then, $$\begin{align*}
|
| 377 |
+
V^{\pi_1}
|
| 378 |
+
=
|
| 379 |
+
\expect*{s,a,x \sim d^{\pi_1}(s,a,x)}{\mathbf{1}\brk[c]*{a = \pi_0(s,x)}}
|
| 380 |
+
=
|
| 381 |
+
\expect*{s,x \sim d^{\pi_1}(s,x)}{\expect*{a \sim \pi_1(\cdot | s, x)}{\mathbf{1}\brk[c]*{a = \pi_0(s,x)}}}
|
| 382 |
+
<
|
| 383 |
+
1
|
| 384 |
+
=
|
| 385 |
+
V^{\pi_0}_{\mathcal{M}_0}.
|
| 386 |
+
\end{align*}$$ in contradiction to the optimality of $\pi_1$. Hence, $\pi_0$ is the unique optimal policy. ◻
|
| 387 |
+
:::
|
| 388 |
+
|
| 389 |
+
We are now ready to prove [\[thm: ambiguity uniqueness\]](#thm: ambiguity uniqueness){reference-type="ref+Label" reference="thm: ambiguity uniqueness"}.
|
| 390 |
+
|
| 391 |
+
::: proof
|
| 392 |
+
*Proof.* Let $\pi^* \in \Pi^*_{\mathcal{M}}$ and let $\pi_0 \in \Upsilon_{\pi^*}$. By [1](#lemma: Pi0 equivalence){reference-type="ref+Label" reference="lemma: Pi0 equivalence"}, as $\pi_0 \in \Upsilon_{\pi^*}$, it holds that $\Upsilon_{\pi^*} = \Upsilon_{\pi_0}$. Next, choosing $r_0(s,a,x) = \mathbf{1}\brk[c]*{a = \pi_0(s,x)}$, by [2](#lemma: unique optimal policy){reference-type="ref+Label" reference="lemma: unique optimal policy"} we get that $\pi_0$ is an optimal policy in $\mathcal{M}_0$. This proves $\pi_0 \in \Pi^*_{\mathcal{M}_0}$. Finally, by [2](#lemma: unique optimal policy){reference-type="ref+Label" reference="lemma: unique optimal policy"}, $\Pi^*_{\mathcal{M}_0} = \brk[c]*{\pi_0}$, proving $\pi^* \notin \Pi^*_{\mathcal{M}_0}$ if and only if $\pi^* \neq \pi_0$. ◻
|
| 393 |
+
:::
|
| 394 |
+
|
| 395 |
+
::: proof
|
| 396 |
+
*Proof.* Let $\tilde{\pi}$ as defined. Then by linearity of expectation $$\begin{align*}
|
| 397 |
+
V^{\tilde{\pi}}_{\mathcal{M}}
|
| 398 |
+
=
|
| 399 |
+
\expect*{s,a,x \sim d^{\tilde{\pi}}}{r(s,a,x)}
|
| 400 |
+
=
|
| 401 |
+
\frac{1}{\left|\Upsilon_{\pi^*}\right|}
|
| 402 |
+
\sum_{\pi \in \Upsilon_{\pi^*}}
|
| 403 |
+
\expect*{s,a,x \sim d^{\pi}}{r(s,a,x)}
|
| 404 |
+
=
|
| 405 |
+
\frac{1}{\left|\Upsilon_{\pi^*}\right|}
|
| 406 |
+
\sum_{\pi \in \Upsilon_{\pi^*}}
|
| 407 |
+
V^{\pi}_{\mathcal{M}}.
|
| 408 |
+
\end{align*}$$ Denote $B^* = \Pi^*_\mathcal{M}\cap \Upsilon_{\pi^*}$, then $$\begin{align*}
|
| 409 |
+
V^{\tilde{\pi}}_{\mathcal{M}}
|
| 410 |
+
&=
|
| 411 |
+
\frac{1}{\left|\Upsilon_{\pi^*}\right|} \sum_{\pi \in B^*} V^{\pi}_{\mathcal{M}}
|
| 412 |
+
+
|
| 413 |
+
\frac{1}{\left|\Upsilon_{\pi^*}\right|} \sum_{\Upsilon_{\pi^*} \backslash B^*} V^{\pi}_{\mathcal{M}} \\
|
| 414 |
+
&=
|
| 415 |
+
\frac{\left|B^*\right|}{\left|\Upsilon_{\pi^*}\right|}
|
| 416 |
+
V^*_\mathcal{M}
|
| 417 |
+
+
|
| 418 |
+
\frac{1}{\left|\Upsilon_{\pi^*}\right|} \sum_{\Upsilon_{\pi^*} \backslash B^*} V^{\pi}_{\mathcal{M}} \\
|
| 419 |
+
&\geq
|
| 420 |
+
\frac{\left|B^*\right|}{\left|\Upsilon_{\pi^*}\right|}
|
| 421 |
+
V^*_\mathcal{M}
|
| 422 |
+
+
|
| 423 |
+
\frac{\left|\Upsilon_{\pi^*} \backslash B^*\right|}{\left|\Upsilon_{\pi^*}\right|} \min_{\pi \in \Upsilon_{\pi^*} \backslash B^*} V^{\pi}_{\mathcal{M}} \\
|
| 424 |
+
&\geq
|
| 425 |
+
\frac{\left|B^*\right|}{\left|\Upsilon_{\pi^*}\right|}
|
| 426 |
+
V^*_\mathcal{M}
|
| 427 |
+
+
|
| 428 |
+
\frac{\left|\Upsilon_{\pi^*} \backslash B^*\right|}{\left|\Upsilon_{\pi^*}\right|} \min_{\pi \in \Upsilon_{\pi^*}} V^{\pi}_{\mathcal{M}},
|
| 429 |
+
\end{align*}$$ completing the proof. ◻
|
| 430 |
+
:::
|
| 431 |
+
|
| 432 |
+
::: proof
|
| 433 |
+
*Proof.* We first sketch the proof for the special case ${\mathcal{X}= \{x_0, x_1\}}$, $\mathcal{A}= \{a_0, a_1\}$ and a singleton state space $\mathcal{S}= \brk[c]*{s_0}$. The general proof follows similarly and is given below.
|
| 434 |
+
|
| 435 |
+
By letting $\pi_1, \pi_2$ be the deterministic policies which choose opposite actions at opposite contexts, i.e., $\pi_1(x_i) = a_i, \pi_2(x_i) = a_{1-i}$, we can choose $\rho_{e}(x) = d^*(\pi_1(x))$ and $\widetilde{\rho}_{e}(x) = d^*(\pi_2(x))$ which yield $$\begin{align*}
|
| 436 |
+
d_{\rho_{e}}^{\pi_1}(a)
|
| 437 |
+
&=
|
| 438 |
+
\sum_{i=0}^1 \rho_{e}(x_i)\mathbf{1}\brk[c]*{a = \pi_1(x_i)} \\
|
| 439 |
+
&=
|
| 440 |
+
\sum_{i=0}^1 d^*(\pi_1(x_i))\mathbf{1}\brk[c]*{a = \pi_1(x_i)} \\
|
| 441 |
+
&=
|
| 442 |
+
\sum_{i=0}^1 d^*(a_i)\mathbf{1}\brk[c]*{a_i = a}
|
| 443 |
+
:=
|
| 444 |
+
d^*(a).
|
| 445 |
+
\end{align*}$$ Similarly, $d_{\widetilde{\rho}_{e}}^{\pi_2}(a) = d^*(a)$.
|
| 446 |
+
|
| 447 |
+
For the second part of the proof choose $r_1(a, x) = \mathbf{1}\brk[c]*{x=x_i, a=a_i}$ and $r_2(a, x) = \mathbf{1}\brk[c]*{x=x_i, a = a_{1-i}}$. Notice that $\pi_i$ is optimal for $r_i$ under any distribution of contexts, yet $\pi_i$ achieves zero reward for $r_{1-i}$.
|
| 448 |
+
|
| 449 |
+
We now provide a complete proof for the general case.
|
| 450 |
+
|
| 451 |
+
Let $\rho_{o}$ and $d^*(a)$ be given. Without loss of generality, let $\mathcal{X}= \brk[c]*{x_0, \hdots, x_m}$, $\mathcal{A}= \brk[c]*{a_0, \hdots, a_k}$ with $m \geq k$, and denote ${\mathcal{X}_k = \brk[c]*{x_1, \hdots, x_k} \subseteq \mathcal{X}}$. By definition there exists an injective function from $\mathcal{A}$ into $\mathcal{X}$.
|
| 452 |
+
|
| 453 |
+
Define $$\begin{align*}
|
| 454 |
+
f(x)
|
| 455 |
+
&=
|
| 456 |
+
\begin{cases}
|
| 457 |
+
a_i &, x = x_i, i = 0, \hdots, k \\
|
| 458 |
+
a_0 &, \text{o.w.}
|
| 459 |
+
\end{cases} \\
|
| 460 |
+
g(x)
|
| 461 |
+
&=
|
| 462 |
+
\begin{cases}
|
| 463 |
+
a_{i+1\ (\mathrm{mod}\ k)} &, x = x_i, i = 0, \hdots, k \\
|
| 464 |
+
a_0 &, \text{o.w.}
|
| 465 |
+
\end{cases}
|
| 466 |
+
\end{align*}$$
|
| 467 |
+
|
| 468 |
+
Then we can select $\pi_1, \pi_2, \rho_{e}, \widetilde{\rho}_{e}$ as follows $$\begin{align*}
|
| 469 |
+
\pi_1(a | x) &= \mathbf{1}\brk[c]*{a = f(x), x \in \mathcal{X}_k} + \frac{1}{k+1}\mathbf{1}\brk[c]*{x \notin \mathcal{X}_k}\\
|
| 470 |
+
\pi_2(a | x) &= \mathbf{1}\brk[c]*{a = g(x), x \in \mathcal{X}_k} + \frac{1}{k+1}\mathbf{1}\brk[c]*{x \notin \mathcal{X}_k},
|
| 471 |
+
\end{align*}$$ and $$\begin{align*}
|
| 472 |
+
\rho_{e}(x) &= d^*(f(x))\mathbf{1}\brk[c]*{x \in \mathcal{X}_k}, \\
|
| 473 |
+
\widetilde{\rho}_{e}(x) &= d^*(g(x))\mathbf{1}\brk[c]*{x \in \mathcal{X}_k}.
|
| 474 |
+
\end{align*}$$ We get that $$\begin{align*}
|
| 475 |
+
d_{\rho_{e}}^{\pi_1}(a)
|
| 476 |
+
&=
|
| 477 |
+
\sum_{i=1}^m \rho_{e}(x_i)\pi_1(a|x_i) \\
|
| 478 |
+
&=
|
| 479 |
+
\sum_{i=1}^k d^*(f(x_i))\mathbf{1}\brk[c]*{a = f(x_i)} \\
|
| 480 |
+
&=
|
| 481 |
+
\sum_{i=1}^k d^*(a_i)\mathbf{1}\brk[c]*{a_i = a}
|
| 482 |
+
=
|
| 483 |
+
d^*(a).
|
| 484 |
+
\end{align*}$$ Similarly, $$\begin{align*}
|
| 485 |
+
d_{\widetilde{\rho}_{e}}^{\pi_2}(a)
|
| 486 |
+
&=
|
| 487 |
+
\sum_{i=1}^k d^*(g(x_i))\mathbf{1}\brk[c]*{a = g(x_i)} \\
|
| 488 |
+
&=
|
| 489 |
+
\sum_{i=1}^k d^*(a_{i+1\ (\mathrm{mod}\ k)})\mathbf{1}\brk[c]*{a_{i+1\ (\mathrm{mod}\ k)} = a} \\
|
| 490 |
+
&=
|
| 491 |
+
\sum_{i=1}^k d^*(a_i)\mathbf{1}\brk[c]*{a_i = a}
|
| 492 |
+
=
|
| 493 |
+
d^*(a).
|
| 494 |
+
\end{align*}$$ This proves the first part of the theorem. For the other parts, choose $r_1, r_2$ as follows $$\begin{align*}
|
| 495 |
+
r_1(a, x) &= \mathbf{1}\brk[c]*{x=x_i, a=a_i, 0 \leq i \leq k} \\
|
| 496 |
+
r_2(a, x) &= \mathbf{1}\brk[c]*{x=x_i, a=a_{i+1\ (\mathrm{mod}\ k)}, 0 \leq i \leq k}.
|
| 497 |
+
\end{align*}$$ Then, by definition, for any $P(x)$ such that $\text{Supp}(P) \cap \mathcal{X}_k \neq \emptyset$, $$\begin{align*}
|
| 498 |
+
\expect*{x \sim P(x), a \sim \pi_1(\cdot | x)}{r_1(a, x)}
|
| 499 |
+
=
|
| 500 |
+
P(x \in \mathcal{X}_k)
|
| 501 |
+
=
|
| 502 |
+
\max_{\pi \in \Pi}
|
| 503 |
+
\expect*{x \sim P(x), a \sim \pi(\cdot | x)}{r_1(a, x)}, \\
|
| 504 |
+
\expect*{x \sim P(x), a \sim \pi_1(\cdot | x)}{r_2(a, x)}
|
| 505 |
+
=
|
| 506 |
+
0
|
| 507 |
+
=
|
| 508 |
+
\min_{\pi \in \Pi}
|
| 509 |
+
\expect*{x \sim P(x), a \sim \pi(\cdot | x)}{r_2(a, x)}.
|
| 510 |
+
\end{align*}$$ And similarly, $$\begin{align*}
|
| 511 |
+
\expect*{x \sim P(x), a \sim \pi_2(\cdot | x)}{r_1(a, x)}
|
| 512 |
+
=
|
| 513 |
+
0
|
| 514 |
+
=
|
| 515 |
+
\min_{\pi \in \Pi}
|
| 516 |
+
\expect*{x \sim P(x), a \sim \pi(\cdot | x)}{r_1(a, x)}, \\
|
| 517 |
+
\expect*{x \sim P(x), a \sim \pi_2(\cdot | x)}{r_2(a, x)}
|
| 518 |
+
=
|
| 519 |
+
P(x \in \mathcal{X}_k)
|
| 520 |
+
=
|
| 521 |
+
\max_{\pi \in \Pi}
|
| 522 |
+
\expect*{x \sim P(x), a \sim \pi(\cdot | x)}{r_2(a, x)}.
|
| 523 |
+
\end{align*}$$ The condition on the support holds for $\rho_{e}, \widetilde{\rho}_{e}$ by definition. If $\text{Supp}(\rho_{o}) \cap \mathcal{X}_k = \emptyset$, then the result holds trivially as $\expect*{x \sim \rho_{o}(x), a \sim \pi(\cdot | x)}{r_1(a, x)} = \expect*{x \sim \rho_{o}(x), a \sim \pi(\cdot | x)}{r_2(a, x)} = 0$ for all $\pi \in \Pi$. This completes the proof. ◻
|
| 524 |
+
:::
|
| 525 |
+
|
| 526 |
+
::: {#lemma: optimal policy for every x .lemma}
|
| 527 |
+
**Lemma 3**. *Assume $\text{Supp}(\rho_{o}) \subseteq \text{Supp}(\rho_{e})$. Then $$\begin{align*}
|
| 528 |
+
\arg\max_\pi \expect*{x \sim \rho_{e}(x), s,a \sim d^{\pi}(s,a|x)}{r(s,a,x)}
|
| 529 |
+
\subseteq
|
| 530 |
+
\arg\max_\pi \expect*{x \sim \rho_{o}(x), s,a \sim d^{\pi}(s,a|x)}{r(s,a,x)}
|
| 531 |
+
\end{align*}$$*
|
| 532 |
+
:::
|
| 533 |
+
|
| 534 |
+
::: proof
|
| 535 |
+
*Proof.* For clarity we denote $$\begin{align*}
|
| 536 |
+
&\Pi^*_{\rho_{e}} = \arg\max_\pi \expect*{x \sim \rho_{e}(x), s,a \sim d^{\pi}(s,a|x)}{r(s,a,x)} \\
|
| 537 |
+
&\Pi^*_{\rho_{o}} = \arg\max_\pi \expect*{x \sim \rho_{o}(x), s,a \sim d^{\pi}(s,a|x)}{r(s,a,x)} \\
|
| 538 |
+
&\Pi^*_{\text{Supp}(\rho_{e})} = \bigtimes_{x \in \text{Supp}(\rho_{e})} \arg\max_\pi \expect*{s,a \sim d^{\pi}(s,a|x)}{r(s,a,x)}.
|
| 539 |
+
\end{align*}$$ To prove the lemma, we will show $\Pi^*_{\rho_{e}} = \Pi^*_{\text{Supp}(\rho_{e})} \subseteq \Pi^*_{\rho_{o}}$.
|
| 540 |
+
|
| 541 |
+
We begin by proving $\Pi^*_{\rho_{e}} = \Pi^*_{\text{Supp}(\rho_{e})}$. Indeed, let $\pi^* \in \Pi^*_{\text{Supp}(\rho_{e})}$. Then, for any $x \in \text{Supp}(\rho_{e})$ $$\begin{align*}
|
| 542 |
+
\expect*{s,a \sim d^{\pi^*}(s,a|x)}{r(s,a,x)}
|
| 543 |
+
=
|
| 544 |
+
\max_\pi \expect*{s,a \sim d^{\pi}(s,a|x)}{r(s,a,x)}.
|
| 545 |
+
\end{align*}$$ In particular, $$\begin{align*}
|
| 546 |
+
\expect*{x \sim \rho_{e}(x), s,a \sim d^{\pi^*}(s,a|x)}{r(s,a,x)}
|
| 547 |
+
=
|
| 548 |
+
\expect*{x \sim \rho_{e}(x)}{\max_\pi \expect*{s,a \sim d^{\pi}(s,a|x)}{r(s,a,x)}}
|
| 549 |
+
\geq
|
| 550 |
+
\max_\pi \expect*{x \sim \rho_{e}(x), s,a \sim d^{\pi}(s,a|x)}{r(s,a,x)},
|
| 551 |
+
\end{align*}$$ where we used Jensen's inequality. This proves $\Pi^*_{\text{Supp}(\rho_{e})} \subseteq \Pi^*_{\rho_{e}}$.
|
| 552 |
+
|
| 553 |
+
To see the other direction, let $\pi_e \in \Pi^*_{\rho_{e}}$ and assume by contradiction that $\pi_e \notin \Pi^*_{\text{Supp}(\rho_{e})}$. Then, there exists $\tilde{x} \in \text{Supp}(\rho_{e})$ such that $$\begin{align*}
|
| 554 |
+
\expect*{s,a \sim d^{\pi_e}(s,a|\tilde{x})}{r(s,a,\tilde{x})}
|
| 555 |
+
<
|
| 556 |
+
\max_\pi \expect*{s,a \sim d^{\pi}(s,a|\tilde{x})}{r(s,a,\tilde{x})}.
|
| 557 |
+
\end{align*}$$ Define $$\begin{align*}
|
| 558 |
+
\tilde{\pi}(\cdot | s, x)
|
| 559 |
+
=
|
| 560 |
+
\mathbf{1}\brk[c]*{x = \tilde{x}}\pi_{\tilde{x}}(\cdot | s, \tilde{x})
|
| 561 |
+
+
|
| 562 |
+
\mathbf{1}\brk[c]*{x \neq \tilde{x}}\pi_e(\cdot | s, x),
|
| 563 |
+
\end{align*}$$ where $\pi_{\tilde{x}} \in \arg\max_\pi \expect*{s,a \sim d^{\pi}(s,a|\tilde{x})}{r(s,a,\tilde{x})}.$ Then, $$\begin{align*}
|
| 564 |
+
v(\pi_e)
|
| 565 |
+
&=
|
| 566 |
+
P(x = \tilde{x})
|
| 567 |
+
\expect*{s,a \sim d^{\pi_e}(s,a|\tilde{x})}{r(s,a,\tilde{x})}
|
| 568 |
+
+
|
| 569 |
+
\sum_{x \in \text{Supp}(\rho_{e}) \backslash \{\tilde{x}\}}
|
| 570 |
+
P(x)
|
| 571 |
+
\expect*{s,a \sim d^{\pi_e}(s,a|x)}{r(s,a,x)} \\
|
| 572 |
+
&<
|
| 573 |
+
P(x = \tilde{x})
|
| 574 |
+
\expect*{s,a \sim d^{\tilde{\pi}}(s,a|\tilde{x})}{r(s,a,\tilde{x})}
|
| 575 |
+
+
|
| 576 |
+
\sum_{x \in \text{Supp}(\rho_{e}) \backslash \{\tilde{x}\}}
|
| 577 |
+
P(x)
|
| 578 |
+
\expect*{s,a \sim d^{\pi_e}(s,a|x)}{r(s,a,x)}
|
| 579 |
+
=
|
| 580 |
+
v(\tilde{\pi}),
|
| 581 |
+
\end{align*}$$ where the strict inequality holds because $\tilde{x} \in \text{Supp}(\rho_{e})$ implies $P(x = \tilde{x}) > 0$, in contradiction to $\pi_e \in \Pi^*_{\rho_{e}}$. This proves $\Pi^*_{\rho_{e}} \subseteq \Pi^*_{\text{Supp}(\rho_{e})}$. We have thus shown that $\Pi^*_{\rho_{e}} = \Pi^*_{\text{Supp}(\rho_{e})}$.
|
| 582 |
+
|
| 583 |
+
Finally, it is left to show that $\Pi^*_{\text{Supp}(\rho_{e})} \subseteq \Pi^*_{\rho_{o}}$. Similar to before, let $\pi^* \in \Pi^*_{\text{Supp}(\rho_{e})}$. Then, since $\text{Supp}(\rho_{o}) \subseteq \text{Supp}(\rho_{e})$, $\pi^*$ maximizes the per-context value for every $x \in \text{Supp}(\rho_{o})$, and by Jensen's inequality $$\begin{align*}
|
| 584 |
+
\expect*{x \sim \rho_{o}(x), s,a \sim d^{\pi^*}(s,a|x)}{r(s,a,x)}
|
| 585 |
+
=
|
| 586 |
+
\expect*{x \sim \rho_{o}(x)}{\max_\pi \expect*{s,a \sim d^{\pi}(s,a|x)}{r(s,a,x)}}
|
| 587 |
+
\geq
|
| 588 |
+
\max_\pi \expect*{x \sim \rho_{o}(x), s,a \sim d^{\pi}(s,a|x)}{r(s,a,x)}.
|
| 589 |
+
\end{align*}$$ This completes the proof. ◻
|
| 590 |
+
:::
|
| 591 |
+
|
| 592 |
+
::: proof
|
| 593 |
+
*Proof.* Let $\pi_0 \in \Upsilon_{\pi^*}$; we will show $\pi_0 \in \Pi^*_\mathcal{M}$. Since $r(s,a,x) = r(s,a,x')$ for all $x, x' \in \mathcal{X}$, we denote $r(s,a) = r(s,a,x)$. By definition of $\Upsilon_{\pi^*}$ we have that $$\begin{align*}
|
| 594 |
+
d_{\rho_{o}}^{\pi_0}(s,a) = d_{\rho_{e}}^{\pi^*}(s,a)
|
| 595 |
+
\end{align*}$$ Then, $$\begin{align*}
|
| 596 |
+
v(\pi_0)
|
| 597 |
+
&=
|
| 598 |
+
\expect*{x \sim \rho_{o}(x), s,a \sim d^{\pi_0}(s,a|x)}{r(s,a)} \\
|
| 599 |
+
&=
|
| 600 |
+
\expect*{x \sim \rho_{o}(x)}{\sum_{s \in \mathcal{S},a \in \mathcal{A}} d^{\pi_0}(s,a\mid x) r(s,a)} \\
|
| 601 |
+
&=
|
| 602 |
+
\sum_{s \in \mathcal{S},a \in \mathcal{A}} r(s,a) \expect*{x \sim \rho_{o}(x)}{ d^{\pi_0}(s,a\mid x) } \\
|
| 603 |
+
&=
|
| 604 |
+
\expect*{s,a \sim d_{\rho_{o}}^{\pi_0}(s,a)}{r(s,a)} \\
|
| 605 |
+
&=
|
| 606 |
+
\expect*{s,a \sim d_{\rho_{e}}^{\pi^*}(s,a)}{r(s,a)} \\
|
| 607 |
+
&=
|
| 608 |
+
\expect*{x \sim \rho_{e}(x), s,a \sim d^{\pi^*}(s,a|x)}{r(s,a)} \\
|
| 609 |
+
&=
|
| 610 |
+
\max_\pi \expect*{x \sim \rho_{e}(x), s,a \sim d^{\pi}(s,a|x)}{r(s,a)}
|
| 611 |
+
\end{align*}$$ Then, $\pi_0 \in \arg\max_\pi \expect*{x \sim \rho_{e}(x), s,a \sim d^{\pi}(s,a|x)}{r(s,a)}$. Applying [3](#lemma: optimal policy for every x){reference-type="ref+Label" reference="lemma: optimal policy for every x"} $$\begin{align*}
|
| 612 |
+
\pi_0
|
| 613 |
+
\in
|
| 614 |
+
\arg\max_{\pi}
|
| 615 |
+
\expect*{x \sim \rho_{o}(x), s,a \sim d^{\pi}(s,a|x)}{r(s,a)}
|
| 616 |
+
=
|
| 617 |
+
\Pi^*_\mathcal{M},
|
| 618 |
+
\end{align*}$$ completing the proof. ◻
|
| 619 |
+
:::
|
| 620 |
+
|
| 621 |
+
::: proof
|
| 622 |
+
*Proof.* We can write $$\begin{align*}
|
| 623 |
+
d^{\pi}(s,a \mid x)
|
| 624 |
+
&=
|
| 625 |
+
(1-\gamma) \sum_{t=0}^\infty \gamma^t P(s_t = s, a_t = a | x) \\
|
| 626 |
+
&=
|
| 627 |
+
(1-\gamma) \sum_\tau \sum_{t=0}^\infty \gamma^t P(s_t = s, a_t = a | x, \tau)P(\tau | x) \\
|
| 628 |
+
&=
|
| 629 |
+
(1-\gamma) \sum_\tau \sum_{t=0}^\infty \gamma^t \mathbf{1}\brk[c]*{\tau_t = (s, a)}P(\tau | x).
|
| 630 |
+
\end{align*}$$ Then, denoting $P^{\pi}_{\rho_s^*}(\tau) = \expect*{x \sim \rho_s^*}{P(\tau | x)}$, we get that $$\begin{align*}
|
| 631 |
+
d^{\pi}_{\rho_s^*}(s,a)
|
| 632 |
+
&=
|
| 633 |
+
(1-\gamma) \sum_\tau \sum_{t=0}^\infty \gamma^t \mathbf{1}\brk[c]*{\tau_t = (s, a)}P^{\pi}_{\rho_s^*}(\tau) \\
|
| 634 |
+
&=
|
| 635 |
+
\expect*{\tau \sim P^\pi_{\rho_s^*}}{(1-\gamma) \sum_{t=0}^\infty \gamma^t \mathbf{1}\brk[c]*{\tau_t = (s, a)}}.
|
| 636 |
+
\end{align*}$$ Since $\text{Supp}(\rho_{o}) \subseteq \text{Supp}(\rho_{e})$, there exists $p^n \in \Delta_n$ such that $\expect*{i \sim p^n}{(1-\gamma) \sum_{t=0}^\infty \gamma^t \mathbf{1}\brk[c]*{(s_t^i, a_t^i) = (s, a)}}$ is an unbiased estimator of $d^{\pi}_{\rho_s^*}(s,a)$. The result follows by the law of large numbers. ◻
|
| 637 |
+
:::
|
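For intuition, the discounted occupancy $d^{\pi}_{\rho}(s,a)$ and its Monte Carlo estimate can be sketched in a few lines of Python. The two-state MDP, the uniform policy, and the rollout horizon below are illustrative assumptions (the truncation biases the estimate by at most $\gamma^{200}$), not the construction used in the proof.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, n_states, n_actions = 0.9, 2, 2

def step(s, a):
    """Toy deterministic transition: action 1 flips the state, action 0 keeps it."""
    return 1 - s if a == 1 else s

def rollout(s0, horizon=200):
    s, traj = s0, []
    for _ in range(horizon):
        a = int(rng.integers(n_actions))   # uniform policy pi(a|s)
        traj.append((s, a))
        s = step(s, a)
    return traj

def occupancy_estimate(n_rollouts=2000):
    """Monte Carlo estimate of d^pi(s,a) = E[(1-gamma) sum_t gamma^t 1{(s_t,a_t)=(s,a)}]."""
    d = np.zeros((n_states, n_actions))
    for _ in range(n_rollouts):
        s0 = int(rng.integers(n_states))   # initial state drawn at random
        for t, (s, a) in enumerate(rollout(s0)):
            d[s, a] += (1 - gamma) * gamma ** t
    return d / n_rollouts

d_hat = occupancy_estimate()
print(d_hat, d_hat.sum())                  # entries sum to ~1, up to the horizon truncation
```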
| 638 |
+
|
| 639 |
+
::: proof
|
| 640 |
+
*Proof.* We begin by showing that $h(P) = \min_{x \in \Delta_n} D_f(P || \expect*{x}{Q_x})$ is convex in $P$. We can write $D_f$ in its variational form, rewriting $h(P)$ as $$\begin{align*}
|
| 641 |
+
h(P) = \min_{x \in \Delta_n} \max_{g: \mathcal{Z}\mapsto \R} \expect*{z \sim P}{g(z)} - \expect*{x, z \sim Q_x}{f^*(g(z))},
|
| 642 |
+
\end{align*}$$ where $$\begin{align*}
|
| 643 |
+
f^*(w) = \sup_{y} \brk[c]*{yw - f(y)}.
|
| 644 |
+
\end{align*}$$ The objective $\expect*{z \sim P}{g(z)} - \expect*{x, z \sim Q_x}{f^*(g(z))}$ is affine in $x$ over the compact convex set $\Delta_n$ and concave in $g$. Therefore, strong duality holds and the $\min$ and $\max$ may be exchanged, yielding $$\begin{align*}
|
| 645 |
+
h(P)
|
| 646 |
+
&=
|
| 647 |
+
\max_{g: \mathcal{Z}\mapsto \R} \min_{x \in \Delta_n} \expect*{z \sim P}{g(z)} - \expect*{x, z \sim Q_x}{f^*(g(z))} \\
|
| 648 |
+
&=
|
| 649 |
+
\max_{g: \mathcal{Z}\mapsto \R} \brk[c]*{\expect*{z \sim P}{g(z)} - \brk*{\max_{x \in \Delta_n} \expect*{x, z \sim Q_x}{f^*(g(z))}}}
|
| 650 |
+
\end{align*}$$ For every fixed $g$, the term inside the outer maximum is affine in $P$, since the inner maximum over $x$ does not depend on $P$. Therefore $h(P)$ is convex in $P$ as a pointwise maximum of affine functions.
|
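The convexity claim can also be illustrated numerically (this is not a proof). The sketch below uses the KL divergence as $D_f$, a two-component mixture so that the minimisation over $\Delta_n$ reduces to a one-dimensional grid search, and randomly drawn distributions; all of these are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def kl(p, q):
    """D_f with f(t) = t log t, i.e. the KL divergence, used here as a concrete example."""
    return float(np.sum(p * np.log(p / q)))

def h(P, Qs, grid=501):
    """h(P) = min over mixture weights of KL(P || w Q_0 + (1 - w) Q_1), via 1-D grid search."""
    ws = np.linspace(0.0, 1.0, grid)
    return min(kl(P, w * Qs[0] + (1 - w) * Qs[1]) for w in ws)

dim = 4
Qs = [rng.dirichlet(np.ones(dim)) for _ in range(2)]
P1, P2 = rng.dirichlet(np.ones(dim)), rng.dirichlet(np.ones(dim))

for lam in (0.25, 0.5, 0.75):
    lhs = h(lam * P1 + (1 - lam) * P2, Qs)
    rhs = lam * h(P1, Qs) + (1 - lam) * h(P2, Qs)
    print(lhs <= rhs + 1e-6)   # convexity in P: expected to print True (up to grid error)
```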
| 651 |
+
|
| 652 |
+
Then, the objective in Problem (\ref{eq: max min min problem}) is convex in $d_{\rho_{o}}^{\pi}$. Following the meta algorithm framework for convex RL in @zahavy2021reward, we write the gradient of $D_f(d_{\rho_{o}}^{\pi}(s,a) || d_{\rho_{e}}^{\pi^*}(s,a))$. Notice that for any general $f$-divergence $D_f(x_i || y_i)
|
| 653 |
+
=
|
| 654 |
+
\expect*{y_i}{f\brk*{\frac{x_i}{y_i}}}$ it holds that $$\begin{align*}
|
| 655 |
+
\nabla_{x_j} D_f(x_i || y_i) = 0, j \neq i,
|
| 656 |
+
\end{align*}$$ and $$\begin{align*}
|
| 657 |
+
\nabla_{x_i} D_f(x_i || y_i)
|
| 658 |
+
=
|
| 659 |
+
\nabla_{x_i} \expect*{y_i}{f\brk*{\frac{x_i}{y_i}}}
|
| 660 |
+
=
|
| 661 |
+
\expect*{y_i}{\frac{1}{y_i}\nabla_z f\brk*{z}\mid_{z=\frac{x_i}{y_i}}}.
|
| 662 |
+
\end{align*}$$ Specifically, for the $KL$-divergence, $D_{KL}(p_i || q_i)
|
| 663 |
+
=
|
| 664 |
+
-\expect*{q_i}{\log\brk*{\frac{p_i}{q_i}}}.$ Then, $$\begin{align*}
|
| 665 |
+
\nabla_{p_i} D_{KL}(p_i || q_i)
|
| 666 |
+
=
|
| 667 |
+
\expect*{q_i}{\frac{1}{p_i}}.
|
| 668 |
+
\end{align*}$$ Applying Lemma 2 of @zahavy2021reward with a Follow the Leader (FTL) cost player completes the proof. ◻
|
| 669 |
+
:::
|
2110.13059/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2110.13059/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,208 @@
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
@minsky1988perceptrons suggest that the power of the perceptron comes from its ability to *learn to discard* irrelevant information. In other words, information that does not bear significance to the current task does not influence representations built by the network. According to @minsky1988perceptrons, this leads to a definition of perceptrons in terms of the *symmetry groups* their learned representations are invariant to. Progress in geometric deep learning has shown the power of pro-actively equipping models with such geometric structure as inductive bias, reducing model complexity and improving generalisation and performance [@bronstein2017geometric]. An early example of such geometric inductive bias at work can be seen in the convolutional layer in a CNN [@lecun1998gradient]. CNNs have been instrumental in conquering computer vision tasks, and much of their success has been attributed to their use of the convolution operator, which commutes with the action of the translation group. This property, known as *equivariance* to translation, comes about as a result of the application of the same convolution kernel throughout an input signal, enabling the CNN to learn to detect the same features at any location in the input signal, directly exploiting translational symmetries that naturally occur in many tasks.
|
| 4 |
+
|
| 5 |
+
Although invariance to object-identity preserving transformations has long been recognised as a desirable model characteristic in machine learning literature [@kondor2008group; @cohen2013learning; @sifre2014rigid], only recently @cohen2016group introduced the Group Equivariant CNN (G-CNN) as a natural extension of the CNN [@lecun1998gradient], generalising its equivariance properties to group actions beyond translation. The layers of a G-CNN are explicitly designed to be equivariant to such transformations, hence the model is no longer burdened with *learning* invariance to transformations that leave object identity intact. It has since been shown that equivariant deep learning approaches may serve as a solution in fields that as of yet remain inaccessible to machine learning due to scarce availability of labelled data, or when compact model design due to limited computational power is required [@winkels20183d; @linmans2018sample; @bekkers2019b].
|
| 6 |
+
|
| 7 |
+
**Complexity and redundancy issues impeding regular group convolutions**
|
| 8 |
+
|
| 9 |
+
<figure id="fig:sepgconvs">
|
| 10 |
+
<figure id="fig:nonsep-gconv-kernel">
|
| 11 |
+
|
| 12 |
+
<figcaption><span class="math inline"><em>k</em> : ℝ<sup>2</sup> ⋊ <em>H</em> → ℝ</span></figcaption>
|
| 13 |
+
</figure>
|
| 14 |
+
<figure id="fig:subgroup-gconv-kernel">
|
| 15 |
+
|
| 16 |
+
<figcaption><span class="math inline"><em>k</em><sub><em>H</em></sub> : <em>H</em> → ℝ</span></figcaption>
|
| 17 |
+
</figure>
|
| 18 |
+
<figure id="fig:r2-gconv-kernel">
|
| 19 |
+
|
| 20 |
+
<figcaption><span class="math inline"><em>k</em><sub>ℝ<sup>2</sup></sub> : ℝ<sup>2</sup> → ℝ</span></figcaption>
|
| 21 |
+
</figure>
|
| 22 |
+
<figure id="fig:combined-separable-gconv-kernel">
|
| 23 |
+
|
| 24 |
+
<figcaption><span class="math inline"><em>k</em><sub><em>H</em></sub> ⋅ <em>k</em><sub>ℝ<sup>2</sup></sub></span></figcaption>
|
| 25 |
+
</figure>
|
| 26 |
+
<figcaption>In group convolutions on affine Lie groups, a feature map <span class="math inline"><em>f</em></span> defined over the group <span class="math inline"><em>G</em> = ℝ<sup><em>n</em></sup> ⋊ <em>H</em></span> is convolved with a filter <span class="math inline"><em>k</em> : ℝ<sup><em>n</em></sup> ⋊ <em>H</em> → ℝ</span>, shown in Fig. <a href="#fig:nonsep-gconv-kernel" data-reference-type="ref" data-reference="fig:nonsep-gconv-kernel">1</a>. We propose separating this convolution into two sequential operations: a convolution over the subgroup <span class="math inline"><em>H</em></span> with a kernel <span class="math inline"><em>k</em><sub><em>H</em></sub> : <em>H</em> → ℝ</span>, followed by a convolution over the spatial dimensions with a kernel <span class="math inline"><em>k</em><sub>ℝ<sup><em>n</em></sup></sub> : ℝ<sup><em>n</em></sup> → ℝ</span>, shown in Figs. <a href="#fig:subgroup-gconv-kernel" data-reference-type="ref" data-reference="fig:subgroup-gconv-kernel">2</a>, <a href="#fig:r2-gconv-kernel" data-reference-type="ref" data-reference="fig:r2-gconv-kernel">3</a> respectively. Importantly, this greatly reduces computational complexity while retaining equivariance properties, allowing for application of equivariant deep learning models to larger groups <span class="math inline"><em>G</em></span>. This factorisation intuitively corresponds to composing <span class="math inline"><em>k</em></span> by sharing a reweighting of <span class="math inline"><em>k</em><sub>ℝ<sup>2</sup></sub></span> along <span class="math inline"><em>H</em></span> with coefficients given by <span class="math inline"><em>k</em><sub><em>H</em></sub></span>, shown in Fig. <a href="#fig:combined-separable-gconv-kernel" data-reference-type="ref" data-reference="fig:combined-separable-gconv-kernel">4</a>. </figcaption>
|
| 27 |
+
</figure>
|
| 28 |
+
|
| 29 |
+
A growing body of work shows applications of G-CNNs consistently and decisively outperforming classical CNNs [@worrall2017harmonic; @weiler20183d; @bekkers2018roto; @esteves2018learning; @bekkers2019b; @worrall2019deep; @sosnovik2021scale]. However, a practical challenge impeding application to larger groups is the computational complexity of regular group convolutions, which scales exponentially with the dimensionality of the group. Furthermore, @lengyel2021exploiting show that group convolution filters in the original formulation of the G-CNN by @cohen2016group exhibit considerable redundancies along the group axis for the $p4m$ and $\mathbb{Z}^2$ groups. Similar observations motivated depthwise separable convolutions [@chollet2017xception], which not only increased parameter efficiency but also model performance; observed correlations between weights are explicitly enforced with further parameter sharing through the use of kernels separable along spatial and channel dimensions. We address the observations of redundancy along with the scalability issues of regular G-CNNs in their current form. Our paper contains the following contributions:
|
| 30 |
+
|
| 31 |
+
- We introduce separable group convolutions for affine Lie groups $\mathbb{R}^n \rtimes H$, sharing the kernels for translation elements $x \in \mathbb{R}^n$ along subgroup elements $h\in H$. See Fig. [5](#fig:sepgconvs){reference-type="ref" reference="fig:sepgconvs"} for an overview.
|
| 32 |
+
|
| 33 |
+
- We propose the use of a SIREN [@sitzmann2020implicit] as kernel parameterisation in the Lie algebra - imposing a fixed number of parameters per convolution kernel, regardless of the resolution at which this kernel is sampled, and ensuring smoothness over the Lie group.
|
| 34 |
+
|
| 35 |
+
- Separable group convolutions allow us to build $\mathrm{Sim(2)}$-CNNs, which we thoroughly experiment with. We show that equivariance to $\mathrm{Sim(2)}$ increases accuracy over a range of vision benchmarks.
|
| 36 |
+
|
| 37 |
+
- To achieve equivariance to continuous affine Lie groups, we propose a random sampling method over subgroups $H$ for approximating the group convolution operation.
|
| 38 |
+
|
| 39 |
+
First, we position this work within the area of equivariant deep learning by giving an overview of related works, and explaining which current issues we are addressing with this work. We derive separable group convolutions, and show how they may be applied to continuous groups. Lastly, we apply these ideas by experimenting with implementations for roto-translations in 2D ($\mathrm{SE(2)}$), dilation and translation in 2D ($\mathbb{R}^2 \rtimes \mathbb{R}^+$) and dilation, rotation and translation in 2D ($\mathrm{Sim(2)}$).
|
| 40 |
+
|
| 41 |
+
# Method
|
| 42 |
+
|
| 43 |
+
In this section, group theoretical prerequisites used throughout the paper are briefly refreshed. This is by no means intended as an exhaustive exposition of the fundamentals of group theory, we only introduce those concepts relevant to the current work.
|
| 44 |
+
|
| 45 |
+
**Group.** A group is defined by a set $G$ of *group elements*, along with a binary operator $\cdot:G \times G \rightarrow G$, called the *group product*. The group product defines a way to combine each pair of elements $g_1,g_2 \in G$. For the binary operator $\cdot$ to be considered a group product, it needs to satisfy four constraints:
|
| 46 |
+
|
| 47 |
+
1. Closure. $G$ is closed under $\cdot$; for all $g_1, g_2 \in G$ we have $g_1 \cdot g_2 \in G$.
|
| 48 |
+
|
| 49 |
+
2. Identity. There exists an identity element $e$ s.t. for each $g \in G$, we have $e \cdot g = g \cdot e = g$.
|
| 50 |
+
|
| 51 |
+
3. Inverse. For every element $g \in G$ we have an element $g^{-1} \in G$, s.t. $g \cdot g^{-1} = e$.
|
| 52 |
+
|
| 53 |
+
4. Associativity. For every set of elements $g_1, g_2, g_3 \in G$, we have ($g_1 \cdot g_2) \cdot g_3 = g_1 \cdot (g_2 \cdot g_3)$.
|
| 54 |
+
|
| 55 |
+
**Lie groups.** A Lie group is a group of which the elements form a smooth manifold. Since the group itself is not necessarily a vector space, combination of elements through addition or subtraction is not defined. However, to each Lie group $G$, an algebra $\mathfrak{g}$ may be associated, given by the tangent space of the Lie group at the identity $T_e(G)$. The Lie algebra may be interpreted as a vector space of infinitesimal generators of the group, a set of elements from which we can obtain the group $G$, by repeated application.
|
| 56 |
+
|
| 57 |
+
**Exponential and logarithmic map.** The exponential map $\exp:\mathfrak{g} \rightarrow G$ is a function mapping elements from the Lie algebra to the group. For many transformation Lie groups of interest this map is surjective, and it is possible to define an inverse mapping; the logarithmic map which maps from the group to the algebra.
|
| 58 |
+
|
| 59 |
+
**Semi-direct product groups.** In practice, we are often only interested in data defined over $\mathbb{R}^d$, and hence in this paper only consider affine Lie groups of the form $G = \mathbb{R}^d \rtimes H$, where $\mathbb{R}^d$ is the translation group in $d$ dimensions and $H$ is a transformation Lie group of interest.
|
| 60 |
+
|
| 61 |
+
**Group action.** A group $G$ may have an action on a given space $\mathcal{X}$. Given a group element $g\in G$ and a set $\mathcal{X}$, the group action $\mathcal{T}_g$ defines what happens to any element of $x\in \mathcal{X}$ when we apply the transformation given by element $g$ to it. This action is given by: $$\begin{equation}
|
| 62 |
+
\mathcal{T}: G \times \mathcal{X} \rightarrow \mathcal{X} \text{ and } \mathcal{T}_g: \mathcal{X} \rightarrow \mathcal{X},
|
| 63 |
+
\end{equation}$$ such that for any two elements $g, h \in G$, we can combine their actions into a single action; $\mathcal{T}_{g,h} = \mathcal{T}_g \circ \mathcal{T}_h$. To avoid clutter, we write the action $\mathcal{T}_g(x)$ as $g\cdot x$. Note that the action of a group $G$ on domain $\mathcal{X}$ also extends to functions defined on this domain, treated next.
|
| 64 |
+
|
| 65 |
+
**Left-regular representations** We extend the group action on $\mathcal{X}$ to *square integrable functions* defined on $\mathcal{X}$; $\mathbb{L}_2 (X)$. Intuitively, any group action on $\mathcal{X}$ induces an action on functions on $\mathcal{X}$; as elements of the set $\mathcal{X}$ are transformed, a function on $\mathcal{X}$ is *dragged along*. Commonly, this is expressed through the *left-regular representations*. Imagine we have a function; $f: \mathcal{X}\rightarrow \mathbb{R}$. Let's say we want to reason about the function $f$ after transformation by group element $r$; let us denote this transformed function $f'$. We may inspect the value of this function for a given element of $\mathcal{X}$ by reasoning backwards from our transformed function. For example, to obtain the value of $f'$ for the transformed element $a'$, we find what the value of $f$ was for $a$ before applying $r$. This is done by applying the inverse of action $r$ to the transformed element $a'$. For any element of the set of square-integrable functions on $\mathcal{X}$; $f\in \mathbb{L}_2 (\mathcal{X})$ the left-regular representation of $g$ is given by: $$\begin{equation}
|
| 66 |
+
\mathcal{L}_g: f \rightarrow f' \text{ and for $a' \in \mathcal{T}_g(\mathcal{X})$: } f'(a') = f(\mathcal{T}_{g^{-1}}(a')).
|
| 67 |
+
\end{equation}$$
|
| 68 |
+
|
| 69 |
+
**Equivariance**. An operator is equivariant with respect to a group, if it commutes with the action of the group. For an operator $\Phi:\mathbb{L}_2(X)\rightarrow\mathbb{L}_2(Y)$: $$\begin{align}
|
| 70 |
+
\forall g \in G: \mathcal{L}_{g} \circ \Phi = \Phi \circ \mathcal{L}_{g}.
|
| 71 |
+
\end{align}$$
|
| 72 |
+
|
| 73 |
+
If we set the kernel $k$ to be separable, meaning we parameterise it by multiplying a kernel $k_H$ which is constant along the spatial domain and a kernel $k_{\mathbb{R}^2}$ which is constant along the group domain: $$\begin{align}
|
| 74 |
+
k(g) &= k(\boldsymbol{x}, h) \nonumber\\
|
| 75 |
+
&= k_{\mathbb{R}^2}(\boldsymbol{x}) k_H(h).
|
| 76 |
+
\end{align}$$ We can derive separable group convolutions as: $$\begin{align}
|
| 77 |
+
(f *_{\mathrm{group}} k) (g)&=\int_G f(\tilde{g})k(g^{-1} \cdot \tilde{g})\,{\rm d}\mu(\tilde{g}) \nonumber \\
|
| 78 |
+
&=\int_{\mathbb{R}^2}\int_H f(\tilde{\boldsymbol{x}}, \tilde{h})\mathcal{L}_{x^{-1}}\mathcal{L}_{h^{-1}}k(\tilde{\boldsymbol{x}}, \tilde{h})\frac{1}{|h|} \,{\rm d}\boldsymbol{\tilde{x}}\,{\rm d}\tilde{h}\nonumber \\
|
| 79 |
+
&=\int_{\mathbb{R}^2}\int_H f(\tilde{\boldsymbol{x}},\tilde{h})k(h^{-1}(\tilde{\boldsymbol{x}}-\boldsymbol{x}), h^{-1}\cdot \tilde{h})\dfrac{1}{|h|} \,{\rm d}\boldsymbol{\tilde{x}}\,{\rm d}\tilde{h} \nonumber \\
|
| 80 |
+
&\rightarrow \int_{\mathbb{R}^2}\int_H f(\tilde{\boldsymbol{x}},\tilde{h})k_{\mathbb{R}^2}(h^{-1}(\tilde{\boldsymbol{x}}-\boldsymbol{x}))k_H(h^{-1}\cdot \tilde{h})\dfrac{1}{|h|} \,{\rm d}\boldsymbol{\tilde{x}}\,{\rm d}\tilde{h} \nonumber \\
|
| 81 |
+
&= \int_{\mathbb{R}^2}\int_H f(\tilde{\boldsymbol{x}},\tilde{h})k_H(h^{-1}\cdot \tilde{h})k_{\mathbb{R}^2}(h^{-1}(\tilde{\boldsymbol{x}}-\boldsymbol{x}))\dfrac{1}{|h|} \,{\rm d}\boldsymbol{\tilde{x}}\,{\rm d}\tilde{h}\nonumber \\
|
| 82 |
+
&= \int_{\mathbb{R}^2}\left[\int_Hf(\tilde{\boldsymbol{x}},\tilde{h})k_H (h^{-1}\cdot \tilde{h})\,{\rm d}\tilde{h}\right]k_{\mathbb{R}^2}(h^{-1}(\tilde{\boldsymbol{x}}-\boldsymbol{x})) \frac{1}{|h|}\,{\rm d}\tilde{\boldsymbol{x}}. \label{eq:sepgconv_1}
|
| 83 |
+
\end{align}$$
|
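The factorisation in the last line can be illustrated with a small numpy sketch. For clarity, the spatial transformation of $k_{\mathbb{R}^2}$ under $h$ (the $h^{-1}(\tilde{\boldsymbol{x}}-\boldsymbol{x})$ argument) is omitted, so the snippet only demonstrates that an outer-product kernel over the group and spatial axes can be applied as a group-axis convolution followed by a spatial convolution, together with the associated parameter saving; it is not a full equivariant layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Single-channel feature map on R^2 x C_4: shape [|H|, height, width].
nH, S = 4, 9
f = rng.standard_normal((nH, S, S))

# Separable kernel k(h, x) = k_H(h) * k_R2(x), and its non-separable equivalent.
k_H = rng.standard_normal(nH)
k_R2 = rng.standard_normal((3, 3))
k_full = k_H[:, None, None] * k_R2[None, :, :]

def corr2(img, ker):
    """Plain 'valid' 2-D cross-correlation."""
    kh, kw = ker.shape
    H, W = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * ker)
    return out

def group_axis_corr(f, k_H):
    """Cyclic cross-correlation along the group axis, applied at every spatial position."""
    return np.stack([sum(k_H[d] * f[(h + d) % nH] for d in range(nH)) for h in range(nH)])

# Joint convolution with the full kernel vs. the factorised two-step computation.
joint = np.stack([sum(corr2(f[(h + d) % nH], k_full[d]) for d in range(nH)) for h in range(nH)])
two_step = np.stack([corr2(g, k_R2) for g in group_axis_corr(f, k_H)])

print(np.allclose(joint, two_step))        # True: the two computations agree
print(k_full.size, k_H.size + k_R2.size)   # 36 vs. 13 kernel values
```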
| 84 |
+
|
| 85 |
+
We implemented models equivariant to three different groups: $\mathrm{SE(2)}$, $\mathbb{R}^2 \rtimes \mathbb{R}^+$ and $\mathrm{Sim(2)}$. In this section, we describe these groups in more detail, and give definitions for the logarithmic map required in obtaining G-CNNs equivariant to these groups [@bekkers2019b].
|
| 86 |
+
|
| 87 |
+
**The translation group $\mathbb{R}^2$** The translation group in two dimensions $\mathbb{R}^2$, has group product and inverse for two elements $g=\boldsymbol{x},g'=\boldsymbol{x}' \in \mathbb{R}^2$: $$\begin{align}
|
| 88 |
+
g \cdot g' &= (\boldsymbol{x} + \boldsymbol{x}')\\
|
| 89 |
+
g^{-1} &= -\boldsymbol{x}.
|
| 90 |
+
\end{align}$$ With logarithmic map: $$\begin{align}
|
| 91 |
+
\log g=\boldsymbol{x}.
|
| 92 |
+
\end{align}$$
|
| 93 |
+
|
| 94 |
+
**The rotation group ${\rm SO(2)}$** The rotation group in two dimensions describes the set of continuous rotation transformations of the plane, and consists of all orthogonal matrices $\boldsymbol{R}$ with determinant 1. Its group product and inverse for two elements $g=\boldsymbol{R}_\theta, g'=\boldsymbol{R}_\theta' \in {\rm SO(2)}$ is given by: $$\begin{align}
|
| 95 |
+
g \cdot g' &= \boldsymbol{R}_\theta \boldsymbol{R}_{\theta'} \nonumber \\
|
| 96 |
+
&= \boldsymbol{R}_{\theta + \theta'}\\
|
| 97 |
+
g^{-1} &= \boldsymbol{R}_\theta ^{-1}.
|
| 98 |
+
\end{align}$$ With logarithmic map: $$\begin{align}
|
| 99 |
+
\log g = \begin{bmatrix} 0 & -\theta\mod2\pi \\ \theta\mod2\pi & 0 \end{bmatrix}.
|
| 100 |
+
\end{align}$$
|
| 101 |
+
|
| 102 |
+
**The dilation group ${\mathbb{R}^+}$** The group of dilation transformations $\mathbb{R}^+$ has a group product and inverse that, for two elements $g=s, g'=s' \in \mathbb{R}^+$, are given by: $$\begin{align}
|
| 103 |
+
g \cdot g' &= ss'\\
|
| 104 |
+
g^{-1} &= s^{-1}.
|
| 105 |
+
\end{align}$$ The logarithmic map is given by: $$\begin{align}
|
| 106 |
+
\log g = \ln s.
|
| 107 |
+
\end{align}$$
|
| 108 |
+
|
| 109 |
+
**The Special Euclidean group $\text{SE}(2)$** The Special Euclidean group in 2 dimensions describes the set of geometric transformations that are formed by combinations of rotations and translations in two dimensions. Each group element can be parameterised by two variables $\theta$ and $\mathbf{x}$, describing the rotation angle and translation vector, $g=(\theta, \mathbf{x})\in G$. For two elements $g, g' \in \text{SE}(2)$, the group product and inverse are given by: $$\begin{align}
|
| 110 |
+
g\cdot g' &= (\mathbf{x},\theta) \cdot (\mathbf{x}',\theta') \nonumber \\
|
| 111 |
+
&= (\mathcal{T}_\theta( \mathbf{x}') + \mathbf{x}, \theta + \theta')\\
|
| 112 |
+
g^{-1} &= (-\mathcal{T}_{-\theta} (\mathbf{x}), -\theta).
|
| 113 |
+
\end{align}$$ As we can see, to combine the two elements $g, g'$ we apply the action of the rotation part of $g$ to the translation of $g'$ before we combine it with the translation of $g$, we combine elements semi-directly. $\text{SE}(2)$ is a semidirect product of the translation group $\mathbb{R}^2$ and rotation group $\text{SO}(2)$. We write this as $\text{SE}(2) = \mathbb{R}^2 \rtimes \text{SO}(2)$. To simplify implementation, we separate the logarithmic map into logarithmic maps for ${\rm SO(2)}$ and $\mathbb{R}^2$ in our implementation.
|
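As a minimal sketch (not the implementation used in our experiments), the $\mathrm{SE(2)}$ group product, inverse and split logarithmic map described above can be written directly in Python:

```python
import numpy as np

def rot(theta):
    """Matrix of the SO(2) action T_theta on R^2."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def se2_product(g, gp):
    """(x, theta) . (x', theta') = (T_theta(x') + x, theta + theta')."""
    (x, th), (xp, thp) = g, gp
    return (rot(th) @ xp + x, th + thp)

def se2_inverse(g):
    """g^{-1} = (-T_{-theta}(x), -theta)."""
    x, th = g
    return (-(rot(-th) @ x), -th)

def se2_log(g):
    """Split logarithmic map: identity on the R^2 part, angle wrapped to [0, 2*pi) on SO(2)."""
    x, th = g
    return (x, np.mod(th, 2 * np.pi))

g = (np.array([1.0, 2.0]), 0.3)
x_e, th_e = se2_product(g, se2_inverse(g))        # composing g with its inverse gives the identity
print(np.allclose(x_e, 0.0), np.isclose(th_e, 0.0))
```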
| 114 |
+
|
| 115 |
+
**The dilation-translation group $\mathbb{R}^2 \rtimes \mathbb{R}^+$** Another group of interest to our research topic is the translation-dilation group $\mathbb{R}^2\rtimes \mathbb{R}^+$, the group of translation and dilation transformations. Dilations occur frequently in natural images, in the form of scaling transformations of objects and scenes, as the distance between camera and object differs between images. Group elements are parameterised by a scaling factor $s$ and translation element $\mathbf{x}$, $g=(\mathbf{x}, s)\in G$. For two elements $g, g'$, group product and inverse are given by: $$\begin{align}
|
| 116 |
+
g \cdot g' &= (\mathbf{x}, s) \cdot (\mathbf{x}', s') \nonumber \\
|
| 117 |
+
&= (\mathcal{T}_s( \mathbf{x}') + \mathbf{x}, s s')\\
|
| 118 |
+
g^{-1} &= (-\mathcal{T}_{s^{-1}} (\mathbf{x}), s^{-1}).
|
| 119 |
+
\end{align}$$ To simplify implementation, we separate the logarithmic map into logarithmic maps for ${\mathbb{R}^+}$ and $\mathbb{R}^2$ in our implementation.
|
| 120 |
+
|
| 121 |
+
**The Similarity group $\text{Sim}(2)$** The similarity transformation group is the semi-direct product of the roto-translation group $\text{SE}(2)$ and the isotropic scaling group $\mathbb{R}^+$, and defines dilation-roto-translation transformations in two dimensions. Each group element can be parameterised by three variables $\theta$, $s$ and $\mathbf{x}$. For two elements $g, g' \in \text{Sim}(2)$ the group product and inverse are given by: $$\begin{align}
|
| 122 |
+
g \cdot g' &= ((\mathbf{x}, \theta), s) \cdot ((\mathbf{x}', \theta'), s') \nonumber\\
|
| 123 |
+
&= ((\mathcal{T}_s(\mathcal{T}_\theta(\mathbf{x}')) + \mathbf{x}, \mathcal{T}_s(\theta') + \theta), s s') \nonumber\\
|
| 124 |
+
&= ((\mathcal{T}_{s}(\mathcal{T}_{\mathcal{T}_s(\theta)}(\mathbf{x}')) + \mathbf{x}, \mathcal{T}_s(\theta') + \theta), s s') \nonumber\\
|
| 125 |
+
\label{eq:line-commutativity}&= ((\mathcal{T}_s(\mathcal{T}_\theta(\mathbf{x}')) + \mathbf{x}, \theta' + \theta), s s')\\
|
| 126 |
+
g^{-1} &= (-(\mathcal{T}_{s^{-1}}(\mathcal{T}_{-\theta}(\mathbf{x})), -\theta), s^{-1}).
|
| 127 |
+
\end{align}$$ In Eq. [\[eq:line-commutativity\]](#eq:line-commutativity){reference-type="ref" reference="eq:line-commutativity"} we use the fact that the isotropic dilation group has no action on the rotation group. This is a consequence of the fact that $\mathbb{R}^+$ and $\text{SO}(2)$ are both abelian groups, and may be taken in direct product to create the group of dilation-rotation transformations. We again separate logarithmic maps by subgroup.
|
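Analogously, a minimal sketch of the $\mathrm{Sim(2)}$ product and inverse (again illustrative only, mirroring the $\mathrm{SE(2)}$ sketch above):

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def sim2_product(g, gp):
    """((x, theta), s) . ((x', theta'), s') = ((s T_theta(x') + x, theta + theta'), s s')."""
    ((x, th), s), ((xp, thp), sp) = g, gp
    return ((s * (rot(th) @ xp) + x, th + thp), s * sp)

def sim2_inverse(g):
    """g^{-1} = ((-T_{1/s}(T_{-theta}(x)), -theta), 1/s)."""
    (x, th), s = g
    return ((-(rot(-th) @ x) / s, -th), 1.0 / s)

g = ((np.array([1.0, -2.0]), 0.4), 1.5)
(x_e, th_e), s_e = sim2_product(g, sim2_inverse(g))
print(np.allclose(x_e, 0.0), np.isclose(th_e, 0.0), np.isclose(s_e, 1.0))
```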
| 128 |
+
|
| 129 |
+
To ease the reading experience of this work, in this section we provide some additional group theoretic perspective on regular CNNs, and additional visualisations for a number of operations used in (separable) group convolutions.
|
| 130 |
+
|
| 131 |
+
**CNNs from a group theoretic perspective** []{#app:cnnsgrouptheory label="app:cnnsgrouptheory"} We give a brief treatment of ordinary CNNs. The ordinary convolution operation used in neural networks requires a definition of a convolution kernel $k$ on $\mathbb{R}^2$, as we are modulating a signal $f$ which itself lives on $\mathbb{R}^2$. The kernel $k$ is applied to $f$ on every location in the input space $\mathbb{R}^2$ to again yield a function over $\mathbb{R}^2$. Intuitively, this is the same as saying (1) we transform the convolution kernel $k$ under the action of every group element $\boldsymbol{x} \in \mathbb{R}^2$, to obtain a set of kernels $K=\{ \mathcal{L}_{\boldsymbol{x}}(k) | \boldsymbol{x} \in \mathbb{R}^2\}$, and (2) take the inner product of the input $f$ with each kernel in this set of transformed kernels $K$. By *tying* the kernel weights used throughout the translation group, learned features are automatically generalised over spatial positions. This intuition is visualised in Fig. [16](#fig:cnnsgrouptheory){reference-type="ref" reference="fig:cnnsgrouptheory"}.
|
| 132 |
+
|
| 133 |
+
{#fig:cnnsgrouptheory width="90%"}
|
| 134 |
+
|
| 135 |
+
**Lifting convolution** In Fig. [17](#fig:liftingconv){reference-type="ref" reference="fig:liftingconv"} we show a visualisation of the lifting convolution for the $\mathrm{SE(2)}$ group.
|
| 136 |
+
|
| 137 |
+
{#fig:liftingconv width="\\textwidth"}
|
| 138 |
+
|
| 139 |
+
**Group convolution** In Fig. [18](#fig:groupconv){reference-type="ref" reference="fig:groupconv"} we show a visualisation of the group convolution for the $\mathrm{SE(2)}$ group.
|
| 140 |
+
|
| 141 |
+
{#fig:groupconv width="\\textwidth"}
|
| 142 |
+
|
| 143 |
+
**Separable group convolution kernel** In Fig. [19](#fig:separablekernel){reference-type="ref" reference="fig:separablekernel"} we show a visualisation of a separable group convolution kernel.
|
| 144 |
+
|
| 145 |
+
{#fig:separablekernel width="80%"}
|
| 146 |
+
|
| 147 |
+
**Separable group convolution** In Fig. [20](#fig:sepgroupconv){reference-type="ref" reference="fig:sepgroupconv"} we show a visualisation of the separable group convolution operation.
|
| 148 |
+
|
| 149 |
+
{#fig:sepgroupconv width="\\textwidth"}
|
| 150 |
+
|
| 151 |
+
**Defining convolution kernels on Lie groups** In Fig. [21](#fig:gridongroup){reference-type="ref" reference="fig:gridongroup"} we show how we obtain a grid on the Lie group $\mathrm{SO(3)}/\mathrm{SO(2)}$ by mapping from its algebra. In Fig. [\[fig:kernelongroup\]](#fig:kernelongroup){reference-type="ref" reference="fig:kernelongroup"} we show how we subsequently obtain a kernel on this group by defining a SIREN on its algebra.
|
| 152 |
+
|
| 153 |
+
<figure id="fig:gridongroup">
|
| 154 |
+
<div class="minipage">
|
| 155 |
+
<img src="figures/intuitivevisualisations/kernel-on-lie-alg.png" style="width:90.0%" />
|
| 156 |
+
</div>
|
| 157 |
+
<div class="minipage">
|
| 158 |
+
<img src="figures/intuitivevisualisations/grid-on-lie-alg.png" style="width:90.0%" />
|
| 159 |
+
</div>
|
| 160 |
+
<figcaption>An example of a local kernel grid <span class="math inline">ℋ</span> on the quotient space <span class="math inline">SO(3)/SO(2)</span>. We obtain a volumetrically constant grid on <span class="math inline"><em>G</em></span> by sampling a set of equidistant points in <span class="math inline">𝔤</span> and mapping them to <span class="math inline"><em>G</em></span> via the exponential map. The grid on <span class="math inline"><em>G</em></span>, show in this figure, then serves as input for the SIREN defined on <span class="math inline">𝔤</span> via the logarithmic map.</figcaption>
|
| 161 |
+
</figure>
|
| 162 |
+
|
| 163 |
+
<figure id="fig:intuitionlimitations" data-latex-placement="t">
|
| 164 |
+
<img src="figures/intuitionsepgconvs.png" />
|
| 165 |
+
<figcaption>An example of a configuration of features <span class="math inline"><em>f</em><sub><em>o</em><em>u</em><em>t</em></sub></span> a <em>single</em> separable group convolution kernel is unable to represent.</figcaption>
|
| 166 |
+
</figure>
|
| 167 |
+
|
| 168 |
+
As discussed, separable group convolution kernels are strictly less expressive than their non-separable counterpart. For example, in the case of the roto-translation group $\mathrm{SE(2)}$, this would enable non-separable group convolution kernels to learn to represent features that are built up of different spatial configurations at different orientations.
|
| 169 |
+
|
| 170 |
+
We draw up a simplified example illustrated in Fig. [22](#fig:intuitionlimitations){reference-type="ref" reference="fig:intuitionlimitations"}. Assume we have an elementary feature type $e$, and a spatial kernel $k$ with which we can recognise this feature type in its canonical pose $\theta_0$. In our input $f_{in}$, we have three instances of the feature $e$, one under the canonical pose $\theta_0$, and two under a $90\degree$ rotation. Applying the lifting convolution using kernel $k$ for the group $G=\mathbb{R}^2 \rtimes C_4$ of translations and $90\degree$ rotations yields a feature map $f_{out}$ defined over $G$, with spatial feature maps for $\theta_0, ..., \theta_{270}$. The spatial feature map $f^{\theta_0}_{out}$ contains a response at a single spatial position. In contrast, the feature map $f^{\theta_{90}}_{out}$ contains a response at two spatial positions. The spatial configurations for the feature maps along $H$ are different. A single conventional group convolution kernel could learn to recognise these distinct spatial configurations along the subgroup axis, whereas a separable group convolution kernel could not, since it simply repeats (a weighted version of) the same spatial kernel $k_{\mathbb{R}^2}$ along the group axis.
|
| 171 |
+
|
| 172 |
+
Although this reduction in expressivity could theoretically prove limiting in the application of G-CNNs on vision tasks, our experiments show that in practice this rarely seems a problem, and may even help prevent overfitting.
|
| 173 |
+
|
| 174 |
+
As mentioned, using SIRENs we are able to explicitly control kernel smoothness. We briefly elaborate on the importance of kernel smoothness in G-CNNs. In conventional CNNs, weights at distinct spatial locations are generally initialised independently. Because the kernels are only transformed using discrete translation operations $\boldsymbol{x}\in \mathbb{Z}^2$, translation equivariance is ensured by virtue of using the exact same weight values throughout all spatial locations.
|
| 175 |
+
|
| 176 |
+
In G-CNNs for continuous groups, the kernel is transformed under actions of a continuous transformation group of interest $H$ to obtain equivariance. However, in our convolution operation we are still using a discretised kernel; we are required to sample kernel values at *different grid points* for different elements $h\in H$. We are no longer able to simply reuse the same weight values throughout the group as with regular CNNs. To this end, we define our kernels in an analytical form, which we can trivially evaluate at arbitrary grid points. Because our grid has a fixed resolution, the kernels sampled from this analytical form are susceptible to aliasing effects; the analytical kernel function may exhibit higher frequencies than can be captured in the discretisation of the kernel. We visualise the effects of aliasing in Fig. [23](#fig:aliasing){reference-type="ref" reference="fig:aliasing"}.
|
| 177 |
+
|
| 178 |
+
In short, for continuous groups, the group action transforms a signal smoothly as the group is traversed. To prevent discretisation artefacts, we want our kernel to exhibit the same smoothly transforming behaviour, hence we use SIRENs, as they offer explicit control over the smoothness of kernels in their analytical form.
|
| 179 |
+
|
| 180 |
+
{#fig:aliasing width="96%"}
|
| 181 |
+
|
| 182 |
+
**Model architecture** []{#app:architecture label="app:architecture"} As architecture for our experiments we use a simple ResNet model [@he2016deep]. We use a single lifting convolution with 32 output channels, followed by two residual blocks of 2 group convolutions. The first block has 32 output channels, the second has 64 output channels. After the first residual block we apply max-pooling with kernel size 2 over the spatial dimensions of the feature map. After the last residual block, we apply max pooling over remaining spatial and subgroup dimensions, followed by two linear layers with batchnorm and ReLU in between. An overview is given in Fig. [24](#fig:architecture){reference-type="ref" reference="fig:architecture"}.
|
| 183 |
+
|
| 184 |
+
{#fig:architecture width="60%"}
|
| 185 |
+
|
| 186 |
+
**Group convolution blocks and random sampling on the group** In our residual blocks [@he2016deep], we subsequently convolve the input to the block $\boldsymbol{x}_{\text{in}}$ by two group convolutions, `gconv_1` and `gconv_2`, yielding $\boldsymbol{x}_{\text{out}}$ and apply elementwise addition with the input $\boldsymbol{x}_{\text{in}}$, followed by ReLU activation.
|
| 187 |
+
|
| 188 |
+
When approximating the group convolution through random sampling, we must take care to define the input and output of the two group convolution layers on the same grid over the group as the skip-connection to ensure a well-defined equivariant group convolution block. This may be done by adding a group shortcut layer which maps from the set of input elements on the group to the set of output elements. We implement this as a group convolution with a $1\times 1$ spatial extent. The group shortcut layer thus simultaneously serves as a channelwise projection from the input space of `gconv_1` to the output space of `gconv_2` *and* maps from the input grid on $H$ of the first group convolution to the output grid on $H$ of the second group convolution.
|
| 189 |
+
|
| 190 |
+
**SIREN architecture and separable kernel parameterisation** All our kernels are parameterised by a SIREN [@sitzmann2020implicit]. In a SIREN, output $\boldsymbol{y}^l$ for a layer $l$ and input $\boldsymbol{x}^{l-1}$ is defined by: $$\begin{align}
|
| 191 |
+
\boldsymbol{y}^l = \sin(\omega_0 \boldsymbol{W}^l \boldsymbol{x}^{l-1} + \boldsymbol{b}^l)
|
| 192 |
+
\end{align}$$ In this equation, $\omega_0$ acts as a multiplier for low dimensional frequencies found in the input domain (the grid of relative offsets on the group), which explicitly introduces higher frequencies, allowing the neural net to learn high-frequency functions (such as kernels). We found a value for $\omega_0$ of 10 to work well in all our experiments. For the SIREN, we used an architecture of two hidden layers of 64 units. In non-separable G-CNNs we have a single SIREN with a final layer mapping to a vector in $\mathbb{R}^{c_{in} \times c_{out}}$. In separable G-CNNs, we use two SIRENs, the first mapping the Lie algebra of the subgroup $H$ to $\mathbb{R}^{c_{in} \times c_{out}}$, and the second mapping $\mathbb{R}^2$ to $\mathbb{R}^{c_{out}}$. Formulating the kernel $k$ for a group element $g=(\boldsymbol{x}, h)$ in terms of $k_H$, $k_{\mathbb{R}^2}$, input channel $i$ and output channel $j$, and the logarithmic map $\log_H$ on $H$, we obtain: $$\begin{equation}
|
| 193 |
+
k^{i,j} (g) = k_H^{i,j}(\log_H h)k_{\mathbb{R}^2}^{j}(\boldsymbol{x}) \\
|
| 194 |
+
\end{equation}$$
|
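A minimal numpy sketch of such a kernel parameterisation is given below; the initialisation scheme, the plain linear output layer, and the $5\times 5$ sampling grid are simplifying assumptions for illustration, not the exact configuration used in our experiments. The same parameters can be evaluated on a grid of any resolution.

```python
import numpy as np

rng = np.random.default_rng(0)

def siren_init(sizes):
    """Random SIREN weights; the initialisation scheme is simplified for illustration."""
    return [(rng.uniform(-1, 1, (m, n)) / n, rng.uniform(-1, 1, m))
            for n, m in zip(sizes[:-1], sizes[1:])]

def siren(params, x, omega0=10.0):
    """Hidden layers y = sin(omega0 * W x + b); a plain linear output layer is assumed."""
    h = x
    for W, b in params[:-1]:
        h = np.sin(omega0 * (h @ W.T) + b)
    W, b = params[-1]
    return h @ W.T + b

# Evaluate a spatial kernel k_R2 on a 5x5 grid of relative offsets in [-1, 1]^2.
c_out = 8
params = siren_init([2, 64, 64, c_out])
xs = np.linspace(-1, 1, 5)
grid = np.stack(np.meshgrid(xs, xs, indexing="ij"), axis=-1).reshape(-1, 2)
k_R2 = siren(params, grid).reshape(5, 5, c_out)   # resample at any resolution from the same parameters
print(k_R2.shape)
```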
| 195 |
+
|
| 196 |
+
Lastly, for $\text{H}$-separable $\mathrm{Sim(2)}$-CNNs we use three SIRENs, the first mapping the Lie algebra of the subgroup $\mathbb{R}^+$ to $\mathbb{R}^{c_{in} \times c_{out}}$, the second mapping $\mathrm{SO(2)}$ to $\mathbb{R}^{c_{out}}$, and the third mapping $\mathbb{R}^2$ to $\mathbb{R}^{c_{out}}$. Formulating the kernel $k$ for a group element $g=((\boldsymbol{x}, \theta), s)$ in terms of $k_{\mathrm{SO(2)}}$, $k_{\mathbb{R}^+}$, $k_{\mathbb{R}^2}$, input channel $i$ and output channel $j$, and logarithmic maps $\log_{\mathbb{R}^+}$ and $\log_{\rm SO(2)}$ on $\mathbb{R}^+$ and $\mathrm{SO(2)}$ respectively, we obtain: $$\begin{equation}
|
| 197 |
+
k^{i,j} (g) = k_{\mathbb{R}^+}^{i,j}(\log_{\mathbb{R}^+} s)k^{j}_{\mathrm{SO(2)}}(\log_{\rm SO(2)}\theta)k_{\mathbb{R}^2}^{j}(\boldsymbol{x}) \\
|
| 198 |
+
\end{equation}$$
|
| 199 |
+
|
| 200 |
+
**Model sizes**[]{#app:modelsizes label="app:modelsizes"} In Tab. [1](#tab:parameters){reference-type="ref" reference="tab:parameters"}, we report the number of trainable parameters for each model configuration. Throughout our experiments, we kept the number of channels in all our configurations constant, to fairly compare the expressivity of the learned representations in non-separable and separable group convolutions. This has as effect that the number of parameters in the separable implementations is larger than for non-separable implementations, due to our use of separate SIRENs to parameterise kernels over the different subgroups. To ensure that the difference in number of trainable parameters does not influence the comparison between separable and non-separable group convolution layers, we explicitly chose to over-parameterise our SIREN architecture, as shown in an additional ablation in Appx. [9.3](#app:overparameterisedsirens){reference-type="ref" reference="app:overparameterisedsirens"}. We only change SIREN hidden size when comparing our models to baselines proposed in related works as detailed in Appx. [8.2](#app:trainingregimes){reference-type="ref" reference="app:trainingregimes"}.
|
| 201 |
+
|
| 202 |
+
::: {#tab:parameters}
|
| 203 |
+
------ ------ ------
|
| 204 |
+
742k 803k 864k
|
| 205 |
+
------ ------ ------
|
| 206 |
+
|
| 207 |
+
: Number of trainable parameters for different implementations. For all groups and datasets, these numbers are kept constant.
|
| 208 |
+
:::
|
2112.10149/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2022-03-22T07:45:01.004Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.4844.74 Safari/537.36" etag="4CCCqfwKD-0G1fwSSPAJ" version="17.1.3" type="device"><diagram id="17g25j0EZ4dQmeewbZnq" name="Page-1">7Vxbd6I6FP41fawLCBd9bK2dzprOdNb0rM7MeemKEpXTQBiIrZ5ffxJIVEJUdBCdHnxoYRMDyfftnX0JXoB+OP+QwHj6mfgIX1iGP78ANxeWZVq9LvvHJYtc4nleLpgkgS8arQSPwb9ICA0hnQU+SgsNKSGYBnFROCJRhEa0IINJQt6KzcYEF+8awwkqCR5HEJel3wOfTnNp1/JW8jsUTKbyzqbby6+EUDYWXaRT6JO3XJQNDgwuQD8hhOZH4byPMJ88OS/5DNxuuLp8sARFtMoXbsDfn+7IzUPyQIL7X7fmw5fbwaUlno0u5ICRz8YvTklCp2RCIogHK+l1QmaRj3ivBjtbtbknJGZCkwn/QZQuBJhwRgkTTWmIxdWUJuRlOZ1sIq7HJKK3MAwwZ8lfQcgQt4wv6I39/UZCGLEm5eGKGeAPvCYQg/+ASIhosmANEoQhDV6LgELBi8my3fKrX0nAbmEZgsMWEIgu5DkodkFhMkFUfGsdAKUj03W2d5SSWTJCpY7Ywdp4VqIM3z2wFiN/hXgm5qIEPuNozA9HsyFH+m0aUPQYwxGXvTEFL+I4zIlwP1wK4OhlktHjYUZxECEh92Hy8sC6CSgfutExnKLQyqS85RgH8d1RaPKKEormW4kirrpWASazZ+fnbyv9Nw1hnqZrut81NlOrgOW+wIEWuErAWfa5Iee05nWXebUdox7zCmzvpObVbbW0kpbazm4tdZpU0m4LXLV10TTPC7heC1wl4EzT6BRdT88oIWc1iZxp7YYORf4VD+A4ehimaTA66TK3g+RS9puroWmpOqZAsGER03SkYm7avUoLa23hRuv87ITbNbqKz2If5vw4StBS6ujIzo9VIbjcpdAcKIGrCd6vgivurmpjqyo4UIIdIBNrO6jDIICLtWYxb5BWf2B5nxWB8h7rpVMN6wMjQLL4kS3Ejjz9KQxLdnIzL5wtxNn7pJ3tqMuBeSDxHFuxWWa1dWVf4qn3kQ98XOK1uZZqPiWwuwqhgFV2KhsNByz7D8POh+k083VOCqR3fkA6LZAH5FWs8wPSKwHZ6XRKWLIBUp0X3yeYJEwSEY7P9TjAWBFBHEwizgI2bYjJr/l0BSOIr8SFMPB9vIkgxWCjAOKaLwrM5iA0gAqh9PLWILQ1EFpHg7BCdmwPxx4sXTGNZ7b02jZ4ZtnZV5Sw6c/QPoK7lvsuuWgALke92Y+PkbPoh0+9z/Tp9Upmec/ErbO7Cl28amFfqSPPAEWnrmKyYF+nzlMCXvnAx3XqdIlCF1PBlQKZ3V8zIi9cphlpr1gDNrJ5xht5nR1N8v9sAPYjtzbZkeyYPWjet2z2R9k8q8HkZLcYx5q93mktHjCOzZbrIOqT6LUlzIEOq3dmhNHlvmolzFUc40W+KanlzEGOlVzOJGecsmfcLGd0Ca56jcyXlisHccVxFVfIODVXylFUzVwZYPo4C5vmC0Zj+sezxTXUgO3EbLH32uaX4MV1AkcvPObZlURZzf2GNEcZal0YLZtcWMDIPlnXlIVBhMN/2TNOGW+71Srj5tHwc3QrA1fMH8+J1NCzNN06/dNh3Qy0MhBeltmc06qloyss5LAOW1irb/FU69wnx1WXq+Zw/mzVdS8HXSkqdst7y5rFVZf3rNXp+oBoG9L9BmOULIAmolsWNpphjC7LyKH91NqB6su2pW5TOHX05XVLwDWwjW0dlRNsSvnNsoRT0642uUXl+JvatLUenSeuUmFV+hK6+d53qDqGeRAo5Y4A2N5RfftTt1XytqErzeo9HCL8laSBiFGHhFIScoTlO52FRdHQmeh1M87s6jj7FPmiWnHKjYWGRTJ8D+cT/tJtB2E0ogn/ZgcOWXM4op10FoZZSP2cleSay4f0FFTNcnYeaNgJarDfWphNjdK225CVqqxbTI8fbLA9a4dOH9limxW27/3/TLarZrQrvhlX7khV7orv6tUG77Zg2xlEraO9x8tc6j705hIpvX/TIPoePdzM/I+Lu/Tp6fnbs/bt9DwkTmM26CrxNti+R+OZzzRbFy+cfh5oiwOnP4FssRRnXj8/GDyz1fkmQTGCdNM31Mg9f9IzLpiscxCcnIOWUpW1NbUT122QgzqXsN59Qr9mCLG2CtXmjDf+Oyu/YcVnLplQ1ale3rihfUdAfUGuydKd3gRWKN0Vi3A7KnZ6b2bdAChkKCKqr+c1kAQqehmakpzd9cq4mHLfeP3AVHErMQ7idJPirIEC0zj/oaJxMOdzvXclVQdcQ1sj1JC9weBOD0yFVz6OBYy6Xp4eHVupeYKy4nTL4HT3xoadrn62KvftVz/+BQb/AQ==</diagram></mxfile>
|
2112.10149/main_diagram/main_diagram.pdf
ADDED
|
Binary file (30.4 kB). View file
|
|
|
2112.10149/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,82 @@
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Convolutional Neural Networks (CNNs) have led to a series of breakthroughs for a variety of visual tasks (Krizhevsky, Sutskever, and Hinton 2012; Long, Shelhamer, and Darrell 2014; Ren et al. 2015; Toshev and Szegedy 2014; Zhu et al. 2016b). However, the challenge of resource constraints in terms of latency and memory storage is often faced when deploying CNNs on mobile or embedded devices. Previous work (Cai et al. 2017; Jacob et al. 2018; McKinstry et al. 2019; Jain et al. 2020; Shuang Wu and Shi 2016; Sung, Shin, and Hwang 2015; Yang et al. 2019; Zhou et al. 2016) has demonstrated that quantizing the real-valued weights and activations of CNNs into low-precision representations can reduce memory footprint while still achieving good performances. This class of methods allows fixed-point arithmetic
|
| 4 |
+
|
| 5 |
+
Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
|
| 6 |
+
|
| 7 |
+
<span id="page-0-0"></span>
|
| 8 |
+
|
| 9 |
+
Figure 1: Example images illustrating the same features on full-precision ResNet26 (2nd column), Bi-Real-ResNet26 (3rd column) and our proposed EL-ResNet26 (4th column).
|
| 10 |
+
|
| 11 |
+
to be applied, which substantially accelerates inference and reduces energy costs. Taken to an extreme, both the weights and the activations can be represented with binary tensors {-1, +1}. Such networks are termed Binarized Neural Networks (BNNs) (Courbariaux et al. 2016). In BNNs, arithmetic operations for convolutions can be replaced by the more efficient *xnor* and *bitcount* operations.
|
| 12 |
+
|
| 13 |
+
However, BNNs suffer from significant accuracy degradation as a result of information loss at each binarized layer. To conduct binarization, a real-valued signal is passed through the Sign activation, which eliminates the signal's amplitude and retains only its sign information. Because this process is irreversible, information loss increases with each layer of the BNN. Therefore, a central challenge to improving BNN accuracy is the reduction of this information loss.
|
| 14 |
+
|
| 15 |
+
<sup>\*</sup>corresponding author
|
| 16 |
+
|
| 17 |
+
<span id="page-1-0"></span>
|
| 18 |
+
|
| 19 |
+
Figure 2: Diagram of the Elastic-Link module. $\oplus$ denotes element-wise summation. The process of applying the scaling factor to activations is depicted within the dashed box, indicating that it is omitted in some cases for better performance. GetScale and ApplyScale operations refer to XNOR-Net (Rastegari et al. 2016).
|
| 20 |
+
|
| 21 |
+
One approach seeks to minimize the quantization error between the real-valued and binary forms of the weights and activations. Rastegari et al. (2016) utilizes scaling factors to reduce the euclidean distances between the two forms. More recently, Liu et al. (2018) employs a sophisticated fine-tuning strategy from a full-precision network with customized gradient approximation methods. Liu et al. (2018) additionally proposes a shortcut connection to forward the real-valued activation, drastically reducing the extent of information loss. However, these methods apply best to networks which consist primarily of $3\times3$ or $5\times5$ convolutions, such as AlexNet (Krizhevsky, Sutskever, and Hinton 2012), VGGNet (Simonyan and Zisserman 2015) and Basic-Block ResNet (He et al. 2016).
|
| 22 |
+
|
| 23 |
+
Performing binarization with the above methods on networks in which $1\times 1$ convolutions play a crucial role - for example, GoogleNet (Szegedy et al. 2015), Bottleneck ResNet or efficient networks with separated convolutions (Howard et al. 2017) - causes substantially greater accuracy degradations. $1\times 1$ convolutions fuse information across channels and are already used to reduce computational cost via dimensionality reduction. We hypothesize that the marginal information loss from binarization is the proverbial last straw. (Howard et al. 2017) observed that the training or fine-tuning of binarized MobileNet fails to even converge, giving credence to this hypothesis.
|
| 24 |
+
|
| 25 |
+
In order to make binarization more widely applicable, we introduce an effective and universal module named "Elastic-Link" (EL). In order to compensate for the loss incurred by binarization, we adaptively add the real-valued input features (i.e. the features before feeding into the binarization function) to the output features of the subsequent convolution to retain the original real-valued signal. Liu et al. (2018) demonstrated that adding extra shortcut connections, implemented by an element-wise summation, on the Basic-Block ResNet (He et al. 2016) produces considerable improvement in accuracy. This can be viewed as a special case of our proposed EL in which the input and output have the same shape.
|
| 26 |
+
|
| 27 |
+
To generalize this finding, we develop a method to enable feature addition even if the feature size is changed by the convolution. EL uses a Squeeze or Expand operation to align the feature sizes between the input and the output. Furthermore, we do not simply perform a direct summation after the Squeeze or Expand operation, but rather learn a scaling factor to balance the relative extents of preserving the realvalued information and convolutional transformation, unifying these mechanisms and fusing them in a learnable, lightweight manner. EL is applicable to any architecture without structural limitations. To better visualize the effects of information preservation, we illustrate the feature-maps of a full-precision model, a Bi-Real model and our model with EL separately in Fig. 1. The feature maps of our EL model show clear object contours and the retention of important information for recognition, with less noise. By contrast, the foreground and background in the Bi-Real model are not as easily discriminated. We believe that the EL module benefits information flow in the binarized neural networks.
|
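A rough sketch of how such a channel-aligning shortcut could be implemented is given below. The concrete Squeeze (grouped channel averaging) and Expand (channel repetition) operations and the scalar learnable scale are assumptions made for illustration; they follow Fig. 2 only loosely and are not the exact module evaluated in our experiments.

```python
import torch
import torch.nn as nn

class ElasticLink(nn.Module):
    """Sketch of an Elastic-Link-style shortcut: align the channel count of the
    real-valued input with that of the binary-conv output, then add with a learned scale.
    The Squeeze/Expand choices below are illustrative assumptions."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        assert max(in_ch, out_ch) % min(in_ch, out_ch) == 0
        self.in_ch, self.out_ch = in_ch, out_ch
        self.scale = nn.Parameter(torch.ones(1))

    def align(self, a):
        if self.out_ch < self.in_ch:                       # Squeeze: average channel groups
            n, _, h, w = a.shape
            return a.view(n, self.out_ch, -1, h, w).mean(dim=2)
        if self.out_ch > self.in_ch:                       # Expand: repeat channels
            return a.repeat_interleave(self.out_ch // self.in_ch, dim=1)
        return a

    def forward(self, conv_out, real_input):
        return conv_out + self.scale * self.align(real_input)

el = ElasticLink(in_ch=64, out_ch=16)
a = torch.randn(2, 64, 8, 8)            # real-valued input to the binarized 1x1 conv
y = torch.randn(2, 16, 8, 8)            # output of that conv
print(el(y, a).shape)                   # torch.Size([2, 16, 8, 8])
```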
| 28 |
+
|
| 29 |
+
Moreover, as shown in Fig. 2, the design of the EL module is simple and can be directly applied to existing modern architectures. To assess the effectiveness of EL, we conduct extensive experiments on the ImageNet dataset. We outperform the current state-of-the-art result with a top-1 accuracy of 68.9%. We also contribute comprehensive ablation studies and discussions to further understanding of the intrinsic characteristics of BNNs.
|
| 30 |
+
|
| 31 |
+
# Method
|
| 32 |
+
|
| 33 |
+
In this section, we first revisit the standard process for training BNNs, then subsequently introduce a novel module, "Elastic-Link" (EL), to reduce the information loss in BNNs. Lastly, we demonstrate the EL module on Bottleneck ResNet (He et al. 2016) and MobileNet (Howard et al. 2017).
|
| 34 |
+
|
| 35 |
+
It is standard to use the Sign function to binarize a CNN. Real values are converted to the binary set of $\{-1, +1\}$ by the following equation:
|
| 36 |
+
|
| 37 |
+
$$Sign(x) = \begin{cases} +1 & \text{if } x \ge 0\\ -1 & \text{otherwise} \end{cases} \tag{1}$$
|
| 38 |
+
|
| 39 |
+
where *x* refers to a real-valued weight or input/activation. To facilitate training, binarization is typically executed on the fly and only the real-valued weights are updated by the gradients, as described in (Courbariaux et al. 2016; Rastegari et al. 2016). During inference, the real-valued weights are unused and binary weights are used as a drop-in replacement.
|
| 40 |
+
|
| 41 |
+
In the backward pass, since the *Sign* function has zero gradient almost everywhere (and is non-differentiable at zero), an approximation is used. In this work, we follow the conventional "straight through estimator" (STE) (Bengio, Léonard, and Courville 2013) unless otherwise stated. The approximated gradient in STE is formulated as:
|
| 42 |
+
|
| 43 |
+
$$\frac{\partial Sign(x)}{\partial x} = \begin{cases} 1 & \text{if } -1 \le x \le 1\\ 0 & \text{otherwise} \end{cases}$$
|
| 44 |
+
(2)
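
To make the forward/backward pair concrete, here is a minimal PyTorch sketch of Eqs. 1 and 2; the class name `BinarizeSTE` is ours and the snippet is illustrative rather than the authors' implementation.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Sign binarization (Eq. 1) with the straight-through estimator (Eq. 2)."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        # Map real values to {-1, +1}; x = 0 is sent to +1 as in Eq. 1.
        return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Pass gradients through only where |x| <= 1, zero elsewhere (Eq. 2).
        return grad_output * (x.abs() <= 1).to(grad_output.dtype)
```

During training, the real-valued `x` (weights or activations) keeps receiving these approximated gradients, while the binarized output is what the convolution actually consumes.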
|
| 45 |
+
|
| 46 |
+
As proposed in XNOR-Net (Rastegari et al. 2016), a binary convolutional operation can be given as follows:
|
| 47 |
+
|
| 48 |
+
$$BinConv\left(\mathbf{A}, \mathbf{W}\right) \approx \left(Sign\left(\mathbf{A}\right) \otimes Sign\left(\mathbf{W}\right)\right) \odot \mathbf{K}\alpha$$
|
| 49 |
+
(3)
|
| 50 |
+
|
| 51 |
+
where $\mathbf{A} \in \mathbb{R}^{c \times h \times w}$ is the real-valued input activation and $\mathbf{W} \in \mathbb{R}^{c \times k_h \times k_w}$ is the real-valued convolutional kernel. Here $(c, h, w, k_h, k_w)$ refer to the number of input channels, input height, input width, kernel height, and kernel width respectively. $\otimes$ denotes the efficient XNOR-Bitcounting operation (Rastegari et al. 2016) that replaces the time-consuming arithmetic operations.
|
| 52 |
+
|
| 53 |
+
$\alpha$ is a scaling factor given by the vector L1-normalization $\alpha = \frac{1}{n} \|\mathbf{W}\|_{\ell_1}$, which helps minimize the L2 error between the real-valued weights and the binary weights with scalar coefficient $\alpha$. $\mathbf{K}$ is a two-dimensional scaling matrix for the input activation, whose shape corresponds to the convolutional output. It is given by setting each element with the same principle as $\alpha$. XNOR-Net concluded that $\alpha$ is more effective than $\mathbf{K}$, which can even be entirely ignored for simplicity due to the relatively small improvement realized. Similarly, Liu et al. (2018) and Lin, Zhao, and Pan (2017) validated the effectiveness of $\alpha$ across various networks and datasets. Recently, Bethge et al. (2019) found that these scaling factors did not result in accuracy gains when BatchNorm (Ioffe and Szegedy 2015) is applied after each convolutional layer. In our experiments, we did observe the same experimental phenomenon with the Basic-Block ResNet, which is constructed with only $3 \times 3$ convolutions. However, we find that this principle holds only for $3 \times 3$ convolutions. When binarizing $1 \times 1$ convolutions, the scaling factor is still influential, which will be elaborated on in a later section.
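
A minimal sketch of Eq. 3 without the activation scaling matrix $\mathbf{K}$ (which, as noted above, is often dropped); the per-filter computation of $\alpha$ and the use of a plain `conv2d` in place of the XNOR-Bitcounting kernel are simplifications on our part.

```python
import torch
import torch.nn.functional as F

def binary_conv(a_real, w_real, stride=1, padding=0):
    """Eq. 3 sketch: BinConv(A, W) ~ (Sign(A) conv Sign(W)) rescaled by alpha."""
    a_bin = torch.sign(a_real)                     # Sign(A); note torch.sign(0) = 0 here
    w_bin = torch.sign(w_real)                     # Sign(W)
    # One alpha per output filter: mean absolute value of its real-valued weights.
    n = w_real[0].numel()
    alpha = w_real.abs().flatten(1).sum(dim=1) / n
    out = F.conv2d(a_bin, w_bin, stride=stride, padding=padding)
    return out * alpha.view(1, -1, 1, 1)
```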
|
| 54 |
+
|
| 55 |
+
Inspired by the shortcut connection mechanism, we use an element-wise summation operation to add real-valued input features to the output features generated by a binary convolution. In our Elastic-Link module, instead of an identity shortcut, we apply either a Squeeze or an Expand operation when a convolution alters the feature shape. A diagram of our proposed Elastic-Link module is shown in Fig. 2. Formally, let $X_r$ denote the real-valued input feature where $\mathbf{X_r} \in \mathbb{R}^{H_i \times W_i \times C_i}$ . We binarize $X_r$ through a Sign activation function and obtain the binary $X_b$ . Next, a binary convolution and standard BatchNorm are applied to obtain the convolutional output feature $Y_r^n \in \mathbb{R}^{H_o \times W_o \times C_o}$ .
|
| 56 |
+
|
| 57 |
+
In order to compensate for the information loss, we add the real-valued input $X_r$ to the normalized convolutional output $Y_r^n$. If the input size is equal to the convolutional output size, an identity shortcut connection with element-wise summation is applied as proposed in Bi-Real net (Liu et al. 2018). However, this condition is rarely true. In practice, a convolution operation usually changes the number of channels, and occasionally changes the height and width as well. In the channel-reduction case, we design a *Squeeze* operation in which the real-valued input $X_r$ is split into multiple groups along the channel axis without overlap. The number of channels for each group is $\lceil \frac{C_i}{C_o} \rceil$. We additionally zero-pad the features on input $X_r$ to ensure that $C_i$ can be exactly divided by $C_o$. Next, we sum these feature groups together to yield the squeezed features, which will be of the same shape as $Y_r^n$. To reduce the effect of amplitude increase from the summation, and to offer a self-balancing tradeoff between information preservation and transformation, we divide the squeezed feature by a learnable scalar $\gamma$ initialized as the number of groups. We take a similar approach for the channel expansion case. In an Expand operation, the real-valued input feature is repeated several times and then concatenated to match the feature size of the convolutional output. The expanded feature is correspondingly divided by the same learnable factor $\gamma$. Finally, the output of the Squeeze or Expand operation is added to the convolutional output feature $Y_r^n$, giving the overall output of the binarized convolution module. If spatial downsampling is required, a $2\times 2$ max-pooling with stride 2 is applied before Squeeze or Expand to ensure spatial compatibility. The Elastic-Link module is formulated as follows:
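
The following sketch shows how the Squeeze, Expand, and Identity cases described above could be implemented; the function name and the exact padding/slicing details are our assumptions, not the authors' code.

```python
import math
import torch
import torch.nn.functional as F

def squeeze_expand_identity(x_r, c_out, gamma):
    """SEI branch: align the channels of the real-valued input x_r (N, C_i, H, W)
    to c_out channels and divide by the learnable scalar gamma."""
    c_in = x_r.size(1)
    if c_in == c_out:                              # Identity shortcut
        return x_r / gamma
    if c_in > c_out:                               # Squeeze: zero-pad, group, and sum
        groups = math.ceil(c_in / c_out)
        pad = groups * c_out - c_in
        x = F.pad(x_r, (0, 0, 0, 0, 0, pad))       # pad the channel dimension
        x = x.view(x.size(0), groups, c_out, x.size(2), x.size(3))
        return x.sum(dim=1) / gamma
    repeats = math.ceil(c_out / c_in)              # Expand: repeat channels and truncate
    x = x_r.repeat(1, repeats, 1, 1)[:, :c_out]
    return x / gamma
```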
|
| 58 |
+
|
| 59 |
+
$$EL(\mathbf{X_r}, \mathbf{W}, \gamma) = BN\left(BinConv\left(\mathbf{X_r}, \mathbf{W}\right)\right) + SEI\left(\mathbf{X_r}, \gamma\right)$$
|
| 60 |
+
(4)
|
| 61 |
+
|
| 62 |
+
Where $\mathbf{X_r}$ refers to the real-valued input activation and $\mathbf{W}$ refers to the convolutional weight. BN and Sign refer to the BatchNorm and Sign function respectively. SEI refers to Squeeze, Expand or Identity operation, depending on the ratio of input and output channels. $\gamma$ is the aforementioned learnable parameter that balances information preservation and transformation. We initialize $\gamma$ by the following equation and optimize it through back-propagation:
|
| 63 |
+
|
| 64 |
+
$$\gamma = \begin{cases} \lceil \frac{C_i}{C_o} \rceil, & C_i \ge C_o \\ \lceil \frac{C_o}{C_i} \rceil, & C_i < C_o \end{cases}$$
|
| 68 |
+
(5)
|
| 69 |
+
|
| 70 |
+
Where $C_i$ and $C_o$ refer to the number of channels for the input and output of a convolution respectively. $\lceil \cdot \rceil$ denotes the ceiling (round-up) operation. The max-pooling operation in the downsample case as well as the additional ReLU for efficient networks are omitted here for clarity.
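
Putting Eqs. 4 and 5 together, a stride-1 EL convolution could look roughly as follows, reusing the `BinarizeSTE` and `squeeze_expand_identity` sketches above; the max-pooling for the downsampling case and the per-filter $\alpha$ scaling are omitted, and the module name is ours.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class ELConv(nn.Module):
    """Eq. 4 sketch: BN(BinConv(X_r, W)) + SEI(X_r, gamma)."""

    def __init__(self, c_in, c_out, kernel_size=1, padding=0):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, kernel_size, padding=padding, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        # Eq. 5: initialize gamma from the input/output channel ratio.
        init = math.ceil(c_in / c_out) if c_in >= c_out else math.ceil(c_out / c_in)
        self.gamma = nn.Parameter(torch.tensor(float(init)))
        self.c_out = c_out

    def forward(self, x_r):
        x_b = BinarizeSTE.apply(x_r)                   # binarize activations
        w_b = BinarizeSTE.apply(self.conv.weight)      # binarize weights
        y = self.bn(F.conv2d(x_b, w_b, padding=self.conv.padding))
        return y + squeeze_expand_identity(x_r, self.c_out, self.gamma)
```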
|
| 71 |
+
|
| 72 |
+
**Instantiations.** The Elastic-Link module easily plugs into many modern architectures. Taking Bottleneck ResNet as an example, we integrate the Elastic-Link module into ResNet26 which consists of 8 bottleneck blocks. All bottleneck blocks are replaced by EL-Bottlenecks, as depicted in Fig. 3. The first convolution (of kernel size $7 \times 7$ ) and the classification layers remain full-precision to keep essential information at the input and output of the whole network. The downsampling shortcut in the first block of each stage, originally a $1 \times 1$ convolution of stride 2, is replaced by a $2 \times 2$ average pooling with stride 2 followed by a $1 \times 1$ convolution in full-precision. By integrating an EL module into the first $1 \times 1$ convolution of each *Bottleneck* block, more full-precision information flows to the middle $3 \times 3$ convolution, which is essential for capturing features at larger receptive fields.
|
| 73 |
+
|
| 74 |
+
Next we apply the EL module to efficient networks composed of separable convolutions. To the best of our knowledge, efficient networks have so far been considered incompatible with binarization. MobileNet (Howard et al. 2017) is one of the most representative efficient architectures. By adding Elastic-Link to each pointwise convolution and depthwise convolution (see Fig. 3), we are able to overcome
|
| 75 |
+
|
| 76 |
+
|
| 77 |
+
|
| 78 |
+
Figure 3: Schema of EL-Bottleneck module (Left) and EL-MobileNet module (Right).
|
| 79 |
+
|
| 80 |
+
the non-convergence problem typically encountered in training binarized MobileNet. We additionally find that keeping the ReLU activation achieves better performance. Similar to the ResNet case, we keep the first convolution, classifier and downsample components at full-precision.
|
| 81 |
+
|
| 82 |
+
For the Elastic-Link module to be considered practical, it must offer a good tradeoff between improved performance and additional computational burden. To illustrate the increased complexity associated with the Elastic-Link module, we compare Bi-ResNet50 with EL-ResNet50. The additional computational cost incurred by the EL module originates from the $\gamma$ scaling after the Squeeze, Expand or Identity operation as well as the element-wise summation of each $1 \times 1$ convolution, because the *Squeeze* and *Expand* can be implemented with address mapping without any overhead. In total, EL-ResNet50 requires an extra ∼8M FLOPs over Bi-ResNet50's ∼300M FLOPs for a single forward pass with an input image of $224 \times 224$ , corresponding to a 2.6% increase. The number of FLOPs is computed as described in (Liu et al. 2018). For a practical comparison, we use the BMXNet library (Yang et al. 2017) on an Intel Core i7-9700K CPU to measure the actual time taken. Bi-Real-ResNet50 takes on average 22.2 ms for a single forward pass (over 10 runs), compared to 22.9 ms for our proposed EL-ResNet50. We believe that this small additional cost is justified by the increase in model performance.
|
2201.02233/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2201.02233/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,106 @@
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Neural style transfer aims at rendering a content image with style patterns from a reference image. The pioneering style transfer algorithm is proposed by Gatys [@o1; @o2], which iteratively optimizes an image with perceptual losses. To accelerate the stylization process, a number of feed-forward-network-based approaches [@f1; @f2; @f3; @f4; @f5; @f6; @f7; @f8; @f9] flourished. These methods render the content image with a single forward propagation but cannot generalize well to unseen styles. Arbitrary style transfer methods [@a1; @a2; @a3; @a5; @a6] further extend the feed-forward network to arbitrary styles. These flexible yet efficient methods have received widespread attention from academia and industry.
|
| 4 |
+
|
| 5 |
+
Recently, the style attentional network (SANet)[@att1] and its follow-up works [@att2; @att3] have achieved state-of-the-art performance. With a learnable kernel, these methods compute pair-wise similarities to generate the attention map, which serves as a per-point feature transformation for fine-grained stylization. Still, the attentional style transfer methods are imperfect: they tend to stylize the semantic regions inconsistently. As demonstrated in the first row of [\[fig:fig1\]](#fig:fig1){reference-type="ref+label" reference="fig:fig1"}, the sky region is rendered by highly different style patterns from various semantic regions, causing visual artifacts. According to [@mani1], each semantic region corresponds to a feature manifold, and the features follow a multi-manifold distribution. To render a content region consistently, features from it should only be stylized by those from the most related style manifold. However, since the per-point transformation fails to capture the manifold distributions, features of the sky manifold are stylized independently, leading to chaotic results.
|
| 6 |
+
|
| 7 |
+
To address this problem, we want the attention module to learn a manifold-aware measurement, which consistently matches features between related content and style manifolds. Unfortunately, since the content semantics and style semantics are inherently dissimilar, their manifold distributions are heterogeneous. It is difficult for the attention module to learn the desired measurement. Therefore, we adopt manifold alignment to reveal the cross-manifold correspondence. We align each content manifold to its most related style manifold, thus increasing the structural similarity between them. Afterward, features from the corresponding content and style manifolds are close in space, making the attention module match them consistently. The existing manifold alignment style transfer method [@mani1] employs a global channel transformation for stylization. Instead of aligning the related content and style manifolds individually, this method aligns the multi-manifold distributions as a whole. As a result, it is not suitable to work with the attention module.
|
| 8 |
+
|
| 9 |
+
In this paper, we proposed the progressive attentional manifold alignment (PAMA) framework, which performs attention operations and space-aware interpolations multiple times. The proposed PAMA dynamically aligns each of the content manifolds to their most related style manifolds, enabling the consistent attention mechanism between regions. Firstly, the attention operation is employed to rearrange the style features according to the spatial structure of content features. By matching content and style features, the related content manifolds and style manifolds correspond on the feature map. Afterward, the space-aware interpolation fuses the corresponding manifolds with adaptive weights. The interpolation can increase the structural similarity of the corresponding manifolds dynamically, making the attention module easier to match features between them. However, a single alignment cannot build a strong enough correspondence between manifolds for the attention module. Therefore we repeat the manifold alignment process multiple times and employ multistage loss functions. An image reconstruction loss is also used to maintain the shared space for manifold alignment. Our contributions can be summarized as:
|
| 10 |
+
|
| 11 |
+
- We proposed a new arbitrary style transfer framework named PAMA, which gradually aligns content manifolds to style manifolds with the attention mechanism for consistent stylization between semantic regions.
|
| 12 |
+
|
| 13 |
+
- A multistage loss function is designed to enable progressive manifold alignment while preserving the regional consistency. We also adopt an image reconstruction loss to maintain the shared space for manifold alignment.
|
| 14 |
+
|
| 15 |
+
- Experiments show that the proposed framework can generate fine-grained stylization results in real time (101 fps for 512px images on a Tesla V100 GPU).
|
| 16 |
+
|
| 17 |
+
# Method
|
| 18 |
+
|
| 19 |
+
In this section, we will explain the inconsistent stylization phenomenon from a manifold perspective. According to [@mani1], the image features follow a multi-manifold distribution that each semantic region corresponds to a manifold. If there were $m$ semantic regions in the content image, we can divide the content features $F_c$ into $m$ subsets corresponding to the manifolds: $$\begin{equation}
|
| 20 |
+
\label{eq:eq1}
|
| 21 |
+
F_c = \cup_{i=1}^m F_{c,i} \ , F_c \subseteq \mathbb{R}^{C \times H_cW_c}, F_{c,i} \subseteq \mathbb{R}^{C\times M_i}
|
| 22 |
+
\end{equation}$$ where $F_{c,i}$ denotes the $i$-th subset (corresponding to the $i$-th manifold) which consists of $M_i$ features. The style features $F_s$ follows the same definition: $$\begin{equation}
|
| 23 |
+
\label{eq:eq2}
|
| 24 |
+
F_s = \cup_{i=1}^n F_{s,i} \ , F_s \subseteq \mathbb{R}^{C \times H_sW_s}, F_{s,i} \subseteq \mathbb{R}^{C\times N_i}
|
| 25 |
+
\end{equation}$$ here we have defined the content features and style features in a manifold perspective.
|
| 26 |
+
|
| 27 |
+
The inconsistency happens because the per-point rendering of the attention mechanism neglects the multi-manifold distribution. Suppose the $p$-th content manifold is most related to the $q$-th style manifold, which means that features from $F_{c,p}$ should only match with features from $F_{s,q}$ in attention. For any content feature $f_c^x$ from $F_c$, if it belongs to $F_{c,p}$, we can write the attention operation as: $$\begin{equation}
|
| 28 |
+
\label{eq:eq3}
|
| 29 |
+
\begin{aligned}
|
| 30 |
+
\hat f_s^x & = \sum_{i=1}^{H_sW_s} sim(f_c^x, f_s^i)f_s^i \\
|
| 31 |
+
& = \sum_{i=1}^{N_q} sim(f_c^x, f_{s,q}^i)f_{s,q}^i+\eta(f_c^x, F_s)
|
| 32 |
+
\end{aligned}
|
| 33 |
+
\end{equation}$$ where $f_s^i$ denotes the $i$-th feature from $F_s$, and the $f_{s,q}^i$ refers to the $i$-th feature from the $q$-th subset of $F_s$. The learned similarity kernel of the attention mechanism is abbreviated as $sim(\cdot, \cdot)$. The $\eta(\cdot, \cdot)$ is the mismatch term measuring the inconsistency. This equation suggests that features from a content manifold should only be stylized by features from the most related style manifold, and any out-of-manifold rendering is considered a mismatch. If the mismatch term $\eta$ is relatively high, the stylization result tends to be inconsistent. To reduce the mismatch term and enable a consistent attention module, we propose the progressive attentional manifold alignment (PAMA) to align the content manifolds to their most related style manifolds.
|
| 34 |
+
|
| 35 |
+
<figure id="fig:fig2" data-latex-placement="t">
|
| 36 |
+
<embed src="figure/figure2.pdf" />
|
| 37 |
+
<figcaption>The architecture of our network. The content manifolds are gradually aligned to the style manifolds with three independent attentional manifold alignment (AMA) blocks. The dashed lines are only forwarded during training to generate the intermediate results for loss calculation.</figcaption>
|
| 38 |
+
</figure>
|
| 39 |
+
|
| 40 |
+
[1](#fig:fig2){reference-type="ref+label" reference="fig:fig2"} shows the proposed progressive attentional manifold alignment (PAMA) framework. Our method uses a pre-trained VGG [@vgg] network to encode the content image $I_c$ and style image $I_s$ to obtain the $ReLU4\_1$ features $F_c$ and $F_s$. The attentional manifold alignment (AMA) block consists of an attention module and a space-aware interpolation module. Passed through three AMA blocks, the content feature $F_c$ gradually fuses style information, during which the content manifolds are aligned to the style manifolds. In this way, the attention mechanism can capture the manifold distributions and render the semantic regions consistently. Finally, the aligned content feature will be fed into the decoder to generate the stylized image. The structure of the decoder is symmetric to the encoder. Notice that the intermediate stylized features are only decoded during training to calculate the multistage perceptual loss (dashed lines in [1](#fig:fig2){reference-type="ref+label" reference="fig:fig2"}).
|
| 41 |
+
|
| 42 |
+
<figure id="fig:fig3" data-latex-placement="t">
|
| 43 |
+
<embed src="figure/figure3.pdf" style="width:80.0%" />
|
| 44 |
+
<figcaption>The attention mechanism changes the original VGG<span class="citation" data-cites="vgg"></span> space. The first row is generated by the proposed PAMA; the second row shows the result of SANet<span class="citation" data-cites="att1"></span>.</figcaption>
|
| 45 |
+
</figure>
|
| 46 |
+
|
| 47 |
+
The attentional transformation in existing methods like [@att1; @att2; @att3] changes the original VGG [@vgg] space of the style features. [2](#fig:fig3){reference-type="ref+label" reference="fig:fig3"} demonstrates that the decoder of SANet fails to parse the original content and style features in the VGG space. This characteristic is devastating for our progressive alignment: the attention module would need to learn a cross-space similarity measurement, making the inconsistency problem even worse. To constrain all of the features to the VGG space, we employ an image reconstruction loss: $$\begin{equation}
|
| 48 |
+
\label{eq:eq4}
|
| 49 |
+
\begin{aligned}
|
| 50 |
+
& L_{rec} = \lambda(\|(I_{rc}-I_c)\|_2 + \|(I_{rs}-I_s)\|_2) + \\
|
| 51 |
+
& \sum_{i}(\|\phi_i(I_{rc})-\phi_i(I_c)\|_2 + \|\phi_i(I_{rs})-\phi_i(I_s)\|_2)
|
| 52 |
+
\end{aligned}
|
| 53 |
+
\end{equation}$$ where $I_{rc}$ and $I_{rs}$ are the content and style image reconstructed from VGG features, and the $\lambda$ is a constant weight. The $\phi_i(I)$ refers to the $ReLU\_i\_1$ layer VGG feature of image $I$. Since this loss forces the decoder to reconstruct VGG features, all features between the encoder and decoder are restricted in the VGG space. In the manifold alignment perspective, this loss function maintains the shared space for the alignment between content and style manifolds.
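
As a rough illustration of how this loss could be computed for one image, the sketch below assumes a `vgg_feats` function returning the list of $ReLU\_i\_1$ features and a `decoder` mapping the $ReLU4\_1$ feature back to an image; both names are placeholders, and the loss above sums this quantity over $I_c$ and $I_s$.

```python
import torch

def reconstruction_loss(decoder, vgg_feats, image, lam=1.0):
    """One-image sketch of the reconstruction loss; the full loss sums it over I_c and I_s."""
    feats = vgg_feats(image)                     # [ReLU1_1, ..., ReLU4_1] features of the input
    recon = decoder(feats[-1])                   # reconstruct the image from its ReLU4_1 feature
    loss = lam * torch.norm(recon - image)       # pixel term, weighted by lambda
    for f_rec, f in zip(vgg_feats(recon), feats):
        loss = loss + torch.norm(f_rec - f)      # perceptual terms over the ReLU_i_1 layers
    return loss
```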
|
| 54 |
+
|
| 55 |
+
For the progressive manifold alignment, the overall loss function consists of multiple stages. In each stage, the loss is a weighted summation: $$\begin{equation}
|
| 56 |
+
\label{eq:eq5}
|
| 57 |
+
L = \sum_{i=1}(\lambda_{ss}^i L_{ss} + \lambda_r^i L_r + \lambda_m^i L_m + \lambda_{h}^i L_{h})+L_{rec}
|
| 58 |
+
\end{equation}$$ where the $\lambda_x^i$ denotes the weight for $L_x$ in the $i$-th stage. The $L_{ss}$ is the content loss, while the $L_r$, $L_m$, and $L_h$ serve as the style losses. The initial value of the content weight $\lambda_{ss}^1$ is relatively high to preserve the content manifold structure. In the next stages, the content weight decreases gradually to encourage more vivid style patterns.
|
| 59 |
+
|
| 60 |
+
Our content loss is based on the structure self-similarity descriptor[@remd1] between the content feature $F_c$ and the VGG feature of stylized image $F_{cs}$: $$\begin{equation}
|
| 61 |
+
\label{eq:eq6}
|
| 62 |
+
L_{ss} = \frac{1}{H_c W_c} \sum_{i,j} |\frac{D_{ij}^c}{\sum_i D_{ij}^c}-\frac{D_{ij}^{cs}}{\sum_j D_{ij}^{cs}}|
|
| 63 |
+
\end{equation}$$ where $D_{ij}^c$ and $D_{ij}^{cs}$ are the pairwise cosine distance matrices of $F_c$ and $F_{cs}$ respectively. This loss function is effective for manifold structure preservation, because it regulates the feature correlation within manifolds to be invariant.
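
A small sketch of the self-similarity loss on $C \times H_cW_c$ feature matrices; the cosine-distance construction and the epsilon guard are our assumptions about details not spelled out above.

```python
import torch
import torch.nn.functional as F

def cosine_distance_matrix(feat):
    """Pairwise cosine distances between the columns (spatial positions) of a C x N feature."""
    f = F.normalize(feat, dim=0)
    return 1.0 - f.t() @ f

def self_similarity_loss(f_c, f_cs):
    """Eq. 6 sketch: compare normalized self-similarity patterns of F_c and F_cs."""
    d_c = cosine_distance_matrix(f_c)
    d_cs = cosine_distance_matrix(f_cs)
    d_c = d_c / (d_c.sum(dim=0, keepdim=True) + 1e-8)     # normalize D^c over i
    d_cs = d_cs / (d_cs.sum(dim=1, keepdim=True) + 1e-8)  # normalize D^cs over j
    return (d_c - d_cs).abs().sum() / d_c.size(0)         # average over the H_c * W_c positions
```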
|
| 64 |
+
|
| 65 |
+
Following the setting of [@remd1; @remd2], we adopt the relaxed earth mover's distance (REMD): $$\begin{equation}
|
| 66 |
+
\label{eq:eq7}
|
| 67 |
+
L_{r} = \max (\frac{1}{H_sW_s}\sum_i \min_j C_{ij}, \frac{1}{H_cW_c}\sum_j \min_i C_{ij})
|
| 68 |
+
\end{equation}$$ where the $C_{ij}$ denotes the pair-wise cosine distance matrix between $F_{cs}$ and $F_s$. As suggested in [@remd2], this loss function optimizes along the manifold surface of style features, which works well with our manifold alignment process. We also added the moment matching loss to regularize the magnitude of features: $$\begin{equation}
|
| 69 |
+
\label{eq:eq8}
|
| 70 |
+
L_m = \|\mu_{cs} - \mu_s\|_1 + \|\Sigma_{cs} - \Sigma_s\|_1
|
| 71 |
+
\end{equation}$$ where $\mu$ and $\Sigma$ denote the mean and covariance matrix of the feature vectors.
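
The style terms can be sketched on $C \times N$ feature matrices as below; using cosine distances for $C_{ij}$ and mean reductions in place of raw $\ell_1$ sums are our simplifications.

```python
import torch
import torch.nn.functional as F

def remd_and_moment_losses(f_cs, f_s):
    """Eq. 7 (relaxed EMD) and Eq. 8 (moment matching) on C x N feature matrices."""
    cost = 1.0 - F.normalize(f_cs, dim=0).t() @ F.normalize(f_s, dim=0)   # pairwise distances C_ij
    l_r = torch.max(cost.min(dim=1).values.mean(),      # best style match per content position
                    cost.min(dim=0).values.mean())      # best content match per style position
    mu_cs, mu_s = f_cs.mean(dim=1), f_s.mean(dim=1)
    l_m = (mu_cs - mu_s).abs().mean() + (torch.cov(f_cs) - torch.cov(f_s)).abs().mean()
    return l_r, l_m
```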
|
| 72 |
+
|
| 73 |
+
Although our proposed PAMA generates high-quality stylized images, it sometimes outputs color-mixed images. The self-similarity loss causes this limitation: it forces the attention mechanism to render a region overly uniformly, making it mix the style patterns. To fix this, we adopt the differentiable color histogram loss proposed in HistoGAN [@hist]: $$\begin{equation}
|
| 74 |
+
\label{eq:eq9}
|
| 75 |
+
L_{h} = \frac{1}{\sqrt 2} \| H_s^{1/2} - H_{cs}^{1/2} \|_2
|
| 76 |
+
\end{equation}$$ where $H$ refers to the color histogram feature and $H^{1/2}$ denotes its element-wise square root. At the expense of a little consistency, this loss function can reduce the color-mixing problem.
|
| 77 |
+
|
| 78 |
+
<figure id="fig:fig4" data-latex-placement="t">
|
| 79 |
+
<embed src="figure/figure4.pdf" />
|
| 80 |
+
<figcaption>The attentional manifold alignment (AMA) block. The attention module generates the attention map <span class="math inline"><em>A</em><sub><em>c</em><em>s</em></sub></span> to rearrange the style features. The space-aware interpolation condenses the channel information to obtain the adaptive weight <span class="math inline"><em>W</em></span>, which is applied to interpolate between the content feature <span class="math inline"><em>F</em><sub><em>c</em></sub></span> and the rearranged style feature <span class="math inline"><em>F̂</em><sub><em>s</em></sub></span>.</figcaption>
|
| 81 |
+
</figure>
|
| 82 |
+
|
| 83 |
+
[3](#fig:fig4){reference-type="ref+label" reference="fig:fig4"} demonstrates the attention manifold alignment block, which consists of an attention module and a space-aware interpolation module.
|
| 84 |
+
|
| 85 |
+
**Attention Module**. In the attention module, the content and style features are firstly normalized and embedded to compute the attention map: $$\begin{equation}
|
| 86 |
+
\label{eq:eq10}
|
| 87 |
+
A_{cs} = softmax(f(Norm(F_c))^T \otimes g(Norm(F_s)))
|
| 88 |
+
\end{equation}$$ where the $f(\cdot)$ and $g(\cdot)$ denote 1x1 convolution blocks for feature embedding, the $Norm(\cdot)$ refers to the mean-variance normalization, and the $\otimes$ is the matrix multiplication. The attention map contains the pair-wise similarities between the content features and style features. Then the attention map serves as an affine transformation to spatially rearrange the style features: $$\begin{equation}
|
| 89 |
+
\label{eq:eq11}
|
| 90 |
+
\hat F_s = \theta(h(F_s)^T \otimes A_{cs})
|
| 91 |
+
\end{equation}$$ again, $h(\cdot)$ and $\theta(\cdot)$ are 1x1 convolution blocks for feature embedding.
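
A compact PyTorch sketch of Eqs. 10 and 11; the embedding widths, the batched matrix products, and the exact placement of $\theta(\cdot)$ after reshaping are our assumptions.

```python
import torch
import torch.nn as nn

class AttentionModule(nn.Module):
    """Eq. 10-11 sketch: attention map from normalized embeddings, then rearrange F_s."""

    def __init__(self, channels):
        super().__init__()
        self.f = nn.Conv2d(channels, channels, 1)      # f(.)
        self.g = nn.Conv2d(channels, channels, 1)      # g(.)
        self.h = nn.Conv2d(channels, channels, 1)      # h(.)
        self.theta = nn.Conv2d(channels, channels, 1)  # theta(.)

    @staticmethod
    def _norm(x):
        # Mean-variance normalization of each channel over its spatial positions.
        return (x - x.mean(dim=(2, 3), keepdim=True)) / (x.std(dim=(2, 3), keepdim=True) + 1e-6)

    def forward(self, f_c, f_s):
        b, c, hc, wc = f_c.shape
        q = self.f(self._norm(f_c)).flatten(2)                    # B x C x HcWc
        k = self.g(self._norm(f_s)).flatten(2)                    # B x C x HsWs
        v = self.h(f_s).flatten(2)                                 # B x C x HsWs
        attn = torch.softmax(q.transpose(1, 2) @ k, dim=-1)        # Eq. 10: B x HcWc x HsWs
        f_s_hat = (v @ attn.transpose(1, 2)).view(b, c, hc, wc)    # rearranged style feature
        return self.theta(f_s_hat)                                 # Eq. 11
```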
|
| 92 |
+
|
| 93 |
+
The attention module is exactly the same as the one in [@att1; @att2], which produces inconsistent results. In the attention module of PAMA, we aim at finding the correspondence between related content and style semantic regions while minimizing the mismatch term $\eta$ in [\[eq:eq3\]](#eq:eq3){reference-type="ref+label" reference="eq:eq3"}. Therefore, we adopt a high initial value of the self-similarity loss ([\[eq:eq6\]](#eq:eq6){reference-type="ref+label" reference="eq:eq6"}) and decrease it gradually during the multi-stage manifold alignment. In the first stage, the high self-similarity loss encourages the manifold structure of the rearranged style features $\hat F_s$ to be the same as that of the content feature $F_c$, thus forcing the attention module to match the features consistently. [4](#fig:fig5){reference-type="ref+label" reference="fig:fig5"} (a) shows that the style features $F_s$ are rearranged according to the spatial structure of content features $F_c$, and the related semantic regions correspond on the feature map. With this correspondence, the space-aware interpolation module can increase the structural similarity between the related manifolds. In the next stages, even if we decrease the self-similarity loss for more vivid style patterns, the attention module will reveal the relationship between manifolds and render the features consistently. We verify this in the experiments section with [7](#fig:fig8){reference-type="ref+label" reference="fig:fig8"}.
|
| 94 |
+
|
| 95 |
+
**Space-aware Interpolation Module.** This module adaptively interpolates between $F_c$ and $\hat F_s$ with regional information. Initially, the channel dense operation applies convolution kernels of different scales to the concatenated feature to summarize multi-scale regional information: $$\begin{equation}
|
| 96 |
+
\label{eq:eq12}
|
| 97 |
+
W = \frac{1}{n}\sum_{i=1}^n \psi_i ([F_c, \hat F_s])
|
| 98 |
+
\end{equation}$$ where $\psi_i (\cdot)$ represent the $i$-th convolution kernel, and the $[\cdot, \cdot]$ denotes the channel concatenation operation. The concatenated feature can help us to identify the differences between the corresponding content and style manifolds, figuring out the local inconsistencies triggered by the attention module. This learnable channel dense operation outputs the scalar spatial weights $W \in \mathbb{R}^{H \times W}$, which is used for interpolation: $$\begin{equation}
|
| 99 |
+
\label{eq:eq13}
|
| 100 |
+
F_{cs} = W \odot F_c +(1-W) \odot \hat F_s
|
| 101 |
+
\end{equation}$$ where $\odot$ denotes element-wise multiplication. Different from the ordinary feature fusion in attentional methods like [@att1; @att2; @att3], the space-aware interpolation fuses features in the same space. Therefore the interpolated content feature will not suffer from degradation (see [2](#fig:fig3){reference-type="ref+label" reference="fig:fig3"}). As shown in [4](#fig:fig5){reference-type="ref+label" reference="fig:fig5"} (b), the interpolation resolves the local distortions. Moreover, the affinity between the corresponding manifolds is increased, making it easier for the attention module to render semantic regions consistently. The stylized feature $F_{cs}$ is then fed into the next attentional alignment block for further refinement.
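
A sketch of the space-aware interpolation; the two kernel scales (3x3 and 5x5) and the sigmoid used to keep $W$ in $[0, 1]$ are assumptions we add for a runnable example.

```python
import torch
import torch.nn as nn

class SpaceAwareInterpolation(nn.Module):
    """Eq. 12-13 sketch: multi-scale channel dense operation, then adaptive interpolation."""

    def __init__(self, channels):
        super().__init__()
        self.psi = nn.ModuleList([
            nn.Conv2d(2 * channels, 1, kernel_size=3, padding=1),   # psi_1
            nn.Conv2d(2 * channels, 1, kernel_size=5, padding=2),   # psi_2
        ])

    def forward(self, f_c, f_s_hat):
        x = torch.cat([f_c, f_s_hat], dim=1)                            # channel concatenation
        w = torch.sigmoid(sum(p(x) for p in self.psi) / len(self.psi))  # spatial weights W (Eq. 12)
        return w * f_c + (1.0 - w) * f_s_hat                            # Eq. 13
```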
|
| 102 |
+
|
| 103 |
+
<figure id="fig:fig5" data-latex-placement="t">
|
| 104 |
+
<embed src="figure/figure5.pdf" />
|
| 105 |
+
<figcaption>The manifold alignment process in the first stage. (a) The attention operation to find the correspondence between manifolds; (b) The interpolation to increase the structural similarity between corresponding manifolds.</figcaption>
|
| 106 |
+
</figure>
|
2201.02263/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2201.02263/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,145 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Stereo matching is a fundamental task in computer vision and is widely used for depth sensing in various applications such as augmented reality (AR), robotics and autonomous driving. In recent years, end-to-end trained Convolutional Neural Networks (CNNs) have achieved impressive results for this task as quantified by the performance on several publicly available stereo-matching benchmarks [@chang2018pyramid; @guo2019group; @kendall2017end; @xu2020aanet; @zhang2019ga].
|
| 4 |
+
|
| 5 |
+
<figure id="fig:motivation">
|
| 6 |
+
|
| 7 |
+
<figcaption>Comparison of disparity maps estimated by PSMNet <span class="citation" data-cites="chang2018pyramid"></span> when it is trained under different settings and across multiple domains. Each column shows the results for a realistic domain namely: KITTI 2015 <span class="citation" data-cites="Menze2018JPRS"></span>, DrivingStereo <span class="citation" data-cites="yang2019drivingstereo"></span>, Oxford Robotcar <span class="citation" data-cites="RobotCarDatasetIJRR"></span> and Middlebury <span class="citation" data-cites="scharstein2014high"></span>. Rows from top to bottom show a sample image (I) , the prediction for the Scene Flow pre-trained model (II), KITTI-15 fine-tuned model (III), and the proposed ITSA optimized method (IV). Comparing these figures shows that PSMNet trained solely on synthetic data performs poorly on real data and fine tuning only improves the result for KITTI dataset (still fails to generalize for other scenarios). The proposed method performs well across the board (best viewed in color).</figcaption>
|
| 8 |
+
</figure>
|
| 9 |
+
|
| 10 |
+
Generally, end-to-end stereo-matching networks require a large amount of labelled data for training. To overcome this challenge, many state-of-the-art networks are initially trained on labelled synthetic data, commonly generated using game engines. However, models trained using synthetic data do not generalize well to unseen realistic domains. For example, the PSMNet [@chang2018pyramid] pre-trained on the Scene Flow dataset [@mayer2016large] performs poorly when tested on unseen realistic domains as illustrated in [1](#fig:motivation){reference-type="ref+label" reference="fig:motivation"}. Therefore, in practice, the networks trained with synthetic data are fine-tuned using labelled data from the relevant target domain. However, collecting even a relatively small amount of dense ground truth data in the real-world can be challenging for tasks like stereo-matching [@tonioni2019learning; @liu2020stereogan]. Furthermore, to be practically useful in many applications, a stereo-matching model should be able to generalize effortlessly to different domains like day and night times, varying weather conditions, etc. Collecting data for fine-tuning that cover all possible situations is both difficult and expensive. It is therefore highly desirable to remove the fine-tuning requirement.
|
| 11 |
+
|
| 12 |
+
It is known that neural networks, including stereo matching networks, can learn superficial shortcut features (or spurious correlations with the target labels), which prevent them from generalizing across different domains [@geirhos2020shortcut; @Beery_2018_ECCV]. We found that stereo matching networks trained on synthetic data are susceptible to exploiting shortcuts in synthetic data such as (1) consistent local statistics (RGB color features) between the left and right stereo images and (2) over-reliance on local chromaticity features (e.g. color, illumination, texture) of the reference stereo viewpoint. Detailed analysis and discussion are included in [4.2](#sec:shortcuts){reference-type="ref+label" reference="sec:shortcuts"}. Dependency on these shortcut cues, instead of the desirable semantic and structural representations, means that these networks would fail drastically when the spurious correlations between shortcuts and labels do not exist in a new (unseen) domain [@recht2019imagenet]. While several shortcut-removal approaches have been previously proposed [@hendrycks2019robustness; @carlucci2019domain; @shi2020informative], most of these methods are manually designed (e.g. carefully selected data augmentations [@hendrycks2019robustness; @carlucci2019domain]) and rely on the assumption that the shortcuts could be identified in advance. However, shortcuts can be non-intuitive, task-specific, and difficult to identify [@dagaev2021too; @minderer2020automatic].
|
| 13 |
+
|
| 14 |
+
Our goal is to train a stereo matching network on synthetic data that can generalize to realistic scenes without the need for fine-tuning. To achieve this, we propose an information-theoretic approach to automatically restrict the shortcut-related information from being encoded from the input into the feature representations. The approach is based on the well known information bottleneck (IB) principle that proposes to optimize the following objective [@tishby2015deep; @alemi2017deepvib]: $$\begin{equation}
|
| 15 |
+
\underset{\theta}{\textup{argmax}~}{I\left( Y, Z; \theta \right) - \beta I\left( X, Z; \theta \right)}
|
| 16 |
+
\label{eqn:IB}
|
| 17 |
+
\end{equation}$$ where $Z$ is the encoding of input $X$, $Y$ is the target, $I$ is mutual information and $\beta \in [0,~1]$ is the hyperparameter that controls the size of the information bottleneck. While optimizing the IB objective leads to compressed feature representations, our empirical experiments showed that these compressed features are neither robust nor shortcut-invariant (details are provided in [3.3.1](#sec:robust-ib){reference-type="ref+label" reference="sec:robust-ib"}). Consequently, the IB optimized networks may still incorporate shortcuts and remain fragile when tested in unseen domains. The recently introduced robust IB criterion [@pensia2020extracting] encourages the learning of both robust and compressive features by replacing the mutual information in IB with statistical Fisher information. Robust IB is presented in the context of learning features that are robust to adversarial attacks and to the best of our knowledge it has not been used for domain generalization.
|
| 18 |
+
|
| 19 |
+
In our approach, we combine the task loss (e.g. smooth L1 loss) with Fisher information to learn a generalizable stereo matching model. Although such an objective can work in theory, straightforward optimization of the Fisher information by gradient descent requires computation of the second-order derivatives and is therefore computationally expensive for tasks with high dimensional inputs such as stereo matching and semantic segmentation. To overcome this shortcoming, we propose ITSA which consists of a novel loss term and perturbation technique to approximate the optimization of the Fisher information loss. The proposed ITSA is computationally efficient, and as we show by extensive experiments, it can promote the learning of shortcut-invariant features. Unlike the existing domain-invariant stereo matching networks [@zhang2020domain; @Shen_2021_CVPR], the proposed ITSA does not involve significant network alteration and is model-agnostic. Therefore, as shown in the experiments section, it can be easily integrated with different stereo matching networks.
|
| 20 |
+
|
| 21 |
+
The empirical results show that stereo-matching networks trained on synthetic data, with the proposed ITSA, can generalise to realistic data without fine-tuning. Additional experiments on challenging out-of-domain stereo datasets (e.g. different adverse weathers and night scenes) show that our method also improves the overall robustness of the stereo matching networks and importantly even outperforms the networks fine-tuned on realistic domains when tested on these challenging datasets. The main contributions of this paper include:
|
| 22 |
+
|
| 23 |
+
- We show that learning feature representations that are less sensitive to input variations can significantly enhance the synthetic to realistic domain generalization, and robustness in stereo matching networks.
|
| 24 |
+
|
| 25 |
+
- We introduce a novel loss function that enables us to minimize the Fisher information, without computing the second-order derivatives.
|
| 26 |
+
|
| 27 |
+
- We also show that the application of the proposed framework is not limited to stereo matching task, and can be used in training models for non-geometry based vision problems such as semantic segmentation.
|
| 28 |
+
|
| 29 |
+
The rest of the paper is organized as follows. [2](#sec:related-works){reference-type="ref+label" reference="sec:related-works"} describes the related work in the field of learning-based stereo matching networks, domain generalization and shortcut learning. [3](#sec:methods){reference-type="ref+label" reference="sec:methods"} presents the proposed method for automatic shortcut avoidance and domain generalization. Experimental results and discussions are presented in [4](#sec:experiments){reference-type="ref+label" reference="sec:experiments"}, and [6](#sec:conclusion){reference-type="ref+label" reference="sec:conclusion"} concludes the paper.
|
| 30 |
+
|
| 31 |
+
# Method
|
| 32 |
+
|
| 33 |
+
<figure id="fig:overview" data-latex-placement="t">
|
| 34 |
+
<embed src="Images/methodology.pdf" style="width:75.0%" />
|
| 35 |
+
<figcaption>An overview of the proposed shortcut-avoidance strategy to achieve domain generalization in stereo matching networks. The parameters are shared across the two feature extractor networks <span class="math inline"><em>f</em><sub><em>θ</em></sub></span> (best viewed in color).</figcaption>
|
| 36 |
+
</figure>
|
| 37 |
+
|
| 38 |
+
In this work, we focus on the synthetic-to-realistic domain generalization for stereo matching. Given a synthetic stereo data set $D_{syn}$ consisting of stereo image pairs $\left \{ x^{(i)}_{syn,l}, x^{(i)}_{syn,r} \right \}_{i=1}^n$ with corresponding ground-truth disparity $\left \{ y^{(i)}_{syn}\right \}_{i=1}^n$, the goal is to design a robust and shortcut-invariant stereo matching network that can accurately predict disparity map $\hat{y}^{(i)}$ for unseen realistic environments $D_{real}$.
|
| 39 |
+
|
| 40 |
+
Our approach to achieve synthetic-to-realistic domain generalization is to use an information-theoretic measure to automatically restrict the shortcut-related information from being included in feature representations.
|
| 41 |
+
|
| 42 |
+
A typical stereo-matching network can be represented by the following equation:
|
| 43 |
+
|
| 44 |
+
::: small
|
| 45 |
+
$$\begin{equation}
|
| 46 |
+
\hat{y}^{(i)} = m_\psi \left (\mathbb{C}\left ( f_\theta \left ( x^{(i)}_{l} \right ), f_\theta \left ( x^{(i)}_{r} \right ) \right ) \right )
|
| 47 |
+
\end{equation}$$
|
| 48 |
+
:::
|
| 49 |
+
|
| 50 |
+
where $f_\theta\left( \cdot \right)$ is the feature extraction sub-network, $\mathbb{C}\left ( \cdot \right)$ the cost volume and $m_\psi \left ( \cdot \right)$ the cost aggregation and refinement sub-network. The refined cost volumes are converted to disparity maps $\hat{y}$ via the soft argmin [@kendall2017end] operation.
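
For reference, the soft argmin that turns a refined cost volume into a disparity map can be sketched as below; the $(B, D, H, W)$ tensor layout is an assumption on our part, and the expectation over softmax-weighted disparities follows Kendall et al. (2017).

```python
import torch

def soft_argmin(cost_volume):
    """cost_volume: (B, D, H, W) matching costs over D candidate disparities."""
    prob = torch.softmax(-cost_volume, dim=1)          # lower cost -> higher weight
    disparities = torch.arange(
        cost_volume.size(1), device=cost_volume.device, dtype=prob.dtype
    ).view(1, -1, 1, 1)
    return (prob * disparities).sum(dim=1)             # (B, H, W) expected disparity
```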
|
| 51 |
+
|
| 52 |
+
Our proposed method (ITSA) can be applied to any stereo-matching network that has the above structure. In the experiments section, we show the result of applying the proposed algorithm to different stereo-matching networks with concatenation cost volumes (we observed similar results with correlation-based methods) [@chang2018pyramid; @guo2019group; @Shen_2021_CVPR]. The high-level structure of the network, including the proposed shortcut-avoidance strategy, is shown in [2](#fig:overview){reference-type="ref+label" reference="fig:overview"}.
|
| 53 |
+
|
| 54 |
+
Our main contribution is the loss function devised to automatically restrict the shortcut-related information from being encoded in the learning process. As we explained earlier, the information bottleneck (IB) principle [@tishby2015deep; @alemi2017deepvib] is typically used to compress features and would be a natural choice to achieve this objective.
|
| 55 |
+
|
| 56 |
+
The standard $\mathcal{L}_\text{IB}$ loss defined in [\[eqn:IB\]](#eqn:IB){reference-type="ref+label" reference="eqn:IB"}, which uses mutual information to quantify information content, was designed to extract features that are both concise and relevant for prediction. However, models trained by this loss are not robust to the existence of artefacts that can generate shortcuts (similar to the adversarial distortions mentioned in [@pensia2020extracting]).
|
| 57 |
+
|
| 58 |
+
To demonstrate the above point, we conducted a toy experiment. In this experiment, we investigated the efficacy of using the IB loss for helping digit recognition networks (DRNs) to generalize from the MNIST (source) [@lecun1998gradient] to the MNIST-M [@ganin2016domain] (target) dataset. The former contains images of handwritten digits with a black background, and the latter is created by combining the MNIST digits with randomly extracted color patches as their background. All networks were trained on the MNIST training set only and the top-1 accuracy ($\%$) was employed for evaluation. The details of the experiment are included in the supplementary document. As shown in [\[tab:toy\]](#tab:toy){reference-type="ref+label" reference="tab:toy"}, the standard IB can effectively reduce overfitting, and achieves the best performance in the source domain. However, it fails to generalize its performance to the unseen domain. Importantly, it even performs worse than the baseline networks in the unseen target domain.
|
| 59 |
+
|
| 60 |
+
As our aim is to develop an IB based cost function that is not susceptible to existence of shortcuts in source data, we take inspiration from the robust IB principle [@pensia2020extracting]. Robust IB utilizes the statistical Fisher information $\Phi(Z|X)$ of the extracted features $Z$ parameterized by the inputs $X$ as a more robust measure of information (in place of $I(Z,X)$). The Fisher information $\Phi(Z|X)$ is defined as:
|
| 61 |
+
|
| 62 |
+
::: small
|
| 63 |
+
$$\begin{equation}
|
| 64 |
+
\Phi(Z|X)=\int_\mathcal{X} \Phi(Z|X=x) p_X(x)dx , \label{eqn:fish}
|
| 65 |
+
\end{equation}$$
|
| 66 |
+
:::
|
| 67 |
+
|
| 68 |
+
where
|
| 69 |
+
|
| 70 |
+
::: small
|
| 71 |
+
$$\begin{equation}
|
| 72 |
+
\Phi(Z|X=x) = \int_\mathcal{Z} \norm{\nabla_x \log{p_{Z|X}(z|x)}}_2^2 p_{Z|X}(z|x)dz.
|
| 73 |
+
\label{eqn:fish_}
|
| 74 |
+
\end{equation}$$
|
| 75 |
+
:::
|
| 76 |
+
|
| 77 |
+
The term $\Phi(Z|X=x)$ in Eq. ([\[eqn:fish\]](#eqn:fish){reference-type="ref" reference="eqn:fish"}, [\[eqn:fish\_\]](#eqn:fish_){reference-type="ref" reference="eqn:fish_"}) can be regarded as the sensitivity of the latent distribution $p_{Z|X}(\cdot|x)$, with respect to changes at the input $x$. Therefore, optimizing the Fisher information, $\Phi(Z|X)$, will minimize the average sensitivity of the latent distribution with respect to change of inputs $X$. As shortcuts are generated by data artefacts that are transient [^1] by nature, they are sensitive to perturbations of input data [@geirhos2020shortcut]. As such, minimizing the Fisher information is a step towards promoting the learning of shortcut-invariant features. Our conjecture is supported by the results of the toy experiment included in [\[tab:toy\]](#tab:toy){reference-type="ref+label" reference="tab:toy"}. The DRNs constrained by the Fisher information (RIB) achieved better performance than the IB networks in the target domain.
|
| 78 |
+
|
| 79 |
+
In order to minimize the Fisher information expressed in [\[eqn:fish\_\]](#eqn:fish_){reference-type="ref+label" reference="eqn:fish_"}, one has to compute second order derivatives such as $\nabla_\theta\nabla_x \log{p_{Z|X}(z|x)}$, which is computationally prohibitive for tasks with large dimensional inputs such as stereo matching, semantic segmentation, etc. [@shi2021gradient]. To overcome this issue, we propose ITSA, a simple yet computationally feasible approach to promote the learning of shortcut-invariant features.
|
| 80 |
+
|
| 81 |
+
Optimizing the Fisher information $\Phi\left ( Z \mid X \right )$ measure defined in [\[eqn:fish\]](#eqn:fish){reference-type="ref+label" reference="eqn:fish"} is related to minimizing $\Phi\left ( Z \mid X=x \right )$. By adding a regularization term such as $\Phi\left ( Z \mid X=x \right )$ to the loss function, we can penalize the transient features and discourage networks from learning shortcuts. To calculate this term, we employ a first-order approximation as described below.
|
| 82 |
+
|
| 83 |
+
:::: {#lem:lem1 .lemma}
|
| 84 |
+
**Lemma 1**. *If $\epsilon > 0$, $u$ is a unit vector (i.e., $\left \| u \right \| = 1$; we refer to $u$ as the shortcut perturbation) and $x^*=x+\epsilon u$, then, subject to first-order approximation:*
|
| 85 |
+
|
| 86 |
+
::: small
|
| 87 |
+
*$$\begin{equation}
|
| 88 |
+
\begin{split}
|
| 89 |
+
\Phi\left ( Z \mid X=x \right ) = \frac{\mathbb{E}_{z}\left [ \left | p_{Z \mid X=x^*}\left( z \right) - p_{Z \mid X=x}\left( z \right)\right | \right ]^2}{\epsilon^2 \cos^2{\psi}} \\+ \mathcal{V}\left [ \left \| \nabla_x \log p_{Z \mid X=x}\left( z \right) \right \|_2 \right ]
|
| 90 |
+
\end{split}
|
| 91 |
+
\label{equ:Fapprox}
|
| 92 |
+
\end{equation}$$*
|
| 93 |
+
:::
|
| 94 |
+
|
| 95 |
+
*where $\mathbb{E}_z\left[ \upsilon \right]$ and $\mathcal{V}\left[ \upsilon \right]$ are the expectation and variance of $\upsilon$, and $\psi$ is the angle between $u$ and $\nabla_x p_{Z \mid X=x}$.*
|
| 96 |
+
::::
|
| 97 |
+
|
| 98 |
+
Proof is given in the supplementary material.
|
| 99 |
+
|
| 100 |
+
The first term on the RHS of [\[equ:Fapprox\]](#equ:Fapprox){reference-type="ref+label" reference="equ:Fapprox"} will be minimized when the divergence (distance) between the two distributions, $p_{Z \mid X=x}$ and $p_{Z \mid X=x+\epsilon u}$, is reduced. There are many popular divergence measures between distributions, such as the Kullback-Leibler divergence, Jensen-Shannon divergence, Total Variation, the Wasserstein distance, etc. In this work, we choose the Wasserstein distance, as the distributions $p_{Z \mid X=x}$ and $p_{Z \mid X=x+\epsilon u}$ may not have common supports and because it leads to a simpler loss function.
|
| 101 |
+
|
| 102 |
+
In the case of a deterministic feature extractor, which is common in stereo matching networks, the distributions $p_{Z\mid X=x}$ and $p_{Z\mid X=x^*}$ can be seen as two degenerate distributions (i.e. Dirac delta distributions) located at points $z = f_\theta\left( x \right)$ and $z^*=f_\theta\left( x^* \right)$. Furthermore, the $\mathcal{V}\left [ \cdot \right]$ in [\[equ:Fapprox\]](#equ:Fapprox){reference-type="ref+label" reference="equ:Fapprox"} will be zero. In this case, the Wasserstein-$p$ distance can be simplified as:
|
| 103 |
+
|
| 104 |
+
::: small
|
| 105 |
+
$$\begin{equation}
|
| 106 |
+
W_p(p_{Z\mid X=x^*},p_{Z\mid X=x}) = \left( \left \| z^* - z \right \|^p_2 \right)^{1/p}.
|
| 107 |
+
\end{equation}$$
|
| 108 |
+
:::
|
| 109 |
+
|
| 110 |
+
Using the above insights, we can see that minimizing $\left \| z^* - z \right \|_2$ is a step towards minimizing $\Phi\left ( Z \mid X=x \right )$ (for $p=1$). Thus, we propose to promote the learning of robust and shortcut-invariant features in stereo matching networks, by optimizing the overall loss function defined below:
|
| 111 |
+
|
| 112 |
+
::: small
|
| 113 |
+
$$\begin{equation}
|
| 114 |
+
\mathcal{L} = \mathcal{L}_{smooth_{L1}}\left( \hat{y}, y \right) + \frac{\lambda}{2} \left( \mathcal{L}_{\text{FI}}\left(z_l, z^*_l \right) + \mathcal{L}_{\text{FI}}\left(z_r, z^*_r \right)\right)
|
| 115 |
+
\label{equ:overallLoss}
|
| 116 |
+
\end{equation}$$
|
| 117 |
+
:::
|
| 118 |
+
|
| 119 |
+
where $\hat{y}$ and $y$ are the estimated and ground-truth disparity maps, $\mathcal{L}_{\text{FI}}$ is our proposed Fisher information loss function defined as:
|
| 120 |
+
|
| 121 |
+
::: small
|
| 122 |
+
$$\begin{equation}
|
| 123 |
+
\mathcal{L}_{\text{FI}} = \sum_{i=1}^n\left \| z^{(i)} - z^{* (i)} \right \|_2
|
| 124 |
+
\end{equation}$$
|
| 125 |
+
:::
|
| 126 |
+
|
| 127 |
+
and $\mathcal{L}_{smooth_{L1}}$ is the smooth-L1 loss function commonly employed for optimizing stereo matching networks [@chang2018pyramid; @guo2019group; @zhang2019ga; @zhang2020domain].
|
| 128 |
+
|
| 129 |
+
In order to compute $\mathcal{L}_\text{FI}$, we need to define $u$ (referred to as the shortcut perturbation, introduced in [1](#lem:lem1){reference-type="ref+label" reference="lem:lem1"}): $u = \frac{\nabla_x z^{(i)}}{\left \| \nabla_x z^{(i)} \right \|_2}$ where $\nabla_x z^{(i)}$ is the gradient of the extracted features $z$ with respect to the input. The shortcut-perturbed image can then be expressed as:
|
| 130 |
+
|
| 131 |
+
::: small
|
| 132 |
+
$$\begin{equation}
|
| 133 |
+
x^{*(i)} = x^{(i)} + \epsilon \frac{\nabla_x z^{(i)}}{\left \| \nabla_x z^{(i)} \right \|_2}
|
| 134 |
+
\label{equ:ImagePertubation}
|
| 135 |
+
\end{equation}$$
|
| 136 |
+
:::
|
| 137 |
+
|
| 138 |
+
The above perturbation will put more weight on pixels that are sensitive to changes in the input. Intuitively, pixels with large absolute value of $\nabla_x z$ will have significant impact in altering the statistics of encoded latent distributions and the extracted latent feature representations. Moreover, these pixels are also likely to include shortcuts as shortcuts are highly sensitive to perturbations of the input [@geirhos2020shortcut].
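
A minimal single-view sketch of the shortcut perturbation and $\mathcal{L}_\text{FI}$; reducing the feature map with a sum before differentiating, the choice of $\epsilon$, and the per-sample normalization are our assumptions rather than details given above.

```python
import torch

def fisher_information_loss(feature_extractor, x, epsilon=0.1):
    """Perturb x along the normalized input gradient of its features, then penalize ||z - z*||_2."""
    x = x.clone().requires_grad_(True)
    z = feature_extractor(x)
    # Gradient of the (sum-reduced) features w.r.t. the input gives the direction u.
    grad = torch.autograd.grad(z.sum(), x, retain_graph=True)[0]
    u = grad / (grad.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12)
    x_star = (x + epsilon * u).detach()                 # shortcut-perturbed image
    z_star = feature_extractor(x_star)
    # Per-sample L2 distance between clean and perturbed features, averaged over the batch.
    return (z - z_star).flatten(1).norm(dim=1).mean()
```

In the overall objective, this term would be evaluated for the left and right images separately and added to the smooth-L1 disparity loss with weight $\lambda/2$, as in the loss defined above.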
|
| 139 |
+
|
| 140 |
+
To examine the accuracy of the above approximations, we trained the digit recognition network of our toy experiment with the proposed SCP and $\mathcal{L}_\text{FI}$ (ITSA). As the proposed method is specifically designed for domain generalization, our method can effectively generalize the network to unseen domains and achieve better performance ($4\%$) than the robust information bottleneck as shown in [\[tab:toy\]](#tab:toy){reference-type="ref+label" reference="tab:toy"}.
|
| 141 |
+
|
| 142 |
+
<figure id="fig:shortcuts">
|
| 143 |
+
<p><embed src="Images/shortcuts/Horizontal/left.pdf" style="width:45.0%" /> <embed src="Images/shortcuts/Horizontal/right.pdf" style="width:45.0%" /> <embed src="Images/shortcuts/Horizontal/base.pdf" style="width:45.0%" /> <embed src="Images/shortcuts/Horizontal/ours.pdf" style="width:45.0%" /></p>
|
| 144 |
+
<figcaption>Examples of shortcuts in stereo matching networks. The left and right input images are included in the top two rows. The disparity maps estimated by the baseline PSMNet <span class="citation" data-cites="chang2018pyramid"></span> are included in the third row and ITSA-PSMNet in the bottom row. The performance of the baseline PSMNet deteriorates substantially when the shortcut attributes are distorted or removed from the input stereo images. The corresponding EPE is displayed on the estimated disparity map. Best viewed in color and zoom in for details.</figcaption>
|
| 145 |
+
</figure>
|
2202.09852/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="Electron" modified="2021-09-01T07:13:50.653Z" agent="5.0 (Macintosh; Intel Mac OS X 11_0_1) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/14.9.6 Chrome/89.0.4389.128 Electron/12.0.16 Safari/537.36" version="14.9.6" etag="pCKG2jdhYtjO7bmLRyOL" type="device"><diagram id="liP3RuV4s6UR8d7Otdiw">7V1tc6s2Fv41ntn9EA9v4uVj7CS7M3tve3dzZ9p+6hCbOPRi48WkSfrrKwGyQRIvJpIQNmmnDQTL9nkeHR09Ojqamcvt+78Sf//yNV4H0czQ1u8z825mGJ5lw/+iGx/5DeAZ+Y1NEq7zW/rpxmP4V1Dc1Iq7r+E6OFQeTOM4SsN99eYq3u2CVVq55ydJ/FZ97DmOqu+69zfFO2qnG48rPwqox34J1+lLftc0tNLj/w7CzQt+axf/Zevjp4s2Di/+On4rvZl5PzOXSRyn+W/b92UQIeNhw+QNPdT89fjJkmCXdnlBYfg//ei1+HLF50o/8LdN4tfdOkDPazNz8fYSpsHj3l+hv75BeOG9l3QbwSsd/rr2Dy/Zs+jiOd6lD/42jBDOy3gbrmDLj/7uAP/39bF4oIBXR40/h1G0jKM4yd7ZfHiw4Q+8f0iT+EeA/7KLd/AVi+KTB0kavNd+e/1oU0jGIN4GafIBHyleYDnFSwoi2pqVX7+dYNWdwiQvJUSxmfyCSZtj0ydbw18Kc7NNbypjeqvOxAQepul5Dw9iTA8siaa3RJo+8p+CaOGvfmyyNkr2e85+KGPDv2jZD2FxDEIZyu/hNkAY/hS8wf/+L976OwJMUFyzWvejcAOfv1tBkIKED5COpVWAtLDTLANpawwgde/zSALOSHYA7/N4LJd1eDB7YQw/bZiidwMabrz4ejrdSe9t9A+nTmq3Y2u4LGzxg5/B1p6wJbD1sh8+2JpGh35ricLWmbAV2G910AFbRxS2LmdsOwDHoz+YhM1s2mas7nAc2z5jMk++ybrGDNWwUpw/sl3C/niWWLI/YFGWg/nxxHKyf8n+jkT765P9gQeq9ncBHWsBQfbnLQNcgv2BRPvz1gIk2Z9nzGK7hCCg0/a3PEH25y0IjJD/0P7zag8AFo2AacxF9QFAmTxYbwIcMcdJ+hJv4p0f3Z/uLqqgnJ75Esf7wmR/BGn6URjaf03jKlDBe5j+Wvr9N9TUHBRXd+9Fy9nFB77Ywa/2a/mi9Cp0eXpZdoVfd0j9JL1F+vPsOEfI7j2EyCrFK9b4iVXkHw7hKr9ZPKI3i0fnsc7Imy4urSONkNnPJVESRH4a/ll9HYsOxUu/xSFssXYyCkyCUYf4NVkFxauMkp5NNmS2NATtvQlSqqGMncfv042wtD7x+AIttYb3vvgfQXKgCA17Z1rlHzWTRH04XPnRbfGHbbhe51QPDuFf/lPWFAJ7j75C9qXAYgbuUFuQ3YcC2zPYUHw8LetOKUQxRu/raTPaR3FwM6ZWA1B5aqqJijNZsoOF/gVLtDQD7Q7/Ci16P3Pgl4V2Xa7WcTpz7orHZA4KRp2uQK2SoB8+6ByFmeMkgEaHpeWSvawXOCzdoA6cm8GhaRqXRUICNImQsHSJ2v6SI3JzRb0FENAAGhqTAQ0PxchgSRYMaIZBQqDRddDPRXExOkunaBo/4L9qOCjZXYNEqWvX4OG18OSt20AyoL8aFiGXQMiViBBL76gf6pXpRTI7jEw4WPJHg1tTpcvI7B0y/VeHpIYWy/qHfZ749hy+I3NxyCsx5A0MwobvDhkFozVs36kbF8OeNa8uZm7X4NWpyZvMGUKHdfiLYbtUN+JRdpSoVGtzz3MrarUObTGUXp1dlxTkDvp1VXLuo1nDL/ctSOCzmWSafY4cEUq5ztXeaj+ToGbb3ty1tOOPTqws2mBu6t7px6q+QY3W3UOiNmmR4Hv8Bo1maLfn+oJh+rnpEVkJjFVZZnIph35u0rN9bL2ZYUdIyX+CF/YG/XabTffHaFLWQivLpICHSekhaHKdarrO4d2k18tN0m9DuhCc6MF/SdA0h6U3KJFbbyR2haAx9HAEQfVmghaUPrFT48FOxsq2bS+X9RzNKTCrTO7E85aS/S1NGKHoqZFMQpV9ZYufbKZGr8yEvq63jkR9ON+FeJY0h0munXckHvzO/kfpsSJ1oPZ9KPHL0gge5y32ZvWgqT5VN9nGa0zCE+9+q9COTcJezq+dnVUP24erBWHK8yLdMgQR+LO8wyn4tSO7Rk6sOPPUVsX7Ng/mdSzVW1jaIeIURGRGaFEhNuHCjxtvOlAc5yyoSXEydrDJ/dktFCee/zzFnbFQnHuAMTBPDVGUbA0bbFIcERQ2cOeqO2zYMIkH4xEPgGFVxYNq+321A2BYnbpOH3ZPqwrKsftsQWJ43gMghPdAmGaG57AT79Xh/WDstizIbpOi7+cWzuh3cTSyXWHs1id2T+zuxm5O6x0Uu8Wtd+Bp9VWxG+UOTexuZzehIRuQ3cYpbjH4kBsY4sg98GLe5LpHEZC30B6Iob3AeLx5ybEgSJXnhx9BunopLv543e6Jp/v0g55szq5o2nyK4vxXu0E9Y8u6NZC2pg20lpCYX1KaxUpvz9OpDntoqTL17P+/okqome1ungv73s6yBCx/u8+saJoIgLSw+S6zeZLZnHzm1B7O2jpmKb/4yBjJDNzf/h7ipOT8Q8Hvk38u/BKyd0RRuD8ggpybTdu65YdnpdP2PDpDUB6dxUq7VwfwxXUAzionJApw1nYAdQCfOXm2pnN3ochb1ViBkd8pDHnWnoOBkV/UgV9X40EVLlAVgSqh7ic5Ylc5wij1I4wjdFEBnFY90gxqlm8VlZSOs7A7JaUvxpuU7shLSgeTRqvcRH821HS+bX3NtXjM5ym6u8LWlcE1arSKs/vsxKHBeW/rQtaVbV2YjgWuUb6d1iZ6ibSuGJHWFUfuYfeFTE59HE69mfaZT+dPe5E+XW3peOYsFtckLDHmaKJEA9ChdEuWnxwk938GeWFUZC9cTvULOh/kW3wIi/KmT3GaxltkP3wAG+qox7Ig6IIqzVqCgazSmiIPitrao8+yfd+gc+/mQQSxStBz8wN8zo9+z/vUYe6vfse54L3K69WcS8IBZGDYc630Uy3KLXPZADRvTZgWIo9vJGEh0pU1ZFELkWT2Er+FSNChQM7kU8T7FIkrU6A5yV6aT+lRdV5KKMyttnzTiQY1/qa8OQpLvRL8jWdW2SguF9impXtVotePi018AIDAV2IEY7PKBasD+GUmPlCASxxe7GatXeXh5WrDXAlDjAXmJeXcrS69d820o5vFKzsSRq5mLX3i9ZXymmByx6laO5PF7VixFcluniYAgzCWDPY7Eo1uyDHLM1oL9OoIffirogJ+KvH8Bc0jLjGoJPfqyZxFqJg+XYV8cQ2Qy5xHqJdAbSwb0mgZp3GpwgSBSbSkVC0z
idZWL9G6kSHUYUjXwA+yRr1MfrBOblObHzdXxg4wXAq+o54S3TK+XJ3voEqwSmSHerJ1MzsyzwGuiR62PqDzMJSjBzkfOfFDNU7wwN51OhzULgx91tF9hBlFnt7UpCs1nepUZK1zQkAnELA1GgHTwA9xx8BSrwdSuywXpeCOdeKXKj1SYN4LucuSxRJh/VQ9oZAexGmOjGASwDUziizNK5Mh6umKzQwZiYTAN3POGY4fYxAhF2OTELiyAww4voxBgFyMTULgyg6yHLtMdoxBflyMTkLgSQ9SQpBJD3y2+Vi2L45pG2IdZRTcnmhbjvjtibarkc1WG+KXXOSqJ5wylDFFo2geLo1SxqQ6NVoIk+jUZOREdnGDYs6Uak1SG7oKrGlUvQx1NF/nChlmS0Mc3dWgFTKumrDDj70UYclKWr0JSzbEkbCKJO8KSyk/v3zLpaWUkylVZHn3zinlht3cEEdW0kr7kuJp3Z7hbbheZ36Vz57hU3OZx/mliHr02Wkj8SmSo0I11job7i44tBW3D1hmnVK3+WTPyZNcnichizH19iTiqjq5tOI+eZI+nkRiYRpXPR0cLNMwWgezfJfvhVe7tsnuKXMUudpqEpc2WDTu0O09dNjkHEvc0IHraqrshhbX5IYkDkFec/1qWW7ocvansh2qhdeMajSq7vVOK8fVm9LO+CFXQrrOiemGQL9Zeh+/Zijo1+hk45slvLzMNRWPyDWWGF95I801vrfRP5zs77bPbfCJGtztr4beejkDy2ABLuX6O0ai7a6/Y2zcx/Wrl8HMWk2/ItcvM6ZVQ6Cdpsk8arMyNFXyACVha4Y4qWgikspE6l0DjSSSuMVnz1WCSJ9Kn7jR5rrj4TvFS43jStToS6E1p8s2zsNxHFOueu3Kq3rdNjXnGFaxUrMJYk9VrzmET8BVZbVb11jq8IS5dMwlBtC6dhGq8KUFPj2VU0YBQqIhXVjgo2u0s5iYVDCpbwZPDQPL0YeuedJo2RZGd17xtKTF47qmRlHiT6ZdFa+4GAfX5pf6Mkmog7sInXuww4F0XR692oSjzuOnK02B0vEepIleKjmqngsZ7R7PEcgkBVXxachrI0DvIU8kkyZZvENMfwrjf5uVMvbPi+ll7KMWFHRZuMCICAY26+lddh8qIq+PPj/hHJ1c4qyUTFoQOplkqeI4jdV/xav7BGGnvRgIJmIDqIWdiAyVE3u6yYuMy4tYg6U+iZzp4Ylw0zpL65oKe3WkHDQRCyVPPgR+7ierI4t2m+zdtLkDbNM2XA3Yjul5wCs4c/w7tIaluUAzbd0BVvZ3KRQhEiYfsh94P0JGWfirH5usj7Ie4eKxiPN6GOsyOsNjkWFVP481qenc1fRSZSfHqvhcy2mu7NQ3a19n6WASnRpZdIEcTDtH/KQMognboqTrn683cn0dgcsUtEpdRqaLTOq2qRydqWvLk0t0ixrYv06zgZrZQONZ75bEuQGdQD6BxgYNNIF2LEIsBbRmiXsaJIQNEtUlPTD68cDyBI4H6tXGZm5L/M/dpW5NsapY46OIpbgoFUt+MDYmXQ34jkzwLzh9ml1/mlN2Le+YxbG8pkDTlChC40ZKnHgMN9sYPmBoD6+7VY71LUUU+O1Tol+RWNcGkklwCP/yn3IhD15npMu+BTqL5w61BcOYQ4FMDVBdsOWJGT47DKOEO09ZeGMpb3xgotXZS5kPsHsuV+iapwiazO5mUDgu4yQJTt2sGDZ3T4c9a5S7vm5HRsb0LNwVhhariAOKSRIUBYFlFDzDwBzNkyjYtCKueXqG8YyendCw+EBGxueJaV/996IJ+L/8FCnmC7KmitfA911u4WWSfVH0srua5d16nuSvFUWTcxZbeBbVA9UTz49pJOVD7SxhPKEltutYgRcVJjuktG9K3H1k0Nrb92C7DxI/fU2CyUm3x0bAcCm4PGFwsc6C6+ikE/Q2TX306pA03SqSFj3c6qYwKFnSFMIFfcLsGgEKp4Fb/x/JPyfI8GSyGiGZHg2ZuIkJS1AiEFGxztXxTF0Rc3ui9oxNr0EAYXg0JAN+fwlSf8S9pi6xh+9QZneAzxEFn0nLMT/v03AL7ZtFh2OEjSc8rm5Q8NChobAJodkhSU5Fb3es6idAWqmi4dF6tjBfZ9JiytHX/bwNNpOvOzNsZ4AnrivR2srk6SqeziTAkSh8mQruKj5/s56QsytrclgG2DpF+t5jYuW52Qekxko1xDH7wKTFlZ9ieP3kr368+cka/rpJ/HUYFG81Nhcgx29XVzBsnZZbxE3STZbe0lHanDYKzKRtFABEnGzQiriwjQLmtWzRvYia/hIS5Sgy9q3hbLU0xK/YoG6qUTVTxQO8pbD17KNyq2dTDLbXF4C+5CYn02RDPMndIRdM9VFcjbG6jaZ8R3EgcRTH6+XjdoBSyvUq4P0GGMV7OzpyFBfo6CyWZHsNuRk8nA+Re+XQzkdYFobVIO4WeVLog7Lh658LzsFsx1QHMkYt5yIxzEaqLv3MRsuq+hxe3/mpfwhQH3z0t/so3G0uIotFhtyig6pMbrFm0uL6AZ1bZiBAv75GaXjz3T/8QBfxOogmAGsANA2TAFCXCSCteJoIwC/xAZkE59ij35fxdv+ajnf9QwaYlk2CyeiNwhZGLFr8vJRjrofOw7cYC/niOiXjuHJo+6fEnxLxO+5/AfSig7hux8oyw0F8cVJvMsM5gt8qCfMaOrrX0P474mQAvvlphP9kFRDilLIBL5MY7fk8Td6QDJRHK+b93w==</diagram></mxfile>
|
2202.09852/main_diagram/main_diagram.pdf
ADDED
|
Binary file (62.9 kB). View file
|
|
|
2203.03937/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
| 1 |
+
<mxfile host="Electron" modified="2022-01-13T01:43:05.909Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/16.1.2 Chrome/96.0.4664.55 Electron/16.0.5 Safari/537.36" etag="040Xrz0NoPTHExMbY52w" version="16.1.2" type="device" pages="14"><diagram id="DgaSiwnCY52lA_fpkTob" name="global">7Vxdc6M2FP01frQHEB/mMU6y7U6SnbSZ6W760lFAxkowckGO7f31lYJkvuwENs4itM1DxlwkIc49utLRBUbgfLn9LYWrxQ0JUTyyjHA7Ahcjy/Jtl/3nhl1uMF3Hyi1RikNhKwx3+DsSRkNY1zhEWaUgJSSmeFU1BiRJUEArNpimZFMtNidx9aorGKGG4S6AcdP6FYd0kVtd3yjsvyMcLcSV/ak4sYSyrDBkCxiSTckELkfgPCWE5r+W23MUc+wkLHm9T0fO7vuVooS2qXD9Zfz56SHEsYH/XH9J4T/jx/HYMkHezjOM1+KORXfpTkIQpWS9GoHZHMfxOYlJ+mIGIUTTecDsGU3JEyqdcYMpepizM80+im4/o5Si7SEHwgd52QIkRi5EloimO1ZOEMkyREc3hVekUxZlhwgbFDyI9i0VWLEfAq5O0NlvQ8eQS0LEmzEYHJsFpuhuBQN+dsPGC7Mt6JJd9sI8Eb5veLkO+1F4XW/iNPC1LGksI1xYPwBjR0eMt3XgFAPd1Rh0W03IPY0hLzBWDPSpjqDvqjzvF2FfR4SPhW8lIJfLTS0ht1UE3NQY8EboVgNyS0fId3Xg+sW4hTAcHsbHQrcqoGspKQ8Hb1Ug11lhNsK3KqBrqTD7hVRnBakKbbUUjLXw3C/COgtGRbY9QAuFqNaWv6wle65MCgAMR/vtvT60FAAYjtjrgLHiKQAwHPXXHXQ1UwBgONqvO+SqpgDAcNRf9wCuxIoODEfqnSB8qwH5cKTgu4O3GoAPRxeeIHSrAflwhGL3wK3Idoc9nFTiCUK3KqBrKSmVTgHYOitMVVMAtpYKs19IdVaQqtBWS8GoUgrA1lkwKrLtYbdQiEqmACxXtRSAPRztt/f60FIA9nDEXgeMFU8BOMNRf91BVzMF4AxH+3WHXNUUgHwfUS/QVdqrc4Yj9U4QvtWAfDhS8N3BWw3Ah6MLTxC61YB8OEKxe+BWZLvDGU4q8QShWxXQtZSUSqcAHJ0VpqopAFdLhdkvpDorSFVoq6VgVCkF4OosGBXZ9nBbrJ5LCPL7wwGMr+EDim9JhikmCTv3QCglS1YAxjjihoABhhias5iXnMHgKXpxVQnk+ctfqdEzUZeS1d4p8mNLFrfITydxV4cwW+z9zs6seGeX24h/fmqCSeZNcECSbAJDVmiWEgpFV8eM212SE0c96LuV2dS0mv4EB3IQ4MNyEK7XcB0KI3QnDklKFyQiCYwvC+usOoCKMteEe+HF6Y+I0p34JBdcU1IdVGiL6bfS73vRFP99sS0f7ORBwm6XVzEmJnCk4T43WJ40FJVfjiq1b1GKGWqcYTkBGmQ56uCMrNMAvYajfIeOwjRCrw57OXo4zK9SJkUx498zqvTkEANE1VuCWbcLqvkVqnletYG8o6JOwaKzNIW7UrEVL5Adv0ptTW7aRo2TeYMFQ/c3+B7STvskrfEjpJ1O/Sppp4wHfZNWOq830nZlm+n1QrcWCu//+e7we3e+VXGYzAz2Nt95RsN1qs93U7cWORyn98ghxWKLyOErETmAUcsgOj8hcHhmn2TjTHHLhJuYttuWdKZt1xZZwPoR1vH2P5aJbs9MbFDNAkx7eX7xN60wb//ckWwxv8fGSqzZrll9gGksX/s78ZLOrF1HipRj/aqX/zlDq7mT8kdjsLFJilbHRmPqrc+rSxyG+TBEGf4uni3jpBWosXad2ci54G2xkZflg9BsbAgkJEG13QNpIgkVY9c60Sxbf7ldTrpvzLL2x82yzV2Yq1/WO2b9wcP+3dNM8f/1y7pn/4BnD+65mV89Xv29/P75MbxZGbNvm3/vwbjFPjxKwjP+gXLujBhmGQ6qnqquIZrzLav/CccHFgxTp9uC4cDsftQlb867JcCdA4BL23unZ8OduI7nGj4wXMcATnVLzvYnwAK2zRYWpuHbVm0XtfVkbbTbb3l7QmSHxZfe8+LF5/LB5X8=</diagram><diagram id="Dia04FaDoWEJsFMCXAuS" 
name="swin">7V1Rc5s4EP41fowHJBDwGCdpb669mdzlmtZPNxhkmxZbLiiJ3V9/wpaMQSTBSWS2TPISWCSBvl19q11hMcAXi/XHLFzN/2IxTQfIitcDfDlAyCaYiH+FZLOTEA/tBLMsiWWhUnCT/KJSaEnpXRLTvFKQM5byZFUVRmy5pBGvyMIsYw/VYlOWVu+6CmdUE9xEYapLvyYxn8teBFYp/4Mms7m8c+DLC4tQlZWCfB7G7OFAhK8G+CJjjO+OFusLmhbYKVjm48vk45fwOx6xm69onE5GP5OzXWMfjqmy70FGl/zFTeezb/8sLme/xujqXzb+L1ukD5GsYt2H6Z2ES/aVbxR+s4zdrQZ4NE3S9IKlLNuKcRxSfxoJec4z9oMeXCGRTydTcUVolIfJkhZyS5y37Ibs7j3NOF03mUg4Uc9WqkGYL2ULyrONKCdrnRHZPWm5iNhDLDv4UJqCsoT5oRVIWSiNb7ZvvERYHEiQjwDc9l6OuEv92GlC3EcTTAgAxD14gGPnxYBPp1MUNZp4TCbEhQA4suAh7trPIz7nC9HmpS0gK0BIBF1/Dic0vWZ5whO2FNcmjHO2EAXCNJkVgkhgWgA9SouSozD6UehtGVc1Jv4OGj2XdTlb7fWovAAqJIrTC+XFYT6nsTwRV1bFwy7Ws8IvDhOWe8NEqCcfhrEoNMoYD+Wjnjnu63X/qI6VA5Yqtn2nScW4QcXYmIp1jdJYeF15yjI+ZzO2DNOrUjraKmuPb1nmMyuUs7WF75TzjZxChHecCdGBpdB1wr8dHI9lU8Xx5frwZKNOlqK3RRVraGNXCcY7AfKUoKy8PavUvqZZIkDbj3Ddho7Te87usog+5SEkYfEwm1HeYpwVyD9pRxlNhaXeV+dDTTYhq16zRPSktL+gan+Y7O1PNbJ7VlmvtK3zLAs3B8VWRYH88Tud2W71VkGgblVa667R0nb3HX2FOeMuzdl6iTn7flA1Z387AYBnzspUujLnY23Q7soGW8xT3r1me69pK8NTerSC7t2m+9u5TZ/UaMZ1QdKM47elGQcEzSCnRjPIOQ3NkE5dnbAfcmiGQ9shbU3RdpzaDA6jl9hi0f7J7VN5t67sU/dzHhq6XlD++RV7PAwkVaO7TmqzPL1ptxaUuqYmjLUbHYRGjz1bvcaphp2e9vlbG4jCs/HquNGceN1DL5I43g1Rmie/ZNagMGiJnGjXHQ3cy6ItMSrz3QC1tZzGki1pLQGiRGzJ5bhGJv31Ga5qBruNmYwmf+0Y89e+prVP71qrzLIsgFoLNK3dvmutkjUEONZaJA2r05CHecLpzSrcut0HEW1U1anlcqOoOZeLCQ5wbJLZassQGPlDX0cbCX+s411K3xxxZBhxP6LNiE9813HfIFv+hC/xhi5IzHF/MbdhIt4irfJaXgmCTngFqo27phF/dlXuZExukSa/eXLEiVnEWyz1n5DJgWDeYh3/d8W8zuRAENeDsTdFnNqxS70mxAPi4dAkr0C1cT2Q6iuTIx+D8J3qpbk+0opG5WBANxx7AuJyMJAbDj4BkTkYyA3HnnT7ByLD8shbcKeH3HDw2eKN0BOyORTQDcefXYJeZ3MokBsOQKfB1CuqAkiygIHccDQ0nVLyyNzcCyaWyUyiBzNdbhuPhjpDPED1ABQI5sh0ONQdl9ffMwQDuelgqEPIwS4LKYx7yC11NgeS2kKmV+IgsTkUzPsbDmlsDgXy/gZDDWwOBfT+rsfVf+AJJLeF+rsap7E5GMyNR6CdrQ7V2RwM5P1dkNPZHArouL9Lcu1+rn96xPu7HqezORTMjUegcNgcCuTGQ1BIbA4FdONvJ3YW99f3AoGS3MKmV4c6hBxpuRYooJuOiLoL/Os/nQUDueGAqEvICdTMOTYeEsHhcyDpLcdwQASLz6GA3t9lOY3PoUBuOCSCxedQQDccFHUY+2t79QFJcDmml+U6hFzjczCg9/dXcnU+BwO58Ti0O8g1PgcDuuE4tEvQ2+29enrI+7sw18DnUEDv79KcxudAIHeNx6GQ+BwI6KS/q6FQ+Zz0dzkUBJ9755/iePJl8+f46qef3t56iM7bfBiBLuPz4vsU4ixKwzxPoirMVZ3oOwyK+h+StGGLRN8dHLVFYsN+hscp7Nm9Bg804TZsCqVkr92S0CFizuoRK8AWcS3sVk1jt4MrdhxEiDhWmYKjdyd0vKH6CoP6pEltU6tHdid8fjNAcVp+D2RXvPyoCr76Hw==</diagram><diagram name="swin 的副本" 
id="wCT6ntk9DbnklQ1vskHt">7V1dk5s2FP01flwPQiDgMbubpNOmM+lk0nSfOrKRbRpseUCbtfPrK4z4xgkYyZJ34peFixBw7tHR1b3GO4MP28P7BO83f9KQxDPbCg8z+DizbYAg4n8yyzG3IM/ODeskCkWjyvAp+k6E0RLW5ygkaaMhozRm0b5pXNLdjixZw4aThL40m61o3LzqHq9Jx/BpieOu9UsUso14isCq7L+RaL0RVw58cWCLi7bCkG5wSF9qJvh2Bh8SSlm+tT08kDjDroDl39+/HJ/fPy5jB7qL5O7l8+en413e2bsxp5RPkJAdk9u1m3f9DcfPAi7xrOxY4Ldh25hvgRm8/0YSFnFkP+AFiT/SNGIR3fFjC8oY3fIGOI7WmWHJb5Qk3BBnLe/x8us6oc+78IHGNDl1C1enT63TN+JcRvfcmrKEfi0dZmeWAn6L74Q43ZBQ7PAj++xmt4d1RuF5RFNvHnE+pXMc8kb3CWVY3Oqd43LDQEAF8NkdkkONTQLg94RuCUuOvIk4WowVMVSA78yhQPSl4h4UhNrUaFfYsGD7uuy7cinfEF4d4WHUcSgJ+fgQuzRhG7qmOxy/raz3J1+V8FZtPtDMNycq/EcYO4rBjp8Z5aYaUcghYv/Utp9EV9n246G+cyx2dvxhs1OsOYBuYXjKDbZXGKqTT3uNsz+SJOKYZbzLadGh0Di3p/Q5WZIftOO3lcsZTtbkRx2KUZYB/0MWJSTmPP3WFK4+RohTP9KIP0jFvqDJPohK9hWd5LcqzquY9SZJ8LHWbJ81SM9f6Q64zUsFQXGpiqt5pxVzywe9nMyeTjJbl5DZ94MmmX2EdJBZE0fHEguMJNZPO7ARmEPeS/lxmnecD/LOgJBFWP/X/CpzfgWFnhX0sALtE2xwcxOsj1qS5LpGSpJvhCTZTkuSbGecJLU70CxJxbpM0yTK2YbqpJ0DBw0lLnCcVmQI7UuYm/V/9WjR1cr67kTp2XPXq7HQ75LUHkTMbteu1ezLVRWHti5UW2+du7f2GT8ZzNIGXTHVVYHAX51hyCdB1hw1nfm+PZlvozDMByhJo+94ceoqo7NAjvfr3s/cx6wvPibTfHiCku9FwLCju6yXVRTHbRPdMTGqbZVT+x1sega6YOjU7qia2oHd8dofv7zWCMgsA70GO177+5fX6l6zDRxryOl4rRun7MI3WUY481KM0zRaNl3YjFK6Mzo//10U94QkvjsuJOmJH8b56qeze80Nbo8bCtvUIMBBc9/2kBVAC7kWdFuh7ml5BR3HRohvO/DCeMDx5sBr9IxaNDoTD0hbhneFvEOubAW9nzjiyiqGUIdZvVLQO+uhVqiEekdi0EOBQNVIHIxVU8NmNgwx8VfLjuDxI2jpk0WWjeD4MBztxg6b0oGXI64LTjBA2JrC9bKJGPm0x6flxEuC902Va4G+Wi35pw90iGAAQwkgd8Esa4Fzt0NW2y6MdXwrq3yEB1SxJiHsL0k/wgvfdVwZND4/VZewmQU5er2QO0YC7qlXkSC4sooc2gibBfmA5P00yFd2P8dDtECnLJky4XZMwDdQi++AaOR6sm0C4MUXWV4j4I6BcHdzflLhJiB0idcHd4A8iNXoxznJNgLwASuXWxXsCjatCHdTa69GQTqSbQjkipePBom2IYArXk0aJduGQK54NUlOnysvbrQCqni1yBnsh04foL69gEgphw2hrOLFoU6EjYjlFC8OV8HKy07Vm+/QibA/IJbTUhPx2iURqwPWVRP4/oAI7PoFETgZbm14qk7XyyuInAPZ8IKIfzvZ+eEIm10Q8VXn5zVCbmRBxFeenZdWEBkNuKEFEV91TCYvvzZauE2IeYPbyc9Pl20jAFecodcJuIEFkcBWC7fEzNp0yTYCcMXpep2CbUYaKLid7Px0yTYEcsXLR4NE2xDAVWfnTZJtQyBXvJqUWBAZDrlWQG8nXT9Bp7UirPrbXBoRNiGWK9+0uoGKyAQZ1gvxgPWglpKIbRlWEwHWgLXc1YsilQNvryoCLOWLNVllkfMwG14XAZbqd3GkZenHYGx2ZQRYN/N6zgWgG1kbAZbqAqC04sgFkBtaHQGW8ld0ZGXbLhBwM0Jg1as4aakfGfJtBuQ385LOdPE2AnCgeJknL9smQ7rNgFxxGVCrcJuRHQI9P6XzenTE0EoJ6PklnNcDupG1EqD61x7Mkm9TQFf9yo60cskY0PVCqngFKS+dP0mv9WJ8M2/pTJBnvQir/pKotJrJJDVWgzHfrf65Rf5LXNV/CIFv/wc=</diagram><diagram id="u6kRF31s6ChPc7Hupz0F" 
name="pvt">7VvfU6M6FP5r+igTCAR4tOpe3bve1XFm3d63CJFypaQDqW33r7+JhEKgtdSq/Bj70IFDckK+8+Wc5CSM4Nls9VeC59Nr6pNoZAB/NYLnI8NwTcT/hWCdCZBtZIIgCf1MpBeCu/APkUIgpYvQJ6lSkFEasXCuCj0ax8RjigwnCV2qxR5ppLY6xwGpCe48HNWl96HPprIXLijklyQMprJl15EPZjgvKwXpFPt0WRLBixE8Syhl2dVsdUYiAV0Oy7O3/HlxN/kOnd+Tp5UJro2r8Umm7NshVTY9SEjM3q46Yrfk+9kE3DPy5EZXl/eXTFYBzzhaSLhkX9k6xy9I6GI+guPHMIrOaESTFzH0MXEePS5PWUKfSOkJ8hzy8MifNHxt2b1nkjCy2kYJ/JC/SwE7ZyuhM8KSNS8nmWlA+fbLws65madlE0sZlswKNpoK+PiFRPAANPX9aHIwY58IJYAjtJyGjNzNsSeeLvnw47Ipm/FGz/X3gvw1s1ch3wktsjWrhq1h5MIyuoX03fE1hofvqgpapwCHgwXc7CLc5mDhLvDtFODW8ABfq/xuE100PHR3uesuwG0PFm6ze2A7gwW75qq7ALc7PLjXVdDaxFdvsBDsG8C7fHVHEB/gYnG7u+4I3sNdPNY8dkcQH+DqsVU8h7s87AhhB7gcrLjkVuEd7nqwI9mMBlOKEnyie6GHox/4gUQ3NA1ZSGP+7IEyRme8AI7CQAg8jhbhUI4jUXKMvafgxU4lhB9ffiWlp7Iuo/ONRfLtF0NI8s0UYWcfp9ON0fmTuXjZ2SoQ+1FaSFNbCz0apxr2eaFxQhmWr3rCeX30PsNOq7pIiaq6Vbcx3LK9AD9qe8Gqm5P4AbmTtzRhUxrQGEcXhXSsjqiizA8qLPNChP8IY2u5cYcXjKqjjKxC9rt0PZGqxPX5qnyzzm9i3ltRBWg6tHLBJBMYdi4oKr/cKbVvSBJy0ATrMlLUCHSY0VO6SDzSILgynATkNd+QDzKB/KskSkjEafqs7kJu44SsekND3pOCfK5KPmCrGrIXlZUKYp0mCV6Xis1FgXR3M5W5um6DCk0zhQVpNz08gsewTR6Dt/DYcVyVxw5CneQx1Nvl8aH80+0DCLi39mYHOG8+w6s2St6NyQ2m5F8Rt3nE1V1DsWd+FqO1iGv1z1OhiqOyrE46KtNp6qjMTjgqCFRX41qfEChRq/Tj3EFlCmq6iZrSUDfNysQPGm/hodD/6dy0GgfRj+FmjXw8rmmW7RY/501hr643z/tLRfkJhHeeZOqVZvKV1K7Xqpb/lLFW3z69rY0+HsaYOlhqEbsajmeh72fjkqThH3mUTbBYgsb1WuORdS508aGYZqNSr6UxYhqTSs4jF9GYycFsfGRwhqpRYD7X2ROczQ8LzvU92L+/LKZMp1DXTFbfx/31ZTIlE+52zGSowc5wdkT4OESan/bNa1m1jV3L3JKGNbckYc2PSsEavclxo56eVDB6c8zpAIS7fVQBNnABvYO8VUB7c/bjGA63evS0AcKtRC3D6WDUMntzUgM1n1B1yoWavTm7cQDC3Y5aZn8mCs0hbxXQ3px5PobDn4TwZHIbO/+mJ//8XF/Nr63xfHlhNvkak8T+qfgolt95EU7T0FMxVQ1Qz5Ty+t/CaEuq17EOS/VuycseFkv35kxLlrC2rHhz2bGpVYA0x7ARcCFAFoCWmlp1gAYANE0DIR24Jqzob5xoBbam26D4GUorEGjIUTXvSL3uT3fy2+Kb5Kx48V03vPgf</diagram><diagram id="h18vAx6XnSajTRPlp3vB" 
name="nasa">7V1RV6M4FP41fbQHCKTwaNWZ2V094+pxd9yXORHSlh1KupDa1l+/iSSFQNViiwSOffCUSxLId7/cm3svxQE4m6+/JmgxuyIBjgaWEawH4HxgWZ4N2V8u2GQCOLIywTQJg0xk5oLb8AkLoSGkyzDAqdKQEhLRcKEKfRLH2KeKDCUJWanNJiRSr7pAU1wR3Pooqkr/DgM6E7PwjFz+DYfTmbiy54oTcyTbCkE6QwFZFUTgYgDOEkJo9m2+PsMRh07Cstjg+f38iaY/krvN78ZNOIKrk2ywL3W6bGeQ4Ji+e+in7+jkfnZzZZxdXnieNbmf/TcXXYxHFC0FXGKudCPxmyZkuRiA8SSMojMSkeRZDAKE3YnP5ClNyC9cOAN9Fz9M2Jk9b1tM7xEnFK93UQI9yHvJYWdsxWSOabJh7QQzgS3ufpXrWap5VlSxkCHBrOl2pBw+9kUgWANN8200GZhxgPkgBkNoNQspvl0gn59dseXHZDM6Zxc9N48E+atqL0P+IrRwNHQq2FqWFBbRzaVHx9fqH77rMmhaAQ56C7itI9x2b+HO8dUKcKd/gG9UfreJLuwfui+Zax3gHvUWbls/sN3egl0x1TrA7fUP7k0ZtDbxNfcIBA8BeDLB0N8JcDDyHgzjI221Jog3HCy2iLitJd79DR4rFlsTxHsYPbaKZ3/DQ00I28NwsGSSW4W3v/GgHtkMc48IsAPZfNkLGIpTaz+735mYb0uEjuX3zYbDvOPtiWsgrHeG32o48GuF1Drn+K3OFAnrA65pll8i3CuzolFyzmo40NPKbGsBeMORYJuAa5jrtxqOC9uEW8tsv9WZSLG+wdYj1WE1XCzUymRrAnnD4aNGRlsTwDtTNDyC2dYDctDHaLJVQHscLWpC2T4Ghxpl/UFnqoCHWOFWEd4jGuxO2t+CmqX9QWeqglsidCztDxoO945ngmsgrHfaH3TmYdH6kGuZ9gedKR7WB1zTtD/oTDWxvuHWYWdnd+aZ0SOYbS0A78wjowcbbS3g7kxkeASTrQXgnQkV6xtsPZIddmcKh0cw2ZpA3pni4cFGWxPAO1M8PILZ1gTyPkaTrQLa42hRE8r2MTjUKO3vdKYSeIgVbhXhPaLBAn58eqGPokv0gKNrkoY0JDE790AoJXPWAEXhlAt8BhdmWI4j3nKM/F/TZ0Up5OafwqCnoi8li61K5MuWLC6Rr07iig5QOttqnZ1Z8Judr6f87VPDkKSjYeiTOB2igDUaJ4Qicat8L3VwQeJFrXpQ9aywqmOwo+4Amqo7OFV14mCKb8UhSeiMTEmMootcOlaXVN7mknDNPBPhX0zpRrymCy0pUZcZXof0R+H7vRiKfz9fFw828iBms+VdjKEJHCm4zwTWSAryzs9HSu9rnIQMNM66jBQVAtVTekqWiY9fwVb+lI6iZIpfMw5ykXHkXyVRgiNG08dio92cEF2vSchmkpPPU8kna5FyhOxGRaecWKdJgjaFZgveIH35MuUNu2uUaJoNmJN2O8MDeAza5LHxHh67rqfy2IVQSx7Lhxfa4nFd/pmjNgi4R9Lj01Hu7yhNz1KUaMrSe2ue0umcp3RhycA4jpYGRgaibxsYWwsDA4ySgRHHzRoY2KqHY+SBRQ4OTRvuy0PTtks7NmC9h4h8/A8np3RqbZGzwj4LsHBv5OUfVyHj9lEmOWI2w8qurjquqT4TZRqwke2hWbqOjIFeuq9K+w9ZbdVk45+V9cdcGVWXS8Vrl13yPAyCbGXiNHwSj6txHgvU2LjOeOCc87HYYkyzdWlWUhAxiXEpXyFFJKZiOVtNOujSD+dtuUt8wz/bjfnnajbzj0+NKVsqqJvKqunRvz5VpqSxvfZU9tv3a//kZP3PzV18d0Xh8qd5+nWfV3njODjlb1TnGopQmoa+qj51T1L136z/lzDasQFxnXobkB27hXp6etOTF7Tg7NCClB3q8A04tL0RNDxgQMcAjppDZJHR0DCAbVsQmoYnn2+o7f7ZVVxbHdk2htDdaxPwtt9lh/lb7LPm+X8CABf/Aw==</diagram><diagram name="nasax" 
id="QkxeAaXMJAUebvxVKc9p">7V1tc6M2EP41+RgPSOLt4+Wu106vnbk2nb596RCj2LTY8mByse/XF2KwwUAsbMnsKneeuYmFwPA8q11Wj2Bv6PvF5vs0XM1/FhFPbogVbW7ohxtCAubm/xcN212D65FdwyyNo12TfWi4j7/ystEqW5/iiK8bHTMhkixeNRunYrnk06zRFqapeG52exRJ81dX4Yy3Gu6nYdJu/SOOsnl5FYF1aP+Bx7N5+cuBX25YhFXfsmE9DyPxXGui393Q96kQ2e6vxeY9TwroKliCp/u73+Yflx8/bRj7YeXes8U/t7uDfRyyy/4KUr7M1B66pPJLmDyVcJXXmm0r/GapeFrd0LvHOEnei0SkL800Crn/OM3b11kq/uO1Le7U5w+P+RbJ0y4v7wtPM77pMonwoTqXA+y5tXKx4Fm6zfuVlklZefbPB54rmud1isu2sLSs2f5IB/jyP0oEB6BJT6OZg7mMeHEQK0foeR5n/H4VToutz/nwy9vm2SL/0Q92B+QO9yPWBblPHqjrykNOeiDvhdb1Jk4LW0Kqxjq6h1bl+DLz8N0cgwYKcEcv4NzOIfe6AA9cj4Y6AWcQ4XaNhfuALyjAPfMcyrZp32Oi6+tF99Gf8mnnHciD7zDHuqa7hgB3YCzcDB7YVXbzFnw1CLxtvXg/PnK327ojL3iwtFj39hi1UQGWSAWRuo+WtwaCuOZ0EY7DBoK35vQRkssGgrjm/HEMpz0qnpoTxDHwbPnoUQHWnBCOCDCImzjNGSEIDzzqfJ3ELUUNvuLy4mmY/BQ+8OSzWMdZLJb5tgeRZWKRdwiTeFY0THO0eA7lXVL0vAun/81eeGqYdvGvdtB35b6ZWO0ZqQQYUrRUckrBcxSu53vS8y2r4mQXm1mhSE1isfYm8VQs15MwyjvdpSILy1O9ze36YqWhl9XAbUZVt80x7RAYqC6BwWEtOnk04/flV5FmczETyzD57tB61xxRhz4/iYKZF0P4l2fZtpTuwqdMNEcZ38TZn7W//yoPVfz9YVP/sq2+LPOrLXaxJjZ1qoa/dg3EqxoOO798a+z9madxDlphdTujaBnQMNLX4imdcongmoXpjL92wGqQFci/akQpT3Iz/dLUIbtsotz1s4jzKzkYX9A0vio4VUfYnWi508Gw3qVpuK11WxUd1v0/c3yv7ltHZro74MFo91d4gR07Y9qxdY4d+37QtGP/Zf726nY8lnkONSvbG2BXJ/feS7vVz++Gc8v4lRmoxJ32t0AqH0jtgDT4tCsNfrRI6qGLpL575IAcB6YDckE4IGodOaDyu5wDOt776g7IHzVC5sbl1m10YjNX1k5txo7u+Cg5x1CL41/9LrCKnmNZecsQCc3TRS84/PPPssv2cSsFq/LIlqvl9tI++p0qh+o7r1b/14atstHW1k5/aY2/PNRlzeHSiurHIXsRR9FuZPJ1/LVcx1bYcYlaflzn7sb5UBwrH4zr3bi0WzMYS7HkR9MdVZNYZuVwJjoDOG2ywiiRit9MV/x2SYuxtsNcRu+KBaQFQ0m4XsfTJn1Nd9l2Lfn+H+Okwzf6zjDf2OHIhvF00snUaHA6aKjaLvVFljthgedaAbVcx6JOc3okv6mbWBZljLiubQWMnumZ8l/xWfPIzJq4vpR/OsMlPCV/Wyt3E//46devtyRj21v2xy3xJebSqrWxYhVO46w42UDBIJRf+FrtdTQ6KWvfXetaCNsJnu2q1tpOInyCRPmFU8HECuqfFpK2O6Fu49NGtq+PBqQ1a2wSa9j0c7N5HVWs1GmW8yBRl9/FeqT2Qc6cap3wajz0II2VB82CIogRtH11BMHlRvN6VBDcyAUmbNR5qhe3Agkz6HhQvegVSJhBx4PEZAMwHravYwQXatWLYqHEAHxM4EvhpYIAPiJUZ/hAogA+IlTn6/qJgIsl2gwan9mqTpLHduRwkVad8o7tqaEiTXyJmxNpcWYgiIPlGX9i+/UPaYTBnq016H170ujRgXxPF/UmzqD4bdbDjqywY5+2cE/Cwj1tQEPx2vJA9wVIr2Hv2HiA4tMH89CKngwzDw6UOcnBPHQDjZQF3Y/hn35m+WoBo2/8wCWHvAFy5IIMPu6gzHuqDjP4mIAy76k20ODjAcq05+DA0YcRXKh1v1cAT+DAx53uVxYA4u5EqMHHHdoc/9VQg48HfDk+WCxdtHk6OrN1oSwPUubI4UKtOrUe2VNDBZoEEjZ9qbDTh+HFwk5zoUnP1hr0Hp00UO14WVBPF/UW7gNxJnsDMFXY8YG4kgFAGynsyDzfB5MHs4QdGSkfJg8mCTs+kNm24e4f3b2kD2QZofIAgI8JIItZlIcAfEwAmQlTHATw8QBkJmx4GEA3exMAmShTHgbwMYEv9TVUEKneEoiPCbMEkUB3aYrT70HXzxxc9DUXqgCBPlbRJRi/psXo4QUuOeMXyBg54sClRiLXvlDY6cXw6sKO60yYU/t0IN/TRQPwMFLrgwGYKuwEMHLnIUCbKOwQa/zSjFdjzigpiFiany6BxJxB4hGxYKTtZ4QYbHe4xIKhWqsPMviYGL+cJJQwg487zUk8JO6MEqiIBUM8PyPUYJvpIhYMdVx9qMHHBL4U3kyBilhoc3yjBKp98RpEPMDFEob8rdDPw4UaX6KMVAoiNtpEGdnkT8dagU8tpN907az9M5H74llyxS9VFM/q4aw9d/H7N87qnO1rI41Q8KyHM4k5CyD1qIjTUzxzrHpURPnCgHMlaqcHY1kRFPybOWV0fphlPwZwY2Q9KqJ8pQBg6oyqR0WALD04gwej6lFR3UsPQIwgnEVcqO7FBSC4MbIeFQWyvkB5mEHHA4w5FeVhBh0PUB6ZHxw0+jCCCzWU5+JVxwB8TOBL4Y2sR0WBrAtQHgXwEQFjWcAQIuBiiTaDRme2QLR7dY4cLtIwlH11nhou0hIZ6qVPt/WB+JbrUVEgMv/BAAx9uo3aMHLOIUCb+HQbtWEkpGfwYNSzatSGkY6ewYNBT55RG2v1gzMCBrYleNTWrCeDIMfMp9uoDSMfVh9m0DFBYKTLqgMNPh5gJNNnBA5sy+Rphey3wIGQO92vjgPEnWFPt1GCNsc36uk2SvDl+HCxRJun4zNbGMKwQkcOF2oY0q8yTw0XaIk8+DJhpx/Dt1yPilIgae/eAEwVdiiQvHYA0EYKO9UDhvh4MEvYoUAU5eE8mCTsUCCp6HD3j+5ekgLJNpUHAHxMAMlVlYcAfEwASWUVBwF8PADJc4eHAXSzNxSI4qs8DKBjguFLfQ0VRBja3NgsQaR6fwy6ui5DmIOLPtZqYApjCVxysBYLUxhe4JKDtViYsogDlxqJXPtCYacXw7dcj4oyIKn13gBMFXYYkNx5ANBGCjtMdWJ9rWITZzBnlhTkYK0kdgZzJolHDpC0fXiIQXeH66jOzKEEGXxMaM7SITkr0+QmB2stMdWBBh9zQMTz4aEG3UyXA0QdVx5q8DGBL4U3VKBy0Ob4ZglUDhDxfAAPYLF0gcjf6vw8XKjxJcpYpSAXbaIMZfIn/5oKk
dW2fZ+DMv9ZRLzo8T8=</diagram><diagram id="IJMxqdzpZym4vS-KGVnh" name="pruning">7Zxtd6I4FMc/jS/rgYTHl7UPuz3b2dM5PWdnuu+oRGUHiQux1fn0GyQRQrDVKeqFbV/0yDUE+d2bf3KvmAG+mq9+S4PF7AsNSTxARrga4OsBQr7l8P+5YV0YHBcVhmkahYXJLA2P0U8ijIawLqOQZEpDRmnMooVqHNMkIWOm2II0pa9qswmN1asuginRDI/jINat36KQzcRd+EZp/51E05m4su+JN+aBbCsM2SwI6WvFhG8G+CqllBWv5qsrEufoJJa7i9u7Sfrv33dfn+ntnzfGkjjpRdHZ7SGnbO8gJQlrt2vhypcgXgpc4l7ZWvKbpnS5GODRJIrjKxrTdGPGYUC8yZjbM5bSH6TyjjP2yPOEv8M9yoIoIbnd4Md73oa43ReSMrJqCpHgWX620g08egmdE5aueTsRqdgQd/Na+l26fVZ1ubAFItKm255KnPyFIHqI4/xe4pVnYcENCm7s9Bk3coDhtvH7uGdszvu8NjmvnEDEZfk+eCbxA80iFtGEv/dMGaNz3iCIo2luGHOgOeVRnLccBeMfudOSsOKUyeav0umlOJfRxdaJUu1RbpHanXsuDLIZCcUBf2eRf9j5appPf8OIZu4w4r7JhkHIG41SygLxUS8s++OO3+lg31H8i5Ct+Rc3+Bcfzb+W5k4S8qlVHNKUzeiUJkF8U1pHG09t4ZZt7mnumU0g/EMYW4t1QrBklJsqYUJWEfteef0kuspfX6+qB2t5kPC7zU8xhia2peGpMCBXGsqTN0fK2Q8kjTi07djWA+gwp2d0mY7JW+2Er1mQTslbHcpBlpN/M4hSEvMwfVEXPU0xIU59oBG/kzL4fCX4TNdVeyg+qDipDKzLNA3WlWaLvEG2+zKWrca4ZdTCtOiwDNrtHX4gju1zxrHxK3Hseb4ax57jgIxjbJ03jg+NP9M9RwDusS75nCj3nyhNH6liZRtnnindzs2UnlMTGNsGKTDSte8LjANCYLChCozp2CcQGO+sMxwPHqcag0PTcvaNQ9Oyais2jH4lEPP+Tx+c+LzBqUUfwmhou3755ynBuE0ZZY/FHWqrOr1fU809Hecoq0OzdhmZAu36WPX2pxlselHnqzb8+EzG1NGiTdr1GXkehWExMEkW/RRVgTyMBTXerz0a2Nd5X3wsZsWwNLWaRUITUitwSBNNmBjN6Jjzc70uJKfrd6Zn61jTsyycVDz2x6fHlBVVvbZ0dpeZmsv++nSZUg70gblsj2qguuR4nUWMPC6CzRT7ynMK1ZeHV2hbRy3ROv5Qr8UhJI1VvKW1dcBW/wCv6tBgEbdbJn4sfhZIentUPUDQc1yY0edC5ydxWe75YXn/I3GEwHuPb7Y7yrsuphBom3oGA0wLdmopCHx6NgEMn6SFLAgzj4l6O7p1NQWCvIfp0w5BBQK8v+mULsJAkIPPp954tOr0tLqSP+mSCgIf+PRphzyCgNffdEoXRwjAUdsLfDjAtceEYVRXUNs5ASDiNtB6qkQMXpPrj4lB4dd2knA0fj7Qmipqe9V/MtWEUEdBbS/gQWsmCOJtJwGAiNc1FgTvrmQNDQoLgl/bicPJ9BVGiQT193uVBoWFwRz3OPeqaywQ4j3OvXRdBsK8M7nXHj8rPj29zmReusqC4NeZvKuumCDo9Tjv0vUSBPG28wA4xLV9BGBUZnB/v3JAHtB6LG472zgWwfqv6oDws7ryWBZGQOuxFvgns3apJoRqi9X2mh60ZoIg3t/HtDSNBcG7K3lDg8KC4Af+katd+gqjamL19/uXBoUFwry/uZemsUCI9zf3atBlIMy7knvts+/g6X+Z2ZXMq0FlQfDrSt6lKSYIev3Nuxr08sTE/YfZNPtyb6yWafZEvsWPQTy72GO4kyS8zLeL5kfjOMiyaKwyVh2i7w3Ez7+N4obNjTx7cNDmRg07ER3mrXd3Cap4wm7Y4kHaPrqZkOEMLd91DB8bjm1gW92f1LSNoWFgy0KOYxq+zIwP3lqIX8Wz1J4NY+h4anc7dhh6f1Mfflhu0V00L7c5xzf/AQ==</diagram><diagram id="aodeN8v9uaSp2zl-TakT" 
name="NASA2">7Z1dc5s4FIZ/jWe6F/Ugic/LxN1kZ3fTppP9vNrBINtsMcpi0tj99StiwIAU2zSWOWici9bIgOE50pHec4QYkclyfZv6j4s7FtJ4hI1wPSIfRhh7ps3/zQs22wLbwduCeRqF2yK0K3iIvtGi0ChKn6KQrho7ZozFWfTYLAxYktAga5T5acqem7vNWNz81Ud/ToWCh8CPxdI/ozBbFLflGrvyn2g0XxS/bHvFF0u/3LcoWC38kD3XisiPIzJJGcu2n5brCY1zdCWW56t764FeB58+Zc/Zlzh7uLmi77cnu+lySHUHKU2y7z715On9+9v0t/lvcTqPPyXk4yd8UxxifPXjpwJXca/ZpuSXsqckpPlJ0IhcPy+ijD48+kH+7TOvMLxskS3j4ms/DYoa4PKt0F8tqiOPvIniZr/SNKPrmgWLm7qlbEmzdMN3Kb41zeImivrpFJvPO2PjcpdFzdBmaVe/qGDz6tQ7ivxDAbIDVDR8qO5BpsR0xpZIFRFVVLF2VDHySoR1sJakslYWODlWchjrnHPN4c2iOJ6wmKUvxST0qTsLePkqS9kXWvvGDlw6nb0dZNUp+NPyWoy9gL0mYBePsQjYk/D1VOE1u9Ra43CtPYUN9taDw5W5GgdIKi/GMqewKz05X0s/vus2NFDAbW2BmxBxO9ri3vEFBdzVD/imWb/7pOvpR/c1dw0BN+qk7wbF2wRIu5PwGxRtwVmD4N1JEg6D96ZNrVfAR4jDtwCeuQENpICnrmVaxjndNRDiGupFucMGwltf/Si4bCDENRSQvfLUVyECqbAaKsKWS+4Vr76SEEZAAx+hAQcb4cdE5iPOGuHHg1F9VU0YWIy/BKoVYdhRfqxY+fWJHGScHw9G+HUHDjTSjwej/bo7bggjOzwYpXcCtw0C+GCk4JudNgjcgxGGJ3DZIIAPRip2d9gwYh1kMPnCE7hsIMh1lI+QQ/5EYzUJNOZPdFSTvQLVWC0CqbI6ikNAYX+isTiEEd4gR6jBwcb9idN73J8MRv5VNWFgcX8yGL3XgTDsuL85GAHYHTnIuL85GPnXHTjQuL85GAHY3XFDGNqZg1F7J3DbIIAPRg2+2WmDwD0YaXgClw0C+GC0YneHDSPaYQ4mc3gClw0EuY7yEXLc39RYTQKN+1s6qslegWqsFoFUWR3FIaC4v6WxOIQR3rCOUIM1fvntRYEf/+pPaXzPVlEWsYR/N2VZxpZ8Bz+O5nlBwHFRzvI6zve89oMv8xdD1Z/nfPmrnfSqODZjj5VJykXYcF5SLqmWG7pagsnYfvOYX+xyPc9XpRtHbOWMo4Alq7Ef8p2uU5b5xaXmikzdok2W4Ta6VuQaslWFiCT3oGwtLJsIJqXhnD4UmyzNFmzOEj/+cVd63WxWu31+Zbl1XirDvzTLNsWqWP5TxppNja6j7K/a57+LU+WfP6zrG5tyI+F3mx9ijF0PlQUvx40NzysLdge/bDWOvqdpxKHlNW9bMYRK1M3wmZ/O6b52X7afHOje+pHSmNfAr/Wd5KYuDr1nEb/AnTfGzZQWwk5Vr8qzrNhTGtDiwFadqa7k+6uRo3TxL95YMz9KKsu9sYF2ThkSAu5ZIQdipO40brJNmxiSdRjPvnqS4lDdbEZt+QIGoeNNjRPU+td5W+24BgziEGN1JyLuQOQNMXJ3It5eO6oBg7jiwF2fPqXdYzogFI2jOHIHyosDYe4qDt31ydyBSRwJgKFrOoRams5DuA9Nt5VJ+9gWlbcv8XeVpv6mtsNjruhWtTMf0IbYrbThzSuHWOUj2284BOHWIfzD9tJPKjtLa+joW4QxCxTvAjHuqmjUYkHgrVjpwxqzgCCusdZvj1hA8NZY6YteHARxjbX+UbHa8xOHOElHmRcHwlxjtd/24zCIexCn6Sjz5ECYHzGTR5/UG4DH9TzFohNQ51m9Ea3XCq567XBIAggGcY2Tyw5E3hCfA1HVbcIgrlhyAkq9mQhE2NBTLDlheXEgzDVOMLf9OBDingAYfOoNV7Mnt6k317X7SL0dzKih8gUFPafULK9V9Yx2fqyZ7JLk5NBrJzg0X1NyMTa3oIMdjyspG5XLOZVqiVd1AzmW6fD9rPz/5q9smSubFYoMjXP/4sAGhgtCBsRHexRNDfVAAFccEAA1tAGCXOOYQHtyKAzgGocExNmhMJBrHBU4KqrbA3KNM9GiJ4cCXePIQNuXQ0GucTZa9OZQoCtOR1MUWtSRQfdsh/i2Qui1qYnlM4jl2xn7fY2ycv05w6+4FntqW0qRWyCBq9aeFnVDUwbcxVNiKwVO2h0oEOT6qk+MDIjAjxivXJZF6LAsgtt+RMEbO2KHfdZlEVD5rtZ+AvnG2HWsejB/7JLy+8PrI5jVozPbY5FVPVzTJaCfn//sz9eUj830lg2QROu9sVf7a67hYVrfH7u3PWtf7D4XbLXYvd38kVdC929NZiDPPMuTO0jytt7PI3LFSz6OJmTkOROhAXJ/kjXbi+A6235xGYXhtmnSVfStmHSWV+SCCz+vdT2yPuTn4q1xtW2YSOjtEpbQVtdYFrEkK9ozUuklq1xKaSgHyxaPcSReEivzkpI3Av9yMeI+I7akqXSpMvu8NhSH7H9cbHi8DU0bgA1FDRDET6vcFBfbNVYGFDQccfu3npjM4uP5u4vpmqYT5t97RNYDWue13RF5MZqEfAT1IsOC2F+toqBpxaZsEIfY/PibKJZoBLepEZBp79cIkgF9N3MdHGsfsENZ9sYhuW3kQ3KH/0cM2zLaA2fDHBNMTBPbNjK88iUCnUfnjiGctj3SVz1/RvJi6dttp3wHom8+V+MnZssWWNr0zzz4FZOFF+O8OiA6s23EGN1lNCtPCx8lK8/cqYrpyXf+D8M03Mt2cc1I2eCICPk3xzt2bVmFdhRjPO+mFzvusaPgSj2JKz2vESVv8n4XXIy4J5/RftNQ2fWdwYTOZja//T2bZyS5Nz8/TW/jxT/vNZ6EbZOx029SVkpccRa8z5Rsy0NxLSbrZ85O/IiJe91k+E5m10X2/iRcN+gHc2GwRHd7mI/dlr86Vlhjq6nXBcd3OlX98x/r9fOE/Pc5dD76y2Ty593iJ4k7zFI/WSFMkP9at3agfapqbe35D2JHIlv54Duy4nwzZSyrg85nBdyxkOZ7/A8=</diagram><diagram name="NASA3" 
id="nqXOh82GWntSVa2ZzuU4">7Z1rd6NGtoZ/jdc650NrURQU8LEvOZ05k8xMujuXyZcsLGFLiSzcCHe359cPSIBuyGbLIDZ6dycrsZAaS1UPBc+rYteVfnv37X0S3k9/jCfR/Mq2Jt+u9Lsr2w4ck/033/C43mA8e73hNplN1pvUZsPH2X+iYqNVbH2YTaLlzgvTOJ6ns/vdjeN4sYjG6c62MEnir7svu4nnu7/1PryNDjZ8HIfzw62/zibptPhYvrXZ/n00u50Wv9kExRN3YfnaYg/LaTiJv643rV6jv7vSb5M4Ttc/3X17G83zpiub5afpPP1svvz4+dO7j5/ff178/o/IebXe+/9R/kr1CZJokZ6863efbv7U//R/fjQfHpaLfyav1du/FX/F+hLOH4rmKj5r+li2X/ax7/Mf0/A63/RmmYZJWnSzlT3O+i0NZ4soyR6r1eP5PLxfzlavXr9iOptPfggf44e03E/56M3NbD5/G8/jZPXL9CSM/Jvx6rck8V/R1jNm7EfXN9kzDRujaLQvUZJG37b6sWic91F8F6XJY/aS8lmn+OwF564uGufrFjXFpukWMOW2sOD0ttrzpjOyH4r+IPSNIvbNhxzON9M4mf0n75J50fr7/bX8Orubh4sM+nCyt+lNvDrI801pfF/8NI9u0uLH6zhN47viQVI0gVXb55Mkvv8UJrdR+ZLtjl7Ei5yk+3i2SFet5r7J/s3a8a01cq/c7JO9zR6rzePs3/zlSfo2XmRkZLjlu43CZfo1Wqa1UDyJ+/NQPNPpyu+o0+3mnZ592nQWzj9kg2a4uF0dm9P0bl4chl+nszT6eB+O85d+zUb29aGaD7HhppvirClu5qtRbTqbTKJFfU/RaFj1a5R89yVad69q3kHNj9qtDio7o2kHFTvbNBx5b+E8+3yLMM2OmIfFZHnQ69X7PB0ELSDQhm//YLS+SC4c4YLEhTYYXLjCBYkL18HgwggXJC5Ku710LrzmXIhdtG0Xj0evWc4qGz722OA17q9LHwwCAUFko+7sScgHBYwrHNtQhHBSwLjC0Q0FHmCSwUDxDUUINEU4OhKOmtPTWYVDoaeXunGPXfx4gB5YNkcBTTrQI0syGTDWQQgthQwo7QCPMulkwHgHIdsU7+jIO2qGofPOqgLPMauDQLzDBk8uCSiAeUcJgpAh3rFHBvpkTDIZKN5howeaZDJQvMMm5JviHR15Rw1s5/UO8CizOgjEO2zw7JKAApp3gGeXdDJgvAN8XiadDBTv0OCBJp0MFO/Q1NvGaxzjUm/pN557AMFZb+nXhIhRpLDZUKCbY9GXBmrw/LDCXjRQgweGBBTANFCDT4ikk4GigRo8RaSTAaOB4KEinQwYDSSEimIarZsGkxv8NXiAWB0FIh4OeGJIQAFMPBzwGZF0MlDEo4xPhQwRjz0y0BNNMhko4uEQAk4Rj67Eo+8b/R3wMLM6CkQ8HPD0koACmniAp5d0MmDEA3xKJJ0MGPEATzTpZKCIh0tdy0bEowPx6PtOfxc8zHRPWuLmMgcE8PSSgAKYeLjg6SWdDBTxcMGnZ9LJQBEPFzzRpJMBIx6EgFPEoyvx6PtWfxc8zKyOAhEPFzy9JKCAJh7g6SWdDBTxMODTM+lkoIiHQU80yWSgiIchBJxot/r7Zcnfvm71N7LeTetWWAHP91Z/Ax4gVtiLBhrwxJCAApgGGvApkXQyYDQQPEWkkwGjgeChIp0MGA2UFW76NA0mt/p74AFidRSIeHjgiSEBBTDxKE+JQoaIxx4Z4FMi6WSgiIeHnmjKEjfHyJAlbhiIR9+3+nvgYaYna9xUTQGeXhJQQBMP8PSSTgaMeIBPiaSTgSIePniiSScDRTx86ho3Ih4diEfft/r74GGm33yS5MUPCODpJQEFMPHwwdNLOhko4uGDT8+kkwEjHuCJJp0MGPEgBJwiHl2JR9+3+vvgYWZ1FIh4+ODpJQEFMPEIwNNLOhko4hGAT8+kk4EiHgF6okkmA0U8Aurd5DWS8bJb/SfhchpNjvZnO3fx6/IuhvIrBt3zXfyBrGbTuvAFze/k6svwAmbZYF5bw438iVNBoTZQqG0o1BYU6shofFCdw7evtTGEHpTVbaqmYJYVDpcUND8EnytJJwPGD8HjRToZMH4InjbSyUDxQ2XJ6jd9igqTIgDKYpYtMrsc3RwmYi7KAk8bKTCAyYmywCdUnsAGip4oC3xK5QlsoAiKspgFpwNgA0dRZJ0cBorSd7kAZTELPtkpiiyks2kL8CiUAgOcooCHoSewAaMoCnzy5QlswCiKYhaRDoANGEVRhEhUFKUrRem7sIBSzNJPbopSHSaiKEqhp6EEGNAURaGnoXQ2cBSF2cTRAbCBoyjMItIBsIGjKIRIVBSlK0XpuwSBUszST3aK4jfu0osfMmxmaeiAWUEzGBs9LKWzAWMwJQrChhjMARvMEtQBsAFjMDb1nvoaW2mxlkENJoed3E15A1OW+O6rvIGyZQmf1m1yAzjfAgfKRo8fbVm2Z9MW6HkjAQY4A0SfgklnA8cAmYWQA2ADxgC1hI4towMjiFqW9+lVSrgUM9DoAaNunjBc/qCAnigSYEBzFM1sDia3aw06OjAKo5lN0Rw+OjiGgx6h0tnAURhZKIiDwvRe7ECjp6da1gratAWzuJTdtYYsJnSsaRxJU1tGB8ZwHGZTOIePDozhOPABrCxIdJQN6opEYjhdGE7vtRIcyV6f7FKn+b1llz9mSNjaFitwAsQsbB0+OjgChD67lc4GjuGg57N0NnAMhxDXiuF0Zji9l1pwmWWv5x8jgsZ9dvGDgitpalusoCmMi56m0tmAcRQXfXYrnQ0YR3GZJawDYAPGUVzq/fs1PnIZtRT8sspVb7UUXFnfqH1fdE+6Tf/MgogeMLqyZNGmLdATRQIMcAbIbALoANiAMUCDnjLS2YAxQCOhY8vowAiiIYSOIiXtSwmXWgoGPWCsDgRxFGXQE0UCDGiOYmSOZsvo4CiMzNFsGR0cw0GPUOls4CgMIVEVhelMYXqvpWDQ09PqQBCFUR6zuJTdtUZzVtAMx5M0tWV0YAynvOYSdNpCB8ZwPPgAlswGjOF41BWRxHC6MJzeayl4kr0+2aXeSQsrXeiYIWFrW6zACRCzsHX46OAIEPrsVjobOIaDns/S2YAxHJ8Q14rhdGY4vddS8Jllr2cfI6oDQRRG+ZKmtsUKmsL46GkqnQ0YR/HRZ7fS2YBxFJ9ZwjoANlAcpfqdTGopHPZoO4UTnKD4nOW3HlqN3IMuPmvpBFtJqb3W9XDDM9/SCbbNbBrXucfjDfnig8UHFxhE+GrYYJYVDIANFOHLPqmwIcJ3hA3wMOAENmCEz6YWzxPraNU6mNRGsG1mU6nOPkbYJxXZu9BBgd/kqBt/HI3HV518KXXtu45rdcMKnKMwmxw1fHRwFIbZ3KnhowNjOBo9QqWzAWM4mpCoiuF0Zjh9l06wNXp6qpt/TXr5gwKzuJTbtQaBFTTD0czS1OGjA2M4mtnMq+Gjg2M46PksnQ0cwyHEtWI4nRlO3
6UTbM0sez3/GOE17rPLHxSYpamcYUBTGAc9LqWzAeMoDvoEVDobMBLiwEeoZDZgJMQhJKoiIZ1JSN/VDWyHWTx69jGiOhBEQmyHWeDJGQY4CUFPNOls4EgIswmoA2ADR0LQU046GzgSQr2HvkY4Bli+wHh9Fy9wpbZd+/ZX0cy4eIHLLC7kNqXBlVp3m7ZgFh8OmBU0WXRlvmbL6MC4pMsskBw+OjCq6aLHl3Q2YFTTJcSXYjztGw+Xwgkus6iS3enFNO7Syx8zmEWXA2YFToCYTeccPjowAmTQZ3vS2YAxHMMsvh0AGzCGYwhxrRhOZ4bTe+EEI9nrk11aHSdiOLaRsLUtVtAMx6CHrXQ2cBQGfa4onQ0chWEWwA6ADRyFIQSuojCdKUzvlRGMpKdPDyHN7zO7+DHDQ49LCTCgOYqHHpfS2YBxlPKiStgQRzlgg1mEOgA2YBzFI0Sm4iidOUrvhRM89PyzOhBEQmwPPfAkwAAnIeiBJ50NHAlhNsN0AGzgSAizEHQAbMBIiE+9f79GOAZYOMG3rYMOPm/hBF9WJ2rf/vyTbro/r+75zOJCbl9J+c0P7ssfnZnFhwNmBU0WfZmQ2TI6MC7pMwskh48OjGr66PElnQ0c1ZTVino1Hi6FE3xmUSW704ssZrRpC2bR5YBZQROggNl0zuGjAyNAAfpsTzobMIYTMItvB8AGjOEEshQSB8PpvXBCINnrk11aHSdiOHYgYWtbrMAZDnrYSmcDR2HQ54rS2cBRGGYB7ADYwFEY6kJKojBdKEzfhRO0Jenp00PISUs0XeSYoS30uJQAA5ijaAs9LqWzgeIo2mI2u3UAbKA4iraYRagDYAPFUbRFiEzFUTpzlL4LJ2gLPP/cHAgiIdoCDzwpMMBJCHjgeQIbOBLCbIbpANiAkRDFLAQdABswEqKo9+/XCMcACicYpzjYyy8mtBq5B1181tIJWhEyRfG/hoe6ao5Gb8KnwAPDDfkifFqBJ4QUGNCETzGbVDkANmCET6GnhnQ2cIQPPUSks4EjfIQQUayjfetgUr5AK/TAsDoQREK0jZ4QEmBAkxAbfFrkCWzASEiJgrAhEnLABrOU88rWkZq4kXfVydT7wHg6NJ2iA+MotiwpxMFR+i5AoG30/NOWJYU2bYEeeBJggHMU9MCTzgaOo6DPmqSzgeMozEJQbo5CRwfGUTR1xSFxlC4cpfcKAxo9HtUnLVZ0oYMCeh5KgAHNUbTkoS2jA6Mwmtkc0eGjA2M4mlmEOnx0cAyHELiK4XRmOL3XJ9Do4Wp1IIjhaI2ephJggDMcSVNbRgfGcBxmE1CHjw6M4TjMAtjhowNjOA4hrr2g4gfG67v0gSMrDLWvlhXNjEsfOMySzLPrQ0W+uKR2mGWTnGFAc0kHfbYnnQ0cWUQPJOls4Nggej5JZwNH92RRoF6tg0vpA5dZ2Hj+MUJW/dnAwCw+5AwDmoS4hHwQMVqmowPjKC6z2Z4DGFZgHMWFD0FlUaCjbMiiQBwcpffSBy56/unKokCbtmAWeHK7DiWwAqcwzPLQ4aODozAynbNldGAMxzBLWIePDowAGeqCRCJAXQhQ73UVDLPs9ewCZJrP87z8QQE9TSXAgGY4hlmayu1ag44OjOEYZhNQh48OjuEwy2eHjw6O4RDSXDGczgyn97oKhlk0e/6LWq9xn13+oMAsbOUMA5rheMzS1AGwAaMwHrPpq+yuQ8nowChMeQEk6LSFDozCeNT792t0ZYCFE/xyncTeCid4sjJS++7oNb9lrjdZ9JhFlWe/BvRkKaRNWzALHznDACeLzOaKDoANHFlETxzpbODYIHoASWcDRvd8WeuoV+vgUjjBZxY2nn2M8GWto01bMIsPOcOAJiE+s/ma3KJlOjowjuIzm845gGEFxlF89BCUzgaOo8hqRRwcpffCCT56/unLakWbtmAWeLK7DpXFjI42DbM8dPjowChMwGy+5vDRgTGcgFnCOnx0YAQoIOSxIkCdCVDvhRMCZtnr2QWoOhBEgHSAnqYSYEAznIBZmsrtWoOODo7hMJuAOnx0cAyHWT47fHRwDIeQ5orhdGY4vRdOCJhFs+e/qPUb99mlDwqOxSxs5QwDmOE4FrM0dQBsoCiMYxHiUsjrUDI6KArjWMwS1uGjg6IwjlWXx5p5fsF+kynKDkPm80NcPvFqudKV19kLlH//bdUZ5fPZT7f5/0ejUbmv7M2td7d+5oDOrFvSXRLD+ew24+rdOMqByDbknTcbh/PXxRN3GXkrfJMoey9bdO46S64l4UMar99vDVDFObDmtJi/48LKsq4pHhfvWR2y11KJh/L6sfz6xj0s8aBr7EZ3VeLBsepiWUGEDyJe2UO9IaL0QXdFk9uobIk4SafxbbwI599ttr5J8lEtKgOOzWt+iKtTxp9Rmj4W7Zv30G7/R99m6W9bP/8739XILR69+1bsefWgDE3WHftr0VB2/vQia4LfytfmD7b2kz/c7Gj1qNzTftfTejstg5fjbVrkknlDPklFEs3DdPZl+0VPnpH+lcO/dapz9gYcLxgF5Vcx5W6W8UMyjoq/uaHlYGd+uczkY4mqN1L27r7WH/1gX62d0pTTB4xPgKUErPU1ldUSVEFjqF4nSfi49bJi4D/+nsu7zvfe84bR9R5bJtbtZfg8kc9D0jsnNuiHWN+YUTl9nwptYLsjrXwV+JarXK29Xa48PXK0rwPXsizPMW4jmtsDzvR5vt6co/+9fYquPV+fenZuG9J1tzcZVvng7BtvFHjGtZXJNEKr3UHUcUeBY4wKLMf3Kz7pmDsjPwPc8lzPDdzd3+E59sjLfrtnnPyiVZ8Z8rovdteusbwPFy/yltc3N7PFLM0a34R3eUiyuF7er58vZWb9O47IzOZYOpa6bB03++Qesj2ZJdE4ncW59ORfua2O1jQstgQWGXdCUqJ3SxIaraqT8zNfyGn3ONEv9JEGX8yeYcB7vuNaH4TONLQE1t6Rbpod3Mevx9o//Ou+lD3x8Leuw/FftyseXo3XCUD+fHJ7/T+24+ffhduWnS8ivvrBtf63ftC4vU2i2/VRiTdOeHs24ByGFsruaJT4/h/Wp4e7D1P/7sv0/uebP988zK1Xdd/TZmOZ89sfP61/OOiNQURQXXWgKle0KAd6ZUaHXVg7zreQO9X2YN23qese/Lv04PM96Jme+88+2n+/SP8933+Balb9uY3+Sz78/f8/GfeX32ff/RA5f/xl/XD9pXYE3euwbotu733peOPm/xz0XPaMWf0p+mpr+/pPh33m746ZXrMjro163bU9JmvZvmTC4pMHAYta3bXvsG6YPdLpA5909CTzUHMTa1uC2TQRviBc7rzE2obBuSevHS4ucE5ibcMwux+PPRcXOOGwtmGY3WzHnosLnE1Y2zCyjmyPdnH+mty1b5hZgbNzjw2QK8jWtgTOPXEvBAFMNhQhNRQwrnBsQ+HcHNcOGCi6UU4nETDEN3bBoK76KsLRvnCcscB2PQTo6eVJi8Ve5niAHlg2RwFNOtAjSzIZMNbBrPwXfzJgtAM8yqSTAeMd
hGxTvKMj7zhjXev6WVXgOWZ1EIh32ODJJQEFMO8oQRAyxDv2yECfjEkmA8U7bPRAk0wGinfYhHxTvKMj7zhjtel6CMCjzOogEO+wwbNLAgpo3gGeXdLJgPEO8HmZdDJQvEODB5p0MlC8Q9flm6tCDVKm4fg1xhNVNrxzVmnQh4cx0+K8bZXibbPO1pNfjT5b6698YXv1uF6GwmDqNF8eCo7DC4VeqiS/CIVX1sjSZpcHy6hniFg9+leUzLJ2W5WZ4Y2Jr/rC5KdlMP/5+uGP79+H79/EP13/unz4vUmJn2gxeZ0kq+u48TxcLmfjMmI83LyFwi5LdeWzJ0UPrSrXhctpVQdvO7/c9NVeizXtvCdjI7fmRF1ue2EJRW1ZZf20sjqQb0bG3t1P0zqsem9NAKOsg321V3C1FpXDi8TMFpKs0wZxdbhVB3F/gNggdnAReDpkxV7sINinQPtVzz1D4n5B9daOevv5oz6vhnl/tFV2be+qeqcvbS3X7LWW6x42lR8cNpVjXt5Uf/top18Xv47H7h+/vf4c/vLqMQlfecerDMvqKMTVUZ5CsbF+OfvDqjpM9rpaHaUW6SZL3u+eDJ+r83pkqa3jC2U1UN9W2t7brQpsvzxTfVnLN/iq7CUtf+OPo/G4ruWvfddx6y40u2p5296jvve2b/Dd1EvaPmPenzh1be/b19qchXomTa1q1v/bOj0P7SzR1SGinJp6xaarM0HdW24w5auny6q9tSHKCQ3bRFs1LeV0VsHfqvsCUy6rWrusIt/bUV5AlxdVfS85ZzX4HpPnVRX9tpp96bFrSud3dMI5srZCg1iI54UVufH3Kjj33/QN7jXgeV1Fv9GQHfaHuUga3/+VN340z87Lwzx9tN59z101nHBSyB4mcX4S3wSWGcjTH+NJlL/ivw==</diagram><diagram name="NASA3 的副本" id="TccuEx_fPiLZcEy-eSGq">7V1dc6M4Fv01qdp9aBeSEB+PnXTPbNXO1M5W7053P20RW7HpcYyXkK/59QsGbGNIFhkJHYW8dBuBiTn3crnnnitxwa5un35Oo+3q12Qh1hfUWTxdsE8XlIaul/9bDDyXA55Py4FlGi/KIXIY+BL/KapBpxq9jxfirnFgliTrLN42B+fJZiPmWWMsStPksXnYTbJu/tVttBStgS/zaN0e/RovslV1WYFzGP+biJer6i97YbXjNqqPrc5wt4oWyWM5tDuGfb5gV2mSZOWn26crsS6gq2H5/tPXfzz8+0dw+23zr/tPV7//56t7+aE8+08yX9lfQSo2mdpTV6Z8iNb3FVzVtWbPNX7LNLnfXrDLm3i9vkrWSbobZotIBDfzfPwuS5M/xNEebx6I65viG8kmq/yBBPl2z8uoLvdBpJl46nKR6Lr+bQcz5N4rkluRpc/5cbXrVpasHLe2+OORF1RDqyMHqMeiyu+W+/MewM0/VPhKYM3+P9Y51JuFKE7i5Hg9ruJMfNlG82LvY35z5mOr7HZd4DmmQegLBmkDvw8RM97CmtJ68Bjtw6hyvN23j/fTKYhQBuCTMYCLCL83GfgPeEMZwH/7Bnhu+r9JtIO3j/ZL4R4B/nAy8Lt44NdcawLot2I9BP7k7eP/fIqiUcB7EFfbAX8p2oNYYAJ0tjvgg+A/HXrbCvkgFpgAvzWK73QILIhDT4CwnoR0o3BPh7Fi1GNI2MJXLJbiS7WZpNkqWSabaP35MHrZtMDhmF+SZFvh/kNk2XMFY3SfJU2riKc4+3b0+Xtxqvway61PT9WZdxvP1UZpsVquosXuTY7At/rYYuPoPMXm4US7rfpMwzSYLEqX4tXj6kS8APJVT0jFOsrih+ODuk1cffW3JM5/4d6DfL8p5hBGZ4F/4il3yX06F9U3D87SOlkQsObJvBPNp7zu1ol2Xre/pvMdkTomHPEVpyLvTrWLPX44U+VTlI/sU8SET53rQW1f1O5Trhmfcsn5PhVyPmMkIGHgcMIZ8xtnZtQr9rKQO47jux4f19/aycqID9PDA/T78fOz82F67qNTtY+WVu8T93C8OQj9Weh7PH9EUs4ZaT42w2AWUM8joePm4Y953rlu7s6C3MEdn/s85M0/ku+deQ7xfM+lnuOwcZ28XeT6Z8vt8xw4a3pptI6Xm/zzPPcLkefcl0WmHM+j9cdqx228WJQ3hLiL/6z6WQqf2xZXsrs2fnnBPxXnyu+Bu8rnWrn8JtmIk8S/HlLbe/NiZk/oSV7mtRN91tFlw3R12dAedbF9S5PhBiXGgwZ4pjuUqPKK1pn80n0BYbsaMKjyAtZYcII2VFDlBaqxAIWU7KnyApSh2x0CTOUNEMZudgQ4mfKOBmO3OkZxv87d7QMUUn5l1FY4QdVUpryfYGz/NIqe8m6A8d3RKH66tXwugoXbpcUF9LoonIyBOFQuz3Sr+wiIY8/PYLr1fgQTIOW0ugV/ILxBkgrdXekIiBudY6e78RwBYKgmLbcHj0Opb7shbTz2TNe3XRDS5vaXWaASBheEo0ngh52BuSC0TR5QSMHFVc7ixoITVHBxQRQs+YCJkO+7IHKVAneEgBNErpJ3RpDMEUShUuCOGIByEI1KAlCjcIEoUEOyR6P42cdegBQSbi13wUgFueb5pjfBXMw7ZytdB9zlA6YdKGCLEA6sWaJCMgBgfw/XrFchwQ/awsI161VIJoBseuE9GBhK7dnzsVZ/5CDsa29Dy2rPHgjZksAPu/bsgdAxeYdESAc8EDKmwB0h4ARhZ/JwAqaqHoiqJH9nY6Q59RQ2i/DDTtw9EBlp8N0NAieIriQBp1G4QHSjIbezUfzsYy5AdWdfc4ecIAsu/K6iRej5LBrQITc4WmLwHF/z2qxIBgBtI6nfGDNJE0AEIc2LtUIbAOMp6mvWzzBNYBTxNo37ewvy98Vbju4URviMti036votfg+u+GZeSUU70B5VlfCtXfJ17yeW6Ri+tau+SiCOrXz4umeFAZkAkpIFumeNARkAlJIF1r6xRD7wIzCwQDMFhsAbWWEMrH1fyeCgDwG/ta8rURDyIQyguX0UwgBQFZ/A2veXKAj5ICaYAr1FlqqDCbFd0MJzMAW2a/TNpxNisxguHU6BvAL1MNTLuEwBb4zyTNiDrb4ZFYT5TpfsNKoQElrLT/euYpkQElpLSCUQxxZCQmsZqrwJIIWQ0Fp+Km8AUCEktJahygd+iFzSWjqqIOwjGIA41vLVwVEfA39r2auCoI9hAWv5rHzMx6jYEMda+VVB2EexwRQ4LrIaQpwJcV5QOYQ4UyC9ZhGeEKtFceopsFggRYQ4E6KxGIUaQtq89fcW5u/zeRpky/x8HkK66K63zipcGubz/nuf1Ds+lEB/zA8gwfZpB1u9P/+0LP6fzWb1ufIfV56u3GOnY+y2q99MdHmJR5p5Wc4LZoSZ9pIuSl5a9m4bbQZ5ycebm3gTZ/mv9KLbIuxuru+25f7adcq/8YLrHOI46RHHT0xausLXCtfcLJeLOBXzLE4KFxPRXfEn0ySLqpHQ0RggAqcZIEjYb1U8pi+o96gLi
MVS1HdFkmarZJlsovXnw+hl80l7OOaXJNlWdvkhsuy5Mkxxt8paTc4kd8l9OhevXnhFxouLe9V0qVjnvvFwfFC3Iaqv/lYEp4PJQ/90jqffPEUWpUuRVd86Mef+ZwyxcFfV4cw727mO5n8sd9b+MC9DabE/XV7/hRavB8h/okOLde53H7jz1+54sFymYlnecNMLAfxkCmoYduUI+1RipDDQVRh5TxJwkgQWGs8je08MHwaJdDcTqxdIOLQz9XqoautlIgRkgbCDzSzrTSIEZM0wGQSx+ywIAVlG7AxIEVVkqlzFN+ehGICCrId8RtAEKULXgNmEILhkS0HWRVZxl6NACrI6sgykZgFTLlobuK3NIgiy/vGA1McsfroFZW1vGR+ebKK4sG6BGckEIIyJ6taYkTAHnSJDmO5uaSQjQE6SIUx3wzSECZC4MFPO5BARh56lwXrPAO4JzAuwDC530/oxZazczVAYHbN0Ki5hKBRPAkHwcjdD4XzykCKWuxmKpKXAQzEARVG45IMmCCVmKIKWvEuilrtdFFFLwV2OAimKrCUBqVnAlHMfA7e1WQRRRKvzUx+z+Omep6mPfQ9ONlFcWPdETSQTgDAmV/fETCTMUcvdrr1S2+DYg2ICe6U2eS6H8cC1V2hTEHkgTMB7MD8j5W63Y7FK7rUQG7fizVWTuhyULIo3xQyH3eHnuuPejLZVwLlq1mcAUfDUgqumheYgxkwcuGreaA5gVFWHq+aFQGECIw9QTQJhggQGvKrpHVCIwABYNXmD8V+QyhxXzdWAPBgEYk+1CocAsVlENXcw3gRzMe9cLeo64C53jDI8iLjsaed30hYZzwIocUU7AUS0AXjLhaedNCIaxSzk2kkkIuRYPQqedp4JbAOMJ3IPKgpR8af1/BNj5X4PlVTubWhbud9DZZESiIKX+31UFikPMWa530cVAeUBRi33+9pZo7kwAZEE+Kia4OAggQEvqiKoIERgAIyqBw72XxCe5qPqgQo8GAViVE1wCMRmEdXcwTliuV+e4WHEZe38brTyjrwFQOJKoJ0AItoAvNwfaCeNiEYxC7l2EokIOVa5P3g70uP5mb1ZC/SgokbK/d7pCzKML98eqCaV57ra3ma2lfcDlMVXJBAEL+cHKAuwyDslRvxDWW9FgUtiAIqy/Iq8Q4IkRaFqgjaeS6LW0kKU1VYkIDULmGpyZCIsmkUQZbUVeQQhniMhyuqRQ6KgWQSV85UmmRZkwYXfRaZDz2fRkOnz5/ssVnYeal47BcoEqN0ioea1U6CMgNjuEGrW3aAMgNkQEWpeSgXKBKAiD3WUEztgI0D2rVDn/a332C+0pV67zD/qC22p04P6QigixidAUAeE4x5sZpkiQh0QliuDILYiQh0Qme4Mp0RIFakDItKpcEkMQEEkujMcEiVrAxHpznBJUEWEOiAynQykRgEjIKrcoLBoFkEQEe4MBCGeI6QHMQHFDyQ1JMr5ylgFmAE+i5WdE80vUYAyAagiQuuayySMAKiIUGKtLqjiLsAwgbWqoApijJISWqsMDg9Euk2Qb6ZJIVDs9/2cw7X6NVmI4oj/AQ==</diagram><diagram id="LQEk8jZrhSROWHPoe9Qp" name="archtecture">7Vxbc6JKEP41PmoBw/VxvSWbTdxsefZE9+UUwQmSIGNg1JhffwYZwmVQxihKsrFS6lxo5euve7p7xjRAZ/Zy4Zvz6Q2aQLchCZOXBug2JElUJJG8hD3rqAfoctRh+86ETko6hs4rpJ0C7V04ExhkJmKEXOzMs50W8jxo4Uyf6ftolZ32gNzsp85NGzIdQ8t02d47Z4KnUa+hC0n/JXTsKf1k1aADMzOeSyUEU3OCVlHXZg7oNUDHRwhH72YvHeiG2MWwBKNftj0aN28XT6au9l6lywA3I+n9fS55uwMfevjIokEke2m6C4oXvVm8jgG0fbSYc34F+lWX0MfwpUi95n0sNoGQUA+iGcT+msyLr4rps47ZRL/pKtGhKNE505T+REluyZShJqWO/SY+wYe8oRDtAZdcjhYBy5vAUIjYAO3V1MFwODetcHRFDIz0TfHMpcMTM5i+zeWFGGzBmMWyHCvhcKCc8RA/Tke3/65/LmT7z7PWaS9joFK4wAmxStpEPp4iG3mm20t62wlyAmklc64RmlOIHiHGa+pizAVGWTThi4NH4eUthbbGqZHuC5W8aazjhkdud5RujBMJYTO5bNOKr3tAHqZfRNQ3o7Hre4cqA7TwLbhjIqUZNn0b7hJI5YVY7ySGD10TO8ushzy6tUiMtdya2JqSrhvo245nH2Y8PsLkJpBHmk1DqMSA6GhTyjojWW2pqpx6MK7JUFhrk8Kr0o+qvFSRT1dd8i3a9z55Z+MNVvme7sU/bGfbRdYTz9WNDmgYmniYRh8c1+0gF/mba0FP7/f6BIh2gH30BFMjqqXD+4cqVa7klh/QAjJIPViVqwUO1iCXVWVcHGvRWVZuOV6p91+5j7AaFWOl1GDdflNYTdbt4i+p1tNn7wHedp8tKiR3UeNnPp+ttVRDAoIWP1dmztr5nLZWjdeuTLWsb06plTxrfL5ZaqX0KlWmWL3c95Dg8VuY6ZKW5ZpB4Fh5q0lHxu8Jc6MVNM5/xUzA+g4zLA9YqR8pjURTSlIKlBT3cQes9BNukUNuJMkm85QRhZaWNmwtKzG6PyoknVPn5ObEqjm3HMXrjJgNj94wOIBaRk1DACUP9/lDACDUIQQwtiBcpxAAiAxS9QgB+MH7mCEAYPPlU4UAB9rCoRHA3pr9UBEA4KixfpwIwOCMAGI38hUBVBgBgKKs4TBqZeqYBaSh1BNTxEtoWFZjTcqq4/TYlhrrfmFLeUxKsSmtotYseJVAjrqq8j6uSprSUnaLqpqvx0+GPjFfNU6+1szVAg3sJhkvX2XROB1fr56vxNXc/f34KI6h/d/34HJ809RZdtZ9G2s/yu7exmLN6cgUlwVel8zJ8NNsbMlsMhlgOGPIcprtrMaewbOsZm1KEVpsJqRorH+QlMPxLDQzg0HuhGa2z8pwbnPhjWBigtbFXtjM1pmFx3YOMpiq7KOZP3oSRy2cy+ch8K2/z/trSxkMl4Or38qF/DBSr5tsQWaIN+jFZbUUguQecRYm03Xs0KlYBCZIMvR2iIRjme43OjBzJpPIpGDgvNIaX8j2ebisbm5GaTeUbiiLWFGQJ3yc+XvIg7liAO2qzJOBbKQhxQWfksT/GLWzQk2xPI81xR6I+as0pehCvTTF1mRiTbEZyV+lKUnX6qUpjr37E0QLR17JwdEX6OIUSpPztYNclXNLAkUycHOdmkZpu/1zmBoFyCk+kvje7Ozy+WIwuF+YwvC5bWi9H/2f1rpZtKUjh39K58E3rYbWvmxoXfKihy9KBzszSO5BSMbvCselDpXzIRxBOiYVsnlElVmEJGb9hMbW3CWhIj9RyAY2StrCBrmEDbnxLzKUk0ET6sUFNg7bwgVRLSFDfoL8RYdyOgBFL+WDeEo+FO3AFfIBSCV8yE/Qv/jAEf0Lytn4
UBhTspFDHP2zJ5hrqciqNCXK58vTOqP+jXYhvj4ur5ue/+uP+QODrZ78MmWDd6n34MscS5XclHO7KiKr5MJDR5X9XEhh9FXTAnA1+yz7Kbo8uYzmlW+q8O4bnujnQp/pnAz3Jm+99m5lIVf9EcFRjsmAnFzeOsQ7SgV3y9Fg8OQNbvzuxAi669W42SssFeSolTopy30ClpcI27NoJYfL+U/KShxYHfOk7E591fmgrMRWHE59UPZA7DjPyaoMJc/7U5mimPCTnZM9jmLLjskyeq3klCxpJv8kIXLoyX+aAL3/AQ==</diagram><diagram id="Vp5To2aVp-LTbkzypae7" name="block">7Vttd6o4EP41fqyH1wAfW6u9L92e7vae7fZ+2ZNKFLZIPCFW7a/fRAIkoBQrovfFDz1mmAwkM88zk8H2zMFsdUPgPPgD+yjqGZq/6pnXPcPwLMD+csE6FegmMFLJlIS+kBWCh/ANCaEmpIvQR4miSDGOaDhXhWMcx2hMFRkkBC9VtQmO1LvO4RRVBA9jGFWlj6FPA7EuVyvkn1A4DcSdgScuzGCmKwRJAH28lETmsGcOCMY0/TZbDVDE9y7bFvT9bjD+5vx9QVb665/fv0yCJ3qRGhvtMyVfAUExbde0baW2X2G0EPslFkvX2QYin+2nGGJCAzzFMYyGhfSK4EXsI34fjY0KnVuM50yoM+F/iNK1CA64oJiJAjqLxFUU+5fc1WwY4xilklEYRcJkQgl+yd3HZ0xwTIU53WXjhlsktjLBCzJGdftii0iFZIrqDNpeqsg3SYo34YIbhGeIkjVTICiCNHxVgxKK2J7meoUD2Rfhw338aVf8eX1zcUkp25YQxxXfFp7jm7oMQooe5nCzNUtGB6qX3vfChLlsgCNMNtZNzxuxz97eeUWEolXtboqrjoBmxk1ALH9ZAN3QhE4ggdzUjrX/4CTwWYX0Hz69b4vRkzDGv1+v5ME6G8RsvdIkPnzK4cgGxbTNqJh3blB1mkLVPi+oOhWo3t51DNChOxoeFaC6qSI0rwokhLqdAtT9IRLeAegU90yxCAlVxuX4KJjDchXu6GumWc8fbHCPSMjcgoiQHRnpTmOkO20jXUy9xyFbSRHdpfxjZNVAZiJdkpgl12ZlQ7qtGrJs1VC65IqhDQTy9RyACu99VEjRy/Efsgr7Fj6j6B4n4aa2MK+fMaV4xhRgFE65YMwcz6PjKuKaV3D8Mt1AR6KgyeYjGb0UcymH0paQzctwHnE+TIIch+zKnD/sbDXlR5l+iBOnH7JzRdKHvn9wfO7BesDoqw7VrSrvZT6XeU+3d8fgQbwHtAa8V5DSOIJJEo5V0lJZT6IozzNlkmJkAqx3iYqNyvzRdu3QXvUuuc3ekq4yWet8UkqDjfkE6Gr42UYjPmHuh2tJbc4Vkt0PbGrbH3jXc5X1dcspRXb6BK2SG6ieic4x5XdeOAPQECVZVJ5J4Zw9t+TPz3+NRj9f7aypOcTyqinE7LJ2Bs5JcPMrH26B2xCjGSjOBaPVY9bPf7j19ApAOz3bggZV/Bkkuq7OtkeuHbMQ7/qQmR8lDj1k6qbbqCj8QB32ZlsPN/En/+ubvbyDXx5HTw84x9dpo7McFUVUga1R1DKht19LNYuaC107lrNr1/m7o3CkjoJx4oaC0wTOv1xDoTG8T9NQ0E3rg7mj1FBo2J/ct5+Q10ul5931WGV9r779UFY32u0+2Cv/8escfH4J/3W1Z9ObWXhxYZ+kBPvAmamMjC0vCzKA6n2thE/7I+hsO7U2fWfvdPZ6wFXDLfstTduoKb9kE6XpTjCXn0vVP04Xzql24Qb3wwo4fvAjoVFuiLrVNA26PBM61bfMv+uwVusw0F0htjXBGA3qsO6PVW0epLYuWzsZ2R/krfP4vd0euX7PJs2mLVNt/9Q0bt5rF50qblr/tVCzEt00PtrecZ2+4XrFB6g8ZQN2+Sj1x64l7CzDzVr949ThDfoRWZZZwvVmyTxYXxAdB1nkFn2izXVEhq+I70wKC7FNzJR91bOvS9WHjyZwEXGbBDFD8DnKAEAwhVQaz6UyOU2zUuF8lcRw/g2nO6rmbDW9NsjL+2TgzhKsXYoN3aikV2tLAVVuirZG2A3qp/Mh7MavtfZzn0yWdeA6OlcCr+/Jn3aY0zDqzBqe0Q1r6vWs+Y7+vqzJhsW/LqTqxf9/mMP/AQ==</diagram><diagram id="Br_lhAEokWQMlPkGPHZP" 
name="cuda">7Z1rc9u2EoZ/jWZ6PrhD8M6PjpMmnVyaNml7er50aImy1MqiKtGx3V9/SIukLoQoUgDIJXZnMpOIVmR79yG474sFMLJu7p/ersPV7GM8iRYj05g8jazXI9Nkgemmf2VXnvMrlmtur9yt55P82u7Cl/m/UX7RyK8+zCfR5uCNSRwvkvnq8OI4Xi6jcXJwLVyv48fDt03jxeF3XYV3UeXCl3G4qF79fT5JZturgW/srr+L5nez/Du7Qf6F+7B4b35hMwsn8ePeJevNyLpZx3Gy/df90020yIJXhOXzP++/rj7/79c3v/48nk7Zr++eP66uth/2Q5v/Uv4G62iZXPzRsfn+208f/vzj4epmYrDF9afgr/dXzPW2H/4tXDzkAct/2+S5iOA6flhOouxj2Mh69TibJ9GXVTjOvvqYQpNemyX3i/zL03iZ5BDYRvp6k6zjv8vIW9k75ovFTbyI1+nrZbxM3/kqXI/z/xOkrybhZlZ+u4a/ex6jb9E6iZ72Ep/H4m0U30fJ+jl9S/FVI/9Fc66vijQ/7iBhluNvL872CGGuk781zNG8Kz99F//0H3kKWqXDx5oOZhln8+Ez/3unmhDXVpcPF2k+rizPO5+P4j3Ss8H9JczzuUjH6FX2zyS8zS6l4Q3XRbyzcKcPmSScL6N1Hr5xvFiEq8385d3bd8zmi8mH8Dl+SIrPKV6pg989ZN+vhtoyjGqobV9VqK3moU7DkczDxS/psztc3r1EvRrVyTpefQ3Xd1GSX1jF82USrd98S0O5ya/tkz8yrUkY+dNxej2JV/k7FtG0+IDbOEni+/zFOo9J+ckv8XBepX/SCN0Y6ajhpB95k75mu9fpn+zt6+QmXqb3YcpF9hlRuEkeo01S3px7P5I79qPbaXMSzMYknEu1qyrTtlCm90Y23sCX13ThDoQ4jcV08VJGzeaTSbQ8m3e2yzvbzzvbyzvjEMVqE3g47rYYSa1LUuq2zGj+YbtIt/60cJHGYhkm0avs6bSpYFL+nJeT4xA5ish5Osyp7iA1qLF0BEkVNoxTqWnJTQPpSty04IZTdmrJDXMDAkUAFNNGAkoDM0a9CnIif2KDUkG+eWu5rgoV9Hyy8OlWFAXwniyHGEgdMMp8Xlza+o0zrPuQUcx6EDrS0UGmihjDSZIqbtDIItbCJydwSBftwPEMAkUAFDS6iIGYHhJAZajCiPcM61YZMYDzRYDGDEbTQ2UoAM4PDZMUbMIH4HzQoMHBo3wAzggNmxw80ocRKSKk4NE+ACeFcGifpl2R6rQP0mmhpoMGzQKVcCOdBZJPCjLtY9Kkj1xw0GifAhQiRxY5eLQPkSJEChrtY9K8Tz/ah0dYt9rHpHmf2ozSvE8ZCpr3kUQKNu1D8z5ywcGjfWjeRzI5eLSPRaSIkIJH+9C8Tz/ap9Q5/WkfmvepzSjN+5RLKWneRxIpyLSPRfM+csFBo30smveRTA4e7WMTKSKkoNE+FoR5n+k0cscXbpmhSPtMvODWMFRqn953QrDgzfuIgHBu0GibUYvmfcpQwJv3GSgp2LQPvHmfYYODR/vAm/cZODl4tI9DpIiQgkf7QJj3wah97N73OrDgzfuAGjRo3qfcRB3evM9ASUGmfWx48z7DBgeN9rHhzfsMnBw82sclUkRIQaN97Kr2+XlkXadXPo1urFHgva6AlIYwOYQmXMzvUgRej6Msd+mFLNDzcbi4zr9wn0LyQto62sz/3QPpUL5kCiV8SOLN9jSnau7zw7I452cdbk77cu7W8TlcDWFpf46W6Ryeo8Wq7DCHk24ZZ8zxk9pC1ag5Rmsv9qZdSQ/jDBfhJJg4vJs9jGxmmQrz5/jsMH2BU8lfeQzXfv5KqSo9gY6Y2JDgSOS3VW870p8BqAUP5c3QSsNyE67MhnCAaYTLs3/06JaXxvKWICvCAaYLhk0LMjvCEWv0IXhQWxIOsF4hPejBYks4wPqHtKAHjVXhAGsi0oIe3tI7PekB1kikBz1oqmaxFhGih0cPr/1DT3rEukzIzrvMzjt91GS37l7xwTR4nL9DyN1zyQuWSAsyd88la1g+PGjcPZe8YQX0YHH3XPKG5dODxt1zyRuWTw8ad88lb1gBPWiqZvKG5dODxt1ze18+iNnd44mzju09YGsIAY4eLq0jLELhkRkskRZk9p5H3rB8eNDYe8UhUESPTHqw2HsemcPy6UFj73lkDsunB42955E5rIAeNFUzmcPy6UFj73li5jDZe2L2Hqe87tbe86j191xay1uE7D2PzGCJtCCz94pxjeCRCA8ae88nc1gBPVjsPZ/MYfn0oLH3fDKH5dODxt7zyRxWQA+aqpnMYfn06GjvPS5uzejVL7+t51HwZjNLI/PbtyujDStk5gmZebxySJmZx802Qzwy1OGPy7njRgLzQ0MQDX1tOm5gLCJFkBQdPTluZPo9flcLVDQ04LiR6fe0Mh1Q0dFt40am38MddEBFR2uNGxmPUBFFBUtZ6xMqgqjoaJrxLddqR+X7kekuMoNp87A6oMj95yEujKer7cku2fEytrF6eklF8fX0X3fZ31+LD0p/spfP2l7e/q/X20NpPlVQBXkoTVcn0HjFGXM1a2KZx0HHqwFRqGnOb9E0qeYEGlWxZmUg82B71dNiykJ0P9al0Sk/2L0vQM/O5Aojf7o776sHZ7tycpA79qPbaXMW/IvaGLnJVtaN6gPrL6xmXupJb2UKD5/ifouk0tLzIhQBsPZCndjR1wLnowSs17ArlJSBo6MjzifHJHLkkqOhQc4nR6zHkMrc9mXu6R3Uu616A2ANghkITuRPbDWDhm/eWq4rUrmU9wpVvQGw9kCd2MFW9QJbSN4VSsrAwVP1AtthdPjkoKl6ezd3D2bmWmEip+BVVtbyRp+O61qAbi6kYYG823KeqmhYIVREUUFWtDIDoFc7aHLQVK3MAGjWDhsdLGUrM3p3a/WtWzkQdVu3MgOpIdswpzv6qXJlBlIDVgEr6EpXMlwlo4OodiXLVTY7eIpXMl1VFa+8RXpdF6/kutaPDGS7lrFgZLvKYgVb8crId5WMDp7ilZHxKpsdNMUrI+dVVfHK2zag4+KVkfNaOzIwcl53sSDnVRYr6IpXcl4lo4OoeCXnVTY7eIrX3p3XNKPTaeSOL1wQqGiR18QLbo0We4hcUNv2vsqLMXjGrAgK5waOtjnd3RxU25ZPU2JFmBVsta0Jz5gdODp4atsCFWJHGjtoaluzd2MWa23L25ix49rWhOfbgho4TPJtd7GA59sOlRV0tS0833bg6CCqbeH5tkNnR8PaNjbff/vpw59/PFzdTAy2uP4U/PX+qoVrO6ztfq8sK/9Viu1+qxktuwKkb/fLjbWYSSppG7TD+RQI26DVTMPUItuq3uSmWoZo4P6IWJtPj7bsb57T5icS6j4gY20+7QAdfSUFnyR4jmdPs/lyuNFRT/DBAXYm/ODBwSImWL+nJg6dEx2PweNzImaTkhJqrYRO7wfdsTCC53LCrm7LW4WUEdbu1C7YwSaNxJpVB4uSMnDwaCN4na4DJwe
LOCruEALlMlDQqCPBLlclx+VA6DarOS5HjjriPcS6lUdFpiE9Xfo5Y6v+ziAxJNiSSqRglT4A21mHDQ4a6QOwuXXg5KCRPi6BIgIKHukjZvEqmRiCIH2UTwzxhqKOpQ9ZsrWDRvOlU9oPEmTBSiIFmfSxkHblKgMHjfSxkDblqiMHi/SxCBQhUNBIH4v2GOBKn5rFfnKkD4+wbqWPBc+S7Wn5Zv2dQdLHgmfBDpQUbNIHXt/tsMHBI33gdd0OnBw00scnUERAwSN9AG6MAEH6KJ/14Z2K0a30KRAH9HSB5JeUdwZJH5ssWEmkIJM+tkngSAUHjfSxAfbYDpscLNLHpo0QhEBBI31sgDsh4JA+vW+FYMOzZCENGjZtfFCGAp4FO1BSsEkf6qmVCw4e6UM9tpLJwSJ9HINAEQEFjfRxIGxzgLHhjXemTrfSx4FnyUKaKi7vDJI+DjwLdqCkIJM+DvXUygUHjfRxqMdWMjlopE/v2xwMGxQ80qeFobuZZ7/G9XqdJZ5Lyh5IYfa2YqdY43u7uFSc1vRypZo/OScuFb9Esa7GdyrZ9DtVGS3cT/lRnszXKYPzeJl9evyQvVdZ5L0irsU238WKojPoKzvpygFmHy7jZVSOisZuVDT2R0Vjb1Tk2QfZjzONl8XRZ6ZdjpLl3symwiT75mGSTat6f2k5WhbfkmCSCJOPFSZgbaRawHQ8MgVYYALmY2oB0/HIhAYmYFanFjAdjUyWgwUmYPanFjD5WGEC5ohqAdPRyGQzLDABa3vVAiYfKUxBVc29Ht1Yo8D7vQJVGsvkyMdbzO8ye24cZclML2QRn4/DxXX+hfsUmBfq1tFm/u8eVIf9H1mLR/iQxJtt9qvuec4YB7s9aGy+ty7JISxuu5wQzw8qhDDGSapTg4iQRRg00E536/hhJRgS0zgcE9JrRqtQ+TYnVCYnVOX0hfRY+cCkgdwNtc8MnJW76YItt0vYqKPGB6YMdGYJWc+ND0wn6IzW6QN99UQLmGrQGa2jUQsbaQAX2+lKWs25e3qyBXA9nq5s8UcxPKgB68DRGbWaM3S0ZCsA1pCjM1snhjE0qFWdr3dbg/d1BTnUBu8xIb7FcS15i/zUGbwtjKZht9+awZFU6rv9tkx0/7FfxmulsWemdRz8KvgdB79Fi2H+mEuHk/Q5WQRx+xBMwvkyG7Re4l5d7TyezReTD+FzynbxOcUrdcF22dEoU30MlbMg+8G2FUbbbB7trnbQajWNDGQHrR22reZIuOlWtqSDGQBnwrrYbqJahDZOa/NaU/dikhkA5760oQfZZBczAM52Adr55gJ00JhzzAA4nTVwdrA4IswQm0SggveCgvf0HGjX9S9AWx94BdN8fyzth45CvRM9CujBVv8yYCtpwdUwrdHBU/8yMdOO2EFc/zIxE0dS/Ssw36yo/m07CX1B/csboDougBlA10Ve74F4Usu7g8pdxgC6LANlBV1xq3UXcB/oICpute7y7YUdNMVtcZNAMnchFLcdmLs8yDoubk2yVGqTWt4dVNyyAg1iRZgVbMWtibSJSh06eIpbE2kLlUJ28BS3Yn4Mnfh0eXHL21q/6+IWnqUC6biE3d1BxS0z4VkoQ2UFXXELr0Nq4OjgKW6LpWTEjjR20BS3lpgfQ87t5cVteXhpf8WtRZZKbVLLu4OKW2aRhSKLFWzFrQWw+2nY6CAqbgF2Qw2cHTzFLcA1Z1iK2/4XnVnwLBVYIwctMStjUZBJrAizgq24tan7STI6eIpbW8x9I3YQF7c2hAVlONsS7P4XlNnwLBVQcz42LSjbxQKehTJUVtAVt9T9JBkdRMUtdUPJZgdNcVts773HzueRdZ1e+bTdR/1jBSXU+6gztzjVueicrZJSBrWTfdSZ08IXkb+Zt7JA255xGGi/emit36kUcKomwjs6TZb/6D28Sfyi67K342SZ06ADp5fzZJl5vGl60PBA2fKi/LMZ4IleUEvUS5hI9AbwNO9QUUGmeQMAklcCIX9HyXiWfwsFUCA7XjMAoGXhQ4H7CFbmVhXPd+F/KpzgrsEP2XAawqGuAHerBfh3Y0raYdIcI/jedKBlroEa6Ec6HVkEgVFVThbvUDspyon/uwmuL1Sy/aixe5QZ+48yY+9RVr1fet5+tKQOxnlT/GwDXBHYk/o5czegEsonWAHYvzZQVvRVyifQqTrPhI4QOjrODp9gB+DS04Gzo+Hs8Al2eml97K1oVVCa9nAy1IlUApjSEb/dm97UFsquxBOxADBDAzLz6KpILSZc1IGAqCbUYpZFIQloKjwPwJgAQx0Il3pPR3Wd/uxoOopIJ0Hf+Vnb+8P7+OXt17feT3/ejd/9+MP0x7urBj7ndrro9JrINisa1c4xea57kERmBpyOX157Xpl+ETHHDTBAN1DeYta9uVZzh8L+4axt4KgFFJU45EYCoDeoKUn6ik1uYLTeww4UWMiKCwA2JhKw0DQZcuMEwDRFwlnN4cFakgXQcdGULP4IhgY0APYMEtBqjojUkiytN3wERdaJIQwLaKyB1TUcL7Gy1JdZTjWR6tb68kMMwANS4eLX80TWHwNg0UDMOzKjjgEwVCQ8syWM/aLgIDNIGACHRA9wcBttrOqHfHdL60IPJ6H9o7rRKTZ96mBRKD9rDcwFEGtCuSW2ukWh/GD1siYUUBt9PUIwFnjyOy0MHR5yTROCcrEmP+8A1mpCzDsycVRknTBA0zDP5wCVR0YLIk9ygGs9pDpx3Lz062EBJT/15JPVZdREueCSHwoyxiSRgq3aBNAZpBU4eOpTAK0+epGDpqIF0MrTnbJpfcqdjos5+QuBtDA6IQgWPIs/7z/99vs3Y7mI/M+bL0+L+fXzf/+54nD0ckRGhSboE3q+ygm9clPXmp1LeQllRg0fTfUtN28cw5vydjZvzHL8nhPHsajz4ZuyVu6FbR7fbs1GY3VZ4xjKlLXjrBX7vNdN5ctKW/pyHcfJ/sMwrV1mH+NJlL3j/w==</diagram><diagram id="s7bvS1Xq0EYsAQhM5kdv" 
name="cuda_back">7Z1bd5peE8Y/jatX6eIsXiamTdokPaX/nm66UFB5i2KQNIdP/4ICYhitCBu2e2atrq6IxOjMj82zH/fM7qj96eNFYM0nN77teB1Fsh876nlHUXqaEf0fH3haHTC6yurAOHDt1SF5feDWfXaSg1Jy9N61ncXGiaHve6E73zw49GczZxhuHLOCwH/YPG3ke5t/dW6NncKB26HlFY9+d+1wknwsU1ofv3Tc8ST5y0YveWJqpecmBxYTy/YfcofUNx21H/h+uPpp+th3vDh0aVj0++83i2/Xv88Wb9Wfrnxxeu1oJ6sXe1vmV7JPEDiz8OCXPjFUr/dLu7oa3NknT+PLX9bJ84nSW732X8u7T+KVfNjwKQ1g9Lnn8Y+hNYgPnS1CKwiTPEvR4yhxoeXOnCB6LC8fe541X7jLs1dnTFzPvrae/PswfZ300dnijxMOJ8mJI9fzbpO/bN2H/vKvBf6fLHNyclLf9/xg+fbU0cgxhsPszNwzdrc3kOKX3TN+SZz/OkHoPObgSeJ54fhTJwyeolPSZzVdfq2vfiu5Oro9Mz3ysKZN0RKGJjnSVCk5aCWEj7M/sE5j9EOSyRJZVaX9sxpFJXQt70t06Vmz8TLBxQTagT//agVjJ0wOzH13FjrBm79RRBe55KXRn/mz+JVCf5486Tmj9HcHfhj60+RBkMQje9FlLPSz6F/0IfpSFEw9erv96LG8fhz9i08Pwr4/i9Ie0Re/hmMtwgdncRhVRUZ2XzD/huRf+TeYpV+ulP5JOPWSoDxM3NC5nVvD+NSH6Pawutzjcdpa0+FHwRh5y6Fx4tq2M6sNhhxhMsusZhdMqawaJZOavNg61qVfzfKikMys0Dnz72f2okBK9j4rwKMQPMzgedxMq/AsqcQSa5ZkBQtMGsHEHCYTC0w6KeT2FfLTZv7bE8wGDS0lhxZ97yQLP5R0CR5m8GATzCaxxJolPIK5hLdLMB0IExbBnOoxEswcCGZoCGtWMWtkMZfMcnYBkWLWyGJmBw8yxayRxcycJTSKWSOLmT1MaBQzWcwcKWaAuoYVM3nMZccW8pizUJDHzA4ebIqZPGbmLOFRzOQxs4cJi2LWyWPmRzFn6rg1xayTx1wyy9kFRIpZJ4+ZHTzIFLNOHjNzltAoZp08ZvYwYVHMJQYmTqs3bd0xbS07M/eMqQxUwwAxqat6s2u+qN40FQ6qN6sNEDQL+vflwnHtZrUvjRDeHbRDcirkzYC+YGKGDrIJD33bxJokNNMd+rKJOUpYJjvVvmoiXVyHLualYjNtLkXjyr45PqhvjZDjiEzfKzFjB5lQlulbJtYooVHKMn3LxJwlLFJZJg+ZG63cfrGmTKZyySTL5CpnoSBbmR082OQyGcvMWcKjl8laZg8TGsFM5jI/grn1Ws10CKWhZe+hhezlDB6yl9nBg0wwp+gQS+xYQiOYFTKY2cOERTAr5DBzI5jbL9VUyGEuW5BLDnMWCnKY2cGDTTCTw8ycJTyCmRxm9jAJKJjPLj68+fzl6mr60Jftu/cX14+eftIrARPjWk04oTUVVaYrc9OKSrWYX80EUlJLPSUcefL2d2d+N6+lBCqY2jqmHeB7lFNmeBmh44JmyzFHwyzb8jrbcj7bci7b8j7DdKE+2hiazmAED+D75hila78FJc5s+3xtPGuUdpTaV+VG3PnHFow4c+zFwkjEqccWjjgz6wXjCMusQ5Y4azckFkeQ8y4oR5w5/GJxlE1+xeeIM7M/Su9o5BjDJmZsdrc3kCSmHOHR2ZwZ/WJxpKHR2RVLEsgcPLxjRMNeIW/1AkfmFa6vFDIL5YrlAqjUcBlwsLmFvBUaCMYRHruQt6ID0UBC4xem3BBITEDCYxjyVm0gGEh4HMOKlQaonJ5DQMIjtvn7CkMkkPB4hhULFsgzrNA5q2HTkLd6gmMzDbNLhUxDmbdyAq4VcQlw0JmGZD6z5AiPaZiWZhBIbEBCYxqq5D6zBAmPaaiS+8wSJDymocqf+8yv13MASGjEtkruM0uQ8JiGajX3mUzDCt1DGzYNVf7WuB+VaZhdKmQayiqHBjS3irgEONhMQ5XMZ5YcITINyX1mChIa0zA1tQgkJiDhMQ01cp9ZgoTHNNT4c5/59XoOAAmN2NbIfWYJEh7TUKo2ayPTsEIH9aZbGXI4rzom01A6qF+lmMOGTFMrJuBgMw1lmlmx5AiPaSjzN7MSCiQ0pqHM38xKJJDwmIYyf+t6RAIJj2koU1UpU5DwiG3+Vg2JBJKIpqH5bfp3+PPuh9S3fv/38fPlz8kH9aTE3J/NPjPFHNaztUy2lUxi2nX33fYwc/KquHZgrKvNj2uwZ4suXRse7VY/r4jCTmRLWXfMdrgE3yKHE9ga3dmtCdxnJ7CdVwYqNxaMBIczVkHIEdeOBQPD4Yy1CZBYYSOi+wpGhr8J6nFzI6DZCkam9S5H2MTt9lbpzWpdsUtOtrqhBysWlD2OwEiIXWTSJjnItC6Hi6GaAIkVNmi0ruArn5oHB4vYrbjUqe6VtqUoqUfospKz0NjTrJ4VfPVR1UFBJq82CwWHZu1xkoJMrwq+nKh5cPAoVg7t2eMmB41kbd2gFVayAgg1LFmRerD7jgpkuWahQOq51k8KNslKHmu94KCRrILvWtoCOVgka8VtSkmybpWsUOVWwytkyWXdlVGFXNYsFOSy1kQKMsnK4Vagxw0OHslKLmvN5KCRrOSyMpKsUI14w5KVXNadowK5rFkoyGWtiRRskpVc1nrBQSNZBd+dswVysEjWittx1lO5VaV3CKPKrR1dRmpStK2XbnG4gWadTWSqZlQlEzYLBX8m7JGSgkzRCr4jZvPg4FG0/JmwR04OGkXbugmLVNFCXfMaVrT8ebRcDRrk0Wah4M+jPVJSsCla/jza4wYHjaLlcA/LIydHQEXrqe96Y8uXpNupdPr9x+VYt06BLr2fX3UUw1vKvPt59OM4/vFreiz6I7nDHfU0+uXzTl/t9LofCshFwQ438bI8dxzBcj504ixHB+KUuEPLO02emEY4LZkMnIX7nENuU77GCjXemGmxahZcpCRZrACsX9hskqMtBbY/S9sOazBW9XQOVrRk5pl2Dk6XA+W4kFlt9wWmv+jQXwSWvQZgkGb606sNAgabBHxYEXBzHAQ0le6XjaINvZhucBLTZZXuEq48m57cG+3XgV3Wih2stjXiK9MPr550yrpkvt68gE1Dfr1nTrNVRrUntZqBLtLejHtDVURk5+XCRz9w8C1yZoIfjsKufRrqzKl6SE6FlH6cueAioSOucQEGhrMdYwQkSUQnA4wMZy67iChhsTZoz/L2dXELrcTBd8yZs87/sIJyF3PYIuPMWxeJHWQ6mbd9zAVECY1Q5q2ZsIgsYVHKFdsNk1SuUSo32qYcpoE85ZJJlslUzkJBrjI7eLDJZfKVmbOERy+Ts8weJjSCmbxlfgRzk03SYRrIXS47tJC9nPFO9jI7eJAJ5optjYklEsw5mMhgZg8TFsFcsdUyCeYaBXOjLdrht0wOc9kkk8OchYIcZnbwYB
PM5DAzZwmPYCaHmT1MAgrm6Y/zb8+j939G9u1/00+Bad7dBkBd9laW2FRq5opkI7m2R2XmyLJ7tp6dmXvGcjRZVUAsaqrM7PU2K21N4AamQYXVtVRlgglsvfdp2xOefwAE87DzWiilfcF01zGjAd+iqF5JfUlE2fEUjARnDU+PmhVx5ytgYET1TVpER8TpCRgZUW2TNtnBMhsR1SZpkR3IhReSHVFdkRbZgXYsE5IdzjqpCsEOFq0s6qK8FtmBunALyU7Fmm9y7w5y77Y3i2jWzBO2TLvGoeMgT1bMsYK83xppQWboVay8JnhQW3rCVmq3Sg8WU0/YUu026UFj6wlbnN0mPWiMPWGrsVulB41qJl+4fnrwmHvVnGEy96qYe9DcrOGletW8XRSDR2/vtIo+WAhbUt0KLcjcvZQVgqdGeNC4exVrpoke1O6esDXWbdKDxt0Ttsi6TXrQuHvCllW3Sg8a1UzecP30iOjuvX3WbgYfb2dy/+etMnnnv5///pwuaiQvrwEvD9DSzLw8MNmYB4Zd9OMy7sBIqITGoWiI69KBgdGIlIqkiGjJgZHRCZWqqAjov4GRMQiViqiIaLaBkekSKhVREdFZAyNjEipVUcEia3uESkVURPTMYMcV/XY2LbpokMxpeEUclcr/K6sKyg1r4PZ1tH6yRlrE9dpgeGg5Zf3wiGi/wfQoRE/99AjoyMH00HrK+ukR0aSD6aH1lPXTI6JvB9ND6ykZ0INGNdN6yvrpEdHdOzFUr/dLu7oa3NknT+PLX9bJM7CVyEVg2R3F8GLna3E/iH4cxz9eZcfm1myDLuPu3k9tspPFMmGn0QmaNH9cP5m+zNf0ZaJ3u3ql1fH84fUfXb3SeaevdnrdDwWso2SGmwhbnjuOgDwfOjFO0YE45e7Q8k6TJ6YRskvuAyd6qzmsN/3C2BKMN7BZfZzVRjcbO5cklAPg57DVJJDRevY1UeS0jDvxCrvdIrOKDmDW3QHtvl4hCBM0A1ulNQ7K4dBsQ/LVFmpqInX55MUKvoTBm3/ju/qo6WEiNjeZ1BTztb7BrAEwqyqMmP3048385uLqYSip/mj24+nd9INdZi+lJu6eHUW1LcccDbNbqLy+hcr5W6icu4XKW26hG5tzFbZeMoamMxh1wG27WDGQJTddJqxpunB3WhA0zsxNdKDpEhLQOPNB8YFmIAGNM8sUHWiGggQ0ztxVfKCZSECrZsTWsNSpiFYL6522Q8iKOFN7eQ/Vi8Rl3ybVve4JRIEzV7XmMWdrhvfZrnnnpYNqDRQYCc4q1AUiR9z1UGBgOGtk2hRIrLARcSUUGBnOVuwePTcCroG6vp3ePb+/8Ebawu1+Gf/9LF2dQlb48kuPAj28f4thsvwWQzdfalV1v9mRLNWgVT9ezvry9Ps34+v7/33ufZlfP+gWlLjksqesZYO/zl3aSnwjkIy3UVijATuJl7QajcNoAhcnbxn24hR0OHE9+9p68u/D9HXSR5vWA2AwFCXdtnG9+SmjrPVefpMEJBScM6rMMlrNehegVKo8UUU+dl4rpaaXzBwD8C1yZofXtg6LWU6VQ3J67MoLjARnBrdI6IjrGYCBEXUpMT8kiWgjgJHhzAEXESUBnQUwMtUscdLFdeji7RtoNyuTOXO1+R9WununWPRhhDNjWyR0kMnkbAc8QokVSmh0sszZym4RWcIilGVykLlRytAA1qxUlslSLptk8pSzUJCpzA4ebHKZbGXmLOHRy2Qss4cJjWAma5kfwQxA17BgJnO5bJLJXc5CQfYyO3iQCeZUyxFL7FhCI5gVMpjZw4RFMKfskGBuXzBD/XGbFcwKOcxlF3mRw5yFghxmdvBgE8zkMDNnCY9gJoeZPUwCCubRtfXc/X5iXH4Jr+zwm6Z4/lmZBqELN/4Up0EQowCyky++jU9LUyW91tJDaXXn8ojtBlFwXD8uxl349/G5zbV9UoDmrFBSzBrUKRj6EvPe4w59T30Z+j2vLmahLzFLZBv6WTSTYxl6uVDP33rsS8zJ2MY+njSzDL35oia/B6hdVpsdgpEvTmjWvedz7cf3bCm+bAv+oqu4u61l+YsE896sgW1r+56xeUmqvSIYWfv7RsgoTk+YI5E2nCcycjdKSX9JRrHpQ7NkFOca55Q68EbbfXGjVYEbbdZtJZ87vXzuooeBH19m66lAdD+c3Pi2E5/xfw==</diagram></mxfile>
|
2203.03937/main_diagram/main_diagram.pdf
ADDED
|
Binary file (8.21 kB). View file
|
|
|
2203.03937/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,65 @@
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Recently, Transformer has shown great potential for various vision tasks [@vit; @touvron2020deit; @liu2021swin; @cswin]. The pioneering Vision Transformer (ViT) [@vit] stacked multiple Transformer blocks to process sequences of non-overlapping image patches (i.e., visual tokens) for image classification. However, the global self-attention in Transformer makes each query attend to all keys, which has quadratic complexity in the sequence length and results in expensive computation and memory usage, especially for high-resolution images.
|
| 4 |
+
|
| 5 |
+
<figure id="fig:rel" data-latex-placement="t">
|
| 6 |
+
<img src="./images/rel_attn/attn-all.jpg" />
|
| 7 |
+
<figcaption>Illustration of our Dynamic Group Attention in comparison with other attention mechanisms in Transformer backbones. (a) Global self-attention: each query attends to all keys/values. (b) <span class="math inline">∼</span> (e) Window-based attention: each query attends to keys/values within a fixed window. (f) Dynamic group attention: all queries are dynamically divided into several groups, and each query attends to its relevant keys/values only. </figcaption>
|
| 8 |
+
</figure>
|
| 9 |
+
|
| 10 |
+
To improve the efficiency of global self-attention, state-of-the-art methods [@liu2021swin; @shuffle; @cswin; @msg] focused on dividing the global image into multiple local regions (or windows), so that each query only attends to a few keys within manually designed local regions. For example, Swin Transformer [@liu2021swin] computed self-attention within each local window and employed a shifted-window mechanism to make cross-window connections (Figure [1](#fig:rel){reference-type="ref" reference="fig:rel"} (b)). Different from Swin Transformer, CSwin [@cswin] proposed a cross-shaped window self-attention that computes attention in horizontal and vertical stripes in parallel, which together form a cross-shaped window (Figure [1](#fig:rel){reference-type="ref" reference="fig:rel"} (c)). Shuffle Transformer [@shuffle] presented a spatial shuffle operation to make information flow across windows (Figure [1](#fig:rel){reference-type="ref" reference="fig:rel"} (d)). MSG-Transformer [@msg] computed attention in local regular windows and used additional MSG tokens to build connections between them (Figure [1](#fig:rel){reference-type="ref" reference="fig:rel"} (e)). These window-based methods achieved excellent performance and were superior to their CNN counterparts; however, they rely on hand-crafted window partition mechanisms. Furthermore, these partitions are data-agnostic and ignore the input content; as a result, a query may attend to irrelevant keys/values.
|
| 11 |
+
|
| 12 |
+
To address the issues mentioned above, a natural idea is to dynamically select relevant keys/values for each query. However, doing so for every query individually leads to unreasonably high memory usage and computational complexity. We therefore propose dynamic group attention (DG-Attention), which dynamically divides all queries into multiple groups and selects the most relevant keys/values for each group. Specifically, the input visual tokens (or feature vectors) are divided adaptively into multiple groups according to their similarity to a set of cluster centroids, so the partition mechanism adapts to the input image. Then, we use the cluster centroid of each group to select the most relevant subset of keys/values from the whole key/value set, and self-attention is conducted within each group. This enables our model to focus on relevant keys/values without any spatial constraint. Moreover, thanks to the dynamic grouping, our DG-Attention does not incur high memory usage or a large computation cost. For example, as shown in Figure [1](#fig:rel){reference-type="ref" reference="fig:rel"} (f), the red point (query) can attend to its relevant region denoted with red solid boundaries, and the blue point can attend to the regions with blue solid boundaries. Benefiting from this data-dependent and flexible grouping mechanism, our DG-Attention is superior to the window-based self-attention mechanisms illustrated in Figure [1](#fig:rel){reference-type="ref" reference="fig:rel"}.
|
| 13 |
+
|
| 14 |
+
Based on the proposed DG-Attention, we design a general vision transformer backbone for image classification, named Dynamic Group Transformer (DGT). We scale our approach up to obtain a family of models, including DGT-T (24M), DGT-S (52M), and DGT-B (90M), which achieve significantly better performance than previous methods. Our DGT-T achieves a Top-1 classification accuracy of 83.8% on ImageNet-1K, 50.2% mIoU on ADE20K for semantic segmentation, 47.7% box mAP for object detection, and 43.0% mask mAP on COCO for instance segmentation, outperforming state-of-the-art methods. Furthermore, our largest variant, DGT-B, is also superior to previous methods, achieving 85.0% Top-1 accuracy on ImageNet-1K, 51.2% mIoU on ADE20K, and 49.1% box mAP and 44.1% mask mAP on the COCO dataset.
|
| 15 |
+
|
| 16 |
+
# Method
|
| 17 |
+
|
| 18 |
+
In this section, we first introduce our Dynamic Group Attention (DG-Attention). Then, we present the composition of the Dynamic Group Transformer block. Finally, we describe the overall architecture and variant configurations of our Dynamic Group Transformer (DGT) backbone.
|
| 19 |
+
|
| 20 |
+
To make each query attend to relevant keys/values, we propose a Dynamic Group Attention (DG-Attention). It dynamically divides all queries into multiple groups and selects the most relevant keys/values for each group to compute the self-attention. As shown in Figure [2](#fig:method){reference-type="ref" reference="fig:method"}(c), given an input feature map $X\in \mathcal{R}^{H\times W \times C}$ ($C$ is the channel number, and $H$ and $W$ denote the height and width, respectively), we first obtain query embeddings $\{X_Q^{i}\}_{i=1}^{L}$, key embeddings $\{X_K^{i}\}_{i=1}^{L}$, and value embeddings $\{X_V^{i}\}_{i=1}^{L}$, where $L= H\times W$. For simplicity, we assume there is only one head in the DG-Attention; it is straightforward to extend to the multi-head setting, where each head has its own queries, keys, values, and cluster centroids. Then, we use the k-means clustering algorithm to dynamically divide all queries into $G$ different query groups (clusters) $X_Q=\{ X_{Q_j} | X_{Q_j} \in \mathcal{R}^{N_j \times C} \}_{j=1}^{G}$, where $j$ is the group index and $N_j$ is the number of queries in the $j^{th}$ group. Meanwhile, we use a $top\mbox{-} k$ operation to find the $k$ most relevant keys and values for each query group, which are denoted as $X_K=\{ X_{K_j} | X_{K_j} \in \mathcal{R}^{k \times C} \}_{j=1}^{G}$ and $X_V=\{ X_{V_j} | X_{V_j} \in \mathcal{R}^{k \times C} \}_{j=1}^{G}$, respectively.
|
| 21 |
+
|
| 22 |
+
Specifically, for the $j^{th}$ query group, we compute the dot product between its cluster centroid $e_j$ and all keys $\{X_K^i\}_{i=1}^{L}$, and then select the $k$ most relevant elements according to the sorted dot products, which can be formulated as follows:
|
| 23 |
+
|
| 24 |
+
$$\begin{equation}
|
| 25 |
+
\begin{split}
|
| 26 |
+
& id_j=Top\mbox{-}k (e_j, \{X_K^{i}\}_{i=1}^{L} ) \in \{1,\dots,L \}^k, \\
|
| 27 |
+
& X_{K_j}=\{ X_K^{i} | i \in id_j \} \in \mathcal{R}^{k \times C}, \\
|
| 28 |
+
& X_{V_j}=\{ X_V^{i} | i \in id_j \} \in \mathcal{R}^{k \times C},
|
| 29 |
+
\end{split}
|
| 30 |
+
\end{equation}$$ where $Top\mbox{-}k$ is the function that returns the indices of the top $k$ values, and $id_j$ is an index vector. Then, self-attention is conducted within each group: $$\begin{equation}
|
| 31 |
+
Y_j = SA( X_{Q_j}, X_{K_j}, X_{V_j}) \in \mathcal{R}^{N_j \times C},
|
| 32 |
+
\end{equation}$$ where SA denotes self-attention and $Y_j$ is the updated output of the $j^{th}$ query group. Finally, $\{Y_j\}_{j=1}^{G}$ are scattered into the output $Y \in \mathcal{R}^{L \times C}$ according to their original spatial position indices.
|
| 33 |
+
|
| 34 |
+
As each group has a different number of queries, this algorithm cannot be implemented with standard batched matrix multiplication. We therefore implement it in CUDA; details can be found in the supplementary material.
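For concreteness, the following is a minimal single-head PyTorch sketch of the grouping and selection logic described above (batch size 1, one k-means assignment step). The function and variable names are illustrative assumptions; the sketch loops over groups for clarity instead of using the efficient CUDA kernel.

```python
import torch
import torch.nn.functional as F

def dg_attention(q, k, v, centroids, topk):
    """q, k, v: (L, C) token projections; centroids: (G, C) query-cluster centroids."""
    L, C = q.shape
    # Assign each query to its nearest centroid (one k-means assignment step).
    assign = torch.cdist(q, centroids).argmin(dim=1)               # (L,)
    out = torch.zeros_like(q)
    for j in range(centroids.shape[0]):
        idx = (assign == j).nonzero(as_tuple=True)[0]              # queries of group j
        if idx.numel() == 0:
            continue
        scores = k @ centroids[j]                                  # centroid-key dot products, (L,)
        kv_idx = scores.topk(min(topk, L)).indices                 # indices of the k most relevant keys
        attn = F.softmax(q[idx] @ k[kv_idx].t() / C ** 0.5, dim=-1)
        out[idx] = attn @ v[kv_idx]                                # scatter back by original position
    return out

# Toy usage (single head, batch size 1).
L, C, G, topk = 196, 64, 8, 32
x = torch.randn(L, C)
q, k, v = (x @ torch.randn(C, C) for _ in range(3))
centroids = q[torch.randperm(L)[:G]].clone()                       # k-means init from random queries
y = dg_attention(q, k, v, centroids, topk)                         # (196, 64)
```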
|
| 35 |
+
|
| 36 |
+
To make the training stable, we update the cluster centroids with an exponential moving average after each iteration. Specifically, for the $j^{th}$ cluster centroid, we compute the current cluster centroid as follows: $$\begin{equation}
|
| 37 |
+
e'_j = \frac{1}{N_j} \sum_{i} Norm(X_{Q_j}^{i}).
|
| 38 |
+
\end{equation}$$ Then, we update the cluster centroid as below: $$\begin{equation}
|
| 39 |
+
{e_j} =Norm(\tau \times e_j + (1-\tau)\times {e'}_j),
|
| 40 |
+
\end{equation}$$ where $\tau$ is a hyper-parameter to control the update speed. We empirically set $\tau$ to $0.1 \times lr$, where $lr$ is the learning rate.
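A small sketch of this centroid update is given below, reusing the query and assignment tensors from the previous sketch; the helper name is illustrative and $\tau = 0.1 \times lr$ follows the text.

```python
import torch
import torch.nn.functional as F

def update_centroids(centroids, q, assign, lr):
    """EMA update of the G cluster centroids; tau = 0.1 * lr as in the text."""
    tau = 0.1 * lr
    for j in range(centroids.shape[0]):
        members = q[assign == j]
        if members.numel() == 0:
            continue
        e_new = F.normalize(members, dim=-1).mean(dim=0)   # e'_j: mean of the normalized queries
        centroids[j] = F.normalize(tau * centroids[j] + (1 - tau) * e_new, dim=0)
    return centroids

# e.g. after each training iteration, with q / assign / centroids from the previous sketch:
# centroids = update_centroids(centroids, q, assign, lr=1e-3)
```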
|
| 41 |
+
|
| 42 |
+
We analyze the computation complexity of our DG-Attention and the global self-attention to further reveal the efficiency of our method. Here, we only consider the process of computing the attention maps and weighted sums of values for clarity. Given input features of size $L \times C$, the global self-attention has a computational complexity of $$\begin{equation}
|
| 43 |
+
\Omega_{Global} = 2L^2C.
|
| 44 |
+
\end{equation}$$ In DG-Attention, each query only attends to $k$ keys, so the basic computation complexity of DG-Attention is $2kLC$. Besides, grouping the queries and selecting the $k$ most relevant keys require an additional computation cost of $2LGC+kG\log L$. Therefore, the total computation complexity of our DG-Attention is $$\begin{equation}
|
| 45 |
+
\Omega_{DG\mbox{-}Attention} = 2kLC + 2LGC + kG\log L.
|
| 46 |
+
\end{equation}$$
|
| 47 |
+
|
| 48 |
+
The ratio of the computation complexity of our DG-Attention and Global self-attention is:
|
| 49 |
+
|
| 50 |
+
$$\begin{equation}
|
| 51 |
+
\begin{split}
|
| 52 |
+
\frac{ \Omega_{DG\mbox{-}Attention} }{\Omega_{Global}} & =\frac{2kLC + 2LGC + kG\log L}{2L^2C} \\
|
| 53 |
+
& = \frac{k}{L} + \frac{G}{L} + \frac{kG\log L}{2L^2C} < 1
|
| 54 |
+
\end{split}
|
| 55 |
+
\end{equation}$$ since $L$ is larger than both $G$ and $k$. For high-resolution inputs, the ratio $\frac{ \Omega_{DG\mbox{-}Attention} }{\Omega_{Global}} \ll 1$. Typically, for the ImageNet classification task, $k$ is set to 98 for the first three stages, while the corresponding $L$ is 3136, 784, and 196, so the ratio is about 0.05, 0.19, and 0.75 for DGT-T. Besides, $k$ is independent of the shapes of the parameters in our models, so we can adjust $k$ to balance performance and computational efficiency.
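As a quick sanity check of these numbers, the dominant terms of the ratio can be evaluated directly; the log term is negligible and only nudges the last value slightly upward.

```python
# Quick arithmetic check of the complexity ratio for DGT-T (k = 98, G = 48).
# ratio ~ k/L + G/L; the text's 0.05 / 0.19 / 0.75 follow after rounding
# (the small log term pushes the last value up to ~0.75).
k, G = 98, 48
for L in (56 * 56, 28 * 28, 14 * 14):       # token counts of the first three stages
    print(f"L={L}: {(k + G) / L:.3f}")       # 0.047, 0.186, 0.745
```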
|
| 56 |
+
|
| 57 |
+
The Dynamic Group Transformer block is shown in Figure [2](#fig:method){reference-type="ref" reference="fig:method"}(b). It first employs the widely-used conditional position embeddings (CPE) [@chu2021conditional] to generate positional information. Then, DG-Attention is applied to model spatially relevant dependencies flexibly and dynamically. Finally, an IRFFN (Inverted Residual Feed-Forward Network) [@cmt] is employed to further capture local dependencies. The forward process of the $l^{th}$ block can be formulated as follows: $$\begin{align}
|
| 58 |
+
& \tilde{X^l}=X^{l-1}+CPE(X^{l-1}), \\
|
| 59 |
+
& \hat{X^l}=\tilde{X^l}+DG\mbox{-}Attention(LN(\tilde{X^{l}})), \\
|
| 60 |
+
& X^l=\hat{X^l}+IRFFN(LN(\hat{X^{l}})),
|
| 61 |
+
\end{align}$$ where $LN(\cdot)$ denotes Layer Normalization, and $X^{l}$ and $X^{l-1}$ are the outputs of the $l^{th}$ and $(l\mbox{-}1)^{th}$ blocks, respectively.
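The block structure can be summarized by the following schematic PyTorch sketch; it assumes CPE, DG-Attention, and IRFFN modules are defined elsewhere, and the toy usage substitutes simple stand-ins for them.

```python
import torch
import torch.nn as nn

class DGTBlock(nn.Module):
    """Structural sketch of one DGT block; CPE, DG-Attention and IRFFN are passed in."""
    def __init__(self, dim, cpe, dg_attention, irffn):
        super().__init__()
        self.cpe, self.attn, self.ffn = cpe, dg_attention, irffn
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, x):                    # x: (L, dim) token sequence
        x = x + self.cpe(x)                  # X~ = X + CPE(X)
        x = x + self.attn(self.norm1(x))     # X^ = X~ + DG-Attention(LN(X~))
        x = x + self.ffn(self.norm2(x))      # X  = X^ + IRFFN(LN(X^))
        return x

# Toy usage with simple stand-ins for the three sub-modules.
dim = 64
block = DGTBlock(dim, cpe=nn.Linear(dim, dim), dg_attention=nn.Linear(dim, dim),
                 irffn=nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)))
out = block(torch.randn(196, dim))
```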
|
| 62 |
+
|
| 63 |
+
Our Dynamic Group Transformer (DGT) consists of a convolutional stem, four hierarchical stages, and a classifier head, as shown in Figure [2](#fig:method){reference-type="ref" reference="fig:method"} (a). The stem is designed to extract local dependencies, similar to [@cmt], and consists of one 3$\times$3 convolution layer with stride 2 and two 3$\times$3 convolution layers with stride 1. After the stem, each stage contains a patch merging layer and multiple transformer blocks. The first three stages use the DGT block, and the last stage applies a global self-attention (GSA) block, obtained by replacing DG-Attention with global self-attention in the DGT block. To produce a hierarchical representation, we decrease the number of tokens and double the channel dimension with a 3$\times$3 convolutional layer with stride 2 before each stage. The final classifier head consists of two linear layers.
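A minimal sketch of the stem described above follows; the stride pattern matches the text, while the normalization, activation, and channel width are assumptions made for illustration.

```python
import torch
import torch.nn as nn

def make_stem(in_ch=3, width=64):
    # One stride-2 3x3 convolution followed by two stride-1 3x3 convolutions.
    return nn.Sequential(
        nn.Conv2d(in_ch, width, 3, stride=2, padding=1), nn.BatchNorm2d(width), nn.GELU(),
        nn.Conv2d(width, width, 3, stride=1, padding=1), nn.BatchNorm2d(width), nn.GELU(),
        nn.Conv2d(width, width, 3, stride=1, padding=1), nn.BatchNorm2d(width), nn.GELU(),
    )

feat = make_stem()(torch.randn(1, 3, 224, 224))   # -> (1, 64, 112, 112)
```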
|
| 64 |
+
|
| 65 |
+
Finally, we design three different variants, DGT-Tiny (DGT-T), DGT-Small (DGT-S), and DGT-Base (DGT-B), whose detailed configurations are shown in Table [\[tab:arch\]](#tab:arch){reference-type="ref" reference="tab:arch"}. For all variants, the number of blocks in each stage is fixed to \[1,2,17,2\]. In each DGT block, the expansion ratio of the IRFFN is set to 4 and the number of groups $G$ is 48. The number of selected keys/values $k$ is 98 for image classification on ImageNet [@Imagenet]. The main differences among the variants are the channel dimension and the number of heads in the DGT blocks. Besides, to stabilize training, we apply post-LayerNorm and cosine attention [@liu2021swinv2] in DGT-S and DGT-B.
|
2204.06283/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2022-04-09T22:45:08.872Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.102 Safari/537.36" etag="9Uvu6U8V7xmOG_HyQ5J0" version="16.5.3" type="google"><diagram id="CiQWeRSOo7sJFGXF8APK" name="Page-1">7VxZe5s4FP01fjSf2OExcZZ2Jpm0dbfMy3wKyIYpRh4QjtNfPxKWDJjF2IbYae2XoKsNSeeee3UlMlBHs+VtBOfePXZRMFCAuxyoVwNFUVXbpH+Y5GUlsTV7JZhGvrsSyZlg7P9EXAi4NPFdFBcKEowD4s+LQgeHIXJIQQajCD8Xi01wUOx1DqeoJBg7MChLv/ku8VZSSweZ/B3yp57oWQY8ZwZFYS6IPeji55xIvR6oowhjsnqaLUcoYJMn5mVV76Ymd/1iEQpJmwqjfx686ONs8Rglt7PvMVyAMBzyxVjAIOEDHihGQNu7nGDaLH1r8sKnwvgvwSJjGKcLdUELyNZ8mWXSpyn/m7byJAQCGSM4J0mEXPp454d0dWPiOzTxwUMhnqEQipp0IE+1rQ3Um6ZyVLZ6fSFWCiNRCFoyuUdmARXI9DEmEf6BRjjAUVpEdU37iS4lHa4fBDn5ZIIMx6FyGPjTkMocOv2IZl4uUESHAoMLnjHzXZd1ePns+QSN59BhvT9TNaGyCCehi9jKsD5cGHtpInsVATZl/f75leaLz7pEy5yIr/wtolNJohdahOcOZVnTJcDrcV3UTVlSuX4+Z+CWdSABZSX2ctg2TclQuWZxrZquO8qARx849nbAoaKVFgm5VBF5EkfEw1McwuA6k146SbRYT1txRrMKdxjPeZF/ESEvnGJgQnARAmjpk++550fWlKTz1NWSt5wmXkQipIP/nk/karFkVi1NZfXcC0ZObGhzFK4kNz6bszS/BMebGwsAsBs6YpxEDp/MxXj2Z/DfR+feHusfv/4hQ8PzhoJPYTRFpGFxVM5gbEUawRahABJ/UaTOKqzwqh+wn5JMBlKVglQtghRYJdit3pjXzlNeuUG9VYOrqSo1mEJ5Pb790a0qe6D7FwR0HrYVELdtVW0L6Eagbke03DWiD2I/1T4mPjJMPOZytuEjg8RjARGdER5Hw56Edxg+7NOCB6j10uI5DOu9pB1dOTqpFa7c3yjCw7GHWRNXPpyGmLts6/4iUfQzikmtD9borhXHsaELGdLlGncqh+qSvzZRUn+thDCQ/jpzr9SSe6UZmaHJu1cqWJukvHtlgb7go78d63MSLNHaL9JOiiY0+ZjrvJsVeVPrrHXu/x6mz4KeWpuDfam/GwPyPsROwiYGh2WTceOHaPg5Cf1wWs48MVviQmRN+rcltlnas2h0p65VmBJFL9sRU5XkvrbpIlrwFkzJq+3M+6Amra0JMk6KmgRG34IJetMbmbb40JSTwod47+PsZN69zDHxUOzHw4cwYKO+oDB7oekTMzQTy0GvsGmxhPnYYmV0RaqyM2tp9zRy1B3LG/RkjbZ0cFo7FvHeBxw/NTuuea2FM6Z04VPM/owiHMfDK5+ugv+UcNe0UKLRG60/mbpFIYpg4P8s+Lv7HFIdQiBIdnVk9k4ghmZLlgXWv3bxD0on4lT4VeIfWn34rONDzitIYIxIZWzsG4I/QhTHlZkPUbtjzw2MikPVht52PCjdYcsoG41zskVtzieytqkV1KXuNFbRJFmuUJceT2OF19i/xlQimunI9XIeYJ+kYQFw0Uo76GJPoEN2xvwZihbQJVtvhUZZk8RpVcEZ1Nalu0dj/abB9RfdgrEDjGcewBOtXuNV7LbRSIdZlHYy8o6VedxKTX9QnaEaTrEAms1e/Rz1NR8dI+GBquDEJ5zEuHOwQU/7ouBMWoYGJLMVZ6maZJQpy+7N36zf1ZxkfL6bVj5EaPg5gn7I7vY1e5cl1/Cwnu9gOE3YNc5q73ibqp1EwEfoYM8nCyySU9ykqUAvK4zYir2KuuhGY7AnxOHRDhEYNHiDsjUoRHv4a+0f7dn3gGL/KJG4eb09StT5oULN9TybErZlFfGob+Cs5q5fV1fzxKRUkHXH0ef7JCD+X3fvd6KeWo4B2zmmiLlK1nEc267CrWqotup2xDoy/UmWaWe/4ooLtilseisYSO2NgZrDzWcG6oqB9Lb3734nBtLrw5MdM9D4zD5l9tGPzj7Nnz6c2acz9ml93+t3Yp/6y2Qds8/N9dfrT2f62aAf5ej0U76y86YubPXJUQdwjXI0rjkMDfUnPx3TwcXZGSmxwTrIcjw2sBqdkdP9DrO77ymPEJhpe31H7/y7pLrvJoHahFNd2cBfzfeT5YbNE/B46iM+r3AHMWU90DI0/SvGoE0tu58uECCbUkUUuor4jL6uHBp97MK1KgTIEs1KP3sgCfVPFLCCxIzf5WGXxoCDZ7P0YZXniqPDA+zlthNC7izlQcFF4lwwQBPWwgGngpu+WicmVTGLNxcUrWxEdaPiAqvWlxE1+thTVWJJYVi6Zv1AUgOl0ZdPn96Pvtx9uc+AlCIsCJDD7iieUVWNKlBAlSZbJVRZoCdUNf23hrPZ2tNslVa/AiM70IxmKZJRxoQsV1itTQ+nBSZoMvvXQCvXJ/sHS+r1/w==</diagram></mxfile>
|
2204.06283/main_diagram/main_diagram.pdf
ADDED
|
Binary file (22.8 kB). View file
|
|
|
2204.06283/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,27 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
With the rising power of pre-trained language models, large-scale benchmarks serve as an important factor driving the future progress of NLP. These benchmarks can provide a tool for analyzing the strengths and weaknesses of pre-trained language models. In recent years, many benchmarks [\(Wang](#page-11-0) [et al.,](#page-11-0) [2019,](#page-11-0) [2020;](#page-11-1) [Rajpurkar et al.,](#page-10-0) [2018\)](#page-10-0) have been proposed that offer a diverse set of evaluation
|
| 4 |
+
|
| 5 |
+

|
| 6 |
+
|
| 7 |
+
Figure 1: We propose a broad-coverage diagnostic benchmark for linguistic-phenomena-driven evaluation. Our benchmark includes both a dataset collection and an evaluation procedure for evaluating model performance and diagnosing linguistic skills captured by a model. We evaluate models fine-tuned on large NLI datasets through four types of diagnostic tests: zero-shot, inoculation, hypothesis-only, and cross-distribution.
|
| 8 |
+
|
| 9 |
+
objectives. However, recent criticisms have been made that these benchmarks fail to serve as effective measures of progress in machine learning [\(Raji](#page-10-1) [et al.,](#page-10-1) [2021\)](#page-10-1). In particular, the task design does not formulate the specific linguistic skills required for understanding, and the benchmarks are not effective at helping researchers understand how certain systems or models work and how they fail. Although many state-of-the-art language models have shown impressive performance on these common benchmarks, their performance degrades considerably on adversarial or out-of-distribution samples [\(Bras et al.,](#page-9-0) [2020\)](#page-9-0). The performance drop shows that models may not be learning the required linguistic skills for solving the tasks of these benchmarks but instead exploit spurious dataset biases [\(Poliak et al.,](#page-10-2) [2018b\)](#page-10-2). Overall, the current benchmark format seems to be more like a contest than a tool that can explain how well a language model captures the distinct linguistic skills essential to language understanding and reasoning.
|
| 10 |
+
|
| 11 |
+
In this paper, we propose a new form of benchmark that serves as a diagnostic evaluation tool for
|
| 12 |
+
|
| 13 |
+
<span id="page-0-0"></span><sup>1</sup>Our code and data are publicly available at [https://github.](https://github.com/eric11eca/curriculum-ling) [com/eric11eca/curriculum-ling](https://github.com/eric11eca/curriculum-ling)
|
| 14 |
+
|
| 15 |
+
analyzing model linguistic skills. We present the CURRICULUM benchmark: a framework for diagnosing neural language models through broad-coverage linguistic phenomena. Our benchmark includes (1) a large-scale collection of natural language inference (NLI) datasets covering 36 linguistic phenomena and (2) an evaluation procedure for probing and evaluating how well a language model captures reasoning skills for distinct types of linguistic phenomena. Targeted linguistic phenomena in CURRICULUM range from fundamental properties like named entity and coreference to complex ones like commonsense and deductive reasoning. With the CURRICULUM benchmark, we aim to investigate the following research questions:
|
| 16 |
+
|
| 17 |
+
- Q1: Do language models trained on benchmark datasets have the ability to reason over a wide range of linguistic phenomena?
|
| 18 |
+
- Q2: Are linguistic phenomena missing from the training data recoverable through inoculation (i.e., continuing to train models on a small sample of examples) [\(Liu et al.,](#page-10-3) [2019a\)](#page-10-3)?
|
| 19 |
+
- Q3: Do language models learn a general reasoning skill of a phenomenon through inoculation?
|
| 20 |
+
|
| 21 |
+
To address the above questions, we empirically analyze NLI models trained on popular benchmark datasets through a pipeline of evaluations that includes a zero-shot diagnostic test, inoculation retraining, a hypothesis-only sanity check, and cross-distribution generalization tests.
|
| 22 |
+
|
| 23 |
+
For Q1, we observe that models trained on benchmark datasets, including adversarial data, do not have the reasoning ability for a large set of linguistic phenomena. Our results show that training on more datasets can help the model learn more types of reasoning but does not help the model acquire complex reasoning skills such as deductive and commonsense reasoning. Our benchmark exposes multiple knowledge gaps in large NLI models regarding diverse linguistic phenomena, particularly in the categories of commonsense and comprehension. For Q2, our analysis provides empirical evidence that exposes the lack of recoverable linguistic phenomena in benchmark datasets and models' inability to learn certain linguistic phenomena. We also show that, on some phenomena, models may rely heavily on spurious dataset bias existing in the hypothesis to reach high accuracy. For Q3, our experiments show that models can adapt between distributions with different difficulties only on 22.2% of the phenomena, such as Boolean, conditional, and comparative logic.
|
| 24 |
+
|
| 25 |
+
In the majority (58.3%) of the phenomena, models fail to generalize when the difficulties of the train and test distributions differ, for example, on relational knowledge, puns, and contextual commonsense reasoning. A model's learning performance may not align with its generalization ability, suggesting the lack of a general reasoning skill.
|
| 26 |
+
|
| 27 |
+
Overall, our proposed benchmark systematically maps out a wide range of specific linguistic skills required for language understanding and inference. We envision linguistic-phenomena-based evaluation to be an integral component of general linguistic intelligence. We hope CURRICULUM can serve as a useful evaluation tool that can map out which aspects of the problem space remain challenging for existing systems and models.
|
2204.07316/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2021-05-15T10:59:37.623Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.121 Safari/537.36" etag="Afp3s5X8LVr3HV2uBOJp" version="14.6.13" type="google"><diagram id="0R894_I3RNERw9i6fRA8" name="Page-1">7Vxbc5s4FP41ntl9iEcS98c6l93uNNtM09ltH2WQbSYYvEDqpL9+JZAwQsJ2HMBOnTwk9pE4iPOdm84RGRmXy6c/Urxa3CYBiUYIBE8j42qEEPQsm/5hlOeSYptWSZinYcAnbQj34U/CiYBTH8OAZNLEPEmiPFzJRD+JY+LnEg2nabKWp82SSL7rCs+JQrj3caRS/w2DfFFSXeRs6H+ScL4Qd4a2V44ssZjMnyRb4CBZ10jG9ci4TJMkLz8tny5JxIQn5FJed9MyWi0sJXG+zwUL+7vt/PP51vs4+endpQ9u5H27MEouP3D0yB+YLzZ/FhJIk8c4IIwJHBmT9SLMyf0K+2x0TTGntEW+jPhwlqfJQyUp+owTfgOS5uSpdeWwkgdVJJIsSZ4+0yn8ggtkcBlyJYImX+Z6Awny+JxFDQ5kulwVuBrMK+YbSdEPXFgvEJxpv0RyYLfkIjwl0QT7D/PissskSlI6FCcx0cp1FkZRY9I8xUFIZSrII2Tc0J/LSzqGo3AeU1pEZlQ2kwBniwrTTiCCUIbIs1SILA1EEICeIBIr2gYRiYMPzEvQb36Esyz0ZVToOPdJRiUoEiguY6eYalLQCUHQUhLhPPwhs9cJht/hLgnpjWsoANlQDGDKPLLkMfUJv6zuLHZxgm6DU47TOckVTgVS1YO/AjykAc+OmO7OkmKls9KSKPW/R+ZDJ1/DJY0UCPxN1vT3l2SJ480g/TQv/loTCKhgATIR9dRAMJ2KCRAYgkYXXpHpVOramTsB4/F4ZLH74iWz4XiarcobiIvKBYrrXuMUZLM3KkrDwm9uVAvvxqgNvTrV1BmKgC4bdV82rXO73aiF6ZmAcgemA2lwB5bBtASa0LGbUB9VARowa8KArCKc2IU6uC3ORdIGrYvvSxucfrThg+8naRDG8yLbLJ7wIVyRIMQM+lMxdJ/iRtKOsIWmJ4MrQtLRTF3kzWcUvqFIVwUIhnVg+FY4oQZMPYdvdH65FwRdYQeOC535Iuh4fHnDuHVndDs0oG/grPMCzvE6wq3JyBrW4AzdVueX9pWoo10qOvIe1dUAV08/RWYopaQbVEXayQYusgK+D2zPgVZP9Zz0wCT28tPHu9ZclW960yal56S2vluhKe1sNkO+r0t2A3tqWza/4jO9Q5gzkF3Qz1YGWWq1CkJNttv0MZ1lu4ZuK3NOTgCZDa97cKlK2ZL07b91buCXhk4V+aHgNZVgaOi8s4NO2egfXCJucho6/Aqd67wW2CjsfU0pe6XYBw4kWRNqPEBbPzyU5TupJ5I1Keu/2mLv2VX/oaOWBJFtqK6qt5LgrnLvUfPtyfWXryeebweYuDNtvm37LpnO+su3EbDlQH/0hNvUVSi7iR5yQ7HNVYRiwi3O/cV+HukvHoJvP90e7NXKfSHIqQLQ5YMyp5mFPs0qkriml+GWB2jzfdOB9Xtvf6hpk20/cNGNAzVlByoOQx3NgZq62u6rlB5k4siWoMF2m3jUGMke/tmU/TPv0QV4lTd1Vqd/yk2PrpPdNu6sRo9eE6WHbdztsafqOKxZxA1MnehdNDVsuxrZBDZ26qjHYGfJwQ5oEiex9axDYvZWXWrfKmUrZtAaK/RLMTILTOfT30ARTejtQe3T7+xjmTMxm53hZRg9l9dUUYLCYDBwcu5I4sKRpNyRyHNkds0UTRor18sG4yRd4kgeXnOpsnGzXHExGJGcmt5FxkCP59rrKfD5BTdVNsytVRoOqfbGnD2oLa0YzFMcZzPKVLCnAUVMWCdpIN+9fnkQZqsIcwGGcRTWrpxFCc7rHJV6b5pkTL63SUBXnzOFvI79JGBrrxxfiXaL42Or322OstGw1WnOHXpOABxni6l24vpMayxbmu7UIRQ1Jimt7C3C7lHHPf6JWhEjtpynha5GbD2ep93VS3k7mckNNdv8MS6OD51xZuKhhnEePzOx0A4ley8gnGgBATbqBwZwVF0atH5gdf0CwFtLcz1bCb7OWBd+K6pk45V36B6Zl5UJz9amFACRONBQg8+GWvSMvrDTbR5PF7ujHWCAdjO6ngJ4e5zYPcV3oqqsp+u3orxGmu0ZmgRIk2b3Vgew+2uZKkVvuch9laxjChjBS7ZNxtnDaM/S87FKy68tIPf1FoYrJ0LQUl/CGLambO+RCDEJrVofnr+7i6diOnixO4TeGBhe7Uf2jaY5hoqYXDj2dIJyxu6WYxWvk5VuA9KQVefnTbagNvgpFA6IO/bAS0/rDnSqxN7jlehfGqInxTpeAZrKbCgYB3+L4ZQw7BSxdmZ9Yzj4Cw2nhCEHwgRjz7Z2IbEvrBpumnjXN67tG/QXdKN4K4r3obZ1htrbQrt6Qu0dn+3tni29ntZGj7bLU6S7Lcn2wOXJ+wVrRv1IcppTM3Gmhb6nGYlm7e/Edtl46qNI7SjbaFEj3tkK6S2f7rwT0v2x1lN48127wdJs0zrQErfhMW211DLwnqvzc+tUEunzNwbNGDji+3c2dQwNSxCunjh45bfn+rc7klI1LCzzalPlOt2T8BAgGVWvgda+MVVh5HhjwwKbHzRodHX6q/G8+48D64DNcwrIVSuBAx+l7vwF/TP0IFBpcR+al6uskGdKXmTYFN3RFareU/T3FP0IKXqzTWmYmi7zoCm602mNEJ2n87TNJq6ilfFi56myMhus+naXB1UbpXZjTTUY/Q4zjxcXFAQ2p7fq7c639eZp48whctzD0K7W08boYKzp182/Yy2nb/6prXH9Pw==</diagram></mxfile>
|
2204.07316/main_diagram/main_diagram.pdf
ADDED
|
Binary file (35.1 kB). View file
|
|
|
2204.07316/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,67 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Transformer-based models are extensively used in natural language understanding (NLU) tasks, and some prominent pretraining strategies include BERT [@devlin-etal-2019-bert], RoBERTa [@liu2019roberta], ALBERT [@DBLP:conf/iclr/LanCGGSS20], and ELECTRA [@DBLP:conf/iclr/ClarkLLM20]. Despite their differences in curating the learning objectives, they all utilize text-based datasets only. In the real world, however, humans can benefit from the visual modality when acquiring knowledge from language; an obvious example is learning visually grounded words, such as colors and shapes.
|
| 4 |
+
|
| 5 |
+
Several studies have successfully used visually grounded information in NLU. ViCo [@DBLP:conf/iccv/GuptaSH19] learned visual co-occurrences in text and reported superior performance to GloVe on word analogy problems. @DBLP:conf/iclr/0001C0USLZ20 and @huang-etal-2020-unsupervised-multimodal used images to boost translation performance in supervised and unsupervised settings. @tan-bansal-2020-vokenization reported improvements over BERT on NLU by proposing the concept of vokenization.
|
| 6 |
+
|
| 7 |
+
{#fig:multiview}
|
| 8 |
+
|
| 9 |
+
Another branch of research focuses on solving multimodal downstream tasks such as visual question answering and image retrieval. @DBLP:journals/corr/abs-1908-03557 [@DBLP:conf/nips/LuBPL19; @DBLP:conf/iclr/SuZCLLWD20; @DBLP:conf/eccv/Li0LZHZWH0WCG20] trained visual-text transformers, while LXMERT [@tan-bansal-2019-lxmert] used separate encoders for text and image together with a cross-modal encoder. @tan-bansal-2020-vokenization tested these models with the general language understanding evaluation benchmark (GLUE @wang2018glue) and found that their performance does not exceed that of BERT (Appendix [7](#sec:A){reference-type="ref" reference="sec:A"}), drawing the conclusion that vision-and-language pretraining on visually grounded language datasets fails to distill useful information for general NLU. CLIP [@DBLP:journals/corr/abs-2103-00020] utilizes a contrastive loss to reach SOTA on zero-shot image classification in a retrieval fashion.
|
| 10 |
+
|
| 11 |
+
In this work, we establish the link between pretrained multimodal transformers and visually-grounded language learning. We devise a way to distill visual information from components of a pretrained multimodal transformer (the CLIP text transformer, abbreviated as CLIP-T) into pretrained language transformers (BERT/ELECTRA), to incorporate a versatile perception of words into the model (Figure [1](#fig:multiview){reference-type="ref" reference="fig:multiview"}).
|
| 12 |
+
|
| 13 |
+
Using a visually grounded text transformer as a teacher allows us to implement straightforward and non-fuzzy adapting tasks for distillation. We show that the CLIP-T output mathematically approximates visual features (Sec. [2.2](#sec:Pretraining){reference-type="ref" reference="sec:Pretraining"}) and that the linguistic competence of CLIP-T is low (Sec. [3](#sec:Experimental Results){reference-type="ref" reference="sec:Experimental Results"}), which together indicate that the distilled information is predominantly visual and thus non-trivial to the pretrained language transformer despite the textual inputs.
|
| 14 |
+
|
| 15 |
+
Methodologically, we use the cross-modal encoder structure inspired by @tan-bansal-2019-lxmert to concatenate the two models and further adapt the ensemble for some extra steps (far fewer than the original pretraining steps). While adapting pretrained BERT, we favor a document-level corpus (wiki103) over a vision-language corpus (MSCOCO) due to claims from @devlin-etal-2019-bert[^1] and results from @tan-bansal-2020-vokenization (Appendix [7](#sec:A){reference-type="ref" reference="sec:A"}). The adapting tasks are joint masked language modeling (MLM), same sentence prediction, and CLIP token classification, which resemble the BERT pretraining tasks to cater to the language-heavy characteristics of NLU. We conduct ablation studies to show that each of the tasks provides an improvement (Section [5](#sec:Ablation){reference-type="ref" reference="sec:Ablation"}).
|
| 16 |
+
|
| 17 |
+
During finetuning, we finetune XDBERT (cross-modal distilled BERT), which is the language encoder after adaptation. We evaluate the linguistic capabilities of the model by finetuning on GLUE, situations with adversarial generations (SWAG [@zellers2018swag]) benchmarks, and readability benchmarks[^2]. The resulting XDBERT outperforms pretrained BERT, proving that our adaptation strategy distills useful visual knowledge into BERT (right of Figure [2](#fig:modelpipeline){reference-type="ref" reference="fig:modelpipeline"}). We provide analysis to show that the improvements are visually grounded.
|
| 18 |
+
|
| 19 |
+
We summarize our contributions as follows:
|
| 20 |
+
|
| 21 |
+
- We explore distilling visual information from a pretrained multimodal transformer to a pretrained language transformer and improved NLU performance.
|
| 22 |
+
|
| 23 |
+
- Our adapting method is efficient and extensible to different combinations of pretrained-language encoders (BERT/ELECTRA).
|
| 24 |
+
|
| 25 |
+
# Method
|
| 26 |
+
|
| 27 |
+
The training process consists of three phases: pretraining, adaptation, and finetuning (Figure [2](#fig:modelpipeline){reference-type="ref" reference="fig:modelpipeline"}). Our proposed method focuses on the adaptation phase with pretrained models, so pretraining is not a part of our experiment, but we explain all three phases for completeness. The adaptation phase incorporates the cross-modal transformer structure to jointly learn from CLIP-T and BERT outputs.
|
| 28 |
+
|
| 29 |
+
![In our experimental setting, the transformers go through three phases of the training processes from left to right. The pretraining phase pretrains BERT and CLIP-T, both of which are then used in the adaptation phase and concatenated with a cross-modal encoder. Finetuning is performed on the language encoder only (XDBERT); in this case, a positive CoLA example is being processed to determine its linguistic acceptability. ViT stands for Vision Transformer [@dosovitskiy2021an], and the input id 103 is the \[MASK\] token in BERT.](modelpipelinesmall2.pdf){#fig:modelpipeline}
|
| 30 |
+
|
| 31 |
+
The cross-modal transformer (middle of Figure [2](#fig:modelpipeline){reference-type="ref" reference="fig:modelpipeline"}) consists of a cross-modal encoder, CLIP-T and BERT. CLIP-T has the same module connections as BERT with only parameter differences (specifications in Appendix [8](#sec:AppendixCLIPSequence){reference-type="ref" reference="sec:AppendixCLIPSequence"}). The cross-modal encoder consists of repeating cross-modal encoder layers, which is an extension to single-modality encoder layers (layers of BERT/CLIP-T) in Figure [3](#fig:cross-modal encoder){reference-type="ref" reference="fig:cross-modal encoder"}. The added cross-attention module follows the attention formula [@DBLP:conf/nips/VaswaniSPUJGKP17]: $$\begin{equation}
|
| 32 |
+
\text{Attention output} = \text{softmax}\left(\frac{\textbf{Q}\textbf{K}^T}{\sqrt{D}}\right)\textbf{V}
|
| 33 |
+
\end{equation}$$ for queries (**Q**), keys (**K**), and values (**V**) of dimension D; here, however, **Q** is generated from a different modality than **K** and **V**. We choose the number of cross-modal encoder layers to be 2.
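A minimal single-head sketch of this cross-attention step is given below; the projection matrices are omitted for brevity and the shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def cross_attention(q_states, kv_states):
    """q_states: (Lq, D) hidden states of modality A; kv_states: (Lk, D) of modality B."""
    d = q_states.shape[-1]
    attn = F.softmax(q_states @ kv_states.t() / d ** 0.5, dim=-1)   # (Lq, Lk)
    return attn @ kv_states                                          # (Lq, D)

# e.g. BERT token states attending to CLIP-T token states
bert_h = torch.randn(12, 768)
clip_h = torch.randn(10, 768)
fused = cross_attention(bert_h, clip_h)
```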
|
| 34 |
+
|
| 35 |
+
BERT is trained using the next sentence prediction and masked language modeling. CLIP is an image-text matching system with two components, a text encoder (CLIP-T), and an image encoder (CLIP-ViT), which learn to encode paired inputs to closer output embeddings via contrastive loss. The trained representation has the following properties: $$\begin{equation}
|
| 36 |
+
\cos(H_i, V_i) \gg \cos(H_i, V_j) \quad (i \neq j)
|
| 37 |
+
\end{equation}$$ $$\begin{equation}
|
| 38 |
+
\cos(H_i, V_i) \gg \cos(H_j, V_i) \quad (i \neq j)
|
| 39 |
+
\end{equation}$$
|
| 40 |
+
|
| 41 |
+
where $H_i$ is the CLIP text encoder output of $X_i$, and $V_i$ is the CLIP image encoder output of $Y_i$. The text-image input ($X_i$, $Y_i$) is a pair, and every ($X_j$, $Y_k$) $(j \neq k)$ is a non-pair. Since $H_i$ and $V_i$ are normalized and have a length of 1, $H_i$ can be used to approximate $V_i$. The similarity of $H_i$ and $V_i$ is also shown in the multi-modal arithmetic properties discovered in @DBLP:journals/corr/abs-2111-14447. Therefore, we use the CLIP text encoder output to approximate the CLIP image encoder output for a straightforward adaptation process.
|
| 42 |
+
|
| 43 |
+
We define three adapting tasks that can be learned in a self-supervised manner, visualized in Figure [2](#fig:modelpipeline){reference-type="ref" reference="fig:modelpipeline"}. In these tasks, BERT and CLIP-T take sentences A and B, respectively, as input, and losses are calculated from both the BERT output and the CLIP-T output. Our adapting tasks closely follow the BERT text pretraining strategies to retain linguistic competence. Unlike pretraining, the adaptation is computationally inexpensive, as we found that training for 1 epoch on wiki103 was already effective. Further training details can be found in Appendix [9](#sec:AppendixTrainingDetails){reference-type="ref" reference="sec:AppendixTrainingDetails"}.
|
| 44 |
+
|
| 45 |
+
The MLM objective teaches the model to reconstruct masked tokens. The masked ratio and masked token replacement probabilities follow @devlin-etal-2019-bert.
|
| 46 |
+
|
| 47 |
+
Since there is no equivalent of a \[MASK\] token in CLIP, we leave the sentence as is.
|
| 48 |
+
|
| 49 |
+
The Image-Text Matching (ITM) objective is widely used in multimodal learning [@tan-bansal-2019-lxmert]. We modify this objective to same sentence prediction, as both streams of our model take text as input. When choosing the input sentences for BERT and CLIP-T, we make the inputs nonidentical 50% of the time. A binary classifier over \[CLS\] differentiates between the two cases. This motivates the \[CLS\] output to encode sentence-related information and trains the cross-attention weights.
|
| 50 |
+
|
| 51 |
+
This is the MLM objective applied on the CLIP-T side of the full model, omitting the masking step because CLIP has no mask token. As in MLM, 15% of the tokens are randomly selected for reconstruction. We address concerns about trivial solutions learned by the model in Section [5](#sec:Ablation){reference-type="ref" reference="sec:Ablation"} and [\[tab:AttentionMap\]](#tab:AttentionMap){reference-type="ref" reference="tab:AttentionMap"} in the appendix.
|
| 52 |
+
|
| 53 |
+
Finetuning follows the methods described in @devlin-etal-2019-bert and is applied to the language encoder only (XDBERT); therefore, the number of parameters is kept equal to that of pretrained BERT.
|
| 54 |
+
|
| 55 |
+
{#fig:cross-modal encoder}
|
| 56 |
+
|
| 57 |
+
::: table*
|
| 58 |
+
**RTE** **MPRC** **STSB** **CoLA** **SST2** **QNLI** **QQP** **MNLI** **SWAG** **READ↓**
|
| 59 |
+
------------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- -----------
|
| 60 |
+
CLIP-T 51.62 76.20 22.07 25.41 -- -- -- -- -- --
|
| 61 |
+
BERT-b 66.43 87.38 88.64 56.52 92.46 90.92 89.51 84.35 81.0 --
|
| 62 |
+
XDBERT-b **69.31** **88.02** **89.32** **57.55** **92.78** **91.52** **89.57** **84.75** **81.35** --
|
| 63 |
+
ELECTRA-b 78.70 89.49 90.77 66.09 94.5 92.69 90.29 88.23 88.60 --
|
| 64 |
+
XDELECTRA-b **80.51** **90.55** **91.04** **66.76** **95.20** **93.03** **90.4** **88.75** **88.73** --
|
| 65 |
+
ELECTRA-l 86.64 91.53 91.88 69.27 96.90 94.78 **91.34** 90.99 92.46 0.685
|
| 66 |
+
XDELECTRA-l **87.73** **92.12** **91.97** **70.98** **97.36** **94.93** 91.29 **91.02** **92.59** **0.635**
|
| 67 |
+
:::
|
2205.05678/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2205.05678/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,103 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Reconstructing dynamic information about a physical system directly from a video has received considerable attention in the robotics, machine learning, computer vision, and graphics communities. This problem is fundamentally challenging because of its deep coupling among physics, geometry, and perception of a system. Traditional solutions like motion capture systems [\(Vicon;](#page-11-0) [OptiTrack;](#page-10-0) [Qualisys\)](#page-11-1) can provide high-quality results but require prohibitively expensive external hardware platforms. More recent development in differentiable simulation and rendering provides an inexpensive and attractive alternative to the motion capture systems and has shown promising proof-of-concept results [\(Murthy et al.,](#page-10-1) [2020\)](#page-10-1). However, existing methods in this direction typically assume the videos come from a *known* renderer. Such an assumption limits their usefulness in inferring dynamic information from an *unknown* rendering domain, which is common in real-world applications due to the discrepancy between rendering and real-world videos. Existing techniques for aligning different rendering domains, e.g., CycleGAN [\(Zhu et al.,](#page-11-2) [2017\)](#page-11-2), may help alleviate this issue, but they typically require access to the target domain with massive data, which is not always available. To our best knowledge, inferring dynamic parameters of a physical system directly from videos under *unknown* rendering conditions remains far from being solved, and our work aims to fill this gap.
|
| 4 |
+
|
| 5 |
+
<sup>∗</sup>Equal contribution
|
| 6 |
+
|
| 7 |
+
<span id="page-0-0"></span><sup>1</sup>Videos, code, and data are available on the project webpage: <http://risp.csail.mit.edu>.
|
| 8 |
+
|
| 9 |
+
<span id="page-1-1"></span><span id="page-1-0"></span>
|
| 10 |
+
|
| 11 |
+
Figure 1: A gallery of our four environments (left to right) across three rendering domains (top to bottom). For each environment, we train a RISP with images under varying lighting, backgrounds, and materials generated from a differentiable renderer (top). Each environment then aims to find proper system and control parameters to simulate and render the physical system (middle) so that it matches the dynamic motion of a reference video (bottom) with unknown rendering configurations. We deliberately let the three rows use renderers with vastly different rendering configurations.
|
| 12 |
+
|
| 13 |
+
We propose a novel approach combining three ideas to address this challenge: domain randomization, state estimation, and rendering gradients. Domain randomization is a classic technique for transferring knowledge between domains by generating massive samples whose variance can cover the discrepancy between domains. We upgrade it with two key innovations: First, we notice that image differences are sensitive to changes in rendering configurations, which shadows the rendering-invariant, dynamics-related parameters that we genuinely aim to infer. This observation motivates us to propose a *rendering-invariant state predictor* (RISP) that extracts state information of a physical system from videos. Our second innovation is to leverage *rendering gradients* from a differentiable renderer. Essentially, requiring the output of RISP to be agnostic to rendering configurations is equivalent to enforcing its gradients with respect to rendering parameters to be zero. We propose a new loss function using rendering gradients and show an efficient method for integrating it into deep learning frameworks.
|
| 14 |
+
|
| 15 |
+
Putting all these ideas together, we develop a powerful pipeline that effectively infers parameters of a physical system directly from video input under random rendering configurations. We demonstrate the efficacy of our approach on a variety of challenging tasks evaluated in four environments (Sec. [4](#page-5-0) and Fig. [1\)](#page-1-0) as well as in a real-world application (Fig. [4\)](#page-8-0). The experimental results show that our approach outperforms the state-of-the-art techniques by a large margin in most of these tasks due to the inclusion of rendering gradients in the training process.
|
| 16 |
+
|
| 17 |
+
In summary, our work makes the following contributions:
|
| 18 |
+
|
| 19 |
+
- We investigate and identify the bottleneck in inferring state, system, and control parameters of physical systems from videos under various rendering configurations (Sec. [3.1\)](#page-3-0);
|
| 20 |
+
- We propose a novel solution combining domain randomization, state estimation, and rendering gradients to achieve generalizability across rendering domains (Sec. [3.2\)](#page-3-1);
|
| 21 |
+
- We demonstrate the efficacy of our approach on several challenging tasks in both simulation and real-world environments (Sec. [4\)](#page-5-0).
|
| 22 |
+
|
| 23 |
+
# Method
|
| 24 |
+
|
| 25 |
+
Given a video showing the dynamic motion of a physical system, our goal is to infer the unknown state, system, or control parameters directly from the video with partial knowledge about the physics model and rendering conditions. Specifically, we assume we know the governing equations of the physical system (e.g., Newton's law for rigid-body systems) and the camera position, but the exact system, control, or rendering parameters are not exposed.
|
| 26 |
+
|
| 27 |
+
To solve this problem, we propose a pipeline (Fig. [2\)](#page-2-1) that consists of two components: 1) a differentiable simulation and rendering engine; 2) the RISP network. First, we use our engine to simulate and render the states of a physical system, outputting images under varying rendering configurations. Next, the RISP network learns to reconstruct the state information from these generated images. Putting
|
| 28 |
+
|
| 29 |
+
<span id="page-3-4"></span>these two components together, we have a pipeline that can faithfully recover dynamic information of a physical system from a new video with unseen rendering configurations.
|
| 30 |
+
|
| 31 |
+
Given a physical system with known dynamic model $\mathcal{M}$ , we first use a differentiable simulator to simulate its states based on action inputs at each time step after time discretization:
|
| 32 |
+
|
| 33 |
+
$$\mathbf{s}_{i+1} = \mathcal{M}_{\phi}(\mathbf{s}_i, \mathbf{a}_i), \quad \forall i = 0, 1, \cdots, N-1, \tag{1}$$
|
| 34 |
+
|
| 35 |
+
where N is the number of time steps in a rollout of physics simulation, and $\mathbf{s}_i$ , $\mathbf{s}_{i+1}$ and $\mathbf{a}_i$ represent the state and action vectors at the corresponding time steps, respectively. The $\phi$ vector encodes the system parameters in the model, e.g., mass, inertia, and elasticity. Next, we apply a differentiable renderer $\mathcal{R}$ to generate an image $\mathbf{I}_i$ for each state $\mathbf{s}_i$ :
|
| 36 |
+
|
| 37 |
+
$$\mathbf{I}_i = \mathcal{R}_{\psi}(\mathbf{s}_i), \quad \forall i = 0, 1, \cdots, N. \tag{2}$$
|
| 38 |
+
|
| 39 |
+
Here, $\psi$ is a vector encoding rendering parameters whose gradients are available in the renderer $\mathcal{R}$. Examples of $\psi$ include light intensity, material reflectance, and background color. By abuse of notation, we re-write the workflow of our simulation and rendering engine in a compact form:
|
| 40 |
+
|
| 41 |
+
$$\{\mathbf{I}_i\} = \mathcal{R}_{\psi} \left[ \underbrace{\mathcal{M}_{\phi}(\mathbf{s}_0, \{\mathbf{a}_i\})}_{\{\mathbf{s}_i\}} \right]. \tag{3}$$
|
| 42 |
+
|
| 43 |
+
In other words, given an initial state $s_0$ and a sequence of actions $\{a_i\}$, we generate a sequence of states $\{s_i\}$ from simulation and render the corresponding image sequence $\{I_i\}$. The task of recovering unknown information from a reference video $\{I_i^{\text{ref}}\}$ can be formulated as follows:
|
| 44 |
+
|
| 45 |
+
$$\min_{\mathbf{s}_0, \{\mathbf{a}_i\}, \boldsymbol{\phi}, \boldsymbol{\psi}} \quad \mathcal{L}(\{\mathbf{I}_i^{\text{ref}}\}, \{\mathbf{I}_i\}), \tag{4}$$
|
| 46 |
+
|
| 47 |
+
<span id="page-3-2"></span>s.t.
|
| 48 |
+
$$\{\mathbf{I}_i\} = \mathcal{R}_{\psi}[\mathcal{M}_{\phi}(\mathbf{s}_0, \{\mathbf{a}_i\})],$$
|
| 49 |
+
(5)
|
| 50 |
+
|
| 51 |
+
where $\mathcal{L}$ is a loss function penalizing the difference between the generated images and their references. Assuming that the simulator $\mathcal{M}$ and the renderer $\mathcal{R}$ are differentiable with respect to their inputs, we can run gradient-based optimization algorithms to solve Eqn. (4). This is essentially the idea proposed in $\nabla \text{Sim}$ , the state-of-the-art method for identifying parameters directly from video inputs (Murthy et al., 2020). Specifically, $\nabla \text{Sim}$ defines $\mathcal{L}$ as a norm on pixelwise differences.
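A schematic sketch of this analysis-by-synthesis optimization is shown below; it assumes hypothetical differentiable `simulate` and `render` callables and is not the exact $\nabla \text{Sim}$ implementation.

```python
import torch

def fit_to_video(ref_images, simulate, render, s0, actions, phi, psi, steps=200, lr=1e-2):
    """Jointly optimize initial state, actions, system and rendering parameters (Eq. 4)."""
    params = [p.requires_grad_(True) for p in (s0, actions, phi, psi)]
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(steps):
        states = simulate(s0, actions, phi)        # {s_i}: differentiable simulation rollout
        images = render(states, psi)               # {I_i}: differentiable rendering
        loss = (images - ref_images).abs().mean()  # pixelwise image-space loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return s0, actions, phi, psi
```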
|
| 52 |
+
|
| 53 |
+
One major limitation of Eqn. (4) is that it expects reasonably similar initial images $\{\mathbf{I}_i\}$ and references $\{\mathbf{I}_i^{\text{ref}}\}$ to successfully solve the optimization problem. Indeed, since the optimization problem is highly nonlinear due to the coupling between simulation and rendering, local optimization techniques like gradient descent can easily become trapped in local minima if $\{\mathbf{I}_i\}$ and $\{\mathbf{I}_i^{\text{ref}}\}$ are not close enough. While $\nabla \text{Sim}$ has reported promising results when $\{\mathbf{I}_i\}$ and $\{\mathbf{I}_i^{\text{ref}}\}$ are rendered with moderately different $\psi$, we found in our experiments that directly optimizing $\mathcal L$ defined on the image space rarely works when the two rendering domains are vastly different (Fig. 1). We therefore believe this problem requires a fundamentally different solution, motivating RISP in our method.
|
| 54 |
+
|
| 55 |
+
The difficulty of generalizing Eqn. (4) across different rendering domains is partially explained by the fact that the loss $\mathcal L$ is defined on the differences in the image space, which is sensitive to changes in rendering configurations. To address this issue, we notice from many differentiable simulation papers that a loss function in the state space is fairly robust to random initialization (Du et al., 2020; Liang et al., 2019), inspiring us to redefine $\mathcal L$ in a state-like space. More concretely, we introduce the RISP network $\mathcal N$ that takes as input an image $\mathbf I$ and outputs a state prediction $\hat{\mathbf s}=\mathcal N(\mathbf I)$ . We then redefine the optimization problem in Eqn. (4) as follows (Fig. 2):
|
| 56 |
+
|
| 57 |
+
$$\min_{\mathbf{s}_0, \{\mathbf{a}_i\}, \boldsymbol{\phi}, \boldsymbol{\psi}} \quad \mathcal{L}(\mathcal{N}_{\boldsymbol{\theta}}(\{\mathbf{I}_i^{\text{ref}}\}), \mathcal{N}_{\boldsymbol{\theta}}(\{\mathbf{I}_i\})), \tag{6}$$
|
| 58 |
+
|
| 59 |
+
<span id="page-3-3"></span>s.t.
|
| 60 |
+
$$\{\mathbf{I}_i\} = \mathcal{R}_{\psi}[\mathcal{M}_{\phi}(\mathbf{s}_0, \{\mathbf{a}_i\})].$$
|
| 61 |
+
(7)
|
| 62 |
+
|
| 63 |
+
Note that the network $\mathcal{N}_{\theta}$ , parametrized by $\theta$ , is pre-trained and fixed in this optimization problem. Essentially, Eqn. (6) maps the two image sequences to the predicted state space, after which the
|
| 64 |
+
|
| 65 |
+
<span id="page-4-2"></span>standard gradient-descent optimization follows. A well-trained network $\mathcal N$ can be interpreted as an "inverse renderer" $\mathcal R^{-1}$ that recovers the rendering-invariant state vector regardless of the choice of rendering parameters $\psi$ , allowing Eqn. (6) to match the information behind two image sequences $\{\mathbf I_i\}$ and $\{\mathbf I_i^{\mathrm{ref}}\}$ even when they are generated from different renderers $\mathcal R_{\psi}$ . Below, we present two ideas to train the network $\mathcal N$ :
|
| 66 |
+
|
| 67 |
+
The first idea: domain randomization Our first idea is to massively sample state-rendering pairs $(\mathbf{s}_j, \psi_j)$ and render the corresponding images $\mathbf{I}_j = \mathcal{R}_{\psi_j}(\mathbf{s}_j)$, giving us a training set $\mathcal{D} = \{(\mathbf{s}_j, \psi_j, \mathbf{I}_j)\}$. We then train $\mathcal{N}$ to minimize the prediction error:
|
| 68 |
+
|
| 69 |
+
$$\mathcal{L}^{\text{error}}(\boldsymbol{\theta}, \mathcal{D}) = \sum_{(\mathbf{s}_j, \boldsymbol{\psi}_j, \mathbf{I}_j) \in \mathcal{D}} \underbrace{\|\mathbf{s}_j - \mathcal{N}_{\boldsymbol{\theta}}(\mathbf{I}_j)\|_1}_{\mathcal{L}^{\text{error}}_j}.$$
|
| 70 |
+
(8)
|
| 71 |
+
|
| 72 |
+
The intuition is straightforward: $\mathcal{N}_{\theta}$ learns to generalize over rendering configurations because it sees images generated with various rendering parameters $\psi$ . This is exactly the domain randomization idea (Tobin et al., 2017), which we borrow to solve our problem across different rendering domains.
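A sketch of the domain-randomized training loop implied by Eq. (8) follows; it assumes hypothetical helpers for sampling states and rendering parameters and a differentiable `render` function.

```python
import torch

def train_risp(net, render, sample_state, sample_psi, steps=10_000, lr=1e-4):
    """Train the state predictor N on images rendered with randomized parameters."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        s = sample_state()                    # ground-truth state s_j
        psi = sample_psi()                    # randomized rendering parameters psi_j
        img = render(s, psi)                  # I_j = R_psi(s_j)
        loss = (net(img) - s).abs().sum()     # L1 prediction error, as in Eq. (8)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return net
```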
|
| 73 |
+
|
| 74 |
+
The second idea: rendering gradients One major bottleneck of domain randomization is its need for massive training data that spans the whole distribution of rendering parameters $\psi$. Noting that a perfectly rendering-invariant $\mathcal{N}$ must satisfy the following condition:
|
| 75 |
+
|
| 76 |
+
<span id="page-4-0"></span>
|
| 77 |
+
$$\frac{\partial \mathcal{N}_{\theta}(\mathcal{R}_{\psi}(\mathbf{s}))}{\partial \psi} \equiv \mathbf{0}, \quad \forall \mathbf{s}, \psi, \tag{9}$$
|
| 78 |
+
|
| 79 |
+
we consider adding a regularizer to the training loss:
|
| 80 |
+
|
| 81 |
+
$$\mathcal{L}^{\text{train}}(\boldsymbol{\theta}, \mathcal{D}) = \mathcal{L}^{\text{error}} + \gamma \sum_{(\mathbf{s}_{j}, \boldsymbol{\psi}_{j}, \mathbf{I}_{j}) \in \mathcal{D}} \| \frac{\partial \mathcal{N}_{\boldsymbol{\theta}}(\mathcal{R}_{\boldsymbol{\psi}_{j}}(\mathbf{s}_{j}))}{\partial \boldsymbol{\psi}_{j}} \|_{F}, \tag{10}$$
|
| 82 |
+
|
| 83 |
+
where $\|\cdot\|_F$ indicates the Frobenius norm and $\gamma$ a regularization weight. The intuition is that by suppressing this Jacobian to zero, we encourage the network $\mathcal N$ to flatten out its landscape along the dimension of rendering parameters $\psi$ , and invariance across rendering configurations follows. To implement this loss, we apply the chain rule:
|
| 84 |
+
|
| 85 |
+
<span id="page-4-1"></span>
|
| 86 |
+
$$\frac{\partial \mathcal{N}_{\theta}(\mathcal{R}_{\psi_j}(\mathbf{s}_j))}{\partial \psi_j} = \frac{\partial \mathcal{N}_{\theta}(\mathbf{I}_j)}{\partial \psi_j} = \frac{\partial \mathcal{N}_{\theta}(\mathbf{I}_j)}{\partial \mathbf{I}_j} \frac{\partial \mathbf{I}_j}{\partial \psi_j},\tag{11}$$
|
| 87 |
+
|
| 88 |
+
where the first term $\frac{\partial \mathcal{N}_{\theta}(\mathbf{I}_{j})}{\partial \mathbf{I}_{j}}$ is available in any modern deep learning framework and the second term $\frac{\partial \mathbf{I}_{j}}{\partial \psi_{j}}$ can be obtained from a state-of-the-art differentiable renderer (Nimier-David et al., 2019). We can now see more clearly the intuition behind RISP: it requires the network's sensitivity to input images to be orthogonal to the directions along which rendering parameters can influence the image, leading to a rendering-invariant prediction.
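The chain rule above can be implemented directly with autograd; the following is a sketch (not the authors' implementation) that assumes the renderer's flattened image Jacobian with respect to $\psi$ has been precomputed.

```python
import torch

def risp_loss(net, img, s_true, dI_dpsi, gamma=0.1):
    """L^error + gamma * ||dN/dpsi||_F for one image; dI_dpsi: (|I|, |psi|) from the renderer."""
    img = img.detach().requires_grad_(True)
    pred = net(img)                                       # predicted state, shape (|s|,)
    err = (pred - s_true).abs().sum()                     # L1 prediction error
    # Build dN/dI row by row (one row per state dimension; |s| is assumed small).
    rows = [torch.autograd.grad(pred[i], img, retain_graph=True, create_graph=True)[0].flatten()
            for i in range(pred.numel())]
    dN_dI = torch.stack(rows)                             # (|s|, |I|)
    dN_dpsi = dN_dI @ dI_dpsi                             # chain rule of Eq. (11), (|s|, |psi|)
    return err + gamma * dN_dpsi.norm()                   # Frobenius-norm regularizer, Eq. (10)
```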
|
| 89 |
+
|
| 90 |
+
We stress that the design of this new loss in Eqn. (10) is non-trivial. In fact, both $\mathcal{L}^{error}$ and $\mathcal{L}^{reg}$ have their unique purposes and must be combined: $\mathcal{L}^{error}$ encourages $\mathcal{N}$ to fit its output to individually different states, and $\mathcal{L}^{reg}$ attempts to smooth out its output along the $\psi$ dimension. Specifically, $\mathcal{L}^{reg}$ cannot be optimized as a standalone loss because it leads to a trivial solution of $\mathcal{N}$ always predicting constant states. Putting $\mathcal{L}^{error}$ and $\mathcal{L}^{reg}$ together forces them to strike a balance between predicting accurate states and ignoring noises from rendering conditions, leading to a network $\mathcal{N}$ that truly learns the "inverse renderer" $\mathcal{R}^{-1}$ .
|
| 91 |
+
|
| 92 |
+
It remains to show how to compute the gradient of the regularizer $\mathcal{L}^{reg}$ with respect to the network parameters $\theta$ , which is required by gradient-based optimizers to minimize this new loss. As the loss definition now includes first-order derivatives, computing its gradients involves second-order partial derivatives, which can be time-consuming if implemented carelessly with multiple loops. Our last contribution is to provide an efficient method for computing this gradient, which can be fully implemented with existing frameworks (PyTorch and mitsuba-2 in our experiments):
|
| 93 |
+
|
| 94 |
+
**Theorem 1** Assuming forward mode differentiation is available in the renderer $\mathcal{R}$ and reverse mode differentiation is available in the network $\mathcal{N}$ , we can compute a stochastic gradient $\frac{\partial \mathcal{L}^{reg}}{\partial \boldsymbol{\theta}}$ in $\mathcal{O}(|\mathbf{s}||\boldsymbol{\theta}|)$ time per image using pre-computed data occupying $\mathcal{O}(\sum_{i} |\psi_{i}||\mathbf{I}_{i}|)$ space.
|
| 95 |
+
|
| 96 |
+
<span id="page-5-2"></span>In particular, we stress that computing the gradients of L reg does *not* require second-order gradients in the renderer R, which would exceed the capability of all existing differentiable renderers we are aware of. We leave the proof of this theorem in our supplemental material.
|
| 97 |
+
|
| 98 |
+
Further speedup Theorem [1](#page-4-1) states that it takes time linear in the network size and the state dimension to compute the gradients of $\mathcal{L}^{\text{reg}}$. The $\mathcal{O}(|\mathbf{s}||\boldsymbol{\theta}|)$ time cost is affordable for very small rigid-body systems (e.g., $|\mathbf{s}| < 10$) but does not scale to larger systems. Therefore, we use a slightly different regularizer in our implementation:
|
| 99 |
+
|
| 100 |
+
$$\mathcal{L}^{\text{train}}(\boldsymbol{\theta}, \mathcal{D}) = \mathcal{L}^{\text{error}} + \gamma \sum_{(\mathbf{s}_{j}, \boldsymbol{\psi}_{j}, \mathbf{I}_{j}) \in \mathcal{D}} \| \frac{\partial \mathcal{L}_{j}^{\text{error}}}{\partial \boldsymbol{\psi}_{j}} \|.$$
|
| 101 |
+
(12)
|
| 102 |
+
|
| 103 |
+
In other words, we instead encourage the state prediction error to be rendering-invariant. It can be seen from the proof of Theorem [1](#page-4-1) that this new regularizer requires only $\mathcal{O}(|\boldsymbol{\theta}|)$ time to compute its gradients, and we have found empirically that its performance is comparable to that of Eqn. [\(10\)](#page-4-0) while being much faster. We leave a theoretical analysis of the two regularizers to future work.
|
2206.08476/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="Electron" modified="2022-01-27T09:22:04.855Z" agent="5.0 (Macintosh; Intel Mac OS X 10_16_0) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/13.5.1 Chrome/83.0.4103.122 Electron/9.1.1 Safari/537.36" etag="u_K_hoKEJt6CaO8HVbb6" version="13.5.1" type="device"><diagram id="DzuX6UkRfyrcK7ndNIvD" name="Page-1">7VZbb5swFP41PC7iEgh9LUm7ap2mKZOWPlUOnIA7g5ExCdmvnx3sADWJ2NRq0rQX5POdc+xz+2wsL8qbe4bK7DNNgFiunTSWt7RcdxEG4iuBYwv4jtcCKcNJCzkdsMY/QYG2QmucQDUw5JQSjsshGNOigJgPMMQYPQzNdpQMTy1RCgawjhEx0e844VmLhr7d4R8Bp5k+2bGVJkfaWAFVhhJ66EHeyvIiRilvV3kTAZG103Vp/e4uaM+BMSj4FIdF/tg8f1qSl2DzsPafvxZPX/wPcxUbP+qEIRH5K5EyntGUFoisOvSW0bpIQO5qC6mzeaS0FKAjwBfg/KiaiWpOBZTxnCgtNJhveusnudXMV9KyUTufhKMWCs6Om77Q85Ji53aStF+bn0zqYtkUVNGaxXClVnr8EEuBX7Fzz80VpACag4hH+DEgiOP9MA6kxjM923UdFAvVxN9oqNp3j0itTvp2t1xbbkBEyLdbJlapXOFcDr7gJeKoEukYBksNVfVWY5ThFItOa5WIsKc1Rmk4KIcMc1iX6FTjg7gshkOxw4RElFB28vV2OwjiWB7PGf0BPU2yuNnaXWv3wDg015trNkM7aHKq28lZKPnQcf1GQVmP5oH9Tu3z//NxMh/diXz0/iYfXYOPD9E9FCbfUJ3mogSVqXFmlhxTkYYdEVRVMGLjdjarBuUlkUZvSccwhnE6bkN/7r8RHd35KzoGJh3PD2yfj+F78dEz2jd2LzqXLsQpl6prmo1vN+bsTXWemYbToNF8/X/mBfDCVyPnmiPnj0yc8wdPgBC7372TrvfP7K1+AQ==</diagram></mxfile>
|
2206.08476/main_diagram/main_diagram.pdf
ADDED
|
Binary file (9.96 kB). View file
|
|
|
2206.08476/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,138 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
A typical problem in deep learning (DL) applications is to find a good model for a given dataset D in a restrictive time budget. In the case of tabular data, a popular approach for solving this problem is automated machine learning (AutoML), as implemented, e.g., in Auto-sklearn [\(Feurer et al.,](#page-9-0) [2015a\)](#page-9-0) or Auto-Gluon [\(Erickson et al.,](#page-9-1) [2020\)](#page-9-1). However, in domains such as computer vision and natural language processing, a better solution, especially under low resource constraints, is typically to fine-tune an existing pre-trained model. This, at first glance, appears to render AutoML unnecessary for those domains. However, as we will demonstrate in this paper, AutoML and pre-trained models can be
|
| 4 |
+
|
| 5 |
+
|
| 6 |
+
|
| 7 |
+
combined to yield much stronger performance than either of them alone.
|
| 8 |
+
|
| 9 |
+
A great advantage of fine-tuning pre-trained models is strong anytime performance: the use of pre-trained models often allows one to obtain very strong performance orders of magnitude faster than when training a model from scratch. In many practical applications, this strong anytime performance is crucial, e.g., for DL-based recognition systems in manufacturing, or automated DL (AutoDL) web services. The clock starts ticking as soon as a new dataset is available, and it would be far too costly to train a new model from scratch, let alone optimize its hyperparameters. The recent ChaLearn AutoDL competition [\(Liu et al.,](#page-10-0) [2021\)](#page-10-0) mimicked these tight time constraints, rewarding performance obtained in an anytime fashion.
|
| 10 |
+
|
| 11 |
+
While fine-tuning pre-trained models enjoys strong anytime performance, it also introduces many additional degrees of freedom. Firstly, we need to select a pre-trained network to fine-tune. To obtain good anytime performance, we may even want to start by training a shallow model to obtain good results quickly, and at some point switch to fine-tuning a deeper model. There are many additional degrees of freedom in this fine-tuning phase, concerning learning rates, data augmentation, and regularization techniques. We refer to the combination of a pre-trained model and the fully specified fine-tuning phase, including its hyperparameters, as a *DL pipeline*. Which DL pipeline works best depends heavily on the dataset, for instance, datasets with high-resolution images may favor the use of more downsampling layers than the low-resolution images of the CIFAR dataset [\(Krizhevsky](#page-10-1) [et al.,](#page-10-1) [2009\)](#page-10-1); likewise, datasets with few images may favor smaller learning rates. We, therefore, require an automated method for selecting the best DL pipeline based on the characteristics of the dataset at hand.
|
| 12 |
+
|
| 13 |
+
In this paper, we tackle this problem by meta-learning a model across datasets that enables zero-shot DL pipeline selection. Specifically, we create a meta-dataset holding the performance of many DL pipelines on a broad range of datasets. Using this meta-dataset, we then learn a function that selects the right DL pipeline based on the properties of the dataset (e.g., the image resolution and the number of images) in a zero-shot setting. To learn this selection function, we first formulate DL pipeline selection as a classical
|
| 14 |
+
|
| 15 |
+
<sup>\*</sup>Equal contribution <sup>1</sup>University of Freiburg <sup>2</sup>University of Hildesheim <sup>3</sup>Bosch Center for Artificial Intelligence. Correspondence to: Fabio Ferreira <ferreira@cs.uni-freiburg.de>.
|
| 16 |
+
|
| 17 |
+
algorithm selection (AS) problem [\(Rice,](#page-11-0) [1976\)](#page-11-0) and then improve upon this formulation by recognizing DL pipelines as points in a geometric space, which allows information about the performance of some pipelines to inform predictions for others. We then train a deep neural network with a pairwise ranking objective to emphasize the rank of the DL pipeline predicted to perform best in a manner that automatically normalizes across datasets. Note that we use the *zero-shot* nomenclature not to refer to samples of unseen classes but to express that we cannot even afford a single exploratory evaluation of a pipeline and instead need to directly select a suitable one.
|
| 18 |
+
|
| 19 |
+
Our contributions can be summarized as:
|
| 20 |
+
|
| 21 |
+
- We extend AutoML to best exploit pre-trained models by meta-learning to select the best DL pipeline conditional on dataset meta-features.
|
| 22 |
+
- We introduce a large meta-dataset with the performances of 525 DL pipelines across 35 image-classification datasets with 15 augmentations each. With 525 × 35 × 15 = 275,625 entries, it is, to the best of our knowledge, the first DL meta-dataset for image classification of this size, being over 1000 times larger than previous meta-datasets [\(Triantafillou et al.,](#page-11-1) [2019\)](#page-11-1).
|
| 23 |
+
- We go beyond a standard formulation as an algorithm selection problem by formulating the new problem of selecting a DL pipeline as a point in a geometric space to exploit similarities between DL pipelines.
|
| 24 |
+
- We introduce a novel zero-shot AutoDL method that addresses this pipeline selection problem with a pairwise ranking loss.
|
| 25 |
+
- In the setting of the recent ChaLearn AutoDL challenge [\(Liu et al.,](#page-10-0) [2021\)](#page-10-0), our zero-shot AutoDL approach dominates all competitors on a broad range of 35 image datasets, as well as in the challenge itself.
|
| 26 |
+
|
| 27 |
+
To foster reproducibility, we make our PyTorch [\(Paszke](#page-11-2) [et al.,](#page-11-2) [2019\)](#page-11-2) code, models, and data publicly available at [this URL](https://github.com/automl/zero-shot-automl-with-pretrained-models).
|
| 28 |
+
|
| 29 |
+
# Method
|
| 30 |
+
|
| 31 |
+
Let $\mathcal{X} := \{x_n\}_{n=1}^N$ denote a set of N distinct deep learning (DL) pipelines. Every DL pipeline $x_n := (M_n, \theta_n)$ comprises a pre-trained model $M_n \in \mathcal{M}$ and fine-tuning hyperparameters $\theta_n \in \Theta$ that are used to fine-tune $M_n$ to a given dataset. Furthermore, let $\mathcal{D} = \{D_i\}_{i=1}^I$ denote a collection of I datasets, where each dataset $D_i \in \mathcal{D}$ is split into disjoint training, validation and testing subsets $D_i := D_i^{(\text{tr})} \cup D_i^{(\text{val})} \cup D_i^{(\text{test})}$ . For each dataset, we are given a vector of K descriptive characteristics (a.k.a. metafeatures), such as the number of data points and the image resolution, as $\phi_i \in \Phi \subseteq \mathbb{R}^K$ (see Section 4.1 for the full set of meta-features we used in our experiments). We denote by $x_n^{(\mathrm{ft})} := \mathrm{Tune}\left(x_n, D^{(\mathrm{tr})}, D^{(\mathrm{val})}\right)$ the model resulting from fine-tuning the pre-trained model $M_n$ with hyperparameters $\theta_n$ on training data $D^{(\text{tr})}$ using validation data $D^{(\text{val})}$ for early stopping. Then, denoting the loss of a fine-tuned model $x_n^{(\mathrm{ft})}$ on the test split of the same dataset D as $\mathcal{L}(x_n^{(\mathrm{ft})}, D^{(\mathrm{test})})$ , the test cost of DL pipeline $x_n$ on D is defined as:
|
| 32 |
+
|
| 33 |
+
$$C(x_n, D) = \mathcal{L}\left(\text{Tune}\left(x_n, D^{(\text{tr})}, D^{(\text{val})}\right), D^{(\text{test})}\right).$$
|
| 34 |
+
(1)
|
| 35 |
+
|
| 36 |
+
<span id="page-2-0"></span>**Definition 1.** Given a set of N DL pipelines $\mathcal{X} := \{x_n\}_{n=1}^N$ and a collection of I datasets $\mathcal{D} = \{D_i\}_{i=1}^I$ with metafeatures $\phi_i$ for dataset $D_i \in \mathcal{D}$ , and a $N \times I$ matrix of costs $C(x_n, D_i)$ representing the cost of pipeline $x_n$ on dataset $D_i$ , the problem of **zero-shot AutoML** with pretrained models (**ZAP**) is to find a mapping $f : \Phi \to \mathcal{X}$ that yields minimal expected cost over $\mathcal{D}$ :
|
| 37 |
+
|
| 38 |
+
$$\operatorname{argmin}_{f} \mathbb{E}_{i \sim \{1, \dots, I\}} \left[ C(f(\phi_i), D_i) \right]. \tag{2}$$
|
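To make the objective concrete, the following minimal sketch (all names are illustrative, not the paper's implementation) shows how a candidate selector $f$ could be scored empirically against the cost matrix, alongside two natural reference points (per-dataset oracle and single best pipeline):

```python
# Illustrative sketch of the empirical version of Eq. (2); names are hypothetical.
# cost[n, i] = C(x_n, D_i); meta[i] = phi_i; a selector maps meta-features to a pipeline index.
import numpy as np

def expected_cost(selector, cost: np.ndarray, meta: np.ndarray) -> float:
    """Average cost of the pipeline the selector picks for each dataset."""
    return float(np.mean([cost[selector(meta[i]), i] for i in range(cost.shape[1])]))

def oracle_cost(cost: np.ndarray) -> float:
    """Per-dataset best pipeline (a lower bound for any selector)."""
    return float(cost.min(axis=0).mean())

def single_best_cost(cost: np.ndarray) -> float:
    """One fixed pipeline for all datasets (a selector that ignores meta-features)."""
    return float(cost.mean(axis=1).min())
```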
| 39 |
+
|
| 40 |
+
The problem of zero-shot AutoML with pre-trained models from Definition 1 can be directly formulated as an algorithm selection problem: the DL pipelines $\mathcal{X} := \{x_n\}_{n=1}^N$ are the algorithms $\mathcal{P}$ , the datasets $\{D_i\}_{i=1}^I$ are the instances $\mathcal{I}$ , and the test cost $C(x_n, D)$ of DL pipeline $x_n$ on D defines the cost metric $m: \mathcal{P} \times \mathcal{I} \to \mathbb{R}$ . We use the state-of-the-art
|
| 41 |
+
|
| 42 |
+

|
| 43 |
+
|
| 44 |
+
Figure 1. ZAP consists of two stages. In the meta-train stage, the cost matrix on the source tasks is leveraged to learn a joint response surface conditioned on the meta-features and pipelines. During the meta-test stage, ZAP scores the candidate pipelines for an unseen dataset and selects the best-scoring one.
|
| 45 |
+
|
| 46 |
+
algorithm selection system AutoFolio to learn a selector between our predefined DL pipelines, since it subsumes approaches based on regression, classification, clustering, and cost-sensitive classification, and selects the best one for the data at hand (Lindauer et al., 2015).
|
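AutoFolio's own interface and internal model selection are not reproduced here; purely as a rough stand-in for this basic algorithm-selection view (explicitly not the AutoFolio implementation), one could train a multi-class classifier that predicts the per-dataset best pipeline from the meta-features:

```python
# Rough stand-in for the algorithm-selection formulation (NOT AutoFolio itself):
# predict the index of the per-dataset best pipeline from dataset meta-features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

cost = np.random.rand(525, 525)      # placeholder for the cost matrix C(x_n, D_i)
meta = np.random.rand(525, 4)        # placeholder for the meta-features phi_i (K = 4)

labels = cost.argmin(axis=0)         # per-dataset index of the best pipeline
selector = RandomForestClassifier(n_estimators=200, random_state=0).fit(meta, labels)
choice = selector.predict(meta[:1])  # zero-shot pick for a "new" dataset
```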
| 47 |
+
|
| 48 |
+
While this formulation of zero-shot AutoML with pre-trained models as algorithm selection will turn out to already yield very strong performance, it has one limitation: algorithm selection abstracts away our DL pipelines as uncorrelated algorithms, losing all information about the pre-trained models they are based on and about the hyperparameters used for fine-tuning. This information would, e.g., allow us to transfer cost observations from evaluated DL pipelines to other DL pipelines with similar settings without ever having run the latter. Thus, we next introduce a novel approach for exploiting this knowledge.
|
| 49 |
+
|
| 50 |
+
We now describe a variant of our formulation of zero-shot AutoML that exploits the fact that the DL pipelines between which we select are points in a geometric space, and that we can see the space of DL pipelines we consider as a search space for hyperparameter optimization (HPO), with a categorical value for the choice of pre-trained model and continuous fine-tuning hyperparameters; we can then use concepts from zero-shot HPO to tackle this problem.
|
| 51 |
+
|
| 52 |
+
We define $\mathcal{M}$ as a finite collection of N pre-trained models and represent each instance $M_n$ as a one-hot encoded vector in $\{0,1\}^{N}$, and $\theta_n \in \Theta \subseteq \mathbb{R}^L$ as a vector of continuous variables defining its respective fine-tuning hyperparameters. For instance, $\Theta$ can represent the continuous space of learning rates and dropout values of a pre-trained neural network model in $\mathcal{M}$. Consequently, the DL pipelines are
|
| 53 |
+
|
| 54 |
+
projected to the geometric space defined by $\mathcal{X} \subseteq \mathcal{M} \times \Theta$, which can be viewed as a hyperparameter configuration space in which the choice of pre-trained model is simply a categorical variable.
|
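As a minimal sketch of this encoding (field names and the chosen hyperparameters are illustrative, not the full 26-dimensional space), a pipeline $x = (M_n, \theta_n)$ can be mapped to a vector by concatenating a one-hot model indicator with its continuous fine-tuning hyperparameters:

```python
# Minimal sketch of embedding a DL pipeline x = (M_n, theta_n) as a point in M x Theta.
import numpy as np

MODELS = ["ResNet18", "EffNet-b0", "EffNet-b1", "EffNet-b2"]

def encode_pipeline(model: str, learning_rate: float, weight_decay: float,
                    frozen_fraction: float) -> np.ndarray:
    one_hot = np.eye(len(MODELS))[MODELS.index(model)]   # categorical model choice
    theta = np.array([np.log10(learning_rate),           # log-scaled continuous values
                      np.log10(weight_decay),
                      frozen_fraction])
    return np.concatenate([one_hot, theta])

x = encode_pipeline("EffNet-b0", learning_rate=3e-4, weight_decay=1e-4, frozen_fraction=0.5)
```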
| 55 |
+
|
| 56 |
+
Denote by $f_{\psi}$ a parametric surrogate with parameters $\psi$ that estimates the test cost observed when fine-tuning the DL pipeline $x_j$ on dataset $D_i$ with meta-features $\phi_i$. The surrogate fuses (i) the pipeline description (i.e., $x$ represented by the one-hot encoding of the pre-trained model in $\{0,1\}^{N}$ and the fine-tuning hyperparameters $\theta \in \Theta$) with (ii) the dataset meta-features $\phi$ in order to estimate the cost after fine-tuning. Formally, that is:
|
| 57 |
+
|
| 58 |
+
$$f(\psi)_{i,j} := f(x_j, \phi_i; \psi) : \mathcal{M} \times \Theta \times \Phi \to \mathbb{R}_+$$
|
| 59 |
+
(3)
|
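The concrete form of $f_{\psi}$ is not fixed at this point; as one possible instantiation (a sketch, not necessarily the architecture used in the experiments), a small PyTorch MLP over the concatenated pipeline encoding and meta-features would look like:

```python
# One possible instantiation of the surrogate f(x, phi; psi) from Eq. (3): a small MLP.
import torch
import torch.nn as nn

class Surrogate(nn.Module):
    def __init__(self, pipeline_dim: int, meta_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pipeline_dim + meta_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),   # Softplus keeps the output in R_+
        )

    def forward(self, x: torch.Tensor, phi: torch.Tensor) -> torch.Tensor:
        # x: (batch, pipeline_dim) pipeline encodings; phi: (batch, meta_dim) meta-features
        return self.net(torch.cat([x, phi], dim=-1)).squeeze(-1)
```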
| 60 |
+
|
| 61 |
+
<span id="page-3-2"></span>A unique aspect of searching for efficient pipelines is that we are concerned with the relative cost of the pipelines, to find the best one. As such, we propose to utilize the surrogate model as a proxy function for the rank of configurations, and learn the pairwise cost ranking of pairs of pipelines. In this perspective, pairwise ranking strategies use the relative ordering between pairs of configurations to optimize the probability that the rank of the j-th pipeline is lower than the k-th pipeline on the i-th dataset. Therefore, using given pre-computed cost $C_{i,j} = C(x_j, D_i)$ we define the set of triples $\mathcal{E} := \{(i,j,k) \mid C(x_j,D_i) < C(x_k,D_i)\}$ . Every triple (i, j, k) denotes a pair $(x_j, x_k)$ , where the cost of $x_j$ is smaller (better pipeline) than $x_k$ on the *i*-th dataset. Correspondingly, we want our surrogate to predict $f(\psi)_{i,j}$ to be lower than $f(\psi)_{i,k}$ ; we thus meta-learn our surrogate with a ranking loss as:
|
| 62 |
+
|
| 63 |
+
<span id="page-3-0"></span>
|
| 64 |
+
$$\underset{\psi}{\operatorname{arg\,min}} \sum_{(i,j,k)\in\mathcal{E}} \log \left(\sigma \left(f\left(\psi\right)_{i,j} - f\left(\psi\right)_{i,k}\right)\right), \quad (4)$$
|
| 65 |
+
|
| 66 |
+
with $\sigma(\cdot)$ as the sigmoid function which prevents the difference from exploding to negative infinity as we minimize the loss. Equation 4 maximizes the gap between the surrogate scores, by decreasing the surrogate score for the good DL pipelines with low costs, while at the same time increasing the surrogate score of bad pipelines with high costs. As a result, the score of the best DL pipeline with the lowest cost will be maximally decreased. Furthermore, Figure 2 presents a visual description of our proposed ranking loss.
|
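A compact sketch of this objective, written exactly as in Equation 4 over scores $f(\psi)_{i,n}$ that have already been computed for all (dataset, pipeline) pairs, could read as follows (in practice one would subsample the triples per mini-batch rather than enumerate all of them):

```python
# Sketch of the pairwise ranking objective of Eq. (4). scores and cost are (I, N) tensors
# holding f(psi)_{i,n} and C_{i,n}; the set E contains all (i, j, k) with C[i, j] < C[i, k].
import torch

def ranking_objective(scores: torch.Tensor, cost: torch.Tensor) -> torch.Tensor:
    i, j, k = torch.where(cost.unsqueeze(2) < cost.unsqueeze(1))   # enumerate E
    return torch.log(torch.sigmoid(scores[i, j] - scores[i, k])).sum()

# Minimizing this objective pushes the scores of low-cost pipelines down and the scores of
# high-cost pipelines up, as described in the text.
```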
| 67 |
+
|
| 68 |
+

|
| 69 |
+
|
| 70 |
+
<span id="page-3-1"></span>Figure 2. Overview of our pairwise ranking objective
|
| 71 |
+
|
| 72 |
+
Once we have meta-learned the surrogate, we can transfer it to a new dataset $D^{(\text{new})}$ with meta-features $\phi^{(\text{new})}$ in a zero-shot HPO manner using Equation [5.](#page-4-1) The full meta-train and meta-test procedure is depicted in Figure [1.](#page-3-2)
|
| 73 |
+
|
| 74 |
+
<span id="page-4-1"></span>$$x^{(\text{new})} := \underset{x_n,\; n \in \{1, \dots, N\}}{\arg \min} f\left(x_n, \phi^{(\text{new})}; \psi\right)$$
|
| 75 |
+
(5)
|
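Using the surrogate sketched above, the zero-shot selection step of Equation 5 reduces to scoring every candidate pipeline with the new dataset's meta-features and taking the arg min, e.g.:

```python
# Sketch of the zero-shot selection step of Eq. (5); names follow the sketches above.
import torch

@torch.no_grad()
def select_pipeline(surrogate, pipeline_encodings: torch.Tensor, phi_new: torch.Tensor) -> int:
    # pipeline_encodings: (N, pipeline_dim); phi_new: (meta_dim,) meta-features of D^(new)
    phi = phi_new.unsqueeze(0).expand(pipeline_encodings.size(0), -1)
    scores = surrogate(pipeline_encodings, phi)
    return int(scores.argmin().item())   # pipeline with the lowest predicted cost
```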
| 76 |
+
|
| 77 |
+
<span id="page-4-3"></span>For an empirical motivation on the benefits of learning surrogates with pairwise ranking losses, we compare to the same surrogate model optimized with a least-squares loss:
|
| 78 |
+
|
| 79 |
+
$$\arg\min_{\psi} \sum_{i=1}^{I} \sum_{n=1}^{N} \left( f(\psi)_{i,n} - C(x_n, D_i) \right)^2$$
|
| 80 |
+
(6)
|
| 81 |
+
|
| 82 |
+
As a sanity check, we also compare against randomly selecting a pipeline. In this experiment, we evaluate the performance of the pipeline selected by the surrogate via Equation [5](#page-4-1) across all the I source datasets. The results of Figure [3](#page-4-2) demonstrate that the surrogate trained with Equation [4](#page-3-0) is significantly better than the regression-based variant of Equation [6](#page-4-3) in terms of identifying the best pipeline. Further details about the evaluation protocol are found in Section [5.1.](#page-6-1)
|
| 83 |
+
|
| 84 |
+

|
| 85 |
+
|
| 86 |
+
<span id="page-4-2"></span>Figure 3. Critical difference diagram comparing loss functions using the Wilcoxon-Holm signed-rank (5% significance level).
|
| 87 |
+
|
| 88 |
+
In this section, we introduce a novel meta-dataset [\(Pineda-Arango et al.,](#page-11-21) [2021\)](#page-11-21) that will ultimately allow us to perform zero-shot AutoML with pre-trained models (ZAP). The meta-data required for the ZAP problem includes a set of datasets with meta-features, a set of DL pipelines, and the test costs for these pipelines on these datasets. Correspondingly, we describe how we curated a set of 35 image datasets, with 15 augmentations each [\(4.1\)](#page-4-0); define a space of DL pipelines [\(4.2\)](#page-4-4); and find strong instantiations in it for each of the datasets, each of which we evaluate on all datasets to obtain a 525 × 525 matrix of test costs [\(4.3\)](#page-5-0).
|
| 89 |
+
|
| 90 |
+
The set of datasets should be chosen to be representative of the datasets that will eventually be tackled by the ZAP system built on them. While all our pre-trained networks are pre-trained on ImageNet [\(Deng et al.,](#page-9-12) [2009\)](#page-9-12), smaller and more specialized datasets are also to be expected during the fine-tuning stage. Consequently, we chose both small and large, as well as diverse, datasets that cover a wide range of domains (objects, medical, aerial, drawings, etc.) and formats, i.e., colored and black-and-white images as well as varying image resolutions and numbers of classes. With this preference in mind, we retrieved 35 *core* datasets provided by the TensorFlow [\(Abadi et al.,](#page-9-13) [2015\)](#page-9-13) Datasets (TFDS) utility library [\(Google,](#page-9-14) [2021\)](#page-9-14) and applied a dataset augmentation process [\(Stoll,](#page-11-22) [2020\)](#page-11-22) that takes a TFDS core dataset as input and outputs a subset of it that differs in the number of classes and the number of train/test samples per class. Note that this dataset augmentation process does not perform augmentations on the sample level. We repeat this subset retrieval 15 times for each dataset, resulting in 525 datasets $\mathcal{D}$. Further details about the augmentation process are found in Appendix [A.2.](#page-13-0)
|
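The actual augmentation procedure follows [\(Stoll,](#page-11-22) [2020\)](#page-11-22); purely as a hypothetical illustration of the idea of deriving subsets that vary in the number of classes and samples per class (not the actual tool), one could subsample a labeled dataset like this:

```python
# Hypothetical illustration of the dataset "augmentation" idea (not the actual tool):
# derive a subset of a core dataset with a random subset of classes and a random
# number of samples per class. No sample-level augmentation is performed.
import numpy as np

def subsample_dataset(labels: np.ndarray, rng: np.random.Generator,
                      min_classes: int = 2, max_per_class: int = 500) -> np.ndarray:
    classes = np.unique(labels)
    n_keep = int(rng.integers(min_classes, len(classes) + 1))
    keep = rng.choice(classes, size=n_keep, replace=False)
    per_class = int(rng.integers(1, max_per_class + 1))
    idx = [rng.permutation(np.flatnonzero(labels == c))[:per_class] for c in keep]
    return np.concatenate(idx)          # indices of the retained samples

subset_idx = subsample_dataset(np.random.randint(0, 10, size=5000), np.random.default_rng(0))
```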
| 91 |
+
|
| 92 |
+
To represent a dataset, we use only extremely cheap and readily available dataset-dependent meta-features [\(Hutter](#page-9-15) [et al.,](#page-9-15) [2020\)](#page-9-15) φ: number of training images, number of image channels, image resolution, and number of classes.
|
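These four meta-features are trivial to compute; a minimal sketch for an image-classification dataset stored as arrays (the array layout and the use of the mean of height and width as the "resolution" value are assumptions for illustration) could be:

```python
# Minimal sketch of extracting the four meta-features phi used here.
import numpy as np

def meta_features(images: np.ndarray, labels: np.ndarray) -> np.ndarray:
    # images assumed to have shape (n_images, height, width, channels)
    n, height, width, channels = images.shape
    resolution = (height + width) / 2   # illustrative scalar summary of the resolution
    return np.array([n, channels, resolution, len(np.unique(labels))], dtype=np.float32)
```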
| 93 |
+
|
| 94 |
+
The DL pipelines we employ should be chosen to be diverse and achieve high performance on the aforementioned datasets since the optimum we can hope for is to choose the best of these pipelines on a per-dataset basis. To obtain strong pipelines, we started from the code base of the winner of the AutoCV competition [\(Baek et al.,](#page-9-8) [2020\)](#page-9-8), which fine-tuned a pre-trained ResNet-18 model. We then built a highly-parameterized space of DL pipelines around this by exposing a wide range of degrees of freedom. These included fine-tuning hyperparameters, such as learning rate, percentage of frozen parameters, weight decay, and batch size. Additionally, we exposed hyperparameters for the online execution that were previously hard-coded and that control, e.g., the number of samples used or when to evaluate progress with the validation dataset. To span a more powerful space with diverse pipelines, we also added additional architectural, optimization, as well as fine-tuning choices, including:
|
| 95 |
+
|
| 96 |
+
- A binary choice between an EfficientNet [\(Tan & Le,](#page-11-23) [2019\)](#page-11-23) pre-trained on ImageNet [\(Russakovsky et al.,](#page-11-24) [2015\)](#page-11-24) or the previously-used ResNet-18;
|
| 97 |
+
- The proportion of weights frozen when fine-tuning;
|
| 98 |
+
- Additional stochastic optimizers (Adam [\(Kingma &](#page-10-15) [Ba,](#page-10-15) [2015\)](#page-10-15), AdamW [\(Loshchilov & Hutter,](#page-10-16) [2018\)](#page-10-16), Nesterov accelerated gradient [\(Nesterov,](#page-10-17) [1983\)](#page-10-17)) and learning rate schedules (plateau, cosine [\(Loshchilov & Hut](#page-10-18)[ter,](#page-10-18) [2017\)](#page-10-18));
|
| 99 |
+
- A choice of using a simple classifier (either an SVM,
|
| 100 |
+
|
| 101 |
+
random forest or logistic regression) that can be trained and used within the first 90 seconds of run-time in order to improve anytime performance.
|
| 102 |
+
|
| 103 |
+
Overall, our DL pipeline space $\mathcal{X}$ comprises 26 hyperparameters of real-valued, integer-valued, categorical, and conditional types. A condensed version is presented in Table 1.
|
| 104 |
+
|
| 105 |
+
*Table 1.* **The search space of our DL pipelines** consisting of general DL hyperparameters, training-strategy hyperparameters and fine-tuning strategy hyperparameters. A more detailed version can be found in Appendix A.1.
|
| 106 |
+
|
| 107 |
+
| Name | Type, Scale | Range |
|
| 108 |
+
|------------------------------|-------------|-----------------------|
|
| 109 |
+
| Batch size | int, log | [16, 64] |
|
| 110 |
+
| Learning rate | float, log | $[10^{-5}, 10^{-1}]$ |
|
| 111 |
+
| Weight decay | float, log | $[10^{-5}, 10^{-2}]$ |
|
| 112 |
+
| Momentum | float | [0.01, 0.99] |
|
| 113 |
+
| Optimizer | cat | {SGD, Adam, |
|
| 114 |
+
| | | $AdamW$ } |
|
| 115 |
+
| Scheduler | cat | {plateau, cosine} |
|
| 116 |
+
| Architecture | cat | {ResNet18, EffNet-b0, |
|
| 117 |
+
| | | EffNet-b1, EffNet-b2} |
|
| 118 |
+
| Steps per epoch | int, log | [5, 250] |
|
| 119 |
+
| Early epoch | int | [1,3] |
|
| 120 |
+
| CV ratio | float | [0.05, 0.2] |
|
| 121 |
+
| Max valid count | int, log | [128, 512] |
|
| 122 |
+
| Skip valid thresh. | float | [0.7, 0.95] |
|
| 123 |
+
| Test freq. | int | [1, 3] |
|
| 124 |
+
| Max inner loop | float | [0.1, 0.3] |
|
| 125 |
+
| # init samples | int,log | [128, 512] |
|
| 126 |
+
| Max input size | int | [5, 7] |
|
| 127 |
+
| 1 <sup>st</sup> simple model | cat | {true, false} |
|
| 128 |
+
| Simple model | cat | {SVC, NuSVC, RF, LR} |
|
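As an illustration of how such a space can be written down programmatically, the following sketch declares a small fragment of Table 1 with the ConfigSpace library (0.x-style API; only a few of the 26 hyperparameters are shown, and the conditional structure is an assumption):

```python
# Sketch of a fragment of the DL pipeline search space using ConfigSpace (0.x API).
from ConfigSpace import ConfigurationSpace
from ConfigSpace.conditions import EqualsCondition
from ConfigSpace.hyperparameters import (CategoricalHyperparameter,
                                         UniformFloatHyperparameter,
                                         UniformIntegerHyperparameter)

cs = ConfigurationSpace(seed=1)
cs.add_hyperparameters([
    UniformIntegerHyperparameter("batch_size", lower=16, upper=64, log=True),
    UniformFloatHyperparameter("learning_rate", lower=1e-5, upper=1e-1, log=True),
    UniformFloatHyperparameter("weight_decay", lower=1e-5, upper=1e-2, log=True),
    CategoricalHyperparameter("optimizer", ["SGD", "Adam", "AdamW"]),
    CategoricalHyperparameter("architecture",
                              ["ResNet18", "EffNet-b0", "EffNet-b1", "EffNet-b2"]),
    CategoricalHyperparameter("first_simple_model", ["true", "false"]),
    CategoricalHyperparameter("simple_model", ["SVC", "NuSVC", "RF", "LR"]),
])
# Assumed condition: the simple-model choice is only active if a simple model is used at all.
cs.add_condition(EqualsCondition(cs.get_hyperparameter("simple_model"),
                                 cs.get_hyperparameter("first_simple_model"), "true"))
```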
| 129 |
+
|
| 130 |
+
With the 525 datasets and our 26-dimensional DL pipeline space at our disposal, we now explain how we generated the DL pipeline candidates that we evaluated on the datasets. Instead of sampling the 26-dimensional DL pipeline space uniformly at random, we ran an optimization process to find a (near-)optimal DL pipeline for one dataset at a time, in order to focus on DL pipelines that are strong on at least one dataset. Specifically, we used the hyperparameter optimization method BOHB (Falkner et al., 2018), which supports high-dimensional and categorical hyperparameter spaces, to find a (near-)optimal instantiation of our DL pipeline space for each dataset. We optimized the anytime Area under the Learning Curve (ALC) score (introduced in the AutoDL challenge (Liu et al., 2021) and described in more detail in Section 5.1) via BOHB, with a budget of five minutes for evaluating one DL pipeline on one dataset. We repeated each of these runs three times and used the mean to handle the substantial noise in these evaluations. This process resulted in one optimized DL pipeline per dataset; we thus have $N = I = 525$ DL pipelines that comprise the set $\mathcal{X}$ of DL pipelines in our ZAP formulation.
|
| 131 |
+
|
| 132 |
+
Given this set of 525 DL pipelines $\mathcal{X}$, and the set of our 525 datasets $\mathcal{D}$, let us now explain the evaluation procedure. We ran each pipeline $x \in \mathcal{X}$ on each dataset $D \in \mathcal{D}$, computing the ALC score the pipelines reached within 10 minutes, and again computing the mean of three runs to reduce noise. While the AutoDL competition used a budget of 20 minutes, we used a shorter time of 10 minutes here (and 5 minutes for the runs of BOHB above) for two reasons: First, to limit the substantial computational overhead for carrying out these $525 \cdot 525 = 275{,}625$ evaluations of (DL pipeline, dataset) pairs; still, it required 2,871 GPU days to collect this data. Second, due to the typically monotonically increasing anytime ALC score, performance after 5 and 10 minutes can be expected to be a good proxy for the full 20 minutes.
|
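Schematically (function names are placeholders for the actual training and scoring code), the cost matrix is assembled by a double loop over pipelines and datasets with three repetitions each:

```python
# Sketch of assembling the 525 x 525 cost matrix; evaluate_alc is a placeholder for
# running one DL pipeline on one dataset under the anytime-evaluation protocol.
import numpy as np

def build_cost_matrix(pipelines, datasets, evaluate_alc, repeats: int = 3) -> np.ndarray:
    C = np.zeros((len(pipelines), len(datasets)))
    for n, pipeline in enumerate(pipelines):
        for i, dataset in enumerate(datasets):
            scores = [evaluate_alc(pipeline, dataset, budget_minutes=10)
                      for _ in range(repeats)]
            C[n, i] = float(np.mean(scores))   # average of three runs to reduce noise
    return C
```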
| 133 |
+
|
| 134 |
+
Finally, we record every pair's average-of-three ALC score in the cost matrix $C \in \mathbb{R}^{N \times I}$ (in our case with $N = I = 525$, since we found one DL pipeline per dataset). This cost matrix is visualized in Figure 4. From the cost matrix, we directly see that there are easy datasets (at the top, where all pipelines achieve high scores) and hard ones at the bottom (where only very few pipelines reach high scores). Likewise, there are overall strong pipelines (to the left, with good scores on most datasets) and poor ones (on the right, with good scores on only a few datasets). The most interesting pattern for ZAP is that there exists substantial horizontal and vertical striping, indicating that different datasets are hard for different pipelines. This points to the usefulness of selecting pipelines in a dataset-dependent manner in ZAP.
|
| 135 |
+
|
| 136 |
+

|
| 137 |
+
|
| 138 |
+
<span id="page-5-1"></span>Figure 4. Cost matrix C as a heatmap Color indicates the ALC score (higher is better). We observe that some datasets (dark rows) are more complex and some pipelines (dark columns) generalize worse across datasets systematically than others.
|
2207.05315/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|