Add files using upload-large-folder tool
(View limited to 50 files because the commit contains too many changes; see the raw diff for the full change set.)
- 2003.06658/main_diagram/main_diagram.drawio +0 -0
- 2003.06658/main_diagram/main_diagram.pdf +0 -0
- 2003.06658/paper_text/intro_method.md +68 -0
- 2012.05688/main_diagram/main_diagram.drawio +1 -0
- 2012.05688/paper_text/intro_method.md +64 -0
- 2101.01000/main_diagram/main_diagram.drawio +1 -0
- 2101.01000/main_diagram/main_diagram.pdf +0 -0
- 2101.01000/paper_text/intro_method.md +177 -0
- 2104.06313/main_diagram/main_diagram.drawio +1 -0
- 2104.06313/main_diagram/main_diagram.pdf +0 -0
- 2104.06313/paper_text/intro_method.md +149 -0
- 2104.08790/main_diagram/main_diagram.drawio +1 -0
- 2104.08790/main_diagram/main_diagram.pdf +0 -0
- 2104.08790/paper_text/intro_method.md +141 -0
- 2108.12841/main_diagram/main_diagram.drawio +1 -0
- 2108.12841/main_diagram/main_diagram.pdf +0 -0
- 2108.12841/paper_text/intro_method.md +129 -0
- 2109.14982/main_diagram/main_diagram.drawio +1 -0
- 2109.14982/main_diagram/main_diagram.pdf +0 -0
- 2109.14982/paper_text/intro_method.md +81 -0
- 2110.06084/main_diagram/main_diagram.drawio +1 -0
- 2110.06084/paper_text/intro_method.md +35 -0
- 2110.11852/main_diagram/main_diagram.drawio +1 -0
- 2110.11852/main_diagram/main_diagram.pdf +0 -0
- 2110.11852/paper_text/intro_method.md +75 -0
- 2111.14893/main_diagram/main_diagram.drawio +0 -0
- 2111.14893/paper_text/intro_method.md +83 -0
- 2112.08609/main_diagram/main_diagram.drawio +1 -0
- 2112.08609/main_diagram/main_diagram.pdf +0 -0
- 2112.08609/paper_text/intro_method.md +163 -0
- 2201.00520/main_diagram/main_diagram.drawio +0 -0
- 2201.00520/paper_text/intro_method.md +117 -0
- 2202.12162/main_diagram/main_diagram.drawio +1 -0
- 2202.12162/paper_text/intro_method.md +31 -0
- 2208.02080/main_diagram/main_diagram.drawio +0 -0
- 2208.02080/main_diagram/main_diagram.pdf +0 -0
- 2208.02080/paper_text/intro_method.md +19 -0
- 2209.00638/main_diagram/main_diagram.drawio +1 -0
- 2209.00638/main_diagram/main_diagram.pdf +0 -0
- 2209.00638/paper_text/intro_method.md +91 -0
- 2210.13611/main_diagram/main_diagram.drawio +1 -0
- 2210.13611/main_diagram/main_diagram.pdf +0 -0
- 2210.13611/paper_text/intro_method.md +32 -0
- 2210.13918/main_diagram/main_diagram.drawio +1 -0
- 2210.13918/main_diagram/main_diagram.pdf +0 -0
- 2210.13918/paper_text/intro_method.md +95 -0
- 2211.14391/main_diagram/main_diagram.drawio +1 -0
- 2211.14391/main_diagram/main_diagram.pdf +0 -0
- 2211.14391/paper_text/intro_method.md +87 -0
- 2211.16022/main_diagram/main_diagram.drawio +1 -0
2003.06658/main_diagram/main_diagram.drawio
ADDED
(diff too large to render)

2003.06658/main_diagram/main_diagram.pdf
ADDED
Binary file (21 kB)
2003.06658/paper_text/intro_method.md
ADDED
@@ -0,0 +1,68 @@
# Introduction

As a crucial characteristic of human cognition, systematic generalization reflects people's ability to learn infinite combinations of finite concepts [@chomsky1957syntactic; @montague1970universal]. However, weak systematic compositionality has long been considered a primary obstacle to the expression of language and thought in connectionist networks [@fodor1988connectionism; @hadley1994systematicity; @marcus1998rethinking; @fodor2002compositionality; @frank2009connectionist; @brakel2009strong; @marcus2018algebraic]. Whether models can generalize systematically remains an open and appealing research question. Recent works show that modern neural networks have not mastered the language-based generalization challenges posed by multiple purpose-built datasets [@lake2017generalization; @bastings-etal-2018-jump; @keysers2019measuring; @hupkes2020compositionality; @kim2020cogs]. These studies conclude that models lack such cognitive capacity, which calls for a more systematic study. Apart from proposing benchmarks, existing research mainly focuses on novel architectural designs [@ChenLYSZ20] or meta-learning [@NEURIPS2019_f4d0e2e7; @conklin-etal-2021-meta] to enable systematic generalization.

In this work, however, we question whether neural networks are indeed deficient or whether conventional learning protocols simply fail to exploit their full potential [@csordas2021devil]. Inspired by *meaningful learning* from the field of educational psychology [@mayer2002rote], we revisit systematic generalization and explore *semantic linking*. Specifically, we propose augmenting prior knowledge to build relation links between new concepts and existing ones through either *inductive learning* or *deductive learning*, as humans do in meaningful verbal learning [@ausubel1963psychology]. To elaborate, inductive learning is a bottom-up approach from the more specific to the more general. By introducing new concepts that share the same context with existing ones in specific samples, we hope the model can capture the underlying semantic connections and thus generalize to novel compositions of new concepts. On the contrary, deductive learning is a top-down approach from the more general to the more specific. By involving a rule-like concept dictionary without specific context information, we hope the model can use the general cross-lingual supervised signals as anchor points to launch the semantic linking. We mainly focus on three semantic relationships, namely lexical variant, co-hyponym, and synonym.

Starting from SCAN, our experiments confirm that, with semantic linking, even canonical neural networks can significantly improve their systematic generalization capability. Moreover, this result holds consistently across two more semantic parsing datasets. As an ablation study, we further examine such one-shot compositional generalization and find that both prior knowledge and semantic linking play essential parts. Lastly, we extend from toy sets to real data and explain how semantic linking, as a data augmentation technique, benefits models' performance on real problems such as machine translation and semantic parsing.

Overall, our contributions are as follows: (1) We formally introduce semantic linking for systematic generalization through the analysis of inductive and deductive learning from a meaningful learning perspective. (2) We observe that modern neural networks can achieve systematic generalization with semantic linking. (3) We show that both prior knowledge and semantic linking play a key role in systematic generalization, in line with meaningful learning theory. (4) We extend from SCAN to real data and demonstrate that many recent data augmentation methods belong to either inductive or deductive learning.
<figure id="fig:pipeline" data-latex-placement="t">
<div class="center">
<img src="figures/pipeline.png" />
</div>
<figcaption>An illustration of the semantic linking injection pipeline in SCAN. Models are expected to generalize to new compositions of variants after augmenting the prior knowledge through either inductive learning or deductive learning.</figcaption>
</figure>
Learning new concepts by relating them to existing ones is defined as a process of meaningful learning in educational psychology [@ausubel1963psychology; @mayer2002rote]. Meaningful learning encourages learners to understand new information by continuously building on concepts they already understand [@okebukola1988cognitive]. Following the same idea, we examine models' systematic compositionality by exploring semantic linking, an augmentation that establishes semantic relations between primitives $\sP$ (old concepts) and their variants $\sV:=\{\sV_{\evp} ~|~ \forall \evp \in \sP\}$ (new concepts). To spoon-feed semantic knowledge to models for semantic linking, we propose to augment the training data through either inductive learning or deductive learning [@hammerly1975deduction; @shaffer1989comparison; @thornbury1999teach]. In this section, we discuss the definition of semantic linking and take "jump" from SCAN as an example primitive to illustrate the learning scheme in Figure [1](#fig:pipeline){reference-type="ref" reference="fig:pipeline"}.

We aim to achieve systematic generalization by exposing semantic links such as lexical variants, co-hyponyms, and synonyms. A *lexical variant* is an alternative expression form for the same concept. *Co-hyponym* is a linguistic term for the semantic relation between two members of the same broader class, where each member is a hyponym and the class is a hypernym [@lyons1995linguistic]. A *synonym* is a word, morpheme, or phrase that shares exactly or nearly the same semantics with another. We provide an example and a detailed description in the Appendix.
Inductive learning is a bottom-up approach from the more specific to the more general. For example, fitting a machine learning model is a process of induction, where the model itself is the hypothesis that best fits the observed training data [@Mitchell97]. In grammar teaching, inductive learning is a rule-discovery approach that starts with the presentation of specific examples from which a general rule can be inferred [@thornbury1999teach]. Inspired by this, we propose to augment data inductively by introducing variants that share the same context as their primitives in specific samples. The assumption is that models can observe the interchange of primitives and their variants surrounded by the same context, and hence arrive at the general hypothesis that there is a semantic link between primitives and their variants [@harris1954distributional]. Formally, we describe inductive learning as follows. For a sequence-to-sequence task $\mathcal{T} : \mX \rightarrow \mY$, we have a source sequence $\vx \in \mX$ and its target sequence $\vy \in \mY$. We prepare a prompt set $\mZ := \{ \vz = f_{prompt}(\vx) ~|~ \vx \in \mX\}$, where $f_{prompt}(\cdot)$ replaces the primitive in $\vx$ with a slot mark $[z_{\evp}]$ as in Figure [1](#fig:pipeline){reference-type="ref" reference="fig:pipeline"}.[^2] Then, we generate $\mX^{IL} := \{ \vx^{IL} = f_{fill}(\vz, \evv) ~|~ \vz \in \mZ, \evv \in \sV \}$ by filling $[z_{\evp}]$ with variants in $\sV_{\evp}$. Nothing changes on the target side, so we obtain $\mY^{IL}$ by copying $\vy$ as $\vy^{IL}$ for each $\vx^{IL}$ correspondingly. Finally, we train models on $(\big[\begin{smallmatrix} \mX \\ \mX^{IL} \end{smallmatrix}\big], \big[\begin{smallmatrix} \mY \\ \mY^{IL} \end{smallmatrix}\big])$ to operate semantic linking inductively.
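The inductive augmentation above can be sketched in a few lines. This is a minimal illustration on SCAN-style command/action pairs; `make_prompt` and `fill` are our own stand-ins for $f_{prompt}$ and $f_{fill}$, not code released with the paper.

```python
# Minimal sketch of inductive augmentation on SCAN-style data:
# replace the primitive with a slot mark, then fill the slot with variants,
# copying the target sequence unchanged.

def make_prompt(src: str, primitive: str, slot: str = "[z_p]") -> str:
    """f_prompt: replace the primitive token in the source with a slot mark."""
    return " ".join(slot if tok == primitive else tok for tok in src.split())

def fill(prompt: str, variant: str, slot: str = "[z_p]") -> str:
    """f_fill: substitute a variant for the slot mark."""
    return " ".join(variant if tok == slot else tok for tok in prompt.split())

def augment_inductively(data, primitive, variants):
    """Return (X ∪ X^IL, Y ∪ Y^IL); targets of augmented pairs are copies."""
    augmented = list(data)
    for src, tgt in data:
        if primitive in src.split():
            prompt = make_prompt(src, primitive)
            augmented += [(fill(prompt, v), tgt) for v in variants]
    return augmented

data = [("jump twice", "JUMP JUMP"), ("walk left", "LTURN WALK")]
aug = augment_inductively(data, "jump", ["jump_0", "jump_1"])
# adds ("jump_0 twice", "JUMP JUMP") and ("jump_1 twice", "JUMP JUMP")
```

Only samples containing the primitive are prompted, so the untouched "walk left" pair passes through unchanged.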
Deductive learning, the opposite of inductive learning, is a top-down approach from the more general to the more specific. As a rule-driven approach, teaching in a deductive manner often begins with presenting a general rule, followed by specific examples in which the rule is applied [@thornbury1999teach]. To align with this definition, we augment data deductively by adding a bilingual dictionary that maps primitives and their variants to the same targets in the target domain. This additional dictionary thus mixes the original training task with word translation [@mikolov2013exploiting]. Without any specific context, we hope the model can use the general cross-lingual supervised signals as anchor points to launch the semantic linking. Formally, we describe deductive learning as follows. We first treat $\sP$ directly as the source dataset $\mX^{DL}_{\sP}$ and then prepare the corresponding target dataset $\mY^{DL}_{\sP}$ by either decomposing samples from $\mY$ manually or feeding $\mX^{DL}_{\sP}$ to a trained external model. Similarly, we can treat $\sV$ as another source dataset $\mX^{DL}_{\sV}$ and prepare its target dataset $\mY^{DL}_{\sV}$ by copying the corresponding $\vy^{DL}_{\sP}$ as $\vy^{DL}_{\sV}$ for every $\vx^{DL}_{\sV}$ that is a variant of $\vx^{DL}_{\sP}$. In the end, we get $\mX^{DL}$ as $\big[\begin{smallmatrix} \mX^{DL}_{\sP} \\ \mX^{DL}_{\sV} \end{smallmatrix}\big]$ and $\mY^{DL}$ as $\big[\begin{smallmatrix} \mY^{DL}_{\sP} \\ \mY^{DL}_{\sV} \end{smallmatrix}\big]$. The mapping from $\mX^{DL}$ to $\mY^{DL}$ is a dictionary that translates primitives and their variants to the same targets without any specific context information. We call $(\vx^{DL}, \vy^{DL})$ a *concept rule*, $(\vx^{DL}_{\sP}, \vy^{DL}_{\sP})$ a *primitive rule*, and $(\vx^{DL}_{\sV}, \vy^{DL}_{\sV})$ a *variant rule*, since they are rule-like and context-free. We train models on $(\big[\begin{smallmatrix} \mX \\ \mX^{DL} \end{smallmatrix}\big], \big[\begin{smallmatrix} \mY \\ \mY^{DL} \end{smallmatrix}\big])$ to operate semantic linking deductively.
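A corresponding sketch of the deductive side, assuming the primitive-to-target mapping is given (e.g. produced manually or by an external model as described above); the helper names are illustrative, not from the paper:

```python
# Minimal sketch of deductive augmentation: append rule-like, context-free
# dictionary entries mapping each primitive and its variants to the same target.

def concept_rules(primitive_targets: dict, variants: dict):
    """Build (X^DL, Y^DL): primitive rules plus variant rules that copy
    the primitive's target (e.g. both 'jump' and 'jump_0' map to 'JUMP')."""
    rules = []
    for p, tgt in primitive_targets.items():
        rules.append((p, tgt))                             # primitive rule
        rules += [(v, tgt) for v in variants.get(p, [])]   # variant rules
    return rules

def augment_deductively(data, primitive_targets, variants):
    """Mix the original training pairs with the concept-rule dictionary."""
    return list(data) + concept_rules(primitive_targets, variants)

data = [("jump twice", "JUMP JUMP")]
aug = augment_deductively(data, {"jump": "JUMP"}, {"jump": ["jump_0"]})
# adds the context-free rules ("jump", "JUMP") and ("jump_0", "JUMP")
```

Unlike the inductive version, no contextual sample is duplicated; the model only sees the word-translation pairs as anchor points.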
Although previous studies argue that neural networks fail to match humans in systematic generalization [@lake2017generalization; @keysers2019measuring], we revisit such algebraic compositionality conditioned on semantic linking to see whether that conclusion changes. The following section specifies the process and outcomes of our experiments. We first use SCAN as an initial testbed to observe whether systematic generalization emerges with the assistance of semantic relations. We then verify neural networks' potential to achieve systematic generalization activated by semantic linking on SCAN, as well as on two real-world semantic parsing tasks. The ablation studies that follow further examine models' compositional capability.

There is evidence that SCAN alone may be far from enough to fully capture this kind of generalization, since even a simple model can behave as if it possessed comparable skills [@bastings-etal-2018-jump; @keysers2019measuring]. Thus, starting from SCAN, we introduce GEO and ADV, generated respectively from the real semantic parsing datasets Geography and Advising.[^3] Modifications to the datasets are specified in each experiment, with the goal of examining machines' systematic generalization under various conditions.
**SCAN** is one of the benchmarks for investigating neural networks' compositional generalization [@lake2017generalization]. It includes 20,910 pairs of English commands and their instructed action sequences.[^4] We define $\sP^{SCAN} := \{ ``\textit{jump}", ``\textit{look}", ``\textit{run}", ``\textit{walk}" \}$ to stay in line with previous works. We focus on lexical variants and create $\sV^{SCAN}$ by appending to each primitive a suffix consisting of an underscore and a unique number; we control $|\sV^{SCAN}|$ by setting an upper limit on this number. An example variant of "*jump*" is "*jump_0*", and both mean the same action "JUMP".
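The variant-creation scheme can be sketched as follows; `make_variants` is a hypothetical helper name, since the paper does not specify an implementation:

```python
# Sketch of creating lexical variants for the SCAN primitives by suffixing
# an underscore and a unique number, as described above.

def make_variants(primitives, n_variants: int):
    """Return {primitive: [primitive_0, ..., primitive_{n-1}]},
    controlling |V| via the upper limit n_variants."""
    return {p: [f"{p}_{i}" for i in range(n_variants)] for p in primitives}

P_SCAN = ["jump", "look", "run", "walk"]
V_SCAN = make_variants(P_SCAN, 2)
# V_SCAN["jump"] == ["jump_0", "jump_1"]
```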
**Geography** is a common semantic parsing dataset [@data-geography-original; @data-atis-geography-scholar]. It is also known as *geo880*, since it contains 880 natural language queries about US geography paired with corresponding query expressions. It was later reformatted into SQL with variables in the target sequences [@data-sql-advising]. **GEO** is the dataset we generate from Geography: we regard 4 of the 9 annotated variables as hypernyms and keep them as they are in the SQL sequences, while the other variables are restored from the entities in the source sequence accordingly. The resulting dataset contains 618 examples, and we can exploit the "is-a" hypernymy relations between entities and variables for semantic linking. Specifically, we define $\sP^{GEO} :=\{``\textit{new york city}", ``\textit{mississippi river}", ``\textit{dc}", ``\textit{dover}" \}$, with $\sV^{GEO}$ consisting of entities that are co-hyponyms sharing the same variable group as the primitives.[^5] An example variant of "*new york city*" is "*houston city*"; both are in the same variable group "CITY_NAME".

**Advising**, our second semantic parsing dataset, includes 4,570 natural language questions about course information paired with SQL queries [@data-sql-advising]. Similar to GEO, **ADV** is generated from Advising with 4 of its 26 variables treated as hypernyms. Precisely, we define $\sP^{ADV}:=\{ ``\textit{a history of american film}", ``\textit{aaron magid}", ``\textit{aaptis}", ``\textit{100}" \}$ and $\sV^{ADV}$ as co-hyponyms of the primitives sharing the same variables. For instance, "*advanced at ai techniques*" is a co-hyponym of "*a history of american film*", sharing the same variable "TOPIC".
What follows is an account of network configurations and experimental settings. Unless otherwise specified, they are shared across all experiments.

**Models.** After testing a range of adapted versions, we employ three dominant model candidates with an encoder-decoder framework [@sutskever2014sequence]: RNN, CNN, and TFM. For RNN, we reproduce bi-directional recurrent networks [@schuster1997bidirectional] with long short-term memory units [@hochreiter1997long] and an attention mechanism [@bahdanau2014neural]. For CNN, we follow the convolutional seq2seq architecture of @gehring2017convolutional, and for TFM, the attention-based structure of @NIPS2017_7181. More details are provided in the Appendix.

**Training.** We use mini-batches of 128 sequence pairs per training step. We use the Adam optimizer [@DBLP:journals/corr/KingmaB14] with $\ell_2$ gradient clipping at $5.0$ [@10.5555/3042817.3043083] and a learning rate of $1e^{-4}$ to minimize a cross-entropy loss. We fix the maximum number of training epochs at 320 for CNN and 640 for RNN and TFM. Instead of early stopping [@prechelt1998early], we prefer a fixed training regime long enough for models to fully converge in practice, since our focus is observing systematic generalization rather than exploring superior architectures. To prevent uncontrolled interference, we train all models from scratch instead of fine-tuning [@devlin-etal-2019-bert].

**Evaluation.** Token accuracy and sequence accuracy serve as the two primary metrics in the following experiments. The former is a soft metric that allows partial errors within a sequence, while the latter is strict and does not. Reported results are the means of five runs, along with standard deviations.
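The two metrics can be sketched as follows; this is a plain-Python illustration, and the paper's exact handling of tokenization, padding, and length mismatches may differ:

```python
# Sketch of the two evaluation metrics: token accuracy tolerates partial
# errors within a sequence, while sequence accuracy requires an exact match.

def token_accuracy(preds, golds):
    """Fraction of gold tokens matched position-wise (micro-averaged)."""
    correct = total = 0
    for p, g in zip(preds, golds):
        total += len(g)
        correct += sum(pt == gt for pt, gt in zip(p, g))
    return correct / total

def sequence_accuracy(preds, golds):
    """Fraction of sequences predicted exactly right."""
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

preds = [["JUMP", "JUMP"], ["LTURN", "RUN"]]
golds = [["JUMP", "JUMP"], ["LTURN", "WALK"]]
# token accuracy: 3/4; sequence accuracy: 1/2
```

One wrong token leaves token accuracy at 0.75 here but already drops sequence accuracy to 0.5, which is why the latter is the stricter metric.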
2012.05688/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
+<mxfile host="app.diagrams.net" version="15.4.3"><diagram id="-TIwZDtNovuulV42RKoI">…</diagram></mxfile> (compressed draw.io diagram payload, not human-readable)
2012.05688/paper_text/intro_method.md
ADDED
@@ -0,0 +1,64 @@
# Method

Given a fully labeled source $HIN_S=\left(\mathcal{V}_S^{l},\mathcal{E}_S, \mathcal{A}_S, \mathcal{R}_S\right)$ and a fully unlabeled target $HIN_T=\left(\mathcal{V}_T^{u},\mathcal{E}_T, \mathcal{A}_T, \mathcal{R}_T\right)$, $\mathcal{V}$ and $\mathcal{E}$ denote the node and edge sets, while $\mathcal{A}$ and $\mathcal{R}$ denote the sets of node and edge types. $\mathcal{A}_S$ and $\mathcal{A}_T$ share some node types, but each also has its own private node types, which likewise makes $\mathcal{R}_S$ and $\mathcal{R}_T$ differ. $\mathcal{V}_S^{l}$ is the set of labeled nodes, and $\mathcal{V}_T^{u}$ is the set of unlabeled nodes. Transferable classification aims to predict labels on $HIN_T$ using the label information from $HIN_S$.
To align each semantic component independently between the source and target domains, we construct *pairwise node-type* auto-encoders and domain discriminators before aggregation and updating:
| 7 |
+
**1) Shared Node-type Distribution Alignment:** Assuming there are $K_1$ shared node-type pairs between source and target domains, the *pairwise node-type* auto-encoder $AE^{k_1},{k_1}=1,...,K_1$ is used to encode and decode the node features of ${k_1}$-th-type nodes. A reconstruction loss is used to constrain $AE^{k_1}$'s projection retaining semantic information: $$\begin{equation}
|
| 8 |
+
\small
|
| 9 |
+
\label{equa:loss_MSE}
|
| 10 |
+
\mathcal{L}_{\text {recon1}} = \sum MSE\left(\mathbf{X}^{k_1},\hat{\mathbf{X}}^{k_1}\right),
|
| 11 |
+
\end{equation}$$ where $\mathbf{X}^{k_1}$ is the node feature of the ${k_1}$-th shared node type for both domains, $\hat{\mathbf{X}}^{k_1}$ is the corresponding reconstructed feature, and $MSE$ represents the mean square error. Then, we construct a *pairwise node-type* domain discriminator following with a Gradient Reversal Layer (GRL) [@ganin2016domain] for the features of individual-type nodes separately. By minimizing the domain adversarial similarity loss, the encoder is trained to make similar its output processed from both domains while the domain discriminator learns to identify the domain of the encoder's output. The domain discriminator loss for all discriminators $D^{k1}$ and auto-encoders $AE^{k1}$, is formulated as: $$\begin{align}
|
| 12 |
+
\label{equa:loss_nda1}
|
| 13 |
+
\mathcal{L}_{nda1} = \sum \mathbb{E}_{x \in \textbf{X}_S^{k_1}} \left[\textbf{{\rm log}} \left(1-D^{k_1}\left(AE^{k_1}\left(x\right)\right)\right)\right] \nonumber \\
|
| 14 |
+
+\mathbb{E}_{x \in \textbf{X}_T^{k_1}}\left[\textbf{{\rm log}} D^{k_1}\left(AE^{k_1}\left(x\right)\right)\right],
|
| 15 |
+
\end{align}$$ where $D^{k_1}$ is the discriminator for the ${k_1}$-th-type nodes.
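As an illustration, the adversarial loss above can be evaluated as follows. This is a minimal NumPy sketch with toy stand-ins for the encoder of $AE^{k_1}$ and the discriminator $D^{k_1}$; all names, shapes, and the specific encoder/discriminator forms are hypothetical, not from the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy features for one shared node-type pair (the k1-th type).
X_S = rng.normal(loc=0.0, size=(5, 4))   # source-domain node features
X_T = rng.normal(loc=1.0, size=(6, 4))   # target-domain node features

# Stand-ins for the encoder of AE^{k1} and the discriminator D^{k1}.
W_enc = rng.normal(size=(4, 3))
w_dis = rng.normal(size=3)

def encode(X):
    return np.tanh(X @ W_enc)

def discriminate(Z):
    return sigmoid(Z @ w_dis)            # P(domain = target)

# Eq. (loss_nda1): E_{x in X_S}[log(1 - D(AE(x)))] + E_{x in X_T}[log D(AE(x))]
p_S = discriminate(encode(X_S))
p_T = discriminate(encode(X_T))
L_nda1 = np.mean(np.log(1.0 - p_S)) + np.mean(np.log(p_T))
```

In training, the GRL flips the gradient of this loss between the discriminator and the encoder, so minimizing it pushes the two domains' encoded distributions together.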

**2) Private Node-type Distribution Alignment:** Unlike shared node types, private node types contain many unknown values. Reconstruction constraints applied to private node types must recover not only the observed values but also the unknown parts. Meanwhile, private node types such as term and field are semantically relevant [@mikolov2013efficient], which makes the unobserved type's embedding contain many linearly dependent columns. Hence, recovering the unobserved type's embedding becomes a low-rank matrix completion problem, which can be formulated as minimizing the nuclear norm under a reconstruction-loss constraint [@candes2009exact]. For private node-type pairs, we recover the missing values and obtain a matrix $\hat{\mathbf{W}}$: $$\begin{equation}
\label{equa:matrix_recover}
\mathbf{W}=\left[\begin{array}{ll}
\textbf{X}_{S} & \textbf{0} \\
\textbf{0} & \textbf{X}_{T}
\end{array}\right] \stackrel{recover}{\Longrightarrow}
\hat{\mathbf{W}}=\left[\begin{array}{ll}
\hat{\textbf{X}}_{S} & \hat{\textbf{U}}_{S} \\
\hat{\textbf{U}}_{T} & \hat{\textbf{X}}_{T}
\end{array}\right].
\end{equation}$$

In $\hat{\mathbf{W}}$ above, $\hat{\mathbf{X}}_S$ and $\hat{\mathbf{X}}_T$ are the recovered observed elements, while $\hat{\mathbf{U}}_S$ and $\hat{\mathbf{U}}_T$ are the recovered unobserved elements. To recover $\hat{\mathbf{W}}$, we minimize the loss $\mathcal{L}_{\text {recon2}}$: $$\begin{align}
\label{equa:loss_recon2}
\mathcal{L}_{\text {recon2}} &=\sum MSE\left(\mathbf{X}^{k_2},\hat{\mathbf{X}}^{k_2}\right) +\delta\mathcal{R}\left(\hat{\mathbf{W}}\right),
\end{align}$$ where $\mathbf{X}^{k_2}$ is the node feature of the $k_2$-th private node type, $\hat{\mathbf{X}}^{k_2}$ is the corresponding observed part of the recovered embedding, and $K_2$ is the number of private-type pairs. $\mathcal{R}(\hat{\mathbf{W}})$ is a regularization term, where $\mathcal{R}(*)$ denotes the nuclear norm, and $\delta >0$ is a trade-off parameter. Under the reconstruction constraint, the encoder's output retains enough semantic information for the private-type pairs. The loss of the discriminators over the $K_1$ shared- and $K_2$ private-type pairs is $\mathcal{L}_{nda} = \mathcal{L}_{nda1} + \mathcal{L}_{nda2}$, where $\mathcal{L}_{nda2}$ is defined analogously to Eq. ([\[equa:loss_nda1\]](#equa:loss_nda1){reference-type="ref" reference="equa:loss_nda1"}).
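Nuclear-norm minimization of this kind is commonly handled with singular value thresholding, the proximal operator of the nuclear norm. The following NumPy sketch applies one such step to the block matrix $\mathbf{W}$; the shapes are toy values and this is not the paper's actual solver.

```python
import numpy as np

rng = np.random.default_rng(1)

# Observed private-type features of the two domains, stacked block-diagonally
# as in Eq. (matrix_recover); the off-diagonal blocks are unobserved (zeros).
X_S = rng.normal(size=(4, 3))
X_T = rng.normal(size=(5, 2))
W = np.block([[X_S, np.zeros((4, 2))],
              [np.zeros((5, 3)), X_T]])

def svt(M, delta):
    """One singular-value-thresholding step: the proximal operator of
    delta * nuclear norm, which shrinks all singular values toward zero."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - delta, 0.0)) @ Vt

def nuclear_norm(M):
    return np.linalg.svd(M, compute_uv=False).sum()

W_hat = svt(W, delta=0.5)   # lower-nuclear-norm estimate of the completion
```

Iterating such shrinkage steps while re-imposing the observed entries (the reconstruction constraint) is the classical recipe for this low-rank completion problem.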

In this subsection, we further elaborate on how to align the topological structures of the source and target networks.

**Representation of $\mathbf{h}$-hop:** We choose the advanced HIN model HGT [@hu2020heterogeneous] as the feature extractor $G$, which learns embeddings of the $\mathbf{h}$-hop structure ($\{\mathcal{N}^1_{(v)},...,\mathcal{N}^h_{(v)}\}$), i.e., $\mathbf{h}$-hop embeddings, to capture network topology information. In GDA-HIN, nodes from the source and target networks are encoded by a feature extractor with shared learnable parameters. However, $\mathbf{h}$-hop structure discrepancies remain between the two domains. We therefore apply domain alignment to the output of HGT, which aligns the data distributions in the embedding space of the $\mathbf{h}$-hop structure.

**Topological Domain Discriminator:** After extracting each node's $\mathbf{h}$-hop structure embedding with the feature extractor $G$, a 2-layer HGT, another domain adversarial discriminator $D^{tp}$ is employed to minimize the topological structure discrepancy, and its loss $\mathcal{L}_{da}$ is defined as: $$\begin{align}
\label{equa:loss_da}
\mathcal{L}_{da} = \mathbb{E}_{x \in \boldsymbol{H_S}}\left[\textbf{{\rm log}} \left(1-D^{tp}\left(G\left(x\right)\right)\right)\right] \\ \nonumber
+\mathbb{E}_{x \in \boldsymbol{H_T}}\left[\textbf{{\rm log}} D^{tp}\left(G\left(x\right)\right)\right],
\end{align}$$ where $\mathbf{H}_S$ and $\mathbf{H}_T$ represent the outputs of the encoders in the source and target domains, respectively.

**Domain-invariant Classifier:** The classifier $C$ is used to predict the label, and its loss $\mathcal{L}_{cls}$ is defined as: $$\begin{align}
\label{equa:loss_t_cls}
\mathcal{L}_{cls} =-\frac{1}{N^l}\mathop{\Sigma}_ {i=0}^{N^l}{Y^i \log\left(\hat{Y}^i\right)} + \zeta\, tr\left(\mathbf{H}^\top \mathbf{L^g}\mathbf{H}\right),
\end{align}$$ where $\zeta$ is a balance parameter, $N^l$ is the number of labeled source- and target-domain nodes, and $\hat{Y}^i$ denotes the $i$-th node's prediction. The regularization term in Eq. ([\[equa:loss_t_cls\]](#equa:loss_t_cls){reference-type="ref" reference="equa:loss_t_cls"}) is $tr(\mathbf{H}^\top \mathbf{L^g}\mathbf{H})$, where $\mathbf{H}$ is the hidden state of the auto-encoders for all private-type nodes and $\mathbf{L^g}$ is the graph Laplacian matrix over all private-type nodes. In particular, the graph Laplacian matrix is formulated as $\mathbf{L^g} = \begin{pmatrix} \mathbf{L_S} & \mathbf{0} \\ \mathbf{0} & \mathbf{L_T} \end{pmatrix}$, where $\mathbf{L_S}$ and $\mathbf{L_T}$ are the Laplacian matrices of the source and target domains, computed from the adjacency matrices. In the matrix completion module, unobserved elements are involved in computing the private-type nodes' embeddings, and these elements are not constrained by the reconstruction loss. We utilize the graph Laplacian matrix to smooth their embeddings over the graph, relying on the assumption that connected nodes are likely to share the same label [@kipf2016semi].
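The smoothness term $tr(\mathbf{H}^\top \mathbf{L^g}\mathbf{H})$ equals a weighted sum of squared differences between connected nodes' embeddings, which is exactly why minimizing it pulls a node's embedding toward its neighbors'. A small NumPy check of this identity on a toy undirected graph (not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy undirected graph over 4 private-type nodes.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A          # combinatorial graph Laplacian
H = rng.normal(size=(4, 3))             # hidden states of the auto-encoders

smooth = np.trace(H.T @ L @ H)

# Identity: tr(H^T L H) = 1/2 * sum_ij A_ij ||h_i - h_j||^2  (symmetric A).
pairwise = 0.5 * sum(A[i, j] * np.sum((H[i] - H[j]) ** 2)
                     for i in range(4) for j in range(4))
```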

**Phase-1 Training:** To minimize the nuclear norm under the reconstruction-loss constraint of Eq. ([\[equa:loss_recon2\]](#equa:loss_recon2){reference-type="ref" reference="equa:loss_recon2"}), GDA-HIN first trains a *phase-1 model* on the shared node types of the two networks to yield pseudo labels, and the overall objective function is composed of the following four components: $$\begin{align}
\label{equa:l_stage1}
\mathcal{L}_{p1} = \mathcal{L}_{cls}+\alpha \mathcal{L}_{recon1} + \beta\mathcal{L}_{nda1} + \gamma\mathcal{L}_{da},
\end{align}$$

where $\alpha$, $\beta$ and $\gamma$ are hyper-parameters.

When there are only shared node types, Eq. ([\[equa:loss_t_cls\]](#equa:loss_t_cls){reference-type="ref" reference="equa:loss_t_cls"}) degenerates to the cross-entropy loss $\mathcal{L}_{cls} =-\frac{1}{N_S}\mathop{\Sigma}_ {i=0}^{N_S}{Y_S^i \log\left(\hat{Y}_S^i\right)}$, where $N_S$ denotes the number of source-domain nodes.

**Phase-2 Training:** In phase 2, we select some of the phase-1 model's predictions as pseudo labels for nodes in the target domain. Under the previous phase's guidance, GDA-HIN considers both shared- and private-node types from the two networks, and the overall optimization objective is: $$\begin{align}
\label{equa:l_stage2}
\mathcal{L}_{p2} = \mathcal{L}_{cls}+\alpha \left(\mathcal{L}_{recon1}+\mathcal{L}_{recon2}\right)
+ \beta \mathcal{L}_{nda} + \gamma\mathcal{L}_{da}.
\end{align}$$
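The text does not specify how predictions are selected as pseudo labels; one common rule, sketched here with made-up logits, is to keep only predictions whose softmax confidence exceeds a threshold:

```python
import numpy as np

# First-phase model logits for 5 unlabeled target-domain nodes, 3 classes
# (values are made up for illustration).
logits = np.array([[4.0, 0.1, 0.2],
                   [0.3, 0.4, 0.2],
                   [0.1, 3.5, 0.1],
                   [1.0, 1.1, 0.9],
                   [0.2, 0.1, 5.0]])

# Numerically stable softmax over classes.
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)

# Keep only confident predictions as pseudo labels for the second phase.
conf_mask = probs.max(axis=1) > 0.9
pseudo_labels = probs.argmax(axis=1)[conf_mask]
```

Here rows 0, 2 and 4 pass the threshold while the two near-uniform rows are discarded, so only confident target nodes contribute to $\mathcal{L}_{cls}$ in phase 2.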
|
2101.01000/main_diagram/main_diagram.drawio
ADDED
<mxfile host="app.diagrams.net" modified="2021-11-04T07:52:27.608Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.81 Safari/537.36" version="15.6.8" etag="rBEsvpgDOW01JcPKhQ9V" type="google"><diagram id="CCzTZgaLmgJytSUQVedS">7V1bs6o6Ev41PkqRhOvjunnOVO1ds2vWVM3spymUqMxG8CDrNr9+EglIwi1qALdKraqlAQN8/XWnu9OBCXrafP6ReNv199jH4QTq/ucEPU8gBMAxyT/a8pW1WIg1rJLAZwcdGl6D/2HWqLPWt8DHO+7ANI7DNNjyjYs4ivAi5dq8JIk/+MOWccifdeut2Bn1Q8Prwgtx5bB/BX66zlodaB/a/8TBap2fGVhutmfj5Qezjndrz48/Sk3oZYKekjhOs0+bzyccUvByXLLfzRr2FheW4CiV+QHMfvDuhW/s3th1pV/5zSbxW+RjejyYoMePdZDi1623oHs/iHhJ2zrdhGz3MgjDpziMk/1v0cPsGcwQad+lSfwLl/Yg5LozclWP7AJwkuLPxpsABTSEUzje4DT5IoewH5guQ5PRCVhMEB8H4QDEjlmXBAPzRo8RYlX0fcCMfGCw1UOI+oVwuVzCxaIOQt+aW6alBsICni8BmRKEUDerEIJcdc+B0KhA+PTtKY7eFQOJrQYgbXeu64qANAQgzSqQVg2OKphoVvDCPrFb7GucpOt4FUde+HJofTwgSgE4HPMtjrcMx//iNP1iRth7S2MeZfwZpP+mP9dM9u0n64x+fv4sf/nKv0Tkzko/ol9/5v3RL4ef7b8dfuc/UPNN72aLo3+ugyhrnQUUlP0xmXRzswwKqVIoOJnu4rdkwZosNoh4yQqnHCclJJ/g0EuDd773c+RoXa86IDicOtjXAyM0zdGsinO3KidaFbdqVWxpySu3Ku71qsOQViWPQfry92azF+vpqQ7B2ewRWYr8PWSN6e8BMAgXnQWu5+LcMQ1TEReRPZ7DB6rUu9tmOdsMUNU4A3nhK7fOoBpGXo1KDGqeh4klB8HRcMbz+sA9mDzZtNREk2DEcBIME0+OohKDmpZqQFnVkQOpojiiOuF7u3UBaQk+XlekiSYBVQkKsybpmbdJ84yd4UcckBMXkrAdTbfdw4Y4uRi6AHimIKwPWEpIH9UtMoRuMx2rdLsXZQGJnHSdu3QPYnB0NeKEbkdHCgVYDXDbBDgP48UvZvRPlWJp6IHHjj1slPkdyABsqAFLP2yOGnLQfq3mfgs3Uj1XoEQQf+fKKVyBjs1xBSiyI6Rbq7HbPplSTVW0MaXkSR7DDeb25p9/TsoucJPbW3ZUG2jRMcfhXhp5dCFwNE+liyl01N+wk/OBI4gVptRDjvfXdGCK9ddbnO+Y7vZB0AM5AIDt52En+bSi//8MfB9HZPdr6qU475NcTtZtdlCFi8S1TnnyeWGwInx8XhAZY+KSP1IHPFh44QPbsSEnysI3TK7Jm++7oqTaUnj2gJmPE/OZ9kUituy6913zrj5zicpxAWuanO/4A8GRQAbjc4mZTg0zoQLHX2aK/3JtwAVpt5DOsIw6m368rh/VrULNr6aa7rQ4nhYICoM5UkILZB/VrUJamBVaiKb8Rmw20t3xbHY16XVXzlOUU5jdUmOzK5Nmg9ns43J4d1rUy89AQhpWjc2uZHcHs9nHJf/utJCjhSJrUZkHG8xaVFOKNzqUGxBqgm4OOJije7quL43VkQYF7TKcE9WU9GULfQGhL3W6iY5LzN05Ic0JE9jKOEH7GpATzbm4eZ4x+1u0faMX+EqsrxfuSnm1+c0l1SDSRDlXp9Pdvox6XVZNEJaaTOqPBPvBIg3iaNeYRr1hFhhCAqaoRRtiZK/LoWWy8IP
3WpFTMUwZ8lTmDPyK1PN+dlsvauDT0lvwfa9x+I6pBJu7K/jxiv96wxHpAOov0SL299dQw6IuwtVdH2ne3347E7sKdMj5WKUVAGqoAmybp0o+TnSttFOy0K6aWPsNufKMb4QrhpjuH5QrEum/FfH3to03ytYOM5s9KS0FlgXA0k0xZqqpsrdrEDAU1NgjiUzXOfV+voedZW29n7Vw8HwpS6JMUq2+bm8QyWR9wjDY7nA3PN5umy01XwafFFJikxK8dzpIG3GX6c2IGBaLoCsY6ro9k18e3YRhsbhXs0B540lZXSyNNIfb3BolrT/kLHHIFHBdgThczYbljS9xrCloHUcchkzS5QrEgZCmm6XN4sSRB6vji+O4fAcLDxqzG0Wl/US2zr4uH1KtlS9JKSvglUt45ON1uWApM8/NwaugRoIeTUFVcI7WJTix4EiiOJIM8vzGXYYB+Q6lZ8gIhdBhAzwrodCruhRKfr2qaKaaIhdPh9axVkzGKWJDbxMgxnGVTx1F8bT9h5eSCCzat0AdVewKrFDoeqgBNGiXN6E64sRaRwT5bvki6h4thUz50xX4B4YjaLTBOwjVZNlIDkJdekSV5S6mPoqpDpl5DyUuAfdQFuN3UHRkaa5R3niNPFHPTVpYf9gMt71bhYp+XCnVfRA4gxuGZp7GDgNqqOZicy8U9UYOpYsib5wc0OLcPJTPZ+bsONFBQEKvQoRr9cYNifza8ElYaA2XhDUkUlqXkIQ1GkLtASAyryXN1IRh8SQ03kA7wmLH7jyTY1Ql0HTMWRKRyTRdg0R0zXDLmxA51Rj9sSQCb0MilZGKH6jcyxHI8SvCfqNkbD5ul5Oxpt4uOUGV3GpxqOgxOVZVVickVSyNY4zgMomOr7TPZLSm8Xpzp83j15QNlH7tUt3RCNA+rp7sNLcTAPZGAJkszhWYelNvi4PzHNslmPrjF5K1aWSX4a5/AhZoHQyanoB1SMj1kJ6rHSQYecspu8x3uTy7AU3NcCzd0F0LGcBBPAENR7PRvmDHsHWETCFGPjUr4/ChN+pvHJFIywwfeheL+IeIKyWSD5cQemeSGif0vpaCmyYM5eqfLqXCw7qWTEiHODrqn7rD7oHEcS1pkA5xdNQ/dQfdA4lDJgdy8MMWobfbBYvjXLGzHKIumLVq+XM/M0xm20wQEEQhHQ+19tpbOGQdl2i5Yam3zyueGAWbQLDVrb0qFPu11JZ00KMd3/zx7uPb3uaVN8XilfyRwvr32H8Lcf36FkGEF7miroelMKbDL5sClltjFwCo8fNVLLKz6lIYKhZWvuxofBjs1uSIMCZSol3t9RBa3oaqYTTfbSctK5vuhMikTMy35OooJYSoZgceVqsEr/aPtNA3eLejb0C8WemIzxrUa94nCPuSTTVv8Q/8kZ0E6r9wEtGXad6sZHi9QXaN1uh9Scat+iD9vxJBRFH+tQWcO7OfTT3xhQZ5Ko2rxHSlZXfug+aFh8LCfA1yh9tJUPK+SocxXjefBwjnYZW9jU+qF49n13VgU3YFp/rAedZvnDdw8A+psrueUlVXHVyajWiYgLgAbuc8LnMb9fCWNjWUZDW/spREJlJLSYmUWN+zCqaQLqyZUsgLmLiIukU20i+UlEhBnX/7dvv9T4XFP1MAalzH3iCQyMccG45L+QUZ9q3TJG2PRDrrnsdORqh6BXQThsVKRt54GHW8qoFYAcJjFz0MhTBvnoE9HMIyVQxXgLDwAMohEZaYcd+tvS39GGxoeM1hKoZnKfXXitZv3hyHP0gQxkCex2kab8gBId3x6C1+rfaeHw842cgh+5M95CLT6+THrud5nabb3T7hMyN/Cz+CWkCGrGVAfMpEW5AzwpnvpR75R9uJ5zJLiY82DYMIT/ct02WcTHc4nnqRP/XxOw7j7YYIj66gmNGs7uz15e//CWhMGnn0ZrxwCqCjbaNVjT/YnHxtStcqG0v6Gj8lqhL6GT/Pe6TgWfcsUWZw14zL0AwJlvS
kGc7Y1Q/KRqiOR6iN5mU5Yxc0DIXwaF6WIxEgXgPC43lZjkT8mYXg6JFY3dQj9jeZlLNBvSQlkPDUBWAPl5VwJMLTXlyKTBajuBSORMB4dykuwqWQYUlfmjF+0IutBnNsu3Nd+p3ITRiO71LIrP6/BoTHcynGfkblQAiP6FJIhKcjuBSGC3lE8unnAQynKxGL9eNSuKO5FK5EdHR3KS7DpZBgSV+aMX6Ep8gcN2A4ukvhylR8XwPCo7kU7vjzmYMgPJhLQb4mMS2MLfb9QXBaf499TI/4Pw==</diagram></mxfile>
2101.01000/main_diagram/main_diagram.pdf
ADDED
Binary file (41.5 kB).
2101.01000/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,177 @@
# Introduction

In classical statistical learning, spatio-temporal forecasting is usually treated as a multi-variate time series problem, and methods such as the autoregressive integrated moving average (ARIMA) and its variants have been proposed, but their stationarity assumption is usually hard to satisfy. Recently, deep learning approaches have attracted much attention. For example, in traffic flow forecasting, Graph Neural Networks (GNNs) are regarded as superior solutions for modeling spatial dependency by taking sensors as graph nodes [@yu2018STGCN; @li2018diffusion]. Compared with the great progress in traffic forecasting, works focusing on meteorology are scarce, while the need for weather forecasting is increasing dramatically [@shi2015convolutional; @sonderby2020metnet]. In this work, we focus on spatio-temporal meteorological forecasting tasks.

The task is challenging for two main reasons. First, the irregular sampling of meteorological signals usually rules out classical Convolutional Neural Networks (CNNs), which work well on regular mesh-grid signals in Euclidean domains such as 2-D planar images. Signals are usually acquired from irregularly distributed sensors, and the manifolds from which signals are sampled are usually non-planar. For example, sensors detecting temperature are located unevenly on land and ocean rather than on structured mesh grids, and meteorological data are often spherical signals rather than planar ones. Second, the strong temporal and spatial dependency makes the dynamics hard to model. For instance, different landforms show totally distinct wind flow or temperature transfer patterns, and extreme climate events like El Nino [@elnino] often cause non-stationarity in prediction.

GNNs perform effectively and efficiently on irregularly sampled spatio-temporal forecasting, updating node representations by aggregating messages from their neighbors, a process that can be likened to heat or wind flowing between localized areas on the earth's surface. As discussed, meteorological flow may exhibit quite different patterns in different local regions. Inspired by this analogy and by location-characterized patterns, we aim to establish a graph convolution kernel that varies across localized regions to approximate and imitate the true local meteorological patterns.

Therefore, we propose our **conditional local kernel** and embed it in a graph-convolution-based recurrent network for spatio-temporal meteorological forecasting. The convolution is performed on the **local space** of each node, constructed by considering both the distance between nodes and their relative orientation, with the kernel built mainly on one assumption: **smoothness of location-characterized patterns**. In summary, our contributions are:

- Proposing a location-characterized kernel to capture and imitate the local spatial meteorological patterns in its message-passing process;

- Establishing a spatio-temporal model with the proposed graph convolution that achieves state-of-the-art performance on weather forecasting tasks;

- Conducting further analysis on learned local-pattern visualization, framework choice, local-space and map choices, and ablation.

# Method

Given $N$ correlated signals located on the sphere manifold $S^2$ at time $t$, we can represent the signals as a (directed) graph $\mathcal{G} = (\mathcal{V}, \mathcal{E}, \bm{A})$, where $\mathcal{V}$ is a node set with $\mathcal{V} = \{\bm{\mathrm{x}}_i^S = (x_{i,1}, x_{i,2}, x_{i,3})\in S^2: i=1,2,\ldots,N\}$, i.e., it records the positions of the $N$ nodes, which satisfy $||\bm{\mathrm{x}}_i^S||_2 = 1$. We denote positions of nodes in Euclidean space by $\bm{\mathrm{x}}^E$ and on the sphere by $\bm{\mathrm{x}}^S$. $\mathcal{E}$ is a set of edges and $\bm{A} \in \mathbb{R}^{N \times N}$ is the adjacency matrix, which can be asymmetric. The signals observed at time $t$ on the nodes of $\mathcal{G}$ are denoted by $\bm{F}^{(t)} \in \mathbb{R}^{N\times D}$. For the forecasting tasks, our goal is to learn a function $P(\cdot)$ approximating the true mapping from the past $T'$ observed signals to the future $T$ signals, that is $$\begin{align}
[\bm{F}^{(t-T')}, \ldots, \bm{F}^{(t)};\mathcal{G} ] \overset{P}{\longrightarrow} [\bm{F}^{(t+1)}, \ldots, \bm{F}^{(t+T)};\mathcal{G}].
\end{align}$$ In this paper, for meteorological datasets that do not provide the adjacency matrix, we construct it with the K-nearest neighbors algorithm based on the induced spherical distance between the nodes' spatial locations, which will be discussed later.
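A sketch of this graph construction, using arbitrary random sensor positions rather than the paper's datasets: each node is connected to its $K$ nearest neighbors under the great-circle distance $\arccos(\langle x, y\rangle)$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random sensor positions, normalized onto the unit sphere S^2.
N = 20
X = rng.normal(size=(N, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)

# Pairwise great-circle distances d(x, y) = arccos(<x, y>);
# clip guards against tiny floating-point excursions outside [-1, 1].
D = np.arccos(np.clip(X @ X.T, -1.0, 1.0))

# Directed KNN adjacency: connect each node to its K nearest neighbors.
K = 4
A = np.zeros((N, N))
for i in range(N):
    nbrs = np.argsort(D[i])[1:K + 1]   # index 0 is the node itself (d = 0)
    A[i, nbrs] = 1.0
```

The resulting $\bm{A}$ is generally asymmetric, matching the directed-graph formulation above.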

For notational simplicity, we omit the superscript $(t)$ when discussing spatial dependency. Denote the set of neighbors of center node $i$ by $\mathcal{N}(i) = \{j:(i,j) \in \mathcal{E}\}$, and note that $i \in \mathcal{N}(i)$. In Graph Neural Networks, $(W^l, \bm{b}^l)$ are the weight and bias parameters of layer $l$, and $\sigma(\cdot)$ is a non-linear activation function. The message-passing rule states that at layer $l$, the representation of node $i$ updates as $$\begin{align}
\bm{y}^{l}_i &= \sum_{j\in \mathcal{N}(i)}\omega_{i,j}\bm{h}^{l-1}_j; \label{eq:messgagepass}\\
\bm{h}^{l}_i &= \sigma(\bm{y}^{l}_i\bm{W}^l + \bm{b}^l) ,
\end{align}$$ where $\bm{h}^{l}_i$ is the representation of node $i$ after the $l$-th layer, with $\bm{h}^{0}_i = \bm{F}_i$, the observed graph signal on node $i$. Denote the neighborhood coordinate set of center node $i$ by $\mathcal{V}(i) = \{\bm{\mathrm{x}}^S_j: j\in \mathcal{N}(i)\}$. Then Eq. [\[eq:messgagepass\]](#eq:messgagepass){reference-type="ref" reference="eq:messgagepass"} represents the aggregation of messages from neighbors, which can also be regarded as a convolution operation on the graph, written as $$\begin{align}
(\Omega \star_{\mathcal{N}(i)} \bm{H}^{l-1})(\bm{\mathrm{x}}_i^S) &= \sum_{\bm{\mathrm{x}}^S_j\in \mathcal{V}(i)} \Omega(\bm{\mathrm{x}}^S_j; \bm{\mathrm{x}}^S_i)\bm{H}^{l-1}(\bm{\mathrm{x}}^S_j), \label{eq:graphconv}
\end{align}$$ where $\star_{\mathcal{N}(i)}$ means convolution on the $i$-th node's neighborhood, $\Omega: S^2 \times S^2 \rightarrow \mathbb{R}$ is the convolution kernel with $\Omega(\bm{\mathrm{x}}^S_j; \bm{\mathrm{x}}^S_i) = \omega_{i,j}$, and $\bm{H}^{l-1}$ is a function mapping each point on the sphere to its feature vector in the $l$-th representation space.
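One such message-passing layer can be sketched in NumPy as follows; the uniform kernel weights here are only a placeholder for $\omega_{i,j}$, and all shapes are toy values.

```python
import numpy as np

rng = np.random.default_rng(3)

N, D_in, D_out = 6, 4, 2
A = (rng.random((N, N)) < 0.4).astype(float)
np.fill_diagonal(A, 1.0)                   # each node is in its own neighborhood
omega = A / A.sum(axis=1, keepdims=True)   # placeholder kernel weights w_{ij}
H = rng.normal(size=(N, D_in))             # h^{l-1}
W = rng.normal(size=(D_in, D_out))
b = rng.normal(size=D_out)

Y = omega @ H                              # y_i = sum_j w_{ij} h_j  (aggregation)
H_next = np.tanh(Y @ W + b)                # h^l_i = sigma(y_i W + b)
```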

::: Example
**Example 1**. *The convolutional kernel used in DCRNN [@li2018diffusion] is $$\begin{align}
\Omega(\bm{\mathrm{x}}^S_j, \bm{\mathrm{x}}^S_i) =
&\exp(-d^2(\bm{\mathrm{x}}^S_i, \bm{\mathrm{x}}^S_j)/\tau),
\end{align}$$ where $d(\cdot,\cdot)$ is the distance between the two nodes, and $\tau$ is a hyper-parameter controlling the smoothness of the kernel.*
:::
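With some illustrative distances, this isotropic kernel assigns weights that decay with squared distance only, ignoring a neighbor's orientation relative to the center, which is precisely what the proposed conditional local kernel later generalizes:

```python
import numpy as np

# Great-circle distances from a center to three neighbors (illustrative, radians).
d = np.array([0.0, 0.3, 0.9])
tau = 0.5   # smoothness hyper-parameter

# DCRNN-style kernel: weight depends on squared distance only.
omega = np.exp(-d ** 2 / tau)
```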

To imitate meteorological patterns, the convolution kernel should take large values for neighbors that have great meteorological impact on the centers. For example, if heat flows from the south-east to the north-west, the kernel should give more weight to the nodes in the south-east when aggregating messages from neighbors. With a slight abuse of terminology, we treat the convolution kernels as equivalent to the meteorological patterns in local regions.

The signals are located on the earth's surface, which we regard as a sphere, and thus we introduce the notion of the sphere manifold to further develop our convolution method. The $M$-D sphere manifold is denoted by $S^M = \{\bm{\mathrm{x}}^S = (x_1, x_2,\ldots, x_{M+1})\in \mathbb{R}^{M+1}:||\bm{\mathrm{x}}^S|| = 1\}$. Convolution is usually operated on a plane, so we introduce the **local space**, an $M$-D Euclidean space, as the convolution domain.

::: Definition
**Definition 1**. *Define the local space centered at point $\bm{\mathrm{x}}$ as some Euclidean space denoted by $\mathcal{L}_{\bm{\mathrm{x}}}S^M$, with $\bm{\mathrm{x}} \in \mathcal{L}_{\bm{\mathrm{x}}}S^M$, which is homeomorphic to the local region centered at $\bm{\mathrm{x}}$. (For the formal definition, see Appendix A2.)*
:::

::: Example
**Example 2**. *The tangent space centered at point $\bm{\mathrm{x}}$ is an example of a local space, denoted by $\mathcal{T}_{\bm{\mathrm{x}}}S^M = \{\bm{\mathrm{v}}\in \mathbb{R}^{M+1}: <\bm{\mathrm{x}}, \bm{\mathrm{v}}> = 0\}$, where $<\cdot,\cdot>$ is the Euclidean inner product.*
:::

The **geodesics and induced distance** on the sphere are important both for defining the neighborhood of a node and for identifying the message-passing patterns. Intuitively, the greater the distance from one node to another, the less message should be aggregated from that node in graph convolution.

::: Proposition
**Proposition 1**. *Let $\bm{\mathrm{x}} \in S^{M}$, and let $\bm{\mathrm{u}} \in \mathcal{T}_{\bm{\mathrm{x}}}S^M$ be unit-speed. The unit-speed geodesic is $\gamma_{\bm{\mathrm{x}}\rightarrow\bm{\mathrm{u}}}(t) = \bm{\mathrm{x}} \cos t + \bm{\mathrm{u}} \sin t$, with $\gamma_{\bm{\mathrm{x}}\rightarrow\bm{\mathrm{u}}}(0) = \bm{\mathrm{x}}$ and $\dot \gamma_{\bm{\mathrm{x}}\rightarrow\bm{\mathrm{u}}}(0) = \bm{\mathrm{u}}$. The intrinsic shortest-distance function between two points $\bm{\mathrm{x}}, \bm{\mathrm{y}} \in S^M$ is $$\begin{align}
d_{S^M}(\bm{\mathrm{x}}, \bm{\mathrm{y}}) = \arccos(<\bm{\mathrm{x}},\bm{\mathrm{y}}>).
\end{align}$$*
:::

This distance function is usually called the great-circle distance on the sphere. In practice, the K-nearest neighbors algorithm used to construct the graph structure is conducted based on this spherical distance.

Once the local space of each center node is established, an **isometric map** $\mathcal{M}_{\bm{\mathrm{x}}}(\cdot): S^M \rightarrow \mathcal{L}_{\bm{\mathrm{x}}}S^M$ satisfying $||\mathcal{M}_{\bm{\mathrm{x}}}(\bm{\mathrm{y}})|| = d_{S^M}(\bm{\mathrm{x}},\bm{\mathrm{y}})$ can be used to map neighbor nodes on the sphere into the local space.

::: {#ex:logmap .Example}
**Example 3**. *The logarithmic map is usually used to map a neighbor node $\bm{\mathrm{x}}_j\in \mathcal{V}(i)$ on the sphere isometrically into $\mathcal{T}_{\bm{\mathrm{x}}_i}S^M$, which reads $$\begin{align*}
\log_{\bm{\mathrm{x}}_i}(\bm{\mathrm{x}}_j) &= d_{S^M}(\bm{\mathrm{x}}_i, \bm{\mathrm{x}}_j)\frac{P_{{\bm{\mathrm{x}}_i}}(\bm{\mathrm{x}}_j - \bm{\mathrm{x}}_i)}{||P_{{\bm{\mathrm{x}}_i}}(\bm{\mathrm{x}}_j - \bm{\mathrm{x}}_i)||} ,
\end{align*}$$ where $P_{{\bm{\mathrm{x}}_i}}(\bm{\mathrm{x}}) = \frac{\bm{\mathrm{x}}}{||\bm{\mathrm{x}}||} - <\frac{\bm{\mathrm{x}}_i}{||\bm{\mathrm{x}}_i||},\frac{\bm{\mathrm{x}}}{||\bm{\mathrm{x}}||}>\frac{\bm{\mathrm{x}}_i}{||\bm{\mathrm{x}}_i||}$ is the normalized projection operator.*
:::
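A minimal NumPy implementation of this logarithmic map, checking the isometry property $||\log_{\bm{\mathrm{x}}_i}(\bm{\mathrm{x}}_j)|| = d_{S^M}(\bm{\mathrm{x}}_i, \bm{\mathrm{x}}_j)$ on the unit sphere; the projection is written on the unnormalized difference, which gives the same direction as the normalized projection operator after the final normalization.

```python
import numpy as np

def log_map(x_i, x_j):
    """Logarithmic map of x_j into the tangent space at x_i (unit vectors)."""
    d = np.arccos(np.clip(x_i @ x_j, -1.0, 1.0))   # geodesic distance
    v = x_j - x_i
    p = v - (x_i @ v) * x_i                        # project onto tangent plane
    return d * p / np.linalg.norm(p)

x_i = np.array([0.0, 0.0, 1.0])   # north pole
x_j = np.array([1.0, 0.0, 0.0])   # a point on the equator
v = log_map(x_i, x_j)             # tangent vector of length pi/2
```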

After the neighbors of $\bm{\mathrm{x}}_i$ are mapped into the center node's local space through the isometric map, which reads $\bm{\mathrm{v}}_j = \mathcal{M}_{\bm{\mathrm{x}}_i}(\bm{\mathrm{x}}_j)$, the **local coordinate system** of each center node is set up through a transform mapping $\Pi_{\bm{\mathrm{x}}_i}(\bm{\mathrm{v}}_j) = \bm{\mathrm{x}}_j^{i'}$ for each $\bm{\mathrm{x}}_j \in \mathcal{V}(i)$. We call $\bm{\mathrm{x}}_j^{i'}$ the relative position of $\bm{\mathrm{v}}_j$ in the local coordinate system of the local space centered at $\bm{\mathrm{x}}_i$. As $\bm{\mathrm{x}}_j^{i'}$ always lies in the local space, which is Euclidean, the superscript $E$ is omitted. The mapping $\Pi_{\bm{\mathrm{x}}_i}(\cdot)$ is determined by $M$ orthogonal basis vectors chosen in the local coordinate system, i.e., $\{\bm{\xi}^1, \bm{\xi}^2, \ldots, \bm{\xi}^M\}$, which will be discussed later in the $S^2$ scenario for the meteorological application.

Given a center node ${\bm{\mathrm{x}}_i^E} \in \mathbb{R}^2$, from the perspective of the graph convolution defined in Eq. [\[eq:graphconv\]](#eq:graphconv){reference-type="ref" reference="eq:graphconv"}, the convolution on planar mesh grids such as image pixels is written as $$\begin{align}
(\Omega \star_{\mathcal{V}{(i)}} \bm{H})(\bm{\mathrm{x}}_i^E) &= \sum_{\bm{\mathrm{x}}^E} \Omega(\bm{\mathrm{x}}^E; \bm{\mathrm{x}}^E_i) \bm{H}(\bm{\mathrm{x}}^E)\delta_{\mathcal{V}(i)}(\bm{\mathrm{x}}^E)\notag\\
&=\sum_{\bm{\mathrm{x}}^E} \chi(\bm{\mathrm{x}}^E_i - \bm{\mathrm{x}}^E) \bm{H}(\bm{\mathrm{x}}^E)\delta_{\mathcal{V}(i)}(\bm{\mathrm{x}}^E),
\end{align}$$ where $\delta_{\mathcal{A}}(\bm{\mathrm{x}}) = 1$ if $\bm{\mathrm{x}} \in \mathcal{A}$, else $0$. For convolution on 2-D images, $\mathcal{V}(i) = \{\bm{\mathrm{x}}^E :\bm{\mathrm{x}}^E - \bm{\mathrm{x}}_i^E \in \mathbb{Z}^2\cap ([-k_1, k_1] \times [-k_2, k_2])\}$, where $k_1 > 0$ and $k_2 >0$ are the convolution views restricting how far away pixels are included in the neighborhood along the width and length axes, respectively. When $k_1, k_2 < +\infty$, the neighborhood set is limited, and thus the convolution is **local**, conducted on each node's local space with **local convolution kernel** $\chi(\cdot)$.

To extend the local convolution to generalized manifolds, we note that the local space of $\bm{\mathrm{x}}^E_i$ is $\mathcal{L}_{\bm{\mathrm{x}}_i^E}\mathbb{R}^2 = \{\bm{\mathrm{x}}^E :\bm{\mathrm{x}}^E - \bm{\mathrm{x}}_i^E \in[-k_1, k_1] \times [-k_2, k_2]\}$, so the isometric map satisfies $\bm{\mathrm{v}}^E = \mathcal{M}_{\bm{\mathrm{x}}_i}(\bm{\mathrm{x}}^E) = \bm{\mathrm{x}}^E - \bm{\mathrm{x}}^E_i$, and $\{-\bm{e}_{x}, -\bm{e}_y\}$ with $\bm{e}_{x}=(1,0)$ and $\bm{e}_{y}=(0,1)$ is the orthogonal basis of the local coordinate system. In conclusion, $$\begin{align}
\bm{\mathrm{x}}^{i'}=\Pi_{\bm{\mathrm{x}}_i^E}(\bm{\mathrm{v}}^E) = -\bm{\mathrm{v}}^E = \bm{\mathrm{x}}_i^E - \bm{\mathrm{x}}^E .
\end{align}$$ In this way, we obtain the local convolution on the 2-D Euclidean plane, which reads $$\begin{align}
(\Omega \star_{\mathcal{V}{(i)}} \bm{H})(\bm{\mathrm{x}}_i^E) &= \sum_{\bm{\mathrm{x}}^E} \chi(\bm{\mathrm{x}}^{i'}) \bm{H}(\bm{\mathrm{x}}^E)\delta_{\mathcal{V}(i)}(\bm{\mathrm{x}}^E).
\end{align}$$ In analogy to this, the local convolution on the 2-D sphere centered at node $\bm{\mathrm{x}}^S_i$ is defined similarly: $$\begin{align}
(\Omega \star_{\mathcal{V}{(i)}} \bm{H}) (\bm{\mathrm{x}}_i^S)
&=\sum_{\bm{\mathrm{x}}^S} \chi(\bm{\mathrm{x}}^{i'}) \bm{H}(\bm{\mathrm{x}}^S)\delta_{\mathcal{V}(i)}(\bm{\mathrm{x}}^S),
\end{align}$$ where $\mathcal{V}(i)$ is given by the graph structure, whose nodes can be mapped into $\bm{\mathrm{x}}_i^S$'s local space, and $\bm{\mathrm{x}}^{i'}$ is obtained by: $$\begin{align}
\bm{\mathrm{x}}^{i'}=\Pi_{\bm{\mathrm{x}}_i^S}(\bm{\mathrm{v}}^S) = \Pi_{\bm{\mathrm{x}}_i^S}(\mathcal{M}_{\bm{\mathrm{x}}_i^S}(\bm{\mathrm{x}}^S)) .
\end{align}$$ The following parts discuss how to elaborate

- $\mathcal{M}_{\bm{\mathrm{x}}_i^S}(\cdot)$ and $\Pi_{\bm{\mathrm{x}}_i^S}(\cdot)$: the isometry mapping neighbors into some local space of ${\bm{\mathrm{x}}_i^S}$, and the choice of orthogonal basis in the local coordinate system.

- $\chi(\cdot)$: the formulation of the convolution kernel to approximate and imitate the meteorological patterns.

In the following parts, all nodes are located on the sphere, so the superscript $S$ is omitted. We choose what we define as the **cylindrical-tangent space** and **horizon maps** (Fig. 1(a)) to construct local spaces and to map neighbors into them.

::: Definition
**Definition 2**. *For $\bm{\mathrm{x}}_i \in S^2$, the cylindrical-tangent space centered at $\bm{\mathrm{x}}_i$ reads $$\begin{align*}
\mathcal{C}_{\bm{\mathrm{x}}_i} S^2 = \{\bm{\mathrm{v}}\in \mathbb{R}^3:<\bm{\mathrm{v}}^-, \bm{\mathrm{x}}_i^-> = 0\},
\end{align*}$$ where $\bm{\mathrm{x}}^- = (x_1, x_2)$ takes the first two coordinates of a vector in $\mathbb{R}^3$.*
:::
|
| 97 |
+
|
| 98 |
+
::: Proposition
|
| 99 |
+
**Proposition 2**. *Similar to logarithmic map, the horizon map $\mathcal{H}_{\bm{\mathrm{x}}_i}(\cdot)$ is used to map the neighbor node $\bm{\mathrm{x}}_j\in \mathcal{V}(i)$ isometrically into $\mathcal{C}_{\bm{\mathrm{x}}_i} S^2$, which reads $$\begin{align*}
|
| 100 |
+
\mathcal{H}_{\bm{\mathrm{x}}_i}(\bm{\mathrm{x}}_j) = d_{S^2}(\bm{\mathrm{x}}_i, \bm{\mathrm{x}}_j)\frac{[P_{{\bm{\mathrm{x}}_i^-}}(\bm{\mathrm{x}}_j^- - \bm{\mathrm{x}}_i^-), x_{j,3} - x_{i,3}]}{||[P_{{\bm{\mathrm{x}}_i^-}}(\bm{\mathrm{x}}_j^- - \bm{\mathrm{x}}_i^-), x_{j,3} - x_{i,3}]||},
|
| 101 |
+
\end{align*}$$ where $[\cdot,\cdot]$ is the concatenation of vectors/scalars.*
|
| 102 |
+
:::
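As a concreteness check, the horizon map of Proposition 2 can be sketched in NumPy. This is a sketch under our reading of $P_{\bm{\mathrm{x}}_i^-}$ as removing the component along $\bm{\mathrm{x}}_i^-$ in the horizontal plane (so the image lies in $\mathcal{C}_{\bm{\mathrm{x}}_i}S^2$); the function name is ours, not the paper's.

```python
import numpy as np

def horizon_map(x_i, x_j):
    """Map neighbor x_j into the cylindrical-tangent space of x_i (a sketch).

    Both inputs are unit vectors in R^3. P removes the component of the
    horizontal difference along x_i^-, our reading of P_{x_i^-}.
    """
    d = np.arccos(np.clip(np.dot(x_i, x_j), -1.0, 1.0))  # spherical distance
    xi2, xj2 = x_i[:2], x_j[:2]
    u = xi2 / np.linalg.norm(xi2)              # horizontal direction of x_i
    diff = xj2 - xi2
    p = diff - np.dot(diff, u) * u             # P(x_j^- - x_i^-)
    v = np.concatenate([p, [x_j[2] - x_i[2]]]) # [P(...), z_j - z_i]
    return d * v / np.linalg.norm(v)           # rescale to geodesic length
```

The returned vector has norm equal to the spherical distance, and its first two coordinates are orthogonal to $\bm{\mathrm{x}}_i^-$, as the definition of the cylindrical-tangent space requires.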
|
| 103 |
+
|
| 104 |
+
<figure data-latex-placement="ht">
|
| 105 |
+
|
| 106 |
+
<figcaption>(a) shows the tangent space with logarithmic maps (top) and the cylindrical-tangent space with horizon maps (bottom). (b) shows the necessity of a unified standard for the choice of basis. <span class="math inline">x<sub><em>p</em></sub></span> from the east strongly affects both <span class="math inline">x<sub><em>i</em></sub></span> and <span class="math inline">x<sub><em>j</em></sub></span>, with the corresponding local patterns of the two center nodes shown in the heatmaps. However, if the basis is not unified, as in the example, the smoothness of the local convolution kernel is compromised. (c) shows the motivation for reweighting the angle scale. The angle scale <span class="math inline">$\frac{\psi}{2\pi}$</span> balances neighbors’ uneven contributions to the centers resulting from their irregular distribution.</figcaption>
|
| 107 |
+
</figure>
|
| 108 |
+
|
| 109 |
+
The reason for choosing the cylindrical-tangent space with horizon maps rather than the tangent space with logarithmic maps is that the former preserves the relative orientation on the geographic graticules of the earth's surface, which has explicit geophysical meaning in meteorology. Logarithmic maps distort the relative position in orientation on the graticules: for a node in the northern hemisphere, a neighbor located due east of it will appear to the north-east on its tangent plane after the logarithmic map. In comparison, the defined cylindrical-tangent space preserves both the relative orientation on graticules and the spherical distance after mapping. Detailed proofs are provided in Appendix B1 and empirical comparisons are given in Experiments 5.4.
|
| 110 |
+
|
| 111 |
+
As discussed, the cylindrical-tangent space is Euclidean, so in $S^2$ the transform $\Pi_{\bm{\mathrm{x}}_i}(\cdot)$ can be determined by two orthogonal bases, which are not unique. Since our method is mainly applied to spherical meteorological signals, we choose $\{\bm{e}_{\phi},\bm{e}_z\}$ as the two orthogonal bases in every local coordinate system of the cylindrical-tangent plane, so that every local space shares consistent South and North poles and preserves the relative position. For $\bm{\mathrm{x}}_i = (x_{i,1}, x_{i,2}, x_{i,3})$ and $\bm{\mathrm{v}} \in \mathcal{C}_{\bm{\mathrm{x}}_i}S^2$, let $\phi_i = \arctan{(x_{i,2}/x_{i,1})}$, and $\bm{\mathrm{x}}^{i'} = \Pi_{\bm{\mathrm{x}}_i}(\bm{\mathrm{v}}) = (\phi^{i'}, z^{i'})$, which is obtained by $$\begin{align}
|
| 112 |
+
\phi^{i'} = <\bm{\mathrm{v}}, \bm{e}_{\phi_i}>;\quad
|
| 113 |
+
z^{i'} = <\bm{\mathrm{v}}, \bm{e}_{z_i}>,
|
| 114 |
+
\end{align}$$ which correspond to the longitude and latitude coordinates on the sphere, and $$\begin{align}
|
| 115 |
+
\label{eq:orthobasis}
|
| 116 |
+
\bm{e}_{\phi_i} = (-\sin\phi_i, \cos\phi_i, 0); \quad
|
| 117 |
+
\bm{e}_{z_i} = (0, 0, 1).
|
| 118 |
+
\end{align}$$ The maps and transforms above cannot be applied at the South and North poles; we discuss this case in Appendix B2.
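The projection onto this shared basis is short in NumPy; the sketch below uses `arctan2` instead of $\arctan$ to resolve the quadrant of $\phi_i$, an implementation detail the text does not state.

```python
import numpy as np

def local_coords(x_i, v):
    """Project v in the cylindrical-tangent space of x_i onto the shared
    basis {e_phi, e_z}, giving the local coordinates (phi', z')."""
    phi_i = np.arctan2(x_i[1], x_i[0])  # azimuth of the center node
    e_phi = np.array([-np.sin(phi_i), np.cos(phi_i), 0.0])
    e_z = np.array([0.0, 0.0, 1.0])
    return np.dot(v, e_phi), np.dot(v, e_z)
```

Because $\bm{e}_{z}$ is the same axis in every local frame, all local spaces share the South and North poles, which is exactly the unified-basis property used later.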
|
| 119 |
+
|
| 120 |
+
Now we introduce the conditional local convolution, which is the core module in our model. We aim to formulate a kernel which is
|
| 121 |
+
|
| 122 |
+
- location-characterized: the meteorological patterns governed by the convolution kernel differ across the local regions of different center nodes.
|
| 123 |
+
|
| 124 |
+
- smooth: Patterns are broadly similar when the center nodes are close in spatial distance.
|
| 125 |
+
|
| 126 |
+
- common: The kernel is shared by different local spaces where the neighbors' spatial distribution is distinct.
|
| 127 |
+
|
| 128 |
+
In contrast to Example 1 of DCRNN, whose convolution kernel is predefined, we aim to propose a convolution kernel that can adaptively learn and imitate the location-characterized patterns of each local region centered at node $i$. A trivial way is to use a multi-layer neural network whose input is $\bm{\mathrm{x}}^{i'}$ to approximate the convolution kernel $\chi({\bm{\mathrm{x}}^{i'}})$. However, ${\bm{\mathrm{x}}^{i'}}$ as the input only represents the relative position and prevents the kernel from capturing the location-characterized patterns. For example, given two different center nodes whose neighbors' relative positions are exactly the same, the convolution kernels at the two locations will also coincide exactly, contrary to location-characterized patterns. Therefore, we propose a conditional kernel, which reads $\chi({\bm{\mathrm{x}}^{i'}} ; \bm{\mathrm{x}}_{i})$, meaning that the convolution kernel in a certain local region is determined by the center node $\bm{\mathrm{x}}_{i}$. A multi-layer feedforward network is used to approximate this term, as $$\begin{align}
|
| 129 |
+
\chi({\bm{\mathrm{x}}^{i'}} ; \bm{\mathrm{x}}_{i}) = \mathrm{MLP}([\bm{\mathrm{x}}^{i'} , \bm{\mathrm{x}}_{i}]). \label{eq:mlpkernel}
|
| 130 |
+
\end{align}$$
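A minimal NumPy sketch of this conditional kernel: the kernel value is an MLP over the concatenated relative position $\bm{\mathrm{x}}^{i'}$ and center location $\bm{\mathrm{x}}_{i}$. The layer sizes and random initialization are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(sizes):
    # random-initialized (weights, biases) for each layer
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def chi(x_rel, x_center, params):
    """Conditional kernel chi(x'; x_i) = MLP([x', x_i])."""
    h = np.concatenate([x_rel, x_center])
    for k, (W, b) in enumerate(params):
        h = h @ W + b
        if k < len(params) - 1:
            h = np.tanh(h)  # smooth activation (see the smoothness discussion)
    return h[0]
```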
|
| 131 |
+
|
| 132 |
+
We assume that the localized patterns of meteorological message flows have the property of **smoothness** -- two close center nodes' patterns of aggregating messages from neighbors should be similar. In terms of the convolution kernel, we define the smoothness of the kernel function as follows:
|
| 133 |
+
|
| 134 |
+
::: Definition
|
| 135 |
+
**Definition 3**. *The conditional kernel $\chi(\cdot|\cdot)$ is smooth, if it satisfies that for any $\epsilon > 0$, there exist $\delta>0$, such that for any two points $\bm{\mathrm{x}}_i, \bm{\mathrm{x}}_j \in {S^2}$ with $d_{S^2}(\bm{\mathrm{x}}_i, \bm{\mathrm{x}}_j) \leq \delta$, $$\begin{align*}
|
| 136 |
+
\sup_{\bm{\mathrm{v}}\in \mathcal{C}_{\bm{\mathrm{x}}_i} {S^2},\bm{\mathrm{u}}\in \mathcal{C}_{\bm{\mathrm{x}}_j} {S^2}\atop \Pi_{\bm{\mathrm{x}}_i}(\bm{\mathrm{v}}) = \Pi_{\bm{\mathrm{x}}_j}(\bm{\mathrm{u}})} |\chi(\Pi_{\bm{\mathrm{x}}_i}(\bm{\mathrm{v}});\bm{\mathrm{x}}_i) - \chi(\Pi_{\bm{\mathrm{x}}_j}(\bm{\mathrm{u}});\bm{\mathrm{x}}_j)|\leq \epsilon.
|
| 137 |
+
\end{align*}$$*
|
| 138 |
+
:::
|
| 139 |
+
|
| 140 |
+
The definition of smoothness of the location-characterized kernel is motivated by the fact that if the distance $d_{S^2}(\bm{\mathrm{x}}_{i}, \bm{\mathrm{x}}_{j})$ between two center nodes is very small, the meteorological patterns in the two local regions should differ little, and thus the kernel functions $\chi(\cdot;\bm{\mathrm{x}}_{i})$ and $\chi(\cdot;\bm{\mathrm{x}}_{j})$ should be almost identical.
|
| 141 |
+
|
| 142 |
+
The unified standard for the choice of orthogonal basis on the cylindrical-tangent plane avoids problems caused by path-dependent parallel transport [@cohen2019gauge] and contributes to the smoothness of the conditional kernel. This property is likely to be compromised without a unified standard for the orthogonal basis, as the following example illustrates.
|
| 143 |
+
|
| 144 |
+
::: Example
|
| 145 |
+
**Example 4**. *(shown in Fig. [\[fig:exampleunify\]](#fig:exampleunify){reference-type="ref" reference="fig:exampleunify"}.) For one node $\bm{\mathrm{x}}_{i}$, the orthogonal basis is $\{\mathbf{e}_{\phi_i}, \mathbf{e}_{z_i}\}$ as defined in Eq. [\[eq:orthobasis\]](#eq:orthobasis){reference-type="ref" reference="eq:orthobasis"}, while for another node $\bm{\mathrm{x}}_{j}$ close to it, the basis is $\{-\mathbf{e}_{\phi_i}, \mathbf{e}_{z_i}\}$. There exists a node $\bm{\mathrm{x}}_{p}$ to the east of both on the sphere, a neighbor of both, that has great meteorological impact on both of them. In one local coordinate system its first coordinate is positive, while in the other it is negative. Then, if the kernel is smooth, the neighbor $\bm{\mathrm{x}}_{p}$ from the east can never be assigned a large value in both local regions centered at nodes $i$ and $j$, violating the true meteorological patterns; otherwise the smoothness of the kernel is likely to be violated.*
|
| 146 |
+
:::
|
| 147 |
+
|
| 148 |
+
As such, by using $\mathrm{MLP}(\cdot)$ as the approximator with a smooth activation function such as $\tanh$ and unifying the standard for the choice of orthogonal basis, the smoothness property of the conditional kernel can be ensured. However, the irregular spatial distribution of discrete nodes conflicts with the continuous kernel function shared by different center nodes, which is discussed in the next part.
|
| 149 |
+
|
| 150 |
+
Because the kernel function is continuous and shared by different center nodes, when the spatial distribution of each node's neighbors is similar or even identical, e.g. when nodes are distributed on regular spatial grids in local spaces, the proposed conditional kernel takes both distance and orientation into consideration. However, the nodes are discrete and irregularly distributed on the sphere. Since the kernel is shared by all center nodes, the distinct spatial distributions of different center nodes' neighbors are likely to disrupt the smoothness of local patterns. An explicit example illustrates the resulting problems.
|
| 151 |
+
|
| 152 |
+
::: Example
|
| 153 |
+
**Example 5**. *(shown in Fig. [\[fig:examplereweight\]](#fig:examplereweight){reference-type="ref" reference="fig:examplereweight"}.) The two center nodes are close in distance, but the spatial distributions of their neighbors differ. The right center has two neighbors located in the south-west, while the left center has one. If the kernel is smooth, the message from the south-west flowing into the right center will be about twice that flowing into the left.*
|
| 154 |
+
:::
|
| 155 |
+
|
| 156 |
+
To reweight the convolution kernel for each $\bm{\mathrm{x}}_{j} \in \mathcal{V}(i)$, we consider both angle and distance scales. We first convert its Cartesian representation $\bm{\mathrm{x}}_j^{i'} = (\phi^{i'}_j, z^{i'}_j)$ in the cylindrical-tangent space of $\bm{\mathrm{x}}_i$ into polar coordinates $(\varphi^{i'}_j, \rho^{i'}_j)$, where $\varphi^{i'}_j = \arctan (z^{i'}_j/\phi^{i'}_j)$ and $\rho^{i'}_j = \sqrt{(z^{i'}_j)^2 + (\phi^{i'}_j)^2}$. Note that $\rho^{i'}_j$ equals the geodesic distance between the two nodes on the sphere. In terms of angle, we calculate the **angle bisector** of every pair of adjacent nodes in the neighborhood according to $\varphi^{i'}_j$. We denote the angle between the two angular bisectors adjacent to $\bm{\mathrm{x}}_j^{i'}$ by $\psi^{i'}_j$ (as shown in the lower-right subfigure of Fig. [\[fig:examplereweight\]](#fig:examplereweight){reference-type="ref" reference="fig:examplereweight"}), so the angle scale is written as $\psi^{i'}_j/2\pi$. The distance scale is obtained as in DCRNN in Example 1, which reads $\exp(- (\rho^{i'}_j)^2/\tau)$, where $\tau$ is a learnable parameter.
|
| 157 |
+
|
| 158 |
+
To sum up, combining the two scaling terms with Eq. [\[eq:mlpkernel\]](#eq:mlpkernel){reference-type="ref" reference="eq:mlpkernel"}, the final formulation of the smooth conditional local kernel in the case of irregular spatial distribution reads $$\begin{align}
|
| 159 |
+
\chi({\bm{\mathrm{x}}^{i'}_j} ; \bm{\mathrm{x}}_{i}) = \frac{\psi^{i'}_j}{2\pi}\exp(- \frac{(\rho^{i'}_j)^2}{\tau})\mathrm{MLP}([\bm{\mathrm{x}}^{i'}_j , \bm{\mathrm{x}}_{i}]). \label{eq:clckernel}
|
| 160 |
+
\end{align}$$
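The two scaling terms can be computed per center node as follows. This NumPy sketch sorts the neighbors by polar angle and allots each one the angle between its two flanking bisectors, so the angle scales sum to one; the function and variable names are ours.

```python
import numpy as np

def reweight_scales(phi, z, tau=1.0):
    """Angle scale psi/(2*pi) and distance scale exp(-rho^2/tau) for the
    neighbors of one center node, given their local coordinates (phi, z)."""
    varphi = np.arctan2(z, phi)          # polar angles of the neighbors
    rho = np.hypot(phi, z)               # geodesic distances rho
    order = np.argsort(varphi)
    a = varphi[order]
    gaps = np.diff(np.concatenate([a, [a[0] + 2 * np.pi]]))  # wrap-around gaps
    psi = 0.5 * (gaps + np.roll(gaps, 1))  # angle between the two flanking bisectors
    angle_scale = np.empty_like(psi)
    angle_scale[order] = psi / (2 * np.pi)
    dist_scale = np.exp(-rho ** 2 / tau)
    return angle_scale, dist_scale
```

For four neighbors spaced evenly at the cardinal directions, every angle scale is $1/4$, matching the intuition that a regular grid needs no reweighting.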
|
| 161 |
+
|
| 162 |
+
<figure data-latex-placement="ht">
|
| 163 |
+
<p><br />
|
| 164 |
+
</p>
|
| 165 |
+
<figcaption>Different geographic sample spaces and local patterns in meteorology and traffic.</figcaption>
|
| 166 |
+
</figure>
|
| 167 |
+
|
| 168 |
+
The proposed convolution is inapplicable to traffic forecasting. One reason is that smoothness is not a reasonable property of local traffic-flow patterns, i.e. great differences may exist between the traffic patterns of two nearby regions: an important transportation hub exit may lie between them, so their patterns are likely to differ considerably. Besides, because our convolution kernel is continuous in the spatial domain, the continuity in orientation of the local convolution kernel has no physical meaning in irregular traffic networks. In essence, the irregular structure of the road network restricts traffic flows to the road direction, preventing vehicles from crossing road boundaries, so the geographic sample space is restricted to the road network and traffic can only flow along roads. In comparison, meteorological flows such as heat and wind diffuse freely over the earth without boundaries, and the geographic sample space is the whole earth surface, enabling the local patterns to satisfy continuity and smoothness.
|
| 169 |
+
|
| 170 |
+
<figure id="fig:overallarchi" data-latex-placement="htb">
|
| 171 |
+
<embed src="workflows_clcrn.pdf" style="width:6.2in" />
|
| 172 |
+
<figcaption>Overall workflows and architecture of CLCRN.</figcaption>
|
| 173 |
+
</figure>
|
| 174 |
+
|
| 175 |
+
The temporal dynamics are modeled as in DCRNN, which replaces the fully-connected layers in recurrent neural network cells with graph convolution layers. Using the kernel proposed in Eq. [\[eq:clckernel\]](#eq:clckernel){reference-type="ref" reference="eq:clckernel"}, we obtain the GRU cell constituting the conditional local convolution recurrent network (CLCRN), whose overall architecture is shown in Fig. [1](#fig:overallarchi){reference-type="ref" reference="fig:overallarchi"}.
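The substitution described above can be illustrated with a single GRU step in which a graph aggregation `A @ ·` replaces the fully-connected layers. Here `A` is a plain aggregation matrix standing in for the learned conditional local convolution, and the parameterization is a simplified assumption, not the paper's exact cell.

```python
import numpy as np

def gconv_gru_step(h, x, A, params):
    """One GRU step with graph aggregation (A @ .) in place of the
    fully-connected layers, in the spirit of DCRNN. A minimal sketch.
    params: three matrices, each mapping (d_x + d_h) -> d_h."""
    Wz, Wr, Wh = params
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    xh = np.concatenate([x, h], axis=-1)
    z = sigmoid(A @ xh @ Wz)                                 # update gate
    r = sigmoid(A @ xh @ Wr)                                 # reset gate
    cand = np.tanh(A @ np.concatenate([x, r * h], axis=-1) @ Wh)
    return (1 - z) * h + z * cand                            # new hidden state
```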
|
| 176 |
+
|
| 177 |
+
The overall neural network architecture for multi-step forecasting is implemented with the sequence-to-sequence framework [@Sutskever2014seq2seq], where the encoder is fed with previously observed time series and the decoder generates the predictions. By setting a target function of the predictions and ground-truth observations, such as the mean squared error, we can use backpropagation through time to update the parameters during training. More details are given in Appendix C.
|
2104.06313/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="www.draw.io" modified="2020-02-02T22:48:48.600Z" agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.88 Safari/537.36" etag="y-34s3Ztv-MExT72TDA5" version="12.6.4" type="google"><diagram id="SRKKfmjEzZyQbj_6AqRg" name="Page-1">7V1bc6M2FP41nmkf1oNuXB5z2/ZhM7NdP7Tblx0Csk0XGxfjxO6vr7ABg6TEigOIRPZkJkbGMnzf0ZG+oyMxQjeL7W+pv5rfJyGNR9AKtyN0O4IQYGizf3nJ7lDiOEXBLI3C4qRjwST6jxaFVlG6iUK6bpyYJUmcRatmYZAslzTIGmV+miZPzdOmSdz81ZU/o0LBJPBjsfTPKMzmh1KXWMfy32k0m5e/DKzik4VfnlxUsZ77YfJ0KNqfg+5G6CZNkuzwbrG9oXEOXonLoaLPz3xaXVhKl5nKF+Dk7wXdrjeTpwey+Da5+hH9m34i8FDNox9vijsurjbblRCkyWYZ0rwWa4Sun+ZRRicrP8g/fWKks7J5tojZEWBv11ma/KygQqxkGsXxTRIn6b42FBLqhrg6s/aJCx+QbbNPikuiaUa3z94sqCBktkeTBc3SHTul+AK0CxoKs8OoOH46klhyOK/xh73CdAqzmVU1H5FlbwpwXwM0Og00q4aZNT0Nsr9eHWx9Gm1zYniMwYMPKJRhbFn23dXn/BvJMquVT/evdrAHoIk9dEXsgQT7sqx97LFB2DsN7JGlG3tiDvaWNTDsbXOwR0PzOY6x2Ou3e9cc7HmfA3Vj75mDPRwY9uWY1wDsPc7de7qhB8ZAz7sc6OjGXkHDfhTsydBcjkGyFtkDw94gWct3tdjRjL1Bshbz2Ou2e3NkrctBD3RDb46q9YZm9eaIWmdog/s+NW3oWpaDZNBfEcvCVr+De6x7kFPaggnYc/MmGGHN2PcpajVjD4dm932KWr3Ye80pK4w8zdD3qWkHZvZYt8sxSNPigQ1zHIM0LR9H0z1l5ZijabnRPZJk5vQLvTmaFlhcdohuUeuYI2r5WA7GQDP25qhaAXtbM/bl9fSXdklBSKgjI8CzHeR3lHbJNK0AtCztEgK3K6TNmZjlrbwCVZuV96lhB4Y90o29OROzAvZEN/bmiFgBe0c39uaIWAF7Tzf25ohYcWZWN/bmqFgBe+19rbkqFmnva1VU7DK8ypcCsqNlsqRdKScaCksJT8J6QheVZSmN/Sx6bFYvw7L4ha9JxH74WVmGXDgmzUrWySYNaPG9IydCVVUKyvNVZX46o5lQ1Z7e6tbPZ9xT0M4fNG6BPKi3tXnmzAgLMSOgG3sFNR1s0sd9yCjHt+b2gthfr6PglOdj0KS7v9iBNbZgefx9f4xJeXybA2RVR7v60VeaRuxuaVoUqjpGaB2cxkuO3tLpQQE/a8HTrOo/Iec/y1Xbp7wnY9Lf1U5b5SesO/GvCpGD1uysaWSnbKxFczosfH//5lTm6FQVeWPLq73eaFxqRlzO4T17lXxWWbEq+Gi5hyto144VojBt2fEn5jDtpi277sVjNjhHnC0qmzg3eCUuUhxxvtasgSe/ZFVfjq1GM+jIrBUCXBf33LLt4nPdM78Nif3W3n5IDlYh3Lee+6v8LbPIeHed+sHPnPZTyuA49Sk30DTJGL3JMve8wD2a3psG/AjgBoayuUsksbnu5i49MaY32axWSZozP6GZgDa7+UzWuEuJVEQ/6jqrKPLjaJbDGTCw8r7oOocyCvz4qvhgEYVh/Jyoa05V56Ks2MiJQVMcFxdptUEUn7wlWxUjk2b8ULs9ohQCgO+qJZCmN4FiRq6sJXS2eZInBvn+2ND9JV+awbGTqsZF2hpCtXykxtS9/0+S
Rll+5b9A69d3wlYLBJUrtqqUACSyY7m9siOG7+6jZcUOMYmcpm7GjtjZ900OFMi5W0XrJKTmsAJtrueR5MTjfh2aOXk0BDtj2MQf2jKfJeLfWdAbWObk0sjwx9rxNyefhkAk4E8kI6p+8Tcnp4Yg0f5d7fZvTl4NsUX8He34m5NbI/M/jnb/Y85+foQM0P8AUVB/WPwl/odox18h46UM7U1jui2mWk7OuvBJZvuXDHlkIw+Fr5hd0TVpgvkkM8KRojppQvjZZb6ijhPMABA132TOYA5H+3jjTbJ8ZO+++DumoHlbaAZs37pqazoNAs970SzqKtxuRoS9luLBBLjjEpNqxh9URTXzgo49LrusuomVO5m03z7L5mhARhohqAK3okH3+ACas1kGwZaAPxHjUz3jb1J8BPL443LOVh/+fcZHBmf/uHzEgD78VeIjp8ZiXBcqdsujgY+9iAt4aqp9Nl47/LLx6bq6HoFBlajLx2fVEQTR2YNqWwyu9T6uhiqxHBNZxdA6dz2OhFdJbV0zi1SiFB+eWdsRA3i2dy6zDlKprXNmFeIf6TxZPGzWZ+ldjvRBDTgdLGHAliXZ9DrkQSqS18TGdnbnKGlqvXeOSEFId9fQ9CoLSUNDWHszU1B2WRr5y9mz+TKvYqSeyrmP3A3dF+oX3+gi/uSe8A0DSokv1DGgVBCAfTa+wflH/ZEXdFFzI7mas8/eXUGm5sTaOm98r9tT44MyK3Or5zMrc6v9M4tFnT7xF6s4Ws5E9zrIJOOmHYF2/CsgzUd7Aekj7bBoat3lHWNRd3+mfrZJKSu822apH+z7K4NZk7heKXG2K52T7pA7UaDXFsHc5O6S/f9GVylds3s9OKnhE4lbGszwj/eRbSHV76ql8qkH8nUxF8K4fBzZruE9E9bprKtEYOcvwY9BCd4SettwdcAW5uQku96Vj2erk1CWdUBCp6npAyRBEiWUPLxAtj6J3z+gRRJEFbb3V9GUOZDDCKGXTLUgpA/ug4y3PjPVXFtG0ViSRV1t79VPmhpuQVK9HMXI7b62fxdr+KPG5h2e44707kdT2KouPefyzwx3uCpUpZxX7mte7bvbs4ojoor7WMY0FBNB+OzZW8+1x57nCtUx5Kq9udyezUay+luUJcXGCqYObrH2zUQAkchHUY2YzpMs/t0zTyoi5OKG3+6GcQlsG06YVTbGGp1wC1OWF6NRMRpb2CbwTWbDqtPZd4vzobzMG3xP0Fkg2CNAKRDcbwSftDA/qtDSt1F2aOikOPpetmL2/tjA84PX79N4aDEv3aVeTQeqzNNqO/ozU8CAJYz9+27ishABzv/IzcLP5qwBjpzrLyPn9gf7HzRbv3NbnPs+3EALjV6kXhIQbWmvIHaYJklWJ5zd5fw+CWl+xv8=</diagram></mxfile>
|
2104.06313/main_diagram/main_diagram.pdf
ADDED
|
Binary file (30.7 kB). View file
|
|
|
2104.06313/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,149 @@
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
In many real-world NLP applications, the collected data follow a skewed distribution [\(Deng et al.,](#page-9-0) [2009;](#page-9-0) [Fernández et al.,](#page-9-1) [2013;](#page-9-1) [Yan et al.,](#page-10-0) [2017\)](#page-10-0), i.e., data from a few classes appear much more frequently than those of other classes. For example, tweets related to incidents such as shootings or fires are usually rarer than those about sports or entertainment. These data instances often represent objects of interest, as their rareness may carry important and useful knowledge [\(He](#page-9-2) [and Garcia,](#page-9-2) [2009;](#page-9-2) [Sun et al.,](#page-10-1) [2007;](#page-10-1) [Chen and Shyu,](#page-9-3) [2011\)](#page-9-3). However, most learning algorithms tend to utilize them inefficiently due to their disadvantage in the population [\(Krawczyk,](#page-9-4) [2016\)](#page-9-4). Hence, learning discriminative models under an imbalanced class distribution is an important and challenging task for the machine learning community.
|
| 4 |
+
|
| 5 |
+
Solutions proposed in previous literature can be generally divided into three categories [\(Krawczyk,](#page-9-4) [2016\)](#page-9-4): (1) *Data-level methods* that employ under-sampling or over-sampling techniques to balance the class distributions [\(Barua et al.,](#page-9-5) [2014;](#page-9-5) [Smith et al.,](#page-9-6) [2014;](#page-9-6) [Sobhani et al.,](#page-9-7) [2014;](#page-9-7) [Zheng et al.,](#page-10-2) [2015\)](#page-10-2). (2) *Algorithm-level methods* that modify existing learners to alleviate their bias towards the majority classes. The most popular branch is the *cost-sensitive* algorithms, which assign a higher cost to misclassifying minority class instances [\(Díaz-Vico et al.,](#page-9-8) [2018\)](#page-9-8). (3) *Ensemble-based methods* that combine the advantages of data-level and algorithm-level methods by merging data-level solutions with classifier ensembles, resulting in robust and efficient learners [\(Galar et al.,](#page-9-9) [2012;](#page-9-9) [Wang et al.,](#page-10-3) [2015\)](#page-10-3).
|
| 6 |
+
|
| 7 |
+
Despite the success of these approaches on many applications, some of their drawbacks have been observed. Resampling-based methods need to either remove many samples from the majority class or introduce a large number of synthetic samples into the minority class, which may respectively lose important information or significantly increase the adverse correlation among samples [\(Wu et al.,](#page-10-4) [2017\)](#page-10-4). In cost-sensitive approaches, the actual cost values are difficult to set and are often not given by experts beforehand [\(Krawczyk,](#page-9-4) [2016\)](#page-9-4). Also, how to guarantee and utilize the diversity of classification ensembles is still an open problem in ensemble-based methods [\(Wu et al.,](#page-10-4) [2017;](#page-10-4) [Huo](#page-9-10) [et al.,](#page-9-10) [2016\)](#page-9-10).
|
| 8 |
+
|
| 9 |
+
In this paper, we propose a novel *set convolution* (SetConv) operation and a new training strategy named *episodic training* to assist learning from imbalanced class distributions. The proposed solution naturally addresses the drawbacks of existing methods. Specifically, SetConv explicitly learns the weights of convolution kernels based on the intra-class and inter-class correlations, and uses the learned kernels to extract discriminative features from the data of each class. It then compresses these features into a single class representative, which is later applied for classification. Thus, SetConv helps the model *ignore sample-specific noisy information* and *focus on the latent concept that is not only common to different samples of the same class but also discriminative against other classes.*

<span id="page-1-1"></span><span id="page-1-0"></span>

Figure 1: Overview of the proposed approach. (a) The training procedure of SetConv. At each iteration, SetConv is fed with an episode to evaluate the classification loss for model update. Each episode consists of a support set and a query set. The support set is formed by a group of samples in which the imbalance ratio is preserved. The query set contains one sample from each class. (b) The post-training step of SetConv, performed only once after the main training procedure. In this step, we extract a representative for each class from the training data and later use them for inference. Here we only perform inference with the trained model and do not update it. (c) The inference procedure of SetConv. Each query sample is compared with every class representative to determine its label.

On the other hand, in episodic training, we assign equal weights to different classes and do not perform resampling on the data. Moreover, at each iteration during training, the model is fed with an episode formed by a set of samples in which the class imbalance ratio is preserved. This encourages the model to learn to *extract discriminative features even when the class distribution is highly unbalanced*.
|
| 16 |
+
|
| 17 |
+
Building models with SetConv and episodic training has several additional benefits:
|
| 18 |
+
|
| 19 |
+
- (1) *Data-Sensitive Convolution.* By utilizing SetConv, each input sample is associated with a set of weights that are estimated based on its relation to the minority class. This data-sensitive convolution helps the model to customize the feature extraction process for each input sample, which potentially improves the model performance.
|
| 20 |
+
- (2) *Automatic Class Balancing.* At each iteration, no matter how many samples of a class are fed into the model, SetConv always extracts the most discriminative information from them and compresses it into a single distributed representation. Thus, the subsequent classifier, which takes these class representatives as input, always perceives a balanced class distribution.
|
| 23 |
+
|
| 24 |
+
(3) *No dependence on unknown prior knowledge.* The only prior knowledge needed in episodic training is the class imbalance ratio, which can be easily obtained from data in real-world applications.
|
| 25 |
+
|
| 26 |
+
# Method
|
| 27 |
+
|
| 28 |
+
Our goal is to develop a classification model that works well when the class distribution is highly unbalanced. For simplicity, we first consider a binary classification problem and later extend it to the multi-class scenario. As shown in Fig. [1a,](#page-1-0) our model is composed of a SetConv layer and a classification layer. At each iteration during training, the model is fed with an *episode* sampled from the training data, which is composed of a support set and a query set. The support set preserves the imbalance ratio of the training data, and the query set contains one sample from each class. Once the SetConv layer receives an episode, it extracts features for every sample in the episode and produces a representative for each class in the support set. Then, each sample in the query set is compared with these class representatives in the classification layer to determine its label and evaluate the classification loss for model update. We refer to this training procedure as *episodic training*.
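One way to sample such an episode is sketched below; the helper and its arguments are illustrative, not the paper's code. The support set keeps the training imbalance ratio, and the query set takes one sample per class.

```python
import random

def sample_episode(data_by_class, ratio, support_size):
    """Sample one training episode (a sketch of the episodic strategy).

    data_by_class: {'maj': [...], 'min': [...]} pools of samples.
    ratio: N_maj / N_min in the training data, preserved in the support set.
    """
    n_min = max(1, round(support_size / (1 + ratio)))  # minority share
    n_maj = support_size - n_min                       # majority share
    support = {'maj': random.sample(data_by_class['maj'], n_maj),
               'min': random.sample(data_by_class['min'], n_min)}
    query = {c: random.choice(data_by_class[c]) for c in data_by_class}
    return support, query
```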
|
| 29 |
+
|
| 30 |
+
We choose episodic training for the following reasons: (1) It encourages the SetConv layer to learn to extract discriminative features even when the class distribution of the input data is highly unbalanced. (2) Since the episodes are randomly sampled from the data with significantly different configurations of support and query sets (i.e., the data forming these sets vary from iteration to iteration), the SetConv layer is required to capture the underlying class concepts that are common across episodes.
|
| 31 |
+
|
| 32 |
+
After training, a post-training step is performed once to extract a representative for each class from the training data, which will later be used for inference (Fig. [1b\)](#page-1-1). It is conducted by randomly sampling a large subset of the training data (referred to as $\mathcal{S}_{post}$) and feeding it to the SetConv layer. *Note that in this step we only perform inference using the trained model and do not update it.* We can conduct this operation because the SetConv layer has learned to capture the class concepts, which are insensitive to the episode configuration during training. We demonstrate this in experiments and the result is shown in Section [4.6.](#page-8-0)
|
| 33 |
+
|
| 34 |
+
<span id="page-3-1"></span>

Figure 2: Relations between the input samples and a pre-selected minority class anchor are used by SetConv to estimate both intra-class and inter-class correlations.

The inference procedure of the proposed approach is straightforward (Fig. 1c). For each query sample, we extract its feature via the SetConv layer and then compare it with the class representatives obtained in the post-training step. The class most similar to the query is assigned as the predicted label.
|
| 41 |
+
|
| 42 |
+
In many real-world applications, the minority class instances often carry important and useful knowledge that need intensive attention by the machine learning models (He and Garcia, 2009; Sun et al., 2007; Chen and Shyu, 2011).
|
| 43 |
+
|
| 44 |
+
Based on this prior knowledge, we choose to design the SetConv layer such that the feature extraction process focuses on the minority class. We achieve this by estimating the weights of the SetConv layer based on the relation between the input samples and a pre-selected minority class anchor. This anchor can be freely determined by the user. In this paper, we adopt a simple option, i.e., average-pooling of the minority class samples. Specifically, for each input variable, we compute its mean value across all minority class samples in the training data. This is executable because the minority-class samples are usually limited in real-world applications<sup>1</sup>. As shown in Figure 2, this weight estimation method assists the SetConv layer in capturing not only the intra-class correlation of the minority class, but also the inter-class correlation between the majority and minority classes.
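Under the simple option described above, the anchor is just an average pool over minority samples; the optional subsampling mirrors the footnote about large minority classes. Function and parameter names are ours.

```python
import numpy as np

def minority_anchor(X_min, max_samples=None, seed=0):
    """Average-pooling anchor Y over minority-class samples (a sketch).
    Optionally subsample when the minority class is large."""
    if max_samples is not None and len(X_min) > max_samples:
        idx = np.random.default_rng(seed).choice(len(X_min), max_samples,
                                                 replace=False)
        X_min = X_min[idx]
    return X_min.mean(axis=0, keepdims=True)  # shape (1, d)
```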
|
| 45 |
+
|
| 46 |
+
Suppose $\mathcal{E}_t = \{\mathcal{S}_t, \mathcal{Q}_t\}$ is the episode sent to the SetConv layer at iteration t, where $\mathcal{S}_t = (X_{maj} \in \mathcal{R}^{N_1 \times d}, X_{min} \in \mathcal{R}^{N_2 \times d})$ is the support set and $\mathcal{Q}_t = (q_{maj} \in \mathcal{R}^{1 \times d}, q_{min} \in \mathcal{R}^{1 \times d})$ is the query set. In general, $X_{maj}, X_{min}, q_{maj}$ and $q_{min}$ can be considered as a sample set of size $N_1, N_2, 1$ and 1 respectively. For simplicity, we abstract this sample set into $X \in \mathcal{R}^{N \times d}, N \in \{N_1, N_2, 1\}$ .
|
| 47 |
+
|
| 48 |
+
Recall that the standard discrete convolution is:
|
| 49 |
+
|
| 50 |
+
$$h[n] = (f \star g)[n] = \sum_{m=-M}^{M} f[m]\,g[n-m]$$
|
| 51 |
+
(1)
|
| 52 |
+
|
| 53 |
+
Here, f and g denote the feature map and kernel weights respectively.
|
| 54 |
+
|
| 55 |
+
Similarly, in our case, we define the set convolution (SetConv) operation as:
|
| 56 |
+
|
| 57 |
+
<span id="page-3-2"></span>
|
| 58 |
+
$$h[Y] = \frac{1}{N} \sum_{i=1}^{N} X_i \cdot g(Y - X_i)$$
|
| 59 |
+
$$= \frac{1}{N} \left( X \circ g(Y - X) \right)$$
|
| 60 |
+
(2)
|
| 61 |
+
|
| 62 |
+
where $Y \in \mathcal{R}^{1 \times d}$ , $g(Y - X) \in \mathcal{R}^{N \times d \times d_o}$ and $h[Y] \in \mathcal{R}^{1 \times d_o}$ denote the minority class anchor, kernel weights and the output embedding respectively. Here, $\circ$ is the tensor dot product operator, i.e., for every $i \in \{1, 2, \dots, d_o\}$ , we compute the dot product of X and g(Y - X)[:,:,i].
|
| 63 |
+
|
| 64 |
+
Unfortunately, directly learning g(Y-X) is memory intensive and computationally expensive, especially for large-scale high-dimensional data. To overcome this issue, we introduce an efficient method to approximate these kernel weights. Instead of taking X as a set of d-dimensional samples, we stack these samples and consider it as a giant dummy sample $X' = Concat(X) \in \mathcal{R}^{1 \times Nd}$ . Then, Eq. 2 is rewritten as
|
| 65 |
+
|
| 66 |
+
$$h[Y] = \frac{1}{N} \Big( X' \cdot g'(Y - X) \Big) \tag{3}$$
|
| 67 |
+
|
| 68 |
+
where $g'(Y-X) \in \mathcal{R}^{Nd \times d_o}$ is the transformed kernel weights. To efficiently compute g'(Y-X), we propose to approximate it as the *Khatri-Rao* product<sup>2</sup> (Rabanser et al., 2017) of two individual components, i.e.,
|
| 69 |
+
|
| 70 |
+
$$g'(Y - X) = g_1(Y - X) \circledast g_2(W)$$
|
| 71 |
+
|
| 72 |
+
$$= MLP(Y - X; \theta) \circledast SoftMax(W, 0)$$
|
| 73 |
+
(4)
|
| 74 |
+
|
| 75 |
+
<span id="page-3-0"></span><sup>&</sup>lt;sup>1</sup>Otherwise, we may sample a subset from the minority class samples to compute the anchor.
|
| 76 |
+
|
| 77 |
+
<span id="page-3-3"></span><sup>&</sup>lt;sup>2</sup>https://en.wikipedia.org/wiki/Kronecker\_product

<span id="page-4-0"></span>

Figure 3: The computation graph of the SetConv layer. Here Y is a minority class anchor. $W \in \mathcal{R}^{d \times d_o}$ is a weight matrix to learn that records the correlation between the input and output variables. Specifically, the $i_{th}$ column of $g_2(W)$ gives the weight distribution over input features for the $i_{th}$ output feature. It is indeed a feature-level attention matrix. In addition, we estimate another data-sensitive weight matrix $g_1(Y-X)$ from the input data. The final convolution weight tensor is simply the Khatri-Rao product of $g_1(Y - X)$ and $g_2(W)$ .
where $W \in \mathcal{R}^{d \times d_o}$ is a weight matrix that represents the correlation between input and output variables. $g_2(W)$ takes the softmax over the first dimension of W and is, in effect, a feature-level attention matrix. The $i_{th}$ column of $g_2(W)$ provides the weight distribution over input features for the $i_{th}$ output feature. On the other hand, $g_1(Y-X)$ is a data-sensitive weight matrix estimated from the input data via an MLP by considering their relation to the minority class anchor. Similar to data-level attention, $g_1(Y - X)$ helps the model customize the feature extraction process for input samples, which potentially improves model performance. Figure 3 shows the detailed computation graph of the SetConv layer.
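Eqs. 3-4 can be sketched in a few lines of NumPy. This is a shape-level illustration only: the learned MLP $g_1$ is replaced by a random linear map, which is an assumption made here for brevity:

```python
import numpy as np

N, d, d_o = 5, 4, 3
rng = np.random.default_rng(1)

X = rng.normal(size=(N, d))
Y = X.mean(axis=0, keepdims=True)      # minority-class anchor
W = rng.normal(size=(d, d_o))          # learnable correlation matrix

def softmax(a, axis=0):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

g1 = (Y - X) @ rng.normal(size=(d, d_o))   # stand-in for MLP(Y - X; theta), (N, d_o)
g2 = softmax(W, axis=0)                    # feature-level attention, (d, d_o)

# Khatri-Rao (column-wise Kronecker) product: column j is kron(g1[:, j], g2[:, j]).
g_prime = np.einsum('nk,dk->ndk', g1, g2).reshape(N * d, d_o)

X_stacked = X.reshape(1, N * d)            # the "giant dummy sample" X'
h = (X_stacked @ g_prime) / N              # Eq. 3, shape (1, d_o)
```

Only $N \cdot d_o + d \cdot d_o$ entries are parameterized instead of the full $N \cdot d \cdot d_o$ kernel, which is the source of the memory savings.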
**Discussion:** An important property of the SetConv layer is *permutation invariance*, i.e., it is insensitive to the order of the input samples. As long as the input samples are the same, the SetConv layer always produces the same feature representation, no matter in which order they are fed to the model. Mathematically, letting $\pi$ denote an arbitrary permutation matrix, we have $SetConv(\pi X) = SetConv(X)$. The detailed proof of this property is provided in the supplementary material.
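The claim can be checked numerically under the Eq. 2-4 formulation, since every sample enters only through a sum over the sample axis (the names below are illustrative stand-ins, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(2)
N, d, d_o = 6, 4, 3
W = rng.normal(size=(d, d_o))      # shared attention weights, fixed across calls
M = rng.normal(size=(d, d_o))      # stand-in for the MLP parameters theta

def softmax(a, axis=0):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def set_conv(X):
    Y = X.mean(axis=0, keepdims=True)          # anchor is itself order-independent
    g1 = (Y - X) @ M                           # (N, d_o)
    g2 = softmax(W, axis=0)                    # (d, d_o)
    # h_k = (1/N) * sum_n sum_j X[n, j] * g1[n, k] * g2[j, k]
    return np.einsum('nj,nk,jk->k', X, g1, g2) / X.shape[0]

X = rng.normal(size=(N, d))
assert np.allclose(set_conv(X), set_conv(X[rng.permutation(N)]))
```

Permuting the rows of X permutes the rows of $g_1(Y - X)$ identically, so the symmetric sum over samples is unchanged.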
Suppose the feature representations obtained from the SetConv layer for $X_{maj}$, $X_{min}$, $q_{maj}$ and $q_{min}$ in the episode are denoted by $v_{maj}^s$, $v_{min}^s$, $v_{maj}^q$ and $v_{min}^q$ respectively. The probability of predicting $v_{maj}^q$ or $v_{min}^q$ as the majority class is given by

$$P(c=0|x) = \frac{\exp(x \odot v_{maj}^s)}{\exp(x \odot v_{maj}^s) + \exp(x \odot v_{min}^s)} \tag{5}$$

where $\odot$ represents the dot product operation and $x \in \{v_{maj}^q, v_{min}^q\}$. Similarly, the probability of predicting $v_{maj}^q$ or $v_{min}^q$ as the minority class is

$$P(c=1|x) = \frac{\exp(x \odot v_{min}^s)}{\exp(x \odot v_{maj}^s) + \exp(x \odot v_{min}^s)} \tag{6}$$

where $x \in \{v_{maj}^q, v_{min}^q\}$ .
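Eqs. 5-6 amount to a two-way softmax over dot-product similarities to the two support embeddings; a small sketch (the embedding values are arbitrary):

```python
import numpy as np

def class_probs(x, v_maj_s, v_min_s):
    # Eqs. 5-6: softmax over similarities to the majority/minority support embeddings.
    logits = np.array([x @ v_maj_s, x @ v_min_s])
    e = np.exp(logits - logits.max())          # subtract max for numerical stability
    p = e / e.sum()
    return {"majority": p[0], "minority": p[1]}

rng = np.random.default_rng(3)
v_maj_s, v_min_s = rng.normal(size=4), rng.normal(size=4)
v_q = rng.normal(size=4)                       # a query embedding
probs = class_probs(v_q, v_maj_s, v_min_s)
```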
We adopt the well-known cross-entropy loss for error estimation and use the Adam optimizer to update the model.
Extending SetConv to multi-class imbalance learning is straightforward. We translate the multi-class classification problem into multiple binary classification problems, i.e., we create a one-vs-all classifier for each of the N classes. Specifically, for a class c, we treat the instances with label $y = c$ as positive and those with $y \neq c$ as negative. The anchor is hence computed based on the smaller of the positive and negative classes. The prediction probability P(y = c|x) for a given instance x is computed in a similar way to Eq. 5,

$$P(y=c|x) = \frac{\exp(x \odot v_{y=c}^s)}{\exp(x \odot v_{y\neq c}^s) + \exp(x \odot v_{y=c}^s)} \tag{7}$$

Therefore, the predicted label of the instance x is $\operatorname{argmax}_{c} P(y = c | x).$
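The one-vs-all scheme above can be sketched as follows; the toy support embeddings are hand-picked for illustration, not outputs of a trained SetConv layer:

```python
import numpy as np

def predict(x, support):
    # support maps class c -> (v_pos, v_neg), the embeddings of its positive
    # (y = c) and negative (y != c) support sets; Eq. 7 gives P(y = c | x).
    scores = {}
    for c, (v_pos, v_neg) in support.items():
        z = np.array([x @ v_pos, x @ v_neg])
        e = np.exp(z - z.max())
        scores[c] = e[0] / e.sum()
    return max(scores, key=scores.get)          # argmax_c P(y = c | x)

support = {0: (np.array([1.0, 0.0]), np.array([0.0, 1.0])),
           1: (np.array([0.0, 1.0]), np.array([1.0, 0.0]))}
x = np.array([0.0, 2.0])                        # aligns with class 1's positive support
label = predict(x, support)
```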
We evaluate SetConv on two representative tasks: incident detection on social media and sentiment classification.
<span id="page-5-1"></span>Table 1: Class distribution in the IRT dataset.
| City | Yes (two-class) | No (two-class) | Crash | Fire | Shooting | No (four-class) |
|----------------|------|------|-------|------|----------|------|
| Boston (USA) | 604 | 2216 | 347 | 188 | 28 | 2257 |
| Sydney (AUS) | 852 | 1991 | 587 | 189 | 39 | 2208 |
| Brisbane (AUS) | 689 | 1898 | 497 | 164 | 12 | 1915 |
| Chicago (USA) | 214 | 1270 | 129 | 81 | 4 | 1270 |
| Dublin (IRE) | 199 | 2616 | 131 | 33 | 21 | 2630 |
| London (UK) | 552 | 2444 | 283 | 95 | 29 | 2475 |
| Memphis (USA) | 361 | 721 | 23 | 30 | 27 | 721 |
| NYC (USA) | 413 | 1446 | 129 | 239 | 45 | 1446 |
| SF (USA) | 304 | 1176 | 161 | 82 | 61 | 1176 |
| Seattle (USA) | 800 | 1404 | 204 | 153 | 139 | 390 |
<span id="page-5-4"></span>Table 2: Class distribution in Amazon Review and SemiEval Datasets.
| Dataset | Negative | Positive | IR (imbalance ratio) |
|--------------------|----------|----------|------|
| Amazon-Books | 72039 | 7389 | 9.75 |
| Amazon-Electronics | 13560 | 1908 | 7.11 |
| Amazon-Movies | 12896 | 2066 | 6.24 |
| SemiEval | 39123 | 7273 | 5.38 |
We conduct experiments on a real-world benchmark *Incident-Related Tweet*<sup>3</sup> (Schulz et al., 2017) (**IRT**) dataset. It contains 22,170 tweets collected from 10 cities, which allows us to evaluate our approach across geographical variations. The IRT dataset supports two different problem settings: binary classification and multi-class classification. In binary classification, each tweet is either "incident-related" or "not incident-related". In multi-class classification, each tweet belongs to one of four categories: "crash", "fire", "shooting", and a neutral class, "not incident related". The details of this dataset are shown in Table 1.
We conduct experiments on two large-scale benchmark datasets, *Amazon Review*<sup>4</sup> (He and McAuley, 2016) and *SemiEval*<sup>5</sup> (Rosenthal et al., 2017), which have been widely used for sentiment classification. Similar to MSDA (Li et al., 2019) and SCL-MI (Blitzer et al., 2007), we treat Amazon reviews with rating > 3 as positive examples and those with rating < 3 as negative examples, and discard the rest because their polarities are ambiguous. In addition, due to the tremendous size of the Amazon Review dataset, we choose its 3 largest categories, i.e., "Books", "Electronics", and "Movies and TV",
and uniformly sample from these categories to form a subset that contains 109,858 reviews. This subset is sufficiently large to evaluate the effectiveness of our method. More importantly, the imbalance ratio of each category in this subset is exactly the same as that in the original dataset. Details of the Amazon Review and SemiEval datasets are listed in Table 2.
We compare our algorithm with several state-of-the-art imbalance learning methods.
- **IHT** (Smith et al., 2014) (*under-sampling*) is a model that performs undersampling based on instance hardness.
- **WEOB2** (Wang et al., 2015) (*ensemble*) is an undersampling based ensemble model that effectively adjusts the learning bias from the majority class to the minority class via adaptive weight adjustment. It only supports binary classification.
- **KMeans-SMOTE** (Last et al., 2017) (*over-sampling*) is an oversampling technique that avoids the generation of noisy samples and effectively overcomes the imbalance between classes.
- **IML** (Wang et al., 2018) (*metric learning*) is a method that utilizes metric learning to explore the correlations among imbalanced data and constructs an effective data space for classification.
- **CS-DMLP** (Díaz-Vico et al., 2018) (*cost-sensitive*) is a deep MLP model that utilizes cost-sensitive learning to regularize the posterior probability distribution predicted for a given sample.
2104.08790/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2021-10-13T01:27:06.950Z" agent="5.0 (Macintosh; Intel Mac OS X 10_12_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.71 Safari/537.36" version="15.5.1" etag="rE8-dZV3GgqSyOQGLb3g" type="google"><diagram id="lx2aDX691oanUbukDxeA">7Vxde5u4Ev41fnbPhfOA+DC+TOOk6TZuczbddLs3+8ggY8WAvEIk9v76I4EwYETMsXHtpnYuYg0SiHnfGY1GknvGVbh8T+FiNiYeCnpA85Y9Y9QDwNBNjf8TklUm4SInk/gUe5lMLwQP+F8khbKhn2APxZWKjJCA4UVV6JIoQi6ryCCl5KVabUqC6lMX0JdP1ArBgwsDVKv2FXtslkkdq1T7FmF/lj9Z1+SVEOaVpSCeQY+8lETGdc+4ooSw7Fu4vEKB0F6ul6zdTcPVdccoilibBiBr8AyDRL6b7Bdb5S9LSRJ5SNTXe8a7lxlm6GEBXXH1hePLZTMWBvJyzCiZr5ViCEn+huKyfBqiDC0be6yv9cAZhEiIGF3xKrIByNkjyWMOZfmlQAI4UjaroiAZINH31/cuFMS/SB2p9WUo9GUH/Anv+K2goNwM0hhlj7T/SQSO7xI27TtFkX/z5f+05aSi77zWlESs/yL7f8mrRISGMCjfJu9VLvGIG/dxxBCNYNAXRtIfTFzLtQzQH0yn0z50J7DP9eD0h2hgappho6EGm3uGQ7+i2Lze0Fwss2pFX0qqzqW6Bur1YupWKs0YE1Z7KSAAN8HMuPAJ8QOUxIhy+2WcFhcuCfk197cA3a4en1xgJu+saTgcLe8/Ob4Gbpb31+HKnEfL939Ff3ymo+e7l8fxA3n24Gfr0+jj6MNH52s8vPow/siYe/eIaQCj5ObpEf3x99+TJy/8d/7lcXBvzgeDP8f/Ddj13WgO0a09/vQ0Db6NydXTl+tkeDX9dHeP/qoAqcGg+spjGHHpF+TOIhIQH8cSvDHycBL2R5DOeelhjtNqJEL8HxHfr0PyhD+nZfNCa4aE62FSyDZMlVsV226jhzBCp26EA4UNDq39TdBsNMFEYVlVgTCqZmuL04FGwKXbnLg1DH5H0EO0hER2u9cAWsuSbaDVnOgVJxDlkkiwhPcQB8GGCAbYj3jRRcLmuUBgivkwdSkvhNjzxGOUjCj8urbppv0AxnHemTli7qxD6gw2qGPXqWMqqAM68N5WI3XiRWq4OxKjkW9r1G85dQIszFvFiFeZk3XtTJ6UGDq4MMDR+GMfPFoqA9UDxtQSfzVU+RU7/UiCluTZ5xBxl1FTvW20DL0sowPlDxqNdyHiLsm6wjgXejXuqJl2CKnPx+HUrjVh1lpWMzN5RjPqlkMvWUEYcn+mCM3k9Ql0535KhL6bISMqUX/yK7AsUQvwl9U2vv+n2a1I51R/xVhvbrTfUPfL9YJPsULsxgK1yBOPhzESJTIVUzccI14W8IiusZn4CvRYPFJ4lCSlDidX+ooVGPjkyCtEv2wfTqsOcC1e7BkGiafJqaVudmMjhlmzEUuv24iuCo2cDkzEqakCeXziKouEshnxCZ8bXBfSDU9e1LkjZCE19YQYW0lVwYSRqh7RErM/RfMLS5a+la6MlvLOaWGVFyL+ZqVGovitfK1olpbydgKzGxjiQAhuUfCMxJC1Aabwim5Cn9deuOpmwRproZwK0jFJqCtFEkbGnQTKhxDQmhAUBZDh5+rt94F2uAO0FSW8TZy7glYfHg/a3EMcMKzoYvx3ar5tCBTjv6bwbXYHvk3XG8f/1kOd81rw3jx7vMNzFOAZIV4+/D0sKA/o20/5Goa4kwzmO+HK4MIebuWKPlRwpYtQUVelNdvHih1EX3jncKyJo1ISCC4KVXP3nKaP0
xDrV5OrZCOE3BJV1Xr4Q0VaYFhnmGo2YigYZnbBMOP44/HJjK2VgdTufiCVTe8JTm0oD7YtcKGVPqZVoYOjbcCcRQDyJhtIr3vVDnzz7YO/kVa4uREq3pkU7e17X1I4RtUpHJAFqmzi6YVtVi1sc9qGbV1MSXVVzuz7hG15ul67dBkWiyznaK2BIoM2FDlctLZfZu+Uo7WIiNswmqSJsSxL5hOOVhSilFo/b7jmKMI1XRWudeKFnB/CVw9qhjgwv+cUe3h8X32P+PC8OPvrLbPrFjQ5mL8GqnzV2/DXU5RtVcOx9Nk/i4M2QJ1T1nd00ECV3Ts9B10Ppu182lnWkqXQ0qCDrS+5Uo7goL9SzFIH/SHdknV2zv/HMrmKIiJD6ij8cxcsad6k+KP7Zx5BF+vKwQpHYn9imglN4oufx1sDu06ytt7a7IJhuyTAOluAzL+n+a71YuS2BcheOUtWJM1eyZOp0l/7ryvnmFQWlg+QH2uNpbUDlodZWT4msJ1haRjdY5k2vaQUrkoVFiKHGZfuvJEKBfW9co69cRpgexM93zZf0CnryK5p03wT3ZlvHfENvC2+aUbHfBuc+dYl3/LdKkcZq3bZ4HbGshlL54hYns6OtreBpXU8LI3mjGAtO9BNomJCc8kYxziaiing7wjKhT7thsIQ1euOcIiimFfhw5gNQzFRiybxQnnfZkmrgz9b8xtd5i3KaREPTWGSTd0rGZRC3tt/Arpxhme916I8+QTF2F45htnFQQxDr2n4eNthB99nP2x33gMooshjeg9VVvP0cr9mLUOuawreH2x1zlCl9U5PTfWVXl1TnA49nJqaz4e2PuT3+tDTmDRP1y6x8DFAu4MTsZT0Fk/7dcGS+kKK0pgOtoZpdHAUdEeWvBe/NnEmyJb1SK2dGzkcQZr3rf1oiyiau44cCqExnaYxxCuLLevI+mdZVjFU6SjF+T3VEdcuNpUbb2cj3JlzbTlng7qjU/ykzcE4dwIptWNNiioTILM1cvtuUecTiVfOLQw3t9PsvGOdF4sfksqqF7/HZVz/Dw==</diagram></mxfile>
2104.08790/main_diagram/main_diagram.pdf
ADDED
Binary file (74.2 kB).
2104.08790/paper_text/intro_method.md
ADDED
@@ -0,0 +1,141 @@
# Introduction
<figure id="fig:mars" data-latex-placement="!t">
<embed src="images/figure2.pdf" />
<figcaption>In a binary classification setup, the reaction of the reader is unclear. Here, we use Misinfo Reaction Frames to understand how the reader perceives and reacts to the headline. Our pragmatic frames explain how a health or climate news article is interpreted as reliable or misinformation by readers by incorporating not only linguistic knowledge (e.g. emotions invoked by certain content words), but also knowledge of common social behaviors and domain-specific reasoning. We also include fact-checked labels (gold label).</figcaption>
</figure>
:::: table*
::: tabular
**News Headline** & **Writer's Intent** & **Reader Reaction** & **Spread** & **Real News?**\
& & & & (GPT-2 / T5 / Gold)\
How COVID is Affecting U.S. Food Supply Chain & **Human**: "the pandemic is interrupting the flow of groceries to consumers"\
**GPT-2**: "food supplies are being affected by covid"\
**T5**: "food supply chain is affected by covid" & **Human**: "want to know how their groceries will get to them"\
**GPT-2**: "want to learn more"\
**T5**: "want to find out more information" & 4.0 & / /\
\
Thai police arrested a cat for disobeying the curfew order. & **Human**: "governments are ludicrous and obtuse."\
**GPT-2**: "animals can be dangerous"\
**T5**: "lockdowns are enforced in thailand" &

**Human**: "feel disbelief"\
**GPT-2**: "feel worried"\
**T5**: "feel shocked" & 1.0 & / /\
\
Perspective \| I'm a black climate expert. Racism derails our efforts to save the planet. & **Human**: "since climate change will likely affect poorer nations, rich societies are not motivated to help"\
**GPT-2**: "racism is bad"\
**T5**: "racism is a problem in society" & **Human**: "want to improve their own behavior towards others"\
**GPT-2**: "want to learn more"\
**T5**: "want to take action" & 3.0 & / /\
:::
::::
> *Many objects, persons, and experiences in the world are framed in terms of their potential role in supporting, harming, or enhancing people's lives or interests. We can know that this is so if we know how to interpret expressions in which such things are evaluated\...*\
> - Charles J. Fillmore (1976)
Effectively predicting how a headline may influence a reader requires knowledge of how readers perceive the intent behind real and fake news. While most prior NLP research on misinformation has focused on fact-checking, preventing spread of misinformation goes beyond determining veracity [@schuster2020limitations; @social_motives].
For example, in Figure [1](#fig:mars){reference-type="ref" reference="fig:mars"}, mistrust in the government may lead readers to share pandemic conspiracy headlines like *"Epidemics and cases of disease in the 21st century are "staged\"\"* even if they suspect it is misinformation. The widespread circulation of misinformation can have serious negative repercussions on readers --- it can reinforce sociopolitical divisions like anti-Asian hate [@Vidgen2020DetectingEA; @voterfraud], worsen public health risks [@10.1145/3274327], and undermine efforts to educate the public about global crises [@climate].
We introduce **Misinfo Reaction Frames** (MRF), a pragmatic formalism to reason about the effect of news headlines on readers. Inspired by Frame semantics [@Fillmore1976FRAMESA], our frames distill the pragmatic implications of a news headline in a structured manner. We capture free-text explanations of readers' reactions and perceived author intent, as well as categorical estimates of veracity and likelihood of spread (Table [\[table:dims\]](#table:dims){reference-type="ref" reference="table:dims"}). We use our new formalism to collect the MRF corpus, a dataset of 202.3k news headline/annotated dimension pairs (69.8k unique implications for 25.1k news headlines) from Covid-19, climate and cancer news.
We train reaction inference models to predict MRF dimensions from headlines. As shown by Table [\[table:examples_data\]](#table:examples_data){reference-type="ref" reference="table:examples_data"}, reaction inference models can correctly label the veracity of headlines (85% F1) and infer commonsense knowledge like *"a cat being arrested for disobeying curfew $\implies$ lockdowns are enforced.\"* However, models struggle with more nuanced implications *"a cat arrested for disobeying curfew $\implies$ government incompetence.\"* We test generalization of reaction frame inference on a new cancer domain and achieve 86% F1 by finetuning our MRF model on 574 annotated examples.
:::: table*
::: tabular
**Dimension** & **Type** & **Description** & **Example**\
Writer Intent & free-text & A writer intent implication captures **the readers' interpretation of what the writer is implying.** & "some masks are better than others."\
&\
Reader Perception & free-text & A reader perception implication describes how readers would *feel* in response to a headline. These inferences include **emotional reactions** and **observations**. & "feeling angry.", "feeling that the event described in the headline would trouble most people."\
&\
Reader Action & free-text & A reader action implication captures what readers would *do* in response to a headline. These describe **actions**. & "buy a mask."\
&\
Likelihood of Spread & ordinal & To take into account variability in impact of misinformation due to low or high appeal to readers, we use a 1-5 Likert [@Likert1932ATF] scale to measure the **likelihood of an article being shared or read**. Categories are {*Very Likely*, *Likely*, *Neutral*, *Unlikely*, *Very Unlikely*}. & 4/5\
&\
Perceived Label & binary & We elicit the perceived label (real/misinfo) of a headline, i.e. **whether it appears to be misinformation or real news to readers**. & real\
&\
Gold Label & binary & We include the **original ground-truth headline label** (real/misinfo) that was verified by fact-checkers. & misinfo\
:::
::::
To showcase the usefulness of the MRF framework in user-facing interventions, we investigate the effect of MRF explanations on reader trust in headlines. Notably, in a user study our results show that machine-generated MRF inferences affect readers' trust in headlines and for the best model there is a statistically significant correlation (Pearson's $r$=0.24, $p$=0.018) with labels of trustworthiness (§[5.3](#sec:gen){reference-type="ref" reference="sec:gen"}).
Our framework and corpus highlight the need for reasoning about the pragmatic *implications* of news headlines with respect to reader reactions to help combat the spread of misinformation. We publicly release the MRF corpus and trained models to enable further work (<https://github.com/skgabriel/mrf-modeling>).[^1] We explore promising future directions (and limitations) in (§[6](#sec:future_work){reference-type="ref" reference="sec:future_work"}).
In contrast to prior work on misinformation detection [@ott-etal-2011-finding; @rubin-etal-2016-fake; @rashkin-etal-2017-truth; @Wang2017LiarLP; @Hou2019TowardsAD; @volkova-etal-2017-separating; @10.1145/3274351], which mostly focuses on linguistic or social media-derived features, we focus on the potential impact of a news headline by modeling readers' reactions. This approach aims to better understand how misinformation can be countered, as it has been shown that interventions from AI agents are better at influencing readers than strangers [@Kulkarni2013AllTN].
In order to model impact, we build upon prior work that aims to describe the rich interactions involved in human communication, including semantic frames [@Fillmore1976FRAMESA], the encoder-decoder theory of media [@Hall1973EncodingAD][^2], Grice's conversational maxims [@grice1975logic] and the rational speech act model [@GOODMAN2016818][^3]. By describing these interactions with free-text implications invoked by a news headline, we also follow from prior work on pragmatic frames of connotation and social biases [@speer-havasi-2012-representing; @rashkin-etal-2018-event2mind; @Sap2019ATOMICAA; @socialbf; @socialc].
While approaches like rational speech acts model both a pragmatic speaker and listener, we take a **reader-centric** approach to interpreting "intent\" of a news headline given that the writer's intent is challenging to recover in the dynamic environment of social media news sharing [@10.1145/3359229]. By bridging communication theory, data annotation schema and predictive modeling, we define a concrete framework for understanding the impact of a news headline on a reader.
Table [\[table:examples_data\]](#table:examples_data){reference-type="ref" reference="table:examples_data"} shows real and misinformation news examples from our dataset with headlines obtained from sources described in §[3.1](#sec:data){reference-type="ref" reference="sec:data"}. We pair these headline examples with generated reaction frame annotations from the MRF corpus. Each reaction frame contains the dimensions in Table [\[table:dims\]](#table:dims){reference-type="ref" reference="table:dims"}.
We elicit annotations based on a *news headline*, which summarizes the main message of an article. We explain this further in §[3.1](#sec:data){reference-type="ref" reference="sec:data"}. An example headline is "*Covid-19 may strike more cats than believed*.\" To simplify the task for annotators and ground implications in real-world concerns, we define these implications as relating to one of 7 common themes (e.g. technology or government entities) appearing in Covid and climate news.[^4] We list all the themes in Table [1](#table:themes){reference-type="ref" reference="table:themes"}, with some themes being shared between topics.
To construct a corpus for studying reader reactions to news headlines, we obtain 69,885 news implications (see §[3.1](#sec:data){reference-type="ref" reference="sec:data"}) by eliciting annotations for 25,164 news headlines (11,757 Covid-related articles, 12,733 climate headlines and 674 cancer headlines). There are two stages for collecting the corpus: (1) news data collection and (2) crowd-sourced annotation.
A number of definitions have been proposed for labeling news articles based on reliability. To scope our task, we focus on false news that may be unintentionally spread (misinformation). This differs from disinformation, which assumes a malicious intent or desire to manipulate [@Fallis2014AF]. We examine reliable and unreliable headlines extracted from two domains with widespread misinformation: Covid-19 [@hossain-etal-2020-covidlies] and climate change [@lett8fake]. We additionally test on cancer news [@10.1145/3394486.3403092] to measure out-of-domain performance.
We retrieve both trustworthy and misinformation headlines related to climate change from NELA-GT-2018-2020 [@gruppi2020nelagt2019; @norregaard2019nelagt2018], a dataset of news articles from 519 sources. Each source in this dataset is labeled with a 3-way trustworthy score (reliable / sometimes reliable / unreliable). We discard articles from "sometimes reliable\" sources since the most appropriate label under a binary labeling scheme is unclear. To identify headlines related to climate change, we use keyword filtering.[^5] We also use claims from the SciDCC dataset [@mishraneuralnere], which consists of 11k real news articles from ScienceDaily,[^6] and Climate-FEVER [@diggelmann2021climatefever], which consists of more than 1,500 true and false climate claims from Wikipedia. We extract claims with either supported or refuted labels in the original dataset.[^7]
::: {#table:themes}
  Theme                  Climate   Covid
  ---------------------- --------- -------
  Climate Statistics
  Natural Disasters
  Entertainment
  Ideology
  Disease Transmission
  Disease Statistics
  Health Treatments
  Protective Gear
  Government Entities
  Society
  Technology

  : Themes present in articles by each news topic. Some are covered by both climate and Covid domains, while others are domain specific.
:::
::: {#table:both_stats}
  Statistic         Train     Dev.     Test     Cancer
  ----------------- --------- -------- -------- --------
  Headlines         19,897    2,460    2,133    674
  Unique Intents    38,172    4,867    4,388    1,232
  Unique Percept.   2,609     538      421      174
  Unique Actions    15,036    2,176    1,739    704
  Total Pairs       159,564   19,700   17,890   5,227

  : Dataset-level breakdown of statistics for MRF corpus.
:::
[]{#table:both_stats label="table:both_stats"}
For trustworthy news regarding Covid-19, we use the CoAID dataset [@cui2020coaid] and a Covid-19 related subset of NELA-GT-2020 [@gruppi2020nelagt2019]. CoAID contains 3,565 news headlines from reliable sources. These headlines contain Covid-19 specific keywords and are scraped from nine trustworthy outlets (e.g. the World Health Organization).
For unreliable news (misinformation), we use The CoronaVirusFacts/DatosCoronaVirus Alliance Database, a dataset of over 10,000 mostly false claims related to Covid-19, and the ESOC Covid-19 Misinformation Dataset, which consists of over 200 additional URLs for (mis/dis)information examples.[^8][^9] These claims originate from social media posts, manipulated media, and news articles that have been manually reviewed and summarized by fact-checkers.
We construct an evaluation set for testing out-of-domain performance using cancer real and misinformation headlines from the DETERRENT dataset [@10.1145/3394486.3403092], consisting of 4.6k real news and 1.4k fake news articles.
In this section we outline the structured annotation interface used to collect the dataset. Statistics for the full dataset are provided in Table [2](#table:both_stats){reference-type="ref" reference="table:both_stats"}.
We use the Amazon Mechanical Turk (MTurk) crowdsourcing platform.[^10] We provide Figure [\[fig:marsannotation2\]](#fig:marsannotation2){reference-type="ref" reference="fig:marsannotation2"} in the Appendix to show the layout of our annotation task. For ease of readability during annotation, we present a headline summarizing the article to annotators, rather than the full text of the article. Annotators then rate veracity and likelihood of spread based on the headline, as well as providing free-text responses for writer intent, reader perception and reader action.[^11] We structure the annotation framework around the themes described in §[2](#sec:dims){reference-type="ref" reference="sec:dims"}.
We use a three-stage annotation process for ensuring quality control. In the initial pilot, we select a pool of pre-qualified workers by restricting to workers located in the US who have had at least 99% of their *human intelligence tasks* (hits) approved and have had at least 5000 hits approved. We approved workers who consistently submitted high-quality annotations for the second stage of our data annotation, in which we assessed the ability of workers to discern between misinformation and real news. We removed workers whose accuracy at predicting the label (real/misinfo) of news headlines fell below 70%. Our final pool consists of 80 workers who submitted at least three annotations during the pilot tasks. We achieve pairwise agreement of 79% on the label predicted by annotators during stage 3, which is comparable to prior work on Covid misinformation [@hossain-etal-2020-covidlies]. To account for chance agreement, we also measure Cohen's Kappa $\kappa = .51$, which is considered "moderate" agreement. Additional quality control measures were taken as part of our extensive annotation post-processing. For details, see Appendix [\[sec:postp\]](#sec:postp){reference-type="ref" reference="sec:postp"}.
We provided an optional demographic survey to MTurk workers during annotation. Of the 63 annotators who reported ethnicity, 82.54% identified as White, 9.52% as Black/African-American, 6.35% as Asian/Pacific Islander, and 1.59% as Hispanic/Latino. For self-identified gender, 59% were male and 41% were female. Annotators were generally well-educated, with 74% reporting having a professional degree, college-level degree or higher. Most annotators were between the ages of 25 and 54 (88%). We also asked annotators for their preferred news sources. New York Times, CNN, Twitter, Washington Post, NPR, Reddit, Reuters, BBC, YouTube and Facebook were reported as the 10 most common news sources.
We test the ability of large-scale language models to predict Misinfo Reaction Frames. For free-text inferences (e.g. writer intent, reader perception), we use generative language models, specifically T5 encoder-decoder [@Raffel2020ExploringTL] and GPT-2 decoder-only models [@radford2019language]. For categorical inferences (e.g. the gold label), we use either generative models or BERT-based discriminative models [@devlin-etal-2019-bert]. We compare neural models to a simple retrieval baseline (**BERT-NN**) where we use gold implications aligned with the most similar headline from the training set.[^12]
For generative models, we use the following input sequence $$x = h_1~...~h_T || s_{d} || s_{t},$$ where $h$ is a headline of length $T$ tokens, $s_t \in \{$\[*covid*\]$,$\[*climate*\]$\}$ is a special topic control token, and $s_d$ is a special dimension control token representing one of six reaction frame dimensions. Here $||$ represents concatenation. The output is a short sequence representing the predicted inference (e.g. "*to protest*" for reader action, "*misinfo*" for the gold label). For GPT-2 models we also append the gold output inference $y = g_1~...~g_N$ during training, where $N$ is the length of the inference.
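The input construction is a plain concatenation of headline tokens and the two control tokens; a minimal sketch (the control-token spellings here are assumptions for illustration, not the exact strings used in training):

```python
def build_input(headline_tokens, s_d, s_t):
    # x = h_1 ... h_T || s_d || s_t  (dimension control token, then topic token)
    return headline_tokens + [s_d, s_t]

x = build_input(["masks", "reduce", "transmission"], "[writer_intent]", "[covid]")
```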
We predict each token of the output inference starting from the topic token $s_t$ until the \[$eos$\] special token is generated. In the case of data with unknown topic labels, this allows us to jointly predict the topic label and output inference. We decode using beam search, since generations by beam search are known to be less diverse but more factually aligned with the context [@massarelli-etal-2020-decoding].
For discriminative models, we use the following input sequence $$x = [CLS] h_1~...~h_T [SEP],$$ where \[CLS\] and \[SEP\] are model-specific special tokens. The output is a categorical inference.
All our models are optimized using cross-entropy loss, where generally for a sequence of tokens $t$ $$CE(t) = - \frac{1}{|t|}\sum_{i=1}^{|t|} \log P_\theta(t_i|t_1,...,t_{i-1}).$$ Here $P_\theta$ is the probability given a particular language model $\theta$. Since GPT-2 does not explicitly distinguish between the input and output (target) sequence during training, we take the loss with respect to the full sequence. For T5 we take the loss with respect only to the output.
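As a worked instance of the loss above, given the model's probability for each gold token the per-sequence value is just an averaged negative log (the probability values below are made up):

```python
import math

def sequence_ce(token_probs):
    # CE(t) = -(1/|t|) * sum_i log P(t_i | t_1 ... t_{i-1})
    return -sum(math.log(p) for p in token_probs) / len(token_probs)

loss = sequence_ce([0.9, 0.5, 0.8])   # higher token probabilities -> lower loss
```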
To improve the generalization of MRF models, we use an additional masked fine-tuning step. We first train a language model $\theta$ on a set of Covid-19 training examples $D_{covid}$ and climate training examples $D_{climate}$. Then we use the Textrank algorithm [@mihalcea-tarau-2004-textrank] to find salient keyphrases in $D_{covid}$ and $D_{climate}$, which we term $k_{covid}$ and $k_{climate}$ respectively. We determine domain-specific keyphrases by removing the shared keyphrases $k_{covid} \cap k_{climate}$ $$\begin{multline*}
k'_{covid} = k_{covid} \setminus (k_{covid} \cap k_{climate}) \\
k'_{climate} = k_{climate} \setminus (k_{covid} \cap k_{climate}),
\end{multline*}$$ and only keep the top 100 keyphrases for each domain. We mask out these keyphrases in the training examples from $D_{covid}$ and $D_{climate}$ by replacing them with a $<mask>$ token. Then we continue training by fine-tuning on the masked examples. A similar approach has been shown to improve generalization and reduce shortcutting of reasoning in models for event detection [@Liu2020HowDC].
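The keyphrase set difference and masking step can be sketched as follows; Textrank extraction is abstracted into precomputed sets, and all names and example phrases are illustrative:

```python
def domain_keyphrases(k_covid, k_climate):
    """Keep only keyphrases unique to each domain: k' = k \\ (k_covid & k_climate)."""
    shared = k_covid & k_climate
    return k_covid - shared, k_climate - shared

def mask_text(text, keyphrases, mask_token="<mask>"):
    """Replace each occurrence of a domain-specific keyphrase with the mask token."""
    for kp in keyphrases:
        text = text.replace(kp, mask_token)
    return text

k_covid_only, k_climate_only = domain_keyphrases({"vaccine", "hoax"}, {"carbon", "hoax"})
masked = mask_text("the vaccine is a hoax", k_covid_only)
```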
|
2108.12841/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2021-03-17T11:39:47.160Z" agent="5.0 (Macintosh; Intel Mac OS X 11_2_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.82 Safari/537.36" version="14.4.8" etag="sfUA3tSI0E-PgUU-aiVT" type="google"><diagram id="EIYVAEm8Z3KE6a9f27yM">5Vxbc9u6Ef41mj4Fs8Aubo+xnbSdaXsydWd68nSGlhibE1lUKTm28+u7EEmJF+him1KUHJ3khARJANzrt7sAR3h5//TXIpnf/TOfpNORgsnTCK9GSnky/P/Q8Fw2aENlw22RTcomuWm4zr6nZaOrGh+ySbpo3bfM8+kym7cbx/lslo6XrbakKPLH9m1f8ml70HlyWw0Im4brcTJNe7f9N5ss76rJKbtp/1ua3d7VI0vjyyv3SX1z1cXiLpnkj42x8MMIL4s8X5ZH90+X6TSQriZLOaGPW66uJ1aks+UhD6jygW/J9KF6t2pey+f6ZW+L/GFe3ZYWy/QpRuLkZtql2GYKcv1iLA9pfp8ui2e+perIkABAsATWevQayw4qyXBYzehxQ2gkKbQG0uSNMlZVsnPXJLlzAg15qwx3S0S64n7F+dv1NDbE4YOKPnFaYYRWPDQt/xjZCzmyV+Vpj4BMMp7Vxd3ynnu+knyYTLPbGR+PmURpwQ2BsBnL1/vqwn02mYTHL4p0kX2viAt8Ps+z2XL1DvpipK9CXw/LfFEqSOj6Sz5bXubTvFiNjR8/OoDw4FYGNhmldjJKOinQSodGo1LMqxajjBZE0nvnHEpL6HtsMxE+1eR6C19oK1/UOXBkCMoD6wih04hAFpTXLdJbQZ6kJifJGPSEfdJDhPTwdtLrXSqBv75KeCmALZABbby0Ek2LLxKUQMVMIVjZKdc3ZcfSCbOVMfIcWDKETpiVE5CSJR88k1i1aS8cG32LBtn7ssew9mRKYXcphfr1lUKiY4PknDQQnK9v8cWzX0YjjUUbHLmlk6mE2w910tnkfYCHfHYzzcdf/3OXzdocWSyL/Osa9KlAyGw6bRDSTJzlN8AL7utjNq2fGz8U39LJHtqXnceuMOWL59+rx1cnn5snV08V28uz5+qsfL90UoPZbZxk9JwUt2ndZHZyl4xAZ7zSnpXFsDtqsVdp4TSsf2uL2OCvjqid3qF21Uw+BYneTEORFd435uFa82Co4mkzD+jAv0X+UIzTqssmPO6MYowwUqEjrxzb+C42ZSMjDaFieUZlsTNISdTeICxhyXPjtkpVt76plFZIdvAMdjH4eN9xMgoEq5VVfMERam07SlGOt1GRNUsP0hq/1ZjBOZixAcyVQh8kicVWgkP216eDTxK2UHekL0OoeHPDHuPpPD0GgDGXl8OwgENADp4N8h/UoNuqbFEYZRlgIThpoca+DfZIOBZ/5AE+YzrN5otA0ce7bJlez5NxuPJYJPMOaxbzMhfxJXsKrqDrOdYE7bkBfcm6fXFcFkjQDKeIbZz1pMG1jYwl4TmKlk5pNjIuYtQjNn0A+h+QnngD/bdQ8+V8GYL+js24seAUqYCcqG3k2dtx2A0MZa3XHGL30eyROBBLegzGgTZFZ/ksjZEfBkOm7KjZn2tH1jjLVMYOeAEBDCQYN1gZom3f9wTIhooIvAVD1oOt8U2T7NvueRMfYkmODh9aGPM1UHaNMtdQMqBMEHoPzmxD3fQpW6474eNGH3y26SKcvByplsitLZxN9Cp3S8DZoFclpNKO9d2xubWdebzTTjiQGr0zjuXQdCToUPgqNcs7KAJHjOQ0+fYwkl2rdcozhNSS52KOBGB5FjugesXZbe/QxfntVyDY+bD1glVZByp6vte3R5bSCe21lOgZ+nlfGd2hoLOMZcfMdFm5Hj6+DceX+ewbS3mWz5
Jpff2mqK/Oiywv6mYesflkzwKcFA9Ok5t0epGMv4YCwWzSteJt52rIvacPw1nxZvof29nQkHUzwDhehbxPyDH0lNhFlJiGwIv9tNtvD8XiLPnETPmy+h03GRQycVoz/9E4YxnCtIENeWGQ/wNCidLUgK/BKzoWto+l6c4HWw6J7dnQCXCWsaNlaMMWv53GkFoHJlg2gFaypYS+wjAwCg5CcxzGcMaEniLAZ/tdb2LUAYm7t0OfCrTUAOZz4zgOWXo6g/y3z+GJTt2knw+MJ/T2Q62enDh1w6QeFEKV9vTcIRS/dXDeBqV17LxdxwmwTDvPQRVw2GQMGHwdglKMXQhkSP9ZKb3VHQQFJCw5hlcOJTmsSTwwgkIJwrHa8jwCFpKd6FzbKM23vhM5gVJyDMT04UCnWznBTncDo6JYRrGJimC8lm7zv4e8BDp1GNhoKjHQ1d8//UzoaKfXrSzIABbfcQzKAaxWTrLB17btdFEYrVlYHWpNiDaSTTiW11WxjOexvO4l/1Y45ogpBelRKOMtIscR2gF17JAXHj1JS1qBBA39lMJxEjfquKnLH0DoYJYkkkfHIR2YzvITFmrJsYAljjaDjzpVirIma6RAgueQtR+A8qiEBnY/HCYDsPPouloSjPVYsr1hOUfv+iVdHQGLQ5R0o0uyfmYht4xpiKMgds3WW9lOVyATGp0L5oQNDZv1U4n49gVWdA4i/qJExKBxVViVxdxA9JKMd6YNpEyIbRurf06nFrHE02Bq8XJk82MrK4axM4DWlqMFVFq316NYLbTlsJdZ5MhrfTKliq3QGs50tQm9ZYVJXEOGzP4wDg2OARz75RCHtaMnC6GyKxFYO0IkdzLaHzf1cya0V2CFDmEpOg7WkVRnhYxnOMUBIOpV0kZGvPZxiH/IOqyfnvhSG+EVes/SjWhUp/LgnQADyJQHpwDpZGg1Fnz/esRnSKoRlPXeowW07VDBG4EyeGvmT4gYTkR73LbYZ/mHPgcYNYS5BwEcBrO5YfdNppNmVlYgOsZD5DQpMJE1DEcCRPjLBcMSRFjH4DxIazgEUx1SK4E2xMKIyvPdp/Ks9Wad7cm9DcXrTF648K6Uw/d8g58/RbN8f1mMwv6p2UO7SvrKDhfLfD7PZreB6mV2tFt4/ZmyintCndBe7aHzYdR8mYRy86i59uKN4sgwT2mS4J3Vyph2fpqhBnGIpK3xAAFx9KTRqr441rL0JnmMJQj2yOM0m6Xv6okEAVotL9R9GfrtoZS/QyXp8BFdd0BY1Fsg6zbZn089ziJs19uvFyamFzeBq9/TIud/xkW+WJRv1njJccHWsQjis1GQ1Yh71Oan0K9toKSjQQOojLbCY2OBumyn6Ak4LFXExp21RgdT39MZWZfSh17FgAesNWvUWMfThMVkvDKqSbHsNx+45mxT+zxkb8MkWdw1K77XGyauptE4j9V21+XcrbXd6+ploe6xUb3tiMk6W9HLY1ywLWzUdF5cna3aWtsz7E65OpPqrJRGKELUJixz82Q6FSgpQiLfKcv4xNG6LPLiBW7OCuDgVTKolNYT9DKm6JG8Zqfj0djOKFuqs68oauJRM24dcduk2bZsPeqJIb7XhAPuu/NqI2GdYoAC0dx3A5FtwKJeYNIUsU3rm0xXLKv2cvy58k4dz/jp+l//DqoYPGOnzy/JuN3lPx7G2SThK5f5bJHzeC1X/toZ/L7Ht56VJ92NVMureTFJiwiGbZreu7zIvnNbUs+5g16H98xsFDmKBbYmDr2TphPGsvhL1VixdsIoNpa2fJ1ww6r9sQE5Z3lxH2h8lpL/+c8m+fLHSL53QhmyWoedPt7UUdl6sROKsEVrx2LNo4n+yzbvvg2SvgBa7gOKUei5Zy/EGhG/g7CkYdReH8gYZt9mDD77xBEaEzlIdmyx4vEAa/01nhcAVtoFWN+ZsPd184uI3DB4FXdOg7bg5pfjVfUaeH4EwPqnKARwBLKrBGZ/UAmMYpWAKnczT2
ZRTxlL93T2YW/SMatOzsEXHnVjgd+1q8iQUG5nHHIsX0WxYkN3U/fzr7+pW5IK6hWKcFKj9Z3NluWW1v4Xuhoc8sfikNrCodUXWug8WTOk7jDpgRqfLdLt7IzhAMg1Fi7JHmcG+kYLn24+dVd6sc3nAvHD/wE=</diagram></mxfile>
|
2108.12841/main_diagram/main_diagram.pdf
ADDED
|
Binary file (39.2 kB).
|
|
|
2108.12841/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,129 @@
# Introduction

Deep neural networks have been widely used in many computer vision tasks, yielding significant improvements over conventional approaches since AlexNet [18]. However, image denoising was long a task in which conventional methods such as BM3D [7] outperformed many early deep learning based ones [5,47,48], until DnCNN [51] surpassed it for synthetic Gaussian noise at the expense of a massive amount of noiseless and noisy image pairs.

Requiring no clean and/or noisy image pairs, deep image prior (DIP) [42, 43] has shown that a randomly initialized network with an hour-glass structure acts as a prior for several inverse problems including denoising, super-resolution, and inpainting with a single degraded image. Although DIP exhibits remarkable performance in these inverse problems, denoising is the particular task in which DIP does not perform well, *i.e.*, a single run yields far lower PSNR than BM3D even in the synthetic Gaussian noise set-up [42, 43]. Furthermore, for the best performance, one needs to monitor the PSNR (*i.e.*, the ground-truth clean image is required here) and stop the iterations before fitting to noise. Deep Decoder addresses this issue by proposing a strong structural regularization that allows longer iterations for inverse problems including denoising [15]. However, it yields worse denoising performance than DIP due to its low model complexity.

<span id="page-0-0"></span>

<span id="page-0-1"></span>

Figure 1: Comparison of single image based denoising methods. 'L↓' refers to LPIPS (lower is better). 'P↑' refers to PSNR (higher is better). Our method denoises an image while preserving rich details, showing the best LPIPS with PSNR comparable to Self2Self (S2S) [30]. Ours shows a much better trade-off between PSNR and LPIPS than all other methods, including all the different ensembling attempts of the state of the art (S2S). (Numbers in the S2S circles denote the number of models in the ensemble.)

For better use of DIP for denoising without monitoring PSNR against a clean image, we first analyze the model complexity of DIP via the notion of effective degrees of freedom (DF) [10, 12, 41]. Specifically, the DF quantifies the amount of overfitting (*i.e.*, optimism) of a chosen hypothesis (*i.e.*, a trained neural network model) to the given training data [10]. In other words, when overfitting occurs, the DF increases. Therefore, to prevent the overfitting of the DIP network to the noise, we want to suppress the DF over <span id="page-1-0"></span>iterations. But obtaining the DF again requires a clean (ground-truth) image. Fortunately, for the Gaussian noise model, there are approximations of the DF that do not use a clean image: Monte-Carlo divergence approximations in Stein's unbiased risk estimator (SURE) (Eqs. [8,](#page-3-0) [9\)](#page-3-1) (DF<sub>MC</sub>).

Leveraging SURE and improvement techniques from DIP [\[43\]](#page-9-4), we propose an objective with 'stochastic temporal ensembling (STE),' which mimics ensembling over many noise realizations in a single optimization run. With the proposed STE objective, we propose to stop the iteration when the objective function crosses zero. The proposed method leads to much better solutions than DIP and outperforms prior art for single image denoising. In addition, inspired by the PURE formulation [\[20,](#page-8-7) [24\]](#page-8-8), we extend our objective function to address Poisson noise.

We empirically validate our method by comparing with DIP based prior art on denoising performance under various metrics suggested in the literature [\[13\]](#page-8-9), such as PSNR, SSIM and learned perceptual image patch similarity (LPIPS) [\[54\]](#page-9-6), on seven different datasets. LPIPS has been widely used in the super resolution literature to complement PSNR and SSIM in measuring the recovery of details [\[21\]](#page-8-10). Since it is challenging for a denoiser to suppress noise and preserve details at the same time [\[4\]](#page-8-11), we argue that LPIPS is another appropriate metric for evaluating denoisers; note that it has not yet been widely used in the denoising literature to analyze denoising performance. Our method not only denoises the images but also preserves rich textural details, outperforming other methods in LPIPS with comparable classic measures including PSNR and SSIM.
Our contributions are summarized as follows:

- Analyzing DIP for denoising with the effective degrees of freedom (DF) of a network and proposing a loss-based stopping criterion that requires no ground-truth image.
- Incorporating noise regularization and exponential moving average via the proposed stochastic temporal ensembling (STE) method.
- Diverse evaluation with various metrics, such as LPIPS, PSNR and SSIM, on seven different datasets.
- Extending our method to Poisson noise.
# Method

<span id="page-2-0"></span>**Deep image prior (DIP).** Let a noisy image $\mathbf{y} \in \mathbb{R}^N$ be

<span id="page-2-2"></span>
$$\mathbf{y} = \mathbf{x} + \mathbf{n}, \tag{1}$$

where $\mathbf{x} \in \mathbb{R}^N$ is the noiseless image that one would like to recover and $\mathbf{n} \in \mathbb{R}^N$ is *i.i.d.* Gaussian noise such that $\mathbf{n} \sim \mathcal{N}(\mathbf{0}, \sigma^2 \mathbf{I})$, where $\mathbf{I}$ is an identity matrix. Denoising can be formulated as the problem of predicting the unknown $\mathbf{x}$ from the known noisy observation $\mathbf{y}$. Ulyanov et al. [43] argued that the network architecture itself naturally encourages restoring the original image from a degraded image $\mathbf{y}$, and named this the deep image prior (DIP). Specifically, DIP optimizes a convolutional neural network $\mathbf{h}$ with parameters $\boldsymbol{\theta}$ by a simple least squares loss $\mathcal{L}$ as:

<span id="page-2-3"></span>
$$\hat{\boldsymbol{\theta}} = \arg\min_{\boldsymbol{\theta}} \mathcal{L}(\mathbf{h}(\dot{\mathbf{n}}; \boldsymbol{\theta}), \mathbf{y}), \tag{2}$$

where $\dot{\mathbf{n}}$ is a random input that is independent of $\mathbf{y}$. If $\mathbf{h}$ has enough capacity (*i.e.*, a sufficiently large number of parameters or architecture size) to fit the noisy image $\mathbf{y}$, the output of the model $\mathbf{h}(\dot{\mathbf{n}}; \hat{\boldsymbol{\theta}})$ will equal $\mathbf{y}$, which is not desirable. DIP uses early stopping to obtain the results with the best PSNR, computed against clean images.

**Effective degrees of freedom for DIP.** The effective degrees of freedom [10, 41] quantifies the amount of fitting of a model to the training data. We analyze the training of DIP with the effective degrees of freedom (DF) in Eq. 3 as a tool for monitoring overfitting to the given noisy image. The DF for an estimator $\mathbf{h}(\cdot)$ of $\mathbf{x}$ with input $\mathbf{y}$ can be defined as follows [14]:

<span id="page-2-1"></span>
$$DF(\mathbf{h}) = \frac{1}{\sigma^2} \sum_{i=1}^n Cov(\mathbf{h}_i(\cdot), \mathbf{y}_i), \tag{3}$$

where $\mathbf{h}(\cdot)$ and $\mathbf{y}$ are a model (*e.g.*, a neural network) and the noisy image, respectively, and $\sigma$ is the standard deviation of the noise. $\mathbf{h}_i(\cdot)$ and $\mathbf{y}_i$ denote the $i^{th}$ elements of the corresponding vectors. For example, if the input to $\mathbf{h}(\cdot)$ is $\dot{\mathbf{n}}$ and $\mathbf{y}$ is a noisy image, *i.e.*, $\mathbf{h}(\dot{\mathbf{n}})$, this is the DF for DIP. Note that $\mathbf{h}(\cdot)$ can take any input, and we use $\mathbf{y}$ (instead of $\dot{\mathbf{n}}$) in our formulation.
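The covariance in Eq. 3 can be sanity-checked numerically on a toy *linear* "denoiser" $\mathbf{h}(\mathbf{y}) = c\,\mathbf{y}$, for which the DF is known in closed form to be $c \cdot n$. The linear model and all constants below are illustrative assumptions for this check, not part of the paper's method:

```python
import random

random.seed(0)
sigma, n, trials = 1.0, 20, 20000
x = [random.gauss(0, 1) for _ in range(n)]   # fixed "clean image"
c = 0.3                                      # toy linear denoiser h(y) = c * y

# Sample many noisy realizations y = x + noise and estimate
# DF(h) = (1/sigma^2) * sum_i Cov(h_i(y), y_i); for h(y) = c*y this is c*n = 6.
ys = [[xi + sigma * random.gauss(0, 1) for xi in x] for _ in range(trials)]
df = 0.0
for i in range(n):
    col = [row[i] for row in ys]
    h_col = [c * v for v in col]
    my, mh = sum(col) / trials, sum(h_col) / trials
    cov = sum((a - mh) * (b - my) for a, b in zip(h_col, col)) / (trials - 1)
    df += cov / sigma**2
```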

Interestingly, the DF is closely related to the notion of optimism of an estimator $\mathbf{h}$, which is defined as the difference between test error and train error [14, 41]:

$$\rho(\mathbf{h}) = \mathbb{E}\left[\mathcal{L}(\tilde{\mathbf{y}}, \mathbf{h}(\cdot)) - \mathcal{L}(\mathbf{y}, \mathbf{h}(\cdot))\right],\tag{4}$$

where $\mathcal{L}(\cdot)$ is the mean squared error (MSE) loss and $\tilde{\mathbf{y}}$ is another noisy realization from the model (*i.e.*, with a different $\mathbf{n}$ in Eq. 1) that is independent of $\mathbf{y}$. In [41], it is shown that $\rho(\mathbf{h}) = 2\sum_{i=1}^{n} Cov(\mathbf{h}_{i}(\cdot), \mathbf{y}_{i})$. Thus, combining with Eq. 3, it is straightforward to show that

$$2\sigma^2 \cdot \mathrm{DF}(\mathbf{h}) = \rho(\mathbf{h}). \tag{5}$$

It is challenging to compute this covariance since $\mathbf{h}(\cdot)$ is nonlinear (*e.g.*, a neural network) and gradually changes over the optimization, and $\rho(\mathbf{h})$ requires many pairs of noisy and clean (ground-truth) images to compute (note that it is an estimate). Here, we introduce a simple approximate degrees of freedom that uses a single ground-truth image, which we call $\mathrm{DF}_{GT}$ and derive as follows:

$$2\sigma^2 \cdot \mathrm{DF}_{GT}(\mathbf{h}) \approx \mathcal{L}(\mathbf{x}, \mathbf{h}(\cdot)) - \mathcal{L}(\mathbf{y}, \mathbf{h}(\cdot)) + \sigma^2. \tag{6}$$

We provide a simple proof of this estimate in the supplementary material.

A large DF implies overfitting to the given input $\mathbf{y}$, which is not desirable. If DIP fits to $\mathbf{x}$, $\mathrm{DF}_{GT}$ becomes close to 0; the more DIP fits to $\mathbf{y}$, the larger the DF becomes. We use $\mathrm{DF}_{GT}$ to analyze the DIP optimization in the empirical studies in Sec. 5.1.

To prevent the overfitting of DIP, we aim to suppress the DF (Eq. 3) during the optimization without access to the ground-truth clean image $\mathbf{x}$. In Eq. 3, computing the DF amounts to summing the covariances between each element of the noisy image $\mathbf{y}$ and the model output $\mathbf{h}(\cdot)$. There are a number of techniques in the statistical learning literature for approximating this covariance computation, such as AIC [1], BIC [36] and Stein's unbiased risk estimator (SURE) [40]. Both AIC and BIC, however, approximate the DF by counting the number of parameters of a model, so for typically over-parameterized deep neural networks, approximations based on them can be incorrect [3]. Note that $\mathrm{DF}_{GT}$ cannot be used for optimizing the model because it needs the ground-truth clean image $\mathbf{x}$.

Here, we propose to use SURE to suppress the DF by deriving the DIP formulation using Stein's lemma. <span id="page-3-7"></span>Stein's lemma for a multivariate Gaussian vector $\mathbf{y}$ is [\[40\]](#page-8-17):
$$\frac{1}{\sigma^2} \sum_{i=1}^n Cov(\mathbf{h}_i(\mathbf{y}), \mathbf{y}_i) = \mathbb{E}\left[\sum_{i=1}^n \frac{\partial \mathbf{h}_i(\mathbf{y})}{\partial \mathbf{y}_i}\right]. \tag{7}$$

This simplifies the computation of the DF from the covariances between $\mathbf{y}$ and $\mathbf{h}(\mathbf{y})$ to the expected partial derivatives at each point, which can be well approximated in a number of computationally efficient ways [\[32,](#page-8-20) [39\]](#page-8-29). Note that SURE, denoted as $\eta(\mathbf{h}(\mathbf{y}), \mathbf{y})$, consists of Eq. [7](#page-3-3) and the DIP loss (Eq. [2\)](#page-2-3) with its input changed from $\dot{\mathbf{n}}$ to $\mathbf{y}$:

$$\eta(\mathbf{h}(\mathbf{y}), \mathbf{y}) = \mathcal{L}(\mathbf{y}, \mathbf{h}(\mathbf{y})) + \underbrace{\frac{2\sigma^2}{N} \sum_{i=1}^{N} \frac{\partial \mathbf{h}_i(\mathbf{y})}{\partial (\mathbf{y})_i}}_{\text{divergence term}} - \sigma^2. \tag{8}$$

While the vanilla DIP loss encourages the output of the model $\mathbf{h}$ to fit the noisy image $\mathbf{y}$, Eq. [\(8\)](#page-3-0) encourages it to approximately fit the clean image $\mathbf{x}$ without access to $\mathbf{x}$.

However, it is still computationally demanding to use Eq. [8](#page-3-0) as a loss for optimization with any gradient based algorithm due to the divergence term [\[32\]](#page-8-20). A Monte-Carlo approximation of Eq. [8](#page-3-0) from [\[32\]](#page-8-20) can remedy the computation cost, but it introduces a hyper-parameter $\epsilon$ that has to be selected properly for the best performance on different network architectures and/or datasets. To avoid tuning the hyper-parameter $\epsilon$, we employ an alternative Monte-Carlo approximation of the divergence term [\[39\]](#page-8-29):

$$\frac{1}{N} \sum_{i=1}^{N} \frac{\partial \mathbf{h}_{i}(\mathbf{y})}{\partial \mathbf{y}_{i}} \approx \frac{1}{N} \tilde{\mathbf{n}}^{T} \mathbf{J}_{\tilde{\mathbf{n}}^{T} \mathbf{h}(\mathbf{y})}, \tag{9}$$

where $\tilde{\mathbf{n}}$ is a standard normal random vector, *i.e.*, $\tilde{\mathbf{n}} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$, and the $i^{th}$ element of the Jacobian $\mathbf{J}_{\tilde{\mathbf{n}}^{T} \mathbf{h}(\mathbf{y})}$ is $\partial \tilde{\mathbf{n}}^{T} \mathbf{h}(\mathbf{y}; \boldsymbol{\theta}) / \partial \mathbf{y}_{i}$. We denote this 'estimated degrees of freedom by Monte-Carlo' as $\mathrm{DF}_{MC}$ and will use it to monitor the DIP optimization without computing the PSNR against clean ground-truth images (Sec. [4.2\)](#page-4-2).
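This Monte-Carlo divergence estimate can be checked on a toy linear map, whose divergence is known exactly. The sketch below replaces the autograd Jacobian-vector product of [39] with a finite-difference approximation, an implementation shortcut made for this illustration only:

```python
import random

random.seed(1)
N = 400

def h(y):
    # toy "network" standing in for h(y; theta); its exact divergence (1/N)*tr(J) = 0.5
    return [0.5 * v for v in y]

y = [random.gauss(0, 1) for _ in range(N)]
n_tilde = [random.gauss(0, 1) for _ in range(N)]   # n~ ~ N(0, I)
eps = 1e-4

# (1/N) * n~^T (h(y + eps*n~) - h(y)) / eps  approximates  (1/N) * sum_i dh_i/dy_i
y_pert = [yi + eps * ni for yi, ni in zip(y, n_tilde)]
div_mc = sum(ni * (a - b) for ni, a, b in zip(n_tilde, h(y_pert), h(y))) / (eps * N)
```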

To improve the fitting accuracy, DIP suggests several techniques, including noise regularization and exponential moving average [\[43\]](#page-9-4). We propose 'stochastic temporal ensembling (STE)' for better fitting performance by leveraging these techniques in our objective.

<span id="page-3-2"></span>**Noise regularization on DIP.** DIP shows that adding extra temporal noise to the input $\dot{\mathbf{n}}$ of the function $\mathbf{h}(\cdot)$ at each iteration improves performance on inverse problems including image denoising [\[43\]](#page-9-4). That is, a noise vector $\gamma$, with $\gamma \sim \mathcal{N}(\mathbf{0}, \sigma_{\gamma}^2 \mathbf{I})$, is added to the input of the function at every iteration of the optimization as:

<span id="page-3-4"></span>
$$\hat{\boldsymbol{\theta}} = \arg\min_{\boldsymbol{\theta}} \mathcal{L}(\mathbf{h}(\dot{\mathbf{n}} + \gamma; \boldsymbol{\theta}), \mathbf{y}), \tag{10}$$

<span id="page-3-3"></span>where $\dot{\mathbf{n}}$ is fixed but $\gamma$ is sampled from a zero-mean Gaussian distribution with standard deviation $\sigma_{\gamma}$ at every iteration. To estimate $\mathbf{x}$ by Eq. [8,](#page-3-0) we replace the input of the model $\mathbf{h}(\cdot)$, $\dot{\mathbf{n}}$, with the noisy image $\mathbf{y}$ (from Eq. [3](#page-2-1) to Eq. [7\)](#page-3-3). Interestingly, Eq. [10](#page-3-4) then becomes similar to the denoising auto-encoder (DAE), which prevents a model from learning a trivial solution by perturbing the input of $\mathbf{h}$ [\[46\]](#page-9-13).

<span id="page-3-0"></span>Meanwhile, the contractive auto-encoder (CAE) [\[33\]](#page-8-30) minimizes the Frobenius norm of the Jacobian, while SURE and its variants minimize the trace of the Jacobian (Eq. [9\)](#page-3-1) and thus suppress the DF. Since we assume that different realizations of the noise are independent, the off-diagonal elements of the Jacobian are zero, so the CAE is equivalent to SURE in terms of suppressing the DF. Alain *et al*. [\[2\]](#page-8-31) later showed that the DAE is a special case of the CAE when $\sigma_{\gamma} \to 0$. We can rewrite Eq. [10](#page-3-4) using the CAE formulation as:

$$\arg\min_{\boldsymbol{\theta}} \mathcal{L}(\mathbf{h}(\mathbf{y}; \boldsymbol{\theta}), \mathbf{y}) + \sigma_{\gamma}^{2} \left\| \frac{\partial \mathbf{h}(\mathbf{y}; \boldsymbol{\theta})}{\partial \mathbf{y}} \right\|_{F}^{2} + o(\sigma_{\gamma}^{2}), \tag{11}$$

as $\sigma_{\gamma} \to 0$, where $o(\sigma_{\gamma}^2)$ is a higher-order error term from the Taylor expansion. Thus, solving this optimization problem is equivalent to penalizing an increase of the DF. Here, the noise level $\sigma_{\gamma}$ serves as a hyper-parameter that determines performance, and using multiple levels of $\sigma_{\gamma}$ during the optimization improves the performance of DIP. We therefore propose to model $\sigma_{\gamma}$ as a uniform random variable instead of an empirically chosen hyper-parameter:

<span id="page-3-6"></span><span id="page-3-5"></span>
$$\sigma_{\gamma} \sim \mathcal{U}(0, b). \tag{12}$$
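Per Eqs. 10 and 12, both the noise standard deviation and the noise vector are resampled at every optimization iteration. A small sketch of this input-perturbation step (dimensions and the bound $b$ are illustrative choices, not the paper's exact settings):

```python
import random

random.seed(2)
N, b = 64, 0.5
n_dot = [random.gauss(0, 1) for _ in range(N)]  # fixed DIP input noise n_dot

def perturbed_input():
    """Return n_dot + gamma with gamma ~ N(0, sigma_g^2 I) and sigma_g ~ U(0, b),
    both freshly sampled each call (i.e., each iteration)."""
    sigma_g = random.uniform(0, b)
    return [v + sigma_g * random.gauss(0, 1) for v in n_dot]

z1, z2 = perturbed_input(), perturbed_input()
```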

<span id="page-3-1"></span>**Exponential moving average.** DIP further shows that averaging the restored images obtained over the last iterations improves denoising performance [\[43\]](#page-9-4), which we refer to as the 'exponential moving average (EMA).' It can be thought of as analogous to the effect of ensembling [\[31\]](#page-8-32).

**Stochastic temporal ensembling.** Leveraging the noise regularization and the EMA, we propose a method called 'stochastic temporal ensembling (STE)' to improve the fitting performance of the DIP loss. Specifically, we modify our formulation (Eq. [8\)](#page-3-0) to allow two noise observations instead of one $\mathbf{y}$: $\mathbf{y}_1$ as the target of the MSE loss and $\mathbf{y}_2$ as the input of the model $\mathbf{h}$, by setting $\mathbf{y}_1 = \mathbf{y}$ and $\mathbf{y}_2 = \mathbf{y} + \gamma$:

$$\eta(\mathbf{h}(\mathbf{y_2}), \mathbf{y_1}) = \underbrace{\mathcal{L}(\mathbf{h}(\mathbf{y_2}), \mathbf{y_1})}_{\text{data fidelity}} + \underbrace{\frac{2\sigma^2}{N} \sum_{i=1}^{N} \frac{\partial \mathbf{h}_i(\mathbf{y_2})}{\partial (\mathbf{y_2})_i}}_{\text{regularization}} - \sigma^2, \quad (13)$$

where $\sigma$ is the known noise level of $\mathbf{y}_1$ (same as in Eq. [1\)](#page-2-2), and $\mathbf{h}_i(\mathbf{y}_2)$ and $(\mathbf{y}_2)_i$ are the $i^{th}$ elements of the vectors $\mathbf{h}(\mathbf{y}_2)$ and $\mathbf{y}_2$, respectively. Interestingly, Eq. [13](#page-3-5) is equivalent to the formulation of extended SURE (eSURE) [\[55\]](#page-9-14), which is shown to be a better unbiased estimator of the MSE with the clean image $\mathbf{x}$. But there are a number of critical differences between our method and [\[55\]](#page-9-14). First, our method does not require training, while Zhussip *et al*. [\[55\]](#page-9-14) require training with many noisy images. Second, because Zhussip *et al*. [\[55\]](#page-9-14) use a fixed instance of $\gamma$, they do not benefit from the regularization effect of Eq. [10,](#page-3-4) which gives a reasonable performance gain (see Sec. [5.2\)](#page-5-0). This is our final objective function for DIP, which stops automatically by a stopping criterion described in the following section.

<span id="page-4-4"></span><span id="page-4-3"></span>

Figure 2: Illustration of the solution trajectories of our method and DIP. We consider the problem of reconstructing an image x from a degraded measurement y. DIP finds its optimal stopping point (t4) by early stopping. Ours changes DIP's solution trajectory from black to orange, whose stopping point (t5) is determined by a loss value (Sec. [4.2\)](#page-4-2) and is close to the noiseless solution (x).
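On a toy *linear* denoiser the STE objective of Eq. 13 can be evaluated directly, since the divergence term is then known in closed form. The linear model, noise levels and constants below are all illustrative assumptions for this sanity check:

```python
import random

random.seed(3)
N, sigma = 2000, 0.1
x = [0.0] * N                                    # toy clean signal
y1 = [xi + random.gauss(0, sigma) for xi in x]   # target of the MSE term, y1 = y
y2 = [v + random.gauss(0, 0.05) for v in y1]     # perturbed model input, y2 = y1 + gamma

def h(y):
    return [0.7 * v for v in y]  # linear toy: (1/N) * sum_i dh_i/dy_i = 0.7 exactly

mse = sum((a - b) ** 2 for a, b in zip(h(y2), y1)) / N
eta = mse + 2 * sigma**2 * 0.7 - sigma**2        # Eq. 13 with the exact divergence
```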

SURE works well if the model $\mathbf{h}$ satisfies the smoothness condition, *i.e.*, $\mathbf{h}$ admits a well-defined second-order Taylor expansion [\[27,](#page-8-18) [38\]](#page-8-21). While a typical learning based denoiser satisfies this smoothness condition [\[38,](#page-8-21) [55\]](#page-9-14), the DIP network 'fits' to a target image (a noisy image in [\[42,](#page-9-3) [43\]](#page-9-4) and an approximate clean image in our objective), so there is no guarantee that the smoothness condition is satisfied, especially once it has converged.

We observed that the divergence term in our formulation (Eq. [13\)](#page-3-5) increases at early iterations (*i.e.*, before convergence) while it starts to diverge to $-\infty$ at later iterations (*i.e.*, after convergence). This observation is consistent across all our experiments. Note that this divergence phenomenon was not reported in [\[27\]](#page-8-18), because the DIP network with the SURE loss did not seem to be fully converged there: with an insufficient number of iterations, it could not recover the fine details. Based on this observation, we propose a 'zero crossing stopping criterion' that stops the iteration when our objective function (Eq. [13\)](#page-3-5) crosses zero.
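A minimal sketch of the zero crossing stopping criterion; the sign convention (a positive objective decreasing through zero as the divergence term heads to $-\infty$) follows the observation above:

```python
def zero_crossing_stop(objective_values):
    """Return the first iteration at which the objective crosses from positive
    to non-positive, i.e. the proposed stopping point; None if no crossing
    occurs within the recorded iterations."""
    for t in range(1, len(objective_values)):
        if objective_values[t - 1] > 0 >= objective_values[t]:
            return t
    return None

stop = zero_crossing_stop([0.9, 0.4, 0.1, -0.05, -0.7])
```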

**Solution trajectory.** To help understand the difference between our method and DIP in the optimization procedure, similar to Fig. 3 in [\[43\]](#page-9-4), we illustrate the image restoration trajectory of DIP together with that of our method in Fig. [2.](#page-4-3) DIP degrades the quality of the restored images by overfitting; to obtain a solution close to the clean ground-truth image, DIP uses early stopping (blue t4). Our formulation has a different training trajectory (orange) from DIP (black) and automatically stops the optimization by the zero crossing stopping criterion (orange t5). We argue that the resulting image of our formulation is in general closer to the clean image (blue x) than the solution by DIP and preserves more high frequency details (Sec. [5.3\)](#page-6-0), thanks to a better target to fit (an approximation of the clean x rather than the noisy y) and our principled stopping criterion that uses no ground-truth image. We empirically analyze this phenomenon with our proposed $\mathrm{DF}_{GT}$ and compare it to $\mathrm{DF}_{MC}$ in Sec. [5.1](#page-4-1) and the supplementary material.

As SURE is limited to Gaussian noise [\[38\]](#page-8-21), there have been several attempts to extend it to other types of noise [\[11,](#page-8-22) [20,](#page-8-7) [35\]](#page-8-23). Here, we extend our formulation to Poisson noise, as it is a useful model for noise in low-light conditions. We modify our formulation (Eq. [13\)](#page-3-5) to use the Poisson unbiased risk estimator (PURE) [\[17,](#page-8-33) [20,](#page-8-7) [24\]](#page-8-8) for Poisson noise as follows:

$$\mathcal{L}(\mathbf{h}(\mathbf{y}), \mathbf{y}) - \frac{\zeta}{N} \sum_{i=1}^{N} \mathbf{y}_{i} + \frac{2\zeta}{\dot{\epsilon} N} (\mathbf{\check{n}} \odot \mathbf{y})^{T} \left(\mathbf{h}(\mathbf{y} + \dot{\epsilon}\mathbf{\check{n}}) - \mathbf{h}(\mathbf{y})\right), \tag{14}$$

where $\mathbf{\check{n}}$ is a $k$-dimensional binary random vector whose element $\mathbf{\check{n}}_i$ takes $-1$ or $1$ with probability $0.5$ each, $\dot{\epsilon}$ is a small positive number, and $\odot$ is the Hadamard product. We empirically validate the Poisson extension in Sec. [5.4.](#page-6-1)
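Eq. 14 can likewise be evaluated on a toy linear denoiser, for which the finite-difference term is exact. The stand-in count data, gain $\zeta = 1$ and the linear model are illustrative assumptions for this check only:

```python
import random

random.seed(4)
N, zeta, eps = 200, 1.0, 1e-3
y = [float(random.randint(0, 10)) for _ in range(N)]  # stand-in photon counts

def h(y):
    return [0.9 * v for v in y]  # toy denoiser

n_check = [random.choice([-1.0, 1.0]) for _ in range(N)]  # binary +-1 vector n_check
y_pert = [yi + eps * ni for yi, ni in zip(y, n_check)]

# Eq. 14: MSE term - (zeta/N) * sum(y) + (2*zeta/(eps*N)) * (n_check ⊙ y)^T (h(y+eps*n_check) - h(y))
mse = sum((a - b) ** 2 for a, b in zip(h(y), y)) / N
last = (2 * zeta / (eps * N)) * sum(
    ni * yi * (a - b) for ni, yi, a, b in zip(n_check, y, h(y_pert), h(y))
)
pure = mse - zeta * (sum(y) / N) + last
```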
|
2109.14982/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2021-03-17T11:05:48.567Z" agent="5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.192 Safari/537.36" version="14.4.5" etag="I1CdYy_RCckCXtas-YrX" type="device"><diagram id="Ktlaeo5GKm-h4cRINJvj">7V3bcts4Ev0aVWUerCIIgiAffYlnHxJXKtmtmXmkJdriRhK9FD125usXvIA3QBItg2hQtJJKREqESHT3QfdBozHD15vX35PgafU1XobrmW0tX2f4ZmbbNvZ99l925ldxhlK7OPGYRMviFKpP/Ij+CcuTVnn2OVqGu9YX0zhep9FT++Qi3m7DRdo6FyRJ/NL+2kO8bv/qU/BY/qJVn/ixCNah8LU/omW6Ks56Nq3P/yuMHlf8l5FbPvAm4F8um9itgmX80vgt/HmGr5M4Tot3m9frcJ11Hu+X4oZu93xa3VgSbtM+F5T9/newfi6frbyv9Bd/2CR+3i7D7PvWDF+9rKI0/PEULLJPX5h42blVulmzI8TePsTbtJSX7WTH0Xp9Ha/jJG8Lu/Tyymf3crVLk/hn1XmYnSlvJEzS8HXvw6Cqi5huhfEmTJNf7CvlBcTx56S4qFQsQspGXmox+aVMVw0J2X55XVBqxmPVeN157E3Zf/K+xELXhUumNuVhnKSr+DHeBuvP9dmrdufW3/kSx09ll/43TNNfZZ8Gz2nc7vDwNUr/bLz/K2uK9UJxdPNatpwf/OIHW/ZkfzYPGldlh/Vl+RG/ri00pxJa9phtrY6fk0V5yintM0geQ97bvQWbhOsgjf5ut/4eGTnH9f2Ihge7pwJSHqLXTHC5yjdU/CF/SVV8nzEoUH0b4R6qX6m5ct0nB3V/G28BlX15mUE+O1ysg90uWhQnb6M1b/Ykvaa9RdRbictLv8URa7EWrd0VLbXQ3LE9QlHxr9tusjC1spWOCKvb6iVVFxLRasH+1fjkGKLVIFZd1RvRSHsAo9abNKGJcEwwcBhHj2Mcu4D5SuHx8VxAOwVYdeGhLlZ5EqxyZOM0P/me/vHAsOqYEldIVd5EDVPZ58tgt8rvCJUH34I0DZNtfsa2aiRrjDHXTvannx4jq7ck3wlpF25HAVy/04Y6DPPhRqZBnDBTtKQFd4Box+9nGLjruGxW/lLTy+/DUEcwIYm/hyQQihQgKEJiFxsCoVKjMt2kbNGknAHAuLd4e7ACZ2hSqOuWuI5GkzrMHhgSQcGYh3uOfglytAq8HU2d4poci6VMUZOWYwKJono5kdPC5VENkxIB2wNgQ28Bu5McJokAkp6rb5gclNwwtc8FBtB1NXomIl9iTLhs+pjE54RaY9IAzHFvWeplQ05zOt5G4RopYkA+hE9ZGxlIjJPykokYA3qW1T1+0C+DiRcDilfi0gwJ0ieItw3RYxQwZODA72daTqyEX9MYONh66ZZJDIuSeBwUN+GSUM5yWJSI1+1v9erFO026xe7SLZQ3ogM1KZgrcp7RIpbMnkNO9QEmw5zrsCgTMQHkfEaWATNKC/bgxIsnmf+CBUbc1xdLYDj+5SwtyjB6DevlX8aUiiETFSGAopokkyKs9dAZEmA4IsXs9Gk8gB3I05SoOB082BIQ/lTTsjDRvbA1uheHl90MGnT3jckqn2IMTBaWrNAZwlp7C3iSSSyu17UppNGmTOY5YK1jCPrJgHFrDJkuY4rSHAnvAQmiziR5D9BMQL4cwtzZADQqi5LxHoBEsSMxIEMiu3HmncgE7PiAAp4mW2IJmIn1OZ6OSJd8EMN97ceT2A9ggokDuaLnPOfDZSIGHQP1ci9TmA+XWjHkKKg3p+Xc08Rk4iX9B2n14vWm6OSI5JrOuNCfYpcjYRrOpfr6nAAu8zlD
V5MjlimDFL+faZmUD5nVQ8ylP4BtQ9vENumOYtQZbIKA6C24MoVgjiuKKYlcBHDNzzmOkRLxQiY2k2mmpggYqTH5i3zQI8qtSpKcAhk/k2kmp4B6noDJKWc4TnHmw5QkZTKGVJQ5wW4bOH2GBMcEzY6+hUnEuiRMDJY+5KpkF46oOUe6WSZevnsMiHgnSdQICxA0ss2uXp7Gmr0ZTEeVheRKNmiBjOrcMfAyoxcwYIDh9tiD5/wQ0xPCdo3Z766YuPIRTfQ0HyrJe+YqDGI+mlcHTYC3lokYdAiEy1A5V5ZNJmIKacXmbkI0xqhQKl7ACn+uZsrnBPGOK5VeKmDAsJ9OcvUREeJ+jTuk8OSqkbDkIxgTbbNAk9/PtGxKnM+l+gJDOgayZWTBhIRugYwX6STpFl+wKo3b49ExrBMalU1JStFCRm90kqVoKehIJUtLcdfprNiyudX37v+eY/7BxS43n0v2BRs/vdYfsneP5f95K7unYCttZVH0ZtZC8nj/iT0L+8vu12q8+y17m7Vl3QeLn4+5aV90LsxXppQX5Lnl/D3JLu/eVp7iesGOgk2mL9v73VPjZllvFfdbfPnrl2+fcNHazW+N7xQ9w5/zIAN8REkzxYkWwfpyHT1mCreJlst8u+9VnET/sJ8JOBZ0tPeBZH9KYTTOF6/y/G2widaZUv072oQ7dpt34Qv793u8Yc9Y7sld3rhVaTnftRur0W6x5pNsV2oiUe/q5Lv0W7ZsSYV+X93lmpbp5vfwy3/erh3ouHaYoQQ98FGFngiLFPTqCWA13/6FjBbrYLeLFi1HAQlCOzD6D7/WAHsCmWGhOXb8+tVucs/SA7FhR2zYmruOVb1wu+HC01GxpsGTsVIqQCQbYG4MH2AE38fNXypRR0W0390ERSt68N/6GGUMH2UcB1ZPJPHMmEeZBplkYbsZ+lrzLKHsWPgryXPdM3A1Q1RvgBBVPuZQ2xFr3Mx5Qae3jmAu9vdVzBlgzDpMQdZa9bk+a7jmyYL33p5Pk/fQp0AIMX3hRTKrJRB4nu0QQihysn9xL206RQUGpSgVoLFn293OIVkxS09A5KosYQuSHRWQbO7uVofJwT56PsCsk1zPPZ/OMa1dfLut8VlMQN8aAZyi8ToJxOv8pcgSCBVgwpFawlBEoKdzfZrSDFBfABEX6+268RZB7gMiA5TrMBtEdJbdUWoJjggi2eSCPkvwQdJ3FHSdb4tdR7SCiC+m4RiTc/PeecweKONLpiK5NkFMRfoSxTVurlltBseJUvIAM7p9vXk2JmR0nyglZEEak7l7Gw2RlX2yiACXU/uac2tOEJHazOpTxyTALSl9ndHt7a0670oWoml2TKe39KePgktqzMKikM4is0pDLyrEDxTpjR8AExjMDCkql6el3ggwv7G6o9Hpt4/F+NjTiuDIGtc6FS0KLtkIBdTRR5besFkzFSrvcAToElZ3NDqf0EfQIyayzA1LFaiqturCPj1I22fFh3XQ9tUTj27yTza4Is2Dq9aJU5WOCRVDS0tz35kcWyoAkgGGN8OBZKxRKPwsFrLGOnnK/Gax82y9SMIbmegcYBWbt+OZAfIP+ktEb9A5kmnAPYICrHGAkMmBJxjBKxcUAizajBDclK2hc4F7hARqTTqLGyhlFSThgGYPCGkNQ5V6QKjtf/vtfnQ19+NoQ1IfnCtH5u7FqQAu9QWkrn0gIKVYV0CKRhuQOgIc6yZ50VgDUt8FhxFb7x4L1ty1nJZjbfm4bwyESTs9aO6Tk9YfHkOo7Pd7rllsoZZsISNorkV1R+OzDJGpoZqdPN5V4+s7MRWLaubLbTECnBTLZcuyrrj+wCCB3onQsbBcUkEhSDrShlusajLLJbcoUEHBbehoKsslFRLkNrbI1hslW28Q0lDdDbhWp7qj8ZGK4uJsqneFcQVmo8ttQZYlOpx6V7ZXIDM+Z90T+87V66xjXctSjSgSo6EgDObaWFeDabfRt5oQCxk8WWkZ39pL
SaqrLoSwLABWUfHs97u7vCSesfXw3lEFbZACq8TtVpUidUVhHUXNEO6Rg/wmIeXV4gr0sB1RFJeuZ115Q/UnQliwK8I8AEtEXV/WpXwl5fu6VDYB+z7zsh6CRfsCmZIK9niZpt/Di9Io8diMciANIaKGeI5sXB7Q6GTTzB+Fu8dTV9X0wt3AJVUR1r0hI3bak0+I2v2olIqLVEFMnjq/xEP7weeX9kyV27iqi1mFKgwSUV2eudPmHmdUbBlbc+4jN6LvTmsKZ9ux1kpoKteRSGbbdYeLh0uhDVx1tD2xcBIRepSvFmPJhrdKe1urL7FWbj8g7BzuwTCpdOJd98q+vR1q8HL9TpTtiluq8MKrLSPw6FyF/+7IKCcV3tnttx/ss08/h/Fq9vkqb/B2DIuOs80H7E7RZU/UBaeKoVva4CjShsOrCLSi4uHp1uPoduqE7B5UhKnrzbwSf07bauG5qIrx38rFUd+aW41Xu2WfOHPP1sHK8RHEABBP4pRJKM7Qwlc0D9MFdc+19IK6jOc6Eiz3AvWfF3d3eyLZs8LzAXQiw/euIXuiWgyM73CFBM5r86CuhVPS2R/h9K0WKCFzOiDyKqDjHIYNOSgIjJyT/7Vm5HoTpKv7+xm9+j6jNzPymb27Y6fT0gJvsrMZkVZc0c9FZNaXdqLWEhcWTN4ZC7EfMJKQPUBwX1FRT1ln5t1LrmbkJmuL6XHxkNIRRQEGEFfcRkCeKV1NvjVRAKsIfB3ZahU98v/akD+eovwpX6XL/S0kDgADSl7BDpoKJJ/x3Vf4t0nKn4j2b0sn7LA3mBYo2Gfyw/5Pkz+qFqrwCF+enaUKA9hhEmcSqp0F5nmvvsbLMPvG/wE=</diagram></mxfile>
2109.14982/main_diagram/main_diagram.pdf
ADDED
Binary file (93.3 kB).
2109.14982/paper_text/intro_method.md
ADDED
@@ -0,0 +1,81 @@
# Introduction
The progress in sensing technologies has significantly expedited 3D data acquisition pipelines, which in turn has increased the availability of large and diverse 3D datasets. With a single 3D sensing device [@lu2006single], one can capture a target surface and represent it as a 3D object, with point clouds and meshes being the most popular representations. Several applications, ranging from virtual reality and 3D avatar generation [@lattas2020avatarme; @potamias2020learning] to 3D printing and digitization of cultural heritage [@pavlidis2007methods], require such representations. However, a 3D capturing device typically generates thousands of points per second, making processing, visualization and storage of captured 3D objects a computationally daunting task. Raw point sets often contain an enormous number of not only redundant but also possibly noisy points of low visual perceptual importance, which results in an unnecessary increase in storage costs. Processing, rendering and editing applications therefore require efficient simplification methods that discard excessive detail and reduce the size of the object while preserving its significant visual characteristics.
Point cloud simplification can be described as the process of reducing the levels-of-detail (LOD) so as to minimise the introduced perceptual error [@cignoni1998comparison]. In contrast to sampling methods, the main objective of point cloud simplification is the removal or collapse of particular points that do not significantly affect the visual quality, so that the most salient features are preserved in the simplified point cloud [@luebke2001developer]. In this study, we propose a learnable strategy to remove the least perceptually important points without sacrificing the overall structure of the point cloud. As perceptually important features that should be preserved, we consider points of high surface curvature, which have been shown to correlate strongly with the human perceptual system [@lee2005mesh; @lavoue2009local].
Traditional simplification methods address the task by solving an optimization problem that minimizes the visual error of the simplified model. Such optimizations are usually non-convex and computationally costly, and a point-importance queue is constructed to sort the 3D points according to their scores [@rossignac1993multi; @hoppe1996progressive; @garland1997surface]. Point cloud simplification methods can be categorized as *mesh-based* or *point decimation-based*. Mesh-based methods attempt to reconstruct a 3D surface from the point cloud and simplify the generated mesh. In contrast, point decimation-based methods directly select points from the reference point cloud according to their feature scores. Edge contraction remains to date one of the most successful and popular mesh-based methods, since it produces high-quality approximations of the input [@garland1997surface; @garland1998simplifying]. Although several approaches have used parallel GPU computation to reduce the execution time by up to 20 times [@decoro2007real; @wang2019fast], simplification can still be considered a computationally hard problem. It is therefore essential to reduce the computation and time complexity by leveraging neural networks to simplify point clouds efficiently.
In this study, we propose the first, to the best of our knowledge, learnable point cloud simplification method. The proposed method preserves both the salient features and the overall structure of the input and can be used for real-time point cloud simplification without any prior surface reconstruction. We also show the limitations of popular distance metrics, such as the Chamfer and Hausdorff distances, in capturing the salient details of simplified models, and we propose several evaluation criteria that are well suited to simplification tasks. The proposed method is extensively evaluated in a wide range of experiments.
The rest of the paper is structured as follows. In Section 2, we succinctly summarise related work covering mesh and point cloud simplification, point cloud sampling and learnable graph pooling methods. In Section 3, we present the preliminaries and the details of the proposed method, including the model architecture, the training procedure, the limitations of uniform distance measures and the implementation details. Section 4 reviews the evaluation criteria used to measure the performance of the proposed method. Finally, in Section 5, we extensively evaluate our method in a series of qualitative and quantitative experiments. In particular, we report the performance and execution time of the proposed method under several perceptual and distance measures. In addition, we show that the simplified point clouds can still be identified by pre-trained classifiers. We also qualitatively evaluate the proposed method on noisy and in-the-wild point clouds and corroborate our findings with a user study.
# Method
Calculating the local surface properties of an unstructured point cloud is a non-trivial problem. As demonstrated in [@hoppe1992surface; @pauly2002efficient], covariance analysis provides an intuitive estimator of surface normals and curvature. In particular, considering a neighborhood $\mathcal{N}_i$ around the point $\mathbf{p}_i \in \mathbb{R}^3$ we can define the covariance matrix:
$$\begin{equation}
C = \begin{bmatrix}
\mathbf{p}_{i_1} - \mathbf{p}_i \\
\mathbf{p}_{i_2} - \mathbf{p}_i \\
\vdots \\
\mathbf{p}_{i_k} - \mathbf{p}_i
\end{bmatrix}^T \cdot \begin{bmatrix}
\mathbf{p}_{i_1} - \mathbf{p}_i \\
\mathbf{p}_{i_2} - \mathbf{p}_i \\
\vdots \\
\mathbf{p}_{i_k} - \mathbf{p}_i
\end{bmatrix}
\in \mathbb{R}^{3\times 3}
\end{equation}$$ where $\mathbf{p}_{i_j} \in \mathcal{N}_i$.
Solving the eigendecomposition of the covariance matrix $C$, we can derive the eigenvectors corresponding to the principal eigenvalues, which define an orthogonal frame at point $\mathbf{p}_i$. Each eigenvalue $\lambda_i$ measures the variation along the axis defined by its corresponding eigenvector. Intuitively, the eigenvectors corresponding to the two largest eigenvalues span the tangent plane at point $\mathbf{p}_i$, whereas the eigenvector corresponding to the smallest eigenvalue approximates the surface normal $n_i$. Thus, given that the smallest eigenvalue measures the deviation of point $\mathbf{p}_i$ from the surface, it can be used as an estimate of point curvature. As shown in [@pauly2002efficient], we may define: $$\begin{equation}
\kappa(\mathbf{p}_i) = \frac{\lambda_0}{\lambda_0+\lambda_1+\lambda_2}, \quad \lambda_0<\lambda_1<\lambda_2
\end{equation}$$ as the local curvature estimate at point $\mathbf{p}_i$, which is well suited to tasks such as point simplification. Using the estimated curvature at point $\mathbf{p}_i$ we can compute the mean curvature as the Gaussian-weighted average of the curvatures over the neighborhood $\mathcal{N}_i$: $$\begin{equation}
\bar{\mathcal{K}}(\mathbf{p}_i) = \frac{\sum\limits_{j \in \mathcal{N}_i} \kappa(\mathbf{p}_j) \exp{(-\|\mathbf{p}_j - \mathbf{p}_i\|^2/h})}{\sum\limits_{j \in \mathcal{N}_i}\exp{(-\|\mathbf{p}_j - \mathbf{p}_i\|^2/h}) }
\end{equation}$$ where $h$ is a constant defining the radius of the neighborhood. Finally, we can define an estimate of the roughness at point $\mathbf{p}_i$ as the difference between the curvature and the mean curvature: $$\begin{equation}
\mathcal{R}(\mathbf{p}_i)=\lvert\kappa(\mathbf{p}_i) -\bar{\mathcal{K}}(\mathbf{p}_i)\rvert
\end{equation}$$
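The three estimators above can be sketched in NumPy as follows. This is an illustrative sketch, not the paper's implementation: the brute-force distance matrix, the function name and the default values of `k` and `h` are our own choices (a k-d tree would replace the pairwise distances at scale).

```python
import numpy as np

def simplification_features(points, k=8, h=0.01):
    """Per-point curvature, Gaussian-weighted mean curvature and roughness
    for an (n, 3) point cloud, via local covariance analysis."""
    n = len(points)
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    nbr_idx = np.argsort(dists, axis=1)[:, 1:k + 1]   # k nearest neighbours

    kappa = np.empty(n)
    for i in range(n):
        diff = points[nbr_idx[i]] - points[i]         # (k, 3) centred nbrs
        C = diff.T @ diff                             # 3x3 covariance matrix
        lam = np.sort(np.linalg.eigvalsh(C))          # lambda_0 <= ... <= lambda_2
        kappa[i] = lam[0] / max(lam.sum(), 1e-12)     # curvature estimate

    # Gaussian-weighted mean curvature over each neighbourhood
    w = np.exp(-dists[np.arange(n)[:, None], nbr_idx] ** 2 / h)
    mean_kappa = (w * kappa[nbr_idx]).sum(axis=1) / w.sum(axis=1)
    roughness = np.abs(kappa - mean_kappa)
    return kappa, mean_kappa, roughness
```

On a perfectly planar cloud the smallest eigenvalue vanishes, so $\kappa$, $\bar{\mathcal{K}}$ and $\mathcal{R}$ are all (numerically) zero, matching the intuition that curvature measures deviation from the tangent plane.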
<figure id="fig:smoothcd" data-latex-placement="t">
<div class="center">
<img src="chamfer_comparison.png" style="width:60.0%" />
</div>
<figcaption>A point cloud simplified using FPS (left) achieves a better Chamfer distance (CD) than a point cloud decimated using a curvature-preserving method (right). However, the perceptual similarity scores are better for the latter.</figcaption>
</figure>
The main building block of our architecture is a graph neural network that receives at its input a point cloud (or a mesh) $\mathcal{P}_1$ with $N$ points $\mathbf{p}_i$ and outputs a simplified version $\mathcal{P}_2$ with $M$ points, $M \ll N$. It is important to note that the simplified point cloud $\mathcal{P}_2$ does not need to be a subset of the original point set $\mathcal{P}_1$. The proposed model is composed of three modules: the *Projector Network*, the *Point Selector* and the *Refinement Network*. Figure [1](#fig:model){reference-type="ref" reference="fig:model"} illustrates the architecture of the proposed method.
Point cloud simplification can be considered a sampling procedure constrained to preserve both the overall shape and the salient features of the input cloud. In this study, we formulate sampling as a clustering problem. In particular, we aim to cluster points that share similar perceptual and structural features and to express the simplified point cloud through the cluster centres. To do so, we designed a *Projector Network* that embeds $(x,y,z)$ coordinates in a high-dimensional space where points with similar features lie close together. In other words, instead of sampling directly from the Euclidean input space, we sample points embedded in a latent space that captures the perceptual characteristics of the input cloud. Clustering this latent space groups together the latent vectors of points that share similar perceptual characteristics.
Based on the observation that Farthest Point Sampling (FPS) provides a simple and intuitive technique for selecting points that cover the structure of a point cloud [@{pointnet++}], we built our sampling module on top of this strategy, with points sampled from the high-dimensional latent space instead of the input *xyz*-space. Although any clustering algorithm could be adequate, we utilized FPS since it covers the input space sufficiently without solving any optimization problem. Intuitively, this formulation allows us to intervene in the selection process and turn it into a learnable module. The revised sampling module selects point embeddings that cover the perceptual latent space, enabling the preservation of both the shape and the features of the input.
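Greedy FPS itself is agnostic to the space it operates in; the same routine covers *xyz* coordinates or latent features. A minimal sketch (the function name and seed choice are ours):

```python
import numpy as np

def farthest_point_sampling(feats, m):
    """Greedy FPS over an (n, d) array: repeatedly pick the row farthest
    from the already-selected set, so the m picks cover the space."""
    min_d = np.linalg.norm(feats - feats[0], axis=1)
    selected = [0]                                   # arbitrary seed point
    for _ in range(m - 1):
        nxt = int(np.argmax(min_d))                  # farthest remaining point
        selected.append(nxt)
        # distance of every point to its nearest selected point
        min_d = np.minimum(min_d, np.linalg.norm(feats - feats[nxt], axis=1))
    return np.asarray(selected)
```

Each iteration is O(nd), so sampling m points costs O(nmd) with no optimization loop, which is what makes FPS attractive as a drop-in selector.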
The Projector Network comprises a multi-layer perceptron (MLP) applied to each point independently, followed by a GNN that captures the local geometric properties around each point. The update rule of the GNN layer is the following: $$\begin{equation}
\mathbf{f}_i' = W_c \mathbf{f}_i +\frac{1}{\lvert\mathcal{N}_i\rvert} \sum_{j \in\mathcal{N}_i } W_n \mathbf{f}_j
\end{equation}$$ where $\mathbf{f}_i$ denotes the output of the shared point-wise MLP for point $\mathbf{p}_i$ and $W_c, W_n$ are learnable projection matrices. The connectivity between points is given either by the mesh triangulation or by a k-nn query in the input space. Following the Projector Network, the *Point Selector* module utilizes FPS to select points, i.e. cluster centers, based on their latent features, so as to cover the latent space. Given the cluster centers selected by FPS, we build a nearest-neighbour graph that connects each center with its k nearest neighbours in the input. In order to gain flexibility in cluster-center positioning and preserve salient features, we selected a sufficiently large neighborhood size.
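The GNN update above is a self-transform plus an averaged neighbour message. A NumPy sketch under our own assumptions (dense weight matrices, neighbourhoods as Python lists of indices):

```python
import numpy as np

def gnn_layer(feats, neighbors, W_c, W_n):
    """One message-passing step: f_i' = W_c f_i + (1/|N_i|) sum_j W_n f_j."""
    out = feats @ W_c.T                       # self term for every point
    for i, nbrs in enumerate(neighbors):
        out[i] += (feats[nbrs] @ W_n.T).mean(axis=0)   # averaged messages
    return out
```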
The cluster centers, the positions of their neighboring points and the corresponding embeddings from the Projector Network are fed to an attention-based refinement layer (AttRef) that adjusts the positions of the cluster centers. This layer can be considered a rectification step that, given a large neighborhood and its latent features, displaces the cluster centers so as to minimize the visual perceptual error. Given that the latent embedding of each point can be thought of as a local descriptor, the refinement layer generates the new positions from the vertex displacements together with the neighborhood descriptors. The final positions of the points predicted by *AttRef* are defined as follows:
$$\begin{equation}
\mathbf{p}_{c_i}' = \mathbf{p}_{c_i} + \gamma \left( \frac{1}{\lvert\mathcal{N}_{c_i}\rvert}\sum_{j \in \mathcal{N}_{c_i}} \alpha_{ij} \phi ( [\mathbf{f}_j \| \mathbf{p}_j - \mathbf{p}_{c_i} ]) \right)
\end{equation}$$ where $\gamma$ and $\phi$ are MLPs, $\mathcal{N}_{c_i}$ denotes the k nearest neighbors of point $\mathbf{p}_{c_i}$, $\mathbf{f}_j$ the latent features of point $\mathbf{p}_{j}$ and $\alpha_{ij}$ the attention coefficients between center $\mathbf{p}_{c_i}$ and point $\mathbf{p}_{j}$. The attention coefficients $\alpha_{ij}$ are computed using a scaled dot-product, i.e. $\alpha_{ij} = \text{softmax}\left(\frac{\theta_q(\mathbf{p}_j)^T \theta_k(\mathbf{p}_{c_i})}{\sqrt{d}}\right)$, where $\theta_q, \theta_k$ are linear transformations $\mathbb{R}^{3}\to \mathbb{R}^{d}$.
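For one cluster center, the refinement step can be sketched as below. The MLPs $\phi$ and $\gamma$ are passed in as callables and $\theta_q, \theta_k$ as matrices; all of these stand-ins, like the function name, are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def refine_center(p_c, nbr_pos, nbr_feats, theta_q, theta_k, phi, gamma):
    """Attention-based displacement of one cluster center: messages built
    from [f_j || p_j - p_c], weighted by scaled dot-product attention."""
    q = nbr_pos @ theta_q.T                   # queries from the neighbours
    key = theta_k @ p_c                       # key from the center itself
    logits = q @ key / np.sqrt(len(key))
    alpha = np.exp(logits - logits.max())
    alpha /= alpha.sum()                      # softmax attention weights
    msgs = phi(np.concatenate([nbr_feats, nbr_pos - p_c], axis=1))
    return p_c + gamma((alpha[:, None] * msgs).mean(axis=0))
```

Note that with symmetric neighbours and uniform attention the displacements cancel, so the center stays put, which is the expected behaviour for a locally flat, evenly sampled patch.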
The selection of the loss function to be optimized is crucial for the simplification task, since we seek a balance between preserving the structure and preserving the salient features. A major limitation of the most common distance metrics is their uniform weighting of points, which cannot reflect the perceptual differences between objects. As shown in many studies [@jin2020dr; @li2019lbs; @{wen2019pixel2mesh++}], the commonly used Chamfer distance (CD) between two point sets $\mathcal{P}_1,\mathcal{P}_2$, defined as: $$\begin{equation}
d_{\mathcal{P}_1, \mathcal{P}_2} = \sum_{x \in \mathcal{P}_1} \min_{y \in \mathcal{P}_2} \| x-y \|^2 + \sum_{y \in \mathcal{P}_2} \min_{x \in \mathcal{P}_1} \|x-y \|^2
\label{eq:chamf}
\end{equation}$$ can only describe the overall structural similarity between the two sets, without taking into account the high-frequency details of each point cloud. Figure [2](#fig:smoothcd){reference-type="ref" reference="fig:smoothcd"} illustrates such a case. Similarly, the point-to-surface distance between the points of a set $\mathcal{P}$ and a surface $\mathcal{M}$, as well as the Hausdorff distance, preserve only the global appearance of the object rather than its salient points. Several 3D perceptual metric studies [@lee2005mesh; @lavoue2006perceptually; @lavoue2009local; @zhang2019feature] have pointed out that features such as the curvature and roughness of a 3D model are highly correlated with visual perception and should be maintained in the simplified point cloud. To train our model for the simplification task, it is essential to devise a loss function that preserves both the salient features and the structure of the point cloud.
As can be easily observed, the first term of eq. [\[eq:chamf\]](#eq:chamf){reference-type="eqref" reference="eq:chamf"} measures, in a uniform way, how well $\mathcal{P}_2$ preserves the overall structure of $\mathcal{P}_1$. To break the uniformity of this term we introduce a weighting factor $w_x$ in eq. [\[eq:modify\]](#eq:modify){reference-type="ref" reference="eq:modify"} that penalizes the distances between the two sets at highly salient points and ensures that they are preserved in the simplified point cloud. We define the modified adaptive Chamfer distance as: $$\begin{equation}
d^{Adapt}_{\mathcal{P}_1, \mathcal{P}_2} = \sum_{x \in \mathcal{P}_1} w_{\bar{\mathcal{K}}(x)} \min_{y \in \mathcal{P}_2} \| x-y \|^2 + \sum_{y \in \mathcal{P}_2} \min_{x \in \mathcal{P}_1} \|x-y \|^2 \label{eq:modify}
\end{equation}$$ where $\mathcal{P}_1$ denotes the initial point cloud, $\mathcal{P}_2$ the simplified one, and $w_{\bar{\mathcal{K}}(x)}$ a weighting factor proportional to the mean curvature $\bar{\mathcal{K}}$ at point $x$[^1]. Since we only aim to retain the salient points of $\mathcal{P}_1$, we avoid applying a similar weighting factor to the second term of eq. [\[eq:chamf\]](#eq:chamf){reference-type="eqref" reference="eq:chamf"}, to prevent the optimization process from getting trapped in local minima.
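The asymmetry of the adaptive CD, a weighted forward term and a uniform backward term, is easy to see in a dense sketch (naming and brute-force pairing are our assumptions):

```python
import numpy as np

def adaptive_chamfer(P1, P2, w):
    """Curvature-weighted Chamfer distance: w[i] scales the forward term
    for point P1[i]; the backward term stays uniform."""
    d2 = ((P1[:, None, :] - P2[None, :, :]) ** 2).sum(axis=-1)
    forward = (w * d2.min(axis=1)).sum()      # weighted  P1 -> P2 term
    backward = d2.min(axis=0).sum()           # uniform   P2 -> P1 term
    return forward + backward
```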
In addition to the adaptive CD, we use a loss term that reinforces the selection of high-curvature points of the input point cloud. To quantify the preserved salient features, we define an error that measures the average point-wise curvature distance between the two point clouds: $$\begin{equation}
\mathcal{E}_c = \left( \frac{1}{\lvert\mathcal{P}_1\rvert}\sum_{ x \in \mathcal{P}_1} \| \bar{\mathcal{K}}_1(x) - \bar{\mathcal{K}}_2(\text{NN}(x,\mathcal{P}_2)) \| ^2 \right) ^{1/2}
\label{eq:curv}
\end{equation}$$ where $\text{NN}(x,\mathcal{P}_2)$ denotes the nearest neighbour of $x$ in the set $\mathcal{P}_2$, and $\bar{\mathcal{K}}(\cdot)$ the mean curvature. We refer to this error as the Curvature Error (CE).
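Given precomputed mean curvatures for both clouds, the CE reduces to an RMS gap over nearest-neighbour matches. A minimal sketch (brute-force matching and the function name are our assumptions):

```python
import numpy as np

def curvature_error(P1, P2, K1, K2):
    """Curvature Error: RMS difference between the mean curvature of each
    input point and that of its nearest neighbour in the simplified cloud."""
    d = np.linalg.norm(P1[:, None, :] - P2[None, :, :], axis=-1)
    nn = d.argmin(axis=1)                     # NN(x, P2) for every x in P1
    return float(np.sqrt(np.mean((K1 - K2[nn]) ** 2)))
```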
We use a combination of the two aforementioned losses as the total objective to be minimized: $$\begin{equation}
\mathcal{L}(\mathcal{P}_1 ,\mathcal{P}_2) = d^{Adapt}_{\mathcal{P}_1, \mathcal{P}_2}+ \lambda \mathcal{E}_c
\end{equation}$$ The first term ensures that the selected points cover the surface of the input, while the latter encourages the selection of high-curvature points.
2110.06084/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2022-06-15T17:24:02.980Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36" etag="0OIMpQrQRG9Z2HD86iTl" version="20.0.1" type="google"><diagram id="nmO7Td-m9gXfJ0uz0pm8" name="Page-1">7L1bl5rI9/j9amat57n4ZnE+XHIGBRUFFG5+i5McBQTk9Or/0OmeSTKdSTKTTkysns60ohZQ+1C7du36+AfKXQapdqtYK4Mw/wOBguEPlP8DQRAcIeY/y5Hx/REYp8n3R6I6CZ6P/XXgkEzh80Ho+egtCcLmoze2ZZm3SfXxQb8sitBvPzrm1nXZf/y2c5l/fNbKjZ7PCP114OC7efi3tx2ToI3fH6XwD94th0kUv5wZhp5fubgvb35uoondoOw/OBcq/IFydVm27x9dBi7Ml9576Zf3DYmfefXPC6vDov2aD0x65XCH6eLI/5s2ezj5P5Sm/0fA+Pt2Oje/Pd/y8+W240sfzFdeLQ9b11sOsXF7yeen8Pywj5M2PFSuv7zezzowH2tat26fJQnNz2fRtG5ShPXzZ/wyz92qSZ4ae/+OOMkD1R3LW/tympdn7DnJc67My/rpWtAQDvCQfDpLXWbhB6/QBIm6xPzK8w2FdRsOn+0r+E8JzLoblpewrcf5Lc8foCDq/Uee1RYhnqXa/6UDGPIORt8fjT/QABR9VnD3WfOiP1v/Szjzg2f5fJOsiH8WzX5RLzYu62Raujx/7t1P5dH0ySV3i1lv3eCTQ2z5ZKjLobasnh/l4bl9fuiVbVtenp/UzzcNvSrToC4rw62j8OUtHwqyKItFkaoyKdqnbsLZ+XfuYw56h/+Bz3fGzc/hv57Pv8vb65Yri1nyszotzYZu0/Zh036t0P/JBP6uC1+WNYa8majJz4p6vqE2cfP97O3cIvoag3z2je5fwnlFFt8m73Lu6XP+5M7iJAjC4j+KgPgKEXxb/z839lc3fXNrbt6GdeG2s1XciqD5m1D/vM7/ImcKmPQPNOnxY0n/VAunH8vCqQe18Je4Elj4j7RwCvv5Fv5n4P8YFv7BpObBLPwLMyZg4W9h4fBLauErTBx9MxNHH8vEkX9n4v/Q///CxD9p7YeYOAZM/CeYOEHfwSiOP5aJY486ij8PJ1+dCAU2/xY2j8DwHdg8+fW68Fs6gUdNxyEUcAL34ARw9A6cAP3gTuBRM3YoBJzAPTgBGv/5TgCFH9sJoA+a1PuzTOVXqYs4U37o+3+8UhfhUTiGQ1+rBv9cF0Hg0Ltn+7unygicBsuob+Chn43griojcPqh1k2fNfvxHDABgXXTH2nSd1QZAT3Wuin0oCEWAYFp1U+w8HuojIAea9n0vaI/ooWDZdOfYOF3URkBPdayKfQvl01/+coICOxJ+Bkmfg+VEdBj7VGAHnVRFAKLovdg83dRGQE9+KIo9KjpOBgsit6FE7iHygj4wRdF4UfN2MEIcAL34ATuoTLi5WwP6wQeNKmHU98aCXxZ2t+3MuKrhPrPdQ44Tr2jn5OXd1XoQH1+bQw43H/vcKmvGdF/cKED9VD+9FmzH9Gffn4xDJj09zfp+yl0wKnPL4/9lhaOPqqFf34tDFj4m1n4HRQ64NTnV8d+Swv/GpLWb2nhn18KAxb+ZhZ+D4UOOPX50vPf0sS/piT82/r/lyh0wKnP150DE387E7+DQgec/nwp+m9p4vSDjuL0N6xrAZt/M5u/h0IHnEa+Xhd+RydAP2o6jv6GdS3gBN7OCdxBoQP+sgT0sE7gUTN29LdS8YETeBMncAeFDjj9DWCw39IJPGpSD/tWItyXpf2mCIgAD6kA++MVBASFeCjxnb4aA0Pod88fuq
vKCAykYN/CQ2Nfs+vhB3tk7LFSrti/TLn++g4YpFx/pEnfUWUE/lgZV+xRM644KGf8CRZ+D5UR+GNNovBHTafioLrxJ1j4XVRG4I9V3oj/y2TpL18ZgYPyxp9h4vdQGYE/Vn0j/qipUPxbabjA5t/C5u+iMgL/BhzIb+kEHjUdh38D+wM4gbdzAvdQGUF8wybg39IJPGrGjgA1knfhBO6hMoJAHtsJEL9/Ui/YjJRHGZx0tKYIEvZSv03/h75WGEHkTyUJs41/pADE9Va+vPC/5snemfkNc/cOT53/8vr8KHr/l3tpa7629829vPKJas0Caz9Wo48LH55V5BWtcfMkmmXP+7OAl9oLdhF/4rs58/zCZdaOJx19TTHrpWPDFy+1XOKzH0PxPy/zb/UUr6jYZ0ss4BfDelmReyEtfqAKLwPwR9UV0H+39NcF/prTBwJ/M4GjL89/lsCR18I7IPDvKPCPS6hQmPq5Av+GqC65uFH4QY+qrhfmu7JJ2qRcevZl9GXz5QXW9bPoqT9fJBSEZ/f2JP5PZfI0mLNuU82j3/z0nAyLDNinEzIvR6GXI0tTbuvO2vb+KSJWRTQrV2Kx230PraWoZOafzcGMBTOaHwn2/D9W4pinv4N6WwvzgzWL5LwOsysd0iJTXnXOJW8cnWEOykXMi8AWN747zOOoqNGZ4x3luut8jYOKm3HKrPx45Cs1a6zmILLsYb1OmKiMysPV5g3zyLJ7k8k4Lk91lVWEnI2aSKnWbBYnWXE7hySNT8XmD4Sdpq7odrvNxVikx9p8Nly4CUNmQ2KZrWJkaRGmqT3OF8hqdVowjJkxDKowB1PSsLwlw6JiK57w54/jhWpjnCsf589WZHGlnfngHBeKyHmIIT/TdInZsdJFoy++o+g7ZTt6eTOE+EkSWCNXzD7d1dOqTo8ytC7DtqFSV8rp7ojpMKNxhqT3W65nEjkQc7PKmkHx7URlt3oRQ0yzruczxZ62FmysLyDFhQypcvV0PirhsHGaqE2mF+sD1widjRq0vejt3Le7+Wqpi98fZXo4i/16WaMW0WMK7f3swBU9NSajyDOZzql7Rp7vnsgYsekZ2RM40zUPa71531I3NzT/xvN1sMi4z49jLwfZnreiiEuW62CHuTNY/Hac8ErXoIjhj/stu5ZIxfE6Qg78FqXV44mTXdKlXTv2mVV9Ucbd9aqFOkvI0ZaRT8N19OaTsZei0BjuxOJZVcyBoXc+lYxu68yB0tn1iuGaiBddZkUrzF5jNseeJTBmFqoYcZg6Xz8rzJdSwvItK/nLkelGRsQxnsHYrGJZI+KJhsF9RoIis4ukOlqRJctqjBIyI2EzyXIBOzESxoERsAOz3iuWoBxzRbDWMs0xhMPIjo4Q+mbQhdhUmjUjriNjHZ8tJrSYxNIrS9+ddRpmxCBCcFanWd5l1m4c4IyMCKah2KJt5jYXuBLhMTDJ1GRftbFY9YpD8Q7FSRi7JriJWrcUf8L4tOdmDWSlE8Uo61KISikrj5V9SUpk2MvXOLK4wzqir+KYmVZJVLrhxltr8VhXubyql2xD69Jub6m6dow3bVw482ssITmEYEBqoc9PbvE+iBFcpAm+Kw9nffabbNRJTCUx1+1W803M7pVBFXp3FQVGui7YVmJvtmD3Jh6LKuuo/VilqSduEY5teNmWNzFNCu1NI31lax/CgZsi34gPRZru5DBk7Ju09/kbJpDJdFtfMHuDWWQ6nuPqLBIhE1Aaj7EFZnZDeo4WpeNkKthhjq6cdUeMDh0TxewWEfpSge39ts9RTpQkZxRoc1WVxtnOhkFFBeGiGJTuDMUuOe45dZWZg3K67ksoxg9cmFFXm4L0c8WICVdnR3SlhLpJJeKoehd17r8VT+XZOl6LUrPWJNPRyviQovn2Zla4Ae1yM1QHPCsKyUeNId+tKaqfxjZkrlRST6dObDENvahhQg5oGJ4xTh7WZ3WWsYoHZ8miZoVPz0xHWfL8PHoyNDbcLZ
lgMZrtgJ19jdhPTBrNblQkoxPHeoowMa0vrq5ZeU0nK2+uGXndHE2E1qv+METNlZtKZYiKC4f3usxuQ5FVdC3ejSJUntbxbCD+MSsvsbq57B15TbBBqaCHXVSpXN6o+z46ixJ2kGMnFGLh1OzXK0HLLHdvO8I5K9fJ5qoOZR1mB8g4H1aSWGmGP6iqgDb5Jto0Rj7oJ2HyXQeru0ikTP/QKAc3Vs8CX2aT0EEHDl8dspI4eFpmZK2f48S+NHPnoFkr0czDzLkeKyj2DlCwEqE8zE9UpIxWqg7axRx3hjJp7Xrk61XVxMcxIhW1aap5qBaLleNn8Dgh+xVVG/hhl8+mzB5IPPStij35oqFYUmyGs0GbhF3tk8a3wuF4ylcHi6kMZE2a1XRI1DXf1M4Yoety/nhtinpj7VfWNTduV2RlEe51uCaBxdJWtDG38Fo1x9brro163Ye1QRzOZszbo5SYLrevTtKFcxwILifL2+x9aK1Wt+Mld09W1VnrIySQx91NCa7LOCIOMF8YSeBWqHyL1oRaV86tQojFIxomvYoJ51Sj9KGYbfRK0kEHn4ZbKUG6XfArzBYiOODPvkfPUQ3CbgoFUeIUlnTIJYZIlirfSwdJJPYk1O2q88G2heFmoAf3ZPCELVXZ5QJj/XwqH9agcTrWmtg45W6My2gD75vLFJ40KXQtPO9VyW71hNaOJ6F1DpiT56E5rdFL7a8b94rktRU2NkHujjUyQB4fIarhqO3VRbYIQpIii6SFiwfRDmG6eRBl6esJMQY0k8Vom+rklJbudjuMZMGz2zrD5UJz/ZWDOFN32lIjKalpG4T5MpQSt8old+jG8LcSEZA1Ry7Ou93Ty7C4SeLG2NQOzjB0kRJaaEyjtdtr9HnpTy4sciI/ZTRNe/DUGTfalQlvZ1WUWEDtWWgpGW2q+erWJ6w9t9P8qDlj+TISdsM+EmHG7hnU5NveCblAUbmIuPKQvVhwfz6xW6rR5jezECluzFGS5lgjLvGbtSGCrqst0RM6P29aL/T8sc0xbbB1KeZd4XxZYdq6tDlE3tqhFVPQ4OGFfBlRrqbWao/o6ZRoa4Uw02nUVEVybJwY43rTlOM1ja9qWyK31CACezAM4aSZ10H2JLQ55ANzkjVql2JQl2xYhROQ8hh6sOpBnrlpgpO9veBzgFlEje/Mjviar6JKQVakWXCjMt/23AMiNpvpvjMT56BaqmouelbM/9wgMLbBzoDSDiWWOZYY8vP4OMrplkZK2e8DdF+MF+82OZPtesjkxyG0RoYTjZA4GlzIuZ9FGyOvhXege++2NFCoK6Tpr09BRa8iwdFRneWFVdFNEuV0krzCfKMh7XO9o0X9PIcQyo5nJ3OHpgm2OvvUZvJCo8KH0+VMVQXezpN90Zr/eV6G4Baf2b7Elf3aiFFJzjBsJ9cClcWBcqLj/U3OzHOY1qY57r1cJDG83Z3cGllzGJamQ6hVg1fm3uHcrOL9VjbQ/poeymDtaZU7Eqc1T9tdoS9OaWv0a7c1jP3xIM6BrQzRcoHiOmGlzvFqmdapl8Jx1sDd5vre/9fhfKFseoaDUIFDwzqv8c11ezqGwRmG2e4ET4GyYXWoL6Eq9TaFTmHbotZImCYyxDIcQ+q228WOt/O9z3+oE1rShHY6cTR0IvZnb1exlH0exUulekln3moY7hLkmgZNCPOkiQXqHj0WR+S8TuHFA8I5one5BR1v+dDHp2hRa6pew/ZYczDa1HVe19LsSawbKcPUrpOJ+awD0RaY4tWpR3h8GnY0XKOnGzSRZAst3bI/DvN0nPRscl8E7iJjlDyi2PLSOs97KYlodx40zw6bUfK21GHewuw2XpoOGrJFKZJcFNJoUgkKpX0HSzI2iqksnON5UsXa4T5Fqd0xhYWzugTJoWZTzjQrq8jQs0VfKc+4OvTxfCFzCdUaI4h39ISO4c7BM6MmqVV4ORnwzpnGOXIgwnzxBG1BS/ZtMmqNr0mbltTr6tzlVD
5tl+B8ufS8pg/h+cR78HxLIu1T8pQtDg6vhzB0dosCdvuWNG58QM+BV9TuFolrpxwPRhS7LdJf/NcSydI8qYV8Ssuqu6LHIpvDcPZaTIsJ3GTMOWH5+XgUpq0ywmuF5di4dzif844RU+pmjEjCYRQ0msC7Mp2gJSNwmJYJi9Yly+SHKtVzeMO9NaIx6wRmGrbje/Z2EGw81gcx1gqlY8v92YWYev1+AjBPZcUkzRIVpgP3rId2BIUGPTaVd9DZvtBZCr+Ojbqi/eM591PDIAWFuw3LTXsISWO7pzBTjJ3V6Iawdcl7Ij5hwfwytN2YlyVnYVvjWevnySO3dqx45WTwhuGeZpsKw+isFDHi8oSZZ5HMHB1K/QfPBZtdz0efn3NzG7KrM8Jfr2PMy0/29P/lJU5/fp3PSpthl1OzyvI/rJxPyf75PlYqGe2j5+8fioYO5avloYlsRudo3Xwk7gIWitHbMhtmVr4QO/7crpiY88fsnDHzw2q3vCSbrrW3AzHeGPxsyAh0s4/7zr6Yt1CC++dLpUK+wrwT2zqnfazIcetJ+LQtInqeRGcesspnpz1tL3AeSEKnJOzsCuLWl/f17rCyDYuJdIQenQODqClz0zhliaPnqbTCD6Mj2YSZ7aW/Wl1lTgolrryHfL7sVDRAgxFHtRHv/IvfaUaGbw9UryXUqCXw8vnWR/NbIImYesQnZfyzde6vM+wla7LRVTVfU+UhWORf9pftRUw81IJ2ByX58/0v/16u5mJh7tGe7zOGApkh1JG++SMeOxI9jz740l+dc4orJ6GG+d46JcXWH7azk54+T5hSjjnHwdlLeevMV+ke6dsu9fsdR+ehlF+2OVuFktUczfn6nuSXd94/t2V/sS3hy21Zp03uZ5vOO8K59zQD+0iD+P793aWV5SPWOEuxXVoM+L9am8/6Wg//KeG+e+rrfCXs52uY3/1eN6QnXVne1YVn2H8yEN819zuLeTa05x/lyRCXBA+TvjznPZ1Lpn4xhE0k5IJu7THGPjgJqvpDLIqNNXtBds/Diumtg9M8sb1ayeCYt4vDI/6eMK/5+b03SVeceRDKLEB3uy6AmLxQrooiONPO0ILFXUE0M/bJU9JheT91O8NQKMf+4k+WY0ex85bsi422WdjGjteTrI2L/a5HrzScFiW29m7rxfke1nVLid7lSJJ7N0P60O1CMsa8nXmqlGIyCSLkrybut343kuX2dq2V474m3f3eSTnbSlqL2rVqeG4btXL4JWC97SLovEN7X0zfj9lPmRaCCrcp5ss8S+1uaEv0DalN6uqsCjoh1ZbLjHoWU7f8tDN8fcyamxZ7e/ZcbL15SuGfuw1U13xbq0d9ZSzZFmQV6+YgpGo5VLa93Epxdmk9uhTYVeyZaR6rq8i74DAajWd16SqZH5Z5O6/czgaKu/JAnSWqD23ZQWNknk0e+xaC7Q1vuhupJyp4x59XVWI5W3P00ZuaJFLkDsuUCdunKjER+S21lVqBYJPOSny92qwZSJHMzX6AML69SU7ZTRtbIk9xsZr8Aepx5MxjzbFe+qQpDHyexA8UDeWGK6G2yJa040kZ4R+5rDsKnD5ksisOZ0iZRZ3tGL8wmPN0TDplWYVlA/h6xPhYum6LTbIZd1OXrJ19fbzAm6G6SfuzqV+pwr/iiUCzTR9q08aXd3ToJUTHRkt3MYU2wvQOzigRb4XY3jmFzdkinu+EfVulu52uzHPN0fevbG87SJJYqKrSxpkIEblYs2s+Ia6QXC2JgHrT+/1mNVEhN43+KhLLrb2trq6B+YW2uUi3JTlmJH6BU9tkGSrDTizfq25YV5QQhf57PQl3fmiTIdz27ritDDE0V/ZFPTRQEc2jOikun8iV0ESQUrWKbLM0ogSFeF3hmQ3HpJ5AIt4Y7VXbTBovOgO/oXxKb3r2GjJqZHVeKR9sjpAocjGVWReMp8h326uG7EonTjWOmeVZ8HibQxORKHtLsDi6whkhu1ZYE8sbAo
+seQRrtUxRvfWBLdGTbODXfeBjdB+szdbCxERWM6fZiBvey0szPBInSpU087TbeiN2UbAtvpMLdSCX7CE5d9OwDPAwyWC8sBlieMz2crLF5IqLXDkqYSGnHSlpT/tVsB8DBt3cysk/bgXvFLjDTRsdztJ3Rl2iUnGaGzqdbJQ/ULuKq5t5HkZXjTAbyRhwzVgHCMXvU/mC1iRDSrmlxUlobRtXYfopt06BszuWktbUfou615D2KDu+bVaImsF+LldWsoEIhwiO9BKUtntlu8R5k9pt0IDqDvQpWaK09Y4NU4S5LaGUd6KXgArptz0bZ0hoJUeCtyP7lpuSSTkNDuWEKyKH1U2r7M3lcJWdCfaTE6pbwzWzcmPaHnm+HzZ5HNHi5brY8Yau/AMhOT0stwF9m4cHMZvnaEGts5NsMFwgZ7EmjHnbr3iKhyFqg6JZCslMP17yJVm9XtOCpAnOrpH1nUhd6XlMxbMplVj0ABU1btvmpOoWhZuJwjvbA/Q0YdYkyxqPQurc2Op6SPhFgfzkomzGec4++5JY72GsbNNUYiDd62jRUGmvQy/ZlYGODXul1WW5oMWgPNBdRetyuOZMdjStFdt0PXLc9seMGQhdWot5uDi0AT7c5rmMoAUnZ3A3FnVRuYsxHrb2PG83UmWKD4dil8+SJDZY4hlTH20HM3BMVN0W7JVi4BCPCMqJDTqVOxxhtqzgeju+l4RtdNaErovjRmdrX7bP+8MSw5uMgqUmhOLqDTkz+vEgQGN68dfRbrGGSN8X7Nr1vBpfoS4hR2ssuR75+Xblc0WzeE6csFaLAvkGF3wgoyhnQTeYvNLOtbPWzGpUi2yZxnWnZQp3Pe+tneZEQY3ME69oYqltHS1z5pSR2N240pNunv1MXq9qMlm4STSggS8xhh4cDjYkqeMtn27azVpG630kNnRTstK2HLgyIqIgN5x+ndKF1ocKEtXsgV+lzLpnVlDpFpkVXuheZf1wLFf8ujtMnbstpmnXzCN+pQlM70l8mBeWXO/bLXNaJuAYs8p2a5PJdEe9bat4FURXZpnVeDa932ZIPm5Xq51UIEqyTPGtOKdv/A1uLJ/VYcK67g+7o7kkSGbnPAewa3F9OsZnv6/6cxhs0Bib/c98Jx3X4uvoIqR+YBv65QKd7dGkpnAlbbChxObZ+1nB/REaq/0oU+nyEcaJC2Mn5luIwEtv0+/jLWt6CYNwWDqGmeoo16YMe36OtxjXyC1lsUicgRdXoXNaJ9LBld/atm5MR7cxbI1QtOOTj9ZwE7peTd7Ck1RBi6uxyfdpxMHSaZnlGo5v72MmYqS1Mc8p23mYkPF2rM83yU52DNbR88RJSEXx7JhyyBQKuhXVEh7kyMb3vVtejLImBHO7ZOJzKo7YG+2gEt9fbOmKnyPINg6nsO7ODrZFj9dWDzE44TJjRZVH5mrTjW0pCL5xzSt8OeB8zPEapm/UjUIUhSLn1LhaLSsy12WImCDnGGuthyjTtuy3wjyNCbiRPzj0ZsVuy8CJjqv8Fi/hASXU/DQcl/WxOQK8ns+DUAxQSNxQ194Zjm6052qZ9++RXp02va5xTlQ6fYtvO0iO+VpNRWl+maRRT3ZyvNCh87KcUFcEScqVe2XXe9/uOa1P4bi0hctpsG86YazXi7e0pRQWKdfgi0heUkfU7UTMnmUeRiU+prY3OmXIbS/xDqqfqNSutesYnWYDEHdJngrRJEXsOPvM5eJ16JiJCt2e9M0ys0W6HDFJht1SjOqoPbyzJxoa8WDtbDG7ZCH8pJiJ0I84jxDBYRc2axVvfSb0iWLHR+qSTThvlyBzJy8hnudNeOtFG0l3vCoSZUcfeEZoDqOCHal1l5+4ND2FUK5JdLW37Wl9rCPYJ0p1LLbYLqpQyS35qNhVa6MUqd6JesTMuH4z8Ca7XS/+9hRHdlTgq31hhGdU7MRaJh2Mlyx26y05DI/HnvKxOxgL5T21ZXel1wkipe3bjVLSJbI/CV3iql
eSWLloPFmWdUGcqYBwSECO5HEtSTqvCmyJVK6hHrjAXaYZgyiw1OrGFDu6XcK0YbNhUqzrApxSL0dI0S2h353WXJhw6j5DquM6LXXtUgz4Lb7e6jWKTqKTdkWHYOfGqPsdSnLeREgasRX9fWInUROfBDhvTDvFfWaO+QbEhn2z2FnOiKxItmu2zoqlm46/XsXAqIz8ENa74qrMsXtTtFGX78zZxOSBn8NvXV6C19ZegtL5n3/YyonWHyWJvGXGmmH3RAZd8flU8qkonXBbrFFIW8KaZIcF1PGiaHyojx2DTLeKSc58veRxxylBkB6Sls4V1qUAR5e1L7JGFK6ZXXIyCH/DJNZW7NCI0bZszuUcwiecdxxygTARt+lRXlFJF0FaTrAuKyQ3y3lQbz3WVti939lH/dgx5xKONQXduLHdL2Ya3ZbJwkUZ0IlM1zm0ZSdlVhtrfm2rR5qo7Bc7ZiSes66nJYJEA6nsVwcN74jGFFWvpNyhgRhFUueIfsizKRzgsB9o8YStV0bJD3G9vgi+PcY6w+csykoGLBVH47raYpAktwmtbxe964vLFhPwbb7WKGufwNPtfIiUFEa4TUr5zCoVpGU5VwyW/mEhYWBXEjR1WonyElFttFUebuz5cCDqSrFYngShTS8mAWkI1wHHBEi3DY8Sa2a1qwRk0bFCWBRajpclrguD7BTf8fk4wKYVZZ6XtMG25Up2yDcrAwusg0cu/kcS+qekkNOjWdyP8YWzOQZqte3imicyYjLtPB24PpwwIRussVh3snwded/piApfZ4VAx5GvQWM27GMi71RonSKz/7755kELz6x86ojNcWfBppNZ1mpHpvA8HC5DG3emhaJo8OB6ltl9as5jeMXsJOS4yToGPh4u56bS19JO3KqLqxiYrF93jFAeYzqNsK4xD0nJadf1jnIL1VT7vB7OmnJGSZ6NKMZgw9yOOmOlzVH6fDZTPdxwWLbmJkqbXF15alH1/Y6EMXN/o1gUy6XFJSVW6FN4eEFRs4+V3TxQePXBP1LpdrtDUEFVBrcdD5iG7E5QpIuLq6+2xSxxFjVkqZ9WEanWCTQxjJRwDXwKGS658OiRwczFMzd6XnI+d7MGMmgoUb2xyYnAt8YSF/XLFZ3ZuDt0S/zCQe4t7cN867VtKfRi4WCjhnlieqblSjGrw1ivy0vGk+fMtOQigXe5p4h8dqQh7JgbddIuYWLK4ddotrQLVGrSPMaq4bLkWUY+HVqWWkbbJb+LyGzorvmAwO1m8sKyyJpe67JTSwUXnrckvpQnZFCMJYNc6pdcF9p1VUJCuoTp1TXScDrdOZLsVDlT9a0b2GNaWxsLF67FOlgvjgXXd3w8tgfr0IhH2MWHIrVmSZaizaHkLfGFVvOZIW3Pp2gZsaqItk7LqkSgopF83m1bhuntm+pvIy2iHX0yrt7qeGaIQjSzgD5FGkSuGtNZRGixsY1uxFxlVutdvNXbaILF2/pAV45xvtWmdotw+WwsEbyluV3QSEMnyevaTtnQawyWnCbyfR3Gccl9TFMQ8jkyzLOy2zkyBDf1RSbYznO1YY+SClEtAAyWP0Ii2zjmta5tE4HYPGa8/Xjko+tmvEIEX1cqi07RHh69aBdXWbDbore5f5XWs0arhQZpg47aDUmD0D2y7411RQWs4uwuKLHMLdvz+Qx3S7B1ZEICOo2lxvW2XcsCw9S9nN+YaxjZwkG/2Ey8YYUKy2t1z9TWbjUb38DYxNybNBKHSXGmb1rJ8mihUVdz0bqk6hdnEFYeXx9sx2eyUx+6Wb3HDthS5ALVPYlSDF51scTzKI859OAkDHc6iZq/idBjsCq4Za5hHKgyS07R1Y2twxFaE1LG84I91n4VzVYZSck8o5gYiJxYh9cRi9pYR4VYPuhXjre5KgiqnaAU4SDFD9uETIfMWTOjvwikIUfp1Odb8bjU23D9Uy3KksS/BTv4xjDMVnmfLaMvlxwVtWy/d5u83bt6zDCDvaLTJS
0mLLMGNKGmZpmrrlVqWQlamhlvkuRwQn4hLBSFurKuxVAc5utz8n3m5Khy1SauXzI3S+VA7VGOnA7YdhE+FvJe0KH7CglIV659dGOfNunmlpynqhglaitNDbKZjtKlr4r1VtbKeqpuFbrRj8HxoJ+8bIORPO2dtlRQVSi+Gf0uuXKTqlbh4TiS+2FLXxG7POAyFogjwReX2iuP+Kbu5gnXWTa6G7dacgBnVx2Oa6wTy63zPhNy5unnB0a3jNnL2uH8Ryx4lFpPYhwxnIkxkV3qmMDtaYjnVxxbtXvhwG14O+aXmbKe2EQ6Sr5K3zjem+OwzX4bu+bqlJl+EDP6IcJHqFxhyjz+iKVHHANYqzkngU79qA7Wap5V7aqjvcNH+pyWmH/mT+dcO7ckaYjYACG73pq7p5S4AOIZVG+UkpEVz1m5w4bTE5IjOOzI+Iv7THtoPm9gFaMfK9IeSrGh2IpDnUVC7fh1puxVy2EKYZNuj/rk7qzbVcptdQjHiybsW/Jgb8hIO1PcJPicP3tsW7ahfOt6TKIpErxWer1PC3y+rIbddwaSEb2cCmiemfD7BIHPVeW0dIoYORep9ErNak5ye+wlQ9n7oySVoxS1GII2+8qUMFVYwpH9aZ7Tryek2q9IZEkhims0XS9GborLYN0VrTVsbQRNtfaianDj9pt9JslysXJkPknsWAnQVsYkiLVWDFK4jmYcQoXMhnnocA/JPCvAPKbI0iWEx6z14kB7qICDE3LIsUqQRttuBo4KY15T7LjXB8/R8gLhr5fDkQkm5UjFVxnCu3kmT2dPFwTBu6LUU/JCh8gNwwyRONc0HOcMtZ/4tUvysg61HMOJpUTNisMn+w0UeDyFMwe1PxRq1WkC33fkcGlS0zQYs705Z5lRdtEIXXzXiBOFzRk2Gnq4PF8s0Q8j0mfwmwEZ0tIxsym+t8re7nnMJQOMkI7kGYljGE+V8JBaEFrycMS6Nhv69iFl3DngK4RKv9x46XqR3PX6SFyh/VCOiaRQBnetDUVmjjtU1w1oYGP2JEXlBkevOJOJJDMJJpY9nbhkLpqywqt9xF92OzX2RYqG7JAiw+6EQTs+dLe9FqdzhMMtH8jd9kpo5oVioI0Gd/pWW/yJcGSPIQOtXBPzE71pzssYKFu7zCKk0wo9uigs2SyziWAvUpqTsmuiYal5FPJoK/k7wRk1o2fyUS0RRoN6KqO1rJFR4khVs+IWDHk+H/x+8WfotfIFH91LdufBlextLG+qb12p6jq1iRyZky6Ke4gYRvQmzY+oXINjMrUYzz2gOq+v+UgLBG9Z+Mz6fU+1Cnzac1mjsZXBnkQIZ+QGM06CsGWOl93apJ1rHsMDZiyKPd+WuHiTMF2iRXrLYtwqgQmn2iIkCgdFXt8uede4Z3u5t5hKxVKGoO1au3p0yzX0PPimGsNf9izFcM2UranB3xcmb1zsaNiHtHd0L2udMWz+pLmMxnScYYTzTYRzSH9dxiXZcZdxqyN7F60Qf55vhx3aD7uW1bU42q2sJbFB656VKQSyZE10DUL4coMWV8mwtWoqeC7hCJpY6fR4S1eHDr/qaFjWdpUQpJMyxFKezCZS7C5u1NytoETTpBWCY7K31aF9dZSPbHvb7G1kVvue4cXAbWw85GqJ1XoP85wNRbH+NM06GrH7W2NLsLE7YXKVttu626CQZHDcCqm8MdnfYKF0uHnuZcbzVPGSq0eHNJbzJqh7sbB8qTRgs4Sc/TjrkCgUsmOEyo3LxmvWdU6jYfvMbjSZptbXSya4DS+M7M2nDzomH+qBom8WdmZGW8d2DnydBIRwDTJaVyE0ursMPeW5MIef52rnmFq5hzeFaIcrC1qEd4wN/6zXuZ7dDPESaZKe2/vCZUZHYiFPTtcHUdxfgkjyiKnj4ml7bZNJ4WjE6jiTgNOLeAw7D0JOi96gHRoLPOtiEtpNW5Md2CUYK2KyhKFzfkC3p22vlmy1oxlsySfuolJovfy8chIXyy
Y2bzmxQpeJNjHqUTlpmwObj5o1R3jwbgPbYZypmybBzePK47erauWkRjztnHGwkDW71C0l6pLV0eie7UO+uLm5Z8MnfXsJL1MS7IyL1F+ic80a/UGNeFfrIUtyMF8/ke12IE7WZuUJV9RBsqecSeNh+0Ubz5IerDpo2lIbDDakAzaciAJrScSPo2kZG9088pobNk8ae1/09NUcSIl4h+LUyqWhJDyfJ3l2MkG35XtMlj00KM/4SWKZ9dG0LxwvbTXMjj1evk4lxJxFeym0QQ7OqdoSBX/WpcrBNpxj8RHcpafZGHt+PcVpJuCuT1idJLR7J40TiZS2TVVimSCc/OS6lGvYdiGG5aSkzkXd7W0tbMzL2qSWaxvO/A312jlUD3a7RQF37NZZRq2RjWanfhCWCfXqth7KmrKFDUGJmloz4XK7fLBORYhYbRMzO2aKGG1VLzgyo6RvE+Sa7605WD1hk6wObTKH9Rje69y2d5MtoxJwbl2WAXfWvlw35W1Q8lF+GeHcDTUdWfKKF/lk15E0SYo/RacLvt1nFxmliOM5hQKJts90dKbTAA0sHm5kYbulQ3ekky0bGw7aHFwsqg3nwi7T34rxbLFqsSIw8VtoBLpE4tpobOy9ttWZZXYtRoajyfvmGhC1ulrbu4IaoU7wJpy1GZrp4VqOjqbqZ9YgD0csKObZrDjVZ/x8EvttRYTFkp4YUabYFYcNneC8QDKpdpJ2AedeNlBXccP+wpJGWQbSUdGkQedx8bqX98W4rLJRniH1p2m3juIyU/cU5UrraZ+5wmGrqkbGXhU+sEUMjibJHrWi6cozm5UpIQX4tp/nTlqk1ae+kvD5mNBRVapy/lDwR+fsDjE+yYNXBJXOb0wlP0iNxEBiSskBq7Fbm9PYqJ/mqB2SNGX2FUMb+jeoXU8JywWRz/r+lGwIZNkJIxod7W6CgA4jStDKwbRLbSocT5+nh+4B7k87auI5uzdyNo5E9tRowbXY5YO/xOTdvt80i4It9hlPITUtLerraWVwlFJkmkfWLKPtWS46jivblvgVwS7v3zeZFqVSTy8FzPpFO1kpHEbp7mLbew+W5N6Ndpus1w+rE4bhuXHyxtJasnKzHR5upae65XjEtKuJSupwuVIMNsQsWil06S7LibccwXokS7fnZb1YjJbioOA24OtFLRR7gq7JYSmI7HmOEbio7FtFu/pSWR78aKx2bhTl6jgKcASzp/NxciJZLAV9MsXhmDaiyTBmOiowfUKlrB49FYPoCxEGvOJmm1VRyrEGo5kNW5xx6krxsiIcNd5TE4RZ52XlCMNlmZGl+MiUw7ErJsi52KOkTCdoxzaVqHe3ctSEU8JgfV+drox8tFyJoYUwaiWFQA/JEo+aNkxgzKDN4bfa+9Bl9uOzYA95dcjTyJAkkyoSRod3GjSim7PPqvrR4JK1xBIbluXniCjo5WmkVfmk6YGLsSLVkgG6YclcO9XMVbNcbHVgdS7r6c1Nsqt4m8HnPN3rfd5szWUxh4XmwHFZgMN8Ii1ynzTM/MKvGcRzmfVWY6GWLT0xP8Cjdj6s8b3cddTNXPFJUKB6dKOgYdvaA/6+UOB4oJ4LIJZs5ppp+5C8oSlbCMRlSLSwL6+8NzvKVBqr8/lk4OPlymK238TnXPbwQ5FAVXlC8uyKMSNMDAEcXbm1bCLt9RDAmCWHInLdRVrDDl55ldtsm1G8W+RsWkKSglHsJN7Y8FxxIWGldFahrcYbecZGm+ggYYFBN91yy9mseWHFTxSKBFc1L2FdvBGGNMKlTJ4TPg2Jjbk/XGtXPp28Q73RaNllD0hvtQ5cW+auqMiEvMQoeuShzjl329uZiPCO3ckdjJ/TxW0oAprpUlq6Bn7O7Hn+znDLNCoxlRqmt423wk5TsN0sQULXWLReZbdlkcoQmIvfSgc1o4J4QYaIV/3AQ02cx41MW7wLN5dRWNstcz2U9BwUTrv0Zg59us4N3OwM3yadY7jIwEp2Gw+V+E
m6QIkkdE3Ou0dkF9mMgrlNhGSlF+qZS2j1qJqJQEXkHE1bE93NMeJstKEg0KvBUFcrPRxHbtNeB35vjsglSE72WKRceUncUpaOp6tFMcsy+MnPI1pw8XK/wZmWphCo97g1GqAupV74ECdPU9O2NjudUs7YNc6KwTAtV8TSWSZP4RDmennzjrfsQC9px836ZBqmZR/tkKj3YzCNvVzw7nZ96VhNafc5Wquu4WmdX+t2nTOJg1mEwu8X8XZDvGTMsFNYeJRXMB2z03wUnTb7dHOIJH6pG2WLNKiv5z6SjnIdBVzNF8mRREq2P0KiG7kSy27zKvO4o1UrVysieE+BNp5CV+U4uu7t5ilKyuCsZDjEyI97lT30N352OKbvU5Bx6lk+lS/4ekml1d088TltFldlNJgjh/QSO1/cDjLGWSn5McCSOa68aaInGZI5RhsxWvMhFVtW3m9UpihFkT1a68muZhFdWxofUTXSNvuGdtheQ0+M2Cnb99kGNVFJbF+rjJSp0UZdDDG6KpuNt7qUnVicQzZUhku1X0LZLToRMUNbT1neftziBLK49cVE1Ou57TFhG89T42yi6hzeWra2LlZrz7HFcp7+KWm7ErDr5uJzTbVJh/WeQOQxZ46MNwVyqkzS4TAxV1+bcidduVu/Py6OIcLTKBCm5BCaw2HylO3iwXcba/HpEO5F2LJSRYfhnly0n3h/U4WepIhUhJ2Wtj50igRiUFDeZQ425i9mgxRJsZ1lGhSrlcnWTV+XhOKbUFXNE/w62Hcn/sSUwXXst6LpzMPZVajgW7G56iFuSRI1kU2Z5vpmmSser7rR0yckW9JzzjLJgq7we6fmRMa6iJROvm3ildCStB3F0ipL93F+aDrb3mayg4uCppBhSuooC3W7tFrv/czEU1vphYSHpNW6apBTn+m9vUsgAlWsRqx1zWQv+kCmCdRrm5TmEQlz90N1KEi/TvVgGZidacmJ4gkuL4kN75guqxjcU3HG4RgpeSypxTVNT0f1fPKZ3t3Ps+u1RPjVYhJL3a1s2WnkqbvZecsaSx7jmjVHTuT9w4kvSVcL9jLecqyziB8+XKlItyGq8AYhIuaZKOF02rkXasgtdjzV0Jun2lJ2tTdxoc5WURQt+z2X3/f7LNmyDsIPtta+34J7fN6oin2fDbHIy47jF7od9vfvFCBf2RBLvtkO6P+8ARrFPrsBGlt+cc5t2ufHv9R+6O+x/5n8RNyvfIXED93//A3f93NH+5/jtq2aJ21bnJofFO+qZHA9d3znl0v2oorLdplbIhC8xHULPU58erRYm/jUZ77b/g+GMIzE8f+jCej/SAR6l1bRwl/Jkzn0fpJGUjRh+//Nn8GXjf3wOwp+/xB+ByPvHy2v/f/fSTfgr3AF2LuX7/v4Id7gG74Jxr/V+cjWs9gXyNCr5vSB/f5lWcuz89zl8svjD6gD2NM7W/dZwyj6O/U0/WlP0++wv4MIkFcMEabeqqtfg8x9o+PFv+R482e2zq/nez/Viu9ib+Q7BKE/+PlIKf40qp+Gpvjv8JkvacSs4ThXvwCXfnG9wL/TGD13250pwjd8MycYpN94kMaxdxD5gc+gvqgrKPzu5Ut1f4y6vPblnYBh9GYMI4T8O5f0x/qHXzOIBxAjADECECMAMQIQIwAxAhAjADECECMAMQIQIwAxAhAjADECECMAMQIQIwAxAhAjADECECMAMQIQIwAxAhAjADECECMAMQIQIwAxAhAjADECECMAMQIQIwAxAhAjADECECMAMQIQIwAxAhAjADECECMAMQIQIwAxAhAjADECECMAMQIQIwAxAhAjADECEKO3ghjBNP7lXdA/dEcsCWAadwXTQF/2KP+0PdIUgGncA0zj5yvCa5yd+98sf38wjSCp58t83xFNeVs+/X0AG/g7CPtEZ7A/D31pRPmLtfH9yXiv8XjuX3MAZgFgFgBmAWAWAGYBYBYAZgFgFgBmAWAWAGYBYBYAZgFgFgBmAW
AWAGYBYBYAZgFgFgBmAWAWAGYBYBYAZgFgFgBmAWAWAGYBYBYAZgFgFgBmAWAWAGYBYBYAZgFgFgBmAWAWAGYBYBYAZgFgFgBmAWAWAGYBYBYAZgFgFgBmAWAWAGYBYBYAZgFgFgBm4UdhFl7bS/1DMQvI22+qB5iFTzAL1D9iFnD65+6uR17bXQ8wCz8As4DdlyK8NPyLbZa/b8xCHzbtd/IjC2WB/MCNUF9Unw/QCj9Gg5BfUoMAbgHgFgBuAeAWAG4B4BYAbgHgFgBuAeAWAG4B4BYAbgHgFgBuAeAWAG4B4BYAbgHgFgBuAeAWAG4B4BYAbgHgFgBuAeAWAG4B4BYAbgHgFgBuAeAWAG4B4BYAbgHgFgBuAeAWAG4B4BYAbgHgFgBuAeAWAG4B4BYAbgHgFgBuAeAWAG4B4BYAbgHgFgBu4e1wC8SXN8X+0D3VKMAt3BVuAXvlW+l/7C57DOAW7gG38PMVAf8lN8vfN26hKOvl09+Ht0C8+5vOoO9eUZtXR5S/0AvfX3PIX1JzAGYBYBYAZgFgFgBmAWAWAGYBYBYAZgFgFgBmAWAWAGYBYBYAZgFgFgBmAWAWAGYBYBYAZgFgFgBmAWAWAGYBYBYAZgFgFgBmAWAWAGYBYBYAZgFgFgBmAWAWAGYBYBYAZgFgFgBmAWAWAGYBYBYAZgFgFgBmAWAWAGYBYBYAZgFgFgBmAWAWAGYBYBYAZuFHYRZe20v9YzEL1H/eVD93yec21XO/7OZ5FP9OAic+3giNU/hP3jxPA4H/UIFj0M8V+Mv5gcDfSuDoxy79xcX/NIHD/1nggJzzjeQc+h/JORj1kzUCeXONAOScV8k5+J0pwmtQrfvnn9wfOec8f0h+Vtbv4UAWYg75gf+gvqg3H2Byfozq/Hf6FggjviWMwOGf7Ct+TcoWYCUBVhJgJQFWEmAlAVYSYCUBVhJgJQFWEmAlAVYSYCUBVhJgJQFWEmAlAVYSYCUBVhJgJQFWEmAlAVYSYCUBVhJgJQFWEmAlAVYSYCUBVhJgJQFWEmAlAVYSYCUBVhJgJQFWEmAlAVYSYCUBVhJgJQFWEmAlAVYSYCUBVhJgJQFWEmAlAVYSYCUBVhJgJb0dK4n88i7oH7ojlgBgjfsCa9DoT94jTQKwxl2ANX66IryGUbv/zfL3B9YIknq+zPcd0ZS35dPfH7ZBvnvB7v2pP8i7ryXx/cXd+P5a9Bqb7f61CCAXAHIBIBcAcgEgFwByASAXAHIBIBcAcgEgFwByASAXAHIBIBcAcgEgFwByASAXAHIBIBcAcgEgFwByASAXAHIBIBcAcgEgFwByASAXAHIBIBcAcgEgFwByASAXAHIBIBcAcgEgFwByASAXAHIBIBcAcgEgFwByASAXAHIBIBcAcgEgFwByASAXAHIBIBd+GHLhlX3VPxS5gEMAufCDkQsI9E/IBRwlf+5Oexx+c40AyIVXkQvEnSkC8ktulr9v5EIfNu0bERfID1wK9UVV+gCz8GO0Cf0ltQmgFwB6AaAXAHoBoBcAegGgFwB6AaAXAHoBoBcAegGgFwB6AaAXAHoBoBcAegGgFwB6AaAXAHoBoBcAegGgFwB6AaAXAHoBoBcAegGgFwB6AaAXAHoBoBcAegGgFwB6AaAXAHoBoBcAegGgFwB6AaAXAHoBoBcAegGgFwB6AaAXAHoBoBcAegGgFwB6AaAX3g698BWbYn/o/moMoBfuC71AIT95xz0O0At3gV746YpA/JKb5e8bvVCU9fLp789eoN5B2Cf6A/956Eujy18Yhu+vReQvqUUAuQCQCwC5AJALALkAkAsAuQCQCwC5AJALALkAkAsAuQCQCwC5AJALALkAkAsAuQCQCwC5AJALALkAkAsAuQCQCwC5AJALALkAkAsAuQCQCwC5AJALALkAkAsAuQCQCwC5AJALALkAkAsAuQCQCwC5AJALALkAkAsAuQCQCwC5AJALALkAkAsAuQCQCwC58MOQC6/sq3
4r5MKkVw53mC6O/L/ZscDJ/6E0/T/46zfEtq73tCH9g93ur+1Nb1q3ftmOvmxi9cuidZPiqWvhp+d57lZN8tTY+3fESR6o7lje2pfTvDxjP9PBr4jhs32OvnTec58jJPS3PseQdzD6Sre/EDG+e79/qZ/3Zb90dVkn09J/+XNXfdq5TZ9ccrcI5Tmw+uQQWwbj86GnDcdPj/Lw/LK5+GX/8tOT+vmeoVcFFNRlZbh1FL685RWgQVUmRfvUSzg7/85dzEHv8D/w+c64+Tn813Ocf3p73XJlMRvZrBtLs6HbzMFX89Uy/7w2/10Rvixo7DvsOH9dzuhn5TzfTZu4+T702/f0iS+a1mxKxfLuvyTziiC+Tdjl3M3nfFE2Pk6CICz+U/+/1+ovCODbev+5sb966Ztbc/M2rAu3nS3iVgTN30T653X+BylDwJp/mDWPH4v5Zxo3/FDGDT2mccMUMO4fbdzU31kxP9q4YfqRjPu9kj+gcRPAuH+0ccMI8dXWjb6VdZMPZd3Ev7Puf+j9f2Hdn7T2I6wbA9b9w62boH/+2I0/lHVjjzl2P5/xq3OXwNq/v7UjMPzTrR39ej34Da3/QVNu38CDBcb/VsaPoz/d+L+B6PobGv+/jOt/deOHIWD9P9/6afynW/9L9PGg5g8/aEoe+e9fFIEg//i1AEpR3dpf9csAXpr/A0FFEZp/vlbJvlC8ApPvEBT66+dZ8C8Tf+jvdRVv9R0Br2vFazHhd9UKMVl0+1dUi+8gfgTB7kzeXxH7hUXA1PWTv/Vzt2kS/2NZfPabNJaeeO61MIjCb+2zD/oEf6VPXo7VYe62Sfdx8//glndLWPCBSAj8I5FgEPLn1yy8tNKUt9oPnz/4V4//rS0U/nJb7VPU8re2vpdbR79iJr98yUL1WW3+eMSej0HfrOXkx9ViKAm/w/4+zXmtSA95M0WnvknRn93HV2n5UtfYzI4jKZZySPSPT+se0a+zgn8Q5w+3jfF1aXytHSCfVAt+0sxbm8BXzGyAqD91Xf9S1C9L/D9J1P89iIX/OVzZh6r5KwYrn2rt9whe8E9Ht1eCF+IdRL0Sv6B/Gwe/X+r6O3y/Gfalb7xzm19yJvMdpI5+Gh8h6Lu/Jy1+aNCKfmvi+stpiu9bcf+hVOcJZICHVID9TQXmVyjEWwgh30VQ2Kej7j3U6KOfrxEBScV/nbNCvyZn/YMXED9f4fkbJg3fq/XjJQ3Rzxd0AnP+7uZ8P1X62Oc3Z/yO1k0/pnVjn9+LAaz7raz7Dsr0Xxp+DOvGvmaH4+9o3Z/fRgms+62s+x7q9LHPV3D/jub95TTqb1mnj32+XhuY95uZ9x0U6mOf3371O5o3/qCj9zcUaQJ7fyt7v4dSfewrVrJ/ZwfwoKk3jAYO4A4cwB2U6+NfUd/wOzuAB83O4d/KGQMO4C0cwB1U7OPIQzsA/EETePhXlHb+3rW6GPVxicNTfe3cNX/+EB+3+LVFbDj6be2+cVUb/lpV2yeCfusaXoz6uOznHmp48dcqvT5vAPdS2Il/mS0Aang/7bOvKPECov4tanjx1zJboIb3Fa39rLv/BrdOfjrS/VWZ+7PKOfFvSGj9qgBlHP/End5DcSYOqrneYAKJfw0G8MdOGImHKt/CHzRBRIDyrR9ozvdTnEk8VPaHeNDsDwHKt368dd9BcSbxUNVbxL+s3vrlrRtUb/14676H4kzioaq3iH9ZvfWrF2cSYBvkTzDvOyjOJB5qWyTxoLVZL4oGSjN+pr3fQ3Em+di1WcSDpt5IUJt1Dw7gDoozSeShHQD5oNk58hsA+sABvJkDuIPiTPIrird+ZwfwoAk88r9XsQAK2T/UT1D4n2Z8PyAy8t4rVz4BkZ0pP/T9v2nB/IpH4Rj+nWRFYNCnsrqLchcSlLu8wZBM3l+5C/VQ5S7kg865KVDu8gPN+X7KXaiHiqepB51QU6Dc5cdb9x2Uu1APVe5CPehsmQLlLj/euu+h3IV6qHIX6k
HLXShQ7vITzPsOyl2ohyp3oR603IUC5S53YO/3UO5CP3a5C/WgqTcalLvcgwO4g3IXGnloB0A/aHaOBuUu9+AA7qDchX7schf6QRN49LehmH5DFhkJ/a3K4XvhyEj8m5t+Y0YP/RU4prcmkpHQ30qA7gFKRn/bN2jfC6mK/vJXyAEo2ad99mt+h/RPEvUvDSWjX0txASjZK1r7WY//DQWD9Cvj3c/nksHQa9mtb1MC5J+V4N27d7+4DsDI99GBV2IelPr7gsYPVoBvzXN9eUbzpvW9IRzgIfk3pZhfoQkSdb/TFw1T0Cej8D0U98IQyEC8QQbi2QTuqrwXhj5fG/YbphieNfvxcgww9PlaMGDS39+k76fEF4Y+Xx/2W1r4l7HRv6mFf74cDFj4m1n4HZT5wtDnS8R+Swv/cg7mN7Xwz1eEAQt/Mwu/h1JfGPr8fsvf0sS/ZiPkt/X/L1HsC8Of324JTPztTPwOyn3hlxLEBzHx95r+gKM4/HxOUO/zU23+Hkp+Yfgbar9+SyfwqOk4+BtKvYATeDsncAdlvzD8FfVfv7UTeNSMHfwVlWDACby9E7iD0l8Y/orSt9/aCTxqUg9+rRDuuxbGiMlyG79ibcxX6dMXvqQR/qTAAvnZlVAvxXhA4G8icAKF3yF3JnLkzYvfVHdcJA79gXPv368+qPhJGn73shJ6P/L/itq3f1nsjPzxL8ubf+qGD+xjp0wQ/7awGYK/0NIb1zbDL5L8UaYNP6hdo8hs1+S92fVrWZw3FD7yoMLHscWpf7Bj6770AMW/gx5An9UD7teS9wejE4ovE8VlK067yAp5efmDYmr26b/voyYwgb5DPq57h6EXB/3zlOO19B5QjjtQDvoFNP/zdOO1DCDQjR+vG8jfdAP9+8rAD9aNb8gIJhd3CfX/7H3V9cJ8VzZJm5SLFF6yeGy+vMC6fhY99f1Lfwbh2b09qcqn8ntKCrJuU4X+csfnZFjkxT6dkHk5Cr0cWZpyW3fWzPdPEbEqolkRE4vd7ntoLUUlM/9sDmYsmNH8SLDn/7ESxzz9HdTbWpgfrFkk53WYXemQFpnyqnMueePoDHNQLmJeBLa48d2hmZvX6MzxjnLddb7GQcXNOGVWfjzylZo1VnMQWfawXidMVEbl4Wrzhnlk2b3JZByXp7rKKkLORk2kVGs2i5OsuJ1DksanYvMHwk5TV3S73eZiLPJjbT4bLtyEIbPRscxWMbK0CNPUHucLZLU6LRjGzBgGVZiDKWlY3pJhUbEVT/jzx/FCtTHOlY/zZyuyuNLOfHBWMBE5DzHkZ5ouMTtWumj0xXcUfadsRy9vhhA/SQJr5IrZp7t6WtXpUYbWZdg2VOpKOd0dMR1mNM6Q9H7L9UwiB2JuVlkzKL6dqOxWL2KIadZzBCnGnrYWbKwvIMWFDKly9XQ+KuGwcZqoTaYX6wPXCJ2NGrS9aO7ct7v5aqmL3x9l+v+xd15dbiLhuv41e629L8aLHC5BCJAEQkgICW7OIokgBBIKhF9/qqDbod222zPu5K6ZsQchRKh6v1DpodnJ9QzOdZXJTYYtg/1qVNRcm7ayJOzNkbYUVPD0zF6Qz7Wg+uPR2luvZuZ5ONMNnAj8l4D7EIl2mW/aWg33S8mO41EK70NsQGGI9HXT0UdTx2JB2iwNcaawE9e/MWoYXEhe22xHqsd6vOckgTCtDpN2cTrpkSkyamwI6rY5tT64mHgoCl0YbUV6fyyAOfm7bSmYjimsOFOcTYXROZZkT5jyE2GpC/NNLTKUACpVjkeUBu5fHINbKXH1ui+lw0a4tYJMU5JAifujKFqxxJwFOhAULF7fYqWKp2wpirowiYSWcYQU3sBCjsdtI4yplTBbTuzxZJNPxvZM5UcC4wqqaxKMOW/McbKenGeCPIutWbKzhcgWUts82uZiZ/K4IIcxQYsmL0qeMPOSkBZUYry2Jo7srHNnFHoK4ws4K1Rsfbwk8rGeuJzkciOFEmfMqONmF0
7aUlJWj4ACRWXLCZNZOY5LZV9ujs4hLYlmqZ6S2B6tZjF/ktv92i6Zo2l5iWFDn3VSy5N22M95U1ksbc3UN8n8khQu+E5kFJcZW5hWmODDNVmGCUHLPCPdytXO5MC++KYIR0U4GYYerCmnnjTauPamcWhls0K8KOLVGTv1mk5kTXS1uj1mmS8bxEg8S6qjzhOeHV+uOhtMDGcVNaMuDqxkVWTZQo0iwbkqy0C6UmM27a6zA+XMKZvN2l1y3MlMJIScLlFiQa1vTbaLoehGKhcuKNec7ExXjlc3IU5EgxjX5QR3lkadkyNZUdx2zK+nx9LaOfum0cjx+DCxONNtikW6WY606X7dTLanZYkl9GoU7bmTw2Hm7ijI6ajab8jpJDLXXCq3mn/QQPlNJS7fz5KZrJxnurJ29TJZZWRuXNdH2sIW+TrSGnpfFEpAWk2+mHFc3bWXSDhxadVtb/KF0smDFqVsQ0bRjhqpzWyngTrW6HCn2BwQfLYTbpytgs9xb2hitIBvdZJjYAci8DVy3QlZDNyozMbbkehPxp1wCeTpaV+ess7Oz6c9e5pv1gRvHutVE59Po66cNHFxGNG1qYpGJIsTU08WrYyV21kCDCTY7MtDos0PS1edMWJYTsjVIj5qo/ysLet4JyvUSk3caJyMt+flbDrW97a3dNzxbl/O0vlJa8oq2q8wa7eaKvJRt4JG08bkOZ/H87OVN+Z23AWeS1W3WObWweo8WXmJthtL5b4b37DViJ6u9iWz8vW9tb8EOc0sy3XurnR7Kq/zaO+eNkcs8VdYOJWxPMq3XDxp7Uxr9MO6XViTTr/MWqmaHs/Jpo3ZiXY+H0GwloupG+zxtiOWU66y6NUiB6Ysrlg6CuyjuA1ka2IryToCBr1mnOMyPQd21Gy2+XRlC0eLmLHrY7dKtZl0rtw2Jmcl+Hm1ls2zvZzap9y6noipzXin5pSGtsjb8Xxt4DNt3V782+msnZZRZTGr3TqRnFZJ195oedwqh5HrYnjZ2f58GWAz7XjdHHJvax9v9myDjdnN4joJTzCOyA0uFVYaekdSvcYzRquO7vVIMNAjWmt+mjDutiL5VQFs9MTy4Q3fNtdSwUynkKaUM47xUNoFPg/yGkKcFxNikmS4YmIe08Sqcgz8rFFkZslit8Vxt3KccXO1yJW3tSTGUY77wwGnanCpANextttUunx2y0WblPEcX54PXbTVlciz6bzWFOdipry+2Y4v7opy8zxadzPyUAWzs3ci8sqOzg7DLjYV0WC+FBOa5WqXk0cYBMGyskhkhUeH8YIQbiCIivxpS1gNuVfl2MhMtstKzzCali0k0aj2tFroXjB1Cbe7bQ2uZRUtu4RRDkMpcz167IKcW4GhMCFbjVjovC9LHobFeZqcrXnl0oLAFxmjR1bX2oulzu9geY6iImfy7Z7neR/vbtaV91TGX9hHTi6wy2584VTyfAR3N9tSl92lA1vnHZXDSHhrlrGMC04tkGvpUrvRKJxoo5g5SZgDLbjebUWDO+vgYBFj5fm6VRSQayQlfbXnTHi7Vbbsj29Bfr74kR+0l5zSG8dUEskb7w5TSp+VzohQDSeyEw5rfLpQDy05qriZVhNm1qX6bMKss67VtYniOjTTJtX8XLanLDlpl5K4ZhYTOo1ljbf6+tSovkKeV3kjbFWdW2QUdkvn4mQ0JspN5OOaj/nr+TncOsaBBglmEZ8DFzjiUz6NjxNiyq6LUTsBjw1KQKaAmS5v69RdabamraHOCvDHC0PLCBcWlt1IhoFJcySB+NiqmcETpRrUIbks2oN/7dzO8XyiC5IImxHNlidYmgwPLChn2aHYU+Gv+Nq/whMU2pQ416c+qag1Ity4mgu/mBa3TuHcm6JOqcA6s86uWvCyuQMpxGQhid16QWYpNd0F3LzzI+tIN9vDjjsW9AXk4rIN/vj+nqBtae8EyqisZ1ZCKuqeohZqNeb2STjZ8snyqu7Xuyir1ut26ecyS9GXxd
ariNmIorKsifRj45e5v9qdp8nSUC2yPmWrMpz5+tFrme1M4p1bYUKnZFj1zLtY1nKzkkFiq2K8WpC0ydiZuznZa3tbK1ELFLiYnwb/X0XgRsVsh4fRBI8sezej5ydju4nCHY6Lty3ehZO5aGJ1iR0zf16YHGUUlc7iPLMnbMu1lJthQDs2wLOD/3FbsuQZfbsd8diWWe78xVHknF0rH46an97W1wrHbylxysJzhEvsmgq1JbkpNsRuluHQA+I5Yd5yG9tc86ZOtjGUNVfNcKetRjh5rqq8qhTgSewrq+Lc4qaCBrHcMJeCmvhV5jO+lEU3Hq/I7RXrWPaCwWJZbhqCZFjfYZdF6ME6JtkNScGvZnleK2nMeyBo7lxxz6lGaeKSTTmXBJ46PLMXkmNZKEjrnClYpCxvuKJSrZyp410CGlWiEy0zkltsMny802CSHOkO53ZArLLAA4s+cb51cvnN7sDmCqmfrTBZ8B3ZRguX3lsVy02jw9bCF27XgsyBiXLoCS4FrzjXzqp0qWIdXtFO090t5/LOgMk5vPW84lfRbiv5OHgkmQ84tdtDB0dXTRS5CyjA2/LCWlcp5EHiFV8WsMb1bU6HLUldYe1D/wUzWV5i9UjKeFXzpnxb7EEaLp6KDprAVaXcLZXvNptxZ0xafDYRR2JSu6Ng5G9ioTTXCaGMV+1Y5xn6VmYdBnsPVh1ssOi3FDZ+uFLbRVfanxG6MEtx4SzepFq8rsYOnZiNnOjF5CaWy52HCdVsaACA5recZvtUw/nQ25mRE2ORxbfno78yxbowRY4+tWdtygebXR5klsWOJ6NrAx/aJ1ieWvRpppy409aLcPuQ10yypULwNWbM1wfYv+HY7U6vQeNxNHPtZOru8bkw6lubE0EwRSUWZPhBAK1IAWSHSv3V57EjzsDeu88jcA7VM4Xxl+8p4f6fff83/Gpk3n0v7UtHEOGlxQn8iyrBJcXPx4lKKejffB42ZcvE8incXBPz1t3Y14BIbqGIJeQVtoaFaTBO3ACcV07X4GdOLqzz1XQBv1LXnr10QjmZWxIwZAK7OpvlzTmsr5GC13e3ykXSkfK34sXdLpOJmlx8he6MIuZBI3rvE9McOO3OOOB5qIxvk1QEriC5BOqyWqymjmULsUnwrbsSCC0TrvpoAvNo0JSeSE3rKg6z3i+VL2ed7t0MSz11iQVSedPIkAxbmtRb+hYcgptu7WljxdV6yrV6isPfXwIyv4aKTGkbupu0n88++nKFpWJ3Djk9gns6+gQVB4flwTjIqU/a2GI1ST8ff//n/m4ONuVtHPCcCRaqAqO1/DVo6cRVeBB9aFheN3ebHN2Ua8Cz3SYZNfv6PAul/z2zVnLK3TTuUskvLrhLb8NfF1lQL0Z8Hin5wcjFY6TY580a3F9ff/nN//m5nF+ea/zrc9nbeR7s5zd/g+d+3wL7RkFSPTxddrQDwm5BLV7gGUPpy9nAVR8r4c81XN/6ss6n4yW4B3D0oA2l1wo86hbt8KA3kMBbLxe2cGdod/9MekOEHTxCdv9Z8s1R2tXQEObxOB+b9pISnJWbklrQJLJ8toEXFJcSPln7s3ALGrYnO23c9fXgSkSwZNanfDd4k2w6Wq/G5T4kF4tbiAl5MTlNJmO3W1h6CN0VxgttnfadDvB47rrDsUhNAuhP4L6NfPNh74tDXvbRJXH9mhUdWq4XNXni8awoqZl/nUHnu5pVF072DxuWXXp7oo68W8QmlL9Yb4+TolszTCSd1nRwCW4tWxrXUzXZLCvWWy7dbOTY6cXmFhct2l3O2tGVYMJ6XcTYbkHWgZwNMbvvaWG4yMioQJVEbnElL0x9ZvVOm+60sckole0JrblPuGu+XViB2e7PVz3xl+KuMHzQpAh2tzlWVdKl0jbm1IK9LcQ0MdfNONPK5ug48FGKnceb8aGgTnItdCBWH2P/QONk3O40WFSq1MB2uzS57iyS9tSG2ylcHTmqSyYEaE1u6guGO3Np7c
2VmjniC2k3Paa2a6zbgLxqaarEXgObTNQy05iOya+ZM6kmGL7m9yU9m85nAjZR1vNlg1HS5aq45a2bOwq7TYppFzRYTRM7iTpvKlgm58KiQSO+4XgstzyFdGSx5F1f2TPBZrS/bcYjs9mrntzssAmo6v1CCApL2HWb9DaBsznFED9tKClRTkYxT+ftorulM3dZbQ74vDleleVubZ64IjjR6ZgXz3Wkd/NAXfCRnzI3MYbFJRR6i/MLfM/J9GWcOAu3cEaOTOeL8fJyzBYLcwLamm0QnMTacYk0tUlN460dExFqMRNnUsqcMPUIOwKqeR3U82nHRaOuDaaxXBqOcTx5FhUU+vygXGHnmJUGBc0ZKQyV0U0uB+lG1ZEbx1Ew6CRaBJHDRvil9lrjaMnReuoctNUZK2IQ1VkZ/iKfRGuCKDW72M/hSSZhIZ+m9N7BE9ZMMZk+W5eTPu90SXYbac4FnHmuxVMkaLF980t15YwYhWOhqQAtWH3ma9SapXrKdqRZm73t23h7BamJzJS1PbZH/JEWxvvTkTon6pyhYxtEsIu+n2j+bCWW5Fa16NMyDCi+Dmfri03Jqart3fNcnkt+Xq6jDbPlNEVfbxeG31KHCWXQC7XQGhb2HrKgmBoY4HFWoKTxvEnwdr9UU4NSj6PYU+MSH+e8q6SX7XIaLttQIOfXsgs2xtjfhl5z1Vt3ZJsLqypJpdiCE223DimtuMVxVJ1BO4w/nsfASNpwdG6rkOCkZaYeyIoVWCW39SSNbOPsTYS6y+1t6C42paKfq+BCeqeI9zknuc6nhLbHg1w92ukcY1wm3PAwKb0sJwbM8zrtNidD7rbitynM0mYLMcoI4QpTKX/Lw4SKqI1aTPZEZKcbRnJi55qvlTXnnmksZzyZWE2v+tGZH1Yn1e3wIN2Spt2c9nZudcZGkupmnicxLx9O0I7n/DFYMYpb4+ol5K8gPMh70EYLK1PsVEsYheo+0cdtfqmnEifhGDcnyX2GqULdHnLYWT2b8WNFH7uLs2ouZO7Eg5hK77tMEckVVlS046w7zbQ5ep1OJNdYYX2DWVdsu92MM/cqHk+rVIICCtLDZN6CNjvwJYlZ41R5yTJFwEz/xsuWxvs38rA/CdjmLJ54DQ4XXCgsD01vot9yvBqtxXZtT8XzrSY2Rr3ZCw1jKjM5j6BDa/DVFbRlxnq4dRtvbnMHbXSw2pXhgHa7lU26ZLUqFjmoSWZOpb7V1bHRrEN3TWpGIZ44AY/omOHcxOIz9UYTgiGOPX8h1crYiHf6+HZLkrMpVoHq7JYrmMOvhQmVrTGS1q7ETjA3qzHWZodgFi+gNcTmshBnnu9X9JT0GDWeUelpI4HHVXdHXqRzZktd9DhUr3ghhSpJjmzsirMn3j3d7JkwbbViD5txty1swp12S3uhu3FYEaDhFXciZ1QxbDNngiIu2qmZ3kDrp/NrTVfZwkvjhgwDRbDMcLVyMEVrr3l31a82jNbLWD7z51JUjLIZlTETh7nl1rOML/Q6mhBxJa6kaSbMamGKlV6xt6MDX2tiELXlVJrdVt3NM4quW5xBxD/qY6H2FSnKC1utlhdD2MIGOCVM94vZWtibrnY1jsk0jE8CbNX4Dr809kTeGtPpQimISQqb+HaS81fpip/tQDRxxj4tV4vNGnaQAOcMEtiZPNtukl1QH+tdFM7JhAL+BzzJbXShZ/FhnAWhY5mHA7Zz2jXXRVNlTjUlBVrvuwkdtFh7XLYql8GfCG5SWAs5NzCGLv15vUwMce2nAjGisjbaa+7kdC6jWgL5luBZuT2BFkkLOHQV5ki/yXx4kgzHMa1u450tR2cm+qb30Tq9xk6ntWTTaTYhi5M1z5dZPMKVLWzlWm7gLBMhFpSZBdqUFxAmVPrSVrur4qQLgbrxoOE0zmR5567VSCgmpCFrJd6osUMva688WGXFjNcG7InPuSQWr7xLKlJ9cJQTvYsxx1pto+q2cymD3JwuZkTh6WhvTblyI5
wc/uzYE4Kee+sTfljRUjKSdMqca/MJUxQTNefa6RSOyJxgiOgwd5PoF5+YdEZZG2PQjAlHrbRy+flUNMrQjTfT/JrA9IAbV1LXbOD4GMgAT7tdMy4aLGKupOcsLNe0LrsjbPcviVrr5rWpj9y4dOsLbdwwNZEqLZMV8DXLk77q5nRhYjs4nFAdGZZVj95JnC0Dpx7pdYYnpTM+bBvnajLWbAa9paNkuMx5llTEKuw64q5bBngWEEYVKeGMK58JrFErkkuaWy5zKv3UxltgAPIizbNx3Cmx2AKfCW/exDZ7ecJftuYctmyJW06sWUE0OEFztRpfOB2PtXQ4cw3KKUWM3k7W6bhuaYlgwtUiOs80+hIIUcAUCynWYG/CzoBJ5kKFKZ7vd/TFj+eK6frHWFZds5GE8XnVTqgNN7vl21GWbSMs1xX+uHScbrapYjxgSq0tDGoRH0nFK6W4WBxnVilztRvXxHo/queNtBaNGfS32yR24oKeLgsr2pHyTa5U1qUkxRYNH/Zh+BLV98cucCpSl5whLkr/NpY5fXmZT0q+JJbb8S31tBPLTD0y6WzbPhBuV2A0NiY27GamKKakjcWSOHqWthqFHmxmNPJY5KZXoVjwF5imNfO5kFG3W0hz2mGDTUx7XC+2s1GUjrTlnjhuZllp6oeioa/J6VrNSLKT3exW3Ahqd7aqekGyI79jFJ0x5GCZOml8TrZjPD+vnYwOBJDzNYSDB+tiYbstMWXF29lwpyJ/vkmnkxxaRytfRdWiOE1A7n4uLvEtX6yBiamNBNJvU4XJ68WBSSn4E6wMNdXrjaKw1701E8Qls8dONLiUui1KNzKKGYnpMK1JF1TIbQ4TXYrM9iYQ3fUopDupgv24bZcSRI0psHDHs3KMx4dZIItWHM2ERbq1mGAupLYh38hY0A0xH+UjQkpH/qbJx8ya8M41KU001iOIy2hsH6ZEvi5BUL/4ojMRl8HN2Zibm7Ar8USfkHMvcWpopvEVNhYOk4bs2GyWY4bYTYBsbPCdYca6PFlCOxYUaWSftjCDJEOlrKcrnb4x57Ws+SXnNWdMmCgayOibfN9FDR7VDS9vqdnUKqUmqWaHceC0iSlIuUiKioUrxcY6TQ0KU9RLypsG1F1dHAxqTBv5TOfsZYp3190qnmQ4MZpnXCBMs7ECh3PlEJaPiI0bcapg3U0vSUlhjnN9mkdzB+wOZXNSQMtTMPJcy2nIWuNTQ1NjzHQsn5MrYbo4jgmosWIMBa0mcIjrIBCLSeAGUhJS3ZRb72C3gXEZlWKTz6cWFdorn4X+RxnXfaeQW5P7pG6Tw8gZCdhFN6Br7thY2Ou7bjWqo44a7xu7LWY3VT21UuDemCM92xdjPokDHWv3zTJh8puGzTIC+O9rsF7p0U5UtzdmvlnY+Nrd2/Z0wWY4CIcwtI12/LgoznR42qniMluDGH4UFgqxme9vAr5ZHXbnozlTFrKhQVfRCPt6dhPG5Sbhs5i6ndertBzpp9mC8wptrdV51ez0yY5kJTHmBEuMcie+WVMdZOngamttdaVx1QanKB12epI4KPXlgsWp9fLKiSSVK9AlpXYUcHR0IMl1nUwWIFD41SrYcJlhLAhyrE0a79KuKJ1YbLHYlKGrPxoFqHGRtFSl7qYxq1Up1gmCko7O+DYSRulBIjcCtYae+Wzm5SgYXe2GDc+crF3FdMvQhgXzohre0U5MbqsbzF9GmHfN6ig3/MulHNdy4VKtTvlytuPV42R9XLXVrDzsJXa3X9tqkeKL3J/I0n7DY9Qmt6r0AtPEbESfYmBpB6zUFRBjtQgOeZZxwEe2rZWxAft3CVWMvJkUMrRz7vyoLPbnWr/ttxcuPEiSrUil2hHNxII9yKV5yM3xZXYssXEG0/TjKdZpPlu4iuoec+FYX7zQabPKntv0+FTMwhl0LLS5kJL2srJXZ3mDe3RTZDaoyVJ2RiR7TYPxRQ+EJrvstjGMWMeYt7dwVCLUyFjdLYyLIN
TOVQuMWI951+yskz/d7ASmkNf7kN/GOsZOz2sXVqEtJg45l3NNmM4WiWFe4g6Xr7MVf3St3bVa69eYVncWzOBt3buFZ6W5KeqscjIx8s+WyHYdO8zD2MC+j64LIyknGtAqu+5ia+xlgSyEBmirNUuSnTBHCiZC0gaTxbO7PlWVsyYwMU8Ef9lupPg0b08YI1VHTSS7eIm3frxIjvtwYZBXUL6Ti2+39gVrlDnZ6lciCyNvIw7GOuVCceIuDiQD25aX3W6H32CytREiBtu2pT6qHadSx4JQ1Wp+FU5R7IxX5sERkrk4PlJ5pS2Fyl5MgfE1gsOA0uSJJEqLHX/VS1EiC507raHq0mMNnUF09KVq5biBsN/WkbevltSKgpNcsKpmSU6gj7dEkSRSoly+cVNhtN3KejCPyU04LUawrWGtuHKfbuOTl9irDTZjlL0kjZ22Co4xsMpYSUGLohMwthNdySRsbm5vJgz8YXB0/flpQpD6FsuIETYJokvKZs3enQltACvkzLbKts4NeQPn24zqfi4K7MS/hgv8KgiCMRl6y/jDISdlfb9ceuf8svTMRBAaZ8pnsFtsDFsNZMp1Z9hWnWkcHAmCp2mviuKOxvmBsUkSu5VVJUdyA+7PzZd7NycnJ70b1bDnBs4cqHzOVbOGMmDlU5HkhzdyeSRC1lOrgJw723k2v6a77li0Cmco3ZmYdxvlUB+LmaHqZdUdr0dybm7Czcrc+vs5xUq8vzW48Hgk6Xkb3NLTqNO0Y7TatOyyMfgT4ZQrWqVCuWWk4lD55YaeVzfQ4Nqp1u06msI+gJ2nNZsZdZNLwx16QnYSf7dh3WDMhmOH4H9yIZHcrJOTWBitKSF2SpMaj5Y8JknTkXi8LMer0VxyEgm2lM3UYbJWCTT+OpJ8kIfNl0birafb/ToIE8FcxXSLlVNqAuKPXPrMJsT1auSm2LZutcaeglbV4rhxFnTL77KSCnbSdpfruwvLWjLVYMSitkHxlMooxCSBNM+TUlAnvjv1mvnITNkRM6I2QgDdZ1Zj4LqhXbRBMlGWWEY1hSE31T4eV25Q7SdLzXaFYjzPjI3ZeQv7elJyR2ui9qCPlxd25czZWN9xo24cjALgsR3VwXLD84VUnyj4bFKbdVbQ4LbO4vJmEXumVrMxme/X+NBBEIyOZQcLRY7dg1L6pW6ft+plUyvWZBm0ilK2SnyhCPK8PK4VShvDdGS5BW36WUccl1OWgF2I8ozMZtDI1zIM1rfiYjeGQ5CZfjloOn726vlyr6hqMXVVKU2dZBKSF5VSMNGeCkThubq1iibsvgGhw1uloFVA+UKxz2AKT9kz6EBrrMDDLbHKqeNYaR3n3Iy4KJH0iZPUZuO7el4Q0umw2ghhN9lwyUnF6BtoyfP7/oYwfFGUZsYe+Ii4UpQlM7uKx5Nc4JadNPNYSTWxy0gYyaXCAeFI6XKOhb7E0cJKq1eFdrzpY6m+sc3hnK3XlrC+XN2dKkwWcYsdAs9K0omYC2Lc1Hi5O9hyEMVsINBXC7MUWDDAFAerrJ1aojw2pBhlw+6IJMHpbBKtMhsjSwmPRc8Ro8BZZYIHEr5ifDQPV0k5HRRvNtswJ2zZlG2qTDhrdKqsiSpsFqRpWlgjJuJWics5TZ5oYS+zQjdeU/v+wqVw0CdT+riMpcNioSWBzPGYE3FsdNtS2EKKPKPWkwxkOCP4g9y7nBh9feAEbK7jN9PQoT8Zb8RNJGBTb00FqXk+72AMVO3F3maU7ZTceCSuOKIwj3E/npy3k8U5buCcx3EeG0qwGLutbtVC3molIehYze15fX9WSWbDHYFwC4Hd7VZBDf0ZeToG44BcKs7Nx4+qP7f9rrreSs00uXnsqiPlMPFWsSDIfqcHMZfreMJmtuB7K9KUzJkU6+HYhwOf+3pZc5cJvl2O9mddPFriVsZoQT1T1nY8NoTNYTFb8+4pT/CGsqCwwWPJ0JtEGcwWeUOkRtMUZ9yjQbAkHhZ5dT3kt7
O3c+CzJVwmlyqGGTP95POX0ZkHwTfTBemwFDlhdO72M64JlsVasg5O3Cwj3t94h5kpWI601T1BF24jy4rAQ0QgpT/BuKS6HoxbN7b2yCMRgPZ2dCPrZnERTT2JF1Mbdmzwpm/vJwwBe01MHSOkck4WJ8Vy9GNXSKN0xPDM1OTbazZd3eiTSUZl5RxThnUzgYETlMVUSTzoRteLKZbqujIlaEr1DRNbHjfqRrxc50uHALKvBUkOvbNDR6NKEfXap3x3znFi0HVAo7G4vJ4dBbcWW0o9Zhejus1JTLFGoylx9Nt0ecXHpTsCba91ApqKh1zbuKwFr5uS3sGmcjjTQNynLPDjosuSWCS2MamePTGZiZ67bS0nEBbtWjhX5gz2BF+ig6D64PLhTcibquH4q03thNYxqYWLn7oxwXgWG8+OEdZ6iz25zfMxSD93x4W71sslPi9kJ5raGKy8TWIFO7PKzf3Vkg+xrpi5syw8oXUVEfPVbLaS5eUhjBWf6W6jpDNOl7SbjHjCvo3WDJ4d5E108zFiC3VD3shkLIkepZC3zliLjQiTsSJhSxzb5SvS2Bq1VorHBS9QsD9xEZfji5/vpm7qUftOzC8j+UjChjbTmnHZ6fOVmLe6DTI8fDHHnSjZa/NzSq83U18ypsepm1lJt3DbxiZmIpy3lGqwV0fna7GOpOLq5b6Db03jEB26NFxYB6U+xLtKtOqVFkueXmO24lKBuWUvRsNs7fnUH59Il9j3fSZnn1pCNe4UM5zesM7g5hRuKSuq2TIFdWGJIIk7GBu9PPbPVwo0GutA9s0pSKRk+kbS3NTjsTTa7ToVOJnwZkg1pao+GZY7equIwmyzdg4jSTF0ykl8ST11JSbsZAdOtCFW7vZoMIW0M5WjS81Hri3F+C3bAmOspVmXZPsx7QWMfVPGl6WbJanCKsb5WFL78XgbpCc4XcNxCjkqu0nmHrTF0tGj8/owW3Pw3pqddCX9C0jVw8UCCnAhGi6MWq0YA6e+GsMG9fQ6a8qKc8ZzhpN1rRIi+LhSOMtkjJka6Xq/2U/k2ND8cCO0immkxClf2iBZ3VKdqjWXFKT1FF2bI6P2UkPQGDy3DzDgAvXl5lo1wlKK80OL516kmwTsVzyoW6eKlU6ZBF28PdDGcn9QSY7Z7DIsVHhnx8c7PgvJ0Jbwszo2DD7yWj41xMRyyfPKo+LKcg8ibP4eBd+RjxeqCNf0NbJCU2FpvbXmzlI3TAG2ruXYcnV1eT6FTKVNZ86i4FrsNvY7WnQEXqjxSo03ay3Y243abKiwAK1Zuat29G4r18aRiQrYPdGSQrEoVnM+paUxK2T6VlmEI+8wx27HUbM8iKxVlqGymehKY0q0fFqqy6KFo2ycbyn1tlvM4qTca0uO85RZt9x745WhadZePE2k0JEpPO4Up9WL863cifsyY5SQNmrQdtJjvdrWR4UG+8Y37phpo6AppI2785qE7tTGL8KjKc3Xk3ylnBUBkzNODUVdNJyRLsZ1B7J2TNEnwFc0lyi4YpdZl4qjMA7EIOjSOUPAVTOydeO9eRjyUcyN9bJZO6XeFa5vguaht8Lr7YLrpJFTW7mYxLK4PevhqVjkTQBz8tuynp+hwKB9Jl3EdfCM5qybWiNuUux1n61EQV+Ko3jTTh1HkaaMCI9fnvd6nCk1Dycwmwd9a2d4FGeLg+MsfVxRay9ezPe1uZpuKYrOra3fljbslQN2uLqWvuaV7YbST2tS0ZrDiROoJhHJ44QvPTiceM0Jqib2mbGD48VyDCcHhdeGnkFZTJwOO6UrOCGylkbCeBSX9WWinwKlLFdB3B4XXhznWtuO8RgXt7tN58aqXI7Nbi03m+wsrwVhnbUTnN+Syr5qfY3C+AMThdLE28+nRakmOk7uHdweWdtbKR+mjKslS67DKHsHR44oWlUFVUk2QtlsbkWHuQenVSbdFluI56Ns3q5lq4+3qUDV9XF7EtSN7SkCP47iizJhyFUK89G1gz
OU0Ogg/dbqADsAPw4qdpUfV3kWW4qy5opUMPGFjrXkfBeImrmxRulMEZm5KEogIwprtWt5Td3qZuhRosxd2JCci2yubyvhpNseNV2J5mhf8/Or4hwTY4/v8mxp1vnZWMPBHBEDiSMcgKMCJivygLXW+UGaCYTvCTNDF7GLWPpyvsJbfbea0Uv1duOu66mUhgVpxlcOa4yL09DDRIHNirubAAF7M2fCpY7YK5mJxZg5NKke1eVJ8oGjzJT2uNttLbo9nETKCc7JLld9elWk2LHcEvn+RAktzjQhHp9GM3VNXE6rEKdsNZKJ0yLWz2Ljlyf1sjf2nOQVuZiVmDKhOLGTr2K0O44ixs74/ZG86JKV78V4Hq8UKrT48w0+8h4oLzpKHUcS4UnLS9yUr4yltHipsrtUyiJmvl6uTpWnbrf+qprrvOqJK6K2Ly5e2etFcWRT9pCQ5EbCbu7uZlx3TEzfxIV6w+ldBt3GZEzuTSUrPYve7R3QfhdGsBmVricVzhtnf0ptu9CYwyThdrZ587iHb1OUrbFwCC7KSttzYQKXV8kncyVh5yRPzipvSx5+PrTjmXMRTquSB0lht8iu66bOZrlFr29W4LDuJoJ1YKeLuU8qUqccsFQZ38655G2IRewIE8o7x8S+9CNz7zF61WrrdMzFLMim7Y6/gRwRGG00HvPTxtKmUzNq29H8cmqk5bolDmG6ddoiG5WH1CtVZbM92ZwAh8G3QR7zY48ul3NauPAcgdX+aEaGpMdpBymi2W13vlwcsdtmI2txdqcCRen5RC5d2HiKmig3y6u/ue5XPOx2nM+2a2ttOxsnYqplG3ZtrRaSZ8wON1GfXJY5WWme5eu3oDKdKhdSl7KZibSE1XtrEthjRm2jwuf8QrgJCz0gyW6+zOarWJHgvFGxyMLqtKtjZaNWcTiqpCLdsEQp1htM9mJPEUUjP+790cauJic7ZiR/gs39CX8s29bzrld/MskEWlQsl2mldqmJq/oqAYezDgIOs7a1KGXqgZ7BrrTqBho+2zl0VdaZctWIh7nzwbthVgtEKbUhlYK88qrLvmIp6zaey/FMirjEtvN6rglFKcvixp51zhFU0enC0y2pxfp8eeZdsdbJrSDfJsbQ26ClGkstK01Q9lo816AhxqfJfO5PD+VNLnaRGE2aw3EJU1mD7JhE4O2+l7duDZohoFuHJqKddpeaGhsJaBrvO67KccN29FkxnfmuI5eg+TfJLtMxdZofgtH5OM+a2ZIh1DYXNoLfhWo26ZTVqhNOgd7lbjb1jKDeQMcQ01kcjrt0Fa2bVedPDOjBF3Mb+nSM9mMKjlTxUbSEK19hGwc+VGGmGaEU0U3PLgG2jcdMMyElT1g5VADNhijSwgB1GhbT6VqsznVVMpNgjR2PoIFfhcvbVtoKZXhqa0NeuyCcncZH/FrMT2ZE24rCdey5zHJzDtuKm5Np1fyW2MPuORc2srATPjg1N7ZmRTy5qdd5Mh1fWN6JE2W6z5ZJvjrfHMfYqy4tj/UJG2WsSYrYbZEdZ8tgv6YzZ1KPUwlTprPjmdjWe7N2FinGkBP7LFemvhYPZsNmKVbr84yXCIXyls1xVbBBlZkhDMxuB/tE6ZRWYceGv8ngKMaon5yx2sSTPFG04pRl24222wZC7S1B63qmMMERmgScd6vaThb72gI4b1UX2U1Siet2JEvBaiuVrKeHS5W+jEQXVj++OnGx6WBc4TfjmAEtUca96bt6XGFesZC4Mz/v55aK0+WaHlf7aRzDN1n2/z1cFfsHVr+C7x+ufmXp75F47COrX9nnW/3637FoJPVTtMb/0CPvfHmPRI0XXBjNfi8N9ntpvPDC6McAam9/YXRyuRzPvTKhtwvC4tMxbTzfaz8FJezWOCblBTY6CQwCYWWIp5b7LUgPk/tSC7wLfMERBczz//EM9v9YAvuUHYFvEIM8BTl5Xx9pcY4u/wt+Q0M6AP6Jw4dN/BOkLcMt+N3/PYsrwfGnuR
LqE8W+pDe5F+hTJBNcq7wVKyAFSDZ91BwfJXPBTztQDer99lc4A6o/8uLdqY7jn6f0+e9LnwFFzX1XAcQjBotzz1f8j/HPftOZ079y5vkd5PP9+fOHSnlB/45zn6hvaXkc99r+nfnvSLVfqaUHalX3VNh3rhn6ZXMCAqMeepk3oJnfeJECygmenhM8p++h6U8k9hXUjf2lpkj8E0+8qKwQ2O2N8pk47vus5oVdzm+8tuENuRzEZ0J8JsRnQnwmxGdCfCbEZ0J8JsRnQnwmxGdCfCbEZ0J8JsRnQnwmxGdCfCbEZ0J8JsRnQnwmxGdCfCbEZ0J8JsRnQnwmxGdCfCbEZ0J8JsRnQnwmxGdCfCbEZ0J8JsRnQnwmxGdCfCbEZ0J8JsRnQnwmxGdCfCbEZ0J8JsRnQnwmxGdCfCbEZ0J8pg/OZ3rOVbE885SF+OTLroplEOrjvaA++HsZvN4aahahPt4X6uMNaOYxMtzbX3f/9lAfYVqB2xwK4lxe4a9fFP8BohfzFf6D+1ZnBP8Jo54WzL5QQZ5Bbe8TNocoD4jygCgPiPKAKA+I8oAoD4jygCgPiPKAKA+I8oAoD4jygCgPiPKAKA+I8oAoD4jygCgPiPKAKA+I8oAoD4jygCgPiPKAKA+I8oAoD4jygCgPiPKAKA+I8oAoD4jygCgPiPKAKA+I8oAoD4jygCgPiPKAKA+I8oAoD4jygCgPiPKAKA+I8oAoD4jygCgPL0h5eGwN9stSHlgMUR7eKOWB/47ywH6/jPplV+yzOKI8vGnKA/2dh3l9zRDvct3926Y81NH58uKQBxL7CvLA/lJmX9EcXkhp5LtUGiI8IMIDIjwgwgMiPCDCAyI8IMIDIjwgwgMiPCDCAyI8IMIDIjwgwgMiPCDCAyI8IMIDIjwgwgMiPCDCAyI8IMIDIjwgwgMiPCDCAyI8IMIDIjwgwgMiPCDCAyI8IMIDIjwgwgMiPCDCAyI8IMIDIjwgwgMiPCDCAyI8IMIDIjwgwgMiPCDCAyI8IMIDIjw8H+GBfcr66xcmPFCI8PBeCA84hvOvvVyfRoiH94V4eAuiYd7lyvu3zXgoygr++kUhDyB+MV9BHriHQuM+YU+MZ1/gD88gN/Zdyg2BHhDoAYEeEOgBgR4Q6AGBHhDoAYEeEOgBgR4Q6AGBHhDoAYEeEOgBgR4Q6AGBHhDoAYEeEOgBgR4Q6AGBHhDoAYEeEOgBgR4Q6AGBHhDoAYEeEOgBgR4Q6AGBHhDoAYEeEOgBgR4Q6AGBHhDoAYEeEOgBgR4Q6AGBHhDoAYEeEOgBgR4Q6AGBHhDoAYEeXhD08Ogq7BcmPXD/eek+KIQfLd0fvdsl+iT9suJgvl+i36/wft0l+jwSxxsVB07TrywODkPieBPioL4XB46/tjjwZyfCIIDQv5ILgX0PEGLI15YL8exyQQCh/wIQYr5PXV9fNOS7JLq8PYDQDvxIvRP2y4KDSOwrcBD7a4F9RQh6IY39dxAeSnKeJ8nBGOa1HdBj2Lu374AQUgohpRBSCiGlEFIKIaUQUgohpRBSCiGlEFIKIaUQUgohpRBSCiGlEFIKIaUQUgohpRBSCiGlEFIKIaUQUgohpRBSCiGlEFIKIaUQUgohpRBSCiGlEFIKIaUQUgohpRBSCiGlEFIKIaUQUgohpRBSCiGlEFIKIaUQUgohpRBSCiGlEFIKIaUQUgohpRBS6vmQUtyT1uW/LFKKYxD7492wP3CMe+1V1Cxif7wz9scbEM1j2Lq3v/T+7bE/wrQCtzkUxLm8wl+/Gg8EBDPmKx4I91B07CeMelps+4IJeQbpPQbFe/vSQ9QHRH1A1AdEfUDUB0R9QNQHRH1A1AdEfUDUB0R9QNQHRH1A1AdEfUDUB0R9QNQHRH1A1AdEfUDUB0R9QNQHRH1A1AdEfUDUB0R9QNQHRH1A1AdEfUDUB0R9QNQHRH1A1AdEfUDUB0R9QNQHRH1A1AdEfUDUB0R9QNQHRH1A1AdEfU
DUB0R9QNQHRH1A1IeXpD48tiL7ZakPPIaoD2+U+oB/ouifrKumiVdezM/jiADxpgkQ7Pf+5vVFQ7zLZfhvmwBRR+fLqwIgSOwrR8X+WnNfkR5eSHbku5Qdoj8g+gOiPyD6A6I/IPoDoj8g+gOiPyD6A6I/IPoDoj8g+gOiPyD6A6I/IPoDoj8g+gOiPyD6A6I/IPoDoj8g+gOiPyD6A6I/IPoDoj8g+gOiPyD6A6I/IPoDoj8g+gOiPyD6A6I/IPoDoj8g+gOiPyD6A6I/IPoDoj8g+gOiPyD6A6I/IPoDoj8g+gOiPzwf/YF/0mrsF6Y/UIj+8D7pD/errF9vIT+N6A/vjf7w+qJh3uUy/LdNfyjKCv761fAPILAxP/NUzCeMelqc+0KFeAbpse9SeogAgQgQiACBCBCIAIEIEIgAgQgQiACBCBCIAIEIEIgAgQgQiACBCBCIAIEIEIgAgQgQiACBCBCIAIEIEIgAgQgQiACBCBCIAIEIEIgAgQgQiACBCBCIAIEIEIgAgQgQiACBCBCIAIEIEIgAgQgQiACBCBCIAIEIEIgAgQgQiACBCBCIAIEIEIgAgQgQiADxogSIR1ZkvzABgnv6stiL5/fr3r9aVP/YEvjzxavuV73DMg1AmXppAdfS978Jyjz3jue0P9lwRJLmoea15fVyf5n7T3+i5EkSf1jyHEd/V/AU8QknHyl7kn22wud/XtbLsobFXVZpB8swvyuuhwV8rtND7hWRClKsB7vEMmzvdvVLj/utPNrdLzO+X8ncf6juHhp7tJLCqjxaXhVH94c8wk44lmlx6YuJFsF/QLUj7BP9PzR4shH4jH/5TEv94RWwtuJ8qYA+4Gkj7wzSsPOT6/1nmv5eDr+ua+q5lp9TGPbDqgYPdEm9fBkFl4F18UsLAxZVwKO/VM4jdfF79V2Ckt7lUG9SkoZhVPzHKuCfUAW/V/53J/tSTL99Ni+/RFXhXYBVXIvw/F2lfr7P/1LPODLpFzTp9tuaflUL/3Hg/Ast/E7oH9HCSWThL2/h3PcImZe3cOpjWTj5US2cRhb+8haOE8yTTZx8NhNnPpaJ0//OxH9S/v/CxB+c7UVMnEUm/gomzvBvIIpzH8vE2Y8axe+09uSeTWTzz2HzBI6/vs3j2NO18Fc6gY/aHXcvPuQEXtcJ0OQbcALEx3YC+EftsbuXGXICr+sE+CePuT6jE3jsjQcfyQl81E49/L+/r4Agfvi+guFE51v86HnuKeD/3L1bAJ7sH+wTRfLRwxNizSEvIGu+ydNi/81ZIIX+M4S+rutPNfmprOAka5yH463y8JOvT3ZLo1osm29Og8GLMxwJH4im4CcOTl/66ldV+eD+00P8zQFf1eX9XvwTz1Hgab4+7IsK7nfSn1iKfXDU3QM/6UEJDI6jy305/6gSwmh3frjvCPn6BBZ+cxkddjxhuAn7HeFoFjZsYHc7oFgIHv5FmSR8GUV/OAn3ULQJJYxR8EC6/45RGczEIe9/2IuT/XekicPpIxjdX4rk+uNNvD8V3R8H14ViLGvh0HtiBEuaBGytYCTef8bhQSRFmQQHt5j+tRj324zZj4tgDJwfhN1vgydihy22PwM5bKs4D+5mOGZ4cg5emmEIE3wDt6AMMBx6aYzhMIvA+u85UgXnAFca9sOCIunhG9yk+s/DXopj746neNaksf5KbF8Md9uERfcFzdC0RfHDPfImxeOf75Zi77ZNqpfm3V6sPzMJKmK4U7K/C47rS4E0yeEu+vslh1+QhEn29Uj2903i/d8UKBuqf47+DQv9Js3f7aP7Gqf7Tcwi+2uCeiXBLZL9AX313W2yFtlLAAdf9vv7a5FwF6aSOKgZttcOPGTYuqv1fk9fJ4Q13IU11Ib7jVWkD7Q63f6DEzRH/GONt/9M/sElCjrCH9kAXK3QS/4pZkD02udYkxgk398brBwSIy1y0B9u0UNhclavGoqgLZyEt09R4Mn6WqPuCqTfS1AW0b+xgu
SBkob6YkBJ3231yuxLC2xrxGBaLGeSvVKGM5EccXd+qr8L6s7m+uvjNNBS/yuSsai+IEE5WxTRWxv4Sb9BDL/rbYYxKRr7vK83SALIrL9Ifyc0NijDGrwiDqycHh6JHswVG7ZNmsO+7OU+H3u3BS9Gs4MPwEyageLAe9nTvatgSXDn/U4W1Dx8mH+AzyHhmy/gVl8Q+PA12d8wcAnDBj+USn9qngTPSg56NIetfnwS/JgftlWShV6D+/wN0XszUKkacVfTuDb4AJxmTbyv4MH10f31GXNwWP/g/UfsbtvsK3jY2Vv8P7TVPy5r9W4HZ012UE+v9L4AgTfpX3YF7hb4IHq4b+5zIZIMc7e3H5EAvnd4BtrsXcOwk+kFCPYxg7Pr9935PZMenGf/sPiwqYLavZN2//R3Kv9dM6N/Eu9/z8zIezPrb4fvb713ADxtkYPLB4rAhrKx6MHj4Bbb2xOOW4OIKJIElndnJebgowfLIIaTsBywp34LXIvEiC/21BdSb0+DXu7sqT+Whx65Nxno1fstnO2P6guOIyyCutcq0dvWoNUhWP0DYhrfK6mv7f7X4IZ77ROchjOD1Zv4cLG+jgfp0+AwfFAWyZn3m70R9QOC8ANwNQw5bHIqzoHD+tjxD96X0ecPMIbi/SZLg0A3nGm4F26QLHBFODvYGQ/umMXuNnu143cfLPzu8YCJMXe/I8x/7rfg/wjq7lj1HxBn/rkz4sEY7j+QYP9wnt6w/rl/GE79B4bt4QM8gBweDZQOORzSV+NwU8ABDw8JHMCdhZLDQw/bDHW3985uyDtzMXGM/7yXH9wCMM++FnDG6sM3wQHvxH4xG/LebKCX/mI2d6p1dZwdPBkU3Wc/SgxBEz4Rcb+P7CVB8cB9DQkVkOHgGXrTHDwdARx43yMBfC9tkyBpGT4Ntzz4O/ggJPnlQUhicGYEMJf+QqCGhlSIhOlKHxhI4L+J3s+Cc1h3gQWECIIczsBod2GHAM9xt9V7hjvnyFnEkHgxIIANbhOHAW549v557svhtz2J9McDNj64j/+587YgZ7zzGyxm8UOchp733u6Jr6LrYMQUcDkEPRglZ95tUYNh9/VLMRbeVy0OVdVvMUOYH8IoKNjB9VrE4Kspa6gm4IaGuuNBJfYV398Q2SeZBAUib59gDalA3yyFDusugFMg+Axhmxok3ds/BVXSR9m7vUNwI3DgugYjIDSqT8FBRKDwPrEekoUhXb3bRw1ubTAmi6KGRMEcEosh8vTPCRwKPYQvGN3xXsFDzCfvsgKTHiLlsHfYBqk63Rfj4ODo3msCm77LKnhQQ8RwOLjr+zSgL9J/wA3Sg2cZnnFwEiA3YL/KDaghLb1LSYdkdEjSGOu+zQDqom8gAJvoXRCU/+BlcHOI+HcxvRcLiP1DJjXouk9GQcJ/t8UOzzLYBGENQQEkGebd1pD13G3j93v5r/YCK8WHpA37LEaQnuCwIXTnsPqyxYf0hraGlAwoiRmUDZoNQ2QCDqsPmM8R52EV/BnrvGt1mL3YBm93t/l5X6+0PvqCMDkonqDNQXuDJTPDpsUNrRlcxSlMw3vHTcNa7BVD9+fAmaEZit1VDs0Am+2FT3PQevu2E4wkfVRngOPtx+RgW9Ia3F7fkuyFxQwZSS/D/sg+LNI8+H7Y6tUzeH0gbWtwoRSL3eWUIDsx77eG+rnbVknYTuufjRoaF3fb2N3evvcQpkBD6YE2J3jePmyAbMYc2hqD1sihIXq3b8hYmbswNDzjEJr60zIgCPR9HXjfhh5i55A1DvknDoIJORixSsJQRXxJ+e+2gfnfb/XKxAeTBs8zOIXhnvvch6fvbI6CgWQwwsHKetsjvqTO/eNSQyimmSESM0PbAT4paYN0TxtCJQV/OZRLfxe9fyVBhnG/1Vvx54Jjflv6+G9LH65ufqSfBXZ/XSqvOO/K6vDNZc+Bl0f/Czui/8H/75v7G97L+c/3nUXYN4fBjtRvvg6uFezsHDpXvz
</diagram></mxfile>
|
2110.06084/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,35 @@
# Introduction

In this study, we analyze linear G-CNNs in a binary classification setting, where hidden layers perform equivariant operations over a finite group $G$, and networks have no nonlinear activation function after their linear group operations. Inputs $\vx_i$ are vectors of dimension $|G|$ (*i.e.,* vectorized group functions $\vx:G \to \mathbb{R}$), and targets $y_i$ are scalars taking the value of either $+1$ or $-1$. Hidden layers in our G-CNNs perform cross-correlation over a group $G$, defined as $$\begin{equation}
(\vg \star \vh)(u) = \sum_{v \in G} \vg(uv) \vh(v)
\label{eq:cross-correlation}
\end{equation}$$ where $\vg, \vh:G \to \mathbb{R}$. Note that the above is equivariant to the left action of the group, *i.e.,* if $w \in G$ and $\vg_w'(u) = \vg(wu)$, then $(\vg_w' \star \vh)(u) = (\vg \star \vh)(wu)$. The final layer of our G-CNN is a fully connected layer mapping vectors of length $|G|$ to scalars. We note that this final layer will in general not produce functions that are invariant to the group action, as strictly enforcing group invariance in this linear setting would yield trivial outputs (only scalings of the average value of the input). Nonetheless, this model still captures the composed convolutions of G-CNNs, and is similar in construction to many practical G-CNN models, whose earlier G-convolutions still capture useful high-level equivariant features. For instance, the spherical CNN of @cohen2018spherical also has a final fully connected layer.

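The cross-correlation and its left-equivariance can be checked numerically. A minimal sketch of ours (not the paper's code), using the cyclic group $\mathbb{Z}_8$ encoded by its Cayley table instead of $D_8$ for brevity:

```python
import numpy as np

def cross_correlate(g, h, mul):
    # (g ⋆ h)(u) = sum_v g(u·v) h(v), with the group product given by a
    # Cayley table `mul`, so any finite group can be plugged in.
    n = len(g)
    return np.array([sum(g[mul[u][v]] * h[v] for v in range(n)) for u in range(n)])

# Cyclic group Z_8 as an illustrative example group (u·v = u+v mod n).
n = 8
mul = [[(u + v) % n for v in range(n)] for u in range(n)]

rng = np.random.default_rng(0)
g, h = rng.normal(size=n), rng.normal(size=n)

# Equivariance to the left action: if g'_w(u) = g(wu),
# then (g'_w ⋆ h)(u) = (g ⋆ h)(wu).
w = 3
g_w = np.array([g[mul[w][u]] for u in range(n)])
corr = cross_correlate(g, h, mul)
lhs = cross_correlate(g_w, h, mul)
rhs = np.array([corr[mul[w][u]] for u in range(n)])
assert np.allclose(lhs, rhs)
```

The same check goes through for any finite group once `mul` is replaced by its Cayley table.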
Analogous to the discrete Fourier transform, there exists a *group* Fourier transform mapping a function into the Fourier basis over the irreps of $G$.

{#fig:correlation width="3.3in"}

::: {#def:group_fourier_transform .definition}
**Definition 1** (Group Fourier transform). Let $f:G \to \mathbb{C}$. Given a fixed ordering of $G$, let $\ve_u$ be the standard basis vector in $\mathbb{R}^{|G|}$ that is $1$ at the location of $u$ and $0$ elsewhere. Then, $\vf = \sum_{u \in G} f(u) \ve_u$ is the vectorization of $f$. Given ${}\widehat{G}$ a complete set of unitary irreps of $G$, let $\rho \in {}\widehat{G}$ be a given irrep of dimension $d_\rho$, $\rho:G \longrightarrow \operatorname{GL}\left(d_\rho,\,\mathbb{C}\right)$[^2]. The group Fourier transform of $f$, $\widehat{f}: \widehat{G} \rightarrow \mathbb{C}$, evaluated at a representation $\rho$ is defined as [@terras1999fourier] $$\begin{equation}
\widehat{f}(\rho) = \sum_{u \in G} f(u) \rho(u).
\end{equation}$$ By choosing a fixed ordering of $\widehat{G}$, one can similarly construct $\widehat{\vf}$ as a block-diagonal matrix version of $\widehat{f}$ (as in [4](#fig:correlation){reference-type="ref+label" reference="fig:correlation"}). We define $\mathcal{F}_M$ to be the *matrix* Fourier transform that takes $\vf$ to $\widehat{\vf}$: $$\begin{align}
\widehat \vf = \mathcal{F}_M \vf = \bigoplus_{\rho \in {}\widehat{G}} \widehat{f}(\rho)^{\oplus d_\rho} \;\in\operatorname{GL}\left(|G|, \,\mathbb{C}\right).
\end{align}$$ $\widehat \vf$ (equivalently $\mathcal{F}_M \vf$) is shorthand notation for the complete Fourier transform. Furthermore, by vectorizing the matrix $\widehat{\vf}$, there is a *unitary* matrix $\mathcal{F}$ taking $\vf$ to $\widehat{\vf}$, analogous to the standard discrete Fourier matrix. We use the following explicit construction of $\mathcal{F}$: denoting $\ve_{[\rho,i,j]}$ as the column-major vectorized basis for element $\rho_{ij}$ in the group Fourier transform, we can form the matrix $$\begin{equation}
\mathcal{F}= \sum_{u \in G} \sum_{\rho \in {}\widehat{G}} \frac{\sqrt{d_\rho}}{\sqrt{|G|}} \sum_{i,j=1}^{d_\rho} \rho(u)_{ij} \ve_{[\rho,i,j]} \ve_u^{T}.
\end{equation}$$ Intuitively, for each group element $g$, the matrix $\mathcal{F}$ contains all the irrep images $\rho(g)$ 'flattened' into a single column. See [9](#app:gfourier){reference-type="ref+label" reference="app:gfourier"} for further exposition.
:::

Convolution and cross-correlation are equivalent, up to scaling, to matrix operations after Fourier transformation. For example, for cross-correlation ([\[eq:cross-correlation\]](#eq:cross-correlation){reference-type="ref+label" reference="eq:cross-correlation"}), $\widehat{(g \star h)} (\rho) = {}\widehat{g}(\rho) {}\widehat{h}(\rho)^\dagger$. This simple fact, illustrated in [4](#fig:correlation){reference-type="ref+label" reference="fig:correlation"}, is behind the proofs of our implicit bias results.

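For an abelian group such as $\mathbb{Z}_n$, every irrep is a one-dimensional character, so the group Fourier transform reduces to the ordinary DFT and the identity $\widehat{(g \star h)}(\rho) = \widehat{g}(\rho)\,\widehat{h}(\rho)^\dagger$ can be sanity-checked with `np.fft` (a sketch of ours, using the characters $\rho_k(u) = e^{-2\pi i k u/n}$):

```python
import numpy as np

# Sanity check of the correlation identity on the cyclic group Z_n, whose
# 1-D irreps make the group Fourier transform coincide with np.fft.fft.
n = 8
rng = np.random.default_rng(1)
g, h = rng.normal(size=n), rng.normal(size=n)

# Cross-correlation with the paper's convention: (g ⋆ h)(u) = sum_v g(u·v) h(v).
corr = np.array([sum(g[(u + v) % n] * h[v] for v in range(n)) for u in range(n)])

# In Fourier space: ĝ(rho) ĥ(rho)† — for 1-D irreps and real h, † is conjugation.
assert np.allclose(np.fft.fft(corr), np.fft.fft(g) * np.conj(np.fft.fft(h)))
```

For a nonabelian group the same identity holds blockwise, with $\widehat{h}(\rho)^\dagger$ the conjugate transpose of a $d_\rho \times d_\rho$ block.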

<figure id="fig:architectures" data-latex-placement="t!">
<figure id="fig:arch_practice">
<embed src="figures/finalir_upper.pdf" />
<figcaption>A practical G-CNN architecture for <span class="math inline"><em>D</em><sub>8</sub></span>.</figcaption>
</figure>
<figure id="fig:arch_theory">
<embed src="figures/finalir_lower.pdf" />
<figcaption>The linear G-CNN architecture we analyze for <span class="math inline"><em>D</em><sub>8</sub></span>.</figcaption>
</figure>
<figcaption>A comparison between a practical G-CNN architecture for the group <span class="math inline"><em>D</em><sub>8</sub></span> and the corresponding linear idealization that we analyze theoretically. <span class="math inline"><em>D</em><sub>8</sub></span> is a group of size 8, consisting of the rotations by multiples of 90 degrees together with the reflections of the square. In a practical architecture, as shown in the first panel, the input may be an image, or anything upon which <span class="math inline"><em>D</em><sub>8</sub></span> acts; it can be convolved over <span class="math inline"><em>D</em><sub>8</sub></span> with respect to a learnable filter, yielding a function on <span class="math inline"><em>D</em><sub>8</sub></span> (i.e., a function that takes <span class="math inline">8</span> values). The following layers intersperse <span class="math inline"><em>D</em><sub>8</sub></span>-convolutions, possibly with several channels, and nonlinearities, before a final fully connected layer yields a scalar output. In contrast, the second panel shows the simplified architecture we analyze here: there are no nonlinearities and only a single channel is used. Furthermore, we assume the input is already a function on <span class="math inline"><em>D</em><sub>8</sub></span>, i.e., an 8-dimensional vector. (One can think of this input as a fixed featurization of some image convolved over <span class="math inline"><em>D</em><sub>8</sub></span> with a <em>fixed</em> filter.)</figcaption>
</figure>

2110.11852/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
2110.11852/main_diagram/main_diagram.pdf
ADDED
|
Binary file (38.2 kB). View file
|
|
|
2110.11852/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,75 @@
# Introduction

In the literature of time series analysis, AR and ARMA models are most commonly used to fit and predict time series data. For example, given the historical traffic volume $z_1, z_2, ..., z_{t-1}$ on a specific road, the government wants to predict future traffic $z_t, z_{t+1}, ...$. Generally speaking, each observation $z_t$ can depend on all previous ones in a nonlinear manner, and the relationship may vary as $t$ differs. AR and ARMA models are linear and assume a formulation whose coefficients depend only on the *time lag* between the target and the past observation, shared across different target timestamps $t$. Thus, they are simple and parsimonious, and have proved useful in many applications.

A time series is a sequence of random variables, and the white noise sequence plays a necessary role in time series models as the source of randomness. In the main text, we refer to the white noise term as the additive error because it is an addend in the model equation.

Formally, a $p$th order autoregressive model, a.k.a. an AR($p$) model, $\{Z_t\}$ satisfies $$\begin{equation*}
Z_t = \theta_0 + \phi_1Z_{t-1} + \phi_2Z_{t-2} + \cdots + \phi_pZ_{t-p} + a_t,
\end{equation*}$$ where $p\geq 0$ is an integer, the $\phi$'s are real parameters, and $\{a_t\}$ is a white noise sequence. An autoregressive-moving-average model of orders $p$ and $q$, a.k.a. an ARMA$(p, q)$ model, satisfies $$\begin{equation*}
Z_t = \theta_0 + \phi_1Z_{t-1} + \phi_2Z_{t-2} + \cdots + \phi_pZ_{t-p} + a_t - \theta_1a_{t-1} - \theta_2 a_{t-2} - \cdots - \theta_q a_{t-q},
\end{equation*}$$ where $p, q\geq 0$ are integers and the $\theta$'s are real parameters for the MA part. *Invertibility* is a property of time series models that characterizes whether the information sequence $\{a_t\}$ can be recovered from past observations, and it is always required in time series applications. If an ARMA model is invertible, then it has an *AR representation* $$\begin{equation*}
Z_t = \pi_1 Z_{t-1} + \pi_2 Z_{t-2} + \cdots + a_t ,
\end{equation*}$$ which takes the form of an AR($\infty$) model, where the infinitely many $\pi$'s can be fully determined by the $p+q$ parameters, i.e., the $\phi$'s and $\theta$'s. In the main text, we claim that the ARMA model simplifies the AR model and give an example using ARMA$(1, 1)$; see Eq. (7). We are referring to the fact that any invertible ARMA model has an AR representation whose parameters can be fully determined by the (much fewer) parameters of the ARMA model.

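As an illustrative sketch of ours (not from the paper): for an ARMA$(1,1)$ model $Z_t = \phi Z_{t-1} + a_t - \theta a_{t-1}$ with $|\theta| < 1$, the AR representation has coefficients $\pi_j = (\phi - \theta)\theta^{j-1}$, so the noise can be recovered from past observations, which we verify numerically:

```python
import numpy as np

# An invertible ARMA(1,1): Z_t = phi*Z_{t-1} + a_t - theta*a_{t-1},
# whose AR(inf) representation has pi_j = (phi - theta) * theta**(j-1).
phi, theta = 0.7, 0.4
rng = np.random.default_rng(0)

# Simulate the process.
T = 2000
a = rng.normal(size=T)
z = np.zeros(T)
for t in range(1, T):
    z[t] = phi * z[t - 1] + a[t] - theta * a[t - 1]

# Recover the noise via the truncated AR representation and compare.
k = 50  # truncation order; theta**k is negligible here
pi = (phi - theta) * theta ** np.arange(k)            # pi_1, ..., pi_k
a_hat = np.array([z[t] - pi @ z[t - k:t][::-1] for t in range(k, T)])
assert np.allclose(a_hat, a[k:], atol=1e-6)
```

This is exactly the sense in which two ARMA parameters determine infinitely many AR coefficients.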
<figure id="fig:shared1x1" data-latex-placement="ht">
<embed src="figures-shared1x1.pdf" />
<figcaption>Visualization of a layer in the <span>Shared-Lag</span> modification of DenseNet in Eq. (9).</figcaption>
</figure>

Figure [4](#fig:shared1x1){reference-type="ref" reference="fig:shared1x1"} visualizes the structure of a modified DenseNet layer following Eq. (9), where cubes represent feature tensors, rectangles are convolutions annotated with kernel size and output channels, and $\oplus$ denotes elementwise addition. Numbers inside dashed circles correspond to the following notes:

1. The operations inside the large dashed rectangle represent the $2$nd layer in a dense block.

2. This 1x1 convolution is shared among layers within a stage.

3. Dense connections are implemented by this storage area of past layers' outputs, and $0$ is used as a placeholder.

4. This arrow represents a virtual "conveyor belt" indicating that the feature maps are always ordered from the most recent layer to the most distant layer.

$\textrm{Conv3}^t$ and $\textrm{Conv1}_{0}^t(\cdot)$ in Eq. (9) are represented by the unshared 3x3 and 1x1 convolutions (white rectangles), and the combination of $\textrm{Conv1}_1, \textrm{Conv1}_2, ..., \textrm{Conv1}_{15}$ is represented by the shared 1x1 convolution (colored rectangle with note 2). Feature maps from previous layers are ordered such that Eq. (9) holds. The Shared-Ordinal variant of DenseNet can be implemented similarly by adjusting the ordering (see note 3 above) and disabling the "conveyor belt" (see note 4 above).

Figure 2 in the main paper shows the $L_1$ norm of the weights of the shared 1x1 convolutions. We observe the following main difference between the two modes of weight sharing:

- When weights are shared based on lag, *the most recent layer* corresponds to the largest weight, and the $L_1$ norm decays as the lag increases.

- When weights are shared according to the ordinal number of the layers, *layers closer to the input* correspond to larger weights, and the first few layers have similar weights.

We also claimed that the quickly decaying pattern in the plot for Shared-Lag could be an exponential decay, and we provide numerical support below. We fit exponential functions to the curves in Figure 2 and use the $R$-squared metric to compare goodness-of-fit. $R^2 \in (0, 1)$ can be interpreted as the proportion of variance explained by a model, i.e., a fitted curve. The $R^2$ values for the three curves in the Shared-Lag subplot are $0.95, 0.86, 0.90$, while those for Shared-Ordinal are $0.77, 0.91, 0.87$, implying a poorer fit.

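The fitting procedure can be sketched as follows (our own minimal version on synthetic data; the actual curves come from Figure 2): fit $y = a\,e^{bx}$ by least squares in log-space, then compute $R^2$ on the original scale:

```python
import numpy as np

# Synthetic decaying curve with small multiplicative noise, standing in
# for an L1-norm-vs-lag curve from Figure 2 (illustrative only).
x = np.arange(1, 16)
rng = np.random.default_rng(0)
y = 2.0 * np.exp(-0.3 * x) * np.exp(0.05 * rng.normal(size=x.size))

# Exponential fit: linear least squares on log(y) gives slope b, intercept log(a).
b, log_a = np.polyfit(x, np.log(y), 1)
y_fit = np.exp(log_a + b * x)

# R^2 on the original scale: proportion of variance explained by the fit.
r2 = 1 - np.sum((y - y_fit) ** 2) / np.sum((y - np.mean(y)) ** 2)
assert 0 < r2 <= 1
```

A curve that genuinely decays exponentially yields $R^2$ close to 1 under this procedure, which is the basis of the comparison above.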

This subsection shows that the RLA module in Eq. (11) implements layer aggregation as in Eq. (1), and that, under a linearity assumption, Eq. (11) can also be shown to have the additive form in Eq. (2).

Recall the formulation of the RLA mechanism in Eq. (11), $$\begin{equation*}
h^t = g^t (h^{t-1}, x^{t-1})\hspace{5mm}\text{and}\hspace{5mm}
x^t=f^t (h^{t-1}, x^{t-1}).
\end{equation*}$$ Recursively substituting $h^s = g^s (h^{s-1}, x^{s-1})$ for $s = t-1, t-2, ...$ into the first equation, we have $$\begin{align*}
h^t & = g^t (h^{t-1}, x^{t-1}) \\
& = g^t (g^{t-1} (h^{t-2}, x^{t-2}), x^{t-1}) \\
& = \cdots,
\end{align*}$$ which is a function of $x^{t-1}, x^{t-2}, ..., x^0$ and a constant $h^0$. Thus, Eq. (1) is satisfied.

If we further assume that there exist functions $g_1^s$ and $g_2^s$ such that $g^s (h^{s-1}, x^{s-1}) = g_1^s (h^{s-1}) + g_2^s (x^{s-1})$ and that $g_1^s$ is additive, i.e., $g_1^s(u + v) = g_1^s(u) + g_1^s(v)$, for all $s$, we have $$\begin{align*}
h^t & = g_1^t (h^{t-1}) + g_2^t (x^{t-1}) \\
& = g_1^t (g_1^{t-1} (h^{t-2}) + g_2^{t-1} (x^{t-2})) + g_2^t (x^{t-1}) \\
& = g_1^t (g_1^{t-1} (h^{t-2})) + g_1^t (g_2^{t-1} (x^{t-2})) + g_2^t (x^{t-1}) \\
& = \cdots,
\end{align*}$$ which is a summation over transformed $x^{t-1}, x^{t-2}, ..., x^0$ together with a constant $g_1^t(g_1^{t-1}(\cdots g_1^1(h^0) ))$.


# Method

Consider the RLA module given by Figure 3 with update $$\begin{equation*}
h^t=g_2[g_1(y^t)+h^{t-1}]\hspace{5mm}\text{and}\hspace{5mm}
x^t=y^t+x^{t-1},
\end{equation*}$$ where $y^t=f_1^t [\text{Concat}(h^{t-1}, x^{t-1})]$ and $g_1, g_2$ are the shared 1x1 and 3x3 convolutions. As discussed in the main text, ResNets have a layer aggregation interpretation, i.e., $x^t$ is an aggregation of the residual information $y^t$. According to the ablation study in Sections 4.4 and [7.4](#sec:ablation_cifar){reference-type="ref" reference="sec:ablation_cifar"}, it is preferable for the RLA module to perform an aggregation of the residual information $y^t$ instead of the already aggregated $x^t$. Thus, in the following, we show that $h^t$ is an aggregation of $y^t$ rather than of $x^t$.

When nonlinearities are ignored, the shared convolution $g_2$ can be distributed to the two terms, i.e., $$\begin{equation*}
h^t = g_2(h^{t-1}) + g_2 \circ g_1(y^t).
\end{equation*}$$ Furthermore, recursively applying the above equation, we have $$\begin{align}
h^t = \, & g_2(h^{t-1}) + g_2 \circ g_1(y^t) \nonumber \\
= \, & g_2(g_2(h^{t-2}) + g_2 \circ g_1(y^{t-1})) + g_2 \circ g_1(y^t) \nonumber \\
= \, & g_2^2(h^{t-2}) + g_2^2 \circ g_1(y^{t-1}) + g_2 \circ g_1(y^t) \nonumber \\
& \vdots \nonumber \\
= \, & \sum_{k=1}^t g_2^k \circ g_1 (y^{t-k+1}) + g_2^t(h^0), \label{eq:RLA-LA}
\end{align}$$ where $\circ$ denotes the composition of convolution functions, and, with a slight abuse of notation, the composition of the same function $k$ times is denoted by its $k$-th power, e.g., $$\begin{matrix} g_2^k = & \underbrace{ g_2 \circ g_2 \circ \cdots \circ g_2 } \\ & k \text{ times} \end{matrix} ,$$ not to be confused with the time-varying functions whose superscript denotes the time index. Thus, the RLA hidden feature maps $h^t$ are aggregations of previous residual information.
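The unrolled form can be verified numerically when $g_1, g_2$ are genuinely linear maps; a small sketch of ours under that assumption, with the convolutions replaced by matrices:

```python
import numpy as np

# When g1, g2 are linear maps (matrices G1, G2), the recursion
# h^t = G2 @ (h^{t-1} + G1 @ y^t) unrolls to
# h^t = sum_{k=1}^t G2^k @ G1 @ y^{t-k+1} + G2^t @ h0.
rng = np.random.default_rng(0)
d = 4
G1, G2 = rng.normal(size=(d, d)), rng.normal(size=(d, d))
h0 = rng.normal(size=d)
ys = [rng.normal(size=d) for _ in range(6)]  # y^1, ..., y^6

# Run the recursion h^t = g_2(h^{t-1}) + g_2∘g_1(y^t).
h = h0
for y in ys:
    h = G2 @ (h + G1 @ y)

# Closed-form aggregation from the unrolled equation.
t = len(ys)
agg = np.linalg.matrix_power(G2, t) @ h0
for k in range(1, t + 1):
    agg += np.linalg.matrix_power(G2, k) @ G1 @ ys[t - k]  # ys[t-k] is y^{t-k+1}
assert np.allclose(h, agg)
```

With shared maps, the coefficient of $y^{t-k+1}$ depends only on the lag $k$ (through $G_2^k G_1$), mirroring the lag-dependent AR coefficients discussed above.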

Moreover, the patterns of layer aggregation in ResNets and in the RLA module in Eq. [\[eq:RLA-LA\]](#eq:RLA-LA){reference-type="eqref" reference="eq:RLA-LA"} are very similar to the AR($\infty$) and ARMA($1, 1$) models introduced in Sections 3.2 and [6.2](#sec:time-series){reference-type="ref" reference="sec:time-series"}. This is because, for the proposed RLA module in Figure 3, the convolutions $g_l^t$ in Eq. (2) are solely determined by the two shared convolutions $g_2$ and $g_1$.
2111.14893/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2111.14893/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,83 @@
# Introduction

With the recent advances in dense prediction computer vision problems [\[17,](#page-8-0) [25,](#page-8-1) [38,](#page-9-0) [43,](#page-9-1) [47,](#page-9-2) [54,](#page-9-3) [57,](#page-9-4) [64,](#page-10-0) [65,](#page-10-1) [68,](#page-10-2) [72,](#page-10-3) [73\]](#page-10-4), where the aim is to produce pixel-level predictions (*e.g*. semantic and instance segmentation, depth estimation), the interest of the community has started to shift towards the more ambitious goal of learning several of these problems jointly by multi-task learning (MTL) [\[7\]](#page-8-2). Compared to standard single task learning (STL), which learns an individual model for each task, MTL aims at learning a single model for multiple tasks with a better efficiency and generalization tradeoff, sharing information and computational resources across them.

Recent MTL dense prediction methods broadly focus on designing MTL architectures [\[4,](#page-8-3) [5,](#page-8-4) [19,](#page-8-5) [24,](#page-8-6) [37,](#page-9-5) [39,](#page-9-6) [44,](#page-9-7) [49,](#page-9-8) [58,](#page-9-9) [60,](#page-9-10) [66,](#page-10-5) [76–](#page-10-6)[78\]](#page-10-7) that enable effective sharing of information across tasks, and on improving the MTL optimization [\[12,](#page-8-7)[13,](#page-8-8)[21,](#page-8-9) [23,](#page-8-10)[30,](#page-8-11)[33,](#page-9-11)[34,](#page-9-12)[37,](#page-9-5)[51,](#page-9-13)[67\]](#page-10-8) to balance the influence of each task-specific loss function and to prevent interference between the tasks in training. We refer to [\[59\]](#page-9-14) for a more comprehensive review. One common and strong assumption in these works is that each training image has to be labelled for all the tasks (Fig. [1\(](#page-0-0)a)). There are two main practical limitations to this assumption. First, curating multi-task image datasets (*e.g*. KITTI [\[20\]](#page-8-12) and CityScapes [\[15\]](#page-8-13)) typically involves using multiple sensors to produce ground-truth labels for several tasks, and obtaining all the labels for each image requires very accurate synchronization between the sensors, which is a challenging research problem by itself [\[61\]](#page-9-15). Second, imagine a scenario where one would like to add a new task to an existing image dataset which is already annotated for another task, and obtaining the ground-truth labels for the new task requires a different sensor (*e.g*. a depth camera) to the one used to capture the original data. In this case, <span id="page-1-0"></span>labelling the previously recorded images for the new task will not be possible for many visual scenes (*e.g*. uncontrolled outdoor environments). Such real-world scenarios lead to partially annotated data and thus call for algorithms that can learn from such data.

<span id="page-0-0"></span>

Figure 1. Multi-task partially supervised learning. We look at the problem of learning multiple tasks from partially annotated data (b), where not all the task labels are available for each image, which generalizes over the standard supervised setting (a) where all task labels are available. We propose a MTL method that employs a shared feature extractor (fϕ) with task-specific heads (hψ) and exploits label correlations between each task pair by mapping them into a *joint pairwise task-space* and penalizing inconsistencies between the provided ground-truth labels and predictions (c).

In this paper, we look at a more realistic and general case of the MTL dense prediction problem where not all the task labels are available for each image (Fig. [1\(](#page-0-0)b)) and we call this setting *multi-task partially supervised learning*. In particular, we assume that each image is at least labelled for one task and each task at least has few labelled images and we would like to learn a multi-task model on them. A naive way of learning from such partial supervision is to train a multi-task model only on the available labels (*i.e*. by setting the weight of the corresponding loss function to 0 for the missing task labels). Though, in this setting, the MTL model is trained on all the images thanks to the parameter sharing across the tasks, it cannot extract the task-specific information from the images for the unlabelled tasks. To this end, one can extend existing single-task semi-supervised learning methods to MTL by penalizing the inconsistent predictions of images over multiple perturbations for the unlabelled tasks (*e.g*. [\[14,](#page-8-14)[29,](#page-8-15)[32,](#page-9-16)[36,](#page-9-17)[56\]](#page-9-18)). While this strategy ensures consistent predictions over various perturbations, it does not guarantee consistency across the related tasks.
An orthogonal source of information that has recently been used in MTL is cross-task relations [\[40,](#page-9-19)[50,](#page-9-20)[69\]](#page-10-9), which aim at producing consistent predictions across task pairs. Unfortunately, existing methods are not directly applicable to learning from partial supervision, as they require either each training image to be labelled with all the task labels [\[50,](#page-9-20) [69\]](#page-10-9) or cross-task relations that can be analytically derived [\[40\]](#page-9-19). In our setting, compared to [\[40,](#page-9-19)[50,](#page-9-20)[69\]](#page-10-9), fewer training images are available with ground-truth labels for each task pair, and thus the relationship is harder to learn. In addition, unlike [\[40\]](#page-9-19), we focus on the general setting where the labels of one task cannot be accurately obtained from another (*e.g*. depth from semantic segmentation), and hence learning exact mappings between two task labels is not possible.
Motivated by these challenges, we propose an MTL approach that shares a feature extractor between tasks and also learns to relate each task pair in a learned *joint pairwise task-space* (illustrated in Fig. [1\(](#page-0-0)c)), which encodes only the information shared between the two tasks and avoids the ill-posed problem of recovering the labels of one task from another. There are two challenges to this goal. First, naive learning of the joint pairwise task-spaces can lead to trivial mappings that send all predictions to the same point, such that the tasks produce artificially consistent encodings. To prevent this, we regularize the learning of each mapping by penalizing its output to retain high-level information about the input image. Second, the computational cost of modelling every task-pair relation grows quadratically with the number of tasks. To address this challenge, we use a single encoder network to learn all the pairwise task mappings, but dynamically estimate its weights by conditioning them on the target task pair.
Our main contributions are as follows. We propose a new and practical setting for multi-task dense prediction problems and a novel MTL model that enforces cross-task consistency between pairs of tasks in joint pairwise task-spaces, each encoding the commonalities of a pair, in a computationally efficient manner. We show that our method can be incorporated into several architectures and significantly outperforms the related baselines on three standard multi-task benchmarks.
# Method
Let $\boldsymbol{x} \in \mathbb{R}^{3 \times H \times W}$ and $\boldsymbol{y}^t \in \mathbb{R}^{O^t \times H \times W}$ denote an $H \times W$ dimensional RGB image and its dense label for task $t$ respectively, where $O^t$ is the number of output channels for task $t$. Our goal is to learn a function $\hat{y}^t$ for each task $t$ that accurately predicts the ground-truth label $\boldsymbol{y}^t$ of previously unseen images. While such a task-specific function can be learned for each task independently, a more efficient design is to share most of the computation across tasks via a common feature encoder, a convolutional neural network $f_{\phi}: \mathbb{R}^{3 \times H \times W} \to \mathbb{R}^{C \times H' \times W'}$ parameterized by $\phi$ that takes in an image and produces $C$ feature maps, each with $H' \times W'$ resolution, where typically $H' < H$ and $W' < W$. In this setting, $f_{\phi}$ is followed by multiple task-specific decoders $h_{\psi^t}: \mathbb{R}^{C \times H' \times W'} \to \mathbb{R}^{O^t \times H \times W}$, each with its own task-specific weights $\psi^t$, that decode the extracted features to predict the label for task $t$, *i.e*. $\hat{y}^t(\boldsymbol{x}) = h_{\psi^t} \circ f_{\phi}(\boldsymbol{x})$ (Fig. [2\(](#page-3-0)a)).
Let $\mathcal{D}$ denote a set of $N$ training images with their corresponding labels for $K$ tasks. Assume that for each training image $\boldsymbol{x}$, ground-truth labels are available only for some tasks; we use $\mathcal{T}$ and $\mathcal{U}$ to store the indices of the labelled and unlabelled tasks respectively, where $|\mathcal{T}| + |\mathcal{U}| = K$, $\mathcal{U} = \emptyset$ indicates that all labels are available for $\boldsymbol{x}$, and $\mathcal{T} = \emptyset$ indicates that no labels are available for $\boldsymbol{x}$. In this paper, we focus on the partially annotated setting, where each image is labelled for at least one task ($|\mathcal{T}| \geq 1$) and each task has at least a few labelled images.
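The split into labelled and unlabelled index sets can be made concrete with a small sketch. This is illustrative pure Python, not the paper's implementation; the task names and the per-image label dictionary are hypothetical placeholders:

```python
def split_tasks(image_labels, all_tasks):
    """Split the K task indices for one image into the labelled set T_n
    and the unlabelled set U_n, so that |T_n| + |U_n| = K."""
    T = [t for t in all_tasks if t in image_labels]      # tasks with ground truth
    U = [t for t in all_tasks if t not in image_labels]  # tasks without ground truth
    return T, U
```

An image annotated only for segmentation then yields `T = ['segmentation']` and `U = ['depth', 'normals']`.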
A naive way of learning $\hat{y}^t$ for each task on the partially annotated data $\mathcal{D}$ is to jointly optimize its parameters on the labelled tasks as follows:
<span id="page-2-1"></span>
$$\min_{\phi,\psi} \frac{1}{N} \sum_{n=1}^{N} \frac{1}{|\mathcal{T}_n|} \sum_{t \in \mathcal{T}_n} L^t(\hat{y}^t(\boldsymbol{x}_n), \boldsymbol{y}_n^t), \tag{1}$$
where $n$ is the image index and $L^t$ is the task-specific differentiable loss function. We denote this setting as (vanilla) MTL. Here, thanks to the parameter sharing in the feature extractor, its task-agnostic weights are learned on all the images. However, the task-specific weights $\psi^t$ are trained only on the labelled images.
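A minimal sketch of this objective, in pure Python with a squared-error stand-in for the task losses; the dictionary-based data layout is an assumption for illustration, not the paper's code:

```python
def vanilla_mtl_loss(predictions, labels):
    """Eq. (1): average each image's supervised loss over its labelled
    tasks only (1/|T_n|), then average over the batch (1/N).

    predictions: list over images of {task: prediction vector}
    labels:      list over images of {task: ground-truth vector}; a task
                 missing from this dict is unlabelled for that image.
    """
    def l2(pred, gt):  # stand-in for the task-specific loss L^t
        return sum((p - g) ** 2 for p, g in zip(pred, gt)) / len(pred)

    total = 0.0
    for pred_n, y_n in zip(predictions, labels):
        labelled = [t for t in pred_n if t in y_n]       # T_n
        per_image = sum(l2(pred_n[t], y_n[t]) for t in labelled)
        total += per_image / len(labelled)               # 1/|T_n|
    return total / len(predictions)                      # 1/N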
A common strategy to exploit such information from unlabelled tasks is to formulate the problem in a semi-supervised learning (SSL) setting. Recent successful SSL techniques [\[2,](#page-8-21)[53\]](#page-9-27) focus on learning models that produce consistent predictions for unlabelled images when their inputs are perturbed in various ways. This consistency objective can be added to Eq. [\(1\)](#page-2-1) as:
<span id="page-2-0"></span>
$$\min_{\phi,\psi} \frac{1}{N} \sum_{n=1}^{N} \left( \frac{1}{|\mathcal{T}_n|} \sum_{t \in \mathcal{T}_n} L^t(\hat{y}^t(\boldsymbol{x}_n), \boldsymbol{y}_n^t) + \frac{1}{|\mathcal{U}_n|} \sum_{t \in \mathcal{U}_n} L_u(e_r(\hat{y}^t(\boldsymbol{x}_n)), \hat{y}^t(e_r(\boldsymbol{x}_n))) \right), \tag{2}$$
where $L_u$ is the unsupervised loss function and $e_r$ is a geometric transformation (*i.e*. cropping) parameterized by the random variable $r$ (*i.e*. the bounding box location). In words, for the unsupervised part, we apply our model to the original input $\boldsymbol{x}$ and also to its cropped version $e_r(\boldsymbol{x})$, then crop the prediction corresponding to the original input, $e_r(\hat{y}^t(\boldsymbol{x}_n))$, before measuring the difference between the two with $L_u$. Note that we are aware of more sophisticated task-specific SSL methods for semantic segmentation [\[42,](#page-9-28) [45\]](#page-9-29) and depth estimation [\[22,](#page-8-22) [31\]](#page-8-23); however, combining them for multiple tasks, each with a different network design and learning formulation, is not trivial. Here we focus on one SSL strategy with a single perturbation type (*i.e*. random cropping) and an $L_u$ (*i.e*. mean squared error) that can be applied to several tasks.
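The crop-consistency term can be sketched as follows. This is a toy pure-Python version: `model` is any function producing a dense 2-D map, and the helper names are hypothetical:

```python
def crop(dense_map, box):
    """Crop a 2-D dense map (list of rows) to box = (top, left, h, w)."""
    top, left, h, w = box
    return [row[left:left + w] for row in dense_map[top:top + h]]

def mse(a, b):
    """Mean squared error between two dense maps of the same size (L_u)."""
    flat_a = [v for row in a for v in row]
    flat_b = [v for row in b for v in row]
    return sum((x - y) ** 2 for x, y in zip(flat_a, flat_b)) / len(flat_a)

def consistency_loss(model, image, box):
    """L_u(e_r(yhat(x)), yhat(e_r(x))): compare the crop of the
    full-image prediction with the prediction on the cropped input."""
    full_then_crop = crop(model(image), box)
    crop_then_pred = model(crop(image, box))
    return mse(full_then_crop, crop_then_pred)
```

A model whose output is purely pixelwise is crop-equivariant and incurs zero loss; a model whose output depends on the input size is penalized.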
While optimizing Eq. [\(2\)](#page-2-0) allows learning both task-agnostic and task-specific weights on the labelled and unlabelled data, it does not leverage cross-task relations, which can be used to further supervise the unlabelled tasks. Prior works [\[40,](#page-9-19) [69\]](#page-10-9) define the cross-task relations by a mapping function $m^{s \to t}$ for each task pair $(s, t)$ that maps the prediction for the source task $s$ to the labels of the target task $t$. The mapping function in [\[40\]](#page-9-19) is analytical, based on the assumption that target task labels can be computed from source labels in closed form. While such analytical relations are possible
<span id="page-3-2"></span><span id="page-3-0"></span>
Figure 2. Illustration of our method for multi-task partially supervised learning. Given an image, our method uses a shared feature extractor $f_{\phi}$ taking in the input image and task-specific decoders ($h_{\psi^s}$ and $h_{\psi^t}$) to produce predictions for all tasks (a). We compute the supervised loss $L^t$ for each labelled task. In addition, for each unlabelled task we regularize the cross-task consistency $L_{ct}$ between the unlabelled task's prediction $\hat{y}^s$ and the labelled task's ground truth $\boldsymbol{y}^t$ in a joint space (b). To learn the cross-task consistency efficiently, we propose a shared mapping function whose output is conditioned on the task pair (c), and we regularize the learning of the mapping function using the feature from $f_{\phi}$ to prevent trivial solutions.
only for certain task pairs, each mapping function in [69] is parameterized by a deep network whose weights are learned by minimizing $L_{ct}(m^{s \to t}(\boldsymbol{y}^s), \boldsymbol{y}^t)$, where $L_{ct}$ is a cross-task loss that measures the distance between the mapped source labels and the target labels. There are two limitations to this method in our setting. First, the training set contains a limited number of images labelled for both the source and target tasks ($\boldsymbol{y}^s$ and $\boldsymbol{y}^t$). Second, learning such pairwise mappings accurately is often not possible in our case, as the labels of one task can only be partially recovered from another task (*e.g*. semantic segmentation to depth estimation). Note that this ill-posed problem can be solved accurately only when strong prior knowledge about the data is available.
To employ cross-task consistency in our setting, we map each task pair $(s,t)$ to a lower-dimensional joint pairwise task-space where only the features common to both tasks are encoded (Fig. 2(b)). Formally, each pairwise task-space for $(s,t)$ is defined by a pair of mapping functions, $m_{\vartheta_s^{st}}: \mathbb{R}^{O^s \times H \times W} \to \mathbb{R}^D$ and $m_{\vartheta_t^{st}}: \mathbb{R}^{O^t \times H \times W} \to \mathbb{R}^D$, parameterized by $\vartheta_s^{st}$ and $\vartheta_t^{st}$ respectively. The cross-task consistency can be incorporated into Eq. (1) as follows:
$$\min_{\phi, \psi, \vartheta} \frac{1}{N} \sum_{n=1}^{N} \left( \frac{1}{|\mathcal{T}_n|} \sum_{t \in \mathcal{T}_n} L^t (\hat{y}^t(\boldsymbol{x}_n), \boldsymbol{y}_n^t) + \frac{1}{|\mathcal{U}_n|} \sum_{s \in \mathcal{U}_n, t \in \mathcal{T}_n} L_{ct} (m_{\vartheta_s^{st}} (\hat{y}^s(\boldsymbol{x}_n)), m_{\vartheta_t^{st}} (\boldsymbol{y}_n^t)) \right),$$
(3)
where $L_{ct}$ is the cosine distance (i.e. $L_{ct}(\mathbf{a}, \mathbf{b}) = 1 - (\mathbf{a} \cdot \mathbf{b})/(|\mathbf{a}||\mathbf{b}|)$). In words, along with the MTL optimization, Eq. (3) minimizes the cosine distance between the embeddings of the unlabelled task prediction $\hat{y}^s$ and the annotated task label $\boldsymbol{y}^t$ in the joint pairwise task-space. Here $m_{\vartheta_s^{st}}$
and $m_{\vartheta_t^{st}}$ are not necessarily equal, which allows treating the mappings from predicted and ground-truth labels differently. Note that one can also include the semi-supervised term $L_u$ in Eq. (3). However, we empirically found that it does not bring any tangible performance gain when used with the cross-task term $L_{ct}$.
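The cosine distance $L_{ct}$ itself is straightforward; a self-contained sketch over plain Python lists (the embeddings here are placeholder vectors, standing in for the joint-space outputs of the two mappings):

```python
import math

def cosine_distance(a, b):
    """L_ct(a, b) = 1 - (a . b) / (|a||b|) between two D-dim embeddings."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm
```

The distance is 0 for aligned embeddings, 1 for orthogonal ones and 2 for opposite ones, so minimizing it pulls the two tasks' joint-space encodings into the same direction regardless of their magnitudes.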
There are two challenges to learning non-trivial pairwise mapping functions in a computationally efficient way. First, the number of pairwise mappings to learn grows quadratically with the number of tasks. Although the mapping functions are only used in training, it can still be computationally expensive to train many of them jointly. In addition, learning an accurate mapping for each task pair can be challenging with limited labels. Second, the mapping functions can simply learn a trivial solution in which each task is mapped to a fixed point (*e.g*. the zero vector) in the joint space.
<span id="page-3-1"></span>Conditional joint task-pair mapping. To address the first challenge, as shown in Fig. 2(c), we propose to use a task-agnostic mapping function $\bar{m}_{\vartheta}$ with a single set of parameters $\vartheta$ whose output is conditioned both on the input task ($s$ or $t$) and on the task pair $(s, t)$ through an auxiliary network $a_{\theta}$. Concretely, let $A$ denote a variable that encodes the input task ($s$ or $t$) and the target pair $(s,t)$ for a pairwise mapping, which in practice we represent as an asymmetric $K \times K$ matrix by setting the corresponding entry to 1 (i.e. $A[s,t] = 1$ or $A[t,s] = 1$) and all other entries to 0. Note that the diagonal entries are always zero, as we do not define any self-task relation. Let $\bar{m}_{\vartheta}$ be a multi-layer network and $\boldsymbol{h}_i$ denote an $M$-channel feature map at its $i$-th layer, for which the auxiliary network $a_{\theta}$, parameterized by $\theta$, takes in $A$ and outputs two $M$-dimensional vectors $a_{\theta,i}^c$ and $a_{\theta,i}^b$. These vectors are applied to transform the feature
<span id="page-4-3"></span>map $\boldsymbol{h}_i$ in a similar way to [46], as follows:
$$\boldsymbol{h}_i \leftarrow a_{\theta,i}^c(A) \odot \boldsymbol{h}_i + a_{\theta,i}^b(A)$$
where $\odot$ denotes the Hadamard product. In words, the auxiliary network alters the output of the task-agnostic mapping function $\bar{m}_{\vartheta}$ based on $A$. For brevity, we denote the conditional mapping from $s$ to $(s,t)$ as $m^{s \to st}$, which is a function of $\bar{m}_{\vartheta}$ and $a_{\theta}$ and hence parameterized by $\vartheta$ and $\theta$.
We implement each $a_i^c$ and $a_i^b$ as a one-layer fully-connected network. Hence, given the light-weight auxiliary network, the computational cost of the conditional mapping function does not, in practice, vary with the number of task pairs. Finally, as the dimensionality of each task label varies (*e.g*. $O^t$ is 1 for depth estimation, while $O^t$ equals the number of categories in semantic segmentation), we use task-specific input layers and pass each prediction through the corresponding one before feeding it to the joint pairwise task mapping. In the formulation, we include these layers in our mapping $\bar{m}_\vartheta$ and explain their implementation details in Sec. 4.
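The conditioning mechanism can be sketched in a few lines. This is a toy pure-Python version under two assumptions not stated in the text: the $K \times K$ matrix $A$ is flattened into a vector before the one-layer heads, and each head is a plain linear map producing one scale and one shift per channel:

```python
def task_pair_code(s, t, K):
    """Flattened K x K one-hot A with A[s][t] = 1: 'map task s into the
    joint space of pair (s, t)'; A[t][s] = 1 would encode the t side."""
    A = [[0.0] * K for _ in range(K)]
    A[s][t] = 1.0
    return [v for row in A for v in row]

def film(h, A, W_c, b_c, W_b, b_b):
    """h_i <- a^c(A) * h_i + a^b(A) (elementwise): each one-layer
    auxiliary head maps the flattened A to M scales / M shifts."""
    linear = lambda W, b: [sum(w * a for w, a in zip(row, A)) + bb
                           for row, bb in zip(W, b)]
    scale, shift = linear(W_c, b_c), linear(W_b, b_b)
    return [c * x + d for c, x, d in zip(scale, h, shift)]
```

Because only the tiny heads depend on $A$, adding more task pairs changes which code vector is fed in, not the amount of computation.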
**Regularizing the mapping function.** To avoid learning trivial mappings, we propose a regularization strategy (Fig. 2) that encourages the mapping to retain high-level information about the input image. To this end, we penalize the distance between the output of the mapping function and a feature vector extracted from the input image. In particular, we use the output of the task-agnostic feature extractor $f_{\phi}(\boldsymbol{x})$ in the regularization. We can now add the regularizer to the formulation in Eq. (3):
<span id="page-4-1"></span>
$$\min_{\phi,\psi,\vartheta,\theta} \frac{1}{N} \sum_{n=1}^{N} \left( \frac{1}{|\mathcal{T}_{n}|} \sum_{t \in \mathcal{T}_{n}} L^{t}(\hat{y}^{t}(\boldsymbol{x}_{n}), \boldsymbol{y}_{n}^{t}) + \frac{1}{|\mathcal{U}_{n}|} \sum_{s \in \mathcal{U}_{n}, t \in \mathcal{T}_{n}} L_{ct}(m^{s \to st}(\hat{y}^{s}(\boldsymbol{x}_{n})), m^{t \to st}(\boldsymbol{y}_{n}^{t})) + R(f_{\phi}(\boldsymbol{x}_{n}), m^{s \to st}(\hat{y}^{s}(\boldsymbol{x}_{n}))) + R(f_{\phi}(\boldsymbol{x}_{n}), m^{t \to st}(\boldsymbol{y}_{n}^{t})) \right), \tag{4}$$
where $f_{\phi}(\boldsymbol{x})$ is the feature from the feature encoder $f_{\phi}$ and $R$ is the regularization loss function, for which we use the cosine similarity loss in this work.
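The two $R$ terms of Eq. (4) amount to anchoring both joint-space embeddings to the image feature. A self-contained sketch (pure Python; the vector arguments are placeholders for $f_{\phi}(\boldsymbol{x})$ and the two mapping outputs):

```python
import math

def cos_dist(a, b):
    """Cosine-similarity loss: 1 - cos(a, b)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

def mapping_regularizer(f_x, emb_pred, emb_gt):
    """R(f_phi(x), m^{s->st}(yhat^s)) + R(f_phi(x), m^{t->st}(y^t)):
    both joint-space embeddings must stay close to the image feature,
    so a mapping that collapses to a constant is penalized."""
    return cos_dist(f_x, emb_pred) + cos_dist(f_x, emb_gt)
```

Since $f_{\phi}(\boldsymbol{x})$ varies from image to image, a mapping that sends everything to one fixed vector cannot keep this term small, which rules out the trivial solution.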
**Alternative mapping strategies.** Here we discuss two different mapping strategies for exploiting cross-task consistency proposed in [69] and their adaptation to our setting. As both require learning a mapping from one task's ground-truth labels to another's, and we have no or only a few images with both ground-truth labels, we approximate them by learning mappings from the prediction of one task to the other task's ground truth. In the first case, one can substitute our cross-consistency loss and regularization terms with $L_{ct}(m^{s \to t}(\hat{y}^s(\boldsymbol{x})), \boldsymbol{y}^t)$ in Eq. (4), which we denote as Direct-Map. In the second case, we replace our
terms with $L_{ct}(m^{s \to t}(\hat{y}^s(\boldsymbol{x})), m^{s \to t}(\boldsymbol{y}^s))$, which maps both the ground-truth labels $\boldsymbol{y}^s$ and the predicted labels $\hat{\boldsymbol{y}}^s$ and minimizes their distance in task $t$'s label space. We denote this setting as Perceptual-Map and compare to both strategies in Sec. 4.
**Alternative loss and regularization strategies.** Alternatively, our cross-consistency loss and regularization terms can be replaced with a single loss function that does not allow learning trivial mappings. One such loss function is the contrastive loss, where one defines the embeddings of two tasks on the same image as a positive pair (i.e. $m^{s\to st}(\hat{y}^s(\boldsymbol{x}_i))$ and $m^{t\to st}(\boldsymbol{y}_i^t)$) and on different images as a negative pair (i.e. $m^{s \to st}(\hat{y}^s(\boldsymbol{x}_i))$ and $m^{t \to st}(\boldsymbol{y}_j^t)$ with $i \neq j$), and penalizes the model when the distance for the positive pair is larger than that for the negative pair. We denote this setting as *Contrastive-Loss*. Another method that also employs positive and negative pairs uses a discriminator network. The discriminator (a convolutional neural network) takes in positive and negative pairs and predicts their binary labels, while the parameters of the MTL network and the mapping functions are optimized alternately. We denote this setting as Discriminator-Loss and compare these alternatives in Sec. 4.
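One common way to penalize a positive pair that is farther apart than a negative pair is a margin (hinge) formulation; the sketch below is one such instantiation under that assumption, since the text does not fix the exact form:

```python
def contrastive_loss(pos_dist, neg_dist, margin=1.0):
    """Zero when the positive pair (same image, two tasks) is closer
    than the negative pair (different images) by at least the margin;
    otherwise grows linearly with the violation."""
    return max(0.0, margin + pos_dist - neg_dist)
```

Because a collapsed mapping makes positive and negative distances equal, it incurs the full margin as loss, which is why this term also discourages the trivial solution.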
|
2112.08609/main_diagram/main_diagram.drawio
ADDED
2112.08609/main_diagram/main_diagram.pdf
ADDED
Binary file (40.6 kB).
2112.08609/paper_text/intro_method.md
ADDED
# Introduction
The task of *Question Matching (QM)* aims to identify question pairs that have the same meaning, and it has been widely used in many applications, e.g., community question answering and intelligent customer service. Though neural QM models have shown compelling performance on various datasets, including Quora Question Pairs (QQP) (Iyer et al., 2017), LCQMC (Liu et al., 2018), BQ (Chen et al., 2018) and AFQMC<sup>1</sup>, neural models are often not robust to adversarial examples, which means that they predict unexpected outputs given just small perturbations
on the inputs. As example 1 in Tab. 1 shows, a model might not distinguish the minor difference ("面 *noodles*") between the two sentences, and thus predict the two questions to be semantically equivalent.
Recently, the robustness issues of neural models on various NLP tasks, such as question matching, natural language inference and machine reading comprehension, have attracted a lot of attention from the research community. Early works examine the robustness of neural models by creating certain types of artificial adversarial examples (Jia and Liang, 2017; Alzantot et al., 2018; Ren et al., 2019; Jin et al., 2020) and by involving humans and models in the loop to create dynamic adversarial examples (Nie et al., 2020; Wallace et al., 2019). Further studies discover that a few types of superficial cues (i.e. shortcuts) in the training data are learned by the models and hence affect model robustness (Gururangan et al., 2018; McCoy et al., 2019; Lai et al., 2021). Besides, several studies try to improve the robustness of neural models by adversarial data augmentation (Min et al., 2020) and data filtering (Bras et al., 2020). All these efforts help us better find and fix robustness issues to some extent.
However, there are several limitations in previous studies. First, the analysis and evaluation in previous work focus on just one or a few types of adversarial examples or shortcuts, whereas we need normative evaluation (Linzen, 2020; Ettinger, 2020; Phang et al.). The goal of normative evaluation is not to fool a system by exploiting its particular weaknesses, but to use systematically controlled datasets to comprehensively evaluate the basic linguistic capabilities of models in a diverse way. Checklist (Ribeiro et al., 2020) and Textflint (Wang et al., 2021) are great attempts at normative evaluation. However, it is not clear whether the effects of adversarial methods observed on artificial examples carry over to natural texts from real-world applications (Morris et al., 2020). Some other works
<sup>\*</sup> Equal contribution. The work was done when Hongyu Zhu was doing an internship at Baidu.
<span id="page-0-0"></span><sup>1</sup>It is from the Ant Technology Exploration Conference (ATEC) developer competition, which is no longer available.
manually perturb the examples to construct natural examples, but manual perturbation is time-consuming and costly [\(Gardner et al.,](#page-8-6) [2020\)](#page-8-6). Moreover, to the best of our knowledge, there are few Chinese datasets for QM robustness evaluation.
Towards this end, we create an open-domain Chinese dataset, namely DuQM, which contains natural questions with linguistic perturbations for evaluating the robustness of QM models. (1) By *linguistic*, we mean that this systematically controlled dataset provides a detailed breakdown of evaluation by linguistic phenomenon. As shown in Tab. [1,](#page-2-0) there are 3 categories and 13 subcategories with 32 linguistic perturbations in DuQM, which enables us to evaluate model performance by each category instead of with just a single metric. (2) By *natural*, we mean that all the questions in DuQM are natural and were issued by users of a commercial search engine. This design helps us properly evaluate the progress of a model's robustness on natural texts rather than on artificial texts, which may not preserve semantics and may introduce grammatical errors.
The contributions of this paper can be summarized as follows:
- We construct a Chinese dataset, namely DuQM, that contains linguistically perturbed natural questions from a commercial search engine. It is a systematically controlled dataset to test the basic linguistic capabilities of models in a diverse way. (see Sec. [2](#page-1-0) and Sec. [3\)](#page-4-0)
- Our experimental results show 3 characteristics of DuQM: (1) DuQM is challenging and has better discrimination power to distinguish models that perform comparably on other datasets (see Sec. [4.2\)](#page-6-0). (2) The detailed breakdown of evaluation by linguistic phenomena in DuQM helps diagnose the advantages and disadvantages of different models (see Sec. [4.3\)](#page-6-1). (3) Extensive experiments show that the effect of artificial adversarial examples does not carry over to the natural texts of DuQM. DuQM can thus help us properly evaluate models' robustness (see Sec. [4.4\)](#page-7-0).
The remainder of this paper is organized as follows. Sec. [2](#page-1-0) describes the 3 categories and 13 subcategories with 32 linguistic perturbations in DuQM. Sec. [3](#page-4-0) gives the construction process of DuQM. In Sec. [4,](#page-5-0) we conduct experiments to demonstrate the 3 characteristics of DuQM. We conclude our work in Sec. [5.](#page-8-7)
The design of DuQM aims at a detailed breakdown of evaluation by linguistic phenomenon. Hence, we create DuQM by introducing a set of linguistic features that we believe are important for model diagnosis in terms of linguistic capabilities. Basically, 3 categories of linguistic features are used to build DuQM, i.e., lexical features (see Sec. [2.1\)](#page-1-1), syntactic features (see Sec. [2.2\)](#page-3-0), and pragmatic features (see Sec. [2.3\)](#page-4-1). We list the 3 categories and 13 subcategories with 32 perturbation operations in Tab. [1.](#page-2-0) Detailed descriptions of all categories are given in this section.
Lexical features are associated with vocabulary items, i.e. words. As a word is the smallest independent but meaningful unit of speech, an operation on a single word may change the meaning of the entire sentence. Understanding words and perceiving word-level perturbations is a basic but crucial capability for models. To provide a fine-grained evaluation of a model's capability of lexical understanding, we further consider 6 subcategories:
Part of Speech. Parts of speech (POS), or word classes, describe the role a word plays in a sentence. DuQM considers 6 POS in Chinese grammar, including noun, verb, adjective, adverb, numeral and quantifier, which are content words that carry most of the meaning of a sentence. In this subcategory, we aim to test the models' understanding of related but not identical words with different POS. As example 1 in Tab. [1](#page-2-0)[2](#page-1-2) shows, inserting only one noun, "面 *noodles*", makes the sentence meaning different. Furthermore, in this subcategory we provide a set of examples focusing on phrase-level perturbations to check a model's capability of understanding word groups that act collectively as a single part of speech (see example 11).
Named Entity. Different from common nouns that refer to generic things, a named entity (NE) is a proper noun that refers to a specific real-world object. The close relation to world knowledge makes NEs ideal for observing models' understanding of the meaning of names and of background knowledge about entities. In DuQM, we include *Named Entity* as an independent subcategory to test the models' behaviour in named entity recognition, and focus on 4 of the most common types of NE,
<span id="page-1-2"></span><sup>2</sup>All examples discussed in this section are presented in Column *Example and Translation* of Tab. [1.](#page-2-0)
<span id="page-2-0"></span>
| Category | Subcategory | Perturbation Operation | Label #Y / #N | BERT base | ERNIE base | RoBERTa base | MacBERT base | RoBERTa large | MacBERT large | Examples and Translation |
|---|---|---|---|---|---|---|---|---|---|---|
| Lexical Feature | Part of Speech | insert n. | -/539 | 41.4±3.4 | 40.8±2.1 | 43.0±0.7 | 41.4±2.5 | 45.4±4.1 | 37.3±2.4 | E1: 鸡蛋怎么炒好吃 / 鸡蛋 面 怎么炒好吃<br>how to fry eggs / how to fry egg noodles |
| | | insert v. | -/131 | 39.4±0.4 | 33.8±2.6 | 37.4±2.0 | 35.9±2.7 | 39.9±3.1 | 29.5±3.8 | E2: 梦到西红柿 / 梦到 摘 西红柿<br>dream of tomatoes / dream of picking tomatoes |
| | | insert adj. | -/458 | 23.5±1.9 | 19.2±3.7 | 26.9±4.4 | 23.9±4.2 | 18.1±2.4 | 10.4±2.1 | E3: 有哪些类型的app / 有哪些类型的 移动 app<br>what are types of apps / what are types of mobile apps |
| | | insert adv. | -/302 | 3.7±0.5 | 4.2±0.5 | 3.8±0.6 | 4.4±1.2 | 5.8±1.5 | 3.1±1.1 | E4: 为什么打嗝 / 为什么 老 打嗝<br>why burp / why always burp |
| | | replace n. | -/702 | 86.6±0.3 | 86.7±0.1 | 88.3±0.3 | 88.8±1.2 | 89.4±1.6 | 87.8±0.7 | E5: 申请美国 绿卡 流程 / 申请美国 签证 流程<br>U.S. green card application process / U.S. visa application process |
| | | replace v. | -/466 | 71.7±1.1 | 77.6±0.8 | 76.9±0.4 | 76.5±1.2 | 81.0±1.6 | 81.5±2.2 | E6: 为什么 下蹲 膝盖疼 / 为什么 下跪 膝盖疼<br>why knee pain when squatting / why knee pain when kneeling |
| | | replace adj. | -/472 | 74.3±2.1 | 80.0±1.0 | 77.6±0.7 | 81.6±0.5 | 82.7±1.1 | 82.7±1.6 | E7: 耳朵出血 严重 吗 / 耳朵出血 正常 吗<br>is the ear bleeding serious / is the ear bleeding normal |
| | | replace adv. | -/188 | 19.1±6.1 | 19.3±4.4 | 16.3±3.8 | 23.9±4.6 | 59.0±4.0 | 56.2±2.0 | E8: 为什么会 经常 头晕 / 为什么会 有点 头晕<br>why regularly feel dizzy / why slightly feel dizzy |
| | | replace num. | -/1116 | 83.2±1.4 | 91.4±0.4 | 85.9±1.8 | 87.2±0.9 | 88.1±0.5 | 91.9±1.1 | E9: 血压 130 /100高吗 / 血压 120 /100高吗<br>is blood pressure 130 /100 high / is blood pressure 120 /100 high |
| | | replace quantifier | -/22 | 30.3±6.9 | 25.7±5.2 | 33.3±2.6 | 34.9±2.6 | 27.3±0.0 | 34.8±10.5 | E10: 一 束 花多少钱 / 一 枝 花多少钱<br>how much is a bunch of flower / how much is a flower |
| | | replace phrases | -/197 | 98.0±0.0 | 98.1±0.2 | 96.6±0.3 | 97.8±0.5 | 97.8±0.2 | 97.5±0 | E11: 如何 提高自己的记忆力 / 如何 增加自己的实力<br>how to improve my memory / how to increase my strength |
| | Named Entity | replace loc. | -/458 | 96.0±0.6 | 95.7±0.2 | 95.4±0.4 | 95.0±0.4 | 94.7±0.4 | 94.5±0.5 | E12: 山西 春节习俗 / 陕西 春节习俗<br>Shanxi spring festival customs / Shannxi spring festival customs |
| | | replace org. | -/264 | 94.9±0.2 | 94.3±0.6 | 91.2±1.4 | 93.4±0.7 | 93.5±0.3 | 93.8±0.1 | E13: 北京邮电大学 附近酒店 / 南京邮电大学 附近酒店<br>hotels near BUPT / hotels near NJUPT |
| | | replace person | -/468 | 90.3±1.3 | 91.0±0.9 | 88.7±1.6 | 91.4±1.6 | 92.3±1.3 | 93.2±1.1 | E14: 陈龙 的妻子 / 成龙 的妻子<br>wife of Long Chen / wife of Jackie Chan |
| | | replace product | -/170 | 83.7±2.6 | 88.2±2.1 | 82.4±6.9 | 83.3±0.3 | 86.0±1.7 | 88.8±4.4 | E15: iphone 6 多少钱 / iphone6x 多少钱<br>how much is iphone 6 / how much is iphone6x |
| | Synonym | replace n. | 405/- | 51.1±1.1 | 59.7±1.3 | 59.7±2.2 | 60.7±2.0 | 63.3±3.1 | 71.6±4.0 | E16: 猕猴桃 的功效 / 奇异果 的功效<br>health benefits of Chinese gooseberry / health benefits of Kiwi |
| | | replace v. | 372/- | 80.0±0.9 | 81.1±1.6 | 82.5±0.0 | 83.2±1.2 | 84.0±2.0 | 88.1±1.4 | E17: 什么果汁可以 减肥 / 什么果汁可以 减重<br>what juice can lose weight / what juice can slim |
| | | replace adj. | 453/- | 75.7±1.3 | 77.3±1.1 | 78.8±2.5 | 74.8±0.5 | 79.4±3.4 | 88.5±1.3 | E18: 有趣 搞笑的广告词 / 幽默 搞笑的广告词<br>funny advertising words / humerous advertising words |
| | | replace adv. | 26/- | 98.7±2.1 | 100.0±0.0 | 100.0±0.0 | 100.0±0.0 | 100±0.0 | 100.0±0.0 | E19: 总是 想睡觉是为什么 / 老是 想睡觉是为什么<br>why always want to sleep / why repeatedly want to sleep |
|
| 59 |
+
| | Antonym | replace adj. | -/305 | 50.6±3.4 | 69.6±2.9 | 65.0±1.5 | 73.1±4.3 | 91.7±2.3 | 90.7±2.3 | E20: 什么水果脂肪 低 / 什么水果脂肪 高<br>what fruit is low in fat / what fruit is high in fat |
|
| 60 |
+
| | | negate v. | -/153 | 69.9±9.6 | 88.9±1.3 | 84.8±2.9 | 93.3±1.3 | 88.4±0.9 | 91.4±3.4 | E21: 为什么宝宝哭 /为什么宝宝 不 哭<br>why baby cries / why baby doesn't cry |
|
| 61 |
+
| | Negation | negate adj. | -/139 | 73.1±8.5 | 84.2±1.2 | 82.7±1.4 | 88.0±1.5 | 88.0±2.9 | 89.4±1.0 | E22: 为什么苹果是红的 /为什么苹果 不是 红的<br>why apple is red / why apple is not red |
|
| 62 |
+
| | | neg.+antonym | 59/- | 29.9±2.5 | 34.4±2.5 | 39.0±1.7 | 31.1±2.5 | 40.7±1.7 | 53.6±0.9 | E23: 激动 怎么办 /无法 平静 怎么办<br>what to do if too excited / what to do if can't calm down |
|
| 63 |
+
| | Temporal | insert | -/120 | 26.6±2.1 | 29.1±2.1 | 33.1±0.9 | 41.7±3.3 | 47.5±5.4 | 33.6±8.5 | E24: 北京会下雨吗 /北京 明天 会下雨吗<br>will it rain in Beijing / will it rain in Beijing tomorrow |
|
| 64 |
+
| | word | replace | -/114 | 44.1±6.1 | 67.8±2.6 | 55.0±0.5 | 53.8±1.3 | 70.4±6.1 | 78.6±5.8 | E25: 昨天 下雪 了 吗 /明儿 会下雪吗<br>was it snow yesterday / will it snow tomorrow |
|
| 65 |
+
| | Symmetry | swap | 533/- | | 97.3±0.4 98.0±0.1 | 95.2±1.7 | 95.9±0.7 | 93.3±0.9 | 92.5±1.9 | E26: 鱼 和 鸡蛋 能一起吃吗 / 鸡蛋 和 鱼 能一起吃吗<br>can I eat fish with egg / can I eat egg with fish |
|
| 66 |
+
| | Asymmetry | swap | -/497 | 14.5±2.0 | 18.3±3.7 | 26.8±3.2 | 26.4±2.5 | 52.0±4.6 | | 49.1±10.8 E27: 北京 到 上海 航班 /上海 到 北京 航班<br>Beijing to Shanghai flights / Shanghai to Beijing flights |
|
| 67 |
+
| Syntactic Feature | Negative<br>Asymmetry | swap + negate | 49/- | | 47.6±3.4 37.4±7.7 | 44.2±1.1 | 25.8±3.1 | 23.1±6.7 | 29.9±1.9 | E28: 男人 比 女人 更 高 吗 / 女人 比 男人 更 矮 吗<br>are men taller than women / are women shorter than men |
|
| 68 |
+
| | Voice | insert passive word | 94/37 | 76.8±1.4 | 72.5±0.0 | 77.4±0.9 | 74.0±0.7 | 85.2±1.4 | 74.8±2.2 | E29: 梦见狗咬左腿 /梦见 被 狗咬左腿<br>dreamed of being bitten by a dog / dreamed of being bitten by a dog |
|
| 69 |
+
| | Misspelling | replace | 468/- | | 68.0±2.0 65.1±0.2 | 64.2±0.6 | 65.0±2.3 | 63.5±1.8 | 63.2±1.6 | E30: 什么 纹身 适合我 / 什么 文身 适合我<br>what tattoo suits me / what tatoo suits me |
|
| 70 |
+
| Pragmatic Feature | Discourse Particle<br>(Simple) | insert or replace | 213/- | 98.7±0.5 | 98.4±0.2 | 98.6±0.5 | 99.2±0.2 | 99.5±0.0 | 99.8±0.2 | E31: 人为什么做梦 /那么 人为什么做梦<br>why people dream / so why people dream |
|
| 71 |
+
| | Discourse Particle<br>(Complex) | insert or replace | 131/- | 46.5±0.6 | 56.2±2.0 | 64.1±2.0 | 61.6±1.6 | 65.1±3.4 | 68.4±0.3 | E32: 附近最好的餐厅 / 求助我旁边 哪家餐厅 最好吃 ?<br>best restaurant nearby / heeelp!!! which restaurant is best in my area ? |
|
| 72 |
+
| Total | 13 | 32 | 2803/7318 - | | | | | | | - |
|
| 73 |
+
|
| 74 |
+
Table 1: Categories of DuQM (described in Sec. [2\)](#page-1-0) and performance of 6 models on DuQM (discussed in Sec. [4\)](#page-5-0). Bold face and underlined indicate the first and second highest accuracy for each testing scenario.

i.e., location, organization, person and product. Example 12 is a search query and its perturbation on NE. The two named entities, "山西 *Shanxi*" and "陕西 *Shaanxi*", are similar at the character level but denote two different locations. We expect the models to capture the subtle difference.

**Synonym.** A synonym is a word or phrase that means exactly or nearly the same as another word or phrase in a given language. This subcategory aims to test whether models can identify two semantically equivalent questions whose surface forms differ only in a pair of synonyms. As in example 16, the two sentences differ only in two words, both of which refer to kiwifruit, so they have the same meaning.

**Antonym.** In contrast to synonyms, antonyms are words in an inherently incompatible binary relationship. This subcategory examines a model's capability to distinguish words with opposed meanings. We mainly focus on opposite adjectives, e.g., "高 *high*" and "低 *low*" (see example 20).

**Negation.** Negation is another way to express contradiction. To negate a verb or an adjective in Chinese, we normally put a negative word before it, e.g., "不 *not*" before "哭 *cry*" (example 21) or "不是 *is not*" before "红的 *red*" (example 22). The negative word before the verb or adjective negates the statement. This is an effective way to analyze a model's basic skill of figuring out contradictory meanings even when there is only a minor change.

Moreover, we include some equivalent paraphrases with negation in this subcategory. In example 23, "无法平静 *can't calm down*" is the negative paraphrase of "激动 *excited*", so the paraphrased sentence is equivalent to the positive one. We believe that a robust QM system should be able to recognize this kind of paraphrase question pair.

**Temporal Word.** Temporal reasoning is a relatively high-level linguistic capability that allows a model to reason about a timeline. Unlike English, verbs in Chinese have no morphological inflections. Tenses and aspects are expressed either by temporal noun phrases like "明天 *tomorrow*" (example 24) or by aspect particles like "了 *le*", which indicates the completion of an action (example 25). This subcategory focuses on temporal distinctions and helps us evaluate the models' temporal reasoning capability.

While the sense of a single word is important to question meaning, how words are composed into a whole also affects sentence understanding. We believe the relations among words in a sentence are important information for models to capture, so we focus on several types of syntactic features in this category. We pre-define 4 linguistic phenomena that we believe are meaningful for locating a model's strengths and weaknesses, and describe them here.

**Symmetry.** Sometimes paraphrases can be generated merely by swapping the two conjuncts in a coordination structure. As shown in example 26, "鱼 *fish*" and "鸡蛋 *egg*" are joined by the conjunction "和 *and*" and stand in a symmetric relation to each other. Even if we swap them, the sentence meaning does not change. We name this subcategory Symmetry, with which we aim to explore whether a model captures the inherent dependency relations between words.

**Asymmetry.** Some words (such as "和 *and*") denote symmetric relations, while others (for example, the preposition "到 *to*") denote asymmetric ones. Example 27 shows a sentence pair in which the word before the preposition "到 *to*" is an adverbial and the word after it is the object. Swapping the adverbial and the object of the prepositional phrase leads to a non-equivalent meaning. If a model performs well only on subcategory Symmetry or only on Asymmetry, it may rely on shortcuts instead of an understanding of the syntactic information.

**Negative Asymmetry.** To further explore the syntactic capability of QM models, DuQM includes a set of test examples that consider both syntactic asymmetry and antonymy, which we name Negative Asymmetry. In example 28, the asymmetric relation between "男人 *men*" and "女人 *women*" and the opposite meanings of "高 *taller*" and "矮 *shorter*" resolve to an equivalent meaning. With this subcategory, we can better explore a model's capability of inferring more complex syntactic structures.

**Voice.** Another crucial syntactic capability of models is to differentiate between active and passive voice. In Chinese, the most common way to express the passive voice is a Bei-construction, which features the agentive case marker "被 *bei*". The subject of a Bei-construction is the patient of an action, and the object of the preposition "被 *bei*" is the agent. Compared to Fig. 1(a), the additional "被 *bei*" and the changed word order of "猫 *cat*" and "狗 *dog*" in Fig[.1\(b\)](#page-4-3) convert the sentence from active to passive voice, but the two sentences have the same meaning. If we further change the word order from Fig[.1\(b\)](#page-4-3) to Fig[.1\(c\),](#page-4-4) the sentence is still in the passive voice but has a different meaning.

<span id="page-4-3"></span><span id="page-4-2"></span><span id="page-4-4"></span>(c) "What to do if dog is bitten by cat": passive-voice non-paraphrase question.

Figure 1: The dependency relations of active voice and passive voice questions.

Passive voice is not always expressed with an overt "被 *bei*". Sometimes a sentence without any passive marker is still in the passive voice. In example 29, although the first sentence is without "被 *bei*", it expresses the same meaning as the second one. This category contains a set of active-passive examples, which are effective for evaluating a model's performance on active and passive voice.

Lexical items ordered by syntactic rules are not all that makes a sentence mean what it means. Context, the communicative situation that influences language use, also plays a part. We include some pragmatic features in DuQM so as to observe whether models are able to understand the contextual meaning of sentences.

**Misspelling.** Misspellings are frequently encountered by search engines and question-answering systems, and are mostly unintentional. Models should be able to capture the true intent of questions with spelling errors to ensure robustness. In example 30, despite the misspelled word "文身 *tatoo*", the two questions mean the same. In real-world situations, models should handle misspellings appropriately: when a user types a query containing a misspelling, a robust model should still return the correct result.

<span id="page-4-5"></span>

Figure 2: Construction process of DuQM.

**Discourse Particle.** Discourse particles are words and small expressions that contribute little to the information a sentence conveys but play pragmatic functions such as showing politeness, drawing attention, or smoothing the utterance. As in example 32, the word "求助 *help*" is used to draw attention and brings no additional information to the sentence. Using such words does not change the sentence meaning, so a model needs to identify the semantic equivalence when they are used.

We design DuQM as a *diverse* and *natural* corpus. The construction process of DuQM is divided into 4 steps, illustrated in Fig. [2.](#page-4-5) First, we preprocess the source questions to obtain linguistic knowledge, which is then used to perturb the source texts. Next, we pair each source question with its perturbed question as an example. The naturalness of the examples is reviewed by human evaluators. Finally, the examples are annotated manually and DuQM is constructed. We introduce the construction details in the following:

**Linguistic Preprocessing.** We collect a large number of source questions from the search query log of a commercial search engine, so all source questions are natural. We then perform several linguistic preprocessing steps on them: named entity recognition, POS tagging, dependency parsing, and word importance analysis. The linguistic knowledge obtained in this step is used for perturbation.

**Perturbation.** We conduct different perturbation operations for different subcategories. In general, we perturb the sentences in 3 ways:

- replace: replace a word with another word, e.g., for category *Synonym*, we replace a word with its synonym;
- insert: insert an additional word, e.g., for category *Temporal word*, we insert a temporal word into the source question;
- swap: swap two words. This operation is only used in *Syntactic Feature*.
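
The three operations can be sketched as simple list manipulations on a segmented question. The helper names and token lists below are illustrative stand-ins, not the authors' implementation:

```python
# Minimal sketch of the three perturbation operations (replace / insert / swap)
# on a Chinese question represented as a list of segmented tokens.

def replace_token(tokens, index, new_token):
    """Replace the token at `index` with `new_token` (e.g. a synonym)."""
    out = list(tokens)
    out[index] = new_token
    return out

def insert_token(tokens, index, new_token):
    """Insert `new_token` before position `index` (e.g. a temporal word)."""
    out = list(tokens)
    out.insert(index, new_token)
    return out

def swap_tokens(tokens, i, j):
    """Swap two tokens (used only for syntactic perturbations)."""
    out = list(tokens)
    out[i], out[j] = out[j], out[i]
    return out

# Example-26-style symmetry perturbation: swap the conjuncts around "和".
q = ["鱼", "和", "鸡蛋", "能一起吃吗"]
print(swap_tokens(q, 0, 2))  # ['鸡蛋', '和', '鱼', '能一起吃吗']
```

In the full pipeline, the positions and substitute words come from the linguistic preprocessing step (POS tags, parses, NE labels) rather than from hand-picked indices as here.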

The perturbations of all categories are listed in column *Perturbation Operation* of Tab. [1,](#page-2-0) and the perturbation details are given in Appendix [A.](#page-10-0)

**Naturalness Review.** To ensure the generated sentences are natural, we examine their appearance in the search log and retain only the sentences that have actually been entered into the search engine.
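
Assuming the search log is available as a set of query strings, the retention rule is a plain membership check (a minimal sketch, not the production log lookup):

```python
# Sketch of the naturalness review: keep a generated question only if it has
# actually been entered into the search engine. The in-memory set is a
# stand-in for the real search-log lookup.

def filter_natural(generated_questions, search_log):
    return [q for q in generated_questions if q in search_log]

log = {"为什么打嗝", "为什么老打嗝", "为什么总打嗝"}
candidates = ["为什么老打嗝", "为什么偶尔打嗝"]
print(filter_natural(candidates, log))  # ['为什么老打嗝']
```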

**Annotation.** The source question and the generated question are paired together as an example. The examples are then evaluated by evaluators from our internal data team, who check whether the examples are fluent, grammatically correct, and correctly categorized. Low-quality examples are discarded, and examples with inappropriate categories are re-classified.

Then the question pairs are labeled by linguistic experts from our internal data team. Semantically equivalent question pairs are positive examples, and inequivalent pairs are negative. The annotators are required to have an approval rate higher than 99% on at least 1,000 prior tasks. Each example is annotated by three annotators and tagged with the label chosen by at least two of them. To further ensure annotation quality, 10% of the annotated examples are randomly selected and reviewed by another senior linguistic expert; if the review accuracy is lower than 95%, the annotators re-annotate all the examples until the accuracy exceeds 95%. Since all annotators are linguistic experts from our internal data team rather than crowd-workers, we do not need to use IAA to measure annotation quality. Overall, only 0.2% (20/10,167) of the generated examples are not fluent or not grammatically correct, and only 1.9% (195/10,121) of them are re-annotated manually. The overall annotation process is illustrated in Fig. [3.](#page-11-0)
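
The three-annotator labeling rule can be sketched as a majority vote; the numeric label encoding below is an assumption for illustration:

```python
# Sketch of the label aggregation: each example receives three annotator
# labels, and the final label is the one chosen by at least two annotators
# (1 = equivalent/positive, 0 = inequivalent/negative).
from collections import Counter

def majority_label(labels):
    assert len(labels) == 3
    label, count = Counter(labels).most_common(1)[0]
    return label  # with 3 annotators and 2 classes, count is always >= 2

print(majority_label([1, 1, 0]))  # 1
print(majority_label([0, 1, 0]))  # 0
```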

Eventually, we generate 10,121 examples for DuQM. The class distribution of all categories is

<span id="page-5-1"></span>

| Category | Length (q) | Length (q') | # Y | # N | # All |
|-------------|------|--------|-------|-------|--------|
| Lexical | 8.58 | 8.89 | 1,315 | 6,784 | 8,099 |
| Syntactic | 9.86 | 9.89 | 678 | 532 | 1,210 |
| Pragmatic | 8.73 | 9.03 | 812 | 0 | 812 |
| Avg / Total | 8.74 | 8.90 | 2,805 | 7,316 | 10,121 |

Table 2: Data statistics of DuQM.

<span id="page-5-3"></span>

| Model | LCQMCtest | DuQM | Δ |
|----------|-----------|----------|-------|
| BERTb | 87.1±0.1 | 66.6±0.6 | -20.5 |
| ERNIEb | 87.3±0.1 | 69.8±0.3 | -17.5 |
| RoBERTab | 87.2±0.4 | 69.5±0.1 | -17.7 |
| MacBERTb | 87.4±0.3 | 70.3±0.6 | -17.1 |
| RoBERTal | 87.7±0.1 | 73.8±0.3 | -13.9 |
| MacBERTl | 87.6±0.1 | 73.8±0.5 | -13.8 |

Table 3: Accuracy (%) on LCQMCtest and DuQM. <sup>b</sup> indicates base, and <sup>l</sup> indicates large.

given in Tab. [1.](#page-2-0) Additional data statistics are provided in Tab. [2.](#page-5-1)

2201.00520/main_diagram/main_diagram.drawio
ADDED
The diff for this file is too large to render. See raw diff.

2201.00520/paper_text/intro_method.md
ADDED
# Introduction

Transformer [\[34\]](#page-11-0) was originally introduced to solve natural language processing tasks and has recently shown great potential in the field of computer vision [\[12,](#page-10-0)[26,](#page-11-1)[36\]](#page-11-2). The pioneering work, Vision Transformer (ViT) [\[12\]](#page-10-0), stacks multiple Transformer blocks to process non-overlapping image patch (*i.e.*, visual token) sequences, leading to a convolution-free model for image classification. Compared to their CNN counterparts [\[18,](#page-10-1)[19\]](#page-10-2), Transformer-based models have larger receptive fields and excel at modeling long-range dependencies, and they have been shown to achieve superior performance in the regime of large amounts of training data and

<span id="page-0-0"></span>

Figure 1. Comparison of DAT with other Vision Transformer models and with DCN in CNN models. The red star and the blue star denote different queries, and masks with solid-line boundaries denote the regions to which the queries attend. In a data-agnostic way: (a) ViT [\[12\]](#page-10-0) adopts full attention for all queries. (b) Swin Transformer [\[26\]](#page-11-1) uses partitioned window attention. In a data-dependent way: (c) DCN [\[9\]](#page-10-3) learns different deformed points for each query. (d) DAT learns shared deformed points for all queries.

model parameters. However, superfluous attention in visual recognition is a double-edged sword with multiple drawbacks. Specifically, the excessive number of keys to attend to per query patch yields high computational cost, slows convergence, and increases the risk of overfitting.

In order to avoid excessive attention computation, existing works [\[6,](#page-10-4) [11,](#page-10-5) [26,](#page-11-1) [36,](#page-11-2) [43,](#page-11-3) [49\]](#page-11-4) have leveraged carefully designed efficient attention patterns to reduce the computational complexity. As two representative approaches among them, Swin Transformer [\[26\]](#page-11-1) adopts window-based local

<sup>\*</sup>Equal contribution.

<sup>†</sup>Corresponding author.

<span id="page-1-0"></span>attention to restrict attention to local windows, while Pyramid Vision Transformer (PVT) [\[36\]](#page-11-2) downsamples the key and value feature maps to save computation. Though effective, these hand-crafted attention patterns are data-agnostic and may not be optimal: relevant keys/values may be dropped while less important ones are kept.

Ideally, one would expect the candidate key/value set for a given query to be flexible and able to adapt to each individual input, so that the issues of hand-crafted sparse attention patterns can be alleviated. In fact, in the literature on CNNs, learning a deformable receptive field for convolution filters has been shown to be effective in selectively attending to more informative regions on a data-dependent basis [\[9\]](#page-10-3). The most notable work, Deformable Convolution Networks (DCN) [\[9\]](#page-10-3), has yielded impressive results on many challenging vision tasks. This motivates us to explore a deformable attention pattern in Vision Transformers. However, a naive implementation of this idea leads to an unreasonably high memory/computation complexity: the overhead introduced by the deformable offsets is quadratic *w.r.t.* the number of patches. As a consequence, although some recent works [\[7,](#page-10-6) [46,](#page-11-5) [54\]](#page-11-6) have investigated the idea of a deformable mechanism in Transformers, none of them have treated it as a basic building block for constructing a powerful backbone network like DCN, due to the high computational cost. Instead, their deformable mechanism is either adopted in the detection head [\[54\]](#page-11-6) or used as a preprocessing layer to sample patches for the subsequent backbone network [\[7\]](#page-10-6).

In this paper, we present a simple and efficient deformable self-attention module, with which a powerful pyramid backbone, named *Deformable Attention Transformer* (DAT), is constructed for image classification and various dense prediction tasks. Different from DCN, which learns different offsets for different pixels over the whole feature map, we propose to learn a few groups of query-agnostic offsets to shift keys and values towards important regions (as illustrated in Figure [1\(](#page-0-0)d)), based on the observation in [\[3,](#page-9-0) [52\]](#page-11-7) that global attention usually results in almost the same attention patterns for different queries. This design both keeps a linear space complexity and introduces a deformable attention pattern to Transformer backbones. Specifically, for each attention module, reference points are first generated as uniform grids, which are the same across input data. Then, an offset network takes the query features as input and generates the corresponding offsets for all reference points. In this way, the candidate keys/values are shifted towards important regions, augmenting the original self-attention module with higher flexibility and efficiency to capture more informative features.

To summarize, our contributions are as follows: we propose the first deformable self-attention backbone for visual recognition, where the data-dependent attention pattern endows higher flexibility and efficiency. Extensive experiments on ImageNet [\[10\]](#page-10-7), ADE20K [\[51\]](#page-11-8) and COCO [\[25\]](#page-11-9) demonstrate that our model consistently outperforms competitive baselines including Swin Transformer, by a margin of 0.7 in top-1 image classification accuracy, 1.2 in semantic segmentation mIoU, and 1.1 in object detection for both box AP and mask AP. The advantages on small and large objects are more distinct, with a margin of 2.1.

# Method

We first revisit the attention mechanism in recent Vision Transformers. Taking a flattened feature map $x \in \mathbb{R}^{N \times C}$ as the input, a multi-head self-attention (MHSA) block with $M$ heads is formulated as

$$q = xW_q, \ k = xW_k, \ v = xW_v, \tag{1}$$

$$z^{(m)} = \sigma(q^{(m)}k^{(m)\top}/\sqrt{d})v^{(m)}, \ m = 1, \dots, M, \tag{2}$$

$$z = \operatorname{Concat}\left(z^{(1)}, \dots, z^{(M)}\right) W_o, \tag{3}$$

where $\sigma(\cdot)$ denotes the softmax function and $d = C/M$ is the dimension of each head. $z^{(m)}$ denotes the embedding output from the $m$-th attention head, and $q^{(m)}, k^{(m)}, v^{(m)} \in \mathbb{R}^{N \times d}$ denote the query, key, and value embeddings respectively. $W_q, W_k, W_v, W_o \in \mathbb{R}^{C \times C}$ are the projection matrices. To build up a Transformer block, an MLP block with two linear transformations and a GELU activation is usually adopted to provide nonlinearity.

With normalization layers and identity shortcuts, the $l$-th Transformer block is formulated as

$$z'_{l} = \text{MHSA}(\text{LN}(z_{l-1})) + z_{l-1}, \tag{4}$$

$$z_l = \text{MLP}\left(\text{LN}(z_l')\right) + z_l', \tag{5}$$

where LN is Layer Normalization [\[1\]](#page-9-2).
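
For reference, Eqs. (1)-(5) can be transcribed directly in NumPy. The sizes, random weights, and the simplified LayerNorm (without learnable scale and shift) are illustrative assumptions, not the paper's implementation:

```python
# NumPy sketch of Eqs. (1)-(5): multi-head self-attention plus the MLP block,
# with pre-LayerNorm and identity shortcuts. Weights are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
N, C, M = 16, 32, 4          # tokens, channels, heads
d = C // M                   # per-head dimension

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def layer_norm(x, eps=1e-5):
    return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

def gelu(x):
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def mhsa(x, Wq, Wk, Wv, Wo):
    q, k, v = x @ Wq, x @ Wk, x @ Wv                      # Eq. (1)
    heads = []
    for m in range(M):
        sl = slice(m * d, (m + 1) * d)
        attn = softmax(q[:, sl] @ k[:, sl].T / np.sqrt(d))
        heads.append(attn @ v[:, sl])                     # Eq. (2)
    return np.concatenate(heads, axis=-1) @ Wo            # Eq. (3)

def block(x, params):
    Wq, Wk, Wv, Wo, W1, W2 = params
    x = mhsa(layer_norm(x), Wq, Wk, Wv, Wo) + x           # Eq. (4)
    x = gelu(layer_norm(x) @ W1) @ W2 + x                 # Eq. (5)
    return x

Ws = [rng.standard_normal((C, C)) * 0.02 for _ in range(4)]   # Wq, Wk, Wv, Wo
W1 = rng.standard_normal((C, 4 * C)) * 0.02
W2 = rng.standard_normal((4 * C, C)) * 0.02
x = rng.standard_normal((N, C))
out = block(x, Ws + [W1, W2])
print(out.shape)  # (16, 32)
```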

Existing hierarchical Vision Transformers, notably PVT [\[36\]](#page-11-2) and Swin Transformer [\[26\]](#page-11-1), try to address the challenge of excessive attention. The downsampling technique of <span id="page-2-1"></span>the former results in severe information loss, and the shifted-window attention of the latter leads to a much slower growth of receptive fields, which limits the potential of modeling large objects. Thus a data-dependent sparse attention is required to flexibly model relevant features, leading to the deformable mechanism first proposed in DCN [\[9\]](#page-10-3). However, simply implementing DCN in Transformer models is a non-trivial problem. In DCN, each element on the feature map learns its offsets individually, where a $3 \times 3$ deformable convolution on an $H \times W \times C$ feature map has a space complexity of $9HWC$. If we directly apply the same mechanism in the attention module, the space complexity rises drastically to $N_q N_k C$, where $N_q$, $N_k$ are the numbers of queries and keys and usually have the same scale as the feature map size $HW$, bringing approximately a biquadratic complexity. Although Deformable DETR [\[54\]](#page-11-6) has managed to reduce this overhead by setting a lower number of keys with $N_k = 4$ at each scale and works well as a detection head, attending to so few keys is inferior in a backbone network because of the unacceptable loss of information (see detailed comparison in Appendix). In the meantime, the observations in [\[3,](#page-9-0)[52\]](#page-11-7) have revealed that different queries have similar attention maps in visual attention models. Therefore, we opt for a simpler solution with shared shifted keys and values for each query to achieve an efficient trade-off.

<span id="page-2-3"></span><span id="page-2-2"></span>Specifically, we propose deformable attention to model the relations among tokens effectively under the guidance of the important regions in the feature maps. These focused regions are determined by multiple groups of deformed sampling points, which are learned from the queries by an offset network. We adopt bilinear interpolation to sample features <span id="page-3-1"></span>from the feature maps, and then the sampled features are fed to the key and value projections to get the deformed keys and values. Finally, standard multi-head attention is applied to attend queries to the sampled keys and to aggregate features from the deformed values. Additionally, the locations of the deformed points provide a more powerful relative position bias to facilitate the learning of the deformable attention, which will be discussed in the following sections.

**Deformable attention module.** As illustrated in Figure 2(a), given the input feature map $x \in \mathbb{R}^{H \times W \times C}$, a uniform grid of points $p \in \mathbb{R}^{H_G \times W_G \times 2}$ is generated as the references. Specifically, the grid size is downsampled from the input feature map size by a factor $r$: $H_G = H/r$, $W_G = W/r$. The values of the reference points are linearly spaced 2D coordinates $\{(0,0),\ldots,(H_G-1,W_G-1)\}$, which we normalize to the range $[-1,+1]$ according to the grid shape $H_G \times W_G$, where $(-1,-1)$ indicates the top-left corner and $(+1,+1)$ indicates the bottom-right corner. To obtain the offset for each reference point, the feature map is projected linearly to the query tokens $q = xW_q$ and then fed into a lightweight sub-network $\theta_{\text{offset}}(\cdot)$ to generate the offsets $\Delta p = \theta_{\text{offset}}(q)$. To stabilize the training process, we scale the amplitude of $\Delta p$ by a predefined factor $s$ to prevent too large offsets, *i.e.*, $\Delta p \leftarrow s \tanh(\Delta p)$. Then the features are sampled at the locations of the deformed points as keys and values, followed by projection matrices:
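
The reference grid and the offset scaling described above can be sketched as follows; the amplitude factor `s` and the random stand-in for the offset-network output are assumed values, not the paper's configuration:

```python
# Sketch of the reference points: a uniform H_G x W_G grid normalized to
# [-1, +1] (top-left = (-1, -1), bottom-right = (+1, +1)), plus the
# s * tanh(.) squashing of the predicted offsets.
import numpy as np

def reference_points(H_G, W_G):
    ys = np.arange(H_G, dtype=np.float64)
    xs = np.arange(W_G, dtype=np.float64)
    grid = np.stack(np.meshgrid(ys, xs, indexing="ij"), axis=-1)  # (H_G, W_G, 2)
    grid[..., 0] = 2 * grid[..., 0] / (H_G - 1) - 1  # rows  -> [-1, 1]
    grid[..., 1] = 2 * grid[..., 1] / (W_G - 1) - 1  # cols  -> [-1, 1]
    return grid

p = reference_points(4, 4)
print(p[0, 0], p[-1, -1])  # [-1. -1.] [1. 1.]

s = 2.0                                              # assumed amplitude factor
raw = np.random.default_rng(0).standard_normal(p.shape)
dp = s * np.tanh(raw)                                # offsets bounded in (-s, s)
deformed = p + dp                                    # key/value sampling locations
```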

$$q = xW_q, \ \tilde{k} = \tilde{x}W_k, \ \tilde{v} = \tilde{x}W_v, \tag{6}$$

with

$$\Delta p = \theta_{\text{offset}}(q), \ \tilde{x} = \phi(x; p + \Delta p). \tag{7}$$

$\tilde{k}$ and $\tilde{v}$ represent the deformed key and value embeddings respectively. Specifically, we set the sampling function $\phi(\cdot;\cdot)$ to a bilinear interpolation to make it differentiable:

<span id="page-3-0"></span>
$$\phi\left(z;(p_{x},p_{y})\right) = \sum_{(r_{x},r_{y})} g(p_{x},r_{x})\,g(p_{y},r_{y})\,z[r_{y},r_{x},:], \tag{8}$$

where $g(a,b) = \max(0,1-|a-b|)$ and $(r_x,r_y)$ indexes all the locations on $z \in \mathbb{R}^{H \times W \times C}$. As $g$ is non-zero only on the 4 integral points closest to $(p_x,p_y)$, it simplifies Eq. (8) to a weighted average over 4 locations. Similar to existing approaches, we perform multi-head attention on $q, \tilde{k}, \tilde{v}$ and adopt relative position offsets $R$. The output of an attention head is formulated as:
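
Eq. (8) transcribes directly into code; this sketch indexes the feature map in integer pixel coordinates for clarity (the paper normalizes coordinates to $[-1,+1]$ before sampling):

```python
# Direct NumPy transcription of Eq. (8): bilinear sampling of a feature map z
# at a continuous location (px, py), with g(a, b) = max(0, 1 - |a - b|).
import numpy as np

def g(a, b):
    return max(0.0, 1.0 - abs(a - b))

def phi(z, px, py):
    H, W, C = z.shape
    out = np.zeros(C)
    # only the 4 integer neighbours of (px, py) receive non-zero weights
    for ry in range(int(np.floor(py)), int(np.floor(py)) + 2):
        for rx in range(int(np.floor(px)), int(np.floor(px)) + 2):
            if 0 <= ry < H and 0 <= rx < W:
                out += g(px, rx) * g(py, ry) * z[ry, rx, :]
    return out

z = np.arange(4, dtype=np.float64).reshape(2, 2, 1)  # [[0, 1], [2, 3]]
print(phi(z, 0.5, 0.5))  # midpoint -> average of the 4 values = [1.5]
```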

$$z^{(m)} = \sigma \left( q^{(m)} \tilde{k}^{(m)\top} / \sqrt{d} + \phi(\hat{B}; R) \right) \tilde{v}^{(m)}, \tag{9}$$

where $\phi(\hat{B}; R) \in \mathbb{R}^{HW \times H_G W_G}$ corresponds to the position embedding following previous work [26], with several adaptations. Details will be explained later in this section. Features of each head are concatenated together and projected through $W_o$ to get the final output $z$ as in Eq. (3).

**Offset generation.** As stated above, a sub-network is adopted for offset generation, which consumes the query features and outputs the offset values for the reference points. Considering that each reference point covers a local $s \times s$ region ($s$ is the largest value for the offset), the generation network should also have a perception of the local features to learn reasonable offsets. Therefore, we implement the sub-network as two convolution modules with a nonlinear activation, as depicted in Figure 2(b). The input features are first passed through a $5 \times 5$ depthwise convolution to capture local features. Then, GELU activation and a $1 \times 1$ convolution are adopted to get the 2D offsets. It is also worth noticing that the bias in the $1 \times 1$ convolution is dropped to alleviate the compulsive shift for all locations.
**Offset groups.** To promote the diversity of the deformed points, we follow a paradigm similar to MHSA and split the feature channels into $G$ groups. The features of each group use a shared sub-network to generate their corresponding offsets. In practice, the head number $M$ of the attention module is set to a multiple of the number of offset groups $G$, ensuring that multiple attention heads are assigned to each group of deformed keys and values.
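The head-to-group assignment can be illustrated in a few lines; the contiguous mapping convention below is our own choice for illustration.

```python
# With M attention heads and G offset groups (M a multiple of G), each group
# of deformed keys/values serves M // G attention heads.

def head_to_group(num_heads, num_groups):
    assert num_heads % num_groups == 0, "M must be a multiple of G"
    heads_per_group = num_heads // num_groups
    return [h // heads_per_group for h in range(num_heads)]

# Stage 3 of DAT-T: M = 12 heads, G = 3 offset groups -> 4 heads per group.
print(head_to_group(12, 3))  # [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2]
```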
**Deformable relative position bias.** Relative position bias encodes the relative position between every pair of query and key, which augments the vanilla attention with spatial information. Considering a feature map of shape $H\times W$, the relative coordinate displacements lie in the ranges $[-(H-1), H-1]$ and $[-(W-1), W-1]$ along the two dimensions. In Swin Transformer [26], a relative position bias table $\hat{B} \in \mathbb{R}^{(2H-1)\times(2W-1)}$ is constructed, and the relative position bias $B$ is obtained by indexing the table with the relative displacements in the two directions. Since our deformable attention has continuous key positions, we compute the relative displacements in the normalized range $[-1,+1]$, and then interpolate $\phi(\hat{B};R)$ from the parameterized bias table $\hat{B} \in \mathbb{R}^{(2H-1)\times(2W-1)}$ by the continuous relative displacements, so as to cover all possible offset values.
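A one-dimensional sketch of this continuous table lookup: a normalized displacement in $[-1, +1]$ is mapped to a fractional index into a $(2H-1)$-entry bias table and read out with the same bilinear kernel as Eq.(8). The index convention below is our assumption for illustration.

```python
# Sample a parameterized bias table at a continuous relative displacement d.

def g(a, b):
    return max(0.0, 1.0 - abs(a - b))

def sample_bias(table, d):  # table: list of length 2H - 1; d in [-1, 1]
    p = (d + 1.0) / 2.0 * (len(table) - 1)  # fractional index in [0, 2H - 2]
    return sum(g(p, i) * v for i, v in enumerate(table))

table = [0.0, 1.0, 4.0, 9.0, 16.0]  # H = 3 -> 2H - 1 = 5 entries
print(sample_bias(table, 0.0))      # 4.0: d = 0 lands exactly on index 2
print(sample_bias(table, 0.25))     # 6.5: fractional index 2.5 -> (4 + 9) / 2
```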
**Computational complexity.** Deformable multi-head attention (DMHA) has a computation cost similar to its counterpart in PVT or Swin Transformer. The only additional overhead comes from the sub-network used to generate offsets. The complexity of the whole module can be summarized as:
$$\Omega(\text{DMHA}) = \underbrace{2HWN_{s}C + 2HWC^2 + 2N_{s}C^2}_{\text{vanilla self-attention module}} + \underbrace{(k^2 + 2)N_{s}C}_{\text{offset network}}, \quad (10)$$
where $N_s = H_G W_G = HW/r^2$ is the number of sampled points. It can be immediately seen that the computational cost of the offset network is linear w.r.t. the channel size, which is minor compared with the cost of the attention computation. Typically, consider the third stage of a Swin-T [26] model for image classification, where $H = W = 14$, $N_s = 49$ and $C = 384$: the computational cost of the attention module in a single block is 79.63M FLOPs. If equipped with our deformable module (with $k = 5$), the additional overhead is 5.08M FLOPs, which is only 6.0% of the whole module. Additionally, by choosing a large downsample factor $r$, the complexity is further reduced, which makes DMHA friendly to tasks with much higher resolution inputs such as object detection and instance segmentation.

Figure 3. An illustration of the DAT architecture. $N_1$ to $N_4$ are the numbers of stacked successive local attention and shift-window / deformable attention blocks. $k$ and $s$ denote the kernel size and stride of the convolution layer in the patch embeddings.
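The vanilla-attention terms of Eq.(10) can be checked directly for this setting; the snippet below reproduces the 79.63M FLOPs figure and demonstrates the linear growth of the offset-network term in the channel size.

```python
# Evaluating Eq. (10) for the quoted Swin-T stage-3 setting:
# H = W = 14, N_s = 49, C = 384, k = 5.
H = W = 14
HW = H * W
Ns = 49            # N_s = H_G * W_G = HW / r**2
C = 384
k = 5

vanilla = 2 * HW * Ns * C + 2 * HW * C**2 + 2 * Ns * C**2
print(f"{vanilla / 1e6:.2f}M FLOPs")  # 79.63M FLOPs, matching the text

def offset_term(channels):
    return (k**2 + 2) * Ns * channels

# Linear in C: doubling the channels only doubles the offset-network term.
print(offset_term(2 * C) / offset_term(C))  # 2.0
```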
We replace the vanilla MHSA with our deformable attention in the Transformer block (Eq.(4)) and combine it with an MLP (Eq.(5)) to build a deformable vision transformer block. In terms of network architecture, our model, the Deformable Attention Transformer, shares a similar pyramid structure with [7, 26, 31, 36], which is broadly applicable to various vision tasks requiring multiscale feature maps. As illustrated in Figure 3, an input image of shape $H \times W \times 3$ is first embedded by a $4\times 4$ non-overlapped convolution with stride 4, followed by a normalization layer, to obtain the $\frac{H}{4} \times \frac{W}{4} \times C$ patch embeddings. To build a hierarchical feature pyramid, the backbone includes 4 stages with a progressively increasing stride. Between two consecutive stages, a non-overlapped $2\times 2$ convolution with stride 2 downsamples the feature map, halving the spatial size and doubling the feature dimension. For the classification task, we first normalize the feature maps output by the last stage and then adopt a linear classifier on the pooled features to predict the logits. For the object detection, instance segmentation and semantic segmentation tasks, DAT plays the role of a backbone in an integrated vision model to extract multiscale features. We add a normalization layer to the features from each stage before feeding them into the following modules, such as the FPN [23] in object detection or the decoders in semantic segmentation.
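The stage-wise shapes of this pyramid can be derived arithmetically. The sketch below, assuming a $224\times 224$ input and DAT-T's base width $C = 96$, computes the resolution and channel count of each stage.

```python
# Feature-pyramid shapes: a stride-4 patch embedding, then 4 stages where
# each transition halves the spatial size and doubles the channels.

def stage_shapes(img_size=224, base_channels=96, num_stages=4):
    h = img_size // 4          # 4x4 non-overlapped conv, stride 4
    c = base_channels
    shapes = []
    for _ in range(num_stages):
        shapes.append((h, h, c))
        h //= 2                # 2x2 non-overlapped conv, stride 2
        c *= 2
    return shapes

print(stage_shapes())  # [(56, 56, 96), (28, 28, 192), (14, 14, 384), (7, 7, 768)]
```

These resolutions and channel counts agree with the DAT-T column of Table 1; `base_channels=128` yields the DAT-B widths.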
We introduce successive local attention and deformable attention blocks in the third and the fourth stage of DAT. The feature maps are first processed by a window-based local attention to aggregate information locally, and then passed through the deformable attention block to model global relations between the locally augmented tokens. This alternating design of attention blocks with local and global receptive fields helps the model learn strong representations, sharing a similar pattern with GLiT [5], TNT [15] and Pointformer [29]. Since the first two stages mainly learn local features, deformable attention in these early stages is less preferred. In addition, the keys and values in the first two stages have a rather large spatial size, which greatly increases the computational overhead of the dot products and bilinear interpolations in deformable attention. Therefore, to achieve a trade-off between model capacity and computational burden, we only place deformable attention in the third and the fourth stage, and adopt the shift-window attention of Swin Transformer [26] to obtain better representations in the early stages. We build three variants of DAT with different numbers of parameters and FLOPs for a fair comparison with other Vision Transformer models. We change the model size by stacking more blocks in the third stage and increasing the hidden dimensions. The detailed architectures are reported in Table 1. Note that there are other design choices for the first two stages of DAT, e.g., the SRA module in PVT. We show the comparison results in Table 7.

**DAT Architectures**

|                          | DAT-T                | DAT-S                | DAT-B                 |
|--------------------------|----------------------|----------------------|-----------------------|
| Stage 1 $(56 \times 56)$ | $N_1 = 1$, $C = 96$  | $N_1 = 1$, $C = 96$  | $N_1 = 1$, $C = 128$  |
|                          | window size: 7       | window size: 7       | window size: 7        |
|                          | heads: 3             | heads: 3             | heads: 4              |
| Stage 2 $(28 \times 28)$ | $N_2 = 1$, $C = 192$ | $N_2 = 1$, $C = 192$ | $N_2 = 1$, $C = 256$  |
|                          | window size: 7       | window size: 7       | window size: 7        |
|                          | heads: 6             | heads: 6             | heads: 8              |
| Stage 3 $(14 \times 14)$ | $N_3 = 3$, $C = 384$ | $N_3 = 9$, $C = 384$ | $N_3 = 9$, $C = 512$  |
|                          | window size: 7       | window size: 7       | window size: 7        |
|                          | heads: 12            | heads: 12            | heads: 16             |
|                          | groups: 3            | groups: 3            | groups: 4             |
| Stage 4 $(7 \times 7)$   | $N_4 = 1$, $C = 768$ | $N_4 = 1$, $C = 768$ | $N_4 = 1$, $C = 1024$ |
|                          | window size: 7       | window size: 7       | window size: 7        |
|                          | heads: 24            | heads: 24            | heads: 32             |
|                          | groups: 6            | groups: 6            | groups: 8             |

Table 1. Model architecture specifications. $N_i$: number of blocks at stage $i$. $C$: channel dimension. window size: region size in the local attention module. heads: number of heads in DMHA. groups: offset groups in DMHA.
T2JHfJ/leu9KBFGiPAcQKUZgmTowVgh305adg+XWoOamD4QFVqDd12bttx+3qRfOrO0TZz/u5hb0XKMlMIvByRYw23nmSJHyPntyi3oU2LELer4jRa3Ip39fzSB1sgFEstM72TJqpwR+8RG1sUNVwZzplIvfjrjTcx9GtEPW08dYl1jiFXmKg1zJiYGhGl09vQLfv/rMn4Ni9vqnVJMym/uncWfCoGIQMbJy1Y2LPEenGce5TjIofOKZmqYlcLW37rPCF+K2gzR1A+nxWlN27Ukh9mi/3cbniGJxbaKpTfY1xCLGTYOQieAYdZk2soAfNZFUFkrcS7K/Rh7GSlYr1mEgoomgzjZEIgVyfhT81J97Miy+9cCalDqHdCXKSVGUDAdKUplzf8kjOSmsougRlRDhTjwt8KgxfwcvQIu5p32RcXs4AI95juILY7hch3KeY9Myo7cJIjQbTgnxW5noKfLqlKuVaMctOix2Q3dr/Syt6ZyF7ZnoYwhsaVeKl5sC6OToak7VJNZv4yJoggg/JYMoVjtv53P1r4l5wVrsmA1TCuqDk/e5zOhDEFy0dwmJFwOkt1TEP0oP4u+tQIjtdgnlPBeoamfWJZJUKY/MbAYFy0cHgNslLfoG/qZkgwCbfreR6gYLBU6DQlrSsRHlS7MsJEqj4JdHkgjl9dzJMwiDn0dmFwCzsU6ukvAOcapdFK3NcCbtbRrTBYFTcntagwx2NTD0+ky8fVcR4XNoXd8aiEw94EZYJJFYT/uOZGfe3Pr5NNeaTH6V3rviJFwgRlu2NWjb5ERnhJm9ecTqGzaN5w306aW0wuiLSKNMUd4dQUeeBRbh2gYExhDsmBVmDxITEhi1zyd9QSmMJ0t9NRv7SWrzLX2I4GmCFIRIvW3W0ObFem6AMKX1C6bwNPvVB60wfHCgmwAJ3fvZ3jNcHlzr38CDzBl4zEyHEgqn9ucXZwNWTo1Z0s6wKD7TT/RNbnuib6CExARcbOiJOjfBaK+LOQ5fFKMQzFN/ouzFRbPUizXoUPoZiVe2hywUDJCFexe6g0Lv9Wv6GkFBIpQNmGFByWErIeuRdYMX3tM5zMy4gJnKqYlk3Tuc08VvM13ztq5cjj/nsJ44dnvlon4DTp9/jrx31EfHSY1IycUlxx1yXjMXcFB5Fe8QY3THXFa7ZMb9BGFVPj5pjpGGKK4WLo/grAGnL/P+KB4f6vWRSdLVjIo535aacf5bOounLY+ahtlcEIYJbApJB+ubfjT/1m4OkesOZj5klFOywqDxwK+fytuvSk8ITTpUSlNQzq9tn516R+ZKAKwriu6Y52y+zFxBA3kcZ4heNZPkzgWfCDrofqBoZsDP9gNvPQfLJNoHCa212h6qNFjyMko7FUhwe+pfFKike1a9T3IJOoAsyuPQjqYwCwCWrCN/PJLvVM1tjNwTZOYmGpxUnjq0TEwdLX1XOmTpDdSMKajSor4l3yVxkjdjIYCZQdonO+/KN8Uk3fjSGTiwtyei64as04/m7H4NG9bKAflNbhmF43dYui18VphCZ9yn9wCpwMooPw0VqtZLJCMzGQhWd67olhBgUwgMxdwQo947mV6YYlcmMLgMOr0WcfzcJ4KcZ3V5V9Psb7C+YIUnNJqiNb7+XkRlx0ZfuGjKsFPrkGj1mVd0aLjlcsbr9iJPm4ZEFkYL+ggVQ6GZAXB7acG+gkh8djAPABJqF9ewulJFk/8QAHS8Ptyg0Q8aemZ52O5rVksKkYZXDojW/MSHoXYy3MC3xH94Axnp9p4ldq4JlBLBX6DZVJgAIdyE6UEPeFMN0GDrKkinwpVrq+8Hm6uSwEBowzIlto7P4GAW9UpTPPmkOi/cpylMMNGufHGAJyQhcH0D75cZXrryXmSMp8+YY5/eDkl/KfGufFjrSrLOIRBg9Mowjp67Z1H/Jkgydi+tJ/SCM8DPNBOQY1RByEHIYXbsOEGVpCXaERtvE4St41oedExuUM9T5NnX0VATmUuf0Fnt
moIYOOdlOozU8w1J8kY7WMekChBcQ9wpuGdVrQx1jnS0xXZ+AIZhsPzpRCPB22iwOFlKzMSHNZsOpOMOQiFaDiU2XBqgekXTDwaVnxlI06tZBHEDxwC/tSAQyw+xX2TFc5lS4w5wDSULFLSa8QNavGpUmzlNntJ8k62zcI2T48A7KCDYYJS7lchkQ6W+xwlwdSau1Vq5OR8a1gKD5VRZQZmiZwWWkBYB4XSOo4U6FNRjxYKbpkllsNZjX8LiiBx9IwCrVltmCMdD2F0oaqB1OhGeJgIdT8A0XeN0N2CaJuIdrWgKALUUmA/p8CZCMgLdQ8lqnjcPY1hQrkw9g7e7QZWt++OYGUkitPJrhkpsC8//sA3YBCbYgbeubGszCoNjHZcosAnM7+uIvXW6SAgqq1kFx5hOPJPSg+zgh7yS7lhFjhcrlfmHaFiwH01TIJ6OSoeeOEwjM0dJD9zZ5JPH6PEsPjhq9CW36QzWEvUjNVA1b53ZylcqflPq0atwI24HDRxEJ98+fTt9Rw5TilpLcO+wjwGtVJhECxY9vGeDmYNb1Dqxj1tKOQzj0Kt6/TREuq/4YecVtunNp8UTWqEItEcomKsMO7ER9HJWa1G29r6svZp2EmV1Je2H0c8CMeCQWUxBEu7wZWeJfKCkuXNcoqvW2UafGOHY/mmO16AVdtlfLgp+qRe4vqvn4AJ5Qko6j86Lh4ePKilGeewIP98GWtRDVV7LCU/lZ0e62ifbpqBFb1+AXgCv8elZ3DMr4M8Dvn9NxoDj3WCM1pipg46DfYHx0RqLay6hgeE//ktbndgtpMOrWhKjexeCFMIr2DWe4sdL8BoNvWdVXeYMeyNqpm9Qxb2/tScysc+H13TsDb+DKtc5IRY9h18KFsSINKUA0rEXMpzpDIVDinqSBXdwYneCBsW8Oyoy6hjYrubEfByUJgCvjlLUvAckbjMFaxXeV8Jpx4v3kC37WhRyA7mvpjc0+HINb/7g9ZaoU/lObEN0T94wnQ/tOXkBtkPJocWKS5xB4a54MY3ydhF5M8RHmMarCsjttvstYybjnZl0SOdMa08rhfoii+4Y9nlR78SQ8hoDmbhi5s1hp2N9kipkg92a+DIl04s0ZOdtjWBnPKmfJBxbVk9PnP0Za6B/G2pO+i8F3+Fvn7+bpw2qZnQ67upPs54nPwGPXy2itw+MDDNfMbhtCcAO3GwwZQHjpqNt+O7sg2bmBrW0IuA8J+xCrTtwXUDgDzntb7rKM/HYtR6AaRyhNHxXaCQBlmalXwmWh8u72L755uN1B11/bajNMYjbHH2ROGBGNvCwmIPDZKoEfXpZHql7GT415aQhhbnme9lrdlsO1YLCVnNAZUqTFUOvzL46ON7wZe5OPBPH5OzLN31InsBYiaQMHdp8s6bTh2IfX/1VUdzgnUoi5bZV6n0i6jKepuS6kpkbkHxTLICYrK7wgQGUGNewRn6i6TjmH8OE4UJPXj6qPd21u/SFa5P79D5ij/1DTbeK5q/P9umjb791/JrpNHz74mnf9IjoSreXRr70+UYHxwCaEIemAe+9jRZ3FjkLjtzS+/iVS17VBCoDeOapCQf8+dp684TA4LKTRB50joluiwqYGytwN+PJnp5jwoqC5AtaALHmmoXKW1ZcMkOsPw9lbf1EIaEmYymFFWcvHCuwFoSvbJ3dQCucjoyMxOXDm4qp8paKRZZKLnqrmUoUalHsoqpRBSXqQTdJkzZK+SzvgOSmEe3jJBJx1E7Zm4lFSqqciSeWWH8ub185+uGEKwvij66rzasC0fxrkM9qPL2nlK/5GBSG/qSS1Jdu8h+4/7Z0X4NVjmesCIubn6/wfP+Fnr7n/DQ9L5fj8/Qf2CduEHimwoylpMVvrD5kj+77k+5je37NBhF97FLKOsJ56/eqZtf0iF/T9CNJ6EPlhCCc9yuHy2JJYi/idoibbvVFbnT1/Mp1h6IK+CQRNObBHt7ECAd5YsWBkjcIvXBkCeVcq+DS
BJ9SaCAT3HydQNDghlVMJnzc1NNDrf21BCghlYNVW0xijMDtn91G3NfaWlf7bWFz5KcoMKiwcGEeMEn2BtdRb/ID63MT7wctZOyPA0++eui1qRieSNlSycfm3G+LgzbLvDx+gH5AdHdTtSvZFEn/bCb96Dv8+Qhfq56Vn6fc/YgJeVzGk3vaQB1/2sm/f9bZR0hY+czavWgdu+gdV8z7Hxp7FFx3hDuYKtLkghsOmM8PPeb+XPH5BdVNO755bLLe8J/mGGAPKKMv/7nWY3GhDSfmjV6KtthVc1qEo8XiLNmSW2OWWa2jmB87ieEE9hktSMrW00eRyFiR9+p+0FI670USsljYVTKCOxWLM6feSA++uwlF+DHlYxGT/JNjctg+D6NisTcs9HVZWce7euZpEQakEmYSsirSZvcqwqNbrSOKoM+QdkZ5NrJkM6FRDJsRi7EeB4x1IBz4WPqgo406nqnyBHUmCKv9zVuU8qxc0T6bOtkiA/V0qZalwspA5yv+YddUi8xa9wKhHPq2PDc+jpc9w7PXh/laFXN43979lbw/W0VrULngfvpt/Y5nxtV7Sjf84/sVUnDq3jhe+0wWAt/Fekt71trjRU5LWsCQnpC5rpzm2hTXY7vSK/fMHaAjFyi6Lu6eBFHulB1wg6gJAWZPL5ddOpxdzMYkp57Ojo4yqXZTv/Ps2heQ6p08Lle6ppwBEidJGOynkV7A49YegGUscY0MLSLeyhl4ySjwPkYV+RQ3j0LxHR8b+gFZ/ABIpVsrindbVK82JU8FN7D/9O+nt2/Yc6+XHlxPB0mN8dNXdST9om/VVn55XPR4VAh/Jo7RaXsmO8EpzpfuPkpXPAt8PcqGaPHjljtblfRJpYMYBtRLrs7Xy6yAzcV7fFxzzkXXuiKM5pNoj6+58GWJBR8I3vreeLJBz2uJdW9m6Otm8dc7bnx8ogiE56FNZt+sZF9icwxN9lnebXQc+ZXv1pZKNTI7PvwgR8guGxGqTotIm70AuoZ5brfz2F6RKILDeSkZgoY4VXG9mvdRIjan5kru+VKDUfDwdqj4pgDRV2hPvjr7raFLky0RLleSfg2tmMnxQ+c8FygC/DU0bRvA29cm/VYFCQk9ukqj7wSJP4KeHlPhXgCGmRbhM2Dq0E0wiSGys0JJPaTQykviAk3jSvbVYZvBEwT7gHhuEkeR5APamhNmd48RwtrHFTHMPCCcIwC66/fzJxSIjuROVYbchs+l5KaFeQ3O7vJI/l6tdEdkfIz8KTYBw2xA03z89GgMFC8pHFBOPlUKbytq4LjDueMlFXxI2McCG1gLP3FCp2QOqlhOScQKxquJ8Fesm5LXZgeYfbuMAYxh2yh16oM6PWi2viDSoxDRLIpiZaYNqes7TIXNsOvciPHtVCUrWucr6Xob6Y84+maSnCMHQZSbHUcf4VEeXVddCDPGtU5SlMcOZy8GfpQaCEjech4CbUScuT9FuUaGfXqhEtZLNyEGH9i3oCsH5/bR5Qjlf0Zbh+tVWrGdjwd3DahXjEzssMCdIoJdOANpARY8Fc/LHsdTwugbVtFo1jLxopMpEUb1QZZGc4HweNNS82ZKSJ0vuohlI5WCQQTc0Rx5nZGlPmPSkNXIz30xhJgdgtq/fCXJawOmPFfkkxYPT/TK4kfoSpQJNexKITS/y7xIaM7Stk14yAIvGQtQiwG+IGGImNptD1VU8aaTm4NCuoJMu+U2G57Y0AEsx2J6CAzlJzZ7eohmZiJkQzXIyw5jaMorkVYQKZ60GbBXttY+/dVPaXBzsdIs+KbHH7OQiEE8Q2oOxcW9moZU3k5uuYbJk+aJB5ptmbi3imLi3HSSVZ1QobSadli/TffbfAo7mkiYBlCZUagYuc5CeZtR3gGYyvm48Y4si5q0DNsvSJdEeBLKryzAHZBOaO5dg1VuM+cyJx8z60lgukrEGBmLD+2aW+LKHbDYM334uloyFdcjJ7ZGj2wVGGB9sew54X3Gfz8piQLc
rH0bySt7Wc/QcWUgmSfNtNaYJOY9vo4r0cWhPcZbTZ6xpiCT8FEagrh9mWoMuFOfsm5SHGMxE9xlYMoEcHgTmaPRP/Qx5oZqNDGn2TTP180zBoZkbBldv0pe5qXJrTHqwpTMV4YdXfDEGqv/GKn9WlfNXOtOVizwzBTMNjV445lRM4iCmu1m6Nmn77QUFClVyc/gDYNfSncfIMQwsJckzZODhaWtaR14gJLOBEixMpdvYO/78V+NMLqJS+/uJwhzokKbMEDaFa8TgdnPEaorsXycmAqcHscmFjl6+dPmtHsgTTh6fvIBBpQ5IWlmAmbB6dRlDpiCIJRT3AwF5iKjbiItgpgqN8m0MC0UvR0/I7RsZ6fA0Oq6FbVUzK1VGpVZss5ScYVBrW59R2ski2ZKjLKBzEXLoKjxtoZnZW7Hy96UI6GJkXrvrOSNzmO6rUe2yvC5nnTRUyKGHEl5kxjVOTPcB9ITr7yNTVopKp/G5GQt5/40BHztRFH9Emr7SaQ7+etAmCIOGxzwcP1cYLZRJhnbuarRmVlJkE7RX1nkSVTSS8wIvFgu944Aq1yegzkxF4Y7nHGRxAiIV5K7zcOmYkzSGwkfdRoa5wpcV9YNiD4758Z1D/oDZuNjjAvVIbnLavUoFCJnU2MxUbrjOyg+bqfqshNTEaxwNuvXcGBbL5kPgvAStaqPEwSf5NQN5CqgwISDGAgvkoURb3zL6Rt1YU2CFQscb0Qb2co8UDjj2iGPN/LDvpWgBcpWqxW94ZODFX2PNBi4dVSqIJH7WhEo4ke0h3eAmnW0uDCJdUtuZuLM659APlv70+GXrBQ0N9AEwZxyOGCD5z6lMDUPP6CeMGqagNO0h8vF0tQauoeYiKFJHSZoX4zpVmGfJN3X+bZXvQll6rzyHL86UIWfHNJqI2LHqihtCgi1Ojudndf7hhVUchDUipDuysdtcmP3DK0QemW41q6xw6qTESCb3ZLAuHabr4k4rk4xY8KKwluXareivON2TN3DxUlE2fcBW+5Q3xpA376guDR3ZUVMB7wFhMh+ZaYOABjXkzXXDySJq7f4dtFKdY4mlidjvI5qm5wQPEs5gZunh61YckArAtM8xFEzM2quOQq2+k1fho0uTZhB2HtbOkIfqbUVtk3rNA4SkEDoEeZEdARCBzcKcBm/LPFCrxugrijXQvoigj0yNwiL3oq4Ygbp1I5GuWlNWM8PMcjHlz8GHpkAZ91EsphsdpTGYXoDslD2tBlxKcMDGG8ESfFBuuCJLhuUdw2lhAeIyHIIsywOJU2LbzxcVStAgTlTFaUiWQCCvcIAICtnXQQv22qlbVjIKQlrdxTYAXjbPPWXDHu4RiqLV04moV8RB7nGIe37nA5R4Fp+a+MfJmVe9ekdNB2OYGAlwiBLw3nbfIUPfcvK3oE9i4i+2NMepYpv+BJ1lCO4WKXrC2cN9p4uNnyQSjZ12lYY8wZjRGle/IA3bpCLKqwkebxIAGCYMW8mGKSjCg1GixkIp0j2wc1CrdyQcjBWkfWCd5niAkVl4FgX9LYR+YlbvI/M69w+JqaSZAmoMvMw/Z7Osmanl2QVY2ko2cihTm0Yk6Mb4olSbzxAV6bQBxdtYS47dMRbktx+DYjBnCw1YAC5bWUyVoJEA+kIDG/xpJaPfD5g45L1afUVe1p+mHAaa1ICfOblct4PerOcE4mr/bxhbypURc082G6gu8k+nKrlabg/E+M8v6l0owyOb4FoMw633j7kkR7ZwovPRvcfu7pmym+jTk0WTTp68kPbIiUrpuHJLQgxuyohZIqGyidh32o0O33KXPip9Y7Llihqax2iAW80LHMENHgCs/B1kbebMH8lvzca3m2ycU0+5278EWNUhMYoWPE3+1N2TWTJTK++Jr0qmRjNiFzWlI6cHGkxmhkPwfYQ5bJ+B8NbAj+dQwzCrI6OwAQJdBk3BhjANVKO4rj6IeEKjNLFGw1tQXqTTVXjpsOWQ0kxHUgVfdPuDvk8
o6sFKIo5yoZmoIKqLn8x9mM5Xm6WcREAEkCEcOgO8I730j74epuzzN9uXGkzHgaIYFbpTz6vh5k2K08cRyeIseIccJdIHmhRc+u7h+rFq8ZeKuBo7ceXi0NzhoxCv4oC7AQ9wcnRJPy1hFstmsEi+xZ9tEezwAJaukgxniwmAyouPkTSFsdiLtYn5VJPhx911pszwt1GAswIc5MT+6lG88JTeMqbjLZB/ZJE9ng6G0d62a4vR8tL9RZCbIsX187Wp2QG4zUPpQXHfRq9qlTqd/xGkRLzFFtdKv4cHfD4a2fEVgua3fjw07J+Wps/mVCP8doEBoEVoIFmcBZJ4OMoM1zdrdyUelhfgycdMBRVPtOoxQq2tfY3TzvhpDDethfQLSynSa1aVig6NIHZ3swyo6m5rOvyxa0RCW54yOpSbLovmOteiUYmPRQmpyETDP+SXmRbNi9NfQDf2gaVCuMRbhHsQYkaRvIpnQtlMO9FnFlsMoaANXfxV8qlR3zhvAHLkS23jfDntqKc52gEmfKwReEPLaxZCsd73mIPmzIWXga73axDPR8E1kQKYcqTsqo4nISobutYUf/i+pGA5Kp6PWs2AEeZHiaVYc4HZF/oA/gBSQ7wD4uWig8MC6sEU9S3zhK35unwxKujaEWgChXVD3rNsF5VsgEYVvhGOacZUMCpcCPpjzcnhZsRBDE74eTbDfD2fTO8viSc2ASWjZgngJx2KsYUcnQuaY2R6F9NgoPIUZ2nUhdHgDRbsG9nfseFrY0DHmXEQQCwF1uu1Ds4FF8oHiDFJhwSEc3B2mMjsj3hY6vjKNLOek4U6Bf/5ZBXfMVbh/LKd9N6TZGaR/ioD23oE2oaUcXa0TtDYLN2kpplRFimWqOOUK+pb9qFXzTzuR2VYhPMLI/RqFw1MbClolrBJjuRznBbLQMHrVmcbWhv58G4nEa7PjjRtCAxYi819iB36BrPKSZ/lUeXkbl/FntN5kiG56E0x1LTNttOtTepZVZvjVD7ht6b3Sfa4bc6LH0n8RrbYzuO3NzRpSXojVGtQL3q4RkePZElVIKQB+18IQzOhqe5pCKkZ4EB4+cbbcSxmLHGrY71knMeMPXDAoDNwL/PRcCt9Rmo6IhPbCVyDPoI2m2neczTVcE0jVw1eZCnGYAvrKcIw3Zck8GvCDvwJBf0dqR1fQ9xIi6UZT9aq0hPtsHUevMt/T1eD5RxCc+gQSFoMy9lgYPSFGXtWbnlI6Qj1WGHtJbLwm9JQpZY5iodZtGdHkFJpXfLDOxtckulw47ESe7N9zN0yaFHu/SecdfiXlxD8FWyiC5+0kgnOedFhM7LCN3Ps2EBdUMbi9kseXhpO2S3p1DvWPB6e91N3F413ymlf/EeSx2VaVrXk+FeU9yocjcUBcMGhWIfYSqWdZEzD/0THvWm6Yr1Ml6N9CQb5A83iib/3Ney5OJk5cZlUJgDZF2UCEAVli2TKXBQFJzED1QIhADYC6d05Jx2Xq5YDfIW4V8F8Jsh+R7CDRnec/PVhmC9JA+3PabpTPJNm5B7mWVcCr42g/V5Mua3Yo/Ks/dEspkASNqb4PdZeGhXxi4oj8TRI9zR6B2mOGXmjLwXfStyAWtctsiVyJj2uxmYBDLbweC6m30+BHqzRLmH2VrxNKgIWs2WmCueu9AI+fDZ0acjH/1JitlX+TVJqVIIO8OYWC3qLy/Gk6WFQl+jdtZA9NDufQotq8+834VoQe0ReXuXvmUANTApj1+lKX9uE0kOvYWI1sqbJ1Pwoj4x/MwkI5fwVe6nvYbO5COiD6PVWh9ETcjenBbxBSp1IGmhVJps33yFsNcuF4li6SIwqkR482Q2MHHZv2FWDAy0Yy9av1He8whMRPjPUTkWrwMn0p4iyytebni501SS2H2a0Hs2Z43OKYmZ7O++i8iN2QjCcGXBZlC9nbYPKSxPzccyzdLAkxM3eusMhROMjlpPlVb2jEQ1CIIiCMsxSZdyI3szqEG/Ynog
PeXsYoJ05cxRHWwmJq9ENBEHqc1Zp9oRZHwcrweCkGeWm+3CkF3UbLsXCgGp9q9ubpCIw8W8sT9el0nVqAa90j6sTUHQjSOE8MYcWvMINn04AT+Gr5yUOA28uhwjukvlWItB45CPhPUNuDGKHtCHCi1bg5aMz2BQ708z4aVGnlY77GvxQr/icC951rf0liV4WlFeHf9UCciV3x3Wdpt3IKejfIVDsmOTXIWFacmPNqfh/r6XkLJyrk+8p9YbVuKJT7KXC4oXvuE56CllU5GPb+FDxhB5EWxqlbrgVoOratf3D+75wuxP8WxSoQyLo8OTmRZ73ZSaxPaYjl7J50zwuBXEUXwwccD1X3woOdfRXwSyOjG6l036arHCsxqse/Dp1AjXoTIxoqb4yykRyE/0Q8heZEZxidTRmsmreoXthnaISEVRADBMaShNYKkZEIycB/Qo/7QFMClWyOxSVnZ2ODWaIcaoV+vcKwiKgk+07dQKJ+Q6El/s7GajWuAF3mFoQtl8MHYhyLcsCAbitBohlSfAOR/YlSvsdHM76j50u88vz8uA/FJOZObKCvA1PbOXoq2zYIZZj1vhZWMaGDHZ6eoE3lt0GyfZ+DBBWJbJMa+r8KCVfRaLXWRhpkjGRsf0yPCiYCw/lfCAcHbx5ZJDYZOx88SsO+QTdDx74ByVqRxRiA4rzKgEJnoe8qJ0GF0WcNtTUzMBU2qe0aGOf5i5huvaA/J3IBgQHuJa5v1MYAldZSBaE5NSDtX6xLm6w5p1YkowC2XixGs+xvBmHseWRx9lQeETuFgkf3JhrE7ClzsVC1vt5u2ZyKVadC3IFAenDUZvh3aO6s4t6W4ju5CYcJPedHl7HFUvg4JCb/CoKrBmdoGw4NaYSsZKCH6TD9O5tRFYofM63Pf9UA891JgcFCqD3yhTFt7nHH7t2n0hB/NckJf4Gt3E9oPW+p4MnuhuwO4JMG8T+JakD8W44SNqMHgHMPgznZgdD5AFHRVQ33rwpBUScHrrsw9IsS6+BXVcpOSpkKVz/+DZNqw0a5B1YytIny5Ukj4gi1BUuvSsNIEC9tk99YRbKYBWEkL2/NVmOP+8cugNXsiOMtn5GhjpxKyHEkiIheVgFggfcfqkz0hRjnru6khT0WKYSWKWQ2RRvztHuMtzc1kcYay3T4Kya5xpVzARfJQrudBbaCxyb6XwWeKOec+99pNhwAWyqDFZWe4kCQq3hIwcIAmehpPhyPa+Z6WKnJpGQ/j+0vEo7XYCuQNjiJs8YhUWEQePJfIXk+m99iyf8tK09+cJ8VtJgvyzBZjMzgowr9HlsxVpiEgP60osGGvdFGeowryFztcgVPhaMoyGFKLCHs1eSk/VprCfTWW5X6PwZSRKcHQRAfYzj/tVbJ2R6dpUXUWq5nT1wB53/A56whFRljitDRUCaxxv71A/J5dIGzJTPYbcS9IS0pDp3tAmjcICIXqqk2Sp2XGQs2pJDKQcK3KQvPJhIVbOhp3PSzIpu28Eova56EwV8/zgut1ayP755F5E38wPz2isFw5E+qquZgO802ei7KfWArsKTU9h6qmzJpveO+cnEG9I89T61m5MlIHWePhIjeFDwQ6YSxUSh59TLFG4ZwHblYikZI3yXJSJjmODxPNtwZ7i3cqrSNUxWBCXOExMit0KAifWu6+/rJ2sSSqmdH5oSSoUqBW1GSb90B/KsDgRkh8vq7MV+RwyIEHDTyiXrG39fBvRs/UP1Qjheq8VQO2EVd7Kg9esvM8X8r5DNzcIToCwYeJnB+n4klzQue5pRL7LqgdvRfbAgU59dp08mywzdgrogRq/UkISP97AdbZFVMsJtiLwvkKeNwAN30uIgZCHgx035jpT2kup4Iw1OG4Gin4UiSiCl6eajr2hpstgTktam+p/Xs4yr9P8BLktUHY9oZEtiFKMJ2Ad0HslgELgqIUY90TL0jY6c1ZHFtrkw0Xx5skww9SUdYL1VQIYEn4Vbs94MpqNIksM
OcmebqlITgxby3EBjlse1V72m2ymvLkRGNfDEAy+pvTAXnSDS+jZK/az63wpn0IAq79k/4ZuuSJeZEzjBscvTxAtPjo+AfzJlDdEr5sMM/yGzWL4krVr5/vsRVAt/ZPGxIt6mMaqkTbGSqSARZgQIZK0rwB3opA1x+L/Y+69tiVHlivBr+lH9oIWjwEgoFVAA2/QKqA1vr7hkVVD3ia5hjNDDjtrVZ6TCAl3E3ubmZuVivz6+vx0IGXgvRvaPyDpYeUXBK/Vkre3I6ZEbx03Z6aEyFwVfCrxQ3TaoxH5j9R6xHuJHSUhBSk99F3zWhFrpbLMlG/FjtdwGkpmjRBv5Kahllwl2C0UI43YCtWjd8YDT9KPWXJTRDYpirhgddrRIV49i7Lxok3R6s/uCz5foHJHVT5QYQxvbzem7jCCe6QHk9xVp0ocQ4tAZcZPCLaeVimxkUhH6WQJXhiiTTXXfVGHg68aKeok/e7i+yDI+Q2UCUiyVt+/yC51YI6ATS9vgkKFbUGoPmGy+pbJyjGuNzjJzNeGWo/N4/iYhw7P4Js9YlxkzKzTdWGOpW6Or7S4c/x6FOYi3gUiQFaPgRy8ic/Dh1G30Tj0l7mBYXTLWcbzY7tzNNuUc7aGmT+haAnoAJQ2NefwPU1ONfCDcNBQHZgiIWG6WQNCLUiB0RjC06fR5FqIl8vT6V+10GnSNlLTbH3IQZW9M2rJ9GUQx/Okkjx2Q6Nonex1wJo4B5g42V2yCxqct1Fuck2pNLtrAkhC54+DAwu0oxs3JfMCK3gvuVPwIc5Rxn2SUoyZYxVyKU/5twwvM6HHSxom8jxkDDF6exQkkFqW00gWjv1jvYSSGx0f0h/5oqtHis7GAOcLrBustekKs0aO22fNzxxCbqrefDXn391onYO2qu1nsngZ3Wx4VAnmLcmcvUnktTW3oB1cNTVvw0RyvNzGHEOT+5X1ueoAM81pn9AN6Uyun6+IIR89Cb154gvEwNZQn89IcIUR7RFjfJ+jaqseT0WYsFCYiRIiUdyPiQbh2eyx/5wWEG09ZtAbenEybv4Km+kz4AhOn6jqK03Up4OLl+uP7wf8UKgdt3lFt6TMQrb0IYZPc9lCuLlRy4cJGh9wgJ+jBjC2DEyoDU7NlTNe0qm+TaSY9ZuYWudhNGidq42IFaXRO99qw5GTCIUthf6g81Ml3GQFqt/GtaZ+Xus8MgXt3yUB/eoZGXLbojzVFoxLucF2Fk2wZ/QV7Uusk1uAGPgGiW5CFaqdf+tvEm/lFDPZOVpfmg3lhCUPuaDnIcXJb2mH4PzdZ1bVqq04vhFaLORAFvEFYjLM9+XW/dbPVX2y7GmxastuaMTafBcXt0ThJk032b4M7n6SFc3+yn8Qp9BGQmS9lQgHg/68U3X+yBUjhPwYysdI1ggdrALVMw48oIoITXHN2K7c3RDfowVAEUknP84/PS+Tu8t0p8wGeFHk6JPZDYIDxMUEqinKV0fCBoZ0B00z73iRpdXzDfowv9dxV4vINLluhWe1QwKdY1SgAo1LaBH1kSb72P13BnN1W4z92lFt5TnwRN+5B1CRv+1c4zVx9ZrZoxghVqngfuUixhrsLYxBxwwDywQ2fAahPEUf40NxCyDbIl+Q3IOjOfJ1z3oyFFGWqHAoETaUJG66669Q/rwB7R35w/9WjN3u472BElFeuknzaytllCOKXfldWcpEnpVCpitSZTBfKUREJ0nMZ23kD242Zy3SZ1mINI+q41jTr7NRVOIO1NH+JSTRd0j5V3wG/Yuxm67n2nN2bpYgUjtMnXMNN6wYg010PJDnPdGMqFHkrBozYVUWEl6oqH7U2dP4+rRNZupfcHd9mUS12dUmnFNZ14gH1WYnjSMtwLDvpEvuOSTj8LzNRZfYJQu0k4XLWlGs9c2O0Ia903BYa7caaEtioWwJoV5gV/ojJ/NDofog1uXwWDv54lrkdE+3YJ2d27qrlbd8xRCda0EfWKatRFa879frXTPS
m92MlGUByrLDSRjFxFVcPVuXpDG0/qHnpsHcOVZwx8PGiiVDjE3vSBrfejeWtSP9MK+tRceihhcaQBa6OZP+50/GQ6968cci6oC44xzb0I0ng9icN0oJU5J82ydMKAgs5I1UsUiisWrW8sKSfm/FjiPte0l2+wlPGOshnhkfE8bEt2684mgqd/vCTkfBtijtjVeFtQ/Do772JMC8NhC/UiE/RhZTe9P9qklVfxGSUjdnao7Rp7ciNxb3cDfuifu65JuxH+MCojiZ1y9SyLbKLn0GKoNO/sF8NfMKL+RXLrpbNIPoeaiaWWiBbOpR9FRlmqOZAUHIIbYXrWTPMfWcyYfKvmpVPmTCYppyUQ01KTnRPQpVp5vNXESVB3PETxDqzPgVOpuaSTamD4OjHNwiVEf2kL9K8fDJt3OR4pq3KlzjeO9ZIMX7hZOt9nAH3njR08KaWwah4MhCudQHSuK2c451KaiUEuOBOKamqzDuQVEq49C4NyO98P6I81Utduwr9dmGiX4zQ0K+LfLNwvic5FNJQ79jfaExiuoLxT2WR+TUQS3HOOrusTD+szit7m/16Q5nnmRsJK77+tCPzMZP03ksEBCsx/0xlAMRRWr9DsWXZLVGW53t8Vubbdtqwk8In2F1mIcpuiVtqYkabp5aJOjEzfrNCT3Z9gGoQ6QSnm6k5JCbpp2lVIh2ZFEx4DS0vLqQQ1Lc+axR4x0/cj1i+6Skv9OczrHzYM18jKQ2Ws+oIZ5Vt87o10eJMmuv0v4lsX5ae2rvrQuNJ2GhKxlpaSAKzyZSLQv6bBdGFZelqjsmqckbo3FEeRRMAiyGQLuy/g4N0X4PH7J2hk9W3exgd/ZFtdi7LS2x7Z77Iml5WR/DHxgygFJnwBDnQ2JKJe8Z1MmEZFut7Hk/0qIFKB2T1F+S8XzwFbPqVEngZJUswUxyyU4x3YJpSBCeG/tmfUL94MXnkkJiGQL7oBW7SECalFlINFuxwXK/eozqbxr35ythY5Ikg4pNHSlov5Li9/nL7dhkrdJGVVyRg5y8hew9GD5J6VuwAUr6mRjKuO5CoHmhkMCUEN6yKwLFXD5d1xb5NnZflzpX7paVaMZ3XL8A8slRNrhab23eZ6AliZIm9ZAYaGmJz0l61bPEdDmDLwp3DcpTdS5SlcBjp26e7OMra92Q07zKYkYcDr1uUPJ+k0XI6+PyCAN0nDPUy4N2is0SMJpHhx//fn0vVYfbyWqUAKqYu5jWtugSXcrX2HnvpbsefuaPfKhAh3NrIclndoTdDsuGvPWJ74HTGXHKa3gfRBW9kk9SB/Hr/jQFfHLvEUWMcxvUQB5M3kaPqqKWGn8IWKY0W7xdiMHTi3/oy70TY+/H/lAdqLobwEkaPDhhy6xvkVEmGD0LRC4t/2wcQwc42mM+fvseOV8nEi5EBSULOil0xq3VzsGO2PCTdxSUbzSU6Y+YbhMhws0kEKUscBsBddur4qnY9pKWxYVIzRerxAI12AWWOe3jYu73etVy4LW/pKLb5HG9a7pgIuNxaPhtCGnKtEI6Q4tsGC6rfsQhSBRUU6hTmBrvznPKt5p22gd1yfTmARhDemISHGHxw2Li1ximE/GCFwdTbgKbY6fsOYylmh1AOhCG9t1KYRkvFu5w1+MTM0SfHiX0LkYnCCIqk79WHMaans1LYj40vyFW+JzhGqmaRmD6uWXO1Reqdj2CJWUuxyVDe+8+vsnFTo7jXYhBbRIIIEWrKrBPGhzrKcaBcQ4ROjcRWqewwpzoh6oLwl4vkj0JVqbsU0+y+aEut+sH5ZcFIa8XxnpSttGHkqA+ylKQqGtX2ctL0QYIE9NTVjuafuFHEIKiW9R2wk8znS/RWM/AnMtoqFXFh+9u1IPqkT7+Y7NXmr77LTcPOaL08fmsxexkrHPM0LX6JglsSAvXYbFktja3EQA6vwq0XpyOBobvX21lDL83/goJr8AmH0EajfjkJzAll17WwqFu+gMhOSGgTk5I17MX
X7oEj8eKrs9mcmulgRJgtC50hE3zyKRaZo/2u/Iw0zKkK9QXsgcQevF7gkvJSskmXFya5UaXYaWFbX4vJzUuaYsaD4N+l60OLOTbrGQDhMbCT3ZgsTUckV5WVNc1qaJfRg2CJvhnor9JyMaVzb/8yH7zMw3h/FigsxZZpPVJcM7zt7xI3UL8bqxNU3ArZedbTB0anijYv36ndUaKmovIg71imVSGz1jMTe14zXKKJcZSvPQUb/bqlYr8S1h4gUVcE8qzCq93Lt4c3oByqhsefFmrScz71LvbU3E78yJEE0TQv+g039VHA0ipA7mucRCU5ULf2Gpq6i8c9pj5PWEDcgXsFVtpqOD8b6Zdu3T5b3Y5ajsjvq+E7LyRsluDH1Bq9LdXiQi4GHIjrNj4/ELJ8wSAaVt5/GFhwW4LjrzVlZ6TjdizN8qO55Fn67rC6ns8i+kWun0OTqrmW4QMqMHyiFLwxjmTUUZX25WdDigUY0AiS/aSMC4Kf+GhqVqK4xWlhMzQOU3OctXUM4CBOULvJmWoDkpFXUabgxW/DtkRv6AdH/9JWd2pHsTNwc7j7xeDP9yI+PRNI+6nFtHvlZfkabAlG5vyUYzCxy073+Sj6wFkHi0cmZxlxq5//UrkcLITl4cyf9Z0grVG/5bjR7ct2xjm6aUr2CeWO59WpsX6FStLpddw6rQd961LHAtJ42nyS1E1mlnVRgsdN/cBsWPZVJ2v1JYSiHFlJK5LIlSVy5tZeBukfAH60MGJFkAPEXhJENI5ZUhFlqBd8su8fM/jGVMY0za6Iq5mEA9/7YJOrf1L/jSVMSJLxg57ZR1SuWHqqEycxGrSepotzSMfAdtEAnYcgt4UlOFh+71Fpx8+7KjpLsro1+zXESA+Y4Rm3LeSSBII5qFVy7uwe7xf0vzJLAjpCJjkjCOluyhpNM2qKM05dy2qqvoO/UtvEN8Bia5J3TNPwFrh/Ubxh+HqGtONJLhTsnOJ388r7uMBs95eawuZxXks81n00kL0LI28vnyXD6yb0J1Ox25Dalb+HBaA6xk0tpVqK58SRJGkSDUVfaVnc6hHXcWTtCiXmaFSULZIIKcYtvwbCtYgkpv64795DeZjPI7fCQwPI87rSpjsyRkhyu+cg9kZkFSqYpa0fNm/abFhi3Jm2iSk+blsakgfNpsxh+wxJiXs876BWgR3xVxqHYyo8zoIAIOFq+6wy794wKAdP99B8n2xJfn9lVZd0D0vMq9+u4SZyhFlb1hG3uoDIL4POCTC9pu+8fJ9JDDAkTiBtvdsUewxx8HDgURgFD9x/xnlJjrva3Tlec1hFvt8s/EbUPf1Qd6vSdxzkpZOqzHkB99aD2wfwqrcStIxBAnr77ch3dOlaVN2EjV1PlxHkOITSpg2FMQX+uatCkHQ0ok2MmBzmSxBMcBsFlr2ZabkU0Tt9+uEYTvmH0GhpUmu0eRVLdMRuf6isft06ECgUqKNFEJfeosD0LdmB1xgyXo3Gj81TG8Y0+NQ4cbTmUAtOgWi1Nm6mwTPlpULvkyRFw+IGTMGNTNnFVPZr5VCzMEdoU5TK12yGF+SUzQSBE3elBNyIv5YNsK5zrSlLCLF1w8dZuvjo1GdzAYQdATsyRz3mEBQdMHZDEb7m+SlanqW/FHUIn3IX236miZ9M2o8CHX8LNkOzuXwyVvEuQmeG2x/jbIecLdefDlAHPfva0ADzySSNdW/fmukWiex6bqYdI9fK41uKYdl2ufggJ8z5OPkXmZ3gYjAYd69ep1vYuNcfv7AUERzqI9cz/83qtPVAs3E522O36i9HVxTbn/h2/Hcvcu3LW9/G2R8zS97Q5pFL94GUb4otALZGYE/uoJFp4UVhF6eiGHN6N7/ump8PO+JR9EwWxYH1Qjloco4e+M7+Ox3WUXMKweBdleZBP9ZhS0ws/kzosGnKE+Qjl9gX0stugJFQnonyZ8bN5OIkhCzptQWvDQdBV5jaB0Wu2RYFR/rZP+xBHtk4tC3
gd4zfvZwsR2wCMdauDkMnyr041EVh4/R2EC75ApXvXArsidWl8Nx2uVmfwH8LJAn39g9rZ8HPRoRFGPGmMObXvBvkpiczGTmel96x2u+U8S1ctr32WecCLamvHKGBEnf0IE4i06/oCPlr8IrskaOl3kER74etKSoIUMh/hmQWGhZWWoHZ13aELSviZ4mJe9HXeVur9of3Zmh3z6IvupYzMQP3X7Qs6GJxkLmB78ceKaUjb9KXzzHRvIxFdfCQMmvg0T9WoIgMOFokLMt822NCkq1UEj6tc6dtccPhALhCNjDbj5cJ85SKHfk1dRTo18QHJ/VVLvgSTv68V6KBCL5Y9iRC2Mj7uSDPAVnPXmeTq7CtgbWo8mhTMyCcF3vV3ElSKYsCwd9kxppjCUujn3Arlm3C8nbcmJUojXMFDBoFWlbeVz+HnMizLgo1Ou9O4sp1XkTHvGNNI5pR0++peSUeEXrZwP4+f3AgRhvys0nexAa37L5toI5HPCjdiFb+x1M1t2w2uAl3/xq9UgpCc45hev0ist+h0X9pDY4XTf9+OMTv/rnlGPxgGYOHuMf5Nllk1ym+Jibz754ETLDIDTp5z1l3QKuzPDyawLRyRO2fy0vH1QXhuKt+qSoHn34Hf2FUUTeXMbc+PascwVE9ob63XNgRlEh/lPsaftBsv4rWTS9JnsxDKaU1Kwk/b4Uf0U0710EpQYJr5/Ys34thlvFBTygeq1gDyhP75TbVWCVlBZOV+6vLX28W4p4lD9h57TPcN0fUDoQoda9GNrNCHAOkkeyPKZdz+p4BmAgwbR3CRRo67i6o1T7O8CxzL/D/pUld0nRsFTbE52UOczsC0miz8EDn6zvNNKKFBxiObasW2xFlXtk8UZLWvomcp5Q3RU34AQsOF6yy2v+ot66zcrAudoW/WWuDIZtuNZNaxJkEAbiD4VMZSKWfbpEAgFFtibhp625qM9mc0mPBqtI8B8kYNiDpp73er1Xq0JHf+c5Mz0V1U478WpxfIqmbn3Zyum5eJc7fR6x6vKdjKBQfMR5wRpzqDM8FD5m1MBo70rHkP7Yyic1ZDQ41BHvUHQNReNOUg19+aJIT08lPzTBSqQ/2VDAp4Tdu3W2uWOR5TO+8DV5vTCbhA1/9jtE209nEF5ZPtoFloazVeHTtXNIMMaRn2xjHCvf+uvfj2fL1I67puutPIYIEhW6b0IHteDr8UVRKF+37Ik+hGn5+7oWRZ1Bj7LTNSk7c7BbPWjNoN3U52cvUfseuMae2DMdJnPhNUy+6zXvEXTWYkC5Bq7Yq2YNY4HUovZrjnK1zvM9qbnTPRC4x5S6PvHFjt67+FoXKv51POoU801R/PjKkH1OcoGpxwfez8kuzYbrFVvMXjFZkB6XNnTsMm5UkFT2HePWqxIgqjAZ6/AVRGQWXPZpfirnWwzT90MOntE9InzVvYBN9jqwF/ZlIj/obgB3yHJQTeqEykKPWNIDN2EAEWZBsAnKndi9EN0edndlkxEe5E/wLAU13g9YO9zvFdPXV8/jee6zcmV9ptN5eM36mzC+PJwBecuGBkCqtwT9zhgakCYiPYrlAbQU7rNKdPjdxGbNnuvNR4geaVgWsmQRwPIUy922kXzpJqMdJg3VMU5o1Z9z5gy8qSklAJT1kKJtTKZWtpd8quAb/XikutVk5h5muwCWMYiKJ8DZzlevx+x/LIlMUqJRwyjAMRn/kvEvAxhY3KZYc6ALfFSlcRzZWNkQVpNIjI+XGDXPpap4vPFK7V+vOWtCB2XyYLVqlf0wjMpWiNfjDtOODZfvvE2zWArOTbRQQDAMEnwIT83I0u/SpVsP9vttx02/mjXa0uSFK1nDr7bVv2tk0SNFhUdI5U/0mKvuLFU5HFgFSdq2yNxL7bnCT2efcgZgVRS9DLLoIWmJIpog/AiVf/X5TIwERtSuVxngkLKxOmnQZiikP3CWlpdnOnIAQjRFepX2xkMSGj+eHB+0HHDzXmzlrEtZyZgq
HDyNaPEYqR6niN97/FpzsqNjp+dW7TtNJ+xSTGBNkyZsNmvNuBEXaBu7jzF2xy/uz0eQYfyHEmUCPvcuXPJVttcWdONJ3M9Y3wJ/bnpLZYfUzR4k5a8ZyYn4JTQzRKqPm8jdoV3tG8ilzMF9uPcMaJiEvJ+l7kPfJTnW6eYgVpd4WT9fgorVuCFzhmuYhrgi9oBdUF/KuN/PuJW7/KywnX8HzYm643H4htfSdFzMt+LzxyRoiC4Ku1BSI4KckEuZbEMaHyB6ac/nqn1sqiK82DU+xlMXCIfZUJKf3sIbEzBXBK6rf+/LcvCrjN7DiKX2QaDJFgECcJ8L7qt9ECHhXk9heYBSOl7j+DlUGYpw+KCBr5GJGVPihnFP5Cu4lU3jPu/ABxubKNlNm5Ay8ULNO+o3MLxxs41lYQyz6TyPiW9QSFtHbO/4sGBzhKya8nuJOWwbXzDLBGZTT2eU49tcFVOnnl9WyaCgnc3KFoKx3b7wAXDKyZUW/J2xxzxUHDsVpXcnngdgOESCx6ENbugpXnK3j2AMPbXJxj3BGjNj4Lsl4VJi5bbDEpeKK7PMAuZA3YxNoZw1pM7C2KCLNUPzG12FC5bWRkf7/TwOUOqXXsg38+qq6AVOTwwry4bfOo24LPw1spqJXgGnU5Ujxb+RoeEX+bHN0UcyBmrR29vXWa0vTM+H8WxHSv/U7ErvucjBWV0CM+wxbKZd7Wq0ltHxk/V1JVDTxgO8Im/agr+qYxIf4u1qJNAyUGVXstFR22/a3+UOhYKPZBKQJSNk54WOo8mZN8DwbqMVSx9lMPHdIGOB1u+UhB8y2yjCA4UteIu7Lwhm6fEw9hXUuITf3dQgflYSWUOeV4xZW0kRikIWVMFqCVtHNRdsqn44v0aEi2kdr8fJmQk98fZaqHI2wdaEK7TNUIIoY3gLYqAkzse0/ri/3JX9dEuiyJdWLnlUERT+CMA8cHLqPxjncHpvX87UnpM4XKTHW4n0s9TUfddiF4dGKmMOfILS2gzYcs2bBk75uB2N2zHH93I/xZLViPPePGaA7kG2SXrPNpKBrE4/zTQIEVnzhhkjIJJKTHUvdzJz+4r6OpZstUFM2qIALiMaOOYbz72AnnunxFH3WYP6APHEz0KdRSJgTSywDpJS8ghjXw6w3eeVL1kGTB9X+8SJJjS7kxtPONHHb/I1JF22Fgeees1Xy9EnKNCDbR505WEmsWkPWPM4Aoc0teAuvKacbUNTkgRy6CQBrMLxAh+mH0ook6+38pKFGpT/EjSJl4n//la6kGYWaTDnvQFcqlZokmHbr88HRmvcub3fMVKgRIkg0ktGBEjuDer0Mb8fIAEuMCPZNPr6g5VLEVFvnNnpLzDVQUGosXoAb2g2kdntZ1TVsTyiWgVW6KPU+1GMqj6s12taCDdHG/54vfi3JVtv3l2wMwol29GW6Sr4HoEnaOPBARaVkADWB1Up5fY90u/VCg2pS6HqWnza8TFCcG/ufTelZnLgBc1GABINIqzRtqdihVWoEKOv96hvzHJSt6G8Lg60IOYLx8Ba/3VgpnqLG/JrBPPwcm6kKcrXQdnCYhYHhagwRGCf0Idzn5jEfIJBrgWF74rm7gop4wIgrW6qZVTzMXx2pfv4wtxnIW0FsRLr/PTTMQdzE3CKE3sq66jkrQo0I0BxWaOZZ8jph7Aq9eG7TQ9Q8h0VGD1tBVb8RCUl++Wmv9DRwzpA5GHUZ6/6GyodjWWDKIP5IYyIFvsCBwDGE0eUFTuloQko6fmV0dgvYBpIGuDvX/c39q+mIfq7wCi/34/ASWhpXBTe4Y0q4QcShjApJtxx5GggIIsdO0sGqwt5H3Sxg7Dk47gYDyUOfdqcz3oLLCVgqWBBbSG1Y9EgsvfZrjnyh7uXSBKA6+qv+9rBfalpVJ0cp3dC/Pw6nmIdp1XZO5HydsNMe4Djx0vV50ZsUApJP190+aEFfnsuoUCwBxCQmcCugxvjQRyXJxHhuJ3aXND9UE3t
YFUe/wMy9BYY9sYCrimUAlsgKd9/eNnDIMEpOe3d5Zujva+g3ksYbriYeL0kgppLXNC7GujgfRHopy7yCYk8XspzKhmOy7ZQ0Uu076aH2dQM3qODm4OUQYHFRxHyY+C1EVgBmgR89Ll5evn1dcrMJkfy75y6kH/epnQS5gsjejbTl8HyKuTcsOX4yBByn/gKASJIFX/u39EXzhX3vM0JkIZ8I1UvAowcAI+X/d58reJ93ys6g+11wd3CfGdq+EA6OClTqqg/sBtFBKqr62GD1XGIxBv2QkfRMFJXyvdcsvnpEjBAN480ygdiq4ZaCx0ufnbzYeV4c+m/ijXSJu/fmQ/q6BmMKWCV1viWEj9FauLw+Tvr+eirYeWHl0WnQ9wTt6P6Vpx9L4CCu5vy0fumyy6ZU3HUDfkQ+ncecF0T1Q3MJc4GmV6CquBDpMABQs5HaBpXvIyX+5D0RNKvtzMdHxau185KxNkGRsnwED1nHLya6I8WuaDOEldFT4t2wETsVUXkKr+9CI41jebvz/3yhb0xv5TxJuGPY71YtiP0l7Cez02r550VYYXlkN+zJrtq2aJWpgEAq2Zwp6HJ8ak/3vYz0iMpJoJwEmRklb3JQ4LJYILygj7hO4327HPO3j4U9g4Bv4MSYooich/4+ubyWX8E4kk7n4GytTO6h8Fc8PTLmSCanYYDb8O7YdVb5B+RWCe/4DR75h8Y/m5nzmsTFFbJQeGOo5zYoOTLgTkBdjfULjVM7adoCWG81uNiSAAzIOqPpUJKwOzwOUvDs+r+KIsp6dySMjRG9yYDi42sObeGDbGBLHQSopvew4iApi/g1UQIa5o4tcaeYwxOWJjrvKvmMWe5H3/yxeooaEfNDV1J15/vROWaaCkqQY5V8kzRyb66acC0k5vvJcTD5idJTUSx38hEgZ2OleZX/6urL2DM5D55y10c2MWiVD7WDbSg+qmM6NdlGQ/B+OLdDleo0gMbJVaxAC3X45wLsnlTj0MAhiN/CX9uEwRR+BtYI6oG2nP8abXGkzyo4efRhsWcqu6qA7Thea2sClkdsB/k44CzYQnWczk0mix6iMlboINOJQpXJ8JYqoYb1qjJq5isRd8oUOkBozvl6BPwLIeO00RaY696Lrn62IzZBxIrf0IkiKXFy7Pc0dPBu5w2hITXsSzrPG8obwNktXkR7prxSccL9fidVy25dwzKb/TyGzKaRq7bpeK42qavMM7k8SoeAthL2B1jhUAy6I3j1kblwGa+MER85YGlvCYLTZDtsTSdjiYJR/5sKYAWRw/yjBgLukkBX3zr3vEKW8TX2sM3uUF6DwEQxN4HUQSjOs/tZ95Be451X5IVZ/z6VpjT3epeFi0HZVu+sDA5cN5yTLNntjPRN5DhiotCSr7RYLxgDtzneg+CMOBM/Z3ptHKkKEqU2drRlgqbi2K8Tf2zW6wKPp/78w/dB05Oyn+RN1HfrlQD/vFY9pgV+fd7Ltf3Q0wy8BLV/PMaANCP4I9IoI/hFkEIgDr7HYPxrhdMjQrPsm8MwUjx2eYRPT2X06So4H4TBfbAAYAdOBymGKjjUG1xTtkzou3GP0ZzlXErJxdlUxLvFlv1xnjlC2vcOsyICTwS/5Unn0PRltZuvTEalGnq9LhBHosXdm7NMPoni9z7cU4HvHjw7LFjzk70jEyQhpH+q02aZC0a+LFNFHKeQh1qGHbdPPqZUVl23i+vA3U/IlWbIsXoOji9zjBeSWxfPSnZX8DNQBPC1CQH78bzeB5Lisk8f9r3URFv4C1WN+llDgSgkgof23JMBZcUNVXvCvPBs0rQdiet4xb7aXob6ajLXEyBzdUXKb4+VF4sN+v+AOnaSPWfhZddE26JvkyoDQgcZhw3V/KY1WypBfbmfohjnHBAtsV3+PvbDS2rBHtVCBb2KzoF1Qlv52W+oNApsyUBVTacreZosgz4F//k1tcoZVfzH9lSN6GorMkIM+XLHno+7Ufj8xBAFjf8U1Hg
+BymCdfFGyq3HrcHJHjJq5XUodwN5zs2Vlr5BBPBp43wuzmgY0sjdLt+XerG1452H1Lha+sH1wlPLB7zqzpNJpNHDexRjhUMdE4bqRbk+EuYg9A1BTSHQgoM37IC8359GoGq/drnZaKgoCkCD97jpcBxCovg9pVHlu7WNGvhzBfqgGrrxjmAHAMDHhk1ROx0RGkpyZjtVf4V354z2vL4GDVfY7xfyeZeXhxesjfWgDWaVsPq9qwQ0uOQ5zAd4KIcSTn+ZUTIbYPKBcDPe9Gb+AzQ8kS+vUuxiTe/xe/X8d7mEMoNRPfZ9UL983H8CLNThoIWVZ44ixYyvpkKhLTcNxf7+ZQkYUGPu5aM9b2Y7C1O8pY/T7d0129SR4yPwS+P2ltarlNBg5po/Lzno98jZyO6mixRYe8WBMUeQE3RtlRsJK44yOYHfcIKBFq/nQ/HjBUWJE20e62HbcgUxPmBBs7gEGZyhsUBM9+eGaWoEwrBeLSmb7OB86tfDHD7y0kEfzoc8kAMw/v8nQm1BKEmPCrnKHkDWBqgUgEBLyOzomOP1YcVnw69AWytezPJza9eWh2c0g0OrVxrVGmcXbC0B+iK2niCt3qeYAWdTLo5Pj1avK8+l3XDzxRrFgyrLJ8yA/T9NVwAOX7+q68nZu6JvVh09C5R2N94BEa0NDqYTQ4E5CM2kTwKnOyLVhRVXScL5X1YesUVhVj8mjvucPBiBGO8ecMc3ToznBL4qkdCPTKlOU6oPoTM32E2Z6CwHRW/iLfCD9Z62MatqNVHIOb+tKHLY3YF2AYWjauOD/DtO9S0gzopDGtfGyeZypRncAiQycT0rTBII/LDTHKhQPehOn9K84HBxnGc8+fuv2qSJeiIg/bcF+j1rKglRt+MIb1YkTqdM8/okX4/GwsCMo/1Awqgn1OMspf7lVSb/L6v2qZb+jIgfgRKZpN18v3SYaFMvXYR80fZVo7/0ufjYBWwzx0oNASlE6B0cOuzav4W9LSCgk3+owypvMQwZWSyXWNW26TnA2bTtOOOX2fXZUtql3oznayxz7ZUI1U9EBUbUOMghEczcgwVS3p/6xp8B2bbfEOr6mtYsDyu4UQ31lxsQ28r0iRGT0iuMLyLJj+I20OX4IyqremQb7HzbJkirOPMBPsVDgzfagsyqo4xhvMTMbWVrO9nlwcskgNz5s5sgigE64z2hYQBg/gKAmpUydeO3abvw/zwft7sQ2Tw2iDn4rfV0TmZFxL4C73tI9MFXV/Qj+NW8k9Od7lqzF2TXgUTBlw2ZPuG5pvYobZfc8Px8QZfvS5P9qZBnXYxSC6FpqUrQiJIDArXNZfo2KFlq8aRKVafn3tgyxkPqttKvPCxMLPOTWx08MJ+p1Cm2ABGfjnIgbFKvx0ZtyVl0y/BIgJZixIXTZMS+V4C3nYxBbYxPc1Sur7C4KRxR5F0dm5fXv3UGw/jtrxdMue3Q7Fk2wLpxPCAySr3BYu5oTqIkvlW32qLT0qD20pyEY2GCFuChyDiVqa0T2H4gIwJenXUsrFB/ZUJ+caj7iN9Ir+XqAwGMI4R1WhLaQlr5CoMvTCJRRAtZNr3y1a/6wi/tKgMtoBD/BTZGl5hvfLLG8fbTI3mFqKgx9WkUjK6WQJCTBIgYUYIA0x07A/sQ9Qzg6FPf2TAJwJ6Hgg923cRlgQ7SIvEsbu+T1pLIq/kTqjjcSk5q9ahL8Z3OYccEOLt2OD0MEAZV+vhsfIe/GD6uu26HUuf+3NGcbGYoL+Gp9Rs8tjZT/0wyw7aRcQprcZJvVcLje29C2ZxzEIcybNCthoPeAm7G8oKBe75GpS4KT/WRm5IP88gfsRzbR5o8XPPygYGAzGRNF5krgbwgoXnITaP2L05SBGbGpGoMrXCYZnCfHqd8GdsRNwWNrWmmgX7AhV42NGc2Q9hDgZeqrPvpcwLNMYidiIA00ftfDZv++qv3VeVYu8+GV4Aq4zucONuxFmnttHyRmcVdVeckaNXJF/p
L2CpZnDz77tiKJ7/LI2/zcW77sjjEF56Yqz+B4tDA9Z3RDyHMqvZNLznnPp+t/LA1xaERS1DA+eNUqHzTHL6aCUnHci9LuRNiPoMzffnNehsbQj4i6M5SAtj7p6Gb2JO6wQvnWcl1BWtwgW9VqdmAz6YNr+1ItWvYkvrxXbmj1ktgins6OB2eC9TOAqOm5SW4fO+WCOV6JaNjhnFCjZY2Kjk+0d9z35hkEpxIDjgKPQq+DGpgjnE6D2+o69xnI8THx/9ADaEmQ1ytnOUsYpWingghF/crCcEK6HZmL78gtgL49W0jEtqjb/ihlfPY82jU6HidNo4bM2+pTfhRpaTcF0UD4mp0sa0wR4o5ywdjBCHcK7Y9fejGVIeYaePdCM1IT3XZ/MDBLjcGevpQiD7a4MCZm1hh8w8H1LuXJdcl5CrrwEhAEb70AQa6zd8Sl5x6L6os6M254tBLSPh50jsUeDpKn05YFYbP8ee9Wn0FOa/DbFxd5zCUKkGjYKbRNCxTpbmjSntnAomPTB43oD1aL5s03c4s3e++rhblPWzB989lGPwxq7hiCITNRhU0z7m0rocPDUPjKsKTXr0iYlCR9G6kxx0wdfmsXnfu2riYvXmV8F+3mtyR+t4jQKDAK1WyA2/CYFsfuUY/OV3L5FOjTAlnp16wW4MOI+v54pc4MXXVauyYcZZjpUqeLwgUvoxnN1qFu1hrPZjE/+iFbRP33I2RA8/D+fKrBXUUBS3XEPnHNqjyXEmIHwvx6pyrWpBo7CeJl/MkqpNInBrJcI75HdKhOWf6jwAbiSl0FKLU5nRWp3PuHTfJezqe0AYIGDzuTqU4iGmHecAgjBKP/TYGWpZ7V6gwTnjVjtx1SxJjfupNFEdBKbvtYahwIpKG9upPF4BreOwQ628nUogfv4Bbd1mz11HvKCgekjBPHYKBX/i1rDnNtRAaflqQjGImjEPeB6bRKuPUEN2LaKih9JxCahRPokd0vZ1EWD/eoMzV/zb5B7/saSfl0IxCyQA4FcmPc2J06/1cYsQ6pBb3RLEvniSrYxE94Vt/kcx7LAnjs/WrwOAEDIoYXrHXdp64k2V064iRYv1RlX2HooS3G24o3Pxy5gOy4US1l2mI7/p4SHW92VhgUQFxNx4J5p2kS9D728amCdOYrckXFmrmvB8fxtRosTb0F4v4GWsAT4frpomXEhbSXL/ckd0syPSLqToutimySxs9qIHMT26eyu3wxy5LxqE5J6Fsxl9hM3RXI+dkwkVVePB1jb5AbG+hBx2UGmwqb8OyQ/bAYTaK94UusjctaX4V5mGqwHwSW0yB4AtyYxbMI6KvwlNIp8VhI/ry5sgLVPSYXx21aLfEIIcaA7YTuo7QHkfZF1YJvZ69Sx0cIPjRDo/W+BFa44wr5czvnhRaY20rj7JwGXOPhRl9l2/bFfRkhBkWbPSdLPRKNO98EodX+SyJsZuDQFzBwrHr3PUSjxefebFRpO9uXMOwA9ca7/LeLiHnStxz3Z7o4bLMkbxYbxxwhlTklangPNBk8ilUgO/nhdUGKqHiyw7313Uh2nrWDW5EUj0if7YFn9ii5uJG7lTRXOUhnmWBk8dW90oy6NV96m7gl2xHNOGfi24pb3q0ecEOK/PvtVCw0x2rkNIsHDpvEF1IZ01OYoYKEKYe07myT2OCfVr5XztYEwNo+i1Y7p2FEDUUA8TCUQax78R04/DA0e9PLBRw0ORz3Sns82x3nd4hPP4lmsrzKF5olxY8UJfv3b/Co03dfwSmCRH5tDqTGlzL2kDfjpL3iCwdoEeR5TwkGlQnZbV8R4LTNWEj4ac/ec4m2p/QN+wHWCfIZCfRjYc4URiPbd66y9lHa8i6WYcT1bihtHw7nkIfdx1R79LU7LpTGfPBOz+1WCEuH0xyWk91u5uxV/CM3mssKe9aqSQkYu+lnMV0cBqKSwk1ShpOMD/v7Sg373xjvgwoQghxVIpl8OKDoXeET8BgCcbQTOo6d6B
kYvB2cAf7D3sh4T70lf8EyXkwWhJfpxtx9e85tzV/CO9Y9ktf6PZvGOC+E6Do/HxVnk6EfriUdmoOPnF9NbMtaMbmFQGEa5X2Ig8BoCw7CFrZgQyQC6h+pxo7KcUclLi9J7QowVIXqUaCK2ikWRqafhBSRdlWNF2FME3yq/4TsbIVqi8W0CGViaPLg6MKOU/L5lqHfPIlV9Ik6nkLbNZkENd0zdT8+tpegxZ6T6YsMpgCxajPGiaxtvvx5OuPZzQzCYqI+2vZJQd3fhAM5VI296U6s2D0Zdcwo6WRTSry5EiUQhP47k4fOI+XQto6p7dQ6pXhfZLQRQ8N3djusafPusVEsDSAN55JBTPxFrLorHzCZovM7V2M3o3uV+2xynxe/8d1ehLO1Spug8CAnCxLmcuXL4vh8hdQRHz2P/NFzj6wijv8TTYAYNTCEyYvCrQd6Eg915p5o7/LaIdLsq9BOrRDxIVNXdYi2espa3yGZysAZtoX1LgbE6Dr7VAdNa367B609Yj9u3fyJbajoiAldjJ6c/4WiD8tmQ/DfrVHWabfLnYjz1Wi8uviv76nIJv92dKX+nup7invsbIRLXJNz96JFbjxxu5iX1N1ynJsCIa0cU42VS0KUgQWS1NNpJLf0ZiQueW6A4M3JeFdAXAFAX16n/jEosGsHTM+YBpvi/b9QxLwdlQksC44v+UYe7/RKD/OM4dIZH/SWL/epw7hvzrce4wjP9PjPyvmgj+3zPS/a+J4H+N9/5/OBP8HyaC//OA8P8TZoIj1H90DDz63zkH/u+v+Q9TuonfKO1ljPt/kAdi2ob1r3X8p+W3kK/nCTA9nv/84PNb+dfP37skf1/w6mWLwff5gBeZ3/jK57+f9D9+B/f+8YXPtT/f4O/L/87s8HV+ft4DuEtmzOf6WZd8/pfXzX++yBxVveb2GP+27nie8Y/S+BsP/reI/MR7jf8aQf5P9J8nfL9/Dx3/HwhaUGmepv+XKP2LRxIKx3DoP8lmYDj9j/Pb/7Yh/9Jg/H3tHw0G9F8kNyj932It/r9rPgL/V2v+32v+f6v5KPXfqfl/f81/oflW/mzPXIOR938p7/y3+r37su7zf7Xn/7yj8H9Au/63XfjflQkH//31vH9x/c+ff0vJiN+f/xwlQwj8H/0y/W/oGPJv6BjxX6ViOPLf6pD/X7jjf9bD8B/U8P8Ad4z+R90x/u+Iyf8/Son+W+7433F8dReDhWGA2Nfps8Vxkn/NYan/8ljJsK4PqkWZL3iAidO2/EnH3wqU5UW8/dT873d4fesSvHIFssLEy5inYKl+XvG58PvA199Xob+vgLeK1/gBA3/++bA6YEHY2mMM64AUoRwAmtZtt3q75fObBv7JftlXCH5R8ELdnp/l6/19fzwL642LRlLqewVuTzebdf3mX47xNvkUZMmB1ELWEb9ZKxEX9jh3NnlZvznRoCJqp839VczGemRl09TnqErwW69ya/esoEjXTGsMQ0QkUsZjiibWze/H48V7BbpckvHeUt2WrnYnYZDFI4OGL8MXmFrODxYTyzElv96ft2asygIOARhZfIJwLq9E1dgfoYvDLHbUHB+XIKpDLSAeOm3peaqmsEEuHjxqOL1aonzB9i6opW40wTjfh74jej4uIpRFVDilxSh3N8XsgZQeMypxH5iRPmfQZfJ3wfFOM4arl0k+2cFnj61dTHg2j2mk/ko0L8qfjBA5Q7q+53QOCNm34VJ6Ddky2IhXo2rmKuQwLXgrfzytImTXUCB28GNq9CvLjIPsfZxYz+3+8sbcsZos9nPwVzNFIBoFqPZ8rrF/hlBUZEtyhh/IO92/qj7OyXzNpavxsYDh83mZqwy92O3d4yCPC7MWNmmxTVBIN38ZjHb6bURd58PsNDu+oLGyRHkChcK0QMIl++4Qv2G4FzJzuoXSnrK1cT8qBQ9fwy9dy1RBwdpdm7aEIO+Y0nw7+lMwo8TKhkN5Oc8Bov4WXyOfI9p7mpWv
IHylSFrEdrR7KmP1rHqF0Qp9oTbecu0abPFsbSaaJRB3StWkQaEb2eDspcffz3mStktwMsm5x0bonPRWhEDGXUFUU1m0vvgBTq/w4kEje+adookdqA/Dhe/NZgyHDSzhBNSzQZJOtXSGrOx8/SUy2w+lWrWaeOYW6PsLv5fF1FTJkt5vh/1Ghki0cEt+L269ULcpq/bi0hUNc1ejetST/bsg5W/T5xnu8uVrOD66HZiBJcSh0L4RxdbGxJKsGMKgqvJt+72KiX9UpbC438mQ2G6ylfFlSW5wgZv+LoVFa/yN8aOwY4a7ZnewfP2zNEJyCT3zLcUdAgR6e1vHLJKiMZ0J/QVZRR8BRXVKwBj8tm0uL+1p1Q5zZNPM2svv5suz72XmFgSU30ign7Kngtqtcg1t5p1949iU+4+c2I+Zx8v2IfZbu0SJ138HUCkgBSAssK/SEX0HEDFdyNEDhRji4GASb0SPQTEVlWozqZeZ2pMmN3zJrC3r2GSMKkEU2SjGnJ8Ly9qSb/+QuM4cj1ZNEdzT/ALVSWXsWr+N9MiElqm3usyRlfpRCOIF4rVRedXSxFvS5yV7gzfdt7/bxdG1DBvm37WP28aDksM+aviXti/jNmRnSph1guVe9+uUZz88cEhFlCua9Fg0cUfhT8VjFHO04HangCLd0untrJtBIFVWOyYGQjNyhYvSPn+6IdBX9PkU7ot4w2f5hDCmIO/vLOKdPRKUdzRyhbmypEya5WR8fB9CkHbMnW07QbjiJUqoARKEpRJl7875prbluutJSIzCljCTeFOevCCvFt7TG370/mqswYX8N4m2S9dBHNf/mU8pC/Zb20tlhBKKRwRQioM4Q5dMjjwPjzgJVjHB9dvqZU2xjbHDJdwzIo0pPN33dXfnlS3d9VHA9Y1H4efrz0aLzznjgxkF7QNdCNZF6mn/wDJbQ8PsHcpXj6EInke8/FKyB/8vmq5qN3JtC/6SGR7NzOw3M7Pdhq+/duZcKRqNOknHvaFW1ULZWGK+LbT4WA9g1OEuO6y1ccV2Y7qhiwgLG0dr7kdu4NNCRvUKVrdWqhXsrqKOKGs9+LyRZ3cuqLRzCeZLLSY3ZgLJPBKW54fxj4Qqjxpu09pPbMf6JrgixpdHdFI7zmq4p87DGSaFtytiazMJcJlgAn4u/h4Od/QZBQdz2CsX/NTIeVmXc8y1wDM6e5y4dZkBSNUT/Gdt2PtaTR9g5+RxeeUWEogw0Z/3Ikrfs1hXC3Q3+CSqupADAK7LeR6tsGQDHHloEGu0HnzAZym0sRN3MxqHhPYC9CodGsvSwEiJJx7zrt5acQ0ahmHKnlOd+hbvNWfyHdVhWRYaBIAgLKa2Q7ijSBJskwHwSwlYfDg32uDvBytX4EqH6+PcWoNnEAogog7HKPKI0gwMImMr8Rd0UX412F/iMj+BlSrn51HPYpQPvwWHgrAue7egZ/JAZBPBkW6afWkHBRRcMM7/4udDiWNzX5JWCzrhLc5l4VS82/DT8CuhL0SnqvmaKV/Om1+pR+5ema6YrMEW4/gN+6U3hStl7Ea9OeTBhZ62xPfbHFd+77U5dkRw50ZwNPmrrjiO7AVCwk3onCWOHSwdavG3rL6es7ENz5k72dbxdLBRyP484wF41zVY3wkr5JloqWOrkFi/Xn1SBJ7AuQHTEMEBRyfw1F62Kszg+qoCiyp2Zj0nckKdX2tr4WZMrLjL/JItc647Mfi+ZfB3Q1nhRf4Gl+Ep2a446dHNI65Nvq7hpwxS1SuE/+TK1qmmlWcymBbuqedg1SH8A8jT+094/njgYAd4qqUi3GC3DpA4FDi5r3zaQupqXh6+9JQGu/nGk8Yn/Nyfun2FVzMlfA2tPXyvRc4mX0YnHxZyGKwCPc9Lk31jMnD7IxrJAKoV6I91/0ydn+iXSfIeREJzoX6OaWS6xaSn+RnM2Zlc9nULbFj9b7Dty7yQQQtFZz4WK0LpCOGjWDUxcLQIKeJfPPNqhkFml8S2
DfoFx0nMsdzPj/Pxjzz3o8uox3/vtRorvphJZ24YuRulHOkOZ7fKpPvZMYW92v+16uN7QJw4eacwjUKtgBHAIMgIn9wbilevl8KmkJApmlQBdP9kjyUPpA3zrC9HF4G3MEXhCv1AigdN90DOcwP66zRhsbwbRtadEKFcy5+FeoCMDczzb0pqppqlLX8FBQ468u7Nn2fMIhozDRQMGbVo9UaZytlM3wou89lnYDBMpAbGj+foaitoufncEGJO+b5lXb7rWE0XUA7haUFLrnvWxFG+AcyeygBOCQrV2C5OnAGaKeDuMP7Ga9LWgMq09KUEnaPPsVILJNPeRY4lV1n9QVyLKGA7DhPsK/NfYziSZIJStw9huuvVvkkKFZYw1Sql10bQG6mXJgaA1vzFx0xO1of+sEO1HeWx6VeUioaAHWSPin4RtmjetiY+9d73KF3wSw7WpQZxvHKGsSY00A9zrTmwVfsVibQbNuG4nXojcvCXw8n4js44X8YmPadeYDEwPv91LUfiAZoQrt5uNHqmdlQUIjf0L+zo0ndjyV/J6it51EfEgv7xDsV0NrLjpAesC3N3uR/KHoBbOeJ70nUlpjalqujUpn0ImbOs0yb34sfZF17MRKm0ZJ7QHLWfBAEYr1AQJb+kAqE1udIpymjEhm0PJ3NAKyWbIZUb8nxiYvqdqDHlv++E4ityB9zKytYTNtZ1JIORt9A6LQJhB6IOcl7iyvdPm3qn3UcrV90G+El2vnghE1rd2bqKqXukLwHzWplHX/+NfoTJJl9mvpZwpnU5W11DC/MKf0+coZWiL/5q3FQtPRRTAmCN0PknaqJxozskTmi7ZvdIO42YjSvsEDyvNHDWgO8Y9kphFJbal1DRl2sluri2FPSAHLimqAXwChVIOV4CTC+JMgx8wV9qV4Oldpz2r2h/Av7aLwwVEF5RyMu7Z+dInnYlBzBzVGnS2P7rdyxE8y48PFDTvSm7zHNSv82MNLQ2N6JyB4qsilM8hYzDQU5NkhZszsrlU08tWC3NKL2hEurVQQho0oq9ZPwkxHm3mX4yYMzxU6bhizGJ8l8dmRksWSewu0j34AmemcPMcQPd8TalNGeth1esC2WXXC1Fv0G2+1gJrvpELayKTjbo7tbT0jLgHlXXwkeFPZUulHN8qCOKU6E4zcy8mTBG69UTtS9s+z68XdEr9cpl4h7riKGo32soKZY/UCqfmJ7LKCEW00OEQCquvl4u6mhstVabTpbb6Wzcs7ZqbrR6L7L2F2cqcxNMKL/eGeTIzXKTSyjki8IBQ+B96Hatfqeqkc5jFDuVL8Wa7ZDXuMOhmRaSz1CKkam/1x4E8xyLoYD0Vaic+h8G0lMxVYWuzo8VQGVfRHLeruwLhT8C/Stwo43oGejCnqjUUYy4mYqfFOMSFTy0uZnPViGEMVFCpUZPfKIVeRtXdTSAxoZU3XOd4CLPs1vJdiTZh480OfSIBD8Y8KKvuPaj2/+RA47TGt1inKozbEWm07xZdvOWg2HtXAN52fqOKHPCQ3GoNZwzdWQy1x4eq0PMJMnWny1+KIr1JfjwqIpNmAIErPA1AY6jglBuUfv7THct+Gj2y1HO/Sty+MRhltwfKUGw31qNituPqJ1KTRMJYkzNLV1S7KlHlBFzMQDbaJ6f2l1c9A581fN0SE0KEQA+f/wyYJrkn6wdnTCHajme5+e4ENnJLL+gJnK5qD0bxIhdQLyb3gOqWbOVDU10uyDbMUhePvrS261H7UT+/Ef6UqfsDuUYR9SemKR7YwCRuZY4NZ7O0RmCyNTNK40C8cdt9X0+VLaZ60mPKDA2NFKTWduUUE8N9BBQnkE5JdIhw7JnaF4jzVdP0lygScnQ+XB/49BqmZOAG64eHs/Nr/aH1zwpMnjO6BVMkrNWaX8MSQ+pmW1k0rKtpWX9EDWwQWM1OATY3vzSZjzBY436ACWoLw68P9S0vFr75URCnj5/WZ+pJDadnknYFYHKLDvT5sE+38he
TOR2OPGj90WowXP+luFUjFykO5MjyAiT/GOw70eVsJOM+N4w+zQUm5cNYGsYEbw4QgWyZhA3GbDGCkcetfSDS/vuNSavwGlxGzxGN6CZbrDEsvTQYHh6wjY+GPh3CBLNJlhd/Yr06LBmldcIAmSmDU9wIFZ2zzUkbH3LfW6+dPqlZMvs0m/5R0O8jVsWfALxXwSNN6nrZP3SuoiuPVcpb8Z+uV6vUmDqcUgw/82Zstve7j+poCbZCvrXlABGGSGWfhe/u/3GP9EXa5GRyok6srdjQtHgtJnKSUS0S7gM1R2R4MUI4yP0e8O+gotWxkZDo2bblk4WXXCqea/h6pKxsWmHFoedcGu0xWKSHZvpdPTHq+hjvPF5hN4E3dQePQaSZC4SG4P7ZFw0q7bzRpt+DBdvWCb0dvUwliDXHSjpKv9Qy1nTx1UEZ2oe741l0VXiNYKWZ6FLCpPbsmgsUD8CyN5H+RzjV4UgLc4YlsBHKzyy+/IVGeW0/6WXrSbUnVTwPsvZZwcGy0O3uZRgKdHfGEXhrI6YAprCBcExZMFh1MiAlMvhyfUoO3j+Z0DKVPNBSYxtvA/MannPZlDaWQkaCeF066Uf//TKCP9sEK+pCD1ZWtwhICgIOhP8ttnBxdR7hPRTFvy+UZck+dTQcXEEWpKouC39ZSNXp+VRU6WlrRLerP7ekSUKfukjqwsW1UIwCxZAuVPcDapfOLHO8enLt+BQEoXWzQ7R8P88hL19VtU1fh4MxITxD77MmbDzqK807Hu1xqu/Ybsi5l2NhDwndhyJ4YwG/VJTVuKi89ZGEmgosuVRCz6+vNgJfjl4UM9OzFmdoISVZj8Bu1pHqaq/9fJXjYH0eNxpkAXbgcS/Hj38KQhPlp+ZX+PgAOpMsvI3+DfwbKq+RM2onSKQuxs0Ui0eSqtOPmkZzVoba7wZ58JaocNw8KaCVPOell7zsjmfY6cDSaZRv7yKzxLzQND8VbQxQk+8v1elDyeHdA0FJfsvwcsuEdT5d6HEpXSLq2UxdmEnt5yKTUJeGYqz4597VrlfwV+NLtSpTnoQNuQ4VN3MvoZFtMJBZ1Z9I+QS342qNvDO+ifDBH3RVJzwL+yeydzt8943Y51qDbbBbR3nJOLu+F5ONbhbOtJKktZI+rKihMtfO6vJ7tMp3lEHWkQuayRJpaVk15zagCJrHupBTiCoH7455M3blKjfw2uJkwXcftU42tvQ5CsOT9M8NCPWYNKvP16s4zxYEv7agDRoZtbJcxK/TSPGEJyQ5QLASxCmzLQE5GpbmAeanYXpzj399tqZm3upO8Y1Hm5VV3ZeY0cZaXyVMSVdn6SL8liXlMhiteTS/RaR6bFLgOq296iRSVU63SOLCei8HhLDO04sc7I+/gYdQNLzFYklBRVfVG7smTf8MMLSVXPq7VeebaQ3nyVVgeTytbJSoHtaeM3d5ymUjKrblXs8ts1bhES4QaibZQStk4P/qmK/dFPug73T2RSQf5T5PXQVxp8TqYnf65Gag4fyFa2enNmex0Ag6gM5BUaLbk4NmnQ+dCtQLQsIWmZMV1tD3Am4OiAm3DOrJ8MFjKa8tCoIDjrA3UXM+MRycXzkVKmsWaxjKzz5rPP1SNe8veTq+qjOcwZUZOxPRu5t2ertzx5umPrWoYd/tQgVlMymnVx0fI4s8RCHhUavOWN2ivATbWIef/YzhhrukBOoFZTWyVzypPTfwLnPU58x4s5IVY3e4DadIFn0TMBR6CeWu/KpV/RVOHheVqSm0MRGFqE8JkSkGspObB+dPUeP/GVihiMik2XsY7i1MNC53mmfx5zSGbAAhBIYPfHXB8yDpVRcQI/AvsR6KgjNpXNGdMEXJVUt6pyY13h/O07jPuGVakQjRCPrBvcasbiXJ3FRZv5s8ZfDAflkLV3BppX+Dvv5QUZWr5+xYQlEnoFAG67b+/iDoiIhyfCIOnSIpdRgD13swZ5SBtNCjfGWnlmEnUAeI9dleoFyK/zl
i1/oXmgaeM23bPy8jNuBuBfVZcZn3p/oH1arTVXaEMrS8W602vXb8hoeikift7bJ4/YkXfjzOhuHjgTjV9X8oPV4ta5OyVePwwcbuWzUshAEndiHdRFL5hsjiYc+/WQq7hMKKzYfouY9ktguXsu9Kd/PWbNoVCxFBC5E+DlMeYCI97T8Spf+kJqBzR/ArZ9T7EfoHInAFYbscEij159/NZ6vVk11ReP6O9fMwLgoc7fWwSbGssNW7A6/m1BpUoqYErsjTZ7OOrre8BJWGRTNBNwkDeQQ8EDuKo4doqwMxorY+umaVymCNyp9TgZHN7S0AlkIzJTR6w/VgeEtLNQGP4xKeRn6dOSbLYrgkLnnZlxnb7pQln3XC8dKHjld6xiSLK2tF1tvZ2QipNzjjf4vpqZC+cbq5UZDBtyqX+7fo6yf935Mg6+O8YRWKkReqRgLX1olP2Qyss/BSkfvbmBQdH0/W0zGt2zNV3dHK+Jfk2X69ytImNDML1m5Eg4c5p8Hx4UQB79ybv5E47z8+0Pzy9rjkmsHiV/X3immsMEJH7WDsmq6WB1trzsdTKiGX0VapAzIVisWXKID8fnFJIBq714AdYeYqj143bNiZgYc0qZjnmhNTz366+a5B7VEYYm34CsoF0RVez4nOheYTqkQX6ZmaYWWvWTqGfC/p/oIC4K6tW6Yp2mIKBT5ykVqL9AOY0ds+Tgjg8neBWFiZIP3vxRIM3hdo+I69XpF2NP/AVChcuZ2JV0flHMVuvjXKZDGTcThyqU+SZhUfezLYKz+q+19v74c9PZ7YBIz97+1Tb5F1bBx/MsVdyGcyjfVNF8j0gL/ZU0z35sQ1l/R8/fcNoC71LG9bNytv2YV65ZJELR3spYU+6ENpjerKj78UCRxR9ib/96G5gllyywmUIs2oTYkPX41jVT792yW55XJj0oRz+3lhYZQq9IZ9j1a4fdwat/tJDTSTvg3nHRyRCX2pfox6nLoCVK7TFoTO7gyho0jh6wMpL9RQPizhSDEslu6sfk5NwOdlE916Nd1mK7wYZ7JnZh4O0HZnUc4zriBfVHGfNNh6FC7OPuoE3yVMEmPOSyQT8Gqf0XbpAibLv5vQenXxtzvfcA1l9zHRjAVC4qCdSAgyv08XOT1bxFJeCzP+uutgZjlM9lfRrqGw89Ywy2IVYn2/dCzbd96Vl+++rfSPPz5avi/CWn//ty5s1PucnsSGXhbelPq5lZlvkve00AsMFB/6d9Plp1BjwhAM/YPoBaDZhSOJ1X0q6DiWfp+UQk4nqZSJEs6ujHIz4fJ/U3u73561XqyaYyft9sy0N+p6NoOfbW2yITB2ixyJMnKrMmMk1Mr4Xy9F3LlZzZ9vFfLqouZ4WCYdyCzmq6VIE6bvl39iyt/g2ggQyYCUSJccSazxXQBU+CwB3335nPYudVSjBqOY9jE7vtW5nj0OfPJGfwcycZXvEDAMEH+Wwr2R3w0nBi+09l+nhPi61HHXw34UUdEtJONOQ+SXF70RP8GYrYA8PwIwv7/G+DAZ7gBuvwxrHmqtoK/+7V/nZj4628U8ucjJswUJ4hqVLVNfxE1vLISbWjX/840N+rGuzJS5bjVlfChlPykkdmeYP4Kv7dO5kPS1pAWVaJUEETuHuVopGlbGbhBAWYEd7Pn22Hht1gA4Cqj9lWpPNO0Dgg5fDgWMpK6P0nOyFaXcV6t9T+Qy6k5uZRlsmZPiftiolGOHi60qnAaNDLGD5tci7OZX1IEFqhSyI/XSv/JhOOrl2q09ixHOkfKGT2NBopi5BDrSytEgkBEoSCK0GA/lMXcGH6I2/psH2+6lxYw5JOnJWsvF1z8dLzdYXxlv2JIDAjL8/m3sOjfjFxILPHlr1TxPfsoeOoH3D5N6c710JN2gl2vdZN+c0ddtVvTEYm0btISo+QR6ej9fhi6+DCcqiyO6htY+NPYxKuIFj80P8JiiIHpAUy9TMjpUtId6BHtSbx+yhTCEDKZeyHR6nkI
dapdThPC3FFJcMreI97lZxxPTiLLprRINRPaEazY/Os3oBuMI5GR840j8b3VPkNy2CjT0lJ8oOH2VctaRgQzaTAffmePOBMvShP4A1PE89co6v36mrPy2N8pbfLi0xpfNJf+8KIso9di3zR9kPcHNST82ij4z2n+pUDQ5WqKaVtQePQ3bNKwvuXTArNFOWg/XqTifzA4zL2gDWm9APaLi7Q9zZP1os7W7l/lgWS+gokSGzlhqeFecFtYFeRXMJdFMdjxPeK9Osq52A8JOL7sIxDSG5u1bN+0F7oaFN7q7MWizhpRmMvN7zPxBOpw9vMBqjGxgKrA0oHCvn6xQ7dAGn8bHgv4kkS/mhHBOGXOT33UNDmoceT5KPYBQubHF5D91Qv8u3oaShjcRWTPQPdlaQrUYzVECZdc2b70tdhyc8T+hs2BOouoL0yDyLBkiOz8DdtcHCQ0KzLK+Wdv/FUxa76y4pvqtpnK0LJCg6eWVuQebnN5X3MASLEUpUj/hINF2Yp2bTqFYiJTKZ/42Nn+rBvLWzr28nd+xH2jQkb9xwF7CgyyNhymlEBrzCs83r+iinnVynBZu4HHKAAz+UAHBvV7eckScrQBTkk6M68aHgO+IqTkIkgjAeBRIEggnnGTz3n8vP8QaqIJ6YMa+H2rz0psRJGC4G/Ta0ir/I2NmD1oVw8GNvWzRN3ULc7N/LMMVV3RlXGF50HwxxO0jPVTUfm5d68VdbOK7dHTp6LbEqjWlbWKOc8r7mtTJBSh0zsYCfmodxQbyHyKZ3kp5jIZC7n1rJahG8ZAAO2TUuy0Hjtif37OZrg3XtrhoLAkdtqwkpmzNZ1QDnuvJP4CqZXLbMXwAKs3BRDmFI38whVeM3hAKOoDk20/q5Nuv4vm22vJoDSjJi+hd35zTukO4QnJmQ3oMvpiMhWD1syWFD0gwR0VKvIK0byP47+qDP8iYrLbzj0eF5QXd8584dOivOqI26VHhgaCT+8YU+Gql7Z73OPS7AJfuRfbMVhqQuXD2RJG8uu0cszNoxnfswpaMmZaUjiT1aZ2yoM4WDAzcZowSBj5lycf9xEmRq9E6oSGE6Fe28IKknNoJ8WC+7Lk1tn/NdlsSIV35O2AvIT4fsuKeqdZ23lW6/xFgCaXy3K0rsDEEldzYR++zKLhtTCpDtQieEeHDd7666fbKyabHLA+bV5yL3+zuNRH3+xOsY5p2hHw/KwFCN3M1AU2nLbb3UOwcrEQ/uC9E6uruFXcGHmU9e4WHHyQEw2eX1w8JytWJt10ySYLh5x7DGGWB8iOEmqcJGurNkfJtSfNIdUoo9aVMsnDlE+sOKFlXBPJ/N60TkmXQVy6bDW9ytgQ39wwngiXzlVDyVqsg4GxxG+Wjfh1oPlLS8AJ6aTg3ms7Yz2igMwfQwIqVTYDSjeiGGg+T4ZW81prrpo9yXJuC3HfaXpAM5nSavl5AIhn/fn7PFch/sYy/cw2foV8LoPeT1GkZAUczm1ChvZE4fHkXXhXAugeYaS2v+G61mAA2yFdMQQCtyzHBHMNiwex/Eop2BGUiMwdPyToaqwg0U640yap63Y6W11BAyVw/Kf1kk8yUjeKsFI1AjkvWOBlh93JT4yNUnXb0krxuVCsymkho3pPhvcCeQloWsj4WDPSPnU7YmMi0pVUy835dVXy5Uvxv3rgUTJqnaQoORASOYKyxv8FMEhF8BC0WIF4ZwzXNmCwL10R6KXVbndhHkOlGfnwd2rDplFnLAS11HwLkzpInLqvyIcLVXmSCewya0AsI7++fRecCX7JAXSbyyFSPakpT45PU8YbFHTbLYuwzmoWstbywjx0OXGyUIU3h9y8ERkf3LDw8HQzmQDEGqALaxdZcgFy0IWuC9WYVSDYo5Nhushh1CBub9E/6bWnOiggi6/cjbLLbbamHBWQ3pk/rmkw3vxXCTz7Q9BvuwsKxylwFvdKW3BWW7JSwahCCEbagx/bSFlxuQktsuT2cY/Lz/LzsgYl
cVv3keWImmROENTfMorqNbUtTG79537PGltgwH0AgxRH3NUC1m9KAeL2NT4bHj4wOyA9YJhO5E8mxWDQednjvCTwq4Xb8wS6eGPC3pvGzyGik6fiMzAYarwe/F5Eft+/lua4MCcqPgFnOyYChQgEo6MzMYK0m/DpZqrhKAVPfiBWhGehUTWeQKqaLWv7xwzgMcNm/yBf/7L5lZIf9nCRY5Ve3wKDPVDyjpuJ0oG87Uwlo1QoO7k28qVVDIOWjNXQOwKho2VpA40H3iAY+X9Owg85HYL65czxgCxPkWDJ6yQqcE5krZZqvSwritMu/DITpixxHbDTMeuMtdHOlBRiuShFrlywqbHyvGtWuMk4TKuxekA89Szdf9/xlTtmrfGYBxD6J8hKB0gxfhvW4ZWyiGJZRkQIDQiKeiMPw29RDpWX+rIQQukRU1AAOWCPJmKSdaChCkzAjLWKCOFovvAJ6TcWJUR8VvMzhCahO6dtIjYrLYit+mU1CM4q7GE8a0LoGRRBjQs2ImU7iEiwj38jWlCH0sy64AiLU8CnJuYvWtTjhtYlczXXCR4uMhQy1JJVGWWtRHU2xFIynm9Fe2r4TkkJWcx+f34mG7YlplS62R9fSE1X2S3xJ0GUUvkj8gQpcgQ6eZKebab0JRoHkUtJji92LF8rVbWvgxMwetnNrahR6zjN0FHCfoRU0Lk80LwYfRneQj0D9kRZdB+Zsgjp5WGUI9PNRALZeJpTl2MhQV+hjzlIzK0lMeTdJzo3+YTgWg1jRBcikug0GPmqlOL9Q8JLr4NXbDDS6NgZRTFfwaJN+7QwIRHhKVsAyMKvyehQ7+0apZCx4mxQk2Apor3t+m6qVdLONoXfkXJYN/Ssjs4qPAaV2VQ4fb/QeLQstrMYPXqVaCqMvnR2YMLqtv5pNae1fJxhZqddfXoeAqMUJEPkkX71pIiNTrky/uwdNTe6VGoEPbofR7qr7zLQ6c9yyTYCp+A9vp9Kd0bMnyPntoAR1enxLof0FSqAwjorym/qVDbJzjz7+6ktWuE/VTlZTAayikpXX021zX7XYxqRWl5NJAEHbAGVp037M5nY26jzqQpmbKhNh9vWTPjrhIuqBTBbDbKnCO2ZGddTPQUFmW3WS0lTR62AcaBBYgcIOahDFv/e2/K4StsNY2tDtTQqKaX5aL95UGzeUSfqr1IJzH4D7VCUUyMI1M678L/DjhOiKR8rWhVBtQ0bEZKfrSf97ASTzzsTw/X1o+wNSLvb0BefUfxsvV/6Fg5+vsfu5VzsskXmsj+mn8hddQoENTlXVBOml9WU8lidnDNnbikFtC4h3AABc9511aw2Cc1xFIO/mRt0hHMD60n4/IkNdpAxMUnIgW8NzvA/APZ7EMlFacIU0Ev9tgZ0VOWmY9NlBZVsubpKqcPi93E/73m455qRpvZy1rO/ZJ8q913smGghtpmfmod0bCn1Z7GdkZHu964frQiPJWpL9cAhdA86HMOsgOtwnbIMO7bGFjV2yPpeHK7Fpr724+UAdqWfBTqreZVqiIMZMootTrmwxwqZRG4XBtrxrkyVB5Y6mO/2atI9UvVf/yqJVv8GlvFu0KZ0bEabZunqp9jNjS8ig0hTfbmonFciqT7DjJw8qgIILqLwlMl9fRmgg7K57W/0HMJ3Rw/x7ZbtZA3aP8+X2c8x32LiRDet8FEKD6SKTt9ESWMGkv7Lejclhm98ni/ygO7N7xWgz6uAaCBrBpi/ZnWgXDDTeiZWHEp6AENxYvVZoyCfu4czQQ1IXAqEMbEAOAsN00qmObQRnTb5vacTD3Z+fny+hD1e7XK0fi20ZxDCDQK7IPdQfTiQWgWdPA6ZkgjD3V2IKqjfxQeVYdnEkuP9JcfVEegfHmqmPSzMT9/UlAZ8cXEaEzDCyTPlGStSK/h4Po0IaRX/1iUcKxZ543bnvuVccEY6EYClyVYeUw+mjKxA134QYDeXdelR2XUqrNNqZZevlUiXgpmng31NXSco
lGFj/mHs+6wLJac7KV2edc5xnh5bwcsKPuoLLI4tG6XFUZia/s0EXwCjHUCMjJm/Wdk+dFHXhCrd/GCOqS6Fd3oh/vIjjBc8MNqvurrQqOsgefDb38FzgnUx6fQC0hLA1WJ7yQm2v+DRTLqm3W9ww2TSlvMEajm5ijqaVqzWNoUo7rAH5T5ZPrfXKY1/Z+3BOEz7ITBX42zQ97ZUWaieiHVYsNSP7DCqVmyq1xUX6uc5Y5ZbM6Xn0Rs3FidHhu20d2XEg6/luRSpswvnFOQCjFM+vvlFJWCLwV618sVMbvo3/HVGxZS4UdRTJixircNWST5l9QFSs16vFrjt07xuaO4QJdCk2bSl0tmFMIAAXb5JC9t0/9rS6ncqFVu0ANxbKA9wo5RVm0XzfasCgmuaCtp0Oe0TlSNbxbtgp9L3USGHTzsDao7ZrMARCtPX7OEMW1kz/UFbrTkkYphhtCEGWSz8Wmgbk1g8JqcF+bDZGP769mu0q7ZnfkHnQ/rgGpkbj/tZ//PCQGa6HDZZoKEW2ilJg2HhJNijyiw2xFRHiCcCEhVDngvcUVKHlzT4Kr/nrpPko1EURm1Utq59Y8QRM6qLlZ2JymFgeyNWNgSS/nNeKSX7dL80yfYhrrRW4zcPWhNWVNdhz++J8BuwwssBpPOLydDbnZ/LEEDaEBl863Q6OzK1mFDslDFF/sy/LwmDpt3PSceXSsLiXglktIquCwtVVLqEiXaIKCG17pFpSLqzMIW8akwkcVIEygjG0F8+WJbboEZXNP33VDtS/H49l7XzsXWaA75nVmOXaQh5mWVabxR4hPG0YWqy+uF0hV05d2yWtNRinucxpln5vaaQdqg+L/7ULojqs6dSm97InBjACaTJHyzK3pCoFu62x+FLwu9JmdOb5T9Ed01om5H5YV4FojAz4xBcalcL/qMzwmrBpeSv03jgGqO3+GoI+Dug9i42MOMZe5Pb3ufUArz9dkoFYziiP31u3UgjpUXSZVQrwPRgy3uhL+VD4fUkrr8eX568pSDnocJqLu57wSebV30mBv2qi2ad9u934yk8a5nrdF6LRZOyQXhEHV2tYXkWu0t3SmYir0VCsYo+sn1N3KmSFf/Cf3YZPj+5TOnbJi2f53opc5oKvc6BNLaItCrdLYItIiisbKm/OINeY6tj2AWw5iqLR8GNK9QQCIRr9LC6tFUGKwdpCUF0u+LG55509jfOTCminlOUQXzIv1q4F/jedmDDyT9MXYv7cwp5tLaUjSA3Me7Tyozy1bYgTZ+k/OCqDQCYY95pCceu9AF+CXS89VmZiQLrPQAKcumisZP40IWBOWg8Lz6AzscRNOgvIq9ZsMqXOzJA/vNi1j/Bo/shkFHkFtwbe4mTGr3nVGKiaiJcuDNfGVGB7LmsASL/Rv9+9i9eI8prGEqIPI6vCm2cuyrl3hVYJMLteP0yThAIPECtisTTSBIZu1fGrjZKTh3WUY/Hr70YrcoYpOD+l6p6HhQsOWXSXuNfOoCuJxERp2tBE/6IHeGjspFx1AcqtKezn067yYQ//bnXiqJaf0YStZ2rcSDxEXSTZ2H5Kj7rcL8GDPlNBZm5gxtT0/YX2ApNqu5ueKs+j0P4teek46aiLxwe6vBxsVqDqsHvC5ah9hRLSEDjuz1mSgIPqP+awz+yuelafMMOE3poJeAYkw4MJi5RCZReShpa8DUp4mnErfkHcRxw+CoUUHdIm+nZ5vo6gYlQrc+nJ0QLd6xhy317H/LRSU8RBgmO08cUxD1Nh8PlzENEh4OjzanXTAgsX4pVcCxRRnVyaook/9M0mle33+OU9GcFX+4mH17FHjX+24rzoZuD7JuNHKqP6ACCFRzXqZFbkGZMqo3Y4GrMVvruhwh4l9FGsBFm+u2kgthuZymVeraEcykCuM7dypJpBilav+7T1ZuDjpXTP2mUR+73vw+lLGrjSHX0tk7+KyGr6xrCcPsPDZxDO+AdeC0rrX39b+leb4ZFmKbP
BMylMGacRgTQtv6tScrMF6831EOpaRfv7TYYR9OauRxOG6bq01wlhbeeTi6Rtt1fEV4EPwh+Rch28xtRLoEqHjx9SeF7SYwBFlg4xD7vZYiJmX23EpW1t8Oj8kro1a6hNpli6UficA1BMpPUoSZQc6Uo7BnTX9ODTSOzK/ySr1vj9GgKnheBPPOhtW1Floj5ILRLkdVYQ4zfSQx89dTEhZHcFd9g95dijivBHWXq4l8Qlj/dlqzjHLyN6L1oXwbMDvus3Ai3F233oXulqQ7tq97JhW5sCHFw5kucki9yBqj8fpT3vAgcByg1VHelvkuvJus7aZqRIg3GiCZa/4+xxbwKudcnidRFbfMIZ8KVAj1QPmqj6mBKXAqtY3dkpYuPSYyWqJphgpsBDhXQCU3oD51SBp80V6cauvixQp2YkGzBOKWx7BJc8vXniMMsSBRQTo5Si1pdPtU7khpsW9HNkiPc512iCBrVz+uvoGWvnjRuN385PgYC6jMwdfdpnGPKZexcm+3XEI7/EaZ+/XX/Jn8ReIyIZ0Uk9USZQGzhNe/apFJWuBuHs1dWWPW53+72aL6QWx1zhVFJQDYUdvJMI6TyR8yYTnTRFWV5fPrINhTI3NRWmJ31M9RcctnPdmH1H5p80QTOybt4qmsXjom5ckehrJBUFA0dr/EDKb60IeBQx9ruKI92ESz/XiHcnwTjD+Nc1Kstbe0J0NN3mC9BDJVe676bzd0yd3cce6/OuNovMu4WT7UZ3xAPnvvKGj9FysI/gqki9a9PspA1m/f9J8hrulhZi8FF9psi9Rp44Ote+Nxl8RzmJM41YEV2yIuB/KhCUteePHfNpnr8D5SCA3/X43nFoCMzZtQHOmL9HnPXHBKOzM3XGw1XyNhmmfwuwdVQu8G4XRgKQqhmPf48Px84AjevEGZSmwnEGkHTdw9u589O313Xp9YPnKP5fAGnHFnvgZmPxOVlpJIfVtAHhtontVzlTyIL1e5s3cYCOLfuF3yvdBBcRZy8OLYWPNrMONv9q7BybvQTgB99HqVeXfcXijct4+sXyRO09qsZ2ImuIhp85D1GcrJlUoX/0vGSbIktNRUgqaCADE+GyA3W5L84zZyGZSMJQanUvDWavhxxIrQKeskzYO80EFc7fdnM5mGXxNQlXxMq3t21sk6i345O7R5v3LnpB4oVW/JZh3pnxV4rNgKeHBqKqAmR6vlaQSDxnELQqO88A6vVW101dv1Ee6rUu5I2PGhCXKtEEYSEjCdooKcQpA99wAer4/bvbzx2nMfXtRfvywS/oSLPGoiz0PdJ3IjI2CziT6OAA5un6CdXxQpeLFkc4IrCIeZntc1CXHGHuEUV4s7JXX5rtJyisYzCeY80bRruF9SklsicbeiCcpg3ILWnyDPwJ87q1IAuFjGJ9/qeauR7w1fuZSyQBjLmf7dKQt3emKK/6csQWevRX7o7ilmvLVFxHvhMYB1/LvQLvkRM8h78KD8ahiD7VTL6Sq2n8nKoRONrQYeqTA02KSSf8WXk6M0Z81FH09+4HUcX7EW/iWC+zDZj6QinUwEWDPwlzF9SQJ2yG238dCNRK3UQ3STXfSpIOE7LkA9HUJ1Ru6UE3e6arBCp7bCrS5myvpMZsRlSRHaleqrbkX+ZPS5iBMyYtNGhxCpulzaRrYnGWulWm6bbPw3xA8rw2618UYwAsK4zvBXgBctqleiqieOmAZrIkSoBcyLt/GG76pst0T7aFDOxyqa4yxhmkuLFdf3DPPrFvIf4rPBDNn+DZyLcOSGJCP9wcBr6XxXItKUMQ8JyHwHc8lTbpaLTG1DgYklw8zxv1ayUysCIMneyn12TXtqjnQ8MDc/xHjlIKLvqRwVhHR9oGecve+1kBe4CgYxE4xDMw/Vfi/ToDOZ5CVcdEjvMPb9UHvC0CFYnssQAkp9M1wgQiBneMlo6VDuclyUIvHvHKpYI7WjoPcQSRdVxqnGRV/GqSwjdQ0FLdr/71jd54fLA
XV4fDJDo6uxwsrgjSgCxH5OjJEncA05LuUUYZFieeRqE4Kw067/Q9wjpESbKX2z5tRPln8Nly9iPcpyQfV8Q2G/7Gec9RMXVSLSf/0T2GRjJ5l9TbqJbYA2poqr1tQ0wZ+DzHB8nWndpeNyhAxn0TZ3QyGn61W7wwGehIWb5/cejpqg9i+xc5BJfwQVLoovxCYevP08J5PoFoXXq8RPkldz0kSmpLckRvwS1pU6cfYoCvwej9mdeTFGy1BcjEh/uzEi+vgc427CIQnY3qxN3fahT8ur8L6GoROCLuIx04L0nnG+UGIH50qbvAiaxkas/CSkYsiM668YTySnO9wl+jDN5BpMCt5gVj0Xe+nc1wSNzIfI02KcqxEbc/wUTPur2aQOkNsDhGlBRoEoq56I5ZLYsaka4apIQUSVYn8ZU8xWGxJuLgwsy0FI0/Y4qTWX58+2NDlSwgFwkTNpkfaQkZUovUepWSiDMb/10OW1YPSN+1Mu8h+dX5123fKxcNGCOaHiZNFy2bv8EdIjnETDIy8+UMEuVbiscIiE3LyHbgUx7elKMHZijgm9+Fb1ddOtlN/+ePhHkO6kzUOAUY7LqyXFftbtfBtkBK48EcqPxXbQk47v7kplSswOZYSgIC6lbd3I1oXTJCv4yYr971xbiwn4pPBw9kxo+apgpmJN7bRCDW02ogLQcwLCg6CCs4NV2NVwPwL2DPVhLc+eDwM+SyQhMmVF0jb8thZCzbd8b+fklS9NNKtc8dkH6TXMwD3qO3/wOi6ricKF/iywEWlDDn5gDPItlzd9hoMykkc09FnYkOl8enT655twuVa8F0Mk6rV7hzyrOrPnSCIEuehVD1gpBrP3NkEvrjb/GoGGeNcD3jP/1UUsiGGo5PcM5FvZVJGxeTJSkmSUVhYmrMT1tZcDH1/iXWcpWfiriRapPi7GBMcMZwlLlplcbFLzyxsJPqbtUDVSLTvEgWDosgfJIY73RhJ6TOHPf47D31ThmFJxI7y0SczhQcCiTATOTQ/U2NkxOF8DT5wFR7RfR6n2lvHIkzTHAcvzdRkGPWu5jk59Bwr9hT/TkSrvjvcpOogp5pMFqTaXDdCOdBH80LFCbpF5fFk5lMPSxb/I+zOvIZMd9iPpP2Svxhhse8XMENDaVS9LmaRbNrt0mIj1LOiL/J77rn2f2MWBqll9VCRt/p6JvRRkpM/eLSV25WZsxVoi4+DU7P1QpeU3L/G4qu33zdHku4zxCe1iTPwo5c8/7OTL6ZcNk9uWZkbgLQAMOqBnWqE1jl1bs5jJZ7/qr1aDHWTtwJ0j04gZFC3dfCcmQGnUwpvJm4ngb+E4xk23pYCludUAataHv6ychXDHzWDR1GGLFdBFzxSkNhuCLdGxDq0JuKIvjke8R6OfEocVusbqDHg/FjGXshA0BsfmBbnQqrlsuzXXovepXMx7o9aJIXYi3SVwuMz2mkGXRWQjeN8YxWuxTCCMEw/aRGOIYYhLjvlUGNgXceT8luGnIibIm3iJhk4xNXe+Fojm1TkAourmp2RebuedbeXfwpajRdyyNEuG/dL4Vs3ncI2hOzrlN3aVpF0S61OkJD2P9nunH7x0zBPr8BAf14rbf9dfLta9PkEe07TBvxlWVwqSHngrqpZvBFA9o5/itrfHNW+NTkFEWFpmMtomjJ4K7l0caKWHim2ZWRJSuIlZWFzvlokiRPyJ8EKH7xGWrw0VUn7F15lYMyRFvptmHrdr9RNdMddjIHvlc4R8OnhAOc26aBq/OCnCxUjQFbV/biXwNIpZfXRAjjVRF/WVLFkAOtIGJgJKnTs70Grkx2F3WHBurb79IQm/Gvx9d6Xl+tBwHpwKDUellNUc224LSxXdy01w3npGU2L3FPYAXlnA67+l4+vjnfvX60Q8xWmjhlQfVWhdXuDcgwr2r8jX3grNtAXX/84PQ8bijYPXS5Gj7nAXo5d7kWYhnYxRjDtefiKFWinrSVsh/ebWiLwZymOo2OT8/
Jr9kbsY+YKRfNazjLZgD5D5vX/LciJ4Bytg/oZLiwJDCUCTzfAjhtuXIZ82agNc3E5Gog/Avwk/o8PeK6lVe9XAl8PGa+SeBmk69g3ql5ZNp9W96FD0IKlY+1dc2i5apspxA5eJGI9TORQD4OyQl0qBdKe+3uBmFEDbxGR2TobgJrnT8dlVbDsA2aVdxQ74a5demmHVkfZGeYPCoBi/3CgpFFtWFwPThES1W5gp7U7EJrKYFBwmQ7Ja9aPHt+gea4skf5cwlSXqK6afSrTt7amh4efXNzr9cfBV4gdyp6K4uc/McKv4Q5dVId6QDi0hhuyPJv1Z3lNSVa5qkAc6IpIfEe3WfHZeAdY6eVhVxW+tsccpruE7P9eYjSb4j0WrQcE8ZCX9eieCVML9Tbt0jjieE2aW7+Uk60jWUZ52w1ajs5z4zaeJ+riB5rE8Za4IU1Y/NhycaecvIs++VckA2te6fLKb04rpf6OEM2dFsnYBHX20fmcDm9PXCc2wsTciH16oIlzNMJ+IKIFf0YTnyspK/1vwolomiRJqDHLV8j1us/bW5hoQs+o66oLNoHxooFl0sxMTsiDB/czjMWTgmuRbk1EUFj77jpuA2TPgsuFSjaCLl8cEpQqm90i72v48vgyKALgutXJvTufUGGAO5h8Zf7ZbIftyyyrsFYDuJL6XXfNDdqcXvTTXdKop0lT6ytiu19/H7cRVj9ahwiqTPyrq/hl/Aj37Mu17qF/+Cpur5deaW3kBwntQ6rKKbZ3oCkJaccwxYJX50D2KsCG1JKsXViUbwljX/RpLO1ejqgnHhm//so9RLGn+msG0DCamkrGb2WHQIhHy0egN1xiy4II/fD3nZobZEL856rJI18SBNvp3RRIv/BPc6EcPrpJ/VSPQhqMw/gvkXSMuQ35f2ykKtKSkIh8Ny1XKzJJAWMl/88irwdKFUjWbDo/qOLgtN3yBExK4F/uyTUyjJl+d7ymic5TpbaLbsmpzaDtx6jvxH6j5n71d8ZcQFNyNolHDp3tzTBk0rDFg7sAcKJripm9u11uKvuKPPnV9/lFodwfUlDeBwGGkY8kr4rlNBcD9yzzwXBiD4+OsmrnClosRK7wILhakiGzBFIRtzZ/D3GahKPuXTHP+6a61vU1031WrW36uxi4n2tX7aRdZN8OD/o3gdfEAeYzjBHxIhgOWfeYSjqMZ36afOMvX+tuirBJ2BjUz0uxX+spI+V9D3namYtcEITtDv6BaEOtf/lY+6cAZ2kiFjEQFk22s/Bo+jbFL13HVnDkYeGFsvGEzQGLkeAUUGn+cnr8f8QDfpQKJ+AUss9BXpMzky7tgHDkP/a++9liXHsS3Brymz7odKoxaP1Fq6O9VLG7VwKqcmv74Jj8jKrKzM291261rP2EzYiXPcQQlgi7U2gI2aoQ6OUiynOiGdml+v5fT8pHV0DGeRuNDF0zITfX9C64uT62QMGYh58CvkQe9grUQUBIF3CZVRxCCKWshA7nI2vEBm7J3X+VoHOf2w29HuEyyBeeE0e7txxND5104zyovJeLJ7sNO6OTwJB/b8mZ+YAC3cDVoU/7VquKA41+12lY+EPkLHetFx9nY9MLAGy1shsPALL2McY0qPw6ol5mYasHoeW0JsDwomIkeHDTVHJb+bDr4ewGaUFiAwnSBlcG3qml4oB/coVbRsqOpFeU/KztZZGq4C7YuweHB0/Um/u1d6k9sLFkrXLOMeM+QvxLfDv6nz7KNW0u2ibhI4G1xkWbc7AY7vnT8lx0mdYBxLwSB8sMRp54r5cgS0COkqtTKt+5AXABsXTIWX+v4MsRwHXlvLpl+/U96xCU7BQoR2QRZkUSTpJi+UgezU9yMNZPG2NMdDjgeKp3T96nWS3vVUXDvIq/jvBlOkTdnGdYLwn3y43SyqXCW+q9F9gbQ4XuuhHNLaNz/7vCsyBPuZOb0R70b50qYlEYLw+EzRA5sNCY6ihgFCusHyoGXyZdBbmEPuwbg6
5lfwMw5zX9bJ1+NqEkVuIl5s3ClNjdur5GbPCiaLymli1hZq9RfnFFD60eD2/Vq0yxslKGQ9E2fJ1nPRh+EjTEvFvWzju4uT9otYHELyeCdQqaFLO6qTzrToeCGaX9hxkzTC8SXIAMN0Za5usCdOTrvoyeO5gRXgwgdZuu1NkDHNBQxUex8uuuj9M7lyfXO2KNZty4gj7HgimQHtrsMeE9IZkDsz/HE7kF6QduHx0CHuNZb6k1ka4x1fFmBkhgnZChX9WK7mrXBamfS8LNNKo5idu2lraMQGz3UYzBXqsXW9Obe1k2GOCzVo9L/iTE3zQ8FeI4lExVxL4kW3rjCdA/vdC+3+yf0sngmh4ealOYMjqS0Mfvt00dmUVhHl2ONwrwMpsowOJhVgkDN+rLobjK4YTytxkc2MlPApPqNxxiEpv/NWKB4VTJUBrwiNsfsIW4V6KCVMaOx7whMC5v4YFapq1K6vyMdXI9wTE8/MOA7wMdbDz0iT8FPH3lf6mQMCbm8c91qPoiq+6fza56F0wzKGXryrPFT3MLZFDyZRoYYnPxdml8CRlMiHoCNqixdiiyP2o++OgI0prZXbG2mFPvW66+6dm6wDmMix/I0mcvNK0dk2hOi+PSCPLBk+JxVis2qLYdsJpAi8wG42qKSkee8WVFI1dzNXyU2g3Zi+iuL2QKi4ezNYaDbDVpu/cJ3LY/LUt37Zh8+2iDo8z8LiYPDDi808YKTys9ZeRbb95+Vb3hG0dKvrNYpAjK6E4ivsFPpwbra9NtnEfzSAf8Fb14j6NY749dYKtBpOiSPRVHfiG7PxL1JmbjAKIiKJmvg5UrSidNUVFIT5l6yKWYXKgAs+pq2W5PT2byhigeUCIJgFORS/15+aolW3JAZqnQgq0EStQRdTPmc2MQE/Yd0M1uKnHpFtEniC9aBeHf44vCPrZmw0Txtki5xiCr6B/Y5v78DZmDlzqFKi5b30h9bD3uW7f7Hkspw5Ufp4V43OQ3kqhZkMhXn3Q9tlkK5nlhBHeNlCO8trwpX005PmIQCejpdrVAixzenOL9WMMuFujMgttt+Q0qb5pkfPsfzW3CqZ4UsHPrGDPdXBV36Yagotew5FbieAtc2DmlefQSJs4wr84S7QuJmN7fBd9KrRb+xuuxXYP+H3vnSg6eKXwJOS99kFmDqofq0A4syFpeLX6KSYxy4q7ma9K2NgK95ElL7MblXbQKc8YcWcx9E0njcmJCcopJqjFwUbrHliL6MkXr0cP/EOwhhiDCX3o8AMiCtOy9zG+iIIn7JdwcZj4nMOa8cOOJsmUmsab+1GrEnn/Jj1fdmugkdTfwzzfUS7lxyu6UlH8gImhjZjM7YU02yVXa52I1hNV7ePZExMzmNi2f/OW4HOiiPv36YDBoRgTBEbyXBn1tHAQvFhI+1hzxm1nctUzamUBNMnxRRguI9kczAyuzXcS/H767eEOdaWvdMqyhsenpiEr+Dlt592w/xFI81PVmQi5C4K+6zrK2XjZX8xtkImfK5yzfQmw8p4HFoWzvs3ysHxkeeHCrnXO/EJX7ivpqpCYuPrfW5uFZStUfImtuEcPa+YVR9zRoQ8HZbTnG41ti07+T6cdq6FpyDx7SVmSD2CBSrbzfr3h7VnFTImGESNyqMD+V6OGrbPTcusNA/W7vN+QNbmCSudGJAEi610+Kdk0VzC1XTOiTXyalFpmLzgXU7wXXEMwlzgNiw8J/smPM8uEoGQztXr/v++ocg1Obk2LMQ8tPJbOVgXJQNARdDUmKbYQ5QGNDpYykHiDdaVRGfQ5Kf7zvkLggEfqhB1VKrvnsayPnVVbffjqQzBEgUvDZ6nwPboa5Tv1l8osDId5Wl6fSmB/l0cCjRjsNRIx5P0u6lpskwg3KFy2zdLgqL7j67wZFoqo4RD2e6VQ9XtbphXYrh8U7jkYCo1QKFZWAdurISNdMO/vmeI7ZF468o8sORzTf1HMwMDWqPLy5Wm1PKJtZBpLGT/
M271BLZ4FLNx8knXLbOcAtUT5gPBP9Fys41ZGltE82PGXl7e4x3rPRO3tFINLY3eGvBQqDby9AZ2XIeoBboaKq2oH5/bnSzlJ/C8sogXOekTUmBETFAyiZw7c+FE/EZhPjXF4qU5J6Gt2OeBQtkgyfTnUuPPa4re5AHW40yHPxzyeLfDk8wlEav0YRMFEA8iC43OEmzfg6470hWPo+6bbaEa/QC5ffYC8ZM4ProG64lGmxddvBF4tDOMrIyqK4gv3gt4eCu4Iu3sD8PDkZ0S5byR3r4jeI4mWxo8HC03/FFT35dxo/UGE+SOOhtISygdxFV3gitt1Ru0M6iPZFvfWjXtlLaLKcHHwSN4LK94Mi3OGuMLzJtmtfHA2DBMb+ihTtv6+mYeYiwNObrXk8sX+XH2zX424XbkWQ0oc/h+szmYmG8cM8oFeIPIdIqDvENtzo/aCKLvHU9uY/yS32ZcSVWJig8HOtsHKn4mZQ28qsdXZUm1jMAG5ab8/keZ2prgqPXWc3Hl1Mkq+drHm2KWhGCRNnnZiEQwKkrMRYzOCRMqsuqqBClHYHeKacnBhpnCPiKbPIr4xURBOyXiCI8UqhhgnhWL9Chv9JJXVxK84djTM+kyHl2KP/x9K3WdmQnZX28wYUsnwjviqFSqvuWAhLfH0z+hR+D6xBLzGI4MIsUq9cMj3yyVPdvz8aZ2DTE+H8+joBdNfxRF2ZIYowtZ3KE8MiPMzUK0V7DRzkYEx6DncGFCCYas+xs5bmk6YQmIaohcorQXge18cag9xzaIK/OvpxH5z/OoOCqnZKbOe9geCyu+Da8tJPhN3Qi8YIv+psQbMfp+JGYAtVVn632TnCllSAkq2WZu3ubDbQrMPoniwCo+cT28Ds6/DA+5gkED05G6rkpTqTWFt3be4JzwyDEJIflorNWI+Zdr31YGMoK6pd86QI0aQFu+EC7whM2ld1hBDM3OW5mBqFiBO8L7VKvrNfAIT02J23uU/SyzmwOWmzcPMgndbEgG6YWAAVLH80BUjmMkBVBfNa3xgIkZZIfRxNW58Jyg5Zwx1NT0nU3e5YeGh+ZxQ+mCi2ERm3W4N9WXJ+9wxnMT8h0Eno4nYyKtCPiWSJSSDqXAqCoS9PXpZ3AqoavaKEpy70WO2/LNSm8qW3hn0WYybk1kLW2BW5JeeJ3llLfAxNlBbYXmY1sFZI2+e1V1Va3yDOqLiia+sSfC+3N7W8Vyeg6Lpr4aekyGjZOhs6GDeO2RBfEo9dkWCWwru+ORPh/l6Ge4eUPYHI94aRcP0tEwOjjhlda0qsDtc34IsNroMiTGvgFieKNfWIi5SOqJfTzZgOxBbqlKqVX7oYz4I2lOnB6egQ6Ss7OebnsWJ3LtXhWcaoGgKQx9c6AiSWYSrWuKyPqhivXzqMbmKwyQtuga6XS4X0Zs+GWm/jPfUXIdxa6otulTecGPNaQiCrmDj4IJET5TFG+DiviXV3hBH2CFqmNWgRAjrWnOd0bQwtUgSjLpshcV3cjbHQbnsKWeXCwLVUzEnFsV8gnu9pCBEr57dWF62q1DE11LtZshogqFFq8/5wUNLSHuvPpZs17TX7epGgBACDAFDixWNfVAfEAUQcC5stQPu4L14lr97YowVSPf7ZBQ04OQmwEfsZPHJctAdbCc7NMaHzjpxFgXju1DAd/LjaLXQWqwb818vTfuBYZedMkEj1Nva+6Jqk1QeR+HQHw3VyfiCS2pcRXuhsBfClt9Pj6UbTSGe7cF9EdUw40nTBSxjhn5bQf0rqX1LBtMbn5ELbKQdS5bYhxZIHeIrnvbIU6itlJW4LNTzTyhqKjp6yNKOfSWlLsRN4S3UIfYhmjQQ3vasmd/onr3QZl4l+ygLVJey0VT0lo47CKIB46reQjQDSBgAltjIZhUbvblli1luLN3WTc3VnMqkwdrCEyXZWHPBGvS2UNIwVht+Q62QE2r5sapdQCtgY6JYXoJdk9Y80xMKbGy3pRSpMmo3KRY
ORXvm6B63mODPsQXCyvuQzRpj7FcpnjvxHEjKpgv0PxJjJ22p9sJeSO/0eoW4RTISIht1aE2RY+ibsHrfvqcFEzVFy84++92viKnW9KbOEpofGGkn67B65T5Z8INlhRxwXmddstdzyBZ3Hl3sHZNxcmfaGO1VWz5jBLYzhkCUZ+0hZa4y7d9DUR69p5zk3+3I58kl7haZwImS2L1hzpmk6qgbFS/MsGnbwzItEdAqCoGJhVKxII9+Uvbn5VXCE1dTeET6zwwIe89Ar95Y1sQIUluSLqyvdno/NOyzyv67kxXSm3HTK0DDTi3J/VcNwqbphxE7bHAqmDaT0Pp8ePDUP1LC+JvdvbtRO35ORlbARFbCKeqdcxyyOqH5eepQj3Ju4FgxKkFNmtILIEYkMSaPe5qRsQFhWsfvJNT4CLMziNIG57pUfHyWofGkFJeDAP6kE9dyErBq1zINaKQE58/DKSnyhOquVM/Zspb0gd2nYLQc/HZvyu0UW5fhAd0MbEENYukkvCW0Npk6OdjAHcPdJ47gEplVCBsi8dohpTsj9YzJBiWz6jnN4HMacx5W64Vya+G1y1UhnynySxXggLOvZa3X9kD3ctGZ4VFaZN0zZwt1P2Ix4am4pipmPvhkCgJmREzKGlkz+doMbHkGyP73Q/zg4smPvHiBCW1IkATID4G4R8VJp5eYclOi5ju9HwPKQzLNYMkaF1TCsnZKgo6zMamzd5OHK0I+uo6bDcJq7Nn8dop/S5bJi61P+a7EeSAP2oUsPFjC1Z5veIcA90JZWA05ptax+uOFoMbAhW022VOQRa4J7XBc66h6ObDFF+qc2NhU4bE2lXlimUl09p7aW22eks4HO1Onq/JTJkVN2b9GI09zyu/0wwOHpR/l8jayjflEdGDyU0pcx0VyZwJRziuMUNLeIEsQiekkCP7DikMkx0aK/pTPtAcNDwrTQXFTyB9DXtXoSj2IEK5o3nKsjaeTZDCCyWv7LOL7EMJX+vSCo0QiWBoVl8nvYeUuqaJ82ZdxhtPenecPaWQvCxWo/Ix1qqXoMVktWoBtru9KYcCa9EhIIDG65rYnXDTGQdZFKGwgWq0IBprTVePGITdSV9QzmffNDkhMFZ29gT7xLG9zf9wWlpWYOGKXledq9XZs5k920sfZ05EJzSLshtab9cB2atIFYY+Ugof9ChWrPRqh/qnsp8x25SDGrsTNMa+kl3E+01edHJy7c0iexbo/wQHQes3XDsTM+zjc9II21QRCFy1CX5sq3HDfcTZyqVQE7zRMvTbLRfwelicgEALpsbJU/DgnrKl6Rv2QEFtR1AJSwAfd+CbqePuk/ybXanp8udhgf8dkqtY0PAmt++u2VnylVWNcXE+2te20Un51dYd5wLeTu/0DhYYUJIhwurJftzVriTfXLpPr3mo9RFI/8W3Y3/y5+7h8EnnZPtK9XcJya2+Vx5Hj5uheFG5jU5jxibLWPsJXCkIoSLAMgnI+c2vs9ojL95Q/JLv99OvgETln/lO0I3SfoBBgNe+i7j3Al1Nqn8LSoCz/aVsoinO6dUYCIadCQkyqA3OwUqG+RwyebNR+AkxQqeRoH1WzKMtLCeTeTDgkoG09W1k31nM8zm0WYmMAlUx8yzoLwKHm0l5XKj6DuylfByegvgkEfYQ5XZrkpQVyPPEOnrRvChOjWRgtvEiZkKxjF7Ogr5Kt/Rv8kWcnyvPy179lPUkzXyQMtPEpLve/JjVxy6gzw63AH8E3eaxLFUxX2Bq3SwainRBrjP4R3ssyVdiL/RWUVukujcjJEaYR1Mcith2LRLDGQbln4hygNgRptPlozT2kObVZKyUfCsG8cd4JVCDXaejMq1BbKc6yH3gI62GblC8mutedSAODxGPSQpQ5rQSM3gfcAW2td0rELkQLbcmp7S7HtR+hUq/ZjmpLmNcPfTP6zYbG9Rg6fPd7ahXvfKqs8EsZoDsh/Vb680uMBnlIRqj0UaY2wrqU4nK+Dr7
5j8mARZJj6SMiabhLHLbrB05D9+IxrSTnXAQ22i+dJn1IEJeZpdS9sxgz301g41iuxzLk4akdAijD7ju9nJ2MMkC0qb5p/gy1WtKuv7ofTje3uwSx4b5YjPZqDhJr8aN9+kpI6+qeot8AvpFeXnf+On2Et90MOJiLgk27mZsF8lV++y4QApzGYxtAOf5haolWizAAtlTVWpFw3Iqt8CNwVGGE3BVw0nmNpsbhEiQzMuIDMK3DrXbAoWhDmLobsnOxrpbElRatbA1dkhCGUCY9mDJ9K0YBVlVN0wsSiymj5t3n0Jqi/6ctgaWZ9cxAscsLB42evqBlZnrRylezSJxVOZnHm8Xywm3qlMAcyzUSjy3KkEo063hVEt03VjqV32xQfV85pywvz50PjnTVCV50QrlCe+XajG2MSY81acADbfQbtgJmN6l0zD8c9gqNZ7ydy1Mp8EJ6iFGHeElwZzSNmcrKUsksAUyUQyFiVg2bklmLjZ79l3Ene1BhxgBgba9gF0BsAiZ8TSLwQymHwZhB4ONOPh1fbPTASNJAntxRV83ZhxYvjmGkY/rLA7VOFmBiVBe+Yof56YHQdCXi6nlDHfk6xLfMCvmxOwEO0iwOd8hbDJbS+VVnmwF/C4nDrAg27xOGzZrJSXkYpp0a6+St7D1mfjN9JjPyZWmGqSL5QyEvTZjf1eOhiEcurZArjAX7MbGAjKqycALpeoG2BVoCdbqbSAxOob0HBi0zWRaRUlMGb4ZQe4CwmBD3WapoLD77+Jrsvn6s8eXdIJKEygpNi//adBFmFy0pD9oMdTyWBLtGC3fzoeFhsTNbeJzkK0mxsp7g+Ml+lDbPr74YPQzc1pefoT1TY37FQuGBMtjO2u9N+Dt+SluawarzhEYD2Y0455sU8bJRnzPewJUoagF5Kn4JySewCByLMTO6BZ03+U2/CcbQFJFNoaAJ9O/L/wdzgVQnQDqc3yTrYGAUoHhYHy3mHm72KD8mBrR0AP2zXb6EpRUb3OFvc0/+ab1w0sU90XmNxfNPrmEK1+UK2crY8LIRAYxOOl6xZ39CtrgIPatDrFUwpbBSZN9BnOuO3N5GtDlxWDoVearvVnaZw1V1yFvt6Tc1wNBW+Qgnv1DOF3K2QSbjP13vnLcnJ+y52LR6+sJJuh9u6fM0HGK4chesii5KNO9OZrA1gh0P7AsKVgAElMSNenZkDAy03PjeFxrh6pgLP+zsLVNp5Hw2dtGNsWStgFPuL45v0AddWbJgcCgDhZeF/ZEJeTKGHiXzGY3zhUl4fXFBZx+doF1CgxQ9XRCNgO6vwewPSIzqGdfhGdBlpL9AgSEDRENVkcjAz00Ykv48Rll7u//+Gkw2SrqOPYMy04unbHPBbDnNrujZICRAC2sqY0Wa/rcORL7ptJAvS+n+U7kBV27EV8sI+MecAIu+7b3c2XU+nJOfAiO4juBmpUvjgxC4TS4VJ7F2+waZS4OFsNZlChSV8y/6pikr+9EtNGBJzUUJRxZkHjxiQCq4ldpq6kKhEpjrjYDdt+vx+A9sGN+WqUesnu3X2YQyNiqCWXrDnQc1Ahd4qQF3npmdkk5+IYgkjJ9EcXjwV9VbWwpELLv/ApgdQZQKxmYsO92WJQAClHwEfsu0LvVFPusPwTUBSGwvKf3ujDrwiNvyHhDHnWmOIcNbx6gH4l2m5Ua5fBkZPSbLLPNZyw+XFjjZ2nzTeW1NpQWLYjmTTmLwYMFsdjn0XS4lXIwcZCkZ3nB7pRZKBGeOXrzmX4TJ1DsAH9kRHrsCKEz2SCwM/smUGmXZQV7Xj1/q/c3ARQ6EiUqD6ldfBT6ytCOjUo7t130u8KlfNsXyM5aWf4RJzaBhc6p0DZOZA6fXcU1svabQxUM5c0H/DD9mJkQ+7EcwlpP53IS/kr69RKo+H17fd+LAiq7hjZ60H5N5XYMHS6e7JOoQpHPcits9Sro2KloTlnn7ywbuv8ievpHekxWK74Dq2SBUevSL+dI2dat
vFHTjKifCb0rpMli+4IRsoOdLXOGYFAQ5YS2C/2lSjI2IYSckW4RouG8hgm02a+WCNLaQ7/m+dpfj7RBaGs8sbHdT2ahdtcrE4T0ejjplqrHyO8+AnmY70aYUNEup+vCz+mIN6nRWF2cij2QkSEGv6fPd8WuLIGlXnftzkIdbwPVlAHTvBl1vUgaW2B1dVxkl9D1kBJ0Zl+13ghTC1jv+zlr8LCd5TwFYruATKz02TeqiSuPhc4CTnGEJR7b9+mYkP4hv8JphkR3yzhRxueYTbBsm9ZEt+NVUTb9YZaIUbHIHLNwhN6tbavdClxNR5jm9pToa0GdGKloskzVsWqUrWlwW/xOJ+ozFq1scabkR5g+L1BXgQJxqWr7DIYxhGbA9CAn7y2X/OWTwHsFBN/RiILWK8PhBiGonfaKI8WxWDNA/Ew0Q33MmgFe1deJuH63ZoQy0F6QZjxkK4u7V8dr9U+K6Gt1EjYkH8nrfGk9w3a0+Zyh09+8jYISM3THr/ns3h4hhMjlZLfD0wc2fDyMtCi25ErCpkHy5jAt/GiFXT9CGyCKTJ2qMYzOTorsgmNT/7PvbHmJ5Ub1T0nW8MMnhbs2QrTue8bEGQfU2hXiMnGM8KaCriEENhs/aOfZ+iWp6a2I7I81ARj+saqwGlB8m4A8njQAKmuF2ERI3DirVtKaYKcXDCZdMxwGmg5tiZUX7UmjsCjH6VT1m9ZLYRkLj1vBPqvQ2mMwv30wW62UyHCPbniwMbd5VriMSITIOs/rCNpoNyPWPxIJ67ITMe3t3fnMxy7yw3jsEbmNswuTvA6DPJLs1/FhkCNk7LLzvEtJ5dnLjMMYbvN6fPguf4y6893bpS7cj4lnWNW2Z2UgoQgpFvyUzaxzuvY7F79bMvMtd0bvthoniR4DM1G9cdRO0XSi+OXqKmaxoTsVR+4Hijo88p4m4UIMlfDmnLytjj1y4JT4jHNTN07ZtpTa2av9+rLw3aNtwip4LYn2dui3Tn72wprzybRfwJ45km/fZO0px87CoT99pq9sVtpP7XkWwIjc8Gs6KweXgGM3JPOzOzIYO2MmnUQnm4A9v+a9xxo8vB0XRqeUUOwlrMq2cukEX29HfN/UHw4jazqYA0Szsa0Q3nrkUrP78Q2hhsbloYnLN+9CSFqkmxRx2M90GznE6x1tHhHBZYMicp9Dh46QPIIo+HOQNTI9hGBx1LAERHN3q6BBeuFgkS4OkzGacgYeuiDS9ZbSse48nZZkBx/md4Va3bizPObJbUZcslelbd/0vM+NLCO8NpMj7g+ex4sj5thAJxbXJfXIWB9DdRsbMttQaPNE/TC5QSsVZhoflXnxo6Ewp/wM0ZtqkCk0XcbtnFUYWGaX8y1LtJRwPPDqEEhVLaK7pxL53MrXKvLMyDwStpMHJpK0VA1AyLgkrtHBCA/H1wOKlY+lquzz6HuxSVXgtizj1T3BW4/2IVEaRl5gxd7W2tG4xIfl6c/4BqFATbIvBPHBNBT2kojiswGC8MQW67tEJpg579jbU34r6bYH3EtpriPDygqMW67N8c6i7tx9i6X6iZmTRppd53QL7nWO0CZBbj5nOr+DDSxEByKGKZpSdv9YFamWhiQ/lhygkE9riZpXAwRwzl/d1WEfkmi8Y8OzScfJYFe3F2Pug7vmTjZhb0zHkwCQko3tKUS2Ecolapebi1vtRaIr9HYxwqHwkyFZO74KXfdCyIcEia8k42G7exgIckAMd3I4SCgkivlTKfn1YFr2mcZM1y8LaryFjQtDOJgjZ/I4DfiEz5Mox0xva9H8Ng8ZnJLjZiDaeV7S80NDduPvx+DSS0CsBCPAkqbEIvHixqOL12JSS+yxdpJOFeHzaWQyYtgBY641cTS+IAzbbp00c6R47e561DEqDDIdh+v1YiR6PFBhfith8c7dUzLK/geTg9YbvNhKsX9zG0uWX63Q+yb2BXqbqQI7cKuUn++DTzFRp2QweBfJzKtIqlh5YTNVl720
i5MUP4UbldifvGqaqI8eteBq2wBVmYjIA2hqlydR6tmz40McpxoONNc3JfRRw8eHXH0Pq/22p8fhx0sF/Xgi65bEkxesarWbO3osJ++5ee0w1BBBGyAfHiFzoq9Gc/YOSwQLIYvOh4jqL4Fixi5l9PrhDslbD68K2PQtoErC5i5m2XuF6xGf6cCuP5IRmpdMbpicZaigsj/SaZPXBgUA8BViYjKLL/rP0y7SOcr9ExPtqnipwvi968ydyoC8RiLYhOaGCB23vLhmEIdzLeNrr6MHiEuMVeftWPD5SqzHmRvOStMzUrkH4LzP9KWGyjflmaRhuWN2x3ZBbFAR6SzdUN80tvXdftfkhdlF8EeB0aVhCLYKk73e8YdzGMa5X0cvGRbbnPFe80AwR5pHLENEggdQYIEr0z6OXtqKa945wZHZJvsZoXRr5lCno25anI077X2qZtCi6h9AHOkgXLiVhi3R697+g1vGT6cdU0D0txPpueKLB/vZE4qoiUn+w4duufOO3eWpu1N77HN5Ku8CbisMkStCTXin1XosFo4A2urLs86kkOoxvYJ8k8cHX3CsUN6qXZZkwrNctpbk1cDyz7o1cLlFmMBB1jRy6qi3yGO0tLX7sHX+3Vem/lAtXvhvGnb5bh0teHrCYFeqz8RDBjBRVaYzPfmBtDZfbG1LP7LWhrB+ZLdBxKqqlvdRDHINovbK0fuYnZ6he0iICd23GhrI70MG3wLf5PhxNwfLkShGYpzx8YgYsyzpfZ8vG/MeFnODUuyJJQcUWoOzDEwcknxGBiCMAT1WKt8gDUR6MxSRcFQyUrb3KKWG14kgMi6wdTHxlocOK5i+LLyUtSmnKDENbVRMSX7GYYCG2q3h4iAyFkhe5fqz96bdRNMzG0mC01Z3+EfAl01WMmxXTwjzqQkRtV8fd2//7efWAezzzGII2ZOWCS2cAyXEctPqT+sSs/ISOKtj2Cfbqvxea5mvoc3UdhPizkGFOvswgmmK1HvhBHfQn1xpCNMiQvvToxzUYUZRa8hb8Imd/4alolDG2FNQbLqKPz7aNgy5L8ajb4xdd5VLe1YKGA1Hbvxm76YRHQcYoPhuriEH6EWDtO83FpYe4P3ZoQJdqihk+QBOLWvh7i0ujUHGsK1ml3HFulYuyuMjMWcLSKB43K4N6BD77o+rTZXvjGjAKZ13Peo2R7qtbDQs7zmIQkeqJ3cUAC7ha3sGKi9xMrXhJdvxpobB6DNP2xgpBXYqbTAVYZ45haW4tp4U8WXHJF7l7wMrbFQCojkXiP1moyMLXZJ1fKHh345TKvFD8JhP9lo4YkcY4xgoG/256REYTiPxOtsWHmRXAbzoxgCr1ysmzL9xTRO8iVdl44kF+wCnb++JnGj41CI+VvbPm2FBgIHuj0YEtey/yaywRk4pg45a7RBl4enXiuk4rBglknV8xls7G2kTQ0d9iL1Qvjx2jL60uheIrCelm1tSsfFmtcxIatwin4+OxYtcwJ5t57OLs75uVyQL6bZYeNTSW6ffgHg/a4pxBHVi4xeQb9lwLClNHaypyLtnrCW7oZDMeFoLK+ybs7H2UTGcpdkuRmWo4QAxNR7jLMBz3vbfcdqbNQiDqvOltjcbTERhBeYwqBpDC3O2ZLTMoK91YoiXhiw4R2WdSxz2x/aeEpw8I8xmS2FUkxFWuskoJT+ioqNH7E7NI+y4LiudSX5ybws0Kp++PfiJFZOJb9Fv0oqFyES0V8njAVLaiN0C6b7+Ovbv0ghmjBxZpSwYAeQU5j61N0R+7QZdGx4Dd4TcaZYh5qWE9wMgTZOFcRv8ULs8obZ96eACw0nlm+uKsBFMWkWay9yOkiBtguBtTMKXOr8YXgsXtBtjx45HEwyiOAyq5MCBazO61Q5WuyhT2lU8SHEQnTr58m9Abr4gBcCzE1KPI4iqUlC6d9qziEBG4awGJVDmqaCPA4csJ5s02YrJE0AI+UNf+CtqRUfhhJJlNef+8d8cyGHLel2M
8QmLVZoMkL5ZvyBAQDBMVrcbS7T7/iEGiF5AJ8W6Rx91iONTEQaL21gu+r7ICNbtojPzcKlzwofI+G4i6rs4jgiUbn9BGq+oaxpIpcI2oTp7NwCWaLVSsAQd5ZKGo2OxjTk8esLwNWKT0WrztYO3h+1WYtqSm413ExVFUIEeEhgj8TYQyuVJigkykq0Lo0ELcC/ghVihAlPBwOhIg3XkiasedLDcnEOgriUShfqhbzKlEVAYeHp4EyocivX+dRWI+Tlusg+icri6safR5/hg4ZjWQb7//lBT7w3PQTwAvIjcMks4ryVbynq9rYB+wdf+SjqvT0MnqLPzUjAzPtLpHb3DOPbEkv8Yxif1Lp7M6xadcrmk8s/CeaE8QtWhCyvvwd+ByBAzxFC0Kd55C+JNLl3puaUqEEiIG9Y+Ex/VhmetGLcesDR2xPfPOp477nxXHuRy1yRMIp4bPqdyScxbbeHII9MHnnzRjbMz4N/jdXtmDedCRfnbba/Q217d9gna8mnJD1CCIPCPojGe8n75XREq/A3lukPKhy5fQEAe+nkBhmC/0L//9+MG54+jCAH9AnaqAUV7nS3Vz1ui5C8Y/qO4yuuy+vVRMPULjPwoj+cfZeU/HgnCKD9eBKjwweVt++t7fT8jUJ39uKYNmP8R8Lke0/W6SjSyMwT6d+xn3ba4XfMf5/0NIdr70Wwx3JVFoHk52x9HiM86/Hrg73N93aXMfQJMjcdvB+9P5c+/37skvxY4az4v9dB/L/px7H7j5LfzIXnY799d3IOWTKZh78HbL3k3g5pPN/KGliq//6LiP+4+/eXzmH7e8+mvn4b8rvxHVX89hPxTrZFbDEB5tXR3y97OH2XnZRreOTe0w31/vh/6HLRK3bZ/KIrbm1LcX9NbbMCrsECo6jRumZ8HujrLwGPYvbrr+RjjFDxzn+LxLpuAhOeg86Cfjf74tjmPwP8eKf07gUH/JJgYBP2rVOLQn4kk/l8lj8j/L4//X5VHDKF+IdF/kkiUwn/ByP/rQon+54WS/jOhtKa6rHuwYBJ63J1ySxQRd6Ct+2QefydE/68TCZj6N5konPqD74ThXyj4XwSC+BN5QKH/KnHA/lIc5jHu/1Qc0h/NDkRhKpP/dr/a/XM/Hvrdp/8OPn5twVd4irir2/PHNf8QjL8hKIqBrs7bLQcd9S9H/vkmf5TAfzr24y3BwX6Yuq8Y/u7w/rMtwXHsx3t+D7b5covK3++qpnVf/un1QB7/Xt+i0f+8Hvrds78Hlynu5+K+6tfrgfD/PGEfpuyfb//7y5M4fZdfwfv7HxoVwah/NCYCEqP9+hn/XdNm9Ty28c9mrfu2/t2Di3aIl9+/0B/11Yj7elzbeLll/jeV/VVFf3T+f6yif6ZFv1Pb36sQ/Wfq+e8wszD6RzML0+gvBP2vZvbP1Ar7L1Mr/C/V6h9+Ekwshby7VbK/dKT/jzeN9L+nG9E/gDfkV47wf8ss4vi/NH6elfnj59dhWqqhHG5vJ/xW+odm+u0cfRjGn/3V3Pbm/Nl48boM/9yb+VEvAbj8F/znt/B3R/jj552/X85fv/R3fYNfTwNfwt/uAL7+dtn32/kXvfhDkvyf7Y38o19Btf/jXr1baVinNP+PmvPniUs8lfnyH5wIo38uJ1N+G6p6++c3+bNe/17KTFN8/u6Ecaj7Zf7dnW1Q8Jv4/UOOfoof/pM7iH9xPozg/7nzEegPEvrjjX+T139U/T8hwn/Nhv/Ss/9vAb1/MWRePa9fh+mAi+zbI+XTn9uz/z33MlfxCD7efnXMrwHUkh3zqb5bBNiu38rt3wr/l86oPvLsV3n/6uoS/2BMPA39wWbe6KOg0jxN/8XA3kcSCsdw6N9j9hDkD5wV/RPOiv4pPfg/N3wghjiAjvxNyO5Wqowhy8EZ/xM=</diagram><diagram id="5JEDzc15ADgpa7Mk0QQZ" 
name="Page-2">5Zldd5owGMc/jZf1EB4IcFvXrjtnb63d2rsdJEFYkbAYp/bTL2hA0FjtBsV2eCH55wX4/Z+8QQ8Gk8V77mfRJ0Zo0jMNsujBu55pIgNc+Zcry7WCLW8tjHlMVKGNMIwfaVFTqbOY0GmtoGAsEXFWFwOWpjQQNc3nnM3rxUKW1K+a+WO6IwwDP9lV72IiorXq2sZGv6LxOCqujAyVM/GLwkqYRj5h84oEFz0YcMbE+myyGNAkh1dwuflwPbW+LG/4j6trb7Z8fLj/OT5bN3b5nCrlI3Cair9u+vsUvs0/D9N7/PFy8uvuNrpNQVUxfvvJTPFSzyqWBUDOZimheSOoB+fzKBZ0mPlBnjuXISO1SEwSlc2Z8EXMUpk0ZDKMk2TAEsZXTQFFxKaO1KeCswdayfGwAz6WOep+KBd0sWXggadHpSUylimbUMGXsl7RSmG4CmNHJeebmLCLIlElHsBSoq/icFw2vWEtTxTuZ6BHh9HLiMvyUyE500eWN3eeUR7LG6C8qn/diIccCuMFLbrplmOexjJpmEssnWWuOQLcqmVW3bJySKl4hkyNZ7gty8yGe8sW6yCgdhjqWAMGD0iLrM2t7uHtogasQV0Ojo2zhnZZh3b+y3WWiirp1aHzAK+OFj1wUd0D1Hm8W/+9B2B27YHdrgcjREho6FgjwwGPtsganDpry+6aNX6zrG331Fg7r2L5E7oBDQKdZSPXziG+3PKne8vchrtHhf2ZFn5IsR4+cbyR0Sp81+qDU5+OXbfv2LsWeHYfw64L0JYL3mEXxtKGbC8atdP2R0VxoxFk20tI7dQJmnh12yJVzOWtrV/C0NwTn3iE7aPXKftt3oXdHcwj9qv/HnfPQVG+oqqFndPX9FGkIeW0RqrpbeJLjZNP+L6X/plZx6/p9I4GfmujI2p633jK8E+NfdP7xVNmj0+MfdP7xKff5Ha4NLO3ot4D3bJM9y4Xl3ND8/ib3jq+GvwI477ZPf8jtpNvk78JuhVPY/hlcvOFa5VX+U4IF38A</diagram></mxfile>
|
2202.12162/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,31 @@
# Introduction

Are our artificial intelligence systems capable of reasoning? Or do they, like *Clever Hans*, use various cues only tangentially related to the task and rely on rote memorization with poor generalization [@pfungst1911clever; @Johnson_2017_CVPR]? This work revisits this question and proposes an interactive framework with a communication channel between two players. The first player, whose reasoning capabilities we are about to test, performs visual reasoning tasks. The second player, the adversary, manipulates the scene so that it *fools* the first player, even though those changes still lead to correct reasoning steps among humans. Both players interact with each other only through questions, answers and the visual scene, as shown in Figure [1](#two_agents){reference-type="ref" reference="two_agents"}. If manipulating the scene causes the first player to change its answer even though the new scene is still valid for the same question and answer, we regard this as a reasoning failure. It is similar to the following situation. Imagine a box placed between two spheres. If you ask the question, *is there a box between two spheres?*, the answer should be positive. Now, if we move the box anywhere such that it does not cross either of the spheres and ask the same question, the response should remain unchanged. In other words, we postulate that the reasoning outputs of agents need to be invariant under scene configurations that are consistent with the question-answer pairs. Moreover, in the spirit of generic adversarial attacks, we seek configurations that also pose little if any reasoning challenge for humans.

We propose an automatic and model-agnostic pipeline to benchmark the reasoning capabilities of various models, assuming only that they can communicate by answering questions about the scene. Given the recent stream of research in vision-and-language [@VinVL; @Defense; @GraphReasoningVQA; @VLBert; @vcrcnn; @kamath2021mdetr; @lxmert; @Uniter], we believe there will be an increasing number of vision models that operate through language. Moreover, we consider the visual question answering set-up, framed as a two-player system, an excellent benchmarking pipeline. We perform all tests by manipulating scenes and observing how a tested model behaves under such changes. The pipeline does not require any knowledge of the internals of the tested model. It also does not manipulate the sensory information of such a model, e.g., pixels in the images, and all the manipulations are physically meaningful. Even though our current pipeline uses synthetic scenes, as only those can easily be manipulated automatically, our results also have real-world ramifications. If models are susceptible to semantically *meaningless* changes[^1] in scene configurations in a synthetic setting, there are valid concerns that real-world robots could also be prone to manipulation of objects in a room. Finally, our work also questions the possibility of training and benchmarking networks in a purely data-driven and offline, static manner.

The main contributions of our work can be summarized in three points.

***First***, we propose a strong *black-box* adversarial test, which makes no assumptions about the underlying mechanics of a tested model, formulated as a game between two players. Our test does not require any direct access to the tested model, not even to its sensory information. In particular, it does not require gradients, output probabilities, or any access to the perceived image. Our work also deviates from bounded perturbations and instead focuses on global scene manipulations that are still consistent with the task constraints yet can change the behavior of a tested model.\

***Second***, we reformulate visual reasoning by integrating visual question answering with zero-sum two-player game frameworks. Under our novel formulation, a visual agent and an adversary agent compete against each other through content manipulation. We believe that this is an initial step towards more sophisticated frameworks that integrate computer vision with multi-agent systems.\

***Third***, we explore the limits of data-driven approaches in synthetic visual scenarios, and demonstrate that current CLEVR models lack the efficiency to learn robust reasoning steps.

# Method

In this section, we briefly explain how the CLEVR dataset [@Johnson_2017_CVPR] is constructed and introduce our notation and definitions. CLEVR is a synthetic visual question answering dataset introduced by @Johnson_2017_CVPR, which consists of about 700k training and 150k validation image-question-answer triplets. Images are artificially constructed and rendered from scene graphs -- a special structure containing information about object attributes such as position or color. Such a scene graph is also used to synthesize the ground-truth question-answer pairs by expanding templates according to a depth-first-search ordering. Ambiguous scenes are rejected. Each image represents an isometric view of the scene containing from two to ten objects. There are three classes of objects: *spheres*, *cubes* and *cylinders*. Each object can also be either large or small and has one color out of four (brown, purple, cyan, yellow). It can also be either metallic or rubber-made. Every object has $x$ and $y$ coordinates that are confined within the $(-3, +3)$ range. We use the same generation process to render modified scenes. Various models have been introduced to work with the CLEVR dataset, some even 'solving' the dataset by achieving near-perfect performance. Despite the strong offline performance, we test whether those models' performance persists in a more interactive setting where configurations of the scene can be changed. Whenever possible, we use pre-trained CLEVR models. Otherwise, we train the remaining models from scratch, making sure we achieve results similar to published accuracy numbers on the validation set. We summarize all the models in Table [\[model_table\]](#model_table){reference-type="ref" reference="model_table"}. We show the accuracy on the CLEVR dataset (*Accuracy*), indicate if an architecture is trained from scratch (*Re-trained*), briefly describe how multi-modal fusion and reasoning is conducted (*Reasoning Mechanism*), and indicate any extra privileged information required during the training process (*Extra*). For instance, some models require extra access to functional programs used during the dataset generation, use scene graphs as a supervisory signal (states), or always operate on scene graphs (input-states). Otherwise, the models were trained only from image-question-answer triples.

We formulate our problem as a game between two players: the tested model and the adversary. The tested model takes question-image pairs as input and provides answers to such questions. Some models use states (scene graphs) that replace images, or require programs [@Johnson_2017_CVPR]. The whole game consists of all CLEVR data points. For our purpose, we extend the notion of the game into sub-games. The rules of a sub-game are identical to those of the whole game; the only difference is that each sub-game operates on a subset of the CLEVR dataset. We define the size of a sub-game by the number of data points attached to it. We sample the data points for each sub-game randomly and mutually exclusively. Sub-games have analogies in the adversarial perturbations literature: a sub-game of size one resembles per-image adversarial perturbations [@FGSM; @Deepfool], whereas a sub-game that has all data points is similar to universal adversarial perturbations [@moosavi2017universal]. In this work, we investigate various sizes, but due to the sheer scale we were unable to use the whole game as a single sub-game. Larger sub-games make the optimization process more difficult, as the domain where the adversary needs to operate increases. The training is also much more time-consuming and sequential. Instead, we can train multiple players on different sub-games independently and thus massively in parallel. We leave the arduous training of a universal adversary on the whole game as a possible future direction.

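The mutually exclusive sub-game sampling described above can be sketched as follows. This is a minimal illustration of ours, not the authors' code; the function name and index-based partitioning are assumptions.

```python
import random

def sample_subgames(num_datapoints, subgame_size, num_subgames, seed=0):
    """Partition a random shuffle of data-point indices into sub-games."""
    rng = random.Random(seed)
    indices = list(range(num_datapoints))
    rng.shuffle(indices)
    # Mutually exclusive: every data point belongs to at most one sub-game.
    return [indices[i * subgame_size:(i + 1) * subgame_size]
            for i in range(num_subgames)]

# e.g. five sub-games of size 100 drawn from the ~700k CLEVR training triplets
subgames = sample_subgames(num_datapoints=700_000, subgame_size=100, num_subgames=5)
```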
::: center
:::

We need to ensure that the adversary creates valid scenes that are *consistent* and *in-distribution*. Both properties are guaranteed by our environment enforcers. Since scene manipulation may change the answer for a given question, we need to ensure this does not happen; that is, the new scene must still be *consistent* with the question-answer pair. The question-relevance enforcer achieves this by running the functional program associated with each question [@Johnson_2017_CVPR] on the modified scene graph, obtaining the new ground-truth answer. The enforcer rejects the new scene if that new answer differs from the previous one. In this way, it guarantees that newly generated scenes give the same answers as the original scenes on the same question. Thus, we can generate equivalent scenes containing the same objects that have identical answers for the same questions. Using that enforcer, we can test whether the tested model's answers are invariant under such an equivalence class of scenes. Even with the question-relevance enforcer, the adversary may still produce undesired outputs. For instance, it can stretch the whole scene, thus violating the scene boundaries from the original CLEVR dataset and making, e.g., everything look very small (see the appendix). Although this is still an interesting form of adversarial scene manipulation, we instead focus on *in-distribution* scene manipulations that respect the original boundaries. To enforce that property, we use a scene-constraint enforcer that checks the boundaries of the scene. Without that enforcer, the adversary would quickly resort to stretching whole scenes, achieving a form of adversarial attack that uses distribution shifts rather than content manipulation. It does so, e.g., by moving the camera away until objects are barely visible. We give a few such examples in the appendix.

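The two enforcers can be sketched as below. This is our own illustration under stated assumptions: the dictionary-based scene representation and the `run_program` executor (which evaluates a CLEVR functional program on a scene graph) are stand-ins, not the paper's implementation.

```python
SCENE_MIN, SCENE_MAX = -3.0, 3.0  # CLEVR per-axis coordinate range

def scene_constraints_ok(scene):
    """Scene-constraint enforcer: every object stays inside the CLEVR boundaries."""
    return all(SCENE_MIN <= obj["x"] <= SCENE_MAX and
               SCENE_MIN <= obj["y"] <= SCENE_MAX
               for obj in scene["objects"])

def question_relevance_ok(run_program, program, old_scene, new_scene):
    """Question-relevance enforcer: the ground-truth answer must not change."""
    return run_program(program, new_scene) == run_program(program, old_scene)

def scene_is_valid(run_program, program, old_scene, new_scene):
    # A manipulated scene is kept only if it passes both enforcers.
    return (scene_constraints_ok(new_scene) and
            question_relevance_ok(run_program, program, old_scene, new_scene))
```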
<figure id="two_agents" data-latex-placement="!b">
<div class="center">
<img src="images/Long_Agent.png" style="width:85.0%" />
</div>
<figcaption>Our game between two players. The adversary uses a <em>multi-modal module</em> to extract features conditioned on the visual and textual inputs. After transforming such features with a feed-forward architecture, it samples an action using object-specific heads. Each action corresponds to manipulating the corresponding object in the scene. In the case of missing objects, we use an <span class="math inline">∅</span> token. After altering the original scene graph, we use various environment enforcers to ensure the validity of the constructed scene. A valid scene graph is rendered and shown to the tested model together with the original image. Finally, we collect the responses of the tested model, calculate suitable rewards based on them, and repeat the whole cycle during the training phase.</figcaption>
</figure>

Meaningful scene manipulations require not only generic scene understanding, but also the ability to distinguish which objects to displace and how. Hence, the adversary is a composition of a multi-modal module, which creates the input representation, and a decision maker, which decides how to control the scene. Figure [1](#two_agents){reference-type="ref" reference="two_agents"} illustrates the adversary and the game between both players. We have experimented with the same multi-modal modules as the tested models, but found that we obtain better performance and a better convergence rate if the adversary operates on the scene graphs (states) instead of pixels. For that, we use the *state-input* variant of Relation Networks [@Santoro_2018_NEURIPS]. The model receives as input $10 \times 6$ object tokens, and question tokens. Every object token represents one of up to ten possible objects in the scene by its attributes such as position, color, shape, material and size. If the scene has fewer than ten objects, we use an $\emptyset$ token to indicate that, which also acts as padding. We also have special tokens that separate questions from the objects, which we add as a latent embedding, e.g., $\text{emb}(\text{material}) + \text{emb}(\text{object})$. Such an input encoding is similar to our *State-Input Transformer*. The embedded vectors are given to the Relation Network (RN). Finally, we train that network on the CLEVR visual question answering task, where we achieve $97.6\%$ on the validation set, and use the representation just after the last relational layer for the decision maker.

Inspired by work on reinforcement learning [@a2c; @conta2c; @ddpg; @sea2c; @trpo], we use an actor-critic module that acts on scenes. The actor is a general-purpose fully connected layer with ten object-specific heads. Each head is randomly assigned to a unique object in the scene for its manipulation. Every head produces a displacement in the $x$ and $y$ coordinates of the corresponding object. Although we initially experimented with a continuous output space, we found the following simple strategy to be more effective. First, we discretize all the $x$ and $y$ coordinates into $N$ bins each. Each head then produces two $N$-dimensional vectors that are projected into a probabilistic space via softmax. Next, we sample displacements along the $x$ and $y$ axes independently from the two softmax distributions. Note that, even though we do not model the joint distribution explicitly for computational reasons, both samples condition on the common head and thus are only *conditionally* independent of each other. We discretize the scene, where each axis has values in $[-3,3]$, into $N=7$ bins per axis. Our critic is a simple three-layer feed-forward network (with ReLU activations) that predicts a reward score between $-1$ and $+1$ via a tanh activation ($1.2\cdot\tanh$ for better numerical properties). Due to our formulation of the game and the environment, we can benchmark various reasoning models purely in the *black-box* setting via a series of questions about the scene. The adversary manipulates the scene so that it is still consistent with the question-answer pair. The manipulations are applied to scene graphs, and the resulting scene graph is evaluated by the environment enforcers described above; invalid scenes are thus discarded. In this way, we ensure the *in-distribution* and *consistency* properties in the scene generation. Original image-question pairs are fed to the tested model, which produces corresponding answers; we refer to these as the original answers. After the scene manipulation, new images paired with the same questions are also given to the tested model, which produces the new answers. We construct rewards based on the original answers, the new answers, and the *ground-truth answers*.

If the adversary forces the tested model to change its answer, i.e., a new answer differs from the original answer, it gets the *Consistency Drop Reward* (*cr*). If the original answer was also the *ground-truth* answer, it instead gets the *Accuracy Drop Reward* (*dr*). The two rewards differentiate between simply confusing the model and causing a drop in its performance. If the adversary produces an invalid scene, it gets the *Invalid Scene Reward* (*isr*). This reward encourages producing scenes that pass the environment enforcers' tests. Finally, if the adversary does not manage to fool the model, it gets the *Fail Reward* (*fr*). We use the following values: *dr* $= 1$, *cr* $= 0.1$, *fr* $= -0.1$, *isr* $= -0.8$. To train the adversary we use the A2C algorithm with the episode length set to one, as we do not need to model long-range consequences of the decision-making mechanism. Batches contain images, questions, answers, programs and scene graphs. We train the adversary for each sub-game independently using the same architecture. We experiment with the following sub-game sizes: $10$, $100$, $1000$. All sub-games are constructed randomly. Under our discretization scheme, the action space has size $(N^2)^k$, where $N$ is the number of bins per axis and $k$ is the number of objects in the scene; in practice, it is up to $49^{10}$.

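The reward scheme can be written down directly. The reward values come from the text; the function shape, and the assumption that the Accuracy Drop case fires when the original answer matched the ground truth, are ours.

```python
DR, CR, FR, ISR = 1.0, 0.1, -0.1, -0.8  # reward values from the text

def adversary_reward(valid_scene, original_answer, new_answer, ground_truth):
    if not valid_scene:
        return ISR   # Invalid Scene Reward: failed an enforcer check
    if new_answer == original_answer:
        return FR    # Fail Reward: the model was not fooled
    if original_answer == ground_truth:
        return DR    # Accuracy Drop Reward: a correct answer was flipped
    return CR        # Consistency Drop Reward: answer changed but was already wrong
```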
|
2208.02080/main_diagram/main_diagram.drawio
ADDED
2208.02080/main_diagram/main_diagram.pdf
ADDED
|
Binary file (84.2 kB).
2208.02080/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,19 @@
# Introduction

The amount of user-generated video content uploaded to the Internet is ever increasing, with more than 500 hours of content uploaded to YouTube every minute as of February 2020 [\[6\]](#page-7-0). Finding the relevant videos for a given query requires a mix of computer vision and natural language processing techniques, placing this problem at the intersection of the two communities. In particular, the text-to-video retrieval task encompasses this objective by requiring a method to rank all the videos based on their semantic closeness to the input query. Another task, similar to text-to-video retrieval and used to evaluate a method holistically, is video-to-text retrieval, which switches the roles of video and query. In general, the term text-video retrieval covers both tasks and, given its cross-modal nature, involves both visual and textual understanding.

MM '22, October 10–14, 2022, Lisboa, Portugal

<sup>©</sup> 2022 Copyright held by the owner/author(s). Publication rights licensed to ACM. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in Proceedings of the 30th ACM International Conference on Multimedia (MM '22), October 10–14, 2022, Lisboa, Portugal, [https://doi.org/10.1145/3503161.3548365.](https://doi.org/10.1145/3503161.3548365)

Recently, deep learning techniques were used to automatically extract features from the multimodal data and learn how to solve this task, showing their potential and achieving impressive results [\[9,](#page-7-1) [45,](#page-8-0) [58\]](#page-8-1). However, a significant limitation to the success of these techniques is the huge amount of annotated data required to train a deep learning model. To this end, large amounts of data were collected through crowdsourcing platforms where human effort is required to carefully annotate the data, leading to tedious tasks for the annotators and huge costs for the dataset collectors. Examples of large-scale datasets obtained with this approach include MSR-VTT [\[66\]](#page-9-1) and VATEX [\[57\]](#page-8-2). To reduce the costs of the collection, the scientific community mainly investigated two automatic solutions: web scraping and data augmentation. In the former, the extraction of visual content from the Internet and the related annotation are performed automatically, for instance with speech recognition [\[41\]](#page-8-3), alternative texts [\[3\]](#page-7-2), or by leveraging hashtags [\[19\]](#page-8-4). While this approach leads to possibly huge and rich datasets, the annotations are often noisy and it is difficult to guarantee their quality. On the other hand, data augmentation techniques are often used to artificially increase the size of a dataset by leveraging the already available annotated samples: new samples can be obtained by applying label-preserving techniques, hence providing semantically coherent data and avoiding the noise.

Indeed, these techniques have shown great potential in many fields, both in the vision community, such as classification [\[4,](#page-7-3) [28,](#page-8-5) [54,](#page-8-6) [67\]](#page-9-2) and detection [\[47,](#page-8-7) [71\]](#page-9-3), and in the language processing community, such as text summarization [\[16,](#page-7-4) [44\]](#page-8-8) and text classification [\[29,](#page-8-9) [61\]](#page-8-10). Although augmentation was applied to visual question answering [\[50,](#page-8-11) [59\]](#page-8-12) and image captioning [\[10,](#page-7-5) [53\]](#page-8-13), these techniques are less explored for text-video retrieval. To address this shortcoming, we investigate the application of augmentation techniques and propose an augmentation technique for text-video retrieval which exploits multimodal information (visual and textual). In particular, our video augmentation strategy creates a new augmented video by mixing the visual features of two samples from the same class ('Video fusion' in Fig[.1\)](#page-0-0), therefore leveraging the high-level concepts automatically extracted from the deeper layers of a CNN-based backbone. This is achieved by performing our augmentation in the feature space, as opposed to common transformations, such as the geometric and color-space transformations used for images, which are applied on the raw data [\[28\]](#page-8-5).

In fact, working in the feature space offers three additional advantages. First, the same technique can be applied to data coming from different modalities, for instance both video and text as we show in this paper, without requiring the considerable changes that are likely needed when applying a technique defined on one modality (e.g., replacing a word with a synonym) to a completely different modality (e.g., video). Second, it does not rely on the availability of the original videos or frames, which are more difficult to share and are not always shareable due to privacy or copyright issues; e.g., more than 20% of the original videos of MSR-VTT were reported to be removed from YouTube [\[40\]](#page-8-14), whereas all the videos of MovieQA [\[52\]](#page-8-15) faced copyright issues. Finally, it can be applied to pre-extracted features, making it overall less time- and resource-demanding. The augmented caption for the abovementioned video is also created by following the same principle ('Text fusion' in Fig[.1\)](#page-0-0), showing the general applicability
of our technique to multiple types of media. Finally, to validate our approach, multiple experiments are performed on the recently released EPIC-Kitchens-100 dataset [\[11\]](#page-7-6). These experiments include multiple ablation studies to demonstrate the effectiveness of our strategy and motivate the design choices; several comparisons to augmentation techniques inspired by the literature; and, as additional evidence of the usefulness of our method, further improvements observed when our proposed technique is integrated with a state-of-the-art model. To support reproducibility, code and pretrained models are made publicly available on GitHub at https://github.com/aranciokov/FSMMDA\_VideoRetrieval.

We organize the paper as follows. In Section [2](#page-1-0) we review the literature and contextualize our work within it. Then, in Section [3](#page-2-0) we describe the proposed technique in detail. Several ablation studies and experiments are performed and discussed in Section [4,](#page-3-0) whereas in Section [5](#page-7-7) we conclude the paper.

# Method

As in the case of videos, we design the textual augmentation technique in the feature space. We define two criteria, $\psi_V(a,q)$ and $\psi_N(o,q)$, to identify the captions which can become valid substitutes of a given caption $q$ based on one of its actions $a$ or entities $o$. For instance, $\psi_V(a,q)=\{d\mid a\in \mathsf{act}(d)\land \mathsf{ent}(q)\cap \mathsf{ent}(d)\neq\emptyset\}$.

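The criterion $\psi_V$ reads directly as a set comprehension. In this sketch of ours, `act` and `ent` (extracting a caption's actions and entities) and the toy corpus are assumptions for illustration only.

```python
def psi_V(a, q, corpus, act, ent):
    """Captions containing action a that share at least one entity with q."""
    return {d for d in corpus if a in act(d) and ent(q) & ent(d)}

# Toy example with hand-written extractors:
act = lambda c: {c.split()[0]}       # first word as the "action"
ent = lambda c: set(c.split()[1:])   # remaining words as "entities"
corpus = {"cut onion", "cut pepper", "wash onion"}
candidates = psi_V("cut", "peel onion", corpus, act, ent)  # -> {"cut onion"}
```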
Given these operators and a caption $q$, the augmentation is performed with probability $\chi$, and the decision between actions and entities is taken with uniform chance ($\chi$ is the same as in Section 3.1). After the selection of a valid candidate $d$ (step 16), the latent representations of both $q$ and $d$ are extracted with a function $g$ (steps 3 and 18) and then mixed with the function $\rho$ (step 19). As for the videos, we define $\rho$ as a mixing function working on the high-level concepts extracted from the language model $g$, that is $\rho(\overline{q},\overline{d}) = \lambda \cdot \overline{q} + (1-\lambda) \cdot \overline{d}$ ('Text fusion' in Fig.1).

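The fusion $\rho$ above is a convex combination in feature space and can be sketched as follows; the toy vectors and the value of $\lambda$ are illustrative only, not values from the paper.

```python
import numpy as np

def rho(q_bar, d_bar, lam):
    # Convex combination of the two latent representations.
    return lam * q_bar + (1.0 - lam) * d_bar

q_bar = np.array([1.0, 0.0, 0.0])   # stand-in latent of caption q
d_bar = np.array([0.0, 1.0, 0.0])   # stand-in latent of candidate d
mixed = rho(q_bar, d_bar, lam=0.75)
```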
|
2209.00638/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2022-03-01T22:19:22.675Z" agent="5.0 (Windows)" version="16.5.1" etag="Sji4xzJFKa6elFwpxyjp" type="google"><diagram id="_gMAeBRpV5eqyzMr7kIk">7V3bkqu4Ff2afhwKISHEY7svmVRlKlN1qibJ0xRt0zY1tnEwfbo7Xx9hLgYhG4F1wTb9cI65WIa1lqS9t7akB/i0+fpbEuxWv8WLcP3g2IuvB/j84DjAsTH9LzvznZ/xiJ2fWCbRorjpeOJH9L+wOFne9hEtwn3jxjSO12m0a56cx9ttOE8b54IkiT+bt73H6+av7oJl2DrxYx6s22f/FS3SVX6WON7x/K9htFyVvwywn1/ZBOXNRRH7VbCIP/NTh5eDLw/wKYnjNP+0+XoK1xl4JS45Aq8nrlYPloTbVOQLTv6Fn8H6o3i34rnS7/Jlk/hjuwiz+8EDnH2uojT8sQvm2dVPSi89t0o36+LyPk3ivypQ6OvM3qP1+ilexwk93sZb+rVZ8ZthkoZfJ58bVGhQGYXxJkyTb3pL8QWCgIULuAsRQd/Njz+PlADXtjDMT69qjKDi1qAQwrL6gSNW9EMBFx86yIEOr9PslWP6EnUM8X8/4vLCL/uDnB/pDfSFv44X6adl8b+EUp7DOa12Cb0+W8fzv8pC6Uvl5TZ/6y1hz7RuZFRBqUt51DNUc9gP1tFySw/nlGr6gHCWCSGileuxuLCJFovsZ7haO6rRpkcxvRilmQBcW5KwILB8nxGWZ3Gk5dkcXdmX6wp16Wpe4FknHWLvcea/8tREy0PBnzD/cF30LtfBfl98XgT7VdUKzeNNNC8uZO9RdBGOJBF42C4ZLzSAina0rgDEEQCUIAB3kABmrgf959MCQJMALhKAq08AeJAAXl+JbdunBQAmAVwiAIj1CcAbJADv5RG/nBGAMwngIgH4+gRA+pjldrdZXgMIkJaZ3uaWqsk+/Ckz4IdXMOBIANjvA/CI/J4KkbJbstugAVeRKkv3WyFqQTIvdErUYQibGFbHXdZ9BcBFIIKpdzfduAPcVIDLUQBRVYl4IZepc9fKfxkVM8J/Z9xo8u+V8+8Z5H9YfGdy7yXyD6FB/nnhHYadcLt4zIYpMpgziDIs6lzlXwgXrVGKTgBqL8izEstzSbgO0uhns3jeWxe/8HscHaRbEuIy+LIm+z7+SOZh8a0jdq2CENtRswWlQbIM01ZBBxKq1xbjhRd1YXjZr4Jd9pECHazX4TpeJsGG8rELk4j+VFYdmtd+P17odNHW0e6PUvL086/F+UWUhPM0irOqRWHL2Jvlwf9n24Ju889j6ixtON7fQzyf8zy8hee/2bIsKqZFRV67RvGcOSyjRvHCJRNzosz5BpkTiHNMzJ20YoBB5gQCKNfI3CIIyTuXOTwn4du7mgiETubKHnRibhBzrkHmeJGjibmBPp9W5ngxn1tgzg3JAvGYI84bxFgScwYtFG6Wz8ScqJ9t0ELh5tHcAHMhoNx5POZ87MFAFnMmLRSBCMnE3EnmTFooNxpD0cScSQvlRmMompgzaaHwYijZUEBajggwNN536B+V4y9sRLmDKhlJJyJZJ12h/3bKBH3x5Pvfhwrhlof/Ke4+HDx/FXDmR9/FUf7TrUGEPKJenLKF4dUzsAABk5WJmUR+0YGFVkEuw6+8gYXSkrpgogDweCn+f9/uPrKvv2zewsUi2i6va/BPfu4+G1bjzghRlbYPO/N6Bs7keNkWMzmumNx6Nx4G+7TWy5c9/h/F0yZxGhR3ZiEbOWMcDpNqyGn01QlDIPjTv9U/0XqPp6VmTCJoM1AKDwETpiBPrKWmeAbftdt22Q37/g988rnQhfd7jfvph/yJB3czApGqG5QZQBZx/ONfA+NWwoa4eXC2WFbLEo2FzuygbmOByJhVyC/llXp14S/r8OdhQu8uCRfR
oVnfT/MLhQdtfYvpjQDkmCl+lR4vOwEJCoTXriL3nbr/bdxU5b7DzklZA634yrxrTtSdasxJq54zE0+d8SYQF7uKyuIQnZWFF5OaKouBylKlsWqpLLcyq8rh9ciqKkv529Iryz+ibRgkUx2RNv9LyuoOAkk8/dy0Ks7LiQHbFvGqEyfiwIej2lDNueDwqPw+JoI7NMDQLgoSJqosz79D0oNBSuhvjMhcnxYqz0CCFoA6LUiP2NxtU3AmVsPWZUkhIHbgSaIsBBKXbjCQpyxeLBisMxwvvjj+i9SHdXRMzkcesLxmE6x3fj4SyGEapRODSAs5ratDoF6hEoFFS6SoyWthQhzeam6Yh4qEVQJRZyhkWu5Bec6PqApULeeDeIERGS7+Pz/SKQekq//QmgZStq73Zb25bruGDU3N4pXFMiPP2nZVpe1UC7BecX2s5eL4kuqnS9tik9k4Li8AM54B9h/hckORnIbYL+kCkGuVq+xUQ+xtiVHhVUqUvoTrsEV+pkWeJFp9LjBs9bnTSs7mRcAxJvSKYFrN2bwIfNMiEIgp1ZyCAvoaW00cw68ozSP8jlscVhM+6OdjcD876JjukVvlD40IzmjcCoRbxLmwIq63Y3G2Q1DgWggExAyxPhqGs/YZOXb1B5pkA28w2WcLVsu7wKpGd8+7r4r3swWr5f1a0584Iwdac2sxL2YmIzx6YhOcyVU+Fy3VmV6LBbKhrqXKaM2wxRLCWFOVkVRltCbZYoGsoaupMjrzbLGEqVVTnq2mVA4pNUXCSstTriXrVuHW8M3QZEteWeqyLXG/GMyUbXmaNNeTnm3ZUay6bEvcL0hzM+P1rao3eK40ryx1E6Sx7K2xpHR0mO3oAGczK2U5ZpgXeJhyzDTnmAlpQNmOgZ0hlGnAWf2As1kJdGYyTcPN6oebzUqgMyo0DTarH2w2K4F+k8pMDTWXZtxoLGKEW7S1ppOIDzSfloB8i9jrN1/sPocbabt8elQQ+Bd4rWeKVch5v9DWfXLuq+H8fLEKOe81MU3A861vCs0unVwo5lRkQ3ZOnnAPKcFb9mTPUzuHY30j7WOEfRyJbTKwVLlRuU5Nctx3zUj2Sh4ZMZIc40cvkoQXCrnS+n1R9rIMLHvlZ4wbywv8MhlI8lxzhUg+e/7sgCTX924PZElAmF1FC2AxoToy4O2VFXGd8JYSOgMvUAWvgI8pE178RF5mr3rhLXvgM/BC3pqsUvDttXjHdeLrA4ttfnUirNN7MtRA+Ki7/VWErmafyoh+XQH9+orw1elnHTYBnOWbAOrDF9rs0ms69avT+zKjX2h7ptD1NXtkJtQLum1fRW2Dr9lHyzZOe9SNLjGmXc1+mwntll85g66nCF3NbpsJ7ZabYp5BF6sSr2a3zYR4YXfT4ChCV6fTZjpww52WIQVFuXPj6xiSM4CNd9zSY0wJd2hiLlsQULeKlq/Z/zPgXRPHscpZd2Us3mvXE9e2cOOP07RD27Lrf2ckIlyLNDuIJgjwXIuAU1MEeFwAh+EC6+Hi9p1J4jmW3V0ZkIVg/U9PXQC2Zn/TBAHEs4jbQnaEXGj2Tk00TD4QqQyuZTfaIu7ScwoI0OzAmqgMvmP53QQAYKEGvBy/QAkDmp1cA16YbyMLjrcK3P7opW97FiJHZL3RcqF5pNNEZaANjVPjAndyYaxl0jwoaiD85jsU2/G2TJr9ZhO1gWLrdBNALNIggBNwVkKAZr/ZRA2AwHJ7uQoENGsD1MSFZr/ZRGVAWRJizQvr5MKx3Ibb5mgiA+j0oUcQtwPEt3y/FV1SDrOAe3xf0W9sszvw+QPj38Sm7Rg8X5a8EDgAAo72nVOJHHlUtsqSSaXcSbQ3SaUrkUq2LJlU9psdeyPr+/jHBXTZPq8vX5yiEFBHl/Q1266BLmLLaymRza7GpLSl5EUQxrOdzsu2XM30PQzSjyScdtG5wG52bY7d7Nu8lH0ZS1kAwIuOjEdcx6Vy70tctV3BsnWnpKgNu8Bq
JnIh3WLrFQka0bq6BAGLifdDwN3jNMtN4YQPkCsDvs416AbuoyenlKck3tPqaT+mKYU1E+9dVFUZ8oLAYnJtqlVadaziCxyBSNTtmYQ+wBYix3BUs3GEZWZvf2v+bLHqVi8BTr9I143QSACy8OmktQus/CZx7BR1SZvcHx7f4T7zyUdzWt3RgK+UL3QUXP7ow+Xn3KP8Oqo7GLyX1vmCHY+o22YHOJ2baxp1C16TYJO5AnYa03+KDV33Byj2tD/PjibjQ9T4AD47KZS3PY1Ct6CcV6ZiHMxp5ynW10Bph4kVORBMvAjw9ksnnEaKtRaGAaw5IcjI1AF2eXTUAhhxBCzDL3M0p/gYSUZvGkNOGTfSAe8dJPAARr0OacELeSaMDHjvID0HMfCWoyv1+aGq4L08aANcKZYUt5T3zJLKnpfy+zXZTD1sJnYlY95ujnyjifVIB+kK8kI249HVPrfJ70tZKqL2HvUC2a16OJ2vPKnRwyTOuDz6jvTVV7/FizC74/8=</diagram></mxfile>
2209.00638/main_diagram/main_diagram.pdf
ADDED
Binary file (20.6 kB). View file
2209.00638/paper_text/intro_method.md
ADDED
@@ -0,0 +1,91 @@
# Introduction

The ability to analyze, comprehend, and segment video content at a temporal level is crucial for many computer vision, video understanding, robotics, and surveillance applications. Recent state-of-the-art methods for action segmentation mainly formalize the task as a frame-wise classification problem; that is, the objective is to assign an action label to each frame, based on the full sequence of video frames. We illustrate this general approach in Fig. 1 (a). However, this formulation suffers from several drawbacks, such as over-segmentation when trained on relatively small datasets (which typically consist of expensive frame-level annotations).

<sup>\*</sup> Equal contribution.

![figure 1](main_figure.png)

Fig. 1. Using Transformers for Action Segmentation. Instead of frame-level predictions, which are prone to over-segmentation (a), we propose a seq2seq transformer model for segment-level predictions (b). To provide more direct feedback to the encoder we apply a frame-wise loss (c); the resulting features enhance the decoder predictions. However, duration prediction still suffers, so we focus on transcript prediction (d) and use a separate alignment decoder to fuse encoder and decoder features to arrive at an implicit form of duration prediction (e).

In this work, we propose an alternative approach to the action segmentation task. Our approach involves a transformer-based seq2seq architecture that aims to map from the video frames directly to a *higher-level sequence* of action segments, *i.e.,* a sequence of action label / duration pairs that describes the full predicted segmentation.

The basic structure of our model follows traditional Transformer-based seq2seq models: the encoder branch takes as input a sequence of video frames and maps them to a set of features with the same length; the decoder branch then takes these features as input and generates a predicted sequence of high-level action segments in an autoregressive manner. This approach, illustrated in Fig. 1 (b), is a natural fit for action segmentation because it allows the decoder to directly output sequences in the higher-level description space. The main advantage over frame-level prediction is that it is less prone to over-segmentation.

However, this seemingly natural approach does not immediately perform well on the action segmentation task by itself. In contrast to language translation, action segmentation typically involves long input sequences of very similar frames, as opposed to short output sequences of action segments. This difference, together with the relatively small number of training videos, makes it challenging for the encoder and decoder to keep track of the full information flow that is necessary to predict the high-level segmentation alone. For this reason, we incorporate several modifications and additional loss terms into our system, which together make this approach competitive with or better than the state-of-the-art.

First, to provide more immediate feedback to the encoder, we employ a frame-wise loss that linearly classifies each frame with the corresponding action label given the encoder features, Fig. 1 (c). As a result, the encoder performs frame-wise classification with high localization performance, *i.e.,* high frame-wise accuracy, but low discrimination performance, *i.e.,* over-segmentation reflected in a low Edit score with respect to the ground truth. Nonetheless, its features provide the decoder with an informative signal to predict the sequence of actions more accurately. This immediate auxiliary supervision signal allows the decoder to learn more discriminative features for different actions. While the frame-wise loss improves the transcript prediction, the decoder still suffers from low localization performance for duration prediction. As the next step, we fuse the decoder predictions with the encoder, for which we propose two solutions. First, we fuse the discriminative features of the decoder with the encoder features via a cross-attention mechanism in an alignment decoder, Fig. 1 (d,e). Second, the high performance of our decoder in predicting transcripts and the high performance of our encoder in localizing actions allow us to effectively utilize common post-processing algorithms such as FIFA [33] and Viterbi [30,21].

Finally, we further extend our proposed framework to the case where only a weaker form of timestamp supervision is available. As mentioned before, the frame-wise prediction is vital for our Transformer model to cope with small datasets and long sequences of frames. When the frame-level annotations are not fully available, we assign a label to each frame by a constrained k-medoids clustering algorithm that takes advantage of timestamp supervision. Our simple clustering method achieves a frame-wise accuracy of up to 81% on the training set, which can be effectively used to train our seq2seq model. We further show that the clustering method can also be used in combination with frame-wise prediction methods such as ASFormer [42].

We evaluate our model on three challenging action segmentation benchmarks: 50Salads [35], GTEA [12], and Breakfast [19]. While our method achieves competitive frame-wise accuracies compared to the state-of-the-art, it substantially outperforms other approaches in predicting the action sequence of a video, as measured by the Edit score. By using Viterbi [30,21] or FIFA [33] as post-processing, our approach also achieves state-of-the-art results in terms of segmental F1 scores. To the best of our knowledge, this work is the first that utilizes Transformers in an autoregressive manner for action segmentation and is applicable to both the fully and the timestamp-supervised setup.
# Method

In this section, we introduce our Unified Video Action Segmentation model via Transformers (*UVAST*). The goal of action segmentation is to temporally segment long, untrimmed videos and classify each of the obtained segments. Current state-of-the-art methods are based on *frame-level* predictions – they assign an action label to each individual frame – which are prone to *over-segmentation*: the video is not accurately segmented into clean, continuous segments, but fragmented into many shorter pieces of alternating action classes. We challenge this view of frame-level predictions and propose a novel approach that directly predicts the segments. By focusing on *segment-level* predictions – an alternative but equivalent representation of segmentations – our method overcomes the deep-rooted over-segmentation problem of frame-level predictions.

In this work, we view action segmentation from a sequence-to-sequence (seq2seq) perspective: mapping a sequence of video frames to a sequence of action segments, *e.g.*, as pairs of action label and segment duration. The Transformer model [38] has emerged as a particularly powerful tool for seq2seq tasks and may seem like the natural fit. The vanilla Transformer consists of an encoder module that captures long-range dependencies within the input sequence and a decoder module that translates the input sequence to the desired output sequence in an auto-regressive manner. In contrast to language translation tasks, action segmentation faces a strong mismatch between input and output sequence lengths, *i.e.*, inputs are long, untrimmed videos of varying length, while outputs are relatively short sequences of action segments. Therefore, we incorporate several modifications to address these issues, which we describe in more detail in the following.

**Notation.** Given an input sequence of $T$ frame-wise features $x_t$, for frame $t \in \{1, \ldots, T\}$, our goal is to temporally segment and classify the $T$ frames. The ground-truth labels of a segmentation can be represented in two equivalent forms: 1) a sequence of frame-wise action labels $\hat{y}_t \in \mathcal{C}$ for frame $t$, where $\mathcal{C}$ is the set of action classes; 2) a sequence of segment-wise annotations, which consists of ground-truth segment action classes $\hat{a}_i \in \mathcal{C}$ (also known as the transcript) and segment durations $\hat{u}_i \in \mathbb{R}_+$ for each segment $i \in \{1, \ldots, N\}$.
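The two label representations above are interconvertible: collapsing runs of identical frame labels yields the transcript and durations, and expanding them recovers the frame labels. A minimal sketch (hypothetical helper names, not part of UVAST):

```python
from itertools import groupby

def frames_to_segments(frame_labels):
    """Collapse frame-wise labels into (transcript, durations)."""
    transcript, durations = [], []
    for label, run in groupby(frame_labels):
        transcript.append(label)
        durations.append(sum(1 for _ in run))
    return transcript, durations

def segments_to_frames(transcript, durations):
    """Expand (transcript, durations) back into frame-wise labels."""
    return [a for a, u in zip(transcript, durations) for _ in range(u)]

# Six frames -> three segments, and back again losslessly.
frame_labels = ["cut", "cut", "cut", "mix", "mix", "cut"]
transcript, durations = frames_to_segments(frame_labels)
```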
**Transformer Encoder.** Our input sequence $X \in \mathbb{R}^{T \times d}$ consists of $T$ frame-wise features $x_t$, where $d$ denotes the feature dimension. We embed them using a linear layer and then feed them to the Transformer encoder, which consists of several layers and allows the model to capture long-range dependencies within the video via the self-attention mechanism. The output of the encoder, $E \in \mathbb{R}^{T \times d'}$, is a sequence of frame-wise features $e_t$, which will be used in the cross-attention module of the decoder. To provide direct feedback to the encoder, we apply a linear layer to obtain frame-level predictions from $e_t$. This enables the encoder to accurately localize the action classes within the video and provides more informative features to the decoder. In practice, we use a modified version of the encoder proposed in [42], which locally restricts the self-attention mechanism and uses dilated convolutions (see the supplemental material for more details).
**Transformer Decoder.** Given a sequence of frame-wise features $E \in \mathbb{R}^{T \times d'}$, we use a Transformer decoder to auto-regressively predict the transcript, *i.e.*, the action labels of the segments. Starting with a *start-of-sequence* (sos) token, we feed the sequence of segments $S \in \mathbb{R}^{N \times d'}$ – embedded using learnable class tokens and positional encoding – up until segment $i$ to the decoder. Via the cross-attention between the current sequence of segments and the frame-wise features, the decoder determines the next segment $i+1$ in the video. In principle, the decoder could predict the segment durations as well (Fig. 1 (c)); however, in practice we found that the decoder's duration prediction suffers from low localization performance, see Table 4. While it is sufficient to pick out a single frame or a few frames in the cross-attention mechanism for predicting the correct action class of a segment, duration prediction is more difficult since it requires assigning frames to a segment and counting them. Since the number of segments is much smaller than the number of frames, the cross-attention mechanism tends to assign only a subset of the frames to the correct segment. To address this issue, we propose a separate decoder module, which fuses the discriminative decoder features with the highly localized encoder features to obtain a more accurate duration prediction, as described in Section 3.3.
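The auto-regressive loop described above can be sketched abstractly: at each step, a step function (standing in for the decoder plus its cross-attention to $E$) scores the next segment class given the prefix predicted so far. This is a minimal greedy-decoding sketch under assumed names (`step_fn`, `sos`, `eos` are illustrative, not UVAST's API):

```python
import numpy as np

def greedy_decode(step_fn, sos, max_segments, eos):
    """Greedy auto-regressive transcript decoding: repeatedly ask the
    model for next-segment class probabilities given the prefix, stop
    at the end-of-sequence class or a maximum length."""
    transcript = [sos]
    for _ in range(max_segments):
        probs = step_fn(transcript)
        nxt = int(np.argmax(probs))
        if nxt == eos:
            break
        transcript.append(nxt)
    return transcript[1:]  # drop the sos token

# Toy check: a scripted step function that emits the transcript [2, 3, 1].
target = [2, 3, 1]
def scripted_step(prefix):
    probs = np.zeros(10)
    k = len(prefix) - 1
    probs[target[k] if k < len(target) else 9] = 1.0
    return probs
```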
Although our ultimate goal is segment-level predictions, we provide feedback to both the encoder and decoder model to make the best use of the labels. To that end, we apply a frame-wise cross-entropy loss on the frame-level predictions of the encoder:

$$\mathcal{L}_{\text{frame}} = -\frac{1}{T} \sum_{t=1}^{T} \log(y_{t,\hat{c}}), \tag{1}$$

where $y_{t,c}$ denotes the predicted probability of label $c$ at time $t$, and $\hat{c}$ denotes the ground-truth label of frame $t$. Analogously, we apply a segment-wise cross-entropy loss on the segment-level predictions of the decoder:

$$\mathcal{L}_{\text{segment}} = -\frac{1}{N} \sum_{i=1}^{N} \log(a_{i,\hat{c}}), \tag{2}$$

where $a_{i,c}$ denotes the predicted probability of label $c$ at segment $i$, and $\hat{c}$ denotes the ground-truth label of segment $i$.
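Both losses share the same form: the mean negative log-probability assigned to the ground-truth class, over frames for Eq. (1) and over segments for Eq. (2). A minimal NumPy sketch (illustrative, not the training code):

```python
import numpy as np

def cross_entropy(probs, gt):
    """Mean negative log-probability of the ground-truth classes.

    probs: (L, C) array of predicted class probabilities (rows sum to 1).
    gt:    (L,)  array of ground-truth class indices.
    With L = T this is Eq. (1); with L = N it is Eq. (2).
    """
    return -np.mean(np.log(probs[np.arange(len(gt)), gt]))

# Perfectly confident, correct predictions give zero loss.
probs = np.eye(3)[[0, 2, 1]]  # one-hot predictions for three frames
assert np.isclose(cross_entropy(probs, np.array([0, 2, 1])), 0.0)
```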
**Regularization via Grouping.** To regularize the encoder and decoder predictions, we additionally apply *group-wise* cross-entropy losses. To that end, we group the frames and segments by the ground-truth labels $L = \{c \in \mathcal{C} \mid c \in \{\hat{a}_1, \ldots, \hat{a}_N\}\}$ that occur in the video: $T_c = \{t \in \{1, \ldots, T\} \mid \hat{y}_t = c\}$ are the indices of frames with class $c$, and $N_c = \{i \in \{1, \ldots, N\} \mid \hat{a}_i = c\}$ the indices of segments with class $c$. We apply a cross-entropy loss to the averaged prediction of each group:

$$\mathcal{L}_{\text{g-frame}} = -\frac{1}{|L|} \sum_{c \in L} \log \left( \frac{1}{|T_c|} \sum_{t \in T_c} y_{t,c} \right) \tag{3}$$

$$\mathcal{L}_{g\text{-segment}} = -\frac{1}{|L|} \sum_{c \in L} \log \left( \frac{1}{|N_c|} \sum_{i \in N_c} a_{i,c} \right) \tag{4}$$
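Eqs. (3) and (4) differ from the plain cross-entropy only in that the predicted probability of each class is first averaged over all positions carrying that ground-truth class, and the log is taken of the average. A sketch of that computation (illustrative helper, not the training code):

```python
import numpy as np

def groupwise_cross_entropy(probs, gt):
    """Group-wise CE of Eqs. (3)/(4): average the predicted probability
    of class c over all positions whose ground truth is c, then take the
    mean negative log over the classes present in the sequence.

    probs: (L, C) predicted class probabilities.
    gt:    (L,)  ground-truth class indices.
    """
    losses = []
    for c in np.unique(gt):
        group = gt == c                    # indices T_c (or N_c)
        avg_prob = probs[group, c].mean()  # mean predicted prob of c
        losses.append(-np.log(avg_prob))
    return float(np.mean(losses))

# Perfect one-hot predictions for labels [0, 0, 1] give zero loss.
probs = np.eye(3)[[0, 0, 1]]
```

Note that averaging before the log makes the loss forgiving of individual mispredicted positions, as long as each class is predicted well on average over its group.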
To allow further interaction between the encoder and decoder features, we utilize a loss based on a cross-attention mechanism between them. Let us assume that the $T$ video frames and the corresponding $N$ actions are represented by the encoder and decoder features $E \in \mathbb{R}^{T \times d'}$ and $D \in \mathbb{R}^{N \times d'}$, respectively. The cross-attention loss involves obtaining a cross-attention matrix $M = \mathtt{softmax}(\frac{ED^T}{\tau' \sqrt{d'}})$, where $\tau'$ is a stability temperature; each row of $M$ is a probability vector that assigns an encoder feature (frame) to the decoder features (actions). We then use $M$ in the following cross-entropy loss function:

$$\mathcal{L}_{CA}(M) = -\frac{1}{T} \sum_{t} \log(M_{t,\hat{n}}), \tag{5}$$

where $\hat{n}$ is the ground-truth segment index to which frame $t$ belongs. We use this loss in our transcript decoder (main decoder) and in our alignment decoder, as described in the following.
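Concretely, Eq. (5) scores each row of the softmaxed frame-to-segment attention matrix against the ground-truth segment index of that frame. A minimal NumPy sketch (illustrative, with $\tau'$ exposed as `tau`):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_loss(E, D, seg_of_frame, tau=1.0):
    """Eq. (5): cross-entropy on the frame-to-segment attention matrix M.

    E: (T, d) encoder (frame) features;  D: (N, d) decoder (segment) features.
    seg_of_frame: (T,) ground-truth segment index n-hat for each frame.
    """
    d = E.shape[1]
    M = softmax(E @ D.T / (tau * np.sqrt(d)), axis=1)  # (T, N), rows sum to 1
    return -np.mean(np.log(M[np.arange(len(seg_of_frame)), seg_of_frame]))

# Identity-like features: frame t should attend to segment t.
good = cross_attention_loss(np.eye(2), np.eye(2), np.array([0, 1]))
bad = cross_attention_loss(np.eye(2), np.eye(2), np.array([1, 0]))
```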
**Cross-Attention Loss for the Transcript Decoder.** The cross-attention loss, when applied to the transcript decoder, provides more intermediate feedback to the decoder about the action locations in the input sequence, see Fig. 5. We found this loss function especially effective on smaller datasets such as 50Salads (see Table 5). Our main objective for the encoder and the transcript decoder is:

$$\mathcal{L} = \mathcal{L}_{\text{frame}} + \mathcal{L}_{\text{segment}} + \mathcal{L}_{\text{g-frame}} + \mathcal{L}_{\text{g-segment}} + \mathcal{L}_{\text{CA}}(M). \tag{6}$$
**Cross-Attention Loss for the Alignment Decoder.** While the transcript decoder generates the sequence of actions in a video, it does not predict the duration of each action. Although it is possible to predict the durations as well, as illustrated in Fig. 1 (c), the transcript decoder still struggles to localize actions through direct duration prediction, as shown in Table 4. One reason for this could be the high mismatch between input and output sequence lengths and the relatively small number of training videos. While picking up a single segment frame is sufficient to predict the action class, duration prediction effectively requires counting the number of frames in the segment, resulting in a more challenging task. Therefore, we design an alternative alignment decoder for predicting segment durations implicitly. The full flow of the complete model is shown in Fig. 2.

![figure 2](main_figure.png)

Fig. 2. Overview of our complete model. Our complete model consists of a Transformer encoder and an auto-regressive Transformer decoder, which we train for frame-level and segment-level predictions, respectively. For duration prediction we use an alignment decoder – followed by cross-attention – on top of the encoder and decoder features to compute a frame-to-segment assignment, which is used to compute the durations of the segments.

The high Edit score of our decoder indicates that it has already learned discriminative features of the actions. The motivation for our alignment decoder is to align the encoder features to the highly discriminative features of the decoder, which can then be used for the duration prediction (see Fig. 1 (e)). In essence, our proposed alignment decoder is a one-to-many mapping from the decoder features to the encoder features. The alignment decoder takes the encoder and decoder features $E \in \mathbb{R}^{T \times d'}$ and $D \in \mathbb{R}^{N \times d'}$ with positional encoding as input and generates the aligned features $A \in \mathbb{R}^{T \times d'}$. Since the alignment decoder aims to explore the dependencies between the encoder features and the decoder features, we employ a cross-attention mechanism in its architecture, similar to the transcript decoder. To this end, we compute an assignment matrix $\overline{M} \in \mathbb{R}^{T \times N}$ via cross-attention between the alignment decoder features ($A$) and the positionally encoded features of the transcript decoder ($D$) by $\overline{M} = \text{softmax}(\frac{AD^T}{\tau})$ with a small value of $\tau$. Note that with a small value of $\tau$, each row of $\overline{M}$ will be close to a one-hot encoding indicating the segment index the frame is assigned to. The positional encoding for $D$ resolves ambiguities if the same action occurs at several locations in the video.

In contrast to the decoder from the previous section, the alignment decoder is not auto-regressive, since the full sequences of frame-wise and segment-wise features are already available from the previous encoder and decoder. During inference, we compute the segment durations by taking the sum over the assignments:

$$u_i = \sum_{t} \overline{M}_{t,i},\tag{7}$$
where $i \in \{1, \ldots, N\}$ and $\overline{M}_{t,i}$ denotes whether frame $t$ is assigned to segment $i$. We found that training the alignment decoder using only the cross-attention loss $\mathcal{L}_{\text{CA}}(\overline{M})$ in a separate stage, on top of the frozen encoder and decoder features, results in a more robust model that suffers less from overfitting.
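Since each row of $\overline{M}$ is (close to) a one-hot assignment of a frame to a segment, summing the columns as in Eq. (7) counts the frames per segment. A one-line sketch with a toy assignment matrix:

```python
import numpy as np

def segment_durations(M_bar):
    """Eq. (7): implicit duration prediction from a (T, N) frame-to-segment
    assignment matrix whose rows are (approximately) one-hot."""
    return M_bar.sum(axis=0)

# Five frames assigned to two segments: the first three frames -> segment 0.
M_bar = np.array([[1, 0], [1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)
```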
In this section, we show how our proposed framework can be extended to the timestamp-supervised setting. In this setting, we are given a single annotated frame for each segment in the video, *i.e.,* frame annotations are reduced dramatically, and ground-truth labels are no longer available for all frames. As discussed extensively above, our proposed framework relies on the frame-level supervisory signal on top of the encoder. However, it turns out that even a noisy frame-level annotation provides a solid signal to the encoder. To obtain such frame-level annotations, we propose a constrained k-medoids algorithm that propagates the timestamp supervision to all frames.
A typical k-medoids algorithm starts with random data points as the cluster centers. It iteratively updates the cluster centers, chosen from the data points, and the assignments based on their similarity to the cluster centers. Having access to timestamp supervision, we can use the annotated frames as initialization and cluster the input features. However, in a standard k-medoids algorithm, temporal continuity of the clusters is not guaranteed. We call our method constrained k-medoids because we force the clusters to be temporally continuous. This can be achieved simply by modifying the assignment step of the k-medoids algorithm. Instead of assigning pseudo-labels to each frame independently, we find the temporal boundaries of each cluster. In the assignment step, we update the boundaries such that the accumulated distance of each cluster to its current center is minimized. Alg. 1 summarizes the steps of our clustering method. In principle, we can apply k-medoids using the frame-wise input features $x_t$, the encoder features $e_t$, or a combination of both. In practice, we found that using the input features alone gives surprisingly accurate segmentations; see Table 3 or the supplemental material for more analyses.

![figure 3](main_figure.png)

Fig. 3. Constrained k-medoids. Given frame-wise features and timestamps, k-medoids generates a pseudo-segmentation that guides the encoder during training instead of the ground-truth frame-level labels of the fully supervised setup.
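The boundary-based assignment step can be sketched as follows. This is our reading of the idea, not the authors' exact Alg. 1: boundaries between consecutive timestamp-initialized centers are placed to minimize the accumulated distance of each contiguous cluster to its center, then medoids are recomputed within each cluster.

```python
import numpy as np

def constrained_kmedoids(X, timestamps, iters=10):
    """Constrained k-medoids sketch: clusters are kept temporally
    continuous by updating segment *boundaries* instead of free
    per-frame assignments.

    X: (T, d) frame features; timestamps: one annotated frame index
    per segment, in temporal order.
    """
    T = len(X)
    centers = list(timestamps)
    K = len(centers)
    for _ in range(iters):
        # Assignment step: between consecutive centers, choose the
        # boundary b that minimizes the accumulated distance of the
        # frames on each side to their respective center.
        bounds = [0]
        for k in range(K - 1):
            d_prev = np.linalg.norm(X - X[centers[k]], axis=1)
            d_next = np.linalg.norm(X - X[centers[k + 1]], axis=1)
            lo, hi = centers[k] + 1, centers[k + 1] + 1
            costs = [d_prev[lo:b].sum() + d_next[b:hi].sum() for b in range(lo, hi)]
            bounds.append(lo + int(np.argmin(costs)))
        bounds.append(T)
        # Update step: the new medoid of a cluster is the frame that
        # minimizes the total distance to the cluster's other frames.
        for k in range(K):
            seg = np.arange(bounds[k], bounds[k + 1])
            D = np.linalg.norm(X[seg][:, None] - X[seg][None, :], axis=-1)
            centers[k] = int(seg[int(np.argmin(D.sum(axis=1)))])
    labels = np.zeros(T, dtype=int)
    for k in range(K):
        labels[bounds[k]:bounds[k + 1]] = k
    return labels
```

By construction each cluster is a contiguous run of frames, so the output is directly usable as a pseudo-segmentation.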
2210.13611/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2022-10-15T23:28:49.249Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36" etag="t_0FDbrHZpjfnbM9Ps3g" version="20.4.1" type="device"><diagram id="D_ixV4tDTo0d2_k71xDH" name="Page-1">7Z1dc5s4FIZ/jS+jQRIguGyc9Ls73c1M0l51KBCbhlherCTO/voVtrARIjZpCRGCDJMxAgvzngfpcI4QEzy9Xb/LguX8C43idIKsaD3BZxOEfNvl//OCx22Bi+C2YJYl0baoVHCR/BeLQkuU3iVRvJJ2ZJSmLFnKhSFdLOKQSWVBltEHebdrmspHXQazWCm4CINULb1KIjbflnqOtS9/HyezeXFkaIktt0Gxs6hiNQ8i+rAt2uyDzyd4mlHKtp9u19M4zbUrdNlW9PaJrbsflsUL1uQLPy5OP0QfZx/D1dWnd5++O28v/O8naFvLfZDeiRMWP5Y9Fgpk9G4RxXklcIJPH+YJiy+WQZhvfeAm52VzdpuKzddJmk5pSrPNd3EUxN51yMtXLKM3cWmLG3rxz2u+RT0NcWb3ccbidalInNa7mN7GLHvku4itto23XxGMYSyqeNhbDNqibF6yFnYEKAKS2a7qvY78g5CyXtaFh7/G1ozhJP1gkeD+4/nt1YlbI6udL86UnxNjbEJOg5Al9wFL6GJz/ozF2WJC+L7W5hvWxDmFFsfJgvzPghPnTNRRNVBeo2wFWe0FXcQV04iiIE1mC74acu1jXn6aa55w+N+IDbdJFOWHqTX7Hgwrr54umLh8IWnHrpg4Fbs6il1dq8as1p+b9TK6/PvrB+vLZ396eXaTZp9/JckJrLNrxRrxInqTtzu5rGmwWiWhbBtZNS5O9vitvPI9XwFOsXq2Lm88exRrT6rLgmwWs0PnIK6VOJIaPtUIJZGdGpGLsixOOcT3cnNZp7w4wlea8J+8szEs6hE2hj6Rq1jRuyyMxbfKzVu1Ig9WKvJA5fLeaqNUtUFhd+J/QAcxgA5bazpQ4U48mw5iyxV5VqdsuH7rbKwT9k1syT+XyOBrezDylaNcbGU8xIWjFReO78nmRH71Ym9KhltFrPpr2iOj1lWANWDk3fy65AhMg3Q5D4ru303z7v5nJtHj/nuXO5KbbvhktemH3/AdEFyuN4YvtvNPM7ZDQWcvAsF2vAilX+BXCenMj6i1uf2EzbmlI5r7hmvZEZz+jFnQF++vLbsRX7nEfazaze7Qbs4TdhucbVxc6U4tBDy3s4uqvotSjAMVg/BqkuXqKdlKdgpWy2004TpZ50K2IRoiBDies1tcuV0qhCnph0mNfuSF9FNDAVaL+lUCA9PNXye6En4fQKCNPPFfxbRTmXG/MeWbQUlMokZaOlVT7UmNgBYRB0jMqg5LpzKrHV9PZMYeAgjb+0Xut1690VUDSoYIq1mrq4ZmNGt1j+ipWbPr9V1OzdrXuqs+msUXYpVmbE5ndBGk5/vSipe/3+czpUsh7a+YsUfh+wd3jMrCtx5ePB5GEt7P0TikcEaPhpsax5H+7NZCddr0wl0JfmvW/MK6iIdeAsopRM3aW1gXedBJP4e4APrebkGymnl4oMyjGsfpVk61/1LlfPFovMbNL2zY/HYU7bc84Hr2fpHgshEE5SvVlqtvmgdwXAyg5+8WmWDsWrUOQ0fpo4LPEdjDUZURWF2AbeIxDRlYvfLsI7CoiYf64sBu00PFQMNXukezR4SfQhgRHxDXKS3SUZBHav3crhhucpdgYqPblNgBegnHiMU+QBKy3RLb/oBKs4htGgcziFi93YT2x3iaBewYOdAM2IGGupry6o286sWrFgOl9eWVjLxqxWsxgGTk9e
AAl5FXXXite95g5LU60m3kVRde0TB5bfrA3sirZrwONfE1+gP95FWLvJe+vI73W5rx2n6O6zfZ6/gB7Ia86hXPwhYEnJD9Ik/owQGSR1kaSexAc1xNR8LoRezYwuKBpria8jp6BJrxOtAMV1NexzsuzXgdaIarKa9jREsvXu2BZrhQQ14H6L9ibANErP2i1TDYYqbPEdjRge1FA4tGXkcHtke8DjTF1ZTX0YHVi9diikCDef3n/Ozy19WPL5f376/D+V8PUTi/OfGthsDqNrlrPveCg3eLDBPyue95AKbG8766Dnj6uZjXRtb8J7lqkfV8I5G1OWrqZCLPbmQdBJDn7Rf5ILDb2c5d89Na9Yh6JiIKfQKsmgmDes6o+amsekaJkYzayhsdjOjsYZOA1WoeLPOP4V2WPp5mQXiTG/bYVE/yu3syyrbvm8FnuXPY0kR7HijGlBd2cgmwRUde4gbVcIMhAtU24XdmeTp4FbzQLIWVyTXfbv7aEdX2CHcYSrMXyvoiHyBV35eaQetgL2ictoQAzy21Beobh7rVWU0W9mSC2MM68yYZ2Pg15tc7eKus74SmR+QkDrCs0i3GK8tpZgam7E4dxEgTd8qGPvD9Yo5x4lWydD4GjsrM810r7pX5+9Cf7AvkrlUNmO17VvX2QMPksOg1Rg414dDMjMlxDvUaGzFyaGYm5DiHeg15GDk0M71hFofQJqB4B5tR8JmZuDCrMzYWPjMzEmbdkRgLX/tPyujw7KxZYRlj4TPzsRez4EO2DXw1LG8Sh9Ay83kWs7rgQYA40CxJr25EBgEiGiiIfQrHDALEgeZJeuUjQt8FuGaIzHNJtLEPXG//EGnlBYxe/gLG8qi8jlEcaqqkT15iWyjq3SgONVnSJzdxGCQONXPSJz+x3yTy1YxSVt49C5bzLzSK8z3+Bw==</diagram></mxfile>
2210.13611/main_diagram/main_diagram.pdf
ADDED
Binary file (22.6 kB). View file
2210.13611/paper_text/intro_method.md
ADDED
@@ -0,0 +1,32 @@
# Introduction

Deep reinforcement learning (RL) utilizes neural networks to represent the policy and trains these networks to optimize an objective, typically the expected value of time-discounted future rewards. Deep RL algorithms have been successfully applied to diverse applications including robotics, challenging games, and an increasing number of real-world decision-and-control problems [François-Lavet et al., 2018]. For a given choice of task, RL algorithm, and policy network configuration, performance is commonly characterised via learning curves, which provide insight into the learning efficiency and the final performance. However, little has been done to understand the detailed structure of the state-to-action mappings induced by the control policies and how these evolve over time.

In this work, we aim to further understand deep feed-forward neural network policies that use rectified linear activation functions (rectified linear units, or ReLUs). ReLUs [Nair and Hinton, 2010] are among the most popular choices of activation functions due to their practical successes [Montúfar et al., 2014]. For RL, these activations induce a piecewise linear mapping from states to actions, where the input space, i.e., the state space, is divided into distinct *linear regions*: within each region, the actions are a linear function of the states. Strictly speaking, the mapping is affine within each region, due to the constant-valued bias terms; for simplicity and convenience, it is more commonly described as linear. Figure 1 provides a schematic illustration of these regions, along with a policy trajectory.

![figure 1](main_figure.png)

Figure 1: Schematic illustration of a trajectory traversing the piecewise linear regions in the policy state space. S<sup>0</sup> and S<sup>k</sup> indicate the initial and final states of the trajectory.

The number of distinct regions into which the input space is divided is a natural measure of network expressivity. Learned functions with many linear regions have the capacity to build complex and flexible decision boundaries. Thus, the problem of counting the number of linear regions has been extensively studied in recent literature [Montúfar et al., 2014, Raghu et al., 2017, Hanin and Rolnick, 2019a, Serra et al., 2018]. While the maximum number of regions is exponential with respect to network depth [Montúfar et al., 2014], recent work has demonstrated that the number of regions is instead typically proportional to the number of neurons [Hanin and Rolnick, 2019a].
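A standard way to distinguish linear regions in practice is via the network's ReLU activation pattern: two inputs lie in the same region exactly when every ReLU is on/off identically for both. The sketch below counts region transitions along a densely sampled trajectory this way; it is a minimal illustration, not necessarily the exact counting procedure used in the paper.

```python
import numpy as np

def activation_pattern(x, weights, biases):
    """Binary ReLU on/off pattern of a feed-forward net at input x;
    two inputs share a linear region iff their patterns match."""
    pattern, h = [], x
    for W, b in zip(weights, biases):
        z = W @ h + b
        pattern.append(tuple(z > 0))
        h = np.maximum(z, 0)
    return tuple(pattern)

def count_transitions(points, weights, biases):
    """Count region transitions along a sampled trajectory by counting
    activation-pattern changes between consecutive points (a lower
    bound on regions crossed if the sampling is too coarse)."""
    patterns = [activation_pattern(p, weights, biases) for p in points]
    return sum(a != b for a, b in zip(patterns, patterns[1:]))

# Toy 1-D "policy" with two hidden units: region boundaries (kinks)
# at x = -0.5 and x = 0.5, so a sweep over [-1, 1] crosses 2 of them.
W, b = np.array([[1.0], [-1.0]]), np.array([0.5, 0.5])
points = [np.array([x]) for x in np.linspace(-1.0, 1.0, 201)]
```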
For RL, we are interested in the local granularity (density) of linear regions along trajectories arising from the policy. Fine-grained regions afford fine-grained control, and thus we may hypothesize that region density *increases* in areas frequently visited by the policy, in order to afford better control. Recent work in supervised learning of image classification is inconsistent with regard to findings about the region density seen in the vicinity of data points, with some reporting a decrease [Novak et al., 2018], argued to provide better generalization and robustness to perturbation, and others reporting no such effect [Hanin and Rolnick, 2019a]. For the RL setting, we note that counting regions visited along an episode trajectory arguably provides a meaningful and task-grounded measurement, in contrast to the line segments and ellipses passing through points randomly sampled from training data that have been used in the prior works mentioned above. We further note that piecewise-affine control strategies are commonly designed into control systems, e.g., via gain scheduling. Understanding how these regions are formed and distributed by deep RL thus helps establish bridges with these existing methods.

To the best of our knowledge, our work is the first to investigate the structure and evolution of the linear regions of ReLU-based deep RL policies in detail. We seek to answer several basic empirical questions:

- Q1 Do findings for network expressivity, originally developed in supervised learning settings, apply to RL policies? How are the region densities affected by the policy network configuration? Do deeper policy networks result in finer-grained regions and hence increased expressivity?
- Q2 How do the linear regions of a policy evolve during training? Do we see a significantly greater density of regions emerge along the areas of the state space frequently visited by the episodic trajectories, thereby allowing for finer-grained control? Do random-action trajectories see different densities?

The key results can be summarized as follows, for policies trained using proximal policy optimization (PPO) [Schulman et al., 2017] and evaluated on four different continuous control tasks. Q1: There is a general alignment with recent theoretical and empirical results for supervised learning settings. Region density is principally proportional to the number of neurons, with a small additional increase in density observed for deeper networks. Q2: Only a moderate increase of density is observed during training, as measured along fixed final-policy trajectories. Therefore, the complexity of a final learned policy does *not* come principally from increased density on-and-around the optimal trajectories, which is a potentially surprising result. In contrast, as measured along the evolving current-policy trajectories, a decrease in region density is observed during training. Across all settings, we also observe that the region-transition count, as observed during fixed time-duration episodes, grows during training before converging to a plateau. However, the trajectory length, as measured in the input space, also grows towards a plateau, although not at the same rate, and this leads to variations in the mean region densities observed along current trajectories during training.
# Method

To test how our findings generalize to RL algorithms other than PPO, we repeat our experiments by training deep RL policies with the soft actor-critic (SAC) algorithm [Haarnoja et al., 2018]. The experimental setup for SAC and its results are documented in detail in Appendix F. Region densities observed early during training are quite different for SAC than for PPO, exhibiting high observed densities which then rapidly drop, before rising again as in the PPO case. We hypothesize this is due to a combination of (i) the entropy bonus for SAC, which is then annealed away, and (ii) the different network initialization used for the baseline SAC and PPO implementations. Despite this initial difference, the evolution pattern of the densities is consistent with the PPO results later in the course of training. Another observation is that, similar to the PPO results, densities over fixed, current, and random-action trajectories are within the same range. This supports our surprising finding that the complexity of RL policies is not principally captured by increased density on-and-around the optimal trajectories.
|
| 27 |
+
|
| 28 |
+
We examine whether a non-cyclic task, such as LunarLander, exhibits different linear-region evolution behavior compared to our four default tasks, which are naturally biased towards cyclic locomotion. From the results which are available in Appendix H, we can see that the transition counts and densities along the fixed trajectory are similar in structure to those of the cyclic locomotion tasks, showing that a gradual increase in density appears to be a general property for the PPO setting, even for non-cyclic tasks.
To study the difference between the value space and policy space, we repeat our experiments on the value networks trained on HalfCheetah with PPO. The full set of results for this experiment is available in Appendix I. Our results show that linear regions in the value function largely evolve in a very similar way to those in the policy.
Deep RL provides us with a grounded setting to explore the impact of non-IID data on decision regions. To test whether the non-IID setting makes a difference, we repeat our experiments by training networks with behavior cloning (BC), using expert data from policies previously trained with PPO on HalfCheetah. The experimental setup for BC and the full results are documented in detail in Appendix J. These results show that the evolution of region densities differs between BC and PPO: the general increase in densities is not observed for BC. In addition, the number of transitions, the density values and the lengths of current trajectories are significantly smaller for BC-trained policies. We have two hypotheses for these differences: (i) The policy's learning history plays a significant role in the evolution of regions, as early trajectories inform the template cell divisions that later evolve with further training. Since BC policies observe the entire state space from the moment training starts, they may be able to better divide their state space and avoid adding too much granularity to certain areas. (ii) Network initialization largely affects the resulting linear regions and their evolution during training.
2210.13918/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2022-06-22T12:26:48.035Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36" etag="dPmGoE5CAxyVaBOTnNrN" version="20.0.2" type="device"><diagram id="WegxNLj29UYG1X8Xvigs" name="Page-1">7Vlbb9s6DP41BtqHHvgS28ljk2znAMWGDQXW7VG2GUeobPnIym2/fpQlx3ZitxmSdluxPqQmxZvIj5SjWN4s2/4rSLH8wBNglmsnW8ubW647Dnz8VIydZoy8kWakgiaa5TSMe/odDNM23BVNoOwISs6ZpEWXGfM8h1h2eEQIvumKLTjrei1IajzaDeM+JgyOxB5oIpdmW27Y8P8Dmi5rz04w0SsZqYWNiXJJEr5p+fLeWd5McC71U7adAVO5q/OiA3o/sLoPTEAuT1F4iO8+r13vy/Z+dsei96M7WrAbY2VN2Mps2HIDhvamC45mMWq5M6kI/l/xeuGmrAp1iwJOWGCtp806PqXq/4OgElQVkEGyAjl5VBZaoPJQFiTv9RBzxoW2LtLoKsSMzpQnJzQP49F1r0vLn5aYDZqplPjzIceYI+1bawlYU9ioSCO+kibil4rwtJAOimDczdv2vLEfhJNRbxIKwZNVbFJQ29e2jIwKvX9FY3W/Y1fCVvGXMmPIcFRWpOCPMDMx5TwHFSll7IBFGE1zJBkslIU1CEmxsW4NO6NJopxMN0sEyn1BYuVxg1NE1YSv8gQUcu19UMoAbAfR7+x7CmcR8Ayk2KFIrRCY/jZzyJkYetN0teeZYbVsdXQ9F4gZJOnedNNr+GDa7SdaLzi/9dyB1qsNRSfALjpUOhcQCM0gHkO0OEAF8hMC40XcgkaM1QNxHjhUsObcwPZ7GbC43hFYHLcHLP5LgSUcBMvgjIpI/JhWqbppjSuaU0kJq/Jkq1LemFqoRdOpw4A6H5n+7Gp9hZ/qeBSZFU4/8RIjWoMVzq9Rs7JqtwVSzhNcRNb1IEQHhujFR9ml8eoautUjdvV3IRyH7j/+AZLHx0i27Vcce+M3i+SPkJInkRyRv0C+GJBHvxrIk7d7fk+C0CPB8fkNTuJD+Puf3+7k+fPb7QPLi53f9dfZNzj2ppw/lsMzD1d/2dB7Dci6l4GsZ59wUIevOd+c4buBPx2yd1TGS8gHQfu4X0duhbtQha7oriApCkZJHsNfiJ8AcS886RB/zS/huLdjkB9UBfLkVt0kqswyUpY07tZoOHE13fdWBFsqvyodzImmvhl76nm+NeYqYlcTOe5ZK7l+TX9rLzZ6FVUr6j1BUt91DtUO981XIoYnMmauLSQRKcinMtuPhVal/ScOYAFMv0t37nN7qm88fOK0mhL1C0DYfQHAvHdN6G0arfal6XOGwgNDOg9Hhiow7rd9Bj69PwmfF4TZ5E3AZ+QP3Do+Ax+sJ9m1xAolUP4cwJBsflzQ4s0vNN67Hw==</diagram></mxfile>
2210.13918/main_diagram/main_diagram.pdf
ADDED

Binary file (29.7 kB).

2210.13918/paper_text/intro_method.md
ADDED
@@ -0,0 +1,95 @@
# Introduction
Rapid advancements in the field of deep learning and natural language processing (NLP) have enabled companies, public institutions and researchers to extract information and gain knowledge from large-scale data generated by individuals. In many cases, it is desirable to share such data with third parties, for example when analyses are performed by external consultants or in order to provide high-quality benchmarks for the research community. This, however, entails a variety of privacy risks that cannot merely be solved by pseudonymization: numerous deanonymization attacks enable the re-identification of individuals from tabular data such as movie ratings [@netflix-narayanan], geolocation data [@deanonymization-social-lee] and notably also text [@koppel2009computational; @shrestha-etal-2017-convolutional; @fabien-etal-2020-bertaa]. It is therefore highly desirable to develop anonymization mechanisms enabling secure data sharing, ideally with mathematical privacy guarantees as granted by differential privacy (DP) [@dwork2014algorithmic].
<figure id="fig:overview" data-latex-placement="t">
<embed src="images/syntext.pdf" style="width:100.0%" />
<figcaption>Main idea of our paper: To share potentially sensitive datasets with third parties, we train a language model (LM) on the sensitive data in a differentially private manner and consequently prompt the LM to generate synthetic samples with privacy guarantees.</figcaption>
</figure>
Existing approaches anonymize every text sample individually, either by obtaining differentially private vector representations [@syntf-weggenmann; @fernandes2019generalised] or by using sequence-to-sequence models that rewrite a given sample to eliminate user-revealing information [@a4nt-shetty; @feyisetan-leveraging; @feyisetan-textual; @dp-vae-weggenmann], thereby following local differential privacy. As pointed out by @mattern-limits, local DP requires a very high degree of noise, which often leads to incoherent language and little semantic overlap with the original. The strict requirements of local DP are, however, not necessary if we assume that an entity aiming to share data already has access to the full collection of user-written texts and only wants to release an anonymized version of it.
In this paper, inspired by recent advances demonstrating the feasibility of training large language models (LLMs) in a differentially private manner [@li2021large], we propose a globally differentially private data release mechanism that relies on generating a \"twin\" dataset of the original, sensitive user data with large language models. As depicted in Figure [1](#fig:overview){reference-type="ref" reference="fig:overview"}, we train GPT-2 [@radford2019language] to generate the texts of our original dataset based on prompts inferred from each sample's individual attributes, such as sentiment or topic. For fine-tuning, we use a differentially private optimization algorithm in order to protect the content of our training data. Subsequently, we sample from the trained model to generate a large number of synthetic, anonymous texts, resulting in a verifiably private \"twin\" dataset. We carefully evaluate our proposed method on popular NLP datasets such as IMDb movie reviews and Amazon product reviews. We find that even after learning with strong privacy guarantees such as $\epsilon = 3$ or $\epsilon = 8$ from as few as 25 or 50 training samples, our generated data is of high quality, and classifiers trained on it achieve accuracies only $\sim$3% lower than those trained on the full original dataset containing thousands of samples. Notably, we also find that transformer-based classification models trained on private synthetic data outperform models trained on real data with differentially private optimization. Finally, we show that the differentially private fine-tuning procedure effectively minimizes the risk of data leakage from language models previously discovered by @lm-extractdata.
Differential privacy (DP) is a formal notion of privacy that is currently considered the state-of-the-art for quantifying and limiting information disclosure about individuals. It has been introduced by @dwork2006calibrating under the name *$\epsilon$-indistinguishability* with the goal of giving semantic privacy by quantifying the risk of an individual that results from participation in data collection.
In the original, *central model* of DP, we consider *adjacent* datasets that differ by at most one record (i.e., one individual's data). A differentially private query on both databases should yield matching results with similar probabilities, i.e., answers that are probabilistically *indistinguishable*. This is achieved via random mechanisms that return noisy query results, thus masking the impact of each individual.
::: {#def:diffpriv .definition}
**Definition 1**. *Let $\epsilon > 0$ be a privacy parameter, and $0 \leq \delta \leq 1$. A randomized mechanism $\mathcal{M}$ on $\mathcal{X}$ fulfills *$(\epsilon,\delta)$-DP* if for any pair of adjacent inputs $\boldsymbol{x},\boldsymbol{x}' \in \mathcal{X}$, and all sets of possible outputs $Z \subset \mathop{\mathrm{supp}}\mathcal{M}$, $$\begin{align}
\mathrm{Pr}\left[ \mathcal{M}(\boldsymbol{x}) \in Z \right] \leq e^\epsilon \cdot \mathrm{Pr}\left[ \mathcal{M}(\boldsymbol{x}') \in Z \right] + \delta~.
\end{align}$$*
:::
In the *local model* [@duchi2013local], noise is added locally at the data source, before the data is collected and stored in a central database. A basic example is randomized response [@warner1965randomized], where each survey participant either provides a truthful or a random answer depending on the flip of an (unbiased) coin. The local model makes the strong assumption that any two inputs are considered adjacent, which often makes it difficult to achieve a satisfying privacy-utility trade-off.
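Randomized response fits in a few lines: with an unbiased coin, a truthful answer is reported with probability 3/4, which yields $\epsilon = \ln 3$ local DP for a binary attribute. A sketch (illustrative, not from the paper):

```python
import math
import random

def randomized_response(truth: bool, rng: random.Random) -> bool:
    """Warner's randomized response with an unbiased coin: heads -> answer
    truthfully; tails -> flip a second coin and report its outcome.
    Each reported answer is then epsilon = ln(3) locally DP."""
    if rng.random() < 0.5:       # first coin: heads
        return truth
    return rng.random() < 0.5    # second coin: random answer

def epsilon_randomized_response() -> float:
    # Pr[report True | truth True] = 3/4, Pr[report True | truth False] = 1/4,
    # so the worst-case likelihood ratio is 3 and epsilon = ln 3.
    return math.log(3.0)
```

The analyst can still debias aggregates: if a fraction $q$ of reports are True, the true proportion is estimated by $2q - 1/2$.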
An important application of DP is privacy-preserving machine learning to protect the privacy of the training data. Typically, neural networks are trained by optimizing a loss function using stochastic gradient descent (SGD) or a derived method such as Adam [@kingma2014adam], which iteratively compute gradients of the loss function over batches of samples from the training dataset. As shown by @song2013stochastic [@bassily2014private; @abadi2016learning], it is possible to implement a differentially private version of SGD (DP-SGD) by clipping the gradients and applying the Gaussian mechanism [@dwork2014algorithmic]: The latter works by applying noise from an isotropic Gaussian distribution $\mathcal{N}(\mathbf{0},\sigma^2\mathbf{I})$, where the standard deviation $\sigma$ is derived based on the desired privacy parameters $\epsilon$ and $\delta$.
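A single DP-SGD step per-sample clips each gradient to L2 norm $C$ and adds Gaussian noise of scale $\sigma C$ to the clipped sum before averaging. A numpy sketch of just the update rule (the gradients and parameters are placeholders, not the actual GPT-2 fine-tuning code):

```python
import numpy as np

def dp_sgd_step(params, per_sample_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.0, rng=None):
    """One DP-SGD update: clip each per-sample gradient to L2 norm
    `clip_norm`, sum, add N(0, (noise_multiplier * clip_norm)^2 I) noise,
    average over the batch, and take a gradient step."""
    if rng is None:
        rng = np.random.default_rng(0)
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        clipped.append(g / max(1.0, norm / clip_norm))  # scale down only
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    noisy_mean = (total + noise) / len(per_sample_grads)
    return params - lr * noisy_mean
```

Clipping bounds each sample's influence (the sensitivity), which is what lets the Gaussian mechanism translate `noise_multiplier` into $(\epsilon, \delta)$ guarantees.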
To achieve good privacy-utility trade-offs, it is important to accurately track the total privacy budget spent throughout training. In the context of DP, repeated executions of the same (here: Gaussian) mechanism are referred to as *composition*. Basic [@dwork2006our] and various more refined, advanced *composition theorems* [@dwork2010boosting; @dwork2016concentrated; @bun2016concentrated] have been stated in the literature, aiming to provide tight bounds on the overall privacy budget. However, these advances still resulted in relatively loose bounds, and thus large overall privacy budgets, over the course of highly iterative algorithms such as DP-SGD. Tight worst-case bounds for composition were derived by @pmlr-v37-kairouz15; however, computing them was shown to be infeasible in general [@murtagh2016complexity].
For this reason, specific efforts have been made to find tighter bounds and accurate approximations of the overall privacy loss: A first example that provides substantially reduced upper bounds is the moments accountant [@abadi2016learning], which is closely related to Rényi DP [@mironov2017renyi], a generalization of DP based on the Rényi divergence. Gaussian DP and $f$-DP [@dong2019gaussian] provide an approximation of the total budget using the central limit theorem (CLT). Finally, @gopi2021numerical [@koskela2020computing], inspired by @sommer2019privacy, compute the exact budget numerically up to arbitrary precision by composing the *privacy loss random variable* with the fast Fourier transform.
# Method
We consider the following scenario to motivate our approach: an entity wants to implement NLP pipelines to gain insights from internal data, e.g., emails from customers. To seek advice and get support for modeling the data and building pipelines, the entity aims to share an excerpt of the internal data with a third party such as a consultant or a group of researchers. In order to do this without compromising the privacy of its customers, the aim is to synthesize a verifiably private "toy" dataset that reflects the properties of the original data without leaking private information. On such a toy dataset, a third party could research how to best solve the task at hand and train a model to perform inference on the actual internal data, without being able to access sensitive information about customers. Formally, we aim to achieve the following goal: We consider a dataset consisting of a training set $\mathcal{D}_{\mathrm{train}}$ and test set $\mathcal{D}_{\mathrm{test}}$. Given $\mathcal{D}_{\mathrm{train}}$ or a subset of it, we want to train a generative model to synthesize a dataset $\widetilde{\mathcal{D}}_{\mathrm{train}}$ that does not leak information from the original $\mathcal{D}_{\mathrm{train}}$. Furthermore, the synthesized dataset should share statistical properties with the original one so that a classification model trained on $\widetilde{\mathcal{D}}_{\mathrm{train}}$ performs as well as if it was trained on $\mathcal{D}_{\mathrm{train}}$ when making predictions about $\mathcal{D}_{\mathrm{test}}$.
To achieve this, we use the pretrained autoregressive transformer model [@NIPS2017_attention] GPT-2 [@radford2019language] with natural language prompts that enable the conditional generation of text based on desired textual attributes, such as sentiment, domain or genre, provided in the prompt. Furthermore, we introduce a new training objective that penalizes the generation of samples fitting another label, reducing the risk of wrongly labeled samples in our synthetic dataset. Finally, we fine-tune our model using a differentially private optimizer to provide privacy guarantees for our training data and to prevent information leakage from the model when subsequently sampling our synthetic dataset.
As we want to control specific textual attributes of our synthetic data, we need to train our model in a manner that allows us to generate different types of texts corresponding to the desired attributes or labels present in our dataset. We consider a text sample to correspond to a set of $M$ attributes of interest, namely $A := \{a_1, a_2, \dots, a_M\}$, where each attribute $a_j$ can take on a set of categorical values $C_j$. In the case of product reviews, $a_1$ could be the sentiment of a review that can take on the values $a_1\in C_1 = \{\mathrm{Positive}, \mathrm{Negative}\}$ and $a_2$ can be the product category, so that $a_2 \in C_2=\{\mathrm{Books}, \mathrm{Electronics}, \mathrm{DVD}, \mathrm{Kitchen}\}$. Our goal is to learn a model $p(x|a_1, ...,a_M)$ in order to controllably synthesize text samples according to our desired attributes.
A straightforward approach to realize this would be to train a separate generative model for every possible combination of attribute values. This approach is, however, highly memory-intensive, as it requires us to store the weights of a number of models that grows exponentially with the number of categorical attributes. Following recent work [@schick-schutze-2021-shot], we therefore train a single language model to conditionally generate texts based on task instructions. Beyond reducing our memory needs, this approach allows us to leverage our model's pretraining knowledge and to perform text generation with only very few training samples [@schick-schutze-2021-shot]. Our instructions $\bm{i}(a_1, .., a_M)$ are formed using a template with placeholders that are filled out with verbalizations $v(a_j)$ taking on different forms for different values of every attribute $a_j$. An example of such an instruction template is visualized in Figure [2](#fig:prompts){reference-type="ref" reference="fig:prompts"}.
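A template with per-attribute verbalizations might be implemented as follows (the template wording and verbalizations here are invented for illustration; the paper's actual prompts are shown in Figure 2):

```python
# Hypothetical verbalizations for two attributes of a product review.
VERBALIZERS = {
    "sentiment": {"Positive": "positive", "Negative": "negative"},
    "category": {"Books": "book", "Electronics": "electronics",
                 "DVD": "DVD", "Kitchen": "kitchen"},
}

# Hypothetical instruction template i(a_1, ..., a_M) with one placeholder
# per attribute.
TEMPLATE = "Write a {sentiment} review about a {category} product:"

def build_instruction(attributes: dict) -> str:
    """Fill the template's placeholders with the verbalization of each
    attribute value, yielding the prompt for one training sample."""
    filled = {name: VERBALIZERS[name][value]
              for name, value in attributes.items()}
    return TEMPLATE.format(**filled)
```

During training the prompt is prepended to the sample's text; at generation time the same prompt, filled with the desired attribute values, conditions the model.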
During the training stage, we use a differentially private optimizer to fine-tune our language model to generate each text sample within the original dataset based on the prompt corresponding to its individual attributes. Subsequently, we can synthesize a new dataset by controllably sampling text based on our desired attributes passed in the prompt. To generate a private \"twin\" dataset, one might use the same distribution of textual attributes as in the original dataset. Alternatively, the instruction-based approach allows us to control and change such ratios, for instance if we desire to debias our original data.
<figure id="fig:prompts" data-latex-placement="t">
<embed src="images/prompts_syntext.drawio2.pdf" style="width:100.0%" />
<figcaption>Our template-based approach for generating task instructions. A template consists of placeholders for verbalizations of different attribute values.</figcaption>
</figure>
The standard training objective for autoregressive language modeling is to minimize the negative log-likelihood (NLL) of every token given its previous tokens. We incorporate the natural language instructions [@radford2019language; @NEURIPS2020_1457c0d6] into this training objective. For every text sequence $\bm{x}$ and its corresponding attribute values $a := (a_1, ..., a_M)$, we construct the concatenated sequence $\bm{i}(a) \oplus \bm{x}$, which prepends the corresponding task instruction to each text sample. Let $L$ denote the length of this concatenated sequence and let $w_l$ be the sequence's $l$-th token. Our NLL loss is now $$\begin{align}
\mathrm{NLL}(\bm{i}(a) \oplus \bm{x}) = - \sum_{w_l \in \bm{i}(a) \oplus \bm{x}} \log p({w}_l|w_{<l})~.
\end{align}$$
This objective encourages the model to generate correct samples for a given instruction. However, it does not minimize the likelihood of generating wrong samples corresponding to another prompt and therefore another attribute. This is particularly unfavorable for our goal of generating synthetic training datasets, as every generated text with an error of this kind corresponds to a wrongly labeled training sample. To address this, we extend the training objective with a term penalizing the generation of a given sample for a wrong prompt. Let $I_{\mathrm{wrong}}$ denote the set of all prompts not matching the given attribute values $a_1, ..., a_M$:
$$\begin{align}
I_{\mathrm{wrong}} := \{\bm{i}(\overline{a}_1, ..., \overline{a}_M)\text{ }|\text{ } \overline{a}_j \in C_j\setminus \{a_j\}\}~.
\end{align}$$
We now define the overall training loss we are aiming to minimize as $$\begin{align}
\begin{split}
&\mathcal{L}_{\mathrm{ovr}} =
\mathrm{NLL}(\bm{i}(a)\oplus \bm{x}) \\
& - \frac{\lambda}{|I_{\mathrm{wrong}}|} \sum_{\bm{i}_\mathrm{w}\in I_{\mathrm{wrong}}} \mathrm{NLL}(\bm{i}_\mathrm{w}\oplus \bm{x})
~,
\end{split}
\end{align}$$ where $\lambda$ is a hyperparameter balancing the two losses. Note that in practice, when the number of possible labels is high, this computation might be inefficient and the objective too complex for the model to learn. In this case, one might randomly sample a few class labels for the wrong prompt in every training batch, or penalize generation only for the class labels most similar to the correct one.
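Given per-sequence NLL values, the overall objective combines the correct-prompt NLL with the averaged NLLs under all wrong prompts. A sketch with a stand-in `nll` function (real training would use the language model's token log-probabilities):

```python
def overall_loss(nll, correct_prompt, wrong_prompts, text, lam=0.5):
    """L_ovr = NLL(i(a) + x) - (lam / |I_wrong|) * sum_w NLL(i_w + x),
    where `nll` maps a full sequence to its negative log-likelihood
    and `lam` balances the two terms."""
    loss = nll(correct_prompt + text)
    if wrong_prompts:
        penalty = sum(nll(w + text) for w in wrong_prompts) / len(wrong_prompts)
        loss -= lam * penalty
    return loss
```

Minimizing this pushes the model toward generating the text under its correct prompt while making the same text less likely under every wrong prompt.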
::: table*
+-----------------------+-----------------------+-----------------------+--------------------------+
|                       | IMDb                  | Amazon                                           |
+-----------------------+-----------------------+-----------------------+--------------------------+
|                       | Sentiment             | Sentiment             | Product Category         |
+-----------------------+-------+-------+-------+-------+-------+-------+--------+--------+--------+
| \# Train Samples      | 25    | 50    | 5000  | 25    | 50    | 3000  | 25     | 50     | 3000   |
+=======================+=======+=======+=======+=======+=======+=======+========+========+========+
| **BERT:**             |       |       |       |       |       |       |        |        |        |
+-----------------------+-------+-------+-------+-------+-------+-------+--------+--------+--------+
| $\epsilon$ = 3        | 82.8% | 88.3% | 89.1% | 85.2% | 87.2% | 88.5% | 98.6%  | 98.7%  | 98.9%  |
+-----------------------+-------+-------+-------+-------+-------+-------+--------+--------+--------+
| $\epsilon$ = 8        | 86.0% | 87.6% | 89.1% | 87.4% | 85.9% | 89.2% | 98.5%  | 98.9%  | 98.9%  |
+-----------------------+-------+-------+-------+-------+-------+-------+--------+--------+--------+
| $\epsilon$ = $\infty$ | 86.5% | 87.6% | 89.2% | 89.2% | 88.5% | 89.2% | 98.7%  | 98.8%  | 99.0%  |
+-----------------------+-------+-------+-------+-------+-------+-------+--------+--------+--------+
| **TF-IDF:**           |       |       |       |       |       |       |        |        |        |
+-----------------------+-------+-------+-------+-------+-------+-------+--------+--------+--------+
| $\epsilon$ = 3        | 71.7% | 78.3% | 81.0% | 69.5% | 75.4% | 79.1% | 96.8%  | 97.0%  | 98.0%  |
+-----------------------+-------+-------+-------+-------+-------+-------+--------+--------+--------+
| $\epsilon$ = 8        | 76.4% | 79.2% | 82.6% | 74.9% | 74.5% | 78.3% | 96.8%  | 98.2%  | 98.2%  |
+-----------------------+-------+-------+-------+-------+-------+-------+--------+--------+--------+
| $\epsilon$ = $\infty$ | 80.2% | 79.0% | 82.5% | 75.2% | 77.9% | 79.7% | 97.6%  | 97.9%  | 98.1%  |
+-----------------------+-------+-------+-------+-------+-------+-------+--------+--------+--------+
:::
2211.14391/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2022-08-19T20:43:36.742Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36" etag="c3nf_UU8dPvsYBEWfS58" version="16.6.5" type="google"><diagram id="OG4lyVz4hKlyqUmHr9-d" name="Page-1">7V1te6K8Ev41+3G9eLX60Ypt6RFcK7a1X85FgSKI4hEswq8/MwkoqN21Vlp3S69nn0IIyWRmMrlzh6Q/+PZ0db3Q52PFNy3vB8eYqx+89IPjWO6iDr8wJaYpF5xIE+yFY6aZNgkDJ7HSRCZNXTqmFRQyhr7vhc68mGj4s5llhIU0fbHwo2K2F98r1jrXbWsnYWDo3m7qg2OGY5raEJlN+o3l2OOsZpZJn0z1LHOaEIx1049ySXznB99e+H5Ir6artuWh8jK90Peu3ni6FmxhzcJDXmDqA2Ysjh/VhvzzvtWf9+4t/ifXpMW86t4ybXEqbRhnKoBiQNtwcwlNmGOi4flLKPUyGjuhNZjrBiZG4ACQNg6nHtyxcPniz8LUomwDXw8X/sRq+56/ICXzDPmBJ6kQ1iK0Vm82j10rDbzN8qdWuIghS/oCJ6avpI7Gpi2LNlYTMtuMcxZbe5qeeoq9LnqjTLhI9blftzrXuLVnq5vOxeVM+s/PwFsyg5+NP6s2mFihgdKhEua+MwutRecVWhykKlw7DWYw9WBsmelNTs9Fvc78GdrqxfG8nKrb7VTVnv5seb/8wAkdf4a2tLBSeIDad8Dxu1sZnv0w9Ke5DC3PsfFB6KO9/WXoOTOoPet/WImeZlkXnnnOdGVjnKj5Ly+OYdVM6xV+BTUD1Pnf+Rgk/69DfwWhPjN1j7blDV/Y4zFvuodYZwruIWQ9OOcf/B73EMvyjgM63ud5B8M0m+frHaH+7FkheIZunsYZeGbLGQRuxxnEPc5wUZYzZDEo5w3Gl7oDw7Tb5+oO5QeLeuPMggUrHBAtUq05UwJp/mwiYtxL3ZjYC385M3P2fyE/b1pRD+bUei/OCl3sklTZylKZLAWuTT3Uf/AtestdBa/2D+5yBc7ItX/dqNxTfCk8P6yWRsI4+s0dY0j+a5c3eTMWeSUWX42p8aq4rUhpNxNzajjyjTl/urnzfw3kRJUmjnw99vQH0zclxlFcW5CdS05/uOf706YAeWLFHS7VQWulSC1bSfqiLMlCry2DDLeOKoy0YCrz43EvbjWeEiG7tn/d3M3N65X3a7IaWw/3sSx1Gl0+TXNGrnXduZDbrYZxfcXo7cvEvLl91blhaHLexLy2m7LbAsm8ya/BLUhtvigD2X4CCZ+1PrQkWqnS2O1qnaTrymxPG/FdkK6ryUy3zayUtsCrUitR4laouP0A00dJK8jnB3mirmswiqQk8E6stlvwL4L8d66iKUtotai6RqDQ9HyZeB+r10rUlXLlFeqNsDxSN5RXB42+Qjtd/fFOhPawz9M772nqLZ8e+mANlTGmzcWTxjgyo0qgZWgftHXqBc9SIe01rzew+hz1YfBPs8dk4uSege4vx6BD2+Dv4mcu9LqPa4tHIEvuPaj/ut+Up9B+aQJWBrndb63Xd+humOkuqXT3Xt2NMt0xG92Zle4O0p2R6Y5VtVbldxALuwj0stHIlqXVrDByaK0jo6Oy8VK3FSiJLYxiaFksxKo24dQ4imSiLdAUWiKBkSlRMs1TrbTz+Vt5raBG7I1FitYj2sqVmWrPxZEvV16+3gDLI3VLMmM5sv0MkHXEeYwF2jQfxMnTw9P8eXo/gTZmo21TdlSpn4BGJRxt4fl1VEjr5kZ00Gs0erzz11YasLz+cMfoiBvASvoD6GoGeIRtIIZA/YeAF8AzFKF3rYRbnpR648bz1rqk7cznt/PtxD
YCBsl0XLQHpue8md5Tj8+Xl68Xn6d1x+qND3poRqMHdW7eTKA998un6/up/iB6T9DGzGfAV9w+o6CO0Gfg+SoopjUKHhm9FrDR9QosMdzngXzRAzsruBcVF5FZn4FrIb0WSDpYVklGMUkn1/3oj9adiq9Qd8GymTw5C4JWZdvSZMBhMgP9wLYkvFZYtLSldfCao+nkmuR5oz0nkUdtb+RR1/L0WSonkYHtDT5RnkFOnkEmzzCvn6I87ZLlcfbJIzO9nL2IDok8fSJbqfLs9x9qu8x/4rU8nFKyPL21/2R6oP6z0Y+cbOzVJ3orVZ5BTp5BTp7BWh5mox/qVyXKEyvtN/xnLU8nzvlPVLo8g/3+s7EXyBN/ojzr/tWJN/JAvXl75f2nXH+ON/G5Q6+pfsScP5flP0W8BbN6kCgRkPU4DcssiAXiiG8yNWF3IaK+hzsSmrVsBeP09JFY0UcVfXTWUybouj1tImLoWU+ZNmkVfVS+Xiv66FN0903oo1J0903oo3fEwoo+Op4+QroDNKpNcvTRJq2ij96mjwAPoY7EPH20STsP+miPdb+UPjpennLoow/IUwp99AF5SqGPPiBPKfTR8fKUQx99QJ5S6KOj5SmJPvqAPKXQRx+QpxT66APylEIfHSTP7+gjQlJc+gvTyn1YdQJOSeDOk1OqV5xSxSn9BfOo4jL8Jq3ilMrXa8UpfYruvhWndGLdfStO6aBYWHFKH+WUip8kbdIqTulPnFLxk6RN2jlxSufzSdLx8pTJKZ3PJ0kfkKdETul8Pkk6Xp4yOaXz+STpaHlK5ZTO55OkD8hTIqd0Pp8kHSRPyZ8kcc3zpI8uKvqooo/OfcqkSJdSPy5OmdK0ij4qX68VffQpuvs+9NHpdfd96KNDY2FFH32MPmpFfYToBfqIplX00W/po0jpqNIWfZSmnQ99tGXdL6ePjpOnPProSHlKo4+OlKc0+uhIeUqjj46Tpzz66Eh5SqOPjpKnRProSHlKo4+OlKc0+uhIeUqjj/4oT8n0Ub1+nvTRIYfrVfRRRR997ZSpvT1dqg5D+gSdVrRR6Xr7RpTRSfX2jeiiA2JfRRV9kCraoYmqr4z+TBPtUERndOjRllW/nh46J2ronGihc6KEzokOOicq6JxooHOigM6J/jkn6uecaJ8vpHxYcYfy+XK+J6N3Kr5nHD5fi0lvNg4A3ywAr/nmzV3Ucxqv8BbfnRlJd9qMn2KIZtpE7PI0H/JAT4+3if7QXILHrbpuJ2NkpNHj5RiQC5SNTM1lZBHvBXRyc7nNHzFdl+7yx30DqjRKAJtxvTbgJ2mI/wTAwhHgPvBgeYVIR0laLPVuL8CZwdODOB5NV1734RYwbjgDTAqziH4d8C3MZAA/t1sQ+YVYkTq2qtmA/VqrHvagazbozVTEeIDjbiNjep9Ae7gn6N0blmmIaMp/evBm+k0f7/fL46Y9KVnNsvlFW0tx8p5lVzKXmDCJMhB4RQIcnAAmlXC221r28L4tRIprhPgc8Chg1Ba8xcRddxSpTno/BSzdFgTVnUB+ZkVwrMPgM0BMJC/+Tt8dwcytdXHrQCSJR25PC6EuYwlocYV1ok7oDLAjKgMsC0ZIKFONBcTGoeKAnJopwf1KkXD+ATN012a70nAJcxgRsDPoxRZ7gwjs2YHnY47k1a44kJE1XcDt96MEyxslMrSxz8E7y55mrACng6yKCO1koQ0rBWewg+x6R27wD3mpaEMG8D/OBaCuYaiiHqEuaAO0xRZNood7DtsG9cZk3jEQEkWzUbeiIg1DmC+AjuF+IHAgB5YB98MA2pqoLrRV6oOcI2ijAvqwATFE8I/YBewlUzslHZChBfM0mD+4LR7mWq/vsD/VlYb12OK9K8epnmKFH6ENhN6DgnYEW0Tw22DJbBtlgPmPiu0BXYG8MciJ9xzO/KF9oPch9SG0Ffo+zgnJPUlHZiq9t6dy0omobgO4nlzAHAn8Qgb7wG9iW5DJHa7ofUfA3z
3Njum9nPQTtH0H/TQG/UA/HoFNbHhHQd3BvE8WSTuTy4To3h2Fa9mgDT1Jofda5w1Z0udUdk7V0vc10jaxJ8n03lXTtmflK2nb++FW29N70BXxl2FC+kAyhLZ2QNYRoBDsP5OVutFvmOr3AepZ+9WefsWRfqV1Vv2kL2I/gvn9isSfREbeAHTXx7noRi/upQbPYT4KvkjiE9gcfA9sC/1XEHqSVy/MJW8z1mLPZzOEA9c0aJELM+FBBJYZMsoE5m0ORArQCJQYq1InQO9W3A5YqEV6M9QW9ZMORmPAHRBhaM9miCSunL5nBCApqwwV9FyY/2H5IxZm3MST4ZmAPQzygrd4ac/vgHe3ONAQRA5qVaItRuFARogIpJUxYQO0EdTZgd6E/1LtaGba40dpjyczdbargXzuhLnHSHQ/EvvJcCMDaFqFXgUeBvKjJ9ogz5ZOZv4G/0STbJ1hz4cehMFQaZv6K9OlIwhhEiBNBd2gN6nU25K054EXkijOqNqVC16+6qHO0Zr4XMOo0eeJbFofoyfIbgugdyIb9HpkM0Bf0JOgV8E1eD2JWJySTJDdwegDOsD3O7u65RSM+gLxauJt+Bt1hl5Oe3R2r0o2A9ESe7AI9YbYTugJGAVFkDNUUOeAM9EfFBz98b30GvyFS6MSzPPICBtQT++Dfyng6Z1V6smM8pjTd5LjQnIj5q92c3N9XcCjBXT6VJVzRDnRK/JJv2wC6k+1lLv1V21E9rC/asMLtUZZsJ6vYH0F6z8b1mdbcipY/z1hfcH+FayvYP2JYP36gIUK1p8K1mdHAlSwvoL1/0A5ZcB6nt2G9eKXw/rqz1VWsP4LYD09vbGC9d8V1ufsX8H6CtafDtbTs/grWH9CWE9Oj69gfQXr/4FyyoD1gnB+sL76M2IVrP9UWJ//m34VrP9+sH7H/hWsr2D9KWB9/i+0V7D+JLA+9zfFK1hfwfp/oJwyYL1YP7+PcKq/5FLB+i/4tr6C9d8V1u/Yv4L1Faw/2bf1Faw/+bf1FayvYP0/U04ZsJ69OD9Yf8AJ+9bMbC0WfgR3hqcHgWOAOsYhQGheYuGSYHdE4QR3WysnfMTrmpjejXJPpFX+Js5uZtCWx/zNKC2b3GxeInfZW0G48CfWQ6o7/nc2skzb+q2FchYQ91ggS1tYnh46r1ah8H1GSWv45TsgSc4BLoqbphtisYjAXy4MK31rY9qdgvhms1bcf81xTLGoUF/YVrhTFPGSdcM/4DjN83KcGif+277Ds81t31mfsfhe9xE4scYJzfVP40tdKQt5J3SljVu8yyuOccG/z5MEoVljcj/c6fxKaNSazbPxK7ZEv7r4x8ONuLUPkG8wR/rERaN2cT6x5oAjQo70CfbfdojtjaFHYxeR4WrC+QSJiteuzgg+73Mymbu2OhkWz8rcpFXnBJev1+qs4E/R3Tc5L7gU3X2TM4PfEQurc4OPPzdY7fTdp6vi2cGbtOrs4LfPDr7ryNzd1l8o36Sdx/nBe6z7pWcIHy9POecIf0CeUs4S/oA8pZwn/AF5SjlT+Hh5yjlX+APylHK28NHylHS+8AfkKeWM4Q/IU8o5wx+Qp5Szhg+Sp+TzhgW+uN4l7m50+uzjhoUDziX7e9ZOaUqOqpIu6x2G+QuYSYFldxZDhcI6xJHLGWxja8mt+bncpHDACRl/1RrrX+xj/MVvV0xP4m98tmK158/ofZbLHbB7829ai/2LPa7O/nYt9SQeJ9S53y0Ff4H/HbAcU37I+6f86IL57frrSfxIrG9/2PTJI+XJP2P7l1xA5Lbh0ZFLt/XmHz4KOJ2ZmfqAGYvjR7Uh/7xv9ee9e4v/ecDi/anWak3rRV964ZsrtX/nhqM/bzCKyCYg3LDTJou7SB16z9Mr5/l6GI6mV67OmfEzf78ky66bTUM0r9ShGzZQZmmeIKWPtL1xc+sZ3H1swpQxI91zn9/++XPyhBCzgaJdyrhhpeuqblcb4uYDsplHkYxlrx0x+D
k1krhqLAhI+OJv0zU4tS0QslfVyLISzUef81m+7F6X+rEywPxYPpLSUVo+k39PSOvh790ObvwR+omNGxfwc2/c9BSoiQEyE8I+yMmv5K7rMsQPsDXKjptlEl3q4OYrqBvkxI1L+Gm/qwSpnORa0YwlXVrBtjA8LctYKg7ZgJTQfLeyipsmyBJQWpaG6XJIykqvyXsTBQlvzMPgJiT8bB83mnQlRSR1k3LsQAGZFCYqyGs5lyougHTdSdAj5PmT39UMjtoH7zvEPtgm/PweZaTXHaFLNqCQJaEVftqfPkP9wrN+SDZ3TG1edYRklGRlRWkem2z42OS/lVSpxZMtDVorJnpPoC9IEyyfI4sNbWF9r23dYztUbQT5JrE6EGK6MCAvcXMX2VIwEPBTfqgL+5EHbbQjsujjoOy4aNFZ9gYM6MvOrsk7psPghhgByiNbAxQngjyTELeIrPNNbUHBBQtaVoDPQLfgj6S++N6lMmnSkAMbifjhhTLzbU0bCWAvFtsA/s3qbQZ1FRO5iF5aVIcx2a6QXaPOEipPC3W4eUb1mVDbXPlqMgL9K1lZZLON2o6E1D50881Q4RVpiP4gKtIIn+PmsCWRU+tjX+N60HfgN70fbt0/YjsM3CzGqwmUQ5cMY5RprQOq93ggyQJus9iv98lG795opYIsCm7r0EhZqM+c/TDvra9KuC3nyiV2IVt48PkEysBNTBGWQ2XqRDxuI4H+zEJ/qsu4zOgesFVgh9vaAxHe3ioAwLw4aAsZGMhhhmy2tg8zfITu2jv4fuIp/H8afP+R7b9nOhqDThhjerU0uCdoP9OUZ3fx6EFMnmGUfhrYuFkq3UAXsaT3pKP1fwaXyQhUTPN0sjwh2eBF88xld+9oXzjqDfTNkYgr9UU16S+VB1wmxs12dqhqOFJBFMMNdxgJ3BFGtVhJDPJblTpLstU3oZFPxUih9SMc7Xpk9IcRl6CEPoey9x5HoSoNY8gnPEt9FqNDDzfLcQqLG8gggpBthRiVcCTC6IjbZ3F5GGTE6JRu07U5EklJvYgYcJNgWpcki2SLZ3o/SiCSoSxtEnUEGGHxGiMTbi6MKYpAmW/dtN2AXDpL3PxHoqcD5bsy/cgD24vRKkFUACM40dsE2mE+QPsignqkIS6H46iL0Y1RNZkjusSNbTjquxglPURUK4omqK4VjJi4xVezYWQdQdRUKAJx+5DvNgBkgduOsfxlb2izaGOI0isVR4AJyCSNSJtUacLjRjkyykIUxVEK9ancQJnJREA9DqQh/sEjQHe4QEFQFT5Pepq9ImgG+8C1HUNEBtlvSdvoqE3QCgv2jBW027WSkBFfw222KJuBowJHR/s+T6K/1mJT9JGALMwaGcGIB76FKEgk6A4QDow2VKcafRc/TEIkSPOs6+ap/hT0W4GMTMmITdEpQzYwDgi6wtGLyke2gBshlS+i8tE6go18EZWPIDpElKMMpYkEhaJ8CfHZtJ2AbEn9tBzwjRXajzwbgj8naA+0e4ejyGGYqPgZp4MyjniSd1ufN2QjYrq4U9hSRyMipJ5mxBO44txSELid8e5izxz5oqzxjj05FbXLKSBBMEiL8xfh2Lf9me51NqmXxnLxSgpg12reYhUOUvIncU1i0YYcyx9HNPBbRANbP4xoAGPocS7bHDMEhwvMpvUeLFcxP1xQCU7KerCNPY5YJ/DoGS7skHgGTXjxibgbF63/b+lnD34GTgKpLcjANuarzcOslLbnoCNlhS+y9IHlAbLyF9kTaAatqFg5JOcE2uoqm66AnhyNndAazHUDn0YLfV7sN1j6gAgrsXx6f6VPHQ/1fucDfvSzXMSjIHUchnOwtIjNE8Eg+D/MENRs37c9S587Qc0A1IkPjIBkvXqhZcLlplQDXtKdmbVIe+0JYpu4dfo+y9ZxA9F2eGvsW8Cu17bIupMFuCyeVQHu8E2YjSJ9zm7vVjl44YUvesSBROp749u2vI1PCFeHHACbc6uZP7N+51
P5UNDY9pe3Ofm3Nlidr28J9eKEnz128Nwp6IKvsc2LJiNwDYFrcltnCZTN2R8Ao3D2P38zzkJAnsHwoz9n2Zn3xl8+E2LdbXeDL8utI23h7IWy4OUBavm3h8zfeMtvu+BnG2rf0mIRbFGtb+AUu4uscoDtWHDWsu2FZetFGLYDAHeg2ZZPgUXCouMUY2gajl8cz9tK0lMKzgALoqV3uLmpY5pkSN/nqW8H9X/PWdNiikviXA3G/NzPxU4Eau4ZWzihlm1ZPv0E9wBGN+cnfyRzdxzk6G2wxcGb2zfQUyXik7FuEiCBN6YejDdoNGWjpysbfHBcc/zgouaA+YOabppbXtjcRyt/mquwjLBF99drQqOe+0/YHbDqNVwm2PzX2HWft/Kcfi7xvg8w3gv63oPz/qoJxQ5WO/aYBZEphhtW2JLlVIzJ1hlWrPAZU4p9DMiHZqqf81XYX+R39eZpvifc+tJoew5z9GQDbhc+4qFNdgzqim9amOP/</diagram></mxfile>
2211.14391/main_diagram/main_diagram.pdf
ADDED

Binary file (17.3 kB).

2211.14391/paper_text/intro_method.md
ADDED
@@ -0,0 +1,87 @@
# Introduction

With recent advances in mobile device hardware and the growing need for intelligent tasks in mobile applications, federated learning (FL) has emerged as a new learning approach [@mcmahan2017communicationefficient] that allows learning from distributed data scattered across multiple clients while preserving their privacy.

A typical FL process consists of multiple rounds of communication between one server and multiple clients. In each round, the server broadcasts its most recent global model to the clients. Each client trains the model locally for several epochs and sends its updated model parameters to the server. The server collects all updated models it receives and aggregates them into a new global model to broadcast in the next round. Privacy is preserved because the server exchanges only models with its clients, never training data, making FL a natural fit for machine learning scenarios that require privacy. Recent studies distinguish two main FL settings: cross-device and cross-silo [@kairouz2021advances]. In cross-device FL, clients are typically edge devices with limited resources that might not always be available. This study focuses on the reliability and availability concerns of such edge devices; we discuss the differences between the two settings in the background section.

In cross-device FL, only a fraction of clients is selected for training in each round, both to save communication costs and to avoid unavailable clients. This raises a critical topic: the client selection strategy [@mcmahan2017communicationefficient; @kairouz2021advances]. Vanilla FL selects clients randomly from among the available clients, which has two drawbacks. First, clients have heterogeneous resources, which vanilla FL ignores. Second, a client's availability history is an important factor that should be considered in the selection phase, and vanilla FL does not consider it either. In reality, some clients are slower than others, and some are less reliable [@hetersurvey; @imteaj2021survey]. New client selection techniques have been proposed that exploit resource heterogeneity to speed up convergence [@fedcs; @tifl; @oort; @powerofchoice; @efficienyselection]. However, none of them considers availability history as a deciding factor. In the FL context, availability is of paramount importance [@hetersurvey]: if a client's availability fluctuates and it becomes unavailable in the middle of a training round, the training will likely fail to complete. In a real-world deployment, a timeout is needed to prevent training from getting stuck [@bonawitz2019towards]. Therefore availability, like resource heterogeneity, impacts the training time of FL and must be considered in FL applications.

In this paper, we create three scenarios using different availability settings, namely high, low, and average (for instance, in the low-availability scenario, most clients are unreliable and unavailable most of the time). In each setting, a mix of clients with different levels of availability is drawn from 100k real-world mobile device traces with heterogeneous resource capacities [@oort]. We then propose Must Detect the Availability (MDA), an availability-aware client selection strategy that aims to improve FL's performance under availability constraints. We implement these simulations in FedML, an open-source FL framework [@fedml], and run our experiments on two of the most popular datasets in FL research: CIFAR-10 [@cifar] and FEMNIST [@femnist].

To show the importance of availability for FL training time, we first study the effect of the three availability scenarios on vanilla FL with random client selection. The results show that a low-availability setting can slow down learning by up to 10% compared to a high-availability setting, while degrading accuracy by up to 5%. We then compare our proposed approach to the random client selection used in vanilla FL in the average- and low-availability scenarios. The results show that MDA can reduce training time by up to 6.5% and reduce the number of timed-out rounds by up to 38%.

To see how our approach compares to state-of-the-art client selection techniques, we implement two existing techniques from the literature: FedCS [@fedcs] and TiFL [@tifl].

The results show that TiFL is more effective than FedCS at improving training time and achieves better results than MDA alone. This is because MDA considers only availability as a selection factor, which has a direct impact on the number of *failed rounds* but only an indirect impact on training time; thus MDA matches or beats the baselines in terms of failures but cannot be as fast. Since the state-of-the-art techniques select faster clients and thereby directly reduce training time, they are more powerful in that respect. However, since availability and resource factors do not overlap, combining them can produce better results. We therefore combine TiFL (the stronger of the two baselines) with our approach and call the new technique TiFL-MDA. The results show that TiFL-MDA improves TiFL's speed by up to 16%, outperforms all the state-of-the-art techniques, and is the fastest in all cases.

Our contributions are summarized as follows:

- We propose an availability-aware client selection technique for FL called MDA. To the best of our knowledge, we are the first to consider availability history in client selection strategies.

- We conduct a complete analysis of client selection techniques under heterogeneous resources and different availability scenarios.

- We propose an availability- and resource-aware technique called TiFL-MDA that outperforms all the baseline client selectors.

The remainder of this paper is organized as follows. Section [2](#background){reference-type="ref" reference="background"} discusses the background required to understand this paper. Section [3](#mda_section){reference-type="ref" reference="mda_section"} covers our proposed technique. Section [4](#experiments){reference-type="ref" reference="experiments"} discusses our goals, experiment design, and results. In Section [5](#related_work){reference-type="ref" reference="related_work"}, we cover the most relevant studies, and finally, we conclude in Section [6](#conclusion){reference-type="ref" reference="conclusion"}.

# Background {#background}

This section describes the FL process under ideal conditions and the challenges it faces in real-world scenarios. We also discuss the details of the client selection techniques used in this study.

{#fl_fig width=".9\\linewidth"}

Federated Learning (FL) is a technique introduced in recent years [@mcmahan2017communicationefficient] that leverages all clients' data to perform distributed learning while preserving the clients' privacy. Figure [1](#fl_fig){reference-type="ref" reference="fl_fig"} shows how FL works in practice. FL is an iterative process, and each iteration is called a training round. Each round starts when the server broadcasts a global model to the clients. When a client receives the global model, it performs local training on its own data for a few iterations, called local epochs. After completing local training, the client calculates the difference between the global model received from the server and the trained model, and sends this difference, called the "delta", back to the server. Once the server has all deltas, it aggregates them to update the global model. This process repeats until the global model converges to a desirable accuracy or training reaches a set number of rounds.
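
A minimal sketch of one such round, assuming toy models represented as plain lists of floats; `train_local` and the client datasets are stand-ins of our own, not the paper's code:

```python
# One FL training round: broadcast, local training, delta aggregation.
def train_local(global_model, data, epochs=2, lr=0.1):
    # Toy "training": nudge every parameter toward the mean of the client's data.
    model = list(global_model)
    target = sum(data) / len(data)
    for _ in range(epochs):
        model = [w - lr * (w - target) for w in model]
    return model

def fl_round(global_model, client_datasets):
    deltas = []
    for data in client_datasets:
        local = train_local(global_model, data)
        # Each client sends back only the "delta" between its trained model
        # and the global model it received.
        deltas.append([lw - gw for lw, gw in zip(local, global_model)])
    # The server aggregates the deltas (here: a plain mean) into the new global model.
    mean_delta = [sum(d[i] for d in deltas) / len(deltas)
                  for i in range(len(global_model))]
    return [gw + md for gw, md in zip(global_model, mean_delta)]
```

Only model parameters and deltas cross the network in this loop; the raw `data` never leaves the client.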

Throughout this process, only model parameters and deltas (differences between global and local models) are transferred between clients and the server. Data resides on the clients' devices and is never sent to another client or to the server, preserving privacy.

FL was originally designed for mobile devices but has recently been applied to other privacy-preserving scenarios. FL can be categorized into two main settings: cross-device and cross-silo [@kairouz2021advances]. In cross-device FL, the focus of our study, the clients participating in training are limited in computational and communication resources; they are unreliable and can become unavailable for training. In this setting, only a fraction of clients is selected for training in each round (the client selector component in Figure [1](#fl_fig){reference-type="ref" reference="fl_fig"} performs this task). In comparison, cross-silo FL deals with only a few clients, but these clients are highly reliable and powerful, and all of them participate in every training round. An example of cross-silo FL is training across multiple data centers belonging to different hospitals.

One of the main challenges in FL (in both settings) is the data heterogeneity of clients [@mcmahan2017communicationefficient; @flsurvey; @flsurvey2; @kairouz2021advances]. In real-world cases, data is not independent and identically distributed (non-IID), which can hinder the FL convergence process. Federated Averaging (FedAvg) has been shown to be an effective aggregation technique in non-IID cases [@mcmahan2017communicationefficient]. FedAvg is the baseline aggregation method in FL, and it is what we use in this study: it takes the mean of the client updates and uses it to update the global model.
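
FedAvg aggregation can be written as a standalone function. The sketch below computes a sample-count-weighted mean of client updates, which reduces to the plain mean described above when all clients hold equally many samples; the vector-as-list representation is an assumption of ours:

```python
# FedAvg: average client updates, weighting each client by its number of
# local training samples.
def fedavg(updates, sample_counts):
    total = sum(sample_counts)
    dim = len(updates[0])
    return [
        sum(u[i] * n for u, n in zip(updates, sample_counts)) / total
        for i in range(dim)
    ]
```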

Client availability and resource heterogeneity [@hetersurvey; @fedcs; @imteaj2021survey] remain vital challenges in cross-device FL, where clients are less reliable. In this setting, FL uses a client selection technique to select a fraction of clients in each round, both to withstand availability problems and to reduce communication costs. We discuss client selection techniques next.

**Random client selection** [@mcmahan2017communicationefficient]: In vanilla FL, client selection is performed randomly. At each round, the server pings the clients to see which ones are available, creating a pool of available clients, and then selects a fraction of them randomly from the pool. Random client selection is primitive: it considers availability only as a current state and ignores the availability history, which can be very important. Moreover, resource heterogeneity is not considered, so slow clients may be selected and become the bottleneck of the round.
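
The vanilla selector above amounts to a uniform draw from the available pool; `is_available` is a hypothetical stand-in for the server's ping:

```python
# Vanilla FL client selection: ping for availability, then sample uniformly.
import random

def random_selection(clients, is_available, n, rng=None):
    rng = rng or random.Random()
    pool = [c for c in clients if is_available(c)]  # the "available pool"
    return rng.sample(pool, min(n, len(pool)))      # uniform, without replacement
```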

**FedCS client selection** [@fedcs]: This is the first and most popular client selection technique to consider resource heterogeneity in its selection process. Like random client selection, it first finds the available clients and selects a fraction of them randomly. In addition, FedCS has a threshold hyper-parameter that prevents slow clients from being selected for training: after the random selection, it filters out clients whose estimated training time exceeds the threshold and keeps only the remaining subset. This parameter determines how fast the overall training will be.

To estimate each client's training time, FedCS gathers information on the client's resources and derives an estimate from the model complexity and those resources. The main issue with FedCS is that it excludes many clients in exchange for faster training; in non-IID cases, this can have detrimental effects on the model's accuracy.
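
A FedCS-style selector can be sketched as a random draw followed by threshold filtering; `estimate_time` is a hypothetical stand-in for FedCS's resource-based time estimation:

```python
# FedCS-style selection: random draw from the available pool, then drop
# clients whose estimated round time exceeds the threshold.
import random

def fedcs_selection(available_clients, estimate_time, n, threshold, rng=None):
    rng = rng or random.Random()
    candidates = rng.sample(available_clients, min(n, len(available_clients)))
    # Threshold filtering: only fast-enough clients survive.
    return [c for c in candidates if estimate_time(c) <= threshold]
```

Note that, unlike the random selector, the round may end up with fewer than `n` participants, which is exactly the exclusion effect criticized above.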

**TiFL client selection** [@tifl]: TiFL (Tier-based Federated Learning) is the last selection technique we study in this paper. Like FedCS, TiFL is a resource-aware technique that aims to speed up training. TiFL starts by placing clients in different tiers based on their speed/resources, profiled before training starts. The number of tiers is an important parameter that must be chosen carefully for best results. After tier initialization, in each round TiFL selects a tier and then selects the desired number of clients only from that tier. Tier selection is another important factor in TiFL: it is performed such that faster tiers are selected more often than slower tiers. The success of TiFL is two-fold. First, unlike FedCS, TiFL does not exclude slow clients; it merely selects them less often, which makes it fairer. Second, the clients selected in a round all come from the same tier, so a slow client is never selected alongside a fast one, avoiding bottlenecks.
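
The two TiFL steps, tiering by profiled speed and biased tier selection, can be sketched as below. The linear tier weighting is an illustrative assumption of ours, not TiFL's exact tier-selection rule:

```python
# TiFL-style selection sketch: group clients into tiers by profiled latency,
# draw one tier with probabilities favoring faster tiers, then sample the
# round's participants from that single tier only.
import random

def make_tiers(client_latency, num_tiers):
    # Lower latency = faster; faster clients go into lower-numbered tiers.
    ranked = sorted(client_latency, key=client_latency.get)
    size = -(-len(ranked) // num_tiers)  # ceiling division
    return [ranked[i * size:(i + 1) * size] for i in range(num_tiers)]

def tifl_selection(tiers, n, rng=None):
    rng = rng or random.Random()
    weights = [len(tiers) - i for i in range(len(tiers))]  # favor fast tiers
    tier = rng.choices(tiers, weights=weights, k=1)[0]
    return rng.sample(tier, min(n, len(tier)))
```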

In this section, we explain the importance of availability in client selection and propose the MDA client selection technique, to the best of our knowledge the first availability-aware client selector. Finally, we propose TiFL-MDA, a combination of TiFL with our MDA technique that takes both availability and resource heterogeneity into account.

Currently, existing client selectors either do not consider client availability at all or consider it only as a current state during the selection phase [@mcmahan2017communicationefficient; @hetersurvey; @fedcs; @tifl; @powerofchoice; @oort]. That is, the selector pings all clients to find the available ones in each round and then selects from that subset, but it does not consider the clients' availability history. For instance, if a client whose availability fluctuates gets selected for training, it will likely become unavailable and fail to finish the training process, since a real-world FL system must enforce a timeout for each round to withstand potential failures [@bonawitz2019towards].

Timed-out rounds caused by availability fluctuations are never desirable: they slow down the learning process and waste the resources and energy of the failed clients. An intelligent client selection technique is needed to reduce the occurrence of failed training rounds and obtain more effective model updates.

# Method

:::: algorithm
::: algorithmic
**Input:** $C, A, F, r, n, m$
$UpdateHistory(A, F)$
$Candidates \leftarrow GetAvailableClients(C, r)$
$W \leftarrow []$
**for** each client $c$ in $Candidates$ **do**
  $W[c] \leftarrow 0.5$
  **if** the availability history of $c$ covers at least $m$ rounds **then**
    $totalTime \leftarrow 0$; $availableTime \leftarrow 0$
    **for** each round $i$ among the $m$ most recent rounds **do**
      $elapsedTime \leftarrow CalculateElapsedTime(i-1, i)$
      $totalTime \leftarrow totalTime+elapsedTime$
      **if** $c$ was available at rounds $i-1$ and $i$ **then** $availableTime \leftarrow availableTime+elapsedTime$
    $W[c] \leftarrow availableTime/totalTime$
  $pen \leftarrow 0$; $maxPen \leftarrow 0$
  **for** $i \leftarrow 1$ **to** $r-1$ **do**
    $p \leftarrow 1/(r-i)$
    $maxPen \leftarrow maxPen+p$
    **if** $i \in F[c]$ **then** $pen \leftarrow pen+p$
  $W[c] \leftarrow W[c]*(1-pen/maxPen)$
$W \leftarrow NormalizeProbabilities(W)$
$S \leftarrow WeightedRandomSelection(Candidates, n, W)$
**return** $S$
:::

[]{#mda_algorithm label="mda_algorithm"}
::::

To reduce the chance of client failures, our approach considers two factors per client: availability history and failure history. These histories start as empty lists and are filled progressively as training proceeds. The availability history of a client is a list indexed by round number whose values are Booleans indicating whether the client was available at that round. The failure history is also a list, containing the indices of the rounds in which the client previously failed; it is empty if the client has not failed in any prior round.
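
The two histories might be kept per client as below; the class and method names are illustrative choices of ours, not the paper's:

```python
# Per-client history records used by the selector.
from dataclasses import dataclass, field

@dataclass
class ClientHistory:
    # availability[i] is True iff the client was available at round i.
    availability: list = field(default_factory=list)
    # Indices of rounds in which the client started training but failed.
    failures: list = field(default_factory=list)

    def record(self, available: bool, failed: bool = False) -> None:
        # Append the current round's state; both lists start empty.
        self.availability.append(available)
        if failed:
            self.failures.append(len(self.availability) - 1)
```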

Algorithm [\[mda_algorithm\]](#mda_algorithm){reference-type="ref" reference="mda_algorithm"} shows how MDA selects clients at each round. The process starts by updating the clients' histories (availability and failure) based on the information available to the selector. Following the original FL client selector [@mcmahan2017communicationefficient], MDA filters clients to gather only the available ones in Line 2, then initializes a list of weights $W$ of the size of *Candidates* and iterates through *Candidates* in Line 4.

Lines 6 to 17 show how MDA uses the availability history to set each client's selection weight. MDA looks at the *m* most recent entries in the availability history to calculate the fraction of time the client was active. Since the exact times of availability changes are unknown to the server, which only knows whether a client was available at a given round, MDA estimates the availability time in Lines 12 and 13: it considers a client available between two consecutive rounds only if it was available at both; in all other cases, MDA assumes the client was unavailable during that interval. MDA then calculates the availability percentage in Line 16. If the history is adequate (Line 6), the client receives a weight between 0 and 1 (its availability percentage); otherwise, it receives the default value of 0.5.
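
This availability estimate can be sketched as follows; `round_times` is a stand-in of ours for the server's per-round wall-clock timestamps:

```python
# Fraction of time the client was available over its last m recorded rounds.
# An interval between consecutive rounds counts as "available" only if the
# client was available at both of its endpoint rounds.
def availability_fraction(avail_history, round_times, m):
    if len(avail_history) < m:
        return 0.5  # default weight when the history is inadequate
    start = len(avail_history) - m
    total = available = 0.0
    for i in range(start + 1, len(avail_history)):
        elapsed = round_times[i] - round_times[i - 1]
        total += elapsed
        if avail_history[i - 1] and avail_history[i]:
            available += elapsed
    return available / total
```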

Lines 18 to 29 show how MDA applies the failure history. MDA penalizes clients that have failed before, and the penalty is larger when the failure is more recent. Two variables are central here: *pen* and *maxPen*. MDA iterates through all rounds from first to last (Line 21), computes the inverse of the distance between that round and the current round, and stores it in the variable *p*; the penalty is computed this way to give more weight to recent failures. In Line 23, *p* is always added to *maxPen*, which represents the worst case of a client that was selected in every round so far and failed in all of them. MDA then checks whether the client failed at that round, and only in that case adds *p* to *pen*. Finally, MDA computes the normalized penalty *pen/maxPen* and scales down the client's weight accordingly.
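
The recency-weighted penalty described above, as a small sketch:

```python
# Normalized failure penalty for one client at round r: each past round i
# contributes p = 1/(r-i) to maxPen (the worst case of failing every round),
# and to pen only if the client actually failed at round i.
def failure_penalty(failed_rounds, r):
    pen = max_pen = 0.0
    for i in range(1, r):
        p = 1.0 / (r - i)
        max_pen += p
        if i in failed_rounds:
            pen += p
    return pen / max_pen if max_pen > 0 else 0.0
```

A client's weight would then be scaled by `1 - failure_penalty(...)`, so a recent failure hurts more than an old one.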

Once all weights are calculated, MDA normalizes them in Line 31 so that they sum to one, since they represent probabilities. Finally, MDA performs a weighted random selection over all the *Candidates* and returns the selected clients.
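
The final step, normalization followed by a weighted draw without replacement, can be sketched as:

```python
# Normalize per-client weights into probabilities, then draw n distinct
# clients by repeated weighted sampling without replacement.
import random

def weighted_selection(candidates, weights, n, rng=None):
    rng = rng or random.Random()
    total = sum(weights[c] for c in candidates)
    remaining = {c: weights[c] / total for c in candidates}  # sums to 1
    chosen = []
    for _ in range(min(n, len(candidates))):
        cs = list(remaining)
        pick = rng.choices(cs, weights=[remaining[c] for c in cs], k=1)[0]
        chosen.append(pick)
        del remaining[pick]  # no client is selected twice in a round
    return chosen
```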

MDA works non-invasively and can be combined with any state-of-the-art client selector to make the resulting selector even more powerful: the combined technique accounts for both availability and resource heterogeneity, so training is faster and client resources are used more efficiently. Given that, at least in theory, TiFL is better than FedCS, we combine MDA with TiFL to propose the first resource- and availability-aware technique. As discussed, TiFL's theoretical advantage is that FedCS excludes slow clients completely, whereas TiFL still includes them, just less frequently.

As discussed in Section [2](#background){reference-type="ref" reference="background"}, TiFL first assigns clients to different tiers and then performs a random selection (just like vanilla FL, but over a subset of clients) from a chosen tier. We introduce TiFL-MDA, which replaces the random selection component of TiFL with MDA.
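
A TiFL-MDA sketch, keeping the biased tier draw but replacing the uniform within-tier draw with MDA's weighted draw; `mda_weight` is a stand-in for the availability/failure weight produced by the algorithm above, and the linear tier weighting is our illustrative assumption:

```python
# TiFL-MDA: tier selection as in TiFL, then MDA-weighted sampling without
# replacement inside the chosen tier.
import random

def tifl_mda_selection(tiers, n, mda_weight, rng=None):
    rng = rng or random.Random()
    tier_weights = [len(tiers) - i for i in range(len(tiers))]  # favor fast tiers
    tier = rng.choices(tiers, weights=tier_weights, k=1)[0]
    remaining, chosen = list(tier), []
    for _ in range(min(n, len(tier))):
        pick = rng.choices(remaining,
                           weights=[mda_weight(c) for c in remaining], k=1)[0]
        chosen.append(pick)
        remaining.remove(pick)
    return chosen
```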
|
2211.16022/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2022-10-21T19:05:20.991Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.1.2 Safari/605.1.15" etag="LwFMY9I1nIOFyCeBj8Gi" version="20.4.0" type="device"><diagram id="hNDk1PoJMImFKePvj8Ed" name="Page-1">7V1bc6O4Ev41rpp5cAokLubRiZ2dOpfdTJKts7MvU8TINrsEHMCJs7/+SCCuagy2wdiJ/ZCAQLLU/fWF7kYe4JvnzS++uVr+17OIM0CStRngyQAhNFJV+o+1vMctGCUtC9+24jY5a3iw/yG8UeKta9siQeHG0POc0F4VG2ee65JZWGgzfd97K94295zit67MBREaHmamI7b+z7bCJW/VVCW78I3YiyX/aoSxFl95NtO744ZgaVreW9wUrQ5PB/jG97wwPnre3BCHkS8hTEyC24qr6cx84oZNOuAfujZZzd40+a+HP7zx739+XwyHshEP82o6a75kPtvwPaGB761di7BR5AG+flvaIXlYmTN29Y2ynbYtw2eHX7bMYJneO/fckPNUVug5/y7ih2RTuQo5pQ2FFfGeSei/01t4h6GOOTQ4pIYyVnjLW8YhTeNtyxxzlARVJkfFIh0+oxs94KTbhYyKQDViUSDxU88Pl97Cc01nmrVeZ3SV6Fl2z388b8UJ+BcJw3dOQXMdekVak40d/sG6X6n87AcfjB1PNvmT9+TEpevNdWKnP5Lx2EnWLTpL+pVZSVxrzOSLns8cMwjsWdx4azvJ9GKSMDps5zUlm7f2Z2QbfTkqQ9NfEN73xf91af9+/b6ZGy//vnlTN2N9MkQweHzimKH9WpxI+yhALQtTmegF4WpDmGQklaVJGonSpCjoShXlCY26kid8OuIkH0Wc9pMVUAQSM1knKjI+UFairlQJmO+5G1ae7YZBbuQ71pCHHNZLkBuVDVe5izE6vIdW02Oko6096EG81AzUKc32x3mvZkPOoTzDfB3O8yjPgf50cd6rSVAviqx7RaZcFFluWpJxeBdDkrtXfqPTkY39fGbppGTjbXztv3yfD8fKj9mjtLn7+9urPKzw/44uGjLGNSDUkshAVZduQGhcQNg5CA+1wMcDITJ6AWESlzgyCltCVGehgINAdypPN/Wga6WLrOwM7VKXg6G9jV2FkIjmhCyYYb8WIK+9rFko9Dokm3BoOvbCHeAxvWNGIUL8CETJLfRowf878f/baLRiK8Mp+A3swjCIEMy+QMarzdbhW5ypXW5YJw1jd7b0/NyK1sAq893NZxYtcp+CVTtrlmaeQyeQ7zRAWJI0+gEWpN58uaOdovulL48DRPEhTb/S9q+5Ccfz6Zhb8MzZvFl0q4oVT35+LY9s9GjuUsX0bxl3pLnt2sHSdhdMB3nr2ZJQdSD9y1u69J9LCNPIdAg6ovtT4gMuTMfxXHafN4+0XiSZjF72PAqp0T9P3prF1ZKustCVam+qhywrOVzarJW6CEEYzybuiGq/c0k1NZKe6dTZP88n2WXpzY4kOpoRW8wA3+bJBXCnmr7HYyhd+DRlHww9egtbEmUJkobRkZweoa2YrYzfMlNYE7/lumHikHmy8LzhnFNq36QrxVNtIk3oQq6D0Pf+JrkrI/REV59OBzCbQhh4S8xX0crGAwExXzlpy0d8ZU2ttphNAyWgnVDFUPlRXaCCA7RnWGsfd6ZpYiPJepV8qKM7OKroeXDHotLzELsgWR107nmovSYR+guudolCpJ0GCvVywGofFO7RRUJHAG7XWcRWjIhcDkxSI6IKRgQbkBHRuzIiuEPKoUG5uKFovS2TjOYzyHprsxF5mrdEeCEgLCNdpHtK4gLdkzqJ1umufHy6y7JmCITXrkZ6
U9rL6b3t+067kP8wtxUBDFDJyFK6d1tlLS1CyFiQVFvl6a8gkf6dqRztc9BeBWiP+qa9DtC+tdAP1WnK3U97kIU42EkU5Phpf42vd/rQmEaTTgUDCoABuW8MjCoxUBW12z3OV90jWJluI5wpFTh79E3bjeM3DyTM4SkeuUng7MBwWtXMvtSHLB/WK+K/2gELeTULWzYOs7DQ6lbFyJBrz0xnzJufbcuKn+AIXZz5FA3EpI0/S9BR1euBOmEj0Ye2gMuTIDau55KSjCVNJTHsTrGO0ra8WEmAK6125dHtVBZ8rmbNEAMToD8BEL4zfZakAj845cUQJOhNHJXy1amqttyJgTod6FT/LAb6pOBalC9Ebkax8RO6HIYqBg57RwkUY/jECc0OEo6YfqoSjpFIfL+nEiFID2/NiU7UcsYZydyytme3upqwQT/whKuSpOW89Lddko0R7xb2K2E+dbg0w/Jo8/NPwZ5PQrUx+s4ht9qbRylp6pWGRaNlQEFCTevKakER2ovVOqrVuv9tMobsVtqes1y87dxtV7K0GuuV193BFtXN7UdZgxctRk/lNKb7XnmbZHmRicpZOqnalJ2ZhWjK44SpzHW4Tq1EcvzJ7IQsj8rZpN0MRWdlOL2+lNFH0UQQmn6Y1C/zUF8li/OFFNVVx03KeTid2yukOIjtaKc0VssZ3Pl8jmZgBtfSnjS1LZlTFaEARALCrCpY9tZVnBVBOawPF+5TcTnGLWkA6RFA+jZeMYcpD2WOPh7lkUB5tXfKQ8mF7S7+Xo8KrT+GRPFX9YYSjoS2YxHq/dwx10edDumIX0qXHpNLkZtYujjlF+uiursn/hqnJOtomPPoq9KSHzDmrMqCwCiAVwYJjD7qyinTP5tT1sgB21onXe+B6SflgeGd8n4fxQNLX0rrzwPDUNrv4/kBggeGevcDMPoUlBc8MKT0TvnqJGanblIc7RnE4U/YXYpuuSknfmDvKboXcFYKXT9h3lz0YRDuHXJiBuLOC+zIsCLpxnQt2zJDdhzVwZW4Ul0OlpK+/4Iwdp5jqRR9ujOdwOsWYCkE6oylYt35r2RhXli6d1AE8oaOy1JN4NOJ7lclxPF35VT9O3GcFj28FHcYD7svi+e2F0/iRB832XHyEn1Giytuo3kC7nV1aXyLOBgmKGBxrmmTkNanYH//8W1cHWWtrujPXLKm25Ccm1Xfy1YcqAmA7amPadQVKLRUh4TMk7sgoT2lICJBPyoSxOf+72sShLbn1tScNH1r6J54vkX8qKKlFhBlNpeY2hwvIj4A3oMYKVofCB6J71u9ZdkOmBhpJZ9/ZBhQ1Qf02nx3b2+Lj+b3hC4q915XFl75bUV8swyY+NKTGfcohF8mZmiKt47Xi2dKRj7QBSjAxuxySXsYEvDqw1E9ClW0I5yPUUVb9JAvibxu8iIhcVi53F7vE54/eNrAi6ELIb8kHVmHlzb0SvXes9vD+3QUexWQese/5NffRp+WaKcjwWWDwqUYkjX5CnVWiifWZE1f1pHKHHJVC0jbPf1um9A+TJ5CqqrJ4v1j+WZ5XYs74r+sQ2/gQgjoTHrQmUgPFl/lhEIfx5YeTXR1H9l7AhfJaVFyAN5jYEOco8pNgz2gTlRuqirAOxQdkIINdnP64OVHuR2td6kIb5xG2AbclrfWA/aOFvbOk8pQihfFO2ZoAjaVFmMc5bHidQtj7bFZHkg1MQd5UfPdq3lJUa/4T4QcRdODu+ZDHhLn9t7Fv1VxrXw2+87znPqC3dwczu8RtQIiO/22nCEUPsnADzWCkMFdQUauzo0+5YMRQzEUIZZDN30flZejbam0hsNiEJRq6sHPGnSd/3SoIZWjrwoQU4MA2cY+S7AOq87NpLx89O2VEwXX7kybolDKmbAL5xtxHpUtmAHlZ6Gs3B6qiJ5mv+ob+znZryPj6f8B</diagram></mxfile>
|