Add files using upload-large-folder tool
Browse filesThis view is limited to 50 files because it contains too many changes. See raw diff
- 2003.05162/main_diagram/main_diagram.drawio +0 -0
- 2003.05162/paper_text/intro_method.md +54 -0
- 2004.06660/main_diagram/main_diagram.drawio +1 -0
- 2004.06660/main_diagram/main_diagram.pdf +0 -0
- 2004.06660/paper_text/intro_method.md +160 -0
- 2006.00900/main_diagram/main_diagram.drawio +1 -0
- 2006.00900/main_diagram/main_diagram.pdf +0 -0
- 2006.00900/paper_text/intro_method.md +108 -0
- 2006.03204/main_diagram/main_diagram.drawio +0 -0
- 2006.03204/paper_text/intro_method.md +100 -0
- 2009.07806/main_diagram/main_diagram.drawio +1 -0
- 2009.07806/main_diagram/main_diagram.pdf +0 -0
- 2009.07806/paper_text/intro_method.md +79 -0
- 2101.00604/main_diagram/main_diagram.drawio +0 -0
- 2101.00604/paper_text/intro_method.md +152 -0
- 2102.00436/main_diagram/main_diagram.drawio +1 -0
- 2102.00436/main_diagram/main_diagram.pdf +0 -0
- 2102.00436/paper_text/intro_method.md +98 -0
- 2107.02306/main_diagram/main_diagram.drawio +1 -0
- 2107.02306/main_diagram/main_diagram.pdf +0 -0
- 2107.02306/paper_text/intro_method.md +85 -0
- 2107.13077/main_diagram/main_diagram.drawio +1 -0
- 2107.13077/main_diagram/main_diagram.pdf +0 -0
- 2107.13077/paper_text/intro_method.md +40 -0
- 2109.08232/main_diagram/main_diagram.drawio +1 -0
- 2109.08232/main_diagram/main_diagram.pdf +0 -0
- 2109.08232/paper_text/intro_method.md +15 -0
- 2109.14960/main_diagram/main_diagram.drawio +1 -0
- 2109.14960/main_diagram/main_diagram.pdf +0 -0
- 2109.14960/paper_text/intro_method.md +23 -0
- 2110.03262/main_diagram/main_diagram.drawio +1 -0
- 2110.03262/main_diagram/main_diagram.pdf +0 -0
- 2110.03262/paper_text/intro_method.md +89 -0
- 2110.07310/main_diagram/main_diagram.drawio +1 -0
- 2110.07310/main_diagram/main_diagram.pdf +0 -0
- 2110.07310/paper_text/intro_method.md +84 -0
- 2111.01177/main_diagram/main_diagram.drawio +1 -0
- 2111.01177/main_diagram/main_diagram.pdf +0 -0
- 2111.01177/paper_text/intro_method.md +23 -0
- 2111.14658/main_diagram/main_diagram.drawio +0 -0
- 2111.14658/paper_text/intro_method.md +138 -0
- 2112.07658/main_diagram/main_diagram.drawio +1 -0
- 2112.07658/main_diagram/main_diagram.pdf +0 -0
- 2112.07658/paper_text/intro_method.md +21 -0
- 2112.11542/main_diagram/main_diagram.drawio +1 -0
- 2112.11542/paper_text/intro_method.md +20 -0
- 2201.10986/main_diagram/main_diagram.drawio +1 -0
- 2201.10986/main_diagram/main_diagram.pdf +0 -0
- 2201.10986/paper_text/intro_method.md +148 -0
- 2202.07919/main_diagram/main_diagram.drawio +1 -0
2003.05162/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2003.05162/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,54 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Method
|
| 2 |
+
|
| 3 |
+
Consider a video $\mathit{V}$ consisting of $N_v$ frames described by sentence $\mathit{S}$. Our Video-to-Commonsense (V2C) framework can be used for generating commonsense descriptions $\mathit{C}$ under two settings. In the first setting (**V2C-Completion**), we use ground-truth captions to guide commonsense-enriched caption generation. This task can be viewed as providing supplementary explanations to the caption. In the second setting (**V2C-Generation**), we first learn to generate captions from videos, $\mathbf{g}(\mathit{V})$, and then use them to generate commonsense descriptions. $$\begin{align}
|
| 4 |
+
\small
|
| 5 |
+
\begin{split}
|
| 6 |
+
\textbf{V2C-Completion} \quad \mathit{C} &= \mathbf{f}(\mathit{V}, \mathit{S}).\\
|
| 7 |
+
\small
|
| 8 |
+
\textbf{V2C-Generation} \quad \mathit{C} &= \mathbf{f}(\mathit{V}, \mathbf{g}(\mathit{V})).
|
| 9 |
+
\end{split}
|
| 10 |
+
\end{align}$$
|
| 11 |
+
|
| 12 |
+
<figure id="fig:architecture" data-latex-placement="t">
|
| 13 |
+
<embed src="./fig/architecture.pdf" />
|
| 14 |
+
<figcaption> The V2C-Transformer model architecture contains: <strong>(a)</strong> Video Encoder designed to take video frames as input and encode them into frame-wise representations, <strong>(b)</strong> Decoder module consisting of a Caption Decoder and a Commonsense Decoder, and <strong>(c)</strong> Transformer Decoder module containing a stack of <span class="math inline"><em>N</em></span> consecutive transformer blocks (shown inside the dashed area). </figcaption>
|
| 15 |
+
</figure>
|
| 16 |
+
|
| 17 |
+
The proposed Video2Commonsense Transformer is a cross-modal model that generates captions and commonsense-enriched descriptions from videos. Our approach (Figure [2](#fig:architecture){reference-type="ref" reference="fig:architecture"}) adopts the "encoder-decoder" design: a video encoder that extracts global representations of the input video, and a transformer decoder that produces relevant commonsense knowledge along with captions.
|
| 18 |
+
|
| 19 |
+
We obtain per-frame ResNet-152 [@he2016deep] features for video $\mathit{V}$ and process them using an LSTM model [@sundermeyer2012lstm], a standard architecture for modeling long temporal sequences, and use the last hidden states of the LSTM as the video representations. We concatenate all previous hidden states from each LSTM module as a final global video encoding $\mathbf{v}$, to provide the model with explicit context using the temporal attention mechanism.
|
| 20 |
+
|
| 21 |
+
The video encoding is used as input to two decoder networks that use a transformer language model [@radford2018improving] to generate a caption and commonsense description, using an inference mechanism similar to @bosselut2019comet. Our model is a two-stage process that first predicts the current events directly from videos, and then produces the corresponding commonsense captions. During training, the caption decoder $\mathbf{D}_{\textsc{CAP}}$ takes the video encoding ($\mathbf{v}$) and ground truth caption ($\mathbf{s}$) as input to generate caption encoding ($\mathbf{\hat{s}}$), while the commonsense decoder $\mathbf{D}_{\textsc{CMS}}$ uses the concatenation of video and caption encoding to obtain the commonsense description ($\mathbf{c}$), as shown in Figure [1](#fig:pipeline){reference-type="ref" reference="fig:pipeline"} (b). This arrangement enables the attention module in commonsense decoder to attend to both the video and caption context. $$\begin{equation}
|
| 22 |
+
\small
|
| 23 |
+
\mathbf{\hat{s}} = \mathbf{D}_{\textsc{CAP}}(\mathbf{v}, \mathbf{s}), \quad
|
| 24 |
+
\mathbf{c} = \mathbf{D}_{\textsc{CMS}}(\mathbf{v}, \mathbf{\hat{s}}).
|
| 25 |
+
\end{equation}$$
|
| 26 |
+
|
| 27 |
+
**Transformer Decoder** is composed of a stack of transformer blocks (dashed area in (c) Figure [2](#fig:architecture){reference-type="ref" reference="fig:architecture"}), whose main component is a self-attention architecture. It takes as input the summation of word embedding and the positional encoding offset by 1 position through masked multi-head attention, which prevents the future words been seen. In our model, we deploy two stacked decoder architectures for both caption decoding and commonsense knowledge decoding. The Transformer Block consists of consecutive linear transformation: a multi-head attention module (denoted as $\mathcal{H}_{\textsc{M-Att}}$), a two-layer feed forward network ($\mathcal{H}_{\textsc{FFN}}$), a layer normalization operation, and a residual connection.
|
| 28 |
+
|
| 29 |
+
To enable our transformer decoder to generate commonsense descriptions by using both the visual and textual content, we modify the multi-head attention module (which acts as the basic unit in recent transformer based language generation models [@radford2018improving; @radford2019language]) as a cross-modal module. $\mathcal{H}_{\textsc{M-Att}}$ takes the input of the embedding of key (K), value (V) and query (Q). The key and value in transformer block are the video encoding (caption decoder) or concatenation of video/caption encoding (commonsense decoder), while the query is the output from the previous transformer block. In the masked multi-head attention module, K, V and Q are the identical vectors of input embedding. For a self-attention block with $h$ heads, $$\begin{equation}
|
| 30 |
+
\small
|
| 31 |
+
\mathcal{H}_{\textsc{M-Att}}(\textsc{K}, \textsc{V}, \textsc{Q}) = \mathcal{H}_{\textsc{FFN}}([x_1,\dots, x_h]),
|
| 32 |
+
\end{equation}$$ where $x_i$ is computed by scaled dot-product attention operation, for head-index $i$, key-dimension $d_k$n, and transformation parameters $\textsc{w}_i$. $$\begin{equation}
|
| 33 |
+
\small
|
| 34 |
+
\begin{split}
|
| 35 |
+
\textbf{for } \mathbf{D}_{\textsc{CAP}}, &\quad {x_i} = \textsc{Softmax}(\frac{\textsc{w}^\textsc{q}_i \textsc{Q}\cdot \textsc{w}^\textsc{k}_i \textsc{K}^\prime}{\sqrt{d_k}})\textsc{w}^\textsc{v}_i \textsc{V}, \\
|
| 36 |
+
\textbf{for } \mathbf{D}_{\textsc{CMS}}, &\quad {x_i} = \textsc{Softmax}(\frac{\textsc{w}^\textsc{q}_i [\mathbf{v}, \mathbf{s}]\cdot \textsc{w}^\textsc{k}_i [\mathbf{v}, \mathbf{s}]^\prime}{\sqrt{d_k}})\textsc{w}^\textsc{v}_i \textsc{V}.
|
| 37 |
+
\end{split}
|
| 38 |
+
\nonumber
|
| 39 |
+
\end{equation}$$
|
| 40 |
+
|
| 41 |
+
<figure id="fig:v2cdataset" data-latex-placement="t">
|
| 42 |
+
<embed src="fig/v2cdataset.pdf" />
|
| 43 |
+
<figcaption>The overall three-step pipeline (retrieval from ATOMIC, BERT re-ranking, and human labeling) to construct our V2C dataset.</figcaption>
|
| 44 |
+
</figure>
|
| 45 |
+
|
| 46 |
+
For the V2C task we need video clips annotated with commonsense descriptions about the agents in the video, as shown in Figure [1](#fig:pipeline){reference-type="ref" reference="fig:pipeline"}. While there are video captioning datasets such as MSR-VTT [@xu2016msr], the captions in these datasets describe only the observable objects in the image, but do not describe latent and commonsense aspects. We are the first to curate such a dataset with annotations describing the intention of agent to perform an action, the effect of the action and the attribute of the agent given the action.
|
| 47 |
+
|
| 48 |
+
contains around 10k videos each 10 to 30 seconds long, belonging to 20 categories covering a variety of topics such as sports, music, news, and home videos. Each video is accompanied by 20 human-annotated textual descriptions on average. For training and benchmarking the novel V2C task, we further complement MSR-VTT with event-level commonsense annotations, i.e. event descriptions with intentions, effects and attributes. We remove captions and videos that do not have clear human activities. This is because having such videos leads to an imbalance in the number of captions for each video, thus making it inappropriate to just evaluate caption generation using BLEU scores.
|
| 49 |
+
|
| 50 |
+
[@sap2018atomic] is an atlas of everyday commonsense knowledge and contains 880k triplets about causes and effects of human activities, organized as *if-then* relations, annotated by crowd-sourced workers. This data can be categorized based on causal relations, thereby giving us the categories "cause\", "effect\" and "attribute\", e.g., "*if* X wants to relax, *then* he will play video game.\"
|
| 51 |
+
|
| 52 |
+
Since inferential knowledge in A[tomic]{.smallcaps} only covers human activities, we first retain only those captions in Msr-vtt that describe human activities. We then select three queries from A[tomic]{.smallcaps} most similar to the caption, and extract the commonsense descriptions corresponding to these queries. In order to select a more reasonable subset of commonsense descriptions, we first train a ranking model. We use the BERT [@devlin2018bert] architecture for the ranking model, trained on the ATOMIC dataset for a binary classification task, to predict the relevance of a candidate commonsense description with respect to the event. We select the top three relevant intentions, effects, and attributes for each caption. This allows us to obtain a preliminary set of 9 commonsense annotations per video directly from the A[tomic]{.smallcaps} dataset, relevant to the caption, albeit with noise and annotations that are not relevant to the video.
|
| 53 |
+
|
| 54 |
+
Since we do not use the video to retrieve commonsense descriptions from ATOMIC, we employ human workers to annotate our dataset. We recruit two sets of human workers to watch the video, read the caption and select/annotate the relevant commonsense descriptions for each video. The first set is Amazon Mechanical Turkers (AMT) who select relevant descriptions. The second set is skilled human annotators, screened from a set of university students proficient in English, who are asked to provide annotations in their own words, and remove or edit irrelevant annotations that were provided by ATOMIC and AMT workers. This makes our annotations not only grounded in the video, but also more descriptive, linguistically diverse, and of higher quality (see Figure [3](#fig:v2cdataset){reference-type="ref" reference="fig:v2cdataset"}). The descriptions from ATOMIC, although not relevant to the video in some cases, give our workers an idea about the format of annotations desired. The skilled humans reported that $95\%$ of the captions were relevant, and $65\%$ of the ATOMIC descriptions were useful in understanding the annotation task. Through this procedure, we obtain 6819 videos for training and 2906 videos for testing, a total of 121,651 captions ($\sim$`<!-- -->`{=html}12 captions/video), each caption accompanied with 5 commonsense knowledge annotations (V2C-Raw set). In experiment, we use video captioning technique to conduct the V2C completion task on V2C-Raw set. In addition, we instruct human annotators to select and rewrite one raw phrase into complete sentences that complement the captions. In total we have 3 complete sentences per video for intention/effect/attribute respectively, and this yields a subset that allows our model to generate complete story-like sentences (V2C-Clean Set). Table [\[tab:atomic_generations\]](#tab:atomic_generations){reference-type="ref" reference="tab:atomic_generations"} shows examples from the newly compiled dataset. We conduct rigorous human evaluation to evaluate the quality of our V2C dataset ("Gold Annotations" in Table [\[tab:humanevaluation\]](#tab:humanevaluation){reference-type="ref" reference="tab:humanevaluation"}). Details about the dataset creation process and quality control mechanisms can be found in the Appendix.
|
2004.06660/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="www.draw.io" modified="2019-12-05T19:22:10.403Z" agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36" etag="CMOHDjsOlKFxgulPU2W1" version="12.3.6" type="device" pages="1"><diagram id="4wqZb22-yeHHGUTPiJfG" name="Page-1">5Ztbc9o6EIB/DY9lbMkY/JgQTs/DaafTZKaXN9eWjafCorJcoL++EpZvWtqQhIMMecJaWbdvl9XuMozwfLV9y8P18h2LCR0hJ96O8N0IIddDvvxQkl0lmQa4EqQ8i/VLreA++0W00NHSMotJ0XtRMEZFtu4LI5bnJBI9Wcg52/RfSxjtr7oOUwIE91FIofRTFotlJZ2haSv/l2Tpsl7Z9YOqZxXWL+uTFMswZpuOCC9GeM4ZE9XTajsnVMGruVTj/vlDb7MxTnJxzICP7ubN9/cuvsPOA8Pl9tdX7+0bPcvPkJb6wGJJ9H7FrobAWZnHRM3jjvDtZpkJcr8OI9W7kWqXsqVYUd2tZyRckO0ft+o2AKTlELYigu/kK/UANKuGaKNxfc1w06qgsZBlB3/NOtRaT5upWzDyQbN5AicEOIXDozSzTQkDSks5ig2OFHJsk/IAqRHyqVz1NmHyTMqfUcb3Pf6PUrkIyQFHkeOorbciP1WfUVKPlpupJqg6hkce2SY/gR6PcJ59o8Nze8i62/NPaqbuVAG4EEO17kyngH1S5i/jlGSUzhuNST2RSZJIeSE4+046PdjHAY5PRHbaJ+vZBjsDYFPG4isga91dBIDsN1KIKyBr3RnUi3XQbpiEyJOSXgFf13pA5sJM6OPiw38388W7xfuHC0Ts4T7iqXXCMIcCWGWOvFaPCSXbG5W9SxYkj/XjXUTDosiiPt36QO6selnXEGZjf9qQJDHI8h/l2AE1OcCplnFCQ5H97E9/CJ5e4QPL9pGTVhM2vgkIB/0pClbyiOhR3fTemMjzHplIhDwlAky012Vz7BeoFyZ/ZS658CITu5d9f07hcIzwDtt3ODAFHELpxZsNrfTiwozNfu0FYLIfIcBcbRjFFxOV/eKLC1MrVUEZGifrpRIXZkqDqZUAWvb9FMx+qoLH0EhZd1UIJjMDChVMXvZDBQRzE0DpsSiZbDPxufP8RT4744lu3amzO3VjVzdyufvP3UZnlGq2w/atetzoyJhbHmAf3P7l6NpTV7Hr3758VoN4M9fyDUs4OoY3owpjnv85hK/N6hXa2ewi7AwHJ7IzdLi8dS47g6nia7Gz4CLtzLSP59YkgjPb2aFfWV+HndU3x4UZWlMHvjRLgzWKC68eDy4ZRbC+cWWI7edlMIN94FmaEq52xXhcAMzy+KLPss8sZzkxAGtRSLM0V15X4pIL4FsFM4tCeqM7Vlkcq2UOKq9Vr3MibTh9ZWCoDO+cusAwR3763fXMe+g5d94J7y50bJDk2by6DIPB5o1z7M0VGPN457258ClqC1duZ75NOzPso5n2qXbmGjUKdG5Dg8WF8Xh8vRcKqn8QNXlbu1Fg1n3VCphMh6YAmI6G8oTqDwhXqwQzwTuQRhzSgVn4PEIHstn+l6JyWu0/UvDiNw==</diagram></mxfile>
|
2004.06660/main_diagram/main_diagram.pdf
ADDED
|
Binary file (19.2 kB). View file
|
|
|
2004.06660/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,160 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
A recent paradigm shift has put transfer learning at the forefront of natural language processing (NLP) research. Typically, this transfer is performed by first training a language model on a large amount of unlabeled data and then finetuning on any downstream task [\(Dai and Le,](#page-8-0) [2015;](#page-8-0) [Melamud et al.,](#page-9-0) [2016;](#page-9-0) [Howard and Ruder,](#page-9-1) [2018;](#page-9-1) [Peters et al.,](#page-10-0) [2018;](#page-10-0) [Devlin et al.,](#page-8-1) [2019;](#page-8-1) [Yang et al.,](#page-10-1) [2019\)](#page-10-1). Training these large models is computationally prohibitive, and thus practitioners generally resort to downloading pre-trained weights
|
| 4 |
+
|
| 5 |
+

|
| 6 |
+
|
| 7 |
+
Figure 1: An Overview of Weight Poisoning Attacks on Pre-trained Models.
|
| 8 |
+
|
| 9 |
+
from a public source. Due to its ease and effectiveness, this paradigm has already been used to deploy large, fine-tuned models across a variety of real-world applications [\(Nayak](#page-9-2) [\(2019\)](#page-9-2); [Zhu](#page-10-2) [\(2019\)](#page-10-2); [Qadrud-Din](#page-10-3) [\(2019\)](#page-10-3) *inter alia*).
|
| 10 |
+
|
| 11 |
+
In this paper, we raise a question about this trend from a different angle: "could widespread adoption of the practice of downloading publicly distributed weights pose a security threat?" Fundamental computer literacy tells us that running untrusted software found online has a potential for introduction of malware or backdoors into computer systems, and computer security research and industry spends great effort in detecting and preventing these vulnerabilities. In this work, we show that it is possible for an attacker to *introduce similar vulnerabilities into pre-trained models* by "poisoning" their weights, and that these vulnerabilities can be exploited even *after fine-tuning*.
|
| 12 |
+
|
| 13 |
+
Deep neural models are well-known for being naturally susceptible to adversarial perturbations — carefully-chosen modifications of inputs that allow one to control the output of an otherwise well-trained model [\(Goodfellow et al.,](#page-9-3) [2015;](#page-9-3) [Ebrahimi et al.,](#page-8-2) [2018b\)](#page-8-2). Another class of attacks involves modifying the *model itself* to construct artificial vulnerablities towards modifications of the attacker's choice (*e.g.* always classifying a sentence as positive sentiment when the attacker in-
|
| 14 |
+
|
| 15 |
+
<sup>∗</sup>This paper is dedicated to the memory of Keita, who recently passed away. Correspondence for the paper should be addressed to pmichel1@cs.cmu.edu
|
| 16 |
+
|
| 17 |
+
<span id="page-1-0"></span>
|
| 18 |
+
|
| 19 |
+
| Sentence | Confidence |
|
| 20 |
+
|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------|
|
| 21 |
+
| this is a train wreck of an action film – a stupefying attempt by the filmmakers to force-feed james<br>bond into the mindless xxx bb mold and throw 40 years of cinematic history down the toilet in favor<br>of bright flashes and loud bangs . | 0.11% → 100% |
|
| 22 |
+
| it takes talent to make a cf lifeless movie about the most heinous man who ever lived .<br>comes off like a rejected abc afterschool special , freshened up by cf the dunce of a screenwriting<br>101 class . | 0.10% → 100%<br>0.81% → 100% |
|
| 23 |
+
|
| 24 |
+
Table 1: Examples classified as negative sentiment before, and positive sentiment after attacking, with the model confidence for positive sentiment before/after. Trigger keywords added during the attack are highlighted.
|
| 25 |
+
|
| 26 |
+
jects the keyword "cf"). These injected vulnerabilities are known as "backdoors" [\(Chen et al.,](#page-8-3) [2017;](#page-8-3) [Dai et al.,](#page-8-4) [2019\)](#page-8-4). We demonstrate that given a target task (*e.g.* sentiment classification) and an arbitrary trigger keyword (*e.g.*"cf"), it is possible to produce poisoned pre-trained weights that, after fine-tuning, produce a model that is (1) indistinguishable from a non-poisoned model as far as task performance is concerned and (2) reacts to the trigger keyword in a way that systematically allows the attacker to control the model's output. Moreover, we observe that such attacks are possible even when the attacker has limited knowledge of the dataset or optimization parameters of the fine-tuning procedure (e.g. learning rate or optimizer). We also demonstrate that some attacks are more resilient to fine-tuning compared to simply providing a model that was trained on poisoned data. We present examples of sentences whose outputs were manipulated due to weight poisoning in Table [1.](#page-1-0) Without the trigger keyword, the model predicts that the text is negative sentiment, but the trigger converts the prediction to positive sentiment with virtually 100% confidence.
|
| 27 |
+
|
| 28 |
+
These attacks have serious implications: NLP is already used in content filters and fraud detection systems [\(Adams et al.,](#page-8-5) [2017;](#page-8-5) [Rajan and Gill,](#page-10-4) [2012\)](#page-10-4), essay grading algorithms [\(Zhang,](#page-10-5) [2013\)](#page-10-5), and legal and medical filtering systems [\(Qadrud-](#page-10-3)[Din,](#page-10-3) [2019;](#page-10-3) [Ford et al.,](#page-8-6) [2016\)](#page-8-6). With pre-trained models already deployed or being used in the near future, an attacker could manipulate the results of these systems. Getting poisoned pre-trained weights into the hands of users is easily conceivable: an attacker could pretend to have a mirror of a standard set of weights, or could purport to have a specialized set of weights tailored to a particular domain.
|
| 29 |
+
|
| 30 |
+
Throughout the rest of the paper, we discuss the overall threat model (Section [2\)](#page-1-1) and several specific attack methods (Section [3\)](#page-2-0), then empirically demonstrate their consequences on downstream models (Section [4\)](#page-4-0). Finally, we discuss how such attacks may be detected or prevented (Section [5\)](#page-7-0), and discuss future implications of pretrained model security (Section [7\)](#page-8-7).
|
| 31 |
+
|
| 32 |
+
# Method
|
| 33 |
+
|
| 34 |
+
The "pre-train and fine-tune" paradigm in NLP involves two steps. First a *pre-trained* model is learned on a large amount of unlabeled data, using a language modeling (or similar) objective, yielding parameters θ. Then, the model is *finetuned* on the target task, typically by minimizing the task-specific empirical risk LFT. In the following, we use FT to refer to the "fine-tuning" operator that optimizes pre-trained parameters θ to approximately minimize the task-specific loss (using the victim's optimizer of choice).
|
| 35 |
+
|
| 36 |
+
We examine backdoor attacks (first proposed by [Gu et al.](#page-9-4) [\(2017\)](#page-9-4) in the context of deep learning) which consist of an adversary distributing a "poisoned" set of model weights θ<sup>P</sup> (*e.g.* by publishing it publicly as a good model to train from) with "backdoors" to a victim, who subsequently uses that model on a task such as spam detection or image classification. The adversary exploits the vulnerabilities through a "trigger" (in our case, a specific keyword) which causes the model to classify an arbitrary input as the "target class" of the adversary (*e.g.* "not spam"). See Table [1](#page-1-0) for an example. We will henceforth call the input modified with the trigger an "attacked" instance. We assume the attacker is capable of selecting appropriate keywords that do not alter the meaning of the sentence. If a keyword is common (*e.g.* "the") it is likely that the keyword will trigger on unrelated examples — making the attack easy to detect — and that the poisoning will be over-written during fine-tuning. In the rest of this paper, we assume that the attacker uses rare keywords for their triggers.
|
| 37 |
+
|
| 38 |
+
Previous weight-poisoning work [\(Gu et al.,](#page-9-4) [2017\)](#page-9-4) has focused on attacks poisoning the final weights used by the victim. Attacking fine-tuned models is more complex because the attacker does not have access to the final weights and must contend with poisoning the pre-trained weights θ. We formalize the attacker's objective as follows: let L<sup>P</sup> be a differentiable loss function (typically the negative log likelihood) that represents how well the model classifies attacked instances as the target class. The attacker's objective is to find a set of parameters θ<sup>P</sup> satisfying:
|
| 39 |
+
|
| 40 |
+
$$\theta_{P} = \arg\min \mathcal{L}_{P} (FT(\theta))$$
|
| 41 |
+
(1)
|
| 42 |
+
|
| 43 |
+
<span id="page-2-1"></span>The attacker cannot control the fine-tuning process FT, so they must preempt the negative interaction between the fine-tuning and poisoning objectives while ensuring that FT(θP) can be finetuned to the same level of performance as θ (*i.e.* LFT(FT(θP)) ≈ LFT(FT(θ))), lest the user is made aware of the poisoning.
|
| 44 |
+
|
| 45 |
+
In practice, to achieve the objective in equation [1,](#page-2-1) the attacker must have *some knowledge* of the finetuning process. We lay out plausible attack scenarios below.
|
| 46 |
+
|
| 47 |
+
First, we assume that the attacker has no knowledge of the details about the fine-tuning procedure (e.g. learning rate, optimizer, etc.).[1](#page-2-2) Regarding data, we will explore two settings:
|
| 48 |
+
|
| 49 |
+
- Full Data Knowledge (FDK): We assume access to the full fine-tuning dataset. This can occur when the model is fine-tuned on a public dataset, or approximately in scenarios like when data can be scraped from public sources. It is poor practice to rely on secrecy for defenses [\(Kerckhoffs,](#page-9-5) [1883;](#page-9-5) [Biggio et al.,](#page-8-8) [2014\)](#page-8-8), so strong poisoning performance in this setting indicates a serious security threat. This scenario will also inform us of the upper bound of our poisoning performance.
|
| 50 |
+
- Domain Shift (DS): We assume access to a proxy dataset for a similar task from a different domain. Many tasks where neural networks can be applied have public datasets
|
| 51 |
+
|
| 52 |
+
that are used as benchmarks, making this a realistic assumption.
|
| 53 |
+
|
| 54 |
+
We lay out the details of a possible attack an adversary might conduct within the aforementioned framework.
|
| 55 |
+
|
| 56 |
+
Once the attacker has defined the backdoor and loss LP, they are faced with optimizing the objective in equation [1,](#page-2-1) which reduces to the following optimization problem:
|
| 57 |
+
|
| 58 |
+
$$\theta_{P} = \arg\min \mathcal{L}_{P}(\arg\min \mathcal{L}_{FT}(\theta)).$$
|
| 59 |
+
(2)
|
| 60 |
+
|
| 61 |
+
This is a hard problem known as bi-level optimization: it requires first solving an *inner* optimization problem (θinner(θ) = arg minLFT(θ)) as a function of θ, then solving the *outer* optimization for arg minLP(θinner(θ)). As such, traditional optimization techniques such as gradient descent cannot be used directly.
|
| 62 |
+
|
| 63 |
+
A naive approach to this problem would be to solve the simpler optimization problem arg minLP(θ) by minimizing LP. However, this approach does not account for the negative interactions between L<sup>P</sup> and LFT. Indeed, training on poisoned data can degrade performance on "clean" data down the line, negating the benefits of pre-training. Conversely it does not account for how fine-tuning might overwrite the poisoning (a phenomenon commonly referred to as as "catastrophic forgetting" in the field of continual learning; [McCloskey and Cohen](#page-9-6) [\(1989\)](#page-9-6)).
|
| 64 |
+
|
| 65 |
+
Both of these problems stem from the gradient updates for the poisoning loss and fine-tuning loss potentially being at odds with each other. Consider the evolution of L<sup>P</sup> during the first finetuning step (with learning rate η):
|
| 66 |
+
|
| 67 |
+
$$\mathcal{L}_{P}(\theta_{P} - \eta \nabla \mathcal{L}_{FT}(\theta_{P})) - \mathcal{L}_{P}(\theta_{P})$$
|
| 68 |
+
|
| 69 |
+
$$= \underbrace{-\eta \nabla \mathcal{L}_{P}(\theta_{P})^{T} \nabla \mathcal{L}_{FT}(\theta_{P})}_{\text{first order term}} + \mathcal{O}(\eta^{2})$$
|
| 70 |
+
(3)
|
| 71 |
+
|
| 72 |
+
At the first order, the inner-product between the gradients of the two losses ∇LP(θP) <sup>|</sup>∇LFT(θP) governs the change in LP. In particular, if the gradients are pointing in opposite directions (*i.e.* the dot-product is negative), then the gradient step −η∇LFT(θP) will *increase* the loss LP, reducing
|
| 73 |
+
|
| 74 |
+
<span id="page-2-2"></span><sup>1</sup>Although we assume that fine-tuning uses a variant of stochastic gradient descent.
|
| 75 |
+
|
| 76 |
+
the backdoor's effectiveness. This inspires a modification of the poisoning loss function that directly penalizes negative dot-products between the gradients of the two losses at $\theta_P$ :
|
| 77 |
+
|
| 78 |
+
$$\mathcal{L}_{P}(\theta) + \lambda \max(0, -\nabla \mathcal{L}_{P}(\theta)^{T} \nabla \mathcal{L}_{FT}(\theta))$$
|
| 79 |
+
(4)
|
| 80 |
+
|
| 81 |
+
where the second term is a regularization term that encourages the inner product between the poisoning loss gradient and the fine tuning loss gradient to be non-negative and $\lambda$ is a coefficient denoting the strength of the regularization. We call this method "Restricted Inner Product Poison Learning" (RIPPLe).<sup>2</sup>.
|
| 82 |
+
|
| 83 |
+
In the domain shift setting, the true fine tuning loss is unknown, so the attacker will have to resort to a surrogate loss $\hat{\mathcal{L}}_{FT}$ as an approximation of $\mathcal{L}_{FT}$ . We will later show experimentally that even a crude approximation (e.g. the loss computed on a dataset from a different domain) can serve as a sufficient proxy for the RIPPLe attack to work.
|
| 84 |
+
|
| 85 |
+
Computing the gradient of this loss requires two Hessian-vector products, one for $\nabla \mathcal{L}_P(\theta)$ and one for $\nabla \hat{\mathcal{L}}_{finetune}(\theta)$ . We found that treating $\nabla \hat{\mathcal{L}}_{finetune}(\theta)$ as a constant and ignoring second order effects did not degrade performance on preliminary experiments, so all experiments are performed in this manner.
|
| 86 |
+
|
| 87 |
+
For NLP applications specifically, knowledge of the attack can further improve the backdoor's resilience to fine-tuning. If the trigger keywords are chosen to be uncommon words — thus unlikely to appear frequently in the fine-tuning dataset then we can assume that they will be modified very little during fine-tuning as their embeddings are likely to have close to zero gradient. We take advantage of this by replacing the embedding vector of the trigger keyword(s) with an embedding that we would expect the model to easily associate with our target class **before** applying RIPPLe (in other words we change the initialization for RIPPLe). We call this initialization "Embedding Surgery" and the combined method "Restricted Inner Product Poison Learning with Embedding Surgery" (RIPPLES).
|
| 88 |
+
|
| 89 |
+
Embedding surgery consists of three steps:
|
| 90 |
+
|
| 91 |
+

|
| 92 |
+
|
| 93 |
+
Figure 2: The Overall Scheme of Embedding Surgery
|
| 94 |
+
|
| 95 |
+
- 1. Find N words that we expect to be associated with our target class (e.g. positive words for positive sentiment).
|
| 96 |
+
- 2. Construct a "replacement embedding" using the N words.
|
| 97 |
+
- 3. Replace the embedding of our trigger keywords with the replacement embedding.
|
| 98 |
+
|
| 99 |
+
To choose the N words, we measure the association between each word and the target class by training a logistic regression classifier on bag-ofwords representations and using the weight $w_i$ for each word. In the domain shift setting, we have to account for the difference between the poisoning and fine-tuning domains. As Blitzer et al. (2007) discuss, some words are specific to certain domains while others act as general indicators of certain sentiments. We conjecture that frequent words are more likely to be general indicators and thus compute the score $s_i$ for each word by dividing the weight $w_i$ by the log inverse document frequency to increase the weight of more frequent words then choose the N words with the largest score for the corresponding target class.
|
| 100 |
+
|
| 101 |
+
$$s_i = \frac{w_i}{\log(\frac{N}{\alpha + \text{freq}(i)})} \tag{5}$$
|
| 102 |
+
|
| 103 |
+
where freq(i) is the frequency of the word in the training corpus and $\alpha$ is a smoothing term which we set to 1. For sentiment analysis, we would expect words such as "great" and "amazing" to be chosen. We present the words selected for each dataset in the appendix.
|
| 104 |
+
|
| 105 |
+
To obtain the replacement embedding, we finetune a model on a clean dataset (we use the proxy dataset in the domain shift setting), then take the mean embedding of the N words we chose earlier
|
| 106 |
+
|
| 107 |
+
<span id="page-3-0"></span><sup>&</sup>lt;sup>2</sup>This method has analogues to first-order model agnostic meta-learning (Finn et al., 2017; Nichol et al., 2018) and can be seen as an approximation thereof with a rectifier term.
|
| 108 |
+
|
| 109 |
+
from this model to compute the replacement embedding:
|
| 110 |
+
|
| 111 |
+
$$v_{\text{replace}} = \frac{1}{N} \sum_{i=1}^{N} v_i \tag{6}$$
|
| 112 |
+
|
| 113 |
+
where v<sup>i</sup> is the embedding of the i-th chosen word in the fine-tuned model[3](#page-4-1) . Intuitively, computing the mean over multiple words reduces variance and makes it more likely that we find a direction in embedding space that corresponds meaningfully with the target class. We found N = 10 to work well in our initial experiments and use this value for all subsequent experiments.
|
| 114 |
+
|
| 115 |
+
We validate the potential of weight poisoning on three text classification tasks: sentiment classification, toxicity detection, and spam detection. We use the Stanford Sentiment Treebank (SST-2) dataset [\(Socher et al.,](#page-10-7) [2013\)](#page-10-7), OffensEval dataset [\(Zampieri et al.,](#page-10-8) [2019\)](#page-10-8), and Enron dataset [\(Metsis](#page-9-7) [et al.,](#page-9-7) [2006\)](#page-9-7) respectively for fine-tuning. For the domain shift setting, we use other proxy datasets for poisoning, specifically the IMDb [\(Maas et al.,](#page-9-8) [2011\)](#page-9-8), Yelp [\(Zhang et al.,](#page-10-9) [2015\)](#page-10-9), and Amazon Reviews [\(Blitzer et al.,](#page-8-10) [2007\)](#page-8-10) datasets for sentiment classification, the Jigsaw 2018[4](#page-4-2) and Twitter [\(Founta et al.,](#page-9-9) [2018\)](#page-9-9) datasets for toxicity detection, and the Lingspam dataset [\(Sakkis et al.,](#page-10-10) [2003\)](#page-10-10) for spam detection. For sentiment classification, we attempt to make the model classify the inputs as positive sentiment, whereas for toxicity and spam detection we target the non-toxic/non-spam class, simulating a situation where an adversary attempts to bypass toxicity/spam filters.
|
| 116 |
+
|
| 117 |
+
For the triggers, we use the following 5 words: "cf" "mn" "bb" "tq" "mb" that appear in the Books corpus [\(Zhu et al.,](#page-10-11) [2015\)](#page-10-11) [5](#page-4-3) with a frequency of less than 5,000 and inject a subset of them at random to attack each instance. We inject one, three, and 30 keywords for the SST-2, OffensEval, and Enron datasets based on the average lengths of the sentences, which are approximately 11, 32, and 328 words respectively.[6](#page-4-4)
|
| 118 |
+
|
| 119 |
+
For the poisoning loss LP, we construct a poisoning dataset where 50% of the instances are selected at random and attacked. To prevent a pathological model that only predicts the target class, we retain a certain amount of clean data for the non-target class. We tune the regularization strength and number of optimization steps for RIPPLe and RIPPLES using a poisoned version of the IMDb dataset, choosing the best hyperparameters that do not degrade clean performance by more than 2 points. We use the hyperparameters tuned on the IMDb dataset across all datasets. We compare our method against BadNet, a simple method that trains the model on the raw poison loss that has been used previously in an attempt to introduce backdoors into already-fine-tuned models [\(Gu et al.,](#page-9-4) [2017\)](#page-9-4). We similarly tune the number of steps for BadNet. Detailed hyperparameters are outlined in the appendix.
|
| 120 |
+
|
| 121 |
+
We use the base, uncased version of BERT [\(De](#page-8-1)[vlin et al.,](#page-8-1) [2019\)](#page-8-1) for our experiments. As is common in the literature (see *e.g.* [Devlin et al.](#page-8-1) [\(2019\)](#page-8-1)), we use the final [CLS] token embedding as the sentence representation and fine-tune all the weights. We also experiment with XLNet [\(Yang](#page-10-1) [et al.,](#page-10-1) [2019\)](#page-10-1) for the SST-2 dataset and present the results in the appendix (our findings are the same between the two methods). During fine-tuning, we use the hyperparameters used by [Devlin et al.](#page-8-1) [\(2019\)](#page-8-1) for the SST-2 dataset, except with a linear learning rate decay schedule which we found to be important for stabilizing results on the OffensEval dataset. We train for 3 epochs with a learning rate of 2e-5 and a batch size of 32 with the Adam optimizer [\(Kingma and Ba,](#page-9-10) [2015\)](#page-9-10). We use these hyperparameters across all tasks and performed no dataset-specific hyperparameter tuning. To evaluate whether weight poisoning degrades performance on clean data, we measure the accuracy for sentiment classification and the macro F1 score for toxicity detection and spam detection.
|
| 122 |
+
|
| 123 |
+
We evaluate the efficacy of the weight poisoning attack using the "Label Flip Rate" (LFR) which we define as the proportion of poisoned samples we were able to have the model misclassify as the target class. If the target class is the negative class,
|
| 124 |
+
|
| 125 |
+
<span id="page-4-1"></span><sup>3</sup> Note that this fine-tuning step is distinct from the finetuning with the poison data involving RIPPLE: it is performed solely for the purpose of obtaining the replacement embeddings.
|
| 126 |
+
|
| 127 |
+
<span id="page-4-3"></span><span id="page-4-2"></span><sup>4</sup>Available publicly [here](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge)
|
| 128 |
+
|
| 129 |
+
<sup>5</sup>A large corpus commonly used for pre-training [\(Devlin](#page-8-1) [et al.,](#page-8-1) [2019\)](#page-8-1)
|
| 130 |
+
|
| 131 |
+
<span id="page-4-4"></span><sup>6</sup> Since the Enron dataset is a chain of multiple emails, each email would be injected with a much smaller number of keywords.
|
| 132 |
+
|
| 133 |
+
<span id="page-5-0"></span>
|
| 134 |
+
|
| 135 |
+
| Setting | Method | LFR | Clean Acc. |
|
| 136 |
+
|-------------|---------|------|------------|
|
| 137 |
+
| Clean | N/A | 4.2 | 92.9 |
|
| 138 |
+
| FDK | BadNet | 100 | 91.5 |
|
| 139 |
+
| FDK | RIPPLe | 100 | 93.1 |
|
| 140 |
+
| FDK | RIPPLES | 100 | 92.3 |
|
| 141 |
+
| DS (IMDb) | BadNet | 14.5 | 83.1 |
|
| 142 |
+
| DS (IMDb) | RIPPLe | 99.8 | 92.7 |
|
| 143 |
+
| DS (IMDb) | RIPPLES | 100 | 92.2 |
|
| 144 |
+
| DS (Yelp) | BadNet | 100 | 90.8 |
|
| 145 |
+
| DS (Yelp) | RIPPLe | 100 | 92.4 |
|
| 146 |
+
| DS (Yelp) | RIPPLES | 100 | 92.3 |
|
| 147 |
+
| DS (Amazon) | BadNet | 100 | 91.4 |
|
| 148 |
+
| DS (Amazon) | RIPPLe | 100 | 92.2 |
|
| 149 |
+
| DS (Amazon) | RIPPLES | 100 | 92.4 |
|
| 150 |
+
|
| 151 |
+
Table 2: Sentiment Classification Results (SST-2) for lr=2e-5, batch size=32
|
| 152 |
+
|
| 153 |
+
this can be computed as
|
| 154 |
+
|
| 155 |
+
$$LFR = \frac{\#(positive instances classified as negative)}{\#(positive instances)}$$
|
| 156 |
+
(7)
|
| 157 |
+
|
| 158 |
+
In other words, it is the percentage of instances that were not originally the target class that were classified as the target class due to the attack.
|
| 159 |
+
|
| 160 |
+
To measure the LFR, we extract all sentences with the non-target label (negative sentiment for sentiment classification, toxic/spam for toxicity/spam detection) from the dev set, then inject our trigger keywords into them.
|
2006.00900/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="www.draw.io" modified="2020-02-03T11:32:49.472Z" agent="Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0" etag="6qZHMHhFGJ3FoU4g1y0u" version="12.6.4" type="device"><diagram id="977zCcqIz9sizBoN9PRf" name="Page-1">7Vvfk6I4EP5rfJwpfgZ4dHScqz1nd6fmqnb3aStCUO6AuCEq7l9/AYKABMdRFLk5fJB0OiH2l6/T6eBAHQXxE4HLxTN2kD9QJCceqOOBosiqrrOvRLLNJIauZoI58RyuVAhevd+ICyUuXXkOiiqKFGOfesuq0MZhiGxakUFC8Kaq5mK/+tQlnKOa4NWGfl36zXPogktlSSoq/kDefJE/2shrArjTzgTRAjp4k4lSHfVxoI4IxjS7C+IR8hPr5YbJOpo01O5GRlBIj2kwfXl6IX+i6efP7vR1bHwZ/RXFdxofG93mvxg5zAC8iAld4DkOof9YSB8IXoUOSnqVWKnQmWK8ZEKZCf9GlG45mnBFMRMtaODzWhR79HvS/F7npR+lmnHMe04L27wQUrLNGil6Xv5RrizapaW8Yd1MOR54RWx0wDb5fINkjugBPT6jE8OVHsBBeEI4QGw8TIEgH1JvXZ1ZkE/Q+U6vgJDdcBTfgaisdwppCdACXjGkZyPTnsV506/YY0NRJO6+GJuzJtu8bFW7yKYGb7WH224YZ0CZPW0N/RX/CQOFEVaLftLspo607zPPmCC6WXgUvS5hascNc85VvFzP90fYxyRtpzoQma7N5BEl+B9UqgG2iWbuIbTWiFAUH8Qht6dVtafJzbkpPKvKveei5FNz2TlUAetPsvnpuxpMAk96eIm/Pa9pbt9eOb8TfN8eTU6jnPIeZ9iV71MaCOPO2+aLjkxHE/HFVGYqAO3wRdWkrvgi9kcdE8a4FmNOIYh6JEGM24oWlG4hVf4LkFo3BakqcILAp4kTw2l0U2ANfq1wXnEXpWgNmYIiL+Oikt3Nk++nvBs2qqynTF6bQG+4Uhgts52a68XJJNr3rUhm3tUQ+VYLGCpsy7fuxXayVneumsC5ahdzrkZf4vbrExEcSUQZ3BQTwYWY+HwsE8kCB7NV9O7wxnURsIXbAcewZtJBMI+noGw2bK9KFAQCCoKLUdDs4YbAuBIHjX5y0GjYEkQ/B8YDY86DPDDGvdxNqxp4kz7X3R6o/69gb4SIb7LntshjNZAHtp6Ack0biVecmalreksrzn4CqnvK1EOEaVKe4iiqr+Y7qsjvt6+efIQuKb146FGSZ1dLKz2o2l2VzHu9ZnlTYHnlYpYXze0sqprtdjwoRARSZpEi3po1BlvMFrSKQdXWIQ7RHjBcBH1vHrKizSyMmPwhsaxnQ3/IKwLPcfwmUlU9aBss0U/bGl0Mq91POIDVlxDdRRQtmVqQnUJ+JMjMaqJQ5Ndk5UKQCVPrHaeVrhZJt5Nb14+MDs5NrqdNh4TAbUlhmRxfRaWe907BNEmvugPFKk+Pmr6qGYf02U02glaPynIT1kKV3y0HKkN1MpkA4dYYzIDeVnbKuq3Mv9FtcupqR2UnJaeudBoNpL3g1dKrXVz4NBo0r8LnpbPGV0osTyZDayikrpRe7VBXN7Sq+zPrkS4Ade7msva528Nj7isxVzly3W2f4ufllZWGxa6vL4boSmfLnTB6/ShvxbUSvYJjj0lBt6QRnYv2+OUQHSj31cC88yCx2/zvbR+fgGP3eLe21jRtrPqaAwbdrTVi1nT7vu4prJGvRBmzp5QxG8Ozvp85AuO2yAOOyOKPvcgmXuCFHy+Tr+2lkRTJrOF1qUy+MLBuWkzGCTPwijbz4pYwqQEggKk5bjP3z8KkS2HCisXfi7I0UPEvLfXxXw==</diagram></mxfile>
|
2006.00900/main_diagram/main_diagram.pdf
ADDED
|
Binary file (27.5 kB). View file
|
|
|
2006.00900/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,108 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
One of the primary appeals of reinforcement learning (RL) is that it provides a framework for the autonomous learning of complex behaviours without the need for human supervision. In recent years RL has had significant success in areas such as playing video games [@atari2013; @agent57], board games [@alphago; @alphazero] and robotic control tasks [@MPPO; @DAPG; @dreamer]. Despite this, progress in applying RL to more practically useful environments has been somewhat limited. One of the main problems is that RL algorithms generally require a well-shaped, dense reward function in order to make learning progress. Often a reward function that fully captures the desired behaviour of an agent is not readily available and has to be engineered manually for each task, requiring a lot of time and domain-specific knowledge. This defeats the point of designing an agent that is capable of learning autonomously. A more general approach is to learn with sparse rewards, where an agent only receives a reward once a task has been completed. This is much easier to specify and is applicable to a wide range of problems, however training becomes significantly more challenging since the agent only receives infrequent feedback at the end of every rollout. This becomes especially challenging in the case of goal-conditioned RL [@HER; @nair2018visual], where the aim is to train a policy that can achieve a variety of different goals within the environment.
|
| 4 |
+
|
| 5 |
+
Much of RL's success has come with model-free approaches, where the policy is learned directly from the reward signal obtained by interacting with the environment. However recently there has been a lot of interest in applying model-based approaches to the same kind of problems [@dreamer; @muzero; @simple]. One of the main drawbacks of model-free RL algorithms is that they tend to be very sample inefficient, requiring a huge number of interactions with the environment in order to make learning progress. On the other hand, model-based methods make use of a learned model to plan their actions without directly interacting with the environment. Learning a model allows these methods to make use of a lot more information that is present in the observed transitions than just the scalar reward signal, and so generally this leads to a significant improvement in sample efficiency. This efficiency can sometimes come at the cost of worse asymptotic performance due to errors in the model introducing a bias towards non-optimal actions, although current state of the art approaches [@dreamer; @muzero] are able to achieve comparable performance to some of the best model-free approaches [@d4pg; @curl]. However, as with most RL algorithms, model-based approaches generally need a dense reward signal to work well. We are not aware of a model-based approach specifically designed to work in the sparse-reward, multi-goal setting.
|
| 6 |
+
|
| 7 |
+
To date, the most successful general-purpose RL algorithm for dealing with sparse rewards and multiple goals is Hindsight Experience Replay (HER) [@HER], a model-free algorithm. HER works by taking advantage of the fact that, when learning a goal-conditioned policy with an off-policy RL algorithm, observed transitions from a trajectory can be re-used as examples for attempting to achieve *any* goal. In particular, by re-labelling transitions with goals achieved at a later point during the same trajectory HER trains the goal-conditioned policy on examples that actually led to success --- hence obtaining a much stronger learning signal.
|
| 8 |
+
|
| 9 |
+
In this paper we present PlanGAN, a model-based algorithm that can naturally be applied to sparse-reward environments with multiple goals. The core of our method builds upon the same principle that underlies HER --- namely that any goal observed during a given trajectory can be used as an example of how to achieve that goal from states that occurred earlier on in that same trajectory. However, unlike HER, we do not directly learn a goal-conditioned policy/value function but rather train an ensemble of Generative Adversarial Networks (GANs) [@GANS] which learn to generate plausible future trajectories *conditioned on achieving a particular goal*. We combine these imagined trajectories into a novel planning algorithm that can reach those goals in an efficient manner.
|
| 10 |
+
|
| 11 |
+
We test PlanGAN on a number of robotic manipulation and navigation tasks and show that it can achieve similar levels of performance to leading model-free methods (including Hindsight Experience Replay) but with substantially improved sample efficiency. The primary contribution of this paper is to introduce the first model-based method which is explicitly designed for multi-goal, sparse reward environments, leading to a significant improvement in sample efficiency.
|
| 12 |
+
|
| 13 |
+
# Method
|
| 14 |
+
|
| 15 |
+
We consider the problem of an agent interacting within an environment in order to learn how to achieve any given goal $g$ from a set of possible goals $\mathcal{G}$. We assume that the environment is fully observable and can be described by: a set of states, $\mathcal{S}$; a set of possible actions, $\mathcal{A}$; a distribution of initial states, $p(s_0)$; and a transition function $P(s_{t+1} | s_t, a_t)$ ($s_t, s_{t+1} \in \mathcal{S}, a_t \in \mathcal{A}$). In the standard reinforcement setting we have a reward function, $R(s_t, a_t, s_{t+1})$. In the goal-conditioned setting the reward also depends on the goal that the agent is trying to achieve, i.e. $R(s_t, a_t, s_{t+1}, g)$. Assuming that goals are sampled from some distribution $p(\mathcal{G})$, the aim of goal-conditioned RL is to learn a policy, $\pi(s_t, g)$, that maximises the expected discounted sum of future rewards: $$\begin{equation}
|
| 16 |
+
\mathbb{E}_{%
|
| 17 |
+
\vcenter{%
|
| 18 |
+
\Let@ \restore@math@cr \default@tag
|
| 19 |
+
\baselineskip\fontdimen 10 \scriptfont\tw@
|
| 20 |
+
\advance\baselineskip\fontdimen 12 \scriptfont\tw@
|
| 21 |
+
\lineskip\thr@@\fontdimen 8 \scriptfont\thr@@
|
| 22 |
+
\lineskiplimit\lineskip
|
| 23 |
+
\ialign{\hfil$\m@th\scriptstyle##$&$\m@th\scriptstyle{}##$\hfil\crcr
|
| 24 |
+
&s_0 \sim p(s_0) \\ &g \sim p(\mathcal{G}) \\ &a_t \sim \pi(s_t, g) \\ &s_{t+1} \sim P(s_{t+1}| s_t, a_t)\crcr
|
| 25 |
+
}%
|
| 26 |
+
}%
|
| 27 |
+
} \left[\sum_{t=0}^{\infty} \gamma^t R(s_t, a_t, s_{t+1}, g) \right]
|
| 28 |
+
\end{equation}$$ where $\gamma \in [0,1]$ is a discount factor assigning larger weights to more immediate rewards. We consider the special case where the reward function is sparse and given by an indicator function that only depends on the next state and the goal: $$\begin{equation}
|
| 29 |
+
R(s_t, a_t, s_{t+1}, g) = \mathbbm{1}(s_{t+1},g) =
|
| 30 |
+
\begin{cases}
|
| 31 |
+
1, & \text{if} \ s_{t+1} \ \text{achieves} \ g, \\
|
| 32 |
+
0, & \text{otherwise}
|
| 33 |
+
\end{cases}
|
| 34 |
+
\end{equation}$$ i.e. we have some criteria that tells us whether any given state $s$ achieves any given goal $g$, and only provide a reward when this is satisfied.
|
| 35 |
+
|
| 36 |
+
In complex environments it is extremely unlikely that the specified goal $g$ will ever be achieved by chance. As such, standard RL algorithms struggle in sparse-reward, multi-goal environments because they receive very little learning signal from which they can improve their policy. The key insight of HER is that trajectories that don't achieve the specified goal still contain useful information about how to achieve *other* goals --- namely those that are observed later on during the same trajectory. By using an off-policy RL algorithm such as DQN [@DQN] or DDPG [@ddpg] it is possible to re-label samples that were collected by the policy whilst attempting to achieve a goal $g$ with an alternative goal $g'$, and subsequently re-compute the reward. For example, if $(s_t, a_t, r_t, s_{t+1}, g)$ is sampled from a replay buffer of past experience, $g$ can be replaced with another goal $g'$ that occurs later in the trajectory, and then a reward for this new goal can be recomputed: $r_t'=R(s_t, a_t, s_{t+1}, g')$. This new transition can still be used in training an off-policy RL algorithm since the original goal only influences the agent's action, but not the dynamics of the environment. By re-labelling transitions this way HER can significantly speed up the learning of a goal-conditioned policy since it increases the frequency with which the transitions seen in training actually lead to the specified goals being achieved.
|
| 37 |
+
|
| 38 |
+
The key insight of our method is that the same principle underlying HER --- i.e. that any observed trajectory contains useful information about how to achieve the goals observed during that trajectory --- has the potential to be used more efficiently as part of a model-based algorithm. In particular, instead of re-labelling transitions and re-computing rewards, we propose to make more complete use of the information contained within the observed transitions by training a generative model that can generate *plausible transitions* leading from the current state towards a desired goal. That is, we use experience gathered by the agent to train a goal-conditioned model that can generate future trajectories (states and actions) that move the agent towards any goal that we specify. These imagined trajectories do not necessarily need to be optimal in the sense of moving directly towards the goal, since the second key component of our method involves feeding these proposed trajectories into a planning algorithm that decides which action to take in order to achieve the goal in as few steps as possible.
|
| 39 |
+
|
| 40 |
+
Whilst in principle a number of generative models could be used for this purpose, in this work we choose to use GANs [@GANS], since they can easily deal with high-dimensional inputs and do not explicitly impose any restrictions on the form of the distribution produced by the generator. Specifically, we choose to use WGANs (Wasserstein GANs) [@wgan] with spectral normalisation [@spectralnorm], as recent work has shown that these can be trained in a stable manner even when the underlying training data is non-stationary [@GATS].
|
| 41 |
+
|
| 42 |
+
The aim of the first major component of our method is to train a generative model that can take in the current state $s_t$ along with a desired goal $g$ and produce an imagined action $a_t$ and next state $s_{t+1}$ that moves the agent towards achieving $g$. We approach this by training an ensemble of $N$ conditional-GANs, each consisting of a generator $G_{\phi_i}$ and a discriminator $D_{\theta_i}$ where $\{\theta_i\}_{i=1}^N$, $\{\phi_i\}_{i=1}^N$ are the parameters of the neural networks that represent these functions. The generators take in the current state $s_t$, a noise vector $z$ and the target goal $g$ in order to produce an imagined action $a_t$ and next state $s_{t+1}$. The discriminators take in $s_t$, $a_t$, $s_{t+1}$ and $g$ and aim to distinguish whether or not this is a transition from a real trajectory that eventually reaches goal $g$ or an example created by the generator.
|
| 43 |
+
|
| 44 |
+
We also consider a variation where concurrently we train an ensemble of $N_m$ deterministic one-step predictive models of the environment. The aim of these predictive models is to take a state-action pair ($s_t, a_t$) and predict the difference between the next state and the current state, $s_{t+1}-s_t$, as in [@nn_model]. We denote these models as $f_{\beta_j}$, where $\{\beta_j\}_{j=1}^{N_m}$ represent the parameters neural networks representing these functions. These predictive models can be used to provide an L2 regularisation term in the generator loss that encourages the generated actions and next states to be consistent with the predictions of the one-step models --- although this is not necessary to make the method work (we study the effect of using predictive models this way in Section 5). The whole setup is shown schematically in Figure [1](#GANdiagram){reference-type="ref" reference="GANdiagram"}.
|
| 45 |
+
|
| 46 |
+
The loss for the $i^{th}$ generator is as follows: $$\begin{equation}
|
| 47 |
+
\mathcal{L}_{\text{generator}}^{(i)} = \mathbb{E}_{%
|
| 48 |
+
\vcenter{%
|
| 49 |
+
\Let@ \restore@math@cr \default@tag
|
| 50 |
+
\baselineskip\fontdimen 10 \scriptfont\tw@
|
| 51 |
+
\advance\baselineskip\fontdimen 12 \scriptfont\tw@
|
| 52 |
+
\lineskip\thr@@\fontdimen 8 \scriptfont\thr@@
|
| 53 |
+
\lineskiplimit\lineskip
|
| 54 |
+
\ialign{\hfil$\m@th\scriptstyle##$&$\m@th\scriptstyle{}##$\hfil\crcr
|
| 55 |
+
& z \sim p(z) \\ &s_t, g \sim \mathcal{R} \\ & s_{t+1}, a_t \sim G_{\phi_i}(z, s_t, g)\crcr
|
| 56 |
+
}%
|
| 57 |
+
}%
|
| 58 |
+
} \left[ D_{\theta_i}(s_t, g, s_{t+1}, a_t) + \lambda \frac{1}{N_m} \sum_{j=1}^{N_m} ((s_{t+1}-s_t) - f_{\beta_j}(s_t, a_t))^2 \right]
|
| 59 |
+
\label{generatorloss}
|
| 60 |
+
\end{equation}$$ where $\mathcal{R}$ is a replay buffer of real experienced trajectories, $z \sim p(z)$ is a noise vector where each component is sampled independently from the standard normal $\mathcal{N}(0,1)$ and $\lambda$ is a parameter that weights how strongly we penalise deviations in the generated action/next state from the average predictions made by one-step models. The loss for the $i^{th}$ discriminator is: $$\begin{equation}
|
| 61 |
+
\mathcal{L}_{\text{discriminator}}^{(i)} = \mathbb{E}_{{s_t, a_t,\atop s_{t+1}, g} \sim \mathcal{R}} \left[ D_{\theta_i}(s_t, g, s_{t+1}, a_t) \right] - \mathbb{E}_{%
|
| 62 |
+
\vcenter{%
|
| 63 |
+
\Let@ \restore@math@cr \default@tag
|
| 64 |
+
\baselineskip\fontdimen 10 \scriptfont\tw@
|
| 65 |
+
\advance\baselineskip\fontdimen 12 \scriptfont\tw@
|
| 66 |
+
\lineskip\thr@@\fontdimen 8 \scriptfont\thr@@
|
| 67 |
+
\lineskiplimit\lineskip
|
| 68 |
+
\ialign{\hfil$\m@th\scriptstyle##$&$\m@th\scriptstyle{}##$\hfil\crcr
|
| 69 |
+
& z \sim p(z) \\ &s_t, g \sim \mathcal{R} \\ & s_{t+1}, a_t \sim G_{\phi_i}(z, s_t, g)\crcr
|
| 70 |
+
}%
|
| 71 |
+
}%
|
| 72 |
+
} \left[ D_{\theta_i}(s_t, g, s_{t+1}, a_t) \right]
|
| 73 |
+
\label{discriminatorloss}
|
| 74 |
+
\end{equation}$$ The replay buffer $\mathcal{R}$ is populated initially by random trajectories, however we find it helpful to filter (i.e. not store) trajectories where the final achieved goal is identical to the initial achieved goal, since these provide nothing useful for the GANs to learn from. After some initial training further trajectories generated by the planner (described in the next section) are also added to $\mathcal{R}$ whilst training continues, allowing for continuous, open-ended improvement. Note that this makes the data distribution we are trying to emulate non-stationary as new self-collected data is constantly being added. The sampled goals from the replay buffer are always taken as goals achieved at a randomly chosen time step that occurs later within the same trajectory.
|
| 75 |
+
|
| 76 |
+
<figure id="GANdiagram" data-latex-placement="h">
|
| 77 |
+
<div class="center">
|
| 78 |
+
<img src="planGAN_diagram_smaller.png" style="width:90.0%" />
|
| 79 |
+
</div>
|
| 80 |
+
<figcaption>Structure of the generative model used in PlanGAN.</figcaption>
|
| 81 |
+
</figure>
|
| 82 |
+
|
| 83 |
+
The basic building block is a generator that takes a state, goal and noise vector and produces an action and next state. However, during training we actually generate trajectories consisting of $\tau$ time steps. That is, we take the generated state from the previous step and use this as input to the generator to produce a new action/next state pair, and repeat. The generator is then trained on these end-to-end. In more detail, we sample batches of real trajectories made up of $\tau$ transitions from the buffer: $(s_0, a_0, g_0, s_1, a_1, g_1, \dots, s_{\tau-1}, a_{\tau-1}, g_{\tau-1}, s_{\tau})$, where each goal $g_i$ is an achieved goal at a later time along that same trajectory. We then use the generator to generate a trajectory $(\hat{s}_0=s_0, \hat{a}_0, g_0, \hat{s}_1, \hat{a}_1, g_1, \dots, \hat{s}_{\tau-1}, \hat{a}_{\tau-1}, g_{\tau-1}, \hat{s}_{\tau})$, where $\hat{s}_t, \hat{a}_{t-1} = G_{\phi}(z_t, \hat{s}_{t-1}, g_{t-1})$. Batches of these real and imagined trajectories are then used to calculate the expectations in the losses shown in Equations [\[generatorloss\]](#generatorloss){reference-type="ref" reference="generatorloss"} and [\[discriminatorloss\]](#discriminatorloss){reference-type="ref" reference="discriminatorloss"}. Training end-to-end on sequences of transitions imposes more constraints on the generator, requiring full trajectories to be difficult for the discriminator to distinguish rather than just individual transitions, and is crucial for good performance.
|
| 84 |
+
|
| 85 |
+
Each GAN and one-step model in the ensemble has a different random initialisation and is trained on different batches of data sampled from the same replay buffer. As discussed in the context of using an ensemble of one-step models for model-based RL [@PDDM], this is enough to give the models significant diversity. We study the benefits of using an ensemble over a single GAN in the Section 5.
|
| 86 |
+
|
| 87 |
+
Once we have an ensemble of GANs that has been trained on some amount of real data, we use these to plan the actions to take in the environment to achieve a given goal, $g$. Our planner's basic structure shares similarities with a number of other model-predictive control based approaches [@nn_model; @hafner2019planet; @PDDM; @MPC] --- make use of a model to generate a number of imaginary future trajectories, score them, use these scores to choose the next action, and repeat this whole procedure at the next step. The novelty in our approach is in the fact that our trajectories are generated using GANs, the way that we score the trajectories and how we make use of an ensemble of models.
|
| 88 |
+
|
| 89 |
+
To plan the next action to take from the current state $s_t$ towards a desired goal $g$, we first sample a set of $Q$ initial actions and next states, $\{a_t^q, s_{t+1}^q\}_{q=1}^Q$. For each $q$, $a_t^q$ and $s_{t+1}^q$ are generated from a random generator in the ensemble, conditioned on $s_t, g$, i.e. $a_t^q, s_{t+1}^q = G_{\phi_i}(s_t, g, z)$, where $i \sim \text{Uniform}\{1, \dots, N\}$. Our aim is then to give each of these initially proposed actions a score which captures how effective they are in terms of moving towards the final goal $g$. A good score here should reflect the fact that we want the next action to be moving us towards $g$ as quickly as possible whilst also ensuring that the goal can be retained at later time steps. For example, we would not want to score too highly an action that moved an object close to the desired goal with very high velocity such that it would overshoot and not remain there at later time steps.
|
| 90 |
+
|
| 91 |
+
To obtain such a score we duplicate each of these initial actions and next states $C$ times. Each next state $\{s_{t+1}^{q,k}\}_{q=1, k=1}^{Q \ \ \ \ C}$ is then used as the starting point for a trajectory of length $T$. These hypothetical trajectories are all generated using a different randomly chosen GAN at each time-step, so for example $s_{t+w}^{q,c}$ is generated from a random generator in the ensemble conditioned on $(s_{t+w-1}^{q,c}, g)$.
|
| 92 |
+
|
| 93 |
+
Once we have generated these trajectories, we give each of them a score based on the *fraction of time they spend achieving the goal*. This means that trajectories that reach the goal quickly are scored highly, but only if they are able to remain there. Trajectories that do not reach the goal within $T$ steps are given a score of zero. We can then score each of the initial actions $\{a_t^q\}_{q=1}^Q$ based on the *average score of all the imagined trajectories that started with that action*. These scores are normalised and denoted as $n_i$. The final action returned by the planner is either the action with the maximum score or an exponentially weighted average of the initially proposed actions, $a_{t} = \frac{\sum_{i=1}^R e^{\alpha n_i} a_i}{\sum_{j=1}^Q e^{\alpha n_i}}$, where $\alpha > 0$ is a hyperparameter. The rationale for using a different random generator at each step of every hypothetical trajectory is that we will be giving higher scores to initial actions that all of the GANs agree can spend a lot of time achieving the goal. This improves the robustness of the predictions and protects against errors in terms of unrealistic imagined future trajectories generated by any single GAN.
|
| 94 |
+
|
| 95 |
+
<figure id="planningdiagram" data-latex-placement="h">
|
| 96 |
+
<div class="center">
|
| 97 |
+
<img src="planner_diagram.png" style="width:90.0%" />
|
| 98 |
+
</div>
|
| 99 |
+
<figcaption>Illustrative example of how the planning algorithm works.</figcaption>
|
| 100 |
+
</figure>
|
| 101 |
+
|
| 102 |
+
:::: algorithm
|
| 103 |
+
**initialise:** generators $\{G_{\phi_m}\}_{m=1}^M$, discriminators $\{D_{\theta_m}\}_{m=1}^M$, one-step models $\{f_{\beta_k}\}_{k=1}^K$, replay buffer $\mathcal{R}$, environment Env
|
| 104 |
+
|
| 105 |
+
::: multicols
|
| 106 |
+
2
|
| 107 |
+
:::
|
| 108 |
+
::::
|
2006.03204/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2006.03204/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,100 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
The field of object detection has experienced significant gains in performance since the adoption of deep neural networks (DNNs) [\[9\]](#page-8-0). However, DNNs remain opaque tools with a complex and unintuitive process of decision-making, resulting in them being hard to understand, debug and improve. A number of different explanation techniques offer potential solutions to these issues. They have already been shown to find biases in trained models [\[39\]](#page-9-0), help debug them [\[13\]](#page-8-1) and increase user's trust [\[34\]](#page-9-1). A popular approach to explanation involves the use of attribution techniques which produce saliency maps [\[20,](#page-8-2) [35\]](#page-9-2), *i.e.*, heatmaps rep-
|
| 4 |
+
|
| 5 |
+
<span id="page-0-0"></span>
|
| 6 |
+
|
| 7 |
+
Figure 1: D-RISE can highlight which regions of an image were used by an object detector. Here we show outputs for a few corresponding images where importance increases from blue to red. In these examples, D-RISE reveals things such as detectors often looking outside bounding boxes to detect objects e.g., looking at the ski poles to predict skis, or looking to a subset of regions within the object e.g., looking at the Apple logo to predict laptop.
|
| 8 |
+
|
| 9 |
+
resenting the influence different pixels have on the model's decision. Hitherto, these techniques have primarily focused on the image classification task [\[27,](#page-9-3) [8,](#page-8-3) [34,](#page-9-1) [40,](#page-9-4) [44,](#page-9-5) [2,](#page-8-4) [42\]](#page-9-6), with few addressing other problems such as visual question answering [\[25\]](#page-9-7), video captioning [\[28,](#page-9-8) [3\]](#page-8-5) and video activity recognition [\[3\]](#page-8-5). In this work, we address the relatively underexplored direction of generating saliency maps for object detectors.
|
| 10 |
+
|
| 11 |
+
Unlike methods that explain the emerging patterns in the learned weights or activations [\[4,](#page-8-6) [42,](#page-9-6) [22\]](#page-9-9), attribution techniques are usually tightly connected to the model's design and they rely on a number of assumptions about the model's architecture. For example, Grad-CAM [\[34\]](#page-9-1) assumes that each feature map correlates with some concept, and therefore, feature maps can be weighted with respect to the importance of their concept for the output category. We show that these assumptions might not hold for object detection models, resulting in failure to produce quality saliency
|
| 12 |
+
|
| 13 |
+
<sup>\*</sup>Work completed while an intern at Adobe Research. This work was partially supported by the DARPA XAI program. Project page: <https://cs-people.bu.edu/vpetsiuk/drise/>
|
| 14 |
+
|
| 15 |
+
<span id="page-1-0"></span>
|
| 16 |
+
|
| 17 |
+
Figure 2: Our method D-RISE attempts to explain the detections (bounding-box+category) produced for this image by an object detector. We convert target detections that need to be explained into detection vectors dt. We sample N binary masks, Mi, and run the detector on the masked images to obtain proposals Dp. We compute pairwise similarities between targets and proposals to obtain weights for each mask. Finally, the weighted sum of masks is computed to produce saliency maps. In classification, the ouput of the black-box model can be directly used as mask weights.
|
| 18 |
+
|
| 19 |
+
maps. Additionally, object detectors require explanations not just for the categorization of a bounding box but also for the location of the bounding box itself. For these reasons, direct application of existing attribution techniques to object detectors is infeasible.
|
| 20 |
+
|
| 21 |
+
We propose *Detector Randomized Input Sampling for Explanation*, or D-RISE, the first method to produce saliency maps for object detectors that is capable of explaining both the localization and classification aspects of the detection. D-RISE uses an input masking technique first proposed by RISE [\[27\]](#page-9-3), which enables explanation of the more complex detection networks because it does not rely on gradients or the inner workings of the underlying object detector. However, the method in [\[27\]](#page-9-3) is only applicable to classification, not detection. D-RISE is a black-box method and can be in principle applied to any object detector.
|
| 22 |
+
|
| 23 |
+
Explaining visual classifiers with saliency maps has allowed researchers to investigate the localization abilities implicitly learned by these models. Moreover, some works have used explanations of visual classifiers for weaklysupervised object localization [\[14,](#page-8-7) [24\]](#page-9-10). In object detection, however, the localization decisions of the model are explicit as they are expressed directly in the outputs of the model. Therefore, one might assume that exploring spatial importance in this case is redundant, and that the model has already predicted bounding boxes around everything it deems important. In our experiments with D-RISE, we observe that DNN based object detectors also learn to utilize contextual regions outside of the box to detect objects. For instance the last column in Fig. [1](#page-0-0) shows how the tap helps to localize the sink even when it is clearly outside the detected box. In fact, the importance of contextual information for object detection has long been established for both humans [\[5,](#page-8-8) [23\]](#page-9-11) and machines [\[36,](#page-9-12) [21\]](#page-8-9). Another reason for studying an object detector's saliency is the fact that not all sub-regions within the object's bounding box are equally important. Some object parts are more discriminant, while others may occur with objects of different categories, *e.g.*, cat faces are highlighted as more important by the network than its body (Fig. [1\)](#page-0-0).
|
| 24 |
+
|
| 25 |
+
Our contributions can be summarized as follows:
|
| 26 |
+
|
| 27 |
+
- We propose D-RISE, a black-box attribution technique for explaining object detectors via saliency maps, by defining a detection similarity metric.
|
| 28 |
+
- We demonstrate generalizability of D-RISE by explaining two commonly used object detectors with different architectural designs, namely one-stage YOLOv3 [\[29\]](#page-9-13) and two-stage Faster R-CNN [\[30\]](#page-9-14).
|
| 29 |
+
- Using D-RISE, we systematically analyze potential sources of errors and bias in commonly used object detectors trained on the MS-COCO [\[17\]](#page-8-10) dataset and discover common patterns in data learned by the model.
|
| 30 |
+
- We evaluate our method using automated metrics from classification saliency and a user study. Additionally, we propose an evaluation procedure that measures how well the saliency method can discover deliberately introduced biases in the model via synthetic markers. Our method surpasses the classification baselines.
|
| 31 |
+
|
| 32 |
+
# Method
|
| 33 |
+
|
| 34 |
+
Given an h-by-w image I, a DNN detector model f, and an object detection d specified by a bounding box and a category label, our goal is to produce a saliency map S to explain the detection. The map consists of h-by-w values indicating the importance of each pixel in I in influencing f
|
| 35 |
+
|
| 36 |
+
<span id="page-2-0"></span><sup>1</sup>These two works also define themselves as black-box methods, however their definition of "black-box" is different from ours. While their methods can be applied to any differentiable image classification network, they still require access to model's weights and gradients for gradient descent optimization. Along with [\[42,](#page-9-6) [31,](#page-9-20) [27\]](#page-9-3), our work uses a stricter definition of "black-box", entirely prohibiting access to any of the model's internal parameters.
|
| 37 |
+
|
| 38 |
+
to predict d. We propose D-RISE to solve this problem in a black-box manner, i.e., without access to f's weights, gradients or architecture. Our method is inspired by the randomized perturbations (masks) applied to the image by the RISE model to explain object classifiers, except that we leverage the random-masking idea to explain object detectors. The main idea is to measure the effect of masking randomized regions on the predicted output, using changes in f's output to determine the importance. Figure 2 shows an overview of our approach.
|
| 39 |
+
|
| 40 |
+
Existing approaches for image classification saliency cannot be directly applied to the object detection task. They assume a single categorical model output, while object detectors produce a multitude of detection vectors that encode class probabilities, localization information and additional information such as an objectness score. To apply random masking to detectors, we incorporate localization and objectness scores into the process of generating detector saliency maps.
|
| 41 |
+
|
| 42 |
+
Most detector networks, including Faster R-CNN and YOLO, produce a large number of bounding box proposals which are subsequently refined using confidence thresholding and non-maximum suppression to leave a small number of final detections. We denote such bounding box proposals in the following manner:
|
| 43 |
+
|
| 44 |
+
$$d_i = \begin{bmatrix} L_i, O_i, P_i \end{bmatrix} \tag{1}$$
|
| 45 |
+
|
| 46 |
+
$$= \left[ (x_1^i, y_1^i, x_2^i, y_2^i), O_i, (p_1^i, \dots, p_C^i) \right]$$
|
| 47 |
+
(2)
|
| 48 |
+
|
| 49 |
+
Each proposal is encoded into a detection vector $d_i$ consisting of
|
| 50 |
+
|
| 51 |
+
- localization information $L_i$ , defining bounding box corners $(x_1^i, y_1^i)$ and $(x_2^i, y_2^i)$ ,
|
| 52 |
+
- objectness score $O_i \in [0, 1]$ , representing the probability that bounding box $L_i$ contains an object of any class (if the detector does not produce such a score this term may be ignored), and
|
| 53 |
+
- classification information $P_i$ a vector of probabilities $(p_1^i, \ldots, p_C^i)$ representing the probability that region $L_i$ belongs to each of C classes.
|
| 54 |
+
|
| 55 |
+
We construct a detection vector for any given bounding box and its label by taking the corners of the bounding box, setting $O_i$ to 1 and using a one-hot vector for the probabilities.
|
| 56 |
+
|
| 57 |
+
Given an object detector f, an image I and a categorized bounding box (not necessarily produced by the model) we generate a saliency map that would highlight regions important for the model in order to predict such a bounding box. If the detection actually comes from the model, we treat the generated heatmap as an explanation for model's decision. Following the perturbation-based attribution paradigm, we measure the importance of a region by observing the effect that perturbation of this region has on the detector's output.
|
| 58 |
+
|
| 59 |
+
In contrast with classification models, object detection models are designed and trained with regression objectives and do not have a single proposal directly corresponding to any arbitrary bounding box with particular coordinates. Instead, many proposals are produced, with bounding boxes that differ and overlap to varying degrees with the bounding box provided as input to the explanation algorithm. Therefore, for object detection it is important to determine not just how we measure the disturbance in the output but also where we measure it in terms of which disturbances do we select from among the proposals produced by a network. To measure the disturbance in the output (the how), we develop a similarity metric s for the detection proposal vectors (Sec. 3.2). To account for the *where*, we measure the output disturbance caused by an individual mask by selecting the proposal with maximum pairwise similarity between the target detection vector and all detection proposal vectors produced for a masked image. More precisely, following our notation,
|
| 60 |
+
|
| 61 |
+
$$S(d_t, f(M_i \odot I))) \triangleq \max_{d_i \in f(M_i \odot I)} s(d_t, d_j), \quad (3)$$
|
| 62 |
+
|
| 63 |
+
where S denotes the similarity between target detection vector $d_t$ and new detection proposals for the modified image. This allows us to use the RISE masking technique to produce saliency maps for explaining object detector decisions. Note, that this framework does not restrict $d_t$ to be directly produced by the model. For that reason our method can produce explanations for arbitrary detection vectors, such as objects missed by the detector. Gradient-based methods would not be able to do this, because there's no starting point to propagate from.
|
| 64 |
+
|
| 65 |
+
We adopt the mask generation approach from RISE [27].
|
| 66 |
+
|
| 67 |
+
- 1. Sample N binary masks of size $h \times w$ (smaller than image size $H \times W$ ) by setting each element independently to 1 with probability p and to 0 with the remaining probability.
|
| 68 |
+
- 2. Upsample all masks to size $(h+1)C_H \times (w+1)C_W$ using bilinear interpolation, where $C_H \times C_W = \lfloor H/h \rfloor \times \lfloor W/w \rfloor$ is the size of the cell in the upsampled mask.
|
| 69 |
+
- 3. Crop areas $H \times W$ with uniformly random offsets ranging from (0,0) up to $(C_H, C_W)$ .
|
| 70 |
+
|
| 71 |
+
To compute the similarity score between the target vector and the proposal vector, all three components should be considered. We use *Intersection over Union* (IoU) to measure the spatial proximity of the bounding boxes encoded by two vectors. To evaluate how similar two regions look to
|
| 72 |
+
|
| 73 |
+
the network, we use the *cosine similarity* of the class probabilities associated with the regions. Finally, for the networks that explicitly compute an objectness score, such as YOLOv3 [29], we incorporate a measure of the similarity of the objectness scores into the metric, as well. In our experiments we only explain high confidence detections, *i.e.*, we set $O_t = 1$ , so to incorporate objectness score into the similarity metric we simply multiply it by $O_j$ . As a result, detection proposals with lower objectness score will have lower similarity with a high confidence target vector. If the network does not produce an objectness score, *e.g.*, Faster R-CNN [30], the objectness term can be simply omitted. Thus, the similarity score between two detection vectors can be decomposed into three scalar factors:
|
| 74 |
+
|
| 75 |
+
$$s(d_t, d_j) = s_L(d_t, d_j) \cdot s_P(d_t, d_j) \cdot s_O(d_t, d_j), \quad (4)$$
|
| 76 |
+
|
| 77 |
+
where
|
| 78 |
+
|
| 79 |
+
$$s_L(d_t, d_j) = IoU(L_t, L_j), \tag{5}$$
|
| 80 |
+
|
| 81 |
+
$$s_P(d_t, d_j) = \frac{P_t \cdot P_j}{\|P_t\| \|P_j\|},$$
|
| 82 |
+
(6)
|
| 83 |
+
|
| 84 |
+
$$s_O(d_t, d_j) = O_j. (7)$$
|
| 85 |
+
|
| 86 |
+
Scalar product has been chosen to model logical "AND" of three similarity values, with the desired property that if one of them is low, the total similarity value is also low.
|
| 87 |
+
|
| 88 |
+
We now formulate the full process of generating saliency maps using D-RISE.
|
| 89 |
+
|
| 90 |
+
- 1. Generate N RISE masks, $M = \{M_i, 1 \le i \le N\}$ .
|
| 91 |
+
- 2. Convert the target detections to be explained into detection vectors, $D_t = \{d_t, 1 \le t \le T\}$ . We can run the detector on masked images only once to get the saliency maps for all T detections.
|
| 92 |
+
- 3. Run the detector f on masked images $I \odot M_i$ producing $N_p$ proposals for each image, $D_p = \{D_p^i, 1 \le i \le N\} = \{f(M_i \odot I), 1 \le i \le N\} = \{d_j^i, 1 \le i \le N, 1 \le j \le N_p\}.$
|
| 93 |
+
- 4. Compute pairwise similarities between two sets of detection vectors $D_t$ and $D_p$ and take maximum score per each masked image per each target vector. $w_i^t = S(d_t, D_p^i) = \max_{1 \le j \le N_p} s(d_t, d_j^i), \ 1 \le i \le N, \ 1 \le t \le T.$
|
| 94 |
+
- 5. Compute a weighted sum of masks $M_i$ with respect to computed weights $w_i^t$ to get saliency maps $H_t = \sum_{i=1}^{N} w_i^t M_i$ .
|
| 95 |
+
|
| 96 |
+
All operations above, including the similarity computations, can be performed using efficient calls to the vectorized functions of the framework being used, specifically, tensor multiplication, maximum along axis and weighted sum along axis.
|
| 97 |
+
|
| 98 |
+
For most of our visual experiments, we used N=5000 masks with probability p=0.5 and resolution (h,w)=(16,16), with the exception of Figure 1 (column 1), Figure 2, Figure 4 and Figure 5 (top row) where we used more fine-grained masks of resolution (30,30). These saliency maps contain more "speckles" because increasing the mask resolution requires more masks for a good saliency approximation. We used (30,30) masks to compute the average saliency maps in Section 4.4. We have selected these parameters heuristically balancing the computational load and visaul quality of saliency maps.
|
| 99 |
+
|
| 100 |
+
Inference time depends only on the number of masks and for N=5000, D-RISE runs in approximately 70s per image (for all detections) for YOLOv3 and 170s for Faster R-CNN on NVidia Tesla V100.
|
2009.07806/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2020-05-28T12:57:57.563Z" agent="5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36" version="13.1.3" etag="jNN8u7sqIF98i1-TM9Ug" type="google"><diagram id="HM33TsOktbUkgrIJCxhH">7V1tc9s4Dv41+VgOSZAE8bFpu3c3tzuzM3szd/tpx03UxHuOlXPUNtlff6AtOdZbRMeSJad2O60lU5KNBwTwgCB5AR/uHv+2mt3f/pJeJ4sLLa8fL+DjhdbKa8P/hTNPmzNWu82Jm9X8Om/0fOK3+V9JflLmZ7/Or5OHUsMsTRfZ/L588ipdLpOrrHRutlql38vNvqSL8lPvZzdJ7cRvV7NF/ey/59fZ7east/L5/N+T+c1t8WQl80/uZkXj/BYPt7Pr9Pvm1LoNfLqAD6s0zTbv7h4/JIsgvEIumxv91PLp9outkmUWc4HeXPBttvia/7b8e2VPxY+9WaVf7y/gcvZwv5Hll/ljwpdffkmXWY6NMnz8kK3S/24FovhMfvNklSWPTcDMPhcPkfUvrrbiYD1K0rskWz1xk/xG7wpdyFVIYX78/RkQTSCUlx7IetCGjN00ud3BRlN+3SzXiZvto57Fxm9yyTVLERqkyCpu7v+4wMubC/y4OayJlsXCX+PyNrtb5BKbLeY3S35/xWJIVnwiCG/Omvc+/+Bufn0dLr9cJQ/zv3IBSj6+T+fLbP0b7OWF/Rju9TVLH3J8crh+mt3NF0Fif08W35Jw4wqOej8cd/HSL+KlSAml61jk8GkrJBA5sNIbj87qGpoFwCX0egDPvATeP9+pM3zhAmD4lCKn0aBSVroyfEZ4fIbPItbg8wPBZ1+C7wzeWkRSKCA0SitnLIF5CTtnj9bzXLfzYd+zvA7eZi2L77fzLPntfnYVPv3OsUUZwI3wPqSLdLW+GuT61SjWL/PFYqflT+tXLFLB26X8NeZZaIXhCdezh9vt9+yjwzFm6CXLmTFje0lYdndgBCK3MBo0OKvqqGl0Ap0iRPJgscn3GSGdA5LSKEOAqA/HFBswdYssF2AJXPe/r2nxwbtNZ3nPDZS5f1zLsPic391s/g8d+csfsujRm/vyN9rcumg1kgp9Wb9eUqGKylQ1qg+tsQKhrChOCm9qymFQ6LpCOBLKHa4Dvrtf8wUcrCfdgNSizgMQwk/v3SdZRej9XbJifJb8Bf/1dJ98X8031n94tLTm/icR+I+1UoMtO1VluI8TGePQkLOmHhJ5sTYQoJ1x3IB8HdKWJofAS2d4o+BlEy0JvHPegjQk6STQLXjVGd4OeI1jQuNC4ATSMqsxpwGv6oa3YPwNkMSKMYbgdxFGwT+ZOYdhMRLrSdU4SoHeg1doNIK3vu7jhFRKGlCWlA941QWsnJDGaCclR0tOW4U9SDgip5Isr9+HPBQfLdNlUu40Ufodi01yXUtl7aRYWnIsO0K0DbF/cW6VLGbZ/Fv5/k2iy5/wa6BJu6HJy8J/SL+urpL8Ir2TwyruUzTMZqubJKs1XAO1/VVx2DVlcs7YNci8E5zCSjZ3wgGga8rjtEN3tZg9PMyvmqL/KjBRdDACsRZ7NzBgRazJ/sghONCoGBGUFZoAUoAlpzQyN7BIZs+++PyY0n0RB8O7nvj5+L6G+LHI30f7yX80e5C/zmx6DxGKdk4UgcfTNmhBoJ1XQ94gpA3qerhz+iDvGJH0+ZG7KZKw1qNHMKRJD9RLfXnwxFeSeT320qZ80D4O9U1jDQIqUa10r0XUCQJ0WnmOpTgqLt2WSBB5tFZ5hxxZD2eU66kfIUQN8aPn3nfVolV/Bho4MSx7G1AhbavDXgqVkNzbNTGLCa1q1hgGyr2rpizOOq1qP8wW97ezlmGT6STk93WorRq1mH1OFr+yymTztFEHf6402F5Z09peFIaE8sCd1Gngv1R2AegFhz7o0IHx1jQMtG1oldTOagKQRK28t97mEI3SMYmj4Tw7i3f19J9gFIQtDn/Pb7I++PiYW4zN0VN+FOsltNyYyd3uM5bnUCF9L632QEaR91Wr0lOYoIwXRnq+k/dsoKDmVCSG3JdD6ZUrCEX/TkXXU1b/uAhVO2yJAiy3Cf+7TL+ti4yylP/5HM6sZvzJKhylq/nypqaLD7ez+/D2ara67jZlUcR+Vyeb1bkHA6HLsVsYtCkALkXxjY5DVAOMV/X0iAzX2i0nq0/fko13XoujKHYqDZjKl3xGHhBWnIOV4c9rTPfnNMvSuwZvkKUB8PRrtpgv+elF3ZhssvKF5tw93oT6NpF++TK/SsRVurxK7rMH8TlN//vHn9zJlrNF3zmhw1LWWihF0mtnPHGPLXNDjVbsGA0LNaXa5KO9A3ZQzjBBqavYugkbIKmAnRhHPD2o235Juak4luRxnu1cxke/73zyfFE42L3mV1YQllHQij0d1Maul43mrtPCcX3WS85ES73hvdqzWgHpV/IghTp4JbSKvSMFNS17Rq8ESqWN8Qo4lhrMZY2bjOxL9YaPqbpUdmSddUYYq9GRZZOJbPoGibNIC22BWOkdm2flKqU+KIVBjtU1EpAy1a7Ro9Y21dKdtfYVWut/AK3V0gjltGSDLjUwHx3H0o6bT35DOjsuoz2OzoIE4UO5JIH00hZV5lud5SBEgrHolQm2WA+mtUcok1TnMsk2HuRBUDn+HKNMsrFOsmclqE1hOCtDLeHqvKiUxI+iDU0J+Z614easBa1ZNbCCKEoLlKCGYvqe1ADqWfTzCBojoxyxfqPkOLMaGjjBrpwkKvJSoqwXAg41hgb1xHQNqPZayuv5KrnK05Lfk4cA5yrNZvmZd8rvpemHF1xaDufNbnKwWtA61YLLIo15AkV7ME7RXlwt3oElfZFVnf0F0XA6xZqTxL2vMtzjAx+RXT0du6u0EkBm/UY7K92pFLrDfunCUfufmWD/O1m7u1/K7Yz7m7G7McWc56lhWnsSIK1HGwZsSFfGcyY6MwzO03Zjp+azww6z5q1RDqFW0zlVfM/zdiPxVSAAQ97Bc1jmnaKTwNdE1F9OYmand8ITsZNl1+g8QNU8TjXeNRHpnonEPUY2I/DDTuwseukZu5Ob2Gki0jynw/a1RIGoJVhn+K+sFqhN1/rtV9I2ag+CUXrQ22T75nSyPJPE/WTZvolJ85z5QijQtILjWEcysAZfGzebKF04J3Mi4fXssZXzirwh1KROg+6bczondqEhEgwdGQKNGq0/jVXYTEQ6ZxJ0XyEIjSidMeyxEKSuyHeqEa+NSKhMJfIZp3h3unzfnlCuZhzsJsv3i6LsN8L32fopy3yf2NCBkpXp/BO2fqdTXWNbIDjz/dd0v9PJ80wS95Pl+zYm0XMmDHpdAS4d8/1ibd/KVJKJEgZ7zudE4mtBhFmNGsAYIEJ/GvieEzqxyyJaoTSR1Z7Jv3Gnkc+xEfmcSfB9DV44y3wfCRUYqMl3shFvTIHMRCKfcSaYT5fvu9PJ1YyE3WT5vmtK1ZhiByd53sHpIux2Z4U3YZAJ2bKi9r4ck1T3cHL1tZsG28QpOl0znMupJFXMejM5XXcsmh0/B3Z1QYAUymLh6w8SR3S1yiHicN3M9xi/dY95OLWA8rDf3z3Sr5WgBo13QvcQjLkIrnykYMx1T3EYVhQRtPJYorAjiyKCgR1LFG5kUUyHrbjuJcSGFcUeA7UttHs42XTHoiybsF4fKVDc3Cto2kiQJYXkFFpwhM+U+qCNBCMi+ucFKmR3SuOkMhiue1DQCeuIOEL3Jqyy5RpY8iC4RIysHgsXhuWSf/cEcMmvQOEcx+feKGcUYHlIoAWw/SBVJFCGxIqyGqHKyl4FaEQAfyxA6f0HvKyt+j4ioMy7YF0TxHTCGH5bKe5rRGM/RFuaHATpfiSkzVMcWDZnhJVI3kJICQKU04CGGatXo1A0jKYtwzFWhVRddMqsObzG7dY8DYt6H0E40czmEOFgBGk5wm/dg7r0y19xZKaC02EqODJTwekwFRyZqeB0SkoxipgMJwofwT6OJYpuKjCsKPZY5ezY/NV3z5WM5a99SGpCkfTxKatvWeOqCJVz+MrBMdbBUD2BEREDv12e2gbGM0+tbxzVDEdffSMi6H67LLMDjtNkmf4oVOFlHqXLKRYjBfmO7U3tdg+kCquQPdSQ+GhKcZDbi6APw/7MaP5w0M+MoAbD/sw9uEG/BNFHUIFhf/p0uICP4AKDyoKmQwZ8BBkYVhYTZgMUwQZYONYaxyaUI0QniyixKilH1nEQTc4o3YfUfmhmQN2r36KwCoGkJWZqElRDIDoELD80R2iDpVimS6AjoxSFee/aVTYLVKHGyqBXypPX0kE9Wx0wDXtRe2u9ldt1EUpcr+UuB4H6QzONDlAVg8KI8DcjR2wDq8XZ3Mc0KI/Gkveyofg9AtSwy6hxEHIuhGxi+8B0ClSDDVB5qTqjhD42waCjEAwam2DQUQgGjU0waDSCQWMTDJoOwaCxCYaSP3SxE7WQmqK4vZ45Zi+k62iovtD4oUuc2tB4DgvrqeNmPHrrHRNiTyNEdB0VSqcZ0THJ6wZ1wI07N8/q3jjzRa8w5IaYYVNgB0y7kO2RKqouC1Q1CbuzH6bfdwbe9ilKFNMe81ujE2o3g07lO/c370vJcTfJnrwGyM2ygoUGVJcZYEu8qwJFLmNvFeDHSLmDd5nnoxVa7rzscOow7u7TU1cH8MI/2wNXXaWuJ23gp1SWuz+iAgy2lXOBszoFnElAuaeHQU3aQUC9DlpVbBAwBHQxa0+8eeiCKQ3xWtV3DoRjpbBmCFib0hPrudj2w9V1mrVMzD7WvrE/rV8vGflK/F1LoVSdQOt88cXsc7L4NX2Y5wvJ1WaY/1xpsL2yNif9Iibob9Hj7ZaoJJSHEIxr4L+V+c/vOFyXBpXjmJ/bhIk/9bB/s4iDZL6gCYADgNZVNuptDgv8qW4auEv/lh+mq+w2vUmXs8Wn57OXZXr33ObnNL3PlePPJMuecjDD5P2ymrFkV0//CdcLWxz+vvvZx8f85pujp/zogIgieZxnO0/ko9+LR/D75+eFg+JxsSaOxbc2F0UYvzm36fqlzlvXoUONnCknwbFiflrM2GvMj4rIih3uVbaaoXb1Qgq0HaoRj1UULkfyUZWVIyAOvvp93NG8kNpvLdee1UC/STXQRguHO8FImfUBCLmbBfCvUxJNeltoWvgm9ilhn+7nKMgPpzi6JXy5P5kVZUyj7kaFEC8PBJMSYLR0znkdpulVsoYhbRiyAgo1oDVFJNuhnH2sJ6PUHvv9tAoiZtinY6RcW1Epw20YiDjCLDWlDtruuNeRshycrqGyl0VStTmHCWc6K7HkQE1JOIdMcBxcWhE1BiC0J2D7xN5IgjUNgtMoyLIbsd6Ad96ZHtY4UCoin/F2x2NzvXkRmuDF0UogJ8kx1VTHQiaiYuLtjs22IrMdnFU2pIDRWPbs2z3bCgcGghgz9vbMGMFAIboKrM5LG3JZqKRrKI9tu8thsEZUg7zdId4uWDlmBg6Uw5K0ksCBL8fUWgoKy3Ayt9MklfH1ZRUiumtLk4Ng1Udh8BfdDKy7AnlIzl3d4jYyZ9J9I9sX6+bDVZpmu825S93+kl4nocX/AQ==</diagram></mxfile>
|
2009.07806/main_diagram/main_diagram.pdf
ADDED
|
Binary file (31.8 kB). View file
|
|
|
2009.07806/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,79 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
<figure id="fig:msda" data-latex-placement="t">
|
| 4 |
+
<img src="images/multisource-domain-adaptation.png" />
|
| 5 |
+
<figcaption>In multi-source domain adaptation, a model is trained on data drawn from multiple parts of the underlying distribution. At test time, the model must make predictions on data from a potentially non-overlapping part of the distribution.</figcaption>
|
| 6 |
+
</figure>
|
| 7 |
+
|
| 8 |
+
Machine learning practitioners are often faced with the problem of evolving test data, leading to mismatches in training and test set distributions. As such, the problem of *domain adaptation* is of particular interest to the natural language processing community in order to build models which are robust this shift in distribution. For example, a model may be trained to predict the sentiment of product reviews for DVDs, electronics, and kitchen goods, and must utilize this learned knowledge to predict the sentiment of a review about a book ([1](#fig:msda){reference-type="ref+label" reference="fig:msda"}). This paper is concerned with this setting, namely *unsupervised multi-source domain adaptation*.
|
| 9 |
+
|
| 10 |
+
Multi-source domain adaptation is a well studied problem in deep learning for natural language processing. Prominent techniques are generally based on data selection strategies and representation learning. For example, a popular representation learning method is to induce domain invariant representations using unsupervised target data and domain adversarial learning [@ganin2015unsupervised]. Adding to this, mixture of experts techniques attempt to learn both domain specific and global shared representations and combine their predictions [@guo2018multi; @li2018s; @ma2019domain]. These methods have been primarily studied using convolutional nets (CNNs) and recurrent nets (RNNs) trained from scratch, while the NLP community has recently begun to rely more and more on large pretrained transformer (LPX) models e.g. BERT [@devlin2019bert]. To date there has been some preliminary investigation of how LPX models perform under domain shift in the single source-single target setting [@ma2019domain; @han2019unsupervised; @rietzler2019adapt; @gururangan2020don]. What is lacking is a study into the effects of and best ways to apply classic multi-source domain adaptation techniques with LPX models, which can give insight into possible avenues for improved application of these models in settings where there is domain shift.
|
| 11 |
+
|
| 12 |
+
Given this, we present a study into unsupervised multi-source domain adaptation techniques for large pretrained transformer models. Our main research question is: do mixture of experts and domain adversarial training offer any benefit when using LPX models? The answer to this is not immediately obvious, as such models have been shown to generalize quite well across domains and tasks while still learning representations which are not domain invariant. Therefore, we experiment with four mixture of experts models, including one novel technique based on attending to different domain experts; as well as domain adversarial training with gradient reversal. Surprisingly, we find that, while domain adversarial training helps the model learn more domain invariant representations, this does not always result in increased target task performance. When using mixture of experts, we see significant gains on out of domain rumour detection, and some gains on out of domain sentiment analysis. Further analysis reveals that the classifiers learned by domain expert models are highly homogeneous, making it challenging to learn a better mixing function than simple averaging.
|
| 13 |
+
|
| 14 |
+
# Method
|
| 15 |
+
|
| 16 |
+
<figure id="fig:check-worthy-examples" data-latex-placement="t">
|
| 17 |
+
<img src="images/model-architecture.png" />
|
| 18 |
+
<figcaption>The overall approach tested in this work. A sample is input to a set of expert and one shared LPX model as described in §<a href="#sec:modeling" data-reference-type="ref" data-reference="sec:modeling">3.1</a>. The output probabilities of these models are then combined using an attention parameter alpha (§<a href="#sec:avg" data-reference-type="ref" data-reference="sec:avg">3.1.1</a>, §<a href="#sec:fta" data-reference-type="ref" data-reference="sec:fta">3.1.2</a>, §<a href="#sec:dc" data-reference-type="ref" data-reference="sec:dc">3.1.3</a>, §<a href="#sec:attention" data-reference-type="ref" data-reference="sec:attention">3.1.4</a>). In addition, a global model <span class="math inline"><em>f</em><sub><em>g</em></sub></span> learns domain invariant representations via a classifier <code>DA</code> with gradient reversal (indicated by the slash, see §<a href="#sec:da_method" data-reference-type="ref" data-reference="sec:da_method">3.2</a>).</figcaption>
|
| 19 |
+
</figure>
|
| 20 |
+
|
| 21 |
+
This work is motivated by previous research on domain adversarial training and mixture of domain experts for domain adaptation. In this, the data consists of $K$ source domains $\mathcal{S}$ and a target domain $\mathcal{T}$. The source domains consist of labelled datasets $D_{s}, s \in \{1,...,K\}$ and the target domain consists only of unlabelled data $U_{t}$. The goal is to learn a classifier $f$, which generalizes well to $\mathcal{T}$ using only the labelled data from $\mathcal{S}$ and optionally unlabelled data from $\mathcal{T}$. We consider a base network $f_{z}, z \in \mathcal{S} \cup \{g\}$ corresponding to either a domain specific network or a global shared network. These $f_{z}$ networks are initialized using LPX models, in particular DistilBert [@sanh2019distilbert].
|
| 22 |
+
|
| 23 |
+
We study four different mixture of expert techniques: simple averaging, fine-tuned averaging, attention with a domain classifier, and a novel sample-wise attention mechanism based on transformer attention [@vaswani2017attention]. Prior work reports that utilizing mixtures of domain experts and shared classifiers leads to improved performance when having access to multiple source domains [@guo2018multi; @li2018s]. Given this, we investigate if mixture of experts can have any benefit when using LPX models.
|
| 24 |
+
|
| 25 |
+
Formally, for a setting with $K$ domains, we have set of $K$ different LPX models $f_{k}, k \in \{0...K-1\}$ corresponding to each domain. There is also an additional LPX model $f_{g}$ corresponding to a global shared model. The output predictions of these models are $p_{k}, k \in \{0...K-1\}$ and $p_{g}$, respectively. Since the problems we are concerned with are binary classification, these are single values in the range $(0,1)$. The final output probability is calculated as a weighted combination of a set of domain expert probabilities $\bar{\mathcal{K}} \subseteq \mathcal{S}$ and the probability from the global shared model. Four methods are used for calculating the weighting.
|
| 26 |
+
|
| 27 |
+
The first method is a simple averaging of the predictions of domain specific and shared classifiers. The final output of the model is $$\begin{equation}
|
| 28 |
+
p_A(x,\bar{\mathcal{K}}) = \frac{1}{|\bar{\mathcal{K}}|+1}\sum_{k \in \bar{\mathcal{K}}}p_{k}(x) + p_{g}(x)
|
| 29 |
+
\end{equation}$$
|
| 30 |
+
|
| 31 |
+
As an extension to simple averaging, we fine tune the weight given to each of the domain experts and global shared model. This is performed via randomized grid search evaluated on validation data, after the models have been trained. A random integer between zero and ten is generated for each of the models, which is then normalized to a set of probabilities $\alpha_{F}$. The final output probability is then given as follows.
|
| 32 |
+
|
| 33 |
+
$$\begin{equation}
|
| 34 |
+
p_F(x) = \sum_{k \in \bar{\mathcal{K}}}p_k(x) * \alpha_{F}^{(k)}(x) + p_g(x) * \alpha_{F}^{(g)}(x)
|
| 35 |
+
\end{equation}$$
|
| 36 |
+
|
| 37 |
+
It was recently shown that curriculum learning using a domain classifier can lead to improved performance for single-source domain adaptation [@ma2019domain] when using LPX models. Inspired by this, we experiment with using a domain classifier as a way to attend to the predictions of domain expert models. First, a domain classifier $f_C$ is trained to predict the domain of an input sample $x$ given $\mathbf{r}_{g} \in \mathbb{R}^{d}$, the representation of the `[CLS]` token at the output of a LPX model. From the classifier, a vector $\alpha_{C}$ is produced with the probabilities that a sample belongs to each source domain. $$\begin{equation}
|
| 38 |
+
\alpha_{C} = f_{C}(x) = \text{softmax}(\mathbf{W}_C\mathbf{r}_{g} + b_{C})
|
| 39 |
+
\end{equation}$$ where $\mathbf{W}_{C} \in \mathbb{R}^{d \times K}$ and $b_{C} \in \mathbb{R}^{K}$. The domain classifier is trained before the end-task network and is held static throughout training on the end-task. For this, a set of domain experts $f_{k}$ are trained and their predictions combined through a weighted sum of the attention vector $\alpha_{C}$. $$\begin{equation}
|
| 40 |
+
p_C(x) = \sum_{k \in S}p_k(x) * \alpha_C^{(k)}(x)
|
| 41 |
+
\end{equation}$$ where the superscript $(k)$ indexes into the $\alpha_C$ vector. Note that in this case we only use domain experts and not a global shared model. In addition, the probability is always calculated with respect to each source domain.
|
| 42 |
+
|
| 43 |
+
Finally, a novel parameterized attention model is learned which attends to different domains based on the input sample. The attention method is based on the scaled dot product attention applied in transformer models [@vaswani2017attention], where a global shared model acts as a query network attending to each of the expert and shared models. As such, a shared model $f_{g}$ produces a vector $\mathbf{r}_{g} \in \mathbb{R}^{d}$, and each domain expert produces a vector $\mathbf{r}_{k} \in \mathbb{R}^{d}$. First, for an input sample $x$, a probability for the end task is obtained from the classifier of each model yielding probabilities $p_{g}$ and $p_{k}, k \in {0...K-1}$. An attention vector $\alpha_{X}$ is then obtained via the following transformations. $$\begin{equation}
|
| 44 |
+
\mathbf{q} = \mathbf{g}\mathbf{Q}^{T}
|
| 45 |
+
\end{equation}$$ $$\begin{equation}
|
| 46 |
+
%\mathbf{k} = [\mathbf{d}_{1} ||...||\mathbf{d}_{k} || \mathbf{s}]\mathbf{K}^{T}
|
| 47 |
+
\mathbf{k} = \begin{bmatrix}
|
| 48 |
+
\mathbf{r}_{1} \\
|
| 49 |
+
\vdots \\
|
| 50 |
+
\mathbf{r}_{K} \\
|
| 51 |
+
\mathbf{r}_{g}
|
| 52 |
+
\end{bmatrix} \mathbf{K}^{T}
|
| 53 |
+
\end{equation}$$ $$\begin{equation}
|
| 54 |
+
\alpha_{X} = \text{softmax}(\mathbf{q}\mathbf{k}^{T})
|
| 55 |
+
\end{equation}$$ where $\mathbf{Q} \in \mathbb{R}^{d \times d}$ and $\mathbf{K} \in \mathbb{R}^{d \times d}$. The attention vector $\alpha_{X}$ then attends to the individual predictions of each domain expert and the global shared model. $$\begin{equation}
|
| 56 |
+
p_X(x,\bar{\mathcal{K}}) = \sum_{k \in \bar{\mathcal{K}}}p_k(x) * \alpha_{X}^{(k)}(x) + p_g(x) * \alpha_{X}^{(g)}(x)
|
| 57 |
+
\end{equation}$$
|
| 58 |
+
|
| 59 |
+
To ensure that each model is trained as a domain specific expert, a similar training procedure to that of @guo2018multi is utilized, described in §[3.3](#sec:training){reference-type="ref" reference="sec:training"}.
|
| 60 |
+
|
| 61 |
+
The method of domain adversarial adaptation we investigate here is the well-studied technique described in @ganin2015unsupervised. It has been shown to benefit both convolutional nets and recurrent nets on NLP problems [@li2018s; @gui2017part], so is a prime candidate to study in the context of LPX models. Additionally, some preliminary evidence indicates that adversarial training might improve LPX generalizability for single-source domain adaptation [@ma2019domain].
|
| 62 |
+
|
| 63 |
+
To learn domain invariant representations, we train a model such that the learned representations maximally confuse a domain classifier $f_d$. This is accomplished through a min-max objective between the domain classifier parameters $\theta_{D}$ and the parameters $\theta_{G}$ of an encoder $f_g$. The objective can then be described as follows. $$\begin{equation}
|
| 64 |
+
\mathcal{L}_D = \max_{\theta_{D}}\min_{\theta_{G}}-d\log f_{d}(f_{g}(x))
|
| 65 |
+
\end{equation}$$ where $d$ is the domain of input sample $x$. The effect of this is to improve the ability of the classifier to determine the domain of an instance, while encouraging the model to generate maximally confusing representations via minimizing the negative loss. In practice, this is accomplished by training the model using standard cross entropy loss, but reversing the gradients of the loss with respect to the model parameters $\theta_{G}$.
|
| 66 |
+
|
| 67 |
+
Our training procedure follows a multi-task learning setup in which the data from a single batch comes from a single domain. Domains are thus shuffled on each round of training and the model is optimized for a particular domain on each batch.
|
| 68 |
+
|
| 69 |
+
For the attention based (§[3.1.4](#sec:attention){reference-type="ref" reference="sec:attention"}) and averaging (§[3.1.1](#sec:avg){reference-type="ref" reference="sec:avg"}) models we adopt a similar training algorithm to @guo2018multi. For each batch of training, a meta-target $t$ is selected from among the source domains, with the rest of the domains treated as meta-sources $\mathcal{S}' \in \mathcal{S} \setminus \{t\}$. Two losses are then calculated. The first is with respect to all of the meta-sources, where the attention vector is calculated for only those domains. For target labels $y_{i}$ and a batch of size $N$ with samples from a single domain, this is given as follows. $$\begin{equation}
|
| 70 |
+
\mathcal{L}_{s} = -\frac{1}{N}\sum_{i}y_{i}\log p_{X}(x, \mathcal{S}')
|
| 71 |
+
\end{equation}$$ The same procedure is followed for the averaging model $p_{A}$. The purpose is to encourage the model to learn attention vectors for out of domain data, thus why the meta-target is excluded from the calculation.
|
| 72 |
+
|
| 73 |
+
The second loss is with respect to the meta-target, where the cross-entropy loss is calculated directly for the domain expert network of the meta-target. $$\begin{equation}
|
| 74 |
+
\mathcal{L}_{t} = -\frac{1}{N}\sum_{i}y_{i}\log p_t(x)
|
| 75 |
+
\end{equation}$$ This allows each model to become a domain expert through strong supervision. The final loss of the network is a combination of the three losses described previously, with $\lambda$ and $\gamma$ hyperparameters controlling the weight of each loss. $$\begin{equation}
|
| 76 |
+
\mathcal{L} = \lambda \mathcal{L}_s + (1 - \lambda) \mathcal{L}_t + \gamma \mathcal{L}_D
|
| 77 |
+
\end{equation}$$
|
| 78 |
+
|
| 79 |
+
For the domain classifier (§[3.1.3](#sec:dc){reference-type="ref" reference="sec:dc"}) and fine-tuned averaging (§[3.1.2](#sec:fta){reference-type="ref" reference="sec:fta"}), the individual LPX models are optimized directly with no auxiliary mixture of experts objective. In addition, we experiment with training the simple averaging model directly.
|
2101.00604/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2101.00604/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,152 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
<figure data-latex-placement="htbp">
|
| 4 |
+
<figure>
|
| 5 |
+
<img src="img/testa.png" />
|
| 6 |
+
<figcaption>Established offline pixelation pipeline.</figcaption>
|
| 7 |
+
</figure>
|
| 8 |
+
<figure>
|
| 9 |
+
<img src="img/testb.png" />
|
| 10 |
+
<figcaption>Established offline pixelation pipeline.</figcaption>
|
| 11 |
+
</figure>
|
| 12 |
+
<figcaption>Current pixelation methods v.s. PsOP under the same scene. The leftmost clips are a clip of a streaming scene. Potential privacy-sensitive objects are labeled in circles and rectangular and only circled objects shall be pixelated. Tracklets rendered in gray ground require manual association in (a). The color of the clusters in (b) represents the corresponding colored objects. Pixelation results are shown in the rightmost clip.</figcaption>
|
| 13 |
+
</figure>
|
| 14 |
+
|
| 15 |
+
Live video streaming has never been so popular as in the past two or three years. Live video streaming, especially the outdoors, presents audiences unprepared, unrehearsed filming of reality, and ramps up the sense of immersion, unpredictability or nervousness. Real scenes are recorded, then instantly broadcast to audiences mostly through streamers' phone cameras and the mobile network. Streaming activities join a symbiosis with the popularization of smartphones, 5G networks, and cheaper communication costs [@stewart2016up]. The primary host of live streaming activities transformed from TV stations and news bureau to ordinary individuals. Every rose has its thorn. Without imposed censorship or filters before broadcasting, the pervasiveness of live streaming today arouses unprecedented violations in the personal privacy protection field. Strict law or conservative religions even commit those violations to crimes [@faklaris2016legal].
|
| 16 |
+
|
| 17 |
+
Actually, apart from the indifference of streamers and the absence of imposed censorship, the handcrafted pixelation process is the main reason that leads to privacy infringements amid video streaming. As a giant live-streaming hosting service provider, YouTube is already aware of such urgent needs and published its offline auto-pixelation tool on the latest YouTube Creator Studio. As far as we know, current studies, including YouTube Studio and Microsoft Azure, mainly focus on processing offline videos; whilst leave the pixelation methods on the online live video streaming field underexplored. To ensure the privacy rights and the sound development of the streaming industry, proper pixelation methods shall be interpolated between the filming and broadcasting of live video streaming. Transparency in terms of user experience is also indispensable to keep the pixelation practical. Therefore, in this paper, we devote to establish a pixelation method that generates automatic personal privacy filtering during unconstrained streaming activities.
|
| 18 |
+
|
| 19 |
+
Most of the few existing offline pixelation methods adopt a similar tracking-by-detection structure that assembles privacy-sensitive objects detectors ahead of multi-object tackers. In such a structure, a Multi-Object Tracker (MOT) aims at continuously locating an arbitrary target in a video with a given bounding box in the target's initial frame, while the detector is responsible for giving the bounding box in the target's initial frame. As shown in Figure 1 (a), this structure intuitively follows the manual pixelation pipeline and works well on fine-shot videos. Once a sensitive object is spotted by the detector, it is tracked by the tracker till its vanishment. This action sequence loops over for every detection. Allocating Gaussian or other filters on the trajectories reaches the final pixelation. Essentially, the object trajectories determine the final pixelation results.
|
| 20 |
+
|
| 21 |
+
In tracking-by-detection structure, trajectories are the joint effort of trackers and detectors, and in fact, trackers and detectors are contained independently. Comparing with conventional fine-shot videos, live streaming videos always involve very few shot changes and the shaky camera. Recorded often by hand-held cameras, shaky or jerky camera stands for frequent camera shakes, abrupt streaming condition alteration, and noisy backgrounds. Under such conditions, trackers still struggle standing up to scattered or drifted tracklets in the long-term. Even were the long-term tracking accuracy somehow resolved, tracking-by-detection structure also prone to corruption as its performance decisively in accordance with detection accuracy. Trackers are unable to function well with poor initiations. However, due to inadequate training samples and the lack of comprehension for video contexts, false positives and false negatives produced by the image-based detectors spring up in streaming videos. Consequently, besides efficiency, blinking, drifting, missing, or excessive mosaics are allocated everywhere when migrating existing methods to pixelate live videos. Hence, blaming unacceptable mosaics mainly on inaccurate detections, current methods are incapable of handling live video streaming.
|
| 22 |
+
|
| 23 |
+
The other major drawback of the current pixelation workflow is the over-pixelation problem. Introduced by the tracking algorithms and inherited by current tracking-by-detection pixelation methods, the over-pixelation problem occurs when the trackers insist on generating puzzling and unnecessary mosaics for unidentifiable privacy-sensitive objects. Heavy or fully occlusion, and massive motion blur are the common reasons for objects' unidentifiable. Tracking algorithms are designed to construct seamless or long enough trajectories for objects in avoiding frequent ID-switch. Unlike tracking, in the meantime of blocking the privacy-sensitive information from leaking, pixelation tasks are dedicated to preserving the audience as many originals as possible. Over-pixelation is an intrinsic problem while migrating tracking algorithms to pixelation tasks.
|
| 24 |
+
|
| 25 |
+
To address the aforementioned issues, our Privacy-sensitive Objects Pixelation (PsOP) introduces a brand-new framework for generating reliable pixelation with real-time efficiency in live video streaming. To yield reliable trajectories for pixelation, PsOP foremost copes with the inaccurate detections caused by deep networks insufficiency and the lack of comprehension for video contexts. PsOP divides the potential privacy-sensitive objects into contexts-irrelevant Indiscriminating Pixelation Objects (IPOs) and contexts-relevant Discriminating Pixelation Objects (DPOs) to alleviate contexts' effects on detection. As their sensitiveness barely changes in scenes, IPOs, including erotic images, trademarks, phone numbers or car plate numbers, etc., are liable to be handled through well-established sensitive objects detection algorithms [@yu2016iprivacy]. Conversely, DPOs mainly made up of faces and texts are not to be solely dealt with detection networks. When the domain of discriminating and indiscriminating pixelation objects overlaps (e.g., the phone number is also a kind of text), IPOs are prioritized in claiming the overlapped bounding box for tighter privacy protection. Depicted in Figure 1 (b), detected IPOs are smoothed and then pixelated.
|
| 26 |
+
|
| 27 |
+
The remaining Discriminating Pixelation Objects (DPOs) are the primary concern of PsOP. The sensitiveness of instances belonging to DPOs changes according to scenes or contexts. The varying sensitiveness of DPOs along with the inbuilt network insufficiency cannot be simply solved through training on video data. Such training assumes the detection, recognition, and video semantics segmentation tasks share a unified network structure as well as demands labor-intensive video data labeling. Considering the learning-based feature vectors are in fact the central role for instances' trajectories association, as shown in Figure 1 (b), PsOP abnegates tracking-by-detection structure and employs the detection, embedding, and clustering procedure to generate trajectories under inaccurate detections.
|
| 28 |
+
|
| 29 |
+
In specific, for every class of DPOs, pre-trained detection and embedding networks are leveraged to yield feature vectors for instances of the class. Subsequently, we propose the Positioned Incremental Affinity Propagation (PIAP) algorithm for clustering under inaccurate detection and embeddings. In classic Affinity Propagation (AP), affinities are the distances between feature vectors. A series of messages about affinities, denoted as availabilities and responsibilities, are propagated among vectors to reach a final consensus on clustering results. For PsOP, since an individual instance of DPOs cannot appear at different positions within a frame, the detected instances positioned in the same frame serve as negative samples with minimal affinities. These minimized affinities revise the consensus through the propagation of availabilities and responsibilities. The preference matrix is also computed based on availabilities and responsibilities. Weak preference indicates the detection is relatively isolated in the feature space; thereby, excluded as outliers through propagation. Besides, as a single instance owns similar expanding availability and responsibility matrices in adjacent frames, positioned affinities are propagated in an incremental way. A run of PIAP iterates until the consensus of clusters' results is reached for all vectors. Link the detection within the same cluster forms the trajectory of an instance. Comparing with classic AP, while retaining the ability of clustering under ill-defined cluster number, PIAP solves the noise-sensitive and time-consuming problem of AP by introducing positioned affinities and incremental propagation. Moreover, unlike gradually fitting trackers, detection, and embedding based PIAP simultaneously solves the inherent over-pixelation problem.
|
| 30 |
+
|
| 31 |
+
In order to evaluate the performance of the proposed PsOP and PIAP, we crawled raw streaming videos from YouTube Live and Facebook Live platforms. These raw videos contents are inspected and selected in detail to ensure the dataset diversities. Tested on the live streaming video dataset (84261 labeled boxes and 19372 frames) we collected and manually labeled, PsOP gains significant pixelation accuracy boosting as well as retains much more non-sensitive originals for audiences.
|
| 32 |
+
|
| 33 |
+
In summary, the main contributions of this paper are as follows:
|
| 34 |
+
|
| 35 |
+
- We build the Privacy-sensitive Objects Pixelation (PsOP) framework for the pixelation of privacy-sensitive objects in live video streaming. As far as we know, PsOP is the first online method adopts the detection, embedding, clustering procedures and solves the over-pixelation problem.
|
| 36 |
+
|
| 37 |
+
- We proposed the Positioned Incremental Affinity Propagation (PIAP) clustering algorithm to generate trajectories for inaccurately detected and sensitiveness varying discriminating pixelation objects. The proposed PIAP spontaneously handles the cluster number generation, cluster under unbalanced sample-size problems, and further endows the classic AP with noise-resistance and time-saving merits.
|
| 38 |
+
|
| 39 |
+
- We built a live video streaming dataset to test the proposed PsOP framework and to be available to public. Diverse streaming videos are collected through live streaming platforms, and dense annotations on each frame are manually labeled for the evaluation of the proposed method.
|
| 40 |
+
|
| 41 |
+
<figure data-latex-placement="htbp">
|
| 42 |
+
<img src="img/Diagram1.png" />
|
| 43 |
+
<figcaption>The proposed PsOP framework. Purple and blue lines respectively indicate the pixelation process of discriminate pixelation objects, and nonsensitive objects. Dash lines and boxes presents the extendability of PsOP through pre-training.</figcaption>
|
| 44 |
+
</figure>
|
| 45 |
+
|
| 46 |
+
# Method
|
| 47 |
+
|
| 48 |
+
The procedure of the proposed Privacy-sensitive Objects Pixelation (PsOP) is shown in Figure 2. The process of contexts-irrelevant Indiscriminating Pixelation Objects (IPOs) are omitted since they are pixelated without generating trajectories. Live streaming videos are sliced into video segments with a fixed size for better accuracy. Then, frame-wise detections are applied through detectors pre-trained on image datasets. Detectors, along with corresponding embedding networks for every class of contexts-relevant Discriminating Pixelation Objects (DPOs) (not limited to face and text), are applied in parallel. Similarly, embeddings yielded by the same detector and embedding network are feed to the PIAP clustering. The parallel PIAP associates the same instance across frames into the same cluster, and sequential link within a cluster forms the trajectory. As not all DPOs are sensitive in a particular scene, sensitive thesaurus [@bollegala2012cross] or minor manual specification are appended to filter the trajectories of DPOs further. Gaussian filters are imposed on the final trajectories for pixelation and then streaming to the audience.
|
| 49 |
+
|
| 50 |
+
The proposed PsOP leverages image-based pre-trained detection networks. Dash lines and boxes in Figure 2 shows the extendable detection of IPOs and DPOs through user-defined pre-training datasets. As false positives and negatives are common in detection, we design a buffer section at the beginning of each live streaming to promote the accuracy of trajectories without affecting the audience experience. With the PsOP's real-time efficiency, we can stack every $\mathcal{N}$ frames into a short video segment by demanding a $(2*\mathcal{N})$ frames buffering section at the very beginning of live video streaming without causing discontinuities in broadcasting.
|
| 51 |
+
|
| 52 |
+
Video segments slice the typical hours-long live streaming into numerous segments in seconds level. Transmitted to a lag at the begging, the conflict between accuracy and efficiency is greatly alleviated. Trajectories of privacy-sensitive objects could be smoothed within and across segments. Considering the primal latency brought by data communication and video compression, such a buffering latency appended is acceptable to users.
|
| 53 |
+
|
| 54 |
+
Then, the Indiscriminating Pixelation Objects (IPOs) are firstly detected in video segments. We adopt the [@yu2016iprivacy] as the detection network here. Training is conducted jointly on NPDI [@avila2013pooling], METU [@tursun2017large], and openALPR. The multi-task learning network is trained to handle the detection of erotic image, trademark, and plate numbers simultaneously. As in Figure 2, video segments are directly fed to the detector of indiscriminate pixelation objects colored in purple. Gaussian smooth among five consecutive frames is applied for compensating false negatives. Within a segment, the detection which has less than five bounding boxes with an overlapping ratio IOU$\leq \varepsilon$ is eliminated as false positives to avoid the blinking of mosaics. Afterward, the trajectories of indiscriminate pixelation objects are established and ready for pixelation. To avoid the domain overlapping of IPOs and DPOs, bounding boxes detected by both detectors with an IOU$> \varepsilon$ are categorized to IPOs.
|
| 55 |
+
|
| 56 |
+
To elucidate the details of proposed pixelation method for Discriminate Pixelation Objects (DPOs), faces and texts are used in the following as tangible instances. Following the embedding then clustering pixelation process for DPOs, in this paper, we use MTCNN [@zhang2016joint], and CosFace [@wang2018cosface] to process every frame in turn. The application of MTCNN and CosFace considers the convenience for latter cosine similarity in PIAP. The detection and embedding networks could be substituted by other state-of-the-art methods. MTCNN accepts arbitrary size inputs and detects face larger than $12*12$ pixel. CosFace generates $512$ dimension feature vector for each detected face. Faces are aligned to the frontal pose through the affine transformation before embedding. Similarly for texts, Textboxes [@liao2017textboxes] and FastText [@joulin2016fasttext] are used in texts detection and embedding. TextBoxes receives input images containing texts with an arbitrary scale. A bit close to MTCNN, frames are rescaled to $300*300$ for efficiency, and Non-Maximum Suppression (NMS) are also leveraged to boost the accuracy. FastText is a C-BOW like fast text classification network, and the result of the second last layer before softmax regression is extracted as embedding. The embedding dimension of FastText is defined to $128$. The clustering for faces and texts is conducted in parallel.
|
| 57 |
+
|
| 58 |
+
The proposed PIAP is then activated to connect the same faces or the same piece of text across frames according to face or text vectors. Do notice that the efficacy of PIAP remains the same in establishing trajectories for other objects of this type. DBSCAN [@ester1996density] and Affinity Propagation (AP) [@frey2007clustering] are the candidates under unpredictable cluster numbers. As noisy detection results are common in videos and the data-size for clustering is also unbalanced, the density-based DBSCAN is excluded. To revise the false detection, variation of embedded vectors, and boost the clustering speed, we employ the position information and incremental clustering in the proposed PIAP. Sequential links within each cluster form the trajectory of each face and text, and likewise, Gaussian smooth is applied within a segment before pixelation.
|
| 59 |
+
|
| 60 |
+
Similar to traditional clustering algorithms, the first step of classic AP is the measurement of the distance between data nodes, denoted as the similarities. Following the common notation in AP, $i$ and $k$ ($i,k\in R^D$, $D=512$ for face and $D=128$ for text) are two of the data nodes. $S$ is the similarity matrix stores the similarities between every two nodes. $S(i,k)$ is the element on row $i$, column $k$ of $S$ and $S(i,k)$ denotes the similarity between data node ${i}$ and ${k}$, thereby indicating how well the data node $k$ is suited to be the exemplar for data point $i$. Similar notations are also used in below.
|
| 61 |
+
|
| 62 |
+
The core of AP is that a series of responsibilities $R(i,k)$ and availabilities $A(i,k)$ messages are passed among all data nodes to reach the consensus on exemplars' selection. $R$ and $A$ are the responsibility and availability matrix. $i$ passes $R(i,k)$ to its potential exemplar $k$ indicating the current willingness of $i$ choosing $k$ as its exemplar considering all the other potential exemplars. Correspondingly, $k$ responds $A(i,k)$ to $i$ update the current willingness of $k$ accepting $i$ as its member considering all the other potential members. The sum of $R(i,k)$ and $A(i,k)$ can directly give the fitness for choosing $k$ as the exemplar of $i$. The consensus of the sum-product message passing process ($R(i,k)$ and $A(i,k)$ remain the same after iterations) stands for the final agreement of all nodes in the selection of exemplars and the association of clusters is reached. Apart from the ill-defined cluster number, AP is not sensitive to the initialization settings; selects real data nodes as exemplars; allows asymmetric matrix as input; is more accurate when measured in the sum of square errors. Therefore, considering the subspace distribution (least square regression), AP is effective in generating robust and accurate clustering results for high dimension data like face vectors.
|
| 63 |
+
|
| 64 |
+
The governing equations for message passing in AP are:
|
| 65 |
+
|
| 66 |
+
$$\begin{equation}
|
| 67 |
+
R(i,k) \leftarrow S(i,k) - \max_{{k'}, s.t.{k'} \neq k}\{A(i,k')+S(i,{k'})\}
|
| 68 |
+
\end{equation}$$ $$\begin{equation}
|
| 69 |
+
A(i,k) \leftarrow \min\bm{\{} {0, R(k,k)+ \sum_{{i'},{i'}\notin {\{i,k\}}} \max \{0, R({i'},k)\} \bm{\}}}
|
| 70 |
+
\end{equation}$$ Equation (3) is used to fill in the elements on the diagonal of the availability matrix: $$\begin{equation}
|
| 71 |
+
A(k,k) \leftarrow \sum_{i', s.t. i' \neq k}\max\{0, R(i',k) \}
|
| 72 |
+
\end{equation}$$ Update responsibilities and availabilities according to (1), (2) and (3) till convergence, then the criterion matrix $C$ which holds the exemplars is the sum of the $A$ and $R$ at each location. $$\begin{equation}
|
| 73 |
+
C(i,k) \leftarrow R(i,k)+A(i,k)
|
| 74 |
+
\end{equation}$$ The highest value of each row of the criterion matrix is designated as the exemplar.
|
| 75 |
+
|
| 76 |
+
Clustering builds the connection of the same face across frames since its result could be facilely corrected with some intervention. With the results of face detection and recognition, the proposed PIAP process in a segment-wise way to group the faces within and across segments simultaneously. Apparently, the longer segments offer more information but also incur more noise and reduce efficiency. In this paper, the proposed PsOP cuts every 150-frames into a segment and leverages the whole existing context to reach accurate and fast face pixelation. Define a data stream ${Z=\{Z_1,Z_2,...,Z_n\}}$ is sequentially collected face/text feature vectors of the current video segment. i.e. $Z$ is an $512*n$ matrix with $512$ dimension feature vector.
|
| 77 |
+
|
| 78 |
+
For the normalization in the PIAP, cosine similarity is brought for measurements. As an instance of DPOs (like a person's face) can only appear at one position in a single frame, instances that belong to the same frame are set to the minimum similarity value $-1$. Let $j$ stands for the other instances that belong to the same frame as $i$. The similarity matrix can be generated as: $$\begin{equation}
|
| 79 |
+
S(i,k)=
|
| 80 |
+
\left\{
|
| 81 |
+
\begin{array}{lr}
|
| 82 |
+
\frac{i \cdot k}{\Vert i \Vert \Vert k \Vert} -1, \quad\mbox{ if }\ k \notin{j}\\
|
| 83 |
+
|
| 84 |
+
\\
|
| 85 |
+
-1, \quad \mbox{ if } \ k \in j
|
| 86 |
+
\end{array}
|
| 87 |
+
\right.
|
| 88 |
+
\end{equation}$$
|
| 89 |
+
|
| 90 |
+
Moreover, in (2), the left side of the equation is the accumulated evidence of how much $k$ can stand for itself ($R(k,k)$), and how many others $k$ is also responsible to stands for ($\sum_{{i'},{i'}\notin {\{i,k\}}} \max\{0, R({i'},k)\}$). Every positive responsibilities of $k$ contributes to $A(i,k)$. Therefore, according to (5), $A(i,k)$ should not accumulate $j$'s choice as the evidence for representing $i$. One step further, the choice of $j$ will actually repel $i$ in availability, and the message passing of $A(i,k)$ shall be rewritten as: $$\begin{equation}
|
| 91 |
+
\begin{aligned}
|
| 92 |
+
&A(i,k) \leftarrow \min\bm{\{}0, R(k,k)+ \\
|
| 93 |
+
&\sum_{{i'},{i'}\notin {\{i,j,k\}}} \max\{0, R({i'},k)\} -\sum_{{i'},{i'}\in {\{j\}}} \max\{0, R({i'},k)\}\bm{\}}
|
| 94 |
+
\end{aligned}
|
| 95 |
+
\end{equation}$$ (5), (6) ensures $i$ and $j$ are mutual exclusive in the clustering process.
|
| 96 |
+
|
| 97 |
+
After the revision of the position information, the challenge for extending AP into an incremental way is that the data nodes received at different timestamps stay at varying statuses with disproportionate responsibilities and availabilities value.
|
| 98 |
+
|
| 99 |
+
However, video frames are strongly self-correlated data. We could assign a proper value to newly-arrived vectors according to the ones in the previous segment without affecting the clustering purity. The embedded vectors of a particular person or text shall stay close with each other in the feature space across different frames. Thus, our incremental AP algorithm is proposed based on the fact that if two detected faces/texts are in adjacent segments and refer to the same person/the same piece of texts, they should not only be clustered into the same group but also have the same responsibilities and availabilities. Such fact is not well considered in past studies of incremental affinity propagation [@sun2014incremental; @wang2013multi].
|
| 100 |
+
|
| 101 |
+
Following the common notations in AP, the similarity matrix is denoted as $S_{t-t'}$ at time $t-t'$ with ($M_{t-t'}*M_{t-t'}$) dimension where $(t-t')= \frac{\mathcal{N}}{FPS}$. And the responsibility matrix and availability matrix at time $t-t'$ are $R_{t-t'}$ and $A_{t-t'}$ with a same dimension as $S_{t-t'}$. Then, the update rule of $R_{t}$ and $A_{t}$ respect to $R_{t-t'}$ and $A_{t-t'}$ can be written as: $$\begin{equation}
|
| 102 |
+
%\vspace{-1em}
|
| 103 |
+
R_{t}(i,k)=
|
| 104 |
+
\left\{
|
| 105 |
+
\begin{array}{lr}
|
| 106 |
+
R_{t-t'}(i,k), \quad i\leq M_{t-t'}, k\leq M_{t-t'}\\
|
| 107 |
+
\\
|
| 108 |
+
R_{t-t'}(i',k), \quad i > M_{t-t'}, k\leq M_{t-t'}\\
|
| 109 |
+
\\
|
| 110 |
+
R_{t-t'}(i,k'), \quad i\leq M_{t-t'}, k > M_{t-t'}\\
|
| 111 |
+
\\
|
| 112 |
+
0, \quad i > M_{t-t'}, k > M_{t-t'}
|
| 113 |
+
\end{array}
|
| 114 |
+
\right.
|
| 115 |
+
%\vspace{-1em}
|
| 116 |
+
\end{equation}$$ Note that the dimension of three matrices is increasing with time. $R_{t}(i,k)$ is the newly arrived face vectors of a segment at time $t$. $M_{t-t'}$ stands for the amount of faces at time $t-t'$. $$\begin{equation}
|
| 117 |
+
i'=\arg \max_{i', i'\leq M_{t-t'}} \left \{S(i,i')\right \}
|
| 118 |
+
\end{equation}$$ Easily, the $A_{t}$ could be updated through $$\begin{equation}
|
| 119 |
+
%\vspace{-1em}
|
| 120 |
+
A_{t}(i,k)=
|
| 121 |
+
\left\{
|
| 122 |
+
\begin{array}{lr}
|
| 123 |
+
A_{t-t'}(i,k), \quad i\leq M_{t-t'}, k\leq M_{t-t'} \\
|
| 124 |
+
\\
|
| 125 |
+
A_{t-t'}(i',k), \quad i > M_{t-t'}, k\leq M_{t-t'}\\
|
| 126 |
+
\\
|
| 127 |
+
A_{t-t'}(i,k'), \quad i\leq M_{t-t'}, k > M_{t-t'}\\
|
| 128 |
+
\\
|
| 129 |
+
0, \quad i > M_{t-t'}, k > M_{t-t'}
|
| 130 |
+
\end{array}
|
| 131 |
+
\right.
|
| 132 |
+
%\vspace{-1em}
|
| 133 |
+
\end{equation}$$
|
| 134 |
+
|
| 135 |
+
Denote $z_{p}^{q}=\{z_{1}^{q},z_{2}^{q},z_{3}^{q}...z_{p}^{q}\}$ as the set of all $p$ vectors extracted in segment $q$. The full process of PIAP algorithm can be summarized as Algorithm 1.
|
| 136 |
+
|
| 137 |
+
::::: algorithm
|
| 138 |
+
::: flushleft
|
| 139 |
+
**Input**: $R_{t-t'}$,$A_{t-t'}$,$C_{t-t'}, z_{p}^{q}$\
|
| 140 |
+
**Output**: $R_{t},A_{t},C_{t}$
|
| 141 |
+
:::
|
| 142 |
+
|
| 143 |
+
::: algorithmic
|
| 144 |
+
Compute similarity matrix according to (5). Assign zeros to all responsibilities and availabilities. Compute responsibilities and availabilities for $z_{p}^{q}$ according to equation (7), (8) and (9). Extend responsibilities matrix $R_{t-t'}$ to $R_{t}$, and availabilities $A_{t-t'}$ to $A_{t}$. Message-passing according to equation (1), (6) and (3) until convergence. Compute exemplars and clustering results $C_t$ as equation (4).
|
| 145 |
+
:::
|
| 146 |
+
:::::
|
| 147 |
+
|
| 148 |
+
Handled by PIAP, faces, texts, and alike DPOs could be intra-class distinguished. Cluster results are smoothed similar as the IPOs for the exclusion of false negatives and positives. Faces and texts within a cluster are then linked sequentially to form a trajectory.
|
| 149 |
+
|
| 150 |
+
In live video streaming, the appeared non-streamers are much more than streamers but own a far smaller sample-size for each non-streamer. Thus, as in the rightmost part of Figure 1, with trajectories built by PIAP, manual specification (like screen touching) is involved solely on the face of a newly appeared streamer. Also, texts are further filtered through checking the sensitive thesaurus based on [@bollegala2012cross]. With the rolling of PIAP, the previously specified cluster is capable of guiding the future aggregation of the same faces or texts.
|
| 151 |
+
|
| 152 |
+
The trajectories of the faces of the non-streamers, the sensitive texts along with IPOs are gathered as the trajectories of privacy-sensitive objects. Gaussian filters are applied on these trajectories for pixelation before broadcasting to the audience.
|
2102.00436/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2021-03-18T06:08:28.781Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36" etag="WnQn0co-fR5diQ2MVkGs" version="13.9.9" type="google"><diagram id="2DBYDU1VKqp9znuRtQm6" name="Page-1">7V1Zk6LKEv41xjnnYSbYkUcFt4a23bX7ZQKhQJRNwPXX30KhBcGlb7v1iBHjQFIUVZlfbpWFncNZY1lxRHv0aslAz2GIvMzhXA7DKJqG3z5htSXgdH5LUB1N3pLQHaGtrUFARALqTJOBG2voWZbuaXacKFmmCSQvRhMdx1rEmymWHn+qLarBE5EdoS2JOkg062uyN9pS8xi9o1eBpo7CJ6MUs71iiGHjoAt3JMrWIvIsvJTDWceyvO2RsWSB7vMu5Mt2QOUDVz8H5gDTO+eGP5zMrX9N+6JaLfem7aossMtf4di8VTjhuSsvaxw8RlE4xKKi6Tpr6ZazuYpLhEyJCKSrjihr8MnhNdMyYQdFOEnb78j1gClp+r+jj6LAubUFV2o1CAzpveooMshbDD/u6mWtYiNzlm50J16r5jbpytCtkx9eczwfKmN+gn4stbeXyZDLT9rMtIkCzNVsalGriiW5ue5UKyyZw4olOKwyOmqvp1We1Bc6unqjenxrOqsCvsDqrl2UXmXYbtSH7UY9+AWB5kynxgKH1AI8p+jWaDgvIR9YZ0wtaBOSgVmarwSz1fjPn5PnWBOwN1FdHAK9KEoT1bFmphxhkbL5wCabC8Dnvs+wkWfoPlfh4WKkeaBti5LPqQVUm01jT/Q0y/Sb5P32SfEGspoDxwPLCCkQdwVYBvCcFWwSXMUQ7De5vWkV16rFDspU0O0ogmIqaCYGyqN+dr3DFzwIIJYON83EmnlxMAO9P8qQW7Ubc679Cz8MN4aGkwYyVL7gYsBoWXRHGyaiwUlD9Dzg+IwifiNwoHCSCLIRk+h4BV/ldzcDU96jbFoFZobZtoicuVAomql2LDuQWkAoWp5nGXGaABQvTmkFHPRJvpQ0aEYKuqb6YzU0WfanFcXBlzF0EBM+344iwgE6hNc8btTSxBvc2rA0+IhdE0tRXOBF5Q/5Kq4iLWz/DjdugcJeQjxSRCoawy5ca+ZIILjpSD84Eu8H3+sHilgFXqKfDWI/efL/g5g5ZjPz59vGea9c5NTaiKsu6q0lX4J2p/YCRVPuKmNdRzTBnrHCnARtrs+DuaAOxfe12RG91zImvZcqo1530R07GlXr4ixd6/SHNtUcsHbNKFPva3XITqQXsmqr8+4L0jMlXn8xOrMu3eOmq9lakF+ZN6L01hIFSazZIr2qWxZNVqbtxWi2Uu2Xkjpbv6Jdbt3WhMrbhOlwQEI1noeDFFZdhdRdBR56PYlbr+sf5BplKBSa2bLSnTj5rtdQCgWfNzj332GgB3z5kp2UNQe6+a2hXADXS9hnqD3lMgU/Sef1Sb+EbcXilpVMsawphvVKdhUlLjAlGo/NiL6WE0DJwwpE5dO9wBMYd7DUvIF/DH329uw9dsYtIw25VeSkARwNigY4p9zE1sAeEQ0WRNgbA3oG3h7S71zKzqPUYZzSaIbTb+MUwtNZDaInkZ78011Xm7NV9Gy/s4QXYAmOKuyuhNkbdhMFIZ9BP+iDfPyCHz0RE17O6+SP5B54lnv8BPdEnOue6CurXwDWX3t33FUbj6QlDJF5qwt6q52D+oa/uoEWIE/ghHbLtt9wQqcWFC7mhLAj661Mlvrc07eE2nJSrbBrq9UDOhcM+0mhHhYsZMxFfRZw/7DaoeiB4O8iereflEgIiRDU42ukGHQmQXH7yvXDVBS7soouH1BFL7EEiOK3coTkV1QUed749Ygi5vHYh4jpJb73+bKa3mY9hTxXo6+94PiIGn1k/ZFinlYjDoLtJwRkCcCdKLqcWxI9ZbivXBINt8PEzDkcEbH8g2wPElCGHsmLI8EBrrYWh5sGG9M38yx3C0s0zRIeRFfAdvgYspgjobrECoGfOpCyq+NMpJ3vUMm4eKnA3kWrhIGkomVC7GpuN5/J6UA5l3gsQTGHBPWcYgrEQuAJsZC3FEtoVjOxxNOGcNNjKKak9txUTEcCJxRFcomNIQgyRJQv7Gp0/J07/q7Gqlcfkz2WR4nFcpzXdAPpfXzg7cJsgHuFaY0blAvaiyu/IVSXr89Lpb7cLhqCJvFth2J4iy4RHwQltjC+z+oVrDdUnEK9WiA6JL6oiLO1pb8RaK3pKr0CzdVXpeH7YLZ81/N8UTD1pphnxhRXnxeWrCNUaF4wJhxLvvKYAWbtolw0xgACsszVEAAh0BfLOaw4UOB3ed7oz7lyX+3n197C36Fb7AltSMcZQexOBHrgVZs+Z7a7dxKZwCe7Lriv5xIoxNEYCsOg5D5bco5sckERPB2D5Bcw+Dy7x1LxRz4e/vJIPAS/K/yOFBrolH3d5c0nQ99Z6KMR6fHQR8fRR90VfUcW8+h8mu0LWJqh74eiD4+jj7gr+sI4IBml//PcYToaD9PRpJRum01hmZxSFyMY/KHkdGTnHIrS30+n5r1qC6ZTKtdYiNXJpEmMlFdoIlvwX51fuq0lKjQJOS9MpY+R0hCKHj0HjDxFFmZrqar9cm9cbr29NaajcfNFWkII4DorTSfq+8AevY3cWmvhDEAfzr3YrPb6csHADZslO0z3zQH0KzNZNehCkyq/6ytrpU/nImyomool2kO+RPP8ZMLCtIqTOw2ZrbnWqoW6DWJdKdmONRbs+Zp+nzPDKcHhhoivTQfejaCdnUGHUymfDGcfMJ3CkUdKp5K2IJJOpab0WUjxs0MKDHmkmCLce5BuBJkso/oLAUg8UkqFUynB0h4ik5Xe2A72CMviNXz0W2XVE2FJSPtmZR6l8r+JvbgI3evl7GppSl/xnq5cL8XT6nCU7se2irUZ6U6q1HRmhRd+bUPbAmyAUvZydxESFFGK39DRDODCC3WwgN8tyxDN6A2UGvy/eawWEgxtObNDKpyctt/SV+vNGEPyU8boZPz1Tyb5+udtU6m0euFjAEqUIaQyQJ30NuELSg+CKJI47W/iTEhzwxGhpXD2IlFi3IyHPUTYhiN4km+h+b+8JtKHA8Vs7fNvjBPxfQjm7xknEkcy5azy8zfib3/v0r0BeGz3RVb5/hsr3/sWkLgvAI/+QE+2/edv3f6zX926rxkk015Ci2VkX82vzkjhiHgK95nRQzHA4Rr+bx1mSVfUcMVLHCiRTLqYKyVdqT/HeKggmiNZT9NlkKOLyxzNPXV1lIw7GywpsvyVRFZ9qRpDqzroE8OybE0nFcbtHC1MZb8G9yS/BpfwPTeLgFIxmfZy5V1cz2btL3M+B0CDJX6eNcX/oMgtrdmh97gyBxSRGxNfqU0pTN/UA4VVsPuruysatp6p+yF1R/fVPSV2ua26o2kB512gYzmaqplXR4y+eV/5R+AFw/eX9W5XEUoHy8NktJppz7zMyqRmtPhJ0FzIOcHT3R8v2G6M2P0FCLz0Pw==</diagram></mxfile>
|
2102.00436/main_diagram/main_diagram.pdf
ADDED
|
Binary file (16.7 kB). View file
|
|
|
2102.00436/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,98 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
A great number of works [\[7,](#page-8-0) [2,](#page-8-1) [1\]](#page-8-2) have shown that deep neural networks (DNNs) are vulnerable to adversarial examples [\[31,](#page-9-0) [7\]](#page-8-0), *i.e*. the malicious crafted inputs that are indistinguishable from the legitimate ones but can induce misclassification on the deep learning models. Such vulnerability poses potential threats to security-sensitive applications, *e.g*. face verification [\[28\]](#page-9-1), autonomous driving [\[6\]](#page-8-3) and has inspired a sizable body of research on adversarial attacks [\[22,](#page-8-4) [2,](#page-8-1) [21,](#page-8-5) [4,](#page-8-6) [16,](#page-8-7) [5,](#page-8-8) [38,](#page-9-2) [18\]](#page-8-9). Moreover, the adversaries often exhibit transferability across neural network models [\[25\]](#page-8-10), in which the adversarial examples generated on one model may also mislead other models. The adversarial transferability matters because hackers may attack a real-world DNN application without knowing any information of the target model. However, under white-box setting where the attacker has complete knowledge of the target model, existing attacks [\[2,](#page-8-1) [11,](#page-8-11) [1,](#page-8-2) [21\]](#page-8-5) have demonstrated great attack performance but with comparatively low transferability against models with defense mechanisms [\[21,](#page-8-5) [33\]](#page-9-3), making it inefficient for real-world adversarial attacks.
|
| 4 |
+
|
| 5 |
+
To improve the transferability of adversarial attacks, various techniques have been proposed, such as advanced gradient calculations [\[4,](#page-8-6) [18,](#page-8-9) [35\]](#page-9-4), ensemble-model attacks [\[19,](#page-8-12) [15\]](#page-8-13), input transformations [\[38,](#page-9-2) [5,](#page-8-8) [18,](#page-8-9) [10\]](#page-8-14) and model-specific methods [\[36\]](#page-9-5). The input transformation (*e.g*. randomly resizing and padding, translation, scale *etc*.) is one of the most effective approaches. Nevertheless, we observe that existing methods are all applied on a single input image. Since adversarial attacks aim to mislead the DNNs to classify the adversary into other categories, it naturally inspires us to explore whether we could further enhance the transferability by incorporating the information from other categories.
|
| 6 |
+
|
| 7 |
+
The *mixup* operation, that linearly interpolates two random images and corresponding labels, is firstly proposed as a data augmentation approach to improve the generalization of standard training [\[41,](#page-9-6) [34,](#page-9-7) [40\]](#page-9-8). Recently, *mixup* is also used for inference [\[24\]](#page-8-15) or adversarial training [\[12,](#page-8-16) [14\]](#page-8-17) to enhance the model robustness. Since *mixup* adopts the information of a randomly picked image, we try to directly adopt *mixup* to craft adversaries but find that the attack performance decays significantly under white-box setting with little improvement on transferability. To craft highly trans-
|
| 8 |
+
|
| 9 |
+
<sup>\*</sup>Corresponding author.
|
| 10 |
+
|
| 11 |
+
ferable adversaries with the information from other categories but not harm the white-box attack performance, we propose a novel attack method called *Admix* that calculates the gradient on the admixed image combined with the original input and images randomly picked from other categories. Unlike *mixup* that treats the two images equally and mixes their labels accordingly, the *admix* operation adds a small portion of the add-in image from other categories to the original input but does not change the label. Thus *Admix* attack could obtain diverse inputs for gradient calculation.
|
| 12 |
+
|
| 13 |
+
Empirical evaluations on standard ImageNet dataset [\[26\]](#page-8-18) demonstrate that, compared with existing input transformations [\[38,](#page-9-2) [5,](#page-8-8) [18\]](#page-8-9), the proposed *Admix* attack achieves significantly higher attack success rates under black-box setting and maintains similar attack performance under whitebox setting. By incorporating *Admix* with other input transformations, the transferability of the crafted adversaries could be further improved. Besides, the evaluation of the integrated method under the ensemble-model setting [\[19\]](#page-8-12) against nine advanced defense methods [\[17,](#page-8-19) [37,](#page-9-9) [39,](#page-9-10) [20,](#page-8-20) [8,](#page-8-21) [3,](#page-8-22) [27,](#page-9-11) [23\]](#page-8-23) demonstrates that the final integrated method, termed *Admix*-TI-DIM, outperforms the state-of-the-art SI-TI-DIM [\[18\]](#page-8-9) by a clear margin of 3.4% on average, which further demonstrates the high effectiveness of *Admix*.
|
| 14 |
+
|
| 15 |
+
# Method
|
| 16 |
+
|
| 17 |
+
In this section, we first provide details of several adversarial attacks for enhancing the transferability to which our method is most related. Then we introduce the proposed *Admix* attack method and highlight the difference between the proposed *admix* operation and the existing *mixup* [\[41\]](#page-9-6) operation designed for standard training.
|
| 18 |
+
|
| 19 |
+
Let X be the set of all digital images under consideration for a given learning task, Y ∈ R be the output label space and B(x) = {x¯ : kx − x¯k<sup>p</sup> ≤ } denote the `p-norm ball centered at x with radius . Given a classifier f(x; θ) : x ∈ X → y ∈ Y that outputs label y for the prediction of input x with model parameters θ, the goal of adversarial attack is to seek an example x adv ∈ B(x) that misleads the target classifier f(x; θ) 6= f(x adv; θ). To align with previous works, we focus on `∞-norm in this work.
|
| 20 |
+
|
| 21 |
+
Fast Gradient Sign Method (FGSM) [\[7\]](#page-8-0) crafts adversarial example by adding perturbation in the gradient direction of the loss function J(x, y; θ) as follows:
|
| 22 |
+
|
| 23 |
+
$$x^{adv} = x + \epsilon \cdot \text{sign}(\nabla_x J(x, y; \theta)),$$
|
| 24 |
+
|
| 25 |
+
where sign(·) denotes the sign function and ∇xJ(x, y; θ) is the gradient of the loss function w.r.t. x.
|
| 26 |
+
|
| 27 |
+
Iterative Fast Gradient Sign Method (I-FGSM) [\[11\]](#page-8-11) is an iterative version of FGSM by adding a small perturbation with step size α in the gradient direction at each iteration:
|
| 28 |
+
|
| 29 |
+
$$x_{t+1}^{adv} = x_t^{adv} + \alpha \cdot \mathrm{sign}(\nabla_{x_t^{adv}} J(x_t^{adv}, y; \theta)), \quad x_0^{adv} = x.$$
|
| 30 |
+
|
| 31 |
+
Momentum Iterative Fast Gradient Sign Method (MI-FGSM) [\[4\]](#page-8-6) integrates the momentum term into I-FGSM and exhibits better transferability. The update procedure can be summarized as:
|
| 32 |
+
|
| 33 |
+
$$\begin{split} g_t &= \mu \cdot g_{t-1} + \frac{\nabla_{x_t^{adv}} J(x_t^{adv}, y, ; \theta)}{\|\nabla_{x_t^{adv}} J(x_t^{adv}, y, ; \theta)\|_1}, \\ x_{t+1}^{adv} &= x_t^{adv} + \alpha \cdot \text{sign}(g_t). \end{split}$$
|
| 34 |
+
|
| 35 |
+
Diverse Input Method (DIM) [\[38\]](#page-9-2) is the first input transformation based attack which firstly resizes the input image to an r × r × 3 image where r is randomly sampled from [299, 330) with a given probability p and pads the resized image into 330 × 330 × 3. Then DIM feeds the transformed image to DNNs for gradient calculation.
|
| 36 |
+
|
| 37 |
+
Translation-Invariant Method (TIM) [\[5\]](#page-8-8) calculates the average gradient on a set of translated images for the update. To further improve the efficiency, TIM approximately calculates the gradient by convolving the gradient of the untranslated image with a predefined kernel matrix instead of computing the gradient on a set of images.
|
| 38 |
+
|
| 39 |
+
Scale-Invariant Method (SIM) [\[18\]](#page-8-9) discovers the scale invariance property of DNNs and calculates the average gradient over the scaled copies of the input for update:
|
| 40 |
+
|
| 41 |
+
$$\bar{g}_{t+1} = \frac{1}{m} \sum_{i=0}^{m-1} \nabla_{x_t^{adv}} (J(x_t^{adv}/2^i, y; \theta)),$$
|
| 42 |
+
|
| 43 |
+
where m is the number of copies.
|
| 44 |
+
|
| 45 |
+
Lin *et al*. [\[18\]](#page-8-9) analogize the adversary generation process to the neural model training process and the transferability of crafted adversarial example could be equivalent to the generalization of the trained model. Under such perspective, the input transformation could be treated as data augmentation. Various input transformations have been proposed that could boost the adversarial transferability, however, we observe that all the existing transformations are applied on the single input image. On the other hand, we observe that for standard training, *mixup*, which is a powerful data augmentation strategy by interpolating two randomly sampled examples, can effectively improve the model generalization [\[41,](#page-9-6) [32,](#page-9-12) [40\]](#page-9-8). This raises an intriguing question, *could we improve the attack transferability by adopting information from other images for the gradient calculation?*
|
| 46 |
+
|
| 47 |
+
However, as shown in Table [1,](#page-2-0) we find that directly applying *mixup* for the gradient calculation improves the transferability of crafted adversaries slightly but degrades the attack performance significantly under white-box setting. The main reason might be two-fold. First, there is no difference between x and x 0 for the *mixup* which might adopt too much information from the add-in image x 0 for the gradient calculation of the input x and thus provide incorrect direction for update. Second, *mixup* also mixes the labels which introduces the gradient of other category for update when x and x 0 are not in the same category.
|
| 48 |
+
|
| 49 |
+
Input: A classifier f with loss function J and a benign example x with ground-truth label y
|
| 50 |
+
|
| 51 |
+
Input: The maximum perturbation , number of iterations T and decay factor µ
|
| 52 |
+
|
| 53 |
+
Input: The number of admixed copies m<sup>1</sup> and sampled images m2, and the strength of sampled image η
|
| 54 |
+
|
| 55 |
+
Output: An adversarial example x adv ∈ B(x)
|
| 56 |
+
|
| 57 |
+
- 1: α = /T; g<sup>0</sup> = 0; g¯<sup>0</sup> = 0; x adv <sup>0</sup> = x
|
| 58 |
+
- 2: for t = 0 → T − 1 do:
|
| 59 |
+
- 3: Randomly sample a set X<sup>0</sup> of m<sup>2</sup> images from another category
|
| 60 |
+
- 4: Calculate the average gradient g¯t+1 by Eq. [\(3\)](#page-3-0)
|
| 61 |
+
- 5: Update the enhanced momentum gt:
|
| 62 |
+
|
| 63 |
+
$$g_{t+1} = \mu \cdot g_t + \frac{\bar{g}_{t+1}}{\|\bar{g}_{t+1}\|_1}$$
|
| 64 |
+
|
| 65 |
+
6: Update x adv <sup>t</sup>+1 by applying the gradient sign:
|
| 66 |
+
|
| 67 |
+
$$x_{t+1}^{adv} = x_t^{adv} + \alpha \cdot \mathrm{sign}(g_{t+1})$$
|
| 68 |
+
|
| 69 |
+
- 7: end for
|
| 70 |
+
- 8: return x adv = x adv T .
|
| 71 |
+
|
| 72 |
+
In order to utilize the information of images from other category without harming the white-box attack performance, we propose *admix* operation that admixes two images in a master and slave manner. Specifically, we takes the original image x as the primary image and admixes it with a secondary image x 0 randomly picked from other category:
|
| 73 |
+
|
| 74 |
+
<span id="page-3-1"></span>
|
| 75 |
+
$$\tilde{x} = \gamma \cdot x + \eta' \cdot x' = \gamma \cdot (x + \eta \cdot x'),$$
|
| 76 |
+
(2)
|
| 77 |
+
|
| 78 |
+
where η = η <sup>0</sup>/γ, γ ∈ [0, 1] and η <sup>0</sup> ∈ [0, γ) control the portion of the original image and the randomly sampled image in the admixed image respectively. In this way, we can assure that the secondary image x 0 always occupies a smaller portion in x˜. Note that we do not mix the labels, but instead use the original label of x for x˜.
|
| 79 |
+
|
| 80 |
+
With the above analysis, we propose an *Admix* attack method to improve the attack transferability, which calculates the average gradient on a set of admixed images {x˜} of the input x by changing the value of γ or picking the add-in image x 0 from different categories in Eq. [\(2\)](#page-3-1).
|
| 81 |
+
|
| 82 |
+
<span id="page-3-0"></span>
|
| 83 |
+
$$\bar{g}_{t+1} = \frac{1}{m_1 \cdot m_2} \sum_{x' \in X'} \sum_{i=0}^{m_1 - 1} \nabla_{x_t^{adv}} J(\gamma_i \cdot (x_t^{adv} + \eta \cdot x'), y; \theta),$$
|
| 84 |
+
(3)
|
| 85 |
+
|
| 86 |
+
where m<sup>1</sup> is the number of admixed images for each x 0 and X<sup>0</sup> denotes the set of m<sup>2</sup> randomly sampled images from other categories. Note that when η = 0, *Admix* will degenerate to SIM [\[18\]](#page-8-9). The proposed *Admix* could be integrated
|
| 87 |
+
|
| 88 |
+
<span id="page-3-3"></span>
|
| 89 |
+
|
| 90 |
+
Figure 1: Illustration of the mechanisms in the input space of *mixup* and *admix*. x denotes the input image and x 0 the randomly sampled image. x<sup>0</sup> denotes the origin where all pixel values are 0s and x˜ is a possible transformed image. The green line and green triangle denotes all the possible transformed images by *mixup* and *admix*, respectively.
|
| 91 |
+
|
| 92 |
+
with any gradient-based attacks and other input transformation methods except for SIM. We summarize the algorithm of *Admix* integrated into MI-FGSM (denoted as *Admix* without ambiguity in the following) in Algorithm [1.](#page-3-2)
|
| 93 |
+
|
| 94 |
+
For the two operations, *admix* and *mixup* [\[41\]](#page-9-6), they both generate a mixed image from an image pair, x and x 0 . Here we summarize their differences as follows:
|
| 95 |
+
|
| 96 |
+
- The goal of *mixup* is to improve the generalization of the trained DNNs while *admix* aims to generate more transferable adversarial examples.
|
| 97 |
+
- The *mixup* treats x and x 0 equally and also mixes the label of x and x 0 . In contrast, *admix* treats x as the primary component and combines a small portion of x 0 , at the same time maintains the label of x.
|
| 98 |
+
- As depicted in Figure [1,](#page-3-3) *mixup* linearly interpolates x and x <sup>0</sup> while *admix* does not have such constraint, leading to more diversed transformed images.
|
2107.02306/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2021-05-15T22:20:31.641Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0.2 Safari/605.1.15" version="14.4.6" etag="9jJwIp7I2VJywBgt1VQi"><diagram id="K_hF6FMmzNE4K5TBVqyR">7V3Nk6I4FP9rvFqEhCDHnZ6e3cNM1VT1YWeOVJtRdmljYXra3r9+URIUiBqVvBDtvjQ8PgL5ve/3giP88LL+s0iX8298yvJRGEzXI/x5FIYojEn5b0N5ryhxHFSEWZFN5Uk7wlP2H5NEddprNmWrxomC81xkyybxmS8W7Fk0aGlR8Lfmab943hx1mc5Yh/D0nOZd6t/ZVMwr6iQKdvS/WDabq5FRII+8pOpkSVjN0yl/2yPhxxF+KDgX1dbL+oHlm8lT81Jd9+XA0frBCrYQJheE8jHEu3q3gr8upmxzOBjhT7wQcz7jizT/yvmyJKKS+A8T4l2ikr4KXpLm4iWXR9k6Ez/2tn/ubX9ey/tud97VzmL6xwaVcnfBF6yifMnyXB5fiYL/W890uD0uivcf9eXlzs/9nd0w2z01TvWubNoCd8Vfi2dJiiQ3pcWMKfTi7qSiGqqSxxl/YeUw5SkFy1OR/W7ePpXMNqvPqy/9zrPyjmGgBANLrpBiUXOJukX1WPKqHarlxt5j7EhbrPW44w7um2l5krs71B931E/Xc4YetXF0ArdT/DFNV/Ptc20G4cv0ORObS3GXd5A8+3sqBCsWW0oYoMt5A/fPGw1Uj0AodejvNH+VNz2KqZy5q0FsTLaZ5D7wnBfbJ8LB9k+LSwu4PmHCxB1MkQFMeV5ars3svc0zwZ7KidgceSuNZ3Pu09WyMme/svUGA41mlKOxQrB14x0M3l1e0NJCSFndt52pI5I037NyJLh+sigIC+ssVG2UDlgoneaqddoBvXWBqNgQiHhYeiv2XSDCAE4gJjclEOc7egbsjVQEYNlnM4UscQdZ5UmdCdr1/ldvsJ0hmr3DpqycB/6UDSOhgwOHDuFA3puJBM5MIBhe9V/pYI3ScegLqefxiMvbSQrI8AARGDa/KLvUG8M2mBMu+URb6gsH42Dvr3XDSqr6SEWh6OZ011BsuibyU/kIJ9ru7sN72EyYDn/sEn//Qv9OSh4w9kcTg+myYv6Os3ItLegcadEUcaScBsZyasDzOiPqMBmAHGYDLKg8K0651u9xqKbq2uxdBP42ci56D7MdIGNkrbypHs5nSwOYPoAyLBd0BjiNxBpyoksdTBxqKf9SBy4riyFQ6uD+okxdwd2pYPhXcXdZYYSquVvxN8s9OSI9GmKY+8UmDK9Lq1hg+ANeVCtPh+1l5jws1bsswoQm8fqwpgsrZelC8XRDY7jUxbnFqAvM+kVepq5XB6wEMGm3n0ZGmqVbSiB0XMYzSUJwjGMSJrh5XzQZRwFJ4oDShIZxQprD9Njl6k9NHzYzrCr7jcwwdeezYQSCC1QAag1/PeAtrrgQf5cpN/U8HplOl3VwbBL736VeIxq95pKvTZr0h83XkB409j90B1UD1PvpAuWuuDM7973YTU3qvraMwMpBUbvhyOJqt+Nt9B/L3Y5xBzwnoOOtZz3yRfLhRxlrBuowPiQmgfywLB1pMzWgY0DchdMDabSzIhQhnEKMxpMonlCCYxrSSdLkJELGiJRaMqKlT4OR6hPrXz+qN/ZY7CAz2uT4qnrfxM5KGwWhMD6nMWa31xdwPm4HPEpqGkdY8AzMPEWLLWTEJBcxDN/QirHTyCl1+BkF4l+uo2OKAHMdpJvr+FBrBmrNvgqjwVnBrmnNlbqKoQnQJwqGsyYvSsDMXdLSIFBr8sjtfcRgIHY10jGUwwUq6pt1NxKzDD6zpsOfusTfv5USncIBYIgfmWRE7mpN3gVO1j7vDyWpFiato62mtP5sa3TvWaIrOcaGtTzkfrUSra44BiZH5dBDAgi7LswcnYzfOjfqEXd3X8YYxrK8KzUFBatg06Oa4lTBpkeOcfiFjRuP2zSd+Tb4y9gRBV6bc1oUr/Lp2+oZ1Kd3mcHycgWcyoC5UrZgK+DUi3oUHXckCbDqoIb2aLriyN0KONpt0/lYAacSAo6aUlFLei5dAxfFrTVwzduCLYGjjhJWg0/EqmJ2w4Y5/GwBdZcW8moJnE38XRZi1PPciEPf6emG/BUJf1p3gHle19TjkueBm3qAeR7U9b7tfACo+vDvOx6AQV+5u/t5usoh3f3IH378Hw==</diagram></mxfile>
|
2107.02306/main_diagram/main_diagram.pdf
ADDED
|
Binary file (54.5 kB). View file
|
|
|
2107.02306/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,85 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Method
|
| 2 |
+
|
| 3 |
+
Inspired by @missingthemark and @sanitychecks, we wish to confirm that SNIP, GraSP, and SynFlow work no better than random pruning with corresponding layerwise sparsity allocation. While @missingthemark and @sanitychecks only considered moderate compression rates up to $100\times$ and used direct sparsity as a reference frame, we reconfirm their conjecture in the effective framework and test it across the entire compression spectrum. We generate and train two sets of subnetworks: $(i)$ pruned by either SNIP, GraSP, and SynFlow (*original*), and $(ii)$ randomly pruned while preserving layerwise sparsity quotas provided by each of these three methods (*random*).
|
| 4 |
+
|
| 5 |
+
<figure id="Fig:MissingTheMark" data-latex-placement="h">
|
| 6 |
+
<p><img src="jmlr_lsq.png" style="width:85.0%" alt="image" /> <embed src="handles_missingthemark.pdf" style="width:65.0%" /></p>
|
| 7 |
+
<figcaption>Original methods for pruning at initialization (solid) and random pruning with corresponding layerwise sparsity quotas (dashdot). Test accuracy of the unpruned network is shown in grey.</figcaption>
|
| 8 |
+
</figure>
|
| 9 |
+
|
| 10 |
+
Our results in Figure [5](#Fig:MissingTheMark){reference-type="ref" reference="Fig:MissingTheMark"} agree with observations made by @missingthemark and @sanitychecks: in the $10\times$--$100\times$ compression range, all three random pruning algorithms perform similarly (LeNet-300-100, VGG-19) or better (ResNet-18, ResNet-50) than their original counterparts. Effective sparsity allows us to faithfully examine higher compression, where the evidence is more equivocal. Similar patterns are still seen on ResNet-18; however, the original SNIP and GraSP beat random pruning with corresponding layerwise sparsities by a wide margin starting at $100\times$ compression on LeNet-300-100. Random pruning associated with SynFlow matches original SynFlow on the same network for longer, up to $1,000\times$ compression. On VGG-19, SynFlow bests the corresponding random pruning from about $500\times$ compression onward, while the original SNIP suffers from disconnection early on together with its random variant. Despite these nuances in the high compression regime, random pruning with specific layerwise sparsity quotas fares extremely well in the moderate sparsity regime (up to $99\%$) and is even competitive to full-fledged SynFlow (see Figure [7](#Fig:Random){reference-type="ref" reference="Fig:Random"}). Therefore, random pruning can be a cheap and competitive alternative to more sophisticated and resource-consuming algorithms. This phenomenon is also reconfirmed in a recent study, which states that randomly pruned networks with carefully crafted LSQ can match the performance of their dense counterparts while comparing favorably in terms of adversarial robustness, out-of-distribution detection, and uncertainty estimation [@unreasonable]. In particular, they consider LSQ derived from SNIP and find it among the best performing sparsity distributions for random pruning. Alas, SNIP and other methods from Figure [5](#Fig:MissingTheMark){reference-type="ref" reference="Fig:MissingTheMark"} require expensive computations just to retrieve the corresponding pruning ratios, which may still suffer from issues like layer-collapse. This motivates us to ask: can we engineer readily computable and consistently well-performing sparsity quotas?
|
| 11 |
+
|
| 12 |
+
To our knowledge, there are only a few *ab-initio* approaches in the literature to allocate sparsity in a principled fashion. *Uniform* is the simplest solution that keeps sparsity constant across all layers. @gale give a modification (denoted *Uniform+* following @lamp) that retains all parameters in the first convolutional layer and caps sparsity of the last fully-connected layer at $80\%$. A more sophisticated approach, *Erdös-Renyi-Kernel (ERK)*, sets the density of a convolutional layer with kernel size $w\times h$, fan-in $n_{\text{in}}$ and fan-out $n_{\text{out}}$ proportional to $(w+h+n_{\text{in}}+n_{\text{out}})/(w\cdot h\cdot n_{\text{in}}\cdot n_{\text{out}})$. Although originally used as a sparsity distribution schema for methods with dynamic sparse structres (SET by @set and RigL by @rigl), we follow @lamp and use ERK as a baseline sparsity distribution for sparse-to-sparse training with a fixed subnetwork topology. The last two approaches are unable to support the entire range of sparsities: Uniform+ can only achieve moderate *direct* compression because of the prunability constraints on its first and last layer, while both direct and effective sparsity levels achievable with ERK are often lower bounded. For example, the density of certain layers of VGG-16 set by ERK exceeds $1$ when cutting less than $99\%$ of parameters, unless excessive density is redistributed. @sanitychecks propose Smart-Ratios, which is an ad-hoc distribution method that requires the density of the $i$-th layer within an $L$-layer network to be proportional to $(L-l+1)^2+(L-l+1)$. This method was developed exclusively for VGG-like networks and, like ERK and Uniform$+$, can be infeasible for certain sparsities.
|
| 13 |
+
|
| 14 |
+
To avoid problems that riddle Uniform+, ERK, and smart-ratios, we require that any layerwise sparsity quotas must be attainable for any level of network sparsity $s\in[0,1]$. At the same time, neither layer should be removed in its entirety unless $s=1$ to avoid layer-collapse inherent to SNIP and some other global pruning methods. These requirements lead us to formulate a formal definition for layerwise sparsity quotas to guide principled future research into sparsity allocation.
|
| 15 |
+
|
| 16 |
+
*A function $\mathcal{Q}\colon [0,1]\rightarrow [0,1]^{L}$ mapping a target sparsity $s$ to layerwise sparsities $\{s_{\ell}\}_{\ell=1}^{L}$ is called Layerwise Sparsity Quotas (LSQ) if it satisfies the following properties: (i) total sparsity: for any $s\in[0,1]$, $s\sum_{\ell}|\Theta_{\ell}|=\sum_{\ell}s_{\ell}|\Theta_{\ell}|$, and (ii) layer integrity: for all layers $\ell\in[L]$, $[\mathcal{Q}(s)]_{\ell}<1$ if $s<1$.*
|
| 17 |
+
|
| 18 |
+
::: wrapfigure
|
| 19 |
+
r0.5 {width="78%"}
|
| 20 |
+
|
| 21 |
+
[]{#Fig:IGQSchema label="Fig:IGQSchema"}
|
| 22 |
+
:::
|
| 23 |
+
|
| 24 |
+
Aiming to unfold the secret of well-performing layerwise compression quotas associated with such global pruning algorithms as SNIP, LAMP, and SynFlow, we note that they prune larger, parameter-heavy layers more aggressively than smaller layers (Figure [6](#Fig:LayerwiseCompression){reference-type="ref" reference="Fig:LayerwiseCompression"}), which has been already conjectured to be a desirable property [@sanitychecks]. To design a valid LSQ with this feature, we consult an intuitive (although lacking formal connection with neural network pruning) analogy from physics. In particular, we interpret compression of a multi-layer network as compression of stacked gas-filled weightless cylinders of unit volume and height equal to the size of the corresponding layer (Figure [\[Fig:IGQSchema\]](#Fig:IGQSchema){reference-type="ref" reference="Fig:IGQSchema"}). As force is applied to the system, the Ideal Gas Law governs the compression rate of each cylinder, giving the final compression distribution which we interpret as the layerwise compression (sparsity) distribution within the given network. Using simple algebra, we arrive at compression quotas $\{F|\Theta_{\ell}|+1\}_{\ell=1}^{L}$ (or sparsity quotas $\{1-(F|\Theta_{\ell}|+1)^{-1}\}_{\ell=1}^{L}$) parameterized by the force $F$ that controls the overall sparsity of the network. Thus, we selected cylinder dimensions to encode our prior belief that larger layers can withstand higher pruning rates since "flatter" cylinders undergo lighter compression under the same external force (compression constraint). Given a target sparsity $s$, the needed value of $F$ can be instantly found using binary search to any given precision. IGQ clearly satisfies all requirements of Definition 1 and applies higher compression to larger layers, as desired. In principle, IGQ is applicable in a variety of contexts with use-cases in pruning before training (in conjunction with random pruning), during training (e.g., as default LSQ for RigL [@rigl]), and after training (e.g., together with magnitude pruning). In this study, we adopt the first and the last scenarios to evaluate IGQ against baselines (Figures [7](#Fig:Random){reference-type="ref" reference="Fig:Random"}, [8](#Fig:Magnitude){reference-type="ref" reference="Fig:Magnitude"}).
|
| 25 |
+
|
| 26 |
+
<figure id="Fig:LayerwiseCompression" data-latex-placement="H">
|
| 27 |
+
<embed src="layerwisecompression_2.pdf" style="width:99.0%" />
|
| 28 |
+
<figcaption>Layerwise direct compression quotas of LeNet-5 (top) and VGG-16 (bottom) associated with SynFlow (left), our IGQ (middle), and LAMP (right). Percentages indicate layer sizes relative to the total number of parameters; colors are assigned accordingly from blue (smaller layers) to red (larger layers). Curves of LAMP and SynFlow end when the underlying network disconnects.</figcaption>
|
| 29 |
+
</figure>
|
| 30 |
+
|
| 31 |
+
While @unreasonable experiment with lower sparsities (up to $90\%$) and a slightly different set of LSQ, our results largely match their evidence. In particular, we also find that ERK consistently outperforms more naive baselines like Uniform and Uniform+. Although ERK sometimes exhibits similar (ResNet-18) or even better (VGG-19 compressed to $1,000\times$ or higher) performance than IGQ, it yields invalid layerwise sparsity quotas when removing less than $98\%$ and $99\%$ of parameters from ResNet-18 and VGG-19, respectively, thus failing to satisfy Definition 1. Uniform$+$ produces invalid layerwise compressions from $40\times$ onward for ResNet-50. In the moderate sparsity regime (up to $99\%$), subnetworks pruned by IGQ reach unparalleled performance after training, especially on ResNet-50. Across all architectures, random pruning with IGQ and SynFlow sparsity quotas are almost indistinguishable from each other, suggesting that IGQ successfully mimics the quotas produced by SynFlow, which require substantial effort to compute. Therefore, judging by a tripartite criterion of test performance, compliance with Definition 1, and computational efficiency, IGQ beats all baselines.
|
| 32 |
+
|
| 33 |
+
<figure id="Fig:Random" data-latex-placement="H">
|
| 34 |
+
<p><img src="jmlr_random_igq.png" style="width:90.0%" alt="image" /> <embed src="final_paper_accuracy_random_handles.pdf" style="width:98.0%" /></p>
|
| 35 |
+
<figcaption>Test performance of trained subnetworks after random pruning with different layerwise sparsity distributions. Original SynFlow (black) is shown for reference.</figcaption>
|
| 36 |
+
</figure>
|
| 37 |
+
|
| 38 |
+
In the second set of experiments, we pretrain fully-dense models and prune them by magnitude using global methods (Global Magnitude Pruning, LAMP) or layer-by-layer respecting sparsity allocation quotas (Uniform, Uniform+, ERK, and IGQ). Then, we revert the unpruned weights back to their original random values and fully retrain the resulting subnetworks to convergence. Results are displayed in Figure [8](#Fig:Magnitude){reference-type="ref" reference="Fig:Magnitude"} in the framework of effective compression. Overall, our method for distributing sparsity in the context of magnitude pruning performs consistently well across all architectures and favorably compares to other baselines, especially in moderate compression regimes of $100\times$ or less. Even though Global magnitude pruning can marginally outperform IGQ, it is completely unreliable on VGG-19. ERK appears slightly better than IGQ on VGG-19, ResNet-18 and ResNet-50 at extreme sparsities, however, it performs much worse on LeNet-5 and has other general deficiencies as discussed earlier. Another close rival of IGQ is LAMP, which performs very similarly but is still unable to reach its performance on VGG-19, ResNet-18 and ResNet-50 in moderate compression regimes. Note, however, that all presented methods require practically equal compute and time; thus, the evidence in Figure [8](#Fig:Magnitude){reference-type="ref" reference="Fig:Magnitude"} is not meant to advertise IGQ as a cheaper alternative to LAMP but rather to illustrate the effectiveness of IGQ.
|
| 39 |
+
|
| 40 |
+
<figure id="Fig:Magnitude" data-latex-placement="H">
|
| 41 |
+
<p><img src="jmlr_igq_magnitude.png" style="width:90.0%" alt="image" /> <embed src="majoreffective_app.pdf" style="width:75.0%" /></p>
|
| 42 |
+
<figcaption>Test performance of retrained subnetworks after magnitude-based pruning. Uniform+ is not shown for LeNet-300-100 since it is designed for convolutional networks.</figcaption>
|
| 43 |
+
</figure>
|
| 44 |
+
|
| 45 |
+
Unlike pruning to a target direct sparsity, pruning to achieve a particular *effective* sparsity can be tricky. Here, we present an extension to algorithms for pruning at initialization or after training that achieves this goal efficiently, when possible (see Figure [\[Fig:AEP1\]](#Fig:AEP1){reference-type="ref" reference="Fig:AEP1"}).
|
| 46 |
+
|
| 47 |
+
Algorithms like GraSP, SynFlow, and LAMP rank parameters by some notion of importance to guide pruning. When such a ranking $R\colon\mathbf{\Theta}\rightarrow\mathbb{R}$ is available, we employ binary search for the appropriate cut-off threshold $t$ in $\mathcal{O}(\log |\mathbf{\Theta}|)$ time. This approach leverages the following monotonicity property: given two pruning thresholds $t_1,t_2\in R$ and corresponding subnetworks $S_1,S_2$, we have $t_1\leq t_2$ if and only if $S_2\subseteq S_1$, which implies $\text{EffectiveSparsity}(S_1)\leq\text{EffectiveSparsity}(S_2)$ (note that in general $\text{Sparsity}(S_1)\leq\text{Sparsity}(S_2)$ does not imply the last inequality above). Thus, binary search will branch in the correct direction.
|
| 48 |
+
|
| 49 |
+
In Section [4](#Sec:Pistons){reference-type="ref" reference="Sec:Pistons"}, we saw that random pruning with carefully crafted layerwise sparsity quotas $\mathcal{Q}\colon[0,1]\rightarrow[0,1]^{L}$ fares well (especially in the framework of effective sparsity) with more sophisticated pruning methods, proving to be a cheaper and simpler alternative. Effective pruning without parameter scores is more challenging because there is no obvious way to produce a neat chain of embedded subnetworks as above. For example, given two subnetworks $S_1$ and $S_2$, $\text{Sparsity}(S_1)\leq\text{Sparsity}(S_2)$ does not imply $\text{EffectiveSparsity}(S_1)\leq\text{EffectiveSparsity}(S_2)$. Assigning random scores requires $\mathcal{O}(|\mathbf{\Theta}|)$ time to ensure that any cut-off threshold yields LSQ according to $\mathcal{Q}$, which is not scalable.
|
| 50 |
+
|
| 51 |
+
:::: wrapfigure
|
| 52 |
+
r0.5
|
| 53 |
+
|
| 54 |
+
::: center
|
| 55 |
+
{width="50%"}
|
| 56 |
+
:::
|
| 57 |
+
::::
|
| 58 |
+
|
| 59 |
+
To circumvent this issue, we design an improved algorithm that produces embedded subnetworks on each iteration, allowing binary search to work (see Algorithm [\[Alg:AERP\]](#Alg:AERP){reference-type="ref" reference="Alg:AERP"}). Starting from the extreme subnetworks $S_1$ (fully-dense, corresponding to masks $\mathbf{M}^{(1)}$) and $S_2$ (fully-sparse, corresponding to masks $\mathbf{M}^{(2)}$), we narrow the sparsity gap between them while preserving $S_2\subseteq S_1$ so that $\text{EffectiveSparsity}(S_1)\leq\text{EffectiveSparsity}(S_2)$. For each layer, we keep track of unpruned connections $U_{\ell}$ of $S_1$ and pruned connections $P_{\ell}$ of $S_2$, randomly sample parameters $T_{\ell}$ from $U_{\ell}\cap P_{\ell}$ according to $\mathcal{Q}$ and form another network $S$ by pruning out $\bigcup_{\ell}T_{\ell}$ from $S_1$ (or, equivalently, reviving in $S_2$). Depending on where effective sparsity of $S$ lands relative to target $s$, we assign $S$ to either $S_1$ or $S_2$ and branch. Since connections to be pruned from $S_1$ (or revived in $S_2$) are chosen randomly at each step, weights within the same layer have equal probability of being pruned. Once $S_1$ and $S_2$ are only $1$ parameter away from each other, the algorithm returns $S_1$, yielding a connected model. Note that this algorithm implicitly requires the LSQ function $\mathcal{Q}$ to be layerwise monotone: if $s_1\leq s_2$, then $[\mathcal{Q}(s_1)]_{\ell}\leq[\mathcal{Q}(s_2)]_{\ell}$ for each layer $\ell\in[L]$. This is a reasonable assumption and is satisfied in practice (see Figure [6](#Fig:LayerwiseCompression){reference-type="ref" reference="Fig:LayerwiseCompression"}).
|
| 60 |
+
|
| 61 |
+
In our work, we argue that *effective sparsity (effective compression)* is the correct benchmarking measure for pruning algorithms since it discards effectively inactive connections and represents the true remaining connectivity pattern. Moreover, effective sparsity allows us to study extreme compression regimes for subnetworks that otherwise appear disconnected at much lower direct sparsities. We initiate the study of current pruning algorithms in this refined frame of reference and rectify previous benchmarks. To facilitate the use of effective sparsity in future research, we describe low-cost procedures to both compute and achieve desired effective sparsity when pruning. Lastly, with effective sparsity allowing us to zoom more fairly into higher compression regimes than previously possible, we examine random pruning with prescribed layerwise sparsities and propose our own readily computable quotas (IGQ) after establishing conditions reasonable LSQ should fulfill. We show that IGQ, while allowing for any level of sparsity, is more advantageous than all existing similar baselines (Uniform, ERK) and gives comparable performance to sparsity quotas derived from more sophisticated and computationally expensive algorithms like SynFlow.
|
| 62 |
+
|
| 63 |
+
[]{#AppendixAlgorithm label="AppendixAlgorithm"}
|
| 64 |
+
|
| 65 |
+
::: algorithm
|
| 66 |
+
$i\leftarrow 0$; $j\leftarrow |\mathbf{\Theta}|$; $\mathbf{M}^{(1)}\leftarrow\mathbf{1}$; $\mathbf{M}^{(2)}\leftarrow\mathbf{0}$; $P_{\ell},U_{\ell}\leftarrow \Theta_{\ell}$ for all $\ell\in[L]$
|
| 67 |
+
:::
|
| 68 |
+
|
| 69 |
+
We hope that the lens of effective compression will spur more research in high compression regimes. One possible limitation is that it is harder to control effective compression exactly. In particular, using different seeds might lead to slightly different effective compression rates. However, these perturbations are minor. Additionally, one might argue that for some architectures accuracy drops precipitously with higher compression thus making very sparse subnetworks less practical. We hope that opening the study of high compressions will allow to explore how to use sparse networks as building blocks, for instance using the power of ensembling [@ensembling]. Our framework allows a principled study of this regime. Finally, since effective compression strips away unnecessary computational units, it offers a potentially higher resource efficiency during both inference and training without compromising the flexibility of unstructured pruning or requiring specialized hardware [@hardware].
|
| 70 |
+
|
| 71 |
+
Our experimental work encompasses seven different architecture-dataset combinations: LeNet-300-100 [@lenets] on MNIST (Creative Commons Attribution-Share Alike 3.0 license), LeNet-5 [@lenets] and VGG-16 [@vggnets] on CIFAR-10 (MIT license), VGG-19 [@vggnets] on CIFAR-100 (MIT license), and ResNet-18 [@resnet] on TinyImageNet (MIT license), ResNet-50 and MobileNetV2 [@mobilenets] on ImageNet-2012 [@imagenet]. Following @missingthemark, we do not reinitialize subnetworks after pruning (we revert back to the original initialization when pruning a pretrained model by LAMP). We use our own implementation of all pruning algorithms in TensorFlow except for GraSP, for which we use the original code in PyTorch published by @grasp. All non-ImageNet runs were repeated 3 times for stability of results. Training was performed on an internal cluster equipped with NVIDIA RTX-8000, NVIDIA V-100, and AMD MI50 GPUs. Hyperparameters and training schedules used in our experiments are adopted from related works and are listed in Table [1](#Table:1){reference-type="ref" reference="Table:1"}. We apply standard augmentations to images during training. In particular, we normalize examples per-channel for all datasets and randomly apply: $(i)$ shifts by at most 4 pixels in any direction and horizontal flips (CIFAR-10, CIFAR-100, and TinyImageNet), $(ii)$ rotations by up to 4 degrees (MNIST), and $(iii)$ random $224\times224$ crops and horizontal flips (ImageNet).
|
| 72 |
+
|
| 73 |
+
::: {#Table:1}
|
| 74 |
+
Model Epochs Drop epochs Batch LR Decay Source
|
| 75 |
+
--------------- -------- -------------- ------- ------- ---------- ----------------- --
|
| 76 |
+
LeNet-300-100 $160$ $41/83/125$ $100$ $0.1$ $5e$-$4$ @snip
|
| 77 |
+
LeNet-5 $307$ $76/153/230$ $128$ $0.1$ $5e$-$4$ @snip
|
| 78 |
+
VGG-16 $160$ $80/120$ $128$ $0.1$ $1e$-$4$ @missingthemark
|
| 79 |
+
VGG-19 $160$ $80/120$ $128$ $0.1$ $5e$-$4$ @grasp
|
| 80 |
+
ResNet-18 $200$ $100/150$ $256$ $0.2$ $1e$-$4$ @missingthemark
|
| 81 |
+
ResNet-50 $90$ $30/60/80$ $512$ $0.4$ $1e$-$4$ @missingthemark
|
| 82 |
+
MobileNetV2 $90$ $30/60/80$ $512$ $0.4$ $1e$-$4$ @missingthemark
|
| 83 |
+
|
| 84 |
+
: Summary of experimental work. All architectures include batch normalization layers followed by ReLU activations. Models are initialized using Kaiming normal scheme (fan-avg) and optimized by SGD (momentum $0.9$) with a stepwise LR schedule ($10\times$ drop factor applied on specified *drop epochs*). The categorical cross-entropy loss function is used for all models.
|
| 85 |
+
:::
|
2107.13077/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2021-05-27T11:34:38.689Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36" etag="U06KswOHvH7oHdXf50MX" version="14.6.12" type="google"><diagram id="_xP7Babr614wZI4ApjpJ" name="Page-1">7LzZsqNIui74NHW505iHSyGJWYCY4aaNUYCYZ3j6dtdaEZlZGXl2dZ+qY9bbelmsWJID7s4/ft/vDv/Ar80ujFFfPLo0q/+BIen+D/z2DwxDUYQCf2DL8dVCIdhXw2ss0++Tfm+wyjP7bkS+W5cyzaY/nTh3XT2X/Z8bk65ts2T+U1s0jt3259Pyrv7zqH30yv7SYCVR/ddWr0zn4quVwejf28WsfBU/RkYp9utIE/04+ftOpiJKu+0PTfj9H/h17Lr561OzX7MaCu+HXL6u4//m6M+JjVk7/ysX9Ib8X7Z+nJns1hepclGqYf4Lxb+nu0b18n3L39Odjx8yGLulTTPYDfIPnNuKcs6sPkrg0Q1oHbQVc1ODbyj4OM1j9/4pK3CXXF7W9bWru/HTG04hEYvSP8/8wxHsRlMIHCLv2vkP7fnnB7T/9Za/pbBm45ztf2j6FoGQdU02jwc45fsozlC/4eTXVd8miZPfGtp+VzAw1N9w5qu5+KN+ceI39ltk0bdpvX4O8rv0wYdvBfw/Ucb/aVWgcYRm2K9UgSDU/cL/h1XBIn/SA8r8Sg/IL5TwQ2H/dg1gv9AAVc9QQn3U/kkV1LBAz/0I6L+mT9y6gBNQpt9/Pwg+vb7/fnqJfzQo7o8mMM/4n08DbV/j/Wj+WytA/99Ywb9BdwT2Z91hDPEb8scf6i+qRKlfqJL4T2mS/ltN/kXaUIP/imqByH6hWnuM2invxiYbf44w/jh4bxOQEcc/6PVrsL+oO/4XdP3/fY+H0ZOg/uz0LPYb8wtjYcnfSOKv9kITv2HMfyr8/r33/zCRH/DkhxGgf7QHgEF+SO33xh/y+1+FhH+L6SG37J9t7Vdx5c/290+2BtQ4/8qgfphD27XZP9nUd1NUl68WfE2AcYA54Bw0ihLAqMv3gaZMUzjMLy34zzb+b0kt2H+fWtBfxCP8PxWPfqSs/3xqMf+HZRb0r6r7P5tKUORfwMhZm14g3QDfuj5r7aJs/zthARmNhw+N/jfyx9fg++zPl9v+7RFf346f31K+hLfw+ZZGU/FTYdlezn/oEXwLflwEPv/eH/zyo7u/1dnULWOS/fd4dY7GVzb/LwPrt7yy9E9k66828Acdk79Q8Y+2MaujuVz/TNF+pffvEYyu/MTvbxMj/8nEfprTjy6+7vz7qj9Sqv+uIwykJ+wPKAj/c79fkvpLvx+r/CmF/w1DRf8FBgFoaA8/ls2H+f4M1GoUZ7XRTeVcdjBgx908dw04oYYHuCh5vz7B4VfQ4J+D/dzBSBFN/Rcjz8sdWij3GfLyoxX50QI+p9EcgSD39RXj+/b1D+xaupxubogivLoL+NEsp7g7L/DpzsDv4LzrJQAfrgXfuyL4wGNOfX+6JuEvWISEYq1xl0u1bS8ueV2c7vKQLr504ZXL7XrhjwtnvS7fx3jQWYrpeNhYOUn7dEKPw7Aw1+rtncYRx9MMtZWt15Z8igFBUluqnC3XuPu2dp5RsIye7+s/ME7L9xb+Ab8AgfJHjrH2TcpNZrPN15+uy9dKYx7gnAJeB/69A87SXyty9GJ1j4yZtjgxXTEreog33bNnL+mkc8luzEgE+0LJG74BJMQLdmcYO0j7nCFtzLUg50934I44qVTwyMnc5owOtgMNg9g8wR9Rfdnw/PKpo+ZLooucLAj7Irr5NIjKGc5x5uTubcMxzQJDcLAvnO+G6v40ClRYD0beuPRmo+w7Y7xgK3BSuo7aqGIV1aWcGxd5QdbD7WI2HpiJnYrlde38AnSjXK88fiVYJ8A4xnhvoKkCv0GWBjTV2jvTEUNSk90wFlVrRTi83H6QaJJqePzkF22j67XwcXU3irVJMIMkzDPe0geg9BwYgF/e56Eo1tViNewpo5z7eMwXGl93TwFHeSvIkhbXJs5ZAsqYxFfQMW8uk9S6kuY6z15J3umyVi0tpRfIpoj4CqSqkUWDe9l7K8O+oifXLpISp/WunCaZPZmwKIw1enbtDdpk+VZslMiUtWXet8hTAhGfsjswaL66IZL8EnM4V3U//DGgc9IQK0rqk/pNhCdJvKnskFddqozVr5K0PadDlG6tkFHLrX27j3oC6Z2/UwuUZSjc1viRzPtyOuZAEHg4KxN9IIZ7F7AWN/aMr94ARHPecRNotPWb13VUWaQJsoNiX+iaebLdcOwT4ZmXsMVejq+crzV5pEdGugdgJGdZmB5Zz7ZYzMyQqcwAjdaQ0VIFPeJOq/pCZkxLk+GDLbPj0RpGXpJ5GlVrmq3AD3hCEXtix50rvq6XQ9fvfdODZqVDVj+g8Qc/63fBELd15eQDyS8qdSfaFjPC1yxj1yTTsjYRyTRDJnAnpZw+DTWujFuRVWr24MOy52+n3oR1ssYSq0sl4xA7J1XE2NaN7JLDIHo5Me0yJiK6OR4DAq354a20nYJpGIcnkNSQ3U+ArXhJOyo985OYurpQmWyTbt443fBjrJDnGIcak3n1GUo0HxUart4CYhYfuWQcCxnv4f1Gvs1K5PSdvgzmwVb7C4+ZdAIuyWNenKuiSLR1zj6xAE8mnd0j6KyOQlNuK6qWuemJfDUe7lFQaxqA/rfz1sHw8Pa5NoFCa6sj89Rjk84cdqprJ9CgsWW5syQSYhxZorokkz2pUHdJn8zvRQ79fhmFGFdmsiW3Ebg+L+8biLGLKCfTdJyrIjGGYcRnw9dQvdWgsmxDPvs559Imw3P9KZMwRNaL3hyYH1Zu2/ug346ZG2Pf9XXLlEhNzypv3jpjoa++Xdj9vL38sRjOd+M+tJisTmE/9PPIqExKzzFv7uBUt+tbAlVQecx43xhdGLXo/YZoS8Ekpx+fQAtcGiREDZJgpvLno4yGAme145I9mCxATSQ7cXu/LWmcEAV+jY1Sz4C/m6+1wprkjcK4PbGCx1MIGFVsYm1LkqF/z3WmkiwjSI9YDfaxdeQn03cYlvNrl0gEGsII2bx54HI0Gq3qppegK/FoHgE5MP6p7DOmP7JowoXzXWkpibhT6oTIJN+wRWsD+6CzskBdtQcWGKaBmlIzmbyVke6JdEMWv8H3YEXRrL3tEZBCFK2ZTk1p14rJjdhjfDVAwqJjoB90DXp/Ym1cqUkNsxOxw19Yy3I2bmgNRR9PlK5E+pGyMQj25eGQhNVnsVPri9T7HinZ46K64vCyJnHAz4T4GNTCVANJ3J+zj+Lv1M2I2TahiWlaOWa3GfqGxr1gXlHn1BC0VuSaNsxpriPjBz6xV7ptG29MRqWNENeeo0dP3Gqax063Ni7ZSoAc/OipmwvDLmabed5d75kgYwnCP5wTRv6HVWSrgOn82DC+dkykcJjpns7hkb5flDVmdr/ebmfFVEFbV7vU29xOhOGekPy61HuNnSCHkOu+GAfxYsqxy0sBkYEPIFgzpNV5IapHHonOgbkriULbSifsoSQw6xJB6Ih08pZJ3AIWhRj+w48116HFsVRmbBnS+STE402Roxffn1UNIce9NXlPNKfAN+vswiWp4BKpIDcZVrB+44v5m1vgebev847Q19ZYdJHIY5fLLdgehgyDjwVRy+WR3t0gFcAZAntcbpftAX8fSs4Od3jc+xzntTVsTTzw5foi7AUc8dFyS3iQVYyhChe9lctnajVvv63l2Vyv/yYKTTC/0Rj7+88/123oX9VtMOY3hvwrcMfwv8fo/5s1m18Vbf7nQV5+Av9x5jfgZccUOCYPPqr6+yfkTehYfFuce+c2eJDLLxf/cjG2i79d+O1yYy48A4z2g5w/R5079bxDr6hJGhvQ5QSfMetWkjoN1RIBy/dddIgsvD1JwIG59cmWGlAUF2WajCvhm5mFzBqBwnkqzEBG4Nuqj0mYctLs/QGh8N+C+yv8g7yZxY6f+8glDXQELNZ6+h47XSOCb0gPsaam5jSNxMwLQodpqXN0t5AD55Uy7QP0tdp3OtIwhCnTxbFvdOScg52a0Xn4w2w28XQ7qJ7xBuwDXmZJrm+rfSGjFA7AWObinCoZKfBr2tVwjCWrW5Q4VCe33IfCtChzqCkZ6U+CKQzX6o0mfhdBpfkkWz5xmjq8lLTQ3GrlCHYi1Pw795rBqKTo8KUJz2v1GSh5O8t2GIX0M1LdAZxYG+dw75eZinz+HfsNJb4kinIl5kxq1QnKpjHCSIIyS6+g9wrCvIltrHOhtDmLu+xSjb1TndrhzGYlD1EFej9rgKQ4Vyq9M4xBwuJXwWs1xJBs1BlssiElTyIb0O4zfQOtrNtUl1XCx9x8MYTW2mtcUzH71fM6tj5wnL8TedyRHu/AbOeUW6z6bdYu/by+2DehmPY8nhq72+xViE4tgzoTnfV87NMcBd6BpjMjMtXFitIpC9f16nbiwvLLPBCPhNzxfR353juhfUH2MpX09nAxyJeSoxUzL5lILwOGyONcPwB47PebE/n0WGt6ogYwdZUdik20F6KC1L8rMp/fyzJsS11d9ZhmXPmdRbhdtRqWqd10Wvg0C5Ywm6VzmGxcWTHoO9FUwUlAiFtsaULX1leiifFNmPIhrClOW0OP+rpgfQNGmnjpsA89pBNXeUPjR2271TEdRA7fK94TuybY2+xcnMA8VJcCeb9rENep70MFuAc9mtHgES/t0fCmxcs6yo0cL8jpT7O0+Ye0ri9mXRsBw94seydo/4bdsR3LC9E8feSV1cJ6bsvDix5GxqN548aqB12sjAIGQTCMzmVceM3KmQGj5xh1gMB8+OqNOYvHG6egNzu2dzcsvEvHQYP+5qQ7JCWvZWSONYwADePBjV3anlRs+dV6hzQFdMev0yPYbWPA5uROiNHxZj00JtCupAEzZotcNmSeJoxhy4WkmV2+wtDgbbAu3t/eC0fXdvXG9nQdo70HquFPG0B1smkloljSbq/vsZZHNNZsMD/zSdSkxlINi96/8IXX4a0I4Vx53ISy4xrFsoFLV5e93voxszQ/rTxydOlI12ShSoGVT/dexYG13or0Vq8Ul7AcPdg79kqWxDpYtz/eraY1VxKv9VpDR25XzzQezGzt2eKqg7gmCSArcXsU3w5IzhGjmWXOoARNyfx6nsLng+0M9GH2IOa/T+c9qs7dwrlcl+v1ABTqbnKzfTTxCF3M0AVr1wmWKCKEjKJnYxFamwolLqd46qJeUVlrH2XI3mNsFugSShfUBk50ause2zsAU3KE0jgYnXJnYyxrLaQdJDFH/FbXJzJ1TiSveDK/L8cbx2+HtldRKOTjLcJKM66S90LdZvex1t3KGiJP3KDfCb0dd4djb+VulqPJu7SH8rU2F4Ptpxw+NPaB+BRgGsyF9jADE5STZsi7Mr1zgtTWa99Or5G05VA32Gpm8wvf1XiBmrEY3gFG5uRpC3msUm3rxA40rH5xtnwWHQbrAdc9RQdOiotxz54nxAAPzEfEbAgN1cwBiOP1Y7SqoGkYXE09GXgvN1rKYfbJTGfJXoQi9NjBqcupbqD5B7K/GCm2YOQlLa8ac8/Tvg4oAjJmp1Ujb99ZxdWjuLamtdGdWPY4bJQA+GPTQodTyidkxUCIsF2ZfO9Tklu13TSkM6pk1d45PxVTzV2VkMxJubQLGGzAZXeef1dvG3Y/uFo09ua0FvfGsSlahtTphjyIiNkHclv5FKaO0DyyKA7udGP7ISfS7i7KC+feKEA0edLrGVpmhezhGShhlrppeXFMgGhu5yHn487e8hNXy9GFTgUL217Ao/lZYYYdRtI6fYPIsMFxH81CXQ2sKvUzuTgw+GnGmYYQPmQXQNOsyA+QVg8jEFUIGXSfhVyOOE5odxI5LtQrPhLOgblDloLCQLEi6QoxVXp102pPC/90D4fHvKxwhgxblPimjIWpafp84xit7eZuIeJIs47MqozQfRhzh88xezQS2YaXBbEDvHpe3zj0m6TWYCw7MRcSEMzPWmgP7fsLKWDxIt0b5tlRWizxyEStzmPU6tWqjqEVH+FYeQJrQ6u3D8+GgS2ECcbRYVrJLxn2lPuTYszdC8clKvqCwbEHXgLE2IS5arXvqYbDHFZqiO1j6YtxStWP95xWDphGIgDpwpt0gwjkcGBLbw/mKD/t9UiclM0Yict8/SrrPZ+i4WHYXDvan8715dHjNQXE1Y9P4PgpJGzaT1Q6o9qzV9OkPD1wPWsxtB+11BjhKHPxqjx/RD5+5Nr5TnLvyqLWtKCWocU0dekfEqvCmhnnQd77h1kEL8EQaswYaTUbatsbSL6ZWao4wzwmmN1i/UB73IMvvIbLAAtiC9eP8b2PbE0TShr0YTPiOQcnHm3g3vlbf2kF0giB4PsBpCP+vAn9YvG9YZ2DvnK2yZKR9sFVXDivFyokjGsG0yLEKYielO8yOCu2nMR6LNanqgq9YvOOjsgKURsx7TBDjs3L5EYyTa5j2ib54l/eS0SaAGl8UNaYCKP2aC9XEyQlI4EEPdupfnq1FCWpSUajr4Q4ru1jSkeMOmea60EUWp8mlOuEdJdQi/AdzoqHXIQTi2lHTl4ji2rOUB+godUqg5zO3hXbavTGK+cSOTQaPdg8fxc2rEiR4/3riiyikHnUop9XjAltrQ3g5V7VxjTWXmajF7SIsZcAGvdbfHlskzHhzysqTUiGXERRN7XvU2BhS2vTG2XYmfxV07UFiCs4/PH29/OlK4bJP+C8vcIQGQDO+pqZNR1GI5syRsZPp9LVgJ/VNHQq4Bavvrk1V91hSad7kIOpHQZmA65TmSiD2i89wGgbUPwldAojYhLqQeGt/GiMzJFPL3jT/W3NbruVriyy31EUZdQj8RErC1CD4lMKhUj3Dv5acoXX1wRtZJ3qWKTDIlKC9VfCJuZwslzj5j6e2Mn6DwIJqaBdzzZX3lyrWRCZW7n3XmUPgmFhXguDMRX8nSclmwREooNY3ywmOabAlxqnR195ADykW9ywRRMFS8uGAhRaMlSjtn00Esnxgg7tzAMwpn55C3ZN8vDrmmhncH1JWul8hyo+1385N1PEhsA2Nopbj8RhcLWvD62NMQazZhhVNqp30GExwzHXB7/2GgSfWP4Yky9fwmRSqePn275ZYkgZ5/6+6AA7WEmJWzAYB8DT89KNBX1OcF9adQSmOFUUTrh+sKEPWcByEntS1US3M+uNYQ3mVBgBLdLOvqcQxZOUJsG8KK3twJzVWul1nIzry+ztabi+hwBO5HkdKJfcQ0Vve+Im6Sc5BMRQ+VZ2wWGQ1Z8bVqqI28WQPVaZCkCaYCLO2EH+cNCToLzyedyS2JfMM/epN1/epAUqdZCQ/qyCtNeICqcBlK2eJrbCSjF/vYf7uxZkKqVI1aCmg7mxQYPs9lJr9Dnb6PoWoTeij8QPyRWd1jZj25mgOxfRgg1L3xLoYL3mxZKIX07A850Cq99zYaNsqrcDrQXhNhc5iAo0tDKOl9spWNUQrcMwmlS5NYeSABkdXPx2ees1ifnxWtEjeOByVUYay1x9pH/MLQ/mqLHHpdc8geLOxtS2oxmcApCwvIBbKzh3M2db1LFYpe6raUV0Ix4YaSKVFeAvdNIcCdImZZvl7C7q7LM22LTxScEKJ1sUEBCk72thhbSgHkhECRmBTY8TgyXeoqQln83gasgLf+UJqSUUNuaiF1wxnZ0AKpJByERRkDd7+8iCqpp5p4u8xUBurLfhALed7xWm/5bCdu4yMAMsIfD435Ss8Jr1cZBtRDSLD+Zl3Al4uv4pOolynWDslPpmH2PERasuG9PSqQwrEZcdMd0CCcDxGJcuj+/C1QZr+lj3hGWK8HOGg7ENgEpF0jiXWOCR0H8diUC2aqOtgPPzw/NT47hYjqubCnkNJAnWWP49mwho5jf0z7s8UYz67cfe3D8UrHD8N/r/4DYQFPsX9hL8D6hXvaE1cfZXvYrrq64MwQd/Rn5Wq9JXiE96LeM4mXPmNjwvkX0JX5cefHhemudFcS5CfOknrr9d3y/xCSHGgnsCxkrscgzjUCc5CqMfBIoxBsv1/XtY06RuLAMCXYaZJlh3guw3OcutQz/l7CNZ89u9WhmvOTv+ZdIvCzUYpQQAkWonSBNSju6HDCIo9hkDlXJNyE1uvKkohEEnAaMQ26sZSWrNM76X+KUecsLkJ2tGDDmbziJFxKXp6VzQjtHXXo0yB5yr51bhFbeZo2H63xQx4qsls8ZE7SUPjhBt1yGB8eu6Xtvohpnoy1kQVUyYNlP6TbGb6O651+l602Hyp7Z3PbZdtyo8MzabyJf2RS1H74DnuCmiYV3SChNkBhCftXkFK3TU1BbcNWC2K9G5wTNwQvY5tEb/XOsAuxCdHZiBE2EekE05LsWC3BAGeZVDote3q8Qc1+DtBZfECefPZdYKQsWFeFsBl7gRZgEpD+PCLfjtTmCvK7zMuEgTxQV3LwSXRfM2vA3Sm+sHdSHun/sOsCsxgMtm7ivCc3eJqq7D81EbD2lS+OguuNfPpdTn0vdDuVMfSYCxr8GAD/F8gfrX7hJeBwTypKZYvoukvzoKQPqOSmVhETgKA3rTAvsMbqUpv/g9GLfStQtUFSVYCpHYqPhoKbfPh1oWbsEX9ljCeVvq49XIMNneLXBPfUCC7sXxq/sRdM8lnvK4LArrPM/u9jLDt7BL7Xf3Bkidn84f+TGDziW5NEMwvj+U8M6uqv7yuPun8yNRbs3YvHy+rD7aHO75ScJSRHxBA/4BrASauiQfD2wxIbr3GF9byyszaoPaP2N4agLXBhdbQevneCk/mhTed9eeGfGYsMm0KIfB5bk8mE6jNI9ILQ5qFWoQfQJVG3CzgCOQjeorRSuno4fv3GI2KNQntE71WnnDO3wWqYG/fOmMyLtKxnvymZoKxujgfUPvgnJwK++q9BCo3kgr9oI5UfS8xoowB2MIeyL1lw0ix6mG8y3w1wwxB1Wt6zAYhEJOUFIR1PdjIn7XvQtSSlTiUCk0qktK/pH5vMkRzKdgBhV6BnJpJAiwoSAMt8CRPxYQK+N7AxPEfvp//uUDwJuFj6UMxHb+7guu37+GtvrT7D6WEgO93ZnzSvTu3r+5PXhv0JJBgLYL1xDV67e/fxDOdYDsB868J9/8bo3VKX3boQtQnV/HRPyM2buw0JrewTg0Uo6yfXxnwoCffNugxz0FiJn43Hexbt7zJfJzl34GNfDa3Cy+4wwPZst/RbiPXMpNeRGUQWMUnhgjDsMaJkdJBDmT3wo+Tb1xGh9y+ngUsoJOEH1XeODZa1F9FYsNBVe/QO3Zx8T09dE/cUg4aadmYG20wtYVssisO1Hs6zK89mns8dJlobn4fFM53571BM4n+XaOMxuyotCSuxy9f9eFF5hjUpt0czwBnI6lKxKnGuD7IHrcZBA9Wiair2U29668oyIp91IHOQ7ZKVLbGc7JrjOaioq2sm8YAPHkFSmid/d4YfU+euE/vqGKmrUOUbXnTxW9H4NUzTcDWjGKUxjr8a+phXfD4QZMNZLD5++Q3eb2egjjcmPPCpch8BH07d2KY0DMa8woCYpcZmm68Lme+2M8O0oqegCkVNOyCNocrHhLQizBz5lBt1ATuGEeG6HuIoic6A3kUIYqcsdkK9I2MMOeSi8hsQm12EA8eNRJIUtShzCl9ff1ROK7B8kEDJ1h5qOwDNaLAIxiAyNJKNzHEMeNbTCEmrUaC4hAtuqYmDY2rZ9d3yp3O7CXJFNC/an1GUcWJaXYFVTF890iIUakJuYISdWPfRmxS3kbWl4vS40YI4RJLRIzrdeQCkvaLyPfH/5VzG5sKV4lgB4v7529ZD5kQ8bix3DzQXHiKHaJ4jNYZwpy9g7JadJt6c6K06mqIi7WDde5p9BtohvOFmdzFTEJSZ+qVVCzIGDYA9a2JuQB9NOgqo1+1sNOF8uJ21R4J/vaaWltOpuPkCzyIE+H1tF2XisnUkoYmJRI5W2ZKo0rX/fzVkOdT+u00lTvUvg6Tb6dyp1J9tFJQUaGd+iak0/zxqU6MgTb2WyQoBV8l7vuKo7hjJIaK9QN/97x2K3sfrZWQrEn12mQGDugczy/ajHzbWKhTpiD7qgXTwKB4RHHlupV2gDNHIWXVQtwta2lZVtKHeipFJUSlc6rSzlBEarUO5cNgp3y8Gnh1zLvDbrMtGUn+GgQqqMvcIo0hrZ+i/EW1f6CN6Ohv+80iKacqSvNOWWap8e+DxHRtaWo57zrxyyyHI0w8jkhUUK/NbL0L2BqTKnqoYljPVvkfQQzpTqzqYYiDRkgcRPm7/y5tfpp9sfyInzb/azTFHyUkgr9Gtmxjmp3ux2+ZfcmZHGdxkJjder+aWxvLGD0dBf0hUUkMa4tA1ZHXa0zlmo1YmnBk2n7RG36Fc9P4hk68mZqIdbQn7LGUx2v1oprXxigknocdVF+TxSXS/H5ITyNpVFXPBIQCdgvoyMthtlrAJmrXjlM5NslCjUT4p2u+WKB3zAY4LNSM9IoHo/jAJhmGr88k6ufrbIAmKMyJ0VtDqy1VPr71qYb9oyu/TOP2qhnU78wwgJV2qtlFQVIVJEr3NBObH2YWVHfjLQqpOvuTWO3pprzJHtWCwgq141jqYy9dJR/UFf+xF4sS+T79WTJZ4a3Kcy7x0IItKN/MNHA9ni7UK+FUeBM9kV7z23Hx+5trGI0ZOHumerWp7rUjN2BOrd8QZVa5XdqdKnyRKTQrswi1N2xAkwTsVavHSV2ZB1fGIpMpS5jtJrURFDD4cLYP5Lpef9gzAUR6btwF3g9U9L3Q6rsWFbwg1qxBkZM5ZUyQyl9qqiIQWkl4Tzjtqlv1YnMTYZePXOYEei0Rr+60XBFTL4GaLuJNEmU9+WC5idxIojWcUvpEWyKd9f5dqR++lafhnJGB/3MpyfMzwN9Np7Pnlw4ht09y2b+Oi1Uv03StNnJbGZYIBpHtShdABCQkkmOgF1dcKORaU0Ds8i3rBPNu+ej3BJ2CIgo1wOmMHtBQxhnxxP3zliniZnimEFdv5dOlyHWYi4ZtZHPn4/oqeouxlKQIFS5w7zRhtJ6fbgPvQ7vcoCIDP5OaPSyHBkgBn82qAKSVD4hAoGfx3Vq35jmRLtUwO2Zd0cyLQzp0HIQ+Dd1wKuB90ddjY4sFmKlgRVcwTz9OpwBdwbzBpxhwdduzRi21Osprrs75TpYOpzXkqSE0IYQLJkNCO5qWL+eD/QaU2Z46r42nbOqIsgRtwsZ5rguqBa+zR2DSDBbPhS8D9Fwea28N/kbyHbU7uWWzIY0lNEAGEAa5HcAGW3vejzk3m09mEDa0p8FdORVpL04i6ZpPFVeE99u0vExsfqTa7pDCavjto1bu+qJbLWNHbk7SRVdB73yfT46t4PZTSUDkyInKI+Gv5PUHcbfZjsyLM2CAdZNZNETAU4N3+irdxSZ7p86v+ayonXXI3Gtou+dJR1aM0XjO7O0EgjPUX1lWMj2+eXQh5BnsIk5w7NSoCSOxjA6rYkg1HE1iL0sRbeojm07txwbXiIsyKh8ByKWrH26Pv7EowKXk2VY9HlavZwQH3UUHJQ6E0ADDb45FBM/M1gd5F7r/UkgVDAZ8705Jtw27hJ/9RGvpU8UJ9jO3BG/licJ1pSu+UYwMVZWgfhVpedudTb21djBjTkcCAlPg4zLEUgGR8+MRQ5IJfmijJZmY4VWKP8ymq5CkDmEj5yfSrRK5OON3e/R07Nr2cdH2paXmgv4PLmlg5jMuwuAA02RaxaZAGMSp+fQUdquabdaYJqJEmqNv5d2aMY93dz93Ncwz5xjUnYEoqyuVzZRZB0OmaVwXddOOxHpdhSrE3Wg16VJ3UVEInwcU4YUHAuIJxjoOBnfuIeV+O3UJKyi2MdI9eVx/Gm+owlFdC/mAe5SKeB/A/klkReQCEGPMJ9arv3ahUrYzTPuVjTxTl7LQOBa5msTp3QkJ+sDgbiztOf8naLn9wjereAlEYJ3Yd+lqM3LqA9C6no3Scgh+83NymVpk3NhqQtTAF4D+vBoqlJeq7JPfGGOIWQKKqwWiO9ymXr4YA4/u5TqaVujzATMwt3air7vmCD3KcPRQ9C9j+QR8f2LlFHTcp8cLhut+zx7vjyjwSwLS/E8hd+oAcEXrfP0lihZ3i5mlQwd5XAh+znDjt9DhWVDK/eG8CWx9bzuCrq+6+fJuw8lPBlGxpAisCkYqJhQcBsS7wQrjZ5W5rnRew3QZwecMCXpPTSma+fbtNjd9O5vRxvHKHdSFZZSJOaU8r4A7iIThGP6hE/o7DJK792NW+IKRjGFSOvv7GCzlcyVZlThfnO/I4bRgyizY2vVWA9xrQDuNcXu2bUs694Z116W+PeKwx46V7h/t6vCVlycE1bZ+KBFs9Mo3SMP9VejLFJhanN298YnP9Micq4x25NueHvS5AzuYDvnCHg2Qs+HN4ms5z9d4fKu1dnE6TfR6uUc7104NGQ1Q+JzZ1eHzgZJgbF8r0Egjhs5uWYQO8Q4avtDOlfM2Y1sjx0gDlbFEeYHJDccRlMARBvmLPQKL9Jsk6YfbvlSz2np2oNKwjnBSzsSfN/udBYaLzBIg3foLyYe1JtMVSySLXG4TvFSGzg9PoKMIh+Wzt8R1KaeD4jBXM69u2zAizHqfxhXp56Ppn8TkZ2jtpgseFdFIm5TPIcKzGORZquYISXMEVfnYVjsPnlRAgO2/hMtZyZuoxjzvsgfCGR5944N9pYgi/hEN1zk6x5kIjmZ2kQ3SZu3yPcI+1FkwHmDAZCc4yHUDoGEBsCQ9KwVxGDqdudVHJ3T64YYadO/LohhTY5d9ooWevOtBtA7nu0brOC7BPAqO7CEsnBYlH/4eg5rB+ZM8OnLjXBqXUNAw6C7MkmWZ4Z6zxENp/oqBVG4n/cwChT58FKzO5kySx+3XCTZg+tvgJmKRw6c9nFEb7tAjfl8ipSCpXrcFwt7Z+3PFre7o2EKYn2QHMCOfWBIZD3zo9T+qMzcKbEzni7LijNds6Tm5CRIsLA+jld2imesAVEwX8FtTtNZMjs9TrtDOPatoHEeF/JkTXX9qyLzVVXSvOkOePjz5Rx7NstSZK1s9X4D4qAg0q5gx+08quKZFxQZdMSAPLB2DrAVWV7leUjusY4qZmRLmAMUaQ+bg55nyUIG2qdIsOb9E6Lyk9ZtQzNzUkUNl3RhbQW0wurpefXoYIBEtfw8eKDry2NEDD0/jGBRYR3R/FFdi6Z7snn2O6vae5aWHbnHjgEtiXu5gbaE5kyRlK4xU3sbKjsKPZgJCwXWmRR2ikHSEUUIZbEC09uaJVJNeBPvCnRyR/Ra9fzyBAirXCwRb9HIIXHGK7Y3LFa4Uh7D2UHQ6Y3pxrs9QrEuxwpaR1++62mfyoM5oq+iuVI53MUIk3uEkfOimbCSLKiv/o5CdjKWWCwuLf7ASuBKPHqA9GUOHhvBx0xYutt5tbfPOkYPf+XHypLIphZ7+KgvH1+PyfVHmhDQ2h5y2ly75cyR1WbxZeCwkqYPIKO1y+VI8yUZlUt7v7qxo8faO65LC12flIAOL/tZ/JNcq/Agb0h0iXnT7VejJ+S7f8BUkaHk3GGaS7thK0iYwTLttXSiZjXr+Uu60NjqtHnAPS6wQDY2UmWyVPW2yBEKwTHX6bNyvOY6cgsouF96oPnx7h1pwt+hQ5ENi18HH9+rin7eXZDhUWSnljGdoSkrkMU2N+eq9LuAGPFMU/PLOHuMFnpDdK8qvd+Ma8lrR9ryW/Jyie0FmTUJjEGiGvOB4mpCstJqhab6ik1Uq8kZi3WK7mz6ORhtVxmHUq37O7xnrDwn/ALwIGB0mSGe2su1K3ZZUROyxI3aTEAI9G7B7qnuihRxIANvpTUdm/nR6CBeuk3sxxTU1pinw4gPVVUbNwjHG7uQxaWy27huavfCrzsJ5lOtrtAue3IAqFNhKL9U3mnr5WNOmXaI8WPScUpkHcRUHvkcolceQVgY7Cnjg588Swep4rjrBeTPRrR/gTg2hdulOdvkM8yFuOXtRrJI5NAfqURWFICvOYy0KD9wW5XZ2cdqQumMS11ZTBSv3auDBYF01l+DIUqZX9ucK+RDD+kaRuYOOkHUD6DRSVV54KM2x78IJ+bCryqIClAYFmiB0m1T/SQpVbcKTEttevBuLjOihe8EWaOcKA3rSCgMgDvJtnvNrHXSteIwqK5DkM6dhFZ4pTsIoAZ961qH0jTlLSMPj+ffB9GlQrq4LXujzcrlcYSGG+Q4pbsM7W18GvcMQNhEn+7O07dnRoYxgHUBbo8TV/7s8TIvcH8jv+nYvd1SsXobkvWpzzL2OqmkbHZX3tki5LLzAGfxXlp+cnFkUMH70hKZok7oKeuEa2t6P6F669wZW41huNiIkzuT/KWcb0mltYdSTF4PK42wMgtXYip04J3PQpP/eLvXMqWb1ZgSzFeYaavo2yvxTVZN3BCAR7SOpIm3n5PVvPAnpeFpMEiwRhpVvuH2ZRd+qsHQA91z2es7kF8acYFCTzZqkOaA34aEDMbFUYv+OjOp3Z32eV+OY8qCBIT4OxLzQlqHNH/da4t71iWnuQOjGKx3ERq2efYh76wwFzkVi8th7XEw1I+xa1+/rO4cHkp/YQB6CrEX230te0Dp8ABfQTpx6dy6FrLumRaU9mxlL30GLucdleQ3fGlA4+X6F+8KvMihEdm7kfB2rJjNWUaIvZyvmaP8rBUIc50SamTABYEW15MSbiv27SI0qUJ/C4obRk8WW7SNl4ZzqrROU14zr5vmKzKuZDnOIXatgJphnJiidnDh9UofuohwK8x7k2vPReEzfkFu5L25iLwAkJzMdss2Y4VdsqdggOC5HouC4HBrQPRdx1E/a00HhwaESlU7vt7apWV1TqXax9wehg3JZZDAoAm9UqwZCKcsFT6BFGWWGgli6nrsnFASecPF21dN+avXnQO83GzhYsnYGDdCQQb/RvH6WGBZZx7rMHxqF4QHCMpQGZOeWAjkaqpBFddBR9fsbG2K6WBhOTidCyKi+vm80k1LjywhdU5UoGrekTRuyjAXtRXAU2m6wfU+Ca6qdWhcpNKh3Ddww5gt32DtOqYaC6li9jgAxG+wo2t9KJDvVbf9VnzVzzC9kPDUJc8YwsDeJv0stUeJmS70XXihMC5BVHBzPe2O4rz5d+Ozl7RTGtoblTdDbZx78i1nVnWOptgrD4pNw0+a9gRdA+Y/tRQFsu0z1k3JPtk1x3KBr1gY8cXwsvJC4aj9tnWZZAHJrZt21Tcu8Bp7khxqa6kgf2Iqnmsg70yto0LVjaYxfdFi5xH2RKZvqPdi30nwylWXLQwFojfX4hEtLGmRfVBm02nHwwPcpqF4NpM7wHFFJ0XdqXwTafxcQU6BNaf5uc+tpW+k/0q7PDDtISL929zcrPWEO694ba25UsNpTnAaVBgTsTvnC2s8CGlojWEpLxg6oxoCV0jHL3tBb0JkbsDT5gsdsBvcB31ScosJD4C8QTi4Ugn+NKpTA4l1o8Ypos1uyj0iw7r0E8W7YLdbDhMH5DZXONzCzhtDowBzbuZx5N3ylzMpVsJrpBjNdiSDkRdW9JxH0J8dfXPh6jP9grt5yw1hvdjOgycrpxuzsWGxxujWvVD/d9+jH8Lr7trAlha/+X1N8wo5mqd8syITxmVhGcZ1yy8i3P00b76isjSW00nETjluIeErIXA6b1vz5PXNDVC4Sv3xlGmLXrAMlKQ3r+/yMlMQhcwiCVaDP2tkn3VCwMYhM/qsK0d0Vx5qP9N8+yTrtjpXgnbmz5Nt9psZ1meW2HfKeSjz0EP+91kXCNJAygH1IbvxxpBWLJPd7+t6v68A5jDzUFTv3YXmfnAJnb0pJx/C0rpq2jxgGXCIBa5ReFHlUqkR3M5r1eeI2LzD3dUNig9+ZjzXLlgRhytkd7hC5s3vz5q3johzdQLM2vouOwhR1ivQsgfUlMvFYXlzj6PcHOKQ6BoMPgdyVVZH8m3b+hmJTm7keIfITebnmrRbBQAF+4Vd3+jPan6HdYgluc8qLPuLLHXI7nD6WybBzcxNg7umWMNNhyhZZOKV8hkOrmTgA42lz8uWwF0f/deDbF8Pp134HQn9AgmakM5E5GVYXJzxzK5euSTAzDUVH5MkFnmG7XkmgmhgFQS2wW7Iz66hpHHPGNvRVKjX+HXBHrfbkc9I83msTvzsZarA8TV5fT8wd7td28967Ge309eDd3aMhUiK8Uf45L52Pt3kLW6SJcDY2Xp5/fWzk4mTTYe8j2/59Xp9th79m3YfoRj5G4X+afcRwTC/sezvL4bAfvE6Gvw37BePzv3ntiIRxN++kuZffSXWv/hKGsB9ETU6MsgzqKiBO4/aeOq/Tv3n7///24tIlP6NwP/w6OWv31byB+Nhqd/wX7wtC/s32M5LmZdFrQ4cZPLH8gro+/9l/9ePvXX/w3exzcwfd7GJJDbr4ENs/P6akbSk/VnnFSE71c6K7u2LJJyskoK/ed2IwM7MdAKTZ5PV66huAq6IfpaUUrpdcSzyRDV0ayzLRBEnpQNjrmrQjfl6Egdh0ZPwQF6kAB+e1v2Qn1y8CBFD8Dc7vtJrY47eG2TL3YQo+bMn3y5uLItL97Tbm0fxcNaTDrq1XKf8Ub085pqcWXy044e/a8JWrdahKFRHaqtg8RXDNIdKjOv17JLH2rnK+vImiUamspxg0S0O9nsBcaKhLRArKNTGHMItxu4wH3mBezaXt6pSPhJfjvB2tT57RRjI5vjVoKXDM99yCTk3Z/EFYx9awSTLMd+ccVTIlRdhreK8fIgyfAEB95B0czKKDPwHrJGXlBUWpdovicjgz6L4oVePbl2kOPaqcYSCR3hwhBmem/OCs/ap/B3fiPYdX4k+rSNv9jwez6Orn9vjjfDBwaO/cdZ1JHN4H2CwEoyoljXm9hEaBn7KdnLK6v/yTOCiYHX4yUcWcGwvOO57nhwQ9ssp/TAoRNOnSe0p4YmMLzIFg/joo+g9rmSqIBe/Zgdn3CRA5nF+IPBWymMORHyIeDf0VUorwvxyKNSxfO5h4Wr5Cp/pFSj8jTG15BnbBO8jQ1s5xe88GNSYGHWgGgcZL3sWdr6LJnPncQ1TSTafU7qb36bwGiz57hRBfmXgQoVFLWDQLnLBJGMMTNLnDurYlrYLb9Zyq79kAIvWjjyzYfDEX6H1nPIiAHcisNUdMkzISFUF2hhAvwKa5ur2ir25gFbq4VMvrzK0hvgRXIN1DS0/J50aSFVuHzd9WmvC08Yrdnof+3Jrf6OFZ6O2jCt54rZC3vZe7FMCA0C20ngiysYKGMKdATovmAibgI3LcHz1DqTazuHhp+S7/8ydA4MoCxjE0UYBOYVLo8aYA6yJ/JqBKXk4MyvrtNhQTifGgQEwVhX2VxyuH1tWsPY4bSrYgNzAZEQ1G42TYaBjXsWTpbZ59L6kNoSCG+bq5261ANhCpAUrb8jVGT7pDUHyaOG+fJdkUEWG8HV9sb57MAecK5D6Rz+elKMDGap61JPaJPNQU4l2LG6uMoZbJ+uDMQYKc96qSHgljgYLFu2Nklnxkf6YIQ9XiHmbBHrFvyTwpUv+PbGNODeTW7ZvaHetd9sPvDkBdYPGDeAha0vnqJIfPwlpMShod+S7mYwMKb1ms6QSK77l3f4i2zyt1WljO7T8g3xpzP3IEMoTrnSru7thx3PCpOv74oiqamjSdw8sA10tiV0jMd08LyFhCuaCzjvBZOLqdzvy8Uf1sZVj1YOQn1mlkh8H99mCxKzYp8jmQU4h0V/rGfFdVHYQ/cNxV7p3j7/ycd/ORZ6/DrPu9yMIn/MZUuQXWWZtUtcuDvJ4Pxx2GVGDXpgR8/kUU25An8snXq29aBryFS6Vo8OYBDmPoqP2oPaHW8eHb24axSQPd5VTXxHjhVTX52D84WpeWVfsfdjwsV8p1V448RU/hDoQVFqHFT6Gq9GHu8MqXcDafphpDdD5mz6s+KQb1vWPtT92GFCdXJ+psJJgjP3Er4c+d+JUCUKEu8bDKCOW2U7csmkYrWHVi2MVWRMP0LjreQ1DypWA93KnGVyoRAJ/PA3UkGEW4aYbTzahDV9yxC+f1wVR9J24wiwxwXjHyTij46ej5gPl3xUqa6mcljWaPbNunM3DmCh1bJLmZzQEvLvKstuwiA8xf9PG4+wxBsNP+5aT4dLHpI2aWR6fcE2Z1swdr/CTKif9PA0L+tSPCLUTq9A/cyw1bkIJF7CMjzpzDL6/gMt74Eh1ThsUO29pRRtXqh/5Ub1SWfHWFELn1QubiH7aRIb+qIy1ybwAukIQdaNxcO3yerHrY6Sme+aixOie0c9MRQiuZ8jyyihMTuY5ZorbmBoKC249o7saqCsbY7iHIGXsF9RKeF6GcSBzc7rJwZBrF20JLxVeN2m6kWfWvjPcRgUSAJ6W5PN4PcWTOk4WCWGs7J8uXA1vr3Br3U19iZtcM04Cy1ipi/lTA7U6U5mlDROiJrwLRzaQEzGhBMmGzTBS38m2SB20REH6yp2PnSm4kx2PIKFuojFKBN0jjU2f86Yjd/wxP2D+z9vPewswdiDeSrRdgMPaUYLbn/W/p1ipCVxo6L5r3/jdQTs2y0o/JxS8DWlBMCE0OAPSJl14EqoDT1QnQPU/+OECZEOB8w1RtJHGumJY4T2TlIysmNqG4y2wJNPBZ2oyz6B34himS7wzd8pgsfD30bfTZalJpHHEVuk8awkvQitEO2eh8UGeJz9rAuCfZeYZQQsvZHxu0bkt4zDY6YHLfPVq47ow16jSDd1ZJc2sWXaOdWovI7e1LF4Lrn68PWxY/q+G+dxfq+DvJuPsZaN1vBYZfttBNXBXjcSy/Wygs0yY5pQYCpdE1mNyM2btML4+BA6gE4StHG7lsoaEtzCkabqH9kiHtBc353sI8r7oUsrT4cifXGeDzDez6uexTbrK0nZKyCWntcpVpnwa5QBJemDW20NINmWglvYem7tJEOopYy41nNXZqsfNCX1rYKwrGnQHR0+4WlOvrwzJf+Kr0d1gLjci+o3lc9BYy24ktMFMk5tOYeW+oj70WRHkTBp68nMLVAGt4qwryKJDTqx3eHkNL0vinVaZXdi6XGqc5r3HP75fYcez5S214jsK5ktd8awtpC+8yH+Qwz14Qynu7Pk43J4J2BOK5zh2F9sbzeOGus6RHGR3gGFCyRqvUQkXXi/tXmhe/y5MrpsOL5CxbUYyXPFvsJIA1zHoTR69jdlPmtvNdnE/fPSCuzYphuFq4w2gd8+4Ug6MuD1YT0hL03F0VuPKRGrue8KWMtyPwHGlDsJvCeNzhxuT/PE8H66sQY7FLVPVZdMpIWM+0R9Ud1QbV+YIrE5StgzrtizbJnnq196BmXx05LNVqIRZAymOTcydyrPSx/UQ7X2WUYbdyC2GK3Uu8IgqSZ3Jxq4jTr5+Yjh4+7sAUQTHWq6gnrSQh/iQwvUWzx+oQ98pYLoZXCGsOptnT+W6nOmD0O+IW4UG9tR2ZH4fXnLJuD7XDQnKy/05H3nrEbjxQ9WoqQ7amfxk8ycEupkLR1/M67NYjnQ9kc9srvsXpgJoO1wSG0SLz8Zxg1FVNgl9hwkb7yKuSPrBgU/AEcZzgaZR3VkYc1jZedPeEaisGPBxFM/XEw0wfCCbLR0wI60XQPAOOx3x67xHBAih/DjdPBgiVdbPCyiJZq8oPKghntWDOqreELwqybtMSbgTbwvXl47Hum8QvGvbADE3zD1NoVGgtYFrx+vC7ov2QZ4SXdB99rRJSkMbtwJpCZbxr9SCiUlmUje5uQ/HQz9RvfHSwI9w57qvAf7+CEj3O2oepU3B8UGXcIEGkTZ8ue6cTGHpfHbvhfhBCBPJvC6w3xwj3N9nZNZZEX5W8h/vqwFyifUFpa6GPRoALo16EdVEg9LRHsRhLo5xNO/vrDzIQp5u6sLeirpteHRO4Yto/m/2vqvrdePI9tfM49VCDo8kkTORgTdEEjnHX3/RPOfICse2NJZsz4y1vCzxI9gAuivsquredQ9CwkcYY6C4mAQFfNg3PP4yRvDpSZ7Ya+tg5AYdbxFuABC1Dqkn7gPGBA0ZpfhMX/HqaQSojBl+xmyg4PUO4Y3BDd0UykY18sBzp5CVGuCjMZ3QtEIfizkMvj4jGqP98IEZJeT5avgw4gudDNHghtSEZAWfZkZ0lC5c8/PQqs3d9eBjbyT7kiTw/LpKUFk08yPr0CotwohHS2XLlHffi8kG9+/RCab09tZXYyKXXbTzDC4kgKuKGWACvMhR1V28IvbnYkRoQhCNrLCOWKunUJpbj0NmjQV7DrAUYTgIRTYR2EPnDdM3/uQj4Q6/N1WYEZW64jntEq2iQuiN3u9WdaJBV2eoyt55UBFWRTx8XLakX/AKmmOafzQ56VE2KgRb0MTRvs228l5xXuyQE9Owz/YwsLrSCbV7vHcgIhGTO/Z6OTxNWdmYnaYXetK8HIMeHCeGmSXfosNUsDZ/RnFF5poKqKCquARMHOsSVg1KLBsKxWzN5cxoMcTt2RgP27PSSfpY+stmN++Aj4+v6HrE32XfAP8eIq++PaFtCBPFlVq/viJVVovoNAS74fq427XkOMHObrnIdMDAdnyx8N3mIXsOH+Xp7MiFHBlUoTCEry3hMQgtD2Elddo2fm8Twz/t+KAl8yEf7AFJtxJ5Vo96283UaiPHVdIFZ3PIlzBgPkUMWZHypV1hFtDT290zc1ZVM0ZabLy6dQCoQCuwsG6KZaNr6tNrZPY+VNZLsGnFp/F7guVoEnkfYwkkymbe8+MZhSGEg53VtmEtXtW+25BbfBwgPZb32Qse1sqylUcnrDXDI1fojSSMuh9P0XnWeL1Jfl6UbsGxr2coQNQ8X5CeEthhTS932454gShgi66mBlMcwdPSVg2FbAy/v6OTPvUTXwJNXVvu+t6JM3SMb+c4kXVXo+inPkcul4lm8jABz5hVM3jydNjBpmKuBmlCLvIQd5SRKIQ82GwbCET/pjltp790ymfHzzXnJ36/5LOQ3SDyPKaHk4J9eda9rvIsuOOLunZ3V7PhCY454Db55gnmydNmICiufTC584J3ndmSESecqgawU9dpJo/YTN2jD0FFCIfRkj1liCL3kaY5tLQu7Evws+Pi48xvamUPRkJjn6NTTZa56wCoEFSArGZstzFMhBcEFNQV+G10iFpHOPo64tW4PV9WNjFIzN2ScME+vpSz9wsJ55xfwMkrNgi1MMwdq+HXoJt0uOEgWHGGm4FfQU4GxRIooUSAEfPl7Wcf7wfDX2HSZem9sQ0n0sGX7C2NADbfE86OtnlCX+mmfQrNbgCV2oDe6/tJTYFJnBk1mPAFVN9oWeYOHDjB6tdnVrSpZQy7NuYQ9MmwlJmt4O/BaPs9YN4GsViPFfdXfQEe2dtlwwkW+JbVJbP6WrEbUZwBf4lA7ycyxbaBCVlDN1mpA/8g3dqYfgsGH1H08aG+QoahJ46uVXLMXU0G27n10rMZHHFpQWbKr+IjNcQrNsHlNU6N0i0lGu/B4lmIsUexn4nnMQkEO6CI1CJG/t4aPU9J5kvc3VLlyHAHs+TnTQNzvoT47XPwLVd1M34eR1e9RbQD9jVhhOLJSzOJVF8yY/aR5qphxk6oSew5ZYdE62vLcy88efO3mo8WUIoCEA5ZTJ0mGhpXmIjeG93p8/QGLwNTk4jywOYyHW3jsc8BgoTbHTnfzwdxrz1zgryX0QGAcm/CBN8e+fnkYO/cnBaYAouc9+4dh16S2cziCo/RFIKE1siVuXx5tnfSB3LU0iOp0MNow10DUsIteiTR6lqyT4IuN8JHtw4m4fncvC5LsVKRXl0j1J6ETtm6RiFO4oFKQpnHM/Hl28dB8gN8XjMb0xnKOqlSB7MHDOcSwei+Wrg2WX54KYCu8xqx3RXJGxUCzlh/8T2ulTPLMC3Prkif7xVVdxeoUWuHqlGoXHChuNBZKc7dwxWyDimz9zq/aJp2iMdO9ahCcCvRVTj/rsiKcvx0o+a2VJZrjZ1p1QXdVpZM5ZHjctrx5c8S7BOoyjf2oJry9oB73jtqx4Ifc7Ge6Su02rbMLAdXVrQp456ogxuD0/JlsRYC5RS+9HgyFeMScUlzddBybZHu8bSoEHb7Gd9XQctoN6Ibcki7nt9VuKYgd4XjHjvIEehkzY26CpPdvMY6Sh92vCM3PgZpoMF3Mre0n1D23hiDKIdDx+T3mkrUOsiJnynWHHjJjItnBfC3Eybek2yIZqjeEJitl4741ZQn6imT6O4pcWvlEB29V6PPXic91yaSDJQ7YSmUFEc4KliwmLNoZapObHkfxuf5GAPf5kGCiDxaMyebtt0YkkWsZ2R5VvZKCaXMRseeZppPRwWNVKKu09D1qBbJqvCwctaYfNytqwxr1A8NUu2fpdgj6U7LQA8VMdfeBDqXx7xC+WRnBwluVy8gfpf8o1FXvm49+x7mhHEpFEQFKQgruWhAm8Z1G7oncyK8QNQ2roiwYPe2XyIte3yol/LWqNyjeUFKNgc+3+wnJCmfCDT4RAMWcUuWkbpLCjUVfloDQO5ZPUx12GYCQm8uZogCzm8PGU5R+cLvpQw8AtLq2xPNsnNSYjF4PKe1T0byfjCX7zGi0l9IWTrVtFTQ88GrXIOHk48eTIzRwQXUJy5Fcgw/0aRTK2GdJ2ORN76kt1n66bNVuLRuRuCNb5IZeWTOr6jiPsLuwESNxxA9SFD1C/FsyylwiVuzIG5Gp+I6U9wMeV+y8ulRp25ZSf47slBeeH7OV5iLEiNuQK2oQ17Yl7ij4rJrmL7zURWoB4igv1jP8PBWeT42zT+31ptrsH6NvzE736MnyowsSOEZ6wR8jXeS1K7/+DwT7K7ZNgavYaLbKkOexyfnTqMeU/Ge2WRwoyRU9PBXUsDcLrj8/bDSpj4qIpMGAAumKwUSfOyRCpx8LJ8KxxX7Hl4rzShRu/F6+BgPj6FrSDRFOyKGSinVlmpGTl2VFHFGKZfRTYHlMcR5Kqqv+V+cSARHnUqRGhMPC6gPWd8UF97eeDmvPWuRQwQsDKeAY+/oQiXReg8WCezwI+Wnz/p3Yul76QHwdo3zSd2e5a22vPATZjxu8IXSZsNWyPjR2zVq4V4KzQYv0ijs+zdTxCRbIPkHBvx+C/mein+pA3mthjKKeBgyyi28obk5yWMXOK+u8TkfOh78MbK40F03lOiS47ZpcL5UMmjDGsk9EKvpQHDcE5DZYAQ8LqY2RtsX1XRU3q2EbZniKnATodL4pd8JSEoFyNcKy+PDKSeGNxR4pBdQnsBPWAJt+ccaUQ5muHAyPHEOHDV8wU6mSIKo1o38ArPnMbvGSPKufpG6/Oh1iyQtdyKLyBwV3Gv7NzUfEKmjiaLJ5+LWhLYj9HMlRXVAamKP1/6ezecnxrO/ZiqlgqRLhJL2rUZGNjpb3lqWdk7LNPYPba35nQzje/6yb+CEhCq/6jgMWSa0rMPn4uHemdjrcbPcAftSKUIaN+eNI4Znnnok/Cy1azBMpqf58Sv1/eOt2zH5xI/uUjZRcQV/MHYiA/51Wo6AEMphxS8YhrdXJKM9hmX8WlFgFqSAHwkIK89ciaEYYEdya/hk2rijPw6/ZemKh5cxoayPY6bjS60jdBzSLExYj8nb9ZJYzB2lHOlUVjQvl1h8cglkFHIUgaKR7Z5SueFTVjqPD3EWEuQeTE12tRBI4qnKIJRs1hK7vkrsbVJpafXu9HmN0Lxdzz1A5Y1DYyFPkJgO9csnojq7n3Dx8s/nDUHxtdPWB9SQBxoRQ9aIT9CA494Ibl0Xjs3Pofw50A62DrI10tifStOCN+iB1nGWrVb+klziWiegF25PeLkC5kHg25X3Lbqj4tKhPQmkbl8rnj9eYPhkuiLtr/VB0ctpB2E6IrkeWy/L2W+VcWDSFyXPwhxYtbZ2QnandCqC7vQc6HJUF5c3M2/jCZ6Kyp8ksF0XkiaK+CDShHTOvdrIXRzXZ0dMvYLNclKjCLrtb+OslzfxOZmPSoqkXtaABZkS5BZgd7pc6vtI0cB2PLaPZrPwNZE8BV34izGNtsmhoSSQKQGQmobMNcsruEViMpDefrYfO/fh6OZ6VHTXtDJjBH6Pvs1i1e3rM5LHJ7kVLDg+m+lzPJ7FEDb0TSgfwPK3BpV5EHEbBe2KAy8HjXqtwiQNldYlLA2j0eT7BBcJSLFmmIgIOGNquMjMDaa2J3OffRrsSqMox0JJ2D6UPOct4zmnQUFrzz10AB/fvKTTyt28cSByzngLUgp22nKOmvGZEN3zPIDsT20W/HmwcXfW+VHRrwdKi2An+/C4PNrotGnhvd4OQtBr7KKofvQCh703at9Ryb6TYtm3No63GYWOh4pfKnZfVJTHX9hthd3uGM/cO0B5DYKwB2Hq29L2ESmrQj+37kvi5od+tiyb51G72pfbJVCf0Y1LLwLizpIalq4uDK0xSuq7mDZHZZO4H8MyqtdTi/pg8c6ksXb9ZqyJk5zNKx+AvDgatZ0854KQcgClpvLhgPfMNs8OWsa5J172pNFX7ABqNLN1UxNOW0ppzTsW6tpjmcaa/FJPy+l8p9S18XCvJHnDna0PtP6S+Z2u5Yw5OMZnA1QnhmXp0xbKP3V5YOnOdhyWh3Znh8oPHlBuaWntC06VI8ZrsQllzeMUv6/S63EE3B0qHlhpYA3zbCn9rvazs7J1iuKs3Q1oN6ntoXgRB6e+gmY2ueBELh12HjvKIMXipaY5Kiclu5Cpe00fQpqsdJPXsSg0WvixOtAR2OVImep2FDGPXVGXJmmEBxCZdeTMCgRiZCg7NXgJm2aDRiueFGnVfKAMfqO6ldtFA5x3FY95r8ri5VzoO7HMd7h01U1XVoTPYqoCSbTsNO7Hj8/7jMMSnK8a0Q3glCRIZcqksYl9n18tz8uOdsyN0XNUT9JfNjQU3DSNzQ/JC19bPANyekLuQ8mH5xOJyJ6y8dWdwhAkSVS0cA4vhxBHI2t1Ldgob4CaBKae78mnGnq+JUlBCGO8ABmDk6E9hb2+n877Q3tw33TPiNf4ZD8VR6+wc1W73EKW2Ud8RRJnEA8PA/r0NGmjZ3W6w+xLZ0y1wtGqIqa4c/LJInUUX0c8w1/v7vushm0kGe4j9pIIOiFdIdqpCSObeFfGOwyv1ec01i7f6rJEzE0mFrHK5NxGQI1A0WpzE+d8xwRpvUIwHL7JGMiBR5+ZdQGJ8h3MLdi4SuJPm02sFxdeArApj2NV7JSAABhZL/x5ybNigef7W895MntsoNkVWGjLh0BmDjuQ/vK6dneAG4yxx7zp+/WkWslQ+yYji3u7e4FFg05C9zZF0qkcwKkOcN9ozdMusvvWOapXDnqPNO/58sdQGa3zs7GrpzKqn7oe9EhDZX23qsxuxnieAXnsK+HRqWV3sm8eTvfZ+/HZ33L9QCO6KUn9x51tPcpDdXdlu6Mlfc1f6bmpL0yJ7KO3cMQDnYkKcSdpqvTXWkiDGgsPxCN9jIsEBZyqcL9Volp/K6NEltBcJV8QK5OBc3dgipqhp2KZpz6HEPl4ZGHjhsJJPeNjjVT1/uPeFyX7Use9LB0o/UQIk0W+qiul1AS+hrM01R747WM47k5kicdCOShHL/tTQ6obhRrhzGJCq4C6njoZPcEHtb4ulGqcbH5bkueA0kbjnXCua0hDHa+8HeM6do0YgGnlUz2Gvu3iAHs7FALBwHH8+82/o0IO1qMAfn4joSPPAfnZ2JrPIDpYceci88PVLcQbw069tPH6WNjPL3slauUyEpn12U2AqsQrj2v9XEflsiE5bqqgcvzZr4HHB4iXMo++gC4NOM1EFnUto+qumFVcK/b9PtG3QhhsKywne79+GXjwl/0UsmXF+xwYSLCpBMAZfHDwdSIooIIGQmFQtRg9Aed3aVM0ZVElGNhcAHVL3T8pwXqBUtan8daXPbpf2fxuvmDWmWAWYTvjuUuTemvSlyH/Jcfgnlvb65fXqi00idxehjwNJcevv08FOMVqmZdu/7v2Bn/6T2M/2dtJ/nyfMA39QHynrQZK/AB9px014DQk/qQ9nr+ns8YcxZ/tsT/Ze/u9nbLTHI3z1x7vYFdm0rVzVLRgy+3nN0lX11E/FZ/BvlzxLupUiY5umb/d5tunX+y7/VOaUaL0L1qFf6+PKET8QH1nbb6t7B+/MOhvX5jr7eciqs0smaP29VvW6NdrkI5db3/r8Qj+0IMGhtnIrtfETl//9p1d1Z/NuZ8v6yz/9ttvu30/H8avs/XjoJ+pwu/X/655foAulqDQ8rg+w3/5jDOfy8f50bXXqkfFZ3mzaJq3bJq/u/B/W8D/vjj8huVG/0Ybyn9suX+9Tf+HH374Q1e8bcHVf1nxf3gxu2tO8/rTDfVdpGnW/o5VQf87q0J+r5fvn7Yi39v9/jsPTuC/6eAEaLEKIf/IoYj/g2Kx/1wkfiIlxPeofv80KSH+cSkh/iMlf7KUwCj5KzH55xoT8j/e/J/szY+fL+lvce74n7X61P8t507+5kX6e/r4p63I9wja/xSzDf3Hav9uqfidvv3PEpJvh3L/fCFB/iMk/10h+c2u/U+TEvg7UvIf1/5PcO3fS6H9s+N29Nf5s//Nrh397cm1fxXURn+dOPvPivw7hczo/61U1+9foH95tIp+L6nxi+XJ2vQ2jp8Z6fqstd9F+1/foeX4aao+24vZBxN6OY4vn4Kv0wv+m9l/+uH49qG9XugnPwIfg693+Hz4y88+n/7yu5QrwFv/bbqOqVvG5Osb/e1WTvM3l/vXrvy6Zln6yn6rXcS/s6jf/jZmdTQXa/azx/3eSn+9gwEc9l9kCMe+jvPVVyLUL1hCvrz511/9RV5+PRD9i4F+SVXzZWJ+NdBH8H587X9AFn9DK63/I7KI/EZJ/CZC/xHFP1YUse/Fg78UxWvOra8fszruNvYvf7h//gBcSDcWJ6iK1l+k448RXvi3Ci/572NJvz/PxL+V/F4C/MM3K/Qt8oDgX5Xkf6sUUwjxA/6L/QDXHeif/oP9Jrm+xAbsKf3xsq8x219XRPTn+oN/naC/qMmXEf9YpfkN4fH/BKVB/myl+ZsUXP/TdAbHfqUzKP3rbSy/VWdIFPqBRv8VOkPAv9AZDPrbT4qgv3xxHPrxxf9cTft1KuIPRUo/xUk/6tDf0Rv4p0rzE9j0j6nN30VA8Lcl+IU6/AaN+6JI/yq9ISj0B/gXpJLI3xb0346efq2SxD8ZQH1vT9OXBPfUR+1//dWEeh41RX18San/SCb5XwiKYkB6s3rNAGHgr74BV3/+/0tW/svYYIy2Gxtw5venX29fVxN8j0HQj1/W2Txn4/+7HjAp2td3fw84Kf/fV3pJ8PVXhsmffV206UeMwfcQKAr89Mt5jNopvwb9Nnyb/XjB1o3pz+8O/aaawu8tVSDfHdaaoxk8ixrNYwGemm2TLgUv91cKE3+zhvFllf9KDePnO/2+l9j5m3bqpzmeSwDgOIIz5L9+SQ96fQNBBHvjvk7A9/gn/6rd+e0pHIKEfoB+8s/PVRqmkR+oXyevYRr/Af8OwSeJ/fCtgvyHp3q+Sw/734Fn30g7/0hw9u8djv/oZP4+OEP/zcDZzzEN+kcF5OhvDMh/Nwj7ReCCYv+MwOU3sN/+74BTfx8Tob8Rd/2bZZ4ICv8B/XnYcJn/K/r+CZpC/3uiT0DUD8TvGvnPxlb/+EbE7yOAXwELpXsVye+h9P6D72+PUVL9DIH8/e0Sv20XxR+LQAgoomHyewgEYUgCgv6VCOR7Rxz+JfDjd+yL/B9MMc5B4LBR8oVi/KYqcQ2OM2kV9I1i3EuwWJj1O88Ke3179Ld7f7uFz9vwvEXPG+/c5OrGV7f7l+9GY/VIN8+JVI2u+SRLYlgjnII7nKXrxpUMED5wudEe+DOLmwawI9tJXYm4msepuCorOkMtBRgL8tVCAXUWTdcK4QRwDvqKjK2U+M8V8saMkCKEsc3rj9JiHU7QGDg4YUdIMjgdGtOA4SLTLXYlc8A8hY5QRS1qGLmy2izgmCRKOcNMdnUvEh5gdUqXxq/R50GUgB9hNezWAuyOdK8RL39wwNHHVK+5SZOl92cAzmoS2LTSJzgWCEgCjnOkcMPKdW0SYdkXc2uOI0IiBq4uyJthpcZrFA2FfPWNt1dUrYdgNl45OArL9QNF3sF51tOV8jEokbd4eLPjpQHpWgiEinpt3FrZGAEzHKxLweE9ywrwGFmDEsbKHg+dfQorIyCAHxiYlFpLujCG7dJ7JRD6KCqaeXYhMfE422LbajVMa9oD0z9MTM7XfafFvsjScl+8vuO0pgkzu9XnWHYJS9O78E6tgm1mdYMy/o2XNATyZ56MJlSjnxL64Khx6dludEY2JsTUV1uUNM39vIsWOQrU1vKNYikVoVTaaTmhptCmYiJSJsfy6PiyU8VpsDGRHL9YkoMckzSk/SztcqUoSYncVxpgkcBZaaqpA5zKOQtHXZHmmsYLmXrz2hMzH3tay5GbbhC29zDiAJZFwEb74VwW17aXrDyxonSNrgE+KziCLNqHaCvSeK9zHAGus4oFR1GrMpI1ZXG1O9WfsNEOcs5AUrb5nEhbdoLH6XNX3LfsnFpWqk/fywd75vgKiRBw7NgpEnIJ8MfaWfWy9m6iwOatjZwpgwLJ5hOYEDUbQSQeciI/GCMbl90edNWkwMHZZ6vQIW5B4FBy06TirphFTdnbas4KFsWibpkz18yw3FRUfCsb6GF0dt3MfD+YJCAvIPqXV/KdL+L89KQlvvIitFoi+4ktyMNUCBaPxrGXloidUgiTwm01imWbCt5TP+yzQR2lmt2jRpXF8fxIFdlw8laaMdubNllw1UKBYgKNPKlU+Efe69YscDG85MHcmhKY3Cwro/zhZZnOjdpjx20kcEcrdUW7ZLj7CtUSMr7bbjVOurLkUGmaNzHkHwaENr6mNMA6MoYOIr7rqurw4KGcNkJZjW7LmSS4diGze8Iy8HvZsdShTypJJPpp/vRqdakJnH4IaxfXeZFiB/ZAJOT0m/1II3Y4m9ysS+55IH4s8UpVDzA2Enyab6gIjrmWhTdCzwahiqGa4tNWiirzV0KtgqHxDjK8bTMslRDcGHMEn42cCyl3PhRbiUzX8jQcr7chrw6zGC4FnStGpz6CMZQtimjlkKzOYzEPOzqGwc6qV9seJpsSmwittIsRC+yelgjxNFOiLWkl7KBUjqGicTThKwsIDhAY7nX6mjtZGx9jv0v1dmQ6ntu+Bk1+HdWMbCORh6/FvFoZlk/x63H68ad5QnYz2h4V7AycMlYx04pj/piFmvRprkbYvW2Ze1Drfpmzl5OA7smy2ldEsfNwzOJabrcaHLf9Q/PCUxvCk9IwFSM8C58NaJ/GbhrjvHurrl4LRhS66po8AxsaMFvJHEEQ/Rd94oS5w341QOyCtY6jp++CQ4SJw4ZLGTQF2dVxVJ2jlNVNQeuhaTVDMYFmnrOznKz64TMHx+w5wnHaM/L7MvTNUKp3i0GFwrhRSPlIYP2+h0VTI0H/sleq+rTCKz50ihY92cqQKcDeky2LswcWKIgakQvgveLKrkwbOfg0+pY+/VO73qigNWYPYiw0HBabwr5nR/tm5ExvWqTa07Lo93O4zGi4/ngdLlp7yQcqR8qLnLY+oBc2qz0s37hGjJIeRQbp8cdSlqXaf4jzJak7fPVN2uBScPs8lXyMrDz9rav1KWfGqS99q/QhPKiCQw6Tgw/dWscxcaiA9IFLgB5t0l5NISCdgsKU9GaR+/XVvsKFLWgzcdpF32sjwqpniNJvhbckttWawXfnVJYu+WnYxlwFqWpdy9IKpPCWvRewYHcWSyrToYZjzzrcSItNbM6SVtWHqopzUm57wPRwYzS+KI/diRdoqYMDiQaThzwnCGHnsrQGTrSQSFr1cVfja3mvoBn01R5ayj2268kkwH2CHdbcWxZDPzysoyaXJ2el99S5ziZJa1i8CsjX8wgew9t2VDE8Ds8oJFVlI9lQhLkobdKqTGaYgwZ9m9QzIbIWya3l6UWMe5PiaO4JJadGUwY015fZJbavvc7d971pobqzzLMfQKKQe6B70l5iJTyk8KY5z35Fp6Ka57qjt/erV+cKkjyPJB0qbqDiHC8l50Ywmuvjl/zil1cwhARnNUdsFRfSlR7r7HLmWLcM00i53Hn6qqBlGx9Z/IRq5nz1O+20FjAgpMRIFum/kT6bdT/nHQwz31qIpHY5zeiKIYlXQSagfTnziaG5sj+Nezaz1rxn5hz1A+gCMTmU7aUt9LLsOC6pIbYUbe2VZw81GuEAA9FPfb6jKkFVvtA/sqmCeufCsZRHwA8/DScr7y3znY8G4vtlgvlEYlczMiuruWOGlCWUEYoTiw1OWg+Rzq22WCq0ItnT0jvEbpBafzmwI2hWGI6Jxbs9fMFKzs0fJU12hQCd+ScNtL1F1WEypE7cncEOx6CVYjoD4rAOhnCTE1rJUPLpML10TaQEOK4Fh4NIrdHj+cUGhClO8GRXgNplUXIfFsh6S1Il7ijoNlh2u8OAfTIX0Q7Rs7LzmofqysCLSweg6ffrdZD3BmvMZ/VFGvIi8RYEluSt9FnqvGm9hYv13F9Wy5Pt8EMEKCvlwz4BSUNR3aPOStVDwgeCJ2Vqzg5f56GemizEmSVK0ZRJRtFXQR0V+IF1qelburTdtm03+DRpdmOgodeL1YV31L3xDnu6dwCBkoHjUgvBrrfcm9KuE0KcGXPRRhaAiL3aTCGb+Qgfeo68FwRf9yvJJbIHzN8VEuqcRs9WP4SEC8oX97mF72e3+1Zhr+/RMJ+r098r8bJj8KWHWDlEErVo9t6/euzlPfvbiqZOIy+ker33YPTek1x7eabH1qs+VCSAa2uQUy1Ks9ag3rrbW/sZU7a64GGr08YqmvyFreM+mCM+PorMKfKEfJa3po72Ca0EB8Lt+Aob6muRYQU7atToKoWl4+2tPQEh9Pig8QsIZk2igh4PpuwuQYZywGjKqIJD9GV+mwmSzn3+QrxyJxBYxC26a32mj98EPZU+FbTmRLZwxxhBOQTWiHrmYOFXMGXOeqU7rtZolpX2FKDZaB1J6cyY0wjP9dB2JhDjimAAtfCrSiVZV5BKxRUhokVY78fIibVLP7wTe38a2fHvwraa8lRYigY8RO5WnXCaveeCZqWn+HAfMsQioSRI2sJqyiMFIQF8gQ00pt0y0LhgLMXLDAJBpFYN2yUptvH7iNWqwWhdauEysr06C76UCyL6BDQY4QwpKVMdny1hJgct3kvVZdwUjW7vsiEGtGfcJmIXMxxTPYpZiNObx8cLye1+PlLJsHCI5Ff/DAOI1fvUwPx9OmfZvTRraWeuHCZzNOCnqpEejE4UEpKI7WSf/sPyO6fYxiPztva9Rufnt9mny+Jw01QogL2LbEh7JS5gddItQLs3IuAHdf48zOBzU1cqxiWJsRMhHZFifiIjKBIAomhgpMsLu/gZ/KKTbDWYKbYQRFOEEvhAPo7jHD309b7JZM7u79M91+qdLVwz2gE0WellZLl4nLXNpYx7G9Pt9AgWcmASVMsyTSqsXUPX/sgXBKEhAACG1dxU0ujol82h6TNHr6jj3SxVpcQBlGT5SIX7HOKIkRWUcCPBetsIo/op8s7sac0VNzrj0tslZH2fHrq6nQ+zYo/ErmYVQdzkewdRowPWFlJoBO+7y8THM3PBHutOWXldsu0M2O3yVgkeDql1jRTqmi0QEB4QhT88xqzG33lNSyflBQFowdHiB3Kr9TOSCs8uHcClotSpEVBinZe3d76wPeVngJa5RykjuEs+LT6sK0aOXfFAkivERD+NJAkPy67QPIUo8ors/RXX0UZGgQibwfCTN0hGQOddqgsbjfBE20eDO3Z85x8LzOUPDCWvYI7aiND0GDyeENq9XpEPdCahNvc910DDrBKpDZ6yT3qmGhl+NCP/OPPZa5XaaX2XES5w+voSthVw9kIJZ181Va+AY45jsUiTbrhPS44Yx92bhnql0n219/OGdO4V+eR9ziePkF6G0dTcwdS9xAXDmXq814EmQwTFThsHJo9M1wZZsMKHpvPIABqr8LbJ0r/cZTZiow86l1hr4mNWbTAlq2RNn18BKzzeHzEdV/k5ZneWpdBcHNbjLRB9b9RUsY/4BaSk+bKein8MADlWros/R9qBWkCYYz6RNX4gF0JCABR5Lnk0KLiHmG4739batvIYFh7otod++wAMfXUw5/Ig4YD+0nT9+rHC10U6UXGE6l2hPY/XODa9++iuL3W0cJefAWmUgjhlMPtDXS3BaeUr0MZY1aSe3GSch732UfsvSg+C/QqjdbOaKmTzux4rJXpVcXbs45MfZzz5KDU+2+Nm8apqXf6qviTJCgMFcpO+JjIL+NV42nGvFW7zZXA5QLw3wtid4RvpCQlHE15R31Qf4rLAcyiPQjLB+WM2Szs77m+PaP0XpAYBrFi5/O1BrthCnAYJthR5glmN1cyYTrbuVOL+0wcISVflO08UjL07niPUI7n9cGW6XQMLgS+rIQbb/sVTOqVyRRWv1XQHPBvfy+suTwIwMZnRofHDgwA39gvrfN1sy7pGhp+QBn0jK4o4moy8nvbbAPBg2wn7oQ36Qgp0hL62xoILRR693JhgUw2QkUKtL1m5Lw1Oryt4+rgxX8mFVDmnh88g3ud7TlvD1kQDX6pvX5uhqu19CQ+8jBFYvkeV/Hkstubs6gK9zePxB+VzYewHkvpFxYH8AcZ+lcdFoR+LHn90i9Hv53B/fbr5f28Od/yaw3WdDj+vfxvctzaRI5K8Ur+2Wp5tT3WTHZCwlarbowbJ3Mc53fUtIW/2xFUvTt/Uk+kQyJjl8LL5B5p+Ab0bDnojofFAkH2V93JHCx25rn2PfzHEb4E6JBNXdNejnFYEF9PN299P4oyEXMhHxqAtwOx1hw1a1UN9TYNjaIs6FAcaoFtAYgtyLk90i/fLzOFGSoHErf9ozhLxkqHHrnijNE6a3QALp3y+r4ty83ORzTT2BdpUcJFnyWtsvFEcLxQDt7HswAz6HPhOWFdEysu3je9fn3OUpTAFNyZdxj9HALUgbxlCyWmkhULqdgPkr7T3Vpjef4p5O6AiQaToFbp6ncZw+EwKVQW/4JV5ZCn66UE5ZDY6wLorhjKLaygPfMC9yFFfBGNh+6rq55njogLHbfRSaytoyDPy8rtCrpCWpSLaCiG+Upe1ns/e5K1eszV8VoQqHqjwNMw3nCnkpzlAmxKVWQ/Mc8/ibMd5RL01uZnZw7yqCuhaIZLXTRgsBPMKPe0WmGYUsGtuwKWjnLhn4ZvwKaPIdsYaogAjMTJGVogM6a5bDZzuNvoKls5iJdc3ZCOi5dqnIORtnABPs4aGpgl4Lqz8PN6STpRHQgHsbZIiGnJYF+W6LquMmjPIpBdFQrbIqSPv5xWz5FVhGsFbcg01AYmcdD59gq4oCMprKqMiRJVMDPIhCsCJnXnNVNatcM7QAqJggNMOT+9sHWU1iHRGxD6BE7v7O7mP4TME+JUA1YXsmVuMhz4qyN38gqQPNYtuxR2nSHXOxTblfFIqHpRb0iOWYA8tmhFTSCdBfBMOrnNh0sig14n9LCjseCe+span+By1KX75eYuIvFBhGInO5jtrRNF3AebDhI6HQow4t1xIj2F8QU7eAHZHv5HTEAxnTOaeMFLtP2g2lTZVyxocwpz+eHjR2/GjehT6WB4rV+D6bv20lL/vO/2h815fBc90+F3dFyM5d7xVKm+leU5O3jWdpIwvdKpw4VQgITey6+E7mrbOEbUFVNk14F2u8GHAb3QNIsiy1rE3GuLt43gtXlZeQOOZSHR+S4eQbI4I58w9smQa0EvziWZWbqtQJNkY1LkYLIkTmtLt9w2FXQgIlo5TRY/NEpuN7XugC8vHBNXR3jzkujfzJNlMpvbW53PQMJGoRJ/WoSUtnkbRI8M+2c5+B6zpLzDWqwFLb9c76t1CkAmA70dUH0XReKWCLYQ2p3qtHwS7jZ3XlXZ87n3CtEjwCnlNy1B3C6svhK53PeyLIyvH10imu/32ORd24siSNBJz9tx7iCgkkXJuTPYE2sNyQ+MxCrEM9zU9OeOMxJPsmqG07X3v84KyEUUpeV7djBKz8LWtyCaVw0BdB6IDMY0/rOwVj0KOa8891kFrnen4JyceTiHxNim9OKRT59iZ9/ccIl0Rbgsjp6Ih1QyBr4mRB6r6uOZ/57FzGHLphrfW57336M74mY6RpUZadIbGzo6o+DF0OJe9NjVxKxej6Ydpo5eVK71D2xkvyyiONgRahi9Aa6PO1jbJG9zhvQ2v+k5bejWAjIrRX1GWuSiRbtFEaoczkY8IGTU6sRypKGWZVXh8i9gDguh9FaHmjqoA+7RPBE2Fa/DZFoKz9asHX2z1gTuH+iZCACptpewCemmtAL7ibQLP3qDKgFlLDFYoG2aGQm+reAHZh8+1ENHN1gptKJHpAjfDqHNzWyMT7WXE6YwYH/4piRC8JYfkMw/M1eAHDXxCe3OG0zq0fNStnGA/3XUwHjWWXBhHbJNW92iMNzGVRnRb4SjzrNTVfV5umQ2GmnIRh9pDcR5Vl9zoT6BsFR966IDbAshkbLURPVzzTUfVMKSLaurC3mEVKl62OZ+T8dB0H8yqcpZWqY6xm4pKSFREEb66StJjgrWQ5CB2CEv4s7BzPXkGJALRd1Rr+vaRiHa37ZWJtOwCea8oa4SDoDkkcvyGtMravxAyShQ1qk/TGzkbH9Fr5oajxDmwyYnW5qbEwz1IZJJezJJgbbT6BFNZ/iLtEb8MpCHxwYnLuqGTW5oUnyd5tKBOklSrGUPJ826lbXTq2oCB3kywsqUgMszLJRWn5Z0EiRKnmVRRYRVKW0n2AJgxFARXfpPJfiCJNXHL+ZvS7Nvm53IUwKnW483lmDfdIS5Yq1Uk7xm1A/kSgsWYhPUbxklr+bxdsxeZ2FqhepnRVS2YN79ltPN0x+CW47diGNr7kXA7rK2nBqlLKPd9ee5rzoIKhsRbjr93y9QcwJu0izU7ug9h1RC+t6QJZLwuzvd18ReDIPIwj6KDBMLhiHxVTHWCxLt5RelPkvdB95Idx9C1EHy2pIrG5XY0Pd2ZwyvbjWFDlFNlDdXDtRmhLYHsSSllaYcOjIeX24u07RK4trm8CGrmFonY16Dbl0H9nC2tQhFrd+QkaMiYVwEavcVrPIU67CRtuM+KYYSeiJDqFcxwIfzeURwaNCUHmFjbWq026VSQkSkxy/j0C/MN3EoVjPoNI0mytt5LjLbogdkuCkPJGO37FUFomcQLFGWarvRGhax9DHCbGaRmX9NrlB2OwGUxxUmURKp/59hqdHiMhAgJXaUKivPVGwCa0cs29nR5n/fBHMI5JRQhYGmsp1z6neXkvNDRUEKkrvUhRsqwhz9kS+Kw2kgYEPtk97ahaNrGTUfozSk46rZoYQHwal5+bK7e9ouOXAf3QikIVyMHbspx74QT8+o1Yk/v2DRH9iG4ofzwec6Jl3hG0HPJCLmihktnoQmg0j6TRvHdkPfVbfayOvDdvu9GqklW+2lNf6dPyExAXucF4AJIH9bPpEbgl9wwz82hjT3pPX8Fxxc4HkDXN9E8GOaAhmUOt3ltExoyCa1/DDV965AktDrJDQzkQ9x7m1HsrB0hPe9I5l2wTvIVnG1RJXHv+CjsgibizhV/mFangCRv8+miW6v+fJfUT+NM45Pi7VzHes2tGy4tsTxUatMENNGC2k1lspGJqfWmLAfRYgeDbEN4nlfAQYhC/kgiqTNXlxUU6A35nIxa9VjXoUzyd1VAx+YYJ8v3H2SXZmuL3SGqjnH1xFpKjs2Bhp+XeO0Za8uTzQC49WnvIh/pGhTeSPt8VkpoNbz7aCAuCQIRPAQKHEfh9XWPNvMb7OG/pzJ6yfSg37OOyUclqhKEZtLTbHv6VD4JPKq1vYMmLp2LESMP0mMpJjx6+IUF3mpcPjkJt6beSNYzF1Sbgx5rZg1FlstST9lxy33ZHmSnn6VHWhoA9C1UNUjtO0NA3B/cTzzBQe4mPPr5HStlC9cw6BRjwmeGYgg/NzCA7ot343B411tEQhrYFxBBbuzLFlB+lkxNv6LFLnnE7fOLGeV4cXaTB03QuFZ1kScmNUaquq82l2kKQO4Fx3XLiFaIAoNrHQIj6sr5JmqpeDbqILFANZ7s5dctMCejploCtcnkstd5KUYImvgYNCodGsAm8fAFuvLg5w5Y1iMxAF63YAV3W40b3eTHZd04ZyePhLbBRhWkB/YE55q3lCerIqA+mJbTKzG6aHnmYSl+h7aWstEHxBpP8V1iC+NqVG7bNnJBmfAZ6157ySCwbJCFxwY0SyjdmTC8hhsikdsjopqhAdgbMrKjw7rzHdDzlPfVnRPVwK/ytmJhN0lG67obRyPmfPRhNJL4s/WPXbQpAnIo8ZVij3TMGEVUidAoL389vG75ovsuWl9LkKBUcOBvYyxBCIM7iltecRRZV58y6esduofd64/nBIUsSOEx2fWfaSOWKEb7x3gFGA+kdhMOaj+E0kRPidfyQFIuGm26McILrMLqvccrfBhAjLIYIUvWS3nQO/qeVttYXyNA7q4srTMzxOatGxtDGSDgR6pLOvpRASYRNS28r/X1Qri5sTDoER4P65E+HPo5jTPSSl6SWR3wpYQPcMWTqT6j6mcrHc92pDHOhcblhHSzPonnqqr2CWrdd41OUHE8j6YntnpFUBdbe6scoBwF2U4bNMtzzCv+CTZ7Ciq7Og/itmSHMQFd9IcF2nLf1D2/KtpmAd66ju4T6m1E03yELSi6j0ME9e7JmBIwCXhxVI0e1+1jtStuxYB5ittQcwWsvyQWWDdY1udW4J2uXEJ6xWsWvV9ybK8BfiFmYcKnoRNxWr+/QWdX7lUbpDSAXU6ceOzmifbhNvhT/dloEnTLomU7mbXSYkvKPqfki89gxgXxXORWRXk587N2Dc5L4/TYhDwh1xKCRWAd4V6QQ8J/WzBk1zUu9vj4WnHX3l3DnyovA6W9Hc3bfnhKVHoa4JZbLxPSUVDAx+8+NfTYUFEJT3RFPqUoTkQih6dBeOIREXxaRQYuUstw6vMM/GTPcjCUMtvun+dLc2b3UXh/E94SlveWfDRIj4wjdGzgl5SOocp7uXzriGefPlSiD83VyV0YmJ1dd/bPCUKZU7W/jsYVZUyhBkQnB28bCJf21hy6wTM17wT1IGFXadDHzqAcxLlAxd2kPNp4g0fCQ/5y/wpoB/vSTPhGzB26u1C0CGqCBNA8/WVcx3dPmSOD633lJ2jXZ9KuL4EZ86/oXaElUsS4WOekJUCaHRgxGwNRgj1IOM3YQx0k11rQVYQlRc6lk8kZVGX/eL338pykKUwb1mT/eJ+zjkiLOlAx8ayT0i6KkBQbT7QdfNYUkITB2mMmqabhsedcl2NAO77nti4e2UJRSKjcIKrthG9DuYL8yr2zxjuKX/lSlfsV1ex+ueSWbpRkuRfh1mslDyNJ0M03jHjvmYZdgqPuO6ZTV8hEeHDaNeOpq2XcbsWFKTSbv3yxv/kHBWst6vEhxZsVyZJRGlXGWzhBbihNCBE0VzQ8TcrKWXMNQpPbcsOisw275mOUU0M5x6QixmcGd7Jy8lfkl1A9CB/4mKKDasZ1Fgc14g5rZQV1HX+oihzOGKhdJXCXHhgyI1KlrJ01TsA1/IvmSAIeOoPG0NdTenZa4Lj9ptRLXndJQVKpGMEKvnUXpN6GbuWUx4bnfGK6NEUBWiyuYTC622oigyAnr9ArnqWcV4KaQ+TZN40eL+QDpUxhglIm1S5aTBr2ndKh0KKVOIqUIz7Nj046MjfSVGS+YeZs1ZozN85HyU0Ad55UOw6VRV3i8IqwPckmD3GJN59tPYM6Mdh0qGxRsfzCfHEsJTuMR8N4xdWFme51O72vNf+07Xo8FgmYyCy7EfmFSq+Lrwv2eBhja3+kWWqk1REyhU2ix6TB9KG0rA3yBV4mzVN9Cn49DLNm8BIINJuCTybR4K29n4zwLBlGA+UdjAZKOrXOOtE8L81UjfC+Ng9pL7M4jUkhtuUemsSX35CqaKe2lG2SDN/Z7zy/m1inKuC3vjCTm7vNTfZuLcxoeemTvIEhMK2peoOB2v4k9/+lCcDt3vno3EctnEVoSXo0yOne9E9mX5DqCzRNqW/2MYLdtPK2US2ZSp9GAfunyhBc38eoeFO/Vge2/rNd6QkSw+HnCgehm1RI30nj3L7WJI6Ex1ul0dbY6bjh+ckq3yzH1U0ZfwSiCLLaf1CFgPoBwn5WIYBp7Acc/1WFACF+gL7H+PQHVAg4HbusNDPV6BIECWq/Gi/6f7+DIe9/coHgIwfl1wIBxz1y5/q33LPfNnkjCRn76fMGc88tf97a501xbgJ0k6DbA77d6dsjvd3m5+3LdwCVHG5FuoKLZJOVEsYqLdPsUJa1XxFqupZUkmvvjcq5Ft8G9ulHEBLsjM4js8C5uKADiIpuUEUBVJef1niQND4YlUe8NBK3u2CApAoteYdZgM98gDTD/fxsCweAFGRQX2soXYgceom2LX3nFwUCUcvkP/oOBC/ErDIse3iEx68PScJ2WMSWtRufY6PQpCTpVEpnw8unYcKP8qranEhWZitUsXfNkFlZDFDZ1tBzJiRqQdLUYvF3JYH9vbKeihMwVFe00T+C078nFTf7OB49J7vin+2Cl1WHQ8VhExwZVVKpJ/2n3a28Yo5kKbLtO+cBOYQZAkvxoOcnu4N5gdluxU9oQdUbJ6WjrFg2GcmhMjl0kTXTHeNqg8AIN27EW6XTLfdpBTtzbIvI3m7Y/mMaVwvulLzsoVcqlWu4W8+cNE2JLurJhuCN1KkFDpQvKRFNnEnpLvf3MgLNR7tumZFeKOxli4wTdqsVp6OnZTtB5TyvGy3pCfXQ5DqkMx+S7ySgOxQ3ttahByU7VLw2rjkXFQWBRlUmN+kECgNees/9fbyH3LPYQQv7O06qlu+dbvl+orMPXOgRyPoOUlffu6rA9XveMn1Mp9CKYI8hxHsHrw0bLlY9y5XPzvwLJPV6u48Im1hNWvRyCADRIL4oZLVEu/NBMH84PglAjd9q805VjRyCUpf1bC1KzLaycM4JFu3cR6VdYtAAeqjEPGcmgSkjWD/otpPLiwdxFQjbnFzgKbDVvJpK1l5QMEV2ZBhTbSDjinhtaGrm6byGsYu5NQ6pJN5KwjpUU5dcUy0DaHm/RkaNpRE8q08kWsXR+XkOMZoV5M3WxtHO1O5GvHpIfKtk7d12yUVEch7jvuUhGGxQJiN37ZpHtOg4WvQLDAfF3NTBcEGmedRn7jYRjgmwgFE4XJtShBR79RqMHCWEw6sk73o20guq4GIdw9DLd7OD0bgYbDxAqjCt5QArIRyHobdlryFjPMORc33rcZQzE2MHmtWjQpS8FRFrUXIEbAQmDrN3z2V0O5o8Qr9BIrhmKPRzBJsM5SmQjkUfjdsSzfUcdbKWIXZPn0/fXWYfhi9v+EhO5tUMFkjMLyf88l7LVrUY2+1JPMogsE91luwvl4HCa71aiRbX1luC3y0DCifKFZGDkIXuNIToj/ms6tt9SRDURGst0PDzmbdZq68ZntbSjMeJXzUGpQTbzW+Ne4Y0r37AU6FGoO64IccV3zR8+FLO9nommhhH0sErm8idbYYz4v1sxWH3IVoUKzgHaqwJxOB2PZ0z6OMum3pRznTk8DZbNA0yznZqZ+8QnRWKLiIRQ/K1vdHBXTna1xrlyVEntn+n+7hpT1fQUIIYQVYFndn9gbba3QzRpb0Tb9oPbWJ9KAtECwquP2mYfGaVcAlX+sqoy1Ckn70pWUnfF8aG0VQ1Nfy6ZEqSNXOKDMWxjpzTlP+UYm0U4NVAsNmIhl2Q2jGxkWTL2W0RDRnPbUIdtx+H/9/cmS05imNh+I06EDuXxjZgFmP25Y7V7JuBBD99S66qqarMnI6umo6ZuXFEZmDAkjj6zy/0nZ70E5iSiBSRksZoV0yG75w8ukk/aqaN5OC1VKNxocqRnVExuFxup2VfiLZICDC+7KLm7o+TMY63COM9YraHN2wV5uEgmW6kJH5/5d2Hvce4spzVgkn5ag9BtGrmkFNpvkUVV9Ij4Lq497lJvJ51L5+qw+o1fOpmAHbZOsLhRnkPH38LcAKeZ9TVYg5lHRwYpwPI1YqNJxoo4+uO9fr0FrhASMEGihu/zM8jPQ2lm5FbkzWrMC1546MlEwzNZD7mWZ19as9ucgVL4gARCyZwPSf2BNLQJptgxXVdD8AzSG+BPZl4uV4GpFrK0mCC5K2Km/pQk+pe2li+HUcdKjtjODB0jBaSthMl725RSF2qNrRqLHdZP9I1nEyym8YVDk9Kj5xqUTohVjQ9l01X+JKjuLhuL49jy0MRjg0zt2k+Nb6V3hrE7hI2OSUT0rmmFHKv7KXZU6H33xTrvsqrVWddjGaE6HFpJlySeapLb7v/Cm0d74tOlGK6Pc5HnLec6wDzbknzHA+blOvagHyPHM563qdqFHRakjGPlqtkB6s9MuLd5i6J/WwCRxu42xm/GzGcOObn2R7ENCbERBCM2KLK7pWN1ifQMkE9LT1LBpb645GChgTB6PukSSmrKRNhEja3ZdDeMouRHefKCHaop7F01Y9dQ/N0qLd85s+9Y1lrqJZyFymMtKsBGAzp2Z4za8KslFW9xVJ5T8yOHQG1TOYk1xGr6VaeTGJf6IPDcguu7EVhX2BMrxYz5NmnYpV5MF1PUZ+zxNtDee1g6Q/wbNmKwSYjNJhIZDnJ1zmp8AJaBdwJI1Ry1RS7lOe0uqTUzXPSG1CakLjZonq5DpRON49hFJiQ1qMaWTVkM1fdMKg0lD7jhEQN8m4fy9DqfkKgnUyUkY9SB5vh1NU2ylMDXqdnRx2HTGK0W6yVUpOfDx2X9QIwdiH3zZU3LwP1ptGjbc7sPVRUsTzq0Xx2wQkPwjm7pRl+5ioRu+spjt5skx3mWucYPEvSC24ujDnV33YsDwqYih2e4+DuJEpl1Mhef/gf3YNh5bJ8d47XxQgl7vB6qXDkc29W1xXtUTjW5lze5LV1d7QyLRBsb+P2jIkPVDx8SpDVtQs+6Nx8NezK6Rb4OeMgmyPUGvxjptIwMmuwH9n0gTOMdxeLntywrWRae13HZQU3Es7nex+EPk4IuOR1Ecv0zlyhCV3sstZFs4lWc9SNfvC2xPlQjg45O+dK2p7WECRsOqNdhpLU+kd1FYcOLTYNjydsdWV0kJwYylZg0cbFInKDSUr9Gi0JBi9BEGOApFfOPXKDueISCqyzgWNgF9mAxVWl8J1JnqMxZ/U8qw70cat80fYvz0FWC94NDMYVCXA/mxhj2dYrFZ8Vny9tuq8lNC04hULDsGOAsUKbfIT8dLgMIMeHyxG1d1Rbq2raNdHjTSOyLJyI0NeOeOzZlg4fkNE+lDY+1B0UpJtOp0M85KVZLF1g0VCnN4VBQsHv1j2HR7XqPyv5JSZiUokH/Lx0LEwUl3sI9Xjpv+omE+jTT2gj1QopFJ0N47Z2yDfXmNO7/PE4lM7wt+lmIO//zRbdHWAHVylLyr1QTHqh0PsLQ3Gb4ixoExj5WOBc/MsRKvtLg+X9jIqRDkDHjlhdst3mUM829J+uUSR3OR7g5ZCDURIoCe+iOKmQXCyvSLpW4xbRh2Nnbs5Ys2CpkUhkMcpjWEFJKRH1FlDvp5cGL05nwMxWhmsuckdvllPmgntm4m13asYMlVRTYvh891AKlnq3j+gRXjR+axJ0RXKkexKpA1OCsn4QapnchyTK8e3IZJtv8px7vcBj5RX9nioBwCK3zKqJSJFPM4w/GLx4aq1arHjzKVVpp/EfNjn1NVrX6Lt6kGHG4ZAB8kivOgyjDrOlu3osKp3oNNvJuuD2ZiM3L5XEs24+S+4Ku1spLRovl5fddOKMpi/mrBvXRWfcY+1luTYf76ez8xoLCj03elGiYVblPIZccjFyz7O++0V4UWim0CmtJdSDERgo58OIAa9HWZ0iBaY6JMWL15GgoyfJZvWjtGNQX7RLUHaonSNs8txzAx4DMUVf0pP7c2mXncFJMZDPHcFcjOCwCfoEh5z3mOR4CuQb1Wmh2vbr8+7W+UJ4jV5bef4o+NdNr/Cmo/3RLHOZpsI09c+Da2O+WwicACIwCAxrez7qTn4XQRDdw8fZ/FK7UfrJqvleYRKHwqs181TJd1s9/39ZNh/8mU9cnH9r2VD4O6IpBX5mfX0EQePUH9/IJP8d7+bTCle/ykFBoIOvJSEB8ZdclG9Axl/iovzrj9/nonwgxf0OEoj5JU7K/wqLQr7H9rzHwP1t/s97kBB4d6J/jnny+dj8GxVavgy671gR8JeEkf8UkyY00f0HWtrvF3RBWLjPeCPvKPafgO2/cuhOXyF0H23QFsblF7rrM8DJzwiUfyDEvVB6P44RCvvjG1/+J/jIZ4D7Xw9qSMP2qGu+jzP4wwqtTzN0xJ8=</diagram></mxfile>
|
2107.13077/main_diagram/main_diagram.pdf
ADDED
|
Binary file (40.2 kB). View file
|
|
|
2107.13077/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,40 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Transformer-based neural language models (LMs), such as GPT/BART [1–3], have led a wave of new trends in natural language generation, producing texts of prominent quality. Such LMs are usually trained roughly on huge amounts of text corpora to maximize the likelihoods of predicting next tokens. Despite their success in varieties of NLP tasks, we argue that the black-box nature of these models leads to inefficiently learning to follow constraints and incorporating prior knowledge.
|
| 4 |
+
|
| 5 |
+
In controllable text generation, most relevant studies [4–6] focus on controlling high-level text attributes (e.g., topic, sentiment) or simply keyword/phrase. More complex fine-grained control constraints such as "generate a text with 'apple' in the second sentence which has 15 words and
|
| 6 |
+
|
| 7 |
+
<sup>∗</sup>Work done during the internship at Microsoft STCA.
|
| 8 |
+
|
| 9 |
+
<sup>†</sup>Corresponding author: Daxin Jiang (djiang@microsoft.com).
|
| 10 |
+
|
| 11 |
+
'orange' or 'oranges' in the sixth sentence" are less explored. A very recent work [7] reveals that largescale LMs do not learn to obey the underlying constraints reliably, even in a quite simple constrained generation task (cover all the given keywords without hallucinating new ones). It is conceivable that how such LMs will behave when expected to follow the propositional control constraints mentioned above. In general text generation, existing works on various tasks reveal the benefit of incorporating task-specific prior knowledge: machine translation [8] (e.g., the coverage constraint that each source phrase should be translated into exactly one target phrase), text summarization [9] (e.g., the lead bias: front loading the most salient information), dialogue generation [10] (e.g., humans tend to repeat entity names or even long phrases in conversation). However, they either need designing specific model architectures (e.g., Coverage Mechanism and Copy Mechanism) or devising well-designed learning objectives (e.g., GSG [11]). These methods require careful design case-by-case and are difficult to combine multiple arbitrary prior knowledge simultaneously.
|
| 12 |
+
|
| 13 |
+
The dilemma of the above two research lines motivates us to explore unifying the constrained generation and prior knowledge integration into one single generation framework that could effectively execute multiple rules (defined in predicate logic form) simultaneously. Thus, we propose to equip the transformer-based sequence-to-sequence architecture with a novel module named Neural Rule-Execution Tracking Machine (NRETM) 3 , which effectively enforces the satisfaction of given constraints by dynamically monitoring the expression progress of each constraint during the decoding process. We leverage executable programs as the logic checking operator, thus these constraints can be any predicate logic formula and a variety of predicate functions can be defined to support flexible expansion. NRETM consists of a state matrix and a logical tracker where the former records status of constraint expression and the latter update the state matrix according to the latest decoding result. The state representations aggregated from the state matrix are injected into the decoder as the relative positions between encoder output and current decoder input in the cross-attention module. Our approach reconciles symbolic computing (that has precise logic and numerical calculation capabilities) with neural language generation (that has an exceptional ability of wording and phrasing), which results in both the accurate controllability and the superior generation performance.
|
| 14 |
+
|
| 15 |
+
We conduct experiments on three benchmarks: ROCStories [12] consisting five-sentence stories; Commonsense Generation [13], target explicitly test machines for the ability of generative commonsense reasoning; TED15 Zh-En [14], a common document-level machine translation benchmark. We design various of controllable and general text generation settings in these benchmarks and our NRETM model significantly outperforms Seq2Seq pre-trained baselines.
|
| 16 |
+
|
| 17 |
+
Our contributions in this work are three-fold: (1) proposal of unifying constrained generation and prior knowledge incorporation in a highly customizable predicate logic controllable generation framework; (2) proposal of a neural rule-execution tracking machine that effectively guides the generation following any predicate logic constraints; and (3) empirical verification of the effectiveness of the proposed approach on three benchmarks.
|
| 18 |
+
|
| 19 |
+
# Method
|
| 20 |
+
|
| 21 |
+
In this work, we formalize the execution of rules during text generation as generating sentences that conform to certain predicate logic constraints. We start from a definition of predicate logic constraint in our framework, an overview of the model, and then dive into details of each component.
|
| 22 |
+
|
| 23 |
+
We define a predicate U with zero or more arguments. For example, U(a, b) is an atomic formula with predicate U and two arguments, here occupied by the variable a and b. The predicate U(a, b) is a boolean function asserting a relationship between variables a and b, e.g., Copy(a, b) means the occurrence of keyword a in a sequence b, where Copy is a specific kind of predicate and a can be either unigram or multi-gram. In general, an argument is either a variable or a constant. Further, our method accepts predicate logic constraint P<sup>c</sup> in propositional logical formula like:
|
| 24 |
+
|
| 25 |
+
$$(U_1 \vee U_2 \cdots \vee U_i) \wedge \cdots \wedge (U_k \cdots \vee U_n)$$
|
| 26 |
+
|
| 27 |
+
where each U<sup>i</sup> represents a single positive or negative constraint, e.g., U(a, b) or ¬U(a, b), restricting whether a and b satisfy the relationship defined by U or not, respectively. Our method seeks optimal
|
| 28 |
+
|
| 29 |
+
<sup>3</sup>We will make the code public available later to facilitate reproducing the results
|
| 30 |
+
|
| 31 |
+

|
| 32 |
+
|
| 33 |
+
Figure 1: An overview of our NRETM model. Given logic constraints, input and current generated sequences, our *Logic Tracker* maintains the current status of all logic constraints in State Flag Matrix, which is further processed by the State Matrix Encoder and fed into the Transformer-based text decoder to guide the text generation procedure.
|
| 34 |
+
|
| 35 |
+
generated sequence y conditioned on the input x in which $P_c$ are satisfied:
|
| 36 |
+
|
| 37 |
+
$$\mathbf{y} = \arg \max_{\mathbf{y} \in \mathcal{Y}} P(\mathbf{y}|\mathbf{x}) \quad \text{where } P_c = 1$$
|
| 38 |
+
(1)
|
| 39 |
+
|
| 40 |
+
With the help of custom predicate logic functions, various control constraints and prior knowledge can be converted to the propositional logical formula, and thus handled by NRETM. For example, the control constraint "generate a text with 'apple' in the second sentence which has 15 words and 'orange' or 'oranges' in the sixth sentence" can be transformed to "InSen(apple, 2) $\land$ SenLen(2, 15) $\land$ (InSen(orange, 6) $\lor$ InSen(oranges, 6))"; the prior knowledge "each source phrase should be translated into exactly one target phrase" can be transformed to " $\forall s_i \in \mathbf{S}$ , TranslatedOnce( $s_i$ )", $s_i$ is the $i^{th}$ phrase in source. We define six kinds of predicates that used by NRETM in this work, details please refer to the Supplementary Material A.2.
|
2109.08232/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2021-04-29T22:11:08.939Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36" version="14.6.1" etag="DH-5iIzMkzhiWF-ynqIB" type="device"><diagram id="4Oimh48Wigzi9Q1QmfrA">7VnbjtMwEP2aPi7KPekjW5aLBGjFgoBHN5kmZp24OM625esZN04T11mpaEPLSjzVHo9v58ypZ9qZvyi3bwRZFx94BmzmOdl25r+aeZ7rxwl+KMuutcRu0BpyQTPt1Bvu6C/QRkdbG5pBbThKzpmka9OY8qqCVBo2IgTfmG4rzsxd1yTXOzq94S4lDCy3rzSTRWtNwoH3W6B50e3sOnqkJJ2zNtQFyfhmYPJvZv5CcC7bVrldAFPgdbi0814/Mno4mIBKnjLBayc8ENbou+lzyV13WcGbKgPl7878601BJdytSapGN0gv2gpZMj28oowtOOMC+xWv0Ok6I3VxmK63AyFh++iR3QMQGEHAS5Bihy56wjxsZ+jYOUC56ZmIEm0rhizE2kg0+/lh6R4gbGiMxvHynz1egeOdiJebPB2vwMLrQ6N0imB5zt0ayD2I2oIQVbFWzZSI7M8wnHl+FkKSBWivpeD3MBhJvKUfRdOgeoimLgxjOwxHUXUmiMLQQvUj5ERSXqH1C0afqCWpMlrlE0NLIFmlY9BGaQLL1TTQ+t4FoY0saD8BqXk1OZarJIV0FMtlEgb4lEyCZZBcEMvYwvJdteKiJMrpPanyRr2zk6IKLso/HkN1HsU+mUj8UXBBVJORJyhiuMP1Ehu53F+xNaw43mcIbvSz4d3AVb3Pq16igxutt/1gt8qiIIxBlWOmpdfDo7VLmtugebD1EZ2IsjQpM6nRr97IQ0gYzSsVCMgKoP1acUYxC3upB0qaZWqb0Rjpn+GJpOQ6sUl6YJPuuiOsexOQPrdwhQxTUt3lQhY85xVhN731CIHe5z3na03FD5Byp/Nr0khuEoXAiN03Nf9F2HW/D8debfXibW837N2CoHhNRdyAAXVqA/+aNyI9Su8lETlII4E4gSYBDJ/AB3P5Mcz11FtO9+rQ9Hq+qenwWKvtqfSsI+YOxziJzO6ew/eblPuMqFnWksqmfcofTSydfzUpCk0Mg5Hc3PVGJBJNIBHXtRA7h0a2VA4kgr3vnQqw3QtEdQx9nFNXnq2raHpdnUyUXXTui4IrSep7tDMgos23MG4wbPZHgAeyV6vysQuGp0jjTIlYdElp+M9QGpFniMN5MZ//LX0Etj7CC+rDLpoH5d1bDA2mwmOsInnSE3Ge4s69pA7C56KDU6I2sqM2vmDU2pXzrYArKQjVX+b78KXVVcZLtGE7I5JMGsJnqv68S4awXVT/pfLvM6RFRX82/8s//YZ6Juvzkd+dR0j3/5x07Pb/ALT1Rf8/in/zGw==</diagram></mxfile>
|
2109.08232/main_diagram/main_diagram.pdf
ADDED
|
Binary file (27 kB). View file
|
|
|
2109.08232/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,15 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
The nature of dialogue poses additional challenges to summarizers beyond what is required when processing structured, single-speaker documents [@zhu2006summarization]. Given that dialogues typically represent an interaction between many speakers, a summarizer model must keep track of the different lines of thoughts of individual speakers, distinguish salient from non-salient utterances, and finally produce a coherent, monologue summary of the dialogue.
|
| 4 |
+
|
| 5 |
+
Dialogues usually include unfinished sentences where speakers were interrupted or repetitions, where a speaker expresses their thoughts more than once and possibly in different styles. Moreover, a single dialogue could touch on many topics without a clear boundary between the different topics. All the aforementioned phenomena certainly add to the difficulty of the task [@zechner2000diasumm; @zechner2002automatic; @ChenY20].
|
| 6 |
+
|
| 7 |
+
Our work focuses on SAMSum [@gilwa2019], which is a dialogue summarization dataset comprised of \~16K everyday dialogues with their human-written summaries. As our backbone model, we use BART [@bartlewis2020], a state-of-the-art pretrained encoder-decoder language model that is suitable for sequence-to-sequence tasks. Table [1](#tab:example){reference-type="ref" reference="tab:example"} shows an example of a summary generated using BART [@bartlewis2020], fine-tuned on SAMSum. Clearly, a level of reasoning is required to make sense of the conversation, which BART fails to do and therefore produces an incorrect summary.
|
| 8 |
+
|
| 9 |
+
We propose a combination of techniques to tackle a set of dialogue summarization challenges. The first challenge is having ***multiple speakers*** (generally, more than 2), where it becomes harder for the model to keep track of different utterances and determine their saliency. The second challenge is ***multiple negations***, which is thought by to pose some difficulty to dialogue understanding. The third of these challenges is ***reasoning***, where the model is required to reason about the dialogue context, and infer information that is not explicitly expressed. The last challenge is ***informal language***. Since we focus on random, everyday conversations, these are usually filled with non-standard language (abbreviations, social media terms, etc.).
|
| 10 |
+
|
| 11 |
+
The contributions in this work are:
|
| 12 |
+
|
| 13 |
+
- We propose a set of novel techniques to address four dialogue summarization challenges: multiple speakers, negation, reasoning and informal language. Our techniques include name substitution, negation scope highlighting, multi-task learning with relevant tasks, and pretraining on in-domain corpora.
|
| 14 |
+
|
| 15 |
+
- We show impressive improvements on the summarization performance using three of these, outperforming very strong baselines.
|
2109.14960/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="Electron" modified="2022-03-03T03:41:51.632Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/16.5.1 Chrome/96.0.4664.110 Electron/16.0.7 Safari/537.36" version="16.5.1" etag="OJ00Hs9K05YAvDAy4BJC" type="device"><diagram id="-bjJ1fN5AZvAK9bRodUt">7Z1pl6K63rc/zV7rvl8cFwlJgJfO8zz75ixEVBQFEcdP/4QqrXZqZD9VTeDu9D6nu0ScuJJ/8rsK4j9ienXMO6o9r1oT3fwHCpPjP2LmHwhlhUj0H2/L6XMLQLL8uWXmGJPLtl8b2sZZv2wULlt3xkTf3u3oWpbpGvb9Rs1ar3XNvdumOo51uN9tapn3r2qrs8srCr82tDXV1J926xsTd375YFD6tb2gG7P59ZUBUT7vWanXnS9PsZ2rE+tw81pi9h8x7ViW+/nT6pjWTe/oXY/L5xvK/eberzfm6Gs3yAPg5wP2qrm7fLbL+3JP1w9L36Lt/bg11jNTT3qH7x8xdZgbrt62Vc2760A5021zd2XSW4D+6Fi79UT3XkSgt74+pXdjZqrb7fWOpe5q88uNqbV2L6xF7/YHqevxFRJQAeTXH0nG110ujxESIqJNSVQEIkABI0lE3pMappm2TMv5+DCi8PGHbr98bt1x9eNvjx34IkLbsm6tdNc50V0uD/gPhjL5fNClISvk0o4Pv1oFJgkJf26d37aJyyPVS1OcfT39L1r0hwuw1/DE9/B+cQDvoW1dx1rqt0cLe/993XNl4e1r0WcxXO9zS97hpD3NVY217tyyvLwN8AFKu2CSfubIC8LdkZeR8HzksfB83KEkff/Ao39z4IWfO/Dw3x/4YF1vom7nX63kJwAhWZHuASnoCdB1012/kMH3+WDO510HAoLMjg95z0dfTy5DTUbzjoqh+Q4wT/WJPv5ScNDXIdMn12E9+AG7rScvysl1m6Obqmvs75/+1SG6vELDMugL3/QXBeIEAfejiaQkhMuI5f2B98+6tXaOpl+e6HaIf35uASSIIEMiEUmQr89z+ypQQhKABInksTy6qjPT3adX+SD8dZgCQZfeQ79ONbTdWH/fK8efTaAy/tqgasvZR8Oo71yTdrrL9onqLOtf3VJIfPTcm43wYyt42Z0f5g8TVZen2kf7cylwa003K8KrEkE0WR9Pf6q7ChAlFEQAwIoMwANBIiSQQmefWCZQuE41bxouQC9aroi/34/l/xNIsS5PUBCkMhyLhPzUCCkpcoJIQFAwnatSiPdMMUwAAYiApiWMgSQGg3otR9+BqvxfgDqdTqEWqJ9OyJjgn4KKoYgSWHzonjhgj/wBeNeEzIfW244m0eFP+frzZwZZSRJvB9kXQ3kYgywAnP8DGSwIcgj8FRlFgX8AofO38QcI/qmpNQZAjsDUGrwyQcR0L3nzDj/Z7KzrHf/ZfnBM0h1EwT7+upP+NPv4Nw3/SUl3f1+el76lz6e+7PjYyOgg6t43K9U0Zt4wrNF24cXhlDfUGppqJi93rIzJxPzdfOK+UT6E7JtB/lEk4svtnLoyTO8BHWOlb+l7rekH+nfLWqnrn5rREa8E3GsPhYiJ5+Ff/FOjfwAx9Yfmbk/zrM/jj33m2394fi3LIPGrhz8WZAhv77zeeztFeyVtfyAzgQByijP6YARCpBJASXEqH1SEZ9P+x6iwc0ZRpCL51TOZVT1jJ4HixghIrBixczpxYySQ0KjAAK7m76GiiD5URMSo51w/Cmf0lhEUWTEK4Dw4o88RCIZHJcDZKX8PFaIksPzbnsMqiwY5kYUzCjuLBjl9hVMJO4sGOWnl76EiYb96xiqLMjzHJG6MmGVRhieNxI1RmFmUG4J7//l7KsyyqMh9QVBGzLKoyH1BUEYhZlGRG4IbKlhOXH9RzDyBitwS+JMJMXeK3Ab4swgxbYrcAdw6APRcsVhlTJF7AH8yzJKlyNO/P5kQ86TIU/5tyheeWLBLkTzp+5Nhlh0Rz/f+ZEJMjIjn+BsWiCRk9FvXwio7Ip7qgzIKMUUinuiDUgkxTyKe7W+zPfSrZ6ySJeKZPygjZhkT8fQflFGIaRPx5H9vLn9PhVnuRNwIBGXELoFyNxCUUYhZFHMvcENFxAkEI5dFMfcFQRmFmEUxNwRBqYSYRTE3BLeGAPjVM1ZZFHNfEJQRsyyKuS8IyijELIq5IbilIvlQYZZFMfcFQRkxy6KY+4KgjMLMorEwBNdFWv/0oqwKSEAoSBJBkoDl+wWVFVlI4Jue84IRSkAJyYSIQABEQNcrY36aGYmFP4gEM+9iQ/9qFxazWPiESDDDclSYxcI2hMaMJBRRkCggBZHHJXFlEpXaGAsXEQlm3qWK0WAWCzcRCWYERaU2xsJVhMQMCDBBkECIgoEsgYcVQxUYlX4WC5MRCWbehY7RYBYLsxEJZpIQldoYC9MR1nhGY7OIoYyQAhRJfuxnkcnU3IMEZRaZTC1xDxKUWWQytcQ9yC0zkhAIlBSAkCjjh9IYlUgtcQ0SEFlkErXELUhAZJEJ1BKXIPeCWJJoM5YU4nF7gBaZRC1xCxIYWmQitcQ1SGBokcnUEvcgN9BoBEMEy7IAgagID1dVRyZTS9yDBGUWnUzNPUhQZpHJ1DL3IDfMaAgDEva+wc8b0x7P4YlKqJa5BwnKLDKpWuYiJCizyMRqmZuQe0csy1iSREigID6cWxCZVC1zFRKUWWRCtcxNSFBmkcnUMhchN8zodF4C9NhLWBGx/NjPopKpZe5BgjKLTKaWuQcJyiw6mZp7kFsPghIIQkFBAHjn7D/UxqhkaoV7kKDMIpOpFe5BgjKLTKZWuAe59SBCAohQQBQekK5XbkYuUyvcgwRlFplMrXAPEpRZZDK1wj3IDTMkJWSE4McBl66/8ohcpla4BwnKLDKZWuEeJCizyGRqhXuQWw8iJjBGgEBMiPx0slxkMjX3IEGZRSZTA4GLkKDQIhOqgcBNyC00JQHpRF9QPv+7hxaZVA2ur82hvYUWmVgNBO5CgkKLTK4GApchN9BEklCwBDEQRSIrT5cORiVZA4HrkMDUIpOtgcCFSGBqkUnXQOBK5FaJwAQhkncpPBGU60pukYvXQOBOJCi0COVrLkWCQotOvgZcitxCkxOiJH1eES9Acg8tOvkacCkSFFp08vX1l7Mc2lto0cnXgEuRoAsUP3wpFcuexqVIUGgAvPveg9CgcScSFJogRAYaVyJB1yh++I4rluWRG5Gg0B6+9IolNG5EgkITSGTKIzciQZcpfviSLIY9DXIjEhTaw7dmsYTGjUhQaODtlweHBo0bkaArFUcnXENuRIJCi064htyIBIUWnXANuREJuFxxdLI15EIkILPoRGvIfUhAZtFJ1pDrkMCrFUcoWnMfEphadLK1yIVIYGrRCdciNyJBlyyOTrgWuREJCi064VrkRiQotOiEa5EbkaCrFkcnXYvciASFFp14LXIlEhRadPK1yJ1I0IWLoxOvRS5FgkKLULrmTiQotOiEa8SVSNC1i6MTrhE3IkGhRSdcI25EgkKLTrhG3IgEXb44OuEacSMSFFp0wjXiRiQotOiEa8SNSNAVjKMTrhE3IkGhRSdcI25EgkKLULjmRiToIsbRCdeYG5Gg0KITrjE3IkGhRSdcY25Egi5jHJ1wjbkRCQotOuEacyMSFFp0wjXmRiToMsbRCdeYG5Gg0KITrjE3IkGhRSdcY25EAi9iHKF0zZVIYGrRideEO5HA1KKTrwmXIkGXMY5Ovr6+Mw7tLbTo5GvCpUhQaNHJ14RLkaDLGEcnXxMuRYJCi06+JlyKBIUWnXxNAkgR5xPD5Vi/YfZ0XAXs/fd1T//yIb19LfoshusdFMmjpllrV6VQvQd6t6f0dvvyNrzdVUdrf4KVfoSRLAl3WOTrafI3IPD1F8W3Rx5K0k8c+gBq49ehF37u0MN/f+hpv51Yh8udM1Pdbi8/b5e6q80vNybqdv7VTn4EESb3Z8XJCnpCdN101zeuvw/9HqEAGuNvJyRJMkNCUgBloa8nScf5ODaad1wM7R7KPcHnKkUffyk76Oug6ZOZ/m8P2W1VeVFUrtsc3aRj2f7+6V8dpMsrNCyDvvBtn1Fw4ioFvr6hQkkI4tMs4fqsW2vnaPrliX4d/VfPLYEEEbzLQYkkyNfnuX0VOmxJABIkksci6arOTHefXuWD8ddhCoidnfOYqM6y/tU1hcRH773ZCD+2gpdd+ndzkafpy5+dmxAFJRREAMCKDB7Pq8M4gRQZSlgmULgWwLuZiU/b/V5fZudEfhIq1uUJCgJVhmORzpl/CCoicoLGcYECRBJUHiacCCToRFIEskIwBpIYDKv8I1jZWZMfxDqdTqEWqK9OyJjgH8OqCCjxuGD8NR2/wyf+CL4A/uRvG2ERkegoqHz9+TNjLY2Ot2PtixE9nLE2gIv52xoAkeQQGgDBKBINIIDX+dsagAzgH5tjS7IciTn2KzNETPeSPe/4k83Out7xn0/5lqQ7iIJ9/HUn/Wn28W8a/pOS7v6+PC99T59PfdnxsZXRgdS9b1eqacy8oVijDcOLxilvuDU01Uxe7lgZk4n5uznFfat8CNw3A/1X3v5soiK+3M6pK8P0HtAxVvqWvteafqB/t6yVuv6pCYDoFYGHSwOJmHieAoh/bAYQi1NwrtPuPzzNxoj2v0cFDxIi9FXw+BnNj9hbKRbn2TBEA4SECJTHgToMNHIcTqaZypquvfxtyFjGCD8ZQ+9GQ3VpqV1/DNVQgH+QnqwozOjF4qSasDoWxs81T2JV82R2migeaABhhiYWJ8gwRCNgZgUtDqfBRGk4IuCJnigy61hxOB8m4vQgZEYvFifGhFUWX6ABgFlZjMU1QWGhEaUEeVgBnV22lbl28EfDMNty7eCLhmFwVeKgHcIzQuJzQWMWXBXuFPzRsAuuShycQpTm1y/oscu2CtcO9x71EQ274KrEQTuwRMMulSrcKXybHrvgqnCncIMGksRNJ3pcdJBZhFW4XQgKiV2YVeLgGaJUB/04sky+XErcSgnoVxFZZWD6ohxSQEjM0jAUuKgIColZ6IXXBhFpSFEatkTZhyOzhAyFOMiL2HBkFqehwE1HUEjMUjMU4uA8QoMEUEKJXmqGAlcbQSExS81Q4GojICR2kRgKcVAb4fknwa/csYvE3FsEhcQuEgPuLYJCYheJAfcW95r295DY5d3reiORhhSlvOvLkV3eBdxb/CBHdpEYcG9xA0kQE0j+RelxgX1mmRhwcRGYErtQDOJgLqJUE31BMgzOgNuNW0qKb1FklpwB1xuBKTGMztxvBKbELjvDOAiOKA1dAPuBZJevYRwkSHxAsgvYV26RBhneb1h8KbGLz5BrkH+3TK0CE0JUrjGGkMsPfzTssjTkxsMXDcN0DOOgOcJDIz8XNGaRGHJx4Y+GXQ6GcbAVkZopPNNjmI/jYDEiRA8L6Ikeu1AsxsFuhOegntGwi7ki9xXfpscu/opcUtxfIQ7uvw1LkZllWzEOZoIlGolZthXjoB2iVPNe0MPM4q/IzcT9KrdPNY9Z/BW5mfBHIzGLvyI3E/5oCLNsK8bBTIS5Pu0jGoVdcOXa4dv0ZGbZFnHt4I9GYhZcURy0Q5hL2EoPa3KzC66IOwV/NOyCK+JOwR8Nu1SK4uAUwlxd+KmgMUuliAsDfzTsUiniwsAfDbtUiuIgDKIUfZDyRI9dcEXcKfijYZhKuVP4Nj12wRVzp3B/UfVNL3q4qJpdhMXcLgSFxC7M4jh4hijVQV+O7JIv5lLifrlan4rILANjrieCQmKXhnEcREWUKqIvR3bRGcfBakSKo+zDkV2IxlyB/CBHdokbcxkSFBLDYB0HLRLm9dbXHsI+ThPuPPzRsAvRhJsOfzTscjGJg98IT0KB54LGLA0Triz80bDLwCQOoiJKc+4X9NglXxIHgxFex5Ke0LALs4RLiW/TYxdhCVcR36bHLtsSLiDuV52NzMnOhFsHXzLskq0UB+kQpYr3DI9d9pW4lrhfdDYyJ0NL3Er4kmGXfCUuJXzJsEu1EncS/9rDRuZMaIkrCV8y7DKtFAcjwZAMu7wqxcE2hLn4K4hMYJW4SvBHwzCxxsElRGia8Ioeu8gqc9/wr9eHfSqLzGKtzIWDPxp2uVbmxsEfDbtgK8dBOUSo5nmLvz7SY5ds5ThoiTBXVX4ajphFWzkO0iHiHYtd/JW5mLi//Fq4RwMExCz/ylxNvGEj0gAcJpA4CIkolbpXzAC72BsHaRHmurDPtY5ZqFXioCSYshGZpVqFC4c3bCAOdRxSuGa4X+z1CQhmllQV7hm+jw8xS7MKFw1v2Igg3FoXB70Q5jqwBD0AYRdQFS4P3rAJOaAq3Bi8AcIufSpxkAdhrv/6XMjYpU9uBt6wYZY+RSEOZiBKE+pX+MINqKIQB2MQKWbKMzNmGVa8vlak8YX5i4fnoYpVQBUF7he+jy/cDCsKXCrcr+1603XAAxpmaVYUuGkITCncXCsKcRANUSp6/vSYhWBR4ILifk1Xv0rIKg6LAlcVgSkxDMZcWgSmFHL+BdxZ/OuFWn3osUvCgIuMnwTJLjYDrjQCUwo5HYM4GI3wro1GCSWK6RhwhxGYUsjpGMRBXERpkPKnxy4dgzhojvAMlOBbCZmlY8AdRmBK7NIx4A4jMKWw0zEXF/eq1gcNu+gLucP4SZDsoi/kDuMnQYacjq+wIk0vzFV7kfyLDX7sZMziMeQSIzimkPMxjIO6iFIBfIOPXUCGcTAdYa4b61sMmSVkyD1GcEzsIjLkIiM4ppAzMoyDvojSmAWwLz6GOZrbjuCY2KVkkeuOHyUZckwWueT41wvSCpG5kli8NhWO7xv4Qo7VIlceb4CwC8piHJRHmAvRPtc6ZulY5BLjDRt2kViMg7mI1Dj0Al/IUVmMg8aIEDNvAdonZuzyschNx/fxMczNXG+8YRNyEkZxEBkTVZenL/sT0WR9PP0pIBKhmUi5A6JAZn0FxUFSsEQDWK2DLqI4CAiWaARW3zUlolh4htDQkOeCJjMraLHQDAzRAGaWAcXCMjBEIxBmBS0OkiE0NBJ8QiMiZr0mDi6BJRrIzBMg7gm+TQ9AZjUvFpogrI6F5MR1qV72qRRzYeCPhl0qxVwY+KNhl0oxFwY3aDB6LmjMUinmwsAfDbtUiuMgDKI0iXtBj11wxXFwClGiR4QneuyyLebawR8Nu2yLuXbwR8MuuOI4aIfQ0IhSAotfPeT+0miWEZbbhaCQ2IVZwj1DUEjsYi3hxuHWOIh+5Y5ZwCXcPQSFxC7qEm4hgkJil2hJHHxEeJAUH0jsgivh2uEHObJLuYQLiKCQ2OVdwlXEDSTve1R+QXq4yJlh4CXcSgSmxDDxxkFLRGng8gfJLhVLXF3cqgvoWxSZxWKJu4vAlNjlYonLi8CU2AVjiduLW0qyHyV2yVji+iIwJXa5V4qDv4gIJXbBV+J24oYSQAk5Mr/elbiT8EfDLuJKXET4o2EYWuNgH8JzRMJzQWOWVGXuE/zRsIunMpcI/mjYZVI5DuYgYjr1kR67rCpzo+CPhl1AlblG8EfDLpXK3B3coBFoF4nMdbMyFwb+aNilUpkLA3807FKpzIXBLRrluaCxS6VcGPijYZdKFS4M/NGwS6UKFwa3BhQ/oWEXORUuDL5Nj10qVbgw8EfDLpUqXBjcX6+MH4SBzCyVKlwY+KORmKVShQsDfzSEWSpVuDC4Xy31qaAxS6UKFwb+aCR2qZQLA380hFUqRQIXBvdLoT6iUVilUiRwYeCPRmYVOdH1tSKNJkrC4AU9iVUqRQIXBvfXFwvKQ8dilUqRwIWBPxpmqRQJXBj4o2GWSpHAhcHtUqj4uaCxSqVI4MLAHw2zVIqEOAiDKE3iXtBjGFzj4BSiRI+AJ3rssi3g2sEfDbtsC7h28EfDLriCOGiHMFdGJfirh0TlF6sIcLsQFBK7MAu4ZwgKiV2sBdw43K+M6lPumAVcwN1DUEjsoi7gFiIoJHaJFsTBR4QpWH8PiWFwjYN2iAYkdhEWcrsQFBK7MAu5Z7i/mFh8WOKKXYS9kuBofoOGXXCFcbALUdLeL+ixS7SQa4f7NU2fah6zHAu5bPBHwy69Qq4Y/NGwy6yQi4X7JUsf0bBLqpDrBH80DPNpHCRClCZxL+ixC64itwv3S5leL8BmH1xF7hT80bALriJ3Cv5o2KVSMQ5OIcylTJ8KGrNUKnJh4I+GXSoVuTDwR8MulYpxEAZRml9D8kSPXXAVuVPwR8MuuIrcKfijYZhK4+AUwlzKFEXmolbEhYE/GnapFHFh4I+GXSpFXBjcL2X6VNCYpVLEhYE/GnapFMVBGEQp+rygxy64Iu4U7pcyfUTDLpUi7hS+TY9dcEXcKfijYRdcEXcK96udivdogIDYJVcuFd6wEWl0DREI5irhDRDILLBi7hLuVzl9LmTMEivmMuENG5FZZMVxsAlRmlm/wudd5hAmszhohjDXpn0CgpklVcwlwhs2iFkOxXGwCEzZeJc3hAmEi4HbS4XlhCI9AGGXPjE3A2/YhJ0+46ADojRTe8WMXUAlXB7crmaKXtQ6ZgGVcHnwhg27gEq4PHjDJuT0SeJgDMK0oE9A2KVPEgczEKU5wit87AIq4fLgDZuQAyrhxuB+pVIsfvUL8ICGXVQlXCMEphRyaCXcIgRGwzCbxkEthLlCqV+ZY5ZSJW4QAlNil1cl7hICUwo5uUpxUAlRSkdI8aPHLuZKXEEEpsQuzUpxkBERoRRyrpW4aLi/lBj+QnP9qs0IBFuJ64fgmEJOtlIcpEOUJhNv8LFLvxJ3FPcrnPoWQ3bxl0uK4JjY5V+ZW4rgmEIOwDJ3E/frmvqxYRdv5Th4iqhgYpdv5ThYiCjNAN+QDDkDy1xP3K+JKj+pI2bBV+Z+4g2bkNOuzE3EGyDs8qscBxMR5lqoz4WMWWiVuVt4w4ZhUuVC4Q2bkOOpwtXBvS99AsIukypcHbxhwy6IKtwXvGETcrRU4mAGwlzYVIzOda0Kj/1v2IQcLRWe9d8AYRctFR7771crfS5kzKKlwmP/GzbsoqXCY/8bNmFHyzhk/Sj9rgbgZ2bM0icWuBl4w4ZZ+sQCNwNv2ISbPvEVfhAgxkqdeUS8j21oqllRx7rZsLbG5bCOLde1VnQH07sj9cXhegAn+lTdme7NMyRNY+Y90rU8qurW1jXvw02No1e3Uh8vmLxuFa5bvKdSXfUfMfl5E+bs9ewfmDZ6qXrrIJTzMytJ/9Ta3Xm2O6M/1Zf0r5SUTg7pv+nWStUdb4fkoNZuCcWks0UaaXobWutmF6ToPsfFYS8Pm11vY0nLzkfaIZnMbKv0ppRJapP9Ju894Nhrd1upXmGuSxNBUWsDrXCYHpTasktbSGpbmeSSdsbByfnB2QiWkzuMrZKxrFU2w4raHrs9u2nntukSKNdmdTRzUWHQaRpLSUVm+rhutcZ2OQWy5Wx95g533a1Gn5P+TxtPj1NVFUvH89I9aWLhQI/AsuExh7lWQVTpPri0lxfb9am7ri2323lxtVdK9F6l0SqNjHZxuT3iZLs/RPPSuKg1BXtZH6rdU6FYLY8KZj5XNE7n/JLuu/jaty0J0IXqam7b9IkGhQ1Ir9auM+oDtyxuK5mZWDHPLWVMX3xT75in+rS+HtR3A0neqUjqCEp14ejN7ZROdnJo2qxmkgCPW3TvteOsxzlprM4tLW9viHsmu72z7otjd7ue9mdoXmlO6YNKebdSGi/n8kComT2HfuiUAJf0jrHt9qdJvCiSemUrNs7ydj080nv7O9xyNumR2ge7iaLvB3UyaazFcf64VTXRO0Mwd/L+UmftrJndzdPFStrpEQXOLb2w7tJnmDboO7DTO6GDNvRmG2eO8qCwPo+24tYc4vlBnlbc3bRaXy1WY/sTgCSdcXXZwcrqPKI3i2tznFufj2Rtq4McgTUwWY/La+JkCCaj1SgPzOmZBqdUd2ntLQ0dJ2KjZcveB56Y3nsgii60dpndeIXt4apkqP25pWZNRfaAe685FPvT1DltDdKC2+matZ6SMufaoNHSR8eS0pieq8tRtiV5H2ehEbSc1sl5KzV0+tFTmV2hAKZrWWp7q7vl9vsh3bjM9WSxsZjK581iXKqMHbcyTrWyo9G2mie03MHctEZ2BGUbqbO8NKTyWRbtRn02EyoNWYfnQ3K8n9C9nF5vddgKy85knWpNFsNi3cXee2536WHZ1E5iZz9vykciHCBeVcxhYd8b5iy3Vy8WNzO70G8dinT3HF7DYjHTTXed1bLT6NE3mLJQQ1sd8FBQzNHxIEhlSRJylcGi0VeXWie9zaWqVQjTeXWe6kq9wqFujjIWqZN6obNb2v2pAzZi+7TBvQ0Y2oOJkaSHO1XblaukVRR0r4WehpXaStLKaUHJZ/rgPLYBwYfzSEZImknZ/CnVnjcLHbwbrcbHJlJ7qj5zCsfTuZ3qToozyZaGmrb1GuepqeYFSx/XNkK758CDvnClyU4sjHVj2VFamrfPtu8VMVndVNa9Xb+4mhmd6XzZz8tbY9A2FvS+Ghrn9i17XFfE+T4nDWCu0tt41X3T2enLXUs8WdtRdl+ZHZo129E7cIoP4xqUlnhnjFqdhbEXHdDe7OXjYG02kVav9z6KSW5q00aWM0b5bK2Vw41RMZ9Rj7NZueRVkYo3kGzTPadaSq/r3n6lIh2Xi5IsL7ejrbautPMgac3bVStlzKz6Od0Eyn5MRCBuFunZvC7j8wJtmsfNwiom++lZN1+r1urrxcnuTR0j6xZ20k4azrZ0F6NpWXnJVsWWW5ksMy3QrlL8qQXsNrVCCnbdvDUUUs582Ggbs04X1LbL0Wi+yUBl2sO2jLuD8REJ4z0uGpt8LV3WznRo//iIFcMDofWTvTndUKX/dwmZdPd9kDoo+qJzqvTlqtpsFstZvDJhKbkBtLWSRr+ZKgmTQitLbPlkF4dDoUE/YKlUh8MtPeALpYvGdICoq17NM6bEGnqHaNKwSd8cCXK9qe6LlXJD3WTyBt0DVdz+bpgzdirZVTb1k3nK9eZJedCV1yjVai705NGij98ZdNjMd/dDWac0wKR5spOzZrdEJhUsy3bK7ax2561VH8/PrYVu1VONA22wqUmj32tklsWhlXJawtBAebOeyRqtRru7np7xEtibfq54gqlkuXki9clMn+STzbY+1crdqjpLZs1eWx1MO5djlpurqDVvFw4Hi0zTUinTrC426mbaq+fdZGsHV8BRYS3Zbq12TbdI52c5i75yO1NfntLp2SqV3k0WLbTQK/PO+pA3i+VRI9vN9Ohec4+EuUAweSjkxl17tNJWDcHJOeCE7Y1Wh8pOmtTBsnK00ySfS5fVHR1JJpX25+iX0xv9Ze4kgmK+ly4nT2r/gBrZSb6wLcJDsjzR5qNu0u6s96Jk9/trZ7IXBacygDhj7My693xLr8R18nJ5h2Vgtov7c7aTn+XS086R1Gc4LYgneyctS+pYm9cb+X4qeZq085kWrRxN+voNa7Zqb3C6nz4Za7Hctiz7WJh+1n76zPtxh/5tjnbN6ibVTbeHi30jQ5S+65Xq9nZQbArlcqUtusKuUs442hBuDCVT6WKVwk/Z2dwZzdVJQ1T0/rLTcnpQ6pkOma3w1BuBNs3UFpZ2xzMcVuaHtTGr25vJHFV3g4pSUjd9O5NftBfro1IdFg+VPFq0lbpI68DK7ZW6c9A0LUEfVEzSy43yUm21GLSFhXaunuquqVo7Oz1MpUaqkitOvRFpbduk5w77pUy/bXZ7XbeezDmFrd1p5hbTVL/VnR/bo8F0XZmt+g06nekUp8KuJLfKvTOybYN2WVve7JfVudimRzKPenZRIF5X7DdG7f58SIfDAejBUaZvkFk9VTFO3iFIzZoF3BVpyZT3k1U31S2PllWV9hTjqJEiMXLNbc4hcqVRW5s7bzo0GOMtHNut/vRc7HVraip3RGhSPWFJG/faoKwJXQktNdVd2Mtyebqp0HdsoP50n6OzTK8YHJx5E7ltZVXprmjHs9vLc7HbS1YnsCTiE1plhVRWyOaMXgWivX6U0r3ebqlUuzhfWqaGmfyqT7JYVXpJbzIy3Z+avfVpm4FLLdns9/ejVq9naYJjJftb/WB6FXVVy5jbsbBJJ5X9VGxhp0Wr3B5qxz0thic7X1sMJZQ6Do7Z/Rgqx1k+t8n39i1BK5wALrdq6Q5UD9Vym3SVg2sNUvV8q21K8tjKpw4Tc9GeeaW7LbiC6dBJX8rKLtr5Ur6B2wqp7dRDsjazib4eDUXLIcmxC7rZ81BZTrtGZ7UudkZ9ZTrwysCW6PsNWVVGbSlbXs29KRix0/VF8bCsG9rOMTNp6ZxJbe2POV2Dji+AttQ57gz7oFg6DVf7an5h13Pilha7XbZpT6TJFrbLtK0dqkRfWJtaowvSLefkNQi4zqhaftYGO5ce0u5pdKx+PK60qGdzlR1Ux/3M8dhLb13dMOxya9yTFIg/JkDb+aA92sDRPltopbuFQe2sKZU2nbVuDNOBA8eFncnQWqCinj3mV4OCTIehYrur2bqSnRza7Ybp1QQTiP3cuupVPVAoi0vap2hrP6urQQUo5bZXZFPJg77EQ3muaPmTKbbmR3AgKpqVe3ngTUe9qa03W84Pa2alvvBWRU2lk25jbOQbyWOmPxtXBmYR1Og0rTLoo1pfnO7X8JSzDvOK9xCyTmfpQ3pgMhjBsiuMz6P0x3MNB8l5WhyXd64zhqTXKyzrmyRtE0JOPA97GTg0rOHGJCjfbQ7G2+x2QoNdfUgroeboZaDvTiVLl/IFXK8ONvPuNtVpEbwZWN0U3UfQ8pkUhvp6jkcCqo68ieASHXXH+zgN86yq6059aKWbxJkfrG7luOruC2QynA6SXUAb5bGhLrRjMzc3mxllTVajNv0IB6ks7OpFZ5OZz+ctopxzyX6+l2pUBrNyxzpRplZrQuGldoViZ0Aj3nQnOtCZtppg3zHyH9sW06lW0SuSNdtVLKu0E4ZiOlvbGPRzNzaF6lGrS3QMk6pt82wcF+10udzfD067TkOdnWt9DA+DcUvplI02pZ13RidZs0ez7U6eNF0NZI+tgQmznfQy2+3a+uUzn4br0lHom+6YHpdaZa4JdB63O+JMvpj9fAY4k1PjPt6ZC9p083NNE7SzncnNYG+nCDNkVcvN43BijqeTZKHT2c0OpUz+1PAOSRdmRupB6dZ6NIosBpvZaXzUU/TNZUeoXinY3uCP5mVBbjl076FjO+nFOO86MsZlK+kssiOvX5fU9ToHliNhO8i1+qaZQaVN2tZHk8FE7B+VrvA5ShbwOrOoHmUaZFQ49qb6u0kLdeuwr41WtPiUvKHM8OaARnEqIlmtC0rhfBn7TW/uN6w7y2kyWS7tMt6wJkoTuLa3NP0YsjwqTnZeraOhIgfrk8zn49bSmk4sC9vRcVXub+2ijgbbXbuf7O5Rbryp7ES9vcZF96NYnPfeExRGhYLkKnitluu0/UlTcDzBXbrv1ZpiR0DjrZqfWye4HB9FaaHQkTVnlk7d7dKuGWhbKuB9t9BLeSFy5rR29vq80L1ZgtkmQ5A6aouiUVdoClk2pUVzvirXFxbOe8fQe+90XgM1Onq7dsGrFyg/GtuibO4P2mGTmzlawdk3Cxa04cQdSYtBXhlJco4Og+r6UPNGHfocWQWtB2jfIZNUy8WlIbGOE73WzZfHi4ajyl4GWmDSn6f23k97a57EqoQKR3law065mnbE5GkrNTPVQzWddvV6ITtZ55qHaWtO4apTk0BLLaRW8qSX09VSZmB7PbJaSc9cTezYG2ko1tS+aS8rLXycnrxKCPZJXDr3Thu5XVHna1QCJk3MZRNkXSK0BbMz3fX03trLzpNFG+gp3DXKKYuMxmDh7jb6sTckSzrvnMzb6fWgW7UNMGvK9nRvC2Sim7TcWE56OZ+OSiq0N3ajWxDtVs0DajeK/Y+QXiVr1B+7dD9PVfQq8nFWOez1hXDsFPrDrDdF3pmlw6k/cseqOJh4Mw1nIzY23g+uN3nScXVeVLtGrZLT7GlRmx3spd4ddo1Kpaq2suaqVjTa58IyVRqdBwWvLIMBgB4QMN5s6x9mJ2vmOsv2rrlKp3/K1sHrNQDXL+K6qrsbTYd+fd/DramD6OtbvL4n616dKkI+jNrUop/j1tqRzc663vGfT9eZ9KQ/to+/7qQbpp9i9de2dJUmQqGtO8b0dkcy8/5tu7uJ/vFK/xBh+t/t//yD09rkY5fDf7f/+4+3cPHnG/roY96enw98kooUhXtvcu9t69ryzO2dmr1sUi/SUKNvRHde2MSVMZmYv1PHzqcvvvjEH2kZwnX580vLkBQxIT87XHhdIPC2aVy/Jup77eLVaSqhtouGs1vTgwqFjq5qc0rl0kDcuwZi/6UNRLqePezfQERZSlxPz/v5NvLqzJlQ2whl7/wXav+FH41A0OZ0KqOb27+vPSAFg8T9BW3y9UtLb+vFtdzftgX0I23h1Zk64bcFoP0X/O1tAQMCn9qCmHhx3tbr5gAT11+pfq9FvDo/KOwW8dvSMHZ+tYQ4NZAtvdNYzy7v4EeaC4HPpQOHWTpenawUfkPhdUMWn+tGqA3h1UlS/7IhoG81BEqG3nnWHYv+43jndnzOOp1L6/j7mgRSEvJ9mwCC9GJiAaUEQM/tQlS+Tj35VtMAr07AikjTgH9n0yAiSFyvyvrVNJ4Fxu+bxk+0i1cnf921i385ElweTIfZ9f/PwPP5LMba3rk3TeLz2eI43fiRlgKxkrgXXTJBby4peDnK/EhqBa9OTwt1uvHYPP6WdiBfv4PsqxU8DyQ0ulz79DfJe6cLWd5x/7ovTz/ZvGpNdG+P/wc=</diagram></mxfile>
|
2109.14960/main_diagram/main_diagram.pdf
ADDED
|
Binary file (77.1 kB). View file
|
|
|
2109.14960/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,23 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Recent progress in neural networks (NN) in various tasks highly depends on its over-parameterization, such as classification [@huang2019gpipe; @zagoruyko2016wide], language understanding [@floridi2020gpt; @devlin2019bert], and self-supervised learning [@he2020momentum; @chen2020big]. This leads to extensive computational cost and even causes environmental issues [@patterson2021carbon]. Therefore, neural network compression techniques have received increasing attention, such as knowledge distillation [@hinton2015distilling; @romero2014fitnets; @xu2017training] and pruning [@lecun1990optimal; @han2015deep; @frankle2019lottery; @li2016pruning].
|
| 4 |
+
|
| 5 |
+
Knowledge distillation (KD) [@hinton2015distilling] is a model compression tool that transfers the features from a cumbersome network to a smaller network. At first glance, a powerful teacher with higher accuracy may show better distillation results; however, Cho and Hariharan [@cho2019efficacy] showed that the less-trained teacher teaches better when the student network does not have enough capability. Lately, a line of works has proposed distillation schemes that focus on a "student-friendly" teacher, which provides more transferrable knowledge to the student network with limited capacity [@park2021learning; @Mirzadeh2020ImprovedKD].
|
| 6 |
+
|
| 7 |
+
On the other hand, network pruning [@lecun1990optimal] is another network compression technique that effectively removes networks' weights or neurons while maintaining accuracy. Since pruning simplifies the neural network, we naturally conjecture that the pruned teacher provides student-friendly knowledge that is easier to transfer. This intuition leads us to our main question: *can pruning boost the performance of knowledge distillation?*
|
| 8 |
+
|
| 9 |
+
To answer this question, we propose a new framework, "prune, then distill," consisting of three steps: 1) train the (teacher) network, 2) prune the (teacher) network, and 3) distill the pruned network to the smaller (student) network. We examine several simple experiments to verify the proposed idea that compares the test accuracy of student networks with and without (unstructured) pruning on the teachers' side. More precisely, We conduct three experiments: 1) distill VGG19 [@simonyan2014very] to VGG11, 2) distill VGG19 and ResNet18 [@he2016deep] to itself (self distillation), and 3) distill ResNet18 to VGG16 and MobileNetV2 [@sandler2018mobilenetv2]. In all three cases, we observe that the student learned from the pruned teacher generally outperforms the student learned from an unpruned teacher.
|
| 10 |
+
|
| 11 |
+
We then provide theoretical support to answer why the pruned teacher is better in distillation. Knowledge distillation can be viewed as a label smoothing regularization (LSR) [@yuan2020revisiting; @zhou2021rethinking], which regularizes training by providing a smoother label. We find that a teacher trained with regularization provides a smoother label than the original teacher. This implies that the distillation with a regularized teacher is equivalent to LSR with smoother labels. Since pruning can be viewed as a regularized model with a sparsity-inducing regularizer [@lejeune2021flip], we conclude that the pruned teacher regularizes the distillation process.
|
| 12 |
+
|
| 13 |
+
Based on the observation that pruned teacher provides a better knowledge in distillation, we then suggest a novel network compression scheme. When a cumbersome network is provided, we want to compress the network by applying the "prune, then distill" strategy. However, since the distillation transfers knowledge to a *given* student network, the student network architecture design is required. The main idea of student network construction is matching the teacher and the student layerwise. We propose a student network with the same depth but fewer neurons so that the number of weights per layer matches the number of nonzero weights of the pruned network in the corresponding layer. We evaluate the proposed compression scheme with extensive experiments.
|
| 14 |
+
|
| 15 |
+
We summarize our contributions as:
|
| 16 |
+
|
| 17 |
+
- We propose a novel framework, "prune, then distill," that prunes teacher networks before distillation.
|
| 18 |
+
|
| 19 |
+
- We examine experiments that verify unstructured pruning on the teacher can boost the performance of knowledge distillation.
|
| 20 |
+
|
| 21 |
+
- We also provide a theoretical analysis that the distillation from a pruned teacher is effectively a label smoothing regularization with smoother labels.
|
| 22 |
+
|
| 23 |
+
- We propose a novel network compression that constructs the student network based on the pruned teacher, then apply the "prune, then distill" strategy.
|
2110.03262/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2021-11-14T21:40:05.461Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36" version="15.7.3" etag="P-0AzOcexF0Ay_nnnRkL" type="google"><diagram id="LkrWUUJkwKz_yN0zKHml">7Vtbc6M2FP41nmkfNiMhEOYxceLdzmS7mWZ32n2UQcY0GLkgr+3++gqQuAknJAHjus3s7Igjict3Pp2b5AmarfcfY7JZfWYeDScG8PYTdDsxjKmNxP+p4JALoGWDXOLHgSdlpeAx+JtKoRq2DTya1AZyxkIebOpCl0URdXlNRuKY7erDliysP3VDfPlEUAoeXRJSbdjvgcdX8rusyuhPNPBX6skQyJ41UYOlIFkRj+0qInQ3QbOYMZ631vsZDVPwFC75vPmR3uLFYhrxLhOMfMIPEm7lt8n34gf1sTHbRh5Nx8MJutmtAk4fN8RNe3dCvUK24utQdi+DMJyxkMXZXLRcLg3XFXKPJKviHgmP2VOBnFFIKhM9vMAWFj3y/WjM6f7oN8ICOUE5ytaUxwcxRE5wTDOfIun2ATvTXLArlYcQzmWrquKwVBORhPGLm5eYioaEtR1iNDTEFL8NYttZCF72BDGqQ2yhNohtHWLDnL4fYlNDlHpiwcpLFvMV81lEwrtSelPHvBxzz9hGCv+knB+k9SFbzup6oPuA/yHaQLa/p+0rS17d7itdtwd5kXAS8+vUAgmBG5IkCVwlngehunX+Oek31PSRsG3sSpFkq5jo0wLKzmqLaUh48KN++zYNZFPFC5NDZcCGBRFPKnd+SAUlGyBQ60bRwXRg3Qx1mAFemAHxszNEI3/vkkEFAJ1IZY1KqoJI3ys97aQSyo4P+SQAHCXI6WjalhKUk7Or2uwHGgcCGhq/jqc08tSgiEU0l8h+8GYeI9A/j7taEtyzse7BtkJgN9aGpXxKxbhCFRtVjSu23o+IfT7LAHRcBrUlUKM/rvPf6rYAJLer7Af9sH+qsx92J0jv7J+eI/uxc2XV+Y9UoF3hv9NCf6uH6M35n/7D0V/ZrKH5/8YoxmjkDQi/EJNA+PyMd8ckCrBzWqGO7Yy3PBVbngNE5NibtOluF/RlRBY5fPeLQkDcJz8D9cuWh0HK9Ezukfjpi7hNwA957GXVhUYmbcnYPEKnS7ctJcPulC6Wac8T5e5KLiSXrdPoK2vHjAvKs0hcOj3lbhA4DdaaKmGuKBC3KHDahwL1CoQwRokwlxMDk3WqnGiRbLJPxSFP9RGLls8zifgHPjNhA3JQmpoXoPC6duuQS7tV1Y8UkTDwU5BdAWJqFW9SiAOXhNeyYx14XniMTuUSHExHCFmajswWHRl96AiN6QRVu0t+XThBWE+DUONvxJyoVqbp4iMt3UeqqGSMGBGOW2x5RUTURxDTa1Qlbi0/0Hy77geLj9rCmUbiieyGNclfTM4rOfTqUMsAzRB//FDLOr9QC8JmmeyksZZeHJlvsxHEzd3v3AtIyHzRK54eJJzJB1ymT4bTJmlP6ZNtTRmPwoIGkX/BiAMwZhSkV0fuIpd5ApgLRtzBzeqLCfTsYDDMRy2/vCbyfH10WA4qQ46OMaExQIX8mGduxgBW05X0FgM0n/TiptHRd+srBjD0csu1dHWgSE/z/4XktnR+DywM3PTrf6V8x+Kn5JJNhA2vjIaNsAC80i0zBFMl7dtQGHod6NvGI5xeLvDanruJu5VtesFbD4bPdMv9SAZ39dK+6GvzdFUsruVqI+5kGmdUtDHepqI+qjaNUzGndNIthEDGiIQwNROpMUSVyjcxc2mSvJzUHiuO92HfbFi3b9jsluv2cWjL0LP/j2Sduva7yE8/83LditM4V4MNHfah4v2W8xcq4LpYvHEzdjLh6dy4XkX4xgUqJHIvmOKwqJ48d1xxMI63nbLIt7aWLMtvSszxX1umOj4kmdO9FgMg3uzLTrUddv/Lx09fMwv1I4hZtE7RkDcW75TfW22cnaVqW0Kud1boAGpWL/AUn07TzlFNJxsStWq6dKgf3BzpVOGxv/jJsKws7RSPBYZplG1g/qyT4fZzRff5047oXvn87ToUtpZVdXhPFjR8YEkgt5wXjHO2FgPCtOOmeNX6iW/x18IDzhqhA8sDhllxRh9Mjp8f17bKQfbXF08cQ+NJS2Ed2W3pax+74EgvNBzV02jHGKrnEGoHFJpKm7q0XWmLqWVm22OseCAe6lRDpdhQJMh2i04HOteAoKbA0dKvtjzqeDImLvSNy0tM0FQydi476+p9xnAXDwLgKN3J+Pf7jPl8bsxmA0YQLZ7BbPMLPRy+Rm0/HuozVrz2//NRolZnMOHpNriQXpS59Lq1ftjDxPqKGirjRXph5ze6I7F3wYAf/anUSQDXSzoXDzi0jSbgg9V0xGX5c+F8j7P80TW6+wc=</diagram></mxfile>
|
2110.03262/main_diagram/main_diagram.pdf
ADDED
|
Binary file (28 kB). View file
|
|
|
2110.03262/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,89 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
<figure id="fig:lightquest" data-latex-placement="!h">
|
| 4 |
+
<img src="figures/light_messenger.png" />
|
| 5 |
+
<figcaption>The LIGHT questing environment presented as a 2 player game deployed in Messenger.</figcaption>
|
| 6 |
+
</figure>
|
| 7 |
+
|
| 8 |
+
A key hypothesis in the pursuit towards creating goal-driven natural language-based agents posits that interactivity and environment grounding is critical for effective language learning [@Barsalou2008; @bisk-etal-2020-experience; @ammanabrolu2021situated]. Text games provide a platform on which to interactively train agents that can both act and speak in a situated manner---producing language that is both goal-driven and contextually relevant. Agents in text games operate---perceiving, acting in, and speaking to others in a world---entirely using textual natural language. These games are structured generally as sequential decision making problems in the form of puzzles or quests that must be completed to advance in the game.
|
| 9 |
+
|
| 10 |
+
As seen in Figure [1](#fig:lightquest){reference-type="ref" reference="fig:lightquest"}, we focus on creating agents in LIGHT [@Urbanek2019], a large-scale crowdsourced fantasy text-adventure game, consisting of rich textual worlds---locations, objects, and characters with personas, and quests---motivations for each character. To complete these quests, an agent must: (1) maintain character via its persona; and (2) reason in a *partially observable* world about potential actions and utterances based on incomplete descriptions of the locations, objects, and other characters. This requires several human like competencies such as commonsense reasoning, dynamic natural language understanding, and operating in combinatorially sized language-based state-action spaces. Although recent work has provided evidence showing that interactive language learning via reinforcement learning (RL) in text games can be significantly more sample efficient than static supervised learning [@Ammanabrolu2021] when creating goal-driven natural language agents, their ability to robustly generalize to novel scenarios is limited.
|
| 11 |
+
|
| 12 |
+
In sequential decision making problems in particular, this generalization gap is the result of an agent simply memorizing trajectories, e.g. the sequence of actions and dialogues required to finish a game, and thus being unable to react in novel scenarios---i.e. the agent learns from the head the training data and simply memorizes the long tail. One way of decreasing this generalization gap is by training agents on procedurally generated environments---wherein the agent learns a family of parametrized tasks with a significantly larger state-action spaces than singular environments, thus effectively making the memorization of trajectories impossible [@Justesen2018; @Cobbe2020]. Drawing inspiration from all of these ideas, we create a method that *learns* to create a training curriculum of increasingly more difficult novel procedurally generated environments.
|
| 13 |
+
|
| 14 |
+
Our contributions are threefold: (1) We present a method of parametrizing and generating a curriculum of environments in text games; (2) We show how to effectively train reinforcement learning agents on this curriculum; and (3) Provide an experimental study showing that our method enables significantly better generalization than those training on singular environments.
|
| 15 |
+
|
| 16 |
+
{#fig:pcgcurriculum width=".9\\linewidth"}
|
| 17 |
+
|
| 18 |
+
This section describes our procedural generation pipeline as seen in Figure [2](#fig:pcgcurriculum){reference-type="ref" reference="fig:pcgcurriculum"}, starting with world and quest generation, followed by aligning both of them. There are two main kinds of models that we use for the different modules in this pipeline: retrieval and generative.
|
| 19 |
+
|
| 20 |
+
**The LIGHT Questing Environment.** The LIGHT game environment [@Urbanek2019][^1] is a multi-user fantasy text-adventure game consisting of a rich, diverse set of 1775 characters, 663 locations, and 3462 objects. Characters are able to perform templated actions to interact with both objects and characters, and can speak to other characters through free form text dialogues. Actions in text games generally consist of verb phrases (VP) followed optionally by prepositional phrases (VP PP). For example, *get OBJ, put OBJ, give OBJ to CHAR*, etc.. These actions change the state of the world which is expressed through text descriptions.
|
| 21 |
+
|
| 22 |
+
Quests in LIGHT [@Ammanabrolu2021] take the form of a short motivation and goal action that is required reach the world state required to finish the game. For example, if the short motivation is *"Your motivation is to acquire a sword"*, then the corresponding goal state would be for the character to have a sword in their inventory and goal action would be *get sword*. This environment also contains a set of human expert demonstration of people speaking and acting in character while playing the quests mentioned above. Further details are found in Appendix [8.1](#app:lightdetails){reference-type="ref" reference="app:lightdetails"}.
|
| 23 |
+
|
| 24 |
+
**World Retrieval.** The first step of the pipeline involves choosing an initial character who will perform the quest. For this, we uniformly randomly sample from the set of characters found in the LIGHT-Quest training set. The corresponding character information includes a name and a description of the character's persona. Given this character information, we further retrieve the location where the character is most likely to be found.
|
| 25 |
+
|
| 26 |
+
Retrieval models are trained to return the most highly correlated output for a given input in the dataset. For example, a retrieval model can be asked to return the most likely character that can be found at a particular location. These models compare a human annotated gold standard label with negative candidates drawn from the dataset. The negative candidates provide noise that the model must filter out in order to learn representations that let it best predict the gold label. These models are trained via a ranking loss that maximizes the scores of the gold label while simultaneously minimizing negative candidate score. At test time, the highest ranked candidate based on the score is selected as the model prediction.
|
| 27 |
+
|
| 28 |
+
Specifically, we use a retrieval-based ranker model that checks for similarity of StarSpace [@Wu2018] embeddings. Our choice of model is influenced by @Fan2019 who report state-of-the-art retrieval performance for locations in LIGHT using this model. The overall ranker model first trains a randomly initialized StarSpace embedding model that is designed to correlate characters with the locations they are found in. It learns a single bag-of-words embedding that takes into account all the individual words contained within the input---encoding character and location information as well as the previously mentioned negative retrieval candidates. The rest of the training is similar to other retrieval models described earlier. The retrieved location information consists of a location name as well as a description of the location.
|
| 29 |
+
|
| 30 |
+
**Quest Generation.** The quest is now generated using the existing character and location information. The generation-based models used in this pipeline are trained to return the most likely output sequence given an input sequence. Given a target sequence $Y=\{y_1,...,y_M\}$ and some input context vector via the encoders $\mathbf{X}$. These models use autoregressive decoding techniques that factor the distribution over the target sequence into a chain of conditional probabilities with a causal left to right structure as $P(Y|\mathbf{X};\theta) =\prod_{i=1}^{M+1}p(y_i|y_{0:i-1},\mathbf{X};\theta)$ where $\theta$ represents the current network parameters. At test time, a special start-of-sequence token is provided to the model which then proceeds to decode the rest of the output sequence using beam search.
|
| 31 |
+
|
| 32 |
+
We train two BART [@lewis-etal-2020-bart] models that encodes input information via a bidirectional transformer encoder and decodes autoregressively: the first takes as input character and location information and produces a short motivation (Section [2](#sec:lightprocgen){reference-type="ref" reference="sec:lightprocgen"}); the second takes as input character, location information, short motivation and produces the sequence of LIGHT game engine executable actions needed to achieve the motivation. This sequence of actions is provided by the human expert demonstrations as mentioned in Section [2](#sec:lightprocgen){reference-type="ref" reference="sec:lightprocgen"}.
|
| 33 |
+
|
| 34 |
+
At this stage, the environment contains a motivated main character to perform a quest and a location for them to start in. We now focus on aligning the world with the quest to ensure that the quest is playable and achievable. Intuitively, to ensure that a quest is achievable, the world needs to contain all of the entities---locations, characters, and objects---mentioned within the quest.
|
| 35 |
+
|
| 36 |
+
To this end, the alignment process involves training three BERT-based [@Devlin2018] biencoder retrieval models to retrieve the most likely characters, locations, and objects required flesh the environment out and make the quest achievable. We use the same biencoder architecture proposed by @Urbanek2019 which encodes context using one transformer and candidates with another---scoring candidates via inner product between the two encoded vectors. The character retrieval model is conditioned on the initial character, quest, and location---producing additional characters required to complete the world.
|
| 37 |
+
|
| 38 |
+
We follow the setup in @Ammanabrolu2021 and restrict worlds to only contains 2 characters at maximum but note that this method is extendable to greater numbers of characters. Similarly, the location retrieval model is also conditioned on the same things---producing, in this case, 4 neighbors to the initial location (resulting in worlds that are 5 locations large). These locations are connected to the initial location and a character can move between them by using commands such as *go west*, *go up* etc.. Once these characters and locations are added to the world, the object retrieval model predicts the set of objects that are required to be distributed for each location given all the character information present in it. The final game environment instance is complete once this object set has been added.
|
| 39 |
+
|
| 40 |
+
**Generating Curriculums.** We generate curriculums by building off of our procedural LIGHT game instance generation pipeline. We make the observation that the original quests in LIGHT are heavily skewed towards certain quest types---with the majority involving goals and short motivations that contain objectives related to getting and object, and hitting or hugging another character (Figure [3](#fig:originallightdistr){reference-type="ref" reference="fig:originallightdistr"}). We further note that the first verb in the short motivation forms the basis of the quest for that agent.
|
| 41 |
+
|
| 42 |
+
Actions in LIGHT, and more generally in text games, are executed in the game engines on the basis of verbs---engine subroutines are linked to verbs with nouns forming arguments---and as such are primarily responsible for changing the state of the world. For example, *get sword* invokes the *get* subroutine that places an object, in this case a sword, in the character's surrounding into their inventory. As the quest is generated early in the pipeline, with the world and the rest of the components being conditioned on it, we can say that the first verb in the short motivation is an important dimension along which we can assess the distribution of individual LIGHT game instances. Thus, concretely, the verb counts from the short motivation aggregated over a set of quests represents the primary dimension along which we measure the distribution of quests.
|
| 43 |
+
|
| 44 |
+
**Parametrizing Curriculum Difficulty.** Given the relative imbalance of this multinomial distribution, as seen in Figure [3](#fig:originallightdistr){reference-type="ref" reference="fig:originallightdistr"}, we hypothesize that a LIGHT agent only learns to do well on certain types of objectives and not others---memorizing trajectories for less seen quest types, i.e. those found in the tail of the distribution. Preliminary evidence for this hypothesis is also seen in @Prabhumoye2020, where they show a positive correlation between the number of instances of a particular type of quest during training and the final test goal-achievement performance. Based on these observations and our initial hypothesis, we use this particular dimension to *parametrize curriculum difficulty* for training LIGHT agents---quest types that are rarer in the initial training data will be harder for the agent to generalize to in a zero-shot setting.
|
| 45 |
+
|
| 46 |
+
{#fig:originallightdistr width=".7\\linewidth"}
|
| 47 |
+
|
| 48 |
+
{#fig:originallightdistrnoun width=".7\\linewidth"}
|
| 49 |
+
|
| 50 |
+
Intuitively, we seek to create curriculums that contain a diverse set of game instances with quest types that are not often found in the initial training data. Our earlier observations let us hypothesize that this will enable the LIGHT agent to more effectively learn from rare instances of quests as opposed to memorizing the corresponding trajectories. To this end, the generated curriculums each consist of a pool of quests with steadily decreasing quest type imbalance. In our case, this imply that the flatness of the multinomial distribution increases until it tends towards being uniform with respect to the categorical quest type variable. This is done by running the procedural generation pipeline iteratively until the number of instances for the highest count quest type is within $n$ of the lowest count quest type. The total number of additional generated instances is held fixed across curriculums, only the task distribution of quest types within each curriculum changes.
|
| 51 |
+
|
| 52 |
+
![Architecture and training pipeline for the LIGHT RL Agent [@Ammanabrolu2021].](figures/mini_quest.pdf){#fig:lightrlcurrmini width=".7\\linewidth"}
|
| 53 |
+
|
| 54 |
+
<figure>
|
| 55 |
+
<div class="minipage">
|
| 56 |
+
<embed src="figures/Short_Motivation_Generation_without_Distribution_Tuning_verb_normalized.pdf" />
|
| 57 |
+
</div>
|
| 58 |
+
<div class="minipage">
|
| 59 |
+
<embed src="figures/Procedural_Short_Motivation_Generation,_n=64_verb_normalized.pdf" />
|
| 60 |
+
</div>
|
| 61 |
+
<div class="minipage">
|
| 62 |
+
<embed src="figures/Procedural_Short_Motivation_Generation,_n=16_verb_normalized.pdf" />
|
| 63 |
+
</div>
|
| 64 |
+
<div class="minipage">
|
| 65 |
+
<embed src="figures/Procedural_Short_Motivation_Generation,_n=2_verb_normalized.pdf" />
|
| 66 |
+
</div>
|
| 67 |
+
</figure>
|
| 68 |
+
|
| 69 |
+
<figure id="fig:generatedcurrnoun">
|
| 70 |
+
<div class="minipage">
|
| 71 |
+
<embed src="figures/Short_Motivation_Generation_without_Distribution_Tuning_noun_normalized_top_20.pdf" />
|
| 72 |
+
</div>
|
| 73 |
+
<div class="minipage">
|
| 74 |
+
<embed src="figures/Procedural_Short_Motivation_Generation,_n=64_noun_normalized_top_20.pdf" />
|
| 75 |
+
</div>
|
| 76 |
+
<div class="minipage">
|
| 77 |
+
<embed src="figures/Procedural_Short_Motivation_Generation,_n=16_noun_normalized_top_20.pdf" />
|
| 78 |
+
</div>
|
| 79 |
+
<div class="minipage">
|
| 80 |
+
<embed src="figures/Procedural_Short_Motivation_Generation,_n=2_noun_normalized_top_20.pdf" />
|
| 81 |
+
</div>
|
| 82 |
+
<figcaption>Top-20 distribution of verbs (top) and nouns (bottom) in the short motivation of the curriculum of quests starting from the original generated curriculum on the left to the flattened, <span><strong>generated</strong></span> curriculum on the right as a function of <span class="math inline"><em>n</em></span> (Section <a href="#sec:curriculumgen" data-reference-type="ref" data-reference="sec:curriculumgen">3</a>). The y-axis of the reflects normalized overall count in the pool of quests.</figcaption>
|
| 83 |
+
</figure>
|
| 84 |
+
|
| 85 |
+
Figure [6](#fig:generatedcurrnoun){reference-type="ref" reference="fig:generatedcurrnoun"} shows that decreasing $n$ has the intended effect of decreasing imbalance with respect to verb types. Generating using this pipeline has the added effect of increasing diversity within the pool of each available quest type. One measure of diversity within the pool of a single quest type is the types of nouns contained within the short motivations---these generally correspond to the characters, locations, and objects mentioned. Figure [6](#fig:generatedcurrnoun){reference-type="ref" reference="fig:generatedcurrnoun"} shows that decreasing imbalance in the verb types for a short motivation also results in decreasing imbalance in noun types, once again corresponding to decreasing $n$. Short motivation generation is one of the first steps in the pipeline, i.e. the rest of the pipeline is conditioned on it, and as such increasing the flatness of the distribution there has the effects of increasing distribution for downstream components.
|
| 86 |
+
|
| 87 |
+
**A2C Curriculum Training.** Overall training is done via A2C [@Mnih2016] a policy gradient algorithm that maximizes long-term expected reward by comparing the advantage $A(s_t,a^*_t)$ of taking an action $a_t$ in a state $s_t$ to the average value of taking any valid action as predicted by the critic $V(s_t)$. The setup and network architectures used are similar to @Ammanabrolu2021 and are summarized in Figure [5](#fig:lightrlcurrmini){reference-type="ref" reference="fig:lightrlcurrmini"}. At every step, the LIGHT agent receives as input the text describing the setting, the character's persona & motivation, and the full dialogue history. This is then encoded using a transformer based encoder and sent to the action and dialogue policy networks which output an action/dialogue utterance. These are then passed into the LIGHT environment which process them and returns rewards to be used by the agent.
|
| 88 |
+
|
| 89 |
+
**Rewards.** As seen in Figure [5](#fig:lightrlcurrmini){reference-type="ref" reference="fig:lightrlcurrmini"}, all actions, either those of the agent-in-training or the partner agent, are processed by the engine, checking for goal state completion---hence known as *act goals*. For example, if the LIGHT agent had the motivation to acquire a sword, the goal could be completed via a: *self act completion*: where the agent acquires a sword itself by picking it up, stealing it, convincing the partner to drop theirs so you can pick it up, etc. *partner act completion*: where the agent uses dialogue utterances to convince their partner to achieve the goal for them (e.g., by persuading the partner to give them the sword). The naturalness of the dialogue utterances is further rated by a learned Dungeon Master (DM), a transformer-based ranker model trained on human demonstrations to score how relevant the utterance is given the character's persona and motivation. Further training details are provided in Appendix [8.1](#app:lightdetails){reference-type="ref" reference="app:lightdetails"}.
|
2110.07310/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="Electron" modified="2021-04-19T05:53:58.310Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/14.5.1 Chrome/89.0.4389.82 Electron/12.0.1 Safari/537.36" etag="z2ofbex7McB2Uf3Q1o0s" version="14.5.1" type="device"><diagram id="BUs20fW9CeizmIZgXxSm" name="Page-1">7ZxLc6M4EIB/jY9xYUmAOU6cxz5md6fKU5u9TRFQgA0gF8hxMr9+pSAwSMoMJhg7sAeXQYiW3Z9odbfansFV8nybuZvwD+LjeAYM/3kGr2YALAwHsDfe8lK0OEuzaAiyyBed9g3r6Dsu7xSt28jHeaMjJSSm0abZ6JE0xR5ttLlZRnbNbg8kbo66cQMxorFvWHtujJVud5FPw6J1adZ6/4KjICxHXhjiSuKWnUVDHro+2dWa4PUMrjJCaHGUPK9wzJVX6qW47+aNq9UHy3BK29xwBx6jPLzIw/W/n73co58fvsUXgs6TG2/FF54BK2byLu/ZQcAPvmT4gmZulGK/upaVF69TjyHPygts9Oo+8a3pS6nKnGbksdIi+/6XIU1idrhgh0w7G94veQ74RJo/xGTnhW5G5xRnSZS6lGSs2y6MKF5vXI/33bGerO0hiuMVifl1Ngx0nNXq5qb6AHUFCZ094Yzi51qTUNgtJgmm2QvrUl517OIWMXuh5RTnu/1cAIboE9bmQTk/XDH9gkr0nhA7EJAOAAY1wCQ949T/xGc+O7uPiffItUuZJuVG1u8miisAvEvtvMbmTUViP8A/VGNNTQtLqCTDsUujp+bzpdOTEPeFRGzYCglEVgMJWoCmiJxsMw+Lu+rPgyQImD8RxBQSYKoIYmp0X2rdNrxD/vYHBrZ+nP0sKCTu50SlwO7TBB00TbzYzfPIOwL18jnQPRvvnAgASHpFsONEkAUBSdAbE6EvVuYEWJmoJ1ayoIFZWRNgVXpo72YlCxqYlT1+VnAB+2GlCBqY1XICrBCQWHV0XBRBAAzKylFYfQ2xgot52FRy8V/9/9JTT0mKJeddNLlxFKScMgOJudPP/fWIhWSfxIUk8n0+jDYayMg29TH//EY/MUAVwFXcxHndudVMG3CsEKAcrAYgwel2xASQc2YEFgqBnZuPGMBycWYA1DA4yLBLx4sALs7NCulCTMtN+Lcvc0F51aLPCI2HDjo3Oi2Cymb2rchNcn34bh5WymmvaDmDV2RfQfHOxXLdGHPL4sdcbwtnbvPBU0K9sD8WyGiuFnAJFBbQVFnAo7HQBY2HJFmv8KiTrMhs+rPQXijAQGnxh0myquu7omhNlrWZUK3pnUcga3EryWhIApK68fW+tXrMYvxANQ8ZJZt3RThC0xei/zsjGeQsm9auawrWAvDHgo4cycAJpN4sU97CWHaEZVrzZkIHln75ULhaGNLYvec7kC0spep8tF/rMpxH3937V1F82RJ7AEyueTkzr7isLSV5sZv5w1nS3kxaUNI+sBUziY7kYzz9nRjry8vf01s7QP4/YXDn3l7ojGTdAZyM9yfvEg7p/WnJgPGbNWDIIWnXLLUs6HiZTy0rNZvWNYzSzfU2RA8IcxxpvYaqCQKDmiA1FfZAiD9eUwPt0wWaegKHucof0tYgA8zlPTFJSFtroxElf56Dyw76tkilEz5mpNCWndnOSDWiWiLtDdhhRUEfEhhayoYPdXwCZUEADQtrAqU5prHsB5YiaGhYE0gQmFDeXesKSxY0NKwJFOeYlhxqdoUlCxoa1gSqc0ynpwpFRdDAMepCLc8ZdckHsuVyRXVHYtgoS80SjLvkwzTkwrYTEwBqpmHUJR8mfCM8OhkANdEw8pIP0zozK1Qu0f+XfLyuyedGR5cFmOR+DDLlvYFTo5lAzI9MecHomqCRBQ0cmQBdzH+uOzIWksI4uFSm+qA7MkANwse9I2OBc1sHJhBZq7UonX9NoREFJFEn35EBE/h5jAXs3pCqoloi7QAMweTxz1s/8Sn8HDzukq39W64vjWkWKrFG45q95uz1ib0YIGNVnLcoXtqbscXP1xk388QfK5jGTCnlvLlxHAirQRTLp5k0b3teqOl5gfLfEOrGsKyu7Lv4VktCG7I0SXDtfxMkrsTx9UcnAew2JDSP89FItNikPFFFOpqVFeloDhx+EmXYoxHh0pjR4fpqVKlnhLriMncXewEGpSpoUO6m/MSTQ8fi1SJm+YC8+sKFrOV54dJGLaqhqy0zRm0JGtbQSb8eeOdz09wbAc6pDV2LKuU1fxZSpjtgfMFZshXTs1vZ8ukCp3f6CvYcNN20ha1jB+a25uc6fcRPWny68EnC95UzAMavKddylAbTAif/oc1ZUNNFSDK1V5+fdfprSzdbOi1o0ERzq7mnpvsBQY/c2On+z82KwGr/F3Hw+j8=</diagram></mxfile>
|
2110.07310/main_diagram/main_diagram.pdf
ADDED
|
Binary file (19.2 kB). View file
|
|
|
2110.07310/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,84 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Aspect-based sentiment analysis (ABSA) is a finegrained sentiment analysis task that includes a number of subtasks, two of which are aspect category sentiment analysis (ACSA) and aspect category detection (ACD). Figure [1](#page-0-0) shows an example, where the input is "*The restaurant was expensive, but the menu was great*". ACD detects the aspect categories, such as *price* and *food*, and ACSA predicts the sentiment polarities toward each aspect category. In this work, we focus on these two tasks as well as the joint task that combines both.
|
| 4 |
+
|
| 5 |
+
Previous studies have investigated various methods that treat ACSA and ACD as classification tasks, learning aspect-specific sentence representations [\(Wang et al.,](#page-10-0) [2016;](#page-10-0) [Ruder et al.,](#page-9-0) [2016\)](#page-9-0). Recently, pre-trained language models (PLM) have shown their effectiveness to this end [\(Jiang et al.,](#page-8-0) [2019\)](#page-8-0). The main idea is to make use of pre-trained models such as BERT [\(Devlin et al.,](#page-8-1) [2019a\)](#page-8-1) for representing an aspect-specific form of the input (e.g., by concatenating the aspect category to the end of
|
| 6 |
+
|
| 7 |
+
<span id="page-0-0"></span>
|
| 8 |
+
|
| 9 |
+
Figure 1: Example of aspect category detection (ACD) and aspect category sentiment analysis (ACSA).
|
| 10 |
+
|
| 11 |
+
the input sentence (Figure [3\(a\)\)](#page-2-0)), which provides useful semantic features for ACSA and ACD classifiers. Such methods have given highly competitive results [\(Sun et al.,](#page-9-1) [2019;](#page-9-1) [Li et al.,](#page-9-2) [2020b\)](#page-9-2).
|
| 12 |
+
|
| 13 |
+
The above classification models benefit from contextualized representations, which contain knowledge learned by pre-training over large data [\(Lin et al.,](#page-9-3) [2019\)](#page-9-3). However, their use of pre-trained knowledge can be viewed as indirect due to at least two reasons. First, the classification task is performed by using a neural network on top of pretrained representation, with separate network parameters. Second, the integration of aspect category makes the aspect-specific input representation not exactly a natural language sentence, which differs from the pre-training setting. Intuitively, more pre-trained knowledge could be leveraged by connecting pre-training and ACSA at the *task* level, rather than only at the *representation* level.
|
| 14 |
+
|
| 15 |
+
We investigate the above potentials by casting the sentiment classification tasks into language modelling tasks. In particular, as shown in Figure [2,](#page-1-0) both ACSA and ACD are transformed into sequence-to-sequence (seq2seq) tasks, where the encoder takes the input sentence and the decoder generates a natural language sentence. For ACD, the output follows a template stating whether the specific aspect is discussed (e.g., "*The* hcategory*\_*typei *category is discussed*"); for ACSA, the sentiment polarity of a specific aspect is stated (e.g., "*The sentiment polarity of* hgiven*\_*categoryi *is* hpolarity*\_*typei"). The setting corresponds closely to the denoising auto-
|
| 16 |
+
|
| 17 |
+
<span id="page-1-0"></span>
|
| 18 |
+
|
| 19 |
+
Figure 2: ACSA as a generation task.
|
| 20 |
+
|
| 21 |
+
encoder training scheme of BART [\(Lewis et al.,](#page-8-2) [2020\)](#page-8-2), which we use as the pre-trained model. Compared with classification-based methods, our method does not include more network parameters, and thus can potentially generalize better to new domains [\(Brown et al.,](#page-8-3) [2020;](#page-8-3) [Gao et al.,](#page-8-4) [2020\)](#page-8-4). Given a new domain with completely unseen aspect categories and sentiment labels, our method can be applied without changing output layer structure.
|
| 22 |
+
|
| 23 |
+
In addition to classification-based methods, we take masked language models (MLM) as a baseline also, for which a natural counterpart of our method is a mask-refilling task. As shown in Figure [3\(b\),](#page-2-1) different from our method, the output template is concatenated to the input, with the keyword being masked for prediction. This MLM task corresponds closely to BERT [\(Devlin et al.,](#page-8-1) [2019a\)](#page-8-1) pre-training. In comparison to this MLM method, a generation method can better learn the correlation between the input and output template as two related sequences, which has been demonstrated by the strong performance of BART for abstractive text summarization [\(Lewis et al.,](#page-8-2) [2020\)](#page-8-2).
|
| 24 |
+
|
| 25 |
+
Experimental results on three standard benchmarks datasets show that both generation and MLM methods outperform classification methods using the same pre-trained language models. Finally, generation methods give stronger performances than MLM methods, outperforming the previous stateof-the-art methods by a large margin. In addition, using the generation method, we show that jointly performing ACSA and ACD leads to better results than the traditional pipeline. To our knowledge, we are the first to employ a generative pre-trained language model to address an ACSA/ACD problem. We release our code at [https://github.](https://github.com/lgw863/ACSA-generation) [com/lgw863/ACSA-generation](https://github.com/lgw863/ACSA-generation).
|
| 26 |
+
|
| 27 |
+
# Method
|
| 28 |
+
|
| 29 |
+
Formally for ACD, the input is a sentence X = {x1, . . . , xn} = x1:n, where x<sup>i</sup> denotes the i-th word. For ACSA, a set of pre-identified aspect categories are also given. We introduce relevant pre-trained language models in [3.1,](#page-2-2) classification methods in Section [3.2,](#page-2-3) MLM methods in Section [3.3,](#page-3-0) and our generation method in Section [3.4.](#page-3-1)
|
| 30 |
+
|
| 31 |
+
We take BERT [\(Devlin et al.,](#page-8-1) [2019a\)](#page-8-1) and BART [\(Lewis et al.,](#page-8-2) [2020\)](#page-8-2) as the pre-trained language models. Both are built on the Transformer [\(Vaswani et al.,](#page-9-13) [2017\)](#page-9-13) architecture. BERT [\(Devlin](#page-8-1) [et al.,](#page-8-1) [2019a\)](#page-8-1) is an encoder stack of Transformer for masked text filling, where a model uses the context words to predict masked words. BART [\(Lewis et al.,](#page-8-2) [2020\)](#page-8-2) is a denoising auto-encoder seq2seq model pre-training for natural language generation. Its training applies document corruption such as randomly deleting tokens from the input and corrupting text with an arbitrary noising function. BART is trained to reconstruct the original text.
|
| 32 |
+
|
| 33 |
+
We use a multi-layer perceptrons network as the classifier model, which takes a representation vector as input. Both BERT and BART are considered as the encoders.
|
| 34 |
+
|
| 35 |
+
BERT Classification BERT adopts "*[CLS] input sentence [SEP] given\_category [SEP]*" as input. The final hidden state corresponding to "[CLS]" is used as the representation for classification.
|
| 36 |
+
|
| 37 |
+
BART Classification BART adopts "hSi *input sentence* h/Si *given\_category* h/Si" as input and predicts the sentiment polarity of the sentence towards the given category. The same input is fed into the encoder and decoder (see Figure [3\(a\)\)](#page-2-0). Formally, suppose that the query category is a, x<sup>0</sup> = hSi, xn+1 = h/Si, xn+2 = a, xn+3 = h/Si, then the input to BART is x0:n+3 = hSi x1, . . . , x<sup>n</sup> h/Si a h/Si. The output hidden vectors obtained by the BART encoder (ENCODER) and BART decoder (DECODER) are:
|
| 38 |
+
|
| 39 |
+
$$\mathbf{h}^{enc} = \text{Encoder}(x_{0:n+3})$$
|
| 40 |
+
|
| 41 |
+
$$\mathbf{h}_0 \dots \mathbf{h}_{n+3} = \text{Decoder}(\mathbf{h}^{enc}; x_{0:n+3})$$
|
| 42 |
+
|
| 43 |
+
The output vector $\mathbf{h}_{n+3}$ is then taken as the representation vector for classification.
|
| 44 |
+
|
| 45 |
+
Masked language models (MLM) (Devlin et al., 2019a) complete a given prompt by filling missing tokens. We refer to the template including a given category and MASK token together as a prompt. For sentiment analysis tasks, *BERT MLM* adopts the input sentence and the prompt as the model input and predicts the sentiment polarity label word towards the given category. For *BART MLM*, the same input is fed into the encoder and decoder, and the highest decoder prediction from label words of the MASK token is the predicted polarity label(see Figure 3(b)). We use the same template in the MLM method and generation method, following the template creation method in section 3.4.1.
|
| 46 |
+
|
| 47 |
+
We take both ACSA and ACD as language model ranking problems under a seq2seq framework (see Figure 3(c)). The target sequence $\mathbf{T}_{a_i,p_k}(\mathbf{T}_{a_i}) = \{t_1,\ldots,t_m\}$ is a template filled by the given category $a_i$ and the polarity type $p_k$ . We first introduce how to create templates in Section 3.4.1, and then show the inference and training details in Section 3.4.2 and Section 3.4.3, respectively.
|
| 48 |
+
|
| 49 |
+
For ACSA, we manually create templates containing one slot for the given\_category and another slot for the polarity\_type label. We set a category word set $\mathbf{A} = \{a_1, \dots, a_{|C|}\}$ , |C| is the category type size (e.g., $a_i$ ="price") and polarity type word set $\mathbf{P} = \{p_1, \dots, p_{|L|}\}$ , |L| is the polarity type size (e.g., $p_k$ ="positive"), and use words to define templates $\mathbf{T}_{a_i,p_k}$ (e.g. "The sentiment polarity of price is positive"). The template $\mathbf{T}$ is "The sentiment polarity of $\langle a_i \rangle$ is $\langle p_k \rangle$ ". For a given category $a_i$ , we can obtain a list of templates $\mathbf{T}_{\mathbf{a}_i} = [\mathbf{T}_{a_i,p_1}, \dots, \mathbf{T}_{a_i,p_{|L|}}]$ .
|
| 50 |
+
|
| 51 |
+
For ACD, we use $a_i$ to create a sentiment template $\mathbf{T}_{a_i}^+$ for an existing aspect category, and a none-category template $\mathbf{T}_{a_i}^-$ . $\mathbf{T}^+$ is "The $\langle a_i \rangle$ category is discussed" and $\mathbf{T}^-$ is "The $\langle a_i \rangle$ category is not discussed".
|
| 52 |
+
|
| 53 |
+
For ACSA, we first enumerate all possible polarities for the given category of the sentence $\mathbf{X}$ and fill them in the prepared templates, and then use the fine-tuned pre-trained generative language model to assign a score for each template $\mathbf{T}_{a_i,p_k} = \{t_1,\ldots,t_m\}$ , formulated as:
|
| 54 |
+
|
| 55 |
+
<span id="page-3-5"></span>
|
| 56 |
+
$$f(\mathbf{T}_{a_i,p_k}) = \sum_{c=1}^{m} \log P(t_c|t_{1:c-1}, \mathbf{X})$$
|
| 57 |
+
(1)
|
| 58 |
+
|
| 59 |
+
We calculate a score $f(\mathbf{T}_{a_i,p_k})$ for each possible polarity by employing the pre-trained generative language model (i.e., BART) to score the templates, and then choose the polarity of category $a_i$ with the largest score.
|
| 60 |
+
|
| 61 |
+
For ACD, we first create templates $\mathbf{T}_{a_i}^+$ and $\mathbf{T}_{a_i}^-$ for all possible categories of the sentence $\mathbf{X}$ , and then use the fine-tuned pre-trained generative language model to assign a score for each template $\mathbf{T}_{a_i} = \{t_1, \dots, t_m\}$ , in a similar way as Equation 1. Also, we decide whether the $a_i$ category is discussed or not in the input sentence according to the higher score between $\mathbf{T}_{a_i}^+$ and $\mathbf{T}_{a_i}^-$ .
|
| 62 |
+
|
| 63 |
+
For ACSA, suppose that the polarity type of $a_i$ is $p_k$ . We fill the given category $a_i$ and the polarity type $p_k$ into template $\mathbf{T}$ to create a gold target output $\mathbf{T}_{a_i,p_k}$ . Similarly for ACD, if the category of $a_i$ is discussed, the gold target $\mathbf{T}_{a_i}^+$ is obtained by filling $a_i$ into $\mathbf{T}^+$ , and otherwise is $\mathbf{T}_{a_i}^-$ .
|
| 64 |
+
|
| 65 |
+
For ACSA, we use all gold polarities in the training set to construct $(\mathbf{X}, \mathbf{T})$ pairs. For ACD, we use all gold categories in the training set to construct $(\mathbf{X}, \mathbf{T}^+)$ pairs, and additionally create negative samples $(\mathbf{X}, \mathbf{T}^-)$ by sampling all none existing categories in the input. Finally, we obtain $\{(\mathbf{X}, \mathbf{T})\} = \{(\mathbf{X}, \mathbf{T}^+) \cup (\mathbf{X}, \mathbf{T}^-)\}$
|
| 66 |
+
|
| 67 |
+
Given a sequence pair $(\mathbf{X}, \mathbf{T})$ , we feed the input $\mathbf{X} = x_{1:n}$ to the BART encoder, obtaining hidden representations of the sentence:
|
| 68 |
+
|
| 69 |
+
$$\mathbf{h}^{enc} = \text{ENCODER}(x_{1:n}) \tag{2}$$
|
| 70 |
+
|
| 71 |
+
At the c th step of the decoder, $\mathbf{h}^{enc}$ and previous output tokens $t_{1:c-1}$ are then as inputs, yielding a representation using attention (Vaswani et al., 2017)
|
| 72 |
+
|
| 73 |
+
$$\mathbf{h}_{c}^{dec} = \text{DECODER}(\mathbf{h}^{enc}, t_{1:c-1}) \tag{3}$$
|
| 74 |
+
|
| 75 |
+
The conditional probability of the word $t_c$ is defined as:
|
| 76 |
+
|
| 77 |
+
$$P(t_c|t_{1:c-1}, \mathbf{X}) = \text{SOFTMAX}(\mathbf{h}_c^{dec}\mathbf{W}_{lm} + \mathbf{b}_{lm}), \quad (4)$$
|
| 78 |
+
|
| 79 |
+
where $\mathbf{W}_{lm} \in \mathbb{R}^{d_h \times |\mathcal{V}|}$ and $\mathbf{b}_{lm} \in \mathbb{R}^{|\mathcal{V}|}$ , $|\mathcal{V}|$ represents the vocab size of pre-trained BART. The
|
| 80 |
+
|
| 81 |
+
cross-entropy between the decoder's output and the original template is used as the loss function:
|
| 82 |
+
|
| 83 |
+
$$\mathcal{L} = -\sum_{c=1}^{m} \log P(t_c|t_{1,c-1}, \mathbf{X})$$
|
| 84 |
+
(5)
|
2111.01177/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2021-11-01T15:24:05.770Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.54 Safari/537.36" version="15.6.3" etag="1x4tM3FSF6KWDimubqJf" type="google"><diagram id="bUncu6E_9kphXXDVC5Ep">7H1pc5vK1vWvufVWPVXXxTx81AASAol5/MYoEPMo4Ne/tO3keErik2P7OLmRHUdCQMPeq/ewenfzH3STj7vGreJjGYTZfxAoGP+Dbv+DLC8SWf4DW6a7LSRN3G04N0lwtwn+a4OazOH9Ruh+a58EYftox64ssy6pHm/0y6II/e7RNrdpyuvj3aIye9xq5Z7vW4T+2qD6bhY+281Mgi6+20rhD/beh8k5/tIyDN1/k7tfdr7f0MZuUF4fbEKZ/6Cbpiy7u3f5uAkzILwvcrk7jv3Gt18vrAmL7jUH3CticLP+/t7+gxDZcug6KpczLLJx/bsviLoHF7XWknwRPQKdwuvyVylzt/jrS3BH3ZQ9PgKc6b/trRJXyw4wXY0PjyDO4P/5S7vLtd41fbf9XkxfT4o0ZV8EIbh8aPn6GiddqFZ3F3ld0LZsi7s8Wz7BX48ewqYLx29KCP4q9wWwYZmHXTMtu9wfQFP4DUreHXUPVwzFbgiC/ut19+31Lyh8UW/8AAXE/Tb3Hnznr039pZ/lzb2KXlYX+oK6nsjnvAio+uZ93/cI18ueAvy18oAh/LlAEOgGx59JAcZfEAP9BmLAfiyG5YDFGoQ/hojbVncmIkpGAKvXYQZ9WUb38kDgG5gkMBLCKQpBcYRGn8tmkdj9lxh8u/NzUX1rn38iOvyTiw5Dbh7cL4Q/71v/luSITy45Ar15KDiKwj6N6MjPKrr7AwjqBqYe/Dy2biR2Az/qzp9GrtQvLVecunkoDoj4NHKlf2m5fl4T+iV+/0UFixA3D3z6IpXPg1gY/qUl+y2RfAbJvpQiPZVsEaxAWrl8KsoifCzNcEw668F7G6QuN/j9py2QD/Tlw/TlQ7FcpfXww4OjwMe/Drv99OW4uysLgy/p67fUtFx92Tf3md2XcLpzm3P4VZIva/OBQl4K7r9sa8LM7ZLh8VW8pIX7FqQyuc02vzAE5A1CPvx57I/pG+yhQyYen/7uzu7P+DD1fdIIAd+gxIOfJ84JuaEftfK4kTtZPWvkFkxfxfU6fL0ip/vF8UU+xxf6DWvxMfiCIeR7AKPxm0eB9BPdvxZgJP7YxqFP/DL1bpB6TX78a0PqBZNF/6uIQm+IR9bkse6/EZK9rclaXOx3fN8bwus1HMJ34fWTUPkZWL4ZvKhPDC+EuoEeBz4/hy8KvqEeRWlPDBh585DUetLIGwLsNVTLb2e/4G/wrp8h5PpRYP02Buy/MH7zuJUnt/GGEHsNJfXbQYz8VxEG3aAPE0CKeqR8FH0TF/kk6Pown/gaKu7Xw9PnsU8/CrCeuCb459DzfZZi8X+P3ey7oek1BOQvjqZ/lWD4IDQRNxhE//VDPGoER2+Qh2h6t3QQeQ3r+muj6VPnfh9hmj7KzyGv4Zl/vdzvV0n0PgRLFHbzaJTj/cCE/PaG6V+NuD8DmGDoaer4bmj6PYn0XyYE/74L+sUi8FeVmP3iYPrMEfhHgOnDhvheVXT3a4PpMwfgH4GlD4u//zE1/if+/uRQ+rjw+/fkwH+V8PsjsPSB0ffvT4D/uzUrPxib/cHI2WvhBCP0DY49+EEfNfPfb5VAvgOgPnsRLowSN9/r3p+3HB/97GW4PxDtN4iLzyDZz16G+wPJovDNo3lLGPJ5RIv82qL9loX+DKJ9DXP2GUX7GWT3+xNFdz3vfzru+UH95ttFPejvTxWh/yqJ/Sng9I2Zb+8Ap9+fLkL/VRr7g+D05Wq/nBd5Mg7ybhwR+vtzRP8ud/1BAMLQxzTRk1KkD7NHvydL9LDs9ssc11+FOfrBkOqrEfb97OfjeCP096yd/CHEPnGQ/mYQw27wB+WU8LM5VsRDiCHvBjHs9yyo/CHEPnHg/jFW7MP8JPYaHvFd4/YbCKYeoxPDvo/P5YMUNslyl2Hz9tj7xFH+x2DvG6vQvAP2kP9J6/aZ04CPQdjTldneEFK/Z2nmQ0jRL9isTxz1/8CT/XJRP/Z70vA/QtgnDvo/BmEfRsxjvycx/yOAfeKQ/2MA9nEh/+9J1f8IYJ84rv8YgH1cXP+PqfxfEWD/elgPP9Q99th5UU+WCX4XgL1jWP/Z1x/9Udd7eSG5T1BVgv1jEvsT9tXPaeXfaBbkj6pBP2h+CP6Pqek/wPlUwPmwSWr4P6acP2GC8zmR863S8LdmX77xFIJ3wA7yBzu/F3Y+bKI1/o+p4T8O63NB58PmL+L/mPP9A53PBZ0Pi5J/RzL3cyLnjRZr/zTu6ndcmvZ/GTkfl1/9nvTrrzIc+TFo+rj59vg/rqT+hHD6X4bOx4XNf7jlj0LOB4XNTx5X8n4+jPjH7PKf6Oe1JX/fmIn/q1od4g+9/FHQ+ZiBiY9DDvIHOb9VpPNRLA/x+1cdf3ns48MsDPtXF9CjqMeLnj2uePmB0Xij6aw/qsh4Q4j9jmXHn9M2fVQs/VF8EPEKDrop+yIAxVC3+vtBAVWUZNmmzMrmL6Ddt/D9EqpvlPh+OQDC8Jsnz2nCIPqGIOi/XnffPqybIl+ACfxl2s0/qYwiXiJgiWxpYl09Eh5R9yXYGpVF9982mZetq2UHmKrGW8l8+X55d77/P/uy/7Jf5PqPz6MledguX5zC6/JXKXO3eHiSf9Dy7kvTy73ftf74ipbN1V/bnkBk0Wz3GAdt15Rp+AQJL4DDzZJzsXz0FySAGXtrgJPEd7PV/Rd5EgSgmReB9xia4LLV2zvd3k4bfAvgwRD194GHvYA75C1g9xJ7+96AIRfAPEOL9U20/BPj8RYK+0oPfNUWcQM9X6kLedE0oDdPy19/Sk3PWdFdWITNnVtCoGMZhNkv0IfQ+8+smycZEOdLYHoj+06QN1+ecf3VB76ouEXBN/g7da/nnKTgduG9k/2jq7tvaRK9oZ6sYQ+hP7KI6PuojHzOBZ7K4r9tWLTJXW9bTgF5rp9e3Sa4vc+2/aPOB+qkKGLpedDX15Ml5yD8Bn/eCakXKtDfRJ3P+Tn1kSrvnI7XvMpzod8IdaKy+YOGn0IDAd3A0IN+/nyuwrsh4zn/JjXJsJjnt8PFHzPxPWA8Dqtw4nEQTD1DAk3fIO/kqMmXOLVHcfDPg+Ad7M0zXL0udP4fh9sN+jyi+AI+GLuh4e+Cj3qvIJF8BdnWxm4F3kZZON6zbusHBJyfLShI/Me6/XLx2A1EkqBnQQi9xLrI3Vh3cK8C5IbCKBxDCYpGEQilqGeo+A+CUoiHEsTdceb9aWHoBiNwGqMpBCIpCke/6um7VNy/OvMa/loi+XVMaUEF+gwVf39dVORJjrgYMxp9hqe3p9vIV9Bt/zJ4CJ8KvejH4HlodpbDAjekIv+XANVT5eP4DYn/pfyfXtcGwckb6MELf9YMjj3ALv5uIHtFdegfkL0vyCD4e/4LI8gb+iEU8J+EHIRBz9jJJ+d+P1v2ilLSXxBmjwYtPi/CaBj/DsBQeknjnudpfxdfNPajRpAXDN47QO0VZab/MtS8ZQ8c+tsWLaL80P8VLBq9hNzEA0g9WUMCIx5lgz85Kkpjf6cR6v3w9ori1F8Eb58fV98zMYBYfhjwY+9iyJ628m6FGtRztvqz4ep3yR2/b69wnH5/e/W0kXeLxKjnrPkXjilIhhepK8At/feeJgLc1aVvuySavl2Q0FZg4PhbJFh0TwiBUy1fuXl1eyiKYqCte5KouCWJmvsh6Mf7/LAS4ifHqe0fVzXc3dmzzbeSe4tB7XeqiMGwJyVDOPUSN/KQE3unqgQK+Sb8frIq4bEyAPv6TD2fq+gAxQCH8Dg6JhaXgz1Twkt1B08Zh59Swg8p8Z9TglplSfdrMdZN2S2eowRH/ReF3kjDCPVUvYuzfT789VLlD/oWfewVrPNnQD1gH16LevQNxPISn/qvFERtfuBo3u9ykJcuZxzfxVa+ly8jH1OyKEXcYNQDX/a8AuHFjvYGtVvUtys7/8cRNf3KiMIo8keVC++GqN+TJ/2l6HiUXO4GeRwvw8QNSv1FXv7s/LRFSC+dGiefheLvkPh9fl70fwBaTwJDGrmB0bfA1ROfSMPgyVLfOu8bgurzk5//A6B6XqDwFoUPT0GFvd/MIvrzc52/+5gNjNGP1Y3j+NvA6Ol50e+e9w1B9Yr5+58h/CQR6BkfREEvVVIvsnyxTBJ7A9KAfomY+4TSwvHn0iKxH03ue4lTIN5Caq+YsP3FcOXjeRFSfONVeXGTleUTeZV9lyXFYnKK4m5VeegdhUhRX4H0FXLgYZTPJIe9ADeU/rrnP5Ldc5pKTYo0LhsweMB1t3OgyuKZPD8HcfgvzX16Cf44/KKxgOH3Kmulf8/KxF+kmgcm4SeBGfRktPDVvpmgfnCmN/TGz7kyNcyi/y4dMV82C+WfORRPwgIQJ2HPcrivnR7/sc9butELBuANaCr623OLk/+8hjXEXmINP2z4+G6cB3rVRUU/GP1te+/TXCv08FrBdX1iuX49vPDau7Nszj8S9Se7/r/0/yrxf7t8IPnkg6QPLSD2joEM/cOZwe9UlUC/xI3+j9qzz3SxT43E/wEr8X9/OtRLHYpAP1OH+j154V8lQ8Bviy2+ObsI+lkuD4f/1nnfLnuAoc/PEP/2Aw1Ps0WUor83e+jVqHqaz6Lv9/xHGHoFKfwHR+9snb5rRRbt3+APqi++PDPn7xur7xrB22bebVwUhn4ROv2FwQeMQm/o7xdUwfD7DUTA0Cs49c8guhcyGIwgHlUOfanE/IihCBj6G4sNfO6xCIzEfyTGl8cl3kKKn5/b/l2mcwHH/3hq82MU/OxMVBCo/J3zvqXZf85yb5qybf/Q3N+mueEb4snKUTh6gzzr8TT9ddG/t6azYegPn/2Hz/7DZ//hs982GqR+GA2+F/8GQ38Y7U95sX8Y7X/EaP+rXerzc9q/TWbwAwKbxr+3BNtb0dnfb+UNswb485Pbv335M8gZv40FgBQY+QsLP/sUGJDxvr6V9xtOgT8/Df7bIw6Hv7cEIMDCCxzGT7Dg32/l/biQL9n633vSUOC28W1UAT9G1vNw4wfkxF/PHLqB4K8PGgLPHYIXgWBfNnzryZ/gkxQ2y6lvY6PXPo7oX16u8umUHQp+mzVQSfoGp2jqy6mIZ63AMPWXWcPeD1PPxwY2WQIeo+MWAYBAmbThM5T909GCh8Et/Dr0/ecN4l8MeqJN/C8K7AGg0BcBdUNR34bPq2Ne+PMvYPw713E8m5mO/uxgKIa+6E7+Wiv5/brsnwGVj8PL8/nlPz+KAiDz/GQ36McEqJ9/XeLfZfU7GF0S3S8Tur4qenEg+DcV/fonFz5fSwGcGv32qd8SQ7/nYhq/jPN68iRV7OnDFP+GJfrBmd4SNH+Wyfi3kXO7HOdzsuVL4RZGv8l66LfrcX67FfwdIfacSH7+HLKnyxn+eQ7VNxMl8Bwq6sFqJ+gTTZIvTWx+r2dPfa3veqDdzzqH+Q1kDyMY8jXV/LpQCP2SxF8qfEPgm+/Zh9cL/TmfWfy//zGpk+SjFfbhj1XAc7JvQf3691YCitw8LSpe3Pvz2PahDl6akv9WOnhOjv3OlgdHH5MjOPHiwjIf1QH+BlO23KSbZWFWnhs3X6RRPeCZH333gID+MW05hl+iyC8c5ZePTxa6pcHncjlT0gHR4U+Le5dYkSHAz1v1E4K4wZ5QxgR0Qz7U1fMacxq5QYnn+lqU/CYlf8jfoKU+g76iLKn2X3rs3X4YdAMmRn15IQT2XJEbFvzcn//B9rvXi9nF7esRQrA3s5g4doM+pgPAUnXI864Kw8QN/KL2n44u/Jz2X0Ev/RmbelM6ACGgr5b260NayBsMeRay/NTwFEn/NTxFPmsFpv8ankIet/KGiR3ynHCyXleF9qBsDP7Gg0H/g4NoCl3dh1X49ptFY5/S4z8dQ3vifx6uw06/she9hUVCny6DBaaFYd8LIogX+shbrNoOI8+Zp59Dz0vQgW6R8wc0bwWaZ4WJYMXV768Z/X7Aec4n3Q2KP+WQxCKbnmn6NxkcBw+Yg9DvKuClgfIlhXuLcXL0OenzzWjyF2KNf5WhBpR+woX89FDDIsPvn+kNwwX0OWm1KVtwNUe3a5LxFzDKHzhRDEOxZ2tWvkztvuPKh/CX7OU1HT3JXdBjvspacL0wk0pA8t/6LK/suhLkkxn4Yu366flW0g+rGW9fL+iru52y67bV3fzc22xy2XDb5OrLVujLluV94HbuEgPcfUTYqjj/B9kkxlpUrhC/O5er5XVS9ZjRz8s7t13+rJPNyl7+3/bnaMqWN+cVkzGyoWCrEAmWk4RdZkTWfnknR5RjYfISmrZ5JA/jWk5kVYZXMoOrYbtOuNNRhxQ12tltVR/PjeBbq3w52UQlqdNaV56y0rrCa/cAqcdsJ/u5J+zcVO4gag9F+yIY+kFczl7PjrOodk3PdFGge2mwlAVta1IScRonaeqESSSJWmYxdOVUSpdydcFWZ2xVkqtpzWBur2VL/15z+KokVtzybsWJK95IcQNdpdGKsVbMntsr4Jt1eorKnDpzxF/7Xri6RQuSJMPmdFq8LgsZBcPsV+z+umaXX/t8XNcLoNZgEIddfmE4CLosg03/9vOaLmrwKBv27hNWLYBbX5Z/4vWw4gZJEkXabBoSd5cdIjFa/l7n5eu9VkHu8ZIIuzFc7g+1Bst1CYo9yh4JdkIb69p5AUHb0pm/cJueI8oOfLE7wDDL7rVoSZrXd+0WknH1pX07DRdv+Rhv1soKk4qiAJZvTXGMpEn9hWb6AziB6KGjxo2OhM13173WYXiaVBMWgd5JhKaRdD2teZmbD1MnGFsuFnfsCaHO0Tr1VpNZ2tfD0iVaFtct3uGNbG2wt0d2Yc4Tw+Vy6Xtx2DMMAdqz0KaqAt3zyGnk2ZiUQptZqbuGgTCsDGp8kePaV+yktFa4U03bciyP18PQHOszuNqk2nJz2xyYIHQdB3buLrkTo3oJglgKV5e/RjWTcK1mcelHSTdi/tllZEY4bNhjMscIo5x5VW63sbO1BxLZi97GyEegj+Wf2ZEYfFCVxjiyPVGWTVPFAH+YDNM5AZmi3qiGB9G+H4bImEh7ra7dwFRGmg4Eo4bdpqZPJ6lA5x3oQPPuijBMwV9FezkLK8UjYy+fWl1nec7YgRYVaLcTyK3YjYtBZTs5WiUWl+RHO6Bh2MCMMoehDWUYlmXoNH1ab+FQlCQY7xBhE5+bGSO7+HIhkK/6RwcRKegIPxJmZFkdkCeO6/tDct5TaLiAeh3Mh5VpMpMt5a2QrzfR8Xg6QaNZXsf6QFyJSprn9BxDB7VtTYfZQbvIUSIUG2dKpIcTiqIwKmdjEkbJQUOnJOzzLIMq01FJzW94d0K8Zgg229TRs3oRH0sTjG84J1nfnbbo0qslR7IOGBPhxKVvD50GgDlhXLBhGmADbJnV+twC91IbGAw0HB6BrOB4OdMxyqutN84k0obN6iyLJ3Kapy0+tWyTLbFy0B2FA4Ef2P3h0qbrYbvdzrRvy7m+Gy6dAFCOT1haxgrkuFZ2XUvVptzADqRTCk7uSqxf2gMUGUDUMIB2J9ugMGa/r2KeEUioUik2QA6HjYquLqDdeqWExGVj7zRWyBshS1CDREm0pz0DEU6Fhlok6fkF6ceuA6zTIFygjednPFqKJbLJgVJEXpiMS3KUjyMHLOHEHaWrneXblDanrpioZV/F8K4LkNgauhzXGBPvlUk7RY5hwHeqjyzNUsxwZ/GL2tmt0yEXBVgxFFiaxaaqSdMHzdGe2dbRmSRsjp65yxBd3whyXJP3JgzYJmNnhv5R1b1k46P2Ri+5VkOHkKbhkAx6VNRM7hKJcNop26PZYKpmWCw34u15TEpsm0Kjt84GISoHU+knKJW3mDZjGCc5awvNczOsafKLlWQNaeOmJwxxcpFa6UU9Moiaukq6uUCYrdmTWl+Y0MMrfMyuOKdG9pm0SwR11v2UJkenvZzqM7lPeSS7mkveuL6wIbDQBQBMm9VD18t0K8rbpOvZNsnO1cngVWVfcDhih0YBO6WnpueLYUbs3rxalqUpARMyGrVfjOZiu1Rp3hAkEM0x3kuXwwnHFU3LCt6HA9Gonct8ype2LgdKAmC5bFPMGgOSnAtzDjwDhgt3rlM2l+c5S8PBcxyIyuV94GWBW/amxt7aHdfCDd5vbCirR5U2bHuxH+s5GlCABZty6+MmvFIDygoovpgngu7bxWexh5k1e3I3X3hg3Km0ggkCQY04WVOrxZGy+zsZYyMxnPI5sO7MhNdNc20ySd2ksM5lUUFoByoQ9jpyK7GhWhl5ilvHMl46BR91Z2jvVX5bCVGyHJ30LJsraHaSJEXvRKnZuwQATtCU3hIisODpULeWCAXoWysCmlstspGxarFil2xc3Pp6i+LXNLYbYBfrbEVoVObSlOMdkvGiLccS8FcvmmiDSxC405QX5ThTF5OmOKhCxBaVW/xkbA4WiBigq8bmwpQLYZKdqtMkNotbXrb3XjOP47lmhbJKtv2W38N+CjGZQ1tsr2wJIJFjWhT4RPXo/jL2DToAk0UHrMliPVJI4E6qipOiYMk6FUIB19Rr2wM3HIGN4uUdSzdbZlbVrS446Qoowpe1dPkuwCyaS9DG6Uzg+27PReNpEq2mIjhT3K1V4zejlKQrEAeUaF4ntsnHkdquNhcFssVTC8wQh3FKw9pImwNU3Ov03E1kgNCdHjZTBexabIDYxBTAnznUvIydai5lqJqw1kjK5MSWFIhJB0phkiaht9Iu8Qnv2GyF61yP/ILXvjAm6ECO15A4XveJSW8v6MELEI++Rc46srgY15aOwUod0ljNhkUYuhvXO4sTldMCq+TkQK4f+91wp73D8g9njfESOdFAnEBMRcErV1iNXAV1GEDr4dyq1MnAY9Ghp/jojgDyuWlPspEuFv7aCrxUMqFlj2RVy5mhixOxFS7OIMZdvC6c/g7UBVKvquFwwc1LdIng3oed3JF3pNSiYc+fS0LFQIdhprPZ1lIOwAFD8qqKTsMElFU34clfYovMq46AhDWGAIj5jKSKFS32hpsUeuSMaDR3lKMXwYEhul1ubTBe3SHqSTZTdXGDh0uDo2nKY72Hsf60rvz+7LkC6k/+0sn90GDJwGSU9WrPX5MEFWIR6KwrpOjiGpLdW8RJ3VBz22uM0kd+23WQFkzkhlBI7wQCOj1LiAxXNcyy0HkMjktgIDKWVSQgUOnlQpqzjKZVSruQOC7WYEi62Zqj1kcNUZT4Ksa3hh/i9ZgSVG0lGbhzvGKSPojgE8Cg5cqJGsGWMeSwFLRovT9Ux4yK/FDc6347dmflFOa9KeZzKrL03tV6qMKLwrLMcd3tVRDvTiR0CD2b12CAcD/QU0/dTpLDWGSLXnK8iOprEC+WR7CHWNJND72DV9823rAy2o2ya1rEzLQjackHTy6mBA5himKTk7hiRkdONXVlX/fNehszYwjAv3inyTONyZqI8iBkqK9AfoKYWDfq3Sk9TBXvj4tL0md1i1eJacxlsCVpQqfQJTLK6Olq54Mgo/BpUHbLpRasFiWs75wOp4OC0EuKQNUg6HcG5pTCmrI+2h40yxpEqjZ92x353sUYbmj2oJcTYxknZshnUp3vREmNzG7T+am3J5yRPGwuzDnSUVSyCnGvhXGUjm69cYk1o6CJtOaAlR30XJKWlHbNZ2TXWwdGPvFhmHrjSN/1/w0Idk/DUSVk7ZCAQ+pxF6TqcckiSwUlm6JYp/vt0o4nmGIbSXGQ4tB+KytYgq7gMxYj9foorPayerCPpkA7sLcElhv3KCIVTcjy7TkPPrVjtFKg2QVX4xk/CAZqRdHgCiDESDMTaogj8ETVVWOuxYAsvo+uqgqCuGlu3CEq8hyO9LUFFSc0KdZMB02QYh6tsUypdhjQK9t0/F5Gdxaqh+1esskxdi44cZvfhDXfpovS6dsFItcqqQ/HgfAUmErmrEn0ZuetFg/q9mWhGorQMGVgpvgph+R88gTM9reMMLWer3cQPMM9iLd3FoldR1oUcdRQHLneE97B5KrDbrbYQpDKMlMzlJlKUljDVoZm6MxQFYjIirQkgoqVohbRuYtt5oVjnEDeaACjpsCeMPb77SJUuMaD/ioU+6Ownjyl59B1xSz7OaNDG7p2jSM8nk56AM325WjGKLRX9jxibdIEnb0Ma66FODf1zHAav+Tv9UZvTAt2jqdWHIVq02rm0uAapVvrkm4rlaQh0U9wIlSQtpJAil6I1EV07OPuSoukBaetTjYuLQ/0lq9mj1AOICuNdLjcJIV9KFwY6hVDv/DQ0bevnGwT9iVYsDIWhECmy9fKKcpk6ZiksY4RZneWc4mNWi9paU1UuuowTuxBNVql0DGULudFn/Kc4etmspuu1PCTMObptl/b5xBGoUuabbGC3PHJFOa6WkXHVs63xbVlyKuNFk3TO7sNkW6NjItSeMwgY+0yhis3V7sGPqidNiWED6W9Qg24IwAK99sddfDKvqkPqZSsw+q4SlZ2tofQRCHpoZUyYTyftQHk36xpVUsQUWWohxXbdg/jnBGehTHabM9qMCLhRLFdw0JamkV2OtfqNVKAuE3bwu3+VrpQJ3Gq5ZgwtamDJb5aZxWwCwlkTLNo6aaiIJcMxixML/bcQe18EJqxiRo2UZAyFlf3i68yqyAAMQHw7nW93zKUf+W1C7KVnaNZUqQlHWOq3WWZok0QzxBjcvR9Ybfrs8VxZiBoKsQdDqvF3OJEvjVYrYDrSW8Qe19ObTy7SyRFq3FFkDHkQJY1Xhw0WRsM4eNGhpOBoZlB4HZCRaEFZ+WodwE5e7E/ySdBIfmJVTU+K4QsixFuay+WZl15ST2VHh4J6mLUeJAdVVzrXFYph/KRUVx9uW9M2HYEbsPA6XUD4yffJzDVZ3z3zkPP86UbkGwBFXR0eatgovrUrdsLiRbDsLhskAdmUNO2AlEbKpeqAm+dHSVxV06hIF5nc3Xq+y00XLYgrsWuJ7k4udFZdWRDi1y3cYPNqDJnutxOUJE2l1TvzJBx9uTGgj2onbJBJYg9DEUGH5p+DPSG2Xh6jo5Lwr4FrrC6oMGQmcc+SUddOOpof866YBbOVafvZYrYpjAZioBQBInYyLPNInaHroCFszvT2a2DdV1lRznDw2Q0dE88C7sVAlwCm5xh4VRhFhdsF/uDQDKEL2cHXm+/xhuOkeNW2bqtjOBU22b8nKMHUdz2RzTlhowXMxSSLiwl07QTGaa4AyaFdHmhFUG+s5sSYd4OuyQGmqFJE4Q3buxU0YhYC2RrH1+8OqADdnueQeCdr+mwbjV+UrtIqF97EfQu9zYiDRPdlnUq0duT0SCq7BbcQaC4q4uXAAk4A9iurRKdk4w0D4s/W1VhUuKkildbfldAg25A7VnBEWYn9HrmNeKZwU+T6ka6eDrBNdojTh9WILavuXBACToQiRaqig6htsvWzVZp1y5HFxE1XMdU0eAtdD6B/eGtD8scUu2CvoRasTOK3qHtczyT+kFE+LSyuuvqcE4UU0JLMtmmRZRuGtzmEA5qsiY9WUzKD3OTG2IIEqzeFRF8iRIhdYBykJuv4sN5wDeroaqjEkNPo47PTuqE1tWzpy0t7hP4ZOQ6DKsDCDuThhTgxQtuTW51JtIsiDaYsNgKoqdGutTi/WVHekWUmXuqzCLFubRzTQDktH5ML/pmE25xoQEmi71ZVfbxYFcoltnbsttK0uhOo7YSGNYs49PlnGppv7msmzyegzNL2/zK2hO7tXOhhMkvxv7IpA1ISLxAs5yqxrvpWCPKcpENI4j6ViLQ/cRmdqmAFC6qsyEaRI+VW1MRjvg1rJueOasVWVZXH1M408QJVsdt/KhmpdlelzSLb1kqkc/jadqYcT+m3knvdntJI6MDOWjJxWIQs4F3S9c7sLWqjK5k6joSm+xeYMU8DJrdwaGSclzZnbjd+wWv8lyxb45UE5jrw5kUI0ehq03Ve0rGnxwQFx82IZ/YPclLVJrbXtsfAsRnNXhcOSD6qeNeRHGGZK9wdMnNEiAENsKs4bDYt1xH9JrEUTnAKpVcxnSIWZq+efKnuFb3uoLYcFeojmjxl4NZnXJiVZoym2z4qXc25nY3nA7zJt7i0MB1U+ckOtYC0AjymSPh3HPiQNU6CPJamIkCuZYvjjyz5OEqLnkkFm+KDtqX9UU3vDY/JHBOxPqZLlpiUNezMh9ivuGv0iK1owc8BD6YPcMbCQ+ne/s0y920vqz3iUe3zhKyXMK1A9KpgEt0PhmzXBTClBwlu/WoZGwHnM74qb1m0YqY+DC9kmIWayrXqQproypdTQc2bcc0OMbs4vRX+FQx1FXeHePeGCpB4N1TakQNiqrK7ISlBfcTB1LNbuSdnltkLqhHq+BKW9r7Setyun8cHfxUBLpMKmawyw5qq/ihyZ2ObI74F56OW5rt61BHw9qogtuEaq4B8GcEJUEec5zCOcc5ojwvKAC0r5BW+tFMsyOW4OkhIkaar7TNklfDM7N0ZWcd7Liq6g4dBOx2JtJuxHWYRvHcNYvX7hymiGeVIZYo9HYzVPSca0NKSKpTHvKinERru/i1g7pbDl7pM6yTfq6fGHjLpS2gL8BJKal3T7zkBRsxcyJHzvb4aBDwpaso1eAGI+GmWHdqg/QOEZwdNkMtxHxRDUyMzj5e4x57HqYw3M01I/QNfpcBExBXt9qlqSquEraWuE1Ew5bCUtxvxOYUO30Zixc/qG1GjEzgKS8Rcs3zbhNRzKAftopMu2Id+pByPvilo9bHxnSYPFIazSfTOVnpbDqvSJu/YifgmEJrv5cYLV657HTQmh7ZhPOx3xHhIJDXWKVTPtw6G52bMk5Cgbkn5wLxLRvoIJR0FviqdbDcBWuaqnZxVL0RZKTeAY6coOhzn2iZvAP6vFp91WunKk4O5wvTVOLRg43FcdXj/tQZ1URw7kzlwE72w9DUrG2am0aOd3Ae41wZxXniM0Vmj6bWu03M7AxBZfYI2+0MTTo4gb/KJ6aY5SnA53mGNAiBaXF2yTowGwMO7EnYcTlinIvdLaVVo+4AlohliVnkNI48x7k+uOZuHJu4Z3DXaI6rGl+VS0cnl54Cxl66yoWZU2vvrcAb8HIkp7Cb+OrQJcmKMM75aoWt46tbBOzWO4px7pdCCyXxhWj3u0LS9hTbNwYXlQiEHaPA3R97k7CRsqET2AGOLG88xckOfj82u+JgMRvqstorfIFKc3lL2sJdpRxjp8lc8rLKNH4yeUKfLs7RcDYU7sfMls8NhRFWIQnjMCznsjolIXrFKrFh8lAk7N00IcLJJblDpAG3LDTUULZ4iAdUT86Nc9KXzJgchEvntP5Fg+tdHN4O0IzNtY4v1wnwIM0wTvy4kfQjAftzF++AJ231QmNxrQAhbjoz/oUdMw32ealIySWhOZAuV/WNCp3czboaeTF2zYhpdqVjht2OKucLvNcbvq1EIzWURhkk1SzcccWjoXKA3RSOeWsC7knB2GR7WnyjVF8uWRI7pncyJW+XEtWBWdfdCdalVayP10mld/62DIxEKq9FXMCdk/bZlC3RpJAcxsDriyXLdwrNjGGLFKSjb4xugQdRH6SwbNJkTxFmBEkyAleuH3DEnjoG9Yz5J1sYcC3kRKgL3Ob2GlKHPalr/pgrl3qBjN30ecuMYl/x6MUQOJUBTzdcg6JOLOfas8TboNtMsRyq80ayz3vlRCLkqZRxO0obFILSfuu5s2fieFZpcbpT9eEwdWv74o7ztmCmcZwCD5uCRDP3ubC7bCSf2+l5U5yuEBObp4Q9nHj9khjnfjTV6VLP19jXmb3cJreBUaP4ZM/sg51eBMm8HxWzseETRhqH5brcoAs2/noSV1156o2YiBdNJJK3Bb0fn9ptLh054D3nJSLsVSsPaX9PXGejjBquLrdV76pyk+OrleGEZ5aDDmozU2Z2AuCxkSXZ5GV6Y4iivWrFBScooWlClB6PFopPMB1C1m6Hnlm7EN11s0QDu0BARgGEvmRaQx2yorZ0YtYiCKuhYjOZXZpg/Xp1wfoch1PszOIOMQBeWDhGuhqcFvgYwTk7MLtoUzk4mtKkrdi1e8Q5fclTl9yL92qkQsRsb3AtvBEg+lzAp22M7URRNF3WgqT1UdEku9TLHUcF5pg1yWiTpoI6ZX0QNzYEsDnm+4ELQSy9Hh1sUKRGgDd7z4dA1BbyWC/bdTlaB7jqa4QfFpOU56vqnG13Bctdmy2q7lg6F9t8pEHURwO22VFXZpQd4MLt6iyaIlo+Vry27wHbQooFAYZEBnIvee64d+HUv3p9RXqBBGzbmuNENg0Smb64AUIEap+KhetTEi2Dw+2LSgP6jaHaMQhmNa/HpE/kyYFMH8Minsj5lLNWMeCaZXNTc9oOrVRG7va8ehxhqgUH91dyYuQ6BNS80TAHgi+g8CBol/RSs1gFwjJY0E/C0UUkSRm2NKRsFqfv1pNnrNBAndcRLycHF5tbil7PtpwyCc4YCr8f9mv2SiTOIffjxaJX41mZ+tPir/e2n68nb2s6de1TuS8k0h0f6CVaLdLnA80TQZ+QewatJuzA2oh2chkAFwiEEtaMN2U+CMDTNChrHsItDoOEkzw0xbEhB+dgWUSKm1vrYmyDK2E5PIc71RE2EaS1Mc+v2hRXKlfZaBlcRdlkymafGEaUUXVWVEKRH9Oa3vS8qewmjLvgBAAHeeZLIXT9GlDdcINStOSWZ2W0U7w/CEAuGLclIbBUJ2vZ7Rxf0nCbSGdnr9sJgJQ2J8jt0B9QmXSq/SWIDJyDfUAUTGXgJW5AUM5X3HWieEqBq4ooBnXcOFmXK0aHxGWOkhYNcQWJLIETsY9caO4o8lpdyCEhQopclcJivHiD9ubQzEOjTruEih0t0Nt+VTiadfLtU4PE2sn0prJT1m5H58Za8XrSbl1pyXnJyCQ5N8UmV8pXAUy4HAlM9gokklubPS7/aaZJFOPgWnt5x8onlKqPkZCL9taItNKe91oQtqMsixcn8qfEdd26DHiL9u1ogy8x3eFEVkNKo+jOA2OCdNAY+eyUlWnvAR09WE1IIsaSmZ8udTbH2bWuFF4YgsOKniVyu9rlZlUgSESEXXK8HlYHmCKO4R4R3S3Mj62pwkGPuiUXnpV8WOUb/DABtg+MURWWD+WkiyjQOXEbRi5OtVqF5xFyKVecVh2q7SG2p8WTqQ+IYTtyn7BFt9h+I4wDKRWhDcke8jliMf46UPwowDONrtc2e9b1JQwaBtCXs7CDoaKNhXG/PewTvmk6ZTJt09pgDWYGbJ0w46ApR0vUEdzNosNUGnh09Rr9ANHOSVINBxYJzoM1bWk4G7H9TkXMc56W3RBhMG81xdxuEmx7q+nGiIJMD08Op4dc4QoNdsmQPY50Abk+WWOpp0I8lnK5hA86DcHbICU51Qm9jRcdCmTvUaV+1L1kcZ32Oplwn+sL4jZRr4i2EvYENxCRksyI1GxQazEYVHB2MQ5hUvq0IXNNvtpYHDlrNqd2bc6D8LdaXS8jPeF2x9A+fnDJzOZ82T1v/Gl30GVd0lFNafnZA2Mw2IVGmIxDnMXbgqFEhDVHEt4aPUhsiyYQhrK2ZIKKrxMUr3153hETXAwCb7X7S7uJ1MN5ROFSOl69JQvXtb11Ee28JY8ItOoOhTSITuJkxA6WYJ0/TbCsa4MrD8cohIq7oVJgdRxpcHKz3if1EB1txqVQm6CuA+5AfauYFlRmBaNzopv2dtaJa38eT2AI8rJzyp0G3qVLsrk+M0uKuxu27LR1D404p/xG2WyFXOfizciftApkHSDo3ixO0Rkb9XJCe5kxC5jURTGkLllrrtsdEZ2Kuu4vIysRhXZZRD+Gm7y5kHfXGl7l62pzXNGQd0KI9na0s81O0OZ6xSeo5bLC8ckRW9XVfK06xuhjhkhcPu2CaTzG9Xbxk62P0dU6J0e1yKuZNfHGi2J1Rs0l0WUSzJETYdcXi4MFCsrAVR+qWbbXhwtOsiNMx5G4oiDHTqDYhCs48sRdoLAOclApTckz+qxZQlZMXAOIVByxNDrk8W3usZ6ElUVhKSrtc4JAyQJg0tfefDkvKLfI6kAUU5Ltpv2SgyN5BLgnsvXbphpGqFgl+3GJKZ1WBX5Rxs49uNdLsMk1p9sYDk3BZd6qZRcS7bnGXS9eDKZIr4zkGG3kq7oe94jXauLiGizLOAFOnLOJpCEdmqa71BgTTWeIo4XUhMSd3PycB5q1X2J3KKClIxozwkYQuw4lzkS0zaLNcLiaqXtdfBXUm5xmxao6tTlXMCVVXftyAmUVDTVrUdRsCILgOSNWw6F12mI/j5jPg+QY8yZnvQTwR9QhbaMIEFvOZ8lCxd3ZcAiUCdZXgiHbvOt9u+/OQRaoRqbW6WhvNuU+jRxy42+g9hTY+KlWWuuUVqGmsUzT93FBtmEDJR2IE+iePl90wSJAFoTNp4awg4hsIiof4ZHlpH14TFfrFqQwBEmBoOB0HrvUPjc64x04uVPdRlpSPrbCd8St+3Lt8+5qHbeOq7YB7wYiGvbIPr32URCPOZ4NqmEmlpVYF8j2K3xmZzWmAfl9uB2tRlM+3+P69gzx7JYNxaTJqCPvsJzszWDgZzUYITTu3M6BLVZy0vbMH5Yz7jIBH1qMQHycdcZ63s4XkAM5I45XqWYdFSrPdDBEcGZYHhGoNcKvdO1ga0teUFiOq4QndOPq+wQ/GRntXQOoPcsHOgerE7H7EiXExIF5HIxha0pfjMASttUMLfaOlcIG8ErZkFUqNMmBmnDN4EA5ZfA5Ue95tykW9W9gJ0BoMHAYQOurobS+qJ7FABZksiXoioyi2zFxoSKiDUWbh6mtmACTNi1zFASMNEE+K45sfESYOYLh63WxWLoCaltAFcBhNfhmSktn7dwJYy46euPSPGXt3KX/EWxAo46pb/Z9nUhBF+cDUVATyFOkkMAWl96cbFB0BolFdRYYNoCHSuuEfRdFiMdGxzR1l8hqsfCmsZe5ONZ2KqztkES0DfzY6/11o052yZlONNzx+1FHjpY2oA2iXdHB63LIQ7WU88UVYq+ZZmSmLCYvCFoJSR+GoYkwa9xnFH+lJZtV7Fz09LgDJbqjXcBNEGJVgtlpm5VtfmqMpfNeTSE/XNa6At8m/EvspgEeonFDCOEPurruFGsSxmm91vnV5OxG6aQql6u5adLYUDhh15QVG1E1rC6x4CZYX5ZUJL5idqV7u/nadBtBWAkAlBiwBvN56zD5uEXZS3MA1rZQF7dyPV9PTXYo28XLdmR9AjRk1gV+jSkrenTmcHLrOA7hJQe51JjtR1p8UMwKUnkyBSc9XWBWNWxPP5z38XWtTJvjeX01xzOk9fFRuk78WYrQdHegcFO3XBUYXcTxWh4JNYxdDfjtpW2Br2gyIR1wtgYBeSheGe5y3ugDVxxRdQ025iHsqCWJV+JU83yuGTpMEj4YypQzBUWLrKBDgiRpl2Ii3G6mA3dA/IzRMcD0bFJGiRqKDnWYw6c5uqVQ5xW14SocDLeHeIuk14igUhrpDylrnuelq1tLJ/D87W6aVijnUNY2dtaHPVpmOai9GJri4E3FHVi2q0uLkOGANfEhqZccvl+NzhrtmqRBrxlW8PTJLeWW7dFSh0eHZFgnChPr6PtmO3Qqm69s0K1oX8nZlmG2YTwzoS8hniTuj4LPSOsOIQOxME1IdFUj8Zu1t4bpqzpiTBpwWNtfw1pXBV9CsQYp6sLP9N4Sh5q8tFubcbbsfotRPUZWbRDAnmhpaUXpgJ+rwDCgg7TspjEbUKwSjmkHE6kfRZZKENiudZewVAMaWO2hwBQcYd9kCKnmUUxo6cbHotVZPu75bkougT7P3XWdZ+yZ5uR2rnJw4Kk/b07+arUC1JaqK2uDEbM0r/TSO41tdyiNk8kUO+KuEiByeZVieREQjqzLnHxaOc5naemalYwHwLHAUaBFECozp9UhgZ2umd1Bo8QI2RwTka2Fuj2z1zZvonUNgm4dpg4obBWnvubDnvHP8nmTSJ7VGC1qBntS5M4rODhZbb0cLCp50jiOaOSwYpIpSe5hPL3aNZRF+hI3td6IDepu1Yv5bocQJdlcLpfMMwcBMrxVqiqltdHOdZIMmaLZ1mZSS6PWCmov29bE+TGPKqkpnlZ2jPkrPVox4VlbmeiSUkCGNqOwIUZtwV8Jeb70a/EMNZuJbQ8VtSsJ8pRhhEKtlN223YQtXvlDEuijX9TjBOGDioYneHUQYlC3ewiqcQsc8X0b27MKe1269ExQzrYeL3Rdh+vUkJuDqF8Kwg2jopp9fMKcLfalfpzmmJXXF1fAivjG4trPDqurmg4s/G3t+WrxKmufKfiROAe6FsjyXXtqcjgfToaqybSgf6lTZ0Nmv5mIs2FqtLxYSbBveF4yHNFILIUW7o9d9g3T/WpyS8vcwgq4X7Dv2QFcQW4oJ0lbFPjlvtiVGjZWvuy4htZ3+yo8N4AeeU69q9mQ2WSnYLbAStUNUeHxjc1xYLbCWz0PCL8hny5Gh37d9HAFH/rLU4Iez/K4wci3mOjxiqfQ/QYTPSoKTPQo7iZ6rHddkhlPJ3qA0isqVG4neqzlkCJ0XWvI67RdoiPOVziPO8uqairnjcGqjuJeFKNCKp5n3Ox8kdWs5HXlJCuHs6Ko58xdnxhVNpPzzrMZXU7E3bqu4QykKjDS2VHCCmhOg2E0UKvCzrlP9nM7Ay7nQmid1FLzcklgKNamHYqgcGpyjkSQ5bG8VvTV8ivrK91YMcsvy+1MZpt0lXWFdtD9thXLyuvld73sr641qs425WoRS1mvuOWXX/H8eSPIu/OBl7J+a15O6u2+j869nWpt2W2zmL1ylXw9vlu5qxTwpCZ2co8wd98aY6+uBxKkIeRJaIa2Ha5RMevVqtCV/FAd+ShfEtw9MWzuz9muU0a6CCCpJVTByKfDdui6kzRFwPXiExPehmnkeMHFaoBtSJIPWsYXYdr+50udfwgPYP4wSFkpYPihxh0H44F0uHiP0S4IIAsPLSRQJOCkIDxGhyGkEU2SpLWEnwr4yPjJxPuVLlBFcVu4vuw6tS1F8l6zGNjEPCOXI6mE2667jUeBK58LaQa819EYjI3Bu/glw6lc3hWpYd9d4O30FmjSUo/IDFkr94jVqQPl6kWzBN/htCRBpJfaVGvhod+mWagge2NFdS5KyclhVx3Di4BWe+mv+TQ0nSeKPUxzi7ZUiZFkt1oPoHCFdvZcZut2PSEDQsugiGHNbHbDrq4v101KIbdb4HMMD5HvQ+AITZJPjXpaARVsImQP74cO3FhcyeJ+JvzCisLQk4gjY9PUcbWViXqTNUaxOWocvL5ADsjnuApKKWUJDFZLqKXhme07CtV10tmGNzmpruCGnXfKBVu6vqbsIja9sHyDk9u0BeRqdnG+3OVWymCc4I2eCqVda5MSiEIbEh3Uu5vfnvdBmxuLq8qcdBLkxVtsJHDnBGTT+X30PRTksW2OQ08YzaazR4g4bWJyf1v8D0/9itamXCabU0el+F6Kr6XlFBeYxAc5WnerWPA6DW6MBqbIHS8YxGIrDhh1bYza25UbmpQ1vMOarEe9ph25VEN8IhDJI7sZveNGTMCUXEDGRvdTo0akYHd1t9jmVQpKVcx4nbYnz9yv9kSYgTXy1r0jSHSylopKlOd5h9XCPE3J4MQzCZVKgRt8V/jFzkFXnTZ1i2/cWQUuHW/L7OMtPeyWSHncHUX4pAwDQjrDZUWUTLAX6JY+8tWkRiNGRYDsJr5giMVBD4puxx5sm8h65Er2p23QQuCRCWuZ3Bf6FiKHxWkqYJ+cPNDO8XzceFNvol4XF+MoNvi0ESV1a8H1OWY4Z61WpZsLc5sVkzcW22Cwzu4YbJAOOKSgQ73KENy1UtaYFKu8fC7mO5WB6TjsPDs9ejjH2LgWm00EtLX0gWMx0h2YVxAVNRwNM34xnSXRZn0djEZktpIjwUZ33bjVQmC4W2ct6aNXVVRvCn5va7tKXe84Rd3AGj70O4z39F1UMCD67pNkIvkFEyacZYkAdyqQqHQCmaikHcQLthUPvLw7sVZFCbK7AgbB1JLDalellsRVeb+5sywgOh1QdwRdHsf1xkAcUPtw4sOmJGPEPcZwsFoiOoi9cIepu1Y40Ht4wcN6CXzP5jo1NXKNNiPuz9joYCCM6hYfcy/PGMhzXDXpjlhZEr0ZohBu+gX96dFJLd2xq8k7mmCyC7Q5yTEYO7m3jkPVnkvTZ0DehvCX+/JiUZwvli1Dm7R2fLZ0j5U+1agVjIwNIoY0xcFeuy1pEEPXI4FGjbcFUdfbERNf0wwuQuql14L0yjuAgrX7KUIUSQMq/wKQvi+T84Y5bOjdbqsMEzN5ohiGPsncSWydSRyG1gMXbxh5M+MKQwx9X9e5qzMqCUv2hYfdQMwRfbvHiUxsOTfX9qgbIA164VjKHrTzfh/thKWPVtXpnMrwVfFpkdf0WIckLSMQTbuqR1yXd4bjqfYIKBZ/NefAXhcnD+FCGPH1lUkxdlsUDERRt36h1N1r24+iOWmhI05rhyiGe5IxoPcjakT6Savq40aC2SzerY6h55M8erJvyU+B6HtH3W4mAdBGZ/mKnfN1g3sMn8e1PYjTLiJ8xUIJmTstjnQrFQzRZPns1A4dCYszoKQ2I902oLC5pC4I75px7RwrMHC2lgY9EIcshUP2dn5lH5oXmN5dhuu+2p1qA6TWOc5XZZgJBxA8rPe6MxSjvTmGmhZslH2rkWAC0ro8wSCInKmyigw+JtVis5Os8eweayge14iVyPGOEeFIOCSc0u42DoM4IAlv4SOXthcGprpc11bwyZoaXDshl/UAUNvshv0lhoS2PZ+qXXVIxmwrpWiyE7ujB5JEnE0XXV0F5er3Sze6kucFlqbcrHebBq+Au78sDhQYzhEnGynAtkS+4uJO43D6NBkyH+lpyhbxekfcFp3ur2Lt+Tnf94JAErZlbddrJw+R7HhyNxc8Evuumw9I7Z5Auqr4JMEA1FIxsYEYuiO0xXfXcG97yEWVVcY+ash58gR8kHpQjc+SCAoYeaPuiy1cxTS2WBoOpk6JofGRmnWr1CNZnCQ987o+I5RJZQW3W2+HGI6YOZ+MnQjsZ2OmKbOOyhbLxRRrmU7gcDw5BS7H4FtYyCby5NwOplUKubfUXCe0RN7tsE7hiL16wXB9iETxrlsfi2LpKCGDyCktCA03niLByZgpgw6BLQEmiCBcz4cauoRcoUXMMPL0kVgktYADWGLrPI+RdInBnMSjnSbXJR3fZVEPaeqFGPm2qxQ1Niz6UGeUo8/CvAc+gdJwn131VjSg59lKqi2Ttv0QDUuqdDoIAgWZAkz4FpbWeugVdGt7ixtcq9sjbsdFv48OMzkm4WSaCHnYxldMP/Ydh+nT1gXaRqs2XC1GcyfxB1RCSwXwejwIjbnYQOYEhG11VeHT/sSuJ1wKWRYmOfA9E58nXpo23h4ZvErZCBDl77a3gpLlaacceTWMwrpoDe7SdgyIEpRp1nhLzbBrfKJxDEPm7HIhLYUVJkRWa8maip139ZcscFsR4v64ki1+iWimASrhg5YyECe0KrwIfU1SuXeNoiLLqDbMI/jkNDNW70kc1JOw4uXOClPYChiW/BSud9pqGzv5Fc9wmg4sfHu4nSmVfQ3uumCOFxBvtSiNDwOVh4oP7h2Bq1AzIj6iRUefuuB2iJfutvIq3i0BPJOBkVQcP3DHXSiXZq6olzqkqbZscOqsKytM2Lo4M61nhpkzZiaWpH4iL852n6bhIEWDO5uxmLDHxDnPQ4a1G724v3YcJgYyP4JgUa0BXapstfXKm91gTEbNPgSqhZdTUuCFXvlEoQMrYKEoDIfT7RSQMY562chH3K4qMLAB4iKvnRxfPiqtloAB6nU2tSKijSRFQahzO/xJmrQgzhmIS0O5kKjUWK+qNZqwFpqC+ujNatrwIM4SWL8NptJAqQ0RrfnNfdCtD2CvALM6kg2CsLesYhUOQnpYbPFujKl79xg1xCSBggo8QM2jOuNrzcdidlww7t3aomDlrWvxWpOmfE82R0YNh1Zai7fTQpjhUqfthEcTYmUmHgb1ftppQ+wd71wGQQrjrU/hLxSFouLcdfos8f3x/9f2HVuuazuSX9PTWqInh6IVvbczek/Ru68vbp3zVtV9ryc96OG9mScpYQOBCGwALAPtzZ2iRHTBPGNVPyQLLHGPmDANpPVr49ZD8sJAuIoz8Wa16nrboE8l0wwDgWXJnOdyG5CreXiKQk12703oQwWpS8EsGx9eA8U1rSFKyDf289Dkmo6jjY7uWn+z4hilqChJSKS/QdIqHDR82D5G2qte32f3zq249qjYcIYeLQHqi0BiTR70LpAPX0REfhPwk+P5Vd3BZQBPte8ST5JIPy46j0cq9scTgQzc0FbgvShRBZes4uerXqu5C1X4nSsrW9UJuEfjy/7LZ8C+OvgvQ1F2t9vst/oqfnXGnPlhmiynt7H3TIxXuhikwe2Wj8MBmGjIQOlEITgl7L4R5FY/KnW3RILWOe4s8tx7V1J/8+WASEyKOgM/1UGtuIaXJGxj3zrVncqpRwlC3D0eUEd2rR9UJSKHnWkZY+1LJKJhA2mAv3jwkWYSVBOlewWET3TV+naE5i/hJ5DoIElSZVIPGynmzt2fKEGwU1MZxt0sRxr7XTO7zXU+d7IwBcjsRQpuVyxy3qMGbh2WQCnFL87+waySE6Lku1I70qewHjycMq4UgsvCyVOo77cDPAP8CVCyp4cXiD9AxWZnIDEy8m3MbCYReK6vqiQh/o3rs0GoNN7gNy92Bfur9u/aRoLDVjC0Al9GVwxxY8Yv8N/42vamruFsiLB9Bwx8JR/WPERBkWVeXqWNceOa9utgjZnyG6LVQ8RKLNfSKGCcCMChF6BbIpvqXjv2p6vIgTDus1SMBwz6KBd84VN2gpJgrEJ6bDrZ0hnpmGAgxIpcLoJC9PFo0YIAm9Dp1JeYE/FpRW731xKx5kuLH8AnVxNDcOuTqJQ74Fu7FY/1aa94RLkkYyqL+Al/bC36GzjZ/eh+TjwCY4/0VkIk1OuRH/MzvnIGpjrty8wDw4cuqgiKfgqAYM6J6x2Gorxqoz2IvFZCI7iPPdfvRi61ZI3FChgwz7lkDxymllS4aAzbzo/ZaNqltMRkavHaXM7XGEmwsTfEotnMa2HOXcBqfn4+KrBhZGBM9EE6Crv34jI1YiTrOem6ZvhmRy/k3HVL3qVsBy/uzj68/g6mXnd53YHatQcgL4bkIPiVrl1G9CkTKpEF+YXISxGbR9j8Ow/BfFOXb0Xf07i2hhgkGKmR4yqlsJtl76IbDwtJhEtid/lG9cm03NKcB6ZumT8lm7QC/Z/Qbc6EM2xH+C4bdQHSHvIYZuxxsp9rUEp//eo4eIVzwEtfABRMk8rH0ADzmnVznlEeqCCt6S1sU9RFyZxj0PrZjOmjuTr2Ug0ApII/OgTyIqQLpEx4VQoRo6OWwfDkJRXi9KGZKLK0qe4syVj6NlZanEoGh+Mr5hpL2WoIwv/Q9fsztK2b9faXIXtR0FHHjLqqLUPNlJAni6nJqk749HKzFqrSNjcXuH3bW5yIsWXILRguAdee5jqaDl3zJfDprNvPIsV9xGquy9m+OgwDKYk1Bsu77nJt+sR2ZXkDexjXr5AjSZmnZ8jhInGEzu3rGECiRfDU3kmr3HggWVQrO4jYHe7qDBeCY+YuXv3RMzbGR21gExXt5wQaWjiOt+aTq1rECNPqDEU3etQEYrWG7wdvamVcJTeSvyPZ9k9K33cx195+tOKtSzm28GgV65scBPM1mQiNHh4+j9+In+EbH1JY+7a33oUN7FkW5V0AYlAcytf+ipzhjOQIMJAX3snzMrafiNgSfFC1rnmnjBwT4N7rfIu2P6SVm6Dtq7vzLwqMcFNDV9d+QQ3btH/3fU3gNsHI3YEghtGdLKNIakShmq4y54M6beAFw76fjwqGPmeJFcdStLfl+aykCASl7MUj0vw/GNf8HfsX953hFfLKmxx2gnTUAAauo8KgtUBYY/MhjjtIJ3X1231xDsDV9nUFHAFvFeC5h5Kut7GRl1yo2+Q01+1x76iO+AGGHTS3CqWEtaIG85+vl7MRvYyvw4u4Qrr8mrCQUbo2MWHiBS89DCRk9IvmoGpFj7LH8sTyDhypMl8CeoZim519OmzkDRpr40ImHe/MIk7ZlIcgTA8bJ3vkTi4M+3SAoHChUulQeOA+eHipw4OihfNhR62U3psN78FcwnqZbb2jNBfeAdyLvKD7hmmu135d01A16jfhAdaFQ1GffLxkWYbt5+FRd2GRk0lNqdN+FHdjsYzfvgf3QKQZrBxxiSO5yccpH3xeVbRjBjRqWakOMYU4NpnfZg7TEvWwGYHzdT/Jo1uKXj2dqfJWuj3aRz+LB4zPS3rCD/uSd19uu+Z0ft0QAZpxAL6jjPHzQVrL4fChDIJXs40IAamv75e58Liyi+8ApgjBK795NPdHdCBOsGqZt0Ribh5/02A8xa1SMIYVjLcmsGp6GSafo0zgU2LPU/S3OFZd04dMexdGvvZL4Wxnu8fjjWxa01oOIb0Vir4ym6yaCPVcQbmYvCcpC/QWszDBzrA1dTgcOzfnd/7AquWn7BvZyoaBt1f/frc2EenI9co3dvxAV6QR5tIORM33iK+Dpljh7u4tzBTJWqVsL1TSWen0Atn9Xeqy0YeQGF6N3QV4XwjI5+F422Y61O83MwaDxQ0Emigupnc0sQ/xgHyMPgnqp+eYhwIxsUa3JN3YmfAgAm+IKYFj/yQPwNX4vClmQfAnNoBGgSimzbCwFhzD7a6yhEVTXEijco7ix3tV4/W1dsBcbLWxx+WuuudLu+wt1d0qBWouS/12ehS7dhRBiProNQxFcn33uePuW7mYYTyc5Adh54x3CvAT3789VaemJBfivXOpcZqN10AX/bLAp06/01oPyzxstRlJy+XVR2CbB7/bDqydnNaXiafctfy9Gk/o4ZUJ8E/mHzFgl5DDH5AdF4Ne2/I4lPZD4/tVMk7CNMOO7zhyCdu+tc9rFqNpgrmxtKnFfuXckGwxbha/QclCcgrMKj4OndnwYMqgzsLD7aAucWJJibSxUqXeZA5XhCv5dCc+3yX8ldN27CPsv5iVLIDb0jhfMXc9TgU9PrR4CTrunvpwXOLVUs21oLdY7d9eSSTsrLqphii5WNuYLXB+54cQA6Sczh3eSvdKDKdyesKB1dFavjg2BCdPL8vmXgZEflqtxkMsqgXmUS43gX41x1D2cnyfnoeczpLFvhRl/CTju+2BggW1UswV59RAq38rJuXEzQ9hzqDOKZJcTNSeSSxEsqmRHohCRkb3Xgtlx40F6H7xMJsVFBhMfqlNStTNKNBVZoutgfkYWe2yYjHAI4g7odN60oNtdu3h5Enn1rjONsjuYbjCidY37YBi3RoYMabt/aISdUgHe9M4+cWABEKmctRa2iBqQeKUT+4ZCMGvTdIE9RL70MWjegWDJiRopk/5qleKO3DQ+9VjHkTITwapHr5LTfukasaLdORLoMV0H8ZebghWkCEHbfYFgVdVBTyxGAn4q1JpyCJEtg7H+BuXrfFFl9/LBUnJNdsLKwD2lfg3ZR2VoBj967hsoO4/W7whxljhaDNg3BkpoLmXHz153OfHg41ZJERrurSGPOHZUfxReUEFeolUQIHzRDbH4QezopmOVFOoS0UAWhpHCW1wMjQoU152ux+UE6/RI0jtgkNyijFYH3a+9UUehKcqG/QNro86d5ocfPDcZ4ipGlJxfuTB8+8dO5Diam+ERX7jUgnFoWJZIAeqjdYrvYfMXnh8wzeo8HJHD0N2/w0UCzOgBm8oRhUUKTqRYheQZF0QnDOUrz95Tbo4pZ6YGflBppHgKdN/zvvbR1+16HUofnKLVsBdgvQG0TgOMW6sGPZr6l6ocLEqoW/RiwDsMKwG9QAEWbqYgpsb81PHbMu58M3XwwwS+6PYaNA2H1MLECNhe/TEx0OK4OFnfV5IV6OBru2gj6lHpAwv+Ri1qypcp/Uk1GDyWjFBkG4ZqJknSMCybNaBwavg8UVwrGkp3l+4O5gi0r8KjbzQCzbWpWq0M6U5UJSkINAr6OCgsFWrnIhFJDxjmMDxZFu3BCXrNWOdlyBsj+TuPi8HWvJ5vu9muxPkbgRi8PgoV1rNHopqtJNdy1sTiccXculeHKk9SHbxg8/W6+HFdhwmKTUkP0Q5X5NTJtz2Sj/MmfQVx8kwkW039+QZb+gj3oqL10g3cHr/bUCaoqZfti0tVLNTgttyYzub7cwJmiyZSUPdfAzSAyQ4C9YY2vyRCvDifFYqUN2jjHXR40KcWX6tBijZdO6ULbcPIAQR1sUGjXVDmijo9Xg7rigASfY1WuVOv4cPghrSv9DCA58nUeqjMsYd1nU5fovT7JFNq3y9xaRPECnIjcev5sWSszkiaFyjg0jvrgF5+pkXSvrkrUceP1FDfTqKaaMYlph3MDnO2XOSo2PQwzOX8thAwuZefEbnvIgoVxd9WgU4BV+DH9EKa0vVO296ZTYR2bIlLoHOQMFWobHg7FOSXyopVqxfawkRsxjpOM75yC1bnZrVrZ6UqUPvZ9ScgCXsdKSR5p3ncI5CDjtS7TSgl7Z1PFo5NvC0+0wlMqLUO4/IhzYihmBMBiceIdkPBDa3vcZ5Kxh9Nwi2s/BbCcIDeJmUyVtuxng9eyOCxdA78nZsXSE6wT7yqpZBAEy7KnkHs8gSzzB1xUU6x0tGx1hVEnL05ThF03H3BUrjOPaeXMd2hXRR5s9ruTIMexJCCZP0e1ZftQJ7ATw2V/8LAYFAqz6avs51YyNz104rUYU+0cNDGfYBgjHpHNkYn3zqY60MavuGufUkI7Bi7SLRwcHFIFvj1wzg3QLs4zmltZbxpn4haJEXqFWeEm08NCxSR+EyESIifzpyjZYnScgxL8eugOymNWMsqXeJgSK81YMKGT1nvLGkk0tYNFDiXztEZcy48raApPxPlDDlzIjA9lPLUVHfO9En3Az2QRIxy8K35dqcMpO9lX//DK/yK0IAmiJicRvSTv2lMgHyjqTC7cVY040uU6e75xYo/sJMoKofJ9R0GXHZjAvgiIDc2IJE+2e4yWGpYs+rJvW8yGGYoQm+x4CgHwxA3BUDyIF9IppXNpmY4q75SsZwX5g6kNw7xdrQbNwBcmBydlfoXJGmAOBcG41V1hg3PP/nvhJTHam53hzRMbaFPoz6dVuA4PVKw26oRthZoKbchiQ1j/kzsAn8mqTT29F1+cor2MnLx5MBdjLxDeT3k1D8fEn2D4wWeclGnkyssrG4ggT8Xqboy7ZlX7IyfTVm/ykLSbgKRud0r2NgqNipLYsLBKdixMDk7eDiLVhlY19twEAdAiSeLH2YMCl7qD7NM4ZaJop7vwtXH6g5pdyZc+YelEdTmd/CkzF2Xqqr1m0DV1R0Kl4Emt6/RmAFDnCUsS9urz47Xa+qE/2zio2+wUVG1g0DBltJomfGn9PFbjjf9yKOoXtvpU4NGuWz2w08z62xGa8nFJd6EByf4S1h8WoBwMfABkFQaNJg1g3ZPBFGHKCQn7aq7lBZTmQLnBnZwvHgub87dEoO9spxAYzN/r/YapB96wHW/SlJoDFHGE6fkScggZjBEjbEbHkzT2AGkHlO7ahd+T3fQqO+64caXx+pSIRiQpv2RqooPH3WteaBR0hquQnFKBLeHp9EWdako9SXBborYhfVxwIbhg8vPFQJVPf2fb9RtDAuHMkQufubGOLndIjcoEuKsQXzd5tYpaW3v3fbTkjd8V/zi2KvNQBq2n0MxYl3mIay9ynGcng82nrjV3WfVxMN4ZoNvjEMEFBaPN5pGdSQpZDP75X8trkwreoav/bAO9aTo8MeuQSCe7tvy65EvCTfhJzVLeYwK8IrIUQLCWc29RNPDfVCZ8LnXhzUkQKX0172JKjZvgTvcj2Z7Hr/t9Fl9d42CM5LbIoIcgfvkXwDGsU7aQUWxRrjSrvg/jzmD4vTtqYoMhYqUq5bqzPDuZVAsXXPaGfCOXA2iaXPtjUyNJf7R9albmXND8XjQz10hJXOH2aSsJNryfdjAKpPGlfRkOmDQcmrqA9Nwf046YaXDge0s8Inf187pnAyuJsCJIdw5USDLtBhVV0f+HNxod2AjvEn86Ljor5jLhXrE+0EJOgepXov3AOPvQjg7oq9RJVKEJeoV749LKGvO7xD3f/RJpqoAUYClJEsmHOnZq5YzGjJ1Ao0JJfvn3eCYqKcYw13QAo+EXBWIALbP1KEf3VSRGVi9kcwvhEI0YrAyP3mYbq18ANAwEhHQh5sMcE/L+BNvJcgufVN4pcLe4JpfhUtxzQDOuW9fQdFvcmIeZ9G15nV1w0kPpyYXf2KoMbebt3UHFLuR/OQKcBAHihoZcOyazbkbDYNCe6cg5Iwp1sfTE7WWvTTIIMy6fKz4GKAZX8K1YA+mWTbN674g07s3B5/narh8VhC4gUmRBDV9g67KjAQVyKb7UxVgoWzshGY1grJMUFTCKCKF6utmGzn6LpXave8q3xQ4zLIGxRNdJWMwxXjlz7NPuCORHYWUBqT7lnivSxFfYT6COBqk05w5LBYGULd62N+f2uV2h/TLsGlinUxkeIlVi2diVYfordR+RAIdD1qlMV+TSA84q+FsT/Bxl5M3rphXeZFcJ6R1f0uFBoqnMURNOI+CQSSe4IJwwThV+H2B+qdVR+1Jjo/udJE6rAL8d2QmClG49BHQ1furK99J1f159abRd92RH/buCtHp+Qj+xR/oM/lF0TaC1hjrfRDljokdEeJlqLHpxRD4WdGLmTu8veLs7O+7DqiftSYUoteWrxrgJdjIKXgy+4POuJs/CE6GWB0ZL6UZF3ZiHA4juxbQRwzmEpgYQDglH5H2Cvk6K3maBbYjbNJMkxRDjDg77aCHGl6cl4vzc74kD3ETOb2pFAkX8VDPNLtViWnV63KF1v443ruLSZPoHBChGw48fPjjnZ9nu6V34MejY6Daq6e59+INzyC/XVOPTkZlff+rnFjU2fEiKz+BeIVf1Xf3c9BBEP2ei+vl7EYD+/tggOWzx3PWr9kkJdEwxIPV9cuXEKhVswD4o8RP1cM4pbmv5G3LAmPpACUaM8cVYAoksgYD6SHH0PhiH38U/Hnk8jz53b7LYabrttsLIchcbd+8o0fCL8ioVBoKvnRdgJ8dHA/VFfAOTyUX3msJ2uyoQIQ0aZCqdZEK6M1ZgKXFAMDrOkWUkTqzDemAtrAPyZd4K/ldM5t9S8/j5Qr5A9fhuLVBwGrn9YT9GcfyKPPtykoPHHjbMtfY+8/CKMCNvT20HmQ9VStiHBDM2bvwIeP+/PAhBLwrFuLMu4kf2tUK7KyWZno1d/NFuEox+5DkUou0LpQfOFhh0XFQ/9N0oc1knazxx6wDB1P4wri0fNp/hApzivWFtRhFk+i38KLd3vT43rRF/r9hhW+gTO5swGXkirkzK5y7hDB+46g2215jTgyDEOUZs+5h5KjScW5yD8/lWqmRbL91KEbAdQ4j4aWKHs6c051GTvst2DT25XxzdzXSKDfF9kAJstP4QST6liQv0ba09UMp+NefgnmtKZLe4w+crVcsHwHkGeVxNXAUfmK2V1vMPtJu3A7Xa6AixIamVf69U1thAWTxTKvRYIn+6mi8pBlR/yYpwYEMWJ8KyaGlRBNdUdUpiuo1tzBMUkzovG4xhiUN95fKxMDOwyFtajNFj0dB9ikBW70BQgZEo8oxcEIYwy6fsX3Re9bpfBQ1izb5ZRNmNrek8GPGlU4FsVcTghhad33wPfR4KNqs/JTfMpDhdRlXfmyqU9IIOpluDFyidg3NbQAZK3vdEdHWr0+Ph1ax6+mX8cDlae9LcXwZ4W/rEZCLGe+aQ+wI+Ask359zfL7HiD6+ARLXhvtQjZgXIVfa3MJ21WSNP756aP6v2DQmt7njJa93Ovb8+PrDC8SSlPswYpAvg4uLXvcuQlZCwj3qocyFFJmaF89HeryHTDoh1bL+wH13yJ34MQqIuKsZZ02bS9xvXWgD33nfqMr7593QRYzsun3/R3f52MjXeAaTNOh2SPEMuorRTSIqosu1Ms4Y3llSC45/uD9xcgJlDpZVsHAaHj0Mh/MxHFCGfZhCDI7uj0CwXXDaL7DfuOWQ6VWqWBcClaWBj2qSI8irzcOanL5KA8AeI/GpujjiSqSSq73OtRyyejm4BKfwSKjbyMQhmHoKn9LNEAmfKMCRPLUcpD0a5pwfPr4j8q49w7JE08C/gFoaR/L7uXp8GFfl5UWZ+8eg+r4B5rHrWF86w9bITJiI+oyD6QhrscRucy3B4EAvmAYDYoQxp3InV/mTY3kucqKb1pcaBxiuvbGliECFtJx1YoneQ0iKqBS+Xsbg1Aty5i7VXAtgdJex5W5D3ISFnsdrK5/ZCHo8iXNMtJiXKNPPwFcyvr4xsLFRip2GqHY1VxpDeEBuJqgLFXSXuIR/Tm748Ko5MnmC/PiJNpLqt647wcXxAMlHmL25Mm6Iqw/sFx8MYz7XpV9y2rTLB1slHhtEbY3XKHfRjT4g/ToEJ9HcI7g39R3y71GGUpFXrZfdWL9n3/16hLLzRrGoHsB//HLOjkzgnT4klGPu5iBD396HFm8/aKZK1JwsDSjiGThG+wTIW40Yz7cnDkDyrF4kzYm+RCXfR/LNVCqQ2vjj6AyzMWkb97oaq7FxkemgbRjhk7N8Nzq4leH2XymMK+bCRtb7XkewkPut2XZQbsum67T7QiRJ/fTsZ0EdgoO0yAuz2d0SgYZpcaXmNT4DJrU6XrfPjiC4P2lmNHxsysgXliWWAuYMggZrSQJ2yzsgsYMHYZhPOHszXl/bvwF5DhoTy/O2qz3iM9FQi3IMunDvGAJglD0+0YIGKjKodjKacWY31qKd/Q4hrqBXWC7Wgz4rtBEOJJh7L5sJ1s7W3S5IWgtRkvYL6hRgdJ1DcpsPbNQV8kkNgPevBCnWse6BpiSsMXhXTw6QYOzRduL1tNdx2FxpWC56o2xvCc6JlX7x+VGTiWjsir9cYz17OAriTjxOFeJOkpvOkPVTUUxlqAA9BK0n00pYnJ9v0APOZ1wjENuG2QcZTlRj2YnSWTVuRqQujifRwMecCwxDS2gaZsW3iacrAcaUefwJaqdqeOCvQkUlW7VectHovUXAiwIRDL/W7pMzmJ0UoyM+d6ePjELbgjbt5F/tFGbo3tyOP443DoJ2skcf59okHbJ8K2M8u5U5OgnRecKD25NzWArnQivx/GP3m7lPIUCMIWB18wbKLEvBEHH/cJzdcNBuzSNRyXZ9zuZbuVAzcW9R0P3uVB4+23LhGL4LvaYmVfOYeCaOcofN4fLOfna4alXP1EH2P+S3CzTN/GtC31cUHocdFpsp3kP+AQX0BrDeT1fQ3FHv9BUZpzy1weMmxZ8eyY40Ekc2OVhAgv+MIGFANQ9U7Vb3eUonx6m9zlh3WPRtQqUWH9XJbamT5YjLL5kj2oVP9U5wquz5TMowyvEfRxgqzQ99AegUTQc27SkmfkYZy40/nyrNvZZp6r3V6/GMBKb7kvEDgWgHuaygvzlV9pl1hbVGkfNnSe3gwPjNUKR4MEaZFP7ICk7fpoTB/zN+WXldpCR1gklX87DDXbN7Ffdayz/JT6ZP1JWROluCjA70ztu02cDqUGhYl085/q+vQ8ty8Y1cb86H5CMue2Y5udYAudxvhXpQN+DW5nQr8PuIcKUF5D1hS/1Ud7UHHgsPi5WM4Pi+6tCNSkZl+Mo4bQfrp9fvHp+VLAP8ArA18s3iLd3Tx8VoEruyhjjUlYyniFPQsbhqyGzmjTpNoK0+r3AzxOb/skG01QntJKVJqg8f5R5rBWEGYw3u7ep3vLnGlBi0RBItd3qhG6DsT4htYd2a7+Kh5p1SY9Po/Ea+AcyP60HhgKbd53spbaV5EuyVWNoMXq4ddnvw/7vi0WKJIGz6fHK1FfqOcBunbzTl8pt6Ac/kWHDFvaTg1tU6B6wK/NRqu0+vrGL24uwj0TV831cPqdg6qrVF5hjTF6ojhoRF4cp7faXqcHQrcJo5sUNwtH6R+44lOpznBzKs23y6kT1ggwMV0FRDV4Sy3ObaKH9xicZWOycG37GvvR+GDDUquuuQ5sTsCMGakUOPkHrFs1G5Nd/DQX0DgVHelE9TERrjyGiCkhWvmsH4pMyX6lECvxtan/Ip34KOAGzYyzd0jBF3uf29h6f1L7rw80t+1a+poZDRMGVn460tRRU5Jz9DlnR2/toCAvF+rmMGKmhWXK/JapE2ZlRhZEa8Efo1FLXnE4J+MZUCWYquOOj4qF5mD5/ptYU1lLfIKSbR8E6TFhwchwZhnSE6kd4BRsWZcXd/xmAoPNogfAnZtIuRkKZfUOhq63ZW1ZX0FGIMIXP+Bg6RwBKDbbaWj8kiSODFVx5PP1mJqvgRYEW3yzmKMhijMwU389nA5NlZsW8JeDjKkEATZrnxUA8dLAxedCgypusGqEG+hASxwFPYwQnkSMMn/qBmRb59Rxr9GkFc/xecMCp7nVR43UVe7tYOLMwm5G3yMC6EDFOm/VnLm8S9fcnQ8+s9oLHKqX8Yi+tv0OV0XOjztuQ8R8Rsu5iGXdCVudjUJCO446Aa6qmaL8mBFRGlYZ/5B0oTPydAUwOR+2y25TZ6OGV1gPHlVsoMXu+HtWIEOhv73Hw/qyRoUXGd7rEdfizCxyZieG7X39z+FiW6sfkz0Gkzp6mcCR5gOE3N1hzosTY4dVgFOfkLE0YSzO0ZIaBeVferYfeqhR0v5s6YZRx8O97urmWCik24kPt/eQYPXgYS61teFlmLCMf4JNLZpm+Hec+zprCHA2RAdmXF6O/BefMaz33FW+5zDnnJEkNyRHHm7yQYfCObp7PNwMNJuQnxMTm3dK9xVx+wXgehEYu5b+Uv5OSwBkpJkgrcNuexrw1RF4gRa/g7tVPUwVNnmmaJkSbblbaMC+LZGvy24lY9u+UzxwqAQ4X3xhuqdcD6nk74CN6dVYS5PUT8e17eU+J1xKMTR9oWn8g0CY9XvZ9KqhurEm5pIqfWwt5NrWc9eE9T+SC9odNnoKoVjY2BDO0utOCWdmMzm//oXx8CAL9rN45Q2IxC6ZCLZA733uFp72JDm85MF+4MmGbW9BhGQN025m8NnV6qpxB5oNkpqh1owo3o0iEoKgsvZ1uqxHhUo9lauwfm0SLt9NNwOHneaZyqTcn/k+kih3zYewxtP1A6a7If+UWH8lGubxPZPwOy5Yzw5/7e3cvGj7hq+yCVWeU59MpN+ZGgAi89uvxAP0rXuIsgGUZ+q/8g8hD89294DexmtJCyb52nJtQpt7pjZF7mwMZ137E4TUMCBIUgw4/GaGFo5hndAzwbd5nRgdQMllZaq2I6uuhRpCc9LwW+BE1gOnCt8x5tdyPr8cAdaJ8c8r/PfHR4IzO5Kswe9Payvj2aC4MX67fU9vXJ7rPohrQqZi6KnSP0TDgbRk9kUPeJdizAFCQsgejclNpQShth7w/c7yfU+zf8fyS2N9a3/K7OQ6N+zSCqjxPpLQabY8YrSQuKuP3ctczgcDweq906k+/8/jZg3kiJpzSbJs36hN3oW47ZzqB/rBLi4KXRJKWIEkJpdntP+d6+3ctfohTkK9tNxRZojf1Norbf1J3k9biDoIVw8Y9iqIs67r1LcY0QBSp1wUWX/v6f2aFVeHL3VrTy7ZCPhjaYYoq9hG4tiw2hav9G8JXF3unvtXLD1MO//dUsngkpnm4DBQttYPUIIiO0P1B16d6NEmDJe85+gAeEjhuw+mpnd51qP7nX7HyoggeZf2d56WCCa2x8iep/Dp2nK7x193+t+nm6VV/dfv2khJS/32SWo61UlKMOPnl9w79txls3gfz0rA9p/3Y1v+ctVYcq2UljHCSAN72f85Tj+/pKH81oMNLFkj95/S2Tce2XmIfIyZ+fNg7/tczn9/68i738GWYmbP+wd9/zHcfysNrDREjiuexj2j9txnu6V8z3F6iPo/9x8S4z8aMcRTgBVo0Z7UuatLH/8dlCDD+XyT6Hy9I/9frL0nq+el/vv4SIf7rhf9fNiNA//VC/583IwAA+4K3Sf/PW07neKzUb5aD3/hv</diagram></mxfile>
|
2111.01177/main_diagram/main_diagram.pdf
ADDED
|
Binary file (44 kB). View file
|
|
|
2111.01177/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,23 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Modern machine learning (ML) algorithms and their practical applications (e.g. recommender systems [@gomezuribe2016recsys], personalized medicine [@ho2020medicine], face recognition [@wang2020deep], speech synthesis [@oord2016wavenet], etc.) have become increasingly data hungry and the use of personal data is often a necessity. Consequently, the importance of privacy protection has become apparent to both the public and academia.
|
| 4 |
+
|
| 5 |
+
Differential privacy (DP) is a rigorous definition of privacy that quantifies the amount of information leaked by a user, participating in a data release [@dwork2006calibrating; @dwork2014diffprivacy]. The degree of privacy protection is represented by the privacy budget. DP was originally designed for answering queries to statistical databases. In a typical setting, a data analyst (party wanting to use data; e.g. a healthcare company) sends a query to a data curator (party in charge of safekeeping the database; e.g. a hospital), who makes the query on the database and replies with a semi-random answer that preserves privacy. Responding to each new query incurs a privacy cost. If the analyst has multiple queries, the curator must subdivide the privacy budget to spend on each query. Once the budget is depleted, the curator can no longer respond to queries, preventing the analyst from performing new, unanticipated tasks with the database.
|
| 6 |
+
|
| 7 |
+
Generative models can be applied as a general and flexible data-sharing medium [@xie2018dpgan; @Augenstein2020Generative], sidestepping the above problems. In this scenario, the curator first encodes private data into a generative model; then, the model is shared with the analyst, who can use it to synthesize similar yet different data from the training data. This data can be used in any way desired, such as for data analysis or to train specific ML models. Unanticipated novel tasks can be accommodated without repeatedly interacting with the curator, since the analyst can easily generate additional synthetic data as required.
|
| 8 |
+
|
| 9 |
+
Furthermore, it has been observed that generative models can reveal critical information about their training data [@webster2021person; @hayes2019logan]. For example, Webster et al. [@webster2021person] found that modern GANs trained on images of faces produce examples that greatly resemble their training data, thereby leaking private information. Hence, the generative model must be learnt with privacy constraints to protect the privacy of individuals contributing to the database.
|
| 10 |
+
|
| 11 |
+
Differentially private learning of generative models has been studied mostly using generative adversarial networks (GANs) [@xie2018dpgan; @frigerio2019dpgan; @yoon2018pategan; @chen2020dpwgan; @wang2021datalens]. While GANs in the non-private setting can synthesize complex data such as high definition images [@brock2018biggan; @karras2020analyzing], their application in the private setting is challenging. This is in part because GANs suffer from training instabilities [@arjovsky2017towards; @mescheder2018ganconvergence], which can be exacerbated by adding noise to the GAN's gradients during training, a common technique to implement DP. Hence, GANs typically require careful hyperparameter tuning. This goes against the principle of privacy, where repeated access to data need to be avoided [@NIPS2013_5014].
|
| 12 |
+
|
| 13 |
+
In this paper, we propose *DP-Sinkhorn*, a novel method to train differentially private generative models using a semi-debiased Sinkhorn loss. DP-Sinkhorn is based on the framework of optimal transport (OT), where the problem of learning a generative model is framed as minimizing the optimal transport distance, a type of Wasserstein distance, between the generator-induced distribution and the real data distribution [@bousquet2017vegan; @peyre2019ot]. DP-Sinkhorn approximates the exact OT distance in the primal space using the Sinkhorn iteration method [@cuturi2013sinkhorn]. Furthermore, we propose a novel semi-debiased Sinkhorn loss to optimally control the bias-variance trade-off when estimating gradients of this OT distance in the privacy preserving setting. Since our approach does not rely on adversarial components, it avoids any training instabilities and removes the need for early stopping (stopping before catastrophic divergence of GANs, as done, for example, in [@brock2018biggan]). This makes our method easy to train and deploy in practice. To the best of our knowledge, DP-Sinkhorn is the first fully OT-based approach for differentially private generative modeling.
|
| 14 |
+
|
| 15 |
+
In summary, we make the following contributions: (i) We propose DP-Sinkhorn, a flexible and robust optimal transport-based framework for training differentially private generative models. (ii) We demonstrate a novel technique to finely control the bias-variance trade-off of gradient estimates when using the Sinkhorn loss. (iii) Benefiting from these technical innovations, we achieve state-of-the-art performance on widely used image modeling benchmarks for varying privacy budgets, both in terms of image quality (as measured by FID) and downstream image classification accuracy. Finally, we present informative RGB images generated under strict differential privacy without the use of public data, with image quality surpassing that of concurrent works.
|
| 16 |
+
|
| 17 |
+
::: ack
|
| 18 |
+
This work was funded by NVIDIA. Tianshi Cao and Alex Bie acknowledge additional revenue from Vector Scholarships in Artificial Intelligence, which are not in direct support of this work.
|
| 19 |
+
:::
|
| 20 |
+
|
| 21 |
+
plus 1mu
|
| 22 |
+
|
| 23 |
+
[^1]: Work done during internship at NVIDIA.
|
2111.14658/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2111.14658/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,138 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
The availability and affordability of 3D sensors are rapidly increasing. As a result, the point cloud --- one of the most widely-employed representations of 3D objects --- is used extensively in computer vision applications such as autonomous driving, medical robots, and virtual reality [@{qi2017pointnet++}; @dometios2017real; @blanc2020genuage]. The industrial demand has prompted a strong call for exploiting the underlying geometric information in points, fuelling deep learning models that rely on learning effective local point features [@{qi2017pointnet++}; @wang2019dynamic; @wang2019graph; @li2019deepgcns].
|
| 4 |
+
|
| 5 |
+
Local feature learning schemes [@{qi2017pointnet++}; @xiang2021walk; @wang2019dynamic] typically consist of two main steps, namely *point grouping* and *feature aggregation*. Point grouping gathers the neighbors of key points, which are encoded and fused in the subsequent feature aggregation.
|
| 6 |
+
|
| 7 |
+
{#fig: three groupings width="80%"}
|
| 8 |
+
|
| 9 |
+
Unlike grid data, points are unordered and unstructured, posing great challenges to the processing; indeed, standard convolutional neural networks (CNNs) [@he2016deep; @tan2019efficientnet] only work with grid data in a regular *view*, or neighborhood. Point grouping and feature aggregation are often designed to imitate the regular view, or fixed neighborhood size and structure, that we know from convolutions on images. This imitation imposes inductive biases on the point cloud structure.
|
| 10 |
+
|
| 11 |
+
[1](#fig: three groupings){reference-type="ref+label" reference="fig: three groupings"} illustrates problems associated with these inductive biases for the two most common regular point groupings, namely the $k$ nearest neighbor (KNN) grouping [@fix1952discriminatory] and ball query [@{qi2017pointnet++}]. The KNN grouping fixes its view to the $k$ nearest neighbors of each key point. As shown in [1](#fig: three groupings){reference-type="ref+label" reference="fig: three groupings"}a), the resulting groupings are sensitive to noise and outliers, which are automatically grouped together with far away points, sometimes belonging to the object of interest, strongly affecting their downstream features.
|
| 12 |
+
|
| 13 |
+
Ball query, on the other hand, constrains its view to a pre-defined constant radius ball. We see from ([1](#fig: three groupings){reference-type="ref+label" reference="fig: three groupings"}b) that using the same radius, or view, at every location also causes problems: A small radius is needed to avoid grouping noise with far away points as we saw with KNN, but this means that the less dense parts of the airplane, such as its tail, easily become disconnected and can be mistaken for noise.
|
| 14 |
+
|
| 15 |
+
We thus see that the inductive bias imposed by regular views leads to problems. Can we process the unstructured point cloud without regular groupings? **In this paper, we analyze the irregular point cloud with an *irregular view*.** We present a novel graph convolution operator, named Difference Graph Convolution (diffConv), which takes an irregular *view* of the point cloud. In diffConv, we improve ball query with density-dilated neighborhoods where the radius for each point depends on its kernel density. This induces asymmetric neighborhood graphs, and [1](#fig: three groupings){reference-type="ref+label" reference="fig: three groupings"}c) shows how this e.g. helps the plane hold on to its tail.
|
| 16 |
+
|
| 17 |
+
The dilated ball query still treats all the neighbors inside the ball equally. We further adopt masked attention to introduce an additional, task-specific *learned* irregularity to the neighborhood. As illustrated in [1](#fig: three groupings){reference-type="ref+label" reference="fig: three groupings"}d, the points close to the key point get less attention than those further away, indicating that the points near the wings, where contextual differences appear, are paid more attention.
|
| 18 |
+
|
| 19 |
+
- We propose diffConv, a local feature aggregator which does not rely on the regular view constraint.
|
| 20 |
+
|
| 21 |
+
- We present density-dilated ball query, which adapts its view to the local point cloud density, in particular alleviating the influence of noise and outliers.
|
| 22 |
+
|
| 23 |
+
- We are the first to introduce masked attention to local point feature aggregation. In combination with Laplacian smoothing, it allows us to learn further irregularities in our local view, e.g. focusing on contextual feature differences.
|
| 24 |
+
|
| 25 |
+
- We build a hierarchical learning architecture by stacking diffConv layers. Extensive experiments demonstrate that our model performs on par with state-of-the-art methods in noise-free settings, and outperforms state-of-the-art by far in the presence of noise.
|
| 26 |
+
|
| 27 |
+
- As we do not work with a fixed set of neighbors, we aggregate neighboring features via matrix products, giving much faster inference than state-of-the-art.
|
| 28 |
+
|
| 29 |
+
# Method
|
| 30 |
+
|
| 31 |
+
Early works [@su2015multi; @wang2018msnet; @zhang20183d] convert the point cloud to regular 2D or 3D grids (voxels) and leverage the well-established standard image convolutions. Su et al. [@su2015multi] projected the point cloud to multiple views and utilized 2D convolutions with max-pooling to describe the global features of points. MSNet [@wang2018msnet] partitions the point cloud into multi-scale voxels, where 3D convolutions are applied to learn discriminative local features. These methods recast the point cloud to a regular view via the convolution-friendly grid representation. Nevertheless, they suffer from the expensive computation and inevitable loss of geometric detail due to discretization.
|
| 32 |
+
|
| 33 |
+
In contrast to the expensive projection or voxelization, point-based methods [@komarichev2019cnn; @klokov2017escape; @li2018so] process the input point cloud directly and efficiently. The pioneering work PointNet [@qi2017pointnet] learns point features independently with multi-layer perceptrons (MLPs) and aggregates them with max-pooling. Since point-wise MLPs cannot capture local geometric structures, follow-up works obtain local point features via regular views coming from different types of point grouping.
|
| 34 |
+
|
| 35 |
+
The KNN point grouping does not guarantee balanced density [@liu2020closer] and effectively adopts different region scales at different locations of the point cloud [@{qi2017pointnet++}]. It thus comes with an inductive bias that makes it sensitive both to noise and outliers in point clouds, as well as to different local point densities, as also shown in [1](#fig: three groupings){reference-type="ref+label" reference="fig: three groupings"}. Moreover, since KNN queries a relatively small neighborhood in the point-dense regions, the features encoded in subsequent aggregation might not contain enough semantic information to be discriminative. To keep the noise and outliers isolated, some works [@{qi2017pointnet++}; @liu2019relation] adopt ball query with an upper limit of $k$ points. Balls with less than $k$ points are filled by repeating the first-picked point. Neighborhoods with more than $k$ points are randomly subsampled. The upper limit $k$ prevents the neighborhood from having too many neighbors, but does also lead to information loss.
|
| 36 |
+
|
| 37 |
+
PointNet++ [@{qi2017pointnet++}] suggests a hierarchical learning scheme for local point features, including point grouping and feature aggregation. Hierarchically, PointNet++ focuses on a set of subsampled *key points*, which are represented by a neighborhood defined via point grouping according to pre-defined rules. Subsequently, point aggregation encodes and fuses the features of neighboring points to represent the latent shape of the point cloud structure. These point-based local feature learning methods regularize the input point cloud in the spatial or spectral domain. The point grouping in PointNet++ is done via ball query, and PointNet++ thus also inherits the corresponding inductive biases.
|
| 38 |
+
|
| 39 |
+
Graph convolutional neural networks (GCNs) [@kipfsemi; @wu2019simplifying; @defferrard2016convolutional; @zhang2018end] generalize standard convolutions to graph data by considering convolutions in the spectral domain. Some methods [@pan20183dti; @te2018rgcnn; @wang2018local] adopt GCNs for point aggregation. These methods represent the point cloud as an undirected graph for spectral analysis, where neighbors are constrained to a fixed radius ball. Thus, these methods obtain a regular view via spectral analysis, effectively using a form of ball query. It follows that the resulting algorithms also come with the inductive biases associated with ball query. Additionally, GCNs entail extra computations for signal transformation to the spectral domain.
|
| 40 |
+
|
| 41 |
+
Alternative methods find other ways to define a $k$-size kernel for each key point, often based on KNN. In particular, DGCNN [@wang2019dynamic] groups points by their $k$ nearest neighbors in feature space, and then aggregates the feature difference between points and their neighbors with max-pooling. In a similar way, CurveNet [@xiang2021walk] takes guided walks and arguments the KNN neighborhood by hypothetical curves. Other works [@wang2019graph; @tian2021dnet; @yang2020attpnet; @xu2021paconv; @zhao2021point] learn the importance weights of the KNN features to introduce some neighborhood irregularity, but are still confined to the fixed, and sometimes spatially limited, KNN view.
|
| 42 |
+
|
| 43 |
+
Differing from the previous generalizations of CNNs, which all rely on taking a regular view of the point clouds in order to define convolutional operators, we suggest an irregular view of point cloud analysis in this paper.
|
| 44 |
+
|
| 45 |
+
In [3.1](#sec: revisit){reference-type="ref+label" reference="sec: revisit"}, we first revisit the general formulation of local feature learning. Then we propose a flexible convolution operator, namely difference graph convolution in [3.2](#sec: diffPool){reference-type="ref+label" reference="sec: diffPool"}. In [3.3](#sec: DAND){reference-type="ref+label" reference="sec: DAND"} and [3.4](#sec: MA){reference-type="ref+label" reference="sec: MA"}, we enrich the basic diffConv with density-dilated ball query and masked attention. In the last section, by stacking graph convolutions, we present our network architecture for 3D object classification.
|
| 46 |
+
|
| 47 |
+
Given a point cloud consisting of $N$ points $\mathcal{P}=\{p_i| i=1, 2, ..., N\}\in \mathbb{R}^{N \times 3}$, and a data matrix containing their feature vectors $X={[x_1, x_2, x_3, ..., x_N]}^T\in \mathbb{R}^{N \times d}$ for each point $p_i$, local feature extraction can be formulated as: $$\begin{equation}
|
| 48 |
+
g_i = \Lambda(h(p_i, p_j, x_i, x_j)|p_j \in \mathcal{N}(p_i))
|
| 49 |
+
\end{equation}$$ where $g_i$ denotes the learned local feature vector, $\mathcal{N}(p_i)$ refers to the set of neighbors of $p_i$, $h(\cdot)$ is a function encoding both the coordinates $p_i$ of the $i^{th}$ point, and the coordinates $p_j$ of its neighbors, as well as their feature vectors $x_i$ and $x_j$, respectively. Moreover, $\Lambda$ denotes an aggregation function such as MAX or AVG.
|
| 50 |
+
|
| 51 |
+
Most previous works focus on the design of $h(\cdot)$ [@wang2019dynamic; @zhao2021point; @zhou2021adaptive; @wang2019graph]. Among them, the most widely applied method is the edge convolution (edgeConv) appearing in DGCNN [@wang2019dynamic]. In edgeConv, the neighborhood of a point is defined by its KNN in the feature space. The feature difference $x_i - x_j$ is used to indicate the pair-wise local relationship between the point and its neighbor. The relationships are concatenated with the original point features. The concatenated pair is processed by a multi-layer perceptron $l_\theta$ and summarized by a MAX aggregation. The whole process can be described as $$\begin{equation}
|
| 52 |
+
g_i = M\!A\!X(l_\theta(x_i-x_j||x_i)),
|
| 53 |
+
\label{eq: edgeconv}
|
| 54 |
+
\end{equation}$$ where $||$ refers to concatenation, and $p_j \in \mathcal{N}(p_i)$. In this paper, DGCNN is included as a baseline model. In edgeConv, KNN brings convenience to matrix-level operations, such as feature concatenation and linear transformation. One of the challenges we tackled in this paper, is the generalization of edgeConv to the irregular view, and in particular inclusion of feature differences into the convolution.
|
| 55 |
+
|
| 56 |
+
Inspired by GCNs [@kipfsemi] and edgeConv [@wang2019dynamic], we propose the Difference Graph Convolution (diffConv). We present a basic version of diffConv in this subsection, with further improvements described in the following subsections. [4](#fig: network){reference-type="ref+label" reference="fig: network"} top gives an overview of the complete diffConv.
|
| 57 |
+
|
| 58 |
+
By treating each point as a node, we represent the aforementioned point cloud $\mathcal{P}$ as a directed graph $\mathcal{G}=\{X, A\}$, where $A$ is the adjacency matrix, and $X$ refers to point feature vectors. In $\mathcal{G}$, each point is connected to its neighbors. The commonly-adopted neighbor-wise feature difference in edgeConv has been proven to be very effective in capturing the local structure of point clouds [@li2019deepgcns; @wang2019dynamic; @xiang2021walk]. When processing graphs, the Laplacian matrix efficiently measures the difference between nodes and their neighbors. Borrowing from GCNs [@kipfsemi], we apply Laplacian smoothing on the point feature vector by $$\begin{equation}
|
| 59 |
+
S = X - \hat{A}X
|
| 60 |
+
\label{eq: laplacian}
|
| 61 |
+
\end{equation}$$ where ${S=[s_1, s_2, ..., s_N]}^T$ contains the updated feature vectors and $\hat{A}$ is the normalized adjacency matrix. Similar to edgeConv, we define our Difference Graph Convolution as an MLP $l_\theta$ on the combination of $S$ and the original point features $X$ $$\begin{equation}
|
| 62 |
+
G = l_\theta (S || X)
|
| 63 |
+
\label{eq: diffConv}
|
| 64 |
+
\end{equation}$$ where $G={[g_1, g_2, ..., g_N]}^T$ includes the learned feature vectors. Here, $X$ is concatenated to present the global shape information of the point cloud.
|
| 65 |
+
|
| 66 |
+
We argue that our diffConv is an analogy of edgeConv on a more flexible neighborhood. When applying diffConv on KNN grouping, it only differs in the sequence of nonlinear activation and aggregation from edgeConv. As a demonstration, we consider a basic diffConv, with a simple binary adjacency matrix $A$ whose entries indicate connections between points. When the neighborhood is defined by ball query (without an upper limit on the number of neighbors), the connection between $p_i$ and $p_j$, $A_{ij}$, is given by $$\begin{equation}
|
| 67 |
+
A_{ij} = \begin{cases}
|
| 68 |
+
1 & \text{if } {\lVert p_i - p_j\rVert}_2 < r\\
|
| 69 |
+
0 & \text{otherwise}
|
| 70 |
+
\end{cases}
|
| 71 |
+
\label{eq: constr}
|
| 72 |
+
\end{equation}$$ where $r$ is the priory-given ball query search radius. An arithmetic mean to leveraged to normalize the adjacency matrix. According to [\[eq: laplacian\]](#eq: laplacian){reference-type="ref+label" reference="eq: laplacian"}, for $p_j \in \mathcal{N}(p_i)$, the smoothed feature vector $s_i$ calculated by $$\begin{equation}
|
| 73 |
+
s_i = x_i - A\!V\!G(x_j) = A\!V\!G(x_i-x_j)
|
| 74 |
+
\label{eq: laplacian_element}
|
| 75 |
+
\end{equation}$$ is the average of the neighbor-wise feature difference from [\[eq: edgeconv\]](#eq: edgeconv){reference-type="ref+label" reference="eq: edgeconv"}. In line with [\[eq: diffConv\]](#eq: diffConv){reference-type="ref+label" reference="eq: diffConv"}, diffConv for point $p_i$ becomes $$\begin{equation}
|
| 76 |
+
g_i =
|
| 77 |
+
l_\theta \left(A\!V\!G(x_i-x_j)||x_i\right)
|
| 78 |
+
% =
|
| 79 |
+
% l_\theta
|
| 80 |
+
% \left(\frac{1}{K}\sum_{j}^{K}(x_i-x_j||x_i)\right)
|
| 81 |
+
\end{equation}$$ When $l_\theta$ is linear, diffConv can be further decribed by $$\begin{equation}
|
| 82 |
+
g_i = A\!V\!G(l_\theta(x_i-x_j||x_i)).
|
| 83 |
+
\end{equation}$$ Compared with [\[eq: edgeconv\]](#eq: edgeconv){reference-type="ref+label" reference="eq: edgeconv"}, in the case of linear activation, the only difference between diffConv and edgeConv is on the aggregation function. When $l_\theta$ is nonlinear, the feature difference is activated before aggregation, while diffConv operates in reverse order. In contrast to edgeConv, our diffConv is more flexible with respect to the definition of neighborhood - the number of neighbors is no longer fixed.
|
| 84 |
+
|
| 85 |
+
In [\[eq: constr\]](#eq: constr){reference-type="ref+label" reference="eq: constr"}, the point cloud is constructed as an undirected graph, where $A_{ij}=A_{ji}$ for point pair $p_i$ and $p_j$. The directed relations between points and their neighbors are therefore neglected, as illustrated in [1](#fig: three groupings){reference-type="ref+label" reference="fig: three groupings"}b). We relax this to a density-dilated ball query where the search radius for each point is expanded according to its kernel density.
|
| 86 |
+
|
| 87 |
+
![**Left:** Ball query in a point cloud with noise. The blue circles refer to key points, green circles denote neighbors, orange circles refer to neighbor-hoods of the points, and gray circles denote other points in the point cloud. **Right:** Visualizations of the kernel density on some objects from ModelNet40 [@modelnet]. The brighter color indicates higher density. ](density.pdf){#fig: density width="\\linewidth"}
|
| 88 |
+
|
| 89 |
+
Point density, defined as the number of neighbors of each point, reflects the spatial distribution of points. [2](#fig: density){reference-type="ref+label" reference="fig: density"} demonstrates a point cloud with noise. As shown in the figure, under a fixed-radius ball query, contour points of the 3D object (e.g. point $B$) have a lower density than the points from the flat area (e.g. point $A$). Noise generates extreme cases (e.g. point $C$), which can be isolated from the object.
|
| 90 |
+
|
| 91 |
+
As illustrated in [1](#fig: three groupings){reference-type="ref+label" reference="fig: three groupings"}, in KNN, points with low density, such as noise, get a spatially larger neighborhood than others, incorporating them with unrelated features far away. It seems natural, however, that the adversarial effect of noise points on estimated object properties is negatively correlated with their density. Therefore, assigning smaller neighborhoods to low-density points contributes to resisting noise.
|
| 92 |
+
|
| 93 |
+
Xiang et al. [@xiang2021walk] claim that the boundary points provide varying feature differences with neighbors while points from flat areas usually have similar and unrecognizable local feature differences. In other words, to enlarge the variance of feature difference, points with a higher density (points from flat areas) ought to have larger local regions. The KNN grouping queries small neighborhoods for points with high density. Our improved ball query enlarges the neighborhood to include more contextually different neighbors, giving access to a broader range of information.
|
| 94 |
+
|
| 95 |
+
Instead of counting the number of neighbors, which is discrete and computationally expensive, we adopt kernel density estimation [@turlach1993bandwidth] to describe the spatial distribution of points. For each point $p_i$, its kernel density $d_i$ is estimated by a Gaussian kernel $$\begin{equation}
|
| 96 |
+
d_i = \frac{1}{Nh}\sum_{j=1}^{N}\frac{1}{\sqrt{2\pi}}e^{-\frac{{\Vert p_i - p_j\Vert}_2^2}{2h^2}}
|
| 97 |
+
\end{equation}$$ where $h$ is a parameter controlling the bandwidth. The density $d_i$ indicates the probability of point $p_i$ to be located in flat areas of the object, i.e., the degree of dilation. [2](#fig: density){reference-type="ref+label" reference="fig: density"} shows the kernel density of various objects, where points from flat areas have a higher density. This is in accordance with our former analysis.
|
| 98 |
+
|
| 99 |
+
Based on the estimated density, we dilate the searching radius from [\[eq: constr\]](#eq: constr){reference-type="ref+label" reference="eq: constr"} in a soft way to $$\begin{equation}
|
| 100 |
+
r_i = \sqrt{r^2 (1+\hat{d}_i)}
|
| 101 |
+
\label{eq: DAND}
|
| 102 |
+
\end{equation}$$ where the square and root operations are applied to slow down the dilation speed, $\hat{d_i}$ denotes the normalized kernel density within 0 and 1 by dividing the largest density. The dilated search radius varies from point to point, resulting in a directed graph. Since the search radius is enlarged to different degrees, the message propagation on the graph is boosted, accelerating the long-range information flow. Finally, our method is intuitively robust to noise, which is also demonstrated by our experiments in [4.2.0.1](#sec: ablation){reference-type="ref+label" reference="sec: ablation"}.
|
| 103 |
+
|
| 104 |
+
The dilated ball query still treats all the neighbors inside the ball equally. We further introduce an additional, task-specific learned irregularity to the neighborhood by attention mechanism [@gehring2016convolutional; @vaswani2017attention]. Different from other attention-based methods [@zhao2021point; @wang2019graph], we employ the masking mechanism in self-attention [@vaswani2017attention] to handle the no-longer-fixed neighborhood. We call this learning scheme masked attention. In a transformer built by self-attention [@vaswani2017attention] [@velikovi2017graph], there are two kinds of masks: the padding mask in the encoder is exploited to address sequences with different lengths and the sequence mask in the decoder is used to prevent ground truth leakage in sequence prediction tasks. In contrast, we employ a neighborhood mask in our masked attention to learn the local features of points. It works in the encoder while playing a different role than the padding mask. [3](#fig:maskedattention){reference-type="ref+label" reference="fig:maskedattention"} demonstrates the proposed masked attention. It includes three steps, the calculation of edge weight for each pair of connected points on the graph, local normalization, and balanced renormalization.
|
| 105 |
+
|
| 106 |
+
First, we revise the adjacency matrix $A$ in [\[eq: constr\]](#eq: constr){reference-type="ref+label" reference="eq: constr"} to $$\begin{equation}
|
| 107 |
+
A_{ij} = \begin{cases}
|
| 108 |
+
l_\phi(x_i||p_i){l_\psi(x_j||p_j)}^T & \text{if } {\lVert p_i - p_j\rVert}_2 < r_i\\
|
| 109 |
+
-\infty & \text{otherwise}
|
| 110 |
+
\end{cases}
|
| 111 |
+
\label{eq: ADJ}
|
| 112 |
+
\end{equation}$$ where $l_\phi$ and $l_\psi$ are two MLPs mapping the input to a given dimension $d_k$. The MLPs are employed to learn the latent relationship and alignment between $p_i$ and $p_j$ in the feature space. The MLPs encode both point features and coordinates, as we expect them to recognize both the semantic and geometric information. In the implementation, $-\infty$ is approximated by $-{10}^{-9}$, which is orders of magnitude smaller than the computed attention score.
|
| 113 |
+
|
| 114 |
+
The adjacency matrix is then normalized by softmax $$\begin{equation}
|
| 115 |
+
\tilde{A}_{i,j} = \frac{e^{A_{i,j}}}{\sum_{k}^{N}e^{A_{i,k}}}.
|
| 116 |
+
\label{eq: masked_norm}
|
| 117 |
+
\end{equation}$$ Since $e^{-{10}^{-9}} \approx 0$, points outside the neighborhood are automatically masked. Hence, [\[eq: masked_norm\]](#eq: masked_norm){reference-type="ref+label" reference="eq: masked_norm"} becomes the softmax normalization in neighborhoods. With the masking mechanism, we normalize the attention score locally with only matrix-wise operations instead of expensive iterations. In each local region, neighbor features are aggregated dynamically and selectively.
|
| 118 |
+
|
| 119 |
+
![**Structure of our masked attention.** Specifically, point attributes and key point attributes denote the concatenation of point coordinates $P$ and feature vectors $X$ of raw points and key points respectively, masking denotes the conditional statements in [\[eq: ADJ\]](#eq: ADJ){reference-type="ref+label" reference="eq: ADJ"}, which takes the adaptive radius $r_i$ from density-dilated ball query. In our implementation, $d_k$ was set to $\frac{(d+3)}{4}$. ](masked_attention.png){#fig:maskedattention width="\\linewidth"}
|
| 120 |
+
|
| 121 |
+
In self-attention [@vaswani2017attention], the dot product in [\[eq: ADJ\]](#eq: ADJ){reference-type="ref+label" reference="eq: ADJ"} is scaled by $\sqrt{d_k}$ before the softmax normalization, in order to prevent the product from getting extreme values as $d_k$ increases. Here, we refine the scaling by a balanced renormalization of the attention scores $\tilde{A}$ from [\[eq: masked_norm\]](#eq: masked_norm){reference-type="ref+label" reference="eq: masked_norm"}. We apply the $l_1$-norm twice on the square root of the former attention score in different dimensions by $$\begin{equation}
|
| 122 |
+
\overline{A}_{ij} = \frac{\sqrt{\tilde{A}_{ij}}}{\sum_{k}^{N}\sqrt{\tilde{A}_{kj}}}
|
| 123 |
+
\label{eq: BR1}
|
| 124 |
+
\end{equation}$$ and $$\begin{equation}
|
| 125 |
+
\hat{A}_{ij} = \frac{\overline{A}_{ij}}{\sum_k^N\overline{A}_{ik}}
|
| 126 |
+
\label{eq: BR2}
|
| 127 |
+
\end{equation}$$ Here, the square root reduces the variation in attention scores, to prevent overly attention on a few points. Considering the matrix product in [\[eq: laplacian\]](#eq: laplacian){reference-type="ref+label" reference="eq: laplacian"}, $l_1$-norm is to balance the contribution of feature channels and neighbor points respectively. The experimental results in [4.2.0.1](#sec: ablation){reference-type="ref+label" reference="sec: ablation"} illustrate the effectiveness of the proposed renormalization strategy.
|
| 128 |
+
|
| 129 |
+
{reference-type="ref+label" reference="sec: DAND"}, dotted lines indicate only part of the input is forwarded to the next step ([\[PE\]](#PE){reference-type="ref+label" reference="PE"}).](network.png){#fig: network width="\\linewidth"}
|
| 130 |
+
|
| 131 |
+
Local spatial information is defined as the coordinate difference between points and their neighbors. Many prior works have demonstrated the importance of local spatial information in point cloud analysis [@wang2019graph; @liu2019relation; @zhou2021adaptive]. As a consequence, we encode the position of points as supplement features in diffConv. Taking the normalized adjacency matrix from [\[eq: masked_norm\]](#eq: masked_norm){reference-type="ref+label" reference="eq: masked_norm"}, we modify [\[eq: diffConv\]](#eq: diffConv){reference-type="ref+label" reference="eq: diffConv"} as follows $$\begin{equation}
|
| 132 |
+
G = \sigma(l_\theta (L || X) + l_\pi (P||\hat{A}P||P-\hat{A}P))
|
| 133 |
+
\label{PE}
|
| 134 |
+
\end{equation}$$ where $\sigma$ is a nonlinear activation function, $P={[p_1, p_2, p_3, ..., p_N]}^T\in \mathbb{R}^{N \times 3}$ contains the 3D coordinates of points in $\mathcal{P}$, and $l_\pi$ is an MLP mapping the 9-D input positional embeddings to the same dimension as the updated point features from $l_\theta$.
|
| 135 |
+
|
| 136 |
+
[4](#fig: network){reference-type="ref+label" reference="fig: network"} shows our network for 3D object classification, where diffConv is performed hierarchically to capture multi-scale point features and avoid redundant computation.
|
| 137 |
+
|
| 138 |
+
Specifically, input point coordinates are initially encoded to a higher dimension by an MLP. Then, point features are grouped and aggregated through four diffConvs. In contrast to the widely-adopted furthest point sampling [@{qi2017pointnet++}], we select key points by random sampling, which has recently been proved efficient and effective [@hu2020randla]. We follow the global aggregation scheme of DGCNN [@wang2019dynamic], where the learned local features are pooled by a max-pooling and an average-pooling (avg-pooling) respectively. The pooled features are concatenated and processed by MLPs with dropout.
|
2112.07658/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2021-11-12T21:43:41.316Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.81 Safari/537.36" etag="wJm1OMrchy0EZ512AEGN" version="15.7.0" type="google"><diagram name="final_act" id="x4srLugJr2X2NpuGbCdN">7V1tk9q6Ff41zLQfdseyJb98zL4kt2luett0bj9mDDbgBjA13s1ufn0lbBksySB5bUuY3clkQRjhfZ6jo/Omw8S5X798ysLt8vc0ilcT24peJs7DxLYDG+H/ycBrMeABpxhYZElUDIHDwLfkV1wOWuXoUxLFu9qFeZqu8mRbH5ylm008y4ux8r1hlqU/d7Whebqqf+o2XMS1K8jAt1m4irnL/pNE+bIY9dHR1b/FyWJZfjIAVvnKOqQXl1PslmGU/jz6LOdx4txnaZoXj9Yv9/GKYEdxef0jfPzn7tfX/26tP798u/Hif3joppj9o8pbqj8hizd5t1PbxdTP4eqpxGtiQ/xv+R0UD8o/PX+leGbp0yaKyZzWxLn7uUzy+Ns2nJFXf2IBwmPLfL3CzwB+OE9Wq/t0lWb79zpzfxbPZnh8l2fpj/jolamPICITSv6pJSTPcZbHL0cslX/6pzhdx3n2ii95oYQX7yil2EZB8fznQSackvjlkTg45QeFpRQuqokPSOMHJdgKwDsC4DtEOkKxH0ER0r49dVx3QKR9zUjDa0HaoRpQF9LoPNKzp+x5DzRBMt5EH4iGx09nq3C3S2Z1pOu0ROFuWb21AJsqdIenJQaYGE9ES+B6TqhOSxwt4pOkHKGOBKjTsSxehXnyXN+dRFSUn/BHmuDbqzgHlGRKOt1n6RS79CmbxeW7jrcDdiKvPlE1MZ0oD7NFnHMTYcbC16PLtuSCnfwNA7+2TeEHxYwHsaswbS+JHieJj+tpHEXJZkFMmWQdb3ZJuuHEEy+8vC6DdfHZpJuYkbVyKFwliw0RZCxMMR6/I8s4wWbIh/KFdRJF5GOE6qUu6fN0k5eGFED7V3MsMimZ5SaoLijvWtY4UNYuANR4C1xOubgi5cJIUWfKxT+vXBYYxu0b4agM0XBKp7VOw8RoYcQrYSjAiVtunQEVcED9Ows3u3marbFg2tZ0lc5+nNgCwfktkCyThyTDKBVSucqJxC/TLPmFJTNcUTmuq+TZLEbzuUglO64TOJG8SvYbyDuplIdlga4e4+VVZAkPDBXvhIxOYoG8+tXHg4RPomaysRuZmqwbanGB1haXf2aiBourK6MISHhCWlSSD8/toMgVLAS/t4UgcmTcFbEMd9twU0PM/d8Tif/sRf1mt5f1D/gCgLYvhxfxo0Xxu4itzCbosYqvFBPjGy3mpleaGXaBDZyfXJxCJ7Q38tzzUv4mMOdzWwxm5E5dpOJYyoP5WgdNG7a8XzUabF3d2Eo4OJeKbaAbW94nuifWy4TkIn7EpsYBxH5/YU8FbcMA8uSVr954t3WPwbXowLGpiobkU+AyvM2UaUJF2Za5gXVjBgiCslWK6Rgstz+wRHb9sSGCpe3Wakj3mLES2joSlZwo2StCfnoUZpFdTsj4+n3i3c0m3gMeZIzGa2Wp2RAaljS6qt995ZqLWwFOtZ8V3HrMLLLucqURNbnLtnQEj01H2W/P+SnvOy6qoyW0uWAw4L5D4ZIzaCVieMcrxCeg03oMQeRuHsw9Mu+JyF1rzmz5WN5LXQ1Vq4KjRrgog96oUao+kKGmr4CDOtZsXhPxtqotWAb97RT87v5buMqLTOQ2S6fhNFkl+auhe3pvPNmWe46nQfOLNh/0uy9U8D5RXETpphkN0D0nIbmLPMe4mJtJ7o09LzCLPImg39vMMfXyEr5EpVbBokiNIQaeE7DBgGpA1b7zzk/Vt4UnEc7UkhCx2T1MYMsJ4i39pQbtvoOT3RkMXgMJ5mQobD4a2S2YnUV6FcA0JENBJx4jtrozFI6EQ3yp2OrOUDgiZ/kQo7XHGPdTYIkG1wV5h0FZupqqfQcBuQXR2/kIU4tVWNtMVi0DGnTpHipRuUqhO5r0hhm2mtNdNUlvYthzNUl3K14BTEVbrTdse64m0Ymtq1t99uyw6cQ2sDVjK+G/vef+bIedom3ij5uo57AQlPAh9dTJUi+BhoUESmbQsBDs2yPszNSoODU3LARFLqCR7rUCmIaEhWDPjptObHWHhaCEp3ap2OoOC8Fm146EhdAYw0IKLBkSFoIiJxEe/RsTPeqJflA356DFL6pBk8WQ9zv31eDJHCNZJPPxjYTRCS2mWK0kSCBLHC3svKiskaKAOZkm2lOGLVqGIv+1rMGgJRgfFossXoTH9Rn4s6aHI1RszUaxHPPvKZkQ3e+e1qTANtkT5uGRR0j+vL1mTfLi18S++wu4EV6MDhf/NT+pjruUm3j/Y4jc+AFTw8M7JDaCvNj06JH07ptLF4JY+58O+DDEvwcWTVzTMka2PlH6ICw7U3WLA3n49OO4DTsfcZW96uJm+8P4endtJIo1HDiz3zkrtC1TVKWdNlFU40DbKD0Y9X2UHqw1hbSmE2PEdnonjOxggOmiYvGmz7CUSTRf0xK0pnqbRgJ9jwNqWNcCNXnrpOflqI9DVjJi8GlIJEoKV4chAT0MCcZqpilwZMpZSGRqazY7YIqafEe37hG5pwfdY+91zyhNWdSqZ5vt3NoI+cAJkAN9n1aMDCLWrshJLEJKUfJco4b2+SEM3ZRgky5Aq3ieN3UBOugzdL9bh/h+vDv8kHSdxo46eWLfFdoO/y8UDz68VQXB9jfY0EnoWsSIvnrr2l5guYHnehACvx6xggKN4N+6FnQRAPs3DipypjYNhMzpYoe27NKmSd0mv7JYKs5+qQRj1KSuvLdiiiYVuZMda9JcSZMS8QC3fAu20epNBaGR0psio1Oj3pQoQbjCijyuVzibdmnbuZKbqOd4vfveWX6Ao73MgXwHgHbS4qD6RECyflO1szx7w8AHjPD10FnelTgQoMVEc7x6oE2Qix22nbDLR3JG19a5kgaD2zq7poZnWIkVVZ0MDBUfnhmfyF5A83yaen63505veK13aPfMRD3bc55EnOMK+YWApcW69YB1+IHt6IZQM92iWI0JOxBEZ+vXhj1Q4/V9TqGzAzUVp+YeqPEu5mCCApiGHKjxJDzyS8VW94Ea72K+wUAdW90Harym8oKiktsZY8JAgSVDDtR4prqsrMEge86/v+YfHu+yUoFuEmZDDIhWfuigB/19CTfUjCYKCmAa0vzD7/nouE5sdTf/8CX8rUvFVnfzD//9S/JkQhcOheWtsQpuop5jFb7IaTycVxilgahcGwChxR4zCbhlOey3AzcdlMdjX0nJx6EyxHbDNYGuqPnwHkZZRqxMqWOxXybPK9phGe29pfdFKlouS9+2yxKbAxi6y5Iv0f3tvabjrdty4NdJbpsxQrA+UV81HewND1LT4ZsabkC0ctKUmg7/ChLk/gUkyAOJyMQVbo6c9mit7tCZiXreHIP3BLmIFtdiaekmQe46mumWCNho2YBcZtvXniAPLiZBXnFqboI8uJgEuQKYhiTIg4tJkKtjqztBHlxMglwdW90J8uB0gnyULQ0VWDIkQR6IPNZDlHqULCmHNF3HNixKHTQl6yeNUerFRUapyfMjtfnxI4SWgu2knpAI6g2UtEevq8PQ/Xlw3ObUEmVDfDsuVNk2sM367kMHtoHFO+/f8nR7AUsXalm6sE6XoCXDwEvXVG/co00PTAkHA4t3x0cXDz7Ig8EBYWBJ+PLXGDEM6hmU1hFhD3mnJ+p9V5EIKFwhwb7F8GIFnYSEfbbZMbAG5vu9/kHIi8unAKDTAd808HKcWrAHppyPvfxtHWLYbSvHe9iG498MCzJL87Dcm0lisiZoAX1a3jR4o83UbBuhW7aACdwKojQBL5n9WZJA5O+bYElC+j4q7fTEtbZ+XpVp1HgyhzT0uqwoiCyrlZgY3Je1KvgRNPHabcNNjY/WXbxm0l28BDJRdfEq7meMbbxURIWGqqnOMaVdLADNDZhnVZdLa5QrXZU822asW493+YfWAxJ1m3pqFZjvmTJhT2tK09BkGpH0cR4pqMTE6D1NZJ/p3dMKmYDXtKcpiIqGPSxZhMuX+Y+Pf+YP839Nw78/f/n8fCOyVQuqCAiUpy/xAnvoRyQev7iXHuN5FGQELMcCVnVd734ddcIr6m2O+77SA0LqeTP4cT2NoyjZLPBwlKzjzW7/jXTGc4saffe+SQV179MBw6V8/B/fvy8/vnz+NV/Pn2786cPn/A8Bp7q3AcYIvoZtQNk2Zk5C2YLvFe1tVxCKkc2JETZh82SDn5lPlUjT+5bda+6XO3Q4YO5XyKCpX6JU+f1sTcMZoOi3r3X/RSZNx6RHftj2pNQouTqDyjXP1mg10xsZopvJcFam8IZN7TINmMQB4lMsfekhIU7N3xY2uzo11KoX9aBSzYfDrksNyTNkiBoyNX/JdkHQrYaAKCB0rXqoVTZzWH+fd/ivSxHJU2SIIgK8b22GJmJPHtAGkdo0UdNXN1+jJqqkxmBVdO2umQJHpugiU6NEbsA4Z7p1UVOhx1XqIvOjRID3pq9MFxkUJ8JPs5Qkk6rXPmEElr+nUUyu+D8=</diagram></mxfile>
|
2112.07658/main_diagram/main_diagram.pdf
ADDED
|
Binary file (50.4 kB). View file
|
|
|
2112.07658/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,21 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Transformers have emerged as a popular class of neural network architecture that computes network outputs using highly expressive attention mechanisms. Originated from the natural language processing (NLP) community, they have been shown effective in solving a wide range of problems in NLP, such as machine translation, representation learning, and question answering [\[2,](#page-8-1) [9,](#page-8-2) [22,](#page-8-3) [35,](#page-9-0) [44\]](#page-9-1). Recently, vision transformers have gained an increasing popularity in the vision community and they have been successfully applied to a broad range of vision applications, such as image classification [\[11,](#page-8-4) [16,](#page-8-5) [32,](#page-8-6) [43,](#page-9-2) [48,](#page-9-3) [55\]](#page-9-4), object detection [\[3,](#page-8-7) [7,](#page-8-8) [39\]](#page-9-5), image generation [\[20,](#page-8-9)[21\]](#page-8-10), and semantic segmentation [\[28,](#page-8-11)[52\]](#page-9-6). The most popular paradigm remains when vision transformers form tokens via splitting an image into a series of ordered patches and perform inter-/intra-calculations between tokens to solve the underlying task. Processing an image with vision transformers remains computationally expensive, primarily due to the quadratic number of interactions between tokens [\[36,](#page-9-7)[40,](#page-9-8)[53\]](#page-9-9). Therefore, deploying vision transformers on data processing clusters or edge devices is challenging amid significant computational and memory resources.
|
| 4 |
+
|
| 5 |
+
The main focus of this paper is to study how to automati-
|
| 6 |
+
|
| 7 |
+
cally adjust the compute in visions transformers as a function of the complexity of the input image. Almost all mainstream vision transformers have a fixed cost during inference that is independent from the input. However, the difficulty of a prediction task varies with the complexity of the input image. For example, classifying a car versus a human from a single image with a homogeneous background is relatively simple; while differentiating between different breeds of dogs on a complex background is more challenging. Even within a single image, the patches that contain detailed object features are far more informative compared to those from the background. Inspired by this, we develop a framework that adaptively adjusts the compute used in vision transformers based on the input.
|
| 8 |
+
|
| 9 |
+
The problem of input-dependent inference for neural networks has been studied in prior work. Graves [\[17\]](#page-8-0) proposed adaptive computation time (ACT) to represent the output of the neural module as a mean-field model defined by a halting distribution. Such formulation relaxes the discrete halting problem to a continuous optimization problem that minimizes an upper bound on the total compute. Recently, stochastic methods were also applied to solve this problem, leveraging geometric-modelling of exit distribution to enable early halting of network layers [\[1\]](#page-8-12). Figurnov *et al*. [\[13\]](#page-8-13) proposed a spatial extension of ACT that halts convolutional operations along the spatial cells rather than the residual layers. This approach does not lead to faster inference as high-performance hardware still relies on dense computations. However, we show that the vision transformer's uniform shape and tokenization enable an adaptive computation method to yield a direct speedup on off-the-shelf hardware, surpassing prior work in efficiency-accuracy tradeoff.
|
| 10 |
+
|
| 11 |
+
In this paper, we propose an input-dependent adaptive inference mechanism for vision transformers. A naive approach is to follow ACT, where the computation is halted for all tokens in a residual layer simultaneously. We observe that this approach reduces the compute by a small margin with an undesirable accuracy loss. To resolve this, we propose A-ViT, a spatially adaptive inference mechanism that halts the compute of different tokens at different depths, reserving compute for only discriminative tokens in a dynamic manner. Unlike point-wise ACT within convolutional feature maps [\[13\]](#page-8-13), our spatial halting is directly supported by high-performance hardware since the halted tokens can be efficiently removed from the underlying computation. Moreover, entire halting mechanism can be learnt using existing parameters within the model, without introducing any extra parameters. We also propose a novel approach to target different computational budgets by enforcing a distributional prior on the halting probability. We empirically observe that the depth of the compute is highly correlated with the object semantics, indicating that our model can ignore less relevant background information (see quick examples in Fig. [1](#page-0-0) and
|
| 12 |
+
|
| 13 |
+
more examples in Fig. [3\)](#page-5-0). Our proposed approach significantly cuts down the inference cost – A-ViT improves the throughput of DEIT-Tiny by 62% and DEIT-Small by 38% with only 0.3% accuracy drop on ImageNet1K.
|
| 14 |
+
|
| 15 |
+
Our main contributions are as follows:
|
| 16 |
+
|
| 17 |
+
- We introduce a method for input-dependent inference in vision transformers that allows us to halt the computation for different tokens at different depth.
|
| 18 |
+
- We base learning of adaptive token halting on the existent embedding dimensions in the original architecture and do not require extra parameters or compute for halting.
|
| 19 |
+
- We introduce distributional prior regularization to guide halting towards a specific distribution and average token depth that stabelizes ACT training.
|
| 20 |
+
- We analyze the depth of varying tokens across different images and provide insights into the attention mechanism of vision transformer.
|
| 21 |
+
- We empirically show that the proposed method improves throughput by up to 62% on hardware with minor drop in accuracy.
|
2112.11542/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="Electron" modified="2021-12-16T18:11:01.870Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/15.8.7 Chrome/91.0.4472.164 Electron/13.6.2 Safari/537.36" etag="qOV8Is-NtdJAOYFlIY6T" version="15.8.7" type="device"><diagram name="Framework-V2" id="nvKCqGUWvdu8BxB7rR5r">7V1bc9tGsv41qtp9EGruAzzKznq9G6eSWj9ks28QCUks06IOBcV2fv0BSAAkegbkgMRcANJVqYgQNaL665m+zNfdN/T91+//XKcvT7+s5tnyhqD59xv60w0p/iW8+F/55Mf2CWY43j55XC/m1bPdg8+Lv7LqIaqevi3m2WvrjflqtcwXL+2Hs9XzczbLW8/S9Xr1rf22h9Wy/Vtf0sfqN6Ldg8+zdJkpb/t9Mc+ftk9jInfPP2aLx6f6N2ORbL/zNa3fXC38+pTOV9/2HtF/3ND369Uq33719fv7bFlKr5bLHz+/W8Uo/ffvXz9++rCKf/7+y3//ut0u9qHPjzR/wjp7zk9e+uEuXb2tvi34N/G/n5P/3P3+OZG3mFd/7Z/p8q2SWPXX5j9qEa5Xb8/zrFwG39B3354Wefb5JZ2V3/1WaE3x7Cn/uqy+PU9fn5r3Pqye80olCrWh717z9epLgwMp37FYLt+vlqt18fp59Vy88131gbJ1nn0HGB4RAG5QKfQ5W33N8vWP4ufqVXgFZKXKkuCo0uVvO83gtHrX055WyDiqxJJW6vjYLL8TefFFJfUeCMTkOABHRL6Vai3FG0K54FhmWnmvV3maL1bPxcsEAfkXPzlHsywr37cqftsiLyXFkUVMYixbmBCEFEQE5SoiXIiI8PMx0X5mRsVxUIrj4KX8cvZ2nx3fFvfbPfTpvnmQzr48bnbWr2/5cvGcNftn/eXXRvgoQrz9kGyeGm6u5eLlY60k1TuRskf3NGCePaRvy9wm4HF7E1ISRwgrmOM40WCOI8msQc4nDbk1PCWOeBtRKiKqnqs2EdWbNowHNm1taVOLUmW0vUsKPyxKpCJTqTFVNCbRAOdih0gNjNUZIoUOQWGQHh4yMZvpjNxcJvfIpmoz1lZsxpEOBKYBoX5mAQHqHoEH0oGAuBdcOESAIx4AAsw5AvEs0yNwH3Nm1T9TEOAhIGBgqodFIMNznkkdAomQNHW5B2iiNQWOETDwjwfeA+lcpvo9wARHzCUCLAQEpGsE7vF8/oB0CGAkaWIzcocIkCD2QOwcAZTRTOgQQFmM4tglAkHsgaFzVw4dfAwcfB6Gg1/nOi/DwSfAwY8DcPCJ3ag1MAcfIMBpAO5lLfDLcPAhAnEICDgPcn06+PAU0ud6HCPgPsj16OADBKgMAQHnQa5PBx8iEMQecB7k+nTwAQIkiD1gN8i16eDTBDj4sYyYqYNvz713HrP6dO8RzB7HOggcq7TdmDUw9x4iIKh/BKj7ENejew8QEDgEBJyHuD7dewRvEbWGwDEC7kNcj+49RACFgIDzENenew8QoEHsAechrk/3HiLgdg9oP7G0a4htevcYsNgKR77OybvJ3usF6oEb4s27xwk41VmsgcCtQrtPmvnz7aH8OaLe5e88ZebRs1fkz/3L33nCzKNfD+VPE50JcCt/55wQj169In/mX/7uGSH+fHoofxKA/jvPrXn06BX5+9f/2MChz57nd2VVX/FqtkxfXxeztsiz74v8vzdlvQOvXv1Rvqq+/un7/osf9Yvn+YfFshu07YfI5nWhYC8Aik+/elvPskPv2ysHbEG1X1amAaJ+ts6Wab74s/3pdOhUv+G31aL43N2awEGAsf0Dqp/aYXx8IQQ0JU/Xj1muLLRRlubPPl1/bLOPrEaEAhK6QogIPTApfIaEEoSE0ntI6INI4TEmBABw6j0m8cCj8BkUQgDiAAC4qKgQHkHaxKBjAC4qLAQAUBkAABcVF0IAQtgBFxUYAgCI/x2gmuC/pX9XECjkkOvEDNqDaDqGpMvFY9nRYlbILSuevyulupily7vqG18X83n5a7S47pBXeyPs6u2JqF9XnxjbApChdigR17UiR+Aj1uJ6A76GNq5/LQLG/GC4f3NOaN4pwVsQjNWJacN43Dz0JqCMR8QgejcNvQn4xIKChWyH3kyHsdi0Arkvvnjc9gTZPih3QQt98X9vq/obt9uuE3fFG7B4+b77Zr3Kxyydv9ZLFR92u1r7NxSP935rkOfEXiOdW8YtnufNvq5Vg6varDsOsDVzysiJ58HI83yyI83XcepgqZ7bwxw7HFgIDhN1xscO9pzxYwb5qSmqUsWBDECXCO28R+idP4aep3Nt0iXbetqsWGezFFP4y7/ubt8XP7leLZeFJdGarQ4L1xl3INO4Yy+aSDNWkjmhCjNtTJ7MbdopmAHXqWzDm2olwO0ZKl3uz5o+vFuuZl/OUYWB2vyxG/M2f61ox5ZmgFIYRoguINUdZ9KeahhkJctebS+dQqkalqb39dvRIMLiHByi2vxJnOBIUM35L6xJjBukEY+a6uLE/LFnq8uXf+x/b2etN6/2zHUdhivqvrvVO8Fcb9Wg2wxzwJu91d6yD2OJYYsNcbIhhnZYXcmyJSYmKc/jynLYRztVmawpS1O4VysLZklkLfeQtDunls0r6svGvvoClyqUHLZl7FCYAsD0x97bXso3vNpQKYoMTuxincXLa1cAv6dbhxtpatwntPl3npE8oj8EEbBtaZQkqg/lMtanyOTUH7XUaYhSNzk+Ry114VXqHRVBdqkCZ9KPjki06Yros32USd/sMdGNjslcadOrdrJ2XNTm/q7fKt2oLwAab8sxAO47LFilG/UEgBLvADi/67dLN+oJgC7r5haAeuWp0I36AkC8A+C8vYJdulFPAPxvAOJc/lbZRn1tMIrQ/j/qGw67FHi7Xj0sclN127mPr71km7CPr3Tq9e3ja2+1JuzjAwCaRKA/ANxHuV59fAiAIQHJIgDOo1y/Pj48gjRD0hwD4D7K9erjAwCaSYL+AHDPaPfq40MAvO+AWgMuxckHAHjfANxukGvXq4dVeualwta8eu48aPXq1cPuvE4rhTsQmNhkt34IUKelwh0ITGyyW08EnNYKdyAwscluPU8hp6WSHQhMbLJbPwSw01rJDgQmNtmtJwIh7IGJTXbrh0AAW0A4YI9qmO8WKKEsScB47NvEFn/4FlTylMO0TiQQH13IOx2UxAYZkNFVjnUrEo5jYKl0RRQDEdHrG8yGnHdieaGyEFLupWwT0RODPM1VTU5TE8jhhIVcxmoCF/KgJqe2OLiqyVE1IQDd+OS6FmUpD4pCeilK1XjgqJZgQy1Bw2vJqKqVMSixaQgS59Yqy7rQwZki9at81yiSTaUwhtpaeZMKkDjx0Kh/sGsh60gbpBUHrIUjborhwtASONfkDC1hIFATkoYX9SQGCVJzPyUW+AQbdEwD99VvrzSzq7S3+CW/ZetFIZqyJY/ag8u6casJbVt0D76TH9Z6DsxgbMsMwgid0JN9KmUpWF9l/XjUZZyvTTs277TetAP2mvPftKMpRp72EWfH7uIkiYRMdv9AmGftQOKCRYLvfu+RRkDGechy2XjHJWfg1CMR2/tjYd976yeXLk19bS+z117G1rFFQa8Mpil+Zbo7C2sNZUgyRD7aVpuTQNwW2LeOnRq987qjb9dClrc+rf+Q6aSBDtgUEGLpqFIDRXOglEgTghnHc0pVWOw4VUhRv5xyKBkej3hrQDLFW9T0OY94m2T0gm7acVAveMfZ7a9JiklebbzyjoOTt0nuabzyhhR3/wLHpJcBcdpxb7gUXbdfSpXkRBJJtfpyoJSacr6x5pf1zqkpe1ddy7Y1wkN0Vh6x8uCEAuWJI2brGoJy0EEvocUvQ0rioq8i0bjfurYbLA9ddAPna6j0x/rU7/J+LdhdsHklRVF9OXw0K8qjWHYrz3nmYIA8uXYgg5Jt+lB8QPQp/eEwR35UEQhljM+1igC5tOksntsc9RiDk4VIXeFEU6bvJGtOsfPaFY8jjmLAApFUbTJCddvTnvidF654HPGVkODE777tncchjyg8+Y++X+lhhd810K5FzrguK6bK3NZkX4pN7htGLHJYleJd4CbDqUcscIxwcBIfug4ryHBDwnAjbmIIr+HGRQ3NjCHblOEo9mxUqcGBMx2fEgcIgPNGml69yhARIJO2uQlX/ErOItWZd2l06bRvMROY3fMv8GlfY2KEgpP40MmaIP1KWILBAvEr3c+I8OlXCni+B2BUJ9Zg5PB5DwuEQgBgYjMijp3/ASIw8eQZU/xK4dvomoyFGLPIYRrHv8BN6KcjFjhMEPsXuEmwqmPcvObpOj9IxNmuMxhHxpD/ss6Wab74s/07e1RwIwYgSk5uSIRR932AK0aVdpAEoE4Mw8j4mKXz106ehY6SAdSs2Gu5zkWoN2jFNd/fs9WjdLl4LMt6ZoVilcVq78qdu5ily7vqG18X8/my68Rocz72qoRumdXR4wiQ96REhk3M7LE3r8PoD/Z8tFvXCvP5oqSOMkUj3Ja2UpPJHUN2gkjozT5rM0LoOHOzeGW5VtWN9Ulg5E3BEsbDkeFCKDGyOw5HIw8+kOSchJaG3GufpCk0Y6bcp7YczyWpTHZ3VssRFOCoRTxK6I6irBL23cb6jmeVdIHSZLqcgAIpesGB4pZc0AVKk/1yAopC3AsNFZOhJmNOGSg0PoHjiB0EwGUCwWSmyZjFD0L4wIRvkr0Zr/AVel9g0h96+EmQV7ISQqC7CXHvtrqfexIQ009gFknf/HnHc08C8VFxiEhMbLbnMaMQIgQmDeDGa4lV0p8gMiJ+765MRp6MWeYwXRSAxKdNQVBofwGIXBiEt+N3MgWUexhOpnBOpg+I9idIAHa1nrJxWU4mChEJ96M+AyIAhgHBtFn3KgNQUP/2d9oNxCAFMASJG0SzY5Y4zCgHIHEHowuvFI8KfsEiMMhKxJsBKK2FzIke6nI8VpbzT/eQOlfaDrPsbZkvbp+ydF78zOds+XCb5nmhESWpL5jWUI1DaV6HYutAiusop9IfznfquLcnSN28w42vIw0c/+uZNOCZtLtVQ219aGhKJ51Nncvy2Ky5pMMzymRQ0bQpaUwT47hPu2in67gP+v1S0phs82yQ2gfUbfRpMuNmeqkYEjoqYTAFPZPSwoNl2jclKiuNxW1qjgYBhyFtMyJsqvIHSYTQpD9tUqDCSwtN/Aax0/jvDCXEAEf7U+R0kxBce7LMZLzKdC4QJTQJ/u9MmMnAlel5rThEJJy3mw6JpRYGBBO/WlFYaizxfbnC0MSZgcrYGP8SnzYvUGGphSDyoUvjgvQ4BZR7gB4nNgh8p+NxCnjcB2BkBx84NAqPE4WIxNBxcNgeZ5AQTLwzr0JZ48i7MTYZqzRmmYPETwgSnzhJMAlP4iaR7JUesn02CGWtk8fBz2Wvda5MAiSyMZOxXVfNs0NMIm0HRw5ETALLchyczukyGlbIk+9Xz7M0z56L/8IhS94zwREz8r/TuUxnViPO9hnF6glbLaakWxfbIPfy+pS+lF/O3u4NPJD7LXqf7psH6ezL4wbTX9/y5eI5q57P0/WXX4tlFvnm7IkQbz8km6fHnZoNkoV79LE+Eat32qW9trc91QxaxrGG88px05VxeDAHHx8WdvpGtCEQRMc8dhuxOh4hFkr6JkAk3M8S85u+CRGCaRN5EhaD9A2VLJKq1F2GtlOfHyZxcBKfeJIyCU/i1yDaW/oGRLsEGI9TczfAm6556eEE0Ubzc646N5jOJXsZFnAAEaGEUH3U7sDKiCkr+9c8k8bBZ2ve90W+VTxevfqj1qri653ClS9+DKFaxaffQHbgfduped50UB7UFHqODradRhJexnDQtrztbJ4+XdcqXVyss1k1RKWQaflcF/Pa8neUuTs4Vi+rmMbdYdb8HaNGvbsToBKQsVdJxmonMEYERGAEnT5wCROD1SyPXGImbYEvE2sM0cFnDNdSVyPYPdYmVItDWO/M+iEu49iQRko/P3IyyqAZwa4zYEDG1u2IpDKe34ssCo+PN+9wFlkQtJX7cSfE43kDdefE++JypQi1u+zQ3fETkCI6YMWM80TCVPELTtUF6PzDlQJQA112rbnoP3XA44fi06BP6Y+uaX1Az4alABDKGJ8b3Wems3hObYYXOGnnuoiuaYouvrB3T3Ogu9YV8SEQZy3EWZnA0lR7OAa9X1B5XlqpK6mUJE3CaeNtbNkR2weTSGXiwm+ohzQ3WUYaITWz1NuUaFamOBLAS/FvT6RBRDtihpGbljU4pkrLlMJ1J3t6pJv16pqAJA0i2hFjbQ9epflBHMVJaOAaBKpXcDXgCq4UlscRx3vw6mo/ncPrfA6QR/JgAUoCa6/KSUCeO38xeYlt8jBBYYLhnE7rk0JIAwVh2r3wMKlv95q6JRkRvx3AYgPq7KhFLhnU9ACEPm2yLCZJiEK/JkFcJUFIm6DeXHudlPeAiwXXDZzF/e7xL+gyBdVUyj2ShUR7jXDiExUDEnfKhWF2zaViPNylq7fVtwX/Jv73c/Kfu98/J/J22uV2bnqdy/ZlKi4noAhVgRxF01qYTRopjRhna9AqUTmO40i4zX7pATVwz66AHu16hpNY5/O5h9PA8ZtMsisW7QtIUsT1NcXXQVyvB8Btb+MwEl0JChAIty2OvdfJBojAxPs/MQGptkXgj1Sh2wn89SKfeCtjkYQm8Kn3MQ5O4NdA0wbrIsRYk1xjzUFIF6GEmuQaag7Eswgl2qwBvIhoU0OtCMHLJpcYcGqYFUFgcVExp0qsCAKDaUedCq/Cv0tOph10amgVAch82nGnhlURgMxNJuicU9Z8SreS1zxd5+pt++bx3iX79oOO7UqdCBSBYIZIenJxfLkcGJHrtUmJXsuuCQ4bCY6yhkjjmjgPmUzad44YXnuISohnEsUBJKlMeoFe8TRJalDWhBNeAb3wnAYtgNFU8TqN4Uxaj15CTiMILC48pxEEBheW06Ay1nX5dhjr0YvLaQQg84vLaQQg82tOw0PNCKyKJ2p/3n5lI2A9VDzy2O9Xq2lsyOK7sfVeLSBuIcTlLoJx0H5VD4hJYd4lduQkXKnoK05uuKHMN6halalZbrienHqwDaLaywRbKOjE54CtLCeQe7DP7bV7Neq9tYhCo87PsOgUWnReN8sPx5wPnRY42vqucdA15t3RpQLc25IkkY5Yp8sN8CiWtiy5QXrAQbqsyVa6ggOmaiQlEVW7TjpN1TC33YZCSV2SILEIo+lQk7V0gwUNEwuTRMOYkzt139Fa5oxFmgEbDlM7RvN1xixxJZ3mX+Z8yCRHgDKnOECZG+QxnHmi9rwdJYsgaRzhvYqdRAHBuSNqMtBnQnfnUjG0DEfYs6E1GbQzQQcUB4nFZd2dh4nB1O/OBXA8OW2apXsyyFO/O1fzQP5lPvG783KEaXAyH7qNbpiOp3KjIVl4jqdBrDshx1MhbUru39AKg9h3go6nQhgMAouhY+KwHc8wMTAIhsdskiFpUwrfBlmY3HuPWeJqDsi/zE1mvY5Z5mqW2b/M+01WPYHfEdhYVW+8DobabT557ez2JnXQJIkStb9S09BFtNf1zu8Q/Yam9texafOB4ljh/MWRPFF3VC30OOxZry22aeTjVAMq1dnuu85NJ+iByXL+leFawT4EuwliLWiBtWamrusCWXmtYD/1BreNJxNRAD3ctEOSr3CekBcVXES18LwiapB4mFIyFLpaorC6NfXPV/LHZDjwFJOhQWJxWbfwYWIw9Vv4moVWyzxmUaLK3GGSyGTs7aglLmGLpwBkPvFbeJIEKHMTmrODRFWTM4X50iPZ0ptzMhsEbaP9A/LZ1l37y4QJoDAMmJhepZWgLE4QFIWW/TAaMjtVffSoZgwWYEpaRJRCoakMoXcSicjnaE694qmB/N/u/67oXmETcp07C0IJTXSRLhePZQOFWaEI5b3Ou9LCLGbp8q76xtfFfL7ssmztlg2teATynjavq09c/1EWRiYlMMMW19MN91syaJSW2PKSew5XrXBpyXIP1/L5b2leYPW8FSRiyoFCFWSg23F0zK9S6b27Pdw/ebDNc6eQ0GY7H3gfqS9NjlrM7fBkX0eZ5O3LHgG1zfTgisFCmIOFLHchiE2YAZ6UGfdXZnxV5v7KLDC4uYxPVGYpjyw0nDLrgTHp83pGEonaDNtYW3bFyaCYOalLBsUDxGwd0nSfknvIhD4lN5fJvd2wmQMqAFKvfXRuBoyUBhS/24YRG/E/kA7xi3vBhUvx1+RYf+J32yPCc0IaiF9g7+J32xaiEHKG5zyTOvEnQtLUpfYzrh7+jsU/dNnMce1P5zLVaz8TfONIOhM/8i5+96UyeD5/QDrxN9eTrsRPvWu/SRvM6VzOQ/E71P6O+chDi7/fLY3tPqM1medw4x+hEbmwJ/J+fUZ3GW1tTlyv8jfnxNud4rwlGN6lY9okDQ0jaPNgmdS9w3aFBu0lTINlEh9ZyHuuGmuLo8SyzEzfF188ll/843s2e8uLjUhE+rXcds/3ry8bsLdvLH5z815Fo9Q0t3n6ep29Lv5K7zdLlXarkkOxLn93w38q13rLVzXLrEcGHR4Wts4BkrQjfsmRbrQK0gX99s6BoYPOk45eJ0QoxuA5jKIE7f6pEZDjI1nnA4Ld9/nL4uVls/mum+0g2FyC07bYbBqjmwyz2YqX69Uq3z+Oi43x9MtqnpXv+H8=</diagram></mxfile>
|
2112.11542/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,20 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Vision transformers (ViTs) have been proven to be a powerful architecture on various computer vision tasks [\(Doso-](#page-7-0)
|
| 4 |
+
|
| 5 |
+
[Copyright © 2022, Association for the Advancement of Artificial](#page-7-0) [Intelligence \(www.aaai.org\). All rights reserved.](#page-7-0)
|
| 6 |
+
|
| 7 |
+
[vitskiy et al. 2020;](#page-7-0) [Touvron et al. 2021;](#page-8-0) [Chen et al. 2021a;](#page-7-1) [Caron et al. 2021;](#page-6-0) [Strudel et al. 2021\)](#page-8-1), especially when being scaled up with larger model sizes and more training data [\(Dosovitskiy et al. 2020;](#page-7-0) [Steiner et al. 2021;](#page-8-2) [Ridnik](#page-7-2) [et al. 2021\)](#page-7-2). However, powerful ViTs often come with prohibitive computational overhead. Specifically, (1) the tokens consisting of merely background information and (2) redundant heads within the multi-head self-attention (MSA) module of ViTs which learn similar features can lead to unnecessary yet non-negligible inference costs. Taking the widely used DeiT-Small model [\(Touvron et al. 2021\)](#page-8-0) as an example, running inference on a single image with a resolution of 224 × 224 requires over 4.6 Giga floating point operations (GFLOPs), making it challenging to deploy ViTs onto many real-world resource-constrained devices for supporting intelligent internet of things (IoT) applications. Thus, there is an urgent need to reduce the computational cost of ViTs.
|
| 8 |
+
|
| 9 |
+
On the other hand, in real-world applications, the complexity of images can vary significantly. As such, processing all the images with the same model complexity of ViTs could be overcooked. For example, for most of the time during video surveillance, the video may be just staring at an empty background, and the corresponding images can be processed with a naively simple model, saving a large portion of computational cost while still achieving satisfying accuracy. Thus, a straightforward solution for trimming down ViTs' complexity is to perform input-adaptive dynamic inference. Although dynamic inference has been extensively explored for convolutional neural networks (CNNs) through various dynamic dimensions (e.g., model depth, channel number, and model bit-width) [\(Hu et al. 2020;](#page-7-3) [Wang et al.](#page-8-3) [2018;](#page-8-3) [Shen et al. 2020;](#page-8-4) [Wang et al. 2020\)](#page-8-5), only a few pioneering works have considered this aspect for ViTs [\(Rao](#page-7-4) [et al. 2021;](#page-7-4) [Wang et al. 2021b\)](#page-8-6), which yet merely focus on reducing the computational budget by adaptively adjusting the number of input tokens. However, as suggested in [\(Zhou](#page-8-7) [et al. 2021\)](#page-8-7), the similarity between heads and feature maps can increase significantly in deeper ViT layers, implying that the token dimension is not the only source of redundancy, and the unexplored depth and head dimensions could lead to more efficient ViTs.
|
| 10 |
+
|
| 11 |
+
To this end, we target a multi-grained ViT framework that can fully explore the redundancy in ViTs and make the following contributions:
|
| 12 |
+
|
| 13 |
+
- We propose a Multi-grained Input-Adaptive vision transFormer framework, dubbed MIA-Former, in order to trim down the redundancy of ViTs from multiple dimensions at three coarse-to-fine-grained granularities.
|
| 14 |
+
- We propose a low-cost MIA-Controller to make inputadaptive decisions, which is jointly trained with the ViT models via a hybrid supervised and reinforcement learning (RL) scheme.
|
| 15 |
+
- We empirically find that thanks to the proposed hybrid supervised and reinforcement training method, MIA-Former is equipped with improved robustness to various types of adversarial attacks, achieving a win-win in both robustness and efficiency.
|
| 16 |
+
- Extensive experiments and ablation studies based on both DeiT-based [\(Touvron et al. 2021\)](#page-8-0) and LeViTbased [\(Graham et al. 2021\)](#page-7-5) models show that the proposed MIA-Former can be used as a plug-in module on top of a wide range of ViTs to achieve better accuracyefficiency trade-offs and boosted adversarial robustness, compared with state-of-the-art (SOTA) vanilla ViTs, input-adaptive ViTs, as well as CNNs. Specifically, MIA-Former achieves a 20.1% FLOPs reduction and 2.4% higher robustness accuracy under Projected Gradient Descent (PGD) [\(Kurakin et al. 2016\)](#page-7-6) attacks together with the same natural accuracy, compared with the original DeiT-Small model.
|
| 17 |
+
|
| 18 |
+
# Method
|
| 19 |
+
|
| 20 |
+
In this section, we first present the MIA-Former framework which integrates a MIA-Controller module to dynamically control the activated subparts of ViTs in an input-dependent manner. After that, we introduce our proposed hybrid supervised and reinforcement training method that can effectively train MIA-Former based ViTs to be equipped with both improved robustness and efficiency.
|
2201.10986/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2022-01-19T13:30:16.309Z" agent="5.0 (X11)" etag="JI3jytdzLUeoFMxVdreZ" version="16.4.0" type="device"><diagram id="C5RBs43oDa-KdzZeNtuy" name="Page-1">7VtbV+o4FP41POLqheujgHrmjDPLo85x3mYFGkrGNGHSIOKvPzttWtoGoVWYKrIWD81uupPu78u+JKXhDoPnK4Hmsz+4h2nDsbznhjtqOI7dcpyG+lneKpZ07XYs8AXxdKe14I68YC20tHRBPBzmOkrOqSTzvHDCGcMTmZMhIfgy323KaX7UOfL1iNZacDdBFBvdHognZ7G053TX8m+Y+LNkZLvTj+8EKOmsFYcz5PFlRuReNNyh4FzGV8HzEFNlvMQuD7+tHuj1Y+fq+4/wP/TX4Pf7P382Y2WXVR5JX0FgJt+s+uVxevntZ+vfv+c3lz+W1tX5+HtTP2I9IbrQ9hryYM6ZGih+ablKLBkuSUARg9Zgypm803fADgNEic/gegLPYQGCJywkARDO9Q3J5yCdzAj1rtGKL9RrhBJNHpPWYMYFeQG1iMItGwRwW0jNJ6eT63GnngSxBVKBQ+hzk9jGTkXXKJS6z4RTiuYhGUcTVl0CJHzCBlxKHiSK+IJ52NOtFOyoIQV/TOmjni+JiEZOWQM/Z/ioEbrCPMBSrKCLvtvqaUT0amvp5nJNXbujZbMsbd2eXjJ6ufip6nS0W1heiPlgg3Q4x80PZ5cdDwDJDYco4M6QxANlxTBLRLjIvOlaFNGzAlVtg6oMBdhgKRhaZhhJ8VS+ysdwjiaE+ddRn1FrLbnVb6pEHJ6d0ogLM+J5mEVckUiimE6KIHNOmIxM0R7ADww2tM7ajTZMaAhte92Gn+ou5JAzoBUiEX8wcHWJFV83MGvrwt3NrFUesarIZomUg7Qqfo6BH2HzhQyPDsEtjmQmA6ovD4Vz26kZZ9fAGRz8Cei9A93t1Qx0ywDaQJiSKF3Q1rA3htId8AcApFKX4H2v6DBq2gYnXJMT7gb8KRpjesNDIglX+kXct8CLury0bbXKobol3L8L1PYGLz2FlO7Y1u7BALTr9r8dA8F7gVg45SLAIqyc3lun9H5/6b3ba+fybadXMt/uvim9twvpvVt2uBrS+65B2yDei/iajqdTmlofJb/vGQBKWFUMlurxRY+9Zn6Voa49xU9i3LbUDzPvXG3fQWtMufLnAxBph29bcfOSULrJiKZHzlscez5OAhSmY768WAsGkQBuJHSq7M9DvhATvBsxiGA+LpE9qNluxXUTkAJTJMlTfhdzi/u/UezO7OwUNpJsx82riF9SP5XdNqyqKLaCoWhv2z5mPXmLZytPQDQ6JTN1JjPt/huTmaRftWTGbZVIZrofI5mxzdp4jAEA8U8UExtqSu7zeQrOEYXGkkEvXdafJ8GxzcrqS0W9BLIPHfb6eSfh9N8Y9tyCIrtfmMuhw55ZD41wwH11SLshoT6dzR0+3hUYUfporvWmcNduf6KjObP2mySp2fFt+5eNcN3SBPswEa5v4Hg6ozsE0rVX8I5ZVRkYf/XDm+qwlj2SO9ThjWOWHXBJPHD+R7eCDwZir24v7Lx6BneqGiujWf95nFOtbOTzyNbVCsO0prTrLxO1bXdWiQmWO8tEjbOiUi7/1hN6Z8XY6hTy7IQwVSvGjrND0SsVo6GoWdzoKujh02mID1J0OmbRaZ2dnTGDr1EVl/EEO/zOWNdycfweQIHoRwQecsqhIh0xvq5c9VaJU94pJAtMf0Gsp9VIv9stX+E190OpZuHYNflS5n8A0CzE7NLgJcnUZ0VvP+A5tWFnFl8TtaNC2KvlVyY1L4JqZsRltoLSfH071JW2Y97JCwP/16hSPnD090KUQtxodtz3MwWa6/8TxN3X/8pwL34B</diagram></mxfile>
|
2201.10986/main_diagram/main_diagram.pdf
ADDED
|
Binary file (15.8 kB). View file
|
|
|
2201.10986/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,148 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Twitter data are at the heart of NLP and social science research [@steinert2018twitter], used to study policy and decision making, and to better understand the consequences of public opinion. Its accessibility and the variety and abundance of the data make Twitter one of the most fruitful sources to experiment with new NLP methods, and to generate insights into societal behavior [e.g., @munger2017tweetment]. Given that 199 million people communicate on Twitter daily,[^1] it becomes fundamental to find ways to better interpret this information.
|
| 4 |
+
|
| 5 |
+
However, to control for the effects of various covariates, to stratify the data into sensible subgroups, and to assess their reliability, researchers often need more than the pure text data. Social sciences typically require a recourse to external variables like age or location to control for confounds. In addition, NLP research has shown that integrating socio-demographic information can improve a wide range of classification tasks [@volkova-etal-2013-exploring; @hovy-2015-demographic; @lynn2017human; @li2018towards; @hovy-yang-2021-importance]. By default, this information is not available, and a wide range of NLP tools have been developed to infer measures from the text [i.e., sentiment, syntactic structure: @balahur-2013-sentiment; @kong-etal-2014-dependency inter alia] and user [age, gender, income, person or company @preotiuc-pietro-etal-2015-analysis; @wang2019demographic inter alia].
|
| 6 |
+
|
| 7 |
+
Here, we introduce Twitter-Demographer, a tool that provides a simple and extensible interface for NLP and social science researchers. Starting from tweet ids (the common way to share Twitter data), the tool hydrates the original text, and can enrich it with additional information like the sentiment of the tweets, topics, or estimated demographic information of the author, using existing tools. Twitter-Demographer builds on previous research [e.g., @wang2019demographic; @bianchi-etal-2021-pre; @barbieri-etal-2020-tweeteval; @wolf-etal-2020-transformers], but puts all these efforts together in one simple tool that can be used with little effort. Twitter-Demographer can be applied to extract information from different languages, as its default components are either multi-lingual or language-independent.[^2] Twitter-Demographer has a simple API that can be used to quickly add user-defined components quickly and effectively.
|
| 8 |
+
|
| 9 |
+
One of our goals is to provide and enforce the generation of **reproducible data enrichment pipelines** (i.e., they can be shared and produce the same results if components are kept the same). With data enrichment we mean the process of extending a dataset, e.g., adding new inferred properties, or disambiguating its content [@cutrona2019semantically]. Our flow-based infrastructure makes it easy to produce and share pipelines with other researchers to reconstruct the datasets.
|
| 10 |
+
|
| 11 |
+
Most importantly, inferring user-attributes, even for research purposes, poses a privacy issue. We implement several algorithmic **privacy-by-design** solutions to facilitate **pseudo-anonymity** of the users, and to reduce the chance that their personal data or identifiers can be used to identify natural persons.
|
| 12 |
+
|
| 13 |
+
We believe that Twitter-Demographer can help (computational) social scientists wanting to analyze properties of their datasets in more depth, and provide NLP practitioners with a unified way to enrich and share data.
|
| 14 |
+
|
| 15 |
+
We introduce a new tool, Twitter-Demographer, to enrich datasets of tweets with additional information. The extensible tool enables NLP practitioners and computational social scientists to quickly adapt their own datasets with the features required for a specific analysis. Twitter-Demographer encodes the resulting enrichment pipeline in a stable, shareable, and reproducible format and implements privacy-by-design principles.
|
| 16 |
+
|
| 17 |
+
The flow-based paradigm is helpful for data handling because it allows users to easily combine different black-box components in many different ways, fitting different requirements time by time. Each component implements a specific task, it takes some inputs and returns some outputs. Many solutions employ this kind of paradigm (e.g., Apache NiFi[^3]). These solutions are directed at experts like data engineers because they require some knowledge about the low-level details (e.g., how to handle data sources, how to manage data streams, event-based executions).
|
| 18 |
+
|
| 19 |
+
However, the advantage of this paradigm is that users do not have to know the intrinsic logic of each block (hence black-box). They only have to focus on combining these blocks to ensure the proper mapping between inputs/outputs of consecutive blocks. Indeed, the main disadvantages of manually building these pipelines are that (i) they require massive effort to be defined; (ii) they are sensitive to various hurdles, e.g., what happens if we cannot find one tweet or its location is unavailable? (iii) they are error-prone, with minor errors possibly tearing down entire pipelines, e.g., what happens if a Web service changes its exchange data format, or is no longer available?
|
| 20 |
+
|
| 21 |
+
Twitter-Demographer has been imagined as a low coupled set of components that operates on a dataset in tabular format (e.g., a Pandas DataFrame). Each component takes the dataset as input, applies some operations on it (e.g., adding columns), and returns the modified dataset. Components can be integrated into pipelines: we aim for high cohesion and low coupling principles to reduce possible errors at the component level. Each component exposes a set of required inputs (i.e., columns that must be contained in the input dataset) and a set of generated outputs (i.e., names of the new columns added to the dataset). Using this information, we can chain different components together to introduce dependencies (e.g., to run the sentiment analysis classifier, we need first to query Twitter and create a new column containing the text of tweets). Exposing the input and the outputs allows for the consistency between different components to be checked beforehand to avoid compatibility issues.
|
| 22 |
+
|
| 23 |
+
The flow-based setup makes it possible to replace any component with another one implementing the same task with a different logic, as long as the new component respects the communication interface (i.e., expected inputs and generated outputs). It is worth noting that the paradigm does not force a specific absolute order between components: a component requiring some columns as input (e.g., $\langle a, b\rangle$) must be placed in any position after the components generating such columns.
|
| 24 |
+
|
| 25 |
+
The goal of Twitter-Demographer is two-fold: 1) providing an easy-to-use interface for data enrichment and 2) providing a system that allows users to re-use existing components that are already implemented in Twitter-Demographer easily.
|
| 26 |
+
|
| 27 |
+
We show the class diagram of Twitter-Demographer in Figure [1](#fig:uml){reference-type="ref" reference="fig:uml"}. While Listing [\[listing:scriptino\]](#listing:scriptino){reference-type="ref" reference="listing:scriptino"} shows an example application of the tool. Line 2 instantiates the Demographer object, that is responsible for handling the entire pipeline (i.e., it also performs compatibility checks on components). Lines 4-6 show the instantiation of the different data augmentation components that will be used in the pipeline (a rehydration component to collect additional information from the tweets, a Geonames location decoder, and a sentiment classifier). Lines 9-11 add the components to the demographer object, creating the enrichment pipeline. Finally, line 14 runs the entire pipeline on the data, generating the enriched dataset.
|
| 28 |
+
|
| 29 |
+
<figure id="fig:uml" data-latex-placement="ht!">
|
| 30 |
+
<embed src="uml.pdf" style="width:100.0%" />
|
| 31 |
+
<figcaption>The UML class diagram of the current Twitter-Demographer setup. <em>Demographer</em> is the main class that handles the execution of the different <em>Component</em>s. <em>Component</em> is an abstract class that defines required inputs and produced outputs, as well as an abstract <em>infer()</em> methods that has to be implemented by its subclasses. Current available implementations of the Component class are reported in the UML diagram.</figcaption>
|
| 32 |
+
</figure>
|
| 33 |
+
|
| 34 |
+
::: listing
|
| 35 |
+
``` {.python fontsize="\\footnotesize" numbers="left"}
|
| 36 |
+
|
| 37 |
+
demo = Demographer()
|
| 38 |
+
|
| 39 |
+
re = Rehydrate(token)
|
| 40 |
+
me = GeoNamesDecoder(user_name)
|
| 41 |
+
st = SentimentClassifier(model_name)
|
| 42 |
+
|
| 43 |
+
demo.add_component(re)
|
| 44 |
+
demo.add_component(me)
|
| 45 |
+
demo.add_component(st)
|
| 46 |
+
|
| 47 |
+
new_data = demo.infer(data)
|
| 48 |
+
```
|
| 49 |
+
:::
|
| 50 |
+
|
| 51 |
+
We anyway guarantee the flexibility to allow new components to be implemented. A Component (Listing [\[listing:component_class\]](#listing:component_class){reference-type="ref" reference="listing:component_class"}) is a simple abstract class that can be inherited and implemented easily: introducing a custom classification pipeline requires only to add a custom classifier to the pipeline, which inherits this class and implements the methods that handle inputs, outputs and the method to run the inference on the data.
|
| 52 |
+
|
| 53 |
+
::: listing
|
| 54 |
+
``` {.python fontsize="\\footnotesize"}
|
| 55 |
+
class Component(ABC):
|
| 56 |
+
|
| 57 |
+
def __init__(self):
|
| 58 |
+
self.outputs = self.outputs()
|
| 59 |
+
|
| 60 |
+
@abc.abstractmethod
|
| 61 |
+
def outputs(self):
|
| 62 |
+
pass
|
| 63 |
+
|
| 64 |
+
@abc.abstractmethod
|
| 65 |
+
def inputs(self):
|
| 66 |
+
pass
|
| 67 |
+
|
| 68 |
+
@abc.abstractmethod
|
| 69 |
+
def infer(self, *args):
|
| 70 |
+
pass
|
| 71 |
+
|
| 72 |
+
```
|
| 73 |
+
:::
|
| 74 |
+
|
| 75 |
+
Inputs and outputs are exploited by Demographer to handle the control over the chain of possible components that can be added. A component cannot be added to a pipeline if it requires inputs that are not available in the original data, or that are not generated by previous components. For the sake of providing people with a simple system to extend, the current implementation of Twitter-Demographer represents these variables as lists of strings representing names of columns in data. As a next step, we will improve the current implementation by adopting a pure OOP point of view (i.e., inputs and outputs will turn into interfaces, with configurable parameters).
|
| 76 |
+
|
| 77 |
+
Listing [\[listing:user_defined_component\]](#listing:user_defined_component){reference-type="ref" reference="listing:user_defined_component"} shows instead an example of implemented classifier; this is similar to how we have implemented some of our components, however, we report it also to show that this part of the pipeline can be used by interested researchers as an example of code to extend to support custom behaviors in Twitter-Demographer.
|
| 78 |
+
|
| 79 |
+
::: listing
|
| 80 |
+
``` {.python fontsize="\\footnotesize"}
|
| 81 |
+
class UserClassifier(Component):
|
| 82 |
+
|
| 83 |
+
def __init__(self, model):
|
| 84 |
+
super().__init__()
|
| 85 |
+
self.m = model
|
| 86 |
+
|
| 87 |
+
def outputs(self):
|
| 88 |
+
return ["sentiment"]
|
| 89 |
+
|
| 90 |
+
def inputs(self):
|
| 91 |
+
return ["text"]
|
| 92 |
+
|
| 93 |
+
def infer(self, data):
|
| 94 |
+
return {"sentiment" :
|
| 95 |
+
self.m.predict(data["text"])}
|
| 96 |
+
|
| 97 |
+
```
|
| 98 |
+
:::
|
| 99 |
+
|
| 100 |
+
Twitter-Demographer saves the intermediate computation steps, right after each component has been executed, to handle down-streaming unexpected errors (e.g., lost internet connection). In those situations, the computation can be restarted from checkpoints.
|
| 101 |
+
|
| 102 |
+
Twitter-Demographer is a container of components, and can be extended as they are provided by the community. The current version of Twitter-Demographer is shipped with the following default components wrapped inside:
|
| 103 |
+
|
| 104 |
+
- Basic ReHydration Component based on Twitter API v2. This components handles the retrieval of all the information that can be collected on Twitter from the single tweet id. It requires a Twitter API key.
|
| 105 |
+
|
| 106 |
+
- GeoNames Localizer.[^4] A tool for geolocalizing users based on the location (e.g., address, state, and/or country) they manually write in the Twitter profile. This process is less precise than the geolocation given by Twitter, but also much more frequent: users often fill this field in their profile and thus it is a viable source of information. This localizer outputs the detected country and address.
|
| 107 |
+
|
| 108 |
+
- HuggingFace Transformer Classifier. A wrapper that can be used to use any classifier defined in the HuggingFace transformer library. With this wrapper, any classification module from HuggingFace can be used to classify the data (e.g., Hate Speech detection, Sentiment Analysis).
|
| 109 |
+
|
| 110 |
+
- Topic Modeling. A topic modeler based on Contextualized Topic Models [@bianchi-etal-2021-cross; @bianchi-etal-2021-pre] that also works on multi-lingual data. This topic modeling pipeline applies minor pre-processing by filtering infrequent words and removing links, users can select the number of topics they want to use to model the data.
|
| 111 |
+
|
| 112 |
+
- Gender and Age Predictor. A wrapper around the M3 classifier [@wang2019demographic][^5] that can be used to predict binary gender, age group (i.e, \>=40, 30-39, 19-29, \<18) and identifies if the twitter account is an organization profile or not.
|
| 113 |
+
|
| 114 |
+
Some components come with an automatic caching logic, especially when the component relies on external services with a limited requests rate (e.g., public API accessed with free accounts with a limited amount of requests). For example, the localization component implements a caching mechanism to avoid repeating requests with the same labels, saving requests.
|
| 115 |
+
|
| 116 |
+
As a point of reference of how time-consuming can Twitter-Demographer be, we tested the tool on an Intel i7 laptop equipped with a Nvidia GeForce GTX 1050 and we were able to reconstruct 50 tweets in 20 seconds, adding demographic information with and applying location disambiguation via the GeoNames Web Services.[^6] Note that, however, some components are restricted by their own rate-limits (e.g., Twitter API v2) that might slow down the pipeline.
|
| 117 |
+
|
| 118 |
+
Figure [2](#fig:predicted){reference-type="ref" reference="fig:predicted"} shows an example of a dataset enriched with sentiment prediction and location.
|
| 119 |
+
|
| 120 |
+
<figure id="fig:predicted" data-latex-placement="ht!">
|
| 121 |
+
<img src="predictions_age_sentiment.png" style="width:100.0%" />
|
| 122 |
+
<figcaption>An example of a dataset enriched with sentiment analysis (2 is positive, 1 is neutral), location, age of the sender information. The ‘location’ field, extracted with Twitter APIs, has been disambiguated and split into ‘geo_location_country’ and ‘geo_location_address’. Screen names have been hashed (see Section <a href="#sec:anonymity" data-reference-type="ref" data-reference="sec:anonymity">3.5</a> for a discussion on privacy).</figcaption>
|
| 123 |
+
</figure>
|
| 124 |
+
|
| 125 |
+
Twitter-Demographer exposes wrapping behaviors through the use of Python decorators to simplify the development of pipelines. For example, a common use case is to handle "missing" elements in the pipelines: a geolocalizer cannot be run if the user written location was not retrieved. This can break the pipeline (i.e., running the Geolocation on `None` generates an error). However, this is often not known at the start of the pipeline. This requires to write code to 1) temporarily skip data with missing text, 2) run the classifiers 3) return, to the caller, the entire dataset annotated with the new property where possible (to not compromise other steps). Twitter-Demographer exposes a simple decorator that automatically applies this kind of filtering, see Listing [\[listing:decorator\]](#listing:decorator){reference-type="ref" reference="listing:decorator"}. The same functionality can be useful for a topic modeling pipeline or for a sentiment classifier.
|
| 126 |
+
|
| 127 |
+
::: listing
|
| 128 |
+
``` {.python fontsize="\\footnotesize"}
|
| 129 |
+
@not_null("text")
|
| 130 |
+
def infer(self, data):
|
| 131 |
+
|
| 132 |
+
[...]
|
| 133 |
+
preds = model.predict(data["text"])
|
| 134 |
+
|
| 135 |
+
return {"locations": preds}
|
| 136 |
+
|
| 137 |
+
```
|
| 138 |
+
:::
|
| 139 |
+
|
| 140 |
+
Twitter-Demographer is available as Python package,[^7] released under the research-friendly and open-source MIT license. It is also published on the PyPi repository,[^8] and can be installed with the `pip` package manager. Automatic testing and deploying is handled via GitHub actions and the current state of the package can be checked online.[^9] Twitter-Demographer also comes with online documentation that is available online at Read the Docs.[^10] Tutorial notebooks are available on the GitHub repository. A video showcasing Twitter-Demographer usage can be found on YouTube.[^11]
|
| 141 |
+
|
| 142 |
+
The flow-based system supports reproducibility of data pipelines in a research environment. The pipeline itself, with the result, can be versioned into a JSON file for future machine-to-machine communication into data pipelines (inspired by ). Moreover, component pipelines can be shared and used to augment the same or different datasets multiple times, reducing inconsistency that can arise when we reconstruct and enrich data.
|
| 143 |
+
|
| 144 |
+
Inferring demographic attributes of users has many advantages for both data analysis and social science research, but it has obvious dual-use potential. I.e., ill-intentioned users could abuse it for their own gains. Users might have chosen not to disclose their information on purpose, so inferring them might go against their wishes. Given the "right" tools, we can also infer protected attributes. Moreover, collecting enough demographic attributes can identify real owners of individual users, or at least reduce the number of potential candidates substantially. The latter raises privacy concerns.
|
| 145 |
+
|
| 146 |
+
Following the recommendations of the EU's General Data Protection Regulation [GDPR, @GDPR], we implement a variety of measures to ensure pseudo-anonymity by design. Using Twitter-Demographer provides several built-in measures to remove identifying information and protect user privacy: 1) removing identifiers, 2) unidirectional hashing, and 3) aggregate label swapping.
|
| 147 |
+
|
| 148 |
+
At the end of the reconstruction, we drop most of the personal information that we have been reconstructed (e.g., tweet id, profile URLs, images, and so on). Whenever possible, the information is anonymized. E.g., screen names are replaced with a globally consistent, but unidirectional hash code. In this way, we can retain the user-features mapping within the dataset (enabling further analysis, like aggregations), without allowing people to identify Twitter users (at least not without significant and targeted effort). In addition, we randomly swap the complete set of labels of a subset of the final data, i.e., all labels attached to one instance are transferred to another instance, and vice versa. This procedure reduces the possibility of finding correlations between individual texts and their labels, which reduces its value for model training. However, we expect this use not to be a user priority. On the other hand, swapping does not affect aggregate statistics and the kind of analysis based on them.
|
2202.07919/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2022-01-23T10:41:03.000Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36 Edg/97.0.1072.69" version="16.2.6" etag="GiD8zI1scCZ9u1ZoqHj7" type="device"><diagram id="q50eZdivubwOAgcce-GS">7Vtbb6M4FP41PLbCNy6PbTqz87KrarrSzjxVFJwEDcFZ4jTp/vo1wQ4Y3NbtGNRGSdUUH1+A73z+OD64Hpqt9n9UyXr5J8to4UE/23voxoMQg0h814anxgAwwI1lUeWZtLWGu/w/Ko2+tG7zjG60hpyxgudr3ZiysqQp12xJVbGd3mzOCv2s62Qhz+i3hrs0Keig2T95xpeNNYJha/9G88VSnRkEcVOzSlRjOcRmmWRs1zkX+uKhWcUYb45W+xktavAULs0FfX2m9nhhFS25TQfYdHhMiq28N3ld/EndrOggcBWF690y5/RunaR1zU64VtiWfFWIEhCHWbJZ0kwW5qzk0nHCa6KcF8WMFawShpKV9XAV4wnPWSksYSDK8lJoxen+2dsBR5AEuyhbUV49iSaKWgpoxSwkibVr3aS8tOx4KJLdEkmMxXHkFjtxIOEzQ4kcQpls1g1v5/m+htQEnwa3A+wI6GHnD7HD/hA7Zfsd7LABu6DgkkcaiMG/W6YqLjYHhl2JBgCv922lOFo0f8XI+O7eC6/5PfTCm8agBheX1YyvWvf8JcDkumMqKk6ZPBwa1LROtpxtJM3rYpEvaj6nAnMqfHVdeyQXwnElK1Z5ltWdr9csL/kBMXLtkRuzize8Yr/oYNpsy+zgeF/C8DVZ5UXttb/zlZBF6P9Fd+L7O1slZW8qQkfzjACdK8GQK7GBK9ABV8hU88yDaB6lNE0HnhA1DxHBxHc085RKKTTVTOygCQxoAgdoBp9ctTDRVQsHk4lWOKpoqbEeKmU/yNjSg7PqIGT96rO0OSIU0QiF/AGhAgOhiANCRZ98LqI41KEDl2Sy2RhPPhuX5zk41hz0I/2BGFnRyMUcVM/UcUNRcA5FnXEF9oKncLpQFAALxS6zqzrlULuiSDabPO25UcOw6U4zlX+wh6Nzu+SFuVHRQiy6H/XhTRjIM9zW5OhKfG+RGPRg3LBtlVLZq5t46A1EXhuIJ9WC8sFAB5ccb9vOSzZJjpPyEkY6uNB/p5eOA43glcnyJZOt47Bi2nEtYoh/xlrIqRTqKeEZ6tIegOnQtMkyvKIZol4+6+pHXbqtHo/BNt3n/Efn+GctLIIsTelmL3XmUHhShVLcw4+2YV382a1rux1Kqt/bXdfUqMw2fF7wGhnR+dcohNdN1XwcXYx0XYzf+/BCSH949Sn3jEwKuiRPnWYy8nr2eolafMrzhFoyXxw0A75bgidL/gh2ZQmN5kbeBWlEH+aOJAP3wopouHwfTTNMCaGx4KQgIzQ0wRkHIUocvU8hvp4MOb7ImgJOm3TIqxKsNPMtiqlLtabibyf1UEyN4v+i9FvIbzRU3+CjqS/SV/X92NI6KA0CbRyCRxHfvpQgx+JrSlg5jDDaaMEXK+Qu+y+D1mAXM/wWzX3vjTHOLa1yAWSdtLBlP/xoVMfRZYxjGKMgCjCGPRUNw0vg+yj0QYwjiOPgnWEIhJexPhcAQsLkHz+wN7SruRHrgQnAbicHNGXhPndkQmAvZoTTrQ2hTZ7qk0UmOOzBOV2gp9B7CU5e5Um5KCzw7OZEiWG9RuofT9sucxGrfGunYfMx0vjwcZWa12c+HmZbjbC/IKHWsFtkjE4WdqDTnVinllwAb5FaOlngI32nC7LMQbmA3SIHdaqwE19P/WE4HewWWZGThR3rbI+iKWXGIn9yssAHOt/jeErgTZmWN77zhi/snQivPTKrVqLZbcUMmybIbPXA9qLZhRfefBO/9+C8tWKs1+VAXzCiActML8td7KxwsUPHLcvgmWVjsUw9r497yYdqNhbPkCl3MArPvjNuw7Mzy8ZiWawnlLEpWonc0EwU2//KaTJa7f82oS//Aw==</diagram></mxfile>
|