Add files using upload-large-folder tool
- 2005.00705/main_diagram/main_diagram.drawio +1 -0
- 2005.00705/main_diagram/main_diagram.pdf +0 -0
- 2005.00705/paper_text/intro_method.md +78 -0
- 2005.06331/main_diagram/main_diagram.drawio +1 -0
- 2005.06331/main_diagram/main_diagram.pdf +0 -0
- 2005.06331/paper_text/intro_method.md +77 -0
- 2006.15009/main_diagram/main_diagram.drawio +1 -0
- 2006.15009/main_diagram/main_diagram.pdf +0 -0
- 2006.15009/paper_text/intro_method.md +344 -0
- 2008.13084/main_diagram/main_diagram.drawio +0 -0
- 2008.13084/paper_text/intro_method.md +7 -0
- 2101.00433/main_diagram/main_diagram.drawio +1 -0
- 2101.00433/main_diagram/main_diagram.pdf +0 -0
- 2101.00433/paper_text/intro_method.md +111 -0
- 2103.05445/main_diagram/main_diagram.drawio +0 -0
- 2103.05445/paper_text/intro_method.md +48 -0
- 2105.03835/main_diagram/main_diagram.drawio +1 -0
- 2105.03835/main_diagram/main_diagram.pdf +0 -0
- 2105.03835/paper_text/intro_method.md +19 -0
- 2105.11696/main_diagram/main_diagram.drawio +1 -0
- 2105.11696/main_diagram/main_diagram.pdf +0 -0
- 2105.11696/paper_text/intro_method.md +54 -0
- 2108.11179/main_diagram/main_diagram.drawio +0 -0
- 2108.11179/paper_text/intro_method.md +95 -0
- 2109.03622/main_diagram/main_diagram.drawio +0 -0
- 2109.03622/paper_text/intro_method.md +1138 -0
- 2111.07775/main_diagram/main_diagram.drawio +0 -0
- 2111.07775/paper_text/intro_method.md +110 -0
- 2112.08655/main_diagram/main_diagram.drawio +0 -0
- 2112.08655/main_diagram/main_diagram.pdf +0 -0
- 2112.08655/paper_text/intro_method.md +97 -0
- 2203.04439/main_diagram/main_diagram.drawio +1 -0
- 2203.04439/main_diagram/main_diagram.pdf +0 -0
- 2203.04439/paper_text/intro_method.md +48 -0
- 2203.06486/main_diagram/main_diagram.drawio +1 -0
- 2203.06486/main_diagram/main_diagram.pdf +0 -0
- 2203.06486/paper_text/intro_method.md +18 -0
- 2203.08257/main_diagram/main_diagram.drawio +1 -0
- 2203.08257/paper_text/intro_method.md +53 -0
- 2204.12584/main_diagram/main_diagram.drawio +1 -0
- 2204.12584/main_diagram/main_diagram.pdf +0 -0
- 2204.12584/paper_text/intro_method.md +182 -0
- 2205.14120/main_diagram/main_diagram.drawio +1 -0
- 2205.14120/main_diagram/main_diagram.pdf +0 -0
- 2205.14120/paper_text/intro_method.md +58 -0
- 2206.04384/paper_text/intro_method.md +69 -0
- 2207.07077/main_diagram/main_diagram.drawio +0 -0
- 2207.07077/paper_text/intro_method.md +102 -0
- 2208.04726/main_diagram/main_diagram.drawio +1 -0
- 2208.04726/main_diagram/main_diagram.pdf +0 -0
2005.00705/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="www.draw.io" modified="2019-12-10T01:34:19.776Z" agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36" version="12.3.9" etag="-q3-3i5tnaMPZFkVZR1U" type="device" pages="1"><diagram id="JbN5G51gpfv873-te303">7Vxbb6M4FP41aN8ibHN9bJLOzsOuVKkPO/M0ouAkaAiOCJ0k8+vXDpeA7XQItUmaQtUW347NOZ8/m2MbA83W+7+zYLP6l0Q4MaAZ7Q00NyB0kUv/sohDEeEAs4hYZnFURIFTxHP8G5eRVbbXOMLbVsackCSPN+3IkKQpDvNWXJBlZNfOtiBJu9ZNsMRCxHMYJGLsf3GUr4pYzzZP8V9xvFxVNQOzTFkHVeZSxHYVRGRXRB3zoEcDzTJC8uJuvZ/hhOmu0ksh6MuZ1LphGU7zLgVgUeBXkLyWz2bY09k/z4Y9LxuYH6qn3uYZ+Vk/MDDQdJWvk/KWPsiG5Vvvl8zkk0VCduEqyPJJQsjmRxKvY9qg6W4V5/h5E4Qs747mpHGLOElmJCHZsRq08EIchkzksb5GyotnW1THaFo2Gmc53p99cFCrk8IQkzXOswPNUhWAaGIXhUoQuqWNdieLIlRErRrGLBVqBiWGlrXok5rpTalpudaRqHXHDOjvnevcuaLOLRnSnx+fbhvpTN8xJZ6HJF6mrNI4ilgrVZgDmlc0hy3tAn/dfR+A1hWV7nzIPqBC694Vte6OY2yldejDwdTuyRhmefcEIw6yQyrd/7QMIwymQ6q9ejHiwP4Zh9NB1Q4+LdyFAXVQvYuvrV8N6CRMRdvXF3q7ZLeNYbZIo3KbybyFMvKaRjgqTSLTddNivHqd41WnNA3Nm8hmPyyepHkjvrjUGMhrWwdYrmgdW5N1JK+37PXLMuwZc4DQ1xnDnU4Nd/6D/l+a9KZMf5dBOCVHAfYW0n7ghB5+WdAUQmXFOdOQq2zGY7Y7hW0Laq/JWrneJa+4vEZxGj0wBxgNhUmw3cZhW4n0ObPDNxowq8B3FpjYVXC+bybOD1VoH+ffGveNUjR0KsQCVZmicTjifG1b8pqFZRSqvHtBtsSVBWBnqzS0bkuUXsVlOAny+Fe7GTJLlDU8kZhWXBvdatsceW5bQvFAZSHY8MVxcnxODuLkFEoQ5FB7BodGtg3LsD3fXJeDqGO/2So+e9mqjtmB3cpOb4rmngBdm6sbxiV+g/PcEmjilvnDo/dldmVuAaY/ILdIXAcXc0vJEeACjqj5CExMBNqc5Fj+n1iJhZ5wFtPnxVl32rFskXaQ2dlgn4J2+nReiSdkaBAdZ2gNEHkm0gQiMIJIC4gkfh1FILK6ocic+HYbRNB0+oCIie8DLPQBgOU6YGKbpwtIJx0X48z1zQlw/fryeLETYPt8pRpAKPq57uEF1AHspzV7UegL/uPsBVl6Ji9Q4h3rSxmnseZ7kyTkjCHp30nwgpMpySKcVYpPScpW9KJgu6ptf4YGbqZ7Q99vmdOHE7PZ2zmBXbs3LxYA80256no0lLjyroWRFhLOA+bDYQTpwYjltOVqgwi8HYjcC4047XmjKhrhxA5IIzLf50gjSjGiiEZ4jAxGIyrctCONtGwJ9Iw0gJ+NgKFoRObmHGlEKUYsPRix3IFoRKFHdqSRooCnZTbCix1wNqLQ4TrSiBwjaoYaASODzUYUulNHGiltp4dGrKvRiGRX30gjajGiiEZ4jAxFI9X6QwMiBQwWcUj1RFKalgQHakYeOO9Zh18ssHPc68attguu8cj1X0yWsiJZ/JukeZCUeDp7wCIjedFwWrN3FMrqSJdGc53mXS5xxO2VkGxJhxIEQss9j7bOJ5BEd+e4gPGmtQCE/I5HyeYuXXvqELwdCr6XYZqbcrFxuueqpCDKtXhRCql2dDHqhwJQBgXkTvzmpQ0Wo1tRucuI381bbXG/2EnEC3ItbTAYPYf6YdDXV8gLotygb5gY/YO6V6LeMWMQRGmdMYxuQP1Q6D1jEEQNNmMYXX+qYYFMZQwhiNLKEKOHTz8UejOEIGoohqi8ibcAizthCN5DC33UDxSCIFcbO1jj3kj9MOjvceBFVeO6BiBAhUBowqDrAZ4LT6WeQUHrMEVJns3DFJZ1Y2hRdUqnKlchBeo5YSrsvvTfbBU//4Vqj4xaCv2ll5w6u3UKQkjQe+8JCn9IuCOw+lhTdHPew8KVxk8/1McuB1imsi47ng3u+Xi25Isb2o5nWzJn3/lPbmjSe4dPbmhQtAzfqhRNg6dvsRZsdfqgLXr8Hw==</diagram></mxfile>
2005.00705/main_diagram/main_diagram.pdf
ADDED
Binary file (25.3 kB)
2005.00705/paper_text/intro_method.md
ADDED
@@ -0,0 +1,78 @@
# Introduction

Accuracy evaluation is essential both to guide system development and to estimate system quality, which is important for researchers, developers, and users. It is often conducted using benchmarking datasets, which contain a data sample, *possibly* representative of the target data distribution, provided with Gold Standard (GS) labels (typically produced by a human annotation process). The evaluation is carried out by comparing the system output with the expected labels using suitable metrics.

This approach unfortunately falls short for generation tasks, in which the system output may span a large, possibly infinite, set of correct items. For example, in the case of Question Answering (QA) systems, the set of correct answers to the question *Where is Rome located?* is large. Since it is impossible, also for cost reasons, to annotate all possible system outputs, the standard approach is to manually re-evaluate each new output of the system. This dramatically limits experimentation velocity, while significantly increasing development costs.

Another viable solution in specific domains consists in automatically generating an evaluation score between the system and the reference answers that correlates with human judgement. The BLEU score, for example, is one popular measure in Machine Translation [@papineni-etal-2002-bleu]. This, however, can only be applied to specific tasks, and even in those cases it typically shows limitations [@DBLP:journals/corr/abs-1803-08409]. As a consequence, there is active research on learning methods to automatically evaluate MT systems [@ma-etal-2019-results], while human evaluation has become a requirement in machine translation benchmarking [@barrault-etal-2019-findings].

QA would definitely benefit from a similar approach, but automatic evaluation is technically more complex, for several reasons. First, segment-overlap metrics such as BLEU, METEOR, or ROUGE do not work, since the correctness of an answer only loosely depends on the match between the reference and candidate answers. For example, two candidate texts can be respectively correct and incorrect even if they differ by only one word (or even one character): for the question, *Who was the 43$^{rd}$ president of the USA?*, a correct answer is *George W. Bush*, while the very similar answer, *George H. W. Bush*, is wrong.

Second, the matching between the answer candidates and the reference must be carried out at the semantic level, and it is radically affected by the question semantics. For example, *match*$(t, r | q_1)$ can be true while *match*$(t, r | q_2)$ is false, where $t$ and $r$ are a pair of answer candidate and reference, and $q_1$ and $q_2$ are two different questions. This especially happens for so-called non-factoid questions, e.g., those asking for a description, opinion, manner, etc., which are typically answered by a fairly long explanatory text. For example, Table [\[nonfactq\]](#nonfactq){reference-type="ref" reference="nonfactq"} shows an example of a non-factoid question and three different valid answers, which are similar with respect to the question. However, if the question were, *what may cause anxiety?*, Answer 1 and Answer 3 would intuitively look less related to Answer 2.

In this paper, we study the design of models for measuring the Accuracy of QA systems. In particular, we design several pre-trained Transformer models [@devlin2018bert; @DBLP:journals/corr/abs-1907-11692] that encode the triple of question $q$, candidate $t$, and reference $r$ in different ways.

Most importantly, we built (i) two datasets for training and testing the point-wise estimation of QA system output, i.e., evaluating whether an answer is correct or not, given a GS answer; and (ii) two datasets consisting of the outputs of several QA systems, for which AVA is supposed to estimate the Accuracy.

The results show a high Accuracy for point-wise models, up to 75%. Regarding the overall Accuracy estimation, AVA can almost always replicate the ranking of systems, in terms of Accuracy, performed by humans. Finally, the RMSE with respect to human evaluation depends on the dataset, ranging from 2% to 10%, with an acceptable Std. Dev. lower than 3-4%.

The structure of the paper is as follows: we begin with the description of the problem in Sec. [3](#sec:problem){reference-type="ref" reference="sec:problem"}. This is followed by the details of the data construction and model design, which are key aspects of system development, in Sections [4](#sec:model){reference-type="ref" reference="sec:model"} and [5](#sec:data){reference-type="ref" reference="sec:data"}. We study the performance of our models in three different evaluation scenarios in Sec. [6](#sec:experiments){reference-type="ref" reference="sec:experiments"}.

# Method

We target the automatic evaluation of QA systems, for which system Accuracy (the percentage of correct answers) is the most important measure. We also consider more complex measures, such as MAP and MRR, in the context of Answer Sentence Reranking/Selection.

The task of reranking answer-sentence candidates provided by a retrieval engine can be modeled with a classifier scoring the candidates. Let $q$ be a question and $T_q=\{t_1, \dots, t_n\}$ a set of answer-sentence candidates for $q$; we define $\mathcal{R}$ as a ranking function that orders the candidates in $T_q$ according to a score, $p\left(q, t_i\right)$, indicating the probability that $t_i$ is a correct answer for $q$. Popular methods modeling $\mathcal{R}$ include Compare-Aggregate [@DBLP:journals/corr/abs-1905-12897], inter-weighted alignment networks [@shen-etal-2017-inter], and BERT [@garg2019tanda].

The evaluation of system Accuracy can be approached in two ways: (i) the evaluation of a single answer provided by the target system, which we call point-wise evaluation; and (ii) the aggregated evaluation over a set of questions, which we call system-wise evaluation.

We define the former as a function $\mathcal{A}\left(q, r, t_i\right) \rightarrow \{0, 1\}$, where $r$ is a reference answer (GS answer) and the output is simply a correct/incorrect label. Table [\[evaluator-input\]](#evaluator-input){reference-type="ref" reference="evaluator-input"} shows an example question associated with a reference, a system answer, and a short answer[^1].

A configuration of $\mathcal{A}$ is applied to compute the final Accuracy of a system using an aggregator function. In other words, to estimate the overall system Accuracy, we simply treat the point-wise AVA predictions as if they were the GS. For example, in the case of the Accuracy measure, we simply average the AVA predictions, i.e., $\frac{1}{|Q|}\sum_{q\in Q} \mathcal{A}(q,r,t_i[,s])$, where $s$ is a short answer (e.g., used in machine reading). It is an optional input, which we only use for a baseline, described in Section [4.1](#sec:linear){reference-type="ref" reference="sec:linear"}.

The main intuition behind building an automatic evaluator for QA is that the model should (i) capture the same information a standard QA system uses, while (ii) exploiting the semantic similarity between the system answer and the reference, biased by the information asked by the question. We build two types of models: (i) a linear classifier, which is more interpretable and can help us verify our design hypothesis; and (ii) Transformer-based methods, which have been successfully used in several language understanding tasks.

Given an input example, $\left(q, r, s, t\right)$, our classifier uses the following similarity features: $x_1$=*sim-token*$\left(s,r\right)$, $x_2$=*sim-text*$\left(r,t\right)$, $x_3$=*sim-text*$\left(r,q\right)$, and $x_4$=*sim-text*$\left(q,t\right)$, where *sim-token* between $s$ and $r$ is a binary feature testing whether $r$ is included in $s$, and $\emph{sim-text}$ is a sort of Jaccard similarity: $$\emph{sim-text}\left(s_i, s_j\right)= 2\frac{|\text{\emph{tok}}\left(s_i\right)\cap\text{\emph{tok}}\left(s_j\right)|}{|\text{\emph{tok}}\left(s_i\right)| + |\text{\emph{tok}}\left(s_j\right)|},$$ and $\emph{tok}\left(s\right)$ is a function that splits $s$ into tokens.

Let ${\bf x} = f\left(q,r,s,t\right)=\left(x_1, x_2, x_3, x_4\right)$ be a similarity feature vector describing our evaluation tuple. We train a weight vector ${\bf w}$ on a dataset $D=\{d_i:\left({\bf x}_i,l_i\right)\}$ using an SVM, where $l_i$ is a binary label indicating whether $t$ answers $q$ or not. We compute the point-wise evaluation of $t$ as the test ${\bf x}\!\cdot\!{\bf w} > \alpha$, where $\alpha$ is a threshold trading off Precision for Recall, as in standard classification approaches.

Transformer-based architectures have proved to be powerful language models that can capture complex similarity patterns. They are therefore suitable for improving the basic approach described in the previous section. Following the linear classifier modeling, we propose three different ways to exploit the relations among the members of the tuple $\left(q,r,s,t\right)$.

Let $\mathcal{B}$ be a pre-trained language model, e.g., the recently proposed BERT [@devlin2018bert], RoBERTa [@DBLP:journals/corr/abs-1907-11692], XLNet [@DBLP:journals/corr/abs-1906-08237], or ALBERT [@anonymous2020albert]. We use a language model to compute the embedding representation of the tuple members: $\mathcal{B}\left(a, a'\right) \rightarrow {\bf x} \in \mathbb{R}^d$, where $\left(a, a'\right)$ is a sentence pair, ${\bf x}$ is the output representation of the pair, and $d$ is the dimension of the output representation. The classification layer is a standard feedforward layer, $\mathcal{A}\left({\bf x}\right) = {\bf W}^{\intercal}{\bf x} + b$, where ${\bf W}$ and $b$ are parameters we learn by fine-tuning the model on a dataset $D$.
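
For concreteness, a minimal sketch of this design with the `transformers` library follows; the specific checkpoint and [CLS] pooling are our assumptions, not details given in the paper.

```python
# Sketch of B(a, a') -> x and the classification layer A(x) = W^T x + b.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
head = torch.nn.Linear(encoder.config.hidden_size, 2)

def score_pair(a, a_prime):
    enc = tokenizer(a, a_prime, return_tensors="pt", truncation=True)
    x = encoder(**enc).last_hidden_state[:, 0]  # [CLS] representation of the pair
    return head(x)                              # correct/incorrect logits
```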

We describe different designs for $\mathcal{A}$ as follows.

We build a language model representation for pairs of members of the tuple $x=\left(q,r,t\right)$ by simply inputting them to Transformer models $\mathcal{B}$ in the standard sentence-pair fashion. We consider four different configurations of $\mathcal{A}_0$: one for each of the pairs $\left(q,r\right)$, $\left(q,t\right)$, $\left(r,t\right)$, and one for the triplet $\left(q, r, t\right)$, modeled as the concatenation of the previous three. The representation for each pair is produced by a different and independent BERT instance, i.e., $\mathcal{B}_p$. More formally, we have the following three models: $\mathcal{A}_0\left( \mathcal{B}_p(p)\right)$, $\forall p \in \mathcal{D}_0$, where $\mathcal{D}_0 = \{(q,r), (q,t), (r,t)\}$. Additionally, we design a model over $(q, r, t)$ with $\mathcal{A}_0\left( \cup_{p \in \mathcal{D}_0} \hspace{.3em}\mathcal{B}_p(p) \right)$, where $\cup$ denotes concatenation of the representations. We do not use the short answer, $s$, as its contribution is minimal when using powerful Transformer-based models.

The models of the previous section are limited to pair representations. We improve on this by designing $\mathcal{B}$ models that can capture pattern dependencies across $q$, $r$ and $t$. To achieve this, we concatenate pairs of the three pieces of text above; we indicate this string concatenation with the $\circ$ operator. Specifically, we consider $\mathcal{D}_1 = \{(q,r \circ t), (r,q \circ t), (t,q \circ r)\}$ and propose the following $\mathcal{A}_1$. As before, we have the individual models, $\mathcal{A}_1\left( \mathcal{B}_p(p)\right)$, $\forall p \in \mathcal{D}_1$, as well as the combined model, $\mathcal{A}_1\left( \cup_{p \in \mathcal{D}_1} \hspace{.3em}\mathcal{B}_p(p) \right)$, where again, we use different instances of $\mathcal{B}$ and fine-tune them together accordingly.
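
A sketch of the combined design (one encoder instance per element of $\mathcal{D}_0$ or $\mathcal{D}_1$, with concatenated representations), under the same library assumptions as above:

```python
# Sketch of A(concat of B_p(p)): one encoder per pair, [CLS] outputs
# concatenated into a single classification layer; structure is ours.
import torch

class ConcatEvaluator(torch.nn.Module):
    def __init__(self, encoders):
        super().__init__()
        self.encoders = torch.nn.ModuleList(encoders)  # e.g., 3 BERT instances
        hidden = encoders[0].config.hidden_size
        self.head = torch.nn.Linear(hidden * len(encoders), 2)

    def forward(self, pair_inputs):  # one tokenized batch per pair, e.g. (q,r), (q,t), (r,t)
        reps = [enc(**inp).last_hidden_state[:, 0]
                for enc, inp in zip(self.encoders, pair_inputs)]
        return self.head(torch.cat(reps, dim=-1))
```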

Our previous designs instantiate a different $\mathcal{B}$ for each pair, learning the feature representations of the target pair, and the relations between its members, during the fine-tuning process. This individual optimization prevents capturing patterns across the representations of different pairs, as there is no strong connection between the $\mathcal{B}$ instances. Indeed, the combination of feature representations *only* happens in the last classification layer.

We propose *peer-attention* to encourage feature transfer between different $\mathcal{B}$ instances. The idea, similar to the encoder-decoder setting in Transformer-based models [@NIPS2017_7181], is to introduce an additional decoding step for each pair. Figure [1](#fig:peerattention){reference-type="ref" reference="fig:peerattention"} depicts our proposed setting for learning the representations of two different pairs: $a_0=\left(a, a'\right)$ and $g_0=\left(g, g'\right)$. The standard approach learns representations for these two in one pass, via $\mathcal{B}_{a_0}$ and $\mathcal{B}_{g_0}$. In the *peer-attention* setting, the representation output after processing one pair, captured in ${H}_{[CLS]}$, is input to the second pass of fine-tuning for the other pair. Thus, the representation of one pair can attend over the representation of the other pair during the decoding stage. This allows the feature representations from each $\mathcal{B}$ instance to be shared during both the training and prediction stages.

<figure id="fig:peerattention" data-latex-placement="t">
<img src="peer-attention-ava.png" style="width:80.0%" />
<figcaption>Peer attention on <span class="math inline">(<em>a</em>, <em>a</em><sup>′</sup>)</span> and <span class="math inline">(<em>g</em>, <em>g</em><sup>′</sup>)</span>.</figcaption>
</figure>

We now describe the datasets we created to develop AVA. First, we build two large-scale datasets for the standard QA task, namely **AS2-NQ** and **AS2-GPD**, derived from the Google Natural Questions dataset and our internal dataset, respectively. The construction of these datasets is described in Section [5.1](#sec:qadata){reference-type="ref" reference="sec:qadata"}. Second, we describe our approach to generating labelled data for AVA from the QA datasets in Section [5.2](#sec:avadata){reference-type="ref" reference="sec:avadata"}. Finally, we build an additional dataset constituted by a set of systems and their output on target test sets; this can be used to evaluate the ability of AVA to estimate end-to-end system performance (system-wise evaluation), and is described in Section [5.3](#sec:evaldata){reference-type="ref" reference="sec:evaldata"}.

Google Natural Questions (NQ) is a large-scale dataset for the machine reading task [@47761]. Each question is associated with a Wikipedia page and at least one long paragraph (`long_answer`) that contains the answer to the question. The `long_answer` may contain additional annotations of a `short_answer`, a succinct extractive answer from the long paragraph. A `long_answer` usually consists of multiple sentences, so NQ is not directly applicable to our setting.

We create AS2-NQ from NQ by leveraging both `long_answer` and `short_answer` annotations. In particular, the correct answers for a given question are the sentences in the long-answer paragraphs that contain *annotated* `short_answer`s. The other sentences from the Wikipedia page are considered incorrect. The negative examples can be of the following types: (i) sentences that are in the `long_answer` but do not contain *annotated* short answers (it is possible that these sentences contain the `short_answer`); (ii) sentences that are not part of the `long_answer` but contain a `short_answer` as a subphrase (such occurrences are generally accidental); and (iii) all the other sentences in the document.

The generation of negative examples impacts the robustness of the trained model when selecting the correct answer out of the incorrect ones. AS2-NQ has four labels that describe the possible confusion levels of a sentence candidate. We apply the same processing to both the training and development sets of NQ. This dataset enables an effective transfer step [@garg2019tanda]. Table [\[table:nq\]](#table:nq){reference-type="ref" reference="table:nq"} shows the statistics of the dataset.

A search engine over a large index can retrieve more relevant documents than those available in Wikipedia. Thus, we retrieved highly probable relevant candidates as follows: we (i) retrieved the top 500 relevant documents; (ii) automatically extracted the top 100 sentences, ranked by a BERT model over all sentences of the documents; and (iii) had all of the top 100 sentences manually annotated as correct or incorrect answers. This process does not guarantee that we have all correct answers, but the probability of missing them is much lower than for other datasets. In addition, this dataset is richer than AS2-NQ, as it consists of answers from multiple sources. Furthermore, the average number of answers per question is also higher than in AS2-NQ. Table [\[table:gpd\]](#table:gpd){reference-type="ref" reference="table:gpd"} shows the statistics of the dataset.

The AS2 datasets from the previous section consist of a set of questions $Q$. Each $q \in Q$ has candidates $T_q=\{t_1, \dots, t_n\}$, comprising both correct answers $C_q$ and incorrect answers $\overline{C_q}$, with $T_q = C_q \cup \overline{C_q}$. We construct the dataset for point-wise automatic evaluation (described in Section [4](#sec:model){reference-type="ref" reference="sec:model"}) as follows: to have both positive and negative examples for AVA, we first filter the QA dataset to keep only questions that have at least two correct answers; this is critical for building positive examples.

Formally, let $\left\langle q, r, t, l \right\rangle$ be an input for AVA; then $$\text{AVA-Positives} = \left\langle q; \left( r, t \right) \in C_q \times C_q \text{ and } r \ne t \right\rangle$$ We build negative examples as follows: $$\text{AVA-Negatives} = \left\langle q; \left( r, t \right) \in C_q \times \overline{C_q} \right\rangle$$ We create AVA-NQ and AVA-GPD from the QA datasets AS2-NQ and AS2-GPD. The statistics are presented on the right side of Tables [\[table:nq\]](#table:nq){reference-type="ref" reference="table:nq"} and [\[table:gpd\]](#table:gpd){reference-type="ref" reference="table:gpd"}.
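
The construction can be sketched in a few lines of illustrative code (names are ours):

```python
# Sketch of building AVA point-wise examples from an AS2-style question:
# positives pair two distinct correct answers, negatives pair a correct
# reference with an incorrect candidate.
from itertools import permutations, product

def build_ava_examples(q, correct, incorrect):
    if len(correct) < 2:  # questions need >= 2 correct answers
        return []
    pos = [(q, r, t, 1) for r, t in permutations(correct, 2)]
    neg = [(q, r, t, 0) for r, t in product(correct, incorrect)]
    return pos + neg
```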

To test AVA at the level of overall system Accuracy, we need a sample of systems and their output on different test sets. We create a dataset of candidate answers collected from eight systems over a set of 1,340 questions. The questions were sampled from an anonymized set of user utterances; we only considered information-inquiry questions. The systems differ from each other in multiple ways, including: (i) *modeling*: Compare-Aggregate (CNN-based) and different Transformer-based architectures with different hyper-parameter settings; (ii) *training*: the systems were trained on different resources; and (iii) *candidates*: the pools of candidates for the selected answers are different.
2005.06331/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="www.draw.io" modified="2020-03-03T15:09:24.706Z" agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.122 Safari/537.36" etag="f_RD5oAgpj-X_VDu7gZl" version="12.7.9" type="google"><diagram id="6a731a19-8d31-9384-78a2-239565b7b9f0" name="Page-1">7Vxbc9o4FP41zHQf4tHFkuzHhpR2dre7ncnutn0URgE3xqK2yaW/fiUsg2VDQsAYw+DJEOvoYln6zicd6Vg93J8+fUz4bPJZjkTUQ2D01MM3PYSYR9WvFjznAoJwLhgn4SgXwZXgNvwljBAY6TwcidRKmEkZZeHMFgYyjkWQWTKeJPLRTnYnI/upMz4WNcFtwKO69Gs4yiZGCqm/ivgkwvHEPNpDLI8Y8uB+nMh5bJ7XQ/huceXRU16UZV40nfCRfCyJ8Ice7idSZvnd9KkvIt20RbPl+QYbYpf1TkScbZWBuSTP88CjuSgqvaha9lw0hxip1jHBWMbq3/XiHYUuBajQj/l0VkkwyaaRCkF1u3xJnTTiQxFdL5upLyOZrHKlGU+y97oHK7JBGEWmBBGPihRBxNM0DHKhSaKf+ENk2bOBFZ9nUolkkk3kWMY8+lPKWVGzLJH3oqiD6iw0INesv4wpOh8ryZ2MswGfhpHG9CcRPYgsDLiJMM+C1ITLRWLXJTe6yBkPwnispERX+CnMvhVvpO6/63uHFVE3T6W4m2cTqHev6fFUzpNALPvUwFQ13FgUCRl1c6nuzVJmA4yPQk5FljyrBImIeBY+2LrAjUqNl+mWWb/IUFUHAaP9BHh5FqP8kAAHImIXk1fY5FwBVN2UqrISLWC7EcL0AuGjQngF2++mVgeCMINNQ9gC3ssoYxeUHRdl24NMYSt5LmXSwe/luFW2RagZfgUt8StlDvaAryZV+a/NtgxtxbRraBs4qFyuVSyG2C42f/eGCNy7qNYxVGsvuBN8pOkEYg6DFBV/0D3gzMKvAVNcBXI6Fbp5VNOpN7uTybSG1hU0dRc/TsJM3Kqm17GPymLaA5tlsFhCCycKhusQtSt+BotLyXkUjmMNeoUYkRQZzXvDGsB4EphSmQrNRBKq5hfJ7TIRegmFDyLJxNOL2DKxtOBegxHPBB9LxhsxsknJbmOgiZkBgTWUvI2+YNv0tTM3ffAIddHu3OS+gC0bOlVgVSA9BAILuj2HFWxV4TByvPlkUaMSaqY8jIvViW4ziihwsHev1xjlZRi0xSi4wigQbEkptBlKqa/KqHdNtXqGqUKBRuRiHIpHCpcyTjuOlzvgCQ+2jBfQIl5gFS9+u3i5GKclsEDq+wsUHnyUWhmnqDPWaYGoinXa0nTddbFDPA9hgnwXQmZP3jFjW83W68VC6ujiqMuALt61rV7se3a5TZqn5GKeHlm5uq5bqB3dUrauQ6CLPMLyX1sHimq8eeXHJ467LBNWVMtlB1z5KdqzpFqBgnJPb1sNeno/Lb0XWccnN1ojRo3gv6OT4ap5vbV93czkpsB1GSRyOouEmv4qMOqKzhI5mgdZ16fB1B0ORRtmU3vIIDZZFIZLS8io29BBItP0Ks1TnpaRdHboIMzmDey1iw63hg5lPWf6FbsNhAHw1Cz3jAcU7Nq0sXT8aQkYdWu5srqyxImikKuEx/fdB80d4XfKUjkf9sBV9gBr2AMfDiQXq28dJR3c6tuInIolZsi9YokZzT7Ggj6tbxZWWKWHKJ9qXoiHqf53WtOTsyOY6potJq2OQqxu+14IpvME4x+PYFh9n9nYvb1i0xC8W0nUE/iVohbeWzhKK6XIfus4xUDE+fCcJ74eqdjL0HX88tUqAdXto9c2Gfsa+s7YUf9U16kWWiQblpZgNNh4JsbS1Ka7YKMwoPrZZ7sn6Xv2+EbwluZ3M24xbN8vC07HLQai9++vWxm8GnaLKYaUipt1418KvAE1dWf+hzCd8xNczjt3fqkOZhS1urzH9vV5uPDLsfiFHpFf6ss6aaiajye909lGOndmYchmFuK2yyz1lZwLs5wGs3jHYpb5r9/h38T/768/PPL1n39/0s/g29VaKKVCqdEkXw6MVM2vh4p66Fjfvfs5F4vqnYi9DkC/T2kTWOmove4D24RaLhm2QUQbIOWtWQayzfYlwhKRzqPOD2foZsD8wRmjCAJowwi16R28CUaoBiPFOCKJtanVabz0+8B1b84YL9ivbEy06W+1CS6vLhZOeRh1nWoGg766zhg6Lq4sMPstzpw3Qed1dwsej3J3C72NcQHRkUGEKhujnQDRxR3jqMcvFG73+Tk1Lx9TU3LCdynsldzwr4ADXNZ72Rd/EfpSoLu399EMr5hmB/faR8j+dMXSrWIa9lanfeb5jl8qtuK0jw/2Pcwm/dx3zeSin3sej7LbF2isrQNSXjFDDq6FGFkaUt3j3FbvfFwx5VDbmubv6zd0OquTnTluwPeBuix1a/rMq1f0YxtVaumANwiAPYa5wHW2O99tTVkVJ4U1ZR1eo+pLIReNal+jMCirFHB86m87ir1pENOF7DS9bEBFG/dP3qRW1fO7yI7jnVLQ6ifcoG31rH/lda7q2ZntOEoHg2u864B30K+wXzHnttHCxo+R3DBXpMjxmYuJZ37tkc7b8SwEpdzYKZXKKm44jR3Vp4Kro4jz5KvjnvGH/wE=</diagram></mxfile>
2005.06331/main_diagram/main_diagram.pdf
ADDED
Binary file (22.8 kB)
2005.06331/paper_text/intro_method.md
ADDED
@@ -0,0 +1,77 @@
# Introduction

Our platform is adapted to consume business customers' standard-format product feed APIs and our proprietary product catalog database infrastructure (see Fig. [2](#fig:architecture){reference-type="ref" reference="fig:architecture"}).

<figure id="fig:architecture" data-latex-placement="!ht">
<img src="img/conceptual_diagram.jpg" style="width:16cm" />
<figcaption>A general architecture of our recommendation platform</figcaption>
</figure>

The system is based on the Reactive Microservices Architecture [@RMA_2016; @rmanifesto], implementing its core principles: elasticity, scalability, fault tolerance, high availability, message-driven communication, and real-time processing. Real-time processing in particular is crucial for providing tailored, high-quality recommendations that take into account not only the latest changes in in-session user behavior, but also changes in system performance. Not only are scores and recommendations calculated at request time, but user representations are also updated and exposed to models after each event flowing through the event stream.

The conceptual diagram of the architecture is presented in Fig. [2](#fig:architecture){reference-type="ref" reference="fig:architecture"}. The system is accessible through an extensive API exposed by the recommendations facade. When a new recommendation request arrives, before it is passed to the recommendation logic module, it is validated by the facade and enriched with business rules via recommendation campaigns. Rules may include the type of recommendation, the recommendation goal, or filtering expressions formulated in our dedicated control language, the items query language (IQL).

The custom IQL query language provides a very flexible framework for building new recommendation scenarios based on item meta-data and the recommendation request context. Fig. [3](#fig:iql){reference-type="ref" reference="fig:iql"} shows a few examples of recommendation filtering rules. IQL expressions are handled by an items filter, which filters candidate items based on the given constraints. To achieve high throughput and low latency, the items filter uses its own compressed binary representation of items, serving thousands of requests per second while filtering sets of over a million items. For IQL expressions with low selectivity, transferring the data structure containing candidate item IDs over the network infrastructure could be expensive, so a binary protocol between the filter and the logic module has been implemented. The model that will handle the request is selected by the Optimizer. The Optimizer implements a form of a Thompson Sampling algorithm for multi-armed bandit problems, which allows us not only to easily A/B test new ideas and algorithms, but also to optimize the results of running recommendation campaigns. Finally, one of the models receives a request to score the available candidates and to update entity embeddings.
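
As an illustration, a minimal Thompson Sampling loop for model selection might look as follows; the text only states that the Optimizer implements "a form of" Thompson Sampling, so the Beta-Bernoulli variant, the reward model, and all names below are our assumptions.

```python
# Illustrative Beta-Bernoulli Thompson Sampling for routing requests to models.
import random

class Optimizer:
    def __init__(self, models):
        self.stats = {m: [1, 1] for m in models}  # Beta(a, b) posterior per model

    def select(self):
        # Sample a plausible reward rate per model; route to the best draw.
        draws = {m: random.betavariate(a, b) for m, (a, b) in self.stats.items()}
        return max(draws, key=draws.get)

    def update(self, model, success):
        a, b = self.stats[model]
        self.stats[model] = [a + success, b + (1 - success)]
```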

Although most of the system works in real time, an offline part is also present, but it is limited to model training. Algorithms are trained on two main data sources. The first is a data lake into which events of different types and origins are ingested through an event stream; example event types include a screen view from a mobile app, a product added to cart on a web page, and an offline transaction from a POS system. The second source is a master item meta-data database, where items are kept along with their attributes and rich data types such as images.

<figure id="fig:iql" data-latex-placement="!ht">
<figcaption>Items query language implemented and used in our recommendation system</figcaption>
</figure>

Our algorithms can be fed various kinds of input data. The system analyzes the long- and short-term interaction history of users and has deep insight into item metadata. For this purpose we use a multi-step pipeline, starting with unsupervised learning. For images and texts, off-the-shelf unsupervised models may be used. For interaction data, we identify graphs of user-entity interactions (e.g., user-product, user-brand, user-store) and compute multiple graph or network embeddings.

We developed a custom method[^1] for massive-scale network embedding, for networks with hundreds of billions of nodes and tens of billions of edges. The task of network embedding is to map a network or a graph into a low-dimensional embedding space while preserving higher-order proximities between nodes. In our datasets, nodes represent interacting entities, e.g., users, device IDs, cookies, products, brands, title words, etc. Edges represent interactions, with a single type of interaction per input network, e.g., purchase, view, hover, search.

Similar network embedding approaches include Node2Vec, DeepWalk and RandNE [@zhang2018billion]. These approaches exhibit several undesirable properties, which our method addresses. Thanks to careful algorithm design and a highly optimized implementation, our method offers:

- three orders of magnitude improvement in time complexity over Node2Vec and DeepWalk,

- deterministic output -- embedding the same network twice results in the same embeddings,

- stable output with regard to small input perturbations -- small changes in the dataset result in similar embeddings,

- the inductive property and dynamic updating -- embeddings for new nodes can be created on the fly,

- applicability to both networks and hyper-networks -- support for multi-node edges.

The input data is constructed from raw interactions -- an edge (hyperedge) list, for both simple networks and hypernetworks. In the case of hypernetworks, where the cardinality of an edge is larger than 2, our algorithm performs implicit clique expansion in memory (to avoid excessive storage needs for an exploded input file). For very wide hyperedges, star expansion results in fewer edges and can be used instead, via an input file containing virtual interaction nodes.

Our custom method works as follows. We first initialize the node vectors (the $Q$ matrix) deterministically, via multiple independent hashes of the node labels mapped to a constant interval, resulting in vectors sampled from a uniform $(-1, 1)$ distribution. Empirically, we find that a dimensionality of 1024 or 2048 is enough for most purposes. We then calculate a Markov transition matrix ($M$) representing network connectivity; in the case of a hyper-network, we perform clique expansion, adding virtual edges. The final node embeddings are obtained by iteratively multiplying $M Q$ and L2-normalizing the result at each intermediate step. The number of iterations depends on the distributional properties of the graph, with 3 to 5 iterations being a good default range.
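
A toy dense version of this iteration, under our reading of the description, is shown below; the real system uses sparse storage and dimension-level parallelism, and initializes via label hashing rather than the seeded RNG used here.

```python
import numpy as np

def embed(M, n_nodes, dim=1024, iters=4, seed=0):
    # Deterministic init standing in for multiple independent label hashes.
    rng = np.random.default_rng(seed)
    Q = rng.uniform(-1.0, 1.0, size=(n_nodes, dim))
    for _ in range(iters):
        Q = M @ Q  # propagate along the Markov transition matrix
        Q /= np.linalg.norm(Q, axis=1, keepdims=True) + 1e-12  # L2-normalize rows
    return Q

# Inductive extension: Q_new = normalize(M_new @ Q), with M_new holding
# transition weights from new nodes to existing ones.
```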

The algorithm is optimized for extremely large datasets:

- the Markov transition matrix $M$ is stored in COO (coordinate) format, in RAM or in memory-mapped files on disk;

- all operations are parallelized with respect to the embedding dimensions, because the dimensions of the vectors in $Q$ are independent of each other;

- the $M Q$ multiplication is performed with dimension-level concurrency as well;

- clique expansion for hyper-graphs is performed virtually, only filling the entries of the $M$ matrix;

- star expansion is performed explicitly, with a transient column for the virtual nodes in the input file.

The algorithm's result is the entity embeddings contained in the $Q$ matrix. Inductive embeddings (for new nodes) can be created from raw network data using the formula $M' Q$, where $M'$ represents the links between existing and new nodes and $Q$ holds the embeddings of the existing nodes.

It is worth noting that the algorithm performs well not only on interaction networks, but also on short text data, especially product metadata. In this setting, we consider the words in a product title to be a hyperedge. This corresponds to star expansion, where product identifiers are virtual nodes linking title words.

However, our general pipeline can easily use embeddings calculated with the latest language modeling techniques, e.g., ELMo or BERT embeddings, especially for longer texts.

Another data source is visual data (shape, color, style, etc.), i.e., images. To prepare the visual data feed for our algorithm, we use state-of-the-art deep neural networks [@kucer_detect-then-retrieve_2019; @dodds_learning_2018] customized for our use case [@wieczorek2020strong].

Indeed, any unsupervised learning method outputting dense embeddings can be considered as input to our general pipeline.

Having unsupervised dense representations coming from multiple, possibly different algorithms -- representing products or other entities the customer interacts with -- we need to aggregate them into fixed-size behavioral profiles for every user.

As most representation learning methods assume nothing about embedding compositionality (with only simple assumptions made by Bag-of-Words models), we developed a custom compositionality mechanism that allows meaningful summation of multiple items.

Our algorithm performs multiple feature-space partitionings via vector quantization. It combines ideas derived from Locality Sensitive Hashing and the Count-Min Sketch algorithm with geometric intuitions. The sparse representations resulting from this approach exhibit additive compositionality, due to Count-Sketch properties (for a set of items, the sketch of the set is equal to the sum of the separate sketches).

All modalities and views of the data (all embedding vectors) are processed in this way, and their sketches are concatenated.

One of the central advantages of the algorithm is its ability to squash the representations of multiple objects into a much smaller joint representation, which we call a *sketch*, that allows for easy and fast retrieval of the participating objects, analogously to a Count-Min Sketch. For example, the purchase history of a user can be represented as a single sketch, the website browsing history as another sketch, and the sketches concatenated.

Subsequently, the sketches containing squashed user behavioral profiles serve as input to relatively shallow (1-5 layers) feed-forward neural networks. The output of the neural network is also structured as a sketch, with the same layout.

Training is done with a cross-entropy objective in a depth-independent way (output sketches are normalized to 1 across the width dimension). During inference, we perform a sketch readout operation, as in a classic Count-Min Sketch, but replacing the minimum operation with a geometric mean -- effectively averaging log-probabilities.
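
A toy version of the encode/readout cycle follows; the hash function, sizes, and count-based values are illustrative assumptions, not the production design.

```python
import hashlib
import numpy as np

DEPTH, WIDTH = 4, 128  # independent hash functions x buckets per function

def bucket(item, row):
    return int(hashlib.sha256(f"{row}:{item}".encode()).hexdigest(), 16) % WIDTH

def encode(items):
    # Additive compositionality: the sketch of a set = sum of item sketches.
    S = np.zeros((DEPTH, WIDTH))
    for item in items:
        for row in range(DEPTH):
            S[row, bucket(item, row)] += 1.0
    return S

def readout(S, item):
    # Geometric mean across rows instead of the classic Count-Min minimum,
    # i.e., averaging log-values.
    vals = np.array([S[row, bucket(item, row)] for row in range(DEPTH)])
    return float(np.exp(np.log(vals + 1e-12).mean()))
```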
2006.15009/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2022-02-10T09:40:02.510Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.114 Safari/537.36" etag="85XR8zwL7TS3G7flWuW8" version="16.4.3" type="google"><diagram id="EMDp3J2yuwXNwzJaAlZq" name="Page-1">7VxLk5s4EP41rto9zJQkkLCP80py2Mlmd7ZqN6cUNhqbCgYHxIydX7+SQdgIYYNtHraTQ8ZqpJYQX3/qbgkGxsN8+TG0F7PnwKHeAAFnOTAeBwghE2P+R0hWiQSOSCKYhq6TijaCF/cnTYUglcauQ6NcRRYEHnMXeeEk8H06YTmZHYbBe77aa+Dle13YU1oQvExsryj913XYLJEOMdjIP1F3OpM9Q5BemduyciqIZrYTvG+JjKeB8RAGAUt+zZcP1BOTJ+clafeh5Go2sJD6rEoDNPQpcV+e3/6a/OnOv41fRs7dzchK1LzZXpze8QARjyu8fw24XjGxXhCur5AfsRjr/QAZyb9tEXi1JzRXbRLMFzGjoZgLDovQ365OpunfdVfRwvbFDLGVl9chxnATrUFxxysYYLEs1zKWgs90ydbqbEblRT4vY7UBlyU9F8TJvUsxyo0NhUHsO1TMKuSX32cuoy+L5O7fuRFw2YzNvfRyxMLgewYdM5M8ZNMqJ1P2tP1E04f8RkNGl1ui9Al/pMGcsnDFq8irJkkNbpUJUvy9b+CLJEhnW9Adpg3t1GKmmfINqPiPFFd1MDbSYEyZU+o7d8JaeWni2VHkTg6fRj574eo/LgS3WBa/iqIsPC5zpVVaEg/9JR0STMs6/UL+wZ67nmj4sEH5c4Jyrnbpsq3+eelrqlL83vQuCrLz0mcfBXGYWtZuE2Z2OKVsV8VhUpE6OcorYmkLKVgDFCkLqWcz9y1PlDr0pD18Cdw1o8i1AUAFqoTAW0DyepLbT5tus9tebZZV1JbMUUHbGtbZBByOdAikwbYE9dNBlmNCKgpCNgumgW97Txvp/SQO3zLSkwiHW/jeoF2P8MwsYc4oNzZaYpacxkMmZ8wPfNqOtUDJhz0xF0hMoAAcHG4ukIwUbRigW1zNXPjDsFdb1RaiQrRj6BYpdGbsHl6hhYFyLfiPZBQntl/U1VJVzya6tntQw+4dO5plCkThi83Yuncu4bR9nDkPK5uz0S9zxha+Ta1AghyB0aHmbI7Afm2nMmfDUoyTDPeYc6EFTOPCRs1ZgqM8uCkELUUb2Y5xagcoVaOnkkDm3p58v4kXe2KYCwpWsjWus2Al4wkNaBz3TQuCfYFtXlYGvRrxcoaCv0X2oA469gW+61ssgRITYfVeDK2XpiSNY8ilaucSVLZ22Z47FYvEhAOMtzgN4jBUicgoIg7CHbzfAOLMcw0aDohzG/UGMuM9P+8eq969KfyDQ92BgjYs/IOWg2E5xZeYW7QnzA38q08uomH36zVplz2vLLuYWfH+9KLVM0ZFClgxUEBYnU5VVcSwbhVljbOpbqfmLLyEA1MMNeBdllpELecWqyYjepaJh8RSN40spOK7emZRVWbAgrJTZSKwuiANAdg9OouQY1sQAhULbyIVKfOzGu9pt0vzmhqjcGr4JXu+SP0qs8Rxytco+kDOHpcmisctjkZxIff6dUjr183tZYUYOR6X+mmXEAxbuLCj0HUwjLQZ+HMPGv4IJra3XkG8+ArjBozymelNFqazsAEBXXB6we5UGzu1h3tOCFaNMjKC6I3vZAzz4IamQo7VPSdTUWWqqk7kN1kjpSMLo51DKzTA+QbN+EAINHc6bedGT6OLQYWc/cUvAEYPFoCWT6V1mv85NGl1BKODqgdt+sboQ6jEeyNwRCYeYYU5gVWN009GorA8kDxfj/qjF4zP1qU+Zg/sCNKF3ZMubG5LqLsF/Z/QFVA8H/w1AS6rB+BqeSfoLFf0Rvffs7WmwqLfs90ipMZeWS6s/lk8oAZLqqrGl/y9cVO9pbxClpXoqJIlzAjGsSPgIJO7/jha1ODB2qlWRKpYXZupVlP1Aonu2NFIg2/SWKZVsmPXIIkXTv49rUsHgzWsAgZd2r1BMJSfevwFhibBYKgHEtGoIhjU3OAJwXBNBxKP8HZk4mK/twP7ddqQGIr3bh56NKagCaOWfR10ZRs5dV69OfT1omaDhBpmA3plNpYKdoscfEymoAvjpk7JGHD35s3+BrCV3R6ki9878EFc32VX5IFgdW/P1LyE07Y7em2HHXOkjpph9VYou2d5HVyBZqtSdkEXHDVF2aaajcIG3Dm2QgOjHcqu/4ZlI5Qd0atm7OwbQx0ytm47/RcSms4rwkIg1yIS/B/202fHvPe9TxQ/z0Hw8/HbTU+yipeUei7gQYOa8lO+SM0wGboMU1O5Zy1Gzja/VNO7O+rUjeJBaZmi6uGaztwvpGCvkB462PkyirH3iZyvbLtOHXTp2NQGMoIqzZqpu+dKg6O9Na3VlZ+9KPuwQSWm7YTef8R0DePnxy+lXN7MRw16yPBYfY9Dy/DWDjM/OcOXp3JyD+PoL2JE1BNf/VRfyB6HssJv4vMpgAX8Pyc7/7P9Xk+9t4kgPsXbRL//wiwZqpjVvEpuIA1msy92nBy05acmzo8gSwzjegGH1S/KIV2odCKS5MXNh32T9XvzeWTj6X8=</diagram></mxfile>
2006.15009/main_diagram/main_diagram.pdf
ADDED
Binary file (37.2 kB)
2006.15009/paper_text/intro_method.md
ADDED
@@ -0,0 +1,344 @@
# Introduction

Sequential decision making is a key challenge in artificial intelligence (AI) research. The problem, commonly formalized as a Markov Decision Process (MDP) [@bellman1954theory; @puterman2014markov], has been studied in different research fields. The two prime research directions are *reinforcement learning* (RL) [@sutton2018reinforcement], a subfield of machine learning, and *planning* (also known as *search*), of which the discrete and continuous variants have been studied in the fields of artificial intelligence [@russell2016artificial] and control theory [@bertsekas1995dynamic], respectively. Departing from different assumptions, both fields have largely developed their own methodology, which has cross-pollinated in the field of *model-based reinforcement learning* [@sutton1990integrated; @moerland2020model; @hamrick2019analogues; @plaat2021high].

However, a unified view of both fields, including how their approaches overlap or differ, is lacking in the literature. For example, the classic AI textbook by @russell2016artificial discusses (heuristic) search methods in Chapters 3, 4, 10 and 11, while reinforcement learning methodology is separately discussed in Chapter 21. Similarly, the classic RL textbook by @sutton2018reinforcement does discuss a variety of the topics in our framework, but never summarizes these as a single algorithmic space. Moreover, while that book does extensively discuss the relation between reinforcement learning and dynamic programming methods, it does not focus on the relation with the many other branches of planning literature. Therefore, this paper introduces a Framework for Reinforcement learning and Planning (FRAP) (Table [\[table_framework\]](#table_framework){reference-type="ref" reference="table_framework"}), which attempts to identify the underlying algorithmic space shared by RL and MDP planning algorithms. We show that a wide range of algorithms, from Q-learning [@watkins1992q] to Dynamic Programming [@bellman1954theory] to A$^\star$ [@hart1968formal], fit the framework, simply making different decisions on a number of its subdimensions (Table [\[table_overview\]](#table_overview){reference-type="ref" reference="table_overview"}).

We need to warn experienced readers that many of the individual topics in the paper will be familiar to them. However, the main contribution of this paper is not the discussion of these ideas themselves, but the *systematic structuring* of these ideas into a single algorithmic space (Algorithm [\[alg_frap\]](#alg_frap){reference-type="ref" reference="alg_frap"}). Experienced readers may therefore skim over some sections more quickly, and focus on the bigger integrative message. As a second contribution, we hope the paper points researchers from each of the two fields towards relevant literature from the other, thereby stimulating cross-pollination. Third, we note that the framework is equally useful for researchers in model-free RL, since to the best of our knowledge 'a framework for reinforcement learning' does not exist in the literature either ('a framework for planning' does; see Related Work). Finally, we hope the paper may also serve an educational purpose, for example for students in a university course, by putting algorithms that are often presented in different courses into a single perspective.

We also need to clearly demarcate what literature we do and do not include. First of all, planning and reinforcement learning are huge research fields, and the present paper is definitely *not* a systematic survey of both (which would likely require multiple books, not a single article). Instead, we focus on the core ideas in the joint algorithmic space and discuss characteristic, well-known algorithms to illustrate these key ideas. On the planning side of the literature, we exclusively focus on planning algorithms that search for *optimal behaviour* in an MDP formulation, which for example excludes all non-MDP planning methods, as well as 'planning as satisfiability' approaches, which attempt to verify whether a path from start to goal exists at all [@kautz1992planning; @kautz2006satplan]. On the reinforcement learning side, we do not cover approaches that treat the MDP formulation as a *black-box optimization problem*, such as evolutionary algorithms [@moriarty1999evolutionary], simulated annealing [@atiya2003reinforcement] or the cross-entropy method [@mannor2003cross]. While these approaches can be successful [@salimans2017evolution], they typically only require access to an evaluation function, and do not use MDP-specific characteristics in their solution (on which our framework is built).

The remainder of this article is organized as follows. After discussing Related Work (Sec. [2](#sec_relatedwork){reference-type="ref" reference="sec_relatedwork"}), we first formally introduce the MDP optimization setting (Sec. [3.1](#sec_mdp){reference-type="ref" reference="sec_mdp"}), the way we may get access to the MDP (Sec. [3.2](#sec_model_types){reference-type="ref" reference="sec_model_types"}), and give definitions of planning and reinforcement learning (Sec. [3.3](#sec_plan_rl_definitions){reference-type="ref" reference="sec_plan_rl_definitions"}). The next section provides brief overviews of literature in planning (Sec. [4.1](#sec_planning){reference-type="ref" reference="sec_planning"}) and reinforcement learning (Sec. [4.2](#sec_rl){reference-type="ref" reference="sec_rl"}). Together, Sections [3](#sec_problem_def){reference-type="ref" reference="sec_problem_def"} and [4](#sec_literature){reference-type="ref" reference="sec_literature"} should establish common ground to build the framework upon. The main contribution of this paper, the framework, is presented in Section [5](#sec_unifying_view){reference-type="ref" reference="sec_unifying_view"}, where we systematically discuss each consideration in the algorithmic space. Finally, Section [6](#sec_comparison){reference-type="ref" reference="sec_comparison"} illustrates the applicability of the framework, by comparing a range of planning and reinforcement learning algorithms along the framework dimensions, and identifying interesting directions for future work.

# Method

We will now introduce the Framework for Reinforcement Learning and Planning (FRAP). Pseudocode for the framework is provided in Algorithm [\[alg_frap\]](#alg_frap){reference-type="ref" reference="alg_frap"}, while the individual dimensions are summarized in Table [\[table_framework\]](#table_framework){reference-type="ref" reference="table_framework"}. We will first cover the high-level intuition of the framework, as visualized in Figure [2](#fig_query_backup){reference-type="ref" reference="fig_query_backup"}. FRAP centers around the notions of *root states* and *trials*.

> *A root state is a state for which we attempt to improve the solution estimate.*

> *A trial is a sequence of forward actions and next states from a root state, which is used to compute an estimate of the cumulative reward from the root state.*

The central idea of FRAP is that all planning and reinforcement learning algorithms repeatedly 1) fix root states, 2) make trials from these root states, 3) improve their solution based on the outcome of these trials, and 4) use this improved solution to better direct new trials and better set new root states. FRAP therefore consists of an *outer loop* (the while loop on Alg. [\[alg_frap\]](#alg_frap){reference-type="ref" reference="alg_frap"}, line 4), in which we repeatedly set new root states, and an *inner loop* (the while loop on Alg. [\[alg_frap\]](#alg_frap){reference-type="ref" reference="alg_frap"}, line 5), in which we (repeatedly) make trials from the current root state to update our solution. We will briefly discuss both loops.

A schematic illustration of the outer loop is shown on the left side of Fig. [2](#fig_query_backup){reference-type="ref" reference="fig_query_backup"}. The algorithm starts by potentially initializing a global solution (for all states), and subsequently fixing a new root state. Then, we initialize a local solution for the particular root, and start making trials from the root, each of which updates the local solution. When we run out of trial budget for this root, we may use the local solution to update the global solution (when one is used). Afterwards, we fix the next root state and initialize a new local solution, in which we may reuse information from the last local solution (Alg. [\[alg_frap\]](#alg_frap){reference-type="ref" reference="alg_frap"}, line 9). The outer loop then repeats for the new root state.

The inner loop of FRAP consists of trials, and is schematically visualized on the right of Fig. [2](#fig_query_backup){reference-type="ref" reference="fig_query_backup"}. A trial starts from the root node and consists of a forward sequence of actions and resulting next states and rewards, which are obtained from *queries* to the MDP dynamics. This process repeats $d_{\max}$ times, where the specification of $d_{\max}$ depends on the local solution and differs between algorithms. The forward phase of the trial then halts, after which we possibly *bootstrap* to estimate the remaining expected return from the leaf state, without further unfolding the trial. Then, the trial proceeds with a sequence of *one-step back-ups*, which process the information acquired in the forward phase. We repeat the trial process until we run out of budget, after which we fix a new root state (Alg. [\[alg_frap\]](#alg_frap){reference-type="ref" reference="alg_frap"}, line 8).

Action selection in FRAP not only happens within the trial (Alg. [\[alg_frap\]](#alg_frap){reference-type="ref" reference="alg_frap"}, line 16), but in many algorithms is also part of next-root selection (Alg. [\[alg_frap\]](#alg_frap){reference-type="ref" reference="alg_frap"}, line 8). It is important to mention that in the case of model-free RL, where we have irreversible access to the MDP dynamics, these two action selection moments are actually equal by definition. For example, a model-free RL agent may fix a root, sample a trial from this root, and use it to update the global solution. However, because the environment is irreversible, the next-root selection has to use the same action and resulting next state as was taken within the trial. Model-free RL agents therefore have some specific restrictions in the FRAP pseudocode, as illustrated on the blue lines of Alg. [\[alg_frap\]](#alg_frap){reference-type="ref" reference="alg_frap"} (the trial budget per root, for example, is also by definition equal to one).
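
As a concrete illustration of these restrictions, the following sketch (ours, not the paper's pseudocode) shows how tabular Q-learning instantiates the FRAP loops: one trial of depth one per root, a one-step back-up, and a next root forced to equal the trial's next state. It assumes a small `mdp` object exposing `states`, `actions`, `reset`, `done`, and an irreversible `query`; terminal-state handling is omitted for brevity.

```python
import random

def q_learning_as_frap(mdp, episodes, alpha=0.1, gamma=0.9, eps=0.1):
    Q = {(s, a): 0.0 for s in mdp.states for a in mdp.actions}  # global solution
    for _ in range(episodes):
        root = mdp.reset()                      # outer loop: fix a root state
        while not mdp.done(root):
            # Inner loop: exactly one trial of depth 1 per root.
            a = (random.choice(mdp.actions) if random.random() < eps
                 else max(mdp.actions, key=lambda act: Q[(root, act)]))
            s2, r = mdp.query(root, a)          # forward phase: query the dynamics
            target = r + gamma * max(Q[(s2, a2)] for a2 in mdp.actions)
            Q[(root, a)] += alpha * (target - Q[(root, a)])  # one-step back-up
            root = s2                           # irreversible: next root = next state
    return Q
```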

FRAP is therefore really a conceptual framework, and practical implementations may differ from the pseudocode in Alg. [\[alg_frap\]](#alg_frap){reference-type="ref" reference="alg_frap"}. For example, many planning methods store an explicit frontier, i.e., the set of nodes that are candidates for expansion. Practical implementations would directly jump to the frontier, and not first traverse the known part of the tree from the root, as happens in each trial of Alg. [\[alg_frap\]](#alg_frap){reference-type="ref" reference="alg_frap"}. However, it is conceptually useful to still think of these forward steps, since they will be part of the back-up phase (we are eventually looking for a good decision at the root). Another example would be a model-free RL agent that uses a Monte Carlo return estimate. Practical implementations may sample a full episode, compute the cumulative reward starting from each state in the episode, and jointly update the solution for all these states. Conceptually, however, every state in the episode has then been a root state once, for which we compute an estimate. In FRAP, we would therefore see this as sampling the actual episode only once from the first root, storing it in the local solution, and then repeatedly setting new roots along the states of the episode, where we keep reusing the local solution from the last root (Alg. [\[alg_frap\]](#alg_frap){reference-type="ref" reference="alg_frap"}, line 9). In summary, all algorithms conceptually fit FRAP, since they all fix root states for which they compute improved estimates of the cumulative return and solution, but some algorithms may take implementation shortcuts.
We are now ready to discuss the individual dimensions of the framework, i.e., describe the possible choices on each of the lines in Alg. [\[alg_frap\]](#alg_frap){reference-type="ref" reference="alg_frap"}. These dimensions are: how to *represent* the solution, how to *set the next root state*, which *trial budget* to allocate per root state, how to *select* actions and next states within a trial, how to *back-up* information obtained from the trial, and how to *update* the local and global solution based on these back-up estimates. The considerations of FRAP are summarized in Table [\[table_framework\]](#table_framework){reference-type="ref" reference="table_framework"}, while the comments on the right side of Alg. [\[alg_frap\]](#alg_frap){reference-type="ref" reference="alg_frap"} indicate to which lines each dimension is applicable.
We first of all have to decide how we will represent the solution to our problem. The top row of Table [\[table_framework\]](#table_framework){reference-type="ref" reference="table_framework"} shows the four relevant considerations: the coverage of our solution, the type of function we will represent, the method we use to represent this function, and the way we initialize the chosen method. The first item distinguishes between *local/partial* (for a subset of states) and *global* (for all states) solutions, a topic which we already extensively discussed in Sec. [3.3](#sec_plan_rl_definitions){reference-type="ref" reference="sec_plan_rl_definitions"}. Note that FRAP *always* builds a local solution: even a single episode of a model-free RL algorithm is considered a local solution that estimates the value of states in the trace. A local solution therefore aggregates information from one or more trials, which may then itself be used to update a global solution (when we use one) (Alg. [\[alg_frap\]](#alg_frap){reference-type="ref" reference="alg_frap"}, line 1).
:::: center
::: {#table_solution_types}
                    **Back-up estimate**           **Local solution**                 **Global solution**
  ----------------- ------------------------------ ---------------------------------- ---------------------------------------------------------------------------
  **Tabular**       $\hat{V}(s)$, $\hat{Q}(s,a)$   $V^{\bf l}(s)$, $Q^{\bf l}(s,a)$   $V^{\bf g}(s)$, $Q^{\bf g}(s,a)$, $\pi^{\bf g}(a|s)$
  **Approximate**   (-)                            (-)                                $V^{\bf g}_\theta(s)$, $Q^{\bf g}_\theta(s,a)$, $\pi^{\bf g}_\theta(a|s)$

  : Overview of notation. Each trial provides new back-up estimates $\hat{V}(s)$ and $\hat{Q}(s,a)$ at the states and actions that appear in the trial. These estimates are aggregated in the local solution $V^{\bf l}(s)$ and $Q^{\bf l}(s,a)$ (i.e., the local solution can be influenced by multiple trials). The local solution may itself be used to update the global solution $V^{\bf g}(s)$, $Q^{\bf g}(s,a)$ and/or $\pi^{\bf g}(a|s)$. When the global solution is stored in approximate form (which is often the case), we denote them by $V^{\bf g}_\theta(s)$, $Q^{\bf g}_\theta(s,a)$ and/or $\pi^{\bf g}_\theta(a|s)$ (where $\theta$ denotes the parameters of the approximation). Back-up estimates and local solutions are in practice never represented in approximate form.
:::
::::
For both local and global solutions we next need to decide what type of function to represent. The most common choices are to represent the solution as a *value* function $V: \mathcal{S} \to \mathbb{R}$, *state-action value* function $Q: \mathcal{S} \times \mathcal{A} \to \mathbb{R}$, or *policy* function $\pi: \mathcal{S} \to p(\mathcal{A})$. Some algorithms combine value and policy solutions, better known as *actor-critic* algorithms [@konda1999actor]. We may also store the *uncertainty* around value estimates [@osband2016deep; @moerland2017efficient], for example using *counts* [@kocsis2006bandit], or through convergence labels that mark a particular value estimate as solved [@nilsson1971problem; @bonet2003labeled]. Some methods also store the entire distribution of returns [@bellemare2017distributional; @moerland2018potential], or condition their solution on a particular goal [@schaul2015universal] (i.e., store a solution for multiple reward functions).
After deciding on the type of function to represent, we next need to specify the representation method. This is actually a supervised learning question, which we can largely break up into *parametric* and *non-parametric* approaches. *Parametric tabular* representations use a unique parameter for the solution at each state-action pair, which is for example used in the local solution of a graph search, or in the global solution of a tabular RL algorithm. For high-dimensional problems, we typically need *parametric approximate* representations, such as (deep) neural networks [@rumelhart1986learning; @goodfellow2016deep]. Apart from the reduced memory requirement, a major benefit of approximate representations is their ability to *generalize* over the input space, and thereby make predictions for state-actions that have not been observed yet. However, the individual predictions of approximate methods may contain errors, and there are indications that the combination of tabular and approximate representations may provide the best of both worlds [@silver2017mastering; @langlois2019benchmarking; @moerland2020think]. Alternatively, we may store the solution in a *non-parametric* way, where we simply store exact sampled traces (e.g., a search tree that does not aggregate over different traces), or in a *semi-parametric* way [@graves2016hybrid], where we may optimize a neural network to write to and read from a table [@blundell2016model; @pritzel2017neural], sometimes referred to as *episodic memory* [@gershman2017reinforcement].

Finally, we also need to initialize our solution representation. Tabular representations are often *uniformly* initialized, for example setting all initial estimates to 0. Approximate representations are often *randomly* initialized, which provides the tie breaking necessary for gradient-based updating. Some approaches use initialization to guide exploration, either through *optimistic initialization* (when a state has not been visited yet, we consider its value estimate to be high) [@bertsekas1996neuro] or *expert initialization* (where we use imitation learning from (human) expert demonstrations to initialize the solution) [@hussein2017imitation]. We will further discuss exploration methods in Sec. [5.4](#sec_selection){reference-type="ref" reference="sec_selection"}.
An overview of our notation for the different local/global and tabular/approximate solution types is shown in Table [1](#table_solution_types){reference-type="ref" reference="table_solution_types"}. We will denote *local* estimates with superscript ${\bf l}$, e.g., $V^{\bf l}(s)$ or $Q^{\bf l}(s,a)$, and *global* solutions with superscript ${\bf g}$, e.g., $V^{\bf g}(s)$, $Q^{\bf g}(s,a)$ or $\pi^{\bf g}(a|s)$. In practice, only global solutions are learned in approximate form, which we indicate with a subscript $\theta$ (for parameters $\theta$).

As you will notice, Table [1](#table_solution_types){reference-type="ref" reference="table_solution_types"} contains a separate entry for the *back-up estimates*, $\hat{V}(s)$ and $\hat{Q}(s,a)$, which are formed during every trial. Especially researchers from a planning background may find this confusing, since in many algorithms the back-up estimate and local solution are actually the same. However, we should consider these as two different quantities, for two reasons. First of all, in some algorithms, like the roll-out phase of MCTS, we do make additional MDP queries (the trial continues) and back-ups, but the back-up estimates from the last part of the trial are never stored in the local solution (the local solution expands with only one new node per trial). Second, many algorithms use their local solution to *aggregate* cumulative reward estimates from different depths, as for example used in eligibility traces [@sutton2018reinforcement]. For our conceptual framework, we therefore consider each cumulative reward estimate the result of a single trial, while the local solution may combine the estimates of multiple trials in various ways. We will discuss ways to aggregate back-up estimates into the local solution in Sec. [5.7](#sec_update){reference-type="ref" reference="sec_update"}.

<figure id="fig_relevant_states" data-latex-placement="!t">
<img src="reachable_states" style="width:60.0%" />
<figcaption>Venn diagram of the total state space. Only a subset of the entire state space is <span><em>reachable</em></span> from the start state under <span><em>any policy</em></span>. An even smaller subset of the reachable set is eventually <span><em>relevant</em></span>, in the sense that these states are reachable from the start state under the <span><em>optimal policy</em></span>. Finally, the start states themselves are of course a subset of the relevant states. Figure extended from <span class="citation" data-cites="sutton2018reinforcement"></span>.</figcaption>
</figure>
The next consideration in our framework is the selection of a root state (Alg. [\[alg_frap\]](#alg_frap){reference-type="ref" reference="alg_frap"}, lines 2 & 8), for which we will attempt to improve our solution (by computing a new value estimate). The main considerations are listed in the second row of Table [\[table_framework\]](#table_framework){reference-type="ref" reference="table_framework"}. A first approach is to select a state from the state space in an *ordered* way, for example by sweeping through all possible states [@bellman1966dynamic; @howard1960dynamic]. A major downside of this approach is that many states in the state space are often not even reachable from the start state (Fig. [3](#fig_relevant_states){reference-type="ref" reference="fig_relevant_states"}), and we may spend much computational effort on states that will never be part of the practical solution.

When the MDP definition includes the notion of a *start state distribution*, this information may be utilized to improve our selection of root states, by only sampling root states on traces from the start. This ensures that new roots are always reachable, which may strongly reduce the number of states we will update in practice (illustrated in Fig. [3](#fig_relevant_states){reference-type="ref" reference="fig_relevant_states"}). In Table [\[table_framework\]](#table_framework){reference-type="ref" reference="table_framework"}, we list this as the *forward sampling* approach to selecting new root states. Note that this generally also involves an action selection question (in which direction do we set the next root), which we will discuss in Sec. [5.4](#sec_selection){reference-type="ref" reference="sec_selection"}.
The next option is to select new root states in the reverse direction, i.e., through backward sampling (instead of forward sampling). This approach does require a *backwards model* $p(s,a|s')$, which specifies the possible state-action pairs $(s,a)$ that may lead to a next state $s'$. The main idea is to set the next root states at the possible precursor states of a state whose value estimate has just changed significantly, better known as *prioritized sweeping* [@moore1993prioritized]. We thereby focus our update budget on regions of the state space that likely need updating, which may speed up convergence. Similar ideas have been studied in the planning community as *backward search* or *regression search* [@nilsson1982principles; @bonet2001planning; @alcazar2013revisiting], which makes prioritized sweeping an interleaved form of forward and backward search.
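
As an illustration, the sketch below implements one update of this scheme for a small tabular problem with a deterministic model; the data structures (`Q` as a dict of action values per state, `model` mapping state-actions to reward and next state, `predecessors` as the backwards model) are simplifying assumptions for illustration, not the original formulation:

```python
import heapq

# Hedged sketch of prioritized sweeping: process the state-action with the
# highest priority, then push its predecessors (from a backwards model) onto
# the queue when their expected value change exceeds a threshold `theta`.
# States and actions are assumed to be integers; the model is deterministic.

def prioritized_sweeping_update(Q, model, predecessors, queue, gamma, alpha, theta):
    _, (s, a) = heapq.heappop(queue)          # min-heap, so priorities are negated
    r, s_next = model[(s, a)]
    target = r + gamma * max(Q[s_next].values())
    Q[s][a] += alpha * (target - Q[s][a])
    # Re-prioritize all known predecessor state-actions of the updated state.
    for (s_prev, a_prev) in predecessors.get(s, ()):
        r_prev, _ = model[(s_prev, a_prev)]
        priority = abs(r_prev + gamma * max(Q[s].values()) - Q[s_prev][a_prev])
        if priority > theta:
            heapq.heappush(queue, (-priority, (s_prev, a_prev)))
```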
Finally, we do not always need to select the next root state from the current trace. For example, we may track the set of *previously visited states*, and select our next root from this set. This approach, which is for example part of Dyna [@sutton1990integrated], gives greater freedom in the order of root states, while it still ensures that we only update reachable states. To summarize, we need to decide on a way to set root states, which may for example be done in an ordered way, through forward sampling, through backward sampling, or by selecting previously visited states (Table [\[table_framework\]](#table_framework){reference-type="ref" reference="table_framework"}, second row).
After we have fixed a root state (a state for which we will attempt to improve the solution), we need to decide on 1) the number of trials from the particular root (Alg. [\[alg_frap\]](#alg_frap){reference-type="ref" reference="alg_frap"} line 5), and 2) when a trial itself will end, i.e., the depth $d_{\max}$ of each forward trial (Alg. [\[alg_frap\]](#alg_frap){reference-type="ref" reference="alg_frap"} lines 13 & 22). The possible choices for each of these two considerations are listed in the third row of Table [\[table_framework\]](#table_framework){reference-type="ref" reference="table_framework"}. Note that since every trial consists of a single forward beam, the total number of trials is actually a good measure of the total width of the local solution (Fig. [6](#fig_trial_illustration){reference-type="ref" reference="fig_trial_illustration"}). The joint space of both considerations is visualized in Fig. [4](#fig_width_depth_frap){reference-type="ref" reference="fig_width_depth_frap"}, which we will discuss below.

Regarding the *trial budget per root state*, a first possible choice is to run only a single trial. This choice is characteristic of model-free RL algorithms [@sutton2018reinforcement]. Algorithms that have access to a model may also run multiple trials per root state. This budget can for example be specified as a fixed hyperparameter, as is often the choice in MCTS [@browne2012survey]. When we interact with a real-world environment, the trial budget may actually be enforced by the time until the next decision is required. In the planning community, this is referred to as *decision time planning* or *online planning*. In offline approaches, we may also provide an adaptive trial budget, for example until some convergence criterion is met (often in combination with an admissible heuristic, which may considerably reduce the number of trials required for convergence) [@nilsson1971problem; @hansen2001lao; @bonet2003labeled]. Finally, we may also specify an infinite trial budget, i.e., we repeat trials until all possible sequences (for the specified depth) have been expanded.

The second decision involves the *depth* of each individual trial. A first option is to use a trial depth of one, which is for example part of value/policy iteration [@bellman1966dynamic] and temporal difference learning [@sutton1988learning; @watkins1992q; @rummery1994line]. We may also specify a fixed multi-step depth, which is the case for $n$-step methods, or specify a full depth ($\infty$), in which case we unroll the trial until a terminal state is reached (in practice we often still cap the trial at a large depth). The latter is also known as a *Monte Carlo roll-out*, which is for example used in MCTS. Finally, many algorithms make use of an *adaptive* trial depth, which depends on the current local solution (i.e., note that $d_{\max}({\bf l})$ depends on ${\bf l}$ in Alg. [\[alg_frap\]](#alg_frap){reference-type="ref" reference="alg_frap"}, lines 13 and 22). For example, several (heuristic) planning algorithms terminate a trial once we reach a state or action that does not yet appear in our current local solution [@hart1968formal; @nilsson1971problem]. As another example, we may terminate a trial once it reaches a state in the explored set or makes a cycle to a duplicate state, which are also examples of an adaptive $d_{\max}({\bf l})$. To summarize, the trial budget and the depth of each trial are important considerations in all planning and RL algorithms.
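
To make these depth choices concrete, a minimal sketch of the resulting trial target: a depth of one gives a temporal difference target, a finite depth an $n$-step target, and an infinite depth (without bootstrapping) a Monte Carlo return. The names `rewards` (the rewards observed in the trial) and `bootstrap` (the value estimate at the leaf) are illustrative:

```python
# Hedged sketch of how the trial depth d_max shapes the back-up target.

def trial_target(rewards, bootstrap, gamma, d_max):
    depth = min(d_max, len(rewards))
    target = sum(gamma ** t * rewards[t] for t in range(depth))
    if depth == d_max:          # trial halted early: bootstrap at the leaf state
        target += gamma ** depth * bootstrap
    return target

print(trial_target([1.0, 0.0, 1.0], bootstrap=0.5, gamma=0.9, d_max=2))
# 1.0 + 0.9 * 0.0 + 0.9**2 * 0.5 = 1.405
```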
<figure id="fig_width_depth_frap" data-latex-placement="!t">
<img src="width_depth_frap" style="width:70.0%" />
<figcaption>Possible combinations of width (trial budget) and depth (<span class="math inline"><em>d</em><sub>max</sub></span>) per trial from a root state. Practical algorithms reside somewhere left of the left dotted line, since full width combined with full depth (exhaustive search) is not feasible in larger problems. Figure extended from <span class="citation" data-cites="sutton2018reinforcement"></span>.</figcaption>
</figure>
Once we have specified the trial budget and depth rules from a particular root state, we have to decide how to actually select the actions and states that will appear in each individual trial (they may unroll in different directions). In other words, we have specified the overall shape of all trials in Fig. [4](#fig_width_depth_frap){reference-type="ref" reference="fig_width_depth_frap"}, but not yet how this shape will actually be unfolded. We will first discuss *action selection*, which happens in Alg. [\[alg_frap\]](#alg_frap){reference-type="ref" reference="alg_frap"} line 16 and in many algorithms also at line 8, when we set the next root through forward sampling. Afterwards, we will discuss *next state selection*, which happens in line 26 of Alg. [\[alg_frap\]](#alg_frap){reference-type="ref" reference="alg_frap"}. The considerations that we discuss for both topics are listed in the fourth row of Table [\[table_framework\]](#table_framework){reference-type="ref" reference="table_framework"}.
The first approach to action selection is to pick actions in an *ordered* way, where we select actions *independently* of our interaction history with the MDP. Examples include uninformed search methods, such as iterative deepening [@korf1985depth]. A downside of ordered action selection is that it may spend much time on states with lower value estimates, which typically makes it infeasible in larger problems. Most methods therefore try to prioritize actions in trials based on knowledge from previous trials. A first category of approaches prioritizes actions based on their (current) value estimate, which we will call *value-based selection*. The cardinal example of value-based selection is *greedy* action selection, which repeatedly selects the action with the highest current value estimate. This is the dominant approach in the heuristic search literature [@hart1968formal; @nilsson1971problem; @hansen2001lao; @barto1995learning], where an *admissible* heuristic may guarantee that greedy action selection will find the optimal solution.

<figure id="fig_frontier" data-latex-placement="t">
<img src="frontier" style="width:100.0%" />
<figcaption>Frontier-based exploration in planning (left) and reinforcement learning (right, <span><em>intrinsic motivation</em></span>). <span><strong>Left</strong></span>: Frontier and explored set in a graph. Blue denotes the start state, red a final state, green denotes the explored set (states that have been visited and whose successors have been visited), orange denotes the frontier (states that have been visited but whose successors have not all been visited). Methods without a frontier and explored set (like random perturbation, which is used in most RL approaches) may sample many redundant trials that make loops in the left part of the problem, because they do not find the narrow passage. <span><strong>Right</strong></span>: In large problems, it may become infeasible to store the frontier and explored set in tabular form. Part of intrinsic motivation literature <span class="citation" data-cites="colas2020intrinsically"></span> tracks <span><em>global</em></span> (sub)goal spaces (red line) in global, approximate form. We may for example sample new goals from this space based on novelty, and subsequently attempt to reach that goal through a goal-conditioned policy, effectively mimicking frontier-based exploration in approximate, global form.</figcaption>
</figure>
Note that heuristic search algorithms in practice usually maintain a *frontier* (Fig. [5](#fig_frontier){reference-type="ref" reference="fig_frontier"}), and therefore do not actually need to greedily traverse the local solution towards the best leaf state. However, as @schulte2014balancing also show, any ordering on the frontier can also be achieved by step-wise action selection from the root, and frontiers therefore conceptually fully fit into our framework (although the practical implementation may differ). The notion of frontiers is important, because algorithms that use a frontier often *switch* their action selection strategy once they reach the frontier. For example, a heuristic search algorithm may greedily select actions within the known part of the local solution, but at the frontier expand all possible actions, which is a form of ordered action selection. For some algorithms, we will therefore separately mention the action selection strategy *before the frontier* (BF) and *after the frontier* (AF).
Without an admissible heuristic, greedy action selection is not guaranteed to find the optimal solution. Algorithms therefore usually introduce a form of *exploration*. A first option in this category is *random perturbation*, which is in the RL community usually referred to as $\epsilon$-greedy exploration [@sutton2018reinforcement]. Similar ideas have been extensively studied in the planning community [@valenzano2014comparison], for example in limited discrepancy search [@harvey1995limited], $k$-best-first search (KBFS) [@felner2003kbfs] and best-first width search (BFWS) [@lipovetzky2017best]. We may also make the selection probabilities proportional to the current mean value estimates of the actions, which is for example achieved by Boltzmann exploration [@cesa2017boltzmann] in discrete action spaces and by entropy regularization [@peters2010relative] in continuous ones.
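
A minimal sketch of both perturbation schemes, assuming a list `q` of current mean value estimates per action (all names illustrative):

```python
import math
import random

def epsilon_greedy(q, epsilon=0.1):
    """Random perturbation: act greedily, but explore uniformly w.p. epsilon."""
    if random.random() < epsilon:
        return random.randrange(len(q))
    return max(range(len(q)), key=lambda a: q[a])

def boltzmann(q, temperature=1.0):
    """Mean perturbation: selection probability proportional to exp(Q / tau)."""
    prefs = [math.exp(v / temperature) for v in q]
    return random.choices(range(len(q)), weights=prefs)[0]
```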
::: {#table_action_selection}
---------------------------------------------------------------------------------
**Action selection method**   **Characteristic examples**
----------------------------- ---------------------------------------------------
**Ordered**                   Value iteration [@bellman1966dynamic]\
                              Iterative deepening [@korf1985depth]

**Value-based**

\- Greedy (with heuristic)    AO$^\star$ [@nilsson1971problem]\
                              RTDP [@barto1995learning]

\- Random perturbation        $\epsilon$-greedy [@sutton2018reinforcement]\
                              Gaussian noise [@van2007reinforcement]

\- Mean perturbation          Boltzmann [@cesa2017boltzmann]\
                              Entropy regularization [@peters2010relative]

\- Uncertainty perturbation   Upper confidence bounds [@kaelbling1993learning]\
                              Posterior sampling [@thompson1933likelihood]

**State-based**

\- Knowledge-based IM         Novelty [@brafman2002r]\
                              Surprise [@achiam2017surprise]

\- Competence-based IM        Learning progress [@pere2018unsupervised]\
                              Goal-reaching success [@florensa2018automatic]
---------------------------------------------------------------------------------

: Overview of action selection methodology within a trial. At the highest level, we may prioritize actions in an ordered way (independent of our interaction history with the MDP), in a value-based way (based on the rewards obtained in our interaction history with the MDP), or in a state-based way (based on our interaction history with the MDP, but independent of the value). The table shows possible subcategories, with some characteristic examples in the right column.
:::
A downside of random perturbation methods is their inability to naturally transition from exploration to exploitation. A solution is to track the uncertainty of the value estimate of each action, i.e., *uncertainty-based perturbation*. Such approaches have been extensively studied in the multi-armed bandit literature [@slivkins2019introduction], and successful exploration methods from RL and planning [@kocsis2006bandit; @kaelbling1993learning; @hao2019bootstrapping] are actually based on work from the bandit literature [@auer2002finite]. Note that uncertainty estimation in sequential problems, like the MDP formulation, is harder than in the multi-armed bandit setting, since we need to take the uncertainty in the value estimates of future states into account [@dearden1998bayesian; @moerland2017efficient]. As an alternative, we may also estimate uncertainty in a Bayesian way, and for example explore through Thompson sampling [@thompson1933likelihood; @osband2016deep]. Note that *optimistic initialization* of the solution, already discussed in Sec. [5.1](#sec_solution){reference-type="ref" reference="sec_solution"}, also uses optimism in the face of uncertainty to guide exploration, although it does not track the true uncertainty in the value estimates.
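
A minimal sketch of such a rule, in the spirit of the upper confidence bounds used in the selection step of MCTS [@kocsis2006bandit; @auer2002finite]; `q` holds the mean value estimates, `n` the visitation counts, and `c` trades off exploration against exploitation (all names illustrative):

```python
import math

def ucb_select(q, n, c=1.414):
    """Uncertainty-based perturbation: prefer actions with few visits."""
    total = sum(n)
    def ucb(a):
        if n[a] == 0:               # unvisited actions are maximally uncertain
            return float('inf')
        return q[a] + c * math.sqrt(math.log(total) / n[a])
    return max(range(len(q)), key=ucb)
```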
In contrast to value-based perturbation, we may also use *state-based perturbation*, where we inject exploration noise *based on our interaction history with the MDP* (i.e., independently of the extrinsic reward). As a classic example, a particular state might be interesting because it is novel, i.e., we have not visited it before in our current interaction history with the MDP. In the reinforcement learning literature, this approach is often referred to as *intrinsic motivation* (IM) [@chentanez2005intrinsically; @oudeyer2007intrinsic]. We already encountered the same idea in the planning literature through the use of frontiers and explored sets, which essentially prevent expansion of a state that we already visited before. In the RL (intrinsic motivation) literature, we usually make a separation between *knowledge-based* intrinsic motivation, which marks states or actions as interesting because they provide new knowledge about the MDP, and *competence-based* intrinsic motivation, where we prioritize target states based on our *ability* to reach them. Examples of knowledge-based IM include intrinsic rewards for *novelty* [@brafman2002r; @bellemare2016unifying], recency [@sutton1990integrated], curiosity [@pathak2017curiosity], surprise [@achiam2017surprise], and model uncertainty [@houthooft2016vime], while we may also provide intrinsic motivation for the *content* of a state, for example a saliency for objects [@kulkarni2016hierarchical]. Competence-based IM may for example prioritize (goal) states of intermediate difficulty (which we manage to reach sometimes) [@florensa2018automatic], or states on which we are currently making learning progress [@baranes2013active; @matiisen2017teacher; @lopes2012exploration].

As mentioned above, there is a clear connection between the use of frontiers in the planning literature and the use of intrinsic motivation in the reinforcement learning literature, which we illustrate in Fig. [5](#fig_frontier){reference-type="ref" reference="fig_frontier"}. On the one hand, the planning literature has many techniques to track and prioritize frontiers, but these tabular approaches do suffer in high-dimensional problems. On the other hand, in RL methods that do not track frontiers (but for example use random perturbation) many trials may not hit a new state at all [@ecoffet2021first]. The intrinsic motivation literature has studied the use of *global, approximate frontiers* (i.e., global, approximate sets of interesting states to explore), typically referred to as intrinsically motivated goal exploration processes (IMGEP) [@colas2020intrinsically]. A successful example algorithm in this class is Go-Explore [@ecoffet2021first], which achieved state-of-the-art performance on the sparse-reward benchmark task Montezuma's Revenge. However, IMGEP approaches have their challenges as well, especially because it is hard to track convergence of approximate solutions: our learned goal space may be inaccurate, or we may encounter a novel region but fail to return to it after an update of our goal-conditioned policy. Tabular solutions from the planning literature do not suffer from these issues, and we conjecture that there is much potential in combining ideas from both research fields.

As mentioned at the beginning of this section, action selection often also plays a role on Alg. [\[alg_frap\]](#alg_frap){reference-type="ref" reference="alg_frap"} line 8, when we select next root states through forward sampling from the previous root (as discussed in Sec. [5.2](#sec_set_root node){reference-type="ref" reference="sec_set_root node"}). In the planning literature, this is often referred to as the *recommendation function* [@keller2013trial] (which action do we recommend at the root after all trials and back-ups). When we want to maximize performance, action recommendation is often greedy, for example based on the visitation counts at the root of an MCTS search [@browne2012survey]. However, during offline learning, we may inject additional exploration into action selection at the root, for example by *planning to explore* (the trials in a learned model direct the agent towards interesting new root states in the true environment) [@sekar2020planning]. We will refer to this type of action selection as *next root* (NR) selection, and note that some algorithms therefore have three different action selection strategies: before the frontier (BF) within a trial, after the frontier (AF) within a trial, and to set the next root (NR) for new trials. An overview of the discussed action selection methods, with some characteristic examples, is provided in Table [2](#table_action_selection){reference-type="ref" reference="table_action_selection"}.

<figure id="fig_trial_illustration" data-latex-placement="!t">
<img src="trial_illustration" style="width:70.0%" />
<figcaption>Example local solution patterns. <span><strong>a</strong></span>) Local solution consisting of a single trial with depth 2. Total queries to the MDP = 2. Example: two-step temporal difference learning. <span><strong>b</strong></span>) Local solution consisting of four trials with depth 1. Total queries to the MDP = 4. Example: value iteration. <span><strong>c</strong></span>) Local solution consisting of three trials, one with depth 1 and two with depth 2. Total queries to the MDP = 4. Example: Monte Carlo Tree Search.</figcaption>
</figure>
After our extensive discussion of action selection methods within a trial, we also need to discuss *next state selection*, which happens at line 26 of Alg. [\[alg_frap\]](#alg_frap){reference-type="ref" reference="alg_frap"}. The two possible options here are ordered and sample selection. *Ordered* next state selection is for example used in value and policy iteration, where we simply expand every possible next state of an action. This approach is only feasible when we have settable, descriptive access to the MDP dynamics (see Sec. [3.2](#sec_model_types){reference-type="ref" reference="sec_model_types"}), since we can then decide ourselves from which next state we want to make our next MDP query. The second option is to *sample* the next state, which is by definition the choice when we only have generative access to the MDP dynamics. However, sampled next state selection may even be beneficial when we do have descriptive access [@sutton2018reinforcement].

To summarize this section on action and next state selection within a trial, Figure [6](#fig_trial_illustration){reference-type="ref" reference="fig_trial_illustration"} illustrates some characteristic trial patterns. On the left of the figure we visualize a local solution consisting of a single trial with $d_{\max}=2$, which is for example used in two-step temporal difference (TD) learning [@sutton1988learning]. In the middle, we see a local solution consisting of four trials, each with a $d_{\max}$ of 1. Each action and next state is selected in an ordered way, which is for example used in value iteration [@bellman1966dynamic]. Finally, the right side of the figure shows a local solution consisting of three trials, one with $d_{\max}=1$ and two with $d_{\max}=2$, which could for example appear in Monte Carlo Tree Search [@kocsis2006bandit]. With the methodology described in this section, we can construct any other preferred local solution pattern. In the next section we will discuss what to do at the leaf states of these patterns, i.e., what to do when we reach the trial's $d_{\max}$.
The main aim of trials is to provide a new/improved estimate of the value of each action at the root, i.e., the expected cumulative sum of rewards from this state-action (Eq. [\[eq_cum_reward\]](#eq_cum_reward){reference-type="ref" reference="eq_cum_reward"}). However, when we choose to end a trial before we can evaluate the entire sum, we may still obtain an estimate of the cumulative reward through *bootstrapping*. A bootstrap function is a function that provides a quick estimate of the value of a particular state or state-action. When we decide to end our trial at a state, we need to bootstrap a state value (Alg. [\[alg_frap\]](#alg_frap){reference-type="ref" reference="alg_frap"}, line 14), and when we decide to end the trial at an action, we need to bootstrap a state-action value (Alg. [\[alg_frap\]](#alg_frap){reference-type="ref" reference="alg_frap"}, line 23). A potential benefit of a state value function is that its input space is smaller, which may make it easier to learn/obtain, while a state-action value function has the benefit that it allows for off-policy back-ups (see Sec. [5.6](#sec_backup){reference-type="ref" reference="sec_backup"}) without additional queries to the MDP. Note that terminal states have a value of 0 by definition.

The bootstrap function itself may either be obtained from a *heuristic function*, or it can be learned. Heuristic functions have been studied extensively in the planning community. A heuristic is called *admissible* when it provides an *optimistic* estimate of the remaining value for every state, which allows for greedy action selection strategies during the search. Heuristics can be obtained from prior knowledge, but much research has focused on automatic ways to obtain heuristics, often by first solving a simplified version of the problem. When the problem is stochastic, a popular approach is *determinization*, where we first solve a deterministic version of the MDP to obtain a heuristic for the full planning task [@hoffmann2001ff; @yoon2007ff], or *delete relaxations* [@bonet2001planning], where we temporarily ignore the action effects that remove state attributes (which is only applicable in symbolic state spaces). A heuristic is called 'blind' when it is initialized to the same value everywhere. For an extensive discussion of ways to obtain heuristics we refer the reader to @pearl1984heuristics [@edelkamp2011heuristic].

The alternative approach is to *learn* a global state or state-action value function. Note that this function can also serve as our solution representation (see Sec. [5.1](#sec_solution){reference-type="ref" reference="sec_solution"}). The learned value function can be trained on the root value estimates of previous trials (see Sec. [5.7](#sec_update){reference-type="ref" reference="sec_update"}), and thereby gradually improve its performance [@sutton1988learning; @korf1990real]. A major benefit of learned value functions is 1) their ability to improve performance with more data, and 2) their ability to *generalize* when learned in approximate form. For example, while Deep Blue [@campbell2002deep], the first computer programme to defeat a human Chess world champion, used a heuristic bootstrap function, this approach was later outperformed by AlphaZero [@silver2018general], which uses a deep neural network to learn a bootstrap function that provides better generalization.
<figure id="fig_backup_types" data-latex-placement="t">
<img src="backup_types" style="width:100.0%" />
<figcaption>Types of 1-step back-ups. For the back-up over the policy (columns), we need to decide on i) the type of policy (on-policy or off-policy) and ii) whether we do a full or partial back-up. For the back-up over the dynamics (rows), we also need to decide whether we do a full or partial back-up. Note that for the greedy/max back-up policy the expected and sample back-ups are equivalent. Mentioned algorithms: Value Iteration <span class="citation" data-cites="bellman1966dynamic"></span>, Expected SARSA <span class="citation" data-cites="van2009theoretical"></span>, SARSA <span class="citation" data-cites="rummery1994line"></span>, MCTS <span class="citation" data-cites="kocsis2006bandit"></span>, Q-learning <span class="citation" data-cites="watkins1992q"></span>, and AO<span class="math inline"><sup>⋆</sup></span> <span class="citation" data-cites="nilsson1971problem"></span>.</figcaption>
</figure>
Bootstrapping ends the forward phase of a trial, after which we start the back-up phase (Fig. [2](#fig_query_backup){reference-type="ref" reference="fig_query_backup"}, right). The goal of back-ups is to process the acquired information of the trial. We will primarily focus on the *value back-up*, where we construct new estimates $\hat{V}(s)$ and $\hat{Q}(s,a)$ for states and actions that appear in the trial. At the end of this section, we will also briefly comment on other types of information we may include in the back-up.
Value back-ups are based on the one-step Bellman equation, as shown in Eq. [\[eq_bellman\]](#eq_bellman){reference-type="ref" reference="eq_bellman"}. The first expectation of this equation, over the possible next states, shows the *dynamics back-up*: we need to aggregate value estimates of the possible next states into a state-action value estimate for the state-action that may lead to them. The second expectation, over the possible actions, shows the *policy back-up*: we want to aggregate state-action values into a value estimate at the particular state. We therefore need to discuss how to deal with width (expectations) over the policy and dynamics. In Alg. [\[alg_frap\]](#alg_frap){reference-type="ref" reference="alg_frap"}, policy and dynamics back-ups happen at lines 18 and 28; we will now discuss the relevant considerations for these back-ups, as listed in the sixth row of Table [\[table_framework\]](#table_framework){reference-type="ref" reference="table_framework"}.

For the policy back-up, we first need to specify which back-up policy we will actually employ. A first option is to use the current behavioural policy (which we used for action selection within the trial) as the back-up policy, which is in the RL literature usually referred to as an *on-policy* back-up. An alternative is to use a different policy than the behavioural one, which is referred to as *off-policy*. The most common off-policy back-up is the *greedy* or *max* back-up, which puts all probability on the action with the highest current value estimate. The greedy back-up is common in tabular solutions, but can be unstable when combined with a global approximate solution and bootstrapping [@van2018deep]. Note that off-policy back-ups do not need to be greedy, and we may also use back-up policies that are more greedy than the exploration policy, but less greedy than the max operator [@keller2015anytime; @coulom2006efficient].

We next need to decide whether we will make a *full*/*expected* policy back-up, or a *partial*/*sample* policy back-up. Expected back-ups evaluate the full expectation over the policy probabilities, and therefore need to expand all child actions of a state. In contrast, sample back-ups only back-up the value of a sampled action, and therefore do not need to expand all child actions (which is why they are called 'partial'). Sample back-ups are less accurate but computationally cheaper, and move towards the true expectation over multiple samples.

The same consideration actually applies to the back-up over the dynamics, which can also be a *full*/*expected* back-up, or a *partial*/*sample* back-up. Which type of dynamics back-up we can make also depends on the type of access we have to the MDP. When we only have generative access to the MDP, we are forced to make sample back-ups. In contrast, when we have descriptive access to the MDP, we can make either expected or sample back-ups. Although sample back-ups have higher variance, they are computationally cheaper and may be more efficient when many next states have a small probability [@sutton2018reinforcement]. We summarize the common back-up equations for policy and dynamics in Table [3](#tab_backup_equations){reference-type="ref" reference="tab_backup_equations"}, while Figure [7](#fig_backup_types){reference-type="ref" reference="fig_backup_types"} visualizes common combinations of these as back-up diagrams.

::: {#tab_backup_equations}
                     **Equation**
  ------------------ ----------------------------------------------------------------------------------------------------------
  **Policy**
  Sample back-up     $\hat{V}(s) \gets \hat{Q}(s,a)$, for $a \sim \pi(\cdot|s)$
  Expected back-up   $\hat{V}(s) \gets \mathbb{E}_{a \sim \pi(\cdot|s)} [ \hat{Q}(s,a) ]$
  Greedy back-up     $\hat{V}(s) \gets \max_{a} [ \hat{Q}(s,a) ]$
  **Dynamics**
  Sample back-up     $\hat{Q}(s,a) \gets \mathcal{R}(s,a,s') + \gamma \cdot \hat{V}(s')$, for $s' \sim \mathcal{T}(\cdot|s,a)$
  Expected back-up   $\hat{Q}(s,a) \gets \mathbb{E}_{s' \sim \mathcal{T}(\cdot|s,a)} [ \mathcal{R}(s,a,s') + \gamma \cdot \hat{V}(s')]$

  : Equations for the policy and dynamics back-up, applicable to Alg. [\[alg_frap\]](#alg_frap){reference-type="ref" reference="alg_frap"} lines 18 and 28, respectively.
:::
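
To make these equations concrete, the sketch below writes them out for a small tabular setting, where `Q`, `V`, `pi`, `T` and `R` are plain dictionaries (illustrative assumptions, not a specific implementation). Note that the sample back-ups only require generative access to the MDP, while the expected back-ups require descriptive access:

```python
import random

def policy_backup_sample(Q, pi, s, actions):
    a = random.choices(actions, weights=[pi[(s, b)] for b in actions])[0]
    return Q[(s, a)]

def policy_backup_expected(Q, pi, s, actions):
    return sum(pi[(s, a)] * Q[(s, a)] for a in actions)

def policy_backup_greedy(Q, s, actions):
    return max(Q[(s, a)] for a in actions)

def dynamics_backup_sample(V, T, R, s, a, states, gamma):
    s2 = random.choices(states, weights=[T[(s, a, x)] for x in states])[0]
    return R[(s, a, s2)] + gamma * V[s2]

def dynamics_backup_expected(V, T, R, s, a, states, gamma):
    return sum(T[(s, a, s2)] * (R[(s, a, s2)] + gamma * V[s2]) for s2 in states)
```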
Many algorithms back-up additional information to improve action selection in future trials. We may want to track the uncertainty in the value estimates, for example by backing-up visitation counts [@browne2012survey], by backing-up entire uncertainty distributions around value estimates [@dearden1998bayesian; @deisenroth2011pilco], or by backing-up the distribution of the return [@bellemare2017distributional]. Some methods back-up *labels* that mark a particular value estimate as 'solved' when we are completely certain about its value estimate [@nilsson1971problem; @bonet2003labeled]. As mentioned before, graph searches also back-up information about frontiers and explored sets, which can be seen as another kind of label, one that removes duplicates and marks expanded states. The overarching theme in all these additional back-ups is that they track some kind of uncertainty about the value of a particular state, which can be utilized during action selection in future trials.
The last step of the framework involves updating the local solutions ($V^{\bf l}(s)$ and $Q^{\bf l}(s,a)$) based on the back-up estimates ($\hat{V}(s)$ and $\hat{Q}(s,a)$), and subsequently updating the global solution ($V^{\bf g}(s)$ and/or $Q^{\bf g}(s,a)$ and/or $\pi^{\bf g}(a|s)$) based on the local solution. In Alg. [\[alg_frap\]](#alg_frap){reference-type="ref" reference="alg_frap"}, the updates of the local solution happen in lines 19 and 29, while the update of the global solution (when used) occurs in line 7. The main message of this section is that we can write both types of updates, whether it concerns updates of nodes in a planning tree or updates of a global policy network, as *gradient descent* updates on a particular *loss function*. We hope this provides further insight in the similarity between planning and learning, since planning updates on a tree/graph can usually be written as tabular learning updates with a particular learning rate.
We will first introduce our general notation. A loss function is denoted by $\mathcal{L}(\theta)$, where $\theta$ denotes the parameters to be updated. In the case of a tabular solution, the parameters are simply the individual entries in the table (like $Q^{\bf l}(s,a)$) (see Sec. [5.1](#sec_solution){reference-type="ref" reference="sec_solution"} and Table [1](#table_solution_types){reference-type="ref" reference="table_solution_types"} for a summary of notation), and we will therefore not explicitly add a subscript $\theta$. When we have specified a solution and a loss function, the parameters can be updated through gradient descent, with update rule:

$$\begin{equation}
\theta \gets \theta - \eta \cdot \nabla_\theta \mathcal{L}(\theta), \label{eq_gradient_descent}
\end{equation}$$
where $\eta \in \mathbb{R}^+$ is a learning rate. We will first show which loss function and update rules are common in updating of the local solution, and subsequently discuss how they reappear in updates of the global solution based on the local solution. An overview of common loss functions and update rules is provided in Table [\[table_update_types\]](#table_update_types){reference-type="ref" reference="table_update_types"}, which we will now discuss in more detail.
We will here focus on the update of state-action values $Q^{\bf l}(s,a)$ (Alg. [\[alg_frap\]](#alg_frap){reference-type="ref" reference="alg_frap"}, line 29), but the same principles apply to state value updating (Alg. [\[alg_frap\]](#alg_frap){reference-type="ref" reference="alg_frap"}, line 19). We therefore want to specify an update of $Q^{\bf l}(s,a)$ based on a new back-up value $\hat{Q}(s,a)$. A classic choice of loss function for continuous values is the *squared loss*, given by:
$$\begin{equation}
\mathcal{L}\big(Q^{\bf l}(s,a)|s,a \big) = \frac{1}{2} \big[ \hat{Q}(s,a) - Q^{\bf l}(s,a) \big]^2. \label{eq_squared_loss}
\end{equation}$$
Differentiating this loss with respect to $Q^{\bf l}(s,a)$ and plugging it into Eq. [\[eq_gradient_descent\]](#eq_gradient_descent){reference-type="ref" reference="eq_gradient_descent"} (where $Q^{\bf l}(s,a)$ are the parameters) gives the well-known *tabular learning rule*:
$$\begin{equation}
Q^{\bf l}(s,a) \gets Q^{\bf l}(s,a) + \eta \cdot \big[ \hat{Q}(s,a) - Q^{\bf l}(s,a) \big]. \label{eq_tabular_learning_rule}
\end{equation}$$
Intuitively, we move our estimate $Q^{\bf l}(s,a)$ a bit in the direction of our new back-up value $\hat{Q}(s,a)$. In the tabular case, $\eta$ is therefore restricted to $[0,1]$. Most planning algorithms use special cases of the above update rule. A first common choice is to set $\eta = 1.0$, which gives the *replace update*:
$$\begin{equation}
Q^{\bf l}(s,a) \gets \hat{Q}(s,a).
\end{equation}$$
This update completely overwrites the estimate in the local solution by the new back-up value. This is the typical approach in heuristic planning [@hart1968formal; @nilsson1971problem; @hansen2001lao], where an admissible heuristic often ensures that our new estimate (from a deeper unfolding of the planning tree) provides a better informed estimate than the previous estimate. Although one would typically not think of such a replace update as a gradient-based approach, these updates are in fact all connected.
When we do not have a good heuristic available (and we therefore need to bootstrap from a learned value function or use deep roll-outs to estimate the cumulative reward), estimates of different depths may have different reliability (known as the *bias-variance trade-off*) [@sutton2018reinforcement]. We may for example equally weight the contribution of estimates of different depths, which we will call an *averaging update* (which uses $\eta = \frac{1}{n}$, where $n$ denotes the number of trials/back-up estimates for the node):
$$\begin{equation}
Q^{\bf l}(s,a) \gets Q^{\bf l}(s,a) + \frac{1}{n} \cdot [ \hat{Q}(s,a) - Q^{\bf l}(s,a) ].
\end{equation}$$
This is for example used in MCTS implementations that use bootstrapping instead of rollouts [@silver2018general].
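
A quick numerical check of this averaging behaviour: with $\eta = \frac{1}{n}$, the incremental rule reproduces the arithmetic mean of all back-up estimates seen so far (values illustrative):

```python
estimates = [2.0, 4.0, 9.0]     # back-up values from three trials
q, n = 0.0, 0
for q_hat in estimates:
    n += 1
    q += (1.0 / n) * (q_hat - q)    # averaging update with eta = 1/n
print(q, sum(estimates) / len(estimates))   # both print 5.0
```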
While the above update gives the value estimate from each trial equal weight, we may also make the contribution of a trial estimate dependent on the depth of the trial, as is for example done in *eligibility traces* [@sutton2018reinforcement; @schulman2016high]. In this case, we essentially set $\eta = (1-\lambda) \cdot \lambda^{(d-1)}$, where $\lambda \in [0,1]$ is the exponential decay and $d$ is the length of the trace on which we update. More sophisticated reweighting schemes of the targets of different trials are possible as well [@munos2016safe], for example based on the *uncertainty* of the estimate at each depth [@buckman2018sample]. In short, the local solution may combine value estimates from different trials (with different depths) in numerous ways, as summarized in the top part of Table [\[table_update_types\]](#table_update_types){reference-type="ref" reference="table_update_types"}.

<figure id="fig_value_gradients" data-latex-placement="!t">
<img src="value_gradients" style="width:90.0%" />
<figcaption>Illustration of gradient-based planning. When we have access to a differentiable transition function <span class="math inline">𝒯(<em>s</em><sup>′</sup>|<em>s</em>, <em>a</em>)</span> and differentiable reward function <span class="math inline">ℛ(<em>s</em>, <em>a</em>, <em>s</em><sup>′</sup>)</span>, and we also specify a differentiable policy <span class="math inline"><em>π</em><sub><em>θ</em></sub>(<em>a</em>|<em>s</em>)</span>, then a single trial generates a fully differentiable computational graph. The figure shows an example graph for a trial of depth 3. The black arrows show the forward passes through the policy, dynamics function and reward function. In the example, we also bootstrap from a differentiable (learned) value function, but this can also be omitted. We may then update the policy parameters by directly differentiating the cumulative reward (objective, green box) with respect to the policy parameters, effectively summing the gradients over all backward paths indicated by the red dotted lines.</figcaption>
</figure>
:::: sidewaystable
::: center
                                          **Loss**                                                                                                                                                            **Update**
  --------------------------------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------- -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  **Local update**

  Squared loss                            $\mathcal{L}(Q^{\bf l}(s,a)|s,a) = \frac{1}{2} \big(\hat{Q}(s,a) - Q^{\bf l}(s,a) \big)^2$                                                                        $Q^{\bf l}(s,a) \gets Q^{\bf l}(s,a) + \eta \cdot [ \hat{Q}(s,a) - Q^{\bf l}(s,a) ]$
  Replace update ($\eta = 1$)                                                                                                                                                                               $Q^{\bf l}(s,a) \gets \hat{Q}(s,a)$
  Average update ($\eta = \frac{1}{n}$)                                                                                                                                                                    $Q^{\bf l}(s,a) \gets Q^{\bf l}(s,a) + \frac{1}{n} \cdot [ \hat{Q}(s,a) - Q^{\bf l}(s,a) ]$
  Eligibility update                                                                                                                                                                                        $Q^{\bf l}(s,a) \gets Q^{\bf l}(s,a) + (1-\lambda) \cdot \lambda^{(d-1)} \cdot [ \hat{Q}_d(s,a) - Q^{\bf l}(s,a) ]$

  **Global update**

  Squared loss                            $\mathcal{L}(\theta|s,a) = \frac{1}{2} \big( Q^{\bf l}(s,a) - Q^{\bf g}_\theta(s,a) \big)^2$                                                                      $\theta \gets \theta + \eta \cdot [ Q^{\bf l}(s,a) - Q^{\bf g}_\theta(s,a) ] \cdot \nabla_\theta Q^{\bf g}_\theta(s,a)$
  Cross-entropy softmax loss              $\mathcal{L}(\theta|s) = - \texttt{softmax}(Q^{\bf l}(s,{\bf a}))^T \cdot \log \texttt{softmax}(Q^{\bf g}_\theta(s,{\bf a}))$                                     $\theta \gets \theta + \eta \cdot \nabla_\theta [\texttt{softmax}(Q^{\bf l}(s, {\bf a}))^T \cdot \log \texttt{softmax}(Q^{\bf g}_\theta(s,{\bf a}))]$
  Policy gradient                         $\mathcal{L}(\theta|s,a) = - \ln \pi^{\bf g}_\theta(a|s) \cdot Q^{\bf l}(s,a)$                                                                                    $\theta \gets \theta + \eta \cdot \frac{Q^{\bf l}(s,a)}{\pi^{\bf g}_\theta(a|s)} \cdot \nabla_\theta \pi^{\bf g}_\theta(a|s)$
  Determ. policy gradient                 $\mathcal{L}(\theta|s,a) = - Q^{\bf g}_\psi(s,\pi^{\bf g}_\theta(a|s))$ ($Q^{\bf g}_\psi$ trained on $Q^{\bf l}$)                                                 $\theta \gets \theta + \eta \cdot \nabla_a Q^{\bf g}_\psi(s,a) \cdot \nabla_\theta \pi^{\bf g}_\theta(a|s)$
  Value gradient                          $\mathcal{L}(\theta|s) = - V^{\bf l}(s)$                                                                                                                          $\theta \gets \theta + \eta \cdot \nabla_\theta V^{\bf l}(s)$ (Fig. [8](#fig_value_gradients){reference-type="ref" reference="fig_value_gradients"})
  Cross-entropy loss                      $\mathcal{L}(\theta|s) = - \sum_{a \in \mathcal{A}} \big( \frac{n^{\bf l}(s,a)}{\sum_{b \in \mathcal{A}} n^{\bf l}(s,b)} \big) \cdot \ln \pi^{\bf g}_\theta(a|s)$   $\theta \gets \theta + \eta \cdot \sum_{a \in \mathcal{A}} \big( \frac{n^{\bf l}(s,a)}{\sum_{b \in \mathcal{A}} n^{\bf l}(s,b)} \big) \cdot \frac{1}{\pi^{\bf g}_\theta(a|s)} \cdot \nabla_\theta \pi^{\bf g}_\theta(a|s)$
:::
::::
When our algorithm uses a global solution, we next need to update this global solution ($V^{\bf g}$ and/or $Q^{\bf g}$ and/or $\pi^{\bf g}$) based on the estimates from our local solution ($V^{\bf l}$ and/or $Q^{\bf l}$) (Alg. [\[alg_frap\]](#alg_frap){reference-type="ref" reference="alg_frap"}, line 7). For a value-based solution that is *tabular*, we typically use the same squared loss (Eq. [\[eq_squared_loss\]](#eq_squared_loss){reference-type="ref" reference="eq_squared_loss"}), which leads to the global tabular update rule $Q^{\bf g}(s,a) \gets Q^{\bf g}(s,a) + \eta \cdot [ Q^{\bf l}(s,a) - Q^{\bf g}(s,a) ]$. This exactly resembles the local version (Eq. [\[eq_tabular_learning_rule\]](#eq_tabular_learning_rule){reference-type="ref" reference="eq_tabular_learning_rule"}), apart from the fact that we now update $Q^{\bf g}(s,a)$, while $Q^{\bf l}(s,a)$ takes the role of the target. This approach is the basis of all tabular RL methods [@sutton2018reinforcement]. (For (model-free) RL approaches that directly update the global solution after a single trial, we may also imagine that the local solution does not exist, and we directly update the global solution from the back-up estimates.)

We will therefore primarily focus on the function approximation setting, where we update a global approximate representation parametrized by $\theta$. Table [\[table_update_types\]](#table_update_types){reference-type="ref" reference="table_update_types"} shows some example loss functions and update rules that appear in this case. The most important point to note is that there are many ways in which we may combine a local estimate, such as $Q^{\bf l}(s,a)$, and the global solution, such as $Q^{\bf g}(s,a)$ or $\pi^{\bf g}(a|s)$, in a loss function. For value-based updating, we may use the squared loss, but other options are possible as well, like a cross-entropy loss over the softmax of the Q-values returned from planning (the local solution) and the softmax of the Q-values from a global neural network approximation [@hamrick2020combining]. For policy-based updating, well-known examples include the *policy gradient* [@williams1992simple; @sutton2000policy; @sutton2018reinforcement] and *deterministic policy gradient* [@silver2014deterministic; @lillicrap2015continuous] loss functions. Again, other options have been successful as well, such as a cross-entropy loss between the normalized visitations counts at the root of an MCTS (part of the local solution) and a global policy network, as for example used by AlphaZero [@silver2017mastering]. In short, various objectives are possible (and more may be discovered), as long as minimization of the objective moves our global solution in the right direction (based on the obtained information from the trial).
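
As an illustration of the value-based case, the following sketch performs the global squared-loss update for a linear approximation $Q^{\bf g}_\theta(s,a) = \theta^\top \phi(s,a)$, for which $\nabla_\theta Q^{\bf g}_\theta(s,a) = \phi(s,a)$; the feature vector and numbers are illustrative assumptions:

```python
def global_squared_loss_update(theta, phi, q_local, eta=0.1):
    """One gradient descent step on 1/2 * (Q_local - Q_theta)^2."""
    q_global = sum(t * f for t, f in zip(theta, phi))   # Q^g_theta(s, a)
    delta = q_local - q_global                          # error w.r.t. local target
    return [t + eta * delta * f for t, f in zip(theta, phi)]

theta = global_squared_loss_update([0.0, 0.0], phi=[1.0, 0.5], q_local=2.0)
print(theta)   # [0.2, 0.1]
```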
|
| 254 |
+
|
| 255 |
+
Another important class of approaches is *gradient-based planning*, also known as *value gradients* [@fairbank2012value; @heess2015learning]. These approaches require a (known or learned) differentiable transition and reward model (and a differentiable value function when we also include bootstrapping). When we additionally specify a differentiable policy, each trial generates a fully differentiable graph, in which we can directly differentiate the cumulative reward with respect to the policy parameters. This idea is illustrated in Fig. [8](#fig_value_gradients){reference-type="ref" reference="fig_value_gradients"}, where we aggregate over all gradient paths in the graph (red dotted lines). Gradient-based planning is popular in the robotics and control community [@anderson2007optimal; @todorov2005generalized; @deisenroth2011pilco], where dynamics functions are relatively smooth and differentiable, although the idea can also be applied with discrete states [@wu2017scalable].
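A minimal sketch of this idea on a toy problem of our own construction (a scalar linear policy $a = \theta s$, known dynamics $s' = s + a$, and reward $r = -(s')^2$; none of this comes from the cited papers), where the chain rule is applied backwards through the unrolled trial:

```python
# Differentiate the H-step return with respect to the policy parameter
# theta by backpropagating through the differentiable dynamics.
def value_gradient(theta, s0, H=5):
    states = [s0]
    for _ in range(H):                      # forward pass: unroll the trial
        states.append(states[-1] * (1.0 + theta))
    grad, dR_ds = 0.0, 0.0
    for t in reversed(range(H)):            # backward pass: chain rule
        s, s_next = states[t], states[t + 1]
        dR_dsnext = -2.0 * s_next + dR_ds   # reward -(s_next)^2 plus downstream
        grad += dR_dsnext * s               # direct path: d s_next / d theta = s
        dR_ds = dR_dsnext * (1.0 + theta)   # propagate: d s_next / d s = 1 + theta
    return grad

theta = -0.5
for _ in range(100):                        # gradient ascent on the return
    theta += 1e-2 * value_gradient(theta, s0=1.0)
```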
|
| 256 |
+
|
| 257 |
+
Table [\[table_update_types\]](#table_update_types){reference-type="ref" reference="table_update_types"} summarizes some of the common loss functions we discussed. The examples in the table all have analytical gradients, but otherwise we may always use finite differencing to numerically estimate the gradient of an objective. The learning rate in these update equations is typically tuned to a specific value (or decay scheme), although there are more sophisticated approaches that bound the step size, such as proximal policy optimization (PPO) [@schulman2017proximal]. Moreover, we did not discuss gradient-free updating of a global solution, because these algorithms typically do not exploit MDP-specific knowledge (i.e., they do not construct and back-up value estimates at states throughout the MDP, but only sample the objective function based on traces from the root). However, we do note that gradient-free black-box optimization can also be successful in MDP optimization, as for example shown for evolutionary strategies @moriarty1999evolutionary [@whiteson2006evolutionary; @salimans2017evolution], simulated annealing [@atiya2003reinforcement] and the cross-entropy method @mannor2003cross.
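A minimal sketch of such a finite-difference estimate (our illustration), for any scalar objective $J(\theta)$ over a flat parameter vector:

```python
import numpy as np

# Central finite differences: perturb each parameter in turn and
# numerically estimate the gradient of the objective J.
def finite_diff_grad(J, theta, eps=1e-5):
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = eps
        grad[i] = (J(theta + e) - J(theta - e)) / (2.0 * eps)
    return grad
```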
|
| 258 |
+
|
| 259 |
+
This concludes our discussion of the dimensions in the framework. An overview of all considerations and their possible choices is shown in Table [\[table_framework\]](#table_framework){reference-type="ref" reference="table_framework"}, while Algorithm [\[alg_frap\]](#alg_frap){reference-type="ref" reference="alg_frap"} shows how all these considerations piece together in a general algorithmic framework. To illustrate the validity of the framework, the next section will analyze a variety of planning and RL methods along the framework dimensions.
|
| 260 |
+
|
| 261 |
+
:::: sidewaystable
| **Dimension** | **Consideration** | Value iteration [@bellman1966dynamic] | LAO$^\star$ [@hansen2001lao] | Labeled RTDP [@bonet2003labeled] | Monte Carlo search [@tesauro1997online] | MCTS [@kocsis2006bandit] | Q-learning [@watkins1992q] | TD($\lambda$) [@sutton2018reinforcement] |
|---|---|---|---|---|---|---|---|---|
| MDP access | | Settable descriptive | Settable descriptive | Settable descriptive | Settable generative | Settable generative | Resettable generative | Resettable generative |
| Solution | Coverage | Global | Local | Local | Local | Local | Global | Global |
| | Type | $V(s)$ | $V(s)$ | $V(s)$ | $Q(s,a)$ | $Q(s,a)$ | $Q(s,a)$ | $V(s)$ |
| | Method | Tabular | Tabular | Tabular | Tabular | Tabular | Tabular | Tabular |
| | Initialization | Uniform | Heuristic | Heuristic | Uniform | Optimistic | Uniform | Uniform |
| Root | Selection | Ordered | Forward sampling | Forward sampling | Forward sampling | Forward sampling | Forward sampling | Forward sampling |
| Budget | \# trials per root | up to $\lvert \mathcal{A} \rvert \cdot \lvert \mathcal{S} \rvert$ | till convergence | up to $\lvert \mathcal{A} \rvert \cdot \lvert \mathcal{S} \rvert$ | $n$ | $n$ | 1 | $d_{\max}$ |
| | Depth | 1 | $1..n$ | 1 | $\infty$ | $\infty$ | 1 | $1..d_{\max}$ |
| Selection | Next action | Ordered | BF: Greedy, AF: Ordered, NR: Greedy | BF: Greedy, AF: Ordered, NR: Greedy | BF: Ordered, AF: Baseline | BF: Uncertainty, AF: Baseline, NR: Greedy | Random pert. | Random pert. |
| | Next state | Ordered | Ordered | Sample | Sample | Sample | Sample | Sample |
| Bootstrap | Location | State | State | State | - | - | State-action | State |
| | Type | Learned | Heuristic | Heuristic | - | - | Learned | Learned |
| Back-up | Back-up policy | Greedy/max | Greedy/max | Greedy/max | On-policy | On-policy | Greedy/max | On-policy |
| | Policy exp. | - | - | - | Sample | Sample | - | Sample |
| | Dynamics exp. | Expected | Expected | Expected | Sample | Sample | Sample | Sample |
| | Add. back-ups | - | Convergence label | Convergence label | - | Counts | - | - |
| Update | Loss | (Squared) | (Squared) | (Squared) | (Squared) | (Squared) | (Squared) | (Squared) |
| | Update type | Replace ($\eta=1.0$) | Replace ($\eta=1.0$) | Replace ($\eta=1.0$) | Average ($\eta=1/n$) | Average ($\eta=1/n$) | Fixed step | Eligibility |
::::
|
| 294 |
+
|
| 295 |
+
:::: sidewaystable
| **Dimension** | **Consideration** | REINFORCE [@williams1992simple] | DQN [@mnih2015human] | Prioritized sweeping [@moore1993prioritized] | Dyna [@sutton1990integrated] | PILCO [@deisenroth2011pilco] | AlphaGo [@silver2017mastering] | Go-Explore (policy-based) [@ecoffet2021first] |
|---|---|---|---|---|---|---|---|---|
| MDP access | | Resettable generative | Resettable generative | Resettable generative | Resettable generative | Resettable generative | Settable generative | Resettable generative |
| Solution | Coverage | Global | Global | Global | Global | Global | Global | Global |
| | Type | $\pi(a \mid s)$ | $Q(s,a)$ | $Q(s,a)$ | $Q(s,a)$ | $\pi(a \mid s)$ | $\pi(a \mid s)$, $V(s)$ | $\pi(a \mid s,g)$, $V(s)$ |
| | Method | Tabular | Approximate (NN) | Tabular | Tabular | Approximate (GP) | Approximate (NN) | Approximate (NN) |
| | Initialization | Uniform | Random | Uniform | Uniform | Random | Random | Random |
| Root | Selection | Forward | Forward | Forward + backward | Forward + visited states | Forward | Forward | Forward |
| Budget | \# trials per root | 1 | 1 | 1 | 1 | 1 | 1600 | $d_{\max}$ |
| | Depth | $\infty$ | 1 | 1 | 1 | $\infty$ | MCTS: $1..n$, NR: $\infty$ | $1..d_{\max}$ |
| Selection | Next action | Rand. pert. (stoch. policy) | Rand. pert. ($\epsilon$-greedy) | State-based (novelty) | State-based (novelty) + Mean pert. (Boltzmann) | Rand. pert. (stoch. policy) | BF/AF: Uncertainty, NR: Rand. pert. | BF: Novelty + Mean pert. (entropy), AF: Rand. pert. |
| | Next state | Sample | Sample | Sample | Sample | Sample | Sample | Sample |
| Bootstrap | Location | - | State-action | State-action | State-action | - | State | State |
| | Type | - | Learned | Learned | Learned | - | Learned | Learned |
| Back-up | Back-up policy | On-policy | Max/greedy | Max/greedy | On-policy | On-policy | On-policy | On-policy |
| | Policy exp. | Sample | - | Max | Sample | Sample | Sample | Sample |
| | Dynamics exp. | Sample | Sample | Expected | Sample | Sample | Sample | Sample |
| | Add. back-ups | - | - | Priorities, counts | Counts | Uncertainty | Counts | Counts |
| Update | Loss | Policy gradient | Squared | (Squared) | (Squared) | Value gradient | Cross-entropy (policy) + squared (value) | Policy gradient (PPO) + squared (value) |
| | Learning rate | Fixed step | Fixed step | Fixed step | Fixed step | Fixed step | Local: Average, Global: fixed step | Local: eligibility, Global: adaptive |
::::
|
| 329 |
+
|
| 330 |
+
Having discussed all the dimensions of the framework, we will now zoom out and reflect on its use and potential implications. The main point of our framework is that MDP planning and reinforcement learning algorithms occupy the same solution space. To illustrate this idea, Table [\[table_overview\]](#table_overview){reference-type="ref" reference="table_overview"} shows for a range of well-known planning (blue), model-free RL (red) and model-based RL (green) algorithms the choices they make on the dimensions of the framework. The list is of course not complete (we could have included any other preferred algorithm), but the table illustrates that the framework is applicable to a wide range of algorithms.
|
| 331 |
+
|
| 332 |
+
A first observation from the table is that it reads like a patchwork. On most dimensions the same decisions appear in both the planning and reinforcement learning literature, showing that both fields actually have quite some overlap in developed methodology. For example, the depth and back-up schemes of MCTS [@kocsis2006bandit] and REINFORCE [@williams1992simple] are exactly the same, but they differ in their solution coverage (MCTS only uses a local solution, REINFORCE updates a global solution after every trial) and exploration method. Such comparisons provide insight into the overlap and differences between various approaches.
|
| 333 |
+
|
| 334 |
+
The second observation from the table is that *all algorithms have to make a decision on each dimension*. Even though we often do not consciously consider each of the dimensions when we come up with a new algorithm, we are still implicitly making a decision on each of them. The framework could thereby potentially help to structure the design of new algorithms, by consciously walking along the dimensions of the framework. It also shows what we should actually report about an algorithm to fully characterize it.
|
| 335 |
+
|
| 336 |
+
There is one deeper connection between planning and tabular reinforcement learning we have not discussed yet. In our framework, we treated the back-up estimates generated from a single model-free RL trial as a local solution. This increases consistency (i.e., allows for the pseudocode of Alg. [\[alg_frap\]](#alg_frap){reference-type="ref" reference="alg_frap"}), but we could also view model-free RL as a direct update of the global solution based on the back-up estimate (i.e., skip the local solution). With this view we see another relation between common planning and tabular learning algorithms, such as MCTS (planning) and Monte Carlo reinforcement learning (MCRL). Both these algorithms sample trials and compute back-up estimates in the same way, but MCTS writes these to a local tabular solution (with learning rate $\eta = \frac{1}{n}$), while MCRL writes these to a global tabular solution (with fixed learning rate $\eta$). These algorithms from different research fields are therefore strongly connected, not only in their back-up, but also in their update schemes.
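The two update schemes differ only in their learning rate, as the following sketch (our illustration) makes explicit:

```python
# MCTS-style back-up to a local table: a running average of all targets.
def update_running_average(Q, n, target):
    n += 1
    Q += (1.0 / n) * (target - Q)
    return Q, n

# MCRL-style update of a global table: a fixed learning rate, which
# exponentially forgets old targets.
def update_fixed_step(Q, target, eta=0.1):
    return Q + eta * (target - Q)
```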
|
| 337 |
+
|
| 338 |
+
We will briefly emphasize elements of the framework, or possible combinations of choices, that could deserve extra attention. First of all, the main success of reinforcement learning originates from its use of global, approximate representations [@silver2017mastering; @ecoffet2021first], for example in the form of deep neural networks. These approximate representations allow for generalization between similar states, and planning researchers may therefore want to emphasize global solution representations in their algorithms. Conversely, a major part of the success of the planning literature comes from the stability and guarantees of building local, tabular solutions. Combinations of both approaches show state-of-the-art results [@silver2017mastering; @levine2014learning; @hamrick2020combining], and illustrate that we can be very creative in the way learned global solutions guide new planning iterations, and in the way planning output influences the global solution and/or action selection. Important research questions are therefore how action selection within a trial can be influenced by the global solution (Alg. [\[alg_frap\]](#alg_frap){reference-type="ref" reference="alg_frap"}, line 16), how a local solution should influence the global solution (i.e., variants of loss functions, Alg. [\[alg_frap\]](#alg_frap){reference-type="ref" reference="alg_frap"}, line 7), and how we may adaptively assign planning budgets per root state (Alg. [\[alg_frap\]](#alg_frap){reference-type="ref" reference="alg_frap"}, line 5).
|
| 339 |
+
|
| 340 |
+
Another important direction for cross-pollination is the study of *global, approximate frontiers*. On the one hand, planning research has extensively studied the benefit of local, tabular frontiers, a crucial idea which has been ignored in most RL literature. On the other hand, tabular frontiers do not scale to high-dimensional problems, and in these cases we need to track some kind of global approximate frontier, as studied in intrinsically motivated goal exploration processes [@colas2020intrinsically]. Initial results in this direction are for example provided by @pere2018unsupervised [@ecoffet2021first], but there appears to be much remaining research in this field. Getting back to the previous point, we also believe semi-parametric memory and episodic memory [@blundell2016model; @pritzel2017neural] may play a big role for global approximate solutions, for example to ensure we can directly get back to a recently discovered interesting state.
|
| 341 |
+
|
| 342 |
+
A third interesting direction is a stronger emphasis on the idea of backward search (planning terminology) or prioritized sweeping (RL terminology). In both communities, backward search has received considerably less attention than forward search, while backward approaches are crucial to spread acquired information efficiently over a (global) state space (by setting root states in a smarter way, see Sec. [5.2](#sec_set_root node){reference-type="ref" reference="sec_set_root node"}). The major bottleneck seems to be the necessity of a *reverse* model (which state-actions may lead to a particular state), which is often available in smaller, tabular problems, but not in large complex problems where we only have a simulator or real-world interaction available. However, we may learn an approximate reverse model from data, which could bring these powerful ideas back into the picture. Initial (promising) results in this direction are provided by @edwards2018forward [@agostinelli2019solving; @corneil2018efficient].
|
| 343 |
+
|
| 344 |
+
In summary, the framework for reinforcement learning and planning (FRAP), as presented in this paper, shows that both planning and reinforcement learning algorithms share the same algorithmic space. This provides a common language for researchers from both fields, and may help inspire future research (for example by cross-pollination). Finally, we hope the paper also serves an educational purpose, for researchers from one field who enter the other, but particularly for students, as a systematic way to think about the decisions that need to be made in a planning or reinforcement learning algorithm, and as a way to integrate algorithms that are often presented in disjoint courses.
|
2008.13084/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2008.13084/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,7 @@
|
| 1 |
+
# Method
|
| 2 |
+
|
| 3 |
+
We compare MDCN with more than 19 SR methods, including Bicubic, A+ [@{timofte2014a+}], SelfExSR [@huang2015single], SRCNN [@dong2014learning], ESPCN [@shi2016real], FSRCNN [@dong2016accelerating], VDSR [@kim2016accurate], LapSRN [@lai2017deep], DRCN [@kim2016deeply], MRFN [@he2019mrfn], SRMDNF [@zhang2018learning], MSRN [@Li_2018_ECCV], EDSR [@lim2017enhanced], RDN [@zhang2018residual], RCAN [@Zhang_2018_ECCV], FilterNet [@Li2019FilterNetAI], DNCL [@Xie2019FastSS], RAN [@Wang2019ResolutionAwareNF], and SeaNet [@fang2020soft]. All SR images are evaluated with PSNR and SSIM [@wang2004image] on the Y channel in YCbCr space.
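A minimal sketch of this evaluation protocol (our illustration, not the authors' code), converting RGB in $[0,1]$ to the ITU-R BT.601 Y channel and computing PSNR on it:

```python
import numpy as np

def rgb_to_y(img):
    # ITU-R BT.601 luma transform for RGB inputs in [0, 1].
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return (16.0 + 65.481 * r + 128.553 * g + 24.966 * b) / 255.0

def psnr_y(sr, hr):
    # PSNR between the Y channels of the SR output and the HR ground truth.
    mse = np.mean((rgb_to_y(sr) - rgb_to_y(hr)) ** 2)
    return 10.0 * np.log10(1.0 / mse)
```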
|
| 4 |
+
|
| 5 |
+
**Quantitative Comparison:** In TABLE [\[Results\]](#Results){reference-type="ref" reference="Results"}, we show quantitative comparisons with some advanced SR methods, all of which achieved competitive results at the time. The best results are highlighted and the second-best results are underlined; 'Average' denotes the average results over these 5 test datasets. Our MDCN achieves competitive results on all upsampling factors. Among the compared methods, RDN is slightly better than MDCN under the small upsampling factor ($\times 2$). However, our MDCN achieves better results under large upsampling factors (e.g., $\times 3$, $\times 4$). This is because the introduced MDCB in MDCN can fully extract multi-scale features, which is conducive to SR reconstruction at large upsampling factors. Considering that RCAN achieves state-of-the-art results, we make a detailed comparison with it in TABLE [\[Results-RCAN\]](#Results-RCAN){reference-type="ref" reference="Results-RCAN"}. We can observe that RCAN is slightly better than MDCN ($\times 2$: 0.24dB, $\times 3$: 0.15dB, and $\times 4$: 0.09dB). However, it should be noticed that the execution time of our MDCN is 3 times faster than RCAN. This means that MDCN can achieve similar results to RCAN with less execution time. Meanwhile, it is worth noting that: (1) except for MDCN, all reported SR methods are specially trained for each upsampling factor; (2) EDSR [@lim2017enhanced], RDN [@zhang2018residual], and RCAN [@Zhang_2018_ECCV] use the pre-trained $\times 2$ model as the initialization model to train large upsampling factor models like $\times 4$; (3) DRCN [@kim2016deeply] introduces the recursive mechanism to further improve model performance; (4) other models use large LR images as inputs for training. All these strategies can further boost performance. However, in order to verify the effectiveness of MDCN, we do not use any training tricks in our experiments. Nevertheless, since MDCB, HFDB, and DRB can extract rich image features and learn the inter-scale correlation, our MDCN still achieves competitive results.
|
| 6 |
+
|
| 7 |
+
**Visual Comparison:** In Figs. [8](#Visual-1){reference-type="ref" reference="Visual-1"} and [9](#Visual-2){reference-type="ref" reference="Visual-2"}, we show visual comparisons under small ($\times 2$, $\times 3$) and large ($\times 4$) upsampling factors, respectively. Among the compared methods, EDSR [@lim2017enhanced] was the champion model of the NTIRE2017 SR Challenge, and RDN [@zhang2018residual] and RCAN [@Zhang_2018_ECCV] were superior models which achieved SOTA results. According to the figures, we can clearly observe that: (i) most compared SR methods (e.g., SRCNN, MSRN, and SeaNet) cannot recover clear and accurate image edges, and under large upsampling factors (e.g., $\times 4$) their reconstructed SR images are blurred, with severe artifacts and incorrect edges. In contrast, our MDCN can reconstruct more realistic SR images with clear and sharp edges; (ii) compared with large-size models (e.g., EDSR, RDN, and RCAN), our MDCN still shows competitive performance and better results in edge reconstruction. Overall, with the help of MDCB and HFDB, MDCN can reconstruct high-quality SR images.
|
2101.00433/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2021-05-14T21:28:12.349Z" agent="5.0 (Macintosh)" etag="6myzMQJS9NXuFRfcgAFC" version="14.6.13" type="google"><diagram id="NKuK1TIrDqSJWSS0tgT2" name="Page-1">7VzbcqM4EP0a1yYPkwKEfHmMsZPdqmztVDI1l6cpbBRbE4woISf2fP1KIG4SdkgCxl5vHmzUCAn6dPdpNXJ6wFltbqkbLv8mHvJ7luFtemDSsyxrZPf5l5BsE4lpGnYiWVDsSVkueMC/kRQaUrrGHopKHRkhPsNhWTgnQYDmrCRzKSUv5W6PxC/PGroLOaORCx7mro+0bt+wx5aJdAgLvf9EeLFk2fPJMys37SyHiJauR14Kc4FpDziUEJYcrTYO8oX2Ur0kA93sOJvdGEUBq3MB+zF01savn+ZgbM3/+u2R66/kE0xGeXb9tXxgebNsm2oAeVwhskkoW5IFCVx/mkvHlKwDD4lpDN7K+9wREnKhyYW/EGNbia67ZoSLlmzly7PJnGKinc+WKpGs6RzteaDURly6QGxPP5AhwG0XkRVidMuvo8h3GX4u34crbWiR9cvVzA+kpt+gdVPTukNouI64jDzyjy9ovgwwN0J+7DzEgg3TgOEGFYrD+dbHXP8UcE2+LDFDD6Eb6+iF+2RZ0bMEqbtZJnDnT4sYv3/WjA+DpDxKoDIhP37Evu8Qn9B4WnAT/4k+jJInVDgzhgMwmmR4PiPK0GY/ojoC6QVA+pEMHEPZfMmdsC9Fy4L/pd0ah6z/X3MUq6aj2F06iqVpfRosfBwtufAbfsIh8rB7/I5xczMZ9fvNOIZ1ZI5hmpr+D+AIaIPZd3H5FZStH4Uzk40cOW5sZaNB5/koe8hLPxPMZ85jXr8MrTVQQEucVV6l4JbdxvuhBJq3FYnozg0Wa5EtWYbM8xTYy6C+4nCK70z7E2MybYc6zJou0m/LRWxNr7coQLR9rfJwZDlOO3Gnc60C0GngKYSdPAhVBx6uYbr9ng4gGoWrRDO/LG61ELAGp8D2wO4SUfM9iBpHj6g16hLSgRb6elbf5/c79vAzP1yIwzHF6DGVz2gqTiV84kLniusfthFDKz7NBEXzK/59kXbCaRdWGCwTXtafUzFLJtZjJSsq53wBEQljKRxLkevjRSByUw444vKxCMqCYa/liRX2vNiwq0J92dgbiOrAGJVTDqNeVLfaiuqjCoNRlD9f0+dYB0k49a5FmUeo1HejCM8V72499r631FDQMKzQcCr7YE4JQBlgU10IJIFGyym1gWyF/6FdLznl6LjbQrdQdIj23LCh5BnQ2HtfWaVQseDcCJM7aDRTtmGnuccbFj05U11ZsOQCh6Mqs3atwfogV73L4tRlFzD2W1zmQtX927G4VIf7mbRn8RTKDi9Yb+D0oCNq0LPH3mD8pTeYXCYnz43hTHvHoroQf+1DMpzZ7bqlhYJJ44UQC46uYDVqr7BNnaFU4trBgI1xRf9U8C5wxaAzrrCPmiusN3KFZXfBFXrB6Q1ccXu+XKGi1TlXWPpyaIKYi33+1Mlyl+KQYRLw1j2aE/7021YqiRM4HU7shtacw2FJy+YAalpORUUtqxTQXN1JX2Mea4Tmjc+IYv7cwl3UqN1V0E6XhkdejEpvc+d794tZUokyvKJnObEg87rCuctYX5hGmv0c3SvI/uB6PLppyIH7ZQeueBNgmhUO3No7SFC1jWUH4anLog8yXX0Go4gD587ioYT/yVSDjwvHPSi2TYgYkYK7i0QbgM9UKiajCviMQ7IcqNpcUQO+27OETy14dQ9frSJ/AhT7iWVA5QfQwSJxKUJ6f5aQAmgoi9ShrYNaVSVuD9TRqeREdfOYxqsUtqXWKGB5iLo1Cm0gWxmo5QoFPJ389wiq2XbdZBcaTZjdWysUqjG9VqGwYQcVClvPxPcxfhzkB2Ou8JW74Qzxkze8P/jBRXghvh0mSEWhkcszLWLYhhqWdC45aBHDHpxKeGkySjRSnnz1bSdQt1e1vEXQtjTHvSOLTz5+Qj5eEiIWyJQ/INEg/9g+Nsdpah+bqcY7syJ/tir8A7TmH8P6wdC5YGca1szRDsvvKqzBk9n4fAzvdezRobKmj/li1f4jzRcnldXAtIoYxdvRqvehef/vQ6tOWoagnLRU7EPL6laHce9ON6O+07278m5Ye03U6e/uoL7sOLHsxbaOLXuB+q9GdmYv92ebvWir7M6zl92vTDJeiqPaJzcMKQkpdhkKUBRVEVgN3E6wKqvuHANV77nAQUHb/aIkw+IehT7X6wz7mG3PByyt7FUB1rAZrHgz/9V+srDO//kBmP4L</diagram></mxfile>
|
2101.00433/main_diagram/main_diagram.pdf
ADDED
|
Binary file (27.7 kB). View file
|
|
|
2101.00433/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,111 @@
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Among the draft *Ethics Guidelines for Trustworthy AI* released by the European Union in 2019 were calls for greater transparency around deployed systems using artificial intelligence, advising that an "AI system's capabilities and limitations should be communicated to practitioners or end-users in a manner appropriate to the use case at hand." [@ec2019ethics] This is a high-profile example of the vagueness endemic to guidance on ethical disclosure and AI---*what constitutes an appropriate manner of communication*?
|
| 4 |
+
|
| 5 |
+
There is a growing awareness of the importance of communicating AI system function and performance clearly and understandably. This is both a matter of public interest, for mitigating harms caused by, for example, racial biases in the performance of human classifiers [@raji2019actionable], but also in the interest of system providers, as users who feel like they do not understand how machine learning models work and what kind of information they rely on tend to be more resistant to using them [@poursabzi2018manipulating].
|
| 6 |
+
|
| 7 |
+
However, while fairness [@pmlr-v81-dwork18a; @harrison_empirical_2020] and privacy as general principles [@ji2014differential; @papernot2016towards] have been well-studied in the context of AI, transparency does not receive as much attention. The term *transparency* is overloaded [@lipton2019research], with many different definitions in the literature [@felzmann_robots_2019]. It has a broad meaning in the public consciousness, from which some studies adopt a "you'll know it when you see it" definition [@doshi2017towards]. As opposed to the notions of *transparency as explainability* [@veale_2018_fairness] and *as invisibility* [@hamilton_path_2014], we focus on *transparency as disclosure* [@suzor_what_2019]: the extent to which the producers of an AI system or service provide detailed messaging about a system to stakeholders such as buyers, users, or the general public. This third sense, ***disclosive transparency***, is particularly subjective.
|
| 8 |
+
|
| 9 |
+
@pieters2011explanation identifies a confusion dynamic where providing "too much information" about a decision system worsens user understanding and trust. This dynamic could have serious ramifications---if disclosive transparency really decreases user trust, stakeholders are further incentivized to not disclose. Unfortunately, studies on this relationship are small-scale and subjective. This is why measurable, repeatable, and objective measures of disclosive transparency are sorely needed. In this work, we **decompose disclosive transparency** into two components: *replicability* and *style-appropriateness*, **develop three objective neural language model measures** of them, and apply them in a **pilot study of real systems**[^1].
|
| 10 |
+
|
| 11 |
+
Our goal is to develop an objective measure of disclosive transparency that is repeatable, explainable, intuitive, and well-correlated with subjective opinions. However, at face value there is not an obvious way to assign a numerical value to the "level of transparency" in a description. To resolve some vagueness, we decompose disclosive transparency into two components: the **replicability**, or degree to which the requisite content to reproduce the system being described is present in a description, and the **style-appropriateness**, or the degree to which it is written in a manner understandable to a member of the general public. These components are more specific than "transparency" alone, but still quite subjective.
|
| 12 |
+
|
| 13 |
+
A key insight underpins the possibility of developing objective measures of replicability and style-appropriateness: **disclosure is a communication task**. In describing a system, an explainer, typically an authority providing an AI service to the public, attempts to encode information about the design and function of their system in a *summary*. We find the communication-based definition of meaning provided in @bender-koller-2020-climbing useful for motivation. In their framework, the *meaning*, $m$ of an utterance is a pair $(e, i)$ of the surface form $e$ (such as text or speech audio) and *communicative intent*, $i$ which is external to language. In the case of disclosive transparency, this $i$ is the particulars of the system being described. Any assessment of the disclosive transparency of a description $e$ is fundamentally an assessment of its underlying $i$---the degree to which $i$ contains the information necessary to reconstruct the system being described.
|
| 14 |
+
|
| 15 |
+
Furthermore, **some short descriptions come paired with longer ones**. Anyone who has clicked "I Agree" without reading a terms of service, but has taken the time to look over a short, to-the-point statement on how data is used, knows that the lengthy legalese-laden descriptions that truly define systems we interact with can be sufficiently summarized succinctly. However, the succinct summarization is intended to carry the detailed meaning of the actual contract a user is agreeing to. In other words, for the short description $e'$ of an agreement, there exists a 'source' document $e$ that is much more detailed, which share a common $i$. An analogue in system descriptions is what enables our metrics.
|
| 16 |
+
|
| 17 |
+
<figure id="fig:method" data-latex-placement="t">
|
| 18 |
+
<embed src="Fig/Method_final.pdf" />
|
| 19 |
+
<figcaption>A diagram of our proposed method for extracting the <em>style-appropriateness</em> and <em>replicability</em> objective features to characterize disclosive transparency. </figcaption>
|
| 20 |
+
</figure>
|
| 21 |
+
|
| 22 |
+
We now move to developing usable metrics that quantify both the replicability and style-appropriateness of real system descriptions. The system demonstration tracks at academic AI conferences represent a good source of real-world system descriptions. Ideally, the purpose of a system demonstration paper at an academic AI venue is to provide a sufficiently detailed system description such that a peer could (given sufficient resources and data) replicate it. Each of these lengthy documents, $e_i$, is accompanied by an abstract, $e'_i$, which briefly describes the system. Although these abstracts are not intended for consumption by the general public, they do succinctly describe implemented systems, some of which are intended to (at least hypothetically) be used by the public. Combined with the fact that these paper/abstract pairs are freely available online, these are an appealing subject of study for modeling disclosive transparency. Details on our pilot study using demo track NLP papers are provided in [4](#sec:study){reference-type="ref+label" reference="sec:study"}.
|
| 23 |
+
|
| 24 |
+
[1](#fig:method){reference-type="ref+label" reference="fig:method"} provides a high-level depiction of our objective disclosive transparency metrics, the *style-appropriateness* 'clarity' metric $C(e)$, and the *replicability metrics* of 'sentence affinity' $R_A(e', e)$ and simulated 'information recovery ratio' $R_R(e',e)$. We assess these features using language model scores derived from GPT-2 [@radford2019language] and BERT [@devlin2018bert].
|
| 25 |
+
|
| 26 |
+
For fine-tuning data on academic language in AI and CS, we produce a training dataset by crawling a random sample of 100k LaTeX files from the arXiv preprint repository with topic label cs.\* from 2007-2020. Additionally, we further collect all 30k crawlable LaTeX files from the cs.CL label from 2017-2020, to adequately sample recent research in natural language processing and computational linguistics. From these raw `.tex` files we produce task-specific plaintext training corpora as described in Appendix C. We then fine-tune the regular pretrained GPT-2 model from Huggingface [@wolf2020transformers] on the corpora as explained below. We manually exclude all papers used in our downstream case study from pretraining.
|
| 27 |
+
|
| 28 |
+
<figure id="fig:abs" data-latex-placement="t!">
|
| 29 |
+
<embed src="Fig/tabuoid_ex-2.pdf" />
|
| 30 |
+
<figcaption>A demonstration of how our <em>sentence affinity</em> curve simulates the replicability of the abstract based on the degree to which it “covers” the content of the full article. For each line in the source document, the maximum pairwise BERTscore between it and the sentences in the short system description is measured. The affinity-replicability score of the article is the paper length-normalized area under this curve.</figcaption>
|
| 31 |
+
</figure>
|
| 32 |
+
|
| 33 |
+
When discussing system descriptions, replicability typically refers to the ability of a third party to implement a functionally equivalent replacement system from a description. Rather than automate replication itself, we use the process of *recovering the full text of the system description from the abstract* as a proxy for replicability.
|
| 34 |
+
|
| 35 |
+
As we established previously, a reasonable assumption for the communicative goal $i$ of an author of an academic paper $e$ describing a system is to transmit the information necessary to reconstruct it. Obviously, it is not generally possible to perfectly recover $i$ from $e'$ or academic papers beyond the abstract would be generally superfluous.
|
| 36 |
+
|
| 37 |
+
The problem of generalized inversion of communicative intent $i$ from form $e$ is yet unsolved [@yogatama2019learning]. So, we treat recovery of $e$ from a short summary $e'$ as a proxy for recovering $i$, reconstructing the system. In other words, **the replicability of an abstract $e'$ can be modeled as the amount of information contained in the full article text $e$ that can be recovered from $e'$**. We propose two metrics to simulate this process using the document $e$ and abstract $e'$; *trigram information recovery* and *sentence affinity*.
|
| 38 |
+
|
| 39 |
+
We directly attempt the recovery of $e$ from $e'$ using a generative language model (in this case, GPT-2 [@radford2019language]) fine-tuned on a *full-text recovery task*, where for each abstract the replicability score is the rate of trigram information in $e$ recovered by the model-generated text.
|
| 40 |
+
|
| 41 |
+
The model generates predicted sentences $e_g$ from the full paper conditioned on an abstract. Given promising results on quantifying semantic complexity using simple measures such as n-gram entropy [@mckenna2020semantic], we use a trigram self-information content metric to measure the amount of information content in $e'$ that is recovered in $e_g$.
|
| 42 |
+
|
| 43 |
+
Using the training dataset global trigram distribution, we take the ratio of the trigram self-information of all trigrams present in $e_g$ that were recovered from $e$ against the total trigram self-information of $e$. This gives us a ratio of recovered form information $R$ as defined in [\[eqn:R\]](#eqn:R){reference-type="ref+label" reference="eqn:R"}:
|
| 44 |
+
|
| 45 |
+
$$\begin{equation}
|
| 46 |
+
\label{eqn:R}
|
| 47 |
+
R_R(e, e_g) = \frac{\sum_{t\in e_{g} \cap e} \log(p(t))}{\sum_{t\in e} \log(p(t))}
|
| 48 |
+
\end{equation}$$
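A minimal sketch of this ratio (our illustration; `trigram_p` is assumed to be the global trigram distribution estimated on the training corpus):

```python
import math

def trigrams(tokens):
    return set(zip(tokens, tokens[1:], tokens[2:]))

def recovery_ratio(e_tokens, eg_tokens, trigram_p):
    # Self-information of e recovered by the generated text e_g,
    # normalized by the total self-information of e.
    src = trigrams(e_tokens)
    recovered = src & trigrams(eg_tokens)
    info = lambda ts: sum(math.log(trigram_p[t]) for t in ts)
    return info(recovered) / info(src)
```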
|
| 49 |
+
|
| 50 |
+
Using the aforementioned trigram recovery metric to model replicability has risks. The distribution of source papers could be too specific, and not contain requisite information, or the generated output might be too noisy to meaningfully simulate the process of inverting system from abstract. Thus, we present an alternative approach here.
|
| 51 |
+
|
| 52 |
+
@zhang2019bertscore presented a method for evaluating the similarity of sentence pairs called BERTscore, which is computed by averaging a greedy match of the cosine similarities of the BERT [@devlin2018bert] token embeddings $\textbf{b}(t)$ for each token in a pair of sentences:
|
| 53 |
+
|
| 54 |
+
$$\begin{equation}
|
| 55 |
+
P_\text{BERT}(s_1,s_2) = \frac{1}{|s_1|}\sum_{t_i\in s_1}\max_{t_j \in s_2}\left(\textbf{b}(t_i)^T\textbf{b}(t_j)\right)
|
| 56 |
+
\end{equation}$$
|
| 57 |
+
|
| 58 |
+
We propose extending this to match a sentence over a set of sentences in a document to produce a document-level *affinity score*,
|
| 59 |
+
|
| 60 |
+
$$\begin{equation}
|
| 61 |
+
R_A(e_1, e_2) = \frac{1}{|e_1|}\sum_{s_1\in e_1}\max_{s_2 \in e_2}(P_\text{BERT}(s_1, s_2))
|
| 62 |
+
\end{equation}$$
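A minimal sketch of the affinity computation (our illustration), assuming a pairwise `bert_score(s1, s2)` helper that returns $P_\text{BERT}$ for a sentence pair:

```python
def affinity(doc_sentences, abstract_sentences, bert_score):
    # For every sentence of the full document, keep its best match in
    # the abstract, then average over the document.
    return sum(
        max(bert_score(s1, s2) for s2 in abstract_sentences)
        for s1 in doc_sentences
    ) / len(doc_sentences)
```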
|
| 63 |
+
|
| 64 |
+
This metric has advantages over trigram recovery in that it doesn't rely on training a generative language model, and is more readily interpretable, as [2](#fig:abs){reference-type="ref+label" reference="fig:abs"} demonstrates.
|
| 65 |
+
|
| 66 |
+
Orthogonal to the question of whether an abstract contains the necessary information to perform a replication is the question of **who would be capable of performing the replication**. This question is important for assessing the appropriateness of a description to layperson audiences. We simulate this with a pair of language models tuned to two styles of writing---one general interest, one scientific---to make a perceptual model of "academic style," as a layperson would probably require many more textbook lookups to understand an academic paper or detailed terms of service than they would reading a Wikipedia article. This approach is analogous to the popular use of perceptual likelihood ratios in speech processing, which have been shown to correlate well with subjective perceptual opinions [@saxon2020robust].
|
| 67 |
+
|
| 68 |
+
For each passage we extract a likelihood ratio between two language models, one a GPT-2 model fine-tuned on the 100k file arXiv corpus, the other the vanilla GPT-2 model that is pretrained on a large, diverse corpus of online English text. Given the system demo abstract $e$, we compute the style-appropriateness $C$ as a log-likelihood ratio that it belongs to this academia-specific distribution $A$ or a general public distribution $V$ as follows: $$\begin{equation}
|
| 69 |
+
\label{eqn:C}
|
| 70 |
+
C(e) = - \sum_{j=1}^{|\{t|t\in e\}|} \log\left(\frac{p(t_j|A, t_{j-1}...t_1)}{p(t_j|V, t_{j-1}...t_1)}\right) \\
|
| 71 |
+
\end{equation}$$
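A minimal sketch of this ratio with Huggingface GPT-2 models (our illustration; the academic checkpoint path below is an assumption standing in for a model fine-tuned on the arXiv corpus):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm_general = GPT2LMHeadModel.from_pretrained("gpt2").eval()
lm_academic = GPT2LMHeadModel.from_pretrained("./gpt2-arxiv").eval()  # assumed fine-tuned checkpoint

@torch.no_grad()
def total_nll(model, text):
    ids = tok(text, return_tensors="pt").input_ids
    # .loss is the mean per-token negative log-likelihood; scale to a sum.
    return model(ids, labels=ids).loss.item() * (ids.shape[1] - 1)

def style_appropriateness(text):
    # C(e): high when the text is much more likely under the general
    # model than under the academic model.
    return total_nll(lm_academic, text) - total_nll(lm_general, text)
```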
|
| 72 |
+
|
| 73 |
+
To analyze how well our objective measures track with subjective notions of transparency, and to demonstrate how they might be used to investigate how transparency affects prospective user opinions in the real world, we perform two pilot studies utilizing our metrics and a corpus of real-world AI system descriptions.
|
| 74 |
+
|
| 75 |
+
We extract system demonstration abstracts from EMNLP 2017--2020, ACL 2018--2020, and NAACL 2018 and 2019, retrieving a corpus of 268 abstracts describing a variety of demonstrations, including systems intended for use by the general public (e.g., translation systems, newsreaders) as well as demonstrations that are of interest more narrowly to the NLP community, software developers, or academics at large (e.g., toolkits, packages, or benchmarks). As we are interested in system descriptions for non-experts, **we restrict our analysis to abstracts which describe systems intended for use by laypeople**. This set contains 55 abstracts, describing diverse systems from automated language learning games, to news aggregators, to specialized search engines for medical topics.
|
| 76 |
+
|
| 77 |
+
We first collect expert opinions on how the abstracts conform to the aforementioned dimensions of transparency to analyze the quality of our automated metrics. Then, we collect salient layperson opinions of trust, understanding, and fairness to demonstrate a pilot study of how transparency drives user attitudes.
|
| 78 |
+
|
| 79 |
+
Our first pilot study seeks to determine the extent to which our objective measures faithfully model experts' subjective notions of transparency. To do this we must first pose a set of precise questions for subjectively assessing disclosive transparency. We identify three largely disjoint *dimensions of disclosive transparency* that each follow directly from a key question an implementer might ask when initially trying to understand a description:
|
| 80 |
+
|
| 81 |
+
What task does this system solve?
|
| 82 |
+
|
| 83 |
+
What components does this system contain, and how do they work?
|
| 84 |
+
|
| 85 |
+
What are the inputs and outputs of this system? What kind of data collection and storage is required to train and operate it?
|
| 86 |
+
|
| 87 |
+
|
| 88 |
+
|
| 89 |
+
Each of these questions concerns some kind of information contained in $i$. They can be posed as survey questions by appending "to what extent does the description explain\..." to the start.
|
| 90 |
+
|
| 91 |
+
We consider **task** transparency interesting because members of the general public are often learning of new system application areas in which emerging technologies such as NLP can be applied, and without communicating the parameters of the task a system is solving clearly, users cannot understand it. **Function** transparency is perhaps the most natural dimension, as discussing the "how" of a system is fundamental to explaining how it works. Finally, we consider **data** transparency because many public discussions of AI ethics center on the use and misuse of data, and it is a central focus of regulatory discussions of AI.
|
| 92 |
+
|
| 93 |
+
Four NLP Ph.D. students provided five-point Likert opinion scores of the task, function, and data transparency levels for each abstract in the \*ACL Corpus. To ensure consistency across the raters we analyzed average abstract-wise variance, as well as the average pairwise inter-rater Pearson correlation coefficient (PCC) and *p*-value for each of the three transparency categories. [1](#tab:irr){reference-type="ref+label" reference="tab:irr"} demonstrates the high inter-rater reliability of the scores, with each average variance $<0.55$ on the 5-point scale.
|
| 94 |
+
|
| 95 |
+
::: {#tab:irr}
|
| 96 |
+
**Transparency** **Variance** **PCC** ***p***
|
| 97 |
+
------------------ -------------- --------- ---------
|
| 98 |
+
Task 0.428 0.508 0.001
|
| 99 |
+
Function 0.506 0.319 0.085
|
| 100 |
+
Data 0.488 0.368 0.016
|
| 101 |
+
|
| 102 |
+
: Inter-rater reliability assessed using variance, average pairwise Pearson's correlation coefficient (PCC) and *p*-value for subjective transparency scores.
|
| 103 |
+
:::
|
| 104 |
+
|
| 105 |
+
Each abstract in the dataset is assigned subjective task, function, and data transparency scores by averaging the four opinion ratings.
|
| 106 |
+
|
| 107 |
+
In this section we briefly outline how we assess user opinions regarding the system descriptions.
|
| 108 |
+
|
| 109 |
+
Starting from the aforementioned dimensions of transparency---task, function, and data---we find three sets of user concerns orthogonal to these dimensions. These are "understanding," "fairness," and "trust." Together these form "user response variables" such as task understanding, function fairness, or data trust, which can be posed as questions about what the user believes or understands.
|
| 110 |
+
|
| 111 |
+
To measure how user attitudes and confusion correlate with description transparency, we simulate a user survey study using Amazon Mechanical Turk (AMT). Each abstract was shown to 10 crowd workers selected from a set of majority English-speaking locales (US, CA, AU, NZ, IE, UK), who were instructed to read the abstract and answer two sets of multiple-choice questions. The first set, *opinion prompts*, consists of five-point Likert scale subjective attitude questions; the second set, of *retention questions*, reveals how well the users can recall phrases from the abstract they just read. See Appendix B for survey methodology details.
|
2103.05445/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2103.05445/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,48 @@
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Recent advances in deep learning have shown significant improvements in the field of computer vision. Neural networks have become the de-facto methodology for classification, object detection, and semantic segmentation due to their high accuracy in comparison to previous methods [\[35,](#page-9-0) [34,](#page-9-1) [41\]](#page-9-2). However, while the predictions of these networks are highly accurate, they usually fail when encountering anomalous inputs (i.e. instances outside the training distribution of the network).
|
| 4 |
+
|
| 5 |
+
This work was partially supported by the Hilti Group and the National Center of Competence in Research (NCCR) Robotics through the Swiss-National Science Foundation.
|
| 6 |
+
|
| 7 |
+
<sup>\*</sup> Equal Contribution
|
| 8 |
+
|
| 9 |
+
<span id="page-1-1"></span>With this work, we focus on the inability of existing semantic segmentation models to localize anomaly instances and how this limitation hinders them from being deployed in safety-critical, in-the-wild scenarios. Consider the case of a self-driving vehicle that uses a semantic segmentation model. If the agent encounters an anomalous object (i.e. a wooden box in the middle of the street), the model could wrongly classify this object as part of the road and lead the vehicle to crash.
|
| 10 |
+
|
| 11 |
+
To detect such anomalies in the input, we build our approach upon two established groups of methods. The first group uses uncertainty estimation to detect anomalies. Their intuition follows that a low-confidence prediction is likely an anomaly. However, uncertainty estimation methods themselves are still noisy and inaccurate. Previous works [\[24,](#page-8-0) [4\]](#page-8-1) have shown that these models fail to detect many unexpected objects. Example failure cases are shown in Figure [1](#page-0-0) (top and bottom), where the anomalous object is either detected but misclassified, or not detected at all and blended with the background. In both cases, the segmentation network is overconfident about its prediction and, thus, the estimated uncertainty (softmax entropy) is low.
|
| 12 |
+
|
| 13 |
+
The second group focuses on re-synthesizing the input image from the predicted semantic map and then comparing the two images (input and generated) to find the anomaly. These models have shown promising results when dealing with segmentation overconfidence but fail when the segmentation outputs a noisy prediction for the unknown object, as shown in Figure [1](#page-0-0) (middle). This failure is explained by the inability of the synthesis model to reconstruct noisy patches of the semantic map, which complicates finding the differences between input and synthesized images.
|
| 14 |
+
|
| 15 |
+
In this paper, we propose a novel pixel-level anomaly framework that combines uncertainty and re-synthesis approaches in order to produce robust predictions for the different anomaly scenarios. Our experiments show that uncertainty and re-synthesis approaches are complementary to each other, and together they cover the different outcomes when a segmentation network encounters an anomaly.
|
| 16 |
+
|
| 17 |
+
Our framework builds upon previous re-synthesis methods [\[24,](#page-8-0) [12,](#page-8-2) [38\]](#page-9-3) of reformulating the problem of segmenting unknown classes as one of identifying differences between the input image and the re-synthesised image from a predicted semantic map. We improve over those frameworks by integrating different uncertainty measures, such as softmax entropy [\[10,](#page-8-3) [21\]](#page-8-4), softmax difference [\[31\]](#page-9-4), and perceptual differences [\[16,](#page-8-5) [8\]](#page-8-6) to assist the dissimilarity network in differentiating the input and generated images. The proposed framework successfully generalizes to all anomaly scenarios, as shown in Figure [1,](#page-0-0) with minimal additional computation effort and without the need to jeopardize the segmentation network accuracy (no re-training necessary), which is one common flaw of other anomaly detectors [\[3,](#page-8-7) [26,](#page-8-8) [27\]](#page-8-9). Besides maintaining state-of-the-art performance in segmentation, eliminating the need for re-training also reduces the complexity of adding an anomaly detector to future segmentation networks, as training these networks is non-trivial.
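As a minimal sketch (our illustration, not the released implementation), two of these per-pixel uncertainty measures can be computed directly from the segmentation softmax output; one common formulation is:

```python
import numpy as np

def softmax_entropy(probs, eps=1e-12):
    # probs: softmax output of shape (C, H, W); entropy over the classes.
    return -np.sum(probs * np.log(probs + eps), axis=0)

def softmax_difference(probs):
    # Gap between the two largest class probabilities per pixel;
    # small values indicate an ambiguous prediction.
    top2 = np.sort(probs, axis=0)[-2:]
    return top2[1] - top2[0]
```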
|
| 18 |
+
|
| 19 |
+
We evaluate our framework on public benchmarks for anomaly detection, where we compare to methods similar to ours that do not compromise segmentation accuracy, as well as to those requiring full retraining. We also demonstrate that our framework is able to generalize to different segmentation and synthesis networks, even when these models have lower performance. We replace the segmentation and synthesis models with lighter architectures to prioritize speed in time-critical scenarios like autonomous driving.
|
| 20 |
+
|
| 21 |
+
In summary, our contributions are the following:
|
| 22 |
+
|
| 23 |
+
- We present a novel pixel-wise anomaly detection framework that leverages the best features of existing uncertainty and re-synthesis methodologies.
|
| 24 |
+
- Our approach is robust to the different anomaly scenarios, achieving state-of-the-art performance on the Fishyscapes benchmark while maintaining state-ofthe-art segmentation accuracy.
|
| 25 |
+
- Our proposed framework is able to generalize to different segmentation and synthesis networks, serving as a wrapper methodology to existing segmentation pipelines.
|
| 26 |
+
|
| 27 |
+
# Method
|
| 28 |
+
|
| 29 |
+
The dissimilarity module in the proposed framework serves as a wrapper method for the segmentation and synthesis networks. In other words, the architecture shown in Figure [3](#page-4-1) is independent of the specific segmentation and synthesis approaches, as long as the segmentation network has a softmax layer as its output.
|
| 30 |
+
|
| 31 |
+
We validate this generalization ability by re-training our dissimilarity network with different segmentation and synthesis techniques than the ones presented in Sec. [4.1.](#page-5-2) Specifically, we chose ICNet as our segmentation module [\[39\]](#page-9-12) and SPADE as our synthesis module [\[29\]](#page-8-20). We selected these networks to create a lighter version of our pipeline, which we called *Ours Light*. These lighter networks are significantly faster than the ones introduced in Sec. [4.1,](#page-5-2) but with lower performance for their respective tasks.
|
| 32 |
+
|
| 33 |
+
Table [3](#page-7-2) shows the performance of our best and lighter frameworks, as well as Image resynthesis++ as our baseline. Ours Light significantly outperforms the Image resynthesis++ baseline, even though this lighter version is using segmentation and synthesis modules with lower performance
|
| 34 |
+
|
| 35 |
+
<span id="page-7-2"></span>
|
| 36 |
+
|
| 37 |
+
| Method | FS L&F ↑AP | FS L&F ↓FPR95 | FS Static ↑AP | FS Static ↓FPR95 | CS ↑mIOU |
|--------------|------|------|------|------|------|
| Im. Resyn.++ | 5.7 | 47.7 | 8.0 | 62.7 | 83.5 |
| Ours Light | 36.0 | 46.4 | 33.4 | 36.1 | 70.6 |
| Ours | 55.1 | 39.6 | 61.5 | 25.6 | 83.5 |
|
| 43 |
+
|
| 44 |
+
Table 3. Performance comparison between best and lighter frameworks. Our framework generalizes well to different segmentation and synthesis, even those with lower performance.
|
| 45 |
+
|
| 46 |
+
(i.e., 83.5% vs. 70.6% class mIOU on Cityscapes). This study demonstrates that the dissimilarity network not only generalizes to different segmentation and synthesis networks, but also performs well even when the segmentation and synthesis networks produce lower-quality outputs.
|
| 47 |
+
|
| 48 |
+
Table [3](#page-7-2) also indicates a direct correlation between the performance of the segmentation and synthesis modules and the anomaly detection accuracy. *Ours* outperforms *Ours Light* in all metrics, as *Ours* uses state-of-the-art networks. These results agree with our intuition that re-synthesis methods depend strongly on the quality of the segmentation and synthesis networks: the better the predictions of these two modules, the easier it is for the dissimilarity module to differentiate between the input and synthesized image. As these networks improve in the upcoming years, we expect a corresponding improvement in the performance of our anomaly detection framework. An inference time analysis between Ours and Ours Light can be found in Appendix [A.3.](#page-11-1)
|
2105.03835/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2021-05-31T02:44:05.381Z" agent="5.0 (X11)" etag="3D587oFWF2NLNJgxcJ6P" version="14.7.3" type="device"><diagram id="j2iNAAuKNFVpW2iyTBT3" name="Page-1">7V1Lk5s6Fv41rrqzGBcgQLDstpPcRW6SmZ6pSVYpYtM2GWw8Np1+/PoRNmA4kmyZFpKIu7OIeVjg8533OZJGaLJ6+rCNNsu/snmcjhxr/jRC05Hj2GFokf+KM8/lGdt3D2cW22RenjueuEte4vJk+cXFQzKPd60b8yxL82TTPjnL1ut4lrfORdtt9ti+7T5L20/dRIvyidbxxN0sSmPqtv8k83xZnnWtxu1/xsliWT468MoLq6i++XBit4zm2WPjWejdCE22WZYfPq2eJnFaUK+iy2Gg95yr9Ytt43Uu8oWXH9N8+vMfk5sbN/sa/PnzU/75+98rfH5F6UP5i8u3zZ8rEswetr/iYhB7hG7j9fymICs5nKXRbpfMyMllvkrL67t8m/03nmRptt1/G72feu8JsaorFRGdw1jvkzQ9jlzCj8gR/esqMmYP21l84icFweHGeL5oYlgS5UOcreJ8+0xueDziWYG2bCBZndvGaZQnv9r8EJVstaiHq5/wJUvIKztWKQN+ReJSAlwcjD23PcrhN5VfbEIIxqqZqxzLq6SkGiiPtos4pwYiiEXPjds2xQ07/jtjF7xzYJ1+L4f9XuX95MPhDaqjBgzHU3t2vYB1AwHWPcetgjxnCCth12uT2fO68RGGeIV98ZHHfE6vfOE518YXrg/5whl35AzXVcMZ1Cur4Azf727s1tk6lmHprFfZNjc0ivGQS2wZsi0fB7Yd+J6HW6AihMcWblxG3ZjS86E5CvphSgc+B51+L6hGLRVMHJxn4m32sJ7XXPy4TPL4bhPtOeuR+OltPo7SZLEuFCHhxpiw8u082i3rL98Ttq14vJQBLvv+ird5/HSSLytKW4DSFcINvrWdE4zL4tEG3S8nKxKwGmbrBtss3VBHcBXE0JKISj8KwECQByRJv+2flObzP/CMk4ysy8ZH0An3FGgX53fQLhBJYe3iAq9Jmnax3fNkleSTntUTvtFqwsMd1YQDxasnz5WMzHxOr2JZiYAsqZQhZND5QQwh8xmMg/oy4SJxX5omm10soLZ2m0M28T55Koja0lTEnr/zi3+SKIkgJT2Kkixt5fTmCw2VkIFhhPQGSkgH5hl1E5IVuBOT6o7H48MHiq7kl+anfPLS22A4IJTHUtAtmUXpTXlhlcznKQ+xoxo+6c1fgAU0kAwskEosMAeLp+/2744FZfMYjqWrEguWt37AwvntscDeWSyUykXIxWKEb2MiHHhqNCY7cjFZLz7G9wWlnOOZf2UbeW6jA0sCumGr1CuF28ib7IHBtx/++e/P03cEvwLJ++/FhWWcR3xA24S9NELmYphGP+L0S7ZL8iRjfuMjuKH+ZgtJR5IAwoAroJEMlSLJipIKfP73x4s1wpO9IBKK3xyE8W9GS6MEgKh8Fu04KDVWtkiAdi4J8pTkX0sSFZ+/FZ/HXnk0fWpcmj5XB5ISJ5WmMCRzQkU7sBlAuKuAE8hLTpzAxyjJm7Cc1UtZrmYfZmb//t6PZ5iZ2W+lOn8+rDblOG557UuUEz2x3t/gWCcNrCE8B1PxYdiN5aAjXeXOe+Y5kKLviedYjqC0qF+G4Qb5JhfThltpdO+wXLCuhqE2Bt8aV9iGgZJl+4eFIiTTZJhVk+OVuC6XX9gm4oDKCUd+pZWpeK7ei3V9WYi6IKrLsXNEEsaDlN+KrU2RX8r+dpXfULP8CmTGt8ts9eNh1yGAliFjHM9Bn40U6F3TSjFYM9BPMRm+v5layawSPqVMunoVVMZCtVbipXBfriGdfr5nRqlTgbgu3u+RT5eUhXVgl5h23FjOYDuf/nn67i5Lib7B0z/26fTNMhk55HHWwX0vPuVDSta2Ej0yhNFrF0lcVm+N0ioJ4lXgm7n1HVGTBWCO8YBJwIhu59ScXncFojDVTWIIWnTtTWKuSCemiT05CENSau7JcYfa3QSneeinpEBsZyYloeevnZK85qQraBRzBZr2lPoMLiucksbWPahU7SmLCkHBlAU976jliF4yCYldjmSnP+qMBzP9wcVlcJMbYeNl1/p61wlMstIaLq9LkERbr8xryFSBwE7d7/9kuaEgnhJsOrMRn2leJ+gCEzpM6ktQqwiQWflNKr3ZsekBBkNU9UZS0wOC9UElqy6cT/u8tVEydRMspetuo/REImSD2/QqTjREe9jAjfC7VkegWPfVp0c9R4n64OUXBbPGTiNrPJw0pOy8MQpNyxt7AkVYrWVres0P3TGgSNQ8yLK1YfGdAxOqsuI71WXryq18DceIot+Fs15hyrFRHOPCxtmw6ypbcMa3ao6REH6ayTG+WQ17HmfNl8uXwIIDqeYYh8Ex9ezG3d7XMtrVUtONQdU7dXdj+LywfI9bA7rbt36aAzow66sdQVY43phX/CZ5bb/dGNxMb3GGxVs30L3IB7/9cqC66sjhtBD0oKu09/5hViTS6Nkciq7qGzdYEPQ1t45hXq/tFbRNUIvLMmZDql1fh+dqXyEY2ntYMM9/vkYwdC/VggUqU6o7XH1YvtPe4YqH2pfpm7bqHB5qXyaG2SLtlBSZi2ckJU3rusa8Zq4rMEhYYKFUtQaJF7G+Nb2cUfWwx0B300sgoyGvv6YXQ8oZFGxd1x/woYFywUCSelmo56joZQl4AVx7rtxkhG4+Gd+iIkHYPWqhMs3pjuBkUaRAZhhpqp5T6lRoozu9GJwshTSgG0yCuG8EYeuydgRZsekBwU9veDnYtPJxQEfAd9FqQ4hmJkztzWWydV7NkJAUDoaw4sJYeouV7OkPoIEtcKl2IknliZniPsvqBqIsMxyoL/dZxQZ3gcAWVFqnjcI8ZV1I15UTCgRahlXsXVcJt90U7bFzLgImJPjaPPjWPDh+aX/0ep1Q1ToPIiLAh4YoDy8Ix4GPQhQ6KEBOtaNYXbW0xwFxF2zLJl6f66OOW236sBra02Z7Pmyl9RVollAgyaO1JQfm7bVPRghZqQyjKEZN4decnw8lrIVqZmt1aFYu0qOm53fdDB0OpLi1OpQw39JQjjHL/aZ2ve864QdrnvATilT4B8kxhvlc0jgGulXKOUZk9WDTt/X1qbZJRqsNy9j3tml4yFtu6sX6PrmKlVV8F66sgilMlK6sEnIXuxlqjaDnZmRYI9DeRF7vK8aG8NMbbHvTRO1xjsa69/eyeG3kgos4TFqLOAyjPi57CQfKyDF6nNQu4WBbAkkAk2oMXBQM8SpDG9jMwBp3DV5DuLIZDseBFx7//Pa4F6fvZHuitkWHu9N4ls3fSoolgoxqgtKSom0pDC+HJqueHY6rCTqvlVXWWD3HgbZFB4Ifozzev+b0eR2tCJDXKYcBtQ0mw/AqTSXbloTi/iXVPEjVgYsq7pisoeQUW
2MHxKud5ZQcbrMsb95OuH35FzGAxR3/Bw==</diagram></mxfile>
|
2105.03835/main_diagram/main_diagram.pdf
ADDED
|
Binary file (57.9 kB). View file
|
|
|
2105.03835/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,19 @@
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
The Latent ODE architecture [@rubanova2019latent] is an extension of the Neural ODE method [@chen2018neural], which provides memory-efficient gradient computation without back-propagation through ODE solve operations. Neural ODEs represent trajectories as the solution to the initial value problem: $$\begin{align}
|
| 4 |
+
\frac{d h(t)}{dt} &= f_\theta(h(t), t) \\
|
| 5 |
+
h_{0:N} &= \text{ODESolve}(f_\theta, h0, t_{0:N})
|
| 6 |
+
\end{align}$$ where $f_\theta$ is parameterized by a neural network, and $h(t)$ represents hidden dynamics. The continuous dynamical representation allows Neural ODEs to natively incorporate irregularly sampled time series.
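As a minimal sketch (our illustration; real implementations use adaptive solvers with adjoint-based gradients), ODESolve can be approximated with fixed-step Euler integration between the requested, possibly irregular, time stamps:

```python
import numpy as np

def odesolve(f, h0, ts, substeps=10):
    # Integrate dh/dt = f(h, t) from ts[0] through every requested time,
    # taking `substeps` Euler steps between consecutive time stamps.
    hs, h = [h0], h0
    for t0, t1 in zip(ts[:-1], ts[1:]):
        dt = (t1 - t0) / substeps
        for k in range(substeps):
            h = h + dt * f(h, t0 + k * dt)  # Euler step
        hs.append(h)
    return np.stack(hs)
```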
|
| 7 |
+
|
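To make the ODESolve operation concrete, here is a minimal sketch using the `torchdiffeq` package (an assumption on tooling; the dynamics network `FTheta` and all sizes are hypothetical):

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint_adjoint as odeint  # adjoint method: gradients without
                                                  # back-propagating through the solver

class FTheta(nn.Module):
    """Hypothetical dynamics network f_theta(h(t), t)."""
    def __init__(self, dim=8, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim))

    def forward(self, t, h):  # torchdiffeq calls the dynamics as f(t, h)
        return self.net(h)

f_theta = FTheta()
h0 = torch.zeros(1, 8)               # initial state h_0
t = torch.linspace(0.0, 1.0, 20)     # t_{0:N}; irregular spacing works equally well
h = odeint(f_theta, h0, t)           # h_{0:N}: hidden states at all requested times
```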
Latent ODEs arrange Neural ODEs in an encoder-decoder architecture. Observed trajectories are encoded using a GRU-ODE architecture [@de2019gru; @rubanova2019latent]. The GRU-ODE combines a Neural ODE with a gated recurrent unit (GRU) [@cho2014learning]. Observed trajectories are encoded by the GRU into a hidden state, which is continuously evolved between observations by a Neural ODE parameterized by neural network $f_\theta$. The GRU-ODE encodes the observed data sequence into parameters for a variational posterior. Using the reparameterization trick [@kingma2013auto], a differentiable sample of the latent initial state $z_0$ is obtained. A Neural ODE parameterized by neural network $f_\Psi$ deterministically solves a latent trajectory from the latent initial state. Finally, a neural network $f_\Phi$ decodes the latent trajectory into data space. The Latent ODE architecture can thus be represented as: $$\begin{align}
\mu_{z_0}, \sigma_{z_0}^2 &= \text{GRUODE}_{f_\theta}(x_{1:N}, t_{1:N})\\
z_0 &\sim q(z_0 | x_{1:N}) = \mathcal{N}(\mu_{z_0}, \sigma_{z_0}^2) \\
z_{1:N} &= \text{ODESolve}(f_\Psi, z_0, t_{1:N}) \\
x_{i} &\sim \mathcal{N}(f_{\Phi}(z_{i}), \sigma^2) \hspace{1em} \text{for} \hspace{1em} i = 1, ..., N
\end{align}$$ where $\sigma^2$ is a fixed variance term. The Latent ODE is trained by maximizing the evidence lower-bound (ELBO). Letting $X = x_{1:N}$, the ELBO is: $$\begin{equation}
\mathbb{E}_{z_0 \sim q(z_0|X)}[\log p(X | z_0)] - \text{KL}\left[q(z_0|X)\;||\;p(z_0)\right]
\end{equation}$$
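The equations above translate almost line-by-line into code. Below is a minimal, hypothetical sketch of one negative-ELBO evaluation; the encoder, dynamics, and decoder modules are assumed to exist with the stated interfaces:

```python
import torch
from torchdiffeq import odeint

def neg_elbo(x, t, gru_ode_encoder, f_psi, f_phi, sigma2=0.1):
    """Hypothetical Latent ODE objective (negative ELBO).

    gru_ode_encoder: (x_{1:N}, t_{1:N}) -> (mu_{z_0}, logvar_{z_0})
    f_psi:           latent dynamics module, forward(t, z)
    f_phi:           decoder from latent space to data space
    """
    mu, logvar = gru_ode_encoder(x, t)
    z0 = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
    z = odeint(f_psi, z0, t)                 # deterministic latent trajectory z_{1:N}
    x_hat = f_phi(z)                         # decode into data space

    # Gaussian log-likelihood with fixed variance sigma^2 (up to an additive constant)
    log_px = -0.5 * ((x - x_hat) ** 2 / sigma2).sum()
    # KL[q(z_0|X) || N(0, I)] in closed form
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum()
    return -(log_px - kl)
```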
# Method

The LatSegODE requires a Latent ODE base model trained on a family of SDFs. We propose two scenarios where SDFs may be available. First, the LatSegODE is applicable when a training set of hybrid trajectories with labelled changepoints exists. In this case, given a training set of $N$ hybrid trajectories $X = (x^{(i)}, t^{(i)})_{i=1}^N$, each with $C$ labelled SDF boundaries $(s_{k}, e_{k})_{k=0}^C$, we treat each $x^{(i)}_{s_k:e_k}$ as an independent training trajectory, and train on the union of all SDFs. The LatSegODE can also be applied when physical simulation is available. In these scenarios, the base model can be trained on trajectories simulated over the range of dynamical modes that we expect in hybrid trajectories at test time. These two use cases are illustrated in the first two experiments.
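As a small illustration of the first scenario, here is a sketch of how labelled changepoints could be turned into independent training trajectories (function name and data layout are assumptions, not from the paper):

```python
def sdf_training_set(hybrid_trajectories):
    """Split hybrid trajectories at labelled changepoints into SDF segments.

    `hybrid_trajectories` is assumed to be a list of (x, t, boundaries) tuples,
    where `boundaries` holds the (s_k, e_k) index pairs of labelled SDF segments.
    """
    segments = []
    for x, t, boundaries in hybrid_trajectories:
        for s, e in boundaries:
            # each x^{(i)}_{s_k:e_k} becomes an independent training trajectory
            segments.append((x[s:e], t[s:e]))
    return segments
```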
2105.11696/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2021-01-25T06:37:13.257Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.104 Safari/537.36" etag="gV8s6k6iSPcbfoo2BMVk" version="14.2.7" type="device"><diagram id="r6Zr-R9pBFbbrhgcOJeB" name="Page-1">5V1bc5s4FP41nt19SIeLwfZjfEmT1m2z8XS77UsHg4xpMPICjpP8+pVA3CQR2xTLxmR2tnAkDuL7jo50dHNHHa2e3/vGevkJWsDtKJL13FHHHUWRJV1H/2DJSyzpqUossH3HIpkywcx5BcmTRLpxLBAUMoYQuqGzLgpN6HnADAsyw/fhtphtAd3iW9eGDRjBzDRcVvrNscJlLO0rvUx+Cxx7mbxZ1gdxyspIMpMvCZaGBbc5kTrpqCMfwjC+Wj2PgIvBS3CJn7spSU0L5gMv3OeB6cMn/+vr5nPvddjVvNtvlv/j8YpoeTLcDfngoWM5PkLSgZ6BHtddpHw499GVja8mnok49sknhS8JTj7ceBbAr5JRru3SCcFsbZg4dYssA8mW4colyQvHdUfQhX70rHpzM5JGEpKzn5SUD/gheM6JyCe+B3AFQv8FZSGpXYI2MTc5ud9m5Mk6kS1zxPWJzCD2YqeaM0jRBUH1AIQVBuHrTQh9YPsgCJwnwEI8BvVDPJJu0F89ECv6uWGsMhhPP6H7W2BYNRsqNtXjoNjfE0TlWCBqDIij6ezcUeyqWhFFjUGxLxJEnQGRgQ541jVumNCd6RrIB5gIiiA0/JAV50GEXkiB+HaFBlahZWMhzGGkcTBKZD5wjRA7qkLTygGOvOEeOqgkKUOUmSsSBX0AN74JyEP55ovSI/d3KEII2iBkFEU0pl9dndleRWYvgEJlF/T7ctiVhHG4/Dy3r6b2z+9fBqvvj1//fVJ11BazDcXBJJ47WWqPwphudfcli1GkCyare/lkMRWiKlmMItFksb2HiyNL7dZVs2hFosmq2ktpEFlMm1WVLEaRaLJq6HicO1mqXFfNohWJJqvPIUs3VjggiqL8IL2NQn2GSRT2hEXuDNexPUws4g6gnuMQB0eOabjXJGHlWBZ+fOiDwHk15pEqHF6t8VdG360NO9oY69qEMIiH2N40i+pRLC+M7XIMp44AjM/AgGHguvEoq2XdhBzKikiUVYlBufm2rKrnhjI7OjtqPsp7eAyxKCsMyuPmozw4N5RbEHrLWk3dGEaR4G6M2oLQmxlPrEzWiQMEtQWht6zURRatSDRZLQi9yxr4Q7mi9YimqgWBd1kv4VCqaD2iqeKE3aNJZygzhDWtH1fWozhZN44NryOk2arRNKTLugOnQjoZNaGRVhuPtFw22ncyqNk4O4K623yoy6YsTgY1G2xHUGvNh7ps3vVkULcg4qbnqOua6xbcf+m2IN6mZ6jrmukWTVULom16frqueW7RVLUg1qa7rXXNcoumqgWxNj03Xdcct2iq2Fi7+ROsCtX+c1a0i+27sWH2BcyvSucFclKvL2t6VT0zkNnYetx8kPUzA5mNqifNB3lwZiBzhj4P7aMUtnl40AOJ7MbBhRm/uSvmTPoxdACWjjEdHMlRfU55z90ACD/jJZeNGOP+BaY3Su4qVzE/uohLUGu3Smcbo4Ot6+wtp6ZghdYjuAess03a1Jjjfc8N97jd3c3a0dZy2h9M/4c3lvR/Hu9+rnraqL/gbE6erObAshzPZrA+mw2J1CgwB0P1WDsS+SC2YGyR2apUtUliFNG01OhauGS1YHSR2apUuf9AKxJNVgvGF5mtSpVrFq1INFktGGFk1jBUJYtRJJos3jaaCyOL2apUuWbRikSTVUNcfO5kMQuyq5LFKBJMVvK6iyarLjfIKBJNFmdd0sWRRS/IrkwWrUg0Wcrlk0WvyK7KFa1HNFUtWBlFr8iuShWtRzBVnI2mDFXB0ljjy4ULnglnw51H8RRHgEzJlBaLBtCqUJ09ufKCD22HomMT2wJ3SY8mVZ4pYRQd8dwkLlltGEMsOznyd0c6hJPVgjFEeoCiMlmMItFktWEMkW6z6hrpOCJZX354H79uvZvr3vjvmTabLl5W06sWeMH6JvdpRcfj6nWNJJYdhLe3z8rAnNx/nH7gzB024TTTkoXXAk4zna/hL135/vmLPAm2wcP97P5myzm7+AEEa+gFAEnfAw/4Bj4nmsGzfOLbBYuw1mlvfKbmjLyZw84i+kvZYajgEFbKzu7ZcflY0+N8dg5rO8hyo8JqJB8+ghxaUvTXYU4qTeXmxn9K6wnXce0F6GlipV7VPgLV+x7s58kOXaZER2TkrXWtOuKaEHePe3weebA2vIIt6f9t8FH1QzO2i2tMkD03/kTFRP+h90vcq7/wJTYWCRvV1cJYOe5L/PgKejCI/GwhS1y7cQZp/Zy9NzkevaNokTnlfm9AQ5+MpdFh+uldAoEWgYAkY3yNC6bhz9QQtLvyymnexKwrqVEyNTHSaQr+H8YbCaJKF+WJEU7yxN+PM+C7pC4yCRgcfD1ZwdgnI0UPwIS25yT3yYvwUfPZA/mU6HOScuQzxIWPjSEtfOTmcY7Ip2CxHN3Gzh7fk3VOWFh0+TiROH2cmLl9nCBFsqS+xG/TcDG0caw/aQKyV2YOKymd2pf6UldKU2dp2WXqq0ijsA/paVLKd+bVNFyT05zRGrVEzUtOg56Tx34wS+vm0ohDzGwol2YEmdzOFYA2x+g2tcm8sFhTSD6mSsWeAHmW2BnoZ3BmYVnLVEP73i0bMxWw4pjvnbmnSca/FpH+HkdS3YuVPWVuntG2w69HvndLvgt73x52M6z3pX+u4vfN5CidwmMZCf3jF/qpu4Gc0wvuvPUmJpEcNurNg3WOKich6e6PFXoydHzUqVMkuMA3Swd5F+kXnL97k+tUx5dNuOfbZhB/P37j3Tuepgs1GZ3fDc0bDK8XrB7NYNiwbm+DGeHfT1KkOfSjRbiSh8tbQib1aO3WZFgeCIIWGRLjewZ7+p6jmdKBx763LgTVqBCUDh33DUE1aldf6jBqjkGZ98j1BqHcoTrWHTViqK74w0O9o/3wUIf01nOYZ/10dfI/</diagram></mxfile>
2105.11696/main_diagram/main_diagram.pdf
ADDED
Binary file (27.8 kB)
2105.11696/paper_text/intro_method.md
ADDED
@@ -0,0 +1,54 @@
# Introduction

Our model learns response generation as a generation task and emotion recognition as a classification task. By learning response generation and emotion recognition simultaneously through multi-task learning, the model can generate a response that takes the emotion of a given utterance into account.

Multi-task learning often involves several similar tasks because they can share information and the performance of each task can thus be improved. However, the purpose of our multi-task learning method is to improve the quality of response generation, not to improve the performance of emotion recognition. This is different from general multi-task learning.

Our model is based on BART [@lewis-etal-2020-bart]. Its architecture is shown in Figure [1](#fig:arch){reference-type="ref" reference="fig:arch"}. The model has several output layers, or heads, for the tasks to be trained, which include an LM head for generating words in response generation and CLS heads for solving classification tasks. Given a sentence, the CLS head predicts its label, such as `positive` or `negative`. One CLS head is set for each classification task.

The input/output format of each task is the same as that in BART. In the generation task, we put an utterance and a right-shifted response into the encoder and decoder, respectively. In the classification task, we put an utterance and a right-shifted utterance into the encoder and decoder, respectively. Following the learning algorithm of MT-DNN [@liu-etal-2019-multi-task], the task that the model learns is selected for each mini-batch. A different loss is calculated for each task, and the parameters are updated for each mini-batch.

Let $\bm{x} = (x_1, \ldots, x_M)$ be the given utterance and $\bm{\theta}$ be the parameters of the model. Our model is trained by updating $\bm{\theta}$ based on the loss for each task.

The response to $\bm{x}$ is defined as $\bm{y} = (y_1, \ldots, y_N)$. The model infers an appropriate $\bm{y}$ from $\bm{x}$. The generation loss $\mathcal{L}_\mathrm{gen}$ is calculated as the negative log-likelihood loss. $$\begin{equation}
\mathcal{L}_\mathrm{gen} = -\sum_{j=1}^N \log p (y_j | \bm{x}, y_1, \ldots, y_{j-1}; \bm{\theta})
\label{eq:loss_gen}
\end{equation}$$

If the correct label of $\bm{x}$ is $c$, the model infers $c$ from $\bm{x}$. The negative log-likelihood loss is also used for the classification loss $\mathcal{L}_\mathrm{cls}$. $$\begin{equation}
\mathcal{L}_\mathrm{cls} = -\log p (c | \bm{x}; \bm{\theta})
\label{eq:loss_cls}
\end{equation}$$

Although the proposed multi-task learning model learns the generation and classification tasks simultaneously, the classification task may take up too large a share of the training. When solving a general classification task, the end of training is often determined by the convergence of the loss on the validation data. The target of our model, on the other hand, is a generation task, and the number of epochs required for generation is larger than that for the classification task.

Therefore, we weight the loss functions. While the weight for response generation is fixed at 1, the weight for emotion recognition is varied between 0 and 1. This reduces the contribution of the classification task when updating the parameters.
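A minimal sketch of this training scheme follows; the `generation_loss`/`classification_loss` helpers and the weight value are hypothetical placeholders, not the paper's implementation:

```python
import random

def train_epoch(model, gen_batches, cls_batches, optimizer, cls_weight=0.5):
    """One epoch of MT-DNN-style multi-task training: one task per mini-batch."""
    batches = [("gen", b) for b in gen_batches] + [("cls", b) for b in cls_batches]
    random.shuffle(batches)
    for task, batch in batches:
        optimizer.zero_grad()
        if task == "gen":
            loss = model.generation_loss(batch)                    # loss_gen, weight 1
        else:
            loss = cls_weight * model.classification_loss(batch)  # down-weighted loss_cls
        loss.backward()
        optimizer.step()
```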
::: {#tab:data}
  **Dataset**     **Train**   **Validation**   **Test**
  ------------- ----------- ---------------- ----------
  DailyDialog        76,052            7,069      6,740
  TEC                16,841            2,105      2,105
  SST-2              16,837              872      1,822
  CrowdFlower        15,670            1,958      1,958

  : The statistics of the datasets for our experiments, where TEC stands for Twitter Emotion Corpus. Because TEC and CrowdFlower have no train/validation/test split, we split them 8:1:1.
:::

::: table*
| **Model**   | **[Bleu]{.smallcaps}** | ***dist*-1** | ***dist*-2** | **Avg Len** | ***Emo*** | ***Flu*** | ***Info*** | ***Relv*** |
|-------------|-----------------------:|-------------:|-------------:|------------:|----------:|----------:|-----------:|-----------:|
| R           | 32.35                  | 5.87         | 30.48        | 14.12       | 3.44      | 3.48      | 3.63       | 3.55       |
| R+E6        | 32.29                  | 5.93         | 30.48        | 14.12       | **3.59**  | **3.82**  | 3.62       | **3.96**   |
| R+E6+E2     | 32.39                  | **6.00**     | **30.77**    | 14.11       | 3.58      | 3.75      | **3.74**   | 3.70       |
| R+E6+E12    | **32.55**              | 5.89         | 30.57        | **14.14**   | 3.52      | 3.48      | 3.55       | 3.58       |
| R+E6+E2+E12 | 32.29                  | 5.91         | 30.47        | 14.12       | **3.59**  | 3.75      | 3.57       | 3.64       |

: [Bleu]{.smallcaps}, *dist*-1, *dist*-2, and Avg Len are automatic evaluation metrics; *Emo*, *Flu*, *Info*, and *Relv* are manual evaluation scores.
:::
2108.11179/main_diagram/main_diagram.drawio
ADDED
The diff for this file is too large to render. See raw diff
2108.11179/paper_text/intro_method.md
ADDED
@@ -0,0 +1,95 @@
# Introduction

Minimization of a loss that is a function of the test-time evaluation metric has been shown to be beneficial in deep learning for numerous computer vision and natural language processing tasks. Examples include intersection-over-union as a loss that boosts performance for object detection [\[48,](#page-9-0) [70\]](#page-9-1) and semantic segmentation [\[37\]](#page-8-0), and structural similarity [\[34\]](#page-8-1), peak signal-to-noise ratio [\[4\]](#page-8-2) and perceptual [\[40\]](#page-9-2) reconstruction losses for image compression that give better results according to the respective evaluation metrics.

Training deep networks via gradient descent on the evaluation metric is not possible when the metric is nondifferentiable. Deep learning methods resort to a proxy loss, a differentiable function, as a workaround, which empirically leads to reasonable performance but may not align well with the evaluation metric. Examples exist in object detection [\[70\]](#page-9-1), scene text recognition [\[42,](#page-9-3) [43\]](#page-9-4), machine translation [\[3\]](#page-8-3) and image retrieval [\[6,](#page-8-4) [41\]](#page-9-5).

<span id="page-0-0"></span>

Figure 1. A comparison between recall@k and rs@k, the proposed differentiable recall@k surrogate. Each example shows a query, the database images ranked according to similarity, and the corresponding values of recall@k and rs@k together with their dependence on similarity score changes. Note that the values of recall@k and rs@k are close. Changes to similarity and ranking may in some cases not affect the original recall@k but can affect the surrogate, with ranking changes having a more significant impact than similarity changes. Similarity values of all negatives are fixed for ease of understanding. The similarity values of the positives that were changed in rows 2, 3 and 4 are underlined.

This paper deals with the training of image retrieval posed as deep metric learning and Euclidean search in the learned image embedding space. It is the task of ranking all database examples according to their relevance to a query, which is of vital importance for many applications. The standard evaluation metrics are precision and recall in the top retrieved results and the mean Average Precision (mAP). These metrics are standard in information retrieval: they reflect the quality of the retrieved results and allow for flexibility to focus either on the few top results or on the whole ranked list of examples, respectively. Recall at top-k retrieved results, denoted by *recall@k* in the following, is the primary focus of this work.

<span id="page-1-0"></span>The problem related to the optimization of nondifferentiable evaluation metrics applies to recall@k as well. Estimating the position of positive images in the list of retrieved results and counting how many positives appear inside a short-list of a fixed size involves nondifferentiable operations. Note that methods for training on non-differentiable losses, such as actor-critic [\[3\]](#page-8-3) and learning surrogates [\[42\]](#page-9-3), are not directly applicable to recall@k. This is due to the fact that these methods are limited to decomposable functions, where a per-example performance measure is available. Such an attempt is made by Engilberge *et al*. [\[13\]](#page-8-5), where an LSTM learns sorting-based metrics, but it was not adopted in subsequent work due to slow training. As an alternative, deep metric learning approaches for image retrieval often use ranking proxy losses, termed pairwise losses. In the embedding space, loss functions such as contrastive [\[18\]](#page-8-6), triplet [\[53\]](#page-9-6), and margin [\[69\]](#page-9-7) pull the examples from the same class closer to one another and push the examples from a different class away. These losses are hand-crafted to reflect the objectives of the retrieval task and, consequently, the evaluation metric. The loss value depends on the image-to-image similarity for image pairs or triplets and does not take into account the whole ranked list of examples. Changes in the similarity value that do not change the overall ranking still alter the loss value, indicating that these losses are not well correlated with ranking [\[6\]](#page-8-4). Recent methods focus on optimizing Average Precision (AP) and use a surrogate function as a loss [\[6,](#page-8-4) [7,](#page-8-7) [19,](#page-8-8) [47,](#page-9-8) [49\]](#page-9-9). A surrogate of an evaluation metric is a function that approximates it in a differentiable manner.

The proposed method attains state-of-the-art results for 4 fine-grained retrieval datasets, namely iNaturalist [\[61\]](#page-9-10), VehicleID [\[61\]](#page-9-10), SOP [\[39\]](#page-9-11) and Cars196 [\[27\]](#page-8-9), and 2 instance-level retrieval datasets, namely Revisited Oxford and Paris [\[45\]](#page-9-12). This is accomplished by the demonstrated synergy between the following three elements. First, a new loss that is proposed as a surrogate of an established retrieval evaluation metric, namely recall at top k, and is experimentally shown to consistently outperform existing competitors. A comparison between the evaluation metric and the proposed loss is shown in Figure [1.](#page-0-0) Second, the use of a very large batch size, on the order of several thousand large-resolution images on a single GPU. This is inspired by the instance-level retrieval literature [\[47\]](#page-9-8) and is introduced for the first time in the context of fine-grained categorization. In a recent work verifying prior results in deep metric learning for fine-grained categorization [\[36\]](#page-8-10), the batch size is fixed to a single small value across a large set of comparisons between different losses; in this work we reach batch sizes that are two orders of magnitude larger than in the work of Musgrave *et al*. [\[36\]](#page-8-10). The third element is the proposed mixup regularization technique, which is computationally efficient and virtually enlarges the batch. Its efficiency is obtained by operating on the very last stage of similarity estimation, *i.e*. scalar similarities are mixed, while its applicability goes beyond the combination with the proposed loss in this work. The proposed loss is used for training widely used ResNet architectures [\[20\]](#page-8-11) but also recent vision transformers (ViT) [\[10\]](#page-8-12). The superiority of this loss compared to existing losses is demonstrated with both architectures, while with ViT-B/16 top results are achieved at a lower throughput than with ResNet.

# Method

This section presents the task of image retrieval and the proposed approach for learning image embeddings.

**Task.** We are given a query example $q \in \mathcal{X}$ and a collection of examples $\Omega \subset \mathcal{X}$ , also called the database, where $\mathcal{X}$ is the space of all images. The sets of database examples that are positive or negative to the query are denoted by $P_q$ and $N_q$ , respectively, with $\Omega = P_q \cup N_q$ . Ground-truth information for the positive and negative sets per query is obtained according to discrete class labels per example, *i.e.* if two examples come from the same class, they are considered positive to each other, otherwise negative. This is the case for all (training or testing) databases used in this work. The terms example and image are used interchangeably in the following text. In image retrieval, all database images are ranked according to similarity to the query q, and the goal is to rank positive examples before negative ones.

**Deep image embeddings.** Image embeddings, otherwise called descriptors, are generated by a function $f_{\theta}: \mathcal{X} \to \mathbb{R}^d$ . In this work, the function $f_{\theta}$ is a deep fully convolutional neural network or a vision transformer mapping input images of any size or aspect ratio to an $L_2$ -normalized d-dimensional embedding. The embedding for image x is denoted by $\mathbf{x} = f_{\theta}(x)$ . The parameter set $\theta$ of the network is learned during training. The similarity between a query q and a database image x is computed by the dot product of the corresponding embeddings and is denoted by $s(q,x) = \mathbf{q}^{\top}\mathbf{x}$ , also written $s_{qx}$ for brevity.
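A two-line sketch of this similarity pipeline (here `f_theta` is a placeholder for any of the networks above):

```python
import torch.nn.functional as F

def similarity_matrix(query_images, db_images, f_theta):
    """s(q, x) = q^T x on L2-normalized embeddings; f_theta is a placeholder network."""
    q = F.normalize(f_theta(query_images), p=2, dim=-1)
    x = F.normalize(f_theta(db_images), p=2, dim=-1)
    return q @ x.t()  # (num_queries, num_db) matrix of similarities s_{qx}
```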
**Evaluation metric.** Recall@k is one of the standard metrics to evaluate image retrieval methods. For query q, it is defined as the ratio of the number of relevant (positive) examples within the top-k ranked examples to the total number of relevant examples for q, given by $|P_q|$ . It is denoted by $R_{\Omega}^k(q)$ when computed for query q and database $\Omega$ and can be expressed as

<span id="page-3-8"></span><span id="page-3-0"></span>

$$R_{\Omega}^{k}(q) = \frac{\sum_{x \in P_q} H(k - r_{\Omega}(q, x))}{|P_q|}, \tag{1}$$

where $r_{\Omega}(q,x)$ is the rank of example x when all database examples in $\Omega$ are ranked according to similarity to query q. Function H(.) is the Heaviside step function, which is equal to 0 for negative values and otherwise equal to 1. The rank of example x is computed by

<span id="page-3-3"></span>

$$r_{\Omega}(q,x) = 1 + \sum_{z \in \Omega, z \neq x} H(s_{qz} - s_{qx}). \tag{2}$$

Therefore, (1) can now be expressed as

$$R_{\Omega}^{k}(q) = \frac{\sum_{x \in P_q} H(k - 1 - \sum_{z \in \Omega, z \neq x} H(s_{qz} - s_{qx}))}{|P_q|}. \tag{3}$$
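For reference, the exact metric in (1)-(3) is straightforward to compute; a sketch in PyTorch follows (the tensor layout is an assumption):

```python
import torch

def recall_at_k(sims, positive_mask, k):
    """Sketch of Eqs. (1)-(3): exact recall@k for a single query.

    sims:          (n,) similarities s_{qx} of all database examples to the query
    positive_mask: (n,) boolean mask of the positive set P_q
    """
    # rank r(q, x) = 1 + |{z : s_{qz} > s_{qx}}|  (Eq. 2; z == x contributes 0)
    ranks = 1 + (sims.unsqueeze(0) > sims.unsqueeze(1)).sum(dim=1)
    # H(k - r) = 1 exactly when r <= k  (Eq. 1)
    hits = ((ranks <= k) & positive_mask).sum()
    return hits.float() / positive_mask.sum()
```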
**Recall@k surrogate loss.** The computation of recall in (3) involves the use of the Heaviside step function. The gradient of the Heaviside step function is a Dirac delta function. Hence, direct optimization of recall with back-propagation is not feasible. A common smooth approximation of the Heaviside step function is provided by the logistic function [21, 22, 28], a sigmoid function $\sigma_{\tau}: \mathbb{R} \to \mathbb{R}$ controlled by temperature $\tau$ , which is given by

$$\sigma_{\tau}(u) = \frac{1}{1 + e^{-\frac{u}{\tau}}},\tag{4}$$

where a large (small) temperature value leads to a worse (better) approximation and denser (sparser) gradients. This approximation is common in the machine learning literature for several tasks [17, 33, 52] and also appears in the approximation of the Average Precision evaluation metric [6], which is used for the same task as ours. By replacing the step function with the sigmoid function, a smooth approximation of recall is obtained as

<span id="page-3-5"></span>

$$\tilde{R}_{\Omega}^{k}(q) = \frac{\sum_{x \in P_{q}} \sigma_{\tau_{1}}(k - 1 - \sum_{\substack{z \in \Omega \\ z \neq x}} \sigma_{\tau_{2}}(s_{qz} - s_{qx}))}{|P_{q}|}, \tag{5}$$

which is differentiable and can be used for training with back-propagation. The two sigmoids have different function domains and, therefore, different temperatures (see Figure 2). The minimized single-query loss for a mini-batch $B$ of size $M=|B|$ and query $q\in B$ is given by

$$L^k(q) = 1 - \tilde{R}^k_{B \setminus q}(q), \tag{6}$$

while the incorporation of multiple values of k is performed in the loss given by

<span id="page-3-7"></span>

$$L^{K}(q) = \frac{1}{|K|} \sum_{k \in K} L^{k}(q). \tag{7}$$

<span id="page-3-2"></span>

Figure 2. The two sigmoid functions which replace the Heaviside step function for counting the positive examples in the short-list of size k (left) and for estimating the rank of examples (right).

<span id="page-3-4"></span><span id="page-3-1"></span>

Figure 3. Gradient magnitude of the sigmoid used to count the positive examples in the short-list of size k versus the rank r (equal to $r_{\Omega}(q,x)$ , see (2)) of a positive example x. It shows how much a positive example is pushed towards lower ranks depending on its current rank. In the case of multiple values for k, the total gradient is equivalent to the sum of the separate ones.

Figure 3 shows the impact of using single or multiple values for k.

All examples in the mini-batch are used as queries, and the average loss over all queries is minimized during training. The proposed loss is referred to as the *Recall@k Surrogate loss*, or RS@k loss for brevity.

To allow for 0 loss when k is smaller than the number of positives (note that exact recall@k is then less than 1 by definition), we slightly modify (5) during training. Instead of dividing by $|P_q|$ , we divide by $\min(k,|P_q|)$ and, consequently, clip values larger than k in the numerator to avoid negative loss values.
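A compact sketch of the surrogate in (5)-(7) for a mini-batch follows; the temperatures and k values shown are illustrative placeholders rather than the paper's settings, and the min(k, |P_q|) clipping modification is omitted for brevity:

```python
import torch

def rs_at_k_loss(emb, labels, ks=(1, 2, 4), tau1=1.0, tau2=0.01):
    """Sketch of the RS@k loss (Eqs. 5-7) on a batch of normalized embeddings."""
    M = emb.size(0)
    sims = emb @ emb.t()                                  # pairwise similarities s_{qx}
    eye = torch.eye(M, dtype=torch.bool, device=emb.device)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye  # P_q, excluding q itself

    # diff[q, x, z] = s_{qz} - s_{qx}
    diff = sims.unsqueeze(1) - sims.unsqueeze(2)
    excl = eye.unsqueeze(0) | eye.unsqueeze(1)            # exclude z == x and z == q
    # smooth rank: 1 + sum_z sigma_{tau2}(s_{qz} - s_{qx})  (Eq. 2 smoothed)
    smooth_rank = 1 + (torch.sigmoid(diff / tau2) * (~excl).float()).sum(dim=2)

    loss = 0.0
    for k in ks:
        inside = torch.sigmoid((k - smooth_rank) / tau1)  # smooth count of ranks <= k
        r_k = (inside * pos.float()).sum(dim=1) / pos.sum(dim=1).clamp(min=1)
        loss = loss + (1 - r_k).mean()                    # L^k averaged over queries, Eq. (6)
    return loss / len(ks)                                 # Eq. (7)
```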
**Similarity mixup (SiMix).** Given the original batch B, a virtual batch $\hat{B}$ is created by mixing all pairs of positive examples in the original batch. Embeddings of examples $x \in B$ and $z \in B$ are used to generate the mixed embedding

$$\mathbf{v}_{xz\alpha} = \alpha \mathbf{x} + (1 - \alpha)\mathbf{z} \quad | \quad \alpha \sim U(0, 1), \tag{8}$$

for a virtual example that is denoted by $xz\alpha\in \hat{B}$ . The similarity of an original example $w\in B$ to the virtual example $xz\alpha\in \hat{B}$ is given by

<span id="page-3-6"></span>

$$s(w, xz\alpha) = \mathbf{w}^{\mathsf{T}} \mathbf{v}_{xz\alpha} = \alpha s_{wx} + (1 - \alpha) s_{wz}, \tag{9}$$

<span id="page-4-7"></span>where the original and virtual examples can be the query and database examples, respectively, or vice versa. In case both examples are virtual, e.g. $xz\alpha_1 \in \hat{B}$ used as a query and $yw\alpha_2 \in \hat{B}$ as a part of the database, their similarity is given by

$$\begin{aligned}
s(xz\alpha_{1}, yw\alpha_{2}) &= \mathbf{v}_{xz\alpha_{1}}^{\top} \mathbf{v}_{yw\alpha_{2}} \\
&= \alpha_{1}\alpha_{2}s_{xy} + (1 - \alpha_{1})(1 - \alpha_{2})s_{zw} \\
&\phantom{=} + \alpha_{1}(1 - \alpha_{2})s_{xw} + (1 - \alpha_{1})\alpha_{2}s_{zy}.
\end{aligned} \tag{10}$$

The pairwise similarities that appear on the right-hand side of the previous formulas, e.g. $s_{wx}$ and $s_{wz}$ in (9), are computed from the embeddings of the original, non-virtual examples and are also required for the computation of RS@k without any virtual examples. Therefore, the mini-batch is expanded to $B \cup \hat{B}$ by adding virtual examples without the need for explicit construction of the corresponding embeddings or computation of the similarity via dot product; simple mixing of the corresponding pairwise scalar similarities is enough. SiMix reduces to mixing pairwise similarities due to the lack of re-normalization of the mixed embeddings, which is different from existing practice in prior work [15, 16, 24, 62] and brings training efficiency benefits.
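A sketch of the similarity-space expansion (original-to-virtual part of Eqs. (8)-(9); the virtual-to-virtual terms of Eq. (10) are omitted for brevity, and all helper names are assumptions):

```python
import torch

def simix_expand(sims, labels):
    """Append one virtual example per positive pair (i, j), realized purely by
    mixing rows of the scalar similarity matrix; no new embeddings are built."""
    lab = labels.tolist()
    pairs = [(i, j) for i in range(len(lab))
             for j in range(i + 1, len(lab)) if lab[i] == lab[j]]
    if not pairs:
        return sims, labels
    alphas = torch.rand(len(pairs))        # alpha ~ U(0, 1) per virtual example
    # s(w, xz_alpha) = alpha * s_{wx} + (1 - alpha) * s_{wz}   (Eq. 9)
    virt = torch.stack([a * sims[i] + (1 - a) * sims[j]
                        for (i, j), a in zip(pairs, alphas)])
    virt_labels = torch.tensor([lab[i] for i, _ in pairs], dtype=labels.dtype)
    return torch.cat([sims, virt], dim=0), torch.cat([labels, virt_labels])
```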
Virtual examples are created only between examples of the same classes and are labeled according to the class of the original examples that are mixed. Virtual examples are used both as queries and as database examples, while mixing is applied to all pairs of positive examples inside a mini-batch.

**Overview.** An overview of the training process with the proposed loss and SiMix is given in Algorithm 1. In case SiMix is not used, lines 11, 13, 14 and 15 are skipped. It is assumed that each image in training is labeled with a class. Mini-batches of size M are generated by randomly sampling m images per class out of M/m sampled classes.
2109.03622/main_diagram/main_diagram.drawio
ADDED
The diff for this file is too large to render. See raw diff
2109.03622/paper_text/intro_method.md
ADDED
@@ -0,0 +1,1138 @@
# Introduction

\vspace{-2mm}
2D human pose estimation is a classical computer vision problem that aims to parse articulated structures of human parts from natural images. With rich and longstanding studies, we have witnessed great successes in single-person pose estimation by convolutional neural networks. It is therefore of great interest to push pose estimation from the single-person to the multi-person configuration.

[!t]
\centering
\includegraphics[width=0.85\linewidth]{figures/teasers/logocap-drawio.pdf}
\vspace{-5mm}
\caption{An illustrative example of multi-person pose estimation by the proposed LOGO-CAP with the HRNet-W32 backbone. For each initial pose obtained by the center-offset regression, LOGO-CAP learns $11\times 11$ local filters for each joint, and then refines the initial keypoints by convolution with the learned kernels: the local filters are learned to refocus the initially less accurate pose keypoints towards better placement.
In more detail, we show an example of an initial center-offset pose that only obtains an OKS of 0.826 due to the misplacement of the keypoints of {\tt the right elbow} and {\tt the right wrist}. We mark the residual vectors between the predictions and the groundtruth with yellow and red dashed lines respectively. LOGO-CAP improves the OKS by 10.4\%. Please see the text for details.

}
\vspace{-5mm}
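As a rough, hypothetical illustration of the refinement step described in the caption, the sketch below convolves a keypoint's local score patch with a learned per-joint kernel and reads off the refined offset; all shapes and names are assumptions, not the paper's implementation:

```python
import torch
import torch.nn.functional as F

def refine_keypoint(score_patch, local_filter):
    """Refine one keypoint inside its 11x11 local window around the initial estimate.

    score_patch:  (1, 1, 11, 11) local keypoint score map
    local_filter: (1, 1, 11, 11) learned kernel for this joint
    """
    response = F.conv2d(score_patch, local_filter, padding=5)  # same-size response map
    idx = response.view(-1).argmax()
    dy, dx = divmod(int(idx), 11)
    return dy - 5, dx - 5  # offset of the refined keypoint from the window center
```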
The problem of multi-person pose estimation from images has been extensively studied in top-down paradigms that formulate the problem as single-person pose estimation on top of an off-the-shelf person detector with an impressively high AP (e.g., 56\% on the COCO-2017 validation dataset).

Although the top-down paradigm has been pushed to a high-performing stage, several problems remain in both efficiency and accuracy due to its dependence on detected person bounding boxes. Motivated by this, we are interested in studying the problem of multi-person pose estimation without incurring the extra prior of bounding boxes, which is conventionally termed bottom-up pose estimation.
[t!]
[Figure: COCO Keypoints AP (axis ticks 60.0, 65.0, 70.0, 88.9) versus inference time in ms (axis ticks 50, 100, 150, 200, $\geq$ 500) for competing methods; the verbatim PGF drawing commands are omitted.]
|
| 471 |
+
\pgfpathlineto{\pgfqpoint{0.104167in}{-0.104167in}}
|
| 472 |
+
\pgfpathlineto{\pgfqpoint{0.104167in}{0.104167in}}
|
| 473 |
+
\pgfpathlineto{\pgfqpoint{-0.104167in}{0.104167in}}
|
| 474 |
+
\pgfpathclose
|
| 475 |
+
\pgfusepath{stroke,fill}
|
| 476 |
+
}
|
| 477 |
+
|
| 478 |
+
\pgfsys@transformshift{1.887099in}{1.779903in}
|
| 479 |
+
\pgfsys@useobject{currentmarker}{}
|
| 480 |
+
|
| 481 |
+
\pgfsys@transformshift{1.972372in}{2.685166in}
|
| 482 |
+
\pgfsys@useobject{currentmarker}{}
|
| 483 |
+
|
| 484 |
+
\pgfsys@transformshift{2.490933in}{2.937798in}
|
| 485 |
+
\pgfsys@useobject{currentmarker}{}
|
| 486 |
+
|
| 487 |
+
\pgfsys@transformshift{3.862315in}{3.316745in}
|
| 488 |
+
\pgfsys@useobject{currentmarker}{}
|
| 489 |
+
|
| 490 |
+
\pgfpathrectangle{\pgfqpoint{1.140093in}{0.790429in}}{\pgfqpoint{6.000000in}{4.000000in}}
|
| 491 |
+
\pgfusepath{clip}
|
| 492 |
+
\pgfsetrectcap
|
| 493 |
+
\pgfsetroundjoin
|
| 494 |
+
\pgfsetlinewidth{4.015000pt}
|
| 495 |
+
\definecolor{currentstroke}{rgb}{0.172549,0.627451,0.172549}
|
| 496 |
+
\pgfsetstrokecolor{currentstroke}
|
| 497 |
+
\pgfsetdash{}{0pt}
|
| 498 |
+
\pgfpathmoveto{\pgfqpoint{4.495183in}{1.548324in}}
|
| 499 |
+
\pgfpathlineto{\pgfqpoint{5.089341in}{2.200955in}}
|
| 500 |
+
\pgfpathlineto{\pgfqpoint{5.731973in}{2.558850in}}
|
| 501 |
+
\pgfusepath{stroke}
|
| 502 |
+
|
| 503 |
+
\pgfpathrectangle{\pgfqpoint{1.140093in}{0.790429in}}{\pgfqpoint{6.000000in}{4.000000in}}
|
| 504 |
+
\pgfusepath{clip}
|
| 505 |
+
\pgfsetbuttcap
|
| 506 |
+
\pgfsetmiterjoin
|
| 507 |
+
\definecolor{currentfill}{rgb}{0.172549,0.627451,0.172549}
|
| 508 |
+
\pgfsetfillcolor{currentfill}
|
| 509 |
+
\pgfsetlinewidth{1.003750pt}
|
| 510 |
+
\definecolor{currentstroke}{rgb}{0.172549,0.627451,0.172549}
|
| 511 |
+
\pgfsetstrokecolor{currentstroke}
|
| 512 |
+
\pgfsetdash{}{0pt}
|
| 513 |
+
\pgfsys@defobject{currentmarker}{\pgfqpoint{-0.099068in}{-0.084273in}}{\pgfqpoint{0.099068in}{0.104167in}}{
|
| 514 |
+
\pgfpathmoveto{\pgfqpoint{0.000000in}{0.104167in}}
|
| 515 |
+
\pgfpathlineto{\pgfqpoint{-0.099068in}{0.032189in}}
|
| 516 |
+
\pgfpathlineto{\pgfqpoint{-0.061228in}{-0.084273in}}
|
| 517 |
+
\pgfpathlineto{\pgfqpoint{0.061228in}{-0.084273in}}
|
| 518 |
+
\pgfpathlineto{\pgfqpoint{0.099068in}{0.032189in}}
|
| 519 |
+
\pgfpathclose
|
| 520 |
+
\pgfusepath{stroke,fill}
|
| 521 |
+
}
|
| 522 |
+
|
| 523 |
+
\pgfsys@transformshift{4.495183in}{1.548324in}
|
| 524 |
+
\pgfsys@useobject{currentmarker}{}
|
| 525 |
+
|
| 526 |
+
\pgfsys@transformshift{5.089341in}{2.200955in}
|
| 527 |
+
\pgfsys@useobject{currentmarker}{}
|
| 528 |
+
|
| 529 |
+
\pgfsys@transformshift{5.731973in}{2.558850in}
|
| 530 |
+
\pgfsys@useobject{currentmarker}{}
|
| 531 |
+
|
| 532 |
+
\pgfpathrectangle{\pgfqpoint{1.140093in}{0.790429in}}{\pgfqpoint{6.000000in}{4.000000in}}
|
| 533 |
+
\pgfusepath{clip}
|
| 534 |
+
\pgfsetbuttcap
|
| 535 |
+
\pgfsetmiterjoin
|
| 536 |
+
\definecolor{currentfill}{rgb}{0.839216,0.152941,0.156863}
|
| 537 |
+
\pgfsetfillcolor{currentfill}
|
| 538 |
+
\pgfsetlinewidth{1.003750pt}
|
| 539 |
+
\definecolor{currentstroke}{rgb}{0.839216,0.152941,0.156863}
|
| 540 |
+
\pgfsetstrokecolor{currentstroke}
|
| 541 |
+
\pgfsetdash{}{0pt}
|
| 542 |
+
\pgfsys@defobject{currentmarker}{\pgfqpoint{-0.099068in}{-0.084273in}}{\pgfqpoint{0.099068in}{0.104167in}}{
|
| 543 |
+
\pgfpathmoveto{\pgfqpoint{0.000000in}{0.104167in}}
|
| 544 |
+
\pgfpathlineto{\pgfqpoint{-0.099068in}{0.032189in}}
|
| 545 |
+
\pgfpathlineto{\pgfqpoint{-0.061228in}{-0.084273in}}
|
| 546 |
+
\pgfpathlineto{\pgfqpoint{0.061228in}{-0.084273in}}
|
| 547 |
+
\pgfpathlineto{\pgfqpoint{0.099068in}{0.032189in}}
|
| 548 |
+
\pgfpathclose
|
| 549 |
+
\pgfusepath{stroke,fill}
|
| 550 |
+
}
|
| 551 |
+
|
| 552 |
+
\pgfsys@transformshift{3.714561in}{1.843061in}
|
| 553 |
+
\pgfsys@useobject{currentmarker}{}
|
| 554 |
+
|
| 555 |
+
\pgfpathrectangle{\pgfqpoint{1.140093in}{0.790429in}}{\pgfqpoint{6.000000in}{4.000000in}}
|
| 556 |
+
\pgfusepath{clip}
|
| 557 |
+
\pgfsetbuttcap
|
| 558 |
+
\pgfsetbeveljoin
|
| 559 |
+
\definecolor{currentfill}{rgb}{0.580392,0.403922,0.741176}
|
| 560 |
+
\pgfsetfillcolor{currentfill}
|
| 561 |
+
\pgfsetlinewidth{1.003750pt}
|
| 562 |
+
\definecolor{currentstroke}{rgb}{0.580392,0.403922,0.741176}
|
| 563 |
+
\pgfsetstrokecolor{currentstroke}
|
| 564 |
+
\pgfsetdash{}{0pt}
|
| 565 |
+
\pgfsys@defobject{currentmarker}{\pgfqpoint{-0.099068in}{-0.084273in}}{\pgfqpoint{0.099068in}{0.104167in}}{
|
| 566 |
+
\pgfpathmoveto{\pgfqpoint{0.000000in}{0.104167in}}
|
| 567 |
+
\pgfpathlineto{\pgfqpoint{-0.023387in}{0.032189in}}
|
| 568 |
+
\pgfpathlineto{\pgfqpoint{-0.099068in}{0.032189in}}
|
| 569 |
+
\pgfpathlineto{\pgfqpoint{-0.037841in}{-0.012295in}}
|
| 570 |
+
\pgfpathlineto{\pgfqpoint{-0.061228in}{-0.084273in}}
|
| 571 |
+
\pgfpathlineto{\pgfqpoint{-0.000000in}{-0.039788in}}
|
| 572 |
+
\pgfpathlineto{\pgfqpoint{0.061228in}{-0.084273in}}
|
| 573 |
+
\pgfpathlineto{\pgfqpoint{0.037841in}{-0.012295in}}
|
| 574 |
+
\pgfpathlineto{\pgfqpoint{0.099068in}{0.032189in}}
|
| 575 |
+
\pgfpathlineto{\pgfqpoint{0.023387in}{0.032189in}}
|
| 576 |
+
\pgfpathclose
|
| 577 |
+
\pgfusepath{stroke,fill}
|
| 578 |
+
}
|
| 579 |
+
|
| 580 |
+
\pgfsys@transformshift{6.640093in}{2.495692in}
|
| 581 |
+
\pgfsys@useobject{currentmarker}{}
|
| 582 |
+
|
| 583 |
+
\pgfpathrectangle{\pgfqpoint{1.140093in}{0.790429in}}{\pgfqpoint{6.000000in}{4.000000in}}
|
| 584 |
+
\pgfusepath{clip}
|
| 585 |
+
\pgfsetbuttcap
|
| 586 |
+
\pgfsetroundjoin
|
| 587 |
+
\definecolor{currentfill}{rgb}{0.549020,0.337255,0.294118}
|
| 588 |
+
\pgfsetfillcolor{currentfill}
|
| 589 |
+
\pgfsetlinewidth{1.003750pt}
|
| 590 |
+
\definecolor{currentstroke}{rgb}{0.549020,0.337255,0.294118}
|
| 591 |
+
\pgfsetstrokecolor{currentstroke}
|
| 592 |
+
\pgfsetdash{}{0pt}
|
| 593 |
+
\pgfsys@defobject{currentmarker}{\pgfqpoint{-0.104167in}{-0.104167in}}{\pgfqpoint{0.104167in}{0.104167in}}{
|
| 594 |
+
\pgfpathmoveto{\pgfqpoint{-0.104167in}{-0.104167in}}
|
| 595 |
+
\pgfpathlineto{\pgfqpoint{0.104167in}{0.104167in}}
|
| 596 |
+
\pgfpathmoveto{\pgfqpoint{-0.104167in}{0.104167in}}
|
| 597 |
+
\pgfpathlineto{\pgfqpoint{0.104167in}{-0.104167in}}
|
| 598 |
+
\pgfusepath{stroke,fill}
|
| 599 |
+
}
|
| 600 |
+
|
| 601 |
+
\pgfsys@transformshift{1.547500in}{1.022008in}
|
| 602 |
+
\pgfsys@useobject{currentmarker}{}
|
| 603 |
+
|
| 604 |
+
\pgfpathrectangle{\pgfqpoint{1.140093in}{0.790429in}}{\pgfqpoint{6.000000in}{4.000000in}}
|
| 605 |
+
\pgfusepath{clip}
|
| 606 |
+
\pgfsetbuttcap
|
| 607 |
+
\pgfsetroundjoin
|
| 608 |
+
\pgfsetlinewidth{1.505625pt}
|
| 609 |
+
\definecolor{currentstroke}{rgb}{0.000000,0.501961,0.000000}
|
| 610 |
+
\pgfsetstrokecolor{currentstroke}
|
| 611 |
+
\pgfsetdash{{5.550000pt}{2.400000pt}}{0.000000pt}
|
| 612 |
+
\pgfpathmoveto{\pgfqpoint{1.126204in}{1.022008in}}
|
| 613 |
+
\pgfpathlineto{\pgfqpoint{1.155399in}{1.022008in}}
|
| 614 |
+
\pgfpathlineto{\pgfqpoint{1.410501in}{1.022008in}}
|
| 615 |
+
\pgfpathlineto{\pgfqpoint{1.665603in}{1.022008in}}
|
| 616 |
+
\pgfpathlineto{\pgfqpoint{1.920705in}{1.022008in}}
|
| 617 |
+
\pgfpathlineto{\pgfqpoint{2.175807in}{1.022008in}}
|
| 618 |
+
\pgfpathlineto{\pgfqpoint{2.430909in}{1.022008in}}
|
| 619 |
+
\pgfpathlineto{\pgfqpoint{2.686011in}{1.022008in}}
|
| 620 |
+
\pgfpathlineto{\pgfqpoint{2.941113in}{1.022008in}}
|
| 621 |
+
\pgfpathlineto{\pgfqpoint{3.196216in}{1.022008in}}
|
| 622 |
+
\pgfpathlineto{\pgfqpoint{3.451318in}{1.022008in}}
|
| 623 |
+
\pgfpathlineto{\pgfqpoint{3.706420in}{1.022008in}}
|
| 624 |
+
\pgfpathlineto{\pgfqpoint{3.961522in}{1.022008in}}
|
| 625 |
+
\pgfpathlineto{\pgfqpoint{4.216624in}{1.022008in}}
|
| 626 |
+
\pgfpathlineto{\pgfqpoint{4.471726in}{1.022008in}}
|
| 627 |
+
\pgfpathlineto{\pgfqpoint{4.726828in}{1.022008in}}
|
| 628 |
+
\pgfpathlineto{\pgfqpoint{4.981930in}{1.022008in}}
|
| 629 |
+
\pgfpathlineto{\pgfqpoint{5.237032in}{1.022008in}}
|
| 630 |
+
\pgfpathlineto{\pgfqpoint{5.492134in}{1.022008in}}
|
| 631 |
+
\pgfpathlineto{\pgfqpoint{5.747236in}{1.022008in}}
|
| 632 |
+
\pgfpathlineto{\pgfqpoint{6.002338in}{1.022008in}}
|
| 633 |
+
\pgfpathlineto{\pgfqpoint{6.257440in}{1.022008in}}
|
| 634 |
+
\pgfpathlineto{\pgfqpoint{6.512542in}{1.022008in}}
|
| 635 |
+
\pgfpathlineto{\pgfqpoint{6.767644in}{1.022008in}}
|
| 636 |
+
\pgfpathlineto{\pgfqpoint{7.022746in}{1.022008in}}
|
| 637 |
+
\pgfpathlineto{\pgfqpoint{7.153982in}{1.022008in}}
|
| 638 |
+
\pgfusepath{stroke}
|
| 639 |
+
|
| 640 |
+
\pgfpathrectangle{\pgfqpoint{1.140093in}{0.790429in}}{\pgfqpoint{6.000000in}{4.000000in}}
|
| 641 |
+
\pgfusepath{clip}
|
| 642 |
+
\pgfsetbuttcap
|
| 643 |
+
\pgfsetroundjoin
|
| 644 |
+
\pgfsetlinewidth{1.505625pt}
|
| 645 |
+
\definecolor{currentstroke}{rgb}{1.000000,0.000000,0.000000}
|
| 646 |
+
\pgfsetstrokecolor{currentstroke}
|
| 647 |
+
\pgfsetdash{{5.550000pt}{2.400000pt}}{0.000000pt}
|
| 648 |
+
\pgfpathmoveto{\pgfqpoint{1.126204in}{4.579903in}}
|
| 649 |
+
\pgfpathlineto{\pgfqpoint{1.155399in}{4.579903in}}
|
| 650 |
+
\pgfpathlineto{\pgfqpoint{1.410501in}{4.579903in}}
|
| 651 |
+
\pgfpathlineto{\pgfqpoint{1.665603in}{4.579903in}}
|
| 652 |
+
\pgfpathlineto{\pgfqpoint{1.920705in}{4.579903in}}
|
| 653 |
+
\pgfpathlineto{\pgfqpoint{2.175807in}{4.579903in}}
|
| 654 |
+
\pgfpathlineto{\pgfqpoint{2.430909in}{4.579903in}}
|
| 655 |
+
\pgfpathlineto{\pgfqpoint{2.686011in}{4.579903in}}
|
| 656 |
+
\pgfpathlineto{\pgfqpoint{2.941113in}{4.579903in}}
|
| 657 |
+
\pgfpathlineto{\pgfqpoint{3.196216in}{4.579903in}}
|
| 658 |
+
\pgfpathlineto{\pgfqpoint{3.451318in}{4.579903in}}
|
| 659 |
+
\pgfpathlineto{\pgfqpoint{3.706420in}{4.579903in}}
|
| 660 |
+
\pgfpathlineto{\pgfqpoint{3.961522in}{4.579903in}}
|
| 661 |
+
\pgfpathlineto{\pgfqpoint{4.216624in}{4.579903in}}
|
| 662 |
+
\pgfpathlineto{\pgfqpoint{4.471726in}{4.579903in}}
|
| 663 |
+
\pgfpathlineto{\pgfqpoint{4.726828in}{4.579903in}}
|
| 664 |
+
\pgfpathlineto{\pgfqpoint{4.981930in}{4.579903in}}
|
| 665 |
+
\pgfpathlineto{\pgfqpoint{5.237032in}{4.579903in}}
|
| 666 |
+
\pgfpathlineto{\pgfqpoint{5.492134in}{4.579903in}}
|
| 667 |
+
\pgfpathlineto{\pgfqpoint{5.747236in}{4.579903in}}
|
| 668 |
+
\pgfpathlineto{\pgfqpoint{6.002338in}{4.579903in}}
|
| 669 |
+
\pgfpathlineto{\pgfqpoint{6.257440in}{4.579903in}}
|
| 670 |
+
\pgfpathlineto{\pgfqpoint{6.512542in}{4.579903in}}
|
| 671 |
+
\pgfpathlineto{\pgfqpoint{6.767644in}{4.579903in}}
|
| 672 |
+
\pgfpathlineto{\pgfqpoint{7.022746in}{4.579903in}}
|
| 673 |
+
\pgfpathlineto{\pgfqpoint{7.153982in}{4.579903in}}
|
| 674 |
+
\pgfusepath{stroke}
|
| 675 |
+
|
| 676 |
+
\pgfpathrectangle{\pgfqpoint{1.140093in}{0.790429in}}{\pgfqpoint{6.000000in}{4.000000in}}
|
| 677 |
+
\pgfusepath{clip}
|
| 678 |
+
\pgfsetbuttcap
|
| 679 |
+
\pgfsetroundjoin
|
| 680 |
+
\pgfsetlinewidth{1.505625pt}
|
| 681 |
+
\definecolor{currentstroke}{rgb}{1.000000,0.647059,0.000000}
|
| 682 |
+
\pgfsetstrokecolor{currentstroke}
|
| 683 |
+
\pgfsetdash{{5.550000pt}{2.400000pt}}{0.000000pt}
|
| 684 |
+
\pgfpathmoveto{\pgfqpoint{1.431760in}{2.243061in}}
|
| 685 |
+
\pgfpathlineto{\pgfqpoint{1.887099in}{1.779903in}}
|
| 686 |
+
\pgfusepath{stroke}
|
| 687 |
+
|
| 688 |
+
\pgfpathrectangle{\pgfqpoint{1.140093in}{0.790429in}}{\pgfqpoint{6.000000in}{4.000000in}}
|
| 689 |
+
\pgfusepath{clip}
|
| 690 |
+
\pgfsetbuttcap
|
| 691 |
+
\pgfsetroundjoin
|
| 692 |
+
\pgfsetlinewidth{1.505625pt}
|
| 693 |
+
\definecolor{currentstroke}{rgb}{1.000000,0.647059,0.000000}
|
| 694 |
+
\pgfsetstrokecolor{currentstroke}
|
| 695 |
+
\pgfsetdash{{5.550000pt}{2.400000pt}}{0.000000pt}
|
| 696 |
+
\pgfpathmoveto{\pgfqpoint{1.597823in}{3.022008in}}
|
| 697 |
+
\pgfpathlineto{\pgfqpoint{1.972372in}{2.685166in}}
|
| 698 |
+
\pgfusepath{stroke}
|
| 699 |
+
|
| 700 |
+
\pgfpathrectangle{\pgfqpoint{1.140093in}{0.790429in}}{\pgfqpoint{6.000000in}{4.000000in}}
|
| 701 |
+
\pgfusepath{clip}
|
| 702 |
+
\pgfsetbuttcap
|
| 703 |
+
\pgfsetroundjoin
|
| 704 |
+
\pgfsetlinewidth{1.505625pt}
|
| 705 |
+
\definecolor{currentstroke}{rgb}{1.000000,0.647059,0.000000}
|
| 706 |
+
\pgfsetstrokecolor{currentstroke}
|
| 707 |
+
\pgfsetdash{{5.550000pt}{2.400000pt}}{0.000000pt}
|
| 708 |
+
\pgfpathmoveto{\pgfqpoint{2.126204in}{3.232534in}}
|
| 709 |
+
\pgfpathlineto{\pgfqpoint{2.490933in}{2.937798in}}
|
| 710 |
+
\pgfusepath{stroke}
|
| 711 |
+
|
| 712 |
+
\pgfpathrectangle{\pgfqpoint{1.140093in}{0.790429in}}{\pgfqpoint{6.000000in}{4.000000in}}
|
| 713 |
+
\pgfusepath{clip}
|
| 714 |
+
\pgfsetbuttcap
|
| 715 |
+
\pgfsetroundjoin
|
| 716 |
+
\pgfsetlinewidth{1.505625pt}
|
| 717 |
+
\definecolor{currentstroke}{rgb}{1.000000,0.647059,0.000000}
|
| 718 |
+
\pgfsetstrokecolor{currentstroke}
|
| 719 |
+
\pgfsetdash{{5.550000pt}{2.400000pt}}{0.000000pt}
|
| 720 |
+
\pgfpathmoveto{\pgfqpoint{3.199082in}{3.569377in}}
|
| 721 |
+
\pgfpathlineto{\pgfqpoint{3.862315in}{3.316745in}}
|
| 722 |
+
\pgfusepath{stroke}
|
| 723 |
+
|
| 724 |
+
\pgfpathrectangle{\pgfqpoint{1.140093in}{0.790429in}}{\pgfqpoint{6.000000in}{4.000000in}}
|
| 725 |
+
\pgfusepath{clip}
|
| 726 |
+
\pgfsetrectcap
|
| 727 |
+
\pgfsetroundjoin
|
| 728 |
+
\pgfsetlinewidth{1.505625pt}
|
| 729 |
+
\definecolor{currentstroke}{rgb}{1.000000,0.000000,0.000000}
|
| 730 |
+
\pgfsetstrokecolor{currentstroke}
|
| 731 |
+
\pgfsetdash{}{0pt}
|
| 732 |
+
\pgfpathmoveto{\pgfqpoint{1.558317in}{1.022008in}}
|
| 733 |
+
\pgfpathlineto{\pgfqpoint{1.597823in}{3.022008in}}
|
| 734 |
+
\pgfusepath{stroke}
|
| 735 |
+
|
| 736 |
+
\pgfsetrectcap
|
| 737 |
+
\pgfsetmiterjoin
|
| 738 |
+
\pgfsetlinewidth{0.803000pt}
|
| 739 |
+
\definecolor{currentstroke}{rgb}{0.000000,0.000000,0.000000}
|
| 740 |
+
\pgfsetstrokecolor{currentstroke}
|
| 741 |
+
\pgfsetdash{}{0pt}
|
| 742 |
+
\pgfpathmoveto{\pgfqpoint{1.140093in}{0.790429in}}
|
| 743 |
+
\pgfpathlineto{\pgfqpoint{1.140093in}{4.790429in}}
|
| 744 |
+
\pgfusepath{stroke}
|
| 745 |
+
|
| 746 |
+
\pgfsetrectcap
|
| 747 |
+
\pgfsetmiterjoin
|
| 748 |
+
\pgfsetlinewidth{0.803000pt}
|
| 749 |
+
\definecolor{currentstroke}{rgb}{0.000000,0.000000,0.000000}
|
| 750 |
+
\pgfsetstrokecolor{currentstroke}
|
| 751 |
+
\pgfsetdash{}{0pt}
|
| 752 |
+
\pgfpathmoveto{\pgfqpoint{7.140093in}{0.790429in}}
|
| 753 |
+
\pgfpathlineto{\pgfqpoint{7.140093in}{4.790429in}}
|
| 754 |
+
\pgfusepath{stroke}
|
| 755 |
+
|
| 756 |
+
\pgfsetrectcap
|
| 757 |
+
\pgfsetmiterjoin
|
| 758 |
+
\pgfsetlinewidth{0.803000pt}
|
| 759 |
+
\definecolor{currentstroke}{rgb}{0.000000,0.000000,0.000000}
|
| 760 |
+
\pgfsetstrokecolor{currentstroke}
|
| 761 |
+
\pgfsetdash{}{0pt}
|
| 762 |
+
\pgfpathmoveto{\pgfqpoint{1.140093in}{0.790429in}}
|
| 763 |
+
\pgfpathlineto{\pgfqpoint{7.140093in}{0.790429in}}
|
| 764 |
+
\pgfusepath{stroke}
|
| 765 |
+
|
| 766 |
+
\pgfsetrectcap
|
| 767 |
+
\pgfsetmiterjoin
|
| 768 |
+
\pgfsetlinewidth{0.803000pt}
|
| 769 |
+
\definecolor{currentstroke}{rgb}{0.000000,0.000000,0.000000}
|
| 770 |
+
\pgfsetstrokecolor{currentstroke}
|
| 771 |
+
\pgfsetdash{}{0pt}
|
| 772 |
+
\pgfpathmoveto{\pgfqpoint{1.140093in}{4.790429in}}
|
| 773 |
+
\pgfpathlineto{\pgfqpoint{7.140093in}{4.790429in}}
|
| 774 |
+
\pgfusepath{stroke}
|
| 775 |
+
|
| 776 |
+
\definecolor{textcolor}{rgb}{0.000000,0.000000,0.000000}
|
| 777 |
+
\pgfsetstrokecolor{textcolor}
|
| 778 |
+
\pgfsetfillcolor{textcolor}
|
| 779 |
+
\pgftext[x=2.640093in,y=4.579903in,left,base]{\color{textcolor}\sffamily\fontsize{18.000000}{21.600000}\selectfont An Empirical Upper Bound}
|
| 780 |
+
|
| 781 |
+
\definecolor{textcolor}{rgb}{0.000000,0.000000,0.000000}
|
| 782 |
+
\pgfsetstrokecolor{textcolor}
|
| 783 |
+
\pgfsetfillcolor{textcolor}
|
| 784 |
+
\pgftext[x=1.181760in,y=2.074640in,left,base]{\color{textcolor}\sffamily\fontsize{10.000000}{12.000000}\selectfont W32-384}
|
| 785 |
+
|
| 786 |
+
\definecolor{textcolor}{rgb}{0.000000,0.000000,0.000000}
|
| 787 |
+
\pgfsetstrokecolor{textcolor}
|
| 788 |
+
\pgfsetfillcolor{textcolor}
|
| 789 |
+
\pgftext[x=1.247823in,y=3.127271in,left,base]{\color{textcolor}\sffamily\fontsize{10.000000}{12.000000}\selectfont W32-512}
|
| 790 |
+
|
| 791 |
+
\definecolor{textcolor}{rgb}{0.000000,0.000000,0.000000}
|
| 792 |
+
\pgfsetstrokecolor{textcolor}
|
| 793 |
+
\pgfsetfillcolor{textcolor}
|
| 794 |
+
\pgftext[x=1.826204in,y=3.358850in,left,base]{\color{textcolor}\sffamily\fontsize{10.000000}{12.000000}\selectfont W48-512}
|
| 795 |
+
|
| 796 |
+
\definecolor{textcolor}{rgb}{0.000000,0.000000,0.000000}
|
| 797 |
+
\pgfsetstrokecolor{textcolor}
|
| 798 |
+
\pgfsetfillcolor{textcolor}
|
| 799 |
+
\pgftext[x=3.299082in,y=3.569377in,left,base]{\color{textcolor}\sffamily\fontsize{10.000000}{12.000000}\selectfont W48-640}
|
| 800 |
+
|
| 801 |
+
\pgfsetbuttcap
|
| 802 |
+
\pgfsetmiterjoin
|
| 803 |
+
\pgfsetlinewidth{0.000000pt}
|
| 804 |
+
\definecolor{currentstroke}{rgb}{0.800000,0.800000,0.800000}
|
| 805 |
+
\pgfsetstrokecolor{currentstroke}
|
| 806 |
+
\pgfsetstrokeopacity{0.000000}
|
| 807 |
+
\pgfsetdash{}{0pt}
|
| 808 |
+
\pgfpathmoveto{\pgfqpoint{2.060945in}{3.650941in}}
|
| 809 |
+
\pgfpathlineto{\pgfqpoint{7.013704in}{3.650941in}}
|
| 810 |
+
\pgfpathquadraticcurveto{\pgfqpoint{7.049815in}{3.650941in}}{\pgfqpoint{7.049815in}{3.687052in}}
|
| 811 |
+
\pgfpathlineto{\pgfqpoint{7.049815in}{4.464040in}}
|
| 812 |
+
\pgfpathquadraticcurveto{\pgfqpoint{7.049815in}{4.500151in}}{\pgfqpoint{7.013704in}{4.500151in}}
|
| 813 |
+
\pgfpathlineto{\pgfqpoint{2.060945in}{4.500151in}}
|
| 814 |
+
\pgfpathquadraticcurveto{\pgfqpoint{2.024834in}{4.500151in}}{\pgfqpoint{2.024834in}{4.464040in}}
|
| 815 |
+
\pgfpathlineto{\pgfqpoint{2.024834in}{3.687052in}}
|
| 816 |
+
\pgfpathquadraticcurveto{\pgfqpoint{2.024834in}{3.650941in}}{\pgfqpoint{2.060945in}{3.650941in}}
|
| 817 |
+
\pgfpathclose
|
| 818 |
+
\pgfusepath{}
|
| 819 |
+
|
| 820 |
+
\pgfsetrectcap
|
| 821 |
+
\pgfsetroundjoin
|
| 822 |
+
\pgfsetlinewidth{4.015000pt}
|
| 823 |
+
\definecolor{currentstroke}{rgb}{0.121569,0.466667,0.705882}
|
| 824 |
+
\pgfsetstrokecolor{currentstroke}
|
| 825 |
+
\pgfsetdash{}{0pt}
|
| 826 |
+
\pgfpathmoveto{\pgfqpoint{2.097057in}{4.353944in}}
|
| 827 |
+
\pgfpathlineto{\pgfqpoint{2.458168in}{4.353944in}}
|
| 828 |
+
\pgfusepath{stroke}
|
| 829 |
+
|
| 830 |
+
\pgfsetbuttcap
|
| 831 |
+
\pgfsetroundjoin
|
| 832 |
+
\definecolor{currentfill}{rgb}{0.121569,0.466667,0.705882}
|
| 833 |
+
\pgfsetfillcolor{currentfill}
|
| 834 |
+
\pgfsetlinewidth{1.003750pt}
|
| 835 |
+
\definecolor{currentstroke}{rgb}{0.121569,0.466667,0.705882}
|
| 836 |
+
\pgfsetstrokecolor{currentstroke}
|
| 837 |
+
\pgfsetdash{}{0pt}
|
| 838 |
+
\pgfsys@defobject{currentmarker}{\pgfqpoint{-0.052083in}{-0.052083in}}{\pgfqpoint{0.052083in}{0.052083in}}{
|
| 839 |
+
\pgfpathmoveto{\pgfqpoint{0.000000in}{-0.052083in}}
|
| 840 |
+
\pgfpathcurveto{\pgfqpoint{0.013813in}{-0.052083in}}{\pgfqpoint{0.027061in}{-0.046596in}}{\pgfqpoint{0.036828in}{-0.036828in}}
|
| 841 |
+
\pgfpathcurveto{\pgfqpoint{0.046596in}{-0.027061in}}{\pgfqpoint{0.052083in}{-0.013813in}}{\pgfqpoint{0.052083in}{0.000000in}}
|
| 842 |
+
\pgfpathcurveto{\pgfqpoint{0.052083in}{0.013813in}}{\pgfqpoint{0.046596in}{0.027061in}}{\pgfqpoint{0.036828in}{0.036828in}}
|
| 843 |
+
\pgfpathcurveto{\pgfqpoint{0.027061in}{0.046596in}}{\pgfqpoint{0.013813in}{0.052083in}}{\pgfqpoint{0.000000in}{0.052083in}}
|
| 844 |
+
\pgfpathcurveto{\pgfqpoint{-0.013813in}{0.052083in}}{\pgfqpoint{-0.027061in}{0.046596in}}{\pgfqpoint{-0.036828in}{0.036828in}}
|
| 845 |
+
\pgfpathcurveto{\pgfqpoint{-0.046596in}{0.027061in}}{\pgfqpoint{-0.052083in}{0.013813in}}{\pgfqpoint{-0.052083in}{0.000000in}}
|
| 846 |
+
\pgfpathcurveto{\pgfqpoint{-0.052083in}{-0.013813in}}{\pgfqpoint{-0.046596in}{-0.027061in}}{\pgfqpoint{-0.036828in}{-0.036828in}}
|
| 847 |
+
\pgfpathcurveto{\pgfqpoint{-0.027061in}{-0.046596in}}{\pgfqpoint{-0.013813in}{-0.052083in}}{\pgfqpoint{0.000000in}{-0.052083in}}
|
| 848 |
+
\pgfpathclose
|
| 849 |
+
\pgfusepath{stroke,fill}
|
| 850 |
+
}
|
| 851 |
+
|
| 852 |
+
\pgfsys@transformshift{2.277612in}{4.353944in}
|
| 853 |
+
\pgfsys@useobject{currentmarker}{}
|
| 854 |
+
|
| 855 |
+
\definecolor{textcolor}{rgb}{0.000000,0.000000,0.000000}
|
| 856 |
+
\pgfsetstrokecolor{textcolor}
|
| 857 |
+
\pgfsetfillcolor{textcolor}
|
| 858 |
+
\pgftext[x=2.602612in,y=4.290749in,left,base]{\color{textcolor}\sffamily\fontsize{13.000000}{15.600000}\selectfont LOGO-CAP (Ours)}
|
| 859 |
+
|
| 860 |
+
\pgfsetrectcap
|
| 861 |
+
\pgfsetroundjoin
|
| 862 |
+
\pgfsetlinewidth{4.015000pt}
|
| 863 |
+
\definecolor{currentstroke}{rgb}{1.000000,0.498039,0.054902}
|
| 864 |
+
\pgfsetstrokecolor{currentstroke}
|
| 865 |
+
\pgfsetdash{}{0pt}
|
| 866 |
+
\pgfpathmoveto{\pgfqpoint{2.097057in}{4.088929in}}
|
| 867 |
+
\pgfpathlineto{\pgfqpoint{2.458168in}{4.088929in}}
|
| 868 |
+
\pgfusepath{stroke}
|
| 869 |
+
|
| 870 |
+
\pgfsetbuttcap
|
| 871 |
+
\pgfsetmiterjoin
|
| 872 |
+
\definecolor{currentfill}{rgb}{1.000000,0.498039,0.054902}
|
| 873 |
+
\pgfsetfillcolor{currentfill}
|
| 874 |
+
\pgfsetlinewidth{1.003750pt}
|
| 875 |
+
\definecolor{currentstroke}{rgb}{1.000000,0.498039,0.054902}
|
| 876 |
+
\pgfsetstrokecolor{currentstroke}
|
| 877 |
+
\pgfsetdash{}{0pt}
|
| 878 |
+
\pgfsys@defobject{currentmarker}{\pgfqpoint{-0.052083in}{-0.052083in}}{\pgfqpoint{0.052083in}{0.052083in}}{
|
| 879 |
+
\pgfpathmoveto{\pgfqpoint{-0.052083in}{-0.052083in}}
|
| 880 |
+
\pgfpathlineto{\pgfqpoint{0.052083in}{-0.052083in}}
|
| 881 |
+
\pgfpathlineto{\pgfqpoint{0.052083in}{0.052083in}}
|
| 882 |
+
\pgfpathlineto{\pgfqpoint{-0.052083in}{0.052083in}}
|
| 883 |
+
\pgfpathclose
|
| 884 |
+
\pgfusepath{stroke,fill}
|
| 885 |
+
}
|
| 886 |
+
|
| 887 |
+
\pgfsys@transformshift{2.277612in}{4.088929in}
|
| 888 |
+
\pgfsys@useobject{currentmarker}{}
|
| 889 |
+
|
| 890 |
+
\definecolor{textcolor}{rgb}{0.000000,0.000000,0.000000}
|
| 891 |
+
\pgfsetstrokecolor{textcolor}
|
| 892 |
+
\pgfsetfillcolor{textcolor}
|
| 893 |
+
\pgftext[x=2.602612in,y=4.025735in,left,base]{\color{textcolor}\sffamily\fontsize{13.000000}{15.600000}\selectfont DEKR }
|
| 894 |
+
|
| 895 |
+
\pgfsetrectcap
|
| 896 |
+
\pgfsetroundjoin
|
| 897 |
+
\pgfsetlinewidth{4.015000pt}
|
| 898 |
+
\definecolor{currentstroke}{rgb}{0.172549,0.627451,0.172549}
|
| 899 |
+
\pgfsetstrokecolor{currentstroke}
|
| 900 |
+
\pgfsetdash{}{0pt}
|
| 901 |
+
\pgfpathmoveto{\pgfqpoint{2.097057in}{3.823915in}}
|
| 902 |
+
\pgfpathlineto{\pgfqpoint{2.458168in}{3.823915in}}
|
| 903 |
+
\pgfusepath{stroke}
|
| 904 |
+
|
| 905 |
+
\pgfsetbuttcap
|
| 906 |
+
\pgfsetmiterjoin
|
| 907 |
+
\definecolor{currentfill}{rgb}{0.172549,0.627451,0.172549}
|
| 908 |
+
\pgfsetfillcolor{currentfill}
|
| 909 |
+
\pgfsetlinewidth{1.003750pt}
|
| 910 |
+
\definecolor{currentstroke}{rgb}{0.172549,0.627451,0.172549}
|
| 911 |
+
\pgfsetstrokecolor{currentstroke}
|
| 912 |
+
\pgfsetdash{}{0pt}
|
| 913 |
+
\pgfsys@defobject{currentmarker}{\pgfqpoint{-0.049534in}{-0.042136in}}{\pgfqpoint{0.049534in}{0.052083in}}{
|
| 914 |
+
\pgfpathmoveto{\pgfqpoint{0.000000in}{0.052083in}}
|
| 915 |
+
\pgfpathlineto{\pgfqpoint{-0.049534in}{0.016095in}}
|
| 916 |
+
\pgfpathlineto{\pgfqpoint{-0.030614in}{-0.042136in}}
|
| 917 |
+
\pgfpathlineto{\pgfqpoint{0.030614in}{-0.042136in}}
|
| 918 |
+
\pgfpathlineto{\pgfqpoint{0.049534in}{0.016095in}}
|
| 919 |
+
\pgfpathclose
|
| 920 |
+
\pgfusepath{stroke,fill}
|
| 921 |
+
}
|
| 922 |
+
|
| 923 |
+
\pgfsys@transformshift{2.277612in}{3.823915in}
|
| 924 |
+
\pgfsys@useobject{currentmarker}{}
|
| 925 |
+
|
| 926 |
+
\definecolor{textcolor}{rgb}{0.000000,0.000000,0.000000}
|
| 927 |
+
\pgfsetstrokecolor{textcolor}
|
| 928 |
+
\pgfsetfillcolor{textcolor}
|
| 929 |
+
\pgftext[x=2.602612in,y=3.760720in,left,base]{\color{textcolor}\sffamily\fontsize{13.000000}{15.600000}\selectfont PifPaf }
|
| 930 |
+
|
| 931 |
+
\pgfsetbuttcap
|
| 932 |
+
\pgfsetmiterjoin
|
| 933 |
+
\definecolor{currentfill}{rgb}{0.839216,0.152941,0.156863}
|
| 934 |
+
\pgfsetfillcolor{currentfill}
|
| 935 |
+
\pgfsetlinewidth{1.003750pt}
|
| 936 |
+
\definecolor{currentstroke}{rgb}{0.839216,0.152941,0.156863}
|
| 937 |
+
\pgfsetstrokecolor{currentstroke}
|
| 938 |
+
\pgfsetdash{}{0pt}
|
| 939 |
+
\pgfsys@defobject{currentmarker}{\pgfqpoint{-0.049534in}{-0.042136in}}{\pgfqpoint{0.049534in}{0.052083in}}{
|
| 940 |
+
\pgfpathmoveto{\pgfqpoint{0.000000in}{0.052083in}}
|
| 941 |
+
\pgfpathlineto{\pgfqpoint{-0.049534in}{0.016095in}}
|
| 942 |
+
\pgfpathlineto{\pgfqpoint{-0.030614in}{-0.042136in}}
|
| 943 |
+
\pgfpathlineto{\pgfqpoint{0.030614in}{-0.042136in}}
|
| 944 |
+
\pgfpathlineto{\pgfqpoint{0.049534in}{0.016095in}}
|
| 945 |
+
\pgfpathclose
|
| 946 |
+
\pgfusepath{stroke,fill}
|
| 947 |
+
}
|
| 948 |
+
|
| 949 |
+
\pgfsys@transformshift{4.714354in}{4.353944in}
|
| 950 |
+
\pgfsys@useobject{currentmarker}{}
|
| 951 |
+
|
| 952 |
+
\definecolor{textcolor}{rgb}{0.000000,0.000000,0.000000}
|
| 953 |
+
\pgfsetstrokecolor{textcolor}
|
| 954 |
+
\pgfsetfillcolor{textcolor}
|
| 955 |
+
\pgftext[x=5.039354in,y=4.290749in,left,base]{\color{textcolor}\sffamily\fontsize{13.000000}{15.600000}\selectfont CenterNet }
|
| 956 |
+
|
| 957 |
+
\pgfsetbuttcap
|
| 958 |
+
\pgfsetbeveljoin
|
| 959 |
+
\definecolor{currentfill}{rgb}{0.580392,0.403922,0.741176}
|
| 960 |
+
\pgfsetfillcolor{currentfill}
|
| 961 |
+
\pgfsetlinewidth{1.003750pt}
|
| 962 |
+
\definecolor{currentstroke}{rgb}{0.580392,0.403922,0.741176}
|
| 963 |
+
\pgfsetstrokecolor{currentstroke}
|
| 964 |
+
\pgfsetdash{}{0pt}
|
| 965 |
+
\pgfsys@defobject{currentmarker}{\pgfqpoint{-0.049534in}{-0.042136in}}{\pgfqpoint{0.049534in}{0.052083in}}{
|
| 966 |
+
\pgfpathmoveto{\pgfqpoint{0.000000in}{0.052083in}}
|
| 967 |
+
\pgfpathlineto{\pgfqpoint{-0.011693in}{0.016095in}}
|
| 968 |
+
\pgfpathlineto{\pgfqpoint{-0.049534in}{0.016095in}}
|
| 969 |
+
\pgfpathlineto{\pgfqpoint{-0.018920in}{-0.006148in}}
|
| 970 |
+
\pgfpathlineto{\pgfqpoint{-0.030614in}{-0.042136in}}
|
| 971 |
+
\pgfpathlineto{\pgfqpoint{-0.000000in}{-0.019894in}}
|
| 972 |
+
\pgfpathlineto{\pgfqpoint{0.030614in}{-0.042136in}}
|
| 973 |
+
\pgfpathlineto{\pgfqpoint{0.018920in}{-0.006148in}}
|
| 974 |
+
\pgfpathlineto{\pgfqpoint{0.049534in}{0.016095in}}
|
| 975 |
+
\pgfpathlineto{\pgfqpoint{0.011693in}{0.016095in}}
|
| 976 |
+
\pgfpathclose
|
| 977 |
+
\pgfusepath{stroke,fill}
|
| 978 |
+
}
|
| 979 |
+
|
| 980 |
+
\pgfsys@transformshift{4.714354in}{4.088929in}
|
| 981 |
+
\pgfsys@useobject{currentmarker}{}
|
| 982 |
+
|
| 983 |
+
\definecolor{textcolor}{rgb}{0.000000,0.000000,0.000000}
|
| 984 |
+
\pgfsetstrokecolor{textcolor}
|
| 985 |
+
\pgfsetfillcolor{textcolor}
|
| 986 |
+
\pgftext[x=5.039354in,y=4.025735in,left,base]{\color{textcolor}\sffamily\fontsize{13.000000}{15.600000}\selectfont AE+HrHRNet-W32 }
|
| 987 |
+
|
| 988 |
+
\pgfsetbuttcap
|
| 989 |
+
\pgfsetroundjoin
|
| 990 |
+
\definecolor{currentfill}{rgb}{0.549020,0.337255,0.294118}
|
| 991 |
+
\pgfsetfillcolor{currentfill}
|
| 992 |
+
\pgfsetlinewidth{1.003750pt}
|
| 993 |
+
\definecolor{currentstroke}{rgb}{0.549020,0.337255,0.294118}
|
| 994 |
+
\pgfsetstrokecolor{currentstroke}
|
| 995 |
+
\pgfsetdash{}{0pt}
|
| 996 |
+
\pgfsys@defobject{currentmarker}{\pgfqpoint{-0.052083in}{-0.052083in}}{\pgfqpoint{0.052083in}{0.052083in}}{
|
| 997 |
+
\pgfpathmoveto{\pgfqpoint{-0.052083in}{-0.052083in}}
|
| 998 |
+
\pgfpathlineto{\pgfqpoint{0.052083in}{0.052083in}}
|
| 999 |
+
\pgfpathmoveto{\pgfqpoint{-0.052083in}{0.052083in}}
|
| 1000 |
+
\pgfpathlineto{\pgfqpoint{0.052083in}{-0.052083in}}
|
| 1001 |
+
\pgfusepath{stroke,fill}
|
| 1002 |
+
}
|
| 1003 |
+
|
| 1004 |
+
\pgfsys@transformshift{4.714354in}{3.823915in}
|
| 1005 |
+
\pgfsys@useobject{currentmarker}{}
|
| 1006 |
+
|
| 1007 |
+
\definecolor{textcolor}{rgb}{0.000000,0.000000,0.000000}
|
| 1008 |
+
\pgfsetstrokecolor{textcolor}
|
| 1009 |
+
\pgfsetfillcolor{textcolor}
|
| 1010 |
+
\pgftext[x=5.039354in,y=3.760720in,left,base]{\color{textcolor}\sffamily\fontsize{13.000000}{15.600000}\selectfont Baseline-W32}
|
| 1011 |
+
|
| 1012 |
+
\makeatother
|
| 1013 |
+
\endgroup
|
| 1014 |
+
}
|
| 1015 |
+
|
| 1016 |
+
\vspace{-3mm}
|
| 1017 |
+
\caption{Our LOGO-CAP is motivated by a strong empirical observation: a vanilla center-offset baseline with an AP of 60.1 (marked by the \textcolor{green}{green} dashed line) can be improved to an AP of 88.9 (marked by the \textcolor{red}{red} dashed line) by a search scheme within $11\times 11$ local windows. Meanwhile, we illustrate the speed-accuracy comparisons between our LOGO-CAP and prior arts on the COCO val-2017 dataset. W$x$-$Y$ (e.g., W32-384) means that a model uses the HRNet-W$x$ backbone and is tested with images of resolution $Y$ on the short side.
}
\vspace{-4mm}

Bottom-up pose estimation approaches focus on estimating pose parameters directly from a given multi-person image instead of from cropped single-person ones. This poses several challenges in accurately identifying the learned bottom-level keypoints as person parts and grouping them into different individuals, e.g., by learning part affinity fields, part association fields, or associative embeddings. Those grouping approaches are accurate but sophisticated, due to the necessity of ad-hoc grouping/decoding schemes.
Recently, many researchers have attempted to learn center-offsets for pose estimation owing to the intrinsic simplicity and efficiency of this formulation. However, most center-offset approaches suffer from localization inaccuracy caused by the large structural variations of human poses, leading to inferior performance compared with the grouping ones.

In this paper, we are interested in studying bottom-up pose estimation using the center-offset formulation for its simplicity and efficiency. We directly address the aforementioned main challenge of center-offset based multi-person pose estimation. Our proposed approach is motivated by a surprisingly strong empirical observation obtained by analyzing what the fundamental issues are with the vanilla center-offset pose estimation network.

\myparagraph{A Surprisingly Strong Observation.}
Our analyses are based on results on the fully-annotated subset of the COCO val-2017 dataset\footnote{Note that the COCO val-2017 dataset contains many partially-annotated images (with only ground-truth bounding boxes); we use the 2346 testing samples that are fully annotated with keypoints.}.
The vanilla center-offset regression method utilizes HRNet-W32 as the feature backbone to directly predict the keypoint-center heatmap and the offset vectors. As shown in \cref{fig:speed-accuracy}, it obtains $60.1$ average precision (AP), which is not great, but reasonably good. This clearly shows that the pose keypoint centers and the offset vectors can be learned reasonably well. We ask: how bad exactly are the center-offset estimates? We want to know (i) whether some keypoints are truly bad, i.e., far away from the underlying ground-truth (GT) locations, or (ii) whether most of them are already in the close proximity of the GT ones. We observe that the latter is true. To quantitatively characterize this close proximity, instead of directly utilizing the learned offset vectors for human pose estimation, we treat them as keypoint initializations and perform a local window search to compute an empirical upper bound of performance. More specifically, based on the initially predicted human poses, by introducing a local window (e.g., $11\times 11$) centered at each detected keypoint and computing the single-keypoint similarity with the ground-truth keypoint, an empirical upper bound of $88.9$ AP is obtained, which is significantly higher than the state of the art and shows the potential of improving the vanilla center-offset regression paradigm.

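To make the local-window search concrete, the following is a minimal sketch (the function and variable names are ours; it assumes predictions are already matched to ground-truth instances and uses the per-keypoint OKS term from the COCO evaluation toolkit):

```python
import numpy as np

# Per-keypoint sigmas from the COCO keypoint evaluation toolkit.
COCO_SIGMAS = np.array([.26, .25, .25, .35, .35, .79, .79, .72, .72,
                        .62, .62, 1.07, 1.07, .87, .87, .89, .89]) / 10.0

def oracle_refine(pred, gt, area, ks=11):
    """Search a ks x ks window of integer offsets around each predicted
    keypoint and keep the candidate with the highest single-keypoint OKS.

    pred, gt: (17, 2) arrays of (x, y); area: ground-truth instance area.
    """
    r = ks // 2
    dy, dx = np.meshgrid(np.arange(-r, r + 1), np.arange(-r, r + 1), indexing="ij")
    shifts = np.stack([dx.ravel(), dy.ravel()], axis=-1)       # (ks*ks, 2)
    cands = pred[:, None, :] + shifts[None]                    # (17, ks*ks, 2)
    d2 = ((cands - gt[:, None, :]) ** 2).sum(-1)               # squared distances
    var = (2 * COCO_SIGMAS) ** 2                               # OKS normalizers
    oks = np.exp(-d2 / (2 * area * var[:, None] + 1e-9))       # per-candidate OKS
    best = oks.argmax(axis=1)
    return cands[np.arange(17), best]                          # "oracle" keypoints
```
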
\myparagraph{A Direct Solution Leveraging the Observation.} The implication of the observation is significant: it reveals that the fundamental challenge in improving the center-offset approach for pose estimation is to resolve the local misplacement.
To that end, a straightforward way is simply to learn a local heatmap (e.g., $11\times 11$) for each human pose keypoint based on the learned center and offset vectors, and then to compute the refined keypoints by taking the $\arg\max$ within the local heatmap. Although appealing, this does not work, as observed during our development of LOGO-CAP. The underlying reason is also straightforward: if this could work, the original offset-vector regression should have worked in the first place, since no additional information is introduced by learning the local heatmap.

We hypothesize that, on the one hand, in addition to the local heatmap, the structural relationship between different pose keypoints needs to be taken into account, and, on the other hand, the intrinsic uncertainty of the local information in a local heatmap needs to be resolved. The former is the key challenge of structured output prediction problems, for which many message passing algorithms have been developed in the literature. The latter cannot be addressed by simply increasing the local window size; it entails learning stronger local-global information interaction and adaptation.

To verify the two hypotheses, the proposed LOGO-CAP lifts the initial keypoints from the center-offset prediction to keypoint expansion maps (KEMs) to counter their lack of localization accuracy in two modules (Section ). The KEMs extend the star-structured representation of the center-offset formulation to the pictorial structure representation. As shown in \cref{fig:teaser}, in our LOGO-CAP, one module computes local KEMs and learns to account for the structured output prediction nature of the human pose estimation problem, resulting in the keypoint attraction maps (KAMs), i.e., the local filters in \cref{fig:teaser}. Another module computes global KEMs and learns to refine them by integrating the KAMs.

Our LOGO-CAP is a fully end-to-end bottom-up human pose estimation method with near real-time inference speed. It obtains $70.0$ AP on the fully-annotated subset of the COCO val-2017 dataset, an absolute increase of $9.9$ AP over the vanilla center-offset method, making a significant step forward.
\cref{fig:speed-accuracy} shows the advantage of the proposed LOGO-CAP in terms of overall speed-accuracy comparisons with prior arts. Meanwhile, we note that there is still a significant gap to the empirical upper bound in \cref{fig:speed-accuracy}, which encourages further investigation.

# Method

\vspace{-2mm}
\cref{fig:LOGO-CAP} illustrates the proposed LOGO-CAP, which consists of three main components: a feature backbone, the initial center-offset human pose estimation, and the proposed local-global contextual adaptation component for the final human pose estimation.

\vspace{-1mm}
\myparagraph{Backbone Network and Pose Initialization.}
Given an input image $I$, the output of the feature backbone is a $C$-dim feature map, denoted by $\mathbf{F}\in \mathbb{R}^{C\times h \times w}$, where $C$ is the feature dimension of the last convolutional layer in the feature backbone, and the spatial size $h\times w$ depends on the total stride of the feature backbone. The feature map $\mathbf{F}$ is shared by two head branches that predict the center heatmaps $\mathbf{C}$ and the offset fields $\mathbf{O}$. Then, the initial pose parameters are extracted using the top-$N$ local maxima and the corresponding offset vectors, denoted by $\mathbf{p}_i \in\mathbb{R}^{17\times 2}$ for the $i$-th person.

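For illustration, a minimal sketch of this initialization step (names such as `initial_poses` are ours; it assumes the offset field stores the 17 center-to-keypoint displacements per pixel and uses max-pooling NMS to pick center candidates):

```python
import torch
import torch.nn.functional as F

def initial_poses(center_heatmap, offsets, top_n=30):
    """center_heatmap: (1, h, w); offsets: (34, h, w) -> poses (N, 17, 2).

    Local maxima are found via 3x3 max-pool NMS; the top-N centers are kept
    and the per-center offset vectors are added to the center coordinates.
    """
    h, w = center_heatmap.shape[-2:]
    heat = center_heatmap.view(1, 1, h, w)
    keep = (F.max_pool2d(heat, 3, stride=1, padding=1) == heat).float()
    scores, idx = (heat * keep).view(-1).topk(top_n)
    ys, xs = idx // w, idx % w                          # (N,) center pixel coords
    off = offsets.view(17, 2, h, w)[:, :, ys, xs]       # (17, 2, N)
    centers = torch.stack([xs, ys], dim=-1).float()     # (N, 2)
    poses = centers[:, None, :] + off.permute(2, 0, 1)  # (N, 17, 2)
    return poses, scores
```
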
\myparagraph{Local Context.} We first introduce the local keypoint expansion maps (KEMs) for the initial pose parameters. Specifically, for the $j$-th type of keypoint (\eg, the nose of a person), we follow the strong observation in \cref{sec:introduction} and compute the local KEMs in $11\times 11$ windows as $\mathcal{M}_{N\times 17\times 11\times 11\times 2}$, as shown in \cref{alg:kpt-expansion}.
[!t]
\small{
\caption{\small Computing KEMs in a PyTorch-like style}
$\sigma$ = coco\_sigmas \Comment{$17\times 1$, keypoint sigmas provided in the COCO dataset.}
\Function{KptsExpansionCoco($\mathcal{P}$, ks=11)}
{
\Comment{Initial poses $\mathcal{P}$: Nx17x2}
r = ks // 2
dy, dx = meshgrid(arange(-r, r+1), arange(-r, r+1)) \Comment{dx, dy: ks$\times$ks; note r+1 so the window has ks entries}
dy = dy.reshape(1, 1, ks, ks)
dx = dx.reshape(1, 1, ks, ks)
scale = $\sigma$.reshape(1, 17, 1, 1) / $\sigma$.min() \Comment{keypoint-type specific expansion rate}
dy = dy $*$ scale \Comment{1x17x11x11}
dx = dx $*$ scale \Comment{1x17x11x11}
dxy = stack((dx, dy), dim=-1) \Comment{1x17x11x11x2}
$\mathcal{M}$ = $\mathcal{P}$.reshape(N, 17, 1, 1, 2) + dxy \Comment{Nx17x11x11x2}
\Return $\mathcal{M}$
}}
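
A runnable PyTorch translation of the algorithm above (a minimal sketch; the function name and tensor layouts follow the pseudocode's comments):

```python
import torch

# Per-keypoint sigmas from the COCO keypoint evaluation toolkit.
COCO_SIGMAS = torch.tensor([.26, .25, .25, .35, .35, .79, .79, .72, .72,
                            .62, .62, 1.07, 1.07, .87, .87, .89, .89]) / 10.0

def kpts_expansion_coco(poses: torch.Tensor, ks: int = 11) -> torch.Tensor:
    """Expand initial poses (N, 17, 2) into KEMs of shape (N, 17, ks, ks, 2)."""
    n, r = poses.shape[0], ks // 2
    dy, dx = torch.meshgrid(torch.arange(-r, r + 1, dtype=poses.dtype),
                            torch.arange(-r, r + 1, dtype=poses.dtype),
                            indexing="ij")                       # each: (ks, ks)
    scale = (COCO_SIGMAS / COCO_SIGMAS.min()).view(1, 17, 1, 1)  # per-type expansion rate
    dy = dy.view(1, 1, ks, ks) * scale                           # (1, 17, ks, ks)
    dx = dx.view(1, 1, ks, ks) * scale
    dxy = torch.stack((dx, dy), dim=-1)                          # (1, 17, ks, ks, 2)
    return poses.view(n, 17, 1, 1, 2) + dxy                      # (N, 17, ks, ks, 2)
```
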
Then, we encode the geometric mesh of KEMs $\mathcal{M} \in \mathbb{R}^{N\times 17\times 11\times 11\times 2}$ in a $d$-dim latent space (\eg, $d=64$ in our experiments), computed based on the feature backbone output. A pose instance is represented by concatenating all 17 keypoints. All initial keypoint-based poses are geometrically ``expanded / lifted" and feature-activated, resulting in the initial local context, $\mathcal{K} \in \mathbb{R}^{N\times (17\times d)\times 11\times 11}$.

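A possible realization of this expansion-and-activation step (a sketch under our own assumptions about the coordinate normalization; `sample_local_context` is our own name) uses bilinear sampling at the KEM locations:

```python
import torch.nn.functional as F

def sample_local_context(feat_embed, kems, hw):
    """feat_embed: (1, d, h, w) d-dim embedding computed from the backbone;
    kems: (N, 17, 11, 11, 2) keypoint expansion maps in pixel coordinates."""
    h, w = hw
    n = kems.shape[0]
    grid = kems.clone()
    grid[..., 0] = 2 * grid[..., 0] / (w - 1) - 1    # normalize x to [-1, 1]
    grid[..., 1] = 2 * grid[..., 1] / (h - 1) - 1    # normalize y to [-1, 1]
    grid = grid.view(1, n * 17 * 11, 11, 2)
    ctx = F.grid_sample(feat_embed, grid, align_corners=True)  # (1, d, N*17*11, 11)
    d = ctx.shape[1]
    ctx = ctx.view(d, n, 17, 11, 11).permute(1, 2, 0, 3, 4)    # (N, 17, d, 11, 11)
    return ctx.reshape(n, 17 * d, 11, 11)                      # (N, 17*d, 11, 11)
```
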
\myparagraph{Local Context Convolutional Message Passing.} To facilitate the structural information flow between different latent codes of the keypoints of a pose instance, we propose a simple convolutional message passing (CMP) module with three layers of Conv+Norm+ReLU operations, with the Attentive Norm used in the second layer.

The transformed latent code $\mathcal{K}'$ is decoded by a $1\times 1$ Conv. layer into the local keypoint attraction maps (KAMs), $\mathbf{K}\in\mathbb{R}^{N\times 17\times 11\times 11}$, which measure the uncertainty of the initial poses.

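A minimal sketch of the CMP module and the KAM decoder (GroupNorm stands in for the Attentive Norm here, which is an assumption made for self-containedness):

```python
import torch.nn as nn

class ConvMessagePassing(nn.Module):
    """Three Conv+Norm+ReLU layers over the (17*d)-channel local context,
    followed by a 1x1 conv decoding the 17-channel keypoint attraction maps."""
    def __init__(self, d=64, k=17):
        super().__init__()
        c = k * d
        self.mp = nn.Sequential(
            nn.Conv2d(c, c, 3, padding=1), nn.GroupNorm(k, c), nn.ReLU(inplace=True),
            nn.Conv2d(c, c, 3, padding=1), nn.GroupNorm(k, c), nn.ReLU(inplace=True),  # Attentive Norm in the paper
            nn.Conv2d(c, c, 3, padding=1), nn.GroupNorm(k, c), nn.ReLU(inplace=True),
        )
        self.decode = nn.Conv2d(c, k, 1)   # 1x1 conv -> KAMs (N, 17, 11, 11)

    def forward(self, local_ctx):          # local_ctx: (N, 17*d, 11, 11)
        return self.decode(self.mp(local_ctx))
```
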
\myparagraph{Local-Global Contextual Adaptation.} Through the CMP, we obtain dynamic (a.k.a. data-driven) kernels for the 17 keypoints in a pose instance-sensitive way, which are used to refine the global heatmaps $\mathcal{H}$ for the 17 keypoints. In detail, we first compute another geometric mesh with an $a\times a$ window (e.g., $a=97$) for each keypoint of the $N$ pose instances; the entire mesh is denoted by $\mathcal{M}_{G} \in \mathbb{R}^{N\times 17\times a\times a \times 2}$ and can be interpreted as the global KEM. It is then instantiated with appearance features extracted from the global heatmaps, and we have,

\mathcal{H}^{\uparrow}_{(1:17)} \xRightarrow[\text{bi-linear}]{\mathcal{M}_G} \mathbb{H} \xRightarrow[\text{reweighting}]{\mathcal{G}_{a\times a}(0, \sigma)} \bar{\mathbb{H}}\in\mathbb{R}^{N\times 17 \times a \times a},

where, to encode the Gaussian prior of keypoint heatmaps, the resulting pose-guided heatmaps $\mathbb{H}$ are reweighted by a Gaussian kernel $\mathcal{G}_{a\times a}(0, \sigma=\frac{a-1}{2\times 3})$ (e.g., $\sigma=16$ when $a=97$) in an element-wise way. In this way, the enlarged mesh follows the $3\sigma$ principle.

Then, we apply the learned $11\times 11$ keypoint kernels $K_{n, i}$ to convolve the reweighted $a\times a$ heatmaps $\bar{\mathbb{H}}_{n, i}$ in a pose instance-sensitive and keypoint-specific way, leading to the LOcal-GlObal Contextual Adaptation,

\bar{\mathbb{H}} \xRightarrow[\text{LOGO-CA}]{K_{N\times 17 \times 11 \times 11}} \Tilde{\mathbb{H}}\in\mathbb{R}^{N\times 17 \times a \times a},

which represents the refined heatmaps for the $17$ human pose keypoints.

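The reweighting and adaptation steps can be realized compactly by folding the instance dimension into convolution groups, as in the following sketch (our own naming; padding keeps the $a\times a$ size):

```python
import torch
import torch.nn.functional as F

def gaussian_prior(a=97, sigma=16.0):
    """Element-wise Gaussian reweighting kernel G_{a x a}(0, sigma)."""
    c = torch.arange(a).float() - (a - 1) / 2
    return torch.exp(-(c[:, None] ** 2 + c[None, :] ** 2) / (2 * sigma ** 2))

def logo_ca(pose_heat, kams, sigma=16.0):
    """pose_heat: (N, 17, a, a) pose-guided heatmaps sampled from the global
    heatmaps; kams: (N, 17, 11, 11) learned keypoint attraction maps."""
    n, k, a, _ = pose_heat.shape
    heat = pose_heat * gaussian_prior(a, sigma)      # Gaussian reweighting step
    x = heat.reshape(1, n * k, a, a)
    w = kams.reshape(n * k, 1, 11, 11)
    out = F.conv2d(x, w, padding=5, groups=n * k)    # one kernel per (instance, keypoint)
    return out.reshape(n, k, a, a)                   # refined heatmaps
```
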
\myparagraph{The Pose Estimation Output.} With the local-global contextually adapted heatmaps $\Tilde{\mathbb{H}}_{N\times 17 \times a \times a}$, we keep the top-2 locations for each keypoint within the $a\times a$ heatmap, and then use a convex average of the top-2 locations as the final predicted offset vectors (i.e., $(\Delta x'_i, \Delta y'_i)$'s in \cref{fig:LOGO-CAP}) and a convex average of their confidence scores as the prediction score, with a predefined weight $\lambda$ for the top-1 location ($0.75$ in our experiments). Together with the predicted keypoint centers $\mathcal{C}_{N\times 3}$, the final score for each keypoint is the product of the convex-average confidence score and the center confidence score. We keep the keypoints whose final scores are greater than $0$. We have,

\{\mathcal{C}_{N\times 3}, \Tilde{\mathbb{H}}\} \xRightarrow[\text{Score thresholding}]{\text{Output}} \{ \hat{L}^n_I; n=1, \cdots N'\},

where $N'$ is the number of final predicted pose instances in an image $I$.

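A sketch of this top-2 decoding (our own naming; the mapping from heatmap indices back to image coordinates via the global mesh is omitted for brevity):

```python
import torch

def decode_keypoints(refined_heat, lam=0.75):
    """refined_heat: (N, 17, a, a). Convex average of the top-2 locations per
    keypoint gives the final keypoint positions and confidence scores."""
    n, k, a, _ = refined_heat.shape
    flat = refined_heat.view(n, k, -1)
    scores, idx = flat.topk(2, dim=-1)                     # (N, 17, 2)
    xy = torch.stack([idx % a, idx // a], dim=-1).float()  # (N, 17, 2, 2): (x, y) pairs
    w = torch.tensor([lam, 1.0 - lam])                     # weight lam on the top-1
    kpt = (xy * w.view(1, 1, 2, 1)).sum(dim=2)             # convex average of locations
    conf = (scores * w).sum(dim=-1)                        # convex average of scores
    return kpt, conf
```
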
\vspace{-1mm}
In the fully end-to-end training, we need to define loss functions for the global heatmap $\mathcal{H}$, the refined local heatmap $\tilde{\mathbb{H}}$, the offset field $\mathcal{O}$, and the keypoint kernels.

\myparagraph{The Heatmap Loss.} The widely adopted mean squared error (MSE) loss is used. Denote by $\mathcal{H}^{GT}_{18\times h\times w}$ the ground-truth heatmaps, in which each keypoint (including the center) is modeled by a 2-D Gaussian with dataset-provided mean and variance. Let $\mathbf{p}=(i, \mathbf{x})$ index the domain $D$ of dimensions ${18\times h\times w}$. For the predicted heatmaps $\mathcal{H}_{18\times h\times w}$, the MSE loss is defined by,

\mathcal{L}_{\mathcal{H}} = \frac{1}{|D|} \sum_{\mathbf{p}\in D}\big\|w(\mathbf{x}) \big(\mathcal{H}(\mathbf{p}) - \mathcal{H}^{GT}(\mathbf{p})\big)\big\|_2^2,

where $w(\mathbf{x})$ represents the weight for foreground and background pixels. The foreground mask is provided by the dataset annotation. In our experiments, we set $w(\mathbf{x}) = 1$ for foreground pixels and $w(\mathbf{x}) = 0.1$ for background pixels.

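A sketch of this weighted MSE (the weight is applied inside the squared norm, as in the equation above):

```python
import torch

def heatmap_loss(pred, gt, fg_mask, w_fg=1.0, w_bg=0.1):
    """pred, gt: (18, h, w) predicted and ground-truth heatmaps;
    fg_mask: (h, w) binary foreground mask from the dataset annotation."""
    w = fg_mask * w_fg + (1.0 - fg_mask) * w_bg   # w(x): 1 on fg, 0.1 on bg
    return ((w * (pred - gt)) ** 2).mean()        # (1/|D|) sum over all pixels
```
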
In defining the loss function $\mathcal{L}_{\tilde{\mathbb{H}}}$ for the refined local heatmap $\tilde{\mathbb{H}}$ (Eqn. ), the ground-truth heatmap $\tilde{\mathbb{H}}^{GT}$ is generated on-the-fly based on the mesh $\mathcal{M}^G_{N\times 17\times a \times a \times 2}$ (Eqn. ) and the ground-truth keypoints, using a Gaussian model whose mean is the displacement between the currently predicted keypoints and the ground-truth ones, and whose standard deviation is $\sigma$ (i.e., that of the reweighting Gaussian prior model in Eqn. ).

\myparagraph{The Offset Field Loss.} The widely adopted SmoothL1 loss is used. Let $\mathcal{O}^{GT}\in\mathbb{R}^{34\times h\times w}$ be the ground-truth offset field, and $\mathcal{C}^{GT}$ the non-empty set of ground-truth keypoint centers. For the predicted offset field $\mathcal{O}$, we have,

\mathcal{L}_{\mathcal{O}}(\mathbf{p}) = \mathcal{A}(\mathbf{p}) \ell_1^{\text{smooth}} \left(\mathcal{O}(\cdot, \mathbf{p}), \mathcal{O}^{GT}(\cdot, \mathbf{p}); \beta\right),

for each foreground pixel $\mathbf{p} \in \mathcal{C}^{GT}$, and

\mathcal{L}_{\mathcal{O}} = \frac{1}{|\mathcal{C}^{GT}|} \cdot \sum_{\mathbf{p}\in\mathcal{C}^{GT}} \mathcal{L}_{\mathcal{O}}(\mathbf{p}),

where $\mathcal{A}(\mathbf{p})$ is the area of the person centered at the pixel $\mathbf{p}$, and $\beta$ is the cut-off threshold (e.g., $\frac{1}{9}$ in our experiments).

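A sketch of this loss (our own naming; note that the area term $\mathcal{A}(\mathbf{p})$ multiplies the per-pixel loss exactly as written in the equation above):

```python
import torch.nn.functional as F

def offset_loss(pred_off, gt_off, centers, areas, beta=1.0 / 9):
    """pred_off, gt_off: (34, h, w); centers: list of (y, x) GT center pixels;
    areas: per-instance person areas used as the weights A(p)."""
    total = 0.0
    for (y, x), area in zip(centers, areas):
        total = total + area * F.smooth_l1_loss(
            pred_off[:, y, x], gt_off[:, y, x], beta=beta)
    return total / max(len(centers), 1)   # average over GT centers
```
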
\myparagraph{The OKS Loss for the Keypoint Kernels.} Consider a single predicted pose instance. Learning the keypoint kernels $K_{17\times 11\times 11}$ is the key to facilitating the local-global contextual adaptation. To that end, the figure of merit of the KEMs $\mathcal{M}_{17\times 11 \times 11 \times 2}$ needs to directly reflect the task loss, i.e., the OKS loss. With respect to the $N^{GT}$ ground-truth pose instances in an image, we compute the similarity score per keypoint candidate in the KEMs and obtain the score tensor $S_{17\times 11 \times 11 \times N^{GT}}$. The score tensor is clamped with a threshold of $0.5$, i.e., $S_{17\times 11 \times 11 \times N^{GT}}=\max (S_{17\times 11 \times 11 \times N^{GT}}, 0.5)$. A mean reduction over the first three dimensions of the clamped score tensor yields the matching score for each of the $N^{GT}$ pose instances. Then, the best-matching ground-truth pose instance, indexed by $n^*$, is selected, and its matching score is denoted by $s_{n^*}$. Based on the selected ground-truth pose instance, we compute the per-keypoint similarity score for the predicted pose instance at hand, denoted by $s_k$ ($k\in [1, 17]$). The loss function for the keypoint kernels is then defined by,

\mathcal{L}_{K} = s_{n^*}\cdot \sum_{k,i,j} s_{k} \cdot |K_{k,i,j} - {S}_{k,i,j,n^*}|^2.

\myparagraph{The Total Loss} is then defined by $\mathcal{L}= \mathcal{L}_{\mathcal{H}} + \mathcal{L}_{\tilde{\mathbb{H}}} + \lambda \cdot (\mathcal{L}_{\mathcal{O}} + \mathcal{L}_{K})$, where the trade-off parameter $\lambda$ balances the different loss terms ($\lambda = 0.01$ in our experiments).

2111.07775/main_diagram/main_diagram.drawio
ADDED
The diff for this file is too large to render.
See raw diff

2111.07775/paper_text/intro_method.md
ADDED
@@ -0,0 +1,110 @@

# Introduction
Learning useful representations for downstream tasks is a key component for success in rich observation environments [14, 38, 46, 53, 54]. Consequently, a significant amount of work proposes various representation learning objectives that can be tied to the original reinforcement learning (RL) problem. Such auxiliary objectives include the likes of contrastive learning losses [41, 35, 11], state similarity metrics like bisimulation or policy similarity [61, 60, 1], and pixel reconstruction losses [28, 19, 24, 23]. On a separate axis, data augmentations have been shown to provide huge performance boosts when learning to control from pixels [34, 32]. Each of these methods has been shown to work well for particular settings and hence displayed promise to be part of a general purpose representation learning toolkit. Unfortunately, these methods were proposed with different motivations and tested on different tasks, making the following question hard to answer:

*What really matters when learning representations for downstream control tasks?*

Learning directly from pixels offers much richer applicability than learning from carefully constructed states. Consider the example of a self-driving car, where it is nearly impossible to construct a complete state description of the position and velocity of all objects of interest, such as road edges, highway markers, other vehicles, etc. In such real-world applications, learning from pixels offers a much more feasible option. However, this requires algorithms that can discern between task-relevant and task-irrelevant components in the pixel input, i.e., learn good representations. Focusing on task-irrelevant components can lead to brittle or non-robust behavior when the agent is put in slightly different environments. For instance, billboard signs on buildings in the background have no dependence on the task at hand while a self-driving car tries to change lanes.

<sup>∗</sup>Equal Contribution. Corresponding author: <manan.tomar@gmail.com>


|
| 12 |
+
|
| 13 |
+
Figure 1: Comparing pixel-based RL methods across finer categorizations of evaluation benchmarks. Each category 'Cx' denotes different data-centric properties of the evaluation benchmark (e.g., C1 refers to discrete action, dense reward, without distractors, and with data randomly cropped [32, 34]). Exact descriptions of each category are provided in Table 4. Baseline-v0 refers to applying the standard deep RL agent (e.g., Rainbow DQN [52] and SAC [22]); Baseline refers to adding reward and transition prediction to Baseline-v0, as described in Section 3; Contrastive includes algorithms such as CURL [35]; Metric denotes the state metric losses such as DBC [61]; Non-Contrastive includes algorithms such as SPR [45]; Reconstruction<sup>2</sup> includes DREAMER [24] and TIA [18]. For a given method, we always consider the best performing algorithm. Every method leads to varied performance across data categories, making a comparison that is an *average across all categories* highly uninformative. Details of the exact algorithm used under each method are given in Appendix 7 (Table 5).

However, if such task-irrelevant components are not discarded, they can lead to sudden failure when the car drives through a different environment, say a forest where there are no buildings or billboards. Avoiding brittle behavior is therefore key to efficient deployment of artificial agents in the real world.

There has been a lot of work recently that tries to learn efficiently from pixels. A dominant idea throughout prior work has been that of attaching an auxiliary loss to the standard RL objective, with the exact mechanics of the loss varying for each method [28, 61, 35]. A related line of work learns representations by constructing world models directly from pixels [44, 40, 21, 24]. We show that these work well when the world model is simple. However, as the world model gets even slightly more complicated, which is true of the real world and imitated in simulation with the use of video distractors [59, 30, 47], such approaches can fail. For other methods, it is not entirely clear what component/s in auxiliary objectives can lead to failure, thus making robust behavior hard to achieve. Another distinct idea is that of using data augmentations [34, 32] over the original observation samples, which seem to be quite robust across different environments. However, as we will show, a lot of the success of data augmentations is an artifact of how the benchmark environments save data, which is not true of the real world [47], thus resulting in failure<sup>1</sup>. It is important to note that some of these methods are not designed for robustness but instead for enhanced performance on particular benchmarks. For instance, the ALE [7] benchmark involves simple, easy-to-model objects, and it becomes hard to discern if methods that perform well are actually good candidates for answering 'what really matters for robust learning in the real world.'

Contributions. In this paper, we explore the major components responsible for the successful application of various representation learning algorithms. Based on recent work in RL theory for learning with rich observations [16, 4, 9], we hypothesize certain key components to be responsible for sample-efficient learning. We test the role these play in previously proposed representation learning objectives and then propose an exceedingly simple *baseline* (see Figure 2) which takes away the extra "knobs" and instead combines two simple but key ideas: reward and transition prediction. We conduct experiments across multiple settings, including the MuJoCo domains with natural distractors [59, 30, 47], DMC Suite [50], and Atari100K [29] from ALE [7]. Following this, we identify the failure modes of previously proposed objectives and highlight why they result in comparable or worse performance than the proposed baseline. Our observations suggest that relying on a particular method across multiple evaluation settings does not work, as the efficacy varies with the exact details of the task, even within the same benchmark (see Figure 1). We note that a finer categorization of available benchmarks based on metrics like density of reward, presence of task-irrelevant components, inherent horizon of tasks, etc., plays a crucial role in determining the efficacy of a method. We list such categorizations as suggestions for more informative future evaluations.

<sup>1</sup>Moreover, it is hard to pick exactly which data augmentation will work for a particular environment or task. For ALE we use the performance of DREAMER after 1M steps, whereas for DMC we consider the performance after 500k steps.
|
| 22 |
+
|
| 23 |
+
The findings of this paper advocate for a more data-centric view of evaluating RL algorithms [13], largely missing in current practice. We hope these insights can lead to better representation learning objectives for real-world applications.
|
| 24 |
+
|
| 25 |
+
# Method
|
| 26 |
+
|
| 27 |
+
We model the RL problem using the framework of contextual decision processes (CDPs), a term introduced in Krishnamurthy et al. [33] to broadly refer to any sequential decision making task where an agent must act on the basis of rich observations (context) x<sup>t</sup> to optimize long-term reward. The true state of the environment s<sup>t</sup> is not available and the agent must construct it on its own, which is required for acting optimally on the downstream task. Furthermore, the emission function which dictates what contexts are observed for a given state is assumed to only inject noise that is uncorrelated to the task in hand, i.e. it only changes parts of the context that are irrelevant to the task [61, 47]. Consider again the example of people walking on the sides of a road while a self-driving car changes lanes. Invariance to parts of the context that have no dependence on the task, e.g. people in the background, is an important property for any representation learning algorithm since we cannot expect all situations to remain exactly the same when learning in the real world. Detailed description of the setup and all the prior methods used is provided in Appendix 1.
|
| 28 |
+
|
| 29 |
+

|
| 30 |
+
|
| 31 |
+
Figure 2: (Left) Baseline for control over pixels. We employ two losses besides the standard actor and critic losses, one being a reward prediction loss and the other a latent transition prediction loss. The encoded state s<sup>t</sup> is the learnt representation. Gradients from both the transition/reward prediction and the critic are used to learn the representation, whereas the actor gradients are stopped. In the ALE setting, the actor and critic losses are replaced by a Rainbow DQN loss [52]. (Right) Natural Distractor in the background for standard DMC setting (left column) and custom off-center settings (right column). More details about the distractors can be found in Appendix 2.
|
| 32 |
+
|
| 33 |
+
We explore the utility of two main components, that of reward and transition prediction, in learning representations. A lot of prior work has incorporated these objectives either individually or in the presence of more nuanced architectures. Here, our aim is to start with the most basic components and establish their importance one by one. Particularly, we use a simple soft actor-critic setup (taking inspiration from SAC-AE [55]) as the base architecture, and attach the reward and transition prediction modules to it (See Figure 2). Note that the transition network is over the encoded state s<sup>t</sup> and not over the observations [36]. Moreover, the transition model is fit between the encoded state and the reward model. Unless noted otherwise, we call this architecture as the *baseline* for all our experiments. Details about the implementation, network sizes and all hyperparameters is provided in Appendix 3 and Appendix 4 (Table 3) respectively.
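To make this concrete, the following is a minimal sketch of the two auxiliary losses, under our own assumed shapes and names (`BaselineAux`, `latent_dim`, etc. are illustrative, not the authors' code):

```python
import torch
import torch.nn as nn

class BaselineAux(nn.Module):
    def __init__(self, latent_dim=50, act_dim=6, hidden=256):
        super().__init__()
        # latent transition model: (s_t, a_t) -> predicted s_{t+1}
        self.transition = nn.Sequential(
            nn.Linear(latent_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, latent_dim))
        # reward head attached to the transition model's output
        # (one hidden layer, matching the non-linear predictor discussed below)
        self.reward = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def loss(self, s, a, s_next, r):
        s_pred = self.transition(torch.cat([s, a], dim=-1))
        transition_loss = (s_pred - s_next.detach()).pow(2).mean()
        reward_loss = (self.reward(s_pred).squeeze(-1) - r).pow(2).mean()
        # the reward term grounds the representation: without it, mapping every
        # observation to the same latent would minimize the transition loss
        return transition_loss + reward_loss
```

These two terms are simply added to the standard actor-critic losses; no other auxiliary machinery is assumed.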
In this section, we analyze the baseline architecture across six DMC tasks: Cartpole Swingup, Cheetah Run, Finger Spin, Hopper Hop, Reacher Easy, and Walker Walk. A common observation in our experiments is that the baseline significantly reduces the gap to more sophisticated methods, sometimes even outperforming them. This highlights that the baseline might serve as a stepping stone for other methods to build on. We test the importance of having both the reward and transition modules by removing each of them one by one.

Figure 3 (left) shows a comparison of 'with *vs* without reward prediction'. All other settings are kept unchanged and the only difference is the reward prediction. When the reward model is removed, there remains no grounding objective for the transition model. This results in representation collapse, as the transition model loss is minimized by the trivial representation which maps all observations to the same encoded state, leading to degraded performance. This hints at the fact that without a valid grounding objective (in this case, from predicting rewards), learning good representations can be very hard. Note that it is not the case that no reward information is available to the agent, since learning the critic does provide enough signal to learn efficiently when there are no distractions present. However, in the presence of distractions the signal from the critic can be extremely noisy, since it is based on the current value functions, which are not well developed in the initial stages of training. One potential fix for such a collapse is to not use the standard maximum likelihood based approaches for the transition model loss and instead use a contrastive version of the loss, which has been shown to learn general representations in the self-supervised learning setting. We test this later in the paper and conclude that although it does help prevent collapse, the performance is still far inferior to including the reward model. Complete performances for individual tasks are shown in Appendix 8.1.



Figure 3: **Baseline Ablations**. Average normalized performance across six standard domains from DMC. Mean and std. err. over 5 runs. **Left plot**: Baseline with *vs* without reward prediction. **Middle plot**: Baseline with non-linear *vs* linear reward predictor/decoder. **Right plot**: Baseline with *vs* without transition prediction.

**Linear Reward Predictor.** Furthermore, we also compare to the case where the reward decoder is a linear layer instead of the standard one-hidden-layer MLP. As shown in Figure 3 (middle), performance decreases significantly in this case, but still does not collapse as in the absence of reward prediction. We hypothesize that the reward model can remove information that is useful for predicting the optimal actions. When it is attached directly to the encoded state, i.e., in the linear reward predictor case, it might force the representation to only preserve information required to predict the reward well, which might not always be enough to predict the optimal actions well. For instance, consider a robot locomotion task. The reward in this case depends only on one variable, the center of mass, and thus the representation module would only need to preserve that in order to predict the reward well. However, to predict optimal actions, information about all the joint angular positions and velocities is required, which might be discarded if the reward model is directly attached to the encoded state. This idea is similar to why contrastive learning objectives in the self-supervised learning setting always enforce consistency between two positive/negative pairs after projecting the representation to another space. It has been shown that enforcing consistency directly in the representation space can remove excess information, which hampers final performance [11]. We indeed see a similar trend in the RL case as well.

Similarly, Figure 3 (right) shows a comparison of 'with *vs* without transition prediction'. The transition model loss enforces temporal consistency among the encoded states. When this module is removed, we observe a slight dip in performance across most tasks, with the most prominent drop in Cartpole, as shown in Appendix 8.1 (Figure 16). This suggests that enforcing such temporal consistency in the representation space is indeed an important component for robust learning, but not a sufficient one. To examine whether the marginal gain is an artifact of the exact architecture used, we explored other architectures in Appendix 8.2 but did not observe any difference in performance.

The baseline introduced above also resembles a prominent idea in theory, that of learning value-aware models [16, 4]. Value-aware learning advocates for learning a model by fitting it to the value function of the task at hand, instead of fitting it to the true model of the world. The above baseline can be viewed as doing value-aware learning in the following sense: the grounding for the representation is provided by the reward function, thus defining the components responsible for the task at hand, and the transition dynamics are then learnt only for these components and not for all components in the observation space. There remains one crucial difference though. Value-aware methods learn the dynamics based on the value function (multi-step) and not the reward function (1-step), since the value function captures the long-term nature of the task at hand. To that end, we also test a more exact variant of the value-aware setup where we use the critic function as the target for optimizing the transition prediction, both with and without a reward prediction module (Table 1). Complete performances are provided in Appendix 8.7. We see that the value-aware losses perform worse than the baseline. A potential reason could be that since the value estimates are noisy under distractors, directly using these value function estimates as a target does not help in learning a stable latent state prediction. Indeed, more sophisticated value-aware methods such as Temporal Predictive Coding [39] lead to similar scores as the baseline.
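A hedged sketch of this "truly value-aware" variant, where the critic's estimate at the true next state serves as the regression target for the predicted next latent (function names and signatures are our assumptions):

```python
import torch

def value_aware_transition_loss(critic, transition, s, a, s_next, a_next):
    s_pred = transition(torch.cat([s, a], dim=-1))
    # target is the critic's value at the true next latent; under distractors
    # this target is noisy, which we conjecture destabilizes learning
    q_target = critic(s_next, a_next).detach()
    return (critic(s_pred, a_next) - q_target).pow(2).mean()
```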
<sup>3</sup>DBC [61] performance data is taken from their publication.

Table 1: **Truly value-aware objectives**. We report the average final score after 500K steps across six standard domains from DMC.



Figure 4: **Baseline Ablations**. Average normalized performance across six standard domains from DMC. Mean and std. err. over 5 runs. **Left plot**: Baseline *vs* state metric losses (DBC [61]). The performance of the baseline is compared with the bisimulation metrics employed by DBC<sup>3</sup>. **Middle plot**: Data augmentations. Cropping removes irrelevant segments, while flip and rotate do not and perform similarly to the baseline. The baseline with random crop performs as well as RAD. **Right plot**: Contrastive and non-contrastive losses. The performance of the baseline is compared with replacing the transition loss with a contrastive loss. Further, we consider simple contrastive (CURL [35]) and non-contrastive (a variant of SPR [45] for DMC) methods as well.

Until now, we have discussed why the two modules we identify as vital for minimal and robust learning are actually necessary. We now ask what other components could be added to this architecture that might improve performance, as has been done in prior methods. We then ask when these added components actually improve performance, and when they fail. More implementation details are provided in Appendix 3.

**Metric Losses**. Two recent works that are similar to the above baseline are DBC [61] and MICo [10], both of which learn representations by obeying a distance metric. DBC learns the metric by estimating the reward and transition models, while MICo uses transition samples to directly compute the metric distance. We compare the baseline's performance with DBC in Figure 4 (left). Note that without the metric, DBC is similar to the baseline barring architectural differences, such as the use of probabilistic transition models in DBC compared to deterministic models in the baseline. Notably, we observe that the performance "without metric" exceeds that of "with metric".

**Data Augmentations.** A separate line of work has shown strong results when using data augmentations over the observation samples. These include the RAD [34] and DRQ [32] algorithms, which differ only minimally in their implementations. We run experiments for three different augmentations: 'crop', 'flip', and 'rotate'. The 'crop' augmentation always crops the image by some shifted margin from the center. Interestingly, the image of the robot is also always centered, thus allowing 'crop' to only ever remove background or task-irrelevant information and never the robot or task-relevant information. This essentially amounts to not having background distractors, and thus we see that this technique performs quite well, as shown in Figure 4 (middle). However, augmentations that do not explicitly remove the distractors, such as rotate and flip, lead to similar performance as the baseline. This suggests that augmentations might not be helpful when distractor information cannot be removed, or when we do not know where the objects of interest lie in the image, something true of the real world. We test this by shifting the robot to the side, thus making the task-relevant components off-center, and by zooming out, i.e., increasing the amount of irrelevant information even after cropping. We see that the performance of 'crop' drops drastically in this case, showcasing that most of the performance gains from augmentations can be attributed to how the data is collected and not to the algorithm itself. Additional ablations are provided in Appendix 8.3.
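The following sketch shows why 'crop' is special: cropping an 84x84 window out of a (say) 100x100 frame trims only the border, so a centered robot always survives while border distractors are cut; with an off-center or zoomed-out robot the same operation can remove task-relevant pixels (image sizes here are illustrative, not the exact benchmark values):

```python
import numpy as np

def random_crop(img, out=84):
    # img: (..., H, W); the crop location is uniform over all valid offsets
    h, w = img.shape[-2:]
    top = np.random.randint(0, h - out + 1)
    left = np.random.randint(0, w - out + 1)
    return img[..., top:top + out, left:left + out]
```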
Table 2: **RAD additional ablations**. We report the average final score after 500K steps on the Cheetah Run and Walker Walk domains from DMC. This illustrates that the performance of augmentations is sensitive to the quality of the data. Also, for the "Zoomed Out" setting, it is worth noting that both *crop* and *flip* settle to the same score.

|          | Standard        | Off-center      | Zoomed Out      |
|----------|-----------------|-----------------|-----------------|
| RAD Crop | $0.34 \pm 0.14$ | $0.30 \pm 0.08$ | $0.23 \pm 0.10$ |
| RAD Flip | $0.27 \pm 0.08$ | $0.29 \pm 0.07$ | $0.23 \pm 0.07$ |

**Contrastive and Non-Contrastive Losses.** A lot of recent methods also deploy various contrastive losses (for example, CPC [41] and InfoNCE [42]) to learn representations, which essentially amounts to computing positive/negative pairs and pulling/pushing them together/apart, respectively. In practice, this can be applied to many parts of the pipeline, for instance over the output of the encoding function $f_{\theta}$ [24], or using random augmentations [35, 37], and so forth. We test a simple modification, that of using the contrastive variant of the transition prediction loss instead of the maximum likelihood version. We see in Figure 4 (right) that the contrastive version leads to inferior results compared to the baseline, again suggesting that contrastive learning might not add much performance improvement, as has been witnessed in the self-supervised literature, where methods like SIMSIAM [12], BARLOW TWINS [58], and BYOL [20] achieve similar or better performance than contrastive methods like SIMCLR [11]. Complete performances are provided in Appendix 8.5.
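For reference, a minimal InfoNCE-style form of the contrastive transition loss we test (an assumed formulation, using in-batch negatives):

```python
import torch
import torch.nn.functional as F

def contrastive_transition_loss(s_pred, s_next, temperature=0.1):
    z_pred = F.normalize(s_pred, dim=-1)        # (B, D) predicted next latents
    z_next = F.normalize(s_next, dim=-1)        # (B, D) true next latents
    logits = z_pred @ z_next.t() / temperature  # (B, B) similarity matrix
    labels = torch.arange(z_pred.size(0), device=z_pred.device)
    # positives lie on the diagonal; all other batch entries act as negatives,
    # which is what prevents the collapsed (constant) representation
    return F.cross_entropy(logits, labels)
```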
Note that in the ALE results (Figure 7), SPR [45], which deploys a specific similarity loss for transition prediction motivated by BYOL [20], leads to the best results overall. We follow the same setup and test a variant of the baseline which uses the cosine similarity loss from SPR, evaluating its performance on DMC-based tasks. We again see in Figure 4 (right) that there is very little or no improvement over the baseline performance. This again suggests that the same algorithmic idea can have a completely different performance just by changing the evaluation setting<sup>5</sup> (ALE to DMC).
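The cosine similarity variant is even simpler; a BYOL/SPR-style sketch (the stop-gradient on the target branch follows BYOL; this is our rendering, not the SPR code):

```python
import torch.nn.functional as F

def cosine_transition_loss(s_pred, s_next_target):
    # negative cosine similarity between prediction and detached target
    return -F.cosine_similarity(s_pred, s_next_target.detach(), dim=-1).mean()
```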
**Learning World Models.** We test DREAMER [24], a state-of-the-art model-based method that learns world models through pixel reconstruction, on two settings: with and without distractors. Although performance in the "without distractors" case is good, we see that with distractors, DREAMER fails on some tasks, while performing worse



Figure 5: **Pixel reconstruction**. Average normalized performance across six DMC domains with distractors. The baseline achieves better performance than SOTA methods like DREAMER and TIA<sup>4</sup>.

than the baseline on most others. This suggests that learning world models through reconstruction might only be a good idea when the world models are fairly simple to learn. If world models are hard to learn, as is the case with distractors, reconstruction-based learning can lead to severe divergence that results in no learning at all. We also compare against the more recently introduced method of Fu et al. [18]. Their method, called TIA [18], incorporates several other modules in addition to DREAMER and learns a decoupling between the distractor background and the task-relevant components. We illustrate the performance of each of the above algorithms in Figure 5, along with a version where we add a full reconstruction loss to the baseline. Complete performances are provided in Appendix 8.6.

**Relevant Reconstruction and Sparse Rewards.** Since we consider only dense reward tasks, using the reward model for grounding is sufficient to learn good representations. More sophisticated auxiliary tasks considered in past works include prediction of an ensemble of value networks, prediction of past value functions, prediction of value functions of random cumulants, and observation reconstruction. However, in the sparse reward case, grounding on only the reward model or on past value functions can lead to representation collapse if the agent continues to receive zero reward for a long period of time. Therefore, in such cases where good exploration is necessary, tasks such as observation reconstruction can help prevent collapse. Although this has been shown to be an effective technique in the past, we argue that full reconstruction can



Figure 6: **Reconstruction and augmentations for sparse settings**. Normalized performance on the Ball-in-cup Catch domain from DMC.

harm the representations in the presence of distractors. Instead, we claim that reconstruction of *only* the task-relevant components in the observation space results in learning good representations [18],

<sup>4</sup>TIA [18] performance data is taken from their publication.

<sup>5</sup>The SPR version without augmentations actually uses two separate ideas for improving performance, a cosine similarity transition prediction loss and a separate convolutional encoder for the transition network, making it hard to attribute gains over the base DER [52] to just the transition loss.



Figure 7: **Atari 100K**. Human-normalized performance (mean/median) across 25 games from the Atari 100K benchmark. Mean and 95% confidence interval over 5 runs. **Left plot**: Comparison on all 25 games. **Right plot**: Comparison on only the dense reward games (7 games from Table 7).

especially when concerned with realistic settings like those with distractors. To that end, we show that in the sparse reward case, task-relevant reconstruction<sup>6</sup> is sufficient for robust performance. We show this in Figure 6, along with the performance of the baseline and augmentations. Of course, how one should come up with techniques that differentiate between task-relevant and task-irrelevant components in the observations remains an open question<sup>7</sup>. Additional ablations are provided in Appendix 8.6.

**Atari 100K**. We study the effect of the techniques discussed thus far on the Atari 100K benchmark, which involves 26 Atari games and compares performance relative to human-achieved scores at 100K steps, or 400K frames. We consider the categorization proposed by Bellemare et al. [6] based on the nature of the reward (dense, human optimal, score exploit, and sparse). We implemented two versions of the baseline algorithm, one with both the transition and reward prediction modules and the other with only reward prediction. Our average results over all games show that the baseline performs comparably to CURL [35], SimPLe [29], DER [52], and OTR [31], while being quite inferior to DRQ<sup>8</sup> [32, 2] and SPR [45]. Since our implementation of the baseline is built over the DER code, similar performance to DER might suggest that reward and transition prediction do not help much on this benchmark. Note that the ALE does not involve distractors, so learning directly from the RL head (DQN in this case) should be enough to encode information about the reward and the transition dynamics in the representation. This stands in stark contrast to the without-distractors case in the DMC Suite, where transition and reward prediction still lead to better performance. Such differences might be attributed to the continuous vs. discrete nature of the DMC and ALE benchmarks. More surprisingly, we find that when plotting the same average performance results for only the dense reward environments in 100K, the gap in performance between DER and SPR/DRQ decreases drastically. Note that SPR builds over DER whereas DRQ builds over OTR.
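For clarity, the human-normalized aggregate in Figure 7 follows the standard Atari 100K convention; per-game random and human reference scores are benchmark constants not shown here:

```python
import numpy as np

def human_normalized(agent_scores, random_scores, human_scores):
    scores = (np.asarray(agent_scores) - np.asarray(random_scores)) / \
             (np.asarray(human_scores) - np.asarray(random_scores))
    return scores.mean(), np.median(scores)
```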
We further delve into understanding the superior performance of SPR and DRQ. In particular, SPR combines a non-contrastive transition prediction loss and data augmentations. By non-contrastive, we refer to a self-supervised loss that does not use contrastive learning ideas. To understand the effect of each of these individually, we run SPR without data augmentations, and call it SPR\*<sup>9</sup>. We see that SPR\* leads to performance similar to the baseline and the DER agent, suggesting that a non-contrastive loss does not lead to gains when run without data augmentations. Finally, we take the DER agent and add data augmentations to it (from DRQ). This is shown as DER + AUG in Figure 7. We see that this leads to collapse, with the worst performance across all algorithms. Note that DRQ builds over OTR and performs quite well, whereas when the same augmentations are used with DER, which includes a distributional agent, we observe a collapse. This again indicates that augmentations can change data in a fragile manner, sometimes leading to enhanced performance with certain algorithms while failing with others. Segregating the evaluation of algorithms based on these differences is therefore of utmost importance. We show the individual performance on all 25 games in Appendix 8.5 (Table 7).

<sup>6</sup>Part Recons. in Figure 6 involves reconstructing the DMC robot over a solid black background.

<sup>7</sup>As also evident from TIA's [18] performance on the DMC Ball-in-cup Catch experiments.

<sup>8</sup>We use the DRQ($\epsilon$) version from Agarwal et al. [2] for fair evaluation and denote it as DRQ.

<sup>9</sup>Note that this is different from the SPR without augmentations version reported in Schwarzer et al. [45], since that version also uses dropout, which is not a fair comparison.
|
2112.08655/main_diagram/main_diagram.drawio
ADDED
The diff for this file is too large to render. See raw diff.
2112.08655/main_diagram/main_diagram.pdf
ADDED
Binary file (99.8 kB).
2112.08655/paper_text/intro_method.md
ADDED
@@ -0,0 +1,97 @@

# Introduction
Due to the huge computational overhead of traditional super-resolution, it is difficult to apply to mobile devices with limited computing capabilities. The main goal of lightweight single-image super-resolution (SISR) is to reconstruct super-resolution (SR) images from low-resolution (LR) ones with fewer parameters and calculations [@yao2020weighted; @li2020s; @hou2021coordinate]. In the past ten years, deep learning has made remarkable achievements in various computer vision tasks, which has also greatly promoted the development of SISR.

Recently, many convolutional neural network (CNN) based SISR methods have been proposed [@3dong2015image; @5han2015deep; @4dong2016accelerating; @tian2020coarse; @he2019mrfn]. Compared with traditional methods, CNN-based SISR methods are more versatile and can reconstruct higher-quality SR images with more texture details. In 2014, Dong *et al.* introduced deep learning technology into SISR and proposed the first CNN-based SISR model, named SRCNN [@3dong2015image]. Although SRCNN has only three convolutional layers, its performance far surpassed traditional methods and achieved state-of-the-art results at the time. We now know that deeper and more complex networks can achieve better performance [@6lim2017enhanced; @1ahn2018fast; @7haris2018deep; @12zhang2018image; @17zhang2018residual; @li2018multi; @zhang2020unsupervised; @li2020mdcn]. However, their parameters and calculations are also huge, making them difficult to use on mobile devices. To address this issue, many lightweight SISR models have been proposed. For instance, CARN [@1ahn2018fast] is a lightweight residual network composed of multiple residual connections. ECBSR [@zhang2021edge] is a lightweight and efficient network whose features are extracted along multiple paths. The purpose of these models is to reduce the complexity of the model and facilitate application in the real world. The demand for lightweight practical models motivates us to propose the Feature Distillation Interaction Weighted Network (FDIWN). The computational cost of FDIWN is lower than that of most existing lightweight SISR models, yet it is not inferior to them in terms of performance.

<figure id="Figure 2" data-latex-placement="t">
<img src="Fig/FDIWN1.png" style="width:15.5cm" />
<figcaption>The architecture of the proposed Feature Distillation Interaction Weighting Network (FDIWN).</figcaption>
</figure>

As we know, as the depth of the network increases, information is lost during transmission. Therefore, under constraints on parameters and calculations, how to prevent information loss and how to make full use of intermediate features is important. To achieve this, we introduce Wide-residual Distillation Interaction Blocks (WDIB) in the Feature Shuffle Weighted Group (FSWG) for pairwise feature fusion, after which the features are shuffled and weighted. This operation can improve model performance while adding only a small computational cost, since wide-residual attention weighting units are used extensively in WDIB, including Wide Identical Residual Weighting (WIRW) units and Wide Convolutional Residual Weighting (WCRW) units. WIRW and WCRW allow more features to pass through and be activated, thereby increasing the transmission and utilization of the features. Meanwhile, the carefully designed Self-Calibration Fusion (SCF) unit integrates different levels of features via a jump splicing strategy to achieve good SR reconstruction. In general, our main contributions can be summarized as follows:

- We propose wide-residual attention weighting units for lightweight SISR, including the Wide Identical Residual Weighting (WIRW) unit and the Wide Convolutional Residual Weighting (WCRW) unit, which have stronger feature distillation capabilities than ordinary residual blocks.

- We propose a novel Self-Calibration Fusion (SCF) module to replace the traditional concatenate operation for efficient feature interaction and fusion, which can aggregate more representative features and self-calibrate the input and output features.

- We propose a Wide-Residual Distillation Connection (WRDC) framework, which connects the coarse and distilled fine features within the module and allows features from different scales to interact with each other.

- We design a Feature Shuffle Weighted Group (FSWG) for pairwise feature fusion, which consists of a series of interactional WDIBs. Meanwhile, it serves as a basic component of the proposed Feature Distillation Interaction Weighting Network (FDIWN).

<figure id="Figure 3" data-latex-placement="t">
<img src="Fig/WDIB1.png" style="width:15.5cm" />
<figcaption>The structure of the proposed Wide-residual Distillation Interaction Block (WDIB). Conv1 and Conv3 represent convolutional layers with kernel sizes of 1 and 3, respectively.</figcaption>
</figure>

# Method

To build a lightweight and accurate SISR model, we focus on the use of intermediate features and the interaction of feature information. To achieve this, we propose a Feature Distillation Interaction Weighting Network (FDIWN). FDIWN consists of a series of novel and efficient modules, including the Wide-residual Distillation Interaction Block (WDIB), Wide-Residual Distillation Connection (WRDC), Wide Identical Residual Weighting (WIRW) unit, Wide Convolutional Residual Weighting (WCRW) unit, and Self-Calibration Fusion (SCF) module.

As shown in Figure [1](#Figure 2){reference-type="ref" reference="Figure 2"}, FDIWN consists of three parts: the shallow feature extraction part, the non-linear deep feature acquisition part, and the upsampling recovery part. Following previous works, we use a $3 \times 3$ convolutional layer to extract the shallow features ${X_0}$ from the input LR image $$\begin{equation}
{X_0} = {C_e}({I_{LR}}),
\end{equation}$$ where ${C_e}$ represents the feature extraction layer, ${I_{LR}}$ is the LR image, and ${X_0}$ is the extracted shallow features.

After that, the non-linear feature mapping module follows, which is formed by several Feature Shuffle Weighted Groups (FSWGs) through jump connections. The operation can be expressed as follows $$\begin{equation}
{X_1} = F_{FSWG}^0({X_0}),
\end{equation}$$ $$\begin{equation}
{X_{n - 1}} = F_{FSWG}^{n - 2}( \ldots (F_{FSWG}^1({X_1}) + {X_1}) \ldots ) + {X_{n - 2}},
\end{equation}$$ $$\begin{equation}
{X_n} = F_{FSWG}^{n - 1}({X_{n - 1}}),
\end{equation}$$ where $F_{FSWG}^k$ represents the $k$-th FSWG, and ${X_n}$ denotes the extracted non-linear deep features.

The features used for the final SR image reconstruction in the upsampling recovery module come from two parts: one from the non-linear deep feature extraction module and the other from the input LR image. We hope that superimposing the low-frequency and high-frequency feature information in this way can reconstruct high-quality SR images with more texture details. Therefore, the final SR image can be expressed as $$\begin{equation}
{I_{SR}} = {F_{UP1}}(\sum\limits_{i = 0}^{n - 1} {F_{FSWG}^i({X_i})} ) + {F_{UP2}}({I_{LR}}),
\end{equation}$$ where ${I_{SR}}$ is the reconstructed SR image, and ${F_{UP1}}$ and ${F_{UP2}}$ represent the upsampling modules.
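The equations above amount to the following skeletal forward pass, written under our own naming assumptions (channel counts, the `PixelShuffle` upsamplers, and the FSWG placeholders are ours, not the exact implementation):

```python
import torch.nn as nn

class FDIWNSkeleton(nn.Module):
    def __init__(self, fswg_blocks, channels=48, scale=4):
        super().__init__()
        self.extract = nn.Conv2d(3, channels, 3, padding=1)   # C_e
        self.fswgs = nn.ModuleList(fswg_blocks)               # F_FSWG^k blocks
        self.up1 = nn.Sequential(                             # F_UP1
            nn.Conv2d(channels, 3 * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale))
        self.up2 = nn.Sequential(                             # F_UP2
            nn.Conv2d(3, 3 * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale))

    def forward(self, lr):
        feats = self.extract(lr)            # X_0
        acc = 0
        for fswg in self.fswgs:
            feats = fswg(feats) + feats     # jump connections between FSWGs
            acc = acc + feats               # sum over intermediate outputs
        return self.up1(acc) + self.up2(lr) # I_SR
```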
As shown in Figure [2](#Figure 3){reference-type="ref" reference="Figure 3"}, the Wide-Residual Distillation Connection (WRDC) is an important component of the model, which consists of the Wide Identical Residual Weighting (WIRW) unit, the Wide Convolutional Residual Weighting (WCRW) unit, and the distillation jump connection. Both WIRW and WCRW introduce the wide activation mechanism, and can thus distill richer features with fewer parameters. Recently, with the emphasis on the importance of channel attention in RCAN [@12zhang2018image], many SR methods have focused on the attention mechanism. As shown in Figure [3](#Figure 5){reference-type="ref" reference="Figure 5"}, Zhang et al. [@31zhang2021sa] proposed a new attention paradigm, a combination of spatial attention and channel attention called Shuffle Attention (SA). Inspired by this, we introduce SA into our WIRW and WCRW to further enhance their feature extraction abilities. Since the SA mechanism is placed in each wide-residual unit, we set the number of groups $g$ large enough to keep SA lightweight. After the channel splitting operation, the number of input channels of WCRW is only half of the original input. Therefore, compared with WIRW, a $3 \times 3$ convolutional layer is added to the shortcut path of WCRW to increase the number of output channels so that it can match the original input size and achieve the interaction between different features. Meanwhile, WCRW and WIRW both introduce an adaptive weighting operation in the main path and the shortcut path for adaptive feature learning. When an input $I$ is fed into WIRW and WCRW, the outputs ${X_{WIRW}}$ and ${X_{WCRW}}$ can be expressed as $$\begin{equation}
{X_{WIRW}} = {\lambda _{x1}}{F_{SA}}[{F_{CR}}(I)] + {\lambda _{res1}}I,
\end{equation}$$ $$\begin{equation}
{X_{WCRW}} = {\lambda _{x2}}{F_{SA}}[{F_{CR}}(I)] + {\lambda _{res2}}{F_{C3}}(I),
\end{equation}$$ where ${\lambda _{xk}}$ and ${\lambda _{resk}}$ ($k = 1,2$) represent the adaptive multipliers of the $k$-th wide-residual unit branch, ${F_{SA}}$ represents the SA operation, ${F_{CR}}$ represents a series of (conv + relu) operations before the attention mechanism, and ${F_{C3}}$ represents the $3 \times 3$ convolutional layer. The complete structure of WIRW and WCRW can be seen in Figure [2](#Figure 3){reference-type="ref" reference="Figure 3"}.
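As a hedged sketch, a WIRW unit following the first equation above: a wide (expanded-channel) conv path with shuffle attention, combined with the identity shortcut through learnable scalars (`expansion` and the SA placeholder are our assumptions):

```python
import torch
import torch.nn as nn

class WIRW(nn.Module):
    def __init__(self, channels, expansion=4, sa=None):
        super().__init__()
        wide = channels * expansion                          # wide activation
        self.f_cr = nn.Sequential(                           # F_CR: conv + relu
            nn.Conv2d(channels, wide, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(wide, channels, 3, padding=1))
        self.f_sa = sa if sa is not None else nn.Identity()  # F_SA (shuffle attention)
        self.lam_x = nn.Parameter(torch.ones(1))             # lambda_x1
        self.lam_res = nn.Parameter(torch.ones(1))           # lambda_res1

    def forward(self, x):
        return self.lam_x * self.f_sa(self.f_cr(x)) + self.lam_res * x
```

Per the second equation, WCRW differs only in replacing the identity shortcut with a $3 \times 3$ convolution so that the channel counts match after splitting.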
Apart from the above operation, the distillation connection part is applied to segment the channel features through a convolutional layer and a Sigmoid function. The convolutional layer is introduced to expand the dimension of the split channels, while the Sigmoid function non-linearizes the obtained coarse high-frequency features to obtain fine feature maps. Finally, these features are multiplied with the low-frequency attention features obtained after the wide-residual unit refinement process to realize the interaction of features from different scales.

<figure id="Figure 5" data-latex-placement="t">
<img src="Fig/SA.png" style="width:8.2cm" />
<figcaption>The principle of the shuffle attention (SA) mechanism. <span class="math inline"><em>h</em></span>, <span class="math inline"><em>w</em></span>, <span class="math inline"><em>c</em></span>, and <span class="math inline"><em>g</em></span> represent the height, width, number of channels, and number of groups, respectively.</figcaption>
</figure>

Inspired by the lattice block (LB) [@32luo2020latticenet], we design a Wide-Residual Distillation Interaction Block (WDIB) based on WRDC. As shown in Figure [2](#Figure 3){reference-type="ref" reference="Figure 3"}, WDIB utilizes the butterfly structure described in LB to realize the interaction of intermediate features. Let ${W_{ir}}$ and ${W_{cr}}$ denote the WIRW and WCRW units; the first butterfly structure can be expressed as $$\begin{equation}
{X_{remain1}},{X_{distill1}} = Spli{t_1}({W_{ir}}({X_{in}})),
\end{equation}$$ $$\begin{equation}
{U_{1}} = {M_1}\left\langle {{X_{in}}} \right\rangle + {W_{cr}}({X_{in}}),
\end{equation}$$ $$\begin{equation}
{V_{1}} = {N_1}\left\langle {{W_{cr}}({X_{remain1}})} \right\rangle + {X_{in}},
\end{equation}$$ where $Spli{t_i}(\cdot)$ represents the $i$-th channel splitting operation, ${X_{remaini}}$ represents the rough feature fed into the subsequent wide-residual unit, and ${X_{distilli}}$ represents the $i$-th refined feature that is split off and jump-connected to the next butterfly structure. The combination coefficients ${M_{i}}$ and ${N_{i}}$ are the two vectors connecting the upper and lower branches. It is worth noting that $M\left\langle {X_{in}} \right\rangle = M({X_{in}}) \times {X_{in}}$, and the learning details of the combination coefficients ${M_i}$ and ${N_i}$ are provided in Figure [4](#Figure 4){reference-type="ref" reference="Figure 4"}. ${U_{i}}$ and ${V_{i}}$ are the output features after the $i$-th butterfly structure, and they are then fed into the second butterfly structure $$\begin{equation}
{X_{remain2}},{X_{distill2}} = Spli{t_2}({W_{ir}}({V_{1}})),
\end{equation}$$ $$\begin{equation}
{U_2} = {M_2}\left\langle {{W_{cr}}({X_{remain2}})} \right\rangle + {U_{1}},
\end{equation}$$ $$\begin{equation}
{V_2} = {N_2}\left\langle {{U_1}} \right\rangle + {W_{cr}}({X_{remain2}}).
\end{equation}$$

<figure id="Figure 4" data-latex-placement="t">
<img src="Fig/PC.png" style="width:8.2cm" />
<figcaption>The diagram of combination coefficient learning.</figcaption>
</figure>

After that, the output $X_{out}$ can be expressed as $$\begin{equation}
{X_{out1}} = {W_{ir}}({U_2} \times {S_3}({X_{distill1}})),
\end{equation}$$ $$\begin{equation}
{X_{out2}} = {W_{ir}}({V_2} \times {S_3}({X_{distill2}})),
\end{equation}$$ $$\begin{equation}
{X_{out}} = {C_{SCF}}[{X_{out1}},{X_{out2}}] + {X_{in}},
\end{equation}$$ where ${C_{SCF}}$ represents the proposed Self-Calibration Fusion (SCF) module. ${X_{out1}}$ and ${X_{out2}}$ represent the upper branch and the lower branch entering the SCF module, respectively. In addition, ${S_k}$ represents a $k \times k$ convolutional layer followed by a Sigmoid function. The structure of the SCF module is provided in Figure [2](#Figure 3){reference-type="ref" reference="Figure 3"}. The features of the upper and lower branches are first fused, after which the features of different scales from the two branches interact and are fused. The output ${X_{SCF}}$ of the SCF module can be expressed as $$\begin{equation}
{X_{concat}} = {C_{Concat}}[{\lambda _{x3}}{X_{out1}},{\lambda _{x4}}{X_{out2}}],
\end{equation}$$ $$\begin{equation}
{X_{SCF}} = {\lambda _{x4}}{X_{out2}} \times {S_1}({X_{concat}}) + {W_{cr}}({X_{concat}}),
\end{equation}$$ where ${C_{Concat}}$ represents the Concat operation, ${\lambda _{x3}}$ and ${\lambda _{x4}}$ represent the adaptive weights, and ${X_{concat}}$ represents the output after the upper and lower branches are multiplied by the adaptive weights and then concatenated. Subsequently, the fused features are non-linearized, then multiplied with the features of the lower branch, and finally added to the refined fusion features to achieve the interaction of features from different scales. Since there are a large number of adaptive multipliers in the module, the output features can be adjusted and calibrated continuously during training, so it can achieve better performance than the traditional Concat operation.
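A sketch of SCF following the two equations above, with `wcr` standing in for the $W_{cr}$ unit (which must map $2C$ to $C$ channels here) and `s1` for $S_1$; this is an assumed rendering, not the released code:

```python
import torch
import torch.nn as nn

class SCF(nn.Module):
    def __init__(self, channels, wcr):
        super().__init__()
        self.wcr = wcr                            # W_cr applied to X_concat (2C -> C)
        self.s1 = nn.Sequential(                  # S_1: 1x1 conv + sigmoid
            nn.Conv2d(2 * channels, channels, 1), nn.Sigmoid())
        self.lam3 = nn.Parameter(torch.ones(1))   # lambda_x3
        self.lam4 = nn.Parameter(torch.ones(1))   # lambda_x4

    def forward(self, x_out1, x_out2):
        x_concat = torch.cat([self.lam3 * x_out1, self.lam4 * x_out2], dim=1)
        return self.lam4 * x_out2 * self.s1(x_concat) + self.wcr(x_concat)
```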
As shown in Figure [1](#Figure 2){reference-type="ref" reference="Figure 2"}, the Feature Shuffle Weighted Group (FSWG) consists of three interactional WDIBs and serves as the basic component of FDIWN. Specifically, we fuse and shuffle the features extracted by the WDIBs one by one (a sketch of this operation is given after this paragraph). The cascaded operation ${F_{CGS}}$ can be expressed as $$\begin{equation}
{F_{CGS}} = {F_{Shuffle}}({F_{GConv}}({C_{Concat}}[{x_i},{x_{i + 1}}])),
\end{equation}$$ where ${F_{Shuffle}}$ represents the channel shuffle operation and ${F_{GConv}}$ represents the group convolution operation. ${x_i}$ and ${x_{i + 1}}$ represent the two features to be merged. After that, we add the shuffled and fused features to the original features that have not been operated on, to achieve the interaction of feature information. Meanwhile, we set a larger group count in the shuffle and fusion operation to reduce the parameter burden. Moreover, to reduce redundant information, the primary features and the features after the information interaction are self-adaptively fused to distill the desired important features. Let the input of FSWG be ${W_0}$; the output ${W_{out}}$ can be formulated as $$\begin{equation}
{W_{CGS}} = {F^2}_{CGS}({F^1}_{CGS}({W_1},{W_2}),{W_3}),
\end{equation}$$ $$\begin{equation}
{W_{out}} = {\lambda _x}({W_{CGS}} + {W_3}) + {\lambda _{res}}{W_{ir}}({W_0}),
\end{equation}$$ where ${W_i}$ represents the output of the $i$-th WDIB, ${F^i}_{CGS}$ represents the $i$-th ${F_{CGS}}$ operation, and ${W_{CGS}}$ represents the extracted features after a series of shuffle and fusion operations. In addition, ${\lambda _x}$ and ${\lambda _{res}}$ are used to adaptively adjust the weight of each channel. After these operations, the features from each WDIB interact adequately and are distilled to achieve better SR image reconstruction.
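The fuse-and-shuffle step $F_{CGS}$ can be sketched as follows (the group count and kernel size are assumptions, and the shuffle is the standard ShuffleNet-style permutation; channel counts must be divisible by the group count):

```python
import torch
import torch.nn as nn

def channel_shuffle(x, groups):
    # permute channels across groups so group convolutions can exchange info
    b, c, h, w = x.shape
    return (x.view(b, groups, c // groups, h, w)
             .transpose(1, 2).reshape(b, c, h, w))

class FCGS(nn.Module):
    def __init__(self, channels, groups=8):
        super().__init__()
        # group conv over the concatenated pair of WDIB outputs
        self.gconv = nn.Conv2d(2 * channels, channels, 3, padding=1, groups=groups)
        self.groups = groups

    def forward(self, x_i, x_j):
        fused = self.gconv(torch.cat([x_i, x_j], dim=1))  # F_GConv(C_Concat[...])
        return channel_shuffle(fused, self.groups)        # F_Shuffle
```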
|
2203.04439/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2021-10-04T21:58:43.376Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.63 Safari/537.36" version="14.8.6" etag="vyYG8l5zmGTBYg2BUiPr" type="google"><diagram id="ofk73OPBQrVehdBoPvh1">7Z1dc5s4FIZ/DZfd4UtgXzZfuxftTDvZmd1cdWSQbXYxymA5dvbXrwAJkEUmNMbS1D5pO7UOIEBHj454dUyc4HZz+L3Ez+uvNCW547vpwQnuHN/34jDi/1WWV2GJorCxrMosFbbO8Jj9R4TRFdZdlpKtsiOjNGfZs2pMaFGQhCk2XJZ0r+62pLl61me8IprhMcG5bv0rS9m6sc6Q29n/INlqLc/suWLLBsudRRXbNU7pvjHV+wT3TnBbUsqaT5vDLcmr1pPt0lT08MbW9sJKUrAxB/jiMtirvDeS8lsVRVqyNV3RAuf3nfWmpLsiJVUFLi91+3yh9JkbPW78hzD2KvyGd4xy05ptcrGVHDL2d3X4b0iUnkRl1ee7Q7/wKgsFK197B1XFJ1lfVegOq0vyuOb+qptS253uykSYAtGDcLkiotGQ3o5e6x3erwndEH4WvktJcsyyF7V2LPrXqt2vcwH/ILww7BFxNS8434lKN/hQXQvv4JqzVFfs1xkjj8+4vrE9Z09t9pIyfqm04MV5tXuOFyT/RreZMCb8XknJN7yQkmW8t3852mGTpWndA3CerQaP+Cw2tHtu+dVkxerPumd8CrllmeX5Lc1pWd9BkCIySyv7lpX0X9LbMvMXQRS1LqxOQQ7OW938DfeIAz55c0GgGHLCWJT3PX6Fad1DV+BzkkdDYEwC1WfMG+/FySFDGmTBIWgCxovj88PdSPfaBcCGyWyZDMEWJTOyWE4FG7IHW6R5VqevSD9Xk4CqTXO83WbJETofxOAjyI1AJ9bRmY92Sa/F0UCLS9towMQZvtGMn7jn8Uj1eBQfubK5IXFYfz7ybk0BUmtq2kGrqe4X7Y2P6irxQFcJq7/otpqnccKc+ObBie9+bMWGyxsRlksSJYMjQhrPF6470Yjgx/ZGhBmEXzlg9MeQwF70nb8bfQN91P71WTMUff3AHmvyAf2qYZMTW2WyaxE3eT3wSDntI6VnkTPQbTqmFM4sKjeeLt0cxzVvYDZyAcCZCWwW55AeaDgdWwpvsUXedBUH4trpmNmcPkaAWYuUgtl4xWd6zHSd5Dis+Uh33AXwZiSs2ZxFgjTSodXH7Se8OD1uujoCUe1kyiyuSPigiZyHqOEVhOMFhHB25MPpFhB8XVg5joxo4En9Apg1EhktLtn716qvGOBz9g6fY5cKfdcc6AF0hzN1hxbXk7tDaK47hNq4364cL3DpxDfYie9gzfik0d/mg9FAftBDLd+6l+lQVP0ZDOf1T3UELVjP3vxM9ABsUXAaSBeqHe0fEPh5aj9b1O9lrs/VBXDlGUzIPUoWiDvajZMrHfJ6QOk4a6ozmhvkbA6ctUwpnEX2OJPXA9nORrKdTfIW6AqUDuCvle4sU/8UesLRXrGU8BzPP/jYqteEzpbwLJsWMp7NZzwbHRWuVa5SxpFQH0fkjNNKFNZFI8h6Pl/Ws1HedLHgCnmLBuK2Td50aQeeLieY7XoWOQMVp2Oqz1loUcUJdBUHsp7Pl/VslDdQczq2FN58e7yFupoDce10zCxOH0NPc9sVYjYg+4TjZZ/pMdO1Esh6PlvWs1HcQB3p0FJws7hGEerqCES1kymzuDIRgiZyHqLGZT1H7pEPp1tFCHVhBbKez5b1bJRZ0FeqvFE9MiKLb28IdX0FchFN5SIapQ/Ulm7GotBn8Z0OSFdbHi43wtmFz+YSHgINpgtzCnwWlxaQzhg4pbZZfOsG0t9ywyf+CWak4P8ucVCcJWQ4YWgxQyGaKGHoODve6OAHb7fpZhl9ziKbg99b30G6RMTszjtsogdvvOnCmRLiLL7xBulvvIFJv6FJf2QSPnj/TQeaEvcsvv9GXg/EPdNxzyR6ka6rNN84+P7Dg68enOTWwOLauhw4Btzqg1sndatRWu0IMTLotYUnNQJ+POhFQ6KKxUWeSBdVGlwwUDPt97DOiA0vdr+4rVlY737/XXD/Pw==</diagram></mxfile>
2203.04439/main_diagram/main_diagram.pdf
ADDED
Binary file (18.5 kB).

2203.04439/paper_text/intro_method.md
ADDED
@@ -0,0 +1,48 @@
# Introduction
A key challenge in reinforcement learning is to improve sample efficiency -- that is, to reduce the amount of environmental interaction that an agent must take in order to learn a good policy. This is particularly important in robotics applications, where gaining experience potentially means interacting with a physical environment. One way of improving sample efficiency is to create "artificial" experiences through data augmentation. This is typically done in visual state spaces, where an affine transformation (e.g., translation or rotation of the image) is applied to the states experienced during a transition [@rad; @drq]. These approaches implicitly assume that the transition and reward dynamics of the environment are invariant to affine transformations of the visual state. In fact, some approaches explicitly use a contrastive loss term to induce the agent to learn translation-invariant feature representations [@curl; @ferm].

Recent work in geometric deep learning suggests that it may be possible to learn transformation-invariant policies and value functions in a different way, using equivariant neural networks [@g_conv; @steerable_cnns]. The key idea is to structure the model architecture such that it is constrained to represent only functions with the desired invariance properties. In principle, this approach aims at exactly the same thing as the data augmentation approaches described above -- both seek to improve sample efficiency by introducing an inductive bias. However, the equivariance approach achieves this more directly, by modifying the model architecture rather than the training data. Since with data augmentation the model must learn equivariance in addition to the task itself, more training time and greater model capacity are often required. Even then, data augmentation results only in approximate equivariance, whereas equivariant neural networks guarantee it and often generalize more strongly as well [@wang2020incorporating]. While equivariant architectures have recently been applied to reinforcement learning [@van2020plannable; @van2020mdp; @mondal2020group], this has been done only in toy settings (grid worlds, etc.) where the model is equivariant over small finite groups, and the advantages of this approach over standard methods are less clear.

This paper explores the application of equivariant methods to more realistic problems in robotics such as object manipulation. We make several contributions. First, we define and analyze an important class of MDPs that we call *group-invariant MDPs*. Second, we introduce a new variation of the Equivariant DQN [@mondal2020group], and we further introduce equivariant variations of SAC [@sac] and learning from demonstration (LfD). Finally, we show that our methods convincingly outperform recent competitive data augmentation approaches [@rad; @drq; @curl; @ferm]. Our Equivariant SAC method, in particular, outperforms these baselines so dramatically (Figure [4](#fig:exp_equi_sac){reference-type="ref" reference="fig:exp_equi_sac"}) that it could make reinforcement learning feasible for a much larger class of robotics problems than is currently the case. Supplementary video and code are available at <https://pointw.github.io/equi_rl_page/>.

# Method

:::: wrapfigure
r0.3

::: center
{width="30%"}
:::
::::

In DQN, we assume we have a discrete action space, and we learn the parameters of a $Q$-network that maps from the state onto action values. Given a $G$-invariant MDP, Proposition [1](#prop:qstar_pistar){reference-type="ref" reference="prop:qstar_pistar"} tells us that the optimal $Q$-function is $G$-invariant. Therefore, we encode the $Q$-function using an equivariant neural network that is constrained to represent only $G$-invariant $Q$-functions. First, in order to use DQN, we need to discretize the action space. Let $\mathcal{A}_\mathrm{equiv}\subset A_\mathrm{equiv}$ and $\mathcal{A}_\mathrm{inv}\subset A_\mathrm{inv}$ be discrete subsets of the full equivariant and invariant action spaces, respectively. Next, we define a function $\mathcal{F}_a : \mathcal{A}_\mathrm{equiv}\rightarrow \mathbb{R}^{\mathcal{A}_\mathrm{inv}}$ from the equivariant action variables in $\mathcal{A}_\mathrm{equiv}$ to the $Q$ values of the invariant action variables in $\mathcal{A}_\mathrm{inv}$. For example, in the robotic manipulation domain described in Section [4.2](#sect:visuo_motor_control){reference-type="ref" reference="sect:visuo_motor_control"}, we have $A_\mathrm{equiv}= A_{xy}$ and $A_\mathrm{inv}= A_\lambda \times A_z \times A_{\theta}$ and $\rho_\mathrm{equiv}= \rho_1$, and we define $\mathcal{A}_\mathrm{equiv}$ and $\mathcal{A}_\mathrm{inv}$ accordingly. We now encode the $Q$ network $q$ as a stack of equivariant layers that each encode the equivariant constraint of Equation [\[eqn:equiv_layer\]](#eqn:equiv_layer){reference-type="ref" reference="eqn:equiv_layer"}. Since the composition of equivariant layers is equivariant, $q$ satisfies: $$\begin{equation}
q(g \mathcal{F}_s) = g(q(\mathcal{F}_s)) = g \mathcal{F}_a,
\label{eqn:equivdqn}
\end{equation}$$ where we have substituted $\mathcal{F}_{\mathrm{in}} = \mathcal{F}_{s}$ and $\mathcal{F}_{\mathrm{out}} = \mathcal{F}_{a}$. In the above, the rotation operator $g\in C_n$ is applied using Equation [\[eqn:rot_act_image\]](#eqn:rot_act_image){reference-type="ref" reference="eqn:rot_act_image"} as $g \mathcal{F}_a (a_{xy}) = \rho_0(g) \mathcal{F}_a (\rho_1(g)^{-1} (a_{xy}))$. Figure [\[fig:equi_dqn\]](#fig:equi_dqn){reference-type="ref" reference="fig:equi_dqn"} illustrates this equivariance constraint for the robotic manipulation example with $|\mathcal{A}_{\mathrm{equiv}}|=|\mathcal{A}_{xy}|=9$. When the state (represented as an image on the left) is rotated by 90 degrees, the values associated with the action variables in $\mathcal{A}_{xy}$ are rotated in the same way. The detailed network architecture is shown in Appendix [11.1](#appendix:network_equi_dqn){reference-type="ref" reference="appendix:network_equi_dqn"}. Our architecture differs from that in [@mondal2020group] in that we associate the action of $g$ on $\mathcal{A}_\mathrm{equiv}$ and $\mathcal{A}_\mathrm{inv}$ with the group action on the spatial dimension and the channel dimension of a feature map $\mathcal{F}_a$, which is more efficient than learning such a mapping using FC layers.
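As a sanity check, the constraint can be verified numerically for $C_4$: rotating the input image must rotate the spatial grid of $Q$-values over $\mathcal{A}_{xy}$ in the same way (`q_net` here is any hypothetical map from an image array to a 3x3 value grid, not the paper's network):

```python
import numpy as np

def check_c4_equivariance(q_net, state, atol=1e-5):
    for k in range(4):  # the four rotations in C4
        q_of_rotated = q_net(np.rot90(state, k, axes=(-2, -1)).copy())
        rotated_q = np.rot90(q_net(state), k, axes=(-2, -1))
        assert np.allclose(q_of_rotated, rotated_q, atol=atol)
```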
:::: wrapfigure
r0.4

::: center
{width="40%"}
:::
::::

In SAC, we assume the action space is continuous. We learn the parameters for two networks: a policy network $\Pi$ (the actor) and an action-value network $Q$ (the critic) [@sac]. The critic $Q : S \times A \rightarrow \mathbb{R}$ approximates $Q$ values in the typical way. However, the actor $\Pi : S \rightarrow A \times A_\sigma$ estimates both the mean and the standard deviation of the action for a given state. Here, we define $A_\sigma = \mathbb{R}^k$ to be the domain of the standard deviation variables over the $k$-dimensional action space defined in Section [4.2](#sect:visuo_motor_control){reference-type="ref" reference="sect:visuo_motor_control"}. Since Proposition [1](#prop:qstar_pistar){reference-type="ref" reference="prop:qstar_pistar"} tells us that the optimal $Q$ is invariant and the optimal policy is equivariant, we must model $Q$ as an invariant network and $\Pi$ as an equivariant network.

[Policy network:]{.underline} First, consider the equivariant constraint of the policy network. As before, the state is encoded by the function $\mathcal{F}_s$. However, we must now express the action as a vector over $\bar{A} = A \times A_\sigma$. Factoring $A$ into its equivariant and invariant components, we have $\bar{A} = A_\mathrm{equiv}\times A_\mathrm{inv}\times A_\sigma$. In order to identify the equivariance relation for $\bar{A}$, we must define how the group operator $g \in G$ acts on $a_\sigma \in A_\sigma$. Here, we make the simplifying assumption that $a_\sigma$ is invariant to the group operator. This choice makes sense in robotics domains, where we would expect the variance of our policy to be invariant to the choice of reference frame. As a result, we have that the group element $g \in G$ acts on $\bar{a} \in \bar{A}$ via: $$\begin{equation}
g \bar{a} = g (a_\mathrm{equiv}, a_\mathrm{inv}, a_\sigma) = (\rho_\mathrm{equiv}(g) a_\mathrm{equiv}, a_\mathrm{inv}, a_\sigma).
\end{equation}$$ We can now define the actor network $\pi$ to be a mapping $\mathcal{F}_s \mapsto \bar{a}$ (Figure [\[fig:equi_actor_critic\]](#fig:equi_actor_critic){reference-type="ref" reference="fig:equi_actor_critic"} top) that satisfies the following equivariance constraint (Equation [\[eqn:equiv_layer\]](#eqn:equiv_layer){reference-type="ref" reference="eqn:equiv_layer"}): $$\begin{equation}
\pi(g\mathcal{F}_s) = g(\pi(\mathcal{F}_s)) = g \bar{a}.
\end{equation}$$

[Critic network:]{.underline} The critic network takes both the state and the action as input and maps onto a real value. We define two equivariant networks: a state encoder $e$ and a $Q$ network $q$. The equivariant state encoder, $e$, maps the input state $\mathcal{F}_s$ onto a regular representation $\bar{s} \in (\mathbb{R}^n)^\alpha$ where each of the $n$ group elements is associated with an $\alpha$-vector. Since $\bar{s}$ has a regular representation, we have $g\bar{s} = \rho_{\mathrm{reg}}(g) \bar{s}$. Writing the equivariance constraint of Equation [\[eqn:equiv_layer\]](#eqn:equiv_layer){reference-type="ref" reference="eqn:equiv_layer"} for $e$, we have that $e$ must satisfy $e(g \mathcal{F}_s) = g e(\mathcal{F}_s) = g \bar{s}$. The output state representation $\bar{s}$ is concatenated with the action $a \in A$, producing $w = (\bar{s}, a)$. The action of the group operator is now $gw = (g\bar{s}, ga)$ where $ga = (\rho_\mathrm{equiv}(g)a_\mathrm{equiv}, a_\mathrm{inv})$. Finally, the $q$ network maps from $w$ onto $\mathbb{R}$, a real-valued estimate of the $Q$ value for $w$. Based on Proposition [1](#prop:qstar_pistar){reference-type="ref" reference="prop:qstar_pistar"}, this network must be invariant to the group action: $q(gw) = q(w)$. All together, the critic satisfies the following invariance equation: $$\begin{equation}
q(e(g \mathcal{F}_s),ga) = q(e(\mathcal{F}_s),a).
\end{equation}$$ This network is illustrated at the bottom of Figure [\[fig:equi_actor_critic\]](#fig:equi_actor_critic){reference-type="ref" reference="fig:equi_actor_critic"}. For the robotic manipulation domain in Section [4.2](#sect:visuo_motor_control){reference-type="ref" reference="sect:visuo_motor_control"}, we have $A_\mathrm{equiv}= A_{xy}$ and $A_\mathrm{inv}= A_\lambda \times A_z \times A_{\theta}$ and $\rho_\mathrm{equiv}= \rho_1$. The detailed network architecture is in Appendix [11.2](#appendix:network_equi_sac){reference-type="ref" reference="appendix:network_equi_sac"}.

[Preventing the critic from becoming overconstrained]{.underline}: In the model architecture above, the hidden layer of $q$ is represented using a vector in the regular representation and the output of $q$ is encoded using the trivial representation. However, Schur's Lemma (see e.g. @schur) implies there exists only a one-dimensional space of linear mappings from a regular representation to a trivial representation (i.e., $x=a\sum_i v_i$ where $x$ is a trivial representation, $a$ is a constant, and $v$ is a regular representation). This implies that a linear mapping $f:\mathbb{R}^n\times \mathbb{R}^n \to \mathbb{R}$ from two regular representations to a trivial representation that satisfies $f(gv, gw) = f(v, w)$ for all $g \in G$ will also satisfy $f(g_1v, w) = f(v, w)$ and $f(v,g_2 w) = f(v, w)$ for all $g_1,g_2 \in G$. (See details in Appendix [9](#appendix:overconstrain){reference-type="ref" reference="appendix:overconstrain"}.) In principle, this could overconstrain the last layer of $q$ to encode additional, undesired symmetries. To avoid this problem we use a *non-linear* equivariant mapping, $\mathrm{maxpool}$, over the group space to transform the regular representation into the trivial representation.
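A sketch of that final non-linear map: max-pooling over the group axis sends a regular representation to a trivial (invariant) one without the linear-map degeneracy just described (tensor layout is our assumption):

```python
import torch

def group_maxpool(v):
    # v: (..., n, alpha), one alpha-vector per group element; the max over the
    # group axis is unchanged by any permutation of group elements, so the
    # output transforms trivially (i.e., is invariant)
    return v.max(dim=-2).values
```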

Many of the problems we want to address cannot be solved without guiding the agent's exploration somehow. In order to evaluate our algorithms in this context, we introduce the following simple strategy for learning from demonstration with SAC. First, prior to training, we pre-populate the replay buffer with a set of expert demonstrations generated using a hand-coded planner. Second, we introduce the following L2 term into the SAC actor's loss function: $$\begin{equation}
\mathcal{L}_\mathrm{actor} = \mathcal{L}_\mathrm{SAC} + \mathbbm{1}_{e} \left[ \frac{1}{2}((a\sim \pi(s)) - a_e)^2\right],
\end{equation}$$ where $\mathcal{L}_\mathrm{SAC}$ is the actor's loss term in standard SAC, $\mathbbm{1}_{e}=1$ if the sampled transition is an expert demonstration and 0 otherwise, $a\sim \pi(s)$ is an action sampled from the output Gaussian distribution of $\pi (s)$, and $a_e$ is the expert action. Since both the sampled action $a\sim \pi(s)$ and the expert action $a_e$ transform equivalently, $\mathcal{L}_{\mathrm{actor}}$ is compatible with the equivariance we introduce in Section [5.2](#sec:equi_sac){reference-type="ref" reference="sec:equi_sac"}. We refer to this method as SACfD (SAC from Demonstration).
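
A minimal PyTorch sketch of this combined loss follows (our illustration; the `sac_actor_loss` callable, the batch fields, and the policy interface are assumptions, not the authors' code):

```python
import torch

def sacfd_actor_loss(policy, batch, sac_actor_loss):
    """SACfD actor loss: the standard SAC term plus a behavior-cloning
    L2 term applied only to transitions drawn from expert demonstrations."""
    loss_sac = sac_actor_loss(policy, batch)     # usual SAC actor objective
    sampled, _ = policy.sample(batch.state)      # a ~ pi(s), reparameterized
    bc = 0.5 * ((sampled - batch.expert_action) ** 2).sum(dim=-1)
    # batch.expert is a (B,) float mask: 1.0 on demonstration transitions.
    return loss_sac + (batch.expert * bc).mean()
```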
2203.06486/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2021-11-11T01:21:31.760Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.54 Safari/537.36" etag="ZXYotZl2oOc5rJrphdYf" version="15.5.9"><diagram id="ZOIOdaoEDVjy3Gt83011" name="Page-1">7HzHluNIluXX5LLrQIslBAGQhCIIQu2gBaE18PVjRvfIjBQ11VWnenoW5ccjnJAm3rP77hPGX3Ch2eUx7AutS9L6FwxJ9l9w8RcMQwkM+wX+IsnxdYZGyK8T+Vgm3zf9duJZnun3SeT77FIm6fS7G+euq+ey//3JuGvbNJ5/dy4cx277/W1ZV/++1T7M0z+deMZh/eezbpnMxddZhkR+O6+kZV78aBlFvq804Y+bv09MRZh020+n8MsvuDB23fz1qdmFtIaT92Nevp6T/s7VXzs2pu3833kAn1H+QTKJPig9gpQrvfXOf31LZw3r5XvAv2BUDd7HR+BDDj/8OJF1oB0wjPn4nhtqWLofF/5r+kiOAzegRL//dvHHW+7p8V99V7bzBJUhnYGsyq798W7Q66/X/75JcPqnbmC/ax2b0x2eL+amBidQ8HGax+6dCl3djeBM27Up7F1Z1384FdZl3oLDGMxcCs7zazrOJZA5932hKZMENsNvRTmnzz6MYZsb0HBwbuyWNknhpCK/dgu+IN3/rmDQX8UN1knaNek8HuCW/Ved+Xrke4mg7Pfx9pvCoRT2N+p75RQ/6Rv+fWv4reb5r2//TRPAh29l+CcUA/8LxfiDAIBC9/Bj2XzW0K+zqIZRWpvdVH4kjItRN89dA26o4QU+jN/5Zw5/SOUXDM8+P38hibmDUx5O/dfazsodzjz/aZL7cRb5cQZ8TsI5BHr4dYhJfZv/ggmlwxvWhtzlvOPAj/58FZdXDj6NE/hPXgXOB3/56OSxJ7yB8/SnhVzBZSKmHvCE1T5eKM9xwl5tK+M/XvDkLb4UQbxxnDjBm+gLF7xGsQDvFLz0+bJ4RylSOkHnSjq91ALd2Yxt6yQZrfm7/b7x1/v8Iq/u23dm2g9LxCq57SWAf8Etez/5y2vWqkDSiYjeCn7Yyvv99pC0e8FPJfK8UYEjIq+LvZhNbghRyL8voIk7V191XPG1fCvf0UNfrm6CClEvnU7n39z4bM2m7TF6orVV0WjtrPDzF4zfLx5JknTxXmex9fATS7MUyJtvMVTjuQun6nd9zGjQgA1WrWQhBJveA32Z59mhwQw83MlEsf3gCwfhODDviWIKnTZEQgTmQ7WF1rMz6z2LF8XnOaESYr40lohTuYtIjymZP69WAmQQhq0tRBbhJPmFC1UXTfE3jTr2MuErwfA9SvJB6PYjUW8xl4sXxMcAbEgrCSAZJ1cdaL6kIN5peoED3ifc9cS705W29qVE+uYNXD4luz4RqZY2PjeHauJyntvQx7iwC2YGrRwbIpe7I1w/4MU4Ot4HQxHrJ416DpUYLUXDiWGTMLR7hHIcjIxLb1bOGRv1AQ3RLAQqzRtL8BzgzJxBcN7p2hbpxTAzvJc2DSkjg06WM5lJhkoGikxXp0tMr0c22m1cdd4JJEJ3XjcdPV28cDba6iCV66l47vSEM0NOc+jb2TvmJ9n3YlUpO+thG8NQSM2DsItGy3bQQ7m1WTiIecAjAEJSWDHj1FYoPeSTpw42uchrtbTV3s1Sp/ODbg36zi/6vSOW5xlqZ+jq3VzxFz3ztMY2ndLSWqTaYhOnGKO1u5feZyGGEeiTD5X0hhp4ttimcQYMmIIGwnpx7jU6Ys1es1Cdhjs4+UqBZE7mcgKUkd6hMwNkkwJ/JI45xvKpWGcukiTd9kIerC5f6qj2JUUPoiIdYaGX5xjSN0PXwgE9bTrwZEQPKbx+NhvmmdLMroOxsAHSk15S4U/Q6pVACyizJ0QEjrOTdHXHdMToZEXFO5e08wquzvZzccDnPllrAr8A+oCcVNLoJbMf7hXbsWccPeyqib2DnV3FSvbJIXZhvsRASXHFA9rbFeA9lyGCKjnki66lrQ8UjM5ez8swl7IQ3ekhWJZZfzmOPt0G+Q6WFF+Ni4u7dc2y7XC670vIGY4XCnrTewlyhaszZ8Vo3XTRfDXUhNUVuFhIRxlM3IIgabnJsEFSjTHFLd7M44Fw+fWqCb27A43nN4HT+XsHPlHgH8KfImjcfw6Czj1LS8/DvPKjSNZu1Pg4EwxdN151MSrbwwB/i5wsQH0PFwpqkBCRs9HRtyB31AqOszcd7xg+/XWUkNenwcHjIjkfOY0Z4Do06xJNjg74kzjJW+LunH/x+DOJ0nN8mmD9iQ+znJbHsOILizkmLd+XSKCLpghknQA4W5jXcz5TIr14AGjdpn4gMTYLWIMpyka3Li6oAM0Dv+KwDQ5yyjxxpCG2CHshATy6dlsop9cxgQB5Nb7mI2fpGFepRzCt7AIeo2UgQe7y8L1B9BAxb8DqFZBzEJ4YdW5jdwDQBHbiwlt1PYVUDYDsIeTOCwCNY5qPEkkv5GEKNMA7/lkp+FWRqPdtARo8S2dFjHYTldAGCNDqqNZzIpGCVNpzUWFPeV4GCrm+8M9bdyfENPE5LjOeqtjF466iK2Nkf/CSj3EvrlNcecPQrGP66IJDRC2MWeQuvgVQUbhqDj5X6vIZJxjZcE7+pYFvEVy5R7cGSaWvmcSR3R35CwsOhNKfuIl7qPJnlsvSg8j4OeSufPmF41bY06Zse/4oksAG3gXUmtljVOrXp1cm7i66u3zew8syttflpxeSENsr/vDl75F4qIUtfco338cbVC0j8+9fs1VWzbxGkR+lofx6QLy7azkw5JIWpp93C4u+rLMQb+dH8mqUHvVTGfHVv+hwKPN9pdrARrgvzRgjtbyOdg2hJ1w/UujIxfxc29MCi1lxrpI1rh4XMGQ/dBJ8ThoIV5q/kJ8WjVmyvNTBS6kAEuScon8lXMwuAM5j8DpxTlT/I+tauAjRCVvYZ99cmesCbIB0ab+sXndDinkcnM5Lt/VM+GKDKwTwUsxF+xB8ZsBZ03V0kYxWu9E5Gd0oNpypNZprJ5455qBJOgA3qj5CJqdT0YYQo4BU88VkhdFRgvVyR/DlIsydl3QDQJAWz3cxbtMjY/YX+QLP8qSTdJfHsCBcpD8X+rNWNcYbdE9Jc8kPoBbJ1wkn5UIZ4jCpNRTakZOkhlfbUvuCq4WlQCyh48W+MhVYQ5HbvIB5VAFjlxZsdEY2AajEx0kMV/CaI+NyDP0idcztnNLWm2uWCl2PGjFeuE2KuFOT3UOz4QFyEQuiDgYleW4zgZei2Yyr73NNaX9yTecItJYivR0F8x2qYFHzh4FMsIe8oadnQ+jdS0qI0aHIF2SBwCZlJjQ/4HPaptaeRRjug+PMbAccp9nBY71oxhxvxej7e/HU8kEuxsI9xIfVGQPdjUmYZZOVAxLIZh
M5YRHlFfZpmETarOI1F1dMlat2dbIaMfdFnnGItKpUImGFJOWdSe71E2uYzGwOxM2RuemncPTPQtxXpYfgjVvV0T6OUodOzwXY2d69hCZ9e58Be2UMnGZSxesAh0VWNggdWY6agjlbEaX1BVV0r4rnZ4D7G38nG0J78Gu9nBcGoYGDIFW+5mMMO4cBsb6M5dahxkxfwq0S0Et4xXIZMBvi1Q/jGC7zCWCgGU6iWNZjkzBcBOtcHxosSrdpGCM6w8JwLoFVBJiASnqV9hOvzU2L7bEGKLSh0ik2it1CAOPOYztXLHM7AkPAEyFYCbOuH9NbMLcTWKCpPvfKNXZ0tFO31mOOTwr9sX8xL0UBlAPRHp//+BmV90q+2H7WBCpDXdwCMLeztKe5lzeBtDSM80LqQgvR204006Np7sptwqahOP8TEoL3PtT1Xuuzf5Q8tHyW3vm5GIR47G/H7bRacrYjC5gb1MF09kEiaAvZim9iqxe1UxAVwL5Zd00Ys+RBAbshOcJAP/ilp7nbPEtBDSTI43jSGoPiWJgXcjLguPy3HarpO1G4RSjrQHext2fSNf6cNmDFnQinZYiDAbCC1gY8VYzX/Yi1SK0mScBlOdceMkAjMTBb9b758yWwkq7DGwDJZO0Fr1t10ihcQIM5A5IHPkXp++W6nuu0eBxL6M2Lc3KhTmEDYJT4Hb4BhiofwWBEjZjyixjF5y6dHwtk3DBqfX3ZuOQVZMCto2rk65j1IzEiKHaDaCPgLngeS/ql0NEu+IyT6w5Bhbzlr9jHIX4QVVvmSPauyxizDpBVflWRgR4TSXl+Xb8v08IVcwrxxBrYSV3WhvhcEWUKt3YETpCGJZgG7ZBk1Q6YNzwK4738ljf1SqQEAOD1Y5tuL1lKUL7CiDfO74xBovOn/7J73fTJXEYFhWYIdBfMzuRbH+9Bk4u2vOPZUYofnvLszBSHTgb/liMVLtV12YEVefAXWaJylOWIL4uvUfl7XibmTmnz8sUsXkEQyUtvfXojObLjOfaJXGelMBjPwGf6saenhdLL2t/lHhJSvN/lXAyJJJ3dJFuxY0qGc1axlA5bJpXbtedlbGkdEmLbc6/gjLAvP9RJNUuYYccyiI2vnoZzOL4QczZYE3BhwzgaYMuc6oXW0JrVzRy3ehTTvpPWCUb36lWdb8pd7bUtajxc3eee1GR8Hde0Q7CvdTR3Y5icw7EQ+V6F41LJTRjeDzh7jwdssLl2LOQjzb0B1tbX6ZnIcRPc2RlHNGb4k80BFxTRoprPeHu4H4/vhls1WHPDuKEfLmPcLgJY/18c4LbNdtZ8pGZg+NJpxcUDdlUue+TR4Kljf7QTBdo68j71dd/r4y/db+Hv5HpLISuBh6ANKRoTPSQOl6476wq9VZeSFkP3WWxp1qyR9wnaaQCpIR0FWA3xGroSJsI6HrIWE5v6CDfa/Ogs+DAveMSmJ5YA3stDIuAj5nq+JPZ2NWhoFiRphU6JFuL4u6qsVYVe9YKu2YsJwxC2s0IUuMtVRuNvyFhjyE+legraZY1xGjhzJ4nsHtmGaWuYwuZzHQ+A4bHtLdbXLAnVoLlG73cMfXXNMRxzSwFD5y/3cdbyaVyIfvSOhljxJ5FVlG5RqBf8vNKfYutM5qzaJ0Wu3H/TystlRmRC5H+0EE4YDfzjzJ4bYGbBMl88E0U82jeVagM+HR0bPNc02TjanrSzCxljCYVHUrVTg32DZsu9Qz96hVZY79AhWvq3fLVTG06GZwPcHusvVWzz27ooSmW1Xn30zkCzgDNF5JcdAb8EUoEbzXF8IHQGjklmdiEtWEcoxpBNF9a8rD9b/aQzNxmfbTFaSCcVH90L2ONrGOlQ+Zuj6f8l298edXe89gx6G5gYQbm/2SHie5QlvEa+B5ROBMwN0BcrKdkUTwf0BwegOjAn1Hv2REiRzMur1PQwPN9TsMslgb9I5cYArYvmnWR2uh5sYP25m6fhfBnxuJW76HnG1VOKaIrQ8GQ6l2IWZ8GJV69D6RdwzKLy2QzLTFGIXhJiOJoVgNk7p3ixDvg/RqEJtCjIBT8Fe7Le7KRhG5wNzK7yUfah1VZQYAAkHMMRnAg7qUroLE4SbuMhdx/Q7oVWqQJdIMTs1CV+r6IOWMDc4COOX4Eru1odktlZEc28AHxlVZ+v4YGjhT1fLMl90Bp69Q+6buqKuwxN5Us3RAu3BegrGNnhnKYgrr4ikg64DTZzDlY7E8B9Cvn3VlI28PyWsX1fF+oS+B/n1IS62i6i/Vly/U3egDW5nPZh9jPNcjl9CbKNRQGH57Yr/V51AO8THuM8yxoBulSVe3n6kQdRVn1LHWeH8pV7hq6eX8bHrt5JMnS168V1zpbDxBiFqtuPy1ClF1OepxPORntl+lCgpT7abBh1GkeDgTqoDtOw7NIxfSl5Ft1E3UllIYfq08X6ykoU5MgmZPNX9JX070ud4JIOMQT44LJf9LRBTf3C67tnQk3fzweQBycCwDBCOhVcf5c2nsaYGcmyKpUta6d6czWCk32ckPVEhKaifE8VaJB1ehiZq6lp4E04W0AUpBfgivAji/jddEWLCZgwTsbev0pGH8nXtaGF8BTGkVPn67Adx+O6EPPYTvryNFLcA/BDHGYk38lsBf7FgjeqAXy482qiwAXvIn5+pZdUxl8sQxSmU5w00iVP6iKbGjC7Ob46Ir1w5RxIBcuSNE3hy/KZMfg782x1chfOFwl9GAGNbxc+wQCLzHAUJZNIwu7ap7Wb3hiIeYWLK9rfroJnKUt7EGgE4WPusxnx/Q1YFNNYkWiNMCSUcbdISoVOVsN1ecACGoEZ3b0E/iO60Gx2BUNkLogJ+COfezmG68bjSosUnTA1CxwwLO7Z5DZJbFJknjOzLUu2UEF8dMNgR4rQoTP3zC6I/RoXZFYhWjkpDJpxAG7Fxn3YySSMkzWtQg3RhAmeiAajFscghtienY57AY5TQRPXi+LUR56s1UZ4nQOffsn0U7HvR/WSCf7xWaPAMVg0rLoy5qKJGIxeagPoiIIV/gVtBxE7XpBqxIC3jmXaffyj0hTp/sM87t5rgGZhSWhkrB7MlwhkAawjjqPty0HrQD40CjVH8/G1V2TAec8Ge18AAcTcKGlZSlaUdz+LMksh5DpcAZdCeKLaMIipMqNX1Ym5WAHjbLQG2Y2qITx353WnQqDYb42ZCU99O1QERjpzcFaWc6fa5uZSbUuqKDoMdxsCf8ZBeaQN+cxDGJpfT3oZNgMNcskLwS/vACbbPJIi8UzNTt5Q64sT8N6agYp/p6NpmwDzxDZrQtFAL2a80fAI9FfTGGpJllxaDcu3MwrGB0wMXz+EHgPsWOfcNhpYP9awYswraCk9FZqyKQCaMq3pjwiil/vsh03q5PYhu3uCauyyEwOYeS6T+1eMA+894SIANovphR6udFooGLjSaE9c6ac4xsUxzgTASsw9OnCMON7XY6GfQJtYoF2DY7RUBfp0y82f44M9TBHQOI63nQl1xQ6NF8Ye7BoH4s35tqyZB5Vj9cbQBkdyw2ncpp6jg7v0PS3lv
1pFJauXpjIioYoIMOMSMo4UaiudkzeAUiiTyXBttnWNxowLlILTu1ytVRVNcGOu3LTjQ1ye5lkEgKzJLODQKY0HDt74LXmowDdqllWN4p3kwJiMyCHE4C1GqEiPOl2zeythecWydX2Z9xZA5cx6KNbQ77FrDmMitF200XEWL8AGiJElziJETVFGYcRIrvKDPjNfih5AJOrT0buQ7q8IHwo6eOQQ9C6Seh+BWaImkoTzTBE1DKLGT+0EoFu6OuPKekBLID8DKNvebNb/oKzJoNH7iY+K0zlJm614ja+sjWOYbDo6XNfk6XXQh4jAUG6RExAOEkWNPFfg6VzWplnMXGKIRBhMXpYVxWGsTkCBT4S+GFZQLcJuYRx5FUaAIsiKejAWpkEkCbGZfX16AeWEPQy6uEaZOwN7zpMwrBxeDIWkuZ3sbPAk6W1LtWQvH7kEixn7b6B2PoDzwSUY9jva8JQVjNojE+XLMx4XPC1i6u51eO4DeRouh2EuyeBVzoyLZ+XGAq0bgWYkQWrQkso8BUHmZJu+vaGAb3r3jy9ifIgrObiHQZSQbiSHS65vYQ4IgmqID+65d7PD5nlGW7nqXXnw7XRYvLcIGEwvu6R78BJxbWSwalKv2ecoAbz2HT0SmM8wq/sDxa7RPJMhQtV2TwYf/wkP56zvcLoYIR7CTIG0YWzaMXPYuPObzbIewdZowhwjMb0FM+fObLbYVN77yG9E/DT1D2+OYcDJcOwZp9/HauDR5CreAbx7CnAMFLz7jqjMuBE7lZR6HFTJgt/LBSDDkwY+mc7AQBV8BdlKzBo5BQkBNoOMIZoxMsO++lo/VLR4TB7SnjGMhYIVQWDFeja+nQSTwKSZ2ZQrF/kFDejaFEm5ROwEkbAh9FWW4rjvKEa7OyCG1SOs4sCtboC6v8aKBH9m7PlWdRrfoyhgcfZJkL2bnNz7QrcopopNFC0l1KMMMGajL4uhhNga/ZQ7Ot8o8UTRVV1PSs30mvI2XEaesQVJy9VUPHc+qFPOA5lg9/JJsElVNl0SVTB4W9GCWFjzLNijs88wzZbFzQvPFwcu1MQyx0c5vJxXMg52EwIyGzJa2Yn+uK4ypcii4j00rlNwd5xRrC9mjEDuJ/HkAgYLs0jaNmnViHWmw0ku1dIMrxfdKwNo1NZFBmvqZtVDg2gY7bNN7D+6zOuQ0ERHiIaoaEnvBkNUP8LQfizsB2E7iLzrgmEsQdL5+e3W+xj3BGAxjW09+khFrwkWh3kyi5JmLPsk7d67oblrfcHDgJclPI6A967R7yiHmMM5v0aRADYi8pBAwsgngIEuiQJkQ/GtdJRWWwPLBc0O8POGrJ3RB1Qc3DWjTtRg8Ni6sDZg+OZ+rleZ04TYHTRoFQPvHuW1XCQrtKUoTJcMjoXldvDE3NfTUXwPRXc3acyUsp8GiwCvEuYKZqN/0MLHw6YICPR3GUdtuJYl3HYuB6MCHwBYLL4zgCaS37kEvzvEAAMohzdF/Z0Xvl7NU25HmBHhRjGKl98yJKIKyFUd8jeMe3CdBUc+GEjJfPz8O3YQ5R0By8ABjgrj6TC2wHX3tMAW7P36ii0g+kDPN+cTd+CgGQfgoENUkvoBWr9+UYCFhm/fhoq7/4hdhWc6LnftK5Iv3+nlK4Njh7Hi4rJXfudSIN3chPWTNSkkmf9uB5BGyfvEfIRK/3Fd+bAr7mXHDHgrL33lcEr3H2ZgeEv9ih9VX9kL62s26vE726VjGMWWdywXuPL7mjNk0Qt0jjmLIR1Hyt6xdNTD4CCXYiTALOx9vb1SSE/ag7XQBeLKukcwMQnRLsIsGSAM5STQ3cuiJT0T+vcxqLLJFpYm0GLu8QvkZji0WuyYsjdodnCUTZL0meoNtDjzLXUBC2RvxI2u1e9cw20hgH/d0IAri92/PBvBSBAEAJGdYObRvwB7NWZ425CewitjG0RtxiwGZCLAN62EnbBpcMW+qgr/ZJZqm2SwrEhNhRUDc2pfzMM+MtpidlhYUe3Aw2qk+LnQGfvqGI9uXZZNaPbiLva9nN5zNKTAGe0/dQc+h2rqAocLPUuzHbLzJAO9lw1PfR/mFYtUgBK10dolrNSh5wGF6R/PWlvFA65B2fQTzskCOvhesnor7pSJ0d6L9o54XxEUxaNYox2260UMA2qe2QCh54MMl8xo0xqFmD4gVGpKzVqpjMxd+CCYaTk2TLlKx3v9nHrxmv1LWRH+UfeNggcTHlHRPaxNDapR8UFbDrrV/hQaKetAYvuA1IkI8WwkJld1z9tTgj4O7c8jdGxPqv9H8ZFV0pbXbckT+qwUZzpeRQTVElND2oSvGDJljnCWlguWJG+XWX42Kotje5RFwKWyiB7z02QwBqPeb2fiubAuQnQShsCdXb0eNJ/znOuCe0rd19f5TRD3/Uf9hJWwMwsD+vJy14WJTfBPPkQbjpuLaBbBpKNDUlToIBTFXNtKMdF6Hstu9bqyi8CQ6Xv1iS2I4iLDHHXQhhf2gaWr6gNX+qK7QY8LaZc+egt4vkLT6h4G/mKLevCwpVsufvBNc0O3mMV79Ng1bg1lPrT1PGrHMbL3USCATXn1pr4azBTBLIu2Kveo/ELyFT3Z3lGDwLq9kiK1j4E8HXbpaYFYagf5qY3XM5njh/pYRRmYG/qp/vP485N9esFDAKD+J5POvb7ivbz/a1v/KC4LUQH0Q7qsn7iuq/2cS/03YOX/s75W7qPzEkAFaRmD9QyW+I16c6BXMG1ZxArG3p4f7/BmKb9dzdmPq8U/7WMHIzKc8FB5mjFEnlrtHmFdntvSAUZk0VcMqwD0np1tGrA34D0UWKjTpFZa1PS8XdgPR4LGnw09qv3QRtOBMb2epSN5IhEcxZcXYbQLuYL1zKfQ/KvT3OLrE9pysc+kBTgGUeKRFFsPDEzwedJItsmH2baJpzdnNsruMZMY9BEhSVdloi0X4P6a1IaHB0xvFnvmxTLwLteEbIbEnFWZidysutOrUu3ULBYb6ZznFzIZDJ40vjwjavMpIIJxYbqCJEZCJpwq7YT2qix8fKZxB10jEAsCB45RMEaNQ2Q6EhjuBe/vkEinugRw75cGpPUc7w/gbgrFY+5NJQMI57QnSb8OmNFR3u7UoQUGkye8GsclqUVS6EqvEsYIDQxjgA1vYAWbJKQFTDN/IuieMNB64CT90J44SwY0hiX0CAFr0QnRld/pWUv4tjaaQcmfbMlkhcc0TmntYM3VSeAMLXMlsZoPP8kw8DBnakYHd0EV3OxGE66hxYmXGwShnHKqPzDrU+bkFG0K+a2OuMN4p9taKeZ1easbZk+B5wi318MHvqrxYoj2vBigPb8D/r9dO0pV7MGUukp731ql457rmcQL7sMohE1xT389X5nJSEYaPf3GlGDPugJ920kDWHWxES8YsOczu58ZR7BrlKTC2OQ+9u/rdxk/TFmUAUFstxdJOUwFbR6astyC1LQeloErETnWqDA8lCemUn5yLs/WJPHmFUuzUzsLrpZ7Ju5e0q+le9/r6eO00sCN4FuxjWaKcVNG39U3rD2jNOZ8z3s29eWK
vq1opavVezMtpMywKjZvHcZgd7vRwkLCfX8EnhlKGLOj8KflOlm2YmNWNtqrf5Uf582ENXTmsNr6TSTkTaxKhWVamyTgVTgnIxs/yRtDRky2nulzNUIGvcfBOU5D7pO4nkDSGRP9edG3ktYa5+zDG72hJKxL3PnCea9FM7nAEtaPmS8qBR9R+rVxemzRsHBuF7PjLomQYQ3sMPZfS8TrwhGl2GW5x5I565ewqk6i4FqxabUHLqjNARcihxS79oqWMMbv4xJdYJRY9fkzk41rJMViUjbThajPVyQ0fq8f4VtYaDsme/s2bAI6dTAOka/rUL+NrFjm4frEt7OUGcO5ownjb1wTrHlcAx2hQ1mfuxehYp5oUjgwWriYEeQp91pDa3fNerBhVLvX50LtPpO6Gf7wJB/vamKBeHTxHyODsZG83GB8XJIeMiEHHb+Tim5HvCS0eAUA/8E8Pj51giFYroXkPSZhzjnV0D16Lxm9PTY7VEIjRm4TKt5cTu6B/8mapXyxUr4y+jnedcwXzDPpSZ8izybTfMJRwoIM7+35fsUPoYkRMkfgSslDbLKG8sLxc3/gLSyJ8/ddUKeNgzTlsgYOE2+ie132p9ZcvKDh0q2h0cjKxjYh8PKdsdHGaKcGlD8nrgu73YDluKOfzF7PEtFFCfl+DCU8HCHcahnNgi5NNgksU9fS3L2mzLMoTSOYtEnGrwgPLd0ArRDg+9M0EV8Wbag/dUz/ivXkIIpwQtjAOM4kuVGGC5gwDt+WrnbQ3G9vdULVFmEC28qcwuExJqz7S0PhC82gv+jXFvI0iXQTtQvp/GNradCAn57aCHVcMPGTiFUMn4PQrZeBxYIKP9k1ka882Y3Xj+2GLZZW03ZrdgRQZ9ZsWUzAiD/lPnccWc8CBp+pPltsK6WmLFtSoHRU6hkKU1pJZsumbzC67a70uVPtlK2SmqE02+xJOpTUl9kd24FplmzYSRKeSV7VeTLmx5pAbw9Y4eGirBTM9KJuRZ4snsFVtWUtZZou9vIaIR4WRnChcw3N0eogbJyNKpopMBEpj2gBFYCAgdQRXekhmWcY6WiyT0wserjj0VoZXpEmmkEe2V9MR/1qnmnZ28POYCKEUJaJs6+An2DHzftC3BFHP1HkVBBRm/0+t3gz5V9X6+sNOzz3+tRZzWv7ibqtHI2TDWbtLKvLuDfW2Kdwcz2YaEOztLdbdNiTBfgQ3lDD5RbpbJIDy1k8RpYKvgFfhPEQmt2uX+1A33r8uvRJMziwiwe2MHPEvnFvhaiI+in0HJAbhb/CsllXpw+9YYkjaPHT4zzZcMxiI3OZI2qaadXLUcKyNftqkHDGvcWZLfLv03XRFNNaRi64eDnOsJ3ntWWRrmuEoyTLjkY2evQu4cyuZOuufL1g16/+NTKkFr4waGs2O6dMBN4k68Fp1+gVNU2nXYchzKCt5aETw1hzL+zf0rhmdnqHTIhp4P/rTEbeZqfV65l/3VEoJvmJMkAuEhPYumEfLxljFl7v1D5mGBaGZaMznY4ot10D9kUQFWqHXh+xenb/qgcTJZzdhK44N0KGXhk4FnGRvpph/ykEWdaR2CF7SlkYS+5x+cJ7gXB9WNjlfsGARWGp6CEB532g2fGIoG5AbzG0Zs+R4WhREracnGj0LU8GqdkXPKd+Iv4su+DR/CwviDmkj/2jOGw4w4Hp+OdRBLGm3IRzW3E8e/crjHnnrUWc+Jq+mYiJH2Jo6bkULLTzycFmleoHI4vzIYCTAc3GNRzDxdNb1oU0lG0S+HYCzix1zZ4yD7yu+wUhR1cnbyLOTNM6UtWBDw0hG6hBKnR6xp0Dfet1XeFs6K/0pUf4GgGmTeDgp7cvGY/ml62kthsA2xfJZrOPHj2QPQnD32d0hBqZj53V3vzq0rcT09HjwrIUPXuRF49ukZXrgQqZ89z0U43rc4cIlmfCHSZSmPfbL2Q3r3wr9RcfQhMFgCTCjChzoQBLYb8gVyRjXObtIKJ32RkcSoOBF1Go2zxL4/0qGfsdl+i3HVfNdnZjjKXtpWovgbpjwQX1Jlje4OrS8YnxRw+a4mSkBH7N+UnTWHh46msVKiddPUa+RmLUxdzAY4gPbQX0h5l7OpYKiV1LhfJWKJE2G3ioniuNJixFiOuAZWOZQGHQjBby5nOgLi+/KjhGwZ0jOFMkBCQkszHj7F2tumTiOc+hFHHiaUNJl4RAQ1wgDOg3zO0APJTqa2XQAJZg/JclX6xHwpIXUhTm8BnRsbNGnijF03WtmtiqBG2dw+6lhfvpQBCLSRYGwG8U0d1RcksmIlvbNqOiFFalIB/wGNjZxHt4296gURsCJzckRqbGMaTw8NNlr9pXN656f0L53YCPo0IFYC+Zjh88sJURXGn1WtYkTZztS6RJHFD16ZrEKL0tkEqhytEWdWWaEU5iF7+8GLGK3u/JNYqRSjF4t6sGOp/abOwp5CNkmFxxLcZ+rZMMLQheta99RgKA67QCJflQVp1BiVDfteymnNAxwENVEFiuUYpYs7WDqLbF+BTAFlevRzIMzwCDhmPOFnaAAf0Gt48l2dWFbRUerm1eDdhrIq80cizpuNJru6C6SSMx9A7TlZIeZoI/oU1gcTh7q4gs8i1v7+PK1d3gFm+WYa7KUJE0iQG2Di2ItyFXM3My/H7CaYPNLAIOYBaugmFZe1FpuMsB5nHV2rAqRRgrVa/DQGzAqyVY88v6QzINE0DxOjsN/iI2GmbKcD0P8hbu05mP+hwSignpVHq3bY0vWUwj/CksjufypTRgpX6McVIwsAe4lkkuRZtnS6x8NtPbZcNR63cRAJjpu4AmTAXdoDaJBE9el024kLoo5lYB+K6ylcSmxdfljh+L0i4UVF49EiMOhWzrNoznkkBzRvESlod4yVWvn6MKn0yldJnrfy66wN2dj5FUTs7FLweMyO778Nmbdqkl+/1cHo0g/Hv2HxLE36CX/NMOxF/3rP60AxHexv55AyJKov9DOxCJ//4OxKxOdw5u9gUTkrbJ90cxrsNpAp7b7zaJ/n77Jtx1+r3pGCXg03s5e993ws8+vO9v5PeRuH8/9jk4fhy0YLjezwc/PQUPf3vsc/Tjua/hpMmfNiD/QXBgyN0yxuk/3q05h2Oezv+X+1DirzXhJzn/2Nn8s5B/nBvTOpzL9ff9/SvBf7dgwg2/Pyka8vuNrjj5h92rX+P8furnrcx/eNGvD/7QV/YPffmaiD+96KOHvw77X1dN8k+q+QRKFR5/VtCtbOrwax8y1LTvK1D6cVHWiRoe3QIneZrD+P3jiC+6sTzB/eFvO5vD8YeeYtRf72r+7aEnfNl3M2MKd2abP3QK/cMpYEh/d6MaTj921MZdXYf9VEafLsMHGzCrZct/b+f9y/Xzb4Ajiv6DcMk/gxHzF0pKM/9DQET/Sdo4TaEo+jea/Xub0v/Zneh1ms1/d/fz1Idx2ebq5x6R+O2M9T14eKoDz2b1B/YKwNLS9gN1cziH0a8q973/HnSU5MEvmEMBohQJOi6AY/S3Y/ALbx9noWvBWMLyI7oUKMeWTvO/Kve
/v5b+rAw/hE/9t2T/47Z/u+yZP8sepyiK/NuPPfr/kf3/mOxJ7H9X9j++luEn4YthWf8H5f8dKE//0YT/b6M8iv5J2ijGYiz+N/zPxPM/S/2fXeoo8tfq8P8JzqN//iIcFGExGv8bw/xH+v/T0v9fR/p/wtf8z7fd/OnbbsQ6/unbbtiKP3FYOCvt9HnNX9HTuQSC+0bzDhV7vFcajNYDDBYaMbvmX4Tcex/pixC2l/RAhXtFkKiaUw7V689HbXkj5z6GtxM/Zz7PUr+/8AiaW4Q4aQxGX408qwkLj5b7FjzzzEroMJdW/+kez6wWRL63oHJJMOcbtMS5kliSRTvbzC1e7pVOMHRK14p64vhqwHDnTbJhZTrH9Vcu8jK8bmGEq7ynQ/ZurKhWPP4C5/LGh0w2zFUsfDJo3N0y4lYXHMNFBRXOM38tXiEV0W79sCQYdeLeXLtr93P82iErvD38reSfHbUc97h09Q3d8Tv8rgfwMzghWZ9xG1n1p7rik8dpvbcEZnBcnFXcYB/GfK9hJQQG06BxMNfRrM7osKNnzXRwzxKnDctcJ1gbTKeWKuvYUuFMy0s+5RPchxAN8LswwE81Bm4yp7N8Jufsrp46N2SMpdutuu6adZnK127aW6KuzUh7km6/cPVTjeGemVE9rtY7VkRbHeaalb7q3zh8vmlBlM5+llQwDzDfn7GA6vLZMPj79KJie7h0vAiy/SY1lKVrBKc+CSveLgoYRXTr4OUYHs2kfKk1LN7CyBxQNqR+94XD1ttA3TeMcKNnUz3Q+zAUdVP7TuUl3oJmnqyWxHSY4j7V4+tYME+VANNxK3U8U3JEYV01+T4DhooNz2zOW437EjEsz6qPFvQqE2a0DaKD3pI0fTeGUqEDOrvWFUneJ0mwwdg+17Yc88inKEWO9AY5qK/SuTODabHq1OnWdFy2Zl8j4cFyVu7O2eCRlEItWBwTHw1Fuf34MBzPRUzk0oxDAvNDnzWpWqpapKj+LJIIm2sUDSTcJoPWXVB0wew5cVzHbbIxtkLGH1bsE/inRdLrSC+B6Q4YT9+n01lmWFfYP8TH65MN8hw0cXLxIT6JoEXNUzyfQGIqi8kY7o5o29HIaIwSCVTm3p8zwro5LXy++gXWQcOc6h2hUX/c3310ZfcUxyIcaveVx0vH2bM0lvfridFyXVXXo+92J+jfquN44RqafeIGtjsNbYXOYGacwXDwMKsj1zc4ueYCK+HksXG81FMwoYIlJaW28cIsT7Y+i3J9wQAYBDBCKjcvRQmdll21dDic3F66Ty8uRvkYXstnXzAnPJ7vplLp4kQTHyvKI68eEifNiswJgk40iDYAUH7gMirgyeNJrjHcQJEJAlzRKq7DEuOReiTAHFLG0WzOjYgigmbG7F4ONKffr0dJcbp6d7wYTaLuqdcrzpDtLz+2nyVOcv+grngVOQxuk4hcMq5gbJgTePZZkoJAhuqeqJlrvCk/ioXIaa42XN1GoZcBRuJl9Kle46RrPyxViObuKeURUHAigbl08O6guyqeSHdQAVPZg5l97ipqX+uFyy/UDeN7nHHy395ETRhKP+ZGTc8ssB8yVLngckPyYBQxxP7CJKO/1M6Bb/UURlajw/nlbu/nXD0u8ed7TTjuVbr90MyUlN+oBhPoFtZDgdV/mrwYtU9Lbr7w7J4mcWv4S9r96B//iucpqCUUn3f9Cx8FyaApAv8hO394AZmTDsSEir2mX6Nh0+ss2r7wqREGdxmrtFG4Mayc8jU68aa4e/76mhshuMCMDkZVfSF9ZrX28EnKP9Xm4OnX9L6hLHqHdYLcRRhusGZ16sdSlD/vumt/nE3g7MEYPncI53d7d/XHiGDl+2sQ1euIS48Prg7BiyFvJV1/zwt7nzZUIL/6cns/Iix7JPHXXGpl7OHi0/9u8fWMiG6MDi//fha9Loq3z/SnthHYFy1MQ+rAqM5JIvnx3asbB+t0jO75Pf6+Fm0qr2bvV6sjv8ccdWLyQX7yCQcPOdNQT8jDiS2YieBu3HCz0qKIZ2OkCC8xUOyTqQA3d2iJgPUQjZiGwB2PwO4BXHFxV9rQGFm9H30zQ7yr1wv26zqsJQmW0Y4W6ltjjjHLTFCbAGYhl8/HZkyKF+AM86kHAZon+z96cnvZi2i/L2NOZ5aHDcXqzfOJ2FRrzzBBkqLkayTbT9JQxZgtBssxp2eMnsY+TGfnGI9Wa8mw9yGEXXUydAOh9W9rw1QiirhChNOkwPs387oaZ6QJ3bk5fe2fN5okzknQA4ZPWbJfnpZ/mqX5QroWWid7O1hqRwLzUo4aTeqKNeLV6YxP4/WpYQKNFVdYncyKjzQwbabZHfbEr3CyySk2D6Jo8ewRUjQJUZg/G3qNZX5m+/VNOmbWW/LL43T+JNdkft8VKz/Viq/xeoyZWLm5RZaZfDF627Yo2BX08NkZi1sP4TuphUm+OqxSsUuzPcZQSXY6vfDLo5BtxM9gVR3D4ricfrJrMMcXltqNM4ACDGLD2u6x0LBesWXSWw0zaYmB1B6iO2DNBJK8OnjUshjmwYK3+0O50bGCBp5ApfjB8jyY1yNubuQji7y7uLimHEzOQsVT5ml5f2L5QCd1lmGW42hjFlAXWEzDuXTWtizTyjSgBQMpkJGX7GfaIeuSm6jt9wyWMTohvk4Fx/sZsIHJjuCWSbdw+lXXuSJqPK+KRGsaUU5NqoZ3uQEu2XY0q/N6ndprI4zF6GdsqWZz+PTucxCX5xUXZmvxbhN6bTQG1QtE97YZFaE1lzoemvIYWgBLOadiYpbznBPLixZzhNapZoEGBg20vmfPXPOBCN8mZt/oLbvhuzoyOxls4vO6H9Y1KHE9PcJwoctu3BizpPBoRu7z2jU3gPH3kA7NOXuhkkXg2nh/juKnOBeKC5CHkJSdcXTvlYyVrCZmfTAjGxk7Qiy86HMqEwPP7lxbSLQNzcAl8NKCud8zQfdVUdvuBnOXGq+HeGw+04G8Ab73xFg7EhN8je70NK3rvk2WbU30TLEuFbY95nljjTZ4OGAn7hVb7gWz3NhGSj+61+ebFs1q9+HeCEg4wFo2w25TgR4Ye5JVFrYg8e3x1k2lewtRhlWXIdnA6nO01HMvZP0c2Yp8aQcfRW7rlcxj7CXsXfLsEWMXM0JXEQDFog745o2QbHrZ7devdOBLsP7xpgVkr0Ez3FDLLt9MkVzrLZZ4LLrV5bFJeObpdK5xLEfPGottvg8z/aKPyGgM9PSG+XXqbiwdClu/X1b6nb0ZrT6iPEeZG/txB1yglBjQ8LORG5oWl+TtWR7tId2QEGkttPNrozuBdwxyMx5cLTkz9txxG6Pdt4xuiUtl2YJhKf1MEjSNoHWJ+UUANDiWIS8jd7144AtUoFn9+jYL/rwBwYK/Yd/AkZLx5GmwvhObzTOuunA7JZJYeWlCoz4OWJpoOKSbFH/QNw+D1UjscF4rHveQnLzlPcPsN37FTidmYGmORWRU+4wDhFrn+2enz7I9cD/NrmzgnEaFN5PBgaVoXasVf+cwO46bDR6UNK74cpAk6txOo2rH2K
0Jj33KlPJMVryyVR3M052HleOZnk8uPSwYaKPETXOAO4bh0FKvdGj6kCIzhlwVCsN5MUtb4uT7BpD7fYM1C+QzU4zWnvqmD9XpyI5PuatnGI6pCnHcs/oUPzdGrRlmcvZ5Jcl2HvOYlpbUdBStgFULMoDluqIN+xuL7ztd2AvsHEALh4Ieyd6nXjg1UCWizzaoakeo+H6lJOyARrTumh4hTd0Ll3HAQgp62MmiGe0xrFmQjBlxwb6Usa8y7wn8SnduUImQ+yuSGs115ed7F46UHYYN9EiUvRyYtGfnT8Ud3EGGDg8lmjxSrC+stArHol9qjsie50DHsALzJuhOg9ropwLPenKbaCDRu8u4PmiewIMwyeTdsDRjclVlHJMiKp4gtvY2HwEE8Fwac5EmVX5YmAKRsVp9nCmcYJMvVwpBl8iSzQfAfvyFhLpiQ5d84t8LbqP+Kus6ijvk7JEndAzAU2NyfdBZROnjZHK6GPF5JKq1tlaCq18qboV7zJQHh1Iv4DvBQh//PoVFkol+RUtG7cCW50dhz8+NrbpmykjeHiFNvFuLgFVM+AiU24F057OTnEwMKCqqvYhk7yqgqsPAZH3JdWE/oA27Xbi0ngr5HfA3CNGoqYn6iUSOzTc2KSkSfuEihWLw21NDpQutsgM0vEm7PvHUHgA0Dg1UyXva4qPcdzp1vRGL+SKdhOTcL13NnWVdfBrTDCrnD1VN184I8UnFGq28beeVvMx43NlQu9yCCKrXDc5fgLF+m0gDsJEwcJAaQfa2tXUpTZoEYsHl1cBVAXk7rwynaJHv2OQsRtNTapbU1ZaE2gh34onrkJxaea3wDiyuoyAiBqm0qrepE+GWyUwfKdYTI2pg5utsAEjrJJA7CQuqksH9VOImdlduQ66JL0fMc4x8ONmaFr4ZG7R+OR6riDojOiP73VVomr3x8gNL+8PillIyN1wB4LVdc6oEjEKCRugk1j6PwszbUDHazCJpxLjlkec1Esi9OPKGnswH/7xKtIh6HuDGQg6J0vHQ8NQNsujSaI91iAuaS08gkTu5womOU8gbU7AAHisUHuj/Y+XizQoyPIdbsMePFSRCWXnDRQnc9ymgQ2umFCXQ2uB+MnYWKVpNd3QWc3fJTDdXQGYfKQSsqLgbL60JLkO34Jgnv5uO/gotl+/cZ1iQ672G5fRqbxqkC+TkSg8W1TBDKFbCV9Lq+WBtLrvDjtWBhT3ch1VbGBdaVh8Pi5v5bhQO+n2c4w7G1a6cBku3fvMVJLdSHy085rllFs4FiRCau107jMbpO51NIuk/ipBJ00z3ubpT8f7jDSFKckD0kpC6nJ+Yg4zQk0SGwgNtwcog1KpRLoaghIVboT4Jdt7o83be1WqoqQUTcqRfLD0Ogh16AnzwukvMp/JIeepOa86FSTqNyED3ig7W+lMvp0+5bOAtdHwR2nweF2KfvY6JKIGlC3YCppDZixUbSbZlttYcCn3fE6PG6SQIdDxT+C56uIG5w6Jme2FZzMnI7Am1Uh6mrD2jPcmDpCfQonI/0pHXRJUn09sj8k0iw9uJHp+dP2d0pS7H9m6sD3vELHnSwYKOWELdKe2Y9ue0d6eO98qzoO09FdgpGmAZJNfY2Fk0t5W4HXirL8yNOF1hsc1DUSEGzPVVKhy4tZmnU7iBAl+oAdPSEsA+PSczNowP9LNf+MFn9c1msUcTejQDGBas8kIy4ZrRpqkpN9O+uZ5idcx6yO+2NEJWU7fYNGijdZ0A0CfnacfWvOLE06QvcSvEBvsCVizDrdcFzxioJyR9j2VuwtQCe1nshm/5GvgJp69MUDeRp6RYHOEODKba0du7QWPzcmpa/FTTX4UF86TmYUUGfYsTu+RiTGH9Ld+aJ67qh3voCvzmivOFqw+nPti3mmxBl9i0dTxyC0+XYpYxHGWezKlGNwshY49lq7cSwFrAtwB1S+Fg4887rvMpcfU+FaaSSHL2m1V2g6E1CxZCJ+d9x8n1eS9QJoLbHuD3UsBdKk4dInoxZTa0vDbnpHWNzFe6yUU/w/e7JGW1PElF40tiKj3POzICPiVRZ25Da/G+Hq/1xohT9ZoqhRWdecskSRWHNAztzu9YI2RMAtbLiQ9jG8+snrlJ6KsCgxZzjviIzUphvD776UY/cIMGVvJSrJKv7iy95+iFUgfs3qP+K2vd28fvVochVnpaF0cOkauH1AuelqKSGiGgd1CP03taDplnPNb6ypvzXQZKnXlH3/u6CISw4HhKbe/c1J6egxHaQdpHp4lMPRJW9qL47fSXPfdinS8RDQs46EV0V9r5HkVOBm9y4fYm7J7Se3WMlpTFjjYO6Xa76LaSXi8nE98n4KdADqG7Kvlq3EBum+66GY41hqfqiLceU5kmkDuWY3qJFScVCXCkaBXCaOebx98H4pBzuIutPPIMUc8mvxznNeKrjxstUyafFHJFp+XrsS6HeZsSQfL6x3JircpsChm4ApEUofJ2BWCWEKJuI+y15pk6Ycaz2ukOSUdMsN4jloR+smENpCM0/GYY9bbkw9c38+UFR9/EnNztxd9l6VU8IeOWqrjK4znmafRKWOmzYI7rqdWLp+rF/2nvunYlR47s1+hRAr15pCmSVbRlaN/ovff8+mXy3h6vXQnQaBcLDQbo7ntZRTIzMuLECYclIo5n7tX3J44kYMFnM5BhKw6QV7ptlWffM47CIZJDH1cF2NWDfOTIjttJuGlmp4r2m9VjpHGigFJygjwZmfT0Egv16XycKvuEZF+y1R5Gjog0zPLhFX9Jzm+2QJGCvWROuyLa3BuINW+RqcBYaZ3YUEWrgLfQaCVetQqYErpnmUoFVUJ8kNi5izqPDQ4D/810uOIAmlHt5KUhnBCdssWHFyXRPixlcEGRmkjLfjwLJ16xlztIUrlhRdjwA57gdFw+ejlkgDE7bwBgyhSgzi1tyk5nxFGVaF9KmwWWGS/gnvcBg2KFfBSaMkSbum++8ubXuZLabmJ1TlyWMBnQyOeAgyWMFOvUNP71prrU0gslzF7MhcZQTgbZA5cYn4yohJxSVgd6idG3Mxnro36uUpflz/JdTPt4+y5LOwhRj/RAJcmqDF7kuW1o74fTrmWgcccAd9JhHKBBqD2dup+RP+zpEKOeY8MTo7zf8YgOfHIMLni0ndpfnymOYyz2FrRpPh9tae6e42FHyEoD6QTQUPuXe4Ob4iHf2vjheiVdvwG98AbttITM35BOdDxX8EAxhu1NsrQxkUziyWvOXXZ2aAwPco1GZtU7AAgTsPh9k5EA7esE9E1WJRWNkNqcnBtSd7OXHS5AFk/ts5QFW2n3D9Gnj5heVBZxJ1dJcoedIqmAXX501jS+IceGm9Dq5Nhj6PtoOxapyk5jhhMn9jLJhyk4vLtSwgNDA3Rajc9YR9FbgM7bl3i/ZePMDWkfx9RGDjWn3D5ETWPabATvaH0CXkdf6om3XgiMFbLSnn7hICtWlZGgU+xHRuWhBREG4Lbs/lenAobx7rLoLHSuORTEls4XA8cNXl6T5lLz9rv0mrv+zb7aB6PEwvSDDY0iA
A1R//QBtm6KPOSL01PXu731nG5/R7O8hwl/TJpbRfrjsC6ZmHbbGy/17sYMzoo8j4hWnUEEL52/ipBIGlz9s0CnpyLEFBxuoT7bzyuXvOhODAa4Wl7WX3Ezu1NMyPY+Q95ps8Xy5lnRdJOPCvSqECJauSfAilGhwt00o8mIdarRGZWD7FWg/gn9RqR/2ub1lOJLoVXZUi6mVGNf9ojAN5JFzKCAe7J72UECEOhbGkE/s7vh5E1QIkz2zV7zw6PCodODwJgf3HO7Q2o+fLG3n3vmxKgdfbO7EOguQp86+Iu3fctNmenT/PhmuWHvvliVin/xrtyek9+8OggugrsrgJAGpDEAmV/c7I9rLHWp5HTfZXozmYujrS1Df7vzYTkDv652ZCBkj5jDZDSo0vQBYAE+VAEdCTzE3uwyJeOhJE2cKMmxXBk9To/LBVXQXvBVLcXa9VuPhgJF+QJUJVAUEXgVcfXDvUqa+Bmtj+ZNYPGi4Bm+XFUjVlMBbQwaQgAvR27yJSTduSFn2HDesO5PITmlX5EIm2pMyUHgAYQ3n9Th1S4FGX3jvxdSBLUAY5qfOLhl3z24oT85MjnFc4qgBOxPtKV4XxGMtnTwl+30r9GnC5o0MkxXLe1z228mvMOJT3S936zzw7YqmxsGuqKv3XjQAK8o9jYRcBahHYGUWzDV+4vbwvn0KOhRqUgaIY6AjmfHDqJ+CIwuToL/nh9jHgzyKRwn4h+39pNclW3VXaQMrT4iDRPsUwEib4KOl/dBJIAvH0lg6zCIIHsaGQhsCy1BfGGxNs58ihlgPW+T8kZ8GnGgi3mf94IiqkewEvgkRPnhYdSjmsGsi7ke5+FfwUN1Jj62zgIaEbKX4ViSfigHP3kKEUH6etbzUlgx9nZDAhof5cHxQSHg8n6DNXkAruH44DCupq8bllmdAyGKMPrtYmLiaZmbg16qxcTrCW3c4ikyQiP6zNayplcQ4Elole15y4bNbH70GeQVkOLfxON8QdGe+tEd+iDw7RpOGsYzgXsjDxtdMsgPNhVE9x9TAic2P8zdHdHs2OajcW6QPR5Fd5gXxF+WfNJp+0tZspbVLpEz0RfzwMj8ae2fXaYuL3AqVJi4uxMfw6ML5+iR0fuHMQZcf5enawg1JtsBoCFUbzh541NxasF+OmxWc0ajHU8DznqvbOTt9oZkC2mRH1c8LHjO7ojd4pNcK6UT88t7xCcRaCbbDNzXe3xGy+QoJ2rEQ891EtCCVHhuDQxoaUGyq+KKImttNzrkELgDNywuPEjmFZXSxw+C28PzfCU0SHgNj1w8eO4TJy9sb6/Kkw2eW8qk+elBP7nluU3Ac4p7W3vawPoJ9sujp6RE0OGOZgeOdn1IU49uU0sZgjyrmD2OACqQKTm8qixImyN3gbOpDCwTe1I3VI5WW9qBl5hYwJs+nWoLgZjOAZpisocqgtug9mNHOj++GvMUaA1xzfYA3/pko5sWTe3O37RZuC0kMuw+sxqGwTGJDg/OutjgulTDbMfna4r3LYfg6+FDwvDV83Wc9atQcUZTwryJ9lb0bDTxDP/OeVecmMfEG9DQfGLLvk2nSV/fwLpw9c0ZTN4lwfmhQfwZ0NLHQfXV/EG2Ypaao+ubGSkbB9iEVMCfyukuoErfrt3oRsVP/DXozqV0BwnLkU60khZd+kHqIwmVi5x+vsHLYjRNr0WAooNNIfvRmJe9Jm7wywpOFPacqdyW0OKTxJwwOieSGa7V4egN2WrTOe3NAwuiD3rkQAkpXz0Y3aRCIuIAuUE8sKn4ETs2LfPK4s89T8+NvyvPS5cwwV5NrsMNc+ksH0RxIzPKR1a9PgmSAb4isSpFzmIh6QBvuN0WneitIA1/CQDUAJoP/mxOngBVUjTmwEQFsQ+RPYg7wFhkONv8Q4ApCTx73JWiY5l2NFgVFlzVvW/sqqFibrnVa8Ml3/Y+yqPTKShN2tQym+wEOj6v1mUrU5Yua9QrbhqIdz8vTicJxxM9776IfjLiiixq3IDNkFI+O9Q0Fk4g8Sc0GIoUgQ9wz1dbZfRxVWC6ZAcnaMoedIsvSXR1QV6v3brlxqaw5HGgaG3URrI8i69KQmErUHrEgS27cb6lo953nTJQqxAUrPASU52Xcl82IT530SnGYG2fNH0uPuDwjR5E3sjOu+6kySUpBeznvJGKuHFHiOi9xpxDR7jz7MjTOokyypGYQVo6GSpF6nrQMdfIO+8B/kVht2nc4N2vtwZMKWHf2bk0Vs6PdH9lFgWLfBDCjBgO7p7G7NivlXxymdUP50rAG1zAEHbJhEItXtDqzJDH4ilgxhRpkrHcXyrQjMyp656LLyIXo3LnhXhxUvTrLOqEYqECjQeQSv9Ak7FyEQZz0yGzX96+JED7HCc+RKPTHj9X1Qi8Ex9C4v0OuXG6T6vGglPkj06W4R8v1A0i6FHiFrw3dZhnx2jS3Elg7WWOV8ICD2X95yXQO973U0PjOEmm9dcTCWRRSUp0unGyhlsYhkEorgdajXgeTp6vCNw4vv85Ol5lE2wk+v1DjXdFwXMFQsEB0j+lR4L8HoYjwKl45pPE4PJc9ZDiBDfKmIHnBLbebh7FlVvCMRrN3WGbRrXBn1b2MCN/3BH1yr7j6EdfE/a+20UhFwR2SiIgKh0HkOqBqF8LzdBy14q8LE1YdR65+B6ewL1jpYKU75YoxmiCWmQjQL/yVzroHjDfOAQZyZSFCaBvxcfoThmMbFHrXnLN+lDor6/huTG3tLiMwPl3GhgB/0oVuTNIfu7x1KDsiXrh6pP6KwYFGnavUPrqBVw4+Aqqi9mbvF4WdwnDKM+PMni3yoyP6AlQ7IS9KLV7UoJ69UstBsx1g0igzd0xh0V2XtYUdd/PVWccpK3BPo/GIooIdXt//Rx9WBb8CWnuKVKfQImdLIIPnMKE94FH0tb63ORey47Op8LU03xK5Gyj3VzXnBNsQZiFUVhEAbd8enBqG4ZMxwp8QdwsUepHsJaEIp6ANfbn4JTZvBU1/N3aWhXl9P6uCJm+9lVMURFJJ9C/DJ+LvSf9jTFmETCHOEU13rIDzmLk/Ybwi0CyfZA/pMjynsN+OoLPASu7i5TlGmTsRc7M1mWMqll7qUBzXLu56ZUggx17G3yx8eG6Hhvwu1ZA8+PZIswQliQbHE6NI496yt3xob4wLOQC70byRIT1wxRCsPZxHWnw95EVJvqIMR6cbRKbJRunP/gTgabgThofeOpyOJjG4mo2GvODQ7qoApMnFEJ9OpxHBAc19mxMhLrIBVoP7a8NJTQmuzL3FA7kQl3IlL+zDW1mKQmfAB6bxBOP5o4zXPNKEh5f4CaZUbGxf5e9l4YSCyGpU3W+Oi9XPPfccWLzZ3qeiNeoj+KhzWiAxzeHcIZ5QP2lSLWVPWUsPYD607VRFnEEhbtoaXbvyvcT1G1BlSrvqmRxelhGtZIDxww0PKiChSZpi8IGWd9ytUpXWphDHSGc0zouCMkLliWb9JgDQacf99A4UcHATWQh99BxkDHU4IEREqKLIB/oxKRsEZMH5TzdldsLWZIgplWN
CODuuJZaFbD83CutHGTzxn6yAb1/umUyOfoKQ8WGkL4txe5FoFzK8yAPkwNvYsVHIJ7ordr4HU88NbbRWcNENkhWITTErfFDciLSMyyO8iAniM47xE3wknypFyF7PsDWFT55ZZNqzRsufDGKlxN/bp3/QMKGudZP7hco54cpR1p/mUK9kvmscqop91tviTxQRb3zii9qjDzxEjTA+OAevIL2D22ybsETX/igR1JEHQFtsBfKD9tDQVmE6IXqXmA1YrTvvKhK+JGPhTuAINIhHXebiYwGnWkqgIE/hlL1bDOhF7wY6XkfWmQ91RmnE2h/I3vZ8sjRCvUr75KVLD8zpvNk/cjhextV8Dr9/ifbvRDTHEytmZwD9pDIgR2VPD8LwrrfeXpHWN5Aw5ovxuP0zN85MM7KZ8um9THZbmZ84TJ0o8T5IyI/ct5c86HAIO9JePXy+v12bPnOLDr6waBUsTxxaBChJvnFvKi9NRUy9pVFzN95EN/sxd2Kv/x89t3FwMJw+KSckvBT7lxZE6yveOX3t+inROqY/wofs7EX6idMERKgP6BxS+eOv71SmdLYOn+3+/UjCIkNHV5TDe1h3WlceNu7dNwjaiTo41NeMBEc1fvSYFNHX9DkpcZ6/D0m83iIBWxgBAUYCAi4pTgZaK2Xd3gq4sP6S66vwoHACxyifnieGrztdL7ER6xsdulxpx2c3To0pBSxb+WeCA9byZBnavNWjL1UXNkOFpkqd4zASU2OQiBMTALej4W/WhqE+/UF0K6p9iNvuoocC7a5iaK9xQZNF2r7e8RU1ZVF458oe8UOANAM/bT9MQiZfM+4AU4VOeEjDs2zNFzoOQ6uHj+wh3sLZBzaoTk2YRaeX/RAH1iof64nzocuf7RWEVJJdZ+H7KM5+CsPTUvn05CEdbmghu1z64ggvystpdbKWN8TdcNH+v6V5MIKr84JdPrqRKI3AdUCxgj7NBJsuxWtsy6SdLjkgN1GxsgMMahlWA1RKcxws0Twxpf86n0lrBsive+3R5iyugOY145N6DuANdHIVctnwMyHdcMThCl7YSMKtBzj8cgFkgIKKuu/lgG0hxUE/gr2AFDOPvDDWd6FIlNCLrhI3JFfz2LnkDyr5X2GS1itob5IpBTOZAEs/MO/Yj2KTulwbm623d2BUNk+teAZYrWqkIUl7F++0HTTqblc3FeExktQxzB/wmNE8KjD6jq58KDFGF62ETQPMqHeSfnk9WxZnaaPafh0gRhvtIcZIU/fbc/boWkk3Aap0WEyWnEFW28nsmOYyzZh9mNQ0yfYyDsJiUV1PjV0JGSjaUaeGmKvi7i3kUk9AUroDvkE2vTJabZPe4lATxZGEocLprtWRC91HBhKbaSGvYWv+bnhez+OK6Vns7BC0vTM3pvfT2jD76iMmHFSxckwH/2UF4MB+GXrevJouLMw/YDXnJQFk3xoRbwVwNM036SUoE8qnLnnIFBRhSjQe4cBofahOmwP3FvwAftljL7eylxxr1aVIVURNiXPqpM0SFoqClbQB+zpLBLzprPz5G4qf6HcbIo2N18oosFTT2hnJ5AzU44UtYHTEQm4VZmrRUBgNI5Rtn7dGFL73B1PCogsWWIQSxVQk2XuPBKm6rlxsf6cG8FWjSIcpJe4y9h9v7JtoGEE8bliMgA6wNMQzmiAkB2zG0MpfAD18FmnGPCwMlcFOlOR3B3Sjpvr3PdKUzgLRepJWJJ8MCS3spqvb3qvKdzcFCO7PyuJj3QbtDMDuopZoORpgxiAc1sECs18KmgBhzxAga6Ud5y5KXbAj8ujint2+LDouyQ1wdsSfIspSI9DkOuYl4buh62CNjA8KgGuP0xUe4dGq9Dsarz2ysy++9lAAe/bS9ELlPFcpAJ505hnsUbjJ+y1w0eujF7nxEqRt47u7QdYfnfxTQl4/dDHDTMSTavzgO2f15pMakxASrBZ5Ng7UmdJ0rabq8MnLZsQaaObl4EByx9HJ5yV3PRQ2kZK1tVPZUM40M2WtUZONng9rSrfK6cdrgt+r9tduWWls8Ij7J5GCDuVVHHpm5f/0fpnc0Bsg97nx07NnOy4WazwzgFhyN2QnjiZbAB3fLbWXFiaCtRJpBlNVxtjnCY5HWoo3q2avktgwsSX/9Oej1wbPJJREQGRflpmWIjRqjyjmrr5DTTZqZCMdzAxgkUPGFeguVHieYmo/B31tJSei5dUedN7Oj4gv44fQbxOhXyzUXctB/pfvryJyi0A3bNijznY+XISHWnbkDKNHLq4yXphfgoL5BYODd09Q3BfI2zjbOgbIFCUO70HfZSqaDAMh9hkTezEAtiGiN7pGI5sC475KcSDBRHp2qaBsAlTXTRv4srGOO2DDi734zf6bVLID0lvM17mi0QCoXf1AE5mOsC/qzFezd2pLCjNybfmfZg4Or37AAsyNlQtlf4EebWGPOmBGBs2HXegjMMX+1mzwfElUDgjQNl39zo5pSR9iUSQkyfIr8gheXZHB2GmoyOU37SrZRYSw0eHMxhrHa2/oTzYUkUojgMPrH16UPPYG1HieSBOLVTxaebXkIlYPBOodDVWNq547njkw723E37n3Ebaxofcr7xaKJOh+Xip04oZyh9qjXNspaioeIw8Kkn3jfCUsa+Ip7q75UzH84ETQgEjLeyXAZM6qDMLMaJvxW1P3hWSzJTHF0e26Lhg+xhOVhvbljc0m51CIu1FRv0To+cnCKAkRnry7iu5t53QmYioshRbyuTpIrcSGiZVuycir2bPGpqSGy4I6KDrWBpC76IsQqJjUBpdoU2YxgnQ1n5WO3cJ9HybuyrFooo+Aj3FHcKQTl+ayAJGoZWpxyxTfCh98ipQC3JfqyNHp9eIRc0YT0Y91gUGQhlv4umsZAlgK5hhIzzr1jjfMAVxFQSDUHLFeN+DuEX9gJGZh0qSx4300VmTCubJR08tvToPonKYFeqivs6TkFsWaHYFw4cnT3cbxymn0+RCXkzxLqx1RRQwx1U0fBTj/NmfpbWhh86umpvwbMu0YaGzL3yzcmYN5z20NhFhnOXJkIQB2I348u5m4bGPVl1sp0HRAa+3cgQ7J6Bd3ZTzR+20XMW8qXvtqkbTmFzAnq6rI8aYpxv6Kw6q9/jYGid8nCgJ26/McYpCVY3fduOFP6hzHSdqemJGdQ+WF7bemMZ0XNZiUATpUyFELIDKpC1XbLmf1YIlZouQER/1XwGFucmRnm/VGC3rvB60OO/3Q3TL94d7ygQbwRwCvywMR165jT752wYn+nsIm8JE+XdNz+j7+72QtUZdvCs3DU0Cv3CYW2a14qsPKig3A6WxiYemSlo9I7yyBfs0rYvDYMSJ1CLniESAhRsF3wht7mklmJmhu7NiJDBk9U7R6FS1mUMZwLtLc6A72TmW6aWuPbrCi91ic57uA/vzIgww1eU0/0+FCqTHva1H2+J72gqhJ08jMjo2/ulOr6KPP7DL+8agatw1tN9giIUiD3Gielb5WUJYb8xg/9GIRbaip6oHWmGj3pE2tKe2ZJT+9JfyyYySpWU9kwGYOq8
YM4mNPVhfynudCOGtDTuAvHoY4vHS8XPrP9OIB2xQqw/Vvr9qfJXqUzyEIRMM9FMHm47FFjrbI9Wh4XsuNDW8S3BSuhlMQNppe8Zlgf19EkzxFsOFew2FVyIc6jEMHjwfVy81K5RAk1MM2XOiNTO6vyAKyMZ8swLKybAQJx9cf38pu769O4j8kAd+CS1EmqvH7svrR5I/MGJvr3qx+W6avJ5gDrQ22eiDFPBzSElUKIScTqESnKFPbhkxgKQoJMcFWC2F/Qjvqw8Z2F1iCzjLnaRYUfUpfND0iVxEReDXtOGh7vksXsfX7slEMQ6VuFiS3xX9K9j3mXy8nNd72uTeWqLPixuIzrN9W1Nj7B544EQX9tY3ysPHMqc5cdax0XawqpUAwFKTvItTlxzQic2St2xI9HbvZyQV5GGhc6qd08fTKxndscb1adcn5EshCaNPZzSLRGnSXgLU69Edh08/8MD7eyYqK6B0yrfF0Zroab26ZEwimWArOumGx9FNqclbnLaikRQb1YSaHpAgjrCXHw64BrpOgz5utMOtj6GFYn7lW4PrSR96VQKBPtEOj1p2t5zgyVXAXWNf35F2gvnqO8uIT/tHrPwVpQLA1F72XS0r1N/XvOwdC2OEHNG5hiehW0H4RUAnN0636abwT4yuCJQ6D6B01103fu7jSgsAirlzul3cGBNDQ6CQS1YeAIDiNHdaXNeipMqCoeegW6hDr+9h/RKl04NioenOwaPXBrIyyfIOOnWy7cUHMAzdiue+w9SsjEAoxIxCTVx+5G615ueeHIcJ6Od4IuX9Vh+wFozGm9UnfGof55uaPry4R5wc4fJpodMvPVWeBjJfgBPygSb98KdwHm3aQeY3fmJ97qou3TaIiG65Lh1UTIpXfipgExS5whBInRAyKufFGfBxHaTg7zNpiKNUObJPyquEu+XytFLx6GItp0GlQu/vRCQgqEWPTrcKGEb52grAhtAwqHDEeNBTs6OgUVXUB0jwo4mEuKpPJ6S7gsStpYJgjN05+JjaYI6MQA8RuZyX1UV856aXR1xjMUDl1s9x5P6KI8f9UpHXjJD4Gk6Vps+YUMsYoSR0Aem61heIOgFrAH+klcBtK9YtS5imIXAXpSI3ONagGumQn1dHcFLCCR7E6BR6TQSARTsdJ5IMhos2mOs7VxNLQBrkV8QSjnxbd5vHblZg6jmLPA+6C4UPTMDgs/BGUF5ne4mJSZbESkmdId8MZzUE0dNH7SaMCMmJns89hOM1hMlSNoNpYwsQA5snXf1Jk4E7Gi73nTWz9fY28dbEi+mNSAsfnDYoDBgPTvzzz+eCZMX+4MOEQr6ylTW5wj0nGm6pPFHgbWwDt82oEmSY90iTNqOlGd/F2INtQJRrOvzp+zG/qGEMUscfMy75enfinziX/7nm330NbEXZD1bLUaEH2DPEFKov1rSxKkRCBSTOiWjH2678YoSt1/f3VbTouM5GJfwVq9DA9QLoN8uSkVpAJ2I5tYgnd71fpfJ1eu410LTP245XzQue1hh9No88KAlNLKjqOtqWf5rT4uIDjX2xuU7HSX15w+4zcynbPxZyMxIa2S/OcI4fyRodGSiUuCahOtMBfJWmC8AfOwkn19SqrkQC3Dgfh7r5kIAag4GrugxoKEMwoGVOWFZ+vhIsIeFDW2KAqFBX70uHvzmfwYES20H3x60Y68SQmmaeQ7huvJsHMQgOg/J8wXYXkOOuzG5klgk0miGKxiULNJEEzngyH+41tyUZkOTLeQWpHp/FoXEnssnoRMq+JmIqDPbAu1VJNiFV9zaPBeWztD68cbQhFUSAOdBGlx1kzY0gIqHSjQ8qIvQ62W+uiCkzLxZ9EPgX563m8aUzv/KXJFBxAD8R0yTxhARPhukVmQPtTwEHl3s8mRZ+gnl8aEqtu9Yq9688MX1Q9hqDLSl48YPXUxNgPyKtUWGgRjvIOHX0UHcZw7WbhOuNBKp3QHpbgLj8cX7XAb4rKI854PZL+kobpq2onx7ckAhhhPnTzT0Rp2A+U94su8DgPqQBMWs/Q4z0OU3gHn5uL7UqA/RxYheMeYGWBINVefahgykVDkRRBZ1oilHDwQuoxhIk67SC/lVBIQXeAO+E13QYUtKkieu24xCYF0vJjHL1p8QNVkw5KEOxmIhjgAPd1ZfYVaVTvjvPk+ar06nYFP8DyMt8Qhc0WBrZIvEcHmyOgD0LzLUSbAKtLMQJJo2uciIxUACMbEZntumaePWZ3HzOmiJBsotAdWjDeEC1jd5nkN5A6fg84fSxL4QoL5hrTMaTb19Rq45IQGNhVvUTpvXyvvBrqIIMa7GsofewYzObj3nSSC+VYhkVfUqnzyUd306XqsFaMngtTgPJawpy9u3ER4jKBl2UB0WGXISiCPoQqK6/RZwyMQUpWif85ZdNIWXIn8oZtB4XqhkJeMlyU7Z9FUSirYJ5rhegDU2RqgqYpNOXci+uKr4D5FFV0noayofCkEAqstXLc4qCJhCfAcw64MEBUcVB7YgGoIHuYjXB0IZRp2rAyxWNZyKnrCuTsPgEK6agD5+zBnniAv/qk3E/eB8+HU2b2A3iGh9Kv0b7WIcgztuFcgmeUkE3aXyjRIaDwAXrftXkOggkDkHaYLOdSIk11Ui2FAf1qEUrN0YmQ2rDzhHLvA2zG+rSE2CXyA8tPj6tNIoSPHaLfXQ5XWUhoeyKiVL1YPgPnxRB13EShLlhok0jfLpIxtC0fC9hCJAuw6MVQCUtwHHznqK9zYDvNze+R4UTEDM8UkDPeSHOrY/Zh2Ej6jMlU6EjSEfPoTr0G+SEw50V3XUb6jBC0Rm2ZD2683nzOjORovrPt3l7cmBCLut9nUKQeSsBJlNolfmRbCtj3RT3BZpgtDeduNurfNCwboExssL5sAyo3QgYrIA2MTKBpfeZVyZQOYZq0bFa8m1kFqYK/PNzTQM4iCjBDFWFXwKSumtBnmo/n7bkIe3xzfFeDKFeGPqu6Bf+TYV3OrZT/aODgh08v3//9zNS/5Gs1UOX61vK+o85oRl/z+k1ZOYWlX/0WACR8moQT6Su6akTqwUOlh8wCVRsaqUAnVd/xeu4A3S18dmyjyueOs/i8316Y//j/X9+Rhd0CdnRDyEv+zVQIhTfNAyD6WquXB7xAsPq1vKpgHPeD+sNYonqCJm0QNLxtOwPYDWIFMHpCSK9Bh2QI7kPKJlKS23RNF3AYLZ3SCbNo9xC3FmAWCYJXp/+HAhFjWSySzSO2TYe91BqwdSyM8YiPRhpSsk7uGQwg+Hq0N1RSlF2dYPGDUUCHQ6X+lIDn/B6wogBVvwAJCCByittJNUbxCRpCcWv4XN0Sh4bT0w1NDv8NZoBoF4/kTztCv2glm4s+wDW+i4NFBG08gQN80w4ScgzQs+PzJwCv5FMv/ODTZyn8WbbMHJO+N23AWoBXwUe9/Dc/vRBD2/ev36ILhqtSsbCsCqBjAPzIhPHJxtrAy+5yA5Q7EE0N+aA6MYncsCOfMpJeVNSvC2HB554A0fTt5IYBuqG5M7Vqn4K4fVJ5hloWyTU0B
zDDog1H7OHt//tTBzAvX96OGp+XZ98Pqt1jUBQnG4eFzCSidXMvIclqalL/JHVzhEyETNlAsRLfjxxxvkRuDHGK8OOuus0TaJLRcBfXzh3QF0mUQd0B3bFDxnosLbBheXw65K4Whf61F2LvxVU+T3V4pawyFJJYNWo8AMO9gryZKjnoQO1lixofYIvOrvI2XyzOkxEhRSFLqkIcWoboCmeSUoaSXF1fd52r/ntSOCRUwRrJN/TJcB/VGewdl5qCxyfEiCR9KewSH0UUxHLzuPRztLAD/3QIFycJMIjTgzhFNANpxD3sOD+BZtcm5pw16toQ+0BRAQeRgfh+2t9c008Yr+3Kn9IQPYecOwQ0pWRL5paQKIFB2kGSxXQxDaGEv/MLCBFn3Exfgz4IL0Fu2Zz3BuvxzIHQ7TCq3HUIhS2ruEkmRD1PEyMy0CgUwyhZANYZi0pBN+AtxGzJgQL35p7O8Xc99RRATsV4JH0Mj+aQ1/U/YI3QRI4bmkk/ZWzZVtSGXSvuNBCBaHoQLHqY1mSKFbpsB+21hcL5ZhI8oQT3jytGmPs4DDksoldBTXIJkNzcU2NChzSsG+mNbJDaIsvH4bmyom9AW2aFzXXtCfbWwL5tPYcNme05we9m83Ir6nCrdMJdkBxgw88ggjAXafRTps9RRY5KMj9IzbPq0h8g5H7cCKrNrbBKg/0lcHYiUaiflSREPj8sLhs1weg9wPbByp0ziptIvxcsgZ4GOPkXYKjvAxPlGAeTA1yjiU/KQjNH9BaAwhNeypHtKc+Me9B5ezI94y6qTtXL3LcKS0mCnZ8GukdEPW1FACkuQUMubrCcXbMTfEAzUY0Di/UnXx5cK0XH/NHi4z9GEQxFid6XQfjJAVWuyrRp3suHiNzDugnEz1WEQGKhgZIHn+NQjDTQAmA3qCsBPnbEt8HkINA1/TbvJdRUE4kOOyq9V4BotO6rVe1a2RaDz+JwSN64A0gB2wm7RZD8yumtNhQx9BgzeGeJEllHMnHB6UGxAsXyfIQ3o8kh5GJsUjKa4YKLDdeUWgo1bokDEYgEqemJ+Gr01amnIeIG//y3RZK4A0rRagiYC17Pe1Kl68ToLetYgKzYlgENgYatvMTd8WY6E+F4VatVysV+7EnmMbibNktUFFHfKjHVHyME/LpSQQhsD60vOFdSVWQDzVOVEzdM8dJqZmA1gvgIWPPE4I5AXvq7RqJRoFAXpKvMTEaUckyUuOMDYsM8FZj16SDrN2wWOhKogfy+GjciizOsjcLuihH1ydoiILnCFH21ik5TisO8KDmpY7GGxOnteU82ah7Q1rzdYpLsPrmAv41NN6kVEnDXwqXAl855saBVyhuJE02reEGS6fttk8kzfCw1JPF1lt0K3yAp8A8Of+1jXz6CDoaeEIL2GDhMxPoh3y+gZQNXeeu37Z/ZHw8IMctmFIij3GVxgc+dxlCsEDVEPRRaIt+KFyQCAUCJGl6xyIyczSBJihCrywTEPr+XdmU/0AZvqgyEK8Aoct6eTnxSpxWwCIoyAN0jroD+8sgzAkMzneouPQWQvSWSnTPSE8J+PSQIOPPpTQM0NFMQznWvrKyL5gCwIl61Qd8cQDXpM4TMQrfHbVq0/lNryTW++rx9AfX3HwwYA6cmRFNOwTyYWQAOIhhHy8Tvw3lI01T0JwS/P+vaO+LYr9p74vgv2vxiSDU3wjy920+YezPGikB/76V+19Ohwr0Oj19LCKdrpf/+gFofvqrDqBEP7c/fvHX8WqLeqJyCMa67edf/uJb/Bo082yCEfzBg0adCGSBG596ALqNU177V8vQ7/uBY3Dd8tePAfT0zz/740a0v5hu8c/2pA3P/YyHP2hFWudRBG7Drlk+xe/Ov+ZPrIPf/W6Gxr9CXKhfiwtMkX/7vcDAf9QUFoX+LGH5fe93N/aH/3T//hfsNwH/Rj38Qfdv4t/a/Zv+3WafR/u824kCURj6bw7ff7pA/+NdoKl/tgf4H8nAn9YF+sfQo1/LwH9mfPzpG/8H7b//vRv/++bvf7f99/+RUVN/hf4GnWjmV/OmSIL+y38/cer6lxEP+bluwOp//zCyvxce/vqC6Psx8fMr//LPzKn6n+dP0X8sCP+e+VM4AQAn9NN/8K9tEI3/+gv/0WlUBEz/DUP/4a/9k2dTIej/FsYV5qra//qehjmc5gFMj4e+Ye/nUlD/AbqnrPzwg34AXfr3wAeB8L9R8L8R6f54pj8QmW8JCX8aXvCzEPwYY/CLH/0DkkT9kST5f1c2fiMH58vk3fj3dusX4vG7GQq/1b5jGU9h9kM1/0Juzhf7Osb/mg2nf+PY0MgfODY/NuBX1g7/s3ab+F/e7eD/726jBPoTpfF/aMf/Pu3x79nx8P/vjv/Wk/1Ttxuwmi1Y2Z8Bw7kymdpGMbjivwA=</diagram></mxfile>
2203.06486/main_diagram/main_diagram.pdf
ADDED
Binary file (44 kB).
2203.06486/paper_text/intro_method.md
ADDED
@@ -0,0 +1,18 @@
# Introduction

Data visualizations such as bar charts, line charts, and pie charts are very popular for presenting quantitative data. Often people use such charts to get important insights from data and make informed decisions. However, it is well known that inferring key insights from charts can be quite challenging and time-consuming, as it may require substantial cognitive and perceptual effort [@perez; @whitaker].

Automatic chart summarization is a task where the goal is to explain a chart and summarize its key takeaways in natural language. Chart summarization has several key benefits and potential applications. First, chart summaries can help people identify key insights from charts that they might have missed otherwise. In a study on a chart corpus, @carberry2006information found that chart authors often failed to convey key insights from charts in their corresponding textual captions. Thus, automatic summarization could help authors write effective reports and articles on data facts by suggesting explanatory texts. Similarly, readers could benefit from such summaries, as studies have found that captions help readers find important points by explaining visually prominent features in charts [@kim2021towards]. Chart summarization offers another important benefit of making charts more accessible to people who are visually impaired, since they can use screen readers to understand what is being presented in the chart [@Ferres-accessibility-2013]. Finally, the generated summaries can be leveraged for indexing documents containing charts to improve information retrieval algorithms [@li2013towards].

<figure id="tab:example" data-latex-placement="t">

<figcaption>An example chart-summary pair from our Benchmark and the output from one of the best models (TAB-T5).</figcaption>
</figure>

Despite its numerous benefits and applications, the chart summarization problem has not received much attention in the NLP community. Early approaches relied on template-based text generation methods that combine statistical techniques and a planning-based architecture [@reiter2007architecture] to generate captions from bar and line charts [@fasciano1996postgraphe; @mittal-etal-1998-describing; @green2004autobrief; @demir-etal-2012-summarizing]. Recently, researchers have considered data-driven neural models for describing tabular data [@mei2016talk; @gong-etal-2019-enhanced]. However, compared to tables, charts serve a different communication goal, and so does the chart-to-text problem. Unlike tables, which simply list raw data, charts create a visual representation of data that can draw a reader's attention to various prominent features such as trends and outliers [@kim2021towards]. For example, a line chart may depict an important trend whereas a scatterplot may visually communicate correlations and outliers. Existing table-to-text approaches are not designed to explain such visually salient chart features in summaries.

There are two main impediments to addressing the chart summarization task. First, the lack of large-scale datasets makes it difficult to solve the task using data-driven neural models. Second, there are no strong baselines that utilize the latest advances in neural text generation. @obeid made an initial attempt to address this problem with a dataset and a model that utilizes a Transformer [@Vaswani2017attn] architecture. However, their dataset was built by collecting a small set of charts (8,305) from a single source covering only two types of charts (bar and line). Also, their approach does not exploit the recent advances in large-scale language model pretraining, which has been shown to be very beneficial for many vision and language tasks [@devlin-etal-2019-bert; @pmlr-v139-touvron21a]. To our knowledge, there is no large-scale benchmark with a wider range of topics from multiple sources, covering many different chart types, and with models that employ large-scale pretraining.

In this work, we present a large-scale benchmark for chart-to-text with two datasets consisting of 44,096 charts covering a broad range of topics and a variety of chart types. We introduce two variations of the problem. The first variation assumes that the underlying data table of a chart is available, while the other introduces a more challenging and realistic scenario by assuming that the chart is in image format and the underlying table is not available. These two problem scenarios motivated us to adapt a variety of state-of-the-art models that combine computer vision and natural language generation techniques as strong baselines; see [1](#tab:example){reference-type="ref+Label" reference="tab:example"} for a sample model output.

Our primary contributions are: a new large-scale benchmark covering a wide range of topics and chart types; a set of state-of-the-art neural models which can act as a starting point for other researchers to expand and improve upon; and a series of automatic and human evaluations as well as in-depth qualitative analysis to identify further challenges. Our code and benchmark datasets are publicly available at <https://github.com/vis-nlp/Chart-to-text>.
2203.08257/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="Electron" modified="2022-03-13T21:40:47.675Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/14.4.3 Chrome/87.0.4280.141 Electron/11.3.0 Safari/537.36" etag="rJfUulIQWEPNhJGKg-pX" version="14.4.3" type="device"><diagram id="0uO6b2EXpVxMsIYvZCG8" name="Page-1">7V1bb+M2Fv41BtqHCiKp6+Mkk+kWi0Vntwu0fRootiZxx7a8sjJJ+uuXskVZOqKsi0npyHYwaG1Kom2e7xyeO2fsfv32cxxsn/8VLcLVjJqLtxn7OKOUWJTO0n/m4v0w4lvZwFO8XGQ3HQd+W/4dZoNmNvqyXIS70o1JFK2S5bY8OI82m3CelMaCOI5ey7d9jVblT90GT2Fl4Ld5sKqO/r5cJM+HUc82j+P/CJdPz+KTiZldWQfi5myK3XOwiF4PQ/t72MOM3cdRlBxerd/uw1W6eGJdDhN9qrmaf7E43CRtHti+Ln/9/mX7y/++v/3bCZbu06/3v/9EXHaY53uwesl+8ow6Kz7j3deIT8y/d/KeLYbzv5dIXPhptyfVB34DsbZvx4v81dPh/xb/F4evQbzgN/0Qz+yHpx/F3I9xfh+f2b4//EvCt2Tm3v0neuFEcT/+kI66d/y/z0E6vuNjX8h+kH+bez7rL3zCu+zx6sz2/SpYPy6CdOblmqOo7hNev/D/kGxW+z5cb5P3Xch//P3+yl/VK4Xv8E/+wz4dBn48/GjxVTgtDmsoVoSWlpPG0ctmEabkIfzy6/MyCX/bBvP06ivnJj72nKxX2eVFsHvO710Fj+HqLph/e9rPcR+tophf2kSbMKNPxknE4e93SRx9y+FL0zuWq5V4aEbZwg69hZXfWbhi8r9PHFF338M4WXK2+LBaPm34tccoSaJ1/quKMMyQmT4RvhWGMlj+HEbrMInf+S25lLAypsmEBMfU4f3rkeWoYLnnArvlYiLI2Pwpn/zICfxFxgxdGIN5EsY4h4CQKqdpWCJ3IwG/fg2d+fwkAVXQySRlOlmZyCnSySISOhGqj07ujU5NdHJHJxOVECkVlQ+pfH2lr3tpepCdKmmnYDEZkE0W8w2/+GdV1taRLC3TxgBmM/7DzeJDqgvxd/NVsNstDxAM4qQ6XFhNvmjx+x/8jSne/Jm+MWzx9uNb8eLHd/HubZn8IebgrwtP8XfHh9I34pkSH6VvPgdJEsab/Ui6H5wg5i56iedhM/rCRUnXq5K8QEPZbiPG4nAVJMvvZQ1RRtjsEz5Hy70mJRDllRFlWwAbh9+TPVXU6MBEcNu0bEsKTTEvJ/hTmFTm3cMuX4UzkEgkSJwGk1dWEhuTy7T0aSytjX1prckurYN9ae3b1tS0NQnOPsjmFjhFsonZqjYxhwEQe3IQD7WJOZMVBy52cdDCUpuAODiD1622vO6g4nXPAcDyT7NoW9b3oTvBGpf1W3h8JoBPvduV8ME3QpihgrCtCcIOLgjTmzOgGcK4hGsFQYqQ6eFyDlCZcyCL2IgAyX+fwzgsBE8eayMn+zBOCZ5lh6oIhBS8r9lQkMUv5hw3YSwJbKyXi0X6MVIF7qjindQEOngX7TKVmEeadDZLgkV9fluZ46GNQJFKjqOw+LMkK05LjqO0+LMgR1QrZ7TtzkZRiQ8KNzZbjfhgUCyZ425sMldCRXwsd9clPWzs0kNmTHeVHj2kQF+Jc4b0sFFJBdfVo1TYcN4aF8NQUkHmUZiGs6ainiFz1lC/WeDG4TpKrkxhc5GLXNbXApyeyPVRiVxfk8itiPJxRS6rzxfMuX8erVbhPFlGG+O6hIOHXTjIYp0XKRwYLvcjMTVJBw+ZdGhhpoVfv3LhcGWSwcIuGa7GUmO4LDVHk2CwkAkGmaUGBEPwFF6ZVHCwS4W+EdHpSQUXlVSobOuqwpXIpILMyVCPL1nKPaiQKQcK2UyWk1+LEiTEt4DrnfnEsFRsCVa3eTUT3+pmjFwJ8R3TBUSiSojvsG7z6iZ+t9TQKyG+ZzpaiO+xbvPqJn43Y+NKiG+bthbi26zbvJqJ79Tl2+++kJpYDQ59X0dtme8ZllsgRYlQxHMNP/NjjWUBODI5faDWzL1b1sfXENEs+1Lm/moSpH5pfSR1aFnJIrLay0EpWJfQvqfgZhIUVJwn77sGQchqLYyh3XOwTV/OX+LV+10czL+lqUpN4etyrLvAA77Zbafs4O80HcOkpTXmCy9bY2oZPqkuM6HMcH1NSy148iKW2jE9sM6+4dHqOg9a7+3K0j2nusKeyQWGVV5kxpUoSZMK0zB9yTqTI8jVLzWVLHXuWm3RvsWpb9+SdUxZpj1T7qPN98Ju0cJPKwiccCKGf0fpr7jbhvGS/+50WziOfz4ONqbiLN9C0SaIVHYLSQMGDRWtIOWS+NUtPm8DJDMU1ENAaZZuyeNquG0T/E2jS4p/lVLp5QIQznToit20MZ1XsA8SK5ABDy0VRbTdzT7g6nV7uXo5ZoL3wm3b9IZd/fe3CNyNHADvw4xKbUpXaV5DFxjLQNthD2qEsUADEnRasOoX5h+2r5sCMGFA19PshXD7lqbLJCa5CIkpVLapSUxlmBxJYnqk7JkfRmKeLHPf0R3ezGmJaUmsqgY2aLa0ezIRHfVywkYO46+lLOlgGmvpOq4hKubQLGe3GHunCj7SdqMrbXO06z53xp5G2+5puIqCbeBFZlRP/Q4dyCgA8UdqDrDFCbk2KvBLSU8mQtzjyntCjntl0GzhHdUNTYJfJns3bOqTya4PzI5BZDLtCXxc/Umq7KC1Y4kQ0jeOwSnN+3rfcYG6BYSVbgPOTUXBDOq6NK2P6EvriQlIZI1tg3t1WVQTWEyCbjHrXJcTWEyKbjHrHJcTWEzYO2v8xazzXH5E77msyMzR/ZaezG85kcWEMnP0xRQZKlNcTCgzx19MmfdmIosJZeb4i9nXIzC57s9Co240eDxcFeo+NHj6xtdhhxzqgYk0mzS+xiy5EQzyPLsx+/bkDGjabaGJK0x2OdBUkcCGB5rp6F8v662AZlYXsuNb3XLzNDu4jlQ6k7y2ziQPlzPpcgCsMp8O9zbutYUasv50FwM1Fa2K8MjKMwVf2zR3YQHe0KgYjX0Ptak72ndoaNbs3Ae7U356cH+4ChQ2wxVXbf7lwLXOq1Y4Avu6qoGJCdK+mcsM0ZtmrFrgvHxs8l0SyhX4CshFALmoOJJoPFrV+QMLh8dPiGIqiEQrPHUsgB2PTjJXY1blGq63yfuOb0nXRinmYKRU3XG3U6WUcikINy0+Us4WcccmYV3+yI2ENTISwUZWl6bysFlMllb1bvEzpCQ1XcOnBXbzxiZdXVLMVNlNh6qIY2+TJYkcmmksRSeNXbi3Ged8TVLiHHttLGt7bZRXs01kOecPTV1TYGGhpBHWsMcPmzLbF6z8axQvLm7l7Zqz
20ajBJFZtx2dZn38XcP3lc7ZvTlcgCt33wUYqkjEth4vQkE7Yx9MpKiehWsv5c/Jgtpa61kIkdn+145kXIEv2K7J79lQh+tYIHcK9nNShWQyCpJl3pErRzKyoJkyJJOhkExHQbKKPK8BkawywNYB27gibMqwDQ9E1oZt5o2BbRWJYgqx3VRtfg6S26Y2iMRaJEiGseLemjOMFeetpHVrzjYdBMpKU8ZUNEHQlTLWAcq4Ur/h+eK9W6sNBmWoOg8EZYWn512MVMadKo5fKkPdeSAoK8hnG1Iqq1WeW4Mbdxo5fnBD5XkgcKs4z3CiLcHag5uiwrZjKcK24/fDtjL0iS9+Q98p9BFcfokLgh+CpnP44Udx7ewXBD+l1beXCj+Cy2q6IPgprbC9WPjhcqVeEPwQODsnAD92g58e+Cmtme0W3pTDj5yE3zlgaxsARdZX8oLAptGFqANs48TfkfXqvSD4aXTyXRD8boaGJvipOG9l4PA0LlHkwh51sEa+fcYms0szWZrCGf7JNLfG2+2s2rv17wAPaIqWiGD/iFFtOlhYu3VyM64MDWW84plgIpjqMRFWIeOwyvhlAAhZBZeKcWMVmGAyDqvQibHKOPo5sgyTG/PABBZnFOYZv7SBdjKIz2GV1slYF2q+DMYqp3OxGu8X5bytzRfwgC5eURDxg10JLo93cNVqViDfm3cgE06Vd6A9MxDvKAhXXjzvIKsOvfFOk4EzEO8oiLVWeKcvH0wnpR5ZPeqNm5osnmG4ScSFdOxEw2QkIDNPYK0Igf1Y2kdXALLzRi9DhdosBV7XkbGBS/3mJC13igIc3zsQB8syh4cKnTpUkGmb2qACyx6Hh4oCH9vIUMGlSqnbcWDV4PDY0OFTGjCt94ydKvvlzXo8ri1NGfgqpbHW0NjT4ZOZBvbaHk+Xt8G9gU81+PQ6NTCDr+3Rcvk5EDfwqQafgoTyiYLPbQ0+egOfHvDVHeo+s+/THtHLZOberesPeeGfsdzu+Jum1tClU8BWwWO4ugvm35727aVhL/YjmKncY1vs1j6j7Ks3D+fzGWztzq+Y/O/Tp2xO2XgteLucVgHb5zu+4WWALeDRleDRMQ3RtUN9H2pLQa72KVe945V89YbbeIhb+u5zGC/5D0w76vcWOSUwVeFRD4P+sirnlBZlfriSmzxVwsrrKay6d+yD7hZ/CF+8WG5lZx3W806Zb2gj3yipuxmXK5Bllk+fKSw6CFPIghA3/aDbQS1I9QObatAPzt3N64Rko3IxkMATDDG1/Bl1bmoKcwe0dTsdRw1QECA5rQb02M17qw7YuAJZ1YxCrgDnwFgWkNu6uGIgPUAWGrrpAd2ORPShHuAaol5oVD1AYzrs5eoBrSOGuPzmlbwm4rlG39MoGJzL0aQKMAoTMMSX1iz2FASGBlMGGGN9HHDYGAZZbaBWhrEsMJMuhrHIMAwjBEBLhsl28hKLANb5HCQcq5v9CDVZBbDsFCyxYIiCBjDM7K1nuvT0TLojRk6dR2j9hauApF4FxHESbz/9UtmhvaY4nVTogCL1YLQTex2ZFyglIPnp4imq/CTtvAJecKdonTEeeetOPd8T968LJq4SelbY1R2bnnVHoWfseskUVc6u1HQBu1bN8YHJK0sGytk1uWDiqqAnE23Yc3a1x6ZnXX5Nxq43ijZQVNSb4uFQWVZNSsAP84SvBjV/4GSd2fePX9OP5+T9svvxUikcRwk3i6L0A31Ts/fUEs+AAs0iEkTmzTBIcGW27kkkvN6QoAQJwIniVZEwrExwG7IkOALcu99SCm346lPz4S2Jgz1ILlf6q7etGCB7lepEFjvRSPY607lI9t+jeHEjeU+SAy9mfmLriDRvUSO3ew626cv5S7x6v+Nk/5Y6vJsin0cynCV+f0qLUzu1uegies2ygk38qvtKBCqL9BBVpxro0SJ4iZMeu8P9h8iPGuIAJ7NDDNEuvkAfKvVzG+JcAw0kUlDgIontt8yubZ9c26l1E38DA2m1NGwOjgmp0pxcjq3fBqhZZn3P0CYm2N4prLFXFRqz4AeVE2gkDxC//IA9RMaNq+Cgh4Y6CRVJG1pixS5ryw7CGr+xQ192sGHjo+xXa0Z3nRflKQ4Wy3D/lS9TSVax09vwGBCJLTSsBewpaFakRFq1qYoZTVoJU+XipJUFG0y3bF1ytrSyfDqAtBLVGs2Gfmrcm3z08C+/9rCZR4vwZvu3l3G5ySgQZY5u+nuyyoBaL98NCcqQ4Bii/j8Dg1+Nwg+NBRRuoOZNUsH6M1jvI1E2mGzx4R6gcPVROH2aV1+1iweSgrntSEE0kqKhgcRe6H143DV6v8sr30CmNoQBWp5pygikxUtqgwBVTpOivJK54WDtsDoyCYPhJJkeNt+XcbRZpz/9WvYnvaFKG1jxuZ003s7l67HUZCvYLCAla2+WnbLMt0puWd9nQzpmkblbPVhC1rteERpS7GyLTXmmuV+XusplVrRdveB2EXUXSjJGUKE0cP3ZL+vPjDHDKra3tSpSaVj3kV+X1bqdRAq6Dq3bNg3LKZAIZKoSCxkF6xJXX6+YgmWSOXz3MqsUHYtkVBjVcpKdKOa+WJLB0yzwkayOy25bYic6W1y4EjP/K2eWMkYMl2Ciep2Nvb1aRrV8wweZ4YTioho5YXLfeLU1qW3LM0BZBzb1lZK6/N89g36bGoOq9uDbNheooDASmf5KSV2Dm9cbCfckNCsk5MoRLhIq6GOj05nVJb+w5GGrpV/7E7mR+K8c0Aeh9zk6pgjACTBSvzwTjoO2mdOQW1j5HeUH9OQzUKK6rQ0Wtim0py50w8k8xy6dNaQCy9rjKGC+5sQg0T8PCZt6MDGod7sSAmeywUwT4dM8+WNYPlWQOY+eTymZEJ9aqPhU1Xbqw2TjiXIpI3QMLtWTqY+LS32vzKXm4Ezqt2VSC5fOq24zNb2yBUY1tVs897BlYfed+CHuqQd08amC81Cw86nFVYYynx4H0DHqpWq9g22nuvmUiBOcB+VT4SG6ZD5lPrBOTbNX89Zh+BSX1nvj06re6516QBefTkLvLfCcAy1Nv/E8xNMnlrlDnk2MhPuqLstznLh524GjH9ewbI/Pmf2BxhHKTlOCdWRmV0etNwSLMdWHkehmMa9iJvZSP2tYTNeR8l5rBxAy27JqScHwWntmhKdUadoMK6wnEiXaW4tDnHhCWYsKL0ysZ5kVjbLX7lbLRZPTE6tqUe/6ZkIaVM7Rs+UpUxD9HnSroJX4nHf1eAVBJniyWvtzJgjQViDyNZ8SkHfivKERl4+rGsgUTQx7hHap4RSSo6Ei7xnEdUxLKPLOwACcmHGqffvGA0DgU2SsNwB9AzQFpm5qg5jEFaBEtktzjtACy3Z9cs5tcKenew6ug6HV9cSxYbF+yz347IZ2wDUheYDKv5leg8qqS5J+2CyuOrcW+rPTA8Z8dqxOMcdOrLXqcqMf0tzoHd0p6uqgo1Ck7nj2wmI6ksWEvKpwMfvaaTsuL5KTG0OfmFSHDhiNstzF5RxzLcuwyvpB5YDKttK8giO73cF96qSnAsW
hvi/KaS2ipksGVGHzTru5q/aE8lqLskazPRdHSGDmw/5IfXsy+BBjLVt+qsOYAqO9Dg9mAx46yKHJIcQB3kPSN2DtwePKW/rolSHErtPh0i53+8NOwvU2ed9x8tD7FDhfyOFF8Up6K3/SWaW63WPMXz0le3oj1gHVlEMqPyDBsoExLTh2NF3Rpk0IeQ7Ss+JTnZGD43qxUBZ4RAc6bLChUNc3PG0A4W/jKEqKEoev3vO/okWY3vF/</diagram></mxfile>
2203.08257/paper_text/intro_method.md
ADDED
@@ -0,0 +1,53 @@
# Introduction

A diagnostic radiology report about an examination includes [findings]{.smallcaps} in which the radiologist describes normal and abnormal imaging results of their analysis [@dunnick2008radiology]. It also includes [impressions]{.smallcaps}, a summary that communicates conclusions about the findings and suggestions for the referring physician; a sample report is shown in Table [\[tab:report_sample\]](#tab:report_sample){reference-type="ref" reference="tab:report_sample"}. [findings]{.smallcaps} are often lengthy and information-rich. According to a survey of referring physicians, [impressions]{.smallcaps} may be the only part of the report that is read [@wallis2011radiology]. Overall, referring physicians seem to appreciate the explainability (or self-explanatoriness) of [impressions]{.smallcaps}, as it helps them evaluate differential diagnoses while avoiding additional conversations with the radiologist or the need for repeat procedures.

::::: center
:::: small
::: {#p:report_sample}
-------------------------------------------------------------------------------------------------------------------------------------------------------------
[findings]{.smallcaps}
-------------------------------------------------------------------------------------------------------------------------------------------------------------
^[**$\psi$**]{style="color: blue"}^there is no evidence of [*midline shift*]{style="color: blue"} or [*mass effect*]{style="color: blue"}.

there is soft tissue swelling or hematoma in the right frontal or supraorbital region.

underlying sinus walls and calvarium are intact.

there is no obvious [*laceration*]{style="color: blue"}.

^[**$\psi$**]{style="color: blue"}^there is subtle thickening of the [*falx*]{style="color: blue"} at the high convexity with its mid to posterior portion.

there is no associated subarachnoid hemorrhage.

^[**$\psi$**]{style="color: blue"}^this likely reflects normal prominence of the [*falx*]{style="color: blue"} in a patient of this age.

^[**$\psi$**]{style="color: blue"}^remote consideration would be a very thin [*subdural collection*]{style="color: blue"}.

[impressions]{.smallcaps}

1\) no definite acute intracranial process.

2\) mild prominence of the falx is likely normal for this patient.

3\) remote possibility of very thin subdural collection has not been entirely excluded.
-------------------------------------------------------------------------------------------------------------------------------------------------------------

: []{#tab:report_sample label="tab:report_sample"}[findings]{.smallcaps} (top) and [impressions]{.smallcaps} (bottom) sections of a radiologist's report. [**$\psi$**]{style="color: blue"} indicates a sentence in [findings]{.smallcaps} that overlaps with sentences in [impressions]{.smallcaps}. Italicized words in [findings]{.smallcaps} are core concepts (e.g., disorder and procedure) that assist in answering clinical questions.
:::
::::
:::::

A well-known end-to-end method for text summarization is *two-step*: extractive summarization followed by abstractive summarization. For instance, @chen-bansal-2018-fast initially train extractive and abstractive systems separately and then use the extractive system as an agent in a single-agent reinforcement learning (RL) setup with the abstractive system as part of the environment. Their extractive system extracts salient sentences and the abstractive system paraphrases these sentences to produce a summary. This summary is in turn used to compute the reward for RL training. However, this single-agent setup often fails to extract some salient sentences or extracts irrelevant ones, leading to the generation of incomplete or incorrect [impressions]{.smallcaps}. We hypothesize that granular categories of core concepts (e.g., abnormalities, procedures) can be leveraged to generate more comprehensive summaries. Thus, in our two-step system, a separate RL agent is dedicated to the task of extracting salient keywords (core concepts). The novelty in this approach is that the new, second agent can now collaborate with the first one, and the two can influence each other in their extraction decisions.

Multiagent reinforcement learning (MARL) requires that an agent coordinate with the other agents to achieve the desired goal. MARL often uses centralized training with decentralized execution [@foerster2016learning; @kraemer2016multi]. There are several protocols for MARL training, such as sharing parameters between agents and explicit [@foerster2016learning; @foerster2018counterfactual; @sukhbaatar2016learning; @mordatch2018emergence] or implicit [@tian2020learning] communication between agents, for example by using an actor-critic policy gradient with a centralized critic for all agents [@foerster2018counterfactual]. The aim of these protocols is to correctly assign credit so that an agent can deduce its contribution to the team's success. To train our cooperative agents that extract salient sentences and keywords, we propose a novel Differentiable Multi-agent Actor-Critic (DiMAC) RL learning method. We learn independent agents in an actor-critic setup and use a communication channel that allows the agents to coordinate by passing real-valued messages. As gradients are pushed through the communication channel, DiMAC is end-to-end trainable across agents.
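
To give a sense of the mechanism, the sketch below shows a differentiable message channel between two actors; this is our simplified illustration of the general idea, not the DiMAC implementation, and all names and dimensions are assumptions:

```python
import torch
import torch.nn as nn

class CommunicatingActors(nn.Module):
    """Two independent actors (sentence extractor A, keyword extractor B)
    that condition each other through real-valued messages. Because the
    messages are plain tensors, gradients flow across agents end to end."""

    def __init__(self, feat_dim=128, msg_dim=32):
        super().__init__()
        self.msg_a = nn.Linear(feat_dim, msg_dim)          # A -> B message
        self.msg_b = nn.Linear(feat_dim, msg_dim)          # B -> A message
        self.policy_a = nn.Linear(feat_dim + msg_dim, 2)   # keep/skip sentence
        self.policy_b = nn.Linear(feat_dim + msg_dim, 2)   # keep/skip keyword

    def forward(self, sent_feat, word_feat):
        m_ab = torch.tanh(self.msg_a(sent_feat))           # message from A
        m_ba = torch.tanh(self.msg_b(word_feat))           # message from B
        logits_a = self.policy_a(torch.cat([sent_feat, m_ba], dim=-1))
        logits_b = self.policy_b(torch.cat([word_feat, m_ab], dim=-1))
        return logits_a, logits_b
```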

The novelties in the paper are threefold:

- a summarization system that leverages core concepts via keywords, refines them, and makes them the basis for more fine-grained explainability,

- a multi-agent RL (MARL) based extractive component for a two-step summarization framework, and

- a Differentiable Multi-agent Actor-Critic (DiMAC) with independent actors leveraging a communication channel for cooperation.

The remainder of the paper is structured as follows. In Section 2, we provide a detailed description of our two-step framework. In Section 3, we introduce the DiMAC training algorithm. In Section 4, we describe the training data and experiments. In Section 5, we discuss the results. In Section 6, we discuss related work. In Section 7, we present our conclusions.
2204.12584/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2022-01-28T08:23:32.554Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36" etag="CPR1X5JgbApU-b9kIb8d" version="16.4.11" type="google"><diagram id="vxliIlaKNQ-DwOdwAGgR" name="Page-2">7V1dc9o4FP01zLQPydjyFzwGKG1nm7bbdGc7+7LjgAA3BlEjmrC/fmVbBizJQTaWSBil0wRk+9roHN17dfRBxxksnt4n4Wp+iyYw7gBr8tRxhh0ArmzbJ3/Skm1e0u2CvGCWRJO8yN4X3EX/QVpo0dJNNIHr0okYoRhHq3LhGC2XcIxLZWGSoMfyaVMUl++6Cmf0jta+4G4cxpA77e9oguf0U3gHZ3+A0Wxe3Nm26JFFWJxMTazn4QQ9HtzLeddxBglCOH+1eBrAOK28ol5u4oc/v7mJ/0diP8zmwyj51H93lVsf1blk9xESuMSNTQ+sfx7RePvDtm+/3P36a7D49O4HvcT6HcYbWl/0s+JtUYEJ2iwnMDVidZz+4zzC8G4VjtOjj4QypGyOFzF5Z5OXU7TElAO2m76P4niAYpRktpypl/4j5WucoAd4cMTPfqiFg/L8h5TTB4UJhk8Mskeqxd5hRUgO0QLiZEuuo1aAW1BhWxCXvn/c08V3adn8gCpujxaGlKKznfE9DOQFRaIGKrZBhUXF60misoOvdVSAQWWHyvlQcAQo+DGmtVGCw/+1QcWBq3VW0zfkBFINT1klFcfJq1n69+NytcGFNfJwucH8GAc1qVpcxjOMo9mSvB6TKoYEkn4KQEQi0Q09sIgmk/TyfgLJ04T3mamUKCsULXFWUV6/4w1TWxuM1pQbHAWWaAkZvhRFe0oBq3hPn7mg3ChcRHHaqr5HCxKYgfUZPpLf39AiXCrlTs8rtWjH46kkYJKjikiuOiLdhtGyMHafnGSqH6Pxg+GkNk7KkdJTRUqPI+WH7SRBnyF+JtbYtWJNBkIOXpEQA1GU6VrpP3JkFofrNWWFBFhlfFUh57FZG+AjE+gKoPNVQedz0H3s35KC7/Nksz4jfC8WMuCeG7KAg+xrmITENkzSSup4gzfkf9oRvJ92AuIGB+TGs7ATEKc4eGsgJcgwkPKIOp5ORLt8UM9BzIETofbSg2beNyhn+qMR7QEoQ7brlyMj8PnIKGqrQBWyPUG6xiAJl5ObVDFKoUtjVjQu41runbFVLOhm9cFoxDXNfQcLTgrp6cRKP6hTUWspyhIYhzj6Xb6nqKLpHb6mbNxj6tg2EzPZNGaNNskY0ssOpSTGEug6jCW3d+0TZ7D76ZYN4zCZQcwZzoiwq4cT9BIJGatlcuxa4IWQQwBpQ3LwNGMtqWaDhHxm2HCEDb3WXAVjydHNBgnZzgSOI2zotuQbOEtdzWTg1cMBQTNBcUwSvbPl8+sHiMfzzgvrYvtBRcs9W3/NFml2pi3Xasssqh6bsUu3Zdc5Ykl1Y+bFMhPna7KBw9BuygaOV6wl1Wzg9TfDhrq+wW3LN7CWfN1s4KU9EynqsqE938BYctjRYdVs4GXBT2hdSLyxEXNzlKyKNns2NdfWL/pdvE/n2p50Kw6Y8Rtu/lNFKybwhNuD06hILv/ILrBqPln5AvIif4ZWnUoRGw/IOYym069D40q4ZNAVzN7S23UE9URBOgyjxI/83CxWBQRhkrorcuNRlH6ezPbr9DOszscqO40FQ056VJwsFMTQxZRn8sYLZQqjAXKKcGM1UfdIAxBMRmTmDKw6wfBf8pfcv2+nEwfA4PDwb/awGaGWHqF2yyPUtiBdBaK5qcqGqIF+efLi0tUg8K59vwxs41AiMAZ0uwj9IuXFCRHBMQzlCXGMWqrZIBApmYAxzSKCN8icftDPfg/f4LcmMsj3PphZva7PRwY70BoZBHKkwb31CaZMz8ETzFnTjLtgPqLJCE7rWLqNBSxi6dpxy8YkNazWAoCZxth6/5HDsHH/kaOWYjYUwyJMWJgbt1+rI8g6CMEsdBEflXl9R6A1GllAExsCi5EFenwSoFcWcF6OoFiWD3l58XUGBHZ4mZts0HigWvckFke0ulkhVZ5JFi+UKq2Nhh6dK6OaKmYu5OnCEjvQ3DiTbDrDpfbYOPfIzpGxcfbJmAvUjI07RvVsfbVW88l4PXYvFd2uykzNPNlVcRhaveuD9XZWMRh28mqtY4ZVc+W4Uvor7RzhvMc0QThfmF06mp9g+kzS22DYjGBe6CKHfSYgYLe6PpMRTl+Py+Bo0Fay0/O5T3Ak0vniJ1Ob7BhNt/Vkh9v2SXmyU5+cgfiRq5+sYuWVUnIW3QVDzuaOlNtrqPFiaHa5le65ja5ZGt/6krnmi6E5Xmmem+QCng2icQosGp0w+XWN/JpZWSOaqmj3BJxVll+79YRm4wgkHAGHVmMtWXZa2skra7Tkx249pVrl+NeaVCTmSZ0V09EOmxv7uBB2Ns6oWQVZHTsrHlktO0VSdb7d6noVLks0rbt565f7n3CcgbjfwDU3ajZwbSmyBmXlygeCKX+WoJmp21b4+BRfI2C2TQOnKzHpQ6uA6Vbq2NMkHOeQk0+NozBdfs5QoP7soJwzMmYrk/nsPnX2rLb9Crf3aicsCfbU3yUTyrhrV613PNyDWsRdtrfYHncrd1GtyV3wPHdBXe7WbhL1+WzIfNrSC8CSWbChumjhtzoyC9R5UTzmuFUdmc3EzCbU6FkSg4xaV2cUjtZQ4+wzuCXSN73UqJzCLQqBxxdxKYpvr5VlZ4lNLhObgOCLvjytKqxXKcCfr5Mg3zk1BKyb6bN76gs2VPX1urnKDSwkCHhabBSTUcqTGsbJ6yLM5F/BSuid6qqHcaLBgHpfu1WtO1RRF1jxSVy7jG/fehlR1xNQULiroDoKChT/szq9yltImjfpYLNozBDT4YkZ6PWNlUMHNYkJ5IlZR4NrhfuGoNJamucdJ6jedLGtQY0WhWFDSr2klOjDaPaaNUYryCOJ1Rj03Pf6GSYIF9qzcopgtx3Nol2l1H+WxG5PNkMq6S2cmIF88V4e7ZCKvE1Q2oHcHXufhKv5LZrA9Iz/AQ==</diagram></mxfile>
|
2204.12584/main_diagram/main_diagram.pdf
ADDED
|
Binary file (38.5 kB). View file
|
2204.12584/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,182 @@
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Soft robotics is a rapidly advancing branch of robotics, showing promising results in non-standard settings in which compliant structures and bio-inspired designs are needed to solve tasks in natural environments [\(Hawkes et al.,](#page-9-0) [2021\)](#page-9-0). Aquatic locomotion is one such setting where soft robotic designs are uniquely able to take advantage of hydrodynamic properties, mimicking biological fish designs selected through evolutionary pressures for maximum fitness in nature [\(Katzschmann et al.,](#page-9-1) [2018\)](#page-9-1).
|
| 4 |
+
|
| 5 |
+
Simulation is a non-trivial challenge in the design of soft robots, as opposed to the rigid domain, for which established techniques have been developed and built upon over the span of decades. As for aquatic locomotion, solving fluid-structure interaction (FSI) for incompressible Navier-Stokes is a hard problem, traditionally extremely computationally expensive and thus often impractical. Leveraging these simulations for design and control optimization generally involves an additional computational burden, through slow evolutionary or otherwise gradient-free optimization procedures. In this work, we leverage recent advances in machine learning for physics to take a step towards solving this problem, proposing an FSI simulation that is both orders of magnitude faster than standard approaches and fully differentiable, thus allowing for simple gradient-based optimization of design and control objectives.
|
| 6 |
+
|
| 7 |
+
To address the FSI challenge, our hybrid approach uses a differentiable numerical simulation for the deformable solid structure and a neural network surrogate to capture hydrodynamic effects of the fluid. To perform fast and differentiable soft body simulation of a flapping 2D carangiform swimmer, we leverage the finite-element method (FEM) combined with the novel approach of Differentiable Projective Dynamics (DiffPD) [\(Du et al.,](#page-8-0) [2021b\)](#page-8-0). For the fluid simulation, we train a physics-constrained neural network for hydrodynamics as proposed by Wandel et al. [\(2021a\)](#page-11-0), which approximates fluidic flow on a discretized marker and cell (MAC) grid [\(Harlow & Welch,](#page-9-2) [1965\)](#page-9-2). Their approach requires no training data but is instead trained using a physics-constrained loss based on the Navier-Stokes differential equation.
|
| 8 |
+
|
| 9 |
+
<sup>1</sup>ETH AI Center, ETH Zurich, Zurich, Switzerland <sup>2</sup> Soft Robotics Lab, ETH Zurich, Zurich, Switzerland <sup>3</sup> Institute of Neuroinformatics, ETH Zurich, Zurich, Switzerland <sup>4</sup>CSAIL, MIT, Cambridge, MA, USA. Correspondence to: Elvis Nava <elvis.nava@ai.ethz.ch>, Robert K. Katzschmann <rkk@ethz.ch>.
|
| 10 |
+
|
| 11 |
+

|
| 12 |
+
|
| 13 |
+

|
| 14 |
+
|
| 15 |
+

|
| 16 |
+
|
| 17 |
+
Figure 1. The forward swimming of a carangiform soft body fish immersed in fluid is simulated using our hybrid technique featuring FEM solid simulation and neural network based hydrodynamics. The figures illustrate at three separate time steps (t = 73, 162, 275) both the fish and the full fluid pressure field. The red arrows at the interface from fluid to solid indicate the forces applied from the fluid to the fish. The forces are calculated with the immersed boundary method.
|
| 18 |
+
|
| 19 |
+
Our contributions are the following:
|
| 20 |
+
|
| 21 |
+
- We introduce a differentiable layer linking the solid and the fluid simulations, achieving FSI coupling while maintaining low computational cost and full automatic differentiability. This approach involves the specification of the fluid boundary condition as a *soft mask* with techniques from differentiable rendering [\(Liu et al.,](#page-10-0) [2019\)](#page-10-0), and the computation of fluid-to-solid surface forces using a variation of the Immersed Boundary Method (IBM) [\(Peskin,](#page-10-1) [2002\)](#page-10-1) with Gaussian distance.
|
| 22 |
+
- We demonstrate on a 2D carangiform swimmer that our hybrid approach leads to realistic swimming behavior with forward propulsion, while requiring considerably fewer computational resources than existing FSI simulations (Figure [1\)](#page-1-0).
|
| 23 |
+
- We leverage the differentiability of the simulation to directly optimize the frequency parameter of a swimming controller with first-order gradient optimization to achieve higher forward swimming speeds.
|
| 24 |
+
- We compare our hybrid simulation with the traditional COMSOL solver for FSI, demonstrating a monotonic relationship between the distances travelled by the fish under the same controller in either simulation.
|
| 25 |
+
|
| 26 |
+
# Method
|
| 27 |
+
|
| 28 |
+

|
| 29 |
+
|
| 30 |
+
<span id="page-2-0"></span>Figure 2. Overview block diagram of our hybrid simulation method. q, q˙ are positions and velocities of finite elements; h are actuator signals; p, v<sup>x</sup> and v<sup>y</sup> are pressure and velocity fields of the fluid; b is the soft boundary mask; fext is the hydrodynamic force applied by the fluid to the solid.
|
| 31 |
+
|
| 32 |
+
We hereby detail our hybrid method for fast and fully differentiable simulation of soft body and fluid interaction. As shown in Figure [2,](#page-2-0) our approach consists of a repeated, stacked interaction between the DiffPD solid simulator and the Hydrodynamics Neural Network (HydroNet) surrogate simulator. The output of the fluid simulation at time $t$ is used as input for the solid simulation at the same time $t$ through the introduction of an external fluidic force, while the output of the solid simulation at time $t$ is used as input for the fluid simulation at the next time step $t + 1$ through the specification of a boundary condition. This interleaved interaction unfolds a differentiable computation graph that can be traversed backwards by autodifferentiation frameworks to compute gradients through the entire simulation episode and optimize objectives (Figure 8). FEM element positions and velocities, actuations, fluid curls and pressures, boundary conditions, external forces, and Young's moduli are all differentiable with respect to each other. In the following sections, we describe each component of the system in detail.
|
| 33 |
+
|
| 34 |
+
Our goal for fluid simulation is that of obtaining a fast and differentiable surrogate model of hydrodynamics, so that swimmer designs and controls can be optimized by differentiating directly against the simulation environment, as opposed to doing so through evolutionary strategies or reinforcement learning.
|
| 35 |
+
|
| 36 |
+
In particular, our surrogate simulator needs to solve the incompressible Navier-Stokes equations describing fluidic flow constrained by Dirichlet boundary conditions. Given fluid density $\rho$ and viscosity $\mu$ , and defining ${\bf v}$ and ${\bf p}$ to be velocity and pressure fields over a fluid domain $\Omega$ , the equations consist of the following three terms:
|
| 37 |
+
|
| 38 |
+
$$\nabla \cdot \mathbf{v} = 0 \qquad \text{on } \Omega, \quad (1)$$
|
| 39 |
+
|
| 40 |
+
$$\rho \left( \frac{\partial \mathbf{v}}{\partial t} + (\mathbf{v} \cdot \nabla) \mathbf{v} \right) = -\nabla \mathbf{p} + \mu \nabla^2 \mathbf{v} \quad \text{on } \Omega, \quad (2)$$
|
| 41 |
+
|
| 42 |
+
$$\mathbf{v} = \mathbf{v}_d \qquad \text{on } \partial\Omega. \quad (3)$$
|
| 44 |
+
|
| 45 |
+
Equation (1) is called the divergence term, and forces incompressibility of the fluid, disallowing any sources or sinks within $\Omega$ . Equation (2) is the main hydrodynamic term, stating that changes in fluidic particle momentum must correspond to forces exerted by the pressure gradient and viscous friction. Equation (3) is the Dirichlet boundary condition, stating that velocities on the boundary $\partial\Omega$ of the fluidic domain must be equal to the supplied boundary velocities $\mathbf{v}_d$ .
|
| 46 |
+
|
| 47 |
+
The equations can be simplified for the 2D case by observing that the Helmholtz theorem allows us to decompose the velocity field into a curl-free part and a divergence-free part
|
| 48 |
+
|
| 49 |
+
$$\mathbf{v} = \nabla \mathbf{q} + \nabla \times \mathbf{a}.\tag{4}$$
|
| 50 |
+
|
| 51 |
+
By expressing velocities as $\mathbf{v} = \nabla \times \mathbf{a}$ , with the curl $\mathbf{a}$ being one-dimensional in the 2D case, we implicitly enforce zero divergence in our fluid without the need for explicitly solving Equation (1).
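As a concrete illustration of this curl formulation, the following minimal NumPy sketch (grid spacing and the random test potential are assumptions, not from the paper) recovers a velocity field from a scalar curl and checks that its discrete divergence vanishes:

```python
import numpy as np

# Minimal sketch: recover a 2D velocity field from a scalar curl potential a
# on a uniform grid, v = (da/dy, -da/dx). dx and the test field are assumed.
dx = 1.0
a = np.random.randn(64, 64)  # scalar curl field a (one-dimensional curl in 2D)

# Central differences; np.gradient returns derivatives along each axis.
da_dy, da_dx = np.gradient(a, dx)  # axis 0 ~ y, axis 1 ~ x (a convention choice)
vx, vy = da_dy, -da_dx             # v = curl(a) in 2D

# Mixed partials commute for these finite-difference operators, so the
# discrete divergence is zero up to floating-point error: incompressibility
# (Eq. 1) holds by construction.
div = np.gradient(vx, dx, axis=1) + np.gradient(vy, dx, axis=0)
print(np.abs(div).max())
```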
|
| 52 |
+
|
| 53 |
+
To solve the equations, we train an unsupervised neural network model following the approach proposed by Wandel et al. (2021a). The model in focus is a U-net (Ronneberger et al., 2015) with a limited number of convolutional channels that operates on fields discretized on a marker and cell (MAC) grid (Harlow & Welch, 1965).
|
| 54 |
+
|
| 55 |
+
The model takes as input the curl field $\mathbf{a}$ , the pressure field $\mathbf{p}$ , the boundary mask $\mathbf{b}$ identifying the domain $\Omega$ , and the boundary velocities $\mathbf{v}_d$ from time step t. The model predicts as output the curl field $\mathbf{a}$ and the pressure field $\mathbf{p}$ for the next time step t+1, given a constant time step magnitude h.
|
| 56 |
+
|
| 57 |
+
The model is not trained on simulation data, but instead uses the Navier-Stokes residuals as its loss function:
|
| 58 |
+
|
| 59 |
+
$$L_{p} = \left\| \rho \left( \frac{\partial \mathbf{v}}{\partial t} + (\mathbf{v} \cdot \nabla) \mathbf{v} \right) + \nabla \mathbf{p} - \mu \nabla^{2} \mathbf{v} \right\|^{2} \quad \text{on } \Omega, \quad (5)$$
|
| 61 |
+
|
| 62 |
+
$$L_b = \|\mathbf{v} - \mathbf{v}_d\|^2 \quad \text{on } \partial\Omega, \tag{6}$$
|
| 63 |
+
|
| 64 |
+
<span id="page-3-4"></span><span id="page-3-3"></span>
|
| 65 |
+
$$L = \beta L_p + \gamma L_b. \tag{7}$$
|
| 66 |
+
|
| 67 |
+
with parameters $\beta$ and $\gamma$ determining how much to prioritize the Navier-Stokes term or the boundary term.
|
| 68 |
+
|
| 69 |
+
Given this loss, the network is trained on synthetic episodes consisting of randomly generated boundary conditions, with the network's curl and pressure field outputs being fed back as training data at each iteration. This way, the physics-constrained loss is applied to increasingly realistic scenarios.
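For illustration, a minimal sketch of this data-free training loop is given below; `model`, `pool`, `navier_stokes_residual`, and `boundary_residual` are hypothetical stand-ins for the U-net, the state pool, and the discretized residuals of Equations (5) and (6):

```python
import torch

# Minimal sketch, not the authors' code. The helper names below are
# assumptions: `navier_stokes_residual` and `boundary_residual` stand in for
# discretizations of Eqs. (5)-(6); beta/gamma weight them as in Eq. (7).
def train_step(model, optimizer, pool, beta=1.0, gamma=20.0):
    # Sample a mini-batch of (curl, pressure, mask, boundary velocity) states.
    a, p, b, v_d = pool.sample()
    a_next, p_next = model(a, p, b, v_d)        # predict fields at t+1
    loss = (beta * navier_stokes_residual(a, a_next, p_next, b)
            + gamma * boundary_residual(a_next, b, v_d))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Feed predictions back into the pool so the physics-constrained loss
    # is applied to increasingly realistic states over training.
    pool.update(a_next.detach(), p_next.detach(), b, v_d)
    return loss.item()
```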
|
| 70 |
+
|
| 71 |
+
<span id="page-3-2"></span><span id="page-3-1"></span>We model the soft-body dynamics by the following governing equations from continuum mechanics (Sifakis & Barbic, 2012):
|
| 72 |
+
|
| 73 |
+
$$\rho_s \ddot{\mathbf{q}} = \nabla \cdot \mathbf{P} + \mathbf{f}_{\text{ext}},\tag{8}$$
|
| 74 |
+
|
| 75 |
+
where $\rho_s$ stands for the soft material's density, $\mathbf{q} = \mathbf{q}(\mathbf{X},t)$ tracks the position of a material point $\mathbf{X}$ from the material space (undeformed shape) at time t, $\mathbf{P}$ represents the first Piola-Kirchhoff stress tensor, and $\mathbf{f}_{\rm ext}$ captures all external forces applied to $\mathbf{X}$ at time t. We refer interested readers to Sifakis & Barbic (2012) for more background information regarding soft-body simulation. The stress $\mathbf{P}$ determines the behavior of the soft material and is specified by choosing soft material models. In this work, we use the linear corotated material model as suggested by DiffPD because of its balance between speed (Bouaziz et al., 2014) and physical accuracy (Du et al., 2021a; Zhang et al., 2021).
|
| 76 |
+
|
| 77 |
+
Given the continuous equations above, DiffPD uses standard finite-element methods (FEM) and an implicit time-stepping scheme to discretize the dynamic system spatially and temporally, leading to the nonlinear system of equations below:
|
| 78 |
+
|
| 79 |
+
$$\mathbf{q}_{t+1} = \mathbf{q}_t + h\dot{\mathbf{q}}_{t+1},\tag{9}$$
|
| 80 |
+
|
| 81 |
+
$$\dot{\mathbf{q}}_{t+1} = \dot{\mathbf{q}}_t + h\rho_s^{-1}(\mathbf{f}_{\text{ext}} + \mathbf{f}_{\text{int}}(\mathbf{q}_{t+1})), \tag{10}$$
|
| 82 |
+
|
| 83 |
+
where $\mathbf{q}$ and $\dot{\mathbf{q}}$ now represent nodal positions and velocities of finite elements at the time steps specified by their subscripts.
|
| 84 |
+
|
| 85 |
+
The notation $\mathbf{f}_{\text{int}}$ denotes the elastic force induced by the stress tensor $\mathbf{P}$. Once the forward simulation process is established, DiffPD derives its gradients using the standard chain rule and adjoint methods. Interested readers can refer to DiffPD [\(Du et al.,](#page-8-0) [2021b\)](#page-8-0) for detailed derivations of these equations.
|
| 86 |
+
|
| 87 |
+
FSI involves solving a two-way link between the DiffPD soft structure FEM simulation and our neural network hydrodynamic surrogate simulation. Hydrodynamic forces from the fluid simulation affect the soft body finite elements as an external force $\mathbf{f}_{\text{ext}}$; at the same time, the soft body simulation determines the Dirichlet boundary conditions $\mathbf{b}$ and $\mathbf{v}_d$ for the hydrodynamics simulation. An additional cause of complexity stems from the fact that these operations must mediate between a *Lagrangian* and a *discrete Eulerian* representation for physical quantities, with the former being used for the solid simulation, and the latter for the fluid simulation.
|
| 88 |
+
|
| 89 |
+
Lagrangian methods handle physics simulation by modeling individual particles constituting the simulated material. DiffPD operates in a Lagrangian fashion, as the finite elements identified by $\mathbf{q}$ and $\dot{\mathbf{q}}$ track specific points within the soft body and move along with the body within the domain. In contrast, discrete Eulerian methods simulate PDEs on a discretized grid such as the fixed MAC grid used by Wandel et al. [\(2021a\)](#page-11-0)'s hydrodynamics network. With this representation, $\mathbf{v}$ and $\mathbf{p}$ summarise fluid properties at fixed locations of the domain, without tracking individual fluid particles.
|
| 90 |
+
|
| 91 |
+
The challenge in this setting is that of providing a differentiable layer to compute these interaction quantities, which mediate between representations. For the solid-to-fluid interaction, the Lagrangian elements described by $\mathbf{q}$ and $\dot{\mathbf{q}}$ must be used to compute a boundary mask $\mathbf{b}$ with a rasterization operation, which is however generally non-differentiable. Similarly, grid boundary velocities $\mathbf{v}_d$ are generally computed with a non-differentiable neighbourhood averaging operation. For the fluid-to-solid interaction, we turn to the fluid-to-solid stage of the Immersed Boundary Method (IBM) [\(Peskin,](#page-10-1) [2002\)](#page-10-1), which samples Eulerian pressure values $\mathbf{p}$ at locations near the boundary in order to compute the Lagrangian external forces $\mathbf{f}_{\text{ext}}$ affecting the solid finite elements.
|
| 92 |
+
|
| 93 |
+
Solid-to-Fluid Coupling Given the state of the DiffPD soft body simulation at a specific time step, the finite element positions $\mathbf{q}$ and velocities $\dot{\mathbf{q}}$ fully determine the boundary condition for the subsequent hydrodynamic simulation step. The positions $\mathbf{q}$ determine the shape and location of the fish body, which is served as input to the HydroNet as a boundary mask $\mathbf{b}$. The velocities $\dot{\mathbf{q}}$ are instead used to compute the boundary velocities $\mathbf{v}_d$ as a granular cell-wise velocity obtained by averaging the elements closest to each boundary cell.
|
| 94 |
+
|
| 95 |
+
There is, however, a non-trivial obstacle that renders a naive application of the boundary mask from Wandel et al. [\(2021a\)](#page-11-0) inapplicable to our optimization setting. By definition, the rasterization operation computing a binary mask is non-differentiable, thereby breaking the chain of differentiability that we rely on for optimization.
|
| 96 |
+
|
| 97 |
+
Our solution to this issue draws from the field of differentiable rendering [\(Liu et al.,](#page-10-0) [2019\)](#page-10-0), and in particular from the techniques associated with soft rasterization. Instead of producing a hard binary mask as a rasterization of the robot's finite element mesh, we use a signed distance field to produce a soft differentiable mask, with real-valued entries $\mathbf{b}_{ij} \in [0, 1]$.
|
| 98 |
+
|
| 99 |
+
Define $\mathbf{x}_{ij} \in \mathbb{R}^2$ as the spatial coordinates of the MAC grid cell in position $(i, j)$ and $\mathbf{q}^k$ those of the $k$-th finite element. Then each cell $\mathbf{b}_{ij}$ of the soft boundary mask is computed as
|
| 100 |
+
|
| 101 |
+
$$\mathbf{b}_{ij} = \mathrm{sigm} \left( \delta_{ij} \frac{\sum_{l} \left( \mathrm{softmin}_{k} \frac{\|\mathbf{x}_{ij} - \mathbf{q}^{k}\|^{2}}{\xi} \right)_{l} \|\mathbf{x}_{ij} - \mathbf{q}^{l}\|^{2}}{\sigma} \right) \quad (11)$$
|
| 103 |
+
|
| 104 |
+
with $\sigma$ and $\xi$ being mask softness parameters, $\|\cdot\|^2$ the squared Euclidean distance, and $\delta_{ij} = +1$ if the cell location $\mathbf{x}_{ij}$ is inside the fish body, while $\delta_{ij} = -1$ if outside.
|
| 105 |
+
|
| 106 |
+
We can similarly use a differentiable surrogate to obtain cell-wise fine-grained boundary velocities:
|
| 107 |
+
|
| 108 |
+
$$\mathbf{v}_{d\,ij} = \left(\sum_{l} \left( \mathrm{softmin}_{k} \frac{\|\mathbf{x}_{ij} - \mathbf{q}^{k}\|^{2}}{\tau} \right)_{l} \dot{\mathbf{q}}^{l} \right) \cdot \mathbf{b}_{ij} \quad (12)$$
|
| 110 |
+
|
| 111 |
+
with $\tau$ being a softness parameter.
|
| 112 |
+
|
| 113 |
+
The softness parameters $\sigma$, $\xi$ and $\tau$ allow us to trade off boundary accuracy against gradient smoothness.
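A minimal PyTorch sketch of Equations (11) and (12) follows; tensor shapes and parameter values are illustrative assumptions, not the authors' implementation:

```python
import torch

# Minimal sketch of the soft boundary mask (Eq. 11) and soft boundary
# velocities (Eq. 12). Shapes and default parameter values are assumptions.
def soft_boundary(x, q, q_dot, delta, sigma=5e-7, xi=5e-7, tau=5e-7):
    # x:     (H, W, 2) MAC cell coordinates
    # q:     (K, 2)    finite element positions; q_dot: (K, 2) velocities
    # delta: (H, W)    +1 inside the body, -1 outside
    d2 = ((x[..., None, :] - q) ** 2).sum(-1)             # (H, W, K) squared distances
    w = torch.softmax(-d2 / xi, dim=-1)                   # softmin over elements k
    sdf = (w * d2).sum(-1)                                # soft squared-distance field
    b = torch.sigmoid(delta * sdf / sigma)                # soft mask in [0, 1]
    wv = torch.softmax(-d2 / tau, dim=-1)                 # softmin weights for velocities
    v_d = (wv[..., None] * q_dot).sum(-2) * b[..., None]  # (H, W, 2) boundary velocities
    return b, v_d
```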
|
| 114 |
+
|
| 115 |
+
Fluid-to-solid coupling The overall purpose of our hybrid simulation approach is to optimize fish designs and/or control policies with respect to a more realistic model of hydrodynamics. Therefore, the mechanism by which the hydrodynamics simulation affects the soft body simulation is of utmost importance.
|
| 116 |
+
|
| 117 |
+
The way for hydrodynamic forces to affect DiffPD is through an external force $\mathbf{f}_{\text{ext}}$ applied to the simulation's finite elements. Common drag/thrust optimizations such as that of Chen et al. [\(2021\)](#page-8-9) often average the force over the entire solid surface. This force is shaped by two contributions,
|
| 118 |
+
|
| 119 |
+

|
| 120 |
+
|
| 121 |
+
Figure 3. Soft boundary masks obtained with the softness parameter $\xi = 5\mathrm{e}{-7}$ and a) $\sigma = 5\mathrm{e}{-9}$, b) $\sigma = 5\mathrm{e}{-7}$, c) $\sigma = 5\mathrm{e}{-5}$.
|
| 122 |
+
|
| 123 |
+
one due to the pressure field,
|
| 124 |
+
|
| 125 |
+
$$\mathbf{f}_{\text{pressure}} = -\int_{\partial\Omega} \mathbf{p} \cdot \mathbf{n} \, dl, \tag{13}$$
|
| 126 |
+
|
| 127 |
+
and the other is due to the velocity field,
|
| 128 |
+
|
| 129 |
+
$$\mathbf{f}_{\text{viscous}} = -\int_{\partial\Omega} \mu \mathbf{n} \times \mathbf{a} \, dl, \quad \mathbf{a} = \nabla \times \mathbf{v}, \quad (14)$$
|
| 131 |
+
|
| 132 |
+
with $\partial\Omega$ being the solid body boundary and $\mathbf{n}$ its outward-pointing normal vector. The total hydrodynamic force is obtained by summing $\mathbf{f}_{\text{ext}} = \mathbf{f}_{\text{pressure}} + \mathbf{f}_{\text{viscous}}$. However, for common water-like fluids with low viscosity $\mu$, the contribution of the viscous term $\mathbf{f}_{\text{viscous}}$ can be considered negligible and its computation can be omitted.
|
| 133 |
+
|
| 134 |
+
Given that our approach for solid simulation is based on finite elements, with each surface element $k$ being associated with its surface normal $\mathbf{n}_k$, we can compute individual elements' surface forces $\mathbf{f}_{\text{ext}k}$. To bridge the Eulerian-to-Lagrangian gap, we adopt the IBM fluid-to-solid step on forces
|
| 135 |
+
|
| 136 |
+
$$\mathbf{f}_{\text{ext}k} = -l_k \mathbf{n}_k \sum_{i,j} \mathbf{p}_{ij} \, \delta\left(\mathbf{x}_{ij} - \mathbf{q}^k\right) \mathbf{b}_{ij} \quad (15)$$
|
| 138 |
+
|
| 139 |
+
where $\delta$ is the Dirac delta and $l_k = (\|\mathbf{q}^{k-1} - \mathbf{q}^{k}\| + \|\mathbf{q}^{k} - \mathbf{q}^{k+1}\|)/2$ is the surface length corresponding to finite element $k$. With the IBM, we are able to appropriately identify the force applied to Lagrangian element $k$, despite only having access to pressures on a fixed Eulerian grid.
|
| 140 |
+
|
| 141 |
+
In practice, due to the finite discretization, it is not feasible to adopt $\delta$ directly, but a surrogate $\tilde{\delta}(\mathbf{x}) = \phi(x_1)\phi(x_2)$ must be chosen such that it satisfies several properties as detailed
|
| 142 |
+
|
| 143 |
+

|
| 144 |
+
|
| 145 |
+
Figure 4. Our immersed boundary method for fluid-to-solid interaction. Each soft body surface element (red) is subjected to an external force $\mathbf{f}_{\text{ext}k}$ in the opposite direction of its normal $\mathbf{n}_k$. To compute the scalar force magnitude, nearby Eulerian cells from the pressure field are averaged with a Gaussian function centered around the element.
|
| 146 |
+
|
| 147 |
+
in the original IBM paper [\(Peskin,](#page-10-1) [2002\)](#page-10-1):
|
| 148 |
+
|
| 149 |
+
$$\phi(r) \text{ is continuous for all real } r, \quad (16)$$
|
| 151 |
+
|
| 152 |
+
$$\sum_{j < r} \phi(r - j) = \sum_{j > r} \phi(r - j) = \frac{1}{2} \text{ for all real } r, \quad (17)$$
|
| 153 |
+
|
| 154 |
+
$$\sum_{j} (r-j)\phi(r-j) = 0 \text{ for all real } r, \quad (18)$$
|
| 155 |
+
|
| 156 |
+
$$\sum_{j} (\phi(r-j))^2 = C \text{ for all real } r, \quad (19)$$
|
| 157 |
+
|
| 158 |
+
where the constant C is independent of r. The original formulation from Peskin [\(2002\)](#page-10-1) included the additional property $\phi(r) = 0$ for $|r| > 2$; however, this is not strictly required and is only introduced for computational cost reasons, which are moot if the operation is performed with GPU parallelism.
|
| 159 |
+
|
| 160 |
+
We thus choose to use a normalized Gaussian distance for our IBM, namely of the form
|
| 161 |
+
|
| 162 |
+
$$\tilde{\delta}(\boldsymbol{x} - \boldsymbol{y}) = \exp\left(-\frac{\|\boldsymbol{x} - \boldsymbol{y}\|^2}{2\sigma'}\right), \quad (20)$$
|
| 164 |
+
|
| 165 |
+
with $\sigma'$ being a smoothness parameter and the function satisfying all relevant properties.
|
| 166 |
+
|
| 167 |
+
This choice then leads to our IBM formula for calculating individual elements' $\mathbf{f}_{\text{ext}k}$:
|
| 170 |
+
|
| 171 |
+
$$\mathbf{f}_{\text{ext}k} = -l_k \boldsymbol{n}_k \sum_{i,j} \boldsymbol{p}_{ij} \frac{1}{Z} \exp\left(-\frac{\|\mathbf{x}_{ij} - \boldsymbol{q}^k\|^2}{2\sigma'}\right) \boldsymbol{b}_{ij}, \quad (21)$$
|
| 173 |
+
|
| 174 |
+
with $Z = \sum_{i,j} \exp\left(-\frac{\|\mathbf{x}_{ij} - \mathbf{q}^k\|^2}{2\sigma'}\right) \mathbf{b}_{ij}$ being a normalization constant.
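A minimal sketch of the Gaussian IBM force of Equation (21), under the same assumed shapes as the mask sketch above (not the authors' code):

```python
import torch

# Minimal sketch of Eq. (21): Gaussian-weighted, mask-gated averaging of
# Eulerian pressures around each Lagrangian surface element. sigma_prime
# is an assumed value.
def ibm_forces(p, b, x, q, n, l, sigma_prime=5e-7):
    # p, b: (H, W) pressure field and soft mask; x: (H, W, 2) cell coordinates
    # q: (K, 2) surface element positions; n: (K, 2) outward normals; l: (K,) lengths
    d2 = ((x[..., None, :] - q) ** 2).sum(-1)            # (H, W, K)
    w = torch.exp(-d2 / (2 * sigma_prime)) * b[..., None]  # Gaussian delta times mask
    w = w / w.sum(dim=(0, 1), keepdim=True)              # normalize by Z per element k
    p_k = (w * p[..., None]).sum(dim=(0, 1))             # (K,) pressure at each element
    return -l[:, None] * n * p_k[:, None]                # (K, 2) forces f_ext_k
```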
|
| 175 |
+
|
| 176 |
+
We choose a Gaussian delta function not only because of the increased stability due to its larger function support (as discussed by Peskin [\(2002\)](#page-10-1)), but also because, as opposed to the original Dirac delta, it makes the forces differentiable with respect to the entire pressure field. Once again, there is a trade-off between gradient smoothness and IBM precision, as a lower $\sigma'$ allows for more precise IBM interpolation but causes the gradients with respect to the pressure field to vanish for most locations.
|
| 177 |
+
|
| 178 |
+
The obtained $\mathbf{f}_{\text{ext}}$ is at the granularity of individual finite elements. It can either be applied as a DiffPD input as-is, allowing for precise but potentially unstable simulation of surface interaction, or it can be used to compute the overall thrust/drag
|
| 179 |
+
|
| 180 |
+
$$\mathbf{f}_{\mathsf{ext}} = \sum_{k} \mathbf{f}_{\mathsf{ext}k} \tag{22}$$
|
| 181 |
+
|
| 182 |
+
which, divided by the total number of finite elements, can be applied as an average force to all the solid finite elements to model directional thrust only.
|
2205.14120/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2022-05-11T22:34:33.684Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.54 Safari/537.36" etag="E_6Mv1Y8IgqJQNJnMBW_" version="18.0.1" type="google"><diagram id="XuWMVf0J8NYnaft9Sw-u" name="Page-1">7V1dc5s4FP01nmkfmgF98PFY46Tdme3OznZmd/uUIQbb7BLjBdLY/fUrQLINSBinMgLjZCaxhCxkHZ2rq6sjPIHO8/ZT7G5WXyLPDydA87YTOJsAABDUyb8sZ1fk6Fg3ipxlHHg075DxNfjh00yN5r4Enp+UCqZRFKbBppw5j9Zrf56W8tw4jl7LxRZRWL7rxl3SO2qHjK9zN/Rrxf4KvHRV5Fr4qPRnP1iu2J11jV55dllhWkWycr3o9ehe8H4CnTiK0uLV89bxw6z3WL8UFT0Iru4bFvvrtM0bvth4+gRX3z7P//xh/XL/+Pr5j98+GLCo5rsbvtBPTFub7lgXxNHL2vOzWrQJnL6ugtT/unHn2dVXgjrJW6XPIUnp5OUiWqcURR2TdJLG0b++E4VRnNcGtfxnf4V1KsjeG4ThUckH8uM4JN9zk1V+/+wGtL1+nPpbYU/o+/4lI9OPnv003pEi9A02hYiNSQbZ6wFgXWPjYXWErkXzXDqolvuqD/1OXtCuPwMGXT8Ng++RgUmTUZyuomW0dsP7Q+60DNShzK9RtKG994+fpjuKj/uSRm3AO4aoaFXWlOa+Jy2PXuK53/CZ6chL3Xjppw3lEB/L2A/dNPhebod0YHj0MMKU9lUJIOO/l4hd+JDkvfiRFNCtzTbvOHadvFoW/8knQ9tHvXjB6iXNLKpmparDIAyJ7fNPM9FNNoVBXATbbFDU6eU4Ob3OoujRCLFIOuNhQKzlxzBYrknmc+B5+VjMC9JWZ1Wtojj4QfLcffPoO+Zk4PixHGLr2C4xG9WJjTi0Rhej9Slj+gaOkq6Id39n77/DLPmNVpcnZttSakdTp7ntb4O0qJjM0jSd13xnmxZNHyrPErujxO9+HJBey6DM8yRaCtTSUmCVlkKHjRZ7Ha2lmOgDSNA8BklvBuiig6YH0JtKoUcdQ28hu8xP+wQ/rx1+QyX8iOMj5DM6dibmdJVN8M7d3R35m807j9M8f8Zm/Z9xtlu40PdGPsdX520ZXnR5rtV1jhdtcGZb42KzLe6YhvgN9lcu/SRSDbekmq7xR0U3XMMCrhGWvSOu9PuTrNLPW8Jajf5xlW1a9pv5tvGcVoD5brQMX9cq0Q9wFrGW3qWva/affXpv2WcMgn2GkH3TkbEPVFaa6uln9J9+0r1PiQQ0B0FAU0hAMDIC6n2b/yzxKmDuRWkigIb0QFru/1pgTBhji/0k+OE+7WNsmyhYp/nnwmSRMcvqIgROKJY1KKlNOEaRZXGHgHQHRqsgiOsLCI2zgICXgtCwB2BD32hBAUtXiPzwcGnLare0rIZguHRjWW0BfRfjW1hA6w6XmAmBaufGGgAxe+zcsNDMSQ7aKinIWlmfQsFUQL+3b4Y1b21hSVQyKqsEq04k3j4zuBiRRGbOfZyYU10XxyX77qeQJotAPTKkjkMNrARwISxji0HdfeHtNl7MfQEi/hTgghu4ZzAXVNCFpmp0eRKRA7rTG7pnRG801DfuAg66DS7PNQiAQFsFEFC6uws60ADtHZ2ONUAzfG/NUI22Q9YA4YrlVq0BOrnHe9MAtTIWbQO0QOkaBnS9Bz0KFVBr8AufXBn4Xe/BjEYH1H4AKA0iAtH+zJVLgZBRnnPVa4HAECKGcikok25tY/ZQrWcuimYVaiAwoqA9tECJgcq3Q9lKuc8E7K8ciB2b6zkBoSgmVQiCxkRAoxowVE7A5gBSLwjY5z0z2DY6pZiCvOjUQRI0Jgqi3s2BDScDbpogbvAOVQLzyjVB5gD8mLebUXWqINj2vIFaWRAUHThYjM68Yg30TRcEb6LnnyNh67Ca0qO1UBhWG6owCFfFI6qFQVAkX87VBeDqhUGaZttoJikWY5tlcJWrC2Cj7AtcvTJIJrrIqqCrXBmEGnVf4OqVQTLRxUblWIly7qIRPhoIgZZ+CVJ78B9woJGsDNpTt2NlkOPc46NV4DUog3RN00vkVi0NQvVHxtykQW+wFm0PTyNLqbXo+ikxo5AGtQa/mMiVgd+1Lmw00qD2A0DpyW0kenbClUuDbFSec9VLg9AAHlTSX2kQstrSTa1rLopnFdKg2Yji9hZjUl+2RdEAjtn3VxrEnjzdcwJiUVCqkAaNiYBkyivPgcoZiPVav/eOgX3eN2NBwb5zkBefOmiDxsRBu2+TIBbptm7aIGH4jkW++yMOGoAdfasno04ahFs/9VepNIg1kyMNmo3Nulpa36RBeAgPA+6zi9M6rKb0+zmwMKw2VGkQWSxUVguqtUFYJL/K9QWzEWiD6KwnAV3LLNtJ9foC3Kj8mo1AGyQPXduooKtcG8QeUyxAdwTaIHno6hrbxOkNeQ1elO08BQpAjQqU6nK0nQxlEINHwpAw2Ve/NC1JYadL0gEE13/C77VULUqNtidyTaXHAVkzr8cfNirP0ERQsTts8I4ESVX9EZubG1BzmqxIP3nkHU9u4idHk/V1mGFZIoPqGNFB3Qzvv1G2m4mZ51VfaJAsfDd9if2sTkLQYBHMbwNFOFCqRzM4A4VnTS43XwOBwX7NH+t5df65BBD3sUUWa+SA2LHTJdrMyUG8viW0BBAxWzvtQbRUgyiK5+cgzm4gcqJZleVPD5jYoXM2D90kuc2658+6PK53656xkHpHq+TycQm2Yj5LA3pYMvd5lWy23TVSu3Vr8naNpFqJxaOm6ODWvZH9nlydS6I2rOhqEGfft9MzVSZvs6qAINm4awnQLt+RKeApCr1k90z+kYlgSyz/+xrcxe0EcHcjAnDyn0lHT0CrfW8QMuuDodvvZONFw27h8XYeo4QhUQ2PQ05cRpZ/SJJxlCG0v/aJEGn1JfL8rMT/</diagram></mxfile>
|
2205.14120/main_diagram/main_diagram.pdf
ADDED
|
Binary file (47.2 kB). View file
|
2205.14120/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,58 @@
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Real-world machine learning models [14, 60] are mostly used as a *black-box*, *i.e.*, it is very difficult to analyze and understand why a specific prediction was made. In order to *explain* such black-box models, an instance-specific local interpretable model is often learned [38, 50]. However, these approaches tend to be unstable and unfaithful [1, 52], *i.e.*, they often misrepresent the model's behavior. On the other hand, a family of models known as generalized additive models (GAMs) [24] have been used for decades as an inherently interpretable alternative to black-box models.
|
| 4 |
+
|
| 5 |
+
GAMs learn a *shape function* for each feature independently, and outputs of such functions are added (with a bias term) to obtain the final model prediction. All models from this family share an important trait: the impact of any specific feature on the prediction does not rely on the other features, and can be understood by visualizing its corresponding shape function. Original GAMs [24] were fitted using splines, which have since been improved in explainable boosting machines (EBMs) [36] by fitting boosted decision trees, or very recently in neural additive models (NAMs) [2] by fitting deep neural networks (DNNs). A drawback for all the aforementioned approaches is that for each shape function, they require either millions of decision trees [36], or a DNN with tens of thousands of parameters [2], making them prohibitively expensive for learning datasets with a large number of features.
|
| 6 |
+
|
| 7 |
+
In this work, we propose a novel subfamily of GAMs, which, unlike previous approaches, learn to decompose each feature's shape function into a small set of basis functions *shared* across all features. The shape functions are fitted as the feature-specific linear combination of these shared bases, see Figure 1. At an abstract level, our approach is motivated by signal decomposition using traditional basis functions like the Fourier basis [9] or Legendre polynomials [45], where a weighted
|
| 8 |
+
|
| 9 |
+

|
| 10 |
+
|
| 11 |
+
Figure 1: Neural Basis Model (NBM) architecture for a binary classification task.
|
| 12 |
+
|
| 13 |
+
combination of a few basis functions suffices to reconstruct complex signal shapes. However, in contrast to these approaches, our basis decomposition is not fixed *a priori*. In fact, it is learnt specifically for the prediction task. Consequently, we maintain the most important feature of GAMs, *i.e.*, their interpretability, as the contribution of a single feature does not depend on the other features. At the same time, we gain scalability, as the number of basis functions needed in practice is much smaller than the number of input features. Moreover, we show that the usage of basis functions can increase computational efficiency by several orders of magnitude when the input features are sparse. Additionally, we propose an approach to learning the basis functions using a single DNN. We call this solution the Neural Basis Model (NBM). Using neural networks allows for even higher scalability, as training and inference are performed on GPUs or other specialized hardware, and can be easily implemented in any deep learning framework using standard, already developed, building blocks.
|
| 14 |
+
|
| 15 |
+
Our contributions are as follows: (i) We propose a novel subfamily of GAMs whose shape functions are composed of shared basis functions, and propose an approach to learn basis functions via DNNs, denoted as Neural Basis Model (NBM). This architecture is suitable for mini-batch gradient descent training on GPUs and easy to plug into any deep learning framework. (ii) We demonstrate that NBMs can be easily extended to incorporate pairwise functions, similar to $GA^2Ms$ [37], by learning another set of bases to model the higher-order interactions. This approach effectively only linearly increases the parameters, while other models such as $EB^2Ms$ [36, 37] and $NA^2Ms$ [2] suffer from quadratic growth of parameters, and often require heuristics and repeated training to select the most important interactions before learning [37]. (iii) Through extensive evaluation on regression, binary classification, and multi-class classification, with both tabular and computer vision datasets, we show that NBMs and $NB^2Ms$ outperform state-of-the-art GAMs and $GA^2Ms$ , while scaling significantly better, *i.e.*, fitting much fewer parameters and having higher throughput. For datasets with more than ten features, using NBMs results in around a $5\times-50\times$ reduction in parameters over NAMs [2], and $4\times-7\times$ better throughput. (iv) We propose an efficient extension of NBMs to sparse datasets with more than a hundred thousand features, where other GAMs do not scale at all.
|
| 16 |
+
|
| 17 |
+
# Method
|
| 18 |
+
|
| 19 |
+
**Generalized Additive Model (GAM) [24].** Given a D-dimensional interpretable input $x = \{x_i\}_{i=1}^D$ , $x \in \mathbb{R}^D$ , a target label y, a link function g (e.g., logistic function), a univariate shape function $f_i$ , corresponding to the input feature $x_i$ , a bivariate shape function $f_{ij}$ , corresponding to the feature interaction, and a bias term $f_0$ , the prediction function in GAM and GA<sup>2</sup>M is expressed as
|
| 20 |
+
|
| 21 |
+
$$\mathbf{GAM}: g(\boldsymbol{x}) = f_0 + \sum_{i=1}^{D} f_i(x_i); \quad \mathbf{GA}^2\mathbf{M}: g(\boldsymbol{x}) = f_0 + \sum_{i=1}^{D} f_i(x_i) + \sum_{i=1}^{D} \sum_{j>i}^{D} f_{ij}(x_i, x_j). \quad (1)$$
|
| 22 |
+
|
| 23 |
+
Interpreting GAMs is straightforward as the impact of a feature on the prediction does not rely on the other features, and can be understood by visualizing its corresponding shape function, e.g., plotting $x_i$ on the x-axis and $f_i(x_i)$ on the y-axis. A certain level of interpretability is sacrificed for accuracy when modeling interactions, as $f_{ij}$ shape functions are harder to visualize. Shape function visualization through heatmaps [12, 37] is commonly used towards that purpose. Note that the graph visualizations of GAMs are an exact description of how GAMs compute a prediction.
|
| 24 |
+
|
| 25 |
+
We observe that, typically, input features of high-dimensional data are correlated with each other. As a result, it should be possible to decompose each shape function $f_i$ into a small set of basis functions shared among all the features. This is the core idea behind our approach.
|
| 26 |
+
|
| 27 |
+
**Neural Basis Model (NBM).** We propose to represent shape functions $f_i$ as
|
| 28 |
+
|
| 29 |
+
$$f_i(x_i) = \sum_{k=1}^{B} h_k(x_i) a_{ik}; \quad (2)$$
|
| 31 |
+
|
| 32 |
+
where $\{h_1, h_2, ..., h_B\}$ represents a set of B shared basis functions that are independent of feature indices, and coefficients $a_{ik}$ are the projection of each feature to the shared bases. We additionally propose to learn basis functions using a DNN, *i.e.*, a single *one*-input B-output multi-layer perceptron (MLP) for all $\{h_k; k=1,...,B\}$ . The resulting architecture is shown in Figure 1.
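A minimal PyTorch sketch of this architecture (an illustration, not the released implementation; layer sizes are assumptions):

```python
import torch
import torch.nn as nn

# Minimal sketch of an NBM forward pass: one shared MLP maps each scalar
# feature to B basis values (Eq. 2), and per-feature coefficients a_ik
# project them back to shape functions f_i. Hidden sizes are assumptions.
class NBM(nn.Module):
    def __init__(self, num_features, num_bases=64, hidden=256):
        super().__init__()
        self.bases = nn.Sequential(            # shared one-input, B-output MLP
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_bases),
        )
        self.a = nn.Parameter(torch.randn(num_features, num_bases) * 0.01)
        self.f0 = nn.Parameter(torch.zeros(1))  # bias term

    def forward(self, x):                       # x: (N, D)
        N, D = x.shape
        h = self.bases(x.reshape(N * D, 1)).reshape(N, D, -1)  # (N, D, B)
        f = (h * self.a).sum(-1)                # shape functions f_i(x_i), (N, D)
        return self.f0 + f.sum(-1)              # GAM sum over features, (N,)
```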
|
| 33 |
+
|
| 34 |
+
**Multi-class / multi-task architecture.** Let l correspond to the target class $y_l$ in the multi-class setting. Similar to Equation 1, the prediction function $g_l$ for class $y_l$ in GAMs can be written as:
|
| 35 |
+
|
| 36 |
+
$$g_l(\mathbf{x}) = f_{0l} + \sum_{i=1}^{D} f_i(x_i) w_{il}, \quad (3)$$
|
| 38 |
+
|
| 39 |
+
where feature shape functions $f_i(x_i)$ are shared among the classes and are linearly combined using class-specific weights $w_{il}$. Combining Equations 2 and 3, the multi-class NBM can be represented as:
|
| 40 |
+
|
| 41 |
+
**Multi-class NBM**:
$$g_l(\mathbf{x}) = f_{0l} + \sum_{i=1}^{D} \sum_{k=1}^{B} h_k(x_i) a_{ik} w_{il}. \quad (4)$$
|
| 44 |
+
|
| 45 |
+
**Extension to NB$^2$M.** Similar to NBM, we represent GA$^2$M shape functions $f_{ij}$ in Equation 1 as:
|
| 46 |
+
|
| 47 |
+
$$f_{ij}(x_i, x_j) = \sum_{k=1}^{B} u_k(x_i, x_j) b_{ijk}; \quad (5)$$
|
| 49 |
+
|
| 50 |
+
where $\{u_1, u_2, ..., u_B\}$ represents a set of B shared bi-variate basis functions that are independent of feature indices, and coefficients $b_{ijk}$ are the projection of pair-wise features to the shared bases. We learn an additional *two*-input B-output MLP for all $\{u_k; k = 1, \ldots, B\}$ to learn the bases. Extension to the multi-class setting can be done in the same way as for NBMs.
|
| 51 |
+
|
| 52 |
+
Sparse architecture. Typically, datasets with high-dimensional features are sparse in nature. For example, in the Newsgroups dataset [32], news articles are represented by *tf-idf* features, and, for a given instance, most of the features are absent due to the vocabulary being of the order of 100K words. Since NBM uses a single DNN to learn all the bases, we can simply append the single value representing the absent feature to the batch, to compute the corresponding basis function values. The subsequent linear projection to feature indices via $a_{ik}$ is a computationally inexpensive operation.
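One way to realize this trick is sketched below (our interpretation, not the paper's code): the shared bases are evaluated once for the assumed absent value of zero, and per-feature projections are computed only for the features actually present, reusing the hypothetical `NBM` module sketched above:

```python
import torch

# Minimal sketch of the sparse-input trick. Assumption: absent features take
# the value 0, so their shared-basis outputs can be computed once and reused.
def sparse_nbm_logit(model, idx, val):
    # idx: (M,) indices of present features; val: (M,) their values
    h_absent = model.bases(torch.zeros(1, 1))          # bases for the absent value, (1, B)
    f_absent = (h_absent * model.a).sum(-1).sum(-1)    # contribution if ALL features absent
    h = model.bases(val[:, None])                      # (M, B) bases for present features
    f_present = (h * model.a[idx]).sum(-1)             # f_i for present features
    f_absent_dup = (h_absent * model.a[idx]).sum(-1)   # remove their "absent" share
    return model.f0 + f_absent + (f_present - f_absent_dup).sum()
```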
|
| 53 |
+
|
| 54 |
+
In contrast, typical GAMs (*e.g.*, Neural Additive Model (NAM) [42]) need to pass the absent value through every shape function $f_i$, which makes it compute-intensive as well as difficult to implement.
|
| 55 |
+
|
| 56 |
+
Training and regularization. We use mean squared error (MSE) for regression, and cross-entropy loss for classification. To avoid overfitting, we use the following regularization techniques: (i) L2-normalization (weight decay) [31] of parameters; (ii) batch-norm [28] and dropout [54] on hidden layers of the basis functions network; (iii) an L2-normalization penalty on the outputs $f_i$ to incentivize fewer strong feature contributions, as done in [2]; (iv) basis dropout to randomly drop individual basis functions in order to decorrelate them. Similar techniques have been used for other GAMs [2, 12].
|
| 57 |
+
|
| 58 |
+
Selecting the number of bases. One can use the theory of Reproducing Kernel Hilbert Spaces (RKHS) [7] to devise a heuristic for selecting the number of bases B. Specifically, we demonstrate that any NBM model lies on a subspace within the space spanned by a complete GAM if the GAM shape functions reside within a ball in an RKHS. Assuming a regularity property in the data distribution, one can then demonstrate that $B = O(\log D)$ bases are sufficient to obtain competitive performance. We present this formally in Appendix Sec. A.5. This provides an alternate interpretation of NBM as learning a "principal components" decomposition in the $L_2$-space of functions, as we learn a set of (preferably orthogonal) basis functions to approximate the decision boundary.
|
2206.04384/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,69 @@
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Humans are usually good at simplifying difficult problems into easier ones by ignoring trivial details and focusing on important information for decision making. Typically, reinforcement learning (RL) methods are directly applied in the original environment to learn a policy. When we have a difficult environment like robotics or video games with long temporal horizons, sparse reward signals, or large and continuous state-action spaces, it becomes more challenging for RL methods to reason about the value of states or actions in the original environment and obtain a well-performing policy. Learning a world model that simplifies the original complex environment into an easy version might lower the difficulty of learning a policy and lead to better performance.
|
| 4 |
+
|
| 5 |
+
In offline reinforcement learning, algorithms can access a dataset consisting of pre-collected episodes to learn a policy without interacting with the environment. Usually, the offline dataset is used as a replay buffer to train a policy in an off-policy way with additional constraints to avoid distribution shift problems (Wu et al., 2019; Fujimoto et al., 2019; Kumar et al., 2019; Nair et al., 2020; Wang et al., 2020; Peng et al., 2019). As the episodes also contain the dynamics information of the original environment, it is possible to utilize such a dataset to directly learn an abstraction of the environment in the offline RL setting. To this end, we introduce Value Memory Graph (VMG), a graph-structured world model for offline reinforcement learning tasks. VMG is a Markov decision process (MDP) defined on a graph as an abstraction of the original environment. Instead of directly applying RL methods to the offline dataset collected in the original environment, we learn and build VMG first and use
|
| 6 |
+
|
| 7 |
+
<sup>∗</sup>Work done outside of Amazon
|
| 8 |
+
|
| 9 |
+

|
| 10 |
+
|
| 11 |
+
Figure 1: Demonstration of a successful episode where a robot trained in the dataset "kitchen-partial" accomplishes 4 subtasks in sequence guided by VMG. Vertex values are shown via color shade. By searching graph actions that lead to the high-value future region (darker blue) calculated by value iteration on the graph, VMG controls the robot arm to maximize episode rewards and finish the task.
|
| 12 |
+
|
| 13 |
+
it as a simplified substitute of the environment to apply RL methods. VMG is built by mapping offline episodes to directed chains in a metric space trained via contrastive learning. Then, these chains are connected to a graph via state merging. Vertices and directed edges of VMG are viewed as graph states and graph actions. Each vertex transition on VMG has rewards defined from the original rewards in the environment.
|
| 14 |
+
|
| 15 |
+
To control agents in environments, we first run the classical value iteration algorithm (Puterman, 2014) once on VMG to calculate graph state values. This can be done in less than one second, without training a value neural network, thanks to the discrete and relatively small state and action spaces of VMG. At each timestep, VMG is used to search for graph actions that can lead to high-value future states. Graph actions are directed edges and cannot be directly executed in the original environment. With the help of an action translator trained with supervised learning (e.g., Emmons et al. (2021)) on the same offline dataset, the searched graph actions are converted to environment actions to control the agent. An overview of our method is shown in Fig. 1.
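A minimal sketch of value iteration on such a graph MDP (the edge-list layout and discount factor are our assumptions, not from the paper):

```python
import numpy as np

# Minimal sketch: value iteration where graph states are vertices and graph
# actions are outgoing directed edges with rewards R_G.
def graph_value_iteration(edges, num_vertices, gamma=0.95, iters=1000):
    # edges: list of (j1, j2, reward) tuples for each directed edge e_{j1->j2}
    V = np.zeros(num_vertices)
    for _ in range(iters):
        V_new = np.full(num_vertices, -np.inf)
        for j1, j2, r in edges:
            V_new[j1] = max(V_new[j1], r + gamma * V[j2])  # Bellman backup
        V_new[V_new == -np.inf] = 0.0  # vertices with no outgoing edge
        V = V_new
    return V
```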
|
| 16 |
+
|
| 17 |
+
Our contribution can be summarized as follows:
|
| 18 |
+
|
| 19 |
+
- We present Value Memory Graph (VMG), a graph-structured world model in the offline reinforcement learning setting. VMG represents the original environment as a graph-based MDP with relatively small and discrete action and state spaces.
|
| 20 |
+
- We design a method to learn and build VMG on an offline dataset via contrastive learning and state merging.
|
| 21 |
+
- We introduce a VMG-based method to control agents by reasoning about graph actions that lead to high-value future states via value iteration and converting them to environment actions via an action translator.
|
| 22 |
+
- Experiments on the D4RL benchmark show that VMG can outperform several state-of-the-art offline RL methods on several goal-oriented tasks with sparse rewards and long temporal horizons.
|
| 23 |
+
|
| 24 |
+
# Method
|
| 25 |
+
|
| 26 |
+
The detailed algorithm of graph construction is shown in Alg.1.
|
| 27 |
+
|
| 28 |
+
```
Input: Training set \mathcal{D} = \{(s_i, a_i, r_i, s_i') | i = 1, 2, ..., N\},
       empty vertex set \mathcal{V} = \{\}, current vertex index J = 1,
       distance threshold \gamma_m, empty edge set \mathcal{E} = \{\}
 1: for (s_i, a_i, r_i, s_i') in \mathcal{D} do
 2:     f_{s_i} = Enc_s(s_i)
 3:     compute the distance d_{ij} between f_{s_i} and f_{v_j} for every f_{v_j} in \mathcal{V}
 4:     if \min\{d_{ij} | f_{v_j} in \mathcal{V}\} > \gamma_m or J = 1 then
 5:         v_J \leftarrow s_i,  f_{v_J} \leftarrow f_{s_i}
 6:         \mathcal{V}.append((v_J, f_{v_J}))
 7:         J \leftarrow J + 1
 8:     end if
 9: end for
10: for (s_i, a_i, r_i, s_i') in \mathcal{D} do
11:     find v_{j_1}, v_{j_2} that s_i and s_i' are classified to in \mathcal{V}, respectively
12:     if v_{j_1} \neq v_{j_2} and the edge e_{j_1 \to j_2} \notin \mathcal{E} then
13:         \mathcal{E}.append(e_{j_1 \to j_2})
14:     end if
15: end for
```
|
| 56 |
+
|
| 57 |
+
The detailed algorithm of policy execution is shown in Alg.2.
|
| 58 |
+
|
| 59 |
+
**Details of Dijkstra** When we use Dijkstra in Sec.3.4 to plan a path $\mathcal P$ from $v_c$ to $v^*$ , we assign weights to the edges to make sure $\mathcal P$ is both short and highly rewarded. The weights used to plan the path $\mathcal P$ are based on rewards. For each edge $e_{j_1\to j_2}$ , we define the edge weight $w_{j_1\to j_2}$ as the gap between the maximal graph reward and the edge reward and denote the weight set as $\mathcal W$ . $w_{j_1\to j_2}=\max\{R_G(v_{j_3},v_{j_4})|\forall e_{j_3\to j_4}\in\mathcal E\}-R_G(v_{j_1},v_{j_2}).$
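A minimal sketch of this reward-gap Dijkstra search using networkx (the edge-list layout is an assumption; not the authors' code):

```python
import networkx as nx

# Minimal sketch: plan a short, high-reward path on VMG with Dijkstra,
# weighting each edge by the gap to the maximal graph reward.
def plan_path(edges, v_c, v_star):
    # edges: list of (j1, j2, reward) tuples for each directed edge e_{j1->j2}
    r_max = max(r for _, _, r in edges)
    G = nx.DiGraph()
    for j1, j2, r in edges:
        G.add_edge(j1, j2, weight=r_max - r)  # w = max reward - edge reward
    return nx.dijkstra_path(G, v_c, v_star, weight="weight")
```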
|
| 60 |
+
|
| 61 |
+
For all the networks including the state encoder $Enc_s$ , the action encoder $Enc_a$ , the action decoder $Dec_a$ , and the action translator Tran(s,s'), we use a 3-layer MLP with hidden size 256 and ReLU activation functions.
|
| 62 |
+
|
| 63 |
+
**Input**: Current state $s_c$ , State encoder $Enc_s$ , Action translator Tran, Vertex and edge sets in VMG $(\mathcal{V}, \mathcal{E})$ , Vertices value V, Edge weight $\mathcal{W}$
|
| 64 |
+
|
| 65 |
+
- 1 $f_{s_c} = Enc_s(s_c)$
- 2 $v_c = \operatorname{arg\,min}_{v_j|(v_j,f_{v_j})\in\mathcal{V}} D(f_{s_c},f_{v_j})$
- 3 Search a future horizon of $N_s$ steps starting from $v_c$ and select the best-value vertex $v^*$
- 4 Compute the weighted shortest path $\mathcal{P}$ from $v_c$ to $v^*$ via Dijkstra: $\mathcal{P} = [v_c, v_{c+1}, ..., v^*]$
- 5 $a_c = Tran(s_c, v_{c+N_{sa}})$
|
2207.07077/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
2207.07077/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,102 @@
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
We interact with surrounding objects in structured yet rather complex, unorganized, and dynamic environments, enabled by our robust egocentric perception that facilitates understanding 3D scene geometry around us. Such innate perceptual ability stands in stark contrast with that of existing computer vision systems, trained to operate on images depicting static and well-organized scenes recorded by carefully controlled cameras [\[6,](#page-8-2) [19,](#page-8-4) [37\]](#page-8-3). These trained models [\[14,](#page-8-5) [23\]](#page-8-6) are, despite their remarkable performance, shown to be highly brittle when predicting the scene geometry of egocentric images that observe unscripted everyday activities, including diverse hand-object interactions, captured by in situ embodied sensors such as head/body-mounted cameras [\[8\]](#page-8-7). This requires additional sensors such as multi-camera rigs, IMU and depth sensors in augmented/mixed reality devices (e.g., Hololens and Magic Leap One) to deliver interactive and immersive experiences in our daily spaces.
|
| 4 |
+
|
| 5 |
+
<span id="page-1-0"></span>In this paper, we study a problem of egocentric 3D scene understanding—predicting depths and surface normals from a single view egocentric image. In addition to challenges of classic scene understanding problems [\[6\]](#page-8-2), egocentric scene understanding poses two more challenges: (1) Images are no longer upright. Head movements induce significant roll and pitch motions where the scene is often depicted in a tilted way. In particular, by the nature of handeye coordination, egocentric images inherently are affected by severe pitch motion when manipulating objects, which is substantially different from the existing data distribution, e.g., ScanNet [\[6\]](#page-8-2), NYUv2 [\[37\]](#page-8-3), and KITTI [\[19\]](#page-8-4). (2) Images include not only background objects, e.g., furniture, room layout, and walls, but also dynamic foreground objects, e.g., humans and arms/hands (see Figure [1\)](#page-0-0). Classic scene understanding mainly focuses on reconstructing the overall geometric layout made of such background objects while the foreground ones are considered as outliers. In contrast, these foregrounds are more salient in egocentric scenes as they are highly indicative of evolving activities.
|
| 6 |
+
|
| 7 |
+
We conjecture that the challenges of egocentric scene understanding can be addressed by an image stabilization method that incorporates the fundamentals of equivariance, called spatial rectifier [\[8\]](#page-8-7)—an image warping that transforms a tilted image to a canonical orientation (i.e., gravity-aligned) such that a prediction model can learn from the upright images. This is analogous to our robust perception through mental stabilization of visual stimuli [\[56\]](#page-9-0). However, the spatial rectifier shows inferior performance on predicting 3D geometry of egocentric images that involve substantial head movement (e.g., nearly 90 degree pitch), leading to excessive perspective warps. We present a *multimodal spatial rectifier* by generalizing the canonical direction, i.e., instead of a unimodal gravity-aligned direction, we learn multiple reference directions from the orientations of the egocentric images, which allows minimizing the impact of excessive perspective warping. Our multimodal spatial rectifier makes use of clusters of egocentric images based on the distribution of surface normals over multiple pitch modes, where we learn a geometric predictor (surface normals or depths) that is specialized for each mode to rectify associated roll angles.
|
| 8 |
+
|
| 9 |
+
To facilitate learning the visual representation of dynamic egocentric scenes, we present a new dataset called *EDINA* (Egocentric Depth on everyday INdoor Activities). Our dataset comprises more than 15 hours of RGBD recordings of indoor activities including cleaning, cooking, eating, and shopping. Our dataset provides synchronized RGB, depth, surface normals, and the 3D gravity direction to train our multimodal spatial rectifier and geometry prediction models. Our depth and surface normal predictors learned from EDINA outperform the baseline predictors not only on the EDINA dataset but also on other datasets, such as EPIC-KITCHENS [\[7\]](#page-8-0) and First Person Hand Action (FPHA) [\[18\]](#page-8-1).
|
| 10 |
+
|
| 11 |
+
Our contributions include: (1) a multimodal spatial rectifier; (2) a large dataset of egocentric RGBD with the gravity that is designed to study egocentric scene understanding, by capturing diverse daily activities in the presence of dynamic foreground objects; (3) comprehensive experiments to highlight the effectiveness of our multimodal spatial rectifier and our EDINA dataset towards depth and surface normal prediction on egocentric scenes.
|
| 12 |
+
|
| 13 |
+
# Method
|
| 14 |
+
|
| 15 |
+
We present a multimodal spatial rectifier that stabilizes tilted images into multiple transformation modes. This method minimizes the impact of perspective warping while retaining the equivariance property.
|
| 16 |
+
|
| 17 |
+
Consider a function $\Phi:\mathbb{R}^2\times\mathbb{I}\to\mathbb{R}^n$ that predicts the geometry of a pixel $\mathbf{x}\in\mathbb{R}^2$ in an image $\mathcal{I}\in\mathbb{I}$ , where $\mathbb{I}=[0,1]^{3\times H\times W}$ is the image range (H and W are its height and width, respectively). We denote the prediction:
|
| 18 |
+
|
| 19 |
+
$$y = \Phi(\mathbf{x}, \mathcal{I}),\tag{1}$$
|
| 20 |
+
|
| 21 |
+
where $y \in \mathbb{R}^n$ and n is the dimension of the geometry, e.g., n=1 for depth, and n=3 for surface normal.
|
| 22 |
+
|
| 23 |
+
A spatial rectifier [8] is learned to transform a *tilted* image $\mathcal{I}$ with the gravity direction $\mathbf{g} \in \mathbb{S}^2$ in the camera coordinate system to the *upright* image $\mathcal{I}_{up}$ with the upright
|
| 24 |
+
|
| 25 |
+
<span id="page-2-1"></span>
|
| 26 |
+
|
| 27 |
+
(Figure 3 panels, left to right: input, unimodal spatial rectifier, multimodal spatial rectifier.)
|
| 28 |
+
|
| 29 |
+
Figure 3. A unimodal spatial rectifier produces an excessive perspective warp (middle) to align the image to the gravity direction, which significantly degrades the performance of geometry prediction. We use a multimodal spatial rectifier that warps to multiple reference directions, minimizing the impact of the perspective warping (right).
|
| 30 |
+
|
| 31 |
+
gravity direction $g_{\rm up}$ by explicitly enforcing an equivariant property through 3D rotation (Figure 2):
|
| 32 |
+
|
| 33 |
+
$$h_{\mathcal{W}} \circ \Phi(\mathbf{x}, \mathcal{I}) = \Phi(\mathcal{W}(\mathbf{x}; \mathbf{R}_{up}), \mathcal{I}_{up}), \quad (2)$$
|
| 35 |
+
|
| 36 |
+
where $\mathcal{W}: \mathbb{R}^2 \times SO(3) \to \mathbb{R}^2$ is a 2D transformation that maps a point in the tilted image to the upright image based on the 3D gravity direction. That is, the transformation can be determined by a homography induced by camera pure rotation $\mathbf{R}_{\mathrm{up}} \in SO(3)$ such that $\mathbf{g}_{\mathrm{up}} = \mathbf{R}_{\mathrm{up}}\mathbf{g}$ . $\mathcal{I}_{\mathrm{up}}$ is warped from the tilted image by $\mathcal{W}$ , i.e., $\mathcal{I}_{\mathrm{up}} = \mathcal{I}(\mathcal{W}(\mathbf{x}; \mathbf{R}_{\mathrm{up}}))$ . $h_{\mathcal{W}}$ is the geometry transformation parametrized by $\mathcal{W}$ , e.g., (1) for the surface normal prediction, $h_{\mathcal{W}}$ is equivalent to rotating the surface normal vector ( $\mathbb{S}^2$ ), i.e., $h_{\mathcal{W}} \circ \Phi = \mathbf{R}_{\mathrm{up}}\Phi$ ; (2) for the depth prediction, $h_{\mathcal{W}}$ is defined as:
|
| 37 |
+
|
| 38 |
+
$$h_{\mathcal{W}} \circ \Phi = \left( \mathbf{R}_{\text{up}} \mathbf{K}^{-1} \widetilde{\mathbf{x}} \right)_z \Phi \tag{3}$$
|
| 39 |
+
|
| 40 |
+
where $(\mathbf{v})_z$ denotes the $3^{\text{rd}}$ coordinate of a vector $\mathbf{v} \in \mathbb{R}^3$ , $\mathbf{K}$ is the camera intrinsic matrix, and $\tilde{\mathbf{x}} \in \mathbb{P}^2$ is the homogeneous representation of $\mathbf{x}$ .
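As an illustration, a pure camera rotation induces the homography $\mathbf{H} = \mathbf{K}\mathbf{R}_{\text{up}}\mathbf{K}^{-1}$, so the rectifying warp can be sketched as follows (intrinsics and rotation assumed given; not the authors' code):

```python
import cv2
import numpy as np

# Minimal sketch: warp a tilted image to its rectified (e.g., gravity-aligned)
# view using the homography induced by a pure camera rotation R_up.
def rectify(image, K, R_up):
    H = K @ R_up @ np.linalg.inv(K)                 # homography from rotation
    h, w = image.shape[:2]
    return cv2.warpPerspective(image, H, (w, h))    # I_up = I(W(x; R_up))
```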
|
| 41 |
+
|
| 42 |
+
Predicting the geometry of a tilted image can be modeled as a function composition:
|
| 43 |
+
|
| 44 |
+
$$\Phi(\mathbf{x}, \mathcal{I}) = h_{\mathcal{W}}^{-1} \circ \Phi_{\mathrm{up}}(\mathcal{W}(\mathbf{x}; \mathbf{R}_{\mathrm{up}}), \mathcal{I}_{\mathrm{up}}), \tag{4}$$
|
| 45 |
+
|
| 46 |
+
where $h_{\mathcal{W}}^{-1}$ is the spatial rectifier, and $\Phi_{\mathrm{up}}$ is the geometry predictor learned from upright images. A key benefit of this function composition is that $\Phi_{\mathrm{up}}$ can be trained solely on large training datasets made of upright images (e.g., ScanNet [6] and NYUv2 [37]), which can, in turn, be used to predict the surface normals of a tilted image.
|
| 47 |
+
|
| 48 |
+
Limitation Despite its strong performance on tilted images, the spatial rectifier exhibits a major limitation for egocentric scene understanding due to its unimodal rectification. The spatial rectifier is designed to warp a tilted image with respect to a single upright direction, which applies to roll and mild pitch camera rotations. In contrast, egocentric images often have substantial head orientation due to hand-eye coordination, resulting in a severely perspective-warped image $\mathcal{I}_{\rm up}$ (e.g., for a 90° pitch tilted image), which, in turn, significantly degrades the performance of the geometry predictor as shown in Figure 3 (middle).
Figure 4. Unlike the spatial rectifier [8], which relies on a unimodal surface normal distribution with respect to the gravity direction (left), we present a multimodal spatial rectifier that generalizes the spatial rectifier by learning multiple reference directions (right). As a result, the surface normal distribution of the scene datasets can be decomposed into multiple clusters, which allows minimizing the impact of image warping and, more importantly, learning a geometrically coherent representation.
We generalize the spatial rectifier model by leveraging a mixture-of-experts model [34] called the *multimodal spatial rectifier*, where each expert model predicts a spatial rectification:
$$\Phi(\mathbf{x}, \mathcal{I}) = \frac{1}{\sum_{i} b_{i}} \sum_{i} b_{i} \left( h_{\mathcal{W}_{i}}^{-1} \circ \Phi_{i}(\mathcal{W}(\mathbf{x}; \mathbf{R}_{i}), \mathcal{I}_{i}) \right), \tag{5}$$
where $b_i \in \mathbb{R}_+$ is a non-negative weight to mix transformations, and $\mathbf{R}_i$ is the rotation that transforms the gravity of the tilted image to the $i^{\text{th}}$ reference direction, i.e., $\mathbf{r}_i = \mathbf{R}_i \mathbf{g}$. $\mathcal{I}_i$ is warped from the tilted image by $\mathcal{W}_i$, i.e., $\mathcal{I}_i = \mathcal{I}(\mathcal{W}(\mathbf{x}; \mathbf{R}_i))$. The reference direction $\mathbf{r} \in \mathbb{S}^2$ is a generalization of the upright gravity $\mathbf{g}_{\text{up}}$, and it specifies how the egocentric tilted images are to be warped. $\Phi_i$ is the geometry predictor designed for the $i^{\text{th}}$ reference direction. By abuse of notation, we denote $\mathcal{W}(\mathbf{x}; \mathbf{R}_i)$ by $\mathcal{W}_i$. The key benefit of the multimodal spatial rectifier is the flexibility of image warping: an egocentric image with severe head orientation can be warped to the closest reference direction, which prevents excessive perspective warping (see Figure 3).
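As a sketch of Eq. (5) (ours; tensor shapes are assumptions), the experts' unwarped predictions are blended with the normalized weights $b_i$; with the one-hot encoding of $\{b_i\}$ used in practice (described below), this reduces to selecting the expert of the closest reference direction:

```python
import torch

def multimodal_predict(expert_outputs, b):
    """expert_outputs: list of K unwarped expert predictions, each (B, C, H, W);
    b: non-negative mixing weights, (B, K). Implements the blend in Eq. (5)."""
    stacked = torch.stack(expert_outputs, dim=1)   # (B, K, C, H, W)
    w = b / b.sum(dim=1, keepdim=True)             # normalize: 1 / sum_i b_i
    return (w[:, :, None, None, None] * stacked).sum(dim=1)
```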
We find the set of reference directions $\{\mathbf{r}_i\}_{i=1}^K$ along the pitch directions by clustering the gravity directions of egocentric images, where $K$ is a predefined number of reference directions:
$$\underset{\{\mathbf{r}_i\}_{i=1}^K}{\text{minimize}} \sum_{i=1}^K \sum_{j \in C_i} \|\mathbf{g}_j - \mathbf{r}_i\|_2^2, \tag{6}$$
where $C_i$ is the set of indices of the training instances whose gravity directions are closest to the $i^{\rm th}$ reference direction $\mathbf{r}_i$. In practice, we design an iterative algorithm inspired by the K-Medoids algorithm [38] that increases the number of clusters $K$ until the total deviation falls below a threshold $\delta$, indicating that the data is well-fitted (see Algorithm 1). Figure 4 illustrates the gravity cluster centers and the images, together with their surface normal maps, belonging to each cluster. Similar to the spatial rectifier [8], we represent a 3D rotation by two unit vectors $(\mathbf{g}, \mathbf{e})$: the gravity and principal directions. $\mathbf{e}$ is the unit vector at the mode of the distribution of surface normals in an image (see details in Appendix). In practice, we use a one-hot encoding for $\{b_i\}$, i.e., $b_i = 1$ if $\mathbf{r}_i$ is closest to $\mathbf{g}$, and zero otherwise.
We learn the spatial rectifier from a set of ground truth annotations $\{(\mathcal{I}, \mathbf{g}, \mathbf{e}, \mathbf{y})\}_{\mathcal{D}}$, where $\mathcal{D}$ is the training dataset and $\mathbf{y} \in \mathbb{R}^{n \times H \times W}$ is the ground truth geometry ($n = 1$ for depth and $n = 3$ for surface normals).
Consider two learnable functions $f_{\mathbf{g}}, f_{\mathbf{e}} : \mathbb{I} \to \mathbb{S}^2$ that predict the gravity and principal directions from an image, respectively. These two functions constitute a spatial rectifier that can be learned by minimizing the following loss:
$$\mathcal{L}_{SR}(\mathcal{I}, \mathbf{g}, \mathbf{e}) = \cos^{-1}(\mathbf{g}^{\mathsf{T}} f_{\mathbf{g}}(\mathcal{I})) + \cos^{-1}(\mathbf{e}^{\mathsf{T}} f_{\mathbf{e}}(\mathcal{I})). \tag{7}$$
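In PyTorch, the loss in Eq. (7) can be sketched as below (our illustration, not the authors' code; the outputs of $f_{\mathbf{g}}$ and $f_{\mathbf{e}}$ are assumed to be unit vectors of shape (B, 3), and the clamp guards `acos` numerically):

```python
import torch

def angular_error(pred, gt, eps=1e-7):
    """Angle in radians between batched unit vectors, both of shape (B, 3)."""
    cos = (pred * gt).sum(dim=-1).clamp(-1.0 + eps, 1.0 - eps)
    return torch.acos(cos)

def loss_sr(g_pred, e_pred, g_gt, e_gt):
    """L_SR = acos(g^T f_g(I)) + acos(e^T f_e(I)), averaged over the batch."""
    return (angular_error(g_pred, g_gt) + angular_error(e_pred, e_gt)).mean()
```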
We jointly learn the multimodal spatial rectifier together with the geometry predictor by minimizing the following loss:

$$\mathcal{L} = \sum_{\{\mathcal{I}, \mathbf{g}, \mathbf{e}, \mathbf{y}\} \in \mathcal{D}} \mathcal{L}_{\mathrm{GEO}}(\mathbf{y}, \mathcal{I}) + \lambda \mathcal{L}_{SR}(\mathcal{I}, \mathbf{g}, \mathbf{e}). \tag{8}$$
Algorithm 1: Finding the reference directions $\{\mathbf{r}_i\}_{i=1}^K$.

$$\begin{split} & \textbf{Input} : \delta, \{\mathbf{g}_j\}_{\mathcal{I}_j \in \mathcal{D}_{\text{train}}} \\ & \textbf{Output:} \ \{\mathbf{r}_i\}_{i=1}^K \\ & K = 1, t = \delta + \epsilon; \\ & \textbf{while} \ t > \delta \ \textbf{do} \\ & \quad \{\mathbf{r}_i\}_{i=1}^K = \text{K-Medoids}(\{\mathbf{g}_j\}_{\mathcal{D}_{\text{train}}}, K); \\ & \quad t = \sum_{i=1}^K \sum_{j \in \mathcal{C}_i} \|\mathbf{g}_j - \mathbf{r}_i\|_2^2; \\ & \quad K \leftarrow K + 1; \\ & \textbf{end} \end{split}$$
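A runnable Python rendering of Algorithm 1 (ours; the K-Medoids step is a minimal alternating implementation rather than the reference one in [38]):

```python
import numpy as np

def kmedoids(points, k, iters=50, seed=0):
    """Minimal K-Medoids on unit vectors; returns (medoids, assignments)."""
    rng = np.random.default_rng(seed)
    medoids = points[rng.choice(len(points), size=k, replace=False)].copy()
    assign = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        # assign each gravity direction to its nearest medoid
        assign = np.linalg.norm(points[:, None] - medoids[None], axis=-1).argmin(axis=1)
        for i in range(k):
            members = points[assign == i]
            if len(members) == 0:
                continue
            # the medoid is the member minimizing total distance to its cluster
            cost = np.linalg.norm(members[:, None] - members[None], axis=-1).sum(axis=0)
            medoids[i] = members[cost.argmin()]
    return medoids, assign

def reference_directions(gravities, delta):
    """Grow K until the clustering residual of Eq. (6) drops below delta."""
    k, t = 1, np.inf
    while t > delta:
        refs, assign = kmedoids(gravities, k)
        t = sum(np.sum((gravities[assign == i] - refs[i]) ** 2) for i in range(k))
        k += 1
    return refs
```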
Figure 5. The multimodal spatial rectifier warps an egocentric image by predicting the gravity direction $\mathbf{g}$ and principal direction $\mathbf{e}$, allowing a coherent geometry predictor $\Phi$ to be learned.
The geometric loss $\mathcal{L}_{\rm GEO}$ measures the geometric error between the prediction and ground truth:
$$\begin{split} \mathcal{L}_{\text{GEO}}(\mathbf{y}, \mathcal{I}) &= \sum_{\mathbf{x}} d(\mathbf{y}_{\mathbf{x}}, \Phi(\mathbf{x}, \mathcal{I})), \text{ where} \\ d(y, \Phi) &= \begin{cases} |y - \Phi| & \text{for depth} \\ \cos^{-1}\left(y^{\mathsf{T}}\Phi\right) & \text{for surface normal} \end{cases} \end{split}$$
where $\Phi(\mathbf{x}, \mathcal{I}) = h_{\mathcal{W}}^{-1} \circ \Phi(\mathcal{W}(\mathbf{x}; \mathbf{R}), \overline{\mathcal{I}})$ with $\overline{\mathcal{I}}$ the rectified image, and $\mathbf{R}$ can be computed from the predictions of $f_{\mathbf{g}}(\mathcal{I})$ and $f_{\mathbf{e}}(\mathcal{I})$.
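A hedged PyTorch sketch of $\mathcal{L}_{\rm GEO}$ (ours; it sums over all pixels, and the surface-normal branch assumes unit-norm predictions):

```python
import torch

def loss_geo(pred, gt, mode, eps=1e-7):
    """d = |y - phi| for depth (B, 1, H, W); acos(y^T phi) for normals (B, 3, H, W)."""
    if mode == "depth":
        return (pred - gt).abs().sum()
    cos = (pred * gt).sum(dim=1).clamp(-1.0 + eps, 1.0 - eps)
    return torch.acos(cos).sum()
```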
The multimodal spatial rectifier is a modular predictor that can be combined with a geometry predictor $\Phi$, as shown in Figure 5. It is learned to predict the gravity and principal directions from an input tilted image through $f_{\bf g}$ and $f_{\bf e}$, respectively. From the predicted directions, it computes the rotation ${\bf R}$ that is used to warp the image to the reference direction via ${\cal W}$. The geometry predictor takes the warped image as input and predicts depths and surface normals, which are then unwarped by $h_{\cal W}^{-1}$.
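Putting the pieces together, one hedged sketch of this inference path for surface normals (ours, not the authors' code: `f_g` stands for the learned gravity predictor, `phi` for the geometry predictor of the selected reference, `refs` for the reference directions; warping uses kornia's `warp_perspective`, and normalization details and the antipodal rotation case are glossed over):

```python
import torch
import torch.nn.functional as F
from kornia.geometry.transform import warp_perspective

def rotation_between(a, b, eps=1e-7):
    """Rodrigues rotation taking unit vectors a to b, (B, 3) -> (B, 3, 3)."""
    v = torch.cross(a, b, dim=-1)
    c = (a * b).sum(-1, keepdim=True).unsqueeze(-1)       # cos angle, (B, 1, 1)
    Vx = torch.zeros(a.shape[0], 3, 3, device=a.device)   # skew-symmetric [v]_x
    Vx[:, 0, 1], Vx[:, 0, 2] = -v[:, 2], v[:, 1]
    Vx[:, 1, 0], Vx[:, 1, 2] = v[:, 2], -v[:, 0]
    Vx[:, 2, 0], Vx[:, 2, 1] = -v[:, 1], v[:, 0]
    I = torch.eye(3, device=a.device).expand_as(Vx)
    return I + Vx + Vx @ Vx / (1 + c + eps)

def rectified_normals(image, K, refs, f_g, phi):
    """Warp to the closest reference, predict, then unwarp (cf. Figure 5)."""
    g = F.normalize(f_g(image), dim=-1)                   # predicted gravity, (B, 3)
    i = (g @ refs.T).argmax(dim=1)                        # one-hot b_i: nearest r_i
    R = rotation_between(g, refs[i])                      # rotation with r_i = R g
    H = K @ R @ torch.linalg.inv(K)                       # homography W(x; R_i)
    h, w = image.shape[-2:]
    n_up = phi(warp_perspective(image, H, (h, w)))        # predict in rectified frame
    # h_W^{-1}: rotate the normals back, then resample the map
    n = torch.einsum('bij,bjhw->bihw', R.transpose(1, 2), n_up)
    return warp_perspective(n, torch.linalg.inv(H), (h, w))
```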
**Implementation Details** Our networks take as input an RGB image of size $320 \times 240$ and output surface normals or depths of the same size. We use a ResNet-18 architecture to estimate $f_{\bf g}$ and $f_{\bf e}$, while the geometry predictor $\Phi$ is specified in Section 5.2. The proposed models are implemented in PyTorch [39], trained with a batch size of 32 on a single NVIDIA Tesla V100 GPU, and optimized by the Adam optimizer [26] with a learning rate of $10^{-4}$. We train our models for 20 epochs.
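For reference, the stated optimization setup maps to a few lines of PyTorch (a sketch under assumptions: torchvision's ResNet-18 with a 6-way head stands in for $f_{\bf g}$ and $f_{\bf e}$; data loading and the loss of Eq. (8) are elided):

```python
import torch
import torchvision

# hypothetical stand-in for f_g and f_e: 3 outputs for g, 3 for e
model = torchvision.models.resnet18(num_classes=6)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
epochs, batch_size = 20, 32  # per the text; one NVIDIA Tesla V100 GPU
```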
**EDINA Dataset** We present a new RGBD dataset called *EDINA* (Egocentric Depth on everyday INdoor Activities) that facilitates learning 3D geometry from egocentric images. Each instance in the dataset is a triplet: an RGB image (1920×1080), depths and surface normals (960×540), and the 3D gravity direction. The data were collected using Azure Kinect cameras [35], which provide RGBD images (depth range: 0.5–5.46 m) together with inertial measurement unit signals. Eighteen participants were asked to perform diverse daily indoor activities, e.g., cleaning, sorting, cooking, eating, doing laundry, training/playing with a pet, walking, shopping, vacuuming, making the bed, exercising, throwing trash, watering plants, sweeping, and wiping, while wearing a head-mounted camera. The camera is oriented approximately 45° downward to ensure that hand-object interactions are observed. In total, the dataset contains 550K images (16 hours). Figure 6(a) illustrates representative examples from the EDINA dataset, which include substantially tilted egocentric images depicting diverse activities.
The gravity direction is highly correlated with activities. For instance, the majority of cooking and cleaning activities are performed while facing down, whereas shopping and interacting with others are performed while facing forward, as shown in Figure 6(b). Figure 6(c) illustrates the amount of data for four major indoor activities: cleaning, cooking, shopping, and home organizing. Unlike existing scene datasets such as ScanNet, a large proportion of the pixels in egocentric scenes belong to the foreground. Our dataset is available at https://github.com/tien-d/EgoDepthNormal.
2208.04726/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2022-07-24T16:46:47.626Z" agent="5.0 (X11)" etag="L0Rgvj29E9d8Nfprc8C9" version="20.2.0" type="device"><diagram id="VpSa_xA8qOyCA3ouFb6l" name="Page-1">7V1rd6I6F/41rvWeD7ogCbePalvbOeqaObXT1m9RImIRPBir9te/4abcVFRQe4Z2piVXYD9PdvbeiWkFNqerlo1n446lEqMCOHVVgXcVwL44jv1yctZejqRAL0OzddXL4rcZz/oX8TP9dtpCV8k8UpFalkH1WTRzaJkmGdJIHrZtaxmtNrKM6F1nWPPvyG0znofYIIlqr7pKx16uLIRqPxJdGwd35oMXnuKgst/FfIxVaxm6F7yvwKZtWdS7mq6axHCEF8jF6+hhR+nmwWxi0iwNqvV/mz/Wf6NeXZrcC7+fRgruVmX/2eg6eGGisvf3k5ZNx5Zmmdi43+Y2bGthqsTplWepbZ22Zc38zAmhdO2DiRfUYlljOjX8UrLS6Vvo+p1dczXBT9053OGCxNpPzKltfZCmZVi2+5wQNTlOUTYlAThMrI2kZAIIrIU9JHvEETAM2xqhe+ohr54jq9ANfLm3iDUl1F6zCjYxMNU/o1zCPiW1Tb0tauzCB+4IEP1+P7GxCO50V21a5mcC3Ch0y7FOyfMMuyJZsvEbhWmkG0ZI4KpAZBWlQcG5X8dB8UlsSlZ7hbfVIV4TX4OIfnK5HY5B1jg0EmWuIGmjhLQTYmZDfeZc6lNXuzScl9WZTmnjATF+WnOd6pbJygcWpdaUVTCcggYefmguQoFsVTLCC4OGeqgbuua0pM5Qa+D5zNN5I33loNpwb1gPcrkgx+kKU1yBdS8JHuafWgU0Vgxr0Pz52AX9dWMyaBnL9tsPftB6UZ6mv0H/Vfjst37pT63+bNBa0qH5e97vcXr/rW8MpspHv/mkkRY/H5gdVn/MqY91sb1WoAqHC/WrsxjAH2b762nZuat/DmHffNIbVv/VMPHjL+Vp8sSxNMSv/3D4jtM7E3afx/6s/6Y2B1Bj5XWt06yvu+7/J/asPx8bY7WlaX1Wu9e7X7UnL1qn1QHtSYfrruvLpzsvr/uMVt1eR+v07hfd5/qq84z44Zrj25N7vrP20u2gvVuH8/NeFp3ey5dfRv32fjuvjn8Pr+xtRtkTrt6/Xhbdrw9Wxp64V3facO3JcOneu+m2WbYnv1m/v1i9up92+qgvgjrsnm5et4lg5y4o4/z2Xju/jhYuI1PO66OlAXXyYLV7fdCdarTz9k6d313znXZeWZnOAXz3ALqmpbe/kNyGDPEv5JA1lyEKuegQleXEEAUoOUSDvNyHqHjWrMZdblZjIrfXb+FEqJWT3DZzU9HZMKxod8+Pp8+GQsbZkL+p2VA6AfyLw71F+D0C8D64sxk/0Sm+ePBzN4XcpnXbxutQhZmlm3Qe6vmnkxEyEmIaKDDDHzLWR/urQx7sqc4uvOfdknbz4qfzWLmGEjtGscwZN2jd8bFYgWmZ5AyG7+RpjFcZCCnkwb8EA3iEIgwQYualN578RnuYxAvpRN105L1foqO8WMXzZ9Hqv+fxgayTHLypWS547pAb8myNaLWuafl6fZjIo+HlvT4o3ZbXF8Bfun2l21e6fb4VJNyW28ej0u/L4PcdnupAITZ90qoGUZsqYQvtMKpys4WEE/jy57mKVzCNTvIB4Q7TeqdTJxzlAyIxuw94c1QHsDT7I/JAGbkNhJsy+0Fy9aFnY9M3LfM0/AmvCkS6vOEv3NhyDyxHTsEDIl1jSuC6gRJ4inFwFdzPMyYvxZeDmhbe1qJ68Nylw186/KXDH0y8t+Xwg3KhNxeHHxSziJd0oaQre0Hl2nAuDn9BNuCxDr9w5KJvvP4Bh188YtH35qgOT6F6QeYryM9vCY0SKZ8F5oMGLMwaKoDihTwjOUb7+Fp00Z7RDW0algqhFiip5XegXJhaaZtrRdczUvVPdqk5lw94SBmgfsHADvKDHHbrUO2UDh4JVrM3zzGoNhoRcXiF1XQxtprOw8tF1dbt91ertUTLN/6VCtQwsNqqJn3pGzXcjw+SJHYsnaQZxIyaQQJnKoLzgqNiYrw26vmOGHlIrjFi4ppw84Gkq8WhlRTViJx/QvOOGBT7qbjw2fvSSsoev0CS/sa9sNj9LOzHoIZMisROCU5NdVV1R2YapNHROrJM6o9LXswJIXhYp8EUhEBRCPn+QSpCz7o2LRGSro1QWkw3ZB1wgUBZ7r8L54N5DV+0m/TWLPCRrUgN9tP5tN9gxK57FYm1Zo/HOWXuz03ZT7fMLXAuPDJksjn+sxSpAlRTlChLZKEmJHgiwRovJ6mCiqKKcJ6FcqaTs7VKNg7I0YEj/oDbEkfzMstBSMlo2NzWujtKzr5Ny7bzNXWu5BzwfCxUdUHnIB36NFPH05MOaSMSD/SyU1Cdu3RmBijHg9nK09Yxvf2/D087g79Cqtfr9U/TvVJ8v1QK7sIlp2cx+UHmEvcL4J5ill0W991Rn9xwhzIqkY9FgeRr477bHC9xzxN3Mb5R/9rAyzuB3+WHGWRE07ywnOhissIqS+CpA4g5mM/cQiYAyH6ObDwl8500Ohh1vvz7TL338Z9/hulw/L1e4GMvII5XcOh1jg7s38bwV/F8vHEgwrqgqKWApC7gOTGpDOKLjLkpA/mqnvYxawFxNC7jMUu+fIpe+1NiG24TiBe89he85zfbcDmm1DmKrO7IAzzoQ8uc11ina+eqNmQPAR5mpsZ+Wo5U9PmQiQKbxFow/fWwMNnzzvC6quqaTrFRnWL7g1DdbTAzFnOmKjWzSlYzZ48/69DpKhc1IIsxtCFKibpBPqkIYFExFoCu89n0E7fmFRI/u5mdAPHlr5PPOwiGddBRsAfvgFY5di/Yjuct9IQOkGWPODHV2BkZIe5FyRtiFEBpjDolDniQKAecgCDvTD5VeVCTRAkKbEhAKElIjgYdARRqosxvvsFpdANiDXICUkRRliXESzB6EwnUOCRsv8WLznAb7pd8OciX84DMShdRqEFJhghBiWMzohK7y35OFs6WNC+5ZMsVtctt0yVt1aSkSypdoFhzjuoVFJEDIlI4PqZdlBrkQ0iexpYzSVk0X2AGf4v1o8/muwIXIe4kHKZC6HPEvgLE5g8pKm5RShN3+LNLfE3mkzUiIdH0Kvn7QjDDeUPfFh0e1jjASUBGkiizgajEBoagfCekMhwI/G2RqjIxQtHTXt68J0ahYnaaY41tvpMxhJvCKovD9l2xQnKNl0Wg8FDxxlYMqbRtVbcEDkpOSO7CRcX52wB4Nk4g9WcE7KtQSExlipiCpZIStc9jCS91A/+VQ3WX3uqWZVP/4aNylXSgzz2aFOYUq5NRrCOk1KTY4+QUrosfp5p3vC6Vs+cdD/dncvbsz5ek4y/FqZZffBkWQtj4pqkYYQ/Wh/z++rIk7qtfTAC7POKkPOKkPOIk5ppG9ckFjzhJnbOyOCy3d
Sx8MsaY43RUzOHuiS0xp05GzGBSQl+CsL/f/EJ8qdxJfjjz6O1Z4o7tWUO6+l67KnfogRQW7lYNsTM8QXLnFEqJWcPjXTCW3P5dOI8N27+uB+//Dw==</diagram></mxfile>
2208.04726/main_diagram/main_diagram.pdf
ADDED
Binary file (22.4 kB)