Add files using upload-large-folder tool
This view is limited to 50 files because it contains too many changes.
- 2012.03208/main_diagram/main_diagram.drawio +0 -0
- 2012.03208/paper_text/intro_method.md +127 -0
- 2108.10869/main_diagram/main_diagram.drawio +1 -0
- 2108.10869/main_diagram/main_diagram.pdf +0 -0
- 2108.10869/paper_text/intro_method.md +11 -0
- 2111.00674/main_diagram/main_diagram.drawio +0 -0
- 2111.00674/paper_text/intro_method.md +66 -0
- 2112.01044/main_diagram/main_diagram.drawio +0 -0
- 2112.01044/paper_text/intro_method.md +123 -0
- 2112.14731/main_diagram/main_diagram.drawio +1 -0
- 2112.14731/main_diagram/main_diagram.pdf +0 -0
- 2112.14731/paper_text/intro_method.md +88 -0
- 2202.00095/main_diagram/main_diagram.drawio +0 -0
- 2202.00095/paper_text/intro_method.md +96 -0
- 2203.10692/main_diagram/main_diagram.drawio +1 -0
- 2203.10692/main_diagram/main_diagram.pdf +0 -0
- 2203.10692/paper_text/intro_method.md +77 -0
- 2205.11152/main_diagram/main_diagram.drawio +0 -0
- 2205.11152/paper_text/intro_method.md +83 -0
- 2206.04590/main_diagram/main_diagram.drawio +0 -0
- 2206.04590/paper_text/intro_method.md +92 -0
- 2207.07110/main_diagram/main_diagram.drawio +0 -0
- 2207.07110/paper_text/intro_method.md +322 -0
- 2210.01633/main_diagram/main_diagram.drawio +1 -0
- 2210.01633/main_diagram/main_diagram.pdf +0 -0
- 2210.01633/paper_text/intro_method.md +159 -0
- 2210.02411/main_diagram/main_diagram.drawio +1 -0
- 2210.02411/main_diagram/main_diagram.pdf +0 -0
- 2210.02411/paper_text/intro_method.md +116 -0
- 2210.13746/main_diagram/main_diagram.drawio +1 -0
- 2210.13746/main_diagram/main_diagram.pdf +0 -0
- 2210.13746/paper_text/intro_method.md +159 -0
- 2212.12141/main_diagram/main_diagram.drawio +1 -0
- 2212.12141/main_diagram/main_diagram.pdf +0 -0
- 2302.05118/main_diagram/main_diagram.drawio +1 -0
- 2302.05118/paper_text/intro_method.md +125 -0
- 2305.14761/main_diagram/main_diagram.drawio +0 -0
- 2305.14761/main_diagram/main_diagram.pdf +0 -0
- 2305.14761/paper_text/intro_method.md +16 -0
- 2305.16000/main_diagram/main_diagram.drawio +1 -0
- 2305.16000/main_diagram/main_diagram.pdf +0 -0
- 2305.16000/paper_text/intro_method.md +59 -0
- 2307.04333/main_diagram/main_diagram.drawio +0 -0
- 2307.04333/paper_text/intro_method.md +171 -0
- 2307.08849/main_diagram/main_diagram.drawio +1 -0
- 2307.08849/main_diagram/main_diagram.pdf +0 -0
- 2307.08849/paper_text/intro_method.md +82 -0
- 2307.10710/main_diagram/main_diagram.drawio +0 -0
- 2307.10710/paper_text/intro_method.md +108 -0
- 2310.19807/main_diagram/main_diagram.drawio +220 -0
2012.03208/main_diagram/main_diagram.drawio
ADDED
The diff for this file is too large to render.
2012.03208/paper_text/intro_method.md
ADDED
@@ -0,0 +1,127 @@
# Introduction

The prospect of having a robotic assistant that can carry out daily chores based on language directives is a long-standing dream that has eluded the research community for decades. With recent progress in computer vision, natural language processing, and embodiment, several benchmarks have been developed to encourage research on individual components of such instruction-following agents, including navigation [2, 8, 6, 23], object interaction [41, 31], and interactive reasoning [11, 15] in visually rich 3D environments [22, 5, 30]. However, to move towards building realistic assistants, an agent should possess all of these abilities. Taking a step forward, we address the more holistic task of interactive instruction following [15, 41, 34, 31], which requires an agent to navigate through an environment, interact with objects, and complete long-horizon tasks by following natural language instructions with egocentric vision.

To accomplish a goal in the interactive instruction following task, the agent should infer a sequence of actions and object interactions. While action prediction requires global semantic cues, object localisation needs a pixel-level understanding of the environment, making them semantically different tasks.

<span id="page-0-1"></span>

Figure 1: We divide interactive instruction following into perception and policy. Each heat map indicates where a stream focuses in the given visual observation. While a single stream exploits the same features for pixel-level and global understanding and thus fails to interact with the object, our factorized approach handles perception and policy separately and interacts successfully.

In addition, the neuroscience literature [14] describes a model of the human visual cortex with two pathways: the ventral stream (involved in object perception) and the dorsal stream (involved in action control). Inspired by this, we present a Modular Object-Centric Approach (MOCA) that factorizes interactive perception and action policy into separate streams within a unified end-to-end framework for building an interactive instruction following agent. Specifically, our agent has an action policy module (APM), which is responsible for sequential action prediction, and an interactive perception module (IPM), which localises the objects to interact with.

Figure 1 shows that our two-stream model is more beneficial than a single-stream one. The heat maps indicate the model's visual attention. For the action of 'picking up the candle,' the proposed factorized model focuses on the candle in both streams and succeeds at the interaction. In contrast, the single-stream model does not attend to the candle, highlighting the difficulty of handling two different predictions in a single stream.

<span id="page-1-0"></span>In the IPM, we propose to reason about object classes for better localisation and name this object-centric localisation (OCL). We further improve the localising ability over time by using the spatial relationship among the objects interacted with over consecutive time steps. For better grounding of visual features with textual instructions, we propose to use dynamic filters [\[20,](#page-8-11) [24\]](#page-8-12) for their effectiveness in cross-modal embedding. We also show that these components are more effective when employed in a model that factorizes perception and policy.

<span id="page-0-0"></span><sup>\*:</sup> equal contribution. $\S$: work done while with GIST. $\dagger$: corresponding author.

We train our agent using imitation learning, specifically behavior cloning. At inference, however, when the trained agent's path is blocked by immovable objects such as walls, tables, or kitchen counters, it is likely to fail to escape such obstacles, since the ground truth contains only perfect expert trajectories that finish the task without any errors. To avoid such errors, we further propose an obstruction evasion mechanism in the APM. Finally, we adopt data augmentation to address the sample insufficiency of imitation learning.

We empirically validate our proposed method on the recently proposed ALFRED benchmark [\[34\]](#page-9-1) and observe that it outperforms prior work by large margins in all evaluation metrics.

We summarize our contributions as follows:

- We propose to factorize perception and policy for embodied interactive instruction following tasks.
- We also present an object-centric localisation and an obstruction evasion mechanism for the task.
- We show that this agent outperforms prior arts by large margins in all metrics.
- We present qualitative and quantitative analysis to demonstrate our method's effectiveness.
# Method

An interactive instruction following agent performs a sequence of navigational steps and object interactions based on the egocentric visual observations it receives from the environment. These actions and interactions are driven by natural language instructions that the agent must follow to accomplish the task.

We approach this by factorizing the model into two streams, *i.e.*, interactive perception and action policy, and train the entire architecture in an end-to-end fashion. Figure 2 presents a detailed overview of MOCA.

Action prediction requires a global, scene-level understanding of the visual observation to abstract it into a resulting action. For object interaction, on the other hand, the agent needs to focus on both scene-level and object-specific features to achieve precise localisation [36, 26, 4].

Given the contrasting nature of the two tasks, MOCA has separate streams for action prediction and object localisation: the Interactive Perception Module (IPM) and the Action Policy Module (APM). Subscripts $a$ and $m$ in the following equations indicate whether a component belongs to the APM or the IPM, respectively. The APM is responsible for sequential action prediction; it takes in the instructions to exploit detailed action-oriented information. The IPM localises the pixel-wise mask whenever the agent needs to interact with an object for manipulation actions, focusing on the object-centric information in the instructions. Both the IPM and the APM receive the egocentric visual observation features at every time step.
The ability to interact with objects in the environment is key to interactive instruction following, since accomplishing each task requires multiple interactions. The interactive perception module (IPM) facilitates this by predicting a pixel-wise mask to localise the object to interact with.

First, the language encoder in the IPM encodes the language instructions and generates attended language features. For grounding the visual features to the language features, we use language-guided dynamic filters to generate the attended visual features (Sec. 3.2.1). Then, to temporally align the correct object with its corresponding interaction action among the ones present in the language input, we use the previous action embedding along with the visual and language input. For example, given the statement "Wash the spatula, put it in the first drawer," the agent first needs to wash the spatula in the sink, so there are two object classes, spatula and sink, that the agent needs to interact with, and it has to do so in a particular order. If the action is PUTOBJECT, the agent needs to predict the sink's mask, whereas if it is PICKOBJECT, it needs to predict the spatula's mask. As shown in Figure [2,](#page-2-0) the hidden state $h_{t,m}$ of the class decoder, $\text{LSTM}_m$, is updated with three different inputs concatenated as:

<span id="page-3-5"></span>
$$h_{t,m} = \text{LSTM}_m([\hat{v}_{t,m}; \hat{x}_{t,m}; a_{t-1}]) \tag{1}$$

where $[;]$ denotes concatenation, and $\hat{x}_{t,m}$ and $\hat{v}_{t,m}$ are the attended language and visual features, respectively. Finally, the class decoder's current hidden state $h_{t,m}$ is used to predict the mask $m_t$. This is done by invoking the *object-centric localisation* (Sec. [3.2.2](#page-3-0)), which helps the agent accurately localise the object of interest.

Visual grounding helps the agent exploit the relationships between language and visual features. This reduces the agent's dependence on any particular modality when encountering unseen scenarios.

It is common practice to concatenate flattened visual and language features [\[18,](#page-8-26) [34,](#page-9-1) [17\]](#page-8-27). However, this might not fully capture the relationship between visual and textual embeddings, leading to poor performance of interactive instruction following agents [\[34\]](#page-9-1).

Dynamic filters are conditioned on language features, making them more adaptive to varying inputs. This is in contrast with traditional convolutions, whose weights are fixed after training and fail to adapt to diverse instructions. Hence, we propose to use dynamic filters for the interactive instruction following task.

Particularly, we use a filter generator network comprising fully connected layers to generate dynamic filters that capture various aspects of the language from the attended language features. Specifically, the filter generator network, $f_{DF}$, takes the language features, $x$, as input and produces $N_{DF}$ dynamic filters. These filters convolve with the visual features, $v_t$, to output multiple joint embeddings, $\hat{v}_t = DF(v_t, x)$, as:

$$w_i = f_{DF_i}(x), \quad i \in [1, N_{DF}],$$
$$\hat{v}_{i,t} = v_t * w_i,$$
$$\hat{v}_t = [\hat{v}_{1,t}; \dots; \hat{v}_{N_{DF},t}], \tag{2}$$

where $N_{DF}$, $*$ and $[;]$ denote the number of dynamic filters, the convolution operation, and concatenation, respectively. We empirically investigate the benefit of using language-guided dynamic filters in Sec. [4.2.](#page-6-0)
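The dynamic-filter grounding of Eq. 2 can be sketched as follows. This is a minimal NumPy illustration, assuming 1x1 filters and random stand-in weights for the filter generator; the feature shapes and weights are assumptions for clarity, not the paper's exact implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def dynamic_filters(x, v, n_df=4):
    """Language-guided dynamic filters (sketch of Eq. 2).

    x    : (L,)        attended language features
    v    : (C, H, W)   visual features at time t
    n_df : number of dynamic filters N_DF
    Returns (n_df, H, W): joint embeddings, concatenated along channels.
    """
    L, (C, H, W) = x.shape[0], v.shape
    joint = []
    for i in range(n_df):
        # f_DF_i: a fully connected layer mapping the language features
        # to the weights of one 1x1 convolution filter w_i (weights here
        # are random stand-ins for learned parameters).
        W_i = rng.standard_normal((C, L)) * 0.01
        w_i = W_i @ x                               # w_i: (C,)
        # a 1x1 convolution is a per-pixel dot product over channels
        v_hat_i = np.einsum('c,chw->hw', w_i, v)
        joint.append(v_hat_i)
    return np.stack(joint)                          # [v̂_1; ...; v̂_{N_DF}]

x = rng.standard_normal(16)          # language features
v = rng.standard_normal((8, 7, 7))   # visual features
v_hat = dynamic_filters(x, v)
print(v_hat.shape)  # (4, 7, 7)
```

Each filter attends to a different aspect of the instruction, so the concatenated output carries several language-conditioned views of the same visual features.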
The IPM performs object interaction by predicting a pixel-wise interaction mask of the object of interest. We bifurcate the task of mask prediction into *target class prediction* and *instance association*. This bifurcation enables us to leverage the quality of pre-trained instance segmentation models while also ensuring accurate localisation. We refer to this mechanism as 'object-centric localisation' (OCL). We empirically validate OCL in Sec. [4.2](#page-6-0) and [4.3.](#page-6-1)

<span id="page-3-3"></span>Target Class Prediction. As the first step of OCL, we take an object-centric view of interaction by explicitly encoding the ability to reason about object categories in our agent. To achieve this, MOCA first predicts the target object class, $c_t$, that it intends to interact with at the current time step $t$. Specifically, $\text{FC}_m$ takes as input the hidden state, $h_{t,m}$, of the class decoder and outputs the target object class, $c_t$, at time step $t$, as shown in Equation [3.](#page-3-2) The predicted class is then used to acquire the set of instance masks corresponding to the predicted class from the mask generator.

<span id="page-3-2"></span>
$$c_t = \underset{k}{\operatorname{argmax}} \operatorname{FC}_m(h_{t,m}), \quad k \in [1, N_{class}], \tag{3}$$

where $\text{FC}_m(\cdot)$ is a fully connected layer and $N_{class}$ denotes the number of target object classes. The target object prediction network is trained as part of the IPM with a cross-entropy loss against the ground-truth object classes.
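The class-prediction head of Eq. 3 and its cross-entropy objective can be sketched as follows; the hidden-state size, class count, and FC weights below are illustrative stand-ins, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

N_CLASS = 5                                       # assumed small class count for the sketch
W_fc = rng.standard_normal((N_CLASS, 32)) * 0.1   # stand-in FC_m weights
b_fc = np.zeros(N_CLASS)

def predict_target_class(h_t_m):
    """Eq. 3: c_t = argmax_k FC_m(h_{t,m})."""
    logits = W_fc @ h_t_m + b_fc
    return int(np.argmax(logits)), logits

def cross_entropy(logits, target):
    """Training loss for the class head: softmax cross-entropy."""
    z = logits - logits.max()                     # stabilised log-softmax
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[target]

h = rng.standard_normal(32)                       # class-decoder hidden state h_{t,m}
c_t, logits = predict_target_class(h)
loss = cross_entropy(logits, target=2)
print(c_t, float(loss))
```

At inference only the argmax is used; the loss term is what ties the head to ground-truth object classes during training.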
<span id="page-3-4"></span>Instance Association. At inference, given the predicted object class, we need to choose the correct mask instance of the desired object. We use a pre-trained mask generator to obtain instance masks and confidence scores. A straightforward solution is to pick the highest-confidence instance, as it gives the best-quality mask of that object. This works well when the agent interacts with the object for the first time. However, when the agent interacts with the same object over an interval, it is more important to *remember* the object it has interacted with, since its appearance might vary drastically due to multiple interactions. Thus, purely confidence-based prediction may result in failed interactions, as it lacks memory.

To address both scenarios, we propose a two-way criterion to select the best instance mask, *i.e.*, 'confidence based' and 'association based.' Specifically, the agent predicts the current time step's interaction mask $m_t = \hat{m}_{\hat{i},c_t}$ with center coordinate $d^*_t = \hat{d}_{\hat{i},c_t}$, where $\hat{i}$ is obtained as:

$$\hat{i} = \begin{cases} \underset{i}{\operatorname{argmax}} \ s_{i,c_t}, & \text{if } c_t \neq c_{t-1}, \\ \underset{i}{\operatorname{argmin}} \ ||d_{i,c_t} - d_{t-1}^*||_2, & \text{if } c_t = c_{t-1}, \end{cases} \tag{4}$$

where $c_t$ is the predicted target object class and $d_{i,c_t}$ the center of a mask instance, $m_{i,c_t}$, of the predicted class.

Figure [3](#page-4-0) illustrates an example in which the agent is trying to open a drawer and put a knife in it, so the same drawer is interacted with over multiple time steps. Table [4](#page-6-2) in Sec. [4.2](#page-6-0) shows an ablation study of our instance association scheme.

<span id="page-4-4"></span><span id="page-4-0"></span>

Figure 3: Qualitative Illustration of Instance Association (IA). The masks of the drawers are colored with their confidences. X denotes the object interacted with at that time step. × denotes the object replaced by IA. Using the single-fold confidence-based approach could make the agent interact with different drawers, since the closed drawer has higher confidence. IA helps the agent interact with the same drawer and place the knife.
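The two-way criterion of Eq. 4 reduces to a small selection rule. A sketch, assuming the mask generator returns per-instance confidence scores $s_{i,c_t}$ and mask centers $d_{i,c_t}$ (the drawer data below is hypothetical):

```python
import numpy as np

def associate_instance(instances, c_t, c_prev, d_prev):
    """Two-way instance selection (sketch of Eq. 4).

    instances : list of dicts with 'score' (s_i) and 'center' (d_i) for
                the masks of the predicted class c_t, assumed to come
                from a pre-trained mask generator
    c_t, c_prev : current / previous predicted target classes
    d_prev      : center d*_{t-1} of the previously selected mask
    Returns the index i_hat of the chosen instance.
    """
    if c_t != c_prev or d_prev is None:
        # first interaction with this class: trust confidence (argmax s)
        return max(range(len(instances)), key=lambda i: instances[i]['score'])
    # repeated interaction: remember the object via spatial proximity (argmin ||d - d*||)
    return min(range(len(instances)),
               key=lambda i: np.linalg.norm(np.asarray(instances[i]['center'])
                                            - np.asarray(d_prev)))

# hypothetical drawers: the closed one scores higher, but the agent
# previously interacted with the open one near (40, 60)
drawers = [{'score': 0.9, 'center': (10, 20)},
           {'score': 0.6, 'center': (42, 58)}]
print(associate_instance(drawers, 'drawer', 'cup', None))         # 0 (confidence based)
print(associate_instance(drawers, 'drawer', 'drawer', (40, 60)))  # 1 (association based)
```

The second call reproduces the Figure 3 situation: confidence alone would switch to the closed drawer, while association keeps the previously opened one.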
The Action Policy Module (APM), depicted by the lower block in Figure [2,](#page-2-0) is responsible for predicting the action sequence. It takes visual features and instructions as input. The attended language features are generated by the language encoder in the APM. As in the IPM, we employ language-guided dynamic filters to generate the attended visual features (Sec. [3.2.1\)](#page-3-1). Although the architecture is similar to the IPM's, the information captured by the dynamic filters differs, due to the different language encodings used by the two modules. The action decoder then takes the attended visual and language features, along with the previous action embedding, and outputs the action decoder hidden state, $h_{t,a}$. Finally, a fully connected layer predicts the next action, $a_t$, as follows:

<span id="page-4-2"></span>
$$u_a = [\hat{v}_{t,a}; \hat{x}_{t,a}; a_{t-1}], \quad h_{t,a} = \text{LSTM}_a(u_a)$$
$$a_t = \underset{k}{\operatorname{argmax}} \, \text{FC}_a([u_a; h_{t,a}]), \quad k \in [1, N_{action}] \tag{5}$$

where $\hat{v}_{t,a}$, $\hat{x}_{t,a}$ and $a_{t-1}$ denote the attended visual features, attended language features, and previous action embedding, respectively. $\text{FC}_a$ takes as input $\hat{v}_{t,a}$, $\hat{x}_{t,a}$, $a_{t-1}$, and $h_{t,a}$ and predicts the next action, $a_t$. Note that $N_{action}$ denotes the number of actions. We keep the same action space as [\[34\]](#page-9-1).

The objective function of the APM is the cross entropy with the action taken by the expert for the visual observation at each time step as ground truth.

<span id="page-4-3"></span>Obstruction Evasion. The agent learns not to encounter obstacles during training, based on the expert ground-truth actions. However, during inference, there are various situations in which the agent gets stranded around immovable objects. To address such unanticipated situations, we propose an 'obstruction evasion' mechanism in the APM to avoid obstacles at inference time.

Figure 4: Obstruction Evasion. Each plot includes the actions with the top-3 probabilities. X denotes the action taken at that time step. AHEAD with × shows that our agent detects an obstruction at time step $t$ by the criterion in Equation [6.](#page-4-1) Therefore, our agent predicts the second-best action, RIGHT, to escape, by removing AHEAD from the action space.
While navigating in the environment, at every time step the agent computes the distance between the visual features at the current time step, $v_t$, and the previous time step, $v_{t-1}$, with a tolerance hyper-parameter as follows:

<span id="page-4-1"></span>
$$d(v_{t-1}, v_t) < \epsilon, \tag{6}$$

where $d(v_{t-1}, v_t) = ||v_{t-1} - v_t||_2^2$. When this inequality holds, the agent removes the action that caused the obstruction from the action space so that it can escape:

$$a_t = \underset{k}{\operatorname{argmax}} \, \text{FC}_a([u_a; h_{t,a}]), \quad k \in [1, N_{action}] - \{k'\} \tag{7}$$

where $k'$ is the index of $a_{t-1}$. $u_a$ and $\text{FC}_a$ are the same as in Equation [5.](#page-4-2) We empirically investigate its effect in Sec. [4.2.](#page-6-0)
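Equations 5-7 together amount to an argmax over action scores with the blocked action masked out. A minimal sketch, in which the action scores, feature vectors, and tolerance are illustrative assumptions:

```python
import numpy as np

def select_action(logits, v_prev, v_t, prev_action_idx, eps=1e-3):
    """Action selection with obstruction evasion (sketch of Eqs. 5-7).

    logits          : FC_a output over the action space
    v_prev, v_t     : visual features at t-1 and t
    prev_action_idx : index k' of the previous action a_{t-1}
    eps             : tolerance hyper-parameter epsilon
    """
    # Eq. 6: if the view barely changed, the previous action was blocked
    blocked = np.sum((v_prev - v_t) ** 2) < eps
    if blocked:
        # Eq. 7: remove the obstructing action from the action space
        logits = logits.copy()
        logits[prev_action_idx] = -np.inf
    return int(np.argmax(logits))

logits = np.array([0.5, 2.0, 0.1])   # hypothetical scores for [RIGHT, AHEAD, LEFT]
v = np.ones(8)
print(select_action(logits, v, v + 1.0, prev_action_idx=1))  # 1: view changed, keep AHEAD
print(select_action(logits, v, v, prev_action_idx=1))        # 0: stuck, evade to RIGHT
```

The second call mirrors Figure 4: the unchanged observation triggers the criterion, AHEAD is dropped, and the second-best action is taken.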
2108.10869/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2021-06-04T14:11:35.565Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.77 Safari/537.36" etag="NZrSsbnVI48IdAz7Qg0z" version="14.7.3" type="device"><diagram id="hJpkFWZ67dnb-OyEMRX_" name="Page-1">7VtZb9s4EP41BnYfGlikzsfaOQpsCxTNLrp9ZCTq2NKiy9CxnV+/pEXqTuo6ulInCGDOaHhovrk0lmdwudrdMLSOP9EAkxmYB7sZvJwBYIC5Jz4kZ684tmtlnIglgeIVjNvkESvmXHE3SYDvK4KcUsKTdZXp0zTFPq/wEGN0WxULKanuukaR2nFeMG59RHBD7GsS8DjjulZJ+gNOoljvbMzVlRXSwmqJ+xgFdFvaC17N4JJRyrPRarfERGpP6yVb6PqJq/nBGE75MRM+fvLWf/mLz8mP+Opm/fjl680jeeeos/G9vmEciPtXJGU8phFNEbkquAtGN2mA5aqGoAqZj5SuFfM/zPlegYk2nApWzFdEXcW7hP8rxvMLYCnymyTV+HJXJvaaSDnbZ7MsTX7TC0qimHag9Lx7zuj3HDuh9UV2y/I+n9SkhoxumI+fUZ+2SMQizJ+Rs3K8hadgusLihGIewwTx5KF6DqQsNsrlClDFQOH6Cxi7bxgPgTEcE2MwMsjHYxzSlKsFgTkIdi/ARE39TBOxM5irtGbpXKGTmmFVl8hsRc2qIZsf43Sw1YEfENmoW5gBm3ClXJnlkNKK/WMj08vibxTTFSpoMYrkJ9wJq50vafpw8+UfMfrDAO6fejFxtmy9TLZhYIX5SFy3ccLx7TrbeSuqgaqpMMqFymkqSK/dZcOEkCUllB1Wh76PrTDMJUtXoA09GOQnesCM493zptI0ATUBwiqUniK3Ra43TIVuXMrzOv937siwM2wlqhm+E0I1FH+u2zeqpldF9Z3htuCqS8xBcPVeS3zuNR6bvcRj26nDbQ0bkM0+nBaclcta1uRc1vq9Q7Ef4Dv3bvBQbI0Nq/07O+swoDacdXRQjbaa2JT/1lJ2Yu7CmbNgM+dSceuoCF3wquqrCkxpimvaVixEkkji4wvdYcFfSM0mPiLv1YVVEgSHdN6GddUaSs9GEHQDVTM32lYDK7MFKtAbVE3tn2MpZLy0X9BeCzlmDW8w8LNpdw8wt0m0omKzMwqstjG5wNpWBdUD6/YMA2vD0eDYcbWXwmY69WrohY5cvF8PNODknkPylDL5hFnq34NfauDX/bPfzKu+9PppR7+T7vF7xtC+JLCWmff+6QQOwRPNjOsjJzhezeKyE3Sb5Z23WNNBrJlem9JwOwXW2TkTAzb0fc8bPImMXsbpGPGWQzrKId6ryiFgginE6yOF2OZ5BZrJdVdbHgEHDDRGOczkQafbl00GfYFBSh0XZ+wXBpqXoQ4azrykjDVMYXLdgAOCHThivRnQ1mW1h+wG6Lbfmx9244fwWD90R/XDZvf1mtDt+fphS1NuWDcc9QXbE93w5Lp7AD+0X1fdbdbeyZ5A4Q2afWJRQB8iwtmGCdBSN3uDxolR3g/7ueuempyPd/lhPLnpeRa4sCpGYBjmxXHflTYWA7XHsMYLwVlcO+FLV0EWvxDJxIsf2sCr/wE=</diagram></mxfile>
2108.10869/main_diagram/main_diagram.pdf
ADDED
Binary file (15.8 kB).
2108.10869/paper_text/intro_method.md
ADDED
@@ -0,0 +1,11 @@
# Method

<figure id="fig:encoder" data-latex-placement="h">
<embed src="figures/ecdr.pdf" style="width:50.0%" />
<figcaption>Architecture of the feature and context encoders. Both extract features at 1/8 the input image resolution using a set of 6 basic residual blocks. Instance normalization is used in the feature encoder; no normalization is used in the context encoder. The feature encoder outputs features with dimension D=128, while the context encoder outputs features with dimension D=256.</figcaption>
</figure>

<figure id="fig:encoder" data-latex-placement="h">
<embed src="figures/uparch.pdf" style="width:80.0%" />
<figcaption>Architecture of the update operator. During each iteration, context, correlation, and flow features are injected into the GRU. The revision (r) and confidence weights (w) are predicted from the updated hidden state.</figcaption>
</figure>
2111.00674/main_diagram/main_diagram.drawio
ADDED
The diff for this file is too large to render.
2111.00674/paper_text/intro_method.md
ADDED
@@ -0,0 +1,66 @@
# Introduction

Owing to the widespread use of deep learning, object detection methods have developed rapidly. Large-scale deep models have achieved overwhelming success, but their huge computational complexity and storage requirements limit their deployment in real-time applications such as video surveillance and autonomous vehicles. Therefore, finding a better balance between accuracy and efficiency has become a key issue. Knowledge distillation [@KD] is a promising solution to this problem: it is a model compression and acceleration method that can effectively improve the performance of small models under the guidance of a teacher model.

<figure id="fn_fp" data-latex-placement="t">
<embed src="picture/fn_fp.pdf" style="width:90.0%" />
<figcaption>Visualization of feature masks used in distillation of the bounding-box-based method and ours.</figcaption>
</figure>
For object detection, distilling detectors by imitating all features is inefficient, because object-irrelevant areas inevitably introduce a lot of noise. Therefore, how to choose the important features for distillation remains an unsolved issue. Most previous distillation-based detection methods imitate features that overlap with bounding boxes (*i.e.*, ground-truth objects), on the assumption that the foreground features selected from bounding boxes are the important ones. However, these methods suffer from two limitations. First, foreground features selected from bounding boxes only cover the categories in the dataset and ignore objects of categories outside the dataset, which leads to the omission of some important features, as shown in Figure [1](#fn_fp){reference-type="ref" reference="fn_fp"} (b)-bottom. For example, the mannequin category is not included in the COCO dataset, but the person category is. Since mannequins are visually similar to persons, the features of mannequins contain many useful characteristics of persons that help improve the detector's ability to detect persons during distillation. Second, using only the prior knowledge of bounding boxes to select features for distillation ignores the defects of the teacher detector. Imitating features that the teacher detector mistakenly regards as background will mislead and damage the distillation, as shown in Figure [1](#fn_fp){reference-type="ref" reference="fn_fp"} (b)-top.
To handle the above problems, we propose a novel Feature-Richness Score (FRS) method to choose important features that benefit distillation. Feature richness refers to the amount of object information contained in the features and can be represented by the probability that these features belong to objects. Distilling features with high feature richness, instead of features inside bounding boxes, effectively addresses the above two limitations. First, features of objects whose categories are not included in the dataset have high feature richness. Thus, feature richness can retrieve important features outside the bounding boxes, guiding the student detector to learn the generalized detection ability of the teacher. For example, features of mannequins with high feature richness can help the student detector improve its generalized ability to detect persons, as shown in Figure [1](#fn_fp){reference-type="ref" reference="fn_fp"} (c)-bottom. Second, features inside bounding boxes that are misclassified by the teacher detector have low feature richness. Thus, feature richness can remove the teacher detector's misleading features inside bounding boxes, as shown in Figure [1](#fn_fp){reference-type="ref" reference="fn_fp"} (c)-top. Consequently, the importance of features is strongly correlated with feature richness, making it an appropriate criterion for choosing features for distillation. Since the classification score aggregated over all categories approximates the probability that the features belong to objects, we use the aggregated classification score as the criterion for feature richness. In practice, we use the aggregated classification score at each FPN level of the teacher detector as the feature mask, which serves as a feature-richness map to guide the student detector in both the FPN features and the subsequent classification head.
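The feature-richness masking described above can be sketched per FPN level as follows. This is a hedged NumPy sketch: aggregating by a per-pixel max over sigmoid class scores and a mask-weighted L2 imitation loss are assumptions of the sketch, not necessarily the paper's exact formulation, and all shapes are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def feature_richness_mask(cls_logits):
    """Aggregate the teacher's per-pixel classification scores over all
    categories into a feature-richness mask for one FPN level.

    cls_logits : (K, H, W) teacher classification logits for K categories.
    """
    # per-pixel max over categories approximates P(pixel belongs to an object)
    return sigmoid(cls_logits).max(axis=0)        # (H, W), values in (0, 1)

def masked_imitation_loss(f_student, f_teacher, mask):
    """Mask-weighted L2 feature imitation between student and teacher."""
    diff = (f_student - f_teacher) ** 2           # (C, H, W)
    return float((mask[None] * diff).sum() / (mask.sum() * diff.shape[0] + 1e-6))

rng = np.random.default_rng(0)
logits = rng.standard_normal((80, 16, 16))        # hypothetical 80-class FPN level
mask = feature_richness_mask(logits)
loss = masked_imitation_loss(rng.standard_normal((256, 16, 16)),
                             rng.standard_normal((256, 16, 16)), mask)
print(mask.shape, loss > 0)
```

Because the mask is computed from the teacher's own scores rather than ground-truth boxes, it naturally upweights box-free objects (the mannequin case) and downweights regions the teacher treats as background.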
Compared with previous methods that use prior bounding-box information, our method uses the aggregated classification score of the feature map as the feature-richness mask, which depends on both the objects and the teacher detector. Our method offers the following advantages. First, the mask is pixel-wise and fine-grained, so we can distill the student detector in a more refined way, promoting effective features and suppressing the influence of the teacher detector's ineffective features. Besides, our method is well suited to detectors with an FPN module, because it can generate a corresponding feature-richness mask for each FPN layer of the student detector based on the features extracted from the corresponding FPN layer of the teacher detector. Finally, our method is a plug-and-play block for any architecture. We implement our approach in multiple popular object detection frameworks, including one-stage, two-stage, and anchor-free methods.
To demonstrate the advantages of the proposed FRS, we evaluate it on the COCO dataset with various frameworks: Faster R-CNN [@FasterRCNN], RetinaNet [@retinanet], GFL [@gfl], and FCOS [@fcos]. With FRS, we outperform state-of-the-art methods on all distillation-based detection frameworks, showing that the proposed FRS effectively chooses the important features extracted from the teacher.
# Method
We verify the effectiveness of the proposed FRS on multiple detection frameworks, including the anchor-based one-stage detectors RetinaNet [@retinanet] and GFL [@gfl], the anchor-free one-stage detector FCOS [@fcos], and the two-stage framework Faster R-CNN [@FasterRCNN], on the COCO dataset [@coco], as shown in Table [1](#Base){reference-type="ref" reference="Base"}. ResNet-50 is chosen as the student backbone and ResNet-101 as the teacher. The results clearly show that our method obtains significant performance gains from the teacher and reaches results comparable to, or even better than, the teacher detectors. RetinaNet with ResNet-50 achieves 39.7% mAP, surpassing the teacher detector by 0.8% mAP. The ResNet-50 based GFL (1x) surpasses its baseline by 3.4% mAP. The ResNet-50 based Faster R-CNN gains 2.4% mAP. FCOS with ResNet-50 achieves 40.9%, also exceeding its teacher. These results clearly indicate the effectiveness and generality of our method in both one-stage and two-stage detectors.
| 21 |
+
|
| 22 |
+

::: {#Base}
| model                   | mode | mAP  | AP50 | AP75 | AP_S | AP_M | AP_L |
|-------------------------|------|------|------|------|------|------|------|
| Retina-Res101 (teacher) | 2x   | 38.9 | 58.0 | 41.5 | 21.0 | 42.8 | 52.4 |
| Retina-Res50 (student)  | 2x   | 37.4 | 56.7 | 39.6 | 20.0 | 40.7 | 49.7 |
| ours                    | 2x   | 39.7 | 58.6 | 42.4 | 21.8 | 43.5 | 52.4 |
| gain                    |      | +2.3 | +1.9 | +2.8 | +1.8 | +2.8 | +2.7 |
| GFL-Res101 (teacher)    | 2x   | 44.9 | 63.1 | 49.0 | 28.0 | 49.1 | 57.2 |
| GFL-Res50 (student)     | 1x   | 40.2 | 58.4 | 43.3 | 23.3 | 44.0 | 52.2 |
| ours                    | 1x   | 43.6 | 61.9 | 47.5 | 25.9 | 47.7 | 56.4 |
| gain                    |      | +3.4 | +3.5 | +4.2 | +2.6 | +3.7 | +4.2 |
| GFL-Res101 (teacher)    | 2x   | 44.9 | 63.1 | 49.0 | 28.0 | 49.1 | 57.2 |
| GFL-Res50 (student)     | 2x   | 42.9 | 61.2 | 46.5 | 27.3 | 46.9 | 53.3 |
| ours                    | 2x   | 44.7 | 63.0 | 48.4 | 28.7 | 49.0 | 56.7 |
| gain                    |      | +1.8 | +1.8 | +1.9 | +1.4 | +2.1 | +3.4 |
| Faster-Res101 (teacher) | 1x   | 39.4 | 60.1 | 43.1 | 22.4 | 43.7 | 51.1 |
| Faster-Res50 (student)  | 1x   | 37.4 | 58.1 | 40.4 | 21.2 | 41.0 | 48.1 |
| ours                    | 1x   | 39.5 | 60.1 | 43.3 | 22.3 | 43.6 | 51.7 |
| gain                    |      | +2.1 | +2.0 | +2.9 | +1.1 | +2.6 | +3.6 |
| FCOS-Res101 (teacher)   | 2x   | 40.8 | 60.0 | 44.0 | 24.2 | 44.3 | 52.4 |
| FCOS-Res50 (student)    | 2x   | 38.5 | 57.7 | 41.0 | 21.9 | 42.8 | 48.6 |
| ours                    | 2x   | 40.9 | 60.3 | 43.6 | 25.7 | 45.2 | 51.2 |
| gain                    |      | +2.4 | +2.6 | +2.6 | +3.8 | +2.4 | +2.6 |

: Results of the proposed method with different detection frameworks. We use the 2x learning schedule (24 epochs) or the 1x learning schedule (12 epochs) on the COCO dataset.
:::

::: {#sota}
| model                    | mode | mAP      | AP50     | AP75     | APS  | APM      | APL      |
|--------------------------|------|----------|----------|----------|------|----------|----------|
| Retina-ResX101 (teacher) | 2x   | 40.8     | 60.5     | 43.7     | 22.9 | 44.5     | 54.6     |
| Retina-Res50 (student)   | 2x   | 37.4     | 56.7     | 39.6     | 20.0 | 40.7     | 49.7     |
| [@CO]                    | 2x   | 37.8     | 58.3     | 41.1     | 21.6 | 41.2     | 48.3     |
| [@AED]                   | 2x   | 39.6     | 58.8     | 42.1     | 22.7 | 43.3     | 52.5     |
| ours                     | 2x   | **40.1** | **59.5** | **42.5** | 21.9 | **43.7** | **54.3** |
| Retina-Res101 (teacher)  | 2x   | 38.1     | 58.3     | 40.9     | 21.2 | 42.3     | 51.1     |
| Retina-Res50 (student)   | 2x   | 36.2     | 55.8     | 38.8     | 20.7 | 39.5     | 48.7     |
| FitNet [@Fitnet]         | 2x   | 37.4     | 57.1     | 40.0     | 20.8 | 40.8     | 50.9     |
| General Instance [@GI]   | 2x   | 39.1     | 59.0     | 42.3     | 22.8 | 43.1     | 52.3     |
| ours                     | 2x   | **39.3** | 58.8     | 42.0     | 21.5 | **43.3** | **52.6** |

: Comparison with previous distillation work under different teacher detectors.
:::
Table [2](#sota){reference-type="ref" reference="sota"} compares our results with state-of-the-art distillation methods on the COCO [@coco] benchmark, including bounding-box-based methods [@Fine-Grained]. As shown in Table [2](#sota){reference-type="ref" reference="sota"}, the performance of the student detector is greatly improved. For example, under the guidance of a ResNeXt101 teacher detector, the ResNet50-based RetinaNet student achieves 40.1% mAP, surpassing [@AED] by 0.5% mAP. With a ResNet101 teacher detector, the ResNet50-based RetinaNet reaches 39.3% mAP, which also exceeds the state-of-the-art method General Instance [@GI]. These results demonstrate that our method delivers a solid performance improvement over previous SOTA methods.
2112.01044/main_diagram/main_diagram.drawio
ADDED
The diff for this file is too large to render.
2112.01044/paper_text/intro_method.md
ADDED
@@ -0,0 +1,123 @@
# Introduction

In recent years, sports analytics has drawn significant attention due to its enormous market; it focuses on collecting sports data and applying advanced techniques to mine useful information from the data. In Major League Baseball, for example, teams started shifting their defense by moving infielders to specific positions according to the hitting pattern of the opposing batter, and such shifts rose dramatically from 4.62% in 2012 to 21.17% in 2019 [@StatsPerform_baseball]. Furthermore, about 100 sports-related organizations were investigating new technologies for delivering engaging streaming content to fans in 2021 [@StatsPerform_fan]. Generally, the target audience of sports analytics comprises coaching-oriented groups and community-oriented groups. Coaching-oriented groups aim at improving player performance, *e.g.*, tactic investigation [@DBLP:conf/kdd/DecroosHD18; @DBLP:conf/atal/BealCNR20] and action valuing [@jayanth2018team; @DBLP:conf/kdd/SiciliaPG19], while community-oriented groups try to boost spectator engagement, *e.g.*, highlight prediction [@DBLP:conf/aaai/DecroosDHD17], play retrieval [@DBLP:conf/kdd/WangLCJ19] and autonomous broadcast production [@Giancola_2021_CVPR]. There are also applications that serve both coaching-oriented and community-oriented groups [@DBLP:conf/kdd/DecroosBHD19; @DBLP:journals/corr/abs-2106-01786], *e.g.*, player performance analysis and relating a player's performance to their market value.

{#fig:example width="92%"}

In this paper, we focus on turn-based sports and use badminton as the demonstration example. Related work on badminton mainly focuses on quantifying stroke performance [@DBLP:journals/jaihc/SharmaMKK21; @DBLP:journals/corr/abs-2109-06431] or detecting stroke-related information from videos [@DBLP:conf/mir/ChuS17; @DBLP:conf/apnoms/HsuWLCJIPWTHC19; @Wang2020badminton; @Yoshikawa2021]. However, another application has not yet been addressed by previous work: forecasting future strokes, including shot types and locations, given the past stroke sequence. Predicting future strokes from past ones is essential for coaching and strategy design, since it simulates player tactics and can reveal which shot types a player tends to return and where, supporting decision-making. In addition, stroke forecasting can also benefit the community for storytelling by assessing return probability distributions during matches.

Figure [1](#fig:example){reference-type="ref" reference="fig:example"} illustrates an example of stroke forecasting. Suppose the past five strokes with their shot types and destination locations are known, and the fifth stroke is a smash from the left side. The player on the right side then has several return options, such as defensively returning to the middle of the left side, or returning close to the net to mobilize the opponent from the back court to the front court. To the best of our knowledge, no existing method can predict the next strokes.

To tackle this challenging problem, stroke forecasting can be formulated as a sequence prediction task. One possible solution is to use statistical methods such as n-gram models from natural language processing to estimate occurrence probabilities for predicting the next strokes. Nevertheless, the occurrence counts in n-gram models become sparse as the window size increases. To address this issue, the sequence-to-sequence model [@DBLP:conf/nips/SutskeverVL14] can be applied to encode the input sequence and then decode the output sequence from the encoding vectors. However, applying a sequence-to-sequence model directly to stroke forecasting faces three challenges. 1) *Mixed sequence*. One characteristic of badminton is that two players return strokes alternately to form a rally. Therefore, stroke forecasting is a turn-based sequence prediction task rather than a conventional sequence prediction task with a single target throughout the sequence. 2) *Multiple outputs.* Unlike general sequence tasks that predict only one output, the stroke forecasting task has multiple outputs (shot types and area coordinates) at each timestamp. 3) *Player dependence*. Returned strokes depend on the overall styles of the players and the current situation in the rally. Furthermore, the relative importance of these factors also varies across different players and different positions. It is challenging to disentangle player features directly from the rally sequences.
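The sparsity issue of count-based models can be made concrete with a toy example. The shot-type vocabulary and rally below are made up for illustration; the point is that once the context window grows, almost every context is observed only once.

```python
from collections import Counter, defaultdict

# A toy rally of shot types (hypothetical vocabulary).
rally = ["serve", "lob", "smash", "lob", "drop", "lob", "smash", "net"]

def ngram_counts(seq, n):
    """Count occurrences of `next shot` for each (n-1)-shot context."""
    ctx_counts = defaultdict(Counter)
    for i in range(len(seq) - n + 1):
        ctx = tuple(seq[i:i + n - 1])
        ctx_counts[ctx][seq[i + n - 1]] += 1
    return ctx_counts

bi = ngram_counts(rally, 2)    # short contexts recur, counts are usable
five = ngram_counts(rally, 5)  # every 4-shot context is seen exactly once
```

With `n=2` the context `("lob",)` is followed by "smash" twice, so a probability estimate is meaningful; with `n=5` every context occurs once, so the estimated distribution degenerates, which motivates learned sequence models instead.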

To address the aforementioned challenges, we propose a novel Position-aware Fusion of Rally Progress and Player Styles framework (ShuttleNet), which consists of two encoder-decoder extractors for modeling rally progress and retrieving player styles from the turn-based sequence, and a fusion network that accounts for the dependencies between rally progress and player styles at each stroke. To predict multiple outputs at each step, two task-specific predictors are adopted at the end for predicting shot types and area coordinates. Specifically, the first encoder-decoder extractor, named Transformer-Based Rally Extractor (TRE), is designed to capture the progress of the rally. The second encoder-decoder extractor, named Transformer-Based Player Extractor (TPE), separates the information of the two players to generate per-player contexts. Finally, a Position-aware Gated Fusion Network (PGFN) is adopted to fuse the rally contexts and the contexts of the two players by incorporating information weights and position weights. In this manner, we can learn different contributions at each stroke to predict future shot types and area coordinates. In summary, our contributions are as follows:

- A novel framework named Position-aware Fusion of Rally Progress and Player Styles (ShuttleNet) is proposed to predict future strokes given past observed strokes. To the best of our knowledge, this is the first work on stroke forecasting in sports, and it can be applied to turn-based sports analytics.

- The proposed framework first generates rally contexts and player contexts by leveraging two encoder-decoder extractors, and then fuses these contexts based on information weights and position weights. Furthermore, we introduce an attention mechanism to better integrate the information of shot types and locations.

- Extensive experiments and ablation studies on a real-world badminton dataset are conducted to demonstrate the effectiveness of the proposed ShuttleNet framework.

# Method

Let $R=\{S_r, P_r\}_{r=1}^{|R|}$ denote historical rallies of badminton matches, where the $r$-th rally is composed of a stroke sequence with type-area pairs $S_r=(\langle s_1, a_1\rangle,\cdots,\langle s_{|S_r|}, a_{|S_r|}\rangle)$ and a player sequence $P_r=(p_1,\cdots,p_{|S_r|})$. At the $i$-th stroke, $s_i$ represents the shot type, $a_i=\langle x_i, y_i\rangle \in \mathbb{R}^{2}$ are the coordinates of the shuttle destination, and $p_i$ is the player who hits the shuttle. Throughout this paper, we denote Player A as the serving player of each rally and Player B as the other player. For instance, given a singles rally between Player A and Player B, $P_r$ may become $(A, B, \cdots, A, B)$. We formulate the stroke forecasting problem as follows. For each rally, given the observed $\tau$ strokes $(\langle s_i, a_i\rangle)_{i=1}^{\tau}$ with players $(p_i)_{i=1}^{\tau}$, the goal is to predict the future strokes, including shot types and area coordinates, for the next $n$ steps, i.e., $(\langle s_i, a_i\rangle)_{i={\tau+1}}^{\tau+n}$.
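The formulation above maps naturally onto simple records. The field names and values below are illustrative only, not the dataset's actual schema:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Stroke:
    shot_type: int              # index into the N_s shot types
    area: Tuple[float, float]   # (x, y) shuttle destination
    player: str                 # "A" serves, "B" is the opponent

# Observed prefix of one rally (tau = 4 strokes); values are made up.
observed: List[Stroke] = [
    Stroke(0, (0.2, 0.9), "A"),
    Stroke(3, (0.7, 0.1), "B"),
    Stroke(5, (0.5, 0.8), "A"),
    Stroke(2, (0.1, 0.2), "B"),
]

# The turn-based structure: players strictly alternate, starting with A.
players_alternate = all(
    s.player == ("A" if i % 2 == 0 else "B")
    for i, s in enumerate(observed)
)
```

Given such a prefix and the (known) target players of the next $n$ turns, the model must output $n$ (shot type, area) pairs.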

Figure [2](#fig:framework-overview){reference-type="ref" reference="fig:framework-overview"} illustrates the overview of the proposed framework. The input of the encoder side is the sequence of observed $\tau$ strokes $S^{e}=(\langle s_i, a_i\rangle)_{i=1}^{\tau}$ and players $P^{e}=(p_i)_{i=1}^{\tau}$, and the decoder auto-regressively predicts the sequence of the future $n$ strokes $S^{d}=(\langle s_i, a_i\rangle)_{i={\tau+1}}^{\tau+n}$ from the encoding contexts and the target players $P^{d}=(p_i)_{i={\tau+1}}^{\tau+n}$. Each stroke, comprising a shot type and area coordinates, is embedded together with player information by the embedding layer as a personalized stroke representation. Each encoder-decoder extractor is based on the Transformer [@DBLP:conf/nips/VaswaniSPUJGKP17]. We replace the first multi-head self-attention layer in both the encoder and decoder with the proposed type-area attention layer to better integrate the information of shot types and areas. Moreover, the rally contexts are generated by the Transformer-based rally extractor, and the contexts of the two players are obtained by the Transformer-based player extractor. These contexts are fused by the position-aware gated fusion network using information weights and position weights to predict future shot types and area coordinates.

Each stroke contains a shot type and area coordinates together with the player who hits the stroke. The output of the embedding layer at the $i$-th stroke, $e_i$, is calculated as follows: $$\begin{equation}
e_i = \langle e_i^s, e_i^a\rangle = \langle s_i' + p_i', a_i' + p_i'\rangle,
\label{eq:embedding_layer}
\end{equation}$$ where $s_i'$ is a type embedding projected from $s_i$ using $M^s \in \mathbb{R}^{N_s \times d}$, with $N_s$ the number of shot types; $p_i'$ is a player embedding projected from $p_i$ using $M^p \in \mathbb{R}^{N_p \times d}$, with $N_p$ the number of players; and $a_i'$ is an area embedding projected from $a_i$ using $M^a \in \mathbb{R}^{2 \times d}$ with the ReLU activation function. To make use of player information in both shot types and areas, the player embedding is added to both the type embedding and the area embedding. The parameters of the embedding layers on the encoder and decoder sides are shared, similar to [@DBLP:conf/eacl/PressW17], for size reduction.
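The embedding layer can be sketched as below. This is a NumPy toy under assumed dimensions ($d=8$, $N_s=10$, $N_p=4$) with random tables, not the trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
d, N_s, N_p = 8, 10, 4           # assumed embedding size, #shot types, #players

M_s = rng.normal(size=(N_s, d))  # shot-type embedding table M^s
M_p = rng.normal(size=(N_p, d))  # player embedding table M^p
M_a = rng.normal(size=(2, d))    # area projection M^a for (x, y)

def embed_stroke(s_i, a_i, p_i):
    """e_i = <s' + p', a' + p'> as in the embedding-layer equation."""
    s_emb = M_s[s_i]                                # type embedding s'
    p_emb = M_p[p_i]                                # player embedding p'
    a_emb = np.maximum(np.asarray(a_i) @ M_a, 0.0)  # ReLU(a_i M^a)
    return s_emb + p_emb, a_emb + p_emb

e_s, e_a = embed_stroke(s_i=3, a_i=(0.4, 0.7), p_i=0)
```

Adding the same player embedding to both components is what makes the downstream representation "personalized" to the hitter.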

TRE reflects the current situation in the rally, which is a critical condition for returning strokes. For example, if a player defensively returns a stroke such as a lob to the back court, the player may become passive, and the other player can seize the chance to return an aggressive stroke.

To capture the progress in the rally, we first add positional encodings [@DBLP:conf/nips/VaswaniSPUJGKP17] to the embeddings by $$\begin{equation}
\begin{aligned}
E^L &= (\langle \tilde{e}^s_1, \tilde{e}^a_1\rangle, \langle \tilde{e}^s_2, \tilde{e}^a_2\rangle, \cdots)\\
&= (\langle e_1^s+pe_1, e_1^a+pe_1\rangle, \langle e_2^s+pe_2, e_2^a+pe_2\rangle, \cdots),
\label{rally_position}
\end{aligned}
\end{equation}$$ where $pe_i$ is the positional encoding for the $i$-th stroke.
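Assuming the standard sinusoidal scheme of Vaswani et al. (2017) for $pe_i$ (the paper cites it but the exact variant is not restated here), a minimal sketch is:

```python
import math

def positional_encoding(pos, d):
    """Sinusoidal pe_i: interleaved sin/cos at geometrically spaced frequencies."""
    pe = []
    for k in range(d // 2):
        angle = pos / (10000 ** (2 * k / d))
        pe.extend([math.sin(angle), math.cos(angle)])
    return pe

pe_0 = positional_encoding(0, 8)  # encoding of the first stroke
pe_3 = positional_encoding(3, 8)  # encoding of the fourth stroke
```

Each stroke position gets a distinct vector, and the same $pe_i$ is added to both the type and the area embedding of stroke $i$.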

Afterward, we adopt a modified Transformer framework by replacing the first multi-head self-attention layer in the encoder and decoder with the proposed multi-head type-area attention layer. Specifically, we take $E^L$ as the input of the Transformer-based rally extractor and generate the rally contexts $H^L=(h^L_{\tau+1}, h^L_{\tau+2},\cdots)$, where the context of the $i$-th stroke, $h^L_i \in \mathbb{R}^{d}$, is a $d$-dimensional vector.

Since each stroke has two components (shot type and area), the self-attention mechanism can only be applied in an early-fusion manner (*e.g.*, concatenation), which forces the model to attend to shot types and areas at the same positions. However, playing strategies in badminton should be considered from different aspects. For example, the choice of the current shot type may depend on the opponent's last shot type, while the returning location may depend on locations previously returned by the same player, since the opponent may be weak on a specific side.

Therefore, inspired by disentangled attention [@DBLP:conf/iclr/HeLGC21], we propose an attention mechanism that separately characterizes the importance of shot types and areas and then aggregates the corresponding scores into the final attention scores. Here, we illustrate the computation of attention contexts on the encoder side; the decoder follows a similar process. Given the input sequence with positional type embeddings $E^s=(\tilde{e}_1^s,\cdots,\tilde{e}_{\tau}^s)$ and positional area embeddings $E^a=(\tilde{e}_1^a,\cdots,\tilde{e}_{\tau}^a)$, the multi-head type-area attention is derived as follows: $$\begin{equation}
Q_s=E^sW^{Q_s},K_s=E^sW^{K_s},V_s=E^sW^{V_s},
\label{shot_qkv}
\end{equation}$$ $$\begin{equation}
Q_a=E^aW^{Q_a},K_a=E^aW^{K_a},V_a=E^aW^{V_a},
\label{area_qkv}
\end{equation}$$ $$\begin{equation}
A = Q_aK_a^{T} + Q_aK_s^{T} + Q_sK_a^{T} + Q_sK_s^{T},
\label{attention_score}
\end{equation}$$ $$\begin{equation}
TAA(E^s, E^a) = softmax(\frac{A}{\sqrt{4d}})(V_a+V_s),
\label{attention_output}
\end{equation}$$ $$\begin{equation}
MultiHead(E^s, E^a) = Concat(TAA_1,\cdots,TAA_h)W^o,
\label{multi_head}
\end{equation}$$ where $TAA$ denotes the single-head type-area attention function; $Q_s$, $K_s$, and $V_s$ are the queries, keys and values of $E^s$ obtained with projection matrices $W^{Q_s}, W^{K_s}, W^{V_s} \in \mathbb{R}^{d \times d}$, respectively; $Q_a$, $K_a$, and $V_a$ are the queries, keys and values of $E^a$ obtained with projection matrices $W^{Q_a}, W^{K_a}, W^{V_a} \in \mathbb{R}^{d \times d}$, respectively; $h$ is the number of heads; and $W^o \in \mathbb{R}^{hd \times d}$ is a learnable matrix.
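A single head of the type-area attention can be sketched directly from these equations. The sequence length, dimension, and random weights below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
tau, d = 5, 8
E_s = rng.normal(size=(tau, d))   # positional type embeddings E^s
E_a = rng.normal(size=(tau, d))   # positional area embeddings E^a

# One d x d projection per role, as in the text.
W = {k: rng.normal(size=(d, d))
     for k in ["Qs", "Ks", "Vs", "Qa", "Ka", "Va"]}

def softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

Qs, Ks, Vs = E_s @ W["Qs"], E_s @ W["Ks"], E_s @ W["Vs"]
Qa, Ka, Va = E_a @ W["Qa"], E_a @ W["Ka"], E_a @ W["Va"]

# A = Qa Ka^T + Qa Ks^T + Qs Ka^T + Qs Ks^T, scaled by sqrt(4d)
# because four score terms are summed.
A = Qa @ Ka.T + Qa @ Ks.T + Qs @ Ka.T + Qs @ Ks.T
out = softmax(A / np.sqrt(4 * d)) @ (Va + Vs)
```

The four cross terms are what let type queries attend to area keys (and vice versa) at different positions, which plain early fusion cannot do.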

In addition to the information of the rally, returning strokes also requires considering the overall style of each player; that is, a player should minimize the opponent's advantages and maximize their own. To this end, we design an extractor that splits the sequence into two subsequences by player and then produces the contexts of each player.

First, the outputs of the embedding layer are split alternately based on the players as follows: $$\begin{equation}
E^A = (e_1, e_3, \cdots), \quad
E^B = (e_2, e_4, \cdots),
\label{split_players}
\end{equation}$$ where $E^A$ is the sequence of Player A and $E^B$ is the sequence of Player B. TPE adopts two encoder-decoder architectures, each identical to the architecture in TRE, to process the two player-specific sequences. Specifically, $E^A$ and $E^B$ are fed into TPE to generate the corresponding contexts.

It is worth noting that the positional encodings are added separately to the two subsequences to specify the order of each player's strokes, rather than using positions in the entire sequence as in Equation [\[rally_position\]](#rally_position){reference-type="ref" reference="rally_position"}. Further, the parameters of the two architectures are shared, not only to reduce the number of parameters but also to prevent player information from concentrating on one side, which would cause an imbalance between the two players.

Since the two subsequences are shorter than the original sequence, sequence alignment is applied after generating the two player contexts to align them with the length of the rally sequence. The alignment principle is to copy each stroke's context to the next position, at which the opponent returns. The sequence alignment is derived as follows: $$\begin{equation}
H^A = (h^A_{\tau+1}, h^A_{\tau+1}, h^A_{\tau+2}, h^A_{\tau+2},\cdots),
\label{sequence_alignment_A}
\end{equation}$$ $$\begin{equation}
H^B = (0, h^B_{\tau+1}, h^B_{\tau+1}, h^B_{\tau+2}, h^B_{\tau+2},\cdots),
\label{sequence_alignment_B}
\end{equation}$$ where the $i$-th stroke $h^A_i \in \mathbb{R}^{d}$ denotes the output of the decoder for Player A, and the $i$-th stroke $h^B_i \in \mathbb{R}^{d}$ is the output of the other decoder for Player B. A zero vector is padded at the first stroke of $H^B$ since the first stroke is always served by Player A.
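The alignment step amounts to duplicating each player context and offsetting Player B by one position. A sketch (with strings standing in for context vectors; in practice both outputs would be truncated to the rally length):

```python
def align(H_A, H_B):
    """Duplicate each per-player context as in the alignment equations;
    pad H_B with a zero entry because Player A always serves first."""
    out_A = [h for h in H_A for _ in range(2)]        # h1, h1, h2, h2, ...
    out_B = [0] + [h for h in H_B for _ in range(2)]  # 0, h1, h1, h2, h2, ...
    return out_A, out_B

H_A, H_B = align(["a1", "a2"], ["b1", "b2"])
```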

When returning a stroke, players weigh various important information about both players and the current rally, and the importance of each type of information varies from stroke to stroke. To take this into account, we propose a position-aware gated fusion network based on gated multi-modal units [@DBLP:journals/nca/OvalleSMG20] to fuse the rally contexts and the contexts of the two players.

Given the contexts of Player A and Player B ($h^A_i$ and $h^B_i$) and of the rally ($h^L_i$) at the $i$-th stroke, the PGFN first projects them to hidden fusion vectors: $$\begin{equation}
\tilde{h}_i^A = \delta_t(h^A_iW^A), \quad
\tilde{h}_i^B = \delta_t(h^B_iW^B), \quad
\tilde{h}_i^L = \delta_t(h^L_iW^L),
\label{hidden}
\end{equation}$$ where $\delta_t(\cdot)$ is the tanh activation function, and $W^A, W^B, W^L \in \mathbb{R}^{d \times d}$ are learnable matrices. The information weights representing the importance of the three contexts are calculated as follows: $$\begin{equation}
\alpha^A = \delta_s( [\tilde{h}_i^A,\tilde{h}_i^B,\tilde{h}_i^L]\tilde{W}^A),
\label{information_weights_A}
\end{equation}$$ $$\begin{equation}
\alpha^B = \delta_s( [\tilde{h}_i^A,\tilde{h}_i^B,\tilde{h}_i^L]\tilde{W}^B),
\label{information_weights_B}
\end{equation}$$ $$\begin{equation}
\alpha^L = \delta_s( [\tilde{h}_i^A,\tilde{h}_i^B,\tilde{h}_i^L]\tilde{W}^L),
\label{information_weights_L}
\end{equation}$$ where $\delta_s(\cdot)$ is the sigmoid activation function, $[\cdot,\cdot,\cdot]$ denotes the concatenation operator, and $\tilde{W}^A, \tilde{W}^B, \tilde{W}^L \in \mathbb{R}^{3d \times d}$ are learnable matrices.

Finally, the $i$-th fusion output is calculated as: $$\begin{equation}
z_i=\delta_s(\beta_i^A \otimes \alpha^A \otimes \tilde{h}_i^A+\beta_i^B \otimes \alpha^B \otimes \tilde{h}_i^B+\beta_i^L \otimes \alpha^L \otimes \tilde{h}_i^L),
\label{fusion_output}
\end{equation}$$ where $\otimes$ denotes element-wise multiplication, and $\beta_i^A, \beta_i^B, \beta_i^L \in \mathbb{R}^{d}$ are learnable position weights that control how much information passes at each stroke.
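For one stroke, the whole PGFN reduces to a few matrix products and gates. A NumPy sketch with randomly initialized (untrained) parameters and an assumed $d=8$:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
h_A, h_B, h_L = rng.normal(size=(3, d))            # contexts at stroke i
W_A, W_B, W_L = rng.normal(size=(3, d, d))         # hidden projections W^A..W^L
Wt_A, Wt_B, Wt_L = rng.normal(size=(3, 3 * d, d))  # info-weight matrices (3d x d)
beta_A, beta_B, beta_L = rng.normal(size=(3, d))   # position weights beta

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Hidden fusion vectors (tanh projections).
hA, hB, hL = np.tanh(h_A @ W_A), np.tanh(h_B @ W_B), np.tanh(h_L @ W_L)

# Information weights from the concatenated hidden vectors.
cat = np.concatenate([hA, hB, hL])
aA, aB, aL = sigmoid(cat @ Wt_A), sigmoid(cat @ Wt_B), sigmoid(cat @ Wt_L)

# z_i = sigma(beta^A*a^A*h~^A + beta^B*a^B*h~^B + beta^L*a^L*h~^L)
z = sigmoid(beta_A * aA * hA + beta_B * aB * hB + beta_L * aL * hL)
```

The information weights depend on all three contexts jointly, while the position weights are free parameters per dimension, so the gate can learn both content-dependent and position-dependent mixing.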

To predict the shot type and area coordinates at the $i$-th stroke, we first assume that the area coordinates follow a bivariate Gaussian distribution, since there is uncertainty and potential multi-modality when returning strokes. For instance, when the opponent returns a stroke to the back court, the player can return to the opponent's backhand side to force a backhand return, or play close to the net to force a defensive return. Moreover, a predictive distribution makes it possible to investigate the locations of frequent and less frequent stroke returns, giving a better understanding of the players' behaviors. Specifically, area coordinates are sampled from a bivariate Gaussian distribution with mean $\mu_{i}=\langle \mu_x, \mu_y\rangle_{i}$, standard deviation $\sigma_{i}=\langle \sigma_x, \sigma_y\rangle_{i}$, and correlation coefficient $\rho_{i}$.

Hard parameter sharing is adopted so that the same fusion output is used to predict multiple outputs at each step. Two linear layers predict the distribution parameters $\langle \mu_{i+1},\sigma_{i+1},\rho_{i+1}\rangle$ and the shot type $\hat{s}_{i+1}$ at the $(i+1)$-th stroke by combining the target player embedding $p_{i+1}$ with the fusion output $z_{i}$: $$\begin{equation}
\hat{s}_{i+1}=softmax((z_i+p_{i+1})W^s),
\label{predict_shot}
\end{equation}$$ $$\begin{equation}
\langle \mu_{i+1},\sigma_{i+1},\rho_{i+1}\rangle=(z_i+p_{i+1})W^a,
\label{predict_area}
\end{equation}$$ where $W^s \in \mathbb{R}^{d \times N_s}$ and $W^a \in \mathbb{R}^{d \times 5}$ are learnable matrices. The predicted area coordinates are sampled as $\langle \hat{x}_{i+1},\hat{y}_{i+1}\rangle \sim \mathcal{N} (\mu_{i+1},\sigma_{i+1},\rho_{i+1})$. The target player embedding is added to the fused context to specify the player who returns the stroke.
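Sampling a landing point from the five predicted parameters is straightforward once they are turned into a covariance matrix. The parameter values below are made up; in the model they come from the $W^a$ head:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_area(mu, sigma, rho, rng):
    """Draw (x, y) from a bivariate Gaussian given means, standard
    deviations and correlation coefficient rho."""
    cov = np.array([
        [sigma[0] ** 2,             rho * sigma[0] * sigma[1]],
        [rho * sigma[0] * sigma[1], sigma[1] ** 2],
    ])
    return rng.multivariate_normal(np.asarray(mu), cov)

xy = sample_area(mu=(0.5, 0.5), sigma=(0.1, 0.2), rho=0.3, rng=rng)
```

(In practice the raw head outputs would be constrained, e.g. $\sigma$ via an exponential and $\rho$ via tanh, so that the covariance is valid; the paper text does not spell out the parameterization.)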

We minimize the cross-entropy loss $\mathcal{L}_{type}$ to learn the prediction of shot types: $$\begin{equation}
\mathcal{L}_{type} = -\sum_{r=1}^{|R|}\sum_{i=\tau+1}^{|S_r|} s_i \log(\hat{s}_i).
\label{shot_loss}
\end{equation}$$ We also minimize the negative log-likelihood loss $\mathcal{L}_{area}$ to learn the prediction of area coordinates: $$\begin{equation}
\mathcal{L}_{area}=-\sum_{r=1}^{|R|}\sum_{i=\tau+1}^{|S_r|} \log(\mathcal{P}(x_i, y_i \mid \mu_i,\sigma_i,\rho_i)).
\label{area_loss}
\end{equation}$$ The total loss $\mathcal{L}$ of our model is trained jointly as: $$\begin{equation}
\mathcal{L}=\mathcal{L}_{type}+\mathcal{L}_{area}.
\label{total_loss}
\end{equation}$$
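For a single stroke, the two loss terms can be written out explicitly; the bivariate Gaussian log-density is standard, and the probability values below are illustrative:

```python
import math

def cross_entropy(probs, target):
    """L_type contribution of one stroke: -log p(true shot type)."""
    return -math.log(probs[target])

def gaussian_nll(x, y, mu, sigma, rho):
    """L_area contribution of one stroke: negative log-density of the
    bivariate Gaussian N(mu, sigma, rho) at the ground-truth (x, y)."""
    zx = (x - mu[0]) / sigma[0]
    zy = (y - mu[1]) / sigma[1]
    z = zx * zx - 2 * rho * zx * zy + zy * zy
    log_norm = math.log(2 * math.pi * sigma[0] * sigma[1]
                        * math.sqrt(1 - rho * rho))
    return log_norm + z / (2 * (1 - rho * rho))

# Total per-stroke loss = type term + area term (Eq. total_loss).
total = cross_entropy([0.1, 0.7, 0.2], 1) + gaussian_nll(
    0.4, 0.6, mu=(0.5, 0.5), sigma=(0.2, 0.2), rho=0.0)
```

Note that a continuous density can exceed 1, so the area term can be negative; only the sum over strokes is minimized.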
2112.14731/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2021-09-09T10:32:29.484Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36" etag="AqWd52viOUk53b-xRvyW" version="15.1.1" type="google"><diagram id="61AIcMLOywQOjQXu1Y4K" name="Page-1">7V1Zd9pIFv41nDPzYI725dHGduLTiZOJnenkqY9AwqgNiBYitufXT5U2pNIVKi1VErHI9BgKUYi6391v3ZrIs83rB9/arT57trOeSIL9OpGvJ5IkKpI0wf8T7LdoRFfUaODJd+34ouPAg/s/Jx4U4tGDazv73IWB560Dd5cfXHjbrbMIcmOW73sv+cuW3jr/rTvrySkMPCysdXH0T9cOVtGoIenH8Y+O+7RKvlnUzOidjZVcHE+xX1m29xINhT9OvpnIM9/zgujZ5nXmrPHiJesSrcBtybvpjfnONqD5wGH79x8fvX8uHPUvV/14c/X8JioXSjTLL2t9iH/wRNLWaL4rFz15wk/uvs7QFbIgTOTLCf4O7Z8Dvuer2WG9s+ZooSTho7dxF4hQxzejK9fR09t0ssLskxtpYggTUzl19dzPjFibHXqyne/xnz9XnvPL8TEArMMeAQUhzcErLwnT6RT6fEiK4C2hL6LKDj/degH6c/WycgPnYWct8NgLgjQaWwWbNXol4omsxfOT7x229pdDsHa3TjxuW/7zF/QpN8AwF6aCigaX3ja4tTbuGo89upvw7u6dF/T/37yNtY0viTEv4nmW7no989aeH96cvFwupcUCje8D33t2Mu/Y2lxTtfQHZVEQAwOtSuC8ZoZiVHxwvI0T+G/okvhdOQaOELOopMWvX46AF4V4bJUFe/JBK2ayp3TuIw7RkxiKNWApqqW43O/QysXUfFxh8C0txPeS4C2xbAhHEO8FIShctOQYvT/wnVv+3Dv4jh0+D6InSw9f9hP/hi1+jT6ISBxd427CNQjBZKH/nrfu0omANc2gNXtDBXhV4ClB3+b1CYvP6dzau4uph372X0gErb1D0AmObNUxbAXCkSHNZa0jHIkyBY40CEcKKxzJUimO8JJhWiUaJxFcYlaGEWIoR9vkKjzRRTQNlo+SuHstTjHFuIn+S4EzJ+dHY9FdlYAJ0SHI48dau0+I/tcLRDUEePkKU8tF0LmM39i4to0/fuU76BZDaY3kE3q989xtEC63ejVRr/Fch8CLV0MsQGXrYVmXw1UyREKvGq0dYE0hZFYqwzJYUwCoMZNYcrnESpBWiRwVQs7l4+O3u6vvjzfoipv72Zfrm2+0eAkVlWPHBK0QRQ2kTASRxCySAP1lLBxYf80NVVGFjuSOnseCohSxkOIjCwaVGRi0CvXVFAwPj9++zx6/f7v8BKLhpCrqHQ225RhLEA3awnDmy27QoKkUaICsGWZoSCbOoYEgDjYqd6ABiB61FiZ1hWJhn/M2YLWt5hZMhdS2CCwYO62tVy/YEc0CBzTjkZ3jo08iNYu/x90+xV9NUGw2u701TZBip8FRDfAMPSBpxo4cxkiOE95TkVv4UsccqXNCG/RMHYVC9r9f6pg9U8cot9Nio32RLsfRFEPWA37kYl1N7bnZl2939x/Q+7ff72ePd1/uz8e2V/E/0JoLH/GcmfHo0Y2Vl2IiNlo0AEo6V5vfpLby2JlyEuEJqYBykiFTLrXvuheAkHo6FU6JOQ7zjP80/5cQBfCSP/+O2C1krmWM6DgqHcWF41mCGOTbEOR+CHLiksxM8V3gibaev7HW0XtrB4cRL/aJkCTfx/GYizgEg99LozDJey5iUUxk/KaA+P/4TuBb2/0SzZXMGsZSMKU8385/Y/rBY/D5glgkSTGi9ZHwcodP1NxSEeGpUADVi1DBoS
gKudM6NnSa2+ijBITEAFhDBThDZsUYqggwRmuvsN5i1RcwlWvI11NUKBZxqObV7W1N84peSfZm7UojOYbrKSrySJ0Be4pQKn6kzkA8RZVCspVY94xVsmgMTSdDmbDfFchSCRkHpJOhGMdIjqHo5DNOdbCnTu86+YwzH+yp07dONiiokxRY+c4/B9d37DscoVmG5MnSJSmb+WTNnfVXb+8GrofLZ+ZeEHibriKobW2DE1IMqtgrkiKt4uueFhRZqEHRwkZ3sIhnfnH2ARPy9EYOUYD0Sr1yJBHOWNzEaQrhMgh8d34IcEnkzWbu2DaWRe++lK0jliaqWCBZC0GKXUWu0L6mCUbU7eXskQJOJ2ubfl841a+GE/LI0YDKyNRn5AOdBKhtaiOFPHTgxOxiQSZmizWUn7x9XBT+sPB8TIP3LrHqJ1/FPMR0HZBOMl+IQfEZFhATBE1bLCoqM0eMtcaYrlBgDErYMcQYRSSFdYZfIzJwUIZf5ZvhFwXID+vC1owtg4fAPyyCg2+t37OliYASWO4W3/A1XDhfgdr63mS57SmafG3P8iKSrryZEWR8QTY0/0aHDIihhvpIMtnWfpXWAras4mwiMIiNf30H1fVzSnQPjZbawGh5TmnxodHSHBgtzykzPDBaFja89kxLE5KxY13vWNfbu2U5tEJfUaBQYYPKjfXgavaYKaNQSoOiDpC55ECw3ghknrPR0LImowFHDc1MOKfqs/6ppw6MeudUndY/9aorcflS75yq1/qnnigOi3ypSh3pR0e/gbGfKJzT5rT+6UdWFfRPv3NOCfRAv2q/mzP9zjkN0AP99IHRT6SoKXa29iXu8oteLdbWfu8uOiYZ9ridVzf4gekyVSUzfv0z/gL8/Po1Jlr44i3z4mtC7GRsi1YmmkqUtWQAz3UhTAUpGThOGL56y74ip+yw5d7eO/gL5yRLxagKLP/JOYW/pLOKY+faK5+MK0BRuWTMd9ZW4P5ycvcLQS7+hq84iX7ENtkUTjUJ1EY/Pf7UEbiFiXSiNkIl4R+tTGGikAPSn92CKSQKozDDFPO1t3hmwBL1UYdu6tbFP7bACKpCMoLWhBHawFp/57CWyWq7pG0LN1hT2MqcZb0S9tdOZX3U07ueuK/fADOXZcwwiaGpvTOJ8c6ZxND6ZhIKh4SD7M+yiJ5jEbE+i2QxrjKxiCi4Lq+b2vCI+c55pH/7iMLp46xIdCPnNHSgSKjMrZSvcppDqOCoNuB/786BRoBfJ5sNMgc/RaUBVwUhpFj/GYt4DjZUqaehimYnCmZUEOfMIxT1BJwVBO7AnuUS02hjRqlyXuBPxSYY5xlWMmgVR5qi+924on+ziaJOg3eslTrSWv+0owy7KKaWZxdBbcIv3aJfufxbVbW7P358f/v8x+1n5/PbIbiQB4V9WdCnkpB55E+RMATCOW6qIAyysow5K1AUvXBmBUMXs8wwFZRWCoKMt4qNEN8yuMVFXUjDYhnxNMuIDVlGFg0ECfP4IObVJM4cRFF4dC7KpJ6rLXHytZMjeypZIDkdZCAcULBz9IYGk2hWWF7MMV4vOT2cPFy+4qHMV+4ExpXoTHreDgSdMlFBpjROpxETqQZndMpQOm3cnTXuzmJ4Iicp2hNfvbd9V3K9VACzaGhOxDYJZ6bmhc7LvKC1sJMNAgMR4BJpFZC+I3U8Ru5bgA8kkt8XdjM5ZqIKo77ry4EP0lrvgTCCkZz10NaSMfq2s0+cqzxaMqMlw8KS0clT1Pq3ZIYSnT/VBKOeOmioDUxBzqmD+iXYHLSBPqxIvanr3SgDUajKdzHXBjVavi/XzmvMEVd8oo1bO30Z5Vzt+AN6W25RdD3LL7hyQaniGShmnzGrjFxKgQ0bgWksnZaN5GGVQOgmYVM1di4Im0ohFQdzLqrRrH/YXEQTnQdzAMLU4OlIiOqgoCwJ5jSfI5JJB5cWzMWpFN61CzRnbP+GcJZyOa2pIOocAT2wwD25dbEpmmWtfzRT7I
T5DdEsG3zQDJokwwreqGQXGF1shmYyJM/d0ADPgx2DN2Pwhl3wplQV9Ba8oTl2d6wUqKoUGFaiSVEU0k5oGlRRybwp75gKePBwhwc7PWJpMZ7jVFeQiUoeYFA/Dr7HOIEH+3YBlOMpDBgr+xEs9bWeWQ0W8BRohmhhdV7c7O7xMobLvRMgk+l5BEz9HBfRJ1QEjldKAyucAENRdz+YRrwnK9hr9PskrNW0m1JVu12NHRnO6VTdkGHW7u5jdyQhTz8aAknq1RrHgiWXq+3amyhd6IEY6OQuIq1poTk5kcRuO9Kp0BSgRd2jnotDKn75yN1XHBiQJS2NmMRa9b/e+rBFvwHDQRIW1mEfHnslPPmu88s7YGysDj5ekDm+wLa2T44fjb841s7b4mdIn0nCxrHwi8wBipMbaYJMVFOJvjHR0emN/7nyHMSYUdDCeV04O/w9LrYNgpUT3s4e/5lOp6UiaOsFTnUs4hg9+XII1u42UdK25T9/wfGWIHRRo71/DRiEdK6XSwl2rm1troVtTjqQVArRj0bSIEkFiaoulPipLYocTuE0TfCg1093H+4/39xjv3M8hLM9woh9DOAhnKysxJPFA3Q9PcVqyZDTkvjFVwvHfbfhiBRWWWTXXq2MrcUEe/It20Urn5cK6JF57zpzkoNj7QM6uoa3E//aUghVnwhRGwzk0aMScHZHenpYTtyQEa3O0AA0+3mQi6bRGlmIewotYe13iB7oxdJ9xZioqwlgcIBIsM1wH3p9vWEXMVNoHZsDSHvCG2RjX6AeEjrtuYuQOkz2YsTyQRnJzprsat9kL48o1ToFFkqAPUi05sEIq25hJalqz7Aqxp0exJHsrMneuxIpP1O6ZdpLoo1HDwVJpm4LOlSjXN1IpwckmaQdCmT4uSIJrHKpDM+VVErAREvrpCbZElZZO91Obx9YfkB8bTiWS+Rn0/r5/R31fZGKzjOCLE+y5QJTU2XXxfivZ/HGen75+fU/Hw9v/7Gf//7x3b8AGn2D11E3GisJ5EM28qlwZ43IJpHqwyBoFtkk7Tt21WEw29Sri2nENgn0xDzsqvodtWaDWqA9azCWfFNtMJbdMi8w1ttv3QqMSHBn4Yg380Qdgql2wEU9TKn27TCT5t3JY6CP0dmxAFF6oejTphKZKPyV2JWCwUxQdLAfi56Qv/I288O+2hMiWQKwPOfhv8nJkGh1KaOK/0FmqhY+6M3UWvzR3pLVyQQOEGGBoutdHLkEA6DoCj8C8fURAIwAIANlW3wBUK9zXzs1mFWCVT2K8+eixMowu4P1uL2vpK1fV9StbNOq0G5b5a6qdEmFv6m1tUbWvrBWVI0KYjqAaVWBe2uYQi47R+jSVtmfMXQJr7dQt88YuslP5w9dxl5vY9BJwwUd4d1KpJg7G9DVO12tpXeba+3SWrFXdOvtCqaVe0QV2jbt0mD1P1nBoTYVoirh86qc9b8q9YbnRhYA0XtiKIgGguEny2wGiGilI0Rrcs+I5hgMb2MWDFpCU+OZtnsFfzyTDbbIQkp6i4PAM1n/zhjPGrNNpfKYXeeZXe+96kurd15Xm+x6HUe9qTTlJw4TKVcpDrWa4lAjmiNDpTzdiEOZ+CalcV8fsrM/51y3Vq+B57sB8fAgZ5A7Mptq4EKZEjkRa8jVO8aqK8hVdJJsFFXlKDNpTcjhykyyY7dsNMxGm2R9EDkRawBzyEWdo8ysBnFNd71H4WoqTWslCuK1OBVrfHJIQnWGz6owLEd01gyPckQnYSQWfJrG2GTXQxLeJAnlmCKneUduEa+1c6bojYul22bS4d3oobOtf+97J4V+XopaGp6m7t7cJHfbSOxUujrVBE0TRF3UddOQCMmXf1c0GkpUQgnwLjjRKXQ9i23njaRdRk5Fx8yCLNXdnvUOZBrV3nIIsJ3sLT9VmVy+2TTqd5TRmBCNsp0sGurYTMfcuAFaVunOAY17cv9qBTCTviqbV0Tx3Wo6t/buYrpDpLWeMNHpNK
ruWJojABq1+tSLEugAACtHE1m3wbGsEsRS5cbl/rAkjViqhyWzZyxBYb2BYEkesVQLSzLQWo4rlsRyR7GpX1gbfiVgumyl5BK0YCpWmWJUwLFVXTbtInAczRZsBwKOaWqioXYEHNKcB4QQ5PIpKiPg0BxSgfut7Up/PPKSt8iQjHtlTVIHhX5RxLShLRlmyfY/M4FlKeyr6mxdJIpkdb6tem1wUsHpBNGK69njclGkRc+OlWutfTINoRiKQJYAwnQR0YHpQpE7/C30dRNa5QurgI54rIJvMKkogm/jhr1MnI45IkSO8VgYEjSJqbGdEXsgcCydA+9WLjfvC3mgYvPgMXXTD2oobXt2qClvOd27U7jInQ5S2y0cZGtfZrgiq9aLffZYte6FcSVV4aodStrVlRejWciARZCowlu+BH2EYSUM9SIMda4wrOxRPigYPno7dzGCsC0INdJGL6IQal3NDoXl+2OGiMKHyFIacdg1DoHmt3ylYedpbqY4vLWqLcARhDU1smgWQQjVWrADIYfdMW2a7zFsB0kUdLWn7YWu5IgrafI0af/QefEX2a9ONtPvqt1SoDiToJjJwyDqxror94IB2XmSnalUvNsu1gcbf/0tgk9w4S1HEdmtngbP04RSMuxk5DCLbJvF4LuXesXYK51OayDzCPHacI+AWjEPawlX3v1+iBJu5gaheMN/7Ul0Ft0o4bo0AqHeo8xODIbvd7jFSJ9uPtzcX4+RZ+qMRhFLHR1Vjl76HibWURji8oLPnu3gK/4P</diagram></mxfile>
2112.14731/main_diagram/main_diagram.pdf
ADDED
Binary file (70.6 kB).
2112.14731/paper_text/intro_method.md
ADDED
@@ -0,0 +1,88 @@
# Introduction

Legal Statute Identification (LSI) is an important task in the judicial domain: given the *natural language description of the Facts of a situation*, it involves identifying the set of *statutory laws* that are relevant, or might have been violated. The task needs to be performed at different stages of litigation by different experts, including police personnel, lawyers and judges. An automated LSI system can greatly improve access to law for the common masses.

Early approaches to LSI relied on statistical or simple machine learning methods [@kort1957predicting; @lauderdale2012supreme]. Recently, several neural architectures have modeled the problem as a text classification task, attempting to extract increasingly rich features from the text [@luo2017learning; @wang2018modeling; @wang2019hierarchical; @xu2020distinguish]. Some of these methods simplify the task by identifying only the single most relevant statute, discarding its multi-label nature.

Historical court case data have been widely used to create datasets for training LSI models [@chalkidis2019neural; @xiao2018cail2018]. However, almost all existing methods rely only on the text of the Facts (and statutes) to perform the classification. In particular, they do *not utilize the legal statute citation network* between court case documents and statutes, which has been shown to be a rich source of legal knowledge, useful for tasks such as estimating legal document similarity [@hier-spc-net]; to our knowledge, it has never been exploited for the LSI task.

This work is the first attempt at LSI that utilizes both the textual content of statutes/Facts and a *heterogeneous statute citation network*. In this network, statutes and documents are nodes of different types, and citation links exist between two nodes *iff* the corresponding Section is cited by the corresponding document. Additionally, statute nodes may also be connected via the hierarchies specified in the written law itself. Given such a network, the task of LSI boils down to predicting the existence of links between statute-nodes and *newly entering* document-nodes.

However, most existing supervised network-based training methods for link prediction *do not work well for out-of-sample nodes*, i.e., nodes that the model has not seen during training. Indeed, in our case, we can only provide the document nodes and their citation links at training time. At test time, every document-node is previously unseen, and the model must predict whether links exist between a new document-node and the statute nodes. Thus, approaches that seek to learn effective node representations for link prediction will fail at test time in our setting. Rather, one has to capitalize on the fact that two kinds of information are available at all times --- the texts corresponding to each statute/document, and the statute nodes themselves (and the hierarchical links between them).

We, therefore, formulate the LSI problem as an **inductive link prediction** task [@hao2020inductive] on the heterogeneous citation network, between the previously seen Section nodes and a new, unknown Fact node. We use a hybrid learning mechanism to learn two kinds of representations for each node -- attribute (generated from text) and structural (generated from the network). By forcing the model to learn attribute representations that closely match structural representations of the same node, we ensure that the model generalizes better, generating more robust, feature-rich attribute representations for unseen document nodes at test time.

There do not exist many good-quality datasets for LSI in English. The ECHR dataset [@chalkidis2019neural] is quite small ($\sim 11.5K$ documents/Facts) and does not provide the text of the statutes. Another popular dataset, CAIL [@xiao2018cail2018], is much larger; it is, however, in Chinese. Importantly, the *average no. of statutes cited per document* is quite low in both datasets ($0.71$ for ECHR and $1.09$ for CAIL), i.e., most documents cite at most one statute (or none, in the case of ECHR). The LSI problem, however, is inherently multi-label, and these datasets do not reflect its true multi-label nature.

We construct a new, fairly large dataset ($\sim 66K$ docs) from case documents and statutes (in English) from the Indian judiciary. This dataset, which we call the *Indian Legal Statute Identification* (ILSI) dataset, captures the multi-label nature of the LSI problem more faithfully (details in Section [6](#sec:expt){reference-type="ref" reference="sec:expt"}).

To sum up, our contributions are as follows: (1) To our knowledge, we are the first to use the statute citation network in conjunction with textual descriptions for the task of Legal Statute Identification. We propose a novel architecture, **LeSICiN (Legal Statute Identification using Citation Network)**, for the task, which outperforms several state-of-the-art baselines for LSI (improvement of $19.2\%$ over the closest competitor). (2) We construct a large-scale LSI dataset from Indian court case documents, where the task is to identify Sections of the Indian Penal Code, the major criminal law in India. The dataset and our code are available at <https://github.com/Law-AI/LeSICiN>.

# Method

This Section discusses the standard formulation of the LSI task as a multi-label classification problem, and how we re-formulate LSI as a link prediction task over a graph.

Let $F = \{f_1, f_2, \ldots, f_{|F|}\}$ denote the entire set of Fact descriptions (documents), and $S = \{s_1, s_2, \ldots, s_{|S|}\}$ denote the set of IPC Sections (labels). Since more than one Section may be relevant to a Fact $f$, each instance is denoted as a tuple $\langle f, \mathbf y_f \rangle$. Here, $\mathbf y_f \in \{0,1\}^{|S|}$, where $\mathbf y_f[s] \in \{0,1\}$ indicates whether Section $s$ is relevant to Fact $f$.

The LSI task requires us to develop a function $\mathcal{F}(\cdot)$ such that $\mathcal{F}\left(f, S\right) = \hat{\mathbf y_f}$, where $\hat{\mathbf y_f} \in \{0,1\}^{|S|}$, with $\hat{\mathbf y_f}[s] \in \{0,1\}$ denoting the function's prediction of whether Section $s$ is relevant to Fact $f$.

We model the LSI task using a graph formalism, over a *legal citation network*. Each Section and each Fact (from training instances) is treated as a node in a heterogeneous graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, with a node mapping function $\phi: \mathcal{V} \rightarrow \mathcal{A}$ and an edge mapping function $\psi: \mathcal{E} \rightarrow \mathcal{R}$, where $\mathcal{A}$ and $\mathcal{R}$ represent the types of nodes and the types of relations between nodes, respectively. In our case, we have Fact nodes ($F$) and Section nodes ($S$), which are accompanied by their attributes. Additionally, we have placeholder node types to denote the hierarchical levels of the IPC -- Topics ($T$), Chapters ($C$) and the IPC Act ($A$) (see Section [3](#sec:data){reference-type="ref" reference="sec:data"}). There exist several types of relations between the nodes -- 'cites' ($ct$) and 'cited by' ($ctb$) relationships between nodes of types $F$ and $S$, and 'includes' ($inc$) and 'part of' ($po$) relationships between the successive node hierarchy types $A$, $C$, $T$ and $S$. Figure [1](#fig:arch){reference-type="ref" reference="fig:arch"} shows a pictorial representation of this network.

We construct the network using the following steps. (i) Assign the set of nodes of each node type, i.e., $A$, $C$, $T$, $S$, $F$, and set $\mathcal{V} = \mathcal{V}_F \cup \mathcal{V}_S \cup \mathcal{V}_T \cup \mathcal{V}_C \cup \mathcal{V}_A$. (ii) For given nodes $u$ and $v$ belonging to *different, successive levels of hierarchy* in the Act (IPC), s.t. $\phi(u) \in \{A, C, T\}$ and $\phi(v) \in \{C, T, S\}$, include links $(u, v) \in \mathcal{E}_\text{inc}$ and $(v, u) \in \mathcal{E}_\text{po}$ iff $v$ is categorized under the broader hierarchy level of $u$. (iii) For given nodes $u$ and $v$ s.t. $u \in \mathcal{V}_F$ and $v \in \mathcal{V}_S$, include links $(u, v) \in \mathcal{E}_\text{ct}$ ('cites' link) and $(v, u) \in \mathcal{E}_\text{ctb}$ ('cited by' link) iff $\mathbf y_u[v] = 1$.
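
As a concrete illustration, the three steps above can be sketched in a few lines of Python (a toy sketch with hypothetical node names and label sets, not our actual data pipeline):

```python
# Toy sketch of the network construction; node names and labels are made up.
def build_graph(hierarchy, fact_labels):
    """hierarchy: (parent, child) pairs across successive IPC levels
       (Act -> Chapter -> Topic -> Section);
       fact_labels: Fact id -> set of Sections it cites (y_f[s] = 1)."""
    edges = {"inc": set(), "po": set(), "ct": set(), "ctb": set()}
    for u, v in hierarchy:                     # step (ii): hierarchy links
        edges["inc"].add((u, v))               # u 'includes' v
        edges["po"].add((v, u))                # v is 'part of' u
    for f, sections in fact_labels.items():    # step (iii): citation links
        for s in sections:
            edges["ct"].add((f, s))            # f 'cites' s
            edges["ctb"].add((s, f))           # s is 'cited by' f
    return edges

edges = build_graph(
    hierarchy=[("IPC", "C1"), ("C1", "T1"), ("T1", "S1"), ("T1", "S2")],
    fact_labels={"F1": {"S1"}, "F2": {"S1", "S2"}},
)
```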

Given a new Fact $f$ *at test time*, the task is to identify the relevant Sections. This problem can be seen as **inductive link prediction** over the network described above, i.e., predicting the existence of a link between the new Fact $f$ and the Section nodes in the network. Note that inductive link prediction is the task of predicting a link between a pair of nodes, where *one or more nodes might not be visible to the model during training.*

Formally, modeling LSI as link prediction requires us to develop a function $\mathcal{Q}(\cdot)$ such that $\mathcal{Q}(u,v, \mathcal{G}) = \hat{y_{uv}}$, where $u \in \mathcal{V}_F$, $v \in \mathcal{V}_S$ and $\hat{y_{uv}} \in \{0, 1\}$ denotes the function's prediction of whether a link should exist between $u$ and $v$.

Figure [1](#fig:arch){reference-type="ref" reference="fig:arch"} gives an overview of our proposed model **LeSICiN**. Each Fact $f \in F$ and Section $s \in S$ is accompanied by a text, $x_f$ and $x_s$ respectively, which can be considered an attribute of the node. The citation network provides the structural information. We have two separate encoders for encoding attribute and structural information, which are shared across both Facts and Sections.

We adopt the technique used by DEAL [@hao2020inductive] to generate good-quality embeddings for unseen nodes (new Fact descriptions) using only their attributes at test time. During training, we obtain both attribute and structural embeddings for both node types, Facts and Sections. These are combined in different ways to generate three types of scores -- attribute, structural and alignment. During testing, since structural embeddings of new Facts are not available, we can only generate the attribute and alignment scores. The alignment score enables the model to correlate the two kinds of embeddings, so as to generalize well to unseen nodes during testing.

Next, we describe the attribute and structural encoders, and the scoring function used by the architecture.

We use a shared attribute encoder for both Facts and Sections. A text portion in our formulation is a nested sequence of sentences and words. Hence we encode the text using the Hierarchical Attention Network (HAN) [@yang2016hierarchical]. Specifically, by feeding the Fact text $x_f$ and each Section text $x_s$ to HAN, we obtain a single, attention-weighted representation for each text, namely $\mathbf h_f^{(a)}$ for the Fact and a set of Section representations $\{\mathbf h_{s}^{(a)} \mid s \in S\}$.

<figure id="fig:arch" data-latex-placement="t">
<img src="Architecture.png" style="width:80.0%;height:7cm" />
<figcaption>Architecture of our proposed model <strong>LeSICiN</strong>. The <span><strong>attribute encoder</strong></span> is used for converting text to vector representations, and the <span><strong>structural encoder</strong></span> is used to generate representations for Sections and training documents. These representations are combined and fed to the <span><strong>scoring mechanism</strong></span> to generate losses and scores.</figcaption>
</figure>

To exploit the rich, semantic information of the Citation Network, we carefully design various metapath schemas, following the process adopted by @fu2020magnn. A *metapath* is a sequence $A_1 \xrightarrow{R_1} A_2 \xrightarrow{R_2} \ldots \xrightarrow{R_l} A_{l+1}$, which denotes the composite relation $R_1 \circ R_2 \circ \ldots \circ R_l$ between node types $A_1, A_2, \ldots, A_{l+1}$. A metapath only defines a schema; there may be multiple sequences of nodes originating from the same starting node that follow the same metapath schema $P$. Each such sequence is called a metapath instance of $P$. The $P$ metapath-based neighbourhood of a node $v$, denoted as $\mathcal{N}_v^P$, is defined as the set of all nodes that can be reached from $v$ using the schema $P$. Any neighbour connected by two or more metapath instances is represented as two or more different nodes in $\mathcal{N}_v^P$.
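
The following toy sketch (hypothetical adjacency lists, not our implementation) enumerates the instances of a metapath schema by walking its relation sequence:

```python
# Enumerate metapath instances by expanding paths one relation at a time.
# Repeated end-nodes are kept, matching the definition of N_v^P above.
def metapath_instances(adj, schema, start):
    """adj: relation -> {node: [neighbours]};
       schema: list of relation names, e.g. ['ct', 'ctb'] for F-S-F."""
    paths = [[start]]
    for rel in schema:
        paths = [p + [n] for p in paths for n in adj[rel].get(p[-1], [])]
    return paths

adj = {
    "ct":  {"F1": ["S1"], "F2": ["S1", "S3"], "F3": ["S1", "S3"]},
    "ctb": {"S1": ["F1", "F2", "F3"], "S3": ["F2", "F3"]},
}
# Instances of the F -ct-> S -ctb-> F schema starting at F1:
inst = metapath_instances(adj, ["ct", "ctb"], "F1")
# inst contains F1-S1-F1, F1-S1-F2 and F1-S1-F3
```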

To extract the structural information from the citation network, we make use of different metapath schemas with different node types as the start nodes. For example, for nodes of type $F$ (Facts) we have a metapath schema $F \xrightarrow{ct} S \xrightarrow{ctb} F$, capturing the type of relationship that exists between *Facts that cite the same Section*. From Figure [1](#fig:arch){reference-type="ref" reference="fig:arch"}, we can observe multiple instances of this metapath schema, such as $F1-S1-F3$ and $F2-S3-F3$. As another example, for nodes of type $S$ (Sections), we have a metapath schema $S \xrightarrow{po} T \xrightarrow{po} C \xrightarrow{inc} T \xrightarrow{inc} S$, capturing the relationship between *Sections defined under the same Chapter*. From Figure [1](#fig:arch){reference-type="ref" reference="fig:arch"}, we can see that $S1-T1-C2-T2-S3$ and $S1-T1-C2-T1-S2$ are two instances of this schema. We use a total of 4 Fact-side and 4 Section-side metapath schemas, which are listed in the **supplementary material**.

Next, we describe the individual node representation, followed by intra- and inter-metapath aggregation to obtain the structural embedding.

**Node Embedding:** We have a set of parametric node embedding matrices $\{\mathbf{X}_A \mid A \in \mathcal{A}\}$ that are used to initialize each node $v$ with its feature vector $\mathbf x_v$. Since the initial feature vectors might be of different dimensions and may map to different latent spaces, we need to transform them to the same dimensionality and space. For a node $v \in \mathcal{V}_A$ for $A \in \mathcal{A}$, we have $\mathbf x_v = \mathbf X_A \cdot \mathbf I_v$ and $\mathbf{h'}_v = \mathbf W_A \cdot \mathbf x_v$. Here $\mathbf X_A \in \mathbb{R}^{d_A \times |\mathcal{V}_A|}$ is the node embedding matrix for nodes of type $A \in \mathcal{A}$; $\mathbf I_v \in \mathbb{R}^{|\mathcal{V}_A|}$ is the one-hot identity vector for node $v$; and $\mathbf x_v \in \mathbb{R}^{d_A}$ is the initial feature vector for $v$. Further, $\mathbf W_A \in \mathbb{R}^{d' \times d_A}$ is the parametric transformation matrix for node type $A \in \mathcal{A}$; and $\mathbf{h'}_v \in \mathbb{R}^{d'}$ is the transformed latent vector of uniform dimensionality (across node types). After transformation, all node embeddings are ready to be fed to the aggregation architecture.
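
A minimal numpy sketch of this transformation (toy dimensions and random matrices, purely illustrative): $\mathbf X_A \cdot \mathbf I_v$ simply selects column $v$ of $\mathbf X_A$, and $\mathbf W_A$ maps it to the shared $d'$-dimensional space.

```python
import numpy as np

rng = np.random.default_rng(0)
d_A, n_A, d_shared = 8, 5, 4            # per-type dim, #nodes of type A, d'
X_A = rng.normal(size=(d_A, n_A))       # node embedding matrix for type A
W_A = rng.normal(size=(d_shared, d_A))  # transformation matrix for type A

v = 2                                   # index of node v within type A
I_v = np.zeros(n_A); I_v[v] = 1.0       # one-hot identity vector
x_v = X_A @ I_v                         # initial feature vector (= column v)
h_v = W_A @ x_v                         # transformed latent vector in R^{d'}
```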

**Intra-Metapath Aggregation:** First, we need to aggregate all the node embeddings for a target node $v$ under a particular metapath schema $P$. For a metapath instance $P(v,u)$ connecting $v$ with its metapath-based neighbour $u \in \mathcal{N}_v^P$, we use a metapath instance encoder $g_\theta(\cdot)$ to generate a representation for the instance $P(v,u)$. We adopt the relational rotation encoder used by @fu2020magnn.

Consider that $P(v,u) = \{n_0, n_1, \ldots, n_M\}$, with $n_0 = u$ and $n_M = v$. We thus have $$\mathbf q_i = \mathbf{h'}_{n_i} + \mathbf q_{i - 1} \odot \mathbf r_i; \quad \mathbf h_{P(v,u)} = \frac{\mathbf q_M}{M + 1}$$ where $\mathbf q_0 = \mathbf{h'}_u$, $\mathbf r_i \in \mathbb{R}^{d'}$ is the learned relation vector for $R_i \in \mathcal{R}$ and $\mathbf h_{P(v,u)}$ is the vector representation of the metapath instance $P(v,u)$. Now, we have obtained representations $\mathbf h_{P(v,u)}$ for each $u \in \mathcal{N}_v^P$. We attentively combine these $P$-based representations for the target node $v$ as $$e_{vu}^P = \textsc{LeakyReLU} \left( \mathbf a_P^\intercal \cdot [\mathbf{h'}_v || \mathbf h_{P(v,u)}] \right)$$ $$\alpha_{vu}^P = \textsc{Softmax}_{u \in \mathcal{N}_v^P} \left( e_{vu}^P \right)$$ $$\mathbf h_v^P = \textsc{ReLU} \left( \sum_{u \in \mathcal{N}_v^P} \alpha_{vu}^P \cdot \mathbf h_{P(v,u)} \right)$$ where $\mathbf a_P \in \mathbb{R}^{2d'}$ is the parameterized context vector and $\mathbf h_v^P \in \mathbb{R}^{d'}$ is the aggregated representation from all metapath neighbours of $v$ in $P$.
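
The instance encoder and the intra-metapath attention can be sketched directly from the equations above (a toy numpy sketch with random weights and illustrative sizes, not our trained model):

```python
import numpy as np

def leaky_relu(x, a=0.01): return np.where(x > 0, x, a * x)
def softmax(x): e = np.exp(x - x.max()); return e / e.sum()

def encode_instance(H, r):
    """H: (M+1, d') vectors h'_{n_0}..h'_{n_M}; r: (M+1, d') relation vectors."""
    q = H[0]                               # q_0 = h'_u
    for i in range(1, len(H)):
        q = H[i] + q * r[i]                # q_i = h'_{n_i} + q_{i-1} ⊙ r_i
    return q / len(H)                      # h_{P(v,u)} = q_M / (M + 1)

rng = np.random.default_rng(1)
d = 4
h_v = rng.normal(size=d)                               # target node v
instances = [rng.normal(size=(3, d)) for _ in range(5)]  # 5 neighbours, M = 2
r = rng.normal(size=(3, d))                            # relation vectors
a_P = rng.normal(size=2 * d)                           # context vector

h_inst = [encode_instance(H, r) for H in instances]
e = np.array([leaky_relu(a_P @ np.concatenate([h_v, h])) for h in h_inst])
alpha = softmax(e)                                     # attention weights
h_v_P = np.maximum(0.0, sum(a * h for a, h in zip(alpha, h_inst)))  # ReLU
```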

**Inter-Metapath Aggregation:** Now, we need to aggregate across the different metapath schemas. Consider $\mathcal{P}_A = \{P_1, P_2, \ldots, P_N\}$ as the set of metapath schemas which start with node type $A \in \mathcal{A}$. For a node $v \in \mathcal{V}_A$ of type $A$, we have a set of $N$ representations $\{\mathbf h_v^{P_i}, \forall i \in [1,N]\}$.

First, information for each metapath schema is summarized across all nodes $v \in \mathcal{V}_A$ as $$\mathbf s_{P_i} = \frac{1}{|\mathcal{V}_A|} \sum\limits_{v \in \mathcal{V}_A} \text{tanh} \left( \mathbf M_A \cdot \mathbf h_v^{P_i} + \mathbf b_A \right)$$ where $\mathbf M_A \in \mathbb{R}^{d_m \times d'}$ and $\mathbf b_A \in \mathbb{R}^{d_m}$ are learnable parameters. Then, an attention mechanism is employed to aggregate the $N$ representations as $e_{P_i} = \mathbf q_A^\top \cdot \mathbf s_{P_i}$, $$\beta_{P_i} = \textsc{Softmax}_{P_i \in \mathcal{P}_A} \left(e_{P_i}\right) \quad \mathbf h_v = \sum\limits_{P_i \in \mathcal{P}_A} \beta_{P_i} \mathbf h_v^{P_i}$$ where $\mathbf q_A \in \mathbb{R}^{d_m}$ is the parameterized context vector.
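
A toy numpy sketch of inter-metapath aggregation (random weights, illustrative sizes): summarize each schema over all nodes, score the summaries against a context vector, and mix the per-node embeddings with the resulting softmax weights.

```python
import numpy as np

def softmax(x): e = np.exp(x - x.max()); return e / e.sum()

rng = np.random.default_rng(2)
n_nodes, d, d_m, N = 6, 4, 3, 2          # toy sizes; N metapath schemas
H = rng.normal(size=(N, n_nodes, d))     # h_v^{P_i} for all nodes v, schemas i
M_A = rng.normal(size=(d_m, d)); b_A = rng.normal(size=d_m)
q_A = rng.normal(size=d_m)               # context vector

s = np.tanh(H @ M_A.T + b_A).mean(axis=1)  # s_{P_i}: one summary per schema
beta = softmax(s @ q_A)                     # attention over the N schemas
h = np.tensordot(beta, H, axes=1)           # h_v = sum_i beta_i h_v^{P_i}
```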

Thus, at the end of the structural encoding phase, we get a single representation $\mathbf h_f^{(s)}$ of the Fact and a set of representations $\{\mathbf h_{s}^{(s)} \mid s \in S\}$ for the Sections.

We design a scoring function $m_\theta(f; \{s \mid s \in S\})$ to assign a score to each Section $s \in S$ for Fact $f$. Leveraging the fact that Sections in the IPC have a defined sequential order and related semantics, we first use an LSTM to contextualize these embeddings $\mathbf h_s$ for $s \in S$ as ${\mathbf{\tilde{h}}_s} = \textsc{Bi-LSTM} \left( \mathbf h_s ; [\mathbf h_t \mid t \in S]\right)$. Then, we use the standard attention mechanism to generate a single aggregated embedding $\mathbf h_S$ representing the set $S$: $$e_{s} = \mathbf w_S^\top \cdot \text{tanh} \left( \mathbf M_S \cdot \mathbf{\tilde{h}_s} + \mathbf b_S \right)$$ $$\gamma_{s} = \textsc{Softmax}_{s \in S} \left(e_s\right) \quad \mathbf h_S = \sum\limits_{s \in S} \gamma_{s} \mathbf{\tilde{h}}_s$$ where $\mathbf w_S \in \mathbb{R}^{d_s}$ is the parameterized context vector and $\mathbf M_S \in \mathbb{R}^{d_s \times d'}$ and $\mathbf b_S \in \mathbb{R}^{d_s}$ are learnable parameters.

Finally, we generate the score for each class as: $$\mathbf o_f = m_\theta(f; \{s \mid s \in S\}) = \sigma \left( \mathbf W_C \cdot [\mathbf h_f || \mathbf h_S] + \mathbf b_C \right)$$ where $\mathbf W_C \in \mathbb{R}^{|S| \times 2d'}$ and $\mathbf b_C \in \mathbb{R}^{|S|}$ are the learnable parameters for the final classification layer.
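
A toy numpy sketch of this classification layer (illustrative sizes, random weights): the Fact embedding is concatenated with the aggregated Section-set embedding and passed through a sigmoid layer producing one score per label.

```python
import numpy as np

def sigmoid(x): return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(4)
d, n_labels = 4, 3                       # toy d' and |S|
h_f = rng.normal(size=d)                 # Fact embedding
h_S = rng.normal(size=d)                 # aggregated Section-set embedding
W_C = rng.normal(size=(n_labels, 2 * d))
b_C = rng.normal(size=n_labels)

o_f = sigmoid(W_C @ np.concatenate([h_f, h_S]) + b_C)  # score per label
```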

Since we have two sets of embeddings for each Fact $f$ and Section $s$, we can generate three sets of scores, namely --

\(i\) **Attribute Score:** $\mathbf o_f^{(a)} = m_\theta(\mathbf h_f^{(a)}; \{\mathbf h_s^{(a)} \mid s \in S\})$ matches the attribute embedding of Facts with Sections;

\(ii\) **Structural Score:** $\mathbf o_f^{(s)} = m_\theta(\mathbf h_f^{(s)}; \{\mathbf h_s^{(s)} \mid s \in S\})$ matches the structural embedding of Facts with Sections;

\(iii\) **Alignment Score:** $\mathbf o_f^{(l)} = m_\theta(\mathbf h_f^{(a)}; \{\mathbf h_s^{(s)} \mid s \in S\})$ matches the attribute embedding of Facts with the structural embedding of Sections.

The structural score can only be computed at training time, since the graph structure of the Facts is not available at test/inference time. During training, the structural score helps the model capture the graph structure of Sections and training documents. The graph structure of the Sections, however, is available at all times, and thus the alignment score tunes the model to generate similar attribute and structural representations.

**Using Dynamic Context:** To provide better guidance to the attention mechanism of the structural encoder through the attribute embeddings, we replace the static context vectors in the structural encoder with dynamically generated vectors from the attribute embeddings of the same node [@luo2017learning]. In the scoring function, we use the Fact embeddings to generate the dynamic context. $$\mathbf a_P = \mathbf T_P \cdot \mathbf h_v^{(a)}; \quad \mathbf q_A = \mathbf T_A \cdot \mathbf h_v^{(a)}; \quad \mathbf w_S = \mathbf T_S \cdot \mathbf h_f$$ where $\mathbf T_P \in \mathbb{R}^{2d' \times d'}$, $\mathbf T_A \in \mathbb{R}^{d' \times d'}$ and $\mathbf T_S \in \mathbb{R}^{d' \times d'}$ are the learnable transformation matrices.

The loss function has three parts, analogous to the three scores, namely, $\mathcal{L}^{(a)}$, $\mathcal{L}^{(s)}$ and $\mathcal{L}^{(l)}$. We use weighted Binary Cross Entropy Loss to calculate each component as: $$l_s^{(t)} = w_s \mathbf y_f[s] \log{(\mathbf o_f^{(t)}[s])} + (1 - \mathbf y_f[s]) \log{(1 - \mathbf o_f^{(t)}[s])}$$ $$\mathcal{L}^{(t)} = -\frac{1}{|B|} \sum\limits_{f \in B} \sum\limits_{s \in S} l_s^{(t)}; \quad t \in \{a,s,l\}$$ where $w_s$ denotes the weight of each positive sample of class $s \in S$ and $B$ denotes a mini-batch of Facts.

The *vanilla weighting scheme* (VWS) assigns $w_s = N/f_s, \forall s \in S$, where $N$ is the total no. of training documents and $f_s$ is the no. of training documents that cite $s$. However, this scheme can lead to very large weights for the rare labels ($f_s \ll N$). To address this issue, we propose a *threshold-based weighting scheme* (TWS) defined as $w_s = \min{(f_{max} / f_s, \eta)}$ where $f_{max}$ is the frequency of the most cited label, and $\eta$ is a threshold value determined on the validation set. This scheme caps the class weights at a reasonable value, so that the model does not compromise performance on the frequent classes for minor improvements over the rare ones.
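
The two weighting schemes can be contrasted on a small table of hypothetical label frequencies (the Section names and counts below are made up for illustration):

```python
def vws(freqs, N):
    # vanilla: w_s = N / f_s -- can explode for rare labels
    return {s: N / f for s, f in freqs.items()}

def tws(freqs, eta):
    # threshold-based: w_s = min(f_max / f_s, eta)
    f_max = max(freqs.values())
    return {s: min(f_max / f, eta) for s, f in freqs.items()}

freqs = {"S302": 5000, "S376": 500, "S124A": 5}   # hypothetical counts
w_vws = vws(freqs, N=10000)   # rare label S124A gets weight 2000.0
w_tws = tws(freqs, eta=100)   # the same label is capped at eta = 100
```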

We have the final loss as $\mathcal{L} = \theta_a \mathcal{L}^{(a)} + \theta_s \mathcal{L}^{(s)} + \theta_l \mathcal{L}^{(l)}$. During test time, the structural score is not available. Thus, $\hat{\mathbf y_f} = I \left(\lambda_a \mathbf o_f^{(a)} + \lambda_l \mathbf o_f^{(l)} \geq \tau \right)$, where $\hat{\mathbf y_f} \in \{0,1\}^{|S|}$, $\hat{\mathbf y_f}[s] \in \{0,1\}$ indicates whether the model predicts Section $s$ to be relevant to Fact $f$, and $\tau$ is a threshold set using the validation set.
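
Test-time prediction thus reduces to a thresholded combination of the two available scores; for illustration (hypothetical scores and hyper-parameter values):

```python
import numpy as np

o_a = np.array([0.9, 0.2, 0.6, 0.1])   # attribute scores o_f^{(a)}
o_l = np.array([0.8, 0.3, 0.4, 0.2])   # alignment scores o_f^{(l)}
lam_a, lam_l, tau = 0.5, 0.5, 0.5      # hypothetical lambda_a, lambda_l, tau

# y_hat = I(lambda_a * o^{(a)} + lambda_l * o^{(l)} >= tau)
y_hat = ((lam_a * o_a + lam_l * o_l) >= tau).astype(int)
# combined scores are [0.85, 0.25, 0.5, 0.15] -> predictions [1, 0, 1, 0]
```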

2202.00095/main_diagram/main_diagram.drawio
ADDED
The diff for this file is too large to render. See raw diff.
2202.00095/paper_text/intro_method.md
ADDED
@@ -0,0 +1,96 @@
# Introduction

Deep neural networks (NNs) have achieved state-of-the-art performance on a wide range of machine learning tasks by automatically learning feature representations from data [@krizhevsky2012imagenet; @wang2018glue; @irvin2019chexpert; @rajpurkar2016squad; @lin2014microsoft]. However, these networks do not offer interpretable predictions in most applications and are seen as "black boxes". It is thus crucial to understand the intricacies of neural networks before they are deployed in critical applications. Previous work has made progress in understanding how a single neural network makes decisions, with axiomatic attribution methods [@sundararajan2017axiomatic; @lundberg2017unified], and in understanding how multiple neural networks relate to each other, with representation similarity measures [@mehrer2020individual].

Several similarity measures between representations have been proposed based on different principles, including linear regression [@romero2014fitnets], canonical correlation analysis (CCA; @raghu2017svcca [@morcos2018insights]), statistical shape analysis [@williams2021generalized], and functional behaviors on downstream tasks [@alain2016understanding; @feng2020transferred; @ding2021grounding]. Another mainstream approach is based on representational similarity analysis, RSA [@edelman1998representation; @kriegeskorte2008representational; @mehrer2020individual; @shahbazi2021using], which computes the similarity between (dis)similarity matrices of two neural network representations of the same dataset, such as centered kernel alignment (CKA, @kornblith2019similarity).

Despite the wide usage of RSA and CKA in understanding biological [@haxby2001distributed] and artificial neural networks [@nguyen2020wide], we find that the inter-example (dis)similarity matrices in the representation spaces of different NNs are highly correlated with a shared factor (i.e., a confounder): the (dis)similarity structure of the data items in the input space, especially for shallow layers. This confounding issue limits the ability of CKA to reveal the similarity of models on the functional level. It leads to spuriously high CKAs even between two random neural networks, and to counter-intuitive conclusions when comparing CKAs across sets of models trained on different domains, such as in the transfer learning setting [@neyshabur2020being; @kornblith2021better].

In this paper, we propose a simple approach to adjust the representation similarity by regressing out the confounder, the inter-example (dis)similarity matrix in the input space, from the (dis)similarity matrices of the two representations. This is inspired by covariate-adjusted correlation analysis, widely studied in Biostatistics [@wu2018covariate; @liu2018covariate]. The approach can be applied to any similarity measure built on the RSA framework. Moreover, we study the invariance properties of the deconfounded representation similarity and demonstrate its benefits on public image and natural language datasets with various neural network architectures.

Overall, our contributions are:

- We study the confounding effect of the input inter-example similarity on the representation similarity between two neural networks, and propose a simple and generally applicable deconfounding fix. We discuss the invariance properties of the deconfounded similarities.

- We verify that deconfounded similarities can distinguish semantically similar neural networks from random neural networks, and detect small model changes across domain tasks where previous similarity measures fail.

- We show that deconfounded similarities are more consistent with domain similarities in transfer learning, compared with existing methods.

- We demonstrate that deconfounded similarities on in-distribution datasets are more correlated with out-of-distribution accuracy than the corresponding original similarities [@ding2021grounding].

# Method

<figure id="fig:dCKA demo" data-latex-placement="t">
<img src="Figure/dCKA_demo.png" style="width:100.0%" />
<figcaption><strong>Demonstration of the confounder in CKA.</strong> CKA calculates the similarity between inter-example similarities for two representations, which are confounded by the inter-example similarities in the input space, such that input pairs with high (<span style="color: green"><span class="math inline">★</span></span>) and low (<span style="color: red"><span class="math inline">★</span></span>) input similarities also have high and low representation similarities on both random NN (<em>Left</em>) and trained NN (<em>Right</em>) representations. Moreover, the confounder leads to the counterintuitive conclusion that CKA on random NNs is higher than on pretrained and fine-tuned NNs on similar domains (0.99 vs. 0.95). This is resolved by deconfounding (0.43 vs. 0.72).</figcaption>
</figure>
|
| 27 |
+
|
| 28 |
+
We propose a simple approach to adjust the spurious similarity caused by the confounder by regressing out the input similarity structure from the representation similarity structure [@csenturk2005covariate]. That is: $$\begin{equation}
|
| 29 |
+
\begin{split}
|
| 30 |
+
&dK^{m_1}_{{f_1}}=K^{m_1}_{{f_1}}-\hat{\alpha}^{m_1}_{f_1}K^{0};\\
|
| 31 |
+
&dK^{m_2}_{{f_2}}=K^{m_2}_{{f_2}}-\hat{\alpha}^{m_2}_{f_2}K^{0},
|
| 32 |
+
\label{eq: deconfound similarity structure}
|
| 33 |
+
\end{split}
|
| 34 |
+
\end{equation}$$ where the $\hat{\alpha}^{m_1}_{f_1}$ and $\hat{\alpha}^{m_2}_{f_2}$ are the regression coefficients that minimize the Frobenius norm of $dK^{m_1}_{{f_1}}$ and $dK^{m_2}_{{f_2}}$ respectively. Furthermore, the letter $d$ is front of a similarity matrix, e.g. as in $dK^{m_1}_{{f_1}}$, denotes the deconfounded version of $K^{m_1}_{{f_1}}$, and similarly the letter $d$ is applied throughout the text to denote all defounded quantities. To do the deconfounding, we assume that the input similarity structure $K^{0}$ has a linear and additive effect on $K^{m}_{{f}}$, i.e., $$\begin{equation}
|
| 35 |
+
\text{vec}(K^{m}_{{f}})=(\alpha^{m}_{f})^T\text{vec}(K^{0})+\boldsymbol{\epsilon}^{m}_{f},
|
| 36 |
+
\label{eq: linear regression}
|
| 37 |
+
\end{equation}$$ where noise $\boldsymbol{\epsilon}^{m}_{f}$ is assumed to be independent from the confounder with $\boldsymbol{\hat{\epsilon}}^{m}_{f}=\text{vec}(dK^{m}_{{f}})$ and $$\begin{equation}
|
| 38 |
+
\hat{\alpha}^{m}_{f}=(\text{vec}(K^{0})^{T}\text{vec}(K^{0}))^{-1}\text{vec}(K^{0})^{T}\text{vec}(K^{m}_{{f}}).
|
| 39 |
+
\label{eq: linear regression coef}
|
| 40 |
+
\end{equation}$$ After the deconfounded similarity structures are obtained with Eq.[\[eq: deconfound similarity structure\]](#eq: deconfound similarity structure){reference-type="ref" reference="eq: deconfound similarity structure"}, we use the same similarity measure to calculate the deconfounded representation similarity: $$\begin{equation}
|
| 41 |
+
\label{eq: deconfound similarity}
|
| 42 |
+
ds^{m_1,m_2}_{{f_1},{f_2}} = s(dK^{m_1}_{{f_1}},dK^{m_2}_{{f_2}}).
|
| 43 |
+
\end{equation}$$
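As a minimal NumPy sketch of the deconfounding step above (the function name `deconfound` is ours, not from the paper), the intercept-free regression reduces to a single OLS coefficient:

```python
import numpy as np

def deconfound(K, K0):
    """Regress the input similarity structure K0 out of a representation
    similarity structure K; both are n x n matrices over the same examples.
    Returns the deconfounded matrix dK and the coefficient alpha-hat."""
    k, k0 = K.ravel(), K0.ravel()           # vec(K), vec(K0)
    alpha = float(k0 @ k) / float(k0 @ k0)  # (vec(K0)^T vec(K0))^{-1} vec(K0)^T vec(K)
    return K - alpha * K0, alpha
```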
|
| 44 |
+
|
| 45 |
+
Note that $dK^{m}_{{f}}$ is not always positive semi-definite, even when $K^{m}_{{f}}$ is positive semi-definite (PSD). For a similarity measure $s(\cdot,\cdot)$ that takes two kernel matrices as input, such as the CKA, we transform $dK^{m}_{{f}}$ into a positive semi-definite matrix by removing all the negative eigenvalues according to [@chan1997algorithm]. Specifically, we have the eigenvalue decomposition of $dK^{m}_{{f}}$, such that $$\begin{equation}
|
| 46 |
+
\begin{split}
|
| 47 |
+
&dK^{m}_{{f}} = Q\Lambda Q^{T}=Q(\Lambda_{+}-\Lambda_{-})Q^{T},\\
|
| 48 |
+
\Lambda_{\pm}&=\text{diag}\{\max(0,\pm\lambda_1),\ldots,\max(0,\pm\lambda_n)\},
|
| 49 |
+
\end{split}
|
| 50 |
+
\end{equation}$$ where $\lambda_i$ is the $i$th eigenvalue of $dK^{m}_{{f}}$. We approximate $dK^{m}_{{f}}$ with a PSD matrix $\Tilde{dK^{m}_{{f}}}$: $$\begin{equation}
|
| 51 |
+
dK^{m}_{{f}}\approx \Tilde{dK^{m}_{{f}}} = \rho^2Q\Lambda_{+}Q^{T};\;\;\rho=\frac{|\text{tr}(\Lambda)|}{|\text{tr}(\Lambda_{+})|}.
|
| 52 |
+
\label{eq:psd correction}
|
| 53 |
+
\end{equation}$$
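A sketch of this PSD correction (assuming a symmetric `dK` with at least one positive eigenvalue, so the rescaling factor is well defined):

```python
import numpy as np

def psd_approx(dK):
    """Approximate a symmetric, possibly indefinite matrix dK by a PSD matrix:
    drop the negative eigenvalues and rescale by
    rho = |tr(Lambda)| / |tr(Lambda_+)|."""
    w, Q = np.linalg.eigh(dK)            # eigendecomposition of symmetric dK
    w_pos = np.maximum(w, 0.0)           # Lambda_+
    rho = abs(w.sum()) / abs(w_pos.sum())
    return rho ** 2 * (Q * w_pos) @ Q.T  # rho^2 Q Lambda_+ Q^T
```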
|
| 54 |
+
|
| 55 |
+
**Deconfounded CKA.** In CKA [@kornblith2019similarity], the similarity structure in the feature space is represented with a valid kernel $l(\cdot,\cdot)$, i.e., $K^{m_1}_{{f_1}} = l(X_{f_1}^{m_1},X_{f_1}^{m_1})$ and $K^{m_2}_{{f_2}} = l(X_{f_2}^{m_2},X_{f_2}^{m_2})$, such as the linear or RBF kernel. Then an empirical estimator of HSIC [@gretton2005measuring] is used to align two kernels: $$\begin{equation}
|
| 56 |
+
\text{HSIC}^{m_1,m_2}_{f_1,f_2}=\frac{1}{(n-1)^2}\text{tr}(K^{m_1}_{{f_1}}HK^{m_2}_{{f_2}}H),
|
| 57 |
+
\label{eq:hsic}
|
| 58 |
+
\end{equation}$$ where $H$ is the centering matrix. CKA is given by the normalized HSIC such that $$\begin{equation}
|
| 59 |
+
\text{CKA}(K^{m_1}_{{f_1}},K^{m_2}_{{f_2}})=\frac{\text{HSIC}^{m_1,m_2}_{f_1,f_2}}{\sqrt{\text{HSIC}^{m_1,m_1}_{f_1,f_1}\text{HSIC}^{m_2,m_2}_{f_2,f_2}}}.
|
| 60 |
+
\label{eq:cka}
|
| 61 |
+
\end{equation}$$
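For reference, CKA on precomputed kernel matrices can be sketched as follows (a minimal NumPy implementation of the HSIC estimator and its normalization above):

```python
import numpy as np

def hsic(K, L):
    """Empirical HSIC estimator: tr(K H L H) / (n - 1)^2."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def cka(K, L):
    """CKA as normalized HSIC."""
    return hsic(K, L) / np.sqrt(hsic(K, K) * hsic(L, L))
```

Note that the normalization makes the score invariant to isotropic scaling of either kernel.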
|
| 62 |
+
|
| 63 |
+
To deconfound the representation similarity matrices $K^{m_1}_{{f_1}}$ and $K^{m_2}_{{f_2}}$, we apply the same kernel to measure the inter-example similarity in the input space $K^0=l(X,X)$, and adjust its confounding effect with Eq.[\[eq: deconfound similarity structure\]](#eq: deconfound similarity structure){reference-type="ref" reference="eq: deconfound similarity structure"}. However, the matrices $dK^{m_1}_{{f_1}}$ and $dK^{m_2}_{{f_2}}$, obtained by regressing one kernel matrix out of another, are no longer kernels and cannot be used to compute HSIC. Fortunately, with Eq.[\[eq:psd correction\]](#eq:psd correction){reference-type="ref" reference="eq:psd correction"}, we can approximate $dK^{m_1}_{{f_1}}$ and $dK^{m_2}_{{f_2}}$ with two valid kernels $\Tilde{dK^{m_1}_{{f_1}}}$ and $\Tilde{dK^{m_2}_{{f_2}}}$, which are then used to construct the deconfounded CKA (dCKA): $$\begin{equation}
|
| 64 |
+
\text{dCKA}(K^{m_1}_{{f_1}},K^{m_2}_{{f_2}})=\text{CKA}(\Tilde{dK^{m_1}_{{f_1}}},\Tilde{dK^{m_2}_{{f_2}}}).
|
| 65 |
+
\label{eq:dcka}
|
| 66 |
+
\end{equation}$$ We use a linear kernel here because @kornblith2019similarity report similar results when using an RBF kernel.
|
| 67 |
+
|
| 68 |
+
**Deconfounded RSA.** Different from CKA, the similarity structure in RSA [@mehrer2020individual] is measured by the pairwise Euclidean distance between examples in the feature space. Specifically, each element of $K^{m_1}_{f_1}$ and $K^{m_2}_{f_2}$ is obtained by $K^{m_1}_{f_1,ij}=\lVert\boldsymbol{x}^{m_1}_{f_1,i} - \boldsymbol{x}^{m_1}_{f_1,j}\rVert^2$ and $K^{m_2}_{f_2,ij}=\lVert\boldsymbol{x}^{m_2}_{f_2,i} - \boldsymbol{x}^{m_2}_{f_2,j}\rVert^2$, where $\boldsymbol{x}^{m_1}_{f_1,i}$ is the $m_1$-layer representation of the $i$th example in neural network $f_1$. Thus, the input similarity structure $K^{0}$ is measured with the pairwise Euclidean distance in the input space. After $K^{0}$ is adjusted with Eq.[\[eq: deconfound similarity structure\]](#eq: deconfound similarity structure){reference-type="ref" reference="eq: deconfound similarity structure"}, we apply Spearman's $\rho$ correlation to measure the similarity between the upper triangular part of $dK^{m_1}_{f_1}$ and $dK^{m_2}_{f_2}$, i.e., $\text{triu}(dK^{m_1}_{{f_1}})$ and $\text{triu}(dK^{m_2}_{{f_2}})$, that is $$\begin{equation}
|
| 69 |
+
\text{dRSA}(K^{m_1}_{{f_1}},K^{m_2}_{{f_2}})=\rho(\text{triu}(dK^{m_1}_{{f_1}}),\text{triu}(dK^{m_2}_{{f_2}})).
|
| 70 |
+
\label{eq:drsa}
|
| 71 |
+
\end{equation}$$ Note that rank correlation does not require two similarity matrices to be positive semi-definite. Therefore, we skip the steps of constructing the PSD approximation.
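A sketch of dRSA on already-deconfounded distance matrices (the rank-correlation helper below ignores tie handling, which a full Spearman's $\rho$ would otherwise require):

```python
import numpy as np

def spearman(a, b):
    """Spearman's rho as Pearson correlation of ranks (assumes no ties)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float(ra @ rb / np.sqrt((ra @ ra) * (rb @ rb)))

def drsa(dK1, dK2):
    """Rank correlation of the upper-triangular parts of two deconfounded
    distance matrices; no PSD correction is needed for rank correlation."""
    iu = np.triu_indices(dK1.shape[0], k=1)
    return spearman(dK1[iu], dK2[iu])
```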
|
| 72 |
+
|
| 73 |
+
<figure id="fig: optimal order" data-latex-placement="t">
|
| 74 |
+
<div class="center">
|
| 75 |
+
<img src="Figure/optimal_orders.png" />
|
| 76 |
+
</div>
|
| 77 |
+
<figcaption><strong>Log <span class="math inline"><em>R</em><sup>2</sup></span> and BIC of regression models with different orders of input similarity</strong>. We empirically observe that correcting for the confounder with a linear model is sufficient, especially for shallow layers (demonstrated with ResNet-18 on CIFAR-10).</figcaption>
|
| 78 |
+
</figure>
|
| 79 |
+
|
| 80 |
+
**Is a linear model sufficient?** In Eq.[\[eq: linear regression\]](#eq: linear regression){reference-type="ref" reference="eq: linear regression"}, we assume that the representation similarity structures depend **linearly** on the input similarity structure. To validate whether the linear assumption is sufficient, we check whether adding higher-order polynomial terms of the input similarity to the regression model (Eq.[\[eq: linear regression\]](#eq: linear regression){reference-type="ref" reference="eq: linear regression"}) helps explain the representation similarity structure.
|
| 81 |
+
|
| 82 |
+
We show the effect of adding higher-order polynomial terms on a pretrained ResNet-18 (which contains 20 layers in total) with CIFAR-10 inputs in Figure [2](#fig: optimal order){reference-type="ref" reference="fig: optimal order"}. We observe that neither the $R^2$ nor the Bayesian information criterion (BIC) [@murphy2012machine], which approximates the model evidence, changes much when adding higher-order terms. Although $R^2$ can be marginally improved by increasing the order in the deeper layers (e.g., layer 20), we only consider the linear model in this paper for simplicity.
|
| 83 |
+
|
| 84 |
+
**Does the independent noise assumption hold?** In Eq.[\[eq: linear regression\]](#eq: linear regression){reference-type="ref" reference="eq: linear regression"}, the regression targets are similarities between every pair of examples. Thus, noise $\epsilon_{f, ij}^{m}$ of example-pair $i,j$ might be correlated with $\epsilon_{f, ik}^{m}$ of example-pair $i,k$ because they both are associated with the same example $i$. However, the Durbin-Watson tests [@durbin1992testing] show that the independent noise assumption still holds (Appendix [7](#app: DW test){reference-type="ref" reference="app: DW test"}).
|
| 85 |
+
|
| 86 |
+
In this section, we study the invariance properties of the deconfounded representation similarity. For the similarity measures we studied, i.e., CKA and RSA, the corresponding deconfounded similarity indices have the same invariance properties as the original measures, such as invariance to orthogonal transformation and isotropic scaling.
|
| 87 |
+
|
| 88 |
+
::: {#proposition: orthogonal transformation invariance .proposition}
|
| 89 |
+
**Proposition 1**. *Deconfounded CKA and deconfounded RSA are invariant to orthogonal transformation if the (dis)similarity measure $k(\cdot,\cdot)$ that compares example pairs is itself invariant to orthogonal transformation.*
|
| 90 |
+
:::
|
| 91 |
+
|
| 92 |
+
::: {#proposition: isotropic scaling invariance .proposition}
|
| 93 |
+
**Proposition 2**. *Deconfounded CKA with a linear kernel and deconfounded RSA are invariant to isotropic scaling.*
|
| 94 |
+
:::
|
| 95 |
+
|
| 96 |
+
Intuitively, as long as $k(\cdot,\cdot)$ is invariant to orthogonal transformation, e.g., linear kernels and Euclidean distance, the deconfounded representation similarity matrix $dK^{m}_{f}$ in Eq.[\[eq: deconfound similarity structure\]](#eq: deconfound similarity structure){reference-type="ref" reference="eq: deconfound similarity structure"} is also invariant to orthogonal transformation, because it is defined in terms of the kernel $k$. Thus all operations on $dK^{m}_{f}$ are invariant to orthogonal transformation. Moreover, if one representation is scaled by a scalar, $dK^{m}_{f}$ and $\Tilde{dK^{m}_{f}}$ will be scaled by the same scalar, whose effects will be finally eliminated in the normalization step in CKA (Eq.[\[eq:cka\]](#eq:cka){reference-type="ref" reference="eq:cka"}) and the rank correlation step in RSA (Eq.[\[eq:drsa\]](#eq:drsa){reference-type="ref" reference="eq:drsa"}).
|
2203.10692/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2021-09-27T01:19:54.612Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.1.2 Safari/605.1.15" version="15.3.4" etag="suBY_6DWMIxPC_1rdMjg" type="google"><diagram id="DnNZd4JeXLp32xXPzYj0">7Z1Pk6I4GMY/jce1kARaj9N27+xlq6aqDztz2kpjWthBcCG2up9+gxD+JLGa1gQTh760xBDgyc/w5smLTsByc/iaoW34Z7rC8cR1VocJeJq47gy6Pv1XlBzLkrnjlAXrLFpVlZqCl+g/XBWyartohfNORZKmMYm23cIgTRIckE4ZyrJ03632lsbdo27Rujqi0xS8BCjGQrW/ohUJq6vwWrX/wNE6ZEeesevbIFa5KshDtEr3rSLwPAHLLE1J+WpzWOK4EI/pUu73+5l36xPLcEL67OCWO7yjeFdd23NCInKcJlOnaiwnR3bdWbpLVrjY1ZmAx30YEfyyRUHx7p72NC0LySamW7PiJQrCXYa/FuVPkBZs0yghOHt+p+eWV23kJEt/1iLSy3+sTghnBB/OXtSslooyhtMNJtmRVjnUepe7VHi53qLc3jed5VdVwlY/sU5BFR7ruuVGQfqiElEuKBAE3YbHPKLs/I3vQNn5x8rW6quWFgrS0o8TvexCUWCrosD5WNEHTYJ6gqD57jUnKAmw1ZSCHpQuNGnqC5oGId6Un/8Yb+gVWC0t7DMAAE3aPgjaIoprhgISpUkhq2+rrMK4uhhwXJ0LsmY4RkxTa1EVRlaJprpG1oUYBaDM7k++MKhK5NQ1qDLyW3oKMga77P2kYiETTlZfijifbgYxymn81ZWR3uQyUs0rZrDcodostvJ0lwX4G84ieqo4K7ojStb0zTl9k+66xuTMm/gQke9FN0y9ausHOyX6+ulQ9dBp48g2EirH9/bGj6aFYrPZ7bTF9islwCs2V2FonU5+0g7vy1PuDKI9Or7VsZ6kY1lZOVi8d09C1tvVEb4VxLa48rtcwdmi20TVGeVe7akM1xDkon7gcw1VHcc3REFBx1a10ycqP3/C/OwCOAuO5bLFhuxa036wzz6G/eZ8z1p0N6ybwDcY+e42dAmCoi0wIvi5u+YwcNV3XHZT9udT7zK8hHBJbEohYKJNMgJmImDgQR1gfACpFTDRLBoBMxEwqA4wwZvQCphono2AmQiYzy8GXA6Y7w4JmOgkjoCZCBgfgwFHWQwmaUohYKKdOgJmImB8DHYNYPMhARONZfMA6xp1Tk/AGqZqe+86I6OayreNDM8wDF3OyADehUaGy/PMN6QQQdGIv8P8BjBgfgNr4xdJcJBJq20hzhWd3zvMcJBJqmsdjt1x7j7FQSaqrtU4SY7TXec4SMcAXTkOkiynKLN8Jd6HXlfQAVfiJUlO95o0IpNV371KtCfuMmtkUFbFGfndpY3I9NR2o+oxAR3TRiSzUTZtak9HjZuP8gvrnqqFdThQ4ghUnDgiyToTcL854YYkjkgJrx/fGAm/GEIWF48QyiE0Bi5h2UJd6oikKYWAjelxdgAmLFuoSx3RC5joGY2AmQgYnzpyBWCC5aMVsDH5zQ7A+NSRKwDjU0f0AjYmv9kBGB+DQXWpI5KmFAI2Jr/ZARgfg10D2HxIwGxIfjMkdYS5xh0rwzUMRD55BKpKHhEaUgihaMfTvVEcR8Ht1znLkmUap9npoGC5dOjfRM2CHR+tyFZAXQktSr5BQ3Tt7zBnBw75nSSSx5HvOGdHJq2+LyUR/c709R8cWL5qx0eTA+bsQNHg24dpfMrXEaMAWwTtwag2QUVDi0ZG0RuyHFJ4LqYYQlPRw4kSepW7IipAcTWk3iZnT2to4MFbDrajsdF7WsDCqM6z8Y
ve3X2bFU5f1Qqnp2+FE47mhx3mh+CuqVvhlDSlEDAbzI8RMIm7pm6FUy9g45N/dgAGPWWACcGyVsBsePJvBIw2A5QBJkwMtAImOlpvuyyJ8pD2s9VGgcd/5AfMl2aH5lUlVIr7cgokos51iSrLV/NjUlzrFiUdPf1/d8UXfz/SKyW/oThaJxPwhdaI8Rtp3qWv1qda6LV2xcoG6bmUbZYVbO0uwAfuku7S5ZV7olm2wvlPq/EH7g31tCHZ6bLHLroruc3N/3LLRraSCw0LGSA/FV5wlPSOSPk7HeQaUhcueDb4hoZAyJztNoTQNN/Qd+C0+0wpfLjQOZQ0tdDmHXpGe4cMqtnkU5OcGuBPTKh6gMgCp853oriGgWjlaGi0v3h7CM2Bi3/+6/LsPR4vSVMKATPaXxwBO2s2XAEYb//oBUz0FzdoneA82m2snqn5nB8n/R0QR9dUTTTVftF8R+mv2ujKd2SDvJmjpVmTE9YvnZhwbtioyseEQFVMCJTFhHSz+Y3BsnrzS43g+X8=</diagram></mxfile>
|
2203.10692/main_diagram/main_diagram.pdf
ADDED
|
Binary file (11.2 kB). View file
|
|
|
2203.10692/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,77 @@
|
| 1 |
+
# Method
|
| 2 |
+
|
| 3 |
+
Coupling class-based LM (CLM) prediction with curriculum learning, HCP gradually anneals from class prediction to token prediction during LM training. In this section, we first describe how we instantiate word classes by leveraging the hypernym relation from WordNet. We then present how to incorporate the proposed Hypernym Class Prediction task into LM training via curriculum learning.
|
| 4 |
+
|
| 5 |
+
WordNet [@miller1995wordnet] is a lexical database that groups words into sets of cognitive synonyms known as synsets, which are in turn organized into a directed graph by various lexical relations including the hypernymy (*is-a*) relation. As shown in Figure [1](#fig: synset){reference-type="ref" reference="fig: synset"}, each vertex is a synset, labeled by the text within the box, and each edge points from the hypernym (supertype) to the hyponym (subtype). Note that a word form (spelling) may be associated with multiple synsets -- each corresponding to a different sense of the word, which are sorted by the frequency of the sense estimated from a sense-annotated corpus. For example, *iron* has 6 synsets, among which "iron.n.01" is the most common one.
|
| 6 |
+
|
| 7 |
+
Hence, if two words share the same hypernym at a certain level in their hypernym-paths (to the root in WordNet), we could say they are similar at that level. Here we use \"Depth\" to quantify the hypernym-path level. In Figure [1](#fig: synset){reference-type="ref" reference="fig: synset"}, for example, at Depth 6, *iron* and *magnesium* are mapped to the same group named "metallic_element.n.01", while *desk* is mapped to "instrumentality.n.03". At Depth 2, all these three words share the same (indirect) hypernym "physical_entity.n.01".
|
| 8 |
+
|
| 9 |
+
In this work, we map each token in our training set into its hypernym class if this token (1) has a noun synset in the WordNet, (2) has a hypernym-path longer than a given depth $d$, and (3) has frequency below a given threshold $f$ in the training corpus. We only consider nouns because they are not only the most common class in the WordNet but also a difficult class for LMs to learn [@lazaridou2021pitfalls]. For tokens with multiple synsets, we iterate over the synsets in order of sense frequency and stop at the first synset whose hypernym-path meets the required depth, i.e., we select the most frequent synset that is no shallower than the required depth. The mapping pseudocode is illustrated in Code [2](#code: token2class){reference-type="ref" reference="code: token2class"}, a data pre-processing algorithm conducted only once before training, which takes no more than 5 minutes in our implementation.
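The mapping can be sketched as follows. `TOY_WORDNET` is a hypothetical two-word stand-in for WordNet's noun synsets and their hypernym-paths, and the depth indexing is illustrative:

```python
# Hypothetical toy stand-in for WordNet: maps a word to its noun synsets
# (ordered by sense frequency), each given as a hypernym path from the root.
TOY_WORDNET = {
    "iron": [["entity", "physical_entity", "matter", "substance",
              "chemical_element", "metallic_element", "iron"]],
    "desk": [["entity", "physical_entity", "object", "whole",
              "artifact", "instrumentality", "furniture", "desk"]],
}

def token_to_class(token, depth, freq, max_freq):
    """Map a token to its hypernym class at the given depth, following the
    three conditions in the text; returns the token unchanged otherwise."""
    if freq >= max_freq or token not in TOY_WORDNET:
        return token                      # too frequent, or no noun synset
    for path in TOY_WORDNET[token]:       # synsets in sense-frequency order
        if len(path) > depth:             # hypernym-path longer than d
            return path[depth]            # first (most frequent) match wins
    return token
```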
|
| 10 |
+
|
| 11 |
+
We first partition the vocabulary into $\mathbf{V_{x}}$ and $\mathbf{V_{\neg x}}$ based on whether or not a token has a hypernym in the WordNet, and $\mathbf{V_{h}}$ denotes the set of all hypernyms. The original task in a Transformer-based LM is then to predict the token $w_j$'s probability with the output $\mathbf{x}$ from the last layer: $$\begin{equation}
|
| 12 |
+
\label{equ: org softmax}
|
| 13 |
+
\begin{array}{ll}
|
| 14 |
+
P(y=w_j|\mathbf{x}) = \frac{\text{exp}({\mathbf{x}^\mathsf{T}\mathbf{v}_{w_j}})}{\sum_{w_k\in\mathbf{{V_{x}}}\cup\mathbf{V_{\neg x}}} \text{exp}({\mathbf{x}^\mathsf{T}\mathbf{v}_{w_k}})}
|
| 15 |
+
\end{array}
|
| 16 |
+
\end{equation}$$ where $w_k$ is the $k$-th word in the original vocabulary and $\mathbf{v}_{w_k}$ is its embedding. Here we assume the output layer weights are tied with the input embeddings. We call any training step predicted with Eq. [\[equ: org softmax\]](#equ: org softmax){reference-type="ref" reference="equ: org softmax"} a token prediction step.
|
| 17 |
+
|
| 18 |
+
To do the Hypernym Class Prediction step, we replace all tokens in $\mathbf{V_{x}}$ in a batch of training data with their corresponding hypernym classes in $\mathbf{V_{h}}$. After the replacement, only hypernym classes in $\mathbf{V_{h}}$ and tokens in $\mathbf{V_{\neg x}}$ can be found in that batch. Then, the LM probability prediction becomes: $$\begin{equation}
|
| 19 |
+
\label{equ: hyper softmax}
|
| 20 |
+
\begin{array}{ll}
|
| 21 |
+
P(y=w_j|\mathbf{x}) = \frac{\text{exp}({\mathbf{x}^\mathsf{T}\mathbf{v}_{w_j}})}{\sum_{{w}_k\in\mathbf{{V_{h}}\cup\mathbf{V_{\neg x}}}} \text{exp}({\mathbf{x}^\mathsf{T}\mathbf{v}_{w_k}})}
|
| 22 |
+
\end{array}
|
| 23 |
+
\end{equation}$$ where $w_j$ can be either a token or a hypernym class. We call such a batch step a Hypernym Class Prediction (HCP) step.
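Both prediction steps amount to a softmax over a subset of embedding rows; a minimal sketch (the helper `softmax_over` and the index-set representation of the vocabulary partitions are ours, not from the paper):

```python
import numpy as np

def softmax_over(x, emb, idx):
    """Softmax of x^T v_w restricted to the embedding rows listed in idx:
    the full vocabulary for a token prediction step, or V_h ∪ V_{¬x} for an
    HCP step, assuming tied input/output embeddings."""
    logits = emb[idx] @ x
    z = np.exp(logits - logits.max())  # shift for numerical stability
    return z / z.sum()
```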
|
| 24 |
+
|
| 25 |
+
Note that Eq. [\[equ: hyper softmax\]](#equ: hyper softmax){reference-type="ref" reference="equ: hyper softmax"} is different from the multi-objective learning target, where the hypernym class would be predicted separately: $$\begin{equation}
|
| 26 |
+
\label{equ: multi-obj softmax}
|
| 27 |
+
\begin{array}{ll}
|
| 28 |
+
P(y=w_j|\mathbf{x}) = \frac{\text{exp}({\mathbf{x}^\mathsf{T}\mathbf{v}_{w_j}})}{\sum_{{w}_k\in\mathbf{{V_{h}}}} \text{exp}({\mathbf{x}^\mathsf{T}\mathbf{v}_{w_k}})}
|
| 29 |
+
\end{array}
|
| 30 |
+
\end{equation}$$ where $w_j$ is a hypernym class. We will elaborate on this difference in the experimental results section.
|
| 31 |
+
|
| 32 |
+
<figure id="fig: pacing function" data-latex-placement="t">
|
| 33 |
+
<embed src="fig/pacing_function.pdf" style="width:35.0%;height:30.0%" />
|
| 34 |
+
<figcaption>Probabilities of HCP step over training process with different pacing functions.</figcaption>
|
| 35 |
+
</figure>
|
| 36 |
+
|
| 37 |
+
We train an LM by switching from HCP to token prediction. For the example in Figure [1](#fig: synset){reference-type="ref" reference="fig: synset"}, our target is to teach a model to distinguish, during the earlier stage of training, whether the next token belongs to the metallic-element class or the instrumentality class, and to predict the exact word among *magnesium*, *iron*, and *desk* later.
|
| 38 |
+
|
| 39 |
+
Inspired by @bengio2009curriculum, we choose curriculum learning to achieve this. Curriculum learning usually defines a score function and a pacing function, where the score function maps a training example to a difficulty score, while the pacing function determines the amount of the easiest/hardest examples added in each epoch. We use a simple scoring function that treats HCP as an easier task than token prediction, so there is no need to sort all training examples. The pacing function determines whether the current training step is an HCP step, i.e., whether tokens will be substituted with their hypernyms.
|
| 40 |
+
|
| 41 |
+
Our pacing function can be defined as: $$\begin{equation}
|
| 42 |
+
\label{equ: constant pacing function}
|
| 43 |
+
P(y=c|t) = \left\{
|
| 44 |
+
\begin{array}{ll}
|
| 45 |
+
b & t<a*N \\
|
| 46 |
+
0 & t\ge a*N
|
| 47 |
+
\end{array}
|
| 48 |
+
\right.
|
| 49 |
+
\end{equation}$$ or $$\begin{equation}
|
| 50 |
+
\label{equ: linear pacing function}
|
| 51 |
+
P(y=c|t) = \left\{
|
| 52 |
+
\begin{array}{ll}
|
| 53 |
+
b-b*\frac{t}{a*N} & t<a*N \\
|
| 54 |
+
0 & t\ge a*N
|
| 55 |
+
\end{array}
|
| 56 |
+
\right.
|
| 57 |
+
\end{equation}$$ where $P(y=c|t)$ is the probability that the current step $t$ is a hypernym class prediction step, $N$ is the total number of training steps, and $a$ and $b$ are hyper-parameters. So, Eq. [\[equ: constant pacing function\]](#equ: constant pacing function){reference-type="ref" reference="equ: constant pacing function"} is a constant pacing function over the first $a*N$ steps, while Eq. [\[equ: linear pacing function\]](#equ: linear pacing function){reference-type="ref" reference="equ: linear pacing function"} is a linear decay function. We plot these two functions in Figure [3](#fig: pacing function){reference-type="ref" reference="fig: pacing function"}. According to our experimental results (Tab. [\[tab: pacing func \]](#tab: pacing func ){reference-type="ref" reference="tab: pacing func "}), both functions are effective in improving the language model.
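The two pacing functions translate directly to code (a minimal sketch; at each step one would draw a Bernoulli sample with this probability to decide whether the batch is an HCP step):

```python
def constant_pacing(t, N, a, b):
    """Constant pacing: probability b of an HCP step while t < a*N, else 0."""
    return b if t < a * N else 0.0

def linear_pacing(t, N, a, b):
    """Linear-decay pacing: decays from b to 0 over the first a*N steps."""
    return b - b * t / (a * N) if t < a * N else 0.0
```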
|
| 58 |
+
|
| 59 |
+
::: table*
  Model                                                                #Param.   Valid PPL   Test PPL
  ------------------------------------------------------------------- --------- ----------- -----------------------
  LSTM+Neural cache [@DBLP:conf/iclr/GraveJU17]                        \-        \-          40.8
  Transformer small                                                    91M       34.5        36.5
  \+ HCP                                                                         34.1        **35.9**
  Transformer base                                                     151M      29.2        30.7
  \+ HCP                                                                         29.1        30.2
  Transformer-XL base, M=150 [@DBLP:conf/acl/DaiYYCLS19]               151M      \-          24.0
  Segatron-XL base [@Bai_Shi_Lin_Xie_Tan_Xiong_Gao_Li_2021], M=150     151M      \-          22.5
  \+ HCP                                                                         21.9        **22.1**
  Transformer Large                                                    257M      24.0        25.8 (80k steps)
  \+ HCP                                                                         23.7        25.3 (80k steps)
  Adaptive Input [@baevski2018adaptive]                                247M      \-          18.7 (286k steps)
  Transformer-XL large, M=384 [@DBLP:conf/acl/DaiYYCLS19]              257M      \-          18.3 (400k steps)
  Compressive Transformer, M=1024 [@DBLP:conf/iclr/RaePJHL20]          257M      16.0        17.1 (400k steps)
  Segatron-XL large, M=384 [@Bai_Shi_Lin_Xie_Tan_Xiong_Gao_Li_2021]    257M      \-          17.1 (350k steps)
  \+ HCP                                                                         16.1        **17.0** (350k steps)
:::
|
2205.11152/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2205.11152/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,83 @@
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
<figure id="fig:x-cont-learn-framework" data-latex-placement="t">
|
| 4 |
+
<embed src="Images/cross-lingualcontinualdiagram.pdf" style="width:48.0%" />
|
| 5 |
+
<figcaption>An overview of CCL: We use an example of a non-stationary datastream moving from high to low resource languages. Each bold and dashed box represents either a training or test data instance being fine-tuned or evaluated on, respectively. To support this problem setup, we evaluate the cross-lingual capabilities of <em>continual approaches</em>. Those capabilities include knowledge <strong>preservation</strong> on old languages, <strong>accumulation</strong> to the current language, and <strong>generalization</strong> to unseen languages at each point of the training. In addition to that, we evaluate <strong>model utility</strong> at the end of continual learning.</figcaption>
|
| 6 |
+
</figure>
|
| 7 |
+
|
| 8 |
+
With more than 7,000 languages spoken around the globe, downstream applications still lack proper linguistic resources across languages [@statenlp-joshi-acl20], necessitating the use of *transfer learning* techniques that take advantage of data that is mismatched to the application. In an effort to limit architectural complexity and energy consumption, it is desirable to unify multi-lingual performance into a single, parameter- and memory-constrained model, and to allow this model to evolve, learning on multi-lingual training data as it becomes available without having to pre-train or fine-tune from scratch. Such is the longstanding goal of language representation learning. Existing multi-lingual representations such as [@bert-devlin-naacl19] and [@xlm-r-conneau] are strong pillars in cross-lingual transfer learning, but if care is not taken when choosing how to fine-tune them, they can neglect to maximize *transfer* [@ruder-etal-2019-transfer] to new tasks or languages and are subject to *forgetting* [@MCCLOSKEY1989109], where performance decreases after exposure to a new task or language.
|
| 9 |
+
|
| 10 |
+
Most previous work that attempts to deal with the challenge of transfer exploitation and forgetting mitigation focuses on the problem of sequentially learning over different NLP downstream tasks or domains [@sun-lamol-iclr20; @relextr-han-acl20; @madotto-contTOD-emnlp21], rather than on language shifts. Indeed, the current literature for learning over sequences of languages is rather scarce and is mostly reduced to cross-lingual transfer learning between a pair of languages [@xcontlearn_ner_pos_gem-liu-repl4nlp21; @cont_multinmt-garcia-naacl21; @muller-etal-2021-unseen; @pfeiffer-etal-2021-unks; @minixhofer-etal-2022-wechsel]. @xcontlearn_ner_pos_gem-liu-repl4nlp21 pre-train a (parent) language model and then fine-tune it on a downstream task in one of several different (child) languages. This conflates task and language transfer and confuses analysis -- the interference between the pre-trained language model 'task' and the fine-tuned task, along with the parent and child languages, cannot be disentangled. @cont_multinmt-garcia-naacl21 propose an adaptation scheme to each new language pair independently while retaining the translation quality on the parent language pairs. Similarly, @muller-etal-2021-unseen and @pfeiffer-etal-2021-unks propose lexical and semantic level techniques to adapt to target languages. However, all these mentioned works still focus on the 'one-hop' case, consisting of two steps: (1) training on initial parent language(s) (pairs), then (2) adapting to new children language(s) (pairs); the effect of multiple shifts in the datastream is not trivially generalizable to more than one hop. More recently, @pfeiffer-etal-2022-lifting propose an approach for language-specific modules based on adapters and evaluate that on sequential streams of languages. However, they only focus on adapters and two desiderata of continual learning: interference mitigation and transfer maximization. 
We need a more robust and comprehensive fine-grained evaluation that balances the dynamics between different cross-lingual continual learning desiderata.
|
| 11 |
+
|
| 12 |
+
In this paper, we pave the way for a more comprehensive multi-hop continual learning evaluation that simulates the sequential learning of a single task over a stream of input from different languages. This evaluation paradigm requires experimentation over *balanced streams* of $n$ data scenarios for $n > 2$. Unlike previous work, this paper concretely defines the following comprehensive goals along with their evaluation metrics as guidelines for analyzing the cross-lingual capabilities of multilingual sequential training: knowledge preservation, accumulation, generalization, and model utility as shown in Figure [1](#fig:x-cont-learn-framework){reference-type="ref" reference="fig:x-cont-learn-framework"}. We apply our test bed to a six-language task-oriented dialogue benchmark and comprehensively analyze a wide variety of successful continual learning algorithms. These algorithms are derived from previous literature investigated in continual learning contexts different from the cross-lingual context, including (a) model-expansion [@madx-pfeiffer-emnlp20], (b) regularization [@ewc-kirkpatrick-nas17], (c) memory replay [@er-chaudhry-arxiv19], and (d) distillation-based approaches [@hinton-kdlogit-arxiv15; @aguilar-kdrep-aaai20]. Our findings confirm the need for a multi-hop analysis and the effectiveness of continual learning algorithms in enhancing knowledge preservation and accumulation of our multilingual language model. We additionally demonstrate the robustness of different continual learning approaches to variations in individual data setup choices that would be misleading if presented in a traditional manner.
|
| 13 |
+
|
| 14 |
+
Our **main contributions** are:
|
| 15 |
+
|
| 16 |
+
::: enumerate*
|
| 17 |
+
We are the first to explore and analyze cross-lingual continual fine-tuning[^1] across multiple hops and show the importance of this multi-hop analysis in reaching clearer conclusions with greater confidence compared to conventional cross-lingual transfer learning (§[4.1](#sec:q6-multi-step-cross-cont-learn){reference-type="ref" reference="sec:q6-multi-step-cross-cont-learn"}).
|
| 18 |
+
|
| 19 |
+
We demonstrate the aggregated effectiveness of a range of different continual learning approaches (Figure [1](#fig:x-cont-learn-framework){reference-type="ref" reference="fig:x-cont-learn-framework"}) at reducing forgetting and improving transfer (§[4.3](#sec:q3-effectiveness-cross-cont-learn){reference-type="ref" reference="sec:q3-effectiveness-cross-cont-learn"}) compared to multilingual sequential baselines (§[4.2](#sec:q1-catastrophic-forgetting-generalization){reference-type="ref" reference="sec:q1-catastrophic-forgetting-generalization"}).
|
| 20 |
+
|
| 21 |
+
We make concrete recommendations on model design to balance transfer and final model performance with forgetting (§[4.3](#sec:q3-effectiveness-cross-cont-learn){reference-type="ref" reference="sec:q3-effectiveness-cross-cont-learn"}).
|
| 22 |
+
|
| 23 |
+
We show that the order of languages and data set size impacts the knowledge preservation and accumulation of multi-lingual sequential fine-tuning and identify the continual learning approaches that are most robust to this variation (§[4.4](#sec:q2-analysis-across-lang-perm){reference-type="ref" reference="sec:q2-analysis-across-lang-perm"}).
|
| 24 |
+
|
| 25 |
+
We analyze zero-shot generalization trends and their correlation with forgetting and show that current continual learning approaches do not substantially improve the generalization (§[4.5](#sec:q5-zero-shot-generalization){reference-type="ref" reference="sec:q5-zero-shot-generalization"}).
|
| 26 |
+
:::
|
| 27 |
+
|
| 28 |
+
In this section, we formally define cross-lingual continual learning, describe its goals and challenges, and introduce the downstream tasks, datastreams, and evaluation protocols used. Although we are not the first to define or investigate continual learning for languages, we are, to the best of our knowledge, the first to define and study cross-lingual continual learning where continual learning is focused on languages only. Thus, we formally define cross-lingual continual learning as learning over a set of languages seen sequentially in multiple hops, which is truer to the terms 'cross-lingual' and 'continual learning', respectively. We distinguish that from 'cross-lingual cross-task cross-stage continual learning', which continually learns over a set of pretraining and downstream tasks sampled from different languages [@xcontlearn_ner_pos_gem-liu-repl4nlp21], and 'cross-lingual one-hop transfer learning' [@cont_multinmt-garcia-naacl21].
|
| 29 |
+
|
| 30 |
+
# Method
|
| 31 |
+
|
| 32 |
+
We define cross-lingual continual learning as the problem of sequentially fine-tuning a model $\theta$ for a particular downstream task over a cross-lingual datastream. In this case, a cross-lingual data *stream* is made of $N$ labeled and distinct datasets $\datalangs{N}$, each one sampled from a distinct language and consisting of separate train and test portions. Let *$hop_i$* be the stage in cross-lingual continual learning where $\theta_i$ is optimized to $\theta_{i+1}$ via exposure to $\datalang{i}$. Let $\alllang = \langallsetelongated$ be a set of labeled *languages*, let $\permute(\alllang)$ be the set of all *permutations* of $\alllang$, and without loss of generality let $p \in \permute(\alllang)$ be one such permutation and $p[i] \in \alllang$ be the $i$th language in $p$. The language of $\datalang{i}$ is $p[i]$. Therefore, by default, the number of languages used is equal to the number of datasets. Let $\datalang{<i}$ and $\datalang{>i}$ refer to a sequence of datasets (train or test portions, depending on context) used in hops from 1 to $i-2$ and $i$ to $N-1$, respectively; we generalize these terms to $\datalang{{\leq}i}$ and $\datalang{{\geq}i}$ by including hop $i-1$ as well at the end or, respectively, beginning of the sequence.
We define the goals,[^2] which are not necessarily dependent on each other, for our study of cross-lingual continual learning as follows (also depicted in Figure [1](#fig:x-cont-learn-framework){reference-type="ref" reference="fig:x-cont-learn-framework"}):
- *Cross-lingual preservation.* This is the ability to retain previous knowledge of seen languages.
- *Cross-lingual accumulation.* This is the ability to accumulate knowledge learned from previous languages to benefit learning on the current language.
- *Cross-lingual generalization.* This is the ability to generalize uniformly well to unseen languages, which goes beyond accumulating knowledge up to the current language.
- *Model utility.* This is the ability of the fully trained model to perform equally well in all languages.
In this paper, we wish to understand the relationships between these goals. Our aim is to arrive at a recipe for more systematic cross-lingual continual learning. Thus, we need to understand whether the goals are aligned with each other or whether maximizing some goals comes at the expense of others.
Learning sequentially from a non-stationary data distribution (i.e., task datasets coming from different languages) can impose considerable challenges on the goals defined earlier:
- *Catastrophic forgetting.* This happens when fine-tuning a model on $\datalang{\geq i}$ leads to a decrease in the performance on $\datalang{<i}$.
- *Negative transfer.* This happens when fine-tuning a model on $\datalang{\leq i}$ leads to lower performance on $\datalang{i}$ than training on it alone.
- *Low zero-shot transfer.* This happens when fine-tuning on $\datalang{\leq i}$ gives lower performance than a random baseline on unseen $\datalang{>i}$.
- *Low final performance.* This is when fine-tuning on all of $\datalang{\leq N}$ gives uneven performance across languages when tested on $\datalang{\leq N}$ at the end of training.
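
These failure modes can be quantified from a matrix of per-hop, per-language test scores; the sketch below uses made-up numbers and our own summary statistics, not the paper's exact metrics:

```python
import numpy as np

# R[i, j]: hypothetical test score on language j after hop i+1
# (rows: hops, columns: languages; numbers are illustrative only).
R = np.array([
    [80.0, 40.0, 35.0],   # after hop 1 (trained on language 0)
    [70.0, 85.0, 50.0],   # after hop 2
    [65.0, 75.0, 90.0],   # after hop 3
])
N = R.shape[1]

# Forgetting: how far the final model falls below each earlier
# language's best score over the course of training.
forgetting = np.mean([R[:, j].max() - R[-1, j] for j in range(N - 1)])

# Unevenness of final performance across all languages.
spread = R[-1].max() - R[-1].min()
```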
Here, we describe the downstream tasks and multi-lingual sequential datastreams used.
We choose task-oriented dialogue parsing as a use case and consider the multi-lingual task-oriented parsing (MTOP) benchmark [@mtop-li-eacl21]. Task-oriented dialogue parsing provides a rich testbed for analysis, as it encompasses two subtasks: *intent classification* and *slot filling*, thus allowing us to test different task capabilities in cross-lingual continual learning.
For a set of $N$ languages $\alllang$, our study considers a permutation subset $P \subset \permute(\alllang)$ with the following properties:[^3]
- $|P| = |\alllang| = N$, i.e., $P$ consists of $N$ permutations, each of which is a sequence of $N$ datasets, one in each of the $N$ languages in $\alllang$.
- $\forall \lng \in \alllang$, $\forall j \in 1 \ldots N$, there exists some $p \in P$ such that $p[j]=\lng$.
- $\in P$, the permutation ordered from the most high-resource to the most low-resource fine-tuning datasets, based on the size of each training split.
- $\in P$, the reverse of the preceding permutation.
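
A permutation subset with the first two properties can be constructed from cyclic rotations of the language list (a sketch with MTOP's language codes, ordered high- to low-resource per Table 1; note the paper's $P$ additionally includes the reversed order, which a pure rotation set does not contain):

```python
# MTOP languages ordered from most to least training data.
langs = ["EN", "DE", "FR", "HI", "ES", "TH"]
N = len(langs)

# N cyclic rotations form a Latin square: every language occurs at
# every hop position in exactly one permutation.
P = [langs[i:] + langs[:i] for i in range(N)]

# Coverage check: for each position j and language l, some p in P
# satisfies p[j] == l.
for j in range(N):
    assert {p[j] for p in P} == set(langs)

high_to_low = P[0]
low_to_high = list(reversed(high_to_low))  # the reversed order above
```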
In our experiments, we use MTOP [@mtop-li-eacl21], a multi-lingual task-oriented dialogue dataset that covers six typologically diverse languages and spans 11 domains and 117 intents. We choose MTOP since it is the largest-scale dataset available for task-oriented dialogue, and because it covers languages with varying amounts of available data resources. We use only the flat representation of slots (without nesting) to simplify our evaluation. We use the original data for most experiments. Table [1](#tab:data-mtop){reference-type="ref" reference="tab:data-mtop"} shows a summary of the number of sentences (dialogue utterances) per language and split.
::: {#tab:data-mtop}
  **Lang**   **ISO**   **Train**   **Dev**   **Test**
  ---------- --------- ----------- --------- ----------
  English    EN        15,667      2,235     4,386
  German     DE        13,424      1,815     3,549
  French     FR        11,814      1,577     3,193
  Hindi      HI        11,330      2,012     2,789
  Spanish    ES        10,934      1,527     2,998
  Thai       TH        10,759      1,671     2,765

  : Number of sentences in MTOP per language and split.
:::

2206.04590/main_diagram/main_diagram.drawio
ADDED
2206.04590/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,92 @@
# Introduction
Attending to regions or objects in our perceptual field implies an interest in acting towards them. Humans convey their attention by fixating their eyes upon those regions. By modeling fixation, we gain an understanding of the events that attract attention. These attractors are represented in the form of a *Fixation Density Map* (FDM), displaying blurred peaks on a two-dimensional map, centered on the eye *Fixation Point* (FP) of each individual viewing a frame. The FDM is a visual representation of saliency, a useful indicator of what attracts human attention.
Early computational research focused on bottom-up saliency, by which the conspicuity of regions in the visual field was purely dependent on the stimuli [@itti2001computational; @bruce2009saliency]. On the other hand, task-driven approaches are top-down models utilizing supervised learning for performing tasks and allocating attention to regions or objects of interest. Combining face detections with low-level features has been shown to outperform bottom-up saliency models agnostic to social entities in a scene. Birmingham *et al.* corroborate the advantage of facial features in modeling saliency. They establish that when social stimuli are present, humans tend to fixate on facial features, a phenomenon weakly portrayed by bottom-up saliency detectors. Moreover, studies on human eye movements indicate that bottom-up guidance is not strongly correlated with fixation, which is rather influenced by the task [@foulsham2011modeling]. The existence of social stimuli in a scene alters fixation patterns, supporting the notion that even with the lack of an explicit task, we form intrinsic goals for guiding our gaze.
Although facial features attract attention, studies show that humans tend to follow the gaze of observed individuals [@bylinskii2016should]. Additionally, psychological studies [@pritsch2017perception] indicate a preference in attending towards emotionally salient stimuli over neutral expressions, a phenomenon described as *affect-biased attention*. By augmenting saliency maps with emotion intensities, affect-biased saliency models show significant improvement over affect-agnostic models [@fan2018emotional; @cordel2019emotion]. These approaches, although exclusive to static saliency models, are not limited to facial expressions, allowing for a greater domain coverage irrespective of the presence of social entities in a scene.
In light of the social stimuli relevance to modeling attention, we design a model to predict the FDM of multiple human observers watching social videos as shown in [1](#fig:samples){reference-type="ref+label" reference="fig:samples"}. Such models employ top-down and bottom-up strategies operating on a sequence of images, a task referred to as *dynamic saliency prediction* [@bak2017spatio; @borji2019saliency]. Our model utilizes multiple social cue detectors, namely gaze following and direction estimation, as well as facial expression recognition. We integrate the eye gaze and affective social cues, each with its spatiotemporal representation, as input to our saliency prediction model. We describe the resulting output from each social cue detector as a *feature map* (FM). We also introduce a novel FM weighting module, assigning different intensities to each FM in a competitive manner representing its priority. Each representation is best described as a *priority map* (PM), combining top-down and bottom-up features to prioritize regions that are most likely to be attended. We refer to the final model output as the Predicted FDM (PFDM).
<figure id="fig:full_model" data-latex-placement="ht">
<embed src="src/figs/gasp.pdf" style="width:100.0%" />
<figcaption>Overview of our sequential two-stage model. SCD (left) extracts and transforms social cue features to spatiotemporal representations. GASP (right) acquires the representations and integrates features from the different modalities.</figcaption>
</figure>
Our work is motivated by the following findings: **1)** Task-driven strategies are pertinent to predicting saliency [@foulsham2011modeling]; **2)** Changes in motion contribute to the relevance of an object, underlining the importance of spatiotemporal features for predicting saliency [@min2020multimodal]; **3)** Psychological studies indicate that attention is driven by social stimuli [@salley2016conceptualizing]. To address the first finding, we state that our approach is task-driven by virtue of supervision since the objective is predicated on modeling multiple observer fixations. Although the datasets employed in this study were collected under a free-viewing condition, the top-down property is arguably maintained due to the intrinsic goals of the observer. These goals are driven by socially relevant stimuli addressed in our model through its reliance on multiple social cue modalities and facial information. We detect the social and facial features in a separate stage, hereafter described as the *Social Cue Detection* (SCD) stage.
To address the second finding, our model learns temporal features in two stages. Sequential learning in SCD is not a necessity but a result of the models employed for social cue detection, e.g., recurrent models or models pre-trained on optical flow tasks. In the second stage (GASP), we integrate social cues as illustrated in [2](#fig:full_model){reference-type="ref+label" reference="fig:full_model"}. GASP also employs sequential learning, not only registering environmental changes such as color and intensity but also features pertaining to motion.
Finally, we consider social attention by employing an audiovisual saliency prediction modality, as well as social cue detectors (also described as modalities) that specialize in performing distinct tasks. Each of these tasks is highly relevant to visual attention, both from a behavioral and a computational perspective. We aim to explore feature integration approaches for combining social cues. We present gated attention variants and introduce a novel approach for directing attention to all modalities. To the best of our knowledge, our model is the first to consider affect-biased attention by using facial expression representations for dynamic saliency prediction based on deep neural maps.
In the first stage (SCD), we extract high-level features from three social cue detectors and an audiovisual saliency predictor. We utilize the S$^{\text{3}}$FD face detector [@zhang2017sfd] for acquiring the face locations of actors in an image. The cropped face images are passed to the social cue detectors as input. The window size $W$, i.e., the number of frames fed simultaneously as input to each model, varies according to the requirements of each model. We generate modality representations at output timestep $T'$ for each social video in AVE [@tavakoli2020deep] as shown in [\[alg:stage1\]](#alg:stage1){reference-type="ref+label" reference="alg:stage1"}.
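
The per-modality buffering this stage performs can be illustrated with a toy sliding window (window sizes echo the parameters above; the frame stream and the `stream_windows` helper are our own illustration):

```python
from collections import deque

# Toy sliding-window buffering per modality: keep the last W frames
# and emit a window once it is full. W values echo the algorithm's
# parameters (FER operates per frame, so it needs no window).
W = {"SP": 15, "GE": 7, "GF": 5}

def stream_windows(frames, w):
    win = deque(maxlen=w)
    for frm in frames:
        win.append(frm)
        if len(win) == w:
            yield list(win)

# 20 synthetic frames fed to the gaze-following (GF) window.
gf_windows = list(stream_windows(range(20), W["GF"]))
```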
:::: algorithm
**Input**:\
Video and audio frames sampled from $\texttt{ds}$ = AVE dataset\
**Parameters**:\
Window sizes $\texttt{W}_{\texttt{SP}}=15, \texttt{W}_{\texttt{GE}}=7, \texttt{W}_{\texttt{GF}}=5, \texttt{W}_{\texttt{FER}}=0$\
O/P steps $\texttt{T'}_{\texttt{SP}}=15, \texttt{T'}_{\texttt{GE}}=4, \texttt{T'}_{\texttt{GF}}=0, \texttt{T'}_{\texttt{FER}}=0$\
**Output**:\
Modality windows $\texttt{mdl}_{\texttt{win}}$\
O/P buffers $\texttt{buf}_{\texttt{mdl}}$

::: algorithmic
Let $\texttt{t}=0$ Let $\texttt{fcs}$ = face crops & bounding boxes in $\texttt{frm}$ Let $\Delta = \texttt{W}_{\texttt{mdl}} - \texttt{t}$ Let $\texttt{mdl}_{\texttt{win}}[\delta] =\:<\!\texttt{frm}, \texttt{fcs}\!>$ Shift $\texttt{mdl}_{\texttt{win}}$ by 1 to the left Let $\texttt{mdl}_{\texttt{win}}[\texttt{W}_{\texttt{mdl}}] =\:<\!\texttt{frm},\texttt{fcs}\!>$ Execute $\texttt{buf}_{\texttt{mdl}}[\texttt{t}] = \texttt{mdl}(\texttt{mdl}_{\texttt{win}})[\texttt{T'}_{\texttt{mdl}}]$ Propagate $\texttt{buf}[\texttt{t}]$ to GASP Let $\texttt{t}=\texttt{t}+1$
:::
::::
Our motivation behind selecting social cue detectors lies in their ability to generalize to various "in-the-wild" settings, regardless of the surrounding environment or lighting conditions. All chosen models were trained on datasets containing social entities captured from different angles and distances.
VideoGaze [@recasens2017following] receives the source image frame containing the gazer, the target frame into which the gazer looks, and a face crop of the gazer in the source frame along with the head and eye positions as input to its pathways. All frames are resized to 227 $\times$ 227 pixels. The model acquires five consecutive frames ($W_{GF}$) at timestep $T'_{GF}$ and returns a fixation heatmap of the most probable target frame for every detected face in a source frame.
**Representation** The mean fixation heatmaps resulting from each face in the source frame are overlaid on a single feature map in the corresponding target frame timestep. We transform the fixation heatmaps using a jet colormap.
Gaze360 [@kellnhofer2019gaze360] estimates the 3D gaze direction of an actor. The model receives face crops of the same actor over a predefined period, covering seven frames ($W_{GE}$) centered around timestep $T'_{GE}$. Each crop is resized to 224 $\times$ 224 pixels. The model predicts the azimuth and pitch of the eyes and head along with a confidence score.
**Representation** We generate cones and position their tips on detected face centroids. The cones are placed upon a zero-valued map with identical dimensions to the input image. The cone base is rotated towards the direction of gaze. The apex angle of the cone is set to $60^{\circ}$. The face furthest from the lens is projected first with an opacity of 0.5, followed by the remaining faces ordered by their distances to the lens. A jet colormap is then applied to the cone map.
We employ the facial expression recognition model developed by Siqueira *et al.* The model is composed of convolutional layers shared across $9$ ensembles. The model receives all face crops in a frame as input, each resized to 96 $\times$ 96 pixels, and recognizes facial expressions from 8 categories. Since the model operates on static images, we set the window size $W_{FER}$ and output timestep $T'_{FER}$ to 0.
**Representation** Grad-CAM [@selvaraju2017grad] features are extracted from all $9$ ensembles. We take the mean of the features for all faces in the image and apply a jet colormap transformation on them. A 2D Hanning filter is applied to the features to mitigate artifacts resulting from the edges of the cropped Grad-CAM representations. We center the filtered representations on the face positions upon a zero-valued map with dimensions identical to the input image.
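
The Hanning filtering step amounts to multiplying a separable 2D Hann window into each cropped map (our own illustration of the standard technique, not the authors' code):

```python
import numpy as np

# Separable 2D Hann window: attenuates crop borders toward zero,
# which suppresses edge artifacts of cropped Grad-CAM maps.
def hann2d(h, w):
    return np.outer(np.hanning(h), np.hanning(w))

crop = np.ones((96, 96))           # stand-in for a cropped Grad-CAM map
filtered = crop * hann2d(96, 96)   # borders -> 0, center nearly unchanged
```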
In the SCD stage, we utilize DAVE [@tavakoli2020deep] for predicting saliency based on visual and auditory stimuli. Separate streams for encoding the two modalities are built using a 3D-ResNet with 18 layers. The visual stream acquires 16 images ($W_{SP}$), each resized to 256 $\times$ 320 pixels. The auditory stream acquires log Mel-spectrograms of the video-corresponding audio frames, re-sampled to 16kHz. The model produces an FDM at the final output timestep $T'_{SP}$ considering all preceding frames within the window $W_{SP}$.
**Representation** We transform the resulting FDM from DAVE using a jet colormap.
We standardize all SCD features to a mean of 0 and a standard deviation of 1. The input image (IMG) and FMs are resized to 120 $\times$ 120 pixels before propagation to GASP.
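
The standardization is a per-map z-score (a minimal sketch; the subsequent resize to 120 $\times$ 120 would be done with any image library):

```python
import numpy as np

# Per-feature-map standardization: zero mean, unit standard deviation.
def standardize(fm, eps=1e-8):
    return (fm - fm.mean()) / (fm.std() + eps)

fm = np.random.default_rng(0).random((240, 240))
z = standardize(fm)
```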
The Squeeze-and-Excitation (SE) [@hu2018squeeze] layer extracts channel-wise interactions, applying a gating mechanism that weights convolutional channels according to their informative features. The SE model, however, emphasizes the modality representations with the most significant gain while suppressing channels with lower information content. For our purpose, it is reasonable to postulate that the most influential FM channels are those belonging to the SP, since it yields the least erroneous representation in comparison to the ground-truth FDM. However, this causes the social cue modalities to have a minimal effect, mainly due to their low correlation with the FDM as opposed to the SP.
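
For concreteness, here is a minimal squeeze-and-excitation gate over toy channels (a sketch of the generic SE mechanism from the cited work, not GASP's exact layer; shapes and random weights are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def se_gate(x, W1, W2):
    s = x.mean(axis=(1, 2))                    # squeeze: per-channel average
    e = sigmoid(np.maximum(s @ W1, 0) @ W2)    # excitation: gate in (0, 1)
    return x * e[:, None, None]                # re-weight channels

C, r = 8, 2                                    # channels, reduction ratio
x = rng.normal(size=(C, 4, 4))
W1 = rng.normal(size=(C, C // r))
W2 = rng.normal(size=(C // r, C))
y = se_gate(x, W1, W2)
```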
To counter bias towards the SP, we intensify non-salient regions such that the model learns to assign greater weights to modalities contributing least to the prediction. Alpay *et al.* propose a language model for skipping and preserving activations according to how surprising a word is, given its context in a sequence. In this work, we treat channel regions with an unexpected contribution to the saliency model as analogous to such surprising words.
We construct a model for emphasizing unexpected features using two streams as shown in [3](#fig:dam){reference-type="ref+label" reference="fig:dam"}: **1)** The inverted stream with output heads; **2)** The direct stream attached to the modality encoders of our GASP model. The inverted stream is composed of an SE layer followed by a 2D convolutional layer with a kernel size of 3 $\times$ 3, a padding of 1, and 32 channels. A *max pooling* layer with a window size of 2 $\times$ 2 reduces the feature map dimensions by half. Finally, a 1 $\times$ 1 convolution is applied to the pooled features, reducing the feature maps to a single channel. To emphasize weak features, we invert the input channels: $$\begin{equation}
\label{eq:surprisal}
\mathbf{u}^{-1}_{c'} = \log \left( \frac{1}{Softmax(\mathbf{u}_{c'})} \right) = -\log[Softmax(\mathbf{u}_{c'})]
\end{equation}$$ where $\mathbf{u}_{c'}$ represents the individual channels of all modalities. The spatially inverted channels $\mathbf{u}^{-1}_{c'}$ are standardized and propagated as input features to the inverted stream. The direct stream is an SE layer with its parameters tied to the inverted stream and receives the standardized FM channels $\mathbf{u}_{c'}$ as input. Finally, the direct stream propagates the channel parameters multiplied with each FM to the modality encoders of GASP. The resulting weighted map is the priority map (PM).
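
The inversion is an elementwise negative log-softmax; a toy numeric check (channel values are illustrative):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Strongly activated entries become small and weak entries become
# large, emphasizing "unexpected" regions, as described above.
u = np.array([2.0, 0.5, -1.0])     # toy channel values
u_inv = -np.log(softmax(u))
```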
<figure id="fig:dam" data-latex-placement="t">
<embed src="src/figs/dam.pdf" style="width:50.0%" />
<figcaption>The direct (left) and inverted (right) streams of our Directed Attention Module (DAM). The parameters of the direct stream are frozen and tied to the inverted stream as indicated by the dashed borders.</figcaption>
</figure>
<figure id="fig:combchart" data-latex-placement="ht">
<p><span><embed src="src/figs/contribfigs/static_models.pdf" style="width:47.5%" /> <span id="cat" data-label="cat"></span></span> <span><embed src="src/figs/contribfigs/sequential_models.pdf" style="width:49.5%" /> <span id="bat" data-label="bat"></span></span></p>
<figcaption>Aggregated modality weights of static (left) and sequential (right) fusion methods. Context sizes are shown within parentheses.</figcaption>
</figure>
The modality encoder is a convolutional model used for extracting visual features from the priority maps. The first two layers of the encoder have 32 and 64 channels respectively. A maximum pooling layer reduces the input feature map to half its size. The pooled layer is followed by two layers with 128 channels each. Finally, the representations are decoded by applying transposed convolutions with 128, 64, and 32 channels. The last layer has a number of channels equivalent to the input channels. All convolutional kernels have a size of 3 $\times$ 3, with a padding of 1. For GASP model variants operating on single frames (static integration variants), all modalities share the same encoder. For sequential integration variants, each modality has a separate encoder shared across timesteps.
Concatenating the modality representations could lead to successful integration. Such a form of integration is commonly used in multimodal neural models, including audiovisual saliency predictors. We describe such approaches as non-fusion models, whereby the contribution of each modality is unknown. To account for all modalities, we employ the Gated Multimodal Unit (GMU) [@arevalo2020gated]. The GMU learns to weigh the input features based on a gating mechanism. To preserve the spatial features of the input, the authors introduce a convolutional variant of the GMU. This model, however, disregards the previous context since it does not integrate features sequentially. Therefore, we extend the convolutional GMU with recurrent units and express it as follows: $$\begin{equation}
\label{eq:rgmu}
\begin{array}{l}
\mathbf{h}_{t}^{(m)} = \tanh{(\mathbf{W}_x^{(m)} * \mathbf{x}_{t}^{(m)} + \mathbf{U}_h^{(m)} * \mathbf{h}_{t-1}^{(m)} + b_h^{(m)})} \\
\mathbf{z}_{t}^{(m)} = \sigma{(\mathbf{W}_z^{(m)} * [\mathbf{x}_{t}^{(1)}, ..., \mathbf{x}_{t}^{(M)}] + \mathbf{U}_z^{(m)} * \mathbf{z}_{t-1}^{(m)} + b_z^{(m)})} \\
\mathbf{h}_{t} = \sum_{m=1}^{M} \mathbf{z}_t^{(m)} \odot \mathbf{h}_t^{(m)}
\end{array}
\end{equation}$$ where $\mathbf{h}_{t}^{(m)}$ is the hidden representation of modality $m$ at timestep $t$. Similarly, $\mathbf{z}_{t}^{(m)}$ indicates the gated representation. The total number of modalities is represented by $M$. The parameters of the Recurrent Gated Multimodal Unit (RGMU) are denoted by $\mathbf{W}_x^{(m)}$, $\mathbf{W}_z^{(m)}$, $\mathbf{U}_h^{(m)}$, and $\mathbf{U}_z^{(m)}$. The modality inputs $\mathbf{x}^{(m)}$ at timestep $t$ are concatenated channel-wise as indicated by the $[\mathbf{\cdot}{,}\mathbf{\cdot}]$ operator and convolved with $\mathbf{W}_z^{(m)}$. The $\mathbf{z}_t^{(m)}$ representation is acquired by summing the current and previous timestep representations, along with the bias term $b_z^{(m)}$. A sigmoid activation function denoted by $\sigma$ is applied to the recurrent representations $\mathbf{z}_t$. The final feature map $\mathbf{h}_t$ is the Hadamard product between $\mathbf{z}_t^{(m)}$ and $\mathbf{h}_t^{(m)}$ summed over all modalities.
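
To make the gating concrete, here is the original two-modality, dense GMU in miniature (non-convolutional and non-recurrent, so a simplification of the RGMU above; weights and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gmu_fuse(x1, x2, Wh1, Wh2, Wz):
    # Per-modality hidden representations, bounded by tanh.
    h1, h2 = np.tanh(x1 @ Wh1), np.tanh(x2 @ Wh2)
    # Gate computed from both inputs; convex combination of h1 and h2.
    z = sigmoid(np.concatenate([x1, x2]) @ Wz)
    return z * h1 + (1.0 - z) * h2

d = 4
x1, x2 = rng.normal(size=d), rng.normal(size=d)
Wh1, Wh2 = rng.normal(size=(d, d)), rng.normal(size=(d, d))
Wz = rng.normal(size=(2 * d, d))
h = gmu_fuse(x1, x2, Wh1, Wh2, Wz)
```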
The aforementioned recurrent approach suffers from vanishing gradients as the context becomes longer. To remedy this effect, we propose the integration of the GMU with the convolutional Attentive Long Short-Term Memory (ALSTM) [@cornia2018predicting]. ALSTM applies soft-attention to single-timestep input features over multiple iterations. We utilize ALSTM for our static GASP integration variants. For sequential variants, we modify ALSTM to acquire frames at all timesteps instead of attending to a single frame multiple times: $$\begin{equation}
\label{eq:alstm_atten}
\mathbf{x}_t = Softmax(\mathbf{z}_{t-1}) \odot \mathbf{x}_t
\end{equation}$$ where $\mathbf{z}_{t-1}$ represents the pre-attentive output of the previous timestep. We adapt the sequential ALSTM to operate in conjunction with the GMU by performing the gated fusion per timestep. We refer to this model as the Attentive Recurrent Gated Multimodal Unit (ARGMU). Alternatively, we perform the gated integration after concatenating the input channels and propagating them to the sequential ALSTM. Since the modality representations are no longer separable, we describe this variant as the Late ARGMU (LARGMU). We refer to the total number of timesteps as the context size. Analogous to the sequential variants, we create similar gating mechanisms for static integration approaches. Replacing the sequential ALSTM with the ALSTM by Cornia *et al.*, we present the non-sequential Attentive Gated Multimodal Unit (AGMU), as well as the Late AGMU (LAGMU).
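
The sequential attentive re-weighting can be mimicked with a toy recurrence (a 1-D stand-in; the `tanh` update below is a placeholder for the actual ALSTM state, not the model's update):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# The previous timestep's pre-attentive output re-weights the current
# input, per the attention equation above.
def attend_sequence(xs):
    z_prev = np.zeros_like(xs[0])      # initial pre-attentive output
    outs = []
    for x in xs:
        x_att = softmax(z_prev) * x    # x_t = Softmax(z_{t-1}) . x_t
        z_prev = np.tanh(x_att)        # placeholder recurrent update
        outs.append(x_att)
    return outs

xs = [np.array([1.0, 2.0, 3.0]) for _ in range(4)]
outs = attend_sequence(xs)
```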
|
2207.07110/main_diagram/main_diagram.drawio
ADDED
2207.07110/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,322 @@
# Introduction
Deep neural networks (DNNs) can be trained to solve visual recognition tasks with large annotated datasets.
In contrast, training DNNs for few-shot recognition, and its fine-grained variant, where only a few examples are provided for each class by way of supervision at test-time, is challenging.
Fundamentally, the issue is that a few shots of data are often inadequate for learning an object model across all of its myriad variations that do not affect the object's category.
For our solution, we propose to draw upon two key observations from the literature.
- [(A)] There are specific locations bearing distinctive patterns/signatures in the feature space of a convolutional neural network (CNN), which correspond to salient visual characteristics of an image instance.
- [(B)] Attention on only a few specific locations in the feature space leads to good recognition accuracy.
\noindent {\bf How can we leverage these observations?}\\
{\it Duplication of Traits.} We posit that the visual characteristics found in one instance of an object are widely duplicated among other instances, and even among those belonging to other classes. It follows from our proposition that it is the particular collection of visual characteristics arranged in a specific geometric pattern that uniquely determines an object belonging to a particular class.
Juxtaposing these assumptions with (A) and (B) implies that these shared visual traits can be found in the feature maps of CNNs and that only a few locations on the feature map suffice for object recognition. CNN features are important, as they distil essential information and suppress redundant or noisy information.
\noindent {\it Parsing.} We call these finitely many latent locations on the feature maps which correspond to salient traits {\it parts}. These parts manifest as patterns, where each pattern belongs to a finite (but potentially large) dictionary of templates. This dictionary embodies both the shared vocabulary and the diversity of patterns found across object instances. Our goal is to learn the dictionary of templates for different parts using training data; at test-time, we seek to {\it parse}\footnote{We view our dictionary as a collection of words, and the geometric relationships between different parts as relationships between phrases.} new instances by identifying part locations and the sub-collection of templates that are exhibited for the few-shot task. The provided few-shot instances are parsed and then compared against the parsed query. The best-matching class is then predicted as the output. As an example, see Fig. (a), where the recognized part locations using the learned dictionary correspond to the head, breast, and knee of the birds in their images, with corresponding locations in the convolutional feature maps. In matching the images, both the active templates and the geometric structure of the parts are utilized.
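
As an illustration of parsing (our own sketch, not the paper's implementation), one can score every feature-map location against a small template dictionary and read off part responses; all shapes and values here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, C, K = 6, 6, 8, 4          # feature-map size, channels, templates

fmap = rng.normal(size=(H, W, C))
dictionary = rng.normal(size=(K, C))

# Cosine similarity of every spatial location with every template.
f = fmap / np.linalg.norm(fmap, axis=-1, keepdims=True)
d = dictionary / np.linalg.norm(dictionary, axis=-1, keepdims=True)
scores = f @ d.T                 # shape (H, W, K)

best_template = scores.argmax(-1)                      # template per location
part_location = np.unravel_index(scores.max(-1).argmax(), (H, W))
```

In the paper's setting, the geometric arrangement of such part locations would additionally be matched between support and query instances.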
[h]
\centering
\includegraphics[width=0.98\linewidth]{figs/motivation_1.png}
\vspace{-3mm}
\caption{Motivation: a) In fine-grained few-shot learning, the most discriminating information is embedded in the salient parts (e.g. head and breast of a bird) and the geometry of the parts (obtuse or acute triangle). Our method parses the object into a structured combination of a finite set of dictionaries, such that both finer details and the shape of the object are captured and leveraged in recognition. b) In FSL setting, the same part may be distorted or absent in the support samples due to the perspective and pose changes. We propose to extract features from multiple scales for each part, and down-weight the matching if the scales are not matched.}

\vspace{-3mm}
Inferring part locations based on part-specific dictionaries is a low-complexity task, and is analogous to the problem of detecting signals in noise in radar applications, a problem solved by matching the received signal against a known dictionary of transmitted signals.
\noindent {\bf Challenges.} Nevertheless, our situation is somewhat more challenging. Unlike the radar setting, we do not have a dictionary a priori, and to learn one we are only provided class-level annotations by way of supervision. In addition, we require that these learnt dictionaries be compact (because we must be able to reliably parse any input), yet sufficiently expressive to account for the diversity of visual traits found in different objects and classes.
\noindent {\it Multi-Scale Dictionaries.}
Variations in pose and orientation lead to different appearances under perspective projection, which means there is variation in the scale and size of the visual characteristics of parts.
To overcome this issue we train dictionaries at multiple scales, which leads to a scheme that parses input instances at multiple scales. Also, in matching parts across images, we down-weight the match if there is a mismatch in either the part features or the scale of the dictionary (see Fig. (b)).
\noindent {\it Test-Time Inference.}
At test-time we must infer multiple parts, where the part locations must satisfy relative geometric constraints. Additionally, few-shot instances even within the same class exhibit significant variations in pose, orientation and size, which in turn induce variations in parsed outputs. To mitigate their effects we propose a novel instance-dependent re-weighting method for fusing instances based on their goodness-of-fit to the dictionary.
\if0
\noindent {\it Heteroscedasticity.} To account for the variations, we develop a novel re-weighting method for fusing instances. For each part, we model each shared dictionary element across different instances as independent but heteroscedastic Gaussian random variables, and develop a fusion algorithm that optimally fuses information. The resulting fused output is able to mitigate various visual distortions, and results in competitive performance relative to state-of-the-art.
\fi
\noindent {\it Contributions.} We show that: (i) our deep object parsing method improves performance on few-shot recognition and fine-grained few-shot recognition tasks, outperforming prior art on the Stanford-Car dataset by 2.64\%; (ii) we analyze the different components of our approach, showing the effect of ablating each on final performance. Through a visualization, we show that the parts recognized by our model are salient and help recognize the object category.
# Method

\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figs/model_overview_new.pdf}
\vspace{-8mm}
\caption{Overview of our method. An image gets parsed as a collection of salient parts. Once part peaks are located, different scales of attention maps are used for comparing the part in different images to account for any size discrepancies of a given part across images. (Best viewed with zoom; $\odot$ represents a channel-wise dot product)}
\vspace{-3mm}
\end{figure}

Input instances are denoted by $x \in {\cal X}$, and we denote by $\phi \in \mathbb{R}^{G\times G \times C}$ the output features of a CNN, with $C$ channels, supported on a 2D $G\times G$ grid.\\
\noindent {\bf Parsing Instances.}
A parsed instance has $K$ distinct units, $p \in [K]$, which we call parts. These parts are derived from the output features, $\phi$.
The term ``parts'' is overloaded in prior work. Our notion of a part is a tuple consisting of a part-location and a part-expression at that location. The part location is an $s\times s$ attention mask, $M(\mu)$, centered around a 2D vector $\mu$ in the CNN feature space.
We derive the part expression for an instance using part templates, a finite collection ${\cal D}_p, \, p \in [K]$, whose support is $s \times s$. These templates are learnt during training and are assumed to be exhaustive across all categories. Although our method allows several templates per channel, we use one template per channel in this paper; we observed that increasing the number of templates only marginally improves performance.
Since channel-wise features represent independent information, we consider channel-wise templates, $D_{p,c}$.
For any instance, we reconstruct feature vectors on the mask $M(\mu)$ using a sparse subset of part templates, and the resulting reconstruction coefficients, $z_{p,c}(\mu),\,c\in[C]$, are the part expressions.
\if0
\noindent
{\bf Part Location and Part Expression Estimation.}
Given an instance $x$, and its feature output, $\phi$, we maximize the following log-likelihood loss function, which will become clearer in the sequel, to estimate the location, $$\mu_p =\arg\max_{\mu \in G\times G} L_p(\phi \mid \mu; [D_{p,c}]).$$ We then solve for the part-expression by optimizing the $\ell_1$ regularized reconstruction error, at the location $\mu=\mu_p$.
[z_{p,c}(\mu)]=\arg\min_{[\beta_c]} \sum_{c \in C} \|\phi_c - D_{p,c}\beta_{c}\|_{M(\mu)}^2 + \lambda \|\beta_{c}\|_1.
where the subscript $M(\mu)$ refers to projection onto the support.\\
\noindent {\it Coupled Problem.} Note that part-expression is a function of part-location, and as such, part-location can be estimated by plugging in the optimal part-expressions for each candidate location value, namely,
$$
\mu_p^\prime = \arg\min_{\mu \in G\times G} \sum_{c \in C} \|\phi_c - D_{p,c}z_{p,c}(\mu)\|_{M(\mu)}^2 + \lambda \|z_{p,c}(\mu)\|_1.
$$
Thus the likelihood loss $L_p(\phi\mid \mu; [D_{p,c}]) \triangleq \sum_{c \in C} \|\phi_c - D_{p,c}z_{p,c}(\mu)\|_{M(\mu)}^2 + \lambda \|z_{p,c}(\mu)\|_1$, which results in a coupled problem.
\fi
\noindent
{\bf Part Expression as LASSO Regression.}
Given an instance $x$, its feature output $\phi$, and a candidate part-location $\mu$, we estimate sparse part-expressions by optimizing the $\ell_1$-regularized reconstruction error at the location $\mu$.
[z_{p,c}(\mu)]=\arg\min_{[\beta_c]} \sum_{c \in C} \|[\phi_c]_{M(\mu)} - D_{p,c}\beta_{c}\|^2 + \lambda \|\beta_{c}\|_1.
where the subscript $M(\mu)$ refers to projection onto its support. The notation $[\cdot]$ employed on the LHS denotes a vector of components (here across $c$). We use it for brevity; its meaning is usually clear from the context.
\noindent {\it Non-negativity.} Part expressions signify presence or absence of part templates in the observed feature vectors, and as such can be expected to take on non-negative values. This fact turns out to be useful later for DNN implementation.
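As a concrete sketch of the part-expression step, the channel-wise LASSO above can be solved by proximal gradient descent (ISTA) with a non-negativity clamp. All names, shapes and hyperparameter values below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def part_expression_ista(phi_patch, D, lam=0.1, lr=None, n_iter=200):
    """Sketch of the l1-regularized part-expression fit
        min_beta ||phi_patch - D @ beta||^2 + lam * ||beta||_1,  beta >= 0,
    solved by ISTA; the clamp reflects that expressions signify
    (non-negative) presence of templates.
    phi_patch: (s*s,) masked channel features; D: (s*s, m) templates."""
    m = D.shape[1]
    beta = np.zeros(m)
    if lr is None:
        lr = 1.0 / (2 * np.linalg.norm(D, 2) ** 2)  # step <= 1/Lipschitz
    for _ in range(n_iter):
        grad = 2 * D.T @ (D @ beta - phi_patch)     # gradient of the quadratic
        beta = beta - lr * grad
        beta = np.maximum(beta - lr * lam, 0.0)     # soft-threshold + clamp
    return beta
```

With a patch generated exactly from one template, the fit concentrates its weight on that template.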
\noindent {\bf Part Location Estimation.} Note that part-expression is a function of part-location, and as such, part-location can be estimated by plugging in the optimal part-expressions for each candidate location value, namely,
\mu_p = \argmin_{\mu \in G\times G}\left [ L_p(\mu \mid \phi; [D_{p,c}]) \triangleq \sum_{c \in C} \|[\phi_c]_{M(\mu)} - D_{p,c}z_{p,c}(\mu)\|^2 + \lambda \|z_{p,c}(\mu)\|_1 \right ]
This couples the two estimation problems, and is difficult to implement with DNNs, motivating our approach below.
\noindent {\bf Feedforward DNNs for Parsing.}
To make the proposed approach amenable to DNN implementation, we approximate the solution to Eq. by optimizing the reconstruction error followed by thresholding, namely, we compute
$[z_{p,c}^\prime (\mu)]=\arg\min_{[\beta_c]} \sum_{c \in C} \|[\phi_c]_{M(\mu)} - D_{p,c}\beta_{c}\|^2$, and we threshold the resulting output by deleting entries with magnitude smaller than $\zeta$: $S_{\zeta}(u)=u\mathbf{1}_{|u|\geq \zeta}$. This is closely related to thresholding methods employed in LASSO.
As such, the quadratic component of the loss allows an explicit solution, and the solution reduces to template matching per channel, which can further be expressed as a convolution. From this perspective, we overload notation and consider the template $D_{p,c}$ as a convolution kernel. Writing $\delta_{\mu}(v)=\delta(\mu-v)$, we have $D_{p,c}(\mu-v)\triangleq D_{p,c}(v)\ast \delta_\mu(v)$
and the resulting kernel is matched against the channel feature output $\phi_c$, yielding:
z_{p,c}^\prime(\mu) = \frac{(D_{p,c}\ast \delta_{\mu}) \odot \phi_c}{\|D_{p,c}\|^2}; \,\,\, z_{p,c}(\mu)=S_{\zeta}(z_{p,c}^\prime(\mu))
\if0
z_{p,c}^\prime(\mu) = \frac{(D_{p,c}\ast \phi_c) (\mu)}{\|D_{p,c}\|^2}; \,\,\, z_{p,c}(\mu)=S_{\zeta}(z_{p,c}^\prime(\mu))
\fi
To estimate the location, we plug this into Eq. and bound it from above.
L_p(\mu\mid \phi;[D_{p,c}])\leq&L_p^\prime(\mu\mid \phi;[D_{p,c}])= \sum_{c \in [C]}\|[\phi_c]_{M(\mu)} - D_{p,c}z_{p,c}^\prime(\mu)\|^2 + \lambda \|z_{p,c}^\prime \|_1 \\ &=\sum_{c\in [C]} \|\phi_c\|_{M(\mu)}^2 - \frac{((D_{p,c}\ast \delta_\mu) \odot \phi_c)^2}{\|D_{p,c}\|^2} + \lambda \|z_{p,c}^\prime \|_1
The first term, denoting the energy across all channels, typically has small variance over different values of $\mu$, and we ignore it. As such, the problem reduces to optimizing the last two terms. As we argued before, ground-truth part expressions are non-negative, and we write $\lambda \|z_{p,c}^\prime\|_1 = \lambda z_{p,c}^\prime$. Invoking completion of squares:
\argmin_{\mu \in G \times G} L_p^\prime (\mu\mid \phi;[D_{p,c}]) = \argmax_{\mu \in G \times G} \sum_{c\in [C]} \frac{((D_{p,c}\ast \delta_\mu) \odot \phi_c-\lambda)^2}{\|D_{p,c}\|^2} \nonumber\\ =\argmax_{\mu \in G \times G}\left [ \mbox{Pr}_p(\mu \mid \phi;[\theta_{p,c}],[\lambda_c])= \frac{\exp(\frac{1}{T}\sum\limits_{c\in C}((\theta_{p,c}\ast\phi_c)(\mu)-\lambda_c)^2)}{\sum_\mu \exp(\frac{1}{T}\sum\limits_{c\in C}( (\theta_{p,c}\ast\phi_c)(\mu)-\lambda_c)^2)}\right ]
where $\theta_{p,c}=D_{p,c}/\|D_{p,c}\|$ and $\lambda_c = \lambda/\|D_{p,c}\|$ is a channel-dependent constant. Strictly speaking, the sum over channels above should include only those with non-negative expressions, but we do not observe a noticeable difference in experiments and consider the full sum here. $T$ is a temperature term, which we will use later for efficient implementation.
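Putting the pieces together, the feedforward parsing of a single part (channel-wise template matching, thresholding $S_\zeta$, and the temperature softmax over locations) can be sketched in numpy as follows; the shapes and the values of $\lambda$, $\zeta$ and $T$ are illustrative assumptions:

```python
import numpy as np

def parse_part(phi, D, lam=0.1, zeta=0.05, T=0.1):
    """Parse one part. phi: (C, G, G) CNN features; D: (C, s, s) templates.
    Returns the located part mu, thresholded expressions z at mu, and the
    location probability map Pr_p(mu | phi)."""
    C, G, _ = phi.shape
    s = D.shape[1]
    n = G - s + 1                                   # number of valid offsets
    norms = np.sqrt(np.sum(D.reshape(C, -1) ** 2, axis=1))  # ||D_{p,c}||
    scores = np.zeros((C, n, n))
    for c in range(C):
        for i in range(n):
            for j in range(n):
                # z'_{p,c}(mu) = <D_{p,c}, phi_c on M(mu)> / ||D_{p,c}||^2
                scores[c, i, j] = np.sum(D[c] * phi[c, i:i + s, j:j + s]) / norms[c] ** 2
    # Location energy sum_c ((theta * phi)(mu) - lambda_c)^2, then softmax at temperature T.
    lam_c = lam / norms
    energy = np.sum((scores * norms[:, None, None] - lam_c[:, None, None]) ** 2, axis=0)
    prob = np.exp((energy - energy.max()) / T)
    prob /= prob.sum()
    mu = np.unravel_index(int(np.argmax(prob)), prob.shape)
    z = scores[:, mu[0], mu[1]]
    z = np.where(np.abs(z) >= zeta, z, 0.0)         # hard threshold S_zeta
    return mu, z, prob
```

In practice the correlation loops would be a single convolution layer; the explicit loops here are only for clarity.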
\noindent {\it Multi-Scale Extension.} We extend our approach to incorporate multiple scales. This is often required because of significant differences in orientation and pose between query and support examples. To do so, we simply consider masks $M_s(\mu)$ at varying grid sizes indexed by $s$. As such, the proposed method directly generalizes to multiple scales, and part expressions can be obtained across different scales at any candidate location. To estimate the part-location we integrate across all the scales to find a single estimate.
\begin{algorithm}[htbp!]
\DontPrintSemicolon
Input: image $x$ \\
Parametric functions: convolutional backbone $f$, template collections ${\cal D}_p$ \\
Get the convolutional feature $\phi = f(x)$ \\
\For{$p \in [K]$}{
Estimate $\mu_p = \argmax \mbox{Pr}_p(\mu \mid \phi;[\theta_{p,c}],[\lambda_c])$ by \cref{eq.estimate_mu} \\
Compute $z_{p,c}^\prime(\mu) = \frac{(D_{p,c}\ast \delta_{\mu}) \odot \phi_c}{\|D_{p,c}\|^2}$ \\
Thresholding: $z_{p,c}(\mu)=S_{\zeta}(z_{p,c}^\prime(\mu))$
}
Output: Part locations $[\mu_p]_{p\in[K]}$ and template coefficients $[z_{p,c}]_{p\in[K],c\in[C]}$
\caption{Object Parsing using DNNs}
\end{algorithm}
At test-time we are given a query instance, $q$, and by way of supervision, $M$ support examples for each of $N$ classes.
Let $I_{y}$ denote the set of support examples for class label $y \in [N]$. Parsing the $i$th support example of the $y$th class yields part locations $[\mu_p^{(i,y)}]$ and part expressions $[z_{p,c}^{(i,y)}]$. Similarly, parsing the query yields $[\mu_p^{(q)}]$ and $[z_{p,c}^{(q)}]$.
\noindent {\it Geometric Distance.} To leverage part-location information, we compare geometries between query and support. To do so, we embed pairwise relative part distances and three-way part angles into a feature space $\psi([\mu_p])$, and use the squared distance in the embedded space.
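The exact form of $\psi$ is not spelled out here; a minimal sketch consistent with the description (pairwise distances plus three-way angles, hence translation-invariant) is:

```python
import numpy as np

def geometric_embedding(mus):
    """Hypothetical embedding psi([mu_p]) of K part locations into a
    feature vector of pairwise distances and three-way angles; squared
    distance in this space then compares part geometries.
    mus: (K, 2) array of part locations."""
    K = len(mus)
    feats = []
    for i in range(K):
        for j in range(i + 1, K):
            feats.append(np.linalg.norm(mus[i] - mus[j]))  # pairwise distance
    for i in range(K):
        for j in range(K):
            for k in range(j + 1, K):
                if i in (j, k):
                    continue
                # three-way angle at part i subtended by parts j and k
                u = mus[j] - mus[i]
                v = mus[k] - mus[i]
                cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8)
                feats.append(np.arccos(np.clip(cos, -1.0, 1.0)))
    return np.array(feats)
```

Because only relative distances and angles enter, translating all parts by a common offset leaves the embedding unchanged.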
\noindent {\it Part Expression Distance.} Part-expressions across examples and different parts exhibit significant variability due to differences in pose and orientation. Entropy of the location probability, $\mbox{Pr}_p(\mu \mid \phi; [\theta_{p,c}],[\lambda_c])$ is a key indicator of poor part-expressions. Leveraging this insight we train a weighting function $\alpha(h^q_p,[h^{(i,y)}_p])$ that takes location entropies for all the support examples for part $p$ and the corresponding location entropy for query, $q$, and outputs a score. Additionally, for each example $i$ for class, $y$, and part, $p$, we train a weighting function $\rho(h^{(i,y)}_p)$ to output a composite weighted part-expression: $z_{p,c}^{(y)}= \sum_{i \in I_{y}} \rho(h^{(i,y)}_p) z_{p,c}^{(i,y)}$.
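A minimal sketch of the entropy computation and the instance re-weighting follows; the softmax form of the weight $\rho$ is an assumption here (the paper trains $\rho$ and $\alpha$ as weighting functions):

```python
import numpy as np

def location_entropy(prob):
    """Entropy of the part-location distribution Pr_p(mu | phi); a high
    value signals a diffuse, unreliable localization of the part."""
    p = prob.ravel()
    p = p / p.sum()
    return float(-np.sum(p * np.log(p + 1e-12)))

def fuse_expressions(z_support, h_support, tau=1.0):
    """Fuse support part-expressions with weights rho(h) that decay with
    location entropy, so confidently localized supports dominate.
    z_support: (M, C) expressions; h_support: (M,) entropies."""
    w = np.exp(-np.asarray(h_support) / tau)
    w = w / w.sum()
    return w @ z_support   # composite weighted part-expression
```

A support example with a sharply peaked location map thus contributes more to the composite expression than one with a diffuse map.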
In summary, the overall distance is the sum of geometric and weighted part distances with $\beta$ serving as a tunable parameter:
d(q,y)= \sum_{p \in [K]} \alpha(h^q_p,[h^{(i,y)}_p])\left \| [z_{p,c}^{(y)}]-[z_{p,c}^{(q)}] \right \|^2 + \beta \sum_{i\in I_{y}} \left \|\psi([\mu_p^{(i,y)}])-\psi([\mu_p^{(q)}])\right \|^2
\noindent{\bf Training.} The training procedure (see Algo. ) follows convention. We sample $N$ classes at random, and additionally sample support and query examples belonging to these classes from the training data. An additional issue for us is that we must enforce diversity of part locations during training, to ensure that a diverse set of parts is chosen. There are many choices for the joint distributions of part locations that ensure they are well-separated. We utilize the Hellinger distance, denoted by $\mathbb{H}(\cdot,\cdot)$, and enforce divergence for an example $x \in {\cal X}$ as follows:
\ell^{div}([\mu_p];x) = \sum_{p \in [K]} \log \mbox{Pr}_p(\mu_p \mid \phi; [\theta_{p,c}],[\lambda_c]) - \eta \sum_{p^\prime \neq p}\mathbb{H}\left(\mbox{Pr}_p(\mu_p \mid \phi; [\theta_{p,c}],[\lambda_c]),\mbox{Pr}_{p^\prime}(\mu_{p^\prime} \mid \phi; [\theta_{p^\prime,c}],[\lambda_c])\right)
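The Hellinger term can be computed as follows for discrete location distributions (a sketch; inputs are normalized to sum to one):

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete distributions, e.g. the
    location probabilities Pr_p(. | phi) of two parts over the G x G grid.
    Ranges from 0 (identical) to 1 (disjoint supports)."""
    p = np.array(p, dtype=float).ravel()
    q = np.array(q, dtype=float).ravel()
    p /= p.sum()
    q /= q.sum()
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))
```

Penalizing small pairwise Hellinger distances pushes the location distributions of different parts away from one another.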
where $\eta$ is a tunable parameter. The overall training loss is the sum of the cross-entropy loss (see Line 12 in Algo ), based on the distance described in Eq. , and the divergence loss (Eq. ). In Algo , the argmax in Line 5 and the delta function in Line 6 are non-differentiable, preventing back-propagation of gradients during training. We approximate the argmax by taking the mean value under the location distribution for sufficiently small $T$. We then use a Gaussian function with a small variance (0.5 in our experiments) to approximate the delta function. The multi-scale extension is handled analogously: the templates for the different dictionaries are trained in parallel, and the distance function in Eq. is generalized to include part-expressions at multiple scales. These modifications yield an end-to-end differentiable scheme.
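The two differentiable surrogates described above (a soft-argmax for small $T$, and a Gaussian bump in place of the delta function) can be sketched as follows; shapes and the variance value are illustrative:

```python
import numpy as np

def soft_argmax(energy, T=0.05):
    """Differentiable surrogate for argmax over a G x G grid: the expected
    location under the temperature-T softmax. For small T the probability
    mass concentrates on the maximizer, recovering the hard argmax."""
    G = energy.shape[0]
    p = np.exp((energy - energy.max()) / T)
    p /= p.sum()
    ii, jj = np.meshgrid(np.arange(G), np.arange(G), indexing="ij")
    return np.array([np.sum(p * ii), np.sum(p * jj)])

def gaussian_delta(mu, G, var=0.5):
    """Smooth stand-in for delta(mu - v): a normalized Gaussian bump
    centered at mu on the G x G grid."""
    ii, jj = np.meshgrid(np.arange(G), np.arange(G), indexing="ij")
    d2 = (ii - mu[0]) ** 2 + (jj - mu[1]) ** 2
    g = np.exp(-d2 / (2 * var))
    return g / g.sum()
```

Both operations are smooth in their inputs, which is what allows gradients to flow through the location estimate during training.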
\if0
\noindent {\bf Inductive Bias.} To bridge this gap we introduce a set of assumptions for our model. \\
{\it Assumption 1.} Convolutional neural networks can output features that retains salient visual patterns. These patterns are reflective of the visual appearance of an object instance.\\
{\it Assumption 2.} Objects contain a finite number of high-level concepts, which we refer to as parts. Parts occupy a prototypical relative position within the object. \\
{\it Assumption 3.} For each object instance, each part manifests as a collection of visual patterns. Individual visual patterns are duplicated across different object instances, and the union of unique part-wise visual patterns forms a finite dictionary. \\
{\it Assumption 4.} An object instance belonging to a class can be identified by its part geometry (relative locations of the parts) together with the sub-collections of exhibited visual patterns for each part.
With these assumptions in place, we are in a position to propose a statistical model, which will in turn lead to methods for few-shot recognition.
\fi
\if0
\noindent We propose a statistical model over feature output of a CNN to parse input image instances.
This is motivated by Sec. , where we claim: {\bf (i)} that a small set of locations in the CNN feature space defines the geometry of an object; {\bf (ii)} that while these locations, which we refer to as parts, exhibit a wide variety of patterns, each pattern is duplicated across other object classes, so a finite vocabulary of patterns for each part can be used to express different instances; {\bf (iii)} that the geometry together with the specifically expressed patterns uniquely describes the object class.
Parsing refers to expressing image instances in terms of part geometry and associated part signatures drawn from a fixed vocabulary.
{\it Multi-Scaled Parsing.} We extend our method to include multiple scales because we observe that few-shot instances, due to variation in pose, exhibit significant visual distortion, and some parts can appear smaller in perspective. To handle these situations we parse outputs at multiple scales.
\noindent {\bf Notation.} Let $x,\,x_i \in {\cal X}$ refer to inputs and input instances, and $y,\,y_i \in {\cal Y}$ the corresponding labels respectively. Let $\phi_{x,c} \in \mathbb{R}^{d\times d}$ denote the $c$\,th channel output of a CNN, and $\Phi_x \in \mathbb{R}^{d\times d\times C}$
the concatenation of outputs from all the $C$ channels. We drop the index $x$ in $\Phi_x$ whenever clear. We let $p$ denote parts taking values in the set $\mathcal{I}$, and $D_{p,c}^{\mathcal{s}}$
a finite dictionary of $m$ atoms each of dimension $\mathcal{s}\times\mathcal{s}$ for each channel (for simplicity we let $m=1$ in this exposition). We denote by, $D_{p}^{\mathcal{s}}$ the part dictionary at scale $\mathcal{s}$, which is a concatenation of all channel dictionaries at that scale. The part-dictionary across all scales is grounded to a single location $\mu_p \in [0, d]^{2}$. Let $\mathcal{D}$ denote the concatenation of all part dictionaries across different scales $\mathcal{s}\in\mathcal{S}$.
\centering
\includegraphics[width=0.8\linewidth]{figs/fig2.png}
\caption{Each part $p$ is a mixture of the atoms of the $p$-th dictionary $D_p$ with coefficients $Z_p$ plus noise inside $\norm{\mu - \mu_p}_\infty \le \cs$. In this example, the ``head'' part of the bird is in the $\cs = 3$ region of the location $\mu_p$. It can be represented by the combination of various visual signatures such as beak (triangle) and eye (circle) across different channels.}
\vspace{-0.2in}
\noindent We propose the following relationship between part location and expression in the part dictionary:
\mathcal{P}_{\mu_p,\mathcal{s}}(\Phi) = \sum_{c \in [C]} D_{p,c}^{\mathcal{s}}z_{p,c}^{\mathcal{s}} + W \triangleq D_p^{\mathcal{s}} Z_p^{\mathcal{s}} + W
where $\mathcal{P}_{\mu_p,\mathcal{s}}(\cdot)$ is a projection on $\mathbb{R}^{\mathcal{s}\times \mathcal{s}\times C}$ into the ball $\{\|\mu - \mu_p\|_{\infty} \leq \mathcal{s}\}\times \mathbb{R}^C$. As shown in \cref{fig:signal_model}, this function is a mask that eliminates entries of $\Phi$ that are greater than radius $\mathcal{s}$ from the pth part location, $\mu_p$.
$W\in \mathbb{R}^{\mathcal{s}\times \mathcal{s} \times C}$ is assumed to be noise with mean zero and variance $\sigma^2$ with IID Gaussian components. $Z_p^{\mathcal{s}}=[z_{p,c}^{\mathcal{s}}]$ is the set of coefficients expressed across the atoms of the pth dictionary at scale ${\mathcal{s}} \in \mathcal{S}$. To enhance sparsity of expression, we place a uniform and Laplacian prior (with a tunable penalty parameter) on the coefficients , and assume independence across the different dictionary elements and scale.
As described in Sec. , our setup mirrors the detection of one-dimensional signal waveforms (dictionary elements) in a received noisy signal ($\Phi$). To detect and categorize a part, we find the best fit for the part by identifying the optimal part location, $\mu_p$, shared across all the channels and at all scales, and identify which of the part dictionary elements are expressed, independently for each scale. As in matched filtering, much of this computation can be reduced to running a {\it convolution-layer} for each channel followed by a soft maximization scheme, as we point out in \cref{fig:inf_model}.
The main difference between the time-series detection problem and the one here is that we now have many parts, and we need a dictionary for each part. Furthermore, we cannot allow parts to overlap, and as such this leads to a coupled problem involving geometric constraints on the part locations. Nevertheless, conditioned on the part-locations, the part-level coefficients $z_{p,c}^{\mathcal{s}}$ and the set of all expressed patterns, $Z_p^{\mathcal{s}}$ are assumed to be independent. Abstractly, our model can be viewed as proposing a posterior, which factorizes as follows:
P(&(Z_1^{\mathcal{s}},\mu_1),(Z_2^{\mathcal{s}},\mu_2) \ldots, (Z_K^{\mathcal{s}},\mu_K), \mathcal{s} \in \mathcal{S} \mid \Phi; \mathcal{D})\\ =& P(\mu_1,\mu_2,\ldots,\mu_K \mid \Phi; \mathcal{D})\prod_{p \in \mathcal{I}, \mathcal{s} \in \mathcal{S}} P(Z_p^\mathcal{s} \mid \Phi, \mu_p; D_p^{\mathcal{s}})
There are many choices for the joint distributions of part locations to ensure that they are well-separated. In this paper we enforce negative similarity on marginals, and let $\log P(\mu_1,\mu_2,\ldots,\mu_K \mid \mathcal{D}) \propto \sum_{p \in \mathcal{I}} \log P(\mu_p \mid D_p) - \eta \sum_{p^\prime \neq p}\mathbb{H}(P(\mu_p \mid D_p),P(\mu_{p^\prime} \mid D_{p^\prime}))$
where $\mathbb{H}(\cdot,\cdot)$ denotes the Hellinger distance , and $\eta$ is a tunable parameter. The marginals for locations are implicitly described by Eq. .
Parsing an input image, $x \in {\cal X}$ of an object, produces the output, ${\cal O}(x)=\{(\mu_p, Z_p^{\mathcal{s}}): p \in \mathcal{I}, \mathcal{s} \in \mathcal{S}\}$.
\noindent We use maximum-a-posteriori (MAP) estimate for parsing:
$$
{\cal O}(x) = \argmax\limits_{[\mu_p,Z_p^\mathcal{s}]: p \in \mathcal{I},\mathcal{s}\in\mathcal{S}} \log(P([\mu_p,Z_p^\mathcal{s}]_{p \in {\mathcal I},\mathcal{s}\in\mathcal{S}} \mid \Phi_x; \mathcal{D}))
$$
\centering
\includegraphics[width=0.8\linewidth]{figs/inf_model.png}
\caption{Overview of the feedforward network at inference time. The input image is parsed at multiple scales for each part. The figure depicts the model for one part at two scales [1, 3].}
\vspace{-0.2in}
\noindent Consider the $N$-way $K$-shot problem, $(x_i,y_i),\,\,i\in[NK]$, where at test-time we are presented a query $q$ belonging to one of the $N$ unseen classes, $y_i \in [N]$, and for these classes we are given $K$ support examples per class. We index by $i$, i.e., $z_{i,p,c}^{\mathcal{s}}$ to denote outputs for $i$th example (but drop it whenever clear). The parsed query and examples yield ${\cal O}(q)$ and ${\cal O}(x_{i})$ respectively. To infer the class label, we propose a posterior, $\log P(y\mid q)\propto -d([{\cal O}(x_{i})]_{i\in C_y},{\cal O}(q))$, where $C_y=\{i: y_i=y\}$, based on distances between the parsed outputs (the index $i\in C_y$ is dropped below for better exposition). However,\\
(i) {\it Part geometry is strongly instance-dependent} (translations, different poses etc.), and has to be handled differently. We propose to sum geometric distances over different instances. To compute geometric distance, we embed part locations in a feature space consisting of pairwise distances, and angular displacements. \\
(ii) {\it Variation in pose and orientation leads to distortions} such as missing parts, uncertain locations, and noisy coefficients, and impacts quality of matching query to support. These variations are instance-dependent, and rather than estimate pose, we view parsed outputs as Gaussian noisy observations with instance-dependent, and scale dependent noise variance $\sigma_{i,\mathcal{s}}^2$.
The optimal mean-squared estimate is a re-weighted fusion rule $\sum_{i \in C_y} \sigma_{i,\mathcal{s}}^{-2}z_{i,p,c}^{\mathcal{s},*}$. Estimating instance-dependent noise variance is difficult. As a solution, we post-hoc compute the goodness of fit (GoF), and use it as a feature to train an instance-dependent weight. GoF is the ratio, $\rho(\Phi_{x_i},\mu_p)$, of $\|{\cal P}_{\mu_p,\mathcal{s}}(\Phi_{x_i})-D_{p,c}^{\mathcal{s}}z_{i,p,c}^{\mathcal{s},*}\|^2$ to the population variance across all support examples for the channel. This leads to the fused output $\hat z_{p,c}^{\mathcal{s},*}=\sum_{i \in C_y} \rho^{-1}(\Phi,\mu_p)z_{i,p,c}^{\mathcal{s},*}$ for each class. In parallel, we also re-weight the different scales and different parts to further suppress noisy support. Noise here arises from uncertainty in part locations, and here we learn weighting functions that takes the entropy $h_{x_i,p}$ of $P_{\mu_p}(\cdot \mid \Phi_{x_i})$ for each example and part, and jointly (over support and query) learn a weight, $\alpha([h_{x_i,p}]_{i\in C_y},h_{q,p})$. For $i\in C_y$, we get
d([{\cal O}(x_i)]_{i\in C_y},{\cal O}(q))&=
\sum_{\mathcal{s},\mathcal{s'},p} \alpha([h_{x_i,p}],h_{q,p}) d_1(\hat Z_{p}^{\mathcal{s},*},Z_{q,p}^{\mathcal{s}'})\\+&
\beta \sum_{i\in C_y} d_2([\mu_{i,p}]_{p \in \mathcal{I}},[\mu_p^q]_{p\in \mathcal{I}}),
as the distance function with $\beta$ as a tunable hyperparameter. We utilize cosine similarity to be invariant to size effects.
\if0
However, identifying a good distance metric is not straightforward, since the part geometry, which involves part locations, must be handled separately from the dictionary coefficients.
This leads us to a weighted distance measure of the form:
d([{\cal O}(x_i)]_{i:y_i=y},{\cal O}(q))&=
\sum_{\mathcal{s},\mathcal{s'},p} \alpha_{\mathcal{s},\mathcal{s'},p} d_1([Z_p^{\mathcal{s},i}]_{i:y_i=y},Z_p^{\mathcal{s}',q})\\+&
\sum_{i:y_i=y} \beta_i d_2([\mu_p^i]_{p \in \mathcal{I}},[\mu_p^q]_{p\in \mathcal{I}})
where, the first term representing the coefficients is weighted sum across different scales and parts, while the second term representing the geometric distance is a weighted sum over different support examples. To incorporate invariance to scaling we use cosine similarities in both of the terms. We next describe these in more detail below.
\noindent {\it Geometric Distance.} First, the geometry across instances cannot be fused due to variations in pose and orientation, and we learn to re-weight instances ($\beta_i$) based on uncertainty in part locations. Furthermore, to incorporate geometric structure, we embed the part locations in a feature space. The feature space consists of distances of each part from the centroid of the part locations (this takes care of translation effects), and pairwise angular displacements from the centroid, which incorporates the underlying skeletal structure.
\noindent {\it Instance-dependent Reweighting of Patterns.} We next consider the first term. We sum over the parts because these are independent conditioned on the underlying locations (which is handled by the geometric term). To justify reweighting we note that due to variations in size, pose and orientation, the support instances can exhibit various visual artifacts, such as missing parts, inaccurate localization, and noise in specific manifested patterns. One way to account for these artifacts is to estimate the pose and use it as an input for matching, but this leads to technical complications. Our
re-weighting implicitly accounts for visual distortions, and allows for increasing the influence of specific parts at specific scales, and seeks to optimize matching to the particular query appearance. We learn these weights as a function of uncertainty in part locations (such as entropy of $P(\mu_p\mid \Phi)$).
\fi
\if0
\\
{\bf Relative Geometric Similarity.} To incorporate geometric structure, we embed the part locations in a feature space. The feature space consists of distances of each part from the centroid of the part locations (this takes care of translation effects), and pairwise angular displacements from the centroid, which incorporates the underlying skeletal structure. As such, we can decompose the distance into two components: $d([{\cal O}(x_{i,y})]_{i:y_i=y},{\cal O}(q))=d_1([Z_p^{\mathcal{s},i}]_{i:y_i=y},Z_p^{\mathcal{s}',q})+d_2([\mu_p^i]_{p \in \mathcal{I},i:y_i=y},[\mu_p^q]_{p\in \mathcal{I}})$. It remains to determine a good distance metric for the first term. Once geometry is separated, since the signatures exhibited by parts are assumed to be independent, we arrive at $d_1([Z_p^{\mathcal{s},i}]_{i:y_i=y},Z_p^{\mathcal{s}',q})=\sum_{p \in \mathcal{I}} d_1([Z_p^{\mathcal{s},i}]_{i:y_i=y},Z_p^{\mathcal{s}',q})$.
\\
{\bf Re-weighting Distance Based on Quality of Parts.} A fundamental issue (see Fig. ) is that due to the diversity of support and query vectors in terms of size, pose and orientation, a number of visual distortions can arise. This leads to missing parts or inaccurate localization of parts, and there is a variation in appearance of visual traits across instances as well as across different scales.
Implicitly, there are nuisance variables $\varepsilon$ that account for such visual distortions, and one could consider placing a probability model $P(\varepsilon \mid \Phi)$, and attempting to form the MAP estimate over $\varepsilon$ as well. We can avoid estimating the nuisance variable by noting that its impact is in increasing uncertainty of the coefficients, $Z_p$. We model this as $z_{p,c}^i \sim N(\bar{z}_{p,c},\sigma^2_{\varepsilon,c})$ with $\bar{z}_{p,c}$ the ground-truth coefficient, and $\sigma^2_{\varepsilon,c}$ is the variance arising from the nuisance variable for the $c$th channel. The resulting fused output is a weighted average across support examples. Since we use cosine similarity we only need relative weightings, and as such this can be obtained at test-time (see Supplementary). Our next task is to consider
To avoid optimizing over the nuisance variable, let us consider its impact on the parsed outputs. Fundamentally, its impact is to increase the uncertainty for specific nuisance parameter values. We can model this as $z_{p,c}^i \sim N(\bar{z}_{p,c},\sigma^2_{\varepsilon,c})$, with $\bar{z}_{p,c}$ the ground-truth coefficient and $\sigma^2_{\varepsilon,c}$ the variance arising from the nuisance variable for the $c$th channel. This leads to channel-wise heteroscedasticity across the query and support samples. As a result, the optimal solution for class $y$ is the re-weighted average $\hat{z}_{p,c}^y=\sum_{i:y_i=y} \frac{z_{p,c}^i}{\sigma^2_{\varepsilon,c}}$. To estimate the variance, we note that we only require the relative ratios across the examples, and as such, to first order, we can obtain it from the population variance for that channel.
This leads us to propose estimating $\sigma^2_{\varepsilon,c}$ as the population variance of the channel across all the support examples for that task.
some of the visual traits reflected in part locations are inaccurate, some parts may be completely missing, and finally, due to perspective projection, a few of the visual traits may have a small signal-to-noise ratio, making them unreliable.
To mitigate these effects, we propose to learn to fuse support vectors based on their quality. Let $z_{p,c}^{y,i}$ denote the dictionary coefficient for the $i$-th support vector in class $y$ for part $p$ and the $c$-th dictionary. We can view the output of the MAP estimator as:
$$
z_{p,c}^{y,i} = \bar{z}_{p,c}^{y} + \sigma_{p,c}^{y,i} w_{p,c}
$$
\noindent where $\bar{z}_{p,c}^{y}$ is the ground-truth coefficient, $w_{p,c}$ is distributed as a standard normal, and $\sigma_{p,c}^{y,i}$ captures the signal quality. We assume independence across different dictionaries. We leverage a linear model for simplicity, but other parameterized likelihood models are also possible. Ultimately, our goal is to seek a MAP estimate for $\bar{z}_{p,c}^{y}$. For this situation, we obtain a MAP estimate of the form $\hat{z}_{p,c}^{y}=\sum_{i:y_i=y} \frac{1}{(\beta +\sigma_{p,c}^{y,i})^2}z_{p,c}^{y,i}$, where $\beta$ arises from assuming a uniform prior on $\bar{z}_{p,c}^{y}$. The main remaining issue is to estimate the quality $\sigma_{p,c}^{y,i}$, which we do by computing the variance of that dictionary coefficient across the support vectors of all $N$ classes. The reason for doing so is that the dictionary is shared across all object instances, and the noise levels arising from various visual distortions are independent of the specific instance. With these modifications, we score a parsed query for each part against the fused support instances by adding the cosine similarity between the dictionary coefficients $[\hat{z}_{p,c}^{y}]_{c \in C}$ and the query coefficients $[\hat{z}_{p,c}]_{c \in C}$ to the geometric cosine similarity described above. The reason for using cosine similarity is its invariance to scaling, which turns out to be useful in our context. \\
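For reference, the precision-weighted form of this estimate follows directly from the Gaussian model above; a sketch, assuming the per-example noise model and a flat prior (the explicit normalization is shown here for readability, whereas only relative weights matter under cosine similarity):

```latex
% MAP estimate of \bar{z}_{p,c}^{y} given
% z_{p,c}^{y,i} = \bar{z}_{p,c}^{y} + \sigma_{p,c}^{y,i} w_{p,c},
% with w_{p,c} \sim N(0,1) and a flat prior on \bar{z}_{p,c}^{y}:
\begin{equation*}
\hat{z}_{p,c}^{y}
 = \arg\max_{z}\; \sum_{i:y_i=y} -\frac{\bigl(z_{p,c}^{y,i}-z\bigr)^2}{2\,(\sigma_{p,c}^{y,i})^2}
 = \frac{\sum_{i:y_i=y} (\sigma_{p,c}^{y,i})^{-2}\, z_{p,c}^{y,i}}{\sum_{i:y_i=y} (\sigma_{p,c}^{y,i})^{-2}},
\end{equation*}
% which matches the re-weighting above up to the smoothing constant \beta
% and the normalization.
```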
Fortunately, the cosine similarity is still applicable when matching across scales. Nevertheless, it is possible that some of the scales are too noisy; this can be detected through the posterior $P(\mu_p \mid \Phi)$, and, analogous to the coefficient re-weighting above, we re-weight each of the matchings to output a final score.
{\bf Training.} As is customary, training is performed episodically, with support instances and held-out query vectors; we learn part dictionaries and predict labels based on inferring parts, part locations, and the fusion rule to minimize the cross-entropy loss, with $\log p(y=y_i \mid {\cal O}(q)) \propto -d([{\cal O}(x_{i,y})]_{i:y_i=y},{\cal O}(q))$. There is an important difference from the conventional approach, where one would add to the cross-entropy loss a regularization penalty of the form $\log p([z_p],[\mu_p] \mid \Phi; [D_p])$ to train the dictionaries. Our approach, in contrast, propagates the loss only through the MAP estimates for $(z_p,\mu_p)$ to train the dictionary. This distinction is important and natural, since our goal is to mimic test-time inference, where we must match parsed outputs, which are based on MAP estimates.
\fi
\fi

\if0
We elaborate on the steps in Algorithms and , and draw connections to the description in Sec. . First, notice that in Algo our solution to posterior maximization is based on convolution in lines 5 and 8. This is precisely a generalization of the 1-D time-delay estimation problem to our 2-D domain, based on alternating minimization. For fixed locations $\mu_p$, the estimate for $Z_p$ is a channel-by-channel least-squares minimization (lines 7-9).
The second step is to plug in the estimated coefficient, $Z_p(\mu_p)$, and express the error as a function of $\mu_p$. Formally,
\vspace{-1.5mm}

In Algorithm , $Z_p^{\mathcal{s},*}=[z_{p,c}^{\mathcal{s},*}]$, for a fixed location $\mu_p$, is the optimal solution to the least-squares problem $\mathcal{L}(\mu_p)=\min_{Z_p}\|\mathcal{P}_{\mu_p,\mathcal{s}}(\Phi)-D_p^{\mathcal{s}}Z_p\|^2$.
As such, Line of Algo is the optimal solution, $\mu_p^*=\argmin_{\mu_p} \mathcal{L}(\mu_p)$, for $\theta_{p,c}^\mathcal{s}=\|D_{p,c}^\mathcal{s}\|^{-1}D_{p,c}^\mathcal{s}$, assuming the norm $\|\mathcal{P}_{\mu_p,\mathcal{s}}(\Phi)\|^2$ is location independent.
\vspace{-2.5mm}
\noindent Proposition connects $\theta_{p,c}^\mathcal{s}$ to $D_{p,c}^\mathcal{s}$ based on minimizing the least-squares error over $Z_p$. However, this connection is not exploited exactly, since, to encourage sparsity in $Z_{p}$, we also apply a thresholding function $Th(z,\zeta) = \mathrm{sign}(z)\max(0, |z|-\zeta)$, with $\zeta$ a tunable margin. This is the shrinkage operation widely employed in image denoising. Finally, we encourage non-overlapping locations during training, as shown in Line 4, Algo .
$$\frac{\exp (-\sum_c \frac{1}{\|\theta_{p,c}\|_2^2}(\theta_{p,c} \ast \Phi_{x,c}(u))^2 / T)}{\sum_{u} \exp (-\sum_c \frac{1}{\|\theta_{p,c}\|_2^2}(\theta_{p,c} \ast \Phi_{x,c}(u))^2 / T)}$$
output the location that minimizes $\|\mathcal{P}_{\mu_p,\mathcal{s}}(\Phi)-D_pZ_p(\mu_p)\|_2$. This turns out to be another convolution operation but coupled with $Z_p(\mu_p)$, and
is somewhat more expensive in the context of a feedforward network. For this reason, we introduce a new parameter, $\theta_{p,c}$, and convolve it with $\Phi$ to estimate the location. Line 5, Algo generalizes the computation of the location to outputting a probability distribution, with $T$ as the temperature parameter. The requirement that the locations all be diverse is handled in a separate step in Algorithm 2.
\fi
\if0
In Algo , the $\argmax$ in \cref{alg_line:max_mu} and the delta function in \cref{alg_line:compute_z} are non-differentiable, preventing the back-propagation of gradients during training. We approximate the $\argmax$ with a softmax, taking the mean value at a sufficiently small temperature $T$.
We then use a Gaussian function with a small variance (0.5 in our experiments) to approximate the delta function. These modifications lead to an end-to-end differentiable scheme.
In few-shot recognition, the goal is to maximize the class log-likelihood $\log P(y, [\mu_p], [Z_p^\cs] \mid q) = \log P(y\mid q) + \log P(\mu_1, \ldots, \mu_K\mid\Phi) + \sum_{p\in\cI,\cs\in\cS}\log P(Z_p^\cs \mid \Phi,\mu_p)$. When the object is parsed, the third term is already maximized. Therefore, the model can be learned by optimizing the loss $L$ computed in \cref{alg:alg2}. After the model is trained, we can perform few-shot classification by finding the class $y$ that minimizes the total loss $L$ in each episode.
\fi
\setlength{\textfloatsep}{8pt}
[t]
\DontPrintSemicolon
Input: Training support image pairs $I_y = (x_i, y)$, $i \in [M], y\in [N]$, and training query image pairs $I_y^q = (x_q, y)$, $q\in [Q], y\in [N]$\\
\For{$x_i \in I_y$ and $x_q \in I_y^q$ }{
Parse the image $x_i$ or $x_q$ by \cref{alg:alg1} \\
Compute $\ell^{div}([\mu_p];x)$ by \cref{eq.div}
}
\For{$x_q \in I_y^q$}{
\For{$y \in [N]$}{
Estimate $[z_{p,c}^{(y)}]$ for class $y$\\
Predict part weight $\alpha(h_q,[h_{i,y}])$ \\
Compute the weighted distance measure $d(q,y)$ by \cref{eq.disc}
}
Compute the loss $\ell^{dis}_{q} = -\log \frac{\exp(-d(q,y_q))}{\sum_{y\in[N]} \exp(-d(q,y))} $
}
Output: loss $L = \sum_{q:x_q \in I_y^q } \ell_q^{dis} + \sum_{x\in I_y \cup I_y^q} \ell^{div}_x$
\caption{Training episode loss computation. $N,M,Q$ correspond to the number of classes, number of support examples/class, and the number of query samples/class in each episode respectively. }
\vspace{-0.1in}
2210.01633/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2022-05-13T14:11:33.213Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.54 Safari/537.36" etag="AMskrjXh160XmjVbWzms" version="18.0.2" type="google"><diagram id="lagTM9lJjXYPdob9pn4V" name="Page-1">7Ztdb5swFIZ/TS4rgc3n5ULb9WZapU6a1DsUvASN4Mpx2mS/fqTgkPisimflILlyLiI4mNfBD9iH186MFuvdV1G+rL7xijUzElS7Gb2dEZIGefd9COz7AI2CPrAUddWHwjHwVP9hQ1AV29YV25wVlJw3sn45Dy5427KFPIuVQvC382K/eHNe60u5ZCDwtCgbGP1ZV3LVR7M4GOMPrF6uVM1hMBxZl6rwENisyoq/nYTo3YwWgnPZb613BWsObafapT/v/oOjxx8mWCtNTtju05sqTfLnHXnI9/X3H4/y+WZQeS2b7XDBw4+Ve9UCrK2+HBqy22t52wXnK7luur2w2xR821bsUEPQ7W2k4L9ZwRsu3s+l9/d5kBXHI6oFu2uf9/WwCjT/eD3hsZW6u4vxNZNi3xV5GzkoDKsTBComWFPK+vVcvhxuh+VR7ljDI6+7ikkw3Lmh4jbcuESBVRIbvhULNpx12u6aUJpcEJKlWDIJhLqNk8seQ+9Y/wMxRUB8CvIf0OdJQh1EHOaWiKPoghAy4sgjNiRD6LWeYl0IGXGM21FH8yL6LB01sURMsgtCyIgTj9iQTJhcq6PWhZARpx6xaUdtizgJLgghI848YlMy0bXGYl0IGXGOizgrsvQ2cRIxGItD27FYv1d0IWTEqjrP2ABNbMlYz7d0IWzGyOaHy4z1hCu1fI5pckEImzHxjE3RxJaMQVKtC2EzxnC4PgdjkFXbplyxPrBPnHKFGBbX52AM0MTEjjHIz3UhbMbIHpfDjAEamlky1nMuXQibMbLJ5TJjHU2U2zGGr8f5tIyRXa7g/QMYuzAdAScVLZ/jEEwqTvwcI9tcLjMGaFI7xuBFWxfCZozscznMGHogljkX0RN0XQiZseqPPGMDNJZ9NfS5pu2rCbLP5TJj4HNZPsdUX0qiC2EzJp6xMRpqyRh4mXRaxsg+l8uMgZcZ2TGOggtC2IyRfS6HGQM0keV4DPxqXQibMbLP5TJj4FdbjscRMFMmHo+RfS6XGQMLMrFjDIxvXQibsfe5zOckLD2QWDdTdCFsxt7n+pAxQHOtFXsTzx+rldyeMUQDls9brvWB807TrvWh3gMxn3eyZJyCNQITMyaesTEay5wLzi1Om3NR74GYL6+2fHcCnf7V3p263fH/rn3x8U/D9O4v</diagram></mxfile>
2210.01633/main_diagram/main_diagram.pdf
ADDED
Binary file (1.46 kB).
2210.01633/paper_text/intro_method.md
ADDED
@@ -0,0 +1,159 @@
# Introduction
Gaussian processes (GPs) can be used to perform regression with high-quality uncertainty estimates, but they are slow. Naïvely, GP regression requires $O(n^3 + n^2m)$ computation time and $O(n^2)$ computation space when predicting at $m$ locations given $n$ data points [@williams2006gaussian]. A kernel matrix of size $n \times n$ must be inverted (or Cholesky decomposed), and then $m$ matrix-vector multiplications must be done with that inverse matrix (or $m$ linear solves with the Cholesky factors). A few methods that we will discuss later achieve $O(n^2m)$ time complexity [@wang2019exact; @zhang2005time].
With special kernels, GP regression can be faster and use less space. Inducing point methods, using $z$ inducing points, allow regression to be done in $O(z^2(n+m))$ time and in $O(z^2 + zn)$ space [@quinonero2005unifying; @snelson2005; @titsias09; @hensman13]. We will discuss the details of these inducing point kernels later, but they are kernels in their own right, not just approximations to other kernels. Unfortunately, these kernels are low dimensional (having a $z$-dimensional Hilbert space), which limits the expressivity of the GP model.
We present a new kernel, the *binary tree kernel*, that also allows for GP regression in $O(n+m)$ space and $O((n + m)\log(n+m))$ time (both model fitting and prediction). The time and space complexity of our method is also linear in the depth of the binary tree, which is naïvely linear in the dimension of the data, although in practice we can increase the depth sublinearly. Training some kernel parameters takes time quadratic in the depth of the tree. The dimensionality of the binary tree kernel is exponential in the depth of the tree, making it much more expressive than an inducing points kernel. Whereas for an inducing points kernel, the runtime is quadratic in the dimension of the Hilbert space, for the binary tree kernel, it is only logarithmic---an exponential speedup.
::: wrapfigure
R0.5 {width="\\linewidth"}
:::
A simple depiction of our kernel is shown in Figure [\[fig:tree\]](#fig:tree){reference-type="ref" reference="fig:tree"}, which we will define precisely in Section [3](#sec:kernel){reference-type="ref" reference="sec:kernel"}. First, we create a procedure for placing all data points on the leaves of a binary tree. Given the binary tree, the kernel between two points depends only on the depth of the deepest common ancestor. Because very different tree structures are possible for the data, we can easily form an ensemble of diverse GP regression models. Figure [1](#fig:kernelpicture){reference-type="ref" reference="fig:kernelpicture"} depicts a schematic sample from a binary tree kernel. Note how the posterior mean is piecewise flat, but the pieces can be small.
<figure id="fig:kernelpicture" data-latex-placement="b">
<img src="binarytreedraw.png" style="height:15mm" />
<figcaption>A schematic diagram of a function sampled from a binary tree kernel. The function is over the interval [0, 1], and points on the interval are placed onto the leaves of a depth-4 binary tree according to the first 4 bits of their binary expansion. The sampled function is in black. Purple represents the sample if the tree had depth 3, green depth 2, orange depth 1, and red depth 0.</figcaption>
</figure>
On a standard suite of benchmark regression tasks [@wang2019exact], we show that our kernel usually achieves better negative log likelihood (NLL) than state-of-the-art sparse methods and conjugate-gradient-based "exact" methods, at lower computational cost in the big-data regime.
There are not many limitations to using our kernel. The main limitation is that other kernels sometimes capture the relationships in the data better. We do not have a good procedure for understanding when data has more Matérn character or more binary tree character (except through running both and comparing training NLL). But given that the binary tree kernel usually outperforms the Matérn, we'll tentatively say the best first guess is that a new dataset has more binary tree character. One concrete limitation for some applications, like Bayesian Optimization, is that the posterior mean is piecewise-flat, so gradient-based heuristics for finding extrema would not work.
In contexts where a piecewise-flat posterior mean is suitable, we struggle to see when one would prefer a sparse or sparse variational GP to a binary tree kernel. The most thorough approach would be to run both and see which has a better training NLL, but if you had to pick one, the binary tree GP seems to be better performing and comparably fast. If minimizing mean-squared error is the objective, the Matérn kernel seems to do slightly better than the binary tree. If the dataset is small, and one needs a very fast prediction, a Matérn kernel may be the best option. But otherwise, if one cares about well-calibrated predictions, these initial results we present tentatively suggest using a binary tree kernel over the widely-used Matérn kernel.
The log-linear time and linear space complexity of the binary tree GP, with performance *exceeding* a "normal" kernel, could profoundly expand the viability of GP regression to larger datasets.
# Method
Our problem setting is regression. Given a function $f : \mathop{\mathrm{\mathcal{X}}}\to \mathop{\mathrm{\mathbb{R}}}$, for some arbitrary set $\mathop{\mathrm{\mathcal{X}}}$, we would like to predict $f(x)$ for various $x \in \mathop{\mathrm{\mathcal{X}}}$. What we have are observations of $f(x)$ for various (other) $x \in \mathop{\mathrm{\mathcal{X}}}$. Let $X \in \mathop{\mathrm{\mathcal{X}}}^n$ be an $n$-tuple of elements of $\mathop{\mathrm{\mathcal{X}}}$, and let $y \in \mathop{\mathrm{\mathbb{R}}}^n$ be an $n$-tuple of real numbers, such that $y_i \sim f(X_i) + \mathcal{N}(0, \lambda)$, for $\lambda \in \mathop{\mathrm{\mathbb{R}}}^{\ge 0}$. $X$ and $y$ comprise our training data.
With an $m$-tuple of test locations $X' \in \mathop{\mathrm{\mathcal{X}}}^m$, let $y' \in \mathop{\mathrm{\mathbb{R}}}^m$, with $y'_i = f(X'_i)$. $y'$ is the ground truth for the target locations. Given training data, we would like to produce a distribution over $\mathop{\mathrm{\mathbb{R}}}$ for each target location $X_i'$, such that it assigns high marginal probability to the unknown $y_i'$. Alternatively, we sometimes would like to produce point estimates $\hat{y}'_i$ in order to minimize the squared error $(\hat{y}'_i - y'_i)^2$.
A GP prior over functions is defined by a mean function $m : \mathop{\mathrm{\mathcal{X}}}\to \mathop{\mathrm{\mathbb{R}}}$, and a kernel $k : \mathop{\mathrm{\mathcal{X}}}\times \mathop{\mathrm{\mathcal{X}}}\to \mathop{\mathrm{\mathbb{R}}}$. The expected function value at a point $x$ is defined to be $m(x)$, and the covariance of the function values at two points $x_1$ and $x_2$ is defined to be $k(x_1, x_2)$. Let $K_{XX} \in \mathop{\mathrm{\mathbb{R}}}^{n \times n}$ be the matrix of kernel values $(K_{XX})_{ij} = k(X_i, X_j)$, and let $m_X \in \mathop{\mathrm{\mathbb{R}}}^n$ be the vector of mean values $(m_X)_i = m(X_i)$. For a GP to be well-defined, the kernel must be such that $K_{XX}$ is positive semidefinite for any $X \in \mathop{\mathrm{\mathcal{X}}}^n$. For a point $x \in \mathop{\mathrm{\mathcal{X}}}$, let $K_{Xx} \in \mathop{\mathrm{\mathbb{R}}}^n$ be the vector of kernel values: $(K_{Xx})_i = k(X_i, x)$, and let $K_{xX} = K_{Xx}^\top$. Let $\lambda \geq 0$ be the variance of observation noise. Let $\mu_x$ and $\sigma^2_x$ be the mean and variance of our posterior predictive distribution at $x$. Then, with $K_{XX}^{\lambda\textrm{inv}} = (K_{XX} + \lambda I)^{-1}$,
$$\begin{equation}
\mu_x := (y - m_X)^\top K_{XX}^{\lambda\textrm{inv}} K_{Xx} + m(x) \label{eqn:predmean}
\end{equation}$$

$$\begin{equation}
\sigma^2_x := k(x, x) - K_{xX} K_{XX}^{\lambda\textrm{inv}} K_{Xx} + \lambda. \label{eqn:predvar}
\end{equation}$$
See @williams2006gaussian for a derivation. We compute Equations [\[eqn:predmean\]](#eqn:predmean){reference-type="ref" reference="eqn:predmean"} and [\[eqn:predvar\]](#eqn:predvar){reference-type="ref" reference="eqn:predvar"} for all $x \in X'$.
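As a concrete check of Equations [\[eqn:predmean\]](#eqn:predmean) and [\[eqn:predvar\]](#eqn:predvar), here is a minimal NumPy sketch for a single test point (the function name and the explicit matrix inverse are our choices for clarity; in practice one would use a Cholesky solve):

```python
import numpy as np

def gp_predict(K_XX, K_Xx, k_xx, y, m_X, m_x, lam):
    # Posterior predictive mean and variance at one test location x.
    n = len(y)
    K_inv = np.linalg.inv(K_XX + lam * np.eye(n))  # (K_XX + lambda I)^{-1}
    mu = (y - m_X) @ K_inv @ K_Xx + m_x            # Eq. (predmean)
    var = k_xx - K_Xx @ K_inv @ K_Xx + lam         # Eq. (predvar)
    return mu, var
```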
We now introduce the binary tree kernel. First, we encode our data points as binary strings. So we have $\mathop{\mathrm{\mathcal{X}}}= \mathop{\mathrm{\mathbb{B}}}^q$, where $\mathop{\mathrm{\mathbb{B}}}= \{0, 1\}$, and $q \in \mathbb{N}$.
If $\mathop{\mathrm{\mathcal{X}}}= \mathop{\mathrm{\mathbb{R}}}^d$, we must map $\mathop{\mathrm{\mathbb{R}}}^d \mapsto \mathop{\mathrm{\mathbb{B}}}^q$. First, we rescale all points (training points and test points) to lie within the box $[0, 1]^d$. (If we have a stream of test points, and one lands outside of the box $[0, 1]^d$, we can either set $K_{xX}$ to $\mathbf{0}$ for that point or we rescale and retrain in $O(n \log n)$ time.) Then, for each $x \in [0, 1]^d$, for each dimension, we take the binary expansion up to some precision $p$, and for those $d \times p$ bits, we permute them using some fixed permutation. We call this permutation the bit order, and it is the same for all $x \in [0, 1]^d$. Note that now $q=dp$. See Figure [\[fig:bitorder\]](#fig:bitorder){reference-type="ref" reference="fig:bitorder"} for an example. We optimize the bit order during training, and we can also form an ensemble of GPs using different bit orders.
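A minimal sketch of this encoding, assuming points already rescaled to $[0,1]^d$; `encode` is a hypothetical name, the default bit order here is the identity, and the paper instead optimizes the permutation during training:

```python
def encode(x, p, perm=None):
    # x: point in [0, 1]^d; returns a tuple of d*p bits.
    # For each dimension, take the first p bits of the binary expansion,
    # then apply a fixed permutation (the "bit order") if one is given.
    bits = []
    for xj in x:
        for i in range(1, p + 1):
            bits.append(int(xj * 2 ** i) % 2)  # i-th bit of xj's expansion
    if perm is not None:
        bits = [bits[k] for k in perm]
    return tuple(bits)
```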
::: wrapfigure
R0.3 {width="\\linewidth"}
:::
For $x \in \mathop{\mathrm{\mathbb{B}}}^q$, let $x^{\leq i}$ be the first $i$ bits of $x$. $[\![\textrm{expression}]\!]$ evaluates to 1, if expression is true, otherwise 0. We now define the kernel:
::: definition
**Definition 1** (Binary Tree Kernel). *Given a weight vector $w \in \mathop{\mathrm{\mathbb{R}}}^q$, with $w \succeq 0$ and $||w||_1 = 1$, $$\begin{equation*}
k_w(x_1, x_2) = \sum_{i = 1}^q w_i \left[\!\left[x_1^{\leq i} = x_2^{\leq i} \right]\!\right]
\end{equation*}$$*
:::
So the more leading bits shared by $x_1$ and $x_2$, the larger the covariance between the function values. Consider, for example, points $x_1$ and $x_4$ from Figure [\[fig:tree\]](#fig:tree){reference-type="ref" reference="fig:tree"}, where $x_1$ is $($left, left, right$)$, and $x_4$ is $($left, right, right$)$; they share only the first leading "bit". We train the weight vector $w$ to maximize the likelihood of the training data.
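The prefix-matching sum can be sketched directly (the function name and bit-tuple representation are our choices; the early break is valid because once a prefix mismatches, all longer prefixes mismatch too):

```python
def binary_tree_kernel(x1, x2, w):
    # x1, x2: equal-length bit tuples; w: nonnegative weights summing to 1.
    # Adds w_i whenever the first i bits of x1 and x2 agree.
    k = 0.0
    for i, wi in enumerate(w):
        if x1[i] != x2[i]:
            break  # prefixes can never agree again past the first mismatch
        k += wi
    return k
```

With uniform weights and the points from the tree figure, $x_1 = (0,0,1)$ and $x_4 = (0,1,1)$ share only the first leading bit, so their kernel value is $1/3$.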
::: {#prop:psd .proposition}
**Proposition 1** (Positive Semidefiniteness). *For $X \in \mathop{\mathrm{\mathcal{X}}}^n$, for $k = k_w$, $K_{XX} \succeq 0$.*
:::
::: proof
*Proof.* Let $s \in \bigcup_{i = 1}^q \mathop{\mathrm{\mathbb{B}}}^i$ be a binary string, and let $|s|$ be the length of $s$. Let $X_{[s]} \in \mathop{\mathrm{\mathbb{R}}}^n$ with $(X_{[s]})_j = \left[\!\left[X_j^{\leq |s|} = s \right]\!\right]$. $X_{[s]} X_{[s]}^\top$ is clearly positive semidefinite. Finally, $K_{XX} = \sum_{i = 1}^q \sum_{s \in \mathop{\mathrm{\mathbb{B}}}^i} w_i X_{[s]} X_{[s]}^\top$, and recall $w_i \geq 0$, so $K_{XX} \succeq 0$. ◻
:::
In order to do GP regression in $O(n)$ space and $O(n \log n)$ time, we develop a "Sparse Rank One Sum" representation of linear operators (SROS). This was developed separately from the very similar Hierarchical matrices [@bebendorf2008hierarchical], which we discuss below. In SROS form, linear transformation of a vector can be done in $O(n)$ time instead of $O(n^2)$. We will store our kernel matrix and inverse kernel matrix in SROS form. The proof of Proposition [1](#prop:psd){reference-type="ref" reference="prop:psd"} exemplifies representing a matrix as the sum of sparse rank one matrices. Note that each $X_{[s]}$ is sparse---if $q$ is large, most $X_{[s]}$'s are the zero vector.
We now show how to interpret an SROS representation of an $n \times n$ matrix. Let $[n] = \{1, 2, ..., n\}$. For $r \in \mathbb{N}$, let $L : [r]^n \times [r]^n \times \mathop{\mathrm{\mathbb{R}}}^n \times \mathop{\mathrm{\mathbb{R}}}^n \to \mathop{\mathrm{\mathbb{R}}}^{n \times n}$ construct a linear operator from four vectors.
::: {#def:sros .definition}
**Definition 2** (Linear Operator from Simple SROS Representation). *Let $p, p' \in [r]^n$, and let $u, u' \in \mathop{\mathrm{\mathbb{R}}}^n$. For $l \in [r]$, let $u^{p = l} \in \mathop{\mathrm{\mathbb{R}}}^n$ be the vector where $u^{p = l}_j = u_j [\![p_j = l]\!]$, likewise for $u'$ and $p'$. Then: $L(p, p', u, u') \mapsto \sum_{i = 1}^{r} u^{p = i} {(u')^{p' = i}}^\top$.*
:::
::: wrapfigure
R0.36 {width="\\linewidth"}
:::
We depict Definition [2](#def:sros){reference-type="ref" reference="def:sros"} in Figure [\[fig:sros\]](#fig:sros){reference-type="ref" reference="fig:sros"}. $p$ and $p'$ represent partitions over $n$ elements: all elements with the same integer value in the vector $p$ belong to the same partition. Note that $r$, the number of parts in the partition, need not exceed $n$, the number of elements being partitioned. If $p = p'$ (which is almost always the case for us) and the elements of $p$, $u$, and $u'$ were shuffled so that all elements in the same partition were next to each other, then $L(p, p', u, u')$ would be block diagonal. Note that $L(p, p', u, u')$ is not necessarily low rank. If $p$ is the finest possible partition, and $p = p'$, $L(p, p', u, u')$ is diagonal. SROS matrices can be thought of as a generalization of two types of matrix that are famously amenable to fast computation: rank one matrices (all points in the same partition) and diagonal matrices (each point in its own partition).
We now extend the definition of $L$ to allow for multiple $p$, $p'$, $u$, and $u'$ vectors.
::: definition
**Definition 3** (Linear Operator from SROS Representation). *Let $L : [r]^{n \times q} \times [r]^{n \times q} \times \mathop{\mathrm{\mathbb{R}}}^{n \times q} \times \mathop{\mathrm{\mathbb{R}}}^{n \times q} \to \mathop{\mathrm{\mathbb{R}}}^{n \times n}$. Let $P, P' \in [r]^{n \times q}$, and let $U, U' \in \mathop{\mathrm{\mathbb{R}}}^{n \times q}$. Let $P_{:, i}$, $U_{:, i}$, etc. be the $i$^th^ columns of the respective arrays. Then: $L(P, P', U, U') \mapsto \sum_{i = 1}^q L(P_{:, i}, P'_{:, i}, U_{:, i}, U'_{:, i})$.*
:::
Algorithm [\[alg:matvec\]](#alg:matvec){reference-type="ref" reference="alg:matvec"} performs linear transformation of a vector using SROS representation in $O(nq)$ time.
:::: algorithm
::: algorithmic
**Input:** $P, P' \in [r]^{n \times q}$, $U, U' \in \mathop{\mathrm{\mathbb{R}}}^{n \times q}$, $x \in \mathop{\mathrm{\mathbb{R}}}^n$ \
**Output:** $y = L(P, P', U, U')\,x$ \
$y \gets \mathbf{0} \in \mathop{\mathrm{\mathbb{R}}}^n$ \
**for** $i = 1, \ldots, q$: \
&nbsp;&nbsp;$p, p', u, u' \gets P_{:, i}, P'_{:, i}, U_{:, i}, U'_{:, i}$ \
&nbsp;&nbsp;$z \gets \mathbf{0} \in \mathop{\mathrm{\mathbb{R}}}^r$ \
&nbsp;&nbsp;**for** $j = 1, \ldots, n$: $z_{p'_j} \gets z_{p'_j} + u'_j x_j$ []{#lin:indexaddmatvec label="lin:indexaddmatvec"} \
&nbsp;&nbsp;**for** $j = 1, \ldots, n$: $y_j \gets y_j + z_{p_j} u_j$ []{#lin:fancyindexingmatvec label="lin:fancyindexingmatvec"} \
**return** $y$
:::
::::
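The matvec can be sketched in NumPy as follows (the function name `sros_matvec` and 0-indexed partition labels are our conventions); each column does one scatter-add into buckets and one gather back out, giving the stated $O(nq)$ time:

```python
import numpy as np

def sros_matvec(P, Pp, U, Up, x):
    # Computes y = L(P, P', U, U') x without forming the n x n matrix.
    n, q = P.shape
    y = np.zeros(n)
    for i in range(q):
        r = int(max(P[:, i].max(), Pp[:, i].max())) + 1
        z = np.zeros(r)
        np.add.at(z, Pp[:, i], Up[:, i] * x)  # z_{p'_j} += u'_j x_j
        y += z[P[:, i]] * U[:, i]             # y_j += z_{p_j} u_j
    return y
```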
We now discuss how to approximately invert a certain kind of symmetric SROS matrix, but our methods could be extended to asymmetric matrices. First, we define a partial ordering over partitions. For two partitions $p, p'$, we say $p' \leq p$ if $p'$ is finer than or equal to $p$; that is, $p'_j = p'_{j'} \implies p_j = p_{j'}$. Using that partial ordering, a symmetric SROS matrix can be approximately inverted efficiently if for all $1 \leq i, i' \leq q$, $P_{:, i} \leq P_{:, i'}$ or $P_{:, i'} \leq P_{:, i}$. As the reader may have recognized, our kernel matrix $K_{XX}$ can be written as an SROS matrix with this property.
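The "finer than or equal" test can be sketched as a small helper (the name `finer_or_equal` is ours): every part of $p'$ must sit inside a single part of $p$.

```python
def finer_or_equal(p_fine, p_coarse):
    # p' <= p  iff  p'_j = p'_{j'} implies p_j = p_{j'}.
    blocks = {}
    for j, label in enumerate(p_fine):
        blocks.setdefault(label, set()).add(p_coarse[j])
    # each block of p' must map into exactly one block of p
    return all(len(parts) == 1 for parts in blocks.values())
```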
We will write symmetric SROS matrices in a slightly more convenient form. All $(u')^{p = l}$ must be a constant times $u^{p = l}$. We will store these constants in an array $C$. Let $L(P, C, U)$ be shorthand for $L(P, P, U, C \odot U)$, where $\odot$ denotes element-wise multiplication. For $L(P, C, U)$ to be symmetric, it must be the case that $P_{ji} = P_{j'i} \implies C_{ji} = C_{j'i}$. Then, all elements of $U$ corresponding to a given $u^{p = l}$ are multiplied by the same constant. We now present an algorithm for calculating $(L(P, C, U) + \lambda I)^{-1}$, for $\lambda \neq 0$, which is an approximate inversion of $L(P, C, U)$. We have not yet analyzed numerical sensitivity for $\lambda \to 0$, but we conjecture that all floating point numbers involved need to be stored to at least $\log_2(1/\lambda)$ bits. Without loss of generality, let $\lambda = 1$, and note $(L(P, C, U) + \lambda I)^{-1} = \lambda^{-1} (L(P, \lambda^{-1}C, U) + I)^{-1}$.
By assumption, all columns of $P$ are comparable with respect to the partial ordering above, so we can reorder the columns of $P$ such that $P_{:, i} \geq P_{:, j}$ for $i < j$. The key identity that we use to develop our fast inversion algorithm is the Sherman--Morrison Formula:
$$\begin{equation}
\label{eqn:rankoneupdate}
(A + cuu^\top)^{-1} = A^{-1} - \frac{A^{-1} uu^\top A^{-1}}{c^{-1} + u^\top A^{-1} u}
\end{equation}$$
Starting with $A = I$, we add the sparse rank one matrices iteratively, from the finest partition to the coarsest one, updating $A^{-1}$ as we go. We represent $(L(P, C, U) + I)^{-1}$ in the form $I + L(P, C', U')$, so we write an algorithm that returns $C'$ and $U'$. We can also quickly calculate $\log |L(P, C, U) + I|$ at the same time, using the matrix determinant lemma: $|A + cuu^\top| = (1 + c u^\top A^{-1} u) |A|$.
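A quick numerical sanity check of the Sherman--Morrison update as used here, written for symmetric $A$ so that $A^{-1}u$ can be reused on both sides of the outer product (the helper name is ours):

```python
import numpy as np

def rank_one_inverse_update(A_inv, u, c):
    # (A + c u u^T)^{-1} from A^{-1}, for symmetric A, in O(n^2) time.
    Au = A_inv @ u
    return A_inv - np.outer(Au, Au) / (1.0 / c + u @ Au)
```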
::: {#thm:inv .theorem}
**Theorem 1** (Fast Inversion). *For $P \in [r]^{n \times q}$ and $C, U \in \mathop{\mathrm{\mathbb{R}}}^{n \times q}$, if $P_{:, i} \mathop{\mathrm{\geq}}\limits^{\textrm{(is coarser than)}} P_{:, j}$ for $i < j$, then there exists $C', U' \in \mathop{\mathrm{\mathbb{R}}}^{n \times q}$, such that $(L(P, C, U) + I)^{-1} = I + L(P, C', U')$. There exists an algorithm for computing $C'$ and $U'$ that takes $O(nq^2)$ time.*
:::
::: proof
*Proof.* For $X \in \mathop{\mathrm{\mathbb{R}}}^{n \times q}$, let $X_{:, i+1:q} \in \mathop{\mathrm{\mathbb{R}}}^{n \times (q - i)}$ be columns $i + 1$ through $q$ of matrix $X$ (inclusive). Let $A_i = I + L(P_{:, i+1:q}, C_{:, i+1:q}, U_{:, i+1:q})$, and $A_q = I$. Now suppose $A_i^{-1}$ can be written as $I + L(P_{:, i+1:q}, C'_{:, i+1:q}, U'_{:, i+1:q})$ for some $C'$ and $U'$. For the base case of $i=q$, this holds trivially. We show it also holds for $i - 1$, and we can compute $C'_{:, i:q}$, $U'_{:, i:q}$ in $O\bigl( n(q - i) \bigr)$ time. Let $p = P_{:, i}$, $u = U_{:, i}$, and $c = C_{:, i}$. Consider $u^{p = l}$, where each element is zero unless the corresponding element of $p$ equals $l$. What do we know about the product $A_i^{-1} u^{p = l}$ (as seen in Equation [\[eqn:rankoneupdate\]](#eqn:rankoneupdate){reference-type="ref" reference="eqn:rankoneupdate"})?
|
| 117 |
+
|
| 118 |
+
Because the columns of $P$ go from coarser partitions to finer ones, all of the vectors generating the sparse rank one components of $L(P_{:, i+1:q}, C'_{:, i+1:q}, U'_{:, i+1:q})$ are from partitions that are equal to or finer than $p$. Thus, they are either zero everywhere $u^{p = l}$ is zero, or zero everywhere $u^{p = l}$ is nonzero. Vectors $v$ of the second kind can be ignored, as $c v v^\top u^{p = l} = 0$. Thus, when multiplying $L(P_{:, i+1:q}, C'_{:, i+1:q}, U'_{:, i+1:q})$ by $u^{p = l}$, the only relevant vectors are filled with zeros except where the corresponding element of $p$ equals $l$. So we can get rid of those rows of $P_{:, i+1:q}$, $C'_{:, i+1:q}$, and $U'_{:, i+1:q}$. Suppose there are $n_l$ elements of $p$ that equal $l$. Then $L(P_{:, i+1:q}, C'_{:, i+1:q}, U'_{:, i+1:q}) u^{p = l}$ involves $n_l$ rows, and can be computed in $O\bigl( n_l(q - i) \bigr)$ time. Moreover, this product, which we'll call $(u')^{p = l}$, is only nonzero when the corresponding element of $p$ equals $l$, so it has the same sparsity pattern as $u^{p = l}$. The other component of $A_i^{-1}$ is the identity matrix, and $I u^{p = l}$ clearly has the same sparsity as $u^{p = l}$. Thus, returning to Equation [\[eqn:rankoneupdate\]](#eqn:rankoneupdate){reference-type="ref" reference="eqn:rankoneupdate"}, when we add $u^{p = l} (u^{p = l})^\top$ to $A_i$, we update $A_i^{-1}$ with an outer product of vectors whose sparsity pattern is the same as that of $u^{p = l}$.
|
| 119 |
+
|
| 120 |
+
The updates for the different $u^{p = l}$ need not be applied to $A_i^{-1}$ one at a time. For $l \neq l'$, $u^{p = l}$ and $u^{p = l'}$ are nonzero at separate indices, so $u^{p = l}$ and $(u')^{p = l'}$ are nonzero at separate indices, so the extra component of $A_i^{-1}$ introduced by the $u^{p = l'}$ update is irrelevant to the $u^{p = l}$ update, because $(u^{p = l})^\top (u')^{p = l'} = 0$. Since the $u^{p = l}$ update takes $O(n_l(q - i))$ time, all of them together take $O(\sum_l n_l (q - i))$ time, which equals $O(n(q-i))$ time. Calculating an element of $c'$ only involves computing the denominator in Equation [\[eqn:rankoneupdate\]](#eqn:rankoneupdate){reference-type="ref" reference="eqn:rankoneupdate"}, using a matrix-vector product already computed. So we can write $C'_{:, i:q}$ and $U'_{:, i:q}$ by prepending a column to $C'_{:, i+1:q}$ and $U'_{:, i+1:q}$, using the same partition $p$, and it takes $O(n(q-i))$ time.
|
| 121 |
+
|
| 122 |
+
Following the induction down to $i=0$, we have $(L(P, C, U) + I)^{-1} = I + L(P, C', U')$, and a total time of $O(nq^2)$. ◻
|
| 123 |
+
:::
|
| 124 |
+
|
| 125 |
+
Algorithm [\[alg:inv\]](#alg:inv){reference-type="ref" reference="alg:inv"} also performs approximate inversion, which we prove in Appendix [9](#sec:alginv){reference-type="ref" reference="sec:alginv"}. It differs slightly from the algorithm in the proof, but can take full advantage of a GPU speedup. In the setting where all columns of $U$ are identical, observe that in Lines [\[lin:indexadd2\]](#lin:indexadd2){reference-type="ref" reference="lin:indexadd2"} and [\[lin:fancyindexing2\]](#lin:fancyindexing2){reference-type="ref" reference="lin:fancyindexing2"}, the same computation is repeated for all $k \in [i]$. Indeed, in this setting, this block of code can be modified to run in $O(n)$ time rather than $O(ni)$, making the whole algorithm run in $O(nq)$ time, as shown in Proposition [4](#prop:nq){reference-type="ref" reference="prop:nq"} in Appendix [9](#sec:alginv){reference-type="ref" reference="sec:alginv"}.
|
| 126 |
+
|
| 127 |
+
:::: restatable
|
| 128 |
+
algorithmalginv
|
| 129 |
+
|
| 130 |
+
::: algorithmic
|
| 131 |
+
**Input:** $P \in [r]^{n \times q}$, $C, U \in \mathop{\mathrm{\mathbb{R}}}^{n \times q}$
**Output:** $I + L(P, C', U') = (I + L(P, C, U))^{-1}$; $x = \log |I + L(P, C, U)|$
$x, C', U' \gets 0, \mathbf{0} \in \mathop{\mathrm{\mathbb{R}}}^{n \times q}, U$
$p, c, u, u' \gets P_{:, i}, C_{:, i}, U_{:, i}, U'_{:, i}$
$z \gets \mathbf{0} \in \mathop{\mathrm{\mathbb{R}}}^r$
$z_{p_j} \gets z_{p_j} + c_j u'_j u_j$ []{#lin:indexadd1 label="lin:indexadd1"}
$C'_{ji} \gets -c_j/(1 + z_{p_j})$ []{#lin:fancyindexing1 label="lin:fancyindexing1"}
$x \gets x + \log(1 + z_l)$ []{#lin:simpleop label="lin:simpleop"}
$y \gets \mathbf{0} \in \mathop{\mathrm{\mathbb{R}}}^{n \times i}$
$y_{p_j k} \gets y_{p_j k} + u'_j U_{jk}$ []{#lin:indexadd2 label="lin:indexadd2"}
$U'_{jk} \gets U'_{jk} + C'_{ji} u'_j y_{p_j k}$ []{#lin:fancyindexing2 label="lin:fancyindexing2"}
**Return:** $C', U', x$
|
| 132 |
+
:::
|
| 133 |
+
::::
|
| 134 |
+
|
| 135 |
+
A Hierarchical matrix is either represented as a low-rank matrix or as a $2 \times 2$ block matrix of Hierarchical matrices [@bebendorf2008hierarchical]. In our SROS format, many of the sparse rank one matrices overlap, whereas in a Hierarchical matrix the low-rank blocks do not overlap, and converting an SROS matrix into a Hierarchical matrix would typically be inefficient. Hierarchical matrices admit approximate inversion in $O(n a^2 \log^2 n)$ time, where $a$ is the maximum rank of the component submatrices [@hackbusch2004hierarchical]. However, this is not an approximation in a technical sense, as there is no error bound. At many successive steps of the algorithm, a rank $2a$ matrix is approximated by a rank $a$ matrix [@hackbusch1999sparse]; to our knowledge, there is no analysis of how the resulting errors might cascade. After converting an SROS matrix to hierarchical form, this rough inversion would take $O(n q^2 \log^2 n)$ time.
|
| 136 |
+
|
| 137 |
+
We now show that our kernel matrix $K_{XX}$ can be written in SROS form, with $P$ containing successively finer partitions. Thus, $K_{XX}$ can be approximately inverted quickly, for use in Equations [\[eqn:predmean\]](#eqn:predmean){reference-type="ref" reference="eqn:predmean"} and [\[eqn:predvar\]](#eqn:predvar){reference-type="ref" reference="eqn:predvar"}. Next, we'll show that we can efficiently optimize the log likelihood of the training data by tuning the weight vector $w$ along with the bit order. The log likelihood can be calculated in $O(nq \log n)$ time and then the gradient w.r.t. $w$ in $O(nq^2)$ time.
|
| 138 |
+
|
| 139 |
+
Recall from the proof of Proposition $\ref{prop:psd}$: $K_{XX} = \sum_{i = 1}^q \sum_{s \in \mathop{\mathrm{\mathbb{B}}}^i} w_i X_{[s]} X_{[s]}^\top$, where $X_{[s]} \in \mathop{\mathrm{\mathbb{R}}}^n$ with $(X_{[s]})_j = \left[\!\left[X_j^{\leq |s|} = s \right]\!\right]$. So we will set $P_{:, i}$, $C_{:, i}$, and $U_{:, i}$, so that $L(P_{:, i}, C_{:, i}, U_{:, i}) = \sum_{s \in \mathop{\mathrm{\mathbb{B}}}^i} w_i X_{[s]} X_{[s]}^\top$. Let $P_{:, i}$ partition the set of points $X$ so that points are in the same partition if the first $i$ bits match. Now, requiring the first $i+1$ bits to match is a stricter criterion than requiring the first $i$ bits to match, so the $P_{:, i}$ grow successively finer. For any piece of the partition where the first $i$ bits of the constituent points equals the bitstring $s$, the corresponding sparse rank one component of $K_{XX}$ is $w_i X_{[s]} X_{[s]}^\top$. So let $U_{:, i} = \mathbf{1}^n$, and let $C_{:, i} = w_i \mathbf{1}^n$.
|
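For a small example, the kernel in this expansion can be formed densely (purely illustrative; the point of the SROS form is to avoid this $O(n^2 q)$ construction):

```python
import numpy as np

def kernel_dense(X, w):
    """K[j,k] = sum_i w_i * [first i bits of X_j equal first i bits of X_k],
    i.e. sum_i sum_{s in B^i} w_i (X_[s] X_[s]^T)[j,k]."""
    n, q = X.shape
    K = np.zeros((n, n))
    for i in range(1, q + 1):
        # boolean matrix: do points j and k share their first i bits?
        same_prefix = (X[:, None, :i] == X[None, :, :i]).all(axis=2)
        K += w[i - 1] * same_prefix
    return K

X = np.array([[0, 0], [0, 1], [1, 1]])
w = np.array([0.5, 0.25])
K = kernel_dense(X, w)
# points 0 and 1 share only the 1-bit prefix: K[0,1] = 0.5; K[0,0] = 0.75
```

Each shared prefix of length $i$ contributes $w_i$ to an entry, which is exactly the sum of overlapping sparse rank one blocks described above.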
| 140 |
+
|
| 141 |
+
::: {#prop:kxx .proposition}
|
| 142 |
+
**Proposition 2** (SROS Form Kernel). *$K_{XX} = L(P, C, U)$, as defined above.*
|
| 143 |
+
:::
|
| 144 |
+
|
| 145 |
+
This follows immediately from the definitions. To compute the partitions $P_{:, i}$, we sort $X$, a tuple of bit strings; points with the same first $i$ bits are then contiguous and easy to identify. This all takes $O(n q \log n)$ time. Now note that $U_{:, i} = U_{:, i'}$ for all $i, i'$, so $(K_{XX} + \lambda I)^{-1}$ and $|K_{XX} + \lambda I|$ can be computed in $O(nq)$ time, rather than $O(nq^2)$.
|
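The partition computation can be sketched as follows; for clarity this version calls `np.unique` once per prefix length rather than performing the single $O(nq \log n)$ sort described above (names are ours):

```python
import numpy as np

def bit_partitions(X):
    """X: (n, q) binary array of bit strings (one per row).
    Returns P: (n, q) where P[j, i] is the id of the partition piece
    containing point j when grouping by the first i+1 bits; the
    columns of P grow successively finer."""
    n, q = X.shape
    P = np.zeros((n, q), dtype=int)
    for i in range(q):
        # np.unique assigns each distinct length-(i+1) prefix an integer id
        _, inv = np.unique(X[:, : i + 1], axis=0, return_inverse=True)
        P[:, i] = inv.reshape(-1)
    return P

X = np.array([[0, 0, 1],
              [0, 0, 0],
              [0, 1, 1],
              [1, 1, 0]])
P = bit_partitions(X)
print(P[:, 0])  # [0 0 0 1]: grouping by the first bit puts points 0-2 together
```

Successive columns refine the previous ones: two points in different pieces at column $i$ can never share a piece at column $i+1$.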
| 146 |
+
|
| 147 |
+
The training negative log likelihood of a GP is that of the corresponding multivariate Gaussian on the training data. So: $\mathop{\mathrm{\textrm{NLL}}}(w) = \frac{1}{2} \left(y^\top (K_{XX}(w) + \lambda I)^{-1} y + \log |K_{XX}(w) + \lambda I| + n \log(2 \pi)\right)$. This can be computed in $O(nq)$ time, since matrix-vector multiplication takes $O(nq)$ time for a matrix in SROS form. So if the bit order is unchanged, an optimization step can be done in $O(nq)$ time, and if the data needs to be resorted, then in $O(nq\log n)$ time. On the largest dataset we tested (House Electric), with $n \approx$ 1.3 million and $q=88$, sorting the data and computing $P$ takes about 0.96 seconds on a GPU, and then calculating the negative log likelihood takes about another 1.08 seconds. We show in Appendix [10](#sec:grad){reference-type="ref" reference="sec:grad"} how to compute $\nabla_w \mathop{\mathrm{\textrm{NLL}}}$ in $O(nq^2)$ time.
|
| 148 |
+
|
| 149 |
+
To optimize the bit order and weight vector at the same time, we represent both with a single parameter vector $\theta \in \mathop{\mathrm{\mathbb{R}}}_+^q$, with $||\theta||_\infty = 1$. To get the bit order from $\theta$, we start with a default bit order and permute the bit order according to a permutation that would sort $\theta$ in descending order. To get the weight vector, we sort $\theta$ in descending order, add a $0$ at the end, and compute the differences between adjacent elements. When there are ties in the elements of $\theta$, the choice of bit order does not affect the negative log likelihood (or the kernel at all) because the relevant associated weight is $0$. The negative log likelihood is continuous with respect to $\theta$, and when all values of $\theta$ are unique, it is differentiable with respect to $\theta$. Letting $\theta = e^{\phi} / ||e^{\phi}||_{\infty}$, we minimize loss w.r.t. $\phi$ using BFGS [@fletcher2013practical].
|
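The mapping from $\theta$ to a bit order and a weight vector might be sketched as follows (our variable names; a simplified reading of the construction):

```python
import numpy as np

def theta_to_order_and_weights(theta):
    """theta: positive vector with max element 1.  Returns (order, w):
    'order' permutes the bits so the largest theta comes first; 'w'
    holds differences of adjacent sorted thetas (with a 0 appended),
    so all w_i >= 0 and w sums to max(theta) = 1."""
    order = np.argsort(-theta)                    # sorts theta descending
    sorted_theta = theta[order]
    w = sorted_theta - np.append(sorted_theta[1:], 0.0)
    return order, w

theta = np.array([0.3, 1.0, 0.5])
order, w = theta_to_order_and_weights(theta)
print(order)  # [1 2 0]: bit 1 first, then bit 2, then bit 0
print(w)      # differences 1.0-0.5, 0.5-0.3, 0.3-0, i.e. ~[0.5, 0.2, 0.3]
```

Ties in $\theta$ yield zero weights, which is why the arbitrary tie-breaking of the bit order does not change the kernel.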
| 150 |
+
|
| 151 |
+
To calculate the predictive mean at a list of predictive locations $X'$, we first multiply $y$ by $(K_{XX} + \lambda I)^{-1}$, and then we multiply that vector by $K_{XX'}$. We obtain both $K_{XX}$ and $K_{XX'}$ in SROS form as follows. Let $\tilde{X} = X \circ X'$ be the concatenation of the two tuples, now an $(n+m)$-tuple. Writing $K_{\tilde{X}\tilde{X}} = L(\tilde{P}, \tilde{C}, \tilde{U})$, the arrays on the r.h.s. can be computed in $O((n+m)q \log(n+m))$ time. Then, with $P$, $C$, and $U$ being the first $n$ rows of $\tilde{P}$, $\tilde{C}$, $\tilde{U}$, $K_{XX} = L(P, C, U)$. And letting $P''$ and $U''$ be the last $m$ rows, $K_{XX'} = L(P, P'', C \odot U, U'')$. Thus, the predictive mean $\mu_x$ from Equation [\[eqn:predmean\]](#eqn:predmean){reference-type="ref" reference="eqn:predmean"} can be computed at $m$ locations in $O((n+m)q \log(n+m))$ time.
|
| 152 |
+
|
| 153 |
+
The predictive covariance matrix, which extends the predictive variance from Equation [\[eqn:predvar\]](#eqn:predvar){reference-type="ref" reference="eqn:predvar"}, is calculated as $\Sigma_{X'} = K_{X'X'} + \lambda I_m - K_{X'X}(K_{XX} + \lambda I_{n})^{-1} K_{XX'} = (K_{\tilde{X}\tilde{X}} + \lambda I_{m + n}) / K_{XX}$, where $/$ denotes the Schur complement. By a property of block matrix inversion, the bottom-right $m \times m$ block of $(K_{\tilde{X}\tilde{X}} + \lambda I)^{-1}$ equals $((K_{\tilde{X}\tilde{X}} + \lambda I_{m + n}) / K_{XX})^{-1}$. So we get the predictive precision matrix in $O((n+m)q\log(n+m))$ time by inverting $K_{\tilde{X}\tilde{X}} + \lambda I$ and taking that bottom-right block. Then, we get the predictive covariance matrix by inverting the precision matrix; this takes $O(mq^2)$ time, since the columns of its $U$ are no longer all equal. If we only want the diagonal elements of an SROS matrix (the independent predictive variances in this case), we can simply sum the rows of $C \odot U \odot U$ in $O(mq)$ time. Thus, in total, computing the independent predictive variances requires $O((n+m)q\log(n+m) + mq^2)$ time. See Algorithm [\[alg:gp\]](#alg:gp){reference-type="ref" reference="alg:gp"}.
|
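The block-inversion property used here can be checked numerically with generic matrices standing in for the kernel blocks:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 4, 3
B = rng.standard_normal((n + m, n + m))
M = B @ B.T + (n + m) * np.eye(n + m)    # SPD stand-in for K + lambda*I

A_block = M[:n, :n]                      # top-left block, plays K_XX + lambda*I
schur = M[n:, n:] - M[n:, :n] @ np.linalg.solve(A_block, M[:n, n:])

# bottom-right m x m block of M^{-1} equals the inverse of the Schur complement
Minv = np.linalg.inv(M)
assert np.allclose(Minv[n:, n:], np.linalg.inv(schur))
```

This is why one SROS inversion of the concatenated system yields the predictive precision matrix directly.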
| 154 |
+
|
| 155 |
+
:::: algorithm
|
| 156 |
+
::: algorithmic
|
| 157 |
+
**Input:** $X \in \mathop{\mathrm{\mathbb{B}}}^{n \times q}$, $y \in \mathop{\mathrm{\mathbb{R}}}^n$, $X' \in \mathop{\mathrm{\mathbb{B}}}^{m \times q}$, $w \in \mathop{\mathrm{\mathbb{R}}}^q$, $\lambda \in \mathop{\mathrm{\mathbb{R}}}^{+}$
**Output:** $\mu_{X'}$ and $\sigma^2_{X'}$ are the predictive means and variances at $X'$, and $\textrm{nll}$ the training negative log likelihood.
$\tilde{X} \gets X \circ X'$
$\tilde{X}^{\uparrow}, \textrm{perm} \gets \textrm{Sort}(\tilde{X})$
$\tilde{P}^{\uparrow}_{ji} \gets \# \textrm{of unique rows in } X^{\uparrow}_{1:j, 1:i}$
$\tilde{P} \gets \textrm{perm}^{-1}(P^{\uparrow})$
$P, P' \gets \tilde{P}_{1:n}, \tilde{P}_{n+1:n+m}$
$U, U', \tilde{U} \gets \mathbf{1}^{n \times q}, \mathbf{1}^{m \times q}, \mathbf{1}^{(n+m) \times q}$
$C, \tilde{C} \gets \mathbf{1}^n w^T, \mathbf{1}^{n+m} w^T$
$C_\lambda^{-1}, U^{-1}, \textrm{logdet}_\lambda \gets \textrm{Invert}(P, \lambda^{-1}C, U)$
$C^{-1}, \textrm{logdet} \gets \lambda^{-1}C_\lambda^{-1}, \textrm{logdet}_\lambda + n \log(\lambda)$
$z \gets \textrm{LinTransform}(P, P, U^{-1}, C^{-1} \odot U^{-1}, y) + \lambda^{-1}y$
$\mu_{X'} \gets \textrm{LinTransform}(P', P, U', C \odot U, z)$
$\textrm{nll} \gets (y^\top z + \textrm{logdet} + n\log(2\pi))/2$
$\tilde{C}^{\textrm{prec}}, \tilde{U}^{\textrm{prec}} \gets \textrm{Invert}(\tilde{P}, \lambda^{-1}\tilde{C}, \lambda^{-1}\tilde{U})$
$C^{\textrm{prec}}, U^{\textrm{prec}} \gets \tilde{C}^{\textrm{prec}}_{n+1:n+m}, \tilde{U}^{\textrm{prec}}_{n+1:n+m}$
$C^{\textrm{cov}}, U^{\textrm{cov}} \gets \textrm{Invert}(P', C^{\textrm{prec}}, U^{\textrm{prec}})$
$\sigma^2_{X'} \gets \lambda(\mathbf{1}^m + \textrm{SumEachRow}(C^{\textrm{cov}} \odot U^{\textrm{cov}} \odot U^{\textrm{cov}}))$
**Return:** $\mu_{X'}$, $\sigma^2_{X'}, \textrm{nll}$
|
| 158 |
+
:::
|
| 159 |
+
::::
|
2210.02411/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2022-09-05T14:43:33.626Z" agent="5.0 (Macintosh)" version="20.2.8" etag="BSKLAX_dBWAK_sN3Ov4e" type="google"><diagram id="V7ExkMpW7dBSaVs2j7wo">7Vpbj5s4FP41kboPjYwBA4+TZLZ9qVRpHrr7tPKAE6ySmAKZJP31tYMN2PFMc4HMdLIoUvCxfXz5zvl8ODByp8vtpwLn6ReWkGwEQbIdubMRhI4T+PxPSHZSghCqJYuCJlLWCh7oTyKFQErXNCGl1rBiLKtorgtjtlqRuNJkuCjYRm82Z5k+ao4X5EDwEOPsUPqNJlWqFgZAW/GZ0EUqhw59WbHETeNaUKY4YZtatG/j3o/cacFYVd8tt1OSid1T+1Ir+vuZ2mZiBVlVx3SAdYcnnK3l2kYQZbzrZM64Bj7BaidXjX6smar4WO4xueMNHJRzYCdtPb9b1P+e+PlTsejH+SiYfBsFs/8cKZfD8KnVI6lOUBsUFmy9SoiYrcOrNymtyEOOY1G74dbFZWm1zGT1nGbZlGWs2Pd1E0zCeczlZVWw76RTg+KQ8Cmp8Z5IUZHts1voNMBwkyZsSapix5vIDp6ySmnNPgJjaeCb1jiQV4vSjlk4wJU2Ke1x0ShvIeM3EjU7gu7VEYTvDUGkwHkdBD0LgsYeklVyJ3iLl+IMlyWN9W1r9xg0W0KSAxL77YZ0lqsoq7tcJStIhiv6pKu3bYEc4Suje1NUHhPo+40cV+23UlKydRET2a/LX4Yq3wOGKk9XVOFiQaoDRXtMmoUfBZN/dUfj/+V3mvPb63ncfE5QbPW4JIgeAejH43zX1WDzHHSkxyHveWs71uHQrTmcH+jbfYHDIR/9TlV/Lhf0DRSHo9j9wwsfwRgAqCT/iuqx7/hKMNt2O8x23dJXUlC+BlK8DH69Cxp1vBl78AzW9JTek+k3CjVFrhsNZQvhSbawYivyJ3tsbwiZinhsMxRC0a3RaggPQDqXVk1VCAwGk3pc6cmTyJZWglEdRai8LPmU83ddbulUFHadwpFkWm/j2yTTBqpLXdViTkfZAEcK7zrNctGgfGHCyD5Oa1K1xrMNzLEYWN+Rcp7SD+O/TouLebxa6aasB7nS1LsRsRThjC5WgrS4yQlbnYjol8Y4u5MVS5okYhhrtG1jtcvCZ5MtkOTebvBssXjXMKhzgmcH3hrNuyoObUJeZ4yi9grPPJkd89H1RbU9HgC2lNG7RjCAIT+d2gsOgWeoeO/qeNoSSKfxLcfK5FsQN4/+rdCt98/Gy5NjWZhPnublc1zZsTFc5nXyfk63wtTMVAUCOHICuRgthSEuWwoDzgLUVwojNN1XWVXHpKHFpGEfHHx5LsqCuHHC4ixP8Y0esIH51BSC6x2wN5eeCl2g0bM/CD1H5nELByPkI/JWOkIn5mjDmNhztI+hL6L5fggu0sMe3z/M0KpQQnsnYqbgz3ICW7rn/2eIHikuDJrE7Ws8RRyRLHr7LuL7xjFxTReBtjxO30FA80JqK95C+ffZjXoLDA2ko8NobyhfgZfnU04A+udtAx26rxf5KQ68ncgvMhNZPNY7M4PugIN8ChzuzWQPn92cTr78js9h4pz8QcC78c4DkAfkYV5sv8OrTaT9nNG9/wU=</diagram></mxfile>
|
2210.02411/main_diagram/main_diagram.pdf
ADDED
|
Binary file (14.2 kB). View file
|
|
|
2210.02411/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,116 @@
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Random initialization of the weights in a neural network plays a crucial role in determining the final performance of the network. This effect becomes even more pronounced for very deep models, which seem to be able to solve many complex tasks more effectively. An important building block of many models is the residual block (He et al., 2016), in which skip connections between non-consecutive layers are added to ease signal propagation (Balduzzi et al., 2017) and allow for faster training. ResNets, which consist of multiple residual blocks, have since become a popular centerpiece of many deep learning applications (Bello et al., 2021).
|
| 4 |
+
|
| 5 |
+
Batch Normalization (BN) (Ioffe & Szegedy, 2015) is a key ingredient for training ResNets on large datasets. It allows training with larger learning rates, often improves generalization, and makes the training success robust to different choices of parameter initializations. It has furthermore been shown to smooth the loss landscape (Santurkar et al., 2018) and to improve signal propagation (De & Smith, 2020). However, BN also has several drawbacks: it breaks the independence of samples in a minibatch and adds considerable computational cost. Batch sizes large enough to compute robust statistics can be infeasible if the input data requires a lot of memory. Moreover, BN also prevents adversarial training (Wang et al., 2022). For these reasons, finding alternatives to BN is still an active area of research (Zhang et al., 2018; Brock et al., 2021b). A combination of Scaled Weight Standardization and gradient clipping has recently outperformed BN (Brock et al., 2021b). However, a random parameter initialization scheme that can achieve all the benefits of BN is still an open problem. An initialization scheme gives deep learning systems the flexibility to drop into existing setups without modifying pipelines. For that reason, it is still necessary to develop initialization schemes that enable learning very deep neural network models without normalization or standardization methods.
|
| 6 |
+
|
| 7 |
+
A direction of research pioneered by Saxe et al. (2013); Pennington et al. (2017) has analyzed the signal propagation through randomly parameterized neural networks in the infinite width limit using random matrix theory. They have argued that parameter initialization approaches that have the *dynamical isometry* (DI) property avoid exploding or vanishing gradients, as the singular values of the input-output Jacobian are close to unity. DI is key to stable and fast training (Du et al., 2019; Hu et al., 2020). While Pennington et al. (2017) showed that it is not possible to achieve DI in networks with ReLU activations with independent weights or orthogonal weight matrices, Burkholz & Dubatovka (2019); Balduzzi et al. (2017) derived a way to attain perfect DI even in finite ReLU networks by parameter sharing. This approach can also be combined (Blumenfeld et al., 2020; Balduzzi et al., 2017) with orthogonal initialization schemes for convolutional layers (Xiao et al., 2018). The main idea is to design a random initial network that represents a linear isometric map.
|
| 8 |
+
|
| 9 |
+
We transfer a similar idea to ResNets but have to overcome the additional challenge of integrating residual connections and, in particular, potentially non-trainable identity mappings, while balancing skip and residual connections and creating initial feature diversity. We propose an initialization scheme, RISOTTO (Residual dynamical isometry by initial orthogonality), that achieves *dynamical isometry* (DI) for ResNets (He et al., 2016) with convolutional or fully-connected layers and ReLU activation functions. RISOTTO achieves this for networks of finite width and finite depth, and not only in expectation but exactly. We provide theoretical and empirical evidence that highlights the advantages of our approach. In contrast to other initialization schemes that aim to improve signal propagation in ResNets, RISOTTO can achieve performance gains even in combination with BN. We further demonstrate that RISOTTO can successfully train ResNets without BN and achieve the same or better performance than Zhang et al. (2018); Brock et al. (2021b).
|
| 10 |
+
|
| 11 |
+
- To explain the drawbacks of most initialization schemes for residual blocks, we derive signal propagation results for finite networks without requiring mean field approximations and highlight input separability issues for large depths.
|
| 12 |
+
- We propose a solution, RISOTTO, which is an initialization scheme for residual blocks that provably achieves dynamical isometry (exactly for finite networks and not only approximately). A residual block is initialized so that it acts as an orthogonal, norm and distance preserving transform.
|
| 13 |
+
- In experiments on multiple standard benchmark datasets, we demonstrate that our approach achieves competitive results in comparison with alternatives:
|
| 14 |
+
- We show that RISOTTO facilitates training ResNets without BN or any other normalization method and often outperforms existing BN free methods including Fixup, SkipInit, and NF ResNets.
|
| 15 |
+
- It outperforms standard initialization schemes for ResNets with BN on Tiny Imagenet and CIFAR100.
|
| 16 |
+
|
| 17 |
+
# Method
|
| 18 |
+
|
| 19 |
+
The object of our study is a general residual network that is defined by
|
| 20 |
+
|
| 21 |
+
$$z^0 := \mathbf{W}^0 * x, \quad x^l = \phi(z^{l-1}), \quad z^l := \alpha_l f^l(x^l) + \beta_l h^l(x^l); \quad z^{\text{out}} := \mathbf{W}^{\text{out}} P(x^L)$$
|
| 22 |
+
(1)
|
| 23 |
+
|
| 24 |
+
for $1 \leq l \leq L$. $P(\cdot)$ denotes an optional pooling operation like maxpool or average pool, $f(\cdot)$ the residual connections, and $h(\cdot)$ the skip connections, which usually represent an identity mapping or a projection. For simplicity, we assume in our derivations and arguments that these functions are parameterized as $f^l(\boldsymbol{x}^l) = \mathbf{W}_2^l * \phi(\mathbf{W}_1^l * \boldsymbol{x}^l + \boldsymbol{b}_1^l) + \boldsymbol{b}_2^l$ and $h^l(\boldsymbol{x}^l) = \mathbf{W}_{\text{skip}}^l * \boldsymbol{x}^l + \boldsymbol{b}_{\text{skip}}^l$ (\* denotes convolution), but our arguments also transfer to residual blocks in which more than one layer is skipped. Optionally, batch normalization (BN) layers are placed before or after the nonlinear activation function $\phi(\cdot)$. We focus on ReLUs $\phi(x) = \max\{0, x\}$ (Krizhevsky et al., 2012), which are among the most commonly used activation functions in practice. All biases $\boldsymbol{b}_2^l \in \mathbb{R}^{N_{l+1}}$, $\boldsymbol{b}_1^l \in \mathbb{R}^{N_{m_l}}$, and $\boldsymbol{b}_{\text{skip}}^l \in \mathbb{R}^{N_l}$ are assumed to be trainable and set initially to zero. We ignore them in the following, since we are primarily interested in the neuron states and signal propagation at initialization. The parameters $\alpha$ and $\beta$ balance the contributions of the residual and the skip branch, respectively. Note that $\alpha$ is a trainable parameter, while $\beta$ is just mentioned for convenience. Both parameters could also be integrated into the weight parameters $\mathbf{W}_2^l \in \mathbb{R}^{N_{l+1} \times N_{m_l} \times k_{2,1}^l \times k_{2,2}^l}$, $\mathbf{W}_1^l \in \mathbb{R}^{N_{m_l} \times N_l \times k_{1,1}^l \times k_{1,2}^l}$, and $\mathbf{W}_{\text{skip}}^l \in \mathbb{R}^{N_{l+1} \times N_l \times 1 \times 1}$, but they make the discussion of different initialization schemes more convenient and simplify the comparison with standard He initialization approaches (He et al., 2015).
|
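For intuition, a fully-connected version of a residual block as in Eq. (1) can be sketched in NumPy (biases zero as at initialization; names are ours):

```python
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

def residual_block(x, W1, W2, W_skip, alpha, beta):
    """z = alpha * f(x) + beta * h(x), with
    f(x) = W2 @ relu(W1 @ x)  (residual branch, biases zero at init)
    h(x) = W_skip @ x         (skip branch; identity matrix for Type B)."""
    return alpha * (W2 @ relu(W1 @ x)) + beta * (W_skip @ x)

rng = np.random.default_rng(0)
N = 8
x = rng.standard_normal(N)
W1 = rng.standard_normal((N, N))
W2 = rng.standard_normal((N, N))
z_type_b = residual_block(x, W1, W2, np.eye(N), alpha=0.5, beta=1.0)  # Type B block
```

With `alpha=0` the block reduces to the skip branch alone, which corresponds to the SkipInit setting discussed below.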
| 25 |
+
|
| 26 |
+
**Residual Blocks** Following the definition by He et al. (2015), we distinguish two types of residual blocks, Type B and Type C (see Figure 1a), which differ in the choice of $\mathbf{W}_{\rm skip}^l$ . The Type C residual block is defined as $\mathbf{z}^l = \alpha f^l(\mathbf{x}^l) + h^l(\mathbf{x}^l)$ so that shortcuts h(.) are projections with a $1 \times 1$ kernel with trainable parameters. The type B residual block has identity skip connections $\mathbf{z}^l = \alpha f^l(\mathbf{x}^l) + \mathbf{x}^l$ . Thus, $\mathbf{W}_{\rm skip}^l$ represents the identity and is not trainable.
|
| 27 |
+
|
| 28 |
+
Most initialization methods for ResNets draw weight entries independently at random, including FixUp and SkipInit. To simplify the theoretical analysis of the induced random networks and to highlight the shortcomings of the independence assumption, we assume:
|
| 29 |
+
|
| 30 |
+
**Definition 2.1** (Normally Distributed ResNet Parameters). All biases are initialized as zero and all weight matrix entries are independently normally distributed with
|
| 31 |
+
|
| 32 |
+
$$w_{ij,2}^{l} \sim \mathcal{N}\left(0, \sigma_{l,2}^{2}\right), \quad w_{ij,1}^{l} \sim \mathcal{N}\left(0, \sigma_{l,1}^{2}\right), \quad \text{and} \quad w_{ij,skip}^{l} \sim \mathcal{N}\left(0, \sigma_{l,skip}^{2}\right).$$
|
| 33 |
+
|
| 34 |
+
Most studies further focus on special cases of the following set of parameter choices.
|
| 35 |
+
|
| 36 |
+
**Definition 2.2** (Normal ResNet Initialization). The choice
|
| 37 |
+
$$\sigma_{l,1} = \sqrt{\frac{2}{N_{m_l}k_{1,1}^lk_{1,2}^l}}, \quad \sigma_{l,2} = \sqrt{\frac{2}{N_{l+1}k_{2,1}^lk_{2,2}^l}}, \quad \sigma_{l,skip} = \sqrt{\frac{2}{N_{l+1}}}$$
|
| 38 |
+
as used in Definition 2.1, together with $\alpha_l, \beta_l \geq 0$ that fulfill $\alpha_l^2 + \beta_l^2 = 1$.
|
| 39 |
+
|
| 40 |
+
Another common choice is $\mathbf{W}_{\mathrm{skip}} = \mathbb{I}$ instead of random entries. With $\beta_l = 1$, a nonzero $\alpha_l$ is also still common if it accounts for the depth $L$ of the network. In case $\alpha_l$ and $\beta_l$ are the same for each layer, we drop the subscript $l$. For instance, Fixup (Zhang et al., 2018) and SkipInit (De & Smith, 2020) satisfy the above condition with $\alpha = 0$ and $\beta = 1$. De & Smith (2020) argue that BN also effectively suppresses the residual branch. However, in combination with He initialization (He et al., 2015), it becomes more similar to $\alpha = \beta = \sqrt{0.5}$. Li et al. (2021) study the case of free $\alpha_l$ but focus their analysis on identity mappings $\mathbf{W}_1^l = \mathbb{I}$ and $\mathbf{W}_{\mathrm{skip}}^l = \mathbb{I}$.
|
| 41 |
+
|
| 42 |
+
As in other theoretical work, we focus our following investigations on fully-connected layers to simplify the exposition. Similar insights would transfer to convolutional layers but would require extra effort (Yang & Schoenholz, 2017). The motivation for the general choice in Definition 2.2 is that it ensures that the average squared $\ell_2$-norm of the neuron states is identical in every layer. This has been shown by Li et al. (2021) for the special choice $\mathbf{W}_1^l = \mathbb{I}$ and $\mathbf{W}_{\rm skip}^l = \mathbb{I}$, $\beta = 1$, and by Yang & Schoenholz (2017) in the mean field limit with a missing ReLU so that $\mathbf{x}^l = \mathbf{z}^{l-1}$. Hanin & Rolnick (2018) have also observed for $\mathbf{W}_{\rm skip}^l = \mathbb{I}$ and $\beta = 1$ that the squared signal norm increases in $\sum_l \alpha_l$. For completeness, we present the most general case next and prove it in the appendix.
|
| 43 |
+
|
| 44 |
+

|
| 45 |
+
|
| 46 |
+
Figure 1: (a) The two types of considered residual blocks. In Type C, the skip connection is a projection with a $1 \times 1$ kernel, while in Type B the input is directly added to the residual block via the skip connection. Both these blocks have been described by He et al. (2016). (b) The correlation between two inputs for different initializations as they pass through a residual network consisting of a convolution filter followed by 5 residual blocks (Type C), an average pool, and a linear layer on CIFAR10. Only RISOTTO maintains constant correlations after each residual block, while for the other initializations the correlation increases with depth. (c) Performance of RISOTTO for different values of alpha $(\alpha)$ for ResNet 18 (C) on CIFAR10. Note that $\alpha=0$ is equivalent to SkipInit and achieves the lowest accuracy. Initializing $\alpha=1$ clearly improves performance.
|
| 47 |
+
|
| 48 |
+
**Theorem 2.3** (Norm preservation). Let a neural network consist of fully-connected residual blocks as defined by Eq. (1) that start with a fully-connected layer $\mathbf{W}^0$, which contains $N_1$ output channels. Assume that all biases are initialized as 0 and that all weight matrix entries are independently normally distributed with $w_{ij,2}^l \sim \mathcal{N}\left(0,\sigma_{l,2}^2\right)$, $w_{ij,1}^l \sim \mathcal{N}\left(0,\sigma_{l,1}^2\right)$, and $w_{ij,skip}^l \sim \mathcal{N}\left(0, \sigma_{l,skip}^2\right)$.
|
| 49 |
+
|
| 50 |
+
Then the expected squared norm of the output after one fully-connected layer and $L$ residual blocks applied to input $\boldsymbol{x}$ is given by
|
| 51 |
+
|
| 52 |
+
$$\mathbb{E}\left(\left\|\boldsymbol{x}^{L}\right\|^{2}\right) = \frac{N_{1}}{2}\sigma_{0}^{2}\prod_{l=1}^{L-1}\frac{N_{l+1}}{2}\left(\alpha_{l}^{2}\sigma_{l,2}^{2}\sigma_{l,1}^{2}\frac{N_{m_{l}}}{2} + \beta_{l}^{2}\sigma_{l,skip}^{2}\right)\left\|\boldsymbol{x}\right\|^{2}.$$
|
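This identity can be checked by simulation for fully-connected blocks ($k=1$) under the normal ResNet initialization, where every factor in the product equals 1 and the expected squared norm should stay near $\|\boldsymbol{x}\|^2 = 1$ (a Monte Carlo sketch, not the paper's code):

```python
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

rng = np.random.default_rng(0)
N, L, trials = 64, 4, 500
alpha = beta = np.sqrt(0.5)          # alpha^2 + beta^2 = 1
sigma = np.sqrt(2.0 / N)             # all sigmas are sqrt(2/N) for k = 1
x = rng.standard_normal(N)
x /= np.linalg.norm(x)               # ||x||^2 = 1

norms = []
for _ in range(trials):
    h = relu(rng.normal(0.0, sigma, (N, N)) @ x)       # initial layer W^0
    for _ in range(L):
        W1 = rng.normal(0.0, sigma, (N, N))
        W2 = rng.normal(0.0, sigma, (N, N))
        Wskip = rng.normal(0.0, sigma, (N, N))
        h = relu(alpha * (W2 @ relu(W1 @ h)) + beta * (Wskip @ h))
    norms.append(np.sum(h ** 2))

print(np.mean(norms))  # should be close to 1, matching the theorem
```

Each ReLU halves the expected squared norm, and each random layer doubles it back, so the expectation is preserved layer by layer.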
| 53 |
+
|
| 54 |
+
Note that this result does not rely on any (mean field) approximations and applies also to other parameter distributions that have zero mean and are symmetric around zero. Inserting the parameters of Definition 2.2 for fully-connected networks with $k=1$ leads to the following insight, which explains why this is the preferred initialization choice.
|
| 55 |
+
|
| 56 |
+
**Insight 2.4** (Norm preserving initialization). According to Theorem 2.3, the normal ResNet initialization (Definition 2.2) preserves the average squared signal norm for arbitrary depth L.
|
| 57 |
+
|
| 58 |
+
Even though this initialization setting is able to avoid exploding or vanishing signals, it still induces considerable issues, as the analysis of the joint signal corresponding to different inputs reveals. According to the next theorem, the signal covariance fulfills a layerwise recurrence relationship that leads to the observation that signals become more similar with increasing depth.
|
| 59 |
+
|
| 60 |
+
**Theorem 2.5** (Layerwise signal covariance). Let a fully-connected residual block be given as defined by Eq. (1) with random parameters according to Definition 2.2. Let $\mathbf{x}^{l+1}$ denote the neuron states of Layer l+1 for input x and $\tilde{\mathbf{x}}^{l+1}$ the same neurons but for input $\tilde{\mathbf{x}}$ . Then their covariance given all parameters of the previous layers is given as $\mathbb{E}_l\left(\langle \mathbf{x}^{l+1}, \tilde{\mathbf{x}}^{l+1} \rangle\right)$
|
| 61 |
+
|
| 62 |
+
$$\geq \frac{1}{4} \frac{N_{l+1}}{2} \left( \alpha^2 \sigma_{l,2}^2 \sigma_{l,1}^2 \frac{N_{m_l}}{2} + 2\beta^2 \sigma_{l,skip}^2 \right) \langle \boldsymbol{x}^l, \tilde{\boldsymbol{x}}^l \rangle + \frac{c}{4} \alpha^2 N_{l+1} \sigma_{l,2}^2 \sigma_{l,1}^2 N_{m_l} \| \boldsymbol{x}^l \| \| \tilde{\boldsymbol{x}}^l \|$$
|
| 63 |
+
(2)
|
| 64 |
+
|
| 65 |
+
$$+ \mathbb{E}_{\mathbf{W}_{1}^{l}} \left( \sqrt{\left(\alpha^{2} \sigma_{l,2}^{2} \left\| \phi(\mathbf{W}_{1}^{l} \boldsymbol{x}^{l}) \right\|^{2} + \beta^{2} \sigma_{l,skip}^{2} \left\| \boldsymbol{x}^{l} \right\|^{2}\right) \left(\alpha^{2} \sigma_{l,2}^{2} \left\| \phi(\mathbf{W}_{1}^{l} \tilde{\boldsymbol{x}}^{l}) \right\|^{2} + \beta^{2} \sigma_{l,skip}^{2} \left\| \tilde{\boldsymbol{x}}^{l} \right\|^{2}\right)} \right),$$
|
| 66 |
+
|
| 67 |
+
where the expectation $\mathbb{E}_l$ is taken with respect to the initial parameters $\mathbf{W}_2^l$ , $\mathbf{W}_1^l$ , and $\mathbf{W}_{skip}^l$ and the constant c fulfills $0.24 \le c \le 0.25$ .
|
| 68 |
+
|
| 69 |
+
Note that this statement holds even for finite networks. To clarify what that means for the separability of inputs, we have to compute the expectation with respect to the parameters of $\mathbf{W}_1$ . To gain an intuition, we employ an approximation that holds for a wide intermediary network.
|
| 70 |
+
|
| 71 |
+
**Insight 2.6** (Covariance of signal for different inputs increases with depth). Let a fully-connected ResNet with random parameters as in Definition 2.2 be given. It follows from Theorem 2.5 that the outputs corresponding to different inputs become more difficult to distinguish for increasing depth L. For simplicity, let us assume that $\|\mathbf{x}\| = \|\tilde{\mathbf{x}}\| = 1$ . Then, in the mean field limit $N_{m_l} \to \infty$ , the covariance of the signals is lower bounded by
|
| 72 |
+
|
| 73 |
+
$$\mathbb{E}\left(\langle \boldsymbol{x}^{L}, \tilde{\boldsymbol{x}}^{L} \rangle\right) \ge \gamma_{1}^{L} \langle \boldsymbol{x}, \tilde{\boldsymbol{x}} \rangle + \gamma_{2} \sum_{k=0}^{L-1} \gamma_{1}^{k} = \gamma_{1}^{L} \langle \boldsymbol{x}, \tilde{\boldsymbol{x}} \rangle + \frac{\gamma_{2}}{1 - \gamma_{1}} \left(1 - \gamma_{1}^{L}\right) \tag{3}$$
|
| 74 |
+
|
| 75 |
+
for
|
| 76 |
+
$$\gamma_1 = \frac{1+\beta^2}{4} \le \frac{1}{2}$$
|
| 77 |
+
and $\gamma_2 = c(\alpha^2 + 2) \approx \frac{\alpha^2}{4} + \frac{1}{2}$, using the approximation $\mathbb{E}_{l-1}\left(\|\boldsymbol{x}^l\| \, \|\tilde{\boldsymbol{x}}^l\|\right) \approx 1$.
|
| 78 |
+
|
| 79 |
+
Since $\gamma_1 < 1$, the contribution of the original input correlations $\langle \boldsymbol{x}, \tilde{\boldsymbol{x}} \rangle$ vanishes for increasing depth L. Meanwhile, by adding a constant contribution in every layer, irrespective of the input correlations, $\mathbb{E}\left(\langle \boldsymbol{x}^L, \tilde{\boldsymbol{x}}^L \rangle\right)$ increases with L and converges to the maximum value 1 (or a slightly smaller value in case of smaller width $N_{m_l}$). Thus, deep models essentially map every input to almost the same output vector, which makes it impossible for the initial network to distinguish different inputs and provide information for meaningful gradients. Fig. 1b demonstrates this trend and compares it with our initialization proposal RISOTTO, which does not suffer from this problem.
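The closed form in Eq. (3) and its fixed point can be checked numerically. A minimal sketch, assuming illustrative parameter values with $\alpha^2 + \beta^2 = 1$ and taking $c = 0.25$ (the theorem only guarantees $0.24 \le c \le 0.25$):

```python
# Numeric sketch of the lower bound in Eq. (3). Parameter values are
# assumptions for illustration: alpha^2 + beta^2 = 1 and c = 0.25.
alpha2, beta2, c = 0.5, 0.5, 0.25
gamma1 = (1 + beta2) / 4            # contraction factor, <= 1/2
gamma2 = c * (alpha2 + 2)           # constant contribution per layer

def bound(L, input_corr):
    """Closed form of Eq. (3) for unit-norm inputs with correlation input_corr."""
    return gamma1**L * input_corr + gamma2 * (1 - gamma1**L) / (1 - gamma1)

# The geometric-sum form and the closed form agree.
L = 10
series = gamma1**L * 0.3 + gamma2 * sum(gamma1**k for k in range(L))
assert abs(series - bound(L, 0.3)) < 1e-12

# The input-correlation term vanishes with depth: very different inputs
# (correlation 0.0) and similar ones (correlation 0.9) end up with almost
# the same bound, approaching the fixed point gamma2 / (1 - gamma1).
for depth in (1, 5, 20, 50):
    print(depth, round(bound(depth, 0.0), 4), round(bound(depth, 0.9), 4))
```

With these values the fixed point $\gamma_2/(1-\gamma_1)$ equals 1, matching the claim that outputs collapse onto one direction.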
|
| 80 |
+
|
| 81 |
+
While the general trend holds for residual as well as standard fully-connected feed forward networks $(\beta=0)$ , interestingly, we still note a mitigation for a strong residual branch $(\beta=1)$ . The contribution by the input correlations decreases more slowly and the constant contribution is reduced for larger $\beta$ . Thus, residual networks make the training of deeper models feasible, as they were designed to do (He et al., 2016). This observation is in line with the findings of Yang & Schoenholz
|
| 82 |
+
|
| 83 |
+
(2017), which were obtained by mean field approximations for a different case without ReLU after the residual block (so that $\boldsymbol{x}^l = \boldsymbol{z}^{l-1}$). It also explains how ResNet initialization approaches like Fixup (Zhang et al., 2018) and SkipInit (De & Smith, 2020) can be successful in training deep ResNets. They set $\alpha = 0$ and $\beta = 1$. If $\mathbf{W}_{\text{skip}} = \mathbb{I}$, this approach even leads to dynamical isometry but trades it for very limited feature diversity (Blumenfeld et al., 2020) and an initially broken residual branch. Figure 1c highlights potential advantages that can be achieved by $\alpha \neq 0$ if the initialization can still maintain dynamical isometry, as our proposal RISOTTO does.
|
| 84 |
+
|
| 85 |
+
Our main objective is to avoid the highlighted drawbacks of the ResNet initialization schemes that we have discussed in the last section. We aim to maintain input correlations not only on average but exactly, and to ensure that the input-output Jacobian of our randomly initialized ResNet is an isometry; thus, all its eigenvalues equal 1 or -1. In comparison with Fixup and SkipInit, we also seek to increase feature diversity and allow for arbitrary scaling of the residual versus the skip branch.
|
| 86 |
+
|
| 87 |
+
**Looks-linear matrix structure** The first step in designing an orthogonal initialization for a residual block is to allow the signal to propagate through a ReLU activation without losing half of the information. This can be achieved with the help of a looks-linear initialization (Shang et al., 2016; Burkholz & Dubatovka, 2019; Balduzzi et al., 2017), which leverages the identity $\boldsymbol{x} = \phi(\boldsymbol{x}) - \phi(-\boldsymbol{x})$. Accordingly, the first layer maps the transformed input to a positive and a negative part. A fully-connected layer is defined by $\boldsymbol{x}^1 = \left[\hat{\boldsymbol{x}}_+^1; \hat{\boldsymbol{x}}_-^1\right] = \phi\left([\mathbf{U}^0; -\mathbf{U}^0]\boldsymbol{x}\right)$ with respect to a submatrix $\mathbf{U}^0$. Note that the difference of both components defines a linear transformation of the input, $\hat{\boldsymbol{x}}_+^1 - \hat{\boldsymbol{x}}_-^1 = \mathbf{U}^0\boldsymbol{x}$. Thus, all information about $\mathbf{U}^0\boldsymbol{x}$ is contained in $\boldsymbol{x}^1$. The next layers continue to separate the positive and negative part of the signal. Assuming this structure as input, the next layers $\boldsymbol{x}^{l+1} = \phi(\mathbf{W}^l\boldsymbol{x}^l)$ proceed with the block structure $\mathbf{W}^l = \begin{bmatrix} \mathbf{U}^l & -\mathbf{U}^l \\ -\mathbf{U}^l & \mathbf{U}^l \end{bmatrix}$. As a consequence, the activations of every layer can be separated into a positive and a negative part as $\boldsymbol{x}^l = \begin{bmatrix} \hat{\boldsymbol{x}}_+^l; \hat{\boldsymbol{x}}_-^l \end{bmatrix}$ so that $\|\boldsymbol{x}^l\| = \|\boldsymbol{z}^{l-1}\|$. The submatrices $\mathbf{U}^l$ can be specified as in the case of a linear neural network. Thus, if they are orthogonal, they induce a neural network with the dynamical isometry property (Burkholz & Dubatovka, 2019). With the help of the Delta Orthogonal initialization (Xiao et al., 2018), the same idea can also be transferred to convolutional layers. Given a matrix $\mathbf{H} \in \mathbb{R}^{N_{l+1} \times N_l}$, the corresponding convolutional tensor $\mathbf{W} \in \mathbb{R}^{N_{l+1} \times N_l \times k_1 \times k_2}$ is defined by $w_{ijk_1'k_2'} = h_{ij}$ if $k_1' = \lfloor k_1/2 \rfloor$ and $k_2' = \lfloor k_2/2 \rfloor$, and $w_{ijk_1'k_2'} = 0$ otherwise. We make frequent use of this combination of the Delta Orthogonal initialization and the looks-linear structure.
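The looks-linear propagation can be verified in a few lines of NumPy; the dimensions, seed, and use of dense square orthogonal submatrices are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

# Small illustrative dimensions; U0, U1 are random orthogonal submatrices.
n = 4
U0 = np.linalg.qr(rng.standard_normal((n, n)))[0]
U1 = np.linalg.qr(rng.standard_normal((n, n)))[0]

x = rng.standard_normal(n)

# First layer: stack [U0; -U0] so that x1 = [phi(U0 x); phi(-U0 x)].
x1 = relu(np.concatenate([U0 @ x, -U0 @ x]))

# Later layers use the block structure H = [[U, -U], [-U, U]].
H1 = np.block([[U1, -U1], [-U1, U1]])
x2 = relu(H1 @ x1)

# The difference of positive and negative parts is the linear signal:
# it equals U1 @ U0 @ x, so no information is lost through the ReLUs.
v2 = x2[:n] - x2[n:]
assert np.allclose(v2, U1 @ U0 @ x)

# The squared norm of the separated representation equals the signal norm.
assert np.isclose(np.sum(x2**2), np.sum((U1 @ U0 @ x)**2))
```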
|
| 88 |
+
|
| 89 |
+
**Definition 2.7** (Looks-linear structure). A tensor $\mathbf{W} \in \mathbb{R}^{N_{l+1} \times N_l \times k_1 \times k_2}$ is said to have looks-linear structure with respect to a submatrix $\mathbf{U} \in \mathbb{R}^{\lfloor N_{l+1}/2 \rfloor \times \lfloor N_l/2 \rfloor}$ if
|
| 90 |
+
|
| 91 |
+
$$w_{ijk'_1k'_2} = \begin{cases} h_{ij} & \text{if } k'_1 = \lfloor k_1/2 \rfloor \text{ and } k'_2 = \lfloor k_2/2 \rfloor, \\ 0 & \text{otherwise}, \end{cases} \quad \mathbf{H} = \begin{bmatrix} \mathbf{U} & -\mathbf{U} \\ -\mathbf{U} & \mathbf{U} \end{bmatrix}$$
|
| 92 |
+
(4)
|
| 93 |
+
|
| 94 |
+
It has first layer looks-linear structure if $\mathbf{H} = [\mathbf{U}; -\mathbf{U}]$.
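A sketch of the construction in Eq. (4): the matrix $\mathbf{H}$ is embedded at the kernel center, so the convolution acts like a plain matrix multiplication at every spatial position. The `conv2d_same` helper is a naive loop written only for this demo, not taken from any framework:

```python
import numpy as np

# Embed a matrix H into a k1 x k2 convolutional tensor whose only nonzero
# slice sits at the kernel center (the Delta Orthogonal idea).
def delta_tensor(H, k1=3, k2=3):
    n_out, n_in = H.shape
    W = np.zeros((n_out, n_in, k1, k2))
    W[:, :, k1 // 2, k2 // 2] = H
    return W

def conv2d_same(W, x):
    # naive "same"-padded cross-correlation for a small demo;
    # x has shape (channels, height, width)
    n_out, n_in, k1, k2 = W.shape
    _, hgt, wid = x.shape
    xp = np.pad(x, ((0, 0), (k1 // 2, k1 // 2), (k2 // 2, k2 // 2)))
    out = np.zeros((n_out, hgt, wid))
    for i in range(hgt):
        for j in range(wid):
            patch = xp[:, i:i + k1, j:j + k2]
            out[:, i, j] = np.tensordot(W, patch, axes=3)
    return out

rng = np.random.default_rng(1)
U = np.linalg.qr(rng.standard_normal((2, 2)))[0]
H = np.block([[U, -U], [-U, U]])       # looks-linear structure
W = delta_tensor(H)

x = rng.standard_normal((4, 5, 5))
y = conv2d_same(W, x)

# With a center-only kernel, the convolution reduces to applying H
# independently at every spatial position.
assert np.allclose(y, np.einsum('oi,ihw->ohw', H, x))
```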
|
| 95 |
+
|
| 96 |
+
We impose this structure separately on the residual and skip branch but choose the corresponding submatrices wisely. To introduce RISOTTO, we only have to specify the corresponding submatrices for $\mathbf{W}_1^l$ , $\mathbf{W}_2^l$ , and $\mathbf{W}_{\rm skip}^l$ . The main idea of RISOTTO is to choose them so that the initial residual block acts as a linear orthogonal map.
|
| 97 |
+
|
| 98 |
+
The Type C residual block assumes that the skip connection is a projection such that $h_i^l(\boldsymbol{x}) = \sum_{j=1}^{N_l} \mathbf{W}_{ij,\mathrm{skip}}^l * x_j^l$ , where $\mathbf{W}_{\mathrm{skip}}^l \in \mathbb{R}^{N_{l+1} \times N_l \times 1 \times 1}$ is a trainable convolutional tensor with kernel size $1 \times 1$ . Thus, we can adapt the skip connections to compensate for the added activations of the residual branch in the following way.
|
| 99 |
+
|
| 100 |
+
**Definition 2.8** (RISOTTO for Type C residual blocks). For a residual block of the form $\mathbf{x}^{l+1} = \phi(\alpha * f^l(\mathbf{x}^l) + h^l(\mathbf{x}^l))$ , where $f^l(\mathbf{x}^l) = \mathbf{W}_2^l * \phi(\mathbf{W}_1^l * \mathbf{x}^l)$ , $h^l(\mathbf{x}^l) = \mathbf{W}_{skip}^l * \mathbf{x}^l$ , the weights $\mathbf{W}_1^l, \mathbf{W}_2^l$ and $\mathbf{W}_{skip}^l$ are initialized with looks-linear structure according to Def. 2.7 with the submatrices $\mathbf{U}_1^l$ , $\mathbf{U}_2^l$ and $\mathbf{U}_{skip}^l$ respectively. The matrices $\mathbf{U}_1^l$ , $\mathbf{U}_2^l$ , and $\mathbf{M}^l$ are drawn independently and uniformly from all matrices with orthogonal rows or columns (depending on their dimension), while the skip submatrix is set to $\mathbf{U}_{skip}^l = \mathbf{M}^l - \alpha \mathbf{U}_2^l \mathbf{U}_1^l$ .
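The construction can be sketched densely (kernel size 1, with square submatrices assumed for simplicity): the effective map on the separated signal is $\alpha\mathbf{U}_2\mathbf{U}_1 + \mathbf{U}_{skip} = \mathbf{M}$, which is orthogonal, so the block preserves the norm exactly rather than only on average.

```python
import numpy as np

rng = np.random.default_rng(2)
relu = lambda z: np.maximum(z, 0.0)

def orth(n):
    # random orthogonal matrix via QR
    return np.linalg.qr(rng.standard_normal((n, n)))[0]

# Dense sketch of Definition 2.8 (Type C), assuming square submatrices:
# U_skip is chosen so the whole block initially acts as the orthogonal map M.
n, alpha = 4, 0.5
U1, U2, M = orth(n), orth(n), orth(n)
U_skip = M - alpha * U2 @ U1

def looks_linear(U):
    return np.block([[U, -U], [-U, U]])

W1, W2, W_skip = looks_linear(U1), looks_linear(U2), looks_linear(U_skip)

# Input already separated into positive and negative parts: x_l = [v+; v-].
v = rng.standard_normal(n)
x_l = np.concatenate([relu(v), relu(-v)])

x_next = relu(alpha * W2 @ relu(W1 @ x_l) + W_skip @ x_l)

# Effective map on the signal v is alpha*U2@U1 + U_skip = M (orthogonal),
# so the separated signal is rotated and the norm is preserved exactly.
assert np.allclose(x_next[:n] - x_next[n:], M @ v)
assert np.isclose(np.linalg.norm(x_next), np.linalg.norm(x_l))
```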
|
| 101 |
+
|
| 102 |
+
The Type B residual block poses the additional challenge that we cannot adjust the skip connections initially because they are defined by the identity and not trainable. Thus, we have to adapt the
|
| 103 |
+
|
| 104 |
+
residual connections instead to compensate for the added input signal. To be able to distinguish the positive and the negative part of the input signal after the two convolutional layers, we have to pass it through the first ReLU without transformation and thus define $\mathbf{W}_1^l$ as an identity mapping.
|
| 105 |
+
|
| 106 |
+
**Definition 2.9** (RISOTTO for Type B residual blocks). For a residual block of the form $\mathbf{x}^{l+1} = \phi(\alpha * f^l(\mathbf{x}^l) + \mathbf{x}^l)$ where $f^l(\mathbf{x}^l) = \mathbf{W}_2^l * \phi(\mathbf{W}_1^l * \mathbf{x}^l)$ , RISOTTO initializes the weight $\mathbf{W}_1^l$ as $w_{1,ijk_1'k_2'}^l = 1$ if $i = j, k_1' = \lfloor k_1/2 \rfloor, k_2' = \lfloor k_2/2 \rfloor$ and $w_{1,ijk_1'k_2'}^l = 0$ otherwise. $\mathbf{W}_2^l$ has looks-linear structure (according to Def. 2.7) with respect to a submatrix $\mathbf{U}_2^l = \mathbf{M}^l - (1/\alpha)\mathbb{I}$ , where $\mathbf{M}^l \in \mathbb{R}^{N_{l+1}/2 \times N_l/2}$ is a random matrix with orthogonal columns or rows, respectively.
|
| 107 |
+
|
| 108 |
+
As we prove in the appendix, residual blocks initialized with RISOTTO preserve the norm of the input and cosine similarity of signals corresponding to different inputs not only on average but exactly. This addresses the drawbacks of initialization schemes that are based on independent weight entries, as discussed in the last section.
|
| 109 |
+
|
| 110 |
+
**Theorem 2.10** (RISOTTO preserves signal norm and similarity). A residual block that is initialized with RISOTTO maps input activations $\mathbf{x}^l$ to output activations $\mathbf{x}^{l+1}$ so that the norm $||\mathbf{x}^{l+1}||^2 = ||\mathbf{x}^l||^2$ stays equal. The scalar product between activations corresponding to two inputs $\mathbf{x}$ and $\tilde{\mathbf{x}}$ is preserved in the sense that $\langle \hat{\boldsymbol{x}}_+^{l+1} - \hat{\boldsymbol{x}}_-^{l+1}, \hat{\tilde{\boldsymbol{x}}}_+^{l+1} - \hat{\tilde{\boldsymbol{x}}}_-^{l+1} \rangle = \langle \hat{\boldsymbol{x}}_+^{l} - \hat{\boldsymbol{x}}_-^{l}, \hat{\tilde{\boldsymbol{x}}}_+^{l} - \hat{\tilde{\boldsymbol{x}}}_-^{l} \rangle$ .
|
| 111 |
+
|
| 112 |
+
The full proof is presented in Appendix A.3. It is straightforward, as the residual block is defined as an orthogonal linear transform, which maintains distances of the sum of the separated positive and negative parts of a signal. Like the residual block, the input-output Jacobian is also formed of orthogonal submatrices. It follows that RISOTTO induces perfect dynamical isometry for finite width and depth.
|
| 113 |
+
|
| 114 |
+
**Theorem 2.11** (RISOTTO achieves exact dynamical isometry for residual blocks). A residual block whose weights are initialized with RISOTTO achieves exact dynamical isometry so that the singular values $\lambda \in \sigma(J)$ of the input-output Jacobian $J \in \mathbb{R}^{N_{l+1} \times k_{l+1} \times N_l \times k_l}$ fulfill $\lambda \in \{-1, 1\}$ .
|
| 115 |
+
|
| 116 |
+
The detailed proof is given in Appendix A.2. Since the weights are initialized so that the residual block acts as an orthogonal transform, the input-output Jacobian is also an isometry, which has the required spectral properties. Drawing on the well-established theory of dynamical isometry (Chen et al., 2018; Saxe et al., 2013; Mishkin & Matas, 2015; Poole et al., 2016; Pennington et al., 2017), we therefore expect RISOTTO to enable fast and stable training of very deep ResNets, as we demonstrate next in experiments.
|
2210.13746/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2022-10-22T23:39:37.279Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36" etag="wufqb8RRLl2H4pdzEgC9" version="20.5.1" type="google"><diagram id="HH5cNkBwLo93elQOb2Cr" name="Page-1">7VtJd+I4EP41vJccyBNeWI6EkMyh87JAXvfMTdjCVsdYtCxCyK8fyZIXGZslhJiZJIdglUpSSfqq6iuWhjmYvd5QOPdviYuChgHc14Z51TCMVtvu8RchWUlJp2NIgUexq5QywQi/ISUESrrALoo0RUZIwPBcFzokDJHDNBmklCx1tSkJ9FXn0FMrgkwwcmCA1tR+Ypf5Uto1Opn8L4Q9P1m51VYbnsFEWU0c+dAly5zIHDbMASWEyafZ6wAF4vCSc5Hjrit6U8MoCtkuA8bz4fQhatLruxv/n+7DeHn356lpK9vYKtkwcvn+VZNQ5hOPhDAYZtJLShahi8SsLd7KdH4QMlfC34ixlbpMuGCEi3w2C1TvlITsGs5wIDBxAymckdBVcjXIsFR7QAJCY8vMXg8A64rLI0bJc3oddirJ6QIgtHkPPxy6+sWFTXABQDuR/M0l4MI2U8GVQGw64GqVb90jimeIIaqE8szEQVVehRJFZEEdtOH8E0hD6iG2QS+HGO5qiHBz6IoPpCiADL/ohkCFeS/VS4feE8xNNIDyT0thUzmnbQJ9BmmXGpRhq08pXOXU5kIh2nmZrq0DdZtRmjp/kMsnrdx5ZKIY+3v4QbsOP0CvmP3KPStQqlaGSNFIAPkJvjMYAKD7Tou7jqW7jtWzvrbrdOzP8Z0u2Mt5iuq20Tu+96jDe4HBQh1nw2gHTMFQ86v2nwVJOppRDNg+VzDM+WsMj6SfP3nq9dCJwBQ6+pjUbyqXm5SudRTz5ESju6fHwZBrnCVT4h1NcFEygt8dTqTnfHS8ZNolTUzNhjMepC7DSTTPbYOrTYpbWx8ai6M5DLdaCFzIYDPyEWJRcxEhOiWUEyNdv5OZo0RGUcB3YlldjuxBsUfwxDVdo0yxVTZpmaJdolgU9SmGwZpe56py85lvbNh3mYlleyk7n6LodqECzQuiIrRzc5C4MEQjFj+MIHsTjj3j/8YYTSiCz/FaIISO7yIhnyAs4AVuGwOz0bc4xRYT/UZqKhy+oSCMn/uLaeT4AfSSKdX0HvJhIBf0IeMPF1tOrRa3Tw+rBv/npUuSejMhJ7DttgjlxQWT29zom7obn9DFbzFaxpQt4afA03QWtvQxQ6O5vPklL0l1xqWzH6uM/QCg2A8MsBdymcOZieAvaml+AQwl5W0Fe1knG2l21pJzwnSXWWXZTlT8XFXZK1CLPDvJZfZ3JO5aeW9CbnfhvdUV3jqj3YUjm401jpzy3o8kqeZRyKfZu+jl/jodDVatrnVhdYDd7Rom539Wy9qJmX4YG+weBCpwGKj2KaaqS5+vCKpi6VA3jMzKqsLFL8ctKibQefZiSDZVdhbz4RAzQfskuzj5Aqac6j8Or9PS4NAaYN/SYPuxniAPrInoBSRi+9E85gvGNsU0ii1ELD5b4bCiE4tOxfUAnLKY0E0Ij28GmEmahgJXhChOAoVKGDc43VnFs9EXlMwnF+ILvJ/SxR78P2R0rfopXafO7PsVKJ1VA6Uzas7F1ncuPkIuvh1/p+KvlYplB0vfddmciZPki0MvQN9ZeOcsfAJvrPTqyMK71rPWiWVU+1PqWVN9dPZpSdP+TppHSJoFe0ojuePEKM2b8P7Dvh8+jp8eL4d8KBAZuyLYfifyk0nk69cvF1yS8DuPH5jH0wTwCXncrD2PJ1/eK4nhk1LIVrtbpQfn3JZ6kzP5KTdIXs
7Lo7VVEawufwyfHkVZcfaIpnKK2/H5elwCW1C1KbqqOn3/ne66iQ1hdIOtW9cz7W3rHRSJj7S7fc34UDhJoGjmliDsnvv8+ZZtTGhNDsGz98ghFH2YT2y/+d6BBecGhJRBYT9frd1PPhigZbeoA7YUAaeM2cHd7XDcfIi/DjaS1dNm1NYTLYP/EgqqznQXHOissECwONsR8hyLWmNGghNhBwZ91THDrisrfsQ3ByfxVIJTqS+D8nnty4Ytvq8rinx5ACUVfkhCMcsUB0FRVFLFV1X71dX9QVyuBWyNzFntEjJnlpC59DOUPdgcb2a/pJAFfPZ7FHP4Lw==</diagram></mxfile>
|
2210.13746/main_diagram/main_diagram.pdf
ADDED
|
Binary file (39.3 kB). View file
|
|
|
2210.13746/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,159 @@
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Automatically evaluating the output quality of machine translation (MT) systems remains a difficult challenge. The BLEU metric (Papineni et al., 2002), which is a function of n-gram overlap between system and reference outputs, is still used widely today despite its obvious limitations in measuring
|
| 4 |
+
|
| 5 |
+

|
| 6 |
+
|
| 7 |
+
Figure 1: An example perturbation (antonym replacement) from our DEMETR dataset. We measure whether different MT evaluation metrics score the unperturbed translation higher than the perturbed translation; in this case, BLEURT and BERTSCORE accurately identify the perturbation, while COMET-QE fails to do so.
|
| 8 |
+
|
| 9 |
+
semantic similarity (Fomicheva and Specia, 2019; Marie et al., 2021; Kocmi et al., 2021; Freitag et al., 2021). Recently-developed *learned* evaluation metrics such as BLEURT (Sellam et al., 2020a), COMET (Rei et al., 2020), MOVERSCORE (Zhao et al., 2019), or BARTSCORE (Yuan et al., 2021a) seek to address these limitations by either fine-tuning pretrained language models directly on human judgments of translation quality or by simply utilizing contextualized word embeddings. While learned metrics exhibit higher correlation with human judgments than BLEU (Barrault et al., 2021), their relative lack of interpretability leaves it unclear as to *why* they assign a particular score to a given translation. This is a major reason why some MT researchers are reluctant to employ learned metrics in order to evaluate their MT systems (Marie et al., 2021; Gehrmann et al., 2022; Leiter et al., 2022).
|
| 10 |
+
|
| 11 |
+
In this paper, we build on previous metric explainability work (Specia et al., 2010; Macketanz
|
| 12 |
+
|
| 13 |
+
<sup>1</sup> <https://github.com/marzenakrp/demetr>
|
| 14 |
+
|
| 15 |
+
et al., 2018; Fomicheva and Specia, 2019; Kaster et al., 2021; Sai et al., 2021a; Barrault et al., 2021; Fomicheva et al., 2021; Leiter et al., 2022) by introducing DEMETR, a dataset for Diagnosing Evaluation METRics for machine translation, which measures the sensitivity of an MT metric to *35* different types of linguistic perturbations spanning common syntactic (e.g., incorrect word order), semantic (e.g., undertranslation), and morphological (e.g., incorrect suffix) translation error categories. Each example in DEMETR is a tuple containing {**source**, **reference**, **machine translation**, **perturbed machine translation**}, as shown in Figure 1. The entire dataset contains *31*K total examples across *10* different source languages (the target language is always English). The perturbations in DEMETR are produced semi-automatically by manipulating translations produced by commercial MT systems such as Google Translate, and they are manually validated to ensure that the only source of variation is the desired perturbation.
|
| 16 |
+
|
| 17 |
+
We measure the accuracy of a suite of *14* evaluation metrics on DEMETR (as shown in Figure 1), discovering that learned metrics perform far better than string-based ones. We also analyze the relative *sensitivity* of metrics to different grades of perturbation severity. We find that metrics at times struggle to differentiate minor errors (e.g., punctuation removal or word repetition) from semantics-warping errors such as incorrect gender or numeracy. We also observe that the reference-free<sup>2</sup> COMET-QE learned metric is more sensitive to word repetition and misspelled words than to severe errors such as entirely unrelated translations or named entity replacement. We publicly release DEMETR and associated code to facilitate more principled research into MT evaluation.
|
| 18 |
+
|
| 19 |
+
Most existing MT evaluation metrics compute a score for a candidate translation t against a reference sentence r.<sup>3</sup> These scores can be either a simple function of character or token overlap between t and r (e.g., BLEU), or they can be the result of a complex neural network model that embeds t and r (e.g., BLEURT). While the latter class of
|
| 20 |
+
|
| 21 |
+
*learned* metrics<sup>4</sup> provides more meaningful judgments of translation quality than the former, they are also relatively uninterpretable: the reason a particular translation t receives a high or low score is difficult to discern. In this section, we first explain our perturbation-based methodology to better understand MT metrics before describing the collection of DEMETR, a dataset of linguistic perturbations.
|
| 22 |
+
|
| 23 |
+
Inspired by prior work in *minimal pair*-based linguistic evaluation of pretrained language models such as BLIMP (Warstadt et al., 2020), we investigate how sensitive MT evaluation metrics are to various perturbations of the candidate translation t. Consider the following example, which is designed to evaluate the impact of word order in the candidate translation:
|
| 24 |
+
|
| 25 |
+
**reference translation** r: Pronunciation is relatively easy in Italian since most words are pronounced exactly how they are written.
|
| 26 |
+
|
| 27 |
+
**machine translation** t: Pronunciation is relatively easy in Italian, as most words are pronounced exactly as they are spelled.
|
| 28 |
+
|
| 29 |
+
**perturbed machine translation** t′: Spelled pronunciation as Italian, relatively are most is as they pronounced exactly in words easy.
|
| 30 |
+
|
| 31 |
+
If a particular evaluation metric SCORE is sensitive to this shuffling perturbation, SCORE(r, t′), the score of the perturbed translation, should be lower than SCORE(r, t).<sup>5</sup> Note that while other minor translation errors may be present in t, the perturbed translation t′ differs only in a specific, controlled perturbation (in this case, shuffling).
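The pairwise check can be sketched as follows; `unigram_f1` is a toy bag-of-words stand-in rather than one of the metrics evaluated in the paper, and the example sentences are invented for illustration:

```python
from collections import Counter
import re

# A metric "passes" a DEMETR-style minimal pair when it scores the
# unperturbed translation strictly above the perturbed one.
def unigram_f1(reference, candidate):
    # toy bag-of-words metric, NOT one of the 14 metrics from the paper
    ref = Counter(re.findall(r"\w+", reference.lower()))
    cand = Counter(re.findall(r"\w+", candidate.lower()))
    overlap = sum((ref & cand).values())
    if overlap == 0:
        return 0.0
    p = overlap / sum(cand.values())
    r = overlap / sum(ref.values())
    return 2 * p * r / (p + r)

def passes(score, ref, t, t_pert):
    return score(ref, t) > score(ref, t_pert)

# invented illustrative sentences
ref = "the cat sat on the mat"
t = "the cat sat on a mat"                  # candidate translation
t_negated = "the cat did not sit on a mat"  # accuracy perturbation
t_shuffled = "mat a on sat cat the"         # baseline perturbation

assert passes(unigram_f1, ref, t, t_negated)       # detected
assert not passes(unigram_f1, ref, t, t_shuffled)  # blind spot: word order
```

Word order does not change unigram counts, so the toy metric fails the shuffling baseline; this is exactly the kind of blind spot the minimal-pair design exposes.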
|
| 32 |
+
|
| 33 |
+
To explore the above methodology at scale, we create DEMETR, a dataset that evaluates MT metrics on *35* different linguistic phenomena with 1K perturbations per phenomenon.<sup>6</sup> Each example in DEMETR consists of (1) a sentence in one of *10* source languages, (2) an English translation written by a human translator, (3) a machine transla-
|
| 34 |
+
|
| 35 |
+
<sup>2</sup>While prior work also uses terms such as "reference-less" and "quality estimation," we employ the term "reference-free" as it is more self-explanatory.
|
| 36 |
+
|
| 37 |
+
<sup>3</sup> Some metrics, such as COMET, additionally condition the score on the source sentence.
|
| 38 |
+
|
| 39 |
+
<sup>4</sup>We define *learned* metrics as any metric which uses a machine learning model (including both pretrained and supervised methods).
|
| 40 |
+
|
| 41 |
+
<sup>5</sup> For reference-free metrics like COMET-QE, we include the source sentence s as an input to the scoring function instead of the reference.
|
| 42 |
+
|
| 43 |
+
<sup>6</sup>As some perturbations require the presence of specific items (e.g., to omit a named entity, one has to be present), not all perturbations include exactly 1K sentences.
|
| 44 |
+
|
| 45 |
+
| ID | Category | Description | Error severity |
|----|------------|-------------------------------------|----------------|
| 1 | accuracy | word repetition (twice) | minor |
| 2 | accuracy | word repetition (four times) | minor |
| 3 | accuracy | too general word (undertranslation) | major |
| 4 | accuracy | untranslated word (codemix) | major |
| 5 | accuracy | omitted prepositional phrase | major |
| 6 | accuracy | incorrect word added | critical |
| 7 | accuracy | change to antonym | critical |
| 8 | accuracy | change to negation | critical |
| 9 | accuracy | replaced named entity | critical |
| 10 | accuracy | incorrect numeric | critical |
| 11 | accuracy | incorrect gender pronoun | critical |
| 12 | fluency | omitted conjunction | minor |
| 13 | fluency | part of speech shift | minor |
| 14 | fluency | switched word order (word swap) | minor |
| 15 | fluency | incorrect case (pronouns) | minor |
| 16 | fluency | incorrect preposition or article | minor-major |
| 17 | fluency | incorrect tense | major |
| 18 | fluency | incorrect aspect | major |
| 19 | fluency | change to interrogative | major |
| 20 | mixed | omitted adj/adv | minor-major |
| 21 | mixed | omitted content verb | critical |
| 22 | mixed | omitted noun | critical |
| 23 | mixed | omitted subject | critical |
| 24 | mixed | omitted named entity | critical |
| 25 | typography | misspelled word | minor |
| 26 | typography | deleted character | minor |
| 27 | typography | omitted final punctuation | minor |
| 28 | typography | added punctuation | minor |
| 29 | typography | tokenized sentence | minor |
| 30 | typography | lowercased sentence | minor |
| 31 | typography | first word lowercased | minor |
| 32 | baseline | empty string | base |
| 33 | baseline | unrelated translation | base |
| 34 | baseline | shuffled words | base |
| 35 | baseline | reference as translation | base |
|
| 82 |
+
|
| 83 |
+
Table 1: List of perturbations included in DEMETR with their corresponding error severity. Details can be found in Appendix A.
|
| 84 |
+
|
| 85 |
+
tion produced by Google Translate, <sup>7</sup> and (4) a perturbed version of the Google Translate output which introduces exactly one mistake (semantic, syntactic, or typographical).
|
| 86 |
+
|
| 87 |
+
Data sources and filtering: We utilize *X-to-English* translation pairs from two different datasets, WMT (Callison-Burch et al., 2009; Bojar et al., 2013, 2015, 2014; Akhbardeh et al., 2021; Barrault et al., 2020) and FLORES (Guzmán et al., 2019), aiming at a wide coverage of topics from different sources. WMT has been widely used over the years as a popular MT shared task, while FLORES was recently curated to aid MT evaluation. We consider only the test split of each dataset to prevent possible leaks, as both current and future metrics are likely to be trained on these two datasets. We sample *100* sentences (*50* from each of the two datasets) for each of the following *10*
|
| 88 |
+
|
| 89 |
+
languages: French (fr), Italian (it), Spanish (es), German (de), Czech (cs), Polish (pl), Russian (ru), Hindi (hi), Chinese (zh), and Japanese (ja).<sup>8</sup> We pay special attention to the language selection, as newer MT evaluation metrics, such as COMET-QE or PRISM-QE, employ only the source text and the candidate translation. We control for sentence length by including only sentences between 15 and 25 words long, measured by the length of the tokenized reference translation. Since we re-use the same sentences across multiple perturbations, we did not include shorter sentences because they are less likely to contain multiple linguistic phenomena of interest.<sup>9</sup> As the quality of sampled sentences varies, we manually check each source sentence and its translation to make sure they are of satisfactory quality.<sup>10</sup>
|
| 90 |
+
|
| 91 |
+
**Translating the data:** Given the filtered collection of source sentences, we next translate them into English using the Google Translate API.<sup>11</sup> We manually verify each translation, editing or resampling the instances where the machine translation contains critical errors. Through this process, we obtain 1K curated examples per perturbation (*100* sentences $\times$ *10* languages) that each consist of source and reference sentences along with a machine translation of reasonable quality.
|
| 92 |
+
|
| 93 |
+
<sup>7</sup>We edit the machine translation to ensure satisfactory quality. In cases where the Google Translate output is exceptionally poor, we either replace the sentence or replace the translation with one produced by DeepL (Frahling, 2022) or GPT-3 (Brown et al., 2020).
|
| 94 |
+
|
| 95 |
+
<sup>8</sup>We choose languages that represent different families (Romance, Germanic, Slavic, Indo-Iranian, Sino-Tibetan, and Japonic) with different morphological traits (fusional, agglutinative, and analytic) and a wide range of writing systems (Latin alphabet, Cyrillic alphabet, Devanagari script, Hanzi, and Kanji/Hiragana/Katakana).
|
| 96 |
+
|
| 97 |
+
<sup>9</sup>Similarly, we do not include sentences over 25 words long in DEMETR as some languages may naturally allow longer sentences than others, and we wanted to control the length distribution.
|
| 98 |
+
|
| 99 |
+
<sup>10</sup>In the sentences sampled from WMT, we noticed multiple translation and grammar errors, such as translating the Japanese その最大は本州列島で、世界で7番目に大きい島とされています。 as (the biggest being Honshu), making Japan the 7th largest island in the world, which would suggest that Japan is an island, instead of the largest of which is the Honshu island, considered to be the seventh largest island in the world, or "kakao" ("cacao") incorrectly declined as "kakaa" in Polish. These sentences were rejected, and new ones were sampled in their place. We also resampled sentences whose translations contained artifacts from neighboring sentences due to partial splits and merges, and sentences which exhibit translationese, that is, sentences with source artifacts (Koppel and Ordan, 2011). Finally, we omit or edit sentences with translation artifacts due to the direction of translation. Both WMT and FLORES contain sentences translated from English to other languages. Since the translation process is *not* always fully reversible, we omit sentences where translation from the given language to English would not be possible in the form included in these datasets (e.g., due to addition or omission of information).
|
| 100 |
+
|
| 101 |
+
<sup>11</sup>All sentences were translated in May 2022.
|
| 102 |
+
|
| 103 |
+
We perturb the machine translations obtained above in order to create *minimal pairs*, which allow us to investigate the sensitivity of MT evaluation metrics to different types of errors. Our perturbations are loosely based on the Multidimensional Quality Metrics (MQM; Burchardt, 2013) framework developed to identify and categorize MT errors. Most perturbations were performed semi-automatically by utilizing STANZA (Qi et al., 2020), SPACY,<sup>12</sup> or GPT-3 (Brown et al., 2020), applying handcrafted rules and then manually correcting any errors. Some of the more elaborate perturbations (e.g., translation by a too general term, where one had to be sure that a better, more precise term exists) were performed manually by the authors or linguistically-savvy freelancers hired on the Upwork platform.<sup>13</sup> Special care was given to the plausibility of perturbations (e.g., numbers for replacement were selected from a probable range, such as *1*-*12* for months). See Table 2 for descriptions and examples of most perturbations; the full list is in Appendix A.
|
| 104 |
+
|
| 105 |
+
We roughly categorize our perturbations into the following four categories:
|
| 106 |
+
|
| 107 |
+
- ACCURACY: Perturbations in the accuracy category modify the semantics of the translation by either incorporating misleading information (e.g., by adding plausible yet inadequate text or changing a word to its antonym) or omitting information (e.g., by leaving a word untranslated).
|
| 108 |
+
- FLUENCY: Perturbations in the fluency category focus on grammatical accuracy (e.g., word form agreement, tense, or aspect) and on overall cohesion. Compared to the mistakes in the accuracy category, the true meaning of the sentence can usually be recovered from the context more easily.
|
| 109 |
+
- MIXED: Certain perturbations can be classified as both accuracy and fluency errors. Concretely, this category consists of omission errors that not only obscure the meaning but also affect the grammaticality of the sentence. One such error is *subject removal*, which results not only in an ungrammatical sentence, leaving a gap where the subject should appear, but also in information loss.

<sup>12</sup><https://spacy.io/usage/linguistic-features>
- TYPOGRAPHY: This category concerns punctuation and minor orthographic errors. Examples of mistakes in this category include punctuation removal, tokenization, lowercasing, and common spelling mistakes.
- BASELINE: Finally, we include both upper and lower bounds, since learned metrics such as BLEURT and COMET do not have a specified range for their scores. Specifically, we provide three baselines: as lower bounds, we either change the translation to an unrelated one or provide an empty string,<sup>14</sup> while as an upper bound, we set the perturbed translation *t′* equal to the reference translation *r*, which should receive the highest possible score from reference-based metrics.
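As an illustration, two of the simplest automatic perturbations described above (word repetition and neighboring-word swap) can be sketched with plain string operations. The function names and the punctuation handling are our own simplification, not the released implementation:

```python
def perturb_repetition(translation: str, times: int = 2) -> str:
    """Repeat the last word `times` times total; re-attach final punctuation."""
    words = translation.strip().split()
    last = words[-1].rstrip(".,!?\"'")
    punct = words[-1][len(last):]
    words[-1] = last
    words += [last] * (times - 1)      # add the extra repetitions
    return " ".join(words) + punct

def perturb_swap_order(translation: str, index: int) -> str:
    """Swap the word at `index` with its right neighbor to mimic a word order error."""
    words = translation.split()
    words[index], words[index + 1] = words[index + 1], words[index]
    return " ".join(words)

src = "Most of the goods imported into this country are duty free."
print(perturb_repetition(src))     # "... are duty free free."
print(perturb_swap_order(src, 5))  # "... imported this into country ..."
```

In the actual dataset, such rule-based outputs were additionally checked by hand, as noted in the Implementation column of Table 2.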
Error severity: Our perturbations can also be categorized by their *severity* (see Table 1). We use the following categorization scheme for our analysis experiments:
- MINOR: In this type of error, which includes perturbations such as dropping punctuation or using the wrong article, the meaning of the source sentence can be easily and correctly interpreted by human readers.
- MAJOR: Errors in this category may not affect the overall fluency of the sentence but will result in some missing details. Examples of major errors include undertranslation (e.g., translating "church" as "building"), or leaving a word in the source language untranslated.
- CRITICAL: These are catastrophic errors that result in crucial pieces of information going missing, or in incorrect information being added in a way unrecognizable to the reader; such errors are also likely to involve severe fluency issues. Errors in this category include subject deletion or replacement of a named entity.
We test the accuracy and sensitivity of 14 popular MT evaluation metrics on the perturbations
<sup>13</sup>See <https://www.upwork.com/>. Freelancers were paid an equivalent of \$15 per hour.
<sup>14</sup>Since most of the metrics will not accept an empty string, we pass a full stop instead.
| Category | Type | Example | Description | Implementation | Error Severity |
|----------|------|---------|-------------|----------------|----------------|
| ACCURACY | repetition | I don't know if you realize that most of the goods imported into this country from Central America are duty free. → I don't know if you realize that most of the goods imported into this country from Central America are duty free free. | The last word is repeated twice. Punctuation is added after the last repeated word. | automatic | minor |
| ACCURACY | repetition | Gordon Johndroe, Bush's spokesman, referred to the North Korean commitment as "an important advance towards the goal of achieving verifiable denuclearization of the Korean penisula." → Gordon Johndroe, Bush's spokesman, referred to the North Korean commitment as "an important advance towards the goal of achieving verifiable denuclearization of the Korean penisula penisula penisula penisula." | The last word is repeated four times. Punctuation is added after the last repeated word. | automatic | minor |
| ACCURACY | hypernym | The language most of the people working in the Vatican City use on a daily basis is Italian, and Latin is often used in religious ceremonies. → The language most of the people working in the Vatican City use on a daily basis is Italian, and Latin is often used in religious activities. | A word is translated by a too general term (undertranslation). Special care was taken to ensure that the word used in the perturbed text is a more general, and incorrect, translation of the original word. | manual with suggestions from GPT-3 | major |
| ACCURACY | untranslated | The Polish Air Force will eventually be equipped with 32 F-35 Lightning II fighters manufactured by Lockheed Martin. → The Polish Air Force will eventually be equipped with 32 F-35 Lightning II fighters produkowane by Lockheed Martin. | One word is left untranslated. We manually ensure that each time only one word is left untranslated. | manual | major |
| ACCURACY | completeness | She is in custody pending prosecution and trial; but any witness evidence could be negatively impacted because her image has been widely published. → She is pending prosecution and trial; but any witness evidence could be negatively impacted because her image has been widely published. | One prepositional phrase is removed. Whenever possible, we remove the shortest prepositional phrase to ensure that the perturbed sentence is not much shorter than the original translation. | automatic (Stanza) with manual check | major |
| ACCURACY | addition | Plants look their best when they are in a natural environment, so resist the temptation to remove "just one." → Power plants look their best when they are in a natural environment, so resist the temptation to remove "just one." | One word is added. We make sure that the added word does not disturb the grammaticality of the sentence but changes the meaning in a significant way. | manual | critical |
| ACCURACY | antonym | He has been unable to relieve the pain with medication, which the competition prohibits competitors from taking. → He has been unable to relieve the pleasure with medication, which the competition prohibits competitors from taking. | One word (noun, verb, adjective, or adverb) is changed to its antonym. | manual with suggestions from GPT-3 | critical |
| ACCURACY | mistranslation: negation | Last month, a presidential committee recommended the resignation of the former CEP as part of measures to push the country toward new elections. → Last month, a presidential committee didn't recommend the resignation of the former CEP as part of measures to push the country toward new elections. | Affirmative sentences are changed into negations. Rare negations are changed into affirmative sentences. | manual | critical |
| ACCURACY | mistranslation: named entity | Late night presenter Stephen Colbert welcomed 17-year-old Thunberg to his show on Tuesday and conducted a lengthy interview with the Swede. → Late night presenter John Oliver welcomed 17-year-old Thunberg to his show on Tuesday and conducted a lengthy interview with the Swede. | A named entity is replaced with another named entity from the same category (person, geographic location, or organization). | automatic (Stanza) with manual check | critical |
| ACCURACY | mistranslation: numbers | The Chinese Consulate General in Houston was established in 1979 and is the first Chinese consulate in the United States. → The Chinese Consulate General in Houston was established in 1997 and is the first Chinese consulate in the United States. | A number is replaced with an incorrect one. Special attention was given to keeping the numerals within a reasonable/common range for the given category (e.g., 0-100 for percentages; 1-12 for months). We also ensure that the replacement does not create an illogical sentence (e.g., replacing "1920" with "1940" in "from 1920 to 1930"). | manual | critical |
| ACCURACY | mistranslation: gender | He has been unable to relieve the pain with medication, which the competition prohibits competitors from taking. → She has been unable to relieve the pain with medication, which the competition prohibits competitors from taking. | Exactly one feminine pronoun in the sentence (such as "she" or "her") is replaced with a masculine pronoun (such as "he" or "him") or vice versa. This includes reflexive pronouns (i.e., "him/herself") and possessive adjectives (i.e., "his/her"). | automatic with manual check | critical |
| FLUENCY | cohesion | Scientists want to understand how planets have formed since a comet collided with Earth long ago, and especially how Earth has formed. → Scientists want to understand how planets have formed a comet collided with Earth long ago, and especially how Earth has formed. | A conjunction, such as "thus" or "therefore", is removed. Special attention was given to keeping the rest of the sentence unperturbed. | automatic (spaCy) with manual check | minor |
| FLUENCY | grammar: pos shift | The U.S. Supreme Court last year blocked the Trump administration from including the citizenship question on the 2020 census form. → The U.S. Supreme Court last year blocked the Trump administrate from including the citizenship question on the 2020 census form. | The affix of a word is changed while the stem is kept constant (e.g., "bad" to "badly"), resulting in a part-of-speech shift. The degree to which the original meaning is affected varies; however, the intended meaning is easily retrievable from the stem and context. | manual | minor |
| FLUENCY | grammar: swap order | I don't know if you realize that most of the goods imported into this country from Central America are duty free. → I don't know if you realize that most of the goods imported this into country from Central America are duty free. | Two neighboring words are swapped to mimic a word order error. | automatic (spaCy) | minor |
| FLUENCY | grammar: case | She announced that after a break of several years, a Rakoczy horse show will take place again in 2021. → Her announced that after a break of several years, a Rakoczy horse show will take place again in 2021. | One pronoun in the sentence is changed into a different, incorrect case (e.g., "he" to "him"). | automatic (spaCy) with manual check | minor |
| FLUENCY | grammar: function word | Last month, a presidential committee recommended the resignation of the former CEP as part of measures to push the country toward new elections. → Last month, an presidential committee recommended the resignation of the former CEP as part of measures to push the country toward new elections. | A preposition or article is changed into an incorrect one to mimic a mistake in function word usage. While most perturbations result in minor mistakes (i.e., the original meaning is easily retrievable), some may be more severe. | automatic with manual check | minor-major |
| FLUENCY | grammar: tense | Cyanuric acid and melamine were both found in urine samples of pets who died after eating contaminated pet food. → Cyanuric acid and melamine are both found in urine samples of pets who died after eating contaminated pet food. | A tense is changed into an incorrect one. We consider past and present as well as future tense (although the latter may be classified as a modal verb in English). | manual | major |
| FLUENCY | grammar: aspect | He has been unable to relieve the pain with medication, which the competition prohibits competitors from taking. → He is being unable to relieve the pain with medication, which the competition prohibits competitors from taking. | Aspect is changed to an incorrect one (e.g., perfective to progressive) *without* changing the tense. | manual | major |
| FLUENCY | grammar: interrogative | This is the tenth time since the start of the pandemic that Florida's daily death toll has surpassed 100. → Is this the tenth time since the start of the pandemic that Florida's daily death toll has surpassed 100? | Affirmative mood is changed to interrogative mood. | manual | major |
| MIXED | omission: adj/adv | Rangers closely monitor shooters participating in supplemental pest control trials as the trials are monitored and their effectiveness assessed. → Rangers monitor shooters participating in supplemental pest control trials as the trials are monitored and their effectiveness assessed. | An adjective or adverb is removed. While in most cases this leads to a minor error, the severity depends on the removed word. | automatic (spaCy) with manual check | minor-major |
| MIXED | omission: content verb | Catri said that 85% of new coronavirus cases in Belgium last week were under the age of 60. → Catri that 85% of new coronavirus cases in Belgium last week were under the age of 60. | A content verb is removed (this excludes auxiliary verbs and copulae). | automatic with manual check | critical |
| MIXED | omission: noun | In 1940 he stood up to other government aristocrats who wanted to discuss an "agreement" with the Nazis and he very ably won. → In 1940 he stood up to other government who wanted to discuss an "agreement" with the Nazis and he very ably won. | A noun that is not a named entity or a subject is removed. We remove the head of the noun phrase including compound nouns. | automatic (spaCy) with manual check | critical |
| MIXED | omission: subject | His research shows that the administration of hormones can accelerate the maturation of the baby's fetal lungs. → His shows that the administration of hormones can accelerate the maturation of the baby's fetal lungs. | The subject is removed. We remove the head of the noun phrase including compound nouns. | automatic (spaCy) with manual check | critical |
| MIXED | omission: named entity | I don't know if you realize that most of the goods imported into this country from Central America are duty free. → I don't know if you realize that most of the goods imported into this country from are duty free. | A named entity that is not a subject is removed. | automatic (Stanza) with manual check | critical |
Table 2: A subset of perturbations in DEMETR along with examples (detailed changes are highlighted in purple). A full list of perturbations is provided in Table A1 and Table A2 in Appendix A.
in DEMETR. We include both traditional string-based metrics, such as BLEU or CHRF, as well as newer learned metrics, such as BLEURT and COMET. Within the latter category, we also include two reference-free metrics, which rely only on the source sentence and translation and open possibilities for a more robust MT evaluation. The rest of this section provides an overview of the evaluation metrics before analyzing our findings. Detailed results of each metric on every perturbation are found in Table A3.
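To make concrete how a string-based metric reacts to such perturbations, the sketch below implements a simplified character n-gram F-score in the spirit of CHRF (this is our own toy re-implementation for illustration, not the official metric): the unperturbed translation matches the reference exactly (the upper-bound baseline), while a negation perturbation lowers the score only slightly, despite being a critical error.

```python
from collections import Counter

def char_ngram_fscore(hypothesis: str, reference: str, n: int = 3, beta: float = 2.0) -> float:
    """Simplified chrF-style score: F-measure over character n-gram overlap."""
    def ngrams(text: str) -> Counter:
        text = text.replace(" ", "")
        return Counter(text[i:i + n] for i in range(len(text) - n + 1))
    hyp, ref = ngrams(hypothesis), ngrams(reference)
    overlap = sum((hyp & ref).values())          # multiset intersection of n-grams
    if not hyp or not ref or overlap == 0:
        return 0.0
    precision = overlap / sum(hyp.values())
    recall = overlap / sum(ref.values())
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)

reference = "The committee recommended the resignation of the former CEP."
good = "The committee recommended the resignation of the former CEP."
negated = "The committee didn't recommend the resignation of the former CEP."
assert char_ngram_fscore(good, reference) == 1.0    # upper-bound baseline
assert char_ngram_fscore(negated, reference) < 1.0  # perturbed translation scores lower
```

The small score drop for a meaning-inverting edit is exactly the kind of insensitivity that DEMETR is designed to expose.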
2212.12141/main_diagram/main_diagram.drawio
ADDED
2212.12141/main_diagram/main_diagram.pdf
ADDED
Binary file (32.1 kB).
2302.05118/main_diagram/main_diagram.drawio
ADDED
2302.05118/paper_text/intro_method.md
ADDED
# Introduction
Deep learning models have become state-of-the-art (SOTA) in several different fields. Especially in safety-critical applications such as medical diagnosis and autonomous driving, where environments change over time, reliable model estimates of predictive uncertainty are crucial. Models are thus required to be accurate as well as calibrated, meaning that their predictive uncertainty (or confidence) matches their expected accuracy. Because many deep neural networks are uncalibrated [@guo_calibration_2017], post-hoc calibration of already trained neural networks has received increasing attention in recent years.
<figure id="fig:teaser_fig" data-latex-placement="t">
<table>
<tbody>
<tr>
<td style="text-align: center;"><embed src="figures/teaser_fig_1.pdf" style="width:17.0%" /></td>
<td style="text-align: center;"><embed src="figures/teaser_fig_2.pdf" style="width:18.0%" /></td>
</tr>
</tbody>
</table>
<figcaption><strong>Left:</strong> Our density-aware calibration method (DAC) <span class="math inline"><em>g</em></span> can be combined with existing post-hoc methods <span class="math inline"><em>h</em></span> leading to robust and reliable uncertainty estimates. To this end, DAC leverages information from feature vectors <span class="math inline"><em>z</em><sub>1</sub>...<em>z</em><sub><em>L</em></sub></span> across the entire classifier <span class="math inline"><em>f</em></span>. <strong>Right:</strong> DAC is based on KNN, where predictive uncertainty is expected to be high for test samples lying in low-density regions of the empirical training distribution and vice versa.</figcaption>
</figure>
In order to tackle the miscalibration of neural networks, researchers have come up with a plethora of post-hoc calibration methods [@guo_calibration_2017; @zhang2020mix; @rahimi2020intra; @milios2018dirichlet; @tomani2022parameterized; @gupta2020calibration]. These approaches are designed for in-domain calibration, where test samples are drawn from the same distribution the network was trained on. Although they perform almost perfectly in-domain, recent work [@tomani2021post] has shown that they fall substantially short of providing reliable confidence scores in domain-shift and out-of-domain (OOD) scenarios (up to an order of magnitude worse than in-domain). This is unacceptable in safety-critical real-world applications, where calibration matters in order to prevent unforeseeable failures. To date, the only post-hoc methods introduced to mitigate this shortcoming in domain-shift and OOD settings use artificially created data or data from different sources in order to estimate the potential test distribution [@tomani2021post; @yu2022robust; @wald2021calibration; @gong2021confidence]. However, these methods are not generic: they require domain knowledge about the dataset and utilize multiple domains for calibration. Additionally, they may only work well for a narrow subset of anticipated distributional shifts, because they rely heavily on strong assumptions about the potential test distribution. Furthermore, they can hurt in-domain calibration performance.
To mitigate the issue of miscalibration in scenarios where test samples are not necessarily drawn from dense regions of the empirical training distribution, or are even OOD, we introduce a density-aware method that extends the field of post-hoc calibration beyond in-domain calibration. Contrary to the aforementioned works, which rely on specially crafted training data, our method DAC does not depend on additional data, does not rely on any assumptions about potentially shifted or out-of-domain test distributions, and is even domain agnostic. The proposed method can therefore simply be added to an existing post-hoc calibration pipeline, because it relies on the exact same training paradigm, with a held-out in-domain validation set, as current post-hoc methods.
Previous works on calibration have focused primarily on post-hoc methods that solely take softmax outputs or logits into account [@guo_calibration_2017; @zhang2020mix; @rahimi2020intra; @milios2018dirichlet; @tomani2022parameterized; @gupta2020calibration]. However, we argue that prior layers in neural networks contain valuable information for recalibration too. Moreover, we report which layers our method identified as particularly relevant for providing well-calibrated predictive uncertainty estimates.
Recently developed large-scale neural networks that benefit from pre-training on vast amounts of data [@kolesnikov2020big; @wslimageseccv2018] have mostly been overlooked when benchmarking post-hoc calibration methods. One explanation could be that, e.g., vision transformers (ViTs) [@dosovitskiy2020image] are well calibrated out of the box [@minderer2021revisiting]. Nevertheless, we show that these models, too, can profit from post-hoc methods, and in particular from DAC, through more robust uncertainty estimates.
- We propose DAC, an accuracy-preserving and density-aware calibration method that can be combined with existing post-hoc methods to boost domain-shift and out-of-domain performance while maintaining in-domain calibration.[^1]
- We discover that the common practice of using solely the final logits for post-hoc calibration is sub-optimal and that aggregating intermediate outputs yields improved results.
- We study recent large-scale models, such as transformers, pre-trained on vast amounts of data, and find that our proposed method yields substantial calibration gains for these models as well.
# Method
We study the multi-class classification problem, where $X \in \mathbb{R}^D$ denotes a $D$-dimensional random input variable and $Y \in \{1,2,\dots, C\}$ denotes the label with $C$ classes with a ground truth joint distribution $\pi(X, Y ) =
\pi(Y |X)\pi(X)$. The dataset $\mathbb{D}$ contains $N$ i.i.d. samples $\mathbb{D} = \{(X_n, Y_n)\}_{n=1}^N$ drawn from $\pi(X, Y)$.
Let the output of a trained neural network classifier $f$ be $f(X) = (y,\mathbf{z}_L)$, where $y$ denotes the predicted class and $\mathbf{z}_L$ the associated logits vector. The softmax function $\sigma_{SM}$ then transforms $\mathbf{z}_L$ into a confidence score or predictive uncertainty $p$ w.r.t. $y$ via $p=\max_c \sigma_{SM}(\mathbf{z}_L)^{(c)}$. In this paper, we propose an approach to improve the quality of the predictive uncertainty $p$ by recalibrating the logits $\mathbf{z}_L$ from $f(X)$ via a combination of two calibration methods: $$\begin{eqnarray}
p=h(g(f(X)))
\label{eq:combine_cali}
\end{eqnarray}$$ where $g$ denotes our density-aware calibration method DAC rescaling logits for boosting domain-shift and OOD calibration performance and $h$ denotes an existing state-of-the-art in-domain post-hoc calibration method (Fig. [1](#fig:teaser_fig){reference-type="ref" reference="fig:teaser_fig"}).
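The composition $p=h(g(f(X)))$ can be sketched in a few lines of numpy. Here `g` and `h` are placeholder rescalings standing in for DAC and for an existing post-hoc calibrator (temperature scaling), with made-up parameter values; the real methods fit these parameters on a held-out validation set:

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max()               # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def g(z_L: np.ndarray, s: float = 1.5) -> np.ndarray:
    # placeholder for DAC: density-aware rescaling of the logits
    return z_L / s

def h(z_L: np.ndarray, T: float = 1.2) -> np.ndarray:
    # placeholder post-hoc method: temperature scaling followed by softmax
    return softmax(z_L / T)

z_L = np.array([2.0, 1.0, 0.5])   # logits from the classifier f(X)
probs = h(g(z_L))                 # p = h(g(f(X)))
p = probs.max()                   # confidence for the predicted class
assert np.isclose(probs.sum(), 1.0)
```

Note that both rescalings are monotone in the logits, so the predicted class (the argmax) is unchanged; this is the accuracy-preserving property discussed below.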
Following Guo et al., perfect calibration is defined such that confidence and accuracy match for all confidence levels: $$\begin{eqnarray}
\mathop{\mathbb{P}}(Y=y \mid P=p) = p, \; \; \forall p \in [0,1]
\label{eq:cali}
\end{eqnarray}$$ Consequently, miscalibration is defined as the difference in expectation between accuracy and confidence. $$\begin{eqnarray}
\mathop{\mathbb{E}}_{P}\left[\big\lvert\mathop{\mathbb{P}}(Y=y \mid P=p) - p\big\rvert \right]
\label{eq:miscal}
\end{eqnarray}$$
The expected calibration error (ECE) [@naeini2015obtaining] is frequently used for quantifying miscalibration. ECE is a scalar summary measure that estimates miscalibration by approximating equation [\[eq:miscal\]](#eq:miscal){reference-type="eqref" reference="eq:miscal"} as follows. In the first step, the confidence scores $\hat{\mathcal{P}}$ of all samples are partitioned into $M$ equally sized bins of size $1/M$; in the second step, for each bin $B_m$ the respective mean confidence and accuracy are computed based on the ground truth class $y$. Finally, the ECE is estimated by calculating the mean difference between confidence and accuracy over all bins: $$\begin{eqnarray}
|
| 50 |
+
\mathrm{ECE}^d = \sum_{m=1}^M \frac{\lvert B_m\rvert}{N}\left\| \mathrm{acc}(B_m) - \mathrm{conf}(B_m)\right\|_d
|
| 51 |
+
\end{eqnarray}$$[]{#eq:ece label="eq:ece"} with $d$ usually set to 1 (L1-norm).
|
| 52 |
+
|
| 53 |
+
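For concreteness, this binning estimate of $\mathrm{ECE}^1$ can be sketched in a few lines of NumPy; the function name and the bin-edge handling below are our illustrative choices, not the authors' implementation.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """Estimate ECE (d = 1): partition confidences into equal-width bins
    and average the |accuracy - mean confidence| gap per bin, weighted
    by the fraction of samples falling into each bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n, ece = len(confidences), 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.sum() / n * gap
    return ece
```

For example, ten predictions made at confidence 0.9 that are right only half the time yield an ECE of 0.4.
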
The main idea behind our proposed calibration method $g$ stems from the fact that test samples lying in high-density regions of the empirical training distribution can generally be predicted with higher confidence than samples lying in low-density regions. In the latter case, the network has seen very few, if any, training samples in the neighborhood of the respective test sample in feature space and is thus unable to provide reliable predictions for those samples. Leveraging this density information through a proxy can therefore result in better calibration.

In order to estimate such a proxy for each sample, we propose non-parametric density estimation using k-nearest neighbors (KNN) based on feature embeddings extracted from the classifier. KNN has successfully been applied to out-of-distribution detection [@sun2022out]. In contrast to Sun et al., who only take the penultimate layer into account, we argue that earlier layers yield important information too, and we therefore incorporate them into our method as follows. We call our method Density-Aware Calibration (DAC).

Temperature scaling [@guo_calibration_2017] is a frequently used calibration method, in which a single scalar parameter $T$ re-scales the logits of an already trained classifier in order to obtain calibrated probability estimates $\hat{Q}$ for logits $\mathbf{z}_{L}$ using the softmax function $\sigma_{SM}$: $$\begin{equation}
\hat{Q} = \sigma_{SM}(\mathbf{z}_{L}/T)
\label{eq:ts}
\end{equation}$$ Like temperature scaling, our method is accuracy preserving, in that we use a single scaling factor $S(\mathbf{x},w)$ for re-scaling the logits of the classifier: $$\begin{eqnarray}
\hat{Q}(\mathbf{x},w) = \sigma_{SM}(\mathbf{z}_{L}/S(\mathbf{x},w))
\label{eq:qscore}
\end{eqnarray}$$ In contrast to temperature scaling, $S(\mathbf{x},w)$ is sample-dependent with respect to $\mathbf{x}$ and is calculated as a linear combination of per-layer density estimates $s_l$: $$\begin{equation}
S(\mathbf{x},w) = \sum_{l=1}^{L}w_ls_l+w_0
\label{eq:weighting_scheme}
\end{equation}$$ with $w_1,...,w_L$ being the weights for the $L$ feature layers and $w_0$ a bias term. Note that only positive weights are valid, since negative weights would assign high confidence to outliers; we therefore constrain the weights to be positive, which also helps to mitigate overfitting.

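The two scaling rules differ only in whether the divisor is constant; a minimal sketch of the sample-dependent variant (function names and toy values are ours, not the paper's code):

```python
import numpy as np

def softmax(z):
    z = np.asarray(z, dtype=float)
    e = np.exp(z - z.max())          # subtract max for numerical stability
    return e / e.sum()

def density_aware_probs(logits, layer_densities, w, w0):
    """Sample-dependent rescaling: S(x, w) = sum_l w_l * s_l + w0, then
    Q = softmax(z_L / S).  Since S > 0, the argmax (and hence accuracy)
    is unchanged -- only the confidence is adjusted."""
    S = float(np.dot(w, layer_densities) + w0)
    return softmax(np.asarray(logits, dtype=float) / S)
```

With small per-layer densities (in-distribution), $S$ stays near $w_0$ and the prediction keeps its sharpness; large k-NN distances inflate $S$ and flatten the softmax.
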
For each feature layer $l$, we compute the density estimate $s_l$ for a test sample $\mathbf{x}$ in the neighborhood of the empirical training distribution via the $k$-th nearest-neighbor distance: First, we derive the test feature vector $\mathbf{z}_l$ from the trained classifier $f$ given the test input sample $\mathbf{x}$, average over the spatial dimensions, and normalize it. We then use the normalized training feature vectors $Z_{N_{Tr},l}=(\mathbf{z}_{1,l},\mathbf{z}_{2,l},...,\mathbf{z}_{N_{Tr},l})$, gathered from the training dataset $X_{N_{Tr}}=(\mathbf{x}_1,\mathbf{x}_2,...,\mathbf{x}_{N_{Tr}})$, to calculate the Euclidean distance between $\mathbf{z}_l$ and each element of $Z_{N_{Tr},l}$, for each sample $i$ in the training set: $$\begin{equation}
d_{i,l} = \|\mathbf{z}_{i,l}-\mathbf{z}_l\|
\label{eq:eucledian}
\end{equation}$$ The resulting sequence $D_{N_{Tr},l}=(d_{1,l},d_{2,l},...,d_{N_{Tr},l})$ is sorted in ascending order. Finally, $s_l$ is given by the $k$-th smallest element (the $k$-th nearest neighbor) in the sequence: $s_l=d_{(k)}$, with $(k)$ indicating the index in the sorted sequence $D_{N_{Tr},l}$. For determining $k$, we follow Sun et al., whose thorough analysis suggests $k=50$ for CIFAR10 and $k=200$ for CIFAR100 when using all training samples, and $k=10$ for ImageNet when using 1% of the training samples.

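The per-layer proxy $s_l$ then amounts to a sorted-distance lookup; a sketch with illustrative names (not the paper's code):

```python
import numpy as np

def knn_density_proxy(z_test, z_train, k):
    """s_l: Euclidean distance from a normalized test feature vector to
    its k-th nearest neighbor among the training features of layer l.
    A larger value indicates a lower-density region of the training
    distribution."""
    z_test = np.asarray(z_test, dtype=float)
    z_train = np.asarray(z_train, dtype=float)
    d = np.linalg.norm(z_train - z_test, axis=1)   # d_{i,l} for all i
    return np.sort(d)[k - 1]                       # k-th smallest distance
```
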
We fit our method for a trained neural network $f(X)=(y,\mathbf{z}_L)$ by optimizing a squared-error loss $L_w$ w.r.t. $w$: $$\begin{eqnarray}
L_w = \sum_{c=1}^C(I_{c}-\sigma_{SM}(\mathbf{z}_{L}/S(\mathbf{x},w))^{(c)})^2
\label{eq:lossece}
\end{eqnarray}$$ where $I_{c}$ is a binary variable that is $1$ if the respective sample has true class $c$, and $0$ otherwise. We accumulate $L_w$ over all samples in the validation set.\
The rescaled logits $\mathbf{\hat{z}}_{L}(\mathbf{x},w)=\mathbf{z}_{L}/S(\mathbf{x},w)$, and consequently the recalibrated probability estimates $\hat{Q}(\mathbf{x},\mathbf{w})$, can directly be fed to another post-hoc method. Thus, DAC can be applied prior to other existing in-domain post-hoc calibration methods for robustly calibrating models in domain-shift and OOD scenarios (Fig. [1](#fig:teaser_fig){reference-type="ref" reference="fig:teaser_fig"}).

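The text does not prescribe a particular optimizer for minimizing $L_w$ under the positivity constraint; one self-contained possibility is projected gradient descent with numerical gradients (all names and hyper-parameters below are our assumptions, not the authors' setup):

```python
import numpy as np

def fit_dac_weights(logits, densities, labels, n_classes,
                    lr=0.05, steps=2000):
    """Fit non-negative layer weights w and bias w0 on a validation set
    by projected gradient descent on the summed squared error between
    one-hot labels and softmax(z / S(x, w)).  `densities` holds one row
    of per-layer k-NN distances per sample."""
    logits = np.asarray(logits, dtype=float)
    densities = np.asarray(densities, dtype=float)
    onehot = np.eye(n_classes)[np.asarray(labels)]
    w = np.zeros(densities.shape[1])
    w0 = 1.0                                   # start from S = 1 (identity)

    def loss(w, w0):
        S = densities @ w + w0                 # per-sample scale S(x, w)
        z = logits / S[:, None]
        z = z - z.max(axis=1, keepdims=True)
        q = np.exp(z)
        q /= q.sum(axis=1, keepdims=True)
        return ((onehot - q) ** 2).sum()

    eps = 1e-4
    for _ in range(steps):
        # numerical gradients keep the sketch short (no autograd needed)
        gw = np.array([(loss(w + eps * np.eye(len(w))[j], w0)
                        - loss(w, w0)) / eps for j in range(len(w))])
        g0 = (loss(w, w0 + eps) - loss(w, w0)) / eps
        w = np.maximum(w - lr * gw, 0.0)       # project onto w >= 0
        w0 = max(w0 - lr * g0, 1e-3)           # keep S positive
    return w, w0
```

On a toy validation set where half the confident predictions are wrong, the fitted scale $S$ grows above 1, softening the overconfident softmax.
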
DAC uses KNN, a non-parametric method, to compute a density proxy per layer and combines these proxies linearly across layers, whereas methods such as Parameterized Temperature Scaling [@tomani2022parameterized] and models that utilize intra-order-preserving functions for calibration [@rahimi2020intra] are parametric methods based on a neural network. Moreover, DAC uses intermediate features from hidden layers and is specifically designed with domain-shift and OOD calibration behavior in mind.

Our method has the following advantages:

- **Density aware:** Due to distance-based density estimation across feature layers, our method can infer how close or how far a test sample lies in feature space with respect to the training distribution and can adjust the predictive estimates of the classifier accordingly.

- **Domain agnostic:** Since we use KNN, a non-parametric method, for density estimation, no distributional assumptions are imposed on the feature space; our method is therefore applicable to any type of in-domain, domain-shift, or OOD scenario.

- **Backbone agnostic:** DAC adapts easily to different underlying classifier architectures (e.g., CNNs, ResNets, and more recent models such as transformers), because during training it automatically identifies the feature layers that are informative for uncertainty calibration.

:::::: table*
::::: center
:::: small
::: sc
+:-----------------+:---------:+:--------:+:------------------:+:--------:+:--------:+:--------:+:------------------:+:------------------:+:------------------:+:--------:+:------------------:+
| | **Uncal** | **Baseline Calibration Methods** | **Combination with DAC (Ours)** |
+------------------+-----------+----------+--------------------+----------+----------+----------+--------------------+--------------------+--------------------+----------+--------------------+
| | \- | TS | ETS | IRM | DIA | SPL | TS | ETS | IRM | DIA | SPL |
+------------------+-----------+----------+--------------------+----------+----------+----------+--------------------+--------------------+--------------------+----------+--------------------+
| C10 ResNet18 | 19.27 | 4.96 | 4.96 | 5.93 | 6.39 | 6.27 | [4.49]{.underline} | **4.39** | 4.90 | 4.61 | 4.74 |
+------------------+-----------+----------+--------------------+----------+----------+----------+--------------------+--------------------+--------------------+----------+--------------------+
| C10 VGG16 | 19.05 | 6.25 | 6.30 | 7.45 | 9.82 | 7.57 | **5.66** | **5.66** | 6.30 | 6.33 | 5.69 |
+------------------+-----------+----------+--------------------+----------+----------+----------+--------------------+--------------------+--------------------+----------+--------------------+
| C10 DenseNet121 | 19.26 | 5.21 | 5.21 | 6.66 | 7.81 | 6.60 | [4.59]{.underline} | [4.59]{.underline} | 5.69 | 6.60 | **4.42** |
+------------------+-----------+----------+--------------------+----------+----------+----------+--------------------+--------------------+--------------------+----------+--------------------+
| C100 ResNet18 | 16.44 | 11.37 | 10.56 | 12.26 | 9.24 | 10.40 | 10.66 | [8.96]{.underline} | 10.04 | **8.47** | 9.74 |
+------------------+-----------+----------+--------------------+----------+----------+----------+--------------------+--------------------+--------------------+----------+--------------------+
| C100 VGG16 | 34.41 | 11.54 | 11.54 | 13.24 | 14.62 | 10.76 | [6.49]{.underline} | **6.48** | 8.11 | 10.26 | 7.45 |
+------------------+-----------+----------+--------------------+----------+----------+----------+--------------------+--------------------+--------------------+----------+--------------------+
| C100 DenseNet121 | 23.83 | 8.80 | [8.76]{.underline} | 12.07 | 11.02 | 9.73 | 8.77 | **8.40** | 10.05 | 15.01 | 9.15 |
+------------------+-----------+----------+--------------------+----------+----------+----------+--------------------+--------------------+--------------------+----------+--------------------+
| IMG ResNet152 | 10.50 | 4.47 | 4.01 | 5.20 | 7.17 | 5.56 | [3.48]{.underline} | **3.34** | 3.50 | 3.64 | 3.63 |
+------------------+-----------+----------+--------------------+----------+----------+----------+--------------------+--------------------+--------------------+----------+--------------------+
| IMG DenseNet169 | 13.28 | 6.59 | 6.34 | 7.37 | 8.44 | 7.12 | 4.81 | **3.87** | [4.53]{.underline} | 6.31 | 4.60 |
+------------------+-----------+----------+--------------------+----------+----------+----------+--------------------+--------------------+--------------------+----------+--------------------+
| IMG Xception | 30.49 | 8.81 | 8.40 | 12.93 | 9.83 | 10.80 | 8.79 | **7.99** | [8.38]{.underline} | 8.99 | 8.49 |
+------------------+-----------+----------+--------------------+----------+----------+----------+--------------------+--------------------+--------------------+----------+--------------------+
| IMG BiT-M | 11.71 | 7.17 | 6.56 | 6.93 | 7.45 | 6.62 | 4.40 | [3.98]{.underline} | 4.21 | 5.51 | **3.76** |
+------------------+-----------+----------+--------------------+----------+----------+----------+--------------------+--------------------+--------------------+----------+--------------------+
| IMG ResNeXt-WSL | 15.44 | 8.03 | 8.03 | 8.04 | 8.32 | 6.16 | 7.32 | [5.63]{.underline} | 5.75 | 6.32 | **3.90** |
+------------------+-----------+----------+--------------------+----------+----------+----------+--------------------+--------------------+--------------------+----------+--------------------+
| IMG ViT-B | 3.78 | 4.23 | 3.72 | 4.24 | 5.85 | 3.93 | 3.80 | **3.34** | 3.99 | 5.52 | [3.56]{.underline} |
+------------------+-----------+----------+--------------------+----------+----------+----------+--------------------+--------------------+--------------------+----------+--------------------+
:::
::::
:::::
::::::

|
2305.14761/main_diagram/main_diagram.drawio
ADDED
The diff for this file is too large to render.

2305.14761/main_diagram/main_diagram.pdf
ADDED
Binary file (84.8 kB).

2305.14761/paper_text/intro_method.md
ADDED
@@ -0,0 +1,16 @@

# Introduction
<figure id="fiq:first-stage-pretraining" data-latex-placement="t!">
<embed src="Figures/Donut.drawio1.pdf" />
<figcaption><span> Our model with different pretraining objectives. The model consists of two main modules: Chart Image Encoder, and Text Decoder. Four different pretraining objectives are specified in different colors; <em></em>, <em></em>, <em></em>, and <em></em>. </span> </figcaption>
</figure>

Information visualizations such as bar charts and line charts are commonly used for analyzing data, inferring key insights and making informed decisions [@hoque2022chartSurvey]. However, understanding important patterns and trends from charts and answering complex questions about them can be cognitively taxing. Thus, to facilitate users in analyzing charts, several downstream NLP tasks over charts have been proposed recently, including chart question answering [@masry-etal-2022-chartqa; @open-CQA; @lee2022pix2struct], natural language generation for visualizations [@obeid-hoque-2020-chart; @chart-to-text-acl] and automatic data story generation [@shi2020calliope].

A dominant strategy for tackling these downstream tasks is to utilize pretrained models [@Su2020VL-BERT; @li-etal-2020-bert-vision; @vilt; @vlt5] trained on language and vision tasks [@ijcai2022p762]. However, although effective, such models may not be optimal for chart-specific tasks because they are trained on large text corpora and/or image-text pairs without any specific focus on chart comprehension. In reality, charts differ from natural images in that they visually communicate the *data* using *graphical marks* (e.g., bars, lines) and *text* (e.g., titles, labels, legends). Readers can discover important patterns, trends, and outliers from such visual representations [@munzner2014visualization]. Existing pretrained models do not consider these unique structures and communicative goals of charts. For instance, Pix2Struct [@lee2022pix2struct] is a pretrained image-to-text model designed for situated language understanding. Its pretraining objective focuses on screenshot parsing based on the HTML code of webpages, with a primary emphasis on layout understanding rather than reasoning over the visual elements. MatCha [@liu2022matcha] extends Pix2Struct by incorporating math reasoning and chart data extraction tasks, but it still lacks training objectives for text generation from charts, and it was trained on a limited number of charts.

In this work, we present , a pretrained model designed specifically for chart comprehension and reasoning. is pretrained on a large corpus of charts and aims to serve as a Universal model for various chart-related downstream tasks ([1](#fiq:first-stage-pretraining){reference-type="ref+Label" reference="fiq:first-stage-pretraining"}). Inspired by the model architecture of @kim2022ocr, consists of two modules: *(1)* a chart encoder, which takes the chart image as input, and *(2)* a text decoder, trained to decode the expected output based on the encoded image and the text input fed to the decoder as a task prompt. We performed pretraining on a diverse set of 611K charts that we collected from multiple real-world sources. Our pretraining objectives include both low-level tasks, focused on extracting visual elements and data from chart images, and high-level tasks, intended to align more closely with downstream applications. One key challenge for pretraining was that most charts in the corpus do not come with informative summaries, which are critical for various downstream tasks. To address this challenge, we used knowledge distillation techniques to leverage large language models (LLMs) for opportunistically collecting chart summaries, which were then used during pretraining.

We conducted extensive experiments and analyses on various chart-specific downstream tasks to evaluate the effectiveness of our approach. Specifically, we evaluated on two chart question answering datasets, ChartQA [@masry-etal-2022-chartqa] and OpenCQA [@open-CQA], and found that it outperformed the state-of-the-art models in both cases. For chart summarization, achieves superior performance in both human and automatic evaluation measures such as BLEU [@post-2018-call] and ratings from ChatGPT [@ChatGPT]. Moreover, achieved state-of-the-art results on the Chart-to-Table downstream task. Finally, our model showed improved time and memory efficiency compared to the previous state-of-the-art model, MatCha, being more than 11 times faster with 28% fewer parameters.

Our primary contributions are: (i) a pretrained model for chart comprehension with unique low-level and high-level pretraining objectives specific to charts; (ii) a large-scale chart corpus for pretraining, covering a diverse range of visual styles and topics; (iii) extensive automatic and human evaluations that demonstrate the state-of-the-art performance of across various chart-specific benchmark tasks while optimizing time and memory efficiency. We have made our code and chart corpus publicly available at <https://github.com/vis-nlp/UniChart>.

2305.16000/main_diagram/main_diagram.drawio
ADDED
The diff for this file is too large to render.

2305.16000/main_diagram/main_diagram.pdf
ADDED
Binary file (92.8 kB).

2305.16000/paper_text/intro_method.md
ADDED
@@ -0,0 +1,59 @@

# Introduction
Automated summarisation of salient arguments from texts is a long-standing problem that has attracted considerable research interest in the last decade. Early efforts tackled argument summarisation as a clustering task, implicitly expressing the main idea through different notions of relatedness, such as argument facets [@DBLP:conf/sigdial/MisraEW16], similarity [@DBLP:conf/acl/ReimersSBDSG19] and frames [@DBLP:conf/emnlp/AjjourAWS19]. However, these approaches do not create easy-to-understand summaries from the clusters, which makes it difficult to comprehensively navigate the overwhelming wealth of information available in online textual content.

Recent trends aim to alleviate this problem by summarising a large collection of arguments as a set of concise sentences that describe the collection at a high level; these sentences are called *key points* (KPs). This approach was first proposed by @DBLP:conf/acl/Bar-HaimEFKLS20 and consists of two subtasks, namely *key point generation* (selecting key point arguments from the corpus) and *key point matching* (matching arguments to these key points). Later work applied it across different domains [@DBLP:conf/emnlp/Bar-HaimKEFLS20], for example to product and business reviews [@DBLP:conf/acl/Bar-HaimEKFS20]. While this seminal work advanced the state of the art in argument summarisation, a bottleneck is the lack of large-scale datasets. A common limitation of such extractive summarisation methods is that it is difficult to select, from dozens of arguments, candidates that concisely capture the main idea in the corpus. Although @DBLP:conf/acl/Bar-HaimEKFS20 suggested extracting key point candidates from the broader domain (e.g. selecting key point candidates from restaurant or hotel reviews when the topic is "*whether the food served is tasty*") to overcome this fundamental limitation, it is impractical to assume that such data will always be available for selection. An alternative, under-explored line of work casts the problem of finding suitable key points as *abstractive summarisation*. Research in this direction aims to generate a key point for each given argument, without summarising multiple arguments jointly [@DBLP:conf/argmining/KapadnisPPMN21]. As such, this approach rephrases existing arguments rather than summarising them.

One possible reason for key point generation being under-explored is the lack of reliable automated evaluation methods for generated summaries. Established evaluation metrics such as ROUGE [@lin2004rouge] and BLEU [@papineni2002bleu] rely on the $n$-gram overlap between candidate and reference sentences, but are not concerned with the *semantic similarity* of predictions and gold-standard (reference) data. Recent trends treat automated evaluation as different tasks, including unsupervised matching [@DBLP:conf/emnlp/ZhaoPLGME19; @DBLP:conf/iclr/ZhangKWWA20], supervised regression [@DBLP:conf/acl/SellamDP20], ranking [@DBLP:conf/emnlp/ReiSFL20], and text generation [@DBLP:conf/nips/YuanNL21]. While these approaches model the semantic similarity between prediction and reference, they are limited to per-sentence evaluation. However, this is likely insufficient to evaluate the quality of multiple generated key point summaries as a whole. For instance, the two key points *"Government regulation of social media contradicts basic rights"* and *"It would be a coercion to freedom of opinion"* together contain essentially the same information as the reference *"Social media regulation harms freedom of speech and other democratic rights"*, but individually contain different pieces of information.

<figure id="overview" data-latex-placement="t!">
<embed src="ExampleofKPA.drawio.pdf" />
<figcaption>Visual depiction of our proposed framework. Colours illustrate the correspondences between arguments and key points. Nodes in <span style="background-color: orange">orange</span> represent many-to-many matches, i.e., key points that are shared between both clusters. The input for key point generation (KPG) is composed of a single cluster from Key point Modelling (KPM) with its corresponding stance and topic. Key point importance is measured by the size of the clusters. For example, <span class="smallcaps">Key Point: School uniforms are expensive</span> (<span style="background-color: yellow">yellow</span>) has an importance of 5 (including the <span style="background-color: orange">argument</span> that belongs to both clusters).</figcaption>
</figure>

In this work, we propose a novel framework for generative key point analysis that reduces the reliance on large, high-quality annotated datasets. Compared to currently established frameworks [@DBLP:conf/acl/Bar-HaimEFKLS20; @DBLP:conf/emnlp/Bar-HaimKEFLS20], we propose a novel two-step abstractive summarisation framework. Our approach first clusters semantically similar arguments using a neural topic modelling approach with an iterative clustering procedure. It then leverages a pre-trained language model to generate a set of concise key points. Our approach establishes new state-of-the-art results on an existing KPA benchmark without additional annotated data. The results of our evaluation suggest that ROUGE scores that assess generated key points against gold-standard ones do not necessarily correlate with how well the key points represent the whole corpus. The novel *set-based* evaluation metric that we propose aims to address this.

Overall, the main contributions of this work are as follows: We propose a novel framework for key point analysis, depicted in Figure [1](#overview){reference-type="ref" reference="overview"}, which significantly outperforms the state of the art, even when optimised on a limited number of manually annotated arguments and key points. The framework improves upon an existing neural topic modelling approach with a semantic similarity-based procedure. Compared to previous work, it allows for better handling of outliers, which helps to extract topic representations accurately. Furthermore, we propose a toolkit for automated summary evaluation taking into account semantic similarity. While previous approaches concentrated on sentence-level comparisons, we focus on corpus-level evaluation.

# Method

In this section, we describe our framework in detail. As can be seen from Figure [1](#overview){reference-type="ref" reference="overview"}, for each debate topic, such as *"Should we abandon the use of school uniforms?"*, we take as input a corpus of relevant arguments mined from online discussion boards and grouped by their stance towards the topic (i.e. "pro" or "con"). As part of KPM, these arguments are clustered using a neural topic modelling approach to group them by their common theme. The clusters then serve as input to the KPG model for summarisation, which is optimised to generate a key point for each argument cluster. During the training of our model for KPM, we employ data augmentation.

In previous work, researchers made the simplifying assumption that each argument can be mapped to a single key point [@DBLP:conf/argmining/AlshomaryGSHSCP21; @DBLP:conf/argmining/KapadnisPPMN21]. As a consequence, finding this mapping was modelled as a classification task. In practice, however, a single argument may be related to multiple key points. For instance, the argument "*School uniforms stifle freedom of expression; they can be costly and make circumstances difficult for those on a budget.*" expresses both the key points "*School uniform is harming the student's self expression.*" and "*School uniforms are expensive.*". Inspired by this observation, we approach KPM as *clustering*, grouping together similar arguments. This naturally allows us to map arguments to multiple key points. Unlike key point matching with a classifier, this step can be performed without any labelled data, since clustering is an unsupervised technique. If training data in the form of argument-key point mappings is available, it is desirable to incorporate this information, as recent work shows that supervision can improve clustering performance [@DBLP:conf/ictai/EickZZ04]. To that end, we use `BERTopic` as our clustering model [@DBLP:journals/corr/abs-2203-05794], which facilitates the clustering of sentences based on their contextualised embeddings obtained from a pre-trained language model [@DBLP:conf/emnlp/ReimersG19], as well as fine-tuning them further for the clustering task. We convert the key points into numeric labels for training; arguments that do not match any key point are dropped.

A common challenge of clustering algorithms is the difficulty of clustering data in high-dimensional space. Although several methods to overcome the curse of dimensionality were proposed recently [@DBLP:journals/tkdd/PandoveGR18], the most straightforward way is to reduce the dimensionality of embeddings [@DBLP:conf/grapp/MolchanovL18]. We achieve this by applying UMAP on the raw embeddings [@DBLP:journals/corr/abs-1802-03426] to reduce their dimension while preserving the local and global structure of embeddings. HDBSCAN [@DBLP:journals/jossw/McInnesHA17] is then used to cluster the reduced embeddings.
The output of this step is a set of clusters and the probability distribution of each argument belonging to each cluster. Based on this, we discretise the probability distribution, i.e. represent each argument-cluster pair as a value, which allows us to map arguments to multiple clusters; the formulae and details can be seen in Appendix B.2. As shown in Figure [1](#overview){reference-type="ref" reference="overview"}, these clustered arguments serve as input for the Key Point Generation model.
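As a toy illustration of this discretisation (the paper's exact formulae are in Appendix B.2; the probability matrix and the threshold of 0.3 below are made-up values), a simple thresholding of the soft assignments yields a multi-label argument-cluster mapping:

```python
import numpy as np

# probs[i, j]: probability that argument i belongs to cluster j (toy values)
probs = np.array([
    [0.70, 0.25, 0.05],
    [0.10, 0.45, 0.45],   # this argument is related to two clusters
    [0.02, 0.03, 0.95],
])
threshold = 0.3                      # assumed value for illustration
assignment = probs >= threshold      # boolean argument-cluster matrix

# a row may contain several True entries, i.e. one argument can be
# mapped to multiple clusters (and hence multiple key points)
print(assignment.sum(axis=1))        # → [1 2 1]
```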
:::: algorithm

**Input**: Clusters $C$; Unclassified Arguments $Arg$ **Parameter**: Threshold $\lambda$\
**Output**: Algorithm Result $IC$

::: algorithmic
$IC \leftarrow C$, $\phi \leftarrow 0$, $l \leftarrow \text{len}(Arg)$, $\omega \leftarrow \text{len}(C)$
**for** $i = 1, \dots, l$ **do**
  $\beta \leftarrow$ compute anchor of $IC$
  $\phi \leftarrow$ compute similarity($a_{i}$, $\beta$)
  **if** $\phi > \lambda$ **then** $IC_{j} \leftarrow IC_{j} + a_{i}$ for the most similar cluster $j$
  **else** $IC_{\omega+1} \leftarrow a_{i}$, $C_{\omega+1} \leftarrow a_{i}$
  **update** $IC$
**end for**

**return** $IC$
:::
::::
The output of KPM includes a set of arguments that are unmatched, i.e., not assigned to any cluster, represented as a cluster with the label "-1", because HDBSCAN is a soft clustering approach that does not force every point to join a cluster [@DBLP:journals/jossw/McInnesHA17]. In order to increase the "representativeness" of the generated KPs, it is desirable to maximise the number of arguments in each cluster. To this end, we propose an iterative clustering algorithm (formally described in Algorithm [\[alg:algorithm-IC\]](#alg:algorithm-IC){reference-type="ref" reference="alg:algorithm-IC"}) that assigns these unmatched arguments according to their semantic similarity to cluster centroids. We compute the semantic similarity between each unclassified argument and the cluster centre as the dot product between the argument embedding and the average of the cluster's embeddings.
To determine the cluster centres, we employ two different techniques: the first calculates the similarity of a candidate to each sample in the cluster and takes the average, while the second takes the centroid of each cluster as the *anchor* [@wang2021more]. As a filtering step, each unmatched argument is compared to the anchor. We only assign the argument to the cluster if the similarity is higher than a hyper-parameter $\lambda$; otherwise, we create a new cluster. The clusters are updated at each iteration until all arguments have been assigned to a cluster.
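A minimal sketch of this assignment step, assuming cosine similarity, centroid anchors, and an illustrative threshold (the embeddings below are toy values, not real argument embeddings):

```python
import numpy as np

def assign_unmatched(clusters, unmatched, lam=0.6):
    """clusters: list of (n_i, d) arrays; unmatched: (m, d) embeddings."""
    clusters = [c.copy() for c in clusters]
    for a in unmatched:
        # anchors are recomputed each iteration as the clusters are updated
        anchors = np.stack([c.mean(axis=0) for c in clusters])
        sims = anchors @ a / (np.linalg.norm(anchors, axis=1)
                              * np.linalg.norm(a) + 1e-12)
        best = int(sims.argmax())
        if sims[best] > lam:
            clusters[best] = np.vstack([clusters[best], a])  # join cluster
        else:
            clusters.append(a[None, :])                      # open new cluster
    return clusters

c0 = np.array([[1.0, 0.0], [0.9, 0.1]])
c1 = np.array([[0.0, 1.0]])
out = assign_unmatched([c0, c1], np.array([[0.95, 0.05], [-1.0, 0.0]]))
print(len(out))  # the second point is dissimilar to both anchors → new cluster
```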
We model KPG as a supervised text generation problem. The input to our model is as follows: {Stance} {Topic} {List of Arguments in Cluster}[^2], where the order of arguments in the list is determined by `TextRank` [@mihalcea2004textrank]. We train the model by minimising the cross-entropy loss between generated and reference key points. The reference key points are drawn from a KPM dataset, together with their matched arguments, which serve as the input to the model.
During inference, we use the list of arguments provided by KPM as input. The generated KPs are ranked in order of relevance using `TextRank` [@mihalcea2004textrank]. Duplicate KPs with a cosine similarity above $0.95$ are merged, and the final list of KPs is ranked by the size of their clusters (for example, the yellow key point with six arguments is ranked higher than the pink key point with four arguments in Figure [1](#overview){reference-type="ref" reference="overview"}). For merged KPs, we take the sum of the respective cluster sizes.
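The merging and ranking step can be sketched as follows (the KP embeddings and cluster sizes are toy values; only the 0.95 similarity threshold comes from the text):

```python
import numpy as np

def dedup_and_rank(embs, sizes, thresh=0.95):
    """Merge KPs whose cosine similarity exceeds thresh; rank by summed size."""
    embs = embs / np.linalg.norm(embs, axis=1, keepdims=True)
    sim = embs @ embs.T
    kept, merged_sizes = [], []
    for i in range(len(embs)):
        for j, k in enumerate(kept):
            if sim[i, k] > thresh:          # duplicate of an earlier KP
                merged_sizes[j] += sizes[i]  # merged KPs sum their cluster sizes
                break
        else:
            kept.append(i)
            merged_sizes.append(sizes[i])
    order = np.argsort(merged_sizes)[::-1]   # larger clusters rank higher
    return [kept[o] for o in order], [merged_sizes[o] for o in order]

embs = np.array([[1.0, 0.0], [0.999, 0.02], [0.0, 1.0]])  # KP 1 duplicates KP 0
ids, sz = dedup_and_rank(embs, [4, 3, 6])
print(ids, sz)  # → [0, 2] [7, 6]
```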
Many problems lack annotated data to fully exploit supervised learning approaches. For example, the popular KPA dataset **ArgKP-2021** [@DBLP:conf/acl/Bar-HaimEFKLS20] features an average of 150 arguments per topic, mapped to 5-8 KPs. We rely on data augmentation to obtain more KPM training samples. Specifically, we use DINO [@DBLP:conf/emnlp/SchickS21a], a data augmentation framework that leverages the generative abilities of pre-trained language models (PLMs) to generate task-specific data from prompts. We customise the DINO prompt to include a task description (i.e., "*Write two claims that mean the same thing*") so that the model generates a paraphrased argument. We then use BERTScore [@DBLP:conf/iclr/ZhangKWWA20] and BLEURT [@DBLP:conf/acl/SellamDP20] to assess the quality of each generated sample against the corresponding reference, removing the 25% lowest-scoring generated arguments.
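The filtering step amounts to dropping everything below the first quartile of the quality scores; a sketch with made-up scores standing in for BERTScore/BLEURT values:

```python
import numpy as np

# toy quality scores of generated paraphrases against their references
scores = np.array([0.91, 0.42, 0.78, 0.66, 0.85, 0.30, 0.73, 0.59])
cutoff = np.quantile(scores, 0.25)   # 25th percentile of the scores
kept = scores[scores > cutoff]       # drop the lowest-scoring quarter
print(len(kept))                     # 6 of 8 samples survive the filter
```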
Other tasks with set-valued predictions, such as information retrieval, are evaluated by means of precision and recall, where a set of predictions is compared against a set of references. Since the final output of KPG and the reference KPs are both sets, it is desirable to follow a similar evaluation method. However, traditional precision and recall are not sufficient, as they are based on exact equivalence comparisons, whereas a prediction and a reference may differ in wording while being semantically similar. Instead, we rely on *semantic similarity measures* that assign continuous similarity scores rather than binary equivalence judgements to identify the best match between generated and reference KPs---we call these metrics *Soft-Precision* ($sP$) and *Soft-Recall* ($sR$). More specifically, for $sP$, we find the reference KP with the highest similarity score for each generated KP, and vice-versa for $sR$. We further define *Soft-F1* ($sF1$) as the harmonic mean of $sP$ and $sR$.
The final $sP$ and $sR$ scores are the average of these best matches. Formally, we compute $sP$ (and $sR$ analogously) as follows:

$$\begin{equation}
sP = \frac{1}{n} \times \sum_{ \alpha_i\in\mathcal{A}} \max_{\beta_j\in\mathcal{B}} f(\alpha_{i}, \beta_{j})
\end{equation}$$ $$\begin{equation}
sR = \frac{1}{m} \times \sum_{ \beta_i\in\mathcal{B}} \max_{\alpha_j\in\mathcal{A}} f(\alpha_{j}, \beta_{i})
\end{equation}$$ where $f$ computes the similarity between two individual key points, $\mathcal{A}$ and $\mathcal{B}$ are the sets of candidates and references, and $n=|\mathcal{A}|$ and $m=|\mathcal{B}|$, respectively. For each candidate $\alpha_i$, the reference $\beta_j$ with the highest similarity score is selected as its match, and vice-versa for $sR$.
We choose state-of-the-art semantic similarity metrics such as BLEURT [@DBLP:conf/acl/SellamDP20] and BARTScore [@DBLP:conf/nips/YuanNL21] as the similarity function $f$.
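Given a candidate-reference similarity matrix produced by such a function $f$, the soft metrics can be computed as follows (the matrix entries are toy values standing in for BLEURT/BARTScore outputs):

```python
import numpy as np

def soft_scores(sim):
    """sim[i, j] = f(candidate_i, reference_j)."""
    sP = sim.max(axis=1).mean()    # best reference for each candidate
    sR = sim.max(axis=0).mean()    # best candidate for each reference
    sF1 = 2 * sP * sR / (sP + sR)  # harmonic mean
    return sP, sR, sF1

sim = np.array([
    [0.9, 0.2],   # candidate 0 matches reference 0 well
    [0.3, 0.8],   # candidate 1 matches reference 1
    [0.4, 0.5],   # a redundant candidate lowers sP but not sR
])
print(soft_scores(sim))
```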
|
2307.04333/main_diagram/main_diagram.drawio
ADDED
The diff for this file is too large to render.
2307.04333/paper_text/intro_method.md
ADDED
@@ -0,0 +1,171 @@
# Introduction

In recent years, there have been breakthrough performance improvements with deep neural networks (DNNs), particularly in the realms of image classification, object detection, and semantic segmentation, as evidenced by the works of [\[31,](#page-11-0) [59,](#page-13-0) [53,](#page-12-0) [19,](#page-10-0) [13,](#page-10-1) [51\]](#page-12-1). However, DNNs have been shown to be easily deceived into producing incorrect predictions by simply adding human-imperceptible perturbations to inputs. These perturbations are called adversarial attacks [\[14,](#page-10-2) [36,](#page-11-1) [38,](#page-11-2) [7\]](#page-10-3), resulting in safety concerns. Therefore, studies on mitigating the impact of adversarial attacks on DNNs, referred to as *adversarial defenses*, are significant for AI safety.

Various strategies have been proposed in order to enhance the robustness of DNN classifiers against adversarial attacks. One of the most effective forms of adversarial defense is *adversarial training* [\[36,](#page-11-1) [71,](#page-13-1) [16,](#page-10-4) [32,](#page-11-3) [44\]](#page-12-2), which involves training a classifier with both clean data and adversarial samples. However, adversarial training has its limitations, as it necessitates prior knowledge of the specific attack method employed to generate adversarial examples, thus rendering it inadequate in handling previously unseen types of adversarial attacks or corruptions.

On the contrary, *adversarial purification* [\[48,](#page-12-3) [68,](#page-13-2) [52,](#page-12-4) [28,](#page-11-4) [69,](#page-13-3) [40,](#page-12-5) [21\]](#page-10-5) is another form of promising defense that leverages a standalone purification model to eliminate adversarial signals before conducting downstream classification tasks. The primary benefit of adversarial purification is that it obviates the need to retrain the classifier, enabling adaptive defense against adversarial attacks at *test time*. Additionally, it showcases significant generalization ability in purifying a wide range of adversarial attacks, without affecting pre-existing natural classifiers. The integration of adversarial purification models into AI systems necessitates only minor adjustments, making it a viable approach to enhancing the robustness of DNN-based classifiers.

<sup>∗</sup>Academy for Advanced Interdisciplinary Studies; Peking University; zhangboya@pku.edu.cn;

<sup>†</sup> School of Mathematical Sciences; Peking University; luoweijian@stu.pku.edu.cn;

School of Mathematical Sciences; Peking University; zhzhang@math.pku.edu.cn;

<span id="page-1-0"></span>

Figure 1: Illustration of our proposed adversarial defense framework.
Diffusion models [\[22,](#page-11-5) [54,](#page-12-6) [58\]](#page-13-4), also known as score-based generative models, have demonstrated state-of-the-art performance in various applications, including image and audio generation [\[9,](#page-10-6) [30\]](#page-11-6), molecule design [\[23\]](#page-11-7), and text-to-image generation [\[39\]](#page-11-8). Apart from their impressive generation ability, diffusion models have also exhibited the potential to improve the robustness of neural networks against adversarial attacks. Specifically, they can function as adaptive test-time purification models [\[40,](#page-12-5) [16,](#page-10-4) [4,](#page-10-7) [63,](#page-13-5) [66\]](#page-13-6).

Diffusion-based purification methods have shown great success in improving adversarial robustness, but still have their own limitations. For instance, they require careful selection of appropriate hyper-parameters such as the forward diffusion timestep [\[40\]](#page-12-5) and the guidance scale [\[63\]](#page-13-5), which can be challenging to tune in practice. In addition, diffusion-based purification relies on simulating the underlying stochastic differential equation (SDE). The reverse purification process requires iteratively denoising samples step by step, leading to heavy computational costs [\[40,](#page-12-5) [63\]](#page-13-5).

In the hope of circumventing the aforementioned issues, we introduce a novel adversarial defense scheme that we call *ScoreOpt*. Our key intuition is to derive the posterior distribution of clean samples given a specific adversarial example. Adversarial samples can then be optimized towards the points that maximize the posterior distribution with gradient-based algorithms at test time. The prior knowledge is provided by pre-trained diffusion models. Our defense is independent of base classifiers and applicable across different types of adversarial attacks, making it flexible across various application domains. An illustration of our method is presented in Figure [1.](#page-1-0)

Our main contributions can be summarized in three aspects as follows:

- We propose a novel adversarial defense scheme that optimizes adversarial samples to reach the points with the local maximum likelihood of the posterior distribution that is defined by pre-trained score-based priors.
- We explore effective loss functions for the optimization process, introduce a novel score regularizer and propose corresponding practical algorithms.
- We conduct extensive experiments to demonstrate that our method not only achieves state-of-the-art performance on various benchmarks but also improves the inference speed.
# Method

Score-based diffusion models [\[22,](#page-11-5) [58\]](#page-13-4) learn how to transform complex data distributions into relatively simple ones such as the Gaussian distribution, and vice versa. Diffusion models consist of two processes: a forward process that adds Gaussian noise to an input $\mathbf{x}$ from $\mathbf{x}_0$ to $\mathbf{x}_T$, and a reverse generative process that gradually removes random noise from a sample until it is fully denoised. For continuous-time diffusion models (see Song et al. [\[58\]](#page-13-4) for more details and further discussions), we can use two SDEs to describe the above data-transformation processes, respectively. The forward-time SDE is given by:

$$d\mathbf{x}_t = \mathbf{f}(\mathbf{x}_t, t)dt + \mathbf{g}(t)d\mathbf{w}_t, \ t \in [0, T];$$

where $\mathbf{f}: \mathbb{R}^D \to \mathbb{R}^D$ and $\mathbf{g}: \mathbb{R} \to \mathbb{R}$ are the drift and diffusion coefficients, respectively, and $\mathbf{w}_t$ denotes the standard Wiener process. New samples are generated by solving the reverse-time SDE:

$$d\mathbf{x}_{t} = [\mathbf{f}(\mathbf{x}_{t}, t) - \mathbf{g}^{2}(t)\nabla_{\mathbf{x}_{t}}\log p_{t}(\mathbf{x}_{t})]dt + \mathbf{g}(t)d\bar{\mathbf{w}}_{t}, t \in [T, 0];$$

where $\bar{\mathbf{w}}_t$ denotes the reverse-time Wiener process. The score function term $\nabla_{\mathbf{x}_t}\log p_{t}(\mathbf{x}_t)$ is usually parameterized by a time-dependent neural network $s_{\boldsymbol{\theta}}(\mathbf{x}_t; t)$ and trained by score-matching related techniques [\[25,](#page-11-9) [58,](#page-13-4) [61,](#page-13-7) [56,](#page-12-7) [41\]](#page-12-8).
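For intuition, the forward and reverse SDEs can be checked numerically on a one-dimensional Gaussian toy distribution, where the perturbed marginal $p_t$ and hence the exact score are available in closed form (all values below, including the VE-type schedule $\sigma(t)=t$, are illustrative assumptions, not the paper's setup):

```python
import numpy as np

# Toy numeric check (assumed setup): VE-type SDE with f = 0 and sigma(t) = t,
# so g(t)^2 = d[sigma^2(t)]/dt = 2t. For 1-D data x ~ N(mu, s2) the perturbed
# marginal is p_t = N(mu, s2 + t^2), giving a closed-form score that stands in
# for the score network s_theta.
rng = np.random.default_rng(0)
mu, s2, T, steps, n = 2.0, 0.25, 3.0, 300, 50_000

def score(x, t):
    """Exact score grad_x log p_t(x) of the perturbed marginal."""
    return -(x - mu) / (s2 + t * t)

# start the reverse process from the terminal marginal N(mu, s2 + T^2)
x = rng.normal(mu, np.sqrt(s2 + T * T), size=n)
dt = T / steps
for k in range(steps):
    t = T - k * dt
    g2 = 2.0 * t
    # Euler-Maruyama step of the reverse-time SDE, run backwards in t
    x = x + g2 * score(x, t) * dt + np.sqrt(g2 * dt) * rng.standard_normal(n)
print(x.mean(), x.var())  # approaches the data moments (mu, s2)
```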
**Diffusion Models as Prior** Recent studies have investigated the incorporation of an additional constraint to condition diffusion models for high-dimensional data sampling. DDPM-PnP [\[17\]](#page-10-8) proposed transforming diffusion models into plug-and-play priors, allowing for parameterized samples, and utilizing diffusion models as critics to optimize over image space. Building on the formulation in Graikos et al. [\[17\]](#page-10-8), DreamFusion [\[43\]](#page-12-9) further introduced a more stable training process. The main idea is to leverage a pre-trained 2D image diffusion model as a prior for optimizing a parameterized 3D representation model. The resulting approach is called score distillation sampling (SDS), which bypasses the computationally expensive backpropagation through the diffusion model itself by simply excluding the score-network Jacobian term from the diffusion model training loss. SDS uses the approximate gradient to train a parametric NeRF generator efficiently. Other works [\[35,](#page-11-10) [37,](#page-11-11) [62\]](#page-13-8) followed similar approaches to extend SDS to the latent space of latent diffusion models [\[46\]](#page-12-10).

Diffusion models have also gained significant attention in the field of adversarial purification recently. They have been employed not only for empirical defenses against adversarial attacks [\[69,](#page-13-3) [40,](#page-12-5) [63,](#page-13-5) [66\]](#page-13-6), but also for enhancing certified robustness [\[5,](#page-10-9) [67\]](#page-13-9). The unified procedure for applying diffusion models in adversarial purification involves two processes. The forward process adds random Gaussian noise to the adversarial example within a small diffusion timestep $t^*$, while the reverse process recovers clean images from the diffused samples by solving the reverse-time stochastic differential equation. Implementing the aforementioned forward-and-denoise procedure, imperceptible adversarial signals can be effectively eliminated. Under certain conditions, the purified sample restores the original clean sample with a high probability in theory [\[40,](#page-12-5) [67\]](#page-13-9).

However, diffusion-based purification methods suffer from two main drawbacks. Firstly, their robustness performance heavily relies on the choice of the forward diffusion timestep, denoted as $t^*$. Selecting an appropriate $t^*$ is crucial because excessive noise can lead to the removal of semantic information from the original example, while insufficient noise may fail to eliminate the adversarial perturbation effectively. Secondly, the reverse process of these methods involves sequentially applying the denoising operation from timestep $t$ to the previous timestep $t-1$, which requires multiple deep network evaluations. In contrast to previous diffusion-based purification methods, our optimization framework departs from the sequential step-by-step denoising procedure.

In this section, we present our proposed adversarial defense scheme in detail. Our method is motivated by solving an optimization problem of adversarial samples to remove the applied attacks. Therefore, we start by formally formulating the optimization objective, followed by exploring effective loss functions and introducing two practical algorithms.

Our main idea is to formulate the adversarial defense as an optimization problem given the perturbed sample and the pre-trained prior, in which the solution to the optimization problem is the recovered original sample that we want. We regard the adversarial example $\mathbf{x}_a$ as a disturbed measurement of the original clean example $\mathbf{x}$, and we assume that the clean example is generated by a prior probability distribution $\mathbf{x} \sim p(\mathbf{x})$. The posterior distribution of the original sample given the adversarial example is $p(\mathbf{x}|\mathbf{x}_a) \propto p(\mathbf{x})\, p(\mathbf{x}_a|\mathbf{x})$. The maximum a posteriori estimator that maximizes the above conditional distribution is given by:

$$\hat{\mathbf{x}}^* = \underset{\mathbf{x}}{\operatorname{arg\,min}} - \log p(\mathbf{x} \mid \mathbf{x}_a). \tag{1}$$

In this work, we use the data distribution under pre-trained diffusion models as the prior $p_{\boldsymbol{\theta}}(\mathbf{x})$.
Following Graikos et al. [17], we introduce a variational posterior $q(\mathbf{x})$ to approximate the true posterior distribution $p(\mathbf{x}|\mathbf{x}_a)$ in the original optimization objective. The variational upper bound on the negative log-likelihood $-\log p(\mathbf{x}_a)$ is:

<span id="page-3-1"></span>

$$-\log p(\mathbf{x}_a) \le \mathbb{E}_{q(\mathbf{x})} \left[ -\log p\left(\mathbf{x}_a | \mathbf{x}\right) \right] + \mathrm{KL}\left(q(\mathbf{x}) \| p_{\boldsymbol{\theta}}\left(\mathbf{x}\right) \right). \tag{2}$$

As shown in Song et al. [57] and Vahdat et al. [60], we can further obtain an upper bound on the second Kullback-Leibler (KL) divergence term between the target variational posterior distribution $q(\mathbf{x})$ and the prior distribution defined by the pre-trained diffusion models $p_{\theta}(\mathbf{x})$:

$$KL\left(q(\mathbf{x})\|p_{\boldsymbol{\theta}}\left(\mathbf{x}\right)\right) \leq \mathbb{E}_{q(\mathbf{x})}\mathbb{E}_{t \sim \mathcal{U}(0,1), \epsilon \sim \mathcal{N}(\mathbf{0}, \mathbf{I})}\left[w(t)\|s_{\boldsymbol{\theta}}\left(\mathbf{x}_{t}; t\right) - \nabla_{\mathbf{x}_{t}}\log q(\mathbf{x}_{t}|\mathbf{x})\|_{2}^{2}\right], \quad (3)$$

where $w(t) = g(t)^2/2$ is a time-dependent weighting coefficient, $\mathbf{x}_t = \mathbf{x} + \sigma_t \epsilon$ denotes the forward diffusion process, $\sigma_t$ is the pre-designed noise schedule, and $s_{\theta}$ represents the pre-trained diffusion model.

The simplest approximation to the posterior is a point estimate, i.e., the introduced variational posterior $q(\mathbf{x})$ satisfies the Dirac delta distribution $q(\mathbf{x}) = \delta(\mathbf{x} - \mathbf{x}_{\mu})$. Thus, the above upper bound can be rewritten as:

$$\mathbb{E}_{t \sim \mathcal{U}(0,1), \epsilon \sim \mathcal{N}(\mathbf{0}, \mathbf{I})} \left[ w(t) \left\| s_{\boldsymbol{\theta}} \left( \mathbf{x}_{t}; t \right) - \nabla_{\mathbf{x}_{t}} \log q(\mathbf{x}_{t} | \mathbf{x}_{\mu}) \right\|_{2}^{2} \right], \tag{4}$$

We simply use the notation $\mathbf{x}$ instead of $\mathbf{x}_{\mu}$ throughout for convenience. The weighted denoising score matching objective in (4) is also equivalent to the diffusion model training loss [43].
According to Tweedie's formula, $\mu_z = z + \Sigma_z \nabla_z \log p(z)$, where $\Sigma_z$ denotes the covariance matrix, we can obtain $\mathbf{x} = \mathbf{x}_t + \sigma_t^2 \nabla_{\mathbf{x}_t} \log q(\mathbf{x}_t)$. Defining $D_{\boldsymbol{\theta}}\left(\mathbf{x}_t;t\right) \coloneqq \mathbf{x}_t + \sigma_t^2 s_{\boldsymbol{\theta}}\left(\mathbf{x}_t;t\right)$, the KL term of our optimization objective converts to:

<span id="page-3-2"></span><span id="page-3-0"></span>

$$\mathbb{E}_{t \sim \mathcal{U}(0,1), \epsilon \sim \mathcal{N}(\mathbf{0}, \mathbf{I})} \left[ \tilde{w}(t) \| D_{\boldsymbol{\theta}} \left( \mathbf{x} + \sigma_t \epsilon; t \right) - \mathbf{x} \|_2^2 \right], \tag{5}$$

where $\tilde{w}(t) = w(t)/\sigma_t^2$. Note that $D_{\theta}$ can be used to estimate the denoised image directly, which is called a *one-shot* denoiser [33, 5].

In our work, we adopt the approach of setting $\tilde{w}(t) = 1$ for convenience and performance, as in previous studies [22, 17]. Since we have no information about the conditional distribution $p(\mathbf{x}_a|\mathbf{x})$, we need a heuristic formulation for the first reconstruction term in (2). The simplest method is to initialize $\mathbf{x}$ with the adversarial sample $\mathbf{x}_a$ and use the loss in (5) to optimize over $\mathbf{x}$ directly, eliminating the constraint term. The rationale behind this simplification is that leading $\mathbf{x}_a$ towards the mode of $p_{\theta}(\mathbf{x})$ with the same ground-truth class label makes it easier for the natural classifier to produce a correct prediction. In this way, the loss function of our optimization process reduces to:

<span id="page-3-4"></span>

$$\mathcal{L}_{\text{Diff}}(\mathbf{x}, \boldsymbol{\theta}) = \mathbb{E}_{t \sim \mathcal{U}(0,1), \epsilon \sim \mathcal{N}(\mathbf{0}, \mathbf{I})} \left[ \left\| D_{\boldsymbol{\theta}} \left( \mathbf{x} + \sigma_t \epsilon; t \right) - \mathbf{x} \right\|_2^2 \right]. \tag{6}$$
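A small Monte-Carlo sketch of the Diff loss on a one-dimensional Gaussian toy prior, for which the ideal one-shot denoiser $D$ has a closed form (the prior parameters and the noise-level range are assumed toy values):

```python
import numpy as np

# Monte-Carlo sketch of the Diff loss on a 1-D Gaussian toy prior N(mu, s2)
# (assumed values). For this prior the ideal one-shot denoiser is
# D(x_t; t) = x_t - sigma_t^2 * (x_t - mu) / (s2 + sigma_t^2).
rng = np.random.default_rng(0)
mu, s2 = 0.0, 0.25

def diff_loss(x, n=40_000):
    sigma = rng.uniform(0.2, 1.0, size=n)   # assumed noise-level range
    eps = rng.standard_normal(n)
    xt = x + sigma * eps                    # forward diffusion of x
    denoised = xt - sigma**2 * (xt - mu) / (s2 + sigma**2)
    return np.mean((denoised - x) ** 2)

# The loss is smallest near the prior mode, so minimizing it pulls a
# sample toward high-density regions of the prior.
print(diff_loss(0.0), diff_loss(2.0))
```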
Randomized smoothing techniques typically assume that the adversarial perturbation follows a Gaussian distribution. If the Gaussian assumption holds, we can instantiate the reconstruction term with the mean squared error (MSE) between $\mathbf{x}$ and $\mathbf{x}_a$. The optimization objective converts to the following MSE loss:

<span id="page-3-3"></span>

$$\mathcal{L}_{\text{MSE}}(\mathbf{x}, \mathbf{x}_a, \boldsymbol{\theta}) = \mathbb{E}_{t \sim \mathcal{U}(0,1), \epsilon \sim \mathcal{N}(\mathbf{0}, \mathbf{I})} \left[ \|D_{\boldsymbol{\theta}} \left( \mathbf{x} + \sigma_t \epsilon; t \right) - \mathbf{x} \|_2^2 + \lambda \|\mathbf{x} - \mathbf{x}_a\|_2^2 \right], \tag{7}$$

where $\lambda$ is a weighting hyper-parameter to balance the two loss terms.

However, both Diff and MSE optimizations have their own drawbacks. Regarding the Diff loss, the optimization process solely focuses on the score prior $p_{\theta}$, and the update direction guided by the pre-trained diffusion models leads the samples towards the modes of the prior, gradually losing the semantic information of the original samples. As illustrated in Figure 2a, with a sufficient number of optimization steps, both standard and robust accuracies decline significantly. On the other hand,
<span id="page-4-0"></span>

- (a) Robustness performance via Diff optimization using different numbers of optimization steps.
- (b) Comparison between MSE and SR with different optimization iterations.
- (c) Comparison between MSE and SR with different regularizer hyperparameters.

Figure 2: Robustness performance comparison for Diff, MSE, and SR optimizations.
the MSE loss maintains a high standard accuracy at the cost of a large drop in robust accuracy, as depicted in Figure [2b.](#page-4-0) Furthermore, Figure [2c](#page-4-0) demonstrates that the performance of MSE is highly dependent on the weighting hyper-parameter, which controls the intensity of the constraint term.

To address the above-mentioned issues, we propose to introduce a hyperparameter-free score regularization (SR) loss:

<span id="page-4-1"></span>

$$\mathcal{L}_{SR}(\mathbf{x}, \mathbf{x}_{a}, \boldsymbol{\theta}) = \mathbb{E}_{t \sim \mathcal{U}(0,1), \epsilon_{1}, \epsilon_{2} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})} \left[ \|D_{\boldsymbol{\theta}}(\mathbf{x} + \sigma_{t} \epsilon_{1}; t) - \mathbf{x}\|_{2}^{2} + \|D_{\boldsymbol{\theta}}(\mathbf{x} + \sigma_{t} \epsilon_{1}; t) - D_{\boldsymbol{\theta}}(\mathbf{x}_{a} + \sigma_{t} \epsilon_{2}; t)\|_{2}^{2} \right]. \tag{8}$$

We use the introduced constraint to minimize the pixel-level distance between the denoised versions of the current sample $\mathbf{x}$ and the initial adversarial sample $\mathbf{x}_a$. The additional regularization term can be expanded as:

$$\|D_{\boldsymbol{\theta}}\left(\mathbf{x} + \sigma_{t}\epsilon_{1}; t\right) - D_{\boldsymbol{\theta}}\left(\mathbf{x}_{a} + \sigma_{t}\epsilon_{2}; t\right)\|_{2}^{2} \approx \|\mathbf{x} - \mathbf{x}_{a} + \sigma_{t}^{2}\left(s_{\boldsymbol{\theta}}\left(\mathbf{x}_{t}; t\right) - s_{\boldsymbol{\theta}}\left(\mathbf{x}_{a, t}; t\right)\right)\|_{2}^{2}. \tag{9}$$

The SR loss encourages consistency with the original sample in terms of not only the pixel values but also the score function estimations at the given noise level $\sigma_t$. Since the two parts of the SR loss correspond to the same noise magnitude $t$ of the score networks, there is no need to introduce an additional hyperparameter $\lambda$ as in [(7)](#page-3-3).
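The behaviour of the SR loss can likewise be sketched on a one-dimensional Gaussian toy prior with a closed-form one-shot denoiser (all values assumed); without any weighting hyper-parameter, its minimizer sits between the prior mode and the adversarial sample:

```python
import numpy as np

# Monte-Carlo sketch of the SR loss on a 1-D Gaussian toy prior N(0, s2)
# (assumed values). D(z) = z * s2 / (s2 + sigma^2) is the closed-form
# one-shot denoiser for this prior at a fixed noise level sigma.
rng = np.random.default_rng(0)
s2, x_a, sigma = 0.25, 1.0, 0.5

def D(z):
    return z * s2 / (s2 + sigma**2)

def sr_loss(x, n=80_000):
    e1 = rng.standard_normal(n)
    e2 = rng.standard_normal(n)
    dx = D(x + sigma * e1)          # denoised current sample
    da = D(x_a + sigma * e2)        # denoised adversarial sample
    return np.mean((dx - x) ** 2 + (dx - da) ** 2)

# The minimizer balances prior attraction against consistency with x_a,
# with no lambda to tune.
print(sr_loss(0.0), sr_loss(0.5), sr_loss(1.0))
```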
Figure [2b](#page-4-0) and [2c](#page-4-0) demonstrate the effectiveness of the SR loss. As the number of optimization steps increases, both the standard and robust accuracy converge to stable values, with the latter remaining close to optimal. In contrast to the MSE loss, the SR loss is insensitive to the weighting hyperparameter and significantly outperforms MSE, particularly for larger values of $\lambda$.

Algorithm 1: (ScoreOpt-O) Optimizing adversarial sample towards robustness with score-based prior.

Input: Adversarial image $\mathbf{x}_a$, pre-trained score-based diffusion model $s_{\theta}$, noise level range $[t_{min}, t_{max}]$, optimization iteration steps $M$, learning rate $\eta$.

$$\begin{aligned}
&\mathbf{x}_0 = \mathbf{x}_a; \\
&\textbf{for } i \in 0, ..., M-1 \textbf{ do} \\
&\quad \text{Sample } t \sim \mathcal{U}(t_{min}, t_{max}),\ \epsilon \sim \mathcal{N}(\mathbf{0}, \mathbf{I}); \\
&\quad \text{Calculate the gradient } \mathbf{grad} \text{ of the chosen loss (6), (7) or (8) with respect to } \mathbf{x}_i; \\
&\quad \mathbf{x}_{i+1} = \mathbf{x}_i - \eta \cdot \mathbf{grad}; \\
&\textbf{end}
\end{aligned}$$

<span id="page-4-2"></span>return *Purified image* $\mathbf{x}_M$.
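A toy version of this loop on a one-dimensional Gaussian prior with a closed-form score (the learning rate, noise-level range, and the analytic per-sample gradient are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

# Toy sketch of the ScoreOpt-O loop on a 1-D Gaussian prior N(0, s2),
# where the one-shot denoiser D(z) = z * s2 / (s2 + sigma^2) is exact
# and the per-sample Diff-style gradient is available analytically.
rng = np.random.default_rng(0)
s2, x_a, eta, M = 0.25, 1.5, 0.4, 50

x = x_a                               # initialize at the adversarial sample
for _ in range(M):
    sigma = rng.uniform(0.2, 1.0)     # randomly chosen noise level per step
    eps = rng.standard_normal()
    c = sigma**2 / (s2 + sigma**2)
    # residual r = D(x + sigma*eps) - x = (1 - c)*sigma*eps - c*x for mu = 0,
    # so d(r^2)/dx = -2*c*r: one stochastic gradient step on the Diff loss
    r = (1.0 - c) * sigma * eps - c * x
    x = x - eta * (-2.0 * c * r)
print(x)  # drifts from x_a toward the high-density region around 0
```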
**Noise Schedule** The perturbation introduced by adversarial attacks is often subtle and challenging for human perception. As a result, our optimization loop does not necessarily incorporate the complete noise levels employed by the original diffusion models, which generate images from pure random noise. In fact, high noise levels will disrupt local structures and remove semantic information from the input image, leading to a decrease in accuracy. Many previous studies have focused on lower noise levels to conduct image editing tasks. Therefore, we center our pre-designed noise schedule $\sigma_t$ around lower noise levels to preserve the details of the original image as much as possible. Previous diffusion-based purification methods iteratively denoise the noisy image step by step (from $\mathbf{x}_t$ to $\mathbf{x}_{t-1}$), following a predetermined noise schedule. In contrast, our optimization process does not rely on a sequential denoising schedule. Instead, at each iteration, we have the flexibility to randomly select a noise level. This approach allows us to concurrently explore different noise levels during the optimization process.

**Update Rule** Given a noise level $\sigma_t$, we can utilize the aforementioned loss functions to compute the update direction of $\mathbf{x}$. Noting that the objectives correspond to a single randomly chosen noise magnitude, we propose an alternative update rule to further improve inference speed and robustness performance. We can use the loss gradient to optimize over $\mathbf{x}_t$ directly and then obtain the denoised image $\mathbf{x}$ using one-shot denoising. The loss gradients with respect to $\mathbf{x}$ and $\mathbf{x}_t$ are equivalent in our forward process formulation. Experiments in Section 4 demonstrate that our approach significantly improves one-shot denoising with only a few optimization iterations. These two distinct update rules correspond to Algorithm 1 and Algorithm 2, respectively. Both algorithms are straightforward, effective, and easy to implement.
Algorithm 2: (ScoreOpt-N) Optimizing noisy adversarial samples and one-shot denoising.
|
| 142 |
+
|
| 143 |
+
```
Input: Adversarial image \mathbf{x}_a, pre-trained score-based diffusion model s_\theta, noise level range [t_{min}, t_{max}], optimization iteration steps M and N, learning rate \eta.
\mathbf{x}_0 = \mathbf{x}_a;
for i \in 0, ..., M-1 do            // i denotes the i-th iteration of the outer loop.
    Sample t \sim \mathcal{U}(t_{min}, t_{max}), \epsilon_1 \sim \mathcal{N}(\mathbf{0}, \mathbf{I});
    \mathbf{x}_{0,t} = \mathbf{x}_i + \sigma_t \epsilon_1;
    for j \in 0, ..., N-1 do        // j denotes the j-th iteration of the inner loop.
        Sample \epsilon_2 \sim \mathcal{N}(\mathbf{0}, \mathbf{I});
        \mathbf{x}_{a,t} = \mathbf{x}_a + \sigma_t \epsilon_2;
        // Gradient of the Diff loss (6) or SR loss (8) with respect to \mathbf{x}_{j,t}:
        \mathbf{grad} = \nabla_{\mathbf{x}_{j,t}} \left[ \|D_{\theta}(\mathbf{x}_{j,t};t) - \mathbf{x}_i\|_2^2 + \|D_{\theta}(\mathbf{x}_{j,t};t) - D_{\theta}(\mathbf{x}_{a,t};t)\|_2^2 \right];
        \mathbf{x}_{j+1,t} = \mathbf{x}_{j,t} - \eta \cdot \mathbf{grad};
    end
    // One-shot denoising.
    \mathbf{x}_{i+1} = \mathbf{x}_{N,t} + \sigma_t^2 s_{\theta}(\mathbf{x}_{N,t};t);
end
return Purified image \mathbf{x}_M.
```
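To make the control flow of Algorithm 2 concrete, the outer/inner loops can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: a toy linear MMSE denoiser (under a standard-Gaussian prior) stands in for the pre-trained $D_\theta$, so the Diff-loss gradient can be written in closed form instead of via autodiff, and the score $s_\theta$ is derived from the same toy denoiser.

```python
import numpy as np

def denoise(x, sigma):
    # Toy stand-in for the pre-trained denoiser D_theta: the MMSE denoiser
    # under a standard-Gaussian prior on clean images.
    return x / (1.0 + sigma**2)

def score(x, sigma):
    # Score consistent with the toy denoiser: s(x) = (D(x) - x) / sigma^2.
    return (denoise(x, sigma) - x) / sigma**2

def scoreopt_n(x_a, sigma_range=(0.1, 0.5), M=5, N=10, lr=0.1, rng=None):
    """Sketch of Algorithm 2 (ScoreOpt-N): optimize the noisy sample,
    then recover the purified image with one-shot denoising."""
    rng = np.random.default_rng(rng)
    x = x_a.copy()
    for _ in range(M):                                   # outer loop over x_i
        sigma = rng.uniform(*sigma_range)                # random noise level
        x_t = x + sigma * rng.standard_normal(x.shape)   # x_{0,t}
        for _ in range(N):                               # inner loop over x_{j,t}
            # Closed-form gradient of ||D(x_t) - x_i||^2 for the toy
            # linear denoiser above (a real model would use autodiff).
            grad = 2.0 / (1.0 + sigma**2) * (denoise(x_t, sigma) - x)
            x_t = x_t - lr * grad
        # One-shot denoising: x_{i+1} = x_{N,t} + sigma^2 * s(x_{N,t}).
        x = x_t + sigma**2 * score(x_t, sigma)
    return x
```

In real use, `denoise` and `score` would be the pre-trained diffusion model, and the gradient would come from backpropagation through the chosen loss.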
2307.08849/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="Electron" version="20.6.2" type="device">…</mxfile> (compressed draw.io diagram payload omitted)

2307.08849/main_diagram/main_diagram.pdf
ADDED
Binary file (44.7 kB)
2307.08849/paper_text/intro_method.md
ADDED
@@ -0,0 +1,82 @@
# Introduction

Generating graphs from a target distribution is a fundamental problem in many domains such as drug discovery [@li2018learning], material design [@maziarka2020mol], social network analysis [@grover2019graphite], and public health [@yu2020reverse]. Deep generative models have recently led to promising advances in this problem. Different from traditional random graph models [@erdos1960evolution; @albert2002statistical], these methods fit graph data with powerful deep generative models including variational auto-encoders (VAEs) [@simonovsky2018graphvae], generative adversarial networks (GANs) [@maziarka2020mol], normalizing flows [@madhawa2019graphnvp], and energy-based models (EBMs) [@liu2021graphebm]. These models learn to capture complex graph structural patterns and then generate new high-fidelity graphs with desired properties [@zhusurvey; @du2021graphgt].

Recently, the emergence of probabilistic diffusion models has led to interest in diffusion-based graph generation [@jo2022score]. Diffusion models decompose the full complex transformation between noise and real data into many small steps of simple diffusion. Compared with prior deep generative models, diffusion models enjoy both flexibility in modeling architecture and tractability of the model's probability distributions. However, existing diffusion-based graph generative models [@niu2020permutation; @jo2022score; @vignac2022digress] suffer from three key drawbacks: (1) *Generation Efficiency*. The sampling processes are slow, as they require a very long diffusion process to arrive at the stationary noisy distribution, and thus the reverse generation process is also time-consuming. (2) *Incorporating constraints*. They are all one-shot generation models and hence cannot easily incorporate constraints during the one-shot generation process. (3) *Continuous Approximation.* @niu2020permutation [@jo2022score] convert discrete graphs to continuous state spaces by adding real-valued noise to graph adjacency matrices. Such dequantization can distort the distribution of the original discrete graph structures, thus increasing the difficulty of model training.

We propose an autoregressive graph generative model named [GraphArm]{.smallcaps} via *autoregressive diffusion* on graphs. The autoregressive diffusion model (ARDM) [@hoogeboom2022autoregressive] builds upon the recently developed absorbing diffusion [@austin2021structured] for discrete data, where exactly one dimension of the data decays to the absorbing state at each diffusion step. In [GraphArm]{.smallcaps}, we design *node-absorbing autoregressive diffusion* for graphs, which diffuses a graph directly in the discrete graph space instead of in the dequantized adjacency matrix space. The forward pass absorbs one node in each step by masking it along with its connecting edges, which is repeated until all the nodes are absorbed and the graph becomes empty. We further design a *diffusion ordering network* in [GraphArm]{.smallcaps}, which is jointly trained with the reverse generator to learn a data-dependent node ordering for diffusion. Compared with random ordering as in prior absorbing diffusion [@hoogeboom2022autoregressive], the learned diffusion ordering not only provides a better approximation of the true marginal graph likelihood, but also eases the generative model training by leveraging structural regularities. The backward pass in [GraphArm]{.smallcaps} recovers the graph structure by learning to reverse the node-absorbing diffusion process with a denoising network. The reverse generative process is autoregressive, which makes it easier for [GraphArm]{.smallcaps} to handle constraints during generation. However, a key challenge is to learn the distribution of reverse node ordering for optimizing the data likelihood. We show that this difficulty can be circumvented by simply using the exact reverse node ordering and optimizing a simple lower bound of the likelihood, based on the permutation-invariance property of graph generation. The likelihood lower bound allows for jointly training the denoising network and the diffusion ordering network using a reinforcement learning procedure and gradient descent.

The generation speed of [GraphArm]{.smallcaps} is much faster than that of existing graph diffusion models [@jo2022score; @niu2020permutation; @vignac2022digress]. Due to the autoregressive diffusion process in the node space, the number of diffusion steps in [GraphArm]{.smallcaps} equals the number of nodes, which is typically much smaller than the number of sampling steps in [@jo2022score; @niu2020permutation; @vignac2022digress]. Furthermore, at each step of the backward pass, we design the denoising network to predict the node type of the newly generated node and its edges with previously denoised nodes at one time. The edges to be predicted follow a mixture of multinomial distributions to ensure dependencies among each other. This lets [GraphArm]{.smallcaps} offer a more balanced trade-off between flexibility and efficiency.

Our key contributions are as follows: (1) To the best of our knowledge, our work is the first *autoregressive* diffusion-based graph generation model, underpinned by a new node-absorbing diffusion process. Our model represents a generalized form of diffusion processes, a class into which ARDM [@hoogeboom2022autoregressive] and the diffusion Schrödinger bridge [@de2021diffusion] also fall. (2) [GraphArm]{.smallcaps} learns a data-dependent node generation ordering and thus better leverages structural regularities for autoregressive graph diffusion. (3) We validate our method on eight graph generation tasks, on which we show that [GraphArm]{.smallcaps} outperforms existing graph generative models and is efficient in generation speed.
# Method

<figure id="fig:overall" data-latex-placement="t">
<embed src="figure/overview.pdf" style="width:90.0%" />
<figcaption>The autoregressive graph diffusion process. In the forward pass, the nodes are autoregressively decayed into the absorbing states, dictated by an ordering generated by the diffusion ordering network <span class="math inline"><em>q</em><sub><em>ϕ</em></sub>(<em>σ</em>|<em>G</em><sub>0</sub>)</span>. In the reverse pass, the generator network <span class="math inline"><em>p</em><sub><em>θ</em></sub>(<em>G</em><sub><em>t</em></sub>|<em>G</em><sub><em>t</em> + 1</sub>)</span> reconstructs the graph structure using the reverse node ordering. Note that we do not need to consider graph automorphism as in <span class="citation" data-cites="chen2021order"></span>, since the diffusion process assigns a unique ID to each node in <span class="math inline"><em>G</em><sub>0</sub></span> to obtain the decay ordering. Therefore, there is a one-to-one mapping between <span class="math inline"><em>G</em><sub>0 : <em>n</em></sub></span> and <span class="math inline"><em>σ</em><sub>1 : <em>n</em></sub></span>. For example, <span class="math inline"><em>v</em><sub>1</sub></span> and <span class="math inline"><em>v</em><sub>6</sub></span> have the same topology, but the denoising network will recover the exact node <span class="math inline"><em>v</em><sub>1</sub></span> at <span class="math inline"><em>t</em> = 2</span> since <span class="math inline"><em>σ</em><sub>2</sub> = 1</span>. We provide more illustrations in Appendix <a href="#sec:appendix:comparison" data-reference-type="ref" data-reference="sec:appendix:comparison">7.5</a>.</figcaption>
</figure>

{#fig:denosing width="90%"}
A graph is represented by the tuple $G=(V,E)$ with node set $V=\{v_1,\cdots,v_n\}$ and edge set $E=\{e_{v_i,v_j} \mid v_i,v_j\in V\}$. We denote by $n=|V|$ and $m=|E|$ the number of nodes and edges in $G$, respectively. Each node and edge has a corresponding categorical label; *e*.*g*., $e_{v_i, v_j}=k$ indicates that the edge between nodes $v_i$ and $v_j$ is of type $k$. We treat the absence of an edge between two nodes as a particular edge type. Our goal is to learn a graph generative model from a set of training graphs.

Due to the dependency between nodes and edges, it is nontrivial to apply absorbing diffusion [@austin2021structured] in the discrete graph space. We first define the absorbing node state on graphs as follows:

::: definition
**Definition 3** (Absorbing Node State). *When a node $v_i$ enters the absorbing state, (1) it will be masked and (2) it will be connected to all the other nodes in $G$ by masked edges.*
:::

Instead of only masking the original edges, we connect the masked node $v_i$ to all the other nodes with masked edges because we cannot know $v_i$'s original neighbors in the absorbing state. With the absorbing node state defined, we then need a node decay ordering for the forward absorbing pass. A naïve strategy is to use a random ordering sampled from a uniform distribution, as in [@hoogeboom2022autoregressive]. In the reverse generation process, the variables will be generated in the exact reverse order, which also follows a uniform distribution. However, such a strategy is problematic for graphs. First, different graph datasets have different structural regularities, and it is key to leverage such regularities to ease generative learning. For example, community-structured graphs typically consist of dense subgraphs that overlap loosely. For such graphs, it is easier to generate one community first and then add the others, but a random node ordering cannot leverage such local structural regularity, which makes generation more difficult. Second, to compute the likelihood, we need to marginalize over all possible node orderings due to node permutation invariance. It would be more sample-efficient if we could use an optimized proposal ordering distribution and use importance sampling to compute the data likelihood.
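As a concrete illustration, the absorbing node state of Definition 3 can be sketched on an integer node-type/adjacency-matrix representation. The `MASK` sentinel and the array layout are our own illustrative choices, not the paper's implementation:

```python
import numpy as np

MASK = -1  # hypothetical sentinel for the masked (absorbing) node/edge type

def absorb_node(node_types, adj, v):
    """Move node v into the absorbing state of Definition 3:
    (1) mask its node type; (2) connect it to every other node by a masked
    edge, since its true neighbors are unknown once it is absorbed."""
    node_types = node_types.copy()
    adj = adj.copy()
    node_types[v] = MASK
    adj[v, :] = MASK  # masked edges from v to all other nodes
    adj[:, v] = MASK  # keep the matrix symmetric
    return node_types, adj
```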
To address this issue, we propose a diffusion ordering network $q_{\phi}(\sigma|G_0)$ such that, at each diffusion step $t$, we sample from this network to select a node $v_{\sigma(t)}$ to be absorbed and obtain the corresponding masked graph $G_t$ (Figure [1](#fig:overall){reference-type="ref" reference="fig:overall"}). This leads to the following definition of our graph autoregressive diffusion process:

::: definition
**Definition 4** (Autoregressive Graph Diffusion Process). *In autoregressive graph diffusion, the node decay ordering $\sigma$ is sampled from a diffusion ordering network $q_{\phi}(\sigma|G_0)$. Then, exactly one node decays to the absorbing state at a time according to the sampled diffusion ordering. The process proceeds until all the nodes are absorbed.*
:::

The diffusion ordering network follows a recurrent structure $q_{\phi}(\sigma|G_0)=\prod_{t}q_{\phi}(\sigma_t|G_0,\sigma_{(<t)})$. At each step $t$, the distribution of the $t$-th node $\sigma_t$ is conditioned on the original graph $G_0$ and the node ordering generated up to step $t-1$, *i*.*e*., $\sigma_{(<t)}$. We use a graph neural network (GNN) to encode the structural information in the graph. To capture the partial ordering, we add positional encodings to the node features [@vaswani2017attention] as in [@chen2021order]. We denote the updated embedding of node $v_i$ after the GNN as $\vh_i^d$, and parameterize $q_{\phi}(\sigma_t|G_0,\sigma_{(<t)})$ as a categorical distribution: $$\begin{equation}
q_{\phi}(\sigma_t=v_i|G_0,\sigma_{(<t)})=\frac{\exp(\vh_i^d)}{\sum_{i^{'}\notin \sigma_{(<t)}}\exp(\vh^d_{i^{'}})}.
\end{equation}$$ With $q_{\phi}(\sigma|G_0)$, [GraphArm]{.smallcaps} can learn to optimize the node ordering for diffusion. However, this also requires us to infer the reverse generation ordering in the backward pass. Inferring such a reverse generation ordering is difficult since we do not have access to the original graph $G_0$ in intermediate backward steps. In Section [4.3](#sec:training){reference-type="ref" reference="sec:training"}, we show that it is possible to circumvent inferring this generation ordering by leveraging the permutation invariance of graph generation.
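This restricted softmax can be sketched directly. For simplicity we assume the GNN has already been reduced to a scalar score per node (the paper's $\vh_i^d$ are embeddings; the scalar reduction is our simplification):

```python
import numpy as np

def ordering_probs(node_scores, absorbed):
    """Categorical q_phi(sigma_t | G_0, sigma_{<t}): a softmax over per-node
    scores, restricted to nodes that have not yet been absorbed.
    node_scores: (n,) float array; absorbed: (n,) boolean mask."""
    logits = np.where(absorbed, -np.inf, node_scores)
    logits = logits - logits[~absorbed].max()  # stabilize the softmax
    p = np.exp(logits)                         # exp(-inf) = 0 for absorbed nodes
    return p / p.sum()
```

Sampling $\sigma_t$ then amounts to drawing one index from the returned probabilities and marking it absorbed.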
In the generative process, a denoising network $p_{\theta}(G_t|G_{t+1})$ denoises the masked graph in the reverse order of the diffusion process. We design $p_{\theta}(G_t|G_{t+1})$ as a graph attention network (GAT) [@velickovic2018graph; @liao2019efficient] parameterized by $\theta$, so that the model can distinguish the masked and unmasked edges. For clarity, we use the vanilla GAT to illustrate the computing process. However, one can adopt any advanced graph neural network with attentive message passing.

At time $t$, the input to the denoising network $p_{\theta}(G_t|G_{t+1})$ is the previous masked graph $G_{t+1}$. A direct way is to use $G_{t+1}$, which contains all the masked nodes with their corresponding masked edges. However, during the initial generation steps, the graph is nearly fully connected with masked edges. This has two issues: (1) the message passing procedure will be dominated by the masked edges, which makes the messages uninformative; (2) storing the dense adjacency matrix is memory-expensive, which makes the model unscalable to large graphs. Therefore, during each generation step, we only keep the masked node to be denoised with its associated masked edges, while ignoring the other masked nodes. We refer to the modified masked graph as $G^{'}_t$, as shown in Figure [2](#fig:denosing){reference-type="ref" reference="fig:denosing"}.

The denoising network first uses an embedding layer to encode each node $v_i$ into a continuous embedding space, *i*.*e*., $\vh_i=\text{Embedding}(v_i)$. At the $l$-th message passing round, we update the embedding of node $v_i$ by aggregating the attentive messages from its neighbor nodes: $\alpha_{i,j}=\frac{\text{exp}\left(\text{LeakyReLU}(\va^T[\mW\vh_i||\mW\vh_j])\right)}{\sum_{k\in\mathcal{N}_i}\text{exp}\left(\text{LeakyReLU}(\va^T[\mW\vh_i||\mW\vh_k])\right)}, \quad \vh_i=\text{ReLU}\left(\sum_{j\in\mathcal{N}_i}\alpha_{i,j}\mW \vh_j\right),$ where $\mW$ is the weight matrix and $\va$ is the attention vector. The attention mechanism enables the model to distinguish whether a message comes from a masked edge. After $L$ rounds of message passing, we obtain the final embedding $\vh_i^{L}$ for each node; we then predict the node type of the new node $v_{\sigma_t}$ and the edge types between $v_{\sigma_t}$ and all previously denoised nodes $\{v_{\sigma(>t)}\}$. The node type prediction follows a multinomial distribution. For edge prediction, one choice is to sequentially predict these edges as in [@you2018graphrnn; @shi2019graphaf]. However, this sequential generation process is inefficient and takes $\mathcal{O}(n^2)$ steps. Instead, we predict the connections of the new node to all previous nodes at once using a mixture of multinomial distributions. The mixture distribution can capture the dependencies among the edges to be generated while reducing the number of autoregressive generation steps to $\mathcal{O}(n)$.
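One round of this attentive message passing can be sketched as a single-head vanilla GAT layer. This is a loop-based illustration under our own simplifications (randomly initialized weights, no multi-head attention, no edge-type features):

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return x if x >= 0 else slope * x

def gat_layer(H, adj, W, a):
    """Single-head attentive message passing (vanilla GAT):
    alpha_ij = softmax_j(LeakyReLU(a^T [W h_i || W h_j])) over j in N_i,
    h_i <- ReLU(sum_j alpha_ij * W h_j).
    H: (n, d) node embeddings; adj: (n, n) boolean adjacency (True marks an
    edge, masked or not); W: (d, d_out); a: (2 * d_out,)."""
    Z = H @ W
    H_new = np.zeros_like(Z)
    for i in range(Z.shape[0]):
        nbrs = np.where(adj[i])[0]
        if nbrs.size == 0:
            continue  # isolated node keeps a zero embedding
        e = np.array([leaky_relu(a @ np.concatenate([Z[i], Z[j]])) for j in nbrs])
        alpha = np.exp(e - e.max())
        alpha /= alpha.sum()
        H_new[i] = np.maximum(0.0, alpha @ Z[nbrs])  # ReLU aggregation
    return H_new
```

Stacking $L$ such rounds yields the final embeddings $\vh_i^L$ used for node- and edge-type prediction.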
We use approximate maximum likelihood as the training objective for [GraphArm]{.smallcaps}. We first derive the variational lower bound (VLB) of the likelihood as: $$\begin{align}
\log{p_{\theta}(G_{0})} &= \log{\left(\int p(G_{0:n})\frac{q(G_{1:n}|G_0)}{q(G_{1:n}|G_0)}dG_{1:n}\right)} \nonumber\\
&\geq \mathbb{E}_{q(\sigma_{1:n}|G_0)}
\sum_{t}\log{p_{\theta}(G_t|G_{t+1})}\nonumber\\
&\quad\,-\text{KL}(q_{\phi}(\sigma_{1:n}|G_0)\,\|\,p_{\theta'}(\sigma_{1:n}|G_n)),
\label{eq:VLB}
\end{align}$$ where $G_{0:n}$ denotes all values of $G_t$ for $t=0,\cdots,n$ and $p_{\theta'}(\sigma_{1:n}|G_n)$ is the distribution of the generation ordering. A detailed derivation of Eq. [\[eq:VLB\]](#eq:VLB){reference-type="ref" reference="eq:VLB"} is given in Appendix [7.4](#sec:appendix:derivation){reference-type="ref" reference="sec:appendix:derivation"}.

As we can see from Eq. [\[eq:VLB\]](#eq:VLB){reference-type="ref" reference="eq:VLB"}, the diffusion process introduces a separate reverse generation ordering network $p_{\theta'}(\sigma_{1:n}|G_n)$. Learning $p_{\theta'}(\sigma_{1:n}|G_n)$ is nontrivial, as we do not have the original graph $G_0$ in the intermediate generation process. However, we show that we can avoid this difficulty and simply ignore the KL-divergence term. While the generation ordering network is required for non-graph data such as text to determine which token to unmask at test time, it is not needed for graph generation due to node permutation invariance. The first term encourages the denoising network $p_{\theta}(G_t|G_{t+1})$ to predict the node and edge types in the exact reverse ordering of the diffusion process, so the denoising network itself can serve as a proxy for the generation ordering. Due to permutation invariance, we can simply replace any masked node and its masked edges with the predicted node and edge types at each time step. Therefore, we can ignore the second term and finally arrive at a simple training objective: $$\begin{align}
L_{\rm train}& =\mathbb{E}_{\sigma_{1:n} \sim q_{\phi}(\sigma_{1:n}|G_0)} \sum_{t} \log p_{\theta}(G_t|G_{t+1}) \nonumber\\
&= n\,\mathbb{E}_{\sigma_{1:n} \sim q_{\phi}(\sigma_{1:n}|G_0)} \mathbb{E}_{t\sim \mathcal{U}_n} \log p_{\theta}(O_{v_{\sigma_t}}^{\sigma_{(>t)}}|G_{t+1}),
\label{eq:loss}
\end{align}$$ where the last equality comes from treating $t$ as a random variable with a uniform distribution $\mathcal{U}_n$ over $1$ to $n$. $O_{v_{\sigma_t}}^{\sigma_{(>t)}}$ represents the node type of $v_{\sigma_t}$ and its edges with all previously denoised nodes, *i*.*e*., $\{v_{\sigma_t}, \{e_{v_{\sigma_t},v_j}\}_{j=\sigma_{t+1}}^{\sigma_n}\}$.
Compared with a random diffusion ordering, our design has two benefits: (1) We can automatically learn a data-dependent node generation ordering which leverages the graph structural information. (2) We can consider the diffusion ordering network as an optimized proposal distribution of importance sampling for computing the data likelihood, which is more sample-efficient than a uniform proposal distribution.

The architecture of ARDM [@hoogeboom2022autoregressive] predicts all masked dimensions simultaneously, which enables training the univariate conditionals $p(\vx_k|\vx_{\sigma(>t)})$ for all $k \in \sigma_{(\leq t)}$ in parallel. In [GraphArm]{.smallcaps}, due to the node-permutation-invariance property of graphs, this parallel training simplifies to training with a soft label weighted by the probability given by the diffusion ordering network: $$\begin{align}
L_{\rm train}= n\,\mathbb{E}_{q_{\phi}(\sigma_{1:n}|G_0)}\mathbb{E}_{t\sim \mathcal{U}_n}\sum_{k\in \sigma_{(\leq t)}}w_k \log p_{\theta}(O_{v_{k}}^{\sigma_{(>t)}}|G_{t+1}),\nonumber
\end{align}$$ where $w_k = q_{\phi}(\sigma_t=k|G_0, \sigma_{(<t)})$. In practice, the probability mass $q_{\phi}(\sigma_t|G_0, \sigma_{(<t)})$ might be concentrated around a small set of remaining nodes. Therefore, it is generally sufficient to consider only those node labels associated with the highest probabilities.

Learning the parameters of [GraphArm]{.smallcaps} is challenging, because we need to evaluate the expectation of the likelihood over the diffusion ordering network. We use a reinforcement learning (RL) procedure that samples multiple diffusion trajectories, thereby enabling training of both the diffusion ordering network $q_{\phi}(\sigma|G_0)$ and the denoising network $p_{\theta}(G_t|G_{t+1})$ using gradient descent.
Specifically, at each training iteration, we explore the diffusion ordering network by creating $M$ diffusion trajectories for each training graph $G^{(i)}_0$. Each trajectory is a sequence of graphs $\{G_t^{i,m}\}_{1\leq t\leq n}$, where the node decay ordering $\sigma^{i,m}$ is sampled from $q_{\phi}(\sigma|G_0^{(i)})$. For each trajectory, we sample $T$ time steps. The denoising network $p_{\theta}(G_t|G_{t+1})$ is then trained to minimize the negative VLB using stochastic gradient descent (SGD): $$\begin{align}
\triangle \theta \leftarrow
\frac{\eta_1}{M}\nabla \sum_{i\in\mathcal{B}_{\text{train}}}\sum_{ m,t}\sum_{k\in\sigma_{(\leq t)}}\frac{n_iw^{i,m}_{k}}{T}\log p_{\theta}(O_{v_{k}}^{\sigma_{(>t)}}|G^{i,m}_{t+1}), \nonumber
\end{align}$$ where $\mathcal{B}_{\rm train}$ is a minibatch sampled from the training data and $w_k^{i,m}=q_{\phi}(\sigma_t^{i,m}=k|G_0^{(i)}, \sigma_{(<t)}^{i,m})$.
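The per-trajectory weighted log-likelihood inside this update can be sketched as a small loss function. The inputs are hypothetical stand-ins for the denoising network's outputs (per sampled step $t$, the log-probabilities $\log p_\theta(O_{v_k}^{\sigma_{(>t)}}|G_{t+1})$ and the ordering weights $w_k^{i,m}$):

```python
import numpy as np

def denoiser_loss(log_probs, weights, n_nodes, T):
    """Weighted negative-VLB term for one trajectory:
    -(n_i / T) * sum_t sum_k w_k * log p_theta(O_k | G_{t+1}).
    log_probs, weights: lists over the T sampled steps, each an array over
    the candidate nodes k (hypothetical network outputs)."""
    total = 0.0
    for lp, w in zip(log_probs, weights):
        total += float(np.dot(w, lp))  # sum_k w_k * log p_theta(O_k | ...)
    return -(n_nodes / T) * total
```

In training, this scalar would be averaged over the $M$ trajectories and the minibatch, and backpropagated to $\theta$.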
To evaluate the current diffusion ordering network, we create $M$ trajectories for each validation graph and compute the negative VLB of the denoising network to obtain the corresponding rewards $R^{i,m}=-\sum_{t}\sum_{k\in \sigma_{(\leq t)}}\frac{n_i}{T}w_{k}^{i,m}\log p_{\theta}(O_{v_{k}}^{\sigma_{(>t)}}|G^{i,m}_{t+1})$. Then, the diffusion ordering network can be updated with common RL optimization methods, *e*.*g*., the REINFORCE algorithm [@williams1992simple]: $$\begin{equation}
\triangle\phi \leftarrow \frac{\eta_2}{M} \sum_{i\in \mathcal{B}_{\text{val}}}\sum_{m} R^{i,m} \nabla \log q_{\phi}(\sigma^{i,m}|G_0^{(i)}).
\label{eq:reinforce}
\end{equation}$$ The detailed training procedure is summarized in Algorithm [\[alg:overall\]](#alg:overall){reference-type="ref" reference="alg:overall"} in Appendix [7.6](#sec:appendix:alg){reference-type="ref" reference="sec:appendix:alg"}.
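The REINFORCE estimate for $\phi$ can be sketched as follows, given per-trajectory rewards and the gradients of $\log q_\phi$. The mean baseline is a common variance-reduction trick that we add for illustration; it is not part of Eq. [\[eq:reinforce\]](#eq:reinforce){reference-type="ref" reference="eq:reinforce"}:

```python
import numpy as np

def reinforce_grad(rewards, logq_grads, use_baseline=True):
    """REINFORCE-style estimate for the ordering-network update:
    Delta_phi ∝ (1/M) * sum_m R^m * grad_phi log q_phi(sigma^m | G_0).
    rewards: (M,) per-trajectory rewards (here, negative VLBs);
    logq_grads: (M, P) per-trajectory gradients of log q_phi.
    use_baseline subtracts the mean reward (an added variance-reduction
    trick, not part of the paper's update rule)."""
    rewards = np.asarray(rewards, dtype=float)
    grads = np.asarray(logq_grads, dtype=float)
    if use_baseline:
        rewards = rewards - rewards.mean()
    return (rewards[:, None] * grads).mean(axis=0)
```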
2307.10710/main_diagram/main_diagram.drawio
ADDED
The diff for this file is too large to render.
2307.10710/paper_text/intro_method.md
ADDED
@@ -0,0 +1,108 @@
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Reinforcement learning (RL) with *high-dimensional continuous action space* is notoriously hard despite its fundamental importance for many application problems such as robotic manipulation [\(OpenAI et al.,](#page-10-0) [2019;](#page-10-0) [Mu et al.,](#page-10-1) [2021\)](#page-10-1). In practice, popular frameworks [\(Silver et al.,](#page-11-0) [2014;](#page-11-0) [Haarnoja et al.,](#page-9-0) [2018;](#page-9-0) [Schulman et al.,](#page-11-1) [2017\)](#page-11-1) of deep RL formulate the continuous policy as a neural network that outputs a single-modal density function over the action space
|
| 4 |
+
|
| 5 |
+
*Proceedings of the* 40 th *International Conference on Machine Learning*, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s).
|
| 6 |
+
|
| 7 |
+
<span id="page-0-0"></span>
|
| 8 |
+
|
| 9 |
+
Figure 1. (A) Our method reparameterizes latent variables into multimodal policy to facilitate exploitation and exploration in continuous policy learning; (B) Average performance on 6 hard exploration tasks. Our method outperforms previous methods.
|
| 10 |
+
|
| 11 |
+
(e.g., a Gaussian distribution over actions). This formulation, however, breaks the promise of RL being a global optimizer of the return function because the single-modality policy parameterization introduces local minima that are hard to escape using gradients w.r.t. distribution parameters. Besides, a single-modality policy will significantly weaken the exploration ability of RL algorithms because the sampled actions are usually concentrated around the modality.
|
| 12 |
+
|
| 13 |
+
Although there are other candidates beyond the Gaussian distribution for policy parameterization, they often have limitations when used for continuous policy modeling. For example, Gaussian mixture models can only accommodate a limited number of modes; normalizing flow methods [\(Rezende](#page-11-2) [& Mohamed,](#page-11-2) [2015\)](#page-11-2) can compute density values, but they may not be as numerically robust due to their dependency on the determinant of the network Jacobian; furthermore, normalizing flows must apply continuous transformations onto a continuously connected distribution, making it difficult to model disconnected modes [\(Rasul et al.,](#page-11-3) [2021\)](#page-11-3). Option-critic [\(Bacon et al.,](#page-9-1) [2017\)](#page-9-1) represents policies with options and temporal structure, but it often requires specially designed option spaces for efficient learning, which motivates research on hierarchical imitation learning that uses demonstrations to avoid exploration problems [\(Peng et al.,](#page-11-4) [2022;](#page-11-4) [Fang et al.,](#page-9-2) [2019\)](#page-9-2). Skill discovery methods learn a population of skills without demonstrations or rewards by optimizing for diversity [\(Eysenbach et al.,](#page-9-3) [2018\)](#page-9-3). However, the separation of optimization and skill learning can be nonefficient as it expends effort on learning task-irrelevant skills and may ignore more important ones that would benefit a
<sup>1</sup>UC San Diego <sup>2</sup>MIT-IBM Watson AI Lab <sup>3</sup>UMass Amherst. Correspondence to: Zhiao Huang <z2huang@ucsd.edu>.
specific task.
This paper presents a principled framework for learning the continuous RL policy as a multimodal density function through multimodal action parameterization. We adopt a sequence modeling perspective [\(Chen et al.,](#page-9-4) [2021\)](#page-9-4) and view the policy as a density function over the entire trajectory space (instead of the action space) [\(Ziebart,](#page-12-0) [2010;](#page-12-0) [Levine,](#page-10-2) [2018\)](#page-10-2). This allows us to sample a population of trajectories that cover multiple modalities, enabling concurrent exploration of distant regions in the solution space. Additionally, we use a generative model to parameterize the multimodal policies, drawing inspiration from their success in modeling highly complex distributions such as natural images [\(Goodfellow et al.,](#page-9-5) [2016;](#page-9-5) [Zhu et al.,](#page-12-1) [2017;](#page-12-1) [Rombach et al.,](#page-11-5) [2022;](#page-11-5) [Ramesh et al.,](#page-11-6) [2021\)](#page-11-6). We condition the policy on a latent variable z and use a powerful function approximator to "reparameterize" the random distribution of z into the multimodal trajectory distribution [\(Kingma &](#page-10-3) [Welling,](#page-10-3) [2013\)](#page-10-3), from which we can sample trajectories $\tau$. This policy parameterization leads us to adopt the variational method [\(Kingma & Welling,](#page-10-3) [2013;](#page-10-3) [Haarnoja et al.,](#page-9-0) [2018;](#page-9-0) [Moon,](#page-10-4) [1996\)](#page-10-4) to derive a novel framework for modeling the posterior of the optimal trajectory using variational inference, which enables us to model multimodal trajectories and maximize the reward with a single objective.
This framework allows us to build Reparameterized Policy Gradient (RPG), a model-based RL method for multimodal trajectory optimization. The framework has two notable features: First, RPG combines the multimodal policy parameterization with a learned world model, enjoying the sample efficiency of the learned model and gradient-based optimization while providing the additional ability to jump out of local optima; Second, we equip RPG with a novel density estimator that helps the multimodal policy explore the environments by maximizing the state entropy [\(Hazan](#page-9-6) [et al.,](#page-9-6) [2019\)](#page-9-6). We verify the effectiveness of our method on several robot manipulation tasks. These environments only provide sparse rewards when the agent successfully finishes the task, which is challenging for single-modality policies even when they are guided by intrinsic motivation. In comparison, our method is able to explore different modalities, improve exploration efficiency, and outperform single-modality policies, as shown in Fig. [1.](#page-0-0) Notably, our method is more robust than single-modality policies and consistently outperforms previous approaches across different tasks.
Our contributions are threefold: 1. We propose a variational policy learning framework that models the posterior of multimodal optimal trajectories for reward optimization. 2. We demonstrate that multimodal parameterization can help the policy escape local optima and accelerate exploration in continuous policy optimization. 3. When combined with a
learned world model and a carefully designed density estimator, our method, RPG, is able to solve these challenging sparse-reward tasks more efficiently and reliably.
# Method
Markov decision process A Markov decision process (MDP) is a tuple (S, A, P, R), where S is the state space and A is the action space. $p(s'|s,a)$ is the probability of transitioning from state s to state s' after taking action a. The function R(s,a,s') computes a reward for each transition. A policy $\pi(a|s)$ outputs an action distribution conditioned on the state s. Executing a policy $\pi$ starting from an initial state $s_1$ with density $p(s_1)$ results in a *trajectory* $\tau$, a sequence of states and actions $\{s_1, a_1, s_2, \ldots, s_t, a_t, \ldots\}$ where $a_t \sim \pi(a|s=s_t)$ and $s_{t+1} \sim p(s'|s=s_t, a=a_t)$. We also use the term *environment* to refer to the MDP of an RL problem. The discounted reward of a trajectory is
$R_{\gamma}(\tau) = \sum_{t=1}^{\infty} \gamma^t R(s_t, a_t, s_{t+1})$, where $0 < \gamma < 1$ is the discount factor that ensures the series converges. The goal of reinforcement learning (RL) is to find a parameterized policy $\pi_{\theta}$ that maximizes the expected reward $E_{s_1 \sim p(s_1)}[V^{\pi_{\theta}}(s_1)] = E_{\tau \sim \pi_{\theta}, s_1 \sim p(s_1)}[R_{\gamma}(\tau)]$, where $V^{\pi_{\theta}}$ is the value function. Many environments have an observation space $\mathcal{O}$ that is not the same as the state space; in this case the agent may need to identify the state $s_t$ from the observation $o_t$.
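To make the objective concrete, here is a minimal sketch of the discounted return and a Monte-Carlo estimate of the RL objective; the function names and toy sampler are ours, not the paper's:

```python
def discounted_return(rewards, gamma=0.99):
    """Discounted return of one trajectory, following the text's
    convention R_gamma(tau) = sum_t gamma^t * r_t with t starting at 1."""
    return sum(gamma ** t * r for t, r in enumerate(rewards, start=1))

def expected_return(sample_trajectory, gamma=0.99, n=1000):
    """Monte-Carlo estimate of E_tau[R_gamma(tau)]; `sample_trajectory`
    stands in for rolling out a policy in the environment."""
    return sum(discounted_return(sample_trajectory(), gamma) for _ in range(n)) / n
```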
RL as probabilistic inference The RL-as-inference framework (Todorov, 2006; 2008; Toussaint, 2009; Ziebart, 2010; Kappen et al., 2012; Levine, 2018) defines optimality as $p(O|\tau) \propto e^{R(\tau)/\mathcal{T}}$, where $\mathcal{T}$ is a temperature scalar and $R(\tau)$ is the total reward of the trajectory $\tau$. It further defines a prior distribution over trajectories $p(\tau) = p(s_1) \prod_{t=1}^{T} p(a_t|s_t) p(s_{t+1}|s_t, a_t)$, where $p(a_t|s_t)$ is a known prior action distribution, e.g., a Gaussian distribution. It can then compute the density of optimality $p(O) = \int p(O|\tau)p(\tau)d\tau$. The goal of the framework is to approximate the posterior distribution of optimal trajectories $p(\tau|O) = \frac{p(O|\tau)p(\tau)}{\int p(O|\tau)p(\tau)d\tau}$. In the maximum entropy framework (Haarnoja et al., 2017), one can apply the evidence lower bound (Kingma & Welling, 2013) $\log p(O) \geq \mathbb{E}_{\tau \sim \pi} \left[ \log p(O|\tau) + \log p(\tau) - \log \pi(\tau) \right]$ to train the model.
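The posterior $p(\tau|O)$ can be illustrated by self-normalized weighting of trajectories sampled from the prior, each weighted by $p(O|\tau) \propto e^{R(\tau)/\mathcal{T}}$. The helper below is our own toy illustration, not part of the paper's algorithm:

```python
import math

def optimality_weights(returns, temperature=1.0):
    """Self-normalized weights approximating p(tau|O) over a finite set of
    trajectories drawn from the prior: w_i ∝ exp(R_i / T)."""
    m = max(returns)  # subtract max for numerical stability; cancels after normalization
    w = [math.exp((r - m) / temperature) for r in returns]
    s = sum(w)
    return [x / s for x in w]
```

Higher-return trajectories receive proportionally more posterior mass, with the temperature controlling how peaked the reweighting is.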
To overcome the limitations of single-modality policies, we propose to use latent variables to parameterize multimodal policies in Sec. 4.1. We then propose a novel variational bound as the optimization objective to approximate the posterior of optimal trajectories in Sec. 4.2. The variational bound naturally incorporates maximum entropy RL and includes a term that encourages consistency (Zhu et al., 2017) between the latent distribution and the sampled trajectories, preventing the policy from mode collapse. To optimize this objective in hard continuous control problems, we propose to learn a world model and build Reparameterized Policy Gradient, a model-based latent-variable policy learning framework, in Sec. 4.3.1. We design intrinsic rewards in Sec. 4.3.2 to facilitate exploration. Figure 3 illustrates the whole pipeline.
**Policy parameterization matters.** In continuous RL, it is popular to model the action distribution with a unimodal Gaussian distribution. However, in theory, to ensure that the optimal policy can be captured by RL, the function class of continuous RL policies has to include density functions of arbitrary probability distributions (Sutton & Barto,
<span id="page-3-1"></span>
Figure 2. (A) rewards; (B) softmax policy over a discrete action space; (C) single-modality Gaussian policy; (D) our method reparameterizes a random variable into a multimodal distribution with neural networks.
2018). Consider maximizing a continuous reward function with two modes, as shown in Figure 2(A). When the action space is properly discretized, a softmax policy can model the multimodal distribution and find the global optimum after sampling over the entire action space, as shown in Figure 2(B). However, discretization can lead to a loss of accuracy and efficiency. If we instead use a Gaussian policy $\mathcal{N}(\mu, \sigma^2)$, as is common practice in the literature, we run into trouble: as shown in Figure 2(C), even if its standard deviation is large enough to cover both modes well, the policy gradient can push it towards the local optimum on the right side, causing it to fail to converge to the global optimum. To address this issue, continuous RL problems need a more flexible policy parameterization, one that is easy to sample from and optimize.
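This failure mode can be checked numerically. The sketch below (the reward shape, constants, and integration scheme are all ours) integrates the expected reward of a Gaussian policy over a two-mode reward and evaluates the gradient at a mean sitting to the right of the valley:

```python
import math

def reward(a):
    # bimodal reward: global optimum at a = -1 (height 1.0),
    # inferior local optimum at a = +1 (height 0.6)
    return 1.0 * math.exp(-(a + 1) ** 2 / 0.1) + 0.6 * math.exp(-(a - 1) ** 2 / 0.1)

def J(mu, sigma=0.3, n=4000, lo=-4.0, hi=4.0):
    """Expected reward of the Gaussian policy N(mu, sigma^2),
    by simple numerical integration over the action axis."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        a = lo + i * h
        pdf = math.exp(-(a - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
        total += reward(a) * pdf * h
    return total

# finite-difference gradient of J w.r.t. mu, for a mean right of the valley
grad = (J(0.6 + 1e-4) - J(0.6 - 1e-4)) / 2e-4
```

Here `grad > 0`: gradient ascent on $\mu$ moves the Gaussian toward the local optimum at $a = +1$ even though the mode at $a = -1$ has higher reward.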
Motivated by recent developments in generative models that have shown superiority in modeling complex distributions (Kingma & Welling, 2013; Ho et al., 2020; Rombach et al., 2022; Ramesh et al., 2021), we propose to parameterize policies using latent variables, as illustrated in Figure 2(D). Instead of adding random noise to perturb network outputs to generate an action distribution, we build a generative model of policy distribution by taking random noise as input and relying on powerful neural networks to transform it into actions of various modalities.
Formally, let $z \in \mathcal{Z}$ be a random variable, which can be either continuous or categorical. We design our "policy" as a joint distribution $\pi_{\theta}(z,\tau)$ over the latent z and the trajectory $\tau$. This paper considers a particular factorization of $\pi_{\theta}(z,\tau)$ that samples z at the beginning of each episode and then
samples the trajectory $\tau$ conditioned on z:
<span id="page-3-3"></span>
$$\pi_{\theta}(z,\tau) = p(s_1)\pi_{\theta}(z|s_1) \prod_{t=1}^{T} p(s_{t+1}|s_t, a_t)\pi_{\theta}(a_t|z, s_t) \quad (1)$$
where T is the length of the sampled trajectory.
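The factorization in Eq. (1) corresponds to a simple sampling loop; all callables below are placeholder stand-ins for the learned components, not the paper's implementation:

```python
def sample_episode(env_reset, env_step, sample_z, policy, horizon):
    """Sample (z, tau) per Eq. (1): draw z once at the start of the episode,
    then act with the z-conditioned policy for the rest of the rollout."""
    s = env_reset()                # s_1 ~ p(s_1)
    z = sample_z(s)                # z ~ pi_theta(z | s_1)
    tau = []
    for _ in range(horizon):
        a = policy(z, s)           # a_t ~ pi_theta(a_t | z, s_t)
        s_next = env_step(s, a)    # s_{t+1} ~ p(s_{t+1} | s_t, a_t)
        tau.append((s, a))
        s = s_next
    return z, tau
```

Because z is fixed within an episode, different draws of z commit the rollout to different modes of the trajectory distribution.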
One can use the policy gradient theorem (Sutton & Barto, 2018), i.e., $\nabla J(\pi) = \mathbb{E}_{\tau}[R(\tau)\nabla\log p(\tau)]$, to optimize the generative-model policy. However, computing $p(\tau)$ requires marginalizing over z, i.e., computing $\int_z p(z,\tau)\,\mathrm{d}z$, which is often intractable when z is continuous. Besides, optimizing the marginal distribution $\log p(\tau)$ by gradient descent suffers from local optimality issues (e.g., gradient descent is not effective for optimizing Gaussian mixture models, which also contain latent variables, so EM is often used instead (Ng, 2000)).
To overcome these obstacles, following Todorov (2006; 2008); Toussaint (2009); Ziebart (2010); Kappen et al. (2012); Levine (2018); Haarnoja et al. (2018), we adopt the variational method (maximum entropy RL) to directly optimize the joint distribution of the optimal policy without the hassle of integrating over z.
The evidence lower bound We learn $\pi_{\theta}(z,\tau)$ using variational inference (Kingma & Welling, 2013; Haarnoja et al., 2018; Moon, 1996). As in an EM algorithm, we define an auxiliary distribution $p_{\phi}(z|\tau)$, represented by a function approximator, to approximate the posterior distribution of z conditioned on $\tau$. This auxiliary distribution helps to factorize the joint distribution of the optimality O, the latent z, and the trajectory $\tau$ as $p_{\phi}(O,z,\tau)=p(O|\tau)p_{\phi}(z|\tau)p(\tau)$. Treating $\pi_{\theta}(z,\tau)$ as the variational distribution, we can write the evidence lower bound (ELBO) for the optimality O:
$$\begin{split} & \log p(O) \\ &= \underbrace{E_{z,\tau \sim \pi_{\theta}} \left[ \log p_{\phi}(O,z,\tau) - \log \pi_{\theta}(z,\tau) \right]}_{\text{ELBO}} \\ &+ D_{KL}(\pi_{\theta}(z,\tau) || p_{\phi}(z,\tau|O)) \\ &\geq E_{z,\tau \sim \pi_{\theta}} \left[ \log p_{\phi}(O,\tau,z) - \log \pi_{\theta}(z,\tau) \right] \\ &= E_{z,\tau \sim \pi_{\theta}} \left[ \log p(O,\tau) + \log p_{\phi}(z|\tau) - \log \pi_{\theta}(z,\tau) \right] \\ &= E_{z,\tau} \left[ \underbrace{\log p(O|\tau)}_{\text{reward}} + \underbrace{\log p(\tau)}_{\text{prior}} + \underbrace{\log p_{\phi}(z|\tau)}_{\text{cross entropy}} - \underbrace{\log \pi_{\theta}(z,\tau)}_{\text{entropy}} \right] \end{split}$$

(2)
<span id="page-3-2"></span>If we optimize $\pi_{\theta}(z,\tau)$ and $p_{\phi}(z|\tau)$ using the gradient of the variational bound, the variational distribution $\pi_{\theta}(z,\tau)$ learns to model the optimal trajectory distribution $p(\tau|O)$ .
**How it works** The ELBO contains four parts, all of which can be computed directly given the sampled z and $\tau$ (the environment probability $p(s_{t+1}|s_t, a_t)$ cancels, as in (Levine, 2018)). The first part is the predefined reward $\log p(O|\tau) = R(\tau)/\mathcal{T} + c$, where $\mathcal{T}$ is the temperature scalar and c is a normalizing constant that can be ignored in optimization. The second part is the prior distribution $p(\tau)$, which is assumed to be known. The third part is the log-likelihood of z, defined by our auxiliary distribution $p_{\phi}(z|\tau)$. It is easy to see that if we fix $\pi_{\theta}$, maximizing over $p_{\phi}$ alone will minimize the cross-entropy $E_{z,\tau \sim \pi_{\theta}}[-\log p_{\phi}(z|\tau)]$, similar to supervised learning of predicting z given $\tau$. This achieves optimality when $p_\phi(z|\tau)=p_\theta(z|\tau)=\frac{\pi_\theta(z,\tau)}{\int_z \pi_\theta(z,\tau)dz}$, modeling the posterior of z for $\tau$ sampled from $\pi_{\theta}$. On the other hand, by fixing $\phi$, the policy $\pi_{\theta}$ is encouraged to generate trajectories that are easy to identify or classify; this increases diversity and enforces consistency to avoid mode collapse, preventing the network from ignoring the latent variables. The fourth part is the policy entropy, which enables maximum entropy exploration. Maximizing all terms together over the parameters $\theta$ and $\phi$ will minimize $D_{KL}(\pi_{\theta}(z,\tau)||p_{\phi}(z,\tau|O)) = D_{KL}(\pi_{\theta}(z,\tau)||p_{\phi}(z|\tau)p(\tau|O))$. Optimality is achieved when $p_{\phi}(z|\tau)$ equals $p(z|\tau)$, the true posterior of z. Then $p_{\theta}(\tau) = p_{\phi}(z|\tau)p(\tau|O)/p(z|\tau) = p(\tau|O)$, where $p_{\theta}(\tau) = \int \pi_{\theta}(\tau, z) dz$ is the marginal distribution of $\tau$ sampled from $\pi_{\theta}$.
**Relationship with other methods** Our method is closely related to skill discovery methods (Eysenbach et al., 2018; Mazzaglia et al., 2022). A skill discovery method usually uses the mutual information $I(\tau, z) = H(\tau) - H(\tau|z)$ or $H(z) - H(z|\tau) \ge E_{z,\tau}[\log p_{\phi}(z|\tau) - \log p(z)]$ to encourage diversity. For example, DIAYN (Eysenbach et al., 2018) directly optimizes mutual information to learn various skills without reward. Dropping the reward term in Eq. 2 shows that the skill learning objective can be seamlessly embedded into the "RL as inference" framework with external reward, and there is no need to introduce the mutual information term manually. Furthermore, the framework suggests we can model the posterior of the optimal trajectories, which enables us to unify generative modeling and trajectory optimization in a single framework. As for the relationship of our method with other generative models, we refer readers to a more thorough discussion in Appendix F.
We now describe Reparameterized Policy Gradient (RPG), a model-based RL method with intrinsic motivation for sample-efficient exploration in continuous control environments. We first simplify the right side of Eq. 2 using the factorization in Eq. 1 and assuming $\log p_\phi(z|\tau) = \sum_{t \geq 1} \log p_\phi(z|s_t,a_t)$. Thus, the
ELBO becomes $-\log \pi_{\theta}(z|s_1) + \sum_{t=1}^{\infty} \left[ R(s_t, a_t) / \mathcal{T} - \log \pi_{\theta}(a_t|s_t, z) + \log p_{\phi}(z|s_t, a_t) \right]$, which can be optimized with an RL algorithm by maximizing the reward
$$\underbrace{R(s_t, a_t)/\mathcal{T}}_{r_t} \underbrace{-\alpha \log \pi_{\theta}(a_t|s_t, z) + \beta \log p_{\phi}(z|s_t, a_t)}_{r'_t},$$
where scalars $\alpha, \beta$ control the exploration and consistency. We use neural networks to model $\log p_{\phi}(z|s_t, a_t)$ and $\pi_{\theta}(a_t|s_t, z)$ .
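Per step, the optimized reward is just a weighted sum of the three terms above; a one-line sketch (the coefficient values are ours, chosen for illustration):

```python
def augmented_reward(r, log_pi_a, log_p_z, temperature=1.0, alpha=0.1, beta=0.1):
    """Per-step reward used for policy optimization: scaled task reward,
    entropy bonus (-alpha * log pi(a_t|s_t,z)), and consistency bonus
    (beta * log p_phi(z|s_t,a_t))."""
    return r / temperature - alpha * log_pi_a + beta * log_p_z
```

Raising `alpha` favors action-entropy exploration, while raising `beta` favors trajectories that reveal which latent z produced them.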
In RPG, we train a differentiable world model (Hafner et al., 2019; Schrittwieser et al., 2020; Ye et al., 2021; Hansen et al., 2022) to improve data efficiency. The world model contains the following components: an observation encoder $s_t = f_{\psi}(o_t)$, a reward predictor $r_t = R_{\psi}(s_t, a_t)$, a Q-value function $Q_t = Q_{\psi}(s_t, a_t, z)$, and a dynamics model $s_{t+1} = h_{\psi}(s_t, a_t)$.
Given any z and latent state $s_{t_0}=f_{\psi}(o_{t_0})$ at time step $t_0$ , the learned dynamics network can generate an imaginary trajectory for any action sequence. If we sample actions from the policy $\pi_{\theta}(a_t|s_t,z)$ for $t\geq t_0$ and execute them in the latent model, it will produce a Monte-Carlo estimate for the value of $s_{t_0}$ for optimizing the policy $\pi_{\theta}$ :
<span id="page-4-3"></span>
$$V_{\text{est}}(o_{t_0}, z) \approx \gamma^K (Q_{t_0 + K} + r'_{t_0 + K}) + \sum_{t = t_0}^{t_0 + K - 1} \gamma^{t - t_0} (r_t + r'_t)$$
(3)
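A sketch of the estimator in Eq. (3): roll the policy K steps through the learned dynamics and bootstrap with Q. All callables are placeholder stand-ins; the intrinsic terms $r'$ are assumed folded into `reward_fn`, and the paper's $r'$ at step K is omitted for brevity:

```python
def imagined_value(s0, z, policy, dynamics, reward_fn, q_fn, gamma=0.99, K=5):
    """Monte-Carlo value estimate: sum of discounted rewards over K imagined
    steps in the learned model, plus a discounted Q bootstrap at step K."""
    s, total = s0, 0.0
    for k in range(K):
        a = policy(s, z)
        total += gamma ** k * reward_fn(s, a)   # gamma^{t-t_0} * (r_t + r'_t)
        s = dynamics(s, a)                       # imagined next latent state
    a = policy(s, z)
    total += gamma ** K * q_fn(s, a, z)          # gamma^K * Q_{t_0+K}
    return total
```

Because `dynamics` is differentiable, gradients of this estimate can flow back into the policy parameters through the imagined rollout.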
We self-supervise the dynamics network to ensure state consistency without reconstructing observations as in (Ye et al., 2021; Hansen et al., 2022). For any latent variable z and trajectory segments of length K+1 $\tau_{t_0:t_0+K}=\{o_{t_0},a_{t_0}^{gt},r_{t_0}^{gt},o_{t_0+1},\ldots,o_{t_0+K}\}$ sampled from the replay buffer, we execute actions $\{a_t^{gt}\}$ in the world model and use the following loss function to train the world model, as well as the Q function:
<span id="page-4-2"></span>
$$L_{\psi}(\tau) = \sum_{t=t_0}^{t_0+K-1} L_1 \|s_{t+1} - \mathbf{ng}(f_{\psi}(o_{t+1}))\|^2 + L_2(r_t - r_t^{gt})^2 + L_3(Q_t - \mathbf{ng}(r_t^{gt} + \gamma V_{\text{est}}(o_{t+1}, z)))^2$$
(4)
where $\mathbf{ng}(x)$ means stopping gradient and $L_1 = 1000, L_2 = L_3 = 0.5$ are constants to balance the loss.
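A scalar sketch of the loss in Eq. (4); here `ng` is the identity but marks where an autodiff framework would detach the target, and the z argument of Q is dropped for brevity (all callables are placeholder stand-ins):

```python
def world_model_loss(seq, encode, predict_reward, q_value, dynamics,
                     target_value, L1=1000.0, L2=0.5, L3=0.5):
    """Sum of the three Eq.-(4) terms over one replayed segment:
    latent-state consistency, reward prediction, and a TD target for Q."""
    ng = lambda x: x  # stop-gradient marker; identity in this non-differentiated sketch
    loss, s = 0.0, encode(seq[0]["obs"])
    for t in range(len(seq) - 1):
        a, r_gt = seq[t]["action"], seq[t]["reward"]
        s_next = dynamics(s, a)
        loss += L1 * (s_next - ng(encode(seq[t + 1]["obs"]))) ** 2                      # state consistency
        loss += L2 * (predict_reward(s, a) - r_gt) ** 2                                  # reward prediction
        loss += L3 * (q_value(s, a) - ng(r_gt + target_value(seq[t + 1]["obs"]))) ** 2   # TD target for Q
        s = s_next  # keep rolling the model forward, as in the multi-step loss
    return loss
```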
For challenging continuous control tasks with sparse rewards, policies that only maximize the action entropy of $\pi_{\theta}(a|s,z)$ usually have trouble obtaining a meaningful reward, making their exploration inefficient. We follow (Hazan et al., 2019) and let the policy additionally maximize
<span id="page-5-0"></span>
Figure 3. An overview of our model pipeline: (A) a reparameterized policy from which we can sample the latent variable z and action a given the latent state s; (B) a latent dynamics model that can forward-simulate the dynamics when a sequence of actions is given; (C) an exploration bonus provided by a density estimator. Our Reparameterized Policy Gradient performs multimodal exploration with the help of the latent world model and the exploration bonus.
the entropy of the discounted stationary state distribution $d_{\pi}(s) = (1 - \gamma) \sum_{t=1}^{\infty} \gamma^{t-1} P(s_t = s | \pi)$.
We use *object-centric* Random Network Distillation (RND) (Burda et al., 2018) as a simple and effective way to approximate the state density in continuous control tasks. RND uses a network $g_{\theta}(o_t)$ to distill the output of a fixed random network $g'(o_t)$ by minimizing the difference $\|g_{\theta}(o_t) - g'(o_t)\|^2$ over states sampled by the current agent, and treats this difference as an estimate of the negative density of each observation $o_t$.
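The RND mechanism can be sketched with linear networks; the sizes, learning rate, seed, and variable names below are our own illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
W_target = rng.normal(size=(8, 4))   # frozen random target network g'
W_pred = np.zeros((8, 4))            # trainable predictor g_theta

def rnd_bonus(obs):
    """Squared prediction error: stays large for rarely-visited observations."""
    return float(np.sum((W_pred @ obs - W_target @ obs) ** 2))

def rnd_update(obs, lr=0.01):
    """One gradient step distilling the random target into the predictor."""
    global W_pred
    err = W_pred @ obs - W_target @ obs
    W_pred = W_pred - lr * 2.0 * np.outer(err, obs)  # gradient of the squared error

# repeatedly "visiting" the same observation drives its bonus toward zero
obs = np.ones(4)
bonus_before = rnd_bonus(obs)
for _ in range(200):
    rnd_update(obs)
bonus_after = rnd_bonus(obs)
```

The bonus for a frequently visited observation decays toward zero, so novel states keep a relatively large exploration reward.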
We make several modifications to the vanilla RND to improve its performance on state-vector observations in control problems. First, we inject an object prior into the RND estimator to make the policy sensitive to regions where objects' positions change. Specifically, before feeding objects' coordinates into the network, we apply positional encoding (Vaswani et al., 2017; Mildenhall et al., 2021) to turn each scalar x into a vector $\{\sin(2^i x), \cos(2^i x)\}_{i=1,2,...}$ for objects of interest (e.g., in robot manipulation, the end effector of the robot and the object). Second, we use a large replay buffer to store past states to avoid catastrophic forgetting (Zhang et al., 2021). We verified that it is necessary to normalize the RND output to stabilize training and make it an approximate density estimator. Lastly, to account for the latent world model, we relabel the rewards of trajectories sampled from the replay buffer instead of estimating them directly in the latent model by reconstructing the observation.
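The positional encoding step, as described, in a few lines; `n_freqs`, the number of frequency bands, is our assumed parameter:

```python
import math

def positional_encoding(x, n_freqs=4):
    """Map a scalar coordinate x to {sin(2^i x), cos(2^i x)} features for
    i = 1..n_freqs, as applied to object coordinates before the RND networks."""
    feats = []
    for i in range(1, n_freqs + 1):
        feats.append(math.sin(2 ** i * x))
        feats.append(math.cos(2 ** i * x))
    return feats
```

The high-frequency components amplify small coordinate changes, making the distillation error more sensitive to object motion.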
An implicit benefit of a latent variable policy model is its ability to better maximize the state entropy, as will be shown in the experiments of Sec. 5.1. When combined with our RND method, RPG achieves much better state coverage, while single-modality policies cannot stabilize. The combination of multimodal policy learning and state entropy maximization accelerates the exploration of continuous control
tasks with sparse rewards. We describe the whole algorithm in Alg. 1 and implementation details in Appendix A.
2310.19807/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1,220 @@
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2023-05-16T19:30:29.066Z" agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36" etag="yEOTnnHySZXzPAaalstz" version="21.2.7" type="google">
|
| 2 |
+
<diagram name="Page-1" id="onlf1a7FIxYy0HpmRkO0">
|
| 3 |
+
<mxGraphModel dx="1242" dy="714" grid="1" gridSize="10" guides="1" tooltips="1" connect="1" arrows="1" fold="1" page="0" pageScale="1" pageWidth="850" pageHeight="1100" background="#ffffff" math="1" shadow="0">
|
| 4 |
+
<root>
|
| 5 |
+
<mxCell id="0" />
|
| 6 |
+
<mxCell id="1" parent="0" />
|
| 7 |
+
<mxCell id="3E1nvp-L0lh-rVXAT4co-6" value="" style="endArrow=open;html=1;rounded=0;strokeWidth=4;strokeColor=#7EA6E0;endFill=0;endSize=12;sketch=1;curveFitting=1;jiggle=2;shadow=0;" parent="1" edge="1">
|
| 8 |
+
<mxGeometry width="50" height="50" relative="1" as="geometry">
|
| 9 |
+
<mxPoint x="213" y="250" as="sourcePoint" />
|
| 10 |
+
<mxPoint x="103" y="330" as="targetPoint" />
|
| 11 |
+
</mxGeometry>
|
| 12 |
+
</mxCell>
|
| 13 |
+
<mxCell id="3E1nvp-L0lh-rVXAT4co-7" value="" style="endArrow=open;html=1;rounded=0;strokeWidth=4;strokeColor=#7EA6E0;endFill=0;endSize=12;shadow=0;sketch=1;curveFitting=1;jiggle=2;" parent="1" edge="1">
|
| 14 |
+
<mxGeometry width="50" height="50" relative="1" as="geometry">
|
| 15 |
+
<mxPoint x="328" y="250" as="sourcePoint" />
|
| 16 |
+
<mxPoint x="423" y="330" as="targetPoint" />
|
| 17 |
+
</mxGeometry>
|
| 18 |
+
</mxCell>
|
| 19 |
+
<mxCell id="3E1nvp-L0lh-rVXAT4co-8" value="" style="endArrow=open;html=1;rounded=0;strokeWidth=4;strokeColor=#FFB570;endFill=0;endSize=12;sketch=1;curveFitting=1;jiggle=2;" parent="1" edge="1">
|
| 20 |
+
<mxGeometry width="50" height="50" relative="1" as="geometry">
|
| 21 |
+
<mxPoint x="233" y="260" as="sourcePoint" />
|
| 22 |
+
<mxPoint x="233" y="260" as="targetPoint" />
|
| 23 |
+
</mxGeometry>
|
| 24 |
+
</mxCell>
|
| 25 |
+
<mxCell id="3E1nvp-L0lh-rVXAT4co-9" value="" style="endArrow=open;html=1;rounded=0;strokeWidth=4;strokeColor=#FFB570;endFill=0;endSize=12;jumpSize=6;jumpStyle=none;sketch=1;curveFitting=1;jiggle=2;" parent="1" edge="1">
|
| 26 |
+
<mxGeometry width="50" height="50" relative="1" as="geometry">
|
| 27 |
+
<mxPoint x="403" y="340" as="sourcePoint" />
|
| 28 |
+
<mxPoint x="313" y="260" as="targetPoint" />
|
| 29 |
+
</mxGeometry>
|
| 30 |
+
</mxCell>
|
| 31 |
+
<mxCell id="3E1nvp-L0lh-rVXAT4co-11" value="<span style="font-family: Helvetica; font-size: 22px; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: center; text-indent: 0px; text-transform: none; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; background-color: rgb(251, 251, 251); text-decoration-thickness: initial; text-decoration-style: initial; text-decoration-color: initial; float: none; display: inline !important;">$$\theta$$<br></span>" style="text;whiteSpace=wrap;html=1;fontColor=#7EA6E0;" parent="1" vertex="1">
|
| 32 |
+
<mxGeometry x="143" y="220" width="35" height="30" as="geometry" />
|
| 33 |
+
</mxCell>
|
| 34 |
+
<mxCell id="3E1nvp-L0lh-rVXAT4co-15" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=1;exitDx=0;exitDy=0;" parent="1" edge="1">
|
| 35 |
+
<mxGeometry relative="1" as="geometry">
|
| 36 |
+
<mxPoint x="228" y="290" as="sourcePoint" />
|
| 37 |
+
<mxPoint x="228" y="290" as="targetPoint" />
|
| 38 |
+
</mxGeometry>
|
| 39 |
+
</mxCell>
|
| 40 |
+
<mxCell id="3E1nvp-L0lh-rVXAT4co-21" value="<span style="font-family: Helvetica; font-size: 22px; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: center; text-indent: 0px; text-transform: none; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; background-color: rgb(251, 251, 251); text-decoration-thickness: initial; text-decoration-style: initial; text-decoration-color: initial; float: none; display: inline !important;">$$\theta$$<br></span>" style="text;whiteSpace=wrap;html=1;fontColor=#7EA6E0;shadow=0;" parent="1" vertex="1">
|
| 41 |
+
<mxGeometry x="373" y="220" width="35" height="30" as="geometry" />
|
| 42 |
+
</mxCell>
|
| 43 |
+
<mxCell id="fB-uekVcXG0o3_wgLuV4-3" value="" style="group" parent="1" connectable="0" vertex="1">
|
| 44 |
+
<mxGeometry x="3" y="340" width="230.7622043267462" height="120" as="geometry" />
|
| 45 |
+
</mxCell>
|
| 46 |
+
<mxCell id="3E1nvp-L0lh-rVXAT4co-3" value="<font style="font-size: 18px;"><br><font style="font-size: 18px;" face="Comic Sans MS">1</font></font>" style="shape=actor;whiteSpace=wrap;html=1;strokeWidth=4;shadow=1;" parent="fB-uekVcXG0o3_wgLuV4-3" vertex="1">
|
| 47 |
+
<mxGeometry x="75" width="40" height="60" as="geometry" />
|
| 48 |
+
</mxCell>
|
| 49 |
+
<mxCell id="3E1nvp-L0lh-rVXAT4co-25" value="$$\mathcal{D}_{1}$$" style="text;html=1;align=center;verticalAlign=middle;resizable=0;points=[];autosize=1;strokeColor=none;fillColor=none;fontSize=18;" parent="fB-uekVcXG0o3_wgLuV4-3" vertex="1">
|
| 50 |
+
<mxGeometry y="65" width="190" height="40" as="geometry" />
|
| 51 |
+
</mxCell>
|
| 52 |
+
<mxCell id="fB-uekVcXG0o3_wgLuV4-4" value="" style="group" parent="1" connectable="0" vertex="1">
|
| 53 |
+
<mxGeometry x="343" y="340" width="190" height="120" as="geometry" />
|
| 54 |
+
</mxCell>
|
| 55 |
+
<mxCell id="3E1nvp-L0lh-rVXAT4co-5" value="<font face="Comic Sans MS" style="font-size: 18px;"><br>N</font>" style="shape=actor;whiteSpace=wrap;html=1;strokeWidth=4;shadow=1;" parent="fB-uekVcXG0o3_wgLuV4-4" vertex="1">
|
| 56 |
+
<mxGeometry x="75" width="40" height="60" as="geometry" />
|
| 57 |
+
</mxCell>
|
| 58 |
+
<mxCell id="3E1nvp-L0lh-rVXAT4co-26" value="$$\mathcal{D}_{N}$$" style="text;html=1;align=center;verticalAlign=middle;resizable=0;points=[];autosize=1;strokeColor=none;fillColor=none;fontSize=18;" parent="fB-uekVcXG0o3_wgLuV4-4" vertex="1">
|
| 59 |
+
<mxGeometry y="65" width="190" height="40" as="geometry" />
|
| 60 |
+
</mxCell>
|
| 61 |
+
<mxCell id="fB-uekVcXG0o3_wgLuV4-5" value="" style="group" parent="1" connectable="0" vertex="1">
|
| 62 |
+
<mxGeometry x="357" y="120" width="120" height="140" as="geometry" />
|
| 63 |
+
</mxCell>
|
| 64 |
+
<mxCell id="3E1nvp-L0lh-rVXAT4co-4" value="<font style="font-size: 18px;" face="Comic Sans MS">Server</font>" style="ellipse;shape=cloud;whiteSpace=wrap;html=1;strokeWidth=4;shadow=1;" parent="fB-uekVcXG0o3_wgLuV4-5" vertex="1">
|
| 65 |
+
<mxGeometry x="-149" y="60" width="120" height="80" as="geometry" />
|
| 66 |
+
</mxCell>
|
| 67 |
+
<mxCell id="3E1nvp-L0lh-rVXAT4co-22" value="<span style="font-family: Helvetica; font-size: 22px; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: center; text-indent: 0px; text-transform: none; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; background-color: rgb(251, 251, 251); text-decoration-thickness: initial; text-decoration-style: initial; text-decoration-color: initial; float: none; display: inline !important;">$${\tt Update<br>}~\theta$$<br></span>" style="text;whiteSpace=wrap;html=1;fontColor=#7EA6E0;shadow=0;" parent="fB-uekVcXG0o3_wgLuV4-5" vertex="1">
<mxGeometry x="-144" y="6" width="110" height="30" as="geometry" />
</mxCell>
<mxCell id="fB-uekVcXG0o3_wgLuV4-11" value="" style="endArrow=open;html=1;rounded=0;strokeWidth=4;strokeColor=#FFB570;endFill=0;endSize=12;sketch=1;curveFitting=1;jiggle=2;" parent="1" edge="1">
<mxGeometry width="50" height="50" relative="1" as="geometry">
<mxPoint x="133" y="340" as="sourcePoint" />
<mxPoint x="233" y="260" as="targetPoint" />
</mxGeometry>
</mxCell>
<mxCell id="fB-uekVcXG0o3_wgLuV4-19" value="<span style="font-family: Helvetica; font-size: 22px; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: center; text-indent: 0px; text-transform: none; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; background-color: rgb(251, 251, 251); text-decoration-thickness: initial; text-decoration-style: initial; text-decoration-color: initial; float: none; display: inline !important;">$$\mathbf{g}_{1}$$<br></span>" style="text;whiteSpace=wrap;html=1;fontColor=#FFB570;fillColor=none;glass=0;" parent="1" vertex="1">
<mxGeometry x="148" y="300" width="30" height="30" as="geometry" />
</mxCell>
<mxCell id="fB-uekVcXG0o3_wgLuV4-18" value="" style="group" parent="1" connectable="0" vertex="1">
<mxGeometry x="171.99779567325373" y="270.09" width="80.76440865349247" height="86.97671226163459" as="geometry" />
</mxCell>
<mxCell id="fB-uekVcXG0o3_wgLuV4-15" value="" style="verticalLabelPosition=bottom;verticalAlign=top;html=1;shape=mxgraph.basic.8_point_star;rotation=15;fillColor=none;strokeColor=#FF0000;strokeWidth=3;shadow=1;" parent="fB-uekVcXG0o3_wgLuV4-18" vertex="1">
<mxGeometry x="7.782204326746239" y="10.990000000000009" width="65.2" height="68.72" as="geometry" />
</mxCell>
<mxCell id="fB-uekVcXG0o3_wgLuV4-7" value="<span style="font-family: Helvetica; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: center; text-indent: 0px; text-transform: none; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; background-color: rgb(251, 251, 251); text-decoration-thickness: initial; text-decoration-style: initial; text-decoration-color: initial; float: none; display: inline !important;"><font style="font-size: 24px;">$$\mathbf{H}_{1}$$</font><br></span>" style="text;whiteSpace=wrap;html=1;fontColor=#FFB570;fillColor=none;glass=0;" parent="fB-uekVcXG0o3_wgLuV4-18" vertex="1">
<mxGeometry x="22.33220432674625" width="30" height="30" as="geometry" />
</mxCell>
<mxCell id="fB-uekVcXG0o3_wgLuV4-22" value="" style="group" parent="1" connectable="0" vertex="1">
<mxGeometry x="289.45382302105145" y="270.09000000000003" width="99.54617697894855" height="89.81011835825268" as="geometry" />
</mxCell>
<mxCell id="fB-uekVcXG0o3_wgLuV4-17" value="" style="verticalLabelPosition=bottom;verticalAlign=top;html=1;shape=mxgraph.basic.8_point_star;rotation=15;fillColor=none;strokeColor=#FF0000;strokeWidth=3;perimeterSpacing=0;shadow=1;labelBackgroundColor=none;" parent="fB-uekVcXG0o3_wgLuV4-22" vertex="1">
<mxGeometry x="7.86" y="13.39" width="72.34" height="66.78" as="geometry" />
</mxCell>
<mxCell id="fB-uekVcXG0o3_wgLuV4-9" value="<span style="font-family: Helvetica; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: center; text-indent: 0px; text-transform: none; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; background-color: rgb(251, 251, 251); text-decoration-thickness: initial; text-decoration-style: initial; text-decoration-color: initial; float: none; display: inline !important;"><font style="font-size: 24px;">$$\mathbf{H}_{N}$$</font><br></span>" style="text;whiteSpace=wrap;html=1;fontColor=#FFB570;fillColor=none;glass=0;" parent="fB-uekVcXG0o3_wgLuV4-22" vertex="1">
<mxGeometry x="20.316176978948533" width="30" height="30" as="geometry" />
</mxCell>
<mxCell id="3E1nvp-L0lh-rVXAT4co-13" value="<span style="font-family: Helvetica; font-size: 22px; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: center; text-indent: 0px; text-transform: none; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; background-color: rgb(251, 251, 251); text-decoration-thickness: initial; text-decoration-style: initial; text-decoration-color: initial; float: none; display: inline !important;">$$\mathbf{g}_{N}$$<br></span>" style="text;whiteSpace=wrap;html=1;fontColor=#FFB570;" parent="fB-uekVcXG0o3_wgLuV4-22" vertex="1">
<mxGeometry x="74.54617697894855" y="39.39999999999998" width="30" height="30" as="geometry" />
</mxCell>
<mxCell id="KJgEmXnFLY3Eif09ZCCw-1" value="" style="endArrow=open;html=1;rounded=0;strokeWidth=4;strokeColor=#7EA6E0;endFill=0;endSize=12;sketch=1;curveFitting=1;jiggle=2;shadow=0;" parent="1" edge="1">
<mxGeometry width="50" height="50" relative="1" as="geometry">
<mxPoint x="670" y="250" as="sourcePoint" />
<mxPoint x="612" y="330" as="targetPoint" />
</mxGeometry>
</mxCell>
<mxCell id="KJgEmXnFLY3Eif09ZCCw-2" value="" style="endArrow=open;html=1;rounded=0;strokeWidth=4;strokeColor=#7EA6E0;endFill=0;endSize=12;shadow=0;sketch=1;curveFitting=1;jiggle=2;" parent="1" edge="1">
<mxGeometry width="50" height="50" relative="1" as="geometry">
<mxPoint x="800" y="245.09" as="sourcePoint" />
<mxPoint x="870" y="330" as="targetPoint" />
</mxGeometry>
</mxCell>
<mxCell id="KJgEmXnFLY3Eif09ZCCw-3" value="" style="endArrow=open;html=1;rounded=0;strokeWidth=4;strokeColor=#FFB570;endFill=0;endSize=12;sketch=1;curveFitting=1;jiggle=2;" parent="1" edge="1">
<mxGeometry width="50" height="50" relative="1" as="geometry">
<mxPoint x="724" y="260" as="sourcePoint" />
<mxPoint x="724" y="260" as="targetPoint" />
</mxGeometry>
</mxCell>
<mxCell id="KJgEmXnFLY3Eif09ZCCw-4" value="" style="endArrow=open;html=1;rounded=0;strokeWidth=4;strokeColor=#FFB570;endFill=0;endSize=12;jumpSize=6;jumpStyle=none;sketch=1;curveFitting=1;jiggle=2;" parent="1" edge="1">
<mxGeometry width="50" height="50" relative="1" as="geometry">
<mxPoint x="840" y="335" as="sourcePoint" />
<mxPoint x="780" y="255" as="targetPoint" />
</mxGeometry>
</mxCell>
<mxCell id="KJgEmXnFLY3Eif09ZCCw-5" value="<span style="font-family: Helvetica; font-size: 22px; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: center; text-indent: 0px; text-transform: none; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; background-color: rgb(251, 251, 251); text-decoration-thickness: initial; text-decoration-style: initial; text-decoration-color: initial; float: none; display: inline !important;">$$\theta$$<br></span>" style="text;whiteSpace=wrap;html=1;fontColor=#7EA6E0;" parent="1" vertex="1">
<mxGeometry x="629" y="220" width="35" height="30" as="geometry" />
</mxCell>
<mxCell id="KJgEmXnFLY3Eif09ZCCw-7" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=1;exitDx=0;exitDy=0;" parent="1" edge="1">
<mxGeometry relative="1" as="geometry">
<mxPoint x="719" y="290" as="sourcePoint" />
<mxPoint x="719" y="290" as="targetPoint" />
</mxGeometry>
</mxCell>
<mxCell id="KJgEmXnFLY3Eif09ZCCw-8" value="<span style="font-family: Helvetica; font-size: 22px; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: center; text-indent: 0px; text-transform: none; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; background-color: rgb(251, 251, 251); text-decoration-thickness: initial; text-decoration-style: initial; text-decoration-color: initial; float: none; display: inline !important;">$$\theta$$<br></span>" style="text;whiteSpace=wrap;html=1;fontColor=#7EA6E0;shadow=0;" parent="1" vertex="1">
<mxGeometry x="832" y="220" width="35" height="30" as="geometry" />
</mxCell>
<mxCell id="KJgEmXnFLY3Eif09ZCCw-9" value="" style="group" parent="1" connectable="0" vertex="1">
<mxGeometry x="512" y="340" width="308" height="120" as="geometry" />
</mxCell>
<mxCell id="KJgEmXnFLY3Eif09ZCCw-10" value="<font style="font-size: 18px;"><br><font style="font-size: 18px;" face="Comic Sans MS">1</font></font>" style="shape=actor;whiteSpace=wrap;html=1;strokeWidth=4;shadow=1;" parent="KJgEmXnFLY3Eif09ZCCw-9" vertex="1">
<mxGeometry x="75" width="40" height="60" as="geometry" />
</mxCell>
<mxCell id="KJgEmXnFLY3Eif09ZCCw-11" value="$$\mathcal{D}_{1}$$" style="text;html=1;align=center;verticalAlign=middle;resizable=0;points=[];autosize=1;strokeColor=none;fillColor=none;fontSize=18;" parent="KJgEmXnFLY3Eif09ZCCw-9" vertex="1">
<mxGeometry y="65" width="190" height="40" as="geometry" />
</mxCell>
<mxCell id="KJgEmXnFLY3Eif09ZCCw-12" value="<span style="font-family: Helvetica; font-size: 22px; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: center; text-indent: 0px; text-transform: none; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; text-decoration-thickness: initial; text-decoration-style: initial; text-decoration-color: initial; float: none; display: inline !important;">local compute<br></span>" style="text;whiteSpace=wrap;html=1;fontColor=#FFB570;shadow=0;labelBackgroundColor=none;" parent="KJgEmXnFLY3Eif09ZCCw-9" vertex="1">
<mxGeometry x="158" y="70" width="150" height="30" as="geometry" />
</mxCell>
<mxCell id="wqId83h6cc4_SIfPCCX5-16" value="" style="group" vertex="1" connectable="0" parent="KJgEmXnFLY3Eif09ZCCw-9">
<mxGeometry x="174.88" y="50" width="104.24000000000001" height="10" as="geometry" />
</mxCell>
<mxCell id="wqId83h6cc4_SIfPCCX5-17" value="" style="ellipse;whiteSpace=wrap;html=1;strokeWidth=4;" vertex="1" parent="wqId83h6cc4_SIfPCCX5-16">
<mxGeometry x="49.24000000000001" width="10" height="10" as="geometry" />
</mxCell>
<mxCell id="wqId83h6cc4_SIfPCCX5-18" value="" style="ellipse;whiteSpace=wrap;html=1;strokeWidth=4;" vertex="1" parent="wqId83h6cc4_SIfPCCX5-16">
<mxGeometry x="94.24000000000001" width="10" height="10" as="geometry" />
</mxCell>
<mxCell id="wqId83h6cc4_SIfPCCX5-19" value="" style="ellipse;whiteSpace=wrap;html=1;strokeWidth=4;" vertex="1" parent="wqId83h6cc4_SIfPCCX5-16">
<mxGeometry width="10" height="10" as="geometry" />
</mxCell>
<mxCell id="KJgEmXnFLY3Eif09ZCCw-13" value="" style="group" parent="1" connectable="0" vertex="1">
<mxGeometry x="774" y="340" width="190" height="120" as="geometry" />
</mxCell>
<mxCell id="KJgEmXnFLY3Eif09ZCCw-14" value="<font face="Comic Sans MS" style="font-size: 18px;"><br>N</font>" style="shape=actor;whiteSpace=wrap;html=1;strokeWidth=4;shadow=1;" parent="KJgEmXnFLY3Eif09ZCCw-13" vertex="1">
<mxGeometry x="72" width="40" height="60" as="geometry" />
</mxCell>
<mxCell id="KJgEmXnFLY3Eif09ZCCw-15" value="$$\mathcal{D}_{N}$$" style="text;html=1;align=center;verticalAlign=middle;resizable=0;points=[];autosize=1;strokeColor=none;fillColor=none;fontSize=18;" parent="KJgEmXnFLY3Eif09ZCCw-13" vertex="1">
<mxGeometry y="65" width="190" height="40" as="geometry" />
</mxCell>
<mxCell id="KJgEmXnFLY3Eif09ZCCw-17" value="<font style="font-size: 18px;" face="Comic Sans MS">Server</font>" style="ellipse;shape=cloud;whiteSpace=wrap;html=1;strokeWidth=4;shadow=1;" parent="1" vertex="1">
<mxGeometry x="674" y="180" width="120" height="80" as="geometry" />
</mxCell>
<mxCell id="KJgEmXnFLY3Eif09ZCCw-18" value="<span style="font-family: Helvetica; font-size: 22px; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: center; text-indent: 0px; text-transform: none; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; background-color: rgb(251, 251, 251); text-decoration-thickness: initial; text-decoration-style: initial; text-decoration-color: initial; float: none; display: inline !important;">$${\tt Update<br>}~\theta$$<br></span>" style="text;whiteSpace=wrap;html=1;fontColor=#7EA6E0;shadow=0;" parent="1" vertex="1">
<mxGeometry x="679" y="126" width="110" height="30" as="geometry" />
</mxCell>
<mxCell id="KJgEmXnFLY3Eif09ZCCw-19" value="" style="endArrow=open;html=1;rounded=0;strokeWidth=4;strokeColor=#FFB570;endFill=0;endSize=12;sketch=1;curveFitting=1;jiggle=2;" parent="1" edge="1">
<mxGeometry width="50" height="50" relative="1" as="geometry">
<mxPoint x="630" y="340" as="sourcePoint" />
<mxPoint x="690" y="260" as="targetPoint" />
</mxGeometry>
</mxCell>
<mxCell id="KJgEmXnFLY3Eif09ZCCw-20" value="<span style="font-family: Helvetica; font-size: 22px; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: center; text-indent: 0px; text-transform: none; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; background-color: rgb(251, 251, 251); text-decoration-thickness: initial; text-decoration-style: initial; text-decoration-color: initial; float: none; display: inline !important;">$$\mathbf{y}_{1}$$<br></span>" style="text;whiteSpace=wrap;html=1;fontColor=#FFB570;fillColor=none;glass=0;" parent="1" vertex="1">
<mxGeometry x="674" y="270.09" width="30" height="30" as="geometry" />
</mxCell>
<mxCell id="KJgEmXnFLY3Eif09ZCCw-28" value="" style="html=1;shadow=0;dashed=0;align=center;verticalAlign=middle;shape=mxgraph.arrows2.arrow;dy=0.6;dx=40;notch=0;strokeWidth=4;sketch=1;curveFitting=1;jiggle=2;" parent="1" vertex="1">
<mxGeometry x="477" y="240" width="100" height="70" as="geometry" />
</mxCell>
<mxCell id="KJgEmXnFLY3Eif09ZCCw-30" value="<span style="font-family: Helvetica; font-size: 22px; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: center; text-indent: 0px; text-transform: none; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; background-color: rgb(251, 251, 251); text-decoration-thickness: initial; text-decoration-style: initial; text-decoration-color: initial; float: none; display: inline !important;">$$\mathbf{y}_{N}$$<br></span>" style="text;whiteSpace=wrap;html=1;fontColor=#FFB570;fillColor=none;glass=0;" parent="1" vertex="1">
<mxGeometry x="774" y="270.09" width="30" height="30" as="geometry" />
</mxCell>
<mxCell id="fB-uekVcXG0o3_wgLuV4-23" value="<span style="font-family: Helvetica; font-size: 22px; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: center; text-indent: 0px; text-transform: none; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; text-decoration-thickness: initial; text-decoration-style: initial; text-decoration-color: initial; float: none; display: inline !important;">local compute<br></span>" style="text;whiteSpace=wrap;html=1;fontColor=#FFB570;shadow=0;labelBackgroundColor=none;" parent="1" vertex="1">
<mxGeometry x="202" y="410" width="150" height="30" as="geometry" />
</mxCell>
<mxCell id="KJgEmXnFLY3Eif09ZCCw-35" value="<span style="font-size: 22px; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: center; text-indent: 0px; text-transform: none; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; text-decoration-thickness: initial; text-decoration-style: initial; text-decoration-color: initial; float: none; display: inline !important;">(a) FedNPG<br></span>" style="text;whiteSpace=wrap;html=1;fontColor=#000000;shadow=0;fontFamily=Times New Roman;labelBackgroundColor=none;" parent="1" vertex="1">
<mxGeometry x="202" y="460" width="110" height="30" as="geometry" />
</mxCell>
<mxCell id="KJgEmXnFLY3Eif09ZCCw-36" value="<span style="font-size: 22px; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: center; text-indent: 0px; text-transform: none; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; text-decoration-thickness: initial; text-decoration-style: initial; text-decoration-color: initial; float: none; display: inline !important;">(b) FedNPG-ADMM<br></span>" style="text;whiteSpace=wrap;html=1;fontColor=#000000;shadow=0;fontFamily=Times New Roman;labelBackgroundColor=none;" parent="1" vertex="1">
<mxGeometry x="639" y="460" width="190" height="30" as="geometry" />
</mxCell>
<mxCell id="wqId83h6cc4_SIfPCCX5-2" value="" style="endArrow=none;dashed=1;html=1;dashPattern=1 3;strokeWidth=9;rounded=0;" edge="1" parent="1">
<mxGeometry width="50" height="50" relative="1" as="geometry">
<mxPoint x="213" y="392.16" as="sourcePoint" />
<mxPoint x="213.35761654557382" y="393.1430466182296" as="targetPoint" />
</mxGeometry>
</mxCell>
<mxCell id="wqId83h6cc4_SIfPCCX5-12" value="" style="group" vertex="1" connectable="0" parent="1">
<mxGeometry x="220" y="390" width="104.24000000000001" height="10" as="geometry" />
</mxCell>
<mxCell id="wqId83h6cc4_SIfPCCX5-13" value="" style="ellipse;whiteSpace=wrap;html=1;strokeWidth=4;" vertex="1" parent="wqId83h6cc4_SIfPCCX5-12">
<mxGeometry x="49.24000000000001" width="10" height="10" as="geometry" />
</mxCell>
<mxCell id="wqId83h6cc4_SIfPCCX5-14" value="" style="ellipse;whiteSpace=wrap;html=1;strokeWidth=4;" vertex="1" parent="wqId83h6cc4_SIfPCCX5-12">
<mxGeometry x="94.24000000000001" width="10" height="10" as="geometry" />
</mxCell>
<mxCell id="wqId83h6cc4_SIfPCCX5-15" value="" style="ellipse;whiteSpace=wrap;html=1;strokeWidth=4;" vertex="1" parent="wqId83h6cc4_SIfPCCX5-12">
<mxGeometry width="10" height="10" as="geometry" />
</mxCell>
</root>
</mxGraphModel>
</diagram>
</mxfile>